jarr0dsz has quit [Remote host closed the connection]
jugglinmike has quit [Quit: Leaving.]
<bobthecow>
anyone around? i need some thoughts on naming a gem /cc ddfreyne guardian
<dkm>
bobthecow: I am! I need some help naming a gem too, but I think I need to get a better handle on what its purpose will be first :)
<bobthecow>
:)
<bobthecow>
you don't know what your gem's purpose will be?
<dkm>
right now I'm calling it 'squee'
<dkm>
well
<dkm>
I'm trying to decide how general I should try to make it
<dkm>
and what belongs in the gem proper and what should be a plugin for my particular application
<bobthecow>
does it take pictures of puppies and kittens?
<bobthecow>
because that's pretty squee.
<dkm>
it does not
<dkm>
yeah, it started as a "Student Query something something"
<dkm>
I couldn't think of a good something something, but "squee" was fun to say
<dkm>
what I'm trying to decide is if I should even have the focus be on a personal data query tool
<dkm>
or just anything
<bobthecow>
build a tool that does what you want.
<bobthecow>
you can always generalize later.
<bobthecow>
for example, the twitter gem (the de facto Twitter API wrapper for Ruby) was extracted out of the t gem, a command-line Twitter client.
<bobthecow>
they built a client first, then generalized that to all api access.
<dkm>
makes sense
stbuehler has quit [Ping timeout: 264 seconds]
stbuehler has joined #nanoc
<dkm>
any RSS/Atom experts/experienced users here?
<dkm>
i.e. familiar with the spec and parsing feeds with something like FeedZirra
louquillio has quit [Read error: Connection reset by peer]
louquillio has joined #nanoc
ics has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
VitamineD has quit [Quit: VitamineD]
VitamineD has joined #nanoc
FunkyPenguin has quit [Ping timeout: 252 seconds]
FunkyPenguin has joined #nanoc
FunkyPenguin has quit [Ping timeout: 264 seconds]
FunkyPenguin has joined #nanoc
_whitelogger has joined #nanoc
<guardian>
re
<guardian>
ddfreyne: so? any idea about this cri dep?
jugglinmike has joined #nanoc
<ddfreyne>
guardian: hi
<ddfreyne>
gregkare: What do you mean?
<ddfreyne>
guardian: oops
<ddfreyne>
gregkare: nm
<gregkare>
ha :)
<ddfreyne>
guardian: Some Rubygems brain fart I guess... I don't know what went wrong
<ddfreyne>
I’m using rubocop to check my code
<guardian>
ok
<guardian>
well I upgraded rubygems
<guardian>
following our respective nanoc --version prints from yesterday
<guardian>
maybe with RubyGems 2.1.11 it won't happen again
<ddfreyne>
guardian: You still have the issue?
<gregkare>
ddfreyne: Hahaha I've only seen the topic now
<dkm>
nanoc philosophy question... say I have some text files on my local file system that are not in a nanoc-friendly format, but I write a converter that converts them into something that is (i.e. parses the file to get meta info and put it in a yaml header)
<dkm>
is it more nanocy to write the data source to read from the original data files and generate Nanoc::Items in memory on each compile
<dkm>
or have a command that generates file system files of nanoc-readable content that are put in the /content directory so that my regular Filesystem datasource reads them in?
<guardian>
ddfreyne: not after having uninstalled cri then reinstalled nanoc
<dkm>
I think I prefer option #1 because I don't like the idea of having redundant information stored in multiple places on my local filesystem
<bobthecow>
dkm: nanoc is generally about custom data sources.
<bobthecow>
that said, you might use the #sync method on your data source to extract metadata into a giant yaml file or something if that extraction is slow.
<bobthecow>
the same way my remote data sources make all their api calls and dump into a file.
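A minimal sketch of option #1, assuming nanoc 3.x's DataSource API; the raw/ glob, the parse_raw_file helper, and the header format are hypothetical, purely for illustration:

    class SqueeDataSource < Nanoc::DataSource
      identifier :squee

      def items
        # build items in memory on each compile; nothing redundant is written to content/
        Dir['raw/**/*.txt'].map do |path|
          meta, body = parse_raw_file(path)
          Nanoc::Item.new(body, meta, "/#{File.basename(path, '.txt')}/")
        end
      end

      private

      # hypothetical parser: "key: value" header lines, a blank line, then the body
      def parse_raw_file(path)
        head, body = File.read(path).split("\n\n", 2)
        meta = Hash[head.lines.map { |l| l.chomp.split(': ', 2) }]
        [meta, body.to_s]
      end
    end

With identifier :squee declared, the data source can then be selected under data_sources in the site's config with type: squee.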
<bobthecow>
ddfreyne: i see you're rubocopping all the things :)
<bobthecow>
i did that a couple of days ago.
<dkm>
ah, I was not aware of the #sync method
<dkm>
I will look into how to use that
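For reference, a sketch of the caching idea bobthecow describes, assuming nanoc 3.6's sync command (which calls #sync on any data source that defines it); the cache path and extraction step are made up for illustration:

    # inside the same data source class
    CACHE = 'tmp/squee_metadata.yaml'

    # run by `nanoc sync`, not at compile time: do the slow extraction
    # once and dump the results into one big YAML file
    def sync
      require 'yaml'
      meta = Dir['raw/**/*.txt'].map { |path| [path, parse_raw_file(path).first] }
      File.write(CACHE, YAML.dump(Hash[meta]))
    end

    # at compile time, #items can read CACHE instead of re-parsing everything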
<dkm>
and that reminds me about a related question
<dkm>
I have a bunch of remote git repositories and I want to extract info from them to generate a nanoc page
<dkm>
does it make sense to have a step in the #up method of the data source that does a 'git pull' on all the repos, skipping over them if there is no internet connection
<dkm>
or put that functionality into a separate command
<dkm>
so I first run the command to refresh all local repos, and then the GitRepo < DataSource just looks at the local copy
<dkm>
my understanding is that the latter is the preferred way to do it
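A sketch of that split, again assuming nanoc 3.6's sync command; the repo paths and git log format are illustrative:

    class GitRepoDataSource < Nanoc::DataSource
      identifier :git_repo

      # run by the separate `nanoc sync` command; a failed pull
      # (e.g. no network) is warned about and skipped
      def sync
        Dir['local_repos/*'].each do |repo|
          system('git', 'pull', '--quiet', :chdir => repo) ||
            warn("skipping #{repo}: pull failed (offline?)")
        end
      end

      # compile-time reads only ever touch the local clones
      def items
        Dir['local_repos/*'].map do |repo|
          head = Dir.chdir(repo) { `git log -1 --format='%H %s'`.strip }
          Nanoc::Item.new(head, { :repo => File.basename(repo) },
                          "/repos/#{File.basename(repo)}/")
        end
      end
    end

Leaving #up as the default no-op keeps nanoc compile fast and lets it work offline.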