stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.33 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Con
<Swedneck>
well the index is on IPFS
pecastro has quit [Read error: No route to host]
<lordcirth>
Swedneck, right, but the title implies that IPFS itself is a torrent tracker
nonono has quit [Ping timeout: 272 seconds]
<postables[m]>
lordcirth: you can browse ipns hashes as typical `/ipfs/blah` paths -- how long ago did you publish that IPNS record?
<lordcirth>
postables[m], only a few days ago, that's when I set up the node.
<postables[m]>
did you use non-default lifetimes? if so it's expired, the default is 24h
<lordcirth>
I didn't (and don't) know how to do that
<lordcirth>
Also, isn't the point of pinning to keep it indefinitely?
<postables[m]>
pinning != publishing ipns records
<postables[m]>
you've pinned whatever hash you're referring to with the IPNS record, but IPNS records have lifetimes
<lordcirth>
postables[m], ah, ok. Clearly I don't understand IPNS well enough yet
<postables[m]>
at a high level its basically like DNS for IPFS, with the same sort of concepts, records have lifetimes and TTLs
<postables[m]>
so you would basically want to treat it like such, if you have some record that's really important you'll want to republish it every X duration. the cool thing is you can use `ttl` to define the minimum valid lifetime, so say you set your ttl to 12 hours, and lifetime to 24 hours
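(A sketch of the republishing postables[m] describes, assuming a go-ipfs 0.4.x node with the `ipfs name publish` `--lifetime` and `--ttl` flags; the `<ipfs-hash>` placeholder stands for whatever CID the record should point at:)

```shell
# Republish the IPNS record pointing at a pinned CID.
# --lifetime: how long the record stays valid in the DHT (default 24h)
# --ttl: how long resolvers may cache the record before re-checking
ipfs name publish --lifetime=24h --ttl=12h /ipfs/<ipfs-hash>
```

Re-running this (e.g. from a daily cron job) before the lifetime expires keeps the record resolvable; otherwise it drops out of the DHT and `/ipns/<peer-id>` stops working.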
<lordcirth>
postables[m], ok, so I pinned the hash of an ipfs object, which was an IPNS record, which has since timed out and is useless?
<postables[m]>
i dont think so, i believe internally ipfs will resolve the ipns hash first
<postables[m]>
you might want to do `ipfs pin ls <ipfs-hash>` to make sure its pinned
<lordcirth>
postables[m], I actually un-pinned it today because it wasn't working. Though it's still in cache because cache isn't full
<postables[m]>
when you pinned it did you give the IPNS hash to pin?
<postables[m]>
if you don't specify `ipfs pin add /ipns/<ipns-hash>` it will think you're trying to pin an IPFS hash with that name and won't work
<postables[m]>
so if you want to pin an IPNS record you'll need to do `ipfs pin add /ipns/<ipns-hash>`
<postables[m]>
whereas if you do `ipfs pin add <ipns-hash>` it'll timeout eventually
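(A sketch of the two forms being contrasted, assuming go-ipfs 0.4.x; `<ipns-hash>` is a placeholder peer ID:)

```shell
# Pin through the IPNS record: the /ipns/ prefix tells ipfs to
# resolve the name first, then pin whatever CID it points at.
ipfs pin add /ipns/<ipns-hash>

# Equivalent two-step form, which also shows you the resolved CID:
CID=$(ipfs name resolve /ipns/<ipns-hash>)   # prints /ipfs/Qm...
ipfs pin add "$CID"

# A bare `ipfs pin add <ipns-hash>` treats the peer ID as a content
# hash; no such content exists, so the request eventually times out.
```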
<lordcirth>
postables[m], I think I gave it just the hash, yeah
<postables[m]>
ah ok, republish the record and give `/ipns/<ipns-hash>` and it should work just fine 😄
<lordcirth>
"Error: pin: context deadline exceeded" which I assume means it's expired
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<postables[m]>
probably
maxzor has quit [Remote host closed the connection]
<seba->
LordFenixNC[m], i don't think so, but i could be wrong
<LordFenixNC[m]>
so i have to repin my WHOLE IPFS folder and record the hashes?
swebb_ has joined #ipfs
swebb has quit [Ping timeout: 268 seconds]
<seba->
i think pining of a folder pins everything from it
<seba->
hm
<seba->
but i'm not 100%
<LordFenixNC[m]>
it pins everything inside the folder i know that... and each item gets its own hash... i just dont have hash list and everything i have right now is set up automated... so im going to have to do something to generate a list and save it
<LordFenixNC[m]>
what would be cool is if there was a script that allowed you to right click on a file and generate a hash that IPFS would normally generate
<geoah>
LordFenixNC[m]: you can find your pinned things with `ipfs pin ls` -- that'll give you both direct (recursive i think) and indirect ones
<seba->
ipfs add --only-hash [filename]
<seba->
gives you the hash
<geoah>
since you have the hashes you should be able to do an `ipfs dag get <hash>` for the direct ones, which should give you the contents including filenames
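(A sketch combining the two commands geoah mentions, assuming a running go-ipfs 0.4.x daemon; `ipfs pin ls --type=recursive` and `ipfs dag get` exist as written, though the exact JSON shape of the dag output depends on the node type:)

```shell
# List every recursively pinned root and dump its dag node; for
# unixfs directories the node's links carry the filenames.
ipfs pin ls --type=recursive | cut -d' ' -f1 | while read -r hash; do
  echo "== $hash"
  ipfs dag get "$hash"
done > pinned-hashes.txt
```

For directory roots, `ipfs ls <hash>` is an alternative that prints a hash/size/name line per entry, which may be easier to read than raw dag JSON.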
<LordFenixNC[m]>
could i use ipfs dag get with Node hash to pull up entire node?
aarshkshah1992 has joined #ipfs
<aarshkshah1992>
voker57: Got what you mean. Thanks !
<geoah>
LordFenixNC[m]: node as in dag node? yeah you should be
<LordFenixNC[m]>
yeah trying that now lol if it works probably going to take forever to generate lol... over 1TB in content and counting lol
mischat has quit [Remote host closed the connection]
mischat has joined #ipfs
<geoah>
ipfs add --only-hash will recreate them --- ipfs dag will just go through the existing dags so it should be fast
<geoah>
btw not sure how well ipfs works with 1tb -- there used to be some issues about large dags
<geoah>
or maybe it was just large files
aarshkshah1992 has quit [Remote host closed the connection]
woss_io has quit [Ping timeout: 268 seconds]
aarshkshah1992 has joined #ipfs
mischat has quit [Ping timeout: 268 seconds]
alyoshaaa has joined #ipfs
malaclyps has quit [Read error: Connection reset by peer]
malaclyps has joined #ipfs
cyfex has joined #ipfs
cyfex_ has quit [Ping timeout: 240 seconds]
cyfex_ has joined #ipfs
aarshkshah1992 has quit [Ping timeout: 246 seconds]
cyfex has quit [Ping timeout: 250 seconds]
Ai9zO5AP has joined #ipfs
reit has joined #ipfs
Ai9zO5AP has quit [Read error: Connection reset by peer]
<LordFenixNC[m]>
thanks for the input... i think ill have to do the add only hash option then cause its still going or hanging there lol
i9zO5AP has joined #ipfs
i9zO5AP has quit [Client Quit]
BeerHall has joined #ipfs
Dubhe has quit [Ping timeout: 264 seconds]
<LordFenixNC[m]>
that way is working
<LordFenixNC[m]>
its going to take a while
<LordFenixNC[m]>
but better than nothing I CANT complain
<LordFenixNC[m]>
idk why the webui wont let you just view your files in your node
<LordFenixNC[m]>
i dont expect to be able to play them or anything
<LordFenixNC[m]>
just File name and hash
WhizzWr has quit [Quit: Bye!]
cyfex_ is now known as cyfex
WhizzWr has joined #ipfs
mischat has joined #ipfs
mischat has quit [Read error: Connection reset by peer]
ikari` has quit [Quit: This computer has gone to sleep]
aerth has quit [Ping timeout: 256 seconds]
aerth has joined #ipfs
BeerHall has quit [Quit: BeerHall]
xnaas has quit [Quit: Dead.]
xnaas has joined #ipfs
vmx_ has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
vmx has quit [Ping timeout: 252 seconds]
vmx has joined #ipfs
vmx_ has quit [Ping timeout: 268 seconds]
ikari` has joined #ipfs
florianH__ has joined #ipfs
ddahl has joined #ipfs
<Unode>
Hi all, if I add several partially redundant files to IPFS, is the chunking algorithm smart enough to realize this and allow storing the redundant information only once?
<Unode>
I'm considering adding a reasonably large collection of text files to IPFS and wonder if the chunking algorithm + compression can drastically reduce the storage needs
<r0kk3rz>
are the files identical?
<Unode>
no, only partially
<r0kk3rz>
then probably not
<Unode>
my understanding is that files are chunked and each chunk hashed into the DAG. Won't these blocks get reused if the same hash exists in another file?
<r0kk3rz>
yes
<r0kk3rz>
if the block ends up being identical
Belkaar has quit [Ping timeout: 250 seconds]
<Unode>
keyword being the 'if'
<Unode>
ok I see what you mean
<Unode>
thanks
<r0kk3rz>
yeah, depending on the file it could get chunked in places where the blocks wont be the same
<Unode>
so the blocks would have to align perfectly to be hashed the same way.
<r0kk3rz>
yeah
<Unode>
right, I can see this being the exception rather than the rule
Belkaar has joined #ipfs
Belkaar has joined #ipfs
<Unode>
thanks for the feedback
<geoah>
Unode you could always cheat and split the files into small sequential files with one paragraph or line each :P
<geoah>
that would increase the chances of collisions :D
<Unode>
geoah: yeah, although some have blobs (like PDFs) so it will get tricky.
<r0kk3rz>
yeah for IPFS, having a content aware chunker is the ideal case
<Unode>
I can see how this could become a project on its own
<geoah>
that's not really easy -- might be for txt files but pdfs/docs have an insane amount of metadata around the text
<Unode>
geoah: we'd need tiny chunks, which then also add a big overhead.
<Unode>
I guess it would have to be a smart chunker like r0kk3rz was saying
<Unode>
likely not something you'd do on a single pass
<Unode>
but perhaps worth considering for long-time storage
<Unode>
Not unlike certain compression algorithms, you'd have to be aware of blocks already existing in the network.
<Unode>
anyway, I'm digressing. Thanks for the info. I confirmed my interpretation
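(One existing knob relevant to this thread, stated as an assumption about go-ipfs 0.4.x: `ipfs add` has an experimental `--chunker` option supporting rabin fingerprinting, a content-defined chunker that picks block boundaries from the data itself, so unchanged runs in partially identical files are more likely to land in identical, dedupable blocks. The size parameters below are illustrative:)

```shell
# Default chunker: fixed 256 KiB blocks -- an insertion near the
# start of a file shifts every later block boundary, so almost
# nothing dedupes between versions.
ipfs add --only-hash file-v1.txt

# Rabin chunker: boundaries follow the content
# (min-avg-max block sizes in bytes), so unchanged regions tend
# to produce the same blocks across file versions.
ipfs add --only-hash --chunker=rabin-16384-65536-131072 file-v1.txt
ipfs add --only-hash --chunker=rabin-16384-65536-131072 file-v2.txt
```

Both files have to be added with the same chunker settings for their shared blocks to hash identically.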
<r0kk3rz>
hmm i wonder if i could write a tracker to ipfs exporter...
xnaas has quit [Quit: Dead.]
xnaas has joined #ipfs
<ylp>
Unode: if you can identify identical file parts you can use the underlying IPLD structure to link them and deduplicate them
thomasan_ has joined #ipfs
<ylp>
it should be possible to create an ipfs file node with your custom chunks I think
<Unode>
ylp: I can see how that could work, but I'd have to keep track of the chunks myself. Hence 'a project on its own'.
<Unode>
I can certainly see this as being something super useful in the near future as a way to compress/deduplicate large collections of files upon addition.
<Unode>
For instance keeping some kind of index of chunks that have been seen or exist in the network and check if the file being added can be chunked in such a way that the same chunks would be found.
<ylp>
also when the set of files changes you might want to recompute all/part of the custom chunks...
<Unode>
Not an easy task though...
thomasan_ has quit [Remote host closed the connection]
<ylp>
you need to chunk the files according to the number of duplicated parts then for each part create a chunk and get its hash to create the ipfs file
<postables[m]>
probably due to differences in the carriage return
<postables[m]>
windows is `\r` *nix is `\n`
<Obo[m]1>
that would make sense
<Obo[m]1>
that's an interesting dilemma if you're wanting certain things to be cross-system compatible
<Obo[m]1>
on an unrelated note, does anybody know how IPFS handles request garbage collection?
<Obo[m]1>
by that I mean, let's say I make an add request, and IPFS can't find that CID after like 10 minutes (or some other amount of time)
<Obo[m]1>
what's the process that IPFS takes to remove that request from the bitswap wantlist?
<Obo[m]1>
or where can I find out more about dht expiry mechanisms?
<ToxicFrog>
I would have expected IPFS to open all files in 'b' mode on windows, eliminating that problem. If it doesn't that's a serious correctness issue.
vmx has quit [Remote host closed the connection]
Fessus has quit [Ping timeout: 246 seconds]
hjoest[m] has joined #ipfs
mowcat has joined #ipfs
<Kolonka[m]>
Did you write that article ylp?
<Kolonka[m]>
And yeah, the Linux/Windows issue is concerning
mowcat has quit [Remote host closed the connection]
<LordFenixNC[m]>
yeah friend who is helping me build my IPFS network is using Linux cause he's more familiar with it... and im using windows... he set stuff to auto pin hence why i dont have hashes... BUT i do have direct access to the files that are being pinned so figured just add hash only and link from there... but none of the links were working... double checked some already linked hashes and sure enough same file's dif hash. even downloaded those files and repinned in windows to see what hash it would generate and it matches the windows hash... vs the actual pinned hash 😦
<LordFenixNC[m]>
I already messaged my friend... so i guess he will have to make a text file or a text menu to work from... cause i know the ipfs scripts are about the same... i just dont want to screw up any automation he has set up
kiao2938 has joined #ipfs
<seba->
postables[m], that's mac, windows is \r\n
<LordFenixNC[m]>
how would i fix this?
<LordFenixNC[m]>
is there something i can adjust on my windows to make it generate the same hash?
<seba->
LordFenixNC[m] i have a windows and linux machine, can you give me the file so i can test it?
<LordFenixNC[m]>
QmYAgo6KNkjE645WUPyVetrWXk3eDYg4dkw2Pk3eqCjAj9 this is the working PINNED hash from linux
<LordFenixNC[m]>
QmaAJV57q9CrGMstvbHG8J2PVZKMAT35JozZpVefoxtTAa this is the hash windows spits out
<seba->
ok
<seba->
i'll try to download both
<LordFenixNC[m]>
the 2nd isnt pinned
<seba->
LordFenixNC[m] what is it, if i may know
<LordFenixNC[m]>
Episode 2 of tonya the evil
<LordFenixNC[m]>
its a mp4
<seba->
ok a movie
<seba->
well *tAa started to download, *Aj9 not yet, but I'm behind NAT with ports closed
<LordFenixNC[m]>
see here is the odd part
<ToxicFrog>
It'd be interesting to see what happens if you try it with, say, a short text file with multiple lines in it
<LordFenixNC[m]>
i assumed maybe the file system handled it dif so i downloaded the working hash
<ToxicFrog>
That should narrow down pretty quickly whether it's a line-endings/file-mode issue.
<ToxicFrog>
(and be faster to check)
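(A sketch of the test ToxicFrog suggests, assuming go-ipfs 0.4.x with the `--only-hash` and `-Q`/`--quieter` flags; the resulting CIDs depend on the exact bytes, so none are shown:)

```shell
# Same text, different line endings -> different bytes -> different CIDs.
printf 'hello\nworld\n'     > unix.txt   # LF endings (*nix)
printf 'hello\r\nworld\r\n' > crlf.txt   # CRLF endings (Windows)

# --only-hash computes the CID without storing the blocks;
# -Q prints just the hash.
ipfs add --only-hash -Q unix.txt
ipfs add --only-hash -Q crlf.txt
```

If the two machines produce hashes matching this LF/CRLF split, something in the pipeline is rewriting line endings; if a binary like an mp4 also differs, line endings are not the cause.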
<LordFenixNC[m]>
and did add hash only again and it gave me the same hash the OG file was giving to start with
<LordFenixNC[m]>
so its odd
cubemonkey has joined #ipfs
cubemonkey has quit [Changing host]
cubemonkey has joined #ipfs
<seba->
ok, a wild guess, it could be a hardware issue (RAM, PSU, MoBo), early warning
clemo has joined #ipfs
<seba->
i'll try to recreate
<LordFenixNC[m]>
I have 0 exp with centos and dont want to screw up what vexl did lol so stuck waiting for him to get off work... dif country dif time zones fun stuff
<seba->
*Aj9 doesn't want to start
<ToxicFrog>
If that's not the issue -- and honestly I'd be surprised if it was, it would cause issues with pinning ~any binary file from Windows so I'd expect it to be found long before now -- I'd guess either hardware issues (per seba-) or different configuration of the IPFS node in a way that affects hash generation.
<LordFenixNC[m]>
he has CentOS running on a VM on my windows server
<seba->
yeah
<seba->
that could be a ram issue ^ with that
<seba->
sometimes if it runs low on ram
<seba->
it can do funny stuff
<seba->
sorry, it's a wild guess, but happened to me a while ago
<seba->
that's why i just giving it out as a possibility
aarshkshah1992 has quit [Read error: Connection reset by peer]
aarshkshah1992 has joined #ipfs
aarshkshah1992 has quit [Remote host closed the connection]
mowcat has joined #ipfs
<LordFenixNC[m]>
OMFG!!!!!! i figured it out...
<LordFenixNC[m]>
its not a Linux or windows issue
<LordFenixNC[m]>
or really IPFS issue
<LordFenixNC[m]>
to keep costs SUPER low so we can run everything WITHOUT ads and minimal donations, we are using Rclone with unlimited cloud storage to create hard drives... normal IPFS blocks wont work -- it causes the cloud to hang up and error out and other issues... but IPFS with nocopy works...
<LordFenixNC[m]>
seems to get the same HASH without pinning you need to do this "ipfs add --only-hash --raw-leaves"
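(The likely mechanism, stated as an assumption: go-ipfs's `--nocopy` (filestore) mode implies `--raw-leaves`, which stores leaf blocks as raw bytes instead of wrapping them in unixfs protobuf nodes, so the same file yields a different CID than a default `ipfs add`. A sketch:)

```shell
# Default add: leaf blocks are unixfs-wrapped -> one CID
ipfs add --only-hash file.mp4

# Raw-leaves add: unwrapped leaf blocks -> a different CID
ipfs add --only-hash --raw-leaves file.mp4

# A --nocopy node effectively does the latter, which is why only
# `--only-hash --raw-leaves` reproduces its pinned hashes.
```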
<LordFenixNC[m]>
thank you so much to those who tried to help and gave some great advice on my other problems
sbani has quit [Quit: Good bye]
sbani has joined #ipfs
pepesza has quit [Ping timeout: 240 seconds]
plexigras has quit [Ping timeout: 240 seconds]
pepesza has joined #ipfs
random_yanek has quit [Ping timeout: 240 seconds]
notkoos has quit [Ping timeout: 240 seconds]
notkoos has joined #ipfs
plexigras has joined #ipfs
purisame has joined #ipfs
jesse22 has joined #ipfs
senden9[m] has joined #ipfs
vijayee has joined #ipfs
purisame has quit [Ping timeout: 240 seconds]
dethos has joined #ipfs
random_yanek has joined #ipfs
ddahl has quit [Ping timeout: 245 seconds]
purisame has joined #ipfs
Caterpillar2 has joined #ipfs
Xaradas has joined #ipfs
Xaradas has quit [Quit: Going offline, see ya! (www.adiirc.com)]
Xaradas has joined #ipfs
dimitarvp has quit [Quit: Bye]
Xaradas has quit [Client Quit]
Xaradas has joined #ipfs
Xaradas has quit [Client Quit]
Xaradas has joined #ipfs
Xaradas has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<Obo[m]1>
LordFenixNC what's your project?
<LordFenixNC[m]>
A giant Media Archive stretching across 180 nodes and at least 10 VPS servers, 100% adfree, and because the player is whatever your browser uses, 100% safe content...
<Obo[m]1>
that's really cool
<Obo[m]1>
Looking forward to seeing how it progresses!
Papa_Alpaka has quit [Quit: Going offline, see ya! (www.adiirc.com)]
plexigras has quit [Ping timeout: 240 seconds]
henriquev has joined #ipfs
ddahl has joined #ipfs
dethos has quit [Quit: Time to Go!]
<uncle_ben>
whenever you add a file it gets automatically pinned to your local repo, correct?
<Kolonka[m]>
Yes
<uncle_ben>
ok thx
<Kolonka[m]>
Could you give a link to that, LordFenixNC?
xcm has quit [Killed (cherryh.freenode.net (Nickname regained by services))]
<LordFenixNC[m]>
as soon as i have it in FULL swing and move the gateway off the main server so nothing is actually hosted on the site... trying my best to dance around DMCA
<LordFenixNC[m]>
cause right now other than the gateway nodes cache.... i have 0 files hosted on the server other than the software and website
xcm has joined #ipfs
Ai9zO5AP has quit [Ping timeout: 246 seconds]
}ls{ has quit [Quit: real life interrupt]
Ai9zO5AP has joined #ipfs
Ai9zO5AP has quit [Read error: Connection reset by peer]
Ai9zO5AP has joined #ipfs
spinza has quit [Quit: Coyote finally caught up with me...]
Fessus has joined #ipfs
spinza has joined #ipfs
Ai9zO5AP has quit [Ping timeout: 246 seconds]
<Kolonka[m]>
>trying my best to dance around DMCA
<Kolonka[m]>
I feel that
pecastro has quit [Ping timeout: 245 seconds]
Fessus has quit [Remote host closed the connection]
Fessus has joined #ipfs
Ai9zO5AP has joined #ipfs
joocain2 has quit [Ping timeout: 256 seconds]
Caterpillar2 has quit [Ping timeout: 250 seconds]
Caterpillar2 has joined #ipfs
joocain2 has joined #ipfs
woss_io has quit [Ping timeout: 244 seconds]
Fessus has quit [Remote host closed the connection]
Fessus has joined #ipfs
reit has quit [Ping timeout: 244 seconds]
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
cubemonkey has quit [Read error: Connection reset by peer]
Caterpillar2 has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
Fessus has quit [Remote host closed the connection]