stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.33 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct
<postables[m]>
@ctOS: does that work with small files (i.e. 1 MB, 10 KB, etc.)?
<ctOS>
You do *not* want to chunk anything on IPFS smaller than a 256 KiB rolling average. The protocol overhead and storage packing make smaller chunks really inefficient.
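For reference, go-ipfs's built-in rolling chunker can be selected with the `--chunker` flag. A minimal sketch targeting a 256 KiB average chunk size; the min/avg/max byte values and file name are illustrative, not necessarily the settings ctOS is testing:

    # rabin-<min>-<avg>-<max>: chunk boundaries chosen by a rolling hash
    # 131072 / 262144 / 524288 bytes = 128 KiB min, 256 KiB avg, 512 KiB max
    ipfs add --chunker=rabin-131072-262144-524288 big-dataset.tar

    # for comparison, the default chunker cuts fixed 256 KiB blocks
    ipfs add --chunker=size-262144 big-dataset.tar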
<ctOS>
It works poorly on unique data, though still better than a fixed-size chunker.
<ctOS>
postables[m]: also, there need to be actual duplicates in the data for them to be found.
<ctOS>
postables[m]: send me a PM and I can send you a link to a draft showing concrete examples of the types of data it works well on.
<postables[m]>
I'll send ya a PM on Matrix
<postables[m]>
absolutely despising iSCSI right now lol
<Swedneck>
do you have anything this magical for making adding files faster?
<ctOS>
Swedneck: well, yes. That chunker will make adding files go on average 50% faster than without it.
<Swedneck>
fuck YES
<ctOS>
It reduces write overheads by deduplicating blocks and by packing blocks more efficiently onto your drives.
<ctOS>
Swedneck: I’m assuming you’ve whitelisted ctrl.blog in adblock and have subscribed to future articles now. :P I’ve got some good stuff lined up in the next few days and weeks.
<Swedneck>
any reason not to use it?
<Swedneck>
heh, sure
<ctOS>
It doubles the CPU time, which slows down small data sets (less than ~180 MiB), but it makes up for that by reducing the IO bottleneck that stalls the default chunker.
<Swedneck>
well i want to add 57GB sooo
<ctOS>
Swedneck: Wanna contribute some test data from your data set? (You’ll need to temporarily store three copies of the files in your IPFS store.) Send me a PM (you Matrix folks seem to have to initiate those) and I’ll send you hashing instructions and a test script to grab the data I need.
<Swedneck>
hmm, maybe i can do that on my desktop some day
<Swedneck>
my server is slow as balls when adding files to ipfs
<ctOS>
Swedneck: do you know what bottlenecks it?
<ctOS>
Swedneck: start adding some data on your server (ideally data that isn’t already in the memory cache), then run `iostat -xk 10` in a shell and keep an eye on the output (especially the %util column).
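A concrete way to run the check ctOS describes; the dataset path and device name here are examples:

    # in one shell, start adding data that isn't already cached in memory
    ipfs add -r ~/datasets/big-archive

    # in another shell: -x extended stats, -k report in KiB/s, every 10s
    iostat -xk 10
    # if %util for the drive backing ~/.ipfs sits near 100 while the CPU
    # is mostly idle, disk IO (not CPU) is the bottleneck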
<edrex>
ACTION muses: I wish IPFS supported "unlisted" data. DHTs reveal blob hashes of course, but what if the DHT didn't contain blob hashes but hashes of hashes (like the Dat project's Discovery Keys)? To receive data, you would then have to present the normal blob hash directly to the holder (without leaking it on the network). Dat works kind of like this, but only at the repository level. It seems like you could do the same thing on a
<edrex>
global DHT. Maybe even allow unlisted data to live alongside listed data. Has anyone thought about implementing this in IPFS? For me, that's what I want for most of my use cases: put all my data in the system and then send people branch keys when I want to share something.
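A rough sketch of the hash-of-a-hash indirection edrex describes, assuming SHA-256 as the outer hash; the "discovery key" derivation is hypothetical, not an existing IPFS or Dat API:

    # compute the normal content hash without publishing anything
    blob_hash=$(ipfs add -Q --only-hash secret-file.bin)

    # hypothetical discovery key: this is what the DHT would index;
    # seeing it reveals nothing about blob_hash itself
    discovery_key=$(printf '%s' "$blob_hash" | sha256sum | awk '{print $1}')

    # a requester who already knows blob_hash derives the same key,
    # locates a holder via the DHT, then presents blob_hash directly
    # to that holder (off the DHT) to actually fetch the data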
<jonathan[m]1>
QmPNaLDk5BN4ekuT4z3XgCXQQcjZRWihRMB3LK2yjp6CZK (some iso files)
<ctOS>
edrex: one of the IPFS project goals is global deduplication. If two people pack the same file into a directory with a random second file, then the duplicate file is deduplicated in IPFS. BitTorrent and Dat can’t deduplicate / share peers for the same identical file the way IPFS can. So an IPFS directory hash with a weekly archive of a podcast will deduplicate from week to week, which is definitely not the case with BT and Dat.
<ctOS>
E.g. you’ll only store one copy of the GPL license file in your IPFS repo even though the file is part of sixty different things you download. Or one copy of each podcast episode even though you’ve downloaded eight weekly archives.
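This block-level deduplication is easy to see locally; the file names here are made up for illustration:

    mkdir -p week1 week2
    cp LICENSE week1/LICENSE
    cp LICENSE week2/LICENSE
    echo "episode 1" > week1/ep1.mp3
    echo "episode 2" > week2/ep2.mp3

    # the identical LICENSE yields the identical CID in both listings,
    # so its blocks are stored in the repo (and fetched) only once
    ipfs add -r week1
    ipfs add -r week2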