encrypt3dbr0k3r has quit [Ping timeout: 264 seconds]
encrypt3dbr0k3r has joined #ipfs
sbani has quit [Quit: Good bye]
sbani has joined #ipfs
tcfhhrw has quit [Read error: Connection reset by peer]
<Powersource[m]>
i'm having a weird issue. i'm making an electron app using js-ipfsd-ctl. before ipfs 0.4.16 the app's daemon could only connect to other local daemons, which was weird but worked ok. on 0.4.16&17 the daemon looks like it's connecting to the rest of the internet which is great, but now discovery of the other local daemon is super slow
<Powersource[m]>
this is mostly an issue since i'm using a disposable repo in js for testing purposes, and every time I launch the app I'm refetching large test files from the local daemon
<Powersource[m]>
might be a sign that I should just start using non-disposable repos instead :P
saki has quit [Quit: saki]
mrhavercamp has quit [Ping timeout: 240 seconds]
dethos has joined #ipfs
kaminishi has joined #ipfs
dethos has quit [Ping timeout: 260 seconds]
xcm has quit [Remote host closed the connection]
reit has quit [Quit: Leaving]
xcm has joined #ipfs
<fiatjaf>
is the internet archive using ipfs under the hood now?
<fiatjaf>
I'm also interested in seeing their hashes
<fiatjaf>
ok, so urlstore is basically just a new datastore that, instead of storing content on disk, delegates that to an http url?
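A hedged sketch of how urlstore is exposed on the CLI around go-ipfs 0.4.17 (the URL is a placeholder, and the feature is experimental):

    # urlstore has to be switched on first
    ipfs config --json Experimental.UrlstoreEnabled true
    # add a file by reference: the data stays at the URL and is fetched from it on demand
    ipfs urlstore add http://example.com/big-file.bin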
pcardune has quit [Remote host closed the connection]
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 240 seconds]
<Powersource[m]>
continuing my above comment: I'm guessing it's because the client has a lot more nodes to look through to find the hashes (goes a lot faster when you only have to look in 1 place). Could ipfs/the dht prioritize communication with local (mdns/same machine) daemons? I feel that would be a really helpful heuristic.
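A hedged sketch of a manual workaround for that, in CLI terms (the address, port and peer ID below are placeholders; the same calls exist via js-ipfsd-ctl's API):

    # make sure mDNS local discovery is enabled on both daemons
    ipfs config --json Discovery.MDNS.Enabled true
    # on the local daemon, print its multiaddrs
    ipfs id --format='<addrs>'
    # on the app's daemon, dial the local peer directly instead of waiting for DHT discovery
    ipfs swarm connect /ip4/127.0.0.1/tcp/4001/ipfs/QmLocalPeerId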
<Powersource[m]>
void9: if those dirs are owned by root, but you should probably make yourself the owner
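Assuming the dirs in question are the /ipfs and /ipns fuse mountpoints, a minimal sketch of that suggestion:

    # create the mountpoints once and own them yourself, instead of running the daemon as root
    sudo mkdir -p /ipfs /ipns
    sudo chown "$USER" /ipfs /ipns
    ipfs mount    # with the daemon already running (or: ipfs daemon --mount)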
itaipu has joined #ipfs
<void9>
alright, I just ran everything as root to test... I could do cp /ipfs/hash ~ for a 2 MB file. Now I am trying to download a 10GB one from the same place and it doesn't work well... the transfer stops at 65540KB
<makeworld[m]>
Can someone answer my question about the linked paragraph? Why is it being changed from /ipfs/ to /p2p/ for multiaddrs? Doesn't that section of the multiaddr denote the protocol being used, such as http or, in this case, ipfs?
<makeworld[m]>
Obviously this doesn't make sense for libp2p in general, but shouldn't it remain for ipfs?
<shguwu>
makeworld[m], I think the purpose of it was to return to the original scheme rather than perpetuate incorrect usage
Jesin has joined #ipfs
Have-Quick has joined #ipfs
cris has quit [Ping timeout: 260 seconds]
cris has joined #ipfs
tcfhhrw has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
<makeworld[m]>
shguwu: what do you mean? what original scheme? and wouldn't using ipfs be correct outside of libp2p?
ericwooley_ has quit [Ping timeout: 240 seconds]
mauz555 has joined #ipfs
gmoro_ has quit [Ping timeout: 244 seconds]
Have-Quick has joined #ipfs
<shguwu>
"using ipfs be correct outside of libp2p"? you mean using ipfs with other methods to talk to peers?
<makeworld[m]>
I mean that ipfs is the protocol in use when talking to peers, so why replace it with p2p, which is not a protocol?
<shguwu>
ipfs is a protocol to talk to other peers only at a certain level of abstraction. That multiaddr is used at a lower level of abstraction
<shguwu>
and the p2p part is to specify the means of communication. could be something else in the future
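For illustration, the same peer address in the old and the new multiaddr notation (example IP and peer ID); the last component names the libp2p peer the connection is made to, independent of whether IPFS runs on top:

    /ip4/203.0.113.5/tcp/4001/ipfs/QmExamplePeerId
    /ip4/203.0.113.5/tcp/4001/p2p/QmExamplePeerId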
jesse22 has quit [Read error: Connection reset by peer]
jesse22 has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
Steverman has quit [Ping timeout: 244 seconds]
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
<makeworld[m]>
I'm not sure what you mean, talking about abstraction and things. Isn't multiaddr supposed to specify protocols as its intent is to get rid of ambiguity?
<makeworld[m]>
Not the architecture of the connection (p2p, client-server, etc.)
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jesse22 has joined #ipfs
jesse22 has quit [Client Quit]
mauz555 has quit [Remote host closed the connection]
Steverman has joined #ipfs
abueide has joined #ipfs
jesse22 has joined #ipfs
clemo has joined #ipfs
noresult has quit [Quit: leaving]
noresult has joined #ipfs
BeardRadiation has joined #ipfs
[itchyjunk] has joined #ipfs
abueide has quit [Ping timeout: 240 seconds]
Tiez has quit [Quit: WeeChat 2.2]
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Have-Quick has quit [Quit: Have-Quick]
bae9Oosi has joined #ipfs
<bae9Oosi>
Hey guys, I want to write a simple p2p sharing service that allows peers to exchange textual notes and small files. I want it to be p2p so a user can save a note to their machine if it's important for them, and then sync it back to the other peers. Is ipfs a good fit for that or are there better alternatives for this use-case?
<bae9Oosi>
Note: I want installation to be as easy as possible. I'm not sure if ipfs can be embedded as a library.
Have-Quick has joined #ipfs
pcardune has joined #ipfs
Have-Quick has quit [Remote host closed the connection]
<voker57_>
bae9Oosi: do you want to implement "shared folders"-like functionality?
Have-Quick has joined #ipfs
pcardune has quit [Ping timeout: 240 seconds]
<voker57_>
or do you only need something to generate links to content, with the link shared outside the app?
<bae9Oosi>
voker57_: really it's more like a blog where each node can save (locally) any post to make sure others will be able to read/duplicate it when the node is online. I need to generate links to be shared outside of the app, yes (http gateway)
<voker57_>
bae9Oosi: yes, IPFS is a good fit for that
<bae9Oosi>
voker57_: ok, cool! I need to do more research about embedding it as a lib then. I've seen an open issue about that on github, so I figured it's still WIP
Guanin has joined #ipfs
mauz555 has joined #ipfs
<voker57_>
bae9Oosi: it's better practice to launch the node separately and talk to it / let users reuse an existing node
<voker57_>
generally you want the library thing if you want to run a node in a very specific way
<voker57_>
and in your case using the standard interface should be ok
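A hedged sketch of what that standard interface looks like in practice, assuming a local daemon with its API on the default port 5001 (note.txt and the hash are placeholders):

    # add a note through the daemon's HTTP API and get back its hash
    curl -s -X POST -F file=@note.txt http://127.0.0.1:5001/api/v0/add
    # read it back, or hand out http://127.0.0.1:8080/ipfs/<hash> via the local gateway
    curl -s -X POST "http://127.0.0.1:5001/api/v0/cat?arg=<hash>"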
Have-Quick has quit [Remote host closed the connection]
<bae9Oosi>
voker57_: well, my main motivation here is ease of installation and use, but I guess I can just bundle everything together and control the node process from my app... dunno, we'll see
Have-Quick has quit [Read error: Connection reset by peer]
bae9Oosi has left #ipfs ["ERC (IRC client for Emacs 25.3.1)"]
mauz555 has quit [Remote host closed the connection]
matthiaskrgr has joined #ipfs
mauz555 has joined #ipfs
matthiaskrgr is now known as Guest37760
mauz555 has quit [Ping timeout: 256 seconds]
The_8472 has quit [Ping timeout: 240 seconds]
ericwooley_ has joined #ipfs
The_8472 has joined #ipfs
Guest37760 has quit [Changing host]
Guest37760 has joined #ipfs
Guest37760 has joined #ipfs
DJ-AN0N has joined #ipfs
rtjure has quit [Ping timeout: 260 seconds]
bomb-on has quit [Quit: SO LONG, SUCKERS!]
sanderp has quit [Ping timeout: 256 seconds]
drrty has quit [Ping timeout: 264 seconds]
<void9>
does anyone know how to get near native performance when fuse mounting ipfs?
<nixze>
void9: there's an ipfs mount project that gives much better performance than the native go-ipfs mount: https://github.com/piedar/js-ipfs-mount - I've only tested it very lightly; maybe it could be ported into go-ipfs
<void9>
just tested it.. maybe I am doing something wrong, but it does not work right
<void9>
did you get decent performance from it?
<nixze>
void9: when comparing the native go-ipfs mount with js-ipfs-mount mounts I saw a huge difference ... but that was probably with already locally cached data
sanderp has joined #ipfs
<void9>
can you try this file? QmdUuVHsr38g3ZdkZmUTx5wLY7eRoo5PP626AhfixCBZTp
<void9>
maybe my windows daemon is too slow? hmm
<void9>
I am testing a windows daemon / linux vm for fuse, on the same computer
<void9>
or where can i find very fast files available at gbit speeds on ipfs?
<nixze>
void9: test by adding local nodes to the swarm ... if you only want to test mount performance, pin the objects first
<void9>
I want to test retrieval/mount performance. but retrieval should be fast since the node and client share the same local LAN?
<nixze>
void9: test that separately
<nixze>
if you want to verify retrieval then test that with ipfs cat or similar first
<void9>
nixze actually I did, it was 30-100MB/s locally, via http retrieval
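A hedged way to separate the two layers being measured here, using void9's test hash:

    # daemon retrieval only, no fuse involved
    time ipfs cat QmdUuVHsr38g3ZdkZmUTx5wLY7eRoo5PP626AhfixCBZTp > /dev/null
    # same content through the fuse mount; a big gap between the two points at the mount layer
    time cp /ipfs/QmdUuVHsr38g3ZdkZmUTx5wLY7eRoo5PP626AhfixCBZTp /tmp/testfile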
<lemmi>
another possible cause of performance differences between js and go is that js and go can't really talk to each other at the moment. so while a file might be available in the js network, it may be hard to get via go-ipfs
<nixze>
there were some fixes in 0.4.17 - are you using that, or an earlier version of go-ipfs?
<void9>
I am using 0.4.17 as the windows daemon, and ipfs-git on linux for testing mount
<nixze>
void9: ipfs ls on the above hash gave no response after waiting for 2 minutes
<nixze>
and then I did ...
<void9>
nixze it's a file, not a folder
<nixze>
void9: ls works fine on files as well
<void9>
oh :( then there is something wrong with my node
<void9>
another question that has me confused. If I add files from a folder to ipfs, and they each get their own hash.. will they source-match with the files I add in a folder that are identical?
<nixze>
it took ~2 minutes for it to become available, but access is now quick on a few local nodes
IRCsum has quit [Remote host closed the connection]
<void9>
I mean, if I add a whole folder vs adding just separate files from that folder, will the identical files have the same hash and be sourced together on ipfs?
IRCsum has joined #ipfs
<nixze>
If you add identical files with the same algorithm then they should have the same file hash
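A quick illustration of that, assuming the same default add settings for both commands (filenames are placeholders):

    # adding the file on its own prints its hash
    ipfs add movie.iso
    # adding the directory that contains it prints the same hash for that file entry,
    # plus a hash for the wrapping directory; 'ipfs ls <dirhash>' shows the child hash again
    ipfs add -r my-folder/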
<lemmi>
void9 maybe your vm setup screws something up. maybe this causes the vm to sit behind another nat
<void9>
lemmi: I set it to bridged networking, it has a local ip from the same subnet as the physical network
<lemmi>
k
<nixze>
hmm getting too late here
* nixze
needs sleep
<void9>
nixze, is adding as a file vs a folder with that file considered the same algorithm?
<nixze>
void9: should be, AFAIK the only difference is if you are using --nocopy - in which case --raw-leaves is forced and will produce different hashes
<nixze>
the best way to avoid that is to always add files with --raw-leaves (it seems it will become the default in the future)
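A hedged sketch of that suggestion, so that copied and --nocopy adds of the same data end up with the same hash (the filename is a placeholder):

    # filestore / --nocopy is experimental and has to be enabled first
    ipfs config --json Experimental.FilestoreEnabled true
    # a normal add with --raw-leaves ...
    ipfs add --raw-leaves somefile.bin
    # ... matches a filestore add, where --raw-leaves is implied
    ipfs add --nocopy somefile.bin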
<void9>
omg, this looks like such a horrible design
<void9>
oh well, good thing it's not popular yet and only v0.4
<nixze>
important to remember that this is all still experimental
<void9>
yeah
ericwooley_ has quit [Remote host closed the connection]
<lemmi>
what's horrible about it?
ericwooley_ has joined #ipfs
<nixze>
on the raw-leaves thing, what I have understood from the history is that at first it was a good design choice not to allow raw-leaves at all, but then things changed.
<void9>
well, they should never have started with file duplication when adding stuff from local drives
<void9>
it's so obvious people won't like that that it should never have been done that way in the first place
<lemmi>
there is no other way to ensure things don't just magically disappear
<lemmi>
and you have --nocopy if you are sure you can handle the responsibility yourself
<void9>
what can I say, huge responsibility :P
<nixze>
--nocopy has several (major) issues of its own (which I'm currently trying to work around), but again this is all still experimental, so I'm not complaining but rather documenting the issues that I see and the use case.
<lemmi>
void9 well it is. so easy to accidentally mess something up with filestore
<void9>
ipfs could just monitor filesystem changes, and if the filesize changes or the file disappears, remove it from storage?
<lemmi>
which also comes at a cost
<nixze>
void9: what if the filesize stays the same but the contents change (like with a BTRFS filesystem image)?
<lemmi>
inotify isn't perfect, it can miss events, so you need to constantly rescan to make sure you didn't miss anything
<void9>
nixze that was exactly what was in that file hash i pasted :P
<nixze>
void9: yep I know, that's why I took it as an example for when monitoring filesize does not help
Shnaw7 has joined #ipfs
<void9>
haha ok
<void9>
it's actually set to be a seed, so it's read only
<lemmi>
it's not at all trivial to get this right
<void9>
it will mount as read only if you try to mount it
<void9>
does not have to be perfect, but it has to be able to scale. and 2x storage requirements is not good scaling
<nixze>
what would be good however is to have rm and clean commands added to filestore ... and verify extended so that it can look at file modification times to say how likely a rescan is to be needed
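For the verify part, something along these lines already exists (a hedged sketch; the exact status strings in the output may differ):

    # re-check every filestore block against its backing file and
    # show only entries whose file has changed or gone missing
    ipfs filestore verify | grep -v '^ok'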
<void9>
and then if you have different hashes when using --nocopy, that's also not cool - it divides the network resources for those files into two fractions
<lemmi>
void9: you either pack it into ipfs and remove the source file, or use filestore. no 2x storage cost
<nixze>
lemmi: in reality though, that is not how it works
<nixze>
not right now at least
Shnaw7 has quit [Remote host closed the connection]
<void9>
and this seems like a project with quite a lot of attention .. why is the development so slow ? I mean it's been going on for 3 years at least
<lemmi>
because it's a massive undertaking
<nixze>
I would say that it isn't slow at all if you look at what is going on.
<lemmi>
nixze: filestore uses the filesize + the usual overhead. if i add the file to ipfs and then remove the file, the same is true
<void9>
all I know is that I tried it now and I couldn't get a 1GB image file to mount and use reasonably fast
shguwu has quit [Quit: Leaving]
<nixze>
lemmi: talking about a real use case here: trying to add a 400GB dataset to IPFS which I obtain (and need to keep updated) via rsync; files are added, removed, and some files change (timestamp files) ...
<nixze>
for rsync to work it needs mtime set on the files, so adding to ipfs and removing the source files is not an option ...
<lemmi>
then filestore
<void9>
nixze: that's exactly the use case I was thinking of, keeping a large collection of files in sync. is it doable yet?
<nixze>
and right now there is no clean or rm in filestore, which breaks things when files are removed by rsync, and also when files are modified
<lemmi>
that's more an issue that your sync can't tell ipfs what's happening
<nixze>
there are several outstanding issues on github about it, and I feel I have been spamming those in the last few days (sorry about that)
<nixze>
lemmi: well, it is an actual use case - and "rewriting rsync" is _not_ an option
<void9>
how hard would it be to set a custom time interval at which the filestore is checked for differing timestamps/filesize, and for new/missing files?
<void9>
I mean to code that into ipfs
<nixze>
However that is what I'm actually doing, adding a bunch of glue to get this workable ... but again there is no filestore rm, so it is (right now) impossible to remove files from filestore once they have been added
<void9>
haha really?
<lemmi>
i built something similar to host a distribution on ipfs, but it's too slow to build large directories from the shell. i haven't gotten around to building this in go directly
<lemmi>
nixze: are you sure? aren't these just pins?
pecastro has quit [Ping timeout: 260 seconds]
<lemmi>
void9: what you basically need to do is copy what syncthing does, and they put a lot of work into getting that right with actually ok performance
<nixze>
my workaround for this will be to add to filestore, and then add it to mfs, and use mfs to track old hashes, and compare to new ones based on mtime and dtime
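A hedged sketch of that bookkeeping, with made-up paths (and assuming filestore/--nocopy is already enabled):

    # re-add the rsync'ed tree by reference and keep only the root hash
    ROOT=$(ipfs add -r -Q --nocopy /data/mirror)
    # shadow it into MFS so the previous root hash stays queryable
    ipfs files rm -r /mirror 2>/dev/null
    ipfs files cp /ipfs/$ROOT /mirror
    # record this hash and diff it against the one from the previous run
    ipfs files stat --hash /mirror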
<lemmi>
nixze: ipfs filestore ls before rsync. then rsync and track what gets removed
<lemmi>
if you don't use the output of rsync, you'll have no choice but to rescan
<nixze>
ipfs filestore ls does not give the file hash - only block hashes
<nixze>
lemmi: I will use mfs to get those hashes and handle this - and that's fine
<voker57_>
[02:25:11] <void9> and this seems like a project with quite a lot of attention .. why is the development so slow ? I mean it's been going on for 3 years at least
<voker57_>
I reckon the team works on filecoin since that's what they have been paid to do
<void9>
I find the filecoin concept to be orders of magnitude more difficult to accomplish than ipfs
<voker57_>
ipfs certainly would need improvements if it were to work as a filecoin storage backend, so hopefully it'll get some attention (esp. performance-wise) too
jesse22 has joined #ipfs
<lemmi>
ah, haven't had large enough files to notice this. but then mfs is the way to go right now, yes. i do that as well.