stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.33 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct
refpga has joined #ipfs
woss_io has quit [Ping timeout: 258 seconds]
mmj has quit [Quit: Page closed]
reit has joined #ipfs
mauz555 has quit []
KL4JC is now known as eligrey
<whyrusleeping>
edrex: You have to enable the IPNS pubsub option, and then any time you resolve an IPNS name it uses pubsub under the hood
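For reference, a minimal sketch of using the experimental IPNS pubsub option in go-ipfs 0.4.x (the CID and peer ID are placeholders):

    $ ipfs daemon --enable-namesys-pubsub   # start the daemon with IPNS-over-pubsub enabled
    $ ipfs name publish /ipfs/<CID>         # publishes via pubsub in addition to the DHT
    $ ipfs name resolve /ipns/<peer-id>     # resolution now uses pubsub under the hood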
thomasan_ has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
mischat has quit [Remote host closed the connection]
andi- has quit [Ping timeout: 252 seconds]
irdr has quit [Remote host closed the connection]
irdr has joined #ipfs
refpga has quit [Ping timeout: 268 seconds]
andi- has joined #ipfs
refpga has joined #ipfs
Steverman has quit [Ping timeout: 245 seconds]
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ToxicFrog>
edrex: you can also use -Q with --progress and it'll put the status bar on stderr, so you can hash=$(ipfs add -Q --progress ...)
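That pattern looks like this (a sketch; the filename is hypothetical). `-Q` keeps stdout down to the final hash, while `--progress` writes the status bar to stderr:

    $ hash=$(ipfs add -Q --progress ./big-file.bin)   # progress bar shows on stderr
    $ echo "$hash"                                    # stdout captured just the hash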
<postables[m]>
So within `$IPFS_PATH/config`, particularly for flatfs types, there are a few config directives whose specific functionality I'm not quite sure about. Two of them are `path` and `mountpoint`. From playing around with config settings, it is clear that `path` is the path to the particular folder; `mountpoint`, however, I'm not quite sure about. Is that the root mount point for the directory?
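For context, those directives live under Datastore.Spec.mounts in the config; a trimmed excerpt of a stock go-ipfs config is below (details vary by version). `path` is the on-disk folder relative to the repo, and `mountpoint` appears to name the prefix of the datastore keyspace served by that child store (e.g. /blocks for block data):

    $ ipfs config show   # look under Datastore.Spec.mounts
    "mounts": [
      {
        "child": { "type": "flatfs", "path": "blocks", ... },
        "mountpoint": "/blocks",
        ...
      },
      ...
    ]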
Guest27040 has quit [Killed (tepper.freenode.net (Nickname regained by services))]
xcm has joined #ipfs
ikari-pl has quit [Quit: This computer has gone to sleep]
}ls{ has quit [Quit: real life interrupt]
ctOS has quit [Quit: Connection closed for inactivity]
refpga has quit [Ping timeout: 268 seconds]
BeerHall has quit [Quit: BeerHall]
xcm is now known as Guest96610
Guest96610 has quit [Killed (cherryh.freenode.net (Nickname regained by services))]
xcm has joined #ipfs
lidel` has joined #ipfs
Unode has joined #ipfs
<Unode>
hi everyone
<Unode>
I have a couple of IPFS-related questions, some a bit more technical, some for which I haven't quite found a satisfying answer through online searches.
<JCaesar>
fire away (don't ask to ask)
lidel has quit [Ping timeout: 245 seconds]
lidel` is now known as lidel
mischat_ has quit [Read error: Connection reset by peer]
<Unode>
The first is related to finding content in IPFS. If a client adds a given file to IPFS, is the network signaled and the content queued to be transferred to other nodes in the network, or does this only happen if another client requests the exact file that was added?
<Unode>
In other words, if I add a file in my client and don't announce it to the network, would the network know that the file was added?
dethos has joined #ipfs
<Unode>
On the same note, as the administrator of a node that also serves as a gateway, if I identify content that I do not want to serve (illegal, offensive, ...), what ways do I have to prevent that?
<Unode>
I understand you can blacklist specific clients, but if the content is in the network, it could always bounce between other clients. Can I blacklist content instead?
mischat has joined #ipfs
<seba-->
no idea, i'm not there yet with that part of the protocol ^.^ (i'm slowly researching how the protocol really works by creating a new client). hm, but for the last one i think there is some filtering. not sure.
Steverman has joined #ipfs
<Unode>
and then a second question, more practical. Let's say I have a key-value type structure that I want to allow users to search on an IPFS-powered website. This key-value data is too large for the average web client to hold in memory (1 GB+). Are there any solutions available that implement some kind of distributed search, such that a single client doesn't have to hold the entire data?
<Unode>
I understand that the DHT works kind of like this, but it doesn't guarantee persistence.
<seba-->
interesting problem
<seba-->
but i'm not sure if i fully understand it
<seba-->
all data is split into chunks
<seba-->
256 kb is the typical one
<seba-->
you could search chunk by chunk?
sakalli_ has joined #ipfs
sakalli_ has quit [Excess Flood]
sakalli_ has joined #ipfs
<Unode>
seba--: the chunking is sort of like sharding in some database systems. You can split the data to make it easier to transport, but when searching, you need a way to know which chunk holds the answer to your query. Otherwise you have to search all of them.
<seba-->
you could make a service which would tell you that?
<Unode>
And if you consider the step of knowing which chunk has the answer, that's exactly the same thing you are trying to solve by searching. That is, term->result in the key->value store is the same as term->which_chunk.
<seba-->
or use some heuristics if the data is ordered
sakalli_ has quit [Client Quit]
<seba-->
then you only have to check a few chunks
<seba-->
and not all
<Unode>
that's kind of what IPNS is trying to solve, no? term->hash, except here the hash is a chunk.
<seba-->
i think term->result is more complex than term->hash
<seba-->
because term->hash is predefined
<Unode>
However, I also understand that IPNS is still experimental and somewhat volatile. Also, for a large number of key->value pairs it may simply be too much to hold.
<seba-->
whereas term->result isn't, because the search terms are infinite
<Unode>
seba--: no, term can be arbitrary, file->hash is predefined
<seba-->
yes
<Unode>
If I had a hash function that given 'term' would give me 'hash' I'd be done :) That would be all the search you needed :P
<seba-->
hm
<seba-->
what if we go a bit less abstract
<seba-->
and you tell me a bit more what you are trying to do
<ylp>
Unode: no content is added to your node unless you request it
<ylp>
as for the file add, the file is addressed by its content hash, so it can be accessed by anyone having the hash
<Unode>
ok, I'm trying to understand a few things, but specifically the search problem is: let's say 10 files. Each has a different name attached to it, some files more than one name (keys). When added to IPFS they get assigned a unique hash (the result I want to get at). Then for every file there's a unique 'name' (key).
<Unode>
oops that went a bit too soon
<Unode>
resuming at ... (the result I want to get at). I want to know what kinds of solutions exist to allow going from one of the keys to the result.
<ylp>
in fact a node object is created for the folder, with links to the files in that folder
<Unode>
IPNS seems like the answer, but I can't tell if it's a reliable and stable (persistent) one once millions of key->result pairs are added to it.
<ylp>
this object has its own hash
<ylp>
and each file has its own node that references each of its blocks by hash as well
<Unode>
ylp: so if no-one knows the hash or queries for it (even if by chance), no one would know the content exists in IPFS?
<ylp>
except that when a file/folder is added, a notification is sent to your peers
<Unode>
ylp: you lost me with the node for the folder. Does this mean that if I have folderA/folderB/file and folderA/file these would be treated differently by IPFS? It didn't seem to work like this when I tested it.
<ylp>
so a part of the network can know you added a folder with a file named ..
<Unode>
ah so the file is announced?
refpga has joined #ipfs
<Unode>
except only transferred if a client requests it?
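One way to observe this from the shell (the filename is hypothetical): adding prints a CID, and the DHT can then be queried for peers advertising it:

    $ cid=$(ipfs add -Q ./example.txt)   # stored locally; the CID is announced to the DHT
    $ ipfs dht findprovs "$cid"          # lists peers that advertise this content
    $ ipfs cat "$cid"                    # the data itself only transfers when requested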
<drozdziak1>
Does non-recursive pinning mean I only pin one block for a multi-block piece of data under an IPFS address?
<ylp>
I'm not really sure what each node does with this notification...
<ylp>
drozdziak1: yes you only pin the block corresponding to the given hash and not blocks linked to it
computerfreak has joined #ipfs
<drozdziak1>
ylp: So if something exceeds the block size and I `ipfs add` it, then a non-recursive pin is incomplete?
<ylp>
yes, or if you pin a folder non-recursively, none of its files will be pinned
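A sketch of the distinction (the CID is a placeholder):

    $ ipfs pin add <CID>                     # recursive (default): pins the node and everything it links to
    $ ipfs pin add --recursive=false <CID>   # direct: pins only the single root block
    $ ipfs pin ls --type=direct              # direct pins are listed separately from recursive ones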
<drozdziak1>
i.e. block-to-block relationship isn't "stronger" than named DAG nodes?
<drozdziak1>
s/nodes/edges
<drozdziak1>
ylp Okay, thanks!
chiui has joined #ipfs
computerfreak has quit [Client Quit]
<ylp>
IPLD is like the structure in a git repository
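The analogy is easy to see from the CLI: a directory's DAG node carries named links, much like a git tree object (the CID is a placeholder):

    $ ipfs object links <dir-CID>   # one line per link: child hash, size, name
    $ ipfs object get <dir-CID>     # the whole node as JSON: Data plus Links [{Name, Hash, Size}]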
Steverman has quit [Ping timeout: 250 seconds]
woss_io has joined #ipfs
ylp has quit [Quit: Leaving.]
maxzor has joined #ipfs
ctOS has joined #ipfs
gritzko has quit [Quit: WeeChat 2.2]
woss_io has quit [Read error: Connection reset by peer]
nighty- has quit [Read error: Connection reset by peer]
cubemonk1y has quit [Read error: Connection reset by peer]
<Unode>
ylp: hmm... I was not aware that folders are actually hashed as well. In the examples on the page, some of the nodes don't have a name, but in the illustration the names are represented. Are the names actually stored in the DAG too, meaning they can be queried?
mischat has quit [Remote host closed the connection]
stoopkid has joined #ipfs
trqx has quit [Ping timeout: 256 seconds]
rozie has quit [Quit: leaving]
<Starism[m]>
I was wondering, is it possible for me to manually connect to a specific peer? (my friend's node)
<seba-->
Starism[m]: what do you mean by that?
<seba-->
you could add him to bootstrap
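Concretely (the address is a placeholder; a node's own multiaddrs show up in its `ipfs id` output):

    $ ipfs swarm connect /ip4/1.2.3.4/tcp/4001/ipfs/<peer-id>   # one-off manual connection
    $ ipfs bootstrap add /ip4/1.2.3.4/tcp/4001/ipfs/<peer-id>   # reconnect automatically on daemon start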
tombusby has quit [Ping timeout: 256 seconds]
rozie has joined #ipfs
<Starism[m]>
what's bootstrap?
<Starism[m]>
And i'm just guessing here,
<Starism[m]>
if i was connected to my friend's specific node, then i wouldn't have to traverse the entire DHT (or wait 10 minutes) to access one of his files on the gateway/online
fridim has joined #ipfs
trqx has joined #ipfs
tombusby has joined #ipfs
<Starism[m]>
i was thinking something along the lines of...
spinza has quit [Quit: Coyote finally caught up with me...]
<postables[m]>
Hmm I think I have an idea, on the bus but I'll brew on my idea and respond when I'm at the office
<Encrypt>
He is far away in the future it seems :D
gmoro has quit [Ping timeout: 250 seconds]
spinza has joined #ipfs
cubemonkey has quit [Read error: Connection reset by peer]
dethos has joined #ipfs
dethos has quit [Client Quit]
thomasan_ has joined #ipfs
spinza has quit [Quit: Coyote finally caught up with me...]
tombusby has quit [Remote host closed the connection]
tombusby has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
spinza has joined #ipfs
<Unode>
Confused ... what's the difference between regular 'ipfs add' and the subcommands under 'ipfs files'?
<deltab>
the 'ipfs files' commands build up a directory in a sort of staging area
<Unode>
can you elaborate on that? My source of confusion is that files added under 'ipfs files' appear in the web UI, but those added through 'ipfs add' don't, although the web interface lists a certain amount of storage used.
<deltab>
do you happen to have used git?
<deltab>
or transactions in a database?
<Unode>
yes, daily
<deltab>
so you add files to the index before committing
<deltab>
that means git doesn't have to keep recreating tree nodes
<deltab>
it waits until you commit
<Unode>
commit = ipfs files flush?
<deltab>
yeah
<Unode>
I see
<deltab>
(I think)
<deltab>
actually I'm not sure what flush does
<Unode>
It's not clear to me how to use the files commands. Tried cp, but it seems to only accept paths inside the 'index'.
<Unode>
I guess 'write' would be the equivalent to adding content.
<deltab>
right
<deltab>
re path inside
<Swedneck>
i've also struggled with ipfs files
<Unode>
but the description isn't clear. Anyway, thanks for the explanation.
<uncle_ben>
I'm trying to use the Merkle Forest IPLD explorer on the Explore tab of the Web UI, but no matter which dataset I use of the four listed, it just hangs for hours without loading anything. My daemon is working fine and I'm connected with peers, so I'm not sure what's wrong.
<Swedneck>
i think the way to do it is `ipfs add --pin=false` then `ipfs files cp /ipfs/HASH /some/dir`
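Expanded into a worked example (file and directory names are hypothetical; content referenced from MFS is kept alive even when unpinned):

    $ hash=$(ipfs add -Q --pin=false photo.jpg)     # add without pinning
    $ ipfs files mkdir -p /photos                   # create an MFS directory
    $ ipfs files cp /ipfs/$hash /photos/photo.jpg   # link the content into MFS
    $ ipfs files ls /photos                         # now visible in MFS (and the web UI)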
<Unode>
Was just confused because the web UI had files as the interface but nothing else.
<Unode>
And by the way, adding files through the web UI makes my browser very memory-hungry; I hit the OOM killer once.
<Unode>
Swedneck: so would you say that the files layer could be used to reorganize content into 'directories' (among other things)?
<deltab>
there's a number of levels at which you can use ipfs: you can add blocks directly, or add files individually, or use the mutable filesystem
<Swedneck>
i think that's the intended use
<Unode>
ok
<deltab>
mfs is what the 'ipfs files' commands interact with
<deltab>
other commands give you lower-level access, just as in git you can add files or individual blobs
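A sketch of those levels, lowest to highest (filenames hypothetical):

    $ echo 'raw bytes' | ipfs block put                     # block level: store one raw block, prints its hash
    $ ipfs add notes.txt                                    # file level: chunks into a UnixFS DAG, prints the root hash
    $ echo 'hello' | ipfs files write --create /notes.txt   # MFS level: mutable, path-addressed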
<Unode>
indeed
<deltab>
I guess the web UI only has MFS because that's the highest level and closest to what people expect from a filesystem
tombusby has quit [Remote host closed the connection]
<Unode>
deltab: but that's misleading, since it doesn't show content added via 'ipfs add'
tombusby has joined #ipfs
<Swedneck>
mfs is definitely the better interface IMO
<Swedneck>
i wish it was easier to use via commandline
<deltab>
Unode: because that was done outside MFS
<Unode>
Swedneck: FUSE mount perhaps? isn't there something like that?
<Swedneck>
think so
<Unode>
deltab: so ipfs add is actually lower level than MFS?
<deltab>
yes
<postables[m]>
When running `ipfs pin ls`, where exactly does it pull the pin list from / how does it construct the pin list? I'm curious if it's possible to store that bit of data in an in-memory or SSD cache
<deltab>
using 'ipfs add' you get a hash as output, and you have to keep track of that hash yourself
<Swedneck>
hmm actually the fuse mount is different
<Swedneck>
it's basically a gateway
<Swedneck>
you can't list anything in the fuse mount
<Swedneck>
which is actually quite neat
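For reference, the FUSE interface (requires FUSE installed; mountpoints default to /ipfs and /ipns and are configurable under Mounts in the config):

    $ sudo mkdir -p /ipfs /ipns && sudo chown "$USER" /ipfs /ipns   # one-time setup
    $ ipfs mount        # mounts both while the daemon runs
    $ cat /ipfs/<CID>   # fetching by path works...
    $ ls /ipfs          # ...but the root can't be enumerated, as noted above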
<Unode>
deltab: I see; I managed to 'cp' content to MFS with 'ipfs files cp /ipfs/hash /MyFolder/'
<postables[m]>
deltab: was that in reply to me? I keep track of hashes added and pinned to my nodes in a database but I'm looking for a way to periodically poll `ipfs pin ls` without incurring a big delay
mischat_ has joined #ipfs
<deltab>
postables[m]: no, I hadn't seen your message
<postables[m]>
Ah okay, nvm then 🤦‍♀️
<Unode>
Q: Does pinned content get GC'ed if it exceeds StorageMax?
<postables[m]>
GC only collects unpinned content, so no
<deltab>
I'd expect that pinned content never gets garbage collected
xcm has quit [Remote host closed the connection]
<Unode>
And content added is pinned by default right?
<deltab>
yes
<deltab>
pins are stored in a database
<deltab>
I don't know anything more about how it's implemented though
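Circling back to postables' question: the CLI at least lets you filter what the pin set contains (a sketch):

    $ ipfs pin ls --type=recursive   # roots pinned recursively
    $ ipfs pin ls --type=direct      # roots pinned non-recursively
    $ ipfs pin ls --type=indirect    # blocks kept alive through a recursive parent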
xcm has joined #ipfs
dsiypl4_ has joined #ipfs
dsiypl4 has quit [Ping timeout: 246 seconds]
mischat_ has quit [Ping timeout: 244 seconds]
<Unode>
Q: Does IPFS implement a queue under the hood? And does this queue persist after the request is made?
woss_io has quit [Ping timeout: 240 seconds]
<deltab>
a queue of what?
<Unode>
Basically, if I use IPFS Companion and open a bunch of links that take forever to load, how are these prioritized? FIFO?
<postables[m]>
No, the only queue system in IPFS is within IPFS Cluster, I believe
<Unode>
and if I then close the browser (i.e. gave up on waiting) will the daemon still try to pull the content from the network?
<postables[m]>
I'm not sure if it's quite FIFO; it's whatever order your node receives them in. In the go-ipfs implementation, requests run in separate goroutines that receive the data whenever it is returned
<postables[m]>
Are you using the embedded companion node?
<Unode>
no, go-ipfs + IPFS companion on firefox
<Unode>
running ipfs daemon on the side.
<postables[m]>
And yes, I believe the companion request keeps going until you cancel it (i.e., by closing your browser)
<Unode>
postables[m]: so I need to keep refreshing the tab if I want the content to finish loading?
<postables[m]>
No, I wouldn't keep refreshing it; that would reset the request, I think
<postables[m]>
Leave it open and it'll do its thing. One suggestion though, and this is something I've done with my own nodes:
n3c has joined #ipfs
<Unode>
I noticed that when trying to open a video (~200 MB), a few MB were pulled, then the browser tried to play the content, complained it doesn't support the MIME type, and nothing else seemed to happen (i.e. the rest of the file was not fetched).
<postables[m]>
If content is taking a while to arrive, I'll poll the DHT for providers of that content and issue a swarm connect command to connect my node to the providing node
<postables[m]>
It seems to work fairly decently
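A rough sketch of that manual workaround (hedged: older go-ipfs versions may require a full multiaddr for `ipfs swarm connect`, not just a peer ID):

    $ ipfs dht findprovs <CID> | head -n 5                      # a few peers advertising the content
    $ ipfs dht findpeer <peer-id>                               # look up that peer's multiaddrs
    $ ipfs swarm connect /ip4/1.2.3.4/tcp/4001/ipfs/<peer-id>   # connect directly, then retry the fetch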
<Unode>
postables[m]: manually?
<Unode>
seems like the sort of thing the protocol should take care of, no?
<postables[m]>
Is this with the companion or a gateway? Companion I'm not too sure about, but that sounds like a bug. I believe it was whyrusleeping who posted a doc about streaming a movie via an ffmpeg command
<postables[m]>
Unode: I'm not sure if the protocol will peer up if the node isn't in your peer list; I think it just asks for the data. Not 100% sure though
thomasan_ has joined #ipfs
<Unode>
the companion seems to rewrite URLs that have ipfs.io and a few other domains to 127.0.0.1:8080
n3c has left #ipfs [#ipfs]
<Swedneck>
it redirects any IPFS resource
thomasan_ has quit [Remote host closed the connection]
<Unode>
postables[m]: if it doesn't do that already, it seems like a rather simple way of making the protocol more efficient. Maybe with some kind of quota to prevent flooding the DHT with requests.
<Unode>
Swedneck: not sure how it does its magic, but it seems to make navigation of the regular internet somewhat slower. Even Google takes a moment to reply. I'm using a separate Firefox profile for now to avoid messing up my default session.
<postables[m]>
I believe the next bitswap update for go-ipfs, which will introduce sessions, solves that. I'm not entirely certain though
<Unode>
postables[m]: that being?
<postables[m]>
That being as in when? Your guess is as good as mine
<postables[m]>
I think 0.4.19 might include bitswap sessions? Not sure
<postables[m]>
deltab: I believe the overall idea is that instead of constantly polling the DHT to find peers who are providing particular blocks, and sometimes getting duplicate blocks instead of the blocks you need next, it would establish a session with some number of providers who have the data and get the blocks from them.
<postables[m]>
Bitswap sessions aren't something I've done a lot of reading about, so I might not be entirely accurate
<deltab>
ah, I had hoped it did that already
<Swedneck>
hmm, do you guys think it's a good idea to open an issue for adding the ability to add files to mfs directly?
<deltab>
Unode: one of the drawbacks of a decentralized system is there's no authority that can tell you immediately 'no, that's not available'; you can ask for something and be waiting a while for a response
<Swedneck>
`ipfs files add <file> <directory>` which would just do the `ipfs add` for you
<deltab>
just as in bittorrent, some hashes will lack reachable providers, and so trying to fetch them will result in a long wait
<deltab>
Swedneck: isn't that what 'ipfs files cp' does already?
<Swedneck>
no
<Swedneck>
ipfs files cp requires you to manually add the file to ipfs first
<Unode>
Swedneck: I would be happy with having /ipfs visible in MFS for starters. But I agree; when I first used 'ipfs add -r' I was expecting the folder name and structure to be kept and available in MFS.