stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.22 and js-ipfs 0.35 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of
hkaddoil has quit [Remote host closed the connection]
hkaddoil has joined #ipfs
hkaddoil has quit [Ping timeout: 245 seconds]
snupples[m] has joined #ipfs
hkaddoil has joined #ipfs
clemo has joined #ipfs
hkaddoil has quit [Remote host closed the connection]
hkaddoil has joined #ipfs
hkaddoil has quit [Ping timeout: 276 seconds]
jayjo has quit [Read error: Connection reset by peer]
jayjo has joined #ipfs
mrCyborg has quit [Read error: Connection reset by peer]
ylp has joined #ipfs
pecastro has joined #ipfs
PyHedgehog has joined #ipfs
hkaddoil has joined #ipfs
p3n has quit [Remote host closed the connection]
p3n has joined #ipfs
hkaddoil has quit [Ping timeout: 265 seconds]
abbiya has quit [Quit: abbiya]
cxl000 has joined #ipfs
cxl000 has quit [Remote host closed the connection]
mpurg[m] has joined #ipfs
cxl000 has joined #ipfs
hsn has joined #ipfs
<pusherDiscord[m]> How do I add a list of ipfs hashes in one line, without a timeout period? The fact that ipfs has no job control for pinning has been hurting me a lot for months now. What is the best course of action for large jobs?
<pusherDiscord[m]> What about ipfs pinning queue control? It seems like it's really needed.
ZaZ has joined #ipfs
<pusherDiscord[m]> I can roll my own automation, but there's always something. If I do `ipfs get` myself and then `ipfs add` afterwards, sometimes I'm adding an incomplete hash. I can of course compare the returned filename hash, but not if it's recursive, because every line of output from ipfs has the original hash in it — very frustrating. I can add them one by one, but then I have to somehow account for --timeout on larger files; there is currently no --read-timeout as I was hoping, only a global timeout that I can see. This solution feels like a maze, and I normally write blockchain automation — I don't think it should be this hard to manage multiple files in ipfs. It leads me to believe I'm doing things very wrong for our batch (maybe I should use ipfs-pack? but I had a lot of bad times with it, and we also have to think about space — the target device is a small ARM rpi), or ipfs is missing an important feature for queueing a large number of pins. Please help!!
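One common workaround for the batch described above (not an official ipfs feature — just a sketch, assuming a running local daemon and a plain-text list of CIDs, one per line; `hashes.txt` is a hypothetical file name) is to drive `ipfs pin add` one CID at a time with a per-call `--timeout`, logging failures instead of aborting the whole batch:

```shell
#!/usr/bin/env bash
# Pin every CID listed in a file, one per line, with a per-pin
# timeout so one slow pin can't stall the rest of the batch.
set -u

pin_all() {
  local list="$1" failed=0
  while IFS= read -r cid; do
    [ -z "$cid" ] && continue          # skip blank lines
    if ipfs pin add --timeout=30m "$cid" >/dev/null 2>&1; then
      echo "pinned $cid"
    else
      echo "FAILED $cid" >&2           # collect failures for a retry pass
      failed=$((failed + 1))
    fi
  done < "$list"
  return "$failed"
}

# pin_all hashes.txt   # uncomment to run against a live daemon
```

The return value is the failure count, so a cron job or retry loop can decide whether to re-run.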
cxl000 has quit [Ping timeout: 245 seconds]
<pusherDiscord[m]> Thanks, I know about this already; it wasn't clear whether it would help. These are user devices and are not part of 'our cluster', but they do need to sync a list of ipfs hash pins. I'm really struggling hard with the job management. Do you know whether there is such a feature for a single ipfs server in a non-cluster environment, for the same pinning management? What I'm trying to do should be really simple. I have a text file I collate from our blockchain to get a list of all the ipfs hashes, and I want them pinned on our user devices, who run our image on their rpis. The problem is I'm having to write my own pin job scheduler, and not doing it natively in ipfs is proving really problematic. I'm happy to admit a simple problem is really causing me grief here. I have also tried ipfs-pack, but I found it too unreliable and restrictive. Why doesn't ipfs have such task queue management for pin jobs? It is so sad. I will take a look at ipfs-cluster-service again. At this stage I just feel like a complete failure for not being able to do something so simple with ipfs. I don't really want to run a full cluster peer; I just want to be able to replicate a list of pins.
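The minimal "pin scheduler" asked for above can be approximated in a few lines: diff the wanted list against what the node already has (`ipfs pin ls -q`) and only pin the difference, so re-running it on each device is cheap and idempotent. A sketch under those assumptions (`wanted.txt` is a placeholder for the collated hash list):

```shell
#!/usr/bin/env bash
# Pin only the CIDs from the wanted list that this node
# doesn't already hold as recursive pins.
set -u

sync_pins() {
  local wanted="$1" have
  # CIDs already pinned recursively on this node (one per line)
  have=$(ipfs pin ls --type=recursive -q)
  while IFS= read -r cid; do
    [ -z "$cid" ] && continue
    if printf '%s\n' "$have" | grep -qxF "$cid"; then
      echo "already pinned $cid"
    else
      ipfs pin add --timeout=30m "$cid" && echo "pinned $cid"
    fi
  done < "$wanted"
}

# sync_pins wanted.txt   # run periodically on each device, e.g. from cron
```

Because already-pinned CIDs are skipped, the same script doubles as the "sync" step after the device fetches a fresh copy of the list.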
<pusherDiscord[m]> I'm not against the idea of setting up a private swarm and having the nodes join our bootstrap ipfs peer, but I really liked the idea of just having a standard ipfs setup and pinning the hashes 'normally'. Maybe I'm asking for too much, or misunderstand some simple process that could achieve it — I don't know, honestly.
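For the private-swarm route mentioned above: go-ipfs 0.4.x supports private networks via a pre-shared key — every node that has the same `swarm.key` file in its repo will only peer with the others. The key can be generated with standard tools; a sketch (the copy-to-`~/.ipfs` step assumes the default repo path):

```shell
# Generate a 32-byte pre-shared key in the swarm.key format
# that go-ipfs expects: a codec header, an encoding line, and
# 64 hex characters of key material.
key=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' "$key" > swarm.key

# cp swarm.key ~/.ipfs/swarm.key   # then restart the daemon on every node
```

Baking the same `swarm.key` into the OS image would make every rpi part of the private network automatically — though, as noted later in the discussion, that cuts the nodes off from the public DHT.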
<pusherDiscord[m]> I will look at ipfs-cluster-service again and report back if it can be of use, but it sounds like something different from what's desired.
<pusherDiscord[m]> Yeah, this is not what we need, sadly. 😦
byanka has quit [Ping timeout: 245 seconds]
<pusherDiscord[m]> I will look through and see if there is a way for me to autoconfigure the secrets from a dial-home, but I really wanted to avoid that. These images are run by our 'users' on rpis, from a clean image; ipfs init is run at boot time, etc. It's so infuriating that I can't find a simple solution — it makes me feel like a failure as an automation engineer. Maybe we could have a wget of ourhomeurl.com/cluster-config and set up as a cluster; maybe that's the better way to go. But why the native ipfs server doesn't allow this simple import of a list of hashes really challenges me. The peers aren't known, since anyone can download our OS image and run ipfs on their rpi, so nothing feels like quite the right fit — there's some kind of gotcha whichever method we use.
<pusherDiscord[m]> 😂
<pusherDiscord[m]> IMO cluster-service goes against the principle of the DHT. Why do we need a peer list beforehand? No — we want to bootstrap publicly as a normal node. Why is it so difficult to add a list of ipfs hashes without ipfs-pack? Just whyyyyyy
endvra has quit [Ping timeout: 268 seconds]
endvra has joined #ipfs
<pusherDiscord[m]> I thought the whole point of IPFS was that it is decentralized; using ipfs-cluster-service strikes me as unable to operate that way. Shouldn't peer discovery be via the DHT? Maybe I misunderstand how ipfs-cluster works, but it looks to me like it keeps a list of nodes, which is what I thought IPFS was getting away from — requests should go through the DHT as normal. I simply want users of our ipfs image to be able to sync these files the way any public node on the standard bootstrap would. It just seems there is no job management built into ipfs, and that ipfs-cluster isn't really the solution either. It would be impractical for users of our OS image to have to push updates to our server, I think, and it would likely just be a list of peers to DDoS, imo.
<pusherDiscord[m]> I only see a global timeout at present.
<pusherDiscord[m]> Previously to this I actually used a local gateway and curled the file using -t -T, but that is only of so much use, since it will fail with recursive hashes etc.
<pusherDiscord[m]> I guess maybe I'm being too much of a perfectionist — what I have is kind of okay. It's just the timeout issue that is the main pain.
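On the timeout pain specifically: when fetching through a local gateway with curl, `--speed-limit`/`--speed-time` act as a stall timeout — abort only if the transfer drops below a rate for N seconds — rather than the global wall-clock limit of `--max-time`, which is closer to the `--read-timeout` wished for above. A sketch (the gateway URL and output path in the usage line are placeholders):

```shell
# Fetch a file, aborting only if the download stalls
# (below 1 byte/s for 30 seconds), not after a fixed
# wall-clock deadline. $1 = URL, $2 = output path.
fetch() {
  curl -s --speed-limit 1 --speed-time 30 -o "$2" "$1"
}

# fetch "http://127.0.0.1:8080/ipfs/<cid>" out.bin
```

As noted in the discussion, this still only covers single files; recursive hashes would need to be walked (e.g. with `ipfs refs -r`) and fetched entry by entry.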
cxl000 has joined #ipfs
thexa4 has joined #ipfs
zoobab has quit [Ping timeout: 245 seconds]
zoobab has joined #ipfs
vmx has joined #ipfs
mauz555 has joined #ipfs
snk0752 has quit [Quit: Ping timeout (120 seconds)]
snk0752 has joined #ipfs
BladedThesis_ has joined #ipfs
The_8472 has quit [Ping timeout: 248 seconds]
BladedThesis has quit [Read error: Connection reset by peer]
Acacia has quit [Remote host closed the connection]
ZaZ has quit [Quit: Leaving]
kaotisk has joined #ipfs
Caterpillar has joined #ipfs
<postablesDiscord> pusher, how does cluster go against a DHT at all?
<postablesDiscord> Cluster is literally just there to replicate pinsets between multiple peers. Nobody is forced to join your cluster, or to participate in a cluster at all.
thexa4 has quit [Quit: My computer has gone to sleep. ZZZzzz…]
chirptuneDiscord has left #ipfs ["User left"]
vmx has quit [Remote host closed the connection]
halbeno_ has joined #ipfs
halbeno has quit [Ping timeout: 245 seconds]
mauz555 has quit [Remote host closed the connection]
mauz555 has joined #ipfs
xcm has quit [Remote host closed the connection]
CopenBra[m] has joined #ipfs
mauz555 has quit [Ping timeout: 276 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]