stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.33 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Con
shippy has joined #ipfs
sammacbeth has quit [Ping timeout: 245 seconds]
mischat has joined #ipfs
thomasanderson has quit [Remote host closed the connection]
mischat has quit [Ping timeout: 250 seconds]
thomasanderson has joined #ipfs
mischat has joined #ipfs
sammacbeth has joined #ipfs
sammacbeth has quit [Ping timeout: 250 seconds]
hph^ has quit []
Rboreal_Frippery has joined #ipfs
phs^ has joined #ipfs
sammacbeth has joined #ipfs
djdv has quit [Quit: brb]
djdv has joined #ipfs
sammacbeth has quit [Ping timeout: 244 seconds]
<DarkDrgn2k[m]>
@swedneck: why ?
<Swedneck>
why what?
anacrolix has quit [Quit: Connection closed for inactivity]
woss_io has quit [Ping timeout: 240 seconds]
<lord|>
Swedneck: what's needed
<DarkDrgn2k[m]>
what makes them unusable?
Rboreal_Frippery has quit [Ping timeout: 244 seconds]
<Swedneck>
ah
<Swedneck>
well hardbin is too focused on privacy, which makes it needlessly difficult to quickly paste something and share the url/hash
<lord|>
yeah it requires javascript which is annoying
<lord|>
but there's no way of getting around that problem
<lord|>
without a centralized backend
<Swedneck>
ipfsbin i can't get to work, i presume i have to do some npm magic
<Swedneck>
i mean that it encrypts text
<DarkDrgn2k[m]>
it's because js-ipfs doesn't support dht :/
<lord|>
obviously
<Swedneck>
that's just completely pointless for sharing text snippets in public chats
<DarkDrgn2k[m]>
and i guess that means you cant pin stuff
Steverman has quit [Ping timeout: 246 seconds]
<Swedneck>
i'm perfectly willing to let people pin it on my gateway node
<Swedneck>
i just want to restrict it to text only
<lord|>
DarkDrgn2k[m]: there is a pin API for the gateway
<lord|>
has to be enabled manually
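The manually-enabled option lord| mentions is plausibly go-ipfs's writable gateway, which accepts content over plain HTTP; a sketch, assuming go-ipfs 0.4.x (the config key and curl usage are from its docs, not from this log):

```shell
# Enable the writable gateway (off by default), then restart the daemon.
ipfs config --json Gateway.Writable true
# After restart, content can be POSTed to the gateway over HTTP;
# the resulting hash comes back in the response headers.
curl -X POST --data-binary 'hello' http://127.0.0.1:8080/ipfs/
```

This needs a running daemon, so treat it as a configuration sketch rather than something to paste blindly.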
<Swedneck>
i don't have any issue with js
<lord|>
or just use browser extensions
<lord|>
for ipfs redirection
<lord|>
and setup pinning locally
<lord|>
Swedneck: what's wrong with the URL hash being used anyways?
<Swedneck>
i don't want this for myself
<Swedneck>
i want it as a public service
<Swedneck>
nothing
<lord|>
well, the URL spec has a convenient way to do that
<lord|>
with URL hashes
<Swedneck>
but with hardbin you cannot simply copy the hash, you need a decryption key
<Swedneck>
what shouldn't be on IPFS is things you actively don't want to be preserved, sensitive data
<Bromskloss>
Hi! Is there an up-to-date guide for how to add files without copying them? Merely using `--nocopy` doesn't seem to do the trick. (I have also done `ipfs config --json Experimental.FilestoreEnabled true`, without knowing exactly what it means.)
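For reference, the usual filestore recipe in go-ipfs 0.4.x (a sketch from its experimental-features notes): the daemon has to be restarted after flipping the flag, and `--nocopy` wants an absolute path.

```shell
# Enable the experimental filestore, then restart the ipfs daemon.
ipfs config --json Experimental.FilestoreEnabled true
# Add without copying blocks into ~/.ipfs; the file must stay in place
# afterwards, since the repo only holds references to it.
ipfs add --nocopy /absolute/path/to/file
```

If `--nocopy` still "doesn't do the trick", the missing step is usually the daemon restart after enabling the flag.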
asura_ has quit [Ping timeout: 245 seconds]
reit has joined #ipfs
<Swedneck>
which you wouldn't use a pastebin for to begin with
<fiatjaf>
for example, I don't know how to fit pastebin content in there, unless you manually create a folder with your pastebin hashes and update it on the site every time you get a new pastebin saved
mischat has quit [Ping timeout: 252 seconds]
<lord|>
fiatjaf: what's the problem with doing that?
<fiatjaf>
it's a hassle
<fiatjaf>
that few people will bear
<fiatjaf>
anyway, if someone has a link to the old folder and you delete it from your node, not only is the link to the folder (which has the index of all pastes) lost, but there's no way to know a new version is available
<lord|>
I don't think every single ipfs hash would have to be recalculated though
<fiatjaf>
actually, this is a failure of the protocol
<lord|>
just by adding one file
<fiatjaf>
the great IPFS vision isn't so great in the end
<fiatjaf>
"oh let's just create a global content-addressed system" misses that
<lord|>
it's still useful for some things
<lord|>
but the hype obviously boiled over
<fiatjaf>
the fact that merkle trees can't point to their own future
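The standard workaround for a merkle tree not being able to point at its own future is a mutable pointer outside the DAG, i.e. IPNS: the name stays fixed while the hash it resolves to gets republished. A minimal sketch, assuming a local go-ipfs daemon and a hypothetical `./pastes` folder:

```shell
# Re-add the folder after each change; -Q prints only the root hash.
HASH=$(ipfs add -r -Q ./pastes)
# Publish the new root under this node's stable peer key.
ipfs name publish "$HASH"
# Anyone holding the /ipns/<peer-id> name resolves to the latest root.
ipfs name resolve "/ipns/$(ipfs config Identity.PeerID)"
```

This doesn't make the DAG itself mutable, it just layers a stable name over a changing root, which is exactly the "index folder" pattern fiatjaf describes, minus the broken-link problem.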
<Swedneck>
there doesn't really need to be discoverability of pastebin stuff, just the preservation of links
<lord|>
&
<lord|>
^
<Swedneck>
people don't search for pastebin content, they're linked to it
<lord|>
yeah
<fiatjaf>
and then they pin it?
<lord|>
though one thing people do want to search for is free movies :)
<fiatjaf>
why would they?
<Swedneck>
so when the links go down, that causes quite an issue
<fiatjaf>
when I solve a problem after reading a bunch of error logs I want to forget about it
<fiatjaf>
now, if there was an easy way to keep that error log organized (along with the other content that helped me solve the error) then I would be inclined to save it
<fiatjaf>
for posterity
<lord|>
if you put something like project gutenberg on ipfs
<lord|>
it would be a good idea
<lord|>
because deduplication
<fiatjaf>
lord|, I agree
<fiatjaf>
I'm using IPFS mostly for that kind of thing
<fiatjaf>
there's a guy who actually put project gutenberg, if you want that
<lord|>
I tried pinning a larger selection of files than that guy did
<lord|>
took too long
<lord|>
unless someone else has since hosted every single file?
<fiatjaf>
would take a lot anyway
<fiatjaf>
what interesting archives do you have there, lord| ?
<lord|>
excluding anything audio/visual would be a good idea
<fiatjaf>
please sign up for bigsun.xyz and post them there
hurikhan77 has quit [Read error: Connection reset by peer]
hurikhan77 has joined #ipfs
sammacbeth has quit [Ping timeout: 244 seconds]
sammacbeth has joined #ipfs
kapil____ has joined #ipfs
sammacbeth has quit [Ping timeout: 240 seconds]
sammacbeth has joined #ipfs
hurikhan77 has quit [Client Quit]
user_51 has quit [Ping timeout: 245 seconds]
user_51 has joined #ipfs
bongozig_ has quit [Ping timeout: 250 seconds]
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 250 seconds]
nonono has joined #ipfs
<postables[m]>
fiatjaf: If you want I can pipe your hashes through my experimental search engine. I index content with Tesseract, TextRank, and Tensorflow. Will probably be adding DHT sniffing to pick up content soon
<DarkDrgn2k[m]>
Soooo... vps host lost my encrypted drive holding IPFS's content
<DarkDrgn2k[m]>
can i "ipfs pin add HASH HASH HASH HASH" or does it have to be one per line?
asura__ has quit [Remote host closed the connection]
asura__ has joined #ipfs
_whitelogger has joined #ipfs
dimitarvp has joined #ipfs
MDude has quit [Ping timeout: 244 seconds]
kapil____ has quit [Quit: Connection closed for inactivity]
eater has quit [Ping timeout: 250 seconds]
notkoos has joined #ipfs
leeola has joined #ipfs
<r0kk3rz>
not sure you could do that without having a full index of everything
toppler has joined #ipfs
BeerHall has quit [Ping timeout: 250 seconds]
MDude has joined #ipfs
eater has joined #ipfs
ruby32 has quit [Remote host closed the connection]
ruby32 has joined #ipfs
luginbash[m] has joined #ipfs
zxk has joined #ipfs
<deltab>
fiatjaf: okay, so you'll need a database to store what you find, and a crawler to find it
<deltab>
what kind of links do you want to index?
<jamiedubs[m]1>
No all-hands this week right? Is there one for next week?
i1nfusion has quit [Remote host closed the connection]
i1nfusion has joined #ipfs
<DarkDrgn2k[m]>
> One per line
<DarkDrgn2k[m]>
looks like it's not one per line.. pinned everything
<DarkDrgn2k[m]>
i was using xargs... didn't do -L1
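What DarkDrgn2k[m] ran into: `ipfs pin add` does accept several hashes in one call, and plain `xargs` batches all stdin lines into a single invocation unless `-L1` is given. A sketch with `echo` standing in for the real `ipfs pin add`:

```shell
# Without -L1, xargs passes all hashes to one invocation:
printf 'hash1\nhash2\nhash3\n' | xargs echo ipfs pin add
# prints: ipfs pin add hash1 hash2 hash3

# With -L1, xargs runs the command once per input line:
printf 'hash1\nhash2\nhash3\n' | xargs -L1 echo ipfs pin add
# prints three lines, one hash each
```

So the one-big-invocation behaviour that accidentally "pinned everything" in one go is exactly what `xargs` does by default.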
<DarkDrgn2k[m]>
is there any way to see activity on a specific hash?
pep7 has quit [Ping timeout: 246 seconds]
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 244 seconds]
kapil____ has joined #ipfs
<fiatjaf>
deltab: how do I crawl this thing?
<fiatjaf>
I mean, is there a way to slowly query all hashes on all nodes?
<deltab>
no, but you can query the hashes you have
<deltab>
it's also possible to watch which hashes are announced
<DarkDrgn2k[m]>
http://stats.myipfs.site <- trying to figure out why i'm running such high bandwidth usage every day when i don't have any popular ipfs pins
<postables[m]>
Still pretty WIP, haven't updated the deployed version in a while
<fiatjaf>
nice, postables[m]
<fiatjaf>
I mean, I don't know anything about tesseract, tensorflow and textrank, seems pretty complex
<fiatjaf>
how do you plan to do dht sniffing? is there a tutorial for that somewhere?
<fiatjaf>
with good search engines, we can live in an internet of pure hashes and no names, I think
<fiatjaf>
(except for the name of the search engine?)
<Swedneck>
it doesn't really seem to work, at all
<Swedneck>
"ipfs" should return some result
thomasanderson has joined #ipfs
<postables[m]>
swedneck: are you talking about the search engine? try the queries listed under the weekly ones.
<postables[m]>
fiatjaf: For DHT sniffing? dunno yet, simplest way is just `ipfs log tail` and copy content hashes
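A rough sketch of that "tail the log and copy hashes" idea; the go-ipfs command is `ipfs log tail`, and the pattern below assumes CIDv0 (`Qm...`, base58) hashes only, so it is a starting point rather than a complete sniffer:

```shell
# The grep pattern picks CIDv0 hashes out of arbitrary log text:
echo 'announced QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG today' \
  | grep -o 'Qm[1-9A-HJ-NP-Za-km-z]\{44\}'

# Against a live daemon, follow the event log and dedupe what shows up:
ipfs log tail | grep -o 'Qm[1-9A-HJ-NP-Za-km-z]\{44\}' | sort -u
```

CIDv1 hashes (`bafy...` etc.) would need a different pattern.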
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 240 seconds]
Mateon3 is now known as Mateon1
<Swedneck>
ah, that works
<Swedneck>
not many results, but still
<postables[m]>
like i said it's extremely basic and WIP. I haven't given it much content to index yet, it's just some stuff i've uploaded myself, and other stuff people have given it to index
<postables[m]>
most of the work has been done on the backend for Lens itself, optimizing how we search through content, etc....
<Swedneck>
i'm just adding the fdroid repo to ipfs, wanna index that?
<Swedneck>
50 gigs of stuff
<postables[m]>
depends on the kind of content. It can index pretty much any text files, pdf, web pages, pictures, etc...
<Swedneck>
well it's mostly APKs and source code, but it has an index built in
<postables[m]>
sure, when i'm back at my computer tonight i'll look at pinning that
<postables[m]>
wiping the data stored on these nodes in like less than a week tho
<Swedneck>
i'm just trying to get a working fdroid mirror on ipfs rn
<Swedneck>
my biggest issue is `ipfs add` being dick-punchingly slow
<Swedneck>
having the data on HDD doesn't help either
<postables[m]>
what're the specs of your node?
<postables[m]>
i've been running with bloom filter and hash on read lately and it's awesome
<Swedneck>
whatnow
<Swedneck>
as for specs: shitty old pc from like 2012, if not even earlier
<Swedneck>
lemme get detailed specs
<jamiedubs[m]1>
Re DHT sniffing for search you should check out ipfssearch
<postables[m]>
ah yeah, don't enable bloom filter or hash on read then lol
<postables[m]>
i had to up my ipfs nodes to 40 vCPU cores for hash on read to not cripple it
<postables[m]>
bloom filter seems to be quite useful and only memory intensive
<Swedneck>
CPU: AMD A8-6500 APU (4) @ 3.5GHz
<Swedneck>
i have shit tons of memory
<postables[m]>
jamiedubs: thanks ill check it out
<Swedneck>
Memory: 4765MiB / 23292MiB
<postables[m]>
you should be able to use bloom filter then, i think my nodes have a 512MB filter
<Swedneck>
what does bloom filter do?
<postables[m]>
lets you check whether your node already has blocks from the merkle-dag, i believe, so you can avoid re-adding merkle-dag nodes that you already have
<postables[m]>
speeds up adding files and what not
<postables[m]>
hash on read allows you to preserve integrity but it's stupidly expensive
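For reference, the two knobs discussed above live under `Datastore` in the go-ipfs config; a sketch, with the 512MB figure taken from postables[m]'s own setup (the size is in bytes, 0 disables the filter):

```shell
# Bloom filter over the blockstore: cheap in-memory "do I already have
# this block?" checks, which speeds up adds of overlapping content.
ipfs config --json Datastore.BloomFilterSize 536870912
# Re-hash every block on read to verify integrity; very CPU-hungry.
ipfs config --json Datastore.HashOnRead true
# Restart the daemon for the settings to take effect.
```

On modest hardware like the APU box described above, the bloom filter alone (which costs memory, not CPU) is the sensible half of this pair.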