alanshaw changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.22 and js-ipfs 0.40 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of
vegetarianisms has quit [Remote host closed the connection]
vauxia[m] has joined #ipfs
AbramAdelmo has quit [Remote host closed the connection]
AbramAdelmo has joined #ipfs
AkhIL[m]1 has joined #ipfs
AkhILman has left #ipfs ["Leaving"]
xcm has quit [Read error: Connection reset by peer]
xcm has joined #ipfs
endvra has joined #ipfs
mowcat has quit [Remote host closed the connection]
pecastro has quit [Ping timeout: 272 seconds]
RamRanRa has quit [Read error: Connection reset by peer]
_whitelogger has joined #ipfs
turona has quit [Ping timeout: 272 seconds]
turona has joined #ipfs
Simpatico_18 has joined #ipfs
mithilarun has quit [Remote host closed the connection]
mithilarun has joined #ipfs
Simpatico_18 has quit []
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
JohnFromAtl has joined #ipfs
mithilarun has quit [Remote host closed the connection]
hermeticA has joined #ipfs
v3ry3arly has joined #ipfs
zpmmckay has joined #ipfs
zpmmckay has left #ipfs [#ipfs]
}ls{ has quit [Ping timeout: 268 seconds]
}ls{ has joined #ipfs
obensource has quit [Ping timeout: 265 seconds]
M8431[m] is now known as Discord476
Discord476 is now known as Discord478
Discord478 is now known as Discord480
Discord480 is now known as Discord482
Discord482 is now known as Discord484
quirk has joined #ipfs
AbramAdelmo has quit [Remote host closed the connection]
_whitelogger has joined #ipfs
Clarth has joined #ipfs
}ls{ has quit [Quit: real life interrupt]
mauz555 has quit []
quirk has quit [Remote host closed the connection]
tpewnag has joined #ipfs
tpewnag has quit [Max SendQ exceeded]
Belkaar_ has quit [Ping timeout: 265 seconds]
Belkaar has joined #ipfs
Belkaar has joined #ipfs
zeden has quit [Quit: WeeChat 2.6]
AbramAdelmo has joined #ipfs
Schnauzer has joined #ipfs
tpewnag has joined #ipfs
CCR5-D32 has quit [Quit: ZZZzzz…]
user_51 has quit [Ping timeout: 265 seconds]
kakra has joined #ipfs
user_51 has joined #ipfs
hurikhan77 has quit [Ping timeout: 268 seconds]
KempfCreative has joined #ipfs
obensource has joined #ipfs
verin0x6 has joined #ipfs
verin0x has quit [Ping timeout: 240 seconds]
verin0x6 is now known as verin0x
MDude has quit [Quit: Going offline, see ya! (www.adiirc.com)]
tpewnag has quit [Remote host closed the connection]
_whitelogger has joined #ipfs
hermeticA has left #ipfs [#ipfs]
malfort_ has quit [Quit: Leaving]
malfort has joined #ipfs
fling has quit [Ping timeout: 240 seconds]
silotis has quit [Ping timeout: 260 seconds]
silotis has joined #ipfs
Clarth has quit [Ping timeout: 260 seconds]
maxzor has joined #ipfs
Cavedude has quit [Ping timeout: 260 seconds]
Cavedude has joined #ipfs
KempfCreative has quit [Ping timeout: 258 seconds]
stoopkid_ has joined #ipfs
notice81cabbage[ has joined #ipfs
maxzor has quit [Remote host closed the connection]
flacks has quit [Remote host closed the connection]
v3ry3arly has quit [Quit: sleeping or rebooting or something iunno]
Trieste has quit [Ping timeout: 268 seconds]
Trieste has joined #ipfs
fleeky has quit [Ping timeout: 260 seconds]
maxzor has joined #ipfs
fleeky has joined #ipfs
}ls{ has joined #ipfs
bengates has joined #ipfs
ylp has joined #ipfs
maxzor has quit [Ping timeout: 268 seconds]
pecastro has joined #ipfs
rendar has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
ipfs-stackbot has joined #ipfs
maxzor has joined #ipfs
ygrek_ has joined #ipfs
voker57 has quit [Quit: voker57]
vitaminx has joined #ipfs
voker57 has joined #ipfs
manray has quit [Ping timeout: 260 seconds]
maxzor has quit [Remote host closed the connection]
maxzor has joined #ipfs
maxzor has quit [Remote host closed the connection]
maxzor has joined #ipfs
maxzor_ has joined #ipfs
maxzor has quit [Ping timeout: 240 seconds]
manray has joined #ipfs
AbramAdelmo_ has joined #ipfs
AbramAdelmo has quit [Read error: Connection reset by peer]
maxzor_ has quit [Remote host closed the connection]
PyHedgehog has quit [Quit: Connection closed for inactivity]
vmx has joined #ipfs
ZaZ has quit [Quit: Leaving]
zeden has joined #ipfs
Schnauzer has quit [Ping timeout: 268 seconds]
Wimsey has joined #ipfs
KempfCreative has joined #ipfs
jokoon has joined #ipfs
AbramAdelmo has quit [Remote host closed the connection]
jcea has joined #ipfs
andrewnez[m] has joined #ipfs
manray has quit [Ping timeout: 258 seconds]
manray has joined #ipfs
jcea has quit [Remote host closed the connection]
jcea has joined #ipfs
AbramAdelmo has joined #ipfs
AbramAdelmo_ has joined #ipfs
AbramAdelmo has quit [Read error: Connection reset by peer]
<koivunejDiscord[>
is there anything in the CID to differentiate `dag-pb` and `dag-pb containing an unixfs object`, or are all `dag-pb`'s treated as unixfs objects unless the protobuf parsing fails?
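[Editorial note: the question above went unanswered in-channel. As far as I know, the CID's multicodec only identifies `dag-pb`; whether the node wraps a unixfs object is signaled inside the protobuf's Data field, so you have to decode the block to find out. A minimal self-contained sketch of building CIDv1 bytes by hand and reading the version and codec back out (the payload bytes are illustrative, not a real unixfs node):]

```python
import hashlib

def varint(n: int) -> bytes:
    # unsigned LEB128, as used by multiformats
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

DAG_PB = 0x70  # multicodec code for dag-pb
payload = b"\x0a\x04\x08\x02\x12\x00"  # illustrative stand-in for a dag-pb block
digest = hashlib.sha256(payload).digest()
multihash = varint(0x12) + varint(len(digest)) + digest  # 0x12 = sha2-256
cid_bytes = varint(1) + varint(DAG_PB) + multihash       # CIDv1: version, codec, multihash

def read_varint(buf: bytes, i: int = 0):
    shift = n = 0
    while True:
        b = buf[i]
        i += 1
        n |= (b & 0x7F) << shift
        if not (b & 0x80):
            return n, i
        shift += 7

version, i = read_varint(cid_bytes)
codec, _ = read_varint(cid_bytes, i)
print(version, hex(codec))  # 1 0x70: the codec says dag-pb, nothing says "unixfs"
```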
bsm117532 has joined #ipfs
bsm117532 has left #ipfs [#ipfs]
robamman2020 has joined #ipfs
<robamman2020>
Hello...... Please come check out my chatroom: h t t p : / / nicechatroom2020.000webhostapp . c o m /
robamman2020 has quit [Disconnected by services]
<ShadowJonathanDi>
@Admins advertising
<ShadowJonathanDi>
woss lemme answer your question
<ShadowJonathanDi>
1: IPFS is a libp2p peer-to-peer network. By definition, that means no central authority "keeps tabs" on how many nodes check into the network, so an absolute number can't be determined.
<ShadowJonathanDi>
Like people said, it's around 100k to a million (at the moment), but that's an estimate based on non-exhaustive data.
octav1a has joined #ipfs
<ShadowJonathanDi>
2: I personally run my cluster self-hosted, there is indeed a possibility to deploy to a cloud service (I recommend DigitalOcean), but I recommend against it since it kinda undermines the core idea of IPFS (decentralized storage and addressing, the idea is to basically have everything at home).
<ShadowJonathanDi>
However, I *do* recommend making an IPFS cluster if you plan on having either high traffic or high volume; have an off-site backup service for your needs.
<ShadowJonathanDi>
This is recognized as a problem (example: someone wants to store 10TB of data but doesn't want to bother with paying pinning services (centralized), or with setting up an IPFS cluster and spreading the data across different providers (distributed, but costly)), and for that, "Filecoin" is currently being developed; I recommend looking into it.
<ShadowJonathanDi>
3: In my opinion, IPFS cluster is in a "beta" stage, and it's not hardened or tested *very* thoroughly, know what you want, and what you need, maybe a pinning service is enough if you plan on making a distributed app on IPFS.
<octav1a>
Are there any implementations started for IPFS wrappers / libs that allow efficient storage of versioned files? For example, storing a file that changes over time only slightly, and storing / downloading the deltas only instead of storing a whole copy of the file each time?
<octav1a>
For a small number of versions, I imagine it could be efficient to download the base data and the deltas, and apply all of them together, but for a larger number of versions this would get very inefficient, no?
<ShadowJonathanDi>
no
<ShadowJonathanDi>
there's talk and ideas of a git-like implementation, but...
<ShadowJonathanDi>
git works via snapshotting, not diff stacking
<octav1a>
wow, I had the wrong conception of it for a long time >.>
<ShadowJonathanDi>
and even if you'd need to apply 100s of diffs on top of each other, that's a very cpu-intensive operation compared to snapshotting
<ShadowJonathanDi>
it's a tradeoff between storage and cpu-time
<ShadowJonathanDi>
imo, use git-like snapshotting, and then de-duplicate the individual blocks in the changed files
<ShadowJonathanDi>
so that minimal extra storage is required
<ShadowJonathanDi>
but the low-cputime advantage of snapshotting is retained
<ShadowJonathanDi>
git actually works with a merkle tree
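[Editorial note: the snapshot-plus-block-dedup idea described above can be sketched as a toy content-addressed chunk store. This is not how go-ipfs chunks data; chunk size and hashing here are simplified for illustration.]

```python
import hashlib

CHUNK = 4           # toy chunk size; real chunkers use ~256 KiB or content-defined cuts
store = {}          # digest -> chunk bytes (the content-addressed block store)

def snapshot(data: bytes) -> list:
    # store every chunk under its hash; identical chunks collapse to one entry
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store[h] = chunk
        refs.append(h)
    return refs

v1 = snapshot(b"aaaabbbbcccc")   # version 1 of a file
v2 = snapshot(b"aaaaXXXXcccc")   # version 2: only the middle chunk changed
shared = set(v1) & set(v2)
print(len(store), len(shared))   # 4 2: six chunk refs, but only 4 stored; 2 shared
```

Both snapshots are full versions (low CPU cost to read), yet the unchanged chunks are stored once, which is the storage/CPU tradeoff being discussed.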
<octav1a>
So to summarize, from your perspective, 1) there is no library/wrapper that can sacrifice performance for minimizing disk space usage of pinning many very similar files in a chain, and 2) because of the workings of the current ipfs implementation, it would not be worth it to try this anyway.
<ShadowJonathanDi>
git branches (and HEAD) are just references, and "pulling" and "pushing" is just sending or receiving extra "blocks" to local or remote store
<octav1a>
I think the line git prints when you clone, "resolving deltas (x/x)", was what led me to this conclusion, btw.
<ShadowJonathanDi>
1: not right now, i imagine when the git-like versioning comes out, you can make a library that reduces duplicated data when a file has been changed (by only changing the blocks from the previous version of the file that has been changed)
<ShadowJonathanDi>
(same here, so i was surprised when i actually looked up what git was)
<ShadowJonathanDi>
2: i didn't say that, there's merit in making a versioning system that works robustly, so that people can "pull" and "push" changes to a decentralized filesystem
<ShadowJonathanDi>
or copy that filesystem and make changes to it locally
<ShadowJonathanDi>
work on it like a git branch
<ShadowJonathanDi>
unfortunately then there's the problem of authority, to know for certain if a new version of the branch is actually "right", or "verified"
<ShadowJonathanDi>
that can be done in various ways, like signing commits, or by making an authority in the network, or a bunch of peers who hold authority to publish the new version over pubsub or IPRS
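[Editorial note: the "signing the new version" idea above can be sketched with a toy authority check. HMAC with a shared key is a stand-in here for real public-key signatures (IPNS records actually use asymmetric keys); the CID strings and key are made-up placeholders.]

```python
import hmac
import hashlib

AUTHORITY_KEY = b"demo-shared-secret"  # placeholder; real systems use keypairs

def publish(root_cid: str):
    # the authority signs the CID of the new branch head
    sig = hmac.new(AUTHORITY_KEY, root_cid.encode(), hashlib.sha256).hexdigest()
    return root_cid, sig

def accept(root_cid: str, sig: str) -> bool:
    # peers recompute and compare before adopting the new version
    expected = hmac.new(AUTHORITY_KEY, root_cid.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

cid, sig = publish("bafy-demo-newroot")
print(accept(cid, sig), accept("bafy-demo-forged", sig))  # True False
```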
<ShadowJonathanDi>
(interplanetary record service, generalized IPNS, currently still in draft status, god i hope someone would take a crack at it someday)
<ShadowJonathanDi>
again, i *do* agree with #2 on the exact words of "current IPFS implementation", because the "current IPFS implementation" is absolute *shit* as far as ive experienced it, it still has an incredibly long way to go before its full potential can be realized
<octav1a>
In general, my plans are much less grandiose. :) I am just asking so I can determine where to aim my efforts: I have a use case where I would like to be able to track versions of one file over time. There would only be one committer. With local GIT versioning, I was able to see very small file size increase on making changes to the one file in the repository. With IPFS it seems like only duplication is currently supported. This is
<octav1a>
fine, I will just plan to use more disk space, I just wanted to get an idea if there was something better I could attempt.
<ShadowJonathanDi>
and i personally think that protocol labs needs to get their heads out of golang, and into python, so they can conduct actual experiments in that implementation (py-ipfs, py-libp2p), and then upstream those changes to more robust implementations (go-ipfs, rust-ipfs)
<octav1a>
For 2) it sounds like it could still be possible, but should it be part of go-ipfs or js-ipfs proper, or a wrapper that would be developed?
<ShadowJonathanDi>
not... right now, and since ipfs is based on top of libp2p (a modular networking stack, which allows protocols to work on a need-to-access basis across the network), i think it'll be "part" of ipfs, because that means developers will be able to slot it right in
<ShadowJonathanDi>
and hopefully then, core developers of go-ipfs would upstream that default module to the master branch and all
<octav1a>
Thanks for letting me know these things, I didn't even know there was a py-ipfs effort before, and I am definitely interested in contributing.
<ShadowJonathan>
Please actually take a look at py-libp2p, libp2p is the underlying foundation for what ipfs does and needs, and currently that project is moving slowly
<ShadowJonathan>
I do have to admit that I'm biased in that aspect because im currently contributing to py-libp2p, but still
<ShadowJonathan>
I'm more inclined to preach the ideas of libp2p and ipfs than their current implementations, because they fall short of those ideas more than they deliver on them, I'll be honest in that
<ShadowJonathan>
Even then, I believe this technology is crucial for the next decade or two, with the increasing centralisation of our data with big tech
<octav1a>
Well I have been using ipfs for archiving for about a year now, and apart from some bugs causing crashes of the daemon, the actual core functionality has been useful and reliable.
<ShadowJonathan>
It's probably even *more* important in the far future, when we need *interplanetary* data to persist without proprietary infrastructure to gatekeep the long distances between earth and other planets
<ShadowJonathan>
True, but I don't have full confidence in it being robust
<ShadowJonathan>
Or more specifically: it being maintainable, go-ipfs's core
maxzor has joined #ipfs
<ShadowJonathan>
They're personal concerns, so take them as you wish
MDude has joined #ipfs
<vaultec81[m]>
ShadowJonathan: go-ipfs does a pretty good job as far as performance from the language standpoint. Interpreted vs compiled: interpreted is always going to be a bit slower. I can see the appeal for someone wanting to make an application on py-libp2p or py-ipfs, but using that as the core server isn't worth the lower performance.
<ShadowJonathanDi>
im sorry if it looked like i was implying that, but i was not
<swedneck[GMT1]Di>
i mean nowadays there's nuitka which compiles python and can give at least like 2x the speed
<ShadowJonathanDi>
ofc compiled is going to be faster, ofc interpreted is going to be slower, its the tradeoff between flexibility and performance, and i feel like go-ipfs is a little bit cobbled on, and thus a bit unmaintainable
<ShadowJonathanDi>
im particularly advertising python because of its extremely powerful plug-n-play ability, without the memory-bloat overhead of Java, while still being relatively fast
<ShadowJonathanDi>
im proposing to use python as a "training ground" for new ideas/protocols to use within ipfs, and to then think about turning those into a robust version in golang
<ShadowJonathanDi>
i know what its like to have idea-try-minimize development in golang, its not pretty
<ShadowJonathanDi>
thats why i wanna separate the "idea+try" part, and have it be in a more flexible language like python
<ShadowJonathanDi>
if i look at the `ipfs` and `libp2p` github organizations, i see *so* many plugin projects for go-ipfs that're just abandoned from lack of interest, and most of them have good *ideas*, its just that "ongoing development" is very tedious to do in golang
<eleitl[m]1>
How well is rust-ipfs doing? rust-libp2p seems to be tracking well.
<ShadowJonathanDi>
libp2p has the benefit of being language-agnostic (hell, almost "everything"-agnostic), so that opens up the ability to experiment in more flexible language, and then to "inscribe" it into stable languages
<ShadowJonathanDi>
when it has already been tested and hardened
<eleitl[m]1>
Interesting, so you say that libp2p itself is braking the ipfs project? I did not realize that.
<ShadowJonathanDi>
not... really?
<ShadowJonathanDi>
...wait, yeah, actually
<ShadowJonathanDi>
if you look at the whole idea of it all, there's just a few components
<eleitl[m]1>
I understood you that py-libp2p would be the place to test, and only translate to other other ecosystems when battle-tested/hardened there.
<eleitl[m]1>
You meant that, right?
<ShadowJonathanDi>
there's libp2p, which provides a collection of "lego blocks" for developers to work with, so they can make *any* project doing *anything* with *any* level of the libp2p stack, and it integrates all seamlessly; transports, protocols, and misc utility items
<ShadowJonathanDi>
ipfs is just a set of protocols and provided transports, all packaged nicely with some wrapping paper (and a bow), and then added some face with cmd commands
<ShadowJonathanDi>
yeah, i meant that
<eleitl[m]1>
Thanks, interesting.
<ShadowJonathanDi>
im lukewarm on ipfs atm, but libp2p actually can change the world
<ShadowJonathanDi>
its laughably minimal in its principles, but it provides a framework for foundation/application-level development in a decentralized format
<eleitl[m]1>
ipfs alone can help change the world quite a bit, particularly if the pinning slowdown and resource discovery are fixed.
<ShadowJonathanDi>
np
<swedneck>
ipfs is actually useful now, but libp2p could make networking soooo much nicer
<vaultec81[m]>
ShadowJonathan (Discord): so you are saying that it's better to build and test the standards on python. Then build the final product on go or rust. How does that solve the problem of management?
<ShadowJonathanDi>
^^^
<ShadowJonathanDi>
exactly
<swedneck>
it's like ipv6 but you don't need to wait for your ISP to implement it
<ShadowJonathanDi>
if implemented everywhere, it sets a baseline for any device to be able to communicate with any other, with minimal configuration
<ShadowJonathanDi>
and actual graceful negotiation on application-level protocol
<vaultec81[m]>
Libp2p won't solve the problem of nodes being discoverable unless it has a packet routing system built in, presuming the node being connected to is behind multiple links to reach the internet.
<ShadowJonathanDi>
> so you are saying that it's better to build and test the standards on python. Then build the final product on go or rust. How does that solve the problem of management?
<ShadowJonathanDi>
what do you mean, management?
<ShadowJonathanDi>
eventually yes, develop on flexible (JS, python), finalize on static (golang, rust)
<ShadowJonathanDi>
> it's like ipv6 but you don't need to wait for your ISP to implement it
<ShadowJonathanDi>
its a network built on top of a network
<ShadowJonathanDi>
now if only ipv6 could be deployed everywhere, then this could all actually work smoothly :akkopout:
<eleitl[m]1>
p2p would certainly work a lot smoother with real IPv6 everywhere.
<ShadowJonathanDi>
libp2p is not trying to be its own transport, it builds upon transports
<ShadowJonathanDi>
regular networks are always going to have to exist, yes, but libp2p is prepared to work with each of them interchangeably
<eleitl[m]1>
A user on another channel has his router crashing when starting up IPFS. This is the kind of brain damage we're dealing with.
jonnycrunch has joined #ipfs
<vaultec81[m]>
Management being project management
<ShadowJonathanDi>
cuz of incoming traffic, his router is being SYN-barraged
<ShadowJonathanDi>
DDos'd, essentially
<swedneck>
> <@_discord_132583718291243008:permaweb.io> > it's like ipv6 but you don't need to wait for your ISP to implement it
<swedneck>
> its a network built on top of a network
<swedneck>
I know, hence I said 'like'
<ShadowJonathanDi>
thats cuz go-ipfs does not actually give a fuck who they're connecting with, and that happens on a network-wide basis
<swedneck>
For the end user they enable roughly the same features
<ShadowJonathanDi>
so it might just be a subset of people trying to connect to you, but that's still a thousand SYN's per second
jokoon has quit [Quit: jokoon]
<ShadowJonathanDi>
i got this same problem when i made my own ipfs cluster public to the world
<ShadowJonathanDi>
it crashed my router twice before i pulled the plug and removed the port-forward
<vaultec81[m]>
Awhile back my ipfs server had over 10k connections
<ShadowJonathanDi>
the network is riddled with star-like patterns, *everyone* is trying to connect to the same nodes, since they're the ones who are visible from the DHT
<ShadowJonathanDi>
many nodes are inaccessible, and as a result, their DHT values are never really queried
<ShadowJonathanDi>
and remain buried
<ShadowJonathanDi>
so to someone querying the DHT, the network actually looks a lot smaller than it is, but many nodes live on the fringes, behind NAT'd routers, just regular clients like mobiles and laptops
<ShadowJonathanDi>
but surprise surprise, those can't handle 1000 connections
<ShadowJonathanDi>
because of crappy ISP routers and years and years of infrastructure negligence
<ShadowJonathanDi>
*decades
<vaultec81[m]>
At the mercy of ISPs to deploy ipv6 for all devices to work plug and play.
<ShadowJonathanDi>
essentially, yeah
heizen has joined #ipfs
<ShadowJonathanDi>
i also see ISPs just doing absolute bullshitty things like dropping all incoming SYNs regardless of where they come from, and other dumbass umbrella policies, all to keep the majority of customers happy and the profit rolling in
<ShadowJonathanDi>
:pinkbored:
<ShadowJonathanDi>
no wonder that repealing net neutrality became such a big thing a few years back
<ShadowJonathanDi>
everyone wanted the money
heizen has quit [Client Quit]
heizen has joined #ipfs
<ShadowJonathanDi>
the problem is that today's internet is *really* biased toward centralized networking, and anything that tries to use dialup connections as if they were fiber cables will be met with heavy resistance
<swedneck>
i'm surprised there aren't any groups of people that have teamed up to make wireless mesh networks that span at least part of a city
<vaultec81[m]>
There are groups that are doing that
<swedneck>
where?
is_null has quit [Ping timeout: 240 seconds]
<ShadowJonathanDi>
by adding a million nodes into a network, and instructing each one of them to find 200 others and connect to them, there are now *thousands* of connections coming into normal home ADSL lines everywhere around the world, and ISPs wont be happy when they figure out *why*
<swedneck>
would be interested to read about it
<vaultec81[m]>
NYC, Philly mesh, Freifunk
<eleitl[m]1>
Yes, we even have Freifunk meshes in the wild.
<ShadowJonathanDi>
im most interested in libp2p's listing of *bluetooth* as a transport
<ShadowJonathanDi>
when i saw that, my imagination went wild about "street-level" exchange, where people will walk around with their phones, and they'll essentially make the mesh themselves
<vaultec81[m]>
Those are some of the most popular mesh networks
<ShadowJonathanDi>
exchange files without instructing their phones, and basically keep "off the grid"
<eleitl[m]1>
In reality the meshes have problems with batman-adv, but there are alternative protocols like Yggdrasil that should help quite a bit with that.
<swedneck>
<ShadowJonathanDi "when i saw that, my imagination "> shame bluetooth is a battery hog and about as secure as a wet sock
<ShadowJonathanDi>
true, but it's implemented in literally every device
<eleitl[m]1>
We actually have some 3 IPFS nodes on the Yggdrasil network.
<swedneck>
i wonder if it'd be possible to sneakily hide solar-powered routers around cities
<ShadowJonathanDi>
also, libp2p provides (and kinda *needs*) a security protocol to be registered and negotiated to provide a secure connection, and actually verify a node's identity
<vaultec81[m]>
Wouldn't take all that much to build some small solar-panel mesh points
<eleitl[m]1>
Running APs on battery-buffered PV panels is a solved problem. Mikrotik did it for years.
<vaultec81[m]>
I think that routing plays a big role in ipfs. Transports around where data is located.
ZaZ has joined #ipfs
<igel[m]>
re: transport, the yggdrasil-network allows for MPTCP-like bonding
<ShadowJonathanDi>
looks fairly interesting
<igel[m]>
so if you have a ygg-router on a segment with 2 paths to the internet (one ethernet, one wireless) you can increase your throughput (even with 1 stream, like 1 ftp send/get or scp)
<ShadowJonathanDi>
i gotta say though, these are network-level decentralized implementations, libp2p is an application-level one, basically providing any application or library an easy gateway to working in a decentralized way
<igel[m]>
(in some situations, a single stream of data can yield 1.5gbps using 1gig ethernet and .11ac wifi)
<ShadowJonathanDi>
and *thats* why i think libp2p will change the world, at the end of the day, there's going to be *thousands* of applications and implementations of anything and everything, but it needs a set of principles that hold it together
* igel[m]
will cheers to that
<ShadowJonathanDi>
libp2p doesnt set principles against networks, but it leverages them
<eleitl[m]1>
Sure. It just illustrates that Ygg had to solve some of the same problems, and did it at transport layer.
bsm117532 has joined #ipfs
<swedneck>
we should probably make a set of bridged rooms for mesh discussions
bsm117532 has quit [Ping timeout: 260 seconds]
ylp has quit [Quit: Leaving.]
bsm117532 has joined #ipfs
<ShokuninDiscord[>
How so?
<swedneck>
because it's a very interesting subject which doesn't fit in this room
<eleitl[m]1>
If I want to pin 56k Project Gutenberg books, any useful suggestions? It's not large, only 48 GBytes.
heizen has quit [Remote host closed the connection]
<eleitl[m]1>
There's Yggdrasil Community which at times has meshing discussions. Matrix: https://riot.im/app/#/room/#yggdrasil-community:matrix.org
brianhoffman_ has joined #ipfs
mithilarun has joined #ipfs
vitaminx has quit [Quit: WeeChat 2.6]
brianhoffman has quit [Ping timeout: 265 seconds]
brianhoffman_ is now known as brianhoffman
bengates has quit [Remote host closed the connection]
brianhoffman has quit [Client Quit]
brianhoffman has joined #ipfs
is_null has joined #ipfs
manray has quit [Ping timeout: 265 seconds]
fling has joined #ipfs
l3 has joined #ipfs
l3 has left #ipfs [#ipfs]
Ecran has joined #ipfs
matt-h has quit [Quit: Leaving]
matt-h has joined #ipfs
<vaultec81[m]>
We definitely need a mesh networking topic section/room, Shokunin (Discord). Mesh networks will be extremely powerful in combination with IPFS and similar technology
<eleitl[m]1>
Sounds good to me.
mithilarun has quit [Remote host closed the connection]
<ShokuninDiscord[>
swedneck [GMT+1] go ahead!
xcm has quit [Remote host closed the connection]
<swedneck>
#mesh:permaweb.io
xcm has joined #ipfs
jonnycrunch has quit [Ping timeout: 260 seconds]
KempfCreative has quit [Ping timeout: 260 seconds]
is_null has quit [Ping timeout: 268 seconds]
manray has joined #ipfs
is_null has joined #ipfs
KempfCreative has joined #ipfs
Hooftly1337[m] has joined #ipfs
joocain2 has quit [Ping timeout: 240 seconds]
bsm117532 has quit [Quit: Leaving.]
lastdigitofpi[m] has joined #ipfs
_whitelogger has joined #ipfs
jokoon has quit [Quit: jokoon]
joocain2 has joined #ipfs
nijik has joined #ipfs
nijik has quit [Client Quit]
nijik has joined #ipfs
joocain2 has quit [Ping timeout: 240 seconds]
is_null has quit [Remote host closed the connection]
joocain2 has joined #ipfs
<lastdigitofpi[m]>
ShadowJonathan (Discord): Just reading the above discussion - and as a disclaimer i'm new to IPFS - why is TCP used for the DHT connections instead of UDP? Seems like the latter would handle large numbers of peers trying to talk to a node more gracefully by having the packets dropped
<ShadowJonathanDi>
oh
<ShadowJonathanDi>
that is a very good idea, and point
<ShadowJonathanDi>
uhhh
<ShadowJonathanDi>
:conniehmm:
<ShadowJonathanDi>
ah yeah
joocain2 has quit [Ping timeout: 240 seconds]
<ShadowJonathanDi>
The DHT in ipfs (effectively libp2p) is currently not "up front": for DHT exchange to take place, the connection needs to go through *at least* a verification phase, so a security layer needs to be established on top of the connection
<ShadowJonathanDi>
TCP is used because the multiaddresses advertised by peers include it, and go-ipfs tries *all* multiaddresses (a multiaddress looks like this: `/ip4/127.0.0.1/tcp/4000`) to establish connections
<ShadowJonathanDi>
TCP is most convenient for establishing and maintaining connections, it's also least probable that an ISP blocks incoming packets if they're TCP-based
<ShadowJonathanDi>
so the network has a heavy bias for TCP
<ShadowJonathanDi>
even so, because DHT is not "up front" in this case, there will *always* be overhead
<ShadowJonathanDi>
overhead of setting up connections, security, and negotiating correct DHT protocol versions
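[Editorial note: the multiaddress format mentioned above (`/ip4/127.0.0.1/tcp/4000`) can be split naively as alternating protocol/value segments. This is only a sketch; real code should use the `multiaddr` library, which knows each protocol's arity and binary encoding.]

```python
def parse_multiaddr(ma: str):
    # naive split: assumes every protocol carries exactly one value
    parts = ma.strip("/").split("/")
    return list(zip(parts[::2], parts[1::2]))

addr = parse_multiaddr("/ip4/127.0.0.1/tcp/4000")
print(addr)  # [('ip4', '127.0.0.1'), ('tcp', '4000')]
```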
<lastdigitofpi[m]>
what do you mean by "up front"?
<vaultec81[m]>
There is support for QUIC, which runs over UDP. All it takes is network adoption, and all the connections made could be UDP.
<ShadowJonathanDi>
"up front" is that the packet is sent as *first*, that the DHT packet is *the* packet that is sent to a node, with the request inside
<ShadowJonathanDi>
i dont know much about quic, but yes
<vaultec81[m]>
I think QUIC support is being pushed as higher priority than TCP.
<ShadowJonathanDi>
again, tell that to the ISPs
<ShadowJonathanDi>
i feel like them maintaining TCP reliability is the only thing they do sometimes
<ShadowJonathanDi>
:pinkbored:
<ShadowJonathanDi>
sometimes UDP just gets plain ignored
<ShadowJonathanDi>
(i have a very low opinion of ISPs, sorry, im biased in that sense)
<swedneck>
straying into #mesh:permaweb.io, honestly do we even need ISPs?
<swedneck>
couldn't each city just maintain their own network and connect to nearby cities?
<swedneck>
with the state maintaining connections that go underwater
<vaultec81[m]>
ISPs have nothing to do with whether a protocol gets implemented in IPFS.
<swedneck>
it's kinda absurd that i need an ISP to contact a computer in the same city
<eleitl[m]1>
I've always thought that muni should own fiber as well as roads, pipes and other infrastructure.
<vaultec81[m]>
Absolutely, cities could have their own networks. But right now people aren't willing to set up these kinds of things
<eleitl[m]1>
Lighting fiber is cheap, laying ducts is not.
<vaultec81[m]>
The best way to push adoption is build a standard for communication
<ShadowJonathanDi>
ISPs are basically just the capitalism response to "people want things but not care about making them"
<ShadowJonathanDi>
its more complicated than that
<ShadowJonathanDi>
buuuut
<ShadowJonathanDi>
yeah
<vaultec81[m]>
For example yggdrasil in combination with 802.11s
<lastdigitofpi[m]>
ShadowJonathan (Discord): hypothetically (and apologies if this is a naive question), how tricky would it be to make the DHT protocol work directly over UDP? (i am asking as i might be interested in exploring this)
<ShadowJonathanDi>
ISPs handle a lot of other things, like connecting to other ISPs and backbones, and giving good service between all of them
<ShadowJonathanDi>
It's infrastructure
<ShadowJonathanDi>
Hmmmm
<ShadowJonathanDi>
That'd be sidestepping libp2p, and idk what to say about that
<ShadowJonathanDi>
In short: I don't know
<ShadowJonathanDi>
The long answer will have to wait, I'm boutta eat dinner with my family
<swedneck>
vaultec81: thing is, at least in sweden the cities already maintain their own networks
<swedneck>
that's what makes it so absurd to me that i need an ISP for entirely city-local connections
mithilarun has joined #ipfs
<vaultec81[m]>
lastdigitofpi: UDP is simply a transport built into libp2p, and the DHT will work over it automatically.
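For context, go-ipfs 0.4.x ships QUIC (UDP-based) as an experimental transport. A sketch of enabling it, with the flag name and multiaddr as documented for that era of go-ipfs; verify against your installed version:

```shell
# Enable the experimental QUIC transport (go-ipfs 0.4.18+)
ipfs config --json Experimental.QUIC true

# Listen on UDP/QUIC alongside the default TCP address
ipfs config --json Addresses.Swarm \
  '["/ip4/0.0.0.0/tcp/4001", "/ip4/0.0.0.0/udp/4001/quic"]'
```

After restarting the daemon, the node advertises both multiaddresses, and QUIC-capable peers will prefer the UDP path.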
<ShadowJonathanDi>
Yeah, but he's curious about reducing the overhead
<ShadowJonathanDi>
And I don't really know if it does
<vaultec81[m]>
What overhead are you talking about? Transport protocol overhead? Or specific DHT libp2p protocol overhead?
<lastdigitofpi[m]>
the overhead of establishing a TCP connection, specifically when a very large number of clients are trying to connect to a given node (above was a mention of someone whose router was crashing because it was getting what essentially looked like a SYN-flood attack)
<ShadowJonathanDi>
The overhead is establishing a connection, since you need to secure the connection and negotiate protocols
<ShadowJonathanDi>
So there's at least 4 roundtrips before the DHT protocol is able to communicate
Ecran has quit [Quit: Going offline, see ya! (www.adiirc.com)]
maxzor has joined #ipfs
<vaultec81[m]>
You can't reduce that overhead when making a connection. Four round trips are nothing compared to a DHT search or normal DHT activity.
nijik has quit [Quit: Ambassador 1.2.0 [Pale Moon 28.8.1/20200111144550]]
bornjre has quit [Remote host closed the connection]
<vaultec81[m]>
UDP will help with transport-protocol-specific overhead. However, there needs to be some extra overhead built into the libp2p protocol to handle packet drops and other problems that may come up when using UDP.
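A minimal illustration of the reliability layer vaultec81 describes: a hedged Python sketch (not libp2p's actual implementation) that resends a request over a lossy transport until an acknowledgement arrives. `FlakyTransport` and `send_with_retry` are invented names for the example.

```python
class FlakyTransport:
    """Simulated lossy datagram transport: drops the first `drop` sends."""
    def __init__(self, drop):
        self.drop = drop
        self.sent = 0

    def request(self, payload):
        self.sent += 1
        if self.sent <= self.drop:
            return None               # datagram lost, no ack comes back
        return b"ack:" + payload      # delivered; peer acknowledges

def send_with_retry(transport, payload, max_retries=5):
    """Resend until acknowledged, as a reliability layer over UDP would."""
    for attempt in range(max_retries):
        ack = transport.request(payload)
        if ack is not None:
            return ack, attempt + 1   # ack plus number of sends it took
    raise TimeoutError("peer unreachable after %d sends" % max_retries)
```

A real UDP reliability layer would also back off between retries and deduplicate repeated datagrams on the receiving side; this sketch only shows the retransmit loop itself.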
plexigras has joined #ipfs
joocain2 has joined #ipfs
<vaultec81[m]>
I would say if a router crashes from SYN/connection floods, it's not a good router, especially if IPFS isn't port forwarded in the first place. This goes back to the basic rules of network security and preventing DDoS.
<vaultec81[m]>
If the router is open to such issues with IPFS, it's also open to common DDoS attacks.
Jybz has joined #ipfs
joocain2 has quit [Ping timeout: 240 seconds]
<vaultec81[m]>
Sure, there are ways to reduce the number of open connections needed, but you won't be able to get rid of the overhead
<ShadowJonathanDi>
i think i also need to clarify something; when i said "star-pattern networks", i meant it in the sense that:
mithilarun has quit [Remote host closed the connection]
<ShadowJonathanDi>
only a subset of the network's nodes is visible in the DHT at a given time, because many of the nodes that fall into the bucket eligible for the DHT query are behind a NAT, or no longer on the network
<ShadowJonathanDi>
those nodes then (probably) provide relay access to other nodes, furthering the traffic
<ShadowJonathanDi>
the nodes that *are* connectable via any of their multiaddresses thus get stormed, since these nodes appear in DHT tables *everywhere*
<vaultec81[m]>
Ah, I see what you're saying about ISPs
<ShadowJonathanDi>
this problem isnt avoidable, but the ratio of inaccessible nodes to accessible ones is rising, and at an absurd rate
<ShadowJonathanDi>
its like a telephone number and robocalls: when a robocaller finds out the other side is a live person, they'll share (sell) that number so other companies can also robocall it from larger lists
<ShadowJonathanDi>
so when a node is accessible, that availability gets distributed as part of DHT queries: when nodes give the querying node a list of "closest" nodes, that available node is part of it
joocain2 has quit [Ping timeout: 240 seconds]
<vaultec81[m]>
Did the person having issues with router crash have IPFS port forwarded?
<ShadowJonathanDi>
so the network becomes a star pattern: every node suddenly wants to connect to those few accessible ones (accessible through port-forwarding, or dedicated) so they can find more nodes to query for the DHT
<ShadowJonathanDi>
(me)
joocain2 has joined #ipfs
<ShadowJonathanDi>
and none of the nodes that live on the "fringes" (the ones behind commercial routers) ever become a good part of the network: either they effectively dont exist (they get NAT-blocked), or the sudden volume of queries makes the owner/network of that node shut it down (either through direct action, or because of the router crashing, yes)
<ShadowJonathanDi>
i know this is probably going to stay a problem, and we dont live in a perfect world, but still
<ShadowJonathanDi>
every node that runs go-ipfs, by default, right now, has a lowwater of 50 connections, and a highwater of 300
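Those defaults live in go-ipfs's connection manager settings. A sketch of the relevant section of the config file (`~/.ipfs/config`), using the values quoted above; exact defaults vary between go-ipfs versions, and the `GracePeriod` shown is illustrative:

```json
{
  "Swarm": {
    "ConnMgr": {
      "Type": "basic",
      "LowWater": 50,
      "HighWater": 300,
      "GracePeriod": "20s"
    }
  }
}
```

When the peer count exceeds `HighWater`, the connection manager trims connections (sparing ones younger than `GracePeriod`) back down toward `LowWater`.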
<vaultec81[m]>
I would say the only way to make this system better is to start pushing more UDP/QUIC; that'll help the network overall. The DHT could also be improved in how many connections it uses.
joocain2 has quit [Ping timeout: 240 seconds]
<ShadowJonathanDi>
that means that at any time the node keeps at least 50 connections, 50 to *any* of those few that are available
dethos has joined #ipfs
joocain2 has joined #ipfs
<ShadowJonathanDi>
this means that suddenly the pressure on those few nodes starts to rise; its an echo chamber in regards to how a node becomes part of those "accessible" nodes: you're either out (and only have outbound connections), or you're connected and you suddenly get *flooded* with inbound connections
<ShadowJonathanDi>
since every fucking node on the planet knows your node now, and wants to connect to it for their DHT queries
<vaultec81[m]>
Other nodes don't use your node for DHT queries. They search for a very specific set of entries that belong to only your node.
<ShadowJonathanDi>
yes, but they do that by asking *other* nodes for the "closest" node
maxzor has quit [Ping timeout: 260 seconds]
<ShadowJonathanDi>
and go-ipfs takes the shotgun approach
<ShadowJonathanDi>
and queries ALL of them
<ShadowJonathanDi>
even the nodes that are the furthest away in the list of nodes it gets from other nodes
<lastdigitofpi[m]>
ShadowJonathan (Discord): does it establish connections to those nodes eagerly, or only when it needs to talk to them as part of a DHT query?
<vaultec81[m]>
QUIC will definitely help with shorter-lived connections
<lastdigitofpi[m]>
is it similar to the issues gnutella had with query flooding?
<ShadowJonathanDi>
one second
<ShadowJonathanDi>
i actually need to verify the part i was going to rant about
<ShadowJonathanDi>
i dont know what gnutella is
<ShadowJonathanDi>
can you maybe explain it while i look at the source code of kad-dht real quick?
<lastdigitofpi[m]>
it was a very old system (around 2000 or so I think) and I don't know much about it, but I believe the way it worked was you would send out a query and that query would propagate to every node in the network, i.e. there was no TTL or similar
<ShadowJonathanDi>
shame that a million nodes arent gonna update so quickly :pinkbored:
<ShadowJonathanDi>
oh no, not exactly like that
<ShadowJonathanDi>
yikes, that sounds bad
<lastdigitofpi[m]>
so the more people that used it, the more load each node would experience because everyone else's queries would pass through them
dwilliams has joined #ipfs
<vaultec81[m]>
The bit about querying all the nodes applies when the target can't initially be found; it will then query nodes that are further out
<lastdigitofpi[m]>
i realise of course DHT is more efficient, as it moves closer to the target with each step
<lastdigitofpi[m]>
vaultec81: so if i submit a query for a hash i know is *not* in the network, it will propagate everywhere?
<ShadowJonathanDi>
> i realise of course DHT is more efficient, as it moves closer to the target with each step
<ShadowJonathanDi>
thats the idea
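The convergence lastdigitofpi describes is the core of Kademlia: at each step, ask the closest known peers (by XOR distance) for peers even closer to the target, until no closer ones turn up. A toy Python sketch, where the `neighbors` map is an invented stand-in for real network queries:

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's metric: XOR the two IDs and compare as integers."""
    return a ^ b

def iterative_lookup(start: int, target: int, neighbors: dict, k: int = 2):
    """Repeatedly query the k closest known peers for closer ones.

    `neighbors` maps a node ID to the IDs that node would return when
    asked for peers close to `target` (a stand-in for dialing nodes).
    """
    known = {start}
    while True:
        closest = sorted(known, key=lambda n: xor_distance(n, target))[:k]
        learned = {p for n in closest for p in neighbors.get(n, [])} - known
        if not learned:  # no closer peers learned: the query terminates
            return min(known, key=lambda n: xor_distance(n, target))
        known |= learned
```

Each round at least halves the remaining XOR distance in a healthy network, which is where the O(log n) lookup cost comes from; a real implementation also bounds concurrency and drops unresponsive peers.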
mithilarun has joined #ipfs
<vaultec81[m]>
<lastdigitofpi[m] "vaultec81: so if i submit a quer"> You aren't submitting a query, you are running one. Your node has to go out and dial the closest node. I am not sure whether it will continue to dial every node
<ShadowJonathanDi>
yeah thats what DHT is supposed to do
<vaultec81[m]>
Eventually the query will terminate
<ShadowJonathanDi>
i was about to make my argument that go-ipfs contacts all nodes it finds, and doesnt prioritize
<ShadowJonathanDi>
but i wanna see if my theory holds ground
cheetypants has quit [Ping timeout: 268 seconds]
<ShadowJonathanDi>
what the fuck?
<ShadowJonathanDi>
yeah no
<ShadowJonathanDi>
go-ipfs contacts all nodes and doesnt prioritize
seba- has quit [Read error: Connection reset by peer]
seba- has joined #ipfs
mauz555 has joined #ipfs
Trieste has quit [Ping timeout: 258 seconds]
bsm117532 has joined #ipfs
bsm117532 has quit [Read error: Connection reset by peer]
Trieste has joined #ipfs
rendar has quit []
maxzor has joined #ipfs
Jybz has quit [Quit: Konversation terminated!]
Ecran has joined #ipfs
jamiew has joined #ipfs
mithilarun has quit [Remote host closed the connection]
seba-- has joined #ipfs
seba- has quit [Ping timeout: 240 seconds]
icaruszDiscord[4 has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
daveatQCDiscord[ has joined #ipfs
ipfs-stackbot has joined #ipfs
<aschmahmann[m]>
@shadowjonathan:matrix.org: it's not contacting all nodes (but it is contacting more nodes than it should) and it is sorting them based on XOR distance (although mixing in latency too would be nice). There are still many improvements to be desired and some deviations from kademlia to correct (I'm on mobile right now, but there's an early stage PR in that repo to fix a number of the larger issues).
<aschmahmann[m]>
Also worth noting that there is definitely interest in moving towards QUIC in the DHT, but IIUC there are still a number of DHT improvements that are higher priority.
<ShadowJonathanDi>
> it is sorting them based on XOR distance
<ShadowJonathanDi>
its not
<ShadowJonathanDi>
thats the whole point of my raised issue, i hope you've seen that one
mrinfinity is now known as exnyne
maxzor has quit [Ping timeout: 265 seconds]
dexter0 has quit [Ping timeout: 260 seconds]
mithilarun has joined #ipfs
exnyne is now known as mrinfinity
mauz555 has quit []
mithilarun has quit [Remote host closed the connection]
captain_morgan20 has quit [Ping timeout: 265 seconds]
dexter0 has joined #ipfs
aLeSD has joined #ipfs
dexter0 has quit [Ping timeout: 246 seconds]
lidel` has joined #ipfs
voker57 has quit [Remote host closed the connection]
lidel has quit [Ping timeout: 240 seconds]
lidel` is now known as lidel
voker57 has joined #ipfs
is_null has joined #ipfs
mithilarun has joined #ipfs
riemann has quit [Quit: Ping timeout (120 seconds)]
AbramAdelmo_ has joined #ipfs
riemann has joined #ipfs
Guest41 has joined #ipfs
Guest41 has quit [Excess Flood]
MatrixBridge has joined #ipfs
MatrixBridge has left #ipfs ["User left"]
AbramAdelmo has quit [Ping timeout: 265 seconds]
hacman has quit [Quit: Leaving]
<aschmahmann[m]>
responded to you on github
<aschmahmann[m]>
^^ ShadowJonathan (Discord)
<ShadowJonathanDi>
just saw, im gonna look into it
<ShadowJonathanDi>
oh god-
<ShadowJonathanDi>
:facepalm:
KempfCreative has quit [Ping timeout: 260 seconds]
<aschmahmann[m]>
no worries, Go's version of dealing with generics is not what I would call straightforward 😛
<ShadowJonathanDi>
closing the issue
<aschmahmann[m]>
👍
<ShadowJonathanDi>
yeah
<ShadowJonathanDi>
i mis-read it
Wimsey has quit [Remote host closed the connection]