stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.33 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Con
Steverman has quit [Ping timeout: 272 seconds]
cheet has joined #ipfs
sammacbeth has quit [Quit: Ping timeout (120 seconds)]
chiui has quit [Ping timeout: 240 seconds]
07IAAZFQO is now known as iczero
randomfromdc has joined #ipfs
sammacbeth has joined #ipfs
thomasanderson has joined #ipfs
<postables[m]>
swedneck: turn on debugging for your daemon, run the ipfs add command with debugging. What issues are you experiencing? I've had issues in the past with adding and pinning a file at the same time, but that was fixed in release 0.4.18 i believe
<postables[m]>
*edit:* ~~swedneck: turn on debugging for your daemon, run the ipfs add command with debugging. What issues are you experiencing? I've had issues in the past with adding and pinning a file at the same time, but that was fixed in release 0.4.18 i believe~~ -> swedneck:swedneck.xyz : turn on debugging for your daemon, run the ipfs add command with debugging. What issues are you experiencing? I've had issues in the past with adding and pinning a file at the same time, but that was fixed in release 0.4.18 i believe
<Swedneck>
it's just really really really painfully slow
<Swedneck>
over 10h to add 60 gigs of data
<Swedneck>
(which takes like, 10 min on my desktop)
<Swedneck>
how do i turn on debugging?
alexgr has quit [Ping timeout: 246 seconds]
<postables[m]>
`ipfs -D <your-command>` so `ipfs -D daemon` or `ipfs -D add`
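A minimal sketch of what running those with debugging can look like in practice, assuming go-ipfs 0.4.x and a placeholder path; the extra `ipfs log` subcommands are an addition here, not something quoted above.

```bash
# Terminal 1: start the daemon with debug output (-D) and keep a copy of it.
ipfs -D daemon 2>&1 | tee daemon-debug.log

# Terminal 2: run the add with debug output as well (placeholder path).
ipfs -D add -r /path/to/fdroid-mirror 2>&1 | tee add-debug.log

# Optional: raise the log level for every subsystem and follow the daemon's
# event log while the add runs.
ipfs log level all debug
ipfs log tail
```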
<postables[m]>
I take it you're not doing this test on a desktop? Where are you testing, what are the specs of the machine you're testing this on, and what are the specs of your desktop?
<Swedneck>
it's on a desktop i use as a server, it's an AMD A8-6500 APU (4) @ 3.5GHz, 20+ GB of ram, and a 2TB hard drive
<Swedneck>
do i need to let it run `ipfs add` to completion?
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 250 seconds]
Mateon3 is now known as Mateon1
skybeast has quit [Quit: Page closed]
q6AA4FD has quit [Ping timeout: 246 seconds]
randomfromdc has quit [Ping timeout: 256 seconds]
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 252 seconds]
<postables[m]>
I remember we talked about hash on read and bloom filters a while ago. Do you have hash on read enabled on the server? If so, the CPU is probably your bottleneck
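For reference, a hedged sketch of how those two settings can be inspected and changed through `ipfs config`; `Datastore.HashOnRead` and `Datastore.BloomFilterSize` are the relevant keys in the go-ipfs config, and the bloom filter size below is only an example value.

```bash
# Inspect the datastore section of the config (HashOnRead, BloomFilterSize, StorageMax, ...).
ipfs config show | grep -A6 '"Datastore"'

# Or query the keys individually.
ipfs config Datastore.HashOnRead
ipfs config Datastore.BloomFilterSize

# Turn hash-on-read off if it was enabled, then restart the daemon.
ipfs config --json Datastore.HashOnRead false

# Example: a ~1 MiB bloom filter for the blockstore; 0 disables it.
ipfs config --json Datastore.BloomFilterSize 1048576
```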
<Swedneck>
i do not
<postables[m]>
hmm 🤔 what does `htop` show for core utilization while you're running the add?
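If `htop` alone is inconclusive, per-process I/O statistics can show whether the add is CPU-bound or waiting on the disk; a sketch assuming the sysstat tools (`pidstat`, `iostat`) are installed and the daemon process is named `ipfs`.

```bash
# CPU and disk I/O of the running ipfs daemon, sampled every 5 seconds.
pidstat -u -d -p "$(pgrep -x ipfs | head -n1)" 5

# Device-level view: %util pinned near 100 on the repo's disk while the CPU
# stays moderate points at the drive rather than the processor.
iostat -dx 5
```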
<Swedneck>
fairly high, but not above 90% on any core
<postables[m]>
hmm
<postables[m]>
can you try running a pin on your server for the hash if it can reach your node running on your desktop?
<postables[m]>
*edit:* ~~can you try running a pin on your server for the hash if it can reach your node running on your desktop?~~ -> can you try running a pin on your server for the hash if it can reach your node running on your desktop? could help isolate whether it's a disk-level issue perhaps
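A sketch of that test with placeholder values: the multiaddr is whatever `ipfs id` reports for the desktop on the LAN, and `<CID>` is the root hash the desktop's add produced; neither value comes from this conversation.

```bash
# On the desktop: add the data (fast there) and note the root hash it prints.
ipfs add -r /path/to/fdroid-mirror

# On the server: make sure it is connected to the desktop node over the LAN,
# then time how long pinning the same content takes. If the pin is also slow,
# the server's disk or datastore is a more likely bottleneck than the add path.
ipfs swarm connect /ip4/192.168.1.50/tcp/4001/ipfs/<DesktopPeerID>
time ipfs pin add <CID>
```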
<Swedneck>
sure, they're on the same LAN btw
<postables[m]>
cool, that should make for some easy debugging then. What're the specs like on the desktop you aren't having the issue on?
<Swedneck>
ryzen 5 1600, 16GB ram, OS is on an ssd but the repo is on HDD (and i'm using --nocopy)
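As a side note, `--nocopy` depends on the experimental filestore being enabled; a minimal sketch of that setup, with a placeholder path.

```bash
# --nocopy keeps references to the original files instead of copying their
# blocks into the repo; it requires the experimental filestore.
ipfs config --json Experimental.FilestoreEnabled true

# Restart the daemon, then add without duplicating the data on disk.
ipfs add -r --nocopy /path/to/fdroid-mirror
```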
<Swedneck>
oh boy right, another issue
<Swedneck>
`Error: pin: open /home/ipfs/fdroid-mirror/repo/a2dp.Vol_121.apk.asc: no such file or directory`
<Swedneck>
i had tried to add the directory before using --nocopy on the server, and now i can't seem to unpin that file
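That `no such file or directory` pin error is typical of a `--nocopy` (filestore) block whose backing file has since moved or changed. A hedged sketch of how to inspect and clean that up with the built-in filestore and pin commands; the CID is a placeholder.

```bash
# List filestore-backed blocks and flag any whose underlying files are
# missing or have changed (only available with Experimental.FilestoreEnabled).
ipfs filestore verify

# Find the offending recursive pin, remove it, then garbage-collect.
ipfs pin ls --type=recursive
ipfs pin rm <BrokenRootCID>
ipfs repo gc
```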
Fabricio20 has joined #ipfs
<postables[m]>
hmm could possibly be a CPU issue. Although if this is happening repeatedly with your usage of `--nocopy`, it sounds like you're running into a bug with an experimental feature. Might be worth opening a bug report on the ipfs repo
<postables[m]>
*edit:* ~~hmm could possibly be a CPU issue. Although if this is happening repeatedly with your usage of `--nocopy`, it sounds like you're running into a bug with an experimental feature. Might be worth opening a bug report on the ipfs repo~~ -> hmm could possibly be a CPU issue but an experimental feature bug sounds more likely. Although if this is happening repeatedly with your usage of `--nocopy`, it sounds like you're running into a bug with an experimental feature. Might be worth opening a bug report on the ipfs repo
<postables[m]>
Do you have this issue when not using `--nocopy`?
<Swedneck>
well the issue was caused by using it, i think
<Swedneck>
i'm not using --nocopy anymore
mauz555 has joined #ipfs
mischat has quit [Remote host closed the connection]
<postables[m]>
strange, might want to open a bug report for the `no such file or directory` issue. If possible, I would try using a fresh repo that hasn't been used with experimental features. May also want to try `ipfs repo verify`. I would also double check that on the new node, your repo max GB limit isn't set so that adding the 60GB would reach the limit.
<postables[m]>
Are you using flatfs?
<postables[m]>
*edit:* ~~strange, might want to open a bug report for the `no such file or directory` issue. If possible, I would try using a fresh repo that hasn't been used with experimental features. May also want to try `ipfs repo verify`. I would also double check that on the new node, your repo max GB limit isn't set so that adding the 60GB would reach the limit. Are you using flatfs?~~ -> strange, might want to open a bug report for the `no such file or directory` issue. If possible, I would try using a fresh repo that hasn't been used with experimental features. May also want to try `ipfs repo verify`. I would also double check that on the new node, your repo max GB limit isn't set so that adding the 60GB would reach the limit.
<postables[m]>
Are you using flatfs for your repo?
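The checks suggested above map onto these commands; `Datastore.StorageMax` is the "repo max GB limit" being referred to, and the 200GB value is only an illustration.

```bash
# Re-hash the stored blocks and report any corruption.
ipfs repo verify

# Compare the current repo size against the configured maximum (default 10GB).
ipfs repo stat
ipfs config Datastore.StorageMax

# Raise the limit if adding ~60GB would exceed it, then restart the daemon.
ipfs config Datastore.StorageMax 200GB
```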
<Swedneck>
the server is using badger
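If it is ever unclear which backend a repo uses, the datastore spec in the config (and the marker file the repo keeps) answers it; this assumes the default repo location at `~/.ipfs`.

```bash
# The Datastore.Spec section names the backend, e.g. "badgerds" vs "flatfs".
ipfs config show | grep -A3 '"Spec"'

# The repo also records the layout it was initialised with.
cat ~/.ipfs/datastore_spec

# (A brand-new repo can be created directly on badger with: ipfs init --profile badgerds)
```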
<Swedneck>
i'm gonna try just wiping the directory i'm adding from and re-downloading the contents first
<Swedneck>
i could just have fucked up the files somehow
<Swedneck>
there were definitely too many files at least, it should've been 48 gigs lol
<Swedneck>
hmm, my server is really slow at downloading stuff as well..
<postables[m]>
re-downloading from IPFS?
<Swedneck>
no, rsync
<Swedneck>
cpu usage is around 50% for all cores still
<Swedneck>
5200 rpm drives aren't that much slower than 7200 rpm ones, right?
q6AA4FD has joined #ipfs
erratic has quit [Excess Flood]
}ls{ has quit [Quit: real life interrupt]
clemo has quit [Ping timeout: 250 seconds]
<postables[m]>
for this kind of stuff, I believe they are
<Swedneck>
hmm
<Swedneck>
think a hybrid drive would be worth it for this?
<postables[m]>
somewhat, you should be fine with just a 7.2K RPM one
<postables[m]>
you can use 10KRPM if you need a little more speed without breaking the bank
<Swedneck>
well 1TB hybrid is 60 bucks
<Swedneck>
and no 10k available :(
<postables[m]>
if 1TB is 7.2K rpm u should be fine
mauz555 has quit []
<Swedneck>
i'm actually not sure it's a 5200 rpm drive, but i guess there's no other explanation?
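Rather than guessing, the drive can report its own rotation rate and raw throughput; a sketch assuming the repo lives on `/dev/sda` (adjust the device) and that smartmontools and hdparm are installed.

```bash
# ROTA = 1 means a rotational disk (HDD), 0 means SSD.
lsblk -d -o NAME,ROTA,SIZE,MODEL

# Nominal rotation rate straight from the drive's SMART identity page.
sudo smartctl -i /dev/sda | grep -i 'rotation'

# Rough sequential read throughput of the raw device.
sudo hdparm -t /dev/sda
```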
thomasanderson has quit [Remote host closed the connection]
toxync01 has quit [Ping timeout: 245 seconds]
toxync01 has joined #ipfs
kapil____ has joined #ipfs
brewski[m] is now known as brewski0244[m]
user_51 has quit [Ping timeout: 272 seconds]
user_51 has joined #ipfs
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 244 seconds]
<DarkDrgn2k[m]>
HDDs (not SSDs) outperform on sequential writes
<DarkDrgn2k[m]>
so if you are writing one large file, a 7.2k rpm drive will be quite a bit faster than the 5.2k one
<DarkDrgn2k[m]>
(they will even outperform SSDs usually)
<DarkDrgn2k[m]>
if it's random read/write i dunno how much different it would be... i've only used those for long-term storage type setups
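Since an F-Droid mirror is many small apks rather than one big file, the sequential/random distinction is the relevant one here. A rough, hedged way to compare the two on the data disk; `/mnt/data` is a placeholder for a scratch directory on that disk, and the `fio` step only applies if fio happens to be installed.

```bash
# Sequential write: one large streaming file, bypassing the page cache.
dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=2048 oflag=direct
rm /mnt/data/ddtest

# Random 4k writes, which is closer to adding many small files plus the
# datastore's own bookkeeping.
fio --name=randw --directory=/mnt/data --rw=randwrite --bs=4k \
    --size=1G --numjobs=1 --direct=1 --group_reporting
```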
jpf137 has quit [Ping timeout: 240 seconds]
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 272 seconds]
<Swedneck>
Well it's mostly a bunch of apks
purisame has quit [Ping timeout: 244 seconds]
mauz555 has joined #ipfs
thomasanderson has joined #ipfs
zzach has quit [Ping timeout: 246 seconds]
zzach has joined #ipfs
lassulus_ has joined #ipfs
lassulus has quit [Ping timeout: 244 seconds]
lassulus_ is now known as lassulus
thomasanderson has quit [Remote host closed the connection]
dimitarvp has quit [Quit: Bye]
spinza has quit [Quit: Coyote finally caught up with me...]
mauz555 has quit [Remote host closed the connection]
spinza has joined #ipfs
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 246 seconds]
toxync01 has quit [Ping timeout: 245 seconds]
toxync01 has joined #ipfs
mauz555 has joined #ipfs
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 240 seconds]
thomasanderson has joined #ipfs
mauz555 has quit [Ping timeout: 244 seconds]
thomasan_ has joined #ipfs
thomasanderson has quit [Ping timeout: 268 seconds]
e0f has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
thomasanderson has joined #ipfs
mauz555 has joined #ipfs
thomasanderson has quit [Remote host closed the connection]
BeerHall has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
_whitelogger____ has joined #ipfs
_whitelogger has joined #ipfs
_whitelogger_ has joined #ipfs
_whitelogger__ has joined #ipfs
_whitelogger___ has joined #ipfs
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 272 seconds]
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
fireglow has left #ipfs ["puf"]
James[m]5 has joined #ipfs
mauz555 has quit []
vyzo has quit [Quit: Leaving.]
vyzo has joined #ipfs
_whitelogger has joined #ipfs
kapil____ has joined #ipfs
aarshkshah1992 has joined #ipfs
<xialvjun[m]>
Is there any documentation about the IPFS gateway API?
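The question goes unanswered in this log. For what it's worth, the gateway is plain HTTP over content paths on the address in `Addresses.Gateway` (127.0.0.1:8080 by default); the CIDs and names below are placeholders.

```bash
# Fetch a file or directory listing by CID through the local gateway.
curl "http://127.0.0.1:8080/ipfs/<CID>"

# A path inside an added directory.
curl "http://127.0.0.1:8080/ipfs/<CID>/some/file.apk"

# Resolve an IPNS name or DNSLink first, then serve the content.
curl "http://127.0.0.1:8080/ipns/<peer-id-or-domain>/"
```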
James[m]5 has left #ipfs ["User left"]
aarshkshah1992 has quit [Remote host closed the connection]