There are tunnel protocols like 6to4, 6RD and so on that allow you to get an IPv6 connection tunneled to you. Various routers support them.
Another option is to ask your ISP whether they will supply an IPv6 subnet to you.
Yep. It also claimed "it affects all GNU/Linux" while it really only affects CUPS and so on.
Full disclosure alone is a shit thing to do, and that's before you even get to the part where it was originally intended as a responsible disclosure.
I mean, "Crypto AG" was a thing, so it's not that unrealistic.
But that Proton is CIA is not that plausible imho. Not impossible, though.
Qualcomm worked closely with Microsoft and the vendors before the launch to create those devices.
Linux device vendors probably did not get the same treatment, so give it time. Also, why not buy a Windows laptop and put Linux on it?
You can disable the web updater in the config, which is the default when deploying via Docker. The only time I had a mismatch was when I migrated from a native Debian installation to a Docker one and fucked up some permissions, and that was while tinkering during the migration. It has been solid for me ever since.
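For reference, this is roughly what the relevant setting looks like in Nextcloud's `config/config.php` (a sketch, assuming a standard install; the official Docker images set this for you):

```php
<?php
// config/config.php (fragment)
// Disables the built-in web updater so upgrades only happen
// through your own deployment process (e.g. pulling a new image).
$CONFIG = array (
  // ... existing settings ...
  'upgrade.disable-web' => true,
);
```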
Again, there is no official Nextcloud auto-updater. OP chose to use an auto-updater, which bricked OP's setup (a plugin was disabled).
Docker is kind of a giant mess in my experience. The trick is to have backup plans in place to recover your data when it fails.
That's the trick for any production service, especially when you do an update.
They’re releasing a new major version every two months or so and dropping old ones from support rapidly; pinning with a tag means that in 12 months the install would be exploitable.
The lifecycle can be found with a single online search: https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule
Releases are maintained for roughly a year.
Set yourself a reminder if you would otherwise forget.
What are you talking about? If you do not pull the newest image manually (or via something like Watchtower), it will not update by itself.
I have never seen an auto-update feature in Nextcloud itself; can you please link to it?
The Docker image automatically updated the install to Nextcloud 30, but the forms app requires Nextcloud 29 or lower.
Lol. Do not blame others for your incompetence. If you have automatic updates enabled, then it is your fault when they break things. Just pin the major version with a tag like nextcloud:29 or similar. Upgrading major versions automatically in production is a terrible decision.
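For illustration, a minimal sketch of what pinning looks like in a Compose file (service and volume names are made up here):

```yaml
# docker-compose.yml (fragment): pin the major version so a plain
# image pull never jumps across a major release boundary.
services:
  nextcloud:
    image: nextcloud:29   # major-version tag, not :latest
    volumes:
      - nextcloud_data:/var/www/html
volumes:
  nextcloud_data:
```

With this tag, `docker compose pull` still picks up 29.x minor and patch releases, but moving to 30 stays a deliberate step you take yourself.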
That brings me to what’s available. I almost pulled the trigger on a Synology DS423+. It looks reasonably powerful; I can put in 4 SATA SSDs and 2 M.2 drives… or so I thought. It turned out it’s not possible to use the M.2 slots as storage with anything but Synology’s own overpriced drives, which aren’t even available in my country.
You can use a script to make them available. Still a pain.
Since you only need 2 TB, why even bother with the M.2 slots?
Why do you think you need M.2 in the first place? I guess you are hung up on “SATA bad because M.2 new” (btw, M.2 is only the connector, not the interface; there are SATA M.2 drives as well).
SATA can handle 6 Gbps. That’s six times more than most home network connections can even handle. Since you have not once mentioned how many Ethernet ports the systems have or how fast they are, I figure you only have a 1 Gbps LAN.
Yes, NVMe SSDs are somewhat cheaper these days, but not by enough that I would bother. We are only talking about 2 × 2 TB.
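To put those numbers side by side, a rough back-of-the-envelope sketch (both figures are raw line rates; encoding and protocol overhead are ignored):

```python
# Assumed raw line rates: SATA III at 6 Gbps, a typical home
# gigabit LAN at 1 Gbps. Overhead on both sides is ignored.
SATA_GBPS = 6.0
LAN_GBPS = 1.0

ratio = SATA_GBPS / LAN_GBPS
print(f"SATA III carries {ratio:.0f}x the raw bandwidth of a 1 Gbps link")
```

Even a single SATA SSD can saturate a gigabit link several times over, so the drive interface is not the bottleneck in that setup.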
Thanks, my bad. OP was talking about Ethernet in some of his comments, so I somehow thought it was about a USB-connected NIC.
I agree; all this attention-grabbing sounds to me as if this is actually not a big deal. But we will see, I guess.
Yes, it has better defenses against timing attacks. The fact alone that multiple packets are bundled together makes it harder to identify the route a single packet took.
Also, it seems that I2P is more vulnerable to deanonymization when traffic leaves the hidden network. I think the official I2P FAQ has some info about that, but I have not read up on it myself.
Not only huge files. At the end of the article the author goes on about changing the load or manipulating the timing of the traffic.
For both, you need to be part of the network, and (to some degree) the traffic you want to trace needs to go through a node you control, if I understand correctly. As the network grows, this becomes more difficult.
Garlic routing[1] is a variant of onion routing that encrypts multiple messages together to make it more difficult[2] for attackers to perform traffic analysis and to increase the speed of data transfer.[3]
First sentence. Check up the linked article as source.
Yes, sorry, I worded it incorrectly. You can try to make timing attacks harder, but they are still possible.
Nope, just a summary that this is just old news. There is nothing new in the article.
Nope, I2P is still vulnerable to timing attacks. https://en.m.wikipedia.org/wiki/Garlic_routing
There are enough private trackers that do not require using a VPN.