
  • For someone to work it out, they would have to be targeting you specifically. I would imagine that is not as common as, e.g., using a database of leaked passwords to automatically try as many username-password combinations as possible. I don’t think it’s a great pattern either, but it’s probably better than what most people would do to get easy-to-remember passwords. If you string it together with other patterns that are easy for you to memorize, you could get a password that is decently safe overall.

    Don’t complicate it. Use a password manager. I know none of my passwords and that’s how it should be.

    A password manager isn’t really any less complicated; you’ve just outsourced the complexity to someone else. How have you actually vetted your password manager, and what’s your backup plan for when they fuck up?



  • I have my own backup of the git repo and I downloaded this to compare and make sure it’s not some modified (potentially malicious) copy. The most recent commit on my copy of master was dc94882c9062ab88d3d5de35dcb8731111baaea2 (4 commits behind OP’s copy). I can verify:

    • that the history up to that commit is identical in both copies
    • after that commit, OP’s copy only has changes to translation files which are functionally insignificant

    So this does look to be a legitimate copy of the source code as it appeared on GitHub!

    Clarifications:

    • This was just a random check; I do not have any reason to be suspicious of OP personally
    • I did not check branches other than master (yet?)
    • I did not (and cannot) check the validity of anything beyond the git repo
    • You don’t have a reason to trust me more than you trust OP… It would be nice if more people independently checked and verified against their own copies (a rough sketch of that kind of comparison is below).

    I will be seeding this for the foreseeable future.
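
    In case anyone wants to run a similar check against their own copy, here is a rough Python sketch of the kind of comparison I mean. The repo paths are placeholders for your own local clones; only the commit hash is the real one mentioned above.

    import subprocess

    MY_REPO = "/path/to/my-backup"        # placeholder: your trusted clone
    OTHER_REPO = "/path/to/downloaded"    # placeholder: the copy being checked
    COMMON_COMMIT = "dc94882c9062ab88d3d5de35dcb8731111baaea2"

    def git(repo, *args):
        # Run a git command inside the given repo and return its stdout.
        return subprocess.run(
            ["git", "-C", repo, *args],
            capture_output=True, text=True, check=True,
        ).stdout

    # 1. The history up to the common commit should be identical in both
    #    copies: same commits, in the same order, with the same hashes.
    same_history = (git(MY_REPO, "rev-list", COMMON_COMMIT)
                    == git(OTHER_REPO, "rev-list", COMMON_COMMIT))
    print("history identical up to the common commit:", same_history)

    # 2. List what changed after that commit in the downloaded copy, so you
    #    can eyeball whether it is only translation files and the like.
    print(git(OTHER_REPO, "diff", "--name-only", COMMON_COMMIT, "master"))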


  • By the time you’re ready to buy a new card, Nvidia might be working well under Wayland. They’ve already made significant changes in the past couple of years, like implementing GBM and hardware-accelerated XWayland. To my understanding, this MR will also fix some of the remaining issues. I don’t know how much more work needs to be done after that, but just the fact that they are cooperating with the free software ecosystem is a good sign.

    Perhaps more importantly, the free nouveau driver can now experimentally reclock Nvidia GPUs from the 2000 series and newer. With this breakthrough it is possible that nouveau + NVK will be able to compete with the proprietary driver in the near future. If/when we have a well-supported free driver, we will probably have proper Wayland support as well.

    I’m not really in a hurry to switch to Nvidia. I’ve been quite happy with my AMD cards so far. But it’s definitely a good thing to have the option to buy from any vendor.


  • Clarification: In my previous comment I meant that the implementation was antiquated, which is why it was causing many problems.

    Although I do think that desktop icons in general are outdated, because they’re designed around a desktop metaphor that is itself outdated. Our use of computers has changed vastly over time and the original metaphors are irrelevant to today’s newcomers. Yet most desktop environments are still replicating the same 30-year-old ideas. It’s because we’re used to them (which I understand is a valid reason), not because they are necessarily the most pleasant or the most efficient.







  • I would perhaps have liked a more distribution-agnostic method of running NVMe-TCP in a way that would not require the OS to be booted.

    From the pull request:

    This all requires that the target mode stuff is included in the initrd of course. And the system will then stay in the initrd forever.

    I think that’s as minimal a boot target as you can reasonably get, or in other words you’re as far away from booting the OS as you can get.

    So now the question is whether this uses any systemd-specific interfaces beyond the .service and .target files. If not, it should not take much effort to create a wrapper init script for the executable and run it on non-systemd distros.
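
    To illustrate what I mean by a wrapper (purely hypothetical: the binary path, and what the unit files actually do, are assumptions on my part rather than details from the PR), such a script basically just has to start and stop the executable that the .service file would have exec’d:

    import os
    import signal
    import subprocess
    import sys

    # Placeholder path; the real executable name is whatever the PR ships.
    DAEMON = "/usr/lib/example/nvme-tcp-target"
    PIDFILE = "/run/nvme-tcp-target.pid"

    def start():
        # Roughly what ExecStart= would do: launch the daemon, remember its PID.
        proc = subprocess.Popen([DAEMON])
        with open(PIDFILE, "w") as f:
            f.write(str(proc.pid))

    def stop():
        # Roughly what stopping the unit would do: ask the daemon to exit.
        with open(PIDFILE) as f:
            os.kill(int(f.read().strip()), signal.SIGTERM)
        os.remove(PIDFILE)

    if __name__ == "__main__":
        {"start": start, "stop": stop}[sys.argv[1]]()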



  • I hate partitions. Moving and resizing partitions is not fun if you don’t correctly predict exactly how much space you will need. If you really want the modularity, use btrfs subvolumes instead. IMPORTANT: while it is definitely feasible, the ability to retain subvolumes might depend on the distro installer! Check before you commit to this approach!

    Also, consider using LVM or multi-device btrfs to make the drives act as one filesystem. This means that you will never have to worry about where to place your files to balance the load, but it might make removing or replacing a drive harder in the future.
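
    A rough sketch of what both of those suggestions could look like with btrfs (device names, mount point and subvolume names are placeholders, and this is destructive if pointed at real devices, so treat it as illustration only):

    import subprocess

    def run(cmd):
        # Print and execute one command; stop on the first failure.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # One filesystem pooled across two drives: data is not duplicated
    # ("single"), metadata is mirrored ("raid1") so the filesystem's
    # structure survives a bad sector on one drive.
    run(["mkfs.btrfs", "-d", "single", "-m", "raid1", "/dev/sdX", "/dev/sdY"])
    run(["mount", "/dev/sdX", "/mnt"])

    # Subvolumes instead of partitions: they all share the pooled space,
    # so there is no size to predict up front, yet they can still be
    # mounted, snapshotted and replaced independently.
    for name in ("@", "@home", "@var"):
        run(["btrfs", "subvolume", "create", f"/mnt/{name}"])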


  • Thanks for the explanation. I don’t really know how flash storage works. The fundamental idea of the problem I described would still apply, though, as long as the input block size for dd spans more than one page of the underlying storage.

    For example, say that exactly three pages fit in a block. If dd attempts to read pages A, B and C (ABC) and fails to read B, you would want the corresponding part zeroed in the output to preserve the offsets of all the other pages (A0C). But instead dd reads whatever it can for the entire block, then pads the rest of the block with zeroes, effectively moving C forward (AC0). So essentially you magnify errors.
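
    A toy model of that, just to make the offset problem concrete (it only simulates the padding behaviour described above, not dd itself):

    BLOCK = ["A", "B", "C"]    # pages that are physically in the block
    UNREADABLE = {"B"}         # pages the drive fails to return

    # What you would want: zero the failed page in place, so every other
    # page keeps its original offset.
    wanted = ["0" if p in UNREADABLE else p for p in BLOCK]

    # What is described above: keep whatever was read, then pad the end of
    # the block with zeroes up to the block size.
    read_ok = [p for p in BLOCK if p not in UNREADABLE]
    padded = read_ok + ["0"] * (len(BLOCK) - len(read_ok))

    print("on disk:", "".join(BLOCK))    # ABC
    print("wanted: ", "".join(wanted))   # A0C
    print("padded: ", "".join(padded))   # AC0 -> C has shifted forward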


  • Thanks for the input, guys. I consider my issue resolved.

    As for the specific question I had: dd can fill the blocks that failed to read with zeroes using conv=noerror,sync. However, this puts the zeroes at the end of the block and not over the exact bit/byte that failed to read, meaning that a read error will invalidate the rest of the block.

    But the consensus across the sources I searched seems to be to use ddrescue instead of dd.


  • I have already done an rsync copy. I noticed that some files failed to transfer and I thought that maybe the drive is failing. Wanting to debug and possibly rescue some more data (e.g. parts of big files that failed to transfer completely) without messing with the original copy, I tried dd, and that’s how we got here.

    Also, this was a Windows system that was used daily by a family member, and it has a lot of installed background/tray services with saved logins. I imagine I could figure out everything there is to keep in an rsync clone, but it might be easier to have an image that I can try to mount in a VM and inspect “internally”.

    So, strictly speaking, I don’t need the clone, but it would be nice to have. Plus, I would like to know the answer for the future as well.





  • Personally I don’t care so much about the things that Linux does better, but rather the abusive things it doesn’t do. No ads, surveillance, forced updates, etc. And it’s not that Linux just happens to not do that stuff; it’s that the decentralized nature of free software acts as a preventative measure against those malicious practices. On the other hand, your best interests always conflict with those of a multi-billion-dollar company, practically guaranteeing that the software doesn’t behave in your interest. So Windows is as unlikely to become better in this regard as Linux is to become worse.

    Also the ability to build things from the ground up. If you want to customize Windows, you’re always trying to replace or override or remove stuff. Good luck figuring out whether you have left something running in the background, adding overhead at best and conflicting with what you actually want to use at worst. This isn’t just some hypothetical: I’ve had Windows make an HDD-era PC completely unusable because a background telemetry process would keep the C: drive at 100% usage. It was a nightmarish experience to debug and fix, because even opening the task manager wouldn’t work most of the time.

    Having gotten the important stuff out of the way, I will add that even for stuff that you technically can do on both platforms, it is worth considering if they are equally likely to foster thriving communities. Sure I can replace the windows shell, but am I really given options of the same quality and longevity as the most popular linux shells? When a proprietary windows component takes an ugly turn is it as likely that someone will develop an alternative if it means they have to build it from the ground up, compared to the linux world where you would start by forking an existing project, eg how people who didn’t like gnome 3 forked gnome 2? The situation is nuanced and answers like “there exists a way to do X on Y” or “it is technically possible for someone to solve this” don’t fully cover it.