that makes two of us puking…

    • jsveiga@sh.itjust.works · 1 year ago

      There is no single reason; OSs are very complex. I’ll try to point to some possible reasons, some of which are admittedly fuzzy personal opinions.

      Note that in terms of stability and safety I don’t think MacOS is as bad as Windows. MacOS is a Unix OS; Linux cloned and inherited a lot from Unix, and is considered a “Unix-like” OS.

      Roughly speaking, Unix evolved on servers, having to take care of multi-user, multi-threaded juggling. Windows was born as a single-user, single-task desktop OS, and evolved into a terrible server OS (from a Unix point of view), trying to keep the same “look and feel”.

      Availability:

      Last time I installed Visual Studio on a Windows SERVER, it asked me to reboot to finalize the installation. That was a production server (I needed VS for a quick debug), with dozens of users and a database.

      It also happened to me that upon connecting to a Windows server, it decided to install an audio driver (for a local device it would use through RDP), and… asked for a reboot.

      How is it possible that MS considers it OK to need to reboot to install an IDE or an audio driver??

      Apart from those, many simple Windows configuration changes will ask for a reboot. That’s OK if you’re running an Xbox. Not OK on a server.

      With Linux, only a kernel change requires a reboot - and you can even avoid that, with livepatch.
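
      To illustrate, here is a rough sketch using Ubuntu’s Canonical Livepatch client (the token is a placeholder you’d get from your Ubuntu account; other distros have kpatch/kGraft equivalents):

          sudo snap install canonical-livepatch         # install the livepatch client
          sudo canonical-livepatch enable <YOUR-TOKEN>   # placeholder token, not a real value
          canonical-livepatch status --verbose           # kernel CVE fixes applied live, no reboot
          uname -r                                       # same running kernel before and after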

      If a company doesn’t take its OS seriously enough to solve these simple issues (or doesn’t have the competence to do it), how reliable can that OS be?

      Stability:

      Here I think the main reason is complexity. Yes, no kidding. Linux may seem harder and less user friendly for end users, but as an OS it is incomparably simpler and better thought out than Windows.

      Some examples: configurations are in text files, neatly separated by subsystem, usually in standard locations (/etc, for example), and even when they are not there, you can locate them from the software package’s file list or sources (OK, OK, user configs could be better standardized in their home dirs, but still). When a piece of software is uninstalled and purged, it takes all its system config files with it. Compare that with an ever-bloating registry on Windows, with everything piled up and left over there.
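
      A quick sketch of what I mean, using Debian/Ubuntu tooling as the example (rpm-based distros have equivalents like rpm -ql and rpm -qf):

          dpkg -L openssh-server | grep '^/etc'   # list the config files a package ships
          dpkg -S /etc/ssh/sshd_config            # find which package owns a given config file
          sudo apt purge openssh-server           # remove the package AND its system config files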

      When a dependency is not used by any other software, if you were a good boy and always used package managers, the dependency is removed automatically (or through the package manager - you can actually see which package depends on which, and which ones are still needed). Very different from the ever-growing leftovers from uninstalled programs on Windows.
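
      Again sketching with apt as the example package manager (package names here are just illustrative):

          apt-cache depends vlc              # what does this package need?
          apt-cache rdepends libavcodec58    # which packages need this library?
          apt-mark showauto | head           # packages installed only as dependencies
          sudo apt autoremove                # drop dependencies nothing uses anymore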

      Everything being open source and (with modern distros) neatly tested and packaged by a single distro’s maintainers also makes dependency and incompatibility issues much harder to run into.

      Also, because of its multi-user, multi-task roots, Unix/Linux is MUCH less prone to “bluescreening” because of one single program or driver crashing. OK, Windows has gotten better over the years, but it’s not there yet.

      Linux also makes much better use and management of resources (which lets it do the same things as Windows using less CPU and memory, and faster), which also translates into more robust stability.

      Being open source, you also have an incomparably larger number of people helping fix issues - I’m no kernel developer, but I could help one diagnose an obscure ethernet driver issue (it didn’t occur with their switches), because I had the sources and could run “git bisect”. I had an intermittent Intel driver issue solved by an Intel kernel module developer in a matter of days. When was the last time you directly interacted with an MS developer to solve an issue?
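
      For the curious, bisecting a kernel regression is basically this loop (the good/bad versions are placeholders; you rebuild, boot and test the driver at each step):

          git bisect start
          git bisect bad              # current HEAD shows the ethernet bug
          git bisect good v6.1        # placeholder: last kernel known to work
          # git checks out a commit halfway in between; build, boot, test...
          git bisect good             # ...report the result and repeat until git
          git bisect bad              # points at the first bad commit
          git bisect reset            # done, back to where you started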

      Security:

      Let’s start with the elephant in the room: yes, I agree that one of the reasons Linux is safer is that it’s less targeted by attacks. But it is also true that one of the reasons it’s less attacked is that it is harder to compromise.

      A Windows computer will rarely have just Windows. It has a lot of software the user installed, a lot of bloatware MS installed, more bloatware from the computer maker, some more installed every time you plug in or connect a new device, etc. Most of this software is closed source, and for much of it you don’t even know where it came from or why it is installed. A lot of it phones home to send your precious data. Is it just me, or does this sound much harder to keep safe than having all software come from scrutinized open source, from the same distro repository, not eager to get your data?

      Then again, the server roots vs the desktop-toy roots: Unix/Linux is more serious about “compartmentalizing” processes, users, and accesses. One very simple example of this different mindset, which has many more ramifications: Windows uses a couple of accounts as the default runner of Windows services, so if you don’t manually do something about it, you end up with multiple different services running under the same account (and thus having the same permissions to resources). The default on Linux when you install a service is that the installation creates a specific account to run the service. That account has simple, minimal, and very specific permissions to do its job.
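
      You can see that mindset directly on any Linux box; a couple of illustrative commands (paths and service names vary per distro):

          ps -eo user:20,comm | sort -u | head                    # daemons each running under their own user
          grep '^User=' /lib/systemd/system/*.service             # units that declare a dedicated service account
          awk -F: '$7 ~ /nologin|false/ {print $1}' /etc/passwd   # service accounts that cannot even log in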

      Another example of this mindset: ssh will refuse to connect using exchanged keys if your .ssh config directory doesn’t have restricted permissions (for example, if it’s readable by other users). Have you ever seen such care from an MS product? The go-to “solution” for Windows users when something permission-related doesn’t work is to give “Everyone” full permissions - because that usually “solves” the problem. Well, that won’t work for your .ssh in Linux.
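
      The permissions ssh expects look roughly like this (sshd’s StrictModes check rejects loose directory permissions on the server side, and the client refuses an “unprotected” private key; the key file name is just an example):

          chmod 700 ~/.ssh                   # directory: owner only
          chmod 600 ~/.ssh/authorized_keys   # keys and config: owner read/write only
          chmod 600 ~/.ssh/id_ed25519        # example private key; ssh refuses it if others can read it
          ls -ld ~/.ssh                      # verify: drwx------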

      No OS is absolutely the best in every way. Windows and MacOS are IMO quite superior in the UI; the integration and UI consistency between different programs, and even the “fewer choices” (no multiple distros, window managers, or even file managers, ffs) make things easier for the general user (and honestly, for non-general users too: the last time I installed and developed with the Android SDK on Linux, maaany years ago, I had to go through much more tinkering than to do it on Windows in the same era. Sometimes you just want to use a program, not exercise your Linux freedom).

      But look at the statistics for web server market share. When you’re not forced by business constraints (Exchange Server, MS SQL Server, a Windows-centric network) to use Windows, and you’ll be exposed to all kinds of attacks from the internet, Linux dominates the market. Even Microsoft is learning that: https://www.cybersecurity-insiders.com/microsoft-uses-linux-instead-of-windows-for-its-azure-sphere/

        • jsveiga@sh.itjust.works · 1 year ago

          No worries, I’m recovering from reddit abstinence followed by vlemmy dying, so there is a lot of repressed typing there.

            • jsveiga@sh.itjust.works · 1 year ago

              I never did distro hopping, so I only know a few:

              • Ancient RedHat, because the first servers I set up were RH 4.2

              • Debian, because after RH stabbed us all in the back for the first time after v8, that’s what I started using for servers

              • SUSE (SLES), because that’s what I use with HANA DB installations

              • Kubuntu, as it was the easiest to install and the least traumatic when my wife started using our desktop, coming from Windows 95, 20+ years ago

              I know there are pros and cons to any distro. Don’t waste much time distro hopping. Pick any of the mainstream ones. Install it as your “home base”, then if you want to try others, use live USB images.

              Among the mainstream ones, it’s a lot about personal preference. I like Kubuntu because I prefer deb over rpm for packages, and, coming from Windows, KDE is less “alien” to get used to.