Hi all,

I found a hobby in trying to secure my Linux server, maybe even beyond reasonable means.

Currently, my system is heavily locked down with user permissions. Every file has a group owner, and every server application has its own user. Each user will only have access to files it is explicitly added to.

My server is only accessible from LAN or VPN (though I’ve been interested in hosting publicly accessible stuff). I have TLS certs for nearly everything that can use them (albeit self-signed, which some people don’t like), and SSH is limited to passphrase-protected SSH keys.

What are some suggestions for things I can do to further improve my security? It doesn’t have to be super useful, as this is also fun for me.

Some things in mind:

  • 2 factor auth for SSH (and maybe all shell sessions if I can)
  • look into firejail, nsjail, etc.
  • look into access control lists
  • network namespace and vlan to prevent server applications from accessing the internal network when they don’t need to
  • considering containerization, but so far, I find it not worth foregoing the benefits I get of a single package manager for the entire server
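On the 2FA-for-SSH bullet, a minimal sketch using `pam_google_authenticator` (TOTP). This is one common approach, not the only one; package names vary by distro, and the config lines shown are assumptions you should check against your own `sshd_config` and PAM stack:

```shell
# Hypothetical sketch: require an SSH key *and* a TOTP code for SSH logins.
# Assumes OpenSSH plus the libpam-google-authenticator package.

# 1. Each user enrolls a TOTP secret (time-based, disallow reuse, no prompts):
google-authenticator -t -d -f

# 2. Add to /etc/pam.d/sshd (placement relative to other auth lines matters):
#      auth required pam_google_authenticator.so

# 3. In /etc/ssh/sshd_config, require both factors:
#      KbdInteractiveAuthentication yes
#      AuthenticationMethods publickey,keyboard-interactive

systemctl restart sshd
```

Keep an existing root session open while testing, so a PAM mistake doesn’t lock you out.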

Other questions:

  • Is there a way for me to be “notified” if shell access of any form is gained by someone? Or somehow block all shell access that is not 2FA’d?
  • My system currently secures files on disk, but all applications can see all process PIDs. Do I need to protect against this?
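On the PID visibility question, Linux can hide other users’ processes via the `hidepid` mount option on `/proc`. A sketch (the `proc` group name here is an assumption; pick any group you exempt):

```shell
# Sketch: hide other users' processes by remounting /proc with hidepid=2.
# Caveat: this can break session tooling (e.g. polkit/systemd-logind); the
# gid= option exempts one trusted group from the restriction.

# One-off, as root:
mount -o remount,hidepid=2 /proc

# Persistent, in /etc/fstab (assumes a 'proc' group exists for exemptions):
#   proc  /proc  proc  defaults,hidepid=2,gid=proc  0  0

# Verify: as an unprivileged user, ps should now show only your own processes.
ps -ef
```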

Threat model:

  • attacker gains shell access
  • attacker influences server application to perform unauthorized actions
  • not in my threat model: physical access
  • youmaynotknow@lemmy.ml · 3 months ago

    Maybe not 100% on topic, but I just deployed a Wazuh instance to tell me whether any of my hosts, containers and computers have vulnerabilities. I found a crapload of holes in my services, and I’m halfway through squashing all of them.

    If this is a hobby, that’s sure to keep you entertained for quite some time.

  • epyon22@programming.dev · 3 months ago

    I would reconsider Docker, because if a specific application leaks some sort of shell access or system file access, you’ll still be protected short of a container-to-host escape.

    Unrelated to security, I prefer docker because it leaves the server very clean if you remove different apps. Can also save time configuring more complex applications or applications that conflict with system libraries.

    Add fail2ban to your list of applications: it watches logs for invalid logins and adds firewall block rules after so many failed attempts.
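A minimal fail2ban jail for sshd looks roughly like this (the option names are standard, but check your distro’s defaults in `/etc/fail2ban/jail.conf` before copying; the thresholds are arbitrary examples):

```shell
# Sketch: enable the sshd jail with a few explicit limits.
cat > /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF

systemctl restart fail2ban
fail2ban-client status sshd   # shows currently banned IPs
```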

    • matcha_addict@lemy.lol (OP) · 3 months ago

      I really wish there was a system wide package manager for docker containers, which would update software in all your containers at once similar to how a typical package manager would.

      I did not completely rule out Docker, but I wonder if I can obtain most of its benefits without this major con with package management. I mean, I know it’s possible, since it’s mostly kernel features, but it would be difficult to simulate and the tooling is probably lacking (maybe nsjail can get me closer).

    • henfredemars@infosec.pub · 3 months ago

      Docker performs some syscall filtering as well, which may reduce the kernel attack surface. It can be a pain to set up services this way, but it could help frustrate an attacker moving laterally in the system.

      For example, processes in the container cannot see external processes, which I think is what interested the OP.
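You can see both effects from the command line (assumes a working Docker daemon; the image choice is arbitrary):

```shell
# PID namespace demo: inside the container, ps only sees the container's own
# processes, not the host's.
docker run --rm alpine ps aux

# Docker also applies a default seccomp profile that blocks a set of risky
# syscalls; it can be tightened further with a custom profile:
#   docker run --security-opt seccomp=/path/to/profile.json ...
```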

  • ramenu@lemmy.ml · 3 months ago

    Absolutely essential: use a firewall and set it as strictly as possible. Use MAC like SELinux or AppArmor. This is extremely overkill for a personal server, but you could also compile everything yourself with as many hardening flags as possible, and build your own kernel with as many mitigations and hardening options enabled as you can (and with features you don’t need stripped out).

  • LunchMoneyThief@links.hackliberty.org · 3 months ago

    Consider running some kind of file integrity monitoring: samhain, tiger, or tripwire, to name a few.
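As one concrete example of the workflow, here is AIDE, a similar file-integrity tool (the exact commands differ for samhain/tiger/tripwire, and the database paths follow upstream defaults, which vary by distro):

```shell
# Sketch: baseline the filesystem, then check for changes later.
aide --init                                       # build the initial database
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
aide --check                                      # later: report changed files
```

For this to mean anything, the database should live somewhere an attacker can’t quietly rewrite it (read-only media, or off-host).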

    considering containerization, but so far, I find it not worth foregoing the benefits I get of a single package manager for the entire server

    Just do MAC with either AppArmor or SELinux.

  • Pyrosis@lemmy.world · 3 months ago

    Get your firewall right then maybe add fail2ban.

    You could also consider an IDS/IPS on your primary router/firewall if this is internal. If not, you can install Suricata on a public server. Obviously, if you go with something as powerful as Suricata, you no longer need fail2ban.

    Keep a sharp eye on any users with sudo. Beyond that consider docker as others have mentioned.

    It does add to security because it allows the developers a bit more control of what packages are utilized for their applications. It creates a more predictable environment.

  • just_another_person@lemmy.world · 3 months ago

    There are entire books dating back to the ’80s that go into this, and they’re still fairly valid to this day.

    If you want to take things further at your own risk, look into how to use TPM and Secure Boot to your advantage. It’s tricky, but worth a delve.

    For network security, you’re only going to be as effective as the attack hitting you, and self-hosting is not where you want to get tested. Cloudflare is a fine and cheap solution for that. VLANs won’t save you from an on-prem attack here. Look into CrowdSec.

    Disable any wireless comms. Use your BIOS to ensure things like Bluetooth are disabled… you get the idea. Use rfkill to ensure the OS respects the disablement of wireless devices.
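The rfkill side of that is short (this only soft-blocks radios from the OS; it complements, rather than replaces, the BIOS-level disablement):

```shell
# Sketch: soft-block wireless radios and confirm the kernel honors it.
rfkill block bluetooth
rfkill block wifi
rfkill list          # each device should show "Soft blocked: yes"
```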

    At the end of the day, every single OS in existence is only as secure as the attack vectors you allow it to have. Eventually, somebody can get in. Just removing the obvious entry points is the best you can do.

      • Possibly linux@lemmy.zip · 3 months ago

        If you set it up incorrectly, an attacker can perform an attack called VLAN hopping.

        You also need to set up firewall rules to properly isolate zones.

        • Possibly linux@lemmy.zip · 3 months ago

          Only if you don’t set it up correctly. You should set which devices are allowed to use which VLANs, and then make sure client devices aren’t authorized to send or receive tagged packets.

          You then combine that with a firewall that only allows needed traffic.
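On the server side, a tagged VLAN sub-interface looks like this in iproute2 syntax (the interface name, VLAN IDs, and addresses are made up for illustration; client-facing switch ports should be untagged access ports so clients can’t inject tagged frames):

```shell
# Sketch: put the server on VLAN 30 via a tagged sub-interface of eth0.
ip link add link eth0 name eth0.30 type vlan id 30
ip addr add 192.168.30.2/24 dev eth0.30
ip link set eth0.30 up

# Then default-deny between zones on the router/firewall, e.g. with nftables:
#   nft add rule inet filter forward iifname "eth0.30" oifname "eth0.10" drop
```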

  • Justin@lemmy.jlh.name · 3 months ago

    Is there a way for me to be “notified” if shell access of any form is gained by someone?

    Falco is a very powerful tool for this.

  • ancoraunamoka@lemmy.dbzer0.com · 3 months ago

    Great that you included your threat model, but you should have specified the type of services that you host/provide.

    One thing I would look into is closing every port that is not necessary (keeping only what you need, like 80 and 443), and disabling ssh on the wider network.

    Host a wireguard endpoint in the internal network that acts like a bastion and allows you to ssh-jump to any other host and VM on the network.

    WireGuard is more secure than ssh (assuming sound crypto and hygiene for both) because you can’t probe a host from the outside and tell whether WireGuard is running or not.
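A minimal bastion config along those lines might look like this (all keys, addresses, usernames and hostnames are placeholders):

```shell
# Sketch: /etc/wireguard/wg0.conf on the bastion host.
cat <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <bastion-private-key>

[Peer]
# your laptop
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
EOF

# Bring the tunnel up, then jump through the bastion to internal hosts:
#   wg-quick up wg0
#   ssh -J admin@10.8.0.1 admin@internal-host
```

The silence property comes from WireGuard dropping packets that don’t carry a valid handshake, so a scanner gets no response at all.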

  • Possibly linux@lemmy.zip · 3 months ago

    • SELinux

    • monitoring

    • proper containers (ideally rootless)

    • separate accounts for each function and permission set. Your containers should run as a low privileged user
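The last two bullets can be combined: run each container rootless under its own dedicated account. A sketch with podman (the account name, image, and ports are placeholders; rootless podman also needs subordinate UID/GID ranges configured for the user):

```shell
# Sketch: a rootless container under a dedicated low-privilege account.
useradd --create-home --shell /usr/sbin/nologin svc-web
loginctl enable-linger svc-web     # allow its services to run without a login session

# Run the container as that user; in-container "root" maps to an unprivileged
# subordinate UID range on the host rather than real root:
sudo -u svc-web podman run -d --name web -p 8080:80 docker.io/library/nginx
```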

  • ctr1@fl0w.cc · 3 months ago

    Like others have mentioned, SELinux could be a great addition. It can be a massive pain, but it’s really effective at locking things down (if configured properly).

    However, the difficulty will depend on the distro. I use it with Gentoo, which has plenty of support/docs for it and provides policies for many packages. Although (when running strict policy types) I usually end up needing to adjust them or write my own.

    Obviously Red Hat would be another good choice, but I haven’t tried it. Fedora also has good support, but I’ve only ever used the OOTB targeted policies.

    That said, I’ve started relying on users/groups more often lately, since SELinux really gets in the way of everything.

    • matcha_addict@lemy.lol (OP) · 3 months ago

      A fellow Gentoo user in the wild! Do you have any thoughts on using containers with Gentoo? The idea of foregoing all the awesome features of portage by using containers pains me.

      What exactly does SELinux provide over users/groups?

      • ctr1@fl0w.cc · 3 months ago

        👋 right on! I actually also have used containers as a key to my security layout before, but yeah you miss out on all the benefits of portage.

        I was doing something crazy and actually running Gentoo inside each one! It was very difficult to stay up to date. But I basically had my host as barebones as possible and used libvirt VMs for everything, attempting to make a few templates that I could keep updated and base other VMs on. I was able to keep this up for about two years, then I had to relax (it was my main PC). But it was really secure, and it does work.

        The benefit of encapsulation is that you have a lot of freedom inside each container, like installing a different distro if you need to. Also, as long as they are isolated, you don’t need to worry as much about their individual security, though it’s still good practice to. I ran SELinux on the host and non-SELinux (but hardened) in the guests.

        SELinux has a lot of advantages over users/groups, but I think the latter can be just as secure if you know what you’re doing. For example with SELinux you can prevent certain applications from accessing the network, or restrict access to certain ports, etc. It’s also useful for desktop environments where a lot of GUI apps run under one user- e.g. neither my main user nor any other program can access my keepassxc directory, only the keepassxc process (and root) can (even though the application is running under my main user). You can also restrict root quite a bit, especially if you compile in the option to prevent disabling SELinux at boot (I need to recompile my kernel to disable it).
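For the network restrictions described here, SELinux exposes administrator knobs like booleans and port labels (RHEL/Fedora-style tooling shown; these boolean and type names are real, but availability depends on the policy your distro ships):

```shell
# Deny the web server domain any outbound network connections:
setsebool -P httpd_can_network_connect off

# Allow sshd to bind a nonstandard port by labeling it as an SSH port:
semanage port -a -t ssh_port_t -p tcp 2222
```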

        But again, while it is fun to learn, it is quite a pain, and I’ve relaxed the setup on my new computer to use a different user for everything (including GUI apps), which I think is secure enough for me. But this style relies on my ability to adhere to it, whereas with SELinux you can set it up so that you’re forced to.

  • funtrek@discuss.tchncs.de · 3 months ago

    Some people suggest SELinux, which is great. But if you really want to take it to the maximum, use its MLS (multi-level security) policy.

  • ikidd@lemmy.world · 3 months ago

    That sounds extremely painful to manage and prone to error if you aren’t using containers.

    • matcha_addict@lemy.lol (OP) · 3 months ago

      It does require some effort to manage, but I would argue it’s easier to keep all packages (including dependencies) up-to-date across the system, which is a huge security benefit imo.

      Once you set up the permission system, you never need to change it unless you’re changing something else.

    • ancoraunamoka@lemmy.dbzer0.com · 3 months ago

      I am not sure what you are talking about. None of the stuff OP talked about is related to containers. Also, containers complicate networking a lot, so I would avoid them at all costs and use VMs.