• 0 Posts
  • 11 Comments
Joined 11 months ago
Cake day: August 20th, 2023

  • Rescuer6394@feddit.nl to Linux@lemmy.ml · ZRAM is insane
    8 points · edited 10 months ago

    Isn’t zswap enabled by default?

    Isn’t having zram + swap on disk the same as having zswap + swap on disk? The only difference should be that zram shows up as a swap device while zswap does not.

    With only zram, you are still confined by the total RAM you have. I don’t know what the average compression ratio is, but you can gain about 1.5x your RAM at most. To get more, you need a physical swap device.

    Is there an advantage to using zram instead of zswap when you still have a physical swap device at a lower priority?

    Bonus question: what if I use all three of them? Would that just be redundant?
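
    For anyone comparing the setups, here is a minimal sketch in Python to check what is actually active and what compression ratio zram is getting. It assumes a single zram device at /dev/zram0 and the standard sysfs paths:

    ```python
    #!/usr/bin/env python3
    # Inspect zswap status, active swap devices/priorities, and the zram
    # compression ratio. Assumes one zram device at /dev/zram0.
    from pathlib import Path

    # zswap is toggled at runtime through this module parameter.
    zswap = Path("/sys/module/zswap/parameters/enabled")
    print("zswap enabled:",
          zswap.read_text().strip() if zswap.exists() else "module not present")

    # /proc/swaps lists every active swap device with its priority;
    # zram shows up here, zswap does not (it sits in front of them).
    print(Path("/proc/swaps").read_text())

    # In /sys/block/zram0/mm_stat, field 0 is bytes stored before
    # compression and field 1 is bytes after compression.
    mm_stat = Path("/sys/block/zram0/mm_stat")
    if mm_stat.exists():
        orig, compr = map(int, mm_stat.read_text().split()[:2])
        if compr:
            print(f"zram compression ratio: {orig / compr:.2f}x")
    ```

    Running it before and after putting the machine under memory pressure shows whether pages are actually landing in zram, and how close the real ratio comes to that 1.5x estimate.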





  • Available from the internet:

    • jellyfin
    • jellyseerr
    • immich
    • paperless-ngx
    • owncloud ocis
    • traefik
    • homarr

    Available only from the local network:

    • the *arr stack
    • qbittorrent
    • jackett
    • watchtower
    • apprise
    • netdata (kinda new to me, I still have to fully understand how it works)
    • portainer
    • speedtest-tracker
    • homepage

    Security:

    All the services available from the internet just go through traefik, which terminates HTTPS; I rely on the built-in authentication of each service. To add another layer of security, I have fail2ban active on all of those services.

    I have a public IP, and on my router I have opened ports 80 and 443, plus a random port each for SSH and VPN.
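
    Since only those few ports should be reachable, a quick sanity check of the router config is to probe from outside the network. A minimal sketch in Python (the hostname and port list are placeholders, not my actual setup):

    ```python
    #!/usr/bin/env python3
    # Probe a handful of ports from OUTSIDE the network to confirm only
    # the intended ones answer. Hostname and ports are placeholders.
    import socket

    HOST = "example.dyndns.org"    # hypothetical public hostname
    PORTS = [80, 443, 8096, 9091]  # intended ports + a few that must stay closed

    for port in PORTS:
        try:
            with socket.create_connection((HOST, port), timeout=3):
                print(f"{port}: open")
        except OSError:
            print(f"{port}: closed/filtered")
    ```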

    Hardware:

    Memory:
      System RAM: total: 8 GiB available: 7.73 GiB used: 4.46 GiB (57.7%)
      Report: arrays: 1 slots: 4 modules: 2 type: DDR3
    CPU:
      Info: 6-core model: AMD Phenom II X6 1090T bits: 64 type: MCP cache: L2: 3 MiB
    Graphics:
      Device-1: NVIDIA GP107 [GeForce GTX 1050 Ti] driver: nvidia v: 535.98
    

    Docker compose files:

    All the docker compose files, plus how I configured everything, are available at: https://github.com/simone-viozzi/my-server

    Bonus:

    Since I like btrfs’s ability to do snapshots, I created all the important docker volumes as btrfs subvolumes. Then I wrote a backup script that literally sends each subvolume (encrypted) to an external cloud. This does not allow incremental backups and is most likely not the best backup solution… but it works. The repo is: https://github.com/simone-viozzi/btrfs2cloud-backup
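
    The core of the idea is just a pipeline: read-only snapshot, btrfs send, encrypt, upload. A rough sketch of that pipeline in Python (the subvolume path, the rclone remote, and the gpg passphrase file are all hypothetical, not necessarily what the repo actually uses):

    ```python
    #!/usr/bin/env python3
    # Sketch: read-only snapshot -> btrfs send -> gpg -> rclone.
    # Paths, remote name, and passphrase file are placeholders; the real
    # script in the repo may differ. Needs root for the btrfs commands.
    import subprocess
    from datetime import date

    SUBVOL = "/srv/volumes/paperless"  # hypothetical subvolume
    SNAP = f"{SUBVOL}-snap-{date.today():%Y%m%d}"
    REMOTE = f"mycloud:backups/paperless-{date.today():%Y%m%d}.btrfs.gpg"

    # btrfs send requires a read-only snapshot.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, SNAP],
                   check=True)

    send = subprocess.Popen(["btrfs", "send", SNAP], stdout=subprocess.PIPE)
    gpg = subprocess.Popen(
        ["gpg", "--batch", "--symmetric", "--passphrase-file", "/root/.backup-pass"],
        stdin=send.stdout,
        stdout=subprocess.PIPE,
    )
    send.stdout.close()  # so btrfs send sees a broken pipe if gpg dies
    # rclone rcat streams stdin straight into the remote object.
    subprocess.run(["rclone", "rcat", REMOTE], stdin=gpg.stdout, check=True)
    gpg.stdout.close()
    ```

    Incremental backups would in principle be possible with btrfs send -p <previous-snapshot>, which ships only the delta, at the cost of keeping every stream in the chain around in order to restore.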

    I welcome any advice / criticism!