SayCyberOnceMore

  • 1 Post
  • 9 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • With respect, you wouldn’t install these by just doing an update, so pacman -Syu is fine.

    You would have needed to install these manually, or via a package that depended on them - both from the AUR - so you’d also have needed yay (etc) to install them.

    But - I totally agree with your point that the names look innocent enough for someone to install them over other packages.

    Always check the package details on the AUR website - if it’s new(ish) and has 0 or 1 votes, be suspicious.
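    For example (a quick sketch - assumes curl and jq are installed, and “some-package” is a placeholder name), you can query the AUR’s RPC interface to see a package’s vote count, maintainer and submission date before installing:

        # Query the AUR RPC (v5) for package metadata before installing.
        # "some-package" is a placeholder - substitute the real name.
        curl -s 'https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=some-package' \
          | jq '.results[0] | {Name, NumVotes, FirstSubmitted, Maintainer}'

    (I believe yay -Si some-package prints the vote count too.)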



  • It depends on the sync / backup software

    Syncthing uses a stored list of hashes (which is why it takes a long time for the initial scan), then it can monitor filesystem activity for changes to know what to sync.

    Rsync compares source and destination files itself - by default a quick check of size and modification time, with its rolling-checksum delta-transfer algorithm then sending only the changed parts of each file.
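    For example (standard rsync flags), the default quick check only looks at size + modification time, while -c forces a full checksum pass:

        # Default quick check: size + modification time only (fast).
        rsync -a --itemize-changes /source/ /destination/

        # Force checksums of every file (slow, but catches changes
        # that leave size and mtime untouched).
        rsync -ac --itemize-changes /source/ /destination/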

    Then, dedicated backup software does… whatever its own scheme is (full, incremental, differential…).

    Back in the day on FAT filesystems, backup tools used the archive bit in each file’s metadata, which was (IIRC) set by any write to the file and cleared during a backup. The next backup could then just back up the files with the bit still set.

    Your current strategy is ok - just doing an offline backup after a bulk update - so maybe it’s just a case of making that more robust by automating it…?

    I suspect you have quite a large archive, as photos don’t compress well and 2TB+ won’t shrink much with dedupe… so it’s mostly about long-term archival rather than highly dynamic data.

    So that +2TB… do you drop those files in amongst everything else, or do you have 2 separate locations ie, “My Photos” + “To Be Organised”?

    Maybe only back up “My Photos” once a year / quarter (for example), but fully sync “To Be Organised”… then you’ve reduced both the risk and the volume of backup data…?
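    As a rough sketch of that split (paths, user and schedule are all made up - adjust to taste), two cron entries would cover it:

        # /etc/cron.d/photo-backup - hypothetical paths and schedule.
        # Sync the volatile "To Be Organised" folder nightly...
        0 2 * * *    bob  rsync -a --delete /data/to-be-organised/ /backup/to-be-organised/
        # ...but only copy the stable "My Photos" archive quarterly.
        0 3 1 */3 *  bob  rsync -a /data/my-photos/ /backup/my-photos/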


  • The main point is that sync (like RAID) isn’t a backup. If ransomware got in and started encrypting all your files, how would you know / protect yourself…

    There’s a lot of focus on 3-2-1 backups, so offsite is good, but consider your Grandfather-Father-Son (G-F-S) rotation too - as long as this remote copy isn’t your only long-term backup option, sync might be ok for you

    So, syncthing / rsync / etc is fine… but maybe just point it to your monthly / weekly / daily backup folder(s) rather than the main files?

    You also had some other suggestions I think, like zfs / btrfs snapshots… which would give you a point-in-time copy of your files.
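    For example, a read-only btrfs snapshot is a one-liner (paths are placeholders, and /data/photos must already be a btrfs subvolume):

        # Point-in-time, read-only snapshot of the photos subvolume.
        btrfs subvolume snapshot -r /data/photos "/data/.snapshots/photos-$(date +%F)"

    Read-only snapshots can then also be shipped to the other location with btrfs send / receive.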

    Or burn the photos to DVD / Blu-ray and store them at the other location? No power requirements there…


  • I think most options have been covered here, but I’d like to consider some other bits…

    User accounts & file permissions:- if you have >1 account, note that the UserID is internally numbered (starting from 1000, so Bob=1000, Sue=1001) and your file system is probably setup using the numerical UserID… so re-creating the users in a different order would give Sue access to Bob’s files and vice versa.

    Similarly, when backing up /etc, /var, etc… check whether any applications (eg databases) need specific ownership and permission (chown / chmod) settings restored.

    Rsync, tar, etc can preserve ownership and permissions - you just need to make sure you rebuild the users in the correct order first.

    Maybe Ansible is another approach? So your disaster recovery would be:

    1. Install plain OS on new drive
    2. Get Ansible access to it (ie basic networking)
    3. Rebuild OS and install applications automatically with Ansible
    4. Restore application & home folders (again with Ansible)
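    A sketch of what running that recovery might look like (inventory and playbook names are hypothetical):

        # Step 2: confirm Ansible can reach the freshly installed box.
        ansible -i inventory.ini newbox -m ping

        # Steps 3-4: rebuild packages, config and data in one run.
        ansible-playbook -i inventory.ini rebuild.yml --limit newbox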

    When you get this working, it’s amazing to watch an entire system being rebuilt


    Wake on LAN won’t work remotely, so you’d either need VPN access at their location, or a 2nd always-on device that you can connect to and that can then send a WoL packet to your device… or get a device with IPMI which you remote into. (All non-VPN forms of remote connection are open to abuse.)

    I suspect (guess) you’re not going to be able to set up a VPN, so perhaps an always-on Pi is going to be necessary - maybe it’ll be that, with the drives set to spin down when idle?
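    Waking your device from the always-on Pi is then one command (the MAC address below is a placeholder - use the real one):

        # Run from the Pi, on the same LAN as the sleeping device.
        wakeonlan AA:BB:CC:DD:EE:FF

        # etherwake is a common alternative (needs root):
        sudo etherwake AA:BB:CC:DD:EE:FF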

    OpenMediaVault was my preferred choice until everything on it went Docker, which was getting too complex for a NAS… so I just created my own, which powers on at certain times of the day and off again when CPU / network IO is low enough.
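    A minimal sketch of that idle-shutdown check (the thresholds and the eth0 interface name are assumptions - tune them), run periodically from cron:

        #!/usr/bin/env bash
        # Hypothetical idle check: power off when load and network
        # traffic have both dropped below a threshold.
        load=$(cut -d' ' -f1 /proc/loadavg)          # 1-minute load average

        rx1=$(cat /sys/class/net/eth0/statistics/rx_bytes)
        sleep 60
        rx2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
        rx_rate=$(( (rx2 - rx1) / 60 ))              # bytes/second over 60s

        # Shut down if load < 0.1 and network is quieter than 10 KB/s.
        if awk -v l="$load" 'BEGIN { exit !(l < 0.1) }' && [ "$rx_rate" -lt 10240 ]; then
            systemctl poweroff
        fi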

    Data transfer with Syncthing is great, but I don’t really recommend sync for snapshot backups (if your files all get corrupted, it’ll happily sync those corruptions). I have enough space for a few versions of my files, so in theory I can roll back, but it’s certainly not a Grandfather, Father, Son strategy.


    I commented elsewhere here, but that encryption is just in transit, between the server and the end user (ie TLS / a VPN).

    You’re thinking about encryption at rest, on the storage.

    Immich would have to build a whole new design to be able to store all the metadata on a per-user basis… but you could run multiple Immich instances if you were hosting it for your friends - though I think we’re drifting into “why bother” territory now…