Thinking of ways to protect against ransomware. Is there a physical USB "switch" that can be programmed to rotate between n disks every month or so?

The host mounting the drive should not be aware of the switch. A rotation should appear to the host as an unplug of one disk and a plug-in of another.

Important that the rotation can't be triggered from the network, so it has to be programmed on the switch itself.

Ideas?

@kalle You trying to prevent the ransomware from deleting what's on the disk?

Have you considered burning to a DVD or Blu-ray? Remember that multi-session burns are possible too, so you don't have to use the entire disc at once.

Limited in size though: Blu-ray maxes out at 100GB (128GB if you can get the rather rare quad-layer BDXL discs).

@pete yes, trying to prevent ransomware from encrypting what's on the disk. Hopefully I'd notice an attack within a month or so, and if it happens I can use a previously connected disk to restore my data (though the copy would be a bit old).

I'd like to mimic manually rotating disks (e.g. unplug one disk and plug in another).

Blu-ray is no good, because I'd have to rotate discs manually, which is what I wanted to get away from in the first place.

Data to backup: ~1TB.

@kalle hmm, you mean the total size of the data is 1TB? Or is that the size of the incremental changes?

The easiest general-purpose solution is probably to use a Raspberry Pi or similar and export a drive with NFS or something. Then use a cron job to just do a normal backup. Or alternatively, turn off the network and use two RPis.
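
A sketch of the NFS route (the paths, client IP, and schedule are just placeholders):

```
# /etc/exports on the RPi: export the backup drive to a single client
/mnt/backupdisk 192.168.1.10(rw,sync,no_subtree_check)

# crontab on the client: plain rsync backup to the NFS mount every night
30 2 * * * rsync -a --delete /home/user/ /mnt/nfs-backup/home/
```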

@pete Total size ~1TB

Background: I use duplicity to back up several machines to one backup dir, and I want to secure the backup dir against ransomware.

Idea is to rsync the backup dir to an automatically rotating set of disks.
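
Roughly like this (a sketch; all the paths and the disk mountpoint are made up):

```
# each machine backs up into the shared backup dir with duplicity
# (incremental by default, full if no chain exists yet;
#  --no-encryption just keeps the sketch runnable without GPG setup)
duplicity --no-encryption /home/user file:///srv/backups/machine1

# then mirror the backup dir to whichever disk is currently attached
rsync -a --delete /srv/backups/ /mnt/rotating-disk/
```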

A poor man's solution would be to rotate disks manually (i.e., use my hands), but I WILL fail, because I'm sloppy af.

What do you mean by "turn off the network and use two RPIs"?

@kalle I'd check how big your incremental changes actually are. A 100GB Blu-ray disc might last long enough to be worthwhile...

My idea with the RPis is that if you have two of them, you can have a cron job completely shut off their network interfaces periodically. That'll make it rather difficult for a hacker to get at them!
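
Something like this in the RPi's crontab (a sketch; assumes the interface is eth0 and a one-hour nightly window):

```
# bring the network up for the backup window, then kill it again
0 3 * * * /usr/sbin/ip link set eth0 up
0 4 * * * /usr/sbin/ip link set eth0 down
```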

@pete Oh, that's a neat trick. Will look into it.

This makes me wonder: is everyone managing their rotation manually? There seems to be no typical go-to solution for this (at least not an affordable one).

Re Blu-ray: a typical increment is 0-5 GB. I do full backups every few weeks, so I'd have to rotate at least that often, and a full backup (~1TB) would have to be split across multiple discs. Quirky.

@kalle 0-5GB between full backups? If you can figure out how to work with sessions, Blu-ray will fix that problem.
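
With growisofs that's roughly (a sketch; /dev/sr0 and the paths are assumptions):

```
# first session: start a new multi-session disc
growisofs -Z /dev/sr0 -R -J /srv/backups/increment1

# later: append each new increment as another session on the same disc
growisofs -M /dev/sr0 -R -J /srv/backups/increment2
```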

I personally use Blu-ray for backups. I mostly don't do incremental backups, though: my critical data is small enough that I back up all my critical Qubes VMs to Blu-ray about once a week.

For bigger data I use a combination of backups to an external HD and git-annex.

@pete I suppose my problem stems from a reluctance to discriminate among my data. I definitely back up too much and also use the same level of redundancy/security for everything.

Would still be good to have automatic disk rotation. And maybe Blu-ray is a good way forward.

Thank you very much for your input.

@kalle @pete I think you want a NAS via an RPi with a copy-on-write filesystem like btrfs. Then, as long as the attacker doesn't compromise your RPi, they can't irreversibly change file contents. For incremental backups, a CoW fs keeping full history should add very little overhead.
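
One way to get that immutability is read-only snapshots after every backup run, roughly (a sketch; paths are placeholders):

```
# one-time setup: keep the backup data in a btrfs subvolume
btrfs subvolume create /mnt/backup/current

# after each backup run: take a cheap read-only snapshot
btrfs subvolume snapshot -r /mnt/backup/current /mnt/backup/snapshots/$(date +%F)
```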

@harding @kalle re: btrfs, look at the btrfs send/receive functionality. It can replicate snapshots, even to a remote machine.
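
E.g. (a sketch; the snapshot names and the remote host are placeholders):

```
# full send of a read-only snapshot to a remote btrfs filesystem
btrfs send /mnt/backup/snapshots/snap1 \
  | ssh remote btrfs receive /mnt/backup/snapshots/

# incremental: only send the delta against a parent the remote already has
btrfs send -p /mnt/backup/snapshots/snap1 /mnt/backup/snapshots/snap2 \
  | ssh remote btrfs receive /mnt/backup/snapshots/
```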

@pete @harding Yes. send/receive might come in handy.

It turns out that the number of options (and ways to fuck up) is staggering. Now I have the opposite problem: too many options 😃

@pete @harding Boss level backup achieved!

1. Our machines back up to local backup host L using duplicity: mostly incremental, with a full backup every ~2 months
2. rsync L to remote host R (R uses btrfs)
3. L sends a .snap file to R
4. A cron job on R triggers on the .snap file and creates a read-only snapshot
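
Steps 2-4 in shell, roughly (a sketch; the paths, the host name R, and the cron interval are mine):

```
# on L: mirror the duplicity dir to R, then drop the trigger file
rsync -a --delete /srv/backups/ R:/mnt/backup/current/
ssh R touch /mnt/backup/current/.snap

# on R, run from cron every few minutes: snapshot when the trigger appears
if [ -e /mnt/backup/current/.snap ]; then
    rm /mnt/backup/current/.snap
    btrfs subvolume snapshot -r /mnt/backup/current \
        /mnt/backup/snapshots/$(date +%F-%H%M)
fi
```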

Snapshot cleaning (to reclaim disk space) on R is manual at the moment, but the disk should hold at least 6 months of snapshots.
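
Cleanup is just listing and deleting old snapshots (a sketch; the path is a placeholder):

```
# on R: see what's there, then reclaim space by deleting old snapshots
btrfs subvolume list /mnt/backup
btrfs subvolume delete /mnt/backup/snapshots/2023-01-01-0300
```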

Seems to work nicely. I can even restore stuff from it.

Thank you so much for all your input.
