Raspberry Pi 4 Cloning Challenge

As regular readers may know, I swear by rpi-clone for making backups which can instantly take over in the case of a failed SSD or SD main drive on the RPi4.

But there is a little more to it. For unattended operation I’m not aware of any current technique for taking the clone offline after the cloning operation and bringing it back online only when needed – and as for using a pair of SSDs, there does not seem to be a way to guarantee which physical drive will come up as sda and which as sdb, and hence which one gets the job.

Firstly, what DOES work… further down is an extract of the code in my /etc/bash.bashrc file. Interested users are free to make use of it.

So, I can clone reliably and without issue. For SSDs I can, using a USB hub powered directly from a suitably well-powered RPI4, turn on the second SSD, clone (cloneb, assuming the SSD or SD has been used before) and turn it off again. In the case of using an SD for backup I can insert the SD, make the clone (clonem), then remove the SD. This all works very well.
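For example, a typical session with the functions from the bash.bashrc extract further down looks like this – the optional argument is simply passed through to rpi-clone as "-s <name>" (its clone-setup/hostname option, as I understand it), and "somehost" below is just a placeholder:

cloneb              # quick clone to /dev/sdb (second SSD already initialised)
ccloneb             # full clone, re-initialising /dev/sdb first
clonem              # quick clone to the SD card slot (/dev/mmcblk0)
cloneb somehost     # quick clone, passing "-s somehost" through to rpi-clone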

But I can only do this locally – my ideal would be to have a pair of SSDs on the RPI4 without a hub and somehow remotely turn the power to the second SSD on and off (one possible avenue is sketched after the code below). Ideas welcome.

Why is this important? In my case I return to the country where one of my RPI4s lives only very irregularly, and right now it effectively has no backup because I cannot get to it in order to temporarily power up the second drive, make the clone and power off – never mind actually having the second drive automatically take over in the event of a fault. Ideally, if one SSD failed, the other would take over, but it just does not seem to be that simple.

BLACK='\033[0;30m'
RED='\033[0;31m'
GREEN='\033[0;32m'
BROWN='\033[0;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
LIGHTGRAY='\033[0;37m'
DARKGRAY='\033[1;30m'
LIGHTRED='\033[1;31m'
LIGHTGREEN='\033[1;32m'
YELLOW='\033[1;33m'
LIGHTBLUE='\033[1;34m'
LIGHTPURPLE='\033[1;35m'
LIGHTCYAN='\033[1;36m'
WHITE='\033[1;37m'
NC='\033[0m'


alias stop='sudo shutdown now'
alias boot='sudo reboot'
alias partitions='cat /proc/partitions'

# Quick clone to the first USB SSD (/dev/sda); an optional argument is passed
# straight through to rpi-clone as "-s <name>"
clone () {
    printf "${LIGHTBLUE}Creating a quick clone on SDA${NC}\n"
    touch /home/pi/clone-date                # record when this clone was made
    bashCmd=(sudo rpi-clone -U sda)
    if [ -n "$1" ]; then
        bashCmd+=(-s "$1")
    fi
    "${bashCmd[@]}"
}

cclone () {
    printf "${LIGHTRED}Creating a full clone on SDA${NC}\n"
    touch /home/pi/clone-date
    bashCmd=(sudo rpi-clone -f -U sda)
    if [ -n "$1" ]; then
        bashCmd+=(-s "$1")
    fi
    "${bashCmd[@]}"
}

cloneb () {
    printf "${LIGHTBLUE}Creating a quick clone on SDB${NC}\n"
    touch /home/pi/clone-date
    bashCmd=(sudo rpi-clone -U sdb)
    if [ -n "$1" ]; then
        bashCmd+=(-s "$1")
    fi
    "${bashCmd[@]}"
}

clonem () {
    printf "${LIGHTBLUE}Creating a quick clone on MMCBLK0${NC}\n"
    touch /home/pi/clone-date
    bashCmd=(sudo rpi-clone -U mmcblk0)
    if [ -n "$1" ]; then
        bashCmd+=(-s "$1")
    fi
    "${bashCmd[@]}"
}

ccloneb () {
    printf "${LIGHTRED}Creating a full clone on SDB${NC}\n"
    touch /home/pi/clone-date
    bashCmd=(sudo rpi-clone -f -U sdb)
    if [ -n "$1" ]; then
        bashCmd+=(-s "$1")
    fi
    "${bashCmd[@]}"
}

cclonem () {
    printf "${LIGHTRED}Creating a full clone on mmcblk0${NC}\n"
    touch /home/pi/clone-date
    bashCmd=(sudo rpi-clone -f -U mmcblk0)
    if [ -n "$1" ]; then
        bashCmd+=(-s "$1")
    fi
    "${bashCmd[@]}"
}
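Back to the remote-power question above: one avenue I have not yet proven in practice is uhubctl (https://github.com/mvp/uhubctl), which can switch VBUS power on hubs that support it and reportedly on the RPI4’s own USB ports – with the big caveat that the RPI4’s port power switching is ganged, so all ports go off together, which only helps if the main drive is not on those same ports (or if an external hub with genuine per-port switching is used). A rough sketch only, assuming uhubctl is installed and that "-l 2" is the right hub location for your board – check the output of "sudo uhubctl" first:

backup_sdb () {
    sudo uhubctl -l 2 -a on            # "-l 2" is an example location - check "sudo uhubctl" output
    for i in $(seq 1 30); do           # give the drive up to ~30 seconds to enumerate
        [ -b /dev/sdb ] && break
        sleep 1
    done
    if [ -b /dev/sdb ]; then
        cloneb                         # quick clone to /dev/sdb, as defined above
    else
        printf "${LIGHTRED}Backup SSD did not appear${NC}\n"
        sudo uhubctl -l 2 -a off
        return 1
    fi
    sync                               # flush writes before cutting power
    sudo uhubctl -l 2 -a off
}

Run over SSH, that would at least cover the "power up, clone, power down" part without anyone on site.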


9 thoughts on “Raspberry Pi 4 Cloning Challenge”

  1. I did not understand what the exact use case is, but I am running about 10 RPis and I run all of them in read-only mode on SD. That has solved the issue of failing SD cards and dead RPis. Whatever needs to be writable is mounted to RAM. If persistence is required for some use cases, maybe mounting additional storage, syncing there periodically and unmounting would be the solution.
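    For anyone wanting to try the "whatever needs to be writable is mounted to RAM" part of that, a minimal sketch of one common approach – tmpfs entries in /etc/fstab for the paths that must stay writable (which paths and sizes are needed is entirely application-dependent, and recent raspi-config also offers an Overlay File System option that achieves much the same end):

    # mount the usual writable paths on tmpfs (RAM) - adjust paths/sizes to suit
    printf '%s\n' \
      'tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0' \
      'tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0' \
      'tmpfs  /var/tmp  tmpfs  defaults,noatime,size=16m  0  0' | sudo tee -a /etc/fstab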

  2. You may wish to consider duplication at the application layer. The reason I say this is that I was facing the same issue: how to establish automatic failover. In my situation the only critical applications on my HA Pi were node-red and MQTT, so I decided to put up an additional Pi as a backup. To ensure real-time failover I had three issues to navigate: 1) how to keep both node-red instances in sync, 2) how to let the two node-red instances agree on which was in charge at any given time, and 3) how to tell all of the MQTT clients that the broker IP address had changed.

    For issues 1 and 2 I found some basic code that I have modified for this purpose. I have two node-red instances running the exact same code and syncing with each other every 250ms; they sync master-control status and all state variables. For issue #3, as my house is 100% Tasmotaized, I am able, upon failover, to send MQTT messages to all of my devices telling them to change their MQTT broker IP address.

    This is a slightly different way to address hardware failover, but in my testing so far, as I develop the complete solution, it is working very well.
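    For what it’s worth, the issue-3 part of that can be a single publish to Tasmota’s group topic – assuming the default group topic "tasmotas", that the devices are still connected to a reachable broker at that moment, and purely example IP addresses:

    # tell every Tasmota device to move to the backup broker (addresses are examples)
    mosquitto_pub -h 192.168.1.10 -t cmnd/tasmotas/MqttHost -m 192.168.1.11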

  3. Apologies if this is a dumb suggestion, as I haven’t tried it out. How about running the 2nd SSD via a tasmotized XY WFUSB that you have referenced in the past? I appreciate that this will reduce backup times to USB2 speeds, but if they are done overnight that shouldn’t be an issue. You can then turn the backup SSD on/off with MQTT commands for backup and restore purposes (sketched briefly after this thread), and if you use an external MQTT server (or similar) for messaging you can remotely power up the 2nd SSD if there is a complete failure to boot on the 1st.

    1. I was going to say… not much use in a relay on the Pi which may have died 🙂 – a Sonoff 4CH then (I could see some track-breaking being needed on the Pi to get relays connected to the SD and USB3). Better, have Tasmota on the 4CH control which SD/SSD is on – it would also have to decide when to power-cycle the Pi – again, using Pi GPIO is no good if the Pi has died. Perhaps it might be possible to get inputs on the Sonoff, so as to monitor a heartbeat routine on the Pi: if it fails, turn the power to the Pi off and back on with an alternative SSD active. For cloning, toggling SSDs without power-cycling – and the Sonoff would need to be able to tell when the cloning was finished.

      1. Pete, I was thinking that with 4 RasPi GPIOs tied to the 4 buttons on the Sonoff 4CH, the RasPi (through your clone scripts, the command line, etc.) could decide which SSD is powered on. With the 4CH on its own power supply (better yet a UPS) it will remember which device to keep powered while the RasPi reboots. It might take some Tasmota configuration. Or perhaps just have the RasPi use your LAN to command the 4CH; that might be easier than GPIOs, though less reliable.
        If the RasPi is hung, and its power runs through one of the 4CH relays, you can connect to the 4CH, choose which SSD to boot from, then power-cycle the RasPi.
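    The MQTT side of the Tasmotised-switch idea in this thread is only a couple of commands – the device topic "backup-usb" is made up, and the wait-for-/dev/sdb-then-cloneb logic would be much the same as in the uhubctl sketch after the main listing above:

    mosquitto_pub -h localhost -t cmnd/backup-usb/POWER -m ON    # power the backup SSD up
    # ... wait for /dev/sdb to appear, run cloneb, sync ...
    mosquitto_pub -h localhost -t cmnd/backup-usb/POWER -m OFF   # and off again afterwards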

  4. Interesting problem, and one which needs discussion on the best method for redundancy – i.e. is an identical configuration needed, or one with a different, cheaper filesystem device (USB thumb drive instead of SSD)? Or is the expectation that the filesystem device will fail and not the rPi itself?

    I’ve used some low-power relays with Arduino and wonder if a relay couldn’t be used to periodically switch a 2nd USB/SSD device on, watch for the device to show up, rsync, and then turn it off again. Then a watchdog looks for an error condition (HD SMART status?), swaps the fstab entries and reboots to start on the backup device (a minimal detection sketch follows at the end of this thread).

    Or maybe a full rPi/SSD pair which periodically powers up, resyncs and goes back to sleep, waiting to be put into service when the primary shows errors.

    Interesting problem.

    1. I would say that the ONLY problems I’ve ever had with Raspberry Pis (I have several, and had more before I moved the older models on) have been SD/power related, and in each case the result has been SD failure; the Pi has always recovered when powered off, given a replacement SD and started again. The bad power issue can largely be gotten around, except during long power cuts beyond what the power supply can handle (such as a breaker going in a remote location – a complete novice can be called in to turn the breaker back on, but not to mess with a Pi).
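    Back on the SMART-watchdog idea above: the detection half is straightforward with smartmontools; the fstab-swap-and-reboot half is the genuinely tricky part and isn’t attempted here. A sketch, assuming the main drive is /dev/sda, smartmontools is installed, and an MQTT broker is on hand for the alert (USB-SATA bridges sometimes need "-d sat" adding to the smartctl call):

    #!/bin/bash
    # crude disk health check - run from root's crontab, e.g. every 15 minutes
    if ! smartctl -H /dev/sda | grep -q PASSED; then
        # raise an alert (broker and topic are examples); swapping fstab entries and
        # rebooting onto the backup drive is deliberately left as a manual decision
        mosquitto_pub -h localhost -t alert/rpi4/disk -m "SMART health check failed on /dev/sda"
    fi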
