Rpi5 and the Transition from The Script to Docker


After several years of compatibility concerns and linear, incremental development of my RPi-based Node-Red/MQTT/Zigbee2MQTT home control system – which started with the RPi2 and worked all the way through the RPi4 without EVER starting from scratch (hence the many articles in here) – I’ve made a transition. Here is some detail:

So, some of you know I’ve been dragged into using Docker for adding packages to the new(ish) Raspberry Pi 5 and Pi OS 64-bit Bookworm combo. The choices were: update my “the script” – because the RPi5/Bookworm combo breaks the odd part of the script – or start using Docker.

The process for me has been to migrate (pretty much successfully – the audio dongle saga to follow) from the RPi4 (32-bit Pi OS “Bullseye”) to the RPi5 (64-bit Pi OS “Bookworm”) and, just now, to the LITE version without all the desktop clutter, which is irrelevant to my home control environment. It’s been a big move but hopefully I can save others some brainfreeze. Using Docker, the “containers” can be migrated to any other host – for example an Orange Pi. The latter is covered elsewhere and I’m not personally planning to abandon the RPi. I’ve noted which sections below are specific to Antonio’s Docker setup.

So firstly – why the RPi5? As I got more ambitious with my various devices and my increasing use of Zigbee2MQTT etc., I found the RPi4 slowing down a bit, and I’d also been having issues upgrading Grafana. So I believe it is now time to say that “the script” is an immensely useful installation tool up to and including the RPi4 and 32-bit Bullseye – but it is time for me to move on.

Starting with the “Raspberry Pi Imager” on Windows, we need “Operating System” – “Raspberry Pi OS (other)” – “Raspberry Pi OS Lite (64-bit)”. Why stick with Raspberry Pi? Simple – rpi-clone. No, not the original – an updated version. My friend and I disagree a little here as he is going down the device-independent route, but now, with experience behind me as I review other devices, you can’t get away from the ease of using rpi-clone for daily backups. That simple one command (made even easier with my aliases) and a few minutes – no downtime and no interaction required – and the utterly boring task of doing regular backups becomes immensely practical. I rarely make any system changes or updates without first running rpi-clone (maybe that’s why I’ve never had to start from scratch).

Starting with my working RPi4 32-bit Bullseye installation, I have to say I was terrified at the idea of migrating all my Node-Red and Zigbee2MQTT stuff over to the new Pi and OS version – even onto a new IP address – but it’s done, and thanks to Antonio for the push and his work on Docker. I’ll try not to miss anything out as I go through this.

Getting the new RPi5 running proved a challenge. I’ve discussed this elsewhere but I’m putting it all together here. It’s command-line usage only for me, so having no screen to watch the startup didn’t help, but here it is. Using the Windows PC to create an SD with the RPi Imager, I created an image which at first would not work with an SSD plugged into the RPi (the SSD is for my rpi-clone backups). The solution involves a simple change. While flashing the SD on my PC there’s an option to customise – I put in user pi with my usual password (even though I’m using root now – because root won’t work out of the box – read on, it’s easy to sort).

After flashing 64-bit Pi OS Lite, take the SD out of the PC and put it straight back in. A small boot drive will appear in Windows on the PC (the other RPi partition won’t be seen), so in config.txt on that drive add, anywhere in the file (using maybe Notepad or Notepad++):

usb_max_current_enable=1

Why? The addition of the SSD, rather stupidly, stops the RPi from booting unless you have a 4A+ power supply. With no monitor I didn’t spot that and mistakenly blamed rpi-clone. That line resolves the issue – I have a standard 3A power supply.
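Once booted, you can confirm the supply is coping – vcgencmd ships with Pi OS, and throttled=0x0 means no under-voltage or throttling has been detected (my cls script, later, makes the same call):

vcgencmd get_throttled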

With the SD now in the RPi5 and booted up, I’m using Mobaxterm on the PC to set up and use the RPi5. Advanced IP Scanner on the PC will find the RPi5, which is hardwired (I never use wireless for a critical controller like the RPi, so I didn’t bother setting up WiFi at all).

I suggest going down this fixed-IP route because, in Docker, you can’t refer to localhost (my MQTT installation used to use localhost – see later for why the change). So, next steps: enable root, then fix the IP (below).

In Mobaxterm or similar, open an SSH session to access the RPi5 as user pi – at the IP address found by Advanced IP Scanner or similar.

# give root user a password
sudo passwd root

# change these 2 lines in /etc/ssh/sshd_config to allow root login via ssh
sudo nano /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes

# now restart ssh to apply changes without reboot
sudo systemctl restart ssh

Done – root login is now enabled. Next, fix the IP address:

sudo nmtui

In the above, edit a connection – Ethernet – “Wired connection 1” (keyboard here, not mouse) – simply change the IPv4 configuration to the fixed IP you want (in my case 192.168.1.20), then the gateway and DNS server(s) – in my case both 192.168.1.1. Done. Back at the command line, sudo reboot, and you can now log in (again an SSH session) as user root with your chosen root password, at your chosen IP address. Much quicker to do than write about.
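If you’d rather script the fixed IP than step through the nmtui menus, nmcli makes the same change – a sketch assuming my addresses and the default connection name “Wired connection 1” (check yours with nmcli con show):

nmcli con mod "Wired connection 1" ipv4.method manual \
  ipv4.addresses 192.168.1.20/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli con up "Wired connection 1"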

For my purposes I need pip3 (Python’s pip) and psutil for my clear-screen function, and neither is installed. So:

## Install pip3 - but that won't then install psutil for some reason - found another way 
apt install pip 
## This next line may or may not do something
apt --fix-broken install 
## jq below is needed by new script
apt install jq
## for my cls function (or good in general) - pip install didn't work so this... 
apt install python3-psutil 
## also have to give my cls code (clear screen which also supplies coloured useful device info) "execute" permission - more on that later - leave for now

Next – the editor. As a Windows user I grow tired of nano, and mcedit makes a fine replacement. Not essential, but I’ll include this anyway – both Antonio and I use mcedit all the time now – a great, intuitive editor. See this Midnight Commander reference. Also, Antonio’s automatic initial setup link for Midnight Commander (adds Dracula themes, sets up mcedit on F4, and arrow navigation between folder trees).

## install mc as root
apt install mc
## remember to set theme in mc (not mcedit) -  options - appearance - I chose modarin256 and had to edit my session in Mobaxterm to allow 256 colours

NOW – the all-important point: the original rpi-clone no longer works on the RPi5 with Bookworm, so – regardless of user pi or user root – I’m adding the revised version while I’m in the /root folder:

curl https://raw.githubusercontent.com/geerlingguy/rpi-clone/master/install | sudo bash
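For reference, the raw commands which my functions below wrap are simply these – -U runs unattended, and -f forces a full re-initialise of the destination:

sudo rpi-clone -U sda      ## quick incremental clone to /dev/sda
sudo rpi-clone -f -U sda   ## full clone, repartitioning the destination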

I rely heavily on rpi-clone and hence I like having aliases and functions to make life easier. So, in /etc/bash.bashrc add:

# set a fancy prompt
# original commented out along with the conditional code
# PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '

PS1="\[\033[38;5;39m\]\u\[$(tput sgr0)\]\[\033[38;5;15m\]@\[$(tput sgr0)\]\[\033[38;5;222m\]\h\[$(tput sgr0)\]\[\033[38;5;15m\]:\[$(tput sgr0)\]\[\033[38;5;83m\]\W\[$(tput sgr0)\]\[\033[38;5;15m\]:\[$(tput sgr0)\]\[\033[38;5;69m\]\A\[$(tput sgr0)\]\[\033[38;5;15m\][\[$(tput sgr0)\]\[\033[38;5;174m\]\$?\[$(tput sgr0)\]\[\033[38;5;15m\]]> \[$(tput sgr0)\]"

BLACK='\033[0;30m'
RED='\033[0;31m'
GREEN='\033[0;32m'
BROWN='\033[0;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
LIGHTGRAY='\033[0;37m'
DARKGRAY='\033[1;30m'
LIGHTRED='\033[1;31m'
LIGHTGREEN='\033[1;32m'
YELLOW='\033[1;33m'
LIGHTBLUE='\033[1;34m'
LIGHTPURPLE='\033[1;35m'
LIGHTCYAN='\033[1;36m'
WHITE='\033[1;37m'
NC='\033[0m'

alias space='df -h|grep -v udev|grep -v tmpfs|grep -v run'
alias stop='sudo shutdown now'
alias boot='sudo reboot'
alias partitions='cat /proc/partitions'
alias parts='sudo fdisk -l /dev/mmc* /dev/sd*'
alias cloned="sudo printf 'Last cloned on ' && sudo tune2fs -l /dev/sda2|grep -i write|grep -iv life|cut -d: -f 2-|xargs"

#alias cls='python /home/pi/cls.py'

#optional hostnames in 4 functions below
clone () { 
printf "${LIGHTBLUE}Creating a quick clone on SDA${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -U sda)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

cclone () { 
printf "${LIGHTRED}Creating a full clone on SDA${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -f -U sda)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

cloneb () { 
printf "${LIGHTBLUE}Creating a quick clone on SDB${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -U sdb)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

clonem () { 
printf "${LIGHTBLUE}Creating a quick clone on MMCBLK0${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -U mmcblk0)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

ccloneb () { 
printf "${LIGHTRED}Creating a full clone on SDB${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -f -U sdb)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

cclonem () { 
printf "${LIGHTRED}Creating a full clone on MMCBLK0${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -f -U mmcblk0)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

cclonec () { 
printf "${LIGHTRED}Creating a full clone on SDC${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -f -U sdc)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

clonec () { 
printf "${LIGHTBLUE}Creating a quick clone on SDC${NC}\n"
touch /home/pi/clone-date
bashCmd=(sudo rpi-clone -U sdc)
if [ -n "$1" ];  then
bashCmd+=(-s "$1")
fi
"${bashCmd[@]}"
}

update () {
printf "${LIGHTGREEN}Getting upgrades...${NC}"
sudo apt update
sudo apt upgrade
}

created () {
printf "${LIGHTGREEN}This setup was created at ${YELLOW}"
bashCmd=(date -r /home/pi/clone-date +"%H:%M on %-d/%m/%Y")
"${bashCmd[@]}"
printf "${NC}"
}
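With all of that in place, a daily backup becomes a single word at the prompt; the optional argument is passed to rpi-clone’s -s flag, which sets the hostname on the clone (as the functions above show). For example:

clone              ## quick incremental clone to /dev/sda
cclone rpi5-spare  ## full clone to /dev/sda, the clone getting hostname rpi5-spare
created            ## show when the last clone was made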

Finally, insert the code for my cls command at /usr/local/bin using mcedit… then give everyone full access to the file, including execute.

#!/usr/bin/python3
import time
import os
import psutil
import platform
import socket
from datetime import datetime

import subprocess
from math import log  # filesizeformat below uses log - missing from the original
 
byteunits = ('B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB')
def filesizeformat(value):
    exponent = int(log(value, 1024))
    return "%.1f %s" % (float(value) / pow(1024, exponent), byteunits[exponent])
 
def bytes2human(n):
    """
    >>> bytes2human(10000)
    '9K'
    >>> bytes2human(100001221)
    '95M'
    """
    symbols = ('K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y')
    prefix = {}
    for i, s in enumerate(symbols):
        prefix[s] = 1 << (i + 1) * 10
    for s in reversed(symbols):
        if n >= prefix[s]:
            value = int(float(n) / prefix[s])
            return '%s%s' % (value, s)
    return "%sB" % n
  
def cpu_usage():
    # load average, uptime
    av1, av2, av3 = os.getloadavg()
    return "%.1f %.1f %.1f" \
        % (av1, av2, av3)
 
def cpu_temperature():
    tempC = ((int(open('/sys/class/thermal/thermal_zone0/temp').read()) / 1000))
    return "%sc" \
        % (str(round(tempC,1)))
 
  
def disk_usage(dir):
    usage = psutil.disk_usage(dir)
    return " %s/%s" \
        % (bytes2human(usage.total-usage.used), bytes2human(usage.total))
  
def network(iface):
    stat = psutil.net_io_counters(pernic=True)[iface]
    return "%s: Tx%s, Rx%s" % \
           (iface, bytes2human(stat.bytes_sent), bytes2human(stat.bytes_recv))
  
    
bold = '\033[1m'
normal = '\033[0m'
red='\033[91m'
green='\033[92m'
blue='\033[94m'
default = '\033[39m'
magenta = '\033[38;5;200m'
lime = '\033[38;5;156m'
cyan = '\033[38;5;39m'
yellow = '\033[38;5;229m'

uptime = datetime.now() - datetime.fromtimestamp(psutil.boot_time())

os.system('cls' if os.name == 'nt' else 'clear')
host_name = socket.gethostname()
#host_ip = socket.gethostbyname(host_name+".local")  

def get_ip_address():
 ip_address = '';
 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 s.connect(("8.8.8.8",80))
 ip_address = s.getsockname()[0]
 s.close()
 return ip_address
 
host_ip = get_ip_address()

def ram_usage():
  mem = psutil.virtual_memory()
  s=str(round((mem.available/1000000000.0),2)) + "G/" + str(round((mem.total/1000000000),2)) + "G"
  return s
  
sb=subprocess.Popen(['vcgencmd', 'get_throttled'],stdout=subprocess.PIPE)
cmd_out=sb.communicate()
power_value=cmd_out[0].decode().split("=")[1]

print (blue + "____________________________________________________________________________\n" + normal)
print (" Platform: " + cyan + "%s %s" % (platform.system(),platform.release()) + normal + "\tStart: " + yellow + str(datetime.now().strftime('%a %b %d at %H:%M:%S')) + normal)
print (" IP Address: " + red + host_ip + normal + "\t\tUptime: " + green + "%s" % str(uptime).split('.')[0]  + normal)
print (" CPU Temperature: " + red + cpu_temperature() + normal + "\t\t\tHostname: " + red + host_name + normal)
print (" Memory Free: " + magenta + ram_usage() + normal + "\t\tDisk Free: " + lime + disk_usage('/') + normal)
bum=psutil.cpu_freq(0)
print (" Current CPU speed: " + red + "%d" % int(bum.current) + "Mhz" + normal + "\t\tmin: " + red + "%d" % int(bum.min) + "Mhz" + normal + " max: " + red + "%d" % int(bum.max) + "Mhz" + normal)
print (" Power Status: " + power_value)
print (blue + "____________________________________________________________________________\n" + normal)

Docker allows you to run multiple isolated services on a single host running the Docker daemon. See Antonio’s comments below re: the use of a .env file for setting variables in individual Docker “containers”.

We’re not driving the containers with plain Docker but through “Docker Compose”, which allows you to define all the Docker details in an easy-to-read, well-organised YAML (structured text) file, which can then be managed with a small set of standard commands – always the same ones, regardless of the underlying service they interact with. Also, learn from my mistakes and remember that a Docker “container” is utterly independent of the host machine – so special precautions have to be taken to use, for example, ports or folders on the host (RPi) – that has given me GREAT headaches.

To avoid any problems, once Docker and Docker Compose are installed, we’ll create a folder for each service we run and define the service in a “docker-compose.yml” file (that filename is the default and avoids specifying it in every command). We’re going to use Antonio’s github DockerIOT repository as the base for this setup – use that as the ultimate setup guide for Docker, Docker Compose and Antonio’s Docker installations including Node-Red, Mosquitto, Zigbee2MQTT and others:

First of all, install Docker on your system. The first (commented out) line below isn’t needed for root users – I’m assuming from now on that we are user root and based in the /root folder unless specified otherwise. Antonio’s repo covers the use of Docker on various platforms; here I’m concentrating on the Raspberry Pi (mainly because of rpi-clone and years of experience of using the RPi). That is my chosen platform.

## sudo -s -H
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
docker --version
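If you want a quick sanity check that the daemon is actually up at this point, the standard hello-world image pulls a tiny test container and prints a confirmation:

docker run --rm hello-world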

That is the basic Docker now installed on the RPi. Next we’ll install docker compose:

mkdir -p ~/.docker/cli-plugins/
curl -SL https://github.com/docker/compose/releases/download/v2.26.1/docker-compose-linux-$(uname -m) -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
docker compose version

## Add some useful aliases to your $HOME/.bashrc or .zshrc - these are from Antonio - I prefer putting these and my other aliases and functions in /etc/bash.bashrc but you could put them in /root/.bashrc
## mcedit is good for this:
alias docker-compose="docker compose"
alias dstart="docker-compose up -d"
alias dstop="docker-compose down"
alias drestart="docker-compose down; docker-compose up -d"
alias dlogs="docker-compose logs -f"
alias dupdate="docker-compose down; docker-compose pull; docker-compose up -d --force-recreate"
alias dsh="docker-compose exec \$(grep -A1 services docker-compose.yml|tail -1|cut -d: -f1|awk '{\$1=\$1};1') /bin/sh"
alias dbash="docker-compose exec \$(grep -A1 services docker-compose.yml|tail -1|cut -d: -f1|awk '{\$1=\$1};1') /bin/bash"
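The dsh and dbash aliases pull the first service name out of the docker-compose.yml in the current folder, so once the repo is cloned (below), run them from inside a service folder – for example:

cd ~/DockerIOT/nodered
dbash    ## opens a bash shell inside the nodered container - type exit to leave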

Next – on a new system, clone Antonio’s github repo – see later for a restore…

git clone https://github.com/fragolinux/DockerIOT.git $HOME/DockerIOT

In $HOME/DockerIOT (i.e. /root/DockerIOT) you’ll find a folder already set up for EACH component, and EACH has its own README, which you should take a look at for specific configurations…

In any of those folders you’ll find a “docker-compose.yml” file. In that file, everything you see BEFORE a colon “:” is on the host side: a host folder exposed TO the container (if it’s a volume), and likewise a host port mapped to the container’s native port (MQTT, for example, typically uses port 1883).

Let’s take this Node-Red container as an example:

services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    volumes:
    - ./data:/data
    ports:
    - 1880:1880
    restart: unless-stopped
    environment:
      - TZ=${TZ}
    env_file:
      - .env
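So in this example, ./data on the host (inside the nodered folder) appears as /data inside the container, and host port 1880 maps to the container’s port 1880. The TZ value comes from the .env file sitting alongside – a minimal sketch, assuming a UK timezone:

TZ=Europe/London

With that in place, starting the service and watching its logs is just:

cd $HOME/DockerIOT/nodered
dstart    ## i.e. docker compose up -d
dlogs     ## follow the logs - ctrl-C to stop watching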

I refer you again to here and the README files within for more on Docker.

After following the instructions on Antonio’s repo, I found myself with a working setup including my home control. Then, to move from desktop Pi OS to Pi OS Lite, having created a backup .tgz file of the entire DockerIOT folder structure, restoring that structure on my new Pi OS Lite was as simple as (from the /root folder):

# to CREATE a new backup...
## compress a full folder, PRESERVING permissions (Antonio's Docker structure has DockerIOT as the outermost folder;
## change the date or name of the .tgz as you require - c for compress)
## cd && tar cvzfp DockerIOT-20240414.tgz DockerIOT

# in my case to RESTORE my system with node-red etc... x for extract - I did this when moving from full Raspberry Pi OS to the LITE version
cd && tar xvzfp DockerIOT.tgz
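As the whole point of the p flag is preserving ownership, a quick check after restoring doesn’t hurt – Node-Red’s data, for instance, should belong to UID/GID 1000 in Antonio’s setup:

ls -ln DockerIOT/nodered/data | head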

A reboot is appropriate now.

So now, in /root/DockerIOT, I go into the base folder of each Docker application I want enabled and simply run:

## with Antonio's optional aliases in place, the usual "docker-compose up -d" and "docker-compose down" commands are replaced by dstart and dstop - simples!!
dstart

And like magic, my Node-Red, Mosquitto and Zigbee2MQTT are up and running – in the case of this restore, ready to go. For a first setup, always read Antonio’s README.md files in each application folder.

When I first set this up, I had to export my flows from my RPi4 Node-Red – I did them a page at a time – into Notepad++, then pasted them into the new Node-Red the usual way. Of course, references to MQTT were pointing to the wrong IP address, so I did a global search and replace in the flows.json file to fix that – so easy I could not believe it: I globally replaced 192.168.1.19 with 192.168.1.20 and that was it. For Zigbee2MQTT we had to ensure we used the same dongle, or CAREFULLY change the reference to it in the docker-compose file – Antonio’s README.md file for Zigbee2MQTT covers that.
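If you’d rather do that search and replace from the command line than in an editor, sed handles it in one shot – a sketch assuming my addresses; stop the container first and keep a backup copy:

cd /root/DockerIOT/nodered/data
cp flows.json flows.json.bak
sed -i 's/192\.168\.1\.19/192.168.1.20/g' flows.json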

After spending more time documenting this (and learning new things to document) than actually making changes, the move to RPi5, Bookworm and Docker – something I once considered a major challenge – is all done with only a few hours out of my life. And for all but completely new installations, life remains the same: rpi-clone (the updated one) is, as always, my primary clone/backup method, but I now have an up-to-date setup on a faster, ultimately better RPi.

If you’re observant and still here you may have noticed no mention of sound – that’s the one thing I’ve not yet completed. The RPi5 has no 3.5mm jack (how dumb is that?), so I’ve ordered a USB audio dongle and will tackle getting my AWS Polly sounds working when it arrives. So I have my SSD in a USB3 shielded adaptor and my Zigbee dongle at the end of a 1-metre shielded USB2 lead, leaving one USB3 and one USB2 socket for audio and a spare. For reliable Zigbee and USB3 SSD operation, don’t ignore the shielded leads.

Though I’m using an SSD for backup, the RPi5 has a socket for NVMe storage (64GB, as an example, is cheap) which would free up a USB3 socket – I’ve yet to figure out if my new fan-free heatsink/base combo is compatible with that, but it looks that way. My entire setup (thanks to the clutter-free Pi OS Lite) fits into 17GB, though I’ve yet to start graphing again – 64GB seems pretty future-proof for my purposes. As for RAM, I’m using well under 3GB of the RPi5’s 4GB (which my cls screen reports as 4.24GB total).
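To see where your own setup stands, the same two quick checks:

df -h /     ## root filesystem usage
free -h     ## memory in use versus total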

My cls Python file is entirely optional but I find it very useful. I used to store it in the /home/pi folder but now, stripped of its .py suffix, it sits in /usr/local/bin. Remember, earlier I said it should have “execute” permission added – I did that with right-click permissions in the Mobaxterm SFTP window of my SSH session, but there’s also a command-line way to change permissions. The cls file itself is listed in full earlier in this article.
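That command is chmod – for example:

chmod 755 /usr/local/bin/cls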


End result: Node-Red, MQTT and Zigbee2MQTT migrated to RPi5 and Bookworm 64-bit (Lite).


118 thoughts on “Rpi5 and the Transition from The Script to Docker”

  1. added mosquitto management center, check the updated mosquitto readme for info: https://github.com/fragolinux/DockerIOT/tree/main/mosquitto

    you’ll need the new “.env” file in there, and to add the full “mqtt-mgmt” block to your docker-compose.yml file, then restart the mosquitto container…

    In the docker-compose file you’ll find 2 images available: the official, most updated and feature-rich one (“image” refers to THIS version), which is only available for the x86 arch, and an older (but Raspberry-compatible) one – choose accordingly.

    feel free to contact the author of the arm raspberry version asking him to update his image with the latest additions; his contact information is here: https://hub.docker.com/u/dotwee

    1. I’ll check that myself just as soon as I’ve finished slowly narrowing down this new code block issue with the blog service provider… if people knew how many private test pages I’ve put up today….

  2. you can make your flows completely independent of the IP address, and so have them easily copyable between systems with no need to patch them after copying, by just adding a line like this to your HOST’s /etc/hosts file (each device should have ITS own ip address in here):

    192.168.1.39 host

    of course update with your ip… once this is done, you can just put “host” in your flows or wherever you put the ip before, INSIDE your system… so the mqtt server in nodered is just “host”, same for whatever other piece where you used it… RESTART the containers after doing this…

    i said INSIDE your system… the hosts file is read ONLY by your system, the host itself and its containers, NOT by your devices around the house, so each tasmota MUST continue to contact the host via its ip, unless you have a proper dns inside your lan…

    do NOT touch the exec ssh nodes, those were already set up yesterday with the updated ssh config file which, indeed, allows pointing to the host as just “host”…

    1. with today’s additions the DockerIOT folder will be fully portable, with no mods needed, on a completely new system AS IS… OF COURSE using a copy method which preserves the permissions, so the best will be the tar commands already discussed many times and which are in my main readme…

      BUT, this will work only once you fix your source system accordingly, if already set up (it’s a one-time job): so editing nodered flows.json and replacing every occurrence of the ip address with “host”, editing the zigbee2mqtt config file to point to “host” for the mqtt server, and adding to both services’ (nodered and z2m) docker-compose.yml files, in the “volumes” section, this line (BEWARE OF THE DAMNED DASH, MODIFIED BY WORDPRESS):

      - /etc/hosts:/etc/hosts:ro

      to allow the container to read the host’s hosts file

      on the new system, you need to edit the /etc/hosts file there, too, of course, and add the ssh authorized key to the host’s root file:

      cat /root/DockerIOT/nodered/ssh/id_nr.pub >> /root/.ssh/authorized_keys
      chmod 600 /root/.ssh/authorized_keys

  3. added new cool sw, dockge… it lets me see all my services; you can manage them from there, check their logs and access the services on their ports, all from a single web interface

      1. I’ll absolutely implement that just as soon as I finish work on my aws polly article update – just spent several hours making a short video which is already up there – will need to accompany it with some text – feel free to look in later – might need something from you about the SSH connections.

        We watch TV all our lives with little idea of the work people put into making the simplest videos. Seems the video is 15 mins long – I’d planned on it being way shorter – but the original material was well over an hour and took 3 attempts 🙂 not to mention editing, processing and upload time.

        UPDATE: Got DOCKGE running on RPI4, RPI5 and OP5+ – I have to say I think the install on RPi5 was quicker than OP5+ – Hmm…

        1. video seems good to me, a couple of things i’d address in the article, for the video it’s too late now:

          the key difference between the aws container (not polly – polly is just 1 of its hundreds of plugins) and all the other containers is that aws runs as a command: it’s called and, after it completes its job (well or badly, not important now), it exits and does not leave anything running in the background…

          all the other containers in my repo are instead put up as system services, so they actually stay and run in the background to answer user requests… this is why aws is not in my repo, and should stay in your article…

          the other thing: i’ll factor that “/root/DockerIOT/nodered”, which is used a couple of times in that flow, out into a variable, let’s say “host_path_prefix”, to clarify further that it should be added ONLY for commands running on the host (via the ssh exec node) that need the full path, instead of just the internal subpaths “/data/…” which should be used for nodes accessing stuff INSIDE the container itself…

          BOTH point to the SAME data (thanks to the volume mounting the local folder to the container one) but should be managed differently: commands running externally via ssh and addressing the HOST should use full paths INCLUDING that “prefix”, while commands addressing CONTAINER-mounted paths (and because of this using normal nodes, not exec and ssh) won’t need that prefix…

          so, mpg123 needs the full path with prefix, as it runs on the host… and aws too, even if it’s a container, as it’s run as if it were a HOST COMMAND, via that alias… an alias that CANNOT be used inside that flow, and needs to be written out FULLY, because aliases are not available in that container… that’s why you will see those LONG lines in the exec node using ssh for aws…

  4. My repo is now fully updated with ALL and JUST the info needed to bring up my setup; please refer to the main readme there (and each readme in the single services’ folders for specific info): https://github.com/fragolinux/DockerIOT

    new enhanced iotmenu.sh available (now it shows each service if it’s running and which port each of its containers expose), and added home assistant, too.

    How to update from the previous version (this should now be considered stable – I don’t plan to rework the folder structure, and I had to fix all the docker-compose files because the updated “docker compose” plugin deprecated some of the sections in them):

    – IMPORTANT: shutdown ALL the services, be SURE to do this
    – Do a backup of the original folder and move the TGZ somewhere else for safety: “cd && tar cvzfp DockerIOT-20240426.tgz DockerIOT”
    – Rename the DockerIOT folder: “cd; mv DockerIOT DockerIOT.orig”
    – Get the new one: “git clone https://github.com/fragolinux/DockerIOT”
    – Copy back all the “data” folders from the “orig” folder to the corresponding services folders in the new one, and run again the “chown” commands that some of them require (even though they should be already ok), for safety
    – Aside from the “data” folders, now copy back all the info (JUST THE INFO, ***NOT*** the full files!) you changed in the .env or docker-compose.yml or various configs in every service folder (I know, tedious, but it’s needed) – so every user and password you changed, or ip, or zigbee device, etc. IMPORTANT: DON’T copy the files back; just check whether you modified something in them, and port JUST what you changed into the NEW ones…
    – IMPORTANT: rerun the steps about installing docker compose plugin, to get the latest one: https://github.com/fragolinux/DockerIOT#install-docker-compose
    – Bring up again all the services

    Done… check the main readme and all the others, while you get into them, on this last link above.

    1. I just helped Pete migrate his original setup to the new folder structure; a couple of strange things i’ll need to take a look at, leaving notes here:
      – On rpi5, all services and their ports are doubled in my menu, while are ok on mine (in the screenshot here above)
      – Phpliteadmin throws an error about libs; maybe the armv8 image has some issues

  5. @everybody: The article was originally a reference to just Docker and my new repo and how to interact with it; now it contains a lot of unrelated stuff, but Pete has clarified what is related to Docker and what is not.

    The only supported and updated source for my Docker setup will continue to be the main readme on my repo, which i’m about to update: https://github.com/fragolinux/DockerIOT

    Not everything in here is needed for my setup, be warned…

  6. Later today i’ll push a new update: completely reworked all the readmes, getting rid of the redundant info about docker compose (that info went into the main repo readme, as it was always the same)… now each service’s readme contains just the info needed for that specific service, if any…

    And, fully reworked the menu too, now it shows directly if a service is running, with all its components exposed ports, as you can see, so easier to check where to access a given service…

    Stay tuned, i’ll test with Pete later, and probably push on my repo in the evening

    1. This menu update will need a new support tool, “jq”, a json cli processor, but i added checks in the scripts; it will just fail at start if any requirement is missing on the system, like docker, docker compose, jq and dialog

    1. got nginx working thanks Antonio.
      A couple of questions when you get time.
      Is it prudent to install some stuff outside of containers, for example PIVPN/PIhole? There seems to be a lot of confusion about dockerizing Wireshark, which I use. Also I’m still having no success in passing audio out of the headphone socket. On my RPI4 as root, raspi-config/audio gives “no internal audio devices found”, whereas as the pi user I get “69 headphones”
      PS really getting into this docker stuff

      1. Yes, you can do whatever you want on YOUR host; I try to use containers for whatever service is available as such, as it’s way easier and more portable…

        pivpn, i use Zerotier these days, it needs no ports opened, Pete uses Tailscale, very similar to Zerotier…

        Pihole, it should be available as a container, or at least Adguard exists as such for sure

          1. GL-iNet routers are great – and they listen when I make suggestions though I’ve failed to convince them to reinstate the guest portal they took out a while ago. Handy for small holiday rental businesses.

      2. It gets worse, John. The RPi5 doesn’t HAVE a headphone socket – nice of them to think about their existing users – not. And if it wasn’t for Jeff Geerling we’d not have RPi-Clone, as RPi changed something that stops the original RPi-Clone working. Had it not been for Jeff I’d have abandoned the RPi.

        I got this LEMP installation (https://linuxiac.com/how-to-set-up-lemp-stack-with-docker-compose/) running in Docker after realising the ONE mistake….

        phpmyadmin:
        image: phpmyadmin/phpmyadmin:latest

        needs to be….

        phpmyadmin:
        image: phpmyadmin:latest

        But in the end I left Antonio to work on it, and again this can be found in his repo.

        Worth a read incidentally – but since then, Antonio has taken that, improved it and added it to his Docker image collection and so now, apart from the audio I have all the major stuff from my original script (for RPi4 and earlier) and more running in Docker.

        Were it not for the updated RPi-Clone, I may as well be running this lot on another device – the Orange Pi OPi5+, which I’ve blogged about elsewhere, runs everything just fine (but of course with no rpi-clone).

    2. Pete, add this to your blog; it’s the safest way to add my new additions to an already running system (more tech people would suggest git fetch and pull, but i don’t want to deal with people asking how to solve their merge problems 😀 )

    3. Hi Antonio
      On your repo readme you refer to SQLite and HA and neither folder is present. Am I missing something? thanks

      Also what does it mean when I run any of the docker aliases I get
      no configuration file provided: not found

      1. Sqlite – install phpliteadmin, it’s there; HA – I’ll remove it, that was stuff from my old repo…

        Today I’ll fully rework all the docker-compose files (updating them to the latest version of docker compose), all the readmes (removing redundant stuff), and the iotmenu script (adding exposed ports near each service name), and remove the OLD folder

      2. Do you want HA, too? I can add it, in the Docker version… but I don’t know if the Docker version has the official addons… well, in that case you don’t need them, as most of them exist as standalone setups in my repo anyway 🙂

        1. As I’m starting on a whole new adventure converting to Docker I thought I should take a look at HA. It is confusing though as to whether it should be installed outside of docker.

          1. Please, check on their site, there are a lot of ways to install it…

            They have a full image which can be installed bare metal on the ssd, called HAOS, home assistant operating system, which is their preferred way, but this way you lose control of the underlying system to do other things…

            this is why they suggest this instead of the other install types, as this way you can’t mess around with the system and it’s easier for them to just debug HA bugs and not misconfigurations made by the end user…

            There’s a python virtual env install, too, a docker one, and maybe others…

            Some of them have the official addon store, where you find all you need, so if you install HA you can have mosquitto, nodered, influx, grafana, etc, ALL inside HA itself, already preconfigured and working out of the box… this is what i use both at my home and at my parents’, with a cloudflare free ssl tunnel to access HA without exposing it to the internet on the router…

            But this way, ALL services are, actually, INSIDE HA itself, so if HA does not start, you have no nodered, no mosquitto, no influx, etc…

            Your choice on the path to follow, go there and start learning 🙂

            I can just add the docker version, pretty easily… if you then install the HA integration in nodered, via websocket, you’ll have both worlds talking to each other: you’ll have entities and services in nr, and can create new ones from nr itself and they’ll pop up in HA…

            LOTS of fun, i assure you 🙂

          2. Look here for the table showing all the features and which install method has which of them… as you can see, the addons are only in the supervised one (highly NOT recommended – it’s like Pete’s old script, everything installed on the machine itself, making it “unportable”) or the HAOS one…

            But as i’m adding most of the addons as standalone containers in my setup, it’s good to go with the Docker setup; i’m adding it right now, and it will be available this evening with all the other improvements i wrote about in another comment…

            https://www.home-assistant.io/installation/#advanced-installation-methods

            1. I look forward to setting up more of your updated stuff, but since I’ve started, is there a problem if I pull down your repo again?

              1. I’ll write here what you need to do to move to the new setup, which is not really “new”, I moved around some things…

                Basically the safest way is to shut down all the containers, rename the DockerIOT folder, bring down the new one with git clone, then move all the data folders back from the renamed one to the new one (and if necessary compare the docker-compose files, if you did some customisation, bringing that into the new docker-compose files)…

                If you do so (important, shutdown containers and rename folder) you can always go back… you can even use the “tar” commands i gave here in other comments to do a backup with everything in it and permissions preserved, in case, and move that on your pc via winscp…

                All your data are in the respective “data” subfolders of each service you already configured and can be moved to the new folder as they are, nothing there changed…

                Why this? Because docker-compose is deprecated and I moved to “docker compose”, a plugin of docker itself instead of a command of its own… and this, updated to the latest version, started complaining that the 1st line of each docker compose file is now “deprecated” – the “version” one, i mean – so i removed them all…

                Stay tuned

                1. I haven’t done much configuration yet so will probably delete and start again. Can I pull down now or have you more stuff to do?
                  I assume you are native Italian, your English is perfect. We’re just home from a holiday to Puglia….gorgeous.

                  1. i should have tested again with Pete, but now he’s busy, so why not you?
                    This is how:

                    – Rename the DockerIOT folder
                    – Get the new one: “git clone https://github.com/fragolinux/DockerIOT
                    – Switch to the branch where i pushed all these mods (they are not on the “main” one yet): “cd DockerIOT; git switch readmes”
                    – Rerun the steps about installing docker compose plugin, to get the latest one: https://github.com/fragolinux/DockerIOT/tree/readmes?tab=readme-ov-file#install-docker-compose

                    Done… check the main readme and all the others, while you get into them, on this last link here above

                    let me know any problem you’ll find, i’ll fix it ASAP

                    You can use the iotmenu.sh (last bits of the main readme) after you setup the services you want, DON’T just go there and run “start”, as many of them need to be setup before… after setup, go on with the script…

                    1. I’m probably not the person who should be testing this but..
                      root@docker:~/DockerIOT# bash iotmenu.sh
                      “jq” executable missing, please install it… exiting

                    2. I’ve been working my way through this, all good except…
                      “change all the needed parameters in both .env and data/provisioning/datasources/datasource.yml files”

                      I’ve done both of these and entered a new password. I cannot sign in to either influxdb or Grafana with this password but I can with default “password”
                      Also can I copy influxd.bolt and influxd.sqlite over from my old setup if that’s where the data resides.

                    3. John, those .env and datasource are checked only the 1st time you setup everything, once 1st started, they go inside some database in the “data” folder, i don’t know how to change them after 1st run, then…

                      So, check on influx and grafana sites on how to change them after, or restart from scratch, i just tested and it worked with no issues, i’ve data pushed from nodered into influx nodes, i see data in influx web gui, and in grafana dashboard, too…

                      I changed user for both grafana and influx from admin to test, and pass from password to Passw0rd, in the .env file, and in the datasource i just changed influx user from admin to test, done the initial run of the service as explained in the readme (remember, it’s different… check the readme…), then brought down everything and bring it up again in the normal way, this time…

                      no issues, i could login with test and Passw0rd in both systems, in grafana i tested connection to influx and it worked…

                      oh, i released all of this in my “main” branch now, so you don’t need to switch to the “readmes” one anymore, which was removed, too, as merged in main…

                      about restore those previous influx files, honestly no idea, just try to put them in its folder, when service is shut down, then bring it up and check…

                      i’m no expert in each and any of these services (nor i use some), i’m just releasing their setup in an easy way, that’s it…

                    4. Probably easiest to start again. Do I just delete the folder grafana_influxdb. What then is the command to pull down just that folder from your repo.
                      BTW I’ve discovered that migrating my influx data is not easy.

  7. Erm, no – but then I’ve not tried – getting rpi-clone to work was my main priority, as everything else I use on the RPi5 is now running under Docker and I’ve not really used anything Pi-specific – like the speech I previously used the Polly voice for. However it would be nice to have this kind of backward compatibility, so if you do get MPGPLAYER to run, let me know. See my later comment about the RPi5 and its lack of a 3.5mm jack.

    1. As a lightweight user of linux I have depended a lot on WinSCP to back up and restore files to the windows world. This new Docker build of Antonio’s lives in the root environment and WinSCP doesn’t seem to want to see these files. So…
      do we now have to move all these files around in the linux world? WinSCP was useful for just looking into a file, even though often you couldn’t edit it because of permissions.

      1. Hi John – on the PC I use Mobaxterm, and I can log into the Pi both as user pi and as user root. Full access to everything. Nothing has changed since previous versions of Pi OS. Of course if I do an SSH session as user pi, the SFTP window on the left doesn’t have root privileges – in the main ssh window of course I can use sudo… if I open a session as root I have access to everything. Thought of giving Mobaxterm a go on the PC? I eventually paid for it so I could easily get updates, but I recall I was using the free version for a while, no problems.

        1. Well, something CHANGED, indeed… it’s not because you use Mobaxterm, but because you ran the commands I gave you, which are now in my other comment here… without them, there’s no CLIENT allowing you to go into /root; it’s a SERVER-side config… allowing root login and logging in as root allows that.

      2. Just enable root login and you can then use winscp to go in /root folder, too… obviously, connect as root from then on 🙂

        some basic backup commands, for now, till I complete the backup scripts (Pete, I suggest adding them all to the article itself; I sent them to you on WhatsApp):

        # —————————————-

        ### scp does not work as root? Give root user a password on remote machine, and enable root login via ssh

        sudo passwd root
        # edit: /etc/ssh/sshd_config
        PermitRootLogin yes
        PasswordAuthentication yes
        # then restart sshd: systemctl restart sshd

        # —————————————-

        ### BASIC BACKUP COMMANDS, to be run ALWAYS as root

        # —————————————-

        # compress a full folder, PRESERVING permissions
        cd
        tar cvzfp DockerIOT-20240414.tgz DockerIOT

        # —————————————-

        # decompress a full folder, PRESERVING permissions
        # BEWARE, risk of overwrite if something is already there in same folder…
        cd
        tar xvzfp DockerIOT-20240414.tgz

        # —————————————-

        # copy a folder from 1 system to an other, directly without windows:
        # BEWARE, risk of overwrite if something is already on the remote system…
        cd
        scp -r DockerIOT root@192.168.1.X:/root

        # —————————————-

        # copy a single file from 1 system to an other:
        # SAFER way, as file is compressed and has a date in its name:
        cd
        scp DockerIOT-20240414.tgz root@192.168.1.X:/root

    2. Everybody: tell me what’s not working and I’ll try to fix it, in a Docker way if possible, but host executables can still be used, just a little more tricky…

      I’m not going to check every piece of the now pretty-much unmaintained “the script” to see what’s not working, if nobody uses those pieces…

      To be clear: “unmaintained” because the world moves on and Docker is a much better alternative now – read the article to know why – but the main reason is that it’s far easier to fully back up and migrate setup and data, even to a COMPLETELY different hardware architecture, dealing with a single folder without messing around with the host filesystem…

      And because I’ve not used “the script” for ages now – I was updating it just for Pete and you others, but I can’t use it on the rpi5 as I don’t have one and don’t want to waste ages debugging things remotely on Pete’s… and I’m not wasting my money, either, on something I don’t even use 🙂

      I asked a couple of times here on the blog comments for a little donation allowing me to buy one, no one offered, so for me, “the script” seems to be dead, my time is valuable and goes to other cool things 🙂

      P.S.: I don’t even use my own DockerIOT repo, honestly: I made it just for fun and because I sometimes need one of its services, so better to have everything documented and coherent in a single place… my main setup, both here and at my parents’ home, runs COMPLETELY inside Home Assistant OS and its addons, which is EXCELLENT, allowing me to interact with my IOT devices in every way I want; I can do dashboards in HA or Nodered, same for automations, everything JUST WORKS, INCLUDING full backups both on a NAS AND on Google Drive…

      my small pc died a few months ago; I just bought a new one, installed HAOS (Home Assistant OS), and gave it 1 of my backups, downloaded from GDrive: EVERYTHING came back EXACTLY as it was, NOTHING lost, on COMPLETELY different hardware – and it was bare metal before, now a virtual machine, as the new minipc is way more powerful, so I installed Proxmox on it with HAOS as a VM inside it…

      Proxmox allows me to take snapshots of the full system in seconds; i can do whatever i want on the system, and if i screw it up i can just restore the snapshot, or even create a new virtual machine from it and be up and running in a few minutes while debugging what went wrong on the old one… Once you go this route, with its MANY advantages, you’ll never go back to microsd cards 🙂

  8. With Bookworm have you managed to get “MPG player” working? Remember the Polly voice creation subflow you outlined some years back? I still use that but could not get it to work with Bookworm.

      1. I posted this to Nodered forum but didn’t get a solution.
        Hi
        I’ve raised this issue before and Colin very kindly found a resolution. I’ve now completely rebuilt my PI4 around “Bookworm” and the problem has arisen again. Using ssh to the PI, if I enter:
        mpg123 /home/pi/recordings/testFile.mp3 … it plays without error.

        If I run the same command in a node-red exec node I get this:
        High Performance MPEG 1.0/2.0/2.5 Audio Player for Layers 1, 2 and 3
        version 1.31.2; written and copyright by Michael Hipp and others
        free software (LGPL) without any warranty but with best wishes
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server request channel
        jack server is not running or cannot be started
        JackShmReadWritePtr::~JackShmReadWritePtr – Init not done for -1, skipping unlock
        JackShmReadWritePtr::~JackShmReadWritePtr – Init not done for -1, skipping unlock

        I’ve tried the old solution “DISPLAY=:1 mpg123 /home/pi/recordings/testFile.mp3” and it still fails.
        Similar to before if I node-red-stop and then node-red, then it works.

            1. I’m still very much entrenched in the “old” world. When I used to load a module into nodejs, for example
              “npm install mpg123”, I would change dir to /home/pi/nodered first. Now within Docker, what to do?

              1. use my menu (check here how to have it in case you already downloaded a previous version of my repo: https://tech.scargill.net/more-tasmota-thoughts-autonomous-auto-off/#comment-81059 )
                then select nodered, then the action to bash or sh (some containers have just sh and not bash, this is why i added both, but in sh you don’t have command completion…)

                or add these aliases to your system and use the last 2 (the menu does exactly the same thing): https://github.com/fragolinux/DockerIOT#useful-aliases

                once you’ve run that, you’ll be INSIDE the nodered container, where you’ll have npm as usual

    1. See other comments – no 3.5mm jack on the RPi5. Polly voice – yes, it would be nice to resurrect that, but it looks like I’ll have to figure out how to use hdmi audio or, more likely, some kind of USB audio-out device. (Cheap) ideas welcome. Meanwhile, see the separate POLLY blog entry I’ve just put up – working just fine on the RPi4 and 32-bit Buster – wish us luck in getting that and MPG123 running on the new RPi5 setup.

      1. well my pi4 does have a jack but Bookworm/Node-red offers no successful mp3 playout. In an earlier foray into Bookworm I used vpn to the GUI desktop and used its own copy facility. BTW have you installed PIVPN and how?

        1. I no longer use PiVPN and other similar solutions – I’ve moved to Tailscale and never looked back – and I don’t use the RPi desktop; I SSH into the RPi from my PC using Mobaxterm. I used PiVPN for years, and that and other VPN solutions were good, but only for those devices which had VPN clients or servers etc. With Tailscale I can access any of my devices, including my Tasmota devices, directly from anywhere. Worth a look? And I will hunt down a cheap USB audio device – any proven tips welcomed.

            1. Hi John… Tailscale takes a little more than I have time for here. Ideally you should have it on all devices (so I originally had it on my Pi), but another option is to have it on the main router only and allow network-wide remote access – that’s what I do now. I have this in fact in houses in 2 countries (sensibly on slightly different subnets) and can access all my toys from my phone (which of course also has Tailscale). It also means I no longer need 2 web pages on my phone to control things – one for Spain internal, another for Spain external, etc.

              Mostly when out and about I control everything via the RPi, but occasionally it is handy to access the odd Tasmota device directly.

              Next time you’re bored, head over to the Tailscale website and have a read. Dead easy.

                1. I have messed around with Zerotier a bit and have connectivity with individual devices, but I get a headache when trying to work with managed routes to my whole home network.
                  If I have, for example, Zerotier on my router and it exposes a Zerotier ip of xx.xx.xx.1, then how do I connect from my phone to a tasmota on, say, xx.xx.xx.2?

                1. Yes, it seems difficult, but in fact I found it easy on Tailscale and have hence stopped using all other VPNs (apart from, of course, the likes of Surfshark for accessing UK TV from Spain) – whether or not I could remember how to set it up again is another matter. The support guy at Zerotier explained and it was a lot simpler than it looked – talk to them and look at examples – just one setting to change. So when I am out on my phone I simply access my home 192.168.x.x devices as if I were at home. Most of the time I don’t turn off Zerotier as it doesn’t appear to slow anything down.

        2. I moved from the RPi4 to 5 and to Bookworm because Antonio did his best to convince me that not moving on (especially to Docker) would eventually land me in trouble. I fought this for some time – the script isn’t totally compatible with the RPi5 and Bookworm – but ultimately I took the plunge (only this last week), and getting my Node-Red flows transferred was nowhere near as hard as I’d expected. The Node-Red stuff is also working on the OP5+ – though I’m still not buying the idea that life without RPi-Clone is easy. Even with a complete failure of an SD, RPi-clone means I can have the whole lot up and running in minutes. I keep an SSD on the RPi for making backups – if the SD should fail I can boot from the SSD and easily clone back to an SD (I know, I should do it the other way around). Happy to pass on any info as I learn.

          I’m hoping the USB audio will sort my last transfer problem and give me back my Amazon AWS Pi sound – not that it’s critical but might be for others.

          1. Yes, I have depended on rpi-clone for years too and, like you, am reluctant to mess with a working system which has taken many hours to get right. I do back up Node-Red regularly using a batch file. That stomach-churning moment when you make some changes and the Pi “dies”, you don’t know Linux well enough to investigate, and your backup takes ages to boot. Even with extensive notes it takes ages to rebuild the whole system. So Docker it is…

            1. That’s why I suggest staying on Docker… we fully moved Pete’s setup between 2 completely different machines in minutes, 2 days ago…

              Everything is inside a single folder; you just need “docker” and “docker compose” (NOT docker-compose, now deprecated… I created an alias for this in case someone still has scripts using that old command; “compose” is now a docker “plugin”)…

              I even moved between different architectures, arm64 to x86, with not a single line changed… just bring down all containers first, so no services are running and the copy is consistent – something like the sketch below.
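
              A rough sketch of that move, assuming the whole setup lives in /root/DockerIOT with one compose folder per service (host names and paths here are illustrative, not the exact ones we used):

              # on the old machine: stop every service so the copy is consistent
              cd /root/DockerIOT
              for d in */; do (cd "$d" && docker compose down); done
              # copy the whole tree to the new machine
              rsync -a /root/DockerIOT/ root@NEWHOST:/root/DockerIOT/
              # on the new machine: bring everything back up
              cd /root/DockerIOT
              for d in */; do (cd "$d" && docker compose up -d); done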

    2. Ok, first technical hitch. While waiting for my USB audio dongle to arrive, I thought I’d take a look at my old code for Amazon Polly.

      The Node-Red code is designed to capture an audio file from Polly – speech you ask for – and Polly does a great job. So, at the command line you fire a line of text and flags at Polly and it returns an MP3 file with the speech you requested.
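
      For reference, the kind of command line involved looks something like this (voice and paths illustrative, not necessarily my exact flags):

      # ask Polly for speech and save it as an MP3 (assumes the aws CLI is configured with credentials)
      aws polly synthesize-speech --output-format mp3 --voice-id Brian \
        --text "Front door open" /root/audio/speech.mp3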

      I’m now running as root; on the old RPi4 I was running as pi.

      The good news is that the Polly command-line MP3 retrieval still works a TREAT in Bookworm as user root (sending the file to /root/audio). The BAD news is that I last ran this code in RPi4 Node-Red as user pi… and even if, for now, I skip my clever buffering and simply send the string to the Node-Red EXEC node as user root – nothing happens, as the aws CLI is not available in the container. All attempts to map the /usr/local/bin folder into the Docker node-red environment have failed, and I could not manage to get a user/password SSH session inside the exec node to access the host’s aws – no file – and the exec node throws an error.

      Not had time yet to investigate permissions – but then surely the root user has access to everything… this is going to need investigating. I see no reason why the rest of my NR flow should fail on the new RPi5. So I took what I’d test-fired successfully at the command line, removed the word “aws” and sent the rest from an inject node into the exec node configured with the command “aws”.

      More work needed I’m afraid.

      1. Again, let’s clarify this…

        So: Docker containers are like LITTLE virtual machines, COMPLETELY ISOLATED from the host one, which is “helping” them run… so, AGAIN, root INSIDE the nodered container and its filesystem is NOT the same root user as the external one, and “/root” in the container – what nodered sees – has NOTHING to do with “/root” on the host…

        If I remember well, I left a folder inside the “nodered/data” one on Pete’s system… as everybody by now SHOULD know, the “nodered/data” folder is mounted inside the container as “/data” (look at its “docker-compose.yml” file), so let’s create a folder shared between the host and the nodered container, for everybody:

        # always as root, of course…
        mkdir -p /root/DockerIOT/nodered/data/files
        chown -R 1000:1000 /root/DockerIOT/nodered/data

        so now, everything you put inside /root/DockerIOT/nodered/data/files on the host will be available inside the container as /data/files: THIS (and NOT /root/audio on the host!!!) is where you should put shared stuff – scripts, files, whatever… feel free to create other folders as you need them…
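
        For anyone unsure what that mount looks like, this is roughly the relevant fragment of nodered’s docker-compose.yml – a sketch only, the service and image names may differ slightly in the actual repo:

        services:
          nodered:
            image: nodered/node-red
            volumes:
              - ./data:/data   # host folder ./data appears as /data inside the container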

        now the other part, the “aws” command… given that OF COURSE the command on the host is NOT accessible INSIDE the container, we need to find a way to either install the aws CLI INSIDE the nodered container (tried this morning, wasted a couple of hours because it uses libraries not available in the nodered Docker image), OR access the host one… I think you can use the same procedure detailed here:

        https://discourse.nodered.org/t/node-red-in-a-docker-container-doesnt-seem-to-be-able-to-execute-some-programs-with-the-exec-node/78826/3

        so basically you need to enable SSH public-key access on the host, generating public and private keys, so the container can access the host via SSH without having to provide a password… how to do this is detailed in that link, FOLLOW that… just change the USELESS pi user (it hasn’t even been mandatory in Raspbian for years now, why insist on using it) to root in that guide…

        this way you can run whatever executable or script on the host, as you can see in the screenshot in the previous link… with THAT syntax, don’t invent anything else… the gist of it is sketched below.
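
        A sketch of the idea only – the key path and host IP are placeholders, and the linked post remains the authoritative version:

        # on the host, as root: generate a keypair inside the shared folder
        ssh-keygen -t ed25519 -f /root/DockerIOT/nodered/data/files/nr_key -N ""
        # authorise that key for root logins on the host
        cat /root/DockerIOT/nodered/data/files/nr_key.pub >> /root/.ssh/authorized_keys

        # then, from a Node-Red exec node (the same key is /data/files/nr_key inside the container):
        ssh -i /data/files/nr_key -o StrictHostKeyChecking=no root@192.168.1.10 "aws polly help"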

        OF COURSE this is FAR from being secure… if you expose nodered publicly, and someone breaks into it, he will have FULL access to the host filesystem via that ssh connection, no password asked AT ALL…

        1. You COULD use user pi in that SSH keypair setup in the previous link, as long as you share a folder in its HOST home to a folder under /data INSIDE the container… as pi cannot directly access folders under the HOST’s /root…

          I’ll leave this to everybody as an exercise, so you’ll start banging your head against Docker and start understanding how it works (everything is CLEARLY explained in this article and in MY previous comment…)

          I won’t add this feature to my setup nor give any other support on it: feel free to ask on THAT nodered support page for help… but EVERYTHING you need is in this page and comments, or there

          1. Well, I’ve run into another problem. I use serial extensively in Node-Red to communicate with an external device. It appears that /dev/ttyUSB0 no longer wants to work.

            1. that should be easy, you need to pass the local serial port to the nodered container… go under /dev/serial/by-id and check what the actual local device is called

              then modify nodered’s docker-compose.yml and add a block like this (it needs to be aligned exactly like the “ports” or “volumes” blocks):

              devices:
                - /dev/serial/by-id/DEVICENAME:/dev/ttyUSB0

              this way the local device – the one on the left of the “:” – will be mapped to what you see on its right, inside the container

              if you prefer the short name (I suggest not), you can even do this:

              devices:
                - /dev/ttyUSB0:/dev/ttyUSB0

              if WordPress messes something up, go take a look at the zigbee2mqtt docker-compose – there’s a similar setup used to pass the Zigbee dongle to the z2m container

                1. everything… you need something like this added to the docker-compose of nodered (take a look at the one from zigbee2mqtt as a reference):

                  devices:
                    - /dev/serial/by-id/usb-1a86_USB2.0-Ser_-if00-port0:/dev/ttyUSB0

                  then you will have /dev/ttyUSB0 inside the container, connected to that external one

                  1. I’ve had no luck with this at all. Node-Red logs are clear, but within Node-Red itself I keep getting

                    “[serialconfig:9d5f7edf.fec6f] serial port /dev/ttyUSB0 error: Error: Error: Permission denied, cannot open /dev/ttyUSB0”

                    1. I’ll check tomorrow – probably you need to add user 1000 (usually pi) to the dialout group, as you would if nodered were installed on the host… Google how to do this, it’s easy
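
                      In docker-compose terms that usually means a “group_add” block in the nodered service – a sketch, assuming the relevant host group is “dialout” (use the numeric GID instead if the group name doesn’t exist inside the container):

                      group_add:
                        - dialout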

                    1. done, added everything to my repo, follow CAREFULLY the updated readme, and check the files listed there…

                      https://github.com/fragolinux/DockerIOT/tree/main/nodered

                      @Pete, I explained in the nodered readme what’s needed (the chown) and what’s optional (ssh and serial), so now everything should be clear…

                      @Pete: remember that you need to rerun the chown line every time you add anything new under the “data” folder, like the new “audio” folder created yesterday, or you’ll get permission errors…

                  2. Antonio, thanks for the writeup. Unfortunately, after careful mods to the env file and yml, and chown and reboot, I still get
                    “[serialconfig:9d5f7edf.fec6f] serial port /dev/ttyUSB0 error: Error: Error: Permission denied, cannot open /dev/ttyUSB0”

                    Also, my “audio” folder I put in /root – is that bad?
                    Also, do I need to do the whole SSH thing so that I can run the exec node within Node-Red?

                    1. publish your configs on some site like pastebin or similar, let’s take a look

                      audio: it can’t be under /root, as the nodered container runs as user 1000, which has no access to it…

                      go in /root/DockerIOT/nodered and run:
                      mkdir -p data/audio
                      chown -R 1000:1000 data

                      or wait for Peter to share the flow we made yesterday, then I’ll add something if needed…

                      yes, ssh is needed… the key part is that every command you run in nodered that needs to point to something IN THE CONTAINER should use paths like “/data/audio”, while every exec node using the ssh line to access the host MUST use the full host paths corresponding to the ones inside the container, so for those you need “/root/DockerIOT/nodered/data/audio” – see the pair of examples below
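
                      A concrete pair (file name hypothetical): a plain exec node, running inside the container, would reference

                      ls -l /data/audio/hello.mp3

                      while an exec node going through the ssh line to the host must use the corresponding host path (key path as in the earlier sketch):

                      ssh -i /data/files/nr_key root@192.168.1.10 "ls -l /root/DockerIOT/nodered/data/audio/hello.mp3"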

                    2. if you want, I can connect to your PC and help you fix this, but in the evening, not before 4pm today (Italian timezone)

                      if so, run AnyDesk on your PC (needed to give me access – a standard remote-assistance tool, we use it with Pete very often) and contact me on Google Chat at fragolino AT gmail DOT com after 4pm

    1. Well, I would have done this years ago – as you can see, my repo is 6 years old… but Pete, who has many other interests, was never interested in this significant change of thinking, so I abandoned it till recently, when the alternatives were:

      – work again on the old script (which I’ve used for years, and I don’t even have a physical device to work on…) – so for me “the script” is dead: device-specific and still 32-bit

      – having fun with Docker, which allows you to have something completely independent of the host itself; you can take the full folder to an x86 PC or whatever other SBC can run Docker, with the exact same functionality…

      I prefer the latter, which now gives you something that should last a good few years. Without my encouragement you’d all still be using the “old” RPi4 Bullseye 32-bit. I don’t use any of this myself, as I’m happily on Home Assistant and its addons now…

      This requires a bit of knowledge that I tried to explain in the original blog article, which EVERYBODY should READ and UNDERSTAND before trying to apply old habits to a new setup…

      But different knowledge would have been needed to adapt the old script too, so I don’t see that much extra learning to be done; honestly it’s way easier with Docker, where everything is standardized and coherent in my setup – each service is treated the same way and has the same folder structure – instead of the previous setup, where everything was spread all over the filesystem…

      1. OK, so I’m 70% of the way there. Serial is working now, thanks to Antonio.
        Some “niggles”:
        1. I use “Wemo Emulator” in Node-Red to tell Alexa to do stuff, and now Alexa won’t discover any of the Wemo virtual devices.

        2. Within nodered I use an INA219 to monitor all sorts of voltages and currents in my loft setup. Before Docker I changed dir to .node-red and did “npm install brettmarl/node-ina219”, then inside settings.js I added
        var i2c = require("i2c-bus");
        and also ina219: require("ina219")

        3. Still hoping Pete can sort out the exec node to mpg123 for sound output

        4. I use a sqlite node in Node-Red to create and modify some small databases, but I’m not sure how to interact with /root/DockerIOT/dbs/iot.db

        5. Can’t figure out how to access phpLiteAdmin on port 3080?

        6. Alexa SmartHome (Node-Red) requires credentials and asks you to browse to localhost:3456 – this produces nothing. It also asks for a file path to store them. I tried /root/alexa-speak but that is all wrong, I’m sure.
        Apologies for all this, but Antonio did invite me to blog it.

        1. let’s answer them 1 by 1, but in random order

          2. ina219: you need to go into the container to install that – from the nodered folder run “dbash”, you will be inside, then run the npm command line – but I’m not sure it can reach the i2c device, nor do I know which one it is… the same thing needs doing as for the serial port, to pass it to the container…

          3. already sorted out yesterday, wait for Peter to document everything

          4. I just added the new volume mount for the “dbs” folder to the nodered docker compose – please add the new line to yours, and read the notes there: https://github.com/fragolinux/DockerIOT/blob/main/nodered/docker-compose.yml

          5. phpliteadmin: just open http://yourip:3080 in a browser, it will mount the default iot.db automatically; the access password is in the .env file

        2. 1. sorry, I don’t know what to do with that – try reaching the official nodered support channels and ask there; say you’re using it in Docker and, if necessary, show your docker-compose and .env files by publishing them on a site like pastebin or similar, nothing secret there

          2. try following the official guide on how to enable pi-gpiod:
          https://nodered.org/docs/getting-started/docker#raspberry-pi—native-gpio-support

          6. try opening http://yourip:3456 instead – localhost can’t work, as each device has its own localhost, and now you have both the host’s and the container’s, too… yourip is your Raspberry’s IP, of course.
          Path: you can’t use /root inside the container; put something like /data/alexa-speak instead

          1. 4. That works now, except I copied over my existing iot.db and now it says “attempt to write to a read only database”. Prior to this I would have changed permissions, but with Docker??

            6. I did try myIP:3456 but just get “can’t reach this page”

            1. on the host, run from the main DockerIOT folder:
              chown -R 1000:1000 dbs

              if this is not enough:
              chmod -R 777 dbs

              3456: sorry, I don’t know how to help there. Please check the author’s GitHub or similar and ask how to accept a request that is started from inside a Docker container…

              or add
              - 3456:3456
              to the ports block of the docker-compose to expose it, too

              BTW, I remember I had similar issues – they were caused by the old, unsupported node being used… check one of the most recent nodes, in particular one of these (open the nodered palette, search for “alexa remote” and look at their last update dates):
              https://flows.nodered.org/node/node-red-contrib-alexa-remote2-mattl0
              https://flows.nodered.org/node/node-red-contrib-alexa-remote2-applestrudel

              applestrudel should work, Pete blogged about it a while ago:

              Alexa-Applestrudel (formerly Cakebaked formerly Remote2)

              1. AppleStrudel has the same issue, but it’s not important right now.
                As regards the Wemo emulator, I can find another external solution for Alexa commands.

                That just leaves the INA219. I did dbash and then the npm install………
                Then var i2c = require("i2c-bus");
                and also ina219: require("ina219") in settings.js, and this crashes nodered.

                1. applestrudel: try the version I showed, the mattl0 one… it works for me…

                  i2c: can’t help there, I don’t even know how to use that… let’s wait for Pete, in case he knows better… look here:
                  https://cj-hewett.medium.com/using-johnny-five-with-node-red-and-docker-98daa5b31cc
                  or just move the sensor to an ESP and use it remotely, without using the RPi GPIO

                  Alexa commands: do as I showed Pete a few weeks ago – take whatever ESP you have lying around, configure every usable GPIO as a relay, configure MQTT, enable Philips Hue emulation… every (fake, nothing connected to the GPIO) relay will be exposed to Alexa, and it will trigger an MQTT topic when triggered… then use these topics to do your automations…

                  Pete blogged it here: https://tech.scargill.net/a-tasmota-zigbee2mqtt-learning-day/

                2. Wemo: I see that there are a lot of nodes available, exactly as for alexa remote… try one updated recently – it could be the same problem: if the original author did not update the node, it may not work on recent nodered…

                  otherwise do as I explained, with a simple ESP configured as in the previous comment

                  1. Thanks Antonio, I’ll check that out. I made progress with the INA219 stuff by copying the ina module from my existing setup and putting the “require” stuff in settings.js.
                    All seemed good until

                    Error: ENOENT: no such file or directory, open ‘/dev/i2c-1’

                    Apparently this is not an uncommon issue and most folk can’t resolve it.

                    1. just add that as a new volume, like:

                      - /dev/i2c-1:/dev/i2c-1

                      if this is not enough and it lacks permissions, you need to check which group that device is in on the host, and add that group to the same block in the docker compose which already lists the “dialout” one, on a new line

                    2. I tried to post my amended volumes but got a big warning about malicious content etc.

                      Anyway, that produces
                      yaml: invalid leading UTF-8 octet

                    3. that’s just some copy-paste error from code mangled by WordPress – don’t just copy from my comments, try to understand the basics and replicate it yourself; it’s just a case of duplicating some of the lines you already have in the docker compose and adapting them… my 2c? You copied my line, and that “-” sign is not a real dash, but a longer one produced by WordPress on the blog…

                    4. Sorry, that copy-paste was lazy of me. I’m really struggling with this and keen to get it all working. i2c – I have no idea where to go with it. It’s a shame because it worked perfectly in the old environment. I suppose I will have to drag out an ESP module to do it. As for audio, I await Pete’s blog on what line to use in the exec node. Sneak preview perhaps?
                      Also, it appears there are problems on the Amazon site, as I can’t register my credentials.

                    5. John, tomorrow at 5pm, Italian timezone, I’ll be at my PC… we missed each other a couple of times in the last 2 days 🙂
                      let’s take a look…

                      for a sneak peek, take a look at Pete’s video, just published on YouTube 🙂

                    6. Unfortunately the sound was pretty awful, but I think I have all that in place anyway. I think I just need the “ugly” command line in the exec node.

                    7. just as a reference on how we passed the i2c device from host to container on John’s system…

                      I wrongly said to add it as a volume: no, it’s a device… we checked the group of that device on the host (ls -l /dev/i2c*) and it was “i2c”, but the same group was NOT present inside the container…

                      so we just took the GID from the host system (grep i2c /etc/group) and used that number inside the docker-compose, as in the image – roughly as sketched below
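
                      Since the image may not survive here, this is roughly what it came to (GID 998 is an example – use whatever grep returns on your host):

                      # on the host
                      ls -l /dev/i2c*        # note the owning group, here "i2c"
                      grep i2c /etc/group    # e.g. i2c:x:998: – note the GID

                      # in nodered's docker-compose.yml
                      devices:
                        - /dev/i2c-1:/dev/i2c-1
                      group_add:
                        - "998"              # the host's i2c GID, as the group name doesn't exist in the container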

          2. I used the mattl0 version, and configured it as in the image… but I use the Home Assistant addon, so you need to change that path:

            /config/cookie.txt
            to
            /data/cookie.txt

            and all the other relevant parts for your country… add the mapping of port 3456:3456 to the docker compose too, so you can reach it from outside (copy the one for 1880 and change the numbers, respecting indentation) – something like the fragment below
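
            A sketch of how the ports block ends up (match whatever indentation your file already uses):

            ports:
              - 1880:1880   # existing Node-Red UI mapping
              - 3456:3456   # added so the Alexa credentials page is reachable from outside the container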

            1. Antonio
              Here’s another. I’m trying to install a node in node_modules, but no node appears there. (I also tried changing to the data folder before installing.)

              root@docker:~:10:01[1]> cd DockerIOT/nodered
              root@docker:nodered:10:02[0]> dbash
              2fbb57056b8e:~$ npm i ina219-sync
              up to date, audited 311 packages in 3s

              46 packages are looking for funding
              run `npm fund` for details

              found 0 vulnerabilities
              2fbb57056b8e:~$ exit

                1. I’ve tried several installs and each time I get “packages installed etc” but there is no sign of them in node_modules.

                  1. solved… the Docker site is wrong… they say to mount a folder into the container as /data, but when you go in via bash, you’re NOT in that folder – you are under /usr/src/node-red… you need to move to /data before installing…

                    if you do this, it will work:

                    dbash # to go inside container
                    cd /data
                    npm install ina219-sync
                    exit
                    drestart
