Work in Progress using Docker

Antonio Fragola, aka MrShark - a fellow reader and collaborator who has helped me with “the script” over the past two years - has asked me to put up something about his current work in progress.


He has begun working on Docker with a view to migrating “the script” to a fully “dockerised” container-based version.

Docker allows easy management of software: you simply “drop” an old version of a container and recreate it with a newer one, re-attach your data volume and you are done. It also allows easier migration of your data from one platform to another (configurations are architecture-agnostic, so you can just back up your data folder and bring it from a Raspberry Pi to a PC virtual machine in no time), less disk space, WAY less memory, and other things Antonio suggests we discuss here...
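As a concrete illustration of that drop-and-recreate workflow, here is a dry-run sketch in shell; the image, container and folder names are only placeholders for illustration, not anything from Antonio's repo:

```shell
#!/bin/sh
# Dry-run sketch of "drop the old container, recreate with the new image".
# run() only prints each command; swap the echo for "$@" to execute for real.
run() { echo "+ $*"; }

upgrade() {
  image=$1; name=$2; data=$3
  run docker pull "$image"               # fetch the newer image
  run docker stop "$name"                # stop the old container...
  run docker rm "$name"                  # ...and drop it (data stays on the host)
  run docker run -d --name "$name" -v "$data":/data "$image"   # recreate, re-attaching the data
}

# placeholder names - not from the actual DockerIOT repo
upgrade nodered/node-red nodered "$HOME/DockerIOT/nodered/data"
```

Because the container is disposable and the state lives in the mounted folder, the same few commands cover both upgrades and moving to new hardware.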

Antonio is working on creating a GitHub repository so that everybody can collaborate and enhance his work... his plan is to have a git repo that we can download, with a folder for every service currently supported by The Script (and more: he has already added Home Assistant, for example), and in every service folder there will be a DATA folder containing all the configs, logs, data, db, and whatever else is needed for the service to work... so, backing up this folder will give you ALL you need to restart from scratch...
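Since each service keeps its state under its own DATA folder, a full backup really is just one archive of the tree. A minimal sketch, assuming the repo is checked out at a path of your choosing:

```shell
#!/bin/sh
# Sketch: one archive of the whole DockerIOT tree is a complete backup,
# because every service keeps its state in its own DATA folder.
backup_dockeriot() {
  repo=$1   # path to the DockerIOT checkout (an assumption - adjust to yours)
  out=$2    # where to write the archive
  tar -czf "$out" -C "$(dirname "$repo")" "$(basename "$repo")"
  echo "backup written to $out"
}

# e.g. backup_dockeriot "$HOME/DockerIOT" "/media/usb/dockeriot-backup.tar.gz"
```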

Management will be easy thanks to a web GUI (Portainer), and all the basic commands needed will have a clear wiki page as a reference on GitHub... and of course, to maintain the ease of use of The Script, all the required info (user, password, etc.) will be given to the various services before running the setup, automatically, in the way you're already familiar with...

Finally, the end result should be multiplatform, so there will be auto-detection of the actual hardware architecture and, where needed, automatic modification of the download sources...

Work started from scratch just 4 days ago, and he has already set up Portainer, Mosquitto, Home Assistant and Caddy (a VERY LIGHT alternative to Apache, which we use just for basic HTML pages, phpLiteAdmin and a few more services in the script, so no further need for the Apache+PHP monster)...

Antonio is encouraging discussion of his vision and all help or suggestions are welcome... https://github.com/fragolinux/DockerIOT


39 thoughts on “Work in Progress using Docker”

  1. I think that the idea is absolutely brilliant, especially for people just beginning with Linux. Additionally, I have just tried installing the 'script' on a clean, freshly formatted dual-boot PC with Ubuntu 16.04. The script loaded everything without issue; I could access the web page and everything worked perfectly except node-RED. It seems there are incompatibility issues between the Node.js versions and node-RED, or at least certain of the NR modules. These issues are not attributable to the 'script'.
    So if the work Antonio is doing makes resolving these issues easier that would make life easier.
    Any idea when it will be ready to try out Antonio?

    1. There's a tiny problem with the script and the latest node red; it should be fixed tomorrow. It merely takes out a closing brace in settings.js (/home/pi/.node-red) - if that is put back in, all is then immediately well. I'm discussing that now, as Antonio made a tiny change and we need to change it to another one.

      Pete

      1. going to completely rewrite the settings.js file generation...
        i'll start from the one created on nodered 1st start, clean it and put in some placeholders, for hw-support, credentials, projects and so on... this way we can ALWAYS have a working starting script, and easier changes of parameters via sed, which is more difficult on the standard file... then send it to Peter, who should publish it on one of his sites; i think we can put it where he publishes his index and css files...
        stay tuned

      2. and node v8 seems to need new dependencies before you can install node-red... tried in ubuntu 16.04 and 18.04; on both, node-red v0.19.1 install fails on nodejs v8.11.4 for a missing package: node-pre-gyp
        working on it, as v8 is ANYWAY the way to go: earlier versions are not recommended anymore by the node-red guys...
        https://nodered.org/docs/getting-started/installation

        edit: nope, error persists...

        pi@raspbuntu:~$ sudo npm $NQUIET install -g --unsafe-perm node-red
        npm WARN registry Unexpected warning for https://registry.npmjs.org/: Miscellaneous Warning EAI_AGAIN: request to https://registry.npmjs.org/bcryptjs failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org:443
        npm WARN registry Using stale package data from https://registry.npmjs.org/ due to a request error during revalidation.
        npm WARN deprecated mailparser@0.6.2: Mailparser versions older than v2.3.0 are deprecated
        npm WARN deprecated nodemailer@1.11.0: All versions below 4.0.1 of Nodemailer are deprecated. See https://nodemailer.com/status/
        npm WARN deprecated mimelib@0.3.1: This project is unmaintained
        npm WARN deprecated mailcomposer@2.1.0: This project is unmaintained
        npm WARN deprecated buildmail@2.0.0: This project is unmaintained
        /usr/bin/node-red -> /usr/lib/node_modules/node-red/red.js
        /usr/bin/node-red-pi -> /usr/lib/node_modules/node-red/bin/node-red-pi

        > bcrypt@2.0.1 install /usr/lib/node_modules/node-red/node_modules/bcrypt
        > node-pre-gyp install --fallback-to-build

        node-pre-gyp ERR! Completion callback never invoked!
        node-pre-gyp ERR! System Linux 4.4.0-93-generic
        node-pre-gyp ERR! command "/usr/bin/node" "/usr/lib/node_modules/node-red/node_modules/bcrypt/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build"
        node-pre-gyp ERR! cwd /usr/lib/node_modules/node-red/node_modules/bcrypt
        node-pre-gyp ERR! node -v v8.11.4
        node-pre-gyp ERR! node-pre-gyp -v v0.9.1
        node-pre-gyp ERR! This is a bug in `node-pre-gyp`.
        node-pre-gyp ERR! Try to update node-pre-gyp and file an issue if it does not help:
        node-pre-gyp ERR!
        npm WARN optional SKIPPING OPTIONAL DEPENDENCY: bcrypt@2.0.1 (node_modules/node-red/node_modules/bcrypt):
        npm WARN optional SKIPPING OPTIONAL DEPENDENCY: bcrypt@2.0.1 install: `node-pre-gyp install --fallback-to-build`
        npm WARN optional SKIPPING OPTIONAL DEPENDENCY: Exit status 6

  2. Peter and Antonio, that's great news. I had looked at settings.js and the braces didn't look to be correct, but that was on the assumption that braces have to be paired (open / close) like they do in 'C'.
    I guess I'll have to reformat the HD partitions to clear out the stuff the old script loaded.
    Is the docker version likely to run on Ubuntu 18.04?

      1. braces problems should be solved, FOREVER...
        just sent Peter 2 scripts, for node v6 and v8, with the node-red settings part completely redone... now i download a premade settings.js file (https://tech.scargill.net/iot/settings.txt) which will ALWAYS work, as it's not dependent on the one generated at startup by nodered itself... and this way i only need 6 lines to modify it, and only to add credentials; otherwise nodered will work anyway, without auth, of course... a safer way to go instead of relying on the mods the nodered guys make to their file...

        sed -i -e "s#\/\/adminAuth#adminAuth#" /home/pi/.node-red/settings.js
        sed -i -e "s#\/\/httpNodeAuth#httpNodeAuth#" /home/pi/.node-red/settings.js
        sed -i -e "s#NRUSERNAMEA#$adminname#" /home/pi/.node-red/settings.js
        sed -i -e "s#NRPASSWORDA#$bcryptadminpass#" /home/pi/.node-red/settings.js
        sed -i -e "s#NRUSERNAMEU#$username#" /home/pi/.node-red/settings.js
        sed -i -e "s#NRPASSWORDU#$bcryptuserpass#" /home/pi/.node-red/settings.js

        changes:
        moved the node-red-contrib-opi-gpio install into the OrangePi gpio part of the script, as it's not needed on other boards...
        updated nodejs to the latest stable v6.14.4 and v8.11.4, and node-red to the latest v0.19.1
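Antonio's six sed lines above can be tried end-to-end on a toy template; the two-line file below is a cut-down stand-in for the real settings.js, and the credential values are made up:

```shell
#!/bin/sh
# Toy stand-in for settings.js, using the same placeholder scheme.
cat > /tmp/settings.js <<'EOF'
//adminAuth: { username: "NRUSERNAMEA", password: "NRPASSWORDA" },
//httpNodeAuth: { user: "NRUSERNAMEU", pass: "NRPASSWORDU" },
EOF

# made-up credentials (single quotes keep the bcrypt-style $ signs literal)
adminname='admin';  bcryptadminpass='$2a$08$adminhash'
username='user';    bcryptuserpass='$2a$08$userhash'

# the same six substitutions the script performs
sed -i -e "s#\/\/adminAuth#adminAuth#" /tmp/settings.js
sed -i -e "s#\/\/httpNodeAuth#httpNodeAuth#" /tmp/settings.js
sed -i -e "s#NRUSERNAMEA#$adminname#" /tmp/settings.js
sed -i -e "s#NRPASSWORDA#$bcryptadminpass#" /tmp/settings.js
sed -i -e "s#NRUSERNAMEU#$username#" /tmp/settings.js
sed -i -e "s#NRPASSWORDU#$bcryptuserpass#" /tmp/settings.js

cat /tmp/settings.js   # both auth blocks now uncommented and filled in
```

The point of the placeholders is exactly this predictability: a fixed template means the substitutions can never be broken by upstream changes to the generated file.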

    1. sure it will... i'm testing right now on "coreos" in a virtual machine; my plan is to have everything working there, then move on to supporting raspberry and the other small boards, creating ad-hoc images for their hw architectures (arm based), and finally move from docker to Balena (https://www.balena.io), which is docker compatible and even lighter...

      i'd like to have an sbc that runs nothing more than the bare container management software; everything else should be "containerised"... and the docker folder, with all the user data, easy to back up and cross-platform...

      i'm at a good point in the virtual machine, but i still can't test on an SBC because i'm on vacation and on a limited data plan (my vm tests run over vpn on a vm at the office, so the heavy traffic is done by the vm directly on our 100mb fiber, not by my poor 4g router 🙂 )

      just halted tests today to solve these issues with latest nodered, then back there ASAP

    2. about braces: yes, i know, file changed again... i'm working on it... we will create a standard, ALWAYS working, basic settings.js file with some placeholders where to put credentials, hw enable, projects, etc... this way we're not going to rely anymore on the file generated on 1st nodered start, which could change from version to version, breaking the script... easier to manage this way...

  3. I love using Docker (the NR official container on docker hub) when I need to do my dev work on my node-red nodes. Just start up a new container, tell it to map the ports to something else (the machine has node-red running on the defaults, so the container needs to remap) and in a few seconds I can test the new code. I do need to make better use of tags with git (a different issue). 🙂

    I've not tried this on a Pi.

  4. I played around with this quite a bit and IMHO there are still some bugs in Portainer that cause very peculiar problems. However, the idea is sound. Homebridge is a really good candidate especially if you want to run a number of instances to either isolate dodgy plug-ins or need to go past the accessory limit.

    Node red is also a good choice as this also allows segregation of services.

    One of the good points is the auto restart of containers that fail and the dual management.

    Runs on Pi Zeroes too.

    1. homebridge? isn't that something for bitten-apple fans? not going to add it...
      if you mean home-assistant, yes, it's there... but how big it is! the docker image is 1.8gb...

      in the mean time, node-red is just up and running 😉
      i need more tests, and of course i need real hardware soon (to pass through the actual devices, like serial ports...), and to test nodes that need recompiling... but it's a good start...

      btw i'm using portainer just for easy management during the creation of containers and the like... the actual container creation is done via dockerfiles and compose files... even though i could convert some of them to portainer templates, which would be AWESOME...

  5. Aha,

    I was using 'Swarm Mode' which is where my issues probably came from. Hadn't considered using stand-alone CPUs as the swarm looked so attractive.

    However, using Portainer to install containers is really easy, and I think that unless you use Portainer this way the management benefits might be limited. Volumes, networks, container variables etc. are all easily done in Portainer and, for me, having used unRaid Docker management, access to the ready-made libraries of dockers is extremely valuable for non-standard requirements.

    1. ooohhh, swarm... that would be nice for the future, to give redundancy to whoever needs it... i like it 😀
      btw i'm new to docker (reading Docker Cookbook as i prefer practical stuff instead of those philosophical books, sometimes...), i need it for work, too, so i'm actively studying it 🙂

  6. Yes, the swarm is nice, but you lose the ability to decide where a docker runs. That would only matter with local I/O, and I guess that is where a standalone CPU would be incorporated. The management nodes, of which there must be at least two, also have to agree, so there is checking too.

    Did you have a look at the ready-made dockers here: https://registry.hub.docker.com/search?q=library. The only thing to be wary of is the ARM versions necessary for a Raspberry Pi. Portainer can import these very simply and many don't need any changes. The big benefit is they are tried and tested. HA has 10M pulls, with 100K for the Raspbian version.

    1. sure, i'm not mad enough to want to reinvent the wheel... i'm using standard containers and adding some mods to have them all save data in a given folder, for easier backup... yes, i know about the different archs; that will be the next step once i've everything set up on the x86 vm...

      look here for the current ones, but today's are missing, and of course even the ones there need more work... and a proper github repo, not gists...
      https://gist.github.com/fragolinux

  7. Antonio,

    The docker stuff looks good - happy to help out wherever you need it (i run the script on a VM cluster under ESXi at home). I think the Balena angle is probably the wrong way to go, as you would end up finding missing components further down the track.

    Let me know if you want a hand with anything - i have been running docker for about a year now - using Portainer, cockpit, Sonarr, Radarr etc etc

    1. thanks, i'll take a look
      right now i'm looking for a way to give credentials to the services that need them... for now i'm creating some pwgen.sh scripts in the setup folders, which accept a password as a parameter and write it (encrypted where needed) into the service config files... but it's a work in progress, so this will surely change... unfortunately the native docker SECRETS feature is available only in SWARM setups...
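A pwgen.sh along those lines might look like the sketch below; the file name, config key and hashing step are all assumptions (a real service would need its own password format, e.g. mosquitto_passwd for Mosquitto):

```shell
#!/bin/sh
# Sketch of a pwgen.sh: takes a password, stores a hashed form in a config file.
# The sha256 hashing and the "password_hash" key are stand-ins, not what any
# real service in the repo uses.
pwgen() {
  pass=$1; conf=$2
  if [ -z "$pass" ] || [ -z "$conf" ]; then
    echo "usage: pwgen PASSWORD CONFFILE" >&2
    return 1
  fi
  hash=$(printf '%s' "$pass" | sha256sum | cut -d' ' -f1)
  printf 'password_hash=%s\n' "$hash" > "$conf"
  echo "wrote $conf"
}

# e.g. pwgen "YOUR_CHOSEN_PASSWORD" mosquitto/data/passwd.conf
```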

  8. Yeah, it does get annoying when you read about something in docker and then find it is only available in Swarm!

    Why not read a config file on the Samba/shared drive that the user creates, with a username and password of their choosing in it? On first start of the docker the services do not start but, using the passed-in GUID and UID, it accesses the shared/mounted folder and reads the config file, then reboots and autostarts the services (after creating the user and chown/chmod-ing the files they need).

    Craig
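Craig's suggestion could be sketched as a first-boot check in a container entrypoint; every file and variable name here is invented for illustration:

```shell
#!/bin/sh
# First-boot sketch: credentials come from a file the user drops on the
# shared folder before first start. All file and variable names are invented.
first_boot() {
  shared=$1                      # the mounted Samba/shared folder
  conf="$shared/user.conf"       # hypothetical file: USERNAME=... / PASSWORD=...
  stamp="$shared/.configured"
  if [ ! -f "$stamp" ]; then
    if [ ! -f "$conf" ]; then
      echo "no $conf yet - services not started" >&2
      return 1
    fi
    . "$conf"                    # brings USERNAME and PASSWORD into scope
    echo "creating user $USERNAME"    # real code: useradd, chown/chmod the data files
    touch "$stamp"               # later boots skip straight to the services
  fi
  echo "starting services"
}
```

The stamp file is what makes the flow idempotent: the one-time user creation happens only when the container has never seen the config before.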

  9. and today, recreated the nodered container, based not on the standard one but on the slim one (100mb when just deployed, vs more than 700mb for the standard...), and with support for compilation (needed for serialport, for example, which is now correctly installed as a node)... and added all of Peter's selection of nodes, of course 🙂
    only auth and access to physical hw are missing, which i can't test in a virtual machine, of course 🙂

  10. Any existing / published guides on how to use this in docker?
    I'm using Docker containers from the official docker site on Synology, and I'm pretty new to the Docker idea; how does this work?
    Am I supposed to build a container first, from Github?

    1. at this time, i'm testing only in an x86 virtual machine, as it's easier and faster... when it's all set up, i'll adapt it to the armhf and arm64 archs...

      for now, you can set up a docker-enabled virtual machine, download the git repository, go into the folder of the wanted service and run
      docker-compose up -d

      if there's also a Dockerfile in the folder, the command to use is:
      docker-compose up -d --build

      and if there's a pwgen.sh file too, you should, BEFORE running docker-compose, set a password for the relative service using the syntax:
      bash pwgen.sh YOUR_CHOSEN_PASSWORD

      i'm doing my best to keep all of this consistent between the various services, so the same info is valid for any of them, and all of this will be automated in the end, with ARCH autodetection and a menu similar to Peter's

      and for now you'll have to work out the rest on your own; i can't give details for all of this right now... i suggest installing ubuntu server 18.04 then following this to add the latest docker-ce: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04

      every service will store its data in a subfolder called "data" inside its own directory, under DockerIOT... so you can just back up the full DockerIOT folder and you'll have all you need to bring your data to a new server
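The per-service rules described here (pwgen.sh first if present, --build only when there's a Dockerfile) can be captured in a small helper; it prints the commands rather than executing them, so it's safe to experiment with:

```shell
#!/bin/sh
# Helper applying the per-service rules to any service folder.
# It prints the commands instead of running them; drop the echos to go live.
deploy() {
  dir=$1; pass=$2
  ( cd "$dir" || exit 1
    if [ -f pwgen.sh ] && [ -n "$pass" ]; then
      echo "bash pwgen.sh $pass"            # set the service password first
    fi
    if [ -f Dockerfile ]; then
      echo "docker-compose up -d --build"   # custom image: build it
    else
      echo "docker-compose up -d"           # stock image: just bring it up
    fi )
}

# e.g. deploy ~/DockerIOT/mosquitto MyPassword
```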

        1. just published an update to Mosquitto, to run with its own user... some dirty bash tricks, but it works... for now, to test, this is the list of commands to run:
          1) if pwgen.sh exists, run it before anything else, passing a password as argument: bash pwgen.sh YOUR_CHOSEN_PASSWORD
          2) if a Dockerfile exists, run: docker-compose up -d --build
          3) if a Dockerfile does not exist, run: docker-compose up -d

          i suggest starting with the PORTAINER container, so you can manage them all via http://yourip:9000

        1. sure
          i've been back at work for a week now; previously i was on vacation, without a board to work with and on a limited data plan...

          now i'm fully operational and have started porting to SBCs what i already did on the x86 virtual machine which, i have to say, is still not complete, but i'm very optimistic it will do as much as Peter's script does right now...

          i'm trying to avoid making different dockerfiles or configs for different setups, and i've already succeeded in unifying some services so the SAME setup can work on both... this will make everything easier to maintain in the future...

          i remotely worked on an SBC that Peter kindly made available to me while i was still in transit; you can see in some comments on the OrangePi +2E blog post how to set it up so it can both run Peter's setup and be prepared for a Docker setup... standard legacy kernels do not allow docker to work properly on these little boards...

          if you want to test something, i suggest you prepare an ubuntu 16.04 or 18.04 x86 vm, follow the online instructions to install docker-ce (NOT the included one, which is older), then pull down my git repository, go into the service directory you want to test and run the commands i put in some comments above (run pwgen to create a password, then build the container using docker-compose)...

          i want to port what i did to the sbc, probably this weekend, then work in parallel: on x86 first (it's easier in a vm...), then on the sbc, and release updates regularly...

          but as Peter suggested in chat, i need to write down some instructions on basic Docker concepts and how to move between the objects you'll be working with... oh, you need to install docker-compose too; as said, the info is on the net, and i'll add it to the wiki on my git in the future... i use it to avoid writing the long "docker run" commands many people use on youtube...

          being at work about 8 hours a day, i can dedicate only spare time to this, and a little more on weekends...

          but i'm confident something usable will be available soon; for now, play with an x86 vm if you like, and report any problems you find 🙂

          1. Mr Shark, Thank you for such a comprehensive reply. I need to set up a VM again on my Windows PC. I also have an older PC upon which I have ubuntu 16.04 as a dual boot.
            I am very new to Linux and never used Docker so I think I'll wait until you have developed it further.
            Thank you again for all your hard work to make the use of Peter's script easier for novices like me.

  11. so, porting what i did this summer (in an ubuntu virtual machine) to a real SBC has started, and i'm quite happy with how well it works... just a few little changes to the files i already created in the vm, and everything works on an Orange Pi PC+ right now...

    in the mean time, i think that during testing i'll create some makefiles, so it's even easier to deploy services: just go into a service folder and run "make password=yourpassword", changing the password to whatever you want... this will do everything needed to set everything up and protect it with a password...
    and so, creating a menu like the one in Peter's script will be way easier, too...
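A per-service makefile along those lines might look like this; it is entirely hypothetical, just wiring the "make password=yourpassword" idea to the docker-compose commands already discussed (note that recipe lines must be indented with tabs):

```makefile
# Hypothetical per-service Makefile - an untested sketch, not from the repo.
password ?=

.PHONY: up
up:
	@test -n "$(password)" || { echo "usage: make password=yourpassword"; exit 1; }
	@if [ -f pwgen.sh ]; then bash pwgen.sh "$(password)"; fi
	@if [ -f Dockerfile ]; then docker-compose up -d --build; else docker-compose up -d; fi
```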

    now, back to work!
    to be continued...
