A fellow reader and collaborator who has helped me with “the script” over the past two years – Antonio Fragola, aka MrShark – has asked me to put up something about his current work in progress.
He has begun working on Docker with a view to migrating “the script” to a fully “dockerised” container-based version.
Docker allows easy management of software (you simply “drop” an old container and recreate it from a newer version, re-attach your data volume and you are done), easier migration of your data from one platform to another (configurations are architecture-agnostic, so you can back up your data folder and move it from a Raspberry Pi to a PC virtual machine in no time), lower disk space requirements, FAR lower memory requirements, and other things Antonio suggests we discuss here…
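For readers new to Docker, that upgrade cycle boils down to something like the following (the folder name here is purely illustrative, not necessarily one from Antonio's repo):

cd ~/DockerIOT/nodered
docker-compose pull      # fetch the newer image
docker-compose up -d     # recreate the container; the data volume is simply re-attached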
Antonio is working on creating a GitHub repository so that everybody can collaborate on and enhance his work. His plan is to have a git repo that we can download, with a folder for every service currently supported by The Script (and more: he has already added Home Assistant, for example). In every service folder there will be a DATA folder containing all the configs, logs, data, databases and whatever else the service needs to work, so backing up that folder will give you ALL you need to restart from scratch…
Management will be easy thanks to a web GUI (Portainer), and all the basic commands needed will have a clear wiki page as a reference on GitHub… and of course, to maintain the ease of use of The Script, all the required info for the various services (user, password, etc.) will be collected before running the setup, automatically, in the way you’re already familiar with…
Finally, the end result should be multi-platform, so there will be auto-detection of the actual hardware architecture and, where needed, automatic adjustment of the download sources…
Work started from scratch just 4 days ago, and he has already set up Portainer, Mosquitto, Home Assistant and Caddy (a VERY LIGHT alternative to Apache, which we use just for basic HTML pages, phpLiteAdmin and a few more services in the script, so no further need for the Apache+PHP monster)…
Antonio is encouraging discussion of his vision and all help or suggestions are welcome… https://github.com/fragolinux/DockerIOT
Someone has made something similar to what I was starting to do, and done it better 😀
https://github.com/gcgarner/IOTstack
and watching Andreas’ video, it seems this setup is very well done, great!
BUT… you’re going to lose the hardware part, which I was going to preserve in my efforts… so no serial, I2C, etc… they can be added later, but you need to know how to do that with Docker…
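For reference, hardware access isn’t impossible with Docker, it just has to be passed through explicitly; roughly like this (the image name and device paths are only placeholders):

docker run -d \
  --device /dev/ttyUSB0:/dev/ttyUSB0 \
  --device /dev/i2c-1:/dev/i2c-1 \
  some/image

(the same thing in a docker-compose file is a “devices:” list)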
script to install docker & docker-compose on Debian Buster:
https://gist.github.com/fragolinux/a1b975bb6520a5d3b860d03527f9a2b9
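The gist has the full details; in essence it boils down to the usual convenience-script route, roughly like this (a sketch only):

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi       # run docker without sudo (log out and back in afterwards)
sudo apt-get install -y python3-pip
sudo pip3 install docker-compose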
Hi Antonio,
thanks for all the work you have done to “dockerise” the script.
I have tried to run the nodered docker image on both Raspbian Stretch and Ubuntu 18.04 (on a Raspberry Pi 3B) and get an error when docker-compose gets to the command "RUN apk add --no-cache python make g++ gcc linux-headers udev".
The error is:
standard_init_linux.go:190: exec user process caused "exec format error"
ERROR: Service 'nodered' failed to build: The command '/bin/sh -c apk add --no-cache python make g++ gcc linux-headers udev' returned a non-zero code: 1
Can you help?
Regards, Sean.
As said in other comments, I no longer work on this project as I moved to Home Assistant, and Hass.io in particular, with its very well done addons… sorry
I thought I had read the email trail but never picked up on this.
No problem, thanks for replying.
Hi Antonio, I have Portainer installed already. I have looked at the logs for Node-RED and InfluxDB, both in the addons and in Portainer. Apart from a single line showing “internal server error” there does not seem to be anything else wrong. I think I’ll have to load an earlier HA snapshot and/or try the setup on another machine. I can’t think what else to try. I could also try uninstalling the NR HA addon and then re-installing it.
Try contacting Frenck on the Home Assistant Discord channels and ask him, or open an issue on GitHub and give details.
I have it working now on a ‘snapshot’ I had saved from a few days ago. I can only think something got corrupted. Thank you for all of your suggestions
Antonio, I am in need of your help again! Despite numerous Google searches I can’t find the answer to my problem.
I installed the InfluxDB nodes via the palette manager in the HA Node-RED addon.
I have set up a database and when I try to send data to it from an MQTT topic I get the following error in the debug window: Error: Internal server error. I can’t find any reference to that on Google for that specific node.
So I thought I’d uninstall that node and re-install it; no joy, the uninstall option is not shown under the palette manager. Another Google search – no joy.
I do have the original Node-RED installed as per ‘The Script’ but since the HA version is in a container they shouldn’t conflict, should they? Anyway, I thought I’d uninstall the original version; another Google search on how to uninstall Node-RED – nothing again.
The only thing I can think to do is to uninstall the NR in HA and then re-install it. Can you offer any suggestions, please?
They do not conflict as long as you use only one at a time… if they both use the same TCP port, well, they just can’t both run, that’s how networking works… well, they COULD if the services were bound to different IP addresses, but that’s another story, I avoid doing that…
Other than that, “internal server error” is just the standard error services give when something is wrong in the config or, worse, in how they work… you can only report that on the issue pages of both InfluxDB and the addon… take a look at the logs, anyway…
Install the Portainer addon, add the config line to show Home Assistant’s own containers too (they’re hidden by default), and take a look at the container logs from there… or go to the addon page in Home Assistant and refresh the logs at the bottom of that page…
Antonio, thank you for all of your help, very much appreciated. I decided to use my other spare PC with a newer processor. I updated to Ubuntu 18.04.1, installed Hass.IO and this time it used the amd64 images. I was easily able to install Node-RED, Grafana, InfluxDB, Portainer, Samba, Configurator and a host of other goodies from the Hass.IO addons.
I am very impressed with this docker system and I think my struggles with the older machine provided some useful learning.
I have now got to sort out using an SQLite database to store some data that is not needed in HA. I know Hass.IO uses SQLite for its master DB. I also want to install phpLiteAdmin to manage my database.
I hope your project is progressing well too.
I have that working, remind me tomorrow.
To use SQLite in Node-RED, add these 2 lines to the JSON config of the addon (usually the system_packages area is empty), then you can install the sqlite node… as for phpLiteAdmin, no need: there’s an SQLite web addon you can install, and you can change the DB it uses in its config after it’s installed, if you don’t want to use the Hass.io one.
"system_packages": [
  "build-base",
  "gcc"
],
Antonio, thanks for that information. However, I can’t seem to set the path to my DB in the SQLite web config; my DB is in /home/pi/dbs/test1.db.
If I put that in the config it gives an error saying it can’t find the DB. The wiki says the path to the DB is relative to /config and I guess that is in a container, so how does one provide a path to something that is outside of the container?
You can’t do that… Docker containers cannot see the underlying filesystem unless you allow them to, using the VOLUMES parameter…
So your best alternative is to put that test1.db into the “config” folder of Home Assistant; this way you can just put its name in the SQLite web addon’s JSON config page, without any path (the addon uses the “config” folder as its starting point)…
Or, not tried, you could symlink your DB into the config folder and again modify the addon config to point to it, again without any path, only the filename, unless you put it in some subfolder of the config one…
To find the config folder, use the Configurator addon, or better the IDE addon, or the Samba addon, or the SSH addon, or whatever 🙂
P.S.: containers are not “sealed”… if you go in from the underlying OS you can find their folder structure somewhere… in my case it’s /usr/share/hassio/homeassistant/ – but better to use the addons suggested, or you risk losing any layers sitting on top of that…
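To illustrate the VOLUMES idea, a host folder is made visible inside a container with a -v mapping, roughly like this (the image name is only a placeholder):

# map the host folder /home/pi/dbs to /data inside the container
docker run -d -p 8080:8080 -v /home/pi/dbs:/data some/sqlite-web-image

(in docker-compose the same thing is a “volumes:” entry such as /home/pi/dbs:/data)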
Antonio, thank you once again for your help and I hope you are feeling better. I had the Samba addon set up so I used that. It is quite straightforward to change the DB once one knows how. I try not to bother you with too many questions but I couldn’t find the answer I needed on the web. I really hope my questions and your answers help others on Peter’s blog who are following a similar path.
No problem at all, ask what you want; if I can help, I will 🙂
I tend to be rude with those who don’t use a search engine before asking, because their time is NOT more valuable than mine 🙂
I got my share of RTFMs when I was a baby penguin, and I pass those RTFMs on when someone acts as if he lives in a pre-Google age… 😀
Well, my project is pretty much halted; this is why I haven’t kept this article updated…
You may ask why? Well, because over the 1st rule of hacking (“why do anything? Because you CAN”…) I prefer the 2nd one: “never reinvent the wheel”…
In the last month I started playing with Home Assistant, which as far as I know Peter has never blogged about here, and then, after messing around with the standard Python install, I came to “Hass.IO”… which, guys, is just awesome! It does what I was trying to do, and much more!
We’re playing with it these days (mostly me, but I succeeded in “infecting” Peter too, and with his expertise he will surely soon be further along than me). It’s available in 2 flavours: as a Docker container you can install easily on top of pretty much any little SBC with a decent amount of RAM, or as a dedicated image which arrives preinstalled with all the stuff needed to run Home Assistant… I’m using the 1st scenario on a Rock64 board with 2GB RAM and 32GB eMMC, and the latter on an RPi3B+ kindly donated to me, with equivalent results: it just works…
You’ll say: who the f..k cares about Home Assistant, we’re NodeRed guys here! Sure, but as “pragma is my dogma”, I hope you’re not fundamentalists and will accept new adventures and ways to achieve the same goal, better or easier…
So, once installed in either of the 2 variants, you can go to the HASSIO menu items and just start installing addons… all the ones we’re used to playing with are present (the image shows the ones I’ve installed), and in the store (free, of course) there are Grafana, InfluxDB, Samba, Pi-Hole, MotionEye, etc., too… they’re just a “select, install, configure, start” sequence away…
Not a SINGLE command to run in a console; once installed you just need to feed in your parameters (user, password, SSL or not, and not much else in general), and that’s it: start the addon and wait for it to install the corresponding Docker container… don’t be scared by Docker: you’ll never have to deal with it, or even know it exists under the hood!
Once they’re installed you can start interacting with them as you did previously, via their respective web interfaces… AND/OR you have the new Home Assistant to interact with: if you install NodeRed you’ll find its nodes already installed, so you can, for example, create your panels in Home Assistant and do the automations in NodeRed if you’re already fond of it.
I’ve set up my NodeRed, Mosquitto + web client, TasmoAdmin to manage my Tasmota-flashed Sonoffs, and SQLite + web interface, in a matter of a couple of hours… AND I have it all secured and accessible from the internet thanks to the Nginx reverse proxy, DuckDNS and Let’s Encrypt, with only the HTTPS 443 port externally NATted on my router, in secure SSL and with dynamic DNS…
To be clear, I’m quite new to all of this; I’m following the many good people on YouTube who are doing excellent work on Home Assistant, which supports TONS and TONS of devices… and just to give you a feel for how easy it is to back up and restore something (EVERYTHING!) with Hass.IO, take a look at the latest video by Rob, who migrates his setup from a Raspberry Pi to a virtual machine (2 COMPLETELY different hardware setups!) in a matter of minutes… https://www.youtube.com/watch?v=vnie-PJ87Eg
This will solve the most troublesome aspect of our setups: backups! Take a snapshot, move the snapshot to a new setup, restore and wait, done! We’ll soon talk again, and in more depth, about all of this, surely… the contagion has taken Peter too, so… and even if you don’t plan to use Home Assistant AT ALL, you can use its addon system to have an easily migratable setup… 😉
An excellent write up Antonio, very interesting. I am just about to go off on 10 days holiday so guess what my night time reading will be!!
I had wondered why you had gone quiet on the subject of running ‘the script’ on Docker. I presume it is possible to add one’s own ‘gadgets’, for example Peter’s Nano expansion of an ESP8266, to the HA system.
They’re not script-related, so they just work as before…
About studying Home Assistant, here are the best video sources you can find:
Keep in mind that everything you see them do with AUTOMATIONS in their configuration.yaml can be done in nodered via the related components, once you’ve set them up…
https://www.youtube.com/channel/UCLecVrux63S6aYiErxdiy4w/videos
https://www.youtube.com/channel/UC2gyzKcHbYfqoXA5xbyGXtQ/videos
https://www.youtube.com/channel/UC7G4tLa4Kt6A9e3hJ-HO8ng/videos
https://www.youtube.com/channel/UC3sknm_GUCDESM7EmVvkgzg/videos
https://www.youtube.com/channel/UCSKQutOXuNLvFetrKuwudpg/videos
https://www.youtube.com/channel/UC5ZdPKE2ckcBhljTc2R_qNA/videos
https://www.youtube.com/user/Jfelipe83M/videos
Antonio, thank you for the very helpful links. I have had some success with installing Hass.IO; it has taken me a while because I have very little experience of Linux. I have installed it on Ubuntu 18.04 with the graphical interface.
I have successfully installed Node-RED, Portainer, TasmoAdmin and Samba to date.
I really wanted to install Grafana but in the addons section of Hass.IO it says the addon is not available for my system. I looked for and tried other methods from some of the links you listed but I have not been able to get it to install in Docker. Did you manage to install Grafana? And if you did, could you point me to the source or a tutorial please?
I really like Hass.IO and want to make it my go to system.
I also have all the goodies from Peter’s script running alongside it.
The good thing about installing Hass.io on top of a standard Ubuntu, well… is that there’s a standard Ubuntu underneath! So you can go 2 ways… standard Grafana on Ubuntu itself, OR the official Docker container:
https://hub.docker.com/r/grafana/grafana/
http://docs.grafana.org/installation/docker/
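The official-image route from those links is essentially a one-liner; roughly:

# keep Grafana's data in a named volume so dashboards survive container upgrades
docker run -d --name grafana -p 3000:3000 -v grafana-storage:/var/lib/grafana grafana/grafana

Then browse to http://yourip:3000 (default login admin/admin) and change the password.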
Antonio, thanks for the information. I had tried running the install code as per the official installation but I get an error message which says "docker: no matching manifest for unknown in the manifest list entries".
I don’t know what that means, can you help with that please?
Pull the grafana master image; info here, it’s a recent addition…
https://github.com/grafana/grafana/issues/13186
Antonio, I am trying to run my Docker / Home Assistant setup on an Intel-based PC set up as dual boot. However, it is an old PC and it is running the 32-bit version of Ubuntu (18.04.1).
The Docker website says Docker CE is supported on x86-64 or amd64 systems. When I look at the HA addons I have working, they are using i386 images in the containers.
I think that is the problem with Grafana: there is no i386 image for Docker.
I’ll try again on a newer dual-boot PC I have that should be 64-bit. If I can get everything to work I’ll buy a small fanless PC like the one you blogged about.
Then ask on that GitHub for i386 support to be added to the manifest, or go for the standard Grafana install on Linux.
Or use this alternative repo: https://github.com/urfin78/grafana-docker-i386
So, porting what I did this summer (in an Ubuntu virtual machine) onto a real SBC has started, and I’m quite happy with how well it works… just a few little changes to the files I had already created in the VM, and everything works on an Orange Pi PC+ right now…
In the meantime, I think that during testing I’ll create some makefiles, so it’s even easier to deploy services: just go into a service folder and run “make password=yourpassword”, changing the password to whatever you want… this will do everything needed to set the service up and protect it with a password…
And so, creating a menu like the one in Peter’s script will be way easier, too…
now, back to work!
to be continued…
I’m holding off until you’re done and you’ve tested the whole Docker thing on a PI. All looks very exciting.
As soon as I’m back at work and I have a decent, permanent, unlimited data plan 😀
Hi Mr Shark, any chance you could give an update on the progress of the Docker system you are working on, please?
sure
I’ve been back at work for a week now; previously I was on vacation, without a board to work with and with a limited data plan…
Now I’m fully operational and have started porting to SBCs what I already did on the x86 virtual machine, which I have to say is still not complete, but I’m very optimistic about getting it to do as much as Peter’s script does right now…
I’m trying to avoid having different Dockerfiles or configs for different setups, and I’ve already succeeded in unifying some services so the SAME setup works on both architectures… this will make everything easier to maintain in the future…
I remotely worked on an SBC that Peter kindly made available to me while I was still in transit; you can see in some comments on the OrangePi +2E blog post how to set it up so it can run both Peter’s setup and be prepared for a Docker setup… standard legacy kernels do not allow Docker to work properly on these little boards…
If you want to test something, I suggest you prepare an Ubuntu 16.04 or 18.04 x86 VM, follow the online instructions to install docker-ce (NOT the included one, which is older), then pull down my git repository, go into the service directory you want to test and run the commands I put in some comments up here (run pwgen to create a password, then build the container using docker-compose)…
I want to port what I did onto the SBC probably this weekend, then work in parallel, on x86 first (it’s easier in a VM…), then on the SBC, and release updates regularly…
But as Peter suggested in chat, I need to write down some instructions on basic Docker concepts and how to move between the objects you’ll be working with… oh, you need to install docker-compose too, as said; info is on the net, and I’ll add it to the wiki on my git in the future… I use it to avoid writing long “docker run” commands as many do on YouTube…
Being at work about 8 hours a day, I can dedicate only spare time to this, and a little more at weekends…
But I’m confident something usable will be available soon; for now play with an x86 VM if you like, and report any problems you find 🙂
Mr Shark, thank you for such a comprehensive reply. I need to set up a VM again on my Windows PC. I also have an older PC upon which I have Ubuntu 16.04 as a dual boot.
I am very new to Linux and never used Docker so I think I’ll wait until you have developed it further.
Thank you again for all your hard work to make the use of Peter’s script easier for novices like me.
Any existing / published guides on how to use this in Docker?
I’m using Docker containers, from the official Docker site, on Synology, and I’m pretty new to the Docker idea – how does this fit in?
Am I supposed to build a container first, from GitHub?
In due time; for now I’m working on getting the services running correctly…
At this time, I’m testing only in an x86 virtual machine, as it’s easier and faster… when it’s all set up, I’ll adapt it to the armhf and arm64 architectures…
For now, you can set up a Docker-enabled virtual machine, download the git repository, go into the folder of the service you want and run
docker-compose up -d
If the folder also contains a Dockerfile, the command to use is:
docker-compose up -d --build
And if there’s also a pwgen.sh file, then BEFORE running docker-compose you can set a password for that service using the syntax:
bash pwgen.sh YOUR_CHOSEN_PASSWORD
I’m doing my best to keep all of this consistent between the various services, so the same info is valid for any of them, and all of this will be automated in the end, with ARCH auto-detection and a menu similar to the one in Peter’s script.
And for now you’ll have to search on your own for how to do that; I can’t give details for this too right now… I suggest installing Ubuntu Server 18.04 then running this to add the latest docker-ce: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
Every service will store its data in a subfolder called “data” inside its own directory, under DockerIOT… so you can just back up the full DockerIOT folder and you’ll have all you need to bring your data to a new server.
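So a backup, under that layout, could be as simple as archiving the whole tree; a sketch, with paths assumed:

# optionally stop the services first so databases are not written mid-backup
tar czf ~/dockeriot-backup-$(date +%F).tar.gz -C ~ DockerIOT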
OK, Thank you!
Just published an update to Mosquitto, to run with its own user… some dirty bash tricks but it works… for now, to test, this is the list of commands to run:
1) if pwgen.sh exists, run it before anything else, passing a password as argument: bash pwgen.sh YOUR_CHOSEN_PASSWORD
2) if a Dockerfile exists, run: docker-compose up -d --build
3) if a Dockerfile does not exist, run: docker-compose up -d
I suggest starting with the PORTAINER container, so you can manage them all via http://yourip:9000
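Putting those three cases together with the Portainer suggestion, a first test run on a fresh VM might look roughly like this (folder names are whatever is in the repo at the time):

git clone https://github.com/fragolinux/DockerIOT.git
cd DockerIOT/portainer
docker-compose up -d                  # assuming no Dockerfile and no pwgen.sh in this folder
cd ../mosquitto
bash pwgen.sh YOUR_CHOSEN_PASSWORD    # pwgen.sh present: set the service password first
docker-compose up -d --build          # Dockerfile present: build, then start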
Blynk Local Server, with automatic JAR update, just published… work goin’ on 🙂
https://github.com/fragolinux/DockerIOT/tree/master/blynk
And today, recreated the nodered container, based not on the standard image but on the slim one (100MB when just deployed, vs more than 700MB for the standard one…), and with support for compilation (needed for serialport, for example, which now installs correctly as a node)… and added all of Peter’s selection of nodes, of course 🙂
Missing only auth and access to physical hardware, which I can’t test in a virtual machine, of course 🙂
Yeah, it does get annoying when you read about something in Docker and then find it is only available in Swarm!
Why not read a config file on the Samba/shared drive that the user creates, with a username and password of their choosing in it? On the first start of the container the services do not start but, using the passed-in GUID and UID, it accesses the shared/mounted folder, reads the config file, then reboots and auto-starts the services (after creating the user and chown/chmod-ing the files they need).
Craig
Antonio,
Have a look at how these guys present their containers, very consistent and easy to manage – a nice touch is to accept the GID/UID that the user wants to run it as
https://github.com/linuxserver/docker-sonarr/blob/master/README.md
Craig
Thanks, I’ll take a look.
Right now I’m looking at a way to give credentials to the services that need them… for now I’m creating some pwgen.sh scripts in the setup folders, which accept a password as a parameter and write it (possibly encrypted) into the service config files… but it’s a work in progress, so this will surely change… unfortunately the native Docker SECRETS feature is only available in SWARM setups…
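Roughly, the idea is something like this (just a simplified sketch; the real scripts differ per service):

#!/bin/bash
# pwgen.sh <password> - drop the chosen password into this service's config (sketch only)
PASS="$1"
if [ -z "$PASS" ]; then
  echo "usage: bash pwgen.sh YOUR_CHOSEN_PASSWORD"
  exit 1
fi
# replace a placeholder in a template and write the real config into the data folder
sed "s/##PASSWORD##/$PASS/g" config.template > data/config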
Antonio,
The Docker stuff looks good – happy to help out wherever you need it (I run the script on a VM cluster under ESXi at home). I think the Balena angle is probably the wrong way to go as you would end up finding missing components further down the track.
Let me know if you want a hand with anything – I have been running Docker for about a year now, using Portainer, Cockpit, Sonarr, Radarr etc. etc.
GitHub repository just published, so I can work on it and document things as I go… Peter, could you add this URL to the article? Thanks
https://github.com/fragolinux/DockerIOT
as stated there: BEWARE: STILL IN EARLY STAGES, WORK IN PROGRESS!
Yes, the swarm is nice but you lose the ability to decide where a container runs. That would only matter with local I/O and I guess that is where a standalone CPU would be incorporated. The management nodes, of which there must be at least two, also have to agree, so there is checking too.
Did you have a look for ready-made containers here: https://registry.hub.docker.com/search?q=library? The only thing to be wary of is the ARM versions necessary for a Raspberry Pi. Portainer can import these very simply and many don’t need any changes. The big benefit is that they are tried and tested. HA has 10M pulls, with 100K for the Raspbian version.
Sure, I’m not mad enough to want to reinvent the wheel… I’m using standard containers and adding some mods to have them all save data in a given folder, for easier backup… yes, I know about the different architectures; that will be the next step once I have everything set up in the x86 VM…
Look here for the current ones, but today’s are missing, and of course even the ones there need more work… and a proper GitHub repo, not gists…
https://gist.github.com/fragolinux
Aha,
I was using ‘Swarm Mode’ which is where my issues probably came from. Hadn’t considered using stand-alone CPUs as the swarm looked so attractive.
However, using Portainer to install containers is really easy and I think that unless you use Portainer this way the management benefits might be limited. Volumes, networks, container variables etc. are all easily done in Portainer and, for me, having used unRaid’s Docker management, access to the ready-made libraries of containers is extremely valuable for the non-standard requirements.
Ooohhh, swarm… that would be nice for the future, to give redundancy to whoever needs it… I like it 😀
BTW I’m new to Docker (reading the Docker Cookbook, as I prefer practical stuff to the philosophical books, sometimes…); I need it for work too, so I’m actively studying it 🙂
I played around with this quite a bit and IMHO there are still some bugs in Portainer that cause very peculiar problems. However, the idea is sound. Homebridge is a really good candidate, especially if you want to run a number of instances, either to isolate dodgy plug-ins or to go past the accessory limit.
Node red is also a good choice as this also allows segregation of services.
One of the good points is the auto restart of containers that fail and the dual management.
Runs on Pi Zeroes too.
Homebridge? Isn’t that something for bitten-apple fans? Not going to add it…
If you mean Home Assistant, yes, it’s there… but how big it is! The Docker image is 1.8GB…
In the meantime, node-red is up and running 😉
I need more tests, and of course I need real hardware soon (to pass through the actual devices, like serial ports…), and to test nodes that need recompiling… but it’s a good start…
BTW I’m using Portainer just for easy management during creation of the containers and the like… the actual container creation I’m doing is done via Dockerfiles and compose files… even though I could convert some of them to Portainer templates, which would be AWESOME…
I love using Docker (the official NR container on Docker Hub) when I need to do dev work on my node-red nodes. Just start up a new container, tell it to map the ports to something else (the machine has node-red running on the defaults, so the container needs to remap) and in a few seconds I can test the new code. I do need to make better use of tags with git (a different issue). 🙂
I’ve not tried this on a Pi.
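That workflow is pretty much a single command; something along these lines (the host already runs Node-RED on 1880, so the test container gets remapped to 1881):

# throwaway Node-RED instance for testing, data kept in a local folder
docker run -it --rm -p 1881:1880 -v ~/nr-dev:/data nodered/node-red
# (older official images were published as nodered/node-red-docker)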
Peter and Antonio, that’s great news. I had looked at settings.js and the braces didn’t look to be correct, but that was on the assumption that braces have to be paired (open/close) like they do in ‘C’.
I guess I’ll have to reformat the HD partitions to clear out the stuff the old script loaded.
Is the docker version likely to run on Ubuntu 18.04?
This one is waiting for Antonio. He made the original braces change. I’m guessing we will sort that today sometime.
https://github.com/node-red/node-red/blob/master/settings.js
new 0.19 addition: localfilesystem
It broke the script… I’ll start from this stock file to create our new version…
The braces problems should be solved, FOR EVER…
Just sent Peter 2 scripts, for node v6 and v8, with the node-red settings part completely redone… now I download a pre-made settings.js file (https://tech.scargill.net/iot/settings.txt) which will ALWAYS work, as it does not depend on the one generated at startup by nodered itself… and this way I only need 6 lines to modify it, and only to add credentials; otherwise nodered will work anyway, without auth, of course… a safer way to go than relying on the changes the nodered guys make to their file…
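# the first two seds uncomment the adminAuth and httpNodeAuth sections in the downloaded settings.js; the remaining four replace the placeholders with the chosen usernames and their bcrypt-hashed passwords: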
sed -i -e “s#\/\/adminAuth#adminAuth#” /home/pi/.node-red/settings.js
sed -i -e “s#\/\/httpNodeAuth#httpNodeAuth#” /home/pi/.node-red/settings.js
sed -i -e “s#NRUSERNAMEA#$adminname#” /home/pi/.node-red/settings.js
sed -i -e “s#NRPASSWORDA#$bcryptadminpass#” /home/pi/.node-red/settings.js
sed -i -e “s#NRUSERNAMEU#$username#” /home/pi/.node-red/settings.js
sed -i -e “s#NRPASSWORDU#$bcryptuserpass#” /home/pi/.node-red/settings.js
changes:
Moved the node-red-contrib-opi-gpio install into the OrangePi GPIO part of the script, as it’s not needed on other boards…
Updated nodejs to the latest stable v6.14.4 and v8.11.4, and node-red to the latest v0.19.1.
Sure it will… I’m testing right now on “CoreOS” in a virtual machine; my plan is to have everything working there, then move on to supporting the Raspberry Pi and the other small boards, creating ad-hoc images for their hardware architectures (ARM based), and finally move from Docker to Balena (https://www.balena.io), which is Docker-compatible and even lighter…
I’d like to have an SBC that runs nothing more than the bare container management software; everything else should be “containerised”… with the Docker folder, with all the user data, easily backed up and cross-platform…
I’m at a good point in the virtual machine, and can’t test it on an SBC yet because I’m on vacation and on a limited data plan (my VM tests run over VPN on a VM at the office, so the big traffic is done by the VM directly on our 100Mb fibre, not by my poor 4G router 🙂 )
Just halted tests today to solve these issues with the latest nodered, then back there ASAP.
About the braces: yes, I know, the file changed again… I’m working on it… we will create a standard, ALWAYS working, basic settings.js file with some placeholders for credentials, hw enable, projects, etc… this way we no longer rely on the file generated on nodered’s 1st start, which can change from version to version, breaking the script… easier to manage this way…
This is absolutely fantastic. I would love to try docker on the Synology NAS 🙂
I think that the idea is absolutely brilliant, especially for people just beginning with Linux. Additionally, I have just tried installing the ‘script’ on a clean, freshly formatted dual-boot PC with Ubuntu 16.04. The script loaded everything without issue, I could access the web page, and everything worked perfectly except node-RED. It seems there are incompatibility issues between the Node.js versions and node-RED, or at least certain NR modules. These issues are not attributable to the ‘script’.
So if the work Antonio is doing makes resolving these issues simpler, that would make life easier.
Any idea when it will be ready to try out, Antonio?
There’s a tiny problem with the script and the latest node-red; it should be fixed tomorrow. It merely takes out a closing brace in settings.js (/home/pi/.node-red) – if that is put back in, all is then immediately well. I’m discussing that now as Antonio made a tiny change and we need to change it to another one.
Pete
Going to completely rewrite the settings.js file generation…
I’ll start from the one created on nodered’s 1st start, clean it and put some placeholders there, for hw support, credentials, projects and so on… this way we can ALWAYS have a working starting point, and easier changing of parameters via sed, which is more difficult with the standard file… then I’ll send it to Peter, who should publish it on one of his sites; I think we can put it where he publishes his index and css files…
stay tuned
Just noticed that node v8 suggests installing a new package manager, YARN, instead of the standard NPM… I’ll have to take a look, it seems promising…
https://yarnpkg.com/lang/en/
Peter’s nodes are all there, too:
https://yarnpkg.com/en/packages?q=%20&keywords%5B0%5D=scargill
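Side by side, a global install with either tool is just (for comparison only; the script itself still uses npm):

sudo npm install -g --unsafe-perm node-red
# or, with yarn:
yarn global add node-red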
And node v8 seems to have new dependencies before you can install node-red… tried on Ubuntu 16.04 and 18.04; on both, the node-red v0.19.1 install fails on nodejs v8.11.4 with a missing package: node-pre-gyp
Working on it, as v8 is the way to go ANYWAY: earlier versions are no longer recommended by the node-red guys…
https://nodered.org/docs/getting-started/installation
edit: nope, error persists…
pi@raspbuntu:~$ sudo npm $NQUIET install -g --unsafe-perm node-red
npm WARN registry Unexpected warning for https://registry.npmjs.org/: Miscellaneous Warning EAI_AGAIN: request to https://registry.npmjs.org/bcryptjs failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org:443
npm WARN registry Using stale package data from https://registry.npmjs.org/ due to a request error during revalidation.
npm WARN deprecated mailparser@0.6.2: Mailparser versions older than v2.3.0 are deprecated
npm WARN deprecated nodemailer@1.11.0: All versions below 4.0.1 of Nodemailer are deprecated. See https://nodemailer.com/status/
npm WARN deprecated mimelib@0.3.1: This project is unmaintained
npm WARN deprecated mailcomposer@2.1.0: This project is unmaintained
npm WARN deprecated buildmail@2.0.0: This project is unmaintained
/usr/bin/node-red -> /usr/lib/node_modules/node-red/red.js
/usr/bin/node-red-pi -> /usr/lib/node_modules/node-red/bin/node-red-pi
> bcrypt@2.0.1 install /usr/lib/node_modules/node-red/node_modules/bcrypt
> node-pre-gyp install --fallback-to-build
node-pre-gyp ERR! Completion callback never invoked!
node-pre-gyp ERR! System Linux 4.4.0-93-generic
node-pre-gyp ERR! command "/usr/bin/node" "/usr/lib/node_modules/node-red/node_modules/bcrypt/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build"
node-pre-gyp ERR! cwd /usr/lib/node_modules/node-red/node_modules/bcrypt
node-pre-gyp ERR! node -v v8.11.4
node-pre-gyp ERR! node-pre-gyp -v v0.9.1
node-pre-gyp ERR! This is a bug in `node-pre-gyp`.
node-pre-gyp ERR! Try to update node-pre-gyp and file an issue if it does not help:
node-pre-gyp ERR!
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: bcrypt@2.0.1 (node_modules/node-red/node_modules/bcrypt):
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: bcrypt@2.0.1 install: `node-pre-gyp install --fallback-to-build`
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: Exit status 6
thanks for the blog post 🙂
Working on Samba right now, to allow access to the all-in-one DATA folder from Windows, for easier management for the non-Linux-addicted 🙂
Samba: DONE! I’ll optimise everything once I have all the services running, of course… for now I’m publishing here as gists, I’ll create a repo ASAP…
https://gist.github.com/fragolinux