Node Red and HighCharts

HighCharts charts in Node-Red

First things first – HighCharts are not free for commercial use. However, for your own website, a school site or a non-profit organisation they are free to use. You are also OK to make modifications. So – that's the "cover my back" bit out of the way – now, how to use them in Node-Red.

I've looked at this before, because their charts are very, very nice. As some of you know, I'm now sold on Grafana, but I also really like the idea of using charts with my favourite database – SQLITE – which, as far as I can see, Grafana does not support. Indeed, I'm glad there is a Node-Red node for InfluxDB, as I had trouble getting my head around putting data into InfluxDB otherwise (the node makes it REALLY easy).
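
To give an idea of what "REALLY easy" means in practice, here is a minimal sketch of the kind of function node that might sit in front of the InfluxDB output node. I'm assuming the node-red-contrib-influxdb "out" node here, and the measurement and field names are made up purely for illustration – check the node's documentation for the exact message properties it accepts.

[pcsh lang="js" tab_size="4" message="" hl_lines="" provider="manual"]

// Hypothetical function node feeding a node-red-contrib-influxdb "out" node.
// Assumption: the node writes the fields in msg.payload to the measurement
// named in msg.measurement (or in its own config) - property names may vary
// by version, so treat this as a sketch rather than gospel.
msg.payload = {
    temperature: msg.payload.temperature,   // numeric field from a sensor
    humidity: msg.payload.humidity
};
msg.measurement = "environment";            // illustrative measurement name
return msg;

[/pcsh]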

And so it was that I came back to HighCharts. One of their demos that I really like involves plotting a variable number of time-based lines on a graph (temperature, humidity etc.) – with potentially missing values. With other chart systems I found that if you had the odd missing value due to a power failure or similar, you had to fix that yourself – the HighCharts demo simply draws a smooth curve across the gap.

Another of their demos features zooming, and it was only in the last couple of days, when taking another look, that I realised you could combine the two – zoomable multi-line graphs.

What you see on the right is little more than a tweaked version of their basic demo – with a few bits taken out to make the whole thing fit in the limited space we typically have in the Node-Red Dashboard – and zooming built in.

I tested everything in JSFiddle and then tried dumping it into Node-Red – nothing – zilch. I found that loading the libraries remotely just would not work. Out of desperation I grabbed the libraries (one of which is big), put them into one of my usual "myjs" directories on the Raspberry Pi and lo and behold, it all worked.
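
For anyone wanting to replicate that: the usual way to get a "myjs" folder served up locally is the httpStatic entry in Node-Red's settings.js. The snippet below is only a sketch – the path is an example – but whatever directory you point it at is then served from the root of the Node-Red web server, so /myjs/highcharts.js resolves locally.

[pcsh lang="js" tab_size="4" message="" hl_lines="" provider="manual"]

// Excerpt from ~/.node-red/settings.js - the path is an example only.
// Files under this directory are served as static content by Node-Red,
// so <path>/myjs/highcharts.js becomes available at /myjs/highcharts.js.
module.exports = {
    // ... all your other settings stay as they were ...
    httpStatic: '/home/pi/.node-red/public'
};

[/pcsh]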

Sample data into HighCharts

Above you see my test – now bear in mind this is very preliminary. In the brown FUNCTION nodes I have two different sets of data just to test things… and these feed an object in msg.payload through to the Dashboard UI. I've made two of these charts – using two different DIVs. One object is called "mine1" and the other is called "mine2" – they fill DIV ids "container1" and "container2" respectively. Everything else is the same.

So – starting off with their demo here http://www.highcharts.com/demo/spline-irregular-time

I took that demo and wrapped the lot in a function (used as an object constructor), so as to hide everything from the outside world in case I wanted two or more of these on a page, and exposed the actual charting code as an externally accessible method on that object. This could be way more efficient – but I wanted something up and running and, as you will see, I have two completely independent graphs here.

Here is the code inside the blue template. The only things that change are the reference to the DIV (container1/container2) and the references to the new object (mine1/mine2) – and of course near the end I've fed in unique titles and subtitles when creating the object, though the latter do not have to be unique.

[pcsh lang="js" tab_size="4" message="" hl_lines="" provider="manual"]

<script src="/myjs/highcharts.js"></script>
<script src="/myjs/exporting.js"></script>
<div id="container1" style="min-width: 300px; height: 300px; margin: 0 auto"></div>

<script>
   // Redraw this chart whenever a new msg (HighCharts series array in msg.payload) arrives
   (function(scope){
        scope.$watch('msg', function(msg) {
          mine1.graph(msg.payload);
        });
    })(scope);


var sample_data1=[{
        name: 'Winter 2012-2013',
        // Define the data points. All series have a dummy year
        // of 1970/71 in order to be compared on the same x axis. Note
        // that in JavaScript, months start at 0 for January, 1 for February etc.
        data: [
            [Date.UTC(1970, 9, 21), 0],
            [Date.UTC(1970, 10, 4), 0.28],
            [Date.UTC(1970, 10, 9), 0.25],
            [Date.UTC(1970, 10, 27), 0.2],
            [Date.UTC(1970, 11, 2), 0.28],
            [Date.UTC(1970, 11, 26), 0.28],
            [Date.UTC(1970, 11, 29), 0.47],
            [Date.UTC(1971, 0, 11), 0.79],
            [Date.UTC(1971, 0, 26), 0.72],
            [Date.UTC(1971, 1, 3), 1.02],
            [Date.UTC(1971, 1, 11), 1.12],
            [Date.UTC(1971, 1, 25), 1.2],
            [Date.UTC(1971, 2, 11), 1.18],
            [Date.UTC(1971, 3, 11), 1.19],
            [Date.UTC(1971, 4, 1), 1.85],
            [Date.UTC(1971, 4, 5), 2.22],
            [Date.UTC(1971, 4, 19), 1.15],
            [Date.UTC(1971, 5, 3), 0]
        ]
    }, {
        name: 'Winter 2013-2014',
        data: [
            [Date.UTC(1970, 9, 29), 0],
            [Date.UTC(1970, 10, 9), 0.4],
            [Date.UTC(1970, 11, 1), 0.25],
            [Date.UTC(1971, 0, 1), 1.66],
            [Date.UTC(1971, 0, 10), 1.8],
            [Date.UTC(1971, 1, 19), 1.76],
            [Date.UTC(1971, 2, 25), 2.62],
            [Date.UTC(1971, 3, 19), 2.41],
            [Date.UTC(1971, 3, 30), 2.05],
            [Date.UTC(1971, 4, 14), 1.7],
            [Date.UTC(1971, 4, 24), 1.1],
            [Date.UTC(1971, 5, 10), 0]
        ]
    }, {
        name: 'Winter 2014-2015',
        data: [
            [Date.UTC(1970, 10, 25), 0],
            [Date.UTC(1970, 11, 6), 0.25],
            [Date.UTC(1970, 11, 20), 1.41],
            [Date.UTC(1970, 11, 25), 1.64],
            [Date.UTC(1971, 0, 4), 1.6],
            [Date.UTC(1971, 0, 17), 2.55],
            [Date.UTC(1971, 0, 24), 2.62],
            [Date.UTC(1971, 1, 4), 2.5],
            [Date.UTC(1971, 1, 14), 2.42],
            [Date.UTC(1971, 2, 6), 2.74],
            [Date.UTC(1971, 2, 14), 2.62],
            [Date.UTC(1971, 2, 24), 2.6],
            [Date.UTC(1971, 3, 2), 2.81],
            [Date.UTC(1971, 3, 12), 2.63],
            [Date.UTC(1971, 3, 28), 2.77],
            [Date.UTC(1971, 4, 5), 2.68],
            [Date.UTC(1971, 4, 10), 2.56],
            [Date.UTC(1971, 4, 15), 2.39],
            [Date.UTC(1971, 4, 20), 2.3],
            [Date.UTC(1971, 5, 5), 2],
            [Date.UTC(1971, 5, 10), 1.85],
            [Date.UTC(1971, 5, 15), 1.49],
            [Date.UTC(1971, 5, 23), 1.08]
        ]
    }];
    
// One of these objects per chart: theContainer is the DIV id to draw into and
// title/subtitle are display text. Call .graph(seriesArray) to (re)draw the chart.
function doit(theContainer, title, subtitle)
{
this.graph = function(xx)
{
Highcharts.chart(theContainer, {
    chart: {
        zoomType: 'x',
        type: 'spline'
    },
    title: {
        text: title
    },
    subtitle: {
        text: subtitle
    },
    credits: { enabled: false },
    xAxis: {
        type: 'datetime',
        dateTimeLabelFormats: { // don't display the dummy year
            month: '%e. %b',
            year: '%b'
        },
        title: {
            text: 'Date'
        }
    },
    
    exporting: {
            buttons: {
                contextButton: {
                    enabled: false
                } } },

    yAxis: {
        title: {
            text: 'Snow depth (m)'
        },
        min: 0
    },
    tooltip: {
       headerFormat: '<b>{series.name}</b><br>',
       pointFormat: '{point.x:%e. %b}: {point.y:.2f} m'
    },

    plotOptions: {
        spline: {
            marker: {
                enabled: false
            }
        }
    },

    series: xx
});

}

}
// Create the chart object for this template and draw the sample data once;
// after that, fresh data arriving in msg.payload is drawn via the watch above.
mine1 = new doit("container1", "title1", "Subtitle1");
mine1.graph(sample_data1);
</script>

[/pcsh]

And here is the code inside one of those function nodes – the only thing that changes between the two is the data.

[pcsh lang="js" tab_size="4" message="" hl_lines="" provider="manual"]

msg.payload=[{
        name: 'Win 2012-2013',
        // Define the data points. All series have a dummy year
        // of 1970/71 in order to be compared on the same x axis. Note
        // that in JavaScript, months start at 0 for January, 1 for February etc.
        data: [
            [Date.UTC(1970, 9, 21), 0],
            [Date.UTC(1970, 10, 4), 0.28],
            [Date.UTC(1970, 10, 9), 0.25],
            [Date.UTC(1970, 10, 27), 0.2],
            [Date.UTC(1970, 11, 2), 0.28],
            [Date.UTC(1970, 11, 26), 0.28],
            [Date.UTC(1970, 11, 29), 0.47],
            [Date.UTC(1971, 0, 11), 0.79],
            [Date.UTC(1971, 0, 26), 0.72],
            [Date.UTC(1971, 1, 3), 1.02],
            [Date.UTC(1971, 1, 11), 1.12],
            [Date.UTC(1971, 1, 25), 1.2],
            [Date.UTC(1971, 2, 11), 1.18],
            [Date.UTC(1971, 3, 11), 1.19],
            [Date.UTC(1971, 4, 1), 1.85],
            [Date.UTC(1971, 4, 5), 2.22],
            [Date.UTC(1971, 4, 19), 1.15],
            [Date.UTC(1971, 5, 3), 0]
        ]
    }, {
        name: 'Win 2013-2014',
        data: [
            [Date.UTC(1970, 9, 29), 0],
            [Date.UTC(1970, 10, 9), 0.4],
            [Date.UTC(1970, 11, 1), 0.25],
            [Date.UTC(1971, 0, 1), 1.66],
            [Date.UTC(1971, 0, 10), 1.8],
            [Date.UTC(1971, 1, 19), 1.76],
            [Date.UTC(1971, 2, 25), 2.62],
            [Date.UTC(1971, 3, 19), 2.41],
            [Date.UTC(1971, 3, 30), 2.05],
            [Date.UTC(1971, 4, 14), 1.7],
            [Date.UTC(1971, 4, 24), 1.1],
            [Date.UTC(1971, 5, 10), 0]
        ]
    }, {
        name: 'Win 2014-2015',
        data: [
            [Date.UTC(1970, 10, 25), 0],
            [Date.UTC(1970, 11, 6), 0.25],
            [Date.UTC(1970, 11, 20), 1.41],
            [Date.UTC(1970, 11, 25), 1.64],
            [Date.UTC(1971, 0, 4), 1.6],
            [Date.UTC(1971, 0, 17), 2.55],
            [Date.UTC(1971, 0, 24), 2.62],
            [Date.UTC(1971, 1, 4), 2.5],
            [Date.UTC(1971, 1, 14), 2.42],
            [Date.UTC(1971, 2, 6), 2.74],
            [Date.UTC(1971, 2, 14), 2.62],
            [Date.UTC(1971, 2, 24), 2.6],
            [Date.UTC(1971, 3, 2), 2.81],
            [Date.UTC(1971, 3, 12), 2.63],
            [Date.UTC(1971, 3, 28), 2.77],
            [Date.UTC(1971, 4, 5), 2.68],
            [Date.UTC(1971, 4, 10), 2.56],
            [Date.UTC(1971, 4, 15), 2.39],
            [Date.UTC(1971, 4, 20), 2.3],
            [Date.UTC(1971, 5, 5), 2],
            [Date.UTC(1971, 5, 10), 1.85],
            [Date.UTC(1971, 5, 15), 1.49],
            [Date.UTC(1971, 5, 23), 1.08]
        ]
    }];
    
return msg;

[/pcsh]

With more work, this could be really useful – and don't forget to check the BIG range of graphs and other widgets in the HighCharts demos – there is some wonderful stuff in there.

Back to databases – as you can see it would not be hard at all to pull data like this out of just about any database! That’s my next job after I tidy this up a bit.
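
To give an idea of the shape of that job, here is a rough sketch of a function node that could sit between a SQLITE query node and the chart template above. The table and column names (readings, sensor, ts, value) are made up purely for illustration – the only real requirement is that the output matches what HighCharts expects: an array of {name, data} objects where data is a list of [timestamp, value] pairs.

[pcsh lang="js" tab_size="4" message="" hl_lines="" provider="manual"]

// Hypothetical function node: turn rows returned by a SQLITE node, e.g.
//   SELECT sensor, ts, value FROM readings ORDER BY sensor, ts;
// into the series array the HighCharts template above expects.
// Table and column names are illustrative only.
var rows = msg.payload || [];
var bySensor = {};

rows.forEach(function (row) {
    if (!bySensor[row.sensor]) {
        bySensor[row.sensor] = { name: row.sensor, data: [] };
    }
    // HighCharts wants [x, y] pairs; ts is assumed to be a millisecond timestamp
    bySensor[row.sensor].data.push([row.ts, row.value]);
});

// Flatten the lookup object into the array the template feeds to .graph()
msg.payload = Object.keys(bySensor).map(function (k) { return bySensor[k]; });
return msg;

[/pcsh]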


27 thoughts on "Node Red and HighCharts"

  1. I’m a little further on – not ready to write anything down yet but I now have a SQLITE database storing points and putting them out (multiple lines) to one of these charts. Just have to do the bit about aggregating over time – lots more coffee needed.

    A little furniture polishing to do first…

  2. My setup looks like this at the moment:

    1. An Asustor NAS box on the LAN containing 2 Western Digital WDC WD20EFRX spinning rust disks in a RAID 1 mirror
    2. This NAS box runs a flavour of linux
    3. node-RED is installed on the NAS
    4. All useful data on the NAS is backed up to a locally connected USB HDD and also backed up/sync’d offsite to dropbox and google drive

    I also rsync dump data from a couple of other computers on the LAN to the NAS and that data gets backed up in the same way.

    This works ok for me at the moment, but it does limit me to the single node-RED instance that's installed on the NAS box, so if/when that becomes a problem I'll scale out to multiple raspberry pis (or similar). If/when I do scale out I'll likely keep using the NAS as a central point of data backup and DB instance(s).

  3. I'm with your sentiments about using external services, but I had trouble installing Pete's super script and in the end went for just node.js, npm, NR, dashboard nodes and a few other nodes. It's all running fine. The cloud was an experiment to off-load data storage and analytics and upload status info, and therefore avoid the need to open up my router for inbound monitoring traffic.

    I may well go for my own local solution but as Pete says, there are then other issues. One issue is backup. My thoughts are to ftp daily to my ISP web space or Dropbox etc. At most I lose a day’s data – I can live with that.

    Great blog Pete.

  4. Hi Pete, thanks for highlighting this.

    Presumably in "real life" there is no need to define the data at var sample_data1=[{ … or to call mine1.graph(sample_data1); you can instead just let the node wait for a payload input.

    As a wider issue I'm mulling over whether to go for a traditional SQL database or a NoSQL option, but at the end of the day it's not just the data (structured or not) but the availability of suitable tools like HighCharts.

    I just played with BlueMix and all sensor data goes into a NoSQL database, which I thought was cool until I saw the limited graph options and that to do my own data analytics I had to convert the data to MySQL and sign up for 2 extra services (MySQL and Data Analytics). I suppose they have to make their money somehow 😉

    Alan

    1. BlueMix, also known as "cloud", also known as "someone else's computers you have little or no control over". If you go down that route, be it BlueMix, AWS, Azure or any other, then one thing is certain: sooner or later you'll have to pay for something.

      1. I’m with Joe on this – I wasted AGES a while ago getting to grips with Google speech – I put it out on the web – told others about it – put it in my home control – and at a whim, Google changed the terms and that was the end of that. Then I did the same with IVONA – all you had to do to use their excellent speech system was register…. I put that into my systems – and then… they stopped accepting registration.

        Which is why a lot of my stuff now has downloaded libraries – no matter what happens – they can’t take them away. I use Local Node-Red, local SQLITE, local MOSQUITTO. Sure, at some point in the future it is possible I may not be able to get updates but that’s it.

        Another cloud service I went for was the weather. Set it all up – had my little living room display delighting the wife with weather updates and….. they changed something – and it all just stopped working.

        As for databases – if you are doing this on an SBC with an SD memory, the choice of database must take into account how quickly it is going to eat your SD.

        1. Hi Pete, WRT the limits of the SBC, using RPi3 and changing it to boot from an SSD might help a bit. More room and supposedly longer life. You could also change the SSD to a hard drive. I’m about to play with this on one of my Pi 3s. Not sure about the overall speed but I’m not sure most HA really needs much more speed.

        2. To avoid SD wear-out I use a file system in RAM, and twice per day (at 6:00 am and 6:00 pm) I copy the SQLite databases to SD. There is no absolute guarantee you are not going to lose data, but I have never lost anything, because my device (Seagate GoFlex NET with Debian) is connected to a UPS. Since my device is limited in RAM (128 MB) I use ZRAM to compress data and use less memory.

          1. Did you follow any particular guide, or have you written one, on how to stop SQLITE accessing FLASH and then copy the RAM DB to disk?

            1. * https://www.jamescoyle.net/knowledge/951-the-difference-between-a-tmpfs-and-ramfs-ram-disk
              * https://www.jamescoyle.net/how-to/943-create-a-ram-disk-in-linux
              * https://en.wikipedia.org/wiki/Tmpfs

              So to piece this jigsaw together I'd guess the trick is to create a RAM-based file system and mount it so it looks just like a regular file system. Once you have that you can use it like a regular file system and store your SQLite DB in there. Then, at whatever interval you like, dump your SQLite DB from the RAM disk mount to the SD mount using a cron job (probably a simple copy shell script). Nice trick really.

            2. Peter.

              By default my database is stored in the /tmp directory. Inserts and queries access the file in that location. At 6:00 am and 6:00 pm (using a cron task) I do a simple "cp /tmp/sensores.db /jc/sensores.db", where /jc is a directory on the SD card. But I read that copying while in the middle of a database transaction could corrupt the destination database. I don't do transactions, but I do inserts (which are effectively small transactions). Now I am changing my procedure to:

              sqlite3 /tmp/sensores.db ".backup /jc/sensores.db"

              Which is safer.

              Additionally I have defined an init.d script that copies files from /tmp to SD on shutdown and from SD to /tmp at start-up.

              1. Hi there Juan – marginally too many assumptions about my knowledge there. Is the /tmp directory in RAM by default – or did you do something to make that happen?

                I could see how at power up one might copy databases to a ram directory – and then at some point in the morning, copy them back onto disk – and I was aware of the issue of copying them live – your SQLITE3 option seems like a good idea.

                As my kit runs 24/7, I imagine the sequence would be – given a RAM directory…

                On power up copy DB to RAM – always use RAM DB.

                At X in the morning, copy file.db to oldfile.db and use sqlite3 to copy the RAM version to file.db…

                Yes, that makes sense – I just need to clarify getting a RAM directory – something I've never done – and I'm wondering how much room one could sensibly allocate to that given a 500Meg RAM total limit… 100 Meg?

                1. Peter.

                  As far as I know, /tmp is in RAM whenever it is mounted as a tmpfs or ramfs file system (Raspbian uses tmpfs, I think).

                  Everyone is concerned about SD wear-out. To avoid it you should set some options in your system, like noatime on the SD file systems so a file is not written to every time you access it, moving all temporary files to RAM, etc.

                  Some time ago I made several changes to all my Debian configurations to avoid wear-out, but I didn't write down what I did (my bad, I have to get used to writing down what I do). However, I found this post very useful on the topic: http://raspberrypi.stackexchange.com/questions/169/how-can-i-extend-the-life-of-my-sd-card

                  Currently I own a Raspberry Pi, but it is not in a full production environment. My production environments are on a Seagate GoFlex Net and a Seagate FreeAgent Dockstar, which are limited-RAM devices (128MB). So I use ZRAM for /tmp and swapping. In fact I have swap on both SD and ZRAM, but ZRAM has priority so the system won't swap to SD until ZRAM is exhausted.

                  Newer Linux kernels include ZRAM by default. All you need to do is use a few commands to create the ZRAM partitions and set their attributes/parameters.

                  I have modified the following script (originally written by someone else) to create two ZRAM partitions: one for /tmp and the other for swapping.

                  —————————————————————-
                  #!/bin/sh
                  ### BEGIN INIT INFO
                  # Provides: zram
                  # Required-Start: $local_fs
                  # Required-Stop: $local_fs
                  # Default-Start: S
                  # Default-Stop: 0 1 6
                  # Short-Description: Use compressed RAM as in-memory swap
                  # Description: Use compressed RAM as in-memory swap
                  ### END INIT INFO

                  # Author: Antonio Galea
                  # Thanks to Przemysław Tomczyk for suggesting swapoff parallelization
                  # Distributed under the GPL version 3 or above, see terms at
                  # https://gnu.org/licenses/gpl-3.0.txt

                  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin

                  case "$1" in
                  "start")
                  param=`modinfo zram|grep num_devices|cut -f2 -d:|tr -d ' '`
                  # create 2 zrams (zram0 and zram1) which are located at /sys/block
                  modprobe zram $param=2

                  # zram0 is going to be used for swap
                  echo 256000000 > /sys/block/zram0/disksize
                  mkswap /dev/zram0
                  swapon /dev/zram0 -p 10

                  # zram1 is going to be used for tmp
                  echo 100000000 > /sys/block/zram1/disksize
                  mkfs.ext2 /dev/zram1
                  mount /dev/zram1 /tmp
                  ;;
                  "stop")
                  swapoff /dev/zram0
                  wait
                  sleep .5
                  modprobe -r zram
                  ;;
                  *)
                  echo "Usage: `basename $0` (start | stop)"
                  exit 1
                  ;;
                  esac
                  ——————————————————————

                  Excuse me but I don’t know how to beautify code in your blog. Dashes are not part of the script.

                  Create a file named /etc/init.d/zram with the previous script content.

                  Then execute the command 'sudo update-rc.d zram defaults', which will create a service for you named zram. You can start it with 'service zram start' and stop it with 'service zram stop'. It will be started automatically when you boot your system.

                  Let me explain some of the commands inside this script:

                  – 'modprobe zram $param=2' creates two ZRAM partitions ($param would contain 'num_devices' at this point; see the previous command).
                  – 'echo 256000000 > /sys/block/zram0/disksize' assigns 256000000 bytes to the zram0 partition. This memory is not used at the beginning, only allocated when needed.
                  – 'mkswap /dev/zram0' creates a swap area at zram0.
                  – 'swapon /dev/zram0 -p 10' enables swapping at zram0 with priority 10. By default the priority is 0 on regular swap devices; assigning 10 to zram0 tells the system to use it before lower-priority ones.

                  – 'echo 100000000 > /sys/block/zram1/disksize' assigns 100000000 bytes to zram1, again allocated only when needed.
                  – 'mkfs.ext2 /dev/zram1' creates an ext2 file system. You don't need journaling here, so ext2 is better than ext3 or ext4.
                  – 'mount /dev/zram1 /tmp' mounts zram1 at /tmp. From here on, /tmp is going to be in a compressed (ZRAM) file system.

                  You should configure Node-RED, and whatever else is going to use the SQLITE databases in RAM, to access them at /tmp.

                  Your databases are always going to be at /tmp, so you must copy them from SD to RAM at boot/start-up and copy them from RAM to SD at shutdown.

                  I have this script:

                  ———————————————————————–
                  #!/bin/bash
                  ### BEGIN INIT INFO
                  # Provides: peter-copy-sqlite-db
                  # X-Start-Before: freeswitch
                  # Required-Start: $network $syslog $named $local_fs $remote_fs
                  # Required-Stop: $network $syslog $named $local_fs $remote_fs
                  # Default-Start: 2 3 4 5
                  # Default-Stop: 0 1 6
                  # X-Interactive: true
                  # Short-Description: Initialisation phase 1
                  # Description: This script performs the first phase of Juan C.'s initialisation.
                  ### END INIT INFO

                  # Load the VERBOSE setting and other rcS variables
                  . /lib/init/vars.sh

                  # Define LSB log_* functions.
                  # Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
                  . /lib/lsb/init-functions

                  do_start() {

                  /bin/cp /jc/sensores.db /tmp
                  }

                  do_stop() {

                  /bin/cp /tmp/sensores.db /jc
                  }

                  case "$1" in
                  start)
                  [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
                  do_start
                  ;;
                  stop)
                  [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
                  do_stop
                  ;;
                  esac
                  ————————————————-

                  Again, excuse me but I don’t know how to beautify code in your blog. Dashes are not part of the script.

                  I have removed some other commands not needed for this explanation.

                  Create a file named /etc/init.d/peter-copy-sqlite-db and insert the above code into it. Then execute the command 'sudo update-rc.d peter-copy-sqlite-db defaults'. Start with 'service peter-copy-sqlite-db start' and stop with 'service peter-copy-sqlite-db stop'. You probably won't execute it manually; it will run automatically when you restart your system.

                  As you insert records into your databases they are going to grow and grow in size, and so will your zram partition. So once every night, using a cron task, I move everything except the last 7 days' records to a permanent database on SD using this script:

                  ———————————————————
                  f=`sqlite3 /tmp/sensores.db "select max(timestamp)-604800000 from sensores"`
                  g=`sqlite3 /tmp/sensores.db "select max(timestamp)-604800000 from refrigeradora"`
                  sqlite3 /tmp/sensores.db "attach '/jc/sensores-permanent.db' as perm; begin transaction; insert into perm.sensores select * from sensores where timestamp<=$f; insert into perm.refrigeradora select * from main.refrigeradora where timestamp<=$g; delete from sensores where timestamp<=$f; delete from refrigeradora where timestamp<=$g; commit; vacuum;"
                  ——————————————————–

                  Copy the previous content to a file located wherever you want. Let's assume it is /jc/backup-sensores-db.sh. Then execute the command 'chmod +x /jc/backup-sensores-db.sh'.

                  What this script does is copy records from /tmp/sensores.db to /jc/sensores-permanent.db, delete the copied records from /tmp/sensores.db and shrink /tmp/sensores.db (vacuum). I have two tables in the database, named sensores and refrigeradora.

                  If you need records from /tmp/sensores.db and /jc/sensores-permanent.db at the same time in your node-red, then you must attach /jc/sensores-permanent.db to the connection that is using /tmp/sensores.db.

                  To execute this script every night you need to edit/create the file '/etc/cron.d/root' and add two entries to it:

                  0 0,6,18 * * * root /usr/bin/sqlite3 /tmp/sensores.db ".backup /jc/sensores.db"
                  50 23 * * * root /jc/backup-sensores-db.sh

                  The first line will execute '/usr/bin/sqlite3 /tmp/sensores.db ".backup /jc/sensores.db"' every day at midnight, 6:00 am and 6:00 pm (18:00). The second one will execute '/jc/backup-sensores-db.sh' every day at 23:50.

                  Make sure cron is enabled (started automatically when you boot your system) by executing the command 'sudo update-rc.d cron defaults'

                  Excuse me for such a large post.

        3. IVONA still works fine, but it's being deprecated in favour of Polly on AWS now.
          This link gives another idea of how to use Polly locally, with a workflow approach, on a R.Pi (useful info but no good for Node-Red users)
          https://www.losant.com/blog/speech-synthesis-on-the-raspberry-pi-using-node.js?utm_source=nodeweekly&utm_medium=email

          When the IVONA situation settles down, it'll probably result in there being an updated IVONA node – and things will just continue as before. It's just annoying that to use it, one will now have to provide their credit card as part of the AWS sign-on process 🙁

          1. Well you are one up on me Pete as I thought they were simply no longer taking registrations full stop. It is IMHO the best sounding system, such a shame…

          2. Just an update – I just checked, all good for node-red because the existing IVONA node works fine with AWS generated keys.
            Must admit, I wish I didn’t have to provide my financial info as part of Amazon AWS registration.
            Just realised my password isn’t strong enough – so going back in to make it better 🙁

            1. Oh excellent, so there is hope yet. For anyone going to their site, however, I hope they've updated it, because it DID say "registrations closed".

  5. I think I'll have a go at HighCharts for both temp/hum graphs and also gauges. I feel the NR gauges are a little limited.

Comments are closed.