Here’s a thought – this might already be available, but I can’t find it…
Most of us have phones – some Android, some Apple, a few Microsoft. So, sticking with Android for now – the latest Android phones (I have the HTC One M8 with Android 6.0, which is marvellous) handle speech recognition well. “Remind me to turn the cooker off in 5 minutes”. Mind you, it never actually did remind me, but never mind.
So all of that works… but what would be REALLY NICE is if the exact same thing could recognise “NODERED”.
So how about “NODERED turn kitchen light on”.
ALL that is needed is for that entire sentence – but only sentences that begin with NODERED – to end up being sent by whatever means – MQTT maybe – to the Node-Red installation. From there the most trivial of line parsers could figure out what you were trying to achieve – and do it for you.
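To make the idea concrete, here is a minimal sketch of that “trivial line parser” in Python with paho-mqtt, doing the job a Node-Red flow (MQTT-in node plus a small function node) would do; the broker address, topic names and phrase table are all invented for illustration:

    # Sketch only: subscribe to whatever topic the phone publishes to,
    # match a couple of known phrases and republish a device command.
    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.10"          # assumed local broker
    COMMAND_TOPIC = "voice/command"  # assumed topic the phone publishes to

    # crude phrase -> (topic, payload) table
    ACTIONS = {
        "turn kitchen light on":  ("home/kitchen/light", "ON"),
        "turn kitchen light off": ("home/kitchen/light", "OFF"),
    }

    def on_message(client, userdata, msg):
        text = msg.payload.decode().lower().strip()
        if text in ACTIONS:
            client.publish(*ACTIONS[text])
        else:
            client.publish("voice/reply", "Sorry, I did not understand: " + text)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe(COMMAND_TOPIC)
    client.loop_forever()

Anything fancier (stripping the NODERED prefix, handling “and”, fuzzy matching) is just more string handling on top of the same shape.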
SEE THIS UPDATE: https://tech.scargill.net/from-zero-to-star-trek/
This might be just the ticket for adding a quality voice system.
I’ve backed this from Seeed Studio.
https://www.kickstarter.com/projects/seeed/respeaker-an-open-modular-voice-interface-to-hack?ref=nav_search
Time for a new try? 😀
http://thenextweb.com/google/2016/03/23/now-can-power-app-googles-speech-recognition-software/
Ooooooh.
On the other hand… no – it all smacks too much of Google’s recent money grab. “Limited access” needs a free TRIAL of Google Cloud… I think I’ll give that a miss.
You might be interested in Amazon’s Echo. It’s not for sale directly in the U.K. yet – but it can be purchased via the USA. It works fine here.
It’s very easy to use. An example command “Set a timer for 45 minutes” – and it will alert you.
Amazon allows you to set up SKILLS – a skill tells the Echo where to post a request for a particular command (and wait for a response). I’ve been using Node-Red to handle the requests from the Echo and then send a customised response. In this way you can program the Echo to handle your own apps.
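For anyone wanting to try the same route, here is a rough sketch of that request/response handling, with Python/Flask standing in for Node-Red’s http-in and http-response nodes. The endpoint path and reply text are made up, and the JSON field names follow the Alexa custom-skill format as I understand it, so treat them as assumptions:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/echo", methods=["POST"])
    def echo_skill():
        req = request.get_json(force=True)
        # pull the intent name out of the skill request (field names assumed)
        intent = req.get("request", {}).get("intent", {}).get("name", "unknown")
        # ...this is where you would publish the intent to MQTT and decide what to say...
        return jsonify({
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText",
                                 "text": "Okay, running " + intent},
                "shouldEndSession": True
            }
        })

    if __name__ == "__main__":
        app.run(port=8080)

Node-Red would do the same thing with an http-in node, a function node building that JSON, and an http-response node.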
As you can see, there are just a couple of minor issues which hopefully someone can help me solve. Right now I can say to the phone “OK Google, turn the heating up, the lights off and the kettle on”…
Happy to release the Node-Red info for this – it is simple but effective (some of it came from my days of writing adventure games) – but I need Google to keep quiet and let me get on with it – see previous comment…
Peter, I think the reply from Pete Matthews solves one of your issues, and I also think the following information fixes the second:
From: http://tasker.dinglisch.net/userguide/en/androidpowermanagement.html
“Tasker Run In Foreground Preference
Make sure that Menu / Prefs / Monitor / General / Run In Foreground is checked.”
Thanks for this Juan – I tried it, but maybe I’m not doing it correctly as there is no joy for me and my phone (Samsung J1), i.e. Chrome still hangs around after Tasker takes over from “Google Now”.
Ok, got another one for you..
I have a keyword “TURN” – and the phrase “Turn heating down” returns “heating down” – it all works a treat, except that Google insists on butting in. As well as my code working in TASKER, Google chips in with “here are some matching pictures”.
So the question is – how to stop Google sticking its nose in!
I’ve got Android Lollipop (5.1) and set up Tasker with “Autovoice Recognised Event -> MQTT Publisher”
To stop “Google now” doing as you say, I did this:
Go into the settings for your phone. Scroll down to Accessibility and tap on that. Find “Autovoice Google Now integration” and turn it on.
Edit: Also, in your Tasker profiles, where you created all your Autovoice recognized commands, scroll down to Advanced > Do Google Now Search > make sure it’s unchecked.
Unfortunately, this doesn’t kill the “Google Chrome” session that hangs around afterwards – and the “kill app” doesn’t work on Root – so apparently you have to run this script?!? (N.B. I’ve not tried it though)
https://groups.google.com/forum/#!searchin/tasker/kill$20chrome/tasker/PPzQWNB4rQg/7mOfYpdbNKcJ
I’ve got Node-Red set up to take the commands (N.B. I just used %avcommnofilter as the payload) and at the moment I’m not sure if Node-Red should decipher the appropriate MQTT stuff and send a confirmation message back (MQTT again?) – I’m interested to see which way you go!
Thanks Pete – I went to that link you sent and halfway down is a list of instructions that bears no resemblance to any items I can see in Tasker… I could not even start.
Another way to get to “Google Now integration” is as follows…..
– From your phone home page, click on the “Autovoice” icon
– Click on the first line “Google Now Integration” and tick ON the following options: “Enabled”, “Run in Foreground” and “Only Voice”
N.B. I’ve yet to try solving the “Chrome session hanging around” annoyance.
Here is another post about the “Tasker kill” code
https://groups.google.com/forum/#!topic/tasker/fq27xG8VUjM
It seems to be more associated with how Chrome works – as some sort of housekeeping thing – and is described in the link by Jeremy Harris as follows:
“Make a profile with an app context for Chrome that runs this task when you open Chrome. It should loop every 5 min to check if Chrome is still in the foreground and will kill Chrome after 30 minutes if it’s not being used for 30 consecutive minutes. If you go back to Chrome, then the countdown starts all over”.
A good idea – but hopefully a more appropriate way can be found?
I missed that VOICE ONLY bit (actually the FIRST thing on that menu is the advert to get me to BUY Autovoice – I should do that now) – but I still get the lingering search page (which only stays there for a second… I’d just prefer it wasn’t there at all).
HOLD THE PHONE – I long-pressed Tasker to disable the thing – then long-pressed again and NOW it is WORKING at power up!!!
Ok, one last question for the WISE. To monitor outgoing MQTT messages I am running the MQTT CLIENT Android app, started up by TASKER at power up – the only problem is that it comes up full screen on the phone. How do I make TASKER start up an APP like MQTT CLIENT and then go HOME… so it’s not actually plastered all over the screen???
Almost there… I’ll blog this whole thing once it’s working properly.
Any ideas on the above guys?
I guess you should see this one too: https://groups.google.com/forum/m/#!topic/tasker/avS3E32he8U
Another one – it says “See that gray icon the top left corner of the Tasker UI? Long press it until it is in color, then back out.”
What’s the UI here (I know what a UI is) – is that the screen when you run Tasker? There’s no grey icon at the top left (as against bottom right in the previous article) – there’s an orange thing, and if I long-press that it says disabled – press again and it says enabled… which means it’s enabled – but reboot the phone and no Tasker… Very frustrating that in many of these articles people don’t put pics up…
This post could be helpful: http://androidforums.com/threads/how-do-i-get-tasker-to-autostart-itself.359311/
I looked at that “If Tasker’s On – Off switch (Tasker UI, bottom right) is “On” (green)” – all very nice, but I don’t have a “UI bottom right” – not a green anything at least – that is assuming by UI they mean go into Tasker, properties, UI. If it’s just go into Tasker, there’s no green anything there either.
Got speech recognition sending commands via Autovoice, Tasker and the Tasker MQTT plug-in…
Which leaves 2 issues – I can’t get Tasker to start up at power up… and I’m using MQTT-CLIENT to just verify the message – and though I can get Tasker to start it up (if I could get Tasker to start up at power up), it comes up full screen and not as a background thing…
Any thoughts anyone – hey this has promise!!
I do believe I have autovoice + tasker + mqtt working. Stunning !!!!
Not sure how close this is to what you are looking for, but I am sending text messages via https://moni.ai/ (Android app) -> IFTTT -> SMS to my IoT home stuff. Search for moni in IFTTT for samples. I use the command IRIS (Siri spelled backwards!). What I want is to just pick up my phone and talk to it to control things in my house; unfortunately moni requires the tap of a button to start voice receiving. Many have requested the always-listening feature, so I am assuming it is coming soon.
Mr. Scargill, you have a great blog … many thanks for posting!
BR/Keith
Could not find any way to have moni.ai send MQTT or anything like it.
Yes, you are correct.
I suppose this might work (I’ve only skimmed the article) -> http://ioneeds.com/2015/03/auto-purchase-and-ifttt-custom-api-with-mqtt-bridge-rule-manager-v1-0-are-ready-now/
But, it seems to just be adding a building block (complexity) to the system.
Progress? I changed the keyword to “abracadabra” – and said “abracadabra turn the lights on”
This time – Google’s normal speech functions did not try to kick in – and my little “Hi there” message appeared.
HURRAY… but it worked about 3 times then stopped working, i.e. after 3 variations the “hi there” finally stopped appearing.
Is anyone else having this issue with Autovoice and Tasker or was I too quick to jump in with Android 6 I wonder?
Is using a phone/wall-mounted tablet really a good solution anyway? The microphones in them are tuned for listening in close proximity only. If you are holding your phone/standing in front of a wall-mounted tablet, then you might as well just press the button to run your action/scene.
Alexa on the Amazon Echo thing looks more promising, but…
1. I don’t really want a cloud connected microphone in my home
2. You really want one in every room to provide coverage throughout your home… which would be $$$
What I really dream of is a small Arduino/ESP8266/Raspberry Pi Zero based device with a quality microphone (or microphones) attached which transmits back to your Node-Red/home server for processing and execution.
A quick Google found this:
https://wolfpaulus.com/journal/embedded/raspberrypi2-sr/
Looks like it’s all possible.
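Not the exact approach from that article, but as a sketch of the idea: the Python SpeechRecognition package (using Google’s free recogniser) plus paho-mqtt will listen on a Pi’s microphone and hand the text to Node-Red just as the phone would. The broker address and topic below are made up:

    import speech_recognition as sr
    import paho.mqtt.publish as publish

    recognizer = sr.Recognizer()

    # capture one utterance from the Pi's microphone
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        print("Listening...")
        audio = recognizer.listen(source)

    try:
        text = recognizer.recognize_google(audio)
        # hand the raw text to Node-Red to parse, exactly as with the phone
        publish.single("voice/command", text, hostname="192.168.1.10")
    except sr.UnknownValueError:
        print("Could not understand audio")

Wrap that in a loop (plus a wake-word check) and you have the always-listening room microphone.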
Erm, I wasn’t planning on using a wall-mounted job – the idea is to have the phone with you and talk to it to control stuff…
Fair enough Pete… I guess I was thinking about how I would like to use voice activation in my home.
Not trying to steer the conversation – hey, there may be avenues I’ve not even thought of – what I’m trying to discover is: is this solution imperfect, or is it my phone, or Android 6, etc…
This article could help: http://www.pocketables.com/2013/06/beginners-guide-to-tasker-part-8-autovoice.html
I think you have to configure/say a complete command like “turn living room lights on”, not just “Holly Berry”.
I don’t see it – what’s the difference… that’s just a test, and if it can’t get that, well…
What I actually want to do is say “Holly Berry turn on light 3” – and have it pass “turn on light 3” via MQTT. Then that same starting point will allow endless commands – but I can’t get past the starting block. Right now if I say “Holly Berry turn the lights on” I STILL get the “possible command received” grey block – but nothing further happens.
I have to say I have never used Tasker nor AutoVoice before.
This article could help you with the configuration: http://www.androidcentral.com/hands-free-automation-tasker-and-autovoice-part-1
All done – used the word “activate” which works well – so “activate upstairs lights on” passes “upstairs lights on” to MQTT.
BOB… Ok, I didn’t realise I was in beginners mode – I found it and started from scratch… a new profile responding to “Holly Berry”, and a new task which pops up “HI” on the screen.
I ticked that option!
Press the microphone and say “Holly Berry”…
Immediately a box appears saying “received possible commands – holly berry” – Google starts “Here is…” and that’s it.
Can someone please put me out of my misery – what on EARTH am I doing wrong here.
Oh, this time I got the full Google “Here is holly berry cottage” – and greyed out is my little “Hi there” message. How do I stop Google doing its own thing… I assumed the purpose of Autovoice was to completely override Google for those recognised messages. I don’t see the point of the grey box telling you that it has maybe recognised something – even if you say “Holly Barry”, which brought up a college in England, that warning still comes up.
Before developing anything, I just read what Jens said and I think what you need is already done.
Install Tasker and AutoVoice. Then, install MqttPublisherPlugin (https://play.google.com/store/apps/details?id=net.nosybore.mqttpublishplugin) and I guess you can now do what you want. Just let us know.
Very cool. I’m going to give these apps a try.
http://www.androidcentral.com/hands-free-automation-tasker-and-autovoice-part-1
Right, guys – this is not the problem – the problem is the operation of auto-whatsit (AutoVoice) and Tasker.
So I create a configuration with “holly berry” and speak the command – and select “holly berry” – then add a task – the task will be to say “Hi Pete”.
Test it – works.
Now try using it… I press the mic button – beep… and I say “Holly Berry” –
A message comes up: “possible commands – Holly Berry”.
What SHOULD happen is it says “Hi Pete”.
What ACTUALLY happens is that the normal voice starts off “Here is…”, gets interrupted – and then MAYBE a second or two later it will say “Hi Peter” – but more often than not it won’t – it is as if the original programming is wrestling with this.
I hasten to add I’m using an HTC One M8 with Android 6.0
So the general idea works and your plugin link is probably JUST what’s needed – if only I can convince AutoVoice and Tasker to PROPERLY take control of the speech… Does this make sense?
There is a wit.ai node and/or witd daemon that can listen for voice and output the recognised JSON.
Thank you Past. for the tip about Wit.ai.
Absolutely amazing. I was able to make this work for all my devices running my home automation website interface: cell, tablet, desktop. I can now speak in natural language and have my lights turn ON or OFF and have Ivona speak back to me. All in about 2 seconds. 🙂
1) Push an image of a microphone on the website.
2) Say “Please turn on the kitchen light”, or any way you want to say it as long as the words KITCHEN and ON are in the sentence.
3) Send the decoded response back via websockets to Node-Red to be checked.
4) Have Ivona say a nice reply message about turning ON the KITCHEN light, or ask you to say it again.
5) If the command is good, send the Kitchen On command via MQTT to one of my ESP8266s running a 433 MHz transmitter to turn on a wall socket.
Not bad for about ~2 sec of processing. A website mic interface may not be as convenient as an app built into a phone, but I now have one interface and it can be used on anything that runs a modern browser. That’s the best part. Also, there is another company called api.ai that is a lot like wit.ai.
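The check in steps 2 and 3 doesn’t need anything clever: just word spotting rather than real parsing. A sketch, with the device list, topics and reply wording invented for illustration:

    # Keyword spotting: a room word plus ON/OFF is enough to act.
    DEVICES = {"kitchen": "home/kitchen/socket", "lounge": "home/lounge/socket"}

    def check_command(text):
        """Return (topic, state, spoken reply), or a retry prompt if unsure."""
        words = text.lower().split()
        state = "ON" if "on" in words else "OFF" if "off" in words else None
        room = next((r for r in DEVICES if r in words), None)
        if room and state:
            return DEVICES[room], state, f"Turning the {room} light {state.lower()}"
        return None, None, "Sorry, please say that again"

    print(check_command("Please turn on the kitchen light"))
    # -> ('home/kitchen/socket', 'ON', 'Turning the kitchen light on')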
Sorry, forgot to add a link to the wit.ai home automation demo page.
https://labs.wit.ai/demo
Peter, could you please describe the functionality you want?
I guess the minimum would be to listen to voice, get text from the voice and send it to MQTT (configurable server, port, user, password and topic; the text sent as the message). Using Node-Red you read the text from MQTT and do whatever you want with it.
Yes, that is pretty much it: after an initial unique word or phrase it would wait for me to stop, then send the lot as an MQTT payload.
The Jasper project is trying to do that – http://jasperproject.github.io/
One user upgraded it to use the AT&T speech-to-text engine – http://hackaday.com/2014/06/07/how-to-upgrade-jaspers-voice-recognition-with-atts-speech-to-text-api/
With an internet connection you can use Amazon’s Alexa system – https://github.com/goruck/all
I don’t think that was for phones?
I misunderstood you – I thought you wanted to do the voice recognition on a controller (RasPi) like the Amazon Echo does. But you’re right, most of us carry smartphones, so why not speak into a smartphone, have it recognize the speech, then send commands to the smart home system. Good idea.
I think there is an add-on for Tasker which could do this… but it still needs “OK Google”
https://play.google.com/store/apps/details?id=com.joaomgcd.autovoice&hl=en
You can also invoke it using “Ok booboo” … especially cool when you do it in a Yogi Bear voice.. 😉
HAH… Yes, I discovered Autovoice… I put in “Holly Berry” as that’s the name of our house – and by default, entering that into Google you get “Hollyberry Cottage is a…” etc., at least I do… but when I set up a task to say “Hi Pete” on receipt of the command “Holly Berry”, it kind of worked – I got “Hollyberry Cottage is… Hi Pete.”
So a little work to go, but IF that can be made to work reliably I don’t see too much work getting Tasker to run up some app that will fire off a message to Node-Red…
Could be exciting…
“Holly Berry turn all the lights off and the heating down to 18”
In my dreams.
What about using voice commands to send a tweet or email to an account which is monitored by Node-Red? So, for example:
Tell Siri to send an email to Node-Red (the email address would need to be defined in Contacts) with the subject line: Turn hall light on.
Then use the Email node in Node-Red to do the appropriate action when that email is received.
Would that work?
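For what it’s worth, the receiving end would just be “read unseen mail, treat the subject as the command”. A sketch of roughly what Node-Red’s email node would be doing, with a placeholder mail server and credentials:

    import email
    import imaplib

    # placeholder server and account details
    imap = imaplib.IMAP4_SSL("imap.example.com")
    imap.login("nodered-box@example.com", "password")
    imap.select("INBOX")

    # fetch unread messages and treat each subject as a command
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        subject = email.message_from_bytes(msg_data[0][1])["Subject"]
        print("Command received:", subject)   # e.g. "Turn hall light on"
        # ...parse the subject and publish to MQTT as before...

    imap.logout()

The obvious downside is latency: email delivery can take anything from seconds to minutes, which matters for “turn hall light on”.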
Siri?? That’s Apple.
Yes, but presumably you’ve got an equivalent on Android, haven’t you?
WAY beyond that – I now have multiple commands working beyond my wildest dreams.
“Activate upstairs lights on and downstairs lights off and kitchen fan on”
I don’t mess about – still can’t get Tasker to start up when the phone powers on though… Working on that now.
Cracked auto startup – now I’ve just got to stop Google sometimes sneaking in – some of the explanations out there bear no relation to current Tasker screens.
Take a look at this: http://lifehacker.com/how-to-create-custom-voice-commands-with-tasker-and-aut-1282209195
I haven’t tried it, but it looks like it should be easy to do what you want.
I have AutoVoice and Tasker. The problem is that AutoVoice seems to let the normal response mechanism through for a second or so, then fails to get Tasker to operate properly. So if I were to make the key words “Holly Berry”, normally Android would start a normal response; but with AutoVoice in place the same verbal response starts, then stops, and Tasker starts to take over – but in my tests the Tasker verbal response fails, presumably because the original response had already begun.
Peter,
I just started using Tasker myself this week and I ran into the same problem you describe (the Tasker task aborting). I fixed it by setting the Tasker collision handling to “Run Both Together”.
The collision handling property can be found at the properties icon at the bottom of a task.
Hope this helps.
-Bob
Bob – I’m not seeing properties at the bottom of my one and only task in Tasker (Android).
I’m seeing text, engine, stream, pitch, speed, respect audio focus, network, continue task immediately and If.
And if I untick “respect audio focus” it works better – but Google’s female voice still managed to sneak in 2+ words first… “Here is ho…”
Hi Peter,
This link says the property icon is only visible when you are not in beginners mode. Sorry for the confusion.
http://tasker.dinglisch.net/userguide/en/activity_taskedit.html
The properties icon is on the same screen as the play button, fast forward button, + button, etc. On my screen, the icon looks like three slider controls. It says Task Edit at the top of the page.
-Bob
Peter,
One other thing that you might find useful – you can add a delay to your Tasker task. I use this with Pushbullet notifications received from my node-red server. I get the Pushbullet notification sound and then after a 1 sec delay, Tasker speaks the notification so I don’t have to look at the phone every time I get a notification. This solves the problem of the collision between the Pushbullet notification sound and the Tasker task that speaks the notification.
Another nice thing is that Pushbullet has built-in integration with Tasker.
-Bob