In an earlier discussion, we covered how to build an ESP32 Arduino glass touch switch with LED. Higher-end glass touch switchboards also support Alexa voice control. In this fundamental discussion, we will talk about how to combine Alexa with ESP32 Arduino and the IBM Watson IoT platform. We can use an Amazon Echo Dot, or just an Android smartphone with the Alexa app, as the verbal command input device. Basically, we will speak a trigger command to Alexa, e.g. “Alexa, turn on the fan”, which will raise an event on a middleware server and trigger an action to POST an HTTP message to our Node-RED instance, or send a cURL request directly to the Watson IoT Platform. In Node-RED we will pick up the message and send an MQTT command to our gateway (the ESP32 Arduino). The ESP32, upon receiving the command, will turn on the fan, just like in our general-purpose guide to controlling AC-powered appliances with ESP32 and IBM Watson IoT.
For a fan or light, we need a minimum of two commands:
“Alexa, turn on the fan”
“Alexa, turn off the fan”
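Each of those two spoken commands ultimately becomes one MQTT message that the ESP32 gateway receives. As a minimal sketch, here is how the application side (e.g. Node-RED or a small script) could format that message; the topic layout follows the Watson IoT application command convention, but the device type `ESP32`, device ID `fan-gateway-01` and command name `switch` are placeholders standing in for your own device registration:

```python
import json

def build_command(device_type, device_id, command, state):
    """Build the MQTT topic and JSON payload for a Watson IoT device command.

    The topic shape iot-2/type/{type}/id/{id}/cmd/{cmd}/fmt/json is what an
    application publishes to so that the registered device (our ESP32 gateway)
    receives the command on its side.
    """
    topic = "iot-2/type/{}/id/{}/cmd/{}/fmt/json".format(
        device_type, device_id, command)
    payload = json.dumps({"d": {"state": state}})
    return topic, payload

# "Alexa, turn on the fan"  -> state "on"
# "Alexa, turn off the fan" -> state "off"
topic, payload = build_command("ESP32", "fan-gateway-01", "switch", "on")
print(topic)
print(payload)
```

An actual publish would then go through any MQTT client (e.g. the Node-RED MQTT out node) authenticated against your Watson IoT organization.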
---
Controlling the speed of the fan is slightly complex on the electronics side, so we will avoid it in this fundamental guide.
For the above, we will use IFTTT (ifttt.com) as middleware. That eliminates a significant amount of coding and is sufficient for personal use. To connect the IFTTT Alexa channel with Amazon, there is a ready-to-use channel that we can test-connect to our Amazon account:
https://ifttt.com/amazon_alexa
IFTTT (ifttt.com), upon receiving the command, will poke the IBM Watson IoT platform using a normal webhook:
https://ifttt.com/maker_webhooks
The “Alexa, turn on the fan” command will trigger one set of functions, and the “Alexa, turn off the fan” command will trigger another. In case you want to use Node-RED, IFTTT also has a ready-to-use node for Node-RED. That much setup is enough for personal needs.
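The webhook the IFTTT action fires is just an authenticated HTTP POST. As a sketch of what that request looks like, the snippet below builds the equivalent call against the Watson IoT application HTTP API; the URL layout mirrors that API, but the organization ID, API key, API token, and device identifiers are all placeholders you would replace with your own credentials:

```python
import base64
import json

# Placeholder credentials -- substitute your own Watson IoT values.
ORG = "myorg6"
API_KEY = "a-myorg6-xxxxxxxxxx"
API_TOKEN = "your-api-token"

def build_request(device_type, device_id, command, state):
    """Assemble URL, headers and body for a Watson IoT device-command POST.

    Returns the pieces instead of sending them, so the shape of the request
    (the part you paste into the IFTTT webhook action) is easy to inspect.
    """
    url = ("https://{org}.internetofthings.ibmcloud.com"
           "/api/v0002/application/types/{t}/devices/{d}/commands/{c}"
           ).format(org=ORG, t=device_type, d=device_id, c=command)
    creds = "{}:{}".format(API_KEY, API_TOKEN).encode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Basic " + base64.b64encode(creds).decode(),
    }
    body = json.dumps({"state": state})
    return url, headers, body

url, headers, body = build_request("ESP32", "fan-gateway-01", "switch", "on")
print(url)
```

In the IFTTT Maker Webhooks action you would paste the same URL, select method POST and content type `application/json`, and put the JSON body in the body field.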
But for production-grade usage, you need to handle the verbal command directly on either your own server or IBM Cloud services (based on Apache OpenWhisk). We can create an Alexa skill using Watson Assistant via the Apache OpenWhisk serverless framework. IBM Cloud Functions will be used to integrate Alexa with Watson Assistant. This is where we can bring in the AI of Watson.
You: “Alexa, can Watson turn on the light?” (the “can” routes the logic through Watson; it is not a dumb on/off)
Alexa invokes IBM Cloud Functions with the input text.
The action gets the Watson Assistant context from Redis (if any).
The action gets a response from Watson Assistant (if any).
The action determines whether the requested operation can be allowed (when applicable).
The response context is stored in Redis.
The response text is sent back to Alexa.
Alexa: “OK, the light has been turned on.”
OR
Alexa: “How funny! The light is already turned on.”
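The steps above can be sketched as a minimal Cloud Functions action. Redis and Watson Assistant are stubbed with plain Python here so the control flow is visible on its own; in a real action you would swap the stubs for the Redis and Watson Assistant client libraries, and the session key and reply wording are illustrative assumptions:

```python
# Stand-in for Redis: maps an Alexa session ID to the stored Assistant context.
CONTEXT_STORE = {}

def assistant_response(text, context):
    """Stub for the Watson Assistant call.

    Decides on/off from the utterance and refuses to repeat an action that
    is already in effect -- the "How funny!" branch from the dialog above.
    """
    wanted = "on" if "turn on" in text else "off"
    if context.get("light") == wanted:
        return ("How funny! The light is already turned {}.".format(wanted),
                context)
    new_context = dict(context, light=wanted)
    return "OK, the light has been turned {}.".format(wanted), new_context

def main(params):
    """OpenWhisk-style entry point: dict in, dict out."""
    session = params.get("session", "default")
    context = CONTEXT_STORE.get(session, {})                  # 1. load context
    reply, context = assistant_response(params["text"], context)  # 2. ask Watson
    CONTEXT_STORE[session] = context                          # 3. store context
    return {"speech": reply}                                  # 4. reply to Alexa

print(main({"text": "can Watson turn on the light"})["speech"])
print(main({"text": "can Watson turn on the light"})["speech"])
```

Because the context survives between invocations, the second identical request produces the “already turned on” reply instead of repeating the action.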
Here is an example GitHub repo which can help you:
https://github.com/IBM/alexa-skill-watson-conversation
The above will help you if you are wondering how commercial smart switches work with one Alexa and many users. You may use this idea to create creative commands for a doorbell:
Case 1 :
Visitor: “Alexa, should I press the doorbell”
Alexa: “No one is in the house. Press the doorbell and speak, I will send a message to Abhishek.”
Case 2 :
Visitor: “Alexa, should I press the doorbell”
Alexa: “No one is in the house. Please phone Abhishek yourself.”
Case 3 :
Visitor: “Alexa, should I press the doorbell”
Alexa: “Yes. Go ahead.”
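The three doorbell cases above boil down to two pieces of state. As a hypothetical sketch (the function name, flags, and reply strings are illustrative, not part of any real skill), the decision logic could look like:

```python
def doorbell_reply(someone_home, voice_message_enabled, owner="Abhishek"):
    """Return the Alexa reply for a doorbell query.

    Case 3: someone is home, so the visitor should just ring.
    Case 1: nobody home, but voice messages are enabled.
    Case 2: nobody home and no voice messaging, so phone the owner.
    """
    if someone_home:
        return "Yes. Go ahead."
    if voice_message_enabled:
        return ("No one is in the house. Press the doorbell and speak, "
                "I will send a message to {}.".format(owner))
    return "No one is in the house. Please phone {} yourself.".format(owner)

print(doorbell_reply(False, True))
```

The real skill would feed `someone_home` from a presence sensor or calendar, and Watson Assistant would handle the natural-language side.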