Snips console login without Sam

Ho ho ho says Santa! But today also me!

A lot has been going on since my last post. A lot! This is the first article with a little gift to the community! Since we started on Snips we’ve all tried to programmatically train and download our assistants. Project Alice did it, in a way that really wasn’t convenient:

  • Hey Alice!
  • Yes?
  • Update yourself
  • Ok -> Stopping Snips services, downloading the new assistant I had previously uploaded to my git repo, extracting it, placing it where it belongs and restarting Snips

Of course this worked but I had to manually train and download the assistant from the console and upload it to my git repo…

Then came the browser way. We can mimic the user activity in the browser, but somehow, at some point, we couldn’t train our assistant anymore, leaving us having to log into the console to train it before downloading it anyway…

Then came SAM, the Snips tool to manage your device and assistants/skills. Sam can train and download the assistant, provided your Snips account credentials. So if Sam can, why can’t we?

Disclaimer

  • Is this hacking?
  • No, it’s not: you only gain access to what’s rightfully yours. It’s more some reverse engineering and basic comprehension
  • Is this dangerous?
  • I won’t share any destructive endpoints here, but potentially you could end up deleting your assistant
  • Is this allowed?
  • I did ask for permission to share but never got an answer. I can’t see why it wouldn’t be allowed though, as again, you only access your own data
  • Is it hard?
  • No, not at all, the script is very short
  • I do not take any responsibility if anything bad happens: data loss, stolen keys, bans, beer spilled on keyboard, etc.

JWT

Stands for JSON Web Token and is a way to authenticate a user to rightfully access locked data while providing credentials only once. You use them daily in your browser without even knowing it. A JWT is composed of three parts, separated by dots, that are commonly base64 encoded. The first part contains the token info (the signing algorithm used), the second part contains the payload, whatever needs to be passed, and the third part contains the signature. How does it work for Snips? It’s pretty simple:

  • User asks the server to log in, sending login and password over a TLS encrypted channel
  • Server authenticates the user and sends a short-lived JWT token back to the user
  • User creates an alias and sends it to the server along with the JWT token just provided
  • Server checks the JWT token and the alias and sends a master JWT token back to the user
  • User stores the master JWT token and alias for any further connection to the server
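To make the three-part structure concrete, here is a throwaway sketch that builds a fake token and decodes its first two parts (the token here is hand-made for the demo, not a real Snips one; the signature part is just an opaque hash):

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one JWT segment, re-adding the stripped '=' padding."""
    padded = part + '=' * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Hand-made example token: header + payload + (fake) signature
header = base64.urlsafe_b64encode(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode()).decode().rstrip('=')
payload = base64.urlsafe_b64encode(json.dumps({'email': 'john@doe.com'}).encode()).decode().rstrip('=')
token = '{}.{}.fakesignature'.format(header, payload)

parts = token.split('.')
print(decode_jwt_part(parts[0]))  # token info: the algorithm used
print(decode_jwt_part(parts[1]))  # the payload
```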

What’s needed?

I’ll show the way using Python and will share a working copy of the script at the end of this article. Use Python 3: it’s not strictly essential, but Python 2 is at its end of life. The only dependency you will need is Requests
pip3 install requests

Let’s do this!

This is a very basic, quickly written script, for demo purposes only. You will need to make it more robust and add more checks! Create your Python script, call it whatever you want.

# -*- coding: utf-8 -*-
import json
import requests
import toml
import uuid

class SnipsConsole:

  EMAIL = 'john'
  PASSWORD = 'doe'

  def __init__(self):
    self._tries = 0
    self._connected = False
    self._headers = {
      'Accept' : 'application/json',
      'Content-Type': 'application/json'
    }
    with open('/etc/snips.toml') as f:
      self._snips = toml.load(f)

    if 'console' in self._snips and 'console_token' in self._snips['console']:
      self._headers['Authorization'] = 'JWT {}'.format(self._snips['console']['console_token'])
      self._connected = True
    else:
      self._login()
  • I set a maximum of 3 tries, so in case of failure we can retry, but not endlessly
  • Next comes the basic header definition
  • I use snips.toml to store the information, so everything related to Snips stays in the same place, and load it with the “toml” module: pip3 install toml
  • If the token is already in the configuration we extend the headers with the authorization token; if not, we call the login function
def _login(self):
  self._tries += 1
  if self._tries > 3:
    print("Max login tries reached, aborting")
    self._tries = 0
    return

  payload = {
    'email': self.EMAIL,
    'password': self.PASSWORD
  }

  req = self._req(url='v1/user/auth', data=payload)
  if req.status_code == 200:
    print('Connected to snips account, fetching auth token')
    try:
      token = req.headers['authorization']
      user = User(json.loads(req.content)['user'])
      accessToken = self._getAccessToken(user, token)
      if len(accessToken) > 0:
        print('Console token acquired, saving it!')
        if 'console' not in self._snips:
          self._snips['console'] = {}

        self._snips['console']['console_token'] = accessToken['token']
        self._snips['console']['console_alias'] = accessToken['alias']
        self._headers['Authorization'] = 'JWT {}'.format(accessToken['token'])
        self._saveSnipsConf()
        self._connected = True
        self._tries = 0
    except Exception as e:
      print('Exception during console token acquiring: {}'.format(e))
      self._connected = False
      return
  else:
    print("Couldn't connect to console: {}".format(req.status_code))
    self._connected = False
  • We first check if we have exceeded our tries; if yes, we just stop
  • We prepare the payload with the needed information: the email and password of your Snips console account
  • We try to connect, using a function declared later, pointing to v1/user/auth with the payload previously declared
  • If the server answers with the HTTP status code 200 it means we’ve been accepted; otherwise the account connection failed and we can’t go further
  • We fetch the pre auth token that is passed by the server back to us in the response header
  • We build a User class
  • We fetch the console access token
  • If we get a console access token, we save it and load it in our headers for further user/passwordless communication
  • We set our state to connected and clear the tries if we ever need to go through the process again
def _getAccessToken(self, user, token: str) -> dict:
  alias = 'sam-{}'.format(str(uuid.uuid4())).replace('-', '')[:29]
  self._headers['Authorization'] = token
  req = self._req(url='v1/user/{}/accesstoken'.format(user.userId), data={'alias': alias})
  if req.status_code == 201:
    return json.loads(req.content)['token']
  return {}
  • We need to define an alias for the token. This is made by generating a uuid version 4 appended to the string “sam-“. We get rid of any “-” in that string and use only the first 29 characters. Don’t ask why, it’s that way. You can replace “sam-” with anything. I use “projectalice-“
  • We use the pre auth token we got in our headers so the server knows it’s us.
  • We send the request to the endpoint “v1/user/USERID/accesstoken“. USERID comes from the previous request, when we built the “User” class
  • If the server responds with the http code “201” we’ve been accepted and we return a dict made out of the “token” part of the response content
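The alias recipe is easy to verify on its own; this little helper (my own wrapper around the exact same logic, not part of the original script) shows what comes out:

```python
import uuid

def makeAlias(prefix: str = 'sam-') -> str:
    """Prefix + a UUID version 4, all dashes stripped, capped at 29 characters."""
    return '{}{}'.format(prefix, str(uuid.uuid4())).replace('-', '')[:29]

alias = makeAlias()
print(alias, len(alias))  # always 29 characters, starting with 'sam'
```

Note that the dash in “sam-” is stripped too, since the replace happens on the whole string.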
def _saveSnipsConf(self):
  with open('/etc/snips.toml', 'w') as f:
    toml.dump(self._snips, f)
  • Quick function to save our settings to snips.toml
def _req(self, url: str = '', method: str = 'post', data: dict = None, **kwargs) -> requests.Response:
  req = requests.request(method=method, url='https://external-gateway.snips.ai/{}'.format(url), json=data, headers=self._headers, **kwargs)
  if req.status_code == 401:
    print('Console token expired or refused, need to login again')
    if 'Authorization' in self._headers:
      del self._headers['Authorization']
    self._connected = False
    if 'console' in self._snips:
      self._snips['console']['console_token'] = ''
      self._snips['console']['console_alias'] = ''
      self._saveSnipsConf()
    self._login()
  return req
  • The reason I made this _req function instead of directly using the requests built-in functions is that if any query we make to the Snips server comes back with a 401 status code, the token has a problem and we need to call the login function again. Instead of checking the status after every HTTP call, I made one function for all the calls that does the checking part
  • We send the request to the server by appending the passed url to the base url, which is https://external-gateway.snips.ai, passing the headers and the payload as well as any other accepted arguments (**kwargs)
  • If we get a “401” HTTP status code back, the token has been refused, in which case we delete the authorization header, get rid of the token in the snips.toml configuration and call the login function again. Now the self._tries surely makes sense?
class User:
  def __init__(self, data):
    self._userId = data['id']
    self._userEmail = data['email']

  @property
  def userId(self) -> str:
    return self._userId
  • A simple class to hold the userid and the user email

That’s it!!

Yep, we’ve done it! We are connected to the Snips server and we can try different endpoints: listing our assistants and skills, training the NLU or ASR, downloading the assistant zip file, etc. 🙂 Let me give you a few non-destructive endpoints. They all need the ‘Authorization’ header to be set with the JWT key to be reachable!

  • NLU status: /v3/assistant/ASSISTANT_ID/status (method ‘get’) => where ASSISTANT_ID is replaced by the id of the wanted assistant.
  • NLU training: /v1/training (method ‘post’) => data: ‘assistantId’
  • ASR status: /v1/languagemodel/status (method ‘get’) => data: ‘assistantId’.
  • ASR training: /v1/languagemodel (method ‘post’) => data: ‘assistantId’
  • Assistant listing: /v3/assistant (method ‘get’) => data: ‘userId’
  • Assistant download: /v3/assistant/ASSISTANT_ID/download (method ‘get’) => where ASSISTANT_ID is replaced by the id of the wanted assistant.
  • Logout: /v1/user/USER_ID/accesstoken/ALIAS (method ‘get’). This deletes the alias and token from snips server!
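To sketch how those endpoints plug into the script above, here’s a small helper that only assembles a request the way _req would (nothing is actually sent; the token and assistant id are made-up placeholders):

```python
BASE_URL = 'https://external-gateway.snips.ai'

def buildRequest(url: str, token: str, method: str = 'get', data: dict = None) -> dict:
    """Assemble everything needed for a console call; sending it would then
    simply be requests.request(**built)."""
    return {
        'method': method,
        'url': '{}/{}'.format(BASE_URL, url),
        'headers': {
            'Accept': 'application/json',
            'Content-Type': 'application/json',
            'Authorization': 'JWT {}'.format(token)
        },
        'json': data
    }

# NLU training for a (made-up) assistant id:
req = buildRequest('v1/training', token='my-console-token', method='post', data={'assistantId': 'proj_XYZ'})
print(req['url'])
```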

That’s about enough for today! I hope you enjoyed this little introduction to how to programmatically manage your assistant in python! As always, dev safe!

Full working copy

Here’s the link: https://github.com/Psychokiller1888/snipsSamless/blob/master/main.py

Make sure to have the dependencies installed (toml and requests) and to run it with Python 3. Run the script, type email and enter your email, type password and enter your password, then type login to log into the console and test the functions!

Links

Satellites and the multidetection hell

Hi all! It’s been a while since my last post and, to be honest, my last work on both Snips and Project Alice. My professional life has taken a huge turn, leaving me with little to no time for side projects. But here I am! Got some time to kill and I decided to attack a problem that often comes up on Discord or the forum, and even through emails lately! Let’s name it: hotword multi detection!

If you have more than just a main unit, you have surely had the issue of multiple devices catching your hotword… Isn’t it a real pain? Ok, they all catch what you are saying too, and all answer to it, but if, like me and many others, you have random speech generation, it ends up in a mess. Not to mention intents that toggle a state: Lights please! The first device to detect that turns the lights off, the second turns them back on!

There are many ways to avoid this. Here is the solution I now use, the one I kept as the grand winner among the others.

The problem

  • The hotword is detected by the main unit on more than one site id
  • The callback for the hotword detection happens before any session is created, so no session id to grab, but a site id is available
  • The session started callback is called right after the hotword is detected and you still have no way to disrupt that, despite the many requests I have made to be able to.
  • The session started callback doesn’t carry any information about the hotword that triggered it, but has a session id and a site id

 

The solution?

I have been trying various ways to overcome this: detection strength, detection timestamp, setting the mics lower, etc. The best way, which covers 95% of my problems: the order in which the hotword/session combo is started! Basically, your voice takes some time to reach your devices. Sound travels at approx. 343 meters per second in air. Even a few milliseconds can be detected by a computer. So the further away your device sits, the longer it takes for your voice to reach it, and the longer it takes for the session to be started.
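The arithmetic is easy to check with a quick back-of-the-envelope sketch:

```python
SPEED_OF_SOUND = 343.0  # meters per second, in air at roughly room temperature

def arrivalDelayMs(distanceMeters: float) -> float:
    """Time for sound to travel the given distance, in milliseconds."""
    return distanceMeters / SPEED_OF_SOUND * 1000.0

# A satellite 2 m away vs one 10 m away:
print(round(arrivalDelayMs(2), 2))   # ~5.83 ms
print(round(arrivalDelayMs(10), 2))  # ~29.15 ms
# A 0.3 s detection window covers devices up to about 103 m away:
print(round(0.3 * SPEED_OF_SOUND, 1))  # 102.9 m
```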

Solving by coding

Let’s start by adding a few callbacks to our assistant:

  • message_callback_add('hermes/hotword/default/detected', onHotwordDetected)
  • message_callback_add('hermes/dialogueManager/sessionStarted', onSessionStarted)

We will need a few data holders:

  • multiDetectionsHolder = []
  • sessions = {}

As we will need to define a time frame within which sessions are considered multi detections, we need to import threading and use threading.Timer

Ok, let’s go!

First, handle the hotword detection

def onHotwordDetected(self, client, data, msg):
  payload = json.loads(msg.payload)

  if len(self._multiDetectionsHolder) == 0:
    threading.Timer(interval=0.3, function=self.handleMultiDetection).start()

  self._multiDetectionsHolder.append(payload['siteId'])

What we do here is, when a hotword is detected, we check if there’s nothing in our holder, in which case we start a timer that will call handleMultiDetection after 0.3 seconds, which is enough time for your voice to reach a device placed about 103 meters away. We then add the site id to our holder.

Second, handle the session start

def onSessionStarted(self, client, data, msg):
  sessionId = json.loads(msg.payload)['sessionId']
  self._sessions[sessionId] = msg

Nothing more! What we did here is basically storing the session id and the message itself into our sessions holder whenever a session is started!

Third, handle them /*–*!?**/ multi sessions!

def handleMultiDetection(self):
  if len(self._multiDetectionsHolder) <= 1:
    self._multiDetectionsHolder = []
    return

  for sessionId in self._sessions.keys():
    message = self._sessions[sessionId]
    payload = json.loads(message.payload)
    if payload['siteId'] != self._multiDetectionsHolder[0]:
      self._mqttClient.publish('hermes/dialogueManager/endSession', json.dumps({'sessionId': sessionId}))

  self._multiDetectionsHolder = []

What happens here? Well, first off, this fires 0.3 seconds after any hotword is detected, be it a multi detection or a normal one. So first, let’s make sure we are handling a multi detection by checking if the holder has more than one item. If not, simply empty the holder, we don’t need it, and return.
But if there’s more than one hotword detected within the last 0.3 seconds, then we loop over the sessions and extract their site id. If that site id doesn’t match the site id of the first item in the hotword detection holder, then it must have come after, so we simply end that session!

And you know what? That’s it! You’ve got your multi hotword detection cleaner in place! Now let’s just hope Snips will give us a way to act when the hotword is detected, to block the notification sound if needed!

I hope you enjoyed this mini tutorial. This is one solution amongst others that also work. I just found this one to be the most reliable.

Oh, what’s Snips, you’re asking? Well: https://snips.ai

See you guys and dev safe!

3D printed casing

Ho ho ho snipsters and followers! Ok, I’m about 7 days late, but better late than never no?

Just wanted to shoot a little message as it’s been a long time. Pretty busy end of the year. Been working on stabilizing Project Alice while implementing minor changes and addons. I am actually thinking over the possibility of sharing it with the general public. As of now, the Snips offices have kindly asked to have a look at the code, so I decided to share it with them, as well as with Rand for his home, along with some 3D printed cases.

Last week I decided I needed a computer running Linux instead of always having to flash a Raspberry Pi and start it. This is for those cases where I need to access a Linux file system or shrink a Raspberry Pi image. And other cases as well. So I started playing around with the idea and finally designed a casing. It’s pretty cool, so I’m going to use it for the Alice main unit as well. Alice doesn’t need any mic or speaker, just a Pi running with an SD card. This case is designed to host a Pi 3, with a 50×50 active fan. And if you need disk space, it can accommodate an X850 mSATA adapter.

A few pictures? As you wish!!

I hear you already: “Why all that empty space??”. I’m pretty sure some of you will add a speaker in there and, instead of having the mSATA card and the fan, use a ReSpeaker, no? 😉

You like it? Well, then I shall share the files too!! You’ll need to print the spacers 4 times, the grille and fan guards 2 times, and choose, based on your printer’s capabilities, to print either the full cover, in which case you won’t need the pin, or both the left and right covers

Download files

Good luck and dev safe!

Online, offline, back online, ISP crash…

Well, what then? No more voice, no more voice control over your house? Well, that’s only true if you are using a cloud service for your text-to-speech and/or your ASR. To be very honest, I do use Google ASR as well as Amazon Polly for TTS and I’m planning to migrate the TTS part to Google Wavenet.

In my opinion the only little devil here would be the ASR, but then again, it’s listening only when you wake Snips, unlike Google Home or Amazon dot. And the TTS? Well, it only turns a string into a voice, so don’t try to make your Snips assistant say sensitive information aloud and you’re fine.

But… what about when the internet goes down? If your electricity goes out, I imagine you are clever enough to have your assistant running on a backup battery, but you surely don’t have your own internet. So either your assistant is dead hardware in your home, doing nothing more than led animations because those are most probably controlled by Snips, or you have a fallback solution.

This is where I have again worked my way around the corners and, to be honest again, it’s very, very simple. How simple? As simple as saying it: no internet? Use Pico and Snips ASR. Internet? Use whatever online service you want. How? Let’s explain first; I’ll share a Gist at the end of this post

  • First, install Snips, of course
  • On top of that, install Snips Google ASR. Do not uninstall snips-asr!
  • Now, let’s open your assistant’s Python (or whatever language)
    • Let’s make sure, when your assistant starts, to have it call my bash script
      subprocess.call(['/home/pi/offlineFallback/shell/switchOnlineState.sh', "1"])
    • Create some kind of loop in your assistant that will call the online state check method every minute. If it’s inside your assistant, do not use time.sleep() as in my demo, it would block the thread! Instead, use threading.Timer()
      while RUNNING:
         ONLINE = checkOnlineState()
         time.sleep(60)
    • In our online state checker, we check whether we have internet access and act accordingly. Meaning we try to reach a Google endpoint (ever seen Google offline??) with a short timeout of 2 seconds. If the request fails it will raise an error. If not, and only if we actually were offline before, we call the bash script and print the happy news. If an error was raised, we are offline, so if we were online before, let’s call the bash script and announce the terrible news
      def checkOnlineState():
         global ONLINE
      
         try:
            req = requests.get('http://clients3.google.com/generate_204', timeout=2)
            if req.status_code != 204:
               raise Exception
      
            if not ONLINE:
               subprocess.call(['/home/pi/offlineFallback/shell/switchOnlineState.sh', "1"])
               print('Internet is back, switching back to Amazon Polly voice and Google ASR')
      
            return True
         except:
            pass
      
         if ONLINE:
            subprocess.call(['/home/pi/offlineFallback/shell/switchOnlineState.sh', "0"])
            print('No more internet connection, falling back to PicoTTS and Snips ASR')
      
         return False
    • That’s pretty simple, isn’t it? Oh wait, the bash script itself… So, the variable state becomes whatever was passed as an argument, in our case 1 or 0, for online and offline. If we are online, replace picotts with customtts in your snips.toml file, stop snips-asr and start snips-asr-google. Do the opposite if we are offline! And don’t forget to restart Snips after that.
      #!/usr/bin/env bash
      
      state="$1"
      
      if [[ "$state" -eq "1" ]]; then
          sudo sed -i -e 's/provider = "picotts"/provider = "customtts"/' /etc/snips.toml
          sudo systemctl stop snips-asr
          sudo systemctl start snips-asr-google
      else
          sudo sed -i -e 's/provider = "customtts"/provider = "picotts"/' /etc/snips.toml
          sudo systemctl stop snips-asr-google
          sudo systemctl start snips-asr
      fi
      
      sudo systemctl restart snips-*
  • If you don’t add this directly to your assistant, you could add this script to your system startup.
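Since I advise against time.sleep() inside the assistant, here is a minimal sketch of the threading.Timer() based alternative. The OnlineWatcher name and the injected check function are mine, not part of the original script; in the real assistant the check function would be the checkOnlineState() shown above:

```python
import threading

class OnlineWatcher:
    """Re-arms a threading.Timer after every check so the main thread never blocks."""

    def __init__(self, checkFunc, interval: float = 60.0):
        self._checkFunc = checkFunc  # e.g. the checkOnlineState() from the demo
        self._interval = interval
        self._timer = None
        self.online = None

    def _tick(self):
        # Run the check now, then schedule the next one
        self.online = self._checkFunc()
        self._timer = threading.Timer(self._interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        self._tick()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

# Usage with a fake check function so the example also works offline:
watcher = OnlineWatcher(checkFunc=lambda: True, interval=60.0)
watcher.start()
print(watcher.online)  # the first check ran immediately
watcher.stop()
```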

See how simple this was? As I love to say

Your imagination is your limit. Dream and you’ll walk the moon

This is especially true when it comes to programming and hacking your way around… Have fun with Snips and dev safe!

 

The links? Ok!

Proximity sensor??!?

It’s been quite some time since I last wrote. Well, plenty to do, and a lot done.

Using assistants at home is really, really helpful nowadays; everything is connected and it’s just awesome to be able to ask your lights to change, your shutters to close, etc.

Snips is awesome! But those of you using it too have surely had many false hotword detections. At least I have many while watching TV. It just activates too often! Or maybe you have friends at home and you want to turn it off? Snips is local, so no worries about data privacy, but still, you don’t want your assistant reacting to those foreign voices.

I built my own satellites for Snips and, in the design phase, I thought it would be good to include sensors, even with no use in mind yet. This is why I built in an APDS9960 and a BME680. The first is a color, proximity, gesture and light sensor. The second is a temperature, pressure, humidity and gas sensor.

The second one is clearly in full use, making sure my windows and shutters behave nicely and keep the house fresh but not cold.

But the first one? Well, it wasn’t in use at all. But I finally had an idea for the proximity sensor! Introducing Mute! Enjoy!

 

The leds are NeoPixels, also by Adafruit, driven directly from the Raspberry Pi Zero.

Some links? Sure!

Velux… PCB… Hack… Done

It’s been a while since I last wrote here, again, you might say… But there’s a reason behind that. Remember that Velux hack for Snips? Project Alice – Raspberry voice controlled Velux


If I remember well, the first comment I got on Discord was “It’s not professional at all”… Well, that’s not wrong, but it is working, you know. But still, I decided to take it to the next level. Before the whole story, let’s talk milestones:

  • Designing electrical schematics DONE
  • Designing PCB DONE
  • Printing, etching PCB DONE
  • Mounting PCB DONE
  • Connecting to Velux remote and Raspberry Pi Zero DONE
  • Changing the python script and having everything working as before DONE
  • Making a tutorial
  • Making a video
  • Making pictures
  • Making a fully functional Snips package
  • Distributing as open source
  • Proposing my services for the PCB against a small payment, without being a commercial entity in the eyes of Snips 🙂

And let’s clarify this: I’m not an electrician, I’m a precision mechanic. I did a little electronics at school, a bit more by myself, but never this much. Oh and yes, special credits to my friend mpbs who supported and guided me on the electrical part.

So, when you first think about chips, well, England comes first if you’re British; for others, you think of thin sliced potatoes fried with salt or whatever. I had never thought about using a chip, even less about etching my own PCB. I was working with relays and prototyping board. Lego vs Duplo? But well, I love to learn, I need to try and create, so I went in. Google… or whatever engine… Try to find information: you’ll get everything and nothing about creating your own PCBs… You can print them with a CNC, you can use UV light, an iron, a laminator, a hammer, whatever. Products? You can use ammonium, sodium, ferrite. No good leads on what, when, where. How do you finish your board? You can use Sur-Tin, you can use a special varnish, and others. I won’t describe everything I bought to find the way that worked for me.

Ok, the first part was buying some PCB stock. I chose epoxy with 0.35mu copper, photoresist applied, like this, by Scankemi. I also bought a UV tube by Philips and a few parts to mount it.

Then, I thought about designing, so I googled for a good PCB design software. Many choices, but I ended up using EasyEDA, because I thought that if by any chance I couldn’t make my own board, I could always just order it from them directly. It’s a nice cloud software, does the job; the auto router is handy but you’ll need to make the final corrections yourself. I recommend it!

After a couple of tries, I ended up with my schematics, on a one-sided board. Printed, about 5 times until the orientation was correct, and I tried the UV light! Well, placed the PCB with the transparent paper on top of it, encased in a photo frame to make sure it’s really pressed together. Tried 30 secs, a minute, 2 minutes, etc… Nothing. After bathing the PCB in the photoresist remover, everything was gone and I ended up with a nice copper piece…

So, I decided to try a toner transfer. Back to printing, on 110g/m2 glossy photo paper. The glossy part is important, you won’t have a nice transfer otherwise. You need to mirror the copper side!! Back to online searching, I decided to try my wife’s toy, the iron! With more or less success I should say, the transfer was everything but optimal… After trying both methods again and again, I decided the quality wasn’t what I was expecting. So I ended up digging in my office, at work, to find an old laminator, the Grail! Why? Because those old ones aren’t automated in temperature and speed! Ok, cutting the paper, placing it on the PCB, taping it nicely and off into the laminator at 160°C. I found out that going through about 22 times, at the lowest speed possible and flipping the PCB every time to make it heat all the way through, gives the best results for me. I usually go until I can’t touch the PCB anymore. Directly after, you just throw it under water and let it soak until the paper gets totally transparent. Slowly peel the paper off and the tracks were nicely transferred!! So happy!

Next step was etching… What solution to use? Ammonium, sodium, ferrite? Well, I decided to go for ammonium chloride just because… it’s the only one I could find in Switzerland. Of course, don’t bother reading about it and try it cold, it will just take 10 times longer. Seriously, wear gloves, do the mix with hot water, 65°C works well, put the PCB in and keep moving it slowly with a plastic pincette or wooden sticks. Don’t use metal! After a while, the entire copper will be eaten away and your ammonium will have turned blue. Taking the PCB out, rinsing under running water and wow! The first etching seemed perfect! Oh yeah, pour the liquid into a plastic bottle and store it for further use. Store it away, correctly marked. Now, the toner was still on the PCB… Ok, found out that acetone is perfect and immediately removes the toner. You end up with nicely designed tracks!

Next step was making holes and soldering the components on it… Easy!

Only to find out that it’s a cable mess, it’s not nice and it doesn’t work: not enough current to power the remote when going through the TI 4066 chip. Back to the drawing board, both on EasyEDA and in my head. Decided to go full power: two-sided PCB, MOSFET, resistors, etc. Again, I used the laminator technique for toner transfer, but on both sides of a double-sided PCB! Took some time to make sure the papers were correctly aligned, but I ended up being good at it. Because yes, I made about 6 PCBs before the final working one… The next problem was: hey, two-sided, but how do you connect side A to side B? Well, I bought some silvered copper wire and just soldered it on both sides of each connecting hole. Of course, the army of wires from the controller to the board had to go, sorry. So I decided to use the very same silvered copper wire to solder onto the controller buttons and then directly onto the board. The board becomes a hat for the remote controller, non-removable once the buttons are soldered. But also a hat for the Raspberry Pi Zero! No more wires, except for the remote power!

Oh, I almost forgot! Actually, I found out when doing the last PCB… you need to protect the tracks or you’ll end up with a rusty copper board. For that you need some chemicals again. And it’s not easy to get them in Switzerland, because they’re extremely acidic. I’ve used Sur-Tin. You prepare the mix, soak your board in it for about 2 minutes and it’s done. But wear long clothes, gloves, glasses. That stuff ate my floor! And finally, the board is done! A few corrections to make, such as the MOSFET being 2mm too low and the power connector too, but that’s for the next ones I’ll do. Ok, enough talk, a few pictures of the finished product

 

Want to know what I used? Here a few links! And dev safe!

Project Alice – Snips Velux Red Queen

Velux windows, velux blinders, velux shades, skylights. These products are so well-known that in Switzerland you don’t speak about “roof windows”: you have “velux”.

Velux uses a completely closed protocol for its products, a radio protocol belonging to IO-Homecontrol, an attempt at home automation on a closed protocol that entirely locks users out of doing anything or changing/adding behavior. Reversing and tampering with this protocol has been tried by many; I haven’t seen any successful attempt yet.

Netatmo is a very successful French startup that brought some very well designed smart weather stations into our homes, offering, I think, the best weather data around the world. They also make some nice indoor and outdoor security cameras. Their base station not only checks the temperature, but also the air quality, the noise level, the atmospheric pressure and the humidity. Add the other (quite expensive) modules for the outside and you’ve got a full weather station with rain gauge and wind sensor! They also provide control for your heating system, helping lower your monthly bills!

I have both Velux and Netatmo. Velux remotes are from the stone age: bulky, not updatable, very slow. Even the new touch screen ones are maybe a bit better, but hell, try to make a complex program! And of course, no integration with anything other than IO… Until lately, when Netatmo announced a partnership with Velux for an intelligent home: https://www.netatmo.com/fr-FR/partners/velux

It all started when I decided to use my voice to control my Velux products using Snips, a Raspberry Pi and an old Velux remote. It works like a charm and the latest updates I pushed to the script make it fail safe through days and storms! But I needed more. Bored of asking Snips to open my windows when it was hot. Bored of opening them only to realise it was even hotter outside and all I was doing was warming up the inside of my house.

So I thought, hey, why not automate all that? I was for sure going to buy the Netatmo Velux thingy, but hmm… a device per room? How much is this going to cost? I thought, let’s use Project Alice and give her some abilities to control my house myself…

I decided to name that module “Red Queen” because “Project Alice”. I worked a little on it, tested it in real time. It’s far from finished, but… it’s basically doing what Netatmo is going to release at the end of this year!! For the very modest sum of a Raspberry Pi 3, a Velux controller and a few electronic components!!

So what does it do? Well, when Project Alice boots, the Red Queen awakes, takes control of the air quality and decides, purely data based, what to do. It checks the status every 15 minutes and acts as needed to stay as close as possible to the programmed comfort temperature. It handles many things already: opening or not, wind or not, us sleeping or not, us home or away, etc. I won’t go too far into the details, but this is a compilation of all the possible logs (so decisions) the Red Queen can produce at the moment:

  • - Red Queen checking the air quality
    - - Velux overridden by user voice command
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 21.9c, outside temperature at 20.5c
    - - Inside temperature is inside comfort zone
  • - Red Queen checking the air quality
    - - Co2 level above 850ppm opening all windows!
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 24.8c, outside temperature at 20.5c
    - - Outside is cooler, opening to cool
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 24.8c, outside temperature at 27.5c
    - - Outside is warmer than inside, making sure windows are closed
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 24.8c, outside temperature at 29.1c
    - - Outside is warmer than inside, making sure windows are closed
    - - Outside is over 29c, closing blinders too
  • - Red Queen checking the air quality
    - - Comfort temperature at 19c, actual temperature in bedroom at 20.8c, outside temperature at 20.5c
    - - (Sleeping) Keeping windows at 30% for air
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 18.8c, outside temperature at 20.5c
    - - Inside temperature lower than comfort, outside is warmer, opening windows
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 20.8c, outside temperature at 7.5c
    - - Inside temperature lower than comfort, outside is even colder, closing the windows
  • - Red Queen checking the air quality
    - - Comfort temperature at 19c, actual temperature in bedroom at 18.7c, outside temperature at 16.2c
    - - (Sleeping) Keeping windows at 10% for minimum airing
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 25.6c, outside temperature at 27.2c
    - - 21km/h wind from the lake, opening both sides for 15 minutes
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 23.5c, outside temperature at 16.2c
    - - (Users away) Opening windows to 50% to cool
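Those logs suggest a fairly readable decision chain. Here is my guess at the core rules, as a sketch (the CO2 threshold is taken from the logs above, the comfort band is my own assumption; this is not the actual Red Queen code):

```python
def decide(comfort: float, inside: float, outside: float, co2: float = 0.0) -> str:
    """Pick a window action from temperatures (°C) and CO2 level (ppm)."""
    if co2 > 850:                      # threshold taken from the logs
        return 'open all windows'
    if abs(inside - comfort) < 0.5:    # the ±0.5°C comfort band is my assumption
        return 'inside comfort zone'
    if inside > comfort:
        if outside < inside:
            return 'open to cool'
        return 'keep windows closed'
    # Inside is colder than the comfort temperature
    if outside > inside:
        return 'open to warm'
    return 'close windows'

# A few of the logged situations:
print(decide(22, 24.8, 20.5))  # open to cool
print(decide(22, 24.8, 27.5))  # keep windows closed
print(decide(22, 18.8, 20.5))  # open to warm
```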

 

The first log talks about an override. What’s this? Well, whenever my wife or I ask Alice (Snips) to open the windows or the blinders, I don’t want the Red Queen to blindly close them again 2 minutes later because her 15 minute timer just ended. So when Snips gets an order, any Red Queen decisions are overridden (canceled) for the next hour. We stay masters of our home!!

That’s a quick tour of what’s been made so far. I have many ideas, maybe integrating the Netatmo heating control devices, but I have floor heating… I’ll definitely keep improving the Red Queen to keep our home at the best temperature and air quality, for better days and nights!

Dev safe!

Some links:
