Snips console login without Sam

Ho ho ho, says Santa! But today, so do I!

A lot has been going on since my last post. A lot! This is the first article that comes with a little gift for the community! Ever since we started with Snips, we’ve all tried to programmatically train and download our assistants. Project Alice did it, in a way that really wasn’t convenient:

  • Hey Alice!
  • Yes?
  • Update yourself
  • Ok -> Stopping Snips services, downloading the new assistant I had previously uploaded to my git repo, extracting it, placing it where it belongs and restarting Snips

Of course this worked, but I had to manually train and download the assistant from the console and upload it to my git repo…

Then came the browser way. We can mimic the user activity in the browser, but somehow, at some point, we couldn’t train our assistant anymore, which left us logging in to the console to train it before downloading it anyway…

Then came Sam, the Snips tool to manage your device and assistants/skills. Sam can train and download the assistant, provided you give it your Snips account credentials. So if Sam can, why can’t we?

Disclaimer

  • Is this hacking?
  • No, it’s not: you only gain access to what’s rightfully yours. It’s more a bit of reverse engineering and basic comprehension
  • Is this dangerous?
  • I won’t share any destructive endpoints here, but you could potentially end up deleting your assistant
  • Is this allowed?
  • I did ask for permission to share but never got an answer. I can’t see why it wouldn’t be allowed though, as again, you only access your own data
  • Is it hard?
  • No, not at all, the script is very short
  • I do not take any responsibility if anything bad happens: data loss, stolen keys, bans, beer spilled on the keyboard, etc.

JWT

Stands for JSON Web Token and is a way to authenticate a user to rightfully access locked data while providing credentials only once. You use them daily in your browser without even knowing it. A JWT is composed of three parts, separated by dots, each commonly base64 encoded. The first part contains the token info (the signing algorithm used), the second part contains the payload, whatever needs to be passed, and the third part contains the signature. I’ll show a little sketch of that structure after the list below. How does it work for Snips? It’s pretty simple:

  • User asks the server to log in, sending email and password over a TLS-encrypted channel
  • Server authenticates the user and sends a short-lived JWT token back to the user
  • User creates an alias and sends it to the server along with the JWT token it just received
  • Server checks the JWT token and the alias and sends a master JWT token back to the user
  • User stores the master JWT token and alias for any further connection to the server
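
To make the three-part structure concrete, here is a minimal sketch, pure standard library and not Snips-specific, that splits a JWT and decodes its header and payload:

import base64
import json

def decodeJwtParts(token: str):
  # A JWT is header.payload.signature, each part url-safe base64 encoded
  headerB64, payloadB64, signatureB64 = token.split('.')

  def b64decode(data: str) -> bytes:
    # Re-add the padding that JWTs strip before decoding
    return base64.urlsafe_b64decode(data + '=' * (-len(data) % 4))

  header = json.loads(b64decode(headerB64))
  payload = json.loads(b64decode(payloadB64))
  return header, payload, signatureB64  # the signature stays opaque without the signing key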

What’s needed?

I’ll show the way using Python and will share a working copy of the script at the end of this article. Use Python 3; it’s not strictly required, but Python 2 is at its end of life. The only dependency you will need is Requests:
pip3 install requests

Let’s do this!

This is a very basic, quickly written script, for demo purposes only. You will need to make it more robust and add more checks! Create your Python script, call it whatever you want.

# -*- coding: utf-8 -*-
import json
import requests
import toml
import uuid

class SnipsConsole:

  EMAIL = 'john'
  PASSWORD = 'doe'

  def __init__(self):
    self._tries = 0
    self._connected = False
    self._headers = {
      'Accept' : 'application/json',
      'Content-Type': 'application/json'
    }
    with open('/etc/snips.toml') as f:
      self._snips = toml.load(f)

    if 'console' in self._snips and 'console_token' in self._snips['console']:
      self._headers['Authorization'] = 'JWT {}'.format(self._snips['console']['console_token'])
      self._connected = True
    else:
      self._login()
  • I set the max tries to 3 so that in case of failure we can retry, but not endlessly
  • Next comes the basic header definition
  • I use snips.toml to store the information, so everything to do with Snips stays in the same place, and load it with the toml module: pip3 install toml
  • If the token is already in the configuration we extend the headers with the authorization token; if not, we call the login function
def _login(self):
  self._tries += 1
  if self._tries > 3:
    print("Max login tries reached, aborting")
    self._tries = 0
    return

  payload = {
    'email': self.EMAIL,
    'password': self.PASSWORD
  }

  req = self._req(url='v1/user/auth', data=payload)
  if req.status_code == 200:
    print('Connected to snips account, fetching auth token')
    try:
      token = req.headers['authorization']
      user = User(json.loads(req.content)['user'])
      accessToken = self._getAccessToken(user, token)
      if len(accessToken) > 0:
        print('Console token acquired, saving it!')
        if 'console' not in self._snips:
          self._snips['console'] = {}

        self._snips['console']['console_token'] = accessToken['token']
        self._snips['console']['console_alias'] = accessToken['alias']
        self._headers['Authorization'] = 'JWT {}'.format(accessToken['token'])
        self._saveSnipsConf()
        self._connected = True
        self._tries = 0
    except Exception as e:
      print('Exception during console token acquiring: {}'.format(e))
      self._connected = False
      return
  else:
    print("Couldn't connect to console: {}".format(req.status_code))
    self._connected = False
  • We first check whether we have exceeded our tries; if yes, we just stop
  • We prepare the payload with the needed information: the email and password of your Snips console account
  • We try to connect, using a function declared later, pointing to v1/user/auth with the payload previously declared
  • If the server answers with HTTP status code 200 we’ve been accepted; otherwise the account connection failed and we can’t go further
  • We fetch the pre-auth token that the server passes back to us in the response header
  • We build a User class
  • We fetch the console access token
  • If we get a console access token, we save it and load it into our headers for further user/passwordless communication
  • We set our state to connected and clear the tries in case we ever need to go through the process again
def _getAccessToken(self, user, token: str) -> dict:
  alias = 'sam-{}'.format(str(uuid.uuid4())).replace('-', '')[:29]
  self._headers['Authorization'] = token
  req = self._req(url='v1/user/{}/accesstoken'.format(user.userId), data={'alias': alias})
  if req.status_code == 201:
    return json.loads(req.content)['token']
  return {}
  • We need to define an alias for the token. It is built by generating a version 4 UUID appended to the string “sam-”. We get rid of any “-” in that string and use only the first 29 characters. Don’t ask why, it’s that way. You can replace “sam-” with anything; I use “projectalice-”
  • We put the pre-auth token we got into our headers so the server knows it’s us
  • We send the request to the endpoint “v1/user/USERID/accesstoken”, where USERID comes from the previous request, when we built the User class
  • If the server responds with HTTP code 201 we’ve been accepted and we return a dict made from the “token” part of the response content
def _saveSnipsConf(self):
  with open('/etc/snips.toml', 'w') as f:
    toml.dump(self._snips, f)
  • Quick function to save our settings to snips.toml
def _req(self, url: str = '', method: str = 'post', data: dict = None, **kwargs) -> requests.Response:
  req = requests.request(method=method, url='https://external-gateway.snips.ai/{}'.format(url), json=data, headers=self._headers, **kwargs)
  if req.status_code == 401:
    print('Console token expired or refused, need to login again')
    if 'Authorization' in self._headers:
      del self._headers['Authorization']
    self._connected = False
    self._snips['console']['console_token'] = ''
    self._snips['console']['console_alias'] = ''
    self._saveSnipsConf()
    self._login()
  return req
  • The reason I made this _req function instead of directly using Requests’ built-in functions is that if any query we make to the Snips server returns a 401 status code, the token has a problem and we need to call the login function again. Instead of checking the status after every HTTP call, I made one function for all the calls that does the checking part
  • We send the request to the server by appending the passed url to the base url, which is https://external-gateway.snips.ai, passing the headers and the payload as well as any other accepted arguments (**kwargs)
  • If we get a 401 HTTP status code back, the token has been refused, in which case we delete the authorization header, clear the snips.toml configuration and call the login function again. Now self._tries surely makes sense?
class User:
  def __init__(self, data):
    self._userId = data['id']
    self._userEmail = data['email']

  @property
  def userId(self) -> str:
    return self._userId
  • A simple class to hold the user id and the user email

That’s it!!

Yep, we’ve done it! We are connected to the Snips server and we can try different endpoints: listing our assistants and skills, training the NLU or ASR, downloading the assistant zip file, etc. 🙂 Let me give you a few non-destructive endpoints; a small usage sketch follows the list. They all need the ‘Authorization’ header to be set with the JWT key to be reachable!

  • NLU status: /v3/assistant/ASSISTANT_ID/status (method ‘get’) => where ASSISTANT_ID is replaced by the id of the wanted assistant.
  • NLU training: /v1/training (method ‘post’) => data: ‘assistantId’
  • ASR status: /v1/languagemodel/status (method ‘get’) => data: ‘assistantId’.
  • ASR training: /v1/languagemodel (method ‘post’) => data: ‘assistantId’
  • Assistant listing: /v3/assistant (method ‘get’) => data: ‘userId’
  • Assistant download: /v3/assistant/ASSISTANT_ID/download (method ‘get’) => where ASSISTANT_ID is replaced by the id of the wanted assistant.
  • Logout: /v1/user/USER_ID/accesstoken/ALIAS (method ‘get’). This deletes the alias and token from snips server!
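
As a small illustration, here is how two of these endpoints could be called with the SnipsConsole class from above. The helper names are mine and the assistant id is a placeholder, so take this as a sketch rather than tested gospel:

console = SnipsConsole()

def trainNlu(assistantId: str) -> bool:
  # Ask the console to train the NLU for the given assistant
  req = console._req(url='v1/training', method='post', data={'assistantId': assistantId})
  return req.status_code < 300

def downloadAssistant(assistantId: str):
  # Fetch the assistant zip file and store it locally
  req = console._req(url='v3/assistant/{}/download'.format(assistantId), method='get')
  if req.status_code == 200:
    with open('assistant.zip', 'wb') as f:
      f.write(req.content)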

That’s about enough for today! I hope you enjoyed this little introduction to programmatically managing your assistant in Python! As always, dev safe!

Full working copy

Here’s the link: https://github.com/Psychokiller1888/snipsSamless/blob/master/main.py

Make sure to have the dependencies installed (toml and requests) and to run it with Python 3. Run the script, type email and enter your email, type password and enter your password, then type login to log into the console and test the functions!


Online, offline, back online, ISP crash…

Well, what then? No more voice, no more voice control over your house? That’s only true if you are using a cloud service for your text-to-speech and/or your ASR. To be very honest, I do use Google ASR as well as Amazon Polly for TTS, and I’m planning to migrate the TTS part to Google WaveNet.

In my opinion the only little devil here is the ASR, but then again, it’s listening only when you wake Snips, unlike a Google Home or an Amazon Echo Dot. And the TTS? Well, it only turns a string into a voice, so don’t make your Snips assistant say sensitive information aloud and you’re fine.

But… what about when the internet goes down? If your electricity goes out, I imagine you are clever enough to have your assistant running on a backup battery, but you surely don’t have your own internet. So either your assistant is dead hardware in your home doing nothing more than led animations (because those are most probably controlled by Snips), or you have a fallback solution.

This is where I have once again worked around the corners, and to be honest, it’s very, very simple. How simple? As simple as saying it: no internet? Use Pico and Snips ASR. Internet? Use whatever online service you want. How? Let’s explain first; I’ll share a Gist at the end of this post.

  • First, install Snips, of course
  • On top of that, install Snips Google ASR. Do not uninstall snips-asr!
  • Now, let’s open your assistant’s Python (or whatever language)
    • Let’s make sure, when your assistant starts, to have it call my bash script
      subprocess.call(['/home/pi/offlineFallback/shell/switchOnlineState.sh', "1"])
    • Create some kind of loop in your assistant that calls the online state check method every minute. If it’s your own assistant, do not use time.sleep() as in my demo, it would block the thread! Instead, use threading.Timer(), as sketched after this list
      while RUNNING:
         ONLINE = checkOnlineState()
         time.sleep(60)
    • In our online state checker, we check whether we have internet access and act accordingly. We request Google’s generate_204 endpoint (ever seen Google offline??) with a short timeout of 2 seconds; if the request fails, an error is raised. If it succeeds, and only if we actually were offline before, we call the bash script and print the happy news. If an error was raised, we are offline, so if we were online before, let’s call the bash script and announce the terrible news
      def checkOnlineState():
         global ONLINE
      
         try:
             req = requests.get('http://clients3.google.com/generate_204', timeout=2)
            if req.status_code != 204:
               raise Exception
      
            if not ONLINE:
               subprocess.call(['/home/pi/offlineFallback/shell/switchOnlineState.sh', "1"])
               print('Internet is back, switching back to Amazon Polly voice and Google ASR')
      
            return True
         except:
            pass
      
         if ONLINE:
            subprocess.call(['/home/pi/offlineFallback/shell/switchOnlineState.sh', "0"])
            print('No more internet connection, falling back to PicoTTS and Snips ASR')
      
         return False
    • That’s pretty simple, isn’t it? Oh wait, the bash script itself… So, the state variable becomes whatever was passed as an argument, in our case 1 or 0, for online and offline. If we are online, we swap picotts for customtts in the snips.toml file, stop snips-asr and start snips-asr-google. We do the opposite if we are offline! And don’t forget to restart Snips after that.
      #!/usr/bin/env bash
      
      state="$1"
      
      if [[ "$1" -eq "1" ]]; then
          sudo sed -i -e 's/provider = "picotts"/provider = "customtts"/' /etc/snips.toml
          sudo systemctl stop snips-asr
          sudo systemctl start snips-asr-google
      else
          sudo sed -i -e 's/provider = "customtts"/provider = "picotts"/' /etc/snips.toml
          sudo systemctl stop snips-asr-google
          sudo systemctl start snips-asr
      fi
      
      sudo systemctl restart snips-*
  • If you don’t add this directly to your assistant, you could add the script to your system startup.
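
As promised above, here is a minimal sketch of the non-blocking variant using threading.Timer(). It assumes the same global ONLINE flag and checkOnlineState() function as the demo:

import threading

def scheduleOnlineCheck():
   global ONLINE
   ONLINE = checkOnlineState()
   # Re-arm the timer so the check runs again in 60 seconds without blocking the thread
   timer = threading.Timer(60.0, scheduleOnlineCheck)
   timer.daemon = True
   timer.start()

scheduleOnlineCheck()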

See how simple this was? As I love to say:

Your imagination is your limit. Dream and you’ll walk the moon

This is especially true when it comes to programming and hacking your way around… Have fun with Snips and dev safe!

 


Proximity sensor??!?

It’s been quite some time since I last wrote. Well, there was plenty to do, and a lot got done.

Using assistants at home is really, really helpful nowadays; everything is connected and it’s just awesome to be able to ask your lights to change, your shutters to close, etc.

Snips is awesome! But those of you using it have surely had many false hotword detections. At least I have many while watching TV. It just activates too much! Or maybe you have friends at home and you want to turn it off? Snips is local, so no worries about data privacy, but still, you don’t want your assistant to react to those foreign voices.

I built my own satellites for Snips, and in the design phase I thought it would be good to include sensors, even with no use for them in mind yet. This is why I built in an APDS9960 and a BME680. The first one is a color, proximity, gesture and light sensor. The second one is a temperature, pressure, humidity and gas sensor.

The second one is clearly in full use, making sure my windows and shutters behave nicely and keep the house fresh but not cold.

But the first one? Well, it wasn’t in use at all. But I finally had an idea for the proximity sensor! Introducing Mute! Enjoy!

 

The LEDs are NeoPixels, also from Adafruit, driven directly by the Raspberry Pi Zero.
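
For the curious, here is roughly how such a mute feature could be wired up. This is my own sketch, not the actual Mute code: it assumes Adafruit’s CircuitPython APDS9960 driver and paho-mqtt, and uses the standard Hermes hotword toggle topics:

import json
import time

import board
import paho.mqtt.client as mqtt
from adafruit_apds9960.apds9960 import APDS9960

sensor = APDS9960(board.I2C())
sensor.enable_proximity = True

client = mqtt.Client()
client.connect('localhost', 1883)

muted = False
while True:
   if sensor.proximity > 100:  # a hand close to the satellite
      muted = not muted
      topic = 'hermes/hotword/toggleOff' if muted else 'hermes/hotword/toggleOn'
      client.publish(topic, json.dumps({'siteId': 'office'}))
      time.sleep(1)  # crude debounce so one wave toggles only once
   time.sleep(0.1)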


Project Alice – Snips Velux Red Queen

Velux windows, Velux blinds, Velux shades, skylights. These products are so well known that in Switzerland you don’t speak about “roof windows”: you have “Velux”.

Velux uses a completely closed radio protocol for its products, belonging to IO-Homecontrol, an approach to home automation built on a closed protocol that entirely locks users out of changing or adding any behavior. Many have tried reversing and tampering with this protocol; I haven’t seen any successful attempts yet.

Netatmo is a very successful French startup that brought some very well designed smart weather stations into our homes, offering, I think, the best weather data around. They also make nice indoor and outdoor security cameras. Their base station not only measures the temperature, but also air quality, noise level, atmospheric pressure and humidity. Add the other (quite expensive) outdoor modules and you’ve got a full weather station with a rain gauge and a wind sensor! They also provide control for your heating system, helping lower your monthly bills!

I have both Velux and Netatmo. Velux remotes are from the stone age: bulky, not updatable, very slow. Even the new touch-screen ones are maybe a bit better, but hell, try to set up a complex program on them! And of course, no integration with anything other than IO… until lately, when Netatmo announced a partnership with Velux for an intelligent home: https://www.netatmo.com/fr-FR/partners/velux

It all started when I decided to use my voice to control my Velux products using Snips, a Raspberry Pi and an old Velux remote. It works like a charm, and the latest updates I pushed to the script make it fail-safe through the day and through storms! But I needed more. I was bored of asking Snips to open my windows when it was hot. Bored of opening them only to realize it was even hotter outside and all I was doing was warming up the inside of my house.

So I thought, hey, why not automate all that? I was for sure going to buy the Netatmo Velux thingy, but hmm… a device per room? How much is that going to cost? So I figured, let’s use Project Alice and give it some abilities to control my house myself…

I decided to name that module “Red Queen” because, well, “Project Alice”. I worked a little on it and tested it in real time. It’s far from finished, but… it’s basically doing what Netatmo is going to release at the end of this year!! For the very modest sum of a Raspberry Pi 3, a Velux controller and a few electronic components!!

So what does it do? Well, when Project Alice boots, the Red Queen awakes, takes control of the air quality and decides, purely based on data, what to do. It checks the status every 15 minutes and acts as needed to stay as close as possible to the programmed comfort temperature. It already handles many things: opening or not, wind or not, us sleeping or not, us home or away, etc. I won’t go too far into the details (there’s a little sketch of the idea after the logs), but this is a compilation of all the possible logs (so, decisions) the Red Queen can produce at the moment:

  • - Red Queen checking the air quality
    - - Velux overridden by user voice command
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 21.9c, outside temperature at 20.5c
    - - Inside temperature is inside comfort zone
  • - Red Queen checking the air quality
    - - Co2 level above 850ppm opening all windows!
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 24.8c, outside temperature at 20.5c
    - - Outside is cooler, opening to cool
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 24.8c, outside temperature at 27.5c
    - - Outside is warmer than inside, making sure windows are closed
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 24.8c, outside temperature at 29.1c
    - - Outside is warmer than inside, making sure windows are closed
    - - Outside is over 29c, closing blinders too
  • - Red Queen checking the air quality
    - - Comfort temperature at 19c, actual temperature in bedroom at 20.8c, outside temperature at 20.5c
    - - (Sleeping) Keeping windows at 30% for air
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 18.8c, outside temperature at 20.5c
    - - Inside temperature lower than comfort, outside is warmer, opening windows
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 20.8c, outside temperature at 7.5c
    - - Inside temperature lower than comfort, outside is even colder, closing the windows
  • - Red Queen checking the air quality
    - - Comfort temperature at 19c, actual temperature in bedroom at 18.7c, outside temperature at 16.2c
    - - (Sleeping) Keeping windows at 10% for minimum airing
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 25.6c, outside temperature at 27.2c
    - - 21km/h wind from the lake, opening both sides for 15 minutes
  • - Red Queen checking the air quality
    - - Comfort temperature at 22c, actual temperature in living room at 23.5c, outside temperature at 16.2c
    - - (Users away) Opening windows to 50% to cool
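
To give a flavor of the logic behind those logs, here is a hypothetical sketch of such a rule-based decision. The names and thresholds are mine, not the actual Red Queen code:

def decide(comfort: float, inside: float, outside: float, co2: int) -> str:
   if co2 > 850:
      return 'Co2 level above 850ppm, opening all windows!'
   if abs(inside - comfort) < 0.5:
      return 'Inside temperature is inside comfort zone'
   if inside > comfort:
      if outside < inside:
         return 'Outside is cooler, opening to cool'
      return 'Outside is warmer than inside, making sure windows are closed'
   if outside > inside:
      return 'Inside temperature lower than comfort, outside is warmer, opening windows'
   return 'Inside temperature lower than comfort, outside is even colder, closing the windows'

print(decide(comfort=22, inside=24.8, outside=20.5, co2=600))
# -> Outside is cooler, opening to cool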

 

The first log talks about an override. What’s this? Well, whenever my wife or I ask Alice (Snips) to open the windows or the blinders, I don’t want the Red Queen to blindly close them again 2 minutes later just because her 15-minute timer ended. So when Snips gets an order, any Red Queen decision is overridden (canceled) for the next hour, as sketched below. We stay masters of our home!!
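
The override itself can be as simple as a timestamp check. A minimal sketch of the idea, with my names rather than the actual code:

import time

OVERRIDE_DURATION = 3600  # one hour, as described above
_lastVoiceCommand = 0

def onVoiceCommand():
   # Called whenever Snips handles a window or blind intent by voice
   global _lastVoiceCommand
   _lastVoiceCommand = time.time()

def overridden() -> bool:
   # The Red Queen skips her automated decision while an override is active
   return time.time() - _lastVoiceCommand < OVERRIDE_DURATION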

That’s a quick tour of what’s been made so far. I have many ideas, like maybe integrating the Netatmo heating control devices, though I have floor heating… But I’ll definitely keep improving the Red Queen to better tune our home for the best temperature and air quality, for better days and nights!

Dev safe!


Alice, let’s play

An important part of an assistant, in my opinion, is the ability to assist me when I’m bored… Entertain me, Alice!

Alice can already tell me jokes, she randomly says things like a person would, sometimes asks us how we are, etc. But I wanted her to be able to play with us!

So I made a simple but very well-known game: a spelling game!

Here are the logs of a game I had with Alice. The captured text lines are my side of the conversation with her:

[09:47:28] Watching on localhost:1883 (MQTT)
[09:47:34] [Hotword] detected on site office, for model alice
[09:47:35] [Dialogue] session with id ’68e5eff5-ecb1-4261-ad48-cd7b8ac77f2e’ was started on site office
[09:47:39] [Asr] captured text “let’s play a spelling game” in 3.7s
[09:47:40] [Nlu] detected intent Psychokiller1888:playGame with probability 0.485 for input “let’s play a spelling game”
Slots ->
game -> spelling game
[09:47:40] [Dialogue] New intent detected Psychokiller1888:playGame with probability 0.485
Slots ->
game -> spelling game
[09:47:40] [Dialogue] was ask to end session with id 68e5eff5-ecb1-4261-ad48-cd7b8ac77f2e
[09:47:40] [Dialogue] session with id ’68e5eff5-ecb1-4261-ad48-cd7b8ac77f2e’ was ended on site office. The session ended as expected
[09:47:40] [Dialogue] was asked to start a session on site office
[09:47:41] [Dialogue] session with id ‘a43b0618-c160-406e-a5ea-595cf9222c1d’ was started on site office
[09:47:41] [Tts] was asked to say “Ok, let’s play the spelling game!, How do you spell candle?
[09:47:50] [Asr] captured text “candle” in 4.5s
[09:47:51] [Nlu] intent not recognized for “candle”
[09:47:51] [Dialogue] was ask to end session with id a43b0618-c160-406e-a5ea-595cf9222c1d by saying ‘Perfect, you spelled it correctly!
[09:47:51] [Tts] was asked to say “Perfect, you spelled it correctly!”
[09:47:51] [Dialogue] was asked to start a session on site office
[09:47:55] [Dialogue] session with id ‘a43b0618-c160-406e-a5ea-595cf9222c1d’ was ended on site office. The session ended as expected
[09:47:55] [Dialogue] session with id ‘a1c0a5a3-1bb4-48dc-9948-01296f9f4911’ was started on site office
[09:47:55] [Tts] was asked to say “Next one, how do you spell hustler?
[09:48:07] [Asr] captured text “h u s t l e r” in 6.0s
[09:48:08] [Nlu] intent not recognized for “h u s t l e r”
[09:48:08] [Dialogue] was ask to end session with id a1c0a5a3-1bb4-48dc-9948-01296f9f4911 by saying ‘Bravo, you are correct!
[09:48:08] [Dialogue] was asked to start a session on site office
[09:48:08] [Tts] was asked to say “Bravo, you are correct!”
[09:48:12] [Dialogue] session with id ‘a1c0a5a3-1bb4-48dc-9948-01296f9f4911’ was ended on site office. The session ended as expected
[09:48:12] [Dialogue] session with id ‘47180a03-aaa6-40f5-8b8f-fc3cfe453369’ was started on site office
[09:48:12] [Tts] was asked to say “Next one, how do you spell hustler?
[09:48:20] [Asr] captured text “h u s t l e r” in 4.7s
[09:48:21] [Nlu] intent not recognized for “h u s t l e r”
[09:48:21] [Dialogue] was ask to end session with id 47180a03-aaa6-40f5-8b8f-fc3cfe453369 by saying ‘Excellent job, this is correct!
[09:48:21] [Tts] was asked to say “Excellent job, this is correct!”
[09:48:21] [Dialogue] was asked to start a session on site office
[09:48:25] [Dialogue] session with id ‘47180a03-aaa6-40f5-8b8f-fc3cfe453369’ was ended on site office. The session ended as expected
[09:48:25] [Dialogue] session with id ‘f716a882-42a4-4b46-a2d2-905e80f26431’ was started on site office
[09:48:25] [Tts] was asked to say “Next one, how do you spell bed?
[09:48:34] [Asr] captured text “b e d” in 3.4s
[09:48:35] [Nlu] intent not recognized for “b e d”
[09:48:35] [Dialogue] was ask to end session with id f716a882-42a4-4b46-a2d2-905e80f26431 by saying ‘Excellent job, this is correct!
[09:48:35] [Dialogue] was asked to start a session on site office
[09:48:35] [Tts] was asked to say “Excellent job, this is correct!”
[09:48:37] [Dialogue] session with id ‘f716a882-42a4-4b46-a2d2-905e80f26431’ was ended on site office. The session ended as expected
[09:48:37] [Dialogue] session with id ‘de54c2cc-0d22-471a-8f97-0dcd7eea1201’ was started on site office
[09:48:37] [Tts] was asked to say “Next one, how do you spell federation?
[09:48:49] [Asr] captured text “f e d e r a t i o n” in 6.3s
[09:48:50] [Nlu] intent not recognized for “f e d e r a t i o n”
[09:48:50] [Dialogue] was ask to end session with id de54c2cc-0d22-471a-8f97-0dcd7eea1201 by saying ‘Excellent job, this is correct!’
[09:48:50] [Tts] was asked to say “Excellent job, this is correct!
[09:48:50] [Dialogue] was asked to start a session on site office
[09:48:52] [Dialogue] session with id ‘de54c2cc-0d22-471a-8f97-0dcd7eea1201’ was ended on site office. The session ended as expected
[09:48:52] [Dialogue] session with id ‘5f947a82-cc36-4dc8-a84c-e814916edca1’ was started on site office
[09:48:52] [Tts] was asked to say “Next one, how do you spell water?
[09:49:02] [Asr] captured text “water” in 4.2s
[09:49:03] [Nlu] intent not recognized for “water”
[09:49:03] [Dialogue] was ask to end session with id 5f947a82-cc36-4dc8-a84c-e814916edca1 by saying ‘Bravo, you are correct!
[09:49:03] [Dialogue] was asked to start a session on site office
[09:49:03] [Tts] was asked to say “Bravo, you are correct!”
[09:49:05] [Dialogue] session with id ‘5f947a82-cc36-4dc8-a84c-e814916edca1’ was ended on site office. The session ended as expected
[09:49:05] [Dialogue] session with id ‘f43b908d-aae0-4f15-9aca-cae96c328083’ was started on site office
[09:49:05] [Tts] was asked to say “Next one, how do you spell federation?
[09:49:11] [Asr] captured text “JFK” in 2.4s
[09:49:12] [Nlu] intent not recognized for “JFK”
[09:49:12] [Dialogue] was ask to end session with id f43b908d-aae0-4f15-9aca-cae96c328083
[09:49:12] [Dialogue] session with id ‘f43b908d-aae0-4f15-9aca-cae96c328083’ was ended on site office. The session ended as expected
[09:49:12] [Dialogue] was asked to start a session on site office
[09:49:12] [Dialogue] was asked to start a session on site office
[09:49:12] [Dialogue] was asked to start a session on site office
[09:49:12] [Dialogue] was asked to start a session on site office
[09:49:12] [Dialogue] was asked to start a session on site office
[09:49:12] [Dialogue] session with id ‘011930eb-f478-4e86-bc1e-a19aacdee0f1’ was started on site office
[09:49:12] [Tts] was asked to say “This is wrong unfortunately…
[09:49:16] [Dialogue] session with id ‘011930eb-f478-4e86-bc1e-a19aacdee0f1’ was ended on site office. The session ended as expected
[09:49:16] [Dialogue] session with id ‘6df25c34-975c-4f08-9000-daa6ad5c4c7c’ was started on site office
[09:49:16] [Tts] was asked to say “federation is spelled: f, e, d, e, r, a, t, i, o, n,
[09:49:27] [Dialogue] session with id ‘6df25c34-975c-4f08-9000-daa6ad5c4c7c’ was ended on site office. The session ended as expected
[09:49:27] [Dialogue] session with id ‘a055dada-e285-4fe6-9599-564c01536942’ was started on site office
[09:49:27] [Tts] was asked to say “This is game over. You scored 6 points
[09:49:33] [Dialogue] session with id ‘a055dada-e285-4fe6-9599-564c01536942’ was ended on site office. The session ended as expected
[09:49:33] [Dialogue] session with id ‘5df42b3f-648c-49df-8fe8-97e3dfc3bd4b’ was started on site office
[09:49:33] [Tts] was asked to say “This is your new personal highscore! Congratulations!
[09:49:37] [Dialogue] session with id ‘5df42b3f-648c-49df-8fe8-97e3dfc3bd4b’ was ended on site office. The session ended as expected
[09:49:37] [Dialogue] session with id ’09be3a02-1cdb-47b2-bc72-2c78c14b1c63′ was started on site office
[09:49:37] [Tts] was asked to say “Do you want to play again?
[09:49:41] [Asr] captured text “no” in 1.9s
[09:49:42] [Nlu] detected intent Psychokiller1888:answerYesOrNo with probability 1.000 for input “no”
Slots ->
answer -> no
[09:49:42] [Dialogue] New intent detected Psychokiller1888:answerYesOrNo with probability 1.000
Slots ->
answer -> no
[09:49:42] [Dialogue] was ask to end session with id 09be3a02-1cdb-47b2-bc72-2c78c14b1c63 by saying ‘Ok, we’ll certainly play again soon!
[09:49:42] [Tts] was asked to say “Ok, we’ll certainly play again soon!”
[09:49:45] [Dialogue] session with id ’09be3a02-1cdb-47b2-bc72-2c78c14b1c63’ was ended on site office. The session ended as expected

Or if you prefer to listen, here’s another game: Soundcloud

Yes, it tracks points and stores them in a database, so you are told if you beat your previous highscore. The game module engine can tell your ranking per game among the users that have played in your home. A pretty good way to decide who, between you and your partner, is going to cook this evening!

Nothing really hard to get there: taking advantage of “intent not recognized” again, and using SQLite to store the highscores and query them, as sketched below. I’m planning on adding more games of course!
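
Here is a minimal SQLite highscore sketch along those lines; the schema and names are assumptions for illustration, not the actual game module:

import sqlite3

con = sqlite3.connect('highscores.db')
con.execute('CREATE TABLE IF NOT EXISTS highscores (user TEXT, game TEXT, score INTEGER, PRIMARY KEY (user, game))')

def saveScore(user: str, game: str, score: int) -> bool:
   # Returns True when this score beats the stored personal highscore
   row = con.execute('SELECT score FROM highscores WHERE user = ? AND game = ?', (user, game)).fetchone()
   if row is None or score > row[0]:
      con.execute('INSERT OR REPLACE INTO highscores VALUES (?, ?, ?)', (user, game, score))
      con.commit()
      return True
   return False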

Dev safe!

Links

Snips: https://snips.ai
Soundcloud: https://soundcloud.com

Snips! Use my Sonos!

What’s better than good sound quality? I have been asked quite a few times how I got Snips to talk to me through my Sonos equipment, so I decided to dump all this knowledge into a small demo script I shared on my GitHub account…

Nothing really fancy, but it shows you exactly how it’s done and can easily be extended. It supports multi-room setups as well!

I’m talking about this:

As the readme tells you, don’t forget to name the satellites the same as the Sonos speaker in the same room!

Past that, it’s just a question of installing Samba and SoCo, running the script, adding a library to your Sonos speakers, and there you go! There’s a tiny sketch of the SoCo side below.
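
Here is that sketch of the SoCo building blocks; it is not the repository’s actual code, and the share path is a made-up example:

import soco

def speakerForRoom(room: str):
   # Find the Sonos speaker whose name matches the satellite's room name
   for speaker in soco.discover() or []:
      if speaker.player_name.lower() == room.lower():
         return speaker
   return None

speaker = speakerForRoom('office')
if speaker:
   # x-file-cifs points the Sonos at a file exposed through the Samba share
   speaker.play_uri('x-file-cifs://raspberrypi/snips/talk.wav')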

I shared the assistant directly in the repository, but you can find the bundle shared on Snips as well.

The repository is here: https://github.com/Psychokiller1888/SnipsSonosInvader


Project Alice – Multi slots support

Today I decided to do and show something very easy, but very useful, that I don’t see many people using… Not multi-intents yet, that’s on its way for a stable release, but multi-slot support.

I have many zones in my apartment, and having to ask for them one after the other is a real pain:

  • “Jarvis? Turn on the lights in the kitchen”
  • “Jarvis? Turn on the lights in the living room”
  • “Jarvis? Turn on the lights in the stairs”

So what about:

“Jarvis? Turn on the lights in the kitchen, the living room and the stairs” ??

Well, it’s doable…

 

Really, there’s no dark magic behind this; Snips provides everything already, except for the Python script part.

But try it yourself!

  • Create an intent, name it “powerOnLights”
  • Add a slot, a custom one, and name it, let’s say, “room”
  • Add values to that custom slot, make sure they match your Philips Hue room names!
  • Add a couple of training phrases:
    • turn on the lights
    • lights on
    • light on in the kitchen
    • turn on the lights in the bedroom
    • i want light in the stairs
    • etc etc etc
  • Use the “Try assistant” field on the right of the Snips dashboard page and type something like: “Turn on the lights in the kitchen and the bedroom”
  • You will notice that both kitchen and bedroom were captured by the NLU and returned as slots in the intent:
    {
      "input": "turn on the lights in the kitchen and the bedroom",
      "intent": {
        "intentName": "Psychokiller1888:powerOnLights",
        "probability": 0.30053192
      },
      "slots": [
        {
          "rawValue": "kitchen",
          "value": {
            "kind": "Custom",
            "value": "kitchen"
          },
          "range": {
            "start": 26,
            "end": 33
          },
          "entity": "room",
          "slotName": "room"
        },
        {
          "rawValue": "bedroom",
          "value": {
            "kind": "Custom",
            "value": "bedroom"
          },
          "range": {
            "start": 42,
            "end": 49
          },
          "entity": "room",
          "slotName": "room"
        }
      ]
    }

It’s now only a matter of parsing the slots and using them in Python or whatever language… The trick is known by all devs, or at least I’ve used it in many languages already: since dictionaries can’t hold the same key twice, I collect the slots into a dictionary of lists, keyed by slot name.

This is my way:

import json
from collections import defaultdict

import slotModel  # the module holding the Slot class shown below

def parseSlotsToObjects(message):
   slots = defaultdict(list)
   data = json.loads(message.payload)
   if 'slots' in data:
      for slotData in data['slots']:
         slot = slotModel.Slot(slotData)
         slots[slot.slotName].append(slot)
   return slots

Slot is an object I use to transport the slots, obviously:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

class Slot(object):
   def __init__(self, data):
      self.slotName = data['slotName']
      self.entity = data['entity']
      self.rawValue = data['rawValue']
      self.value = data['value']
      self.range = data['range']

Using the slots is then as easy as looping over the wanted dictionary key, as it will contain a list:

# _bridge and _groups are assumed to come from the Philips Hue setup elsewhere in the script
if intent == _INTENT_LIGHT_ON:
   if 'room' in slots:
      for slot in slots['room']:
         room = slot.rawValue.lower()
         if room == 'everywhere':
            _bridge.set_group(0, 'on', True)
            break
         else:
            for light in _groups[room].lights:
               light.on = True

 

A bit more technical blog entry, as many have expressed a wish for me to share details rather than just demos. The reason I don’t really share code at this point is that I work alone and fast, and the code is messy. I always work like this: achieve the goals, prove the concept feasible, then refactor to clean up the code, dedupe, classify and improve performance.

Hope you enjoyed! Dev safe!

 
