2017 is the year we really start talking to our computers. Without question, Amazon's Alexa kicked off 2017 by dominating CES. Amazon didn't even have an official booth for Alexa at CES, yet the Amazon voice assistant was mentioned throughout the four-day event.
This is also the year I purchased a Google Home (one of Amazon Echo's competitors), and my Nexus 6P phone was auto-updated with Google Assistant (one of Amazon Alexa's competitors). Google Home and Google Assistant provide natural-speech access to Google's vast services layer and knowledge graph. As a family we've enjoyed talking to Google Home. I've paired the Google Home with several Belkin Wemo switches, and we can rather naturally turn off a few lights around the house.
I also wrote a basic Google Action to learn more about the guts of natural language processing (NLP) development. I used Google's API.AI to create the voice interactions with almost "drag and drop" ease. The only actual coding required was the webhook. Webhooks can be simple or complex programs that take the voice-input variables and either send them to an external API for processing or rattle through some if/then logic.
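To make that concrete, here's a minimal sketch of what such a webhook handler might look like. This assumes the JSON shape API.AI used at the time (an incoming `result` object carrying the matched intent and its parameters, and a response with `speech`/`displayText` fields); the intent name `tv-timer` and the `duration` parameter are hypothetical examples, not from my actual Action.

```python
import json

def handle_webhook(request_body: str) -> str:
    """Take the voice-input variables API.AI sends and rattle
    through some if/then logic to build a spoken response."""
    req = json.loads(request_body)
    result = req.get("result", {})
    intent = result.get("metadata", {}).get("intentName", "")
    params = result.get("parameters", {})

    # Hypothetical intent: the user asked for a TV timer.
    if intent == "tv-timer":
        minutes = params.get("duration", 20)
        speech = f"Okay, timer set for {minutes} minutes."
    else:
        speech = "Sorry, I didn't catch that."

    # Respond with the fields API.AI reads back to the user.
    return json.dumps({"speech": speech, "displayText": speech})

# Example payload shaped like an API.AI webhook request:
sample = json.dumps({
    "result": {
        "metadata": {"intentName": "tv-timer"},
        "parameters": {"duration": 20},
    }
})
print(json.loads(handle_webhook(sample))["speech"])
```

In a real deployment this function would sit behind an HTTPS endpoint (a small Flask app or a cloud function, say) that API.AI POSTs to whenever the intent fires, but the core job is just this: parse the variables, branch, and hand back something to say.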
All this preamble is interesting geekery that I'm sure fuels many product meetings across the business landscape. However, it was a conversation tonight with my four-year-old daughter that solidified voice as the future dominant and preferred way to interact with computers.
Here's the quick story... I was watching the news on our main TV, and my four-year-old said, "Can I tell Google how much longer you can watch TV?" I said, "Sure." A few minutes later Quinn was talking to the Google Home and had successfully set a timer for 20 minutes. She's four! She interfaced with a computer and set a timer for 20 minutes. That's amazing.