I initially walked right past one of the flashy games at Google I/O -- the search giant's annual developer conference. I saw people frantically meowing into a microphone and somehow causing toy cars to race around a miniature farm. I later learned that while the game was indeed gimmicky, the tech powering it was an incredibly cool way of potentially training Google Assistant to help people with disabilities.
I was exploring a tent full of experimental tech in the conference's so-called "sandbox." The tech there, I was told, was often developed in conjunction with interested third parties -- not necessarily for wide release, but to solve a very specific problem for one or two people. Nevertheless, some of it will roll out widely soon, and several demos built to solve one person's problem could potentially help many.
Here are the three coolest projects I saw in Google's experimental tech tent at I/O.
The horse goes moo
This was the experiment powering the farm game, and it's called the Teachable Machine. First rolled out in 2017 as an easy way to understand machine learning, it's a simple browser application you can use to teach a machine to recognize specific cues. I saw a demo in which I held still for a camera, then waved. The machine could then distinguish when I was waving and use that motion to trigger a Google Assistant action, like activating a smart bulb.
Recent updates to the machine were developed alongside a man named Steve Saling, who has ALS and struggles to move. With it, he can turn on the lights by blinking his eyes or turn off the TV by raising his lower lip. Once the update rolls out later this year, the machine will be trainable on any kind of gesture or video input, as well as any kind of sound input.
In the game, participants were shown various animals and asked to do their best rendition of the associated animal sounds. After they trained the machine on their own version of a cat's meow, they were shown the animals again to see how closely they could match the sound models they had created. When they matched, the car moved along the track and they raced the person next to them.
As the attendant told me, the sound models aren't based on real animal sounds, so when participants saw a cat, they didn't actually have to meow to win. Few contestants realized this, but they stood a better chance if they just trained the machine with any noise they could easily replicate. They could make the cat oink.
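To get a feel for why any replicable noise works, here's a toy sketch of the idea -- this is not the Teachable Machine's actual code (that's a browser app), just a made-up nearest-neighbor matcher. The model only knows the examples you gave it, so an oink recorded under the "cat" label matches "cat" just fine.

```python
import math

def features(samples):
    # Toy stand-in for real audio features: average amplitude and
    # zero-crossing rate of a short clip.
    n = len(samples)
    avg = sum(abs(s) for s in samples) / n
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / n
    return (avg, crossings)

class SoundModel:
    """Nearest-neighbor matcher: whatever noise you trained for a label wins."""
    def __init__(self):
        self.examples = []  # list of (label, feature vector) pairs

    def train(self, label, samples):
        self.examples.append((label, features(samples)))

    def classify(self, samples):
        f = features(samples)
        # Return the label of the closest trained example.
        return min(self.examples, key=lambda e: math.dist(e[1], f))[0]

# The player "trained" an oink for the cat picture -- the model only knows
# what it was shown, so repeating that oink still scores as "cat".
model = SoundModel()
model.train("cat", [0.2, -0.2, 0.3, -0.1, 0.2, -0.3])   # actually an oink
model.train("horse", [0.9, 0.8, 0.9, 0.7, 0.8, 0.9])    # a steady low hum

print(model.classify([0.25, -0.15, 0.3, -0.2, 0.2, -0.25]))  # → cat
```

The point of the sketch: the game rewards consistency, not accuracy, which is exactly what makes the tech useful as an accessibility input.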
While I liked that fun twist to the game, the implications for people with disabilities are potentially profound. You could use the Teachable Machine to get Google Assistant to respond to you through any cue you're comfortable giving.
Read a bedtime story to your kids when you're not at home
My Storytime is much simpler than the Teachable Machine. Basically, it's a web browser program that allows you to record stories -- but Google has added a few smart extras to the application. Developed with an instrument company in Portland, it could be a huge help to parents who travel a lot but want to be part of the bedtime routine with little ones at home.
To use My Storytime, you'll record yourself reading a story -- it can be whatever story your child likes to hear or one of your own creation. You can separate the story into chapters and add your own intro.
Add a name to your recording, and your child can then simply ask the Google Home smart speaker at home to play it. When you first set it up, the program will guide you through a few introductory prompts. These will allow the smart speaker to ask your child, in your voice, if they are ready, and let Google handle the response.
The smart speaker then starts playing the audio of you reading the story, so your child can hear you even when you can't be there. The tech was developed for a military family whose father, stationed across the world, had trouble being free at bedtime.
Google's working on allowing you to import recordings you made while offline as well. When My Storytime is ready, which is expected to be in six to eight weeks, your child can start listening on any device with Google Assistant, including smart speakers, smart displays and even phones like the Google Pixel 3A.
Go fly a kite
The actions developed with Engawa might never see wide release. Google collaborated with senior citizens in Japan on specific applications to make voice controls more understandable and meaningful to their lives. One reads from a community bulletin board through a smart speaker like the Google Home. Another allows participants to "time travel" with reminders of pop culture and trivia from specific dates in the past.
The coolest Engawa function is a custom-built Google Assistant Action meant to help one man fly kites. The man has a large collection of kites of various shapes and sizes. The Action tracks wind conditions in nearby parks and recommends which kite he can fly that day, and where.
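As a rough illustration of the idea -- the kite names, wind ranges and park names below are made up for this sketch, not taken from Google's Action -- the matching logic might look something like this:

```python
# Hypothetical sketch: match today's wind in each park against each kite's
# comfortable wind range, then recommend a kite-and-park pairing.
KITES = {
    "small diamond": (4, 15),   # (min, max) wind in mph -- illustrative numbers
    "large delta": (8, 24),
    "box kite": (12, 30),
}

def recommend(park_winds):
    """park_winds: {park name: wind speed in mph} -> list of (kite, park)."""
    picks = []
    for kite, (low, high) in KITES.items():
        for park, wind in park_winds.items():
            if low <= wind <= high:
                picks.append((kite, park))
                break  # one park suggestion per kite is enough
    return picks

print(recommend({"Riverside Park": 6, "Hilltop Park": 14}))
```

The real Action presumably pulls live forecast data rather than a hand-typed dictionary, but the core is the same: a lookup from conditions to the right piece of gear.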
At various developer sessions throughout I/O, Google representatives emphasized finding ways to help senior citizens with Google Assistant. Engawa could lay some of the necessary groundwork. While this specific application might not apply to most people, better weather awareness could help Google with proactive notifications for a number of locations and professions. Beyond the weather, this and the other two experiments show creative paths to solving a variety of problems with tech.