I want to start off this show with something that is really a look at what's next.
Could you take a picture for me?
I believe this is a Rhodesian Ridgeback.
It recognizes you.
Firefly can recognize art.
So you're going to connect smartphones and tablets to everyday objects, even things that don't have any computing or connectivity capabilities themselves.
Bar codes, QR codes, short links, even URLs themselves are all, in some ways, crutches for computing devices that have cameras, yet don't know what they're looking at.
Enter image recognition.
We haven't just recognized the episode, we've recognized the scene.
If you point the camera at a package of soap, well, you can get a link to buy that soap on Amazon.
Place Ernie into a room.
And Ernie comes to life.
Tag the dog's eyes and nose.
And then the photo is analyzed and uploaded.
If someone finds your dog, face recognition software will match it to the picture on file.
It responds to your gestures, and it listens to your voice.
Now, image recognition is of course a close cousin of facial recognition, which most of us got very familiar with right after 9/11, when certain major sporting events began using cameras to see who was coming in and figure out their identities.
But image recognition more broadly doesn't just focus on faces.
It will look at just about any object, at least in the future, especially objects in the retail environment, and bring you information or the ability to act on that product.
But humans have been incredibly good at image recognition and pattern recognition for millennia.
So why are we teaching dumb devices to do something that we do supremely well?
Well, because the gadgets bring some additional capabilities to the table that we don't have.
First of all, they have access to more information than you can ever stick in your head.
A larger database, if you will, of possible matches for what they're looking at.
More than you could ever memorize, or would ever want to.
Next up is discrimination, a tool that machines can use very well.
They can tell two very similar things apart in a way that humans often can't.
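Both advantages, a big database of candidates and fine-grained discrimination, boil down to the same matching step. Here's a minimal sketch of that idea; everything in it is invented for illustration (the object names and three-number feature vectors are toy stand-ins for the high-dimensional embeddings a trained model would actually produce):

```python
import math

# Hypothetical toy "database": object names mapped to feature vectors.
# A real recognizer would store high-dimensional embeddings produced by
# a trained model; these tiny vectors are made up for illustration.
DATABASE = {
    "rhodesian_ridgeback": [0.90, 0.10, 0.30],
    "vizsla":              [0.85, 0.15, 0.35],  # deliberately very similar
    "soap_package":        [0.10, 0.90, 0.20],
}

def cosine_similarity(a, b):
    # Angle-based closeness of two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query):
    # Score the query vector against every stored vector; keep the closest.
    return max(DATABASE, key=lambda name: cosine_similarity(query, DATABASE[name]))

print(best_match([0.88, 0.12, 0.31]))
```

The point of the sketch is the discrimination part: the two dog breeds differ only slightly, yet the similarity scores still rank one above the other, which is exactly the comparison a human eyeballing two near-identical things struggles to make.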
If you hear a song on the radio and you want to tell me about it, so maybe I can identify it for you, what are you going to do, hum a few bars?
That's not going to help me a lot.
But a machine can very readily check its database of similar-sounding songs and find the right one.
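That kind of lookup can be sketched as a toy fingerprint index. The landmark strings and song titles below are invented for illustration; a real service uses time-frequency landmarks extracted from audio, but the hash-and-vote idea is the same:

```python
import hashlib

# Toy data: each song reduced to a few "landmark" strings (hypothetical;
# a real fingerprinter derives these from the audio itself).
SONGS = {
    "Song A": ["c4-e4", "e4-g4", "g4-c5"],
    "Song B": ["c4-e4", "e4-f4", "f4-a4"],  # shares one landmark with Song A
}

def fingerprint(landmark):
    # Hash each landmark down to a short, fixed-size key.
    return hashlib.sha1(landmark.encode()).hexdigest()[:8]

# Inverted index: fingerprint -> titles of songs containing that landmark.
INDEX = {}
for title, landmarks in SONGS.items():
    for lm in landmarks:
        INDEX.setdefault(fingerprint(lm), []).append(title)

def identify(clip_landmarks):
    # Each of the clip's fingerprints votes for the songs it appears in;
    # the song with the most matching fingerprints wins.
    votes = {}
    for lm in clip_landmarks:
        for title in INDEX.get(fingerprint(lm), []):
            votes[title] = votes.get(title, 0) + 1
    return max(votes, key=votes.get) if votes else None

print(identify(["e4-g4", "g4-c5", "c4-e4"]))
```

Even though the two songs share a landmark, the vote count separates them cleanly, which is the machine's version of telling two very similar things apart.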
Then there's automation.
We've talked a lot on this show about how we're starting to use cameras on devices to map the interior world.
Airports, stadiums, malls, what have you.
To do that with actual human eyes making notes into a database would be too tedious to be practical.
To have devices do it in an automated way is actually quite possible.
Now, how do those attributes add up to functions that you'll care about as a user?
Especially in the retail space?
First of all, there's product information.
[INAUDIBLE] codes, shelf tags, [INAUDIBLE] URL, any of those things try to get us closer to product information, but they still stand between us and that information.
If you've got a device that can look natively at the product and inform you about it, the way that our eyes want to do, you're getting much closer.
Then there are hard-to-search or fuzzy-search items: try Googling a shirt you saw someone wearing that you'd like to buy.
Then there's identification and personalization, but this is of the user. It's kind of the reverse: instead of the device looking at the product, it's now looking at the user of the product or service.
To allow them to log in, to personalize, or even to limit what they're able to access.
Now where image recognition technology makes it into your hands will be interesting.
The obvious and current darling is in our mobiles, in phones and tablets, because they're always with us.
They're typically always connected, and of course they have very good cameras.
Next up, look for it propagating in smart glasses.
Those that have cameras of course have most of the attributes of a mobile device.
But they also have this sort of innate gaze match: they're pointed where we're pointed.
They're very intuitive that way.
And finally there'll be a class of fixed recognition cameras.
Many of these will be doing the reverse piece, identifying the consumer who's regarding the product.
That could be something like Kinect technology, it could be Brickstream's live device, or it could be cameras already mounted in stores for security, taking on a new role of also figuring out who is doing what.
Now image recognition is still at a nascent phase, certainly for consumer use.
It's got a lot more misses than hits in my experience.
However, it's got a huge lane of potential to make a lot of product interactions a lot less friction-filled, and that gives it a pretty good chance at being a next big thing.