We're here in this exciting lab where we're building the [UNKNOWN] platform, in Qualcomm's San Diego research facility.
This is where we build this really cool technology and we get it ready to go into our Snapdragon product lines.
We take the software and the hardware that we're working on and bring them together on our latest test devices to see how they'll work on our processors.
We show it pictures.
We show it real world objects.
We show it faces, and it understands all of that. From there, you can build a story.
You can build toys.
You can have the camera understand what it's looking for.
Right away, it detected that there were flowers, that there were no people, and that it's a plant.
So we have data about the scene, and this is a live 3D object, as if you were going to take a picture of it.
But I can also pivot it, for example, and I can show it still images.
So we have cats, a beach scene.
You can see how quickly it's able to say sky and clouds.
We're going beyond just knowing that it's a beach. We also have this other data about the picture, so you can search for the characteristics of your picture right away.
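To make the searchable-characteristics idea concrete, here is a minimal sketch, assuming each photo has already been labeled by an on-device classifier (the photo names, tags, and function names are our illustration, not Qualcomm's actual pipeline). An inverted index maps each tag to the photos that carry it, so a search by characteristic is an immediate set intersection.

```python
# Hypothetical sketch: photos with classifier-produced tags (hard-coded here),
# turned into an inverted index so you can search by characteristic right away.
from collections import defaultdict

def build_tag_index(photo_tags):
    """Map each tag to the set of photos that carry it."""
    index = defaultdict(set)
    for photo, tags in photo_tags.items():
        for tag in tags:
            index[tag].add(photo)
    return index

def search(index, *tags):
    """Return the photos carrying every requested tag."""
    results = [index.get(tag, set()) for tag in tags]
    return set.intersection(*results) if results else set()

photo_tags = {
    "IMG_001.jpg": {"beach", "sky", "clouds", "sand"},
    "IMG_002.jpg": {"cat", "indoors"},
    "IMG_003.jpg": {"flowers", "plant", "outdoors"},
}
index = build_tag_index(photo_tags)
print(search(index, "sky", "clouds"))  # → {'IMG_001.jpg'}
```

In practice the tags would come from the on-device model shown in the demo; only the indexing and lookup are sketched here.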
So here's another example.
Take a picture, analyze it fully on device, and then convert it to text.
This is a significantly harder problem because we didn't observe the writer writing the strokes.
So we're not just converting a block of text, let's say on a whiteboard.
We're preserving the positional orientation of the text as well.
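One simple way to picture preserving positional orientation is to keep each recognized word with its bounding box and reassemble lines by position. This is a minimal sketch under our own assumptions (the `Word` structure, coordinates, and grouping heuristic are illustrative, not the actual on-device method).

```python
# Hypothetical sketch: recognized words carry bounding-box positions, and we
# rebuild the text so its layout mirrors where it sat on the whiteboard.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    x: float  # left edge of the word's bounding box
    y: float  # top edge of the word's bounding box

def to_positioned_text(words, line_height=20.0):
    """Group words into lines by vertical position, then order left to right."""
    lines = {}
    for w in words:
        # Words whose y falls in the same band belong to the same line.
        lines.setdefault(round(w.y / line_height), []).append(w)
    ordered = []
    for row in sorted(lines):
        ordered.append(" ".join(w.text for w in sorted(lines[row], key=lambda w: w.x)))
    return "\n".join(ordered)

words = [
    Word("agenda:", 5, 2), Word("ship", 5, 25),
    Word("demo", 60, 25), Word("Friday", 120, 25),
]
print(to_positioned_text(words))  # → "agenda:" on one line, "ship demo Friday" on the next
```

A block recognizer would flatten everything into one stream; keeping the coordinates is what lets the converted text retain the writer's spatial arrangement.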
But imagine that it could learn your style, your abbreviations, your shorthand.