In a highly controlled environment like a construction site, being able to track equipment, people, and their activities is critical to workplace safety.
Using our technology, both people- and object-recognition models are currently deployed in that construction yard.
On the screen before you, you can see the solution recognizing Yana in real time.
Take note again of that jackhammer in Ayuba's workshop.
It is resting vertically against the bench, which is a very unsafe position for such a large, heavy object, so I've already gone ahead and tagged that jackhammer in a variety of safe and unsafe positions in the system.
So when I update this pipeline, which I'm going to do right now, it will not only see the jackhammer but also tell me it's in an unstable orientation and notify Ayuba so he can quickly resolve the issue.
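The flow just described can be sketched as follows. This is a minimal illustration, not the product's actual API: the orientation labels, function names, and notification format are all hypothetical, standing in for the tagged safe/unsafe positions and the worker notification the pipeline produces.

```python
from typing import Optional

# Hypothetical set of orientation labels previously tagged as unsafe
# (stands in for the safe/unsafe positions tagged in the system).
UNSAFE_ORIENTATIONS = {"vertical_leaning", "balanced_on_edge"}

def check_detection(label: str, orientation: str, owner: str) -> Optional[str]:
    """Return an alert message if a detected object's orientation is unsafe.

    label:       the recognized object class (e.g. "jackhammer")
    orientation: the classified orientation label for this detection
    owner:       the worker to notify (e.g. "Ayuba")
    """
    if orientation in UNSAFE_ORIENTATIONS:
        # In a real pipeline this would trigger a push notification;
        # here we simply return the message text.
        return f"ALERT for {owner}: {label} is in an unstable orientation ({orientation})"
    return None

# Example: the jackhammer leaning vertically against the bench
alert = check_detection("jackhammer", "vertical_leaning", "Ayuba")
```

A safely stored object (say, `orientation="flat_on_ground"`) would return `None` and raise no alert, which is why tagging both safe and unsafe positions matters: the classifier needs examples of each.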
The solution is running more than 27 million recognitions per second across people, objects, and activities, both in the yard and the workshop.
And we're making changes to the solution in real time in the cloud and deploying them to the edge, which is a great example of how we can stretch the cloud to create a mesh of interconnected devices and services.