Here we see three Spot robots playing together.
This jump rope behavior you see here serves to demonstrate both the capabilities of the robot and arm hardware, as well as the software suite that controls and coordinates them.
While jumping rope may not be particularly relevant in an industrial setting, we hope you can appreciate the capabilities that a quirky and fun scene like this demonstrates.
But as I said at the outset, we have much greater aspirations for Spot than merely performing stunts like jumping rope.
Based on discussions with early users, it is clear to us that the main themes that make Spot valuable in applications in construction, utilities, oil and gas, and manufacturing are equally relevant when we think about mobile manipulation.
This makes us very eager to deliver mobile manipulation capabilities that build on our experience in advanced mobility, autonomy, and control. Our intent is to make it as easy to perform manipulation tasks with Spot as Spot has already made it to locomote through complex environments.
The arm we developed includes six degrees of freedom in addition to a gripper.
The overall length of the arm is approximately one meter.
When combined with the mobility of the base robot, this results in a system that can flexibly access all of the environment around the robot.
The arm weighs only eight kilos, but is capable of picking up and carrying roughly five kilos.
This gives the robot ample strength to interact with typical objects it is likely to encounter in its environment, including dragging things like cinder blocks.
The arm is fast enough to move the end effector at velocities of up to 10 meters per second, making things like the jump rope demonstration possible.
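To give a feel for how modest joint rates produce high end-effector speeds on a one-meter arm, here is a minimal sketch of the standard upper bound for a serial arm: each joint contributes its rate times its distance to the tip. The link lengths and joint rates below are hypothetical illustrations, not the actual arm's parameters.

```python
def max_tip_speed(link_lengths, joint_rates):
    """Upper bound on end-effector speed for a serial arm:
    each joint contributes (distance from that joint to the tip) * rate,
    with all contributions assumed to align in the worst case."""
    speed = 0.0
    remaining = sum(link_lengths)  # distance from the first joint to the tip
    for length, rate in zip(link_lengths, joint_rates):
        speed += remaining * rate
        remaining -= length
    return speed

# Hypothetical numbers: a 1 m arm split into two 0.5 m links,
# each joint swinging at 5 rad/s.
print(max_tip_speed([0.5, 0.5], [5.0, 5.0]))  # 7.5 m/s upper bound
```

Even this toy configuration approaches the quoted 10 m/s, which is why whole-arm motions can swing a rope fast enough for the demonstration.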
Finally, we have included both an imaging depth sensor and a 4K RGB camera in its gripper, making it possible to see objects the robot is manipulating and to perform inspection tasks.
This initial release of the arm will ship with two primary modes for controlling it.
You can use the tablet to directly teleoperate the arm to perform one-off inspection or manipulation tasks.
Alternatively, you can access all of the arm functions through an API much like you can remotely control locomotion.
This enables development of novel autonomous control strategies, or even allows you to build your own custom teleoperation interface.
Out of the box, both of these interfaces provide access to basic kinematic control of the arm, as well as a suite of more complex behaviors that incorporate supervised autonomy.
This is very similar to how we've tried to make Spot's mobility a seamless experience for the user: you simply tell the robot where to go, and the robot decides how to place its feet and maintain balance. In the case of the arm, the user tells the robot what to grasp or how to interact with the environment, and the robot makes the necessary local decisions to complete the task.
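The two levels of control described above can be pictured with a small sketch. Everything here is hypothetical and illustrative, not the actual Spot SDK: the class names, method names, and the mock client standing in for a networked robot are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # meters, in a robot-relative frame (hypothetical convention)
    y: float
    z: float

class ArmClient:
    """Mock client standing in for a network-connected robot."""
    def __init__(self):
        self.log = []

    def move_gripper_to(self, pose: Pose):
        # Basic kinematic control: the caller specifies a gripper pose,
        # and the robot solves for joint angles itself.
        self.log.append(("move", pose))

    def grasp_at(self, pose: Pose):
        # Supervised autonomy: the caller names a target, and the robot
        # makes the local decisions needed to complete the grasp.
        self.log.append(("grasp", pose))

client = ArmClient()
client.move_gripper_to(Pose(0.8, 0.0, 0.5))  # reach out in front
client.grasp_at(Pose(0.9, 0.1, 0.2))         # pick a nearby object
print([action for action, _ in client.log])  # ['move', 'grasp']
```

The design point is the division of labor: both the tablet and an API client issue goals at roughly this level of abstraction, leaving balance, footing, and fine grasp adjustments to the robot.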
The arm is a tightly integrated add-on for the robot.
This is important, as it allows the entire system to maintain balance and take advantage of the underlying mobility when manipulating in the complex environments we expect the robots to be used in.
And while the arm can only be attached or removed by Boston Dynamics, there's room for additional sensors under and behind the arm, so the platform can still be customized for specific applications even once the arm is mounted.
This video demonstrates the robot being teleoperated to manipulate a variety of objects.
We start by using a semi-autonomous grasping behavior to grasp and drag a hose.
We can also see how the large workspace afforded by the combination of the arm and Spot's mobility can be used to nudge a container on the floor.
Finally, we can see how the sensors in the gripper can be used to semi-autonomously grasp, pick, and place a tool.
In this next video, we can see a more complex, integrated behavior being used to interact with a switch. We foresee that this type of constrained manipulation or interaction will be central to many of the use cases for mobile manipulation on Spot.
The same type of interaction can be used to open doors.
Here we see a fully automated door opening behavior we have developed in action, allowing the robot to quickly traverse a standard door.
The door opening behavior merely requires the operator to point the robot at the door handle and tell the robot which side the hinge is on.
After that, Spot takes care of all the details of pushing or pulling the door, using its foot to hold the door open while it regrasps the door and completes the task.
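The door-opening sequence just described can be summarized as a simple plan the robot fills in from two operator inputs. This is a hedged sketch of that structure only; the function, step names, and hinge-side parameter are invented for illustration, not Boston Dynamics code.

```python
def door_opening_plan(hinge_side: str) -> list[str]:
    """Expand the operator's two inputs (handle location is assumed
    already given) into the step sequence described in the video."""
    assert hinge_side in ("left", "right")
    return [
        "grasp_handle",
        "turn_handle",
        f"swing_door_{hinge_side}",  # push or pull depending on geometry
        "block_door_with_foot",      # hold the door open while repositioning
        "regrasp_door",
        "walk_through",
        "release_door",
    ]

print(door_opening_plan("left")[0])  # 'grasp_handle'
```

The point of the sketch is the interface: the operator supplies only the handle location and hinge side, and every remaining step is chosen and sequenced by the robot.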
Finally, you can even use the in-gripper color camera to perform inspections.
Here we can see an operator inspecting a leaky pipe and using those semi-autonomous constrained manipulation behaviors to grasp and close the valve.
We are looking forward to deploying the arm and its initial set of features with a broad set of users.
We are sure these new capabilities will enable users to discover what mobile manipulation can add to their business and help us continue to support the variety of use cases people are eager to explore.