Imagine if you could cut and paste information among your smartphone, tablet, smart table, and big screen. Better yet, what if you could flick objects from one device to another?
Software developer Nsquared has tied together a Windows Phone 7, Slate tablet, Microsoft Surface smart table, and Kinect-controlled big screen into one seamless computing experience. The video says it all (see below).
There are some nifty moments: Put your smartphone down on a Surface--a horizontal touch-screen display that doubles as a table--and the e-mail on the phone screen automatically appears, enlarged, on the smart table beside the phone. No need to do anything but put the phone down.
Here's another: Look at a 3D model of a home on a large projected screen, choose replacement door handles using a separate application on your tablet, then flick them onto the big screen, where they're rendered and incorporated into the model. Then grab another door handle from a Silverlight-enabled Web site and likewise flick it into the model. And for the pièce de résistance, take a picture of a lamp with the tablet, crop the lamp from the background, and flick it into the model on the big screen.
The video shows that it's possible to tie these devices together, but it also left me wondering exactly how practical this is. The demonstration was a mishmash of input strategies--touch screen, in-air gesture, and speech--used to control several programs running on three different types of computers.
It looked a little awkward to juggle them all, both physically and mentally--remembering which device does what, and how. I found myself thinking it might be easier just to do everything on one device: the tablet, the table, or the big screen. For example, why tell the computer to put the lamp on the shelf when you could simply drag it there on the tablet? And why use sweeping arm gestures to position the lamp on the big screen when a finger drag on the tablet would do the job better?
That's not to say there isn't a killer app out there for seamless computing. This prototype shows that the connections, at least, are there.
If you could flick objects from your tablet onto a big screen, what would you use it for?