Surface computing + augmented reality = Microsoft LightSpace


Kinda like what the iPad is doing: no more mouse or keyboard.

(kevin @ the labs)

Microsoft is looking to extend its surface-computing work into the spatial-computing arena with a new research project known as LightSpace.

Andy Wilson, a Microsoft researcher who was key in bringing the Microsoft Surface tabletop to market, is one of the principals behind LightSpace. Wilson and other researchers are set to demonstrate the new project the week of October 3 at the ACM Symposium on User Interface Software and Technology in New York City. (Microsoft showed off LightSpace internally at its TechFest research fair on campus in March.)

LightSpace uses multiple depth cameras and projectors to allow users to transfer objects between touch surfaces. The cameras make it appear that users are carrying projected objects in their hands and then “dropping” them onto walls, where they can be manipulated using multitouch gestures similar to those used with the Microsoft Surface tabletop.

There is a connection between LightSpace and Microsoft’s Kinect game-sensor technology. Microsoft researchers envision Kinect helping to make depth cameras cheaper and more popular. According to a newly published whitepaper on LightSpace:

“Depth cameras (such as those from PrimeSense, 3DV, and Canesta) are able to directly sense range to the nearest physical surface at each pixel location. They are unique in that they enable inexpensive real time 3D modeling of surface geometry, making some traditionally difficult computer vision problems easier. For example, with a depth camera it is trivial to composite a false background in a video conferencing application. Microsoft’s Kinect device builds on PrimeSense technology and computes a skeletal model of a player for motion-driven gaming. While such cameras are now rare, the release of Kinect is likely to make depth cameras inexpensive and widely available.”
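The “trivial” false-background compositing the whitepaper mentions boils down to a per-pixel depth threshold: keep near pixels, swap in the background for far ones. A minimal sketch with NumPy (the toy frame sizes and the depth cutoff are illustrative assumptions, not LightSpace code):

```python
import numpy as np

def composite_background(color, depth, background, max_depth=1.5):
    """Replace every pixel whose depth exceeds max_depth (meters)
    with the corresponding background pixel."""
    mask = depth > max_depth                    # True where the scene is "far"
    return np.where(mask[..., None], background, color)

# Toy 2x2 frame: near pixels kept, far pixels replaced with black.
color = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
depth = np.array([[0.8, 3.0],
                  [1.2, 2.5]])                  # meters, per pixel
background = np.zeros_like(color)
out = composite_background(color, depth, background)
```

With an ordinary RGB camera the same separation would require background subtraction or segmentation; the depth channel makes it a one-line mask.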


LightSpace attempts to further the notion that “the room is the computer.” (So much for “the network is the computer.”) It also points to a future where all physical surfaces become interactive displays — including the user’s body.

Microsoft has been looking for ways to extend its Surface technology. A couple of years ago, Microsoft researchers showed off a spherical Surface. In 2009, they demonstrated another Surface spinoff known as “SecondLight,” a rear-projection vision technology that allowed users to project through and beyond the display. (By the way, for you codename hounds, the Bing Maps on Silverlight project at Microsoft happened to be codenamed “FirstLight.”)

(Thanks to Microsoft Student Partner Pradeep for the pointer to the LightSpace page.)


Surface computing + augmented reality = Microsoft LightSpace | ZDNet.

LightSpace combines elements of surface computing and augmented reality research to create a highly interactive space where any surface, and even the space between surfaces, is fully interactive. Our concept transforms the ideas of surface computing into the new realm of spatial computing.

Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays.

For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand.
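The calibration to “3D real world coordinates” described above can be pictured as a standard pinhole projection: a world point is moved into a projector’s frame by its extrinsics (rotation and translation) and then mapped to a pixel by its intrinsic matrix. A hedged sketch of that geometry (the matrices and values below are made-up placeholders, not LightSpace’s actual calibration data):

```python
import numpy as np

def project_point(world_pt, K, R, t):
    """Map a 3D world point (meters) to a projector pixel (u, v).

    K: 3x3 intrinsic matrix; R, t: world-to-projector rotation/translation.
    """
    cam = R @ world_pt + t      # world coords -> projector coords
    uvw = K @ cam               # projector coords -> homogeneous pixel
    return uvw[:2] / uvw[2]     # perspective divide

# Illustrative calibration: projector at the world origin looking down +Z,
# focal length 1000 px, principal point at (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

# A point 2 m in front of the projector, 0.2 m right and 0.1 m above axis.
uv = project_point(np.array([0.2, -0.1, 2.0]), K, R, t)
```

Once every camera and projector shares one world frame like this, drawing a graphic “on” a wall or a hand reduces to projecting its 3D position into whichever projector can see that surface.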
