We recently upgraded the input tracking of the useTable with two Gigabit Ethernet cameras from uEye. Each camera delivers a resolution of 1280 x 1024 pixels. Both camera images are stitched together into a total tracked image of 1280 x 2048 pixels. Stitching and tracking are done by CCV 1.5. The higher resolution is a fantastic benefit for our complex fiducial patterns.
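CCV handles the stitching internally; conceptually, it amounts to stacking the two camera frames along the vertical axis. A minimal sketch of that idea (assuming grayscale frames as NumPy arrays and no overlap correction — both are simplifications, not how CCV actually implements it):

```python
import numpy as np

def stitch_frames(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Stack two 1024 x 1280 frames (height x width) into one
    2048 x 1280 image, i.e. a 1280 x 2048 tracked area."""
    assert top.shape == (1024, 1280) and bottom.shape == (1024, 1280)
    return np.vstack([top, bottom])

# Dummy frames standing in for the two uEye camera images
frame_a = np.zeros((1024, 1280), dtype=np.uint8)
frame_b = np.zeros((1024, 1280), dtype=np.uint8)

stitched = stitch_frames(frame_a, frame_b)
print(stitched.shape)  # (2048, 1280)
```

Note that the stated resolution of 1280 x 2048 is width x height, which corresponds to an array shape of (2048, 1280).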
dSensingNI, the framework for multitouch, freehand and advanced tangible interaction using a depth-sensing camera, is now available for download. You may also want to join the newly created user forum! We are looking forward to seeing many cool demos created with dSensingNI.
In February, 12 pupils from a local school visited the C-LAB in order to participate in the first useTable workshop. During the one-day event they had to design different application ideas for collaborative multitouch setups. Afterwards, they could test their programming skills in a multitouch ant-rugby contest.
The dSensingNI framework was presented at the Sixth International Conference on Tangible, Embedded and Embodied Interaction (TEI 2012) in Kingston, ON, Canada. It allows the use of a depth-sensing camera (e.g. Kinect) to track multitouch gestures and tangible object movements in 3D space. We use this approach for advanced interaction techniques on the useTable. For more information and videos, please visit the dSensingNI project website www.dsensingni.net.
Today the PG MUTTI visited the THW in Detmold to get an introduction to their work, simulated in a live demonstration.
There is a new blog section for the project group “Multi-User Table for Tangible Interaction” (MUTTI)!