Wednesday, August 10, 2011

Kinect for ArcGlobe

On June 16, 2011, Microsoft released the Kinect for Windows SDK. This SDK allows Windows developers to support motion input with an Xbox 360 Kinect device. The Applications Prototype Lab at Esri has just completed a prototype that uses a Kinect to navigate in ArcGlobe.

To fly forward, the user raises their right hand; the display then navigates in the direction the right hand is pointing. We call this “superman navigation”. If the left hand is elevated, the display pivots around a central location on the globe surface. Lastly, if both hands are raised, the display zooms in or out as the hands move together or apart.
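
As a rough illustration of this mapping, the sketch below classifies a skeleton frame into one of the three navigation modes from the relative heights of the hands and shoulders, and derives a zoom delta from the change in hand separation. The SkeletonSnapshot type, the Vector3 struct and the raise threshold are hypothetical stand-ins for values read from the Kinect skeleton stream; this is not the add-in’s actual code.

    using System;

    // Hypothetical sketch of the gesture-to-navigation mapping described above.
    // Joint positions are assumed to be in skeleton space (meters), already
    // extracted from the Kinect skeleton stream by the caller.
    public struct Vector3 { public double X, Y, Z; }

    public enum NavigationMode { None, FlyForward, Pivot, Zoom }

    public sealed class SkeletonSnapshot
    {
        public Vector3 LeftHand;
        public Vector3 RightHand;
        public Vector3 LeftShoulder;
        public Vector3 RightShoulder;
    }

    public static class GestureNavigator
    {
        // A hand counts as "raised" when it is above its shoulder by this margin.
        private const double RaiseThreshold = 0.10; // meters (assumed value)

        public static NavigationMode Classify(SkeletonSnapshot s)
        {
            bool rightRaised = s.RightHand.Y > s.RightShoulder.Y + RaiseThreshold;
            bool leftRaised = s.LeftHand.Y > s.LeftShoulder.Y + RaiseThreshold;

            if (rightRaised && leftRaised) return NavigationMode.Zoom; // both hands: zoom
            if (rightRaised) return NavigationMode.FlyForward;         // "superman" navigation
            if (leftRaised) return NavigationMode.Pivot;               // pivot around the globe target
            return NavigationMode.None;
        }

        // For zoom, the change in hand separation between frames drives the zoom factor.
        public static double ZoomDelta(SkeletonSnapshot previous, SkeletonSnapshot current)
        {
            return HandSpan(current) - HandSpan(previous);
        }

        private static double HandSpan(SkeletonSnapshot s)
        {
            double dx = s.RightHand.X - s.LeftHand.X;
            double dy = s.RightHand.Y - s.LeftHand.Y;
            double dz = s.RightHand.Z - s.LeftHand.Z;
            return Math.Sqrt(dx * dx + dy * dy + dz * dz);
        }
    }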

To use the add-in you must have the following:

  1. Kinect for Xbox 360 sensor,
  2. Windows 7 (32- or 64-bit),
  3. .NET Framework 4.0,
  4. Kinect for Windows SDK beta.

The add-in (with source code) is available here.

This add-in was challenging in the sense that translating traditional mouse navigation to motion was not easy. With mouse or touch input devices, users get immediate sensory feedback the moment they click a mouse button or touch a screen. In a number of Xbox games, this issue has been overcome with a paused hover: the user moves the screen cursor over a button with his or her hand and waits a few seconds while an activation animation completes. This works well for buttons that occupy discrete areas of the screen, but not for interaction across the entire screen.
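
As a sketch of the paused-hover pattern, assuming a hand-driven cursor in screen coordinates and an arbitrary two-second dwell time, the hypothetical HoverButton below fires an activation event only after the cursor has rested continuously inside its bounds; it is an illustration of the general idea, not any SDK’s API.

    using System;

    // Hypothetical dwell-activation sketch: the control "clicks" itself once the
    // hand-driven cursor has hovered over it continuously for a fixed duration.
    public sealed class HoverButton
    {
        private readonly double _left, _top, _width, _height;           // screen-space bounds
        private readonly TimeSpan _dwell = TimeSpan.FromSeconds(2);     // assumed dwell time
        private DateTime? _hoverStart;

        public event EventHandler Activated;

        public HoverButton(double left, double top, double width, double height)
        {
            _left = left; _top = top; _width = width; _height = height;
        }

        // Call once per frame with the current cursor position in screen coordinates.
        public void Update(double cursorX, double cursorY, DateTime now)
        {
            bool inside = cursorX >= _left && cursorX <= _left + _width &&
                          cursorY >= _top && cursorY <= _top + _height;

            if (!inside)
            {
                _hoverStart = null;         // leaving the button resets the timer
                return;
            }

            if (_hoverStart == null)
                _hoverStart = now;          // start timing on entry

            if (now - _hoverStart.Value >= _dwell)
            {
                _hoverStart = null;         // fire once, then reset
                var handler = Activated;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }
        }
    }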

The approach adopted, rightly or wrongly, by this add-in is that of a virtual screen that exists at arm’s length directly in front of the user. This virtual screen extends only ±25° from an arm pointed directly ahead at the real screen. The technique provides an approximate motion-to-screen mapping, with screen contact occurring only at full arm extension and only within a narrow 50° x 50° area in front of the user.
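
A rough sketch of that mapping is shown below, assuming hand and shoulder positions in skeleton space and an arm-length estimate supplied by the caller. The angle clamp, the extension threshold and the method names are illustrative only, not the add-in’s implementation.

    using System;

    // Rough sketch of the "virtual screen at arm's length" mapping described above.
    // Positions are in skeleton space (meters), relative to the sensor.
    public static class VirtualScreenMapper
    {
        private const double HalfAngleDegrees = 25.0;   // virtual screen spans +/- 25 degrees
        private const double ExtensionThreshold = 0.90; // fraction of arm length treated as "contact"

        // Maps a hand position to normalized screen coordinates (0..1, 0..1).
        // Returns false if the hand lies outside the 50 x 50 degree window.
        public static bool TryMapToScreen(
            double shoulderX, double shoulderY, double shoulderZ,
            double handX, double handY, double handZ,
            double armLength,
            out double screenX, out double screenY, out bool contact)
        {
            double dx = handX - shoulderX;
            double dy = handY - shoulderY;
            double dz = shoulderZ - handZ; // hand in front of the shoulder => positive reach

            // Horizontal and vertical angles of the arm relative to "straight ahead".
            double yawDeg = Math.Atan2(dx, dz) * 180.0 / Math.PI;
            double pitchDeg = Math.Atan2(dy, dz) * 180.0 / Math.PI;

            screenX = screenY = 0;
            contact = false;

            if (Math.Abs(yawDeg) > HalfAngleDegrees || Math.Abs(pitchDeg) > HalfAngleDegrees)
                return false; // outside the 50 x 50 degree virtual screen

            // Scale -25..+25 degrees to 0..1 screen coordinates (screen Y grows downward).
            screenX = (yawDeg + HalfAngleDegrees) / (2 * HalfAngleDegrees);
            screenY = 1.0 - (pitchDeg + HalfAngleDegrees) / (2 * HalfAngleDegrees);

            // "Contact" only near full arm extension.
            double reach = Math.Sqrt(dx * dx + dy * dy + dz * dz);
            contact = reach >= ExtensionThreshold * armLength;
            return true;
        }
    }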

Ultimately, a better approach might be to rely purely on touch-like gestures such as left swipe, right swipe, and pinch. There has been exciting work in this area by Deltakosh on the Kinect Toolbox project, but I hope that the final release of the Kinect SDK includes gesture recognition.
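
For illustration only, a swipe of this kind could be detected along the following lines; the window length, travel and drift thresholds below are arbitrary assumptions and do not reflect the Kinect Toolbox implementation.

    using System;
    using System.Collections.Generic;

    public enum Swipe { None, Left, Right }

    // Hypothetical swipe detector: buffers timestamped hand positions and reports
    // a left or right swipe when the hand travels far enough horizontally within
    // a short window while staying roughly level vertically.
    public sealed class SwipeDetector
    {
        private struct Sample { public DateTime Time; public double X, Y; }

        private static readonly TimeSpan Window = TimeSpan.FromMilliseconds(500); // assumed window
        private const double MinTravel = 0.30; // meters of horizontal travel (assumed)
        private const double MaxDrift = 0.10;  // meters of allowed vertical drift (assumed)

        private readonly List<Sample> _samples = new List<Sample>();

        public Swipe Update(DateTime now, double handX, double handY)
        {
            _samples.Add(new Sample { Time = now, X = handX, Y = handY });
            _samples.RemoveAll(s => now - s.Time > Window);

            if (_samples.Count < 2)
                return Swipe.None;

            double travel = _samples[_samples.Count - 1].X - _samples[0].X;
            double minY = double.MaxValue, maxY = double.MinValue;
            foreach (var s in _samples)
            {
                if (s.Y < minY) minY = s.Y;
                if (s.Y > maxY) maxY = s.Y;
            }

            if (maxY - minY > MaxDrift || Math.Abs(travel) < MinTravel)
                return Swipe.None;

            _samples.Clear(); // avoid reporting the same swipe twice
            return travel > 0 ? Swipe.Right : Swipe.Left;
        }
    }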

Thursday, June 30, 2011

Modeling the Real World with Silverlight 5 3D

Late last year Esri’s Applications Prototype Lab published a Silverlight-based web application that could render a small area of the Earth’s surface in three dimensions. This app used Einar Ingebrigtsen’s graphics library called Balder and referenced imagery and elevation services from arcgis.com.

This year at MIX11, Microsoft released the beta of Silverlight 5. Probably the most important improvement is the added support for GPU-accelerated 3D graphics. This means that Silverlight apps with 3D content are rendered by the computer’s graphics card rather than monopolizing the CPU. It is now conceptually possible to develop Google Earth-like applications using Silverlight 5 3D!

Creating a virtual globe is a rather daunting project. However, inspired by Andy Beaulieu’s physics sample and associated blog post, I decided to explore the possibility of modeling real-world data in a browser.

Nicknamed “project alpha”, the prototype shown in the video above used the following pieces:

  1. Esri Silverlight SDK 2.2 (link) - mapping
  2. Microsoft Silverlight 5 beta (link) - RIA
  3. Balder 0.8.8.9 (link) - 3D graphics engine
  4. JigLibX 0.3.1 (link) - 3D physics engine

Displaying a 3D world using streaming imagery and elevation data is not new. What is interesting about this prototype is the inclusion of a physics engine. Objects such as cars, people, tanks, and planes can interact with each other and with the Earth’s surface. Additionally, this prototype shows that any location, at any scale, can be modeled entirely within a browser.
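
To give a flavor of how a physics engine interacts with terrain, here is a minimal, self-contained sketch (generic code, not Balder or JigLibX) that bounces a falling object off a heightfield sampled from an elevation grid; the cell size, time step usage and restitution value are assumptions for illustration.

    using System;

    // Minimal physics-over-terrain sketch: a ball falls under gravity and bounces
    // off a heightfield sampled from an elevation grid. This is a generic
    // illustration, not Balder or JigLibX code.
    public sealed class TerrainPhysicsDemo
    {
        private const double Gravity = -9.81;   // m/s^2
        private const double Restitution = 0.5; // energy kept after a bounce (assumed)
        private const double CellSize = 10.0;   // meters per elevation cell (assumed)

        private readonly double[,] _elevation;  // elevation grid, e.g. sampled from an elevation service

        public TerrainPhysicsDemo(double[,] elevation)
        {
            _elevation = elevation;
        }

        // Nearest-neighbor sample of the terrain height at a world position.
        public double TerrainHeight(double x, double z)
        {
            int col = Math.Max(0, Math.Min(_elevation.GetLength(1) - 1, (int)Math.Round(x / CellSize)));
            int row = Math.Max(0, Math.Min(_elevation.GetLength(0) - 1, (int)Math.Round(z / CellSize)));
            return _elevation[row, col];
        }

        // Advances a falling ball by one time step and resolves terrain contact.
        public void Step(ref double y, ref double velocityY, double x, double z, double dt)
        {
            velocityY += Gravity * dt;
            y += velocityY * dt;

            double ground = TerrainHeight(x, z);
            if (y < ground)
            {
                y = ground;                           // push the ball back onto the surface
                velocityY = -velocityY * Restitution; // bounce with energy loss
            }
        }
    }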

Whilst dropping hundreds of tennis balls on Mount Ruapehu might seem a little fanciful, there are many practical uses for combining a 3D physics engine with mapping data. For example, an engineer may want to test drive a proposed road to examine its drivability and views. In defense, this technology could be used for modeling and simulation of the battlefield.

Unless there is significant interest, the app and source code will be published once Silverlight 5 goes final.