Thursday, August 25, 2011

How to add mouse drag behavior in Silverlight

Today, end users expect (or demand) intuitive and interactive web applications. One component of this is the ability to manipulate objects with a mouse. This blog post provides a three-step walkthrough that describes how to create a draggable picture of Charles Babbage, the inventor of the first computer.

This walkthrough requires:

  1. Microsoft Visual Studio 2010 (link)
  2. Microsoft Visual Studio 2010 Service Pack 1 (link) – Optional
  3. Microsoft Silverlight 4 Tools for Visual Studio 2010 (link)
  4. Microsoft Expression Blend SDK for Silverlight 4 (link)

Step 1 – In VS2010, create a new Silverlight application called SilverlightApplication1

Click OK to create a host ASP.NET web application.

Step 2 – Add Blend interactivity references

Add the following assembly references:

  1. Microsoft.Expression.Interactions
    c:\Program Files (x86)\Microsoft SDKs\Expression\Blend\Silverlight\v4.0\Libraries\Microsoft.Expression.Interactions.dll
  2. System.Windows.Interactivity
    c:\Program Files (x86)\Microsoft SDKs\Expression\Blend\Silverlight\v4.0\Libraries\System.Windows.Interactivity.dll

Step 3 – Add drag behavior to an image

Copy and paste the following code into MainPage.xaml. This will display an image of Charles Babbage in the center of the screen. The XAML that enables mouse dragging is the Interaction.Behaviors block attached to the image.

<UserControl
    x:Class="SilverlightApplication1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity"
    xmlns:ei="http://schemas.microsoft.com/expression/2010/interactions"
    mc:Ignorable="d"
    d:DesignHeight="600"
    d:DesignWidth="800"
    >
    <Grid x:Name="LayoutRoot" Background="White">
        <Image
            Width="100"
            Source="http://upload.wikimedia.org/wikipedia/commons/6/6b/Charles_Babbage_-_1860.jpg"
            >
            <i:Interaction.Behaviors>
                <ei:MouseDragElementBehavior/>
            </i:Interaction.Behaviors>
        </Image>
    </Grid>
</UserControl>

You are done! Press F5 to start the project in debug mode. Using your mouse, the picture of Charles Babbage can be dragged around the screen.

This works fine, but with a little extra work we can make the application a little more intuitive. By adding a frame, a drop shadow and a hand cursor, the picture now has a visual cue inviting users to interact.

<UserControl
    x:Class="SilverlightApplication1.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity"
    xmlns:ei="http://schemas.microsoft.com/expression/2010/interactions"
    mc:Ignorable="d"
    d:DesignHeight="600"
    d:DesignWidth="800"
    >
    <Grid x:Name="LayoutRoot" Background="White">
        <Grid>
            <Grid Background="White" Cursor="Hand" HorizontalAlignment="Center"
                  VerticalAlignment="Center">
                <i:Interaction.Behaviors>
                    <ei:MouseDragElementBehavior/>
                </i:Interaction.Behaviors>
                <Grid Background="Black">
                    <Grid.Effect>
                        <BlurEffect Radius="15"/>
                    </Grid.Effect>
                </Grid>
                <Grid Background="White"/>
                <Image
                    Margin="10"
                    Width="100"
                    Source="http://upload.wikimedia.org/wikipedia/commons/6/6b/Charles_Babbage_-_1860.jpg"
                    />
            </Grid>
        </Grid>
    </Grid>
</UserControl>

Wednesday, August 24, 2011

How to create a simple proxy in ASP.NET

A common obstacle when developing Silverlight web applications is cross-domain access. Silverlight has built-in security that prevents communication with servers that have not explicitly granted access with either a client access policy file or a cross-domain file. Even if a server has services intended for public use, Silverlight apps will be denied access unless the server hosts such a file.
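For reference, this is the kind of policy file Silverlight looks for at the root of a remote server. The sketch below is a deliberately permissive clientaccesspolicy.xml; real deployments should restrict the allowed domains and paths:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Permissive for illustration: any calling domain, any request headers -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <!-- Grant access to the whole site, including subpaths -->
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```

When a server does not (and will not) publish a file like this, a proxy such as the one below is the practical workaround.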

This post presents a convenient and legitimate way to overcome this restriction with a simple proxy service in three easy steps!

Step 1 – In Visual Studio 2010, create a new ASP.NET Web Application project called WebApplication1

Step 2 – Add a new generic handler called proxy.ashx

Step 3 – Copy and paste this code into proxy.ashx.cs.

using System;
using System.IO;
using System.Net;
using System.Web;

namespace WebApplication1 {
    public class proxy : IHttpHandler {
        public void ProcessRequest(HttpContext context) {
            HttpResponse response = context.Response;

            // Check for query string
            string uri = Uri.UnescapeDataString(context.Request.QueryString.ToString());
            if (string.IsNullOrWhiteSpace(uri)) {
                response.StatusCode = 403;
                response.End();
                return;
            }

            // Filter requests
            if (!uri.ToLowerInvariant().Contains("wikimedia.org")) {
                response.StatusCode = 403;
                response.End();
                return;
            }

            // Create web request
            WebRequest webRequest = WebRequest.Create(new Uri(uri));
            webRequest.Method = context.Request.HttpMethod;

            // Send the request to the server
            WebResponse serverResponse = null;
            try {
                serverResponse = webRequest.GetResponse();
            }
            catch (WebException webExc) {
                response.StatusCode = 500;
                response.StatusDescription = webExc.Status.ToString();
                response.Write(webExc.Response);
                response.End();
                return;
            }

            // Exit if invalid response
            if (serverResponse == null) {
                response.End();
                return;
            }

            // Configure response
            response.ContentType = serverResponse.ContentType;
            Stream stream = serverResponse.GetResponseStream();

            byte[] buffer = new byte[32768];
            int read = 0;

            int chunk;
            while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0) {
                read += chunk;
                if (read != buffer.Length) { continue; }
                int nextByte = stream.ReadByte();
                if (nextByte == -1) { break; }

                // Resize the buffer
                byte[] newBuffer = new byte[buffer.Length * 2];
                Array.Copy(buffer, newBuffer, buffer.Length);
                newBuffer[read] = (byte)nextByte;
                buffer = newBuffer;
                read++;
            }

            // Buffer is now too big. Shrink it.
            byte[] ret = new byte[read];
            Array.Copy(buffer, ret, read);

            response.OutputStream.Write(ret, 0, ret.Length);
            serverResponse.Close();
            stream.Close();
            response.End();
        }

        public bool IsReusable {
            get { return false; }
        }
    }
}

You are done! To test, press F5 to run the web application in debug mode. Your default web browser will display the Default.aspx page from your project.

Without closing the web browser, append the name of the proxy service followed by the web resource you need to access. For example:
http://localhost:51220/proxy.ashx?http://upload.wikimedia.org/wikipedia/en/f/f0/New-esri-logo.jpg

This is the result.

In summary, this post described how to create a simple proxy web service. The proxy can be used by Silverlight web applications to access resources that are restricted due to a missing cross-domain file. To prevent malicious use of the proxy, it is advisable to add some sort of access restriction; for example, in this exercise the proxy was configured to only accept requests for content from the wikimedia.org domain.

Friday, August 12, 2011

Moondust – A WPF Theme

Introducing Moondust, a Metro-inspired WPF theme. The theme is far from complete but does include a button, checkbox, radio button, sliders and scrollbars.

Moondust emulates Metro's bold monochrome outlines but adds a subtle drop shadow and a white mouse-over halo effect.
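To apply a theme like this, the usual WPF pattern is to merge its resource dictionary in App.xaml. The sketch below assumes a dictionary named Moondust.xaml in a Moondust assembly; the actual path depends on how the download is structured:

```xml
<Application x:Class="ThemedApp.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             StartupUri="MainWindow.xaml">
    <Application.Resources>
        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <!-- Hypothetical pack URI; use the dictionary shipped in the download -->
                <ResourceDictionary Source="/Moondust;component/Themes/Moondust.xaml"/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Application.Resources>
</Application>
```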

The source code to this theme is available via the following link. The download includes the sample application displayed in this post.

How to align Kinect’s depth image with the color image?

These two good looking gentlemen are demonstrating the blending of the Kinect's depth and video feeds. Because the depth and video sensors have different resolutions and are offset on the device itself, a computational procedure is needed to map data from one to the other.

Thankfully, Microsoft has provided comprehensive documentation such as the Skeletal Viewer Walkthrough and the Programming Guide for the Kinect for Windows SDK. This post provides a simple walkthrough that efficiently maps depth sensor pixels, for each person, to the video sensor feed.

This exercise requires:

  1. Xbox 360 Kinect
  2. Microsoft Windows 7 (32 or 64bit)
  3. Microsoft Visual Studio 2010
  4. Kinect for Windows SDK beta

In Microsoft Visual Studio 2010, create a new WPF application and add a reference to the Microsoft.Research.Kinect assembly. In this exercise the name of the project (and default namespace) is KinectSample.

Add the following code to MainWindow.xaml:

<Window x:Class="KinectSample.MainWindow"
       xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
       xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
       Title="Kinect Sample"
       Height="600"
       Width="800"
       >
    <Grid>
        <Image x:Name="ImageVideo" Stretch="UniformToFill" HorizontalAlignment="Center"
              VerticalAlignment="Center" />
        <Image x:Name="ImageDepth" Stretch="UniformToFill" HorizontalAlignment="Center"
              VerticalAlignment="Center" Opacity="0.5" />
    </Grid>
</Window>

In MainWindow.xaml.cs add the following:

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Research.Kinect.Nui;

namespace KinectSample {
    public partial class MainWindow : Window {
        private Runtime _runtime = null;

        public MainWindow() {
            InitializeComponent();
            this.Loaded += new RoutedEventHandler(this.KinectControl_Loaded);
        }

        private void KinectControl_Loaded(object sender, RoutedEventArgs args) {
            this._runtime = new Runtime();

            try {
                this._runtime.Initialize(
                    RuntimeOptions.UseDepthAndPlayerIndex |
                    RuntimeOptions.UseSkeletalTracking |
                    RuntimeOptions.UseColor
                );
            }
            catch (InvalidOperationException) {
                MessageBox.Show("Runtime initialization failed. " +
                                "Please make sure Kinect device is plugged in.");
                return;
            }

            try {
                this._runtime.VideoStream.Open(
                    ImageStreamType.Video, 2,
                    ImageResolution.Resolution640x480,
                    ImageType.Color);
                this._runtime.DepthStream.Open(
                    ImageStreamType.Depth, 2,
                    ImageResolution.Resolution320x240,
                    ImageType.DepthAndPlayerIndex);
            }
            catch (InvalidOperationException) {
                MessageBox.Show("Failed to open stream. " +
                    "Please make sure to specify a supported image type and resolution.");
                return;
            }

            this._runtime.VideoFrameReady += (s, e) => {
                PlanarImage planarImage = e.ImageFrame.Image;
                this.ImageVideo.Source = BitmapSource.Create(
                    planarImage.Width,
                    planarImage.Height,
                    96d,
                    96d,
                    PixelFormats.Bgr32,
                    null,
                    planarImage.Bits,
                    planarImage.Width * planarImage.BytesPerPixel
                );
            };

            this._runtime.DepthFrameReady += (s, e) => {
                PlanarImage planarImage = e.ImageFrame.Image;
                byte[] depth = planarImage.Bits;
                int width = planarImage.Width;
                int height = planarImage.Height;
                byte[] color = new byte[width * height * 4];
                ImageViewArea viewArea = new ImageViewArea() {
                    CenterX = 0,
                    CenterY = 0,
                    Zoom = ImageDigitalZoom.Zoom1x
                };
                ImageResolution resolution = this._runtime.VideoStream.Resolution;

                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        // Each depth pixel is two bytes; the low three bits are the player index
                        int index = (y * width + x) * 2;
                        int player = depth[index] & 7;
                        if (player == 0) { continue; }
                        short depthValue =
                            (short)(depth[index] | (depth[index + 1] << 8));
                        int colorX;
                        int colorY;
                        this._runtime.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
                            resolution,
                            viewArea,
                            x,
                            y,
                            depthValue,
                            out colorX,
                            out colorY
                        );
                        // The video feed is 640x480 but the depth feed is 320x240
                        int adjustedX = colorX / 2;
                        int adjustedY = colorY / 2;
                        if (adjustedX < 0 || adjustedX > 319) { continue; }
                        if (adjustedY < 0 || adjustedY > 239) { continue; }
                        int indexColor = (adjustedY * width + adjustedX) * 4;
                        Color[] colors = new Color[] {
                            Colors.Red, Colors.Green, Colors.Blue, Colors.White,
                            Colors.Gold, Colors.Cyan, Colors.Plum
                        };
                        Color col = colors[player - 1];
                        // Bgra32 byte order: blue, green, red, alpha
                        color[indexColor + 0] = col.B;
                        color[indexColor + 1] = col.G;
                        color[indexColor + 2] = col.R;
                        color[indexColor + 3] = col.A;
                    }
                }

                this.ImageDepth.Source = BitmapSource.Create(
                    width,
                    height,
                    96d,
                    96d,
                    PixelFormats.Bgra32,
                    null,
                    color,
                    width * 4
                );
            };
        }
    }
}

You are done!

This sample will color-code the first seven people identified by the Kinect device. The most intensive part of this code is the GetColorPixelCoordinatesFromDepthPixel call that maps depth pixels to the video image. To improve performance, only pixels identified by the sensor as being a person are mapped. All other depth pixels are ignored.

Thursday, August 11, 2011

Kinect Control for WPF

Yesterday, Esri’s Applications Prototype Lab released a sample for ArcGlobe that allows users to navigate in three dimensions using a Kinect sensor and simple hand gestures.

This post describes a sample utility library, developed in conjunction with the ArcGlobe add-in, called KinectControl. KinectControl is a WPF user control that can display raw Kinect feeds and, most importantly, provides developers with the orientation, inclination and extension of both arms relative to the sensor. KinectControl was developed as a generic library that can be used to Kinect-enable any application.

The following few screenshots demonstrate the capabilities of the KinectControl. By default, KinectControl displays the sensor’s video feed and the skeleton of the closest person to the sensor. The orange text at the bottom of the app is debug information from the test application.

Occasionally a limb may appear red; this indicates that one or more of the limb’s joints cannot be “tracked” and its position is “inferred”, or approximated, by the sensor. This often happens when a joint is obscured from view. For example, if a user points their hand and arm directly towards the sensor, the user’s shoulder cannot be seen by the sensor.
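In the beta SDK this tracked/inferred distinction is exposed per joint, so the red-limb logic can be sketched roughly as follows. The JointTrackingState enumeration comes from the Kinect for Windows SDK beta; the helper method and brush choices are illustrative, not KinectControl's actual code:

```
// Illustrative sketch: draw a limb red if either joint is not fully tracked.
private Brush GetLimbBrush(Joint a, Joint b) {
    bool tracked = a.TrackingState == JointTrackingState.Tracked &&
                   b.TrackingState == JointTrackingState.Tracked;
    return tracked ? Brushes.White : Brushes.Red;
}
```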

In the upper right-hand corner of the KinectControl are three buttons that allow the user to toggle between three different views. The video and depth views are self-explanatory; the third, blend, is a combination of the two.

The blend view color-codes each person identified by the Kinect sensor with a different color, as shown below. The Kinect sensor can identify up to seven people.

The white stick-figure graphic in the upper left-hand corner alerts the user whenever he or she has moved beyond the Kinect’s field of view. For example, in the screenshot below, the user has moved too far to their left.

In the bottom left hand corner are two buttons to control the inclination of the sensor. Each button click will move the sensor one degree up or down.

And lastly, the test app that is included with the sample uses binding to display the left and right arm orientation and inclination on the screen.

Please click the link below to download the KinectControl sample. To use this sample you must have a Kinect connected to a Windows 7 computer with the Kinect for Windows SDK installed.

Wednesday, August 10, 2011

Kinect for ArcGlobe

On June 16, 2011, Microsoft released the Kinect for Windows SDK. This SDK allows Windows developers to support motion with an Xbox 360 Kinect device. The Applications Prototype Lab at Esri has just completed a prototype that uses a Kinect to navigate in ArcGlobe.

To fly forward, the user can raise their right hand. The display will navigate in the direction the right hand is pointing. We call this “superman navigation”. If the left hand is elevated, the display will pivot around a central location on the globe surface. And lastly, if both hands are raised, the screen will zoom in or out as the hands are moved together or apart.
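The gesture dispatch described above can be sketched as follows. This is illustrative only: the helper methods (Zoom, FlyTowards, PointingDirection, PivotAroundTarget, Distance) and the shoulder-height threshold are hypothetical, not the add-in's actual code:

```
// Illustrative only: helper methods and thresholds are hypothetical.
if (leftHand.Y > shoulder.Y && rightHand.Y > shoulder.Y) {
    // Both hands raised: zoom as the hands move together or apart.
    Zoom(Distance(leftHand, rightHand));
} else if (rightHand.Y > shoulder.Y) {
    // Right hand raised: "superman" navigation toward the pointing direction.
    FlyTowards(PointingDirection(rightHand));
} else if (leftHand.Y > shoulder.Y) {
    // Left hand raised: pivot around a central location on the globe surface.
    PivotAroundTarget();
}
```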

To use the add-in you must have the following:

  1. Kinect for Xbox 360 sensor,
  2. Windows 7 (32 or 64bit),
  3. .NET Framework 4.0,
  4. Kinect for Windows SDK beta.

The add-in (with source code) is available here.

This add-in was challenging in the sense that translating traditional mouse navigation to motion was not easy. With mouse or touch input devices, users have immediate sensory feedback once they have clicked a mouse button or touched a screen. In a number of Xbox games, this issue has been overcome with a paused hover: the user moves the screen cursor over a button and waits a few seconds while an activation animation completes. This is fine for buttons that occupy discrete areas of a screen, but not for interaction throughout the screen.

The approach adopted, rightly or wrongly, by this add-in is that of a virtual screen that exists at arm’s length directly in front of the user. This virtual screen extends only ±25° from an arm pointed directly ahead at the real screen. This technique provides an approximate motion-to-screen mapping, with screen contact only at full arm extension and only within a narrow 50°×50° area in front of the user.
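Written out, the virtual-screen idea is a simple saturating map. If θ is the arm's horizontal angle and φ its vertical angle measured from straight ahead, the normalized screen coordinates are roughly (this is a sketch of the concept, not the add-in's exact formula):

```latex
x_{\text{screen}} = \operatorname{clamp}\left(\frac{\theta}{25^{\circ}},\,-1,\,1\right),
\qquad
y_{\text{screen}} = \operatorname{clamp}\left(\frac{\varphi}{25^{\circ}},\,-1,\,1\right)
```

Angles beyond ±25° saturate at the screen edge, and a "click" registers only when the arm is at full extension.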

Ultimately a better approach could be to rely purely on touch-like gestures such as left swipe, right swipe and pinching. There has been exciting work in this area by Deltakosh on the Kinect Toolbox project but I hope that the final release of the Kinect SDK includes gesture recognition.