#746 – Specifying Inertial Deceleration

In WPF, you can use inertia so that objects continue moving for a short time after you lift your finger from the screen.

Calculation of inertial behavior requires both an initial velocity and a deceleration.  WPF knows the initial velocity of an object, based on how fast you are moving it on the screen.  The deceleration value is something that you specify.

Initial velocity values are typically in the range of 0-4 DIPs/ms (DIPs per millisecond), or roughly 0-42 in/sec.

Deceleration is expressed in DIPs/ms^2 (DIPs per millisecond squared).  If we want an object moving at the top speed of 4 DIPs/ms to decelerate to 0 within about 1/2 sec, we can use values in the range of 0-0.008 DIPs/ms^2 (4 / 500).  The upper value is equivalent to about 83 in/sec^2, or roughly 7 ft/sec^2.

If you start with a deceleration value in in/sec^2, you can convert to DIPs/ms^2 using the formula:

x’ = x * 96 / (1000 * 1000)

You can experiment with different deceleration values to get the exact deceleration behavior that you want in your application.
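
For example, here’s a minimal sketch (not part of the original post) of setting the deceleration in a ManipulationInertiaStarting event handler.  It assumes an element with IsManipulationEnabled set to true and the handler wired up in XAML, as in the manipulation examples in the posts below; the handler name and the 10 in/sec^2 starting value are just placeholders.

        private void Image_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
        {
            // Illustrative starting point, in in/sec^2; tune to taste
            const double decelInchesPerSecSquared = 10.0;

            // Convert in/sec^2 to DIPs/ms^2:  x' = x * 96 / (1000 * 1000)
            e.TranslationBehavior.DesiredDeceleration =
                decelInchesPerSecSquared * 96.0 / (1000.0 * 1000.0);
        }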


#745 – The Basics of Inertia

Inertia is the idea that an object will resist a change in motion.  For touch manipulation in WPF, inertia means that objects can continue moving a little bit after you lift your finger from the screen.

Inertial behavior depends on two things: the initial velocity of the object and a specified deceleration value.

The initial velocity is the speed at which the object is moving across the screen when you let go of it.  The deceleration is the rate at which that velocity decreases until it eventually reaches zero; in other words, how quickly does the object slow down?

The deceleration value has units of DIPs (device independent pixels) per ms^2 (millisecond squared, or “per millisecond per millisecond”), i.e. DIPs/ms^2.  In other words, if the object’s initial velocity is expressed in DIPs/ms, the deceleration indicates how much that velocity should decrease every millisecond.
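
To see how the units play out, here’s a small worked example (not part of the original post; the numbers are purely illustrative): an object released at 2 DIPs/ms with a deceleration of 0.004 DIPs/ms^2 coasts for 2 / 0.004 = 500 ms and travels 2^2 / (2 * 0.004) = 500 DIPs before stopping.

        // Worked example with illustrative values (not from the original post):
        // how long and how far an object coasts under a constant deceleration.
        private static void CoastingExample()
        {
            double velocity = 2.0;        // initial velocity, in DIPs/ms
            double deceleration = 0.004;  // in DIPs/ms^2

            double timeToStop = velocity / deceleration;                   // 500 ms
            double distance = velocity * velocity / (2 * deceleration);    // 500 DIPs
        }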

#744 – Keeping an Element within Window During Touch Manipulation

You can use the ManipulationDelta event handler to translate a user interface element in response to the user touching and dragging it.  In the previous example, there was nothing preventing the user from sliding the element out of the window.

We can make the element stop when it hits a window boundary by checking its bounds against the bounds of its parent.  Below is the updated code for the ManipulationDelta event handler that does the checking.  (See the earlier example for the full code sample.)

        private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
        {
            ManipulationDelta md = e.DeltaManipulation;
            Vector trans = md.Translation;

            Matrix m = imageTransform.Matrix;

            // Find center of element and then transform to get current location of center
            FrameworkElement fe = e.Source as FrameworkElement;
            Point center = new Point(fe.ActualWidth / 2, fe.ActualHeight / 2);
            center = m.Transform(center);

            // Check to see if element is at one of the edges of the window
            FrameworkElement feParent = fe.Parent as FrameworkElement;
            bool atEdge = false;
            if (feParent != null)
            {
                Rect feRect = fe.TransformToAncestor(feParent).TransformBounds(
                    new Rect(0.0, 0.0, fe.ActualWidth, fe.ActualHeight));
                atEdge = (feRect.Right + trans.X) > feParent.ActualWidth ||
                    (feRect.Bottom + trans.Y) > feParent.ActualHeight ||
                    (feRect.Left + trans.X) < 0 ||
                    (feRect.Top + trans.Y) < 0;
            }

            // Update matrix to reflect translation
            if (!atEdge)
                m.Translate(trans.X, trans.Y);

            imageTransform.Matrix = m;
            RaisePropertyChanged("ImageTransform");

            e.Handled = true;
        }

#743 – Using Touch Manipulation Events to Scale an Element

In the previous post, we used a ManipulationDelta object in the ManipulationDelta event handler to both translate and rotate a user interface element.  The user’s touch gestures for translation (sliding finger) and rotation (rotating two fingers) were automatically captured and available in the Translation and Rotation properties of the ManipulationDelta object.

We can also support scaling of an element using the ManipulationDelta event.  The ManipulationDelta object also contains a Scale property, which stores a Vector indicating the scale factor to apply to the object (e.g. a scale of 0.5 shrinks it to 1/2 its current size).  This property is automatically set when a user places two fingers on an element in a pinch or spread gesture, indicating that they want to zoom in or out on the element.

The sample code below supports translation, rotation and scaling of an Image element.

    <Canvas Name="canvMain" Background="Transparent">
        <Image Source="JamesII.jpg" Width="100"
               IsManipulationEnabled="True"
               RenderTransform="{Binding ImageTransform}"
               ManipulationStarting="Image_ManipulationStarting" ManipulationDelta="Image_ManipulationDelta"/>
    </Canvas>

Below is the code-behind for this sample.  Note that we now apply translation, rotation and scaling to the underlying matrix.

    public partial class MainWindow : Window, INotifyPropertyChanged
    {
        public MainWindow()
        {
            InitializeComponent();
            this.DataContext = this;

            ImageTransform = new MatrixTransform();
        }

        private MatrixTransform imageTransform;
        public MatrixTransform ImageTransform
        {
            get { return imageTransform; }
            set
            {
                if (value != imageTransform)
                {
                    imageTransform = value;
                    RaisePropertyChanged("ImageTransform");
                }
            }
        }

        private void Image_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
        {
            // Ask for manipulations to be reported relative to the canvas
            e.ManipulationContainer = canvMain;
        }

        private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
        {
            ManipulationDelta md = e.DeltaManipulation;
            Vector trans = md.Translation;
            double rotate = md.Rotation;
            Vector scale = md.Scale;

            Matrix m = imageTransform.Matrix;

            // Find center of element and then transform to get current location of center
            FrameworkElement fe = e.Source as FrameworkElement;
            Point center = new Point(fe.ActualWidth / 2, fe.ActualHeight / 2);
            center = m.Transform(center);

            // Update matrix to reflect translation/rotation
            m.Translate(trans.X, trans.Y);
            m.RotateAt(rotate, center.X, center.Y);
            m.ScaleAt(scale.X, scale.Y, center.X, center.Y);

            imageTransform.Matrix = m;
            RaisePropertyChanged("ImageTransform");

            e.Handled = true;
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void RaisePropertyChanged(string prop)
        {
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(prop));
        }
    }

#742 – Using Touch Manipulation Events to Rotate an Element

In addition to using the touch manipulation events to handle translation of an element, we can use the same mechanisms to allow a user to rotate an element using touch.

We can do both translation and rotation in the same event handler.  The ManipulationDelta object gives us both a translation (vector) and a rotation (angle).  Both are automatically incorporated into the ManipulationDelta object, based on how the user is touching the screen.  The user can translate by sliding one finger around and can rotate by placing two fingers on the object and rotating it.

We transform the element by calling two different methods on the underlying Matrix: Translate for the translation and RotateAt for the rotation.

Here’s the XAML, containing a single Image control that we’ll interact with.

    <Canvas Name="canvMain" Background="Transparent">
        <Image Source="JamesII.jpg" Width="100"
               IsManipulationEnabled="True"
               RenderTransform="{Binding ImageTransform}"
               ManipulationStarting="Image_ManipulationStarting" ManipulationDelta="Image_ManipulationDelta"/>
    </Canvas>

Here is the source code, with the updated ManipulationDelta event handler.

    public partial class MainWindow : Window, INotifyPropertyChanged
    {
        public MainWindow()
        {
            InitializeComponent();
            this.DataContext = this;

            ImageTransform = new MatrixTransform();
        }

        private MatrixTransform imageTransform;
        public MatrixTransform ImageTransform
        {
            get { return imageTransform; }
            set
            {
                if (value != imageTransform)
                {
                    imageTransform = value;
                    RaisePropertyChanged("ImageTransform");
                }
            }
        }

        private void Image_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
        {
            // Ask for manipulations to be reported relative to the canvas
            e.ManipulationContainer = canvMain;
        }

        private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
        {
            ManipulationDelta md = e.DeltaManipulation;
            Vector trans = md.Translation;
            double rotate = md.Rotation;

            Matrix m = imageTransform.Matrix;

            // Find center of element and then transform to get current location of center
            FrameworkElement fe = e.Source as FrameworkElement;
            Point center = new Point(fe.ActualWidth / 2, fe.ActualHeight / 2);
            center = m.Transform(center);

            // Update matrix to reflect translation/rotation
            m.Translate(trans.X, trans.Y);
            m.RotateAt(rotate, center.X, center.Y);

            imageTransform.Matrix = m;
            RaisePropertyChanged("ImageTransform");

            e.Handled = true;
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void RaisePropertyChanged(string prop)
        {
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(prop));
        }
    }


#741 – Using Touch Manipulation Events to Translate an Element

You can use the touch manipulation events to translate an element, that is, to move it around on the screen.

To start with, you set the IsManipulationEnabled property of the element to true.  This allows the manipulation events to be fired.  You also handle both the ManipulationStarting and ManipulationDelta events.

    <Canvas Name="canvMain" Background="Transparent">
        <Image Source="JamesII.jpg" Width="100"
               IsManipulationEnabled="True"
               RenderTransform="{Binding ImageTransform}"
               ManipulationStarting="Image_ManipulationStarting" ManipulationDelta="Image_ManipulationDelta"/>
    </Canvas>

Below is the code that allows the image to be moved around (translated) as the user moves their finger.  We bind the image’s render transform to a MatrixTransform object, which contains a matrix that allows us to scale, rotate or translate the image.  In our case, we modify the translation part of the matrix with the translation vector returned in the ManipulationDelta event.

    public partial class MainWindow : Window, INotifyPropertyChanged
    {
        public MainWindow()
        {
            InitializeComponent();
            this.DataContext = this;

            ImageTransform = new MatrixTransform();
        }

        private MatrixTransform imageTransform;
        public MatrixTransform ImageTransform
        {
            get { return imageTransform; }
            set
            {
                if (value != imageTransform)
                {
                    imageTransform = value;
                    RaisePropertyChanged("ImageTransform");
                }
            }
        }

        private void Image_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
        {
            // Ask for manipulations to be reported relative to the canvas
            e.ManipulationContainer = canvMain;
        }

        private void Image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
        {
            ManipulationDelta md = e.DeltaManipulation;
            Vector trans = md.Translation;

            // Update matrix to reflect translation
            Matrix m = imageTransform.Matrix;
            m.Translate(trans.X, trans.Y);
            imageTransform.Matrix = m;
            RaisePropertyChanged("ImageTransform");

            e.Handled = true;
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void RaisePropertyChanged(string prop)
        {
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(prop));
        }
    }



#740 – Set Background of Canvas to Transparent to Receive Touch Events

If you define one or more touch event handlers for a Canvas panel, without setting any other properties, you may not see any touch events for the canvas.

This happens because a panel whose Background is null is invisible to hit testing, so controls inheriting from Panel won’t receive touch or mouse events unless you specify a value for the panel’s Background property.  To receive touch events for the Canvas, you can simply set its Background property to Transparent.

    <Canvas Name="canvMain" Background="Transparent"
        TouchDown="Canvas_TouchDown" TouchMove="Canvas_TouchMove" TouchUp="Canvas_TouchUp">
    </Canvas>

#739 – Handling Touch Input at Different Levels

In WPF, there are three different ways that your application can support touch input:

  • Built-in support for touch.  Some elements will automatically respond to touch input.  For example, you can trigger the Click event for a button by touching the button or scroll a ListBox by touching and dragging.
  • Manipulation Events.  User interface elements support a series of manipulation events that let you detect when the user is trying to rotate, scale (zoom) or translate (move) an element.  The touch points from two fingers are automatically mapped to an event with the correct data.  For example, spreading two fingers apart triggers an event that knows you want to zoom in.
  • Raw Touch Events.  You can handle individual events for touch down, up and move actions on an element, for all supported touch points.  For example, you can track the location of 10 fingers touching the screen at the same time.
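
As a rough sketch (not part of the original post; the element names tapButton and photoImage are hypothetical), here’s how each of the three levels might be wired up from code-behind, say in the window’s constructor after InitializeComponent().  The manipulation and raw touch handlers referenced are the ones shown in the sample code in the surrounding posts.

            // 1. Built-in support: a Button's Click fires for a tap with no extra work.
            tapButton.Click += (s, args) => MessageBox.Show("Tapped or clicked");

            // 2. Manipulation events: opt in, then handle the aggregated gestures.
            photoImage.IsManipulationEnabled = true;
            photoImage.ManipulationDelta += Image_ManipulationDelta;

            // 3. Raw touch events: track each touch point yourself.
            canvMain.TouchDown += Canvas_TouchDown;
            canvMain.TouchMove += Canvas_TouchMove;
            canvMain.TouchUp += Canvas_TouchUp;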

#738 – Sample Code – Drawing and Moving Circles at Touch Points

Here’s some sample code that draws a circle for each touch point when a finger contacts the screen, and then moves that circle around as you move your finger.  This is done using the raw touch events: TouchDown, TouchMove and TouchUp.

    <Canvas Name="canvMain" Background="Transparent"
        TouchDown="Canvas_TouchDown" TouchUp="Canvas_TouchUp" TouchMove="Canvas_TouchMove"/>


    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            TouchPositions = new Dictionary<int, Point>();
            TouchEllipses = new Dictionary<int, Ellipse>();
        }

        private const double CircleWidth = 55;
        private Dictionary<int, Point> TouchPositions;
        private Dictionary<int, Ellipse> TouchEllipses;

        private void Canvas_TouchDown(object sender, TouchEventArgs e)
        {
            // Capture this touch device so the canvas keeps receiving its Move/Up events
            canvMain.CaptureTouch(e.TouchDevice);

            TouchPoint tp = e.GetTouchPoint(null);

            // Draw a circle centered on the touch point and remember it by touch device Id
            Ellipse el = AddEllipseAt(canvMain, tp.Position, Brushes.Red);

            TouchPositions.Add(e.TouchDevice.Id, tp.Position);
            TouchEllipses.Add(e.TouchDevice.Id, el);
            e.Handled = true;
        }

        private void Canvas_TouchMove(object sender, TouchEventArgs e)
        {
            TouchPoint tp = e.GetTouchPoint(null);

            // Re-center this finger's circle on the new touch position
            Canvas.SetLeft(TouchEllipses[e.TouchDevice.Id], tp.Position.X - (CircleWidth / 2));
            Canvas.SetTop(TouchEllipses[e.TouchDevice.Id], tp.Position.Y - (CircleWidth / 2));
            e.Handled = true;
        }

        private void Canvas_TouchUp(object sender, TouchEventArgs e)
        {
            // Forget this touch point, remove its circle, and release the capture
            TouchPositions.Remove(e.TouchDevice.Id);

            canvMain.Children.Remove(TouchEllipses[e.TouchDevice.Id]);
            TouchEllipses.Remove(e.TouchDevice.Id);

            canvMain.ReleaseTouchCapture(e.TouchDevice);
            e.Handled = true;
        }

        // Create a filled circle of diameter CircleWidth, centered at pt, and add it to the canvas
        private Ellipse AddEllipseAt(Canvas canv, Point pt, Brush brush)
        {
            Ellipse el = new Ellipse();
            el.Stroke = brush;
            el.Fill = brush;
            el.Width = CircleWidth;
            el.Height = CircleWidth;

            Canvas.SetLeft(el, pt.X - (CircleWidth / 2));
            Canvas.SetTop(el, pt.Y - (CircleWidth / 2));

            canv.Children.Add(el);

            return el;
        }

    }
