Charles Petzold



Simulating Touch Inertia on Windows Phone 7

June 6, 2010
Roscoe, N.Y.

In Isaac Newton's Mathematical Principles of Natural Philosophy (1687), Definition 3 reads: "Inherent force of matter is the power of resisting which every body, so far as it is able, preserves in its state either of resting or of moving uniformly straight forward." (Cohen/Whitman translation, pg. 404) This is what we call "inertia." When you start an object moving with a little push of your hand, it will often continue moving for a little while until friction of some sort causes it to slow down and stop.

On multi-touch displays, users often move (or otherwise manipulate) objects with their fingers. To make this manipulation more realistic, inertia is implemented algorithmically to keep an object moving (or otherwise changing) after the user's fingers have left the screen. Inertia is built into the Manipulation events implemented in the Windows Presentation Foundation (WPF) but not in the subset of those events supported for Silverlight applications in Windows Phone 7. This blog entry is an exploration of what's required to add inertia support to phone applications.

Here's what happens in WPF:

When a user touches a particular element that's been enabled for manipulation, the ManipulationStarting event is followed by ManipulationStarted. As the finger (or fingers) on that element move, the application gets multiple ManipulationDelta events indicating how the fingers are resolved into translation, scaling, and rotation. When all the fingers are removed from the element, a ManipulationInertiaStarting event is fired to indicate a transition from manipulation through actual touch to manipulation through inertia.
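In code-behind, wiring up these WPF events looks roughly like this (a minimal sketch; "ball" is just an illustrative element name and the handler bodies are placeholders):

    // Minimal WPF sketch: enable manipulation on an element (here named "ball")
    // and watch the event sequence described above.
    ball.IsManipulationEnabled = true;

    ball.ManipulationStarting += (sender, args) =>
    {
        // Report translation, scaling, and rotation relative to this window.
        args.ManipulationContainer = this;
    };

    ball.ManipulationDelta += (sender, args) =>
    {
        // The fingers are resolved into translation, scaling, and rotation.
        Vector translation = args.DeltaManipulation.Translation;
        Vector scale = args.DeltaManipulation.Scale;
        double rotation = args.DeltaManipulation.Rotation;
        // ... apply these to a transform on the element ...
    };

    ball.ManipulationCompleted += (sender, args) =>
    {
        // Fired after all fingers are up (and, if inertia was requested, after it ends).
    };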

The ManipulationInertiaStartingEventArgs object that accompanies the ManipulationInertiaStarting event includes a property named InitialVelocities of type ManipulationVelocities that indicates the translation, expansion, and rotation velocities at the time the finger leaves the element. The translation velocity, for example, indicates the speed of the element in device-independent units (DIUs) per millisecond.

ManipulationInertiaStartingEventArgs also has three properties with sub-properties that you must set to control inertia. These are:

TranslationBehavior of type InertiaTranslationBehavior
ExpansionBehavior of type InertiaExpansionBehavior
RotationBehavior of type InertiaRotationBehavior

Obviously translation inertia is most common, and rotation inertia is common in the real world, but you might also have an application where an object needs to change size under inertia. You can pick and choose the type of inertia you want.

It is my experience that ManipulationInertiaStartingEventArgs already has objects set to these *Behavior properties, and that their InitialVelocity sub-properties are already set to the InitialVelocities values of ManipulationInertiaStartingEventArgs. All you need to do is set the Desired* sub-properties. Notice that each of these Inertia*Behavior classes defines two Desired* properties. You set one or the other depending on how you want to specify friction: as a deceleration or as an actual amount.
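For example, a WPF ManipulationInertiaStarting handler that requests translation and rotation inertia might look something like this (a minimal sketch; the particular numbers are arbitrary):

    ball.ManipulationInertiaStarting += (sender, args) =>
    {
        // Keep the element sliding, decelerating at 0.01 DIUs per millisecond squared.
        args.TranslationBehavior.DesiredDeceleration = 0.01;

        // Or, instead of a deceleration, specify how far it should travel:
        // args.TranslationBehavior.DesiredDisplacement = 100;

        // Rotation inertia works the same way (degrees per millisecond squared here).
        args.RotationBehavior.DesiredDeceleration = 0.001;
    };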

For example, suppose that when a finger leaves an element, it is moving at a velocity of 2 DIUs per millisecond. (That can also be expressed as 2,000 DIUs per second or approximately 20 inches per second.) If you set TranslationBehavior.DesiredDeceleration to 0.01 DIUs per millisecond squared, then every millisecond, the velocity will decrease by 0.01 DIUs per millisecond. The velocity will get down to 0 DIUs per millisecond (and the element will stop) at the end of 200 milliseconds. During that time it will have traveled 200 DIUs. (You can calculate the distance as ½at², where the acceleration a is 0.01 DIUs/msec² and t is 200 msec.)

Alternatively, perhaps you set TranslationBehavior.DesiredDisplacement to 100 DIUs. In that case, the element will travel another 100 DIUs, during which time its velocity decreases from 2 DIUs per msec down to 0. Since a velocity of 2 DIUs per msec = at and a distance of 100 DIUs = ½at², you can calculate t as 100 msec and an acceleration of 0.02 DIUs/msec². (The formula a = v² / (2d) is useful for calculations like this.)
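That arithmetic is easy to package up if you find yourself doing it often. Here's a hypothetical helper class (not part of any Manipulation API) that captures the two formulas:

    // Hypothetical helpers for the kinematics above; not part of WPF or Silverlight.
    static class InertiaMath
    {
        // Deceleration (units/msec^2) needed to stop a velocity v (units/msec)
        // within a distance d (units): a = v * v / (2 * d).
        public static double DecelerationForDistance(double v, double d)
        {
            return v * v / (2 * d);
        }

        // Time (msec) until an object moving at velocity v stops under deceleration a.
        public static double TimeToStop(double v, double a)
        {
            return v / a;
        }

        // Distance (units) covered before stopping.
        public static double DistanceToStop(double v, double a)
        {
            return v * v / (2 * a);
        }
    }

DecelerationForDistance(2, 100) returns the 0.02 DIUs/msec² computed above, and TimeToStop(2, 0.02) returns 100 msec.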

If the program wishes to get inertia, it must set at least one sub-property in ManipulationInertiaStartingEventArgs. The program will then receive additional ManipulationDelta events for the inertia. If a ManipulationDelta event handler wishes to differentiate between direct manipulation and inertial manipulation, it can check the IsInertial property of the event arguments. (There's also a ManipulationBoundaryFeedback event for when an object travelling under inertia is in danger of flying off the screen and hitting an expensive vase.) Only after the element stops moving as a result of inertia will the ManipulationCompleted event be fired.
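In the ManipulationDelta handler, that check might look like this (again just a sketch):

    ball.ManipulationDelta += (sender, args) =>
    {
        if (args.IsInertial)
        {
            // The fingers are already up; this delta comes from inertia.
            // If the element has gone far enough, you can cut the inertia short:
            // args.Complete();
        }

        // Apply args.DeltaManipulation.Translation (and friends) as usual.
    };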

That's how it's done in WPF.

Silverlight for Windows Phone does not directly support touch inertia. There is no ManipulationInertiaStarting event, so there is no ManipulationInertiaStartingEventArgs, no *Behavior properties, and no Inertia*Behavior classes.

HOWEVER, in Silverlight for Windows Phone, ManipulationDeltaEventArgs has a Velocities property and ManipulationCompletedEventArgs has a FinalVelocities property, both of type ManipulationVelocities, which has two properties: LinearVelocity and ExpansionVelocity. (Multi-touch rotation isn't supported by the Silverlight Manipulation events.) I haven't seen anything except velocities of 0 in the ManipulationDelta event, but the ManipulationCompleted event is accompanied by non-zero values of LinearVelocity, so in theory that's enough information for the program to implement inertia on its own.
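So on the phone, the raw material for home-made inertia comes from the ManipulationCompleted handler, something like this (a sketch; the field name is hypothetical):

    // Silverlight for Windows Phone sketch: grab the final linear velocity
    // when the finger lifts, as the seed for home-made inertia.
    // In Silverlight, LinearVelocity is a Point rather than a Vector.
    Point inertiaVelocity;    // hypothetical field, apparently in pixels per second

    void OnBallManipulationCompleted(object sender, ManipulationCompletedEventArgs args)
    {
        inertiaVelocity = args.FinalVelocities.LinearVelocity;

        // From here, a frame-based loop (CompositionTarget.Rendering) can move the
        // element by the velocity each frame while decrementing the velocity to zero.
    }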

Of course, by now we are entirely accustomed to little annoying differences between WPF and Silverlight that make no sense whatsoever, so I guess you won't be surprised to learn that in WPF velocities are expressed in DIUs per millisecond, but in Silverlight for Windows Phone, they're pixels per second. (Or at least they seem to be. They're in the right ballpark, at least.)

It was my original intent to write an InertiaManager class that a program could create when it received a ManipulationCompleted event. This InertiaManager class would install a handler for the CompositionTarget.Rendering event and generate additional ManipulationDelta events, followed by a second ManipulationCompleted event.

This was not to be. The big problem was the ManipulationDeltaEventArgs and ManipulationCompletedEventArgs classes, which have private set accessors on all their properties, and which are sealed to boot. For that reason, my InertiaManager was forced to define two new events, called ManipulationInertiaDelta and ManipulationInertiaCompleted, with custom event argument classes. Unlike the standard Manipulation events, these are not routed events!
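The class itself is in the downloadable solution described below; its general shape is roughly this (a sketch of the approach, not the actual source, and the event-args class names are stand-ins):

    // Rough sketch of the InertiaManager approach; not the actual source code.
    // Placeholder event-args classes (the real ones carry delta and velocity information):
    public class ManipulationInertiaDeltaEventArgs : EventArgs { }
    public class ManipulationInertiaCompletedEventArgs : EventArgs { }

    public class InertiaManager
    {
        // Unlike the standard Manipulation events, these are plain CLR events, not routed events.
        public event EventHandler<ManipulationInertiaDeltaEventArgs> ManipulationInertiaDelta;
        public event EventHandler<ManipulationInertiaCompletedEventArgs> ManipulationInertiaCompleted;

        double velocityX, velocityY;    // pixels per second, from FinalVelocities
        double deceleration;            // pixels per second squared

        public void Begin(double vx, double vy, double desiredDeceleration)
        {
            velocityX = vx;
            velocityY = vy;
            deceleration = desiredDeceleration;
            CompositionTarget.Rendering += OnCompositionTargetRendering;
        }

        void OnCompositionTargetRendering(object sender, EventArgs args)
        {
            // Each frame: reduce the speed, fire ManipulationInertiaDelta with the new
            // offset; when the speed reaches zero, unhook Rendering and fire
            // ManipulationInertiaCompleted.
        }
    }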

In addition, I had to create a substitute InertiaTranslationBehavior class, and ManipulationDelta and ManipulationVelocities structures. (I didn't attempt to do anything with scaling inertia because I can't get multi-touch scaling to work at all in the April Refresh of the Windows Phone 7 development tools.) I also threw in a much-needed Vector structure to ease the mathematics. (What graphical API these days doesn't have a Vector type???)
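The Vector structure needn't be anything fancy; something along these lines (again a sketch, not the actual Petzold.Phone.Silverlight code) is enough for the inertia math:

    // Minimal sketch of a Vector structure for Silverlight, which lacks one.
    public struct Vector
    {
        public double X { get; set; }
        public double Y { get; set; }

        public Vector(double x, double y) : this()
        {
            X = x;
            Y = y;
        }

        public double Length
        {
            get { return Math.Sqrt(X * X + Y * Y); }
        }

        public static Vector operator +(Vector v1, Vector v2)
        {
            return new Vector(v1.X + v2.X, v1.Y + v2.Y);
        }

        public static Vector operator *(Vector v, double scale)
        {
            return new Vector(v.X * scale, v.Y * scale);
        }
    }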

In the TouchInertiaDemo solution for Visual Studio, InertiaManager and all the support classes are in a DLL named Petzold.Phone.Silverlight. The demo program displays a single "ball" that you can move around with your finger (or the mouse). If you've given it a little movement as your finger (or mouse button) lifts up, the ball continues to bounce around the display with a rather slow deceleration of 100 pixels per second squared.

As you know from my recent blog entry on Basic Manipulation Event Handling in Windows Phone 7, there's a little conflict between the ManipulationDelta event and the phone orientation. I take account of that in the ManipulationDelta handler, but not anywhere else. If the ball is bouncing side-to-side in portrait mode, it will continue bouncing side-to-side (and not up-and-down) if you switch to landscape mode. This is not correct. Also, if you initiate inertia in landscape mode, the coordinates of the vector will be switched.

The bouncing logic is kind of interesting. As the ball moves under inertia, I maintain non-bounced coordinates in a field called absoluteEllipseLocation. These coordinates may be way off the screen. But in the ManipulationInertiaDelta event handler in MainPage.xaml.cs, I normalize these coordinates (with mirror-image flipping) to the size of the parent container less a margin equal to the radius of the ball. The code may be a little more obscure than conventional bouncing logic, but it's much shorter and simpler.
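The idea is a triangle-wave fold: reflect the absolute coordinate back and forth within the allowed range. A sketch of that normalization (a reconstruction of the idea, not the code from MainPage.xaml.cs):

    // Reflects an unbounded coordinate into the range [min, max] with mirror-image
    // flipping, which is what makes the ball appear to bounce off the edges.
    // A reconstruction of the idea, not the code from the demo.
    static double Fold(double value, double min, double max)
    {
        double range = max - min;
        double offset = Math.Abs(value - min) % (2 * range);    // position within one out-and-back cycle
        return min + (offset <= range ? offset : 2 * range - offset);
    }

    // For the ball: min is the radius, max is the container dimension less the radius.
    // double x = Fold(absoluteEllipseLocation.X, radius, container.ActualWidth - radius);
    // double y = Fold(absoluteEllipseLocation.Y, radius, container.ActualHeight - radius);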