Games have flaunted gestural interfaces for years now. The Nintendo Wii is the most familiar example, but such interfaces can be traced back decades: Sony’s EyeToy; Bandai’s Power Pad; Mattel’s Power Glove; Amiga’s Joyboard; the rideable cars and motorbikes of ’80s–’90s arcades; indeed, even Nintendo’s own progenitors of the Wii Remote, like Kirby Tilt ’n Tumble for Game Boy Color.

Recently, all three major console manufacturers announced new gestural interfaces. Last year, Nintendo introduced the Wii Balance Board, a device capable of detecting pressure and movement on the floor. This year, the company released Wii MotionPlus, a Wii Remote expansion device that allows the system to detect more complex and subtle movements.

At E3 2009, Sony demonstrated prototypes for the PS3 Wand, a handheld rod that uses both internal sensors and computer vision, via the PlayStation Eye camera, to track and interpret motion.

And Microsoft announced Project Natal, a sensor system that forgoes the controller entirely in favor of an array of cameras and microphones capable of performing motion, facial, and voice recognition.

With few exceptions, designers and players understand gestural control in terms of actions. Lean side to side on the Joyboard to ski in Mogul Maniac. Grasp and release the Power Glove to catch and throw in Super Glove Ball. Bat a hand in front of the EyeToy to strike a target in EyeToy: Play. Lean a plastic motorbike to steer in Hang-On. Swing a Wii Remote to strike a tennis ball in Wii Sports.

Gestures of this sort also strive for the realistic correspondence advocated by the direct-manipulation style of human-computer interaction. Input gestures, so the thinking goes, become more intuitive and enjoyable when they better resemble their corresponding real-world actions. And games become more gratifying when they respond to those gestures in more sophisticated and realistic ways.

Such values drove the design of all the interface systems mentioned above: MotionPlus, the Wand, and Natal each rely on higher-resolution sensing technologies intended to capture and interpret movement in greater detail.

Physical realism is the goal, a reduction of the gap between player action and in-game effect commensurate with advances in graphical realism. As one early review of MotionPlus put it, “It’s like going from VHS straight to Blu-ray.”

As much as physical realism might seem like a promising direction for gestural interfaces, it is a value that conceals an important truth: in ordinary experience, gestures do not only perform actions, they also convey meaning.

Read the entire article over at Gamasutra.

published June 30, 2009