This essay is a loose collection of principles for physical interaction. Some of this is taken from the book Physical Computing by Dan O’Sullivan and me, and some of it is material developed over the course of teaching and building interactive systems. I’ve given variations of this as a talk in various venues, and usually start each new year of Intro to Physical Computing at ITP with it. It seemed well beyond time to write it down.
A definition of interaction
Before we go too much further, I should define what I mean by interactivity. I like Chris Crawford’s definition:
“Interaction: a cyclic process in which two or more actors alternately listen, think, and speak.”
-Chris Crawford, The Art of Interactive Design
The key word here is cyclic. Conversations don’t end with one exchange, and neither does interaction. There has to be an ongoing dialogue, and there has to be a subject to talk about. The action of one actor provokes a response from another actor. The first listens to that response, considers, and responds with another action. This cycle, repeated many times, is an interactive conversation. The actors do not have to be human; they can be a person and a machine. But their actions have to be understandable to each other in some way.
Describe the form and the behavior
When designers describe interactive installations, they usually describe in great detail the controls, the displays, and the sights and sounds that participants will see when they encounter the installation. Often, they neglect to talk about the people for whom they are designing: who they are, what they will be expected to do with the installation, and why they will be motivated to do it. If you can’t motivate participants to press the button or walk into the room, the interaction won’t ever happen. This is a principle that’s stuck with me from working with actors and directors in college and during my early career: You can’t tell an actor what to do. Actors need to find the motivation for action on their own in order to take any action. The role of directors, playwrights, and designers is to provide the conditions in which the motivation will be found. Similarly, interaction designers can’t tell their audience what to do. We can only provide the setting and context in which they will understand the action that’s expected from the cues given.
Expressive, Instrumental, Instructional
When talking about physical interaction projects, it can be useful to know the goals of the creator of the project. This is particularly helpful when giving critical feedback on the project. There are a few categories of project that occur regularly in the physical computing classes I’ve taught: expressive works, instruments or tools, and instructional works.
Expressive works are often the least directly interactive, because they’re usually about expressing an artistic point of view. There’s a balance between participation and control of the story: the greater the level of participation, the more the participants, rather than the designer, determine what story emerges. The creator of an expressive work generally has something to say. Image, sound, and movement are the elements that multimedia tools provide in order to tell that story. Participants can change the direction of the story by doing what you don’t expect or don’t want them to do, or by ignoring certain elements altogether. Like any expressive work, though, expressive projects are useful for learning about control of physical systems, about the pacing, structure, and control of the story, and about the effects of aesthetic choices.
Not all art that involves digital technologies has to be interactive. I think it’s hard for a lot of artists to give up the authorial voice and include the viewer. That’s okay. Sometimes you just want to make something beautiful, like the colorfield painting robot shown below, by Matthew Richard. It’s useful to know the difference between interactive works and those made with interactive tools, however.
Many interactive works are instruments, like Strings, a room-sized musical instrument by Luisa Pereira, Monica Bate Vidal, and Johann Diedrick, shown in the next image. This is an example of interactive work on which the creators have made a clear aesthetic mark, yet the participants complete the work through their participation. The instrument affords a number of different expressive actions by the participants, and at the same time it suggests what they might not do, because of its apparent fragility. You wouldn’t want to sit on the strings, for example.
Some instruments are tools for everyday use, like the custom PS3 controller shown below. It was built to match the particular abilities of the young man in the photo, who had limited mobility of his upper body, but could move his head and one hand.
Instructional projects are the third popular category. These are works that are designed to convey an idea through direct experience. They rely on the idea that there are things you can learn through direct action and manipulation that you can learn no other way. Through these, you internalize an idea through direct experience, as seen in Jill Haefele’s Human:Nature project, in which the participant wears a pair of headphones containing live crickets, separated from the wearer’s ears by a mesh screen. The direct acoustic experience of the crickets is unlike any recording of them.
The next exhibit, How You See, by Brett Peterson, Xuedi Chen, Tom Arthur, and Hannah Mishin, is another example of an instructional work. Intended as a museum exhibit, it shows you through video what the rods and cones in your eyes actually see. Participants change the image seen by the eye on the screen by plugging and unplugging phono jacks that represent the connections of the different sensory cells of the eye.
Humans can understand a great deal about each other’s actions and the motivations underlying them because of our shared experiences. Even without language, there are certain assumptions we make about each other that are often correct. When we interact with machines, however, we can’t make those assumptions. Machines don’t understand that we can think, or even what thinking means. They can only read the world through changes in physical energy that our bodies generate through our physical actions. Machines don’t read your intentions, only your actions.
Transduction is the conversion of one form of energy into another. In electronics, transducers are elements that convert, or transduce, other forms of energy to or from electrical energy. Transducers are commonly divided into two groups. Sensors convert changes in light, heat, air pressure, and other forms of energy into electrical energy. Actuators convert electrical energy into light, heat, movement, and other forms of energy. Computers can read and write changes in electrical energy, so sensors and actuators are the senses and muscles of a computer.
Physical interaction involves transduction as well. This is important to keep in mind because the computers you use in making interactive systems don’t know anything about a participant except what they can sense from the energy changes the participant generates. The computer can’t read their minds; it can only read their actions.
Digital and Analog
Sensors and actuators and the actions they sense and control can be divided into two groups: digital and analog. Digital sensors and actuators, sometimes called binary transducers, can sense or control two states, on or off. Analog sensors and actuators can sense or control a continually varying range of states. A simple way to remember the difference is to think about a cat on a mat. Digital sensors can tell you if the cat is on the mat, or the cat is not on the mat. Analog sensors can tell you how fat the cat on the mat is.
Although analog sensors are often called continuous, in practice the computers that read them measure a discrete number of possible values. For example, a typical microcontroller might be able to read the changing resistance produced by a force-sensing resistor (used under the cat on the mat above) to a resolution of 1,024 possible discrete states. The resolution of a sensor or actuator determines how finely you can sense or control the action at hand.
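To make resolution concrete, here is a minimal sketch of how that quantization works. It assumes a 5-volt reference and a 10-bit converter (1,024 states, as in the example above), which is typical of common microcontrollers; the function name `adc_read` and the specific threshold are my own illustrative choices, not from any particular board’s API.

```python
def adc_read(voltage, v_ref=5.0, bits=10):
    """Quantize an analog voltage into one of 2**bits discrete steps,
    roughly as a 10-bit microcontroller ADC does."""
    levels = 2 ** bits                # 1,024 states for 10 bits
    step = v_ref / levels             # smallest change the converter can distinguish
    return min(int(voltage / step), levels - 1)  # truncate and clamp at full scale

# Digital question: is the cat on the mat at all?
cat_on_mat = adc_read(3.1) > 100      # any reading above a small threshold

# Analog question: how fat is the cat on the mat?
cat_weight = adc_read(3.1)            # 0-1023, proportional to the force sensed
```

In a real circuit, the force-sensing resistor under the mat would sit in a voltage divider, so the mapping from weight to voltage, and the right threshold, depend on that circuit.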
When you describe the action that a participant is expected to take, ask yourself how much you need to know about the action. Will it be enough to know that the action happened or not? If so, then a digital sensor might do the job. Or do you need to know about a range of possible states of the action? If that is the case, then you probably need an analog sensor.
Explicit and Implicit Interaction
Physical interaction can be implicit or explicit. Thinking this way offers a link between a person’s intention and the physical action they take. An explicit interaction is one where the participant’s action is primarily intended to send the computer a message. Think of pressing a pushbutton. The physical affordances for an explicit action should be clear and obvious, and the sensing is often limited to a very contained area. An implicit interaction has some other primary purpose, and sending the computer a message is a secondary effect. Doorway entry sensors, automatic faucet sensors, and other motion detectors can support implicit interaction. The physical affordances may not be obvious, and sensing may be across a wide area. Too wide an area may result in false triggering.
Examples of sensors for explicit interaction: pushbuttons, knobs, sliders, keys, card swipers.
Examples of sensors for implicit interaction: door entry sensors, floor triggers, faucet sensors, motion detectors.
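One common guard against the false triggering that a wide sensing area invites is to act only when the sensor reports presence for several consecutive readings. A minimal sketch of that idea, assuming periodic boolean readings from a motion detector (the function name and the three-reading threshold are illustrative assumptions, not from the original):

```python
def sustained_trigger(readings, hold=3):
    """Report a trigger only after `hold` consecutive positive readings,
    filtering out the momentary blips a wide sensing area can produce."""
    run = 0
    events = []
    for active in readings:
        run = run + 1 if active else 0   # count consecutive positive readings
        events.append(run >= hold)       # trigger only once the run is long enough
    return events

# A single blip is ignored; sustained presence is not:
sustained_trigger([True, False, True, True, True, False])
```

The cost of this filtering is latency: the longer you require the reading to hold, the fewer false triggers, but the slower the system responds to a real participant.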
“What should I do here?”
Interactive art breaks the rules of traditional plastic arts. With painting and sculpture, we know our role as viewers. We look but don’t touch. Interactive art is not always so clear. We don’t yet have an historical context for interactive art: should we touch, should we not touch? Should we watch and nod appreciatively? Should we dance and jump around frenetically? What’s acceptable? There’s no commonly expected response that audience members can either follow along with, or defy, as there is in the traditional plastic arts. It is important that artists who make interactive work set the expectations for the audience. If you want them to explore, you can make it vague, but give them something evocative, like the soft, inviting fur texture of Jeremy Diamond’s Noise Nest sculpture shown in the previous image.
Quite often, the default for this kind of work is to leave things very open. This hasn’t changed in many years. For example, consider one of the germinal works of interactive computer video art, from Myron Krueger in the early 1970s, in the video below.
The video below is my colleague Danny Rozin’s Wooden Mirror from 1999…
And the video below shows what we do with the Kinect as of 2011:
We’re still waving at the machines. Well, dancing now.
Ways to Generate Action
Use active verbs
One way to think about what your participants will do is to describe it using active verbs. Active verbs address what a person is to do, not just what they are to be. “To jump” is an action. “To be scared” is not.
Actors need things to do. Give them actions, not states of being. If there is action and context, they’ll figure out the meaning.
In Channels, a project by Alvin Chang, Ginny Hung, and Suzanne Kirkpatrick, shown below, the visitor rows a virtual boat by paddling her hands in the water. The boat, the reeds, and the video in front of her all set up the context and suggest the action.
Break Down Inhibitions
When faced with the unfamiliar, people get defensive or self-conscious. If you can make participants feel comfortable and safe, you’ll break down their inhibitions and get them more engaged.
In Intimate Toilet, Jiyhyun Lee and Jihyun Moon inserted elements of their own personalities to break down inhibitions. The project was designed to make users more comfortable using a squat toilet. They used lots of positive reinforcement and a little humor to get you comfortable. The narrative of the video and the fantastic unicorn and rainbows imagery give you a good sense of the tone of the piece. The video is controlled by a handle next to the toilet.
Give people reasons to cooperate
Most interactive experiences (and in fact, I’d argue, most games as well) are excuses for people to make contact with each other, to have a conversation. When we learn the necessary rules and actions, then forget that we’re doing them, it leaves space for us to talk. The ‘Roseabelle’ Ouija Board by Ruxandra Staicut, Sonaar Luthra, and Si Cho exemplifies this in the video below:
Give people reasons to compete
Competition is another way to get conversation going, so when you use it, make sure the competitors get to focus on each other. Dynamic Canvas by Eunyoung Kang and Sukmo Koo, shown in the next image, has participants facing each other as they blow on sensors to move a ball across a projected landscape.
Find the moment between decision and action. Engage.
The moment to influence a participant’s action is right after they decide to take action but before they take it. Mouna Andraos, Michael Kertesz, and Jun Oh found this moment for users of the elevators at Tisch. They wanted to change your mind when you’d decided to take the elevator, to encourage you to take the stairs. Using an overhead camera focused tightly on the elevator button, they captured the moment right before the button is pushed, and distracted the participant.
You can’t enforce interaction
People don’t have to interact with your work. They don’t even have to pay attention to it. So you don’t get to enforce anything, you just get to suggest, and hope they stay with you, as this behind-the-scenes video from the elevator tracking project above shows.
Use the whole body
Most devices we use don’t ask us to use our whole body, but most people welcome the chance to do so. Give participants that chance and you’ll make things more fun for them, as in The Big Bounce, a musical instrument by Christina Goodness that takes the full effort of jumping and bouncing to make music.
Give people reasons to use the whole space
Bodies can fill a space if you give them the means to do it. Given permission to run (or swing) around the whole space, participants will gladly do it, as in this musical instrument, Swings, by Claire Mitchell, Engin Ayaz, and Patrick Muth.
Nothing is intuitive. Everything is learned.
As designers, we tend to call things intuitive when we think they’re obvious. But obvious means different things to different people. Intuition is what happens when we see something new that reminds us of something familiar. No interface is universally intuitive, and if you think yours is, you’ve lost empathy for your audience. Kate Hartman’s Talk to Yourself Hat examines the notion of intuition by offering an “intuitive” interaction that is nonsensical.
Differentiate automation from interaction.
If you look at many connected devices on the market today, there is a strong bias toward automation. The internet of things is all about listening, measuring, and quantifying. But interaction is about enhancing, rather than replacing, what we do best. The most engaging interactive works strike a balance between the two.
The things we make are less important than the relationships they support
Finally, remember who you’re designing for. If you forget them, there’s not much point. Justin Lange’s Folkbox modified guitar embodies this ideal. The guitar he built was designed to restore his father’s ability to play guitar after an accident limited the motion in his hand. Designed for an audience of one, it involved considerable conversation and observation to match the affordances of the instrument with the capabilities of the musician.