I was at the Blur Conference last week in Orlando, FL, a conference focused on human-computer interaction. It’s stuff that I love: thinking about when to step away from traditional keyboards, mice, and touchscreen interfaces, and about when and how technology can better intersect with the real world. One of Second Story’s goals, as part of good storytelling, is to help make the technology invisible to the average person. Sometimes this means really great content, sometimes beautiful and engaging design, and other times non-traditional interfaces. The goal is to remove technology as a barrier to getting somewhere.
My most common approach is generally a reductive one, probably inspired by early exposure to Edward Tufte’s writings and his lovely rants about how complexity in visual design leads to bad information. One of my takeaways was to create something and then iteratively keep removing components without losing the meaning of the content. In Tufte’s case, he was referring to charts made in Excel, removing all of the ‘chart junk.’ In my case, I think of an interaction or experience and keep removing extra steps or elements without losing the spirit of what needs to happen. For example, we’ve been working on a smartphone app that has a couple of steps to figure out where you are and what you can do, based on the device. They’re only a click or two in the process, but in reality they get in the way. If the phone has a camera, automatically pop open the camera. If it doesn’t, automatically start with a list of relevant objects. Eliminate the extra steps and make everything feel smarter in the experience. Decrease the noise, increase the signal.
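That camera-or-list decision can be sketched in a few lines. This is just an illustration of the idea, not our actual app code; the `device` dictionary and the screen names are hypothetical, standing in for whatever the platform reports about hardware:

```python
def initial_screen(device):
    """Pick the first screen a visitor sees, skipping the setup questions.

    A minimal sketch of the reductive approach: branch on device
    capability instead of asking the user to click through choices.
    `device` is a hypothetical dict describing the phone.
    """
    if device.get("has_camera"):
        return "camera"       # jump straight into the camera view
    return "object_list"      # fall back to a list of relevant objects


print(initial_screen({"has_camera": True}))   # camera
print(initial_screen({"has_camera": False}))  # object_list
```

The point isn’t the branching itself but where it happens: the app decides, so the user never sees the question.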
But at Blur, I gained a little new insight and saw the potential of the reverse approach, an additive one. One of the keynotes featured roboticist Cynthia Breazeal describing her work with sociable robots. I was amazed at the visceral reaction I had watching Leo, one of her robots, learn to be afraid of Cookie Monster. It was entirely a learned response, but I felt sympathy for Leo in those few moments. I realized that my reaction was almost entirely based on Leo’s apparent non-verbal communication. Cynthia’s research also compared identical tasks, such as keeping track of an exercise program, delivered by plain software, an all-digital guide, and a physical robot guide (and a crude one at that), and found that the physical counterpart had a much higher response rate from users. The software alone did poorly, the digital guide a little better, but the robot had almost double the participation rate. Again, just a little bit of extra presence and support made a profound difference to people.
In discussions with my colleague Thomas Wester, we realized that we have no good reason to limit our emotive experience to what happens on a screen. Sure, that’s a little obvious, especially since we know that movies with a soundtrack are far more compelling than those without, and that mood is easily altered by the music playing in the background. But it certainly made us want to consider much more strongly how we could add non-verbal cues and support to provide greater context and emotive support in the experiences we design. It may mean adding more physical presence; it might mean providing more tailored experiences based on what we can derive from user interaction (it’s pretty easy to track when a user smiles and gauge their mood, it turns out); or it can be extended interaction and follow-up over time once someone leaves the initial experience.
These are all additive things, things to add to the mix, but not in the usual way. It’s something I’ll be thinking about a lot more and trying to figure out ways to keep adding a little bit extra to create stronger connections for users and deeper experiences. Well, but not too much… because then I’ll need to start taking little bits away.
—Bruce Wyman, Director of Creative Development