Mobile Depth-Sensing

From the Protecting the Secret interactive at the Vault of the Secret Formula exhibit to the Connections Wall at the Emerging Issues Commons, Second Story regularly uses Kinect and similar technologies to create dynamic content based on sensing where users are in physical space. In the past, mobile use of these sensors has been restricted by their need to be tethered to powerful “desktop” CPUs; we’ve had to use USB signal extenders and dedicated wiring to mitigate these constraints.

But developments in computing are giving us enhanced flexibility. The latest ARM processors are small, portable, and powerful, and we’ve been experimenting with using them to process depth data right from the sensor’s location.

The processors can be powered over Ethernet (CAT-5) or even by battery, depending on the use case, which makes deployment easy, and they start working as soon as they’re powered on, eliminating the need for an external display. Bandwidth permitting, they can send data over WiFi or wired Ethernet to more powerful CPUs that handle the data interpretation.
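
As a rough illustration of that pipeline, here is a minimal Node.js sketch of the sensor-side board streaming depth frames to an interpretation machine over UDP. The readDepthFrame() helper, host, port, and frame size are hypothetical placeholders, not our production code:

```js
// Sketch: stream depth frames from a sensor-side ARM board to a more
// powerful "interpretation" machine over UDP. readDepthFrame() is a
// hypothetical stand-in for a real depth-sensor driver binding.
const dgram = require('dgram');

const socket = dgram.createSocket('udp4');
const INTERPRETER_HOST = '192.168.0.10'; // placeholder address
const INTERPRETER_PORT = 9000;           // placeholder port

function readDepthFrame() {
  // A real implementation would pull a frame from the sensor driver.
  return Buffer.alloc(320 * 240 * 2); // 16-bit depth at QVGA resolution
}

setInterval(() => {
  const frame = readDepthFrame();
  // Raw frames exceed a single datagram, so send in MTU-sized chunks.
  const CHUNK = 1400;
  for (let offset = 0; offset < frame.length; offset += CHUNK) {
    socket.send(frame.subarray(offset, offset + CHUNK), INTERPRETER_PORT, INTERPRETER_HOST);
  }
}, 1000 / 30); // roughly 30 frames per second
```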

The sky’s the limit with these little guys. We can’t wait to explore the possibilities.

— Sorob Louie, Interactive Developer

100 Years of Design

In 1914, a small group of designers inaugurated what became the American Institute of Graphic Arts. One hundred years later, Second Story has collaborated with AIGA to create a centennial microsite that celebrates the profound impact design has had on our society over the last century and invites everyone into a conversation about the impact of design on our daily lives.

Our first task was to collaborate with AIGA on curating a set of works to illustrate the breadth, diversity, and evolution of American design over the last century. We also wanted to present these works differently than a typical retrospective might, focusing on the “why” instead of the “how” and exploring the intentions behind the works rather than simply categorizing them by medium, style, or geography, or plotting them on a timeline.

Intentions

These design intentions became the core of the site: five media-rich narratives focused on how design connects, informs, assists, delights, and influences us.

Boards

We also see this microsite as a time capsule that successive generations of designers might open in 2114, as AIGA celebrates 200 years. Knowing that the tools and methods those designers employ will have evolved far beyond what any of us can imagine today, what kernels of truth or wisdom from AIGA’s first century could this site preserve and pass on?

To find answers, our film crew captured the oral histories of 18 living legends of American design. We asked these designers to comment and reflect on their own seminal works, the arc of their careers, and the lessons they’d like to pass on to future generations. Their answers were humble, straightforward, hilarious, heartfelt, and enlightening. Being present with the likes of Paula Scher, Milton Glaser, Richard Saul Wurman, Jessica Helfand, Michael Bierut, Seymour Chwast, and many others was an incredible honor. Their stories and insights bring this content and conversation to life in a way nothing else could.

Most importantly, we wanted to invite everyone to the party. So we created a way for people to share how design connects, informs, assists, delights, and influences them today. Contributions are already pouring in, and we are thrilled to see such a diverse range of responses.

Centennials offer us a chance to look back at where we’ve been, to recognize a shared history and inheritance, and to appreciate the evolutionary continuum that connects those designers in 1914 to us here and now. They also give us the chance to look forward – to take what we’ve learned in new directions and ask what’s next. As part of the team that has spent over eight months bringing this microsite to life, I can say that looking back has taught us a tremendous amount about design’s role in shaping how we see our world, ourselves, and each other. Looking forward, this project has made us look deeply at what motivates us to do the work we do, and it has rededicated us to bringing those intentions to life in everything we create.

— David Waingarten, Creative Director, Storytelling

Leap Motion Path

Leap Motion Path is a Second Story lab experiment exploring the use cases for the Leap Motion controller in the digital animation field. Our objective was to create a tool to capture 3D motion that could be used within an animation platform to control digital content.

One of our goals was to record an animation without the assistance of a keyboard or mouse. To achieve this, we needed a way for the animator to know where their hand was in relation to a recording canvas. We accomplished this by creating an inset frame that gives the animator space to interact with the Leap Motion and a spatial reference for where they are relative to the computer screen. Once they’re ready, they can enter the frame and start recording. Later, in the animation software, they can remove the entry and exit points from the frame by cropping the animation recording.
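
A minimal sketch of one way to realize that inset-frame idea, with illustrative bounds and units rather than the values our tool used: samples are only kept while the hand is inside the inner region, which keeps the entry and exit movements out of the captured path.

```js
// Record hand samples only while the hand is inside an inner "canvas"
// region. Bounds and units are illustrative, not the tool's real values.
const frame = { xMin: -100, xMax: 100, yMin: 100, yMax: 300, zMin: -100, zMax: 100 };
const recording = [];

function insideFrame(p) {
  return p.x > frame.xMin && p.x < frame.xMax &&
         p.y > frame.yMin && p.y < frame.yMax &&
         p.z > frame.zMin && p.z < frame.zMax;
}

// Called once per Leap frame with the current fingertip position.
function onHandSample(position, timestamp) {
  if (insideFrame(position)) {
    recording.push({ position, timestamp });
  }
  // Samples outside the region are simply dropped here; in our actual
  // workflow, the cropping happened later, in the animation software.
}
```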

During this experiment, we encountered another interesting use case. Leap Motion provides a lot of data about the geometry of the hand. If you capture the position of the animator’s wrist and the tip of the index finger and draw a line between those points, you end up with a vector that indicates the direction of the hand. If you capture this vector over time, you see that it produces a beautiful ribbon. The animator can record this ribbon and use it in other animations.
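
A sketch of that construction, assuming wrist and fingertip positions have already been captured as plain {x, y, z} points (the Leap-specific capture code is omitted): each sample’s wrist-to-fingertip segment is one rung of the ribbon.

```js
// Build ribbon geometry from wrist/fingertip samples captured over time.
function handDirection(wrist, fingertip) {
  const d = { x: fingertip.x - wrist.x, y: fingertip.y - wrist.y, z: fingertip.z - wrist.z };
  const len = Math.sqrt(d.x * d.x + d.y * d.y + d.z * d.z) || 1;
  return { x: d.x / len, y: d.y / len, z: d.z / len }; // unit direction of the hand
}

// samples: [{ wrist, fingertip, timestamp }, ...]
function buildRibbon(samples) {
  return samples.map(s => ({
    a: s.wrist,       // one edge of the ribbon
    b: s.fingertip,   // the other edge
    direction: handDirection(s.wrist, s.fingertip),
    t: s.timestamp,
  }));
}
```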

As we developed Leap Motion Path, several stand-alone libraries emerged. We’ve hosted one, called Path.js, on GitHub. To provide some additional context: we wanted to capture a 3D position over time and then animate along the path we recorded. If we animated along only the discrete points Leap gives us, the animation would be choppy, because the resolution of those points wouldn’t resemble the actual path we drew with our hands. To combat this, we needed to interpolate a line or a curve between the points to get a finer resolution, so we could animate at any speed. Path.js takes a collection of timestamped points and creates a linear interpolation between them. This allows Leap Motion Path to export an animation in vector format, letting the animator scale and stretch the animation as desired.
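
A toy version of that core idea (not Path.js’s actual API) looks something like this: given timestamped points, find the two samples that bracket a requested time and blend between them, so the path can be sampled at any playback speed.

```js
// Linearly interpolate a recorded path of timestamped 3D points.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// points: [{ x, y, z, t }, ...] sorted by timestamp t
function samplePath(points, time) {
  const first = points[0], last = points[points.length - 1];
  if (time <= first.t) return { x: first.x, y: first.y, z: first.z };
  if (time >= last.t) return { x: last.x, y: last.y, z: last.z };
  for (let i = 1; i < points.length; i++) {
    const p0 = points[i - 1], p1 = points[i];
    if (time <= p1.t) {
      const u = (time - p0.t) / (p1.t - p0.t); // 0..1 between the two samples
      return { x: lerp(p0.x, p1.x, u), y: lerp(p0.y, p1.y, u), z: lerp(p0.z, p1.z, u) };
    }
  }
}

// Usage: resample the recording at 60 fps while playing it twice as fast:
// for (let t = points[0].t; t <= points[points.length - 1].t; t += (1000 / 60) * 2) {
//   samplePath(points, t);
// }
```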

With more development, Leap Motion Path could be integrated into a standard digital animation workflow, giving animators one more tool to create beautiful, lifelike work. Moving forward, to improve the motion-capture experience, we would need to rewrite the recording mechanism as a plugin for an animation platform, enabling the animator to record and review all in one application. We look forward to integrating Leap Motion Path into our own animation workflow at Second Story.

— Dimitrii Pokrovskii, Interactive Developer

Unboxing the Kinect for Windows v2

[Image: Kinect for Windows v2]

We started experimenting with the Kinect for Windows v2 from Microsoft this week and are already excited by the possibilities this impressive new depth-sensing camera offers. Mechanically, the camera is a bit larger and bulkier than we would like, but it features a tripod mount adapter (threaded insert), which will go a long way toward helping us incorporate the sensor into different environments.

Among the many improvements, we quickly found that the new camera can sense just about as many people as we can fit in its (expanded) field of view. Tracking has also improved, allowing skeletal data to be captured in a wider variety of poses (sitting, prone). We’re also excited by the new level of detail in each pose, which includes basic hand and finger tracking.

The depth image returned by the camera also shows significant improvements in speed and image fidelity, allowing for the capture of detail previously unattainable with the Kinect.

[Image: Raw depth image from the Kinect for Windows v2]

As we continue to experiment with and adopt the technologies the future is bringing, we couldn’t help but pause to thank the engineers and scientists at Microsoft who made this one possible. It is technologies like this that enable us to create new and inspiring experiences.

With that, we leave you with this final message for this season:

Happy Holidays!

— Matt Arnold, Lead Integration Engineer

How We Built Lyt: a Technical View of the Making Process

At Second Story, we all share a passion for making and crafting. Our collaborations have taken lots of different forms, from a robot that prints tattoos to a floor-to-ceiling interactive sculpture to an imaginative birdhouse. We thrive in the vibrant designer community here in Portland, and our local involvement enabled us to meet members of the Intel Labs team who gave us a great opportunity to do some tinkering. They were about to release the Galileo board (see the specs here) and asked us to come up with a demo showing what the board could do, to be presented in Rome at the European edition of the Maker Faire. After a few weeks of furious designing, prototyping, fabricating, and testing, Lyt, a collaborative, interactive lighting fixture, was born.

Resources

If you’re one of the lucky few who sees through the matrix, source code and build instructions for this project can be found on GitHub; the README there contains a pretty detailed description of the overall architecture as well as how the various components interact.

If you’re a more visual person, you can watch the making of Lyt on The Creators Project website.

Daniel Meyers, Creative Director, Environments, wrote a great blog post back in October about his thoughts on the project, but I wanted to provide some additional information about our process from a technical perspective. Here we go!

Ideation

The Galileo board is an interesting mix: it’s both Linux-based and Arduino-compatible, offering developers the opportunity to play with advanced tools created by the Linux community alongside electronics components and shields from the Maker community. We tried to pick the best from both worlds for our prototype.

In terms of concept, we found inspiration in a couple of previous lab projects: Aurora, which incorporated LED mesh, and Real Fast Draw, a collaborative drawing application. After a couple of iterations, we knew what we wanted to make: an LED mesh from scratch that you could draw on.

Picking out the hardware

Once we decided to make an LED wall “display,” we had to find the right LEDs. Nowadays, you can buy strips by the meter, with densities varying between 32 LED/m and 64 LED/m (the pitch is the distance between two adjacent LEDs). Another factor to consider is the type of driver chip controlling the LEDs; the predominant ones are the LPD8806 and the WS2801. These nifty controllers let you address every single LED on the strip independently. Thanks to the wonderful world of Arduino, you can buy these strips from Adafruit or SparkFun, and both provide Arduino libraries to drive them.

After some experimentation, we decided to go with a 32 LED/m strip running on the WS2801. A smaller pitch would have meant a bigger power draw and required a more precise data signal (some people have successfully run a 64 LED/m strip on a Netduino, but we didn’t want to lose too much time figuring that out when the other option worked right out of the box).

Daisy chaining

After picking the strips we wanted, we had to see how many of them we could plug together. These strips work with four wires: 5V power, ground, an SPI clock, and a data line (“master out, slave in,” in SPI lingo). A full-fledged SPI bus has two more wires: a “master in, slave out” line to receive information from a slave device, and a slave select to indicate which of the devices connected to the bus you’re talking to. Obviously, the LED strip was not going to give us feedback, so there was no need for a master-in wire, but the lack of slave select was pretty annoying; it made it harder to have multiple strips listening on the same bus and to choose which one to talk to. So, instead of running them in parallel, we went with a series circuit in which the maximum number of strips would be daisy-chained together. Experimentation taught us that we could go up to 12 meters before getting a nasty flickering effect. That worked out to around 380 LEDs per fixture, with an estimated power consumption of 70W.

We decided to make three modular panels of 12m LED strip, each with its own Galileo board. This is clearly overkill: one Galileo board might have been enough to drive all three columns, with a mux/demux controller to select which strip we wanted to talk to. From a demo perspective, however, it was safer to run three independent boards, so that if we accidentally fried one, we would still have a working prototype.

We’re happy to report that no Galileo board was harmed in the making of this demo.

Starting the development

We wanted to be able to control the light fixture from mobile devices while still having a board per fixture. We obviously needed some means of inter-board communication to notify the fixtures of phone interactions, and a self-contained system was our ideal. We ended up with a four-Galileo-board setup, using one board, which I’ll call the Lyt board, as the server; it dispatched the relevant information to the three other boards driving the fixtures.

The Adafruit library we used takes a “pixel” buffer as input and displays it nicely on the strip, but we had to make slight modifications for it to run smoothly on the Galileo. On a regular Arduino board, it’s pretty common to transfer a buffer one bit at a time by switching a pin on and off. This should work on the Galileo too, but, because a full Linux system is running, things are not that bare-metal: writing to a pin is mostly like writing to a file, where you have to open a file descriptor, write, and close it. Each of these steps goes through system calls that take a few milliseconds to kick in, which is not good when you have nearly 400 LEDs to drive. Fortunately, the Intel folks were smart and added a function to the SPI library that transfers a buffer in one go, so instead of a system call for every bit or byte transferred, it’s reduced to one (or a few) for the entire buffer. These details are hidden in the depths of the SPI library, so, for the regular user, the only thing to know is that you can transfer a buffer quickly over SPI. The curious can dig into the source code available in the Arduino IDE to better understand how all this black magic works.

Adventures in debugging

This project was a truly international effort: the fabrication and concepting happened in our studio in Portland, OR; the Galileo boards were fabricated in Ireland (it’s printed on them!); the software development was done in France (the developer was suffering from a serious case of wanderlust); and it was all for an event happening in Italy. One of the many obstacles we faced during development: how do you develop a graphical interface when you don’t have the display hardware? Our solution was to create a small debug application using an awesome library called openFrameworks. Our app receives OSC messages from the board saying, “Hey, I’m going to display this and this on the LEDs,” parses them, and draws the result on a laptop. In return, we added a communication link from the laptop to the board to say, “Hey, I’m clicking on this part of my phone, you should really do something.” This was, of course, not the final solution, but it worked well as a placeholder; the real communication link between phone and board would be developed later.

Networking is the key to success

You may wonder how all these fancy boards talk to one another. The Galileo board has a PCI Express slot (the kind of thing you can find in your laptop) into which you can put a WiFi card; at the time of this writing, Intel’s N-135 wireless card is the only officially supported one. Using a tool called hostapd, we were able to make one Galileo board run as a WiFi access point and have all the others connect to it. From there, it was easy to send OSC messages between all of them.
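
For reference, the hostapd side of a setup like this is only a handful of configuration lines; the interface name, SSID, and channel below are illustrative, not our exact values:

```
# /etc/hostapd/hostapd.conf (illustrative values)
interface=wlan0
driver=nl80211
ssid=Lyt
hw_mode=g
channel=6
```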

Finally, to get mobile phones talking to the board, we wanted to provide the most lightweight experience possible for the user, and we thought it would be great to build the UI with HTML5 and CSS (i.e., a basic webpage). WebSockets, part of the new HTML5 specifications, enable your browser to hold a direct connection to a different computer; they were perfect for our purposes. We compiled a WebSockets library (libwebsockets) for our Galileo board, plugged it into our Arduino sketch, and, after some tinkering, VOILA!…phones could talk to the Lyt board.
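
The browser end of that link is just the standard WebSocket API; the address, port, and message format here are illustrative, not the board’s actual protocol:

```js
// Phone-side sketch: open a socket to the Lyt board, send touch events,
// and repaint the UI when the board pushes fixture state back.
const socket = new WebSocket('ws://192.168.42.1:7681'); // placeholder address

socket.onopen = () => {
  // Tell the board which fixture was touched and with what color.
  socket.send(JSON.stringify({ fixture: 2, color: '#ff8800' }));
};

socket.onmessage = (event) => {
  const state = JSON.parse(event.data); // colors currently on each fixture
  updateUi(state);
};

function updateUi(state) {
  console.log('fixture state', state); // stand-in for the real UI refresh
}
```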

Conclusion

In a few words, here’s what’s happening once you’ve plugged everything in:

Our Lyt board creates an access point you can connect to with your phone. Meanwhile, every light fixture connects to this WiFi network and sends information to the Lyt board (like what color it’s currently displaying). When you request the Lyt page on your phone, the board serves it back, and, once the page is loaded, your phone opens a WebSocket to the Lyt board. This way, the board can keep the phone updated with the color currently displayed on each fixture, and, in return, when you interact with the UI, the phone sends the information to the Lyt board, which relays it to the appropriate fixture. Finally, the fixture triggers the touch animation.
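
To make that flow concrete, here is a sketch of the relay logic in Node.js using the ws package (not the libwebsockets C code that actually ran on the board, and with a made-up message format):

```js
// Relay sketch: fixtures register themselves; phone touch events are
// forwarded to the fixture board they concern.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 7681 });
const fixtures = {}; // fixtureId -> socket, populated as boards register

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    const msg = JSON.parse(data);
    if (msg.type === 'register-fixture') {
      fixtures[msg.fixture] = socket;        // a fixture board announcing itself
    } else if (msg.type === 'touch') {
      const target = fixtures[msg.fixture];  // relay a phone's touch
      if (target) target.send(JSON.stringify({ color: msg.color }));
    }
  });
});
```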

After a few whirlwind weeks and with a successful demo in hand, we got to spend some time in Rome with Intel and the European Maker community. Once more, we were reminded that technology can be the seed for great collaboration and wonder.

— Philippe Laulheret, Interactive Developer

Balancing Spectacle with the Everyday

In a passage from his book In the Bubble: Designing in a Complex World, John Thackara takes a critical stance against place marketing and describes his vision for an alternative, more sustainable approach to the design of cities:

“A sustainable city … has to be a working city, a city of encounter and interaction—not a city for passive participation in entertainment. Sustainable cities will be postspectacular.”

Even though Second Story isn’t necessarily in the business of designing cities (sustainable or otherwise), I feel that Thackara’s words have significance for any designer concerned with how we shape our environments and facilitate experience in spaces.

He uses the phrase “postspectacular” to caution against design that is nothing but spectacle. For him, spectacle is an undesirable outcome of design that casts people as passive consumers of an experience rather than empowering them as active participants in meaningful human interaction.

It’s interesting to consider this stance in the context of Second Story’s work because, in some ways, we consider spectacle a key piece of what we do. In a Creative Mornings talk, our Innovation Director, Thomas Wester, described how Second Story’s work could be seen as a contemporary link in a long line of historical experiments in spectacle, such as the cyclorama and the eidophusikon.

In many cases, we hope to inspire the same wonder and awe that these inventions did when they were first revealed. I feel like the pursuit of spectacle in design can often end up expanding the limits of what we consider to be possible. For this reason, I’m not as harsh on the idea as Thackara seems to be.

But I do think that Thackara’s point still stands. Wonder and awe are valuable, but they’re also more transient aspects of an experience. A sustainable design, one that sustains meaning over time, requires us to think about how it integrates with and enriches the everyday working lives of people.

* * *

Thackara isn’t alone in voicing these concerns. During the Q&A of the above-mentioned Creative Mornings talk, one audience member asked Thomas whether he tires of our society’s obsession with new technology. Another raised a concern that so many examples of interactive art seem to exist only to serve interactivity as a goal unto itself.

What I sensed in these comments from the group was an underlying dissatisfaction with projects in the space of interactive digital experiences that celebrate technology and interactivity but pay little attention to the people who experience them. These were echoes of Thackara’s critique applied to our world of digital design, pointing out the lack of substance that occurs when a design only focuses on the spectacles of new technology or the latest paradigms of interactivity.

I think Thomas provided a couple of thoughtful answers to the audience’s questions (which you can listen to in full starting at about 39:25 and 41:02 in the video): “We often become couch potatoes and have this consuming attitude, and I think interactivity is anti-consumerism in that sense. It challenges you, and we try to make experiences that challenge you to make your own path through that experience.”

Which is easier said than done, of course. Just because there’s a touchscreen in the room doesn’t mean you’ve created a meaningful interaction. But this drive to facilitate situations where people can feel empowered and engaged through interactivity, rather than simply awed, feels like a step closer to “postspectacular” design.

* * *

I was recently reading a book about filmmaker Roberto Rossellini on a recommendation from David Waingarten, our Creative Director of Storytelling (and local source of film knowledge). In a 1952 interview, Rossellini was asked to give an interpretation of Italian Neorealism, the film movement he was most associated with: “Neorealism is … a response to the genuine need to see men for what they are, with humility and without recourse to fabricating the exceptional; it means an awareness that the exceptional is arrived at through the investigation of reality.”

This quote captures why I still get excited about design projects that, from the perspective of technology or interactivity, lack a sense of spectacle. These are projects that provide compelling tools for researchers and educators or create channels for collaboration within a community. Just as Rossellini’s films aspired to reveal the exceptional in the reality of things, each of these projects has the potential to illuminate the everyday reality of its users by enabling them to take action. That is, in its own way, spectacular.

In the end, I value our studio’s drive to push the limits of spectacle. I am often inspired by what my studio-mates dream up and build. But it’s good to remind ourselves that it’s a balance of the pioneering spirit of spectacle with a mindful concern for the everyday that allows design to create sustainable and meaningful change in the human condition.

— Norman Lau, Senior Experience Designer

Put a Bird in It

Last month during Design Week Portland, we had a great time taking part in WeMake’s “Put A Bird In It” competition. The challenge was simple: craft a birdhouse to be auctioned off to help support art and music education in Portland’s public schools. Competing against other local artists, makers, and creatives, we got to work, collaborating across disciplines to come up with something inventive, beautiful, and representative of the studio.

We mulled over a variety of concepts, trying to determine how to best reflect Second Story’s culture through this project. We knew we wanted to incorporate the two things at the heart of our work–storytelling and technology–but we had to make sure the technological component was purposeful and practical. Needless to say, we cycled through a lot of ideas.

We sketched, we debated, and eventually we landed on a concept that perfectly represented us as a studio: a birdhouse inspired by a cabinet of curiosities. Brad Johnson and Julie Beeler, Second Story’s founders, have long been interested in these precursors to museums; in fact, the company once made a self-promotional trade show booth based on one. We decided to create a collection of oddities to inspire wonder and pique curiosity–in this case, an assortment of “extinct” animal hybrids, each half bird, half something else. Our tech component would be a microsite, a venue for us to tell some short stories about the creatures we came up with.

Everybody on the team was invited to think about the types of animals that could be represented, coming up with bird names and thinking about qualities associated with the hybrids. We democratically selected 8 final animals to run with: the armadilladee, cheetawk, flamingoat, giraffakeet, ostracamel, owlephant, peacoctopus, and porcupigeon. Inspired by everything from ancient literature to children’s movies to the Portland music scene, we started writing the strange stories of these imaginary specimens.

These short narratives helped inform the appearance of the birds and other items that ended up in the birdhouse. We took visual cues from old zoological engravings we came across in our research, and, once we’d drawn the birds digitally, we printed out the designs and traced them on a light table with a nib pen and India ink to ensure a precise and well-defined illustration quality. The process was slow going, but the results were gorgeous.

Beyond the birds themselves, we filled the birdhouse with all kinds of accoutrements, some inspired by the narratives, others by nature. The objects we didn’t make or find in our neighborhood were bought at craft and specialty stores around Portland.

Our lovely creatures needed a virtual space to live in and tell their stories, so we set to work on our microsite. We decided to go with a parallaxing effect for tablet and web to mimic the 3D layering seen in the physical birdhouse, and the end result is full of color, character, and movement. The microsite can be found at vogelkammer.com (vogelkammer literally means “bird room” in German).

In the end, the birdhouse auction raised $10,000 to help support art and music education in Portland’s schools. Our vogelkammer was in good company: the designs our peers came up with were terrific, and no two were alike. The Second Story birdhouse team, consisting of Laura Allcorn, Nora Bauman, Heather Daniel, Joe Carolino, Sam Jeibmann, Norman Lau, Sorob Louie, Swanny Mouton, Dimitrii Pokrovskii, Donald Richardson, Kirsten Southwell, and Filippo Spiezia, could not have been happier to be a part of this event. Collaborative, fun, and, best of all, for a good cause, this project was a true joy.

— The Birdhouse Team

Investing in Mentorship: Crafting a Story with University of Oregon Design Students

Sharing skills and knowledge across disciplines is part of our collaborative culture at Second Story, and my generous colleagues lend their time and expertise without pause. This summer, I had the opportunity to extend the studio’s spirit of mentorship by working with a team of product design students from the University of Oregon.

Me (left) with the Melo team

I came to know this team via an article written about Design For America (DFA), a national organization that supports groups of interdisciplinary students as they dedicate a project, without class credit, to a local challenge. I was connected with the DFA team at the University of Oregon. The core team, composed of designers Mica Russo, Andre Brown, and Madeleine Belval, has spent the last three years researching, testing, designing, and developing an interactive environmental installation tailored to non-verbal autistic classrooms. Their project, titled Melo, uses light, sound, and tactility to engage students with their environment and with each other.

At the point where this mentorship began, the team was nearing the final stretch of the project. Their concept for Melo was well defined and in the process of being developed. They had one unanimous goal: to donate and install Melo in four classrooms around Oregon. Yet they were still seeking a sense of closure on the project before graduating.

After a few discussions, it was clear that the team had trouble describing the project and their process without going into extensive depth (so it goes when you’ve had your head down in a project for years). We decided that our time together would be best spent helping the team define themselves and tell their story. Using the 2014 IxDA Awards application deadline as a milestone, the Melo team, with participation from fellow UO student Sean Danaher, created a video and written pieces that distill the project and their last three years of dedication into a concise and thoughtful narrative. Of course, this was no small task, and the effort ran concurrently with the final stretch of development. After a summer full of hard work, and with some additional guidance from senior experience designer Norman Lau, senior interactive developer Matt Fargo, and interactive developer Chris Carlson, they met their award deadline and are on track to deliver Melo to local classrooms before the end of the year.

Looking back on this experience, I am reminded of the larger benefits of mentorship, both inside and outside of working environments. While designers can be particularly cautious about where they invest their time, mentorship is among the most generous ways to engage the creative community. If you’re interested in delving into a mentorship of your own, I have found these practices to be very successful:

Mutual excitement: It was insightful to see the team’s tactics as they charted the foreign waters of programmatic language, visual and written storytelling, and even operating camera equipment. Their confidence, passion, and experimental energy made it exciting to play an active part in their process.

Developing trust: The trust we built together reinforced that neither of us was wasting our time. The team’s expectation of me was that I be invested and care enough to challenge their work to be the best it could be. Similarly, I trusted that the team would be motivated enough to reach their goal and see our collaborative efforts realized.

Becoming friends: I believe that friendship is what distinguishes a mentor from an instructor. While an instructor feels a sense of obligation, a mentor has an authentic emotional investment in their mentee’s success—making the end goal of the relationship more than just a grade or pat on the back.

To learn more about Design For America and to sign up to mentor student teams in your community, visit http://designforamerica.com/.

— Kirsten Southwell, Experience Designer

Everyone Needs a Good Listener: Theater as a Foundation for Interaction Paradigms

The last ten years have changed the way most people on the planet use, depend on, and intuit technology. Advances in mobile devices and tablets have changed the expectations of users across the board. We have all learned a new language that continues to evolve; whether a swipe or a two-finger pinch, we demand responses from only slightly varied articulations, responses tailored to our needs, whims, and landscapes. It’s future-retail gospel that experiences must be connected, leverage omni-channel strategies, and be personal. And yet, even now, most digital interventions in retail are barely more than a website wrapped in a kiosk. Every day we’re getting more confident, more clever, dare I say, closer to the right balance of technology and service. But we need to do more. In order to get there, what we really need to find is a damn good listener.

To really explain what I’m talking about, allow me to revisit my past. My early academic and professional storytelling career took place quite literally on a stage, where there were perhaps a few more make-up and costume changes than my current venture allows. This former life in the theater involved only limited digital expression and certainly no daily computer time, though perhaps just as much coffee. In both worlds, rich narrative experiences begin with periods of discovery and conceptual thinking, move into design and development, and end with a live release. In the theater, that release just happens to involve live humans delivering each run of the show, but both spheres share a similarly iterative dimension. Migrating organically from the performing arts to experience design has allowed me to identify a set of common principles, illuminated by a shared library of connections to process and design thinking, that has kept me invested in making and has let me experience perspectives present in so many of the arts.

Working in multidisciplinary teams, as we do at Second Story, I’m always acutely aware that there is much to be gained by looking to other mediums for insight into one’s own. And it has gotten me thinking a lot about my past in theater, this shared library of connections, and how the rich history of performance paradigms might be evaluated and applied to the (comparatively) fledgling field of interactive media. How might we look to a mature and developed art form to inspire the development of this new one? How might these lessons enable us to create enriched interactions that feel intuitive, responsive, and holistically connected to the story they are trying to tell, and, more importantly, to experience-craving audiences?

I have carried from the theater into the museum world, and on into my present work, a growing understanding of the importance of space and context in the creation of a time-based experience. To be clear: space matters. Our environment influences our perceptions and emotions more than we may realize. In the theater, with each run of a show, the same stories are told and retold every day in basically the same sequence. Though the speaking of words may be replicated each night, the experience of telling, of performance, is nuanced and fluid because of the shifting context. The players, the pacing, even the tone of the room and the energy of the audience affect the experience. As a performer, you are trained (at least in my experience) to drive the base narrative into your memory and then forget it. To absorb and know the arc, but to live and feel the moments as they happen, to be aware of the fluid space in which the performance takes place. To do this successfully, the performer, above all else, must listen.

Michael Shurtleff, who wrote a book that most actors read at one point or another in their career, said, “Listening is not merely hearing. Listening is reacting. Listening is being affected by what you hear. Listening is active.” Jack Lemmon extends this thought, asserting that acting:

…doesn’t have anything to do with listening to the words. We never really listen, in general conversation, to what the other person is saying. We listen to what they mean. And what they mean is often quite apart from the words. When you see a scene between two actors that goes really well you can be sure they’re not listening to each other — they’re feeling what the other person is trying to get at.

So what of it, in the context of experience design? Well, what if we could enable environments to do just that: listen? Public, commercial, and civic spaces are all rapidly becoming instrumented, but to what end? What if this is the first step to making responsive environments: making places that are smart enough to listen to our visitors implicitly? Beyond surveilling, what if our instruments transcend their “ears,” stop merely listening to what people say, and instead hear what they mean? Be it body language, words, gestures, taps… software and spaces need to evolve. We should strive to anticipate and intuit a person’s needs, especially when that need is to simply be left alone. Responsive places will become both the setting for and a part of the scene; these places will listen to understand motivations and needs and work to meet them.

This thinking is already embedded in the structure of our software. Code often includes so-called “listeners”: commands that observe processes and trigger behaviors based on those observations. They’re always there. They’re always listening, and they react or shift or manipulate based on what they hear. This kind of listening is quiet, reactive, and responsive. These listeners are behind the scenes, not in your face. Imagine we could begin to populate the world with physical analogues to these software listeners: receptors in the neural network of the internet of things. These sorts of listeners, in the wider context of a space, can help us understand people’s needs, so that we can deliver back more meaningful experiences.
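
In front-end code, the pattern is as plain as this (a generic example, not from any of our projects):

```js
// A listener sits quietly until something it cares about happens,
// then reacts; it never demands attention on its own.
const button = document.querySelector('#help');

button.addEventListener('click', (event) => {
  offerAssistance(event.target); // hypothetical response to a visitor's need
});

function offerAssistance(element) {
  console.log('assistance requested near', element); // stand-in behavior
}
```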

In the retail context, this thinking has some obvious applications. Clinique, for example, a brand that has shifted toward an open-sell paradigm in the last few years, has adopted an analogue version of this concept. Though not truly “smart” in the digital sense, the ability for a customer to quite literally wear on their sleeve the level of service they desire is a first step in this evolution toward responsive retail experiences. But I’m pretty sure we can do better. I don’t mean infiltrating a customer’s space, surveilling or annoying them, or collecting data in order to target them with broadcast media. I mean that we need to be smart, reserved even, seeking a balance between digital tools and service. That’s what a great customer experience actually means: educational, inspiring, motivated service. There is need and room for both digital and human touch points if we do it right, which is where we can look back to the theater.

For an actor, it’s not only the listening that matters; it’s the performance itself. The word “performance” makes sense in the context of theater, but “authentic response” is probably a better way to say it here, especially where retail is concerned. I want a two-fold experience when I enter a store. On the one hand, I want to be left alone to explore the landscape and confirm or address some basic assumptions or desires, but I also want confidence in those assumptions, or assistance, at the very least, from someone I can trust. I want a salesperson who is knowledgeable and, more importantly, engaged. I want someone whose motivation is authentic. I want to engage with someone who believes in what they are selling, and I want to be inspired.

We need to invest in both sides of this equation in order to really elevate the retail experience. A well-designed experience needs to be about not only the customer but also the salesperson. Digital tools need to deliver similar but separate experiential goals to the person who will be using them or directing customers to use them. The goals of the real-life person need to be differentiated from those of the digital experience, but they must complement one another. No digital tool, no matter how smart, can ever surmount the potential connection between two human beings. I believe technology should empower both the tools and the human pieces of the puzzle.

In the coming months, I’ll dive into this idea a bit further, looking at some competing theories, practices, and ideas on response and reaction in the theater and how they might be applied to experience design broadly, not only to retail.

Until then: Shirley Booth once stated that “the audience is 50 percent of the performance.” Chew on that.

— Traci Sym, Senior Experience Designer

Shape of Story

Movie-goers with varied expectations gathered at The Hollywood Theatre earlier this month for Shape of Story, an interactive screening designed to spark conversation. Visual and interaction designers were curious, as the event was part of Design Week Portland’s extensive programming. Journalists, multimedia storytellers, and documentary filmmakers were interested in a new and experimental form of interactive narrative. Others came just to watch short films on the polarizing issues of gun rights and gun control. In the end, everyone in attendance was moved by the power of storytelling and engaged by a moderated discussion.

The audience members at Shape of Story used a smartphone-enabled web application to “tag” moments of emotional impact while watching seven short films. A visualization of their aggregated marks was shown after each short while they submitted comments to contextualize their reactions. The shape of each story and a curated selection of comments were displayed on the big screen during the discussion held after the screening. The crowd feedback helped structure the conversation. With the aid of a facilitator, Shape of Story can transform a traditional movie theater into a dynamic space for dialogue and debate, resulting in a memorable and informative collective experience.

THE GENESIS

The prevalence of mobile technology in public life is opening up new opportunities to explore storytelling within physical group experiences. Events that bring people together to watch the same screen or stage–scenarios ranging from corporate meetings to concerts, conferences, film, and theater–provide a clear opportunity for mobile interaction and social game play.

In the same vein as previous Second Story lab projects such as TEDxPortland After-Party 2011, Constellation, and Real Fast Draw, Shape of Story is another example of how we are “empowering audiences to connect and share” in our “always-on world.”

As a former multimedia editor and a judge of numerous multimedia and photojournalism competitions, I’ve often imagined a tool capable of providing insight into the key ingredients of effective and impactful storytelling. What are the rhythms of narrative that emotionally connect with viewers? Chip Scanlan of The Poynter Institute, who writes and edits stories for a living, once told me that his secret to reviewing work is to write his story while reading: he takes note of his feelings moment to moment as he experiences a narrative. By broadening this approach to capture the responses of multiple people in a shared setting where their feedback can be displayed, you can start to visualize the shape of a story as defined by its audience. This feedback can then be used to facilitate dialogue between the respondents and the creators of the media they’re responding to.

CAPTURING EMOTIONS

We considered a number of technological challenges that were obvious from the beginning, and we adapted and improvised our approach to maximize the potential for engagement.

Challenge: Engaging with a device is disruptive to the overall experience of consuming the narrative.

We limited the level of engagement during the screening to a simple gesture: a tap. We asked the audience to tap the screen whenever the films moved them, bookmarking moments of emotional resonance. An on-screen color shift served as visual feedback to confirm the mark.
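
A sketch of what that capture might look like in the web app (the endpoint, payload, and film ID are hypothetical): log the film-relative time of each tap and flash the page as confirmation.

```js
// Record film-relative tap times and confirm each tap with a color shift.
const currentFilmId = 'short-1';   // hypothetical identifier
let filmStart = Date.now();        // reset when each short begins

document.body.addEventListener('touchstart', () => {
  const seconds = (Date.now() - filmStart) / 1000;
  fetch('/api/taps', {             // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ film: currentFilmId, time: seconds }),
  });
  document.body.classList.add('tapped');   // the on-screen color shift
  setTimeout(() => document.body.classList.remove('tapped'), 150);
});
```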

Immediately after each short, we displayed its shape of story on the big screen. The film was visualized as a timeline, with diamond symbols identifying the moments marked by the audience. The size of each diamond reflected the number of taps registered during the corresponding part of the film; a large diamond indicated many taps, a smaller one indicated few, and every contributed tap was visualized, so each audience member’s voice was heard. To help identify and give context to these moments of emotional engagement, the diamonds were accompanied by a corresponding thumbnail and transcript excerpt from the film.
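
Aggregating those marks into diamonds reduces to a simple binning step; the window length and size scaling here are illustrative.

```js
// Bucket tap times into fixed windows along the film's timeline; each
// bucket becomes a diamond whose size grows with its tap count.
function tapDiamonds(tapTimes, filmLengthSeconds, windowSeconds = 5) {
  const bins = new Array(Math.ceil(filmLengthSeconds / windowSeconds)).fill(0);
  for (const t of tapTimes) {
    bins[Math.min(bins.length - 1, Math.floor(t / windowSeconds))]++;
  }
  return bins.map((count, i) => ({
    time: i * windowSeconds,        // where the diamond sits on the timeline
    size: count === 0 ? 0 : 4 + 2 * Math.sqrt(count), // every tap contributes
  }));
}
```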

Challenge: Tapping a simple mark doesn’t give enough information or context.

After each short film, audience members were able to use the mobile app to anonymously comment on what they’d seen. They had three minutes before the next film started to submit their thoughts. A two-person team moderated the contributions, which were displayed on the big screen alongside each story shape after all seven shorts had been screened. These comments from the audience initiated an engaging conversation facilitated by Dave Miller, host of Oregon Public Broadcasting’s Think Out Loud.

Early in our process, we considered giving audience members the ability to comment on specific marks as well as to rate moments on a positive-to-negative sliding scale. We eventually cut both features because of technical and user-experience challenges, given our compressed development timeline. Providing rich contextual information for the tapped moments would be valuable and could lead to an extremely interesting visualization; we’d like to explore this feature in future iterations of the web application.

Challenge: The audience will forget about their device or won’t be motivated to engage with it.

Engaging with your mobile device is integrated into the experience rather than tacked on as an extra feature. Viewing long-form narrative films would certainly be possible with this technology, but it would require designing the experience to accommodate thoughtful, deliberate moments of engagement. Screening seven short films for the app’s debut allowed us to build moments of purposeful engagement into the design of the evening.

CONTENT IS KING

The evening was not all about technology, however. We recognize that, as the adage goes, content is king, and we didn’t hold back from confronting a contentious issue head-on. With Professor Wes Pope of the University of Oregon’s Multimedia Journalism master’s program, we curated a diverse selection of shorts for the screening, all on the topic of gun ownership, gun rights, and gun control. Three powerful short films came directly from Wes’s master’s program course. Filmmaker Skye Fitzgerald of Spin Film contributed an excerpt from his upcoming documentary, Oregon / Divide. Kim Rees from Periscopic walked us through a screencast of U.S. Gun Deaths, their data-rich interactive visualization. We heard an extremely moving radio story by Amanda Peacher of Oregon Public Broadcasting entitled How Gun Violence Has Shaped Three Lives. I also had the privilege of co-producing an interview with The Oregonian’s Jamie Francis on his portrait series Oregonians Talk Guns.

The diversity of content and multimedia approaches empowered us to present many sides of the gun debate. For the event at The Hollywood Theatre, Shape of Story aspired to advance meaningful conversations. By identifying shifts in audience sentiment and offering every viewer the opportunity to participate in thoughtful discourse, the technology has the potential to reframe dialogue about controversial issues to encourage productive discussion.

— Andrew DeVigal, Director, Content Strategy

Video edit by Andrew DeVigal. Cinematography by Kate Szrom, Summer Hatfield, Katelyn Black and Wes Pope.
