Passing by the Wave

One of the keys to a successful interactive experience is providing a little something for everyone. Typically, members of the audience for an interactive installation will vary in their desire to invest time and attention. An individual may have a keen interest in delving into the nooks and crannies of a subject—say cubist architecture—or they may just walk by, see someone else interact with the experience, and decide to watch them briefly from afar.

When designing and developing an experience, it’s important to consider the “just passing by” audience member. In a museum or cultural institution setting, it is precisely the casual observer, the first-time visitor, the non-expert, who we want to educate, inform, and expose to our subject. Look here! This is why you should care about cubist architecture!

The most important thing is to engage the visitor, even temporarily, in a positive fashion. These itinerant visitors, wandering from exhibit to exhibit, display to display, must be catered to on their own terms: they want something they can appreciate in very little time, with little or no interaction, and from a distance.

Recently, working with the Foundation for the National Archives in Washington, D.C., we created an experience consisting of a 15-foot interactive touch table with proximity sensors, flanked by two mosaic walls with multiple displays.


The experience was designed to showcase documents, multimedia, and history related to the issues of civil and human rights in America. The table allows for up to 12 people to interact with it simultaneously, browsing through a series of timelines, exploring the National Archives’ extensive collection of primary source materials, and sharing their reactions to those records with others on the mosaic walls. You can take a look at a video demo, more images, and a description of the project on its portfolio page on our website.


During the concepting and ideation phase, we wanted to come up with a unifying element to make the table—which actually consisted of six 55” displays with two PQ Labs touch overlays—feel like a single entity, and, most importantly, engage the interest of passersby. In the end, we came up with the idea of a series of lines that would undulate seamlessly across the displays from one end of the table to the other. The lines’ sinuous motion serves as a metaphor for the fluidity of ideas, their contour-like geological representation evokes a sensation of the weight and momentum of history, and, as waves collide with each other, the patterns the lines generate speak to the complexity that can be created from simplicity.

This element we simply called “The Wave.” The wave, we decided, would flow by itself, but users would be able to interact with and excite it. It would also provide a large, beautiful, animated, easily accessible visual element ready to engage users from afar.


In creating the wave, there were two primary challenges. The first was the question of how it would behave; each member of the project team had an idea about how the wave should look and feel. The second challenge was to make sure that the wave would propagate seamlessly across displays. Each display was being run by a separate computer, so somehow all the computers had to be informed about the motion of the wave.

The first problem was solved by a little mathematical graphing and some prototyping in our lab. Initially, I considered physically modeling a wave. It quickly became apparent, however, that a physical simulation would be computationally expensive, and that the amount of data that would have to be passed from display to display to keep the waves in sync was too high. After all, the only thing our wave really had to do was look like a wave, and, at its most basic, a wave is just an oscillating function, something like a sine wave:


But that’s boring. Here’s a tidbit that’s not boring: any periodic function (a function that repeats) can be recreated from a sum of sine and cosine functions. Such a sum is called a Fourier series. What this meant for us was that any wave shape we wanted was achievable by adding a number of sine waves together. Here’s an example of what happened when we combined a few to make a more interesting shape:
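To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; the production wave ran in the rendering layer) of summing a few sine terms. The function name and the particular amplitudes, wavelengths, and phases are hypothetical:

```python
import math

def wave_height(x, components):
    """Sum of sine terms: each component is (amplitude, wavelength, phase)."""
    return sum(a * math.sin(2 * math.pi * x / wl + ph)
               for a, wl, ph in components)

# Three sine terms with different wavelengths add up to a richer shape
# than any single sine wave on its own.
components = [(1.0, 200.0, 0.0), (0.5, 90.0, 1.3), (0.25, 37.0, 2.1)]
profile = [wave_height(x, components) for x in range(400)]
```

Plotting `profile` gives exactly the kind of organic contour described above: the short-wavelength terms ripple on top of the long ones.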



Finally, we didn’t want the wave to repeat forever, so we multiplied it by a pulse function to get something like this:
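A sketch of that windowing step, again purely illustrative (the Gaussian-style pulse and its parameters are assumptions, not the production formula):

```python
import math

def pulse(x, center, width):
    """Gaussian-style pulse: 1 at the center, falling toward 0 away from it."""
    return math.exp(-((x - center) / width) ** 2)

def pulsed_wave(x, center=100.0, width=30.0):
    # Multiplying the carrier wave by the pulse makes the wave die out
    # at the edges instead of repeating forever.
    return pulse(x, center, width) * math.sin(x * 0.2)

samples = [pulsed_wave(x) for x in range(200)]
```

Near the center the wave oscillates at full amplitude; far from the center the envelope squashes it to (effectively) zero.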


Here’s a little animation of three sine waves and a pulse function with a variety of variables changing randomly. You can see that you do indeed get some organic shapes in there. This is a variation of the algorithm we ended up using to develop the wave:

And here is an early prototype of the wave:

The final complication was how to make sure the wave was synchronized over multiple displays. Because of the way the waves were created, the only data we needed to communicate across displays was the current animation frame, when the wave was created, how long it would live, and a few wave parameters (wavelength, speed, etc.). The hard part was making sure that every display knew which frame it was supposed to be on. You can’t just tell every display computer to render the wave “NOW,” because that message takes time to travel over the network from the computer doing the telling (the server) to the display computer (the client). This delay is called latency. One approach would be to make sure that every computer kept track of time identically, tell each computer that at time “x” it should play frame “y,” and let it extrapolate which frame it should be rendering based on what time it thought it was. However, clocks drift, and since synchronization needs to be accurate to, at most, a few tens of milliseconds, whatever solution I came up with had to factor in time drift.
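The extrapolation step is simple arithmetic. This sketch (hypothetical names, not the production code) shows how a client with a known clock offset can derive the frame it should render:

```python
def current_frame(now_ms, start_ms, clock_offset_ms, fps=60):
    """Derive the frame a client should render from shared wall-clock time.

    clock_offset_ms corrects this client's local clock toward server time.
    All names here are illustrative sketches, not the production API."""
    elapsed = (now_ms + clock_offset_ms) - start_ms
    if elapsed < 0:
        return 0  # from this client's view, the wave hasn't started yet
    return int(elapsed * fps / 1000)

# A client whose clock runs 40 ms fast (offset -40) agrees with an exact one.
frame_fast = current_frame(now_ms=1040, start_ms=0, clock_offset_ms=-40)
frame_true = current_frame(now_ms=1000, start_ms=0, clock_offset_ms=0)
```

As long as each client knows its own offset from server time, all displays land on the same frame without the server having to broadcast every frame.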

In order to tackle these issues, I created a synchronization tool called “All The Screens.” Client computers registered with a server, and the server calculated network latency (delays) and time drift and provided those clients with a way to determine which frame they should be rendering. This solution has been open-sourced and can be found on GitHub. There is also a Google Chrome demo of the technology here.
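The standard way to estimate both quantities from a single request/response exchange is the NTP-style calculation; whether All The Screens uses exactly this formula is an assumption on my part, but it illustrates the idea:

```python
def sync_stats(t0, t1, t2, t3):
    """NTP-style estimate from four timestamps of one round trip:
    t0 = client send, t1 = server receive,
    t2 = server send,  t3 = client receive.
    Returns (round_trip_delay, clock_offset) in the inputs' time units."""
    delay = (t3 - t0) - (t2 - t1)
    offset = ((t1 - t0) + (t2 - t3)) / 2
    return delay, offset

# Example: client clock 50 ms behind the server, 10 ms one-way latency.
delay, offset = sync_stats(t0=100, t1=160, t2=161, t3=121)
```

Averaging the offset over several exchanges (and repeating periodically) is what lets a tool like this compensate for drift over time.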

These technical solutions allowed us to create the wave, whose mesmerizing motion lures visitors in to learn more about the history of civil and human rights in America. And, for that happy-go-lucky stroller who doesn’t have the time or inclination to delve into the content, perhaps the wave serves as a source of soothing visual relaxation, a counterpoint to the hustle and bustle of busy downtown DC.

Donald Richardson, Senior Interactive Developer

Posted in Design, Technology

Mobile Case Study

Second Story is deepening its physical design and environments practice by offering industrial design services to our clientele. We aim to be innovative, designing physical solutions to elevate digital interactive experiences, but our work sometimes requires practical, engineered solutions to package digital content in simple ways that are meaningful to the overall audience experience.

We’re always excited by design challenges that let us get our hands dirty. When a recent mobile application project presented the need for packaging design, we brought manufacturing processes into the studio. Our team came together in impressive fashion, with staff members from every discipline collaborating on an efficient assembly line to deliver an immediate yet stylish solution for our client’s needs.

— Jordan Tull, Designer

Posted in Culture, Design

Finding The Heart of “100 Years of Design”


Last May, we had the opportunity to partner with AIGA, a longtime collaborator and dream client, on a new microsite to commemorate their centennial and celebrate the last 100 years of American design. Our first reaction was excitement: as designers who pride ourselves in our discipline and our history, we were honored to craft stories that include some of the world’s most influential designers and their work. Our second reaction: where do we start?

At Second Story, we often describe our process as “designing from the inside out.” As we thought about AIGA and what made it special, it became clear that the organization sits at the epicenter of the conversation between design and society. This simple diagram was our first attempt to show how this conversation unfolds and how the artifacts in AIGA’s archives could become the lens for the site.


Building a project’s foundation is one of the most challenging and exhilarating points in our creative process. We refer to this discovery as finding the heart—the one truth of the project that will never change. The “heart” is the story that the experience is begging to bring to life. Creative Director David Waingarten has described the task of finding and articulating this conceptual foundation as “being the first to walk into a dark room and look for the light switch.”


To find the heart of the AIGA Centennial project, we fully immersed ourselves in the content. We delved into the vast collection of artifacts in AIGA’s Design Archives, combed through articles from diverse voices in the design community, and looked at other retrospectives, critiques, and blog posts. In our quest for enlightenment, we noticed there was little discussion of design history that was not organized by time, form, medium, or discipline. While these ways of presenting design history are informative and educational, we wanted to create a living resource that captures the ever-evolving conversation between design and society and invites everyone deeper into it.

As we were having this discussion, our collaborators at AIGA pointed us to “No More Heroes,” a poignant article from a 1992 issue of Eye magazine that really spoke to us. This quote from Bridget Wilkins was especially inspirational to our conceptual development:


With AIGA’s guidance and after countless thought-model sketches and “what if!” epiphanies, we landed on a framework that gives diverse audiences a new way to look at and evaluate great design. We organized the stories by design intent, allowing the purpose of the artifacts to be revealed for the visitor. The intention is what defines design, and as Milton Glaser so eloquently states: “The best definition I have ever heard about design and the simplest one is moving from an existing condition to a preferred one, and that is a kind of symbolic way of saying you have a plan because the existing condition does not suffice.”

We had to consider how to make this story framework exciting and accessible for guests with varied knowledge of design. It couldn’t overwhelm the general public, but it also had to meet, if not exceed, the expectations of design enthusiasts and practitioners. To strike this balance, we created an experience with two layers. At the surface layer, visitors can view carefully curated artifacts, quotes, and videos, and listen to audio clips. Those who are interested can go a level deeper to see additional artifacts, designer profiles, and moments from AIGA’s history. With 11 videos, 26 audio clips, 120 design artifacts, 17 designer profiles, 15 AIGA historical moments, and 19 quotes, there’s a wealth of content for visitors of all backgrounds to explore.


AIGA also wanted to extend the conversation to ensure that the microsite became a meaningful record of this time in design’s history. To foster discussion and participation, we needed an engaging prompt. How could we ask a stimulating and meaningful question without leaving guests lost or forcing them to spend 20 minutes crafting a response? We iterated on those six words exhaustively: testing reactions to words like “think” vs. “feel,” and finding out whether users were more comfortable contributing in the first person (“I am connected by design that…”) or from a general perspective on design (“Design that connects is…”). We settled on a phrase that could be applied across all five intents and that allowed guests to choose an intention and add their own thoughts and images.

The results have been incredible to watch. Each day the conversation grows, with over 700 user contributions and counting. We are thrilled with the final site and hope the experience engages a broad audience in a dynamic conversation about the role of design in our society and everyday lives. We encourage you to explore these narratives and add your voice to celebrate the evolution and impact of American design over the last 100 years.

Our studio is forever grateful to AIGA for giving us the opportunity to be part of such an incredible moment in design history.


— Laura Allcorn, Senior Content Strategist & Kirsten Southwell, Experience Designer

Posted in Content, Culture, Design

You Can’t Go Wrong With 8,294,400 Interactive Pixels

As Ultra High Definition (UHD) displays become more readily available, we will begin to see the technology adopted in many ways. We are most interested in the new standard because it will have a direct impact on the way we design and display interactive content. While we have been developing applications that run at resolutions similar to the 3840×2160 pixel resolution offered by UHD displays for some time, we have been forced to display them on multiple tiled HD displays, which introduce visible seams. With the advent of the UHD display, we can now combine scale and fidelity in the presentation of our media with a single seamless display.

With interactive media, the scale of a display can act as a beacon, enticing potential users to come closer and explore content. A large-scale display also accommodates more users at a time, inviting collaboration, especially on a horizontal (table) surface where people are brought face-to-face with those across from them as they interact with the media.

But scale isn’t everything. By its nature, interactive content has to remain legible when viewed at an arm’s length as users touch and interact with the surface of the display. At this close range, most large displays don’t have the fidelity to carry type and subtle graphics. It is here that the UHD resolution succeeds where other displays fall short. At about 50 pixels per inch (ppi), the Planar UR8450 displays offer precise pixels and legible content even when viewed up close.
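The ~50 ppi figure follows directly from the panel’s resolution and diagonal. A quick sketch of the arithmetic (the helper function is mine, and I’m assuming an 84-inch diagonal for this class of UHD panel):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """ppi = pixel count along the diagonal / diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# An 84" UHD panel (3840 x 2160) lands right around the ~50 ppi quoted above.
ppi = pixels_per_inch(3840, 2160, 84)
```

The same formula explains the Retina comparison below: shrink the diagonal while holding the resolution and ppi climbs past 300.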

With this display, we can already begin to imagine a future where the pixel is no longer a consideration. Today, we can see this in relatively small Retina Displays where ppi counts surpass 300 and individual pixels seem to disappear. We look forward to a time when this type of fidelity will become ubiquitous on large and small displays. Increased legibility will allow content to be displayed at any scale and orientation, opening up new modes of interaction. Displays will become a window through which emotive, high resolution content will be displayed, bringing stories to life in new ways.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Mobile Depth-Sensing

From the Protecting the Secret interactive at the Vault of the Secret Formula exhibit to the Connections Wall at the Emerging Issues Commons, Second Story regularly uses Kinect and other similar technologies to create dynamic content based on sensing where users are in physical space. In the past, the mobile use of these sensors has been restricted by their need to be tethered to powerful “desktop” CPUs; we’ve had to use USB signal extenders and dedicated wiring to mitigate these constraints.

But developments in computing are giving us enhanced flexibility. The latest ARM processors are small, portable, and powerful, and we’ve been experimenting with using them to process depth data right from the sensor’s location.

The processors can be powered over CAT-5 Ethernet or even battery (depending on the use case) which makes deployment easy, and they automatically start working as soon as they’re powered on, eliminating the need for an external display. Using available bandwidth, they can send data over WiFi or regular CAT-5 to more powerful CPUs that do data interpretation.

The sky’s the limit with these little guys. We can’t wait to explore the possibilities.

— Sorob Louie, Interactive Developer

Posted in Technology

100 Years of Design

In 1914, a small group of designers inaugurated what became the American Institute of Graphic Arts. One hundred years later, Second Story has collaborated with AIGA to create a centennial microsite that celebrates the profound impact design has had on our society over the last century and invites everyone into a conversation about the impact of design on our daily lives.

Our first task was to collaborate with AIGA on curating a set of works to illustrate the breadth, diversity, and evolution of American design over the last century. We also wanted to present these works in a different way than a typical retrospective might. We wanted to focus on the “why” instead of the “how,” exploring the intentions behind these works rather than simply categorizing them by medium, style, geography, or plotting them on a timeline.


These design intentions became the core of the site: five media-rich narratives focused on how design connects, informs, assists, delights, and influences us.


We also see this microsite as a time capsule that successive generations of designers might open in 2114, as AIGA celebrates 200 years. Knowing that the tools and methods those designers will employ will evolve far beyond what any of us can imagine today, what kernels of truth or wisdom from AIGA’s first century could this site preserve and pass on?

To find answers, our film crew captured the oral histories of 18 living legends of American design. We asked these designers to comment and reflect on their own seminal works, the arc of their careers, and the lessons they’d like to pass on to future generations. Their answers were humble, straightforward, hilarious, heartfelt, and enlightening. Being present with the likes of Paula Scher, Milton Glaser, Richard Saul Wurman, Jessica Helfand, Michael Bierut, Seymour Chwast, and many others was an incredible honor. Their stories and insights bring this content and conversation to life in a way nothing else could.

Most importantly, we wanted to invite everyone to the party. So we created a way for people to share how design connects, informs, assists, delights, and influences them today. Contributions are already pouring in, and we are thrilled to see such a diverse range of responses.

Centennials offer us a chance to look back at where we’ve been, to recognize a shared history and inheritance, and to appreciate the evolutionary continuum that connects those designers in 1914 to us here and now. They also give us the chance to look forward – to take what we’ve learned in new directions and ask what’s next. As part of the team that has spent over eight months bringing this microsite to life, I can say that looking back has taught us a tremendous amount about design’s role in shaping how we see our world, ourselves, and each other. Looking forward, this project has made us look deeply at what motivates us to do the work we do, and rededicated us to bringing those intentions to life in everything we create.

— David Waingarten, Creative Director, Storytelling

Posted in Content, Culture, Design

Leap Motion Path

Leap Motion Path is a Second Story lab experiment exploring the use cases for the Leap Motion controller in the digital animation field. Our objective was to create a tool to capture 3D motion that could be used within an animation platform to control digital content.

One of our goals was to record an animation without the assistance of a keyboard or mouse. To achieve this, we needed a way for the animator to know where their hand was in relation to a recording canvas. We accomplished this by creating an inset frame that gives the animator space to interact with the Leap Motion and a spatial reference for where their hand is relative to the screen. Once they’re ready, they can enter the frame and start recording. Later, in the animation software, they can remove the entry and exit points by cropping the recording.


During this experiment, we encountered another interesting use case. Leap Motion provides a lot of data about the geometry of the hand. If you capture the position of the animator’s wrist and the tip of the index finger and draw a line between those points, you end up with a vector that indicates the direction of the hand. If you capture this vector over time, you see that it produces a beautiful ribbon. The animator can record this ribbon and use it in other animations.
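That wrist-to-fingertip vector is a one-liner of vector math. Here’s a small illustrative sketch (function names and sample coordinates are mine, not the Leap Motion API):

```python
def hand_direction(wrist, fingertip):
    """Unit vector pointing from the wrist to the index fingertip (3D points)."""
    dx, dy, dz = (f - w for f, w in zip(fingertip, wrist))
    length = (dx * dx + dy * dy + dz * dz) ** 0.5
    return (dx / length, dy / length, dz / length)

# Collecting (wrist, fingertip) pairs frame after frame gives the ribbon:
# each pair is one rung of the ribbon's surface, swept through space.
ribbon = [((0, 0, 0), (0, 0, 100)), ((5, 0, 2), (12, 0, 99))]
direction = hand_direction(*ribbon[0])
```

Rendering a quad strip between successive rungs is what produces the ribbon effect in the screenshot.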


As we developed Leap Motion Path, several stand-alone libraries emerged. We’ve hosted one, called Path.js, on GitHub. To provide some additional context: we wanted to capture a 3D position over time and then animate along the path we recorded. If we were to animate along only the discrete points Leap gives us, the animation would be choppy, because their resolution doesn’t match the actual path we drew with our hand. To combat this, we needed to interpolate a line or curve between those points for a finer resolution, so we could animate at any speed. Path.js takes a collection of timestamped points and creates a linear interpolation between them. This allows Leap Motion Path to export an animation in vector format, so the animator can scale and stretch the animation as desired.
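The core of that technique can be sketched in a few lines. This is my own minimal Python rendition of timestamped linear interpolation, not Path.js itself:

```python
def sample_path(points, t):
    """Linearly interpolate a timestamped 3D path at time t.

    points: list of (timestamp, (x, y, z)) sorted by timestamp."""
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # 0 at p0, 1 at p1
            return tuple(a + (b - a) * u for a, b in zip(p0, p1))

path = [(0.0, (0.0, 0.0, 0.0)), (1.0, (10.0, 0.0, 0.0)), (2.0, (10.0, 10.0, 0.0))]
```

Because `sample_path` accepts any `t`, the animation can play back at any speed and still move smoothly between the recorded samples.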

With more development, Leap Motion Path could be integrated into a standard digital animation workflow, giving animators one more tool to create beautiful and lifelike work. Moving forward, to improve the motion-capture experience, we would need to rewrite the recording mechanism as a plugin for an animation platform, enabling the animator to record and review all in one application. We look forward to integrating Leap Motion Path into our own animation workflow at Second Story.

— Dimitrii Pokrovskii, Interactive Developer

Posted in Technology

Unboxing the Kinect for Windows v2


Kinect for Windows v2

We started experimenting with the Kinect for Windows v2 from Microsoft this week and are already excited by the new possibilities that this impressive new depth-sensing camera offers. Mechanically, the camera is a bit larger and bulkier than we would like, but it also features a tripod mount adapter (threaded insert) which will go a long way towards helping us incorporate the sensor into different environments.

Among the many improvements, we quickly found that the new camera was able to sense just about as many people as we could fit in its expanded field of view. Tracking has also improved, allowing skeletal data to be sensed in a variety of poses (sitting, prone). We’re also excited by the new level of detail captured in each pose, which includes basic hand and finger tracking.

The depth image returned by the camera also shows significant improvements in speed and image fidelity. This type of depth data allows for the capture of image details previously unattainable with the Kinect.


Raw depth image from the K4W

As we continue to experiment with and adopt the technologies that the future is bringing, we couldn’t help but pause to thank the engineers and scientists at Microsoft who made this one possible. It is technologies like this that enable us to create new and inspiring experiences.

With that, we leave you with this final message for this season:


— Matt Arnold, Lead Integration Engineer

Posted in Technology

How We Built Lyt: a Technical View of the Making Process

At Second Story, we all share a passion for making and crafting. Our collaborations have taken lots of different forms, from a robot that prints tattoos to a floor-to-ceiling interactive sculpture to an imaginative birdhouse. We thrive in the vibrant designer community here in Portland, and our local involvement enabled us to meet members of the Intel Labs team who gave us a great opportunity to do some tinkering. They were about to release the Galileo board (see the specs here) and asked us to come up with a demo showing what the board could do. It would then be presented in Rome for the European version of the Maker Faire. After a few weeks of furious designing, prototyping, fabricating, and testing, Lyt, a collaborative, interactive lighting fixture, was born.


If you’re one of the lucky few who sees through the matrix, source code and build instructions for this project can be found on GitHub; the README over there contains a pretty detailed description of the overall architecture as well as how the various components interact together.

If you’re a more visual person, you can watch the making of Lyt on the creators project website.

Daniel Meyers, Creative Director, Environments, wrote a great blog post back in October about his thoughts on the project, but I wanted to provide some additional information about our process from a technical perspective. Here we go!



The Galileo board is an interesting mix: it’s both Linux-based and Arduino-compatible, offering developers the opportunity to play with advanced tools created by the Linux community and electronics components and shields from the Maker community. We tried to pick the best from both worlds for our prototype.

In terms of concept, we found inspiration in a couple of previous lab projects: Aurora, which incorporated LED mesh, and Real Fast Draw, a collaborative drawing application. After a couple of iterations, we knew what we wanted to make: an LED mesh from scratch that you could draw on.

Picking out the hardware


Once we decided to make an LED wall “display,” we had to find the right LEDs. Nowadays, you can buy strips by the meter, with densities ranging from 32 LED/m to 64 LED/m (the denser the strip, the smaller the pitch, i.e., the distance between two LEDs). Another factor to consider is the type of microcontroller driving the LEDs. The predominant ones are the LPD8806 and the WS2801. These nifty controllers let you address every single LED on your strip independently. Thanks to the wonderful world of Arduino, you can buy these on Adafruit or SparkFun, and both give you Arduino libraries to drive them.

After some experimentation, we decided to go with a 32 LED/m strip running on the WS2801. A denser strip would have drawn more power and required a more precise data signal (some people have successfully run a 64 LED/m strip on a Netduino, but we didn’t want to lose too much time figuring this out when the other one worked right out of the box).

Daisy chaining


After picking the strips we wanted, we had to see how many of them we would be able to plug together. These strips work with four wires: one 5V power, a ground wire, one SPI clock, and one DATA (“master out, slave in,” in the SPI lingo). The full-fledged SPI bus has two more wires, one “master in, slave out” to receive information from a slave device, and a slave select to tell which device is connected on the bus you’re talking to. Obviously the LED strip was not going to give us feedback, so there was no need for a master-in wire, but the lack of slave select was pretty annoying; it was harder to get multiple strips listening on the same bus and decide which one to talk to. So, instead of running them in parallel, we went for a series circuit where the maximum number of strips would be daisy-chained together. Experimentation taught us that we could go up to 12 meters before getting a nasty flickering effect. This was around 380 LEDs per fixture with an estimated power consumption of 70W.
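The back-of-the-envelope numbers above check out with a quick sketch (the per-LED wattage here is an assumed average I derived from the quoted ~70 W figure; real draw depends on color and brightness):

```python
def strip_budget(meters, leds_per_meter, watts_per_led=0.18):
    """Rough LED count and power draw for one daisy-chained run."""
    count = int(meters * leds_per_meter)
    return count, count * watts_per_led

leds, watts = strip_budget(12, 32)  # one 12 m fixture, as described above
```

Twelve meters at 32 LED/m gives 384 LEDs, matching the “around 380 LEDs per fixture” estimate.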

We decided we would make 3 modular panels of 12m LED strips, each with its own Galileo board. This is clearly overkill: one Galileo board might have been enough to drive our three columns, with a mux-demux controller to select which strip we wanted to talk to. However, from a demo perspective, it was safer to run 3 independent boards, so, if we accidentally fried one, we would still have a working prototype.

We’re happy to report that no Galileo board was harmed in the making of this demo.

Starting the development


We wanted to be able to control the light fixture using mobile devices and still have a board per fixture. We obviously needed some means of inter-board communication to notify them of the phone interaction, and having a self-contained system was our ideal. We ended up with a four-Galileo-board set-up, using one, which I’ll call the Lyt board, as the server. It dispatched relevant information to the three other boards driving the fixture.

The Adafruit library we used takes a “pixel” buffer as input and displays it nicely on the strip, but we had to make some slight modifications for it to run smoothly on the Galileo. It’s pretty common on an Arduino board to transfer a buffer one bit at a time by switching a pin on and off. This should work on the Galileo too, but, since a full Linux system is running, things are not that bare-metal. Instead, it’s mostly as if you were writing to a file: you have to open a file descriptor, write, and close it. That process goes through system calls that take a few milliseconds to kick in, which is not good when you have 400 LEDs to drive. Fortunately, the Intel folks were smart and added a function to the SPI library to transfer a buffer in one go, so instead of a syscall for every byte, it’s reduced to one (or a few) per buffer. These details are hidden in the depths of the SPI library, so, for the regular user, the only thing to know is that you can transfer a buffer quickly over SPI. The curious should explore the source code available in the Arduino IDE to better understand how all this black magic works.

Adventures in debugging


This project was a truly international effort: the fabrication and concepting happened in our studio in Portland, OR; the Galileo boards were fabricated in Ireland (it’s printed on them!); the software development was done in France (the developer was suffering from a serious case of wanderlust); and it was all for an event happening in Italy. One of the many obstacles we faced during development: how do you develop a graphical interface when you don’t have the display? Our solution was to create a small debug application using an awesome library called openFrameworks. Our app receives OSC messages from the board saying, “Hey, I’m going to display this and this on the LEDs,” parses them, and displays the result on a laptop. In return, we added a communication link from the laptop to the board to say, “Hey, I’m clicking on this part of my phone, you should really do something.” This was, of course, not the final solution, but it worked well as a placeholder; the communication link between phone and board would be developed later.

Networking is the key to success


You may wonder how all these fancy boards talked to each other. The Galileo board has a mini PCI Express slot (the kind of thing you can find in a laptop) into which you can put a WiFi card. At the time of this writing, the Intel N-135 wireless card is the only officially supported one. Using a tool called hostapd, we were able to make one Galileo board run as a WiFi access point and have all the others connect to it. Then it was easy to send OSC messages between all of them.

Finally, to get mobile phones talking to the board, we wanted to provide the most lightweight experience possible for the user and thought it would be great to build the UI with HTML5 and CSS (i.e., a basic webpage). Part of the HTML5 family of specifications, WebSockets let your browser hold a persistent, bidirectional connection to another machine; they were perfect for our purposes. We compiled a WebSockets library for the Galileo board (libwebsockets), plugged it into our Arduino sketch, and, after some tinkering, VOILA!…phones could talk to the Lyt board.



In a few words, here’s what’s happening once you’ve plugged everything in:

Our Lyt board creates an access point you can connect to with your phone. Meanwhile, every light fixture connects to this WiFi network and sends information to the Lyt board (like what color it’s currently displaying). When you request the Lyt page on your phone, the board serves it back to you, and, once the page is loaded, your phone opens a WebSocket to the Lyt board. Over that socket, the board keeps the phone updated with the color currently displayed on each fixture; in return, when you interact with the UI, the phone sends your request to the Lyt board, which relays it to the appropriate fixture. Finally, the fixture triggers the touch animation.
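The relay logic in that paragraph can be sketched as a small dispatcher on the Lyt board that routes fixture updates out to phones and phone requests back to fixtures. This is an illustration only: the message names, JSON encoding, and fields below are hypothetical, not the project’s actual wire protocol:

```python
import json

# Hypothetical in-memory state: last known color per fixture id
fixture_colors = {}

def handle_message(raw: str, source: str):
    """Route a JSON message arriving at the Lyt board.

    source is either "phone" (over the WebSocket) or "fixture" (over WiFi).
    Returns a (destination, message) pair, or None if there is nothing to relay.
    """
    msg = json.loads(raw)
    if source == "fixture" and msg["type"] == "color_update":
        # A fixture reports its current color: remember it,
        # then broadcast the update to every connected phone
        fixture_colors[msg["fixture"]] = msg["color"]
        return ("phones", json.dumps(msg))
    if source == "phone" and msg["type"] == "set_color":
        # A phone asks for a color change: relay it to the right fixture,
        # which will then trigger its touch animation
        return (f"fixture/{msg['fixture']}", json.dumps(msg))
    return None
```

The same two-way shape (fixtures push state up, phones push intent down, the board in the middle) is what the WebSocket plus OSC plumbing described above makes possible.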

After a few whirlwind weeks and with a successful demo in hand, we got to spend some time in Rome with Intel and the European Maker community. Once more, we were reminded that technology can be the seed for great collaboration and wonder.

— Philippe Laulheret, Interactive Developer

Posted in Design, Technology

Balancing Spectacle with the Everyday


In a passage from his book In the Bubble: Designing in a Complex World, John Thackara takes a critical stance against place marketing and describes his vision for an alternative, more sustainable approach to the design of cities:

“A sustainable city … has to be a working city, a city of encounter and interaction—not a city for passive participation in entertainment. Sustainable cities will be postspectacular.”

Even though Second Story isn’t necessarily in the business of designing cities (sustainable or otherwise), I feel that Thackara’s words have significance for any designer concerned with how we shape our environments and facilitate experience in spaces.

He uses the phrase “postspectacular” to caution against design that is nothing but spectacle. For him, spectacle is an undesirable outcome of design that casts people as passive consumers of an experience rather than empowering them as active participants in meaningful human interaction.

It’s interesting to consider this stance in the context of Second Story’s work because, in some ways, we consider spectacle a key piece of what we do. In a Creative Mornings talk, our Innovation Director, Thomas Wester, described how Second Story’s work could be seen as a contemporary link in a long line of historical experiments in spectacle, such as the cyclorama and the eidophusikon.

In many cases, we hope to inspire the same wonder and awe that these inventions did when they were first revealed. I feel like the pursuit of spectacle in design can often end up expanding the limits of what we consider to be possible. For this reason, I’m not as harsh on the idea as Thackara seems to be.

But I do think that Thackara’s point still stands. Wonder and awe are valuable, but they’re also more transient aspects of an experience. A sustainable design, one that sustains meaning over time, requires us to think about how it integrates with and enriches the everyday working lives of people.

* * *

Thackara isn’t alone in voicing these concerns. During the Q&A of the above-mentioned Creative Mornings talk, one audience member asked Thomas whether he tires of our society’s obsession with new technology. Another raised a concern that so many examples of interactive art seem to exist only to serve interactivity as a goal unto itself.

What I sensed in these comments from the group was an underlying dissatisfaction with projects in the space of interactive digital experiences that celebrate technology and interactivity but pay little attention to the people who experience them. These were echoes of Thackara’s critique applied to our world of digital design, pointing out the lack of substance that occurs when a design only focuses on the spectacles of new technology or the latest paradigms of interactivity.

I think Thomas provided a couple of thoughtful answers to the audience’s questions (which you can listen to in full starting at about 39:25 and 41:02 in the video): “We often become couch potatoes and have this consuming attitude, and I think interactivity is anti-consumerism in that sense. It challenges you, and we try to make experiences that challenge you to make your own path through that experience.”

Which is easier said than done, of course. Just because there’s a touchscreen in the room doesn’t mean you’ve created a meaningful interaction. But this drive to facilitate situations where people can feel empowered and engaged through interactivity, rather than simply awed, feels like a step closer to “postspectacular” design.

* * *

I was recently reading a book about filmmaker Roberto Rossellini on a recommendation from David Waingarten, our Creative Director of Storytelling (and local source of film knowledge). In a 1952 interview, Rossellini was asked to give an interpretation of Italian Neorealism, the film movement he was most associated with: “Neorealism is … a response to the genuine need to see men for what they are, with humility and without recourse to fabricating the exceptional; it means an awareness that the exceptional is arrived at through the investigation of reality.”

This quote captures why I still get excited about design projects that, from the perspective of technology or interactivity, lack a sense of spectacle. These are projects that provide compelling tools for researchers and educators or create channels for collaboration within a community. Just as Rossellini’s films aspired to reveal the exceptional in the reality of things, each of these projects has the potential to illuminate the everyday reality of its users by enabling them to take action. That is, in its own way, spectacular.

In the end, I value our studio’s drive to push the limits of spectacle. I am often inspired by what my studio-mates dream up and build. But it’s good to remind ourselves that it’s a balance of the pioneering spirit of spectacle with a mindful concern for the everyday that allows design to create sustainable and meaningful change in the human condition.

— Norman Lau, Senior Experience Designer

Posted in Culture, Design, Technology