“Digital Revolution” in London

With its post-apocalyptic looks and lost bits of nature, London’s Barbican Estate deserves a blog post all its own, but I wanted to share my experience with “Digital Revolution,” a unique exhibition hosted in its depths at the Barbican Centre.

This exhibition successfully brings together a selection of old, recent, and exclusive works, aiming to show the convergence of technology and creativity in design, music, film, videogames, and, actually, everywhere in our culture since the ‘70s. The boundary between these spheres has become nearly invisible, as creative minds have progressively found, crafted, and used digital tools to explore the possibilities.

Without any gimmick—it’s 100% SpaceInvaders-sticker-free!—this exhibition is a truly refreshing and inspiring museum experience designed for everyone. I suspect it will not export to the U.S., and I doubt all of you will be able to visit London in the next month, so I wanted to share here my thoughts and favorite parts.


Pong. Broken Age.

Digital Archaeology

The journey starts with a brief but necessary retrospective of digital antiques, surrounded by a dozen synchronized screens showing historical references to the digital landscape (the appearance of GIFs, iconic video games, famous movies using ground-breaking CGI, etc.). I say “brief and necessary” because this exhibit is not about nostalgia but rather about how our accomplishments over the past 40 years have informed a better understanding of the creative innovations being showcased and the breadth of possibilities for the future.

I got to play on an original Pong arcade station with a CRT screen. The Fairlight CMI, 30 years old and one of the most iconic sampling synthesizers, was on display. A 9-year-old kid held an original Game Boy in his hands and played his first game of Tetris. Visitors checked out a couple of obscure NetArt pieces using Netscape.

Showcasing these artifacts presented some real curatorial challenges, since, unlike books, sculptures, or paintings, they need another medium to be visible. It’s also very hard to find replacement parts for the consoles, and newer browsers deprecate old code. The fast digital obsolescence makes these works—and, by extension, our own work in the digital sphere—very fragile. Worse, the easy ability to duplicate digital content makes them less valued. The exhibit carefully labels these once innovative creations, giving their authors due recognition for their ingenuity.

Tetris and Minecraft. Deletion and creation.


We Create

The exhibition continues with more recent works grouped according to a series of themes. We Create glorifies projects that touch a sense of collaboration in their conception—such as the beautiful Broken Age, a point-and-click game supported by more than 80,000 people via Kickstarter; in their actual making and goal—like Aaron Koblin and Chris Milk's Johnny Cash Project, where visitors are invited to collectively draw a tribute to Cash; or in the community that they developed—like the sandbox Minecraft with its millions of players and contributors and its omnipresence in the collective memory. Getting to watch a group of kids discover they could actually play Minecraft during the exhibition reminded me that "Digital Revolution" is truly a celebration of interaction. It understands that if a project (a game, a website, an app, etc.) is meant to be interactive, it has to be showcased in a participatory way, because the user, the human, is such an essential part of it.

Another piece linked to We Create is Universal Everything's Together, which invites people to create a short frame-by-frame looping animation. Pending review, contributions are displayed on a long wall of screens. As I regularly work on motion design at Second Story, I just loved this piece. The interface felt a bit raw, but constraint fosters creativity, and the installation is still very attractive and educational. It's a great way to understand the basics of animation and to challenge yourself to create something that inspires others.

Nearby, visitors can also play a few recent independent games, testaments to the ability of small teams to create powerful and rich experiences now that the tools they need are more affordable and knowledge is more accessible. Fez, Papers, Please, Antichamber, and Journey are some of the award-winning titles available.


Les Métamorphoses de Mr. Kalia.

State of Play

State of Play gathers body-conscious interactive installations in which creators experiment with play and gesture, using the participant's physical behavior and the tools that observe it, like Microsoft's Kinect (something we are very familiar with). Les Métamorphoses de Mr. Kalia, one of the pieces commissioned through Google's DevArt program, was definitely my favorite. The experience is quite simple: once you are in position, a character follows your movements and you take part in a short surrealistic interactive story (a sort of full-body extension of Google's Puppet Parade). The piece is visually poetic, there's a great sense of mystery and revelation in the relationship between participant and spectator, and, best of all, you are given a link to replay your performance on the website right after you finish your story (by the way, here are my moves). Another very curious installation—one that I unfortunately missed due to its unobvious location—is Umbrellium's Assemblance, which allowed people to interact with lasers and sculpt the light with their shapes and gestures.


Computer imagery from Apple’s Woodcut to Dreamworks’ Apollo.

Creative Spaces

Creative Spaces showcases some examples of technology's influence on how we tell stories. A documentary describes the impressive Apollo project, a tool Intel conceived for DreamWorks animators, which they used on How to Train Your Dragon 2. The new tool enables a faster approach to their animation and lighting work, sparing them from rendering every individual change they make, apparently a standard step in animation pipelines. Apollo improves efficiency and allows them to focus on the core of their craft.

The exhibition also includes two behind-the-scenes physical installations devoted to recent blockbusters. We first learn about the making of the city-bending scene in Christopher Nolan's Inception. While the content doesn't differ much from what you'd find in the bonus section of your Blu-ray, the experience offers the opportunity to use a Leap Motion controller—another familiar tool at Second Story—to interact with it. Moving your hand from left to right over the sensor plays the scene forward; moving it from bottom to top browses through the steps of the process as layers, from the first documentation shots to the concept art to the untextured 3D explorations to the final render. The interaction sounds fairly simple, but with no physical boundaries and a very discreet user interface, many visitors began by moving their hand in all directions, only realizing after a couple of seconds that just two axes are used. Even so, it is not a bad experience, and it's actually a very inspiring way to browse through content.
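The two-axis mapping described above is easy to sketch. In this hypothetical reconstruction (the layer names and coordinate ranges are our inventions, not the installation's actual code), horizontal palm position scrubs the scene while vertical position selects a process layer:

```python
# Hypothetical sketch of the two-axis Leap Motion mapping described above;
# layer names and coordinate ranges are invented for illustration.
LAYERS = ["documentation shots", "concept art",
          "untextured 3D", "final render"]

def map_hand_to_state(palm_x, palm_y,
                      x_range=(-200.0, 200.0), y_range=(100.0, 400.0)):
    """Map palm position (mm over the sensor) to a playhead and a layer."""
    def normalize(v, lo, hi):
        # Clamp into 0..1 so movement outside the ranges is ignored.
        return min(max((v - lo) / (hi - lo), 0.0), 1.0)

    playhead = normalize(palm_x, *x_range)            # scrub the scene
    index = min(int(normalize(palm_y, *y_range) * len(LAYERS)),
                len(LAYERS) - 1)                      # pick a process layer
    return playhead, LAYERS[index]
```

Ignoring every other axis is what eventually makes the gesture legible: once a visitor realizes only two dimensions matter, the interaction becomes obvious.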

We also learn about Alfonso Cuarón's Gravity and how the team built an entire set of environments and tools to capture the actors' performances and help them empathize with their characters. The installation was shaped like one of those tools, the Light Box, which immersed the actor in a cube of LEDs displaying a rough animation of what the character would see while in action. In the exhibit, several screens are precisely synchronized to display the different steps of a very complex scene in which visitors become entirely immersed. The screens then progressively reunite, revealing the final rendition.

Innovation also happens outside of Hollywood. The interactive documentary Clouds features artists, developers, and hackers talking about their thoughts on code, data, community, goals, and challenges. The documentary is actually a full 3D environment, where interviewees appear in between illustrations created by coders, shot with a combination of the RGBDToolkit, a Kinect, and a DSLR camera. Interview by interview, the viewer navigates a large network of topics, here again enjoying a unique, non-linear experience.

Finally, with a bit of a political bent, James Bridle gathers online information about American drone strikes in Afghanistan and Yemen and publishes aerial images of the locations, along with the date and a description of each event, on an Instagram account named Dronestagram. This creates a virtual space, open to discussion, that brings closer and better reveals events happening in less accessible areas that are rarely covered by the media. I was surprised and excited to learn about this project and to see it in this exhibition. It's an attempt to showcase a less entertainment-oriented piece, one aimed at raising debate on socio-political issues in the real world.


Clouds. House of Cards.


This section mostly covers how the development of the visual arts, via synesthesia, helped create new experiences in music for both audiences and musicians. We find Arcade Fire's The Wilderness Downtown and its multi-window, geolocated interactive music video, Radiohead's House of Cards music video, shot using laser scanners rather than cameras, Björk's album Biophilia and the apps that accompany each of its songs, Amon Tobin's Isam 2.0 show with its impressive projection mapping performances, and Holly Herndon and Akihiko Taniguchi's Chorus music video featuring 3D captures of messy work environments. There are so many terrific examples of this kind of work that it's impossible to be exhaustive, but curator Conrad Bodman has gathered here a solid representative sampling.

I am not sure the setup for watching these videos was ideal, however: a wall of screens displaying them looped as a mosaic with headphones hanging in the front in order to hear the music. It felt hard to appreciate each piece on its own and keep focus on a single video at a time. Nevertheless, it’s a good conversation piece that inspires the visitor to remember other similar works that have moved them in the past.

Beyond music, I discovered Energy Flow, a collaboration between FIELD, Intel, and VICE. It's an app that explores the deconstruction of narrative, using an algorithm to randomly associate and edit videos and sounds. The algorithm consequently creates a new story every time it is watched. The results are infinite, feel very poetic, and are open to interpretation.


Kinisi. MAN A.

Our Digital Futures + DevArt

Certainly the most experimental part of the exhibit, Our Digital Futures lets artists explore the potential evolution of the relationship between our environment and the human body. Alongside examples from the fashion industry introducing us to 3D-printed materials and wearable computing is Kinisi by Katia Vega, a tech-infused make-up project. Sensors are placed on specific facial muscles, and LEDs are placed on the skin and hair. Signals collected by the sensors during specific movements (a blinking eye, an open mouth, raised eyebrows) activate a light sequence on the face. Products like Google Glass are already aiming to introduce facial interfaces, but this goes a step further by starting a conversation about the possibilities of skin and muscles as interfaces.

Back on the subject of synesthesia, we learn about colorblind artist and Cyborg Foundation founder Neil Harbisson, who cannot see colors but can hear them via an antenna implanted in his skull. A camera at the tip of the antenna, facing what he sees, translates colors into sounds. Beyond experiencing colors as sounds, he also hears sounds, voices, and music in color.

Another body-enhancement project, the EyeWriter, was created by the Not Impossible Foundation to allow people with forms of paralysis to write and draw using brain waves and eye tracking. The team built a low-cost device, now available to everyone. A version was created specifically for the graffiti artist TEMPT1, now entirely paralyzed except for his eyes, so he can continue his artwork.

Exploring the notion of urban camouflage, artists Gibson/Martelli came up with MAN A, an installation and art app where people can, via their smartphone or tablet, reveal invisible tribal dancers from a scene filled with markers at the intersection of camouflage, barcodes, and QR codes.

Finally, close to the waiting lines, people can interact with the robots of Petting Zoo, built by Minimaforms. These artificially intelligent creatures evolve and react to their environment and to human behavior via camera-tracking systems. The interaction feels awkwardly intimate with what look like the probes from The War of the Worlds. They don't hurt, and they can be playful but also visibly angry. A slightly frustrating low wall is set up to protect visitors, making it harder to get close to the creatures.


Universal Everything’s Together

“Digital Revolution” is a truly interactive, well-curated museum exhibition. I sure hope the links above are inspiring and make you want to visit, should you find yourself in London before mid-September. True, a lot of the pieces shown in the exhibition can be viewed online (isn't that the case for pretty much all works of art now, anyway?), but the Barbican Centre exhibition is unique enough to be worth experiencing in person.

— Swanny Mouton, Interaction Designer

Posted in Content, Culture, Design, Technology

Three Ways to Talk About the Internet of Things

Sometimes, our work feels like a sort of reverse archaeology. Just as archaeologists examine artifacts of the past to understand how people lived, we often encounter vague ideas that speak to the future and have to make our own interpretations to understand what they actually mean for how we live now.

I like to think of the Internet of Things as one of these “artifacts from the future.” Like a group of archaeologists, I imagine the design and tech community turning it over in their collective mind, asking, “What is it? How was it used? What does it mean?”

I recently had the chance to participate in the Internet of Things Lab, a workshop offered in collaboration with WebVisions and hosted by Claro. The event asked teams of designers, developers, strategists, and makers to spend two days concepting and prototyping new Internet of Things services.

Taking part in the workshop prompted me to put on my reverse archaeologist hat and examine the Internet of Things a little more closely. What are the different ways we can interpret it?

Here are three ways that I came up with (probably among many):

Interpretation 1: The Internet of Things means that every object around us will be ‘smart,’ connected, and listening.


This, I think, is the most common, technology-driven interpretation one might encounter, and certainly, it is functionally correct. Like digital entities on the internet, everyday objects will be networked and capture data about the environment and about you. Those objects are then enabled to take actions that presumably create a more efficient, carefree existence for people.

But, as Claro pointed out during our Internet of Things Lab, just because we can do things like have our egg trays report how many eggs are in the fridge doesn't mean those are meaningful or desirable things to do. This interpretation tells us all about how and what things might change with the Internet of Things, but not what ought to change, or why.

And indeed, there’s already a backlash to this somewhat mechanical approach to the Internet of Things, everything from parodies to manifestos. These criticisms raise real concerns about the potential loss of individual agency, privacy, and diversity of experience.

So, if this is an incomplete picture of the Internet of Things, what if we focused less on the “Internet” part and more on the “Things” part?

Interpretation 2: The Internet of Things will enrich and focus our interactions with physical objects and environments.


This interpretation highlights the common roots between the Internet of Things and another long-standing technological concept: ubiquitous computing. In a subtle shift of emphasis, we’re not focused here on how our things can connect to the internet, but on how the internet can serve our things. The introduction of computation into our world provides an opportunity to enrich our interactions with its physicality, rather than distract from it.

Back in 1995, Mark Weiser, considered the father of ubiquitous computing, described an early example of this interpretation in a piece by artist Natalie Jeremijenko simply titled “The Dangling String”:

[T]he ‘Dangling String’ is an 8 foot piece of plastic spaghetti that hangs from a small electric motor mounted in the ceiling. The motor is electrically connected to a nearby Ethernet cable, so that each bit of information that goes past causes a tiny twitch of the motor.

The twitching of this network-enabled piece of string makes concrete something which was previously abstract: the traffic on the local network. In doing so, it alters our perception of both the physical and digital environment.

At Second Story, we’re very interested in how digital artifacts change our relationship to the physical environment. Our lab projects like Lyt and Aurora are, in part, explorations of how network-enabled, responsive environments can enhance our experiences of the physical world.

But there is one more interpretation of the Internet of Things that occurred to me during the workshop.

Interpretation 3: Our culture today exists at a scale and level of complexity that increasingly calls for more thoughtful interconnections and engagements between people and their objects and environments.


This last interpretation brings forward an element of the Internet of Things that is lacking in the previous two: the role of human intentionality and judgment. This is the Internet of People and Things. How can we take a “people first” approach to our technology that empowers us to create useful and meaningful change in the world?

For example, the Good Night Lamp could easily be described by its functionality. It’s a networked set of lamps that are synchronized, so that when the big lamp turns on, the smaller lamps connected to it do the same. However, the product is not interesting because of what it does, but because of what it enables people to do.

The designers of the Good Night Lamp call their product the “first physical social network.” At the core of their product is the idea of enabling people to connect across distances in subtle and delightful ways. You get the sense that it’s about the Internet of People first, and the Internet of Things second.
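Functionally, that synchronization idea is tiny. A toy sketch (our own invention, not the product's actual implementation, which connects the lamps over the internet) makes the point that the value lies in the human meaning, not the mechanism:

```python
# Toy model of the Good Night Lamp idea: small lamps mirror the big lamp.
# This is illustrative only; the real product syncs over the internet.
class SmallLamp:
    def __init__(self):
        self.on = False

class BigLamp:
    def __init__(self):
        self.on = False
        self._followers = []

    def pair(self, small_lamp):
        """Register a small lamp to mirror this one."""
        self._followers.append(small_lamp)

    def switch(self, on):
        self.on = on
        for lamp in self._followers:  # the real product does this remotely
            lamp.on = on
```

Flipping the big lamp on lights every paired small lamp, a one-bit message to someone far away that effectively says “I'm home.”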

This interpretation also raises the question of agency. If the point is to put people first in the Internet of Things, shouldn't everyone feel empowered to participate in its creation and construction? We recently got to play with a set of littleBits in our studio, which provides a nice example of how new platforms and toolkits can foster a world of active participants in the Internet of Things, rather than passive consumers.


So which interpretation is right? Well, all of them. Like any real design problem, there are multiple framings that you can take on the Internet of Things, each one equally valid and leading to different observations, principles, and design solutions. As we shape the future meaning of the Internet of Things, it’s worth surveying a healthy diversity of perspectives.

— Norman Lau, Senior Experience Designer




Posted in Culture, Design, Technology

Lessons from a Master: My Meeting with Massimo Vignelli

As part of Second Story’s contribution to AIGA’s 100 Years of Design, creative director David Waingarten and I flew to New York to film and interview design legends. The goal was to gather reflections and insights from industry giants in honor of the organization’s centennial and to leave a record for posterity.

While in New York, we had the great privilege to meet and speak with icons such as Paul Davis, Milton Glaser, Seymour Chwast, and Massimo Vignelli.


When I knocked on the door to Massimo Vignelli’s apartment, it was with great trepidation. I was going to interview a living legend. And I was no designer.

He was dressed impeccably in black and spoke with gracious passion, eloquence, and witty charm.

Yet it was clear during our conversation that Vignelli didn't simply speak his ideals; he lived them. Design was an extension of his life: his ethics, values, and sense of social responsibility.

And though I cannot pinpoint the exact moment of transition, with a magic all his own, our conversation morphed into a philosophical discussion about life, the nature of design, and how one’s values and ethics ultimately fuel creative work.

I feel deeply humbled to have had the opportunity to meet and speak with Massimo Vignelli. The ideals and insights he shared have left a profound impression on me.


The video we created for 100 Years of Design offers a glimpse of the tenor of our conversation and Vignelli’s elegant mind.

I would like to share a few additional excerpts from our interview. The following quotes are left intact from the original transcripts to preserve their authenticity and that trace of his enchantingly mingled Italian-English.

“When I said that the life of a designer is a life of fight against ugliness – that is, exactly – a commitment that shows responsibility, social responsibility. It’s not the means to do what people want, it’s the means to give people what people need, and that is important…you have to find out the need.”

“Not everybody has been touched by the grace of creativity and vision and the determination to make better things for the world. But the people, they won’t make things better for themselves. Greed is the worst enemy, you know, of good design.”

“If there is an intellectual value, chances are there is an intellectual elegance in it or beauty, you know. […] I don’t think any person that is intellectually sophisticated can do a bad thing, an ugly thing. They will always have a drop of intelligence and that is enough to make it beautiful, because the only thing which is beautiful is intelligence, really, at the end.”


— Vanessa Patchett, A/V Producer

Posted in Content, Culture, Design

An Alternate (Augmented) Reality

We’ve been experimenting with augmented reality in our lab recently and have developed a mobile experience that overlays content depending on the observer’s perspective. Our goal was to make a simple and clean demonstration of how digital information can augment real world objects. In addition, we wanted to come up with a new way of interacting with mobile devices that steps away from touching a screen or using the inertial motion sensor (accelerometer).

We created an interaction mechanism that we are calling “line-of-sight activation.” As the mobile device viewing angle changes, different content is activated. This type of interaction encourages exploration: once a user realizes that a change in perspective signals a change in what content is revealed to them, they tend to play around more, curious about uncovering something new.
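As a rough sketch of how such a mechanism might work (our own illustration with invented content names, not our production code), the device's bearing around the object can be quantized into angular zones, each revealing different content:

```python
import math

# Illustrative sketch only: divide the space around an object into four
# angular zones and reveal different content in each zone. The zone names
# and layout are invented for this example.
ZONE_CONTENT = [
    "front: introduction",
    "right: materials",
    "back: maker's story",
    "left: historical context",
]

def active_content(device_x, device_y, object_x=0.0, object_y=0.0):
    """Return the content zone for the device's bearing around the object."""
    bearing = math.degrees(
        math.atan2(device_y - object_y, device_x - object_x)) % 360
    # Shift by 45 degrees so the 'front' zone spans -45..+45 degrees.
    return ZONE_CONTENT[int(((bearing + 45) % 360) // 90)]
```

Walking around the object then naturally reveals each zone in turn, which is exactly the exploratory behavior described above.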

Line-of-sight activation allows for novel ways to tell stories around an object, be it a sculpture in a museum or a product in a store. It’s exciting to see the evolution of augmented reality and the opportunities for storytelling it provides.

— Dimitrii Pokrovskii, Interactive Developer


Posted in Culture, Technology

Our First FMX Experience


As busy creative professionals, it can be easy to lose sight of the impact that we have on audiences. We often find ourselves caught up in the day-to-day demands of projects and schedules and fall out of touch with the bigger picture of why we do what we do. Nothing corrects this trend better than a few days surrounded by colleagues, peers, and talented students who share our interests and passion for storytelling and entertaining audiences. The right conference brings these elements together and produces more than a collection of presentations: it connects and inspires us in ways that cannot be measured. The collaborative atmosphere encourages us to reach beyond the comfortable, safe boundaries of what we do and strive for the next steps that will define the future of the industry. The FMX 2014 conference in Stuttgart, Germany provided this atmosphere and much more for all who attended and contributed to the show.

Thousands of the industry’s brightest and most talented individuals came together from 48 countries to meet and share their work and ideas at FMX. Short for “Film and Media Exchange,” FMX has historically focused on the film, animation, effects, and gaming industries, but new tracks were introduced this year that brought transmedia and physical interactives to the program. These subjects balanced the show with experiences that reach beyond the screen.

I was humbled and excited to be included with several talented presenters in the “Interaction in the Real World” portion of the FMX program hosted by Doug Cooper of DreamWorks Animation. My presentation, “Responsive Environments: Blurring the Lines Between Physical and Digital Worlds,” introduced the concepts of more open-ended (non-linear) storytelling experiences and the creation of rich environments that can envelop audiences in layers of narrative. The opportunity to contribute to the show and share our work was rewarding and exciting, and exposing the audience to real-world examples of our work and processes resonated well with the conference.

With so many great demonstrations and presentations, it was difficult to pick out personal highlights, but here are a few that stuck with me: Alex Meagher Grau presented his studio’s work and the process behind the creation of 360-degree immersive media (stories that play out through VR headsets and allow viewers to see any portion of the presentation they choose by looking around in real time as the story plays out). Tobias Kinnebrew of Bot & Dolly presented his unique work which combines live performance, projection mapping, and giant industrial robots. Alex McDowell led several engaging discussions about new educational models and the future of animation production, including the new tools and collaborative methods for bringing together increasingly diverse groups of creative professionals spread out around the world to create award-winning films and media productions.

The creative energy present at this year’s FMX show was contagious and provided the opportunity to raise our heads above the fray of day-to-day work to catch a glimpse of a bright and exciting future of the media industry. Students and professionals alike came away with renewed inspiration and passion for our work and its impact on audiences. If that doesn’t define a great conference, I’m not sure what does! Many thanks to the committees and organizers for including us and providing a star-studded and highlight-filled week of workshops, presentations, and media at this year’s FMX show.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Life as a Google Glass Explorer

Second Story has recently gotten its hands on a Google Glass. In order to improve our knowledge about heads-up displays, we decided to let whoever was interested use Glass for a day.

We all expected the full “gadget” potential of Glass: map navigation, the ability to search for specific information, even the opportunity to play target-practice games. This gave plentiful insight into the user experience, the effectiveness of the technology, and its responsiveness. But there was another perspective we discovered: what is Google Glass like as a creative tool?


One of the most natural things to do with Google Glass is to capture pictures and video, creating photographs of whatever the user is seeing at eye level. If you get really keen with Glass, you can do this discreetly just by winking—which has its own uncomfortable implications. Looking through a day's worth of Glass-ing is strangely insightful: when taking a picture, you have essentially zero control over lighting, composition, or even the exact moment at which the picture is taken. With the usual foundations of photography stripped away, what you are left with is a pure moment, an experience captured with minimal intervention.


The trend of point-of-view photography is hot right now, mostly thanks to the accessible price of the GoPro. Google is aware of this potential as well, advertising Glass with footage of acrobats falling into each other's arms, pilots doing barrel rolls, and people roaring down roller coasters. For those of us who live slightly less action-packed lives, can we create thoughtful—or thoughtless—photography without depending on a “Wow” factor? As first-hand Glass photographers, we began finding profundity in the ordinary.


The ability to capture point-of-view photography in a user’s mundane day has the power to change the way we see the world and the way the world sees us. We are not only able to tell a story literally as we see it, but we get to share the parts of our everyday that are notable not for their aesthetic beauty but for their essence of the moment. Whether or not that moment is worth photographing is up to the creator, as we become inspired by experiences we are living as opposed to scenes we want to compose.

Glass also changes the way in which we photograph subjects. Without a physical camera held between you, the photographer is able to both act in and direct the photograph. Instead of gazing into a device, the subject is looking into the eyes of the photographer, adding another level to the story. What is the subject reacting to? What is the relationship between the subject and the Glass-wearer?


We like to imagine how Glass will change the way we consume and tell stories. As Makers and amateur Glass photographers, we see this technology as a way to create with more intimacy and less interruption, blurring the lines between moments we have lived and moments we have observed.

— Kirsten Southwell, Experience Designer

Glass photography by Kirsten Southwell, Norman Lau, and Dimitrii Pokrovskii.

Posted in Culture, Technology

Passing by the Wave

One of the keys to a successful interactive experience is providing a little something for everyone. Typically, members of the audience for an interactive installation will vary in their desire to invest time and attention. An individual may have a keen interest in delving into the nooks and crannies of a subject—say, cubist architecture—or they may just walk by, see someone else interact with the experience, and decide to watch briefly from afar.

When designing and developing an experience, it’s important to consider the “just passing by” audience member. In a museum or cultural institution setting, it is precisely the casual observer, the first-time visitor, the non-expert, who we want to educate, inform, and expose to our subject. Look here! This is why you should care about cubist architecture!

The most important thing is to engage the visitor, even temporarily, in a positive fashion. These itinerant visitors, wandering from exhibit to exhibit, display to display, must be catered to on their own terms: they want something they can appreciate in very little time, with little or no interaction, and from a distance.

Recently, working with the Foundation for the National Archives in Washington, D.C., we created an experience consisting of a 15-foot interactive touch table with proximity sensors, flanked by two mosaic walls with multiple displays.


The experience was designed to showcase documents, multimedia, and history related to the issues of civil and human rights in America. The table allows for up to 12 people to interact with it simultaneously, browsing through a series of timelines, exploring the National Archives’ extensive collection of primary source materials, and sharing their reactions to those records with others on the mosaic walls. You can take a look at a video demo, more images, and a description of the project on its portfolio page on our website.


During the concepting and ideation phase, we wanted to come up with a unifying element to make the table—which actually consisted of six 55” displays with two PQ Labs touch overlays—feel like a single entity, and, most importantly, engage the interest of passersby. In the end, we came up with the idea of a series of lines that would undulate seamlessly across the displays from one end of the table to the other. The lines’ sinuous motion serves as a metaphor for the fluidity of ideas, their contour-like geological representation evokes a sensation of the weight and momentum of history, and, as waves collide with each other, the patterns the lines generate speak to the complexity that can be created from simplicity.

This element we simply called “The Wave.” The wave, we decided, would flow by itself, but users would be able to interact with and excite it. It would also provide a large, beautiful, animated, easily accessible visual element ready to engage users from afar.


In creating the wave, there were two primary challenges. The first was the question of how it would behave; each member of the project team had an idea about how the wave should look and feel. The second challenge was to make sure that the wave would propagate seamlessly across displays. Each display was being run by a separate computer, so somehow all the computers had to be informed about the motion of the wave.

The first problem was solved with a little mathematical graphing and some prototyping in our lab. Initially, I considered physically modelling a wave. It quickly became apparent, however, that this would be computationally expensive, and that the amount of data that would have to be passed from display to display to keep the waves in sync was too high. After all, the only thing our wave really had to do was look like a wave, and, at its most basic, a wave is just an oscillating function, something like a sine wave:
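Stripped to its essentials, that oscillation is just a function of position and time. A minimal sketch (the parameter values here are invented for illustration; the installation's actual code isn't reproduced in this post):

```python
import math

def sine_wave(x, t, amplitude=1.0, wavelength=200.0, speed=50.0):
    """Height of a simple traveling sine wave at position x and time t.
    Default parameter values are hypothetical, for illustration only."""
    k = 2 * math.pi / wavelength   # spatial frequency
    omega = k * speed              # temporal frequency
    return amplitude * math.sin(k * x - omega * t)
```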


But that’s boring. Here’s a tidbit that’s not boring: any periodic function (a function that repeats) can be recreated from a sum of sine and cosine functions, known as a Fourier series. What this meant for us was that any wave shape we wanted to create was achievable by adding a number of sine waves together. Here’s an example of what happened when we combined a few to make a more interesting shape:
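In code, that superposition is just a loop over components. A sketch, with made-up component values for illustration:

```python
import math

def wave_sum(x, t, components):
    """Sum of traveling sine waves; `components` is a list of
    (amplitude, wavelength, speed, phase) tuples."""
    total = 0.0
    for amplitude, wavelength, speed, phase in components:
        k = 2 * math.pi / wavelength  # spatial frequency of this component
        total += amplitude * math.sin(k * (x - speed * t) + phase)
    return total

# Three hypothetical components yield a richer, non-sinusoidal contour:
shape = wave_sum(120.0, 0.0, [(1.0, 300.0, 40.0, 0.0),
                              (0.5, 140.0, 65.0, 1.2),
                              (0.25, 60.0, 90.0, 2.7)])
```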



Finally, we didn’t want the wave to repeat forever, so we multiplied it by a pulse function to get something like this:
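Multiplying by the envelope is a one-liner. A Gaussian-shaped pulse is one plausible choice; the exhibit’s exact pulse function isn’t specified in the post, so this is illustrative:

```python
import math

def pulse(x, center, width):
    """Gaussian-shaped envelope: 1 at the center, fading toward 0 away from it.
    (One plausible pulse shape; the project's actual function isn't specified.)"""
    return math.exp(-((x - center) / width) ** 2)

def localized_wave(x, t, wavelength=200.0, speed=50.0, width=150.0):
    """A sine wave confined to a traveling pulse, so it doesn't repeat forever."""
    k = 2 * math.pi / wavelength
    center = speed * t  # the pulse rides along with the wave
    return math.sin(k * (x - speed * t)) * pulse(x, center, width)
```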


Here’s a little animation of three sine waves and a pulse function with a variety of variables changing randomly. You can see that you do indeed get some organic shapes in there. This is a variation of the algorithm we ended up using to develop the wave:

And here is an early prototype of the wave:

The final complication was keeping the wave synchronized across multiple displays. Because of the way the waves were created, the only data we needed to communicate across displays was the current animation frame, when the wave was created, how long it would live, and a few wave parameters (wavelength, speed, etc.). The hard part was making sure that every display knew which frame it was supposed to be on. You can’t just tell every display computer to render the wave “NOW,” because that message takes time to travel over the network from the computer doing the telling (the server) to the display computer (the client). This delay is called latency. One approach would be to make every computer keep track of time identically, tell each one that at time “x” it should play frame “y,” and let it extrapolate the current frame from what time it thought it was. However, time can drift, and since synchronization needed to be accurate to, at most, a few tens of milliseconds, whatever solution I came up with had to factor in time drift as well.
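The extrapolation idea can be sketched like this, assuming each client already holds an estimate of its offset from the server’s clock (the names and the 60 fps figure are my own, not taken from the project):

```python
import time

FPS = 60.0  # assumed frame rate, for illustration

def current_frame(server_start_time, clock_offset, now=None):
    """Which frame of the wave animation this client should render.

    server_start_time -- server timestamp (seconds) when the wave was spawned
    clock_offset      -- estimated (server clock - local clock), from syncing
    """
    if now is None:
        now = time.time()
    server_now = now + clock_offset           # map local time onto server time
    elapsed = server_now - server_start_time  # how long the wave has been alive
    return max(0, int(elapsed * FPS))
```

Because every client converts its own clock into server time before extrapolating, drift between machines shows up only as error in `clock_offset`, which can be re-estimated continuously.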

To tackle these issues, I created a synchronization tool called “All The Screens.” Client computers register with a server; the server measures network latency and clock drift, and gives each client a way to determine which frame it should be rendering. This solution has been open-sourced and can be found on GitHub. There is also a Google Chrome demo of the technology here.
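All The Screens’ actual protocol lives in its GitHub repository; the classic way to estimate offset and latency from a single request/response exchange, the NTP-style calculation that tools like this are in the spirit of, looks like:

```python
def estimate_offset(t0, t1, t2, t3):
    """NTP-style estimate from one request/response exchange.

    t0 -- client send time       t1 -- server receive time
    t2 -- server send time       t3 -- client receive time
    Returns (clock_offset, round_trip_latency). Assumes the network
    delay is roughly symmetric in both directions.
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    latency = (t3 - t0) - (t2 - t1)  # total round trip minus server processing
    return offset, latency
```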

These technical solutions allowed us to create the wave, whose mesmerizing motion lures visitors in to learn more about the history of civil and human rights in America. And, for that happy-go-lucky stroller who doesn’t have the time or inclination to delve into the content, perhaps the wave serves as a source of soothing visual relaxation, a counterpoint to the hustle and bustle of busy downtown DC.

— Donald Richardson, Senior Interactive Developer

Posted in Design, Technology

Mobile Case Study

Second Story is deepening its physical design and environments practice by offering industrial design services to our clientele. We aim to be innovative, designing physical solutions to elevate digital interactive experiences, but our work sometimes requires practical, engineered solutions to package digital content in simple ways that are meaningful to the overall audience experience.

We’re always excited by design challenges that let us get our hands dirty. When a recent mobile application project presented the need for packaging design, we brought manufacturing processes to the studio. Our team came together in impressive fashion, with staff members from every discipline collaborating to create an efficient assembly-line to achieve an immediate yet stylish solution for our client’s needs.

— Jordan Tull, Designer

Posted in Culture, Design

Finding The Heart of “100 Years of Design”


Last May, we had the opportunity to partner with AIGA, a longtime collaborator and dream client, on a new microsite to commemorate their centennial and celebrate the last 100 years of American design. Our first reaction was excitement: as designers who pride ourselves in our discipline and our history, we were honored to craft stories that include some of the world’s most influential designers and their work. Our second reaction: where do we start?

At Second Story, we often describe our process as “designing from the inside out.” As we thought about AIGA and what made it special, it became clear that the organization sits at the epicenter of the conversation between design and society. This simple diagram was our first attempt to show how this conversation, and the artifacts in AIGA’s archives, could become the lens for the site.


Building a project’s foundation is one of the most challenging and exhilarating points in our creative process. We refer to this discovery as finding the heart—the one truth of the project that will never change. The “heart” is the story that the experience is begging to bring to life. Creative Director David Waingarten has described the task of finding and articulating this conceptual foundation as “being the first to walk into a dark room and look for the light switch.”


To find the heart of the AIGA Centennial project, we fully immersed ourselves in the content. We delved into the vast collection of artifacts in AIGA’s Design Archives, combed through articles from diverse voices in the design community, and looked at other retrospectives, critiques, and blog posts. In our quest for enlightenment, we noticed there was little discussion of design history that was not organized by time, form, medium, or discipline. While these ways of presenting design history are informative and educational, we wanted to create a living resource that captures the ever-evolving conversation between design and society and invites everyone deeper into it.

As we were having this discussion, our collaborators at AIGA pointed us to “No More Heroes,” a poignant article from a 1992 issue of Eye Magazine that really spoke to us. This quote from Bridget Wilkins was especially inspirational to our conceptual development:


With AIGA’s guidance and after countless thought-model sketches and “what if!” epiphanies, we landed on a framework that gives diverse audiences a new way to look at and evaluate great design. We organized the stories by design intent, allowing the purpose of each artifact to be revealed to the visitor. Intention is what defines design, and as Milton Glaser so eloquently states: “The best definition I have ever heard about design and the simplest one is moving from an existing condition to a preferred one, and that is a kind of symbolic way of saying you have a plan because the existing condition does not suffice.”

We had to consider how to make this story framework exciting and accessible for guests with varied knowledge of design. It couldn’t overwhelm the general public, but it also had to meet, if not exceed, the expectations of design enthusiasts and practitioners. To strike this balance, we created an experience with two layers. At the surface layer, visitors can view carefully curated artifacts, quotes, and videos, and listen to audio clips. Those who are interested can go a level deeper to see additional artifacts, designer profiles, and moments from AIGA’s history. With 11 videos, 26 audio clips, 120 design artifacts, 17 designer profiles, 15 AIGA historical moments, and 19 quotes, there’s a wealth of content for visitors of all backgrounds to explore.


AIGA also wanted to extend the conversation to ensure that the microsite became a meaningful record of this time in design’s history. To foster discussion and participation, we needed an engaging prompt. How could we ask a stimulating and meaningful question without leaving guests lost, or spending 20 minutes trying to craft a response? The ideation we devoted to those six words was extremely thorough: we compared reactions to words like “think” vs. “feel” and tested whether users were more comfortable contributing in the first person (“I am connected by design that…”) or from a general perspective on design (“Design that connects is…”). We settled on a phrase that could be applied across all five intents and that allowed guests to choose an intention and add their own thoughts and images.

The results have been incredible to watch. Each day the conversation grows, with over 700 user contributions and counting. We are thrilled with the final site and hope the experience engages a broad audience in a dynamic conversation about the role of design in our society and everyday lives. We encourage you to explore these narratives and add your voice to celebrate the evolution and impact of American design over the last 100 years.

Our studio is forever grateful to AIGA for giving us the opportunity to be part of such an incredible moment in design history.


— Laura Allcorn, Senior Content Strategist & Kirsten Southwell, Experience Designer

Posted in Content, Culture, Design

You Can’t Go Wrong With 8,294,400 Interactive Pixels

As Ultra High Definition (UHD) displays become more readily available, we will begin to see the technology adopted in many ways. We are most interested in the new standard because it will have a direct impact on the way we design and display interactive content. While we have been developing applications that run at resolutions similar to the 3840×2160 pixel resolution offered by UHD displays for some time, we have been forced to show them on multiple HD displays, which introduce visible seams when tiled together. With the advent of the UHD display, we can now combine both scale and fidelity in the presentation of our media on a single seamless display.

With interactive media, the scale of a display can act as a beacon, enticing potential users to come closer and explore content. A large-scale display also accommodates more users at a time, inviting collaboration, especially on a horizontal (table) surface where people are brought face-to-face with those across from them as they interact with the media.

But scale isn’t everything. By its nature, interactive content has to remain legible when viewed at an arm’s length as users touch and interact with the surface of the display. At this close range, most large displays don’t have the fidelity to carry type and subtle graphics. It is here that the UHD resolution succeeds where other displays fall short. At about 50 pixels per inch (ppi), the Planar UR8450 displays offer precise pixels and legible content even when viewed up close.
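That ~50 ppi figure follows directly from the UHD pixel grid and the panel’s diagonal; assuming the UR8450’s 84-inch diagonal (per Planar’s published spec), the arithmetic works out like this:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density from a display's resolution and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    return diagonal_px / diagonal_in

# UHD grid on an assumed 84-inch diagonal: roughly 52 ppi, i.e. "about 50"
ppi = pixels_per_inch(3840, 2160, 84)
```

The same grid is where the post’s title comes from: 3840 × 2160 = 8,294,400 pixels.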

With this display, we can already begin to imagine a future where the notion of a pixel is no longer considered. Today, we can see this in relatively small Retina Displays where ppi counts surpass 300 and individual pixels seem to disappear. We look forward to a time when this type of fidelity will become ubiquitous on large and small displays. Increased legibility will allow content to be displayed at any scale and orientation, opening up new modes of interaction. Displays will become a window through which emotive, high resolution content will be displayed, bringing stories to life in new ways.

— Matt Arnold, Lead Integration Engineer

Posted in Technology