Three Ways to Talk About the Internet of Things

Sometimes, our work feels like a sort of reverse archaeology. Just as archaeologists examine artifacts of the past to understand how people lived, we often encounter vague ideas that speak to the future and have to make our own interpretations to understand what they actually mean for how we live now.

I like to think of the Internet of Things as one of these “artifacts from the future.” Like a group of archaeologists, I imagine the design and tech community turning it over in their collective mind, asking, “What is it? How was it used? What does it mean?”

I recently had the chance to participate in the Internet of Things Lab, a workshop being offered in collaboration with WebVisions and hosted by Claro. The event asked teams of designers, developers, strategists, and makers to spend two days concepting and prototyping new Internet of Things services.

Taking part in the workshop prompted me to put on my reverse archaeologist hat and examine the Internet of Things a little more closely. What are the different ways we can interpret it?

Here are three ways that I came up with (probably among many):

Interpretation 1: The Internet of Things means that every object around us will be ‘smart,’ connected, and listening.


This, I think, is the most common, technology-driven interpretation one might encounter, and certainly, it is functionally correct. Like digital entities on the internet, everyday objects will be networked and capture data about the environment and about you. Those objects are then enabled to take actions that presumably create a more efficient, carefree existence for people.

But, as Claro pointed out during our Internet of Things Lab, just because we can do things like have our egg trays notify us of how many eggs are in the fridge, that doesn’t immediately mean those are meaningful or desirable things to do. This interpretation tells us all about how and what things might change with the Internet of Things, but not what ought to change and why.

And indeed, there’s already a backlash to this somewhat mechanical approach to the Internet of Things, everything from parodies to manifestos. These criticisms raise real concerns about the potential loss of individual agency, privacy, and diversity of experience.

So, if this is an incomplete picture of the Internet of Things, what if we focused less on the “Internet” part and more on the “Things” part?

Interpretation 2: The Internet of Things will enrich and focus our interactions with physical objects and environments.


This interpretation highlights the common roots between the Internet of Things and another long-standing technological concept: ubiquitous computing. In a subtle shift of emphasis, we’re not focused here on how our things can connect to the internet, but on how the internet can serve our things. The introduction of computation into our world provides an opportunity to enrich our interactions with its physicality, rather than distract from it.

Back in 1995, Mark Weiser, considered the father of ubiquitous computing, described an early example of this interpretation in a piece by artist Natalie Jeremijenko simply titled “The Dangling String”:

[T]he ‘Dangling String’ is an 8 foot piece of plastic spaghetti that hangs from a small electric motor mounted in the ceiling. The motor is electrically connected to a nearby Ethernet cable, so that each bit of information that goes past causes a tiny twitch of the motor.

The twitching of this network-enabled piece of string makes concrete something which was previously abstract: the traffic on the local network. In doing so, it alters our perception of both the physical and digital environment.

At Second Story, we’re very interested in how digital artifacts change our relationship to the physical environment. Our lab projects like Lyt and Aurora are, in part, explorations of how network-enabled, responsive environments can enhance our experiences of the physical world.

But there is one more interpretation of the Internet of Things that occurred to me during the workshop.

Interpretation 3: Our culture today exists at a scale and level of complexity that increasingly calls for more thoughtful interconnections and engagements between people and their objects and environments.


This last interpretation brings forward an element of the Internet of Things that is lacking in the previous two: the role of human intentionality and judgment. This is the Internet of People and Things. How can we take a “people first” approach to our technology that empowers us to create useful and meaningful change in the world?

For example, the Good Night Lamp could easily be described by its functionality. It’s a networked set of lamps that are synchronized, so that when the big lamp turns on, the smaller lamps connected to it do the same. However, the product is not interesting because of what it does, but because of what it enables people to do.

The designers of the Good Night Lamp call their product the “first physical social network.” At the core of their product is the idea of enabling people to connect across distances in subtle and delightful ways. You get the sense that it’s about the Internet of People first, and the Internet of Things second.

This interpretation also raises the question of agency. If the point is to put people first in the Internet of Things, shouldn’t everyone feel empowered to participate in its creation and construction? We recently got to play with a set of littleBits in our studio, which provides a nice example of how new platforms and toolkits can foster a world of active participants in the Internet of Things, rather than passive consumers.


So which interpretation is right? Well, all of them. Like any real design problem, there are multiple framings that you can take on the Internet of Things, each one equally valid and leading to different observations, principles, and design solutions. As we shape the future meaning of the Internet of Things, it’s worth surveying a healthy diversity of perspectives.

— Norman Lau, Senior Experience Designer




Posted in Culture, Design, Technology

Lessons from a Master: My Meeting with Massimo Vignelli

As part of Second Story’s contribution to AIGA’s 100 Years of Design, creative director David Waingarten and I flew to New York to film and interview design legends. The goal was to gather reflections and insights from industry giants in honor of the organization’s centennial and to leave a record for posterity.

While in New York, we had the great privilege to meet and speak with icons such as Paul Davis, Milton Glaser, Seymour Chwast, and Massimo Vignelli.


When I knocked on the door to Massimo Vignelli’s apartment, it was with great trepidation. I was going to interview a living legend. And I was no designer.

He was dressed impeccably in black and spoke with gracious passion, eloquence, and witty charm.

Yet it was clear during our conversation that Vignelli didn’t simply speak his ideals; he lived them. Design was an extension of his life: his ethics, values, and sense of social responsibility.

And though I cannot pinpoint the exact moment of transition, with a magic all his own, our conversation morphed into a philosophical discussion about life, the nature of design, and how one’s values and ethics ultimately fuel creative work.

I feel deeply humbled to have had the opportunity to meet and speak with Massimo Vignelli. The ideals and insights he shared with me have left a profound impression on me.


The video we created for 100 Years of Design offers a glimpse of the tenor of our conversation and Vignelli’s elegant mind.

I would like to share a few additional excerpts from our interview. The following quotes are left intact from the original transcripts to preserve their authenticity and that trace of his enchantingly mingled Italian-English.

“When I said that the life of a designer is a life of fight against ugliness – that is, exactly – a commitment that shows responsibility, social responsibility. It’s not the means to do what people want, it’s the means to give people what people need, and that is important…you have to find out the need.”

“Not everybody has been touched by the grace of creativity and vision and the determination to make better things for the world. But the people, they won’t make things better for themselves. Greed is the worst enemy, you know, of good design.”

“If there is an intellectual value, chances are there is an intellectual elegance in it or beauty, you know. […] I don’t think any person that is intellectually sophisticated can do a bad thing, an ugly thing. They will always have a drop of intelligence and that is enough to make it beautiful, because the only thing which is beautiful is intelligence, really, at the end.”


— Vanessa Patchett, A/V Producer

Posted in Content, Culture, Design

An Alternate (Augmented) Reality

We’ve been experimenting with augmented reality in our lab recently and have developed a mobile experience that overlays content depending on the observer’s perspective. Our goal was to make a simple and clean demonstration of how digital information can augment real world objects. In addition, we wanted to come up with a new way of interacting with mobile devices that steps away from touching a screen or using the inertial motion sensor (accelerometer).

We created an interaction mechanism that we are calling “line-of-sight activation.” As the mobile device viewing angle changes, different content is activated. This type of interaction encourages exploration: once a user realizes that a change in perspective signals a change in what content is revealed to them, they tend to play around more, curious about uncovering something new.
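As a thought experiment, line-of-sight activation can be modeled as bucketing the viewing angle around an object into discrete content zones. This is only an illustrative sketch; the zone boundaries, content names, and `activate_content` function below are hypothetical, not taken from our actual implementation:

```python
# Hypothetical sketch of "line-of-sight activation": as the device's
# viewing angle around an object changes, different content is revealed.
# Zone boundaries and content names here are purely illustrative.

def activate_content(viewing_angle_deg, zones):
    """Return the content for the zone containing the viewing angle.

    zones: list of ((start_deg, end_deg), content) tuples covering 0-360.
    """
    angle = viewing_angle_deg % 360  # normalize into [0, 360)
    for (start, end), content in zones:
        if start <= angle < end:
            return content
    return None

zones = [
    ((0, 120), "front story"),
    ((120, 240), "side story"),
    ((240, 360), "back story"),
]

print(activate_content(45, zones))   # front story
print(activate_content(200, zones))  # side story
```

A real implementation would derive the viewing angle from the device camera or pose tracking rather than receive it directly, but the zone lookup would work the same way.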

Line-of-sight activation allows for novel ways to tell stories around an object, be it a sculpture in a museum or a product in a store. It’s exciting to see the evolution of augmented reality and the opportunities for storytelling it provides.

— Dimitrii Pokrovskii, Interactive Developer


Posted in Culture, Technology

Our First FMX Experience


As busy creative professionals, it can be easy to lose sight of the impact that we have on audiences. We often find ourselves caught up in the day-to-day demands of projects and schedules and fall out of touch with the bigger picture of why we do what we do. Nothing can correct this trend better than a few days surrounded by colleagues, peers, and talented students who share our interests and passion for storytelling and entertaining audiences. The right conference brings these elements together and produces more than a collection of presentations: it connects and inspires us in ways that cannot be measured. The collaborative atmosphere encourages us to reach beyond the comfortable and safe boundaries of what we do and strive for the next steps that will define the future of the industry. The FMX 2014 conference in Stuttgart, Germany provided this atmosphere and much more for all of those who attended and contributed to the show.

Thousands of the industry’s brightest and most talented individuals came together from 48 countries to meet and share their work and ideas at FMX. Short for “Film and Media Exchange,” FMX has historically focused on the film, animation, effects, and gaming industries, but new tracks were introduced this year that brought transmedia and physical interactives to the program. These subjects balanced the show with experiences that reach beyond the screen.

I was humbled and excited to be included with several talented presenters in the “Interaction in the Real World” portion of the FMX program hosted by Doug Cooper of DreamWorks Animation. My presentation, “Responsive Environments: Blurring the Lines Between Physical and Digital Worlds,” introduced the concepts of more open-ended (non-linear) storytelling experiences and the creation of rich environments that can envelop audiences in layers of narrative. The opportunity to contribute to the show and share our work was rewarding and exciting, and exposing the audience to real-world examples of our work and processes resonated well with the conference.

With so many great demonstrations and presentations, it was difficult to pick out personal highlights, but here are a few that stuck with me: Alex Meagher Grau presented his studio’s work and the process behind the creation of 360-degree immersive media (stories that play out through VR headsets and allow viewers to see any portion of the presentation they choose by looking around in real time as the story plays out). Tobias Kinnebrew of Bot & Dolly presented his unique work which combines live performance, projection mapping, and giant industrial robots. Alex McDowell led several engaging discussions about new educational models and the future of animation production, including the new tools and collaborative methods for bringing together increasingly diverse groups of creative professionals spread out around the world to create award-winning films and media productions.

The creative energy present at this year’s FMX show was contagious and provided the opportunity to raise our heads above the fray of day-to-day work to catch a glimpse of a bright and exciting future of the media industry. Students and professionals alike came away with renewed inspiration and passion for our work and its impact on audiences. If that doesn’t define a great conference, I’m not sure what does! Many thanks to the committees and organizers for including us and providing a star-studded and highlight-filled week of workshops, presentations, and media at this year’s FMX show.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Life as a Google Glass Explorer

Second Story has recently gotten its hands on a Google Glass. In order to improve our knowledge about heads-up displays, we decided to let whoever was interested use Glass for a day.

We all expected the full “gadget” potential of Glass: map navigation, the ability to search for specific information, even the opportunity to play target-practice games. This gave plentiful insight into the user experience, the effectiveness of the technology, and its responsiveness. But there was another perspective we discovered: what is Google Glass like as a creative tool?


One of the most natural things to do with Google Glass is to capture pictures and video, creating photographs of what the user is seeing at eye level. If you get really keen with Glass, you can do this discreetly just by winking your eye—which has its own uncomfortable implications. Looking through a day’s worth of Glass-ing is strangely insightful; when taking a picture, you have essentially zero control over lighting, composition, or even the exact moment at which the picture is taken. With all of the foundations of photography out of reach, what you are left with is a pure moment, an experience captured with minimal intervention.


The trend of point-of-view photography is hot right now, mostly due to the accessible prices of the GoPro. Glass is aware of this potential as well, advertising with footage of acrobats falling into each other’s arms, pilots doing barrel rolls, and people roaring down roller coasters. For those of us who live slightly less action-packed lives, are we able to create thoughtful—or thoughtless—photography without depending on a “Wow” factor? As first-hand Glass photographers, we began finding profundity in the ordinary.


The ability to capture point-of-view photography in a user’s mundane day has the power to change the way we see the world and the way the world sees us. We are not only able to tell a story literally as we see it, but we get to share the parts of our everyday that are notable not for their aesthetic beauty but for their essence of the moment. Whether or not that moment is worth photographing is up to the creator, as we become inspired by experiences we are living as opposed to scenes we want to compose.

Glass also changes the way in which we photograph subjects. Without having a physical camera held between you, the photographer is able to both act in and direct the photograph. Instead of the subject gazing into a device, they are looking into the eyes of the photographer, adding an additional level to the story. What is the subject reacting to? What is the relationship between the subject and the Glass-wearer?


We like to imagine how Glass will change the way we consume and tell stories. As Makers and amateur Glass photographers, we see this technology as a way to create with more intimacy and less interruption, blurring the lines between moments we have lived and moments we have observed.

— Kirsten Southwell, Experience Designer

Glass photography by Kirsten Southwell, Norman Lau, and Dimitrii Pokrovskii.

Posted in Culture, Technology

Passing by the Wave

One of the keys to a successful interactive experience is providing a little something for everyone. Typically, members of the audience for an interactive installation will vary in their desire to invest time and attention. An individual may have a keen interest in delving into the nooks and crannies of a subject—say cubist architecture—or they may just walk by, see someone else interact with the experience, and decide to watch them briefly from afar.

When designing and developing an experience, it’s important to consider the “just passing by” audience member. In a museum or cultural institution setting, it is precisely the casual observer, the first-time visitor, the non-expert, who we want to educate, inform, and expose to our subject. Look here! This is why you should care about cubist architecture!

The most important thing is to engage the visitor, even temporarily, in a positive fashion. These itinerant visitors, wandering from exhibit to exhibit, display to display, must be catered to on their own terms: they want something they can appreciate in very little time, with little or no interaction, and from a distance.

Recently, working with the Foundation for the National Archives in Washington D.C., we created an experience consisting of a 15ft interactive touch table with proximity sensors, flanked by two mosaic walls with multiple displays.


The experience was designed to showcase documents, multimedia, and history related to the issues of civil and human rights in America. The table allows for up to 12 people to interact with it simultaneously, browsing through a series of timelines, exploring the National Archives’ extensive collection of primary source materials, and sharing their reactions to those records with others on the mosaic walls. You can take a look at a video demo, more images, and a description of the project on its portfolio page on our website.


During the concepting and ideation phase, we wanted to come up with a unifying element to make the table—which actually consisted of six 55” displays with two PQ Labs touch overlays—feel like a single entity, and, most importantly, engage the interest of passersby. In the end, we came up with the idea of a series of lines that would undulate seamlessly across the displays from one end of the table to the other. The lines’ sinuous motion serves as a metaphor for the fluidity of ideas, their contour-like geological representation evokes a sensation of the weight and momentum of history, and, as waves collide with each other, the patterns the lines generate speak to the complexity that can be created from simplicity.

This element we simply called “The Wave.” The wave, we decided, would flow by itself, but users would be able to interact with and excite it. It would also provide a large, beautiful, animated, easily accessible visual element ready to engage users from afar.


In creating the wave, we faced two primary challenges. The first was the question of how it would behave; each member of the project team had an idea about how the wave should look and feel. The second challenge was to make sure that the wave would propagate seamlessly across displays. Each display was being run by a separate computer, so somehow all the computers had to be informed about the motion of the wave.

The first problem was solved by a little mathematical graphing and some prototyping in our lab. Initially, I considered physically modelling a wave. It quickly became apparent, however, that this would be computationally expensive, and that the amount of data that had to be passed from display to display to keep the waves in sync would be too high. After all, the only thing our wave really had to do was look like a wave, and, at its most basic, a wave is just an oscillating function, something like a sine wave:


But that’s boring. Here’s a tidbit that’s not boring: a periodic function (one that repeats) can be recreated from a sum of sine and cosine functions, known as a Fourier series. What this meant for us was that any wave shape we wanted to create was achievable using a number of sine waves added together. Here’s an example of what happened when we combined a few to make a more interesting shape:
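The summation is easy to sketch in a few lines of Python; the amplitudes, frequencies, and phases below are arbitrary values chosen only to illustrate the idea, not the installation’s actual parameters:

```python
import math

# A truncated Fourier-style sum: adding a few sine waves with different
# amplitudes, frequencies, and phases yields a richer periodic shape
# than any single sine wave.
def wave(x, components):
    """components: list of (amplitude, frequency, phase) tuples."""
    return sum(a * math.sin(f * x + p) for a, f, p in components)

# Arbitrary example values.
components = [(1.0, 1.0, 0.0), (0.5, 2.0, 0.3), (0.25, 5.0, 1.2)]
samples = [wave(x * 0.1, components) for x in range(100)]
```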



Finally, we didn’t want the wave to repeat forever, so we multiplied it by a pulse function to get something like this:


Here’s a little animation of three sine waves and a pulse function with a variety of variables changing randomly. You can see that you do indeed get some organic shapes in there. This is a variation of the algorithm we ended up using to develop the wave:
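A rough sketch of that variation, assuming a Gaussian-style bump as the pulse function and randomly drawn wave parameters (our production envelope and parameter ranges differed):

```python
import math
import random

def wave_sum(x, components):
    """Sum of sine waves; components are (amplitude, frequency, phase)."""
    return sum(a * math.sin(f * x + p) for a, f, p in components)

def pulse(x, center, width):
    """Gaussian-style pulse: 1 at the center, falling off toward 0."""
    return math.exp(-((x - center) / width) ** 2)

def bounded_wave(x, components, center, width):
    """A wave that rises and dies out instead of repeating forever."""
    return wave_sum(x, components) * pulse(x, center, width)

# Randomize the parameters, as in the animation above.
random.seed(42)
components = [
    (random.uniform(0.2, 1.0),      # amplitude
     random.uniform(0.5, 3.0),      # frequency
     random.uniform(0.0, math.pi))  # phase
    for _ in range(3)
]
samples = [bounded_wave(x * 0.1, components, center=5.0, width=2.0)
           for x in range(100)]
```

Because a wave is fully described by this handful of parameters, it is cheap to compute and cheap to share between machines.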

And here is an early prototype of the wave:

The final complication was how to make sure the wave was synchronized over multiple displays. Because of the way the waves were created, the only data we would need to communicate across displays was the current animation frame, when the wave was created, how long it would live, and a few wave parameters (wavelength, speed, etc.). The hard part was figuring out how to make sure that every display knew what frame it was supposed to be on. You can’t just tell every display computer to render the wave “NOW” because that message takes time to get from the computer doing the telling (the server) to the display computer (the client) due to the network. This is called latency. One way to go about it would be to make sure that every computer kept track of time identically; you could then tell each computer that at “x” time it should play frame “y,” and it could extrapolate what frame it should be rendering based on what time it thought it was. However, time can drift, and since the synchronization needed to be accurate to within a few tens of milliseconds, whatever solution I came up with had to factor in time-drift.

In order to tackle these issues, I created a synchronization tool called “All The Screens.” Client computers registered with a server, and the server calculated network latency (delays) and time drift and provided those clients with a way to determine what frame they should be rendering.  This solution has been open-sourced and can be found on GitHub. There is also a Google Chrome demo of the technology here.
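As a simplified illustration of the approach (a sketch of the idea, not the actual All The Screens code), a client can estimate its offset from the server’s clock using one request/response round trip, then derive the frame it should currently be rendering:

```python
def estimate_offset(send_time, server_time, recv_time):
    """NTP-style clock offset estimate from one round trip.

    send_time/recv_time are the client's clock when the request left and
    when the response arrived; server_time is the server's clock when it
    replied. Assumes network latency is roughly symmetric each way.
    """
    round_trip = recv_time - send_time
    return (server_time + round_trip / 2) - recv_time

def current_frame(local_time, offset, anim_start_server_time, fps):
    """Extrapolate the frame to render from the corrected local clock."""
    server_now = local_time + offset
    elapsed = server_now - anim_start_server_time
    return int(elapsed * fps)

# Example with made-up timestamps (in seconds):
offset = estimate_offset(send_time=10.0, server_time=100.0, recv_time=10.5)
frame = current_frame(local_time=20.0, offset=offset,
                      anim_start_server_time=100.0, fps=4)
# offset == 89.75 (the client's clock runs 89.75s behind the server's),
# so 9.75s of animation have elapsed and frame == 39.
```

In practice the offset would be re-estimated periodically, averaging several round trips, so that clock drift stays within the tens-of-milliseconds budget.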

These technical solutions allowed us to create the wave, whose mesmerizing motion lures visitors in to learn more about the history of civil and human rights in America. And, for that happy-go-lucky stroller who doesn’t have the time or inclination to delve into the content, perhaps the wave serves as a source of soothing visual relaxation, a counterpoint to the hustle and bustle of busy downtown DC.

— Donald Richardson, Senior Interactive Developer

Posted in Design, Technology

Mobile Case Study

Second Story is deepening its physical design and environments practice by offering industrial design services to our clientele. We aim to be innovative, designing physical solutions to elevate digital interactive experiences, but our work sometimes requires practical, engineered solutions to package digital content in simple ways that are meaningful to the overall audience experience.

We’re always excited by design challenges that let us get our hands dirty. When a recent mobile application project presented the need for packaging design, we brought manufacturing processes to the studio. Our team came together in impressive fashion, with staff members from every discipline collaborating to create an efficient assembly-line to achieve an immediate yet stylish solution for our client’s needs.

— Jordan Tull, Designer

Posted in Culture, Design

Finding The Heart of “100 Years of Design”


Last May, we had the opportunity to partner with AIGA, a longtime collaborator and dream client, on a new microsite to commemorate their centennial and celebrate the last 100 years of American design. Our first reaction was excitement: as designers who pride ourselves in our discipline and our history, we were honored to craft stories that include some of the world’s most influential designers and their work. Our second reaction: where do we start?

At Second Story, we often describe our process as “designing from the inside out.” As we thought about AIGA and what made it special, it became clear that the organization sits at the epicenter of the conversation between design and society. This simple diagram was our first attempt to show how this conversation and the artifacts in AIGA’s archives could become the lens for the site.


Building a project’s foundation is one of the most challenging and exhilarating points in our creative process. We refer to this discovery as finding the heart—the one truth of the project that will never change. The “heart” is the story that the experience is begging to bring to life. Creative Director David Waingarten has described the task of finding and articulating this conceptual foundation as “being the first to walk into a dark room and look for the light switch.”


To find the heart of the AIGA Centennial project, we fully immersed ourselves in the content. We delved into the vast collection of artifacts in AIGA’s Design Archives, combed through articles from diverse voices in the design community, and looked at other retrospectives, critiques, and blog posts. In our quest for enlightenment, we noticed there was little discussion of design history that was not organized by time, form, medium, or discipline. While these ways of presenting design history are informative and educational, we wanted to create a living resource that captures the ever-evolving conversation between design and society and invites everyone deeper into it.

As we were having this discussion, our collaborators at AIGA pointed us to “No More Heroes,” a poignant article from a 1992 issue of Eye magazine that really spoke to us. This quote from Bridget Wilkins was especially inspirational to our conceptual development:


With AIGA’s guidance and after countless thought-model sketches and “what if!” epiphanies, we landed on a framework that gives diverse audiences a new way to look at and evaluate great design. We organized the stories by design intent, allowing the purpose of the artifacts to be revealed to the visitor. Intention is what defines design, and as Milton Glaser so eloquently states: “The best definition I have ever heard about design and the simplest one is moving from an existing condition to a preferred one, and that is a kind of symbolic way of saying you have a plan because the existing condition does not suffice.”

We had to consider how to make this story framework exciting and accessible for guests with varied knowledge of design. It couldn’t overwhelm the general public, but it also had to meet, if not exceed, the expectations of design enthusiasts and practitioners. To strike this balance, we created an experience with two layers. At the surface layer, visitors can view carefully curated artifacts, quotes, and videos, and listen to audio clips. Those who are interested can go a level deeper to see additional artifacts, designer profiles, and moments from AIGA’s history. With 11 videos, 26 audio clips, 120 design artifacts, 17 designer profiles, 15 AIGA historical moments, and 19 quotes, there’s a wealth of content for visitors of all backgrounds to explore.


AIGA also wanted to extend the conversation to ensure that the microsite became a meaningful record of this time in design’s history. To foster discussion and participation, we needed an engaging prompt. How could we ask a stimulating and meaningful question without leaving guests lost or spending 20 minutes trying to create a response? The ideation we spent on those six words was extremely thorough: we weighed the reactions to words like “think” vs. “feel” and tested whether users were more comfortable contributing in the first person (“I am connected by design that…”) or from a general perspective on design (“Design that connects is…”). We settled on a phrase that could be applied across all five intents and that allowed guests to choose an intention and add their own thoughts and images.

The results have been incredible to watch. Each day the conversation grows, with over 700 user contributions and counting. We are thrilled with the final site and hope the experience engages a broad audience in a dynamic conversation about the role of design in our society and everyday lives. We encourage you to explore these narratives and add your voice to celebrate the evolution and impact of American design over the last 100 years.

Our studio is forever grateful to AIGA for giving us the opportunity to be part of such an incredible moment in design history.


— Laura Allcorn, Senior Content Strategist & Kirsten Southwell, Experience Designer

Posted in Content, Culture, Design

You Can’t Go Wrong With 8,294,400 Interactive Pixels

As Ultra High Definition (UHD) displays become more readily available, we will begin to see the technology adopted in many ways. We are most interested in the new standard because it will have a direct impact on the way we design and display interactive content. While we have been developing applications that run at resolutions similar to the 3840×2160 pixel resolution offered by UHD displays for some time, we have been forced to display them on multiple HD displays which introduce visible seams when tiled together. With the advent of the UHD display, we can now combine both scale and fidelity in the presentation of our media with a single seamless display.

With interactive media, the scale of a display can act as a beacon, enticing potential users to come closer and explore content. A large-scale display also accommodates more users at a time, inviting collaboration, especially on a horizontal (table) surface where people are brought face-to-face with those across from them as they interact with the media.

But scale isn’t everything. By its nature, interactive content has to remain legible when viewed at an arm’s length as users touch and interact with the surface of the display. At this close range, most large displays don’t have the fidelity to carry type and subtle graphics. It is here that the UHD resolution succeeds where other displays fall short. At about 50 pixels per inch (ppi), the Planar UR8450 displays offer precise pixels and legible content even when viewed up close.
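The arithmetic behind those figures is easy to check; note that the 84-inch diagonal below is an assumption based on the UR8450 model name:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch of a display, from its resolution and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    return diagonal_px / diagonal_in

print(3840 * 2160)                 # 8294400 -- the pixel count in the title
print(round(ppi(3840, 2160, 84)))  # 52 -- roughly the ~50 ppi cited above
```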

With this display, we can already begin to imagine a future where the notion of a pixel is no longer considered. Today, we can see this in relatively small Retina Displays where ppi counts surpass 300 and individual pixels seem to disappear. We look forward to a time when this type of fidelity will become ubiquitous on large and small displays. Increased legibility will allow content to be displayed at any scale and orientation, opening up new modes of interaction. Displays will become a window through which emotive, high resolution content will be displayed, bringing stories to life in new ways.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Mobile Depth-Sensing

From the Protecting the Secret interactive at the Vault of the Secret Formula exhibit to the Connections Wall at the Emerging Issues Commons, Second Story regularly uses Kinect and other similar technologies to create dynamic content based on sensing where users are in physical space. In the past, the mobile use of these sensors has been restricted by their need to be tethered to powerful “desktop” CPUs; we’ve had to use USB signal extenders and dedicated wiring to mitigate these constraints.

But developments in computing are giving us enhanced flexibility. The latest ARM processors are small, portable, and powerful, and we’ve been experimenting with using them to process depth data right from the sensor’s location.

The processors can be powered over CAT-5 Ethernet or even battery (depending on the use case) which makes deployment easy, and they automatically start working as soon as they’re powered on, eliminating the need for an external display. Using available bandwidth, they can send data over WiFi or regular CAT-5 to more powerful CPUs that do data interpretation.

The sky’s the limit with these little guys. We can’t wait to explore the possibilities.

— Sorob Louie, Interactive Developer

Posted in Technology