Our First FMX Experience

As busy creative professionals, it can be easy to lose sight of the impact that we have on audiences. We often find ourselves caught up in the day-to-day demands of projects and schedules and fall out of touch with the bigger picture of why we do what we do. Nothing can correct this trend better than a few days surrounded by colleagues, peers, and talented students who share our interests and passion for storytelling and entertaining audiences. The right conference brings these elements together and produces more than a collection of presentations: it connects and inspires us in ways that cannot be measured. The collaborative atmosphere encourages us to reach beyond the comfortable and safe boundaries of what we do and strive for the next steps that will define the future of the industry. The FMX 2014 conference in Stuttgart, Germany, provided this atmosphere and much more for all who attended and contributed to the show.

Thousands of the industry’s brightest and most talented individuals came together from 48 countries to meet and share their work and ideas at FMX. Short for “Film and Media Exchange,” FMX has historically focused on the film, animation, effects, and gaming industries, but new tracks were introduced this year that brought transmedia and physical interactives to the program. These subjects balanced the show with experiences that reach beyond the screen.

I was humbled and excited to be included with several talented presenters in the “Interaction in the Real World” portion of the FMX program hosted by Doug Cooper of DreamWorks Animation. My presentation, “Responsive Environments: Blurring the Lines Between Physical and Digital Worlds,” introduced the concepts of more open-ended (non-linear) storytelling experiences and the creation of rich environments that can envelop audiences in layers of narrative. The opportunity to contribute to the show and share our work was rewarding and exciting, and exposing the audience to real-world examples of our work and processes resonated well with conference attendees.

With so many great demonstrations and presentations, it was difficult to pick out personal highlights, but here are a few that stuck with me: Alex Meagher Grau presented his studio’s work and the process behind the creation of 360-degree immersive media (stories that play out through VR headsets and allow viewers to see any portion of the presentation they choose by looking around in real time as the story plays out). Tobias Kinnebrew of Bot & Dolly presented his unique work which combines live performance, projection mapping, and giant industrial robots. Alex McDowell led several engaging discussions about new educational models and the future of animation production, including the new tools and collaborative methods for bringing together increasingly diverse groups of creative professionals spread out around the world to create award-winning films and media productions.

The creative energy present at this year’s FMX show was contagious and provided the opportunity to raise our heads above the fray of day-to-day work to catch a glimpse of a bright and exciting future of the media industry. Students and professionals alike came away with renewed inspiration and passion for our work and its impact on audiences. If that doesn’t define a great conference, I’m not sure what does! Many thanks to the committees and organizers for including us and providing a star-studded and highlight-filled week of workshops, presentations, and media at this year’s FMX show.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Life as a Google Glass Explorer

Second Story has recently gotten its hands on a Google Glass. In order to improve our knowledge about heads-up displays, we decided to let whoever was interested use Glass for a day.

We all expected the full “gadget” potential of Glass: map navigation, the ability to search for specific information, even the opportunity to play target-practice games. This gave plentiful insight into the user experience, the effectiveness of the technology, and its responsiveness. But there was another perspective we discovered: what is Google Glass like as a creative tool?

kirsten_3

One of the most natural things to do with Google Glass is to capture pictures and video of exactly what the wearer is seeing, at eye level. If you get really keen with Glass, you can do this discreetly just by winking—which has its own uncomfortable implications. Looking through a day’s worth of Glass-ing is strangely insightful; when taking a picture, you have essentially zero control over lighting, composition, or even the exact moment at which the picture is taken. With the usual foundations of photography out of reach, what you are left with is a pure moment, an experience captured with minimal intervention.

kirsten_1

The trend of point-of-view photography is hot right now, mostly due to the accessible prices of the GoPro. Glass is aware of this potential as well, advertising with footage of acrobats falling into each other’s arms, pilots doing barrel rolls, and people roaring down roller coasters. For those of us who live slightly less action-packed lives, are we able to create thoughtful—or thoughtless—photography without depending on a “Wow” factor? As first-hand Glass photographers, we began finding profundity in the ordinary.

dimitrii_2 kirsten_4

The ability to capture point-of-view photography in a user’s mundane day has the power to change the way we see the world and the way the world sees us. We are not only able to tell a story literally as we see it, but we get to share the parts of our everyday that are notable not for their aesthetic beauty but for their essence of the moment. Whether or not that moment is worth photographing is up to the creator, as we become inspired by experiences we are living as opposed to scenes we want to compose.

Glass also changes the way in which we photograph subjects. Without a physical camera held between photographer and subject, the photographer is able to both act in and direct the photograph. Instead of gazing into a device, the subject is looking into the eyes of the photographer, adding another level to the story. What is the subject reacting to? What is the relationship between the subject and the Glass-wearer?

kirsten_2 norman_1

We like to imagine how Glass will change the way we consume and tell stories. As Makers and amateur Glass photographers, we see this technology as a way to create with more intimacy and less interruption, blurring the lines between moments we have lived and moments we have observed.

— Kirsten Southwell, Experience Designer

Glass photography by Kirsten Southwell, Norman Lau, and Dimitrii Pokrovskii.

Posted in Culture, Technology

Passing by the Wave

One of the keys to a successful interactive experience is providing a little something for everyone. Typically, members of the audience for an interactive installation will vary in their desire to invest time and attention. An individual may have a keen interest in delving into the nooks and crannies of a subject—say cubist architecture—or they may just walk by, see someone else interact with the experience, and decide to watch them briefly from afar.

When designing and developing an experience, it’s important to consider the “just passing by” audience member. In a museum or cultural institution setting, it is precisely the casual observer, the first-time visitor, the non-expert, who we want to educate, inform, and expose to our subject. Look here! This is why you should care about cubist architecture!

The most important thing is to engage the visitor, even temporarily, in a positive fashion. These itinerant visitors, wandering from exhibit to exhibit, display to display, must be catered to on their own terms: they want something they can appreciate in very little time, with little or no interaction, and from a distance.

Recently, working with the Foundation for the National Archives in Washington, D.C., we created an experience consisting of a 15-foot interactive touch table with proximity sensors, flanked by two mosaic walls with multiple displays.

wall_layout_wave

The experience was designed to showcase documents, multimedia, and history related to the issues of civil and human rights in America. The table allows for up to 12 people to interact with it simultaneously, browsing through a series of timelines, exploring the National Archives’ extensive collection of primary source materials, and sharing their reactions to those records with others on the mosaic walls. You can take a look at a video demo, more images, and a description of the project on its portfolio page on our website.

 

During the concepting and ideation phase, we wanted to come up with a unifying element to make the table—which actually consisted of six 55” displays with two PQ Labs touch overlays—feel like a single entity, and, most importantly, engage the interest of passersby. In the end, we came up with the idea of a series of lines that would undulate seamlessly across the displays from one end of the table to the other. The lines’ sinuous motion serves as a metaphor for the fluidity of ideas, their contour-like geological representation evokes a sensation of the weight and momentum of history, and, as waves collide with each other, the patterns the lines generate speak to the complexity that can be created from simplicity.

This element we simply called “The Wave.” The wave, we decided, would flow by itself, but users would be able to interact with and excite it. It would also provide a large, beautiful, animated, easily accessible visual element ready to engage users from afar.

three_half_screens

In creating the wave, there were two primary challenges. The first was the question of how it would behave; each member of the project team had an idea about how the wave should look and feel. The second challenge was to make sure that the wave would propagate seamlessly across displays. Each display was being run by a separate computer, so somehow all the computers had to be informed about the motion of the wave.

The first problem was solved by a little mathematical graphing and some prototyping in our lab. Initially, I considered physically modeling a wave. It quickly became apparent, however, that this would be computationally expensive, and that the amount of data that would have to be passed from display to display to keep the waves in sync was too high. After all, the only thing our wave really had to do was look like a wave, and, at its most basic, a wave is just an oscillating function, something like a sine wave:

sin

But that’s boring. Here’s a tidbit that’s not boring: any periodic function (a function that repeats) can be reconstructed as a sum of sine and cosine functions, known as a Fourier series. What this meant for us was that any wave shape we wanted to create was achievable by adding a number of sine waves together. Here’s an example of what happened when we combined a few to make a more interesting shape:

multiple_sin

 

Finally, we didn’t want the wave to repeat forever, so we multiplied it by a pulse function to get something like this:

multiple_sin_combined_with_pulse
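In code, that recipe (a Fourier-style sum of sines multiplied by a pulse envelope) can be sketched roughly as follows. This is an illustration of the idea, not the production implementation: the component values, the Gaussian pulse shape, and the function names are all our own inventions.

```python
import math

def wave_height(x, t, components, pulse_center, pulse_width, pulse_speed=1.0):
    """Wave height at position x and time t: a sum of sines times a pulse."""
    # Fourier-style sum: each component is (amplitude, wavelength, speed, phase)
    y = sum(a * math.sin(2 * math.pi * (x / wl - s * t) + ph)
            for a, wl, s, ph in components)
    # Gaussian pulse envelope traveling along x, so the wave doesn't repeat forever
    center = pulse_center + pulse_speed * t
    envelope = math.exp(-((x - center) ** 2) / (2 * pulse_width ** 2))
    return y * envelope

# Three sine components (amplitude, wavelength, speed, phase) -- arbitrary values
components = [(1.0, 2.0, 1.0, 0.0), (0.5, 0.9, 1.3, 1.0), (0.25, 0.41, 0.8, 2.0)]
heights = [wave_height(x * 0.1, 0.0, components, 5.0, 1.5) for x in range(100)]
```

Randomizing the amplitudes, wavelengths, and pulse parameters over time is what produces the organic shapes described below.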

Here’s a little animation of three sine waves and a pulse function with a variety of variables changing randomly. You can see that you do indeed get some organic shapes in there. This is a variation of the algorithm we ended up using to develop the wave:

And here is an early prototype of the wave:

The final complication was how to make sure the wave was synchronized over multiple displays. Because of the way the waves were created, the only data we needed to communicate across displays was the current animation frame, when the wave was created, how long it would live, and a few wave parameters (wavelength, speed, etc.). The hard part was making sure that every display knew what frame it was supposed to be on. You can’t just tell every display computer to render the wave “NOW,” because that message takes time to travel over the network from the computer doing the telling (the server) to the display computer (the client). This is called latency.

One way to go about it would be to make sure that every computer kept track of time identically, tell each computer that at time “x” it should play frame “y,” and let each one extrapolate what frame it should be rendering based on what time it thought it was. However, clocks can drift, and since synchronization needs to be accurate to within a few tens of milliseconds, whatever solution I came up with had to factor in time drift.

In order to tackle these issues, I created a synchronization tool called “All The Screens.” Client computers register with a server, which calculates network latency (delays) and time drift and provides those clients with a way to determine what frame they should be rendering. This solution has been open-sourced and can be found on GitHub. There is also a Google Chrome demo of the technology here.
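To illustrate the underlying idea (this is a minimal NTP-style sketch in Python, not All The Screens’ actual API), a client can estimate the offset between its clock and the server’s, then derive the frame to render; the function names and the symmetric-latency assumption here are ours:

```python
import time

def estimate_offset(server_time_fn, local_time_fn=time.monotonic):
    """One round trip, NTP-style: offset = server time - midpoint of send/receive."""
    t0 = local_time_fn()
    ts = server_time_fn()        # in a real system this is a network request
    t1 = local_time_fn()
    return ts - (t0 + t1) / 2.0  # assumes roughly symmetric network latency

def current_frame(start_server_time, fps, offset, local_time_fn=time.monotonic):
    """The frame this client should be rendering, expressed in server-clock terms."""
    server_now = local_time_fn() + offset
    return max(0, int((server_now - start_server_time) * fps))
```

With a few such round trips averaged, and the offset re-measured periodically to absorb clock drift, every client converges on the same frame number without the server ever having to shout “now.”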

These technical solutions allowed us to create the wave, whose mesmerizing motion lures visitors in to learn more about the history of civil and human rights in America. And, for that happy-go-lucky stroller who doesn’t have the time or inclination to delve into the content, perhaps the wave serves as a source of soothing visual relaxation, a counterpoint to the hustle and bustle of busy downtown DC.

— Donald Richardson, Senior Interactive Developer

Posted in Design, Technology

Mobile Case Study

Second Story is deepening its physical design and environments practice by offering industrial design services to our clientele. We aim to be innovative, designing physical solutions to elevate digital interactive experiences, but our work sometimes requires practical, engineered solutions to package digital content in simple ways that are meaningful to the overall audience experience.

We’re always excited by design challenges that let us get our hands dirty. When a recent mobile application project presented the need for packaging design, we brought manufacturing processes to the studio. Our team came together in impressive fashion, with staff members from every discipline collaborating to create an efficient assembly-line to achieve an immediate yet stylish solution for our client’s needs.

— Jordan Tull, Designer

Posted in Culture, Design

Finding The Heart of “100 Years of Design”

whiteboard_laura

Last May, we had the opportunity to partner with AIGA, a longtime collaborator and dream client, on a new microsite to commemorate their centennial and celebrate the last 100 years of American design. Our first reaction was excitement: as designers who pride ourselves in our discipline and our history, we were honored to craft stories that include some of the world’s most influential designers and their work. Our second reaction: where do we start?

At Second Story, we often describe our process as “designing from the inside out.” As we thought about AIGA and what made it special, it became clear that the organization sits at the epicenter of the conversation between design and society. A simple diagram was our first attempt to show how this conversation, and the artifacts in AIGA’s archives, could become the lens for the site.

levels_of_engagement_diagram-02

Building a project’s foundation is one of the most challenging and exhilarating points in our creative process. We refer to this discovery as finding the heart—the one truth of the project that will never change. The “heart” is the story that the experience is begging to bring to life. Creative Director David Waingarten has described the task of finding and articulating this conceptual foundation as “being the first to walk into a dark room and look for the light switch.”

desk_edited

To find the heart of the AIGA Centennial project, we fully immersed ourselves in the content. We delved into the vast collection of artifacts in AIGA’s Design Archives, combed through articles from diverse voices in the design community, and looked at other retrospectives, critiques, and blog posts. In our quest for enlightenment, we noticed there was little discussion of design history that was not organized by time, form, medium, or discipline. While these ways of presenting design history are informative and educational, we wanted to create a living resource that captures the ever-evolving conversation between design and society and invites everyone deeper into it.

As we were having this discussion, our collaborators at AIGA pointed us to “No More Heroes,” a poignant article from a 1992 issue of Eye magazine that really spoke to us. This quote from Bridget Wilkins was especially inspirational to our conceptual development:

quote-01

With AIGA’s guidance and after countless thought-model sketches and “what if!” epiphanies, we landed on a framework that gives diverse audiences a new way to look at and evaluate great design. We organized the stories by design intent, allowing the purpose of the artifacts to be revealed to the visitor. Intention is what defines design, and as Milton Glaser so eloquently states: “The best definition I have ever heard about design and the simplest one is moving from an existing condition to a preferred one, and that is a kind of symbolic way of saying you have a plan because the existing condition does not suffice.”

We had to consider how to make this story framework exciting and accessible for guests with varied knowledge of design. It couldn’t overwhelm the general public, but it also had to meet, if not exceed, the expectations of design enthusiasts and practitioners. To strike this balance, we created an experience with two layers. At the surface layer, visitors can view carefully curated artifacts, quotes, and videos, and listen to audio clips. Those who are interested can go a level deeper to see additional artifacts, designer profiles, and moments from AIGA’s history. With 11 videos, 26 audio clips, 120 design artifacts, 17 designer profiles, 15 AIGA historical moments, and 19 quotes, there’s a wealth of content for visitors of all backgrounds to explore.

whiteboard_edited

AIGA also wanted to extend the conversation to ensure that the microsite became a meaningful record of this time in design’s history. To foster discussion and participation, we needed an engaging prompt: how could we ask a stimulating and meaningful question without leaving guests lost, or asking them to spend 20 minutes crafting a response? We iterated on those six words thoroughly, comparing reactions to words like “think” versus “feel” and testing whether users were more comfortable contributing in the first person (“I am connected by design that…”) or from a general perspective on design (“Design that connects is…”). We settled on a phrase that could be applied across all five intents and that allowed guests to choose an intention and add their own thoughts and images.

The results have been incredible to watch. Each day the conversation grows, with over 700 user contributions and counting. We are thrilled with the final site and hope the experience engages a broad audience in a dynamic conversation about the role of design in our society and everyday lives. We encourage you to explore these narratives and add your voice to celebrate the evolution and impact of American design over the last 100 years.

Our studio is forever grateful to AIGA for giving us the opportunity to be part of such an incredible moment in design history.

— Laura Allcorn, Senior Content Strategist & Kirsten Southwell, Experience Designer

Posted in Content, Culture, Design

You Can’t Go Wrong With 8,294,400 Interactive Pixels

As Ultra High Definition (UHD) displays become more readily available, we will begin to see the technology adopted in many ways. We are most interested in the new standard because it will have a direct impact on the way we design and display interactive content. While we have been developing applications that run at resolutions similar to the 3840×2160 pixels offered by UHD displays for some time, we have been forced to show them on multiple HD displays, which introduce visible seams when tiled together. With the advent of the UHD display, we can now combine both scale and fidelity in the presentation of our media on a single seamless display.

With interactive media, the scale of a display can act as a beacon, enticing potential users to come closer and explore content. A large-scale display also accommodates more users at a time, inviting collaboration, especially on a horizontal (table) surface where people are brought face-to-face with those across from them as they interact with the media.

But scale isn’t everything. By its nature, interactive content has to remain legible when viewed at an arm’s length as users touch and interact with the surface of the display. At this close range, most large displays don’t have the fidelity to carry type and subtle graphics. It is here that the UHD resolution succeeds where other displays fall short. At about 50 pixels per inch (ppi), the Planar UR8450 displays offer precise pixels and legible content even when viewed up close.
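As a back-of-envelope check of that figure: pixel density is just the pixel diagonal divided by the physical diagonal. The 84-inch diagonal below is our assumption for the panel size; substitute the actual dimensions as needed.

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density from a panel's resolution and physical diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

# UHD resolution on an assumed 84-inch diagonal -- roughly 52 ppi
ppi = pixels_per_inch(3840, 2160, 84.0)
```

The same function shows why fidelity at arm’s length is hard for tiled HD walls: an HD panel of the same size would land near 26 ppi.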

With this display, we can already begin to imagine a future where the notion of a pixel is no longer considered. Today, we can see this in relatively small Retina Displays where ppi counts surpass 300 and individual pixels seem to disappear. We look forward to a time when this type of fidelity will become ubiquitous on large and small displays. Increased legibility will allow content to be displayed at any scale and orientation, opening up new modes of interaction. Displays will become a window through which emotive, high resolution content will be displayed, bringing stories to life in new ways.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Mobile Depth-Sensing

From the Protecting the Secret interactive at the Vault of the Secret Formula exhibit to the Connections Wall at the Emerging Issues Commons, Second Story regularly uses Kinect and other similar technologies to create dynamic content based on sensing where users are in physical space. In the past, the mobile use of these sensors has been restricted by their need to be tethered to powerful “desktop” CPUs; we’ve had to use USB signal extenders and dedicated wiring to mitigate these constraints.

But developments in computing are giving us enhanced flexibility. The latest ARM processors are small, portable, and powerful, and we’ve been experimenting with using them to process depth data right from the sensor’s location.

The processors can be powered over CAT-5 Ethernet or even by battery (depending on the use case), which makes deployment easy, and they automatically start working as soon as they’re powered on, eliminating the need for an external display. Using available bandwidth, they can send data over WiFi or regular CAT-5 to more powerful CPUs that do the data interpretation.
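As a rough illustration of this architecture (the wire format, port, and helper names here are invented for the example, not taken from any real deployment), the board near the sensor only needs to serialize each depth frame and ship it to the interpreting machine:

```python
import socket
import struct

FRAME_HEADER = struct.Struct("!IHH")  # frame id, width, height (network byte order)

def pack_frame(frame_id, width, height, depth_mm):
    """Serialize one depth frame (flat list of millimeter readings) for the wire."""
    header = FRAME_HEADER.pack(frame_id, width, height)
    body = struct.pack("!%dH" % (width * height), *depth_mm)
    return header + body

def unpack_frame(payload):
    """Inverse of pack_frame, run on the interpreting machine."""
    frame_id, width, height = FRAME_HEADER.unpack_from(payload)
    depth_mm = struct.unpack_from("!%dH" % (width * height), payload, FRAME_HEADER.size)
    return frame_id, width, height, list(depth_mm)

# On the sensor-side board, each packed frame would then be sent with something like:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(pack_frame(frame_id, w, h, depth), ("interpreter-host", 9000))
```

A full-resolution depth frame won’t fit in a single UDP datagram, so a real deployment would chunk frames or stream over TCP; the sketch only shows the shape of the split between sensing and interpretation.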

The sky’s the limit with these little guys. We can’t wait to explore the possibilities.

— Sorob Louie, Interactive Developer

Posted in Technology

100 Years of Design

In 1914, a small group of designers inaugurated what became the American Institute of Graphic Arts. One hundred years later, Second Story has collaborated with AIGA to create a centennial microsite that celebrates the profound impact design has had on our society over the last century and invites everyone into a conversation about the impact of design on our daily lives.

Our first task was to collaborate with AIGA on curating a set of works to illustrate the breadth, diversity, and evolution of American design over the last century. We also wanted to present these works differently than a typical retrospective might: to focus on the “why” instead of the “how,” exploring the intentions behind these works rather than simply categorizing them by medium, style, or geography, or plotting them on a timeline.

Intentions

These design intentions became the core of the site: five media-rich narratives focused on how design connects, informs, assists, delights, and influences us.

Boards

We also see this microsite as a time capsule that successive generations of designers might open in 2114, as AIGA celebrates 200 years. Knowing that the tools and methods those designers employ will have evolved far beyond what any of us can imagine today, what kernels of truth or wisdom from AIGA’s first century could this site preserve and pass on?

To find answers, our film crew captured the oral histories of 18 living legends of American design. We asked these designers to comment and reflect on their own seminal works, the arc of their careers, and the lessons they’d like to pass on to future generations. Their answers were humble, straightforward, hilarious, heartfelt, and enlightening. Being present with the likes of Paula Scher, Milton Glaser, Richard Saul Wurman, Jessica Helfand, Michael Bierut, Seymour Chwast, and many others was an incredible honor. Their stories and insights bring this content and conversation to life in a way nothing else could.

Most importantly, we wanted to invite everyone to the party. So we created a way for people to share how design connects, informs, assists, delights, and influences them today. Contributions are already pouring in, and we are thrilled to see such a diverse range of responses.

Centennials offer us a chance to look back at where we’ve been, to recognize a shared history and inheritance, and to appreciate the evolutionary continuum that connects those designers in 1914 to us here and now. They also give us the chance to look forward: to take what we’ve learned in new directions and ask what’s next. As part of the team that spent over eight months bringing this microsite to life, I can say that looking back has taught us a tremendous amount about design’s role in shaping how we see our world, ourselves, and each other. Looking forward, this project has made us look deeply at what motivates us to do the work we do, and has rededicated us to bringing those intentions to life in everything we create.

— David Waingarten, Creative Director, Storytelling

Posted in Content, Culture, Design

Leap Motion Path

Leap Motion Path is a Second Story lab experiment exploring the use cases for the Leap Motion controller in the digital animation field. Our objective was to create a tool to capture 3D motion that could be used within an animation platform to control digital content.

One of our goals was to record an animation without the assistance of a keyboard or mouse. To achieve this, we needed a way for the animator to know where their hand was in relation to a recording canvas. We accomplished this by creating an inset frame that gives the animator space to interact with the Leap Motion and get a spatial reference for where their hand is within the screen. Once they’re ready, they can enter the frame and start recording. Later, in the animation software, they can remove the entry and exit points from the frame by cropping the animation recording.

Leap-Motion-Path_cropped

During this experiment, we encountered another interesting use case. Leap Motion provides a lot of data about the geometry of the hand. If you capture the position of the animator’s wrist and the tip of the index finger and draw a line between those points, you end up with a vector that indicates the direction of the hand. If you capture this vector over time, you see that it produces a beautiful ribbon. The animator can record this ribbon and use it in other animations.
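The vector-and-ribbon idea can be sketched in a few lines. The coordinates and function names below are our own illustration, with the actual Leap API calls omitted:

```python
import math

def hand_direction(wrist, fingertip):
    """Unit vector pointing from the wrist to the index fingertip."""
    v = tuple(f - w for w, f in zip(wrist, fingertip))
    length = math.sqrt(sum(c * c for c in v)) or 1.0  # avoid divide-by-zero
    return tuple(c / length for c in v)

def ribbon(frames):
    """One direction vector per frame; drawn over time, these trace the ribbon."""
    return [hand_direction(wrist, tip) for wrist, tip in frames]

# Invented sample positions (millimeters, Leap-style coordinates)
frames = [((0, 0, 0), (0, 30, 40)), ((5, 0, 0), (5, 40, 30))]
directions = ribbon(frames)
```

Sweeping each frame’s segment through space, rather than just its direction, is what produces the ribbon surface the animator can reuse.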

ribbon_landscape2

As we developed Leap Motion Path, several stand-alone libraries emerged. We’ve hosted one, called Path.js, here on GitHub. To provide some additional context: we wanted to capture a 3D position over time and then animate along the path we recorded. If we were to animate along only the discrete points the Leap gives us, the animation would be choppy, because the resolution of those points wouldn’t resemble the actual path we drew with our hand. To combat this, we needed to interpolate a line or a curve between the points to give a finer resolution so we could animate at any speed. Path.js takes a collection of timestamped points and creates a linear interpolation between them. This allows Leap Motion Path to export an animation in vector format, letting the animator scale and stretch the animation as desired.
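The core of that interpolation is simple. Here is our own minimal Python rendition of the idea (not Path.js’s actual API): given timestamped 3D samples, evaluate the path at any time by linearly interpolating between the neighboring samples.

```python
def sample_path(points, t):
    """points: [(timestamp, (x, y, z)), ...] sorted by timestamp; returns the
    linearly interpolated position at time t (clamped to the path's ends)."""
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return tuple(a + (b - a) * u for a, b in zip(p0, p1))

# Resample a coarse recording at a finer, uniform rate
recording = [(0.0, (0, 0, 0)), (1.0, (10, 0, 0)), (2.0, (10, 5, 0))]
resampled = [sample_path(recording, i * 0.25) for i in range(9)]
```

Because playback speed only changes the times at which the path is sampled, the same recording can be slowed down or stretched without becoming choppy.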

With more development, Leap Motion Path could be integrated into a standard digital animation workflow, giving animators one more tool to create beautiful, lifelike work. Moving forward, to improve the motion-capture experience, we would need to rewrite the recording mechanism as a plugin for an animation platform, enabling the animator to record and review all in one application. We look forward to integrating Leap Motion Path into our own animation workflow at Second Story.

— Dimitrii Pokrovskii, Interactive Developer

Posted in Technology

Unboxing the Kinect for Windows v2

kinect4Win_v2_small_v2

Kinect for Windows v2

We started experimenting with the Kinect for Windows v2 from Microsoft this week and are already excited by the new possibilities that this impressive new depth-sensing camera offers. Mechanically, the camera is a bit larger and bulkier than we would like, but it also features a tripod mount adapter (threaded insert) which will go a long way towards helping us incorporate the sensor into different environments.

Among the many improvements, we quickly found that the new camera was able to sense just about as many people as we could fit in the (expanded) camera’s field of view. Tracking has also improved, allowing skeletal data to be sensed in a variety of poses (sitting, prone). We’re also excited by the new level of detail captured in each pose which includes basic hand and finger tracking.

The depth image returned by the camera is also showing significant improvements in speed and image fidelity. This type of depth data allows for the capture of image details previously unattainable with the Kinect.

depth_small_edit_final

Raw depth image from the K4W

As we continue to experiment with and adopt the technologies the future is bringing, we couldn’t help but pause to thank the engineers and scientists at Microsoft who made this one possible. It is technologies like this that enable us to create new and inspiring experiences.

With that, we leave you with this final message for this season:

HappyHolidays

— Matt Arnold, Lead Integration Engineer

Posted in Technology