iBeacon Technology: From Lab to Launch

It’s been almost a year since Apple launched its iBeacon technology. Since then, tech gurus have frantically speculated about how beacons will enhance consumer experiences in physical spaces. Although much has been said about what beacons can be used for (wayfinding, dynamic pricing, ticketing, etc.), few are talking about how to design for this new experience paradigm. To truly understand the feeling of interacting with beacons, we decided to get our hands dirty and build a lab prototype that would allow us to experiment. We placed Gimbal beacons around our studio and built an app that triggers content as a person approaches these various spots. We strategically picked very different points of interest to allow exploration of different content types:

Still furniture | Interactive project demos | Window vistas

Our prototype app went through multiple rounds of testing and tweaking, unveiling key insights along the way. Inevitably, we learned much about the technical capabilities of beacons and Bluetooth Low Energy, but experiencing it first-hand within a real physical space brought to attention several experience design principles as well:

1) Content must be delivered in bite-sized, highly visual chunks.

Since the digital application is augmenting a physical experience, it should allow for a primarily heads-up experience where users are able to fully appreciate the physical world around them, but also quickly consume contextual content that helps them make sense of what they are seeing.

2) There is a fine balance between a helpful alert and an interruption.

Our first iteration of the app triggered content to take over the screen when the user came close to a beacon. We quickly realized, however, that this takeover could be disruptive depending on the context, for instance if the user was still looking at the content screen for a previously triggered beacon. So our second iteration “nudges” the user instead. A thumbnail bubble for the new content playfully animates onto the screen without obscuring it, and the user has the choice to launch or ignore it. (A rough sketch of this triggering logic follows the third principle below.)

3) Navigation via physical means (walking up to various beacons) must be reconciled with more traditional navigation via app UI.

On-screen UI elements such as navigation menus and back buttons break down when content is navigated by physically walking through a space. Rules must be determined for which method trumps the other, while still ensuring that the user has a clear mental model of how the app works. For instance, we might consider abandoning traditional, nested hierarchies for more modal, state-based screen navigation.
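
To make the triggering logic concrete, here is a minimal sketch of the “nudge” pattern from the second principle. It is written against Apple’s standard Core Location iBeacon ranging API rather than any particular beacon vendor’s SDK, and the region UUID, the minor-to-content mapping, and the notification hook are illustrative placeholders, not our prototype’s actual implementation.

```swift
import CoreLocation

// Sketch: range beacons and "nudge" the UI when the visitor gets close to a
// new point of interest, instead of taking over the screen.
final class BeaconNudgeController: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder UUID; each beacon's minor value identifies a point of interest.
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "studio-tour")
    private var lastNudgedMinor: Int?

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // Only react to beacons the visitor is actually standing near.
        guard let closest = beacons.first(where: {
            $0.proximity == .immediate || $0.proximity == .near
        }) else { return }

        let minor = closest.minor.intValue
        guard minor != lastNudgedMinor else { return }
        lastNudgedMinor = minor

        // Principle 2: post a nudge the UI layer can animate in as a small,
        // dismissable thumbnail; the user decides whether to open it.
        NotificationCenter.default.post(name: .beaconNudge,
                                        object: nil,
                                        userInfo: ["pointOfInterest": minor])
    }
}

extension Notification.Name {
    static let beaconNudge = Notification.Name("beaconNudge")
}
```

Keeping the trigger decoupled from the presentation this way also leaves room for the third principle: a state-based screen controller can decide whether a physical trigger or an on-screen tap wins at any given moment.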

After several iterations of the lab prototype, we had the opportunity to apply what we learned to the design of the World of Coca-Cola Explorer mobile application. Overall, our lab process allowed us to discover answers to questions that we would not have otherwise known to ask. As designers, it is impossible for us to anticipate every nuance of how a final experience will play out. The only way to get even close is to build something quickly and actually feel it ourselves — leaving plenty of room for unexpected and insightful learnings.

— Pavani Yalla, Associate Creative Director, Experience Design

Posted in Design, Technology

Design Week Portland: Reimagining Lovejoy Fountain

Second Story's installation at Lovejoy Fountain for Design Week Portland

This past Tuesday, Second Story, along with six other design studios and artists, had the privilege of participating in an exhibition called Revolution in the Landscape: Re-experience Halprin’s Fountains. Hosted by SEGD as part of Design Week Portland, the event aimed to showcase local designers and reactivate Lawrence Halprin’s Open Space Sequence. Second Story was asked to bring the Lovejoy Fountain to life.

The Halprin Open Space Sequence is one of the most important pieces of late 20th-century public art in the US, and the Lovejoy Fountain is a hidden gem nestled between buildings, shops, and condos in downtown Portland. Our goal was to stay true to Halprin’s original intent of creating a space full of dichotomous interaction while highlighting the existing architecture and complementing the late Modernist aesthetic in a cutting-edge temporary installation.

After imagining many ways to activate the space, conducting physical testing, and visiting the fountain to consider the possibilities, we landed on a direction the whole studio could get behind. Using string, multiple light sources, and directional sound manipulation, we created an installation that would activate the fountain in a new way.

The following images offer a peek into what we created, but stay tuned for more in-depth documentation and behind-the-scenes footage!

— Adam Paikowsky, Lab Technician

Posted in Culture, Design

A Lesson From Maker Faire

This month, we had the pleasure of exhibiting at Portland’s annual Mini Maker Faire held at OMSI. We described our project, Construct-A-Comic, as a digitally enhanced, do-it-yourself comic strip kit. At our booth, visitors could use a few different storytelling tools to create their own three-panel comic strip.

A set of paper props and characters with mix-and-match heads provided the cast for the comics. The skeleton and crocodile were particularly popular with the kids.

We set up a mini stage with projected video backgrounds that could be adjusted to different scenes, everything from a carnival to outer space.

Once the stage was set, we recorded visitors’ creations in panning videos and posted them to Instagram.

That gives you a basic idea of what was set up at our booth. But there’s another aspect of our project that I’d like to talk about here. It’s something that I feel Construct-A-Comic exemplifies well, not necessarily because of what it is, but because of what it empowered people to do.

There’s a French word, bricolage, that describes the act of using whatever you have on hand to make something. Someone who performs bricolage, or a bricoleur, draws their materials and resources from a limited pool to achieve the ends of their work.

As an after-hours passion project with little budget, Construct-A-Comic was definitely an exercise in bricolage. Thanks to a set of talented volunteers from our studio, we were able to pull together our project from whatever we happened to have in our Lab. This is what you’ll find all over Maker Faire: groups of bricoleurs who are passionate about the act of making and embrace the challenge of working with constraints.

But the idea of bricolage can extend beyond the physical things we make. In his book The Savage Mind, anthropologist Claude Lévi-Strauss uses the metaphor of intellectual bricolage to describe what he sees as a valuable means for humans to make sense of the world:

[The bricoleur] derives his poetry from the fact that he does not confine himself to accomplishment and execution: he ‘speaks’ not only with things… but also through the medium of things: giving an account of his personality and life by the choices he makes between the limited possibilities. The ‘bricoleur’ may not ever complete his purpose but he always puts something of himself into it.

Put another way, this is a philosophy of process over product. There is something immensely valuable, but sometimes difficult to quantify, in the process of making things, and not just in what you end up making. The process of encountering and working with constraints should be celebrated because that’s what makes our activity inherently satisfying and meaningful.

I think part of what makes Construct-A-Comic special is the small way it encourages the kids who use it to engage in their own intellectual bricolage. We set up a space where we offered them limited tools and asked them to make something of their own. And they didn’t disappoint.

Our ability to freely engage in bricolage seems to dull as we get older. We become more tied up in believing there is a “right” way to do things and complain about the compromise when we are forced away from it.

But that flexibility of the bricoleur to see multiple possibilities in a limited set and adapt to shifting conditions is what I consider a good design skill. So, for designers especially, I think there is something to be learned from these kids who fearlessly created stories out of whatever we gave them. This is the lesson from the Maker Movement: you can shape our world meaningfully, not in spite of the constraints you will encounter, but because of them.

If you’d like to see all the stories our visitors made during Portland’s Mini Maker Faire, go to instagram.com/constructacomic.

— Norman Lau, Senior Experience Designer

Posted in Culture, Design, Technology, Uncategorized

The User Experience of Driverless Cars

I recently took a short road trip with a friend from Brooklyn to Dover, DE for a concert. As we inched south through New Jersey in typical I-95 traffic, conversation shifted to Google’s autonomous car project:

“We’ll be no more than 10 feet from the car in front of us,” Andrew exclaimed, “still going 65 mph! Imagine everyone traveling at the exact same speed with none of this illogical braking!”

We traded bits of news gathered from articles about how much safer the roads will be, how much energy we’ll save, and, of course, how much traffic we won’t be complaining about.

“If a central transportation network knows where every car on the road is headed, there must be ways to programmatically group cars with similar destinations and manipulate the infrastructure to create quicker routes.”

I was referencing what Höweler + Yoon Architecture developed for the Audi Urban Future competition in their Boswash:Shareway 2030 concept. The project imagines a connected network that streamlines multiple modes of transportation and goes beyond the subject of autonomous cars—addressing the larger urban landscape and mobility in general—but it illustrates an important trend happening right now that’s paving the way for the autonomous cars of the future. We are seeing the internet’s physical potential being actualized in the form of connecting objects with environments and with people, mostly to streamline life’s everyday activities. Some call this the Internet of Things, and autonomous cars are another instance of this trend, making travel less stressful, costly, and dangerous.

However, of all the objects we’re connecting to people and environments, automobiles are a very unique instance. They’re symbols of freedom and passion, especially here in the United States. Songs and movies have been written about them, first kisses (and other firsts, I’m told) have taken place ‘parking’ in them, and some of the world’s most fervent sports fans literally stare at them circling a track. We’ve built up a collective nostalgia from the ‘user experience’ of driving that over time has helped define American culture.

But the experience of driving is about to change, and so must our relationship with cars and how they fit into society. It’s important we begin talking about what that means beyond all the energy, money, time, and lives we’re due to save.

To frame a conversation about the user experience of driverless cars, we can look through three lenses: Control, Freedom, and Trust.

Control

Control | Freedom | Trust

I drive a car with manual transmission, which is a rarity these days. In fact, that I own a car at all in New York is uncommon, but I enjoy the feel, the connection between my movement and the mechanical reaction that makes the car go! It’s a matter of control, like steeping coffee in a French press versus letting it drip in a machine. I feel more accountable and responsible for the car’s movement, and therefore I’m more engaged in driving.

The recent Google car prototype features no steering wheel, no brake pedal, and no turn signals, and certainly no clutch or gearshift. It looks eerily empty inside, similar in a way to the first iPhone, whose interface-less interface changed our behavior and the way we interact with people and our environment in ways we don’t completely understand yet. Beyond being the conduit to the current Internet of Things trend, this culture disruptor has taught us how influential the human-computer interface can be.

With the driver’s controls, and therefore purpose, now relinquished, he or she is no longer the primary user but a passive passenger (just another back-seat driver?). The driver’s cockpit will shift to feel more like an extension of familiar environments like our homes, offices, trains, or subways, though with the potential to become more personalized.

This is an important point for us at Second Story, where we don’t currently design vehicle interfaces, but we do explore our relationship with environments and how we interact within them. So, before driverless car interiors are lined with screens, the interesting conversations will be around what new roles the users of this environment have. What will an optimal seating arrangement look like? Will the father of the nuclear family still sit in the driver’s seat? Will the driver still reach across to protect his or her passenger during a sudden stop? Will so much control be relinquished that parents send their children alone in the car to school and back? In this new flattened hierarchy of users, the notion of control is about to be radically redefined.

Freedom

Control | Freedom | Trust

When I think of early moments in my life that triggered a monumental feeling, driving alone for the first time is up at the top of my list. It was just a 10-minute trip to Blockbuster (THAT long ago), but it gave me one of the first sensations of fearless adventure and escape I can remember. And it was made adamantly clear to me that with my newfound power came responsibility. “Your life,” as my father put it, “is in your own hands.”

Receiving one’s driver’s license is a rite of passage, a tangible milestone in the transition from adolescence to adulthood where the notion of “responsibility” becomes a truly real thing.

I tried to imagine this life event in the time of driverless cars. A 16-year-old—perhaps even younger considering that the risks associated with driving would be irrelevant—would excitedly enter an address into the car’s computer, say a nearby 7-11 (which, unlike Blockbuster, would surely still be around). I then imagine her kicking back and texting her friends or checking her social feed. I can’t imagine her experiencing that liberating sensation of steering in whatever direction she wishes or the accountability associated with this newfound access to anywhere.

But the notion of freedom and exploration has been changing in recent years, especially as it relates to our cars and the millennial generation’s desire for them. According to a 2013 Time Magazine article by Brad Tuttle, fewer and fewer people between the ages of 16 and 34 are buying cars.

Car-sharing services like Zipcar and Lyft have made individual mobility easier and cheaper by removing the commitment of owning. These new models of car usage are contributing to a shift in sentiment toward automobiles from passion to convenience, so freedom could be interpreted as the freedom to mobilize only as needed. And with driverless cars we would be able to seamlessly shift from one environment to the next without having to disconnect and distract from our digital activity (whether this constant connection is beneficial to us is a whole other conversation).

So how might we hold onto our sense of exploration and responsibility in the future of driverless cars? And is this a task worth undertaking? Perhaps we shouldn’t fret because new technology might continue to dictate our cravings for digital connection over an analog escape. In any case, multiple driving modes would help ease the transition. Maybe we include a configurable interior where one could choose the ‘classic setup’ featuring a steering wheel. We could flip cruise control around and allow a young driver to take the lead until the car’s computer anticipated danger, similar to today’s crash avoidance systems. Maintaining some sense of personal responsibility and the potential for escape might help ease the adoption of driverless cars.

Trust

Control | Freedom | Trust

When my friend asked what I thought about Google’s driverless cars, I sat for a few seconds looking out at the people driving on the highway and thought: We will have reached the point of no return when we give up human control and rely solely on an invisible technology to manage our lives (admittedly I saw flashes of Terminator 2). Indeed a cynical reaction, though I would think it’s a daunting prospect for most of us. But how we trust our technology has changed dramatically just in the last 50 years.

There was a time not so long ago when we balanced our own checkbooks. We now sign into our banking websites with the same credentials as our Pinterest account. We send money freely over the Internet to companies we might not have heard of a week ago in exchange for a T-shirt. There is a sense of comfort we have achieved with technology because the incentives for convenience and efficiency have outweighed the risks.

But technology still disappoints us even without human error. Have you ever sent a text message to learn that it didn’t successfully send and a miscommunication resulted? Or sent the feared autocorrect message that incited a look of terror on both your face and the recipient’s? Even worse, our technology can purposely manipulate us—remember Facebook’s “Emotion Contagion” experiment that caused an onslaught of outrage?

Now consider a driverless car failing us on a much larger scale: it takes us to the wrong location, makes us late, or worse, it jeopardizes our safety. In the last year, writers have tapped into the life-or-death dilemma designers of autonomous cars might have to consider: should a driverless car collide with the person or car in front of it, or swerve and potentially injure or kill its own passengers? There’s no easy answer, but the question is important for us to consider. Even though driverless cars are slated to save a staggering number of lives, when we leave instinctive decisions up to technology, a great debate surely divides us.

Earning trust in any form takes time. In the case of driverless cars, we will first need to overcome our fear of losing too much control and freedom. At what point will that happen? Some argue there is a threshold where our trust in technology results in diminishing returns for society, but it is technology’s tendency to recast cultural rules, to expose new possibilities that enable us to do more and better—the invention of the printing press, for example. Eventually we’ll all have to consider where that threshold lies and what sacrifices we’re willing to make in order to continue evolving with technology.

— Justin Berg, Experience Design Lead

Posted in Culture, Design, Technology

“Digital Revolution” in London

With its post-apocalyptic looks and lost bits of nature, London’s Barbican Estate deserves a blog post all its own, but I wanted to share my experience with “Digital Revolution,” a unique exhibition hosted in its depths at the Barbican Centre.

This exhibition successfully brings together a selection of old, recent, and exclusive works, aiming to show the convergence of technology and creativity in design, music, film, videogames, and, actually, everywhere in our culture since the ‘70s. The boundary between these spheres has become nearly invisible, as creative minds have progressively found, crafted, and used digital tools to explore the possibilities.

Without any gimmick—it’s 100% Space-Invaders-sticker-free!—this exhibition is a truly refreshing and inspiring museum experience designed for everyone. I suspect it will not export to the U.S., and I doubt all of you will be able to visit London in the next month, so I wanted to share here my thoughts and favorite parts.

Pong. Broken Age.

Digital Archaeology

The journey starts with a brief but necessary retrospective of digital antiques, surrounded by a dozen synchronized screens showing historical references to the digital landscape (the appearance of GIFs, iconic video games, famous movies using ground-breaking CGI, etc.). I say “brief but necessary” because this exhibit is not about nostalgia but rather about how our accomplishments over the past 40 years have informed a better understanding of the creative innovations being showcased and the breadth of possibilities for the future.

I got to play on an original Pong arcade station with a CRT screen. The Fairlight CMI, 30 years old and one of the most iconic sampling synthesizers, was on display. A 9-year-old kid held an original Game Boy in his hands and played his first game of Tetris. Visitors checked out a couple of obscure NetArt pieces using Netscape.

Showcasing these artifacts presented some real curatorial challenges, since, unlike books, sculptures, or paintings, they need another medium to be visible. It’s also very hard to find replacement parts for the consoles, and newer browsers deprecate old code. The fast digital obsolescence makes these works—and, by extension, our own work in the digital sphere—very fragile. Worse, the easy ability to duplicate digital content makes them less valued. The exhibit carefully labels these once innovative creations, giving their authors due recognition for their ingenuity.

Tetris and Minecraft. Deletion and creation.

We Create

The exhibition continues with more recent works grouped according to a series of themes. We Create glorifies projects that touch a sense of collaboration in their conception—such as the beautiful Broken Age, a point-and-click game supported by more than 80,000 people via Kickstarter; in their actual making and goal—like Aaron Koblin and Chris Milk’s Johnny Cash Project, where visitors are invited to collectively draw a tribute to Cash; or in the community that they developed—like the sandbox Minecraft with its millions of players and contributors and its omnipresence in the collective memory. Getting to watch a group of kids discover they could actually play Minecraft during the exhibition reminded me that “Digital Revolution” is truly a celebration of interaction. It understands that if a project (a game, a website, an app, etc.) is meant to be interactive, it has to be showcased in a participatory way because the user, the human, is such an essential part of it.

Another piece linked to We Create is Universal Everything’s Together, which invites people to create a short frame-by-frame looping animation. Pending review, contributions are displayed on a long wall of screens. As I regularly work on motion design at Second Story, I just loved this piece. The interface felt a bit raw, but constraint allows creativity, and the installation is still very attractive and educational. It’s a great way to understand the basics of animation and to challenge yourself to create contributions to inspire others.

Nearby, visitors can also play a few recent independent games, testaments to the ability of small teams to create powerful and rich experiences now that the tools they need are more affordable and knowledge is more accessible. Fez, Papers, Please, Antichamber, and Journey are some of the award-winning titles available.

Les Métamorphoses de Mr. Kalia.

State of Play

State of Play gathers body-conscious interactive installations, where creators experiment with play and gesture using the participant’s physical behavior and tools that observe it, like Microsoft’s Kinect (something we are very familiar with). Les Métamorphoses de Mr. Kalia, one of the pieces commissioned for Google’s DevArt, was definitely my favorite. The experience is quite simple: when in position, a character follows your movements and you take part in a short surrealistic interactive story (sort of a body extension of Google’s Puppet Parade). The piece is visually poetic, there’s a great sense of mystery and revelation in the relationship between participant and spectator, and best of all, you are given a link to replay your performance on the website right after you finish your story (by the way, here are my moves). Another very curious installation—one that I unfortunately missed due to its unobvious location—is Umbrellium’s Assemblance, which allowed people to interact with lasers and sculpt the light with their shape and gestures.

Computer imagery from Apple’s Woodcut to Dreamworks’ Apollo.

Creative Spaces

Creative Spaces showcases some examples of technology’s influence on how we tell stories. A documentary describes the impressive Apollo project, a tool conceived by Intel for DreamWorks animators, which they used on How to Train Your Dragon 2. This new tool enables a faster approach to their animation and lighting work, helping them avoid having to re-render every individual change they make—an apparently standard process in animation. Apollo improves efficiency and allows them to focus on the core of their craft.

The exhibition also includes two behind-the-scenes physical installations on two recent blockbusters. We first learn about the making of the city-bending scene in Christopher Nolan’s Inception. While the content doesn’t differ much from what you see in the bonus section of your Blu-ray, the experience offers the opportunity to use a Leap Motion controller—another familiar tool at Second Story—to interact with the content. Moving the hand from left to right over the sensor plays the scene forward, while moving the hand from down to up browses through the different steps of the process as layers, from the first documentation shots to the conceptual art to the non-textured 3D explorations to the final render. This interaction sounds fairly simple, but without physical boundaries and with a very discreet user interface, many visitors started the experience by moving their hand in all directions, only understanding after a couple of seconds that just two axes are used. Even so, it is not a bad experience and is actually a very inspiring way to browse through content.
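
The exhibit’s actual code isn’t published, but the two-axis idea is easy to sketch. The fragment below assumes some sensor layer (not the real Leap Motion SDK) already delivers a hand position normalized to the 0 to 1 range on each axis; everything else simply maps that position onto a playhead and a layer index.

```swift
// Hypothetical input: a hand position already normalized by the sensor layer.
struct HandSample {
    let x: Double   // left (0.0) to right (1.0) over the sensor
    let y: Double   // low (0.0) to high (1.0) over the sensor
}

struct SceneBrowserState {
    var playhead: Double   // 0...1 through the city-bending scene
    var layerIndex: Int    // 0 = documentation shots ... last = final render
}

// Horizontal hand position scrubs the scene; vertical position picks the layer.
func update(_ state: inout SceneBrowserState, with hand: HandSample, layerCount: Int) {
    let x = min(max(hand.x, 0), 1)
    let y = min(max(hand.y, 0), 1)
    state.playhead = x
    state.layerIndex = min(Int(y * Double(layerCount)), layerCount - 1)
}
```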

We also learn about Alfonso Cuaron’s Gravity and how they built an entire set of environments and tools to capture the actors’ performances and help them empathize with their characters. The installation was shaped like one of their tools, the Light Box, which immersed the actor in a cube of LEDs displaying a rough animation of what a character would see while in action. In the exhibit, several screens are precisely synchronized to display the different steps of a very complex scene where the visitors get entirely immersed. These screens then progressively reunite, revealing the final rendition.

Innovation also happens outside of Hollywood. The interactive documentary Clouds features artists, developers, and hackers talking about their thoughts on code, data, community, goals, and challenges. The documentary is actually a full 3D environment, where interviewees appear in between illustrations created by coders, shot with the RGBDToolkit, which combines a Kinect’s depth data with a DSLR camera. Interview by interview, the viewer can navigate through a large network of topics, here again enjoying a unique, non-linear experience.

Finally, with a bit of a political bent, James Bridle gathers online information about American drone strikes in Afghanistan and Yemen and publishes aerial images of the locations along with the date and a description of the event on an Instagram account named Dronestagram. This creates a virtual space, open to discussion, that brings closer and better reveals events of this nature, which happen in less accessible areas and are rarely covered by the media. I was surprised and excited to learn about this project and to see it in this exhibition. It’s an attempt to showcase a less entertainment-oriented piece aimed at raising debate on socio-political issues in the real world.

Clouds. House of Cards.

Sound+Vision

This section mostly covers how the development of the visual arts, via synesthesia, helped create new experiences in the domain of music for both audiences and musicians. We find Arcade Fire’s The Wilderness Downtown with its multi-window, geolocated interactive music video; Radiohead’s House of Cards music video, shot with laser scanners rather than cameras; Björk’s album Biophilia and the apps that accompany each of its songs; Amon Tobin’s Isam 2.0 show with its impressive projection-mapping performances; and Holly Herndon and Akihiko Taniguchi’s Chorus music video featuring 3D captures of messy work environments. There are so many terrific examples of this kind of work that it’s impossible to be exhaustive, but curator Conrad Bodman has gathered here a solid representative sampling.

I am not sure the setup for watching these videos was ideal, however: a wall of screens looping them as a mosaic, with headphones hanging in front to hear the music. It was hard to appreciate each piece on its own and keep focus on a single video at a time. Nevertheless, it’s a good conversation piece that inspires the visitor to remember other similar works that have moved them in the past.

Beyond music, I discovered Energy Flow, a collaboration between FIELD, Intel, and VICE. It’s an app that explores the deconstruction of narrative by using an algorithm to randomly associate and edit videos and sounds, so that a new story is created every time it is watched. The results are infinite, feel very poetic, and are open to interpretation.

Kinisi. MAN A.

Our Digital Futures + DevArt

Certainly the most experimental part of the exhibit, Our Digital Futures lets artists explore the potential evolution of the relationship between our environment and the human body. Alongside examples from the fashion industry introducing us to 3D-printed materials and wearable computing is Kinisi by Katia Vega, a tech-infused make-up project. Sensors are placed on a person’s specific facial muscles and LEDs are placed on their skin and hair. Digital signals collected by the sensors during specific movements (a blinking eye, an open mouth, raised eyebrows) activate a light sequence on the face. Products like Google Glass are already aiming at introducing facial interfaces, but this goes a step further by starting a conversation about the possibilities of skin and muscles as interfaces.

Back on the subject of synesthesia, we learn about colorblind artist and Cyborg Foundation founder Neil Harbisson who cannot see colors but can hear them via an antenna implanted in his skull. A camera placed at the top of the antenna, facing what he sees, translates the colors into sounds. More than experiencing colors in sounds, he also hears sounds, voices, and music in color.

Another body-enhancement project, the EyeWriter, was created by the Not Impossible Foundation, allowing people with forms of paralysis to write and draw using brain waves and eye tracking. The team has built a low-cost device, now available to everyone. A version of it has been created specifically for the graffiti artist TEMPT1, now entirely paralyzed except for his eyes, so he can continue his artwork.

Exploring the notion of urban camouflage, artists Gibson/Martelli came up with MAN A, an installation and art app where people can, via their smartphone or tablet, reveal invisible tribal dancers from a scene filled with markers at the intersection of camouflage, barcodes, and QR codes.

Finally, close to the waiting lines, people can interact with robots in Petting Zoo, built by Minimaforms. These creatures are artificially intelligent, evolving, and reacting to their environment and human behavior via camera-tracking systems. The interaction is awkwardly intimate with what look like the probes from The War of the Worlds. They don’t hurt and can be playful but also visibly angry. A slightly frustrating low wall is set up to keep humans at a safe distance, making it harder to get close to the creatures.

Universal Everything’s Together

“Digital Revolution” is a truly interactive, well-curated museum exhibition. I sure hope the links above are inspiring and make you want to visit it, should you find yourself in London before mid-September. Indeed, a lot of the pieces shown in the exhibition can be viewed online—isn’t that the case for pretty much all works of art now, anyway?—but the Barbican Centre exhibition is unique enough to be worth experiencing in person.

— Swanny Mouton, Interaction Designer

Posted in Content, Culture, Design, Technology

Three Ways to Talk About the Internet of Things

Sometimes, our work feels like a sort of reverse archaeology. Just as archaeologists examine artifacts of the past to understand how people lived, we often encounter vague ideas that speak to the future and have to make our own interpretations to understand what they actually mean for how we live now.

I like to think of the Internet of Things as one of these “artifacts from the future.” Like a group of archaeologists, I imagine the design and tech community turning it over in their collective mind, asking, “What is it? How was it used? What does it mean?”

I recently had the chance to participate in the Internet of Things Lab, a workshop being offered in collaboration with WebVisions and hosted by Claro. The event asked teams of designers, developers, strategists, and makers to spend two days concepting and prototyping new Internet of Things services.

Taking part in the workshop prompted me to put on my reverse archaeologist hat and examine the Internet of Things a little more closely. What are the different ways we can interpret it?

Here are three ways that I came up with (probably among many):

Interpretation 1: The Internet of Things means that every object around us will be ‘smart,’ connected, and listening.

This, I think, is the most common, technology-driven interpretation one might encounter, and certainly, it is functionally correct. Like digital entities on the internet, everyday objects will be networked and capture data about the environment and about you. Those objects are then enabled to take actions that presumably create a more efficient, carefree existence for people.

But, as Claro pointed out during our Internet of Things Lab, just because we can do things like have our egg trays notify us how many eggs are in the fridge, it doesn’t immediately mean they are meaningful or desirable things to do. This interpretation tells us all about how and what things might change with the Internet of Things, but not what ought to change and why.

And indeed, there’s already a backlash to this somewhat mechanical approach to the Internet of Things, everything from parodies to manifestos. These criticisms raise real concerns about the potential loss of individual agency, privacy, and diversity of experience.

So, if this is an incomplete picture of the Internet of Things, what if we focused less on the “Internet” part and more on the “Things” part?

Interpretation 2: The Internet of Things will enrich and focus our interactions with physical objects and environments.

This interpretation highlights the common roots between the Internet of Things and another long-standing technological concept: ubiquitous computing. In a subtle shift of emphasis, we’re not focused here on how our things can connect to the internet, but on how the internet can serve our things. The introduction of computation into our world provides an opportunity to enrich our interactions with its physicality, rather than distract from it.

Back in 1995, Mark Weiser, considered the father of ubiquitous computing, described an early example of this interpretation in a piece by artist Natalie Jeremijenko simply titled “The Dangling String”:

[T]he ‘Dangling String’ is an 8 foot piece of plastic spaghetti that hangs from a small electric motor mounted in the ceiling. The motor is electrically connected to a nearby Ethernet cable, so that each bit of information that goes past causes a tiny twitch of the motor.

The twitching of this network-enabled piece of string makes concrete something which was previously abstract: the traffic on the local network. In doing so, it alters our perception of both the physical and digital environment.
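
As a thought experiment, the mapping is simple enough to sketch. The fragment below is not Jeremijenko’s implementation; it assumes a hypothetical packetsSeen() counter and Motor interface, and simply turns the packet rate over the last tenth of a second into the strength of a twitch.

```swift
import Foundation

// Hypothetical actuator for the motor the string hangs from.
protocol Motor {
    func twitch(strength: Double)   // 0 = still, 1 = hardest twitch
}

// Poll a packet counter and convert traffic bursts into physical gestures.
// The caller keeps the returned Timer alive for as long as the string runs.
func runDanglingString(motor: Motor, packetsSeen: @escaping () -> Int) -> Timer {
    var lastCount = packetsSeen()
    return Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        let count = packetsSeen()
        let delta = count - lastCount
        lastCount = count
        // A busy network produces a near-constant whir; a quiet one, the odd twitch.
        if delta > 0 {
            motor.twitch(strength: min(Double(delta) / 100.0, 1.0))
        }
    }
}
```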

At Second Story, we’re very interested in how digital artifacts change our relationship to the physical environment. Our lab projects like Lyt and Aurora are, in part, explorations of how network-enabled, responsive environments can enhance our experiences of the physical world.

But there is one more interpretation of the Internet of Things that occurred to me during the workshop.

Interpretation 3: Our culture today exists at a scale and level of complexity that increasingly calls for more thoughtful interconnections and engagements between people and their objects and environments.

This last interpretation brings forward an element of the Internet of Things that is lacking in the previous two: the role of human intentionality and judgment. This is the Internet of People and Things. How can we take a “people first” approach to our technology that empowers us to create useful and meaningful change in the world?

For example, the Good Night Lamp could easily be described by its functionality. It’s a networked set of lamps that are synchronized, so that when the big lamp turns on, the smaller lamps connected to it do the same. However, the product is not interesting because of what it does, but because of what it enables people to do.

The designers of the Good Night Lamp call their product the “first physical social network.” At the core of their product is the idea of enabling people to connect across distances in subtle and delightful ways. You get the sense that it’s about the Internet of People first, and the Internet of Things second.

This interpretation also raises the question of agency. If the point is to put people first in the Internet of Things, shouldn’t everyone feel empowered to participate in its creation and construction? We recently got to play with a set of littleBits in our studio, which provides a nice example of how new platforms and toolkits can foster a world of active participants of the Internet of Things, rather than passive consumers.

***

So which interpretation is right? Well, all of them. Like any real design problem, there are multiple framings that you can take on the Internet of Things, each one equally valid and leading to different observations, principles, and design solutions. As we shape the future meaning of the Internet of Things, it’s worth surveying a healthy diversity of perspectives.

— Norman Lau, Senior Experience Designer


Posted in Culture, Design, Technology

Lessons from a Master: My Meeting with Massimo Vignelli

As part of Second Story’s contribution to AIGA’s 100 Years of Design, creative director David Waingarten and I flew to New York to film and interview design legends. The goal was to gather reflections and insights from industry giants in honor of the organization’s centennial and to leave a record for posterity.

While in New York, we had the great privilege to meet and speak with icons such as Paul Davis, Milton Glaser, Seymour Chwast, and Massimo Vignelli.

When I knocked on the door to Massimo Vignelli’s apartment, it was with great trepidation. I was going to interview a living legend. And I was no designer.

He was dressed impeccably in black and spoke with gracious passion, eloquence, and witty charm.

Yet it was clear during our conversation that Vignelli didn’t simply speak his ideals; he lived them. Design was an extension of his life: his ethics, values, and sense of social responsibility.

And though I cannot pinpoint the exact moment of transition, with a magic all his own, our conversation morphed into a philosophical discussion about life, the nature of design, and how one’s values and ethics ultimately fuel creative work.

I feel deeply humbled to have had the opportunity to meet and speak with Massimo Vignelli. The ideals and insights he shared with me have left a profound impact on my memory.

The video we created for 100 Years of Design offers a glimpse of the tenor of our conversation and Vignelli’s elegant mind.

I would like to share a few additional excerpts from our interview. The following quotes are left intact from the original transcripts to preserve their authenticity and that trace of his enchantingly mingled Italian-English.


“When I said that the life of a designer is a life of fight against ugliness – that is, exactly – a commitment that shows responsibility, social responsibility. It’s not the means to do what people want, it’s the means to give people what people need, and that is important…you have to find out the need.”


“Not everybody has been touched by the grace of creativity and vision and the determination to make better things for the world. But the people, they won’t make things better for themselves. Greed is the worst enemy, you know, of good design.”


“If there is an intellectual value, chances are there is an intellectual elegance in it or beauty, you know. […] I don’t think any person that is intellectually sophisticated can do a bad thing, an ugly thing. They will always have a drop of intelligence and that is enough to make it beautiful, because the only thing which is beautiful is intelligence, really, at the end.”

— Vanessa Patchett, A/V Producer

Posted in Content, Culture, Design

An Alternate (Augmented) Reality

We’ve been experimenting with augmented reality in our lab recently and have developed a mobile experience that overlays content depending on the observer’s perspective. Our goal was to make a simple and clean demonstration of how digital information can augment real world objects. In addition, we wanted to come up with a new way of interacting with mobile devices that steps away from touching a screen or using the inertial motion sensor (accelerometer).

We created an interaction mechanism that we are calling “line-of-sight activation.” As the mobile device viewing angle changes, different content is activated. This type of interaction encourages exploration: once a user realizes that a change in perspective signals a change in what content is revealed to them, they tend to play around more, curious about uncovering something new.
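
Here is a minimal sketch of the idea, assuming the AR tracking layer (outside the scope of this sketch) reports the camera’s position relative to the tracked object: the viewing angle around the object is bucketed into zones, and each zone activates a different piece of content.

```swift
import Foundation

// Map the observer's viewing angle around a tracked object to a content zone.
func activeContentIndex(cameraPosition: SIMD3<Double>,
                        objectPosition: SIMD3<Double>,
                        zoneCount: Int) -> Int {
    // Direction from the object toward the observer, taken on the ground plane.
    let toCamera = cameraPosition - objectPosition
    let angle = atan2(toCamera.z, toCamera.x)                   // -pi ... pi
    let normalized = (angle + Double.pi) / (2 * Double.pi)      // 0 ... 1 around the object
    // One piece of content per equal slice of the circle around the object.
    return min(Int(normalized * Double(zoneCount)), zoneCount - 1)
}
```

In practice, a little hysteresis at the zone boundaries keeps the content from flickering when the observer hovers between two lines of sight, which also rewards the deliberate, exploratory movement described above.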

Line-of-sight activation allows for novel ways to tell stories around an object, be it a sculpture in a museum or a product in a store. It’s exciting to see the evolution of augmented reality and the opportunities for storytelling it provides.

— Dimitrii Pokrovskii, Interactive Developer

Posted in Culture, Technology

Our First FMX Experience

As busy creative professionals, it can be easy to lose sight of the impact that we have on audiences. We often find ourselves caught up in the day-to-day demands of projects and schedules and fall out of touch with the bigger picture of why we do what we do. Nothing can correct this trend better than a few days surrounded by colleagues, peers, and talented students who share our interests and passion for storytelling and entertaining audiences. The right conference brings these elements together and produces more than a collection of presentations; it connects and inspires us in ways that cannot be measured. The collaborative atmosphere encourages us to reach beyond the comfortable and safe boundaries of what we do and strive for the next steps that will define the future of the industry. The FMX 2014 conference in Stuttgart, Germany, provided this atmosphere and much more for all of those who attended and contributed to the show.

Thousands of the industry’s brightest and most talented individuals came together from 48 countries to meet and share their work and ideas at FMX. Short for “Film and Media Exchange,” FMX has historically focused on the film, animation, effects, and gaming industries, but new tracks were introduced this year that brought transmedia and physical interactives to the program. These subjects balanced the show with experiences that reach beyond the screen.

I was humbled and excited to be included with several talented presenters in the “Interaction in the Real World” portion of the FMX program hosted by Doug Cooper of DreamWorks Animation. My presentation, “Responsive Environments: Blurring the Lines Between Physical and Digital Worlds,” introduced the concepts of more open-ended (non-linear) storytelling experiences and the creation of rich environments that can envelop audiences in layers of narrative. The opportunity to contribute to the show and share our work was rewarding and exciting, and exposing the audience to real-world examples of our work and processes resonated well with the conference.

With so many great demonstrations and presentations, it was difficult to pick out personal highlights, but here are a few that stuck with me: Alex Meagher Grau presented his studio’s work and the process behind the creation of 360-degree immersive media (stories that play out through VR headsets and allow viewers to see any portion of the presentation they choose by looking around in real time as the story plays out). Tobias Kinnebrew of Bot & Dolly presented his unique work which combines live performance, projection mapping, and giant industrial robots. Alex McDowell led several engaging discussions about new educational models and the future of animation production, including the new tools and collaborative methods for bringing together increasingly diverse groups of creative professionals spread out around the world to create award-winning films and media productions.

The creative energy present at this year’s FMX show was contagious and provided the opportunity to raise our heads above the fray of day-to-day work to catch a glimpse of a bright and exciting future of the media industry. Students and professionals alike came away with renewed inspiration and passion for our work and its impact on audiences. If that doesn’t define a great conference, I’m not sure what does! Many thanks to the committees and organizers for including us and providing a star-studded and highlight-filled week of workshops, presentations, and media at this year’s FMX show.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Life as a Google Glass Explorer

Second Story has recently gotten its hands on a Google Glass. In order to improve our knowledge about heads-up displays, we decided to let whoever was interested use Glass for a day.

We all expected the full “gadget” potential of Glass: map navigation, the ability to search for specific information, even the opportunity to play target-practice games. This gave plentiful insight into the user experience, the effectiveness of the technology, and its responsiveness. But there was another perspective we discovered: what is Google Glass like as a creative tool?

One of the most natural things to do with Google Glass is to capture pictures and video, creating photographs of what the user is seeing at eye level. If you get really keen with Glass, you can do this discreetly just by winking your eye—which has its own uncomfortable implications. Looking through a day’s worth of Glass-ing is strangely insightful; when taking a picture, you have essentially zero control over lighting, composition, or even the exact moment at which the picture is taken. With all of the usual foundations of photography out of your hands, what you are left with is a pure moment, an experience captured with minimal intervention.

The trend of point-of-view photography is hot right now, mostly due to the accessible prices of the GoPro. Glass is aware of this potential as well, advertising with footage of acrobats falling into each other’s arms, pilots doing barrel rolls, and people roaring down roller coasters. For those of us who live slightly less action-packed lives, are we able to create thoughtful—or thoughtless—photography without depending on a “Wow” factor? As first-hand Glass photographers, we began finding profundity in the ordinary.

The ability to capture point-of-view photography in a user’s mundane day has the power to change the way we see the world and the way the world sees us. We are not only able to tell a story literally as we see it, but we get to share the parts of our everyday that are notable not for their aesthetic beauty but for their essence of the moment. Whether or not that moment is worth photographing is up to the creator, as we become inspired by experiences we are living as opposed to scenes we want to compose.

Glass also changes the way in which we photograph subjects. Without a physical camera held between photographer and subject, the photographer is able to both act in and direct the photograph. Instead of gazing into a device, the subject is looking into the eyes of the photographer, adding an additional level to the story. What is the subject reacting to? What is the relationship between the subject and the Glass-wearer?

We like to imagine how Glass will change the way we consume and tell stories. As Makers and amateur Glass photographers, we see this technology as a way to create with more intimacy and less interruption, blurring the lines between moments we have lived and moments we have observed.

— Kirsten Southwell, Experience Designer

Glass photography by Kirsten Southwell, Norman Lau, and Dimitrii Pokrovskii.

Posted in Culture, Technology