Life as a Google Glass Explorer

Second Story recently got its hands on a Google Glass. To deepen our knowledge of heads-up displays, we decided to let anyone who was interested use Glass for a day.

We all expected the full “gadget” potential of Glass: map navigation, the ability to search for specific information, even the opportunity to play target-practice games. Exploring these features gave us plenty of insight into the user experience, the effectiveness of the technology, and its responsiveness. But we discovered another perspective along the way: what is Google Glass like as a creative tool?

[Glass photo by Kirsten Southwell]

One of the most natural things to do with Google Glass is to capture pictures and video, creating photographs of exactly what the wearer sees at eye level. If you get really keen with Glass, you can do this discreetly just by winking—which has its own uncomfortable implications. Looking through a day’s worth of Glass-ing is strangely insightful; when taking a picture, you have essentially no control over lighting, composition, or even the exact moment at which the picture is taken. With the usual foundations of photography stripped away, what you are left with is a pure moment, an experience captured with minimal intervention.

[Glass photo by Kirsten Southwell]

Point-of-view photography is hot right now, largely thanks to the accessible price of the GoPro. Google is clearly aware of this potential as well, advertising Glass with footage of acrobats falling into each other’s arms, pilots doing barrel rolls, and people roaring down roller coasters. For those of us who live slightly less action-packed lives, can we create thoughtful—or thoughtless—photography without depending on a “Wow” factor? As first-hand Glass photographers, we began finding profundity in the ordinary.

[Glass photos by Dimitrii Pokrovskii and Kirsten Southwell]

The ability to capture point-of-view photography in a user’s mundane day has the power to change the way we see the world and the way the world sees us. We are not only able to tell a story literally as we see it, but we also get to share the parts of our everyday lives that are notable not for their aesthetic beauty but for the essence of the moment. Whether or not that moment is worth photographing is up to the creator, as we become inspired by the experiences we are living rather than the scenes we want to compose.

Glass also changes the way we photograph subjects. Without a physical camera held between photographer and subject, the photographer can both act in and direct the photograph. Instead of gazing into a device, the subject is looking into the eyes of the photographer, adding another layer to the story. What is the subject reacting to? What is the relationship between the subject and the Glass wearer?

[Glass photos by Kirsten Southwell and Norman Lau]

We like to imagine how Glass will change the way we consume and tell stories. As Makers and amateur Glass photographers, we see this technology as a way to create with more intimacy and less interruption, blurring the lines between moments we have lived and moments we have observed.

— Kirsten Southwell, Experience Designer

Glass photography by Kirsten Southwell, Norman Lau, and Dimitrii Pokrovskii.

Posted in Culture, Technology

Passing by the Wave

One of the keys to a successful interactive experience is providing a little something for everyone. Typically, members of the audience for an interactive installation will vary in their desire to invest time and attention. An individual may have a keen interest in delving into the nooks and crannies of a subject—say cubist architecture—or they may just walk by, see someone else interact with the experience, and decide to watch them briefly from afar.

When designing and developing an experience, it’s important to consider the “just passing by” audience member. In a museum or cultural institution setting, it is precisely the casual observer, the first-time visitor, the non-expert, who we want to educate, inform, and expose to our subject. Look here! This is why you should care about cubist architecture!

The most important thing is to engage the visitor, even temporarily, in a positive fashion. These itinerant visitors, wandering from exhibit to exhibit, display to display, must be catered to on their own terms: they want something they can appreciate in very little time, with little or no interaction, and from a distance.

Recently, working with the Foundation for the National Archives in Washington D.C., we created an experience consisting of a 15ft interactive touch table with proximity sensors, flanked by two mosaic walls with multiple displays.

[Diagram: wall layout with the wave]

The experience was designed to showcase documents, multimedia, and history related to the issues of civil and human rights in America. The table allows for up to 12 people to interact with it simultaneously, browsing through a series of timelines, exploring the National Archives’ extensive collection of primary source materials, and sharing their reactions to those records with others on the mosaic walls. You can take a look at a video demo, more images, and a description of the project on its portfolio page on our website.

 

During the concepting and ideation phase, we wanted to come up with a unifying element to make the table—which actually consisted of six 55” displays with two PQ Labs touch overlays—feel like a single entity, and, most importantly, engage the interest of passersby. In the end, we came up with the idea of a series of lines that would undulate seamlessly across the displays from one end of the table to the other. The lines’ sinuous motion serves as a metaphor for the fluidity of ideas, their contour-like geological representation evokes a sensation of the weight and momentum of history, and, as waves collide with each other, the patterns the lines generate speak to the complexity that can be created from simplicity.

This element we simply called “The Wave.” The wave, we decided, would flow by itself, but users would be able to interact with and excite it. It would also provide a large, beautiful, animated, easily accessible visual element ready to engage users from afar.

[Image: the wave spanning multiple displays]

In creating the wave, there were two primary challenges. The first was the question of how it would behave; each member of the project team had an idea about how the wave should look and feel. The second challenge was to make sure that the wave would propagate seamlessly across displays. Each display was being run by a separate computer, so somehow all the computers had to be informed about the motion of the wave.

The first problem was solved by a little mathematical graphing and some prototyping in our lab. Initially, I considered physically modeling a wave. It quickly became apparent, however, that this would be computationally expensive, and that the amount of data that would have to be passed from display to display to keep the waves in sync would be too high. After all, the only thing our wave really had to do was look like a wave, and, at its most basic, a wave is just an oscillating function, something like a sine wave:

[Graph: a single sine wave]

But that’s boring. Here’s a tidbit that’s not boring: a periodic function (a function that repeats itself) can be recreated as a sum of sine and cosine functions, which is called a Fourier series. What this meant for us was that any wave shape we wanted was achievable by adding a number of sine waves together. Here’s an example of what happened when we combined a few to make a more interesting shape:

[Graph: several sine waves summed into a more interesting shape]

Finally, we didn’t want the wave to repeat forever, so we multiplied it by a pulse function to get something like this:

[Graph: the summed sine waves multiplied by a pulse function]

Here’s a little animation of three sine waves and a pulse function, with a variety of variables changing randomly; you can see that you do indeed get some organic shapes in there. This is a variation of the algorithm we ended up using to develop the wave.
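To make that concrete, here’s a minimal sketch of the idea in C++ (an illustration, not the production code): a handful of sine terms summed together and shaped by a traveling, bell-shaped pulse so the wave fades in and out. The amplitudes, wavelengths, and envelope values here are made up for the example.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const double kPi = 3.14159265358979323846;

// One sinusoidal component of the wave.
struct SineTerm {
    double amplitude;   // height of this component, in pixels
    double wavelength;  // distance between crests, in pixels
    double speed;       // how fast the component travels, in pixels per second
    double phase;       // phase offset, in radians
};

// A bell-shaped pulse so the wave dies out instead of repeating forever.
double pulse(double x, double center, double width) {
    double d = (x - center) / width;
    return std::exp(-d * d);
}

// Height of the wave at horizontal position x (pixels) and time t (seconds):
// the sum of the sine terms, shaped by a pulse that travels with the wave.
double waveHeight(const std::vector<SineTerm>& terms, double x, double t,
                  double pulseCenter, double pulseWidth, double pulseSpeed) {
    double sum = 0.0;
    for (const SineTerm& s : terms) {
        double k = 2.0 * kPi / s.wavelength;  // spatial frequency
        sum += s.amplitude * std::sin(k * (x - s.speed * t) + s.phase);
    }
    return sum * pulse(x, pulseCenter + pulseSpeed * t, pulseWidth);
}

int main() {
    std::vector<SineTerm> terms = {
        {30.0, 400.0, 120.0, 0.0},  // long, slow swell
        {12.0, 150.0, 200.0, 1.3},  // medium ripple
        {5.0,   60.0, 350.0, 2.1},  // fine detail
    };
    // Sample the wave across a 1920-pixel-wide display at t = 2 seconds.
    for (int x = 0; x < 1920; x += 240) {
        std::printf("x = %4d   y = %+7.2f\n", x,
                    waveHeight(terms, x, 2.0, 0.0, 600.0, 300.0));
    }
    return 0;
}
```

Because the whole shape is defined by a few numbers, each display only needs those parameters and a notion of time to draw its own slice of the wave.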

And here is an early prototype of the wave:

The final complication was how to make sure the wave was synchronized over multiple displays. Because of the way the waves were created, the only data we needed to communicate across displays was the current animation frame, when the wave was created, how long it would live, and a few wave parameters (wavelength, speed, etc.). The hard part was making sure that every display knew which frame it was supposed to be on. You can’t just tell every display computer to render the wave “NOW,” because that message takes time to travel over the network from the computer doing the telling (the server) to the display computer (the client). This is called latency. One way to go about it would be to make sure that every computer kept track of time identically; then you could tell each computer that at time “x” it should play frame “y,” and it could extrapolate which frame to render based on what time it thought it was. However, clocks can drift, and since synchronization needs to be accurate to within a few tens of milliseconds, whatever solution I came up with had to factor in that drift.
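Here’s a rough sketch of that extrapolation step (my own illustration, not code from the actual installation): assuming the server has already estimated each client’s clock offset, every client can convert its local clock into “server time” and derive the frame index on its own.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

// Milliseconds elapsed on this machine's monotonic clock.
static double localNowMs() {
    using namespace std::chrono;
    return duration<double, std::milli>(
        steady_clock::now().time_since_epoch()).count();
}

// Values the server shares with every client when a wave is spawned.
struct WaveTiming {
    double startServerTimeMs;  // server timestamp at which frame 0 plays
    double frameDurationMs;    // e.g. 1000.0 / 60.0 for 60 fps
    int64_t totalFrames;       // how long the wave lives
};

// clockOffsetMs is the server's estimate of (server time - client time),
// refreshed periodically so latency and clock drift stay accounted for.
int64_t currentFrame(const WaveTiming& w, double clockOffsetMs) {
    double serverNowMs = localNowMs() + clockOffsetMs;
    double elapsedMs = serverNowMs - w.startServerTimeMs;
    if (elapsedMs < 0) return 0;  // the wave hasn't reached this client yet
    int64_t frame = static_cast<int64_t>(elapsedMs / w.frameDurationMs);
    return frame < w.totalFrames ? frame : w.totalFrames - 1;
}

int main() {
    // A ten-second wave at 60 fps, starting half a second from "now."
    WaveTiming wave{localNowMs() + 500.0, 1000.0 / 60.0, 600};
    double offsetFromServer = 0.0;  // would come from the sync handshake
    std::printf("rendering frame %lld\n",
                static_cast<long long>(currentFrame(wave, offsetFromServer)));
    return 0;
}
```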

In order to tackle these issues, I created a synchronization tool called “All The Screens.” Client computers registered with a server, and the server calculated network latency (delays) and time drift and provided those clients with a way to determine what frame they should be rendering.  This solution has been open-sourced and can be found on GitHub. There is also a Google Chrome demo of the technology here.

These technical solutions allowed us to create the wave, whose mesmerizing motion lures visitors in to learn more about the history of civil and human rights in America. And, for that happy-go-lucky stroller who doesn’t have the time or inclination to delve into the content, perhaps the wave serves as a source of soothing visual relaxation, a counterpoint to the hustle and bustle of busy downtown DC.

— Donald Richardson, Senior Interactive Developer

Posted in Design, Technology

Mobile Case Study

Second Story is deepening its physical design and environments practice by offering industrial design services to our clientele. We aim to be innovative, designing physical solutions to elevate digital interactive experiences, but our work sometimes requires practical, engineered solutions to package digital content in simple ways that are meaningful to the overall audience experience.

We’re always excited by design challenges that let us get our hands dirty. When a recent mobile application project presented the need for packaging design, we brought manufacturing processes into the studio. Our team came together in impressive fashion, with staff members from every discipline collaborating on an efficient assembly line that delivered an immediate yet stylish solution for our client’s needs.

— Jordan Tull, Designer

Posted in Culture, Design

Finding The Heart of “100 Years of Design”

[Photo: whiteboard sketches]

Last May, we had the opportunity to partner with AIGA, a longtime collaborator and dream client, on a new microsite to commemorate their centennial and celebrate the last 100 years of American design. Our first reaction was excitement: as designers who pride ourselves in our discipline and our history, we were honored to craft stories that include some of the world’s most influential designers and their work. Our second reaction: where do we start?

At Second Story, we often describe our process as “designing from the inside out.” As we thought about AIGA and what made it special, it became clear that the organization sits at the epicenter of the conversation between design and society. This simple diagram was our first attempt to show how that conversation, and the artifacts in AIGA’s archives, could become the lens for the site.

[Diagram: levels of engagement]

Building a project’s foundation is one of the most challenging and exhilarating points in our creative process. We refer to this discovery as finding the heart—the one truth of the project that will never change. The “heart” is the story that the experience is begging to bring to life. Creative Director David Waingarten has described the task of finding and articulating this conceptual foundation as “being the first to walk into a dark room and look for the light switch.”


To find the heart of the AIGA Centennial project, we fully immersed ourselves in the content. We delved into the vast collection of artifacts in AIGA’s Design Archives, combed through articles from diverse voices in the design community, and looked at other retrospectives, critiques, and blog posts. In our quest for enlightenment, we noticed there was little discussion of design history that was not organized by time, form, medium, or discipline. While these ways of presenting design history are informative and educational, we wanted to create a living resource that captures the ever-evolving conversation between design and society and invites everyone deeper into it.

As we were having this discussion, our collaborators at AIGA pointed us to “No More Heroes,” a poignant article from a 1992 issue of Eye magazine that really spoke to us. This quote from Bridget Wilkins was especially inspirational to our conceptual development:

[Image: quote from Bridget Wilkins]

With AIGA’s guidance and after countless thought-model sketches and “what if!” epiphanies, we landed on a framework that gives diverse audiences a new way to look at and evaluate great design. We organized the stories by design intent, allowing the purpose of each artifact to be revealed to the visitor. Intention is what defines design, and as Milton Glaser so eloquently states: “The best definition I have ever heard about design and the simplest one is moving from an existing condition to a preferred one, and that is a kind of symbolic way of saying you have a plan because the existing condition does not suffice.”

We had to consider how to make this story framework exciting and accessible for guests with varied knowledge of design. It couldn’t overwhelm the general public, but it also had to meet, if not exceed, the expectations of design enthusiasts and practitioners. To strike this balance, we created an experience with two layers. At the surface layer, visitors can view carefully curated artifacts, quotes, and videos, and listen to audio clips. Those who are interested can go a level deeper to see additional artifacts, designer profiles, and moments from AIGA’s history. With 11 videos, 26 audio clips, 120 design artifacts, 17 designer profiles, 15 AIGA historical moments, and 19 quotes, there’s a wealth of content for visitors of all backgrounds to explore.


AIGA also wanted to extend the conversation to ensure that the microsite became a meaningful record of this time in design’s history. To foster discussion and participation, we needed an engaging prompt. How could we ask a stimulating, meaningful question without leaving guests lost or spending twenty minutes crafting a response? We spent a surprising amount of ideation on those six words: weighing the reactions to words like “think” versus “feel,” and testing whether users were more comfortable contributing in the first person (“I am connected by design that…”) or from a general perspective on design (“Design that connects is…”). We settled on a phrase that could be applied across all five intents and that allowed guests to choose an intention and add their own thoughts and images.

The results have been incredible to watch. Each day the conversation grows, with over 700 user contributions and counting. We are thrilled with the final site and hope the experience engages a broad audience in a dynamic conversation about the role of design in our society and everyday lives. We encourage you to explore these narratives and add your voice to celebrate the evolution and impact of American design over the last 100 years.

Our studio is forever grateful to AIGA for giving us the opportunity to be part of such an incredible moment in design history.


— Laura Allcorn, Senior Content Strategist & Kirsten Southwell, Experience Designer

Posted in Content, Culture, Design

You Can’t Go Wrong With 8,294,400 Interactive Pixels

As Ultra High Definition (UHD) displays become more readily available, we will begin to see the technology adopted in many ways. We are most interested in the new standard because it will have a direct impact on the way we design and display interactive content. We have been developing applications that run at resolutions similar to the 3840×2160 offered by UHD displays for some time (that’s 3840 × 2160 = 8,294,400 pixels, the number in this post’s title), but we have been forced to show them on multiple HD displays, which introduce visible seams when tiled together. With the advent of the UHD display, we can now combine scale and fidelity in the presentation of our media on a single seamless display.

With interactive media, the scale of a display can act as a beacon, enticing potential users to come closer and explore content. A large-scale display also accommodates more users at a time, inviting collaboration, especially on a horizontal (table) surface where people are brought face-to-face with those across from them as they interact with the media.

But scale isn’t everything. By its nature, interactive content has to remain legible when viewed at an arm’s length as users touch and interact with the surface of the display. At this close range, most large displays don’t have the fidelity to carry type and subtle graphics. It is here that the UHD resolution succeeds where other displays fall short. At about 50 pixels per inch (ppi), the Planar UR8450 displays offer precise pixels and legible content even when viewed up close.

With this display, we can already begin to imagine a future where the notion of a pixel is no longer considered. Today, we can see this in relatively small Retina Displays where ppi counts surpass 300 and individual pixels seem to disappear. We look forward to a time when this type of fidelity will become ubiquitous on large and small displays. Increased legibility will allow content to be displayed at any scale and orientation, opening up new modes of interaction. Displays will become a window through which emotive, high resolution content will be displayed, bringing stories to life in new ways.

— Matt Arnold, Lead Integration Engineer

Posted in Technology

Mobile Depth-Sensing

From the Protecting the Secret interactive at the Vault of the Secret Formula exhibit to the Connections Wall at the Emerging Issues Commons, Second Story regularly uses Kinect and other similar technologies to create dynamic content based on sensing where users are in physical space. In the past, the mobile use of these sensors has been restricted by their need to be tethered to powerful “desktop” CPUs; we’ve had to use USB signal extenders and dedicated wiring to mitigate these constraints.

But developments in computing are giving us enhanced flexibility. The latest ARM processors are small, portable, and powerful, and we’ve been experimenting with using them to process depth data right from the sensor’s location.

The processors can be powered over Ethernet (CAT-5) or even by battery, depending on the use case, which makes deployment easy, and they start working automatically as soon as they’re powered on, eliminating the need for an external display. Depending on available bandwidth, they can send data over WiFi or regular CAT-5 to more powerful CPUs that handle the data interpretation.
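As a sketch of that split (illustrative only: the depth grabber below is a stand-in for whatever sensor SDK runs on the board, and the address and port are arbitrary), the ARM board can boil each depth frame down to something tiny, like the centroid of nearby pixels, and push just that over UDP to the machine doing the interpretation.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Placeholder for the sensor SDK running on the ARM board; here it just
// returns an empty frame so the sketch compiles.
std::vector<uint16_t> grabDepthFrame(int width, int height) {
    return std::vector<uint16_t>(width * height, 0);  // depth in millimeters
}

int main() {
    const int W = 512, H = 424;

    // UDP socket aimed at the "big" machine that interprets the data.
    // The address and port here are arbitrary.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);
    inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr);

    for (;;) {
        std::vector<uint16_t> depth = grabDepthFrame(W, H);

        // Reduce the frame to the centroid of "near" pixels (anything within
        // two meters), so we ship a dozen bytes instead of a whole frame.
        double sx = 0, sy = 0;
        long count = 0;
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                uint16_t d = depth[y * W + x];
                if (d > 0 && d < 2000) { sx += x; sy += y; ++count; }
            }
        }

        float msg[3] = {count ? float(sx / count) : -1.0f,
                        count ? float(sy / count) : -1.0f,
                        float(count)};
        sendto(sock, msg, sizeof(msg), 0,
               reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
        usleep(33000);  // roughly 30 updates per second
    }
    close(sock);  // unreachable in this endless sketch, shown for completeness
}
```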

The sky’s the limit with these little guys. We can’t wait to explore the possibilities.

— Sorob Louie, Interactive Developer

Posted in Technology

100 Years of Design

In 1914, a small group of designers inaugurated what became the American Institute of Graphic Arts. One hundred years later, Second Story has collaborated with AIGA to create a centennial microsite that celebrates the profound impact design has had on our society over the last century and invites everyone into a conversation about the impact of design on our daily lives.

Our first task was to collaborate with AIGA on curating a set of works to illustrate the breadth, diversity, and evolution of American design over the last century. We also wanted to present these works differently than a typical retrospective might: focusing on the “why” instead of the “how,” and exploring the intentions behind the works rather than simply categorizing them by medium, style, or geography, or plotting them on a timeline.

[Image: the five design intentions]

These design intentions became the core of the site: five media-rich narratives focused on how design connects, informs, assists, delights, and influences us.

[Image: design boards]

We also see this microsite as a time capsule that successive generations of designers might open in 2114, as AIGA celebrates 200 years. Knowing that the tools and methods those designers employ will have evolved far beyond what any of us can imagine today, what kernels of truth or wisdom from AIGA’s first century could this site preserve and pass on?

To find answers, our film crew captured the oral histories of 18 living legends of American design. We asked these designers to comment and reflect on their own seminal works, the arc of their careers, and the lessons they’d like to pass on to future generations. Their answers were humble, straightforward, hilarious, heartfelt, and enlightening. Being present with the likes of Paula Scher, Milton Glaser, Richard Saul Wurman, Jessica Helfand, Michael Bierut, Seymour Chwast, and many others was an incredible honor. Their stories and insights bring this content and conversation to life in a way nothing else could.

Most importantly, we wanted to invite everyone to the party. So we created a way for people to share how design connects, informs, assists, delights, and influences them today. Contributions are already pouring in, and we are thrilled to see such a diverse range of responses.

Centennials offer us a chance to look back at where we’ve been, to recognize a shared history and inheritance, and to appreciate the evolutionary continuum that connects those designers in 1914 to us here and now. They also give us the chance to look forward, to take what we’ve learned in new directions and ask what’s next. As part of the team that spent over eight months bringing this microsite to life, I can say that looking back has taught us a tremendous amount about design’s role in shaping how we see our world, ourselves, and each other. Looking forward, this project has made us look deeply at what motivates us to do the work we do, and it has rededicated us to bringing those intentions to life in everything we create.

— David Waingarten, Creative Director, Storytelling

Posted in Content, Culture, Design

Leap Motion Path

Leap Motion Path is a Second Story lab experiment exploring the use cases for the Leap Motion controller in the digital animation field. Our objective was to create a tool to capture 3D motion that could be used within an animation platform to control digital content.

One of our goals was to record an animation without the assistance of a keyboard or mouse. To achieve this, we needed a way for the animator to know where their hand was in relation to a recording canvas. We accomplished this by creating an inset frame that gives the animator space to interact with Leap Motion and a spatial reference for where their hand sits within the screen. Once they’re ready, they can enter the frame and start recording. Later, in the animation software, they can remove the entry and exit points by cropping the recording.

[Image: the Leap Motion Path recording canvas]

During this experiment, we encountered another interesting use case. Leap Motion provides a lot of data about the geometry of the hand. If you capture the position of the animator’s wrist and the tip of the index finger and draw a line between those points, you end up with a vector that indicates the direction of the hand. If you capture this vector over time, you see that it produces a beautiful ribbon. The animator can record this ribbon and use it in other animations.
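In code, that direction vector is just a subtraction. The sketch below uses a made-up Vec3 type standing in for the SDK’s vector class; accumulating one sample per tracking frame is what traces out the ribbon.

```cpp
#include <cmath>
#include <vector>

// Minimal stand-in for the vector type a tracking SDK would provide.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

Vec3 normalized(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : Vec3{0, 0, 0};
}

// One slice of the ribbon: where the hand was, and which way it pointed.
struct RibbonSample {
    Vec3 wrist;
    Vec3 direction;
    double timestamp;
};

// Called once per tracking frame with the wrist and index-fingertip positions
// reported by the sensor; the accumulated samples trace out the ribbon.
void addRibbonSample(std::vector<RibbonSample>& ribbon, const Vec3& wrist,
                     const Vec3& indexTip, double now) {
    ribbon.push_back({wrist, normalized(indexTip - wrist), now});
}
```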

[Image: a ribbon traced by the hand’s direction vector over time]

As we developed Leap Motion Path, several stand-alone libraries came to be. We’ve hosted one, called Path.js, on GitHub. To provide some additional context: we wanted to capture a 3D position over time and then animate along the path we recorded. If we were to animate along only the finite points Leap gives us, the animation would be choppy, because the resolution of those points wouldn’t resemble the actual path we drew with our hand. To combat this, we needed to interpolate a line or curve between the points for a finer resolution, so we could animate at any speed. Path.js takes a collection of timestamped points and creates a linear interpolation between them, which lets Leap Motion Path export an animation in vector format so the animator can scale and stretch it as desired.
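The interpolation itself is simple. Here’s the gist as a C++ sketch (Path.js itself is JavaScript, and its actual API may differ): given timestamped samples, any in-between time maps to a point on the straight line between its two nearest neighbors.

```cpp
#include <algorithm>
#include <vector>

struct TimedPoint {
    double t;        // timestamp, in seconds
    double x, y, z;  // position reported by the sensor
};

// Linearly interpolate the recorded path at an arbitrary time. The samples
// must be non-empty and sorted by timestamp; times outside the recording
// clamp to the first or last point.
TimedPoint samplePath(const std::vector<TimedPoint>& path, double t) {
    if (t <= path.front().t) return path.front();
    if (t >= path.back().t) return path.back();

    // First sample with a timestamp >= t, and the sample just before it.
    auto hi = std::lower_bound(
        path.begin(), path.end(), t,
        [](const TimedPoint& p, double time) { return p.t < time; });
    auto lo = hi - 1;

    double u = (t - lo->t) / (hi->t - lo->t);  // 0..1 between the neighbors
    return {t,
            lo->x + u * (hi->x - lo->x),
            lo->y + u * (hi->y - lo->y),
            lo->z + u * (hi->z - lo->z)};
}
```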

With more development, Leap Motion Path could be integrated into a standard digital animation workflow, giving animators one more tool to create beautiful and lifelike work. Moving forward, to improve the motion-capture experience, we would need to rewrite the recording mechanism as a plugin for an animation platform, enabling the animator to record and review everything in one application. We look forward to integrating Leap Motion Path into our own animation workflow at Second Story.

— Dimitrii Pokrovskii, Interactive Developer

Posted in Technology

Unboxing the Kinect for Windows v2

[Photo: Kinect for Windows v2]

We started experimenting with the Kinect for Windows v2 from Microsoft this week and are already excited by the new possibilities that this impressive new depth-sensing camera offers. Mechanically, the camera is a bit larger and bulkier than we would like, but it also features a tripod mount adapter (threaded insert) which will go a long way towards helping us incorporate the sensor into different environments.

Among the many improvements, we quickly found that the new camera was able to sense just about as many people as we could fit in its (expanded) field of view. Tracking has also improved, allowing skeletal data to be captured in a variety of poses (sitting, prone). We’re also excited by the new level of detail in each pose, which includes basic hand and finger tracking.

The depth image returned by the camera also shows significant improvements in speed and image fidelity. This depth data allows for the capture of image details previously unattainable with the Kinect.

[Image: raw depth image from the Kinect for Windows v2]

As we continue to experiment with and adopt the technologies the future is bringing, we couldn’t help but pause to thank the engineers and scientists at Microsoft who made this one possible. It is technologies like this that enable us to create new and inspiring experiences.

With that, we leave you with this final message for this season:

[Image: Happy Holidays]

— Matt Arnold, Lead Integration Engineer

Posted in Technology

How We Built Lyt: a Technical View of the Making Process

At Second Story, we all share a passion for making and crafting. Our collaborations have taken lots of different forms, from a robot that prints tattoos to a floor-to-ceiling interactive sculpture to an imaginative birdhouse. We thrive in the vibrant designer community here in Portland, and our local involvement enabled us to meet members of the Intel Labs team who gave us a great opportunity to do some tinkering. They were about to release the Galileo board (see the specs here) and asked us to come up with a demo showing what the board could do. It would then be presented in Rome for the European version of the Maker Faire. After a few weeks of furious designing, prototyping, fabricating, and testing, Lyt, a collaborative, interactive lighting fixture, was born.

Resources

If you’re one of the lucky few who sees through the matrix, source code and build instructions for this project can be found on GitHub; the README over there contains a pretty detailed description of the overall architecture as well as how the various components interact together.

If you’re a more visual person, you can watch the making of Lyt on The Creators Project website.

Daniel Meyers, Creative Director, Environments, wrote a great blog post back in October about his thoughts on the project, but I wanted to provide some additional information about our process from a technical perspective. Here we go!

Ideation

[Image: early Lyt sketches]

The Galileo board is an interesting mix: it’s both Linux-based and Arduino-compatible, offering developers the opportunity to play with advanced tools created by the Linux community and electronics components and shields from the Maker community. We tried to pick the best from both worlds for our prototype.

In terms of concept, we found inspiration in a couple of previous lab projects: Aurora, which incorporated LED mesh, and Real Fast Draw, a collaborative drawing application. After a couple of iterations, we knew what we wanted to make: an LED mesh from scratch that you could draw on.

Picking out the hardware


Once we decided to make an LED wall “display,” we had to find the right LEDs. Nowadays, you can buy strips by the meter, with densities ranging from 32 LED/m to 64 LED/m (the density sets the pitch, the distance between two LEDs). Another factor to consider is the driver chip controlling the LEDs; the predominant ones are the LPD8806 and the WS2801. These nifty controllers let you address every single LED on the strip independently. Thanks to the wonderful world of Arduino, you can buy these strips from Adafruit or SparkFun, and both provide Arduino libraries to drive them.

After some experimentation, we decided to go with a 32 LED/m strip driven by the WS2801. A smaller pitch would have meant a bigger power draw and required a more precise data signal (some people have successfully run a 64 LED/m strip on a Netduino, but we didn’t want to lose too much time figuring that out when the other strip worked right out of the box).
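For anyone curious what driving one of these strips looks like, an Arduino sketch built on Adafruit’s WS2801 library has roughly this shape (the pin setup and test pattern here are our own illustration, not the Lyt code):

```cpp
#include <SPI.h>
#include <Adafruit_WS2801.h>

// 12 m of 32 LED/m strip is roughly 384 addressable pixels.
const uint16_t NUM_PIXELS = 384;

// Hardware-SPI constructor: clock and data go to the board's SPI pins.
Adafruit_WS2801 strip(NUM_PIXELS);

void setup() {
  strip.begin();
  strip.show();  // push an all-black buffer so the strip starts dark
}

void loop() {
  // Sweep a single warm-white dot down the strip as a simple test pattern.
  for (uint16_t i = 0; i < strip.numPixels(); i++) {
    strip.setPixelColor(i, 255, 180, 60);
    if (i > 0) strip.setPixelColor(i - 1, 0, 0, 0);
    strip.show();  // clocks the whole pixel buffer out over SPI
    delay(10);
  }
}
```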

Daisy chaining


After picking the strips, we had to see how many of them we could plug together. These strips use four wires: 5V power, ground, an SPI clock, and data (“master out, slave in,” in SPI lingo). A full-fledged SPI bus has two more wires: a “master in, slave out” line to receive information from a slave device, and a slave select to indicate which device on the bus you’re talking to. Obviously the LED strip was not going to give us feedback, so there was no need for a master-in wire, but the lack of slave select was pretty annoying; it made it harder to have multiple strips listening on the same bus and decide which one to talk to. So, instead of running them in parallel, we daisy-chained as many strips as possible in series. Experimentation taught us that we could go up to 12 meters before getting a nasty flickering effect: around 380 LEDs per fixture, with an estimated power consumption of 70W.

We decided we would make three modular panels of 12m LED strips, each with its own Galileo board. This is clearly overkill; one Galileo board might have been enough to drive all three columns, with a mux-demux controller to select which strip to talk to. However, from a demo perspective, it was safer to run three independent boards: if we accidentally fried one, we would still have a working prototype.

We’re happy to report that no Galileo board was harmed in the making of this demo.

Starting the development


We wanted to be able to control the light fixture from mobile devices while still having one board per fixture. We obviously needed some means of inter-board communication to notify the fixtures of phone interactions, and a self-contained system was our ideal. We ended up with a four-Galileo-board setup, using one board, which I’ll call the Lyt board, as the server; it dispatched the relevant information to the three other boards driving the fixtures.

The Adafruit library we used takes a “pixel” buffer as input and displays it nicely on the strip, but we had to make some slight modifications to get it running smoothly on the Galileo. It’s pretty common on Arduino boards to transfer a buffer one bit at a time by switching a pin on and off. This should work on the Galileo too, but, since a full Linux system is running, things are not that bare-metal. Instead, it’s mostly as if you were writing to a file: you have to open a file descriptor, write, and close it. This process goes through system calls that take a few milliseconds to kick in, which is not good when you have 400 LEDs to drive. Fortunately, the Intel folks were smart and added a function to the SPI library to transfer a buffer in one go, so instead of issuing system calls for every byte, it takes only one (or a few) to transfer the entire buffer. These details are hidden in the depths of the SPI library, so, for the regular user, the only thing to know is that you can transfer a buffer quickly over SPI. The curious should explore the source code that ships with the Arduino IDE to better understand how all this black magic works.
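To give a feel for why that matters, here’s roughly what a single whole-buffer transfer looks like at the Linux level through the standard spidev interface. This is an illustration of the kind of mechanism described above, not the Galileo library’s actual code, and the device path and clock speed are assumptions.

```cpp
#include <fcntl.h>
#include <linux/spi/spidev.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>

// Push an entire pixel buffer out over SPI with a single ioctl(), instead of
// paying a system call per bit (or per byte) of LED data.
bool spiWriteBuffer(const char* devicePath, const uint8_t* data, size_t len) {
    int fd = open(devicePath, O_RDWR);  // e.g. "/dev/spidev1.0" (path varies)
    if (fd < 0) return false;

    spi_ioc_transfer tr;
    std::memset(&tr, 0, sizeof(tr));
    tr.tx_buf = reinterpret_cast<uintptr_t>(data);  // buffer to clock out
    tr.len = static_cast<uint32_t>(len);
    tr.speed_hz = 1000000;                          // 1 MHz, plenty for WS2801
    tr.bits_per_word = 8;

    bool ok = ioctl(fd, SPI_IOC_MESSAGE(1), &tr) >= 0;
    close(fd);
    return ok;
}
```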

Adventures in debugging


This project was a truly international effort: the fabrication and concepting happened in our studio in Portland, OR; the Galileo boards were fabricated in Ireland (it’s printed on them!); the software development was done in France (the developer was suffering from a serious case of wanderlust); and it was all for an event happening in Italy. One of the many obstacles we faced during development: how do you build a graphical interface when you don’t have the physical display in front of you? Our solution was to create a small debug application using an awesome library called openFrameworks. The app receives OSC messages from the board saying, “Hey, I’m going to display this and this on the LEDs,” parses them, and renders the result on a laptop. In return, we added a communication link from the laptop to the board to say, “Hey, I’m clicking on this part of my phone, you should really do something.” This was, of course, not the final solution, but it worked well as a placeholder; the real communication link between phone and board would be developed later.
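The receiving end of a debug app like that is only a few lines with ofxOsc. Here’s a stripped-down sketch of the idea (the /led message format is invented for illustration, not the protocol we actually used):

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class DebugApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    std::vector<ofColor> pixels;  // our mirror of one LED strip

    void setup() override {
        pixels.assign(384, ofColor::black);
        receiver.setup(12345);  // port the board sends its OSC messages to
    }

    void update() override {
        // Drain every message the board has sent since the last frame.
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            // Hypothetical address: /led <index> <r> <g> <b>
            if (m.getAddress() == "/led") {
                int i = m.getArgAsInt32(0);
                if (i >= 0 && i < (int)pixels.size()) {
                    pixels[i].set(m.getArgAsInt32(1),
                                  m.getArgAsInt32(2),
                                  m.getArgAsInt32(3));
                }
            }
        }
    }

    void draw() override {
        // Draw the strip as a row of small rectangles on the laptop screen.
        for (std::size_t i = 0; i < pixels.size(); i++) {
            ofSetColor(pixels[i]);
            ofDrawRectangle(10 + i * 3, 10, 2, 40);
        }
    }
};

int main() {
    ofSetupOpenGL(1280, 100, OF_WINDOW);
    ofRunApp(new DebugApp());
}
```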

Networking is the key to success


You may wonder how all these fancy boards talked to each other. The Galileo board has a mini PCI Express slot (the kind of thing you can find in your laptop) into which you can put a WiFi card. At the time of this writing, the Intel N-135 wireless card is the only officially supported one. Using a tool called hostapd, we were able to make one Galileo board run as a WiFi access point and have all the others connect to it. From there it was easy to send OSC messages between all of them.

Finally, to get mobile phones talking to the board, we wanted to provide the most lightweight experience possible for the user, and we thought it would be great to build the UI with HTML5 and CSS (i.e., a basic webpage). WebSockets, part of the new generation of HTML5-era specifications, let your browser hold a direct, persistent connection to another computer; they were perfect for our purposes. We compiled a WebSockets library (libwebsockets) for our Galileo board, plugged it into our Arduino sketch, and, after some tinkering, voilà: phones could talk to the Lyt board.

Conclusion


In a few words, here’s what’s happening once you’ve plugged everything in:

Our Lyt board creates an access point you can connect to with your phone. Meanwhile, every light fixture connects to this WiFi network and sends information to the Lyt board (like the color it’s currently displaying). When you request the Lyt page on your phone, the board serves it back to you, and, once the page has loaded, your phone opens a WebSocket to the Lyt board. This way, the board can keep the phone updated with the color currently displayed on each fixture, and, in return, when you interact with the UI, the phone sends that information to the Lyt board, which relays it to the appropriate fixture. Finally, the fixture triggers the touch animation.

After a few whirlwind weeks and with a successful demo in hand, we got to spend some time in Rome with Intel and the European Maker community. Once more, we were reminded that technology can be the seed for great collaboration and wonder.

— Philippe Laulheret, Interactive Developer

Posted in Design, Technology