The continued marriage of camera and computer has lent photos new levels of immediacy and connectivity. But in the face of these shifts, our notion of personal cameras has stayed the same: a device that produces images as output. We reach for our cameras to, on a basic level, capture an end product mirroring what we see. To reinforce a memory.
But the camera as we know it is in flux, and further changes will expand its definition. The camera soon will be an eye that thinks and reacts to its surroundings. It will not only reproduce what’s in view, but also take in meaning from what it sees. With the correct software it will augment, enhance and act on reality—not just mirror it. Marketers, take note.
The notorious QR code was among the first nudges in this shift. For a while, the QR code was a novelty to wedge into impractical spaces for unimaginative uses. The prevailing notion is that it's "a fancy URL." Plus, the QR code is illegible to human eyes. It's made to be seen by a camera-computer.
Yet if used correctly, the QR code could give off small doses of magic, pairing with a mobile device's camera to align a digital experience with a physical one. A prototype called "Clock for Robots" does just that. London-based design consultancy Berg developed the concept to convey a human sense of time and location to mobile devices. (Think: It's about 10:30 AM and you're in line at Everyman Espresso, not 10:30:03:02; N40º 42.2823', W074º 0.2771'.) A human sense of time and place accounts for solid walls and meaningful spaces. GPS, wifi and cell radio miss this, but this clock closes the gap.
Photos via Berg London
After its camera sees the clock at a given moment, a device can serve contextual options, making for a more cohesive experience on the whole. This creates an opportunity to offer bites of valuable content. In a coffee shop, for instance, the clock might dull the ache of a long wait by offering a simple DIY guide to the in-house signature latte art. More generally, in-store marketing could tailor experiences to consumers' tendency to move through spaces alongside their devices.
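The idea above can be sketched in a few lines. This is an illustrative mock only, not Berg's actual implementation: it assumes the scanned clock resolves to a plain venue token (the `CONTENT` mapping, venue names, and `contextual_content` helper are all hypothetical), and it layers a human-scale notion of time on top.

```python
from datetime import datetime

# Hypothetical mapping from a scanned payload (a venue token) to
# time-aware content, in the spirit of "Clock for Robots": the code in
# the environment conveys place, and the app layers on a human sense
# of time (morning vs. afternoon, not GPS-grade coordinates).
CONTENT = {
    "everyman-espresso": {
        "morning": "DIY guide: recreate our signature latte art",
        "afternoon": "Afternoon special: half-price cold brew",
    },
}

def contextual_content(venue_token: str, now: datetime) -> str:
    """Return contextual content for a decoded venue token at a human-scale time."""
    period = "morning" if now.hour < 12 else "afternoon"
    return CONTENT.get(venue_token, {}).get(period, "Welcome!")

# It's about 10:30 AM; you're in line at Everyman Espresso.
print(contextual_content("everyman-espresso", datetime(2013, 6, 1, 10, 30)))
```

The point of the sketch is the shape of the lookup: the camera supplies place, the clock supplies time, and the content served is a function of both.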
MIT Media Lab's Grace Woo has responded to QR's weaknesses with VRcodes. Instead of a human-illegible graphic, VRcodes are invisible—embedded in the light patterns of digital media, passively waiting until a device scans them. According to Fast Company, which has named Woo among this year's 100 Most Creative People In Business, "her company, Pixels.IO, is already working with NBC Universal and the global advertising firm Aegis Media to bring VRcodes to the public." Woo's work is part of a larger MIT initiative to build what it calls "viral spaces," or places that bridge the mobile and physical worlds.
Video via MIT Media Lab/Grace Woo
Apart from mobile devices, cameras embedded in consumer products can literally map a digital experience onto a physical surface with the help of computer sight. Google Creative Lab and Berg paired to design a personal desk lamp fitted with a camera and projector. The camera keeps an eye on the lamp’s projection surface. Seeing what’s in front of it, the projector can output meaningful layers of feedback, like live translation for printed text.
Even this simple use case demonstrates how remixing familiar components creates a digital experience that isn't trapped behind glass. With eyes, digital can intermingle with the physical; it knows where it's going. This means inert physical surfaces can take on ephemeral branded content. It's particularly interesting to consider the potential to reach consumers in an intimate place (like the desktop of a home office) but in a passive, almost deferential, way (away from the pixels of their computer display).
If we're to put stock in the power of computer vision, then we have many changes to look forward to. Not least among them is the ability to create new modes for delivering relevant, frictionless, magical contextual content.
Cover photo via Berg London