(Steven Levy) Larry Page and Sergey Brin have long had the dream of a hands-free, mobile Google, where search was a seamless process as you moved around the world. As the years progressed the vision did, too, expanding beyond search to persistent connections with the people in your life.
In other words, Google’s view of the world now has the social side fully baked into it.
Today, Google is revealing that it is taking concrete steps towards that vision with Project Glass, an augmented reality system that will give users the full range of activities performed with a smartphone — without the smartphone. Instead, you wear some sort of geeky prosthetic (one of those pictured is reminiscent of the visor that Geordi La Forge wore on “Star Trek: The Next Generation,” but Google has also been experimenting with a version that piggybacks on regular spectacles).
On top of your field of vision, you get icons, alerts, directional arrows, and other visual cues that inform, warn, or beg response.
And all of a sudden, the world becomes dickish — as in Philip K.
Glass is the second big project out of Google[x], the company’s Mountain View skunkworks devoted to long-term projects. Since Larry Page reassumed the role of CEO (exactly one year ago today), his fellow co-founder Sergey Brin has focused on Google[x], and Glass is apparently the project Brin promised news of almost a year ago at Google’s I/O conclave. Glass has been in the works for years, with key input from Babak Amir Parviz, a Google[x] employee who is still listed as the McMorrow Innovation Associate Professor at the University of Washington.
Parviz is one of three co-signers of the Google+ post announcing the project. His research specialties make him sound like a character in a Michael Crichton novel: bio-nanotechnology, self-assembly, nanofabrication, MEMS. Before coming to Google he co-authored a paper entitled “Self-assembled crystalline semiconductor optoelectronics on glass and plastic.” All of this indicates that Google has made some advances in the science behind projecting computer visuals that hang in your field of vision.
The second author on the Google+ post is Steve Lee, known previously as a Google location manager. I once saw Lee in action before Google’s Privacy Council, successfully defending a set of features in Google Latitude that, with the user’s permission, registered and stored a complete history of one’s peregrinations. It was clear that Lee was excited about the possibilities that come from exploiting location services in new ways. Obviously, location — giving directions, providing information about nearby services, and pegging the whereabouts of friends — is going to be a big part of this new initiative.
The third is Sebastian Thrun, he of autonomous vehicles and open online education, and a leader at Google[x].
Nuff said. Oh, and Sergey Brin didn’t sign the post but was deeply involved in the project.
The concept video for the Glass project concentrates on the cool things you may do with it one day — create instant contact with friends, monitor feeds about weather and other info, get information about a subway station out of service, receive turn-by-turn directions on the way to a destination, snap a picture on command, even find your way to a certain tome in the labyrinthine Strand bookstore. Everything works perfectly because, well, it’s a concept video and not a depiction of something that’s actually perfected. But Googlers have been testing prototypes and have already solved some (not all) of the challenges required to make this real and feasible. (In other words, this is more grounded than Apple’s famous 1987 Knowledge Navigator concept video — which, although way premature, is looking prescient these days.)
The video has more than enough information to open up a conversation about the potential effects of having the digital world unbound from the confines of a hand-held gadget and more or less integrated into everyday reality. How can people maintain privacy when anyone can shoot video undetected? Will any teenager ever complete a face-to-face conversation when business e-mails, fresh family photos and Kardashian news spontaneously pop up in their field of vision?
Really, when you think about it, the possibilities of such systems are dazzling and dumbfounding. Consider that another paper co-authored by Parviz explores the idea of contact lenses that meter health issues by analyzing tear fluids “in a noninvasive and continuous fashion.” The information is then sent wirelessly for medical analysis. It’s easy to imagine a Glass-like connection as a way to persistently jack into a vast informationsphere. It’s also provocative to envision how Glass would enhance Girls Around Me.
As of now, Glass is very much a concept as opposed to a product. Despite Google’s testing, it’s very far from public beta. (The New York Times’ Nick Bilton, citing unidentified sources in a story that got some of the project right, had earlier claimed that Google would be selling a product by the end of the year; Google indicates that this is extremely unlikely.) Google is releasing the video now to spur conversation and elicit suggestions. In addition, the move out of stealth will allow Google[x] testers to try out various permutations of the system without worrying about leaks. It’s also a timely means for Google to remind the public that its engineers have visions that transcend the current spats with Facebook, Apple, Microsoft, the DOJ and the European Union.
No wonder there’s a twinkle in Google’s eye today.