I went hands-on (eyeballs-on, I suppose) with Google's most hyped product of 2012. The results weren't entirely spectacular.
At an event hyping up everything that Google does, mostly for the lifestyle press, Google saved its biggest hook until last, showing off Google Glass. That meant I finally got a brief bit of play with Glass myself.
Here are some early, super-simple observations; if Google had been happy to allow video, there would also be a 30 Seconds of Tech below. I presume it doesn't want people knowing what Glass looks like in action, or something.
Google Glass: On the plus side
It is quite lightweight, even when propped up on glasses. I couldn't entirely forget it was there in the very limited time I had to test a pair, but I could see it becoming "invisible" once it became an assumed part of your everyday life.
For paired technology that works mostly on voice, it's an impressive mashup of different elements, from the HUD to the touch-panel controls by the right ear.
As with a lot of wearable technology, there's immense potential in this kind of approach to change the way we interact with technology. Even a quick bit of hands-on time made that clear, and it was perhaps more telling to watch the more lifestyle-magazine-centric members of the crowd. Their reactions were far stronger than mine, which shows how easy it is to become a bit blasé about this kind of technology.
Google Glass: On the minus side
I'd been looking forward to trying Glass myself for some time, mostly because I'm a glasses wearer, and that's always an interesting proposition for any kind of visual technology. I can't wear active-shutter 3D glasses without quickly wanting to vomit, for example.
My problems with Glass were simpler. With my glasses on, I couldn't find a spot for the HUD to sit where I could see all of it, even after shifting it around on my nose. Without my glasses on, it projected onto the right spot in my vision, but of course I couldn't actually see much beyond a fuzz being projected.
That's a specific problem — those with contacts or 20/20 vision wouldn't face it at all — but given that our eyes tend to degrade as we age, it's a challenge for anyone producing wearable visual technology. From my brief hands-on, it's one that Google hasn't beaten yet.
I've written before about the socially insulating aspects of this kind of technology, but what I hadn't considered was the vocal aspect of Glass. The demonstrator I saw did try to bring the spoken volume down a little, although Glass clearly works best when commands are spoken loudly; either way, you're going to be terribly conspicuous saying "OK Glass" all the time. As with a lot of spoken systems, from Google's own search to Siri, the results can be inadvertently hilarious. One demonstration kept missing "Melbourne", unless it's changed its name to "Milton" sometime recently, and the language translation demonstration was quite mechanical, exhibiting many of the same problems that I found in my Galaxy S4 S-Translate review.
You can take pictures or video without speech, but the results that I saw seemed barely worth it beyond novelty value. That's something that should improve in time, and I'll be interested to see whether, once it moves beyond the If I Had Glass explorers, Google upscales any of the specifications on the final shipping Glass.
Obviously I'd like more testing time, but at first glance I can see the potential behind Glass, and that's interesting. The implementation, though, is going to need some serious work. Perhaps that's why Google hasn't committed to a firm release date or price just yet.