Me looking a bit dorky with Glass
Last week I got to try out Google Glass for the first time. I spent about an hour learning about and experimenting with Glass, and discovered that while there are a few things about it I like and see interesting potential in, Glass, in its current incarnation, definitely isn’t for me.
The first thing that struck me is that, at least for now, the headset is available for the right eye only. It just so happens that my right eye is defective (neurologically, not fixable with a lens), which meant that I couldn’t see much of what was happening in the display. I could read the largest text and vaguely make out images, but that was it.
I was surprised at how small the display is. I expected it to be bigger. The Google representative explained that they don’t want the display to interfere too much with your other activities… which makes sense, except that, watching a variety of people use Glass at the event, both newbies and experienced users (Glassers? Googlers with Glass?), Glass doesn’t appear to be much different from a mobile device in terms of multitasking. We humans are very bad at multitasking, and can really only pay attention to one thing at a time, with perhaps a small amount of awareness of other things in the periphery.
When using Glass, we switch our attention to the display (of course), and the resulting lack of focus on our surroundings really doesn’t look that different from when we look at our phones. Perhaps switching attention back to our surroundings is slightly faster, simply because we have less distance to move our head and eyes. (It would be interesting if someone did a study comparing attention with Glass versus attention with a phone.)
I found the look that people get on their face and in their eyes when they focus on the display somewhat disconcerting at first. It doesn’t look “normal,” and at first my brain wants to say, “Hey, there’s something slightly wrong with that person.”
But back to the display. Because of my lack of vision in my right eye, I asked the Google representative about audio support for navigation. At this time there is only limited audio support, so that wasn’t much help for me in navigating through the menus. Learning to navigate is the primary task when first using Glass, and I found it rather cumbersome (a lot of swiping to try to get where you want to go), and of course extra cumbersome for me, since I needed another person’s help to decipher the text on the display. I think once a user learns which voice commands are available, and how to switch back to the “main menu” from which those commands can be reached, navigating in Glass would get a whole lot faster and less cumbersome.
I tried out the translation capability, which seems like a useful application of Glass. However, I can already do that on my phone, and I’d need a whole lot more augmented reality (which seems like the “killer app” for this device) to make it worth $1500.
One thing I did like a lot about Glass is the bone conduction audio. I like that I can listen to music (and potentially audio menus for navigating at some point in the future) and still hear what’s going on around me. This technology needs work; it’s difficult to hear the audio when you’re in a loud environment, but I think it has potential.
Google’s big pitch for Glass (at least, the pitch given at this demo event) is that Glass allows you to remain more connected to your surroundings and other people than when you’re using your phone or another device. I’m not sure I see a big difference. That, plus the current lack of truly compelling applications, the difficulty of navigating, and the lack of left-eye support, means that I’m not going to run out anytime soon to buy Glass.