Petre Pătraşc

Adventures in software development land, with SMB stars and kernel hooks.


Droidcon 2013 | grep 'Google Glass'

08 Oct 2013

Today I got the chance to test out Google Glass at Droidcon Bucharest 2013. Before I share my impressions, let me just say that I was sceptical of the product when I first heard of it, and that I hadn't really followed its development or anything related to it. I had little clue what the product was about, when it would launch, or whether I was in the target audience.

I did this to keep myself away from the hype and hysteria that usually come with big product launches - I'm also not big on marketing talk, and I prefer to look at a stable version and decide for myself when the time comes.

Droidcon 2013 Conference Room


Do you remember the original iPhone/iPod, and how it felt to be holding it in your hands, accessing YouTube and writing emails better than ever before? I remember the magic moment when you'd power it on, and how it felt like a leap-frog product, years ahead of anything else on the market. I remember the build quality, the rise of social media and the embrace of the developer ecosystem, all in a neat, super-usable product that's still fantastic to this day.

iPhone 1


I remember my first Android device, booting up the emulator for the first time, and running my code on my own hardware. That felt amazing, and it made me hack away at the Android SDK, understanding its advantages and disadvantages. Google has done a fantastic job since Android 1.6, and in terms of interface design and interaction we're still seeing new metaphors emerge - the Samsung Galaxy S4's eye-tracking functionality is quite brilliant, and it feels like the folks over there are really pushing the industry at the moment.

Samsung Galaxy S4


Google Glass felt nothing like the two experiences above. My main issue is that rather than delivering a powerful piece of functionality in a common everyday object, Google seem to have done the reverse: they took a pair of glasses and stuck a computer in them. Smartphones and tablets are an amazing asset because they respond very well to a particular problem - portability. Not so long ago, you had to go through the effort of carrying a laptop with you if you wanted to check your email during lunch, or read your Twitter feed in bed.


Social networking, picture sharing and connectivity in general benefited greatly from the emergence of smart mobile devices, and are amongst the most accessible technologies that we've developed as a species. This is why I have difficulty understanding the problem that Google Glass is trying to solve, and why I struggle to find a situation where it would be part of my everyday activities.

Social media

Technical && Design ? 'success' : 'failure'

A set of small marketing previews that I watched during the conference had me assuming that the display would be somewhat of a "fullscreen" experience, with a screen that attaches and detaches itself as you interact with it. Instead, the product features a small, high-DPI box displayed where the right-eye lens would be.

I had trouble adjusting the depth of the display correctly: I figured that I should look at somebody in front of me and adjust the display based on that, so that I could see both the person and the display at the same time, much like the Terminators of our youth.

Terminator interface

I only really managed to see the screen after focusing my viewpoint on it and ignoring the things around me. The idea in the picture below seems very nice, but it is far less so once you realise that you have to focus on the display every now and then, losing attention on your surroundings.

Glass box example

One interacts with the system by using a touchpad that can be found on the frame of the glasses.

Let me repeat that for added emphasis.

One interacts with the system by using a touchpad that can be found on the frame of the glasses.

This sort of interaction sums up how I feel about the purpose of the device. I felt silly tapping my head to access things, and even though it was a new interaction, it did not feel natural or intuitive.

Ignoring my design concerns, swipe commands registered fine on the touchpad, and I was able to navigate the menus, as well as a list of pictures that previous attendees had taken.

Technology demos are always prone to error, but despite a few attempts I had no luck getting speech recognition to work: the device kept switching between contexts and wouldn't register my arguably perfect British accent.

Voice recognition failure

As technology and communication consumers, we've been constantly evolving by getting access to faster tools, more integrated services and true cross-platform applications - all these nice things have also made us expect more from our technology. I realise that I am part of the problem in how I feel about Glass - I've come to expect the same things that people want from the software that I build:

  • make sure that it works for any user, from any background
  • make sure that it is fast
  • make sure that it is innovative

All of the above are achieved with great difficulty, and that last requirement is sometimes the scariest thing to imagine.

I guess I felt that today lacked the magic of an awesome, integrated product, capable of all these wonderful interactions. I felt that I already knew everything that Google Glass could throw at me, and that it couldn't surprise me in its current form. I kind of expected Google Glass to be the Oculus Rift and bring in a new way of working with portable devices, or a new immersion factor - but that wasn't the case.

Anyway, here's a picture of me practicing my Cyclops look just in time for Halloween.

Me using Google Glass

The eye twitch is intended when using Google Glass.

  • Petre.