Rod Furlan built his own version of Google Glass, and found that it acted as an extension of his senses:

My world changed the day I first wore my prototype. At first there was disappointment—my software was rudimentary, and the video cable running down to the onboard computer was a compromise I wasn’t particularly pleased with. Then there was discomfort: I felt overwhelmed trying to hold a conversation while information from the Internet (notifications, server statuses, stock prices, and messages) streamed to me through the microdisplay. But when the batteries drained a few hours later and I took the prototype off, I had a feeling of loss. It was as if one of my senses had been taken away from me, which was something I certainly didn’t anticipate.

When I wear my prototype, I am connected to the world in a way that is fundamentally different from how I’m connected with my smartphone and computer. Our brains are eager to incorporate new streams of information into our mental model of the world. Once the initial period of adaptation is over, those augmented streams of information slowly fade into the background of our minds as conscious effort is replaced with subconscious monitoring.

The key insight I had while wearing my own version of Google Glass is that the true value of wearable point-of-view computing will not lie in its initial goal of supporting augmented reality, which simply overlays information about the scene before the user. Instead, the greatest value will be in second-generation applications that provide total recall and augmented cognition. Imagine being able to call up (and share) everything you have ever seen, or to read the transcript of every conversation you have ever had, alongside the names and faces of everyone you have ever met. Imagine having supplemental contextual information relayed to you automatically so you could win any argument or impress your date.