Google is planning two special “hackathon” events to showcase its Google Glass eyeglass-mounted computers and gain input from developers who want to improve them and make them more usable.
The Google hackathon events were announced in an email sent by Google to developers who attended the Google I/O Conference in June 2012, where the “smart eyeglasses” were first publicly unveiled, according to a Jan. 15 story by TechCrunch.
The email states that each of the hacking events will be two days long and will focus on the Google Mirror API. “These hackathons are just for developers in the Explorer program, and we’re calling them the Glass Foundry,” the email said. “It’s the first opportunity for a group of developers to get together and develop for Glass.”
Google says it plans the first day of the sessions as an introduction to Google Glass and that attendees will get a detailed look at the Google Mirror API, which provides the ability to exchange data and interact with the user. The sessions will also include discussions with Google engineers about continuing Glass development, as well as demos judged by special guests.
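The Mirror API is a REST interface: Glass apps push “timeline cards” to the device by sending authenticated JSON requests to Google’s servers. As a rough illustration, a minimal sketch of how an Explorer app might construct a timeline-insert request could look like the following (the access token and card text are placeholders, and an actual call would require OAuth 2.0 credentials from the Explorer program):

```python
import json

# Mirror API v1 endpoint for inserting items into the Glass timeline.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_insert(access_token, text):
    """Build the URL, headers and JSON body for a timeline-card insert.

    This only constructs the request; sending it would require a valid
    OAuth 2.0 bearer token, which is a placeholder here.
    """
    headers = {
        "Authorization": "Bearer " + access_token,
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text})
    return MIRROR_TIMELINE_URL, headers, body

# Example: a simple text card that would appear on the wearer's display.
url, headers, body = build_timeline_insert("ACCESS_TOKEN", "Hello, Glass Explorer!")
print(url)
print(body)
```

POSTing that body to the timeline endpoint would place the card on the wearer’s display; richer cards can carry HTML, attachments and menu actions in the same JSON envelope.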
Attendees will have access to a Google Glass device and will be able to test it out, according to Google.
The invitation-only events require preregistration by Jan. 18, and then Google will notify prospective attendees if they have been accepted for the events, according to the email.
The events are scheduled to take place Jan. 28 and 29 in San Francisco and Feb. 1 and 2 in New York, according to TechCrunch.
The Google Glass project was unveiled at the Google I/O conference last year as an eyewear-mounted computer that will have a wide range of innovative features when it hits the consumer market. Attendees of that conference were given the opportunity to sign up to buy early Explorer Edition versions of Google Glass for $1,500. Google officials said those versions were expected to become available in early 2013, with consumer versions expected at least a year later.
The Google Glass demonstration at Google I/O put the basic components of the devices on display, featuring an Android-powered display, a tiny webcam, a GPS locator and an Internet connection node built into one side of a pair of glasses. The glasses are lightweight and may or may not have lenses.
According to Google’s patent application for Glass, which is listed online, the glasses use a side-mounted touch-pad that allows users to control its various functions. The glasses will be able to display a wide range of views, depending on user needs and interests. One potential view is a real-time image on the see-through display on the glasses, the patent application states.
One description details how the side-mounted touch-pad could be a physical or virtual component and that it could include a heads-up display on the glasses with lights that get brighter as the user’s finger nears the proper touch-pad button.
On the heads-up display viewed by the user on the glasses, the side-mounted touch-pad buttons would be represented as a series of dots so that users can operate them by feel, the application states. “The dots may be displayed in different colors. It should be understood that the symbols may appear in other shapes, such as squares, rectangles, diamonds or other symbols.”
Also described in the patent application are potential uses of a microphone, a camera, a keyboard and a touch-pad, either one at a time or together. The device could even include capabilities to understand and show just what the user wants to see: “In the absence of an explicit instruction to display certain content, the exemplary system may intelligently and automatically determine content for the multimode input field that is believed to be desired by the wearer,” according to the patent application.
“For example, a person’s name may be detected in speech during a wearer’s conversation with a friend, and, if available, the contact information for this person may be displayed in the multimode input field,” the application states.
Another possibility is that the glasses “may detect a data pattern in incoming audio data that is characteristic of car engine noise (and possibly characteristic of a particular type of car, such as the type of car owned or registered to the wearer),” the application states. That information could be interpreted by the device “as an indication that the wearer is in a car and responsively launch a navigation system or mapping application in the multimode input field.”
Google isn’t the only company playing around with such ideas, however.
Motorola Solutions is building a headset-mounted computer called the HC1, which is similar to Google Glass, but is aimed at business users and is scheduled for sale in the first half of this year, starting at $4,000 to $5,000 each.
The HC1 is a wearable computer aimed at making it easier for remote field workers to do their jobs in precarious locations, bringing a true hands-free computing option to enterprise workers, according to the company.
The HC1 will allow workers to give simple voice commands or use head movements to operate the computer to complete their tasks. The device was unveiled in October 2012.
The ruggedized devices also allow optional video streaming so that workers in dangerous situations, such as an electrical worker up on a power-transmission pole, can broadcast hands-free images and video of a broken component without having to let go of the pole and put his or her life in jeopardy, according to the company. Workers will also be able to use the HC1 to view business-critical documents and schematics in difficult conditions where traditional laptop computers would not be usable.
While the HC1 is a wearable computer that will allow its users to perform a wide variety of tasks hands-free, it differs from Google Glass in that it is aimed squarely at enterprise users and won’t be offered as a device for consumers.