Google Exec Outlines Advances in Deep Learning

 
 
By Jeffrey Burt | Posted 2015-03-19
Google Senior Fellow Jeff Dean, speaking at GTC, says better technology, smarter algorithms and massive data stores help push the research forward.

SAN JOSE, Calif.—Nvidia's overarching theme throughout the GPU Technology Conference here this week has been deep learning: the idea that, with the right technology and the right algorithms, machines can learn from experience and adapt their behavior.

During his keynote address March 17, Nvidia co-founder and CEO Jen-Hsun Huang folded everything he announced, from a new high-powered GPU to software and hardware tools for researchers and scientists to the details he shared about the upcoming Pascal architecture, into a single message: all of it will be leveraged to advance the research and development of deep-learning neural networks.

"The topic of deep learning is probably as exciting an issue as any in this industry," Huang said during his talk.

It was against this backdrop that Jeff Dean, a senior fellow in Google's Knowledge Group, took the stage at GTC March 18 to talk about the search giant's extensive work over the past several years in deep learning, a branch of machine learning. Google, with its massive stores of data on everything from search queries to Street View imagery, seems like a company that would naturally be interested in the field.

In a fast-paced hour-long keynote, Dean talked about the advancements that Google and other tech companies—such as Microsoft and IBM—are making in the field, the promise that deep learning holds for everything from autonomous cars to medical research, and the challenges that lie ahead as the research continues.

The foundations are in place, Dean said: technologies such as GPUs, with their massive parallel-processing capabilities, and the development of "nice, simple, general algorithms that can [enable neural networks to] learn from the raw data."
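At its core, the kind of "simple, general algorithm" Dean describes is gradient-descent training of a neural network. The following sketch, written in plain NumPy purely for illustration (it is not anything Google has published), shows a tiny two-layer network learning the XOR function from nothing but raw input/output examples:

    import numpy as np

    rng = np.random.default_rng(0)

    # Raw data: the XOR truth table (inputs X, target outputs y).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # A 2-4-1 network: randomly initialized weights, zero biases.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: compute the network's current predictions.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: gradients of the squared error, layer by layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent update (0.5 is the learning rate).
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # should approach [[0], [1], [1], [0]]

The same recipe of forward pass, error gradient and weight update scales from this toy to the far larger networks researchers train, with GPUs supplying the parallel arithmetic.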

"The good news is that there's plenty of data in the world, most of it on the Internet," he added.

There are texts, videos and still images; searches, queries and maps; and data from social networks. All of this data can be used to help neural networks learn and adapt their behavior to what they have learned.

Self-driving cars have been a constant topic of conversation at GTC, and they offer a good example of what has been done already and what still needs to be done. The advanced driver assistance systems (ADAS) in cars today can detect when a collision is about to happen and apply the brakes, or determine when the car is drifting into another lane and alert the driver.

However, self-driving cars will need additional capabilities before they are ready for everyday use. They need to be able to recognize whether an oncoming vehicle in the opposite lane is a truck or a school bus, and then react accordingly (for example, knowing whether the school bus is picking up or dropping off students and stopping because its red lights are blinking). They also need to be able to infer that a car parked on the side of the road with its driver's-side door open could mean a person is about to step out.

Much of the work around deep learning has involved image recognition: not only determining whether a given photo shows a cat or a tree log, but also describing the photo in a sentence (for example, a small child holding a teddy bear). There is also work being done on voice recognition and on understanding the relationships between words, so that systems grasp what is meant, not just what is said.
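One way to picture "relationships between words": word-embedding systems (word2vec, published by a Google team, is a well-known example) map each word to a vector so that words used in similar contexts end up close together. The toy sketch below uses made-up vectors, whereas real systems learn hundreds of dimensions from billions of sentences, to show how similarity of meaning becomes a simple calculation:

    import numpy as np

    # Hypothetical 4-dimensional "word vectors"; the values are invented
    # for illustration, not taken from any trained model.
    vectors = {
        "cat":    np.array([0.9, 0.1, 0.3, 0.0]),
        "kitten": np.array([0.8, 0.2, 0.4, 0.1]),
        "log":    np.array([0.1, 0.9, 0.0, 0.4]),
    }

    def cosine(a, b):
        # Cosine similarity: near 1 for related words, near 0 for unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vectors["cat"], vectors["kitten"]))  # high: related meanings
    print(cosine(vectors["cat"], vectors["log"]))     # low: unrelated meanings

Because meaning is now geometry, a system can tell that a query about "kittens" relates to documents about "cats" even when the strings never match.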
