At the international Web Summit in Lisbon on Nov. 8, Facebook CTO Mike Schroepfer revealed the company's latest updates to its long-term roadmap, which focuses on foundational technologies Facebook has developed in artificial intelligence, virtual reality and connectivity.
Facebook’s work, particularly in AI, is moving many of its internal and open source projects forward, and the company is also applying AI to longer-term challenges in connectivity and VR.
As the field of AI advances, Facebook is turning the latest research breakthroughs into tools, platforms and infrastructure that make it possible for anyone at Facebook to use AI in the projects they build. These include:
–Facebook’s AI research is moving into production faster than before, thanks largely to core AI infrastructure such as FBLearner Flow, AutoML and Lumos.
–As engineers apply AI at scale, it’s already making an impact on the lives of people who use Facebook’s products and services each day, such as automatically translating posts for friends who speak different languages or ranking News Feed to show people more-relevant stories.
–Schroepfer introduced Caffe2Go, the mobile deep learning framework that powers Style Transfer and puts AI in the palm of a user’s hand. Style Transfer is an AI technique pioneered and developed by Facebook’s AI teams to learn the artistic style of a specific painting and then redraw every frame of a video in that style in real time.
Previously, effects like this were sent to a server, processed and then delivered back to a smartphone. Facebook built a new deep learning platform on mobile so it can, for the first time, capture, analyze and process pixels in real time, letting users apply the effects on the spot as they are actually shooting their videos (see the sketch below).
Facebook will be looking to open source parts of this AI framework over the coming months, the company said.
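Facebook has not published Caffe2Go’s programming interface, so the fragment below is only a minimal sketch, written against PyTorch rather than Caffe2Go, of the general idea: a small feed-forward network trained offline on one painting’s style is run over each camera frame directly on the device, with no server round trip. The network architecture, layer sizes and the stand-in camera loop are all assumptions made for illustration.

```python
# Illustrative sketch only: a tiny feed-forward stylization network applied
# frame by frame, the general idea behind running Style Transfer on-device.
# This is NOT the Caffe2Go API; the model, layers and camera loop are
# assumptions for the example.
import torch
import torch.nn as nn

class TinyStyleNet(nn.Module):
    """A very small fully convolutional net; a real model would be
    trained offline to reproduce one painting's style."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.InstanceNorm2d(16),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.InstanceNorm2d(16),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual connection keeps the output close to the input frame.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

def stylize_stream(frames, model):
    """Apply the network to each incoming frame, as an on-device
    pipeline would do instead of round-tripping to a server."""
    model.eval()
    with torch.no_grad():
        for frame in frames:  # frame: 3xHxW float tensor in [0, 1]
            yield model(frame.unsqueeze(0)).squeeze(0)

if __name__ == "__main__":
    net = TinyStyleNet()
    fake_camera = (torch.rand(3, 240, 320) for _ in range(5))  # stand-in frames
    for styled in stylize_stream(fake_camera, net):
        print("styled frame:", tuple(styled.shape))
```

A practical constraint for any such pipeline is the phone’s compute and memory budget, which is why the sketch keeps the network deliberately tiny; the point is simply that the whole capture-process-display loop stays on the device.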
Other Facebook announcements at the Web Summit included the following:
–Facebook’s work on speech recognition is also helping to create more-realistic avatars and new UI tools for VR, contributing to a feeling of presence with other people in VR.
–Image and video processing software powered by computer vision is improving immersive experiences and helping to support hardware advances.
–Facebook is applying computer vision to 3D city analysis to help plan deployments of millimeter wave technologies such as Terragraph in dense urban areas, using the resulting 3D models to identify radio propagation paths that connect nearby sites with a clear line of sight (a simplified sketch of that check appears below).
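Facebook did not detail how its 3D city analysis is implemented. As a rough illustration of the final step it describes, the sketch below, assuming a simple raster height map of building heights, tests whether two candidate antenna sites see each other over the buildings between them; the grid, site format and sampling density are assumptions for the example.

```python
# Illustrative sketch only: checking whether two candidate radio sites have a
# clear line of sight over a city height map. A real planning pipeline would
# work from detailed 3D reconstructions rather than a coarse raster.
import numpy as np

def has_line_of_sight(height_map, site_a, site_b, samples=200):
    """site_* = (row, col, antenna_height_m); height_map holds building and
    terrain heights in meters on a regular grid. Returns True if the straight
    path between the two antennas clears every cell it passes over."""
    (r0, c0, h0), (r1, c1, h1) = site_a, site_b
    t = np.linspace(0.0, 1.0, samples)
    rows = r0 + t * (r1 - r0)
    cols = c0 + t * (c1 - c0)
    path_height = h0 + t * (h1 - h0)  # height of the ray along the path
    ground = height_map[rows.round().astype(int), cols.round().astype(int)]
    return bool(np.all(path_height >= ground))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    city = rng.uniform(0, 30, size=(100, 100))  # stand-in building heights
    a = (10, 10, 45.0)                          # site on a 45 m rooftop
    b = (80, 90, 40.0)
    print("clear line of sight:", has_line_of_sight(city, a, b))
```

Millimeter wave links degrade sharply without a clear path, so a planning tool would run a check like this across many candidate site pairs extracted from the 3D city model to find viable links.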