
    Google Researchers Explore Ways to Ensure Safety of Future AI Systems

    Written by Jaikumar Vijayan
    Published June 23, 2016


      Concerns about things going wrong with the artificial intelligence systems of the future have gotten the attention of researchers at Google.

      Barely two weeks after the company’s DeepMind group announced a partnership with researchers at the University of Oxford to develop a kill switch for rogue AI systems, Google has released a technical paper devoted to addressing AI safety risks.

      The paper, titled “Concrete Problems in AI Safety,” was written in collaboration with researchers at Stanford University; the University of California, Berkeley; and OpenAI, a nonprofit focused on AI research.

      It outlines five basic problems the researchers say are relatively minor today but predict will assume much greater importance as machines get smarter in the future.

      The paper explores practical approaches to solving these problems and ensuring that AI systems are engineered to operate in a reliable and safe manner, Google researcher Chris Olah said on the company’s Research Blog.

      “While possible AI safety risks have received a lot of public attention, most previous discussions have been very hypothetical and speculative,” Olah said. “We believe it’s essential to ground concerns in real machine learning research and to start developing practical approaches for engineering AI systems” that operate safely, he said.

      Machine learning and artificial intelligence are important areas for Google. The company has said it wants to leverage advances in these areas to make its core technologies better. It already applies AI and machine intelligence techniques in certain applications including Google Translate, Google Photos and voice search.

      Company CEO Sundar Pichai has said that Google expects to see AI radically transforming the way people travel, accomplish daily tasks and tackle problems in areas such as health care and climate change.

      But advancing AI means making AI systems both smarter and safer, noted OpenAI researchers Paul Christiano and Greg Brockman in a blog post announcing the company’s role in the newly released technical paper. That means “ensuring that AI systems do what people actually want them to do,” the researchers said.

      One of the five problems the newly released Google technical paper examines is how to ensure that an AI system does not negatively impact its environment while performing its functions. As an example, Olah pointed to programming a cleaning robot not to knock over a vase even though it could complete its task faster by doing so.
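
      To make that idea concrete, here is a minimal sketch, in Python, of one way such a side-effect penalty could work: the agent's score is its task reward minus a penalty for changes to the environment that simply doing nothing would not have caused. The function, state representation and penalty weight are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: penalize changes to the environment relative to inaction.
def impact_penalized_reward(task_reward, state_after, baseline_state, penalty_weight=5.0):
    """Task reward minus a penalty for each object disturbed relative to doing nothing."""
    disturbed = sum(
        1 for obj, position in state_after.items()
        if position != baseline_state.get(obj, position)
    )
    return task_reward - penalty_weight * disturbed

# The robot finishes faster by knocking over the vase (task reward 10), but once
# the penalty applies, the slower, careful route (task reward 8) scores higher.
careless = impact_penalized_reward(10, {"vase": "floor"}, {"vase": "table"})
careful = impact_penalized_reward(8, {"vase": "table"}, {"vase": "table"})
print(careless, careful)  # 5.0 8
```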

      Other problems involve figuring out how to ensure that robots do not engage in activities with negative consequences, such as sticking a wet mop into an electrical outlet, and that they operate in a suitably safe manner in different environments, such as on a shop floor or in an office.
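
      One hedged way to picture the different-environments concern is a policy that falls back to a conservative default whenever its observation looks unlike anything it was trained on. The function names and threshold below are assumptions for illustration only, not the paper's approach.

```python
# Illustrative sketch: act on the learned policy only in familiar-looking situations.
def choose_action(policy, observation, novelty_score, threshold=0.8,
                  safe_default="stop_and_ask_operator"):
    """Fall back to a conservative action when the observation looks unfamiliar."""
    if novelty_score(observation) > threshold:
        return safe_default  # e.g., an office layout a shop-floor robot has never seen
    return policy(observation)
```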

      “Many of the problems are not new, but the paper explores them in the context of cutting-edge systems,” OpenAI’s Christiano and Brockman said.

      The new technical paper is part of what appears to be a deepening focus on addressing AI safety-related issues. Google’s research with Oxford University, for instance, is focused on ensuring the hyper-intelligent AI systems of the future will never be capable of actively blocking interruption by a human operator. The goal is to ensure that engineers have a way of safely shutting down an AI system if it starts behaving in an erratic or unsafe manner.
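
      A rough sketch of that interruptibility idea, under assumed function names rather than the actual DeepMind and Oxford design, is an agent loop in which a human override replaces the agent's chosen action and the overridden step is excluded from learning, so the agent never finds that resisting interruption pays off.

```python
# Illustrative sketch: a human operator can override the agent at any step, and
# overridden steps are skipped in the learning update so the agent gains no
# incentive to block interruption.
def run_episode(policy, env_step, interrupt_requested, learn, steps=100):
    state = "start"
    for _ in range(steps):
        if interrupt_requested(state):
            state, _ = env_step(state, "safe_shutdown")  # operator override
            continue  # no learning update on interrupted steps
        action = policy(state)
        next_state, reward = env_step(state, action)
        learn(state, action, reward, next_state)
        state = next_state
```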

      Jaikumar Vijayan
      Vijayan is an award-winning independent journalist and tech content creation specialist covering data security and privacy, business intelligence, big data and data analytics.
