As 2018 winds down toward a new year, artificial intelligence technologies are becoming increasingly useful in every aspect of our lives. Just look around: They're in our smartphones, our social networks, our home smart speakers and our personal vehicles. You name it, there's AI at work somewhere.
In fact, experts predict these technologies will only become more prevalent in both our personal and professional lives, and that's not much of a stretch to believe.
Now, as with any newly widespread technology, we're confronted with a global need to make the best use of it; specifically, we must educate our communities on how to apply guardrails that ensure the use of AI is both ethical and beneficial to all.
Some of these new rules will tackle questions such as:
- Humanity: How do machines affect our behavior and interaction?
- Inequality: How do we distribute the wealth created by machines?
- Unemployment: What happens after the end of jobs?
- Security: How do we keep AI safe from adversaries?
We could go on. There are lots of questions to be answered here; we can only get to a few right now. Rest assured, however, that eWEEK will continue to examine this topic periodically.
Right now, it's time for business and government to step in. Working closely with AI experts and developers, business and government organizations need to define the principles that will guide ethical development of AI within their industries, ultimately helping to ensure the technology is safe, sustainable and, above all, able to reach its potential.
In this eWEEK Data Point article, based on a recent paper published by Sage, we offer five key points for business and government to consider when building an ethical and competitive AI economy.
Data Point 1: Introduce AI corporate governance and ethical frameworks
The first step to supporting an ethical AI economy is to establish key principles that uphold moral use of the technology. Just as companies have done recently with social media guidelines, businesses should draft or revise corporate policy to include a stance on ethical use of AI, outlining accountability measures specific to their business model. Companies must also set clear expectations at every level of the organization, making ethical AI part of routine discussion across the company, from board meetings and employee performance reviews to informal management and staff check-ins.
At the government level, officials should be mindful of the role regulators play in guiding the implementation of ethical best practices. Policymakers should work directly with industry experts to familiarize themselves with AI's technical makeup, potential security risks and real-world applications before launching formal investigation programs.
Data Point 2: Demystify AI and questions around its accountability
To demystify AI and questions surrounding accountability, businesses should start by acknowledging any potential risks of implementing the technology and develop strategies for testing AI both prior to deployment and once it’s in use. When building these strategies, companies should engage outside experts to explore how AI accountability applies to their specific corporate ambitions and customer needs.
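To make the idea of pre-deployment testing a little more concrete, here is a minimal sketch in Python of the kind of accountability gate a company might run before releasing a model. Everything in it, from the toy predictions and group labels to the 5-percentage-point tolerance, is an illustrative assumption for this article, not a procedure prescribed by the Sage paper.

```python
# Hypothetical pre-deployment accountability check: compare a model's
# accuracy across demographic groups and hold the release if the gap
# between the best- and worst-served group exceeds a tolerance.
# All data and thresholds below are illustrative assumptions.

from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return the model's accuracy for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def passes_parity_check(predictions, labels, groups, max_gap=0.05):
    """Deployment gate: fail if accuracy varies too much across groups."""
    scores = accuracy_by_group(predictions, labels, groups)
    gap = max(scores.values()) - min(scores.values())
    return gap <= max_gap, scores

# Example: a toy loan-approval model evaluated before release.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ok, scores = passes_parity_check(preds, labels, groups)
print(scores)  # per-group accuracy
print("deploy" if ok else "hold for review")
```

The same check would presumably be re-run on live traffic once the system is in use, which is one way the "prior to deployment and once it's in use" testing described above can take shape in practice.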
In government, officials should recognize the essential balance between introducing measures to promote accountability and allowing for AI innovation and the productivity it brings. Finding this balance will help keep both the technology and its users honest and transparent while new protocols are still being established.
Data Point 3: Build human trust in corporate AI
For businesses, it's essential to be as transparent as possible when communicating about the company's use of AI. When introducing the idea to stakeholders, businesses must share the technology's intended purpose and explain why the leadership team feels it's important to adopt. Business leaders must clearly define the standards around using AI by developing training and certification programs, and they should be open about the measures taken to define these best practices, especially any steps taken to test the technology for flaws.
Governments can work to build trust in corporate AI by looking for opportunities to involve members of the public in ongoing conversations that address popular concerns and introduce future considerations. Further, through government-anchored awareness campaigns, officials can work to reduce public inhibitions around AI's presence in work and everyday life.
Data Point 4: Bring AI into the workforce
To bring AI into the workforce, businesses should invest in staff retraining for any employees who interact significantly with automated technologies. To make the best use of this training, companies can empower HR with the data needed to map future skills demand, so they know exactly which tools to arm the current and future workforce with to succeed.
Data Point 5: Support digital education programs for young students
Finally, both businesses and governments should support digital education programs for young students, helping to ensure that the next generation of workers is equipped to apply AI with an understanding of the wider ethical issues. In an ideal world, students would leave school anticipating ethical frameworks and principles around the use of AI; they would understand what accountability concerns led to the creation of these principles and, in turn, expect transparency from companies doing business with AI. As outlined in the recent paper from Sage, this reality is within reach so long as business, government and industry players work together to lay the groundwork for ethics and success.