Microsoft President Brad Smith has taken the unusual step of asking the government to regulate its technology while it still can. At the same time, Smith said that Microsoft is placing limits on what its customers can do with some of its technology by prohibiting some uses in its terms of service.
The technology in question is facial recognition, which is already showing up in public. The U.S. Secret Service announced a few days ago that it is testing facial recognition outside the White House. Meanwhile, Delta Air Lines has begun using facial recognition in place of boarding passes on some international flights from its Atlanta hub. And U.S. Customs and Border Protection has notably begun using facial recognition at a number of U.S. airports to catch travelers presenting fraudulent passports.
These are legitimate uses for facial recognition, although some other uses are questionable, such as using the technology to scan crowds at sporting events for alleged troublemakers, with plans to deny them access.
Microsoft Suggests Congressional Action
Smith, speaking for a company that is already well along in developing facial recognition technology, sees the potential for abuse and the risks the technology poses to privacy and to democracy. In his blog post on the topic, Smith outlines what he thinks Congress needs to address in legislation, and he sets a goal of having that legislation passed in 2019.
While Smith does not provide anything like draft legislation, he does lay out principles that he says should be part of any law governing the use of facial recognition. Chief among them are requiring transparency and enabling third-party testing and comparison. People should know when facial recognition is being used, he believes, and companies that make facial recognition systems should allow others to validate the accuracy of their products. One problem Smith says he is seeing is that some developers simply refuse to allow their products to be tested.
Smith stressed that it is critical to avoid using facial recognition in ways that foster illegal discrimination and that there must be meaningful human review. Privacy also needs to be protected, he said, which means people need to know when facial recognition is being used and what constitutes consent, so they can “vote with their feet” if they object.
Key Steps Microsoft Is Taking
But Smith and Microsoft are doing more than calling for legislation. Smith also laid out six steps the company is taking to help ensure that its facial recognition technology is not misused:
- Fairness
- Transparency
- Accountability
- Nondiscrimination
- Notice and Consent
- Lawful Surveillance
To add some teeth to the effort, Smith said that Microsoft will add these conditions to its terms of service for any products or services that incorporate facial recognition. This may not sound like a big deal, since terms of service are widely ignored, but Microsoft has put corporate muscle behind its terms before. Remember, this is the company that held the U.S. Department of Justice at bay for years rather than reveal customer information in violation of European law.
This is not the first time that Smith and Microsoft have expressed concern over the potential for misuse of facial recognition. In July, Smith wrote another blog post outlining the need for corporate responsibility and public regulation. At the time, he said Microsoft would announce its position in more detail. This is all part of a larger theme of advocating for greater protection of customers’ privacy and control of their own data, such as the company’s statement in May committing to the EU’s GDPR and its privacy limits.
Why Other Companies, Countries Need to Join In
Unfortunately, Microsoft, despite its size and vast reach, is only one company. Worse, some of the biggest offenders in the misuse of facial recognition, notably the Chinese government, aren’t likely to care very much what limits the company puts into its terms of service. After all, any legal action would be tried in Chinese courts controlled by the same government that already violates human rights with impunity.
But Microsoft’s position gains force if other companies adopt similar principles and place similar limits in their terms of service. This can have a positive effect across the tech industry for a couple of reasons. First, taking a strong stand in favor of human rights is good for a company globally, because it eliminates one point of contention for a business that operates everywhere. Second, having one standard worldwide is easier to administer in the long run.
To be effective, though, those standards need to be adopted by the rest of the industry. A unified approach by the big U.S. tech companies would go a long way toward making sure that facial recognition technology isn’t misused, and it would also give cover to smaller companies that want to prevent abuse but may not have the resources of a Microsoft or a Google.
While there’s not much that can be done to prevent misuse of technology by a government that operates without limits, it is possible for companies to enforce a standard if they simply refuse to do business with those who would misuse their products.
And it’s worth noting that such a standard should apply to more than facial recognition. The potential for misuse of IT in general is enormous, and now is the time to put protections in place while we still can.