WASHINGTON—When a prestigious organization such as the Brookings Institution here in the nation’s capital decides to study civilian robotics, you know that, at the very least, it will present some thought-provoking views.
In that sense, Brookings delivered. Unfortunately, the analysts delivering the results of their studies on the Future of Civilian Robotics have yet to agree on what actually constitutes robotics.
Part of the reason for the confusion over what should be an easy question is that the think tank is located in Washington, D.C., where competing political agendas can easily obscure the realities of science and technology. Robotics is certainly one of those areas that bring out competing agendas.
This gets even more complicated when you consider that aerial drones might also be robots. It should be no surprise, then, that as soon as the D-word was mentioned, the conversation among the august researchers at Brookings immediately veered into discussions of privacy rights, Federal Aviation Administration regulation of things that fly, and the critical issue of what happens when you fly small drones over the heads of people with shotguns.
It’s too bad that the serious conversation about the place of robots in government and society was sidetracked into a tangential discussion about drones. There are, in fact, some significant issues involving robots that deserve serious attention from legislators and regulators. One of those issues is deciding which part of government needs to be involved in regulation.
For example, Ryan Calo from the University of Washington School of Law recommends the creation of what he calls a Federal Robotics Commission. But despite the name, the FRC wouldn’t be a group of bureaucrats regulating robotics; instead, it would be more like NACA, the National Advisory Committee for Aeronautics.
During its time as NACA, the organization, which was founded in 1915, was instrumental in a number of advances in aviation, aircraft design, coordination and research. Eventually, NACA became NASA, which continued the aeronautical research but moved into spaceflight. Calo envisions the FRC having a similar function for robotics.
One question already looming for users of robotic systems, including robots put to work in industry and manufacturing, is determining who is responsible for their safe and responsible operation.
If a robot causes harm, is it the fault of the robot? That’s unlikely, since an inanimate object can’t really bear responsibility. So is it the owner? Or perhaps the person who programmed it? Maybe the company that manufactured it?
Right now, there aren’t any really good answers to that question, but society is on the cusp of having to come up with them. Calo suggests a number of ideas, including selective immunity.
But there are other ideas about how that should work. John D. Villasenor, a senior fellow at Brookings, suggests that existing product liability laws, handled by states and localities, are already in place and sufficiently flexible to handle any harm caused by a robot. But I have to look at how robots are already being used and wonder.
One thing that comes to mind actually happened years ago when hardly anyone realized robots were already in use in the office. I was the executive officer of a military facility that included a huge building with thousands of workers. Moving through this facility several times a day was a mail delivery robot that trundled along to each office where it stopped to drop off and pick up mail.
This machine was programmed to stop instantly when it sensed a person was too close. But suppose that safeguard had failed and the robot had run over someone’s foot. Who would have been responsible for any legal liability: the government, the maker of the machine or the person who had programmed it?
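To make the liability question concrete, here is a minimal sketch of the kind of proximity-stop logic such a mail robot might use. The function name, sensor reading and safety threshold are all illustrative assumptions, not details of the actual machine; the point is that a single conditional stands between normal operation and the scenario above.

```python
# Hypothetical proximity-stop check for a delivery robot.
# The 0.5 m threshold and the API shape are assumptions for illustration.

STOP_DISTANCE_M = 0.5  # assumed minimum safe distance to a person, in meters


def commanded_speed(distance_to_person_m: float, cruise_speed: float) -> float:
    """Return the speed the robot should drive at.

    If anyone is inside the safety radius, the robot halts; otherwise it
    continues at its normal cruising speed.
    """
    if distance_to_person_m < STOP_DISTANCE_M:
        return 0.0  # person too close: stop instantly
    return cruise_speed
```

If that check ever fails, whether through a sensor fault, a programming bug or a manufacturing defect, the question of who answers for the injury is exactly the one Calo and Villasenor are debating.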
The questions go on from there. But one potential function of such an agency, if one were to exist as Calo suggests, might be to sort out what is a robot and what is not—and in the process help determine what sort of device would fall under its auspices.
Professor Gregory McNeal from Pepperdine University alluded to this issue when trying to separate the discussion of drones from that of robots. McNeal noted that the discussion of robotics has been clouded because some groups have vilified robots in general by intentionally lumping in the controversial privacy and safety issues surrounding aerial drones. He noted that demonizing drones, and by association robots, polarizes the discussion of their potential legitimate uses.
Calo suggests that robotics is a transformative technology, just as the Internet and aviation were. But for such a transformation to be incorporated sanely into society and the world economy, the discussion needs to move beyond the political and emotional.
In a sense, the discussion of how robotics can fit into society needs to be transformed as well. The next time you’re discussing Google’s driverless cars, which are also robots, remember that they too are now part of the conversation about transformative but controversial technology. Chances are, though, that right now drones aren’t part of that conversation at all.