Ethical principles in robots: a possibility?

by Mary Ann McGivern

I'm in Wisconsin with my family, some of whom came from as far away as Alaska. Monday morning on the deck, eating banana bread, my brother-in-law Tom raised the question of whether we can instill ethical principles in robots. My sister said we humans still control the robots, but Tom said not necessarily. At this point, we can always unplug them. But at what point will a drone be able to decide to preserve itself, say, from an attempt to seize control of its motherboard?

Artificial intelligence implies that the machines learn from the data they collect. So, for instance, a self-driving car would contain the rules of the road, a human installation. But it would also be reading data from four miles ahead, making decisions about what route to take, what speed is appropriate and whether, I suppose, it would be more efficient to mow down a jaywalking pedestrian. These are decisions a human cannot program into the robot ahead of time, because a human cannot foresee road conditions.
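To make that split concrete, here is a minimal sketch in Python. The sensor fields and numbers are invented for illustration; no real vehicle's software looks like this. The point is only the division of labor: the human installs a rule, the machine computes the decision from data the programmer never saw.

```python
# A human-installed rule of the road, fixed before the car ever drives.
SPEED_LIMIT_MPH = 55

def choose_speed(sensed: dict) -> float:
    """Pick a speed from live sensor data; the rule only caps the result."""
    # Hypothetical sensor readings, for illustration only.
    visibility_miles = sensed.get("visibility_miles", 4.0)
    vehicles_per_mile = sensed.get("vehicles_per_mile", 0)

    # The runtime decision: derived from conditions no programmer
    # could have foreseen when the code was written.
    desired = 50.0 + 2.0 * min(visibility_miles, 4.0) - 1.5 * vehicles_per_mile

    # The human-installed rule acts only as a constraint on that decision.
    return max(0.0, min(desired, SPEED_LIMIT_MPH))

# Heavy traffic and poor visibility on a day nobody planned for:
print(choose_speed({"visibility_miles": 1.0, "vehicles_per_mile": 20}))  # 22.0
```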

In his science fiction, Isaac Asimov developed three laws of robotics: A robot may not injure a human being or allow a human to come to harm; a robot must obey human orders unless they would cause human harm; a robot must protect itself, unless that self-protection conflicts with the first two laws. Very good thinking on Asimov's part. We just have not programmed these laws into computers. You can read more here: "Preventing an autonomous-systems arms race."
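The laws form a strict ordering, and that ordering, at least, is something we know how to express in code. Here is a toy sketch in Python of the three laws as a priority filter over candidate actions. The Action fields are made up for illustration, and nothing this simple would make a real machine safe; it only shows that the precedence Asimov described is programmable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    # Hypothetical labels a planner might attach to each candidate action.
    name: str
    harms_human: bool       # would executing this injure a person?
    ordered_by_human: bool  # was this action commanded by a person?
    preserves_robot: bool   # does this action protect the robot itself?

def choose(candidates: list) -> Optional[Action]:
    # First Law: discard anything that would injure a human, no matter what.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among the safe actions, a human order comes first.
    ordered = [a for a in safe if a.ordered_by_human]
    if ordered:
        return ordered[0]
    # Third Law: otherwise the robot may protect itself.
    preserving = [a for a in safe if a.preserves_robot]
    if preserving:
        return preserving[0]
    return safe[0] if safe else None

# A self-preserving move that would harm a bystander loses to a human order.
options = [
    Action("dodge into crowd", harms_human=True, ordered_by_human=False,
           preserves_robot=True),
    Action("stop and shut down", harms_human=False, ordered_by_human=True,
           preserves_robot=False),
]
print(choose(options).name)  # stop and shut down
```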

Human commitment to these robotic laws or some other ethical code would end drone warfare, the topic of an editorial in Sunday's New York Times, "Reining In the Drones." It calls for better controls on, and public accountability for, drone warfare. The Times refers to a 77-page report by the Stimson Center on the risks of creating more opposition and more drone warfare. And Richard A. Clarke, author of Sting of the Drone, writes: "Since [Nov. 12, 2001], the United States has killed at least two thousand people in five countries using armed drones. And the killing continues."

All this is only a surface discussion of tactical warfare. The deeper ethical questions about the power of what we are making rarely get discussed. My brother-in-law's point is that today's computers, from those in cameras to those in bombs, were programmed by other computers. The programmers who wrote the original code assume the machines will keep the ball rolling.

We have a limited window in which to decide whether we want our machines to hold ethical norms. So far we haven't shown much ability to control our technology. Soon we may lose control altogether.
