I’ve been presenting a seminar on engineering ethics to students in the Electrical Engineering Department at Washington State University for a number of years. I start it off with a video of the Challenger explosion and the concerns raised in the lead-up to the launch by Roger Boisjoly, a Morton Thiokol engineer.
Boisjoly had studied data showing that the field joint seals on the shuttle’s solid rocket boosters were burning through and that cold weather made the problem worse. Challenger was scheduled to launch with the first Teacher in Space on board, and the weather was significantly below the recommended minimum temperature, a full 35 degrees below it. Boisjoly pushed for a launch delay in spite of the ramifications it would have for the program and for his employer, Morton Thiokol, the manufacturer of the solid rocket boosters. The launch went ahead in spite of his warnings, with the results we all know. This information, and more, is available here.
Roger Boisjoly did what he felt was right. A lesson in engineering ethics, and one I hoped the students would never face during their careers. I then ran them through a number of scenarios from an older Intel corporate training course on ethics (with Intel’s permission, of course), presenting each scenario and asking for their ideas on the ethics of the question. Was this something you should do, or not do? If not, why not? Some great discussions would ensue before I gave them the Intel answer. And some of the “correct” answers surprised them, as they were more liberal than they would have thought. On the other hand, the scenario where you overheard someone telling a third party that they were going to buy stock in XYZ Company because they had heard that Intel was going to invest in the company had only one possible right answer, in spite of what a bunch of college students might have thought.
These scenarios were designed to provide guidance, ranging from lighthearted to serious, on questions that might come up in one’s career, along with ideas on how they might be handled.
The September 2017 issue of The Institute, published by the IEEE, contains a short article on page 5 presenting the proposed changes to the IEEE Code of Ethics approved by the IEEE Board of Directors on June 25, 2017. The first change strengthens the first point in the Code of Ethics, about making decisions consistent with the safety, health, and welfare of the public. The second change is to the fifth point. Its new wording is, “to improve the understanding of the capabilities of technology by individuals and by society as a whole, and its applications and societal implications, including outcomes attributable to autonomous systems.” The last part of this new text, referring to “outcomes attributable to autonomous systems,” is the key point of interest here. Specifically, it feeds into the question discussed next.
A new question came up in discussions with fellow engineers during the 2017 IEEE International Symposium on EMC/SIPI in Washington, DC this August. It does not have an easy answer, but it is a very real-world question, and the people working on these projects will have to address it. Think about fully automated cars on the road. An incredible amount of hardware and software is necessary to make these vehicles a reality. They must be able to recognize the environment around them. They must be able to navigate from point A to point B without breaking any traffic laws. They must be able to avoid other traffic to minimize accidents. They must be able to recognize their own fuel level (whether liquid fuel or batteries) and go to an appropriate facility to refuel or recharge so that they, and their passengers, don’t wind up stranded along the road. But there is an important point that must also be addressed.
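Before turning to that point, here is a minimal sketch, with entirely hypothetical component and method names, of how the responsibilities just listed might be organized in a vehicle’s main drive loop. It illustrates the division of labor, not any real system’s design.

```python
# Purely illustrative: every class and method name below is an assumption
# made for this sketch, not any vendor's actual API.

class AutonomousVehicle:
    def __init__(self, perception, planner, controller, energy_monitor):
        self.perception = perception          # recognizes the environment around the vehicle
        self.planner = planner                # routes from point A to point B within traffic laws
        self.controller = controller          # steers, brakes, and accelerates to follow the plan
        self.energy_monitor = energy_monitor  # tracks fuel level or battery state of charge

    def step(self):
        """One iteration of the drive loop."""
        world = self.perception.sense()           # model surrounding traffic and obstacles
        if self.energy_monitor.needs_refuel():
            self.planner.divert_to_refuel(world)  # reroute to a refueling or charging facility
        plan = self.planner.update(world)         # avoid other traffic, obey traffic laws
        self.controller.execute(plan)             # act on the plan
```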
From Wikipedia, we have the following information on the Three Laws of Robotics, as expressed by Isaac Asimov in a short story in 1942:
The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
While these “Laws” were written in a work of science fiction, they are a good starting point for writing software that controls robots. We want robots that are safe for humans, not robots that look out only for themselves.
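As a thought experiment, the priority ordering built into the Laws can be written down directly. The sketch below, using made-up scores rather than anything a real robot could measure, simply picks whichever candidate action looks best when harm to humans is weighed first, disobedience of human orders second, and damage to the robot itself last.

```python
# A minimal sketch of the Three Laws as a lexicographic priority over candidate
# actions. The scores are illustrative placeholders, not real measurements.

def choose_action(candidate_actions):
    """Pick the action that best satisfies the Three Laws, in priority order."""
    def law_priority(action):
        return (
            action["harm_to_humans"],   # First Law: minimize harm to humans above all
            action["orders_violated"],  # Second Law: then minimize disobedience
            action["harm_to_self"],     # Third Law: then minimize damage to the robot
        )
    # Python compares tuples element by element, so a lower First-Law score
    # always wins regardless of the other two.
    return min(candidate_actions, key=law_priority)

# Example: sacrificing the robot beats disobeying an order, which beats harming a human.
actions = [
    {"name": "harm human",     "harm_to_humans": 1, "orders_violated": 0, "harm_to_self": 0},
    {"name": "ignore order",   "harm_to_humans": 0, "orders_violated": 1, "harm_to_self": 0},
    {"name": "sacrifice self", "harm_to_humans": 0, "orders_violated": 0, "harm_to_self": 1},
]
print(choose_action(actions)["name"])  # -> sacrifice self
```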
Looking at the first of these three laws, we find the core of an ethical dilemma for engineers writing software to control autonomous vehicles. The autonomous vehicle is a robot, designed to transport a person or people, along with goods. The question has been asked, “How should the vehicle be programmed for the situation where it has to choose between killing a pedestrian and killing the occupants of the vehicle?” As an example, the vehicle comes around a blind curve and a pedestrian is standing in the middle of the road. If the vehicle is programmed along the lines of Asimov’s Laws, the first law says that it may not injure the pedestrian, so it must take whatever action is necessary to avoid hitting the pedestrian. If there is time, brake to a stop. If there is room, go around the pedestrian.
But what if there isn’t time to stop, nor room to go around the pedestrian without an accident that would harm the occupants of the vehicle? Now the vehicle’s programming must choose: injure or kill the pedestrian, or injure or kill the occupants of the vehicle.
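In code, the cascade just described might look something like the sketch below. The helper functions are hypothetical placeholders, and the final branch is deliberately left unwritten, because filling it in is precisely the design decision at issue.

```python
# Sketch only: stopping_distance, distance_to, clear_path_around, brake_to_stop,
# and steer_around are assumed helper functions, not a real driving stack's API.

def avoid_pedestrian(vehicle, pedestrian, road):
    if stopping_distance(vehicle) < distance_to(vehicle, pedestrian):
        return brake_to_stop(vehicle)             # enough time: stop short of the pedestrian
    if clear_path_around(pedestrian, road):
        return steer_around(vehicle, pedestrian)  # enough room: go around the pedestrian
    # Neither option is available. Any action now harms either the pedestrian
    # or the occupants of the vehicle, and whose safety takes precedence is the
    # open ethical question the text goes on to raise.
    raise NotImplementedError("pedestrian-vs-occupant tradeoff policy is undecided")
```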
There’s the ethical question. Who do you kill? Something to think about when designing the software to operate an autonomous vehicle. Not an easy question to contemplate, but one that autonomous vehicle designers will have to address. I do not propose to provide an opinion on the correct answer, and, of course, the “correct” answer would depend on your point of view. Are you the pedestrian, or one of the occupants of the vehicle? What is the “right” answer? Think about this the next time you are writing code that may have serious ramifications for those around the device.