In the fast-paced world of researching, developing, and commercializing emerging technologies, ethical considerations are often taken up only after a product is already on the market. That is too late, and often ethics aren't discussed at all until something goes wrong.
Instead of “ethics as usual,” I suggest what is known as anticipatory ethics, which requires that ethical questions be considered from the very beginning of a technology’s R&D cycle.
To some readers this notion may sound like a smart idea; others may consider it impractical. After all, the development of potentially lucrative products demands confidentiality around intellectual property, and the fewer obstacles in the way, the better. But that doesn’t mean we shouldn’t at least try to ask important questions at each stage of development, so we can better understand and prepare for the ethical implications for individuals, communities, and society at large.
As both a professional ethicist and an engineer, I will be sharing my thoughts on this as a “Future Widespread Impacts From Innovations in Technology” panelist at the IEEE Technology Time Machine conference, to be held 20 and 21 October in San Diego. Here are some of the issues I’ll be discussing at the conference:
MICRO AND MACRO
For many of us, ethics is the application of moral principles to our daily actions, work, and relationships. It often boils down to the notion of doing the right thing. But “who” or “what” is making those decisions? To explain, I’d like to break down engineering ethics into micro- and macroethics.
Microethics relates to the actions of individuals: How does an engineer make ethical decisions on the job? Macroethics refers to the implications of public policy and to the collective social responsibilities of engineers: How do groups of individuals determine their ethical positions and put them into practice? For example, does a professional engineering organization such as IEEE have a responsibility to its members and to society to review and debate the societal implications of its collective work?
There is overlap between microethics and macroethics, and the process of applying ethics to technology innovation must integrate both.
One case where that is already happening is within the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems. Its purpose is to “ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.” Programmers write the algorithms that support the logic of artificial intelligence. AI is not immune to human bias and foibles. Therefore, its potential impacts on society should be anticipated and, if necessary, mitigated. The initiative is complemented by IEEE TechEthics, a new program that fosters open, broad, and inclusive conversation about ethics in technology.
TECHNOLOGY IS ANYTHING BUT NEUTRAL
One popular notion is that technology is inherently neutral and that it’s how people use it that makes it ethical or not. A familiar example: “Guns don’t kill people. People kill people.” This myth should be debunked. Guns are designed as lethal weapons; that is their purpose. Guns, like all technologies, become part of sociotechnical systems, in which political, social, and cultural contexts merge with the technology itself. To ignore this fact is both naive and dangerous.
During technology-related disasters, the popular explanation given is often “human error,” where human usually means operator. This approach ignores the humanity (and fallibility) of the technology’s designers, managers, and manufacturers. Were the operators trained to anticipate and recognize all system failures? Were the systems designed with safeguards that could prevent an “accident” attributable to human mistakes? We build “fail-safe” mechanisms into many endeavors, but preventable catastrophic events remind us that such mechanisms require constant review and revision.
The challenge of integrating ethics into technology development is daunting. Many practical arguments suggest that it’s a quixotic quest. But, as author and futurist Arthur C. Clarke once wrote, “The only way to discover the limits of the possible is to go beyond them into the impossible.”
IEEE Senior Member Joe Herkert is a member of the IEEE Society on Social Implications of Technology and the TAB Ethics, Society & Technology Ad Hoc Committee. He is a visiting scholar at the Genetic Engineering and Society Center at North Carolina State University, in Raleigh.