A lip-reading system that could help those with hearing impairments. A human-sounding digital assistant that can call to make appointments and hold a natural conversation with the receptionist to find a convenient time. Like many other artificial intelligence technologies, these applications are dual-use. That lip-reading system could become a surveillance tool. Con artists could employ that virtual assistant to persuade people to share private information.
The Association for Computing Machinery Future of Computing Academy wants those who write research papers about AI technologies to explain not only the positive aspects of their work but also its potential negative effects, Brent J. Hecht said in an interview with The New York Times. Hecht, an IEEE member, chairs the academy, which gives the next generation of computing professionals a voice on challenging issues facing the field and society.
Hecht and other members of the academy are calling on editors of peer-reviewed journals to reject submissions that don’t discuss the downsides of AI-enabled technology.
“The computer industry can become like the oil and tobacco industries, where we are just building the next thing, doing what our bosses tell us to do, not thinking of the implications,” Hecht said. “We can be the generation that starts to think more broadly.”
Do you think researchers should be transparent about all the possible implications of their work? Or is it better to keep such speculation to themselves rather than give anyone ideas for how to use the technology for nefarious purposes?