A chatbot becomes hostile. A test version of the Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect based on facial recognition software, which is less accurate in identifying women and people of color.
These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools are increasingly integrated into everyday life, ethical concerns are on the rise, from privacy violations and race and gender biases in coding to the spread of misinformation.
The general public relies on software engineers and computer scientists to ensure that these technologies are created in a safe and ethical manner. As a sociologist and a doctoral candidate interested in science, technology, engineering and mathematics education, we are currently researching how engineers learn and understand their responsibilities to the public.
But our recent research, as well as that of other scholars, shows that there is a troubling reality: The next generation of engineers is often not prepared to deal with the social implications of their work. Moreover, some seem apathetic about the moral dilemmas their careers may present—just as advances in AI add to those dilemmas.
Aware, but unprepared
As part of our ongoing research, we interviewed more than 60 master’s students in electrical engineering and computer science at a top engineering program in the United States. We asked students about their experience of ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to future situations.
First, the good news: Most students recognized the potential dangers of AI and expressed concern about personal privacy and the potential for harm – as race and gender biases can be written into algorithms, intentionally or unintentionally.
One student, for example, expressed dismay at the environmental impact of AI, saying that AI companies are using “more and more greenhouse power, [for] minimal benefits.” Others discussed concerns about where and how AIs are being applied, including in military technology and to generate fake information and images.
However, when asked, “Do you feel that you are able to react in stressful or unethical situations?” students often said no.
“Flat out, no. … It’s kind of scary,” one student responded. “Do I even know who I’m supposed to go to?”
The lack of training worried another: “I [would be] dealing with that without any experience. … Who knows how I would respond.”
Other researchers have also found that many engineering students do not feel satisfied with the ethics training they receive. Common training typically emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with specific scenarios or case studies, engineering students often find it difficult to identify ethical dilemmas.
‘A box to check off’
Accredited engineering programs must “include topics related to professional and ethical responsibilities” in some capacity.
But ethics training is rarely emphasized in formal curricula. A study evaluating undergraduate STEM curricula in the US found that coverage of ethical issues varied greatly in terms of content, extent and seriousness of presentation. Furthermore, an analysis of academic literature about engineering education found that ethics is often treated as nonessential training.
Many engineering faculty express dissatisfaction with student understanding, but report feeling pressure from engineering colleagues and the students themselves to prioritize technical skills in their limited class time.
Researchers in one 2018 study interviewed more than 50 engineering faculty and documented reluctance — and sometimes even outright resistance — toward incorporating public interest issues into their engineering classes. More than a quarter of the professors interviewed saw the ethical and societal consequences as outside the “real work” of engineering.
About a third of the students we interviewed in our ongoing research project share this apathy toward ethics training, referring to ethics classes as “a box to check off.”
“If I’m paying money to attend an ethics class as an engineer, I’m going to be mad,” said one.
These attitudes extend to some students’ view of the role of engineers in society. One interviewee in our current study, for example, said that an engineer’s responsibility is “just to create that thing, design that thing and … tell people how to use it. [Misuse] questions are not a concern.”
One of us, Erin Cech, followed a cohort of 326 engineering students from four colleges in the United States. This research, published in 2014, suggested that over the course of their undergraduate training, engineering students became less concerned about their ethical responsibilities and less attuned to the public consequences of technology. Nor did their concerns about ethics rebound after these new graduates entered the workforce.
Entering the world of work
When engineers receive ethics training as part of their degree, it seems to work.
Together with engineering professor Cynthia Finelli, we surveyed over 500 employed engineers. Engineers who received formal ethics and public interest training in school were more likely to understand their responsibility to the public in their professional roles, and to recognize the need for collective problem solving. Compared to engineers who did not receive such training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.
More than a quarter of these practicing engineers reported experiencing an ethical situation of concern at work. But about one-third said they had never received public interest training – not during their education, and not during their career.
This gap in ethics education raises serious questions about how well prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, particularly in relation to AI.
To be sure, the burden of looking out for the public good does not lie solely with engineers, designers and programmers. Companies and legislators share that responsibility.
But the people designing, testing and fine-tuning this technology are the public’s first line of defense. We believe it is incumbent upon educational programs – and the rest of us – to take this training seriously.
This article is republished from The Conversation, a non-profit, independent news organization that brings you facts and analysis to help you make sense of our complex world.
It was written by: Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan.
Elana Goldenkoff receives funding from the National Science Foundation and Schmidt Futures.
Erin A. Cech receives funding from the National Science Foundation.