AI could change ethics committees


The role of an ethics committee is to advise on what should be done in often controversial situations. Such committees are used in medicine, research, business, law and various other fields.

The word “ethics” refers to the moral principles that govern human behavior. The task for ethics committees can be very difficult because of the wide range of moral, political, philosophical, cultural and religious views. However, good ethical arguments are the basis of society, as they are the basis of the laws and agreements we use to get along with each other.

Given the importance of ethics, any tool that can be used to help make better ethical decisions should be explored and used. Over the past few years, there has been a growing recognition that artificial intelligence (AI) is a tool that can be used to analyze complex data. So it makes sense to ask whether AI can be used to help make better ethical decisions.

Because AI is a class of computer algorithm, it relies on data. Ethics committees also rely on data, so one important question is whether AI can be given the types of data that ethics committees routinely consider, and then analyse them meaningfully.

Here, context becomes very important. For example, a hospital ethics committee might make decisions based on experience with patients, input from lawyers, and a general understanding of popular cultural or societal norms and views. It is currently difficult to see how such data could be captured and fed into an AI algorithm.

However, I chair a very specific type of ethics committee, called a research ethics committee (REC), whose role it is to review scientific research protocols. The aim is to promote high quality research and to protect the rights, safety, dignity and well-being of those who take part in the research.


Most of our activities involve reading complex documents to find out what the relevant ethical issues are, and then making recommendations to researchers on how they can improve their protocols, or proposed procedures. It is in this area that AI can be very helpful.

Research protocols, especially clinical trial protocols, often run to hundreds or thousands of pages. The information is dense and complex. Although protocols are accompanied by ethics application forms that seek to present information on key ethical issues in a way that REC members can easily find, the task can still be very time-consuming.

After studying the documents, REC members weigh up what they’ve read, compare it to guidance on good ethical practice, consider input from patient and stakeholder engagement groups, and then decide whether the research can proceed as planned. The most common result is that more information and some modifications are needed before the research can proceed.

The role of machines?

Although efforts have been made to standardize REC membership and experience, researchers often complain that the process takes a long time and is inconsistent between different committees.

AI seems ideally placed to speed up the process and help resolve some of the inconsistencies. Not only could the AI read such long documents very quickly, but it could also be trained on a large number of protocols and previous decisions.

It could very quickly identify any ethical issues and recommend solutions for the research teams to implement. This would greatly speed up the ethics review process and probably make it much more consistent. But is it ethically acceptable to use AI in this way?

While many of the REC tasks could clearly be performed by AI, it could also be argued that these review tasks do not amount to making an ethical decision. At the end of the review process, RECs are asked to decide whether a protocol, together with the updates, should receive a favourable or unfavourable opinion.

As a result, while the advantage of AI in speeding up the process is clear, speeding up the review is not the same as making the final decision.

Someone in the loop

AI may be extremely effective at assessing a situation and recommending a course of action that is consistent with past “ethical” behavior. However, the decision to take a course of action, and then proceed to behave in that way, is essentially a human decision.

In the research ethics example, the AI might recommend a course of action, but it is a human decision to actually take that action. The system could be designed to instruct ethics committees or researchers not to question what the AI recommends, but such a decision is about how the AI is used, not about the AI itself.

While AI is perhaps most immediately useful to research ethics committees, given the largely text-based data we review, ways to encode non-textual data (such as people’s experiences) are likely to improve over time.

This means that AI may eventually be able to help in other areas of ethical decision-making too. However, the key point is not to confuse the tool used to analyse data, the AI, with the final “ethical” decision about how to act. The danger lies not in the AI itself, but in how people choose to integrate AI into ethical decision-making processes.

This article from The Conversation is republished under a Creative Commons license. Read the original article.


Simon Kolstoe previously received funding from the Health Research Authority to explore aspects of Research Ethics. He chairs the Cambs and Herts HRA (NHS) REC, MODREC and the UKHSA ethics and governance research group. He is a trustee of the UK Charity Research Integrity Office (UKRIO). All views expressed in this article are his own and should not be taken to reflect the views of those organisations.
