Imagine you are on the waiting list for a non-urgent operation. You were seen at the clinic several months ago, but you still don’t have a date for the procedure. It’s very frustrating, but it looks like you’ll just have to wait.
However, the hospital's surgical team has just contacted you via a chatbot. The chatbot asks a number of screening questions: whether your symptoms have worsened since you were last seen, and whether they are stopping you from sleeping, working or doing your daily activities.
Your symptoms are much the same, but part of you wonders whether you should answer yes. After all, that might move you up the list, or at least get you the chance to speak to someone. And anyway, it's not as if this is a real person.
The above scenario is based on chatbots already being used in the NHS to identify patients who no longer need to be on a waiting list, or who need to be prioritized.
There is great interest in using large language models (such as ChatGPT) to manage communication in healthcare effectively (for example, for symptom advice, triage and appointment management). But when we interact with these virtual agents, do the usual ethical standards apply? Is it wrong, or at least is it as wrong, if we lie to an AI chatbot?
There is psychological evidence that people are more likely to be dishonest if they are knowingly interacting with a virtual agent.
In one experiment, people were asked to toss a coin and report the number of heads. (They would receive higher compensation if they reported a larger number.) The rate of cheating was three times higher when people reported to a machine than to a human. This suggests that some people would be more inclined to lie to a waiting-list chatbot.
One reason people may be more honest with other humans is their sensitivity to how they are perceived. The chatbot is not going to look down on you, judge you or speak harshly of you.
But we could ask a deeper question about why lying is wrong, and whether a virtual chat partner changes that.
The ethics of lying
There are different ways we can think about the ethics of lying.
Lying can be bad because it causes harm to other people. Lies can be deeply hurtful: they can lead someone to act on false information, or to be falsely reassured.
Sometimes, lies can harm by undermining someone's trust in people more generally. But those reasons will often not apply to lying to a chatbot.
Lies can wrong another person, even when they cause no harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since it has no mind or capacity to reason.
Lying can also be bad for us because it undermines our credibility. Communicating with others is important. But when we knowingly make false statements, we diminish, in other people's eyes, the value of our testimony.
For the person who repeatedly tells lies, everything they say then comes into question. This is part of why we care about lying and about our social image. But unless our interactions with a chatbot are recorded and communicated (for example, to humans), the lies we tell a chatbot will not have that effect.
Lying is also bad for us because it can lead others to be untruthful to us in turn. (Why should people be honest with us if we won't be honest with them?)
But again, that’s unlikely to be the result of convincing a chatbot. On the contrary, this kind of effect may be partly an incentive to lie to a chatbot, as people may be aware of the reported tendency of ChatGPT and similar agents to cannibalize.
Fairness
Of course, lying can be wrong for reasons of fairness. This is perhaps the most obvious reason it is wrong to lie to a chatbot: if a lie moved you up the waiting list, someone else would be unfairly displaced.
Lying can be a form of fraud if you gain an unfair or unlawful benefit, or deprive someone else of a legal right. Insurance companies are particularly keen to emphasize this when they use chatbots in new insurance applications.
Any time you gain a real-world benefit from a lie in a chatbot interaction, your claim to that benefit is potentially suspect. The anonymity of online interactions can create a feeling that no one will ever find out.
But many chatbot interactions, such as insurance applications, are recorded. It may be just as likely, or even more likely, that fraud will be detected.
Virtue
So far I have focused on the bad consequences of lying and the ethical rules or laws that might be broken when we lie. But there is one more ethical reason that lying is wrong. This relates to our character and the type of person we are. This is often captured in the ethical importance of virtue.
Unless there are exceptional circumstances, we might think we should be honest in our communication, even if we know this won’t harm anyone or break any rules. An honest character would be good for reasons already mentioned, but it could also be good in itself. The virtue of honesty is also self-reinforcing: if we cultivate the virtue, it helps to reduce the temptation to lie.
This leaves open the question of how these new types of interaction will change our character more generally.
The virtues relevant to interacting with chatbots or virtual agents may differ from those that apply when we interact with real people. It may not always be wrong to lie to a chatbot. This may in turn lead us to adopt different standards for virtual communication. But if it does, one worry is whether it might affect our tendency to be honest in the rest of our lives.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Dominic Wilkinson receives funding from the Wellcome Trust and the Arts and Humanities Research Council.