Honesty is the best medicine

In this post, I want to talk about lies. It’s ironic that I’m writing this on the day of the US midterm elections, when the truth remains a rare sight. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice: lie or lose. Deception, deceit, lying, however you want to describe it, it’s part of what makes us human. I reckon we’ve all told a lie at some point, even if only a ‘white lie’ to avoid hurting someone’s feelings. Some of us are better than others at spotting when someone is not telling the truth. Some of us prefer to build a culture of trust. But what if we had a new superpower: a future where machines tell us, in real time, who is lying?

What compelled me to write this post was reading a news article about a new EU trial of virtual border agents powered by Artificial Intelligence (AI), which aims to “ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars.” I was fascinated to read statements about the new system such as “IBORDERCTRL’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.” Apparently, the system can analyse the micro-expressions on your face and include that information as part of a risk score, which is then used to determine what happens next. For now, it’s not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right? If machines can help keep borders secure, why not use them? However, the system’s accuracy rate isn’t great, and some are labelling this type of technology pseudoscience, warning that it will lead to unfair outcomes. It’s essential we all pay attention to these developments and subject them to close scrutiny.
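To make the mechanics a little more concrete, here is a minimal sketch of how signals from a face-analysis model might be folded into a single traveller risk score. Everything in it, the signal names, the weights and the threshold, is invented for illustration; iBorderCtrl has not published how its scoring actually works.

```python
# Hypothetical sketch: combining soft screening signals into one risk score.
# Feature names, weights and threshold are illustrative only.

from dataclasses import dataclass

@dataclass
class ScreeningSignals:
    deception_probability: float  # output of a micro-expression classifier, 0..1
    document_mismatch: float      # discrepancy score from document checks, 0..1
    watchlist_hit: bool           # hard flag from existing database lookups

def risk_score(s: ScreeningSignals) -> float:
    """Blend soft signals into one score; a watchlist hit dominates."""
    if s.watchlist_hit:
        return 1.0
    # Weighted blend of the soft signals (weights are made up for this sketch).
    return 0.6 * s.deception_probability + 0.4 * s.document_mismatch

def triage(score: float, threshold: float = 0.5) -> str:
    """Route the traveller based on the combined score."""
    return "refer to human border agent" if score >= threshold else "proceed"

signals = ScreeningSignals(deception_probability=0.7,
                           document_mismatch=0.2,
                           watchlist_hit=False)
print(triage(risk_score(signals)))  # -> "refer to human border agent"
```

Even in this toy version, the design choice is clear: a noisy classifier output gets hardened into a yes/no routing decision by a threshold, which is exactly where imperfect accuracy turns into unfair outcomes for real travellers.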

What if machines could one day automatically detect whether someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, whose paper’s abstract opens with “We present a system for covert automated deception detection in real-life courtroom trial videos.” As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. where someone has submitted a report to the police claiming they have been robbed, but the tool can find patterns indicating the report is fraudulent. Apparently, the tool has a success rate of over 80%. I also came across a British startup, Human, that states on its website, “We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias”, and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.
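The general shape of a tool like VeriPol is a supervised text classifier trained on past reports labelled genuine or false. VeriPol’s actual features, model and training data are not public, so the sketch below is only the generic technique with invented toy examples, not the real system.

```python
# Generic sketch of fraudulent-report detection as text classification.
# The example reports and labels below are invented; VeriPol's real
# features, model and training data have not been published.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "Two men grabbed my bag from behind and ran off near the station",
    "I was robbed at knifepoint, they took my phone and wallet",
    "My phone was stolen somewhere, I didn't see anyone, it just vanished",
    "Everything of value was taken but I can't recall where or when",
]
labels = [0, 0, 1, 1]  # 0 = genuine, 1 = fraudulent (toy labels)

# Turn each report into word/bigram frequencies, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

new_report = ["My wallet disappeared, no idea where, nothing else to add"]
print(model.predict_proba(new_report)[0][1])  # estimated probability of fraud
```

A headline figure like “over 80%” only makes sense relative to how such a model was evaluated: on what mix of genuine and false reports, and with what cost attached to wrongly accusing an honest victim.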

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In a 2014 patient deception survey in the USA, 50% of respondents said they withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc’s 2015 survey found that 25% of patients lie to their doctor. There was also an interesting report on why some patients don’t adhere to their doctor’s advice: financial strain, which some low-income patients are reluctant to discuss with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? And when it comes to doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I’ve read, lying appears to be a path some carers have to take when looking after those living with dementia, to protect the patient.


Imagine you have a video call with your doctor, and on the other side the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you’re lying but your emotional state too. That’s what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would that determination be recorded in the patient’s medical record? What if the AI is wrong? Given that the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor’s office? In some countries, will we see CCTV in the doctor’s office, with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, which tell the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don’t want to tell the truth, but the machines around us that are tracking everything reveal it for us? I’ve shared a satirical video below of Amazon Alexa fitted to a car; do watch it. Whilst it might be funny, there are potential challenges ahead for our human rights and civil liberties in this new era. Is AI-powered lie detection the path towards a society with enough transparency and integrity, or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]
