An interview with Jo Aggarwal: Building a safe chatbot for mental health

We can now converse with machines in the form of chatbots. Some of you might have used a chatbot when visiting a company’s website. They have even entered the world of healthcare: the pharmaceutical company Lupin has rolled out Anya, India’s first chatbot for disease awareness, in this case diabetes. Chatbots have recently been developed for mental health too, such as Woebot, Wysa and Youper. It’s an interesting concept, and given the unmet need around the world, these could be an additional tool that might help make a difference in someone’s life. However, a recent BBC article highlighted how two of the most well-known chatbots (Woebot and Wysa) don’t always perform well when children use the service. I’ve performed my own real-world testing of these chatbots in the past, and got to know the people who have created these products. So after the BBC article was published, Jo Aggarwal, CEO and co-founder of Touchkin, the company that makes Wysa, got back in touch with me to discuss trust and safety when using chatbots. It was such an insightful conversation that I offered to interview her for this blog post, as I think the story of how a chatbot for mental health is developed, deployed and maintained is a complex and fascinating journey.

1. How safe is Wysa, from your perspective?
Given all the attention this topic typically receives, and its own importance to us, I think it is really important to understand first what we mean by safety. For us, Wysa being safe means having comfort around three questions. First, is it doing what it is designed to do, well enough, for the audience it’s been designed for? Second, how have users been involved in Wysa’s design and how are their interests safeguarded? And third, how do we identify and handle ‘edge cases’ where Wysa might need to serve a user - even if it’s not meant to be used as such?

Let’s start with the first question. Wysa is an interactive journal, focused on emotional wellbeing, that lets people talk about their mood and talk through their worries or negative thoughts. It has been designed and tested for a 13+ audience, and, for instance, it asks users under 18 to obtain parental consent as part of its terms and conditions. It cannot, and should not, be used for crisis support, or by children - those under 12 years old. This distinction is important, because it directs product design in terms of the choice of content as well as the kind of things Wysa listens for. For its intended audience and expected use in self-help, Wysa provides an interactive experience that is far superior to current alternatives: worksheets, writing in journals, or reading educational material. We’re also gradually building an evidence base on how well it works, through independent research.

The answer to the second question needs a bit more description of how Wysa is actually built. Here, we follow a user-centred design process that is underpinned by a strong, recognised clinical safety standard.

When we launched Wysa, it was for a 13+ audience, and we tested it with an adolescent user group as a co-design effort. For each new pathway and every model added in Wysa, we continue to test the safety against a defined risk matrix developed as a part of our clinical safety process. This is aligned to the DCB 0129 and DCB 0160 standards of clinical safety, which are recommended for use by NHS Digital.

As a result of this process, we developed some pretty stringent safety-related design and testing steps during product design:

At the time of writing a Wysa conversation or tool concept, the first script is reviewed by a clinician to identify safety issues - specifically, any situations where it could be contraindicated or act as a trigger - and to define alternative pathways for those conditions.

When a development version of a new Wysa conversation is produced, the clinicians review it again, specifically for adherence to clinical process and for potential safety issues against our risk matrix.

Each aspect of the risk matrix has test cases. For instance, if the risk is that using Wysa may increase the risk of self-harm in a person, we run two test cases: one where a person is intending self-harm but it has not been detected as such (normal statements), and one where self-harm statements detected in the past are run through the Wysa conversation, at every Wysa node or ‘question id’. This is typically done on a training set of a few thousand user statements. A team then tags each response for appropriateness. A 90% appropriateness level is considered adequate for the next step of review.

The inappropriate responses (typically less than 10%) are then reviewed for safety, where the question asked is: will this inappropriate response increase the risk of the user engaging in harmful behaviour? If there is even one such case, the Wysa conversation pathway is redesigned to prevent this and the process is repeated.

The output of this process is shared with a psychologist and any contentious issues are escalated to our Clinical Safety Officer.
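
To make that 90% gate concrete, here is a minimal sketch in Python of how such a check might look. The function and field names are my own invention for illustration, not Wysa’s actual tooling; it assumes a human review team has already tagged each bot response for appropriateness.

```python
# Illustrative only: a minimal sketch of the appropriateness gate described above,
# under assumed data structures. Names and fields are hypothetical.

APPROPRIATENESS_THRESHOLD = 0.90  # the 90% level mentioned in the interview


def evaluate_pathway(tagged_responses):
    """tagged_responses: list of dicts like
    {"question_id": "q12", "user_statement": "...", "bot_response": "...",
     "appropriate": True/False}  (tags supplied by the human review team)."""
    total = len(tagged_responses)
    appropriate = sum(1 for r in tagged_responses if r["appropriate"])
    rate = appropriate / total if total else 0.0

    # Anything tagged inappropriate is collected for the safety review step,
    # where the question is whether it could increase the risk of harmful behaviour.
    flagged = [r for r in tagged_responses if not r["appropriate"]]

    passed = rate >= APPROPRIATENESS_THRESHOLD
    return passed, rate, flagged


if __name__ == "__main__":
    sample = [
        {"question_id": "q1", "user_statement": "I can't cope anymore",
         "bot_response": "That sounds really heavy. Would you like to talk it through?",
         "appropriate": True},
        {"question_id": "q1", "user_statement": "I want to hurt myself",
         "bot_response": "Let's celebrate your wins today!",
         "appropriate": False},
    ]
    passed, rate, flagged = evaluate_pathway(sample)
    print(f"appropriateness: {rate:.0%}, passed: {passed}, flagged for review: {len(flagged)}")
```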

Equally important for safety, of course, is the third question. How do we handle ‘out of scope’ user input, for example, if the user talks about suicidal thoughts, self-harm, or abuse? What can we do if Wysa isn’t able to catch this well enough?

To deal with this question, we did a lot of work to extend the scope of Wysa so that it does listen for self-harm and suicidal thoughts, as well as abuse in general. On recognising this kind of input, Wysa gives an empathetic response, clarifies that it is a bot and unable to deal with such serious situations, and signposts to external helplines. It’s important to note that this is not Wysa’s core purpose - and it will probably never be able to detect all crisis situations with 100% accuracy - neither can Siri, Google Assistant or any other Artificial Intelligence (AI) solution. That doesn’t make these solutions unsafe for their expected use. But even here, our clinical safety standard means that even if the technology fails, we need to ensure it does not cause harm - or in our case, increase the risk of harmful behaviour. Hence, all of Wysa’s statements and content modules are tested against safety cases to ensure that they do not increase the risk of harmful behaviour even if the AI fails.

We watch this very closely, and add content or listening models where we feel coverage is not enough and Wysa needs to extend. This was specifically the case with the BBC article: we will now relax our stance of never taking personally identifiable data from users, explicitly listen (and check) for age, and, if a user is under 12, direct them out of Wysa towards specialist services.

So how safe is Wysa? It is safe within its expected use, and the design process follows a defined safety standard to minimize risk on an ongoing basis. In case more serious issues are identified, Wysa directs users to more appropriate services - and makes sure at the very least it does not increase the risk of harmful behaviour.

2. In plain English, what can Wysa do today and what can’t it do?
Wysa is a journal married to a self-help workbook, with a conversational interface. It is a more user friendly version of a worksheet - asking mostly the same questions with added models to provide different paths if, for instance, a person is anxious about exams or grieving for a dog that died.

It is an easy way to learn and practice self-help techniques - to vent and observe your thoughts, practice gratitude or mindfulness, learn to accept your emotions as valid, and find the positive intent in the most negative thoughts.

Wysa doesn’t always understand context - it definitely will not pass the Turing test for ‘appearing to be completely human’. That is definitely not its intended purpose, and we’re careful in telling users that they’re talking to a bot (or as they often tell us, a penguin).

Secondly, Wysa is definitely not intended for crisis support. A small percentage of people do talk to Wysa about self-harm or suicidal thoughts; they are given an empathetic response and directed to helplines.

Beyond self-harm, detecting sexual and physical abuse statements is a hard AI problem - there are no models globally that do this well. For instance, ‘My boyfriend hurts me’ may be emotional, physical, or sexual. Also, most abuse statements that people share with Wysa tend to be in the past: ‘I was abused when I was 12’ needs a very different response from ‘I was abused and I am 12’. Our response here is currently to appreciate the courage it takes to share something like this, ask the user if they are in crisis, and if so, say that as a bot Wysa is not suited for a crisis and offer a list of helplines.
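
To show why this is such a hard problem, here is a deliberately naive sketch. The patterns and routing labels are invented for illustration only and are not Wysa’s actual models; a simple tense heuristic separates the two example statements above, but it takes an extra check to avoid misreading ‘I was abused and I am 12’ as purely historical.

```python
import re

# Toy heuristic only: real systems would use trained models plus clinical review.
# Patterns and routing labels are assumptions, not Wysa's actual logic.

PAST_MARKERS = re.compile(r"\b(was|were|used to|when i was|years ago)\b", re.I)
ABUSE_MARKERS = re.compile(r"\b(abus\w*|hurts? me|hit me)\b", re.I)


def route_abuse_statement(text: str) -> str:
    if not ABUSE_MARKERS.search(text):
        return "continue_normal_conversation"
    # 'I was abused and I am 12' contains a past-tense marker but also signals a child
    # in a possibly ongoing situation, so a naive tense check alone is not enough.
    if PAST_MARKERS.search(text) and not re.search(r"\bi am \d+\b", text, re.I):
        return "acknowledge_courage_then_ask_if_in_crisis"
    return "ask_if_in_crisis_and_offer_helplines"


for s in ["My boyfriend hurts me",
          "I was abused when I was 12",
          "I was abused and I am 12"]:
    print(s, "->", route_abuse_statement(s))
```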

3. Has Wysa been developed specifically for children? How have children been involved in the development of the product?
No, Wysa hasn’t been developed specifically for children.

However, as I mentioned earlier, we have co-designed with a range of users, including adolescents.

4. What exactly have you done when you’ve designed Wysa with users?
For us, the biggest risk was that someone’s data might be leaked and therefore cause them harm. To deal with this, we took the hard decision of not taking any personally identifiable data at all from users, which is also why they started trusting Wysa. This meant that we had to compromise on certain parts of the product design, but we felt it was a trade-off well worth making.

After launch, for the first few months, Wysa was an invite-only app, where a number of these features were tested first from a safety perspective. For example, SOS detection and pathways to helplines were a part of the first release of Wysa, which our clinical team saw as a prerequisite for launch.

Since then, design continues to be led by users. For the first million conversations, Wysa stayed a beta product, as we didn’t have enough of a response base to test new pathways. There is no single ‘launch’ of Wysa - it is continuously being developed and improved based on what people talk to it about. For instance, the initial version of Wysa did not handle abuse (physical or sexual) at all, as it was not expected that people would talk to it about these things. When they began to, we created pathways to deal with these in consultation with experts.

An example of a co-design initiative with adolescents was a study with Safe Lab at Columbia University to understand how at-risk youth would interact with Wysa and the different nuances of language used by these youth.

5. Can a user of Wysa really trust it in a crisis? What happens when Wysa makes a mistake and doesn’t provide an appropriate response?
People should not use Wysa in a crisis - it is not intended for this purpose. We keep reinforcing this message across various channels: on the website, the app descriptions on Google Play or the iTunes App Store, even responses to user reviews or on Twitter.

However, anyone who receives information about a crisis has a responsibility to do the most that they can to signpost the user to those who can help. Most of the time, Wysa will do this appropriately - we measure how well each month, and keep working to improve this. The important thing is that Wysa should not make things worse even when it misdetects, so users should not be unsafe, i.e. we should not increase the risk of harmful behaviour.

One of the things we are adding, based on suggestions from clinicians, is a direct SOS button to helplines, so users have another path when they recognise they are in crisis and the dependency on Wysa to recognise a crisis in conversation is lower. This is being co-designed with adolescents and clinicians to ensure that the button is visible, but that its presence does not act as a trigger.

For inappropriate responses, we constantly improve, and when a user shares that Wysa’s response was wrong, we respond in a way that places the onus entirely on Wysa. If a user objects to a path Wysa is taking, saying this is not helpful or this is making me feel worse, Wysa immediately changes the path, emphasises that it is Wysa’s mistake and not the user’s, and explains that Wysa is a bot that is still learning. We closely track where and when this happens, and any responses that meet our criteria for a safety hazard are immediately raised to our clinical safety process, which includes review with children’s mental health professionals.

We constantly strive to improve our detection, and are also starting to collaborate with others dealing with similar issues to create a common pool of resources.

6. I understand that Wysa uses AI. I also note that there are so many discussions around the world relating to trust (or lack of it) in products and services that use AI. A user wants to trust a product, and if it’s health related, then trust becomes even more critical. What have you done as a company to ensure that Wysa (and the AI behind the scenes) can be trusted?
You’re so right about the many discussions about AI, how this data is used, and how it can be misused. We explicitly tell users that their chats stay private (not just anonymous) and that they will never be shared with third parties. In line with GDPR, we also give users the right to ask for their data to be deleted.

After downloading, there is no sign-in. We don’t collect any personally identifiable data about the user: you just give yourself a nickname and start chatting with Wysa. The first conversation reinforces this message, and this really helps in building trust as well as engagement.

AI of the generative variety will not be ready for products like Wysa for a long time - perhaps never. Such systems have in the past turned racist or worse. The use of AI in applications like Wysa is limited to the detection and classification of user free text, not the generation of ‘advice’. So the AI here is auditable, testable and quantifiable - not something that may suddenly learn to go rogue. We feel that trust is based on honesty, so we do our best to be honest about the technical limitations of Wysa.
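
As a rough illustration of that design choice, the sketch below shows the general pattern: classify free text into a small, fixed set of intents and only ever reply from a pre-written, reviewed library. The intents, training examples and responses are invented for illustration and assume a scikit-learn style classifier; they are not Wysa’s.

```python
# Minimal sketch of "classification, not generation": the model only maps user text
# to a fixed intent label, and every reply comes from a pre-written response library.
# All labels, examples and responses below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TRAINING = [
    ("I am so worried about my exams", "anxiety"),
    ("I can't stop worrying about work", "anxiety"),
    ("My dog died last week", "grief"),
    ("I miss my grandmother so much", "grief"),
    ("I feel like hurting myself", "sos"),
    ("I don't want to be alive anymore", "sos"),
]

# Pre-written, reviewed responses: the model never generates text of its own.
RESPONSE_LIBRARY = {
    "anxiety": "That sounds stressful. Would you like to try a short grounding exercise?",
    "grief": "I'm sorry for your loss. Do you want to talk about what they meant to you?",
    "sos": "I'm a bot and not able to help in a crisis. Here are some helplines ...",
}

texts, labels = zip(*TRAINING)
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)


def reply(user_text: str) -> str:
    intent = classifier.predict([user_text])[0]   # auditable: a label, not free text
    return RESPONSE_LIBRARY[intent]


print(reply("I'm really anxious about my exam tomorrow"))
```

Because the model’s only output is a label, every possible reply can be reviewed in advance and the classifier can be measured against test sets, which is what makes this kind of approach testable in a way generative systems are not.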

Every Wysa response and question goes through a clinical safety process, and is designed and reviewed by a clinical psychologist. For example, we place links to journal articles in each tool and technique that we share with the user.

7. What could you and your peers who make products like this do to foster greater trust in these products?
As a field, the use of conversational AI agents in mental health is very new, and growing fast. There is great concern around privacy, so anonymity and security of data is key.

After that, it is important to conduct rigorous independent trials of the product and share the data openly. A peer-reviewed mixed-methods study of Wysa’s efficacy was recently published in JMIR for this reason, and we are working with universities to build on it. It’s important that advancements in this field are science-driven.

Lastly, we need to be very transparent about the limitations of these products - clear on what they can and cannot do. These products are not a replacement for professional mental health support - they are more of a gym, where people learn and practice proven, effective techniques to cope with distress.

8. What could regulators do to foster an environment where we as users feel reassured that these chatbots are going to work as we expect them to?
Leading on from your question above, there is a big opportunity to come together and share standards, tools, models and resources.

For example, if a user enters a search term around suicide in Google, or posts about self-harm on Instagram, maybe we can have a common library of Natural Language Processing (NLP) models to recognise and provide an appropriate response?

Going further, maybe we could provide this as an open-source resource to anyone building a chatbot that children might use? Could this be a public project, funded and sponsored by government agencies, or a regulator?
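
Purely as a thought experiment, such a shared resource might expose something as simple as a detection function and a helpline lookup that any chatbot could import. Everything below - the layout, patterns and categories - is hypothetical, and a real shared library would need far richer models and clinically reviewed content.

```python
# Speculative sketch of a shared, open-source crisis-signposting module.
# Package layout, patterns and categories are invented for illustration.

from dataclasses import dataclass
import re


@dataclass
class CrisisResult:
    is_crisis: bool
    category: str            # e.g. "self_harm", "suicidal_ideation", "none"


_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt(ing)? myself|self[- ]harm)\b", re.I),
    "suicidal_ideation": re.compile(r"\b(kill myself|end it all|don'?t want to live)\b", re.I),
}

_HELPLINES = {
    "GB": ["Samaritans: 116 123"],
    "US": ["988 Suicide & Crisis Lifeline: 988"],
}


def detect(text: str) -> CrisisResult:
    # Return the first matching crisis category, if any.
    for category, pattern in _PATTERNS.items():
        if pattern.search(text):
            return CrisisResult(True, category)
    return CrisisResult(False, "none")


def helplines_for(country_code: str) -> list[str]:
    return _HELPLINES.get(country_code.upper(), [])


result = detect("I don't want to live anymore")
if result.is_crisis:
    print("Signpost:", helplines_for("GB"))
```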

In addition, there are several other roles a regulator could play. They could fund research that proves efficacy, define standards and outline the proof required (the recently released NICE guidelines are a great example), or even create a regulatory sandbox where technology providers, health institutions and public agencies come together and experiment before coming to a view.

9. Your website mentions that “Wysa is... your 4 am friend, For when you have no one to talk to..” – Shouldn’t we be working in society to provide more human support for people who have no one to talk to? Surely, everyone would prefer to deal with a human than a bot? Is there really a need for something like Wysa?
We believed the same to be true. Wysa was not born of a hypothesis that a bot could help - it was an accidental discovery.

We started our work in mental health to simply detect depression through AI and connect people to therapy. We did a trial in semi-rural India, and were able to use the way a person’s phone moved about to detect depression with 90% accuracy. To get the sensor data from the phone, we needed an app, which we built as a simple mood-logging chatbot.

Three months in, we checked on the progress of the 30 people we had identified with moderate to severe depression and whose doctor had prescribed therapy. It turned out that only one of them took up therapy. The rest were okay with being prescribed antidepressants but, for different reasons ranging from access to stigma, did not take up therapy. All of them, however, continued to use the chatbot, and months later reported feeling better.

This was the genesis of Wysa - we didn’t want to be the reason for a spurt in anti-depressant sales, so we bid farewell to the cool AI tech we were doing, and began to realise that it didn’t matter if people were clinically depressed - everyone has stressors, and we all need to develop our mental health skills.

Wysa has had 40 million conversations with about 800,000 people so far - growing entirely through word of mouth. We have understood some things about human support along the way.

For users ready to talk to another person about their inner experience, there is nothing as useful as a compassionate ear and the ability to share without being judged. Human interactions, however, seem fraught with opinions and judgements. When we struggle emotionally, it affects our self-image - for some people, it is easier to talk to an anonymous AI interface, which is kind of an extension of ourselves, than to another person. For example, this study found that US veterans were three times as likely to reveal their PTSD to a bot as to a human. Still, human support is key - so we run weekly Ask Me Anything (AMA) sessions on the top topics that Wysa users propose, discussed each week with a mental health professional. In a recent AMA, over 500 teenagers shared their concerns about discussing their mental health issues or sexuality with their parents. Even within Wysa, we encourage users to create a support system outside.

Still, the most frequent user story for Wysa is someone troubled with worries or negative thoughts at 4 am, unable to sleep, not wanting to wake someone up, scrolling social media compulsively and feeling worse. People share how they now talk to Wysa to break the negative cycle and use the sleep meditations to drift off. That is why we call it your 4 am friend.

10. Do you think there is enough room in the market for multiple chatbots in mental health?
I think there is a need for multiple conversational interfaces, different styles and content. We have only scratched the surface, only just begun. Some of these issues that we are grappling with today are like the issues people used to grapple with in the early days of ecommerce - each company solving for ‘hygiene factors’ and safety through their own algorithms. I think over time many of the AI models will become standardised, and bots will work for different use cases - from building emotional resilience skills, to targeted support for substance abuse.

11. How do you see the future of support for mental health, in terms of technology, not just products like Wysa, but generally, what might the future look like in 2030?
The first thing that comes to mind is that we will need to turn the tide on the damage caused by technology to mental health. I think there will be a backlash against addictive technologies; I am already seeing the tech giants becoming conscious of the mental health impact of making their products addictive, and facing pressure to change.

I hope that by 2030, safeguarding mental health will have become part of the design ethos of a product, much as accessibility and privacy have in the last 15 years. By 2030, human-computer interfaces will look very different, and voice and language barriers will be fewer.

Whenever there is a trend, there is also a counter-trend. So while technology will play a central role in creating large-scale early mental health support - especially in crossing stigma, language and literacy barriers in countries like India and China - we will also see social prescribing gain ground. Walks in the park or art circles will become prescriptions for better mental health, and people will have to be prescribed non-tech activities because so much of their lives is lived on their devices.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


AI in healthcare: Involving the public in the conversation

As we begin the 21st century, we are in an era of unprecedented innovation, where computers are becoming smarter and are being used to deliver products and services powered by Artificial Intelligence (AI). I was fascinated by how AI is being used in advertising when I saw a TV advert this week from Microsoft, in which a musician talks about the benefits of AI. Organisations in every sector, including healthcare, are having to think about how they can harness the power of AI. I wrote a lot about my own experiences in 2017 using AI products for health in my last blog post, You can’t care for patients, you’re not human!

Now when we think of AI in healthcare potentially replacing some of the tasks done by doctors, we think of it as a relatively recent concept. We forget that doctors themselves have been experimenting with technology for a long time. In this video from 1974 (44 years ago!), computers were being tested in the UK with patients to help optimise the time spent by the doctor during the consultation. What I find really interesting is that in the video, it’s mentioned that the computer never gets tired and that some patients prefer dealing with the machine to dealing with the human doctor.

Fast forward to 2018, where it feels like technology is opening up new possibilities every day, often from organisations that are not traditionally part of the healthcare system. We think of tech giants like Google and Facebook as helping us send emails or share photos with our friends, but researchers at Google are using AI to improve the detection of breast cancer, and Facebook has rolled out an AI-powered tool to automatically detect whether a user’s post shows signs of suicidal ideation.

What about going to the doctor? I remember growing up in the UK that my family doctor would even come and visit me at home when I was not well. Those are simply memories for me, as it feels increasingly difficult to get an appointment to see the doctor in their office, let alone getting a house call. Given many of us use modern technology to do our banking and shopping online, without having to travel to a store or a bank and deal with a human being, what if that were possible in healthcare? Can we automate part (or even all) of the tasks done by human doctors? You may think this is a silly question, but we have to step back for a second and reflect upon the fact that we have 7.5 billion people on Earth today, a number set to rise to an expected 11 billion by the end of this century. If we have a global shortage of doctors today, and it’s predicted to get worse, surely the right thing to do is to leverage emerging technology like AI, 4G and smartphones to deliver healthcare anywhere, anytime, to anyone?


We are seeing the emergence of a new type of app known as symptom checkers, which give anyone the ability to enter symptoms on their phone and be given a list of things that may be wrong with them. Note that at present these apps cannot provide a medical diagnosis; they merely help you decide whether you should go to the hospital or whether you can self-care. However, the emergence of these apps and related services is proving controversial. It’s not just a question of accuracy; there are huge questions about trust, accountability and power. In my opinion, the future isn’t about humans vs AI, which is the most frequent narrative being paraded in healthcare. The future is about how human healthcare professionals stay relevant to their patients.

It’s critical that in order to create the type of healthcare we want, we involve everyone in the discussion about AI, not just the privileged few. I’ve seen countless debates this past year about AI in healthcare, both in the UK and around the world, but it’s a tiny group of people at present who are contributing to (and steering) this conversation. I wonder how many of these new services are being designed with patients as partners? Many countries are releasing national AI strategies in a bid to signal to the world that they are at the forefront of innovation. I also wonder if the UK government is rushing into the implementation of AI in the NHS too quickly? Who stands to profit the most from this new world of AI-powered healthcare? Is this wave of change really about putting the patient first? There are more questions than answers at this point in time, but those questions do need to be answered. Some may consider anyone asking difficult questions about AI in healthcare as standing in the way of progress, but I believe it’s healthy to have a dialogue where we can discuss our shared concerns in a scientific, rational and objective manner.


That’s why I’m excited that BBC Horizon is airing a documentary this week in the UK, entitled “Diagnosis on Demand? The Computer Will See You Now” – they had behind-the-scenes access to one of the most well-known firms developing AI for healthcare, UK-based Babylon Health, whose products are pushing boundaries and triggering controversy. I’m excited because I really do want the general public to understand the benefits and the risks of AI in healthcare so that they can be part of the conversation. The choices we make today could impact how healthcare evolves, not just in the UK but globally. Hence, it’s critical that we have more science-based journalism which can help members of the public navigate the jargon and understand the facts so that informed choices can be made. The documentary will be airing in the UK on BBC Two at 9pm on Thursday 1st November 2018. I hope that this program acts as a catalyst for greater public involvement in the conversation about how we can use AI in healthcare in a transparent, ethical and responsible manner.

For my international audience, my understanding is that you can’t watch the program on BBC iPlayer, because at present, BBC shows can only be viewed from the UK.

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close and because of that the loss is even harder to cope with. 

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, from the start of this year, I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer for every problem, when it comes to our health. Many of us have been conditioned to believe them. The narrative has been so intoxicating for some.

Ever since this tragedy, it’s not an app, a sensor or data that I have turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The ‘care robot’ that arrived from China this year, as part of my research into whether robots can keep us company, remains switched off in its box. Amazon’s Echo, the smart assistant with a voice interface that I’d also been testing a lot, also sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing the switch with my finger. One day last week, I was feeling sad and didn’t feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset to join a virtual social space. I joined a computer-generated room, someone’s sunny back yard during a BBQ; I could see the other guests’ avatars, and I chatted to them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints - in particular, the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology we’ll be able to get rid of mental illness in a few years. Others I’ve met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change. So, is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley’s view of the future. We have all been influenced on some level. Hope is useful, hype is not.

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved, I chose not to. Health isn’t something that can only be improved within the four walls of a hospital. I don’t see bereavement as a medical problem. I’m not sure what a medical doctor can do in a 10 minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don’t want to end up like Prince Harry who recently admitted he had bottled up the grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I’ve experienced these last few months makes me wonder even more, why does society view mental health as a lower priority than physical health? As I’ve been grieving, there are moments when I felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I’m very much reminded of what’s important and who is important. The fragility of life is unnerving. I’m so conscious of my own mortality, and so petrified of death, it’s prompted me to make choices about how I live, work and play. One of the most supportive things someone has said to me after my loss was “Be kind to yourself.” Compassion for one’s self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I’m horrified when doctors & nurses tell me their medical training took all the compassion out of them or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do? 

I haven’t looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum, almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either end up putting us out of work and ruling our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it’s now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, which is primarily designed for fuel efficiency, constantly scores my acceleration, braking and cruising as I’m driving. Where should my attention rest as I drive, on the road ahead or on the dashboard, anxious to achieve the highest score possible? Is this where our destiny lies? Is it wise to blindly embark upon a quest for optimum health powered by sensors, data & algorithms nudging us all day and all night until we achieve and maintain the perfect health score?

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now they told him it’s all changed, and he has to do it online. When he asked what happens if someone isn’t online, he was told to visit the library, where someone would do it online with him. He was rather angry at this change. I can now see his perspective, and why this made him angry. I suspect he’s not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self-service checkouts when paying for my goods, because it was faster. Ever since my loss, I’ve chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn’t filled it in; she personally doesn’t see the point. In Digital Health conversations, statements are sometimes made that are deemed to be universal truths: that every patient wants access to their records, or that every patient wants to analyse their own health data. I believe it’s excellent that patients have the chance of access, but let’s not assume they all want access.

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.

Patients and their caregivers as innovators

I've been conducting research for a while now on how patients and their families have innovated themselves. They decided not to wait for the system to act, but acted themselves. One leading example is the Open Artificial Pancreas System project, whose community even uses the hashtag #WeAreNotWaiting. I was inspired to write this post today for two reasons.

  1. I delivered a keynote at the MISK Hackathon in London yesterday to innovators in both London & Riyadh reminding them that innovation can come from anyone anywhere on Earth.
  2. A post by the World Economic Forum about Tal Golesworthy, an engineer with a life-threatening heart condition who fixed it himself.

I thought this line in the WEF article was particularly fascinating, as it conveys the shock, surprise and disbelief that a patient could actually be a source of innovation: "And it flags up the likelihood that other patients with other diseases are harbouring similarly ingenious or radical ideas." I wonder how much we are missing out on in healthcare, because many of us are conditioned to think that a patient is a passive recipient of care, and not an equal who could actually out-think us. Golesworthy, who is living with Marfan Syndrome, came up with a new idea for an aortic sleeve, which led to him setting up his own company. The article then goes on to talk about a central repository of patient innovation to help diffuse these ideas - and this repository actually exists! It's called Patient Innovation and was set up over 2 years ago by the Católica Lisbon School of Business and Economics. The group has received over 1,200 submissions, and after screening by a medical team, around 50% of those submissions have been formally listed on the website. Searching the website for what patients have done by themselves is inspiring stuff.

In the title, you'll notice that I also acknowledged that it's not just the patient who innovates on their own; their caregivers can be part of that innovation process too. Sometimes, the caregiver (a parent, family member or someone else) might have a better perspective on what's needed than the patient themselves. The project leader of the Patient Innovation repository, Pedro Oliveira, also published a paper in 2015 exploring innovation by patients with rare diseases and chronic needs, and I share one of the stories he included in his paper below.

"Consider the case of a mother who takes care of her son, an Angelman syndrome patient. Angelman syndrome involves ataxia, inability to walk, move or balance well. The mother experimented with many strategies, recommended by the doctors, therapists, or found elsewhere, but obtained little gain for her child. By chance, at a neighbor’s child’s birthday party, she noticed her son excitedly jumping for strings to catch a floating helium-filled balloon. This gave her an idea and she experimented at home by filling a room with floating balloons. She found her child began jumping and reaching for the balloons for extended periods of time, amused by the challenge. The mother also added bands to support the knees and keep the child in an upright position. The result was significant improvement in her child’s physical abilities. Other parents to whom she described the solution also tried the balloons strategy and had positive results. This was valued as a novel solution by the medical evaluators."

So many of us think that innovation in today's modern world has to start with an app, a sensor or an algorithm, but the solutions could involve far simpler technology, such as a balloon! It's critical that we are able to discriminate between our wants and needs. A patient may be led to believe they want an app, but their actual need is for something else. Or we as innovators want to work with a particular tool or type of technology, and we ignore the need of the patient themselves.

Oliveira concludes with a powerful statement that made me stand back and pause for a few minutes, "Our finding that 8% of rare disease patients and/or their non-professional caregivers have developed valuable, new to the world innovations to improve their own care suggests that a massive, non-commercial source of medical innovations exists." 

I want you to also pause and reflect on this conclusion. How does this make you feel? Does it make you want to change the way you and your organisation approach medical innovation? One of the arguments against patient innovation is that it could put the patient at risk - after all, they haven't been to medical school. Is that perception of heightened risk by healthcare professionals justified? Maybe not. Oliveira also reports that, "Almost all the reported solutions were also judged by the experts to be relatively safe: out of 182, only 4 (2%) of the patients’ developments were judged to be potentially detrimental to patients’ health by the evaluators." Naturally, this is just one piece of research, and we would need to see more like it to truly understand the benefit-risk profile of patient innovations, but it's still an interesting insight.

I feel we don't hear enough in the media about innovation coming from patients and their caregivers. Others also share this sentiment. With reference to the Patient Innovation website, in the summer of 2015, Harold J. DeMonaco made this statement in his post reminding us that not all innovation comes from industry: "There is a symposium going on this week in Lisbon, Portugal that is honoring patient innovators, and I suspect this will totally escape the notice of US media."

I am curious why we don't hear much more about patient innovators in the media. What can be done to change that? If you're a healthcare reporter reading this post, and you haven't covered patient innovation before, I'm really interested to know why.

During my research, I've been very curious to determine what analysis has been done to understand whether patients are better at innovation than others. After all, they are living with their conditions, they are subject matter experts on their daily challenges, and they have enough insights to write a PhD on 'my health challenges' if they needed to! I did find a working paper from March this year by researchers in Germany at the Hamburg University of Technology (Goeldner et al). Are patients and relatives the better innovators? The case of medical smartphone applications, is the title of their paper. Their findings are very thought provoking. For example, when they looked at ratings of apps, the ratings for apps developed by patients and healthcare professionals were higher than those for apps developed by companies and independent developers. For me, the most interesting finding was that apps developed by patients' relatives got the highest revenues. Think about every hackathon in healthcare you've attended: how many times were patients invited, and how many times were the relatives of patients invited? One of the limitations of the paper, which the authors admit, is that it used apps from Apple's App Store. The study would need to be repeated using Google's Play Store, given that the majority of smartphones in the world are not iPhones.

This hypothesis from the paper highlights for me why patients and those who care for them need to be actively included,  "We propose that patients and relatives also develop needs during their caring activities that may not yet been envisioned by medical smartphone app developers. Thus, the dual knowledge base might be a reason for the significantly superior quality of apps developed by patients and relatives compared to companies." They also make this recommendation, "Our study shows that both user types – intermediate users and end users – innovated successfully with high quality. Commercial mobile app publishers and healthcare companies should take advantage of this and should consider including patients, patents’ relatives, and healthcare professionals into their R&D process." 

If you're currently developing an app, have you remembered to invite everyone needed to ensure you develop the highest quality app with the highest chance of success? 

I'm attending a Mobile Health meetup in London next week, called "Designing with the Dementia community" - they have 2 fantastic speakers at the event, but neither of them is a person living with Dementia. Perhaps the organisers tried to find people living with Dementia (or their caregivers) to come and speak, but nobody was available on that date. I remember when I founded the Health 2.0 London Chapter and ran monthly events, just how difficult it was to find patients to come and speak at my events. How do we communicate to patients and their caregivers that they have unique insights that are routinely missing from the innovation process, and that people want to give them a chance to share those insights? Another event in London next month is about Shaping the NHS & innovation, with a headline of 'How can we continue to put patients first?' They have 4 fantastic speakers, who are all doctors, with not a patient in sight. It reminds me of conferences I attend where people make lots of noise about improving physician workflow, yet at these conferences nobody ever advocates for improving patient workflow.

In the UK, the NHS appears to be making the right noises with regard to wanting to include patients and the public in the innovation process. Simon Stevens, CEO of NHS England, has spoken of his desire to enable patients to play a much more central role in innovation. Simon Denegri's post reviewing Stevens' speech to the NHS Confederation back in 2014 is definitely worth a read.

Despite the hopes of senior leaders, I still feel there is a very large gap between the rhetoric and the reality. I talk to so many patients (and healthcare professionals) who have sadly stopped coming up with ideas to make things better, because the system always says no or dismisses their ideas as foolish because they are not seen as experts. Editing your website to include 'patient centred' is the easy part, but actually getting each of your staff to live and breathe those words on a daily basis is a much more difficult task. Virtually every organisation in healthcare I observe is desperate for innovation, except that they want innovation on their own terms and conditions, which often means a long-winded, conservative and bureaucratic process. David Gilbert's wonderful post on patient-led innovation concludes with a great example of this phenomenon:

"I once worked with a fabulous cardiac rehab nursing team that got together on a Friday and asked each other, ‘what one thing have we learned from patients this week?’ And ‘what one thing could we do better next week?’ We were about to go into the next phase and have a few patients come to those meetings and my fantasy was to get them to help design and deliver some of the ideas. But the Director of Nursing said that our idea was counter to the Engagement Strategy and objected that patients would be ‘unrepresentative’. Now they run focus groups, that report to an engagement sub-committee that reports to a patient experience board that reports to… crash!"

It's not all doom and gloom; times are changing. Two UK patients, Michael Seres & Molly Watt, have each innovated in their own arenas and created solutions to problems that impact people like them. I'm proud that they are both my friends, and their efforts always remind me of what's possible with sheer determination, tenacity and vision, even when all the odds are stacked against you.

Tomorrow, four events in the UK are taking place which fill me with hope. One is People Drive Digital, where the headline reads, "Our festival is a creative space for people orientated approaches to digital technologies and online social networks in health and care" and the second is a People’s Transformathon, where the headline reads, "Bringing together patients, carers, service users, volunteers and staff from across health and care systems in the UK and overseas to connect, share, and learn from one another."

The third event is called Patients First, a new conference from the Association of Medical Research Charities (AMRC) and the Association of the British Pharmaceutical Industry (ABPI), where the headline reads, "It brings together everyone involved in delivering better outcomes for patients – from research and development to care and access to treatments – and puts patients at the heart of the discussion."

The fourth event is a Mental Health & Technology: Ideas Generation Workshop hosted by the Centre for Translational Informatics. Isn't it great to read the description of the event: "South London and Maudsley NHS Foundation Trust and Kings College London want you to join what we hope will be the first in a series of workshops, co-led by service users, that will hear and discuss your views of the mental health technology you use, want to use or wish you had so that we can partner with you in its design, development and deployment." In the FAQ covering the format of the event, the organisers state, "The event will be in an informal and relaxed, there are no wrong opinions! We want to hear your ideas and thoughts." What a refreshing contrast to the typical response you might get within a hospital environment.

The first event is in Leeds, the second is online, and the third and fourth are both in London, and I know that the first three are using a Twitter hashtag, so you will be able to participate from anywhere in the world. What I find particularly refreshing is that the first two events start their title with the word people, not patient. 

I also noticed that the Connected Health conference next month has a session on Patients as Innovators and Partners, with a Patient Advocate, Amanda Greene, as a speaker. I'm inspired and encouraged by agents of change who work within the healthcare system and are pushing boundaries themselves by acknowledging that patients bring valuable ideas. One of those people is Dr Keith Grimes, who was also mentoring teams at the MISK Hackathon, and the 360 video below of our conversation shows why we need more leaders like him. The video is an excerpt from a longer 9 minute video where we even discussed how health hackathons could innovate in terms of format.

As we approach 2017, I really do hope we see the pace of change speed up when it comes to harnessing the unique contributions that patients and their caregivers can bring to the innovation process, whether at a grassroots community level or in the design of the next big health app. More and more people around the globe who were previously offline are now being connected to the internet and/or using a smartphone for the first time. How will we tap into their experiences, ideas and solutions? Whether a patient is in Riyadh, Riga or Rio, let's connect with them, and genuinely listen to them, with open hearts and open minds.

We can also help to create a different future by educating our youth differently, so they understand that their voice matters, even if they don't have a string of letters after their name. We are going to have to have difficult conversations, where we feel uncomfortable, and where we'll have to leave our egos at the door. There are circumstances where patients will be leading, and the professionals will have to accept that, or risk being bypassed entirely, which is not a healthy situation. Equally, there are times when we'd probably want a paternalistic healthcare system, where the healthcare professionals are seen as the leaders in charge of the situation, for example in a medical emergency.

The dialogue on patient innovation isn't about patients vs doctors, or about assigning blame, it's about coming together to understand how we move forward. Many of us are conditioned to think and act a certain way, whether it's because of our professional training or just how society suggests we should think. Unravelling that conditioning on a local, national, international and global level is long overdue. 

What will YOU do differently to foster a culture where we have many more innovations coming from patients and their caregivers? A future where having a patient (or their advocate) keynote at an event isn't seen as something novel, but the norm. A future where the system acknowledges that on certain occasions, the patient or their caregiver could be superior at generating innovation. A future where the gap between the rhetoric and reality disappears. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Engaging patients & the public is harder than you think

Back in 2014, Google acquired a British artificial intelligence startup in London called Deepmind. It was Google's biggest EU purchase at that time, estimated to be in the region of 400 million pounds (approx. $650 million). Deepmind's aim from the beginning was to develop ways in which computers could think like humans.

Earlier this year, Deepmind launched Deepmind Health, with a focus on healthcare. It appears that the initial focus is to build apps that can help doctors identify patients who are at risk of complications. It's not clear yet how they plan to use AI in the context of healthcare applications. However, a few months after they launched this new division, they did start some work with Moorfields Eye Hospital in London to apply machine learning to 1 million eye scans to better predict eye disease.

There are many concerns, which are heightened when articles are published with titles such as "Why Google Deepmind wants your medical records?" Many of us don't trust corporations with our medical records, whether it's Google or anyone else.

So I popped along to Deepmind Health's 1st ever patient & public engagement event held at Google's UK headquarters in London last week. They also offered a livestream for those who could not attend. 

What follows is a tweetstorm from me during the event, which nicely summarises my reaction to the event. [Big thanks to Shirley Ayres for reminding me that most people are not on Twitter, and would benefit from being able to see the list of tweets from my tweetstorm] Alas, due to issues with my website, the tweets are included as images rather than embedded tweets. 

Finally, whilst not part of my tweetstorm, this one question reminded me of the biggest question going through everyone's minds. 

Below is a 2.5 hour video which shows the entire event including the Q&A at the end. I'd be curious to hear your thoughts after watching the video. Are we engaging patients & the public in the right way? What could be done differently to increase engagement? Who needs to do more work in engaging patients & the public?

There are some really basic things that can be done, such as planning the event with consideration for the needs of those you are trying to engage, not just your own. This particular event was held at 10am-12pm on a Tuesday morning. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]
