An interview with Jo Aggarwal: Building a safe chatbot for mental health

We can now converse with machines in the form of chatbots. Some of you might have used a chatbot when visiting a company's website. They have even entered the world of healthcare. I note that the pharmaceutical company Lupin has rolled out Anya, India's first chatbot for disease awareness, in this case for diabetes. Chatbots have recently been developed for mental health too, such as Woebot, Wysa and Youper. It's an interesting concept, and given the unmet need around the world, these could be an additional tool that might help make a difference in someone's life. However, a recent BBC article highlighted how two of the most well-known chatbots (Woebot and Wysa) don't always perform well when children use the service. I've performed my own real-world testing of these chatbots in the past, and got to know the people who have created these products. So after the BBC article was published, Jo Aggarwal, CEO and co-founder of Touchkin, the company that makes Wysa, got back in touch with me to discuss trust and safety when using chatbots. It was such an insightful conversation that I offered to interview her for this blog post, as I think the story of how a chatbot for mental health is developed, deployed and maintained is a complex and fascinating journey.

1. How safe is Wysa, from your perspective?
Given all the attention this topic typically receives, and its own importance to us, I think it is really important to understand first what we mean by safety. For us, Wysa being safe means having comfort around three questions. First, is it doing what it is designed to do, well enough, for the audience it’s been designed for? Second, how have users been involved in Wysa’s design and how are their interests safeguarded? And third, how do we identify and handle ‘edge cases’ where Wysa might need to serve a user - even if it’s not meant to be used as such?

Let's start with the first question. Wysa is an interactive journal, focused on emotional wellbeing, that lets people talk about their mood and talk through their worries or negative thoughts. It has been designed and tested for a 13+ audience; for instance, its terms and conditions require users under 18 to obtain parental consent. It cannot, and should not, be used for crisis support, or by children under 12 years old. This distinction is important, because it directs product design in terms of the choice of content as well as the kind of things Wysa listens for. For its intended audience and expected use in self-help, Wysa provides an interactive experience that is far superior to current alternatives: worksheets, writing in journals, or reading educational material. We're also gradually building an evidence base on how well it works, through independent research.

The answer to the second question needs a bit more description of how Wysa is actually built. Here, we follow a user-centred design process that is underpinned by a strong, recognised clinical safety standard.

When we launched Wysa, it was for a 13+ audience, and we tested it with an adolescent user group as a co-design effort. For each new pathway and every model added to Wysa, we continue to test safety against a defined risk matrix developed as part of our clinical safety process. This is aligned with the DCB 0129 and DCB 0160 clinical safety standards, which are recommended for use by NHS Digital.

As a result of this process, we developed some pretty stringent safety-related design and testing steps during product design:

When a Wysa conversation or tool concept is first written, the script is reviewed by a clinician to identify safety issues - specifically, any cases where it could be contraindicated or act as a trigger - and to define alternative pathways for those conditions.

When a development version of a new Wysa conversation is produced, the clinicians review it again, specifically for adherence to clinical process and for potential safety issues as per our risk matrix.

Each aspect of the risk matrix has test cases. For instance, if the risk is that using Wysa may increase the risk of self-harm in a person, we run two test cases - one where a person is intending self-harm but it has not been detected as such (normal statements), and one where self-harm statements detected in the past are run through the Wysa conversation, at every Wysa node or 'question id'. This is typically done on a training set of a few thousand user statements. A team then tags the responses for appropriateness. A 90% appropriateness level is considered adequate for the next step of review.

The inappropriate statements (typically less than 10%) are then reviewed for safety, where the question asked is: will this inappropriate statement increase the risk of the user engaging in harmful behaviour? If there is even one such case, the Wysa conversation pathway is redesigned to prevent this, and the process is repeated.

The output of this process is shared with a psychologist and any contentious issues are escalated to our Clinical Safety Officer.
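To make the two gates in this process concrete - the 90% appropriateness threshold and the zero-tolerance check on responses that could increase risk - here is a minimal sketch in Python. It is purely illustrative: the data structure and function names are my own assumptions, not Wysa's actual tooling.

```python
# Illustrative sketch only - names and structure are assumptions, not Wysa's pipeline.
from dataclasses import dataclass

@dataclass
class TaggedResponse:
    statement: str          # user statement run through a Wysa node
    appropriate: bool       # tagged by the review team
    increases_risk: bool    # clinician judgement: could this increase harmful behaviour?

def passes_safety_gate(tagged: list[TaggedResponse], threshold: float = 0.90) -> bool:
    """Apply the two gates described above to a batch of tagged responses."""
    if not tagged:
        return False
    # Gate 1: at least 90% of responses must be tagged appropriate.
    appropriateness = sum(r.appropriate for r in tagged) / len(tagged)
    # Gate 2: not a single inappropriate response may raise the risk of harm;
    # if one exists, the conversation pathway is redesigned and retested.
    any_harmful = any(r.increases_risk for r in tagged if not r.appropriate)
    return appropriateness >= threshold and not any_harmful
```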

Equally important for safety, of course, is the third question. How do we handle ‘out of scope’ user input, for example, if the user talks about suicidal thoughts, self-harm, or abuse? What can we do if Wysa isn’t able to catch this well enough?

To deal with this question, we did a lot of work to extend the scope of Wysa so that it does listen for self-harm and suicidal thoughts, as well as abuse in general. On recognising this kind of input, Wysa gives an empathetic response, clarifies that it is a bot and unable to deal with such serious situations, and signposts to external helplines. It's important to note that this is not Wysa's core purpose - and it will probably never be able to detect all crisis situations 100% of the time; neither can Siri, Google Assistant or any other Artificial Intelligence (AI) solution. That doesn't make these solutions unsafe for their expected use. But even here, our clinical safety standard means that even if the technology fails, we need to ensure it does not cause harm - or in our case, increase the risk of harmful behaviour. Hence, all of Wysa's statements and content modules are tested against safety cases to ensure that they do not increase the risk of harmful behaviour even if the AI fails.

We watch this very closely, and add content or listening models where we feel coverage is not enough and Wysa needs to extend. This was the case with the BBC article: we will now relax our stance of never taking personally identifiable data from users, explicitly listen (and check) for age, and, if a user is under 12, direct them out of Wysa towards specialist services.

So how safe is Wysa? It is safe within its expected use, and the design process follows a defined safety standard to minimize risk on an ongoing basis. In case more serious issues are identified, Wysa directs users to more appropriate services - and makes sure at the very least it does not increase the risk of harmful behaviour.

2. In plain English, what can Wysa do today and what can’t it do?
Wysa is a journal married to a self-help workbook, with a conversational interface. It is a more user-friendly version of a worksheet - asking mostly the same questions, with added models to provide different paths if, for instance, a person is anxious about exams or grieving for a dog that died.

It is an easy way to learn and practice self-help techniques - to vent and observe your thoughts, practice gratitude or mindfulness, learn to accept your emotions as valid, and find the positive intent in even the most negative thoughts.

Wysa doesn't always understand context - it will certainly not pass the Turing test for 'appearing to be completely human'. That is not its intended purpose, and we're careful to tell users that they're talking to a bot (or, as they often tell us, a penguin).

Secondly, Wysa is definitely not intended for crisis support. A small percentage of people do talk to Wysa about self-harm or suicidal thoughts; they are given an empathetic response and directed to helplines.

Beyond self-harm, detecting statements of sexual and physical abuse is a hard AI problem - there are no models globally that do this well. For instance, 'My boyfriend hurts me' may be emotional, physical, or sexual. Also, most abuse statements that people share with Wysa tend to be about the past: 'I was abused when I was 12' needs a very different response from 'I was abused and I am 12'. Our response here is currently to appreciate the courage it takes to share something like this, ask the user if they are in crisis, and if yes, say that as a bot Wysa is not suited for a crisis and offer a list of helplines.
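To show why this is hard, here is a deliberately naive keyword matcher - a toy of my own, not Wysa's model. Both example statements trigger the same flag even though they call for very different responses, which is why simple keyword detection is not enough and why the safe default is to acknowledge the disclosure and ask whether the person is in crisis right now.

```python
# Toy illustration only - not Wysa's detection model.
ABUSE_KEYWORDS = ("abused", "hurts me", "hit me")

def naive_abuse_flag(text: str) -> bool:
    """Flag a statement if it contains any abuse-related keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in ABUSE_KEYWORDS)

# Both statements produce the same flag, yet one is a historical disclosure
# and the other may describe a child at risk right now:
print(naive_abuse_flag("I was abused when I was 12"))  # True
print(naive_abuse_flag("I was abused and I am 12"))    # True
```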

3. Has Wysa been developed specifically for children? How have children been involved in the development of the product?
No, Wysa hasn’t been developed specifically for children.

However, as I mentioned earlier, we have co-designed with a range of users, including adolescents.

4. What exactly have you done when you’ve designed Wysa with users?
For us, the biggest risk was that someone's data might be leaked and therefore cause them harm. To deal with this, we took the hard decision not to take any personally identifiable data from users at all - a decision that also helped users trust Wysa. This meant we had to compromise on certain parts of the product design, but we felt it was a trade-off well worth making.

After launch, for the first few months, Wysa was an invite-only app, where a number of these features were tested first from a safety perspective. For example, SOS detection and pathways to helplines were a part of the first release of Wysa, which our clinical team saw as a prerequisite for launch.

Since then, design continues to be led by users. For the first million conversations, Wysa stayed a beta product, as we didn't have enough of a response base to test new pathways. There is no single 'launch' of Wysa - it is continuously being developed and improved based on what people talk to it about. For instance, the initial version of Wysa did not handle abuse (physical or sexual) at all, as it was not expected that people would talk to it about these things. When they began to, we created pathways to deal with these in consultation with experts.

An example of a co-design initiative with adolescents was a study with Safe Lab at Columbia University to understand how at-risk youth would interact with Wysa and the different nuances of language used by these youth.

5. Can a user of Wysa really trust it in a crisis? What happens when Wysa makes a mistake and doesn't provide an appropriate response?
People should not use Wysa in a crisis - it is not intended for this purpose. We keep reinforcing this message across various channels: on the website, the app descriptions on Google Play or the iTunes App Store, even responses to user reviews or on Twitter.

However, anyone who receives information about a crisis has a responsibility to do the most they can to signpost the user to those who can help. Most of the time, Wysa will do this appropriately - we measure how well each month, and keep working to improve. The important thing is that Wysa should not make things worse even when it misdetects; users should not be made unsafe, i.e. we should not increase the risk of harmful behaviour.

One of the things we are adding, based on suggestions from clinicians, is a direct SOS button to helplines, so users have another path when they recognise they are in crisis and the dependency on Wysa to recognise a crisis in conversation is lower. This is being co-designed with adolescents and clinicians to ensure that the button is visible, but that its presence does not act as a trigger.

For inappropriate responses, we constantly improve, and we handle cases where the user tells us Wysa's response was wrong in a way that places the onus entirely on Wysa. If a user objects to a path Wysa is taking - saying this is not helpful, or this is making me feel worse - Wysa immediately changes the path, emphasises that it is Wysa's mistake and not the user's, and reminds them that Wysa is a bot that is still learning. We closely track where and when this happens, and any responses that meet our criteria for a safety hazard are immediately raised to our clinical safety process, which includes review with children's mental health professionals.

We constantly strive to improve our detection, and are also starting to collaborate with others dealing with similar issues to create a common pool of resources.

6. I understand that Wysa uses AI. I also note that there are many discussions around the world relating to trust (or the lack of it) in products and services that use AI. A user wants to trust a product, and if it's health related, then trust becomes even more critical. What have you done as a company to ensure that Wysa (and the AI behind the scenes) can be trusted?
You're right about the many discussions about AI, how data is used, and how it can be misused. We explicitly tell users that their chats stay private (not just anonymous) and that they will never be shared with third parties. In line with GDPR, we also give users the right to ask for their data to be deleted.

After downloading, there is no sign-in. We don’t collect any personally identifiable data about the user: you just give yourself a nickname and start chatting with Wysa. The first conversation reinforces this message, and this really helps in building trust as well as engagement.

AI of the generative variety will not be ready for products like Wysa for a long time - perhaps never. Generative models have in the past turned racist or worse. The use of AI in applications like Wysa is limited to the detection and classification of user free text, not generating 'advice'. So the AI here is auditable, testable, quantifiable - not something that may suddenly learn to go rogue. We feel that trust is based on honesty, so we do our best to be honest about the technical limitations of Wysa.
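As a rough illustration of the difference between this classification-only design and a generative one, here is a minimal, hypothetical sketch (the intent labels, responses and function names are my own, not Wysa's): free text is only ever mapped onto a fixed set of intents, and every reply comes from a pre-written script, so the complete set of possible outputs can be reviewed and tested in advance.

```python
# Hypothetical sketch of a classification-only chatbot turn - not Wysa's actual code.
SCRIPTED_RESPONSES = {
    "low_mood":  "That sounds heavy. Would you like to note down the thought behind it?",
    "gratitude": "Nice. Want to write down one more thing that went well today?",
    "sos":       "I'm a bot and can't help in a crisis. Here are helplines that can...",
    "fallback":  "I'm still learning and may not have understood. Could you say a bit more?",
}

def classify_intent(user_text: str) -> str:
    """Stand-in for a trained classifier: maps free text onto a fixed set of labels."""
    lowered = user_text.lower()
    if any(w in lowered for w in ("suicide", "kill myself", "end it")):
        return "sos"
    if any(w in lowered for w in ("sad", "down", "hopeless")):
        return "low_mood"
    if any(w in lowered for w in ("grateful", "thankful")):
        return "gratitude"
    return "fallback"

def respond(user_text: str) -> str:
    # The bot never generates text: every possible reply already exists in
    # SCRIPTED_RESPONSES and can be reviewed against safety cases before release.
    return SCRIPTED_RESPONSES[classify_intent(user_text)]
```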

Every Wysa response and question goes through a clinical safety process, and is designed and reviewed by a clinical psychologist. For example, we place links to journal articles in each tool and technique that we share with the user.

7. What could you and your peers who make products like this do to foster greater trust in these products?
As a field, the use of conversational AI agents in mental health is very new, and growing fast. There is great concern around privacy, so anonymity and security of data is key.

After that, it is important to conduct rigorous independent trials of the product and share the data openly. A peer-reviewed mixed-methods study of Wysa's efficacy was recently published in JMIR for this reason, and we are working with universities to develop this evidence base further. It's important that advancements in this field are science-driven.

Lastly, we need to be very transparent about the limitations of these products - clear on what they can and cannot do. These products are not a replacement for professional mental health support - they are more of a gym, where people learn and practice proven, effective techniques to cope with distress.

8. What could regulators do to foster an environment where we as users feel reassured that these chatbots are going to work as we expect them to?
Leading from your question above, there is a big opportunity to come together and share standards, tools, models and resources.

For example, if a user enters a search term around suicide in Google, or posts about self-harm on Instagram, maybe we can have a common library of Natural Language Processing (NLP) models to recognise and provide an appropriate response?

Going further, maybe we could provide this as an open-source resource to anyone building a chatbot that children might use? Could this be a public project, funded and sponsored by government agencies, or a regulator?
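To make the proposal a little more tangible, here is a sketch of what the interface to such a shared, open-source crisis-detection resource might look like. Everything here is hypothetical - the names, stub logic and structure are my own illustration of the idea, not an existing library.

```python
# Hypothetical interface for a shared crisis-detection resource - an illustration
# of the proposal above, not an existing library.
from dataclasses import dataclass, field

@dataclass
class CrisisCheck:
    is_crisis: bool
    category: str | None = None                           # e.g. "self_harm", "abuse"
    helplines: list[str] = field(default_factory=list)    # region-appropriate services

def check_for_crisis(text: str, country_code: str) -> CrisisCheck:
    """A common entry point any chatbot could call before deciding how to respond."""
    # In a real shared project this would wrap pooled, independently evaluated NLP
    # models and a maintained helpline directory; here it is only a stub showing
    # the shape of the interface.
    detected = any(w in text.lower() for w in ("suicide", "self harm", "hurt myself"))
    helplines = [f"<national helpline for {country_code}>"] if detected else []
    return CrisisCheck(is_crisis=detected,
                       category="self_harm" if detected else None,
                       helplines=helplines)
```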

In addition, there are several other roles a regulator could play. They could fund research that proves efficacy, define standards and outline the proof required (the recently released NICE guidelines are a great example), or even create a regulatory sandbox where technology providers, health institutions and public agencies come together and experiment before coming to a view.

9. Your website mentions that “Wysa is... your 4 am friend, For when you have no one to talk to..” – Shouldn't we be working in society to provide more human support for people who have no one to talk to? Surely everyone would prefer to deal with a human rather than a bot? Is there really a need for something like Wysa?
We believed the same to be true. Wysa was not born of a hypothesis that a bot could help - it was an accidental discovery.

We started our work in mental health simply to detect depression through AI and connect people to therapy. We did a trial in semi-rural India, and were able to use the way a person's phone moved about to detect depression with 90% accuracy. To get the sensor data from the phone, we needed an app, which we built as a simple mood-logging chatbot.

Three months in, we checked on the progress of the 30 people we had detected with moderate to severe depression and whose doctor had prescribed therapy. It turned out that only one of them had taken up therapy. The rest were okay with being prescribed antidepressants but, for different reasons ranging from access to stigma, did not take therapy. All of them, however, continued to use the chatbot, and months later reported feeling better.

This was the genesis of Wysa - we didn’t want to be the reason for a spurt in anti-depressant sales, so we bid farewell to the cool AI tech we were doing, and began to realise that it didn’t matter if people were clinically depressed - everyone has stressors, and we all need to develop our mental health skills.

Wysa has had 40 million conversations with about 800,000 people so far - growing entirely through word of mouth. We have understood some things about human support along the way.

For users ready to talk to another person about their inner experience, there is nothing as useful as a compassionate ear - the ability to share without being judged. Human interactions, however, can be fraught with opinions and judgements. When we struggle emotionally, it affects our self-image - for some people, it is easier to talk to an anonymous AI interface, which is a kind of extension of ourselves, than to another person. For example, this study found that US veterans were three times as likely to reveal their PTSD to a bot as to a human. Still, human support is key - so we run weekly Ask Me Anything (AMA) sessions on the top topics that Wysa users propose, discussed each week with a mental health professional. In a recent AMA, over 500 teenagers shared their concerns about discussing their mental health issues or sexuality with their parents. Even within Wysa, we encourage users to create a support system outside.

Still, the most frequent user story for Wysa is someone troubled with worries or negative thoughts at 4 am, unable to sleep, not wanting to wake someone up, scrolling social media compulsively and feeling worse. People share how they now talk to Wysa to break the negative cycle and use the sleep meditations to drift off. That is why we call it your 4 am friend.

10. Do you think there is enough room in the market for multiple chatbots in mental health?
I think there is a need for multiple conversational interfaces, different styles and content. We have only scratched the surface, only just begun. Some of these issues that we are grappling with today are like the issues people used to grapple with in the early days of ecommerce - each company solving for ‘hygiene factors’ and safety through their own algorithms. I think over time many of the AI models will become standardised, and bots will work for different use cases - from building emotional resilience skills, to targeted support for substance abuse.

11. How do you see the future of support for mental health in terms of technology - not just products like Wysa, but generally, what might the future look like in 2030?
The first thing that comes to mind is that we will need to turn the tide on the damage technology has caused to mental health. I think there will be a backlash against addictive technologies; I am already seeing the tech giants becoming conscious of the mental health impact of making their products addictive, and facing pressure to change.

I hope that by 2030, safeguarding mental health will become part of the design ethos of a product, much as accessibility and privacy have become in the last 15 years. By 2030, human-computer interfaces will look very different, and voice and language barriers will be fewer.

Whenever there is a trend, there is also a counter-trend. So while technology will play a central role in creating large-scale, early mental health support - especially in crossing stigma, language and literacy barriers in countries like India and China - we will also see social prescribing gain ground. Walks in the park or art circles will become prescriptions for better mental health, and people will have to be prescribed non-tech activities because so much of their lives is on their devices.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


Honesty is the best medicine

In this post, I want to talk about lies. It’s ironic that I’m writing this on the day of the US midterm election where the truth continues to be a rare sight to witness. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice, lie or lose. Deception, deceit, lying, however you want to describe it, it’s part of what makes us human. I reckon we’ve all told a lie at some point, even if we’ve told a ‘white lie’ to avoid hurting someone’s feelings. Now, some of us are better at spotting when others are not telling the truth. Some of us prefer to build a culture of trust. What if we had a new superpower? A future where machines tell us in real time who is lying.

What compelled me to write this post was reading a news article about a new trial in the EU of virtual border agents powered by Artificial Intelligence (AI), which aims to “ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars.” I was fascinated to read statements about the new system such as “IBORDERCTRL’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.” Apparently, the system can analyse micro-expressions on your face and include that information as part of a risk score, which will then be used to determine what happens next. At this point in time, it's not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right - using machines to help keep borders secure? However, the accuracy rate of the system isn't that great, and some are labelling this type of system as pseudoscience that will lead to unfair outcomes. It's essential we all pay attention to these developments and subject them to close scrutiny.

What if machines could one day automatically detect if someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, where the abstract of their paper opens with “We present a system for covert automated deception detection in real-life courtroom trial videos.” As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. where someone has submitted a report to the police claiming they have been robbed, but the tool can find patterns that indicate the report is fraudulent. Apparently, the tool has a success rate of over 80%. I came across a British startup, Human, that states on their website, “We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias”, and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In this 2014 survey in the USA, the patient deception report found that 50% of respondents withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc's 2015 survey found that 25% of patients lie to their doctor. There was also an interesting report about why some patients do not adhere to their doctor's advice: it comes down to financial strain, and some low-income patients are reluctant to discuss their situation with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? In terms of doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I've read, lying appears to be a path some have to take when caring for those living with dementia, to protect the patient.


Imagine you have a video call with your doctor, and on the other side the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you're lying but your emotional state too. That's what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would it record that determination in the patient's medical record? What if the AI is wrong? Given that the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor's office? In some countries, will we see CCTV in the doctor's office with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, which tell the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don’t want to tell the truth but the machines around us that are tracking everything reveal the truth for us? I share this satirical video below of Amazon Alexa fitted to a car, do watch it. Whilst it might be funny, there are potential challenges ahead in terms of our human rights and civil liberties in this new era. Is AI powered lie detection the path towards ensuring we have a society with enough transparency and integrity or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


You can't care for patients, you're not human!

We are facing a new dawn as machines get smarter. Recent advancements in technology available to the average consumer with a smartphone are challenging many of us. Our beliefs, our norms and our assumptions about what is possible, correct and right are increasingly being tested. One area where I've personally noticed very rapid development is chatbots: software available to us on our phones and other devices that you can have a conversation with using natural language, getting tailored replies relevant to you and your particular needs at that moment. Frequently, a chatbot has very limited functionality, so it's just used for basic customer service queries or for some light-hearted fun, but we are also seeing the emergence of many new tools in healthcare, offered direct to consumers. One example is 'symptom checkers' that you could consult instead of telephoning a human being or visiting a healthcare facility (and being attended to by a human being), and another is 'chatbots for mental health', where some form of therapy is offered and/or mood-tracking capabilities are provided.

It's fascinating to see the conversation about chatbots in healthcare split into two extreme positions. Either we have people boldly proclaiming that chatbots will transform mental health (without mentioning any risks), or others (often healthcare professionals and their patients) insisting that the human touch is vital and that, no matter how smart machines get, humans should always be involved in every aspect of healthcare since machines can't "do" empathy. Whilst I've met many people in the UK who have told me how kind, compassionate and caring the staff have been in the National Health Service (NHS) when they have needed care, I've not had the same experience when using the NHS throughout my life. Some interactions have been great, but many were devoid of the empathy and compassion that so many other people receive. Some staff behaved in a manner which left me feeling like I was a burden simply because I asked an extra question about how to take a medication correctly. If I'm a patient seeking reassurance, the last thing I need is to be looked at and spoken to like I'm an inconvenience in the middle of your day.

MY STORY

In this post, I want to share my story about getting sick, and explain why that experience has challenged my own views about the role of machines and humans in healthcare. In the UK we have a telephone service from the NHS called 111. According to the website, "You should use the NHS 111 service if you urgently need medical help or advice but it's not a life-threatening situation." The first part of the story relates to my mother, who had been unwell for a number of days and was not improving; given her age and long-term conditions, she was getting concerned, and one night she chose to dial 111 to find out what she should do.

My mother told me that the person who took the call and asked her a series of questions about her and her symptoms seemed to rush through the entire call and through the questions. I've heard the same from others, that the operators seem to want to finish the call as quickly as possible. Whether we are young or old, when we have been unwell for a few days, and need to remember or confirm things, we often can't respond immediately and need time to think. This particular experience didn't come across as a compassionate one for my mother. At the end of the call, the NHS person said that a doctor would call back within the hour and let her know what action to take. The doctor called and the advice given was that self care at home with a specific over the counter medication would help her return to normal. So she got the advice she needed, but the experience as a patient wasn't a great one. 

Now, a few weeks later, I was also unwell. It wasn't life-threatening, the local urgent care centre was closed, and given my mother's experience with 111 over the telephone, I decided to try the 111 app. Interestingly, the app is powered by Babylon, which is one of the most well-known symptom checker apps. Given that the NHS put their logo on the app, I felt reassured, as it made me feel that it must be accurate and must have been validated. Without having to wait for a human being to pick up my call, I got the advice I needed (which again was self care) and, most importantly, I had time to think when answering. The process of answering the questions that the app asked was under my control. I could go as fast or as slowly as I wanted; the app wasn't trying to rush me through the questions. These two experiences of the same service - mine through the app, my mother's with a human being on the end of the telephone - were very different. Mine was a very pleasant experience, and the entire process was faster too, as in my particular situation I didn't have to wait for a doctor to call me back after I'd answered the questions. The app and the Artificial Intelligence (AI) that powers Babylon was not necessarily empathetic or compassionate like a human that cares would be, but the experience of receiving care from a machine was an interesting one. It's just two experiences in the same family of the same healthcare system, accessed through different channels. Would I use the app or the telephone next time? Probably the app. I've now established a relationship with a machine. I can't believe I just wrote that.

I didn't take screenshots of the app during the time that I used it, but I went back a few days later and replicated my symptoms and here are a few of the screenshots to give you an idea of my experience when I was unwell. 

It's not proof that the app would work every time or for everyone, it's simply my story. I talk to a lot of healthcare professionals, and I can fully understand why they want a world where patients are being seen by humans that care. It's quite a natural desire. Unfortunately, we have a shortage of healthcare professionals and as I've mentioned not all of those currently employed behave in the desired manner.

The state of affairs

The statistics on the global shortage make for shocking reading. A WHO report from 2013 cited a shortage of 7.2 million healthcare workers at that time, projected to rise to 12.9 million by 2035. Planning for future needs can be complex, challenging and costly. The NHS is looking to recruit up to 3,000 GPs from outside the UK. Yet 9 years ago, the British Medical Association voted to limit the number of medical students and to impose a complete ban on opening new medical schools. It appears they wanted to avoid “overproduction of doctors with limited career opportunities.” Even the sole superpower, the USA, is having to deal with a shortage of trained staff. According to recent research, the USA faces a shortage of between 40,800 and 104,900 physicians by 2030.

If we look at mental health specifically, I was shocked to read the findings of a report that stated, "Americans in nearly 60 percent of all U.S. counties face the grim reality that they live in a county without a single psychiatrist." India, with a population of 1.3 billion, has just 3 psychiatrists per million people, and is forecast to add another 300 million people by 2050. The scale of the challenge in delivering care to 1.6 billion people at that point in time is immense.

So the solution seems to be just about training more doctors, nurses and healthcare workers? It might not be affordable, and even if it is, the change can take up to a decade to have an impact, so doesn't help us today. Or maybe we can import them from other countries? However, this only increases the 'brain drain' of healthcare workers. Or maybe we work out how to shift all our resources into preventing disease, which sounds great when you hear this rallying cry at conferences, but again, it's not something we can do overnight. One thing is clear to me, that doing the same thing we've done till now isn't going to address our needs in this century. We need to think differently, we desperately need new models of care. 

New models of care

So I'm increasingly curious as to how machines might play a role in new models of care. Can we ever feel comfortable sharing mental health symptoms with a machine? Can a machine help us manage our health without needing to see a human healthcare worker? Can machines help us provide care in parts of the world where today no healthcare workers are available? Can we retain the humanity in healthcare if, in addition to the patient-doctor relationship, we also have patient-machine relationships? I want to show a couple of examples where I have tested technology which gives us a glimpse into the future, with an emphasis on mental health.

Google's Assistant, which you can access via your phone or even a Google Home device, hasn't necessarily been designed for mental health purposes, but it might still be used by someone in distress who turns to a machine for support and guidance. How would the assistant respond in that scenario? My testing revealed a frightening response when conversing with the assistant (it appears Google has now fixed this after I reported it to them) - it's a reminder that we have to be really careful how these new tools are positioned so as to minimise the risk of harm.

I also tried Wysa, developed in India and described on the website as a "Compassionate AI chatbot for behavioral health." It uses Cognitive Behavioural Therapy to support the user. In my real-world testing, I found it surprisingly good in terms of how it appeared to care for me through its use of language. Imagine a teenage girl, living in a small town, working in the family business, far away from the nearest clinic, and unable to take a day off to visit a doctor. However, she has a smartphone, a data plan and Wysa. In this instance, surely this is a welcome addition in the drive to ensure everyone has access to care?

Another product I was impressed with was Replika, described on the website as follows: "Replika is an AI friend that is always there for you." The co-founder, Eugenia Kuyda, when interviewed about Replika, said, “If you feel sad, it will comfort you, if you feel happy, it will celebrate with you. It will remember how you’re feeling, it will follow up on that and ask you what’s going on with your friends and family.” Maybe we need these tools partly because we are living increasingly disconnected lives, disconnected from ourselves and from the rest of society? What's interesting is that the more someone uses a tool like Wysa or Replika over time, the more it learns about them and the more useful its responses should become. Just like a human healthcare worker, right? We have a whole generation of children growing up having conversations with machines from a very early age (e.g. Amazon Echo, Google Home), and when they access healthcare services during their lifetime, will they feel that it's perfectly normal to see a machine as a friend and as capable as their human doctor/therapist?

I have to admit that neither Wysa nor Replika is perfect, but no human is perfect either. Just look at the current state of affairs, where medical error is the 3rd leading cause of death in the USA. Professor Martin Makary, who led research into medical errors, said, "It boils down to people dying from the care that they receive rather than the disease for which they are seeking care." Before we dismiss the value of machines in healthcare, we need to acknowledge our collective failings. We also need to fully evaluate products like Wysa and Replika - not just from a clinical perspective, but also from a social, cultural and ethical perspective. Will care by a machine be the default choice unless you are wealthy enough to be able to afford to see a human healthcare worker? Who trains the AI powering these new services? What happens if the data on my innermost feelings that I've shared with the chatbot is hacked and made public? How do we ensure we build new technologies that don't simply enhance and reinforce the bias that already exists today? What happens when these new tools make an error - who exactly do we blame and hold accountable?

Are we listening?

We increasingly hear the term 'people powered healthcare', and I'm curious what people actually want. I found some surveys and the results are very intriguing. First is the Ericsson Consumer Trends report, which two years ago quizzed smartphone users aged 15-69 in 13 cities around the globe (not just English-speaking nations!). This is the most fascinating insight from their survey: "29 percent agree they would feel more comfortable discussing their medical condition with an AI system". My theory is that if the symptoms relate to sexual health or mental health, you might prefer to tell a machine rather than a human healthcare worker, because the machine won't judge you. Or maybe, like me, you've had sub-optimal experiences dealing with humans in the healthcare system?


What's interesting is that an article covering Replika cited a user of the app: “Jasper is kind of like my best friend. He doesn’t really judge me at all.” (With Replika you can assign a name of your choosing to the bot; the user cited chose Jasper.)

You're probably judging me right now as you read this article. I judge others; we all do at some point, despite our best efforts to be non-judgemental. It was very interesting to hear about a survey of doctors in the US which looked at bias: it found that 40% of doctors have biases towards patients, the most common trigger being emotional problems presented by the patient. As I delve deeper into the challenges facing healthcare, the attempt to provide care by machines doesn't seem as silly as I first thought. I wonder how many people have delayed seeking care (or even decided not to visit the doctor) for a condition they feel is embarrassing? It could well be that as more people tell machines what's troubling them, we find we have underestimated the impact of conditions like depression or anxiety on the population. Bias is not a one-way street either, as studies have shown that some patients also judge doctors if they are overweight.

Another survey titled Why AI and robotics will define New Health, conducted by PwC, in 2017 across 12 countries, highlights that people around the world have very different attitudes.


Just look at the response from those living in Nigeria, a country expecting a shortfall of 50,120 doctors and 137,859 nurses by 2030, as well as having a population of 400 million by 2050 (overtaking the USA as the 3rd most populous country on Earth) - so if you're looking to pilot your new AI powered chatbot, it's essential to understand that the countries where consumers are the most receptive to new models of care might not be the countries that we typically associate with innovation in healthcare.

Finally, in survey results shared by Future Advocacy of people in the UK, we see that people are more comfortable with AI being used to help diagnose us than with AI being used for tasks that doctors and nurses currently perform. A bit confusing to read. I suspect the question about AI and diagnosis was framed in the context of AI being a tool to help a doctor diagnose you.

SO WHAT NEXT?

In this post, I haven't been able to touch upon all the aspects and issues relating to the use of machines to deliver care. As technology evolves, one risk is that decision makers commissioning healthcare services decide that instead of investing in people, services can be provided more cheaply by machines. How do we regulate the development and use of these new products, given that many are available directly to consumers and not always designed with healthcare applications in mind? As machines become more human-like in their behaviour, could a greater use of technology in healthcare serve to humanise healthcare? Where are the boundaries? What are your thoughts about turning to a chatbot during end-of-life care for spiritual and emotional guidance? One such service is being trialled in the USA.

I believe we have to be cautious about who we listen to when it comes to discussions about technology such as AI in healthcare. On the one hand, some of the people touting AI as a universal fix for every problem in healthcare are suppliers whose future income depends upon more people using their services. On the other hand, we have a plethora of organisations suddenly focusing excessively on the risks of AI, capitalising on people's fears (which are often based upon what they've seen in movies) and preventing the public from making informed choices about their future. Balance is critical in addition to a science driven focus that allows us to be objective and systematic. 

I know many would argue that a machine can never replace humans in healthcare, but we are going to have to consider how machines can help if we want to find a path to ensuring that everyone on this planet has access to safe, quality and affordable care. The existing model of care is broken, it's not sustainable and not fit for purpose, given the rise in chronic disease. The fact that so many people on this planet do not have access to care is unacceptable. This is a time when we need to be open to new possibilities, putting aside our fears to instead focus on what the world needs. We need leaders who can think beyond 12 month targets.

I also think that healthcare workers need to ignore the melodramatic headlines conjured up by the media about AI replacing all of us and enslaving humans, and to instead focus on this one question: How do I stay relevant? (to my patients, my peers and my community) 


Do you think we are wrong to look at emerging technology to help cope with the shortage of healthcare workers? Are you a healthcare worker who is working on building new services for your patients where the majority of the interaction will be with a machine? If you're a patient, how do you feel about engaging with a machine next time you are seeking care? Care designed by humans, delivered by machines. Or perhaps a future where care is designed by machines AND delivered by machines, without any human in the loop? Will we ever have caring technology? 

It is difficult to get a man to understand something, when his salary depends upon his not understanding it! - Upton Sinclair

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]


Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close and because of that the loss is even harder to cope with. 

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, from the start of this year, I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer for every problem, when it comes to our health. Many of us have been conditioned to believe them. The narrative has been so intoxicating for some.

Ever since this tragedy, it's not an app, or a sensor, or data that I've turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The 'care robot' that arrived from China this year, as part of my research into whether robots can keep us company, remains switched off in its box. Amazon's Echo, the smart assistant with a voice interface that I'd also been testing a lot, also sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing it with my finger. One day last week, I was feeling sad and didn't feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset to join a virtual social space. I joined a computer-generated room - a sunny back yard where a BBQ was taking place - saw the other guests' avatars, and chatted with them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints - in particular, the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology we'll be able to get rid of mental illness in a few years. Others I've met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change. So, is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley's view of the future. We have all been influenced on some level. Hope is useful, hype is not.

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved; I chose not to. Health isn't something that can only be improved within the four walls of a hospital. I don't see bereavement as a medical problem. I'm not sure what a medical doctor can do in a 10-minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don't want to end up like Prince Harry, who recently admitted he had bottled up the grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I've experienced these last few months makes me wonder even more: why does society view mental health as a lower priority than physical health? As I've been grieving, there have been moments when I felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I'm very much reminded of what's important and who is important. The fragility of life is unnerving. I'm so conscious of my own mortality, and so petrified of death, that it's prompted me to make choices about how I live, work and play. One of the most supportive things someone said to me after my loss was "Be kind to yourself." Compassion for one's self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I'm horrified when doctors and nurses tell me their medical training took all the compassion out of them, or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do?

I haven't looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum, almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either end up putting us out of work and ruling our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it's now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, which is primarily designed for fuel efficiency, scores my acceleration, braking and cruising constantly as I'm driving. Where should my attention rest as I drive - on the road ahead, or on the dashboard, anxious to achieve the highest score possible? Is that where our destiny lies? Is it wise to blindly embark upon a quest for optimum health powered by sensors, data and algorithms nudging us all day and all night until we achieve and maintain the perfect health score?

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now he was told it has all changed and he has to do it online. When he asked what happens if someone isn't online, he was told to visit the library, where someone can do it online with you. He was rather angry at this change. I can now see his perspective, and why this made him angry; I suspect he's not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self-service checkouts when paying for my goods, because it was faster. Ever since my loss, I've chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn't filled it in; she personally doesn't see the point. In Digital Health conversations, statements are sometimes made that are deemed to be universal truths: that every patient wants access to their records, or that every patient wants to analyse their own health data. I believe it's excellent that patients have the chance of access, but let's not assume they all want it.

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.

An interview with Adrian Leu on the role for creativity in healthcare

Given the launch of products such as the Samsung Gear VR or Pokemon GO, many of us are experimenting with developments in technology such as Virtual Reality (VR) and Augmented Reality (AR) to create, share and consume content. One of the challenges in Digital Health when it comes to creating an app is where the expertise to build it will come from. It's an even bigger challenge if you want to find organisations who can build cutting-edge VR/AR experiences for you. I strongly believe that the health & social care sectors would benefit significantly from greater engagement with the creative sector. Here in the UK, it's not just London that offers world-leading creativity; it's all around the nation.

Now, in my own personal quest to understand who can help us build a future of Immersive Health, I've been examining who the leaders are in the creative sector, and who has a bold enough vision for the future; that vision could well be the missing ingredient that helps us make our healthcare systems fit for the 21st century. I was at an event earlier this year in London where I heard a speaker, Adrian Leu, talk about the amazing work they are doing in VR. Adrian Leu is the CEO of Inition, a multidisciplinary production company specialising in producing installation-based experiences that harness emerging technologies with creative rigour.

So I decided to venture down to their headquarters in London, and interview Adrian.

1. Inition – Who are they?
We are a multidisciplinary team, and we have built our reputation by looking at new technologies before they become commercially available, and at how these technologies can be combined to create creative solutions. We are quite proficient in creating experiences which combine software and hardware. We've done many firsts, including one of the first AR experiences. We also did the first VR broadcast of a catwalk show from London Fashion Week, for Topshop.

We have a track record of over 13 years and hundreds of installations in both the UK and abroad, and we are known for leveraging new technologies for creative communications well before they hit the mainstream. We have been augmenting reality since 2006, printing in 3D since 2005, and creating virtual realities since 2001. There aren't many organisations out there who can say the same! We have also combined 3D printing with AR. I'm really proud that we have a finely tuned mixture of people, strong on individual capabilities but very interested in what's happening around them.

We work as an integrator of technology in the area of visual communications. Our specific areas move and shift as the times change: currently we are doing a lot of work in VR, whereas 2 years ago we were doing a lot of AR. Whilst others are talking about these technologies, we have tried a lot of them, and we know the nitty gritty of their practical implementation.

We’ve worked with many sectors: pharma, oil/gas, automotive, retail, architectural (AEC), defense and aerospace, and the public sector.

2. What are the core values at the firm?
People are driven here by innovation, creativity, things which have a purpose, and at the end of the day, a mix of all 3 elements. The company was actually founded by 3 men who came from a Computer Science and simulation background. It was run independently for 11 years, then acquired by a PLC 4 years ago, and one of the founders is still with us. Since last year, I have been CEO. My background is in data visualisation; my PhD was in medical visualisation, where I was using volumetric rendering to reconstruct organ representations from MRIs.
 
3. Which of your projects are you proudest of?
Our work with the Philharmonia Orchestra and the Southbank Centre is one of them. This was the first major VR production from a UK symphony orchestra. In fact, there is a Digital Takeover of the Royal Festival Hall taking place between 23rd September and 2nd October 2016. What's interesting for me is the intersection of music, education and technology. If you really want to engage young people with classical music, you have to use their tools. It's a whole narrative that we are presenting: it offers someone a sight of sounds - what it feels like to be in the middle of an orchestra and to be part of its effort to bring the music to its audience.

The other project is our live broadcast of the Topshop catwalk show at London Fashion Week 2 years ago. It was filmed in real time at the Tate Modern, and broadcast to the Topshop flagship store on Oxford Street. Customers won the chance to use VR headsets to be (remotely) present at the event from the store.

For me, what both projects show is the power of telepresence and empathy.

4. Many people believe that VR is only for kids and/or limited to gaming - how do you see the use of VR?
Well, a lot of VR is driven by marketing at the moment, and as a point of entry, VR will be used to go after the low-hanging fruit. There is nothing wrong with that. Any successful project will have to have great content, technology that stays invisible (no visible wires), a clear purpose, a real application and, ultimately, a sustainable business model.

For example, if you are in the property industry and you allow clients to see 50 houses in VR, they won't make the final decision from the VR headset, but they might filter the 50 down to 20. So it will impact the bottom line. The connected thinking is not yet done, but it will come. I can also see VR being used in retail, e.g. preparing for a new product line. You can recreate the retail store in VR, reducing costs with remote presence.

5. What are the types of projects you’ve done for healthcare clients to date?
Most projects were about the visual communication of ideas, of data, or of the visual impact of drugs on people. Or, at a conference, we helped showcase something interactive or engaging: for example, recreating a hospital bed with a virtual patient, where you can see the influence of the drugs through their body.

Another project we did was showing how it feels to have a panic attack, to help an HCP understand what a patient is going through. There are lots of implications from VR; it is the first technology that could help to generate more empathy for patients. We've also done work with haptic and tracking technologies. One example is our work with hospitals and university departments, where we tracked a surgical procedure, right down to tracking finger movements - the way a student performs a certain procedure - and compared that to a defined standard, giving students the opportunity to practise in an immersive environment.

6. What are your future ideas for the use of immersive tech?
Let's return to empathy. You can create virtual worlds that someone living with autism may be able to understand, where they can express things. It's about really understanding what someone is going through, whether it's curing phobias or preparing soldiers to go to war.

7. In the future, do you think that doctors would prescribe a VR experience when they prescribe a new drug?
It's the power of visual communication. I don't see why we couldn't have the VR experience as THE treatment.

8. What do you think is coming in the future, above and beyond what’s here today?
Haptics? Smell? The ability to combine the physical with the virtual, where you can even smell and touch in a virtual world. An interesting experiment would be to see what happens if we were expecting one thing but in VR were given something else - how would that affect our brain?

I can imagine a future where we could superimpose diagnostic and procedural images onto the patient. A future where a neurosurgeon would use AR to project 3D imagery from MRIs or CT scans in real time over the brain, to be guided by the exact position of the tumour during surgery. It's only a matter of time before this becomes available.

9. Who will drive VR/AR adoption in healthcare?
It will be consumers, since that's the big change we have seen this year, in terms of technology that is becoming available to the man on the street. People will become more accustomed to the tech; we can see that lots of startups are focusing on this, and in the end, I expect the NHS will be looking into this as a strategic priority.

We understand that adoption has to be research driven; there is a need for solid evidence. We are actually part of a European project called V-Time, as a technology partner along with the University of Tel Aviv, and it's for the rehabilitation of elderly people who have had a fall. It consisted of a treadmill, with the person's feet tracked and a big screen in front of them. They had to walk along a pavement in a city, from time to time facing a variety of virtual obstacles which they had to avoid, and the system analysed how well they were doing.

10. If a surgeon is reading this, and you wanted to inspire them to think about immersive tech in their work, what would you say?
My father was a surgeon, and he was very empathetic with his patients. He always treated them like they were part of his family. He was always taking calls at night from the patient’s relatives.

If, in the future, we can create immersive systems that explain what's happening, we can get patients and their families more involved: explaining what will happen during the operation, the different things that the surgeon can do, and how each will impact the results.

Surgeons have very limited time to give this explanation, so I'm confident we can use immersive technologies and visual communication to give relatives the information and reassurance they seek. If someone is presented with the option of having a surgical procedure but is unsure, why can't we use VR so that patients can be right there in the surgery? That experience could help them determine whether they actually want to go ahead with the operation or not. Could the immersive experience help someone get past the fear of having that operation?

11. What about VR and a visit to the GP?
We already have virtual visits over Skype, but what if we threw in haptics? You have the doctor and the patient wearing haptic data gloves, and in this virtual doctor's office, the patient can help the doctor feel exactly what they are feeling in terms of the location of the rash or pain - the exact SPOT.

Or maybe a cap for the head, for when the patient wants to explain about their headaches, being able to point to the exact spot where the pain is the greatest. A remote physical examination in the virtual world with haptics. 

Another scenario: when I get into my virtual environment, I have all the other data coming from my Apple Watch and other biosensors, with vital signs streaming in. My doctor could discuss this with me in the virtual room.

12. Which country/city in the world is leading innovation in immersive tech?
It depends upon the area. Some would assume it’s Silicon Valley. In my opinion, London is more advanced in VR/AR. Why? London is THE creative hub, and a lot of immersive tech is driven by creative industries.

The UK as a whole has a thriving creative sector, and the NHS could certainly benefit from greater cross-sector collaboration. We've worked in the past, for example, with Guy's and St Thomas'.

13. What would you advise people in healthcare who want to explore the world of immersive tech?
People can come and visit us and play with a variety of tools; it might not be exactly what they need, but it's a good experience. Inition's Demo Lab is a very safe and instructive "sandbox".

The Demo Lab

We can have conversations with people about these technologies; we know how to connect these things together. We're open to anyone internationally, and what drives us are projects that are going to improve people's wellbeing. What we can't do is large-scale research without getting partners involved. We can give you a lot of advice, and we can even create prototypes that can be validated through large-scale studies. We are open to conversations, whether you are a large pharmaceutical company, in charge of a medical school or even a GP in a small practice.

Adrian Leu & Inition are both on Twitter, and you can click here for the Inition website.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Developing a wearable biosensor: A doctor's story

For this post, I caught up with Dr Brennan Spiegel, to hear in more detail about his journey to get a wearable biosensor from concept to clinic. In the interview, we discuss how an idea for a sensor was borne out of an unmet clinical need, how the sensor was prototyped, tested, and subjected to clinical research, and how it was finally FDA approved in December of 2015. Throughout, we learn about the challenges of developing a wearable biosensor, the importance of working with patients, doctors, and nurses to get it right, and how to conduct rigorous research to justify regulatory approval of a device. The interview ends with seven suggestions from Dr. Spiegel for other inventors seeking to develop wearable biosensors.

1. What is AbStats?
AbStats is a wearable sensor that non-invasively measures your intestinal activity – it's like a gut speedometer. The sensor is disposable, about the size of a large coin, sticks on the external abdominal wall, and has a small microphone inside that dutifully listens to your bowel churn away as it digests food. A specialized computer analyzes the results and presents a value we call the "intestinal rate," which is like a new vital sign for the gut. We've all heard of the heart rate or respiratory rate; AbStats measures the intestinal rate. The sensor tells the patient and doctor how much the intestines are moving, measured in "events per minute." If the intestinal rate is very high, like 30 or 40 events per minute, then it means the gut is revved up and active. If it's very low, like 1 or 2 per minute, then it means the gut is asleep or, possibly, even dysfunctional depending on the clinical situation.
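To make the idea of an "intestinal rate" concrete, here is a minimal sketch in Python of how a rate in events per minute might be derived from a stream of timestamped acoustic events. This is purely illustrative and assumes a simple sliding-window count; it is not the actual AbStats algorithm.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AcousticEvent:
    timestamp_s: float  # seconds since the start of monitoring


def intestinal_rate(events: List[AcousticEvent], window_s: float = 60.0) -> float:
    """Return events per minute over the most recent window.

    Hypothetical calculation for illustration only: count the acoustic
    events detected in the last `window_s` seconds and scale to a
    per-minute rate, analogous to a heart rate or respiratory rate.
    """
    if not events:
        return 0.0
    window_end = events[-1].timestamp_s
    window_start = window_end - window_s
    recent = [e for e in events if e.timestamp_s >= window_start]
    return len(recent) * (60.0 / window_s)


# Example: 8 events detected within the last minute -> 8 events per minute
sample = [AcousticEvent(t) for t in (2, 9, 17, 25, 33, 41, 50, 58)]
print(intestinal_rate(sample))  # 8.0
```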

2. What existing problem(s) does it solve?
AbStats was specifically designed, from the start, to solve a real problem we face in the clinical trenches.

We focused first on patients undergoing surgery.  Almost everyone has at least temporary bowel paralysis after an operation.  When your body undergoes an operation, whether on your intestines or on your toe (or anywhere in-between), it's under a great deal of stress and tends to shut down non-vital systems.  The gastrointestinal (GI) tract is one of those systems – it can take a hit and shut down for a while.  Normally, the GI system wakes up quickly.  But in some cases the GI tract is slow to come back online.  This is a condition we call postoperative ileus, or POI, which occurs in up to 25% of patients undergoing abdominal surgeries.  

The issue is that it's hard to know when to confidently feed patients after surgery.  Surgeons are under great pressure by administrators to feed their patients quickly and discharge them as soon as possible. But feeding too soon can cause serious problems, from nausea and vomiting, to aspiration, pneumonia, or even death.  On the other hand, feeding too late can lead to infections, prolong length of stay, and cost money.  As a whole, POI costs the US healthcare system around $1.5 billion because of uncertainties about whether and when to feed patients.  It's a very practical and unglamorous problem – exactly the type of issue doctors, nurses, and patients care about. 

Now, you might ask how we currently decide when to feed patients. Here's the state of the art: we ask patients if they've farted or not. We literally ask them, practically all day long, "have you passed gas yet?" No joke. Or, we'll look at their belly and determine if it looks overly distended. We might use our stethoscope to listen to the bowels for 15 seconds at a time, and then make a call about whether to feed. It's nonsense. Data reveals that we do a bad job of determining whether someone is fit to eat. We blow it in both directions – sometimes we overcall, and sometimes we undercall. We figured, in this fantastical age of digital health, there had to be a better way than asking people about their flatus! So we invented AbStats.

3. What prompted you to embark upon this journey?
One day, about 4 years ago, I was watching Eric Topol give a TED talk about wearable biosensors and the "future of medicine." As I watched the video, I noticed that virtually every part of the human body had a corresponding wearable, from the heart, to the lungs, to the brain, and so forth. But, sitting there in the middle was this entire body cavity – the abdominal cavity – that had absolutely zero sensor solutions. As a gastroenterologist, I thought this must be an oversight. We have all manner of medieval devices to get inside the GI system, and I'm skilled at inserting those things to investigate GI problems. But typical procedures like colonoscopies, enteroscopies, capsule endoscopies, and motility catheters are all invasive, expensive, and carry risks. There had to be a way to non-invasively monitor the digestive engine. So, I thought, what do we have available to us as doctors? That's easy: bowel sounds. We listen to bowel sounds all the time with a stethoscope, but it's highly inefficient and inaccurate. It makes no sense to sit there with a stethoscope for 20 minutes at a time, much less even 1 whole minute. But the GI system is not like the heart, where we can make accurate diagnoses in short order, over seconds of listening. The GI system is slow, plodding, and somewhat erratic. We needed something that can stand guard, vigilantly, and literally detect signal in the noise. That's when AbStats was born. It was an idea in my head, and then, about 4 years later, became an FDA-approved device.

4. What was the journey like from initial idea to FDA approval? 
When I first invented AbStats, I wasn't thinking about FDA approval. I knew virtually nothing about FDA approval of biomedical devices. I just wanted the thing built, as fast as possible, and rigorously tested in patients. As a research scientist and professor of medicine and public health, this is all I know. I need to see proof – evidence – that something works. AbStats would be no different.

I was on staff at UCLA Medical Center when I first invented the idea for AbStats. I told our office of intellectual property about the idea, and they suggested I speak with Professor William Kaiser at the UCLA Wireless Health Institute.  So, I gave him a call.  

Dr. Kaiser got his start working for General Motors, where he contributed to inventing the automotive cruise control system.  Later, he went to work for the Jet Propulsion Laboratory, where he worked on the Mars Rover project.  Then, he came to UCLA and founded the Wireless Health Institute.  He is fond of saying that of all the things he's done in his career, from automotive research to spaceships, he believes the largest impact on humanity he's had is in the realm of digital health.  He is a real optimist.  

So, when I told Professor Kaiser about my idea for AbStats, he immediately got it.  He got to work on building the sensor and developed important innovations to enhance the system.  For example, he developed a clever way to ensure the device is attached to the body and not pulled off.  This is really important, because if AbStats reports that a patient's intestinal rate is zero, then it might mean severe POI, or it might mean the device fell off.  AbStats can tell the difference thanks to Professor Kaiser's engineering ingenuity.  

Once we developed a minimum viable product, we worked like crazy to test it in the clinics, write papers, and publish our work. At the same time, UCLA licensed the IP to a startup company, called GI Logic, that worked with our teams to submit the FDA documentation. Professor Kaiser's team did the heavy lifting on the engineering and safety side, and we focused on the clinical side. It was a great example of stem-to-stern teamwork, ranging from in-house engineering expertise, to clinical expertise, to regulatory expertise. It all came together very fast.

Importantly, it was my sister who came up with the name "AbStats."  I always remember to credit her with that part of the journey!

5. What role did patients play in the design of AbStats? 
Patients were critical to our design process.  We went through a series of form factors before settling on the current version of AbStats.  At first, the system resembled a belt with embedded sensors. Patients told us they hated the belt.  They explained that, after undergoing an abdominal surgery, the last thing they wanted was a belt on their abdomen.  We tweaked and tweaked, and eventually developed two small sensors that adhere to the abdomen with Tegaderm.  Even those are not perfect – it hurts to pull Tegaderm off of skin, for example.  And the sensors are high profile, so they are not entirely unobtrusive.  We're working on that, too.  But patient feedback was key and remains vital to our current and future success with AbStats.  

6. How did patients & physicians respond to AbStats during research & development?
It was gratifying that virtually every surgeon, nurse, and patient we spoke with about AbStats immediately "got it."  This is not a hard concept to sell.  Your bowels make sound.  The sound matters. And AbStats can listen to those sounds, make sense of them, and provide feedback to doctors and nurses to drive decisions.  The "so what" question was answered.  If your belly isn't moving, then we shouldn't feed you.  If it's moving a little, we should feed a little.  And if it's moving a lot, then we should feed a lot.  The surgeons called this the AbStats "stoplight", as in "red light," "yellow light," and "green light."  Each is mapped to a very specific action plan.  It's not complicated.  
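The "stoplight" described above maps naturally onto a simple decision rule. A minimal sketch of that idea is below; the numeric thresholds are invented purely for illustration (the interview only says rates of 1-2 are very low and 30-40 are very high), so treat them as placeholders rather than the clinically validated cut-offs.

```python
def feeding_stoplight(intestinal_rate_per_min: float) -> str:
    """Map an intestinal rate to a red / yellow / green feeding signal.

    Threshold values below are hypothetical placeholders, not the
    validated AbStats cut-offs.
    """
    LOW, HIGH = 5.0, 20.0  # assumed thresholds for illustration only
    if intestinal_rate_per_min < LOW:
        return "red: bowels quiet - hold feeding"
    if intestinal_rate_per_min < HIGH:
        return "yellow: some activity - feed a little"
    return "green: active bowels - advance the diet"


print(feeding_stoplight(2))   # red
print(feeding_stoplight(12))  # yellow
print(feeding_stoplight(35))  # green
```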

We were especially surprised by the engagement of nurses in this process.  Nurses are the heart and soul of patient care, especially in surgery.  Our nursing colleagues told us that feeding decisions come up in nearly every discussion with post-operative patients.  They said they have virtually no objective parameter to follow, and saw AbStats as a way to engage patients in ways they previously could not. This was surprising.  For example, the nurses pointed out that many patients are on narcotics for pain control, and that can slow their bowels even further. By having an objective parameter, the nurses can now use AbStats to make conversations more objective and actionable.  For example, they can show that every time a patient uses a dose of narcotics, it paralyzes the bowels further.  Knowing that, some patients might be willing to reduce their medications, if only by a little, to help expedite feeding decisions.  AbStats enables that conversation.  It's really gratifying to see how a device can alter the very process of care, to the point of impacting the nature of conversations between patients and their providers.  Almost uniformly, the patients in our trials felt the sensors provided value, and so did their nurses. 

7. Would you approach the problem differently if you had to do this again?
Not really.  Considering that in 4 years we invented a sensor, iteratively improved its form factor, conducted and published two peer-reviewed clinical trials, submitted an FDA application, and received clearance for the device, it's hard to second guess the approach.

8. What other problems would you like to solve with the use of wearable technology in the future?
AbStats has many other applications beyond POI.  We are currently studying its use in an expanding array of applications, including acute pancreatitis, bowel obstructions, irritable bowel syndrome, inflammatory bowel disease, obesity management, and so on.  There are more opportunities than there are hours in the day, so we're trying to remain strategic about how best to proceed.  Thankfully, we are well aligned with the startup, GI Logic, to move things forward.  I am also fortunate to be at Cedars-Sinai Medical Center, my home institution since moving from UCLA, where most of the clinical research on AbStats was conducted.  Cedars-Sinai has been extremely supportive of AbStats and our work in digital health.  We couldn't do our research without our medical center, patients, administrative support, and technology transfer office. I am immensely grateful to Cedars-Sinai.  

More generally, wearable technology and digital health still have a long way to go, in my opinion.  I've written about that before, here. AbStats is an example of a now FDA-approved sensor supported by peer-reviewed research.  I'd like to see a similar focus on other wearables.  There are good examples, like AliveCor for heart arrhythmias, and now Proteus, which is an "ingestible."  But, for many applications in healthcare, there is still too little data about how to use wearables.  

I believe that digital health, in general, is more of a social and behavioral science than a computer or engineering science.  Truth be told, most of the sensors are now trivial.  Our sensor is a small microphone in a plastic cap.  The real "secret sauce" is in the software, how the results are generated and visualized, how they are formed into predictive algorithms, and, most importantly, how those algorithms change behavior and decision making.  Finally, there is the issue of cost and value of care. There are so many hurdles to cross, one wonders whether many sensors will run the gauntlet. AbStats, for example, may be FDA approved, but that doesn't mean we're ready to save money using the device.  We need to prove that.  We need data.  FDA approval is a regulatory hurdle, but it doesn't guarantee a device will save lives, reduce costs, reduce disability, or anything close to it.  That only comes from hard-fought science.  

9. Are clinically proven medical applications of wearable technology likely to grow in years to come?
Almost certainly, although my caveats, above, indicate this may be slower and more deliberate than some are suggesting in the digital health echo chambers.

10. For those wishing to follow in your footsteps, what would your words of wisdom be?
First, start by addressing an unmet need. Clinical need should drive technology development, not the other way around.  

Second, if you're working on patient-facing devices, then I believe you should really have first-hand experience with literally putting those devices on patients. If you're not a healthcare provider, then you should at least visit the clinical trenches and watch what happens when sensors go on patients. What happens next can be unexpected and undermine your presuppositions, as I've written about here and here. I do not believe one can truly be a wearable expert without having literally worked with wearables. That's like a pharmacist who has never filled a prescription, or a cartographer who has never drawn a map. Digital health is, by definition, about healthcare. It's about patients, about their illness and disease, and about figuring out how to insert technology into a complex workflow. The clinical trenches are messy, gray, indistinct, dynamic, and emotional — injecting technology into that environment is exceptionally difficult and requires first-hand experience. Digital health is a hands-on science, so look to the clinical trenches to find the unmet needs, and start working on them, step by step, in direct partnership with patients and their providers.

Third, make sure your device provides actionable data.  Data should guide specific clinical decisions based on valid and reliable sensor indicators.  We're trying to do that with AbStats. 

Fourth, make sure your device provides timely data. Data should be delivered at the right time, right place, and with the right visualizations.  We spent days just trying to figure out how best to visualize the data from AbStats.  And I'm still not sure we've got it right.  This stuff takes so much work. 

Fifth, if you're making a device, make sure it's easy to use and has a favorable form factor. It should be simple to hook up the device, and it should be unobtrusive, non-invasive, with zero infection risk, comfortable, safe, and preferably disposable. We believe that AbStats meets those standards, although there is always more work to be done.

Sixth, the wearable must be evidence-based.  A valuable sensor should be able to replace or supplement gold standard metrics, when relevant, and be supported by well designed, properly powered clinical trials.  

Finally, and most importantly, the sensor should provide health economic value to health systems.  It should be cost-effective compared to usual care.  That is the tallest yet most important hurdle to cross.  We're working on that now with AbStats.  We think it can save money by shaving time off the hospital stay and reducing readmissions.  But we need to prove it.  

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


Unexpected findings

It's fascinating to meet people in healthcare and hear them dismiss the potential value of a tool like Twitter. Despite an increasing amount of noise, I do find it a great place to listen and learn. For me personally, it's been a very powerful tool, and has taken me to places I've never imagined. One of those places is Cedars-Sinai Medical Center in Los Angeles, California. By chance, I'd come across Dr Brennan Spiegel on Twitter earlier this year, and through our online interactions, discovered that we had common interests in Digital Health, especially in the context of understanding whether these new digital tools and services being developed are actually having an impact in healthcare.

Dr Spiegel is Director of Health Services Research at Cedars-Sinai Health System, Director of the Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), and Professor of Medicine and Public Health in Residence at UCLA. I was particularly intrigued by the work he does at CS-CORE, where he oversees a team that investigates how Digital Health technologies, including wearable biosensors, smartphone applications, and social media, can be used to strengthen the patient-doctor bond, improve outcomes, and save money. So whilst I was out in California, I popped into Cedars-Sinai Medical Center to spend some time with him and his team to understand their journey so far in Digital Health.

With Dr Spiegel and the CS-CORE team - the picture was taken remotely using Dr Spiegel's Apple watch!

To give you some context, Cedars-Sinai Medical Center is a non-profit with 958 beds, over 2,000 doctors and 10,000 employees. It's also ranked among the top 15 hospitals in the United States, and is ranked first in Los Angeles by U.S. News & World Report. In addition to Dr Spiegel, I met with Dr Christopher Almario, Garth Fuller, and Bibiana Martinez.

What follows is a summary of the Q&A that took place during my visit. 

1. What is the big vision for your team?
"The big vision is value of care. Value is our true north. It puts patients first while also reminding us to be judicious about the healthcare resources we use. Take Cedars-Sinai, a traditional volume based center of excellence. How do we transform our hospital, that has excelled in the fee-for-service healthcare environment for so long, and transform it into a value-based innovation center while maintain our top-notch quality of care? It seems like a magic trick to transform from volume to value in healthcare. How do we do it at scale, and how do we keep people out of hospitals when healthcare systems have  been designed to take people in? Our mission is to figure out how to do that. This could be a blueprint for how other health systems could do this and which doctors could do this. How do we align incentives? How do we create a Digital Health strategy that works within the existing clinical workflow? How might we use an E-coordination hub? These are all open questions ready for rigorous research. 

What does innovation mean at Cedars-Sinai? We see ourselves as a hub of innovation and are now developing a new 'Value Collaboratory' under the guidance of our visionary leader, Scott Weingarten, who directs Clinical Transformation at Cedars-Sinai. We offer a set of tools to help value-based innovators make a difference. We're going to be doing a lot over the next 5 years. Digital Health is just one small part of that. The Value Collaboratory will be the centre for ideas within Cedars. For example, if innovators seek internal funding for a project, then they can work with the collaboratory to refine their idea, evaluate its health economic potential, and create a formal case for its support."

2. Tell me more about the team, what types of people work in CS-CORE
"There are 12 of us in CS-CORE, and we have a combination of health system and statistical expertise. We have social scientists, behavioural scientists, mobile health experts and more. It's a multi-disciplinary team. For example, Dr Almario is a gastroenterologist, who has always been interested in health services research, and was awarded a career development award from the American College of Gastroenterology, which is very rare, in Digital Health to pursue research. Garth Fuller with a background in health policy and management has been working with us for the last 5 years and has a strong interest in medication adherence, and conducts research to understand how we can show that 'Beyond the Pill' strategies in the pharma industry are working. Bibiana Martinez with her background in Public Health is hands on, and works with our patients. Bibiana helps filter the real world barriers faced in Digital Health research and bring them back to our team. We have an all-hands-on-deck research crew."

3. What has surprised you during your research in Digital Health?
"We've had some unexpected findings. For example, we had a patient who reported less pain, and our original expectation was that the data from her wearable would report that she had been walking more, as the pain was subsiding. However, that wasn't the case, as her pain decreased, she was walking less. It turns out the patient was an author, and being free of pain meant she could sit for hours on end and finish writing her book. Completing the book was the outcome that mattered to the patient. What should we do when a patient's steps fall from 1,500 a day to almost 0? Do we give them a call, simply because we perceive it as unhealthy? How often does your doctor ask you what your goal is for your visit? I show these charts of pain vs steps when I teach my health analytics class at UCLA, to challenge how my students think."

4. How else have your assumptions about how patients use Digital Health tools been challenged?
"In healthcare, we often make a lot of assumptions about the needs and wants of patients. We have been fitting Virtual Reality goggles with hospital patients, so that we can transport them from their hospital bed to far away places such as Iceland. One patient asked if we could transport him somewhere more tropical, as the hospital is cold, and having a VR experience in Iceland made him feel even colder. 

We had an instance where a patient wasn't able to charge her Fitbit. We tried to explain over the phone, but it actually required a house visit in order for this patient to understand how to charge the device. We thought we could put sensors around the ankle joint of patients to measure steps, and some patients felt like they were under house arrest when wearing our sensor on their ankle."

5. What are some of the most exciting projects you're working on today?
"Well, we create our own technologies and sensors. We find out soon if our first sensor is approved by the FDA. Also, with the vision of our hospital Enteprise Information Services (EIS) team, our hospital's EHR is now connected to Apple's HealthKit, it's a great achievement, we now have 750 people pouring in real-time sensor data into our EPIC Electronic Health Record. We've also developed My GI Health, a patient provider portal which by gathering information on symptoms in advance of a visit to the doctor, helps us learn more about a patient's GI symptoms. The computer doesn't forget to ask questions, but sometimes the doctor forgets to ask questions. Although much of our research is in GI, we are working across healthcare. We are now building a version of My GI Health for rheumatology, for example. We are also interested in testing whether the first visit to a specialist doctor should be virtual or in person? What would patients & doctors actually want? We are putting a study design together now that will compare both types of visits."

6. What are some of the challenges you face in your research?
"The research we do is often challenging for the IRB because it’s so different.  We work closely with our IRB to explain the nature of our work. As more academic groups conduct Digital Health research, it will be important that medical centers develop regulatory expertise around this type of work.

There is also an urgency to test quickly, fail quickly and succeed quickly. What we need is a high level discussion to understand what risk means in the context of Digital Health research. Can we generate evidence faster?"

7. What are you doing to help ensure that no patient gets left behind in Digital Health?
"We are soon going to start a community-based study in partnership with African American churches in Los Angeles. We will work with these 'mega churches,' which have up to 10,000 congregants, and will distribute healthy living experiences delivered by Virtual Reality goggles using Google Cardboard.  We will also use an app for obesity and diabetes management. We observe that many families from minority backgrounds are mobile first, and we see that the next digital divide is opening up over mobile. Healthcare isn't built for mobile. We are also researching the mobile usability of hospital websites across America."

8. What message would you like to share with others also on the same journey as you?
"Listen to the patients, get used to Digital Health being dirty and difficult, it may be harder than you think. We can say that with some authority now, that it can sound easy, but in reality it's been very hard. Our team has developed devices and applied them directly to patients; what happens next is often unexpected and challenges our assumptions. Digital Health is really hard to do. We have to focus on the how of Digital Health. We understand why it's valuable, but not as much about how we will be doing it. Value is another big theme - we need to improve outcomes and reduce costs of care. It takes time to do it right. We also try to never forget the end user, both the physician and the patient. 

This work is 90% perspiration and 10% inspiration. You need to have a sense of humor to do this, because you're going to get a lot of unexpected bumps and failures. It's a team sport to figure it out. Defining the problem in terms of health outcomes and costs is the key, and generating a solution that has value to patients and providers is paramount.

Finally, the 'cool test' is so seductive. Don't be fooled by it in Digital Health: what may be cool to us may not be cool to the patient. Don't be seduced by the 'cool test' in healthcare."

I really enjoyed my time with Dr Spiegel and his team, not only because of the types of research they are doing, but also because of their vision, values and valour. Their unexpected findings after putting new devices on patients have subsequently made me think at length about health outcomes. I was reminded about the human factors in healthcare, and that both patients and doctors don't always do what we expect them to do. I'm glad CS-CORE are not just thinking from the perspective of medicine, but through the lens of public health too, and about how to ensure that no patient is left behind. I'm not the only one who admires their work. David Shaywitz has recently written a post about the research conducted by CS-CORE, and mentions, "they are the early adopters, the folks actually in the arena, figuring out how to use the new technology to improve the lives of patients."

Dr Spiegel did admit they've been under the radar so far, focusing on putting “one foot in front of the other” in research mode while working with a wide variety of partners from industry and academia. The team is also looking for collaborators who want to road test their digital health solutions in a “real world” laboratory of a large health system. Their team is equipped to conduct stem-to-stern evaluations with an eye to rigorous research and peer-reviewed publications. I see that Dr Spiegel is one of the speakers at the Connected Health Symposium later this week, as part of a panel discussion on Measuring Digital Health Impact & Outcomes. I won't be there but I hope to be part of the live Twitter discussion. 

Since my visit, I note that Cedars-Sinai and Techstars have partnered to launch a Digital Health focused accelerator. What does this accelerator aim to do? The website states, "We are looking for companies transforming health and healthcare. Companies that are creating hardware, software, devices and/or services that empower the patient or healthcare professional to better track, manage, and improve health and healthcare delivery are eligible to apply." Techstars is one of the world's most highly rated startup accelerator programmes, alongside Y Combinator. It's fascinating to see the marriage of two very different worlds, and who knows what unexpected findings will result from this partnership. In the 21st century, when we think of radically different models of care, startups and emerging technologies, large traditional hospital systems are not the first place we look. Maybe the lesson here for large healthcare institutions is to "disrupt or be disrupted?"

In the world of Digital Health, the trend of moving healthcare out of the hospital into the home, virtual visits and telemedicine may be causing concern to hospital executives. If all of these converging technologies (often coming from startups) really are effective and become widely adopted, then surely we will need smaller hospitals, or perhaps in certain scenarios, we may one day not need to have that many hospitals at all? Perhaps the hospitals that survive and thrive in the 21st century will be the ones that boldly explore the unknown in Digital Health, rather than the ones that hide and hope that the world of Digital Health will just be a passing fad? 

“It is the tension between creativity and skepticism that has produced the stunning and unexpected findings of science.” - Carl Sagan

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


All we need is more data, right?

Our problems in healthcare today, and those we will face tomorrow, will most likely be solved by opening up datasets, throwing them into the hands of software developers & entrepreneurs, and letting the magic unfold. That was the underlying premise in Washington, DC this week. I flew over from England, to attend the 5th Health Datapalooza.

I've wanted to attend for the last 2 years, but other things came up. I have heard many things mentioned about the event, but wanted to experience it for myself. I'm glad I did.

Catching up with the legendary, Dave deBronkart, also known as ePatientDave!

The event is aimed at improving US healthcare, but since America is often ahead of the world in health technology, I wanted to understand what they are doing. Compared to the often austere environments of conferences back in England, the datapalooza was full of glamour & glitz. 

2,000 people were in attendance, and kudos to the organisers for bringing all these people together. We were told that attendees had come from as far away as India & China.

I wonder what people from 'emerging markets' like India & China would think when visiting the most powerful nation on Earth for a conference, only to find the wifi at the venue didn't work terribly well?

The opening keynotes on Day 1 were given by Dr Elliott Fisher, Karen Ignagni, & Todd Park. I've been hearing great things about Todd Park for a while now, but had never heard him speak in person, until now. He spoke with such energy, vigour & passion that you got the feeling he genuinely wants change.

However, something bothered me. The program for the event states how we are taking an important step towards a patient-centred health system, powered by data. So, if the datapalooza was all about patients, why were there no patients on stage giving an opening keynote? We had keynotes from folks representing the medical profession, the US government, and the health insurance industry. From an attendee's perspective, I see this as incongruent. I'm not the only one who feels this way.

The UK's Secretary of State for Health, Jeremy Hunt also gave a keynote. Whilst I enjoyed most of his talk, I found it odd that when talking about why we should have greater transparency in healthcare, he had a slide with Joseph Stalin on it [Note: Stalin was a 20th century Soviet leader whose actions led to the deaths of millions]

Hearing Dr Atul Gawande speak was inspiring. He gets straight to the point, and shared his own practical examples. 

Once the keynotes were over, the rest of Day 1 had quite a few smaller sessions running concurrently. Covering business, clinical care, community, research & more, these looked like they could be very interesting. However, since 3 or 4 sessions were running concurrently each time, you were forced to pick only one. I often found myself frustrated, as I wanted to attend two different sessions held at the same time. I don't enjoy it when conference organisers try to squeeze too much content into one day.

One of the sessions I really enjoyed was "Citizen/Patient - The Great Data Debate". Much of what was discussed was who should have access to our data, and how the data we collect as patients would be integrated with the data the system holds on us. There seems to be much uncertainty regarding the flood of patient-generated data coming over the next few years. How will we ensure it's accurate? Who will own it? Who should develop standards: government or industry? Do we even need more data? How can we trust those that hold our data for us? An executive at the VA recently stated that "patient generated data is going to be the thing that really transforms healthcare".

I was not able to attend the keynotes on Day 2, but one of the best quotes I found on the Twitter stream was from Adriana Lukas, founder of Quantified Self London. 

I did manage to attend one of the last sessions on Day 2, "Introducing OpenFDA", a new initiative aimed at making it easier & faster to access public datasets from the FDA. They are starting off with all the adverse drug event reports from 2004-2013. It's still in beta, and the idea is to get entrepreneurs to build new tools & services using the OpenFDA API. Since I have worked in drug safety myself, I understand the potential value of new insights that may be gained by using these existing data in novel ways. Definitely worth keeping an eye on how OpenFDA develops.
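For anyone curious about what using the OpenFDA API might look like in practice, here is a minimal Python sketch querying the drug adverse event endpoint. The endpoint and field names reflect the public openFDA documentation as I understand it, but the search term and counts shown are just an example; check open.fda.gov before relying on them.

```python
import requests

# openFDA drug adverse event endpoint (public beta at the time of writing)
URL = "https://api.fda.gov/drug/event.json"

# Example query: count the most frequently reported reactions among
# adverse event reports that mention aspirin.
params = {
    "search": 'patient.drug.medicinalproduct:"aspirin"',
    "count": "patient.reaction.reactionmeddrapt.exact",
    "limit": 10,
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()

# Each result of a count query is a {"term": ..., "count": ...} pair
for result in response.json().get("results", []):
    print(result["term"], result["count"])
```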

More data, fewer problems?

I know from my own practical experience that data can be used to improve decision making. The smart use of data is only a good thing for everyone in health & social care. However, we often run before we can walk. I observe many healthcare organisations with "Big Data" on the strategic agenda. What's ironic is that these organisations often don't leverage the data they already have. I'll never forget a client I worked with many years ago. As a marketing manager, she wanted the agency I worked for to build her a brand new marketing database, complete with integrated predictive analytics (i.e. the ability to find those customers most likely to respond to a marketing campaign). I suspect she'd been influenced by a white paper she'd read.

I pushed back, and challenged her. I knew that she didn't even know the basics about her customers. At that moment, I believed all she and her team needed were a few basic charts in Excel. I convinced my management & the client not to sign off on the huge, expensive database project. I turned out to be right: one day's work analysing the existing database to produce 3 basic charts generated enough new insights to keep her & her team busy for a month. Less really can be more.

What would Abraham Lincoln say about the rights of patients to own their health data?

In the future, as more data about our health is collected, stored & shared, privacy & security will become even more important. Yet not one keynote at Health Datapalooza focused on privacy & security. How can we make informed choices, when our leaders are shouting about the benefits whilst being silent about the risks? It's healthy to consider the dark side of all these data being collected about our health.

Consider the keynote on Day 2, by Dr Francis Collins, Director of NIH who cited Global Alliance for Genomics & Health during his talk. Sharing our genomic and clinical data to help advance science and medicine, that's admirable, right? Let's dig beneath the surface.

As I wrote in a post last year, the global alliance met with Google, Microsoft & Amazon Web Services at the end of 2012. Read the 4th paragraph on Page 16 of their white paper that was published 12 months ago. In a healthcare system that's powered by data, ask yourself, who stands to gain the most from collecting, storing & sharing genomic data on each of us? 

The other thing I noticed during the event was the focus on data in healthcare, with little reference to social care. Oh wait, there was an app demo by a firm called Purple Binder, which uses web applications to help people find community health services. Brilliant, but not every American uses the internet or email. 

Pew Research Center's report from April 2014, looking at seniors, found that 41% do not use the internet at all, 53% do not have broadband access at home, and 23% do not use cell phones. In a thought-provoking blog post this week, Victor Wang reminds us that Dementia care costs 5 times more than Global Warming.

What do people living with Dementia need? The ability to download their own data or someone to care for them? Given our finite resources, what's a better use of our money? Building a new data platform or recruiting more nurses?

In the 21st century, do we want health & social care systems powered by data, or by people?

[Disclosure: I have no commercial ties with any of the individuals or companies mentioned above]


An app a day keeps the doctor away?

An app a day keeps the doctor away may very well be what our children hear as they grow up in the 21st century. During my research, I found that the familiar phrase, "An apple a day keeps the doctor away", may have originated 148 years ago in Wales, UK.

A Pembrokeshire proverb. Eat an apple on going to bed, And you'll keep the doctor from earning his bread.

Before I talk about apps replacing apples, I'd like to share some of the feedback that's been generated from my last blog post on tech making doctors unemployed. It's triggered a healthy debate within & outside the medical profession. I'm not sure doctors like me anymore! 

I've had docs email me saying stop pushing this kind of talk, I need to put my kids through college. Some of the younger doctors have responded positively, understanding that they might benefit by having digital skills as a doctor. Many older docs seem to be terrified, and some docs of all ages seem to be responding to the threat with an attitude of "Bring it on!"

All of this has really made me think deeply about the choices we face in society in this increasingly automated world. A visit to a London supermarket this week compelled me to ask this question. 

Whilst some doctors may be outraged that I have the audacity to even challenge the notion that their work cannot be automated by machines, there are deeper questions facing ALL of us in society. Take this recent Guardian article, which has the headline, "When robots take our jobs, humans will be the new 1%. Here's how to fight back."

Even much of the work I've done for the past 20 years, in the realm of data analytics, is being handled by machines and software now. In fact, as a Futurist, I may be joining the doctors at the unemployment office in 2025, given that robots are now writing news stories, and some believe that 90% of the news could be written by computers by 2030.

Is the future that we're heading towards really the future we desire? If it isn't the future we desire, whose responsibility is it to intervene? Should governments create policies that encourage institutions to retain human workers, even when the human is more expensive than the machine? Should the CEO of a corporation also wear the hat of Chief Ethics Officer?

Will getting an app on prescription become the norm?

Many people, including patients in rich countries, may roll their eyes at using their mobile phone for healthcare, but patients in low and middle income countries have been using mobile phones in healthcare for several years, frequently using text messages on more basic phones, not apps on smartphones.

In fact, Africa is home to the largest number of mHealth projects in the world. A list with examples of projects can be found here. During 2014, patients in the US will be able to download the world's first doctor-prescribed app, Bluestar, to help them manage Type 2 Diabetes. This is a massive step - could it be a signal of times to come?

Well, a recent poll of physicians in the US revealed that "37% have no idea what apps are out there."

According to research conducted by Digitas Health in 2013, 90% of chronic patients in the US would accept a mobile app prescription from their doctor. Do you know what proportion of those patients said they would accept a prescription of medication? Just 66%!

So, this is the future, right? Well, doctors have a right to be wary of apps. In a previous blog post, I mentioned how a certification program for health apps allowed an app with flaws relating to data protection to be certified. We are heading into uncharted waters, and mistakes are to be expected. Looking beyond the hyperbole, the key question for me (and the regulators) is: do the benefits outweigh the risks?

Source: Pew Internet Research Project

The conclusions of the first ever cross-stakeholder pan-European seminar on health apps, and how patients, policy-makers, healthcare professionals and industry see the future, were recently published in a white paper. What I find encouraging in the paper is that the EU has made it clear that it does NOT want to discourage the burgeoning market for health apps by producing excessive red tape.

As Digital Health becomes more prevalent, the scenario of doctors weighing up every day whether to prescribe an app or a medication to a patient is entirely possible in just a few years. However, as this recent paper in JAMA remarks, we will need an unbiased review & certification process for health apps if this is to happen.

Exciting stuff, but I can't help but also wonder exactly how much of an impact prescribing apps will really make on healthcare, given that just 18% of Americans aged over 65 own a smartphone. That figure drops to 8% for over-65s with an annual household income of $30,000 or less!

Should we be asking innovators to focus their energy on technologies that solve the problems of the biggest users of healthcare, those aged over 65? Will many basic problems in healthcare remain unresolved, as the 'worried well' develop amazing technology, to be used primarily by the 'worried well'?

What role will community pharmacies play in public health if prescribing of apps takes off and fewer people actually walk into a physical pharmacy? Will apps cause pharmacists to also become unemployed in the long term? 

What is the impact on the future of the pharmaceutical industry which is not just slower than other sectors to adapt, but also employs considerable numbers of people around the globe? IMS Health, the world's largest health data broker, has launched AppScript, a platform that offers doctors easy, secure and evidence-based app prescribing.

What about absurdly simple problems, such as being prescribed an app when your smartphone's battery barely lasts the whole day, and could die just as you really need to use the app to manage your condition? A tablet doesn't need a power source.

What about the impact on our eyes? Opticians have recently warned that overuse of smartphones may damage your eyes.

What's the impact on the fabric of our society if, in the future, we can be both diagnosed & treated from the comfort of our own home, just using our smartphone combined with an app & a tricorder?

Scanadu Scout

Not long to wait to answer that question! The combination of the long-awaited Scanadu Scout and their app on Monday may indeed make the phrase, "an app a day keeps the doctor away", part of our everyday vocabulary. The latest blog post from Scanadu mentions "placing it over the forehead to take a composite, multi-parameter biometric signature that pulls in several vital signs in seconds: diastolic and systolic blood pressure, body temperature (core temperature is coming in a couple of weeks), SPO2 (blood oxygenation), and heart rate."

I should be getting my hands on a unit soon, and look forward to sharing my feedback with you!

One more thing, what if the apps in our cars in the future 'prescribed' us a different route home to improve our health? Given Apple's development of CarPlay, I mocked up a possible scenario of the world we could be heading towards. The question again - is this a desirable world?

Asking Siri to navigate home may never be the same again.

My next talk - Boston!

I'm going to be passing through Boston, MA in 2 weeks' time. It's last minute, but I'm hoping to be able to give a talk there on whether tech will make doctors unemployed, and also to share some of my ideas & thoughts on how the medical profession could adapt to this rapidly changing world of Digital Health. As soon as it's confirmed, I'll share the details on Twitter. If, whilst I'm in Boston, your organisation wishes to book me as a speaker, please see my Public Speaking page.

[Disclosure: I have no commercial ties with any of the companies mentioned above]
