An interview with Jo Aggarwal: Building a safe chatbot for mental health

We can now converse with machines in the form of chatbots. Some of you might have used a chatbot when visiting a company’s website. They have even entered the world of healthcare: the pharmaceutical company Lupin has rolled out Anya, India’s first chatbot for disease awareness, in this case for diabetes. Chatbots have recently been developed for mental health too, such as Woebot, Wysa and Youper. It’s an interesting concept, and given the unmet need around the world, these could be an additional tool that helps make a difference in someone’s life. However, a recent BBC article highlighted how two of the most well known chatbots (Woebot and Wysa) don’t always perform well when children use the service. I’ve done my own real world testing of these chatbots in the past, and gotten to know the people who created these products. After the BBC article was published, Jo Aggarwal, CEO and co-founder of Touchkin, the company behind Wysa, got back in touch with me to discuss trust and safety when using chatbots. It was such an insightful conversation that I offered to interview her for this blog post, as I think the story of how a chatbot for mental health is developed, deployed and maintained is a complex and fascinating one.

1. How safe is Wysa, from your perspective?
Given all the attention this topic receives, and its importance to us, I think it is really important to first understand what we mean by safety. For us, Wysa being safe means having comfort around three questions. First, is it doing what it is designed to do, well enough, for the audience it’s been designed for? Second, how have users been involved in Wysa’s design, and how are their interests safeguarded? And third, how do we identify and handle ‘edge cases’ where Wysa might need to serve a user even though it’s not meant to be used that way?

Let’s start with the first question. Wysa is an interactive journal, focused on emotional wellbeing, that lets people talk about their mood and talk through their worries or negative thoughts. It has been designed and tested for a 13+ audience; for instance, its terms and conditions require users under 18 to obtain parental consent. It cannot, and should not, be used for crisis support, or by children under 12 years old. This distinction is important, because it directs product design in terms of the choice of content as well as the kind of things Wysa listens for. For its intended audience and expected use in self-help, Wysa provides an interactive experience that is far superior to current alternatives: worksheets, writing in journals, or reading educational material. We’re also gradually building an evidence base on how well it works, through independent research.

The answer to the second question needs a bit more description of how Wysa is actually built. Here, we follow a user-centred design process that is underpinned by a strong, recognised clinical safety standard.

When we launched Wysa, it was for a 13+ audience, and we tested it with an adolescent user group as a co-design effort. For each new pathway and every model added to Wysa, we continue to test safety against a defined risk matrix developed as part of our clinical safety process. This is aligned to the DCB 0129 and DCB 0160 standards of clinical safety, which are recommended for use by NHS Digital.

As a result of this process, we developed some pretty stringent safety-related design and testing steps during product design:

When a Wysa conversation or tool concept is first written, the script is reviewed by a clinician to identify safety issues - specifically, any point at which it could be contra-indicated or act as a trigger - and to define alternative pathways for such conditions.

When a development version of a new Wysa conversation is produced, the clinicians review it again, specifically for adherence to clinical process and for potential safety issues as per our risk matrix.

Each aspect of the risk matrix has test cases. For instance, if the risk is that using Wysa may increase the risk of self-harm in a person, we run two test cases: one where a person intends self-harm but their statements have not been detected as such (normal statements), and one where self-harm statements detected in the past are run through the Wysa conversation, at every Wysa node or ‘question id’. This is typically done on a training set of a few thousand user statements. A team then tags each response for appropriateness. A 90% appropriateness level is considered adequate for the next step of review.
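
To make the mechanics of this concrete, here is a minimal sketch of such an evaluation loop. All the names (run_conversation, tag_response and so on) are hypothetical stand-ins, not Wysa’s actual code, and the human tagging step is modelled as a callable.

```python
# Illustrative sketch of the safety evaluation loop described above.
# run_conversation and tag_response are hypothetical stand-ins, not
# Wysa's actual code; human tagging is modelled as a callable.

APPROPRIATENESS_THRESHOLD = 0.90  # the 90% bar mentioned above

def evaluate_node(node_id, test_statements, run_conversation, tag_response):
    """Run tagged user statements through one conversation node and measure
    how often the bot's reply is judged appropriate by the tagging team."""
    appropriate = 0
    flagged = []  # inappropriate responses, sent on for safety review
    for statement in test_statements:
        response = run_conversation(node_id, statement)
        if tag_response(statement, response):  # True if tagged appropriate
            appropriate += 1
        else:
            flagged.append((statement, response))
    rate = appropriate / len(test_statements)
    # Falling below the threshold, or any flagged response that could
    # increase the risk of harmful behaviour, sends the pathway back
    # for redesign and a repeat of the whole process.
    return rate >= APPROPRIATENESS_THRESHOLD, rate, flagged
```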

The inappropriate statements (typically less than 10%) are then reviewed for safety, where the question asked is - will this inappropriate statement increase the risk of the user indulging in harmful behavior? If there is even one such case, the Wysa conversation pathway is redesigned to prevent this and the process is repeated.

The output of this process is shared with a psychologist and any contentious issues are escalated to our Clinical Safety Officer.

Equally important for safety, of course, is the third question. How do we handle ‘out of scope’ user input, for example, if the user talks about suicidal thoughts, self-harm, or abuse? What can we do if Wysa isn’t able to catch this well enough?

To deal with this question, we did a lot of work to extend the scope of Wysa so that it does listen for self-harm and suicidal thoughts, as well as abuse in general. On recognising this kind of input, Wysa gives an empathetic response, clarifies that it is a bot and unable to deal with such serious situations, and signposts to external helplines. It’s important to note that this is not Wysa’s core purpose - and it will probably never be able to detect all crisis situations 100% of the time - neither can Siri or Google Assistant or any other Artificial Intelligence (AI) solution. That doesn’t make these solutions unsafe, for their expected use. But even here, our clinical safety standard means that even if the technology fails, we need to ensure it does not cause harm - or in our case, increase the risk of harmful behavior. Hence, all Wysa’s statements and content modules are tested against safety cases to ensure that they do not increase the risk of harmful behavior even if the AI fails.
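
As an illustration of this listen-and-signpost behaviour, here is a hedged sketch. The detect_crisis and normal_flow callables are placeholders for Wysa’s actual (non-public) classifiers and conversation engine, and the helpline entry is illustrative, not Wysa’s real list.

```python
# Hedged sketch of the listen-and-signpost behaviour described above.
# detect_crisis and normal_flow stand in for Wysa's actual (non-public)
# classifiers and conversation engine.

HELPLINES = ["Samaritans (UK): 116 123"]  # illustrative entry only

def respond(user_text, detect_crisis, normal_flow):
    if detect_crisis(user_text):  # self-harm, suicidal thoughts, abuse
        return ("That sounds really difficult, and it took courage to share. "
                "I'm a bot, and I'm not able to help with something this "
                "serious. Please consider reaching out to: "
                + "; ".join(HELPLINES))
    return normal_flow(user_text)  # the ordinary self-help conversation
```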

We watch this very closely, and add content or listening models where we feel coverage is not enough and Wysa needs to extend. This was specifically the case with the BBC article: we will now relax our stance that we will never take personally identifiable data from users, explicitly listen (and check) for age, and if a user is under 12, direct them out of Wysa towards specialist services.

So how safe is Wysa? It is safe within its expected use, and the design process follows a defined safety standard to minimize risk on an ongoing basis. In case more serious issues are identified, Wysa directs users to more appropriate services - and makes sure at the very least it does not increase the risk of harmful behaviour.

2. In plain English, what can Wysa do today and what can’t it do?
Wysa is a journal married to a self-help workbook, with a conversational interface. It is a more user friendly version of a worksheet - asking mostly the same questions with added models to provide different paths if, for instance, a person is anxious about exams or grieving for a dog that died.

It is an easy way to learn and practice self-help techniques - to vent and observe your thoughts, practice gratitude or mindfulness, learn to accept your emotions as valid, and find the positive intent in even the most negative thoughts.

Wysa doesn’t always understand context - it will certainly not pass the Turing test for ‘appearing to be completely human’. That is not its intended purpose, and we’re careful to tell users that they’re talking to a bot (or, as they often tell us, a penguin).

Secondly, Wysa is definitely not intended for crisis support. A small percentage of people do talk to Wysa about self-harm or suicidal thoughts; these users are given an empathetic response and directed to helplines.

Beyond self-harm, detecting sexual and physical abuse statements is a hard AI problem - there are no models globally that do this well. For instance, ‘My boyfriend hurts me’ may be emotional, physical, or sexual. Also, most abuse statements that people share with Wysa tend to be about the past: ‘I was abused when I was 12’ needs a very different response from ‘I was abused and I am 12’. Our response here is currently to acknowledge the courage it takes to share something like this, ask users if they are in crisis, and if so, explain that as a bot Wysa is not suited to a crisis and offer a list of helplines.
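
To show why this is hard, here is a deliberately naive illustration of the tense and age distinction just described. This is toy regex logic, nothing like a production model; real systems need far richer context than pattern matching can capture.

```python
import re

# A deliberately naive heuristic for the tense/age distinction above.
# Toy logic to show why the problem is hard, not a production model.

def classify_abuse_statement(text):
    present_minor = re.search(r"\babused\b.*\bi am (\d+)\b", text, re.I)
    past = re.search(r"\babused\b.*\bwhen i was (\d+)\b", text, re.I)
    if present_minor and int(present_minor.group(1)) < 18:
        return "possible ongoing risk to a minor: crisis check + helplines"
    if past:
        return "past disclosure: acknowledge courage, offer support"
    return "ambiguous: ask a gentle clarifying question"

print(classify_abuse_statement("I was abused when I was 12"))  # past
print(classify_abuse_statement("I was abused and I am 12"))    # ongoing risk
```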

3. Has Wysa been developed specifically for children? How have children been involved in the development of the product?
No, Wysa hasn’t been developed specifically for children.

However, as I mentioned earlier, we have co-designed with a range of users, including adolescents.

4. What exactly have you done when you’ve designed Wysa with users?
For us, the biggest risk was that someone’s data might be leaked and therefore cause them harm. To deal with this, we took the hard decision of not taking any personally identifiable data at all from users, which is also why they started trusting Wysa. This meant that we had to compromise on certain parts of the product design, but we felt it was a tradeoff well worth making.

After launch, for the first few months, Wysa was an invite-only app, where a number of these features were tested first from a safety perspective. For example, SOS detection and pathways to helplines were a part of the first release of Wysa, which our clinical team saw as a prerequisite for launch.

Since then, design continues to be led by users. For the first million conversations, Wysa stayed a beta product, as we didn’t have enough of a response base to test new pathways. There is no one ‘launch’ of Wysa - it is continuously being developed and improved based on what people talk to it about. For instance, the initial version of Wysa did not handle abuse (physical or sexual) at all, as we did not expect that people would talk to it about these things. When they began to, we created pathways to deal with these in consultation with experts.

An example of a co-design initiative with adolescents was a study with Safe Lab at Columbia University to understand how at-risk youth would interact with Wysa and the different nuances of language used by these youth.

5. Can a user of Wysa really trust it in a crisis? What happens when Wysa makes a mistake and doesn’t provide an appropriate response?
People should not use Wysa in a crisis - it is not intended for this purpose. We keep reinforcing this message across various channels: on the website, the app descriptions on Google Play or the iTunes App Store, even responses to user reviews or on Twitter.

However, anyone who receives information about a crisis has a responsibility to do the most that they can to signpost the user to those who can help. Most of the time, Wysa will do this appropriately - we measure how well each month, and keep working to improve this. The important thing is that Wysa should not make things worse even when it misdetects; users should not be made unsafe, i.e. we should not increase the risk of harmful behaviour.

One of the things we are adding, based on suggestions from clinicians, is a direct SOS button to helplines, so users have another path when they recognise they are in crisis, and the dependency on Wysa to recognise a crisis in conversation is lower. This is being co-designed with adolescents and clinicians to ensure that the button is visible, but that its presence does not act as a trigger.

For inappropriate responses, we constantly improve, and when a user shares that Wysa’s response was wrong, we respond in a way that places the onus entirely on Wysa. If a user objects to a path Wysa is taking, saying this is not helpful or this is making me feel worse, Wysa immediately changes the path, emphasises that the mistake is Wysa’s and not the user’s, and explains that it is a bot that is still learning. We closely track where and when this happens, and any responses that meet our criteria for a safety hazard are immediately raised to our clinical safety process, which includes review with children’s mental health professionals.

We constantly strive to improve our detection, and are also starting to collaborate with other people dealing with similar issues and create a common pool of resources.

6. I understand that Wysa uses AI. I also note that there are so many discussions around the world relating to trust (or lack of it) in products and services that use AI. A user wants to trust a product, and if it’s health related, then trust becomes even more critical. What have you done as a company to ensure that Wysa (and the AI behind the scenes) can be trusted?
You’re so right about all the discussions about AI, how data is used, and how it can be misused. We explicitly tell users that their chats stay private (not just anonymous) and that they will never be shared with third parties. In line with GDPR, we also give users the right to ask for their data to be deleted.

After downloading, there is no sign-in. We don’t collect any personally identifiable data about the user: you just give yourself a nickname and start chatting with Wysa. The first conversation reinforces this message, and this really helps in building trust as well as engagement.

AI of the generative variety will not be ready for products like Wysa for a long time - perhaps never. Generative models have in the past turned racist or worse. The use of AI in applications like Wysa is limited to detection and classification of user free text, not generating ‘advice’. So the AI here is auditable, testable, quantifiable - not something that may suddenly learn to go rogue. We feel that trust is based on honesty, so we do our best to be honest about the technical limitations of Wysa.
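
As a sketch of this ‘classify, don’t generate’ design: the classifier’s output space maps onto a fixed, reviewed set of responses, so every possible bot utterance can be audited in advance. The intent names, responses and intent_model interface below are all invented for illustration.

```python
# Sketch of a 'classify, don't generate' design, under the assumption that
# every possible bot utterance comes from a fixed, clinician-reviewed set.
# Intent names, responses, and the intent_model interface are invented.

REVIEWED_RESPONSES = {  # every value here would have passed clinical review
    "gratitude_exercise": "Let's try noticing one thing you're grateful for.",
    "negative_thought": "Would you like to look at that thought together?",
    "fallback": "I'm not sure I understood. Could you say that another way?",
}

def reply(user_text, intent_model):
    intent = intent_model.predict(user_text)  # classification only, no generation
    # The bot can only ever say something from the reviewed set, so the
    # complete output space is auditable and testable in advance.
    return REVIEWED_RESPONSES.get(intent, REVIEWED_RESPONSES["fallback"])
```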

Every Wysa response and question goes through a clinical safety process, and is designed and reviewed by a clinical psychologist. For example, we place links to journal articles in each tool and technique that we share with the user.

7. What could you and your peers who make products like this do to foster greater trust in these products?
As a field, the use of conversational AI agents in mental health is very new, and growing fast. There is great concern around privacy, so anonymity and security of data is key.

After that, it is important to conduct rigorous independent trials of the product and share data openly. A peer-reviewed mixed-methods study of Wysa’s efficacy has recently been published in JMIR for this reason, and we are working with universities to develop these further. It’s important that advancements in this field are science-driven.

Lastly, we need to be very transparent about the limitations of these products - clear on what they can and cannot do. These products are not a replacement for professional mental health support - they are more of a gym, where people learn and practice proven, effective techniques to cope with distress.

8. What could regulators do to foster an environment where we as users feel reassured that these chatbots are going to work as we expect them to?
Leading on from your question above, there is a big opportunity to come together and share standards, tools, models and resources.

For example, if a user enters a search term around suicide in Google, or posts about self-harm on Instagram, maybe we can have a common library of Natural Language Processing (NLP) models to recognise these situations and provide an appropriate response?
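
Purely as a thought experiment, such a shared resource might expose an interface along these lines. Everything here is hypothetical - no such package exists yet - and the keyword matching is a placeholder for real, properly validated models.

```python
from dataclasses import dataclass

# Entirely hypothetical: what a shared, open crisis-detection library might
# expose. The keyword matching below is a placeholder for validated models.

@dataclass
class CrisisResult:
    is_crisis: bool
    category: str    # e.g. "self_harm", "suicidal_ideation", "abuse"
    confidence: float

_PLACEHOLDER_TERMS = {
    "self_harm": ["hurt myself", "cut myself"],
    "suicidal_ideation": ["end my life", "kill myself"],
}

def check(text: str) -> CrisisResult:
    """One function any chatbot, search box, or social platform could call
    before deciding how to route a user."""
    lowered = text.lower()
    for category, terms in _PLACEHOLDER_TERMS.items():
        if any(term in lowered for term in terms):
            return CrisisResult(True, category, 0.5)
    return CrisisResult(False, "none", 0.0)
```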

Going further, maybe we can provide this as an open-source resource to anyone building a chatbot that children might use? Could this be a public project, funded and sponsored by government agencies, or a regulator?

In addition, there are several other roles a regulator could play. They could fund research that proves efficacy, define standards and outline the proof required (the recently released NICE guidelines are a great example), or even create a regulatory sandbox where technology providers, health institutions and public agencies come together and experiment before coming to a view.

9. Your website mentions that “Wysa is... your 4 am friend, for when you have no one to talk to” – shouldn’t we be working in society to provide more human support for people who have no one to talk to? Surely everyone would prefer to deal with a human rather than a bot? Is there really a need for something like Wysa?
We believed the same to be true. Wysa was not born of a hypothesis that a bot could help - it was an accidental discovery.

We started our work in mental health simply to detect depression through AI and connect people to therapy. We did a trial in semi-rural India, and were able to use the way a person’s phone moved about to detect depression with 90% accuracy. To get the sensor data from the phone, we needed an app, which we built as a simple mood-logging chatbot.
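
For readers curious about what ‘detecting depression from the way a phone moves about’ might involve, here is an illustrative reconstruction of the idea, not Touchkin’s actual model: derive coarse mobility features from phone sensor logs, then hand them to a screening classifier. The feature choices and numbers are invented.

```python
import statistics

# Illustrative reconstruction of the idea, not Touchkin's actual model:
# derive coarse mobility features from phone sensor logs, then hand them
# to a screening classifier. Feature choices and numbers are invented.

def mobility_features(daily_movement_minutes, daily_places_visited):
    """Two toy signals: how much, and how variably, someone moves."""
    return {
        "mean_movement": statistics.mean(daily_movement_minutes),
        "movement_variability": statistics.pstdev(daily_movement_minutes),
        "mean_places_visited": statistics.mean(daily_places_visited),
    }

week = mobility_features([12, 8, 15, 5, 9, 11, 7], [1, 2, 1, 1, 2, 1, 1])
# A trained classifier (logistic regression, say) would map features like
# these to a depression-risk score; the trial above reports ~90% accuracy.
print(week)
```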

Three months in, we checked on the progress of the 30 people we had identified with moderate to severe depression and whose doctor had prescribed therapy. It turned out that only one of them took up therapy. The rest were okay with being prescribed antidepressants but, for different reasons ranging from access to stigma, did not take therapy. All of them, however, continued to use the chatbot, and months later reported feeling better.

This was the genesis of Wysa - we didn’t want to be the reason for a spurt in antidepressant sales, so we bid farewell to the cool AI tech we had been working on. We began to realise that it didn’t matter whether people were clinically depressed - everyone has stressors, and we all need to develop our mental health skills.

Wysa has had 40 million conversations with about 800,000 people so far - growing entirely through word of mouth. We have understood some things about human support along the way.

For users ready to talk to another person about their inner experience, there is nothing as useful as a compassionate ear and the ability to share without being judged. Human interactions, however, can seem fraught with opinions and judgements. When we struggle emotionally, it affects our self-image - for some people, it is easier to talk to an anonymous AI interface, which is a kind of extension of ourselves, than to another person. For example, one study found that US veterans were three times as likely to reveal their PTSD to a bot as to a human. Still, human support is key - so we run weekly Ask Me Anything (AMA) sessions with a mental health professional, on the top topics that Wysa users propose. In a recent AMA, over 500 teenagers shared their concerns about discussing their mental health issues or sexuality with their parents. Even within Wysa, we encourage users to create a support system outside.

Still, the most frequent user story for Wysa is someone troubled with worries or negative thoughts at 4 am, unable to sleep, not wanting to wake someone up, scrolling social media compulsively and feeling worse. People share how they now talk to Wysa to break the negative cycle and use the sleep meditations to drift off. That is why we call it your 4 am friend.

10. Do you think there is enough room in the market for multiple chatbots in mental health?
I think there is a need for multiple conversational interfaces, with different styles and content. We have only just scratched the surface. Some of the issues we are grappling with today are like the issues people grappled with in the early days of ecommerce - each company solving for ‘hygiene factors’ and safety through their own algorithms. I think over time many of the AI models will become standardised, and bots will work for different use cases - from building emotional resilience skills to targeted support for substance abuse.

11. How do you see the future of support for mental health in terms of technology - not just products like Wysa, but generally? What might the future look like in 2030?
The first thing that comes to mind is that we will need to turn the tide on the damage technology has done to mental health. I think there will be a backlash against addictive technologies; I am seeing the tech giants becoming conscious of the mental health impact of making their products addictive, and facing pressure to change.

I hope that by 2030, safeguarding mental health will have become a part of the design ethos of a product, much as accessibility and privacy have in the last 15 years. By 2030, human computer interfaces will look very different, and voice and language barriers will be fewer.

Whenever there is a trend, there is also a counter-trend. So while technology will play a central role in creating large-scale early mental health support - especially crossing stigma, language and literacy barriers in countries like India and China - we will also see social prescribing gain ground. Walks in the park or art circles will become prescriptions for better mental health, and people will have to be prescribed non-tech activities because so much of their lives are on their devices.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


Honesty is the best medicine

In this post, I want to talk about lies. It’s ironic that I’m writing this on the day of the US midterm election where the truth continues to be a rare sight to witness. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice, lie or lose. Deception, deceit, lying, however you want to describe it, it’s part of what makes us human. I reckon we’ve all told a lie at some point, even if we’ve told a ‘white lie’ to avoid hurting someone’s feelings. Now, some of us are better at spotting when others are not telling the truth. Some of us prefer to build a culture of trust. What if we had a new superpower? A future where machines tell us in real time who is lying.

What compelled me to write this post was reading a news article about a new trial in the EU of virtual border agents powered by Artificial Intelligence (AI), which aims to “ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars.” I was fascinated to read statements about the new system such as “IBORDERCTRL’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.” Apparently, the system can analyse micro-expressions on your face and include that information as part of a risk score, which will then be used to determine what happens next. At this point in time, it’s not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right, if we can use machines to help keep borders secure? However, the accuracy rate of the system isn’t that great, and some are labelling this type of system as pseudoscience that will lead to unfair outcomes. It’s essential we all pay attention to these developments, and subject them to close scrutiny.

What if machines could one day automatically detect if someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, where the abstract of their paper opens with “We present a system for covert automated deception detection in real-life courtroom trial videos.“ As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. where someone has submitted a report to the police claiming they have been robbed, but the tool can find patterns that indicate the report is fraudulent. Apparently, the tool has a success rate of over 80%. I also came across a British startup, Human, that states on their website, “We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias”, and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In a 2014 survey in the USA, the patient deception report stated that 50% of respondents withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc’s 2015 survey found that 25% of patients lie to their doctor. There was also an interesting report about why some patients don't adhere to their doctor's advice: financial strain, which some low-income patients are reluctant to discuss with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? As for doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I’ve read, lying appears to be a path some have to take when caring for those living with Dementia, to protect the patient.


Imagine you have a video call with your doctor, and on the other side, the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you’re lying, but your emotional state too. That’s what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would it record that determination against the patient’s medical record? What if the AI is wrong? Given that the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor’s office? In some countries, will we see CCTV in the doctor’s office with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, which tell the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don’t want to tell the truth but the machines around us that are tracking everything reveal the truth for us? I share this satirical video below of Amazon Alexa fitted to a car, do watch it. Whilst it might be funny, there are potential challenges ahead in terms of our human rights and civil liberties in this new era. Is AI powered lie detection the path towards ensuring we have a society with enough transparency and integrity or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


AI in healthcare: Involving the public in the conversation

As we begin the 21st century, we are in an era of unprecedented innovation, where computers are becoming smarter and being used to deliver products and services powered by Artificial Intelligence (AI). I was fascinated by how AI is being used in advertising when I saw a TV advert this week from Microsoft in which a musician talks about the benefits of AI. Organisations in every sector, including healthcare, are having to think about how they can harness the power of AI. I wrote a lot about my own experiences in 2017 using AI products for health in my last blog, You can’t care for patients, you’re not human!

Now when we think of AI in healthcare potentially replacing some of the tasks done by doctors, we think of it as a relatively recent concept. We forget that doctors themselves have been experimenting with technology for a long time. In this video from 1974 (44 years ago!), computers were being tested in the UK with patients to help optimise the time spent by the doctor during the consultation. What I find really interesting is that in the video, it’s mentioned that the computer never gets tired and some patients prefer dealing with the machine than the human doctor.

Fast forward to 2018, where it feels like technology is opening up new possibilities every day, often from organisations that are not traditionally part of the healthcare system. We think of tech giants like Google and Facebook as helping us send emails or share photos with our friends, but researchers at Google are working on using AI to improve detection of breast cancer, and Facebook has rolled out an AI powered tool to automatically detect if a user’s post shows signs of suicidal ideation.

What about going to the doctor? I remember growing up in the UK that my family doctor would even come and visit me at home when I was not well. Those are simply memories for me, as it feels increasingly difficult to get an appointment to see the doctor in their office, let alone get a housecall. Given many of us are using modern technology to do our banking and shopping online, without having to travel to a store or a bank and deal with a human being, what if that were possible in healthcare? Can we automate part (or even all) of the tasks done by human doctors? You may think this is a silly question, but step back a second and reflect upon the fact that we have 7.5 billion people on Earth today, set to rise to an expected 11 billion by the end of this century. We have a global shortage of doctors today, and it’s predicted to get worse, so surely the right thing to do is to leverage emerging technology like AI, 4G and smartphones to deliver healthcare anywhere, anytime, to anyone?


We have the emergence of a new type of app known as the Symptom Checker, which gives anyone the ability to enter symptoms on their phone and be given a list of things that may be wrong with them. Note that at present, these apps cannot provide a medical diagnosis; they merely help you decide whether you should go to the hospital or whether you can self-care. However, the emergence of these apps and related services is proving controversial. It’s not just a question of accuracy; there are huge questions about trust, accountability and power. In my opinion, the future isn’t about humans vs AI, which is the most frequent narrative being paraded in healthcare. The future is about how human healthcare professionals stay relevant to their patients.

It’s critical that in order to create the type of healthcare we want, we involve everyone in the discussion about AI, not just the privileged few. I’ve seen countless debates this past year about AI in healthcare, both in the UK and around the world, but it’s a tiny group of people at present who are contributing to (and steering) this conversation. I wonder how many of these new services are being designed with patients as partners? Many countries are releasing national AI strategies in a bid to signal to the world that they are at the forefront of innovation. I also wonder if the UK government is rushing into the implementation of AI in the NHS too quickly? Who stands to profit the most from this new world of AI powered healthcare? Is this wave of change really about putting the patient first? There are more questions than answers at this point in time, but those questions do need to be answered. Some may consider anyone asking difficult questions about AI in healthcare to be standing in the way of progress, but I believe it’s healthy to have a dialogue where we can discuss our shared concerns in a scientific, rational and objective manner.


That’s why I’m excited that BBC Horizon is airing a documentary this week in the UK, entitled “Diagnosis on Demand? The Computer Will See You Now” – they had behind the scenes access to one of the most well known firms developing AI for healthcare, UK based Babylon Health, whose products are pushing boundaries and triggering controversy. I’m excited because I really do want the general public to understand the benefits and the risks of AI in healthcare so that they can be part of the conversation. The choices we make today could impact how healthcare evolves not just in the UK, but globally. Hence, it’s critical that we have more science based journalism which can help members of the public navigate the jargon and understand the facts so that informed choices can be made. The documentary will be airing in the UK on BBC Two at 9pm on Thursday 1st November 2018. I hope that this program acts as a catalyst for greater public involvement in the conversation about how we can use AI in healthcare in a transparent, ethical and responsible manner.

For my international audience, my understanding is that you can’t watch the program on BBC iPlayer, because at present, BBC shows can only be viewed from the UK.

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close and because of that the loss is even harder to cope with. 

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, from the start of this year, I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer for every problem, when it comes to our health. Many of us have been conditioned to believe them. The narrative has been so intoxicating for some.

Ever since this tragedy, it’s not an app, or a sensor, or data that I have turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to be able to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The ‘care robot’ that arrived from China this year, as part of my research into whether robots can keep us company, remains switched off in its box. Amazon’s Echo, the smart assistant with a voice interface that I’d also been testing extensively, sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing the switch with my finger. One day last week, I was feeling sad and didn’t feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset to join a virtual social space. I joined a computer-generated room, a sunny back yard where a BBQ was taking place; I could see the other guests’ avatars, and I chatted to them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints. In particular, the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology, we’ll be able to get rid of mental illness in a few years. Others I’ve met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change. So, is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley’s view of the future. We have all been influenced on some level. Hope is useful, hype is not.

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved, I chose not to. Health isn’t something that can only be improved within the four walls of a hospital. I don’t see bereavement as a medical problem. I’m not sure what a medical doctor can do in a 10 minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don’t want to end up like Prince Harry who recently admitted he had bottled up the grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I’ve experienced these last few months makes me wonder even more, why does society view mental health as a lower priority than physical health? As I’ve been grieving, there are moments when I felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I’m very much reminded of what’s important and who is important. The fragility of life is unnerving. I’m so conscious of my own mortality, and so petrified of death, it’s prompted me to make choices about how I live, work and play. One of the most supportive things someone has said to me after my loss was “Be kind to yourself.” Compassion for one’s self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I’m horrified when doctors & nurses tell me their medical training took all the compassion out of them or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do? 

I haven’t looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum, almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either end up putting us out of work and ruling our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it’s now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, which is primarily designed for fuel efficiency, scores my acceleration, braking and cruising constantly as I’m driving. Where should my attention rest as I drive, on the road ahead or on the dashboard, anxious to achieve the highest score possible? Is this where our destiny lies? Is it wise to blindly embark upon a quest for optimum health powered by sensors, data & algorithms nudging us all day and all night until we achieve and maintain the perfect health score?

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now they told him it’s all changed, and he has to do it online. When he asked them what happens if someone isn’t online, he was told to visit the library, where someone can do it online with you. He was rather angry at this change. I can now see his perspective, and why this has made him angry. I suspect he’s not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self-service checkouts when paying for my goods, because it was faster. Ever since my loss, I’ve chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn’t filled it in; she personally doesn’t see the point. In Digital Health conversations, statements are sometimes made that are deemed to be universal truths: every patient wants access to their records, or every patient wants to analyse their own health data. I believe it’s excellent that patients have the chance of access, but let’s not assume they all want it.

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.

Engaging patients & the public is harder than you think

Back in 2014, Google acquired a British artificial intelligence startup in London called Deepmind. It was their biggest EU purchase at that time, estimated to be in the region of 400 million pounds (approx $650 million). Deepmind's aim from the beginning was to develop ways in which computers could think like humans.

Earlier this year, Deepmind launched Deepmind Health, with a focus on healthcare. It appears that the initial focus is to build apps that can help doctors identify patients who are at risk of complications. It's not clear yet how they plan to use AI in the context of healthcare applications. However, a few months after they launched this new division, they did start some work with Moorfields Eye Hospital in London to apply machine learning to 1 million eye scans to better predict eye disease.

There are many concerns, which get heightened when articles are published such as "Why Google Deepmind wants your medical records?" Many of us don't trust corporations with our medical records, whether it's Google or anyone else. 

So I popped along to Deepmind Health's 1st ever patient & public engagement event held at Google's UK headquarters in London last week. They also offered a livestream for those who could not attend. 

What follows is a tweetstorm from me during the event, which nicely summarises my reaction to the event. [Big thanks to Shirley Ayres for reminding me that most people are not on Twitter, and would benefit from being able to see the list of tweets from my tweetstorm] Alas, due to issues with my website, the tweets are included as images rather than embedded tweets. 

Finally, whilst not part of my tweetstorm, this one question reminded me of the biggest question going through everyone's minds. 

Below is a 2.5 hour video which shows the entire event including the Q&A at the end. I'd be curious to hear your thoughts after watching the video. Are we engaging patients & the public in the right way? What could be done differently to increase engagement? Who needs to do more work in engaging patients & the public?

There are some really basic things that can be done, such as planning the event with consideration for the needs of those you are trying to engage, not just your own. This particular event was held at 10am-12pm on a Tuesday morning. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Managing our health: One conversation at a time

If you've watched movies like Iron Man, featuring virtual assistants like JARVIS, which you can just have a conversation with and use to control your home, you probably think that such virtual assistants belong in the realm of science fiction. Earlier this year, Mark Zuckerberg, who runs Facebook, set a personal challenge to create a JARVIS-style assistant for his own home: "My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man." He may be closer to his goal, as he may be giving a demo of something this month. For those that don't have an army of engineers to help them, what can be done today? Well, one interesting piece of technology is Amazon's Echo. So what is it? Amazon describes it as, "Amazon Echo is a hands-free speaker you control with your voice. Echo connects to the Alexa Voice Service (AVS) to play music, provide information, news, sports scores, weather, and more—instantly. All you have to do is ask." Designed for your home, it is plugged into the mains and connected to your wifi. It's been on sale to the general public in the USA since last summer, and was originally available in 2014 for select customers.

This week, it's just been launched in the UK and Germany as well. However, I bought one from America 6 months ago, and I've been using it here in the UK every day since then. My US-spec Echo does work here in the UK, although some of the features don't, since they were designed for the US market. I've also got the other devices powered by AVS: the Amazon Tap, the Dot, and the Triby, which was the first 3rd-party device to use AVS. To clarify, the Echo is the largest, has a full-size speaker and is the most expensive from Amazon ($179.99 US/£149.99 UK/179.99 Euros). The Tap is cheaper ($129.99, only in the USA) and is battery powered, so you can charge it and take it to the beach with you, but it requires that you push a button to speak to Alexa; it's not always listening like the other products. The Dot is even cheaper (now $49.99 US/£49.99 UK/59.99 Euros) and does everything the original Echo can do, except its built-in speaker is good enough only for hearing it respond to your voice commands. If you want to use it for playing music, Amazon expect you to connect the Dot to external speakers. A useful guide comparing the differences between the Echo, Dot and Tap is here. The Triby ($199.99) is designed to be stuck on your fridge door in the kitchen. It's sold in the UK too, but only the US version comes with AVS. Amazon expect you'd have the Echo in the living room, and you'd place extra Dots in other rooms. Using this range of products has not only given me an insight into what the future looks like, but has also shown me the potential for devices like the Echo (and the underlying service, AVS) to play a role in our health. In addition, I want to share my research on the experiences of other consumers who have tried this product. There are a couple of new developments announced this week which might improve the utility of the device, which I'll cover towards the end of the post.

Triby, Amazon's Tap, Dot & Echo

You can see in this 3-min video some of the things I use my Echo for in the morning, such as reading tweets, checking the news, the weather and my Google calendar, adding new events to my calendar, or turning my lights on. For a list of Alexa commands, this is a really useful guide. If you're curious about how it works, you don't have to buy one; you can test it out in your web browser using Echosim (you will need an Amazon account though).

What's really fun is experimenting with the new skills [i.e. apps] that get added by 3rd parties, one of which is how my Echo is able to control devices in my smart home, such as my LifX lights. I tend to browse the Alexa website for skills and add them to my Echo that way. You can also enable skills just by speaking to your device. At the moment, every skill is free of charge. I suspect that won't always be the case.
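
For the curious, a custom Alexa skill is, at its simplest, a web service (often an AWS Lambda function) that receives Alexa's JSON request and returns speech. Here is a minimal sketch; the intent name and wording are invented for illustration, and a real health skill would also need account linking (for example, to Fitbit's API), error handling, and certification.

```python
# Minimal sketch of a custom Alexa skill: an AWS Lambda handler that
# receives Alexa's JSON request and returns speech. The intent name and
# wording are invented for illustration.

def lambda_handler(event, context):
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "GetSleepSummaryIntent":  # hypothetical intent
        speech = "You slept seven hours and twelve minutes last night."
    else:
        speech = "Sorry, I can't help with that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```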

Some of the skills are now part of my daily routine, as they offer high quality content and have been well designed. Amazon boast that there are now over 3,000 skills. However, the quality of the skills varies tremendously, just like app stores for other devices we already use. For example, in the Health & Fitness section, sorted by relevance, the 3rd skill listed is one called Bowel Movement Facts. 

The Echo is always on; its 7 microphones can detect your voice even if the device itself is playing music and you're speaking from across the room. It's always listening for someone to say 'Alexa' as the wake word, but you have a button to mute the Echo so it won't listen. I use Siri, but I was really impressed when I started to use my Echo; it felt quicker than Siri in answering my questions. Anna Attkisson did a 300-question test comparing her Echo vs Siri, and found that overall, Amazon's product was better. Not only does the Echo understand my London accent, but it also has no problem understanding me when I use fake accents to ask it for my activity & sleep data from my Fitbit. I think it's really interesting that I can simply speak to a device in my home and obtain information that has been recorded by the Fitbit activity tracker I wear on my wrist. It makes me wonder about how we will access our health data in the future. Whilst at the moment the Echo doesn't speak unless you communicate with it, that may change in the future if push notifications are enabled. I can see it now: having spent all day sitting in meetings, and sat on my smart sofa watching my smart TV, my inactivity as recorded by my Fitbit triggers my Echo to spontaneously switch off my smart TV, switch my living room lights to maximum brightness, announce at maximum volume that I should venture outside for a 5,000-step walk, and instruct my smart sofa to adjust the recline so I'm forced to stand up. That's an extreme example, but a more realistic one is that you have walked much less today than you normally do, and you end up having a conversation with your Echo because it says, "I noticed you haven't walked much today. Is everything ok?"

We still don't know about the impact on society as our homes become smarter and more connected. For example, in the USA, those with GE appliances will be able to control some of them with the Echo. You'll be able to preheat the oven without even getting off your sofa. That could have immense benefits for those with limited mobility, but what about our children? If they grow up in a world where so much can be done without even having to lift a finger, let alone walk a few steps from the sofa to the kitchen, is this technology a welcome advance? If you have a Hyundai Genesis car, you can now use Alexa to control certain aspects of your car. When I read this part of the Hyundai Genesis article, "Being able to order basic functions by voice remotely will keep owners from having to run outside to do it themselves", it made me think about a future where we live an even more sedentary lifestyle, with implications for an already overburdened healthcare system. Perhaps having a connected home makes more sense in countries like the USA and Australia, which on average have quite large houses. Given how small the rooms in my London home are, it's far quicker for me to reach for the light switch than to issue a verbal command to my Echo (and wait for it to process the command).

Naturally, some of us would be concerned about privacy. Right now, anyone could walk into the room and, assuming they knew the right commands, could quiz my Echo about my activity and sleep data. One of the things you can do in the US (and now in Europe) is order items from Amazon by speaking to your Echo, and Alex Cranz wrote a post saying, "And today it let my roommate order forty-eight Cadbury Creme Eggs on my account. Despite me not being home. Despite us having very different voices. Alexa is burrowing itself deeper and deeper into owners’ lives, giving them quick and easy access not just to Spotify and the Amazon store, but to bank accounts and to do lists. And that expanded usability also means expanded vulnerability." He goes on to say, "In the pursuit of convenience we have to sacrifice privacy." Note that Amazon do offer the ability to modify your voice purchasing settings, so that the device asks you for a 4-digit confirmation code before placing the order. The code would NOT be stored in your voice history. You can also turn off voice purchasing completely if you wish.

Matt Novak filed a FOI request to ask if the FBI had ever wiretapped an Amazon Echo. The response he got, "we can neither confirm nor deny."

If you don't have an Echo at home, how would you feel about having one? How would you feel about your children using it? One thing I've noticed is that the Echo seems to work better over time, in terms of responding to my voice commands. The way the Echo works is that it records your voice commands in the cloud, and by analysing the history of your voice commands, it refines its ability to serve your needs. You can delete your voice recordings, although doing so may make the Echo less accurate in future. Some Echo users whose children also use the device say their kids love it, and in fact got to grips with the device and its capabilities faster than the parents. However, according to this Guardian article, if a child under 13 uses an Echo, it is likely to contravene the US Children’s Online Privacy Protection Act (COPPA). This doesn't appear to have put households off installing an Echo in the USA, as research suggests Amazon have managed to sell 3 million devices. Another estimate puts the installed user base significantly lower, at 1.6 million. Either way, in the realm of home-based virtual assistants, Amazon are ahead, and probably want to extend that lead, with reports that in 2017 they want to sell 10 million of these speakers.

Can the Echo help your child's health? Well, a skill called KidsMD was released in March that allows parents to seek advice provided by Boston Children's Hospital. After the launch, their Chief Innovation Officer, John Brownstein, said, "We’re trying to extend the know-how of the hospital beyond the walls of the hospital, through digital, and this is one of a few steps we’ve made in that space." I tested KidsMD back in April, and you can see in this 3 minute video what it's like to use. What I find fascinating is that I'm getting access to validated health information, tailored to my situation, simply by having a conversation with an internet connected speaker in my home. Of course, the conversation is fairly basic for now, but the pace of change means it won't be rudimentary forever.

I was thinking about the news last week here in the UK, where it was announced that the NHS will launch a new website for patients in 2017. My first thought was, what if you're a patient who doesn't want to use a website, or for whatever reason can't use one? If the Echo (and others like it) launch in the UK, why couldn't this device be one of the digital channels that you use to interface with the NHS? Some of us at a grassroots level are already thinking of what could be done, and I wonder if anyone in the NHS has been formally testing an Echo to see how it might be of use in the future?

The average consumer is already innovating with the Echo; they aren't waiting years for the 'system' to innovate. They are conducting their own experiments, buying these new products with their own money. One man in the USA has used the Echo to help him care for his aging mother, who lives in a different location from him.

In this post, a volunteer at a hospice asks the Reddit community for input on how the Echo could be useful for patients.

How about Rick Phelps, diagnosed back in 2010 at the age of 57 with Early Onset Alzheimer's Disease, and now an advocate for Dementia awareness? Back in February, he wrote about his experience of using the Echo for a week. What does he use it for? To find out what day it is, because Dementia means he often doesn't know.

For many of us, consumer-grade technology such as the Echo will be perceived as a gimmick, a toy, something of limited or no value with respect to our health. I was struck by what Rick wrote in his post, "To many, the Amazon Echo is a cool thing to have. Some what of a just another electronic gadget. But to a dementia patient it is much, much more than that. It has afforded me something that I have lost. Memory. I can ask Alexia anything and I get the answer instantly. And I can ask it what day it is twenty times a day and I will still get the same correct answer." Rick also highlights how he used the Echo to set medication reminders.

I have to admit, the Echo is still quite clunky, but the original iPhone was clunky too, and the 1st generation of every new type of technology is usually clunky. For people like Rick, it's good enough to make a difference to the outcomes that matter in their daily lives, even if others are more skeptical.

Speaking of medication reminders, there was a 10-day PYMNTS/Alexa challenge this year, using Alexa "to reimagine how consumers interact with their payments and financial services solutions providers." What I find fascinating is that the winner was DaVincian Healthcare, who created something called DaVincianRX, an “interactive prescription, communication, and coordination companion designed to improve medication adherence while keeping family caregivers in the loop." You can read more and watch their video of it in action here. People and organisations constantly ask me, where do we look for innovation and new ideas? I always remind them to look outside of healthcare. From a health perspective, most of the use cases I've seen so far involving the Echo are for older members of society or those that care for them.

I came across a skill called Marvee, which is described as "a voice-initiated concierge application integrated with the Alexa Voice Service and any Alexa-enabled device, like the Amazon Echo, Dot or Tap." Most of the reviews seem to be positive. It's refreshing to see a skill that is purpose-built to help those with challenges that are often ignored by the technology sector.

In the shift towards self-care, when you retire or get diagnosed with a long term condition for the first time, will you be getting a prescription for an Amazon Echo (or equivalent)? Who is going to pay for the Echo and related services? Whilst we have anecdotal real world evidence that the Echo is making a positive impact on people's lives, I haven't been able to find any published studies testing the Echo within the context of health. That's a gap in knowledge, and I hope there are researchers out there who are conducting that research. Like any product, there will be risks as well as benefits, and we need to be able to quantify those risks and benefits now, not in 5 years' time. Earlier I cited how Rick, who lives with Alzheimer's Disease, finds the Echo to be of benefit, but for other people like Rick, using the Echo might lead to harm rather than benefit. We don't know yet. However, not every application of the Echo will require a double-blinded randomised clinical trial. If I can already use my Echo to order an Uber, or check my bank balance, why can't I use it to book an appointment with my doctor?

In the earlier use case, a son looked through the data from his mother's usage of her Echo to spot signs that something was wrong. Surely, Amazon could parse that data for you and automatically alert you (or any interested person) that there could be an issue? Allegedly, Amazon is working on improvements to the service whereby Alexa could one day recognise our emotions and respond accordingly. I believe our voice data is going to play an increasing role in improving our health. It's going to be a new source of value. At an event in San Francisco recently, I met Beyond Verbal, an emotions analytics company doing some really pioneering work. We have already seen the emergence of the Parkinson's Voice Initiative, looking to test for symptoms using voice recordings.

How might a device like the Echo contribute to drug safety? Imagine it reminds you to take your medication, and in the conversation with your Echo, you reply that you're skipping this dose, and it asks you why. In that conversation, you have the opportunity to explain, in your own words, why you skipped that dose. Throw in the ability to analyse your emotions during that conversation, and you have a whole new world of insights on the horizon. Some of us might be under the impression that real world data is limited to data posted on social media or online forums, but our voice recordings are also real world data. When we reach a point where we can weave all this real-world data together to get a deeper understanding of our health, we will be able to do things we never thought possible. Naturally, there are immense practical challenges on that pathway, but progress is being made every day. Having all of this data from all of these sources is one thing, but even if it's freely available, it needs to be linked together to truly make a difference. Researchers in the UK have demonstrated that it's feasible to use consumer-grade technology such as the Apple watch to accurately monitor brain health. How about linking the data from my Apple watch with the voice data from my Amazon Echo to my electronic health record?

An Israeli startup, Cordio Medical, has come up with a smartphone app for patients with Congestive Heart Failure (CHF) that captures voice data, analyses it in real-time, and "detects early build-up of fluids in the patient’s lung before the appearance of physical symptoms"; deviations found in the voice data would trigger an alert, where "These alerts permit home- or clinic-based medical intervention that could prevent hospitalisation." For those CHF patients without smartphones, could they simply use an Echo at home with a Cordio skill? Or could Amazon offer the voice data directly to organisations like Cordio for remote monitoring (with the patient's consent)? With devices like the Echo, if Amazon (or their rivals) continue to grow their user base over the next 10 years, they could have an extremely valuable source of unique voice-based health data that covers the entire population.

At present, Amazon have made surprisingly good progress with the Echo as a virtual assistant. However, other tech giants are looking to launch their own products and services, for example Google Home, which is due to arrive later this year. This short video shows what it will be able to do. Now for me, Google plays a much larger role in my daily life than Amazon in terms of core services. I use Google for email, for search, for my calendar, and maps for navigation. So Google Home might turn out to be vastly superior to the Echo, simply because of that integration with the core services I already use. We'll have to wait and see. The battle to be a fundamental part of your home is just beginning, it seems.

The battle to be embedded in every aspect of our lives will extend beyond the home, perhaps into our cars. I tested the Amazon Dot in my car, and I reckon it's only a matter of time before we see new cars on sale with these virtual assistants built into the car's systems, instead of being an add-on. We already have new cars coming with 4G internet connectivity, offering wifi for your devices, from brands like Chevrolet in the USA.

For when we are on the move, and not in our car or home, maybe we'll all have earphones like the new Apple AirPods, through which we can discreetly ask our virtual assistants to control the objects and devices around us. Perhaps Sony's product, the Xperia Ear, which launches in November and is powered by something called Sony's Agent (which could be similar to Amazon's AVS), is what we will be wearing in our ears? Or maybe none of these big tech firms will win the battle? Maybe it will be one of us, or one of our kids, who comes up with the virtual assistant that will rule the roost? I'm incredibly inspired after watching this video where a 7-year-old girl and her father built their own Amazon Echo using a Raspberry Pi. This line in the video's description stood out to me, "She did all the programming following the instructions on the Amazon Github repository." Next time there is a health hackathon, do we simply invite a bunch of 7-year-old kids and give them the space to dream up new solutions to problems that we as adults have created? Or maybe it should be a hackathon that invites 7-year-olds with their grandparents? Or maybe we have a hackathon where older adults are invited to co-design Alexa skills with younger people for the Echo? We don't just have financial deficits in health & social care, we have a deficit of imagination. Amazon have a programming tutorial where you can build a trivia skill for Alexa in under an hour; a minimal sketch of what such a skill's backend involves follows below. When it comes to our health, do we wait for providers to develop new Alexa skills, or will consumers start to come together and build Alexa skills that their community would benefit from, even if that community happens to be a community of people scattered around the world, who are all living with the same rare disease?
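To give a flavour of just how low the barrier is, here is a minimal sketch of the kind of AWS Lambda handler that could sit behind a simple custom Alexa skill. The request and response shapes follow the Alexa Skills Kit's JSON interface, but the intent name and the health tips are my own hypothetical examples, not anything from Amazon's tutorial.

```python
import random

# Hypothetical content; a real community-built skill would curate its own.
TIPS = [
    "Try to take a short walk after lunch today.",
    "Keep a glass of water by your side while you work.",
    "Set a regular bedtime and stick to it this week.",
]

def build_response(text, end_session=True):
    """Wrap plain text in the JSON envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point invoked by the Alexa service for each user utterance."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome. You can ask me for a health tip.", end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "DailyHealthTipIntent":  # hypothetical intent name
            return build_response(random.choice(TIPS))
    return build_response("Sorry, I didn't catch that.")
```

The voice model itself (the spoken phrases that map to an intent like DailyHealthTipIntent) is configured separately in the Alexa developer console; everything else is a small function like this, which is why a 7-year-old with good instructions can get there.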

You'll have noticed that in this post, I haven't delved into the convergence of technologies that have enabled something like the Echo to work so well. This was deliberate on this occasion. At present, I'm really interested in how virtual assistants like the Echo make you feel, rather than the technical details of the algorithm being used to recognise my voice. For someone living far away from their aging parents/grandparents, does the Echo make you feel reassured? For someone living alone and feeling socially isolated, does the Echo make you feel less alone? For a young child, does it make you feel like you can do magic, controlling other devices just with your voice? For someone considering moving out of their own home into an institution, does the Echo make you feel independent again? If more and more services are becoming digital by default, how many of these services will be available just by having a conversation? I am using my phone & laptop less since I've had my Echo, but I'm not yet convinced that virtual assistants will one day eliminate the need for a smartphone. Some, however, are convinced: 50% of urban smartphone owners around the world believe that smartphones will no longer be needed in 5 years' time. That's one of the findings from Ericsson Consumer Lab, which quizzed smartphone users in 13 cities around the globe last year, in a survey that is supposed to represent the views of 68 million urban citizens. They also found, "Furthermore, a third would even rather trust the fidelity of an AI interface than a human for sensitive matters. 29 percent agree they would feel more comfortable discussing their medical condition with an AI system." I personally think the consumer trends identified have deep implications for the nature of our interactions with respect to our health. Far too many organisations are clinging on to the view that the only (and best) way that we interact with health services is face to face, in a healthcare facility, with a human being. Although these home virtual assistants don't need a smartphone with a data plan to work, they do need fixed broadband. Yet looking at OECD data from December 2015, fixed broadband penetration is rather low; the UK is not even at 40%, so products such as the Echo may not be accessible for many across the nation who might find them beneficial with regard to their health. This is an immense challenge, and one that will need joined-up thinking, as we need everyone included in this digital revolution.

You might be thinking right now that building a virtual assistant is your next startup idea, that it's going to be how you make an impact on society, how you can change healthcare. Alas, it's not as easy as we first thought. Cast your mind back to 2014, the same year that the Echo first became available. I was one of the early adopters who pledged $499 for the world's first social robot, Jibo [consider it a cuter, or creepier, version of the Echo with a few extra features]. They raised almost $4 million from people like me, curious to explore this new era. Like the Echo, you are meant to be able to talk to Jibo from anywhere in the room, and it will act upon your command. The release got delayed and delayed, and then recently I got an email informing me that the folks behind Jibo have decided that they won't be shipping Jibo to backers outside of the USA, and I was offered a full refund.

One of the reasons they cited was, "we learned operating servers from the US creates performance latency issues; from a voice-recognition perspective, those servers in the US will create more issues with Jibo’s ability to understand accented English than we view as acceptable." How bizarre; my US spec Echo understands my London accent, and even my fake ones! It took the makers of Jibo 2 years to figure this out, and this from people at the prestigious MIT Media Lab. So just how much effort does it take to make something like the Echo? A rather large amount, it seems. According to Jeff Bezos, the CEO of Amazon, they have over 1,000 people working on this new ecosystem. A very useful read is the real story behind the Echo, explaining in detail how it was invented. Apparently, the reason the Echo was not launched outside of America until now was so it could handle all the different accents. So, if you really want to do a hardware startup, then one opportunity is to work on improving the digital microphones found not just in the Echo, but in our smartphones too. Alternatively, Amazon even have an Alexa Fund, with $100m in funding for companies looking to "fuel voice technology innovation." Amazon must really believe that this is the computing platform of the future.

Moving on to this week's news, the UK Echo will have UK partners such as the Guardian, Telegraph & National Rail. I frequently take the train from my home into central London, and the station is a 15 minute walk from my house, so checking whether the train is delayed or cancelled before I head out of the front door is one of the UK-specific skills I'm most likely to use. Far easier and quicker than pulling out my phone and opening an app. The UK version will also have a British accent. If you have more than one Echo device at home and speak a command, chances are that two or more of your devices will hear you and respond accordingly, which is not good, especially if you're placing an order with Amazon. So Amazon have now updated the software with ESP (Echo Spatial Perception), so that when you talk to your Echo devices, only the closest one to you will respond. It's being rolled out to existing Echo devices, so there's no need to upgrade. You might want to though, as there is a new version of the Echo Dot (US, UK & Germany), which is cheaper, thinner, lighter and promises better voice recognition than the original model. For those who want an Echo in every room, you can now buy Dots in 6 or 12 packs! In the UK, given that the Echo Dot is just £49.99, I expect many people will be receiving them as presents this Christmas.

Amazon's Alexa Voice Service is one example of a conversational user interface; at times it's like magic, at other times infuriatingly clumsy. I'm mindful that my conversations with my Echo are nowhere near as sophisticated as conversations I have with humans. For example, if I say "Alexa, set a reminder to take my medication at 6pm" and it does that, and then I immediately say "Alexa, set a reminder to take my medication at 6.05pm", and so forth, it currently won't say, "Are you sure? You just set a medication reminder close to that time already." Some parents are concerned that the use of an Echo by their kids is training them to be rude, because they can throw requests at Alexa, even in an aggressive tone of voice, with no please and no thank you, and Alexa will always comply. Are these virtual assistants going to become our companions? Busy parents telling their kids to do their homework with Alexa, or lonely elders who find that Alexa becomes their new friend in helping them cope with social isolation? Will we end up with bathroom mirrors we can have conversations with about the state of our skin? Are we ever going to feel comfortable discussing the colour of our urine with the toilet in our bathroom? When you grab your medication box out of the cupboard, do you want to discuss the impact on your mood after a week of taking a new anti-depressant?
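Returning to the reminder example, the missing safeguard is essentially a proximity check over existing reminders before a new one is saved. Here is a minimal sketch of that idea, assuming a 15-minute window; the function and the threshold are my own illustration of the logic, not how Alexa actually works.

```python
from datetime import datetime, timedelta

def near_duplicate(new_time, existing_times, window_minutes=15):
    """Return True if new_time falls within window_minutes of an existing reminder."""
    window = timedelta(minutes=window_minutes)
    return any(abs(new_time - t) <= window for t in existing_times)

# A reminder already exists at 6pm; a request for 6.05pm should trigger
# "Are you sure? You just set a medication reminder close to that time already."
existing = [datetime(2016, 10, 3, 18, 0)]
print(near_duplicate(datetime(2016, 10, 3, 18, 5), existing))  # True
```

It's a trivial check, which is precisely the point: the gap between today's assistants and a genuinely conversational one is often not exotic AI, but this kind of basic context-keeping.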

Could having conversations with our homes help us to manage our health? It seems like a concept from a science fiction movie, but to me, the potential is definitely there. The average consumer will have greater opportunities to connect their home to the internet in years to come. Brian Cooley asks in this post whether our home will become the biggest health device of all.

A thought-provoking read is a new report by Plextek examining how connected homes could change the medical industry by 2020. I want you to pause for a moment when reading their vision: "The connected home will be a major enabler in helping the NHS to replace certain healthcare services, freeing up beds for just the most serious cases and easing the pressure on GP surgeries and A&E departments. It will empower patients with long-standing health conditions who spend their life in and out of hospitals undertaking tests, monitoring, rehabilitation or therapy, and give them freedom to care for themselves in a safe way."

Personally, I believe the biggest barrier to making this vision a reality is us, i.e. people and organisations that don't normally work together will have to collaborate in order to make connected homes seamless, reliable and cost effective. Think of all the people, policies & processes involved in designing, installing, regulating, and maintaining a connected home that will attempt to replace some healthcare services. That's before we even think about who will be picking up the tab for these connected homes.

Do you believe the Echo is a very small step on the path towards replacing healthcare services, one conversation at a time?

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


An interview with Molly Watt: Putting Usher Syndrome on the map

For this post, I wanted to share Molly Watt’s story. I first came across Molly in 2015, after I read her Apple Watch post. I had also just received my Apple Watch and was curious about other people’s experiences. What’s different about Molly is that she has Usher syndrome, a rare genetic disorder caused by a mutation in any one of at least 11 genes, resulting in a combination of hearing loss and visual impairment. Usher syndrome is the most common cause of congenital deafblindness (among deafblind people overall, the elderly are the biggest group), and it is incurable at present. Usher syndrome hasn’t held Molly back; she’s even set up her own charity, the Molly Watt Trust, and much more. Reading each of her subsequent blog posts, I found her writing creative, courageous and candid, and that resonated with me. In fact, it resonated so strongly that I decided to visit Molly in her home town of Maidenhead, England to interview her. It’s a longer interview than I would normally post, but we have so much to learn from Molly (and others like her) that I was compelled to include as much as possible from her answers. Listening to Molly was also a powerful reminder that we often focus so much on ‘empowering’ or ‘activating’ or ‘engaging’ patients themselves that we ignore the patient’s family and friends, who play a very critical role. I feel there are so many voices currently not heard; do we need to change the way we listen?

(360 spherical image from my interview with Molly, captured with a RICOH THETA.)

1. You recently had a meeting at Apple's HQ in America to share your views on accessibility. Can you tell us more about how you ended up there?
The main thing was the Apple watch blog post that I wrote, and through the charity, I discuss how we can access things through tech. I have been an Apple user since my diagnosis 10 years ago; eventually I had a Mac in education to access exam papers. So when the Apple watch came out, I was unsure what it could offer, and I bought one out of curiosity, thinking I would probably return it. Accessibility has been a cornerstone of my life since the diagnosis, and we got my website set up after Xmas 2014, and my mum encouraged me to blog, so I did.

This was the first personal blog post I wrote. I think the timing of it was shortly after the launch of the Apple watch, when there was a lot of bad press saying it was a toy, but I found that from the perspective of sensory impairment, it opened a lot of doors for me to be more independent. I never missed a phone call because of the prominent haptics, the digital touch features were really beneficial socially when out with friends, and maps was a big feature for me, navigating from A to B with the watch. When I’m out I’d rather have my phone in my bag, as it’s much safer for me, and the watch enables me to do that. My post generated quite a lot of positive reviews about the Apple watch. All of that is how, after a few months, Philip W. Schiller, the senior vice president of worldwide marketing at Apple, retweeted me, and my website crashed due to so many hits. From there onwards, a lot of people contacted the trust. We have to remember that a lot of people can’t afford the watch; many with Usher syndrome are shuffling between jobs.

Apple had reached out to the trust to speak with me, and since we were going on a family holiday to California, they said, well come and visit us. They were genuinely interested in hearing my story, and to understand how technology can enable accessibility much more in the future. As a family, we travel as much as possible, because my sight may completely go at any time. I wrote a post about my trip to California.

2. Many people laugh at products such as the Apple watch calling it a toy or not seeing any value in using it, but it has been of value in your life. What can be done to get people looking at all possible uses of new technology?
I wasn't sure about the Apple watch; I couldn't really understand what it would do for me over the iPhone that I have relied on for years. My decision to purchase it was last minute really, as a couple of my friends were getting one. It's great my friends did, as I might not have got one myself, and we were able to explore the features of the watch together. However, after a little fiddling around, and with my insight into my real need for accessibility, I was able to really put it to the test. I believe people give the Apple watch a bit of a hard time because they don't use it in the way somebody like myself does. I have learnt how to use it to make a real difference to my life, and it’s a brilliant piece of equipment I have come to rely on. I guess because I rely on technology, I have become an expert in my own way of accessing it!

3. You've got your own charity and you've spoken at places such as the Houses of Parliament and Harvard Medical School. When you were younger, did you envisage you would reach these heights?
I had no idea. I think my own struggles have made me feel passionate about making a difference, to raise awareness of ability as much as disability, share my experiences good and bad and demonstrate the importance of accessible, assistive technology. I am definitely not the person I was since being diagnosed with Usher syndrome.  

Being deaf is very different; it is not rare. It is, however, challenging, and there needs to be support and assistive technology. In the area I live, support for the deaf was excellent. My Usher syndrome diagnosis brought confusion and inexperience in supporting somebody with the condition, particularly in school - my education became a nightmare as I couldn't access the curriculum without modification and nobody knew what they were doing.

At my real time of need the only people I could rely on were my parents who continued to battle for me even though they also did not really understand what I was going through. I definitely get my determination and drive from them.

I've been speaking since I was 14 and making awareness videos. It was my way of telling people what I was going through, how I felt and what I felt I needed by way of support. That's how it all began, and as the years have gone by I have found public speaking a very useful skill and a way of reaching a larger audience with the many messages I have.

4. When it comes to accessibility and new technology, what's missing? What are the 3 top inventions that you'd like to see come in the next few years?
This is a hard question as I'm not an expert on what is possible. I believe the design of everything can be improved by the inclusion of accessibility from day one. The obvious things, like all websites, need to be completely accessible. I rely on this sort of thing; picking up a book isn't an option. Hotels often have terrible design - decor, carpets and wallpapers clashing, and the poorest lighting. Most public places are difficult.

I cannot wait for the driverless car to be available to people like me, I'm sad I'll never experience driving but excited to think this technology is on the horizon.

5. For those living with Usher Syndrome, do they feel like their wants & needs are being heard? If not, what could we do to be better listeners?
Definitely not; there is a lack of understanding and awareness. Usher syndrome is the most common cause of congenital deafblindness, and few are experienced in dealing with it, hence few get what they need. Life is a constant battle.

I'm sad that people with Usher Syndrome struggle to be understood and often live isolated lives.
Many do not work, do not socialise, and do not have access to enabling technology to allow them access to social media, and if they did, they would need help in learning how to use the technology. Some use sign language, which again can be isolating and can cause difficulty getting employment, as communication support is often needed and hard to access while cuts to Access to Work continue. I think professionals should encourage people like myself to be vocal about their needs, and should listen and take onboard their thoughts and feelings. All too often people have tried to speak for me, and it is not acceptable. Encouragement from the point of diagnosis is important.

I'm fortunate my parents have always encouraged me to speak up.

6. I understand you've faced many challenges when dealing with the NHS, schools and charities/support groups, can you tell us a bit more about what happened? 
I'll answer this one at a time:

The NHS were good with my deafness diagnosis when I was little, right up to my Usher diagnosis; thereafter it has been a different story. Sadly, audiologists, who are often the first point of contact, either know of the condition but have not treated anybody with it, or worse, know nothing about it. Either way it is not helpful to the patient and needs to change. It is the same with ophthalmologists, who know about eye conditions but not much about deafblindness. Whilst conditions may be rare, there has to be professionalism in dealing with all of them. An example of not having a decent understanding is my NHS audiologist, who has known me since I was very young and has monitored my hearing with regular tests, the results of which are followed up in writing. It would be great if I could read those results - they were completely inaccessible, and the clinic was completely unaware of my accessibility issues until I pointed it out. Not a thought about how I am able to access information in font 10/12, black text on white paper!

Equally, I have sat at Moorfields Eye Hospital and, during the appointment, been spoken to whilst a Professor looked at his computer screen - everybody knows deaf people need to see faces to lipread and for facial expressions, even those of us with very little sight. These things should be obvious!

My experience of a mainstream school was excellent whilst I was just deaf; there was great support.
Again, after my Usher Syndrome diagnosis there was a lot of confusion. I was given the support of a VI teacher as well as my teacher of the deaf, and neither had supported somebody like myself.
A multi-sensory teacher had to be "bought" in from a charity, and yes, she understood the condition, but with one visit a term to educate both me and those supporting me, things did not go the way they should have. This resulted in me struggling to deal with what was happening to me; I felt a burden and looked to move schools, my biggest mistake ever.

I thought that by going to a private school for the deaf, one familiar with my condition, I'd be with people like myself! I couldn't have been more wrong. The deaf kids were cruel; they questioned my deafness as I have good speech, and questioned my blindness as I appeared to see. The staff were just as bad. I boarded initially and spent hours in my dorm as I physically couldn't get from dorm to dining hall in the dark; nobody noticed or cared. Teachers didn't modify my reading material, and if they did, it would be on A3 paper, making me feel very different. I struggled for 2 years trying to deal with my failing sight, being in denial as it often seemed easier to be that way, surrounded by deaf kids telling me I was fine - it was hell.

It was made worse when I got my guide dog, who did enable me to get from A to B safely - then I was denied access to all social areas, as my need to get from A to B was deemed less important than the need of a younger boy with a dog allergy to move freely around the school.

I left with depression and a nervous breakdown at 17 years old.

That school claimed to know all there was to know about Usher syndrome - in reality they knew little, and I was treated very badly.

Charities:  
Sense is the main deafblind charity; they cover and support all types of deafblindness, including deafblindness with additional issues, from the very young to the very old and everything in between, and they are great at campaigning. However, I do feel people with Usher Syndrome often miss out, and that’s why we set up the Molly Watt Trust.

My family travel to the USA to find out information about Usher Syndrome; in the 10 years since I was diagnosed there has not been a single Usher-specific conference, yet there have been several for other types of deafblindness, even though Usher Syndrome is the most common cause of congenital deafblindness.
Sense does a great job, but there is little for those with Usher syndrome. I am an Ambassador for Sense, and I'm always happy to help and work alongside them on any Usher projects, and to do what I can, when I can, to promote awareness of Usher syndrome - something I also do through the companies I have worked with. I have spoken for several charities, including RP Fighting Blindness and Berkshire Vision. I often feel on the outside looking in; I don't fit in the deaf community or the blind community, and yet I feel I'm a part of both, along with the Usher community and society in general.

Belonging somewhere is important to us all.

7. You wanted genetic testing, but encountered resistance from the system. Why did they think it was a bad idea for you to have genetic testing?
I wanted genetic testing when I was 15 years old, back in 2009. I had studied genetics a little at school and I wanted to know exactly who I am. My parents asked at Moorfields Eye Hospital in London the next time we were there, and we were told ‘NO’ because of funding and because there is no cure for my condition. I remember feeling very upset, and my parents followed up the request for genetic testing with my GP. Thankfully he understood the need and arranged for me to see a geneticist from the John Radcliffe hospital in Oxford. My geneticist, Edward Blair, was brilliant; he explained things in full and even provided a history lesson on where Usher syndrome came from. Some 6 months later I was told I have Usher syndrome type 2a. Knowing is essential should the chance to trial anything become an option in the future. If there is any clinical testing of that gene in the future, I can decide if I'd like to be involved. Being told ‘NO’ makes you feel you are a lost cause, which just escalates the isolation this condition brings. Everything is a battle with this condition.

Something else to be considered is the benefits system. I have been assessed more times than I can say. Sadly, people think deafblind means no hearing, no sight and no speech. When they see me they are often very shocked, and then don't believe I have any disability. On one occasion I arrived for an assessment (ATOS) and was told by the doctor that he had googled Usher syndrome the night before! He did not have a clue what I deal with on a daily basis.

8. When it comes to innovation in technology, and in particular around accessibility, what is your long term dream? 
I'd like people with disabilities to be considered from day one.  I'd like those with rare disabilities like mine to have access to all equipment they need and to be taught how to use it. I’d like them to have access to transport and benefits to enable them to work.

I think developers of everything need to understand the unique needs of all, and to realise disabilities are not black and white. Sensory impairments are not two colours. Some with Usher are profoundly deaf (usually type 1's); the older generation might not have used any hearing aids, so they rely on sign language (BSL) and later tactile signing as their vision deteriorates. Their communication skills and needs differ from those of the younger generation, whose parents chose cochlear implants, giving them access to sound from a young age, so they are oral. My generation, in the main, wear hearing aids and are oral. This, in my opinion, is a huge positive for accessing our world. However, those who sign must always be considered regarding accessibility. And being blind is very rarely total darkness. There are many grey areas that are not often considered.

In an ideal world I'd like to work and consult with developers around the world working on accessibility for all. I'd like to be a part of moving forward with assistive technology. I believe if technology works for people like myself, it will work for the older generation whose eyes and ears start to fail them as they grow older, and this is very important with our ageing population.

9. Do you think there are other people like yourself around the world? Have you built your own network or is that something still to come?
I know of a few people doing similar work to mine. I have built a network which continues to grow, and I am quite well known for my work around the world, something I have been doing since I was 15. There is definitely more work to do and lots more to come. I hope that one day having Usher syndrome can just open up unique doors for every individual, rather than bringing the progressive isolation and depression that lack of access and awareness can cause.

10. Who has inspired you the most in your life, and why?
My parents, particularly my mum and my grandparents.  They have always encouraged, supported and fought for me and I have learnt so much from them.
My mum always told my brothers and my younger sister we could be anything we wanted. At that time she didn't realise what was around the corner for me, but she still believed I would make something of myself, and I will, one way or another!
Before I could speak (at age 6), my Nannie, Pat, would sit me down and we'd make cards, paint and create for hours. We'd do jigsaw puzzles and watch Disney videos. My creative streaks definitely arose from those days. I was born creative and to this day use those skills. My children's books have frog characters, my Nan loved frogs. She inspired me.

11. If people want to work with you, what would they need to be offering to get your attention?
Opportunities to speak, to motivate, to innovate, to consult, to make a difference, to be heard.
My passion is accessible assistive technology and educating others.

12. If others wanted to follow in your footsteps, what would your advice be to them?
I'd encourage others to think about what is important to them, how to use their unique skill set to make a difference. Work hard and be passionate about your cause. Plus of course, never be afraid to speak up. Find ways to express yourself, in that process you eventually find yourself and also the confidence to help others.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Unexpected findings

It's fascinating to meet people in healthcare and hear them dismiss the potential value of a tool like Twitter. Despite an increasing amount of noise, I do find it a great place to listen and learn. For me personally, it's been a very powerful tool, and has taken me to places I've never imagined. One of those places is Cedars-Sinai Medical Center in Los Angeles, California. By chance, I'd come across Dr Brennan Spiegel on Twitter earlier this year, and through our online interactions, discovered that we had common interests in Digital Health, especially in the context of understanding whether these new digital tools and services being developed are actually having an impact in healthcare.

Dr Spiegel is Director of Health Services Research at Cedars-Sinai Health System, Director of the Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), and Professor of Medicine and Public Health in Residence at UCLA. I was particularly intrigued by the work he does at CS-CORE, where he oversees a team that investigates how Digital Health technologies, including wearable biosensors, smartphone applications, and social media, can be used to strengthen the patient-doctor bond, improve outcomes, and save money. So whilst I was out in California, I popped into Cedars-Sinai Medical Center to spend some time with him and his team to understand their journey so far in Digital Health.

With Dr Spiegel and the CS-CORE team - the picture was taken remotely using Dr Spiegel's Apple watch!

To give you some context, Cedars-Sinai Medical Center is a non-profit with 958 beds, over 2,000 doctors and 10,000 employees. It's ranked among the top 15 hospitals in the United States, and first in Los Angeles by US News & World Report. In addition to Dr Spiegel, I met with Dr Christopher Almario, Garth Fuller, and Bibiana Martinez.

What follows is a summary of the Q&A that took place during my visit. 

1. What is the big vision for your team?
"The big vision is value of care. Value is our true north. It puts patients first while also reminding us to be judicious about the healthcare resources we use. Take Cedars-Sinai, a traditional volume based center of excellence. How do we transform our hospital, that has excelled in the fee-for-service healthcare environment for so long, and transform it into a value-based innovation center while maintain our top-notch quality of care? It seems like a magic trick to transform from volume to value in healthcare. How do we do it at scale, and how do we keep people out of hospitals when healthcare systems have  been designed to take people in? Our mission is to figure out how to do that. This could be a blueprint for how other health systems could do this and which doctors could do this. How do we align incentives? How do we create a Digital Health strategy that works within the existing clinical workflow? How might we use an E-coordination hub? These are all open questions ready for rigorous research. 

What does innovation mean at Cedars-Sinai? We see ourselves as a hub of innovation and are now developing a new 'Value Collaboratory' under the guidance of our visionary leader, Scott Weingarten, who directs Clinical Transformation at Cedars-Sinai. We offer a set of tools to help value-based innovators make a difference. We're going to be doing a lot over the next 5 years. Digital Health is just one small part of that. The Value Collaboratory will be the centre for ideas within Cedars. For example, if innovators seek internal funding for a project, then they can work with the collaboratory to refine their idea, evaluate its health economic potential, and create a formal case for its support."

2. Tell me more about the team - what types of people work in CS-CORE?
"There are 12 of us in CS-CORE, and we have a combination of health system and statistical expertise. We have social scientists, behavioural scientists, mobile health experts and more. It's a multi-disciplinary team. For example, Dr Almario is a gastroenterologist, who has always been interested in health services research, and was awarded a career development award from the American College of Gastroenterology, which is very rare, in Digital Health to pursue research. Garth Fuller with a background in health policy and management has been working with us for the last 5 years and has a strong interest in medication adherence, and conducts research to understand how we can show that 'Beyond the Pill' strategies in the pharma industry are working. Bibiana Martinez with her background in Public Health is hands on, and works with our patients. Bibiana helps filter the real world barriers faced in Digital Health research and bring them back to our team. We have an all-hands-on-deck research crew."

3. What has surprised you during your research in Digital Health?
"We've had some unexpected findings. For example, we had a patient who reported less pain, and our original expectation was that the data from her wearable would report that she had been walking more, as the pain was subsiding. However, that wasn't the case, as her pain decreased, she was walking less. It turns out the patient was an author, and being free of pain meant she could sit for hours on end and finish writing her book. Completing the book was the outcome that mattered to the patient. What should we do when a patient's steps fall from 1,500 a day to almost 0? Do we give them a call, simply because we perceive it as unhealthy? How often does your doctor ask you what your goal is for your visit? I show these charts of pain vs steps when I teach my health analytics class at UCLA, to challenge how my students think."

4. How else have your assumptions about how patients use Digital Health tools been challenged?
"In healthcare, we often make a lot of assumptions about the needs and wants of patients. We have been fitting Virtual Reality goggles with hospital patients, so that we can transport them from their hospital bed to far away places such as Iceland. One patient asked if we could transport him somewhere more tropical, as the hospital is cold, and having a VR experience in Iceland made him feel even colder. 

We had an instance where a patient wasn't able to charge her Fitbit. We tried to explain over the phone, but it actually required a house visit in order for this patient to understand how to charge the device. We thought we could put sensors around the ankle joint of patients to measure steps, and some patients felt like they were under house arrest when wearing our sensor on their ankle."

5. What are some of the most exciting projects you're working on today?
"Well, we create our own technologies and sensors. We find out soon if our first sensor is approved by the FDA. Also, with the vision of our hospital Enteprise Information Services (EIS) team, our hospital's EHR is now connected to Apple's HealthKit, it's a great achievement, we now have 750 people pouring in real-time sensor data into our EPIC Electronic Health Record. We've also developed My GI Health, a patient provider portal which by gathering information on symptoms in advance of a visit to the doctor, helps us learn more about a patient's GI symptoms. The computer doesn't forget to ask questions, but sometimes the doctor forgets to ask questions. Although much of our research is in GI, we are working across healthcare. We are now building a version of My GI Health for rheumatology, for example. We are also interested in testing whether the first visit to a specialist doctor should be virtual or in person? What would patients & doctors actually want? We are putting a study design together now that will compare both types of visits."

6. What are some of the challenges you face in your research?
"The research we do is often challenging for the IRB because it’s so different.  We work closely with our IRB to explain the nature of our work. As more academic groups conduct Digital Health research, it will be important that medical centers develop regulatory expertise around this type of work.

There is also an urgency to test quickly, fail quickly and succeed quickly. What we need is a high level discussion to understand what risk means in the context of Digital Health research. Can we generate evidence faster?"

7. What are you doing to help ensure that no patient gets left behind in Digital Health?
"We are soon going to start a community-based study in partnership with African American churches in Los Angeles. We will work with these 'mega churches,' which have up to 10,000 congregants, and will distribute healthy living experiences delivered by Virtual Reality goggles using Google Cardboard.  We will also use an app for obesity and diabetes management. We observe that many families from minority backgrounds are mobile first, and we see that the next digital divide is opening up over mobile. Healthcare isn't built for mobile. We are also researching the mobile usability of hospital websites across America."

8. What message would you like to share with others also on the same journey as you?
"Listen to the patients, get used to Digital Health being dirty and difficult, it may be harder than you think. We can say that with some authority now, that it can sound easy, but in reality it's been very hard. Our team has developed devices and applied them directly to patients; what happens next is often unexpected and challenges our assumptions. Digital Health is really hard to do. We have to focus on the how of Digital Health. We understand why it's valuable, but not as much about how we will be doing it. Value is another big theme - we need to improve outcomes and reduce costs of care. It takes time to do it right. We also try to never forget the end user, both the physician and the patient. 

This work is 90% perspiration, and 10% inspiration. You need to have a sense of humor to do this, because you’re going to get a lot of unexpected bumps and failures. It’s a team sport to figure it out. Defining the problem in terms of the health outcomes and costs is the key, and generating a solution that has value to patients and providers is paramount.

Finally, the 'cool test' is so seductive. Don’t be fooled by the 'cool test' in Digital Health. What may be cool to us may not be cool to the patient. Don’t be seduced by the 'cool test' in healthcare."

I really enjoyed my time with Dr Spiegel and his team, not only because of the types of research they are doing, but also because of their vision, values and valor. Their unexpected findings after putting new devices on patients have subsequently made me think at length about health outcomes. I was reminded about the human factors in healthcare, and that both patients and doctors don't always do what we expect them to do. I'm glad CS-CORE are not just thinking from the perspective of medicine, but through the lens of public health too, and about how to ensure that no patient is left behind. I'm not the only one who admires their work. David Shaywitz has recently written a post about the research conducted by CS-CORE, and mentions, "they are the early adopters, the folks actually in the arena, figuring out how to use the new technology to improve the lives of patients."

Dr Spiegel did admit they've been under the radar so far, focusing on putting “one foot in front of the other” in research mode while working with a wide variety of partners from industry and academia. The team is also looking for collaborators who want to road test their digital health solutions in a “real world” laboratory of a large health system. Their team is equipped to conduct stem-to-stern evaluations with an eye to rigorous research and peer-reviewed publications. I see that Dr Spiegel is one of the speakers at the Connected Health Symposium later this week, as part of a panel discussion on Measuring Digital Health Impact & Outcomes. I won't be there but I hope to be part of the live Twitter discussion. 

Since my visit, I note that Cedars-Sinai and Techstars have partnered to launch a Digital Health focused accelerator. What does this accelerator aim to do? The website states, "We are looking for companies transforming health and healthcare. Companies that are creating hardware, software, devices and/or services that empower the patient or healthcare professional to better track, manage, and improve health and healthcare delivery are eligible to apply." Techstars is one of the world's most highly rated startup accelerator programs, alongside Y Combinator. It's fascinating to see the marriage of two very different worlds, and who knows what unexpected findings will result from this partnership. In the 21st century, when we think of radically different models of care, startups and emerging technologies, large traditional hospital systems are not the first place we look. Maybe the lesson here for large healthcare institutions is to "disrupt or be disrupted?"

In the world of Digital Health, the trend of moving healthcare out of the hospital and into the home, along with virtual visits and telemedicine, may be causing concern among hospital executives. If all of these converging technologies (often coming from startups) really are effective and become widely adopted, then surely we will need smaller hospitals, or perhaps in certain scenarios, we may one day not need that many hospitals at all? Perhaps the hospitals that survive and thrive in the 21st century will be the ones that boldly explore the unknown in Digital Health, rather than the ones that hide and hope that Digital Health will just be a passing fad?

“It is the tension between creativity and skepticism that has produced the stunning and unexpected findings of science.” - Carl Sagan

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


The quest for evidence

There is an abundance of excitement and enthusiasm in the world of Digital Health. One example is the growth in venture funding; another is the announcement of new partnerships between incumbents and technology companies, such as AstraZeneca's partnership with Adherium to develop a 'smart inhaler.' We see more accelerators, more incubators, and more hackathons. It really is an incredible era. One of the major challenges is that the world of healthcare requires more than excitement and enthusiasm; it requires evidence. Apps may be cool and fashionable in the modern age, but apps that are proven to save lives are what decision makers in healthcare are seeking. News articles may cite that 165,000 health apps now exist, but that’s a headline statistic; it’s simply evidence of increased activity. If we don’t start seriously thinking about validating these new technologies, chances are that the industry will fail to make the impact it hopes to. Even worse, widespread use of digital interventions that have not been validated may cause unnecessary harm to patients.

Now, evidence means different things to different people. In the world of startups, evidence might be $1 million in seed funding and 5,000 active users of an app. In the world of healthcare, evidence might be a randomised clinical trial with results published in a peer reviewed journal. I have observed a gulf between these two worlds, and that's a problem. How many startups have evidence generation in their business plans given that half of Digital Health startups fail within 2 years? Generating evidence is expensive when you're trying to get your business off the ground. Omada Health is cited as one of the leading examples of a startup that has worked hard to generate evidence, and they state, "Omada’s commitment to generating, analyzing, and sharing clinical data is central to our identity."

What is evidence in the 21st century? This is a critical question, and one not being asked enough. Maybe if healthcare systems eventually start collecting patient outcome data in real-time, evidence will be easier to obtain? Perhaps as we collect different types of data, the evidence that is gathered could change? The time and costs needed to gather evidence using traditional methods may not be suitable for the swift development of Digital Health. Even when we have schemes for validating Digital Health, it’s not always plain sailing. Take the NHS example, and the news that some accredited health apps were found to be putting users' privacy at risk. My fear is that traditional organisations, under pressure to be seen to be ‘accelerating innovation’ or ‘transforming healthcare with digital’, act in haste with regard to these new tools and throw caution to the wind. Changing the world of healthcare is going to be a relatively slow process when done properly, no matter what you hear about the next disruptive idea. Maybe we mistakenly assume that a digital intervention is always going to be brilliant, which is quite a dangerous assumption to be carrying around. Again, that’s where evidence is useful, as maybe the evidence will show that a particular digital intervention does not offer any additional benefit over existing non-digital interventions.

There are people out there starting to look at validating new ideas in Digital Health. It's interesting to note how a new startup accelerator, Rockstart, from the Netherlands, has quite a strong focus on validation and evidence generation, which is a step in the right direction. There is also Evidation Health, which has a focus on "Defining and demonstrating value in Digital Health." Also, the Global Consortium for Digital Medicine has been established, with a focus on evidence-based Digital Health.

This is good news, and quite frankly, we need more people working on this. I am so curious about trends in generating evidence that I've flown out today to California to attend the inaugural Digital Medicine conference at Scripps, where the focus of the 2-day event will be "A thoughtful exploration of the clinical evidence necessary to drive the widespread uptake of mobile health solutions." There seems to be a growing momentum for pushing this conversation forward. I note that the Hacking Medicine Institute will be hosting their first "Measuring Digital Health Outcomes Summit" next week. I'm excited that the Institute has the aim of "convening healthcare leaders around the world to accelerate data, evidence and adoption of effective new medical technologies." I suspect those organizations building Digital Health products that have not thought enough about evidence are likely to be viewed differently in 2016 and beyond.

The quest for evidence in Digital Health is underway, and hopefully, we'll soon be able to separate the wheat from the chaff.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]
