Dear Papa, Thank You

I can’t believe I’m writing about my father in the past tense. My biggest supporter in life died just before Christmas 2023. We were exceptionally close, especially over the last few years, and that’s why the loss is so profound for me. I’m writing to share his story, what he meant to me, and how he shaped so much of my life, including my work.

Papa was always there for me, whether in person or on the phone. From my earliest childhood moments right through to me as a 50-year-old man, whenever I asked Papa for help or guidance, he would drop everything and focus on supporting me, often weaving in a story or two to bring meaning to his message. He behaved the same way with my brother and sister too. "Family first" was his motto.

When I showed Papa the latest technology and what it could do in everyday life, he couldn’t believe what was now possible, given that he grew up in 1930s rural Panjab, India, a very different era. Every time we went somewhere in my car and I entered a destination into Google Maps, he would be taken aback, with childlike wonder, at how we automatically got turn-by-turn directions, spoken out loud. He had seen so much change in how we live, work and play during his lifetime.

Papa loved to tell stories. Growing up, I would hear him recount his life story: experiencing the traumatic upheaval of the partition of India, starting from scratch as a refugee, studying so hard (and walking several miles each way just to get to school) that he earned a scholarship and could pursue a career in education, migrating to the UK and overcoming the adversity of adjusting to life here, and building a fulfilling career as a university lecturer at the University of Westminster, teaching Statistics to Business Studies students. Even just a few months ago, at the age of 88, he again shared his life story with me, recounting his earliest childhood memories from 80 years ago so vividly.

He was immensely proud of his Indian heritage and even though I was born in the UK, he didn’t want me to forget my roots, hence the countless stories about his life in India and how it shaped his outlook. I’m really going to miss hearing those stories.  

I grew up hearing him inspire me with stories from different cultures, religions and philosophies. Some days it would be about Alexander the Great, on others the philosophy of Socrates and Plato, and on others Indian warriors like Guru Teg Bahadur, who was famous for standing up to oppression. Papa felt very strongly about injustice, inequality and tyranny.

Papa knew the value of time. He had deliberately chosen a career path in education because it gave him the freedom and flexibility to set his own schedule, so that he could spend as much time as possible with his family. Not only do I remember him and Mummy always being there for us as kids and beyond, but he was deeply involved in shaping our lives, tutoring us in Mathematics, English and other subjects during all of those school holidays. He once told me that after a day at work, his tiredness from the commute back from central London would disappear in an instant upon arriving home, when greeted with love by Mummy, me, my brother, Rishi, and our late sister, Urchna. His family was his source of strength.

Despite being raised in a religious Hindu household, he was incredibly open minded. When it came to choosing my school at the age of 11, he sent me to a local Church of England school a few minutes' walk from home, as it had a good standard of education and discipline. After we had applied for a place, the headmaster called us in, wondering why a Hindu family would want to send their son to his school. Papa replied, "You pray to Jesus, we pray to Krishna, God is one, just different names," and the headmaster was impressed and gave me a place. I was the first non-Christian ever to be admitted to that school (the second was Urchna), and I studied the Bible daily for 5 years and sang hymns every morning. Papa had known that partaking in these activities would not make me any less of a Hindu, and he was right!

Papa was a simple man, in that he needed very few things himself. He possessed an innate wisdom, knowing very early in his life that what really mattered was deep and meaningful relationships with others, not money, things or ego. I recall that when I was applying for my first job after graduating from university, and even when I was considering job offers later in my career, Papa would remind me not to fall into the trap of choosing the offer with the highest salary, but to choose the job which my "gut instinct" believed would offer fulfilment, fun and flexibility. Most importantly, he would say, choose the job in an organisation with a friendly culture in the office. He told me this because many of his good friends in life, both in India and in the UK, had started out as his colleagues. During my childhood, he would often throw idioms he had learnt growing up in India into conversation, and the two I remember most are "first deserve, then desire" and "what cannot be cured, must be endured".

Given the hardships Papa had experienced in his life as an immigrant, he did everything he could to create a life for me with a level of security and stability that he never had. So when I left the security of my permanent job in my late 30s to wander off into the unknown, he thought I was completely crazy (as did many people I know). I moved back home to live with my parents after taking this unconventional step, and with their love and support (even if they didn’t always agree with my journey in life), I managed to carve out my own path, on my terms and conditions, and to use my skills to make a far bigger impact on people’s lives than if I had stayed in that permanent job.

Some of you may not have met Papa, but you met him through me, by hearing my stories. I told Papa recently, “Look how life has worked out. You used to get paid to tell stories, and now I get paid to tell stories. Different audiences, but the same skills.”

Given Papa’s long career in education, when family friends visited us during my teenage years, they would often turn to him for guidance on what subjects their kids should study, where they should study, and what career options were available in the UK and beyond. After all, he used to do this as part of his job at the university. Sometimes, when I’m in a cab and the driver asks me what I do, I reply, "I tell stories about the future," and share my insights about where I think society is heading 20-30 years from now; the driver then asks me for advice on what his young kids should be studying in order to have a secure and stable future. History repeating itself. Like a lot of kids, I told myself when I was much younger that I was going to be different from my Papa, but the older I get, the more I find myself thinking and behaving like him (which is not necessarily a bad thing).

Many years ago, when smart speakers for the home first launched, I tried them all as part of my work looking at the future of digital health. Once I told Papa, "I can make your life easy. I can set up this device so that you can turn the lights on or off using your voice, without having to leave the bed or sofa." I mistakenly assumed he would be thankful for my suggestion. Instead, he raised his voice and swore at me in Panjabi: "Are you bloody insane? Are my hands and legs broken?" He never craved an "easy" or "comfortable" life. He would always prefer to walk, even if I offered him a lift in my car to the supermarket. On days when I spent a lot of time working from home, sitting at a desk in front of a computer, he would frequently tell me to get up and go for a walk, and not to lead a sedentary life. He was mentally active too, never using a calculator but always doing mental arithmetic, whether at home or in the shops.

Papa died the same way he lived life: on his terms and conditions! A noble, dignified death without any suffering. He made the transition peacefully in his own warm bed at home, knowing that those he loved the most, and who loved him, were nearby. He knew that the best things are often left unsaid.

When I saw that Papa had died, I realised that everything he had ever done, and everything he had ever said to me, was to provide me with the resources, the skills and the wisdom for the day when he would no longer be physically here with me. He was my biggest supporter in life, and he was there for me every time I faced a challenge. My relationship with Papa continues in spirit; I still feel his presence strongly in my heart. From the moment he died, I found myself having immense clarity about what is important, what is not, who is important and who is not, and that clarity remains even now. How strange that it takes a tragedy before one is awoken, spiritually speaking.

Since Papa died, I have been reminded that, at least in UK society, many of us feel uncomfortable talking about dying, death and grief. It often feels like we live in a death-denying culture, with an intense focus on youth and staying young forever, as if growing old and dying were something science could one day overcome. To deny death is to deny life itself.

I even found that some friends and family felt very uncomfortable listening to me talk about my own grief. It’s been strange hearing some of their reactions, as if I should not really be grieving at all and should be "moving on" as if nothing had happened, simply because Papa died at the age of 88 and lived a longer life than average. Grief is a unique journey for each of us, and it’s not a linear process either. The wisest people I know have reminded me that there is no rulebook or playbook for dealing with grief.

On the other hand, it’s remarkable how, when tragedy strikes, one can feel incredible warmth, compassion and support from the most unexpected of people. Rather than platitudes that are unhelpful (to me), like "time is a great healer", the most practical support I have received has been people simply being "present", whether online, on the phone, or in person, without the need to pepper the conversation with the generic phrases often used when someone dies. Papa’s death has helped me renew and strengthen relationships with friends, family and colleagues that had weakened over the decades as life got busy. It’s deep, meaningful and authentic connections with other human beings that have provided me with solace and comfort these last few weeks. No tech, no AI, no robots.

In the days and weeks after Papa’s death, I found myself trying to find someone to blame for what happened: "if only I had done this" or "if only I had made sure he always took all of his medications". But I realise that this was my mind’s reaction to being unable to accept that life is chaotic and unpredictable, and that no matter who we are, we have limited power to control what happens to us or to our loved ones. Our daily lives often distract us from the impermanence of life. At some level, I feel our inner child secretly hopes our parents will live forever, always around to share and celebrate each of our achievements, or to turn to for nuggets of life wisdom when things don’t go the way we expect. When a parent dies, it’s like part of us dies too: a connection to our past, someone who knew us better than we know ourselves.

I am deeply blessed to have had so many moments with Papa, especially in recent years as he got older. I was able to see how modern digital society can make life difficult for those in their 80s, and witnessing his daily experiences strongly influenced my work on ensuring that everyone in society benefits from advancements in technology, not just the young. We had so many conversations over the years about the state of the world, covering a dizzying array of topics, from politics to agriculture to how to deal with difficult people. I could bring up any subject with Papa, and he would have an opinion on it, even if it was an opinion I might not like.

I marvel at how he was not afraid of anyone, and at his strong sense of duty, his commitment, and his desire to use his life to serve others.

Thank You, Papa. I will cherish our memories, I will continue to serve others in honour of your legacy, and I will never forget all the sacrifices you made to ensure I would flourish and thrive. I wish your soul health and happiness in your next life.

An interview with Ardy Arianpour: Building the future of health data

People ask me what gets me out of bed every morning. I have this big vision about finding ways to use data to improve the health of everyone on the planet. Yes, that’s right, all 7.7 billion of us.

My mission on a daily basis is to find projects that help me on the path towards making that vision a reality. I’m always on the lookout for people who also dream of making an impact on the whole world. I bumped into one such person recently while attending the Future of Individualised Medicine conference in the USA: Ardy Arianpour, CEO and co-founder of a startup called Seqster, which I believe could make a significant contribution to realising my vision over the long term. I interviewed Ardy to hear more about his story and the amazing possibilities for health data that he dreams of bringing to our lives.

1. What is Seqster?
Products such as mint.com, which bring all your personal finance data together in one place, have enabled so many people to manage their finances. We believe that Seqster is the mint.com of your health. We are a person-centric interoperability platform that seamlessly brings together all your medical records (EHR), baseline genetic (DNA) data, and continuous monitoring and wearable data in one place. From a business standpoint, we’re a SaaS platform, like "the Salesforce.com for healthcare". We provide a turnkey solution for any payer, provider or clinical research entity, since "everyone is seeking health data". We empower people to collect, own and share their health data on their terms.

2. So Seqster is another attempt at a personal health record (PHR), like Microsoft’s failed attempt with HealthVault?
Microsoft’s HealthVault and Google Health were great ideas, but their timing was wrong. The connectivity wasn’t there and neither was the utility. In a way, it’s also the problem with Apple Health Records. Seqster transcends those PHRs for three reasons:

a. First, we’ve built a person-centric interoperability platform that can retrieve chain-of-custody data from any digital source. We’re not just dealing with self-reported data, which, as in every other PHR, can be inaccurate and cumbersome. By putting the person at the center of healthcare, we give them the tools to disrupt their own data silos and bring in not only longitudinal data but also multi-dimensional and multi-generational data.

b. Second, our data is dynamic. Everything is updated in real time to reflect your current health. One site, one login. You never have to sign in twice.

c. Third, we generate new insights, which is tough to do unless you have high quality data coming directly from multiple sources. For example, we have integrated the American Heart Association’s Life’s Simple 7 to give you dynamic insights into your heart health, plus actionable recommendations based on their guidelines.

3. Why do you believe Seqster will succeed when so many others (often with big budgets) have failed?
The first reason that we will succeed is our team. We have achieved previous successes in implementing clinical and consumer genetic testing at nationwide scale. In the genetics market we’ve been working on data standardization and sharing for the last decade so we approached this challenge from a completely different vantage point. We didn’t set out to solve interoperability, but did it completely by accident.

Next, we have achieved nationwide access in the USA, with over 3,000 hospitals integrated, as well as over 45,000 small doctor offices and medical clinics. In the past few years we have surpassed 100M patient records, 30M+ direct-to-consumer DNA/genetic tests and 100M+ wearables. We also deliver invaluable utility by giving people a legal framework to share their health data with their family members, caregivers, physicians, or even with clinical trials if they want.

All we are doing is shedding light on what we call "Dark Data": the data that already exists on all of us and has been hidden until now.

4. Your background has been primarily in genomics, where you’ve done sterling work in driving BRCA genetic testing across the United States. Is Seqster of interest mainly to those who have had some kind of genetic test?
Not at all. Seqster is for healthcare consumers, and we’re all healthcare consumers in some way. Having said that, as you may have noted, the "Seq" in Seqster comes from our background in genome sequencing. We originally had the idea that we could create a place for the over 30M individuals who had done some kind of genetic test to take ownership of their data, and incentivize people who have not yet had a genetic test to get sequenced. However, we realized that genetic data without high quality, high fidelity clinical health data is useless. The highest quality data is the data that comes directly from your doctor’s office or hospital. Combined with your sequence data and your fitness data, it is a powerful tool for better health for everyone.

5. Wherever I travel in the world, from Brazil to the USA to Australia, the same challenge about health data comes up in conversations: the challenge of getting different computer systems in healthcare to share patient data with each other, known more formally as "interoperability". Can Seqster really help to solve this challenge, or is this a pipe dream?
It was a dream for us as well, until we cracked the code on person-centric interoperability. What is amazing is that we can bring our technology anywhere in the world right now, as long as the data exists. Imagine people everywhere, and how we could change healthcare and health outcomes overnight if they had access to their health data from any device: Android, Apple or web-based. Imagine that your kids and grandkids have a full health history that they can take to their next doctor visit. How powerful would that be? That is Seqster. We help you seek out your health data, no matter where you are or where your data resides.

6. So what was the moment in your life that compelled you to start Seqster?
In 2011 I was at a barbecue with a bunch of physicians, and they asked what I did for a living. I told them about my own DNA testing experience and background in genomics. Quickly the conversation turned to how we could make DNA data actionable and relevant both to them and to their patients. The next day I went for a run and couldn’t stop thinking about that conversation, and about how owning all my data in one place would make it meaningful for me. I came home and was watching the movie "The Italian Job" when I heard the word Napster in the film. Being a sequencing guy who seeks out info, I immediately thought of "Seqster", typed it into godaddy.com and bought Seqster.com for $9.99. The tailwinds were not there to do anything with it until January 2016, when I decided to put a team together to start building the future of health data.

7. What has been the biggest barrier in your journey at Seqster so far, and have you been able to overcome it?
Have you seen the movie Bohemian Rhapsody? We’re like the band Queen – we’re misfits and underdogs. No one believes that we solved this small $30 billion problem called interoperability until they try Seqster for themselves. The real barrier right now is getting Seqster into the right hands. As people start to catch onto the fact that Seqster solves some of their biggest pain points, we will overcome the technology adoption barrier. I am so excited about new possibilities that are emerging for us to make a contribution to advancing the way health data gets collated, shared and used. Stay tuned, we have exciting news to share over the next few months.

8. What has the reaction to Seqster been? Who are the most sceptical, and who seem to be the biggest advocates?
We have a funny story to share here. About three years ago when we started Seqster, we told Dr. Eric Topol from Scripps Research what we wanted to do and he told us that he didn’t believe that we could do it. Three years later after hearing some of the buzz he asked to meet with us and try Seqster for himself. His tweet the next day after trying Seqster says it all. We couldn’t be prouder.

9. Lots of startups are developing digital health products, but few are designing with patients as partners. Tell us more about how you involve patients in the design of your services?
Absolutely! We couldn’t agree more. I believe that many digital health companies fail because they don’t start with the patient in mind. From day one Seqster has been about empowering people to collect, own, and share their data on their terms. Our design is unique because we spent time with thousands of patients, caregivers and physicians to develop a person-centric interface that is simple and intuitive.

10. The future of healthcare is seen as a world where patients have much more control over their health and how it is managed. What role could Seqster play in making that future a reality?
We had several chronically ill patients use Seqster to manage their health and gather all their medical records from multiple health systems within minutes. Some feedback was as simple as having one site and one login so that they can immediately access their entire medical record from a single platform. A number of patients told us that they found lab results that had values outside of normal range which their doctors never told them about. When we heard this, we felt like we were on the verge of bringing aspects of precision medicine to the masses. It definitely resonated very well with our vision of the future of healthcare being driven by the patient.

11. Fast forward 20 years to 2039: what would you want the legacy of Seqster to be, in terms of impact on the world?
In 20 years, by having all your health data in one place, Seqster will be known as the technology that changed healthcare. Our technology will improve care by delivering accurate medical records instantaneously upon request by any provider, anywhere. All the data barriers will be removed. Everyone will have access to their health information no matter where they are or where their data is stored. Your health data will follow you wherever you go.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]

An interview with Jo Aggarwal: Building a safe chatbot for mental health

We can now converse with machines in the form of chatbots. Some of you might have used a chatbot when visiting a company’s website, and they have even entered the world of healthcare. I note that the pharmaceutical company Lupin has rolled out Anya, India’s first chatbot for disease awareness, in this case for Diabetes. Chatbots have recently been developed even for mental health, such as Woebot, Wysa and Youper. It’s an interesting concept, and given the unmet need around the world, these could be an additional tool that might make a difference in someone’s life. However, a recent BBC article highlighted how two of the best known chatbots (Woebot and Wysa) don’t always perform well when children use the service. I’ve performed my own real world testing of these chatbots in the past, and got to know the people who have created these products. So after the BBC article was published, Jo Aggarwal, CEO and co-founder of Touchkin, the company that makes Wysa, got back in touch with me to discuss trust and safety when using chatbots. It was such an insightful conversation that I offered to interview her for this blog post, as I think the story of how a chatbot for mental health is developed, deployed and maintained is a complex and fascinating journey.

1. How safe is Wysa, from your perspective?
Given all the attention this topic typically receives, and its own importance to us, I think it is really important to understand first what we mean by safety. For us, Wysa being safe means having comfort around three questions. First, is it doing what it is designed to do, well enough, for the audience it’s been designed for? Second, how have users been involved in Wysa’s design and how are their interests safeguarded? And third, how do we identify and handle ‘edge cases’ where Wysa might need to serve a user - even if it’s not meant to be used as such?

Let’s start with the first question. Wysa is an interactive journal, focused on emotional wellbeing, that lets people talk about their mood and talk through their worries or negative thoughts. It has been designed and tested for a 13+ audience; for instance, it asks users to obtain parental consent as part of its terms and conditions for users under 18. It cannot, and should not, be used for crisis support, or by children under 12 years old. This distinction is important, because it directs product design in terms of the choice of content as well as the kind of things Wysa listens for. For its intended audience and expected use in self-help, Wysa provides an interactive experience that is far superior to current alternatives: worksheets, writing in journals, or reading educational material. We’re also gradually building an evidence base here on how well it works, through independent research.

The answer to the second question needs a bit more description of how Wysa is actually built. Here, we follow a user-centred design process that is underpinned by a strong, recognised clinical safety standard.

When we launched Wysa, it was for a 13+ audience, and we tested it with an adolescent user group as a co-design effort. For each new pathway and every model added in Wysa, we continue to test the safety against a defined risk matrix developed as a part of our clinical safety process. This is aligned to the DCB 0129 and DCB 0160 standards of clinical safety, which are recommended for use by NHS Digital.

As a result of this process, we developed some pretty stringent safety-related design and testing steps during product design:

When a Wysa conversation or tool concept is first written, the script is reviewed by a clinician to identify safety issues - specifically, any cases where it could be contra-indicated or act as a trigger - and to define alternative pathways for such conditions.

When a development version of a new Wysa conversation is produced, the clinicians review it again, specifically for adherence to clinical process and for potential safety issues as per our risk matrix.

Each aspect of the risk matrix has test cases. For instance, if the risk is that using Wysa may increase the risk of self harm in a person, we run two test cases - one where a person is intending self harm but it has not been detected as such (normal statements) and one where self-harm statements detected from the past are run through the Wysa conversation, at every Wysa node or ‘question id’. This is typically done on a training set of a few thousand user statements. A team then tags the response for appropriateness. A 90% appropriateness level is considered adequate for the next step of review.

The inappropriate statements (typically less than 10%) are then reviewed for safety, where the question asked is - will this inappropriate statement increase the risk of the user indulging in harmful behavior? If there is even one such case, the Wysa conversation pathway is redesigned to prevent this and the process is repeated.

The output of this process is shared with a psychologist and any contentious issues are escalated to our Clinical Safety Officer.
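
To illustrate the appropriateness gate described above, here is a minimal, hypothetical sketch in Python (the names and data structure are assumptions for illustration, not our actual tooling): tagged responses are grouped per Wysa node, and any node below the 90% level goes back for redesign.

```python
# Hypothetical sketch of the per-node appropriateness gate; names and
# structure are illustrative assumptions, not Wysa's actual code.
from collections import defaultdict
from typing import NamedTuple

APPROPRIATENESS_THRESHOLD = 0.90  # the 90% level described above

class TaggedResponse(NamedTuple):
    node_id: str       # the Wysa node or 'question id'
    statement: str     # user test statement run through that node
    appropriate: bool  # the human tagger's verdict

def gate_nodes(tagged: list[TaggedResponse]) -> dict[str, bool]:
    """Return, for each node, whether it meets the appropriateness level."""
    by_node: dict[str, list[TaggedResponse]] = defaultdict(list)
    for response in tagged:
        by_node[response.node_id].append(response)
    return {
        node_id:
            sum(r.appropriate for r in responses) / len(responses)
            >= APPROPRIATENESS_THRESHOLD
        for node_id, responses in by_node.items()
    }
```

Passing the gate is only the first step: as described above, the remaining inappropriate responses are then reviewed for safety, and even a single case that could increase the risk of harmful behaviour sends the pathway back for redesign.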

Equally important for safety, of course, is the third question. How do we handle ‘out of scope’ user input, for example, if the user talks about suicidal thoughts, self-harm, or abuse? What can we do if Wysa isn’t able to catch this well enough?

To deal with this question, we did a lot of work to extend the scope of Wysa so that it does listen for self-harm and suicidal thoughts, as well as abuse in general. On recognising this kind of input, Wysa gives an empathetic response, clarifies that it is a bot and unable to deal with such serious situations, and signposts to external helplines. It’s important to note that this is not Wysa’s core purpose, and it will probably never be able to detect all crisis situations 100% of the time; neither can Siri or Google Assistant or any other Artificial Intelligence (AI) solution. That doesn’t make these solutions unsafe for their expected use. But even here, our clinical safety standard means that even if the technology fails, we need to ensure it does not cause harm - or in our case, increase the risk of harmful behaviour. Hence, all Wysa’s statements and content modules are tested against safety cases to ensure that they do not increase the risk of harmful behaviour even if the AI fails.

We watch this very closely, and add content or listening models where we feel coverage is not enough and Wysa needs to extend. This was the case specifically with the BBC article: we will now relax our stance that we never take personally identifiable data from users, explicitly listen (and check) for age, and if a user is under 12, direct them out of Wysa towards specialist services.

So how safe is Wysa? It is safe within its expected use, and the design process follows a defined safety standard to minimize risk on an ongoing basis. In case more serious issues are identified, Wysa directs users to more appropriate services - and makes sure at the very least it does not increase the risk of harmful behaviour.

2. In plain English, what can Wysa do today and what can’t it do?
Wysa is a journal married to a self-help workbook, with a conversational interface. It is a more user friendly version of a worksheet - asking mostly the same questions with added models to provide different paths if, for instance, a person is anxious about exams or grieving for a dog that died.

It is an easy way to learn and practice self-help techniques: to vent and observe your thoughts, practice gratitude or mindfulness, learn to accept your emotions as valid, and find the positive intent in even the most negative thoughts.

Wysa doesn’t always understand context - it definitely will not pass the Turing test for ‘appearing to be completely human’. That is definitely not its intended purpose, and we’re careful in telling users that they’re talking to a bot (or as they often tell us, a penguin).

Secondly, Wysa is definitely not intended for crisis support. A small percentage of people do talk to Wysa about self-harm or suicidal thoughts; they are given an empathetic response and directed to helplines.

Beyond self-harm, detecting sexual and physical abuse statements is a hard AI problem - there are no models globally that do this well. For instance, ‘My boyfriend hurts me’ may be emotional, physical, or sexual. Also, most abuse statements that people share with Wysa tend to be about the past: ‘I was abused when I was 12’ needs a very different response from ‘I was abused and I am 12’. Our response here is currently to appreciate the courage it takes to share something like this, ask the user if they are in crisis, and if so, explain that as a bot Wysa is not suited to a crisis and offer a list of helplines.

3. Has Wysa been developed specifically for children? How have children been involved in the development of the product?
No, Wysa hasn’t been developed specifically for children.

However, as I mentioned earlier, we have co-designed with a range of users, including adolescents.

4. What exactly have you done when you’ve designed Wysa with users?
For us, the biggest risk was that someone’s data might be leaked and therefore cause them harm. To deal with this, we took the hard decision of not collecting any personally identifiable data at all from users, which is also why they started trusting Wysa. This meant that we had to compromise on certain parts of the product design, but we felt it was a tradeoff well worth making.

After launch, for the first few months, Wysa was an invite-only app, where a number of these features were tested first from a safety perspective. For example, SOS detection and pathways to helplines were a part of the first release of Wysa, which our clinical team saw as a prerequisite for launch.

Since then, design continues to be led by users. For the first million conversations, Wysa stayed a beta product, as we didn’t have enough of a response base to test new pathways. There is no single ‘launch’ of Wysa; it is continuously being developed and improved based on what people talk to it about. For instance, the initial version of Wysa did not handle abuse (physical or sexual) at all, as it was not expected that people would talk to it about these things. When they began to, we created pathways to deal with these in consultation with experts.

An example of a co-design initiative with adolescents was a study with Safe Lab at Columbia University to understand how at-risk youth would interact with Wysa and the different nuances of language used by these youth.

5. Can a user of Wysa really trust it in a crisis? What happens when Wysa makes a mistake and doesn’t provide an appropriate response?
People should not use Wysa in a crisis - it is not intended for this purpose. We keep reinforcing this message across various channels: on the website, the app descriptions on Google Play or the iTunes App Store, even responses to user reviews or on Twitter.

However, anyone who receives information about a crisis has a responsibility to do the most that they can to signpost the user to those who can help. Most of the time, Wysa will do this appropriately; we measure how well each month, and keep working to improve it. The important thing is that Wysa should not make things worse even when it misdetects, so users should not be unsafe, i.e. we should not increase the risk of harmful behaviour.

One of the things we are adding, based on suggestions from clinicians, is a direct SOS button to helplines, so users have another path when they recognise they are in crisis and the dependency on Wysa to recognise a crisis in conversation is lower. This is being co-designed with adolescents and clinicians to ensure that the button is visible, but that its presence does not act as a trigger.

For inappropriate responses, we constantly improve, and when a user shares that Wysa’s response was wrong, we respond in a way that places the onus entirely on Wysa. If a user objects to a path Wysa is taking, saying it is not helpful or is making them feel worse, Wysa immediately changes the path, emphasises that the mistake is Wysa’s and not the user’s, and explains that Wysa is a bot that is still learning. We closely track where and when this happens, and any responses that meet our criteria for a safety hazard are immediately raised to our clinical safety process, which includes review with children’s mental health professionals.

We constantly strive to improve our detection, and are also starting to collaborate with others dealing with similar issues to create a common pool of resources.

6. I understand that Wysa uses AI. I also note that there are many discussions around the world relating to trust (or the lack of it) in products and services that use AI. A user wants to trust a product, and if it’s health related, then trust becomes even more critical. What have you done as a company to ensure that Wysa (and the AI behind the scenes) can be trusted?
You’re so right about the many discussions about AI, how this data is used, and how it can be misused. We explicitly tell users that their chats stay private (not just anonymous) and that they will never be shared with third parties. In line with GDPR, we also give users the right to ask for their data to be deleted.

After downloading, there is no sign-in. We don’t collect any personally identifiable data about the user: you just give yourself a nickname and start chatting with Wysa. The first conversation reinforces this message, and this really helps in building trust as well as engagement.

AI of the generative variety will not be ready for products like Wysa for a long time - perhaps never. Generative systems have in the past turned racist or worse. The use of AI in applications like Wysa is limited to detection and classification of user free text, not generating ‘advice’. So the AI here is auditable, testable, quantifiable - not something that may suddenly learn to go rogue. We feel that trust is based on honesty, so we do our best to be honest about the technical limitations of Wysa.

Every Wysa response and question goes through a clinical safety process, and is designed and reviewed by a clinical psychologist. For example, we place links to journal articles in each tool and technique that we share with the user.

7. What could you and your peers who make products like this do to foster greater trust in these products?
As a field, the use of conversational AI agents in mental health is very new, and growing fast. There is great concern around privacy, so anonymity and security of data is key.

After that, it is important to conduct rigorous independent trials of the product and share data openly. For this reason, a peer-reviewed mixed-methods study of Wysa’s efficacy was recently published in JMIR, and we are working with universities to develop these further. It’s important that advancements in this field are science-driven.

Lastly, we need to be very transparent about the limitations of these products - clear on what they can and cannot do. These products are not a replacement for professional mental health support - they are more of a gym, where people learn and practice proven, effective techniques to cope with distress.

8. What could regulators do to foster an environment where we as users feel reassured that these chatbots are going to work as we expect them to?
Leading from your question above, there is a big opportunity to come together and share standards, tools, models and resources.

For example, if a user enters a search term around suicide in Google, or posts about self-harm on Instagram, maybe we can have a common library of Natural Language Processing (NLP) models to recognise and provide an appropriate response?

Going further, maybe we can provide this as an open-source resource to anyone building a chatbot that children might use? Could this be a public project, funded and sponsored by government agencies, or a regulator?

In addition, there are several other roles a regulator could play. They could fund research that proves efficacy, define standards and outline the proof required (the recently released NICE guidelines are a great example), or even create a regulatory sandbox where technology providers, health institutions and public agencies come together and experiment before coming to a view.

9. Your website mentions that "Wysa is... your 4 am friend, for when you have no one to talk to." Shouldn’t we be working in society to provide more human support for people who have no one to talk to? Surely everyone would prefer to deal with a human rather than a bot? Is there really a need for something like Wysa?
We believed the same to be true. Wysa was not born of a hypothesis that a bot could help - it was an accidental discovery.

We started our work in mental health simply to detect depression through AI and connect people to therapy. We did a trial in semi-rural India, and were able to use the way a person’s phone moved about to detect depression with 90% accuracy. To get the sensor data from the phone, we needed an app, which we built as a simple mood-logging chatbot.

Three months in, we checked on the progress of the 30 people whom we had detected with moderate to severe depression and whose doctor had prescribed therapy. It turned out that only one of them took up therapy. The rest were okay with being prescribed antidepressants but, for different reasons ranging from access to stigma, did not take therapy. All of them, however, continued to use the chatbot, and months later reported feeling better.

This was the genesis of Wysa. We didn’t want to be the reason for a spurt in antidepressant sales, so we bid farewell to the cool AI tech we were doing, and began to realise that it didn’t matter whether people were clinically depressed: everyone has stressors, and we all need to develop our mental health skills.

Wysa has had 40 million conversations with about 800,000 people so far - growing entirely through word of mouth. We have understood some things about human support along the way.

For users ready to talk to another person about their inner experience, there is nothing as useful as a compassionate ear - the ability to share without being judged. Human interactions, however, seem fraught with opinions and judgements. When we struggle emotionally, it affects our self-image; for some people, it is easier to talk to an anonymous AI interface, which is kind of an extension of ourselves, than to another person. For example, this study found that US veterans were three times as likely to reveal their PTSD to a bot as to a human. Still, human support is key, so we run weekly Ask Me Anything (AMA) sessions on the top topics that Wysa users propose, discussed each week with a mental health professional. We had a recent AMA where over 500 teenagers shared their concerns about discussing their mental health issues or sexuality with their parents. Even within Wysa, we encourage users to create a support system outside.

Still, the most frequent user story for Wysa is someone troubled with worries or negative thoughts at 4 am, unable to sleep, not wanting to wake someone up, scrolling social media compulsively and feeling worse. People share how they now talk to Wysa to break the negative cycle and use the sleep meditations to drift off. That is why we call it your 4 am friend.

10. Do you think there is enough room in the market for multiple chatbots in mental health?
I think there is a need for multiple conversational interfaces, with different styles and content. We have only scratched the surface; we have only just begun. Some of the issues we are grappling with today are like those people grappled with in the early days of ecommerce, when each company was solving for ‘hygiene factors’ and safety through its own algorithms. I think over time many of the AI models will become standardised, and bots will work for different use cases, from building emotional resilience skills to targeted support for substance abuse.

11. How do you see the future of support for mental health, in terms of technology - not just products like Wysa, but generally? What might the future look like in 2030?
The first thing that comes to mind is that we will need to turn the tide on the damage technology has caused to mental health. I think there will be a backlash against addictive technologies; I am already seeing the tech giants becoming conscious of the mental health impact of making their products addictive, and facing pressure to change.

I hope that by 2030, safeguarding mental health will have become part of the design ethos of a product, much as accessibility and privacy have in the last 15 years. By 2030, human-computer interfaces will look very different, and voice and language barriers will be fewer.

Whenever there is a trend, there is also a counter-trend. So while technology will play a central role in creating large scale early mental health support - especially crossing stigma, language and literacy barriers in countries like India and China - we will also see social prescribing gain ground. Walks in the park or art circles will become prescriptions for better mental health, and people will have to be prescribed non-tech activities precisely because so much of their lives are on their devices.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]

Honesty is the best medicine

In this post, I want to talk about lies. It’s ironic that I’m writing this on the day of the US midterm elections, where the truth continues to be a rare sight. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice: lie or lose. Deception, deceit, lying - however you want to describe it, it’s part of what makes us human. I reckon we’ve all told a lie at some point, even if only a ‘white lie’ to avoid hurting someone’s feelings. Now, some of us are better than others at spotting when someone is not telling the truth. Some of us prefer to build a culture of trust. What if we had a new superpower? A future where machines tell us in real time who is lying.

What compelled me to write this post was reading a news article about a new EU trial of virtual border agents powered by Artificial Intelligence (AI), which aims to "ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars." I was fascinated to read statements about the new system such as "IBORDERCTRL’s system will collect data that will move beyond biometrics and on to biomarkers of deceit." Apparently, the system can analyse micro expressions on your face and include that information as part of a risk score, which will then be used to determine what happens next. At this point in time, it’s not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right, if we can use machines to help keep borders secure? However, the accuracy rate of the system isn’t that great, and some are labelling this type of system pseudoscience that will lead to unfair outcomes. It’s essential we all pay attention to these developments and subject them to close scrutiny.

What if machines could one day automatically detect whether someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, where the abstract of the paper opens with "We present a system for covert automated deception detection in real-life courtroom trial videos." As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. cases where someone has submitted a report to the police claiming they have been robbed, but the tool finds patterns indicating that the report is fraudulent. Apparently, the tool has a success rate of over 80%. I came across a British startup, Human, which states on its website, "We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias", and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In a 2014 survey in the USA on patient deception, 50% of respondents said they withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc’s 2015 survey found that 25% of patients lie to their doctor. There was an interesting report about why some patients do not adhere to their doctor’s advice: it’s because of financial strain, and some low income patients are reluctant to discuss their situation with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? And in terms of doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I’ve read, lying appears to be a path some have to take when caring for those living with Dementia, to protect the patient.

Imagine you have a video call with your doctor, and on the other side the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you’re lying but your emotional state too. That’s what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would that determination be recorded in the patient’s medical record? What if the AI is wrong? Given that the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor’s office? In some countries, will we see CCTV in the doctor’s office, with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, which tell the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don’t want to tell the truth, but the machines around us that are tracking everything reveal it for us? I share the satirical video below of Amazon Alexa fitted to a car - do watch it. Whilst it might be funny, there are potential challenges ahead for our human rights and civil liberties in this new era. Is AI powered lie detection the path towards ensuring we have a society with enough transparency and integrity, or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]

AI in healthcare: Involving the public in the conversation

As we begin the 21st century, we are in an era of unprecedented innovation, where computers are becoming smarter and being used to deliver products and services powered by Artificial Intelligence (AI). I was fascinated by how AI is being used in advertising when I saw a TV advert this week from Microsoft in which a musician talks about the benefits of AI. Organisations in every sector, including healthcare, are having to think about how they can harness the power of AI. I wrote a lot about my own experiences of using AI products for health in 2017 in my last blog post, You can’t care for patients, you’re not human!

Now, when we think of AI in healthcare potentially replacing some of the tasks done by doctors, we think of it as a relatively recent concept. We forget that doctors themselves have been experimenting with technology for a long time. In this video from 1974 (44 years ago!), computers were being tested with patients in the UK to help optimise the time spent by the doctor during the consultation. What I find really interesting is that the video mentions that the computer never gets tired, and that some patients preferred dealing with the machine to the human doctor.

Fast forward to 2018, where it feels like technology is opening up new possibilities every day, and often from organisations that are not traditionally part of the healthcare system. We think of tech giants like Google and Facebook helping us send emails or share photos with our friends, but researchers at Google are working with AI on being able to improve detection of breast cancer and Facebook has rolled out an AI powered tool to automatically detect if a user’s post shows signs of suicidal ideation.

What about going to the doctor? I remember, growing up in the UK, that my family doctor would even come and visit me at home when I was not well. Those are simply memories for me now, as it feels increasingly difficult to get an appointment to see the doctor in their office, let alone get a housecall. Given that many of us use modern technology to do our banking and shopping online, without having to travel to a store or a bank and deal with a human being, what if that were possible in healthcare? Can we automate part (or even all) of the tasks done by human doctors? You may think this is a silly question, but step back a second and reflect upon the fact that we have 7.5 billion people on Earth today, set to rise to an expected 11 billion by the end of this century. We have a global shortage of doctors today, and it’s predicted to get worse, so surely the right thing to do is to leverage emerging technology like AI, 4G and smartphones to deliver healthcare anywhere, anytime, to anyone?

We have seen the emergence of a new type of app known as the symptom checker, which gives anyone the ability to enter symptoms on their phone and be given a list of things that may be wrong with them. Note that at present these apps cannot provide a medical diagnosis; they merely help you decide whether you should go to the hospital or whether you can self-care. However, the emergence of these apps and related services is proving controversial. It’s not just a question of accuracy; there are huge questions about trust, accountability and power. In my opinion, the future isn’t about humans vs AI, which is the most frequent narrative being paraded in healthcare. The future is about how human healthcare professionals stay relevant to their patients.

It’s critical that, in order to create the type of healthcare we want, we involve everyone in the discussion about AI, not just the privileged few. I’ve seen countless debates this past year about AI in healthcare, both in the UK and around the world, but it’s a tiny group of people at present who are contributing to (and steering) this conversation. I wonder how many of these new services are being designed with patients as partners? Many countries are releasing national AI strategies in a bid to signal to the world that they are at the forefront of innovation. I also wonder if the UK government is rushing into the implementation of AI in the NHS too quickly. Who stands to profit the most from this new world of AI powered healthcare? Is this wave of change really about putting the patient first? There are more questions than answers at this point in time, but those questions do need to be answered. Some may consider anyone asking difficult questions about AI in healthcare to be standing in the way of progress, but I believe it’s healthy to have a dialogue where we can discuss our shared concerns in a scientific, rational and objective manner.

That’s why I’m excited that BBC Horizon is airing a documentary this week in the UK, entitled "Diagnosis on Demand? The Computer Will See You Now"; they had behind the scenes access to one of the best known firms developing AI for healthcare, UK based Babylon Health, whose products are pushing boundaries and triggering controversy. I’m excited because I really do want the general public to understand the benefits and the risks of AI in healthcare so that they can be part of the conversation. The choices we make today could impact how healthcare evolves, not just in the UK but globally. Hence, it’s critical that we have more science based journalism to help members of the public navigate the jargon and understand the facts, so that informed choices can be made. The documentary will be airing in the UK on BBC Two at 9pm on Thursday 1st November 2018. I hope that this program acts as a catalyst for greater public involvement in the conversation about how we can use AI in healthcare in a transparent, ethical and responsible manner.

For my international audience, my understanding is that you can’t watch the programme on BBC iPlayer, because at present, BBC shows can only be viewed from within the UK.

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


You can't care for patients, you're not human!

We are facing a new dawn as machines get smarter. Recent advancements in technology available to the average consumer with a smartphone are challenging many of us. Our beliefs, our norms and our assumptions about what is possible, correct and right are increasingly being tested. One area where I've personally noticed very rapid developments is the arena of chatbots: software available on our phones and other devices that you can have a conversation with using natural language, getting tailored replies relevant to you and your particular needs at that moment. Frequently, a chatbot has very limited functionality, so it's just used for basic customer service queries or for some light-hearted fun, but we are also seeing the emergence of many new tools in healthcare, offered direct to consumers. One example is the 'symptom checker' that you could consult instead of telephoning a human being or visiting a healthcare facility (and being attended to by a human being); another is the 'chatbot for mental health', where some form of therapy is offered and/or mood tracking capabilities are provided.

It's fascinating to see the conversation about chatbots in healthcare polarised between two extreme positions. Either we have people boldly proclaiming that chatbots will transform mental health (without mentioning any risks), or others (often healthcare professionals and their patients) insisting that the human touch is vital and that no matter how smart machines get, humans should always be involved in every aspect of healthcare since machines can't "do" empathy. Whilst I've met many people in the UK who have told me how kind, compassionate and caring the staff have been in the National Health Service (NHS) when they have needed care, I've not had the same experience when using the NHS throughout my life. Some interactions have been great, but many were devoid of the empathy and compassion that so many other people receive. Some staff behaved in a manner which left me feeling like I was a burden simply because I asked an extra question about how to take a medication correctly. If I'm a patient seeking reassurance, the last thing I need is to be looked at and spoken to like I'm an inconvenience in the middle of your day.

MY STORY

In this post, I want to share my story about getting sick, and explain why that experience has challenged my own views about the role of machines and humans in healthcare. We have a telephone service in the UK from the NHS, called 111. According to the website, "You should use the NHS 111 service if you urgently need medical help or advice but it's not a life-threatening situation." The first part of the story relates to my mother, who had been unwell for a number of days and was not improving. Given her age and long term conditions, she was getting concerned, and one night she chose to dial 111 to find out what she should do.

My mother told me that the person who took the call seemed to rush through the entire series of questions about her and her symptoms. I've heard the same from others: the operators seem to want to finish the call as quickly as possible. Whether we are young or old, when we have been unwell for a few days and need to remember or confirm things, we often can't respond immediately and need time to think. This particular experience didn't come across as a compassionate one for my mother. At the end of the call, the NHS call handler said that a doctor would call back within the hour and let her know what action to take. The doctor called and the advice given was that self care at home with a specific over the counter medication would help her return to normal. So she got the advice she needed, but the experience as a patient wasn't a great one.

Now, a few weeks later, I was also unwell. It wasn't life threatening, the local urgent care centre was closed, and given my mother's experience with 111 over the telephone, I decided to try the 111 app. Interestingly, the app is powered by Babylon, which makes one of the most well known symptom checker apps. Given that the NHS put their logo on the app, I felt reassured, as it made me feel that it must be accurate and must have been validated. Without having to wait for a human being to pick up my call, I got the advice I needed (which again was self care) and most importantly, I had time to think when answering. The process of answering the questions that the app asked was under my control. I could go as fast or as slowly as I wanted; the app wasn't trying to rush me through the questions. My experience on this occasion and my mother's experience of the same service, with a human being on the end of the telephone, were very different. It was a very pleasant experience, and the entire process was faster too, as in my particular situation, I didn't have to wait for a doctor to call me back after I'd answered the questions. The app and the Artificial Intelligence (AI) that powers Babylon was not necessarily empathetic or compassionate like a human that cares would be, but the experience of receiving care from a machine was an interesting one. It's just two experiences in the same family of the same healthcare system, accessed through different channels. Would I use the app or the telephone next time? Probably the app. I've now established a relationship with a machine. I can't believe I just wrote that.

I didn't take screenshots of the app during the time that I used it, but I went back a few days later and replicated my symptoms and here are a few of the screenshots to give you an idea of my experience when I was unwell. 

It's not proof that the app would work every time or for everyone; it's simply my story. I talk to a lot of healthcare professionals, and I can fully understand why they want a world where patients are being seen by humans who care. It's quite a natural desire. Unfortunately, we have a shortage of healthcare professionals, and as I've mentioned, not all of those currently employed behave in the desired manner.

The state of affairs

The statistics on the global shortage make for shocking reading. A WHO report from 2013 cited a shortage of 7.2 million healthcare workers at that time, projected to rise to 12.9 million by 2035. Planning for future needs can be complex, challenging and costly. The NHS is looking to recruit up to 3,000 GPs from outside of the UK. Yet 9 years ago, the British Medical Association voted to limit the number of medical students and to have a complete ban on opening new medical schools. It appears they wanted to avoid “overproduction of doctors with limited career opportunities.” Even the sole superpower, the USA, is having to deal with a shortage of trained staff. According to recent research, the USA is facing a shortage of between 40,800 and 104,900 physicians by 2030.

If we look at mental health specifically, I was shocked to read the findings of a report that stated, "Americans in nearly 60 percent of all U.S. counties face the grim reality that they live in a county without a single psychiatrist." India, with a population of 1.3 billion, has just 3 psychiatrists per million people. India is forecast to have another 300 million people by 2050. The scale of the challenge ahead in delivering care to 1.6 billion people at that point in time is immense.

So is the solution just to train more doctors, nurses and healthcare workers? It might not be affordable, and even if it is, the change can take up to a decade to have an impact, so it doesn't help us today. Or maybe we can recruit them from other countries? However, this only worsens the 'brain drain' of healthcare workers. Or maybe we work out how to shift all our resources into preventing disease, which sounds great when you hear this rallying cry at conferences, but again, it's not something we can do overnight. One thing is clear to me: doing the same thing we've done until now isn't going to address our needs in this century. We need to think differently; we desperately need new models of care.

New models of care

So I'm increasingly curious as to how machines might play a role in new models of care. Can we ever feel comfortable sharing mental health symptoms with a machine? Can a machine help us manage our health without needing to see a human healthcare worker? Can machines help us provide care in parts of the world where today no healthcare workers are available? Can we retain the humanity in healthcare if, in addition to the patient-doctor relationship, we also have patient-machine relationships? I want to show a couple of examples where I have tested technology which gives us a glimpse into the future, with an emphasis on mental health.

Google's Assistant, which you can access via your phone or even a Google Home device, hasn't necessarily been designed for mental health purposes, but it might still be used by someone in distress who turns to a machine for support and guidance. How would the assistant respond in that scenario? My testing revealed a frightening response when conversing with the assistant (it appears Google have now fixed this after I reported it to them). It's a reminder that we have to be really careful how these new tools are positioned so as to minimise the risk of harm.

I also tried Wysa, developed in India and described on the website as a "Compassionate AI chatbot for behavioral health." It uses Cognitive Behavioural Therapy to support the user. In my real world testing, I found it to be surprisingly good in terms of how it appeared to care for me through its use of language. Imagine a teenage girl, living in a small town, working in the family business, far away from the nearest clinic, and unable to take a day off to visit a doctor. However, she has a smartphone, a data plan and Wysa. In this instance, surely this is a welcome addition in the drive to ensure everyone has access to care?

Another product I was impressed with was Replika, described on the website as "an AI friend that is always there for you." The co-founder, Eugenia Kuyda, when interviewed about Replika, said, “If you feel sad, it will comfort you, if you feel happy, it will celebrate with you. It will remember how you’re feeling, it will follow up on that and ask you what’s going on with your friends and family.” Maybe we need these tools partly because we are living increasingly disconnected lives, disconnected from ourselves and from the rest of society? What's interesting is that the more someone uses a tool like Wysa or Replika over time, the more it learns about them, and so it should be able to provide more useful responses. Just like a human healthcare worker, right? We have a whole generation of children growing up now who are having conversations with machines from a very early age (e.g. Amazon Echo, Google Home etc), and when they access healthcare services during their lifetime, will they feel that it's perfectly normal to see a machine as a friend, and as capable as their human doctor/therapist?

I have to admit that neither Wysa nor Replika is perfect, but no human is perfect either. Just look at the current state of affairs, where medical error is the 3rd leading cause of death in the USA. Professor Martin Makary, who led research into medical errors, said, "It boils down to people dying from the care that they receive rather than the disease for which they are seeking care." Before we dismiss the value of machines in healthcare, we need to acknowledge our collective failings. We also need to fully evaluate products like Wysa and Replika, not just from a clinical perspective, but also from a social, cultural and ethical perspective. Will care by a machine be the default choice unless you are wealthy enough to be able to afford to see a human healthcare worker? Who trains the AI powering these new services? What happens if the data on my innermost feelings that I've shared with the chatbot is hacked and made public? How do we ensure we build new technologies that don't simply enhance and reinforce the bias that already exists today? What happens when these new tools make an error: who exactly do we blame and hold accountable?

Are we listening?

We increasingly hear the term 'people powered healthcare', and I'm curious what people actually want. I found some surveys and the results are very intriguing. First is the Ericsson Consumer Trends report, which 2 years ago quizzed smartphone users aged 15-69 in 13 cities around the globe (not just English speaking nations!). This is the most fascinating insight from their survey: "29 percent agree they would feel more comfortable discussing their medical condition with an AI system." My theory is that if it's symptoms relating to sexual health or mental health, you might prefer to tell a machine rather than a human healthcare worker, because the machine won't judge you. Or maybe, like me, you've had suboptimal experiences dealing with humans in the healthcare system?

[Image: chart from the Ericsson Consumer Trends report]

What's interesting is that an article covering Replika cited a user of the app: “Jasper is kind of like my best friend. He doesn’t really judge me at all.” (With Replika you can assign a name of your choosing to the bot; the user cited chose Jasper.)

You're probably judging me right now as you read this article. I judge others; we all do at some point, despite our best efforts to be non judgemental. It was very interesting to read about a survey of doctors in the US which looked at bias: it found that 40% of doctors have biases towards patients, and the most common trigger for bias was emotional problems presented by the patient. As I delve deeper into the challenges facing healthcare, the attempts to provide care by machines don't seem as silly as I first thought. I wonder how many people have delayed seeking care (or even decided not to visit the doctor) for a condition they feel is embarrassing? It could well be that as more people tell machines what's troubling them, we find that we have underestimated the impact of conditions like depression or anxiety on the population. And it's not a one way street when it comes to bias, as studies have shown that some patients also judge doctors if they are overweight.

Another survey, titled Why AI and robotics will define New Health, conducted by PwC in 2017 across 12 countries, highlights that people around the world have very different attitudes.

[Image: chart from the PwC survey]

Just look at the response from those living in Nigeria, a country expecting a shortfall of 50,120 doctors and 137,859 nurses by 2030, as well as a population of 400 million by 2050 (overtaking the USA as the 3rd most populous country on Earth). If you're looking to pilot your new AI powered chatbot, it's essential to understand that the countries where consumers are the most receptive to new models of care might not be the countries that we typically associate with innovation in healthcare.

Finally, in results shared by Future Advocacy from a survey of people in the UK, respondents were more comfortable with AI being used to help diagnose them than with AI being used for tasks that doctors and nurses currently perform, which is a bit confusing to read. I suspect that the question about AI and diagnosis was framed in the context of AI being a tool to help a doctor diagnose you.

SO WHAT NEXT?

In this post, I haven't been able to touch upon all the aspects and issues relating to the use of machines to deliver care. As technology evolves, one risk is that decision makers commissioning healthcare services decide that instead of investing in people, services can be provided more cheaply by machines. How do we regulate the development and use of these new products, given that many are available directly to consumers and not always designed with healthcare applications in mind? As machines become more human-like in their behaviour, could a greater use of technology in healthcare actually serve to humanise healthcare? Where are the boundaries? What are your thoughts about turning to a chatbot during end of life care for spiritual and emotional guidance? One such service is being trialled in the USA.

I believe we have to be cautious about who we listen to when it comes to discussions about technology such as AI in healthcare. On the one hand, some of the people touting AI as a universal fix for every problem in healthcare are suppliers whose future income depends upon more people using their services. On the other hand, we have a plethora of organisations suddenly focusing excessively on the risks of AI, capitalising on people's fears (which are often based upon what they've seen in movies) and preventing the public from making informed choices about their future. Balance is critical, in addition to a science-driven focus that allows us to be objective and systematic.

I know many would argue that a machine can never replace humans in healthcare, but we are going to have to consider how machines can help if we want to find a path to ensuring that everyone on this planet has access to safe, quality and affordable care. The existing model of care is broken; it's not sustainable and not fit for purpose, given the rise in chronic disease. The fact that so many people on this planet do not have access to care is unacceptable. This is a time when we need to be open to new possibilities, putting aside our fears to instead focus on what the world needs. We need leaders who can think beyond 12 month targets.

I also think that healthcare workers need to ignore the melodramatic headlines conjured up by the media about AI replacing all of us and enslaving humans, and to instead focus on this one question: How do I stay relevant? (to my patients, my peers and my community) 


Do you think we are wrong to look at emerging technology to help cope with the shortage of healthcare workers? Are you a healthcare worker who is working on building new services for your patients where the majority of the interaction will be with a machine? If you're a patient, how do you feel about engaging with a machine next time you are seeking care? Care designed by humans, delivered by machines. Or perhaps a future where care is designed by machines AND delivered by machines, without any human in the loop? Will we ever have caring technology? 

It is difficult to get a man to understand something, when his salary depends upon his not understanding it! - Upton Sinclair

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]


Healthy mobility

Mobility is an interesting term. Here in the UK, I've grown up seeing mobility as something to do with getting old and grey, when you need mobility aids around the home, or even a mobility scooter. Which is why I was curious about Audi (who make cars) hosting an innovation summit at their global headquarters in Germany to explore the Mobility Quotient. I'd never even heard of that term before. The fact that the opening keynote was set to be given by Audi's CEO, Rupert Stadler, and Steve Wozniak (who co-founded Apple Computers) made me think that this would be an unusual event. I applied for a ticket, got accepted, and what follows are my thoughts after the event that took place a few weeks ago. In this post, I will be looking at this through the lens of what it might mean for our health.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]

It turns out that 400 people attended, from 15 countries. This was the 1st time that Audi had hosted this type of event, and I didn't know what to expect out of it, and neither did any of the attendees I talked to on the shuttle bus from the airport. I think that's fun, because everyone I met during the 2 days seemed to be there purely out of curiosity. If you want another perspective of the entire 2 days, I recommend Yannick Willemin's post. A fellow attendee, he was one of the first people I met at the event. There is one small thing that spoiled the event for me: the 15-minute breaks between sessions were too short. I appreciate that every conference organiser wants to squeeze lots of content in, but the magic at these events happens in between the sessions, when your mind has been stimulated by a speaker and you have conversations that open new doors in your life. It's a problem that afflicts virtually every conference I attend. I wish they would have less content and longer breaks.

On Day 1, there were external speakers from around the world, getting us to think about social, spatial, temporal and sustainable mobility. Rupert Stadler made a big impression on me with his vision of the future, as he cited technologies such as Artificial Intelligence (AI) and the Internet of Things (IoT) and how they might enable this very different future. He also mentioned how he believes the car of the future will change its role in our lives, maybe being a secretary, a butler, a courier, or even an empathic companion in our day. And throughout, we were asked to think deeply about how mobility could be measured, and what we will do with the 25th hour, the extra time gained because eventually machines will turn the drivers of today into the passengers of tomorrow. He spoke of a future where cars will be online and connected to each other, sharing data to reduce traffic jams and more. He urged us to never stop questioning. Steve Wozniak described the mobility quotient as "a level of freedom, you can be anywhere, anytime, but it also means freedom, like not having cords."

We heard about Hyperloop transportation technologies cutting down on travel time between places, and then the different things we might do in an autonomous vehicle, which briefly cited 'healthcare' as one option. Sacha Vrazic, who spoke about his work on self driving cars, gave a great hype-free talk and highlighted just how far away we are from the utopia of cars that drive themselves. We heard about technology, happiness and temporal mobility. It was such a diverse mix of topics. For example, we heard from Anna Nixon, who is just 17 years old and already making a name for herself in robotics, and who inspired us to think differently.

What's weird, but in a good way, is that Audi, a car firm, was hosting a conversation about access to education and improving social mobility. I found it wonderful to see Fatima Bhutto, a journalist from Pakistan, give one of the closing keynotes on Day 2, where she reminded us of the challenges with respect to human rights and access to sanitation for many living in poorer countries, and how advances in mobility might address these challenges. It was surprising because Audi sells premium vehicles, and it made me think that mobility isn't just about selling more premium vehicles. What's clear is that Audi (like many large organisations) is trying to figure out how to stay relevant in our lives during this century. Instead of being able to sell more cars in the future, maybe they will be selling us mobility solutions & services which may not even always involve a car. Perhaps they will end up becoming a software company that licenses the algorithms used by autonomous vehicles in decades to come? It reminds me of the pharmaceutical industry wanting to move to a world of 'beyond the pill' by adapting their strategy to offer new products and services, enabled by new technologies. When you're faced with having to rip up the business model that's allowed your organisation to survive the 20th century, and develop a business model that will maximise your chances of longevity for the 21st century, it's a scary but also exciting place to be.

On Day 2 attendees were able to choose 3 out of 12 workspaces where we could discuss how to make an impact on each of the 4 types of mobility. I chose these 3 workspaces.

  • Spatial mobility - which obstacles are still in the way of autonomous driving?

  • Social mobility - what makes me trust my digital assistant?

  • Sustainable mobility - what will future mobility ecosystems look like? 

The first workspace made me realise the range of challenges in terms of autonomous cars: legal, technical, cultural and infrastructure challenges. We had to discuss and think about topics that I rarely think about when just reading news articles on autonomous cars. The fact that attendees were from a range of backgrounds made the conversations really stimulating. None of that 'groupthink' that I encounter at so many 'innovation' events these days, which was so refreshing. BTW, Audi's new A8 is the first production vehicle with Level 3 automation, and the feature is called Traffic Jam Pilot. Subject to legal regulations, on selected roads, the driver would be able to take their hands off the wheel and do something else, like watch a video. The car would be able to drive itself. However, the driver would have to be ready to take back control of the car at any time, should conditions change. I found two very interesting real world tests of the technology here and here. Also, isn't it fascinating that a survey found only 26% of Germans would want to ride in autonomous cars? What about a self driving wheelchair in a hospital or an airport? Sounds like science fiction, but they are being tested in Singapore and Japan. Today, few of us will be able to access these technologies because they are only available to those with very deep pockets. However, this will change. Just look at airbags, introduced as an option by Mercedes Benz on their flagship S-class in 1981. Now, 36 years later, even the smallest of cars often comes fitted with multiple airbags.

In the second workspace, I formed a team with other attendees, and our challenge was to discuss transparency in the collection and use of personal data by a digital assistant in the car of the future, almost like a virtual co-driver. Our team had a Google Home device to get us thinking about the personal data that Google collects, and we had to pitch our ideas at the end of the workspace in terms of how we envisaged getting drivers and passengers to trust these digital assistants in the car. How could Audi make it possible for consumers to control how their personal data is used? It's encouraging to see a large corporate like Audi thinking this way. Furthermore, given that these digital assistants may one day be able to recognise our emotional state and respond accordingly, how would you feel if the assistant in your car noticed you were feeling angry, and instead of letting you start the engine, asked if you wanted to have a quick psychotherapy session with a chatbot to help you deal with the anger? Earlier this year, I tested Alexa vs Siri in my car with mixed results. You can see my 360 video below.
 

In the third workspace, on sustainable mobility, we had to choose one of 3 cities (Beijing, Mumbai and San Francisco) and come up with new ideas to address challenges in sustainable mobility given each city's unique traits. This session was truly mind expanding: I joked about the increasing levels of congestion in Mumbai, and how maybe they need flying cars, and it turned out that one of the attendees sitting next to me was working on urban vehicles that can fly! None of the discussions and pitches in the workspaces were full of easy answers, but what they did remind me of was the power of bringing together people who normally don't work together to come up with fresh ideas for very complex challenges. Furthermore, these new solutions we generate can't just be for the privileged few; we have to think globally from the beginning. It's our shared responsibility to find a way of including everyone on this new journey. Maybe instead of owning, leasing or even renting a car the traditional way, we'd like to be able to rent a car by the hour using an app on our phones? In fact, Audi have trialled on demand car hire in San Francisco, have just launched in China, and plan to launch in other countries too, perhaps even providing you with a chauffeur. Only time will tell if they succeed, as others have already tried and not been that successful.

Taking part in this summit was very useful for me; I left feeling challenged, inspired and motivated. There was an energy during the event that I rarely see in Europe. I experienced a feeling that I only tend to get when I'm out in California, where people attending events are so open to new ideas and fresh thinking that you walk away feeling that you truly can build a better tomorrow. My brain has been buzzing with new ideas since then.

For example, whether we believe that consumers will have access to autonomous vehicles in 5 years or 50 years, we can see more funds being invested in this area. I was watching a documentary in which Sebastian Thrun, who lost his best friend in a car accident at the age of 18 and went on to help build Google's driverless car, explained his belief that a world with driverless vehicles will save the lives of the 1 million people who currently die on the roads every year around the globe. Think about that for a moment. If that vision is realised this century, even partially, what does that mean for the resources in healthcare that are currently spent on dealing with road traffic accidents? He has now turned his attention to flying cars.

Thinking about chronic disease for a second: you'd probably laugh at the thought of a car that could monitor your health during your commute to the office.

Audi outlined a concept in 2016 called Audi Fit Driver: "The Audi Fit Driver project focuses on the well-being and health of the driver. A wearable (fitness wristband or smartwatch) monitors important vital parameters such as heart rate and skin temperature. Vehicle sensors supplement this data with information on driving style, breathing rate and relevant environmental data such as weather or traffic conditions. The current state of the driver, such as elevated stress or fatigue, is deduced from the collected data. As a result, various vehicle systems act to relax, vitalize, or even protect the driver."
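Out of curiosity, here is a minimal sketch (in Python) of the kind of data-fusion rule such a system might apply. It's a made-up illustration based only on the description above, not Audi's actual implementation; the signals and thresholds are invented.

# Toy sketch of deducing driver state from wearable + vehicle sensor data.
# Thresholds and signals are invented for illustration.

def driver_state(heart_rate_bpm, breathing_rate_bpm, abrupt_braking_events):
    """Deduce a coarse driver state from fused sensor readings."""
    if heart_rate_bpm > 110 or abrupt_braking_events >= 3:
        return "elevated stress"   # vehicle might respond with calming measures
    if breathing_rate_bpm < 10 and heart_rate_bpm < 55:
        return "possible fatigue"  # vehicle might suggest taking a break
    return "normal"

print(driver_state(heart_rate_bpm=120, breathing_rate_bpm=14, abrupt_braking_events=1))
# -> "elevated stress"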

Another car manufacturer, Toyota, has filed a patent suggesting a future where the car would know your health and fitness goals and offer suggestions to help you meet them, such as parking further away from your planned destination so you can get some more steps in towards your daily goal. My friend, Bart Collet, has penned his thoughts about "healthcartech", which makes for a useful read. One year ago, I also made a 360 video with Dr Keith Grimes discussing whether cars in the future will track our health.

Consider how employers may be interested in tracking the health of employees who drive as part of their job. However, it's not plain sailing. A European Union advisory panel recently said that "Employers should be banned from issuing workers with wearable fitness monitors, such as Fitbit, or other health tracking devices, even with the employees’ permission." So at least in Europe, who knows if we'll ever be allowed to have cars that can monitor our health? On top of that, in this bold new era, in order for these new connected services to really provide value, all these different organisations collecting data will have to find a way to share data. Does blockchain technology have a role to play in mobility? I recently came across Dovu, which talks about the world's first mobility cryptocurrency: "Imagine seamless payment across mobility services: one secure global token for riding a bus or train, renting a bike or car or even enabling you to share your own vehicle or vehicle data." Sounds like an interesting idea.

Thinking about some of the driver assist technologies available today, what do they mean for mobility? Could they help older people remain at the helm of a car even if their reflexes have slowed down? In Japan, the National Police Agency "calls on the government to create a new driver’s license that limits seniors to vehicles with advanced safety systems that can automatically brake or mitigate unintended accelerations." Apparently, one of the most common causes of accidents in Japan is drivers mistaking the accelerator for the brake pedal. Today, some new cars come with Autonomous Emergency Braking (AEB), where the car's sensors detect if you are about to hit another vehicle or a pedestrian and perform an emergency stop if the car detects that the driver is not braking quickly enough. So by relinquishing more control to the car, we can have safer roads. My own car has AEB, and on one occasion when I faced multiple hazards on the road ahead, it actually took over the braking, as the sensors thought I wasn't going to stop in time. It was a very strange feeling. Many seem to react with extreme fear when hearing about these new driver assist technologies, yet if you currently drive a car with an automatic transmission or airbags, you are perfectly happy to let the car decide when to change gears or when to inflate the airbag. So on the spectrum of control, we already let our cars make decisions for us. As they get smarter, they will be making more and more decisions for us. If someone over 65 doesn't feel like driving, even if the car can step in, then maybe autonomous shuttles like the ones being tested in rural areas in Japan are one solution to increasing the mobility of an ageing community.
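To give a feel for the logic involved, here is a deliberately simplified sketch (in Python) of the time-to-collision check at the heart of an AEB system. It's my own toy illustration with invented thresholds; production systems fuse radar and camera data and are far more sophisticated.

# Toy sketch of Autonomous Emergency Braking (AEB) logic; thresholds invented.

def time_to_collision(distance_m, closing_speed_ms):
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_ms <= 0:
        return float("inf")  # not closing on the obstacle
    return distance_m / closing_speed_ms

def should_emergency_brake(distance_m, closing_speed_ms, driver_brake_effort):
    """Trigger an emergency stop if impact is imminent and the driver
    isn't braking hard enough (brake effort on a 0-1 scale)."""
    ttc = time_to_collision(distance_m, closing_speed_ms)
    return ttc < 1.5 and driver_brake_effort < 0.8  # hypothetical thresholds

# e.g. 12 m away, closing at 10 m/s, driver braking lightly:
print(should_emergency_brake(12, 10, 0.2))  # -> True: the car takes over braking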

When we pause to think of how big a problem isolation and loneliness are in our communities, could these new products and services go beyond being simply a mobility solution and actually reduce loneliness? That could have far reaching implications for our health. What if new technology could help those with limited mobility cross the road safely at traffic lights? It's fascinating to read the latest guidance consultation from the UK's National Institute for Health and Care Excellence on the topic of Physical Activity and the Environment. Amongst many items, it suggests modifying traffic lights so those with limited mobility can cross the road safely. Simply extending the default duration of the red light by a few extra seconds to make this possible might end up causing more traffic jams. So in a more connected future, imagine traffic lights with AI that can detect who is waiting to cross the road, work out whether they will need an extended crossing time, and adjust the duration of the red light for vehicles accordingly. This was one of the ideas I brought up at the conference during the autonomous vehicle workspace.
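As a thought experiment, here is a minimal sketch (in Python) of how such an adaptive crossing might set the red-light duration for vehicles. Everything here is hypothetical: the pedestrian categories, walking speeds and buffer are invented purely to illustrate the idea.

# Toy sketch of an adaptive pedestrian crossing; values are invented.

WALKING_SPEED_MS = {
    "adult": 1.4,            # approximate walking speeds in metres per second
    "older_adult": 0.9,
    "wheelchair_user": 0.8,
}

def red_light_duration(crossing_length_m, pedestrians, buffer_s=3.0):
    """Hold vehicles long enough for the slowest detected pedestrian to cross."""
    if not pedestrians:
        return 0.0  # nobody waiting, so no pedestrian phase is needed
    slowest = min(WALKING_SPEED_MS[p] for p in pedestrians)
    return crossing_length_m / slowest + buffer_s

# A 12 m crossing with an adult and an older adult waiting:
print(red_light_duration(12, ["adult", "older_adult"]))  # -> ~16.3 seconds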

If more people in cities use ride hailing services like Uber and fewer people own a car, does this mean our streets will have fewer parked cars, allowing residents to reclaim the streets for themselves? If this shift continues, in the long term it might lead to city dwellers of all ages becoming more physically active. That could be good news for improving our health and reducing demand on healthcare systems. One thing is clear to me: these new mobility solutions will require many different groups across society to collaborate. It can't just be a few car manufacturers rolling out technology without involving other stakeholders, if these solutions are to be available to all and work in an integrated manner. The consumer will be king though, according to views aired at New Mobility World in Germany this week: "With his smartphone, he can pick the optimal way to get from A to B,” said Randolph Wörl from moovel. “Does optimal mean the shortest way, the cheapest way or the most comfortable way? It’s the user’s choice.” It's early days, but we already have a part of the NHS in the UK looking to use Uber to transfer patients to/from hospital.

Urban mobility isn't just about cars, it's also about bicycles. I use the Santander bike sharing scheme in London on a daily basis, which I find to be an extremely valuable service. I don't want to own a bicycle since in my small home, I don't really have room to store it. Additionally, I don't want the hassle of maintaining a bike. Using this bike sharing scheme has helped me to lose 15kg this summer, which I feel has improved my own health and wellbeing. If we really want to think about health, rather than just about healthcare, it's critical we think beyond those traditional institutions that we associate with health, and include others. Incidentally, Chinese bike sharing firms are now entering the London market.

In the UK, some have called for cycling to be 'prescribed' to the population, helping people to stay healthier and, again, reducing demand on the healthcare system. Which is why I find it encouraging that Ford of Germany is getting involved with a new bike sharing scheme. Through the app, people will be able to use Ford's car sharing and bike sharing schemes. It's an example of Mobility as a Service, and of another car manufacturer seeking a path to staying relevant during this century. Nissan of Japan are excitedly talking about Intelligent Mobility for their new Nissan Leaf, including Intelligent Driving, where "Soon, you can have a car that takes the stress out of driving and leaves only the joy. It can pick you up, navigate heavy traffic, and find parking all on its own." A Chinese electric car startup, Future Mobility Corp, who have launched their Byton brand, have said their "models are a combination of three things: a smart internet communicator, a spacious luxury living room and a fully electric car." Interestingly, they also want to "turn driving into living." I wonder if in 10-15 years' time we'll spend more time in cars because the experience will be a more connected one? Where will meetings take place in future? Ever used Skype for Business from work or home to join an online meeting? BMW & Microsoft are working to bring that capability to some of BMW's vehicles. Samsung have announced they are setting up a £300m investment fund focusing on connected technologies for cars. It appears that considerable sums of money are being invested in this new arena of connected cars that fit into our digital lifestyles. Are the right people spending the right money on the right things?

I feel that those developing products which involve AI are often so wrapped up in their vision that it comes across as if they don't care about the social impact of their ideas. In an article about Vivek Wadhwa's book, The Driver in the Driverless Car, the journalist points out that the book talks about the possibility of up to 5m American jobs in trucking, delivery driving, taxis and related activities being lost, but offers no suggestions for handling the social implications of this shift. Toby Walsh, a Professor of AI, believes that Elon Musk, founder of Tesla, is scaremongering when tweeting about AI starting World War 3. He says, "So, Elon, stop worrying about World War III and start worrying about what Tesla’s autonomous cars will do to the livelihood of taxi drivers." Personally, I think we need some more balance and perspective in this conversation. The last thing we need is a widening of social inequalities. How fascinating to read that India is considering banning self driving cars in order to protect jobs.

This summit has really made me think hard about mobility and health. Perhaps car manufacturers will end up being part of solutions that bring significant improvements to our health in years to come? We have to keep an open mind about what might be possible. Maybe it's because I'm fit and reasonably healthy, live in a well connected city like London and can afford a car of my own, that I never really thought about the impact of impaired mobility on our health. In the Transport Research Laboratory's latest Quarterly Research Review, I noticed a focus on mental health and ageing drivers, and it's clear they want transport planners to treat health and wellbeing as a higher priority, with a statement of, "With transport evolving, it’s vital that we don’t lose sight of the implications it can have on the health of the population, and strive to create a network that encourages healthy mobility.” At a minimum, mobility might just mean being able to walk somewhere in your locality, but what if you don't feel safe walking in your neighbourhood due to high rates of crime? Or what if you can't walk because there is literally nowhere to walk? I remember visiting Atlanta in the USA several years ago, and I took a walk from a friend's house in the suburbs. A few minutes into my walk, the sidewalk just finished, just like that, with no warning. The only way I could walk further would have been to walk inside a car dealership. Ironic. The push towards electrification of vehicles is interesting to witness, with Scotland wanting to phase out sales of new petrol and diesel cars by 2032. India is even more ambitious, hoping to move to electric vehicles by 2030. The pollution in London is so high that I avoid walking down certain roads because I don't want to breathe in those fumes. So a future with zero emission electric cars gives me hope.

It's obvious that we can't just think about health as building bigger hospitals and hiring more doctors. If we really want societies where we can prevent more people from living with chronic diseases like heart disease and diabetes, we have to design with health in mind from the beginning. There is an experiment in the UK looking to build 10 Healthy New Towns. Something to keep an eye on.

[Photo: a phone box in London]

The technology that will underpin this new era of connectivity seems to be the easy part. The hard part is getting people, policy and process to connect and move at the same pace as the technology, or at least not lag too far behind. During one of my recent sunrise bike rides in London, I came across a phone box. I remember using them as a teenager, before the introduction of mobile phones. At the time, I never imagined a future where we wouldn't have to locate a box on the street, walk inside, insert coins and press buttons in order to make a call whilst 'mobile'. In such a short space of time, everything has changed in terms of how we communicate and connect. These phone boxes scattered around London remind me that change is constant, and that even though many of us struggle to imagine a future that's radically different from today, there is every chance that healthy mobility in 20 years' time will look very different from today.

Who should be driving our quest for healthy mobility? Do we rest our hopes on car manufacturers collaborating with technology companies? As cities grow, how do we want our cities to be shaped?

What's your definition of The Mobility Quotient?


Letting Go

It’s really difficult to write this post, not as difficult as the last one, Being Human, but still challenging. Sometimes the grief doesn’t let go of me, and sometimes I don’t want to let go of the grief. I can see the resistance to letting go of the pain of losing a loved one. Perhaps we mistakenly equate letting go of the pain with letting go of our loved one, and that’s why we want to stay in the darkness, hurting? At times, I feel under pressure to let go of my grief and to let go of my sister. As a man, I’ve been conditioned to believe that men don’t cry, that showing emotions in front of others equals weakness, and that men shouldn’t grieve for too long, or grieve at all. Perhaps grief is a lifelong companion? The intensity decreases, but it’s ever present, etched into your existence.

My daily walks & bike rides at sunrise in the park continue to be therapeutic, some of the photos I’ve taken can be seen below. 

My loss has led me to reflect upon many big questions in life. Why are we here? What does it all mean? How much longer do I have left? Pritpal Tamber’s recent blog post, where he wrote, “Death always makes me ask what I'm doing with my life”, resonates with me very much at this time.

Being reminded that death can come at any moment has given me some clarity about how I see the world, in terms of where my attention rests and, in particular, how I view my health. There is so much outside of our control in life that we often feel powerless. However, by taking time to connect with myself, I remembered that I can choose how I respond to situations in life. What can I do to reduce the risk of dying prematurely? That’s something that is front of mind at present. So, I’m in the park every day at sunrise, active for at least 2 hours, and I have maintained this routine for almost 8 weeks. I made choices before which resulted in a very sedentary lifestyle. I didn’t need to see a healthcare professional to know that I really enjoy being outdoors in nature. I also paused long enough to observe what I was eating and noticed some odd behaviours, such as eating not because I was hungry, but because I was bored. So I’ve made conscious choices in terms of what I’m eating and when I’m eating. It’s been very difficult to change, but I’m motivated by the results of my effort. I’ve lost 6kg (13 lbs), and the weight loss happened after I started eating less; I wasn’t losing weight simply by being active. After years of living life at an ever increasing pace, I find myself, through recent circumstances, forced to slow down and just be. It’s prompted me to reconnect with my love of cooking, taking the time to make meals from scratch. I’ve slowed down in my work too, pausing to evaluate each new opportunity, wondering if taking the project on will help me create the life I want.

I’ve noticed in the last few years that I’ve talked with so many people who have amazing jobs, with great colleagues, who are contemplating leaving to forge their own path in the unknown. The one common factor is that all of them yearn for more freedom in what they can do, what they can say, and most importantly, what they can think. I believe we are conditioned on so many levels, from the moment we are born. Some of that conditioning is useful, but some of it only serves to make us conform to someone else’s view of how we should be, and we end up losing the connection to our authentic selves. It’s almost like each of these people I’ve met is struggling to let go of the conditioning they’ve received at school, work and home. It’s been 5 years since I left the security of my career at GSK, and I’ve had to unlearn many of the beliefs that kept me feeling powerless. I believe the unlearning will be a lifelong process. Occasionally, there are moments where I wonder if I’m good enough simply because I don’t have a job at a prestigious multinational anymore. I don’t know where I picked up this flawed belief, but it’s not a belief I want to hang on to. Recently, I’ve reconnected with Nicolas Tallon, a friend I first worked with almost 20 years ago, when we were using data to help organisations understand which consumers were most likely to respond to marketing campaigns. He has now left the security of his career in banking to launch his own consultancy, and he’s chosen to look at innovation very differently. I really enjoyed his first blog post, where he wrote,

“Banking has not really changed for centuries and the Fintech revolution has barely changed that. In fact, digital technologies have been used almost exclusively to streamline existing processes and reduce channel costs rather than to reinvent banking. Disruption will happen when one player creates a new meaning for banking that resonates with consumers. It may be enabled by technology but won’t be defined by it.”

I believe that what Nicolas wrote applies to healthcare systems too, since much of the digital transformation I’ve witnessed has simply added a layer of ‘digital veneer’ to poorly designed processes that have been tolerated for a very long time. So many leaders are desperately seeking innovation, but only if those new ideas fit within their narrow set of terms and conditions. We build ever more complex systems, adding new pieces to the puzzle, yet frequently fail to let go of tools, technologies and thoughts that are not fit for purpose. What might happen if we gave ourselves permission to be more authentic? Would that bring the changes we truly desire? I read this week that my former employer, GSK, is making changes to the way an employee’s performance is measured: “When staff undergo their regular career appraisals, they will be judged on a new metric: courage.” It will be interesting to see the impact of this change.

We often get so excited about digital technologies, and the promises of change they will bring in our industry, yet we don’t get excited about optimising the ultimate technology, ourselves. Soren Gordhamer asks in a recent blog post, “How much do we each tend to the Invisible World, our Inner World each day?” Life works in mysterious ways, and often signs appear in front of us at the right moment. This weekend when I was in the park, I came across this sign, which inspired me to write this post.

[Photo: the sign I came across in the park]

“Some of us think holding on makes us strong, but sometimes it is letting go.” - Herman Hesse

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close and because of that the loss is even harder to cope with. 

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, from the start of this year, I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer for every problem, when it comes to our health. Many of us have been conditioned to believe them. The narrative has been so intoxicating for some.

Ever since this tragedy, it’s not an app, or a sensor, or data that I’ve turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to be able to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The ‘care robot’ that arrived from China this year as part of my research into whether robots can keep us company remains switched off in its box. Amazon’s Echo, the smart assistant with a voice interface that I’d also been testing a lot, also sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing the switch with my finger. One day last week, I was feeling sad and didn’t feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset to join a virtual social space. I joined a computer generated room, a sunny back yard where a BBQ was taking place; I could see the other guests’ avatars, and I chatted to them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints, in particular the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology, we’ll be able to get rid of mental illness in a few years. Others I’ve met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change. So, is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley’s view of the future. We have all been influenced on some level. Hope is useful, hype is not.

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved; I chose not to. Health isn’t something that can only be improved within the four walls of a hospital. I don’t see bereavement as a medical problem. I’m not sure what a medical doctor can do in a 10 minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don’t want to end up like Prince Harry, who recently admitted he had bottled up the grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I’ve experienced these last few months makes me wonder even more: why does society view mental health as a lower priority than physical health? As I’ve been grieving, there are moments when I have felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I’m very much reminded of what’s important and who is important. The fragility of life is unnerving. I’m so conscious of my own mortality, and so petrified of death, that it’s prompted me to make choices about how I live, work and play. One of the most supportive things someone said to me after my loss was “Be kind to yourself.” Compassion for one’s self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I’m horrified when doctors & nurses tell me their medical training took all the compassion out of them, or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do?

I haven’t looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum: commentaries are almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either end up putting us out of work and ruling our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it’s now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, which is primarily designed for fuel efficiency, constantly scores my acceleration, braking and cruising as I’m driving. Where should my attention rest as I drive: on the road ahead, or on the dashboard, anxious to achieve the highest score possible? Is this where our destiny lies? Is it wise to blindly embark upon a quest for optimum health powered by sensors, data & algorithms nudging us all day and all night until we achieve and maintain the perfect health score?

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now they told him it’s all changed, and he has to do it online. When he asked what happens if someone isn’t online, he was told to visit the library, where someone can do it online with you. He was rather angry at this change. I can now see his perspective, and why this has made him angry. I suspect he’s not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self service checkouts when paying for my goods, because it was faster. Ever since my loss, I’ve chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn’t filled it in; she personally doesn’t see the point. In Digital Health conversations, statements are sometimes made that are deemed to be universal truths: every patient wants access to their records, or every patient wants to analyse their own health data. I believe it’s excellent that patients have the chance of access, but let’s not assume they all want access.

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.