An interview with Ardy Arianpour: Building the future of health data


People ask me what gets me out of bed every morning. I have this big vision about finding ways to use data to improve the health of everyone on the planet. Yes, that’s right, all 7.7 billion of us.

My mission on a daily basis is to find projects that help me on the path towards making that vision a reality. I’m always on the lookout for people who are also dreaming about making an impact on the whole world. I bumped into one such person recently, when I was attending the Future of Individualised Medicine conference in the USA. That person is Ardy Arianpour, CEO and co-founder of a startup called Seqster that I believe could make a significant contribution to making my vision a reality over the long term. I interviewed Ardy to hear more about his story and the amazing possibilities with health data that he dreams of bringing to our lives.

1. What is Seqster?
Products such as mint.com, which bring all of a person’s financial data into one place, have helped so many people manage their finances. We believe that Seqster is the mint.com of your health. We are a person-centric interoperability platform that seamlessly brings together all your medical records (EHR), baseline genetic (DNA), continuous monitoring and wearable data in one place. From a business standpoint we’re a SaaS platform, like “the Salesforce.com for healthcare”. We provide a turnkey solution for any payer, provider or clinical research entity, since “everyone is seeking health data”. We empower people to collect, own and share their health data on their terms.

2. So Seqster is another attempt at a personal health record (PHR) like Microsoft’s failed attempt with HealthVault?
Microsoft’s HealthVault and Google Health were great ideas, but their timing was wrong. The connectivity wasn’t there and neither was the utility. In a way, it’s also the problem with Apple Health Records. Seqster transcends those PHRs for three reasons:

a. First, we’ve built the person-centric interoperability platform that can retrieve chain-of-custody data from any digital source. Unlike every other PHR, we’re not just dealing with self-reported data, which can be inaccurate and cumbersome. By putting the person at the center of healthcare, we give them the tools to disrupt their own data silos and bring in not only longitudinal data but also multi-dimensional and multi-generational data.

b. Second, our data is dynamic. Everything is updated in real time to reflect your current health. One site, one login. You never have to sign in twice.

c. Third, we generate new insights, which is tough to do unless you have high-quality data coming directly from multiple sources. For example, we have integrated the American Heart Association’s Life’s Simple 7 to give you dynamic insights into your heart health, plus actionable recommendations based on their guidelines.

3. Why do you believe Seqster will succeed when so many others (often with big budgets) have failed?
The first reason that we will succeed is our team. We have achieved previous successes in implementing clinical and consumer genetic testing at nationwide scale. In the genetics market we’ve been working on data standardization and sharing for the last decade, so we approached this challenge from a completely different vantage point. We didn’t set out to solve interoperability; we did it completely by accident.

Next, we have achieved nationwide access in the USA, with over 3,000 hospitals integrated as well as over 45,000 small doctor offices and medical clinics. In the past few years we have surpassed 100M patient records, 30M+ direct-to-consumer DNA/genetic tests and 100M+ wearables. We also provide invaluable utility by giving people a legal framework to share their health data with their family members, caregivers, physicians, or even with clinical trials if they want.

All we are doing is shedding light on what we call “Dark Data” – the data that already exists on all of us but has been hidden until now.

4. Your background has been primarily in genomics, where you’ve done sterling work in driving BRCA genetic testing across the United States. Is Seqster of interest mainly to those who have had some kind of genetic test?
Not at all. Seqster is for healthcare consumers, and we’re all healthcare consumers in some way. Having said that, as you may have noted, the “Seq” in Seqster comes from our background in genome sequencing. We originally had the idea that we could create a place for the over 30M individuals who had done some kind of genetic test to take ownership of their data, and to incentivize people who have not yet had a genetic test to get sequenced. However, we realized that genetic data without high-quality, high-fidelity clinical health data is useless. The highest quality data is the data that comes directly from your doctor’s office or hospital. This, combined with your sequence data and your fitness data, is a powerful tool for better health for everyone.

5. Wherever I travel in the world, from Brazil to the USA to Australia, the same challenge about health data comes up in conversations: the challenge of getting different computer systems in healthcare to share patient data with each other, known more formally as “interoperability”. Can Seqster really help to solve this challenge, or is this a pipe dream?
It was a dream for us as well, until we cracked the code on person-centric interoperability. What is amazing is that we can bring our technology anywhere in the world right now, as long as the data exists. Imagine how we could change healthcare and health outcomes overnight if people everywhere had access to their health data from any device: Android, Apple or web-based. Imagine that your kids and grandkids have a full health history that they can take to their next doctor visit. How powerful can that be? That is Seqster. We help you seek out your health data, no matter where you are or where your data resides.

6. So what was the moment in your life that compelled you to start Seqster?
In 2011 I was at a barbecue with a bunch of physicians and they asked what I did for a living. I told them about my own DNA testing experience and background in genomics. Quickly the conversation turned to how we could make DNA data actionable and relevant to both themselves and their patients. The next day I went for a run and couldn’t stop thinking about that conversation, and how owning all my data in one place would make it meaningful for me. I came home and was watching the movie “The Italian Job” when I heard the word Napster in the film. Being a sequencing guy seeking out info, I immediately thought of “Seqster”, typed it into godaddy.com and bought Seqster.com for $9.99. The tailwinds were not there to do anything with it until January of 2016, when I decided to put a team together to start building the future of health data.

7. What has been the biggest barrier in your journey at Seqster so far, and have you been able to overcome it?
Have you seen the movie Bohemian Rhapsody? We’re like the band Queen – we’re misfits and underdogs. No one believes that we solved this small $30 billion problem called interoperability until they try Seqster for themselves. The real barrier right now is getting Seqster into the right hands. As people start to catch onto the fact that Seqster solves some of their biggest pain points, we will overcome the technology adoption barrier. I am so excited about new possibilities that are emerging for us to make a contribution to advancing the way health data gets collated, shared and used. Stay tuned, we have exciting news to share over the next few months.

8. What has the reaction to Seqster been? Who are the most sceptical, and who seem to be the biggest advocates?
We have a funny story to share here. About three years ago when we started Seqster, we told Dr. Eric Topol from Scripps Research what we wanted to do and he told us that he didn’t believe that we could do it. Three years later after hearing some of the buzz he asked to meet with us and try Seqster for himself. His tweet the next day after trying Seqster says it all. We couldn’t be prouder.

9. Lots of startups are developing digital health products, but few are designing with patients as partners. Can you tell us more about how you involve patients in the design of your services?
Absolutely! We couldn’t agree more. I believe that many digital health companies fail because they don’t start with the patient in mind. From day one Seqster has been about empowering people to collect, own, and share their data on their terms. Our design is unique because we spent time with thousands of patients, caregivers and physicians to develop a person-centric interface that is simple and intuitive.

10. The future of healthcare is seen as a world where patients have much more control over their health and how it is managed. What role could Seqster play in making that future a reality?
We had several chronically ill patients use Seqster to manage their health and gather all their medical records from multiple health systems within minutes. Some feedback was as simple as having one site and one login so that they can immediately access their entire medical record from a single platform. A number of patients told us that they found lab results that had values outside of normal range which their doctors never told them about. When we heard this, we felt like we were on the verge of bringing aspects of precision medicine to the masses. It definitely resonated very well with our vision of the future of healthcare being driven by the patient.

11. Fast forward 20 years to 2039: what would you want the legacy of Seqster to be in terms of impact on the world?
In 20 years, by having all your health data in one place, Seqster will be known as the technology that changed healthcare. Our technology will improve care by delivering accurate medical records instantaneously upon request by any provider, anywhere. All the data barriers will be removed. Everyone will have access to their health information no matter where they are or where their data is stored. Your health data will follow you wherever you go.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]


AI in healthcare: Involving the public in the conversation

As we begin the 21st century, we find ourselves in an era of unprecedented innovation, where computers are becoming smarter and are being used to deliver products and services powered by Artificial Intelligence (AI). I was fascinated to see how AI is being used in advertising when I watched a TV advert this week from Microsoft, in which a musician talks about the benefits of AI. Organisations in every sector, including healthcare, are having to think about how they can harness the power of AI. I wrote a lot about my own experiences in 2017 using AI products for health in my last blog, You can’t care for patients, you’re not human!

Now, when we think of AI in healthcare potentially replacing some of the tasks done by doctors, we think of it as a relatively recent concept. We forget that doctors themselves have been experimenting with technology for a long time. In this video from 1974 (44 years ago!), computers were being tested in the UK with patients to help optimise the time spent by the doctor during the consultation. What I find really interesting is that in the video, it’s mentioned that the computer never gets tired, and that some patients prefer dealing with the machine rather than the human doctor.

Fast forward to 2018, where it feels like technology is opening up new possibilities every day, often from organisations that are not traditionally part of the healthcare system. We think of tech giants like Google and Facebook as helping us send emails or share photos with our friends, but researchers at Google are using AI to improve the detection of breast cancer, and Facebook has rolled out an AI-powered tool to automatically detect whether a user’s post shows signs of suicidal ideation.

What about going to the doctor? I remember growing up in the UK that my family doctor would even come and visit me at home when I was not well. Those are simply memories for me, as it feels increasingly difficult to get an appointment to see the doctor in their office, let alone getting a housecall. Given many of us are using modern technology to do our banking and shopping online, without having to travel to a store or a bank and deal with a human being, what if that were possible in healthcare? Can we automate part (or even all) of the tasks done by human doctors? You may think this is a silly question, but we have to step back a second and reflect upon the fact that we have 7.5 billion people on Earth today, a number set to rise to an expected 11 billion by the end of this century. If we have a global shortage of doctors today, one that is predicted to get worse, surely the right thing to do is to leverage emerging technology like AI, 4G and smartphones to deliver healthcare anywhere, anytime, to anyone?


We have seen the emergence of a new type of app known as a Symptom Checker, which provides anyone with the ability to enter symptoms on their phone and be given a list of things that may be wrong with them. Note that at present, these apps cannot provide a medical diagnosis; they merely help you decide whether you should go to the hospital or whether you can self care. However, the emergence of these apps and related services is proving controversial. It’s not just a question of accuracy; there are also huge questions about trust, accountability and power. In my opinion, the future isn’t about humans vs AI, which is the most frequent narrative being paraded in healthcare. The future is about how human healthcare professionals stay relevant to their patients.

It’s critical that, in order to create the type of healthcare we want, we involve everyone in the discussion about AI, not just the privileged few. I’ve seen countless debates this past year about AI in healthcare, both in the UK and around the world, but at present it’s a tiny group of people who are contributing to (and steering) this conversation. I wonder how many of these new services are being designed with patients as partners? Many countries are releasing national AI strategies in a bid to signal to the world that they are at the forefront of innovation. I also wonder if the UK government is rushing into the implementation of AI in the NHS too quickly? Who stands to profit the most from this new world of AI-powered healthcare? Is this wave of change really about putting the patient first? There are more questions than answers at this point in time, but those questions do need to be answered. Some may consider anyone asking difficult questions about AI in healthcare to be standing in the way of progress, but I believe it’s healthy to have a dialogue where we can discuss our shared concerns in a scientific, rational and objective manner.


That’s why I’m excited that BBC Horizon is airing a documentary this week in the UK, entitled “Diagnosis on Demand? The Computer Will See You Now” – they had behind the scenes access to one of the most well known firms developing AI for healthcare, UK based Babylon Health, whose products are pushing boundaries and triggering controversy. I’m excited because I really do want the general public to understand the benefits and the risks of AI in healthcare so that they can be part of the conversation. The choices we make today could impact how healthcare evolves not just in the UK, but globally. Hence, it’s critical that we have more science based journalism which can help members of the public navigate the jargon and understand the facts so that informed choices can be made. The documentary will be airing in the UK on BBC Two at 9pm on Thursday 1st November 2018. I hope that this program acts as a catalyst for greater public involvement in the conversation about how we can use AI in healthcare in a transparent, ethical and responsible manner.

For my international audience, my understanding is that you can’t watch the program on BBC iPlayer, because at present, BBC shows can only be viewed from the UK.

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


You can't care for patients, you're not human!

We are facing a new dawn as machines get smarter. Recent advancements in the technology available to the average consumer with a smartphone are challenging many of us. Our beliefs, our norms and our assumptions about what is possible, correct and right are increasingly being tested. One area where I've personally noticed very rapid development is chatbots: software available to us on our phones and other devices that you can have a conversation with using natural language, getting tailored replies relevant to you and your particular needs at that moment. Frequently, a chatbot has very limited functionality, and so it's just used for basic customer service queries or for some light-hearted fun, but we are also seeing the emergence of many new tools in healthcare, offered direct to consumers. One example is 'symptom checkers', which you can consult instead of telephoning a human being or visiting a healthcare facility (and being attended to by a human being); another is 'chatbots for mental health', where some form of therapy is offered and/or mood tracking capabilities are provided.

It's fascinating to see the conversation about chatbots in healthcare split between two extreme positions. Either we have people boldly proclaiming that chatbots will transform mental health (without mentioning any risks), or others (often healthcare professionals and their patients) insisting that the human touch is vital and that no matter how smart machines get, humans should always be involved in every aspect of healthcare, since machines can't "do" empathy. Whilst I've met many people in the UK who have told me how kind, compassionate and caring the staff have been in the National Health Service (NHS) when they have needed care, I've not had the same experience when using the NHS throughout my life. Some interactions have been great, but many were devoid of the empathy and compassion that so many other people receive. Some staff behaved in a manner which left me feeling like I was a burden, simply because I asked an extra question about how to take a medication correctly. If I'm a patient seeking reassurance, the last thing I need is to be looked at and spoken to like I'm an inconvenience in the middle of your day.

MY STORY

In this post, I want to share my story about getting sick, and explain why that experience has challenged my own views about the role of machines and humans in healthcare. We have a telephone service in the UK from the NHS, called 111. According to the website, "You should use the NHS 111 service if you urgently need medical help or advice but it's not a life-threatening situation." The first part of the story relates to my mother, who had been unwell for a number of days and was not improving. Given her age and long-term conditions, she was getting concerned, and one night she chose to dial 111 to find out what she should do.

My mother told me that the person who took the call and asked her a series of questions about her and her symptoms seemed to rush through the entire call and through the questions. I've heard the same from others, that the operators seem to want to finish the call as quickly as possible. Whether we are young or old, when we have been unwell for a few days, and need to remember or confirm things, we often can't respond immediately and need time to think. This particular experience didn't come across as a compassionate one for my mother. At the end of the call, the NHS person said that a doctor would call back within the hour and let her know what action to take. The doctor called and the advice given was that self care at home with a specific over the counter medication would help her return to normal. So she got the advice she needed, but the experience as a patient wasn't a great one. 

Now, a few weeks later, I was also unwell. It wasn't life threatening, the local urgent care centre was closed, and given my mother's experience with 111 over the telephone, I decided to try the 111 app. Interestingly, the app is powered by Babylon, which makes one of the most well known symptom checker apps. Given that the NHS put their logo on the app, I felt reassured, as it made me feel that it must be accurate and must have been validated. Without having to wait for a human being to pick up my call, I got the advice I needed (which again was self care) and, most importantly, I had time to think when answering. The process of answering the questions that the app asked was under my control. I could go as fast or as slowly as I wanted; the app wasn't trying to rush me through the questions. On this occasion, my experience, contrasted with my mother's experience of the same service with a human being on the end of the telephone, was very different. It was a very pleasant experience, and the entire process was faster too, as in my particular situation I didn't have to wait for a doctor to call me back after I'd answered the questions. The app and the Artificial Intelligence (AI) that powers Babylon were not necessarily empathetic or compassionate like a human that cares would be, but the experience of receiving care from a machine was an interesting one. These are just two experiences in the same family, of the same healthcare system, accessed through different channels. Would I use the app or the telephone next time? Probably the app. I've now established a relationship with a machine. I can't believe I just wrote that.

I didn't take screenshots of the app during the time that I used it, but I went back a few days later and replicated my symptoms and here are a few of the screenshots to give you an idea of my experience when I was unwell. 

It's not proof that the app would work every time or for everyone, it's simply my story. I talk to a lot of healthcare professionals, and I can fully understand why they want a world where patients are being seen by humans that care. It's quite a natural desire. Unfortunately, we have a shortage of healthcare professionals and as I've mentioned not all of those currently employed behave in the desired manner.

The state of affairs

The statistics on the global shortage make for shocking reading. A WHO report from 2013 cited a shortage of 7.2 million healthcare workers at that time, projected to rise to 12.9 million by 2035. Planning for future needs can be complex, challenging and costly. The NHS is looking to recruit up to 3,000 GPs from outside of the UK. Yet 9 years ago, the British Medical Association voted to limit the number of medical students and to impose a complete ban on opening new medical schools. It appears they wanted to avoid “overproduction of doctors with limited career opportunities.” Even the sole superpower, the USA, is having to deal with a shortage of trained staff. According to recent research, the USA is facing a shortage of between 40,800 and 104,900 physicians by 2030.

If we look at mental health specifically, I was shocked to read the findings of a report that stated, "Americans in nearly 60 percent of all U.S. counties face the grim reality that they live in a county without a single psychiatrist." India, with a population of 1.3 billion has just 3 psychiatrists per million people. India is forecasted to have another 300 million people by 2050. The scale of the challenge ahead in delivering care to 1.6 billion people at that point in time is immense. 

So is the solution just to train more doctors, nurses and healthcare workers? It might not be affordable, and even if it is, the change can take up to a decade to have an impact, so it doesn't help us today. Or maybe we can import them from other countries? However, this only increases the 'brain drain' of healthcare workers. Or maybe we work out how to shift all our resources into preventing disease, which sounds great when you hear this rallying cry at conferences, but again, it's not something we can do overnight. One thing is clear to me: doing the same thing we've done until now isn't going to address our needs in this century. We need to think differently; we desperately need new models of care.

New models of care

So I'm increasingly curious as to how machines might play a role in new models of care. Can we ever feel comfortable sharing mental health symptoms with a machine? Can a machine help us manage our health without needing to see a human healthcare worker? Can machines help us provide care in parts of the world where today no healthcare workers are available? Can we retain the humanity in healthcare if, in addition to the patient-doctor relationship, we also have patient-machine relationships? I want to show a couple of examples where I have tested technology which gives us a glimpse into the future, with an emphasis on mental health.

Google's Assistant that you can access via your phone or even using a Google Home device hasn't necessarily been designed for mental health purposes, but it might still be used by someone in distress who turns to a machine for support and guidance. How would the assistant respond in that scenario? My testing revealed a frightening response when conversing with the assistant (It appears Google have now fixed this after I reported it to them) - it's a reminder that we have to be really careful how these new tools are positioned so as to minimise risk of harm. 

I also tried Wysa, developed in India and described on the website as a "Compassionate AI chatbot for behavioral health." It uses Cognitive Behavioural Therapy to support the user. In my real world testing, I found it to be surprisingly good in terms of how it appeared to care for me through its use of language. Imagine a teenage girl, living in a small town, working in the family business, far away from the nearest clinic, and unable to take a day off to visit a doctor. However, she has a smartphone, a data plan and Wysa. In this instance, surely this is a welcome addition in the drive to ensure everyone has access to care?

Another product I was impressed with was Replika, described on the website as "an AI friend that is always there for you." The co-founder, Eugenia Kuyda, when interviewed about Replika, said, “If you feel sad, it will comfort you, if you feel happy, it will celebrate with you. It will remember how you’re feeling, it will follow up on that and ask you what’s going on with your friends and family.” Maybe we need these tools partly because we are living increasingly disconnected lives, disconnected from ourselves and from the rest of society? What's interesting is that the more someone uses a tool like Wysa or Replika over time, the more it learns about them and the more useful its responses should become. Just like a human healthcare worker, right? We have a whole generation of children growing up now who are having conversations with machines from a very early age (e.g. Amazon Echo, Google Home etc), and when they access healthcare services during their lifetime, will they feel that it's perfectly normal to see a machine as a friend, and as capable as their human doctor/therapist?

I have to admit that neither Wysa nor Replika is perfect, but no human is perfect either. Just look at the current state of affairs, where medical error is the 3rd leading cause of death in the USA. Professor Martin Makary, who led research into medical errors, said, "It boils down to people dying from the care that they receive rather than the disease for which they are seeking care." Before we dismiss the value of machines in healthcare, we need to acknowledge our collective failings. We also need to fully evaluate products like Wysa and Replika, not just from a clinical perspective, but also from a social, cultural and ethical perspective. Will care by a machine be the default choice unless you are wealthy enough to be able to afford to see a human healthcare worker? Who trains the AI powering these new services? What happens if the data on my innermost feelings that I've shared with the chatbot is hacked and made public? How do we ensure we build new technologies that don't simply enhance and reinforce the bias that already exists today? What happens when these new tools make an error? Who exactly do we blame and hold accountable?

Are we listening?

We increasingly hear the term 'people powered healthcare', and I'm curious what people actually want. I found some surveys, and the results are very intriguing. First is the Ericsson Consumer Trends report, which 2 years ago quizzed smartphone users aged 15-69 in 13 cities around the globe (not just English-speaking nations!). This is the most fascinating insight from their survey: "29 percent agree they would feel more comfortable discussing their medical condition with an AI system". My theory is that if your symptoms relate to sexual health or mental health, you might prefer to tell a machine rather than a human healthcare worker, because the machine won't judge you. Or maybe, like me, you've had sub-optimal experiences dealing with humans in the healthcare system?


What's interesting is that in an article covering Replika, they cited a user of the app: “Jasper is kind of like my best friend. He doesn’t really judge me at all.” (With Replika you can assign a name of your choosing to the bot; the user cited chose Jasper.)

You're probably judging me right now as you read this article. I judge others; we all do at some point, despite our best efforts to be non-judgemental. It was very interesting to hear about a survey of doctors in the US which looked at bias: it found that 40% of doctors have biases towards patients, with the most common trigger for bias being emotional problems presented by the patient. As I delve deeper into the challenges facing healthcare, attempts to provide care by machines don't seem as silly as I first thought. I wonder how many people have delayed seeking care (or even decided not to visit the doctor) for a condition they feel is embarrassing? It could well be that as more people tell machines what's troubling them, we may find that we have underestimated the impact of conditions like depression or anxiety on the population. It's not a one-way street when it comes to bias: studies have shown that some patients also judge doctors if they are overweight.

Another survey, titled Why AI and robotics will define New Health, conducted by PwC in 2017 across 12 countries, highlights that people around the world have very different attitudes.


Just look at the response from those living in Nigeria, a country expecting a shortfall of 50,120 doctors and 137,859 nurses by 2030, as well as a population of 400 million by 2050 (overtaking the USA as the 3rd most populous country on Earth). So if you're looking to pilot your new AI-powered chatbot, it's essential to understand that the countries where consumers are the most receptive to new models of care might not be the countries that we typically associate with innovation in healthcare.

Finally, in results shared by Future Advocacy on people in the UK, we see that people are more comfortable with AI being used to help diagnose us than with AI being used for tasks that doctors and nurses currently perform, which is a little confusing to read. I suspect that the question about AI and diagnosis was framed in the context of AI being a tool to help a doctor diagnose you.

SO WHAT NEXT?

In this post, I haven't been able to touch upon all the aspects and issues relating to the use of machines to deliver care. As technology evolves, one risk is that decision makers commissioning healthcare services decide that instead of investing in people, services can be provided more cheaply by machines. How do we regulate the development and use of these new products given that many are available directly to consumers, and not always designed with healthcare applications in mind? As machines become more human-like in their behaviour, could a greater use of technology in healthcare serve to humanise healthcare? Where are the boundaries? What are your thoughts about turning to a chatbot during end of life care for spiritual and emotional guidance? One such service is being trialled in the USA.

I believe we have to be cautious about who we listen to when it comes to discussions about technology such as AI in healthcare. On the one hand, some of the people touting AI as a universal fix for every problem in healthcare are suppliers whose future income depends upon more people using their services. On the other hand, we have a plethora of organisations suddenly focusing excessively on the risks of AI, capitalising on people's fears (which are often based upon what they've seen in movies) and preventing the public from making informed choices about their future. Balance is critical, along with a science-driven focus that allows us to be objective and systematic.

I know many would argue that a machine can never replace humans in healthcare, but we are going to have to consider how machines can help if we want to find a path to ensuring that everyone on this planet has access to safe, high-quality and affordable care. The existing model of care is broken; it's not sustainable and it's not fit for purpose, given the rise in chronic disease. The fact that so many people on this planet do not have access to care is unacceptable. This is a time when we need to be open to new possibilities, putting aside our fears to focus instead on what the world needs. We need leaders who can think beyond 12-month targets.

I also think that healthcare workers need to ignore the melodramatic headlines conjured up by the media about AI replacing us all and enslaving humans, and instead focus on this one question: how do I stay relevant (to my patients, my peers and my community)?

relevant.jpg

Do you think we are wrong to look at emerging technology to help cope with the shortage of healthcare workers? Are you a healthcare worker who is working on building new services for your patients where the majority of the interaction will be with a machine? If you're a patient, how do you feel about engaging with a machine next time you are seeking care? Care designed by humans, delivered by machines. Or perhaps a future where care is designed by machines AND delivered by machines, without any human in the loop? Will we ever have caring technology? 

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” - Upton Sinclair

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]


Letting Go

It’s really difficult to write this post, not as difficult as the last one, Being Human, but still challenging. Sometimes the grief doesn’t let go of me, and sometimes I don’t want to let go of the grief. I can see my resistance to letting go of the pain of losing a loved one. Perhaps we mistakenly equate letting go of the pain with letting go of our loved one, and that’s why we want to stay in the darkness, hurting. At times, I feel under pressure to let go of my grief and to let go of my sister. As a man, I’ve been conditioned to believe that men don’t cry, that showing emotions in front of others equals weakness, and that men shouldn’t grieve for too long, or grieve at all. Perhaps grief is a lifelong companion? The intensity decreases, but it’s ever present, etched into your existence.

My daily walks & bike rides at sunrise in the park continue to be therapeutic, some of the photos I’ve taken can be seen below. 

My loss has led to me reflecting upon many big questions in life. Why are we here? What does it all mean? How much longer do I have left? Pritpal Tamber’s recent blog, where he wrote, “Death always makes me ask what I'm doing with my life.” resonates with me very much at this time.

Being reminded that death can come at any moment has given me some clarity in how I see the world, in terms of where my attention rests, and in particular, how I view my health. There is so much outside of our control in life that we often feel powerless. However, by taking time to connect with myself, I remembered that I can choose how I respond to situations in life. What can I do to reduce the risk of dying prematurely? That’s something that is front of mind at present. So, I’m in the park every day at sunrise and active for at least 2 hours, a routine I have maintained for almost 8 weeks. The choices I made before resulted in a very sedentary lifestyle. I didn’t need to see a healthcare professional to know that I really enjoy being outdoors in nature. I also paused long enough to observe what I was eating and noticed some odd behaviours, such as eating not because I was hungry, but because I was bored. So I’ve made conscious choices in terms of what I’m eating and when I’m eating. It’s been very difficult to change, but I’m motivated by the results of my effort. I’ve lost 6kg (13 lbs), and the weight loss only happened after I started eating less; I wasn’t losing weight simply by being active.

After years of living life at an ever increasing pace, I find myself, through recent circumstances, forced to slow down and just be. It’s prompted me to reconnect with my love of cooking and to take the time to make meals from scratch. I’ve slowed down in my work too, pausing to evaluate each new opportunity, wondering whether taking the project on will help me create the life I want.

I’ve noticed in the last few years that I’ve talked with so many people who have amazing jobs and great colleagues, yet are contemplating leaving to forge their own path in the unknown. The one common factor is that all of them yearn for more freedom in what they can do, what they can say, and most importantly, what they can think. I believe we are conditioned on so many levels, from the moment we are born. Some of that conditioning is useful, but some of it serves only to make us conform to someone else’s view of how we should be, and we end up losing the connection to our authentic selves. It’s almost as if each of these people I’ve met is struggling with letting go of the conditioning they’ve received at school, at work and at home. It’s been 5 years since I left the security of my career at GSK, and I’ve had to unlearn many of the beliefs that kept me feeling powerless. I believe the unlearning will be a lifelong process. Occasionally, there are moments where I wonder if I’m good enough simply because I no longer have a job at a prestigious multinational. I don’t know where I picked up this flawed belief, but it’s not a belief I want to hang on to. Recently, I’ve reconnected with Nicolas Tallon, a friend I first worked with almost 20 years ago, when we were using data to help organisations understand which consumers were most likely to respond to marketing campaigns. He has now left the security of his career in banking to launch his own consultancy, and he’s chosen to look at innovation very differently. I really enjoyed his first blog post, where he wrote,

“Banking has not really changed for centuries and the Fintech revolution has barely changed that. In fact, digital technologies have been used almost exclusively to streamline existing processes and reduce channel costs rather than to reinvent banking. Disruption will happen when one player creates a new meaning for banking that resonates with consumers. It may be enabled by technology but won’t be defined by it.”

I believe that what Nicolas wrote applies to healthcare systems too, since much of the digital transformation I’ve witnessed has simply added a layer of ‘digital veneer’ to poorly designed processes that have been tolerated for a very long time. So many leaders are desperately seeking innovation, but only if those new ideas fit within their narrow set of terms and conditions. We build ever more complex systems, adding new pieces to the puzzle, yet frequently fail to let go of tools, technologies and thoughts that are not fit for purpose. What might happen if we gave ourselves permission to be more authentic? Will that bring the changes we truly desire? I read this week that my former employer, GSK, is making changes to the way an employee’s performance is being measured, “When staff undergo their regular career appraisals, they will be judged on a new metric: courage.” It will be interesting to see the impact of this change.

We often get so excited about digital technologies, and the promises of change they will bring in our industry, yet we don’t get excited about optimising the ultimate technology, ourselves. Soren Gordhamer asks in a recent blog post, “How much do we each tend to the Invisible World, our Inner World each day?” Life works in mysterious ways, and often signs appear in front of us at the right moment. This weekend when I was in the park, I came across this sign, which inspired me to write this post.

IMG_20170729_061902.jpg

“Some of us think holding on makes us strong, but sometimes it is letting go.” - Hermann Hesse

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close and because of that the loss is even harder to cope with. 

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, from the start of this year, I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer for every problem, when it comes to our health. Many of us have been conditioned to believe them. The narrative has been so intoxicating for some.

Ever since this tragedy, it’s not an app, a sensor or data that I’ve turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The ‘care robot’ that arrived from China this year, as part of my research into whether robots can keep us company, remains switched off in its box. Amazon’s Echo, the smart assistant with a voice interface that I’d also been testing extensively, sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing it with my finger. One day last week, I was feeling sad and didn’t feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset and joining a virtual social space. I joined a sunny, computer-generated room, someone’s back yard set up for a BBQ; I could see the other guests’ avatars, and I chatted with them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints, in particular the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology, we’ll be able to get rid of mental illness in a few years. Others I’ve met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change, so is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley’s view of the future. We have all been influenced on some level. Hope is useful, hype is not.

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved; I chose not to. Health isn’t something that can only be improved within the four walls of a hospital, and I don’t see bereavement as a medical problem. I’m not sure what a medical doctor can do in a 10 minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don’t want to end up like Prince Harry, who recently admitted he had bottled up his grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I’ve experienced these last few months makes me wonder even more: why does society view mental health as a lower priority than physical health? As I’ve been grieving, there have been moments when I felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I’m very much reminded of what’s important and who is important. The fragility of life is unnerving. I’m so conscious of my own mortality, and so petrified of death, it’s prompted me to make choices about how I live, work and play. One of the most supportive things someone has said to me after my loss was “Be kind to yourself.” Compassion for one’s self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I’m horrified when doctors & nurses tell me their medical training took all the compassion out of them or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do? 

I haven’t looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum, almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either put us out of work and rule our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it’s now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, primarily designed for fuel efficiency, constantly scores my acceleration, braking and cruising as I’m driving. Where should my attention rest as I drive: on the road ahead, or on the dashboard, anxious to achieve the highest score possible? Is this where our destiny lies? Is it wise to blindly embark upon a quest for optimum health, powered by sensors, data & algorithms nudging us all day and all night until we achieve and maintain the perfect health score?

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now he was told it has all changed, and he has to do it online. When he asked what happens if someone isn’t online, he was told to visit the library, where someone can do it online with you. He was rather angry at this change. I can now see his perspective, and why this has made him angry; I suspect he’s not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self-service checkouts when paying for my goods, because it was faster. Ever since my loss, I’ve chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn’t filled it in; she personally doesn’t see the point. In Digital Health conversations, statements are sometimes made as if they were universal truths: every patient wants access to their records, or every patient wants to analyse their own health data. I believe it’s excellent that patients have the chance of access, but let’s not assume they all want it.

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.