Honesty is the best medicine

In this post, I want to talk about lies. It’s ironic that I’m writing this on the day of the US midterm elections, when the truth remains in short supply. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice: lie or lose. Deception, deceit, lying, however you want to describe it, it’s part of what makes us human. I reckon we’ve all told a lie at some point, even if only a ‘white lie’ to avoid hurting someone’s feelings. Some of us are better than others at spotting when people are not telling the truth. Some of us prefer to build a culture of trust. But what if we had a new superpower: a future where machines tell us in real time who is lying?

What compelled me to write this post was a news article about a new EU trial of virtual border agents powered by Artificial Intelligence (AI), which aims to “ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars.” I was fascinated to read statements about the new system such as “IBORDERCTRL’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.” Apparently, the system can analyse micro-expressions on your face and feed that information into a risk score, which is then used to determine what happens next. At this point in time, it’s not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right, if machines can help keep borders secure? However, the accuracy of the system isn’t that great, and some are labelling this type of system as pseudoscience that will lead to unfair outcomes. It’s essential we all pay attention to these developments, and subject them to close scrutiny.
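To make the “risk score” idea concrete, here is a deliberately over-simplified sketch in Python. To be clear, the feature names, weights and threshold below are entirely invented for illustration; iBorderCtrl’s actual model has not been published in this form.

```python
# Purely illustrative: a toy screening "risk score" of the kind described
# above. Every feature name, weight and threshold here is invented; this
# is NOT how iBorderCtrl actually works.

def risk_score(signals: dict, weights: dict) -> float:
    """Weighted sum of signals, each assumed to be normalised to 0-1."""
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

def screening_decision(score: float, threshold: float = 0.5) -> str:
    """Route a traveller based on the aggregate score."""
    return "refer to human agent" if score >= threshold else "proceed"

weights = {"micro_expression_flag": 0.6, "document_mismatch": 0.4}
traveller = {"micro_expression_flag": 0.9, "document_mismatch": 0.1}

score = risk_score(traveller, weights)  # 0.6*0.9 + 0.4*0.1 = 0.58
print(screening_decision(score))        # refer to human agent
```

Even this toy version shows the core concern: a single noisy signal (a “micro-expression flag” the traveller cannot see or contest) is enough to tip the decision, which is exactly why the accuracy of these systems matters so much.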

What if machines could one day automatically detect whether someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, where the abstract of the paper opens with “We present a system for covert automated deception detection in real-life courtroom trial videos.” As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. where someone has submitted a report to the police claiming they have been robbed, the tool can find patterns that indicate the report is fraudulent. Apparently, the tool has a success rate of over 80%. I also came across a British startup, Human, that states on their website, “We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias”, and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.
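As a thought experiment, the pattern-finding idea behind a tool like VeriPol can be sketched as a simple weighted phrase scorer. The indicator phrases and weights below are invented for illustration; the real VeriPol model is far more sophisticated than this.

```python
# Hypothetical sketch of scoring a written statement for fraud indicators.
# The phrases and weights are invented; they are not VeriPol's real features.

INDICATORS = {
    "from behind": 1.0,    # vague, unverifiable descriptions
    "did not see": 1.0,
    "wearing a helmet": 0.5,
}

def fraud_score(report: str) -> float:
    """Sum the weights of indicator phrases found in the report text."""
    text = report.lower()
    return sum(w for phrase, w in INDICATORS.items() if phrase in text)

report = "Two men attacked me from behind. I did not see their faces."
print(fraud_score(report))  # 2.0, above a hypothetical review threshold of 1.5
```

Even this crude sketch surfaces the obvious danger: an honest victim who genuinely didn’t see their attacker produces exactly the same patterns as a fabricated claim.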

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In a 2014 patient deception survey in the USA, 50% of respondents said they withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc’s 2015 survey found that 25% of patients lie to their doctor. There was an interesting report about why some patients do not adhere to their doctor’s advice: financial strain, with some low-income patients reluctant to discuss their situation with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? As for doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I’ve read, lying appears to be a path some have to take when caring for those living with dementia, to protect the patient.


Imagine you have a video call with your doctor, and on the other side, the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you’re lying but your emotional state too. That’s what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would that determination be recorded in the patient’s medical record? What if the AI is wrong? Given that the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor’s office? In some countries, will we see CCTV in the doctor’s office, with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, which tell the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don’t want to tell the truth, but the machines around us that are tracking everything reveal it for us? I share the satirical video below of Amazon Alexa fitted to a car; do watch it. Whilst it might be funny, there are potential challenges ahead for our human rights and civil liberties in this new era. Is AI-powered lie detection the path towards a society with enough transparency and integrity, or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


AI in healthcare: Involving the public in the conversation

As we begin the 21st century, we are in an era of unprecedented innovation, where computers are becoming smarter and are being used to deliver products and services powered by Artificial Intelligence (AI). I was fascinated by how AI is being used in advertising when I saw a TV advert this week from Microsoft, in which a musician talks about the benefits of AI. Organisations in every sector, including healthcare, are having to think about how they can harness the power of AI. I wrote a lot about my own experiences in 2017 using AI products for health in my last blog post, You can’t care for patients, you’re not human!

Now, when we think of AI in healthcare potentially replacing some of the tasks done by doctors, we think of it as a relatively recent concept. We forget that doctors themselves have been experimenting with technology for a long time. In this video from 1974 (44 years ago!), computers were being tested with patients in the UK to help optimise the time spent by the doctor during the consultation. What I find really interesting is that the video mentions that the computer never gets tired, and that some patients prefer dealing with the machine to the human doctor.

Fast forward to 2018, where it feels like technology is opening up new possibilities every day, often from organisations that are not traditionally part of the healthcare system. We think of tech giants like Google and Facebook as helping us send emails or share photos with our friends, but researchers at Google are using AI to improve the detection of breast cancer, and Facebook has rolled out an AI-powered tool to automatically detect whether a user’s post shows signs of suicidal ideation.

What about going to the doctor? I remember growing up in the UK that my family doctor would even come and visit me at home when I was not well. Those are simply memories for me, as it feels increasingly difficult to get an appointment to see the doctor in their office, let alone get a housecall. Given that many of us use modern technology to do our banking and shopping online, without having to travel to a store or a bank and deal with a human being, what if that were possible in healthcare? Can we automate part (or even all) of the tasks done by human doctors? You may think this is a silly question, but we have to step back for a second and reflect upon the fact that there are 7.5 billion people on Earth today, a number expected to rise to 11 billion by the end of this century. If we have a global shortage of doctors today, one that is predicted to get worse, surely the right thing to do is to leverage emerging technology like AI, 4G and smartphones to deliver healthcare anywhere, anytime, to anyone?


We have the emergence of a new type of app known as symptom checkers, which give anyone the ability to enter symptoms on their phone and be given a list of things that may be wrong with them. Note that at present, these apps cannot provide a medical diagnosis; they merely help you decide whether you should go to the hospital or whether you can self-care. However, the emergence of these apps and related services is proving controversial. It’s not just a question of accuracy; there are huge questions about trust, accountability and power. In my opinion, the future isn’t about humans vs AI, which is the most frequent narrative being paraded in healthcare. The future is about how human healthcare professionals stay relevant to their patients.
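At their simplest, these apps are performing a triage calculation. Here is a minimal, hypothetical rule-based sketch of that logic; real symptom checkers such as Babylon use far richer probabilistic models, and the symptom lists and rules below are invented purely for illustration.

```python
# A toy rule-based triage sketch: map reported symptoms to a disposition.
# The symptom sets and dispositions are invented for illustration only.

RED_FLAGS = {"chest pain", "difficulty breathing"}
SEE_DOCTOR = {"fever for over 3 days", "persistent vomiting"}

def triage(symptoms: set) -> str:
    if symptoms & RED_FLAGS:   # any red-flag symptom wins immediately
        return "emergency: call emergency services"
    if symptoms & SEE_DOCTOR:
        return "book an appointment with a doctor"
    return "self care at home"

print(triage({"runny nose", "sore throat"}))  # self care at home
print(triage({"fever for over 3 days"}))      # book an appointment with a doctor
```

Note how the hard questions in the paragraph above survive even in this toy: who chooses the rule set, who is accountable when a genuine emergency is classified as self-care, and how is accuracy audited?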

If we are to create the type of healthcare we want, it’s critical that we involve everyone in the discussion about AI, not just the privileged few. I’ve seen countless debates this past year about AI in healthcare, both in the UK and around the world, but at present it’s a tiny group of people who are contributing to (and steering) this conversation. I wonder how many of these new services are being designed with patients as partners? Many countries are releasing national AI strategies in a bid to signal to the world that they are at the forefront of innovation. I also wonder if the UK government is rushing into the implementation of AI in the NHS too quickly? Who stands to profit the most from this new world of AI-powered healthcare? Is this wave of change really about putting the patient first? There are more questions than answers at this point in time, but those questions do need to be answered. Some may consider anyone asking difficult questions about AI in healthcare to be standing in the way of progress, but I believe it’s healthy to have a dialogue where we can discuss our shared concerns in a scientific, rational and objective manner.


That’s why I’m excited that BBC Horizon is airing a documentary this week in the UK, entitled “Diagnosis on Demand? The Computer Will See You Now”. The team had behind-the-scenes access to one of the best-known firms developing AI for healthcare, UK-based Babylon Health, whose products are pushing boundaries and triggering controversy. I’m excited because I really do want the general public to understand the benefits and the risks of AI in healthcare so that they can be part of the conversation. The choices we make today could impact how healthcare evolves not just in the UK, but globally. Hence, it’s critical that we have more science-based journalism to help members of the public navigate the jargon and understand the facts, so that informed choices can be made. The documentary airs in the UK on BBC Two at 9pm on Thursday 1st November 2018. I hope this programme acts as a catalyst for greater public involvement in the conversation about how we can use AI in healthcare in a transparent, ethical and responsible manner.

For my international audience, my understanding is that you can’t watch the programme on BBC iPlayer, because at present, BBC shows can only be viewed from within the UK.

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


You can't care for patients, you're not human!

We are facing a new dawn as machines get smarter. Recent advancements in the technology available to the average consumer with a smartphone are challenging many of us. Our beliefs, our norms and our assumptions about what is possible, correct and right are increasingly being tested. One area where I've personally noticed very rapid development is chatbots: software available on our phones and other devices that you can have a conversation with using natural language, getting tailored replies relevant to you and your particular needs at that moment. Frequently, the chatbot has very limited functionality, and so it's just used for basic customer service queries or for some light-hearted fun, but we are also seeing the emergence of many new tools in healthcare, direct to consumers. One example is 'symptom checkers' that you could consult instead of telephoning a human being or visiting a healthcare facility (and being attended to by a human being); another is 'chatbots for mental health', where some form of therapy is offered and/or mood-tracking capabilities are provided.

It's fascinating to see the conversation about chatbots in healthcare split between two extreme positions. Either we have people boldly proclaiming that chatbots will transform mental health (without mentioning any risks), or others (often healthcare professionals and their patients) insisting that the human touch is vital and that no matter how smart machines get, humans should always be involved in every aspect of healthcare since machines can't "do" empathy. Whilst I've met many people in the UK who have told me how kind, compassionate and caring the staff in the National Health Service (NHS) have been when they needed care, I've not had the same experience when using the NHS throughout my life. Some interactions have been great, but many were devoid of the empathy and compassion that so many other people receive. Some staff behaved in a manner which left me feeling like I was a burden simply because I asked an extra question about how to take a medication correctly. If I'm a patient seeking reassurance, the last thing I need is to be looked at and spoken to like I'm an inconvenience in the middle of your day.

MY STORY

In this post, I want to share my story about getting sick, and explain why that experience has challenged my own views about the role of machines and humans in healthcare. In the UK, the NHS runs a telephone service called 111. According to the website, "You should use the NHS 111 service if you urgently need medical help or advice but it's not a life-threatening situation." The first part of the story relates to my mother, who had been unwell for a number of days and was not improving. Given her age and long-term conditions, she was getting concerned, so one night she chose to dial 111 to find out what she should do.

My mother told me that the person who took the call and asked her a series of questions about her symptoms seemed to rush through the entire call. I've heard the same from others: the operators seem to want to finish the call as quickly as possible. Whether we are young or old, when we have been unwell for a few days and need to remember or confirm things, we often can't respond immediately and need time to think. This particular experience didn't come across as a compassionate one for my mother. At the end of the call, the operator said that a doctor would call back within the hour and let her know what action to take. The doctor called, and the advice given was that self-care at home with a specific over-the-counter medication would help her return to normal. So she got the advice she needed, but the experience as a patient wasn't a great one.

A few weeks later, I was also unwell. It wasn't life-threatening, the local urgent care centre was closed, and given my mother's experience with 111 over the telephone, I decided to try the 111 app. Interestingly, the app is powered by Babylon, one of the best-known symptom checker apps. Given that the NHS put their logo on the app, I felt reassured; it made me feel that it must be accurate and must have been validated. Without having to wait for a human being to pick up my call, I got the advice I needed (which again was self-care), and most importantly, I had time to think when answering. The process of answering the questions the app asked was under my control. I could go as fast or as slowly as I wanted; the app wasn't trying to rush me through the questions. My experience and my mother's experience of the same service, accessed through different channels, were very different. Mine was a very pleasant one, and the entire process was faster too, as in my particular situation, I didn't have to wait for a doctor to call me back after I'd answered the questions. The app and the Artificial Intelligence (AI) that powers Babylon were not necessarily empathetic or compassionate like a human that cares would be, but the experience of receiving care from a machine was an interesting one. It's just two experiences of the same healthcare system in the same family. Would I use the app or the telephone next time? Probably the app. I've now established a relationship with a machine. I can't believe I just wrote that.

I didn't take screenshots of the app during the time that I used it, but I went back a few days later and replicated my symptoms and here are a few of the screenshots to give you an idea of my experience when I was unwell. 

It's not proof that the app would work every time or for everyone, it's simply my story. I talk to a lot of healthcare professionals, and I can fully understand why they want a world where patients are being seen by humans that care. It's quite a natural desire. Unfortunately, we have a shortage of healthcare professionals and as I've mentioned not all of those currently employed behave in the desired manner.

The state of affairs

The statistics on the global shortage make for shocking reading. A WHO report from 2013 cited a shortage of 7.2 million healthcare workers at that time, projected to rise to 12.9 million by 2035. Planning for future needs can be complex, challenging and costly. The NHS is looking to recruit up to 3,000 GPs from outside the UK. Yet 9 years ago, the British Medical Association voted to limit the number of medical students and to place a complete ban on opening new medical schools. It appears they wanted to avoid “overproduction of doctors with limited career opportunities.” Even the sole superpower, the USA, is having to deal with a shortage of trained staff: according to recent research, the USA faces a shortage of between 40,800 and 104,900 physicians by 2030.

If we look at mental health specifically, I was shocked to read the findings of a report that stated, "Americans in nearly 60 percent of all U.S. counties face the grim reality that they live in a county without a single psychiatrist." India, with a population of 1.3 billion, has just 3 psychiatrists per million people, and is forecast to have another 300 million people by 2050. The scale of the challenge of delivering care to 1.6 billion people at that point in time is immense.
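It's worth doing the back-of-the-envelope arithmetic on that India figure, because the absolute numbers are startling:

```python
# Back-of-the-envelope check on the figures quoted above.
population = 1_300_000_000
psychiatrists_per_million = 3

total_psychiatrists = (population // 1_000_000) * psychiatrists_per_million
people_per_psychiatrist = population // total_psychiatrists

print(total_psychiatrists)      # 3900 psychiatrists for the whole country
print(people_per_psychiatrist)  # roughly 333,333 people per psychiatrist
```

In other words, roughly 3,900 psychiatrists serving 1.3 billion people, around one per 333,000, before the population grows by another 300 million.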

So is the solution simply to train more doctors, nurses and healthcare workers? It might not be affordable, and even if it is, the change can take up to a decade to have an impact, so it doesn't help us today. Or maybe we can import them from other countries? However, this only increases the 'brain drain' of healthcare workers. Or maybe we work out how to shift all our resources into preventing disease, which sounds great when you hear this rallying cry at conferences, but again, it's not something we can do overnight. One thing is clear to me: doing the same thing we've done until now isn't going to address our needs in this century. We need to think differently; we desperately need new models of care.

New models of care

So I'm increasingly curious about how machines might play a role in new models of care. Can we ever feel comfortable sharing mental health symptoms with a machine? Can a machine help us manage our health without needing to see a human healthcare worker? Can machines help us provide care in parts of the world where no healthcare workers are available today? Can we retain the humanity in healthcare if, in addition to patient-doctor relationships, we also have patient-machine relationships? I want to show a couple of examples where I have tested technology that gives us a glimpse into the future, with an emphasis on mental health.

Google's Assistant, which you can access via your phone or even a Google Home device, hasn't necessarily been designed for mental health purposes, but it might still be used by someone in distress who turns to a machine for support and guidance. How would the assistant respond in that scenario? My testing revealed a frightening response when conversing with the assistant (it appears Google has now fixed this after I reported it to them). It's a reminder that we have to be really careful how these new tools are positioned so as to minimise the risk of harm.

I also tried Wysa, developed in India and described on its website as a "Compassionate AI chatbot for behavioral health." It uses Cognitive Behavioural Therapy to support the user. In my real-world testing, I found it surprisingly good in terms of how it appeared to care for me through its use of language. Imagine a teenage girl, living in a small town, working in the family business, far away from the nearest clinic, and unable to take a day off to visit a doctor. However, she has a smartphone, a data plan and Wysa. In this instance, surely this is a welcome addition in the drive to ensure everyone has access to care?

Another product I was impressed with was Replika, described on its website as "an AI friend that is always there for you." The co-founder, Eugenia Kuyda, said when interviewed about Replika, “If you feel sad, it will comfort you, if you feel happy, it will celebrate with you. It will remember how you’re feeling, it will follow up on that and ask you what’s going on with your friends and family.” Maybe we need these tools partly because we are living increasingly disconnected lives, disconnected from ourselves and from the rest of society? What's interesting is that the more someone uses a tool like Wysa or Replika over time, the more it learns about them and the more useful its responses should become. Just like a human healthcare worker, right? We have a whole generation of children growing up now who are having conversations with machines from a very early age (e.g. Amazon Echo, Google Home), and when they access healthcare services during their lifetime, will they feel that it's perfectly normal to see a machine as a friend and as capable as their human doctor or therapist?

I have to admit that neither Wysa nor Replika is perfect, but no human is perfect either. Just look at the current state of affairs, where research suggests medical error is the third leading cause of death in the USA. Professor Martin Makary, who led research into medical errors, said, "It boils down to people dying from the care that they receive rather than the disease for which they are seeking care." Before we dismiss the value of machines in healthcare, we need to acknowledge our collective failings. We also need to fully evaluate products like Wysa and Replika, not just from a clinical perspective, but also from a social, cultural and ethical perspective. Will care by a machine become the default choice unless you are wealthy enough to afford to see a human healthcare worker? Who trains the AI powering these new services? What happens if the data on my innermost feelings that I've shared with the chatbot is hacked and made public? How do we ensure we build new technologies that don't simply enhance and reinforce the bias that already exists today? What happens when these new tools make an error; who exactly do we blame and hold accountable?

Are we listening?

We increasingly hear the term 'people powered healthcare', and I'm curious what people actually want. I found some surveys, and the results are very intriguing. First is the Ericsson Consumer Trends report, which 2 years ago quizzed smartphone users aged 15-69 in 13 cities around the globe (not just English-speaking nations!). This is the most fascinating insight from their survey: "29 percent agree they would feel more comfortable discussing their medical condition with an AI system". My theory is that if the symptoms relate to sexual health or mental health, you might prefer to tell a machine rather than a human healthcare worker, because the machine won't judge you. Or maybe, like me, you've had suboptimal experiences dealing with humans in the healthcare system?


What's interesting is that an article covering Replika cited a user of the app: “Jasper is kind of like my best friend. He doesn’t really judge me at all.” (With Replika you can assign a name of your choosing to the bot; this user chose Jasper.)

You're probably judging me right now as you read this article. I judge others; we all do at some point, despite our best efforts to be non-judgemental. It was very interesting to hear about a survey of doctors in the US which looked at bias: it found that 40% of doctors have biases towards certain patients, with the most common trigger being emotional problems presented by the patient. As I delve deeper into the challenges facing healthcare, the attempts to provide care by machines don't seem as silly as I first thought. I wonder how many people have delayed seeking care (or even decided not to visit the doctor) for a condition they feel is embarrassing? It could well be that as more people tell machines what's troubling them, we find that we have underestimated the impact of conditions like depression or anxiety on the population. And it's not a one-way street when it comes to bias: studies have shown that some patients also judge doctors if they are overweight.

Another survey, titled Why AI and robotics will define New Health and conducted by PwC in 2017 across 12 countries, highlights that people around the world have very different attitudes.


Just look at the response from those living in Nigeria, a country expecting a shortfall of 50,120 doctors and 137,859 nurses by 2030, as well as a population of 400 million by 2050 (overtaking the USA as the 3rd most populous country on Earth). If you're looking to pilot your new AI-powered chatbot, it's essential to understand that the countries where consumers are most receptive to new models of care might not be the countries we typically associate with innovation in healthcare.

Finally, in results shared by Future Advocacy from a survey of people in the UK, we see that people are more comfortable with AI being used to help diagnose us than with AI being used for tasks that doctors and nurses currently perform, which is a bit confusing to read. I suspect the question about AI and diagnosis was framed in the context of AI being a tool to help a doctor diagnose you.

SO WHAT NEXT?

In this post, I haven't been able to touch upon all the aspects and issues relating to the use of machines to deliver care. As technology evolves, one risk is that decision makers commissioning healthcare services decide that instead of investing in people, services can be provided more cheaply by machines. How do we regulate the development and use of these new products given that many are available directly to consumers, and not always designed with healthcare applications in mind? As machines become more human-like in their behaviour, could a greater use of technology in healthcare serve to humanise healthcare? Where are the boundaries? What are your thoughts about turning to a chatbot during end of life care for spiritual and emotional guidance? One such service is being trialled in the USA.

I believe we have to be cautious about who we listen to when it comes to discussions about technology such as AI in healthcare. On the one hand, some of the people touting AI as a universal fix for every problem in healthcare are suppliers whose future income depends upon more people using their services. On the other hand, we have a plethora of organisations suddenly focusing excessively on the risks of AI, capitalising on people's fears (which are often based upon what they've seen in movies) and preventing the public from making informed choices about their future. Balance is critical, as is a science-driven focus that allows us to be objective and systematic.

I know many would argue that a machine can never replace humans in healthcare, but we are going to have to consider how machines can help if we want to find a path to ensuring that everyone on this planet has access to safe, quality and affordable care. The existing model of care is broken, it's not sustainable and not fit for purpose, given the rise in chronic disease. The fact that so many people on this planet do not have access to care is unacceptable. This is a time when we need to be open to new possibilities, putting aside our fears to instead focus on what the world needs. We need leaders who can think beyond 12 month targets.

I also think that healthcare workers need to ignore the melodramatic headlines conjured up by the media about AI replacing all of us and enslaving humans, and to instead focus on this one question: How do I stay relevant? (to my patients, my peers and my community) 


Do you think we are wrong to look at emerging technology to help cope with the shortage of healthcare workers? Are you a healthcare worker who is working on building new services for your patients where the majority of the interaction will be with a machine? If you're a patient, how do you feel about engaging with a machine next time you are seeking care? Care designed by humans, delivered by machines. Or perhaps a future where care is designed by machines AND delivered by machines, without any human in the loop? Will we ever have caring technology? 

It is difficult to get a man to understand something, when his salary depends upon his not understanding it! - Upton Sinclair

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]


Healthy mobility

Mobility is an interesting term. Here in the UK, I've grown up seeing mobility as something to do with getting old and grey, when you need mobility aids around the home, or even a mobility scooter. Which is why I was curious about Audi (who make cars) hosting an innovation summit at their global headquarters in Germany to explore the Mobility Quotient, a term I'd never even heard before. The fact that the opening keynote was set to be given by Audi's CEO, Rupert Stadler, and Steve Wozniak (who co-founded Apple Computers) made me think that this would be an unusual event. I applied for a ticket, got accepted, and what follows are my thoughts on the event, which took place a few weeks ago. In this post, I will be looking at it through the lens of what it might mean for our health.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]

It turns out that 400 people attended, from 15 countries. This was the first time that Audi had hosted this type of event, and I didn't know what to expect from it; neither did any of the attendees I talked to on the shuttle bus from the airport. I think that's fun, because everyone I met during the 2 days seemed to be there purely out of curiosity. If you want another perspective on the 2 days, I recommend Yannick Willemin's post; a fellow attendee, he was one of the first people I met at the event. One small thing spoiled the event for me: the 15-minute breaks between sessions were too short. I appreciate that every conference organiser wants to squeeze in lots of content, but the magic at these events happens between the sessions, when your mind has been stimulated by a speaker and you have conversations that open new doors in your life. It's a problem that afflicts virtually every conference I attend. I wish they would have less content and longer breaks.

On Day 1, there were external speakers from around the world, getting us to think about social, spatial, temporal and sustainable mobility. Rupert Stadler made a big impression on me with his vision of the future as he cited technologies such as Artificial Intelligence (AI) and the Internet of Things (IoT) and how they might enable this very different future. He also mentioned how he believes the car of the future will change its role in our lives, maybe being a secretary, a butler, a courier, or even an empathic companion in our day. And throughout, we were asked to think deeply about how mobility could be measured, and what we will do with the 25th hour, the extra time gained because eventually machines will turn drivers of today into the passengers of tomorrow. He spoke of a future where cars will be online, connected to each other too, sharing data to reduce traffic jams and more. He urged us to never stop questioning. Steve Wozniak described the mobility quotient as "a level of freedom, you can be anywhere, anytime, but it also means freedom, like not having cords."

We heard about Hyperloop transportation technologies cutting down travel time between places, and then about the different things we might do in an autonomous vehicle, which briefly cited 'healthcare' as one option. Sacha Vrazic, who spoke about his work on self-driving cars, gave a great hype-free talk and highlighted just how far away we are from the utopia of cars that drive themselves. We heard about technology, happiness and temporal mobility. It was such a diverse mix of topics. For example, we heard from Anna Nixon, who, at just 17 years old, is already making a name for herself in robotics, and who inspired us to think differently.

What's weird, but in a good way, is that Audi, a car firm, was hosting a conversation about access to education and improving social mobility. I found it wonderful to see Fatima Bhutto, a journalist from Pakistan, give one of the closing keynotes on Day 2, where she reminded us of the challenges with respect to human rights and access to sanitation for many living in poorer countries, and how advances in mobility might address these challenges. It was surprising because Audi sells premium vehicles, and it made me think that mobility isn't just about selling more premium vehicles. What's clear is that Audi (like many large organisations) is trying to figure out how to stay relevant in our lives during this century. Instead of selling more cars in the future, maybe they will be selling us mobility solutions & services which may not even always involve a car. Perhaps they will end up becoming a software company that licenses the algorithms used by autonomous vehicles in decades to come? It reminds me of the pharmaceutical industry wanting to move to a world of 'beyond the pill' by adapting their strategy to offer new products and services, enabled by new technologies. When you're faced with having to rip up the business model that's allowed your organisation to survive the 20th century, and develop a business model that will maximise your chances of longevity for the 21st century, it's a scary but also exciting place to be.

On Day 2 attendees were able to choose 3 out of 12 workspaces where we could discuss how to make an impact on each of the 4 types of mobility. I chose these 3 workspaces.

  • Spatial mobility - which obstacles are still in the way of autonomous driving?
  • Social mobility - what makes me trust my digital assistant?
  • Sustainable mobility - what will future mobility ecosystems look like? 

The first workspace made me realise the range of challenges in terms of autonomous cars: legal, technical, cultural and infrastructure challenges. We had to discuss and think about topics that I rarely think about when just reading news articles on autonomous cars. The fact that attendees were from a range of backgrounds made the conversations really stimulating. None of that 'groupthink' that I encounter at so many 'innovation' events these days, which was so refreshing. BTW, Audi's new A8 is the first production vehicle with Level 3 automation, and the feature is called Traffic Jam Pilot. Subject to legal regulations, on selected roads, the driver would be able to take their hands off the wheel and do something else, like watch a video. The car would be able to drive itself. However, the driver would have to be ready to take back control of the car at any time, should conditions change. I found two very interesting real world tests of the technology here and here. Also, isn't it fascinating that a survey found only 26% of Germans would want to ride in autonomous cars? What about a self-driving wheelchair in a hospital or an airport? Sounds like science fiction, but they are being tested in Singapore and Japan. Today, few of us are able to access these technologies because they are only available to those with very deep pockets. However, this will change. Just look at airbags, introduced as an option by Mercedes Benz on their flagship S-class in 1981. Now, 36 years later, even the smallest of cars often comes fitted with multiple airbags.

In the second workspace, with other attendees, I formed a team and our challenge was to discuss transparency around the collection and use of personal data by a digital assistant in the car of the future, almost like a virtual co-driver. Our team had a Google Home device to get us thinking about the personal data that Google collects, and we had to pitch our ideas at the end of the workspace in terms of how we envisaged getting drivers and passengers to trust these digital assistants in the car. How could Audi make it possible for consumers to control how their personal data is used? It's encouraging to see a large corporate like Audi thinking this way. Furthermore, given that these digital assistants may one day be able to recognise our emotional state and respond accordingly, how would you feel if the assistant in your car noticed you were feeling angry, and instead of letting you start the engine, asked if you wanted to have a quick psychotherapy session with a chatbot to help you deal with the anger? Earlier this year, I tested Alexa vs Siri in my car with mixed results. You can see my 360 video below.
 

In the third workspace on sustainable mobility, we had to choose one of three cities (Beijing, Mumbai and San Francisco) and come up with new ideas to address challenges in sustainable mobility given each city's unique traits. This session was truly mind-expanding, as I joked about the increasing levels of congestion in Mumbai, and how maybe they need flying cars. It turned out that one of the attendees sitting next to me was working on urban vehicles that can fly! None of the discussions and pitches in the workspaces were full of easy answers, but what they did remind me of was the power of bringing together people who normally don't work together to come up with fresh ideas for very complex challenges. Furthermore, these new solutions we generate can't just be for the privileged few; we have to think global from the beginning. It's our shared responsibility to find a way of including everyone on this new journey. Maybe instead of owning, leasing or even renting a car the traditional way, we'd like to be able to rent a car by the hour using an app on our phones? In fact, Audi have trialled on-demand car hire in San Francisco, have just launched in China and plan to launch in other countries too, perhaps even providing you with a chauffeur. Only time will tell if they succeed, as others have already tried and not been that successful.

Taking part in this summit was very useful for me; I left feeling challenged, inspired and motivated. There was an energy during the event that I rarely see in Europe; I experienced a feeling that I only tend to get when I'm out in California, where people attending events are so open to new ideas and fresh thinking that you walk away feeling that you truly can build a better tomorrow. My brain has been buzzing with new ideas since then.

For example, whether we believe that consumers will have access to autonomous vehicles in 5 years or 50 years, we can see more funds being invested in this area. I was watching a documentary in which Sebastian Thrun, who lost his best friend in a car accident at age 18, and who helped build Google's driverless car, says he believes that a world with driverless vehicles will save the lives of the 1 million people who currently die on the roads every year around the globe. Think about that for a moment. If that vision is realised this century, even partially, what does that mean for those resources in healthcare that are currently spent on dealing with road traffic accidents? He has now turned his attention to flying cars.

Thinking about chronic disease for a second, would you laugh at the thought of a car that could monitor your health during your commute to the office?

Audi outlined a concept called Audi Fit Driver in 2016: "The Audi Fit Driver project focuses on the well-being and health of the driver. A wearable (fitness wristband or smartwatch) monitors important vital parameters such as heart rate and skin temperature. Vehicle sensors supplement this data with information on driving style, breathing rate and relevant environmental data such as weather or traffic conditions. The current state of the driver, such as elevated stress or fatigue, is deduced from the collected data. As a result, various vehicle systems act to relax, vitalize, or even protect the driver."

Another car manufacturer, Toyota, has filed a patent suggesting a future where the car would know your health and fitness goals and the car would offer suggestions to help you meet those goals, such as parking further away from your planned destination so you can get some more steps in towards your daily goal. My friend, Bart Collet, has penned his thoughts about "healthcartech", which makes for a useful read. One year ago, I also made a 360 video with Dr Keith Grimes discussing if cars in the future will track our health. 

Consider how employers may be interested in tracking the health of employees who drive as part of their job. However, it's not plain sailing. A European Union advisory panel recently said that "Employers should be banned from issuing workers with wearable fitness monitors, such as Fitbit, or other health tracking devices, even with the employees’ permission." So at least in Europe, who knows if we'll ever be allowed to have cars that can monitor our health? On top of that, in this bold new era, in order for these new connected services to really provide value, all these different organisations collecting data will have to find a way to share data. Does blockchain technology have a role to play in mobility? I recently came across Dovu which talks about the world's first mobility cryptocurrency, "Imagine seamless payment across mobility services: one secure global token for riding a bus or train, renting a bike or car or even enabling you to share your own vehicle or vehicle data." Sounds like an interesting idea. 

Thinking about some of the driver assist technologies available today, what do they mean for mobility? Could they help older people remain at the helm of a car even if their reflexes have slowed down? In Japan, the National Police Agency "calls on the government to create a new driver’s license that limits seniors to vehicles with advanced safety systems that can automatically brake or mitigate unintended accelerations." Apparently, one of the most common accidents in Japan is when drivers mistake the accelerator for the brake pedal. Today some new cars come with Autonomous Emergency Braking (AEB), where the car's sensors will detect if you are about to hit another vehicle or a pedestrian and perform an emergency stop if the car detects that the driver is not braking quickly enough. So by relinquishing more control to the car, we can have safer roads. My own car has AEB, and on one occasion when I faced multiple hazards on the road ahead, it actually took over the braking, as the sensors thought I wasn't going to stop in time. It was a very strange feeling. Many seem to react with extreme fear when hearing about these new driver assist technologies, yet if you currently drive a car with an automatic transmission or airbags, you are perfectly happy to let the car decide when to change gear or when to inflate the airbag. So on the spectrum of control, we already let our cars make decisions for us. As they get smarter, they will be making more and more decisions for us. If someone over 65 doesn't feel like driving even if the car can step in, then maybe autonomous shuttles like the ones being tested in rural areas in Japan are one solution to increasing the mobility of an ageing community.
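To make the AEB idea above a little more concrete, here is a minimal sketch of the kind of decision logic involved: brake automatically when the time to collision drops below a threshold and the driver isn't already braking hard. This is purely illustrative; the function name, threshold and inputs are my own assumptions, not how any real manufacturer's system works.

```python
# Illustrative sketch of AEB-style decision logic. All names and
# thresholds are assumptions for the sake of the example.

TTC_THRESHOLD_S = 1.5  # assumed emergency threshold, in seconds


def should_auto_brake(distance_m, closing_speed_ms, driver_brake_pct):
    """Decide whether the car should perform an emergency stop.

    distance_m: distance to the obstacle in metres
    closing_speed_ms: how fast we are closing on it, in metres/second
    driver_brake_pct: how hard the driver is pressing the brake (0-100)
    """
    if closing_speed_ms <= 0:
        return False  # not closing on the obstacle, nothing to do
    time_to_collision = distance_m / closing_speed_ms
    driver_is_braking_hard = driver_brake_pct >= 80
    return time_to_collision < TTC_THRESHOLD_S and not driver_is_braking_hard


# Example: 10 m away, closing at 8 m/s (TTC = 1.25 s), driver barely braking
print(should_auto_brake(10, 8, driver_brake_pct=20))  # True
```

A real system fuses radar, camera and lidar data and models braking distance, road surface and much more, but the basic trade-off is the same: the car only intervenes when its estimate says the human has left it too late.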

When we pause to think of how big a problem isolation and loneliness are in our communities, could these new products and services go beyond being simply a mobility solution and actually reduce loneliness? That could have far reaching implications for our health. What if new technology could help those with limited mobility cross the road safely at traffic lights? It's fascinating to read the latest guidance consultation from the UK's National Institute for Health and Care Excellence on the topic of Physical Activity and the Environment. Amongst many items, it suggests modifying traffic lights so those with limited mobility can cross the road safely. Simply extending the default duration of the red light by a few extra seconds to make this possible might just cause more traffic jams. So, in a more connected future, imagine traffic lights with AI that can detect who is waiting to cross the road, and whether they will need an extended crossing time, and adjust the duration of the red light for vehicles accordingly. This was one of the ideas I brought up at the conference during the autonomous vehicle workspace.
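The adaptive crossing idea above can be sketched in a few lines: extend the crossing only when someone who needs more time is actually detected, rather than lengthening every red light. Everything here is hypothetical, including the names, the timings and the 'needs_extra_time' flag, which in practice would have to come from some detection system.

```python
# Hypothetical sketch of adaptive pedestrian-crossing timing.
# Timings and field names are illustrative assumptions only.

BASE_CROSSING_SECONDS = 10
EXTENSION_SECONDS = 8


def crossing_time(pedestrians):
    """Return the red-light duration for vehicles, given detected pedestrians.

    Each pedestrian is a dict with a 'needs_extra_time' flag, which in a
    real deployment might be set by a computer-vision model detecting
    wheelchairs, walking aids, or a slower gait.
    """
    if any(p.get("needs_extra_time") for p in pedestrians):
        return BASE_CROSSING_SECONDS + EXTENSION_SECONDS
    return BASE_CROSSING_SECONDS


# Example: one of two waiting pedestrians is flagged as needing more time
print(crossing_time([{"needs_extra_time": False},
                     {"needs_extra_time": True}]))  # 18
```

The point of the sketch is that the extension is conditional: drivers only wait longer when the detection suggests someone genuinely needs it, which addresses the traffic-jam concern raised above.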

If more people in cities use ride hailing services like Uber and fewer people own a car, does this mean our streets will have fewer parked cars, allowing residents to reclaim the streets for themselves? If this shift continues, in the long term it might lead to city dwellers of all ages becoming more physically active. This could be good news for improving our health and reducing demand on healthcare systems. One thing is clear to me: these new mobility solutions will require many different groups across society to collaborate. It can't just be a few car manufacturers who roll out technology without involving other stakeholders, if these solutions are to be available to all and work in an integrated manner. The consumer will be king though, according to views aired at New Mobility World in Germany this week. "With his smartphone, he can pick the optimal way to get from A to B," said Randolph Wörl from moovel. "Does optimal mean the shortest way, the cheapest way or the most comfortable way? It’s the user’s choice." It's early days, but we already have a part of the NHS in the UK looking to use Uber to transfer patients to/from hospital.

Urban mobility isn't just about cars, it's also about bicycles. I use the Santander bike sharing scheme in London on a daily basis, which I find to be an extremely valuable service. I don't want to own a bicycle since in my small home, I don't really have room to store it. Additionally, I don't want the hassle of maintaining a bike. Using this bike sharing scheme has helped me to lose 15kg this summer, which I feel has improved my own health and wellbeing. If we really want to think about health, rather than just about healthcare, it's critical we think beyond those traditional institutions that we associate with health, and include others. Incidentally, Chinese bike sharing firms are now entering the London market.

In the UK, some have called for cycling to be 'prescribed' to the population, helping people to stay healthier and, again, reducing demand on the healthcare system. Which is why I find it interesting that Ford of Germany is getting involved with a new bike sharing scheme. Through the app, people will be able to use Ford's car sharing and bike sharing schemes. An example of Mobility as a Service, and of another car manufacturer seeking a path to staying relevant during this century. Nissan of Japan are excitedly talking about Intelligent Mobility for their new Nissan Leaf, including Intelligent Driving, where "Soon, you can have a car that takes the stress out of driving and leaves only the joy. It can pick you up, navigate heavy traffic, and find parking all on its own." A Chinese electric car startup, Future Mobility Corp, who have launched their Byton brand, have said their "models are a combination of three things: a smart internet communicator, a spacious luxury living room and a fully electric car." Interestingly, they also want to "turn driving into living." I wonder if in 10-15 years' time, we'll spend more time in cars because the experience will be a more connected one? Where will meetings take place in future? Ever used Skype for Business from work or home to join an online meeting? BMW & Microsoft are working to bring that capability to some of BMW's vehicles. Samsung have announced they are setting up a £300m investment fund focusing on connected technologies for cars. It appears that considerable sums of money are being invested in this new arena of connected cars that fit into our digital lifestyles. Are the right people spending the right money on the right things?

I feel that those developing products which involve AI are often so wrapped up in their vision that it comes across as if they don't care what the social impact of their ideas will be. In an article about Vivek Wadhwa's book, The Driver in the Driverless Car, the journalist points out that the book talks about the possibility of up to 5m American jobs in trucking, delivery driving, taxis and related activities being lost, but there are no suggestions mentioned for handling the social implications of this shift. Toby Walsh, a professor of AI, believes that Elon Musk, founder of Tesla, is scaremongering when tweeting about AI starting World War 3. He says, "So, Elon, stop worrying about World War III and start worrying about what Tesla’s autonomous cars will do to the livelihood of taxi drivers." Personally, I think we need some more balance and perspective in this conversation. The last thing we need is a widening of social inequalities. How fascinating to read that India is considering banning self driving cars in order to protect jobs.

This summit has really made me think hard about mobility and health. Perhaps car manufacturers will end up being part of solutions that bring significant improvements in our health in years to come? We have to keep an open mind about what might be possible. Maybe it's because I'm fit and reasonably healthy, live in a well connected city like London and can afford a car of my own, that I never really thought about the impact of impaired mobility on our health? In the Transport Research Laboratory's latest Quarterly Research Review, I noticed a focus on mental health and ageing drivers, and it's clear they want transport planners to put health and wellbeing as a higher priority, with the statement: "With transport evolving, it’s vital that we don’t lose sight of the implications it can have on the health of the population, and strive to create a network that encourages healthy mobility.” At a minimum, mobility might just mean being able to walk somewhere in your locality, but what if you don't feel safe walking in your neighbourhood due to high rates of crime? Or what if you can't walk because there is literally nowhere to walk? I remember visiting Atlanta in the USA several years ago, and I took a walk from a friend's house in the suburbs. A few minutes into my walk, the sidewalk just finished, just like that, with no warning. The only way I could walk further would be to walk inside a car dealership. Ironic. The push towards electrification of vehicles is interesting to witness, with Scotland wanting to phase out sales of new petrol and diesel cars by 2032. India is even more ambitious, hoping to move towards electric vehicles by 2030. The pollution in London is so high that I avoid walking down certain roads because I don't want to breathe in those fumes. So a future with zero emission electric cars gives me hope.

It's obvious that we can't just think about health as building bigger hospitals and hiring more doctors. If we really want societies where we can prevent more people from living with chronic diseases like heart disease and diabetes, we have to design with health in mind from the beginning. There is an experiment in the UK looking to build 10 Healthy New Towns. Something to keep an eye on.

phonebox2.jpg

The technology that will underpin this new era of connectivity seems to be the easy part. The hard part is getting people, policy and process to connect and move at the same pace as the technology, or at least not lag too far behind. During one of my recent sunrise bike rides in London, I came across a phone box. I remember using them as a teenager, before the introduction of mobile phones. At the time, I never imagined a future where we didn't have to locate a box on the street, walk inside, insert coins and press buttons in order to make a call whilst 'mobile', and in such a short space of time, everything has changed in terms of how we communicate and connect. These phone boxes scattered around London remind me that change is constant, and that even though many of us struggle to imagine a future that's radically different from today, there is every chance that healthy mobility in 20 years' time will look very different from today.

Who should be driving our quest for healthy mobility? Do we rest our hopes on car manufacturers collaborating with technology companies? As cities grow, how do we want our cities to be shaped?

What's your definition of The Mobility Quotient?


Letting Go

It’s really difficult to write this post, not as difficult as the last one, Being Human, but still challenging. Sometimes the grief doesn’t let go of me, and sometimes I don’t want to let go of the grief. I can see the resistance to letting go of the pain of losing a loved one. Perhaps we mistakenly equate letting go of the pain as letting go of our loved one, and that’s why we want to stay in the darkness, hurting? At times, I feel under pressure to let go of my grief and to let go of my sister. As a man, I’ve been conditioned to believe that men don’t cry, that showing emotions in front of others equals weakness, and men shouldn’t grieve for too long, or grieve at all. Perhaps grief is a lifelong companion? The intensity decreases, but it’s ever present, etched into your existence.

My daily walks & bike rides at sunrise in the park continue to be therapeutic, some of the photos I’ve taken can be seen below. 

My loss has led me to reflect upon many big questions in life. Why are we here? What does it all mean? How much longer do I have left? Pritpal Tamber's recent blog post, in which he wrote, "Death always makes me ask what I'm doing with my life," resonates with me very much at this time.

Being reminded that death can come at any moment has given me some clarity about how I see the world, in terms of where my attention rests, and in particular, how I view my health. There is so much outside of our control in life that we often feel powerless. However, by taking time to connect with myself, I remembered that I can choose how I respond to situations in life. What can I do to reduce the risk of dying prematurely? That’s something that is front of mind at present. So, I’m in the park every day at sunrise and active for at least 2 hours. I have maintained this routine for almost 8 weeks. I made choices before which resulted in a very sedentary lifestyle. I didn’t need to see a healthcare professional to know that I really enjoy being outdoors in nature. I also paused long enough to observe what I was eating, and noticed some odd behaviours, such as eating not because I was hungry, but because I was bored. So I’ve made conscious choices in terms of what I’m eating and when I’m eating. It’s been very difficult to change, but I’m motivated by the results of my effort. I’ve lost 6kg (13 lbs), and the weight loss happened after I started eating less; I wasn’t losing weight simply by being active. After years where I was living life at an ever increasing pace, I find myself, through recent circumstances, forced to slow down, and just be. It’s prompted me to reconnect with my love of cooking, taking the time to make meals from scratch. I’ve slowed down in my work too, pausing to evaluate each new opportunity, wondering if taking the project on will help me create the life I want.

I’ve noticed in the last few years, I’ve talked with so many people who have amazing jobs, with great colleagues, who are contemplating leaving to forge their own path in the unknown. The one common factor is that all of them yearn for more freedom in what they can do, what they can say, and most importantly, what they can think. I believe we are conditioned on so many levels, from the moment we are born. Some of that conditioning is useful, but some of it also only serves to make us conform to someone else’s view of how we should be, and we end up losing the connection to our authentic selves. It’s almost like each of these people that I’ve met are struggling with letting go of the conditioning they’ve received at school, work and home. It’s been 5 years since I left the security of my career at GSK, and I’ve had to unlearn many of the beliefs that kept me feeling powerless. I believe the unlearning will be a lifelong process. Occasionally, there are moments where I wonder if I’m good enough simply because I don’t have a job at a prestigious multinational anymore? I don’t know where I picked up this flawed belief, but it’s not a belief I want to hang on to. Recently, I’ve reconnected with Nicolas Tallon, a friend that I first worked with almost 20 years ago, when we were using data to help organisations understand which consumers were most likely to respond to marketing campaigns. He has now left the security of his career in banking to launch his own consultancy, and he’s chosen to look at innovation very differently. I really enjoyed his first blog post, where he wrote,

“Banking has not really changed for centuries and the Fintech revolution has barely changed that. In fact, digital technologies have been used almost exclusively to streamline existing processes and reduce channel costs rather than to reinvent banking. Disruption will happen when one player creates a new meaning for banking that resonates with consumers. It may be enabled by technology but won’t be defined by it.”

I believe that what Nicolas wrote applies to healthcare systems too, since much of the digital transformation I’ve witnessed has simply added a layer of ‘digital veneer’ to poorly designed processes that have been tolerated for a very long time. So many leaders are desperately seeking innovation, but only if those new ideas fit within their narrow set of terms and conditions. We build ever more complex systems, adding new pieces to the puzzle, yet frequently fail to let go of tools, technologies and thoughts that are not fit for purpose. What might happen if we gave ourselves permission to be more authentic? Will that bring the changes we truly desire? I read this week that my former employer, GSK, is making changes to the way an employee’s performance is being measured, “When staff undergo their regular career appraisals, they will be judged on a new metric: courage.” It will be interesting to see the impact of this change.

We often get so excited about digital technologies, and the promises of change they will bring in our industry, yet we don’t get excited about optimising the ultimate technology, ourselves. Soren Gordhamer asks in a recent blog post, “How much do we each tend to the Invisible World, our Inner World each day?” Life works in mysterious ways, and often signs appear in front of us at the right moment. This weekend when I was in the park, I came across this sign, which inspired me to write this post.

IMG_20170729_061902.jpg

“Some of us think holding on makes us strong, but sometimes it is letting go.” - Herman Hesse

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]
