An interview with Ardy Arianpour: Building the future of health data

People ask me what gets me out of bed every morning. I have this big vision about finding ways to use data to improve the health of everyone on the planet. Yes, that’s right, all 7.7 billion of us.

My mission on a daily basis is to find projects that help me on the path towards making that vision a reality. I’m always on the lookout for people who are also dreaming about making an impact on the whole world. I bumped into one such person recently, when I was attending the Future of Individualised Medicine conference in the USA. That person is Ardy Arianpour, CEO and co-founder of a startup called Seqster that I believe could make a significant contribution to making my vision a reality over the long term. I interviewed Ardy to hear more about his story and the amazing possibilities with health data that he dreams of bringing to our lives.

1. What is Seqster?
Products such as mint.com, which bring all your personal finance data together in one place, have enabled so many people to manage their finances. We believe that Seqster is the mint.com of your health. We are a person-centric interoperability platform that seamlessly brings together all your medical records (EHR), baseline genetic (DNA) data, and continuous monitoring and wearable data in one place. From a business standpoint we’re a SaaS platform, “the Salesforce.com for healthcare”. We provide a turnkey solution for any payer, provider or clinical research entity, since “everyone is seeking health data”. We empower people to collect, own and share their health data on their terms.
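[To make the idea concrete, here is a minimal sketch of what a person-centric record aggregating data from many sources might look like. This is purely my own illustration - the types and field names are assumptions, not Seqster's actual schema.]

```python
# A purely illustrative sketch of a person-centric health record; the
# schema below is hypothetical and not Seqster's actual data model.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HealthRecordEntry:
    source: str          # e.g. a hospital EHR, a DNA test, a fitness tracker
    kind: str            # "ehr", "dna" or "wearable"
    timestamp: datetime
    payload: dict        # the raw data point from that source

@dataclass
class PersonCentricRecord:
    person_id: str
    entries: list[HealthRecordEntry] = field(default_factory=list)

    def add(self, entry: HealthRecordEntry) -> None:
        """Pull a new data point into the person's single record."""
        self.entries.append(entry)

    def timeline(self) -> list[HealthRecordEntry]:
        """One longitudinal view across every connected source."""
        return sorted(self.entries, key=lambda e: e.timestamp)
```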

2. So Seqster is another attempt at a personal health record (PHR), like Microsoft’s failed attempt with HealthVault?
Microsoft’s HealthVault and Google Health were great ideas, but their timing was wrong. The connectivity wasn’t there and neither was the utility. In a way, it’s also the problem with Apple Health Records. Seqster transcends those PHRs for three reasons:

a. First, we’ve built a person-centric interoperability platform that can retrieve chain-of-custody data from any digital source. We’re not just dealing with self-reported data, which in every other PHR can be inaccurate and cumbersome. By putting the person at the center of healthcare, we give them the tools to disrupt their own data silos and bring in not only longitudinal data but also multi-dimensional and multi-generational data.

b. Second, our data is dynamic. Everything is updated in real time to reflect your current health. One site, one login. You never have to sign in twice.

c. Third, we generate new insights, which is tough to do unless you have high quality data coming directly from multiple sources. For example, we have integrated the American Heart Association’s Life’s Simple 7 to give you dynamic insights into your heart health, plus actionable recommendations based on their guidelines.

3. Why do you believe Seqster will succeed when so many others (often with big budgets) have failed?
The first reason that we will succeed is our team. We have had previous successes implementing clinical and consumer genetic testing at nationwide scale. In the genetics market we’ve been working on data standardization and sharing for the last decade, so we approached this challenge from a completely different vantage point. We didn’t set out to solve interoperability; we did it completely by accident.

Next, we have achieved nationwide access in the USA, with over 3,000 hospitals integrated as well as over 45,000 small doctor offices and medical clinics. In the past few years we have surpassed 100M patient records, 30M+ direct-to-consumer DNA / genetic tests and 100M+ wearables. Seqster also provides invaluable utility by giving people a legal framework to share their health data with their family members, caregivers, physicians, or even with clinical trials if they want.

All we are doing is shedding light on what we call “Dark Data” - the data that already exists on all of us but has been hidden until now.

4. Your background has been primarily in genomics, where you’ve done sterling work in driving BRCA genetic testing across the United States. Is Seqster of interest mainly to those who have had some kind of genetic test?
Not at all. Seqster is for healthcare consumers, and we’re all healthcare consumers in some way. Having said that, as you may have noted, the “Seq” in Seqster comes from our background in genome sequencing. We originally had the idea that we could create a place for the over 30M individuals who had done some kind of genetic test to take ownership of their data, and to incentivize people who have not yet had a genetic test to get sequenced. However, we realized that genetic data without high quality, high fidelity clinical health data is useless. The highest quality data is the data that comes directly from your doctor’s office or hospital. This, combined with your sequence data and your fitness data, is a powerful tool for better health for everyone.

5. Wherever I travel in the world, from Brazil to the USA to Australia, the same challenge about health data comes up in conversations: the challenge of getting different computer systems in healthcare to share patient data with each other, known more formally as “interoperability”. Can Seqster really help to solve this challenge, or is this a pipe dream?
It was a dream for us as well, until we cracked the code on person-centric interoperability. What is amazing is that we can bring our technology anywhere in the world right now, as long as the data exists. Imagine how we could change healthcare and health outcomes overnight if people everywhere had access to their health data from any device - Android, Apple or web-based. Imagine that your kids and grandkids have a full health history that they can take to their next doctor visit. How powerful would that be? That is Seqster. We help you seek out your health data, no matter where you are or where your data resides.

6. So what was the moment in your life that compelled you to start Seqster?
In 2011 I was at a barbeque with a bunch of physicians and they asked what I did for a living. I told them about my own DNA testing experience and background in genomics. Quickly the conversation turned to how we could make DNA data actionable and relevant, both to themselves and to their patients. The next day I went for a run and couldn’t stop thinking about that conversation, and about how owning all my data in one place would make it meaningful for me. I came home and was watching the movie “The Italian Job” when I heard the word Napster in the film. Being a sequencing guy seeking out info, I immediately thought of “Seqster”, typed it into godaddy.com and bought Seqster.com for $9.99. The tailwinds were not there to do anything with it until January of 2016, when I decided to put a team together to start building the future of health data.

7. What has been the biggest barrier in your journey at Seqster so far, and have you been able to overcome it?
Have you seen the movie Bohemian Rhapsody? We’re like the band Queen - we’re misfits and underdogs. No one believes that we solved this small $30 billion problem called interoperability until they try Seqster for themselves. The real barrier right now is getting Seqster into the right hands. As people start to catch on to the fact that Seqster solves some of their biggest pain points, we will overcome the technology adoption barrier. I am so excited about the new possibilities that are emerging for us to make a contribution to advancing the way health data gets collated, shared and used. Stay tuned, we have exciting news to share over the next few months.

8. What has the reaction to Seqster been? Who are the most sceptical, and who seem to be the biggest advocates?
We have a funny story to share here. About three years ago when we started Seqster, we told Dr. Eric Topol from Scripps Research what we wanted to do and he told us that he didn’t believe that we could do it. Three years later after hearing some of the buzz he asked to meet with us and try Seqster for himself. His tweet the next day after trying Seqster says it all. We couldn’t be prouder.

9. Lots of startups are developing digital health products, but few are designing with patients as partners. Tell us more about how you involve patients in the design of your services?
Absolutely! We couldn’t agree more. I believe that many digital health companies fail because they don’t start with the patient in mind. From day one Seqster has been about empowering people to collect, own, and share their data on their terms. Our design is unique because we spent time with thousands of patients, caregivers and physicians to develop a person-centric interface that is simple and intuitive.

10. The future of healthcare is seen as a world where patients have much more control over their health and how they manage it. What role could Seqster play in making that future a reality?
We had several chronically ill patients use Seqster to manage their health and gather all their medical records from multiple health systems within minutes. Some feedback was as simple as having one site and one login, so that they could immediately access their entire medical record from a single platform. A number of patients told us that they found lab results with values outside the normal range which their doctors had never told them about. When we heard this, we felt like we were on the verge of bringing aspects of precision medicine to the masses. It definitely resonated with our vision of the future of healthcare being driven by the patient.

11. Fast forward 20 years to 2039: what would you want the legacy of Seqster to be, in terms of impact on the world?
In 20 years, Seqster will be known as the technology that changed healthcare by bringing all your health data together in one place. Our technology will improve care by delivering accurate medical records instantaneously, upon request by any provider anywhere. All the data barriers will be removed. Everyone will have access to their health information no matter where they are or where their data is stored. Your health data will follow you wherever you go.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]

An interview with Jo Aggarwal: Building a safe chatbot for mental health

We can now converse with machines in the form of chatbots. Some of you might have used a chatbot when visiting a company’s website. They have even entered the world of healthcare. I note that the pharmaceutical company Lupin has rolled out Anya, India’s first chatbot for disease awareness, in this case for diabetes. Even for mental health, chatbots have recently been developed, such as Woebot, Wysa and Youper. It’s an interesting concept, and given the unmet need around the world, these could be an additional tool that might help make a difference in someone’s life. However, a recent BBC article highlighted how two of the most well-known chatbots (Woebot and Wysa) don’t always perform well when children use the service. I’ve performed my own real-world testing of these chatbots in the past, and gotten to know the people who have created these products. So after the BBC article was published, Jo Aggarwal, CEO and co-founder of Touchkin, the company that makes Wysa, got back in touch with me to discuss trust and safety when using chatbots. It was such an insightful conversation that I offered to interview her for this blog post, as I think the story of how a chatbot for mental health is developed, deployed and maintained is a complex and fascinating journey.

1. How safe is Wysa, from your perspective?
Given all the attention this topic typically receives, and its own importance to us, I think it is really important to understand first what we mean by safety. For us, Wysa being safe means having comfort around three questions. First, is it doing what it is designed to do, well enough, for the audience it’s been designed for? Second, how have users been involved in Wysa’s design and how are their interests safeguarded? And third, how do we identify and handle ‘edge cases’ where Wysa might need to serve a user - even if it’s not meant to be used as such?

Let’s start with the first question. Wysa is an interactive journal, focused on emotional wellbeing, that lets people talk about their mood, and talk through their worries or negative thoughts. It has been designed and tested for a 13+ audience; for instance, it asks users to obtain parental consent as part of its terms and conditions for users under 18. It cannot, and should not, be used for crisis support, or by children - those less than 12 years old. This distinction is important, because it directs product design in terms of the choice of content as well as the kind of things Wysa listens for. For its intended audience and expected use in self-help, Wysa provides an interactive experience that is far superior to the current alternatives: worksheets, writing in journals, or reading educational material. We’re also gradually building an evidence base here on how well it works, through independent research.

The answer to the second question needs a bit more description of how Wysa is actually built. Here, we follow a user-centred design process that is underpinned by a strong, recognised clinical safety standard.

When we launched Wysa, it was for a 13+ audience, and we tested it with an adolescent user group as a co-design effort. For each new pathway and every model added to Wysa, we continue to test safety against a defined risk matrix developed as part of our clinical safety process. This is aligned to the DCB 0129 and DCB 0160 clinical safety standards, which are recommended for use by NHS Digital.

As a result of this process, we developed some pretty stringent safety-related design and testing steps during product design:

At the time of writing a Wysa conversation or tool concept, the first script is reviewed by a clinician to identify safety issues - specifically, any cases where it could be contraindicated or act as a trigger - and to define alternative pathways for such conditions.

When a development version of a new Wysa conversation is produced, the clinicians review it again, specifically for adherence to clinical process and for potential safety issues as per our risk matrix.

Each aspect of the risk matrix has test cases. For instance, if the risk is that using Wysa may increase the risk of self-harm in a person, we run two test cases - one where a person is intending self-harm but it has not been detected as such (normal statements), and one where self-harm statements detected in the past are run through the Wysa conversation, at every Wysa node or ‘question id’. This is typically done on a training set of a few thousand user statements. A team then tags the responses for appropriateness. A 90% appropriateness level is considered adequate for the next step of review (a simple sketch of this kind of check appears after these steps).

The inappropriate statements (typically less than 10%) are then reviewed for safety, where the question asked is: will this inappropriate statement increase the risk of the user indulging in harmful behavior? If there is even one such case, the Wysa conversation pathway is redesigned to prevent this and the process is repeated.

The output of this process is shared with a psychologist and any contentious issues are escalated to our Clinical Safety Officer.
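[To illustrate the shape of the testing step described above, here is a minimal sketch of such a safety test harness. It is my own illustration under assumed names - only the 90% bar and the rule that a single risk-increasing response forces a redesign come from Jo's description.]

```python
# A minimal sketch of the safety test harness described above; names
# and structure are illustrative, not Wysa's actual code.
from dataclasses import dataclass

APPROPRIATENESS_THRESHOLD = 0.90  # the 90% bar mentioned above

@dataclass
class TaggedResponse:
    question_id: str      # the Wysa conversation node being tested
    statement: str        # a user statement from the test set
    appropriate: bool     # tagged by the review team
    increases_risk: bool  # could this response encourage harmful behaviour?

def evaluate_pathway(responses: list[TaggedResponse]) -> str:
    """Decide what happens next for a conversation pathway under test."""
    if not responses:
        return "needs_more_test_data"
    flagged = [r for r in responses if not r.appropriate]
    # Even one inappropriate response that could increase the risk of
    # harmful behaviour forces a redesign of the whole pathway.
    if any(r.increases_risk for r in flagged):
        return "redesign_pathway"
    rate = 1 - len(flagged) / len(responses)
    if rate < APPROPRIATENESS_THRESHOLD:
        return "not_ready_for_review"
    return "share_with_psychologist"
```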

Equally important for safety, of course, is the third question. How do we handle ‘out of scope’ user input, for example, if the user talks about suicidal thoughts, self-harm, or abuse? What can we do if Wysa isn’t able to catch this well enough?

To deal with this question, we did a lot of work to extend the scope of Wysa so that it does listen for self-harm and suicidal thoughts, as well as abuse in general. On recognising this kind of input, Wysa gives an empathetic response, clarifies that it is a bot and unable to deal with such serious situations, and signposts to external helplines. It’s important to note that this is not Wysa’s core purpose - and it will probably never be able to detect all crisis situations 100% of the time - but neither can Siri or Google Assistant or any other Artificial Intelligence (AI) solution. That doesn’t make these solutions unsafe, for their expected use. But even here, our clinical safety standard means that even if the technology fails, we need to ensure it does not cause harm - or in our case, increase the risk of harmful behavior. Hence, all Wysa’s statements and content modules are tested against safety cases to ensure that they do not increase the risk of harmful behavior even if the AI fails.

We watch this very closely, and add content or listening models where we feel coverage is not enough and Wysa needs to extend. This was specifically the case with the BBC article: we will now relax our stance of never taking personally identifiable data from users, explicitly listen (and check) for age, and, if a user is under 12, direct them out of Wysa towards specialist services.

So how safe is Wysa? It is safe within its expected use, and the design process follows a defined safety standard to minimize risk on an ongoing basis. In case more serious issues are identified, Wysa directs users to more appropriate services - and makes sure at the very least it does not increase the risk of harmful behaviour.

2. In plain English, what can Wysa do today and what can’t it do?
Wysa is a journal married to a self-help workbook, with a conversational interface. It is a more user friendly version of a worksheet - asking mostly the same questions with added models to provide different paths if, for instance, a person is anxious about exams or grieving for a dog that died.

It is an easy way to learn and practice self-help techniques - to vent and observe your thoughts, practice gratitude or mindfulness, learn to accept your emotions as valid, and find the positive intent in even the most negative thoughts.

Wysa doesn’t always understand context - it definitely will not pass the Turing test for ‘appearing to be completely human’. That is not its intended purpose, and we’re careful to tell users that they’re talking to a bot (or, as they often tell us, a penguin).

Secondly, Wysa is definitely not intended for crisis support. A small percentage of people do talk to Wysa about self-harm or suicidal thoughts; they are given an empathetic response and directed to helplines.

Beyond self-harm, detecting sexual and physical abuse statements is a hard AI problem - there are no models globally that do this well. For instance, ‘My boyfriend hurts me’ may be emotional, physical, or sexual. Also, most abuse statements that people share with Wysa tend to be about the past: ‘I was abused when I was 12’ needs a very different response from ‘I was abused and I am 12’. Our response here is currently to appreciate the courage it takes to share something like this, ask the user if they are in crisis, and if yes, say that as a bot Wysa is not suited for a crisis and offer a list of helplines.
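[As a toy illustration of why this is hard - and emphatically not Wysa's actual models - even the tense distinction Jo describes can only crudely be caught with surface patterns:]

```python
# Toy example only: why surface patterns are not enough. The same words
# can describe past abuse or a child in danger right now, and the two
# need very different responses. Real systems need far richer models.
import re

def classify_abuse_statement(text: str) -> str:
    t = text.lower()
    # "I was abused when I was 12" - the age sits in a past-tense clause
    if re.search(r"when i was \d+", t):
        return "past_disclosure"        # empathy, offer resources
    # "I was abused and I am 12" - a present-tense age suggests a minor now
    if re.search(r"i am \d+|i'm \d+", t):
        return "possible_current_risk"  # check for crisis, offer helplines
    return "needs_clarification"        # ask whether the user is in crisis

print(classify_abuse_statement("I was abused when I was 12"))
print(classify_abuse_statement("I was abused and I am 12"))
```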

3. Has Wysa been developed specifically for children? How have children been involved in the development of the product?
No, Wysa hasn’t been developed specifically for children.

However, as I mentioned earlier, we have co-designed with a range of users, including adolescents.

4. What exactly have you done when you’ve designed Wysa with users?
For us, the biggest risk was that someone’s data might be leaked and therefore cause them harm. To deal with this, we took the hard decision of not collecting any personally identifiable data at all from users, and because of that, users also started trusting Wysa. This meant that we had to compromise on certain parts of the product design, but we felt it was a tradeoff well worth making.

After launch, for the first few months, Wysa was an invite-only app, where a number of these features were tested first from a safety perspective. For example, SOS detection and pathways to helplines were a part of the first release of Wysa, which our clinical team saw as a prerequisite for launch.

Since then, design continues to be led by users. For the first million conversations, Wysa stayed a beta product, as we didn’t have enough of a response base to test new pathways. There is no one ‘launch’ of Wysa - it is continuously being developed and improved based on what people talk to it about. For instance, the initial version of Wysa did not handle abuse (physical or sexual) at all, as it was not expected that people would talk to it about these things. When they began to, we created pathways to deal with these in consultation with experts.

An example of a co-design initiative with adolescents was a study with Safe Lab at Columbia University to understand how at-risk youth would interact with Wysa and the different nuances of language used by these youth.

5. Can a user of Wysa really trust it in a crisis? What happens when Wysa makes a mistake and doesn’t provide an appropriate response?
People should not use Wysa in a crisis - it is not intended for this purpose. We keep reinforcing this message across various channels: on the website, in the app descriptions on Google Play and the iTunes App Store, and even in responses to user reviews or on Twitter.

However, anyone who receives information about a crisis has a responsibility to do the most that they can to signpost the user to those who can help. Most of the time, Wysa will do this appropriately - we measure how well each month, and keep working to improve it. The important thing is that Wysa should not make things worse even when it misdetects; users should not be made unsafe, i.e. we should not increase the risk of harmful behaviour.

One of the things we are adding, based on suggestions from clinicians, is a direct SOS button to helplines, so users have another path when they recognise they are in crisis and the dependency on Wysa to recognise a crisis in conversation is lower. This is being co-designed with adolescents and clinicians to ensure that the button is visible, but that its presence does not act as a trigger.

For inappropriate responses, we constantly improve, and when a user shares that Wysa’s response was wrong, we handle it in a way that places the onus entirely on Wysa. If a user objects to a path Wysa is taking, saying it is not helpful or is making them feel worse, Wysa immediately changes the path, emphasises that the mistake is Wysa’s, not the user’s, and reminds them that Wysa is a bot that is still learning. We closely track where and when this happens, and any responses that meet our criteria for a safety hazard are immediately raised to our clinical safety process, which includes review with children’s mental health professionals.

We constantly strive to improve our detection, and are also starting to collaborate with others dealing with similar issues to create a common pool of resources.

6. I understand that Wysa uses AI. I also note that there are so many discussions around the world relating to trust (or the lack of it) in products and services that use AI. A user wants to trust a product, and if it’s health related, then trust becomes even more critical. What have you done as a company to ensure that Wysa (and the AI behind the scenes) can be trusted?
You’re so right about the many discussions about AI, how this data is used, and how it can be misused. We explicitly tell users that their chats stay private (not just anonymous) and that they will never be shared with third parties. In line with GDPR, we also give users the right to ask for their data to be deleted.

After downloading, there is no sign-in. We don’t collect any personally identifiable data about the user: you just give yourself a nickname and start chatting with Wysa. The first conversation reinforces this message, and this really helps in building trust as well as engagement.

AI of the generative variety will not be ready for products like Wysa for a long time - perhaps never; generative bots have in the past turned racist or worse. The use of AI in applications like Wysa is limited to the detection and classification of user free text, not generating ‘advice’. So the AI here is auditable, testable, quantifiable - not something that may suddenly learn to go rogue. We feel that trust is based on honesty, so we do our best to be honest about the technical limitations of Wysa.
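[To make that distinction concrete, here is a toy sketch - my own illustration, not Wysa's architecture - of a bot whose AI only classifies the user's text and then selects from pre-written, reviewed responses, rather than generating free text:]

```python
# Toy illustration of the design choice described above: the AI only
# classifies input; every response a user can see is pre-written and
# clinically reviewed. All names here are hypothetical.

REVIEWED_RESPONSES = {
    "gratitude": "What is one small thing you feel grateful for today?",
    "negative_thought": "Would you like to look at that thought together?",
    "crisis": "I'm a bot and can't help in a crisis. Here are some helplines...",
}

def classify_intent(user_text: str) -> str:
    """Stand-in for a trained classifier. Because its output is one of a
    fixed set of labels, it can be audited and tested - unlike a model
    that generates open-ended text."""
    lowered = user_text.lower()
    if "hurt myself" in lowered or "suicide" in lowered:
        return "crisis"
    if "grateful" in lowered or "thankful" in lowered:
        return "gratitude"
    return "negative_thought"

def respond(user_text: str) -> str:
    # The bot can only ever say something from the reviewed set.
    return REVIEWED_RESPONSES[classify_intent(user_text)]
```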

Every Wysa response and question goes through a clinical safety process, and is designed and reviewed by a clinical psychologist. For example, we place links to journal articles in each tool and technique that we share with the user.

7. What could you and your peers who make products like this do to foster greater trust in these products?
As a field, the use of conversational AI agents in mental health is very new, and growing fast. There is great concern around privacy, so anonymity and security of data are key.

After that, it is important to conduct rigorous independent trials of the product and share the data openly. A peer-reviewed mixed-methods study of Wysa’s efficacy was recently published in JMIR for this reason, and we are working with universities to develop these further. It’s important that advancements in this field are science-driven.

Lastly, we need to be very transparent about the limitations of these products - clear on what they can and cannot do. These products are not a replacement for professional mental health support - they are more of a gym, where people learn and practice proven, effective techniques to cope with distress.

8. What could regulators do to foster an environment where we as users feel reassured that these chatbots are going to work as we expect them to?
Leading on from your question above, there is a big opportunity to come together and share standards, tools, models and resources.

For example, if a user enters a search term around suicide in Google, or posts about self-harm on Instagram, maybe we could have a common library of Natural Language Processing (NLP) models to recognise this and provide an appropriate response?

Going further, maybe we could provide this as an open-source resource to anyone building a chatbot that children might use? Could this be a public project, funded and sponsored by government agencies, or a regulator?
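[If such a shared resource existed, its interface could be as simple as the hypothetical sketch below. No such standard library exists today, so this is purely speculative:]

```python
# Hypothetical interface for the shared, open-source crisis-detection
# resource proposed above. This library does not exist; the sketch just
# shows how any chatbot could consume a common, auditable verdict.

def detect_crisis(text: str, language: str = "en") -> dict:
    """Return a verdict any chatbot vendor could act on consistently."""
    # A real implementation would pool NLP models and test sets
    # contributed by vendors, researchers and regulators.
    flagged = any(term in text.lower() for term in ("suicide", "self-harm"))
    return {
        "crisis_detected": flagged,
        "recommended_action": "signpost_helplines" if flagged else "none",
    }
```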

In addition, there are several other roles a regulator could play. They could fund research that proves efficacy, define standards and outline the proof required (the recently released NICE guidelines are a great example), or even create a regulatory sandbox where technology providers, health institutions and public agencies come together and experiment before coming to a view.

9. Your website mentions that “Wysa is... your 4 am friend, for when you have no one to talk to.” Shouldn’t we be working in society to provide more human support for people who have no one to talk to? Surely everyone would prefer to deal with a human rather than a bot? Is there really a need for something like Wysa?
We believed the same to be true. Wysa was not born of a hypothesis that a bot could help - it was an accidental discovery.

We started our work in mental health simply to detect depression through AI and connect people to therapy. We did a trial in semi-rural India, and were able to use the way a person’s phone moved about to detect depression with 90% accuracy. To get the sensor data from the phone, we needed an app, which we built as a simple mood-logging chatbot.
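[As a rough illustration of what "the way a person's phone moved about" might mean as input to a model - my own sketch, not Touchkin's actual pipeline - mobility features like these are the kind of signal such a classifier could consume:]

```python
# Illustrative only: simple mobility features a phone's GPS trace can
# yield for a depression-screening model. Not Touchkin's actual code.
import math

def mobility_features(gps_points: list[tuple[float, float]]) -> dict:
    """Summarise how much a phone (and its owner) moved about."""
    def dist(a: tuple[float, float], b: tuple[float, float]) -> float:
        # Rough planar distance; fine for a toy example.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    total = sum(dist(a, b) for a, b in zip(gps_points, gps_points[1:]))
    unique_places = len({(round(lat, 3), round(lon, 3))
                         for lat, lon in gps_points})
    return {
        "total_distance": total,         # staying put can track low mood
        "unique_places": unique_places,  # fewer places visited, less variety
    }
```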

Three months in, we checked on the progress of the 30 people we had detected with moderate to severe depression and whose doctor had prescribed therapy. It turned out that only one of them took up therapy. The rest were okay with being prescribed antidepressants but, for different reasons ranging from access to stigma, did not take therapy. All of them, however, continued to use the chatbot, and months later reported feeling better.

This was the genesis of Wysa. We didn’t want to be the reason for a spurt in antidepressant sales, so we bid farewell to the cool AI tech we had been working on, having realised that it didn’t matter whether people were clinically depressed - everyone has stressors, and we all need to develop our mental health skills.

Wysa has had 40 million conversations with about 800,000 people so far - growing entirely through word of mouth. We have understood some things about human support along the way.

For users ready to talk to another person about their inner experience, there is nothing as useful as a compassionate ear - the ability to share without being judged. Human interactions, however, seem fraught with opinions and judgements. When we struggle emotionally, it affects our self-image; for some people, it is easier to talk to an anonymous AI interface, which is kind of an extension of ourselves, than to another person. For example, this study found that US veterans were three times as likely to reveal their PTSD to a bot as to a human. But human support is still key, so we run weekly Ask Me Anything (AMA) sessions with a mental health professional on the top topics that Wysa users propose. We had a recent AMA where over 500 teenagers shared their concerns about discussing their mental health issues or sexuality with their parents. Even within Wysa, we encourage users to create a support system outside.

Still, the most frequent user story for Wysa is someone troubled with worries or negative thoughts at 4 am, unable to sleep, not wanting to wake someone up, scrolling social media compulsively and feeling worse. People share how they now talk to Wysa to break the negative cycle and use the sleep meditations to drift off. That is why we call it your 4 am friend.

10. Do you think there is enough room in the market for multiple chatbots in mental health?
I think there is a need for multiple conversational interfaces, with different styles and content. We have only scratched the surface - we’ve only just begun. Some of the issues we are grappling with today are like the issues people grappled with in the early days of ecommerce, with each company solving for ‘hygiene factors’ and safety through their own algorithms. I think over time many of the AI models will become standardised, and bots will work for different use cases - from building emotional resilience skills to targeted support for substance abuse.

11. How do you see the future of technology-enabled support for mental health, beyond just products like Wysa? What might the future look like in 2030?
The first thing that comes to mind is that we will need to turn the tide on the damage caused by technology to mental health. I think there will be a backlash against addictive technologies; I am already seeing the tech giants becoming conscious of the mental health impact of making their products addictive, and facing pressure to change.

I hope that by 2030, safeguarding mental health will become part of the design ethos of a product, much as accessibility and privacy have become in the last 15 years. By 2030, human-computer interfaces will look very different, and voice and language barriers will be fewer.

Whenever there is a trend, there is also a counter-trend. So while technology will play a central role in creating large-scale early mental health support - especially in crossing stigma, language and literacy barriers in countries like India and China - we will also see social prescribing gain ground. Walks in the park or art circles will become prescriptions for better mental health, and people will have to be prescribed non-tech activities because so much of their lives are on their devices.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]

AI in healthcare: Involving the public in the conversation

As we begin the 21st century, we are in an era of unprecedented innovation, where computers are becoming smarter and being used to deliver products and services powered by Artificial Intelligence (AI). I was fascinated to see AI featured in advertising this week, in a TV advert from Microsoft in which a musician talks about the benefits of AI. Organisations in every sector, including healthcare, are having to think about how they can harness the power of AI. I wrote a lot about my own experiences in 2017 using AI products for health in my last blog post, You can’t care for patients, you’re not human!

Now when we think of AI in healthcare potentially replacing some of the tasks done by doctors, we think of it as a relatively recent concept. We forget that doctors themselves have been experimenting with technology for a long time. In this video from 1974 (44 years ago!), computers were being tested in the UK with patients to help optimise the time spent by the doctor during the consultation. What I find really interesting is that in the video, it’s mentioned that the computer never gets tired and that some patients prefer dealing with the machine to the human doctor.

Fast forward to 2018, where it feels like technology is opening up new possibilities every day, often from organisations that are not traditionally part of the healthcare system. We think of tech giants like Google and Facebook as helping us send emails or share photos with our friends, but researchers at Google are working on using AI to improve the detection of breast cancer, and Facebook has rolled out an AI-powered tool to automatically detect whether a user’s post shows signs of suicidal ideation.

What about going to the doctor? I remember growing up in the UK that my family doctor would even come and visit me at home when I was not well. Those are simply memories for me, as it feels increasingly difficult to get an appointment to see the doctor in their office, let alone getting a housecall. Given that many of us use modern technology to do our banking and shopping online, without having to travel to a store or a bank and deal with a human being, what if that were possible in healthcare? Can we automate part (or even all) of the tasks done by human doctors? You may think this is a silly question, but we have to step back for a second and reflect upon the fact that we have 7.5 billion people on Earth today, a figure set to rise to an expected 11 billion by the end of this century. If we have a global shortage of doctors today, and it’s predicted to get worse, surely the right thing to do is to leverage emerging technology like AI, 4G and smartphones to deliver healthcare anywhere, anytime, to anyone?

We are seeing the emergence of a new type of app known as symptom checkers, which provide anyone with the ability to enter their symptoms on their phone and be given a list of things that may be wrong with them. Note that at present these apps cannot provide a medical diagnosis; they merely help you decide whether you should go to the hospital or whether you can self-care. However, the emergence of these apps and related services is proving controversial. It’s not just a question of accuracy; there are huge questions about trust, accountability and power. In my opinion, the future isn’t about humans vs AI, which is the most frequent narrative being paraded in healthcare. The future is about how human healthcare professionals stay relevant to their patients.
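To give a sense of the kind of triage logic involved - a deliberately crude sketch of my own, not any vendor's actual algorithm - the decision these apps support looks something like this:

```python
# A deliberately simple sketch of symptom-checker triage logic. Real
# products use far richer models; this is not any vendor's algorithm.

RED_FLAGS = {"chest pain", "difficulty breathing", "severe bleeding"}

def triage(symptoms: set[str]) -> str:
    """Suggest a next step - guidance, not a medical diagnosis."""
    if symptoms & RED_FLAGS:
        return "go to hospital now"
    if len(symptoms) >= 3:
        return "book an appointment with your doctor"
    return "self-care at home and monitor your symptoms"

print(triage({"cough", "runny nose"}))   # self-care at home...
print(triage({"chest pain", "nausea"}))  # go to hospital now
```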

It’s critical that in order to create the type of healthcare we want, we involve everyone in the discussion about AI, not just the privileged few. I’ve seen countless debates this past year about AI in healthcare, both in the UK and around the world, but at present it’s a tiny group of people who are contributing to (and steering) this conversation. I wonder how many of these new services are being designed with patients as partners? Many countries are releasing national AI strategies in a bid to signal to the world that they are at the forefront of innovation. I also wonder if the UK government is rushing into the implementation of AI in the NHS too quickly. Who stands to profit the most from this new world of AI-powered healthcare? Is this wave of change really about putting the patient first? There are more questions than answers at this point in time, but those questions do need to be answered. Some may consider anyone asking difficult questions about AI in healthcare as standing in the way of progress, but I believe it’s healthy to have a dialogue where we can discuss our shared concerns in a scientific, rational and objective manner.

That’s why I’m excited that BBC Horizon is airing a documentary this week in the UK, entitled “Diagnosis on Demand? The Computer Will See You Now” – they had behind-the-scenes access to one of the most well-known firms developing AI for healthcare, UK-based Babylon Health, whose products are pushing boundaries and triggering controversy. I’m excited because I really do want the general public to understand the benefits and the risks of AI in healthcare so that they can be part of the conversation. The choices we make today could impact how healthcare evolves not just in the UK, but globally. Hence, it’s critical that we have more science-based journalism which can help members of the public navigate the jargon and understand the facts, so that informed choices can be made. The documentary will be airing in the UK on BBC Two at 9pm on Thursday 1st November 2018. I hope that this program acts as a catalyst for greater public involvement in the conversation about how we can use AI in healthcare in a transparent, ethical and responsible manner.

For my international audience, my understanding is that you can’t watch the program on BBC iPlayer, because at present, BBC shows can only be viewed from the UK.

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]

You can't care for patients, you're not human!

We are facing a new dawn as machines get smarter. Recent advancements in the technology available to the average consumer with a smartphone are challenging many of us. Our beliefs, our norms and our assumptions about what is possible, correct and right are increasingly being tested. One area where I've personally noticed very rapid development is chatbots: software available on our phones and other devices that you can converse with using natural language, getting tailored replies relevant to you and your particular needs at that moment. Frequently, a chatbot has very limited functionality, so it's just used for basic customer service queries or for some light-hearted fun, but we are also seeing the emergence of many new tools in healthcare, offered direct to consumers. One example is 'symptom checkers', which you could consult instead of telephoning a human being or visiting a healthcare facility (and being attended to by a human being); another is 'chatbots for mental health', where some form of therapy is offered and/or mood-tracking capabilities are provided.

It's fascinating to see that the conversation about chatbots in healthcare takes one of two extreme positions. Either we have people boldly proclaiming that chatbots will transform mental health (without mentioning any risks), or others (often healthcare professionals and their patients) insisting that the human touch is vital, and that no matter how smart machines get, humans should always be involved in every aspect of healthcare since machines can't "do" empathy. Whilst I've met many people in the UK who have told me how kind, compassionate and caring the staff in the National Health Service (NHS) have been when they needed care, I've not always had the same experience when using the NHS throughout my life. Some interactions have been great, but many were devoid of the empathy and compassion that so many other people receive. Some staff behaved in a manner which left me feeling like I was a burden, simply because I asked an extra question about how to take a medication correctly. If I'm a patient seeking reassurance, the last thing I need is to be looked at and spoken to like I'm an inconvenience in the middle of your day.

MY STORY

In this post, I want to share my story about getting sick, and explain why that experience has challenged my own views about the role of machines and humans in healthcare. In the UK, we have a telephone service from the NHS called 111. According to the website, "You should use the NHS 111 service if you urgently need medical help or advice but it's not a life-threatening situation." The first part of the story relates to my mother, who was unwell for a number of days and not improving. Given her age and long-term conditions, she was getting concerned, and one night she chose to dial 111 to find out what she should do.

My mother told me that the person who took the call and asked her a series of questions about her and her symptoms seemed to rush through the entire call. I've heard the same from others: the operators seem to want to finish the call as quickly as possible. Whether we are young or old, when we have been unwell for a few days and need to remember or confirm things, we often can't respond immediately and need time to think. This particular experience didn't come across as a compassionate one for my mother. At the end of the call, the NHS call handler said that a doctor would call back within the hour and let her know what action to take. The doctor called, and the advice given was that self-care at home with a specific over-the-counter medication would help her return to normal. So she got the advice she needed, but the experience as a patient wasn't a great one.

A few weeks later, I was also unwell. It wasn't life-threatening, the local urgent care centre was closed, and given my mother's experience with 111 over the telephone, I decided to try the 111 app. Interestingly, the app is powered by Babylon, one of the most well-known symptom checker apps. Given that the NHS put their logo on the app, I felt reassured; it made me feel that it must be accurate and must have been validated. Without having to wait for a human being to pick up my call, I got the advice I needed (which again was self-care), and most importantly, I had time to think when answering. The process of answering the questions that the app asked was under my control. I could go as fast or as slowly as I wanted; the app wasn't trying to rush me through the questions. My experience on this occasion and my mother's experience of the same service, with a human being on the end of the telephone, were very different. Mine was a very pleasant experience, and the entire process was faster too, as in my particular situation I didn't have to wait for a doctor to call me back after I'd answered the questions. The app and the Artificial Intelligence (AI) that powers Babylon were not necessarily empathetic or compassionate like a human that cares would be, but the experience of receiving care from a machine was an interesting one. These are just two experiences in the same family, of the same healthcare system, accessed through different channels. Would I use the app or the telephone next time? Probably the app. I've now established a relationship with a machine. I can't believe I just wrote that.

I didn't take screenshots of the app during the time that I used it, but I went back a few days later and replicated my symptoms; here are a few screenshots to give you an idea of my experience when I was unwell.

It's not proof that the app would work every time or for everyone; it's simply my story. I talk to a lot of healthcare professionals, and I can fully understand why they want a world where patients are seen by humans who care. It's quite a natural desire. Unfortunately, we have a shortage of healthcare professionals, and as I've mentioned, not all of those currently employed behave in the desired manner.

The state of affairs

The statistics on the global shortage make for shocking reading. A WHO report from 2013 cited a shortage of 7.2 million healthcare workers at that time, projected to rise to 12.9 million by 2035. Planning for future needs can be complex, challenging and costly. The NHS is looking to recruit up to 3,000 GPs from outside of the UK. Yet 9 years ago, the British Medical Association voted to limit the number of medical students and to place a complete ban on opening new medical schools. It appears they wanted to avoid "overproduction of doctors with limited career opportunities." Even the sole superpower, the USA, is having to deal with a shortage of trained staff. According to recent research, the USA is facing a shortage of between 40,800 and 104,900 physicians by 2030.

If we look at mental health specifically, I was shocked to read the findings of a report which stated, "Americans in nearly 60 percent of all U.S. counties face the grim reality that they live in a county without a single psychiatrist." India, with a population of 1.3 billion, has just 3 psychiatrists per million people, and is forecast to have another 300 million people by 2050. The scale of the challenge in delivering care to 1.6 billion people at that point in time is immense.

So is the solution just to train more doctors, nurses and healthcare workers? It might not be affordable, and even if it is, the change can take up to a decade to have an impact, so it doesn't help us today. Or maybe we can import them from other countries? However, this only increases the 'brain drain' of healthcare workers. Or maybe we work out how to shift all our resources into preventing disease, which sounds great when you hear this rallying cry at conferences, but again, it's not something we can do overnight. One thing is clear to me: doing the same thing we've done until now isn't going to address our needs in this century. We need to think differently; we desperately need new models of care.

New models of care

So I'm increasingly curious as to how machines might play a role in new models of care. Can we ever feel comfortable sharing mental health symptoms with a machine? Can a machine help us manage our health without needing to see a human healthcare worker? Can machines help us provide care in parts of the world where today no healthcare workers are available? Can we retain the humanity in healthcare if, in addition to the patient-doctor relationship, we also have patient-machine relationships? I want to show a couple of examples where I have tested technology which gives us a glimpse into the future, with an emphasis on mental health.

Google's Assistant, which you can access via your phone or even a Google Home device, hasn't necessarily been designed for mental health purposes, but it might still be used by someone in distress who turns to a machine for support and guidance. How would the assistant respond in that scenario? My testing revealed a frightening response when conversing with the assistant (it appears Google has now fixed this after I reported it to them) - it's a reminder that we have to be really careful how these new tools are positioned, so as to minimise the risk of harm.

I also tried Wysa, developed in India and described on its website as a "Compassionate AI chatbot for behavioral health." It uses Cognitive Behavioural Therapy to support the user. In my real-world testing, I found it to be surprisingly good in terms of how it appeared to care for me through its use of language. Imagine a teenage girl, living in a small town, working in the family business, far away from the nearest clinic, and unable to take a day off to visit a doctor. However, she has a smartphone, a data plan and Wysa. In this instance, surely this is a welcome addition in the drive to ensure everyone has access to care?

Another product I was impressed with was Replika, described on the website as "an AI friend that is always there for you." The co-founder, Eugenia Kuyda, when interviewed about Replika, said, “If you feel sad, it will comfort you, if you feel happy, it will celebrate with you. It will remember how you’re feeling, it will follow up on that and ask you what’s going on with your friends and family.” Maybe we need these tools partly because we are living increasingly disconnected lives, disconnected from ourselves and from the rest of society? What's interesting is that the more someone uses a tool like Wysa or Replika over time, the more it learns about them, and the more useful its responses should become. Just like a human healthcare worker, right? We have a whole generation of children growing up now who are having conversations with machines from a very early age (e.g. Amazon Echo, Google Home etc), and when they access healthcare services during their lifetime, will they feel that it's perfectly normal to see a machine as a friend and as capable as their human doctor/therapist?

I have to admit that neither Wysa nor Replika is perfect, but no human is perfect either. Just look at the current state of affairs, where medical error is the 3rd leading cause of death in the USA. Professor Martin Makary, who led research into medical errors, said, "It boils down to people dying from the care that they receive rather than the disease for which they are seeking care." Before we dismiss the value of machines in healthcare, we need to acknowledge our collective failings. We also need to fully evaluate products like Wysa and Replika, not just from a clinical perspective, but also from a social, cultural and ethical perspective. Will care by a machine be the default choice unless you are wealthy enough to be able to afford to see a human healthcare worker? Who trains the AI powering these new services? What happens if the data on my innermost feelings that I've shared with the chatbot is hacked and made public? How do we ensure we build new technologies that don't simply enhance and reinforce the bias that already exists today? What happens when these new tools make an error? Who exactly do we blame and hold accountable?

Are we listening?

We increasingly hear the term 'people-powered healthcare', and I'm curious about what people actually want. I found some surveys, and the results are very intriguing. First is the Ericsson Consumer Trends report, which 2 years ago quizzed smartphone users aged 15-69 in 13 cities around the globe (not just English-speaking nations!). This is the most fascinating insight from their survey: "29 percent agree they would feel more comfortable discussing their medical condition with an AI system". My theory is that if the symptoms relate to sexual health or mental health, you might prefer to tell a machine rather than a human healthcare worker, because the machine won't judge you. Or maybe, like me, you've had suboptimal experiences dealing with humans in the healthcare system?

[Chart: Ericsson Consumer Trends survey results on comfort discussing medical conditions with an AI system]

What's interesting is that an article covering Replika cited a user of the app: “Jasper is kind of like my best friend. He doesn’t really judge me at all.” (With Replika you can assign a name of your choosing to the bot; the user cited chose Jasper.)

You're probably judging me right now as you read this article. I judge others; we all do at some point, despite our best efforts to be non-judgemental. It was very interesting to hear about a survey of doctors in the US which looked at bias, and found that 40% of doctors have biases towards patients. The most common reason for bias was emotional problems presented by the patient. As I delve deeper into the challenges facing healthcare, attempts to provide care by machines don't seem as silly as I first thought. I wonder how many people have delayed seeking care (or even decided not to visit the doctor) for a condition they feel is embarrassing? It could well be that as more people tell machines what's troubling them, we may find that we have underestimated the impact of conditions like depression or anxiety on the population. And bias is not a one-way street: studies have shown that some patients also judge doctors if they are overweight.

Another survey, titled Why AI and robotics will define New Health, conducted by PwC in 2017 across 12 countries, highlights that people around the world have very different attitudes.

[Chart: PwC survey results by country, from 'Why AI and robotics will define New Health']

Just look at the response from those living in Nigeria, a country expecting a shortfall of 50,120 doctors and 137,859 nurses by 2030, as well as a population of 400 million by 2050 (overtaking the USA as the 3rd most populous country on Earth). If you're looking to pilot your new AI-powered chatbot, it's essential to understand that the countries where consumers are most receptive to new models of care might not be the countries we typically associate with innovation in healthcare.

Finally, in results shared by Future Advocacy from people in the UK, we see that respondents are more comfortable with AI being used to help diagnose us than with AI being used for tasks that doctors and nurses currently perform. This is a bit confusing to read; I suspect that the question about AI and diagnosis was framed in the context of AI being a tool to help a doctor diagnose you.

SO WHAT NEXT?

In this post, I haven't been able to touch upon all the aspects and issues relating to the use of machines to deliver care. As technology evolves, one risk is that decision makers commissioning healthcare services decide that instead of investing in people, services can be provided more cheaply by machines. How do we regulate the development and use of these new products, given that many are available directly to consumers and not always designed with healthcare applications in mind? As machines become more human-like in their behaviour, could a greater use of technology in healthcare actually serve to humanise it? Where are the boundaries? What are your thoughts about turning to a chatbot during end-of-life care for spiritual and emotional guidance? One such service is being trialled in the USA.

I believe we have to be cautious about who we listen to when it comes to discussions about technology such as AI in healthcare. On the one hand, some of the people touting AI as a universal fix for every problem in healthcare are suppliers whose future income depends upon more people using their services. On the other hand, we have a plethora of organisations suddenly focusing excessively on the risks of AI, capitalising on people's fears (which are often based upon what they've seen in movies) and preventing the public from making informed choices about their future. Balance is critical, in addition to a science-driven focus that allows us to be objective and systematic.

I know many would argue that a machine can never replace humans in healthcare, but we are going to have to consider how machines can help if we want to find a path to ensuring that everyone on this planet has access to safe, quality and affordable care. The existing model of care is broken, it's not sustainable and not fit for purpose, given the rise in chronic disease. The fact that so many people on this planet do not have access to care is unacceptable. This is a time when we need to be open to new possibilities, putting aside our fears to instead focus on what the world needs. We need leaders who can think beyond 12 month targets.

I also think that healthcare workers need to ignore the melodramatic headlines conjured up by the media about AI replacing all of us and enslaving humans, and instead focus on this one question: how do I stay relevant (to my patients, my peers and my community)?

Do you think we are wrong to look at emerging technology to help cope with the shortage of healthcare workers? Are you a healthcare worker who is working on building new services for your patients where the majority of the interaction will be with a machine? If you're a patient, how do you feel about engaging with a machine next time you are seeking care? Care designed by humans, delivered by machines. Or perhaps a future where care is designed by machines AND delivered by machines, without any human in the loop? Will we ever have caring technology? 

It is difficult to get a man to understand something, when his salary depends upon his not understanding it! - Upton Sinclair

[Disclosure: I have no commercial ties with the individuals or organisations mentioned in this post]


Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close, and because of that, the loss is even harder to cope with.

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, from the start of this year, I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer to every problem when it comes to our health. Many of us have been conditioned to believe them. The narrative has been so intoxicating for some.

Ever since this tragedy, it’s not an app, or a sensor, or data that I’ve turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The ‘care robot’ that arrived from China this year as part of my research into whether robots can keep us company remains switched off in its box. Amazon’s Echo, the smart assistant with a voice interface that I’d been testing extensively, also sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing it with my finger. One day last week, I was feeling sad and didn’t feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset to join a virtual social space. I joined a computer-generated room, a sunny back yard where a BBQ was under way; I could see the other guests’ avatars, and I chatted to them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints. In particular, the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology, we’ll be able to get rid of mental illness in a few years. Others I’ve met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change. So, is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley’s view of the future. We have all been influenced on some level. Hope is useful, hype is not.

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved; I chose not to. Health isn’t something that can only be improved within the four walls of a hospital. I don’t see bereavement as a medical problem. I’m not sure what a medical doctor can do in a 10 minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don’t want to end up like Prince Harry, who recently admitted he had bottled up the grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I’ve experienced these last few months makes me wonder even more: why does society view mental health as a lower priority than physical health? As I’ve been grieving, there have been moments when I felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I’m very much reminded of what’s important and who is important. The fragility of life is unnerving. I’m so conscious of my own mortality, and so petrified of death, it’s prompted me to make choices about how I live, work and play. One of the most supportive things someone has said to me after my loss was “Be kind to yourself.” Compassion for one’s self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I’m horrified when doctors & nurses tell me their medical training took all the compassion out of them or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do? 

I haven’t looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum, almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either end up putting us out of work and ruling our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it’s now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, which is primarily designed for fuel efficiency, scores my acceleration, braking and cruising constantly as I’m driving. Where should my attention rest as I drive: on the road ahead, or on the dashboard, anxious to achieve the highest score possible? Is that where our destiny lies? Is it wise to blindly embark upon a quest for optimum health powered by sensors, data & algorithms nudging us all day and all night until we achieve and maintain the perfect health score?

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now they told him it’s all changed, and he has to do it online. When he asked them what happens if someone isn’t online, he was told to visit the library, where someone can do it online with you. He was rather angry at this change. I can now see his perspective, and why this has made him angry. I suspect he’s not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self service checkouts when paying for my goods, because it was faster. Ever since my loss, I’ve chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn’t filled it in; she personally doesn’t see the point. In Digital Health conversations, statements are sometimes made as if they were universal truths: that every patient wants access to their records, or that every patient wants to analyse their own health data. I believe it’s excellent that patients have the chance of access, but let’s not assume they all want it.

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.

Managing our health: One conversation at a time

If you've watched movies like Iron Man, featuring virtual assistants like JARVIS, which you can just have a conversation with to control your home, you probably think that such virtual assistants belong in the realm of science fiction. Earlier this year, Mark Zuckerberg, who runs Facebook, set a personal challenge to create a JARVIS style assistant for his own home: "My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man." He may be closer to his goal, as he may be giving a demo of it this month. For those that don't have an army of engineers to help them, what can be done today? Well, one interesting piece of technology is Amazon's Echo. So what is it? Amazon describes it as, "Amazon Echo is a hands-free speaker you control with your voice. Echo connects to the Alexa Voice Service (AVS) to play music, provide information, news, sports scores, weather, and more—instantly. All you have to do is ask." Designed for your home, it is plugged into the mains and connected to your wifi. It's been on sale to the general public in the USA since last summer, and was originally available in 2014 for select customers.

This week, it's just been launched in the UK and Germany as well. However, I bought one from America 6 months ago, and I've been using it here in the UK every day since then. My US spec Echo does work here in the UK, although some of the features don't work, since they were designed for the US market. I've also got the other devices that are powered by AVS, the Amazon Tap, Dot and also the Triby, which was the first 3rd party device to use AVS. To clarify, the Echo is the largest, has a full size speaker and is the most expensive from Amazon ($179.99 US/£149.99 UK/179.99 Euros). The Tap is cheaper ($129.99, only in USA) and is battery powered, so you can charge it and take it to the beach with you, but it requires that you push a button to speak to Alexa; it's not always listening like the other products. The Dot is even cheaper (now $49.99 US/£49.99 UK/59.99 Euros) and does everything the original Echo can do, except the built-in speaker is good enough only for hearing it respond to your voice commands. If you want to use it for playing music, Amazon expect you to connect the Dot to external speakers. A useful guide comparing the differences between the Echo, Dot and Tap is here. The Triby ($199.99) is designed to be stuck on your fridge door in the kitchen. It's sold in the UK too, but only the US version comes with AVS. Amazon expect you'd have the Echo in the living room, and you'd place extra Dots in other rooms. Using this range of products has not only given me an insight into what the future looks like, but has also shown me the potential for devices like the Echo (and the underlying service, AVS) to play a role in our health. In addition, I want to share my research on the experiences of other consumers who have tried this product. There are a couple of new developments announced this week which might improve the utility of the device, which I'll cover towards the end of the post.

Triby, Amazon's Tap, Dot & Echo


You can see in this 3 min video some of the things I use my Echo for in the morning, such as reading tweets, checking the news, the weather and my Google calendar, adding new events to my calendar or turning my lights on. For a list of Alexa commands, this is a really useful guide. If you're curious about how it works, you don't have to buy one; you can test it out in your web browser using Echosim (you will need an Amazon account though).

What's really fun is experimenting with the new skills [i.e. apps] that get added by 3rd parties; one example is how my Echo is able to control devices in my smart home, such as my LifX lights. I tend to browse the Alexa website for skills and add them to my Echo that way. You can also enable skills just by speaking to your device. At the moment, every skill is free of charge. I suspect that won't always be the case.

Some of the skills are now part of my daily routine, as they offer high quality content and have been well designed. Amazon boast that there are now over 3,000 skills. However, the quality of the skills varies tremendously, just like app stores for other devices we already use. For example, in the Health & Fitness section, sorted by relevance, the 3rd skill listed is one called Bowel Movement Facts. 

The Echo is always on; its 7 microphones can detect your voice even if the device itself is playing music and you're speaking from across the room. It's always listening for someone to say 'Alexa' as the wake word, but you have a button to mute the Echo so it won't listen. I use Siri, but I was really impressed when I started to use my Echo: it felt quicker than Siri in answering my questions. Anna Attkisson did a 300 question test, comparing her Echo vs Siri, and found that overall, Amazon's product was better. Not only does the Echo understand my London accent, but it also has no problem understanding me when I used some fake accents to ask it for my activity & sleep data from my Fitbit. I think it's really interesting that I can simply speak to a device in my home and obtain information that has been recorded by the Fitbit activity tracker that I've been wearing on my wrist. It makes me wonder about how we will access our health data in the future. Whilst at the moment the Echo doesn't speak unless you communicate with it, that may change in the future, if push notifications are enabled. I can see it now: having spent all day sitting in meetings, and sat on my smart sofa watching my smart TV, my inactivity as recorded by my Fitbit triggers my Echo to spontaneously switch off my smart TV, switch my living room lights to maximum brightness, announce at maximum volume that I should venture outside for a 5,000 step walk, and instruct my smart sofa to adjust the recline so I'm forced to stand up. That's an extreme example, but maybe a more realistic one is that you have walked much less today than you normally do, and you end up having a conversation with Echo because your Echo says, "I noticed you haven't walked much today. Is everything ok?"
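For the technically curious, here's roughly what sits behind an exchange like that. A custom Alexa skill is typically just a small web service or AWS Lambda function: Alexa converts your speech into a JSON "intent" request, and your code returns the text it should speak back. What follows is a minimal sketch, not Amazon's or Fitbit's production code; the GetStepsIntent name and the fetch_todays_steps() helper are hypothetical stand-ins for a real Fitbit API call made with an account the user has linked to the skill.

```python
# Minimal sketch of a custom Alexa skill handler (AWS Lambda style).
# Hypothetical: GetStepsIntent and fetch_todays_steps() are illustrative,
# not part of any real skill.

def fetch_todays_steps(user_id):
    """Placeholder: a real skill would call the Fitbit web API here,
    using an OAuth token obtained via Alexa account linking."""
    return 4823  # illustrative value


def lambda_handler(event, context):
    """Entry point invoked with the JSON request that Alexa sends."""
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "GetStepsIntent"):
        steps = fetch_todays_steps(event["session"]["user"]["userId"])
        speech = f"You have walked {steps} steps so far today."
    else:
        speech = "Try asking me how many steps you have walked today."
    # Standard response envelope for a custom skill
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

What strikes me is how little code is involved; the hard parts (wake word detection, speech recognition, text to speech) are all handled by AVS in the cloud.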

We still don't know about the impact on society as our homes become smarter and more connected. For example, in the USA, those with GE appliances will be able to control some of them with the Echo. You'll be able to preheat the oven, without even getting off your sofa. That could have immense benefits for those with limited mobility, but what about our children? If they grow up in a world where so much can be done without even having to lift a finger, let alone walk a few steps from the sofa to the kitchen, is this technology a welcome advance? If you have a Hyundai Genesis car, you can now use Alexa to control certain aspects of your car. When I read this part of the Hyundai Genesis article, "Being able to order basic functions by voice remotely will keep owners from having to run outside to do it themselves", it made me think about a future where we just live an even more sedentary lifestyle, with implications for an already overburdened healthcare system. Perhaps having a home that is connected makes more sense in countries like the USA and Australia, which on average have quite large houses. Given how small the rooms in my London home are, it's far quicker for me to reach for the light switch than to issue a verbal command to my Echo (and wait for it to process the command).

Naturally, some of us would be concerned about privacy. Right now, anyone could walk into the room and, assuming they knew the right commands, quiz my Echo about my activity and sleep data. One of the things you can do in the US (and now in Europe) is order items from Amazon by speaking to your Echo. Alex Cranz wrote a post saying, "And today it let my roommate order forty-eight Cadbury Creme Eggs on my account. Despite me not being home. Despite us having very different voices. Alexa is burrowing itself deeper and deeper into owners’ lives, giving them quick and easy access not just to Spotify and the Amazon store, but to bank accounts and to do lists. And that expanded usability also means expanded vulnerability." The post goes on to say, "In the pursuit of convenience we have to sacrifice privacy." Note that Amazon do offer the ability to modify your voice purchasing settings, so that the device will ask you for a 4 digit confirmation code before placing the order. The code would NOT be stored in your voice history. You can also turn off voice purchasing completely if you wish.

Matt Novak filed a FOI request to ask if the FBI had ever wiretapped an Amazon Echo. The response he got: "we can neither confirm nor deny."

If you don't have an Echo at home, how would you feel about having one? How would you feel about your children using it? One thing I've noticed is that the Echo seems to work better over time, in terms of responding to my voice commands. The way the Echo works is that it records your voice commands in the cloud, and by analysing the history of your voice commands, it refines its ability to serve your needs. You can delete your voice recordings, although that may make the Echo less accurate in future. Some Echo users whose children also use the device say their kids love it, and in fact got to grips with the device and its capabilities faster than the parents. However, according to this Guardian article, if a child under 13 uses an Echo, it is likely to contravene the US Children’s Online Privacy Protection Act (COPPA). This doesn't appear to have put off households installing an Echo in the USA, as research suggests Amazon have managed to sell 3 million devices. Another estimate puts the installed user base significantly lower, at 1.6 million. Either way, in the realm of home based virtual assistants, Amazon are ahead, and probably want to extend that lead, with reports that in 2017 they want to sell 10 million of these speakers.

Can the Echo help your child's health? Well, a skill called KidsMD was released in March that allows parents to seek advice provided by Boston Children's Hospital. After the launch, their Chief Innovation Officer, John Brownstein, said, "We’re trying to extend the know-how of the hospital beyond the walls of the hospital, through digital, and this is one of a few steps we’ve made in that space." I tested KidsMD back in April, and you can see in this 3 minute video what it's like to use. What I find fascinating is that I'm getting access to validated health information, tailored to my situation, simply by having a conversation with an internet connected speaker in my home. Of course, the conversation is fairly basic for now, but the pace of change means it won't be rudimentary forever.

I was thinking about the news last week here in the UK, where it was announced that the NHS will launch a new website for patients in 2017. My first thought was: what if you're a patient who doesn't want to use a website, or for whatever reason can't use one? If the Echo (and others like it) launch in the UK, why couldn't this device be one of the digital channels that you use to interface with the NHS? Some of us at a grassroots level are already thinking about what could be done, and I wonder if anyone in the NHS has been formally testing an Echo to see how it might be of use in the future?

The average consumer is already innovating with the Echo themselves; they aren't waiting years for the 'system' to innovate. They are conducting their own experiments, buying these new products with their own money. One man in the USA has used the Echo to help him care for his aging mother, who lives in a different location from him.

In this post, a volunteer at a hospice asks the Reddit community for input on what the Echo could be useful for with patients. 

How about Rick Phelps, diagnosed back in 2010 at the age of 57 with Early Onset Alzheimer's Disease, and now an advocate for Dementia awareness? Back in February, he wrote about his experience of using the Echo for a week. What does he use it for? To find out what day it is, because Dementia means he often doesn't know.

For many of us, consumer grade technology such as the Echo will be perceived as a gimmick, a toy, of limited or no value with respect to our health. I was struck by what Rick wrote in his post, "To many, the Amazon Echo is a cool thing to have. Some what of a just another electronic gadget. But to a dementia patient it is much, much more than that. It has afforded me something that I have lost. Memory. I can ask Alexia anything and I get the answer instantly. And I can ask it what day it is twenty times a day and I will still get the same correct answer." Rick also highlights how he used the Echo to set medication reminders.

I have to admit, the Echo is still quite clunky, but the original iPhone was clunky too, and the 1st generation of every new type of technology is usually clunky. For people like Rick, it's good enough to make a difference to the outcomes that matter to him in his daily life, even if others are more skeptical. 

Speaking of medication reminders, there was a 10 day Pymts/Alexa challenge this year, using Alexa "to reimagine how consumers interact with their payments and financial services solutions providers." What I find fascinating is that the winner was DaVincian Healthcare, who created something called DaVincianRX, an “interactive prescription, communication, and coordination companion designed to improve medication adherence while keeping family caregivers in the loop." You can read more and watch their video of it in action here. People and organisations constantly ask me, where do we look for innovation and new ideas? I always remind them to look outside of healthcare. From a health perspective, most of the use cases I've seen so far involving the Echo are for older members of society or those that care for them.

I came across a skill called Marvee, which is described as "a voice-initiated concierge application integrated with the Alexa Voice Service and any Alexa-enabled device, like the Amazon Echo, Dot or Tap." Most of the reviews seem to be positive. It's actually refreshing to see a skill that is purpose built to help those with challenges that are often ignored by the technology sector.

In the shift towards self-care, when you retire or get diagnosed with a long term condition for the first time, will you be getting a prescription for an Amazon Echo (or equivalent)? Who is going to pay for the Echo and related services? Whilst we have real world evidence that shows the Echo is making a positive impact on people's lives, I haven't been able to find any published studies testing the Echo within the context of health. That's a gap in knowledge, and I hope there are researchers out there who are conducting that research. Like any product, there will be risks as well as benefits, and we need to be able to quantify those risks and benefits now, not in 5 years' time. Earlier I cited how Rick, who lives with Alzheimer's Disease, finds the Echo to be of benefit, but for other people like Rick, using the Echo might lead to harm rather than benefit. We don't know yet. However, not every application of the Echo will require a double blinded randomised clinical trial to be undertaken. If I can already use my Echo to order an Uber, or check my bank balance, why can't I use it to book an appointment with my doctor?

In the earlier use case, a son looked through the data from his mother's usage of her Echo to spot the signs that something might be wrong. Surely Amazon could parse that data for you and automatically alert you (or any interested person) that there could be an issue? Allegedly, Amazon is working on improvements to the service whereby Alexa could one day recognise our emotions and respond accordingly. I believe our voice data is going to play an increasing role in improving our health. It's going to be a new source of value. At an event in San Francisco recently, I met Beyond Verbal, an emotions analytics company. They are doing some really pioneering work. We have already seen the emergence of the Parkinson's Voice Initiative, looking to test for symptoms using voice recordings.

How might a device like the Echo contribute to drug safety? Imagine it reminds you to take your medication, and in the conversation with your Echo, you reply that you're skipping this dose, and it asks you why. In that conversation, you have the opportunity to say, in your own words, why you have skipped that dose. Throw in the ability to analyse your emotions during that conversation, and you have a whole new world of insights on the horizon. Some of us might be under the impression that real world data is limited to data posted on social media or online forums, but our voice recordings are also real world data. When we reach a point where we can weave all this real-world data together to get a deeper understanding of our health, we will be able to do things we never thought possible. Naturally, there are immense practical challenges on that pathway, but progress is being made every day. Having all of this data from all of these sources is great, but even if it's freely available, it needs to be linked together to truly make a difference. Researchers in the UK have demonstrated that it's feasible to use consumer grade technology such as the Apple watch to accurately monitor brain health. How about linking the data from my Apple watch with the voice data from my Amazon Echo to my electronic health record?

An Israeli startup, Cordio Medical, has come up with a smartphone app for patients with Congestive Heart Failure (CHF) that captures voice data, analyses it in real-time, and "detects early build-up of fluids in the patient’s lung before the appearance of physical symptoms"; deviations found in the voice data would trigger an alert, where "These alerts permit home- or clinic-based medical intervention that could prevent hospitalisation." For those CHF patients without smartphones, could they simply use an Echo at home with a Cordio skill? Or does Amazon offer the voice data directly to organisations like Cordio for remote monitoring (with the patient's consent)? With devices like the Echo, if Amazon (or their rivals) continue to grow their user base over the next 10 years, they could have an extremely valuable source of unique voice based health data that covers the entire population.
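To make that idea concrete, here is a deliberately naive sketch of the general shape of such a remote-monitoring rule. Cordio's actual signal processing is proprietary and certainly far more sophisticated; the daily "voice score", the z-score threshold and the two-day persistence rule below are all my assumptions, purely for illustration.

```python
import statistics

# Illustrative only: not Cordio's method. The idea is to track a daily
# voice-derived score against a patient's own baseline and raise an
# alert on a sustained deviation.

def should_alert(baseline, recent, z_threshold=2.0, sustained_days=2):
    """Alert if the last `sustained_days` scores all deviate strongly
    from the patient's baseline (a simple z-score rule)."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return all(abs(x - mean) / sd > z_threshold
               for x in recent[-sustained_days:])

# Hypothetical daily voice-derived scores for one patient
baseline_scores = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51]  # a stable week
latest_scores = [0.50, 0.62, 0.64]  # two consecutive days of drift

print(should_alert(baseline_scores, latest_scores))  # True: notify the clinic
```

The point is that the clinical value lies less in the alert logic than in the data source: a device that hears you speak every day could build that personal baseline without the patient having to do anything at all.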

At present, Amazon has made surprisingly good progress in terms of the Echo as a virtual assistant. However, other tech giants are looking to launch their own products and services; for example, Google Home, which is due to arrive later this year. This short video shows what it will be able to do. Now for me, Google plays a much larger role in my daily life than Amazon, in terms of core services. I use Google for email, for search, for my calendar, and maps for navigation. So, Google's Home might be vastly superior to the Echo, simply because of that integration with those core services that I already use. We'll have to wait and see. The battle to be a fundamental part of your home is just beginning, it seems.

The battle to be embedded in every aspect of our lives will extend beyond the home, perhaps into our cars. I tested the Amazon Dot in my car, and I reckon it's only a matter of time before we see new cars on sale with these virtual assistants built into the car's systems, instead of being an add-on. We already have new cars coming with 4G internet connectivity, offering wifi for your devices, from brands like Chevrolet in the USA.

For when we are on the move, and not in our car or home, maybe we'll all have earphones like the new Apple Airpods, where we can discreetly ask our virtual assistants to control the objects and devices around us. Perhaps Sony's product, the Xperia Ear, which launches in November and is powered by something called Sony's Agent (which could be similar to Amazon's AVS), is what we will be wearing in our ears? Or maybe none of these big tech firms will win the battle? Maybe it will be one of us, or one of our kids, who comes up with the virtual assistant that will rule the roost? I'm incredibly inspired after watching this video where a 7 year old girl and her father built their own Amazon Echo using a Raspberry Pi. This line in the video's description stood out to me: "She did all the programming following the instructions on the Amazon Github repository." Next time there is a health hackathon, do we simply invite a bunch of 7 year old kids and give them the space to dream up new solutions to problems that we as adults have created? Or maybe it should be a hackathon that invites 7 year olds with their grandparents? Or maybe we have a hackathon where older adults are invited to co-design Alexa skills with younger people for the Echo? We don't just have financial deficits in health & social care, we have a deficit of imagination. Amazon have a programming tutorial where you can build a trivia skill for Alexa in under an hour. When it comes to our health, do we wait for providers to develop new Alexa skills, or will consumers start to come together and build Alexa skills that their community would benefit from, even if that community happens to be a community of people scattered around the world, who are all living with the same rare disease?

You'll have noticed that in this post, I haven't delved into the convergence of technologies that have enabled something like the Echo to work so well. This was deliberate on this occasion. At present, I'm really interested in how virtual assistants like the Echo make you feel, rather than the technical details of the algorithm being used to recognise my voice. For someone living far away from their aging parents/grandparents, does the Echo make you feel reassured? For someone living alone and feeling socially isolated, does the Echo make you feel less alone? For a young child, does it make you feel like you can do magic, controlling other devices just with your voice? For someone considering moving out of their own home into an institution, does the Echo make you feel independent again? If more and more services are becoming digital by default, how many of these services will be available just by having a conversation? I am using my phone & laptop less since I've had my Echo, but I'm not yet convinced that virtual assistants will one day eliminate the need for a smartphone. Some of us, however, are convinced: 50% of urban smartphone owners around the world believe that smartphones will no longer be needed in 5 years' time. That's one of the findings from when Ericsson Consumer Lab quizzed smartphone users in 13 cities around the globe last year. The survey is supposed to represent the views of 68 million urban citizens. In addition, they also found, "Furthermore, a third would even rather trust the fidelity of an AI interface than a human for sensitive matters. 29 percent agree they would feel more comfortable discussing their medical condition with an AI system." I personally think the consumer trends identified have deep implications for the nature of our interactions with respect to our health. Far too many organisations are clinging on to the view that the only (and best) way that we interact with health services is face to face, in a healthcare facility, with a human being. Although these virtual assistants at home don't need a smartphone with a data plan to work, they do need fixed broadband. Looking at OECD data from December 2015, fixed broadband penetration is rather low; the UK is not even at 40%, so products such as the Echo may not be accessible for many across the nation who might find them beneficial with regard to their health. This is an immense challenge, and one that will need joined up thinking, as we need everyone included in this digital revolution.

You might be thinking right now that building a virtual assistant is your next startup idea; it's going to be how you make an impact on society, it's how you can change healthcare. Alas, it's not as easy as we might first think. Cast your mind back to 2014, the same year that the Echo first became available. I was one of the early adopters who pledged $499 for the world's first social robot, Jibo [consider it a cuter or creepier version of the Echo with a few extra features]. They raised almost $4 million from people like me, curious to explore this new era. Like the Echo, you are meant to be able to talk to Jibo from anywhere in the room, and it will act upon your command. The release got delayed and delayed, and then recently I got an email informing me that the folks behind Jibo have decided that they won't be shipping Jibo to backers outside of the USA, and I was offered a full refund.

One of the reasons they cited was, "we learned operating servers from the US creates performance latency issues; from a voice-recognition perspective, those servers in the US will create more issues with Jibo’s ability to understand accented English than we view as acceptable." How bizarre: my US spec Echo understands my London accent, and even my fake ones! It took the makers of Jibo 2 years to figure this out, and this from people at the prestigious MIT Media Lab. So just how much effort does it take to make something like the Echo? A rather large amount, it seems. According to Jeff Bezos, the CEO of Amazon, they have over 1,000 people working on this new ecosystem. A very useful read is the real story behind the Echo, explaining in detail how it was invented. Apparently, the reason the Echo was not launched outside of America until now was so it could handle all the different accents. So, if you really want to do a hardware startup, then one opportunity is to work on improving the digital microphones found not just in the Echo, but in our smartphones too. Alternatively, Amazon even have an Alexa Fund, with $100m in funding for companies looking to "fuel voice technology innovation." Amazon must really believe that this is the computing platform of the future.

Moving on to this week's news, the UK Echo will have UK partners such as the Guardian, Telegraph & National Rail. I use the train frequently from my home into central London; the station is a 15 minute walk from my house, so that's one of the UK specific skills I'm most likely to use, to check if the train is delayed or cancelled before I head out of the front door. Far easier and quicker than pulling out my phone and opening an app. The UK version will also have a British accent. If you have more than one Echo device at home and speak a command, chances are that two or more of your devices will hear you and respond, which is not good, especially if you're placing an order with Amazon. So now they have updated the software with ESP (Echo Spatial Perception), and when you talk to your Echo device, only the closest one to you will respond. It's being rolled out to those who have existing Echo devices, so there's no need to upgrade. You might want to though, as there is a new version of the Echo Dot (US, UK & Germany), which is cheaper, thinner, lighter and promises better voice recognition than the original model. For those who want an Echo in every room, you can now buy Dots in 6 or 12 packs! In the UK, given that the Echo Dot is just £49.99, I expect this Christmas, many people will be receiving them as presents.

Amazon's Alexa Voice Service is one example of a conversational user interface; at times it's like magic, and at other times it's infuriatingly clumsy. I'm mindful that my conversations with my Echo are nowhere near as sophisticated as conversations I have with humans. For example, if I say "Alexa, set a reminder to take my medication at 6pm" and it does that, and then I immediately say "Alexa, set a reminder to take my medication at 6.05pm", and so forth, it currently won't say, "Are you sure? You just set a medication reminder close to that time already." Some parents are concerned that the use of an Echo by their kids is training them to be rude, because they can throw requests at Alexa, even in an aggressive tone of voice, with no please and no thank you, and Alexa will always comply. Are these virtual assistants going to become our companions? Busy parents telling their kids to do their homework with Alexa, or lonely elders who find that Alexa becomes their new friend in helping them cope with social isolation? Will we end up with bathroom mirrors we can have conversations with about the state of our skin? Are we ever going to feel comfortable discussing the colour of our urine with the toilet in our bathroom? When you grab your medication box out of the cupboard, do you want to discuss the impact on your mood after a week of taking a new anti-depressant?
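The missing check is trivial to express in code, which makes its absence all the more noticeable. Here's a sketch of the kind of sanity check a smarter assistant could run before storing a new reminder; the (time, label) storage format and the 15 minute window are my assumptions, for illustration only.

```python
from datetime import datetime, timedelta

def near_duplicate(existing, new_time, new_label, window_minutes=15):
    """Return a clashing reminder if one with the same label already
    sits within the given time window, else None."""
    for time, label in existing:
        if (label == new_label
                and abs(time - new_time) <= timedelta(minutes=window_minutes)):
            return (time, label)
    return None

# The scenario from the paragraph above: a 6pm reminder already exists,
# then a near-identical one is requested for 6.05pm
reminders = [(datetime(2016, 9, 20, 18, 0), "take my medication")]
clash = near_duplicate(reminders, datetime(2016, 9, 20, 18, 5),
                       "take my medication")
if clash:
    print(f"Are you sure? You already set '{clash[1]}' for {clash[0]:%H:%M}.")
```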

Could having conversations with our homes help us to manage our health? It seems like a concept from a science fiction movie, but to me, the potential is definitely there. The average consumer will have greater opportunities to connect their home to the internet in years to come. Brian Cooley asks in this post whether our home will become the biggest health device of all.

A thought provoking read is a new report by Plextek examining how connected homes could change the medical industry by 2020. I want you to pause for a moment when reading their vision: "The connected home will be a major enabler in helping the NHS to replace certain healthcare services, freeing up beds for just the most serious cases and easing the pressure on GP surgeries and A&E departments. It will empower patients with long-standing health conditions who spend their life in and out of hospitals undertaking tests, monitoring, rehabilitation or therapy, and give them freedom to care for themselves in a safe way."

Personally, I believe the biggest barrier to making this vision a reality is us, i.e. people and organisations that don't normally work together will have to collaborate in order to make connected homes seamless, reliable and cost effective. Think of all the people, policies & processes involved in designing, installing, regulating, and maintaining a connected home that will attempt to replace some healthcare services. That's before we even think about who will be picking up the tab for these connected homes.

Do you believe the Echo is a very small step on the path towards replacing healthcare services, one conversation at a time?

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


The Apple watch is dead. Long live the Apple watch.

I've had the Apple watch for just over a week now, and in this post, I'd like to share my experience and my thoughts about the future. I've examined many aspects of the functionality of the device, but also its potential for playing a role in health. It appears to be a device that polarises opinions, before it has even hit the market. I've met people who ordered one, not because they like it, or because they want some kind of 'smartwatch', but simply because it is a new product from Apple. Others have told me they would never purchase such a watch, because of the cost, and also they don't see a use for it given they already have an iPhone. 

There have been multiple attempts at 'smartwatches' to win over consumers. I use the term 'smartwatch' very loosely, simply to group these wearables together; I'll be sharing more in this post about why these watches are still not particularly smart. Last summer, Android Wear launched, and I wrote about my initial experience and thoughts on health & social care. Android Wear hasn't been as successful as Google had hoped. I've actually been using a number of 'smartwatches', and for me, the closest existing rival to the Apple watch is the Samsung Gear S, which was released in late 2014 and didn't sell very well (I only know one person on Earth who has also purchased one). It overlaps in functionality with the Apple watch, with two big differences: it has its own SIM card inside the watch, with its own phone number, and it only works with an Android phone. I've been using the Gear S since November 2014, and the user experience is very different. Whilst Samsung seem to have just tried to miniaturise a computer/phone into a watch, it is clear to me that Apple have put considerably more thought into the design of their watch. A clear example of this difference in design thinking is the fact that the Gear S offers both a QWERTY keyboard on the watch, such as when you write a text message, and also a web browser. Just because it is technically possible to do something on a device as small as a watch, doesn't mean it should be included as a feature. Thankfully, Apple have not added those two features.

Some people say to me, if the Apple watch is not replacing the iPhone, then what's the point? Why use an app on a tiny screen on your wrist when you could just use the same app on your iPhone? A perfectly sensible question to ask. To answer, I'll give you a real life example of why having the Apple watch made me feel safer as I navigated the streets of a foreign city at night. I flew from London to Milan on Tuesday evening, and after dinner in the city, I wanted to walk back to my hotel. I didn't know the route, so I used my Apple watch. I opened the Maps app, dictated the name of the hotel (which the watch recognised despite me being on a busy street), and chose the walking (vs driving) option for navigation. Why did the Apple watch make me feel safer walking back to my hotel at night? Well, you can keep your iPhone in your pocket, and you don't even need to glance at your Apple watch for instructions on when to turn left or right. You just walk normally, except that when you do have to turn left or right, the watch 'taps' your wrist in different ways. To anyone observing you, they wouldn't know you had an expensive phone and watch. Bear in mind that GPS is not always accurate, especially in cities with tall buildings. On one walk in London, the watch tapped to indicate that I should turn right, into a clothing store; the street I actually had to turn right into was 50 yards up ahead. However, I'm not sure the driving mode on the watch would be safe. Would you really want to navigate using your watch whilst you drive?

One of the standard notifications is to alert you once an hour to stand up and move. Sounds like a useful concept given how many of us work in jobs that keep us sitting in a chair all day long. These notifications are simple, not smart, as they appear at the strangest of times. You'd think the notifications could have made use of data from sensors in your watch to be more relevant and timely.

Using the Gear S has changed how I use my Android phone. I typically keep my phone on silent, and use the Gear S to notify me of emails/calls etc. I find it particularly useful if I'm charging my phone at home or in the office, and I want to wander away from the phone without missing any notifications. All these devices need to be paired with your phone via Bluetooth in order to work. Since the Gear S has its own SIM card, as soon as it loses the Bluetooth connection with my phone, it forwards calls from the phone to the Gear S. So, if I left the phone at home to visit the gym, and someone rang my phone's number, the call would be forwarded to the Gear S. Since the Gear S has a speaker, you can answer the call (or alternatively, you can connect the Gear S with bluetooth earphones, which is a lot better). Incidentally, I found the speaker on the Apple watch competent, but not as loud as the speaker on the Gear S.

When the Apple watch loses the Bluetooth connection with the iPhone because you've walked out of the house, the watch isn't completely useless. You can use it to track your workouts and it will continue to monitor your activity (move, stand & exercise). You can listen to music that's stored on the watch, and use Apple Pay to buy stuff (Apple Pay only available in the USA right now). Oh, and you can still use it as a watch to tell the time, set alarms and use the stopwatch feature! If you're at home or in the office and you wander around so that the Bluetooth connection is lost between your iPhone and the Apple watch, if your iPhone is also connected to a wifi network, you can also use Siri on the watch & send and receive iMessages. 

Which menu of apps do you prefer? Gear S (left) or Apple Watch (right)?


When it comes to learning to use the Apple watch, it should be intuitive, given Apple's previous products. Tell me something, if the Apple watch was intuitive, why would the user guide be nearly 100 pages long? (For comparison, the manual for the Gear S is also of a similar length!)

For example, the Apple watch features something called 'Force Touch', which can distinguish between you tapping the screen and pressing the screen. Pressing the screen brings up new menus or options within apps. For example, when you open up the Maps app, in order to search for a destination, you have to press firmly on the screen, and two options then appear, "Search" & "Contacts." If you were unaware of 'Force Touch' or had not read the User Guide, you might be bamboozled. When the Apple watch has a bunch of notifications you wish to clear, you have to press firmly on the screen for a "Clear All" option to appear. On the Gear S, when browsing the notifications, you simply swipe up to see the "Clear All" option. It seems the user interface on the Apple watch leaves many users confused, leading to 9to5mac creating a quick start user guide. Whilst browsing and choosing apps on the Apple watch, I sometimes find myself starting the wrong app, because the screen and the icons are so small. In that respect, I do prefer the larger screen and traditional menu of the Gear S. For clarity, I purchased the larger of the two Apple watches, 42mm rather than 38mm. I do wonder how difficult or easy it would be for someone with arthritis to use the Apple watch (or any wearable device with a touch screen).

When it comes to health, one of the first apps I tried was one called Sickweather, which uses crowdsourced data for forecasting and mapping sickness. The watch shows the same notification that would appear on the iPhone; if you have the watch, it appears there instead. Now, it might seem of limited or no value to many, but for some people it is useful. After I put out the tweet showing how the cough alert looked, it led to an interaction on Twitter with a guy called Jarrod, who has Cystic Fibrosis and said the app would be useful for him. Sickweather has a Sickweather score that is only available on the Apple watch.

I also tried an app called DocNow that provides instant access to doctors 24/7 from the Apple watch. A tap on the watch will initiate a HD video call with a doctor via the iPhone. Unfortunately, being in England, it didn't work for me when I tried it. That's being resolved, I believe.

There are also a number of apps on the watch for medication reminders. Medication reminders on a watch are not new; I tested the MediSafe version for Android Wear last year. For the Apple watch, I tested an app from WebMD, and one good thing I noticed was that it even includes a picture of the medication you are supposed to take. In the WebMD app on the iPhone, you can even use your own picture, if your pills look different from the stock image. It all sounds great, doesn't it? However, once I shared this via Twitter, I got valuable feedback: is the screen size too small for older people and/or people with poor eyesight? So, rather than on a watch, perhaps medication reminders for older people taking multiple medications are better delivered via a personal companion robot? (More on that in a future post, as I have some updates in that arena.)

The Deadline app that shows my predicted life expectancy


Another interesting app I tested was Deadline. This is an app that asks you questions about your lifestyle and family history, as well as reading some of your health data from the iPhone, to then determine your life expectancy. It displays this on the watch, along with a tip on how to improve your life expectancy. The science behind this app is probably unvalidated, but as a concept it does make me wonder. In the future, if the science was accurate and the app was validated, how comfortable would you feel with tailored health advice via your watch that was based upon the state of your health there and then? Would it be too intrusive if your watch nudged you to eat a salad instead of a burger?

The Apple watch searches for Bluetooth devices


Within the Bluetooth menu on the watch, I found that it shows two types of devices it can connect to: devices & health devices. I understand that it is possible to pair the watch to an external heart rate monitor, if you wanted to use that to monitor your heart rate rather than the sensor within the watch itself (I plan to test this connectivity). It is not clear what other health devices you could connect to the watch, but it's a feature worth keeping track of.

The watch comes with a sensor that will normally record your heart rate every 10 minutes, and store that data in the health app on the iPhone. That sensor could also act as a pulse oximeter, allowing measurement of oxygen content of your blood. However, this feature has not been activated yet. 

Now, if you choose the Workout app and select one of the workouts (such as Outdoor Walk or Indoor Cycle), it will track your heart rate continuously. I did try that out with an Outdoor Walk, and I also compared how the Gear S measured my heart rate against the Apple watch. Bear in mind that the positioning of both devices may have affected the results, and I'll have to repeat the test with the devices in different positions, on different arms.

HR on the Gear S almost double that of the Apple watch (I was sitting on a bench as I rested during my walk)


How do steps/distance walked compare against other devices? Well, this picture illustrates the challenge with these consumer devices. For the picture, the Apple watch & Gear S were worn on my left hand, and the Microsoft Band was worn on my right hand. Same walk, different devices, different results. Note: my age, gender, height and weight were entered exactly the same in the app for each device. Why does the Apple watch show more steps walked than the Microsoft Band, but a longer distance? Why does the Gear S show more steps & more distance but fewer calories than the Apple watch? BTW, since the Apple watch doesn't track sleep, I'm using the Microsoft Band to track my sleep. Will we ever have one device that can serve every purpose, or will we continue to need multiple wearables?

Apple Watch (left), Microsoft Band (top right), Gear S (bottom right) 


Health app on my iPhone


I was curious about the data from my watch being recorded in the Health app on my iPhone, and I found something quite puzzling. The Outdoor Walk I had selected on the watch had captured my heart rate continuously, but something didn't make sense.

The app shows 6 entries for 8.21am: two of them for 128bpm, two more for 127bpm, one at 78bpm, and one at 69bpm. The date stamp only shows the hour and minute, not the second. How will it be possible to make sense of this data in any analysis if I have 6 different heart rate readings at 8.21am? (Update, 18th May: I got a response from Apple about this issue. They told me the watch will measure HR multiple times in a minute, but that the data in the health app is only shown in hours and minutes.)
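To see why this matters for analysis, consider what those six readings look like to a program. With only minute-level timestamps, you cannot tell which reading came first, so about the best you can do is summarise per minute, losing the very dynamics a continuous monitor is supposed to capture. Here's a small sketch using the values above (the list format is my assumption; a real export from the Health app would need parsing first).

```python
from collections import defaultdict

# The six readings shown for 8.21am, as (timestamp, bpm) pairs
readings = [
    ("08:21", 128), ("08:21", 128), ("08:21", 127),
    ("08:21", 127), ("08:21", 78), ("08:21", 69),
]

# Group by the only timestamp resolution available: the minute
by_minute = defaultdict(list)
for minute, bpm in readings:
    by_minute[minute].append(bpm)

for minute, values in sorted(by_minute.items()):
    print(f"{minute}: n={len(values)}, min={min(values)}, "
          f"max={max(values)}, mean={sum(values) / len(values):.1f}")
# Output: 08:21: n=6, min=69, max=128, mean=109.5
```

A 59 bpm spread hidden inside a single minute is exactly the kind of detail that second-level timestamps would preserve.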

Now that my heart rate is being captured with the watch, could that data ever be used with other personal data to tailor advertising messages to me? I'm outside Starbucks, having not slept well, woken up late, missed my usual bus to work, and voila, my watch gets a coupon offering me a discount off coffee within the next 10 minutes at THAT particular Starbucks. Would that be creepy or cool? I envisioned this scenario after reading a brilliant post by Aaron Friedman on the future of search engines, which he says is all about context. Delivering information to your watch at the right place and the right time was the plan behind Google's Android Wear. A great idea, but their implementation last year was not optimal. Additionally, many of the first Android Wear watches didn't look very fashionable either. Their new strategy for Android Wear in response to the Apple watch may win them more consumers, but I'm not convinced.

I have been examining the role of the Apple watch in health primarily from a consumer perspective. What about people working in healthcare? Is the watch helpful for them? Well, Doximity, a professional network for physicians in the USA thinks so. An article about their app for the watch highlights, "They think the Apple Watch can enable medical professionals to share information easily, securely, and quickly — and perhaps most importantly, hands-free."

There is a hospital in the USA, Ochsner Health System, that is trialling the use of the Apple watch with patients with high blood pressure. Then you've got one of the biggest hospitals in Los Angeles, Cedars-Sinai, which has now added support for Apple's HealthKit, allowing data from a patient's phone to be added to their medical record. That's where I see the biggest advantage of the Apple watch over other makers of smartwatches.

  • Interface - Whilst not perfect, and probably too complex at first, the Apple watch offers a more polished user experience than its current rivals once you get the hang of it

  • Integration - Whilst I capture health information with the Gear S, it doesn't really go anywhere beyond the Samsung S Health app. This is where Apple really shines.

  • Ecosystem - With around 3,500 apps already available for the Apple watch (including many popular iPhone apps), and around 1,000 apps for the Samsung Gear watches, once again, Apple are ahead. I downloaded very few apps for the Gear S, as I didn't find many good ones.

The Bump is an app for pregnant women - this is the screen you see for several seconds as the app loads

Since I've mentioned apps, thanks to Tyler Martin for reminding me to mention some of the issues I faced with installing & using apps on the Apple watch. Maybe it is because the ecosystem is so new, but the apps can be buggy. You expect a tiny device like a watch to respond swiftly; it is not like a computer with a hard drive. Yet there are times when the watch takes a relatively long time to install or open apps, or the app crashes whilst you're using it. Those are the moments when you feel like you've purchased a product that is still a work in progress. I would hope these bugs get ironed out as more people start using these apps and report issues to the developers. The source of these problems may be that developers had to create apps for the watch without actually having access to the watch prior to launch. Maybe those consumers waiting for Apple Watch 2.0 or 3.0 are the sensible ones?

Dr Eric Topol highlights in a tweet how the Apple watch may be of benefit to diabetics wishing to monitor their blood glucose levels when using a Dexcom CGM.

One thing I was reminded of this week was that we might have the latest technology such as an Apple watch, but the infrastructure around us was designed for a different era. For example, I travelled from London to Milan and Paris this week with British Airways, and was able to use their mobile boarding pass on my Apple watch. I checked in online using the BA app on my iPhone, and then retrieved my boarding pass, which I added to Passbook. The passes in Passbook on your iPhone get transferred to Passbook on your Apple watch, so you could even board a plane using your Apple watch alone, if your phone was off or left at home. There are two parts to the boarding pass on the watch: one is the text information about your flight, and the other is the QR code which airport machines scan. On the iPhone you see both parts at once; on the watch, due to the small screen, you have to swipe up to see the QR code.

Instead of waiting by the screens in departures at Heathrow airport, I wandered around the airport at my leisure, and got a notification on my watch when the gate for my flight was announced. However, when I was at the gate, and was asked for my boarding pass, I had to take the watch off my wrist so the boarding pass could be scanned. The machine which scans boarding passes had been designed to scan paper boarding passes, and so didn't have a gap large enough to accommodate someone's arm wearing a watch. Where I wished I had a paper boarding pass was at Milan airport, where on departure, passport control wanted to see my boarding pass. The officer was in a kiosk fronted by a glass screen, and I had to take off my watch and slide it across the counter. However, when I did that, the screen of the watch went off, and as I leaned over the counter to tap the screen for the boarding pass to reappear, a bunch of notifications pinged to the watch, which then confused the officer in the kiosk. I had to then clear all the notifications from the watch, open Passbook on the watch, and bring up the boarding pass again.

When your Apple watch notifies you, it uses the new 'Taptic Engine' to tap your wrist rather than the traditional vibration I get on devices such as the Gear S and Microsoft Band. I found these taps to be too weak. After reading the user guide, I found within the watch's settings an option called 'Prominent Haptic', which I switched on. It is better than before, but I still prefer the more noticeable vibration from the Gear S and Microsoft Band.

There are some features of the Apple watch which seem rather frivolous. One of them is that you can press both buttons on the side of the watch, and a screenshot of the watch's display is then added to your iPhone's photo library.

You're probably wondering about battery life. Well, Apple claim 18 hours, and I did get close to that on the second day. After 16 hours, the battery was down to 14%; draining 86% in 16 hours works out at roughly 5.4% an hour, which extrapolates to about 18.5 hours on a full charge, so Apple's claim is plausible. Another day, after 12 hours it was down to 12%. When it comes to charging the Apple watch, it's a magnetic dock on a 2 metre long USB cable. You can't use your iPhone charging cable to charge the watch. I understand the engineering constraints mean that currently every wearable has its own charging connector or charging dock, but it's annoying: another cable to carry around. Don't lose it, as a replacement isn't cheap at £29.

There are countless other reviews based upon a week's usage of the Apple watch. One week's usage won't always reveal the flaws, especially design defects. I'll give you a very real example. My Gear S has a charging dock that clips onto the watch, and you plug the micro USB charging cable into the charging dock. After 4 months of usage, the charging dock no longer clips onto the watch, meaning I can't charge it (unless I keep the dock in place with an elastic band). The exact same thing happened after a few months with my Samsung Gear Fit. I went to the Samsung store in London yesterday, and was told that my warranty wouldn't cover this problem, as it was a cosmetic fault. I would have to purchase a new charging dock, which they didn't have in stock, as they don't sell many Gear S devices. I'm not the only one: Gear S owners in the USA have the same problem, and received a similar response from Samsung USA. Knowing how the Apple watch performs over a longer period of time is critical, as is observing how Apple responds to problems as they occur.

Charging docks, special adaptors, and unique cables all make living with wearable technology more challenging than it needs to be. Just yesterday I came across a company called Humavox in Israel working on wireless charging that would include wearables. I really hope they succeed in making wearables easier to live with. In the meantime, one advantage of the Gear S is that the charging dock also doubles as a supplementary battery, so if you are away from home and low on battery, you can just clip on the charging dock. There is nothing like that for the Apple watch, apart from an aftermarket 'Reserve Strap' coming out later this year. It promises to charge your watch as you wear it, but costs $249. An expensive fix for an already expensive watch.

The future

The Apple watch is a good first attempt, and if Apple invest in refining the product, it may become a successful product line for them in the long term. Like many of its rivals, it is primarily an extension of the smartphone, another screen, on our wrist. Look around you, and most people aren't wearing a 'smartwatch.' Samsung has launched so many models, yet none of them have really gone mainstream. Apple may not succeed immediately with this first version of their watch, but simply because they are Apple, they may shift the culture and make consumers more interested in purchasing (and regularly using) a 'smartwatch' of some kind, even if it's not made by Apple. 

How much value will the Apple watch add to our daily lives? Will it make a difference only to the young, or will it benefit the old too? Apart from making life more convenient, will it actually play a role in improving our health, or saving us money? It is too early to answer those questions as the watch has only just hit the market, but those are the key questions to answer.

I'd personally want to get my questions about the accuracy of my heart rate data answered, especially if data from my watch could one day be added to my medical records. Even the differences in steps/distance walked/calories burned between the Apple, Samsung & Microsoft devices make me think twice about unvalidated data ending up in the system of my doctor or insurer. 

Genuine advances are needed in battery life. How much information about my health is the Apple watch unable to capture because it has to be charged whilst I sleep? If I'm travelling, I don't want to interrupt my routine to find somewhere to charge my watch.

Today, based upon my experience so far, I believe the Apple watch is the best 'smartwatch' available. It has fewer flaws than other devices, such as the Gear S, but it still has flaws that Apple needs to deal with. I have to admit I didn't really like it at first, but as I learnt how to use the features, it grew on me. Tomorrow, the best device could come from someone else, or be a new form of technology, not necessarily in the form of a watch. Some people tell me they view the Apple watch as technology that is already redundant: good for loyal Apple customers, but not a genuine innovation in this arena.

You've got the Pebble Time watch with the concept of 'smart straps', which could allow new possibilities. Then there is the Bluetooth 4.2 specification, finalised in December 2014, featuring low-power IP connectivity. What would that mean for future devices? Bluetooth Smart sensors that could connect directly to the internet, without having to be paired to a smartphone or tablet.
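To see what would change, consider the current model, where a phone app has to sit between a Bluetooth sensor and the internet. A minimal Swift sketch of that middleman role, scanning for the standard Heart Rate service (UUID 0x180D), might look like the following; with Bluetooth 4.2's IP support, the sensor could skip the phone entirely:

```swift
import CoreBluetooth

// Today's model: a phone app must relay a BLE sensor's data onward.
// This sketch scans for the standard Heart Rate service (0x180D).
class HeartRateScanner: NSObject, CBCentralManagerDelegate {
    let heartRateService = CBUUID(string: "180D")
    var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: [heartRateService])
        }
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("Found heart rate sensor: \(peripheral.name ?? "unknown")")
        // From here the app would connect, subscribe to the Heart Rate
        // Measurement characteristic (0x2A37), and relay readings to
        // the internet; a 4.2 sensor could do that last hop itself.
    }
}
```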

Yi Tuntian, a former Microsoft official in China, claims that wearables will replace mobile phones soon. I find that claim hard to believe.

How about the new chip developed in Taiwan, which integrates health-tracking sensors with data transmission and processing? The chip is so small that "people could wake up in the morning to the voice of a microcomputer in a headset informing them of the state of their health and things to look out for in their lifestyle."

It may be the case that the Apple watch ends up being of significant value for particular applications in healthcare & clinical trials for those who can afford it, but does not have long term success as a general smartwatch with the average consumer. Here in the UK, with the NHS hunting down the back of the sofa looking for extra pennies, experimenting with the Apple watch may be a pipe dream at best. 

I did find a wonderful story of how the Apple watch has changed someone's life in just 5 days. Molly Watt is a 20 year old woman in England who has Usher Syndrome Type 2a. She was born deaf and has only a very small tunnel of vision in her right eye. In her blog, she describes how the distinct taps for turning left and right (the same feature that helped me feel safer walking in Milan at night) allowed her to feel more confident when walking down the street, without relying upon hearing or sight. We might not see much benefit from Apple Pay on the watch for mobile payments, but could this feature be tremendously useful to someone with learning difficulties?

Without giving it a try and without generating evidence, it would be premature to dismiss the Apple watch completely. As these consumer technologies evolve at an increasing rate, what actually counts as evidence, and how do we collect it?

You may see wearables as just a fad, a passing phase, and you'd never wear any of these devices. Well, what if you had to wear a device on your wrist just to get insured? Nope, it is not science fiction; it is the view of Swiss Re, a reinsurance giant whose executives believe it will be impossible to get life insurance in 5 to 10 years without a wearable device.

A really fascinating article from Taiwan discusses the profitability of smartwatches in healthcare, and mentions, "Service platforms that integrate medical care organizations with insurance companies will produce the greatest value." 

Fitness is touted as one of the immediate applications for the Apple watch, yet Gregory Ferenstein's review suggests you won't gain much from the Apple watch in the fitness arena over and above simply using an iPhone. 

Or maybe we are misguided in pursuing the idea of 'smartwatches' entirely? Below is a great talk Gadi Amit gave recently at WIRED Health about the concept of wearable tech under our skin, in which he states, "The biggest issue that I see… is the idea that if we load more and more functionality on our wrists, things will get better. In many cases, it does not."

I'm not surprised that Apple have sold so many watches, as they have a well-oiled marketing machine. How many of today's purchasers will still be using the watch in 12 months' time, or will it go the way of Google Glass? And how many people would be willing to upgrade to a newer version of the watch in 12-18 months? These are the hard metrics that we need to pay attention to, once the initial enthusiasm dissipates.

In my opinion, the single biggest improvement Apple could make is to extend the battery life and offer wireless charging. I'd happily have fewer features in exchange for not having to charge it every day. Come to think of it, current engineering limitations on battery life impact the use of many portable devices, whether wearable tech, phones, tablets or laptops. We need a breakthrough in battery technology.

So, will people wearing an Apple watch be treated with the same disdain as those who wore Google Glass? Users of Google Glass were branded "Glassholes", so will users of the Apple watch be branded "Glanceholes"?

The Apple watch is dead. Long live the Apple watch.

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]



Robots as companions: Are we ready?

Nao

Some people on Earth seem to think so. In fact, they believe in the concept so much that they are actually building the world's first personal robot that can read & respond to human emotions: a collaboration between the French robotics firm Aldebaran and SoftBank Mobile from Japan. You may already know one of Aldebaran's existing robots, Nao. The new robot is called Pepper, is due to launch in Japan in February 2015, and is priced at 198,000 yen. Using today's exchange rates, that's approximately $1,681 or £1,074, although only the Japanese launch has been confirmed for now. Pepper may be sold in the USA through Sprint stores at some point. The notion of a robot in your home that can interact with you, and even tell you a joke if you're feeling sad, piqued my curiosity. So much so that in September 2014, I hopped on a train from London to Paris.

Me &amp; Pepper in Paris

Me & Pepper in Paris

Why Paris? Well, the world's first home robot store, called Aldebaran Atelier, opened in Paris this summer, and they had Pepper in the store. You can't buy any of the robots in the store just yet; it's more a place to come and learn about these robots.

So what's Pepper like? You have to bear in mind that the version I interacted with in Paris is not the final version, so the features I saw are not fully developed, especially the aspects of recognising who you are, and getting to know you and your needs. The 3 minute video below shows some of the interaction I had. For now, Pepper understands English, French and Japanese. 

A bit more about how Pepper works. In the final version, Pepper will be able to understand 5 basic emotional expressions of the face: smiling, frowning, surprise, anger & sadness. Pepper will also read the tone of your voice and the vocabulary used, as well as non-verbal communication such as tilting your head. So, for example, if you're feeling sad, Pepper may suggest you go out. If you're feeling happy, Pepper may sing a song and do a dance for you (more on that later). According to a Mashable article, "Pepper has an 'emotional engine' and cloud based artificial intelligence". The article also states, "The cloud AI will allow Pepper to share learnings with cloud-based algorithms and pull down additional learning, so that its emotional intuition and response can continually improve. It's either a technological breakthrough or the most terrifying robot advancement I've ever heard of."
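Stripped of the hardware, an 'emotional engine' boils down to classifying an observed emotion and picking a response. Here is a deliberately oversimplified, hypothetical Swift sketch of that mapping; none of it reflects Aldebaran's actual SDK or models:

```swift
// Hypothetical sketch: an "emotional engine" as a mapping from a
// detected emotion to a response. A real system would fuse facial
// expression, voice tone and gesture with confidence scores; this is
// illustrative only and not based on Aldebaran's software.
enum Emotion {
    case smiling, frowning, surprise, anger, sadness
}

func respond(to emotion: Emotion) -> String {
    switch emotion {
    case .sadness:  return "You seem a little down. How about going out for some fresh air?"
    case .smiling:  return "You look happy! Shall I sing you a song?"
    case .anger:    return "I'll give you some space. Let me know if you want to talk."
    case .surprise: return "Oh! Did something unexpected happen?"
    case .frowning: return "Is something bothering you?"
    }
}

print(respond(to: .sadness))
```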

Some facts and figures for you:

  • 4 feet tall, and weighs 61 lbs/28kg

  • 12 hour battery life - and automatically returns to charging station when battery is low

  • 3D camera which senses humans and their movements up to 3 metres away

In the press kit I was given at the store, it's stated that "Pepper's number one intention is about being kind and friendly. He has been engineered to meet not functional but emotional needs." 

It's not just speech and movement that Pepper responds to, it's also touch. There are sensors on the upper part of his head, upper part of his hands and on the tablet attached to his chest. Pepper may be talking to you, and if you place your hand on his head, the way that you would with a child, Pepper will go quiet. Although, when I tried it, Pepper responded by saying something about sensing someone was scratching his head! 

The creators anticipate Pepper being used to care for the elderly and for babysitting. What are your thoughts? Do YOU envisage leaving your elderly parent or young child with Pepper for company whilst you do some chores or dash to the supermarket? I told Shirley Ayres, Co-Founder of the Connected Care Network, about Pepper. Her response was: "I'd prefer a robot companion to 15 minutes of care by a worker on minimum wage struggling to provide quality care on a zero hour contract."

Given aging populations, and the desire of many to grow old in their own home rather than an institution, are household companion robots the answer to this challenge? As technology such as Pepper evolves, will a robot at home be the solution to increasingly lonely societies? Will we really prefer the company of a household robot to that of another human being? Will we end up treating the purchase of Pepper the same way we treat the purchase of an iPad? Will your children buy you a Pepper so they don't have to visit you as often as you'd like? The CEO of Aldebaran, Bruno Maisonnier, believes they will sell millions of these robots. Apparently, they'll be able to make a profit from the sales of robot-related software and content. Apps for robots?

Pepper does have all sorts of sensors, so it can understand humans as well as the environment it's operating within. I understand it will collect data, but it's not clear to me, at this stage, exactly what would be collected or shared. Just because Pepper seems kind and friendly doesn't mean we should not consider the risks and benefits associated with any data it collects on us, our behaviours and intentions. There could be immense benefits from a robot that can remind an older person around the clock when to take their medications, and potentially collect data on when doses are being skipped and why.
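To make that concrete, here is a minimal, hypothetical Swift sketch of the adherence data such a robot might keep; the type names and fields are my own invention, not anything from Aldebaran:

```swift
import Foundation

// Hypothetical data model for a companion robot's medication reminders.
// None of this comes from Aldebaran's SDK; it's an illustrative sketch.
struct Dose {
    let medication: String
    let scheduledAt: Date
    var takenAt: Date?            // nil until confirmed by the person
    var skippedReason: String?    // e.g. "asleep", "felt nauseous"
}

struct AdherenceLog {
    private(set) var doses: [Dose] = []

    mutating func record(_ dose: Dose) { doses.append(dose) }

    // Shared only with explicit consent: which doses were skipped, and why.
    func skippedDoses() -> [Dose] {
        return doses.filter { $0.takenAt == nil }
    }

    var adherenceRate: Double {
        guard !doses.isEmpty else { return 1.0 }
        let taken = doses.filter { $0.takenAt != nil }.count
        return Double(taken) / Double(doses.count)
    }
}

// Example: one taken dose and one skipped dose gives 50% adherence.
var log = AdherenceLog()
log.record(Dose(medication: "Metformin", scheduledAt: Date(),
                takenAt: Date(), skippedReason: nil))
log.record(Dose(medication: "Lisinopril", scheduledAt: Date(),
                takenAt: nil, skippedReason: "forgot"))
print(log.adherenceRate)  // 0.5
```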

An Institute of Medicine panel has just recommended that "Physicians should collect more information about patients' behaviour and social environment in their electronic health records." Some of the information the panel recommends collecting includes "whether they are experiencing depression; their social connections and sense of social isolation." Is technology such as Pepper the most effective route to collecting that data? Do we want a world where our household robot sends data to our doctor on how often we feel sad and lonely? Perhaps for those of us too afraid to reach out for help and support, that's a good thing?

My brief interaction in Paris with Pepper was fun and enjoyable, a glimpse into a possible future. With its childlike gestures and ability to monitor and respond to our emotions, could we as humans one day form emotional attachments to household robots? Here is the video of Pepper wanting to play some music for me in the Paris store.

One does wonder how the introduction of these new robots might impact jobs. What does technology such as Pepper mean for human carers? A recent report from Deloitte forecasts that 30% of jobs in London are at high risk from automation over the next 20 years; it's low paid, low skill jobs that are most at risk. Microsoft is trying out a different robot, K5 from Knightscope, as a security guard on its Silicon Valley campus. In Japan, Pepper has been used by Softbank to conduct market research with customers in a Tokyo store. Nestle is planning to use Pepper to sell coffee machines in 1,000 of its stores across Japan by the end of 2015. Here is the video showing how Pepper might work in selling to consumers in Nestle's stores.

Now, some of us may dismiss this robot technology as crude and clumsy, with little or no potential to make a significant impact. I personally think it's amazing that we've reached this point, and like any technology, it won't stand still. Over time, it will improve and become cheaper. We are at a turning point, whether we like it or not. Does Pepper signify the dawn of a new industry, or will these household robots be rejected by consumers? How are household robots treated by the law? Do we need to examine how our societies function rather than build technology such as Pepper? Have we become so disconnected from ourselves that we need Pepper in our homes to reconnect with ourselves as humans? Does the prospect of having a robot like Pepper in your own home with your family excite you or frighten you?

Given the intense pressure to reduce costs in health & social care, it would be foolish to dismiss Pepper completely. So in the future, will we see companion robots like Pepper stationed in hospitals and doctors' offices too? Can personal robots that connect with our emotions change how we 'deliver' and 'receive' care?

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]


Wearables: Hope or Hype?

I've been thinking about this question a lot in 2014. I'm seeing more articles proclaiming that wearable technology is a 'fad' with no 'practical' value, and, in the context of health, it is often viewed as inferior to officially certified & regulated medical devices.

"Technology is beyond good or bad", said Tamara Sword, at yesterday's Wearable Horizons event. Now, a very common piece of technology in our kitchen, the knife, is a classic example. Used for it's purpose, it speeds up the food preparation process by slicing carrots. However, it can also be used for harm, if used to slice a finger off.

The same goes for wearable technology like Google Glass. It can do immense good, such as saving someone's life. However, it could also be used to harm, if someone wearing Glass takes a picture of your 5 year old child in Starbucks without obtaining consent from your child or yourself [credit to the visionary John Havens for making me think about the Starbucks scenario].

We all have to start somewhere

We have to remember that the market for wearables is embryonic; it's not mature in any shape or form. Every innovation I see & test is crude & clumsy, with many flaws. Thinking back to 1903, wasn't the first airplane crude & clumsy? Wearables WILL evolve, just like the airplane [hopefully, it won't take 111 years like the airplane].

I salute those willing to take a risk and develop wearable technology. From lone entrepreneurs to Nike, what unites them is that they took a risk. They experimented. Experiments don't always turn out to be successful, noting Nike's recent withdrawal from wearables. That's entirely normal; we can't expect everyone to succeed at their first attempt. What would our world look like today if Steve Jobs had given up after his first attempt?

How many of you failed your driving test the first time? Instead of dismissing the brave efforts of those willing to take a risk into the unknown, we should be encouraging & supporting them. It's those willing to take those chances, to explore unknown waters, to imagine a better world, that have helped humanity make so much progress. 

Courageous conversations

There are many questions around wearables that remain unanswered. There are uncomfortable, awkward & terrifying conversations surrounding the use of wearable technology that are sorely needed, not just within society, but within our political, legal and regulatory framework too. If we place a piece of wearable tech on a patient with dementia, how do we obtain informed consent from the patient?

When I saw the recent headline that a hospital in Boston is equipping everyone in the ER with Google Glass, my first reaction was one of excitement, with my second being one of curiosity. What happens to the face & voice data? At 3am, when the ambulance rushes you & your sick child to the hospital, will you really stop to ask the hospital staff what the privacy policy is regarding the images of your child captured using Google Glass?

I observe that many, including those in the business of creating or using regulated medical devices, look down upon some wearable technologies. Activity trackers are frequently viewed as fun toys, not 'proper' medical devices.

Let me ask you something. If an overweight & inactive person aged 40 uses a $99 Fitbit to track their activity & sleep, leading to insights that trigger behaviour change over the long term, is it still a toy? For example, what if developing the habit of walking 10,000 steps a day (versus 1,000 today) reduces their risk of a heart attack, delays the onset of diabetes or even prevents high blood pressure? Still just a 'glorified' pedometer?

Imagining a better tomorrow

I believe wearable technology, particularly for health, is just one step on the journey in today's digital world. There will continue to be failures, and whilst there is significant froth & hype, there is also significant hope. Our ability to imagine a better future is what has gotten us to where we are today. Imagination could be one of the most useful attributes for any organisation wishing to meet the challenges facing human health in the 21st century.

It appears likely that firms which don't have roots in health could be helping us realise this new world. Perhaps this shift frightens those firms that do have their roots in health? I'm intrigued to note that Samsung are holding an event in San Francisco later this month, promising that a new conversation about health is about to begin. Having visited South Korea a few years ago, and learnt so much about Samsung's history & vision, I'm going to be watching what they do very closely.

Is it a bad thing if wearable technology (and the resulting data, insights & education) empowers patients and makes them less dependent on doctors & the healthcare system?

Is it wrong to dream of 'smart fabrics' where our health could be monitored 24 hours a day?

Is it silly to want to develop sensors that could one day transmit data directly from our internal organs to our doctors?
