Being Human

This is the most difficult blog post I’ve ever had to write. Almost 3 months ago, my sister passed away unexpectedly. It’s too painful to talk about the details. We were extremely close and because of that the loss is even harder to cope with. 

The story I want to tell you today is about what’s happened since that day and the impact it’s had on how I view the world. In my work, I spend considerable amounts of time with all sorts of technology, trying to understand what all these advances mean for our health. Looking back, since the start of this year I’d been feeling increasingly concerned by the growing chorus of voices telling us that technology is the answer to every problem when it comes to our health. Many of us have been conditioned to believe them. For some, the narrative has been intoxicating.

Ever since this tragedy, it’s not an app, a sensor or data that I’ve turned to. I have been craving authentic human connections. As I have tried to make sense of life and death, I have wanted to relate to family and friends by making eye contact, giving and receiving hugs, and simply being present in the same room as them. The ‘care robot’ that arrived from China this year as part of my research into whether robots can keep us company remains switched off in its box. Amazon’s Echo, the smart assistant with a voice interface that I’d been testing extensively, also sits unused in my home. I used it most frequently to turn the lights on and off, but now I prefer walking over to the light switch and the tactile sensation of pressing it with my finger. One day last week, I was feeling sad and didn’t feel like leaving the house, so I decided to try putting on my Virtual Reality (VR) headset to join a virtual social space. I entered a sunny, computer-generated back yard where a BBQ was under way; I could see the other guests’ avatars, and I chatted with them for about 15 minutes. After I took off the headset, I felt worse.

There have also been times I have craved solitude, and walking in the park at sunrise on a daily basis has been very therapeutic. 

Increasingly, some want machines to become human, and humans to become machines. My loss has caused me to question these viewpoints. In particular, the bizarre notion that we are simply hardware and software that can be reconfigured to cure death. Recently, I heard one entrepreneur claim that with digital technology, we’ll be able to get rid of mental illness in a few years. Others I’ve met believe we are holding back the march of progress by wanting to retain the human touch in healthcare. Humans in healthcare are an expensive resource, make mistakes and resist change. So, is the answer just to bypass them? Have we truly taken the time to connect with them and understand their hopes and dreams? The stories, promises and visions being shared in Digital Health are often just fantasy, with some storytellers (also known as rock stars) heavily influenced by Silicon Valley’s view of the future. We have all been influenced on some level. Hope is useful, hype is not. 

We are conditioned to hero worship entrepreneurs and to believe that the future the technology titans are creating, is the best possible future for all of us. Grand challenges and moonshots compete for our attention and yet far too often we ignore the ordinary, mundane and boring challenges right here in front of us. 

I’ve witnessed the discomfort many have had when offering me their condolences. I had no idea so many of us have grown up trained not to talk about death and healthy ways of coping with grief. When it comes to Digital Health, I’ve only ever come across one conference where death and other seldom discussed topics were on the agenda, Health 2.0 with their “unmentionables” panel. I’ve never really reflected upon that until now.

Some of us turn to the healthcare system when we are bereaved; I chose not to. Health isn’t something that can only be improved within the four walls of a hospital. I don’t see bereavement as a medical problem. I’m not sure what a medical doctor can do in a 10 minute consultation, nor have I paid much attention to the pathways and processes that scientists ascribe to the journey of grief. I simply do my best to respond to the need in front of me and to honour my feelings, no matter how painful those feelings are. I know I don’t want to end up like Prince Harry, who recently admitted he had bottled up his grief for 20 years after the death of his mother, Princess Diana, and that suppressing the grief took him to the point of a breakdown. The sheer maelstrom of emotions I’ve experienced these last few months makes me wonder even more: why does society view mental health as a lower priority than physical health? As I’ve been grieving, there are moments when I have felt lonely. I heard about an organisation that wants to reframe loneliness as a medical condition. Is this the pinnacle of human progress, that we need medical doctors (who are an expensive resource) to treat loneliness? What does it say about our ability to show compassion for each other in our daily lives?

Being vulnerable, especially in front of others, is wrongly associated with weakness. Many organisations still struggle to foster a culture where people can truly speak from the heart with courage. That makes me sad, especially at this point. Life is so short yet we are frequently afraid to have candid conversations, not just with others but with ourselves. We don’t need to live our lives paralysed by fear. What changes would we see in the health of our nation if we dared to have authentic conversations? Are we equipped to ask the right questions? 

As I transition back to the world of work, I’m very much reminded of what’s important and who is important. The fragility of life is unnerving. I’m so conscious of my own mortality, and so petrified of death, it’s prompted me to make choices about how I live, work and play. One of the most supportive things someone has said to me after my loss was “Be kind to yourself.” Compassion for one’s self is hard. Given that technology is inevitably going to play a larger role in our health, how do we have more compassionate care? I’m horrified when doctors & nurses tell me their medical training took all the compassion out of them or when young doctors tell me how they are bullied by more senior doctors. Is this really the best we can do? 

I haven’t looked at the news for a few months, and immersing myself in Digital Health news again makes me pause. The chatter about Artificial Intelligence (AI) sits at either end of the spectrum: almost entirely dystopian or almost entirely utopian, with few offering balanced perspectives. These machines will either end up putting us out of work and ruling our lives, or they will be our faithful servants, eliminating every problem and leading us to perfect healthcare. For example, I have a new toothbrush that says it uses AI, and it’s now telling me to go to bed earlier because it noticed I brush my teeth late at night. My car, a Toyota Prius, which is primarily designed for fuel efficiency, scores my acceleration, braking and cruising constantly as I’m driving. Where should my attention rest as I drive, on the road ahead or on the dashboard, anxious to achieve the highest score possible? Is that where our destiny lies? Is it wise to blindly embark upon a quest for optimum health powered by sensors, data & algorithms nudging us all day and all night until we achieve and maintain the perfect health score? 

As more of healthcare moves online, reducing costs and improving efficiency, who wins and who loses? Recently, my father (who is in his 80s) called the council as he needed to pay a bill. Previously, he was able to pay with his debit card over the phone. Now they told him it’s all changed, and he has to do it online. When he asked them what happens if someone isn’t online, he was told to visit the library, where someone can do it online with you. He was rather angry at this change. I can now see his perspective, and why this has made him angry. I suspect he’s not the only one. He is online, but there are moments when he wants to interact with human beings, not machines. In stores, I always used to use the self service checkouts when paying for my goods, because it was faster. Ever since my loss, I’ve chosen to use the checkouts with human operators, even if it is slower. Earlier this year, my mother (in her 70s) got a form to apply for online access to her medical records. She still hasn’t filled it in; she personally doesn’t see the point. In Digital Health conversations, statements are sometimes made that are deemed to be universal truths: that every patient wants access to their records, or that every patient wants to analyse their own health data. I believe it’s excellent that patients have the chance of access, but let’s not assume they all want access. 

Diversity & Inclusion is still little more than a buzzword for many organisations. When it comes to patients and their advocates, we still have work to do. I admire the amazing work that patients have done to get us this far, but when I go to conferences in Europe and North America, the patients on stage are often drawn from a narrow section of society. That’s assuming the organisers actually invited patients to speak on stage, as most still curate agendas which put the interests of sponsors and partners above the interests of patients and their families. We’re not going to do the right thing if we only listen to the loudest voices. How do we create the space needed so that even the quietest voices can be heard? We probably don’t even remember what those voices sound like, as we’ve been too busy listening to the sound of our own voice, or the voices of those that constantly agree with us. 

When it comes to the future, I still believe emerging technologies have a vital role to play in our health, but we have to be mindful in how we design, build and deploy these tools. It’s critical we think for ourselves, to remember what and who are important to us. I remember that when eating meals with my sister, I’d pick up my phone after each new notification of a retweet or a new email. I can’t get those moments back now, but I aim to be present when having conversations with people now, to maintain eye contact and to truly listen, not just with my ears, and my mind, but also with my heart. If life is simply a series of moments, let’s make each moment matter. We jump at the chance of changing the world, but it takes far more courage to change ourselves. The power of human connection, compassion and conversation to help me heal during my grief has been a wake up call for me. Together, let’s do our best to preserve, cherish and honour the unique abilities that we as humans bring to humanity.

Thank You for listening to my story.

Patients and their caregivers as innovators

I've been conducting research for a while now on how patients and their families innovate for themselves. They decided not to wait for the system to act; they acted themselves. One leading example is the Open Artificial Pancreas System project, whose community even uses the hashtag #WeAreNotWaiting. I was inspired to write this post today for two reasons. 

  1. I delivered a keynote at the MISK Hackathon in London yesterday to innovators in both London & Riyadh reminding them that innovation can come from anyone anywhere on Earth.

  2. A post by the World Economic Forum about Tal Golesworthy, an engineer with a life-threatening heart condition who fixed it himself. 

I thought this line in the WEF article was particularly fascinating, as it conveys the shock, surprise and disbelief that a patient could actually be a source of innovation: "And it flags up the likelihood that other patients with other diseases are harbouring similarly ingenious or radical ideas." I wonder how much we are missing out on in healthcare, because many of us are conditioned to think that a patient is a passive recipient of care, and not an equal who could actually out-think us. Golesworthy, who is living with Marfan Syndrome, came up with a new idea for an aortic sleeve, which led to him setting up his own company. The article then goes on to talk about a central repository of patient innovation to help diffuse these ideas, and this repository actually exists! It's called Patient Innovation and was set up over 2 years ago by the Católica Lisbon School of Business and Economics. The group has received over 1,200 submissions, and after screening by a medical team, around 50% of those submissions have been formally listed on the website. Searching the website for what patients have done by themselves is inspiring stuff. 

In the title, you'll notice that I also acknowledged that it's not just patients who innovate on their own; their caregivers can be part of that innovation process. Sometimes, the caregiver (parent, family member or someone else) might have a better perspective on what's needed than the patient themselves. The project leader for the Patient Innovation repository, Pedro Oliveira, published a paper in 2015 exploring innovation by patients with rare diseases and chronic needs, and I share one of the stories he included in his paper. 

"Consider the case of a mother who takes care of her son, an Angelman syndrome patient. Angelman syndrome involves ataxia, inability to walk, move or balance well. The mother experimented with many strategies, recommended by the doctors, therapists, or found elsewhere, but obtained little gain for her child. By chance, at a neighbor’s child’s birthday party, she noticed her son excitedly jumping for strings to catch a floating helium-filled balloon. This gave her an idea and she experimented at home by filling a room with floating balloons. She found her child began jumping and reaching for the balloons for extended periods of time, amused by the challenge. The mother also added bands to support the knees and keep the child in an upright position. The result was significant improvement in her child’s physical abilities. Other parents to whom she described the solution also tried the balloons strategy and had positive results. This was valued as a novel solution by the medical evaluators."

So many of us think that innovation in today's modern world has to start with an app, a sensor or an algorithm, but the solutions could involve far simpler technology, such as a balloon! It's critical that we are able to discriminate between our wants and needs. A patient may be led to believe they want an app, but their actual need is for something else. Or we as innovators want to work with a particular tool or type of technology, and we ignore the need of the patient themselves. 

Oliveira concludes with a powerful statement that made me stand back and pause for a few minutes, "Our finding that 8% of rare disease patients and/or their non-professional caregivers have developed valuable, new to the world innovations to improve their own care suggests that a massive, non-commercial source of medical innovations exists." 

I want you to also pause and reflect on this conclusion. How does this make you feel? Does it make you want to change the way you and your organisation approach medical innovation? One of the arguments against patient innovation is that it could put the patient at risk; after all, they haven't been to medical school. Is that perception by healthcare professionals of heightened risk justified? Maybe not. Oliveira also reports that, "Almost all the reported solutions were also judged by the experts to be relatively safe: out of 182, only 4 (2%) of the patients’ developments were judged to be potentially detrimental to patients’ health by the evaluators." Naturally, this is just one piece of research, and we would need to see more like this to truly understand the benefit-risk profile of patient innovations, but it's still an interesting insight. 

I feel we don't hear enough in the media about innovation coming from patients and their caregivers. Others also share this sentiment. With reference to the Patient Innovation website, in the summer of 2015, Harold J. DeMonaco made this statement in his post, reminding us that not all innovation comes from industry: "There is a symposium going on this week in Lisbon, Portugal that is honoring patient innovators, and I suspect this will totally escape the notice of US media."

I am curious why we don't hear much more about patient innovators in the media. What can be done to change that? If you're a healthcare reporter reading this post, and you haven't covered patient innovation before, I'm really interested to know why.

During my research, I've been very curious to determine what analysis has been done to understand if patients are better at innovation than others. After all, they are living with their conditions, they are subject matter experts on their daily challenges, and they have enough insights to write a PhD on 'my health challenges' if they needed to! I did find a working paper from March this year by researchers in Germany at the Hamburg University of Technology (Goeldner et al). Their paper is titled "Are patients and relatives the better innovators? The case of medical smartphone applications." Their findings are very thought provoking. For example, when they looked at ratings of apps, the ratings for apps developed by patients and healthcare professionals were higher than those for apps developed by companies and independent developers. For me, the most interesting finding was that apps developed by patients' relatives generated the highest revenues. Think about every hackathon in healthcare you've attended: how many times were patients invited, and how many times were the relatives of patients invited? One of the limitations of the paper, which the authors admit, is that it used apps from Apple's App Store. The study would need to be repeated using Google's Play Store, given that the majority of smartphones in the world are not iPhones. 

This hypothesis from the paper highlights for me why patients and those who care for them need to be actively included: "We propose that patients and relatives also develop needs during their caring activities that may not yet been envisioned by medical smartphone app developers. Thus, the dual knowledge base might be a reason for the significantly superior quality of apps developed by patients and relatives compared to companies." They also make this recommendation: "Our study shows that both user types – intermediate users and end users – innovated successfully with high quality. Commercial mobile app publishers and healthcare companies should take advantage of this and should consider including patients, patents’ relatives, and healthcare professionals into their R&D process." 

If you're currently developing an app, have you remembered to invite everyone needed to ensure you develop the highest quality app with the highest chance of success? 

I'm attending a Mobile Health meetup in London next week, called "Designing with the Dementia community" - they have 2 fantastic speakers at the event, but neither of them is a person living with Dementia. Perhaps the organisers tried to find people living with Dementia (or their caregivers) to come and speak, but nobody was available on that date. I remember when I founded the Health 2.0 London Chapter and ran monthly events, just how difficult it was to find patients to come and speak at my events. How do we communicate to patients and their caregivers that they have unique insights that are routinely missing from the innovation process, and that people want to give them a chance to share those insights? Another event in London next month is about Shaping the NHS & innovation, with a headline of 'How can we continue to put patients first?' They have 4 fantastic speakers, who are all doctors, with not a patient in sight. It reminds me of conferences I attend where people make lots of noise about improving physician workflow, yet nobody ever advocates for improving patient workflow. 

In the UK, the NHS appears to be making the right noises with regard to wanting to include patients and the public in the innovation process. Simon Stevens, CEO of NHS England, has spoken of his desire to enable patients to play a much more central role in innovation. Simon Denegri's post reviewing Stevens' speech to the NHS Confederation back in 2014 is definitely worth a read.

Despite the hopes of senior leaders, I still feel there is a very large gap between the rhetoric and reality. I talk to so many patients (and healthcare professionals) who sadly have stopped coming up with ideas to make things better, because the system always says no or dismisses their idea as foolish because they are not seen as experts. Editing your website to include 'patient centred' is the easy part, but actually getting each of your staff to live and breathe those words on a daily basis is a much more difficult task. Virtually every organisation in healthcare I observe is desperate for innovation, except that they want innovation on their terms and conditions, which is often a long-winded, conservative and bureaucratic process. David Gilbert's wonderful post on patient led innovation concludes with a great example of this phenomenon:

"I once worked with a fabulous cardiac rehab nursing team that got together on a Friday and asked each other, ‘what one thing have we learned from patients this week?’ And ‘what one thing could we do better next week?’ We were about to go into the next phase and have a few patients come to those meetings and my fantasy was to get them to help design and deliver some of the ideas. But the Director of Nursing said that our idea was counter to the Engagement Strategy and objected that patients would be ‘unrepresentative’. Now they run focus groups, that report to an engagement sub-committee that reports to a patient experience board that reports to… crash!"

It's not all doom and gloom, times are changing. Two UK patients, Michael Seres & Molly Watt, have each innovated in their own arenas, and created solutions to solve problems that impact people like them. I'm proud that they are both my friends, and their efforts always remind me of what's possible with sheer determination, tenacity and vision, even when all the odds are stacked against you.

Tomorrow, four events in the UK are taking place which fill me with hope. One is People Drive Digital, where the headline reads, "Our festival is a creative space for people orientated approaches to digital technologies and online social networks in health and care" and the second is a People’s Transformathon, where the headline reads, "Bringing together patients, carers, service users, volunteers and staff from across health and care systems in the UK and overseas to connect, share, and learn from one another."

The third event is called Patients First, a new conference from the Association of Medical Research Charities (AMRC) and Association of the British Pharmaceutical Industry (ABPI), where the headline reads, "It brings together everyone involved in delivering better outcomes for patients – from research and development to care and access to treatments – and puts patients at the heart of the discussion."

The fourth event is a Mental Health & Technology: Ideas Generation Workshop hosted by the Centre for Translational Informatics. Isn't it great to read the description of the event: "South London and Maudsley NHS Foundation Trust and Kings College London want you to join what we hope will be the first in a series of workshops, co-led by service users, that will hear and discuss your views of the mental health technology you use, want to use or wish you had so that we can partner with you in its design, development and deployment." In the FAQ covering the format of the event, the organisers state, "The event will be in an informal and relaxed, there are no wrong opinions! We want to hear your ideas and thoughts." What a refreshing contrast to the typical response you might get within a hospital environment. 

The first event is in Leeds, the second is online, and the third and fourth are both in London, and I know that the first three are using a Twitter hashtag, so you will be able to participate from anywhere in the world. What I find particularly refreshing is that the first two events start their title with the word people, not patient. 

I also noticed that the Connected Health conference next month has a session on Patients as Innovators and Partners, with a Patient Advocate, Amanda Greene, as a speaker. I'm inspired and encouraged by agents of change who work within the healthcare system and are pushing boundaries themselves by acknowledging that patients bring valuable ideas. One of those people is Dr Keith Grimes, who was also mentoring teams at the MISK Hackathon, and the 360 video below of our conversation shows why we need more leaders like him. The video is an excerpt from a longer 9 minute video where we even discussed how health hackathons could innovate in terms of format. 

As we approach 2017, I really do hope we see the pace of change speed up when it comes to harnessing the unique contributions that patients and their caregivers can bring to the innovation process, whether it's at a grassroots community level or the design of the next big health app. More and more people around the globe who were previously offline are now being connected to the internet and/or using a smartphone for the first time. How will we tap into their experiences, ideas and solutions? Whether a patient is in Riyadh, Riga or Rio, let's connect with them, and genuinely listen to them, with open hearts and open minds. 

We can also help to create a different future by educating our youth differently, so they understand their voice matters, even if they don't have a string of letters after their name. We are going to have to have difficult conversations where we feel uncomfortable, and we'll have to leave our egos out of those conversations. There are circumstances where patients will be leading, and the professionals will have to accept that, or risk being bypassed entirely, which is not a healthy situation. Equally, there are times when we'd probably want a paternalistic healthcare system, where the healthcare professionals are seen as the leaders in charge of the situation, i.e. in a medical emergency.

The dialogue on patient innovation isn't about patients vs doctors, or about assigning blame, it's about coming together to understand how we move forward. Many of us are conditioned to think and act a certain way, whether it's because of our professional training or just how society suggests we should think. Unravelling that conditioning on a local, national, international and global level is long overdue. 

What will YOU do differently to foster a culture where we have many more innovations coming from patients and their caregivers? A future where having a patient (or their advocate) keynote at an event isn't seen as something novel, but the norm. A future where the system acknowledges that on certain occasions, the patient or their caregiver could be superior at generating innovation. A future where the gap between the rhetoric and reality disappears. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Engaging patients & the public is harder than you think

Back in 2014, Google acquired a British artificial intelligence startup in London, called Deepmind. It was their biggest EU purchase at that time, and was estimated to be in the region of 400 million pounds (approx $650 million). Deepmind's aim from the beginning was to develop ways in which computers could think like humans. 

Earlier this year, Deepmind launched Deepmind Health, with a focus on healthcare. It appears that the initial focus is to build apps that can help doctors identify patients who are at risk of complications. It's not clear yet how they plan to use AI in the context of healthcare applications. However, a few months after they launched this new division, they did start some work with Moorfields Eye Hospital in London to apply machine learning to 1 million eye scans to better predict eye disease. 

There are many concerns, which get heightened when articles are published such as "Why Google Deepmind wants your medical records?" Many of us don't trust corporations with our medical records, whether it's Google or anyone else. 

So I popped along to Deepmind Health's 1st ever patient & public engagement event held at Google's UK headquarters in London last week. They also offered a livestream for those who could not attend. 

What follows is a tweetstorm from me during the event, which nicely summarises my reaction to the event. [Big thanks to Shirley Ayres for reminding me that most people are not on Twitter, and would benefit from being able to see the list of tweets from my tweetstorm] Alas, due to issues with my website, the tweets are included as images rather than embedded tweets. 

Finally, whilst not part of my tweetstorm, this one question reminded me of the biggest question going through everyone's minds. 

Below is a 2.5 hour video which shows the entire event including the Q&A at the end. I'd be curious to hear your thoughts after watching the video. Are we engaging patients & the public in the right way? What could be done differently to increase engagement? Who needs to do more work in engaging patients & the public?

There are some really basic things that can be done, such as planning the event with consideration for the needs of those you are trying to engage, not just your own. This particular event was held from 10am to 12pm on a Tuesday. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


An interview with Adrian Leu on the role for creativity in healthcare

Given the launch of products such as the Samsung Gear VR or Pokemon GO, many of us are experimenting with developments in technology such as Virtual Reality (VR) and Augmented Reality (AR) to create, share and consume content. One of the challenges in Digital Health when it comes to creating an app is where the expertise to build it will come from. It’s an even bigger challenge if you want to find organisations who can build cutting edge VR/AR experiences for you. I strongly believe that the health & social sectors would benefit significantly from greater engagement with the creative sector. Here in the UK, it’s not just London that offers world leading creativity, it’s all around the nation. 

Now, in my own personal quest to understand who can help us build a future of Immersive Health, I’ve been examining who the leaders are in the creative sector, and who has a bold enough vision for the future that could be the missing ingredient in making our healthcare systems fit for the 21st century. I was at an event earlier this year in London where I heard a speaker, Adrian Leu, talk about the amazing work they are doing in VR. Adrian Leu is the CEO of Inition, a multidisciplinary production company specializing in producing installation-based experiences that harness emerging technologies with creative rigour.

So I decided to venture down to their headquarters in London, and interview Adrian.

1. Inition – Who are they?
We are a multidisciplinary team, and have built our reputation by looking at new technologies before they become commercially available, and at how these technologies can be combined to create creative solutions. We are quite proficient in creating experiences which combine software and hardware. We’ve done many firsts, including one of the first AR experiences. We also did the 1st VR broadcast of a catwalk show from London Fashion Week for Topshop.

We have a track record of over 13 years and hundreds of installations in the UK and abroad, and we are known for leveraging new technologies for creative communications well before they hit the mainstream. We have been augmenting reality since 2006, printing in 3D since 2005, and creating virtual realities since 2001. There aren't many organisations out there who can say the same! We have also combined 3D printing with AR. I'm really proud that we have a finely tuned mixture of people, strong on their individual capabilities but very interested in what's happening around them.

We work as an integrator of technology in the area of visual communications. Our specific areas move and shift as the times change: currently we are doing a lot of work in VR; two years ago we were doing a lot of AR. Whilst others are just talking about these technologies, we have tried a lot of them, and we know the nitty gritty of their practical implementation.

We’ve worked with many sectors: pharma, oil/gas, automotive, retail, architectural (AEC), defense and aerospace, and the public sector.

2. What are the core values at the firm?
People here are driven by innovation, by creativity, and by things which have a purpose; at the end of the day, a mix of all three. The company was founded by three men who came from a Computer Science and simulation background. It ran independently for 11 years, was then acquired by a PLC 4 years ago, and one of the founders is still with us. I have been CEO since last year. My background is in data visualisation; my PhD was in medical visualisation, where I used volumetric rendering to reconstruct representations of organs from MRIs.
 
3. Which of your projects are you proudest of?
Our work with the Philharmonia Orchestra and the Southbank Centre is one of them. This was the first major VR production from a UK symphony orchestra. In fact, there is a Digital Takeover of the Royal Festival Hall taking place between 23rd September and 2nd October 2016. What's interesting for me is the intersection of music, education and technology. If you really want to engage young people with classical music, you have to use their tools. It's a whole narrative that we are presenting: it offers someone the sight of sounds, what it feels like to be in the middle of an orchestra and to be part of its effort to bring the music to its audience.

The other project is our live broadcast of the Topshop catwalk show at London Fashion Week two years ago. It was filmed in real time at the Tate Modern and broadcast to the Topshop flagship store on Oxford Street, where customers won the chance to use VR headsets to be (remotely) present at the event.

For me, what both projects show is the power of telepresence and empathy.

4. Many people believe that VR is only for kids and/or limited to gaming - how do you see the use of VR?
Well, a lot of VR is driven by marketing at the moment, and as a point of entry, VR will be used to go after the low-hanging fruit. There is nothing wrong with that. Any successful project will have to have great content, no visible wires (the technology should be invisible), a clear purpose, an application and, ultimately, a sustainable business model. 

For example, in the property industry, if you allow clients to see 50 houses in VR, they won't make the final decision from the VR headset, but they might filter the 50 down to 20. So it will impact the bottom line. The connected thinking is not yet done, but it will come. I can see VR being used in retail too, e.g. in preparing a new product line. You can recreate the retail store in VR, reducing costs through remote presence.

5. What are the types of projects you’ve done for healthcare clients to date?
Most projects were about the visual communication of ideas, of data, or of the visual impact of drugs on people. Or at a conference, we helped showcase something interactive and engaging: for example, recreating a hospital bed with a virtual patient in it, where you can see the influence of the drugs through their body. 

Another project we did showed how it feels to have a panic attack, to help an HCP understand what a patient is going through. There are lots of implications from VR, perhaps the first technology that could help to generate more empathy for patients. We've also done work with haptic and tracking technologies. One example is our work with hospitals and university departments, where we tracked a surgical procedure, right down to finger movements, comparing the way a student performs a certain procedure against a defined standard, and thus giving them the opportunity to practise in an immersive environment.

6. What are your future ideas for the use of immersive tech?
Let's return to empathy. You can create virtual worlds that someone living with autism may be able to understand, and where they can express things. It's about really understanding what someone is going through, whether it's curing phobias or preparing soldiers to go to war.

7. In the future, do you think that doctors would prescribe a VR experience when they prescribe a new drug?
It's the power of visual communication. I don't see why we couldn't have the VR experience as THE treatment.

8. What do you think is coming in the future, above and beyond what’s here today?
Haptics? Smell? The ability to combine the physical with the virtual, where you can even smell and touch in a virtual world. An interesting experiment would be to see what happens when we are expecting one thing but in VR are given something else: how would our brain react?

I can imagine a future where we could superimpose diagnostic and procedural images onto the patient. A future where a neurosurgeon would use AR to project 3D imagery from MRIs or CT scans in real time over the brain, to be guided by the exact position of the tumour during surgery. It's only a matter of time before this is available.

9. Who will drive VR/AR adoption in healthcare?
It will be consumers, since that’s the big change we have seen this year, in terms of technology that is becoming available to the man on the street. People will become more accustomed to the tech, we can see that lots of startups are focusing on this, and in the end, I expect the NHS will be looking into this as a strategic priority.

We understand that adoption has to be research driven; there is a need for solid evidence. We are actually a technology partner in a European project called V-Time, along with Tel Aviv University, for the rehabilitation of elderly people who have had a fall. It consisted of a treadmill with the patient's feet tracked, and a big screen in front of them. They had to walk along a virtual city pavement, from time to time facing a variety of virtual obstacles they had to avoid, while the system analysed how well they were doing.

10. If a surgeon is reading this, and you wanted to inspire them to think about immersive tech in their work, what would you say?
My father was a surgeon, and he was very empathetic with his patients. He always treated them like they were part of his family, and was always taking calls at night from patients' relatives.

Imagine if, in the future, we can create immersive systems that explain what's happening: getting patients and their families more involved, explaining what will happen during the operation, the different things the surgeon can do, and how each will impact the results.

Surgeons have very limited time for this kind of explanation, and I'm confident we can use immersive technologies and visual communication to give relatives the information and reassurance they seek. If someone is presented with the option of a surgical procedure but is unsure, why can't we use VR so patients can be right there in the surgery, so that the experience helps them decide whether they actually want to go ahead? Could the immersive experience help someone get past the fear of having that operation?

11. What about VR and a visit to the GP?
We already have virtual visits over Skype, but what if we threw in haptics? You have the doctor and the patient wearing data (haptic) gloves, and in this virtual doctor's office the patient can help the doctor feel exactly what they are feeling, in terms of the location of the rash or pain: the exact SPOT. 

Or maybe a cap for the head, for when the patient wants to explain their headaches: being able to point to the exact spot where the pain is greatest. A remote physical examination in the virtual world, with haptics. 

Another scenario is when I get into my virtual environment and have all the other data coming from my Apple Watch and other biosensors, with vital signs streaming. My doctor could discuss this with me in the virtual room.

12. Which country/city in the world is leading innovation in immersive tech?
It depends upon the area. Some would assume it’s Silicon Valley. In my opinion, London is more advanced in VR/AR. Why? London is THE creative hub, and a lot of immersive tech is driven by creative industries.

The UK as a whole has a thriving creative sector, and the NHS could certainly benefit from greater cross-sector collaboration. We've worked in the past with Guy's and St Thomas', for example.

13. What would you advise people in healthcare who want to explore the world of immersive tech?
People can come and visit us and play with a variety of tools; it might not be exactly what they need, but it's a good experience. Inition's Demo Lab is a very safe and instructive "sandbox".

The Demo Lab

We can have conversations with people about these technologies; we know how to connect these things together. We're open to anyone internationally, and what drives us are projects that are going to improve people's wellbeing. What we can't do is large-scale research without getting partners involved. We can give you a lot of advice, and we can even create prototypes that can be validated through large-scale studies. We are open to conversations, whether you are a large pharmaceutical company, in charge of a medical school, or even a GP in a small practice.

Adrian Leu and Inition are both on Twitter; click here for the Inition website.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Managing our health: One conversation at a time

If you've watched movies like Iron Man, featuring virtual assistants like JARVIS, which you can simply have a conversation with to control your home, you probably think such assistants belong in the realm of science fiction. Earlier this year, Mark Zuckerberg, who runs Facebook, set himself a personal challenge to create a JARVIS-style assistant for his own home: "My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man." He may be close to his goal, as he is expected to give a demo of something this month. For those who don't have an army of engineers to help them, what can be done today? Well, one interesting piece of technology is Amazon's Echo. So what is it? Amazon describes it as "a hands-free speaker you control with your voice. Echo connects to the Alexa Voice Service (AVS) to play music, provide information, news, sports scores, weather, and more—instantly. All you have to do is ask." Designed for your home, it is plugged into the mains and connected to your wifi. It's been on sale to the general public in the USA since last summer, having originally been available to select customers in 2014.

This week, it's just been launched in the UK and Germany as well. However, I bought one from America 6 months ago, and I've been using it here in the UK every day since then. My US-spec Echo does work here in the UK, although some of the features don't work, since they were designed for the US market. I've also got the other devices that are powered by AVS: the Amazon Tap, the Dot, and the Triby, which was the first 3rd-party device to use AVS. To clarify, the Echo is the largest, has a full-size speaker and is the most expensive from Amazon ($179.99 US/£149.99 UK/179.99 Euros). The Tap is cheaper ($129.99, only in the USA) and is battery powered, so you can charge it and take it to the beach with you, but it requires that you push a button to speak to Alexa; it's not always listening like the other products. The Dot is even cheaper (now $49.99 US/£49.99 UK/59.99 Euros) and does everything the original Echo can do, except that the built-in speaker is good enough only for hearing it respond to your voice commands; if you want to use it for playing music, Amazon expect you to connect the Dot to external speakers. A useful guide comparing the differences between the Echo, Dot and Tap is here. The Triby ($199.99) is designed to be stuck on your fridge door in the kitchen. It's sold in the UK too, but only the US version comes with AVS. Amazon expect you'd have the Echo in the living room, and you'd place extra Dots in other rooms.

Using this range of products has not only given me an insight into what the future looks like, but has shown me the potential for devices like the Echo (and the underlying service, AVS) to play a role in our health. In addition, I want to share my research on the experiences of other consumers who have tried this product. There are a couple of new developments announced this week which might improve the utility of the device, which I'll cover towards the end of the post. 

Triby, Amazon's Tap, Dot & Echo

You can see in this 3-min video some of the things I use my Echo for in the morning, such as reading tweets, checking the news, the weather and my Google calendar, adding new events to my calendar, or turning my lights on. For a list of Alexa commands, this is a really useful guide. If you're curious about how it works, you don't have to buy one: you can test it out in your web browser using Echosim (you will need an Amazon account, though).

What's really fun is experimenting with the new skills [i.e. apps] that get added by 3rd parties; one of these is how my Echo is able to control devices in my smart home, such as my LifX lights. I tend to browse the Alexa website for skills and add them to my Echo that way, but you can also enable skills just by speaking to your device. At the moment, every skill is free of charge. I suspect that won't always be the case. 

Some of the skills are now part of my daily routine, as they offer high quality content and have been well designed. Amazon boast that there are now over 3,000 skills. However, the quality of the skills varies tremendously, just like app stores for other devices we already use. For example, in the Health & Fitness section, sorted by relevance, the 3rd skill listed is one called Bowel Movement Facts. 

The Echo is always on; its 7 microphones can detect your voice even if the device itself is playing music and you're speaking from across the room. It's always listening for someone to say 'Alexa' as the wake word, but there is a button to mute the Echo so it won't listen. I use Siri, but I was really impressed when I started to use my Echo: it felt quicker than Siri in answering my questions. Anna Attkisson did a 300-question test comparing her Echo vs Siri, and found that overall, Amazon's product was better. Not only does the Echo understand my London accent, but it also has no problem understanding me when I used some fake accents to ask it for my activity & sleep data from my Fitbit. I think it's really interesting that I can simply speak to a device in my home and obtain information that has been recorded by the Fitbit activity tracker I've been wearing on my wrist. It makes me wonder how we will access our health data in the future. Whilst at the moment the Echo doesn't speak unless you communicate with it, that may change in the future if push notifications are enabled. I can see it now: having spent all day sitting in meetings, and sat on my smart sofa watching my smart TV, my inactivity as recorded by my Fitbit triggers my Echo to spontaneously switch off my smart TV, switch my living room lights to maximum brightness, announce at maximum volume that I should venture outside for a 5,000-step walk, and instruct my smart sofa to adjust its recline so I'm forced to stand up. That's an extreme example, but maybe a more realistic one is that you have walked much less today than you normally do, and you end up having a conversation with your Echo because it says, "I noticed you haven't walked much today. Is everything ok?"

We still don't know the impact on society as our homes become smarter and more connected. For example, in the USA, those with GE appliances will be able to control some of them with the Echo. You'll be able to preheat the oven without even getting off your sofa. That could have immense benefits for those with limited mobility, but what about our children? If they grow up in a world where so much can be done without even having to lift a finger, let alone walk a few steps from the sofa to the kitchen, is this technology a welcome advance? If you have a Hyundai Genesis car, you can now use Alexa to control certain aspects of your car. When I read the part of the Hyundai Genesis article that said, "Being able to order basic functions by voice remotely will keep owners from having to run outside to do it themselves", it made me think about a future where we live an even more sedentary lifestyle, with implications for an already overburdened healthcare system. Perhaps having a connected home makes more sense in countries like the USA and Australia, which on average have quite large houses. Given how small the rooms in my London home are, it's far quicker for me to reach for the light switch than to issue a verbal command to my Echo (and wait for it to process the command).

Naturally, some of us would be concerned about privacy. Right now, anyone could walk into the room and, assuming they knew the right commands, quiz my Echo about my activity and sleep data. One of the things you can do in the US (and now in Europe) is order items from Amazon by speaking to your Echo, and Alex Cranz wrote a post saying, "And today it let my roommate order forty-eight Cadbury Creme Eggs on my account. Despite me not being home. Despite us having very different voices. Alexa is burrowing itself deeper and deeper into owners’ lives, giving them quick and easy access not just to Spotify and the Amazon store, but to bank accounts and to do lists. And that expanded usability also means expanded vulnerability." The post goes on to say, "In the pursuit of convenience we have to sacrifice privacy." Note that Amazon do offer the ability to modify your voice purchasing settings, so that the device asks you for a 4-digit confirmation code before placing the order; the code is NOT stored in your voice history. You can also turn off voice purchasing completely if you wish.

Matt Novak filed a FOI request to ask if the FBI had ever wiretapped an Amazon Echo. The response he got: "we can neither confirm nor deny."

If you don't have an Echo at home, how would you feel about having one? How would you feel about your children using it? One thing I've noticed is that the Echo seems to work better over time in terms of responding to my voice commands. The Echo records your voice commands in the cloud, and by analysing the history of those commands it refines its ability to serve your needs. You can delete your voice recordings, although doing so may make the Echo less accurate in future. Some Echo users whose children also use the device say their kids love it, and in fact got to grips with the device and its capabilities faster than the parents did. However, according to this Guardian article, if a child under 13 uses an Echo, it is likely to contravene the US Children’s Online Privacy Protection Act (COPPA). This doesn't appear to have put households off installing an Echo in the USA, as research suggests Amazon have managed to sell 3 million devices. Another estimate puts the installed user base significantly lower, at 1.6 million. Either way, in the realm of home-based virtual assistants, Amazon are ahead, and probably want to extend that lead, with reports that they want to sell 10 million of these speakers in 2017. 

Can the Echo help your child's health? Well, a skill called KidsMD was released in March that allows parents to seek advice provided by Boston Children's Hospital. After the launch, their Chief Innovation Officer, John Brownstein, said, "We’re trying to extend the know-how of the hospital beyond the walls of the hospital, through digital, and this is one of a few steps we’ve made in that space." I tested KidsMD back in April, and you can see what it's like to use in this 3-minute video. What I find fascinating is that I'm getting access to validated health information, tailored to my situation, simply by having a conversation with an internet-connected speaker in my home. Of course, the conversation is fairly basic for now, but the pace of change means it won't be rudimentary forever. 

I was thinking about the news last week here in the UK, where it was announced that the NHS will launch a new website for patients in 2017. My first thought was: what if you're a patient who doesn't want to use a website, or for whatever reason can't use one? If the Echo (and others like it) launch in the UK, why couldn't this device be one of the digital channels you use to interface with the NHS? Some of us at a grassroots level are already thinking about what could be done, and I wonder if anyone in the NHS has been formally testing an Echo to see how it might be of use in the future? 

The average consumer is already innovating with the Echo themselves; they aren't waiting years for the 'system' to innovate. They are conducting their own experiments, buying these new products with their own money. One man in the USA has used the Echo to help him care for his aging mother, who lives in a different location from him. 

In this post, a volunteer at a hospice asks the Reddit community for input on what the Echo could be useful for with patients. 

How about Rick Phelps, diagnosed back in 2010, at the age of 57, with Early Onset Alzheimer's Disease, and now an advocate for dementia awareness? Back in February, he wrote about his experience of using the Echo for a week. What does he use it for? To find out what day it is, because dementia means he no longer knows.

For many of us, consumer-grade technology such as the Echo will be perceived as a gimmick, a toy, of limited or no value with respect to our health. I was struck by what Rick wrote in his post: "To many, the Amazon Echo is a cool thing to have. Some what of a just another electronic gadget. But to a dementia patient it is much, much more than that. It has afforded me something that I have lost. Memory. I can ask Alexia anything and I get the answer instantly. And I can ask it what day it is twenty times a day and I will still get the same correct answer." Rick also highlights how he used the Echo to set medication reminders.

I have to admit, the Echo is still quite clunky, but the original iPhone was clunky too, and the 1st generation of every new type of technology is usually clunky. For people like Rick, it's good enough to make a difference to the outcomes that matter to him in his daily life, even if others are more skeptical. 

Speaking of medication reminders, there was a 10-day PYMNTS/Alexa challenge this year, using Alexa "to reimagine how consumers interact with their payments and financial services solutions providers." What I find fascinating is that the winner, DaVincian Healthcare, created something called DaVincianRX, an "interactive prescription, communication, and coordination companion designed to improve medication adherence while keeping family caregivers in the loop." You can read more and watch their video of it in action here. People and organisations constantly ask me where to look for innovation and new ideas; I always remind them to look outside of healthcare. From a health perspective, most of the use cases I've seen so far involving the Echo are for older members of society or those who care for them. 

I came across a skill called Marvee, described as "a voice-initiated concierge application integrated with the Alexa Voice Service and any Alexa-enabled device, like the Amazon Echo, Dot or Tap." Most of the reviews seem to be positive. It's refreshing to see a skill that is purpose-built to help those with challenges that are often ignored by the technology sector. 

In the shift towards self-care, when you retire or get diagnosed with a long-term condition for the first time, will you be getting a prescription for an Amazon Echo (or equivalent)? Who is going to pay for the Echo and related services? Whilst we have real-world evidence that the Echo is making a positive impact on people's lives, I haven't been able to find any published studies testing the Echo in the context of health. That's a gap in knowledge, and I hope there are researchers out there conducting that research. Like any product, the Echo will bring risks as well as benefits, and we need to be able to quantify both now, not in 5 years' time. Earlier I cited how Rick, who lives with Alzheimer's Disease, finds the Echo beneficial, but for other people like Rick, using the Echo might lead to harm rather than benefit. We don't know yet. However, not every application of the Echo will require a double-blinded randomised clinical trial. If I can already use my Echo to order an Uber, or check my bank balance, why can't I use it to book an appointment with my doctor?

In the earlier use case, a son looked through the data from his mother's usage of her Echo to spot signs that something was wrong. Surely Amazon could parse that data for you and automatically alert you (or any interested person) that there could be an issue? Allegedly, Amazon is working on improvements to the service, so that Alexa could one day recognise our emotions and respond accordingly. I believe our voice data is going to play an increasing role in improving our health; it's going to be a new source of value. At an event in San Francisco recently, I met Beyond Verbal, an emotions analytics company doing some really pioneering work. We have already seen the emergence of the Parkinson's Voice Initiative, looking to test for symptoms using voice recordings.

How might a device like the Echo contribute to drug safety? Imagine it reminds you to take your medication, and in the conversation you reply that you're skipping this dose, and it asks you why. In that conversation, you have the opportunity to say, in your own words, why you skipped that dose. Throw in the ability to analyse your emotions during that conversation, and you have a whole new world of insights on the horizon. Some of us might be under the impression that real-world data is limited to data posted on social media or online forums, but our voice recordings are real-world data too. When we reach the point where we can weave all this real-world data together to get a deeper understanding of our health, we will be able to do things we never thought possible. Naturally, there are immense practical challenges on that pathway, but progress is being made every day. Having all of this data from all of these sources is great, but even if it's freely available, it needs to be linked together to truly make a difference. Researchers in the UK have demonstrated that it's feasible to use consumer-grade technology such as the Apple Watch to accurately monitor brain health. How about linking the data from my Apple Watch with the voice data from my Amazon Echo and my electronic health record?
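As a thought experiment, the skip-dose conversation above could be modelled as a tiny dialog state machine. To be clear, everything in this sketch is hypothetical: the state names, the adherence log and the wording are invented for illustration, and this is not how any real Alexa skill or pharma product works.

```python
# Hypothetical sketch of the skip-dose conversation described above.
# The states, the adherence log and the wording are all invented for
# illustration; this is not a real Alexa skill.

adherence_log = []  # in a real skill this would be persisted securely

def handle_dose_reminder_reply(session_state, utterance):
    """Advance the reminder conversation by one turn.

    Returns the next session state and what the assistant should say.
    """
    if session_state == "awaiting_confirmation":
        if "skip" in utterance.lower():
            # The patient is skipping the dose - ask why, in their own words.
            return "awaiting_reason", "OK. Can you tell me why you're skipping this dose?"
        adherence_log.append({"taken": True, "reason": None})
        return "done", "Great, I've noted that you took your medication."
    if session_state == "awaiting_reason":
        # Free-text reason: the kind of real-world voice data discussed above.
        adherence_log.append({"taken": False, "reason": utterance})
        return "done", "Thanks, I've recorded that for your care team."
    return "done", "Goodbye."
```

The interesting part is the second turn: the free-text reason ("it makes me feel dizzy") is exactly the kind of real-world, in-their-own-words data the paragraph above describes, and it would be lost in a simple yes/no reminder.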

An Israeli startup, Cordio Medical, has come up with a smartphone app for patients with Congestive Heart Failure (CHF) that captures voice data, analyses it in real time, and "detects early build-up of fluids in the patient’s lung before the appearance of physical symptoms"; deviations found in the voice data trigger an alert, and "These alerts permit home- or clinic-based medical intervention that could prevent hospitalisation." For those CHF patients without smartphones, could they simply use an Echo at home with a Cordio skill? Or could Amazon offer the voice data directly to organisations like Cordio for remote monitoring (with the patient's consent)? If Amazon (or their rivals) continue to grow the user base of devices like the Echo over the next 10 years, they could have an extremely valuable source of unique voice-based health data covering the entire population. 

Amazon has made surprisingly good progress with the Echo as a virtual assistant. However, other tech giants are looking to launch their own products and services, for example Google Home, which is due to arrive later this year. This short video shows what it will be able to do. Now, Google plays a much larger role in my daily life than Amazon in terms of core services: I use Google for email, for search, for my calendar, and maps for navigation. So Google Home might prove vastly superior to the Echo simply because of its integration with those core services I already use. We'll have to wait and see. The battle to be a fundamental part of your home is just beginning, it seems. 

The battle to be embedded in every aspect of our lives will extend beyond the home, perhaps into our cars. I tested the Amazon Dot in my car, and I reckon it's only a matter of time before we see new cars on sale with these virtual assistants built into the car's systems, instead of being an add-on. We already have new cars coming with 4G internet connectivity, offering wifi for your devices, from brands like Chevrolet in the USA. 

For when we are on the move, and not in our car or home, maybe we'll all have earphones like the new Apple AirPods, through which we can discreetly ask our virtual assistants to control the objects and devices around us. Perhaps Sony's Xperia Ear, which launches in November and is powered by something called Sony Agent (which could be similar to Amazon's AVS), is what we will be wearing in our ears? Or maybe none of these big tech firms will win the battle? Maybe it will be one of us, or one of our kids, who comes up with the virtual assistant that rules the roost? I'm incredibly inspired after watching this video where a 7-year-old girl and her father built their own Amazon Echo using a Raspberry Pi. This line in the video's description stood out to me: "She did all the programming following the instructions on the Amazon Github repository." Next time there is a health hackathon, do we simply invite a bunch of 7-year-old kids and give them the space to dream up new solutions to problems that we as adults have created? Or maybe it should be a hackathon that invites 7-year-olds with their grandparents? Or a hackathon where older adults are invited to co-design Alexa skills with younger people? We don't just have financial deficits in health & social care; we have a deficit of imagination. Amazon have a programming tutorial where you can build a trivia skill for Alexa in under an hour. When it comes to our health, do we wait for providers to develop new Alexa skills, or will consumers come together and build the Alexa skills their community would benefit from, even if that community happens to be people scattered around the world who are all living with the same rare disease?
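To make the build-a-skill point concrete: under the hood, an Alexa skill is just code (often an AWS Lambda function) that receives a JSON request from the Alexa service and returns a JSON response envelope. The sketch below shows that bare request/response shape for a custom skill; the skill itself, a single made-up "WhatDayIsItIntent" echoing Rick's use case, is invented for illustration and is not an existing skill.

```python
import datetime

def build_response(speech_text, end_session=True):
    """Wrap plain text in the JSON envelope the Alexa service expects back."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context=None):
    """Entry point that Alexa would invoke with the request JSON."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User said "Alexa, open <skill name>" with no specific question.
        return build_response("Hello! Ask me what day it is.", end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "WhatDayIsItIntent":  # made-up intent
            today = datetime.date.today().strftime("%A")
            return build_response(f"Today is {today}.")
    return build_response("Sorry, I didn't catch that.")
```

A community skill for a rare-disease group would differ only in its intents and content; the plumbing stays this simple, which is why a 7-year-old following a tutorial can get one working.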

You'll have noticed that in this post, I haven't delved into the convergence of technologies that have enabled something like the Echo to work so well. This was deliberate on this occasion. At present, I'm really interested in how virtual assistants like the Echo make you feel, rather than the technical details of the algorithm being used to recognise my voice. For someone living far away from their aging parents or grandparents, does the Echo make you feel reassured? For someone living alone and feeling socially isolated, does the Echo make you feel less alone? For a young child, does it make you feel like you can do magic, controlling other devices just with your voice? For someone considering moving out of their own home into an institution, does the Echo make you feel independent again? If more and more services are becoming digital by default, how many of these services will be available just by having a conversation? I am using my phone & laptop less since I've had my Echo, but I'm not yet convinced that virtual assistants will one day eliminate the need for a smartphone. Some of us, however, are convinced: 50% of urban smartphone owners around the world believe that smartphones will no longer be needed in 5 years' time. That's one of the findings from Ericsson Consumer Lab, which quizzed smartphone users in 13 cities around the globe last year, in a survey meant to represent the views of 68 million urban citizens. They also found, "Furthermore, a third would even rather trust the fidelity of an AI interface than a human for sensitive matters. 29 percent agree they would feel more comfortable discussing their medical condition with an AI system." I personally think the consumer trends identified have deep implications for the nature of our interactions with respect to our health. 
Far too many organisations are clinging on to the view that the only (and best) way for us to interact with health services is face to face, in a healthcare facility, with a human being. While these virtual assistants don't need a smartphone with a data plan, they do need fixed broadband at home. However, looking at OECD data from December 2015, fixed broadband penetration is rather low: the UK is not even at 40%, so products such as the Echo may not be accessible to many across the nation who might find them beneficial with regard to their health. This is an immense challenge, and one that will need joined-up thinking, as we need everyone included in this digital revolution.

You might be thinking right now that building a virtual assistant is your next startup idea, that it's going to be how you make an impact on society, how you change healthcare. Alas, it's not as easy as we first thought. Cast your mind back to 2014, the same year that the Echo first became available. I was one of the early adopters who pledged $499 for the world's first social robot, Jibo [consider it a cuter or creepier version of the Echo with a few extra features]. They raised almost $4 million from people like me, curious to explore this new era. Like the Echo, you are meant to be able to talk to Jibo from anywhere in the room, and it will act upon your command. The release got delayed and delayed, and then recently I got an email informing me that the folks behind Jibo have decided that they won't be shipping Jibo to backers outside of the USA, and I was offered a full refund.

One of the reasons they cited was, "we learned operating servers from the US creates performance latency issues; from a voice-recognition perspective, those servers in the US will create more issues with Jibo’s ability to understand accented English than we view as acceptable." How bizarre; my US-spec Echo understands my London accent, and even my fake ones! It took the makers of Jibo 2 years to figure this out, and this from people at the prestigious MIT Media Lab. So just how much effort does it take to make something like the Echo? A rather large amount, it seems. According to Jeff Bezos, the CEO of Amazon, they have over 1,000 people working on this new ecosystem. A very useful read is the real story behind the Echo, explaining in detail how it was invented. Apparently, the reason the Echo was not launched outside of America until now was so it could handle all the different accents. So, if you really want to do a hardware startup, then one opportunity is to work on improving the digital microphones found not just in the Echo, but in our smartphones too. Alternatively, Amazon even have an Alexa Fund, with $100m in funding for companies looking to "fuel voice technology innovation." Amazon must really believe that this is the computing platform of the future. 

Moving on to this week's news, the UK Echo will have UK partners such as the Guardian, Telegraph & National Rail. I use the train frequently from my home into central London, and the station is a 15-minute walk from my house, so checking whether the train is delayed or cancelled before I head out of the front door is one of the UK-specific skills I'm most likely to use. Far easier and quicker than pulling out my phone and opening an app. The UK version will also have a British accent. If you have more than one Echo device at home and speak a command, chances are that two or more of your devices will hear you and respond accordingly, which is not good, especially if you're placing an order with Amazon. So now they have updated the software with ESP (Echo Spatial Perception): when you talk to your Echo device, only the closest one to you will respond. It's being rolled out to existing Echo devices, so there's no need to upgrade. You might want to though, as there is a new version of the Echo Dot (US, UK & Germany), which is cheaper, thinner, lighter and promises better voice recognition than the original model. For those who want an Echo in every room, you can now buy Dots in 6 or 12 packs! In the UK, given that the Echo Dot is just £49.99, I expect many people will be receiving them as presents this Christmas. 

Amazon's Alexa Voice Service is one example of a conversational user interface, and at times it's like magic, while at other times it's infuriatingly clumsy. I'm mindful that my conversations with my Echo are nowhere near as sophisticated as conversations I have with humans. For example, if I say "Alexa, set a reminder to take my medication at 6pm" and it does that, and then I immediately say "Alexa, set a reminder to take my medication at 6.05pm", and so forth, it currently won't say, "Are you sure? You just set a medication reminder close to that time already." Some parents are concerned that the use of an Echo by their kids is training them to be rude, because they can throw requests at Alexa, even in an aggressive tone of voice, with no please and no thank you, and Alexa will always comply. Are these virtual assistants going to become our companions? Busy parents telling their kids to do their homework with Alexa, or lonely elders who find that Alexa becomes their new friend in helping them cope with social isolation? Will we end up with bathroom mirrors we can have conversations with about the state of our skin? Are we ever going to feel comfortable discussing the colour of our urine with the toilet in our bathroom? When you grab your medication box out of the cupboard, do you want to discuss the impact on your mood after a week of taking a new anti-depressant?
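
The missing sanity check I describe above is simple to express in code. Here is a hypothetical sketch, not any real Amazon API, of the logic a medication-reminder skill could run before silently accepting a new reminder: flag the request if it falls within a few minutes of a reminder that already exists.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the duplicate-reminder check I wish Alexa performed.
# Times are "HH:MM" strings; nothing here is a real Amazon API.

def needs_confirmation(existing_times, new_time, window_minutes=15):
    """Return True if new_time falls within window_minutes of any existing reminder."""
    new_dt = datetime.strptime(new_time, "%H:%M")
    for t in existing_times:
        existing_dt = datetime.strptime(t, "%H:%M")
        if abs(new_dt - existing_dt) <= timedelta(minutes=window_minutes):
            return True
    return False

# A skill could then ask a clarifying question instead of blindly complying:
if needs_confirmation(["18:00"], "18:05"):
    print("Are you sure? You just set a medication reminder close to that time already.")
```

This sketch ignores edge cases such as reminders either side of midnight, but it shows how little logic would be needed for the assistant to push back with a question rather than always comply.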

Could having conversations with our homes help us to manage our health? It seems like a concept from a science fiction movie, but to me, the potential is definitely there. The average consumer will have greater opportunities to connect their home to the internet in years to come. Brian Cooley asks in this post whether our home will become the biggest health device of all.

A thought provoking read is a new report by Plextek examining the changes in the medical industry by 2020 from connected homes. I want you to pause for a moment when reading their vision, "The connected home will be a major enabler in helping the NHS to replace certain healthcare services, freeing up beds for just the most serious cases and easing the pressure on GP surgeries and A&E departments. It will empower patients with long-standing health conditions who spend their life in and out of hospitals undertaking tests, monitoring, rehabilitation or therapy, and give them freedom to care for themselves in a safe way."

Personally, I believe the biggest barrier to making this vision a reality is us, i.e. the people and organisations that don't normally work together will have to collaborate in order to make connected homes seamless, reliable and cost effective. Think of all the people, policies & processes involved in designing, installing, regulating, and maintaining a connected home that will attempt to replace some healthcare services. That's before we even think about who will be picking up the tab for these connected homes.

Do you believe the Echo is a very small step on the path towards replacing healthcare services, one conversation at a time?

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Immersive Health: Are we ready?

That's the question that I've been reflecting upon over the last 12 months. Some of you may have noticed that in 2016, there is much more news, discussion and excitement around Virtual Reality (VR) technology. Every other day there is some new announcement, and more and more people believe that it could play a greater role in our future. Google has recently extended its foray into mobile VR beyond its Cardboard initiative with the announcement of its new Daydream platform. VR itself has been around for a while now, and I remember reading about the concept of VR when I took my first Computer Science class 29 years ago! 

Life is in 360, so why shouldn't our experiences be in 360 too? What really caught my attention was a tweet by Susannah Fox from TED 2015, after she had watched Chris Milk's talk, "How Virtual Reality can create the ultimate empathy machine", describing how it had impressed her. This really piqued my curiosity with respect to VR and its applications, as when we think of VR, we often associate it with computer games made simply for entertainment. I was skeptical that putting on a VR headset could generate empathy for others. In his TED talk, Milk showed the 'Clouds over Sidra' VR experience he created, and after viewing it in VR, I was very surprised at how it made me feel. 

I was also inspired when attending the Body Computing Conference last autumn at USC, where Dr Leslie Saxon announced their new Virtual Care Clinic as well as announcing the winners of their VR Medical Hackathon. In fact, USC's Institute for Creative Technologies is one of the original pioneers when it comes to VR in healthcare as you can see in this short video.

In my quest to understand the future, I started to purchase many of these new devices as soon as they came onto the market. I believe it's important to try new hardware and software for more than a day or two in order to determine what it's like to live with the technology. I purchased a Samsung Gear VR, and ended up using it to demo VR experiences at the world's 1st Pop-Up Museum of Happiness in London at the start of 2016. I offered attendees the chance to experience guided meditation at the beach, snowboarding or diving in the ocean with the whales.

With Sam Cookney, we helped people attending the pop-up Museum of Happiness in London experience Virtual Reality. This man tried a 360 video using a Samsung Gear VR, which took him on a helicopter ride in the mountains, followed by snowboarding. Hear his immediate reaction to the experience.

It was fascinating to see the range of reactions to the Gear VR, some thought it was terrible, and others enjoyed meditating on the beach so much, they didn't want to take the headset off and come back to the real world. Seeing some people smiling and laughing after a few minutes with a headset with a smartphone inside of it compelled me to keep exploring the potential uses of this technology. For example, given aging populations, how do we immerse ourselves in the world of someone aged 85, who lives alone and has multiple long term conditions? In Australia, a Virtual Dementia Experience has been developed, which "is an immersive, interactive virtual reality experience that invades the senses and takes people into the world of a person living with dementia, simulating thoughts, fears and challenges." That's already here today, so what might we do in the future?

It's not just about consuming VR content but creating it too. It's now possible to buy a 360 camera, record your own 360 video, and upload it to YouTube & Facebook which both support 360 videos. You can then share the video, and whoever views it can watch it on their computer, their smartphone or even using a VR headset. Personally, I suggest using a smartphone whilst connected to wifi or a VR headset if you have access to one. Dr Shafi Ahmed recently made history by performing a cancer surgery in London, live streamed using 360 cameras located in the operating theatre, which allowed people around the globe to watch the surgery up close and personal in Virtual Reality. 

Sometimes, there are unexpected findings associated with the use of emerging technology such as a 360 camera. Many of us might dismiss it as a gimmick for everyday use. We can't assume how this tech will or will not impact lives. We often just have to get out there and try something new, even if we don't know what to expect. Our sense of wonder and curiosity takes us towards new horizons. I was delighted to read Molly Watt's post on how using a 360 camera has helped her see the world differently by taking 360 images, despite losing her peripheral vision a few years ago. If you haven't done so already, do read my last post, which was an interview with Molly on how she is putting Usher Syndrome on the map. If you ask me for a list of the top 10 people who influence my thinking about the future of technology, Molly would definitely be on that list. I firmly believe that citizens should have the freedom to find or even make their own solutions. Couple that mindset with advances in technology available to consumers, and we are heading for a world where our children will attend school and assume that 'invention literacy' was always part of the curriculum. 

This year, I've used telemedicine services in the UK where I had a video call with a doctor using my tablet. Will my telemedicine visit with the doctor in 2020 be in Virtual Reality? What if the doctor could visit me virtually in my own home, and by seeing my home environment in 360 degrees, be able to pick up social and environmental cues that could help them make a more accurate diagnosis? Or what if taking a selfie with a 360 camera allowed researchers looking at smoking cessation programs to understand smoking triggers in someone's social and physical environment with one 360 image? That's a core element of a pitch from a team I was part of at a recent Cancer Research UK Innovation Workshop that led to us winning an award for funding to conduct a pilot study. The 360 image you see embedded below was taken moments after we won. You can move around the entire image and immerse yourself in that moment, much more than a regular image. 

Our team just got awarded funding at @crukresearch #innovation workshop #cancerprevention - Spherical Image - RICOH THETA

I recently gave a talk at Health 2.0 Amsterdam called 'Immersive Health: Are we ready?' and I used my Ricoh Theta S 360 camera to record it. The resolution of the video could be better, which is why I've also gone out and purchased 4K 360 cameras such as the Insta360 and the Kodak PixPro SP360. Our existing infrastructure is usually not quite ready to cope with these new technologies. For example, I tried embedding the 360 video in this post, but it displayed as a regular video rather than a 360 one. Hence, I had to insert links to my 360 videos throughout this post, which you have to click to launch YouTube, where you can have the full 360 video experience. Now, when you watch the 360 video of my talk in Amsterdam on your phone, move the phone around and you'll change the position of the video! 

2016 has also seen the launch of two long awaited VR headsets, one called the Oculus Rift and the other is the HTC Vive. These are the most advanced products consumers can buy today, and I've bought both of them. Whilst they deliver an immersive experience that is unparalleled by any other technology available to consumers, there are a few drawbacks. The first of which is price. I paid almost £800 for the HTC Vive, and I needed to buy a rather high end gaming PC to use it, which was another £1,400. The Oculus Rift was cheaper at £529, and I've opted to get a VR ready laptop for use with the Rift, which is an eye watering £2,200. So, this level of VR tech is not affordable to the masses yet, but neither were the original mobile phones when they were first launched.

You may wonder, when it comes to making us healthier and happier, where is the value in these expensive VR systems? Well, hospitals are starting to experiment with these advanced products. For example, C.S. Mott Children's Hospital in the USA has been using the Oculus Rift with sick children who were stuck on a hospital ward for a length of time.

There is a compelling read on Reddit, entitled "Oculus Rift saved my sanity while stuck in the hospital, Thanks!" This story was written by a patient in the USA, who upon facing being stuck on the top floor of the hospital for treatment decided to bring in his own system to the hospital. The part that stood out to me the most is "All from my top floor prison of a floor I couldn't leave. This was my getaway, for the rest of the stay. Teleporting me away from the sterile, dry and bland existence that was my hospital room. I was playing my flight simulators and space simulators, racing cars and playing FPS. Life was good again." Granted, this isn't a clinical trial, but it's a positive outcome, their own success story, and a brilliant example of patients as innovators. 

Researchers have been testing VR for some time now; most of the proven use cases seem to be in exposure therapy. For example, 12 years ago, this study looked at the use of VR and computer games as exposure therapy for people with a fear of driving after a car accident. In this study from 16 years ago, researchers looked at the use of VR to overcome fear of flying. What else could we achieve now, given that VR technology has evolved and we have an array of VR products, from cheap to expensive, available in the consumer market? There is also the expected hype around VR that we have to navigate, just like the hype that still surrounds wearables, big data, AI, etc. Finding the signal within the noise won't be easy. We can't just jump into working with this technology because it's new and shiny; we all have to operate with finite resources, no matter how big an organisation we work in.

There are so many questions to answer before we can even begin to explore this arena. What can VR replace or augment? What can we do with VR that we never thought was possible before? Will we be prescribed a VR experience by our doctor alongside our medication so we can better understand the benefits of adhering to our treatment plan? What does the ongoing refinement of VR tech mean for medical education? What are the long term risks of using these headsets for extended periods of time? Is the use of VR going to isolate us or immerse us? Who can actually afford to use VR?

That's why I decided to launch my VR for Health & Social Care workshop in London. Since I've invested in many of the latest devices, tested them myself, reviewed the scientific literature to understand what's been studied so far, researched future trends in VR and thought about how they might be used across Health & Social Care, why not blend all of that together into an interactive learning experience for people who want to be able to make informed decisions about VR? These devices, whether they be the headsets or the cameras, are best experienced with your own eyes. In my workshop, one of the things you'll be learning is how to take, process and upload 360 videos, and then view the content you create in Virtual Reality! Unlocking creativity is key, and I really do believe that the creative industries will have a much larger role to play when it comes to improving our health in the future. There is a deficit of imagination when it comes to new ideas and inventions today, and collectively we must be bolder in imagining the world we want to create for our children, and our grandchildren. 

For some of you, VR may turn out to be something you want to utilise immediately, for others it may still have too many limitations to be of value until the technology and related services evolve. To that end, I've worked with experts such as Dr Keith Grimes for clinical input and Shirley Ayres for her depth of experience in the social sector, when designing the workshop to ensure that you'll walk away with knowledge that you can apply immediately in your own work. 

The workshops are 4 hours in duration, and are held on selected Mondays, Wednesdays and Fridays in London during June and July. I've deliberately limited each workshop to 4 attendees, as I want to maximise the learning opportunity for each of you. I've been to conferences where there are long lines just to try out the latest VR systems for a couple of minutes, and if you have an unexpected reaction to VR, you're in a public place where everyone can see. 

The workshop has the same content, experience and devices across the 23 dates, so you can simply choose a date that's convenient for you. I've already received requests to run the workshops in other parts of the UK, if that's something you'd like, please contact me to discuss.

You can read more about the workshop and book your ticket here

Finally, I've made another 360 video when I was preparing for the workshop yesterday at the venue. You can take a look around the classroom as well as see the devices you'll be getting to use during the workshop. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above, apart from Dr Keith Grimes and Shirley Ayres who I have hired to consult on the design of my workshop]


An interview with Molly Watt: Putting Usher Syndrome on the map

For this post, I wanted to share Molly Watt’s story. I first came across Molly in 2015, after I read her Apple Watch post. I had also just received my Apple Watch and was curious about other people’s experiences. What’s different about Molly is that she has Usher syndrome, a rare genetic disorder caused by a mutation in any one of at least 11 genes, resulting in a combination of hearing loss and visual impairment. It is the most common cause of congenital deafblindness (among deafblind people overall, the elderly are the biggest group), and is incurable at present. Usher syndrome hasn’t held Molly back; she’s even set up her own charity, the Molly Watt Trust, and much more. When reading each of her subsequent blog posts, I found her writing creative, courageous and candid, and that resonated with me. In fact, it resonated so strongly that I decided to visit Molly in her home town of Maidenhead, England to interview her. It’s a longer interview than I would normally post, but we have so much to learn from Molly (and others like her) that I was compelled to include as much as possible from her answers. Listening to Molly was also a powerful reminder that we often focus so much on ‘empowering’ or ‘activating’ or ‘engaging’ patients themselves that we ignore the patient’s family and friends, who play a very critical role. I feel there are so many voices currently not heard; do we need to change the way we listen?

The image below is a 360 image from my interview with Molly. 

Post from RICOH THETA. - Spherical Image - RICOH THETA

1. You recently had a meeting at Apple's HQ in America to share your views on accessibility. Can you tell us more about how you ended up there?
The main thing was the Apple watch blog post that I wrote, and through the charity, where I discuss how we can access things through tech. I have been an Apple user since my diagnosis 10 years ago; I eventually had a Mac in education to access exam papers. So when the Apple watch came out, I was unsure what it could offer, and I bought one out of curiosity, thinking I would probably return it. Accessibility has been the cornerstone of my life since the diagnosis, and we got my website set up after Xmas 2014, and my mum encouraged me to blog, so I did.

This was the first personal blog post I wrote. I think the timing of it was shortly after the launch of the Apple watch, when there was a lot of bad press saying it was a toy, but I found from the perspective of sensory impairment, it opened a lot of doors for me to be more independent. I never missed a phone call because of the prominent haptics, the digital touch features were really beneficial socially when out with friends, and maps was a big feature for me, navigating from A to B with the watch. When I’m out I’d rather have my phone in my bag, as it’s much safer for me, and the watch enables me to do that. My post generated quite a lot of positive reviews of the Apple watch. All of that is how, after a few months, Philip W. Schiller, the senior vice president of worldwide marketing at Apple, retweeted me and my website crashed due to so many hits. From there onwards, a lot of people contacted the trust. We have to remember that a lot of people can’t afford the watch; many with Usher syndrome are shuffling between jobs.

Apple had reached out to the trust to speak with me, and since we were going on a family holiday to California, they said, well come and visit us. They were genuinely interested in hearing my story, and to understand how technology can enable accessibility much more in the future. As a family, we travel as much as possible, because my sight may completely go at any time. I wrote a post about my trip to California.

2. Many people laugh at products such as the Apple watch calling it a toy or not seeing any value in using it, but it has been of value in your life. What can be done to get people looking at all possible uses of new technology?
I wasn't sure about the Apple watch; I couldn't really understand what it would do for me over the iPhone I have relied on for years. My decision to purchase it was last minute really, as a couple of my friends were getting one. It's great my friends did, as I might not have got one myself, and we were able to explore the features of the watch together. However, after a little fiddling around, with my insight into my real need for accessibility, I was able to really put it to the test. I believe people give the Apple watch a bit of a hard time because they don't use it in the way somebody like myself does. I have learnt how to use it to make a real difference to my life, and it’s a brilliant piece of equipment I have come to rely on. I guess because I rely on technology, I have become an expert in my own way of accessing it!

3. You've got your own charity and you've spoken at places such as the Houses of Parliament and Harvard Medical School. When you were younger, did you envisage you would reach these heights?
I had no idea. I think my own struggles have made me feel passionate about making a difference, to raise awareness of ability as much as disability, share my experiences good and bad and demonstrate the importance of accessible, assistive technology. I am definitely not the person I was since being diagnosed with Usher syndrome.  

Being deaf is very different, it is not rare.  It is however challenging and there needs to be support and assistive technology.  In the area I live, support of the deaf was excellent. Usher Syndrome diagnosis brought confusion and inexperience of supporting somebody with the condition particularly in school - my education became a nightmare as I couldn't access the curriculum without modification and nobody knew what they were doing.

At my real time of need the only people I could rely on were my parents who continued to battle for me even though they also did not really understand what I was going through. I definitely get my determination and drive from them.

I've been speaking since I was 14 and making awareness videos.  It was my way of telling people what I was going through, how I felt and what I felt I needed by way of support. That's how it all began and as the years have gone by I have found my work public speaking a very useful skill and a way of reaching the larger audience with the many messages I have.

4. When it comes to accessibility and new technology, what's missing? What are the 3 top inventions that you'd like to see come in the next few years?
This is a hard question as I'm not an expert on what is possible. I believe design, the design of everything, can be improved by the inclusion of accessibility from day one. The obvious things, like all websites, need to be completely accessible. I rely on this sort of thing; picking up a book isn't an option. Places like hotels often have terrible design, with decor, carpets & wallpapers clashing and the poorest lighting. Most public places are difficult.

I cannot wait for the driverless car to be available to people like me, I'm sad I'll never experience driving but excited to think this technology is on the horizon.

5. For those living with Usher Syndrome, do they feel like their wants & needs are being heard? If not, what could we do to be better listeners?
Definitely not, there is a lack of understanding and awareness.  Usher syndrome is the most common cause of congenital deafblindness and few are experienced in dealing with it hence few get what they need. Life is a constant battle.

I'm sad that people with Usher Syndrome struggle to be understood and often live isolated lives.
Many do not work, do not socialise, and do not have access to enabling technology to allow them access to social media, and if they did, they would need help in learning how to use the technology. Some use sign language, which again can be isolating and can cause difficulty getting employment, as communication support is often needed and hard to access as cuts to Access to Work continue. I think professionals should encourage people like myself to be vocal about their needs, and should listen and take onboard their thoughts and feelings. All too often people have tried to speak for me, and it is not acceptable. Encouragement from the point of diagnosis is important.  

I'm fortunate my parents have always encouraged me to speak up.

6. I understand you've faced many challenges when dealing with the NHS, schools and charities/support groups, can you tell us a bit more about what happened? 
I'll answer this one at a time:

The NHS were good with my deafness diagnosis when I was little, up to my Usher diagnosis; thereafter it has been a different story. Sadly, audiologists, who are often the first point of contact, either know of the condition but have not treated anybody with it, or worse, know nothing about it. Either way it is not helpful to the patient and needs to change. It is the same with ophthalmologists, who know about eye conditions but not much about deafblindness. Whilst conditions are rare, there has to be professionalism in dealing with all conditions. An example of not having a decent understanding is my NHS audiologist, who has known me since I was very young and has monitored my hearing with regular tests, the results of which are followed up in writing. It would be great if I could read those results, which were completely inaccessible until I pointed out that they were completely unaware of my accessibility issues. Not a thought about how I am able to access information in font 10/12, black text on white paper!

Equally, I have sat at Moorfields Eye Hospital during an appointment and been spoken to whilst the Professor looked at his computer screen - everybody knows deaf people need to see faces to lipread and to read facial expressions, even those of us with very little sight. These things should be obvious!

My experience of mainstream school was excellent whilst my only diagnosis was deafness; there was great support.
Again, after my Usher syndrome diagnosis there was a lot of confusion. I was given the support of a VI teacher as well as my teacher of the deaf, but neither had supported somebody like myself before.
A multi-sensory teacher had to be "bought in" from a charity, and yes, she understood the condition, but with only one visit a term to educate both those supporting me and myself, things did not go the way they should have. This resulted in me struggling to deal with what was happening to me; I felt a burden and looked to move schools - my biggest mistake ever.

I thought that by going to a private school for the deaf, one supposedly familiar with my condition, I'd be with people like myself! I couldn't have been more wrong. The deaf kids were cruel: they questioned my deafness because I have good speech, and questioned my blindness because I appeared to see. The staff were just as bad. I boarded initially and spent hours in my dorm, as I physically couldn't get from dorm to dining hall in the dark; nobody noticed or cared. Teachers didn't modify my reading material, and when they did it would be on A3 paper, making me feel very different. I struggled for 2 years trying to deal with my failing sight, staying in denial as it often seemed easier that way, surrounded by deaf kids telling me I was fine - it was hell.

It was made worse when I got my guide dog, who did enable me to get from A to B safely. Then I was denied access to all social areas, as my need to get from A to B was deemed less important than the need of a younger boy with a dog allergy to move freely around the school.

I left with depression and a nervous breakdown at 17 years old.

That school claimed to know all there was to know about Usher syndrome - in reality they knew little, and I was treated very badly.

Charities:  
Sense is the main deafblind charity. They cover and support all types of deafblindness, including deafblind people with additional needs, from the very young to the very old and everything in between, and they are great at campaigning. However, I do feel people with Usher Syndrome often miss out, and that's why we set up the Molly Watt Trust.

My family travel to the USA to find out information about Usher Syndrome; in the 10 years since I was diagnosed there has not been a single Usher-specific conference here, yet there have been several for other types of deafblindness, even though Usher Syndrome is the most common cause of congenital deafblindness.
Sense does a great job, but there is little specifically for those with Usher syndrome. I am an Ambassador for Sense and always happy to help and work alongside them on any Usher projects, and to do what I can, when I can, to promote awareness of Usher syndrome - something I also do through the companies I have worked with. I have spoken for several charities, including RP Fighting Blindness and Berkshire Vision. I often feel on the outside looking in; I don't fit in the deaf community or the blind community, and yet I feel I'm a part of both, along with the Usher community and society in general.

Belonging somewhere is important to us all.

7. You wanted genetic testing, but encountered resistance from the system. Why did they think it was a bad idea for you to have genetic testing?
I wanted genetic testing when I was 15 years old, back in 2009. I had studied genetics a little at school and I wanted to know exactly who I am. My parents asked at Moorfields Eye Hospital in London at our next visit, and we were told 'NO' because of funding and because there is no cure for my condition. I remember feeling very upset, and my parents followed up the request for genetic testing with my GP. Thankfully he understood the need and arranged for me to see a geneticist at the John Radcliffe Hospital in Oxford. My geneticist, Edward Blair, was brilliant: he explained things in full and even provided a history lesson on where Usher syndrome came from. Some 6 months later I was told I have Usher syndrome type 2a. Knowing is essential should the chance to trial anything become an option in the future; if there is any clinical testing of that gene, I can decide whether I'd like to be involved. Being told 'NO' makes you feel you are a lost cause, which just escalates the isolation this condition brings. Everything is a battle with this condition.

Something else to be considered is the benefits system. I have been assessed more times than I can say. Sadly, people assume deafblind means no hearing, no sight and no speech. When they see me they are often very shocked, and then don't believe I have any disability. On one occasion I arrived for an assessment (ATOS) and was told by the doctor that he had googled Usher syndrome the night before! He did not have a clue what I deal with on a daily basis.

8. When it comes to innovation in technology, and in particular around accessibility, what is your long term dream? 
I'd like people with disabilities to be considered from day one. I'd like those with rare disabilities like mine to have access to all the equipment they need and to be taught how to use it. I'd like them to have access to transport and to benefits that enable them to work.

I think developers of everything need to understand the unique needs of all, and to realise that disabilities are not black and white. Sensory impairments come in many shades. Some people with Usher are profoundly deaf (usually type 1s); the older generation may never have used hearing aids, so they rely on sign language (BSL) and, later, tactile signing as their vision deteriorates. Their communication skills and needs differ from those of the younger generation, whose parents chose cochlear implants, giving them access to sound from a young age, so they communicate orally. My generation, in the main, wear hearing aids and are oral, which in my opinion is a huge positive for accessing our world. However, those who sign must always be considered when it comes to accessibility. And being blind extremely rarely means total darkness; there are many grey areas that are not often considered.

In an ideal world I'd like to work and consult with developers around the world on accessibility for all. I'd like to be a part of moving assistive technology forward. I believe that if technology works for people like myself, it will work for the older generation whose eyes and ears start to fail them as they age, and this is very important with our ageing population.

9. Do you think there are other people like yourself around the world? Have you built your own network or is that something still to come?
I know of a few people doing similar work to mine. I have built a network which continues to grow, and I am quite well known around the world for my work, something I have been doing since I was 15. There is definitely more to do and lots more to come. I hope that one day having Usher syndrome can open up unique doors for every individual, rather than the progressive isolation and depression that lack of access and awareness can bring.

10. Who has inspired you the most in your life, and why?
My parents, particularly my mum and my grandparents.  They have always encouraged, supported and fought for me and I have learnt so much from them.
My mum always told my brothers, my younger sister and me that we could be anything we wanted. At that time she didn't realise what was around the corner for me, but she still believed I would make something of myself, and I will, one way or another!
Before I could speak (at age 6), my Nannie, Pat, would sit me down and we'd make cards, paint and create for hours. We'd do jigsaw puzzles and watch Disney videos. My creative streak definitely arose from those days. I was born creative and to this day I use those skills. My children's books have frog characters; my Nan loved frogs. She inspired me.

11. If people want to work with you, what would they need to be offering to get your attention?
Opportunities to speak, to motivate, to innovate, to consult, to make a difference, to be heard.
My passion is accessible assistive technology and educating others.

12. If others wanted to follow in your footsteps, what would your advice be to them?
I'd encourage others to think about what is important to them and how to use their unique skill set to make a difference. Work hard and be passionate about your cause. And of course, never be afraid to speak up. Find ways to express yourself; in that process you eventually find yourself, and with that the confidence to help others.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Shifting to a world of prevention: A GP's story

For this post, I caught up with Dr Manpinder Sahota, a GP in Britain's NHS. We first interacted on Twitter, where we met over the topic of shifting healthcare towards prevention. Dr Sahota said he had a vision for building a GP practice focused on wellness and the prevention of disease, and was curious whether technology could play a role in that. So I hopped on a train to see him, and what follows is the interview at his practice in Gravesend. For those who have never visited, Gravesend is an ancient town in north-west Kent, England, situated 21 miles east-south-east of Charing Cross, London, on the south bank of the Thames estuary. Gravesend has one of the oldest surviving markets in the country; its earliest charter dates from 1268. For my American readers, Gravesend is where Princess Pocahontas is buried, having died there almost 400 years ago on a ship bound for the Commonwealth of Virginia. Back in the present day, Gravesend [and the borough of Gravesham that it falls under] faces the challenge of childhood obesity, with 38.9% of 10 to 11-year-olds resident in Gravesham being overweight or obese. Demographics are changing too, with 17% of the borough's population having been born outside the UK.

Hearing about new models of care with Dr Manpinder Sahota at his GP practice


1. What is your role & responsibilities?
I've been at the Pelham medical practice since 1999. We have 7 GPs, over 2 sites and almost 14,000 patients. I'm the Diabetes lead and a GP trainer as well. I also provide free acupuncture to some of my patients. 

2. What are the key challenges you're facing in the year ahead? 
In a place like Gravesend, where 50% of patients are not tech savvy, getting reminders on their mobile phone or using 'choose and book' [Note: Choose and Book is a national electronic referral service which gives patients a choice of place, date and time for their first outpatient appointment in a hospital or clinic] doesn't mean anything to them. Furthermore, many can just about get to the local hospital on the cheapest bus, and they often can't afford a taxi to a hospital that is further away, so services such as 'choose and book' are of no use to them. I'm seeing the local population getting sicker and sicker, and although some of my patients are living longer because they are on 9 or 10 drugs, they usually have very little quality of life.

My main challenge is educating people in lifestyle changes, especially those from the lower social classes. I've found that if I can give them a practical bit of advice, or even just encouragement, it does lead to lower blood pressure and weight loss.

Patients only seem to listen when they are about to suffer ill health; many times there is no motivation to change behaviour, diet or exercise, especially given that education levels can be quite low.

I'm interested in pre-Diabetes and screening for pre-Diabetes, that is where the biggest change can happen. Usually, my patients know a bit about Diabetes from someone in the family, so there is some emotional trigger, which can help in our conversations. 

3. What is your big vision for moving to a world with a focus on prevention of disease?
My overall big vision is to get away from prescribing drugs. There are dangers in polypharmacy, and I want to get people to rely upon themselves and to use lifestyle medicine as the first discussion point, before we go down the path of handing out tablets. I'm also thinking about depression, back pain and obesity-related diseases, and am keen to provide Tai Chi, yoga and meditation classes at this new centre.

4. Tell us more about your new centre
My new centre is not replacing the existing GP surgery. It would be a new GP practice with a preventative component. One idea is to have a gym at the top of the surgery where Tai Chi classes could take place. I want to be able to prescribe patients a 12-week course on diet and nutrition with a personal trainer. There is a national programme under which certain diabetes-prevention courses can be run; there is no funding for it currently, but in the future there should be money coming from it. If the NHS won't fund my ideas, I will go to the British Heart Foundation or the National Lottery.

5. Switching over to technology, there is much talk about giving patients online access to their medical records, in the hope that it will improve the quality of care, shared decision making as well as patient outcomes. How often do patients come in and ask for a paper copy of their medical records?
Very rarely, it does happen though.

6. If today, your practice was able to offer online access to medical records for your patients, as an estimate, how many would use it?
I estimate 25% would use it. The remaining 75% aren't that educated and/or don't have computers. In fact, 10% of the patients visiting our practice need an interpreter during the visit, as they don't speak English, or don't speak it well enough. 

Our other practice is in a deprived area. There, patients tend to believe the doctor knows best; they don't want to be involved in their treatment decisions, and actually want a paternalistic healthcare system. Quite a lot of my Indian patients believe that the doctor is God, and if you give them management options they are not interested.

7. We hear so much about how wearable technology is changing healthcare. How many of your patients are coming in and showing you apps or wearables with respect to behaviour change (such as using a FitBit as a tool in increasing physical activity)?
Hardly any patients are showing up at appointments with this kind of technology.

8. What are your thoughts when you hear the term 'Big Data' in healthcare? How does it make you feel as a GP?
We are already overloaded with information - letters from hospitals, letters from agencies - and if we had to look at even more, that would be too much for us. We are doing too much administrative work already; any new information would have to be controlled very well. We are literally drowning in information, as everything in the NHS gets sent to a patient's GP.

9. How might 'smarter homes'  in the future help you as a GP in terms of prevention?
Technology that could help spot rises in blood sugar, oxygen levels or pulse rate. Patients are already bringing paper records of their rising blood pressure to appointments. For current hypertensives, I'd like to see a patient's home BP readings on my computer screen prior to the visit; patients could save admin time if they could pre-enter this information for me to see. And if we could get food diaries into patients' medical records, that would be great for preventing diabetes: being able to understand what they are eating on a daily or weekly basis, the carbohydrate content and so on.

10. Who influences you?
I follow Dr Aseem Malhotra and Jamie Oliver; they are both leading a national conversation. I hope to see celebrities and sports stars taking up the baton in health prevention and getting involved in their local areas. What if we had footballers like David Beckham or Wayne Rooney helping to spread this message? Kids would listen to those people, rather than us.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Developing a wearable biosensor: A doctor's story

For this post, I caught up with Dr Brennan Spiegel, to hear in more detail about his journey to get a wearable biosensor from concept to clinic. In the interview, we discuss how an idea for a sensor was borne out of an unmet clinical need, how the sensor was prototyped, tested, and subjected to clinical research, and how it was finally FDA approved in December of 2015. Throughout, we learn about the challenges of developing a wearable biosensor, the importance of working with patients, doctors, and nurses to get it right, and how to conduct rigorous research to justify regulatory approval of a device. The interview ends with seven suggestions from Dr. Spiegel for other inventors seeking to develop wearable biosensors.

1. What is AbStats?
AbStats is a wearable sensor that non-invasively measures your intestinal activity – it's like a gut speedometer. The sensor is disposable, about the size of a large coin, sticks on the external abdominal wall, and has a small microphone inside that dutifully listens to your bowel churn away as it digests food. A specialized computer analyzes the results and presents a value we call the "intestinal rate," which is like a new vital sign for the gut. We've all heard of the heart rate or respiratory rate; AbStats measures the intestinal rate. The sensor tells the patient and doctor how much the intestines are moving, measured in "events per minute." If the intestinal rate is very high, like 30 or 40 events per minute, then it means the gut is revved up and active. If it's very low, like 1 or 2 per minute, then it means the gut is asleep or, possibly, even dysfunctional depending on the clinical situation.

2. What existing problem(s) does it solve?
AbStats was specifically designed, from the start, to solve a real problem we face in the clinical trenches.

We focused first on patients undergoing surgery.  Almost everyone has at least temporary bowel paralysis after an operation.  When your body undergoes an operation, whether on your intestines or on your toe (or anywhere in-between), it's under a great deal of stress and tends to shut down non-vital systems.  The gastrointestinal (GI) tract is one of those systems – it can take a hit and shut down for a while.  Normally, the GI system wakes up quickly.  But in some cases the GI tract is slow to come back online.  This is a condition we call postoperative ileus, or POI, which occurs in up to 25% of patients undergoing abdominal surgeries.  

The issue is that it's hard to know when to confidently feed patients after surgery.  Surgeons are under great pressure by administrators to feed their patients quickly and discharge them as soon as possible. But feeding too soon can cause serious problems, from nausea and vomiting, to aspiration, pneumonia, or even death.  On the other hand, feeding too late can lead to infections, prolong length of stay, and cost money.  As a whole, POI costs the US healthcare system around $1.5 billion because of uncertainties about whether and when to feed patients.  It's a very practical and unglamorous problem – exactly the type of issue doctors, nurses, and patients care about. 

Now, you might ask how we currently decide when to feed patients. Here's the state of the art: we ask patients if they've farted or not. We literally ask them, practically all day long, "have you passed gas yet?" No joke. Or, we'll look at their belly and determine if it looks overly distended. We might use our stethoscope to listen to the bowels for 15 seconds at a time, and then make a call about whether to feed. It's nonsense. Data reveals that we do a bad job of determining whether someone is fit to eat. We blow it in both directions – sometimes we overcall, and sometimes we undercall. We figured, in this fantastical age of digital health, there had to be a better way than asking people about their flatus! So we invented AbStats.

3. What prompted you to embark upon this journey?
One day, about 4 years ago, I was watching Eric Topol give a TED talk about wearable biosensors and the "future of medicine." As I watched the video, I noticed that virtually every part of the human body had a corresponding wearable, from the heart, to the lungs, to the brain, and so forth. But, sitting there in the middle was this entire body cavity – the abdominal cavity – that had absolutely zero sensor solutions. As a gastroenterologist, I thought this must be an oversight. We have all manner of medieval devices to get inside the GI system, and I'm skilled at inserting those things to investigate GI problems. But typical procedures like colonoscopies, enteroscopies, capsule endoscopies, and motility catheters are all invasive, expensive, and carry risks. There had to be a way to non-invasively monitor the digestive engine. So, I thought, what do we have available to us as doctors? That's easy: bowel sounds. We listen to bowel sounds all the time with a stethoscope, but it's highly inefficient and inaccurate. It makes no sense to sit there with a stethoscope for 20 minutes at a time, much less even 1 whole minute. But the GI system is not like the heart, where we can make accurate diagnoses in short order, over seconds of listening. The GI system is slow, plodding, and somewhat erratic. We needed something that can stand guard, vigilantly, and literally detect signal in the noise. That's when AbStats was born. It was an idea in my head, and then, about 4 years later, became an FDA-approved device.

4. What was the journey like from initial idea to FDA approval? 
When I first invented AbStats, I wasn't thinking about FDA approval. I knew virtually nothing about FDA approval of biomedical devices. I just wanted the thing built, as fast as possible, and rigorously tested in patients. As a research scientist and professor of medicine and public health, this is all I know. I need to see proof – evidence – that something works. AbStats would be no different.

I was on staff at UCLA Medical Center when I first conceived the idea for AbStats. I told our office of intellectual property about the idea, and they suggested I speak with Professor William Kaiser at the UCLA Wireless Health Institute. So, I gave him a call.

Dr. Kaiser got his start working for General Motors, where he contributed to inventing the automotive cruise control system.  Later, he went to work for the Jet Propulsion Laboratory, where he worked on the Mars Rover project.  Then, he came to UCLA and founded the Wireless Health Institute.  He is fond of saying that of all the things he's done in his career, from automotive research to spaceships, he believes the largest impact on humanity he's had is in the realm of digital health.  He is a real optimist.  

So, when I told Professor Kaiser about my idea for AbStats, he immediately got it.  He got to work on building the sensor and developed important innovations to enhance the system.  For example, he developed a clever way to ensure the device is attached to the body and not pulled off.  This is really important, because if AbStats reports that a patient's intestinal rate is zero, then it might mean severe POI, or it might mean the device fell off.  AbStats can tell the difference thanks to Professor Kaiser's engineering ingenuity.  

Once we developed a minimal viable product, we worked like crazy to test it in the clinics, write papers, and publish our work.  At the same time, UCLA licensed the IP to a startup company, called GI Logic, that worked with our teams to submit the FDA documentation.  Professor Kaiser's team did the heavy lifting on the engineering and safety side, and we focused on the clinical side.  It was a great example of stem-to-stern teamwork, ranging from in-house engineering expertise, to clinical expertise, to regulatory expertise.  It all came together very fast.  

Importantly, it was my sister who came up with the name "AbStats."  I always remember to credit her with that part of the journey!

5. What role did patients play in the design of AbStats? 
Patients were critical to our design process.  We went through a series of form factors before settling on the current version of AbStats.  At first, the system resembled a belt with embedded sensors. Patients told us they hated the belt.  They explained that, after undergoing an abdominal surgery, the last thing they wanted was a belt on their abdomen.  We tweaked and tweaked, and eventually developed two small sensors that adhere to the abdomen with Tegaderm.  Even those are not perfect – it hurts to pull Tegaderm off of skin, for example.  And the sensors are high profile, so they are not entirely unobtrusive.  We're working on that, too.  But patient feedback was key and remains vital to our current and future success with AbStats.  

6. How did patients & physicians respond to AbStats during research & development?
It was gratifying that virtually every surgeon, nurse, and patient we spoke with about AbStats immediately "got it."  This is not a hard concept to sell.  Your bowels make sound.  The sound matters. And AbStats can listen to those sounds, make sense of them, and provide feedback to doctors and nurses to drive decisions.  The "so what" question was answered.  If your belly isn't moving, then we shouldn't feed you.  If it's moving a little, we should feed a little.  And if it's moving a lot, then we should feed a lot.  The surgeons called this the AbStats "stoplight", as in "red light," "yellow light," and "green light."  Each is mapped to a very specific action plan.  It's not complicated.  
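The "stoplight" described above is, in essence, a simple decision rule mapping the intestinal rate to a feeding recommendation. As a purely illustrative sketch (the cut-off values below are hypothetical placeholders, not the validated AbStats thresholds), it might look like this:

```python
# Illustrative sketch of the "stoplight" idea described above.
# The cut-offs are hypothetical placeholders, NOT the clinically
# validated AbStats algorithm.

def feeding_stoplight(intestinal_rate: float,
                      low_cutoff: float = 5.0,
                      high_cutoff: float = 15.0) -> str:
    """Map an intestinal rate (events per minute) to a stoplight colour."""
    if intestinal_rate < low_cutoff:
        return "red"      # gut largely quiet: hold feeding
    if intestinal_rate < high_cutoff:
        return "yellow"   # some activity: feed a little
    return "green"        # gut active: feed normally

print(feeding_stoplight(2))   # a very low rate lands in "red"
print(feeding_stoplight(30))  # a high rate lands in "green"
```

In the real system, each colour is mapped to a specific clinical action plan; the point here is simply that the device's output is designed to drive a concrete, easily communicated decision.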

We were especially surprised by the engagement of nurses in this process.  Nurses are the heart and soul of patient care, especially in surgery.  Our nursing colleagues told us that feeding decisions come up in nearly every discussion with post-operative patients.  They said they have virtually no objective parameter to follow, and saw AbStats as a way to engage patients in ways they previously could not. This was surprising.  For example, the nurses pointed out that many patients are on narcotics for pain control, and that can slow their bowels even further. By having an objective parameter, the nurses can now use AbStats to make conversations more objective and actionable.  For example, they can show that every time a patient uses a dose of narcotics, it paralyzes the bowels further.  Knowing that, some patients might be willing to reduce their medications, if only by a little, to help expedite feeding decisions.  AbStats enables that conversation.  It's really gratifying to see how a device can alter the very process of care, to the point of impacting the nature of conversations between patients and their providers.  Almost uniformly, the patients in our trials felt the sensors provided value, and so did their nurses. 

7. Would you approach the problem differently if you had to do this again?
Not really.  Considering that in 4 years we invented a sensor, iteratively improved its form factor, conducted and published two peer-reviewed clinical trials, submitted an FDA application, and received clearance for the device, it's hard to second guess the approach.

8. What other problems would you like to solve with the use of wearable technology in the future?
AbStats has many other applications beyond POI.  We are currently studying its use in an expanding array of applications, including acute pancreatitis, bowel obstructions, irritable bowel syndrome, inflammatory bowel disease, obesity management, and so on.  There are more opportunities than there are hours in the day, so we're trying to remain strategic about how best to proceed.  Thankfully, we are well aligned with the startup, GI Logic, to move things forward.  I am also fortunate to be at Cedars-Sinai Medical Center, my home institution since moving from UCLA, where most of the clinical research on AbStats was conducted.  Cedars-Sinai has been extremely supportive of AbStats and our work in digital health.  We couldn't do our research without our medical center, patients, administrative support, and technology transfer office. I am immensely grateful to Cedars-Sinai.  

More generally, wearable technology and digital health still have a long way to go, in my opinion.  I've written about that before, here. AbStats is an example of a now FDA-approved sensor supported by peer-reviewed research.  I'd like to see a similar focus on other wearables.  There are good examples, like AliveCor for heart arrhythmias, and now Proteus, which is an "ingestible."  But, for many applications in healthcare, there is still too little data about how to use wearables.  

I believe that digital health, in general, is more of a social and behavioral science than a computer or engineering science.  Truth be told, most of the sensors are now trivial.  Our sensor is a small microphone in a plastic cap.  The real "secret sauce" is in the software, how the results are generated and visualized, how they are formed into predictive algorithms, and, most importantly, how those algorithms change behavior and decision making.  Finally, there is the issue of cost and value of care. There are so many hurdles to cross, one wonders whether many sensors will run the gauntlet. AbStats, for example, may be FDA approved, but that doesn't mean we're ready to save money using the device.  We need to prove that.  We need data.  FDA approval is a regulatory hurdle, but it doesn't guarantee a device will save lives, reduce costs, reduce disability, or anything close to it.  That only comes from hard-fought science.  

9. Are clinically proven medical applications of wearable technology likely to grow in years to come?
Almost certainly, although my caveats, above, indicate this may be slower and more deliberate than some are suggesting in the digital health echo chambers.

10. For those wishing to follow in your footsteps, what would your words of wisdom be?
First, start by addressing an unmet need. Clinical need should drive technology development, not the other way around.  

Second, if you're working on patient-facing devices, then I believe you should really have first-hand experience with literally putting those devices on patients. If you're not a healthcare provider, then you should at least visit the clinical trenches and watch what happens when sensors go on patients. What happens next can be unexpected and undermine your presuppositions, as I've written about here and here. I do not believe one can truly be a wearable expert without having literally worked with wearables. That's like a pharmacist who has never filled a prescription, or a cartographer who has never drawn a map. Digital health is, by definition, about healthcare. It's about patients, about their illness and disease, and about figuring out how to insert technology into a complex workflow. The clinical trenches are messy, gray, indistinct, dynamic, and emotional; injecting technology into that environment is exceptionally difficult and requires first-hand experience. Digital health is a hands-on science, so look to the clinical trenches to find the unmet needs, and start working on them, step by step, in direct partnership with patients and their providers.

Third, make sure your device provides actionable data.  Data should guide specific clinical decisions based on valid and reliable sensor indicators.  We're trying to do that with AbStats. 

Fourth, make sure your device provides timely data. Data should be delivered at the right time, right place, and with the right visualizations.  We spent days just trying to figure out how best to visualize the data from AbStats.  And I'm still not sure we've got it right.  This stuff takes so much work. 

Fifth, if you're making a device, make sure it's easy to use and has a favorable form factor. It should be simple to hook up the device, and it should be unobtrusive, non-invasive, with zero infection risk, comfortable, safe, and preferably disposable. We believe that AbStats meets those standards, although there is always more work to be done.

Sixth, the wearable must be evidence-based.  A valuable sensor should be able to replace or supplement gold standard metrics, when relevant, and be supported by well designed, properly powered clinical trials.  

Finally, and most importantly, the sensor should provide health economic value to health systems.  It should be cost-effective compared to usual care.  That is the tallest yet most important hurdle to cross.  We're working on that now with AbStats.  We think it can save money by shaving time off the hospital stay and reducing readmissions.  But we need to prove it.  

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]
