I've just returned from California, where I attended three conferences.
For this post, I'm going to focus on what I observed at these events regarding the quest for evidence in Digital Health. I'll write separate blog posts in the future about my overall experience at each event.
Starting with the first event, hosted by the Scripps Translational Science Institute, I was excited going in. The opening sentence in the brochure said, "A thoughtful exploration of the clinical evidence necessary to drive the widespread uptake of mobile health solutions will be the focus of the first Scripps Health Digital Medicine conference." When booking my place, three of the educational objectives of the event sounded tremendously useful to me as a participant:
- "Assess the quality of clinical trials of mobile health in terms of providing the evidence necessary to support implementation"
- "Discuss the implementation of mobile health technologies into clinical practice based on clinical trial evidence"
- "Identify innovative trial methodologies for use in digital medicine"
Having attended, I don't really feel those three objectives were met. Whilst some of the sessions were very interesting and thought-provoking, it wasn't because the speakers were discussing evidence generation or clinical trials in this arena. Often they were talking about the future of Digital Health, and where we are heading. I walked away feeling confused and disappointed. Only on the second day, when Jeff Shuren, Director of the Center for Devices and Radiological Health at the FDA, spoke, did I see a session which specifically related to the objectives listed above.
So on to Health 2.0, where I was expecting validation and evidence to be discussed at two sessions. The first was "Validating Performance in Healthcare and Turning the Dial on Credibility" and the second was "Arc Fusion: Getting real about the convergence of health IT and biomedicine." In the first session, Vik Khanna from Quizzify made a number of good points.
I didn't manage to attend the second session, but the talk was captured on video, and can be found here. Having watched the 40-minute video, I found there wasn't much exploration of evidence or validation.
However, before either of those two sessions took place, it was useful to hear about validation at the session on "Health Data Exploration Project: Personal Data for the Public Good." It's good that they are pursuing this research, and I look forward to seeing what they discover.
I also noticed this tweet during Health 2.0, but I can't find a link on the web that shows what the American Medical Association is doing here.
At Body Computing, there was a panel discussion on "Building a virtual healthcare system," and I asked the panel whether we need some kind of new institute that can validate & certify these new digital interventions. Andy Thompson, the CEO of Proteus Digital Health, replied that we don't need new institutions, and that industry needs to collaborate with regulators to improve regulatory science, as the regulators can't do it alone. At some level, I think he has a good point, though later in this post, I'll explain why we might actually need a new institute.
I've tried a lot of wearable technology, especially smart watches, and there still isn't any real evidence showing that these are making an impact on our health. Whilst a watch that can remind you to walk more or work out at the gym in the best heart rate zone is of some use, many who work with patients every day are asking, "What's the medical benefit?" There is a huge unmet need out there for wearable technology developed with medical-grade sensors that doctors and patients can trust and use with confidence.
I must mention that this new AliveCor product is a prototype and has not been FDA cleared yet. I personally expect it to be a roaring success when it is launched. I note that at the Scripps conference, when both patients and doctors were commenting on which Digital Health product had impacted their lives, AliveCor was cited nearly every time. The fact that the AliveCor app on the watch records the patient's voice and links it to the location takes us a step forward on the path to a single patient view, the marriage of hard and soft data. We need more of this science-driven innovation in Digital Health, where gathering of evidence is not an afterthought, and where the product/service has a clearly defined medical benefit.
I am witnessing increasing use of algorithms in healthcare, especially since we're collecting more data than ever before. Algorithms are like the invisible hand that guides many of our decisions, and since they are programmed by humans, how do we know what bias is incorporated into them? The recent scandal involving Volkswagen's cars and an algorithm that cheated the emissions testing system makes me think about the need for greater transparency in healthcare.
I appreciate that in the modern era, algorithms are closely guarded secrets, just as Kentucky Fried Chicken guards its secret recipe. I'm not saying that private corporations should make their algorithms open source and lose their competitive advantage, but maybe we need an independent body that can monitor these algorithms in healthcare, not just once when the product is approved, but all year round, so that we can feel protected. I found a fascinating post by Jason Bloomberg, who in response to the VW emissions scandal, asks if this is the death knell for the Internet of Things. Bloomberg cites 'calibration attacks' as the possible cause of the VW scandal, and goes on to highlight how this may impact healthcare too. In my opinion, each of the three conferences I attended should have had a session where we could have a healthy debate about algorithms. I keep hearing about how artificial intelligence, big data and algorithms will lead to so many amazing things, but I never hear anyone talking about calibration attacks, and how to prevent them. Zara Rahman closes her wonderful post on understanding algorithms with, "Though we can't monitor the steps of the process that humans decide upon to create an algorithm, we can - or should be able to - monitor and have oversight on the data that is provided as input for those algorithms."
I don't think it's alarmist to examine a range of different future scenarios and to consider updated regulatory frameworks that reflect threats which never existed before. It's wonderful to hear speakers at conferences show us how the future is going to be better due to technological advances, but we also need to hear about the side effects of those new technologies.
I recognise that not every digital intervention will need clinical trials and a whole body of evidence before it can be approved, accredited and adopted. Take, for example, medication reminder apps that are a twist on the standard reminder app. Or could it be argued that even these simple apps should be regulated too? What if the software developer makes a mistake in the code, and when a patient actually uses the app, their medication reminders are switched around, leading to patient harm? A recent article highlights research showing that most of the NHS-approved apps for depression are actually unproven. Another related post, by Simon Leigh, points out, "are apps forthcoming with the information they provide? It's easy enough to say this app beats depression, but do they offer any proof to turn this from what is essentially marketing into evidence of clinical effectiveness?"
Many people are so angry with the state of healthcare that they want this digital revolution to disrupt healthcare as quickly as possible. Asking for evidence and proof is often seen as slowing down this revolution, a sign of resistance to change. Just because something is digital doesn't mean we can trust it implicitly from the moment it's developed. Hype, hope and hubris will not be enough to deliver the sustainable change in healthcare that we all want to see. We are at a crossroads in Digital Health, and we have to be very careful going forwards that the recipients of these digital interventions aren't led to believe that promise equals proof.
[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]