Promise doesn't equal proof

I've just returned from California, where I attended three conferences: the Scripps Health Digital Medicine conference, Health 2.0, and Body Computing.

For this post, I'm going to focus on what I observed at these events regarding the quest for evidence in Digital Health. I'll be writing separate blog posts in the future relating to my overall experience at each of these events.

The first event was hosted by the Scripps Translational Science Institute, and I was excited about it. The opening sentence in the brochure said, "A thoughtful exploration of the clinical evidence necessary to drive the widespread uptake of mobile health solutions will be the focus of the first Scripps Health Digital Medicine conference." When booking my place, three of the educational objectives of the event sounded tremendously useful to me as a participant:

  • "Assess the quality of clinical trials of mobile health in terms of providing the evidence necessary to support implementation"

  • "Discuss the implementation of mobile health technologies into clinical practice based on clinical trial evidence"

  • "Identify innovative trial methodologies for use in digital medicine"

Having attended, I don't really feel those three objectives were met. Whilst some of the sessions were very interesting and thought-provoking, it wasn't because the speakers were discussing evidence generation or clinical trials in this arena. Often they were talking about the future of Digital Health and where we are heading. I walked away feeling confused and disappointed. Only on the second day, when Jeff Shuren, Director of the Center for Devices and Radiological Health at the FDA, spoke, did I see a session which specifically related to the objectives listed above.

So on to Health 2.0, where I was expecting validation and evidence to be discussed at two sessions. The first was "Validating Performance in Healthcare and Turning the Dial on Credibility" and the second was "Arc Fusion: Getting real about the convergence of health IT and biomedicine." In the first session, Vik Khanna from Quizzify made a number of good points.

I didn't manage to attend the second session, but the talk was captured on video, and can be found here. Having watched the 40-minute video, there wasn't much exploration of evidence or validation.

However, before either of those two sessions took place, it was useful to hear about validation at the session on "Health Data Exploration Project: Personal Data for the Public Good." It's good that they are pursuing this research, and I look forward to seeing what they discover.

I also noticed this tweet during Health 2.0, but I can't find a link on the web that shows what the American Medical Association is doing here.

At Body Computing, there was a panel discussion on "Building a virtual healthcare system", and I asked the panel whether we need some kind of new institute that can validate & certify these new digital interventions. Andy Thompson, the CEO of Proteus Digital Health, replied that we don't need new institutions, and that industry needs to collaborate with regulators to improve regulatory science, as the regulators can't do it alone. At some level, I think he has a good point, but later in this post, I'll explain why we might actually need a new institute.

I've tried a lot of wearable technology, especially smart watches, and there still isn't any real evidence showing that these are making an impact on our health. Whilst a watch that can remind you to walk more or work out at the gym in the best heart rate zone is of some use, many who work with patients every day are asking, "What's the medical benefit?" There is a huge unmet need out there for wearable technology developed with medical-grade sensors that doctors and patients can trust and use with confidence.

At Body Computing, I witnessed the first public viewing of the AliveCor ECG for the Apple watch. You can see a demo by Dr Dave Albert, founder & CMO of AliveCor, in my video below.

I must mention that this new AliveCor product is a prototype and has not been FDA cleared yet. I personally expect it to be a roaring success when it is launched. I note that at the Scripps conference, when both patients and doctors were commenting on which Digital Health product had impacted their life, AliveCor was cited nearly every time. The fact that the AliveCor app on the watch records the patient's voice and links it to the location takes us a step forward on the path to a single patient view: the marriage of hard and soft data. We need more of this science-driven innovation in Digital Health, where the gathering of evidence is not an afterthought, and where the product/service has a clearly defined medical benefit.

I am witnessing increasing use of algorithms in healthcare, especially since we're collecting more data than we ever have before. Algorithms are like the invisible hand that guides many of our decisions, and since they are programmed by humans, how do we know what bias is incorporated into them? The recent scandal involving Volkswagen's cars and an algorithm that cheated emissions tests makes me think about the need for greater transparency in healthcare.

I appreciate that in the modern era, companies guard their algorithms as closely as Kentucky Fried Chicken guards its secret recipe. I'm not saying that private corporations should make their algorithms open source and lose their competitive advantage, but maybe we need an independent body that can monitor these algorithms in healthcare, not just once when the product is approved, but all year round, so that we can feel protected. I found a fascinating post by Jason Bloomberg, who, in response to the VW emissions scandal, asks if this is the death knell for the Internet of Things. Bloomberg cites 'calibration attacks' as the possible cause of the VW scandal, and goes on to highlight how these may impact healthcare too. In my opinion, each of the three conferences I attended should have had a session where we could have a healthy debate about algorithms. I keep hearing about how artificial intelligence, big data and algorithms will lead to so many amazing things, but I never hear anyone talking about calibration attacks and how to prevent them. Zara Rahman closes her wonderful post on understanding algorithms with, "Though we can't monitor the steps of the process that humans decide upon to create an algorithm, we can - or should be able to - monitor and have oversight on the data that is provided as input for those algorithms."
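Rahman's idea of oversight on input data doesn't require opening up the algorithm itself. A minimal sketch of what an independent body could ask for is an audit trail that records and fingerprints every input an algorithm consumes, so a reviewer can later verify exactly what data drove a decision. The function and field names here are hypothetical, not from any real system:

```python
import hashlib
import json
import time

def log_algorithm_input(record: dict, audit_log: list) -> str:
    """Append a timestamped, hashed snapshot of an algorithm's input,
    so an independent reviewer can later verify what data was used."""
    snapshot = json.dumps(record, sort_keys=True)  # deterministic serialisation
    digest = hashlib.sha256(snapshot.encode("utf-8")).hexdigest()
    audit_log.append({
        "timestamp": time.time(),
        "input": record,
        "sha256": digest,
    })
    return digest

audit_log = []
digest = log_algorithm_input({"heart_rate": 72, "steps": 8400}, audit_log)
# The same input always yields the same digest, so later tampering
# with the logged record is detectable.
```

The algorithm stays a trade secret; only its inputs (and their fingerprints) are exposed to oversight, which is the compromise Rahman's closing sentence points towards.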

I don't think it's alarmist to examine a range of different future scenarios and to consider updated regulatory frameworks to reflect threats that never existed before. It's wonderful to hear speakers at conferences show us how the future is going to be better due to technological advances, but we also need to hear about the side effects of those new technologies.

I recognise that not every digital intervention will need clinical trials and a whole body of evidence before it can be approved, accredited and adopted. Take, for example, medication reminder apps that are a twist on the standard reminder app. Or should even these simple apps be regulated too? What if the software developer makes a mistake in the code and, when a patient actually uses the app, their medication reminders are switched around, leading to patient harm? A recent article highlights research showing that most of the NHS-approved apps for depression are actually unproven. Another related post by Simon Leigh points out, "are apps forthcoming with the information they provide? It's easy enough to say this app beats depression, but do they offer any proof to turn this from what is essentially marketing into evidence of clinical effectiveness?"

Many people are so angry with the state of healthcare that they want this digital revolution to disrupt healthcare as quickly as possible. Asking for evidence and proof is often seen as slowing down this revolution, a sign of resistance to change. Just because something is digital doesn't mean we can trust it implicitly from the moment it's developed. Hype, hope and hubris will not be enough to deliver the sustainable change in healthcare that we all want to see. We are at a crossroads in Digital Health, and we have to be very careful going forwards that the recipients of these digital interventions aren't led to believe that promise equals proof.

[Disclosure: I have no commercial ties to any of the individuals or organizations mentioned in this post]




Painting a false picture of ourselves

In the quest for improving our health, we're on the path to capturing more data about ourselves: what we do, and what happens to us. It's no longer sufficient to capture data about our health only when we visit the doctor. Sensors are popping up all over the place, even in pills that help others determine whether we are actually taking our medication. Today, the most prevalent sensors are the ones in those wristbands and smart watches that track how many steps we've taken and how much we've slept. We're likely to end up at some point in the future where many, if not all of us, will be monitored 24 hours a day. Recently, Target in the USA announced it will be offering a Fitbit activity tracker to each of its 335,000 employees.

There are already insurers in the US & UK that are offering rewards if you share data from your wearable, and the data from the wearable proves you are being active enough. In Switzerland, a pilot project by health insurer, CSS, is monitoring how many steps customers are walking every day, with one implication being, "people who refuse to be monitored will be subject to higher premiums." In that same article, Peter Ohnemus of Dacadoo, believes "Eventually we will be implanted with a nano-chip which will constantly monitor us and transmit the data to a control centre."

Well, if pills with ingestible sensors are already here, then the vision of Ohnemus may not be that far-fetched. En route to the nano-chip, I note that Samsung's new SLEEPsense device, which sits under your mattress and tracks your sleep (and analyses the quality of your sleep), offers a feature where a report about your sleep can be emailed daily to family members. You might use it to track how your elderly parents/grandparents/children are sleeping. At the 5th EAI International Conference on Wireless Mobile Communication and Healthcare in London next month, there is a keynote titled, "The car as a location for medical diagnosis." There is so much data about us that could be captured and shared with interested parties; it's an exciting new era for many of us.

SLEEPsense was launched when I visited IFA earlier this month

Not everyone is excited though. It's truly fascinating to observe how people might respond to the introduction of these new sensors in our lives. We're going to see many developments in 'smart home' technologies, and maybe Apple's HomeKit will be the catalyst for people to make their homes as smart as possible. Given aging populations, maybe older people, especially those living alone, are the perfect candidates for these sensors and devices. Whilst their children, doctors and insurers may find the ability to 'remotely monitor' behaviour quite reassuring, what if the older person being monitored doesn't like being monitored? What strategies might they employ to hack the system? The short film below, 'Uninvited Guests', shows an elderly man and his smart home, and where the friction might occur.

Then you have 'Unfit Bits' which pokes fun at the growing trend of linking data from your activity tracker with your insurance. "At Unfit Bits, we are investigating DIY fitness spoofing techniques to allow you to create walking datasets without actually having to share your personal data. These techniques help produce personal data to qualify you for insurance rewards even if you can't afford a high exercise lifestyle." Check out their video. 

These videos are food for thought. Our daily choices and behaviour are going to come under increased scrutiny, and just because it's technically possible, will it be socially desirable? Decisions are increasingly being made by algorithms, and algorithms need data. There is a call for healthcare to be more of a data driven culture, but how will we know if the data coming from outside the doctor's office can be trusted? There is huge concern regarding the risks of health data being stolen, but little concern regarding how health data may be falsified. 
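One modest defence against falsified data is to check incoming readings for physiological plausibility before any decision is made on them. The sketch below is purely illustrative: the function name and the 200 steps-per-minute ceiling (roughly an elite running cadence) are my own assumptions, not any insurer's actual rules:

```python
def plausible_daily_steps(steps: int, wear_minutes: int) -> bool:
    """Basic sanity check on one day's step count from a wearable.

    A sustained rate above ~200 steps per minute (an assumed ceiling,
    near elite running cadence) suggests a faulty or spoofed sensor.
    """
    # Reject impossible values outright: negative steps, or wear time
    # outside the range of a single day.
    if steps < 0 or not (0 < wear_minutes <= 24 * 60):
        return False
    return steps / wear_minutes <= 200

# A plausible active day versus a physically impossible one.
print(plausible_daily_steps(12_000, 16 * 60))   # True
print(plausible_daily_steps(500_000, 8 * 60))   # False
```

Of course, a determined spoofer (or a device shaken on a string, as Unfit Bits demonstrates) would produce perfectly plausible numbers, which is exactly why falsification deserves as much concern as theft.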

In the case of employers tracking employees, "Instead of feeling like part of a team, surveilled workers may develop an us-versus-them mentality and look for opportunities to thwart the monitoring schemes of Big Boss", writes Lynn Parramore in her post examining the dystopia of workplace surveillance.  As these new 'monitoring' technologies and associated services emerge and grow, at the same time, will we also observe the emergence of technologies that will allow us to paint a false picture of ourselves?

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]


Data or it didn't happen

Today, there is incredible excitement, enthusiasm and euphoria about technology trends such as Wearables, Big Data and the Internet of Things. Listening to some speakers at conferences, it often sounds like the convergence of these technologies promises to solve every problem that humanity faces. Seemingly, all we need to do is let these new ideas, products and services emerge into society, and it will be happily ever after. Just like those fairy tales we read to our children. Except life isn't a fairy tale, and neither is it always fair and equal. In this post, I examine how these technologies are increasingly of interest to employers and insurers when it comes to determining risk, and how this may impact our future.

Let's take the job interview. There may be some tests the candidate undertakes, but a large part of the interview is the human interaction, and what the interviewer(s) and interviewee think of each other. Someone may perform well during the interview, but turn out to underperform when doing the actual job. Naturally, that's a risk that every employer wishes to minimise. What if you could minimise risk with wearables during the recruitment process? That's the message of a recent post on a UK recruitment website: "Recruiters can provide candidates with wearable devices and undertake mock interviews or competency tests. The data from the device can then be analysed to reveal how the candidate copes under pressure." I imagine there would be legal issues if an employer terminated the recruitment process simply on the basis of data collected from a wearable device, but it may augment the existing testing that takes place. Imagine the job is a management role requiring frequent resolution of conflicts, and your verbal answers convince the interviewer you'd cope with that level of stress. What if the biometric data captured from the wearable sensor during your interview showed that you wouldn't be able to cope with that level of stress? We might immediately think of this as intrusive and discriminatory, but would this insight actually be a good thing for both parties? I expect all of us have at some point worked alongside colleagues who couldn't handle pressure, and their reactions caused significant disruption in the workplace. Could this use of data from wearables and other sensors lead to healthier and happier workplaces?

Could those recruiting for a job start even earlier? What if the job involved a large amount of walking, and there was a way to get access to the last 6 months of activity data from the activity tracker you've been wearing on your wrist every day? Is sharing your health & fitness data with your potential employer the way that some candidates will get an edge over other candidates that haven't collected that data? That assumes that you have a choice in whether you share or don't share, but what if every job application required that data by default? How would that make you feel? 

What if it's your first job in life, and your employer wants access to data about your performance during your many years of education? Education technology used at school, which aims to help students, may collect data that could tag you for life as someone who gives up easily when faced with difficult tasks. The world isn't as equal as we'd like it to be, and left unchecked, these new technologies may worsen inequalities, as Cathy O’Neil highlights in a thought-provoking post on student privacy: “The belief that data can solve problems that are our deepest problems, like inequality and access, is wrong. Whose kids have been exposed by their data is absolutely a question of class.”

There is increasing interest in developing wearables and other devices for babies, tracking various aspects of a baby's life, mainly to provide additional reassurance to the parents. In theory, maybe it's a brilliant idea, with no apparent downsides? Laura June doesn't think so. She states, "The merger of the Internet of Things with baby gear — or the Internet of Babies — is not a positive development." Her argument against putting sensors into baby gear is that it would increase anxiety levels in parents, not reduce them. I'm already thinking about the data gathered from the moment a baby is born. Who would own and control it? The baby, the baby's parents, the government, or the corporation that made the software & hardware used to collect the data? Furthermore, what if the data from the baby could impact not just access to health insurance, but the pricing of the premium paid by the parents to cover the baby in their policy? Do you decide you don't want to buy these devices to monitor the health of your newborn baby in case one day that data might be used against your child when they are grown up?

When we take out health and life insurance, we fill in a bunch of forms and supply the information needed for the insurer to determine risk and calculate a premium. Rick Huckstep points out, "The insurer is not able to reassess the changing risk profile over the term of the policy." So, you might be active, healthy and fit when you take out the policy, but what if your behaviour and risk profile change during the term of the policy? This is the opportunity that some are seeing for insurers: to use data from wearables to determine how your risk profile changes during the term of the policy. Instead of a static premium set at the outset, we would have a world of dynamic and personalised premiums. Huckstep also writes, "Where premiums will adjust over the term of the policy to reflect a policyholder’s efforts to reduce the risk of ill-health or a chronic illness on an on-going basis. To do that requires a seismic shift in the approach to underwriting risk and represents one of the biggest areas for disruption in the insurance industry."

Already today, you can link your phone or wearable to Vitality UK health insurance, and accumulate points based upon your activity (e.g. 10 points if you walk 12,500+ steps in a day). Get enough points and you can exchange them for rewards such as a cinema ticket. A similar scheme has also launched in the USA with John Hancock for life insurance.
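The mechanics of such a scheme are simple enough to sketch. The step threshold and points value below come from the Vitality example above; the points price of the cinema-ticket reward is a hypothetical placeholder, as are the function names:

```python
DAILY_STEP_GOAL = 12_500       # from the scheme: 10 points for 12,500+ steps
POINTS_PER_ACTIVE_DAY = 10
TICKET_PRICE_POINTS = 50       # hypothetical cost of a cinema-ticket reward

def daily_points(steps: int) -> int:
    """Award points for one day's activity under the single tier cited above."""
    return POINTS_PER_ACTIVE_DAY if steps >= DAILY_STEP_GOAL else 0

def can_claim_ticket(step_history: list[int]) -> bool:
    """Check whether accumulated points cover the reward."""
    return sum(daily_points(s) for s in step_history) >= TICKET_PRICE_POINTS

# Under these assumed numbers, five active days would earn a ticket.
print(can_claim_ticket([13_000] * 5))  # True
print(can_claim_ticket([8_000] * 30))  # False
```

Notice that the logic depends entirely on trusting the step counts it is fed, which is exactly where the accuracy and falsification questions discussed elsewhere in this post come in.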

Is Huckstep the only one thinking about a radically different future? Not at all. Neil Sprackling, Managing Director of Swiss Re (a reinsurer), has said, "This has the potential to be a mini revolution when it comes to the way we underwrite for life insurance risk." In fact, his colleague, Oliver Werneyer, has an even bolder vision in a post entitled, "No wearable device = no life insurance," in which he believes that in 5 to 10 years' time, you might not be able to buy life insurance if you don't have a wearable device collecting data about you and your behaviour. Direct Line, a UK insurer, believes that technology is going to transform insurance. Their Group Marketing Director, Mark Evans, has recently talked about technology allowing them to understand a customer's "inherent risk." Could we be penalised for deviating from our normal healthy lifestyle because of life's unexpected demands? In this new world, if you were under chronic stress because you suddenly had to take time off work to look after a grandparent who was really sick, would less sleep and less exercise result in a higher premium next month on your health insurance? I'm not sure how these new business models would work in practice.

When it comes to risk being calculated more accurately based upon this stream of data from your wearables, surely it's a win-win for everyone involved? The insurers can calculate risk more accurately, and you can benefit from a lower premium if you take steps to lower your risk. Then there are opportunities for entrepreneurs to create software & hardware that serves these capabilities. Would the traditional financial capitals such as London and New York be the centre of these innovations? 

One of the big challenges to overcome, above and beyond established data privacy concerns, is data accuracy. In my opinion, these consumer devices that measure your sleep & steps are not yet accurate and reliable enough to be used as a basis for determining your risk and your insurance premium. Sensor technology will evolve, so maybe one day there will be 'insurance grade' wearables that your insurer will be able to offer you. These would be certified to be accurate, reliable and secure enough to be linked to your insurance policy. In this potential future, another issue is whether people will choose not to take insurance because they don't want to wear a wearable, or they simply don't like the idea of their behaviour being tracked 24/7. Does that create a whole new class of uninsured people in society? Or would there be so much of a backlash from consumers (or even policy makers) against the idea of insurers accessing a 24/7 stream of data about your health that this new business model never becomes a reality? If it did become a reality, would consumers switch to those insurers that could handle the data from their wearables?

Interestingly, who would be an insurer of the future? Will it be the incumbents, or will it be hardware startups that build insurance businesses around connected devices? That's the plan of Beam Technologies, who developed a connected toothbrush (yes, it connects via Bluetooth with your smartphone and the app collects data about your brushing habits). Their dental insurance plan is rolling out in the USA shortly. Beam are considering adding incentives, such as rewards for brushing twice a day. Another experiment is NEST partnering with American Family Insurance. They supply you a 'smart' smoke detector for your home, which "shares data about whether the smoke detectors are on, working and if the home’s Wi-Fi is on." In exchange, you get 5% discount off your home insurance. 

Switching back to work, employers are increasingly interested in the data from employees' wearables. Why? Again, it's about a more accurate risk profile when it comes to the health & safety of employees. Take the tragic crash of the Germanwings flight this year, where it emerged that the pilot deliberately crashed the plane, killing 150 passengers. At a recent event in Australia, it was suggested this accident might have been avoided if the airline had been able to monitor stress in the pilot using data from a wearable device.

What other accidents in the workplace might be avoided if employers could monitor the health, fitness & wellbeing of employees 24 hours a day? In the future, would a hospital send a surgeon home because the data from the surgeon's wearable showed they had not slept enough in the last 5 days? What about bus, taxi or truck drivers that could be monitored remotely for drowsiness by using wearables? Those are some of the use cases that Fujitsu are exploring in Japan with their research. Conversely, what if you had been put forward for promotion to a management role, and a year's worth of data from your wearable worn during work showed your employer that you got severely stressed in meetings where you had to manage conflict? Would your employer be justified in not promoting you, citing the data that suggested promoting you would increase your risk of a heart attack? Bosses may be interested in accessing the data from your wearables just to verify what you are telling them. Some employees phone in pretending to be sick, to get an extra day off. In the future, that may not be possible if your boss can check the data from your wearable to verify that you haven't taken many steps as you're stuck in bed at home. If you can't trust your employees to tell the truth, do you just modify the corporate wellness scheme with mandatory monitoring using wearable technology?

If it's possible for employers to understand the risk profile for each employee, would those under pressure to increase profits, ever use the data from wearables to understand which employees are going to be 'expensive', and find a way to get them out of the company? Puts a whole new spin on 'People Analytics' and 'Optimising the workforce'. In a compelling post, Sarah O'Connor shares her experiment where she put on some wearables and shared the data with her boss. She was asked how it felt to share the data with her boss, "It felt very weird, and actually, I really didn't like the feeling at all. It just felt as if my job was suddenly leaking into every area of my life. Like on the Thursday night, a good friend and colleague had a 30th birthday party, and I went along. And it got to sort of 1 o'clock, and I realized I was panicking about my sleep monitor and what it was going to look like the next day." We already complain about checking work emails at home, and the boundaries between work and home blurring. Do you really want to be thinking about how skipping your regular session at the gym on a Monday night would look to your boss? Devices that will betray us can actually be a good thing for society. Take the recent case of a woman in the USA who reported being sexually assaulted whilst she was asleep in her own home at night. The police used the data from the activity tracker she wore on her wrist to prove that at the time of the alleged attack, she was not asleep but awake and walking. On the other hand, one might also consider that those with malicious intent could hack into these devices and falsify the data to frame you for a crime you didn't commit. 

If these trends continue to converge, I see enterprising criminals rubbing their hands with glee. A whole new economy dedicated to falsifying the stream of data from your wearable/IoT device to your school, doctor, insurer or employer, or whoever is going to be making decisions based upon that stream of data. Imagine it's the year 2020, you are out partying every night, and you pay a hacker to make it appear that you slept 8 hours a night. So many organisations are blindly jumping into data-driven systems with the mindset of 'In data, we trust' that few bother to think hard enough about the harsh realities of real-world data. Another aspect is bias in the algorithms using this data about us. Hans de Zwart has written an illuminating post, "Demystifying the algorithm: Who designs our life?" De Zwart shows us the sheer amount of human effort in designing Google Maps, and the routes it generates for us: "The incredible amount of human effort that has gone into Google Maps, every design decision, is completely mystified by a sleek and clean interface that we assume to be neutral. When these internet services don’t deliver what we want from them, we usually blame ourselves or “the computer”. Very rarely do we blame the people who made the software." With all these potential new algorithms classifying our risk profile based upon data we generate 24/7, I wonder how much transparency, governance and accountability there will be.

There is much to think about and consider, and one of the key points is the critical need for consumers to be rights-aware. An inspiring example of this is Nicole Wong, the former US Deputy CTO, who wrote a post explaining why she makes her kids read privacy policies. One sentence in particular stood out to me: "When I ask my kids about what data is collected and who can access it, I am asking them to think about what is valuable and what they are prepared to share or lose." Understanding the value exchange that takes place when you share your data with a provider is a critical step towards being able to make informed choices. That's assuming all of us have a choice in the sharing of our data. In the future, when we teach our children how to read and write English, should they be learning 'A' is for algorithm, rather than 'A' is for apple? I gave a talk in London recently on the future of wearables, and I included a slide on when wearables will take off (slide 21 below). I believe they will take off when we have to wear them, or when we can't access services without them. Surgeons and pilots are just two of the professions which may have to get used to being tracked 24/7.

Will the mantra of employers and insurers in the 21st century be, "Data or it didn't happen?"

If Big Data is set to become one of the greatest sources of power in the 21st century, that power needs a system of checks and balances. Just how much data are we prepared to give up in exchange for a job? Will insurance really be disrupted or will data privacy regulations prevent that from happening? Do we really want sensors on us, in our cars, our homes & our workplaces monitoring everything we do or don't do? Having data from cradle to grave on each of us is what medical researchers dream of, and may lead to giant leaps in medicine and global health. UNICEF's Wearables for Good challenge could solve everyday problems for those living in resource poor environments. Now, just because we might have the technology to classify risk on a real time basis, do we need to do that for everyone, all the time? Or should policy makers just ban this methodology before anyone can implement it? Is there a middle path? "Let's add in ethics to technology" argues Jennifer Barr, one of my friends who lives and works in Silicon Valley. Instead of just teaching our children to code, let's teach them how to code with ethics. 

There are so many questions, and still too few places where we can debate these questions. That needs to change. I am speaking at two events in London this week where these questions are being debated, the Critical Wearables Research Lab and Camp Alphaville. I look forward to continuing the conversation with you in person if you're at either of these events. 

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]





Will the home of the future improve our health?

According to BK Yoon, that would be one of the benefits in their vision of the 'home of the future'. Who is BK Yoon? He's President and CEO of Samsung Electronics. Last Friday, I listened as he delivered the opening keynote of IFA 2014, which is the largest consumer electronics and home appliance show in Europe.

Whilst many talk of bringing healthcare out of the hospital/doctor's office into the home, Samsung, in theory, have the resources and vision to make this a reality at a global level. Samsung Group, of which Samsung Electronics is the biggest subsidiary, have just invested $2 billion in setting up a biopharmaceuticals unit. Christopher Hansung Ko, CEO at the Samsung Bioepis unit, said in an earlier interview, “We are a Samsung company. Our mandate is to become No. 1 in everything we enter into, so our long-term goal is to become a leading pharmaceutical company in the world.”

Is South Korea innovative?

My views on Samsung (and South Korea overall) changed when I visited Seoul, the capital of South Korea, during my round the world trip in 2010. I had only scheduled a 3-day stopover, but ended up staying for over 3 weeks. I was impressed by their ambitions, their attitude towards new technology and their commitment to education. Their journey over the last half century is truly amazing. "Fifty years ago, the country was poorer than Bolivia and Mozambique; today, it is richer than New Zealand and Spain." Bloomberg released their Global Innovation Index at the start of this year. Guess which country was top of the list? Yup, South Korea. The UK was ranked a lowly 16th.

After my visit, I left South Korea with a different perspective, and have been paying close attention to developments there ever since. I believe many people in the West underestimate the long term ambitions of a company like Samsung. Those wishing to understand the future would be wise to monitor not just what's happening in Silicon Valley, but also in South Korea. 

Aging populations worry policy makers in many advanced economies. Interestingly, recent data shows that South Korea's population is aging the fastest out of all OECD countries. Maybe that's one of the drivers behind Samsung's strategy of developing technology that could one day help older people live independently. 

We've been hearing about smart and connected homes for many years, and one wonders how this technology would integrate with our lives. How easy would it be to set up and use? How reliable would it be? Would I have to figure out what to do with all these data streams from the different devices? I used to believe that Apple was unique in really understanding the needs and wants of the consumer, but it seems Samsung have been taking notes. They have conducted lifestyle research with 30,000 people around the world. The results of that research were shared after the keynote. Whilst reading the reports, one feels like Samsung Electronics is now trying to position itself as a lifestyle company, not a technology company. Take one of the opening statements: "The home of the future is about more than technology and gadgets. It's about people." It's about responsive homes that adapt to our needs, homes that protect us, homes that are empathetic. How much is all of this going to cost us, right? Is it only going to be for the rich? Is it only going to work with new homes, or can we retrofit the technology?

Data driven homes - is that what we truly want?

Their research also says, "Technology will promote eating and living right. Devices around the home will inspire us to make decisions that are right for our bodies, taking an active role in helping us achieve better health by turning goals into habits." So in Samsung's vision, the fridge of the future will inform you that some of the food inside has expired and needs to be thrown away. Where do we draw the line? What if you come home from work hoping to grab a beer from the fridge, but the fridge door is locked, because the home of the future knows you've already consumed more than your daily allowance of calories?

Do we actually want to live in data driven homes? Our homes are often an analogue refuge in an increasingly digital & connected world. We have the choice to disconnect and switch off our devices and just 'be'. What if part of our contract with our energy provider or home insurer is having smart home technology installed?

In a recent survey of Americans aged 18+ for Lowe's, 70% of smartphone/tablet owners want to control something inside their home from their bed via their mobile. Obesity is already a public health issue not just in the USA, but in many nations. Will being able to switch on the coffee pot, adjust the thermostat and switch on the lights by speaking into your smartphone whilst lying in bed lead to humans leading even more sedentary lives in the future? 

However, there is a flip side. Rather than just dismiss this emerging technology as silly or promoting inactivity, these advancements may be of immense benefit to certain groups of people. For example, would the home of the future give someone who is blind, disabled or living with learning difficulties a greater chance to live independently? Would the technology be useful when you're discharged from hospital after surgery?

Is it just about the data?

One of the byproducts of these new technologies for our homes is data. Sensors and devices which track everything we do in the home are going to be collecting and processing data about us, our behaviour and our habits. Samsung mention in their research, "Our home will develop digital boundaries that keep networks separate & secure, protecting us from data breaches, and ensuring that family members cannot intrude on each other's data." It all sounds great as a grand vision, but turning that into reality is much harder than it appears.

We expect to hear about Apple's HealthKit later today. Who will you trust with the data from your home in the future? Apple, an American corporation or Samsung, a South Korean corporation? Or neither of them? Is the strategy of connecting our homes to the internet simply a ploy to grab even more data about us? Who will own and control the data from our homes? Where will it be stored? 

Despite the optimism and hope in BK Yoon's keynote, I can't help wondering about the risks of your home's data feed getting hacked, and what it means for criminals such as burglars, or even terrorists. Do we want machines storing data on the movements of each family member within our own home? Or will tracking of family members who are very young or very old, in and around the home, give families peace of mind and reassurance?

Ultimately, who is going to pay for all of this innovation? Even today, when I talk to ordinary hard working families about new technologies, such as sleep tracking, I'm conscious it's out of the reach of many. For example, the recently launched Withings Aura system allows you to track and improve your sleep. It's priced at $299.95/£249.95. If given the choice, how many ordinary families would invest in the Withings Aura to improve their sleep vs buying a new bed from Ikea for the same price?

The video below was played during the keynote, and gave me a glimpse into Samsung's vision of the home of the future. BK Yoon claimed that the future is coming faster than we think. He seemed pretty confident. Only time will tell. 

How does this video make you feel? Does Samsung's vision make you excited and hopeful, or does it frighten you? Do you look forward to a home that aims to protect you and cares for your family? How comfortable do you feel with your home potentially knowing more about your health than you or your doctor? How will the data about our health collected by our home integrate with the health & social care system? Will the company that is most successful in smart homes be the one that consumers trust the most?

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]


The paradox of privacy

When you're driving the car, would you let an employee from a corporation sit in the passenger seat and record details on what route you're taking, which music you listen to and the text messages you send and receive? 

When you're sitting at home watching TV with your family, would you let an employee from a corporation sit on the sofa next to you and record details on what types of TV shows you watch? 

When you're in the gym working out, when you're going for your daily walk, would you let an employee from a corporation stand alongside you and record details on how long you walked, where you walked, and how your body responded to the physical activity? 

I suspect many of you would answer 'No' to all 3 questions. However, that's exactly the future that is being painted after the recent Google I/O event. Aimed at software developers, it revealed a glimpse of what Google have got planned for the year ahead. New services such as Android Wear, Android Auto, Android TV and Google Fit promise to change our lives. 

In this article, titled "Google's master plan: Turn everything into data!", David Auerbach appreciates how putting more sensors in our homes, cities and on our bodies is a hugely lucrative opportunity for a company like Google. "That information is also useful to companies that want to sell you things. And if Google stands between your data and the sellers and controls the pipe, then life is sweet for Google."

In a brilliant article, Parmy Olson writes about the announcement at I/O of Google Fit, a new platform. "There’s a major advertising opportunity for Google to aggregate our health data in a realm outside of traditional search". Now, during the event, Google did state that users would control what health and fitness data they share with the platform. Let's see whether corporate statements translate to actual terms & conditions in the years ahead. 

Do we even realise how much personal data are stored on our phones?


Why are companies like Google so interested in the data from your body in between doctor visits? As I've stated before, our bodies generate data 24/7, yet it's only currently captured when we visit the doctor. So, the organisation that captures, stores & aggregates that data at a global level is likely to be very profitable, as well as wielding significant power in health & social care. 

Indeed, it could also prove transformative for those providing & delivering health & social care. In the utopian vision of health systems powered by data, this constant stream of data about our health might allow the system to predict when we're likely to have a heart attack, or a fall. 

Privacy and your baby

When people have a baby, some things change. It's human nature to want to protect and provide for our children when they are helpless and vulnerable. For example, someone may decide to upgrade to a safer car once they have a baby. We generally do everything we can to give our children the best possible start in life.

If you have a newborn baby, would you allow an employee from a corporation to enter your home, sit next to your baby and record data on its sleeping patterns? In the emerging world of wearable technology, some parents are considering using products and services where their baby's data would be owned by someone else. 

Sproutling is a smart baby monitor, shipping in March 2015, but taking pre-orders now. It attaches to your baby's ankle, and measures heart rate and movement and interprets mood. It promises to learn and predict your baby's sleep habits. You've got an activity and sleep tracker for yourself, why not one for your baby, right? According to their website today, 31% of their monitors have been sold. The privacy policy on their website is commendably short, but not explicit enough in my opinion. So I went on Twitter to quiz Sproutling about who exactly owns the data collected from the baby using the device. As you can see, they referred me back to their privacy policy, and didn't really answer my question. 

The paradox

What's fascinating is how we say one thing and do another. A survey of 4,000 consumers in the UK, US and Australia found that 62% are worried about their personal data being used for marketing. Yet, 65% of respondents rarely or never read the privacy policy on a website before making a purchase. 

In a survey by Accenture Interactive, they found that 80% of people have privacy concerns with wearable Internet of Things connected technologies. Only 9% of those in the survey said they would share their data with brands for free. Yet that figure rose to 28% if they were given a coupon or discount based upon their lifestyle. 

Ideally, there would be a way in which we as consumers could own and control our personal data in the cloud and even profit from it. In fact, it already exists. The Respect Network promises just that, and was launched globally at the end of June 2014. From their website, "Respect Network enables secure, authentic and trusted relationships in the digital world". Surely, that's what we want in the 21st century? Or maybe not. I haven't met a single person who has heard of Respect Network since they launched. Not one person. What does that tell you about the world we live in?

Deep down, are we increasingly becoming apathetic about privacy? Is convenience a higher priority than knowing that our personal data are safe? Is being safe and secure in the digital world just a big hassle?

A survey of 15,000 consumers in 15 countries for the EMC Privacy Index found a number of behavioural paradoxes, one of which they termed "Take no action", "although privacy risks directly impact many consumers, most take virtually no action to protect their privacy – instead placing the onus on government and businesses". It reminds me of an interaction I had on Twitter recently with Dr Gulati, an investor in Digital Health. 

What needs to change?

Our children are growing up in a world where their personal data are going to be increasingly useful (or harmful), depending upon the context. What are our children taught at school about their personal data rights? It's been recently suggested that schools in England should offer lessons about sex and relationships from age 7, part of a "curriculum for life". Shouldn't the curriculum for life include being educated about the intersection of your personal data and your privacy?

We are moving towards a more connected world, whether we like it or not. Personally, I'm not averse to corporations and governments collecting data about us and our behaviour, as long as we are able to make informed choices. I like how in this article about the Internet of Things and privacy, Marc Loewenthal writes "discussions about the data created are far more likely to focus on how to use the data rather than how to protect it". Loewenthal also goes on to mention how the traditional forms of delivering privacy guidelines to consumers aren't fit for purpose in an increasingly connected world, "They typically ignore the privacy notices or terms of use, and the mechanisms for delivering the notices are often awkward, inconvenient, and unclear".

When was the last time you read through (and fully understood) the terms and conditions and privacy policy of a health app or piece of wearable technology? So many more connected devices, each with their own privacy policy and terms and conditions. Not something I look forward to as a consumer. The existing approach isn't effective; we need to think differently about how we can truly enable people to make informed choices in the 21st century.

Now, what if each of us had our OWN terms and conditions and privacy policy, and then we could see if a health app meets OUR criteria? We, as consumers, decide in advance what we want to share, with whom, and what we expect in return. How would that even work? Surely, we'd need to cluster similar needs together to perhaps form 5 standard privacy profiles? Imagine comparing three different health apps which do the same thing, but being able to see instantly that only one of them has the privacy profile that meets your needs? Or even when browsing through the app store, you choose to only be shown those apps that match your privacy profile? That would definitely make it easier for each of us to be able to make an informed choice. 
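To make the idea concrete, here is a minimal sketch in Python of how such matching could work. The profile fields and app names are entirely hypothetical; a real scheme would need an agreed, machine-readable vocabulary for describing privacy practices:

```python
# Hypothetical sketch: matching apps against a personal privacy profile.
# Field names and app names are invented for illustration, not a real standard.

# A user's own "terms and conditions": what they will and won't accept.
my_profile = {
    "shares_with_advertisers": False,  # never share my data with ad networks
    "stores_data_abroad": True,        # storage outside my country is acceptable
    "allows_deletion_request": True,   # I must be able to request deletion
}

# Declared data practices for three apps that do the same thing.
apps = {
    "StepCounterA": {"shares_with_advertisers": True,
                     "stores_data_abroad": True,
                     "allows_deletion_request": False},
    "StepCounterB": {"shares_with_advertisers": False,
                     "stores_data_abroad": True,
                     "allows_deletion_request": True},
    "StepCounterC": {"shares_with_advertisers": False,
                     "stores_data_abroad": False,
                     "allows_deletion_request": False},
}

def matches(profile, practices):
    """An app matches if it never shares data when the user forbids it,
    and offers every protection the user requires."""
    if practices["shares_with_advertisers"] and not profile["shares_with_advertisers"]:
        return False
    if profile["allows_deletion_request"] and not practices["allows_deletion_request"]:
        return False
    return True

acceptable = [name for name, p in apps.items() if matches(my_profile, p)]
print(acceptable)  # → ['StepCounterB']
```

Of the three look-alike apps, only one clears the user's bar, which is exactly the instant comparison described above: the app store filters on your profile rather than asking you to read three policies.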

Things are changing: it was revealed last night that Apple have tightened privacy rules in their new operating system for people developing apps using their new HealthKit API. An article cites text pulled from the licence: developers may "not sell an end-user's health information collected through the HealthKit API to advertising platforms, data brokers or information resellers," and are barred from using gathered data "for any purpose other than providing health and/or fitness services."

Apps using the HealthKit API must also provide privacy policies.

This news is definitely a big step forward for anyone who cares about the privacy of their health data. Although the guaranteed link to a privacy policy doesn't necessarily mean it will be easy to understand for consumers. I also wonder how companies that develop health apps using the HealthKit API will make money, given current business models are based around the collection and use of data. 

Will the news from Apple make you more likely as a consumer to download a health app for your iPhone vs your Android device? Will it cause you to trust Apple more than Google or Samsung? Have Apple gone far enough with their recent announcement or could they do more? Will Apple's stance lead to them becoming THE trusted hub for our health data, above and beyond the current healthcare system?

How can we as individuals do more to become aware of our rights? As well as the campaigns to teach people to learn how to code, should we have campaigns to teach people how to protect their privacy? When commentators write that privacy is dead, do you believe them?

We're heading towards a future where over the next decade it will become far easier to use sensors to monitor the state of our bodies. Would you prefer a future where my body=my data or my body=their data? The choice is yours.

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]


Could every patient become an agent for change?

Our bodies generate data 24 hours a day, 7 days a week. It's typically only captured when we interact with the health & social care system, i.e. at the doctor's office or hospital.

Given how rapidly technology is evolving, we are looking at a possible future, where we may be able to capture data from our bodies 24/7. We might be able to learn so much more about the state of our bodies (and minds), not only when we are sick, but also when we are well. From a patient's perspective, sounds potentially useful, right? Provided we don't all have to train up as statisticians to make sense of these data. 

I spoke at an event in April hosted by the Connected Digital Economy Catapult about how in 5 years time, the data we collect about ourselves, may be more powerful than the data the system collects about us. You can see the highlights of the panel discussion in the video.

Kathleen Frisbee, an executive at the Veterans Administration in the USA, has recently stated that "patient generated data is going to be the thing that really transforms healthcare". 

Not everyone is as optimistic. Many in the medical profession have doubts about the accuracy & clinical value of data generated by the current crop of wearable technology & health apps. They do indeed have a point. After all, a doctor often has to work with an electronic health record (EHR), which records the medical history for a patient, i.e. when they were diagnosed and treated, as well as lab test results and free-text notes from the doctor. There are standards for those data, such as the clinical coding systems ICD-9/ICD-10 that are used to classify diseases.

Perhaps much of the data that patients might generate using new technologies in the future will have limited or no clinical value according to the design of the EHR? The healthcare system is built upon the biomedical model, as Dr Pritpal S Tamber explains in his recent post where he discusses the future of health. What if I capture data about my health that is meaningful to me and my life, but the system either dismisses that data or cannot integrate it with the EHR? 
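To illustrate the mismatch, here is a hypothetical sketch contrasting a coded EHR observation with a patient-generated measure. The record shapes are invented for illustration (E11 is a real ICD-10 code for type 2 diabetes mellitus, but no real EHR schema is implied):

```python
# Illustrative sketch of the mismatch discussed above: an EHR is built around
# coded clinical observations, while patient-generated data may carry no
# recognised coding at all. Record shapes here are invented, not a real schema.

ehr_entry = {
    "patient_id": "12345",
    "code_system": "ICD-10",
    "code": "E11",              # ICD-10: type 2 diabetes mellitus
    "recorded_by": "clinician",
}

patient_entry = {
    "patient_id": "12345",
    "measure": "hours spent with my children today",  # meaningful to the patient
    "value": 2.5,
    "recorded_by": "patient",
}

def accepted_by_ehr(entry, known_systems=("ICD-9", "ICD-10")):
    """A system designed around clinical coding accepts only coded entries."""
    return entry.get("code_system") in known_systems

print(accepted_by_ehr(ehr_entry))      # True  - the coded diagnosis fits
print(accepted_by_ehr(patient_entry))  # False - the patient's measure is dismissed
```

The patient's measure isn't wrong, it simply has no slot in a biomedical data model, which is the dismissal the paragraph above worries about.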

A 7-year-old girl in Singapore came up with the idea for an app that helps parents record how much time they spend with their children. Perhaps how much time a parent spends with their children each day is the most valuable measure of their health for them? Patients not taking the medication they've been prescribed is a huge issue. What if patient generated data tells us that some patients don't get their repeat prescription because the queue at the pharmacy is simply too long?

Susannah Fox, an amazing thinker, in her recent post, says "Patients and caregivers have knowledge that is worthy of being enshrined and shared. What would a learning health system look like if it honored all participants’ intelligence?"

It makes me wonder whether the modern healthcare system is fundamentally flawed due to the lens through which it views the people who it's supposed to serve?

Does the system really want to change?

When it comes to the system, the NHS, I've had little success in engaging with it, even when I ran Health 2.0 London. It's been frustrating, as I want to learn more about the system's challenges, and how my ideas might be able to help. Last November, Pascal Lardier from Health 2.0, wrote about the experiences of entrepreneurs who found it difficult to work with the NHS. 

I suspect things are changing. Why do I say this? I've been invited to the HSJ Innovation Summit, which takes place this week. Their website asks the question, "Is it possible for a 20th Century creation to still be innovative in a 21st Century world?" 

I have also been invited to be part of a panel next week judging applications for the Quality in General Practice Innovation Fund at Tower Hamlets Clinical Commissioning Group. 

Many of my peers tell me to steer clear of the NHS, citing their own negative experiences. As an eternal optimist, I want to believe that as a system, the NHS really does want to change. That's why I've accepted both invitations. I'll be writing a blog post about my experience of both events. 

The key to a truly patient centred NHS?

Whilst the majority of patients I speak to are very satisfied with the NHS, I do speak with some who feel powerless and frustrated when they have been let down by the system. We hear much talk about patient centred care, but a system that lives and breathes those words every second of every day is still not with us. 

We can make patient centred care a reality, and our data is going to be key to that transformation. Once the data we collect as patients becomes more valuable than the data collected on us by the system, that's the tipping point. Provided we own and control our own health data, we can finally hold the system to account. "More and more people are getting less and less happy about simply surrendering information and getting nothing in return", says Patrick James, in this BBC article on Big Data. He also believes that "Increasingly, consumers and customers will attempt to hold back their data".

Interestingly, the Technology Manifesto published this month by Policy Exchange puts forward that government policy should allow citizens to control how their personal data in public services are used. In the manifesto, using EHRs as an example, this is what UK citizens should be able to do: "They should be able to manually assign access rights to the general practitioners and doctors of their choosing".

Does this mean a new era of solidarity for patients, not just in the UK, but globally? If patients are not being treated fairly, could groups of patients withdraw access to the data they collect 24/7 about their bodies, until the government, health insurer or pharmaceutical company takes action? Are there insights about population health waiting to be discovered if we can marry clinical EHR data with data patients generate themselves? Will we be overwhelmed by all this data about us streaming 24/7? Where will all this patient generated data reside? How can we keep it private and secure? So many questions, and also, so many possibilities. 

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]


All we need is more data, right?

Our problems in healthcare today, and those we will face tomorrow, will most likely be solved by opening up datasets, throwing them into the hands of software developers & entrepreneurs, and letting the magic unfold. That was the underlying premise in Washington, DC this week. I flew over from England, to attend the 5th Health Datapalooza.

I've wanted to attend for the last 2 years, but other things came up. I have heard many things mentioned about the event, but wanted to experience it for myself. I'm glad I did.

Catching up with the legendary, Dave deBronkart, also known as ePatientDave! 


The event is aimed at improving US healthcare, but since America is often ahead of the world in health technology, I wanted to understand what they are doing. Compared to the often austere environments of conferences back in England, the datapalooza was full of glamour & glitz. 

2,000 people were in attendance, and kudos to the organisers for bringing all these people together. We were told that attendees had come from as far away as India & China. 

I wonder what people from 'emerging markets' like India & China would think when visiting the most powerful nation on Earth for a conference, only to find the wifi at the venue didn't work terribly well?

The opening keynotes on Day 1 were given by Dr Elliott Fisher, Karen Ignani, & Todd Park. I've been hearing great things about Todd Park for a while now, but never heard him speak in person, until now. He spoke with such energy, vigour & passion that you got the feeling he genuinely wants change. 

However, something bothered me. In the program for the event, it states how we are taking an important step towards a patient centred health system, powered by data. So, if the datapalooza was all about patients, why were there no patients on stage giving an opening keynote? We had keynotes from folks representing the medical profession, US government, and health insurance industry. From an attendee's perspective, I see this as incongruent. I'm not the only one who feels this way.

The UK's Secretary of State for Health, Jeremy Hunt also gave a keynote. Whilst I enjoyed most of his talk, I found it odd that when talking about why we should have greater transparency in healthcare, he had a slide with Joseph Stalin on it [Note: Stalin was a 20th century Soviet leader whose actions led to the deaths of millions]

Hearing Dr Atul Gawande speak was inspiring. He gets straight to the point, and shared his own practical examples. 

Once the keynotes were over, the rest of Day 1 had quite a few smaller sessions, running concurrently. Covering business, clinical care, community, research & more, these looked like they could be very interesting. However, since 3 or 4 sessions were running concurrently each time, you were forced to pick only one. I often found myself frustrated, as I liked two different sessions held at the same time. I don't enjoy it when conference organisers try to squeeze too much content into one day. 

One of the sessions I really enjoyed was, "Citizen/Patient - The Great Data Debate". Much of what was discussed was who should have access to our data, and how would the data we collect as patients be integrated with the data the system holds on us. There seems to be much uncertainty regarding the flood of patient generated data coming over the next few years. How will we ensure it's accurate? Who will own it? Who should develop standards? Government or industry? Do we even need more data? How can we trust those that hold our data for us? An executive at the VA recently stated that "patient generated data is going to be the thing that really transforms healthcare".

I was not able to attend the keynotes on Day 2, but one of the best quotes I found on the Twitter stream was from Adriana Lukas, founder of Quantified Self London. 

I did manage to attend one of the last sessions on Day 2, "Introducing OpenFDA", a new initiative aimed at making it easier & faster to access public datasets from the FDA. They are starting off with all the adverse drug event reports from 2004-2013. It's still in beta; the idea is to get entrepreneurs to build new tools & services using the OpenFDA API. Since I have worked in drug safety myself, I understand the potential value of new insights that may be gained by using these existing data in novel ways. Definitely worth keeping an eye on how OpenFDA develops.
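For a flavour of what building on this might look like, here is a hedged Python sketch that constructs a count query against the public OpenFDA drug adverse event endpoint (api.fda.gov) and summarises a made-up response in the documented shape. No live network call is made, and the sample figures are invented:

```python
import urllib.parse

# Sketch of querying the OpenFDA drug adverse event endpoint.
# The endpoint and the "search"/"count" parameters are part of the public API;
# the sample response below is made up - a live call needs network access.

BASE = "https://api.fda.gov/drug/event.json"

def count_query(field, search=None):
    """Build a URL asking OpenFDA to count adverse event reports by a field."""
    params = {"count": field}
    if search:
        params["search"] = search
    return BASE + "?" + urllib.parse.urlencode(params)

# Count the most frequently reported reactions in reports mentioning aspirin.
url = count_query("patient.reaction.reactionmeddrapt.exact",
                  search="patient.drug.medicinalproduct:aspirin")
print(url)

# A count response has the shape {"results": [{"term": ..., "count": ...}]}.
sample_response = {"results": [
    {"term": "NAUSEA", "count": 1200},     # invented figures
    {"term": "HEADACHE", "count": 950},
]}
top = max(sample_response["results"], key=lambda r: r["count"])
print(top["term"])  # → NAUSEA
```

A few lines like these are the kind of "novel use of existing data" the session was pitching: the heavy lifting of collecting and cleaning a decade of drug safety reports has already been done, and the API reduces a research question to a query string.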

More data, fewer problems?

I know from my own practical experience that data can be used to improve decision making. The smart use of data is only a good thing for everyone in health & social care. However, we often run before we can walk. I observe many healthcare organisations with "Big Data" on the strategic agenda. What's ironic is that these organisations often don't leverage the data they already have. I'll never forget a client I worked with many years ago. As a marketing manager, she wanted the agency I worked for to build her a brand new marketing database, complete with integrated predictive analytics (i.e. the ability to find those customers most likely to respond to a marketing campaign). I suspect she'd been influenced by a white paper she'd read. 

I pushed back, and challenged her. I knew that she didn't even know the basics about her customers. At that moment, I believed all she and her team needed were a few basic charts in Excel. I convinced my management & the client not to sign off on the huge, expensive database project. I turned out to be right: one day's work analysing the existing database by myself to produce 3 basic charts generated enough new insights to keep her & her team busy for a month. Less really can be more. 

What would Abraham Lincoln say about the rights of patients to own their health data?


In the future, as more data about our health is collected, stored & shared, privacy & security will become even more important. Yet not one keynote at Health Datapalooza focused on privacy & security. How can we make informed choices, when our leaders are shouting about the benefits, whilst being silent about the risks? It's healthy to consider the dark side of all these data being collected about our health. 

Consider the keynote on Day 2, by Dr Francis Collins, Director of NIH who cited Global Alliance for Genomics & Health during his talk. Sharing our genomic and clinical data to help advance science and medicine, that's admirable, right? Let's dig beneath the surface.

As I wrote in a post last year, the global alliance met with Google, Microsoft & Amazon Web Services at the end of 2012. Read the 4th paragraph on Page 16 of their white paper that was published 12 months ago. In a healthcare system that's powered by data, ask yourself, who stands to gain the most from collecting, storing & sharing genomic data on each of us? 

The other thing I noticed during the event was the focus on data in healthcare, with little reference to social care. Oh wait, there was an app demo by a firm called Purple Binder, which uses web applications to help people find community health services. Brilliant, but not every American uses the internet or email. 

Pew Research Center's report from April 2014, when looking at seniors, found that 41% do not use the internet at all, 53% do not have broadband access at home, and 23% do not use cell phones. In a thought-provoking blog post this week, Victor Wang reminds us that dementia care costs 5 times more than global warming. 

What do people living with Dementia need? The ability to download their own data or someone to care for them? Given our finite resources, what's a better use of our money? Building a new data platform or recruiting more nurses?

In the 21st century, do we want health & social care systems powered by data, or by people?

[Disclosure: I have no commercial ties with any of the individuals or companies mentioned above]


The leading healthcare organisation of 2025 will be...?

Google, Apple, Samsung or perhaps the government? Actually, it's likely to be the organisation that manages to collect the most accurate, complete and representative data about our health. That organisation may not even exist yet.

That's right, your data, our collective data, is likely to make other people very rich & powerful over the next decade.

A couple of days ago, I was in Dublin for the Health XL Global Gathering, which attracted Digital Health innovators from around the globe. Three moonshots were announced, one of which was the use of Big Data to improve life expectancy. Bill Taranto, President of Merck Global Health Innovation Fund, spelled out how they view our health data. Now, he was talking in the context of investing in startups that can build useful tools & services using our health data.

I don't have any issue with organisations wishing to profit from our health data. After all, we need incentives to spur new ideas, products and services that could solve the challenges ahead of us. Wherever I go in the world, I hear panel discussions about the need for "Big Data" to help patients, and the immense benefits that will arise from capturing, storing and using these data.

To the masses, it sounds like a no-brainer: it's utopia, just tell me where to sign up. Unfortunately, that's not the whole story. The conversation lacks balance. Regulators weigh up the risks vs benefits of any new drug prior to approval, yet according to conferences and media headlines, Big Data is all benefit with minimal or no risk.

That is completely inaccurate. There are massive risks on this journey, and we don't hear enough about them. Society as a whole needs to have a balanced conversation about our health data, not just a few people at conferences or academics in scientific journals. The data that we collect every day about our own health WILL at some point become MORE valuable than the data collected about our health when we visit the doctor. What I also find sad is that these panels discuss how Big Data can help patients, yet patients are nearly always absent from the panel. Come on, conference organisers: give attendees a chance to hear the voice of patients from the local community, even if what patients say upsets your corporate sponsors. I understand the situation. Writing the phrase "patient centred care" is easy. Living those words is another thing altogether.

Many equate the value of our health data with the value of our financial data. Again, that's wrong. It's not the same.

Those that know me well have heard my passion for moving to a world where patients own and control their own health data. I've written and spoken about my vision of the future many times already. What gave me hope this week was the news from Samsung, at their Voice of the Body event, that their new platform will do exactly what I've been campaigning for. This is brilliant news, and Samsung are blazing a trail here. I wonder if Apple & Google will follow their lead?

Muki & John from Intel Labs. Awesome people!


Now, I'm here in Washington, DC today for the Health Datapalooza event, where 2,000 people come together and exchange ideas on how data can be used to improve human health. Many of the leading thinkers & doers from around the globe will be here over the next few days, and I really hope that the conversations that take place are more balanced than in previous years at these types of events.

Well, I was blown away by the pre-conference workshop I attended today, hosted by Intel Labs and We The Data. An interactive workshop that tackled many questions ignored or glossed over, such as data literacy, and trust in the organisations that collect data on us. It was an extremely valuable experience.

After the workshop, I had to meet a friend outside the White House. I was running late, and asked the taxi driver with such vigour to take me there immediately that he wondered if I actually had a meeting there! When I arrived, I was compelled to send a message to the American people.

As an aside, it's exactly 2 years since my last day at GSK, where I spent almost a decade analysing the largest patient databases from the US & Europe. As an entrepreneur, it's been a spectacular and often frightening rollercoaster ride, working under my own steam. Having the freedom to express myself has been the single biggest benefit. I appreciate that not everyone has the circumstances to quit a permanent job and walk into the unknown. It highlights to me the importance of imagination. Many organisations tell me they have limited, or no, budget for innovation. I remind them that they have a limitless resource, often untapped: the imagination of the people who work there. Can you imagine your organisation behaving in a different way?

What now?

So, the next time you hear any organisation on the planet say they want to collect more data on us in order to improve health & social care, don't forget to ask them who exactly will own, control & profit from that data. Keep asking until you get an answer. Unfortunately, governments & corporations tend to listen to the masses only when they start screaming & shouting.

Or maybe ask them why they want to collect even more data when they haven't fully leveraged the data they already have? Ask who stands to benefit the most from the new initiative. Is it always the patient, or is it another group of people?


Maybe the leading healthcare organization in 2025 will be the one that genuinely operates with transparency, trust & integrity? Or maybe we shouldn't dream of a better world, and just accept that corporations & governments will sometimes put their own interests above those of the public good?

Do we need a World Data Organization that ensures our digital data rights are protected just as our basic human rights are protected? What do you think? There is nothing to stop us from establishing a dialogue that enables every stakeholder in health data to ensure we do the right thing, the right way, at the right time.

[Disclosure: I have no commercial ties with any of the companies mentioned]


Think twice before sharing your data

Who needs hospitals? We have smartphones, sensors and data!

According to Eric Topol, who is one of the leading voices in Digital Health, the smartphone is going to be the healthcare delivery platform of the future. Awesome, right? No need to go into a hospital in the future: the app on your phone can record your blood pressure and transmit it to your doctor over the internet.

Is it just a few rich people in California who believe this? Not according to Intel's latest research (see infographic below on what health information people are willing to share). The survey collected responses from people in Brazil, China, France, India, Indonesia, Italy, Japan and the United States. 84% would share their vital stats like blood pressure and 75% would share information from a special monitor that's been swallowed to track internal organ health. In fact, India is the country most willing to share healthcare information to aid innovation. Super awesome news, right?

Eric Dishman, Intel fellow and general manager of the company's Health and Life Sciences Group, says "Most people appear to embrace a future of healthcare that allows them to get care outside hospital walls, lets them anonymously share their information for better outcomes, and personalizes care all the way down to an individual's specific genetic makeup." 

Also, this week was the mHealth Summit in Washington, DC. It's the largest event of its kind, with over 5,000 people from around the world gathering. I attended last year, but participated this year from London via Twitter. Amazing energy and bold visions of the future of mHealth.

In fact, this week I also participated in the world's first G8 Dementia Summit via Twitter. "Big Data" captured from patients around the globe was cited by many of the leaders as one of the ways in which we can work to beat Dementia by 2025. Yes, the G8 set a rather ambitious goal of a cure (or disease-modifying drug) by 2025. Again, we just need to collect all this data from individuals, remove personal information, anonymise it, and Global Health will be transformed, right?

Easier said than done

Unfortunately, many of the people at conferences who are envisioning a world where we happily share our personal health data altruistically for the benefit of medical research to improve Global Health are unaware of the realities on the ground. "Big Data" seems to be inserted by anyone and everyone into their speeches and tweets. Doctors, politicians, and corporate leaders frequently use the phrase, in the hope that more people will sit up and pay attention to what they are saying.

Let's take anonymisation. If someone tells you that your personal data will be anonymised, aggregated, and made available to third parties, you tend to believe them when they say the data can't identify you. Let's see what the report from the Royal Society in June 2012 said:

"Computer science has now demonstrated that the security of personal records in databases cannot be guaranteed through anonymisation procedures where identities are actively sought"

It's good to have people like Professor Ross Anderson who dare to question the viability of anonymisation.
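The Royal Society's point can be made concrete with a toy sketch of a so-called linkage attack: an "anonymised" dataset with names stripped out is joined to a public, named dataset on the quasi-identifiers both share. Everything below, records, names and field values alike, is invented purely for illustration.

```python
# Toy linkage attack: names have been removed from the health data,
# yet matching on quasi-identifiers (ZIP code, date of birth, sex)
# against a public register re-identifies every record.
# All data here is fabricated for illustration only.

anonymised_health = [
    {"zip": "02138", "dob": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-03-04", "sex": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-21", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1962-03-04", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def reidentify(health_rows, register_rows):
    """Match de-identified health records to named records sharing
    the same quasi-identifiers."""
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"]
             for r in register_rows}
    matches = []
    for row in health_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            matches.append((index[key], row["diagnosis"]))
    return matches

print(reidentify(anonymised_health, public_register))
# [('Jane Doe', 'hypertension'), ('John Roe', 'diabetes')]
```

The uncomfortable part is that neither dataset did anything "wrong" on its own; the identification only emerges when the two are combined, which is exactly the scenario the Royal Society describes as identities being "actively sought".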

Now, there are tens of thousands of health apps, and generally how many of us take the time to read terms and conditions before downloading any app, let alone a health app? We trust the brand, don't we? How do we determine as consumers and patients, whether a health app is safe to use? 

Happtique, a company in the US, is working on a certification program for health apps. It's definitely a worthwhile initiative. Whilst I was monitoring the Twitter stream during the mHealth Summit, I noticed that a software developer at the event, Harold Smith, had shared a blog post with his findings that there were security issues with some apps that had passed Happtique's certification process. Yes, shocking news, but even more shocking is how little many people in this industry seem to care. Kudos to Happtique: they reacted swiftly to this news by suspending their certification program.

Here in the UK, the NHS have set up a health apps library, and their review process is listed too. Their website says, "All apps submitted to the Health Apps Library are checked to make sure that they are relevant to people living in England; comply with data protection laws and comply with trusted sources of information, such as NHS Choices". I've got no reason to doubt the security of the apps on the NHS library, but I'm curious: what if someone independent like Harold Smith took a look at these apps? What would his findings be?
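To give a flavour of what such an independent review might start with, here is a minimal, hypothetical sketch of one basic check: flagging any backend endpoint an app talks to that is not served over HTTPS, i.e. that would send health data in plaintext. The function name and URLs are invented for illustration; a real review would go much deeper (certificate validation, local storage, third-party trackers).

```python
# A basic first-pass check an independent reviewer might run against the
# list of endpoints extracted from a health app's network traffic:
# anything not using HTTPS transmits data in the clear.
# Endpoint URLs below are fabricated for illustration.
from urllib.parse import urlparse

def insecure_endpoints(urls):
    """Return the URLs whose scheme is not 'https' (no TLS encryption)."""
    return [u for u in urls if urlparse(u).scheme != "https"]

app_endpoints = [
    "https://api.example-health.com/v1/readings",  # encrypted: fine
    "http://sync.example-health.com/upload",       # plaintext: flagged
]

print(insecure_endpoints(app_endpoints))
# ['http://sync.example-health.com/upload']
```

A check this simple would catch exactly the class of problem Harold Smith reported, which is what makes the lack of it in a certification process so troubling.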

2014 & beyond 

In an ideal world, none of us as end users would have to worry about the security & privacy of our personal health data. We all want improved health, and improved healthcare, and we are told that mobile technology, sensors & big data could make the world a much better place. As a Digital Health Futurist, I truly want to believe that. 

However, the road ahead is potentially very dangerous, largely because the froth and hype in Digital Health is overshadowing the need for an open and candid discussion in society on the risks and benefits of going down this road. Companies such as GE, Intel & Cisco are pumping billions into the Internet of Things. This week the AllSeen Alliance was announced: standards to allow different devices to connect to each other. Again, exciting stuff, right?

Imagine: your smart toilet connected to your smart fridge connected to your smartphone. Personalised meal suggestions on your phone, based upon the combination of the clinical analysis of your urine and what food you have remaining in your fridge? More data about our health, more data about us, being transmitted between devices and apps over wifi. Hmmm, how many of us have stopped to reflect upon what safeguards are needed to prevent our bodies from becoming the target of hackers?

In principle, I'm not against any company or government collecting more data about us and our health. If collecting more data can help us develop a cure for diseases such as Cancer or Dementia, that would be an amazing achievement for science. 

However, I do want all of us, wherever we live on this planet, to be able to make INFORMED choices about how we share our health data, and who we share it with. Who will drive conversations that lead to a society where we can make informed choices about our health data? How do we get informed consent to participate in data sharing initiatives from those members of society who are vulnerable, such as children or older people with Dementia? Is that even ethical? 

One piece of good news this week: the Data & Society Research Institute, a new non-profit organisation, is launching in 2014. Based in New York City, it will be dedicated to addressing the social, technical, ethical, legal, and policy issues that are emerging because of data-centric technological development.


Data about us may be the key to improving the health of 7 billion people, but that can only happen if our rights are protected at all times. The issues are common to all personal data, not just health data. Perhaps the way forward is the creation of an international bill of digital rights?

 

[Disclosure: I have no commercial ties with any of the companies mentioned above]
