Honesty is the best medicine

In this post, I want to talk about lies. It's ironic that I'm writing this on the day of the US midterm elections, when the truth remains in short supply. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice: lie or lose. Deception, deceit, lying, however you want to describe it, it's part of what makes us human. I reckon we've all told a lie at some point, even if only a 'white lie' to avoid hurting someone's feelings. Some of us are better than others at spotting when people are not telling the truth. Some of us prefer to build a culture of trust. What if we had a new superpower: a future where machines tell us in real time who is lying?

What compelled me to write this post was reading a news article about a new EU trial of virtual border agents powered by Artificial Intelligence (AI), which aims to "ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars." I was fascinated to read statements about the new system such as "iBorderCtrl's system will collect data that will move beyond biometrics and on to biomarkers of deceit." Apparently, the system can analyse micro expressions on your face and feed that information into a risk score, which will then be used to determine what happens next. For now, it's not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right? If machines can help keep borders secure, why not? However, the system's accuracy rate isn't that great, and some are labelling this type of system as pseudoscience that will lead to unfair outcomes. It's essential we all pay attention to these developments, and subject them to close scrutiny.
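To make that concern concrete, here's a minimal sketch of how a system like this might fold facial signals into a single risk score. To be clear, iBorderCtrl hasn't published its actual method; the feature names, weights and threshold below are entirely my own invention. The point is that a hard threshold turns noisy signals into a binary "flag this traveller" decision, which is exactly why the accuracy criticisms matter.

```python
# A hypothetical traveller risk score built from micro-expression
# features. The features, weights and threshold are invented for
# illustration; this is NOT iBorderCtrl's published method.

FEATURE_WEIGHTS = {                 # assumed features, each scored 0.0-1.0
    "gaze_aversion": 0.4,
    "micro_expression_mismatch": 0.4,
    "voice_stress": 0.2,
}
FLAG_THRESHOLD = 0.6                # arbitrary cut-off

def risk_score(features: dict) -> float:
    """Weighted sum of feature scores, clamped to [0, 1]."""
    score = sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
                for name in FEATURE_WEIGHTS)
    return min(max(score, 0.0), 1.0)

traveller = {"gaze_aversion": 0.7, "micro_expression_mismatch": 0.5,
             "voice_stress": 0.9}
score = risk_score(traveller)
print(f"score={score:.2f}, flagged={score >= FLAG_THRESHOLD}")
```

With tens of thousands of crossings a day, even a modest false-positive rate at that threshold wrongly flags large numbers of innocent travellers.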

What if machines could one day automatically detect if someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, where the abstract of the paper opens with "We present a system for covert automated deception detection in real-life courtroom trial videos." As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. where someone has submitted a report to the police claiming they have been robbed, but the tool can find patterns that indicate the report is fraudulent. Apparently, the tool has a success rate of over 80%. I also came across a British startup, Human, that states on their website, "We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias", and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.
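VeriPol's exact internals aren't public, but tools of this kind are typically text classifiers trained on past reports. Here's a toy sketch using scikit-learn, with invented training examples, just to illustrate the mechanics, and why false positives are unavoidable:

```python
# A toy text classifier for flagging possibly false reports, in the
# general style of tools like VeriPol. The four training examples and
# their labels are invented; a real system would train on thousands of
# verified reports and would still misclassify some honest victims.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "Two men on a motorbike grabbed my bag on the high street at 6pm",
    "I was robbed from behind, I did not see anything at all",
    "A man threatened me with a knife near the metro and took my phone",
    "My expensive phone was stolen, I cannot remember where or when",
]
labels = [0, 1, 0, 1]   # 0 = consistent report, 1 = flagged as suspect

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(reports, labels)

new_report = "Someone stole my wallet but I saw nothing and have no details"
print(model.predict_proba([new_report])[0][1])   # probability of 'suspect'
```

Note that the model only learns which wordings correlate with fraud in its training data; it has no way of knowing whether any individual report is actually true.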

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In a 2014 survey in the USA on patient deception, 50% of respondents said they withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc's 2015 survey found that 25% of patients lie to their doctor. There was an interesting report about why some patients do not adhere to their doctor's advice: financial strain, with some low income patients reluctant to discuss their situation with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? In terms of doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I've read, lying appears to be a path some have to take when caring for those living with Dementia, to protect the patient.


Imagine you have a video call with your doctor, and on the other side, the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you're lying but your emotional state too. That's what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would that determination be recorded against the patient's medical record? What if the AI is wrong? Given that the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor's office? In some countries, will we see CCTV in the doctor's office with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, which tell the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don't want to tell the truth, but the machines around us that are tracking everything reveal it for us? I share the satirical video below of an Amazon Alexa fitted to a car; do watch it. Whilst it might be funny, there are potential challenges ahead for our human rights and civil liberties in this new era. Is AI-powered lie detection the path towards a society with enough transparency and integrity, or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


Engaging patients & the public is harder than you think

Back in 2014, Google acquired a British artificial intelligence startup in London, called Deepmind. It was their biggest EU purchase at that time, estimated to be in the region of 400 million pounds (approx $650 million). Deepmind's aim from the beginning was to develop ways in which computers could think like humans.

Earlier this year, Deepmind launched Deepmind Health, with a focus on healthcare. The initial focus appears to be building apps that can help doctors identify patients at risk of complications. It's not yet clear how they plan to use AI in the context of healthcare applications. However, a few months after launching this new division, they did start work with Moorfields Eye Hospital in London to apply machine learning to 1 million eye scans to better predict eye disease.

There are many concerns, which get heightened when articles are published with headlines such as "Why Google Deepmind wants your medical records". Many of us don't trust corporations with our medical records, whether it's Google or anyone else.

So I popped along to Deepmind Health's 1st ever patient & public engagement event held at Google's UK headquarters in London last week. They also offered a livestream for those who could not attend. 

What follows is a tweetstorm from me during the event, which nicely summarises my reaction to the event. [Big thanks to Shirley Ayres for reminding me that most people are not on Twitter, and would benefit from being able to see the list of tweets from my tweetstorm] Alas, due to issues with my website, the tweets are included as images rather than embedded tweets. 

Finally, whilst not part of my tweetstorm, this one question reminded me of the biggest question going through everyone's minds. 

Below is a 2.5 hour video which shows the entire event including the Q&A at the end. I'd be curious to hear your thoughts after watching the video. Are we engaging patients & the public in the right way? What could be done differently to increase engagement? Who needs to do more work in engaging patients & the public?

There are some really basic things that can be done, such as planning the event with consideration for the needs of those you are trying to engage, not just your own. This particular event was held at 10am-12pm on a Tuesday morning. 

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Painting a false picture of ourselves

In the quest to improve our health, we're on the path to capturing ever more data about ourselves, what we do, and what happens to us. It's no longer sufficient to capture data about our health only when we visit the doctor. Sensors are popping up all over the place, even in pills that help others determine whether we are actually taking our medication. Today, the most prevalent sensors are the ones in those wristbands and smart watches that track how many steps we've taken and how much we've slept. At some point in the future, many, if not all of us, will likely be monitored 24 hours a day. Recently, Target in the USA announced it will be offering a Fitbit activity tracker to each of its 335,000 employees.

There are already insurers in the US & UK offering rewards if you share data from your wearable and that data proves you are being active enough. In Switzerland, a pilot project by health insurer CSS is monitoring how many steps customers walk every day, with one implication being that "people who refuse to be monitored will be subject to higher premiums." In that same article, Peter Ohnemus of Dacadoo believes, "Eventually we will be implanted with a nano-chip which will constantly monitor us and transmit the data to a control centre."

Well, if pills with ingestible sensors are already here, then Ohnemus's vision may not be that far-fetched. En route to the nano-chip, I note that Samsung's new Sleepsense device, which sits under your mattress, tracks your sleep and analyses its quality, offers a feature where a daily report about your sleep can be emailed to family members. You might use it to track how your elderly parents, grandparents or children are sleeping. At the 5th EAI International Conference on Wireless Mobile Communication and Healthcare in London next month, there is a keynote titled "The car as a location for medical diagnosis." There is so much data about us that could be captured and shared with interested parties; it's an exciting new era for many of us.

SLEEPsense was launched when I visited IFA earlier this month

Not everyone is excited though. It's truly fascinating to observe how people might respond to the introduction of these new sensors into our lives. We're going to see many developments in 'smart home' technologies, and maybe Apple's HomeKit will be the catalyst for people to make their homes as smart as possible. Given ageing populations, maybe older people, especially those living alone, are the perfect candidates for these sensors and devices. Whilst their children, doctors and insurers may find the ability to 'remotely monitor' behaviour quite reassuring, what if the older person doesn't like being monitored? What strategies might they employ to hack the system? The short film below, 'Uninvited Guests', shows an elderly man and his smart home, and where the friction might occur.

Then you have 'Unfit Bits' which pokes fun at the growing trend of linking data from your activity tracker with your insurance. "At Unfit Bits, we are investigating DIY fitness spoofing techniques to allow you to create walking datasets without actually having to share your personal data. These techniques help produce personal data to qualify you for insurance rewards even if you can't afford a high exercise lifestyle." Check out their video. 

These videos are food for thought. Our daily choices and behaviour are going to come under increased scrutiny, and just because it's technically possible, will it be socially desirable? Decisions are increasingly being made by algorithms, and algorithms need data. There is a call for healthcare to be more of a data driven culture, but how will we know if the data coming from outside the doctor's office can be trusted? There is huge concern regarding the risks of health data being stolen, but little concern regarding how health data may be falsified. 

In the case of employers tracking employees, "Instead of feeling like part of a team, surveilled workers may develop an us-versus-them mentality and look for opportunities to thwart the monitoring schemes of Big Boss", writes Lynn Parramore in her post examining the dystopia of workplace surveillance.  As these new 'monitoring' technologies and associated services emerge and grow, at the same time, will we also observe the emergence of technologies that will allow us to paint a false picture of ourselves?

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]


Data or it didn't happen

Today, there is incredible excitement, enthusiasm and euphoria about technology trends such as Wearables, Big Data and the Internet of Things. Listening to some speakers at conferences, it often sounds like the convergence of these technologies promises to solve every problem that humanity faces. Seemingly, all we need to do is let these new ideas, products and services emerge into society, and it will be happily ever after. Just like those fairy tales we read to our children. Except life isn't a fairy tale, and neither is it always fair and equal. In this post, I examine how these technologies are increasingly of interest to employers and insurers when it comes to determining risk, and how this may impact our future.

Let's take the job interview. There may be some tests the candidate undertakes, but a large part of the interview is the human interaction, and what the interviewer(s) and interviewee think of each other. Someone may perform well during the interview, but turn out to underperform when doing the actual job. Naturally, that's a risk every employer wishes to minimise. What if you could minimise risk with wearables during the recruitment process? That's the message of a recent post on a UK recruitment website: "Recruiters can provide candidates with wearable devices and undertake mock interviews or competency tests. The data from the device can then be analysed to reveal how the candidate copes under pressure." I imagine there would be legal issues if an employer terminated the recruitment process simply on the basis of data collected from a wearable device, but it may augment the existing testing that takes place. Imagine the job is a management role requiring frequent resolution of conflicts, and your verbal answers convince the interviewer you'd cope with that level of stress. What if the biometric data captured from the wearable sensor during your interview showed that you wouldn't? We might immediately think of this as intrusive and discriminatory, but would this insight actually be a good thing for both parties? I expect all of us at one point have worked alongside colleagues who couldn't handle pressure, and whose reactions caused significant disruption in the workplace. Could this use of data from wearables and other sensors lead to healthier and happier workplaces?

Could those recruiting for a job start even earlier? What if the job involved a large amount of walking, and there was a way to get access to the last 6 months of activity data from the activity tracker you've been wearing on your wrist every day? Is sharing your health & fitness data with your potential employer the way that some candidates will get an edge over other candidates that haven't collected that data? That assumes that you have a choice in whether you share or don't share, but what if every job application required that data by default? How would that make you feel? 

What if it's your first job in life, and your employer wants access to data about your performance during your many years of education? Education technology used at school which aims to help students may collect data that could tag you for life as giving up easily when faced with difficult tasks. The world isn't as equal as we'd like it to be, and left unchecked, these new technologies may worsen inequalities, as Cathy O’Neil highlights in a thought provoking post on student privacy, “The belief that data can solve problems that are our deepest problems, like inequality and access, is wrong. Whose kids have been exposed by their data is absolutely a question of class.”

There is increasing interest in developing wearables and other devices for babies, tracking aspects of a baby, mainly to provide additional reassurance to the parents. In theory, maybe it's a brilliant idea, with no apparent downsides? Laura June doesn't think so. She states, "The merger of the Internet of Things with baby gear — or the Internet of Babies — is not a positive development." Her argument against putting sensors into baby gear is that it would increase anxiety levels in parents, not reduce them. I'm already thinking about the data gathered from the moment the baby is born. Who would own and control it? The baby, the baby's parents, the government or the corporation that made the software & hardware used to collect the data? Furthermore, what if the data from the baby could impact not just access to health insurance, but the pricing of the premium paid by the parents to cover the baby in their policy? Do you decide you don't want to buy these devices to monitor the health of your newborn baby in case one day that data might be used against your child when they are grown up?

When we take out health and life insurance, we fill in a bunch of forms and supply the information needed for the insurer to determine risk and calculate a premium. Rick Huckstep points out, "The insurer is not able to reassess the changing risk profile over the term of the policy." So, you might be active, healthy and fit when you take out the policy, but what if your behaviour and your risk profile change during the term of the policy? This is the opportunity some are seeing for insurers: to use data from wearables to track how your risk profile changes during the term of the policy. Instead of a static premium set at the outset, we have a world of dynamic and personalised premiums. Huckstep also writes, "Where premiums will adjust over the term of the policy to reflect a policyholder's efforts to reduce the risk of ill-health or a chronic illness on an on-going basis. To do that requires a seismic shift in the approach to underwriting risk and represents one of the biggest areas for disruption in the insurance industry."

Already today, you can link your phone or wearable to Vitality UK health insurance, and accumulate points based upon your activity (e.g. 10 points if you walk 12,500+ steps in a day). Get enough points and you can exchange them for rewards such as a cinema ticket. A similar scheme has also launched in the USA with John Hancock for life insurance.
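Mechanically, a scheme like this is just a lookup from steps to points. As a sketch: only the 12,500-step/10-point tier comes from the description above; the lower tier and the reward cost are my own placeholders.

```python
# Converting a day's step count into scheme points. Only the
# 12,500-step tier is taken from the Vitality description above;
# the intermediate tier and the reward price are placeholders.
def points_for_day(steps: int) -> int:
    if steps >= 12_500:
        return 10
    if steps >= 10_000:            # assumed intermediate tier
        return 5
    return 0

week_of_steps = [13_200, 9_800, 12_600, 4_000, 11_000, 14_500, 7_300]
total = sum(points_for_day(s) for s in week_of_steps)
CINEMA_TICKET_COST = 50            # placeholder reward price in points
print(f"{total} points; cinema ticket earned: {total >= CINEMA_TICKET_COST}")
```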

Is Huckstep the only one thinking about a radically different future? Not at all. Neil Sprackling, Managing Director of Swiss Re (a reinsurer), has said, "This has the potential to be a mini revolution when it comes to the way we underwrite for life insurance risk." In fact, his colleague, Oliver Werneyer, has an even bolder vision in a post entitled "No wearable device = no life insurance," in which he argues that in 5 to 10 years' time, you might not be able to buy life insurance if you don't have a wearable device collecting data about you and your behaviour. Direct Line, a UK insurer, believes that technology is going to transform insurance. Their Group Marketing Director, Mark Evans, has recently talked about technology allowing them to understand a customer's "inherent risk." Could we be penalised for deviating from our normal healthy lifestyle because of life's unexpected demands? In this new world, if you were under chronic stress because you suddenly had to take time off work to look after a grandparent who was really sick, would less sleep and less exercise result in a higher premium next month on your health insurance? I'm not sure how these new business models would work in practice.

When it comes to risk being calculated more accurately based upon this stream of data from your wearables, surely it's a win-win for everyone involved? Insurers can price risk more precisely, and you can benefit from a lower premium if you take steps to lower your risk. Then there are opportunities for entrepreneurs to create software & hardware that serve these capabilities. Would the traditional financial capitals such as London and New York be the centre of these innovations?

One of the big challenges to overcome, above and beyond established data privacy concerns, is data accuracy. In my opinion, these consumer devices that measure your sleep & steps are not yet accurate and reliable enough to be used as a basis for determining your risk, and your insurance premium. Sensor technology will evolve, so maybe one day there will be 'insurance grade' wearables that your insurer will be able to offer you, certified to be accurate, reliable and secure enough to be linked to your insurance policy. In this potential future, another issue is whether people will choose not to take out insurance because they don't want to wear a wearable, or simply don't like the idea of their behaviour being tracked 24/7. Does that create a whole new class of uninsured people in society? Or would there be so much of a backlash from consumers (or even policy makers) against insurers accessing this 24/7 stream of data about your health that this new business model never becomes a reality? If it did become a reality, would consumers switch to those insurers that could handle the data from their wearables?

Interestingly, who would be an insurer of the future? Will it be the incumbents, or will it be hardware startups that build insurance businesses around connected devices? That's the plan of Beam Technologies, who developed a connected toothbrush (yes, it connects via Bluetooth to your smartphone, and the app collects data about your brushing habits). Their dental insurance plan is rolling out in the USA shortly. Beam are considering adding incentives, such as rewards for brushing twice a day. Another experiment is Nest partnering with American Family Insurance. They supply you with a 'smart' smoke detector for your home, which "shares data about whether the smoke detectors are on, working and if the home's Wi-Fi is on." In exchange, you get a 5% discount off your home insurance.

Switching back to work, employers are increasingly interested in the data from employees' wearables. Why? Again, it's about a more accurate risk profile when it comes to the health & safety of employees. Take the tragic Germanwings crash this year, where it emerged that the pilot deliberately crashed the plane, killing all 150 people on board. At a recent event in Australia, it was suggested this disaster might have been avoided if the airline had been able to monitor stress in the pilot using data from a wearable device.

What other accidents in the workplace might be avoided if employers could monitor the health, fitness & wellbeing of employees 24 hours a day? In the future, would a hospital send a surgeon home because data from the surgeon's wearable showed they had not slept enough in the last 5 days? What about bus, taxi or truck drivers being monitored remotely for drowsiness using wearables? Those are some of the use cases that Fujitsu are exploring in their research in Japan. Conversely, what if you had been put forward for promotion to a management role, and a year's worth of data from the wearable you wore at work showed your employer that you got severely stressed in meetings where you had to manage conflict? Would your employer be justified in not promoting you, citing data suggesting that promotion would increase your risk of a heart attack? Bosses may be interested in accessing the data from your wearables just to verify what you are telling them. Some employees phone in pretending to be sick to get an extra day off. In the future, that may not be possible if your boss can check the data from your wearable and see that you haven't taken many steps because you're supposedly stuck in bed at home. If you can't trust your employees to tell the truth, do you just modify the corporate wellness scheme to include mandatory monitoring using wearable technology?
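The surgeon example could be implemented as a rule as crude as the sketch below. The five-day window and the seven-hour threshold are my own assumptions, which is exactly the problem: a hard rule has no idea why you slept badly.

```python
# A sketch of the 'send the surgeon home' rule discussed above: flag
# anyone whose tracked sleep over the last five days falls below a
# threshold. Both numbers are assumptions for illustration only.
MIN_AVG_SLEEP_HOURS = 7.0
WINDOW_DAYS = 5

def fit_for_duty(sleep_hours: list[float]) -> bool:
    recent = sleep_hours[-WINDOW_DAYS:]
    avg = sum(recent) / len(recent)
    return avg >= MIN_AVG_SLEEP_HOURS

surgeon_sleep = [7.5, 6.0, 5.5, 6.2, 5.8, 5.1]   # hours per night
print("fit for duty:", fit_for_duty(surgeon_sleep))   # -> False
```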

If it's possible for employers to understand the risk profile for each employee, would those under pressure to increase profits ever use the data from wearables to work out which employees are going to be 'expensive', and find a way to get them out of the company? It puts a whole new spin on 'People Analytics' and 'Optimising the workforce'. In a compelling post, Sarah O'Connor shares her experiment in wearing some wearables and sharing the data with her boss. Asked how that felt, she said, "It felt very weird, and actually, I really didn't like the feeling at all. It just felt as if my job was suddenly leaking into every area of my life. Like on the Thursday night, a good friend and colleague had a 30th birthday party, and I went along. And it got to sort of 1 o'clock, and I realized I was panicking about my sleep monitor and what it was going to look like the next day." We already complain about checking work emails at home and the boundaries between work and home blurring. Do you really want to be thinking about how skipping your regular Monday night session at the gym would look to your boss? Sometimes, though, devices that betray us can be a good thing for society. Take the recent case of a woman in the USA who reported being sexually assaulted whilst asleep in her own home at night. The police used the data from the activity tracker on her wrist to show that at the time of the alleged attack, she was not asleep but awake and walking. On the other hand, those with malicious intent could hack into these devices and falsify the data to frame you for a crime you didn't commit.

If these trends continue to converge, I see enterprising criminals rubbing their hands with glee: a whole new economy dedicated to falsifying the stream of data from your wearable/IoT device to your school, doctor, insurer or employer, or whoever is going to be making decisions based upon that stream of data. Imagine it's the year 2020, you are out partying every night, and you pay a hacker to make it appear that you slept 8 hours a night. So many organisations are blindly jumping into data driven systems with the mindset of 'In data, we trust' that few bother to think hard enough about the harsh realities of real world data. Another aspect is bias in the algorithms using this data about us. Hans de Zwart has written an illuminating post, "Demystifying the algorithm: Who designs our life?" De Zwart shows us the sheer amount of human effort behind Google Maps and the routes it generates for us: "The incredible amount of human effort that has gone into Google Maps, every design decision, is completely mystified by a sleek and clean interface that we assume to be neutral. When these internet services don't deliver what we want from them, we usually blame ourselves or "the computer". Very rarely do we blame the people who made the software." With all these potential new algorithms classifying our risk profile based upon data we generate 24/7, I wonder how much transparency, governance and accountability there will be.

There is much to think about and consider; one of the key points is the critical need for consumers to be rights-aware. An inspiring example of this is Nicole Wong, the former US Deputy CTO, who wrote a post explaining why she makes her kids read privacy policies. One sentence in particular stood out to me: "When I ask my kids about what data is collected and who can access it, I am asking them to think about what is valuable and what they are prepared to share or lose." Understanding the value exchange that takes place when you share your data with a provider is a critical step towards being able to make informed choices. That's assuming all of us have a choice in the sharing of our data. In the future, when we teach our children how to read and write English, should they be learning 'A' is for algorithm, rather than 'A' is for apple? I gave a talk in London recently on the future of wearables, and I included a slide on when wearables will take off (slide 21 below). I believe they will take off when we have to wear them, or when we can't access services without them. Surgeons and pilots are just two of the professions which may have to get used to being tracked 24/7.

Will the mantra of employers and insurers in the 21st century be, "Data or it didn't happen?"

If Big Data is set to become one of the greatest sources of power in the 21st century, that power needs a system of checks and balances. Just how much data are we prepared to give up in exchange for a job? Will insurance really be disrupted or will data privacy regulations prevent that from happening? Do we really want sensors on us, in our cars, our homes & our workplaces monitoring everything we do or don't do? Having data from cradle to grave on each of us is what medical researchers dream of, and may lead to giant leaps in medicine and global health. UNICEF's Wearables for Good challenge could solve everyday problems for those living in resource poor environments. Now, just because we might have the technology to classify risk on a real time basis, do we need to do that for everyone, all the time? Or should policy makers just ban this methodology before anyone can implement it? Is there a middle path? "Let's add in ethics to technology" argues Jennifer Barr, one of my friends who lives and works in Silicon Valley. Instead of just teaching our children to code, let's teach them how to code with ethics. 

There are so many questions, and still too few places where we can debate these questions. That needs to change. I am speaking at two events in London this week where these questions are being debated, the Critical Wearables Research Lab and Camp Alphaville. I look forward to continuing the conversation with you in person if you're at either of these events. 

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]





The paradox of privacy

When you're driving the car, would you let an employee from a corporation sit in the passenger seat and record details on what route you're taking, which music you listen to and the text messages you send and receive? 

When you're sitting at home watching TV with your family, would you let an employee from a corporation sit on the sofa next to you and record details on what types of TV shows you watch? 

When you're in the gym working out, when you're going for your daily walk, would you let an employee from a corporation stand alongside you and record details on how long you walked, where you walked, and how your body responded to the physical activity? 

I suspect many of you would answer 'No' to all 3 questions. However, that's exactly the future that is being painted after the recent Google I/O event. Aimed at software developers, it revealed a glimpse of what Google have got planned for the year ahead. New services such as Android Wear, Android Auto, Android TV and Google Fit promise to change our lives. 

In an article titled "Google's master plan: Turn everything into data!", David Auerbach explains how more sensors in our homes, cities and on our bodies represent a hugely lucrative opportunity for a company like Google. "That information is also useful to companies that want to sell you things. And if Google stands between your data and the sellers and controls the pipe, then life is sweet for Google."

In a brilliant article, Parmy Olson writes about the announcement at I/O of Google Fit, a new platform: "There's a major advertising opportunity for Google to aggregate our health data in a realm outside of traditional search". During the event, Google did state that users would control what health and fitness data they share with the platform. Let's see whether corporate statements translate into actual terms & conditions in the years ahead.

Do we even realise how much personal data are stored on our phones?

Why are companies like Google so interested in the data from your body in between doctor visits? As I've stated before, our bodies generate data 24/7, yet it's only currently captured when we visit the doctor. So, the organisation that captures, stores & aggregates that data at a global level is likely to be very profitable, as well as wielding significant power in health & social care. 

Indeed, it could also prove transformative for those providing & delivering health & social care. In the utopian vision of health systems powered by data, this constant stream of data about our health might allow the system to predict when we're likely to have a heart attack, or a fall.

Privacy and your baby

When people have a baby, some things change. It's human nature to want to protect and provide for our children when they are helpless and vulnerable. For example, someone may decide to upgrade to a safer car once they have a baby. We generally do everything we can to give our children the best possible start in life.

If you have a newborn baby, would you allow an employee from a corporation to enter your home, sit next to your baby and record data on its sleeping patterns? In the emerging world of wearable technology, some parents are considering using products and services where their baby's data would be owned by someone else.

Sproutling is a smart baby monitor, shipping in March 2015 but taking pre-orders now. It attaches to your baby's ankle, measures heart rate and movement, and interprets mood. It promises to learn and predict your baby's sleep habits. You've got an activity and sleep tracker for yourself, so why not one for your baby, right? According to their website today, 31% of their monitors have been sold. The privacy policy on their website is commendably short, but not explicit enough in my opinion. So I went on Twitter to quiz Sproutling about who exactly owns the data collected from the baby using the device. As you can see, they referred me back to their privacy policy, and didn't really answer my question.

The paradox

What's fascinating is how we say one thing and do another. A survey of 4,000 consumers in the UK, US and Australia found that 62% are worried about their personal data being used for marketing. Yet, 65% of respondents rarely or never read the privacy policy on a website before making a purchase. 

In a survey by Accenture Interactive, they found that 80% of people have privacy concerns about wearable Internet of Things connected technologies. Only 9% of those surveyed said they would share their data with brands for free. Yet that figure rose to 28% when respondents were offered a coupon or discount based upon their lifestyle.

Ideally, there would be a way in which we as consumers could own and control our personal data in the cloud and even profit from it. In fact, it already exists. The Respect Network promises just that, and was launched globally at the end of June 2014. From their website, "Respect Network enables secure, authentic and trusted relationships in the digital world". Surely, that's what we want in the 21st century? Or maybe not. I haven't met a single person who has heard of Respect Network since they launched. Not one person. What does that tell you about the world we live in?

Deep down, are we increasingly becoming apathetic about privacy? Is convenience a higher priority than knowing that our personal data are safe? Is being safe and secure in the digital world just a big hassle?

A survey of 15,000 consumers in 15 countries for the EMC Privacy Index found a number of behavioural paradoxes, one of which they termed "Take no action": "although privacy risks directly impact many consumers, most take virtually no action to protect their privacy – instead placing the onus on government and businesses". It reminds me of an interaction I had on Twitter recently with Dr Gulati, an investor in Digital Health.

What needs to change?

Our children are growing up in a world where their personal data are going to be increasingly useful (or harmful), depending upon the context. What are our children taught at school about their personal data rights? It's been recently suggested that schools in England should offer lessons about sex and relationships from age 7, part of a "curriculum for life". Shouldn't the curriculum for life include being educated about the intersection of your personal data and your privacy?

We are moving towards a more connected world, whether we like it or not. Personally, I'm not averse to corporations and governments collecting data about us and our behaviour, as long as we are able to make informed choices. I like how in this article about the Internet of Things and privacy, Marc Loewenthal writes "discussions about the data created are far more likely to focus on how to use the data rather than how to protect it". Loewenthal also goes on to mention how the traditional forms of delivering privacy guidelines to consumers aren't fit for purpose in an increasingly connected world, "They typically ignore the privacy notices or terms of use, and the mechanisms for delivering the notices are often awkward, inconvenient, and unclear".

When was the last time you read through (and fully understood) the terms and conditions and privacy policy of a health app or piece of wearable technology? So many more connected devices, each with their own privacy policy and terms and conditions. Not something I look forward to as a consumer. The existing approach isn't effective; we need to think differently about how we can truly enable people to make informed choices in the 21st century.

Now, what if each of us had our OWN terms and conditions and privacy policy, and we could then see whether a health app meets OUR criteria? We, as consumers, decide in advance what we want to share, with whom, and what we expect in return. How would that even work? Perhaps we'd need to cluster similar needs together to form, say, 5 standard privacy profiles. Imagine comparing three different health apps which do the same thing, but you can see instantly that only one of them has a privacy profile that meets your needs. Or even when browsing through the app store, you choose to only be shown those apps that match your privacy profile. That would definitely make it easier for each of us to make an informed choice.
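Here's a sketch of what the matching itself could look like, assuming apps would declare their practices in the same machine-readable vocabulary as your profile. That shared vocabulary is the genuinely hard, unsolved part; all the names and apps below are invented.

```python
# Matching a personal privacy profile against app policies. The
# vocabulary of practices and the example apps are invented; getting
# apps to declare policies in a shared machine-readable format is the
# real obstacle, not the comparison itself.
my_profile = {
    "sells_data_to_third_parties": False,   # I forbid this
    "shares_with_insurers": False,          # and this
    "anonymised_research_use": True,        # happy to support research
}

apps = {
    "StepCounterA": {"sells_data_to_third_parties": True,
                     "shares_with_insurers": False,
                     "anonymised_research_use": True},
    "StepCounterB": {"sells_data_to_third_parties": False,
                     "shares_with_insurers": False,
                     "anonymised_research_use": True},
}

def matches(profile: dict, policy: dict) -> bool:
    # An app matches if it does nothing the user has forbidden.
    # Undeclared practices are assumed to happen (policy.get(..., True)).
    return all(policy.get(practice, True) is False
               for practice, allowed in profile.items()
               if not allowed)

compatible = [name for name, policy in apps.items()
              if matches(my_profile, policy)]
print(compatible)   # -> ['StepCounterB']
```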

Things are changing: it was revealed last night that Apple have tightened privacy rules in their new operating system for people developing apps using their new HealthKit API. An article cites text pulled from the licence: developers may "not sell an end-user's health information collected through the HealthKit API to advertising platforms, data brokers or information resellers," and are barred from using gathered data "for any purpose other than providing health and/or fitness services."

Apps using the HealthKit API must also provide privacy policies.

This news is definitely a big step forward for anyone who cares about the privacy of their health data, although a guaranteed link to a privacy policy doesn't necessarily mean it will be easy for consumers to understand. I also wonder how companies that develop health apps using the HealthKit API will make money, given that current business models are based around the collection and use of data.

Will the news from Apple make you more likely as a consumer to download a health app for your iPhone vs your Android device? Will it cause you to trust Apple more than Google or Samsung? Have Apple gone far enough with their recent announcement, or could they do more? Will Apple's stance lead to them becoming THE trusted hub for our health data, above and beyond the current healthcare system?

How can we as individuals do more to become aware of our rights? As well as the campaigns to teach people to learn how to code, should we have campaigns to teach people how to protect their privacy? When commentators write that privacy is dead, do you believe them?

We're heading towards a future where over the next decade it will become far easier to use sensors to monitor the state of our bodies. Would you prefer a future where my body=my data or my body=their data? The choice is yours.

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]


What is the future of data driven hospitals?

One of the critical factors for patient safety in hospitals is identifying the patient correctly. The wrong medication given to the wrong patient at the wrong time could have serious, even fatal, consequences. Patient wristbands are a start, but wristbands that contain barcodes are even better. According to GS1's website, in October 2013 it became mandatory in NHS England for all patient wristbands to contain a GS1 barcode. I wonder if we can improve even further?

A couple of things I've seen or tried recently got me thinking:

- Spanish airline Vueling becoming the first to send boarding passes to a smartwatch
- My experience wearing the Samsung Gear Fit on my wrist
- Software from Japan that works with smart glasses to help you get info by looking at a barcode

In the future, if you were due to go into hospital, what if you could get your hospital 'boarding pass' sent to your smartwatch 24 hours before your visit? What if when you 'checked in' at the hospital, a member of staff was automatically notified of your arrival, on THEIR smartwatch? What if when a member of hospital staff wearing smart glasses wants to identify who you are, they simply look at your smartwatch that's displaying your barcode? 
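As a sketch of the 'boarding pass' idea: generate a scannable code from a visit payload and push it to the watch. The payload fields below are invented for illustration and are not a real GS1 format; `qrcode` is a common third-party Python package for producing the image.

```python
# Sketch of issuing a hospital 'boarding pass' to a patient's
# smartwatch as a scannable code. The payload fields are invented,
# NOT a real GS1 specification; a real system would also avoid
# putting identifiable data in the code itself.
import json
import qrcode   # third-party package: pip install qrcode[pil]

boarding_pass = {
    "visit_id": "V-2014-0042",      # hypothetical opaque identifier
    "ward": "Day Surgery Unit",
    "arrival_window": "2014-06-12T08:30/09:00",
}

img = qrcode.make(json.dumps(boarding_pass))
img.save("boarding_pass.png")       # pushed to the patient's watch app
```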

Could that do even more to improve patient safety? Many observers continue to regard these individual technologies as crude & clumsy, and I'm right there with them.

However, when you stop for a moment to imagine how they could be used together to do something that's never been done before, it makes you think. I ask you: what currently exists that alone is not that great, but when combined with a couple of other technologies, solves your problem?

Or is it simply a case of repurposing wearable tech to suit your own needs, as in the case of a creative friend of mine, Anthony Harvey, who wanted to see if the Gear Fit was capable of something new?

Now add to the mix Bluetooth 4.1, due at the end of 2014. What will moving from the current Bluetooth 4.0 to 4.1 mean for hospitals? Well, in theory, the heart rate monitor/activity tracker worn on your wrist in 2015 could send data directly from your wearable device into your medical records, via the cloud.
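In outline, the plumbing is simple; it's everything around it that's hard. Here's a sketch with a hypothetical hospital endpoint: the URL, token and payload schema are all made up, and a real integration would need an agreed standard (such as HL7 FHIR) plus an explicit patient consent flow.

```python
# Sketch of a wearable (via its phone/hub) pushing a reading to a
# medical-records API. The endpoint, token and payload schema are
# hypothetical; a real integration would use a standard like HL7 FHIR
# and a proper patient consent mechanism.
import requests   # third-party: pip install requests

reading = {
    "patient_id": "example-patient-123",       # hypothetical identifier
    "type": "heart_rate",
    "value_bpm": 72,
    "recorded_at": "2014-06-12T07:45:00Z",
    "source_device": "wrist-tracker-model-x",  # invented device name
}

resp = requests.post(
    "https://records.example-hospital.org/api/observations",  # made up
    json=reading,
    headers={"Authorization": "Bearer <patient-consent-token>"},
    timeout=10,
)
resp.raise_for_status()
```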

So even before you've arrived at hospital for your surgery, they could have much more data about you compared to the hospitals of today. As you can see, the role of data in providing the best possible care becomes ever more central.

How safe is your data in the hospital?

I shared an article on the Internet of Things via Twitter recently, and one of the people who engaged with me as a result was Scott Erven, based in the USA. He's done significant research into the security risks associated with the use of hospital equipment, and there's an eye opening WIRED article recently published about his work, and what needs to change. 

Quoting from the article, how many of you are shocked to read his findings? "In a study spanning two years, Erven and his team found drug infusion pumps – for delivering morphine drips, chemotherapy and antibiotics – that can be remotely manipulated to change the dosage doled out to patients; Bluetooth-enabled defibrillators that can be manipulated to deliver random shocks to a patient's heart or prevent a medically needed shock from occurring; X-rays that can be accessed by outsiders lurking on a hospital's network; temperature settings on refrigerators storing blood and drugs that can be reset, causing spoilage; and digital medical records that can be altered to cause physicians to misdiagnose, prescribe the wrong drugs or administer unwarranted care."

It certainly gave me a wake-up call. I had a video call with Scott this week, and the conversation was illuminating. With Wearables and the Internet of Things touted as technologies that are going to lead to an explosion in data (about each of us) and, ultimately, drive potential improvements in health & social care, there is also a dark side.

Many of the articles, talks & press releases in Digital Health make it appear that this bold new world will be everything we've wanted in health & social care; it will be Utopia. Without stringent governance, accountability & trust, it could end up being our worst nightmare.

What if someone wanted to hack into hospital equipment, your wearable devices or your health data, because they had malicious intent? What if an organisation, or even one person wanted to inflict a terrorist attack, and cause a serious loss of life? Instead of bombs, would they simply sit in front of a laptop & exploit the cyber security vulnerabilities that exist today (and may still exist tomorrow) in hospitals?

What if someone wanted to specifically target you by modifying your health records to show that you'd had a mental health issue? It was recently reported that a British woman had her employment offer from Emirates Airlines withdrawn after her medical records revealed an episode of Depression in 2012.

The UK took a bold step last year in publishing mortality rates for individual hospital consultants in ten specialties. Greater transparency is to be encouraged, and will hopefully improve levels of care. Should we also campaign for the publication of hospital data breaches?

Can we actually trust the data the government publishes? Look at the recent scandal in the USA at the Veterans Administration, where it has come to light that waiting times for medical treatment were misreported.

A recent survey found that 50% of UK citizens don't trust the NHS with their personal data.

Today, when I speak to people around the world who use any form of health & social care, they are primarily concerned about access, quality & cost. In the future, those people may be adding 'privacy & security of my data' to that list.

The Digital Health community, along with government, has to address this sooner rather than later.

Quite frankly, I don't see the point of gathering all this data on patients if we can't assure them that we've taken every step possible to keep it private & secure.

[Disclosure: I have no commercial ties with any of the companies or individuals named in this post]


The future of your health data

Your health data usually belongs to someone else. If you go to see a doctor and are diagnosed, the electronic record of that diagnosis is stored and could become part of a much larger anonymised dataset. If you're in the US, you may be one of the 180 million patients whose health insurance claims data are part of the MarketScan data from Truven Health Analytics. If you're in England, you may be aware of the government's plans to build a dataset called care.data, containing the GP & hospital data of the 53 million patients who live in England. If you use an activity tracker, such as a FitBit, you're once again giving your personal health data away, and it may or may not be sold in the future.

Naturally, one of the most important applications of all these data is to improve human health, especially for medical researchers looking to understand how we get sick and how we respond to drugs & vaccines in the real world. These data are also valuable to health insurers and healthcare providers when it comes to improving their services.

Nearly a year ago, at TEDx O'Porto, I shared my radical vision of how 7 billion people could get paid for sharing their health data, as well as having full control over who can access that data. Many leaders in the healthcare arena have laughed at my dream, or have responded with silence. It tends to be patients & startups that get most excited at my ideas. That's understandable, as we are talking about big changes in how we collect health data, store it and sell it. These changes are not going to happen overnight, but I'm pleased to see that changes are happening faster than I anticipated. 

I read an article today in MIT Technology Review about a New York-based startup, DataCoup. According to the article, "DataCoup are running a beta trial where people get $8 a month in return for access to a combination of their social media accounts, such as Facebook and Twitter, and the feed of transactions from a credit or debit card." Looking at DataCoup's website, it claims to be the first personal data marketplace.

Interestingly, the article also says, "The company also might offer people the option of sharing data from lifelogging devices such as the FitBit or parts of their Web search history." When I tweeted earlier today, DataCoup confirmed that incorporating health data is in their plans.

The dawn of a new industry?


This news is extremely exciting for me, and gives me hope that 2014 is likely to be a turning point in raising awareness of how valuable our health data is. If you suffer from multiple diseases and take multiple medications, your data may be more valuable to 3rd parties than that of someone who is healthy and not on any medications. Many entities currently profit from using your health data. Time for patients to share in that profit?

There is also a London startup called Handshake, another personal data marketplace. Their website states, "Handshake is an app and a website that allows you to negotiate a price for your personal data directly with the companies that want to buy it." They appear to be in a closed beta at the moment.

Then you have the concept of patient data co-operatives. Our Health Data Co-Operative is in the US, and has recently been recognised by the White House as playing a role in promoting "Data to Knowledge to Action". The founder, Patrick Grant, states, "Our Health Data Cooperative is built on the premise that Patients should benefit economically from access by third parties to their health information."

Over to Europe, where I recently came across HealthBank, a patient data co-operative based in Switzerland that is aiming to build a global secure depository for patient data. Their website talks of patients having "a HealthBank account, to store, access, manage and share their health data. And users can earn financial and other returns on their health data, similar to receiving returns from a bank account."

You've heard of Bitcoin, the cryptocurrency that's hit the news. What if you could trade your health data for Healthcoins that could be used to pay for your healthcare or for healthy food? Andre Boorsma in the Netherlands has put forward the concept of Healthcoins. I'm curious: would this concept be tried in Emerging Markets first?
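Boorsma's concept doesn't specify a mechanism, but the bookkeeping could be as simple as a credit/debit ledger. A toy sketch, with invented account names and exchange rates:

```python
# A toy Healthcoin ledger: earn coins by sharing data, spend them on
# care or food. All names and rates are invented; this is not a
# description of Boorsma's actual proposal.
LEDGER = []

def balance(account: str) -> int:
    return sum(entry["delta"] for entry in LEDGER
               if entry["account"] == account)

def earn(account: str, dataset: str, coins: int) -> None:
    LEDGER.append({"account": account, "delta": +coins, "memo": dataset})

def spend(account: str, item: str, coins: int) -> bool:
    if balance(account) < coins:
        return False
    LEDGER.append({"account": account, "delta": -coins, "memo": item})
    return True

earn("alice", "3 months of activity data", 40)            # invented rate
print(spend("alice", "healthy food box", 25), balance("alice"))  # True 15
```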

What's the catch?

Exciting stuff, and we are entering a new era in the creation & use of personal health data. However, there are important hurdles to overcome. The first one is trust. The companies listed above have to build trust with the individuals who would be sharing data. Building trust takes time, unless you partner with an existing brand that is already trusted. Would you be more willing to use the services of DataCoup, Handshake, OurHDC, or HealthBank if they were associated with Amazon or Samsung? 

The second hurdle relates to privacy, security & governance. Do we have the technology in place to genuinely keep our personal data private & secure on these emerging platforms? Do we have the legislation (both at country level & internationally) to fairly govern the sharing, management and trading of these data? There is also the thorny issue of obtaining informed consent. What about the vulnerable, such as a person with Dementia who may be given a Fitbit to wear whilst someone else profits from their activity data being traded?

Another issue is going to be accuracy, especially with health data that can be generated using wearable technology. Users are manipulating fitness trackers, as reported here. If you're a researcher buying access to aggregated data on Fitbit users, how accurate are the data? How representative will these data be of the general population? 
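One line of defence for researchers would be plausibility checks on the raw stream before using it. A sketch, with invented thresholds; note that determined spoofing (a tracker strapped to a dog, say) would still pass.

```python
# Sketch of a plausibility filter for step data before using it in
# research. Both thresholds are invented for illustration, and
# sophisticated spoofing that mimics human patterns would still pass.
MAX_STEPS_PER_MINUTE = 220      # above a fast running cadence
MAX_STEPS_PER_DAY = 60_000      # beyond plausible daily totals

def plausible_day(per_minute_steps: list[int]) -> bool:
    if sum(per_minute_steps) > MAX_STEPS_PER_DAY:
        return False
    return all(m <= MAX_STEPS_PER_MINUTE for m in per_minute_steps)

# A crude spoof: a constant 250 steps/min for 5 hours, then nothing.
spoofed = [250] * 300 + [0] * 1140   # 1,440 minutes in a day
print(plausible_day(spoofed))        # -> False
```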

If we can trade our health data for economic returns, will this commoditization of our health data attract the attention of cybercriminals? 

What about Open Data? Some people argue that these new sources of health data should be donated into a commons, free for researchers to use for the benefit of humanity. 

What does this mean for you? 

Health data brokers - you need to be thinking about these trends, and how you adapt your company's strategy. If you don't, your future revenue streams are likely to suffer (or disappear!)

Healthcare providers & insurers - Are you ready for a world in which patients can choose who they want to share their health data with? 

Patients - Would you feel comfortable trading your health data for economic rewards? 

Pharmaceutical companies - How will this impact how you source data for clinical trials & observational studies? 

Startups - Immense opportunities (and pitfalls) ahead. If personal data marketplaces and patient data co-operatives take off, it could create a brand new industry. 

Policymakers & regulators - The world is definitely moving towards a personal data ecosystem where individuals can own, control and profit from their own data. Legislation needs to consider the rights of everyone involved in such a system. Will there be a special tax for those who decide to sell their health data?

[Disclosure: I have no commercial ties with any of the companies mentioned above]


Think twice before sharing your data

Who needs hospitals? We have smartphones, sensors and data!

According to Eric Topol, one of the leading voices in Digital Health, the smartphone is going to be the healthcare delivery platform of the future. Awesome, right? No need to go into a hospital in the future; the app on your phone can record your blood pressure and transmit it to your doctor via the internet.

Is it just a few rich people in California who believe this? Not according to Intel's latest research (see the infographic below on what health information people are willing to share). The survey collected responses from people in Brazil, China, France, India, Indonesia, Italy, Japan and the United States. 84% would share their vital stats such as blood pressure, and 75% would share information from a special monitor they've swallowed to track internal organ health. In fact, India is the country most willing to share healthcare information to aid innovation. Super awesome news, right?

Eric Dishman, Intel fellow and general manager of the company's Health and Life Sciences Group, says "Most people appear to embrace a future of healthcare that allows them to get care outside hospital walls, lets them anonymously share their information for better outcomes, and personalizes care all the way down to an individual's specific genetic makeup." 

Also, this week was the mHealth Summit in Washington, DC. It's the largest event of its kind; over 5,000 people from around the world gathered. I attended last year, but participated this year from London via Twitter. Amazing energy and bold visions for the future of mHealth.

In fact, this week I also participated in the world's first G8 Dementia Summit via Twitter. "Big Data" captured from patients around the globe was cited by many of the leaders as one of the ways we can work to beat Dementia by 2025. Yes, the G8 set the rather ambitious goal of a cure (or disease-modifying drug) by 2025. Again, we just need to collect all this data from individuals, strip out the personal information so it's anonymised, and Global Health will be transformed, right?

Easier said than done

Unfortunately, many of the people at conferences envisioning a world where we happily share our personal health data altruistically for the benefit of medical research and Global Health are unaware of the realities on the ground. "Big Data" seems to be inserted by anyone and everyone into their speeches and tweets. Doctors, politicians and corporate leaders frequently use the phrase in the hope that more people will sit up and pay attention to what they are saying.

Let's take anonymisation. If someone tells you that your personal data will be anonymised, aggregated and made available to 3rd parties, you believe them when they say the data can't identify you. Let's see what the report from the Royal Society in June 2012 said:

"the security of personal records in databases cannot be guaranteed through anonymisation procedures"

"Computer science has now demonstrated that the security of personal records in databases cannot be guaranteed through anonymisation procedures where identities are actively sought"

It's good to have people like Professor Ross Anderson who dare to question the viability of anonymisation.
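To make the risk concrete, here is a toy sketch (in Python) of a linkage attack: an "anonymised" health table with names removed is re-identified by joining its quasi-identifiers against a public register. Every record below is invented.

# "Anonymised" health data: names removed, but quasi-identifiers kept
anonymised_health = [
    {"postcode": "SW1A", "birth_year": 1962, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "M1",   "birth_year": 1980, "sex": "M", "diagnosis": "asthma"},
]

# A public dataset, e.g. an electoral roll, with names attached
public_register = [
    {"name": "Jane Example", "postcode": "SW1A", "birth_year": 1962, "sex": "F"},
    {"name": "John Example", "postcode": "M1",   "birth_year": 1980, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(health_rows, register_rows):
    """Return (name, diagnosis) pairs where the quasi-identifiers
    match exactly one person in the public register."""
    hits = []
    for h in health_rows:
        key = tuple(h[q] for q in QUASI_IDENTIFIERS)
        matches = [r for r in register_rows
                   if tuple(r[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # unique match => identity recovered
            hits.append((matches[0]["name"], h["diagnosis"]))
    return hits

print(reidentify(anonymised_health, public_register))
# [('Jane Example', 'diabetes'), ('John Example', 'asthma')]

This is essentially the technique behind the well-known re-identification studies: the more attributes a dataset retains, the more likely some combination of them is unique to one person.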

Now there are tens of thousands of health apps, and how many of us take the time to read the terms and conditions before downloading any app, let alone a health app? We trust the brand, don't we? So how do we determine, as consumers and patients, whether a health app is safe to use?

A company in the US, Happtique, is working on a certification program for health apps. Definitely a worthwhile initiative. Whilst I was monitoring the Twitter stream during the mHealth Summit, I noticed that a software developer at the event, Harold Smith, had shared a blog post with his findings that some apps that had passed Happtique's certification process had security issues. Yes, shocking news, but even more shocking is how many people in this industry don't seem to care. Kudos to Happtique, though: they reacted swiftly by suspending their certification program.
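To give a flavour of what "security issues" can mean here, this is a deliberately crude sketch (in Python) of one check an independent reviewer might automate: flagging app endpoints that send data over plain HTTP instead of HTTPS. The endpoints are invented, and real reviews go much deeper (certificate validation, credential storage, and so on).

import re

# Hypothetical endpoint list pulled from an app's configuration
app_endpoints = [
    "https://api.example-health.com/v1/login",
    "http://sync.example-health.com/v1/upload_vitals",  # sent in the clear!
]

insecure = [url for url in app_endpoints if re.match(r"^http://", url)]
for url in insecure:
    print("WARNING: health data sent unencrypted:", url)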

Here in the UK, the NHS have set up a health apps library. Their review process is listed too. Their website says, "All apps submitted to the Health Apps Library are checked to make sure that they are relevant to people living in England; comply with data protection laws and comply with trusted sources of information, such as NHS Choices". I've got no reason to doubt the security of the apps on the NHS library, but I'm curious - what if someone independent like Harold Smith took a look at these apps? What would his findings be? 

2014 & beyond 

In an ideal world, none of us as end users would have to worry about the security & privacy of our personal health data. We all want improved health, and improved healthcare, and we are told that mobile technology, sensors & big data could make the world a much better place. As a Digital Health Futurist, I truly want to believe that. 

However, the road ahead is potentially very dangerous, largely because the froth and hype in Digital Health are overshadowing the need for an open and candid discussion in society about the risks and benefits of going down this road. Companies such as GE, Intel & Cisco are pumping billions into the Internet of Things. This week the AllSeen Alliance was announced: an effort to create standards that allow different devices to connect to each other. Again, exciting stuff, right?

Imagine your smart toilet connected to your smart fridge connected to your smartphone: personalised meal suggestions on your phone, based on a clinical analysis of your urine combined with what food remains in your fridge. More data about our health, more data about us, transmitted between devices and apps over wifi. Hmmm, how many of us have stopped to reflect on what safeguards are needed to prevent our bodies from becoming the target of hackers?
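One basic safeguard is never letting health readings travel between devices in the clear. Here is a minimal sketch (in Python) using the third-party cryptography package; the device name and payload fields are invented for illustration, and a real product would provision the key once during device pairing, not generate it per run.

import json
from cryptography.fernet import Fernet

# In practice the shared key would live in secure hardware on both
# devices after pairing; generating it here keeps the sketch runnable.
shared_key = Fernet.generate_key()
channel = Fernet(shared_key)

reading = {"device": "smart-toilet-01", "metric": "urine_glucose_mmol_l",
           "value": 4.2, "ts": "2013-12-12T08:30:00Z"}

# Fernet provides authenticated encryption: a tampered or forged
# message fails to decrypt rather than being silently accepted.
token = channel.encrypt(json.dumps(reading).encode("utf-8"))

# ...token travels over wifi to the fridge or phone...

received = json.loads(channel.decrypt(token).decode("utf-8"))
print(received["metric"], received["value"])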

In principle, I'm not against any company or government collecting more data about us and our health. If collecting more data can help us develop a cure for diseases such as Cancer or Dementia, that would be an amazing achievement for science. 

However, I do want all of us, wherever we live on this planet, to be able to make INFORMED choices about how we share our health data, and who we share it with. Who will drive conversations that lead to a society where we can make informed choices about our health data? How do we get informed consent to participate in data sharing initiatives from those members of society who are vulnerable, such as children or older people with Dementia? Is that even ethical? 

One piece of good news this week: the Data & Society Research Institute, a new non-profit organisation based in New York City, will launch in 2014. It will be dedicated to addressing the social, technical, ethical, legal, and policy issues that are emerging because of data-centric technological development.

[Image: a bill of rights]

Data about us may be the key to improving the health of 7 billion people, but that can only happen if our rights are protected at all times. The issues are common to all personal data, not just health data. Perhaps the way forward is the creation of an international bill of digital rights?

 

[Disclosure: I have no commercial ties with any of the companies mentioned above]


Who Owns Your Health Data?

"Personal Data will be the new 'oil' - a valuable resource for the 21st century. It will emerge as a new asset class touching all aspects of society”. That's taken from the introduction of a report from the World Economic Forum published in January 2011. It's a fascinating read,  especially when they put forward the vision of a personal data ecosystem where individuals can have greater control over their personal data, digital identity and online privacy, and they will be better compensated for providing others with access to their personal data.

Sounds great, right? Sadly, it doesn't look like we are on the path to that vision.

For this vision to manifest itself, healthcare companies must buy into it, which means that they have to evolve their current business practices and models. The same is true for governments around the world. Given the recent revelations from Edward Snowden, making this vision a reality seems unlikely.

Does anyone believe we should own our health data?

Due to my background, I think a lot about our health data and the steps that we can take as citizens to help in the creation of this vision. I even gave a TEDx talk with my own ideas.

Though some leaders in the industry, such as Walter de Brouwer, are bravely stepping forward to advocate that patients should own their own health data, it's not the norm. Business models for free health apps are based on users giving permission for those apps to collect, transmit, share and sell their personal data.

What are the current risks?

The current estimate is that there are 40,000 health apps in the marketplace. In addition, a recent study by the Privacy Rights Clearinghouse found that 72% of the health apps it assessed presented a medium to high risk of personal privacy violation. Of the free apps reviewed, only 43% provided a link to a website privacy policy.

When was the last time you read the terms and conditions, end user licence agreement or privacy policy BEFORE you agreed to download a health app? Take Fitbit's privacy policy as an example: would you really read all of it?

Now, you may think that your health data alone is not that valuable, and you may well be right. However, if 100,000 people are using a health app, and a corporation accessing that data has heart rate, activity levels, sleep levels etc on all 100,000 people, then that 'cohort' of data becomes considerably more valuable. Whether it's scientists in a pharmaceutical company looking to understand people's health or a fitness company looking to understand which consumers to target for their next fitness product, getting access to this type of data unlocks new value for these organisations. That's not necessarily a bad thing, because we all want society to make progress in improving our health.

Unfortunately, I don't believe that consumers are currently able to make an informed choice. Unless you read through every line of every policy, it's not easy to find answers to these 3 questions (a rough sketch of what machine-readable answers could look like follows the questions):

Who owns your data?

Who has access to your data?

Who profits from your data?
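As an illustration of what a better answer could look like, here is a hypothetical machine-readable "short notice" (in Python) that would let a consumer answer those three questions before installing an app. The field names and example values are invented; they do not represent any actual standard or any real app's policy.

from dataclasses import dataclass, field

@dataclass
class ShortNotice:
    app_name: str
    data_owner: str                                    # who owns your data?
    shared_with: list = field(default_factory=list)    # who has access?
    monetised_by: list = field(default_factory=list)   # who profits?

    def summary(self) -> str:
        return (f"{self.app_name}: owned by {self.data_owner}; "
                f"accessible to {', '.join(self.shared_with) or 'no one else'}; "
                f"monetised by {', '.join(self.monetised_by) or 'no one'}")

notice = ShortNotice(
    app_name="ExampleFitApp",
    data_owner="the vendor",
    shared_with=["analytics provider", "advertising network"],
    monetised_by=["the vendor", "a data broker"],
)
print(notice.summary())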

Surely someone must be doing something to help answer these questions?

The US government has recently published new proposals that lay out a "voluntary" Code of Conduct for mobile application short notices. Whilst it's a modest step forward, it's not enough. With almost 20 years of working with other people's personal data, I knew I had to do something.

As luck would have it, I was introduced to one of the leading experts in the security and privacy of health data, Dr Tyrone Grandison, based in the USA. We identified the need for a simple way for consumers to understand what they are agreeing to BEFORE they download a health app.

Dr Grandison and I are working on a new service, launching this summer, called 'Who Owns Your Health Data?'. We hope the service will allow each of you to make an informed choice when it comes to health apps.

We are open to collaborating with others who share the same goal. Feel free to email us at info@woyhd.org