Honesty is the best medicine

In this post, I want to talk about lies. It's ironic that I'm writing this on the day of the US midterm elections, when the truth remains in short supply. Many in the UK feel they were lied to by politicians over the Brexit referendum. Apparently, politicians face a choice: lie or lose. Deception, deceit, lying, however you want to describe it, it's part of what makes us human. I reckon we've all told a lie at some point, even if only a 'white lie' to avoid hurting someone's feelings. Some of us are better than others at spotting when people aren't telling the truth. Some of us prefer to build a culture of trust. But what if we had a new superpower: a future where machines tell us, in real time, who is lying?

What compelled me to write this post was a news article about a new EU trial of virtual border agents powered by Artificial Intelligence (AI), which aims to "ramp up security using an automated border-control system that will put travellers to the test using lie-detecting avatars." I was fascinated to read statements about the new system such as "IBORDERCTRL's system will collect data that will move beyond biometrics and on to biomarkers of deceit." Apparently, the system can analyse micro-expressions on your face and feed that information into a risk score, which is then used to determine what happens next. For now, it's not aimed at replacing human border agents, but simply at helping to pre-screen travellers. It sounds sensible, right, if machines can help keep borders secure? However, the system's accuracy rate isn't great, and some are labelling this type of system as pseudoscience that will lead to unfair outcomes. It's essential we all pay attention to these developments and subject them to close scrutiny.
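To make the idea of a 'risk score' concrete, here is a minimal sketch of how a screening system might fold several signals into a single score that routes a traveller. The signal names, weights and threshold are all my assumptions for illustration; iBorderCtrl's actual scoring method is not public.

```python
# A minimal sketch of how a screening system might fold multiple signals
# into a single risk score. All signal names, weights and the threshold
# here are hypothetical -- iBorderCtrl's actual scoring is not public.

SIGNAL_WEIGHTS = {
    "micro_expression_anomaly": 0.5,  # avatar interview analysis
    "document_mismatch": 0.3,         # biometric / passport checks
    "travel_history_flag": 0.2,       # prior data held on the traveller
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine per-signal scores (each 0.0-1.0) into a weighted total."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

def next_step(signals: dict[str, float], threshold: float = 0.6) -> str:
    """Route the traveller based on the aggregate score."""
    return "refer to human agent" if risk_score(signals) >= threshold else "proceed"

print(next_step({"micro_expression_anomaly": 0.9, "document_mismatch": 0.6}))
# -> refer to human agent
```

Note how much hinges on the weights and the threshold: nudge either and the same traveller gets a different outcome, which is exactly why the accuracy concerns above matter.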

What if machines could one day automatically detect whether someone speaking in court is lying? Researchers are working towards that. Check out the project called DARE: Deception Analysis and Reasoning Engine, where the abstract of their paper opens with "We present a system for covert automated deception detection in real-life courtroom trial videos." As algorithms get more advanced, the ability to detect lies could go beyond analysing videos of us speaking; it could even spot when our written statements are false. In Spain, police are rolling out a new tool called VeriPol which claims to be able to spot false robbery claims, i.e. where someone has submitted a report to the police claiming they have been robbed, the tool can find patterns that indicate the report is fraudulent. Apparently, the tool has a success rate of over 80%. I came across a British startup, Human, that states on their website, "We use machine learning to better understand human's feelings, emotions, characteristics and personality, with minimum human bias", and honesty is included in the list of characteristics their algorithm examines. It does seem like we are heading for a world where it will be more difficult to lie.
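For a sense of how a tool like VeriPol might work under the hood, here is a toy text-classification sketch using scikit-learn: train a classifier on past reports labelled genuine or false, then score new ones. The features, model choice and training examples are purely illustrative; VeriPol's actual model and training data are not public.

```python
# A toy sketch of the general approach behind a tool like VeriPol:
# train a text classifier on past police reports labelled genuine/false,
# then score new reports. Features, model and data here are illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: report text plus a label.
reports = [
    "Two men approached me from behind and took my phone and wallet",
    "I was robbed at some point, I don't remember where, my phone is gone",
    # ... a real system would train on thousands of labelled reports
]
labels = [0, 1]  # 0 = genuine, 1 = flagged as potentially false

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

new_report = "My brand new phone was stolen but I didn't see anyone"
print(model.predict_proba([new_report])[0][1])  # probability the report is false
```

Even a sketch like this makes the limitation obvious: the model only learns whatever patterns distinguish the labelled examples it was trained on, which is where the 80% figure, and the 20% it gets wrong, come from.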

What about healthcare? Could AI help spot when people are lying? How useful would it be to know if your patient (or your doctor) is not telling you the truth? In a 2014 survey in the USA on patient deception, 50% of respondents said they withhold information from their doctor during a visit, lying most frequently about drug, alcohol and tobacco use. Zocdoc's 2015 survey found that 25% of patients lie to their doctor. There was an interesting report about why some patients don't adhere to their doctor's advice: financial strain, and the reluctance of some low-income patients to discuss their situation with their doctor. The reasons why a patient might be lying are not black and white. How does an algorithm take that into account? As for doctors not telling patients the truth, is there ever a role for benevolent deception? Can a lie ever be considered therapeutic? From what I've read, lying appears to be a path some have to take when caring for those living with dementia, to protect the patient.


Imagine you have a video call with your doctor, and on the other side the doctor has access to an AI system analysing your face and voice in real time, determining not just whether you're lying, but your emotional state too. That's what is set to happen in Dubai with the rollout of a new app. How does that make you feel, either as a doctor or as a patient? If the AI thinks the patient is lying about their alcohol intake, would that determination be recorded in the patient's medical record? What if the AI is wrong? Given the accuracy of these AI lie detectors is far from perfect, there are serious implications if they become part of the system. How might that work during an actual visit to the doctor's office? In some countries, will we see CCTV in the doctor's office with AI systems analysing every moment of the encounter to figure out which answers were truthful? What comes next? Smart glasses that a patient can wear when visiting the doctor, telling the patient how likely it is that the doctor is lying to them about their treatment options? Which institutions will turn to this new technology because it feels easier (and cheaper) than fostering a culture of trust, mutual respect and integrity?

What if we don't want to tell the truth, but the machines around us that are tracking everything reveal the truth for us? I share the satirical video below of Amazon Alexa fitted to a car; do watch it. Whilst it might be funny, there are potential challenges ahead for our human rights and civil liberties in this new era. Is AI-powered lie detection the path towards a society with enough transparency and integrity, or are we heading down a dangerous path by trusting the machines? Is honesty really the best medicine?

[Disclosure: I have no commercial ties with any of the organisations mentioned in this post]


Engaging patients & the public is harder than you think

Back in 2014, Google acquired a British artificial intelligence startup in London called DeepMind. It was Google's biggest EU purchase at that time, estimated to be in the region of 400 million pounds (approx $650 million). DeepMind's aim from the beginning was to develop ways in which computers could think like humans.

Earlier this year, DeepMind launched DeepMind Health, with a focus on healthcare. It appears that the initial focus is to build apps that can help doctors identify patients at risk of complications. It's not clear yet how they plan to use AI in the context of healthcare applications. However, a few months after launching this new division, they did start some work with Moorfields Eye Hospital in London to apply machine learning to 1 million eye scans to better predict eye disease.

There are many concerns, which get heightened when articles are published such as "Why Google DeepMind wants your medical records?" Many of us don't trust corporations with our medical records, whether it's Google or anyone else.

So I popped along to DeepMind Health's first ever patient & public engagement event, held at Google's UK headquarters in London last week. They also offered a livestream for those who could not attend.

What follows is a tweetstorm from me during the event, which nicely summarises my reaction. [Big thanks to Shirley Ayres for reminding me that most people are not on Twitter, and would benefit from being able to see the list of tweets from my tweetstorm.] Alas, due to issues with my website, the tweets are included as images rather than embedded tweets.

Finally, whilst not part of my tweetstorm, this one question captured what was going through everyone's minds.

Below is a 2.5 hour video which shows the entire event including the Q&A at the end. I'd be curious to hear your thoughts after watching the video. Are we engaging patients & the public in the right way? What could be done differently to increase engagement? Who needs to do more work in engaging patients & the public?

There are some really basic things that can be done, such as planning the event with consideration for the needs of those you are trying to engage, not just your own. This particular event was held 10am-12pm on a Tuesday morning, when many of the patients and members of the public you'd hope to reach are at work or otherwise unable to attend.

[Disclosure: I have no commercial ties with the individuals or organisations mentioned above]


Painting a false picture of ourselves

In the quest for improving our health, we're on the path to capturing more data about us: what we do and what happens to us. It's no longer sufficient to capture data about our health only when we visit the doctor. Sensors are popping up all over the place, even in pills that help others determine whether we are actually taking our medication. Today, the most prevalent sensors are the ones in those wristbands and smart watches that track how many steps we've taken and how much we've slept. We're likely to end up at some point in the future where many, if not all, of us will be monitored 24 hours a day. Recently, Target in the USA announced it will be offering a Fitbit activity tracker to each of its 335,000 employees.

There are already insurers in the US & UK offering rewards if you share data from your wearable, and the data proves you are being active enough. In Switzerland, a pilot project by health insurer CSS is monitoring how many steps customers walk every day, with one implication being that "people who refuse to be monitored will be subject to higher premiums." In that same article, Peter Ohnemus of Dacadoo believes, "Eventually we will be implanted with a nano-chip which will constantly monitor us and transmit the data to a control centre."

Well, if pills with ingestible sensors are already here, then Ohnemus's vision may not be that far-fetched. En route to the nano-chip, I note that Samsung's new SleepSense device, which sits under your mattress to track and analyse the quality of your sleep, offers a feature where a daily report about your sleep can be emailed to family members. You might use it to track how your elderly parents/grandparents/children are sleeping. At the 5th EAI International Conference on Wireless Mobile Communication and Healthcare in London next month, there is a keynote titled, "The car as a location for medical diagnosis." There is so much data about us that could be captured and shared with interested parties; it's an exciting new era for many of us.

SleepSense was launched when I visited IFA earlier this month

Not everyone is excited though. It's truly fascinating to observe how people might respond to the introduction of these new sensors in our lives. We're going to see many developments in 'smart home' technologies, and maybe Apple's HomeKit will be the catalyst for people to make their homes as smart as possible. Given ageing populations, maybe older people, especially those living alone, are the perfect candidates for these sensors and devices. Whilst their children, doctors and insurers may find the ability to 'remotely monitor' behaviour quite reassuring, what if the older person doesn't like being monitored? What strategies might they employ to hack the system? The short film below, 'Uninvited Guests', shows an elderly man and his smart home, and where the friction might occur.

Then you have 'Unfit Bits', which pokes fun at the growing trend of linking data from your activity tracker with your insurance. "At Unfit Bits, we are investigating DIY fitness spoofing techniques to allow you to create walking datasets without actually having to share your personal data. These techniques help produce personal data to qualify you for insurance rewards even if you can't afford a high exercise lifestyle." Check out their video.
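As a thought experiment in the spirit of Unfit Bits, here is a small sketch of how trivially a plausible-looking step dataset could be fabricated without taking a single step. Every number here is invented, and this is purely illustrative, not a how-to.

```python
# A sketch of the kind of "fitness spoofing" Unfit Bits is poking fun at:
# generating a plausible-looking daily step series without taking a step.
# All parameters are invented; purely illustrative.

import random
from datetime import date, timedelta

def fake_step_history(days: int, target_mean: int = 11000) -> list[tuple[date, int]]:
    """Generate a synthetic step series with human-looking day-to-day variation."""
    history = []
    for i in range(days):
        day = date.today() - timedelta(days=days - i)
        # Lazier weekends plus random noise make the series look less robotic.
        weekend_dip = 0.8 if day.weekday() >= 5 else 1.0
        steps = int(random.gauss(target_mean * weekend_dip, 2500))
        history.append((day, max(steps, 0)))
    return history

for day, steps in fake_step_history(7):
    print(day, steps)
```

The point of the sketch is how low the bar is: a dozen lines of code produce data that a naive rewards scheme would happily accept, which is exactly the trust problem raised below.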

These videos are food for thought. Our daily choices and behaviour are going to come under increased scrutiny, and just because it's technically possible, will it be socially desirable? Decisions are increasingly being made by algorithms, and algorithms need data. There is a call for healthcare to be more of a data-driven culture, but how will we know if the data coming from outside the doctor's office can be trusted? There is huge concern regarding the risks of health data being stolen, but little concern regarding how health data may be falsified.

In the case of employers tracking employees, "Instead of feeling like part of a team, surveilled workers may develop an us-versus-them mentality and look for opportunities to thwart the monitoring schemes of Big Boss", writes Lynn Parramore in her post examining the dystopia of workplace surveillance. As these new 'monitoring' technologies and associated services emerge and grow, will we also observe the emergence of technologies that allow us to paint a false picture of ourselves?

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]


Data or it didn't happen

Today, there is incredible excitement, enthusiasm and euphoria about technology trends such as Wearables, Big Data and the Internet of Things. Listening to some speakers at conferences, it often sounds like the convergence of these technologies promises to solve every problem that humanity faces. Seemingly, all we need to do is let these new ideas, products and services emerge into society, and it will be happily ever after, just like those fairy tales we read to our children. Except life isn't a fairy tale, nor is it always fair and equal. In this post, I examine how these technologies are increasingly of interest to employers and insurers when it comes to determining risk, and how this may impact our future.

Let's take the job interview. There may be some tests the candidate undertakes, but a large part of the interview is the human interaction, and what the interviewer(s) and interviewee think of each other. Someone may perform well during the interview, but turn out to underperform in the actual job. Naturally, that's a risk every employer wishes to minimise. What if you could minimise risk with wearables during the recruitment process? That's the message of a recent post on a UK recruitment website: "Recruiters can provide candidates with wearable devices and undertake mock interviews or competency tests. The data from the device can then be analysed to reveal how the candidate copes under pressure." I imagine there would be legal issues if an employer terminated the recruitment process simply on the basis of data collected from a wearable device, but it may augment the existing testing that takes place. Imagine the job is a management role requiring frequent resolution of conflicts, and your verbal answers convince the interviewer you'd cope with that level of stress. What if the biometric data captured from the wearable sensor during your interview showed that you wouldn't? We might immediately think of this as intrusive and discriminatory, but would this insight actually be a good thing for both parties? I expect all of us at some point have worked alongside colleagues who couldn't handle pressure, and whose reactions caused significant disruption in the workplace. Could this use of data from wearables and other sensors lead to healthier and happier workplaces?

Could those recruiting for a job start even earlier? What if the job involved a large amount of walking, and there was a way to get access to the last 6 months of activity data from the activity tracker you've been wearing on your wrist every day? Is sharing your health & fitness data with your potential employer the way that some candidates will get an edge over other candidates that haven't collected that data? That assumes that you have a choice in whether you share or don't share, but what if every job application required that data by default? How would that make you feel? 

What if it's your first job, and your employer wants access to data about your performance during your many years of education? Education technology used at school, which aims to help students, may collect data that could tag you for life as someone who gives up easily when faced with difficult tasks. The world isn't as equal as we'd like it to be, and left unchecked, these new technologies may worsen inequalities, as Cathy O'Neil highlights in a thought-provoking post on student privacy: "The belief that data can solve problems that are our deepest problems, like inequality and access, is wrong. Whose kids have been exposed by their data is absolutely a question of class."

There is increasing interest in developing wearables and other devices for babies, tracking aspects of a baby's health, mainly to provide additional reassurance to the parents. In theory, maybe it's a brilliant idea, with no apparent downsides? Laura June doesn't think so. She states, "The merger of the Internet of Things with baby gear — or the Internet of Babies — is not a positive development." Her argument against putting sensors into baby gear is that it would increase anxiety levels in parents, not reduce them. I'm already thinking about the data gathered from the moment the baby is born. Who would own and control it? The baby, the baby's parents, the government, or the corporation that made the software & hardware used to collect the data? Furthermore, what if the data from the baby could impact not just access to health insurance, but the pricing of the premium paid by the parents to cover the baby in their policy? Do you decide not to buy these devices to monitor the health of your newborn baby in case one day that data might be used against your child when they are grown up?

When we take out health and life insurance, we fill in a bunch of forms, supply the information needed for the insurer to determine risk, and a premium is then calculated. Rick Huckstep points out, "The insurer is not able to reassess the changing risk profile over the term of the policy." So, you might be active, healthy and fit when you take out the policy, but what if your behaviour, and with it your risk profile, changes during the term? This is the opportunity some are seeing for insurers: use data from wearables to track how your risk profile changes over the term of the policy. Instead of a static premium set at the outset, we get a world of dynamic, personalised premiums. Huckstep also writes, "Where premiums will adjust over the term of the policy to reflect a policyholder's efforts to reduce the risk of ill-health or a chronic illness on an on-going basis. To do that requires a seismic shift in the approach to underwriting risk and represents one of the biggest areas for disruption in the insurance industry."
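Here is a minimal sketch of what such a dynamic premium might look like in code. The base premium, thresholds and loadings are made-up assumptions for illustration, not any insurer's actual formula.

```python
# A minimal sketch of the dynamic pricing Huckstep describes: instead of a
# fixed premium set at the outset, the premium is recalculated each month
# from wearable-derived signals. Every number here is a made-up assumption.

BASE_MONTHLY_PREMIUM = 50.00

def monthly_premium(avg_daily_steps: float, avg_sleep_hours: float) -> float:
    """Scale the base premium up or down from simple activity signals."""
    multiplier = 1.0
    multiplier -= 0.10 if avg_daily_steps >= 10000 else 0.0  # activity discount
    multiplier += 0.15 if avg_sleep_hours < 6.0 else 0.0     # poor-sleep loading
    return round(BASE_MONTHLY_PREMIUM * multiplier, 2)

print(monthly_premium(avg_daily_steps=11200, avg_sleep_hours=7.1))  # 45.0
print(monthly_premium(avg_daily_steps=4300, avg_sleep_hours=5.2))   # 57.5
```

Even this toy version shows the shift: the premium is no longer a one-off underwriting decision but a monthly function of how you lived, which is precisely what makes the later questions about stress and caring responsibilities so uncomfortable.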

Already today, you can link your phone or wearable to Vitality UK health insurance and accumulate points based upon your activity (e.g. 10 points if you walk 12,500+ steps in a day). Get enough points and you can exchange them for rewards such as a cinema ticket. A similar scheme has also launched in the USA with John Hancock for life insurance.
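A sketch of the points logic such a scheme describes might look like this. The 10 points for 12,500+ steps comes from the scheme as described above; the lower tiers are my assumptions for illustration.

```python
# A sketch of activity-points logic like the Vitality scheme described above.
# The 12,500-step / 10-point rule is from the post; lower tiers are assumed.

def daily_points(steps: int) -> int:
    if steps >= 12500:
        return 10  # threshold cited in the post
    if steps >= 10000:
        return 8   # assumed lower tier
    if steps >= 7000:
        return 5   # assumed lower tier
    return 0

week = [13200, 9100, 12700, 4500, 11000, 14800, 8000]
print(sum(daily_points(s) for s in week))  # weekly total towards rewards
```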

Is Huckstep the only one thinking about a radically different future? Not at all. Neil Sprackling, Managing Director of Swiss Re (a reinsurer), has said, "This has the potential to be a mini revolution when it comes to the way we underwrite for life insurance risk." In fact, his colleague, Oliver Werneyer, has an even bolder vision in a post entitled "No wearable device = no life insurance," in which he argues that in 5 to 10 years' time, you might not be able to buy life insurance if you don't have a wearable device collecting data about you and your behaviour. Direct Line, a UK insurer, believes that technology is going to transform insurance. Their Group Marketing Director, Mark Evans, has recently talked about technology allowing them to understand a customer's "inherent risk." Could we be penalised for deviating from our normal healthy lifestyle because of life's unexpected demands? In this new world, if you were under chronic stress because you suddenly had to take time off work to look after a grandparent who was really sick, would less sleep and less exercise result in a higher premium next month on your health insurance? I'm not sure how these new business models would work in practice.

When it comes to risk being calculated more accurately from this stream of data from your wearables, surely it's a win-win for everyone involved? The insurers can calculate risk more accurately, and you can benefit from a lower premium if you take steps to lower your risk. Then there are opportunities for entrepreneurs to create the software & hardware that serve these capabilities. Would the traditional financial capitals such as London and New York be the centre of these innovations?

One of the big challenges to overcome, above and beyond established data privacy concerns, is data accuracy. In my opinion, the consumer devices that measure your sleep & steps are not yet accurate and reliable enough to be used as a basis for determining your risk, and your insurance premium. Sensor technology will evolve, so maybe one day there will be 'insurance grade' wearables that your insurer will be able to offer you: certified to be accurate, reliable and secure enough to be linked to your insurance policy. In this potential future, another issue is whether people will choose not to take out insurance because they don't want to wear a wearable, or simply don't like the idea of their behaviour being tracked 24/7. Does that create a whole new class of uninsured people in society? Or would there be so much of a backlash from consumers (or even policy makers) against insurers accessing this 24/7 stream of data about your health that this new business model never becomes a reality? If it did become a reality, would consumers switch to those insurers that could handle the data from their wearables?

Interestingly, who would be an insurer of the future? Will it be the incumbents, or will it be hardware startups that build insurance businesses around connected devices? That's the plan of Beam Technologies, who developed a connected toothbrush (yes, it connects via Bluetooth with your smartphone, and the app collects data about your brushing habits). Their dental insurance plan is rolling out in the USA shortly. Beam are considering adding incentives, such as rewards for brushing twice a day. Another experiment is Nest partnering with American Family Insurance. They supply you with a 'smart' smoke detector for your home, which "shares data about whether the smoke detectors are on, working and if the home's Wi-Fi is on." In exchange, you get a 5% discount off your home insurance.

Switching back to work, employers are increasingly interested in the data from employees' wearables. Why? Again, it's about a more accurate risk profile when it comes to the health & safety of employees. Take the tragic Germanwings crash this year, where it emerged the pilot deliberately crashed the plane, killing all 150 people on board. At a recent event in Australia, it was suggested this accident might have been avoided if the airline had been able to monitor stress in the pilot using data from a wearable device.

What other accidents in the workplace might be avoided if employers could monitor the health, fitness & wellbeing of employees 24 hours a day? In the future, would a hospital send a surgeon home because the data from the surgeon's wearable showed they had not slept enough in the last 5 days? What about bus, taxi or truck drivers being monitored remotely for drowsiness using wearables? Those are some of the use cases that Fujitsu are exploring in their research in Japan. Conversely, what if you had been put forward for promotion to a management role, and a year's worth of data from your wearable worn during work showed your employer that you got severely stressed in meetings where you had to manage conflict? Would your employer be justified in not promoting you, citing data that suggested promoting you would increase your risk of a heart attack? Bosses may be interested in accessing the data from your wearables just to verify what you are telling them. Some employees phone in pretending to be sick, to get an extra day off. In the future, that may not be possible if your boss can check the data from your wearable to see whether you really are stuck in bed at home. If you can't trust your employees to tell the truth, do you just modify the corporate wellness scheme with mandatory monitoring using wearable technology?
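The surgeon scenario above amounts to a simple rule over recent sleep data. Here is a sketch, assuming hypothetical thresholds of 6 hours average sleep over the last 5 days; no hospital or regulator has published such a rule.

```python
# A sketch of a rule-based fitness-for-duty check, as imagined above:
# flag a surgeon (or driver) whose wearable shows too little recent sleep.
# The 6-hour average and 5-day window are hypothetical thresholds.

def fit_for_duty(sleep_hours: list[float], min_avg: float = 6.0, window: int = 5) -> bool:
    """Return False if average sleep over the last `window` days is below `min_avg`."""
    recent = sleep_hours[-window:]
    return sum(recent) / len(recent) >= min_avg

last_week = [7.5, 5.0, 4.5, 6.0, 5.5, 4.0, 5.0]
print(fit_for_duty(last_week))  # False -> send the surgeon home?
```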

If it's possible for employers to understand the risk profile of each employee, would those under pressure to increase profits ever use the data from wearables to work out which employees are going to be 'expensive', and find a way to get them out of the company? That puts a whole new spin on 'People Analytics' and 'Optimising the workforce'. In a compelling post, Sarah O'Connor shares her experiment, in which she put on some wearables and shared the data with her boss. Asked how it felt, she said, "It felt very weird, and actually, I really didn't like the feeling at all. It just felt as if my job was suddenly leaking into every area of my life. Like on the Thursday night, a good friend and colleague had a 30th birthday party, and I went along. And it got to sort of 1 o'clock, and I realized I was panicking about my sleep monitor and what it was going to look like the next day." We already complain about checking work emails at home, and the boundaries between work and home blurring. Do you really want to be thinking about how skipping your regular Monday-night session at the gym would look to your boss? Then again, devices that betray us can sometimes be a good thing for society. Take the recent case of a woman in the USA who reported being sexually assaulted whilst asleep in her own home at night. The police used the data from the activity tracker on her wrist to show that at the time of the alleged attack, she was not asleep but awake and walking. On the other hand, those with malicious intent could hack into these devices and falsify the data to frame you for a crime you didn't commit.

If these trends continue to converge, I see enterprising criminals rubbing their hands with glee: a whole new economy dedicated to falsifying the stream of data from your wearable/IoT device to your school, doctor, insurer, employer, or whoever is making decisions based upon that stream. Imagine it's the year 2020, you are out partying every night, and you pay a hacker to make it appear that you slept 8 hours a night. So many organisations are blindly jumping into data-driven systems with the mindset of 'In data, we trust' that few bother to think hard enough about the harsh realities of real-world data. Another aspect is bias in the algorithms using this data about us. Hans de Zwart has written an illuminating post, "Demystifying the algorithm: Who designs our life?" De Zwart shows us the sheer amount of human effort that goes into designing Google Maps, and the routes it generates for us: "The incredible amount of human effort that has gone into Google Maps, every design decision, is completely mystified by a sleek and clean interface that we assume to be neutral. When these internet services don't deliver what we want from them, we usually blame ourselves or "the computer". Very rarely do we blame the people who made the software." With all these potential new algorithms classifying our risk profiles based upon data we generate 24/7, I wonder how much transparency, governance and accountability there will be?

There is much to think about and consider; one of the key points is the critical need for consumers to be rights-aware. An inspiring example of this is Nicole Wong, the former US Deputy CTO, who wrote a post explaining why she makes her kids read privacy policies. One sentence in particular stood out to me: "When I ask my kids about what data is collected and who can access it, I am asking them to think about what is valuable and what they are prepared to share or lose." Understanding the value exchange that takes place when you share your data with a provider is a critical step towards being able to make informed choices. That's assuming all of us have a choice in the sharing of our data. In the future, when we teach our children how to read and write English, should they be learning 'A' is for algorithm, rather than 'A' is for apple? I gave a talk in London recently on the future of wearables, and I included a slide on when wearables will take off (slide 21 below). I believe they will take off when we have to wear them, or when we can't access services without them. Surgeons and pilots are just two of the professions which may have to get used to being tracked 24/7.

Will the mantra of employers and insurers in the 21st century be, "Data or it didn't happen?"

If Big Data is set to become one of the greatest sources of power in the 21st century, that power needs a system of checks and balances. Just how much data are we prepared to give up in exchange for a job? Will insurance really be disrupted, or will data privacy regulations prevent that from happening? Do we really want sensors on us, in our cars, our homes & our workplaces, monitoring everything we do or don't do? Having data from cradle to grave on each of us is what medical researchers dream of, and may lead to giant leaps in medicine and global health. UNICEF's Wearables for Good challenge could solve everyday problems for those living in resource-poor environments. Now, just because we might have the technology to classify risk on a real-time basis, do we need to do that for everyone, all the time? Or should policy makers just ban this methodology before anyone can implement it? Is there a middle path? "Let's add in ethics to technology," argues Jennifer Barr, one of my friends who lives and works in Silicon Valley. Instead of just teaching our children to code, let's teach them how to code with ethics.

There are so many questions, and still too few places where we can debate these questions. That needs to change. I am speaking at two events in London this week where these questions are being debated, the Critical Wearables Research Lab and Camp Alphaville. I look forward to continuing the conversation with you in person if you're at either of these events. 

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]


The paradox of privacy

When you're driving the car, would you let an employee from a corporation sit in the passenger seat and record details on what route you're taking, which music you listen to and the text messages you send and receive? 

When you're sitting at home watching TV with your family, would you let an employee from a corporation sit on the sofa next to you and record details on what types of TV shows you watch? 

When you're in the gym working out, when you're going for your daily walk, would you let an employee from a corporation stand alongside you and record details on how long you walked, where you walked, and how your body responded to the physical activity? 

I suspect many of you would answer 'No' to all 3 questions. However, that's exactly the future being painted after the recent Google I/O event. Aimed at software developers, it revealed a glimpse of what Google has planned for the year ahead. New services such as Android Wear, Android Auto, Android TV and Google Fit promise to change our lives.

In this article titled "Google's master plan: Turn everything into data!", David Auerbach appreciates how more sensors in our homes, cities and on our bodies represent a hugely lucrative opportunity for a company like Google. "That information is also useful to companies that want to sell you things. And if Google stands between your data and the sellers and controls the pipe, then life is sweet for Google."

In a brilliant article about the I/O announcement of Google Fit, a new platform, Parmy Olson writes: "There's a major advertising opportunity for Google to aggregate our health data in a realm outside of traditional search". Now, during the event, Google did state that users would control what health and fitness data they share with the platform. Let's see whether corporate statements translate into actual terms & conditions in the years ahead.

Do we even realise how much personal data are stored on our phones?

Why are companies like Google so interested in the data from your body in between doctor visits? As I've stated before, our bodies generate data 24/7, yet it's only currently captured when we visit the doctor. So, the organisation that captures, stores & aggregates that data at a global level is likely to be very profitable, as well as wielding significant power in health & social care. 

Indeed, it could also prove transformative for those providing & delivering health & social care. In the utopian vision of health systems powered by data, this constant stream of data about our health might allow the system to predict when we're likely to have a heart attack, or suffer a fall.

Privacy and your baby

When people have a baby, some things change. It's human nature to want to protect and provide for our children when they are helpless and vulnerable. For example, someone may decide to upgrade to a safer car once they have a baby. We generally do everything we can to give our children the best possible start in life.

If you have a newborn baby, would you allow an employee from a corporation to enter your home, sit next to your baby, and record data on its sleeping patterns? In the emerging world of wearable technology, some parents are considering using products and services where their baby's data would be owned by someone else.

Sproutling is a smart baby monitor, shipping in March 2015 but taking pre-orders now. It attaches to your baby's ankle, measures heart rate and movement, and interprets mood. It promises to learn and predict your baby's sleep habits. You've got an activity and sleep tracker for yourself, why not one for your baby, right? According to their website today, 31% of their monitors have been sold. The privacy policy on their website is commendably short, but not explicit enough in my opinion. So I went on Twitter to quiz Sproutling about who exactly owns the data collected from the baby using the device. As you can see, they referred me back to their privacy policy, and didn't really answer my question.

The paradox

What's fascinating is how we say one thing and do another. A survey of 4,000 consumers in the UK, US and Australia found that 62% are worried about their personal data being used for marketing. Yet, 65% of respondents rarely or never read the privacy policy on a website before making a purchase. 

In a survey by Accenture Interactive, they found that 80% of people have privacy concerns about wearable Internet of Things connected technologies. Only 9% of those surveyed said they would share their data with brands for free. Yet that figure rose to 28% when respondents were offered a coupon or discount based upon their lifestyle.

Ideally, there would be a way in which we as consumers could own and control our personal data in the cloud and even profit from it. In fact, it already exists. The Respect Network promises just that, and was launched globally at the end of June 2014. From their website, "Respect Network enables secure, authentic and trusted relationships in the digital world". Surely, that's what we want in the 21st century? Or maybe not. I haven't met a single person who has heard of Respect Network since they launched. Not one person. What does that tell you about the world we live in?

Deep down, are we increasingly becoming apathetic about privacy? Is convenience a higher priority than knowing that our personal data are safe? Is being safe and secure in the digital world just a big hassle?

A survey of 15,000 consumers in 15 countries for the EMC Privacy Index found a number of behavioural paradoxes, one of which they termed "Take no action": "although privacy risks directly impact many consumers, most take virtually no action to protect their privacy – instead placing the onus on government and businesses". It reminds me of an interaction I had on Twitter recently with Dr Gulati, an investor in Digital Health.

What needs to change?

Our children are growing up in a world where their personal data are going to be increasingly useful (or harmful), depending upon the context. What are our children taught at school about their personal data rights? It has recently been suggested that schools in England should offer lessons about sex and relationships from age 7, as part of a "curriculum for life". Shouldn't a curriculum for life also include education about the intersection of your personal data and your privacy?

We are moving towards a more connected world, whether we like it or not. Personally, I'm not averse to corporations and governments collecting data about us and our behaviour, as long as we are able to make informed choices. I like how in this article about the Internet of Things and privacy, Marc Loewenthal writes "discussions about the data created are far more likely to focus on how to use the data rather than how to protect it". Loewenthal also goes on to mention how the traditional forms of delivering privacy guidelines to consumers aren't fit for purpose in an increasingly connected world, "They typically ignore the privacy notices or terms of use, and the mechanisms for delivering the notices are often awkward, inconvenient, and unclear".

When was the last time you read through (and fully understood) the terms and conditions and privacy policy of a health app or piece of wearable technology? There are so many more connected devices now, each with their own privacy policy and terms and conditions; not something I look forward to as a consumer. The existing approach isn't effective. We need to think differently about how we can truly enable people to make informed choices in the 21st century.

Now, what if each of us had our OWN terms and conditions and privacy policy, and we could then see whether a health app meets OUR criteria? We, as consumers, decide in advance what we want to share, with whom, and what we expect in return. How would that even work? Surely we'd need to cluster similar needs together, perhaps forming 5 standard privacy profiles? Imagine comparing three different health apps which do the same thing, and being able to see instantly that only one of them has a privacy profile that meets your needs. Or even, when browsing through the app store, choosing to only be shown those apps that match your privacy profile, as in the sketch below. That would definitely make it easier for each of us to make an informed choice.
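As a sketch of that idea: you declare a privacy profile once, and apps are filtered against it. The profile fields and app metadata below are invented for illustration; no such app-store mechanism exists today.

```python
# A sketch of the personal-privacy-profile idea described above: declare your
# own requirements once, then match apps against them. Profile fields and app
# metadata are invented -- no such app-store mechanism exists today.

MY_PRIVACY_PROFILE = {
    "sells_data_to_third_parties": False,  # I won't accept this
    "shares_with_advertisers": False,      # ...or this
    "data_deleted_on_request": True,       # I require this
}

def matches_my_profile(app_policy: dict[str, bool]) -> bool:
    """An app matches only if it satisfies every requirement in my profile."""
    return all(app_policy.get(k) == v for k, v in MY_PRIVACY_PROFILE.items())

apps = {
    "StepCounterPro": {"sells_data_to_third_parties": True,
                       "shares_with_advertisers": True,
                       "data_deleted_on_request": False},
    "QuietHealth":    {"sells_data_to_third_parties": False,
                       "shares_with_advertisers": False,
                       "data_deleted_on_request": True},
}
print([name for name, policy in apps.items() if matches_my_profile(policy)])
# -> ['QuietHealth']
```

The hard part, of course, isn't the matching logic; it's getting every app to declare its policy in a structured, truthful, comparable form in the first place.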

Things are changing: it was revealed last night that Apple have tightened privacy rules in their new operating system for people developing apps using their new HealthKit API. An article cites text pulled from the licence: developers may "not sell an end-user's health information collected through the HealthKit API to advertising platforms, data brokers or information resellers," and are barred from using gathered data "for any purpose other than providing health and/or fitness services."

Apps using the HealthKit API must also provide privacy policies.

This news is definitely a big step forward for anyone who cares about the privacy of their health data, although a guaranteed link to a privacy policy doesn't necessarily mean it will be easy for consumers to understand. I also wonder how companies that develop health apps using the HealthKit API will make money, given current business models are based around the collection and use of data.

Will the news from Apple make you more likely as a consumer to download a health app for your iPhone vs your Android device? Will it cause you to trust Apple more than Google or Samsung? Have Apple gone far enough with their recent announcement, or could they do more? Will Apple's stance lead to them becoming THE trusted hub for our health data, above and beyond the current healthcare system?

How can we as individuals do more to become aware of our rights? As well as the campaigns to teach people to learn how to code, should we have campaigns to teach people how to protect their privacy? When commentators write that privacy is dead, do you believe them?

We're heading towards a future where over the next decade it will become far easier to use sensors to monitor the state of our bodies. Would you prefer a future where my body=my data or my body=their data? The choice is yours.

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]
