Promise doesn't equal proof

I've just returned from California, where I attended three conferences: the Scripps Digital Medicine conference, Health 2.0 and the Body Computing Conference.

For this post, I'm going to focus on what I observed at these events regarding the quest for evidence in Digital Health. I'll be writing separate blog posts in the future relating to my overall experience at each of these events.

Starting with the first event, hosted by the Scripps Translational Science Institute: I was excited about it. The opening sentence in the brochure said, "A thoughtful exploration of the clinical evidence necessary to drive the widespread uptake of mobile health solutions will be the focus of the first Scripps Health Digital Medicine conference." When booking my place, three of the educational objectives of the event sounded tremendously useful to me as a participant:

  • "Assess the quality of clinical trials of mobile health in terms of providing the evidence necessary to support implementation"
  • "Discuss the implementation of mobile health technologies into clinical practice based on clinical trial evidence"
  • "Identify innovative trial methodologies for use in digital medicine"

Having attended, I don't really feel those three objectives were met. Whilst some of the sessions were very interesting and thought-provoking, it wasn't because the speakers were discussing evidence generation or clinical trials in this arena; often they were talking about the future of Digital Health and where we are heading. I walked away feeling confused and disappointed. Only on the second day, when Jeff Shuren, Director of the Center for Devices and Radiological Health at the FDA, spoke, did I see a session which specifically related to the objectives listed above.

So on to Health 2.0, where I was expecting validation and evidence to be discussed at two sessions: the first was "Validating Performance in Healthcare and Turning the Dial on Credibility" and the second was "Arc Fusion: Getting real about the convergence of health IT and biomedicine." In the first session, Vik Khanna from Quizzify made a number of good points.

I didn't manage to attend the second session, but the talk was captured on video, and can be found here. Having watched the 40-minute video, I found there wasn't much exploration of evidence or validation.

However, before either of those two sessions took place, it was useful to hear about validation at the session on "Health Data Exploration Project-Personal Data for the Public Good." It's good that they are pursuing this research, and I look forward to seeing what they discover.

I also noticed this tweet during Health 2.0, but I can't find a link on the web that shows what the American Medical Association is doing here.

At Body Computing, there was a panel discussion on "Building a virtual healthcare system" and I asked the panel whether we need some kind of new institute that can validate & certify these new digital interventions. Andy Thompson, the CEO of Proteus Digital Health, replied that we don't need new institutions, and that industry needs to collaborate with regulators to improve regulatory science, as the regulators can't do it alone. At some level, I think he has a good point, but later in this post, I'll explain why we might actually need a new institute.

I've tried a lot of wearable technology, especially smart watches, and there still isn't any real evidence showing that these are making an impact on our health. Whilst a watch that can remind you to walk more or work out at the gym in the best heart rate zone is of some use, many who work with patients every day are asking, "What's the medical benefit?" There is a huge unmet need out there for wearable technology developed with medical grade sensors that doctors and patients can trust and use with confidence.

At Body Computing, I witnessed the first public viewing of the AliveCor ECG for the Apple watch. You can see a demo by Dr Dave Albert, founder & CMO of AliveCor, in my video below.

I must mention that this new AliveCor product is a prototype and has not been FDA cleared yet. I personally expect it to be a roaring success when it is launched. I note that at the Scripps conference, when both patients and doctors were commenting on which Digital Health product had impacted their life, AliveCor was cited nearly every time. The fact that the AliveCor app on the watch records the patient's voice and links it to the location takes us a step forward on the path to a single patient view: the marriage of hard and soft data. We need more of this science-driven innovation in Digital Health, where gathering of evidence is not an afterthought, and where the product/service has a clearly defined medical benefit.
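To make the 'single patient view' idea concrete, here's a minimal sketch of what such a combined record might look like. This is purely illustrative: it is not AliveCor's actual data format, and every field name and value below is my own assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ECGEvent:
    """One 'single patient view' record: hard data (the ECG trace)
    married to soft data (a voice memo and the location it was taken).
    Hypothetical schema, not AliveCor's actual format."""
    recorded_at: datetime
    samples_mv: list       # raw ECG samples in millivolts (hard data)
    sample_rate_hz: int
    voice_memo_path: str   # patient's spoken note, e.g. "felt dizzy on the stairs"
    latitude: float        # where the recording was taken
    longitude: float
    tags: list = field(default_factory=list)

event = ECGEvent(
    recorded_at=datetime(2015, 10, 9, 14, 32),
    samples_mv=[0.12, 0.41, 1.20, 0.33],   # trivially short example trace
    sample_rate_hz=300,
    voice_memo_path="memo_20151009.m4a",
    latitude=34.02, longitude=-118.28,     # hypothetical coordinates
    tags=["palpitations"],
)
print(event.recorded_at, event.tags)
```

The value is in the join: a cardiologist reviewing the trace can hear what the patient was feeling and see where they were, all in one record.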

I am witnessing increasing use of algorithms in healthcare, especially since we're collecting more data than we ever have before. Algorithms are like the invisible hand that guides many of our decisions, and since they are programmed by humans, how do we know what bias is incorporated into them? The recent scandal involving Volkswagen's cars, where an algorithm was used to cheat emissions tests, makes me think about the need for greater transparency in healthcare.

I appreciate that in the modern era, algorithms are closely guarded secrets, much as Kentucky Fried Chicken guards its secret recipe. I'm not saying that private corporations should make their algorithms open source and lose their competitive advantage, but maybe we need an independent body that can monitor these algorithms in healthcare, not just once when the product is approved, but all year round, so that we can feel protected. I found a fascinating post by Jason Bloomberg, who, in response to the VW emissions scandal, asks if this is the death knell for the Internet of Things. Bloomberg cites 'calibration attacks' as the possible cause of the VW scandal, and goes on to highlight how this may impact healthcare too.

In my opinion, each of the three conferences I attended should have had a session where we could have a healthy debate about algorithms. I keep hearing about how artificial intelligence, big data and algorithms will lead to so many amazing things, but I never hear anyone talking about calibration attacks, and how to prevent them. Zara Rahman closes her wonderful post on understanding algorithms with, "Though we can't monitor the steps of the process that humans decide upon to create an algorithm, we can - or should be able to - monitor and have oversight on the data that is provided as input for those algorithms."
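What might that oversight of inputs look like in practice? Here's a minimal sketch of plausibility checks run on a wearable data feed before it reaches a decision-making algorithm. All the thresholds and names are my own illustrative assumptions, not any regulator's actual rules.

```python
from datetime import date

# Hypothetical plausibility rules for a daily step-count feed.
MAX_DAILY_STEPS = 50_000   # beyond this, flag for human review
MAX_GAP_DAYS = 3           # long silences may mean a disabled or spoofed device

def audit_step_feed(readings):
    """Flag readings that merit review before an algorithm consumes them.

    `readings` is a list of (date, steps) tuples sorted by date;
    returns a list of (date, reason) flags.
    """
    flags, previous_day = [], None
    for day, steps in readings:
        if steps < 0 or steps > MAX_DAILY_STEPS:
            flags.append((day, f"implausible count: {steps}"))
        if previous_day is not None and (day - previous_day).days > MAX_GAP_DAYS:
            flags.append((day, f"gap of {(day - previous_day).days} days"))
        previous_day = day
    return flags

feed = [(date(2015, 10, 1), 9_200),
        (date(2015, 10, 2), 120_000),   # sensor error, or tampering?
        (date(2015, 10, 9), 8_400)]     # a week of silence
print(audit_step_feed(feed))
```

Checks this simple won't catch a sophisticated calibration attack, but the point stands: oversight of an algorithm's inputs is something an independent body could run continuously, not just at approval time.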

I don't think it's alarmist to examine a range of different future scenarios and to consider updated regulatory frameworks to reflect threats that never existed before. It's wonderful to hear speakers at conferences show us how the future is going to be better due to technological advances, but we also need to hear about the side effects of those new technologies.

I recognise that not every digital intervention will need clinical trials and a whole body of evidence before it can be approved, accredited and adopted; take, for example, medication reminder apps that are simply a twist on the standard reminder app. Or could it be argued that even these simple apps should be regulated too? What if the software developer makes a mistake in the code and, when a patient actually uses the app, their medication reminders are switched around, leading to patient harm? A recent article highlights research showing that most of the NHS-approved apps for depression are actually unproven. Another related post by Simon Leigh points out, "are apps forthcoming with the information they provide? It's easy enough to say this app beats depression, but do they offer any proof to turn this from what is essentially marketing into evidence of clinical effectiveness?"

Many people are so angry with the state of healthcare that they want this digital revolution to disrupt healthcare as quickly as possible. Asking for evidence and proof is often seen as slowing down this revolution, a sign of resistance to change. Just because something is digital doesn't mean we can trust it implicitly from the moment it's developed. Hype, hope and hubris will not be enough to deliver the sustainable change in healthcare that we all want to see. We are at a crossroads in Digital Health, and we have to be very careful going forwards that the recipients of these digital interventions aren't led to believe that promise equals proof.

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in this post]




Painting a false picture of ourselves

In the quest for improving our health, we're on the path to capturing more data about ourselves: what we do and what happens to us. It's no longer sufficient to capture data about our health only when we visit the doctor. Sensors are popping up all over the place, even in pills that help others determine whether we are actually taking our medication. Today, the most prevalent sensors are the ones in those wristbands and smart watches that track how many steps we've taken and how much we've slept. We're likely to end up at some point in the future where many, if not all of us, will be monitored 24 hours a day. Recently, Target in the USA announced it will be offering a Fitbit activity tracker to each of its 335,000 employees.

There are already insurers in the US & UK that offer rewards if you share data from your wearable and that data proves you are being active enough. In Switzerland, a pilot project by health insurer CSS is monitoring how many steps customers are walking every day, with one implication being that "people who refuse to be monitored will be subject to higher premiums." In that same article, Peter Ohnemus of Dacadoo believes, "Eventually we will be implanted with a nano-chip which will constantly monitor us and transmit the data to a control centre."

Well, if pills with ingestible sensors are already here, then the vision of Ohnemus may not be that far-fetched. En route to the nano-chip, I note that Samsung's new SLEEPsense device, which sits under your mattress, tracks your sleep (and analyses its quality), offers a feature where a report about your sleep can be emailed daily to family members. You might use it to track how your elderly parents/grandparents/children are sleeping. At the 5th EAI International Conference on Wireless Mobile Communication and Healthcare in London next month, there is a keynote titled, "The car as a location for medical diagnosis." There is so much data about us that could be captured and shared with interested parties; it's an exciting new era for many of us.

SLEEPsense was launched when I visited IFA earlier this month

Not everyone is excited though. It's truly fascinating to observe how people might respond to the introduction of these new sensors in our lives. We're going to see many developments in 'smart home' technologies, and maybe Apple's HomeKit will be the catalyst for people to make their homes as smart as possible. Given aging populations, maybe older people, especially those living alone, are the perfect candidates for these sensors and devices. Whilst their children, doctors and insurers may find the ability to 'remotely monitor' behaviour quite reassuring, what if the older person being monitored doesn't like it? What strategies might they employ to hack the system? The short film below, 'Uninvited Guests', shows an elderly man and his smart home, and where the friction might occur.

Then you have 'Unfit Bits', which pokes fun at the growing trend of linking data from your activity tracker with your insurance. "At Unfit Bits, we are investigating DIY fitness spoofing techniques to allow you to create walking datasets without actually having to share your personal data. These techniques help produce personal data to qualify you for insurance rewards even if you can't afford a high exercise lifestyle." Check out their video.
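Unfit Bits uses physical tricks (metronomes, drills, pendulums), but the underlying point can be made in a few lines of code: a stream of step counts carries no inherent proof of where it came from. A minimal, purely illustrative sketch:

```python
import random
from datetime import date, timedelta

def fabricate_step_history(days, target_mean=12_500, jitter=2_500):
    """Generate a plausible-looking daily step history.

    Nothing in the output distinguishes it from data recorded by a
    real device: a step count carries no proof of its origin.
    """
    start = date.today() - timedelta(days=days)
    return [(start + timedelta(days=i),
             max(0, int(random.gauss(target_mean, jitter))))
            for i in range(days)]

for day, steps in fabricate_step_history(7):
    print(day, steps)
```

As far as I'm aware, today's consumer trackers do little to prove the provenance of their data; until that changes, any reward scheme built on it is taking it largely on trust.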

These videos are food for thought. Our daily choices and behaviour are going to come under increased scrutiny, and just because it's technically possible, will it be socially desirable? Decisions are increasingly being made by algorithms, and algorithms need data. There is a call for healthcare to be more of a data driven culture, but how will we know if the data coming from outside the doctor's office can be trusted? There is huge concern regarding the risks of health data being stolen, but little concern regarding how health data may be falsified. 

In the case of employers tracking employees, "Instead of feeling like part of a team, surveilled workers may develop an us-versus-them mentality and look for opportunities to thwart the monitoring schemes of Big Boss", writes Lynn Parramore in her post examining the dystopia of workplace surveillance. As these new 'monitoring' technologies and associated services emerge and grow, will we also observe the emergence of technologies that allow us to paint a false picture of ourselves?

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]


Data or it didn't happen

Today, there is incredible excitement, enthusiasm and euphoria about technology trends such as Wearables, Big Data and the Internet of Things. Listening to some speakers at conferences, it often sounds like the convergence of these technologies promises to solve every problem that humanity faces. Seemingly, all we need to do is let these new ideas, products and services emerge into society, and it will be happily ever after, just like those fairy tales we read to our children. Except life isn't a fairy tale, nor is it always fair and equal. In this post, I examine how these technologies are increasingly of interest to employers and insurers when it comes to determining risk, and how this may impact our future.

Let's take the job interview. There may be some tests the candidate undertakes, but a large part of the interview is the human interaction, and what the interviewer(s) and interviewee think of each other. Someone may perform well during the interview, but turn out to underperform in the actual job. Naturally, that's a risk that every employer wishes to minimise. What if you could minimise risk with wearables during the recruitment process? That's the message of a recent post on a UK recruitment website: "Recruiters can provide candidates with wearable devices and undertake mock interviews or competency tests. The data from the device can then be analysed to reveal how the candidate copes under pressure." I imagine there would be legal issues if an employer terminated the recruitment process simply on the basis of data collected from a wearable device, but it may augment the existing testing that takes place.

Imagine the job is a management role requiring frequent resolution of conflicts, and your verbal answers convince the interviewer you'd cope with that level of stress. What if the biometric data captured from the wearable sensor during your interview showed that you wouldn't? We might immediately think of this as intrusive and discriminatory, but would this insight actually be a good thing for both parties? I expect all of us at one point have worked alongside colleagues who couldn't handle pressure, and whose reactions caused significant disruption in the workplace. Could this use of data from wearables and other sensors lead to healthier and happier workplaces?

Could those recruiting for a job start even earlier? What if the job involved a large amount of walking, and there was a way to get access to the last 6 months of activity data from the tracker you've been wearing on your wrist every day? Is sharing your health & fitness data with your potential employer the way that some candidates will get an edge over candidates who haven't collected that data? That assumes you have a choice in whether you share or don't share, but what if every job application required that data by default? How would that make you feel?

What if it's your first job in life, and your employer wants access to data about your performance during your many years of education? Education technology used at school which aims to help students may collect data that could tag you for life as giving up easily when faced with difficult tasks. The world isn't as equal as we'd like it to be, and left unchecked, these new technologies may worsen inequalities, as Cathy O’Neil highlights in a thought provoking post on student privacy, “The belief that data can solve problems that are our deepest problems, like inequality and access, is wrong. Whose kids have been exposed by their data is absolutely a question of class.”

There is increasing interest in developing wearables and other devices for babies, tracking aspects of a baby's life, mainly to provide additional reassurance to the parents. In theory, maybe it's a brilliant idea, with no apparent downsides? Laura June doesn't think so. She states, "The merger of the Internet of Things with baby gear — or the Internet of Babies — is not a positive development." Her argument against putting sensors into baby gear is that it would increase anxiety levels in parents, not reduce them. I'm already thinking about the data gathered from the moment the baby is born. Who would own and control it? The baby, the baby's parents, the government or the corporation that made the software & hardware used to collect the data? Furthermore, what if the data from the baby could impact not just access to health insurance, but the pricing of the premium paid by the parents to cover the baby in their policy? Do you decide you don't want to buy these devices to monitor the health of your newborn baby in case one day that data might be used against your child when they are grown up?

When we take out health and life insurance, we fill in a bunch of forms and supply the information the insurer needs to determine risk and calculate a premium. Rick Huckstep points out, "The insurer is not able to reassess the changing risk profile over the term of the policy." So, you might be active, healthy and fit when you take out the policy, but what if your behaviour and your risk profile change during the term of the policy? This is the opportunity some are seeing for insurers: use data from wearables to determine how your risk profile changes during the term of the policy. Instead of a static premium set at the outset, we have a world of dynamic and personalised premiums. Huckstep also writes, "Where premiums will adjust over the term of the policy to reflect a policyholder's efforts to reduce the risk of ill-health or a chronic illness on an on-going basis. To do that requires a seismic shift in the approach to underwriting risk and represents one of the biggest areas for disruption in the insurance industry."

Already today, you can link your phone or wearable to Vitality UK health insurance and accumulate points based upon your activity (e.g. 10 points if you walk 12,500+ steps in a day). Get enough points and you can exchange them for rewards such as a cinema ticket. A similar scheme has also launched in the USA with John Hancock for life insurance.
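To make the mechanics concrete, here's a minimal sketch of how an activity-linked scheme might turn step data into points and a dynamic premium. The 12,500-steps-for-10-points rule is the Vitality example above; the middle tier, the discount cap and the premium formula are my own hypothetical assumptions.

```python
def daily_points(steps):
    """Points per day. The top tier is Vitality's published example;
    the middle tier is hypothetical."""
    if steps >= 12_500:
        return 10
    if steps >= 10_000:
        return 5   # assumed middle tier
    return 0

def monthly_premium(base, daily_steps, max_discount=0.15):
    """Hypothetical dynamic premium: the discount scales with the
    share of available points actually earned that month."""
    earned = sum(daily_points(s) for s in daily_steps)
    available = 10 * len(daily_steps)
    return round(base * (1 - max_discount * earned / available), 2)

month = [13_000, 9_000, 12_600, 4_000] * 7 + [11_000, 12_800]  # 30 days
print(monthly_premium(100.0, month))   # ~92.25 on a 100.00 base premium
```

Even in this toy version, the problem is visible: a stressful month with little walking mechanically becomes a higher premium, unless the underwriting model is explicitly designed to account for context.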

Is Huckstep the only one thinking about a radically different future? Not at all. Neil Sprackling, Managing Director of Swiss Re (a reinsurer), has said, "This has the potential to be a mini revolution when it comes to the way we underwrite for life insurance risk." In fact, his colleague, Oliver Werneyer, has an even bolder vision in a post entitled "No wearable device = no life insurance," in which he suggests that in 5 to 10 years' time, you might not be able to buy life insurance if you don't have a wearable device collecting data about you and your behaviour. Direct Line, a UK insurer, believes that technology is going to transform insurance. Their Group Marketing Director, Mark Evans, has recently talked about technology allowing them to understand a customer's "inherent risk." Could we be penalised for deviating from our normal healthy lifestyle because of life's unexpected demands? In this new world, if you were under chronic stress because you suddenly had to take time off work to look after a grandparent who was really sick, would less sleep and less exercise result in a higher premium next month on your health insurance? I'm not sure how these new business models would work in practice.

When it comes to risk being calculated more accurately based upon this stream of data from your wearables, surely it's a win-win for everyone involved? The insurers can calculate risk more accurately, and you can benefit from a lower premium if you take steps to lower your risk. Then there are opportunities for entrepreneurs to create software & hardware that serve these capabilities. Would the traditional financial capitals such as London and New York be the centre of these innovations?

One of the big challenges to overcome, above and beyond established data privacy concerns, is data accuracy. In my opinion, the consumer devices that measure your sleep & steps are not yet accurate and reliable enough to be used as a basis for determining your risk and your insurance premium. Sensor technology will evolve, so maybe one day there will be 'insurance grade' wearables that your insurer will be able to offer you, certified to be accurate, reliable and secure enough to be linked to your insurance policy. In this potential future, another issue is whether people will choose not to take out insurance because they don't want to wear a wearable, or simply don't like the idea of their behaviour being tracked 24/7. Does that create a whole new class of uninsured people in society? Or would there be so much of a backlash from consumers (or even policy makers) against insurers accessing this 24/7 stream of data about your health that this new business model never becomes a reality? If it did become a reality, would consumers switch to those insurers that could handle the data from their wearables?

Interestingly, who would be an insurer of the future? Will it be the incumbents, or will it be hardware startups that build insurance businesses around connected devices? That's the plan of Beam Technologies, who developed a connected toothbrush (yes, it connects via Bluetooth with your smartphone, and the app collects data about your brushing habits). Their dental insurance plan is rolling out in the USA shortly, and Beam are considering adding incentives, such as rewards for brushing twice a day. Another experiment is Nest partnering with American Family Insurance: they supply you with a 'smart' smoke detector for your home, which "shares data about whether the smoke detectors are on, working and if the home's Wi-Fi is on." In exchange, you get a 5% discount off your home insurance.

Switching back to work, employers are increasingly interested in the data from employees' wearables. Why? Again, it's about a more accurate risk profile when it comes to the health & safety of employees. Take the tragic crash of the Germanwings flight this year, where it emerged that the pilot deliberately crashed the plane, killing all 150 people on board. At a recent event in Australia, it was suggested this accident might have been avoided if the airline had been able to monitor stress in the pilot using data from a wearable device.

What other accidents in the workplace might be avoided if employers could monitor the health, fitness & wellbeing of employees 24 hours a day? In the future, would a hospital send a surgeon home because the data from the surgeon's wearable showed they had not slept enough in the last 5 days? What about bus, taxi or truck drivers that could be monitored remotely for drowsiness by using wearables? Those are some of the use cases that Fujitsu are exploring in Japan with their research. Conversely, what if you had been put forward for promotion to a management role, and a year's worth of data from your wearable worn during work showed your employer that you got severely stressed in meetings where you had to manage conflict? Would your employer be justified in not promoting you, citing the data that suggested promoting you would increase your risk of a heart attack? Bosses may be interested in accessing the data from your wearables just to verify what you are telling them. Some employees phone in pretending to be sick to get an extra day off. In the future, that may not be possible if your boss can check the data from your wearable to verify that you haven't taken many steps because you're stuck in bed at home. If you can't trust your employees to tell the truth, do you just modify the corporate wellness scheme with mandatory monitoring using wearable technology?

If it's possible for employers to understand the risk profile of each employee, would those under pressure to increase profits ever use the data from wearables to work out which employees are going to be 'expensive', and find a way to get them out of the company? That puts a whole new spin on 'People Analytics' and 'Optimising the workforce'. In a compelling post, Sarah O'Connor shares her experiment in wearing wearables and sharing the data with her boss. Asked how that felt, she said, "It felt very weird, and actually, I really didn't like the feeling at all. It just felt as if my job was suddenly leaking into every area of my life. Like on the Thursday night, a good friend and colleague had a 30th birthday party, and I went along. And it got to sort of 1 o'clock, and I realized I was panicking about my sleep monitor and what it was going to look like the next day." We already complain about checking work emails at home, and the boundaries between work and home blurring. Do you really want to be thinking about how skipping your regular Monday night session at the gym would look to your boss?

Then again, the devices that betray us can sometimes be a good thing for society. Take the recent case of a woman in the USA who reported being sexually assaulted whilst she was asleep in her own home at night. The police used the data from the activity tracker she wore on her wrist to prove that at the time of the alleged attack, she was not asleep but awake and walking. On the other hand, one might also consider that those with malicious intent could hack into these devices and falsify the data to frame you for a crime you didn't commit.

If these trends continue to converge, I see enterprising criminals rubbing their hands with glee: a whole new economy dedicated to falsifying the stream of data from your wearable/IoT device to your school, doctor, insurer or employer, or whoever is going to be making decisions based upon that stream of data. Imagine it's the year 2020, you are out partying every night, and you pay a hacker to make it appear that you slept 8 hours a night. So many organisations are blindly jumping into data-driven systems with the mindset of 'In data, we trust' that few bother to think hard enough about the harsh realities of real world data.

Another aspect is bias in the algorithms using this data about us. Hans de Zwart has written an illuminating post, "Demystifying the algorithm: Who designs our life?" De Zwart shows us the sheer amount of human effort in designing Google Maps, and the routes it generates for us: "The incredible amount of human effort that has gone into Google Maps, every design decision, is completely mystified by a sleek and clean interface that we assume to be neutral. When these internet services don't deliver what we want from them, we usually blame ourselves or "the computer". Very rarely do we blame the people who made the software." With all these potential new algorithms classifying our risk profile based upon data we generate 24/7, I wonder how much transparency, governance and accountability there will be?

There is much to think about and consider; one of the key points is the critical need for consumers to be rights-aware. An inspiring example of this is Nicole Wong, the former US Deputy CTO, who wrote a post explaining why she makes her kids read privacy policies. One sentence in particular stood out to me: "When I ask my kids about what data is collected and who can access it, I am asking them to think about what is valuable and what they are prepared to share or lose." Understanding the value exchange that takes place when you share your data with a provider is a critical step towards being able to make informed choices. That's assuming all of us have a choice in the sharing of our data. In the future, when we teach our children how to read and write English, should they be learning 'A' is for algorithm, rather than 'A' is for apple? I gave a talk in London recently on the future of wearables, and I included a slide on when wearables will take off (slide 21 below). I believe they will take off when we have to wear them, or when we can't access services without them. Surgeons and pilots are just two of the professions which may have to get used to being tracked 24/7.

Will the mantra of employers and insurers in the 21st century be, "Data or it didn't happen?"

If Big Data is set to become one of the greatest sources of power in the 21st century, that power needs a system of checks and balances. Just how much data are we prepared to give up in exchange for a job? Will insurance really be disrupted, or will data privacy regulations prevent that from happening? Do we really want sensors on us, in our cars, our homes & our workplaces monitoring everything we do or don't do? Having data from cradle to grave on each of us is what medical researchers dream of, and may lead to giant leaps in medicine and global health. UNICEF's Wearables for Good challenge could solve everyday problems for those living in resource poor environments. Now, just because we might have the technology to classify risk on a real time basis, do we need to do that for everyone, all the time? Or should policy makers just ban this methodology before anyone can implement it? Is there a middle path? "Let's add in ethics to technology," argues Jennifer Barr, one of my friends who lives and works in Silicon Valley. Instead of just teaching our children to code, let's teach them how to code with ethics.

There are so many questions, and still too few places where we can debate these questions. That needs to change. I am speaking at two events in London this week where these questions are being debated, the Critical Wearables Research Lab and Camp Alphaville. I look forward to continuing the conversation with you in person if you're at either of these events. 

[Disclosure: I have no commercial ties to any of the individuals or organisations mentioned in the post]





Will the home of the future improve our health?

According to BK Yoon, that would be one of the benefits of his vision of the 'home of the future'. Who is BK Yoon? He's President and CEO of Samsung Electronics. Last Friday, I listened as he delivered the opening keynote of IFA 2014, the largest consumer electronics and home appliance show in Europe.

Whilst many talk of bringing healthcare out of the hospital/doctor's office into the home, Samsung, in theory, have the resources and vision to make this a reality at a global level. Samsung Group, of which Samsung Electronics is the biggest subsidiary, have just invested $2 billion in setting up a biopharmaceuticals unit. Christopher Hansung Ko, CEO of the Samsung Bioepis unit, said in an earlier interview, "We are a Samsung company. Our mandate is to become No. 1 in everything we enter into, so our long-term goal is to become a leading pharmaceutical company in the world."

Is South Korea innovative?

My views on Samsung (and South Korea overall) changed when I visited Seoul during my round-the-world trip in 2010. I had only scheduled a 3-day stopover, but ended up staying over three weeks. I was impressed by their ambitions, their attitude towards new technology and their commitment to education. South Korea's journey over the last half century is truly amazing: "Fifty years ago, the country was poorer than Bolivia and Mozambique; today, it is richer than New Zealand and Spain." Bloomberg released their Global Innovation Index at the start of this year. Guess which country was top of the list? Yup, South Korea. The UK was ranked a lowly 16th.

After my visit, I left South Korea with a different perspective, and I have been paying close attention to developments there ever since. I believe many people in the West underestimate the long term ambitions of a company like Samsung. Those wishing to understand the future would be wise to monitor not just what's happening in Silicon Valley, but also in South Korea.

Aging populations worry policy makers in many advanced economies. Interestingly, recent data shows that South Korea's population is aging the fastest out of all OECD countries. Maybe that's one of the drivers behind Samsung's strategy of developing technology that could one day help older people live independently. 

We've been hearing about smart and connected homes for many years, and one wonders how this technology would integrate with our lives. How easy would it be to set up and use? How reliable would it be? Would I have to figure out what to do with all these data streams from the different devices? I used to believe that Apple was unique in really understanding the needs and wants of the consumer, but it seems Samsung have been taking notes. They have conducted lifestyle research with 30,000 people around the world, and the results of that research were shared after the keynote. Whilst reading the reports, one feels like Samsung Electronics is now trying to position itself as a lifestyle company, not a technology company. As one of the opening statements puts it, "The home of the future is about more than technology and gadgets. It's about people." It's a vision of responsive homes that adapt to our needs, homes that protect us, homes that are empathetic. How much is all of this going to cost us, right? Is it only going to be for the rich? Is it only going to work with new homes, or can we retrofit the technology?

Data-driven homes - is that what we truly want?

Their research also says, "Technology will promote eating and living right. Devices around the home will inspire us to make decisions that are right for our bodies, taking an active role in helping us achieve better health by turning goals into habits." So in Samsung's vision, the fridge of the future will inform you that some of the food inside has expired and needs to be thrown away. Where do we draw the line? What if you come home from work hoping to grab a beer from the fridge, but the fridge door is locked, because the home of the future knows you've already consumed more than your daily allowance of calories?

Do we actually want to live in data-driven homes? Our homes are often an analogue refuge in an increasingly digital & connected world. We have the choice to disconnect, switch off our devices and just 'be'. What if part of our contract with our energy provider or home insurer is having smart home technology installed?

In a recent survey of Americans aged 18+ for Lowe's, 70% of smartphone/tablet owners want to control something inside their home from their bed via their mobile. Obesity is already a public health issue not just in the USA, but in many nations. Will being able to switch on the coffee pot, adjust the thermostat and switch on the lights by speaking into your smartphone whilst lying in bed lead to humans leading even more sedentary lives in the future? 

However, there is a flip side. Rather than dismissing this emerging technology as silly or as promoting inactivity, we should consider that these advancements may be of immense benefit to certain groups of people. For example, would the home of the future give someone who is blind, disabled or living with learning difficulties a greater chance to live independently? Would the technology be useful when you're discharged from hospital after surgery?

Is it just about the data?

One of the byproducts of these new technologies for our homes is data. Sensors and devices which track everything we do in the home are going to be collecting and processing data about us, our behaviour and our habits. Samsung mention in their research, "Our home will develop digital boundaries that keep networks separate & secure, protecting us from data breaches, and ensuring that family members cannot intrude on each other's data." It all sounds great in a grand vision, but turning that into reality is much harder than it appears.

We expect to hear about Apple's HealthKit later today. Who will you trust with the data from your home in the future? Apple, an American corporation or Samsung, a South Korean corporation? Or neither of them? Is the strategy of connecting our homes to the internet simply a ploy to grab even more data about us? Who will own and control the data from our homes? Where will it be stored? 

Despite the optimism and hope in BK Yoon's keynote, I can't help but wonder about the risks of your home's data feed getting hacked, and what that means for criminals such as burglars, or even terrorists. Do we want machines storing data on the movements of each family member within our own home? Or will tracking of family members who are very young or very old, in and around the home, give families peace of mind and reassurance?

Ultimately, who is going to pay for all of this innovation? Even today, when I talk to ordinary hard-working families about new technologies, such as sleep tracking, I'm conscious it's out of the reach of many. For example, the recently launched Withings Aura system allows you to track and improve your sleep. It's priced at $299.95/£249.95. Given the choice, how many ordinary families would invest in the Withings Aura to improve their sleep vs buying a new bed from Ikea for the same price?

The video below was played during the keynote, and gave me a glimpse into Samsung's vision of the home of the future. BK Yoon claimed that the future is coming faster than we think. He seemed pretty confident. Only time will tell. 

How does this video make you feel? Does Samsung's vision make you excited and hopeful, or does it frighten you? Do you look forward to a home that aims to protect you and care for your family? How comfortable do you feel with your home potentially knowing more about your health than you or your doctor? How will the data about our health collected by our home integrate with the health & social care system? Will the company that is most successful in smart homes be the one that consumers trust the most?

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]


The paradox of privacy

When you're driving the car, would you let an employee from a corporation sit in the passenger seat and record details on what route you're taking, which music you listen to and the text messages you send and receive? 

When you're sitting at home watching TV with your family, would you let an employee from a corporation sit on the sofa next to you and record details on what types of TV shows you watch? 

When you're in the gym working out, when you're going for your daily walk, would you let an employee from a corporation stand alongside you and record details on how long you walked, where you walked, and how your body responded to the physical activity? 

I suspect many of you would answer 'No' to all three questions. However, that's exactly the future that is being painted after the recent Google I/O event. Aimed at software developers, it revealed a glimpse of what Google have got planned for the year ahead. New services such as Android Wear, Android Auto, Android TV and Google Fit promise to change our lives.

In his article titled 'Google's master plan: Turn everything into data!', David Auerbach describes how more sensors in our homes, cities and on our bodies represent a hugely lucrative opportunity for a company like Google: "That information is also useful to companies that want to sell you things. And if Google stands between your data and the sellers and controls the pipe, then life is sweet for Google."

In a brilliant article, Parmy Olson writes about the announcement at I/O of Google Fit, a new platform: "There's a major advertising opportunity for Google to aggregate our health data in a realm outside of traditional search". Now, during the event, Google did state that users would control what health and fitness data they share with the platform. Let's see whether corporate statements translate into actual terms & conditions in the years ahead.

Do we even realise how much personal data are stored on our phones?

Why are companies like Google so interested in the data from your body in between doctor visits? As I've stated before, our bodies generate data 24/7, yet it's currently only captured when we visit the doctor. So, the organisation that captures, stores & aggregates that data at a global level is likely to be very profitable, as well as to wield significant power in health & social care.

Indeed, it could also prove transformative for those providing & delivering health & social care. In the utopian vision of health systems powered by data, this constant stream of data about our health might allow the system to predict when we're likely to have a heart attack or a fall.

Privacy and your baby

When people have a baby, some things change. It's human nature to want to protect and provide for our children when they are helpless and vulnerable. For example, someone may decide to upgrade to a safer car once they have a baby. We generally do everything we can to give our children the best possible start in life.

If you have a newborn baby, would you allow an employee from a corporation to enter your home, sit next to your baby and record data on its sleeping patterns? In the emerging world of wearable technology, some parents are considering using products and services where their baby's data would be owned by someone else.

Sproutling is a smart baby monitor, shipping in March 2015 but taking pre-orders now. It attaches to your baby's ankle, measures heart rate and movement, and interprets mood. It promises to learn and predict your baby's sleep habits. You've got an activity and sleep tracker for yourself, why not one for your baby, right? According to their website today, 31% of their monitors have been sold. The privacy policy on their website is commendably short, but not explicit enough in my opinion. So I went on Twitter to quiz Sproutling about who exactly owns the data collected from the baby using the device. As you can see, they referred me back to their privacy policy, and didn't really answer my question.

The paradox

What's fascinating is how we say one thing and do another. A survey of 4,000 consumers in the UK, US and Australia found that 62% are worried about their personal data being used for marketing. Yet, 65% of respondents rarely or never read the privacy policy on a website before making a purchase. 

In a survey by Accenture Interactive, they found that 80% of people have privacy concerns with wearable Internet of Things connected technologies. Only 9% of those in the survey said they would share their data with brands for free. Yet that figure rose to 28% when respondents were offered a coupon or discount based upon their lifestyle.

Ideally, there would be a way in which we as consumers could own and control our personal data in the cloud, and even profit from it. In fact, it already exists. The Respect Network promises just that, and was launched globally at the end of June 2014. From their website: "Respect Network enables secure, authentic and trusted relationships in the digital world". Surely, that's what we want in the 21st century? Or maybe not. I haven't met a single person who has heard of Respect Network since they launched. Not one person. What does that tell you about the world we live in?

Deep down, are we increasingly becoming apathetic about privacy? Is convenience a higher priority than knowing that our personal data are safe? Is being safe and secure in the digital world just a big hassle?

A survey of 15,000 consumers in 15 countries for the EMC Privacy Index found a number of behavioural paradoxes, one of which they termed "Take no action", "although privacy risks directly impact many consumers, most take virtually no action to protect their privacy – instead placing the onus on government and businesses". It reminds me of an interaction I had on Twitter recently with Dr Gulati, an investor in Digital Health. 

What needs to change?

Our children are growing up in a world where their personal data are going to be increasingly useful (or harmful), depending upon the context. What are our children taught at school about their personal data rights? It's been recently suggested that schools in England should offer lessons about sex and relationships from age 7, part of a "curriculum for life". Shouldn't the curriculum for life include being educated about the intersection of your personal data and your privacy?

We are moving towards a more connected world, whether we like it or not. Personally, I'm not averse to corporations and governments collecting data about us and our behaviour, as long as we are able to make informed choices. I like how in this article about the Internet of Things and privacy, Marc Loewenthal writes "discussions about the data created are far more likely to focus on how to use the data rather than how to protect it". Loewenthal also goes on to mention how the traditional forms of delivering privacy guidelines to consumers aren't fit for purpose in an increasingly connected world, "They typically ignore the privacy notices or terms of use, and the mechanisms for delivering the notices are often awkward, inconvenient, and unclear".

When was the last time you read through (and fully understood) the terms and conditions and privacy policy of a health app or piece of wearable technology? So many more connected devices, each with their own privacy policy and terms and conditions: not something I look forward to as a consumer. The existing approach isn't effective; we need to think differently about how we can truly enable people to make informed choices in the 21st century.

Now, what if each of us had our OWN terms and conditions and privacy policy, and then we could see whether a health app meets OUR criteria? We, as consumers, would decide in advance what we want to share, with whom, and what we expect in return. How would that even work? Surely, we'd need to cluster similar needs together, perhaps to form five standard privacy profiles? Imagine comparing three different health apps which do the same thing, and being able to see instantly that only one of them has a privacy profile that meets your needs. Or even, when browsing through the app store, choosing to only be shown those apps that match your privacy profile, as in the sketch below. That would definitely make it easier for each of us to make an informed choice.
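As a sketch of how such matching could work: each app declares its data practices, each consumer holds a profile of what they will accept, and the store only surfaces apps that fit. Everything below, the field names, the apps, the practices, is hypothetical.

```python
# My personal privacy profile: False = I do not accept this practice.
MY_PROFILE = {
    "sells_data_to_third_parties": False,
    "shares_with_advertisers": False,
    "stores_data_outside_eu": True,   # acceptable to me
}

# What each (hypothetical) app declares about its own practices.
APP_STORE = {
    "SleepApp A": {"sells_data_to_third_parties": True,
                   "shares_with_advertisers": True,
                   "stores_data_outside_eu": True},
    "SleepApp B": {"sells_data_to_third_parties": False,
                   "shares_with_advertisers": False,
                   "stores_data_outside_eu": True},
}

def matches_profile(declared, profile):
    """An app matches if it does nothing the profile forbids.
    Undeclared practices are assumed to happen (worst case)."""
    return all(declared.get(practice, True) is False
               for practice, allowed in profile.items()
               if not allowed)

for app, declared in APP_STORE.items():
    verdict = "shown" if matches_profile(declared, MY_PROFILE) else "hidden"
    print(app, "->", verdict)
```

The hard part, of course, isn't the matching logic; it's getting developers to declare their practices honestly and in a machine-readable form in the first place.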

Things are changing: it was revealed last night that Apple have tightened privacy rules in their new operating system for people developing apps using their new HealthKit API. An article cites text pulled from the licence: developers may "not sell an end-user's health information collected through the HealthKit API to advertising platforms, data brokers or information resellers," and are barred from using gathered data "for any purpose other than providing health and/or fitness services."

Apps using the HealthKit API must also provide privacy policies.

This news is definitely a big step forward for anyone who cares about the privacy of their health data, although a guaranteed link to a privacy policy doesn't necessarily mean it will be easy for consumers to understand. I also wonder how companies that develop health apps using the HealthKit API will make money, given current business models are based around the collection and use of data.

Will the news from Apple make you more likely as a consumer to download a health app for your iPhone vs your Android device? Will it cause you to trust Apple more than Google or Samsung? Have Apple gone far enough with their recent announcement, or could they do more? Will Apple's stance lead to them becoming THE trusted hub for our health data, above and beyond the current healthcare system?

How can we as individuals do more to become aware of our rights? As well as the campaigns to teach people to learn how to code, should we have campaigns to teach people how to protect their privacy? When commentators write that privacy is dead, do you believe them?

We're heading towards a future where over the next decade it will become far easier to use sensors to monitor the state of our bodies. Would you prefer a future where my body=my data or my body=their data? The choice is yours.

[Disclosure: I have no commercial ties with the individuals and organisations mentioned in this post]
