• Privacy
  • UK’s NHS made illegal patient data transfer to Google’s DeepMind

    As eHealth expands its reach, and Artificial Intelligence (AI) becomes routine, benefits will increasingly depend on health systems handing over their patient data to specialist companies. It seems inevitable, but it might not always be legal. The UK’s NHS found that it wasn’t.

    An article in the UK’s Guardian says the Royal Free London NHS Trust broke the law in November 2015 when it transferred 1.6m patient-identifiable records to DeepMind, the AI outfit owned by Google. It was part of a project in which DeepMind built Streams, an app that provides clinical alerts about kidney injury. DeepMind needed the data for testing.

    The ruling says that by transferring the data and using it for app testing, the Royal Free breached four data protection principles and patient confidentiality under common law. It found the transfer was not fair, transparent, lawful, necessary or proportionate. Patients wouldn’t have expected it, they weren’t told about it, and their information rights weren’t available to them.

    The UK’s Information Commissioner agreed. Its view is that the core issue wasn’t the innovation. It was the inappropriate legal basis for sharing data that DeepMind could use to identify all the patients. A better way is to keep the data in the health system and interface with apps such as Streams only when a clinical need arises.

    Two issues are important. One is dealing with an apparent data-grab of millions of patient records by a global organisation. The other is the way the NHS seems keen to embed a global company into its routine working. Both need regulation that protects patients’ rights and interests.

    These offer insights for Africa’s health systems on dealing constructively with external eHealth and AI firms. The relationships are already on a trajectory. A lesson from the NHS and DeepMind project is that it’s essential Africa avoids being dragged along in its wake. There’s still time to do it.

  • Some mHealth apps need better privacy policies

    Keeping people’s health and healthcare data private is a strict requirement even without eHealth. One change that eHealth has achieved is an increase in societies’ and people’s awareness of privacy’s importance.

    A study by the Future of Privacy Forum (FPF), a US think tank, found that some mHealth providers don’t always see privacy like this.

    The number of apps providing privacy policies has increased since FPF’s surveys in 2011 and 2012. It’s up by 8% on 2012, to 76%. Some 71% of the most popular mHealth apps link to their policies from their app platform listing pages, so users can check them before downloading. Health and fitness apps often control and link to wearable devices and can collect sensitive health and wellness data, but they score below average at providing privacy policies. The share of top health and fitness apps with a privacy policy is 6% less than for all top apps, and some 61% linked to it from their app store listing pages, 10% less than all top apps.

    FPF’s study shows the need for best-practice standards for health and wellness data. To move it on, FPF has published Best Practices for Consumer Wearables and Wellness Apps and Devices. It’s a detailed set of guidelines for app developers to follow so they can provide practical privacy protections for health and wellness data generated by users. The Robert Wood Johnson Foundation supported the initiative, which includes contributions from several stakeholders, including companies, advocates and regulators. It provides essential privacy policies and requirements for Africa’s health systems to adopt as they expand their mHealth programmes.

    The ten best practices are in three categories:

    Consumer choice

    1. Opt-in consent for data sharing with third parties
    2. Ban sharing with data brokers, information resellers and advertising networks
    3. Opt-outs for tailored first-party advertising
    4. Access, correction and deletion rights
    5. Enhanced notice and express consent for incompatible secondary uses

    Supporting interoperability (IoP)

    1. Compatibility with gold standard privacy frameworks
    2. Supports compliance with leading app platform standards

    Elevating data norms

    1. Supports sharing data for scientific research with informed consent
    2. Strong re-identification standard
    3. Strong data security standards.

    The guidelines can extend beyond countries’ existing eHealth, and specifically mHealth, legislation and regulation. For Africa’s health systems, where specific eHealth legislation and regulation is not yet developed, FPF’s guidelines provide an effective way of stepping it up.

  • EC’s mHealth privacy code can meet Africa’s regulation needs

    African countries recognise the need for privacy in eHealth. Many countries’ privacy regulations are for general data protection and may not be specific enough for all eHealth services. With mHealth being a major part of Africa’s eHealth, it seems to offer a good template to start to build up and apply eHealth regulations.

    The European Commission (EC) offers a helpful starting point. Its draft Code of Conduct on privacy for mobile health apps has been completed. It’s derived from data protection law and is awaiting formal approval. Once approved, app developers can volunteer their commitment to comply with the Code.

    The Code deals with eleven questions:

    1. How should consent of app users be obtained, including valid, explicit consent from citizens to collect and use their data
    2. What are the main principles that need adopting before making an mHealth app available, including purpose limitation, data minimisation, transparency, privacy by design, privacy by default and citizens’ rights
    3. What information shall be provided to users before they can use any app, including a formal notice that identifies the app developer; describes the purpose of the data processing, how the data will be used and how it fits with products and services; guarantees fair processing; sets out the precise categories of personal data that the app will process; states whether personal data will be transferred from the user’s device, and if so, to whom; explains users’ rights to access, correct and delete personal data; informs users that using the app needs their consent to permit personal data processing; provides contact information where users can ask questions about data protection; and contains a link to a full privacy policy
    4. How long can data be retained, including acknowledging challenges to irreversibly anonymise health data when retention periods expire
    5. Security measures, including confidentiality, integrity and availability of the personal data processed by apps, and completing Privacy Impact Assessments (PIA)
    6. Can apps contain advertisements, including authorisation by users and having different approaches for advertising involving personal data
    7. What’s needed before disclosing data to a third party for processing, including data used for scientific research, analytics or Big Data analysis
    8. Can personal data collected by apps be used for secondary purposes, including processing operations that need agreements in place with third parties
    9. Where can gathered data be transferred to, including compliance with the rules for international data transfers
    10. What action’s needed if there’s a personal data breach, including who to notify
    11. How can data be gathered from children, including parental consent, and especially when apps are for children’s use

    A set of questions is suggested for completing a PIA. They are:

    1. Which kinds of personal data will the app process?
    2. For which purposes will this data be processed?
    3. How has users’ consent been obtained to process their data for every type of use foreseen?
    4. Was someone designated to answer questions about the app’s privacy requirements?
    5. Was the app developed in consultation with a healthcare professional to ensure that data is relevant for the app’s purposes and not misrepresented to users?
    6. Explain what’s been done to respect the following security objectives, or explain why they’re not relevant (a minimal pseudonymisation sketch follows these questions):
    • Principles of privacy by design and privacy by default have been followed
    • Data has been pseudonymised or anonymised wherever possible
    • Appropriate authorisation mechanisms have been built into the app to avoid unlawful access
    •  Effective encryption has been used to mitigate the risk of breaches
    • Independent system security audits have been considered
    • Users are informed when updated versions are available
    • All use of old versions is blocked if an update is security critical
    7. App has been developed using known guidelines on secure app development and secure software development
    8. App has been tested using mock data before it’s available to real end users
    9. Incidents affecting remotely stored data can be identified and addressed
    10. If any personal data collected or processed by the app is transferred to a third party, have appropriate contractual guarantees about its obligations been obtained, including purpose limitations, security measures and its liability?
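
    To make the security objectives in question 6 more concrete, here’s a minimal Python sketch of pseudonymisation. The field names, the secret key and the keyed-hash approach are illustrative assumptions, not requirements set by the EC Code:

      import hmac
      import hashlib

      SECRET_KEY = b"app-specific-secret"  # in practice, held securely outside the dataset

      def pseudonymise(patient_id: str) -> str:
          # A keyed hash (HMAC) gives a stable pseudonym that can't be
          # reproduced by anyone who doesn't hold the key, unlike a plain hash.
          return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

      record = {"patient_id": "NHS-1234567", "glucose_mmol_l": 5.8}
      record["patient_id"] = pseudonymise(record["patient_id"])
      print(record)

    Deleting the key later makes the mapping effectively irreversible, which is relevant to the retention challenge raised in question 4 of the Code.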

    The Code is the culmination of a wide range of contributions. It’s a very valuable contribution to best practice for attaining privacy in apps and for this aspect of mHealth regulation. App developers in Africa can enhance their products by showing how they’ve complied, even if countries haven’t incorporated the Code into eHealth regulations. These can follow on promptly if countries use the EC Code as their initial draft to prepare their bespoke versions.

  • Google's DeepMind has UK's NHS patient data

    Privacy, confidentiality and ownership are important issues for personal health data. The UK’s NHS has given Google’s DeepMind 1.6m patient records from three London hospitals as part of an Artificial Intelligence (AI) project to build Streams, an app to help hospital staff monitor kidney patients. An article in New Scientist expresses some unease and reservations.

    It says the arrangement goes beyond an NHS data-sharing agreement and what was publicly announced. The arrangement also reveals a clear view of what the company is doing and what sensitive data it now has access to.

    While the project is for kidney patients, data given to DeepMind by the Royal Free NHS Trust includes information about people who are HIV-positive, and about drug overdoses and abortions, over the last five years. It also includes access to the Trust’s submissions to the NHS Secondary Uses Service (SUS) database, which covers all hospital treatments, including critical care and accident and emergency departments.

    New Scientist says the data handed over suggests DeepMind has plans for a lot more than just the Streams app. Sam Smith of MedConfidential is quoted as saying, “This is not just about kidney function. They’re getting the full data.”

    Google says all the data’s needed because there’s no separate dataset for people with kidney conditions. This implies that searches of codes such as the International Classification of Diseases (ICD) and NHS Healthcare Resource Groups (HRG) can’t provide routes into the information needed. A Trust statement says it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”

    DeepMind’s also developing Patient Rescue to provide a data analytics service to NHS hospital trusts. It’ll use data streams from hospitals to build other tools in addition to Streams. These are planned to provide real-time analyses of clinical data and support diagnostic decisions. By comparing new patients’ information with millions of other cases, Patient Rescue might predict whether they’re in the early stages of diseases that are not yet symptomatic.

    While some people might be alarmed at the scale and scope of data-sharing for analytics, it’s on the increase. Oxford University’s Computational Health Informatics Lab has deployed machine learning tools across the four hospitals of the Oxford University Hospitals NHS Foundation Trust. As well as monitoring individual patients’ health, these systems can look for infectious disease outbreaks.

    It looks like we’ll have to come to terms with our health data being shared. It might be more acceptable if it’s transparent, within regulations, and if those regulations are enforced. These are important lessons for Africa’s eHealth regulations as they develop.

  • Would you fight to protect your data?

    Most of us agree: data privacy is a high priority for eHealth. But what each of us should do to protect it is more difficult to answer. An Austrian law student has a suggestion: challenge the biggest company you believe is not protecting your data and win.

    It’s taken Max Schrems three years of legal battles, but now Facebook’s European privacy practices are to be investigated by the Irish data protection watchdog. The October 2015 ruling overturns a previous decision by the watchdog that was premised on a safe harbour agreement which was recently declared invalid by the European Court of Justice (ECJ) after another, separate, two-year case by Schrems against Facebook.

    An Irish Guardian article says the High Court in Dublin quashed the Irish data protection commissioner’s original refusal to examine Schrems’ complaint over the alleged movement of his data outside Europe by Facebook, after referring the case to the ECJ.

    For fifteen years a safe harbour agreement deemed European citizens’ data transferred between the EU and US to be adequately protected, allowing US companies to self-certify their data protection practices. Not anymore. Judge Gerard Hogan described it as a landmark challenge that led to one of the most important ECJ rulings in years, one that “transcended international law … The commissioner is obliged now to investigate the complaint … and I’ve absolutely no doubt that she will proceed to do so,” he said.

    Facebook doesn’t seem happy. It said, “We will respond to inquiries from the Irish Data Protection Commission as they examine the protections for the transfer of personal data under applicable law,” reiterating that it does not give the US government direct access to its servers and does not recognise the National Security Agency’s (NSA) Prism surveillance programme.

    There are lessons for African countries. As we explore our obligations and global good practice to protect data, there are challenges. One is the risk of assuming protection can be implied, without addressing the details. Another is the value of acting regionally to ensure protections.

    Schrems said that watchdogs in 28 European states will now be able to accept complaints about the movement of personal information and that he was considering other challenges to tech giants involved in cloud services. He said, “The court has been very clear a new safe harbour would have to give you the same rights as you have in Europe. That’s going to be hard to get a deal on.”

    Questions remain for Africa, such as: what’s the right starting point for tackling these issues so that African countries don’t have to deal with them alone? Could it be through the African Union Commission (AUC) or the Regional Economic Communities (RECs)? Are new regulatory bodies needed?

    Either way, tackling the issue soon is probably a good idea and cooperation between governments and companies might be sensible. Somewhere among African students there may be someone in the mood for a lengthy legal battle. They may have already set their sights on a multinational company with questionable data protection policies.

  • Some of England's NHS apps aren't secure

    Like fish and chips, smartphones and apps belong together. To ensure a harmonious relationship, England’s NHS accredits apps for people to use with a degree of confidence. They’re listed in the NHS Choices Health Apps Library, which tests them to ensure they meet clinical and data safety standards. The apps mainly help people lose weight, stop smoking, be more active and drink less alcohol.

    A report in BioMed Central says a study team at Imperial College London reviewed 79 apps and found that this assurance isn’t complete. Some apps, 23 in the review, don’t comply with privacy standards and send data without encrypting it. Some of these apps have been taken off the Library’s list. It raises the questions of why and how they were on the list in the first place. NHS England’s piloting a new, more rigorous accreditation process.

    The study found that:

    • 89% of apps transmitted information to online services
    • None encrypted personal information stored locally
    • 66% of apps sending identifying information over the Internet did not use encryption (a minimal sketch follows this list)
    • 20% didn’t have a privacy policy
    • 67% had some form of privacy policy
    • 78% of information-transmitting apps with a policy didn’t describe the nature of personal information included in transmissions
    • Four apps sent both identifying and health information without encryption.
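
    As a rough illustration of the two most common gaps, encrypting personal information stored locally and only sending identifying information over an encrypted channel, here’s a minimal Python sketch. It uses the third-party cryptography package, and the file name and endpoint are hypothetical, not drawn from the study:

      import json
      from cryptography.fernet import Fernet

      # In a real app the key would live in the platform keystore, not in code.
      key = Fernet.generate_key()
      fernet = Fernet(key)

      profile = {"name": "A. User", "weight_kg": 82.4, "smoker": False}

      # Encrypt before writing to local storage instead of saving plaintext.
      with open("profile.enc", "wb") as f:
          f.write(fernet.encrypt(json.dumps(profile).encode()))

      # Decrypt only when the app needs the data back.
      with open("profile.enc", "rb") as f:
          restored = json.loads(fernet.decrypt(f.read()).decode())

      assert restored == profile

      # Any transmission of identifying data should go over HTTPS (TLS),
      # e.g. "https://api.example-health.app/v1/profile", never plain "http://".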

    For Africa’s increasing emphasis on mHealth, it shows that accreditation must be rigorous and ensure compliance with recognised standards. It may not be as easy as it seems. An audit of the accreditation is essential.

  • Can de-identification maintain privacy?

    As databases about patients, analytics and Big Data for secondary uses expand in healthcare, protecting patients’ privacy is becoming increasingly important and challenging. Privacy Analytics’ recent white paper, De-identification 201, deals with the topic. It’s part of Privacy Analytics’ De-Id University.

    Anonymising data is the goal of de-identification, ensuring that data used beyond its primary role can’t be matched to the people it describes. This protects their privacy. The main dilemma is the trade-off between maximising privacy and maximising data’s usefulness. It’s challenging to achieve because removing patients’ names and other direct identifiers, such as social security numbers, from a dataset isn’t sufficient to achieve de-identification. Indirect data, called quasi-identifiers, such as age, birth dates and post codes, are left in place, and when they’re combined, they can be used to identify individuals.

    Two types of de-identification standards are important to protect privacy: safe harbour and expert determination. Safe harbour is easy to use, but has drawbacks because some data can be lost. Expert determination relies on experts to apply it and retains more of the data’s scope. For both standards, it’s important that users know how the data will be used for its secondary purpose.

    De-identification’s a technique alongside masking. Their main difference is in scope. De-identification can anonymise quasi-identifiers. Masking anonymises direct identifiers, and relies to a large extent on techniques that remove data, so may reduce the data’s usefulness.
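
    A short sketch can make the difference in scope clearer. The fields and generalisation rules below are hypothetical illustrations, not a description of Privacy Analytics’ methods:

      record = {
          "name": "A. Patient",        # direct identifier
          "ssn": "123-45-6789",        # direct identifier
          "birth_date": "1984-07-21",  # quasi-identifier
          "post_code": "SW1A 1AA",     # quasi-identifier
          "diagnosis": "N17",          # clinical content that keeps the data useful
      }

      DIRECT_IDENTIFIERS = {"name", "ssn"}

      def mask(rec):
          # Masking: strip direct identifiers, leave everything else intact.
          return {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}

      def de_identify(rec):
          # De-identification: also coarsen quasi-identifiers so their
          # combination is less likely to single out one person.
          out = mask(rec)
          out["birth_date"] = rec["birth_date"][:4]       # keep year only
          out["post_code"] = rec["post_code"].split()[0]  # keep outward code only
          return out

      print(mask(record))        # still re-identifiable via birth date + post code
      print(de_identify(record))

    Even the de-identified record still carries some re-identification risk; how much generalisation is enough depends on how the data will be used.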

    Privacy Analytics is clear that expert determination is a risk-based approach because the level of de-identification depends on a risk assessment of its use or disclosure. In addition to this advice, the white paper includes a valuable appendix of terminology. Privacy Analytics has numerous white papers that provide important insights for Africa’s eHealth regulators and data managers. It’s a valuable reference site.

  • How much privacy are we entitled to?

    Does your health status affect me? If the answer’s yes, then I have the right to protection. How should I be protected without infringing your rights to privacy and confidentiality? These questions may need new conversations, involving wider stakeholder groups, than most health systems have managed so far. They're likely to reveal circumstances in which rights to sharing information could outweigh rights to privacy.

    A recent African example is the Ebola outbreak, which strained communities’ abilities to maintain patients’ privacy and confidentiality. There are less extreme, though equally devastating, examples in which my knowing your health status, whether you like it or not, might be reasonable to protect me, particularly if your job means my life is in your hands.

    Public transport has had its share of tragedies. Many result in the loss of many lives. Examples are bus accidents in South Africa, train collisions in India and elsewhere, and the recent aircraft crash in the French Alps. It seems reasonable to suggest that if buses, trains and planes are regulated properly, then their drivers and pilots should be too. Regulating them would necessitate opening up some aspects of their health records to new types of scrutiny and information sharing.

    It’s more than three weeks since the tragedy when flight 9525 went down in France and all 150 on board lost their lives. It’s been a harrowing time for their families. French prosecutors now know that co-pilot Andreas Lubitz had been under treatment for severe depression. He had been seeing at least six doctors, who had prescribed a wide range of medication. A bewildered flying public is beginning to ask questions about how, with today’s technological connectedness, it was possible that the doctors did not know he was doctor hopping, and that his employer did not know of the extent of the risk and disability caused by his mental illness. The answer may be relatively simple: current privacy and confidentiality laws and good practice limit the amount of sharing that’s possible, between doctors and from doctors to employers, particularly for certain types of diagnoses, including psychological conditions.

    The technical solution provides a pertinent example of eHealth’s potential and its challenges. While a well-connected health information infrastructure, supporting effective levels of interoperability, is technically achievable and would make meaningful sharing possible, there are tremendous human barriers in the way of it being used effectively. Confidentiality is but one highly emotive example. Dealing with these barriers needs robust stakeholder engagement to identify acceptable and appropriate sharing principles. It also needs a strong regulatory framework to apply them and support employers and employees to manage their relationships effectively to protect the public.

    We can hope that the scale of this unexpected shock will help us to re-imagine practical solutions to these challenges.

  • Healthcare privacy not a big concern for US patients

    Privacy and security around personal health data has been an area of concern for years. An article in Becker’s Health IT & CIO Review says a new poll from Truven Health Analytics and NPR Health shows that health information privacy isn’t a big concern after all.

    A survey of 3,000 adults found that few people were concerned with their health data privacy. Only 11% of respondents expressed privacy concerns about health records held by their physicians. Some 14% had privacy concerns with hospitals and about 16% had concerns with health insurance companies.

    Americans are more comfortable sharing their health information than their social media or credit card purchase information. About 78% said they wouldn’t be willing to share their credit card purchase history and social media activity with physicians, hospitals and insurers, regardless of the planned use and even if it could help improve their healthcare.

    Research from the Ponemon Institute found a similar attitude. People are worried about security and privacy in general, but medical record privacy ranks near the bottom of their concerns.

    An intriguing feature of these findings is that healthcare records contain some details of patients’ financial profiles, such as social security numbers, which are often the target of cyber-crime attacks in the USA. It’d be interesting to know whether US citizens’ concerns about the privacy and security of this data are greater than for their health data.

    Could this relaxed attitude also be true for patients in Africa?

  • The USA's Privacy & Security Forum

    Healthcare privacy and security experts from around the US will gather in Boston on 8-9 September to share information and strategize on how to combat cybercrime, insider threats and other pressing challenges to patient data. The conference comes at a critical time. Less than a month ago, data on 4.5 million patients at Franklin, Tennessee-based Community Health Systems was affected in a breach. It was the second largest HIPAA breach ever reported.

    The two-day Privacy & Security Forum, presented by HIMSS Media and Healthcare IT News, will have 43 speakers and include 19 sessions. Roughly 250 people are expected to attend. Speakers will represent various US healthcare organisations, including Aetna, Kaiser Permanente, Beth Israel Deaconess Medical Center, Partners Healthcare, Parkland Health & Hospital System and Seattle Children’s Hospital. They will address issues such as cybercrime, medical device security, risk mitigation, HIPAA regulations, and vendor and other third-party compliance, among other topics.

    “When it comes to privacy and security, the stakes have never been higher,” said Mike Moran, the forum’s program director. “Our goal with the forum is to create an environment that allows attendees to share information and best practices with each other, and learn from some of the best healthcare privacy-and-security experts in the country.”

    While Africa’s eHealth environment is vastly different to that of the US, there is no reason why African countries can’t learn from US mistakes. These lessons could help African countries develop tools to side-step pitfalls and mitigate risks more effectively.