• Privacy
  • What GDPR means for African countries

    If you struggled to access your favourite news site this morning due to pop-ups insisting that you refresh your privacy settings, you are not alone. The site is almost certainly based in the European Union (EU), or doing business with individuals in the EU.

    Today is GDPR Day. The General Data Protection Regulation (GDPR) is a regulation in EU law that protects the privacy of individuals’ data. It applies to the data of all individuals in the EU, whether that data is used within the EU or anywhere else in the world. It comes into force today, 25 May 2018.

    GDPR brings in sweeping changes to how businesses and public sector organisations can handle information. Under the new rules, permission is required before any personal data can be used, and how long it is kept is closely controlled. Anyone can ask a company to delete their personal information too. Read the statement from the European Commission and its links to resources.

    “Personal data is the gold of the 21st century. And we leave our data basically at every step we take, especially in the digital world. When it comes to personal data today, people are naked in an aquarium,” said Vera Jourová, Commissioner for Justice, Consumers and Gender Equality.

    The GDPR sets out key principles:

    • Lawfulness, fairness and transparency
    • Purpose limitation
    • Data minimisation
    • Accuracy
    • Storage limitation
    • Integrity and confidentiality (security)
    • Accountability

    The accountability principle requires those who use data to take responsibility for complying with the principles, and to have appropriate processes and records in place to demonstrate that compliance, including appropriate technical and organisational measures. Regular testing and reviews are required to make certain that the measures remain effective, or to guide remedial action if required.

    These principles form the building blocks of the legislation. Compliance with the spirit of the principles is regarded as critical for good data protection practice. Even though the principles don’t include fixed rules, penalties for ignoring them are substantial. Failure to comply with the basic principles is subject to fines of up to €20 million, or 4% of total worldwide annual turnover, whichever is higher.

    Individuals have:

    • The right to be informed
    • The right of access
    • The right to rectification
    • The right to erasure
    • The right to restrict processing
    • The right to data portability
    • The right to object
    • Rights in relation to automated decision making and profiling.

    The GDPR introduces a duty on all organisations to report certain types of personal data breach within 72 hours of becoming aware of the breach. If the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms, companies must also inform those individuals without undue delay. This requires that robust breach detection, investigation and internal reporting procedures are in place to facilitate detection and decision-making.
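
    As a minimal sketch of what the internal side of that procedure might track, the hypothetical snippet below (record layout invented, not prescribed by the GDPR) logs when an organisation became aware of a breach and computes the 72-hour cut-off for notifying the supervisory authority:

    ```python
    from datetime import datetime, timedelta, timezone

    # GDPR Article 33: notify the supervisory authority within 72 hours
    # of becoming aware of a personal data breach.
    NOTIFICATION_WINDOW = timedelta(hours=72)

    def notification_deadline(became_aware_at: datetime) -> datetime:
        """Latest time by which the supervisory authority must be told."""
        return became_aware_at + NOTIFICATION_WINDOW

    became_aware = datetime(2018, 5, 25, 9, 30, tzinfo=timezone.utc)
    deadline = notification_deadline(became_aware)
    print(f"Aware of breach {became_aware:%Y-%m-%d %H:%M} UTC; "
          f"notify authority by {deadline:%Y-%m-%d %H:%M} UTC")
    ```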

    Closed-circuit television (CCTV) falls under the GDPR too.

    The UK Information Commissioner’s Office has extensive guidance. Many companies, such as IBM and Oracle, offer guidance too.

    While the GDPR does not apply to African countries directly, many African businesses will already be affected due to their business relationships with the EU or its people. Whether you’re affected yet or not, the GDPR provides a best-practice model to incorporate into business practices and regulatory strengthening.

    African countries’ regulatory strengthening is well underway. South Africa’s Protection of Personal Information (POPI) Act is one example, and contains many of the GDPR’s components.

    First steps towards compliance could be to:

    • Brush up your cyber-security policy, and
    • Implement a privacy management framework to help embed accountability measures and create a culture of privacy across your organisation.

    The Commission’s seven steps for businesses provide pointers too. They are:

    • Check the personal data you collect and process, the purpose for which you do it, and on what legal basis
    • Inform your customers, employees and other individuals when you collect their personal data
    • Keep the personal data for only as long as necessary
    • Secure the personal data you are processing
    • Keep documentation on your data processing activities
    • Make sure your sub-contractors follow the same rules
    • Consider additional provisions, such as:
      • Appointing a Data Protection Officer, particularly if processing personal data is a core part of your business
      • Carrying out a Data Protection Impact Assessment, which is reserved for processing that poses more risk to personal data, for instance large-scale monitoring of a publicly accessible area, including video-surveillance.

    In the meantime, dealing with your privacy preference update requests will ensure that data protection remains at the forefront of your mind, at least for today. Happy GDPR Day.


    Image from this tweet by @EU_Commission

  • How to construct the perfect password

    Passwords are personal, secret, vital and too complicated to be guessed. That’s the theory. It seems the expert advice behind the complicated part hasn’t held up. Appendix A of a US National Institute of Standards and Technology (NIST) report set out password practices. In a Wall Street Journal (WSJ) article, its author, Bill Burr, a former NIST manager, says his advice wasn’t right.

    The original report, in 2003, was NIST Special Publication 800-63, Electronic Authentication Guideline. It’s been updated regularly, and proposed that password management should include:

    • Changing passwords every 90 days
    • Adding capital letters, numbers and symbols to words, such as password being Pa55?w0rd.

    He now says passwords shouldn’t be changed frequently because people often make only small modifications, such as Pa55?w0rd to Pa55!w0rd. These changes weaken passwords when the intention’s to strengthen them.

    A report by the BBC says a better method’s a random string of words, such as "pig coffee wandered black." It takes malware longer to break this code than using random guesses to find Pa55!w0rd.
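
    A back-of-the-envelope entropy calculation shows why. The sketch below assumes a diceware-style list of 7,776 words and the roughly 95 printable ASCII characters; the figures are indicative only, since real attackers exploit patterns rather than guessing at random:

    ```python
    import math

    def entropy_bits(pool_size: int, length: int) -> float:
        """Entropy, in bits, of `length` independent random choices
        from a pool of `pool_size` equally likely options."""
        return length * math.log2(pool_size)

    # 9 characters from ~95 printable ASCII characters, IF truly random.
    print(f"9 random chars: {entropy_bits(95, 9):.0f} bits")    # ~59 bits

    # 4 words drawn at random from a 7,776-word diceware-style list.
    print(f"4 random words: {entropy_bits(7776, 4):.0f} bits")  # ~52 bits

    # Pa55!w0rd isn't 9 random characters: it's 'password' plus common
    # substitutions, a pattern cracking tools try early, so its effective
    # entropy is far below 59 bits. The four words really are random,
    # so their ~52 bits hold up.
    ```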

    Africa’s eHealth programmes and users can adopt this updated advice. They should also follow research on cyber-security practices. Complying with evidence-based actions is always best.

  • UK’s NHS made illegal patient data transfer to Google’s DeepMind

    As eHealth expands its reach, and Artificial Intelligence (AI) becomes routine, benefits will increasingly depend on health systems handing over their patient data to specialist companies. It seems inevitable, but it might not always be legal. The UK’s NHS found that it wasn’t.

    An article in the UK’s Guardian says the Royal Free London NHS Trust, based in London, broke the law in November 2015 when it transferred 1.6m patient-identifiable records to DeepMind, the AI outfit owned by Google. It was part of a project in which DeepMind built Streams, an app that provides clinical alerts about kidney injury. It needed the data for testing.

    The ruling says by transferring the data and using it for app testing, the Royal Free breached four data protection principles and patient confidentiality under common law. It sees the transfer as not fair, transparent, lawful, necessary or proportionate. Patients wouldn’t have expected it, they weren’t told about it, and their information rights weren’t available to them. 

    The UK’s Information Commissioner agreed. Its view’s that the core issue wasn’t the innovation. It was the inappropriate legal basis for sharing data which DeepMind could use to identify all the patients. A better way’s to keep the data in the health system and interface with apps such as Streams only when a clinical need arises. 

    Two issues are important. One’s dealing with an apparent data-grab of millions of patient records by a global organisation. The other’s the way the NHS seems keen to embed a global company into its routine working. Both need regulating and protection of patients’ rights and interests.

    These offer insights for Africa’s health systems to deal constructively with external eHealth and AI firms. The relationships are already on a trajectory. Learning the lesson from the NHS and DeepMind project’s essential so that Africa avoids being dragged along in its wake. There’s still time to do it.

  • Some mHealth apps need better privacy policies

    Keeping people’s health and healthcare data private is a strict requirement even without eHealth. One change that eHealth’s achieved is an increase in societies’ and people’s awareness of privacy’s importance.

    A study by the Future of Privacy Forum (FPF), a US think tank, found that some mHealth providers don’t always see privacy like this.

    The number of apps providing privacy policies has increased since FPF’s surveys in 2011 and 2012. It’s up by 8% on 2012 to 76%. Some 71% of the most popular apps have a link to their policies on their app platform listing page, so users can check them before downloading. Health and fitness apps often control and link to wearable devices and can collect sensitive health and wellness data, but they score below average at providing privacy policies. The 70% score of top health and fitness apps with a privacy policy’s 6% less than all top apps. Some 61% linked to it from their app store listing pages, 10% less than all top apps.

    FPF’s study shows the need for standards of best practice for health and wellness data. To move it on, FPF’s published Best Practices for Consumer Wearables and Wellness Apps and Devices. It’s a detailed set of guidelines for app developers to follow so they can provide practical privacy protections for health and wellness data generated by users. The Robert Wood Johnson Foundation supported the initiative, which includes contributions from several stakeholders, including companies, advocates and regulators. It provides essential privacy policies and requirements for Africa’s health systems to adopt as they expand their mHealth programmes.

    The ten best practices are in three categories:

    Consumer choice

    • Opt-in consent for data sharing with third parties
    • Ban sharing with data brokers, information resellers and advertising networks
    • Opt-outs for tailored first-party advertising
    • Access, correction and deletion rights
    • Enhanced notice and express consent for incompatible secondary uses

    Supporting interoperability (IoP)

    • Compatibility with gold standard privacy frameworks
    • Supports compliance with leading app platform standards

    Elevating data norms

    • Supports sharing data for scientific research with informed consent
    • Strong re-identification standard
    • Strong data security standards.

    The guidelines can extend beyond countries’ existing eHealth, and specifically mHealth, legislation and regulation. For Africa’s health systems, where specific eHealth legislation and regulation is not developed, FPF’s guidelines provide an effective way of stepping it up.

  • EC’s mHealth privacy code can meet Africa’s regulation needs

    African countries recognise the need for privacy in eHealth. Many countries’ privacy regulations are for general data protection and may not be specific enough for all eHealth services. With mHealth being a major part of Africa’s eHealth, it seems to offer a good template to start to build up and apply eHealth regulations.

    The European Commission (EC) offers a helpful starting point. Its draft Code of Conduct on privacy for mobile health apps has been completed. It’s derived from data protection law and is awaiting formal approval. Once that’s attained, app developers can volunteer their commitment to comply with the Code.

    Eleven questions comprise the issues dealt with by the Code:

    • How should app users’ consent be obtained, including valid, explicit consent from citizens to collect and use their data (see the sketch after this list)
    • What are the main principles that need adopting before making an mHealth app available, including purpose limitation, data minimisation, transparency, privacy by design, privacy by default and citizens’ rights
    • What information shall be provided to users before they can use any app, including a formal notice that identifies the app developer; describes the purpose of the data processing, how the data will be used and how it fits with products and services; guarantees fair processing; states the precise categories of personal data that the app will process; states whether personal data will be transferred from the user’s device and, if so, to whom; sets out users’ rights to access, correct and delete personal data; informs users that using the app needs their consent to permit personal data processing; provides contact information where users can ask questions about data protection; and contains a link to a full privacy policy
    • How long can data be retained, including acknowledging the challenge of irreversibly anonymising health data when retention periods expire
    • Security measures, including confidentiality, integrity and availability of the personal data processed by apps, and completing Privacy Impact Assessments (PIA)
    • Can apps contain advertisements, including authorisation by users and different approaches for advertising involving personal data
    • What’s needed before disclosing data to a third party for processing, including data used for scientific research, analytics or Big Data analysis
    • Can personal data collected by apps be used for secondary purposes, including processing operations, needing agreements in place with third parties
    • Where can gathered data be transferred to, including compliance with the rules for international data transfers
    • What action’s needed if there’s a personal data breach, including who to notify
    • How can data be gathered from children, including parental consent, especially when apps are for children’s use
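
    As a hedged illustration of the first question, the sketch below (field names invented, not the Code’s wording) shows one way an app might record valid, explicit, purpose-bound consent so that it can later be demonstrated, and revoked:

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ConsentRecord:
        """One explicit, purpose-bound consent captured from a user."""
        user_id: str
        purpose: str              # e.g. "send kidney-function alerts"
        data_categories: tuple    # exactly what the consent covers
        granted_at: datetime
        revoked_at: Optional[datetime] = None

        @property
        def active(self) -> bool:
            return self.revoked_at is None

        def revoke(self) -> None:
            self.revoked_at = datetime.now(timezone.utc)

    consent = ConsentRecord(
        user_id="u-1001",
        purpose="send kidney-function alerts",
        data_categories=("creatinine results", "age"),
        granted_at=datetime.now(timezone.utc),
    )
    assert consent.active
    consent.revoke()              # the user withdraws consent
    assert not consent.active
    ```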

    A set of questions is suggested for completing a PIA. They’re:

    • Which kinds of personal data will the app process?
    • For which purposes will this data be processed?
    • How has users’ consent been obtained to process their data for every type of use foreseen?
    • Was someone designated to answer questions about the app’s privacy requirements?
    • Was the app developed in consultation with a healthcare professional to ensure that data is relevant for the app’s purposes and not misrepresented to users?
    • Explain what’s been done to respect the following security objectives, and the principles of privacy by design and privacy by default, or why they’re not relevant:
      • Data has been pseudonymised or anonymised wherever possible (see the sketch after this list)
      • Appropriate authorisation mechanisms have been built into the app to avoid unlawful access
      • Effective encryption has been used to mitigate the risk of breaches
      • Independent system security audits have been considered
      • Users are informed when updated versions are available
      • All use of old versions is blocked if the update is security critical
      • The app has been developed using known guidelines on secure app and software development
      • The app has been tested using mock data before being made available to real end users
      • Incidents affecting remotely stored data can be identified and addressed
    • If any personal data collected or processed by the app is transferred to a third party, have appropriate contractual guarantees about their obligations been obtained, including purpose limitations, security measures and their liability?
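
    For the pseudonymisation objective flagged above, here’s a minimal sketch assuming a keyed-HMAC approach, one common technique rather than anything the Code prescribes; key management is deliberately out of scope:

    ```python
    import hashlib
    import hmac
    import secrets

    SECRET_KEY = secrets.token_bytes(32)   # in practice, held in a key store

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier with a keyed, non-reversible token.
        The same key and identifier always give the same token, so a
        user's records stay linkable without exposing who they are."""
        digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                          hashlib.sha256)
        return digest.hexdigest()[:16]

    record = {"patient_id": "ZA-0042", "glucose_mmol_l": 5.4}
    record["patient_id"] = pseudonymise(record["patient_id"])
    print(record)    # identifier replaced by a stable pseudonym
    ```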

    The Code is the culmination of a wide range of contributions. It’s a very valuable contribution as best practice for attaining privacy in apps and for this aspect of mHealth regulation. App developers in Africa can enhance their products by showing how they’ve complied, even if countries haven’t incorporated the Code into eHealth regulations. These can follow on promptly if countries use the EC Code as the initial draft of their bespoke versions.

  • Google's DeepMind has UK's NHS patient data

    Privacy, confidentiality and ownership are important issues for personal health data. The UK’s NHS has given Google’s DeepMind 1.6m patient records from three London hospitals as part of an Artificial Intelligence (AI) project to build Streams, an app to help hospital staff monitor kidney patients. An article in New Scientist expresses some unease and reservations.

    It says the arrangement goes beyond an NHS data-sharing agreement and what was publicly announced. The arrangement also reveals a clear view of what the company is doing and what sensitive data it now has access to.

    While the project is for kidney patients, data given to DeepMind by the Royal Free NHS Trust includes information about people who are HIV-positive, and about drug overdoses and abortions, over the last five years. It also includes access to the Trust’s submissions to the NHS Secondary Uses Service (SUS) database, which covers all hospital treatments, including critical care and accident and emergency departments.

    New Scientist says the data handed over suggests DeepMind has plans for a lot more than just the Streams app. Sam Smith of MedConfidential is quoted as saying: “This is not just about kidney function. They’re getting the full data.”

    Google says all the data’s needed because there’s no separate dataset for people with kidney conditions. This implies that searches of codes, such as the International Classification of Diseases (ICD) and the NHS Healthcare Resource Groups (HRG), can’t provide routes into the information needed. A Trust statement says it “provides DeepMind with NHS patient data in accordance with strict information governance rules and for the purpose of direct clinical care only.”
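
    To make that implication concrete, here’s a hedged sketch of the kind of code-based filter critics suggest should have been possible; the record layout is invented, and ICD-10 codes N17–N19 cover acute and chronic kidney failure:

    ```python
    # Extract only kidney-relevant records by ICD-10 code prefix,
    # rather than transferring a whole patient dataset.
    KIDNEY_PREFIXES = ("N17", "N18", "N19")

    records = [
        {"patient": "A", "icd10": "N17.9"},   # acute kidney failure
        {"patient": "B", "icd10": "J45.0"},   # asthma: not relevant
        {"patient": "C", "icd10": "N18.3"},   # chronic kidney disease
    ]

    kidney_subset = [r for r in records
                     if r["icd10"].startswith(KIDNEY_PREFIXES)]
    print(kidney_subset)    # only patients A and C would be shared
    ```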

    DeepMind’s also developing Patient Rescue to provide a data analytics service to NHS hospital trusts. It’ll use data streams from hospitals to build other tools in addition to Streams. These are planned to provide real-time analyses of clinical data and support diagnostic decisions. By comparing new patients’ information with millions of other cases, Patient Rescue might predict if they’re in the early stages of diseases that are not yet symptomatic.

    While some people might be alarmed at the scale and scope of data-sharing for analytics, it’s on the increase. Oxford University’s Computational Health Informatics Lab has deployed machine learning tools across the four hospitals of the Oxford University Hospitals NHS Foundation Trust. As well as monitoring individual patients’ health, these systems can look for infectious disease outbreaks.

    It looks like we’ll have to come to terms with our health data being shared. It might be more acceptable if it’s transparent, within regulations, and if those regulations are enforced. These are important lessons for Africa’s eHealth regulations as they develop.

  • Would you fight to protect your data?

    Most of us agree: data privacy is a high priority for eHealth. But what each of us should do to protect it is more difficult to answer. An Austrian law student has a suggestion: challenge the biggest company you believe is not protecting your data and win.

    It’s taken Max Schrems three years of legal battles, but now Facebook’s European privacy practices are to be investigated by the Irish data protection watchdog. The October 2015 ruling overturns a previous decision by the watchdog that was premised on a safe harbour agreement which was recently declared invalid by the European Court of Justice (ECJ) after another, separate, two-year case by Schrems against Facebook.

    A Guardian article says the high court in Dublin quashed the Irish data protection commissioner’s original refusal to examine Schrems’ complaint over the alleged movement of his data outside Europe by Facebook, after referring the case to the ECJ.

    For fifteen years a safe harbour agreement deemed European citizens’ data transferred between the EU and US to be adequately protected, allowing US companies to self-certify their data protection practices. Not anymore. Judge Gerard Hogan described it as a landmark challenge, which led to the most important ruling of the ECJ in years, one that “transcended international law … The commissioner is obliged now to investigate the complaint … and I’ve absolutely no doubt that she will proceed to do so,” Hogan said.

    Facebook doesn’t seem happy. It said: “We will respond to inquiries from the Irish Data Protection Commission as they examine the protections for the transfer of personal data under applicable law,” reiterating that it does not give the US government direct access to its servers and does not recognise the National Security Agency’s (NSA) Prism surveillance programme.

    There are lessons for African countries. As we explore our obligations and global good practice to protect data, there are challenges. One is the risk of assuming protection can be implied, without addressing the details. Another is the value of acting regionally to ensure protections.

    Schrems said that watchdogs in 28 European states will now be able to accept complaints about the movement of personal information and that he was considering other challenges to tech giants involved in cloud services. He said, “The court has been very clear a new safe harbour would have to give you the same rights as you have in Europe. That’s going to be hard to get a deal on.”

    Questions remain for Africa, such as: what’s the right starting point for tackling these issues so that African countries don’t have to deal with them alone? Could it be through the African Union Commission (AUC), or Regional Economic Communities (REC)? Are new regulatory bodies needed?

    Either way, tackling the issue soon is probably a good idea and cooperation between governments and companies might be sensible. Somewhere among African students there may be someone in the mood for a lengthy legal battle. They may have already set their sights on a multinational company with questionable data protection policies.

  • Some of England's NHS apps aren't secure

    Like fish and chips, smartphones and apps belong together. To ensure a harmonious relationship, England’s NHS accredits apps for people to use with a degree of confidence. They’re listed in the NHS Choices Health Apps Library, which tests them to ensure they meet clinical and data safety standards. The apps mainly help people lose weight, stop smoking, be more active and drink less alcohol.

    A report in BioMed Central says a study team at Imperial College London reviewed 79 apps and found that this assurance isn’t complete. Some apps, 23 in the review, don’t comply with privacy standards and send data without encrypting it. Some of these apps have been taken off the Library’s list. It raises the question of why and how they were on the list in the first place. NHS England’s piloting a new, more rigorous accreditation process.

    The study found that:

    • 89% of apps transmitted information to online services
    • None encrypted personal information stored locally (see the sketch after this list)
    • 66% sent identifying information over the Internet without encryption
    • 20% didn’t have a privacy policy
    • 67% had some form of privacy policy
    • 78% of information-transmitting apps with a policy didn’t describe the nature of personal information included in transmissions
    • Four apps sent both identifying and health information without encryption.
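
    As a minimal sketch of the missing safeguards, the snippet below uses the Python cryptography library; the record and file name are invented, real apps also need proper key management, and transmission should only ever happen over HTTPS/TLS:

    ```python
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, kept in secure storage
    cipher = Fernet(key)

    record = b'{"name": "A. Patient", "weight_kg": 82}'
    token = cipher.encrypt(record)     # ciphertext, safe to store locally

    with open("record.enc", "wb") as f:
        f.write(token)

    # Decrypt only when needed; tampering or a wrong key raises InvalidToken.
    assert cipher.decrypt(token) == record
    ```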

    For Africa’s increasing emphasis on mHealth, it shows that accreditation must be rigorous and ensure compliance with recognised standards. It may not be as easy as it seems. An audit of the accreditation’s essential.

  • Can de-identification maintain privacy?

    As databases about patients, analytics and Big Data for secondary uses expand in healthcare, protecting patients’ privacy is becoming increasingly important and challenging. Privacy Analytics’ recent white paper, De-identification 201, takes up the challenge. It’s part of Privacy Analytics’ De-Id University.

    Anonymising data is the goal of de-identification: ensuring that data used beyond its primary role can’t be matched to the people it describes. This protects their privacy. The main dilemma is the trade-off between maximising privacy and maximising data’s usefulness. It’s challenging to achieve because removing patients’ names and other direct identifiers, such as social security numbers, from a dataset isn’t sufficient to achieve de-identification. Indirect data, called quasi-identifiers, such as ages, birth dates and post codes, are left in place, and when they’re combined, they can be used to identify individuals.

    Two types of de-identification standards are important to protect privacy: safe harbour and expert determination. Safe harbour is easy to use, but has drawbacks because some data can be lost. Expert determination relies on experts, and retains more of the data and its scope. For both standards, it’s important that users know how the data will be used for its secondary purpose.

    De-identification’s a technique alongside masking. Their main difference is in scope. De-identification can anonymise quasi-identifiers. Masking anonymises direct identifiers, and relies to a large extent on techniques that remove data, so may reduce the data’s usefulness.
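
    A minimal sketch of that difference in scope, with invented field names and generalisation rules:

    ```python
    def mask(record: dict) -> dict:
        """Masking: remove direct identifiers."""
        cleaned = dict(record)
        for direct_id in ("name", "id_number"):
            cleaned.pop(direct_id, None)
        return cleaned

    def de_identify(record: dict) -> dict:
        """De-identification: also coarsen quasi-identifiers so that
        combinations of them can't single a person out."""
        cleaned = mask(record)
        cleaned["age"] = f"{(record['age'] // 10) * 10}s"    # 47 -> "40s"
        cleaned["postcode"] = record["postcode"][:2] + "**"  # area only
        return cleaned

    patient = {"name": "T. Mokoena", "id_number": "0000000000",
               "age": 47, "postcode": "8001", "diagnosis": "hypertension"}
    print(mask(patient))         # direct identifiers gone, quasi-IDs intact
    print(de_identify(patient))  # quasi-identifiers generalised too
    ```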

    Privacy Analytics is clear that expert determination is a risk-based approach because the level of de-identification depends on a risk assessment of its use or disclosure. In addition to this advice, the white paper includes a valuable appendix of terminology. Privacy Analytics has numerous white papers that provide important insights for Africa’s eHealth regulators and data managers. It’s a valuable reference site.

  • How much privacy are we entitled to?

    Does your health status affect me? If the answer’s yes, then I have the right to protection. How should I be protected without infringing your rights to privacy and confidentiality? These questions may need new conversations, involving wider stakeholder groups, than most health systems have managed so far. They're likely to reveal circumstances in which rights to sharing information could outweigh rights to privacy.

    A recent African example is the Ebola outbreak, which strained communities’ abilities to maintain patients’ privacy and confidentiality. There are less extreme, though equally devastating, examples in which my knowing your health status, whether you like it or not, might be reasonable to protect me, particularly if your job means my life is in your hands.

    Public transport has had its share of tragedies. Many result in the loss of many lives. Examples are bus accidents in South Africa, train collisions in India and elsewhere, and the recent aircraft crash in the French Alps. It seems reasonable to suggest that if buses, trains and planes are regulated properly, then their drivers and pilots should be too. Regulating them would necessitate opening up some aspects of their health records to new types of scrutiny and information sharing.

    It’s more than three weeks since the tragedy when flight 9525 went down in France and all 150 on board lost their lives. It’s been a harrowing time for their families. French prosecutors now know that co-pilot Andreas Lubitz had been under treatment for severe depression. He had been seeing at least six doctors, who had prescribed a wide range of medication. A bewildered flying public is beginning to ask questions about how, with today’s modern technological connectedness, it was possible that the doctors did not know he was doctor hopping, and that his employer did not know of the extent of the risk and disability caused by his mental illness. The answer may be relatively simple; that current privacy and confidentiality laws and good practice limit the amount of sharing that’s possible, between doctors and from doctors to employers, particularly for certain types of diagnoses, including psychological conditions.

    The technical solution provides a pertinent example of eHealth’s potential and its challenges. While a well-connected health information infrastructure, supporting effective levels of interoperability, is technically achievable and would make meaningful sharing possible, there are tremendous human barriers in the way of it being used effectively. Confidentiality is but one highly emotive example. Dealing with these barriers needs robust stakeholder engagement to identify acceptable and appropriate sharing principles. It also needs a strong regulatory framework to apply them and to support employers and employees in managing their relationships effectively to protect the public.

    We can hope that the scale of this unexpected shock will help us to re-imagine practical solutions to these challenges.