
AI and personal data: a match made in hell?


As Facebook’s Mark Zuckerberg is currently discovering, people care about what’s done with their personal data. Growing public consciousness of the value of such material has spurred the creation of the General Data Protection Regulation (“GDPR”), a key tenet of which is the principle of data minimisation: the idea that firms should collect only the bare minimum of personal data necessary for the accomplishment of a specified task.

Yet this strengthening of the regulatory environment coincides with the burgeoning age of artificial intelligence (“AI”). Technology based on machine learning offers astonishing benefits to businesses and to our everyday lives, but to be of optimum value it needs to gather and process as much data as possible.

So: to reap the benefits of AI, must we let go of our quibbles over personal data?

An example from the health industry

Recently it was revealed that in 2017 Facebook explored a deal with healthcare providers under which hospitals would supply Facebook with anonymised patient information, which Facebook would then match with anonymised Facebook accounts belonging to the same people. Facebook’s stated mission was to improve medical care; for example, its technology might identify when elderly patients had few friends locally and alert hospitals that nurse visits should be considered. Ostensibly, this sounds like a good thing; it could even save lives. But it also involves a firm taking medical records, the most sensitive of personal data, without patient permission, and linking them to people via their social media accounts, potentially a massive breach of privacy. And as the Cambridge Analytica scandal has shown, who can guarantee that such data will be used only for the specified purpose?
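
How can two “anonymised” datasets be matched to the same people at all? Below is a minimal sketch of the general record-linkage technique, in Python; the field names, hashing scheme and sample data are invented for illustration and are not a description of Facebook’s actual proposal. If both sides derive a key from the same quasi-identifiers, the datasets can be re-joined person by person.

    import hashlib

    def match_key(name, dob):
        """Derive a pseudonymous key from quasi-identifiers (illustrative only)."""
        return hashlib.sha256(f"{name.lower()}|{dob}".encode()).hexdigest()

    # Hypothetical "anonymised" hospital records: names removed, hashed keys retained.
    hospital_records = [
        {"key": match_key("Jane Doe", "1950-03-14"), "diagnosis": "hip fracture"},
    ]

    # Hypothetical "anonymised" social-media profiles, keyed the same way.
    profiles = [
        {"key": match_key("Jane Doe", "1950-03-14"), "local_friends": 1},
    ]

    # Joining on the shared key re-links the two datasets to the same person.
    by_key = {p["key"]: p for p in profiles}
    for record in hospital_records:
        profile = by_key.get(record["key"])
        if profile and profile["local_friends"] < 3:
            print("Consider a nurse visit:", record["diagnosis"])

Hashing the identifiers does not sever the link: anyone who holds the same quasi-identifiers can rebuild it, which is why this kind of matching raises exactly the privacy concerns described above.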

Safeguarding the public from the drawbacks of AI

AI is a tremendous tool, ruthlessly focused and efficient, but it has weaknesses, many of which the GDPR seeks to address. Is it possible to do so without compromising AI’s utility?

AI is discriminatory

If humans can learn to be prejudiced and deceitful in pursuit of a desired goal, can we not expect the same of technology equipped with machine learning? Cannot an outcome which appears perfectly rational on paper nevertheless offend moral or social standards?

The classic example concerns automated loan approvals: AI may employ discriminatory practices, such as rejecting all applicants from notoriously risky localities. The GDPR recognises this by granting individuals the right not to be subject to decisions based solely on automated processing, as well as the right to object to an automated decision and demand human intervention, even when they have previously consented to the process. Some have criticised this as stifling AI: what is the point of such technology if every decision it makes concerning personal data has to be verified by humans? The calculations conducted might be so complex that, though the outcome has the appearance of bias, it is in fact fully justified.
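
How does a system nobody programmed to discriminate come to do so? A minimal, hypothetical sketch (the data are invented and scikit-learn is assumed to be available): train a simple classifier on historic repayment records in which past defaults happen to cluster in one postcode, and the model learns to refuse applicants from that postcode regardless of their individual circumstances.

    from sklearn.linear_model import LogisticRegression

    # Invented training data: [income_band, lives_in_postcode_XY1] -> repaid (1) or defaulted (0).
    # Past defaults happen to cluster in postcode XY1.
    X = [[3, 1], [4, 1], [2, 1], [5, 1], [5, 0], [3, 0], [4, 0], [2, 0]]
    y = [0,      0,      0,      0,      1,      1,      1,      1]

    model = LogisticRegression().fit(X, y)

    # Two new applicants with identical incomes, differing only by postcode.
    print(model.predict([[4, 1], [4, 0]]))  # the XY1 applicant is likely to be refused
    print(model.coef_)                      # the postcode feature carries a large negative weight

Nothing in the code mentions discrimination; the bias is inherited from the training data, which is precisely why the GDPR insists on a route to human review.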

Nevertheless, in practice providing an appeals process for applicants refused a loan is hardly a new or burdensome requirement. Developers will need to ensure that such technology is capable of recording its reasoning, on which more below.

AI is secretive

Where automated decision-making is used in data processing, the GDPR requires subjects to be provided with privacy notices which explain the logic being used by the AI.

The central complaint in response is that, with AI becoming ever more complex, it is difficult to explain its operation in language the ordinary person can understand. Such a requirement, the argument goes, devalues the technology by preventing machine-learning tools from becoming smart beyond human comprehension.

Despite this, the GDPR makes clear that details of the rationale or criteria relied upon are all that is required, rather than explanations of the underlying algorithms, and these should not be difficult to supply. Indeed, would it be desirable to create technology whose processes we cannot explain? Would that not make it difficult to identify malfunctions?

The likely result is a focus on developing “explainable AI”, which should be seen as a positive thing. It will also encourage businesses purchasing technology from third parties to understand their new tools before they buy.
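
What might “recording its reasoning” look like in practice? A minimal sketch, assuming a simple linear scoring rule (the weights, threshold and field names below are invented): alongside each automated decision, the system stores the per-criterion contributions, so that a human reviewer, or the data subject, can be told which factors drove the outcome.

    # Illustrative decision weights for a hypothetical loan-scoring rule (not a real product).
    WEIGHTS = {"income_band": 0.8, "years_at_address": 0.3, "missed_payments": -1.5}
    THRESHOLD = 2.0

    def decide_with_rationale(applicant):
        """Return an approve/refuse decision plus a per-criterion record of the reasoning."""
        contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
        score = sum(contributions.values())
        return {
            "decision": "approved" if score >= THRESHOLD else "refused",
            "score": round(score, 2),
            # Criteria ranked by influence: the plain-language record a reviewer or
            # data subject can be shown, rather than the raw algorithm itself.
            "criteria": sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True),
        }

    print(decide_with_rationale({"income_band": 3, "years_at_address": 2, "missed_payments": 1}))

A record of this kind is far closer to the “rationale or criteria” the GDPR asks for than a dump of the underlying algorithm, and it is also what makes a meaningful appeal possible.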

AI is vulnerable

The bigger and more complex AI technology becomes, the greater the prospect of cyber-attack and data theft, and the harder such breaches may be to detect. The GDPR requires that personal data be held securely and that data subjects be notified of any breach. This may discourage developers from creating AI technology so vast that they cannot secure it. Again, some possible functions of AI may be lost, but those whose data it relies upon will be protected.

A healthy balance

As always with AI, the question is whether the potential for harm outweighs the potential for good. It is, at least in theory, entirely possible to harness the benefits of AI without compromising core human freedoms such as privacy. Israel, for example, looks set to make the medical records of some nine million citizens available to developers, with the goal of advancing preventative medicines (and making an estimated $600 million)—but participation is voluntary, and safeguards have been promised to protect privacy and security.

AI is designed, built and utilised by humans—the GDPR is an important intervention to remind us of our power to ensure it remains a servant rather than a tyrant.

 

About the author

Alex McPherson graduated from Oxford University in 2003, and gained many years of broad legal experience at leading law firms Freshfields Bruckhaus Deringer and Hogan Lovells, including client secondments at Goldman Sachs, Tesco and ExxonMobil.

He was shortlisted for Mergers & Acquisitions Deal of the Year at the Lawyer Awards.

He co-founded Ignition Law in 2015, a full-service law firm focused on start-ups, scale-ups and entrepreneur-led businesses, which has provided pragmatic and cost-effective legal services to over 800 clients to date.

Ignition Law was shortlisted at the 2018 Lawyer Awards, won for Innovation at the 2017 LegalWeek Awards and has received a number of other accolades. Ignition Financial was set up in 2016 and mirrors Ignition Law’s values and approach, with a team of ex-Big Four accountants who share a passion for working with start-ups and scale-ups.

Outside of Ignition, Alex is a fellow of the RSA, sits on a corporate advisory board of LexisNexis and has lectured on the Cambridge University Masters in Corporate Law and at various leading law and business schools.
