Accountability and Transparency in Automated Decision-making

By Eliana Fonseca.

Automation became one of the buzzwords of 2020, gaining more attention than in previous years as the COVID-19 pandemic pushed organisations to find solutions that would enable business continuity. New artificial intelligence and machine learning technologies are accelerating the evolution of automation and, in particular, expanding its use in decision-making processes.

Recent years have seen rapid growth in the use of automated decision-making systems in many areas, such as social media marketing, recruitment, healthcare, and the justice system. The debates involving the benefits versus pitfalls of such systems are invading the legal arena.

Automated decision-making is a decision or output produced by a machine with minimal human intervention, typically by means of algorithms. Human intervention remains key to the definition: even when a machine reaches a decision or produces a result with no human review at the end of the process, the human touch is still present, starting with the upfront design when the system or the algorithm is created.

An algorithm, a word derived from the name of the mathematician Muhammad Ibn Musa al-Khwarizmi (Latinised as Algoritmi), is in simple terms a set of instructions to transform input data into a desired output.
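That definition can be made concrete with a trivial sketch: a fixed set of instructions that turns input data (here, a list of numbers) into a desired output (their average). The example is purely illustrative and not drawn from any system discussed in this article.

```python
# An algorithm in miniature: fixed instructions transforming input into output.
def average(numbers):
    total = 0
    for n in numbers:            # instruction 1: accumulate the inputs
        total += n
    return total / len(numbers)  # instruction 2: divide by the count

print(average([2, 4, 6]))  # 4.0
```

Whatever data is supplied, the same instructions run in the same order, which is the sense in which an algorithm's behaviour is fully determined by its design.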

To create an algorithm, a developer will first collect a dataset, known as the “training set”, which is then analysed and matched to a defined outcome or goal that the algorithm is intended to achieve. The data will be examined to identify specific patterns in the training set that lead to the desired output.

The mechanism consists of feeding the machine with the data contained in the training set so it can learn from it. Take the example of a recruitment process where an organisation wants to hire what it considers “suitable employees”. The algorithm will be set up so that when a resume contains the name of a particular business school or university, or other, sometimes more personal, attributes like nationality or interests outside work (noting the need to address the legitimacy of such parameters), the resume is flagged and placed on a priority shortlist for later review. Resumes that do not contain the predetermined attributes could be disregarded to avoid further processing.
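The flagging logic described above can be sketched in a few lines of code. This is a minimal illustration, not a real recruitment system: the attribute lists and resume fields are hypothetical, and the "personal interests" criterion is included precisely because the article questions the legitimacy of such parameters.

```python
# Hypothetical, predetermined attributes a developer might encode.
PRIORITY_SCHOOLS = {"Example Business School", "Example University"}
PRIORITY_INTERESTS = {"rowing", "chess"}  # personal attributes of questionable legitimacy

def screen_resume(resume: dict) -> str:
    """Flag a resume for the priority shortlist if it matches any
    predetermined attribute; otherwise disregard it."""
    if resume.get("school") in PRIORITY_SCHOOLS:
        return "shortlist"
    if PRIORITY_INTERESTS & set(resume.get("interests", [])):
        return "shortlist"
    return "disregard"

print(screen_resume({"school": "Example Business School", "interests": []}))   # shortlist
print(screen_resume({"school": "Other College", "interests": ["painting"]}))   # disregard
```

The point of the sketch is that every value driving the outcome, from the school list to the interests, was chosen by a person before the first resume was ever processed.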

The last phase of the development of an algorithm is testing and validation, where the algorithm is presented with a new set of data to check whether it can detect the same patterns and generate the desired results.
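The train-then-validate cycle can be sketched with a toy "learner" that counts which words co-occur with the desired outcome and is then checked against unseen data. The data, the word-count threshold, and the labels are all invented for illustration; real systems are far more sophisticated, but the shape of the process is the same.

```python
from collections import Counter

def train(training_set):
    """training_set: list of (words, label) pairs, label 1 = desired outcome.
    Returns the words most associated with the desired outcome."""
    positive = Counter()
    for words, label in training_set:
        if label == 1:
            positive.update(words)
    # keep words seen in at least two positive examples (arbitrary threshold)
    return {w for w, c in positive.items() if c >= 2}

def predict(patterns, words):
    return 1 if patterns & set(words) else 0

# Training phase: the machine "learns" patterns from labelled examples.
train_data = [({"mba", "sales"}, 1), ({"mba", "finance"}, 1), ({"arts"}, 0)]
patterns = train(train_data)

# Validation phase: present new, unseen data and check the outputs.
test_data = [({"mba", "retail"}, 1), ({"history"}, 0)]
accuracy = sum(predict(patterns, w) == y for w, y in test_data) / len(test_data)
print(accuracy)  # 1.0
```

Note that whatever imbalance exists in `train_data` is carried straight into `patterns`, which foreshadows the bias problem discussed below.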

When designing an algorithm, developers have some discretion in deciding what training set data to use, what objectives to set, what patterns are valid, and how to respond to certain outcomes, such as false positives, where an error is incorrectly reported as a positive result.

This helps us understand that algorithms are not mystical creatures or autonomous objects that can be blamed for poor decisions; they are effectively executing human-designed instructions with predetermined concepts and values. Put simply, the algorithm will autonomously execute a decision in the way, and with the criteria, it was taught.

In examining the misconception that algorithms are objective, the results should at best be consistent with their decision design. All data processed by the same algorithm will coherently prompt the same results or decisions based on the same criteria: data containing the same pattern will not be treated differently when processed by the same algorithm.

If, however, being objective is taken to mean that decisions made by algorithms are impartial or unbiased, then the argument requires deeper analysis. Bias and discrimination can often be hidden within the training set, in the objectives set, or in the links between the patterns that prompt the output, as well as in the output itself.

The American docudrama “The Social Dilemma” raises interesting points regarding algorithms. In one of its passages, Dr Cathy O’Neil, Data Scientist, defines algorithms as “opinions embedded in code” and says that algorithms are “not objective”. She further notes, “Algorithms are optimized to some definition of success. So, if you can imagine, if a commercial enterprise builds an algorithm to their definition of success, it’s a commercial interest, it’s usually profit”.

Take the example of Amazon. In 2018, Amazon abandoned an AI-developed recruitment tool when it realised that the tool was not rating candidates in a gender-neutral way, reportedly favouring male candidates. This outcome arose because the tool had been fed and trained with resumes submitted to the company over the previous ten-year period. Most of those resumes came from male candidates, so the system was taught from data skewed in male candidates' favour, and those patterns influenced its subsequent decisions. It was said that gender was not given as a key term in the vetting process; however, since most of the data related to men, some gender-based patterns proved decisive in the decision-making process.

Underestimating the influence of human intervention on automated decision-making processes may lead to rejecting the use of technology and software outright, rather than recognising that the issues lie within the algorithm itself and the specific values or patterns it uses, which are what should really be challenged and scrutinised.

When an algorithmic decision is found to be biased or discriminatory, prompts a wrong result, or infringes a human right, the key issue may be identifying who is held accountable for it: the designer, the executor, or the person who interpreted that decision and acted on it.

Accountability may be difficult to assess, as the developer may argue that they did not know how the algorithm would be used in its workplace application. The person implementing the algorithmic tools may, in turn, not fully understand how they operate, what situations they were designed for, and the extent of their application and limitations. Without an understanding of its source code, the user may be applying the algorithm to situations it is not adequately equipped to handle.

To manage this, accountability has to be considered in the algorithm development process, supported by relevant implementation procedures, cautions, and policies that ascertain who has the necessary degree of control, the appropriateness of the algorithm's use in defined application areas, and the authority to be held accountable for errors.

Another key concept is transparency, which is critical when relying on algorithms for decision-making. It is not enough to give a person the right to scrutinise an individual or organisation for action taken on the basis of an algorithmically suggested decision, although it can be argued that leaders do need to self-assess all advisory inputs they receive. For that right to be effective, a potential claim must have a clear foundation, and that can exist only when the claimant can challenge the validity, accuracy, and legality of such a decision. This is achieved by giving transparency to how the algorithms work.

This extends beyond transparency of the algorithm's structure or code to the appropriateness of the training set, considering elements such as potential biases hidden in the data, which may not be easily detectable from a review of the code.

In the famous case Loomis v. Wisconsin, the State of Wisconsin was challenged for using a decision support tool, the “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), in sentencing Eric Loomis to six years in prison. Apart from claiming that COMPAS was gender and race biased, it was alleged that using such software in sentencing violated the defendant’s right to due process because it prevented him from challenging the scientific validity and accuracy of the test: access to the algorithm was impossible, as it was protected as a trade secret and supplied only as executable code.

The need for transparency is often countered by an entity's interest in protecting trade secrets. Intellectual property rights and private protocols are business assets, and the tension may arguably be remedied by having algorithms reviewed by independent auditors, an approach that would also have to consider how code updates are managed. Conversely, algorithms made fully open to the public could be easily manipulated and gamed for ill purposes.

While debates remain open and laws are only starting to address critical points regarding the use of artificial intelligence, it is a must for companies using automated decision-making systems to develop policies for the responsible development, implementation, use, and review of such systems.

This may include considering the legitimate purposes for using algorithms, the appropriateness and limitations of their use in complex situations, the depth and breadth of the datasets that guide their applicability, and the biases and other ethical issues around privacy and appropriate information use.

As security concerns around information management systems arise, such policies also have to consider the protection and security of the data used by the algorithm, any authentication or certification required, the protection of the code, the need to update the algorithm to remedy any issues found, and the authorised users of these systems.

Independently of system issues, such decision-making technology should still be considered an input, with human intervention required for the ultimate decision on complex, high-risk, or legally challenging matters.

About the author:

Eliana Fonseca is a qualified lawyer, admitted to the Argentinian Bar Association with an international career focused on Middle East countries, and currently working and living in Dubai, United Arab Emirates (UAE).

She works in the Luxury Industry as an Associate Legal Counsel of one of the leading retailers of luxury watches and jewelry in the UAE.

Eliana is a certified Legal Project Practitioner (LPP) and recognized by the International Institute of Legal Project Management (IILPM) as the Accredited Training Provider (ATP) in the UAE.

#ElianaFonseca #legaltech #decisionmaking #AI #automation #legal


© 2017|2020 LegalBusinessWorld™ | All Rights Reserved
