
Legal tech: Beyond the myths #1

By Arnoud Engelfriet


Legal tech is coming. With Artificial Intelligence on board. Ah, yes. We have seen and heard so many promises: it will transform our work. It will replace lawyers, reduce tedious work. And so on. Still, here we are, still typing away in Word while the shiny AI-powered workflow optimization tool gathers dust in the corner. Often the reason is the same: the tool was overpromised and has underdelivered. This series will take a look at the various myths and misconceptions around AI in the legal sector. What can we expect, and what is still a fairytale?



Swimming submarines

Let’s get the biggest myth out of the way first: computers aren’t intelligent, and they never will be. That is not to disparage their capacities, which are formidable and can deliver results that far outclass the best work of humans. It’s just that this is not intelligence, at least not in the foreseeable future. As computer scientist Edsger Dijkstra famously put it, “the question of whether machines can think is about as relevant as the question of whether submarines can swim.”


This is important because when considering the capabilities and results of AI, we tend to compare them to human capabilities and results. We expect human reasoning, and the type of results (and mistakes) that humans produce as well. And AI just isn’t delivering on that front. Fundamentally, AI does not think in any sense we would recognize. In its most common form, AI is driven by statistics: pattern recognition, similarity clustering, outlier spotting, and so on.

Jokingly it has been proposed that any mention of AI should be replaced by “giant Excel charts.” There is a kernel of truth to that. If you start from the assumption that AI does not understand language but can spot patterns in language, and can look up other patterns that fit better, you will have a much easier time accepting its analysis of a legal text.
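To make that “giant Excel chart” image concrete, here is a minimal sketch, not any vendor’s actual code, of what pattern matching over legal language can look like. The example clauses, labels and choice of library (scikit-learn) are purely illustrative assumptions.

```python
# Minimal sketch: "pattern matching, not understanding" applied to a clause.
# Example clauses and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_clauses = {
    "governing_law": "This Agreement shall be governed by the laws of California.",
    "limitation_of_liability": "Liability of either party is limited to two million dollars.",
    "confidentiality": "The Receiving Party shall keep all Confidential Information secret.",
}
new_clause = "The laws of the State of California apply to this Agreement."

# Turn every clause into a bag-of-words vector and compare by cosine similarity.
matrix = TfidfVectorizer().fit_transform(list(known_clauses.values()) + [new_clause])
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()

# The "analysis" is nothing more than picking the statistically closest pattern.
for label, score in sorted(zip(known_clauses, scores), key=lambda pair: -pair[1]):
    print(f"{label}: {score:.2f}")
```

No rule in this sketch knows what “governing law” means; the top-ranked label simply shares the most word statistics with the new clause.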


Accuracy and trust

This misconception causes us to mistrust AI results and to misjudge the usefulness of AI legal analysis and reviews. We are used to certain types of mistakes from, say, a junior lawyer: he or she will miss complicated or rare exceptions, or focus too much on the letter of the law and forget the business aspects. A senior partner would never do that, but could go too fast, or be so focused on her hobby horse of IP protection that she treats liability as a trivial issue taken care of by insurance.


AI never makes that kind of mistake. However, there is an entirely new class of mistakes that an AI makes when it dives into case law (legal research) or extracts information from contracts. Based on statistics, a particular Superior Court verdict may best fit the current legal question. If so, the AI will happily suggest it as the winning argument. However, a legal analysis is often expected to cite certain cases, so this suggestion will be seen as “off”, not what a human lawyer would do. Similarly, suppose a human lawyer reviews a contract and sees a clause she’s never seen before. In that case, the lawyer will flag it with a question mark and seek a colleague’s input, or perhaps consult a legal library or expert system to learn more. But when an AI encounters such a clause in a legal review, it will simply assign the label that best matches the clause according to its underlying data and algorithms. There’s no such thing as “I don’t know” for a computer. At best, the AI will give this label a low probability or offer several (almost) equally likely alternatives.
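As a rough illustration of how a tool could be made to say “I don’t know” after all, here is a hypothetical sketch; the labels, scores and threshold are invented, and any real product would be far more elaborate.

```python
# Minimal sketch: a classifier always returns *some* label, so "I don't know"
# has to be engineered in by checking the confidence. All values are invented.
def label_clause(scores: dict, threshold: float = 0.6) -> str:
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_score = ranked[0]
    if best_score < threshold:
        # Surface the uncertainty instead of silently picking the best match.
        alternatives = ", ".join(f"{label} ({score:.0%})" for label, score in ranked[:3])
        return f"UNCERTAIN - closest matches: {alternatives}"
    return f"{best_label} ({best_score:.0%})"

print(label_clause({"limitation_of_liability": 0.35, "price": 0.33, "indemnity": 0.22}))
# -> UNCERTAIN - closest matches: limitation_of_liability (35%), price (33%), indemnity (22%)
```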


In a firm where an AI system is newly deployed, this type of mistake will quickly cause human lawyers to throw up their hands and dismiss the AI system as useless. And they are right – if this were a new associate, hyped up as the person who would ease everyone’s workload, this would be a very disappointing outcome. But this is a computer, which does things differently. And that takes some getting used to.


Text recognition

A related subject – which we’ll discuss in a future article – is how computers analyze text. In the legal profession, text is the raw material from which all legal output is built. From legal briefs to contracts to advice or pushback in negotiations, it all comes down to what is said and how it is said. In terms of speed, no human will ever beat a computer system at looking for a specific phrase in a large number of documents or at performing any technical operation on a text. However, once we start considering the analysis of the meaning of text, it becomes a very different game.


Most AI systems – even those that promise natural language processing (NLP) – do not understand texts at all. They operate, again, on statistical analysis and hand-written rules: this word is a noun in the plural, so the plural form must be selected from this list of associated verbs. Any intelligence is brought into the system by manual human design and is thus restricted to the list of intelligent steps that the human operator has thought of.
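A deliberately crude, hypothetical sketch of such a hand-written rule, with an invented word list and an obviously imperfect plural heuristic, shows how little “intelligence” actually lives in the system:

```python
# Minimal sketch of a hand-written agreement rule: the "intelligence" is only
# what the human author put into the word list and the heuristic.
VERB_FORMS = {
    "agree": {"singular": "agrees", "plural": "agree"},
    "warrant": {"singular": "warrants", "plural": "warrant"},
}

def conjugate(subject: str, verb: str) -> str:
    # Crude heuristic: treat a subject ending in "s" as plural.
    number = "plural" if subject.lower().endswith("s") else "singular"
    return f"{subject} {VERB_FORMS[verb][number]}"

print(conjugate("The Parties", "agree"))     # -> "The Parties agree"
print(conjugate("The Supplier", "warrant"))  # -> "The Supplier warrants"
```

Every verb the system can handle had to be listed by a person; anything outside that list simply does not exist for the tool.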


A few simple rules already provide a surprising appearance of intelligence. For example, in my NDA reviewing tool, I originally included a date checker: if someone used the tool on Friday after 3 pm local time, the “Waiting for results” page would add one of a set of random phrases, including “Don’t worry, I’ll get this done before the pubs open”. We also wrote the output to include the occasional exasperated remark, such as “This NDA is for ten years, don’t sign that, are you crazy”. Such simple touches go a long way toward creating the impression that this is more than a computer doing calculations.
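For illustration only, such a date-checker rule could look something like the sketch below; the function name and the second phrase are placeholders, not the tool’s actual code.

```python
# Hypothetical sketch of the Friday-afternoon rule; apart from the pub line
# quoted above, the phrases below are placeholders.
import random
from datetime import datetime
from typing import Optional

FRIDAY_PHRASES = [
    "Don't worry, I'll get this done before the pubs open.",
    "Almost weekend - reviewing as fast as I can.",
]

def waiting_message(now: Optional[datetime] = None) -> str:
    now = now or datetime.now()
    if now.weekday() == 4 and now.hour >= 15:  # Friday after 3 pm local time
        return random.choice(FRIDAY_PHRASES)
    return "Waiting for results..."
```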


In the end, however, AI does not understand text. It goes by statistics and patterns. This can be surprisingly effective, especially in the legal profession: there are only so many ways to declare the laws of California applicable to an agreement, after all. But it can also produce very strange errors: if the system recognizes a price clause as a limitation of liability and amends it as instructed, you end up with a price of 2 million plus whatever the insurance pays out. No human would ever make such a mistake, and so this type of mistake is memorable and will underline that AI is far from production-ready.


Of course, this type of mistake is embarrassing, but is it really a fundamental error? Remember, this is how AI works: the clause matches the patterns for “limitation of liability” best, and such a clause should be amended to “two million plus insurance”. This is what I meant above by “computers don’t think”. Does that mean AI is useless? Far from it. The system should first of all be explicit about its confidence in the prediction and offer explanations such as “Clause recognized as a liability clause with 35% confidence, amended to match the minimally acceptable limitation”. With such small steps, the output becomes a lot more understandable.
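A minimal, hypothetical sketch of such an explanatory output, with invented labels and figures, could look like this:

```python
# Illustrative only: attach the confidence and the chosen amendment to the output
# so the reviewer can judge whether the label was plausible in the first place.
def explain_amendment(label: str, confidence: float, amendment: str) -> str:
    return (f"Clause recognized as {label} with {confidence:.0%} confidence, "
            f"amended to: {amendment}")

print(explain_amendment("limitation of liability", 0.35,
                        "liability capped at two million plus insurance payout"))
```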


The true cost of AI

Another perennially touchy subject is the cost of an AI solution. This applies equally to law firms and to in-house counsel seeking to deploy an AI-based solution, although for different reasons. Law firms tend to worry about the effect on billable hours: if it takes a tool two minutes to do what a senior associate now does in an entire workday, what can be charged to the client? Even with the time for reviewing the tool’s output included, the billable time will be significantly lower. But this fear is easily assuaged: surely a senior associate can find better things to do than the type of review that a tool can do just as well?


The concerns of in-house counsel regarding costs are more complicated. There is a direct effect on the department when the tool is deployed: installation, consultancy and training are all billed by the vendor. And after that, a monthly bill will appear for the service. However, there is no direct benefit on the other end: the employees don’t suddenly become cheaper, as they are salaried and don’t reduce their hours now that a tool is taking over the drudgery of reviews or analysis. As with law firms, in-house counsel will find new (and more challenging) work to do, but unlike at law firms, there is no measure of the hours spent or time saved in the legal services provided to the company.


The net result may well be that the department’s costs increase, the people are just as busy, and no one knows whether the average quality or response time has improved. Of course, the answer is obvious: start measuring the quality of work and the response time from the internal client’s perspective. This, however, hasn’t been done much, and in any case it takes time before enough historical data is available to make quantifiable statements. In the meantime, the perception lingers that the AI tool costs money without making anything better. We will talk later about legal operations and implementation strategies to reduce these fears.


Going forward

AI is coming; there is no doubt about that. But with AI come misunderstandings and missed expectations, which may harm its successful deployment in the legal workplace. The key challenge, therefore, is managing the intended users’ expectations, which, given the hype surrounding much of the AI “revolution”, is going to be quite a challenge. In the upcoming episodes of this article series, we’ll discuss these and further myths in more detail and seek practical solutions to get them out of the way.

 

About the Author

Arnoud Engelfriet is co-founder of the legal tech company JuriBlox, and creator of its AI contract review tool Lynn Legal. Arnoud has been working as an IT lawyer since 1993. After a career at Royal Philips as IP counsel, he became a partner at ICTRecht Legal Services, which has grown from a two-person firm in 2008 to an 80+ person legal consultancy firm.

