By Joshua Walker.
I. Background (How Legal Innovation Efforts Fail by Definition)

There is a common flaw in “legal innovation” efforts: the term is generally undefined. As attorneys, we would not tolerate a material undefined term in an agreement. Why tolerate one in an area that purports to fundamentally impact our work processes? Furthermore, how can one really define “success” if “failure” is not similarly defined? This ambiguity is somewhat reminiscent of salutary but frequently ill-defined “Ethical AI” efforts. A superficial, consequence-free treatment of profoundly important legal issues looks like “ethics theatre”: a red herring to prevent legal from intervening in engineering.
Similarly, failing to provide formal structure around “innovation” projects—whether legal or otherwise—tends to induce “innovation theatre”: activities designed to create a kind of shadow puppetry of change, without its back-breaking, painful reality. Improving client outcomes, at scale, in fast-moving, high-stakes engagements is more like breaking rocks at speed, or shifting lumber, than the technical sand play it is depicted to be. (This does not mean that innovation is not “fun”. Indeed, it is one of the most rewarding (and scalably beneficial) things one can do. But it is the kind of “fun” or reward one gets after building a cabin, or finishing a painting. It follows effort and very sustained focus. It is the “fun” of craftsmanship—much like law at its best.)
Where new legal work is driven by appearance ab initio (as opposed to ex post communication of early success), it has two negative effects. First, it occludes real innovation. It is hard for “expensive” improvements to flourish when funders and customers are offered an ocean of facile, empty options that look the same. Second, it saps the incentive to do the requisite heavy lifting for real change by rewarding the communicator (e.g., the law firm) too early. As summarized in “On Legal AI” (Fastcase; Full Court Press 2019): Pretension is the thief of action.
This note defines “legal innovation” and, leveraging that definition, poses ten challenges to attorneys and legal industry mavens. Each may be completed within 12 months.
II. Legal Innovation (A Qualitative Equation)
Legal innovation can be defined in five characters: RP2 > RP1. In plain English, legal innovation (or “LI”) is where the return on investment of process two exceeds the return on investment of process one.
Here are some rather more technical parameters around that definition: R generally refers to the net outcome of the system—including total risk and value. P1 generally means the “baseline”—the process against which we are comparing our new [potentially improved] process, P2. As evident in this term’s lack of adjectival adornment, “process” does not mean technological process. It means process. We should be agnostic to technology per se. R drives all.
At the risk of (a) being too fancy and (b) sounding too “mathematicky” (not a word), we can refer to the thing we are trying to achieve in legal innovation as “delta R”: an outcome where process two has outperformed process one.
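The qualitative test above can be made concrete with a toy calculation. The sketch below is illustrative only: the figures are invented, and the decomposition of R into value, cost, and risk-adjusted loss is one plausible reading of “net outcome of the system—including total risk and value”, not the author’s formal model.

```python
# Hypothetical illustration of the RP2 > RP1 test. All figures are
# invented; "net return" here bundles value delivered, cost, and an
# expected (probability-weighted) risk loss.

def net_return(value, cost, expected_risk_loss):
    """Net outcome R of a process: value minus cost minus risk-adjusted loss."""
    return value - cost - expected_risk_loss

# P1: the baseline process (e.g., a fully manual document review).
r_p1 = net_return(value=500_000, cost=200_000, expected_risk_loss=50_000)

# P2: the candidate process (e.g., the same review, partially automated).
r_p2 = net_return(value=500_000, cost=120_000, expected_risk_loss=60_000)

delta_r = r_p2 - r_p1
print(f"RP1 = {r_p1:,}; RP2 = {r_p2:,}; delta R = {delta_r:,}")
print("Innovation." if delta_r > 0 else "Not (yet) innovation.")
```

Note that in this toy example P2 carries a *higher* expected risk loss than P1; it still wins because the cost savings more than compensate. That is the point of measuring the net outcome rather than any single factor.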
By way of analogy: If a surgeon used new tool X for a surgical procedure Y, and it (X&Y) led to a 20% decrease in surgical or contemporaneous mortality and a 20% improvement in health outcomes, all factors reasonably considered and controlled for, we would call that “innovation”.
The same analysis may be harder for “word surgeons”, but not impossible. We try to separate the procedure from the outcome in both cases. For both fields, we must also measure net societal, business, and particularly client costs, with an emphasis on individual health utility/wellness. Even if one believes that “cost should be no object” in medical, safety, or legal fields (e.g., discovery), cost is . . . inevitably an object.
Allocating unlimited amounts to any particular procedure or group—or, worse, being totally blind to economics or economic impact—will end up killing people, by depriving someone, somewhere. By burdening the system—or, more importantly, a human—beyond its or her capacity to sustain that burden at any point in time, you will break one or the other. Good intentions may be unlimited. But budgets are finite.
Armed with a simple formula, we can separate innovation theatre from innovation reality. Even a rough, qualitative formula can serve as a kind of intellectual razor—sorting project means and ends. We need only apply it carefully, perhaps surgically, understanding its limitations (especially when applied to complex legal artifacts, relationships, and culture).
III. Ten Challenges (For the Next 12 Months)
To that end, here are ten challenges designed to be completed within a twelve-month period. We do not expect any one person, or even any entity, to do them all . . . though this is theoretically possible. The ultimate goal is concrete, provable improvement in client outcomes—demonstrably helping people better—but we space these out amongst different categories and levels. Educating ourselves and others, for example, is a legitimate means to that ultimate goal.
The metes and bounds of any challenge should be scaled to present resources and opportunities—but small, concrete pilots with rapidly testable results are encouraged.
There are several somewhat radical elements to these challenges. First, the efforts are designed to be social. Second, you have permission to fail (or at least to be modest in your initial goals). Both of these things are somewhat antithetical to a hard-charging elite working with high stakes. Socializing our failures and experiments is perhaps one of the more anathemic (this word does not exist) operations I can think of for the Bar. We earn our keep and our clients through a patina of perfection and vigorous shows of competence. But if we let difficulty and the inevitable initial failures stop us from developing the next generation of legal solutions, the world will simply pass the profession by, and operations on the edge of “legal” work will increasingly accrete to others more agile—accountants, pure legal tech companies, and others. Rather, start forming simple habits to evolve yourself, your enterprise, and your clients’ legal functions.
Indeed, I argue that we have a legal duty to engage in such challenges, and operationalize our successful outcomes.
“To Many Eyes, All Bugs Are Shallow”. Socializing experience and results (not client data or outcomes) is as important here as it is for software coders to commiserate and cross-fertilize software development approaches. The difficulties we are addressing are too profound and too socially dependent to address alone. Think of the present conversation as a kind of “Inns of Court” for legal system and process improvement (as well as a hub for vetting and socializing tools, technologies, resources, etc.).
Failure is an Option. Failure is Data. The second antithetical feature of these challenges is that you have permission to fail and/or be lousy at your first efforts.
This is probably the sole unique feature of “Silicon Valley” as a concept, according to its votaries: That failure of one effort is generally considered an advantage in the next one.
Perhaps this is the social instantiation of the scientific method: Where every experiment is considered a data gathering positive, even (and sometimes especially) where the experimenter’s hypothesis is disproven. The experiment generates data. And we are not prophets about outcomes. Incrementally, we gnaw away until we get to the truth. And the experimental effort is essential to generating the accidents and incidents that ultimately provide client value.
How do we experiment without affecting clients? In short, we already do, and we already are. We are already “experimenting” constantly with clients and client data, and these “experiments” (called “client engagements”) definitely affect them. Just because we avoid “blow-ups” for certain periods, and just because we are systematically diligent from a craft perspective, does not mean we are not constantly gambling with client outcomes. We do not have systematized data on individual client outcomes, much less nation-state outcomes.
A better question to ask the Bar, ourselves, is: How have we been experimenting so badly for so many centuries? A good experiment, any good sustained process, requires collecting outcome, cost, and other types of data. What the Bar has been doing is more like alchemy.
Let me give you an example of a “good experiment” that can only have a positive outcome for clients: a law firm analyzing its litigation outcomes and costs against a peer-group cohort. This is analyzing historical data. But it is likely to reveal major surprises against innate firm biases and procedural assumptions. If a firm were worried about liability or confidentiality, it could handle the entire analysis under privilege (through a separate, non-competing firm). (But note that this latter consideration is an issue for the firm, not the client.)
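The peer-cohort experiment above is, at bottom, a simple aggregation exercise. As a minimal sketch (with invented case records, and with “win rate” and “average cost” standing in for whatever outcome metrics a real study would define):

```python
# Toy version of the peer-cohort comparison. Each record is a resolved
# matter: (won?, total cost in USD). All data here is invented.
from statistics import mean

firm_matters = [(True, 180_000), (False, 240_000), (True, 310_000), (True, 150_000)]
peer_cohort  = [(True, 140_000), (True, 200_000), (False, 260_000),
                (False, 220_000), (True, 190_000)]

def summarize(matters):
    """Return (win rate, average cost) for a set of resolved matters."""
    win_rate = sum(1 for won, _ in matters if won) / len(matters)
    avg_cost = mean(cost for _, cost in matters)
    return win_rate, avg_cost

firm_win, firm_cost = summarize(firm_matters)
peer_win, peer_cost = summarize(peer_cohort)
print(f"Firm:  win rate {firm_win:.0%}, avg cost ${firm_cost:,.0f}")
print(f"Peers: win rate {peer_win:.0%}, avg cost ${peer_cost:,.0f}")
```

A real study would, of course, control for matter type, venue, stakes, and posture; the point is only that the first pass requires arithmetic, not alchemy.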
Another comparable (if more complex) project could be done with transaction outcomes. A third project (which controls against client risk), would be to design and test a new technologically-aided procedure for completing a new process. When an engineer develops a new bridge, she doesn’t just throw up some tarmac, girders, and cable and then start sending cars across the contraption. (I argue that this is precisely what attorneys do most of the time, albeit guided by experience.) There are explicit phases. To radically simplify: Design, Test, Build, Test Again . . . and only then let people use it. (See also “On Legal AI” re the “EDEN” method for legal artificial intelligence development.)
Risk of Stasis, Versus Risk of Change. The other problem with asking “what about client risk?” is that it is only one half of the equation. “What about the risk of not changing?” is an equally valid question. Failing to improve—stasis—leads to economic death in many senses, but most particularly to law firms themselves, as institutions. What people really mean when they pose the former rhetorical question is this: arrogant change, insensitive to legal idiosyncrasy and untested (by experience or model), is likely to blow up. True.
But these are not the experiments or the challenges we are suggesting here. Quite the converse. The balancing of these risks is entailed in “RP2 > RP1”. But this subject deserves a broader conversation than we have time for here. (One may also need to consider “switching costs” in calculating an innovation threshold/delta, but these may ultimately or immediately be washed out with continued operations, as well as further refinement.
Moreover, immediate switching costs can be ameliorated where the future value is concrete (or, at least, deemed a worthy risk). Mechanisms to address such switching costs include (i) contractual, (ii) third-party, and/or (iii) client-based mechanisms or investment. Thus, clear prospective value can be parlayed into present investment. For example, prospective law firm software and analytics licensing fees can be parlayed into present investment to create such a product.)
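The “washing out” of switching costs can be made concrete with a back-of-the-envelope amortization. The numbers below are invented, and the linear, undiscounted model is a deliberate simplification (a real analysis would discount future periods and model ramp-up):

```python
# Invented figures: a one-time switching cost can swamp delta R in the
# first period yet wash out over continued operations.

switching_cost = 300_000       # one-time cost to adopt process two
delta_r_per_period = 70_000    # recurring gain of P2 over P1, per period

def cumulative_net(periods):
    """Cumulative gain from switching, net of the one-time cost."""
    return delta_r_per_period * periods - switching_cost

for t in (1, 3, 5, 8):
    print(f"after {t} period(s): net {cumulative_net(t):,}")
```

On these assumptions the switch looks like a loss for the first four periods and a clear win thereafter—which is exactly why judging an innovation on its first period alone is a category error.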
Again, permission to fail and/or be lousy at your first efforts is critical. Most of us (and this author especially) are lousy mathematicians, and middling “engineers” at best. That is not the point. We can focus on what we do best: being attorneys. The point of these challenges is to arbitrage even a smidge of quantitative or operational thinking into our standard practices.
I have found that even a basic understanding of other fields may definitively help us (a) ask good questions of our engineering and financial partners and clients, and (b) better understand where we need to hire/partner/translate legal acumen into another domain. Thus, as with any habit or new field, you need to have the chutzpah to fail, meander, and start very small. Pilot projects are good. But they need to be concrete.
To reiterate, by failure I mean failure only in situations which do not affect clients. Experiments are designed to learn. Implementations are designed to save. The ethical rules all apply. Indeed, the ethical rules require us to improve client outcomes—they require us to pursue “delta R”.
The first three challenges are very simple (to say): Achieve delta R for each of the following client/user classes:
Pick a single category. Then pick a single use case from your no doubt myriad book of past or prospective cases.
The fourth challenge is a bit more specific: Given a specific “big data” or “artificial intelligence” (these terms are both . . . ambiguous) project, design, develop, test, and deploy a new legal governance system. (You may deploy “legal AI” in the challenge, but this is not required.) The fifth challenge requires you to design a legal solution from first principles. And so on. Essentially, the challenges involve: six verbs, twelve months, and a very large dollop of your personal discretion. I encourage you to reach out to me personally (e.g., http://linkedin.com/in/joshua-walker-17a9111a8) regarding any issue. Here is the summary:
There are two material innovation issues which we do not have space to address today: innovation governance and (as noted above) risk. As a practitioner, I am actually most proud of some of my failures. For example, I tried hard—but ultimately failed—to convince a major industrial company to loosen somewhat the operational reins on an innovation effort. In hindsight, following that original advice would have yielded billions of dollars to the company. I hate seeing BigLaw making some of the same mistakes in its own innovation governance. Defining that macro innovation structure well will be do-or-die for these kinds of efforts.
The comments and any opinions herein do not reflect the views of any entity.
About the Author
Joshua Walker is the author of “On Legal AI”, perhaps the first fact-based treatise on the subject. Previously, he cofounded and led (i) CodeX: The Stanford Center for Legal Informatics and (ii) Lex Machina, which he also served as CEO and Chief Legal Architect. Walker has been building legal analytic systems for over 25 years, initially as National Team Analyst, Office of the Prosecutor, International Criminal Tribunal for Rwanda. He is currently the Chief Product Officer for Aon IPS; and continues to seek and actively develop the next generation of legal AI solutions.
Walker obtained his J.D. from the University of Chicago Law School and his A.B. from Harvard College, m.c.l. He writes and presents frequently all over the world, for governments, the Bar, and enterprise.