
Crisis Management, AI and the United Fiasco | ‘With a Nod to Immanuel Kant’


When you publish an academic book about lawyering that invokes Immanuel Kant, you hardly expect it to become topical on the subject of crisis management within a month of its appearance.

On March 17, 2017, my publisher released ‘Beyond Legal Reasoning: A Critique of Pure Lawyering’. The book was the culmination of my many years of experience at the intersection of law and business. Yes, there is most decidedly something to “thinking like a lawyer.” But, as any lawyer who has ever interacted with a CEO knows, how business people and lawyers respond to risk and uncertainty can be decidedly different.

Sometimes lawyers need to acknowledge the limits to thinking like a lawyer, and get beyond mere legal reasoning. Nothing tests that observation like a corporate crisis in which concerns about legal exposure conflict with reputation, goodwill, and, frankly, common sense.

On April 9, 2017, United Airlines and the police at Chicago’s O’Hare Airport combined to create one. The police forcibly removed Dr. David Dao from the United Airlines seat for which he had duly paid, triggering what the Wall Street Journal editorial page called a corporate public relations fiasco “for the ages.”

United’s CEO, Oscar Munoz, did not comment publicly for two days after the incident, and when he did, it was an unapologetic defense of United’s employees and the “re-accommodation” of overbooked passengers (a phrase the Journal noted was sure to find a place in the “Euphemism Hall of Fame”). A day later, as the share price of United’s common stock dipped significantly and the company became a foil for the late-night talk show comedians, Mr. Munoz and United began the process of trying to put the publicity toothpaste back in the tube. What piqued my interest particularly was a paragraph in the Wall Street Journal article about Mr. Munoz’s initial decision-making process: Mr. Munoz “bent over too far to support his employees,” said a person close to discussions this week among United executives. “I think he got bad advice. He probably listened to lawyers too much,” this person said.

That didn’t surprise me. The theme of my book is that there is a deceptively deductive aspect to legal reasoning. Once a lawyer sets a particular problem into a particular legal theory, the logic of the law, the inference from antecedent conditions to legal consequence, takes over and dictates the result. The real game, therefore, is in the far more mysterious thinking process of setting the problem. How does one come to see that a particular hypothesis might work to explain things either descriptively or normatively about a particular set of facts?

I have no doubt that one of the first hypotheses that sprang to mind was United’s potential tort liability to Dr. Dao. And when the facts and circumstances are in flux, lawyerly logic immediately turns to the process under which legal theory will play out in litigation. Hence, the quicker you get control of the channels of communication and stem the creation of any additional evidence (or, worse, a public admission of the very liability it is your job to avoid), the better. In short, circle the wagons. Millions for defense but not a penny for tribute! It now seems obvious in retrospect that listening to the lawyers, if they did indeed advise circling the wagons, was a mistake.

In the days that followed, Mr. Munoz began to do what he likely should have done from the very first moment the crisis surfaced – apologize abjectly and profusely to Dr. Dao and everybody else, perhaps even personally and publicly. If the idea was to limit United’s legal liability to Dr. Dao, the loss in United’s market capitalization due to the drop in its stock price over the next few days exceeded many times over anything United might have had to pay Dr. Dao (and it will likely pay him anyway!). My point is that these crises are the defining moments of what still passes for human judgment. Judgment and wisdom are so hard to define, study, and capture precisely because the leap from a set of circumstances to a professional theory, whether of legal liability, public relations, accounting treatment, or human resources, is irreducible to any rule or set of rules, except rules of thumb. I’m sure Lanny Davis, the lawyer-PR guru who has counseled the Clintons, Martha Stewart, and others in a host of high-profile crises, would agree that United violated his rule of thumb: “tell it all, tell it early, tell it yourself.” But the reason it’s a rule of thumb and not an algorithm is that the crisis manager still has to make a judgment about whether the rule of thumb applies here.

I juxtapose this against something of great concern to the legal profession. The foremost critic of its paradigm, Richard Susskind, warns of the “increasingly capable” machines that will do the work that has heretofore fed lawyers and their families. To give credit to Mr. Susskind, I agree that machines are likely to replace humans everywhere they can. But that raises the question: where can they replace humans? I think the prospect of humans being replaced by machines making complex judgments under conditions of great uncertainty is a bogeyman, at least for a long, long time. Why?

I like Douglas Hofstadter’s thesis that consciousness arises when a thinking machine, like our brain, evolves to develop algorithms so complex that they permit the subject to refer to itself and consider its own thinking. In the world of living things, the magic threshold of representational universality is crossed whenever a system’s repertoire of symbols, as in language, becomes extensible without any obvious limit. Hofstadter’s view is that algorithmic recursion is the key element that distinguishes the human mind and language, precisely because that level of computational ability generates an open-ended and limitless system of communication. For all I know machines can develop that kind of computational ability, and will someday. But if machine-lawyers replace human lawyers, I would have exactly the same concern about them that I have about the limits of human lawyers’ legal reasoning! What the United incident and the observation about listening to lawyers reminded me is that it is not just lawyers I need to be worried about. That is, if it is possible for a seasoned executive in a consumer-centric business to rely so badly on advice that has come from a human being “thinking like a lawyer,” it seems equally likely that I need to worry about one who relies on advice from an “increasingly capable” machine that also thinks like a lawyer.


A lawyer-machine could process the United situation as follows, with a simple algorithm containing the symbol “>”:

• The lawsuit from Dr. Dao will cost us $X.

• The value of the lost goodwill and market capitalization will cost us $Y.

• $Y > $X (NB: by a factor in the thousands).

Therefore, concentrate on lost goodwill and market cap even if it means doing things that might weaken your position in the lawsuit with Dr. Dao.
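Under stated assumptions, that algorithm reduces to a few lines of code. A minimal sketch, where the function name and the dollar figures are hypothetical illustrations, not estimates from the actual case:

```python
# A toy "machine-lawyer" applying the single ">" rule described above.
# The dollar figures are invented for illustration; they are not
# drawn from the United incident.

def recommend(lawsuit_cost_x: float, goodwill_loss_y: float) -> str:
    """If $Y > $X, protect goodwill and market cap; otherwise protect the lawsuit."""
    if goodwill_loss_y > lawsuit_cost_x:
        return "Protect goodwill and market cap, even at some cost to the lawsuit."
    return "Protect the litigation position."

# Hypothetical: a $5M settlement exposure versus a $1B market-cap loss.
print(recommend(5_000_000, 1_000_000_000))
```

The point of the sketch is how little is actually in it: the hard part is not the comparison but producing credible values for $X and $Y in the first place.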


But if the machine-lawyer does so, it isn’t really thinking like a lawyer. It’s thinking like an algorithmic, albeit inter-disciplinary, crisis manager. At a book talk recently, somebody asked me what I would do in the law school curriculum to facilitate “unlearning how to think like a lawyer” (my coinage). That’s where I want to go from here.

I confess to being something of a radical. Leave it to me and I would tear down many existing pedagogical and disciplinary silos. As a starter, I would scrap the current “law student only” introduction to business entity law in favor of a co-taught, interdisciplinary course for law and business students called “The Law and Finance of Business Organizations.” It is not going to happen soon, however. I have taught at two law schools, each of which shares a building with the university’s business school. Interactions between the two disciplines are rare. Indeed, at one of the schools, the hallways between the law wing and the business wing were referred to as the “DMZ.” At least one reason for the lack of interaction between the schools is the reality that, on most law faculties, the professors who teach business related topics are a small minority. And the ones who actually have business world experience are a minority within that minority.

At the other end of the academic spectrum are the law professors who teach and write in the areas least susceptible to professional inroads. No doubt issues of psychology, sociology, urban planning, and the like inform criminal law and procedure, for example. But I don’t believe the futurists among prosecutors and defense lawyers are having the same existential concerns about artificial intelligence taking over. To get where we think we ought to be, we few radicals need to work very hard at overcoming the institutional hurdles and barriers for which silos are an apt metaphor.

So where might we also teach “unlearning how to think like a lawyer,” at least in the intersection of law and business? The United crisis triggered my own flash of inspiration about crisis management, artificial intelligence, listening to the lawyers too much, and our curriculum. Some forward-thinking law educators, like Dean Andy Perlman at our law school, are creating practical programs embracing the intersection of law and technology. Machines are going to replace human beings when the primary factor of production is little more than processing power. Why employ hundreds of lawyers and paralegals to review documents when machines can do it so much better?

Hence, Suffolk’s Legal Technology and Innovation concentration offers students courses and experiences that will make them attractive to firms and law departments who take advantage of cloud computing, knowledge management systems, social media, electronic discovery, project management, and the like. That is all goodness, but it focuses on what the technology does best. I want to focus programmatically instead on what humans still do better than computers: dealing with multiple inputs in vague or ambiguous situations with significant gaps in information. In other words, crises. There is plenty of opportunity to be a first mover here. One only need do a quick search on the internet to find dozens of books from former crisis managers about their experiences, myriad research papers on crisis management in aviation, medicine, terror response, and companies offering simulation training. In contrast, when I did a Google search for “crisis management simulation law school,” the only course that popped up from any of the almost 200 U.S. law schools was one at the Yale Law School called “Corporate Crisis Management,” taught by a visiting clinical professor whose day job is being a corporate partner at Sullivan & Cromwell in New York City.

The link between Beyond Legal Reasoning and AI is this: what I call “pure lawyering” is, in its own way, as algorithmic as AI. If a lawyer can set the problem into the list of elements that are the “if” clause of an “if-then” rule of law, deduction takes over, and the legal consequence flows necessarily. The lawyer gets to argue it’s game over. The trick, of course, is the setting of the problem, something that isn’t deductive at all. That’s the nonalgorithmic leap of insight or intuition that is still uniquely human. I’m agnostic on the question whether it will always be (see above), but I suspect it’s going to be a long time before silicon-based neural networks can replicate the complexity of a human brain. To be sure, human brains encountering crises still need to account for the heuristics and biases that push them toward wrong answers. I am underwhelmed, however, by the AI argument that anything in the foreseeable future will have all those unique capacities of human thinking, but cleansed of the heuristic and bias errors so ably catalogued by Daniel Kahneman and the other behavioral scientists.
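The “if-then” structure of pure lawyering can be caricatured in a few lines of code. In this sketch the rule’s elements and the facts are invented for illustration, and the deliberately crude set-membership test stands in for the deduction; everything the book calls problem setting happens before the function is ever called:

```python
# A caricature of deductive legal reasoning. Once the facts have been
# "set" into the elements of a rule's "if" clause, the legal
# consequence follows mechanically.
NEGLIGENCE_ELEMENTS = {"duty", "breach", "causation", "damages"}

def legal_consequence(established_elements: set) -> str:
    # Deduction: if every element of the "if" clause is satisfied,
    # the "then" clause (liability) follows necessarily.
    if NEGLIGENCE_ELEMENTS <= established_elements:
        return "liable"
    return "not liable"

# The non-deductive, still-human step is deciding that the raw facts
# "count" as duty, breach, and so on before populating this set.
print(legal_consequence({"duty", "breach", "causation", "damages"}))
```

The function is trivial on purpose: the leap of insight that maps a messy situation onto those four labels is exactly what the code does not capture.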

I have been thinking recently about the intersection of artificial intelligence, simulation training, and human judgment in training pilots. No matter how good the pilots are, there are technologies that have made flying infinitely safer because they have taken the decision making out of the pilot’s hands. TCAS (Traffic Collision Avoidance System) and ground proximity warnings are two examples. The literature is replete with speculation, prediction, reaction, and likely over-reaction about pilot-less flying. I’ve heard from experienced commercial pilots who doubt the ability of the pilot-less plane, for example, to abort a take-off after a tire explosion and evacuate hundreds of passengers, or to troubleshoot a system problem that requires an emergency diversion over dangerous terrain. I simply don’t know, other than to say that not having a human pilot on the plane would make me nervous! But flying is still a physical activity, and its automation has to do primarily with accommodation to the laws of physics and the evolution (as with driverless cars) of our human attitudes toward technology’s capabilities. Nevertheless, an entire field of something called “human factors” has developed, involving psychological, social, physical, and biological research to assess human abilities, limitations, and characteristics, and to apply that knowledge to human interaction with tools, machines, tasks, and systems. Agencies and institutions like NASA, the Federal Aviation Administration, the Transportation Research Board, the U.S. General Accounting Office, and the Food and Drug Administration tap into it. Corporate crises like the United situation or the Tylenol product tampering situation years ago seem to me to be the natural focus of “human factors” research as applied to lawyers. They are the places where thinking like a lawyer (and listening too much to the lawyers, as Mr. Munoz is reported to have done) can turn out to be so wrong.
There is much to be done here, but a simulation course on crisis management, where law students would have to face off with aspiring business managers to resolve something like the United situation, strikes me as a wonderful first step.


Jeffrey Lipshaw is Professor of Law at Suffolk University Law School in Boston, where he teaches contracts and courses in the business curriculum. Before becoming a full-time academic in 2007, Professor Lipshaw spent twenty-six years as a lawyer and business executive, most recently serving as Senior Vice-President, General Counsel, and Secretary for Great Lakes Chemical Corporation. He began his career with the law firm of Dykema Gossett in Detroit, where he was a partner in the litigation and corporate groups, and served as the Vice President & General Counsel of Allied Signal Automotive, a large auto parts manufacturer. Before coming to Suffolk, he was a visiting professor at the Wake Forest and Tulane law schools. He is a graduate of the University of Michigan and of the Stanford Law School.


More on his book ‘Beyond Legal Reasoning: A Critique of Pure Lawyering’ appears on page 21 of the eMagazine.

