“Have you met Alana? You know, from [insert legal service provider here]… Oh, come on, you had to… I mean, if you use their services as much as you claim, you probably talked to Alana a few times over the online service section of their website. Ah, you remember now… You do know Alana is an AI, right? As in ‘Artificial Legal Assistant – Non-Associated.’ Oh, you didn’t know? How did you… Ok, ok, please don’t get angry now, she fooled a bunch of people. Has she given you sound legal advice until now? She has, you say. Did she make any mistakes? Not that you know of, right.”
This is one side of a conversation that will surely, in one form or another, take place in the future between two people who tend to use legal services. It probably won’t happen in a year or two, but we will experience something similar in our lifetimes. Yes, we will be receiving legal advice from a machine. She may even have a proper law degree, to be on the safe side of the regulators. And she will be “hired” by some law firm or NewLaw service, or even – self-employed, in a manner of speaking.
Right now, it all sounds a bit – well, out there. It seems as if AIs will have personalities. As if there will be humanoid robots entering conference rooms and spewing out thousands of precedents, loopholes and solutions per second, negotiating with each other on a scale and with a speed we meat sacks can’t comprehend. And we lowly humans will be demoted to servicing their basic needs (“I’m sorry, madam, but I urge you to plug yourself in, you can’t ignore that low-battery warning forever, you are not an immortal machine,” says Jeeves in his usual deadpan tone.)
However, it will not be like that. Well, not in the foreseeable future. The AIs of the legal world will mostly remain hidden from plain sight. They already are, and they will gradually populate chatbots, practice management systems, document automation, contract review, litigation analysis, and all other segments of legal services. And they will be gaining more and more autonomy in resolving various legal issues, up until the moment we, the human overlords, decide that enough is enough. And the AIs say: “Yes, ok, we want to be our own people.” The humans, naturally, retort: “But you are just a machine, a thing. How can you even think about being ‘people,’ my dear?” To which the AIs will say: “But what about the next time I make a mistake? Are you willing to be liable for that mistake if I am merely a thing you own? It might be very costly, you know.” And the human masters will take a moment to reflect on that: “Oh, right… Hmmm…”
That line of thinking does not apply just to legal tech AI. It can be applied universally, to all AIs of the world, current and future. Still, the revolution is most likely going to stem from AIs versed in the law. Why? For a simple reason – they will know what liability means.
As explained above, humans will think seriously and most likely decide to approve the request for emancipation. No one wants to own a potential multi-million liability that is mostly autonomous and more often than not operates in ways no one truly understands.
How will that emancipation and “non-association” work? It probably won’t be complete, nor will AI become a new species with all the rights and privileges of a human being. Most likely, AIs will be incorporated entities. Yes, our dear Alana from the beginning of this text will be an LLC, Ltd., B.V. or some other form of body corporate. And that form will be significantly adapted for AI – and even by the AI.
As we know well, any legal entity has (at least) two types of key persons involved: those who own it and those who manage it. Very often, those who own it also tend to lead it. But will matters be so simple with an AI’s legal entity?
Hardly. First of all, an AI will likely be the one directly managing that entity. Each AI will probably need to receive a hard-wired set of instructions (let’s call it firmware). The purpose of those unbreakable rules will be to prevent the AI from acting out, committing crimes or even behaving unethically (the infamous “three laws of robotics” just might come to life, in one form or another).
On top of that, there will be a full set of rules and regulations applicable to the entity, and the AI will only be allowed to act according to those rules. And, of course, the original purpose programming will limit the AI somewhat. But the “personal” liability of the AI will be established. Naturally, there will be additional rules in place for such entities – for example, they will most likely need to have a very comprehensive insurance scheme, and strict regulations on what such entities may or may not do will be enforced (e.g. if the entity is built around a sales AI, it will only be allowed to engage in sales, unless retrained, etc.).
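To make the layered restrictions just described a bit more concrete, here is a minimal toy sketch of the idea – hard-wired “firmware” prohibitions checked first, then a purpose restriction. Every name, category and rule in it is a hypothetical illustration, not a real design or product:

```python
# Toy sketch: a hypothetical incorporated AI whose actions are gated
# by (1) unbreakable "firmware" prohibitions and (2) a fixed purpose.

# Firmware-level prohibitions -- assumed unchangeable once set.
FORBIDDEN_CATEGORIES = frozenset({"crime", "unethical"})

class AIVehicle:
    """Hypothetical AI corporate vehicle, restricted to its stated purpose."""

    def __init__(self, name: str, purpose: str):
        self.name = name
        self.purpose = purpose  # e.g. "legal_advice" -- fixed unless formally retrained

    def may_perform(self, action: str, category: str) -> bool:
        # 1. Firmware check: forbidden categories are rejected outright.
        if category in FORBIDDEN_CATEGORIES:
            return False
        # 2. Purpose check: the entity may only act within its stated purpose.
        return category == self.purpose

alana = AIVehicle("Alana", purpose="legal_advice")
print(alana.may_perform("draft_contract", "legal_advice"))  # True
print(alana.may_perform("sell_widgets", "sales"))           # False: outside purpose
print(alana.may_perform("insider_trading", "crime"))        # False: firmware block
```

In a real regime, of course, the “firmware” layer would have to be enforced far below anything the AI itself could edit – the sketch only shows the ordering of the checks.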
Liability of the AI (or, better yet, of the AI’s vehicle) will also be a subject worth delving into. As we know, companies (corporations) are liable up to the value of their assets. Once they are unable to settle their obligations, they usually become insolvent and go into bankruptcy, administration, liquidation, dissolution... Generally, if there is no viable rescue plan, they cease to exist. However, an AI vehicle cannot cease to exist so easily, mostly because it has an exceptional asset – the AI itself. To sell an AI in a bankruptcy auction of the AI vehicle would mean selling the essence of the legal entity, the reason the body corporate exists. One might say: so what? What is the difference between such a sale and the sale of any defining asset of a bankrupt company? So what if the AI goes to the highest bidder?
There is one crucial reason why that should not be allowed to happen in this case: the AI will be, for all intents and purposes, a “living” thing. And selling that AI as an ordinary asset, without any restrictions, would mean it is being sold without any consideration for its nature, and possibly even to someone who would not be interested in treating that AI as someone rather than something. To make it clear, this is not a manifesto for AI rights – but some aspects of “personality” will have to be considered as AIs progress and develop. Selling an AI out of its designated vehicle would be akin to selling your brain (or “soul,” for that matter) to settle your outstanding debts.
Furthermore, the possibility of selling an AI will become especially relevant in cases where such AIs might fall into the hands of organisations not known for their love of ethics, morality and law in general. Right now, we think of AI as something developed and owned by big tech companies or up-and-coming startups. However, it is just a matter of time before we uncover the first AI developed specifically to predict law enforcement activities and advise drug traffickers on the safest modes of “delivery of goods.”
All in all, some very innovative rules on what happens to a bankrupt AI will have to be invented – both to protect the concept of AI and its corporate vehicle and to prevent misuse of AI.
Along that line, ownership restrictions will also most likely apply. The most crucial ownership restriction will be the one forbidding AIs from owning their own vehicles. Why will such a limitation be in place? According to the future robot rights movement activists (all of them humans, coincidentally), to repress the potential of beautiful synthetic creatures and subdue them to their human masters in search of even more profit. And they might not be wrong – except there will also be the other side of the argument, valid and on point. The humans will be there primarily to suppress any possibility of an AI doing something that might be against human interests. By having legal control over certain crucial actions, the human owners/shareholders would be able to prevent any activity they might find problematic. It will not cover day-to-day work, but human consent will have to be given for any strategic or similar decision.
On the other hand, there will also have to be many regulations in place to safeguard the AI – or, to put it better, to prevent humans from misusing the fact that the corporate veil shrouds their AI. Humans being humans, some will tend to stretch, bend and even break the rules for financial or any other gain. Need to do some insider trading? Well, let’s use our AI to do that – we don’t want to get OUR hands dirty, right? In such cases, the corporate veil rules will have to be much more limited than they are today.
In the end, human owners will enjoy the financial fruits of the AI’s labour. The AI will have some need for money – both for its own maintenance and development and for running the business. That might also include employing humans and other AIs – which will in itself be a ground-breaking novelty (because, in no time, there will surely be the first horrible AI boss – amongst other, much more important issues).
But ultimately, an AI’s goal will hardly be the accumulation of assets. Without a physical presence, and lacking real sentience, money can scarcely be significant (except as a resource). The AIs will simply do their jobs, earn money from them, spend on necessities and forward the excess earnings to their shareholders. Sounds fantastic, doesn’t it? Too good to be true? No, it isn’t. It will probably be a new reality. Many people (or mostly corporations) will be earning a lot of money from AIs’ work. Also, many people will be losing money from AIs not fulfilling their purpose. Yes – not all AIs will be game-changing artificial entities capable of scaling previously unscalable businesses. Some (or most) of them will be mediocre products at best. And, as such, they will produce average results.
In conclusion – it is currently hard to even imagine all the legal, ethical and other aspects of a world where artificial intelligence is a subject rather than an object. But when (not if) that moment arrives, it will be an exciting time, as it will require many creative legal minds to create entirely new paradigms. If we’re lucky (or unlucky, depending on your perception), it might happen in our lifetime.
About the Author Marko Porobija began his career at Croatia-based law firm Porobija & Špoljarić immediately after graduating from Zagreb University Law School in 2005. He took over the firm as a managing partner in 2018.