Mobilising the power of what could go right to evolve legal ecosystems

By Valérie M. Saintot.

We live in a world of cognitive and sensory overstimulation. We tend to oversimplify reality to cope with life's complexity. The latest wave of contradictory information about generative artificial intelligence (GAI) has brought confusion to a peak. Thought leaders pull in opposite directions when predicting risks and promoting opportunities. The stated intentions of the creators of and investors in GAI appear misaligned with their actual actions. The race to bring underdeveloped AI-based products to market and capture a large share of it adds to a blurry picture. Citizens can hardly be expected to catch up and take part in shaping the public space while being directly and structurally affected. Authoritative voices without proven credentials broadcast volatile opinions that appear to evolve by the day.

What choices are we left with in such a context? Fear and freeze? Be seduced and overreact? Rationalise and put judgment on hold? Pressing the pause button to take a step back and reflect has perhaps never been as demanding as now. The world is loud. Cultivating the ability to detect reliable signals and to sustain attention on our needs and objectives without useless distraction is crucial. It is a competence we need to strengthen significantly and speedily over the next years. The complexity of the phenomena at play will grow exponentially, even more so in a world in phygital expansion. Giving up before having even tried to create the space for dialogue does not appear to be a good idea. Hard work is in sight, and it is not for the fainthearted.

Technology is rapidly moving from being a productivity booster to affecting us at the subconscious level. People did not keep their hammers and screwdrivers next to their beds when those tools were first invented. Today, we sleep with multiple devices and screens in our bedrooms. This is possibly the starting point for realising how much we have become one with digital technologies. We are already augmented beings.

The present piece is written from the perspective of activating our critical discernment. It is not a collection of miraculous tips and tricks. It is a humble, possibly clumsy, yet authentic attempt to self-reflect together. It intends to mobilise the power of what could go right.

1. Using critical thinking to prompt engineer ourselves, not just the bots

Are lawyers immune to the above questions? Should lawyers and their ecosystems invest in developing a renewed interest in ethical dilemmas and be actors of sense-making? For several decades, the legal profession was left to its own slow-paced evolution, a sort of illusion of immunity to the triviality of day-to-day worldly matters. A slow pace can have many merits, bringing more depth and time to think and consider change. That acknowledged, it can become a hurdle when it stands in the way of democratic access to justice and when peace can no longer be taken for granted.

When too many weaknesses start to undermine the democratic building blocks of our judicial systems, it is more than time to act. Legal costs may be unaffordable. The slow pace of procedures can be discouraging for citizens. Human cognitive biases are known to twist judicial decision-making. The goal is not to strive for perfection but for an acceptable level of imperfection.

Lawyers enjoy considerable standing in society, which gives them access to a regulated profession and makes their services unavoidable on many private and professional occasions. In turn, citizens should expect legal professionals to feel compelled to play an active role in shaping the societal and institutional mechanisms that form the very structures of our societies.

We can cultivate a legitimate expectation that the legal professionals must upgrade their game technologically to be best placed to influence the very functioning of democracy.

This piece is not about convincing readers that it is a good idea for legal professionals to become digitally literate and to be effective and efficient leaders who care for human factors, including mental well-being. We have passed that point by now, and the contributions of thought leaders like Richard Susskind or Mark Cohen are globally acclaimed. It would be a waste of precious time, energy, and intelligence to even start disputing these working assumptions.

The goal is to self-reflect and come to new levels of understanding. Thinking critically involves remembering, understanding, applying, analysing, evaluating, and creating (see Bloom's taxonomy) new approaches to evolve the mindsets we generated yesterday. The thinking process needs to be extensive to help us acquire a new level of consciousness. When we think critically, we can design and shape futureproof prototypes. Prompt engineering, used to interact with GAI, is a perfect pretext to train the advanced art of thinking. As much as we are learning to prompt GAI, we should even more intensely learn to prompt ourselves to tap our critical thinking capabilities.

Being born into the category called ‘human being’ and being a member of an evolutionary species may not be enough to live up to our full human potential. How can we really be serious about the challenges at hand if we are not honing the extraordinary human potential we could access? The old philosophical debate on the impact of nature (innate skills) and culture (acquired skills) on people and society has never been as vivid as in the 21st century.

How can we humbly realise that when we point one accusatory finger to AI weaknesses, we point three accusatory fingers to our own limitations?

We commonly criticise the algorithmic black box, but are we not an even bigger black box to ourselves? We focus on the hallucinations of the bots connected to large language models parroting human discourse and creativity, but what about the multiple human hallucinations, past and present, that we might parrot ad nauseam?

If we cumulate black boxes and hallucinations, it is obvious that we will not aptly contain the risks of GAI, still less tap the many opportunities it brings. Developing GAI can be an unprecedented way to develop ourselves, as it holds a revealing mirror up to our face and appears to be an integral part of our evolution. Large language models have figured out how to decouple knowledge, context, and form. This should speak to the heart of lawyers and their syllogistic thinking mindset. Lawyers have all it takes to be front runners in the societal conversation on GAI if they want it and go for it. They need to stay viscerally attached to seeking truth.

2. Seeing the living dimensions in the technological and AI arenas

What are the long-term goals pursued? Are the goals set big enough? Are the goals capable of making a structural and sustainable difference?

How similar or different are the goals for legal professionals versus for society as a whole? Can the goals of the legal sector merely pursue the effectiveness and efficiency of our judicial systems and the well-being of legal professionals? Or are we up to something much bigger and more significant? Is the project not as big as preserving life on planet Earth in the absence - until further notice - of a planet B… also for lawyers?

Can we rethink the legal profession without intertwining the redesign project with three societal transformations: well-being, sustainability, and digital technologies? Can we really expect to find ethical ways to regulate GAI and mitigate risks if we do not engage deeply with life-centric and regenerative design principles? How can we seriously claim to design regulation(s) that will respect humanity if we have still not outgrown the mindset of seeing lawyers as commodities producing billable hours in a breathless fast-forward system?

Lawyers, organisational ethicists, and compliance professionals are central actors in the transition to more sustainable business models, green finance, and other normative conversations aiming to rethink our impact on the Earth. If we do not care for the living dimensions in us as persons, if we do not care for evolutionary biology, how are we going to be the voices and agents of sustainability needs in investment portfolios or in companies’ balance sheets and cultures? Human beings are agents of proximity. Whatever is far from their eyes is far from their hearts. What is far from the heart will not count in the design and decision-making process. How can anyone deal with today’s challenges without caring for their embodiment and living presence on the planet?

Legal professionals, as knowledge workers, must not only know about the sustainability goals and standards their clients must comply with (ESG and SDG) but also experience them in their own skin. Anything aiming below this ambition is self-deception, window dressing, or greenwashing. The new set of goals entitled Inner Development Goals, published in April 2022 and endorsed last autumn by the European Parliament, was a key missing piece.

We cannot see the transformation for a more just and sustainable world as something happening outside us. We are nature. We need to factor in our own inner developmental transition towards being ever more humane, acting ever more respectful of life.

Making life-centricity a top design principle in whatever we touch, including evolving legal ecosystems, can empower us as legal professionals to impact the systems with meaningful intention, sustained attention, and desirable results.

3. Amplifying life-long microlearning to sediment technological skills over time

In the West, the common learning model is to attend some twelve years of primary and secondary school, then add a few more years of professional or academic training. It is not common to pick up full academic qualifications later in life, nor to pursue a fully fledged professional requalification.

How does this square with the newest technological revolution brought by artificial intelligence, in which knowledge workers are more affected than manual workers in having to redefine their added value and ways of working? This is a first in our short human history.

Legal professionals are required to proceed with a serious upgrade of their belief systems, mindsets, and skill mix. New clusters of literacies are needed, possibly implying serious upskilling. There is a rich literature on the concept of T-shaped legal professionals; no need to expand on it here. Legal education now needs to be envisaged from, at least, a triple perspective. The first is training as a lawyer at the law faculty and during the first years at the bar. The second is lifelong learning during the decades of practice, most likely through ongoing microlearning. The third is teaching the basics of legal knowledge to non-lawyers, such as computer scientists developing AI solutions for lawyers.

Obviously, AI literacy needs to become part of legal professionals’ learning paths and be tailored to various roles (judge, litigator, advisor, paralegal, legal technologist). One of the trending new competencies is prompt engineering for legal professionals. It is an obvious and rather easy one to acquire. As GAI performance keeps improving, the importance of prompt engineering is likely to decrease. That said, digital skills are many. The more you learn and practise, the more the sedimentation makes you agile in navigating the ever-changing set of options available. But if you do not train or use the technology on an ongoing basis, you do not learn progressively and seamlessly, and a gap grows that becomes daunting to close over time.

Another important learning dimension to mitigate the risk of AI becoming a threat is learning to be human: what it means to think, to feel, to relate to life, to people, to nature. It matters to have workshops teaching a handful of neuropsychology and biology principles. We need to appreciate how unique the human brain, body, and mind are. Workshops on the natural sciences would also be transformative. Understanding the universe, where we come from, and where we are possibly heading are not superfluous luxuries for salon discussions. They directly shape the values we see fit to consider. They help us see the relation to the machine as augmentation, not substitution. It would also make sense to be exposed, even if only for a couple of hours, to the philosophy of technology to equip lawyers with an inquisitive mindset.

The purpose is not to make lawyers jacks of all trades but to give them a deeper and more incisive way of questioning technology and AI from an ethical and epistemological viewpoint. Other skills, such as the ability to manage projects, design strategies, collaborate, and mobilise collective intelligence, obviously need to be part of the necessary additional toolboxes. These are human software skills and can be of great use. In a nutshell, what matters is a serious upgrade of our human operating systems in line with our evolutionary nature.

In addition, lawyers tend to lock themselves into the kingdom of words and linear thinking. Large language models used for generative AI purposes are multimodal: numerical data, visuals, graphs, audio, video, and more. Helping lawyers become multimodal through legal design thinking and legal knowledge visualisation is the natural next step. Legal design augments the individual and collective analytical thinking potential of legal professionals. In combination with plain language, it improves the way citizens understand their rights and obligations.

There is no contradiction in terms between innovation and ethics. We humans tend to rely on shortcuts to get by in our lives. Commonly, ethics is perceived as bureaucratic while innovation is promoted as good. This is a little too simplistic for a constructive conversation. Ethical aspects are a form of healthy constraint. Innovation performed for its own sake is not necessarily helpful. Bottom line: we should help solve real problems, not invent new ones just because we have created a possible solution. The mindset you have towards ethics and innovation defines how you will see the way to team them up. If ethics is considered part of a technological project from the start, it can encourage innovation by asking ‘How should we act in the world?’.

4. ‘Fast alone, far together’, embracing the power of communities of practice

The antidote to a world that moves ever faster and is becoming less personal is heartfelt human relations. We need to deselect sources of distraction and tune in to the sources of sense-making. We can delegate ancillary tasks to the machine so that we have more time for person-to-person quality interactions and can tackle what is transformational rather than transactional. I will mention three communities that I wholeheartedly recommend when it comes to legal, tech, AI, and ethics.

The first community is the Liquid Legal Institute e.V., a global community of legal professionals. Its purpose is to help evolve legal ecosystems with an emphasis on legal technologies and AI. It has a very modern way of distributed collaboration, collective intelligence activation, and action planning, using Microsoft Teams, GitHub, and web-based collaboration to deliver wonders. Kudos to Dr. Bernhard Waltl, Kai Jacob, Dr. Dierk Schindler, and Dr. Roger Strathausen for being open and walking their talk as the founders of the Institute. Many members - Baltasar, Anita, Jutta, Robert, Carolin, Graciela, and Tati - are wonderful lawyers with whom I have the most stimulating conversations and hope to enjoy them for many more years.

The second amazing community is the people at Z-inspection, under the lead of Prof. Roberto Zicari. They have worked over the years to build a community of practitioners developing criteria and methods to assess the trustworthiness of AI. They are acknowledged by many fora, and their tools are part of the OECD.AI toolbox.

Their work is to be explored and consulted on all matters relating to AI ethics. Karin Tafur is a member of this community, co-author of the first book on legal design in Spanish, and a source of inspiration to follow on social media for AI and the law.

The third community worth closely following is ForHumanity, under the lead of Ryan Carrier. ForHumanity defines its mission as ‘to examine and analyze the downside risks associated with the ubiquitous advance of AI & Automation, to engage in risk mitigation and ensure the optimal outcome’. Rohan Light is an active member from New Zealand. I learned so much interacting with him every month on a co-created learning agenda for more than a year. It was a wonderful system of peer-to-peer learning, an approach that helps identify one’s own blind spots.

Each of us should identify strategies to regularly ‘publish’ upskilled versions of ourselves resulting from our regular human hardware and software updates.

5. Away from transactional improvements towards legal transformation

While this article has said little about hard-core legal knowledge and legal disciplines, they legitimately and obviously remain the core focus for legal professionals. That said, ethics, strategy, culture, operations, procedures, services, and products are likely the hotspots of the digital transformation of legal. The three fundamental domains of well-being, digital and AI, and sustainability bring motion and direction to the overall picture.

That is why we promote the idea that to support peace and democracy and to mobilise the power of what could go right, legal ecosystems need to dream big and far. Symbolically we call the approach ‘The Dove of Legal Transformation’ as we attribute to it a direct role to nurture peace and preserve democracy on a lovable planet.

We should be less concerned about whether AI will eventually access consciousness than about whether we human beings will collectively access a sufficient level of consciousness to be life-centric in our choices. May the fundamental values enshrined in democratic constitutions become embodied realities expressed by biologically intelligent beings before being merely parroted by artificially intelligent bots. We should take care not to do with generative artificial intelligence what we did with nature: believing we are on top of it instead of realising we are one with it. It would take another complete article to discuss how GAI may deserve to be contemplated as a biological evolution. To be continued.


About the Author

Dr. Valérie M. Saintot, LL.M., is a lawyer (since 1994), mindfulness teacher (since 2005), visiting lecturer at Bucerius Law School (since 2021), and adjunct professor (since 2022). She has worked in the private sector and EU public sector. She has been featured as legal design thinking pioneer in the first book in Spanish on this topic (2022). Valérie is an active member of the Liquid Legal Institute and recently contributed to the update of the Legal Digitalization Guide. She has authored articles on the visual navigation of the law and legal knowledge visualization.

She has extensive experience in (legal) knowledge management. She also actively promotes the transformation of legal ecosystems to take advantage of (G)AI and technologies in order to preserve peace and democracy. She promotes mindfulness-based leadership to face with resilience and discernment the many societal transformations under way. She is also a passionate international keynote speaker.