From Theory to Practice: Using Machine Learning to Help Those in Need

David Colarusso, Director of Suffolk’s LIT Lab, collaborated with Stanford University’s Legal Design Lab to create an online game called “Learned Hands” that crowdsources the labeling of laypeople’s legal questions. He describes how Suffolk is now building on that work, using artificial intelligence and pattern recognition to develop a one-of-a-kind issue spotter called “Spot,” which will be made available for free to pro bono, government, and non-profit legal services providers to help address the access to justice crisis in the United States.

Last year, Suffolk’s Legal Innovation & Technology (LIT) Lab, in collaboration with Stanford’s Legal Design Lab, created an online game to crowdsource the labeling of laypeople’s legal questions. The effort aimed to address the dearth of high-quality labeled data for the training of machine learning (ML) models. The game’s name, Learned Hands, nods both to the ethos that many hands make light work and the prominent jurist Learned Hand. [1] After racking up more than 50,000 labels on thousands of texts, the LIT Lab is building on this work to create an online ML-powered issue spotter called Spot. With funding from Pew Charitable Trusts, this issue spotter will be made available free to pro bono, government, and non-profit legal service providers to help address the access to justice (A2J) crisis.

Machine learning, the subdiscipline of AI around which the current hype cycle revolves, is good at pattern recognition. Acquaint it with a sufficiently large number of example items, and it can “learn” to find things “like” those items hiding in the proverbial haystack. To accomplish such feats, however, we have to satisfy the machine’s need for data. Consequently, that appetite is often the limiting factor in deploying an AI solution. The Learned Hands project served as a proof of principle, establishing that a crowd could help produce the data needed to train such models. The Lab’s current work moves from theory to practice, producing public issue-spotting tools for public interest service providers.

Consider two areas where AI’s pattern recognition might have something to offer A2J. A number of services match people with legal questions to lawyers offering pro bono limited representation (think free advice “calls” over email). Unfortunately, some questions go unclaimed. In part, this is because it can be hard to match questions to attorneys with relevant expertise. If I’m a volunteer lawyer with twenty years of health law experience, I probably prefer fielding people’s health law questions while avoiding intellectual property (“IP”) issues.

To get health law questions on my plate and IP questions on someone else’s, a user’s questions need to be (quickly, efficiently, and accurately) labeled and routed to the right folks. Sure, people can do this, but their time and expertise are often better deployed elsewhere, especially if there are lots of questions. Court websites try to match users with the right resources, but it’s hard to search for something when you don’t know what it’s called. After all, you don’t know what you don’t know. Complicating matters further, lawyers don’t use words like everyone else. So it can be hard to match a user’s question with a lawyer’s expertise. Wouldn’t it be great if AI’s knack for pattern recognition could spot areas of law relevant to a person’s needs based on their own words (absent legalese), then direct them to the right guide, tool, template, resource, attorney, or otherwise? That’s what we’re working towards here.
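As a rough illustration of the routing just described, consider the sketch below. The `spotter` callable and the `volunteers` mapping are hypothetical stand-ins, not Spot’s actual interface; the confidence cutoff is likewise a made-up number.

```python
def route_question(question_text, spotter, volunteers):
    """Route a question to the volunteer queue for its most likely issue.

    `spotter` is any callable returning (label, confidence) pairs for a
    question, and `volunteers` maps issue labels to attorney queues; both
    are hypothetical stand-ins, not Spot's actual interface.
    """
    labels = spotter(question_text)
    # Try the most confident labels first.
    for label, confidence in sorted(labels, key=lambda lc: -lc[1]):
        # Only route on labels we have volunteers for and reasonably trust.
        if label in volunteers and confidence >= 0.5:
            return volunteers[label]
    return volunteers.get("general")
```

With a spotter that tags a question “Health” at high confidence, the question lands in the health law queue rather than waiting unclaimed in a general pool.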

I know what you’re thinking, but we are NOT talking about a robot lawyer. When we say “AI,” think augmented intelligence, not artificial intelligence. What we’re talking about is training models to spot patterns, and it’s worth remembering the sage advice of George Box: “All models are wrong, but some are useful.” Consequently, one must always consider two things before deciding to use a model: First, does the model improve on what came before? Second, is it starting a discussion (not ending it)? Unless the data are pristine and the decision is clear-cut, a model can only inform, not make, the decision.

An automated issue spotter has the potential to improve access to justice simply by making it a little easier to find legal resources. It doesn’t need to answer people’s questions. It just needs to point them in the right direction or bring them to the attention of someone in a position to help. It can get the conversation started by making an educated guess about what someone is looking for and jumping over a few mundane—but often intimidating—first steps.

Alternatively, a professional or paraprofessional could be presented with helpful resources based on a client’s reply to intake questions.

The promise of these use cases underlines why Suffolk Law’s LIT Lab will be offering the issue spotter as an online service and downloadable library for use by those working on A2J issues. A private beta is expected in January 2020 with a public release set to follow in late 2020. This will allow a community of public interest users to develop tools such as those imagined above.

The original Learned Hands game is still running, and it continues to produce data for Spot. The game presents players with a selection of laypeople’s questions and asks them to confirm or deny the presence of issues. For example, “Do you see a Health Law issue?” These “votes” are combined to determine whether or not an issue is present. As you can imagine, deciding when you have a final answer is one of the hard parts. After all, if you ask two lawyers for an opinion, you’ll likely get five different answers.

The final answer is decided using statistical assumptions about the breakdown of voters without requiring a fixed number of votes. Effectively, if everyone agrees on the labeling, the final answer can be called with fewer votes than if there is disagreement. Consequently, the utility of the next vote changes based on earlier votes. This fact is used to order the presentation of questions and make sure that the next question someone votes on is the one that’s going to give us the most information or move us closest to finalizing a label. This means players don’t waste their time seeing a bunch of undisputed issues.
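The article doesn’t spell out the underlying statistics, but the intuition (unanimous votes finalize faster than split ones) can be sketched with a simple model: assume each voter is independently correct with some probability and update the posterior odds after every vote. The accuracy and threshold values below are illustrative assumptions, not the game’s actual parameters.

```python
def needs_more_votes(yes, no, accuracy=0.8, threshold=0.95):
    """Sketch of an adaptive stopping rule (illustrative, not Spot's model).

    Assumes each voter is independently correct with probability
    `accuracy` and a 50/50 prior on the issue being present; both
    defaults are made-up numbers for illustration.
    """
    # Posterior odds that the issue is present grow (or shrink)
    # geometrically with the margin between yes and no votes.
    odds = (accuracy / (1 - accuracy)) ** (yes - no)
    p_present = odds / (1 + odds)
    # Keep collecting votes until we're confident either way.
    return (1 - threshold) < p_present < threshold
```

Under these assumptions, three unanimous votes settle a label while a 2–1 split does not, and ordering the queue by how close each question’s posterior sits to the threshold yields the “most informative next vote” behavior described above.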

Players earn points based on how many questions they mark (with longer texts garnering more points). Players are ranked based on the points they’ve earned multiplied by their quality score, which reflects how well their markings agree with the final answers.
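The leaderboard arithmetic just described can be sketched in a few lines; the field names here are hypothetical, not the game’s actual data model.

```python
def rank_players(players):
    # Each player dict carries raw `points` and a `quality` score in [0, 1]
    # reflecting agreement with final answers; the ranking score is their
    # product, per the scheme described above. Field names are made up.
    return sorted(players, key=lambda p: p["points"] * p["quality"], reverse=True)
```

One consequence of multiplying rather than adding: a careful player with fewer points can outrank a prolific but sloppy one.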

That’s right. You can compete against your colleagues for bragging rights as the best issue spotter (while training AI to help address A2J issues). Don’t forget to play Learned Hands during your commute, over lunch, or while waiting in court. To play visit:

Portions of this text were adapted from How an Online Game Can Help AI Address Access to Justice (A2J), originally published on Lawyerist. See


[1] Old English Proverb. John Heywood, The Proverbs and Epigrams of John Heywood (A.D. 1562) 54 (Burt Franklin ed., 1967) (1562); Learned Hand, Wikipedia,


About the Author: David Colarusso is Practitioner in Residence & Director of the Legal Innovation & Technology Lab at Suffolk Law School.