ARTICLE
Will AI Transform Dispute Resolution As We Know It? Innovation and disruption in law
Will advancements in technology eventually eliminate formal dispute resolution proceedings? What will be the impact of AI on legal practice and ethics?
At Practical Law, we have a deep interest in legal technology and its impact on how lawyers practice law. Recently, I had an opportunity to discuss the role of AI in dispute resolution in an online conversation with Mark Beer OBE and Alessandro Rollo, an associate at Omnia Strategy LLP.
AI as a risk management tool
It seems clear that AI will allow lawyers and businesses to make better informed and more accurate decisions around risk and disputes in the future. In fact, there are already many ways in which AI can be harnessed effectively by legal departments, such as in e-discovery, contract review and legal research.
The power, efficiency and accuracy of well-calibrated AI create tantalising possibilities in the field of dispute resolution. Mark proposes a future where AI’s predictive abilities eliminate most formal disputes. For example, software could review a customer’s communications and payment patterns to predict their likelihood of defaulting on payment. Or software could determine the likelihood of a claim’s success and the chances of recovery, well before the parties ever appear in court. As Mark says:
If a party knows that it’s 100% going to win, and the other party knows it’s got a 0% chance of winning, then settlement is easy.
Enabling better access to justice
Having tools to help avoid disputes and assess the strength of one’s claims more effectively could reduce court backlogs and lead to greater access to justice. Indeed, broadening access to justice could be AI’s greatest positive impact on dispute resolution.
According to Mark, at the moment 84% of people don’t go to see lawyers, because of a lack of trust in lawyers and high legal fees. He sees great opportunity for AI to reduce backlogs in tribunals clogged with routine matters, such as family, employment, landlord-tenant and property disputes, and petty crime.
As Mark observed, these five practice areas account for 60% of all disputes. Getting more of these matters heard and disposed of more efficiently could dramatically improve the ability of regular people to “have their day in court”.
Mark is optimistic that we will see more and more of these tools becoming commercially available soon. The necessary technology has been around for a while. The difference now is that more organisations are willing to make available masses of their unstructured data to train the algorithms, which should allow for the creation of better, more refined AI tools.
The impact of AI on legal ethics and practice
AI has already impacted, and will continue to impact, the working world in profound ways. Mark jokes that lawyers don’t need to worry about losing their jobs—according to willrobotstakemyjob.com, there is only a 4% likelihood that robots will replace lawyers. (The same site estimates, however, that there is a 40% chance that judges will be replaced by robots.)
Alessandro welcomes greater utilisation of AI in the legal profession. He thinks AI will create legal jobs, and not just eliminate them, as is often feared. People with new skillsets will be required to leverage AI tools to their maximum capability. At the same time, Alessandro cautions that care will need to be taken to ensure AI tools are available to all parties on an equitable basis.
Mark asks whether AI tools could progress to a point where they become so efficient and proficient that it will be unethical for lawyers not to use them. Today’s professional ethics standards do not necessarily mandate the use of technology when practising law, but where the use of AI tools would result in significant time and cost savings to the client, the ethical argument in favour of using AI becomes more persuasive.
Alessandro suggests we are already at a place where the use of legal technology does have an ethical dimension, citing the American Bar Association’s rule 1.1 on competence, which says: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology…”.
Alessandro also mentions that norms are starting to develop around requiring informed consent when AI is used in dispute resolution. As the ethical framework around the use of AI in dispute resolution develops, in Alessandro’s view, market forces will squeeze out inefficient and uncompetitive firms that do not keep pace with technological developments.
Looking forward
Mark likes to cite experiment results presented in 2016 by researchers at Georgia Tech. Test subjects entered a smoke-filled room and, instead of heading for doors clearly marked with exit signs, decided to follow the directions of an erratic “emergency robot” that led test subjects in the wrong direction.
Mark’s point is that trust in technology, especially among younger generations, is high, even where that trust may not be warranted. Today’s younger generations may not be put off by the prospect of appearing in court before a computerised decision-maker. They may be more willing to assume that computers are always reliable and correct.
But will they be fairer? Concerns about bias in AI are shared by Mark and Alessandro. However, if AI can be trained in a manner that is unbiased, or that at least minimises bias, might we then be in a better position to avoid the biases, often unconscious, that are present in human decision makers?
Mark cites a study of an Israeli parole board where the likelihood of decisions favourable to the parolee dropped dramatically before the judges’ two daily food breaks, and before the end of the day. Were the harsher decisions due to the judges’ low blood sugar levels? That is one vulnerability AI would not share.
In our conversation, none of us professed to know exactly how things will turn out. However, it seems clear that, as the potential applications of AI in dispute resolution increase, we will constantly have to ask ourselves: Is this just? Is this fair? This should be a good thing.
Understanding the limits of AI can help us better understand what we want and need from our legal systems. Along the way we are certain to learn a great deal about our own limitations too.
Click here to listen to my conversation with Mark and Alessandro. Plus, to explore the array of practice area modules on Practical Law, request a demonstration at Thomson Reuters.
Bryan is the Practical Law Hong Kong Lead. Prior to joining Thomson Reuters, he practiced commercial and administrative law in Canada and worked as an in-house lawyer at a listed group in Hong Kong. He is a qualified lawyer in British Columbia, New York and Hong Kong.