Estonia, the Baltic nation and former Soviet republic, has long been at the forefront of technology. In many respects, its digital government is far more advanced than that of the U.S.
But a recent program to introduce Artificial Intelligence (AI) into various government ministries pushes boundaries.
Estonia’s chief data officer is a 28-year-old graduate student named Ott Velsberg, who is tasked with bringing AI into the government functions of a nation of 1.3 million people.
Specifically, Velsberg and his team are tasked with creating a “robot judge” to adjudicate small claims cases, in the hope of clearing a backlog of minor disputes.
The theory is that parties to a lawsuit will upload documents and other relevant information, and the AI will issue a decision that can be appealed to a human judge. This will likely begin as a pilot program and be tweaked based on feedback from parties and lawyers.
Who Programs the AI Judge?
Similar to the questions raised here about autonomous vehicles, there are ethical questions posed. Who programs the AI and what value judgments, prejudices, and experiences are factored in?
A case in point is the death penalty. Scads of research indicate that Black and brown-skinned defendants are considered for and given the death penalty at far higher rates than white defendants. Would an artificial judge try to even out this disparity, or would it simply not consider race as a factor?
What about the case of the well-meaning but inexperienced roofer who installed a new roof on a multi-million dollar home, and the roof leaked? A human judge might unintentionally weigh the parties’ relative bargaining power, the ability to pay a judgment, and the plaintiff’s ability to absorb the damage and loss. Who would program the AI judge in such a case, and what factors would it consider? Do we want judges who exhibit absolutely no empathy? Or is that exactly the point: no empathy, but also no bias? Is it even possible to program a judge with no point of view or “common sense”?
The question comes down to whether you, the litigant, want a fallible human deciding your fate or an algorithm created by a fallible human. Interesting question. I’m not sure there is an answer.
AI Elsewhere in the Law
In the U.S., some states already use algorithms to help recommend criminal sentences. In the United Kingdom, a service known as “DoNotPay” helped overturn 160,000 parking tickets in London and New York a few years ago. (That said, with regard to AI in business and government generally, the European Commission recently called for the creation of ethical guidelines to prevent abuse and unscrupulous use of AI.)
An Estonian law firm in the capital city of Tallinn, Eesti Õigusbüroo, provides free legal aid via a chatbot and generates simple legal documents to send to collection agencies. Think LegalZoom, but run by a law firm.
Can It Work In Estonia? What About Here?
In Estonia, AI already helps move things along. Inspectors no longer have to visit every farmer receiving subsidies; instead, satellite images are fed into an algorithm that overlays them on a map of fields where farmers receive subsidies. The government can tell who is actually growing crops in their fields and who is not, and thus who is entitled to subsidies.
Estonia already issues a national ID card to its citizens, who can use it to access a menu of government services such as tax filing and electronic voting. The U.S., needless to say, is far behind in this kind of technological uniformity.
Americans also tend to be protective of their privacy and reluctant to trust governmental programs to which they cede some of that autonomy.
Not So Fast Here!
With all the furor over collection of data by Apple, Google, and just about every other entity, it seems hard to get to the point where Americans would be comfortable with more government intrusion and monitoring.
Without that leap of faith, it is a stretch to imagine the Estonian model ever working here.
And yet there is a speed of resolution that might appeal to a culture focused on immediate gratification.
Then there is the ethical issue raised above: do you trust the people programming the AI judge?
Time will tell whether AI moves into the U.S. legal system anytime soon.
Contact Chicago Attorney Stephen Hoffman
If you’ve been in an accident and have questions, contact Chicago personal injury attorney Stephen L. Hoffman for a free consultation at (773) 944-9737. Stephen has nearly 30 years of legal experience and has collected millions of dollars for his clients. He is listed as a Super Lawyer, has a 10.0 rating on Avvo, and is BBB A+ accredited. He is also an Executive Level Member of the Lincoln Square Ravenswood Chamber of Commerce.
Stephen handles personal injury and workers’ compensation claims on a contingency fee basis, which means you pay nothing upfront and he only gets paid if you do. Don’t wait another day. Contact Stephen now.