Unpacking Liability When AI Makes A Faulty Decision


In an era increasingly defined by AI, more and more cases of AI-related harm will continue to come to the fore, raising a pressing question: Who bears the liability when an AI delivers a biased, discriminatory, or otherwise harmful outcome or output?

Our Chief Product Officer and Head of Geopolitical Risk, Dr Megha Kumar, wrote about this in Law360.

Read the article on the Law360 website here:

AI's Promise and Pitfalls

Artificial intelligence is being widely adopted across industries. There is no doubt this technology will prove transformational: It promises to drive economic efficiencies, democratize access to knowledge, improve security and create new pathways to ecological sustainability.

However, the same technology can trigger negative disruptions in labor and consumer markets, the information ecosystem, and the security domain, and could also exacerbate the climate crisis.

As AI systems become more autonomous and influential in decision-making, concerns about AI-related harms and problematic decisions are growing.

Take the case of A.F., on behalf of J.F., and A.R., on behalf of B.R. v. Character Technologies Inc., in the U.S. District Court for the Eastern District of Texas, where the parents of two children from Texas have sued AI chatbot company Character.AI for allegedly providing sexual content to their children, aged 11 and 9, and encouraging self-harm and violence.

The lawsuit, filed in December by the Social Media Victims Law Center and the Tech Justice Law Project, also named the company's two founders and Google owner Alphabet Inc., which incubated the technology.

The family claims that Character Technologies violated multiple U.S. laws, including the Children's Online Privacy Protection Act, in how it processed data about minors under 13, as well as laws that forbid showing age-inappropriate sexual content to minors. The claims also allege that Google violated the Texas Deceptive Trade Practices Act on product safety, communication on harmful effects, and product liability.

The lack of a federal AI liability framework will impede the ability of negatively affected consumers to seek legal redress, leaving them and the vendors of AI-powered services to navigate a complex patchwork of inconsistent and incomplete state-level laws.


Where AI Liability Comes From

To truly understand AI and who may be liable for its outcomes, we need to delve into its capabilities and limits.

Since the November 2022 launch of OpenAI's ChatGPT, generative AI technology and its use cases have evolved rapidly. Generative AI models are systems trained on large datasets of natural language, proprietary data, images, video and sound, and they can generate new outputs such as text, images, audio, software code and video. These systems are being widely used for tasks such as analyzing commercial data, generating marketing or sales content, and creating customer-facing chatbots.
 
An AI model is only as good as the data it is trained on and the quality of its algorithms. If the dataset is compromised, the output generated by the AI tool will necessarily be of limited utility or may even cause harm. If the training data is biased or incomplete, and the safety controls are not rigorously and continuously tested, an AI tool could deliver problematic outcomes.

Conversely, overinclusion of data, without appropriate filtering and curation of the datasets fed into AI models, carries the risk of copyright breaches, leakage of personal or sensitive information, and biased decision-making.

One major issue is the lack of transparency in AI decision-making processes, often referred to as the "black box" problem. Many vendors of the most sophisticated AI tools do not fully understand what exact chain of reasoning an AI system uses to parse through the dataset and reach its final conclusion — the vendors in many cases only understand how the method is supposed to work in principle.

With this lack of understanding of the AI decision-making process, some AI tools will generate problematic outcomes without the vendor or the party affected by the decision being aware of it. This raises the question of liability.

For example, in December, The Guardian reported that an AI-powered tenant screening tool called SafeRent, used by a U.S. letting company, assigned a score to a woman from an ethnic minority and on that basis recommended that her tenancy application be denied. The 11-page scoring report she received reportedly did not explain why or how the AI tool reached that score, and she was given no options for redress. Some 400 other Black and Hispanic applicants had had similar experiences.

This is only the latest example. There are numerous other cases in which marginalized groups have been negatively affected by algorithmic bias: women whose resumes are put through recruitment AI, non-white people screened by AI-powered facial recognition tools, and defendants whose reoffending risk is evaluated by AI models in judicial settings. If AI tools are not built and operated with quality data and robust governance protocols, such negative effects will proliferate and lead to multiple AI-related lawsuits by aggrieved parties.

Another recent example of overinclusion of data is Advance Local Media LLC v. Cohere Inc., filed in the U.S. District Court for the Southern District of New York in February. In this case, The Atlantic, Politico, Vox and other major publishers are suing AI start-up Cohere for copyright and trademark infringement. According to the lawsuit, Cohere improperly used at least 4,000 copyrighted works to train its large language models.

The publishers claim that Cohere's technology copies and displays entire articles without consent and evades paywalls, undermining their business models that depend on subscription and advertising revenue. Such lawsuits will proliferate.

In a parallel situation, U.S. courts have often sided with publishers that sued social media platforms that featured their journalistic content without consent.

 

Liability Lies With the Operator

To date, no AI model has been considered to have "legal personality," which means that liability for an AI-related activity will generally lie with the operator or user of the technology.
 
It is similar to a delivery company's truck injuring someone: the driver will be held liable, not the truck itself or its manufacturer, unless it is proven that the incident was caused by the design and manufacture of the truck rather than by its operation.

Should the use of an AI tool result in a problematic outcome that creates liability — such as inaccurate, biased or inconsistent decision-making; infringement of intellectual property rights; or a cyberbreach caused by an AI tool — the operator of the AI tool will likely be liable or held responsible under existing law. This is similar to how employers can be liable for the expected or instructed activities of their employees.

To address these challenges, the European Commission proposed the AI Liability Directive in September 2022. The directive aimed to provide a clear framework under which anyone harmed by AI technology could claim compensation.

However, after more than two years on the table, the European Commission withdrew the directive after member states failed to reach agreement, justifying the withdrawal on the grounds that there was "no foreseeable agreement." The move came in the wake of February's AI Action Summit in Paris, where U.S. Vice President JD Vance criticized the EU's regulatory approach to tech.

The withdrawal has substantial implications for the AI legal landscape. While some celebrated the decision, arguing that the directive was unnecessary and would have imposed an additional regulatory burden on AI providers, others viewed it as a setback for consumer protection and accountability.


The Way Forward Is Perilous

Having repealed former President Joe Biden's executive order on the safe, secure, and trustworthy development and use of AI, President Donald Trump issued his own executive order in January, giving federal agencies and other relevant bodies until mid-July 2025 to recommend an AI action plan. Issues such as data protection, intellectual property and algorithmic accountability will feature in that plan.

However, the White House's focus on removing barriers to AI innovation — coupled with the lack of a federal U.S. data privacy law, proliferation of state-level AI and data laws, and Trump's prickly relationship with federal regulators — does not bode well for the establishment of a streamlined framework for protecting people from AI-related harms.

Here, the contrast with the EU is notable. For example, in an important judgment in a case against Dun & Bradstreet Austria on Feb. 27, the Court of Justice of the European Union ruled that, in automated decision-making for credit checks, the operator or controller must describe the procedure and principles applied in such a way that data subjects can understand which of their personal data has been used, and how, and can challenge the decision.

Importantly, the court ruled that failure to do so constitutes a violation of rights enshrined in the EU's General Data Protection Regulation.

In the absence of federal frameworks in the U.S., complainants will have no choice but to seek redress under patchy state-level frameworks. This can work, but it carries its own problems. For example, in July 2022, in Bauserman v. State of Michigan Unemployment Insurance Agency, the Michigan Supreme Court ruled against the state government's use of a problematic automated tool to detect fraudulent unemployment claims. The tool's use between 2013 and 2017 led at least 11,000 families in Michigan to file for bankruptcy.

The state passed a law in 2017 to ensure such fraud claims are processed by humans, and agreed to pay $20 million in damages. Is this a success story? Those affected maintain that the damage caused to them by the long judicial battle is beyond repair.

As the use of AI proliferates, and the Trump administration rescinds even the basic AI controls that Biden proposed, the risk of algorithmic harm will rise, making courts increasingly important for consumers and AI vendors.

 

[Photo by Neeqolah Creative Works on Unsplash] 

We Can Help

CyXcel’s AI and legal specialists help organizations navigate the challenges presented by AI adoption, ensuring that their AI governance, risk mitigation and data protection processes are robust.

For more information, or to speak with one of our team about how we can help your business, contact us today.