Liability & AI Malfunction: An AI System Developer’s Perspective


AI is rapidly being deployed around the world in many different use cases, with few guidelines for manufacturers, developers, and operators to follow. Along with the complexity of building the technology itself, many legal questions remain unanswered. As the CTO of a startup making privacy technologies accessible and easy to integrate, I have spent a significant amount of time thinking about the intersection of privacy, AI, and liability. More recently, I’ve taken a wider lens and have been deliberating over AI and liability overall.

While many developers and engineers would rather stay as far away from thinking about the law as possible (I was one of them), that attitude might very well come back to bite us. As AI becomes more ubiquitous, we will all be affected by how failures in AI systems are arbitrated, and by who and what is deemed to be at fault for a given failure. I was recently given “Rechtshandbuch Artificial Intelligence und Machine Learning” (in English: “Legal Handbook of Artificial Intelligence and Machine Learning”), an excellent book on the legal implications of AI that covers the matter of liability.


The book, edited by Markus Kaulartz and Tom Braegelmann, provides an overview of one of the most significant legal conundrums our societies are facing: laws must keep up with technology, but in a way that does not hinder innovation. Here we look specifically at Chapter 4.2, written by Maren Wöbbeking, which explores the careful balance that must exist between legal liability and the technologies behind AI. Key to achieving this balance are accurate measurements of the risks associated with autonomous systems and careful attribution of responsibility among system manufacturers, system operators, and bystanders.

Risks of Autonomous Systems

When considering legal liability around autonomous vehicles, for example, one must take into account both (1) the processes for autonomous decision-making and (2) the traceability and explainability of those decisions. Why proper decision-making traceability is essential for determining liability becomes particularly obvious when we consider the disastrous effects that human-falsified input data might have on an outcome.

Wöbbeking quite rightly points out the implications of human interactions with autonomous systems when it comes to determining liability. The majority of autonomous systems still rely heavily on supervised or semi-supervised learning, meaning that one or more humans have to tell the system which inputs correspond to which outputs during the training phase. Meanwhile, a growing body of literature in adversarial machine learning is dedicated to the many ways in which inputs to an AI system, during both training and inference, can lead to completely unexpected outputs and outcomes.
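To make the adversarial-input concern concrete, here is a minimal sketch of a fast-gradient-sign-style (FGSM) perturbation, one of the simplest attacks in that literature. The PyTorch model and data below are placeholder stand-ins chosen for illustration; they are not taken from the book or from any particular system.

```python
# Minimal sketch of a fast-gradient-sign (FGSM-style) input perturbation.
# The classifier and inputs are toy placeholders for illustration only.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x nudged in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # A tiny, structured change to the input can flip the model's decision
    # even though the perturbed input looks essentially unchanged to a human.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage with a placeholder linear "classifier" over 8 features:
model = nn.Linear(8, 2)
x = torch.randn(1, 8)   # stands in for a sensor reading or image
y = torch.tensor([1])   # the true label
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # may now disagree
```

Even a perturbation this small, invisible to a human observer, complicates the question of whether a failure should be attributed to the training data, the model, or the party who crafted the input.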

How can one pinpoint failure in such a system?

Broadly speaking, end-to-end approaches using a single, large network yield the best results. But think of a complex system such as an autonomous car with multiple sensory inputs, such as LIDAR and cameras: what would happen if the camera calibration were to drift? For this reason, it might be a better idea to build separate networks with clearly defined inputs and outputs, yet doing so might reduce accuracy, increasing the risk of failure.
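As a toy illustration of that trade-off, the sketch below shows a modular pipeline with explicit, checkable stage boundaries rather than a single end-to-end network. The stage names, thresholds, and sanity checks are purely hypothetical.

```python
# Hypothetical modular pipeline: each stage has a documented contract and a
# sanity check, so a failure (e.g., camera calibration drift) can be caught
# and attributed at a stage boundary instead of vanishing inside one network.

def perception(camera_frame, lidar_scan):
    # Assumed check: refuse to emit detections if calibration has drifted.
    if abs(camera_frame["calibration_offset_px"]) > 2.0:
        raise ValueError("camera calibration drift exceeds tolerance")
    return {"obstacle_distance_m": min(lidar_scan["nearest_m"], 100.0)}

def planning(detections):
    return "BRAKE" if detections["obstacle_distance_m"] < 10.0 else "CRUISE"

frame = {"calibration_offset_px": 0.4}    # within tolerance
scan = {"nearest_m": 7.5}                 # obstacle close by
print(planning(perception(frame, scan)))  # -> BRAKE
```

An end-to-end network with the same inputs might handle more edge cases, but when it fails there is no intermediate output to point to when assigning fault.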

In addition to considering potential attacks on AI models through deceptive inputs, or even slight equipment malfunctions that detrimentally modify input signals, it is, as Wöbbeking mentions, a human’s responsibility to determine that an autonomous system is being used in the same environment it was trained to run in (i.e., was created for).
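One way an operator might discharge that responsibility is a simple pre-deployment check that live input statistics resemble those seen at training time. The feature, threshold, and numbers below are invented for illustration; a real deployment would use proper drift-detection methods.

```python
# Hypothetical guardrail: compare live input statistics with a profile
# recorded at training time before trusting the system in a new setting.
import statistics

TRAINING_PROFILE = {"ambient_lux_mean": 5000.0, "ambient_lux_stdev": 1500.0}

def environment_matches_training(live_lux_readings, max_z_score=3.0):
    live_mean = statistics.mean(live_lux_readings)
    z = abs(live_mean - TRAINING_PROFILE["ambient_lux_mean"]) / TRAINING_PROFILE["ambient_lux_stdev"]
    return z <= max_z_score

# A system trained on daytime data, asked to run at night, should fail the check.
print(environment_matches_training([120.0, 90.0, 150.0]))      # False
print(environment_matches_training([4800.0, 5300.0, 5100.0]))  # True
```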

Another aspect liability law should consider is whether appropriate measures were taken to mitigate the risks of using AI. In the constrained environments in which AI systems are currently deployed, they actually seem to reduce the risk of performing certain tasks compared to a human performing the same task. Risk mitigation will become increasingly relevant and crucial if autonomous systems are to be deployed in more varied and less constrained environments.

Strict Liability

Wöbbeking proceeds to discuss which questions might be covered by existing or new liability laws. One particularly difficult question is whether the risks inherent in state-of-the-art autonomous systems, which otherwise reduce risk, should be borne by the injured party, the operator, or the manufacturer.

One possible framework that could apply to autonomous systems is that of strict liability.

“Strict liability differs from ordinary negligence because strict liability establishes liability without fault. In other words, when a defendant is held strictly liable for harm caused to the plaintiff, he is held liable simply because the injury happened.”  

The manufacturer of the autonomous system is the party with the most knowledge of and control over the risks, and also the most incentive to cut costs. Allocating responsibility to the manufacturer therefore becomes an extra incentive for them to thoroughly evaluate and mitigate risk. However, holding manufacturers strictly liable carries the very real risk of inhibiting innovation.

While manufacturers have far more control than operators, they cannot always control whether operators have deployed a system according to the instructions. Operators themselves often have a choice between using or not using an autonomous system they are provided with. Their obligations, while limited, are crucial: reducing risk by using the system as instructed.

Proportional Liability

An alternative framework that could be applied to autonomous systems is that of proportional liability.

“Proportional Liability — refers to an arrangement for the assignment of liability in which each member of a group is held responsible for the financial results of the group in proportion to its participation.”

Programmers, data providers, manufacturers, further developers, third parties manipulating the system, operators, and users all influence the system and may contribute to a wrong decision. Taking this multi-causality into account, while more complex than blaming a single party, might be the right way to assess liability. Proportional liability might avoid inhibiting innovation by manufacturers while still holding irresponsible manufacturers more accountable for their lack of risk mitigation, and it would increase the likelihood that operators take the necessary precautions around autonomous system use.
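The arithmetic of proportional liability is simple once responsibility shares are fixed, as in the toy allocation below. The parties and percentages are invented; determining the shares themselves is, of course, the hard legal question.

```python
# Toy illustration of proportional allocation: split damages according to
# assumed shares of responsibility. Parties and shares are hypothetical.
def allocate_damages(total_damages, responsibility_shares):
    return {party: round(total_damages * share, 2)
            for party, share in responsibility_shares.items()}

shares = {"manufacturer": 0.50, "operator": 0.30, "data_provider": 0.20}
print(allocate_damages(100_000, shares))
# {'manufacturer': 50000.0, 'operator': 30000.0, 'data_provider': 20000.0}
```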

Risk Recovery

Finally, Wöbbeking postulates that one possible regulation for the allocation of recourse and liability risks would be to pool the risks through a community insurance solution, possibly one similar to social security law. This would likely avoid the complexities associated with liability law and, among other benefits, would also mitigate the disadvantages of a specific risk allocation.

The Future of AI

Whatever the legal future of AI looks like, there are some clear takeaways for AI system developers on what we can do now to prepare ourselves; namely:

  1. Clearly document the design and testing process;
  2. Follow software engineering best practices, e.g., no dynamic allocation of memory and no use of recursion;
  3. Take great care in designing validation and test sets, and make sure a new test set is used after each major system update (see the sketch after this list);
  4. Account for bias.
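As a small illustration of points 1, 3, and 4, the hypothetical sketch below evaluates a model on a versioned test set and reports accuracy per subgroup, so the evaluation is documented and any gap between groups is visible. All names and data are invented for illustration.

```python
# Hypothetical evaluation harness: record which test-set version was used
# and break accuracy down by subgroup so bias is visible in the report.
import json

def evaluate(predict, test_set, test_set_version):
    per_group = {}
    for example in test_set:
        group = example["group"]
        correct = predict(example["features"]) == example["label"]
        hits, total = per_group.get(group, (0, 0))
        per_group[group] = (hits + int(correct), total + 1)
    report = {
        "test_set_version": test_set_version,  # bump after each major update
        "accuracy_by_group": {g: hits / total for g, (hits, total) in per_group.items()},
    }
    print(json.dumps(report, indent=2))        # persist alongside design docs
    return report

# Toy usage with a trivial rule-based "model":
test_set = [
    {"features": [0.9], "label": 1, "group": "A"},
    {"features": [0.2], "label": 0, "group": "A"},
    {"features": [0.8], "label": 1, "group": "B"},
    {"features": [0.7], "label": 0, "group": "B"},
]
evaluate(lambda f: int(f[0] > 0.5), test_set, test_set_version="v2-post-update")
```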
