The Complexities of AI Liability and Standards of Care
AI is no longer confined to the realms of futuristic speculation; it’s actively shaping industries and transforming how businesses and individuals operate.
However, with this rapid adoption come significant questions about liability and standards of care.
When AI Goes Wrong: The Question of Responsibility
AI’s inherent unpredictability is both its strength and its Achilles’ heel. The so-called “black box” nature of machine learning algorithms means that even the developers can’t always predict outcomes once the system is operational.
This unpredictability raises a pressing question: when something goes wrong, who is responsible?
In traditional contexts, liability might arise under contract law, negligence, or statutory provisions such as the Consumer Protection Act 1987. However, AI adds layers of complexity.
Consider scenarios involving embodied AI, such as autonomous vehicles or robotic surgical devices. Physical harm caused by these systems may trigger liability, but the web of responsibility often includes developers, trainers, and end-users.
This chain requires careful consideration in contractual agreements to avoid a chaotic allocation of blame.
Contractual Nuances in AI Liability
Contractual obligations play a critical role in determining liability, and a key distinction is drawn between two types of contractual duty:
- Absolute Outcomes: If a party explicitly guarantees a specific result, such as an AI system identifying fraud with 100% accuracy, they bear strict liability if the system fails to deliver.
- Reasonable Skill and Care: On the other hand, a party may only commit to exercising due diligence in creating or deploying AI. In these cases, liability hinges on whether the processes behind the system were rigorous and adequate, rather than the outcome itself.
These distinctions aren’t just academic; they have real-world implications. For instance, companies deploying AI must ensure rigorous training, validation, and cross-checking of outputs.
The spotlight falls on whether adequate measures were taken to verify the system’s reliability, rather than whether it achieved perfection.
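To make that point concrete, the sketch below shows one hypothetical way a deploying organisation might document pre-deployment validation of an AI system. The model interface, dataset format, and 95% threshold are illustrative assumptions, not a legal standard; the point is that the verification steps themselves are recorded.

```python
# Illustrative sketch only: a hypothetical record of pre-deployment validation
# for an AI fraud-detection model. The model interface, dataset format, and
# 95% threshold are assumptions for illustration, not a legal requirement.
import json
from datetime import datetime, timezone

def validate_model(model, validation_cases, threshold=0.95):
    """Run the model over held-out cases and record the result for audit purposes."""
    correct = sum(
        1 for case in validation_cases
        if model.predict(case["input"]) == case["expected"]
    )
    accuracy = correct / len(validation_cases)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "cases_evaluated": len(validation_cases),
        "accuracy": accuracy,
        "threshold": threshold,
        "passed": accuracy >= threshold,
    }
    # Persist the record so the deploying party can later evidence the checks
    # it carried out before putting the system into service.
    with open("validation_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record
```

A log like this does not guarantee the outcome, but it goes directly to the "reasonable skill and care" question: what was checked, when, and against what benchmark.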
The Role of Governance and Accountability
A standout theme in the liability debate is the critical importance of governance in AI systems. Assigning clear roles and responsibilities is not a luxury—it’s a necessity. This includes identifying:
- Human Oversight: Who supervises the AI and intervenes when needed?
- Data Protection: Who ensures compliance with privacy laws and ethical standards?
- Safety Protocols: Who evaluates and mitigates risks to users and the public?
In transactional scenarios, contracts must articulate these roles to minimise ambiguity.
For example, if a healthcare provider adopts an AI diagnostic tool, it should be clear whether liability for incorrect diagnoses rests with the tool’s developer, the healthcare provider, or both.
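One hypothetical way to make that accountability visible in practice is to record, for every AI-assisted decision, which human role made the final call. The sketch below is illustrative only; the field names and logging approach are assumptions, not a clinical or regulatory standard.

```python
# Illustrative sketch only: a hypothetical decision record for an AI diagnostic
# tool that keeps a named clinician accountable for the final call. Field names
# and the override flag are assumptions, not a clinical standard.
from dataclasses import dataclass, asdict

@dataclass
class DiagnosticDecision:
    ai_suggestion: str      # what the AI tool proposed
    ai_confidence: float    # the tool's self-reported confidence, 0.0 to 1.0
    clinician: str          # the human role accountable for the final decision
    final_diagnosis: str    # what was actually recorded for the patient
    ai_overridden: bool     # True if the clinician departed from the AI suggestion

def record_decision(ai_suggestion, ai_confidence, clinician, final_diagnosis):
    """Create an auditable record showing who made the final diagnostic call."""
    return asdict(DiagnosticDecision(
        ai_suggestion=ai_suggestion,
        ai_confidence=ai_confidence,
        clinician=clinician,
        final_diagnosis=final_diagnosis,
        ai_overridden=(final_diagnosis != ai_suggestion),
    ))

# Example: the clinician overrides a low-confidence suggestion.
print(record_decision("appendicitis", 0.62, "duty_radiologist", "mesenteric adenitis"))
```

Records of this kind map neatly onto the contractual allocation of responsibility: they show where human oversight actually sat when the decision was made.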
Navigating Emerging Challenges
Beyond contractual arrangements, AI liability issues extend into broader legal and ethical challenges.
The rapid deployment of AI across diverse fields—from finance to healthcare—means that even non-physical harm, such as economic damage, can lead to complex liability disputes.
Adding to the challenge is the fragmented nature of AI development. In many cases, the algorithm’s creator, trainer, and user are distinct entities, operating under separate legal frameworks.
Without a robust contractual “web of liability,” accountability risks becoming muddled.
Practical Tips for Minimising Liability Exposure
For organisations in a position to draft or negotiate AI contracts, clarity is key. Explicitly define:
- The Scope of Duties: Specify whether the responsibility lies in achieving an absolute outcome or exercising reasonable skill and care.
- Roles and Responsibilities: Assign accountability for each phase of AI development and deployment, ensuring no critical gaps are left unaddressed.
- Governance Mechanisms: Include provisions for oversight, safety, and compliance with applicable laws.
For businesses adopting AI, due diligence doesn’t stop at the contractual stage. Regular audits, comprehensive risk assessments, and transparent communication with stakeholders are equally crucial in mitigating liability risks.
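As a purely illustrative example of what a regular audit might look like in operation, the sketch below re-checks logged AI decisions against later-confirmed outcomes and flags accuracy drift. The log format, baseline figure, and five-point tolerance are assumptions chosen for the example.

```python
# Illustrative sketch only: a hypothetical periodic audit that compares logged
# AI decisions with later-confirmed outcomes and flags accuracy drift. The log
# format, baseline figure, and 5-point tolerance are assumptions.
import json

def audit_logged_decisions(log_path, baseline_accuracy, tolerance=0.05):
    """Compare live accuracy against the accuracy claimed at deployment."""
    with open(log_path) as log:
        entries = [json.loads(line) for line in log]
    confirmed = [e for e in entries if "confirmed_outcome" in e]
    if not confirmed:
        return {"status": "no confirmed outcomes available to audit"}
    live_accuracy = sum(
        e["prediction"] == e["confirmed_outcome"] for e in confirmed
    ) / len(confirmed)
    return {
        "entries_audited": len(confirmed),
        "live_accuracy": live_accuracy,
        "baseline_accuracy": baseline_accuracy,
        # Escalate to the designated governance owner if accuracy has slipped.
        "action_required": live_accuracy < baseline_accuracy - tolerance,
    }
```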