AAIA India Member Article: Who Owns the Mistake? Rethinking Liability in AI-Driven Decision-Making
Managing Partner, AK Mylsamy & Associates LLP
Contact: subathra@akmllp.com
A problem of attribution and consequences
We live in a moment when machine decisions increasingly substitute for or augment human judgment. An AI model embedded in a vehicle can decide whether to brake or swerve in milliseconds; a diagnostic algorithm can suggest cancer treatments; an automated trading system can execute thousands of orders in a second. When those decisions cause harm, the central legal question becomes deceptively simple: who owns the mistake? The answer matters not only for victims seeking compensation but also for the social incentives that shape how firms test, deploy, and update AI systems. Scholars, regulators, and courts have begun to confront this question, but a patchwork approach remains the norm: agencies issue sectoral guidance, courts stretch old tort categories to fit new technology, and legislators debate whether bespoke AI liability rules are needed.
The doctrinal starting points: product liability and professional negligence
Product liability. A natural starting point is product liability. Familiar from cases involving defective machines or cars, this body of law imposes responsibility on manufacturers for three categories of defect: manufacturing defects, design defects, and warning (failure-to-warn) defects. In the United States, beginning with landmark decisions such as Greenman v. Yuba Power Products and refined in the Restatement (Third) of Torts: Products Liability, liability can be strict (no need to prove negligence) for manufacturing defects and, depending on the jurisdiction and the test applied, for design defects when a product is unreasonably dangerous. These principles place the cost of accidents on manufacturers that put dangerous goods into the stream of commerce.
Applied to AI, product liability's appeal is obvious: many AI systems are embedded in physical devices (cars, medical devices) and cause compensable physical injury. But traditional defect tests were built for physical objects with static designs. AI systems are data-driven and often opaque; they change over time through updates and continued learning, and they depend on third-party data and cloud services. These characteristics complicate standard defect analyses and causation tests, and the familiar tools of product liability law, designed for fixed objects, struggle to keep up with such shifting systems.
Professional negligence (malpractice). When humans in expert roles exercise judgment (doctors, financial advisors), the law typically assesses liability using negligence principles tied to a professional standard of care: did the actor exercise the degree of skill and care ordinarily employed by competent practitioners under similar circumstances? Liability thus flows from a breach of duty, causation and harm. When professionals use tools, courts often hold the professional responsible for choosing and supervising tools reasonably. This “tool-user” framing tends to assign primary responsibility to the human professional rather than the toolmaker.
That framing can work in healthcare or legal practice, where the professional retains decisive authority and can reasonably be expected to question or override an AI recommendation. But where autonomy is algorithmic (e.g., driverless cars with no human driver, or trading bots operating without real-time human oversight), the professional-negligence model fits less comfortably and may under-compensate victims or under-deter risky design choices.
Toward a workable set of principles and reforms
No single doctrinal silver bullet will fit all cases. Instead, a layered approach combining targeted regulation, doctrinal adaptation, contractual shifting of risks, and market-based insurance offers the best path forward.
1. Sector-specific regulation must set baseline safety expectations
Sector-specific regulation can set baseline safety expectations: in transport, crash reporting; in healthcare, post-market surveillance of AI-based devices; in finance, pre-trade risk controls and circuit breakers. Regulators are already moving here: NHTSA's crash-reporting order and the FDA's Action Plan for AI/ML-based Software as a Medical Device (SaMD) create expectations about testing, transparency, and post-market surveillance, while SEC and CFTC reforms emphasize pre-trade risk controls and circuit breakers. These baselines do double duty: they protect the public and inform negligence inquiries by helping define what a "reasonable" actor should have done. (Tort doctrine itself can adapt as well, a point taken up in the next section.)
For high-risk AI (automated driving systems, certain clinical decision tools, systems capable of causing systemic market disruption), sector regulators should require (an illustrative sketch of the kind of decision record these requirements imply appears after the list):
- robust logging and forensic ability,
- routine performance monitoring and reporting, and
- clearly defined human-in-the-loop or human-on-the-loop responsibilities.
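By way of illustration only, and not drawn from any regulation or standard, the following Python sketch shows one way a deployer might structure such a per-decision audit record: the exact model version, a digest of the inputs, the output, and whether and how a human intervened. All names (DecisionRecord, log_decision, the example system and version identifiers) are hypothetical.

```python
# Illustrative sketch only: a minimal per-decision audit record supporting
# "robust logging and forensic ability" and documented human oversight.
# All names and field choices are hypothetical, not taken from any rule.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    system_id: str                 # which deployed AI system made the call
    model_version: str             # exact model/build, so updates are traceable
    input_digest: str              # hash of inputs (avoids storing raw personal data)
    output: str                    # the recommendation or action taken
    confidence: float              # model-reported confidence, if available
    human_role: str                # "in-the-loop", "on-the-loop", or "none"
    human_override: Optional[str]  # what the human did, if anything
    timestamp_utc: str             # when the decision was made


def log_decision(raw_input: bytes, output: str, confidence: float,
                 human_role: str, human_override: Optional[str] = None,
                 system_id: str = "triage-assist",
                 model_version: str = "2.3.1") -> str:
    """Build a JSON-serialized audit record for one AI decision."""
    record = DecisionRecord(
        system_id=system_id,
        model_version=model_version,
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        confidence=confidence,
        human_role=human_role,
        human_override=human_override,
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))


# Example: a clinician reviews and overrides a model recommendation.
print(log_decision(b"<patient features>", "recommend biopsy", 0.87,
                   human_role="in-the-loop",
                   human_override="clinician deferred biopsy"))
```

Records of this kind, retained in an append-only store, are what later make it possible to reconstruct who (or what) decided, on which model version, and whether the defined human oversight actually occurred.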
2. Adapt tort doctrine where appropriate: calibrated strict liability and burden-shifting
Where AI systems operate autonomously and pose high physical risk (driverless vehicles operating without a safety operator; implantable AI medical devices), there is a strong policy case for imposing manufacturer-oriented strict liability or a statutory presumption of product defect subject to rebuttal. That reallocation keeps the costs of accidents with the parties best positioned to internalize them: designers, manufacturers, and deployers who control design, testing, and updates. For lower-risk decision-aid applications, a negligence standard that treats AI as a tool and focuses liability on professional judgment remains appropriate. Several scholars and international proposals converge on this kind of risk-based approach rather than blanket rules: the law should calibrate liability to the degree of risk and autonomy involved.
3. Create procedural fixes to remedy information asymmetry
Another crucial element is tackling information asymmetry. Victims often cannot prove what went wrong inside an opaque system. Courts and regulators should therefore require disclosure of key technical information (model provenance, training data lineage, telemetry logs, and update histories) under safeguards such as protective orders or limited inspection regimes that protect intellectual property while allowing fair adjudication. Such production obligations would let plaintiffs and factfinders establish causation. NHTSA's reporting order and FDA guidance already point in this direction, and legislative proposals in other jurisdictions (e.g., the EU's AI liability initiatives) contemplate comparable rules.
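To make the idea concrete, here is a purely illustrative Python sketch of the kind of structured provenance and update-history record that could be produced under a protective order. The field names and example values are invented; this is not modeled on any actual NHTSA, FDA, or EU disclosure format.

```python
# Illustrative sketch only: a structured disclosure covering model provenance,
# training data lineage, and update history. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DatasetLineage:
    name: str        # dataset identifier
    source: str      # where the data came from (vendor, public corpus, telemetry)
    collected: str   # collection period
    basis: str       # licence or legal basis for use


@dataclass
class ModelUpdate:
    version: str     # version deployed
    date: str        # deployment date
    summary: str     # what changed and why (e.g., retraining, bug fix)


@dataclass
class ProvenanceDisclosure:
    model_name: str
    developer: str
    deployer: str
    training_data: List[DatasetLineage] = field(default_factory=list)
    update_history: List[ModelUpdate] = field(default_factory=list)
    telemetry_retention_days: int = 365   # how long decision logs are kept


# Example: a minimal disclosure for a hypothetical driver-assistance model.
disclosure = ProvenanceDisclosure(
    model_name="lane-keep-v4",
    developer="ExampleAI Ltd.",
    deployer="ExampleCars Inc.",
    training_data=[DatasetLineage("highway-corpus", "fleet telemetry",
                                  "2022-2023", "contractual")],
    update_history=[ModelUpdate("4.1", "2024-03-01",
                                "retrained after rain-condition incidents")],
)
print(disclosure.model_name, len(disclosure.update_history), "update(s) on record")
```

Even a simple schema like this would let a factfinder see which model version was running at the time of an incident and what data and changes lay behind it, without requiring wholesale disclosure of proprietary code.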
4. Encourage contractual risk allocation and insurance markets
Contracts and insurance also have a role. Parties can and should allocate risks by contract: indemnities from model providers, warranties as to dataset provenance and performance, and service-level commitments for monitoring can distribute responsibility among developers, deployers, and users. Where residual risk remains, a properly structured market for AI-risk insurance can spread it, underwrite liabilities, and create actuarial feedback that rewards safer design, much as insurance does in other high-risk industries. Regulators can help by clarifying that compliance with recognized standards and certification regimes is admissible evidence of due care.
5. Preserve incentives for explainability and human oversight
The law should reward, not penalize, providers and deployers who build in explainability, robust monitoring, and human-override designs. A negligence standard that recognizes a reasonableness "safe harbor" for actors who follow consensus technical best practices and regulatory guidance supports innovation while maintaining victim protections. This is consistent with proposals arguing for a calibrated negligence approach rather than blunt bans or unbounded strict liability.
Conclusion
AI-driven decision-making forces the law to do two things at once: hold someone accountable when people are harmed, and shape incentives so that safety is designed in, not appended later. Traditional doctrines like product liability and professional negligence remain powerful tools, but they do not map perfectly onto contemporary AI systems whose behavior is shaped by data, cloud services, continuous updates, and distributed supply chains.
A practical way forward is layered and risk-sensitive. For high-risk autonomous systems that can physically harm people without human supervision, the law should shift more responsibility to producers and deployers through stricter liability rules or rebuttable presumptions, backed by sectoral regulatory requirements for monitoring and reporting. For decision-aids used under active human supervision, professional-negligence frameworks informed by regulatory standards and best practices will usually suffice. Across the board, courts and regulators should demand the technical evidence (logs, provenance, updates) needed to allocate responsibility fairly, and parties should use contracts and insurance to allocate residual risk.
Answering the question “who owns the mistake?” is less about identifying a single liable actor than about constructing a governance system that fairly compensates victims, pushes companies toward safer design, and sustains socially valuable innovation. That balance, difficult though it is, is the key to integrating AI into society responsibly.