New liability rules on products and AI to protect consumers and foster innovation

At the end of September, the Commission adopted two proposals to adapt liability rules to the digital age, the circular economy and the impact of global value chains. First, it proposes to modernise the existing rules on the strict liability of manufacturers for defective products. Second, the Commission proposes for the first time a targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to obtain compensation.

First proposal:

The revised Directive modernises and reinforces the current well-established rules, based on the strict liability of manufacturers, for the compensation of personal injury, damage to property or data loss caused by unsafe products. It ensures fair and predictable rules for businesses and consumers alike by:

  • Modernising liability rules for circular economy business models;
  • Modernising liability rules for products in the digital age, allowing compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services needed to operate them, or when manufacturers fail to address cybersecurity vulnerabilities;
  • Creating a more level playing field between EU and non-EU manufacturers;
  • Putting consumers on an equal footing with manufacturers.

Second proposal:

In line with the objectives of the AI White Paper and the Commission’s 2021 AI Act proposal – which sets out a framework for excellence and trust in AI – the new rules will ensure that victims benefit from the same standards of protection when harmed by AI products or services as they would if the harm were caused under any other circumstances.

The Directive simplifies the legal process for victims who must prove that someone’s fault led to damage, by introducing two main features. First, where a relevant fault has been established and a causal link to the AI system’s performance seems reasonably likely, the so-called ‘presumption of causality’ addresses the difficulties victims face in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, victims will have more tools to seek legal redress, through a right of access to evidence from companies and suppliers in cases involving high-risk AI.

The new rules strike a balance between protecting consumers and fostering innovation, removing barriers that prevent victims from accessing compensation.

Source: European Commission