by Foo Yun Chee
BRUSSELS (Reuters) – Individuals and companies affected by drones, robots and other products or services equipped with artificial intelligence software will find it easier to sue for compensation under draft European Union rules seen by Reuters.
The AI Liability Directive, to be announced by the European Commission on Wednesday, aims to address the growing spread of AI-enabled products and services and the patchwork of national rules across the 27-nation European Union.
The draft rules say victims can sue for compensation for damage to their life, property, health and privacy caused by the fault or omission of an AI provider, developer or user, and also cover people discriminated against in a recruitment process that used AI.
The rules seek to lighten the burden of proof on victims by introducing a “presumption of causality”: victims need only show that a manufacturer’s or user’s failure to comply with certain requirements is reasonably likely to have caused the harm linked to the AI technology in their claim.
Under the “right of access to evidence,” victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so they can identify the person responsible and find out what went wrong.
On Wednesday, the EU executive will also update a Product Liability Directive that outlines the scope of manufacturers’ liability for defective products ranging from smart technology to machinery and pharmaceuticals.
The proposed changes would allow users to sue for compensation when software updates make their smart home products unsafe or when manufacturers fail to fix cybersecurity gaps.
Users harmed by unsafe products imported from outside the EU will be able to sue the manufacturer’s EU representative for compensation.
The AI liability directive will need a green light from EU countries and EU legislators before it becomes law.