13 results for "AI Act"
Insurance Europe's recommendations ahead of the EU Commission’s Digital Omnibus package propose clearer and more consistent EU digital rules. The organization views the insurance sector as central to Europe's digital transition, citing its investments in cyber resilience, responsible data use, and AI tools for faster claims, improved prevention, safer data handling, and expanded consumer access.
The existing framework—including the AI Act, GDPR, DORA, and the Cyber Resilience Act—is described as creating complexity through overlapping requirements, which hinders practical application and diverts resources from service enhancements.
The recommendations call for clarification of the AI Act's scope to avoid duplication with financial-services legislation; reduced repetitive reporting under DORA, with reliance on existing certifications; alignment of cybersecurity and cloud rules across DORA, the CRA, and national frameworks; and clearer guidance on the GDPR, the AI Act, and the Data Act regarding data use in AI training and anonymization.
These adjustments would redirect resources to better claims, cyber protection, prevention, and accessible products.
To support the financial sector in its preparations, the ACPR held an industry-wide meeting (réunion de Place) on 17 September 2025, at which it presented an overview of the new regulation and clarified its role and organisation in supervising AI systems.
The European Union’s AI Act significantly reshapes corporate governance, imposing new responsibilities on directors, compliance officers, in-house counsels, and corporate lawyers. It demands transparency, risk management, and regulatory oversight for AI systems, particularly high-risk ones. These professionals must integrate AI oversight into governance, manage liability, conduct impact assessments, and ensure cross-border compliance. With its extraterritorial reach, the Act influences non-EU entities and sets global standards for AI governance. This paper aims to offer strategic guidance on aligning corporate policies with these emerging legal requirements, emphasizing proactive risk management and ethical AI adoption.
This paper examines the rise of algorithmic harms from AI, such as privacy erosion and inequality, exacerbated by accountability gaps and algorithmic opacity. It critiques existing legal frameworks in the US, EU, and Japan as insufficient, and proposes refined impact assessments, individual rights, and disclosure duties to enhance AI governance and mitigate harms.
The paper examines the EU AI Act's impact on banking supervision, highlighting the ECB's role. It discusses legal frameworks, obligations for high-risk AI systems, AI governance, and the balance between innovation and prudential requirements. Strategic policy recommendations are provided to enhance oversight and financial system integrity.
“... we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.”
“... we analyse the regulatory necessity in introducing a coercive regulatory framework, and second, present the regulatory concept of the AI Act with its fundamental decisions, core provisions and risk typology. Lastly, a critical analysis points to shortcomings, tensions and watered down assessments of the Act.”
“... the paper analyses (i) how the AI Act should be applied and implemented according to its original intention of a risk-based approach, (ii) how the AI Act should be complemented by sector-specific legislation in the future to avoid inconsistencies and over-regulation, and (iii) what lessons legislators around the world can learn from the AI Act in regulating AI.”
“This paper discusses and analyses the regulatory approach underlying the AI Act, the main issues surrounding the proposed regulation, and the implications for the AI Act's ability to achieve its goals.”