4 results for "AI governance"

EIOPA publishes Opinion on AI governance and risk management

AI use in insurance is increasing: 50% of non-life undertakings and 24% of life undertakings report deploying it. To address emerging risks, undertakings must clarify supervisory responsibilities, maintain full accountability, and implement proportionate governance. Risk managers should conduct impact-based assessments that weigh data sensitivity, consumer impact, and financial exposure. Strong governance covers fairness, data quality, transparency, cybersecurity, and human oversight. Oversight extends to third-party providers, with contractual safeguards required. AI systems must align with existing frameworks such as ERM and POG, ensuring traceability, explainability, and resilience throughout their lifecycle. Supervisory convergence across the sector remains a key regulatory goal.

AI in the Vault: AI Act's Impact on Financial Regulation

The paper analyzes the EU's Artificial Intelligence Act and its impact on AI regulation in banking and finance. It highlights the Act's potential to enhance governance and address high-risk applications, as well as the need for better coordination among regulators. The findings suggest that challenges remain, including the need for adaptive frameworks to ensure ethical AI deployment.

AI Act and the ECB: Steering Financial Supervision in the EU

The paper examines the EU AI Act's impact on banking supervision, highlighting the ECB's role. It discusses legal frameworks, obligations for high-risk AI systems, AI governance, and the balance between innovation and prudential requirements. Strategic policy recommendations are provided to enhance oversight and financial system integrity.

Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence

This paper addresses the inadequacy of the current U.S. tort liability system in handling the catastrophic risks posed by advanced AI systems. The author proposes punitive damages to incentivize caution in AI development, even in the absence of malice or recklessness. Additional suggestions include classifying AI development as an abnormally dangerous activity and requiring liability insurance for AI systems. The paper concludes by acknowledging the limits of tort liability and exploring complementary policies for mitigating catastrophic AI risk.