6 results for "EU AI Act"
This article argues that there is an increasing erosion of the traditional public-private divide, which is a key principle of liberalism and the rule of law. The authors identify a gradual shift, starting with the "responsibilization" of private actors and progressing to risk-based regulation like the GDPR. They contend that the DSA and AI Act represent a new milestone, as they delegate regulatory powers to private companies, effectively turning them into regulators of their TPSPs. This “privatization of public action” is seen as a serious threat to the rule of law because it removes public action from public scrutiny. To address this, the authors suggest connecting the rule of law more closely with democracy, which could help set boundaries for the legislative conferral of regulatory powers to private entities.
The EU AI Act's implementation begins after a three-year legislative journey, and national authorities must now clarify and enforce it. This policy brief outlines Belgium's tasks under the Act, including determining the Act's scope of application, applying its exemptions, and designating the competent authorities that will manage AI-related responsibilities.
AI is transforming finance, enhancing efficiency while introducing risks like cyber threats and bias. The EU’s AI Act regulates high-risk AI in credit and insurance. Financial institutions must integrate AI responsibly, ensuring transparency and fairness. Supervisors like ACPR will enforce compliance, fostering trust and innovation through collaboration and governance.
The EU's AI Act is a pioneering, risk-based law designed to regulate AI. It balances promoting AI adoption with protecting fundamental rights and democratic values. The Act uses pre-emptive risk assessments to categorize AI technologies and apply corresponding legal requirements, drawing from existing EU product safety laws.
This paper examines the interplay between the AI Act and the GDPR regarding explainable AI, focusing on individual safeguards. It outlines the relevant rules, compares the explanations each regime requires, and reviews related EU frameworks. The paper argues that current laws are insufficient and that broader, sector-specific regulation of explainable AI is needed.
This paper looks at global and regional efforts to develop strategies and regulatory frameworks for AI governance, chief among them the OECD AI Principles, the EU AI Act, and the NIST AI RMF. The common thread among these frameworks is identifying and categorizing AI developments and deployments according to their risk levels, and providing guidelines for ethical and trustworthy AI that balance human safety with innovation.