72 results for "ai"

A New Approach to Measuring AI Bias in Human Resources Functions: Model Risk Management

Companies use #ai tools for #hr decisions, but they must balance benefits against #risks. With limited federal #regulation and a complex patchwork of state laws, employers seek guidance. The #model #riskmanagement (#mrm) framework, adapted from #finance, helps manage #airisks in #employment decisions. Proportionality lets employers scale validation to the level of risk and to technological change. Objective analysis and a competent MRM team ensure AI tools align with their design and with legal requirements, fostering trust and #compliance.

AI Regulations in the Context of Natural Language Processing Research

Recent #ai developments, particularly Natural Language Processing (#nlp) models like #gpt3, are in wide use. Ensuring safety and trust as NLP adoption grows requires robust guidelines. Global AI #regulations are evolving through initiatives such as the #euaiact, #unesco recommendations, and the #us AI Bill of Rights. The EU AI Act's comprehensive regulation could set a global benchmark, and NLP models are already subject to existing rules such as the #gdpr. This paper explores AI regulations, GDPR's application to AI, the EU AI Act's #riskbasedapproach, and NLP's role within these frameworks.

Lessons from GDPR for AI Policymaking

The introduction of the #ai chatbot #chatgpt has stirred discussions about AI regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under the #euaiact has sparked concern. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and draws potential lessons from the #gdpr for effective regulation.

Regulation of (Generative) AI Requires Continuous Oversight

The submission proposes strategies for regulating #ai in #australia, including monitoring the take-up rate of #automated #decisionmaking systems and regulating specific applications of underlying AI technologies. It also suggests revising the definition of AI, establishing a set of guiding principles, and adopting a #risk-based approach to #regulation.

Acceptable Risks in Europe’s Proposed AI Act

This paper critically assesses the proposed #euaiact with respect to #riskmanagement and the acceptability of #highrisk #ai systems. The Act aims to promote trustworthy AI through proportionate #regulations, but its criteria, "as far as possible" (AFAP) and "state of the art," are argued to be unworkable and to lack proportionality and trustworthiness. The Parliament's proposed amendments, which introduce "reasonableness" and cost-benefit analysis, are argued to be more balanced and workable.