72 results for "ai"
Companies use #ai tools for #hr decisions, but they must balance benefits against #risks. With limited federal #regulation and a complex patchwork of state laws, employers seek guidance. The #modelriskmanagement (#mrm) framework, adapted from #finance, helps manage AI risks in #employment decisions. Proportionality lets employers scale validation to the risks involved and to changes in the technology. Objective analysis and a competent MRM team ensure AI tools align with their design and with legal requirements, fostering trust and #compliance.
Recent #ai developments, particularly Natural Language Processing (#nlp) models like #gpt3, are in wide use. Ensuring safety and trust as NLP adoption grows requires robust guidelines. Global AI #regulations are evolving through initiatives such as the #euaiact, #unesco recommendations, and the #us AI Bill of Rights. The EU AI Act's comprehensive regulation sets a potential global benchmark, and NLP models are already subject to existing rules such as the #gdpr. This paper explores AI regulations, the GDPR's application to AI, the EU AI Act's #riskbasedapproach, and NLP's role within these frameworks.
The introduction of #chatgpt has stirred discussions about #ai regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under the #euaiact has sparked concern. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and draws potential lessons from the #gdpr for effective regulation.
The submission suggests strategies for regulating #ai in #australia, including examining the rate of take-up of #automated #decisionmaking systems and regulating specific applications of the underlying AI technologies. It also suggests refining the definition of AI, creating a set of guiding principles, and adopting a #riskbased approach to #regulation.
This paper critically assesses the proposed #euaiact regarding #riskmanagement and acceptability of #highrisk #ai systems. The Act aims to promote trustworthy AI with proportionate #regulations but its criteria, "as far as possible" (AFAP) and "state of the art," are deemed unworkable and lacking in proportionality and trustworthiness. The Parliament's proposed amendments, introducing "reasonableness" and cost-benefit analysis, are argued to be more balanced and workable.
Private sector #ai applications can lead to unfair results and loss of informational #privacy, such as increasing #insurancepremiums. Addressing this involves exploring the philosophical theory of fairness as equality of opportunity.
The paper promotes better #riskmanagement and the fair allocation of #liability in #ai-related accidents.
This study explores the use of #ai #foundationmodels, specifically #chatgpt, in #auditing #esgreporting for #compliance with the #eu #taxonomy.
This paper discusses the potential of #ai #largelanguagemodels (#llms) in the #legal and #regulatorycompliance domain.
This concise philosophical essay explores the hypothetical scenario of an #artificialintelligence (#ai) gaining agency, and the relative impossibility of shutting it down given the constraints of #network #infrastructure.