72 results for "ai"

Building a Culture of Safety for AI: Perspectives and Challenges

The paper explores the challenges of building a #safetyculture for #ai, including the lack of consensus on #risk prioritization, the absence of standardized #safety practices, and the difficulty of achieving #culturalchange. The authors propose a comprehensive strategy that includes identifying and addressing #risks, using #redteams, and prioritizing safety over profitability.

How to Evaluate the Risks of Artificial Intelligence: A Proportionality‑Based, Risk Model

"The #eu proposal for the #artificialintelligenceact (#aia) defines four #risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of #ai systems (#ais), the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. Our suggestion is to apply the four categories to the risk #scenarios of each AIs, rather than solely to its field of application."

A Rumsfeldian Framework for Understanding How to Employ Generative AI Models for Financial Analysis

This paper explores the use of #generativeai models in financial analysis within the Rumsfeldian framework of "known knowns, known unknowns, and unknown unknowns." It discusses the advantages of using #ai #models, such as their ability to identify complex patterns and automate processes, but also addresses the #uncertainties associated with generative AI, including #accuracy concerns and #ethical considerations.

Measuring AI Safety

This paper addresses the challenges associated with the adoption of #machinelearning (#ml) in #financialinstitutions. While ML models offer high predictive accuracy, their lack of explainability, robustness, and fairness raises concerns about their trustworthiness. Furthermore, proposed #regulations require high-risk #ai systems to meet specific #requirements. To address these gaps, the paper introduces the Key AI Risk Indicators (KAIRI) framework, tailored to the #financialservices industry. The framework maps #regulatoryrequirements from the #euaiact to four measurable principles (Sustainability, Accuracy, Fairness, Explainability). For each principle, a set of statistical metrics is proposed to #measure, #manage, and #mitigate #airisks in #finance.
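As a rough illustration of the KAIRI idea, the sketch below maps each of the four principles to a small set of indicator metrics checked against thresholds. The metric names, values, and thresholds are hypothetical placeholders chosen for the example, not the indicators proposed in the paper.

```python
# Illustrative sketch only: it follows the structure described in the abstract
# (four principles, each with measurable statistical metrics), but every metric
# name, value, and threshold below is an assumption, not a KAIRI indicator.
from dataclasses import dataclass

@dataclass
class RiskIndicator:
    name: str          # statistical metric computed on model outputs (assumed)
    value: float       # measured value for the model under review
    threshold: float   # acceptable bound set by the institution (assumed)

    def breached(self) -> bool:
        return self.value > self.threshold

# Hypothetical mapping of the four principles to example metrics.
kairi_register = {
    "Sustainability": [RiskIndicator("population_stability_index", 0.08, 0.10)],
    "Accuracy":       [RiskIndicator("one_minus_auc", 0.12, 0.15)],
    "Fairness":       [RiskIndicator("demographic_parity_gap", 0.06, 0.05)],
    "Explainability": [RiskIndicator("unexplained_prediction_share", 0.02, 0.05)],
}

for principle, indicators in kairi_register.items():
    flagged = [i.name for i in indicators if i.breached()]
    status = f"review needed ({', '.join(flagged)})" if flagged else "within bounds"
    print(f"{principle}: {status}")
```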

Taking AI Risks Seriously: A Proposal for the AI Act

"... we propose applying the #risk categories to specific #ai #scenarios, rather than solely to fields of application, using a #riskassessment #model that integrates the #aia [#eu #aiact] with the risk approach arising from the Intergovernmental Panel on Climate Change (#ipcc) and related literature. This model enables the estimation of the magnitude of AI risk by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We use large language models (#llms) as an example."

Regulating AI at work: labour relations, automation, and algorithmic management

These papers examine the role of #collectivebargaining and #governmentpolicy in shaping strategies to deploy new #digital and #ai-based technologies at work. The authors argue that efforts to better #regulate the use of AI and #algorithms at work are likely to be most effective when underpinned by social dialogue and collective #labourrights. The articles suggest specific lessons for #unions and policymakers seeking to develop broader strategies to engage with AI and #digitalisation at work.