The paper explores the challenges of building a #safetyculture for #ai, including the lack of consensus on #risk prioritization, a lack of standardized #safety practices, and the difficulty of #culturalchange. The authors suggest a comprehensive strategy that includes identifying and addressing #risks, using #redteams, and prioritizing safety over profitability.
The article discusses the limitations of current #ai technologies such as #chatgpt, #largelanguagemodels, and #generativeai, and argues that we need to advance #researchanddevelopment beyond these limitations.
"The #eu proposal for the #artificialintelligenceact (#aia) defines four #risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of #ai systems (#ais), the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. Our suggestion is to apply the four categories to the risk #scenarios of each AIs, rather than solely to its field of application."
"In the rapidly evolving world of #ai technology, creating a robust #regulatoryframework that balances the benefits of AI #chatbots [like #chatgpt] with the prevention of their misuse is crucial."
This paper explores the use of #generativeai models in financial analysis within the Rumsfeldian framework of "known knowns, known unknowns, and unknown unknowns." It discusses the advantages of using #ai #models, such as their ability to identify complex patterns and automate processes, but also addresses the #uncertainties associated with generative AI, including #accuracy concerns and #ethical considerations.
This paper addresses the challenges associated with the adoption of #machinelearning (#ml) in #financialinstitutions. While ML models offer high predictive accuracy, their lack of explainability, robustness, and fairness raises concerns about their trustworthiness. Furthermore, proposed #regulations require high-risk #ai systems to meet specific #requirements. To address these gaps, the paper introduces the Key AI Risk Indicators (KAIRI) framework, tailored to the #financialservices industry. The framework maps #regulatoryrequirements from the #euaiact to four measurable principles (Sustainability, Accuracy, Fairness, Explainability). For each principle, a set of statistical metrics is proposed to #measure, #manage, and #mitigate #airisks in #finance.
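The paper does not publish its metric formulas in this summary, so the following is a minimal, hypothetical sketch of how two of the KAIRI principles (Accuracy, Fairness) might be scored as statistical indicators; the function names, the choice of demographic parity as a fairness indicator, and the sample data are all illustrative assumptions, not the framework's actual definitions.

```python
# Hypothetical KAIRI-style indicators. The metrics below (plain accuracy and a
# demographic-parity gap) are common choices in the fairness literature and
# are used here only as stand-ins for the paper's proposed statistics.

def accuracy(y_true, y_pred):
    """Fraction of correct predictions: one possible Accuracy indicator."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups:
    one possible Fairness indicator (smaller is fairer)."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

# Toy credit-scoring predictions for six applicants in two groups.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 1, 1, 1]

print(accuracy(y_true, y_pred))              # 4/6 ≈ 0.667
print(demographic_parity_gap(y_pred, group)) # |1/3 - 2/3| ≈ 0.333
```

In a risk-indicator setting, each such statistic would be tracked against a threshold so that drift on any principle can be measured, managed, and mitigated over time.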
"... we propose applying the #risk categories to specific #ai #scenarios, rather than solely to fields of application, using a #riskassessment #model that integrates the #aia [#eu #aiact] with the risk approach arising from the Intergovernmental Panel on Climate Change (#ipcc) and related literature. This model enables the estimation of the magnitude of AI risk by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We use large language models (#llms) as an example."
"... the combinatorial approach of #ai and #cybersecurity is beneficial but still the associated danger persist owing to its manipulation by #cyberattackers. This chapter highlights the recent AI techniques deployed in cybersecurity and identifies the #ethical challenges thereof."
#ethical dilemmas and #regulatory considerations associated with #ai and #chatgpt adoption in financial analysis are ... addressed, emphasizing the need for responsible AI usage and human oversight in critical #financial judgments.
These papers examine the role of #collectivebargaining and #governmentpolicy in shaping strategies to deploy new #digital and #ai-based technologies at work. The authors argue that efforts to better #regulate the use of AI and #algorithms at work are likely to be most effective when underpinned by social dialogue and collective #labourrights. The articles suggest specific lessons for #unions and policymakers seeking to develop broader strategies to engage with AI and #digitalisation at work.