6 results for "llms"

Advanced Applications of Generative AI in Actuarial Science: Case Studies Beyond ChatGPT

This article argues that Generative AI (GenAI) is transforming actuarial science, illustrated through four case studies. Large Language Models improve claims cost prediction by extracting structured features from unstructured text, reducing prediction error. Retrieval-Augmented Generation automates market comparisons by retrieving and summarizing document data. Fine-tuned, vision-enabled LLMs classify car damage and extract contextual details from images. A multi-agent system autonomously analyzes datasets and generates detailed reports. GenAI also shows promise in automating claims processing, fraud detection, and document compliance checks. Remaining challenges include regulatory compliance, ethical concerns, and technical limitations, underscoring the need for careful integration of GenAI into insurance workflows.
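To make the first case-study idea concrete, here is a minimal sketch of the pattern described above: an LLM turns free-text claim descriptions into structured features that feed an ordinary regression model. The prompt, the feature schema, the `call_llm` placeholder, and the choice of `GradientBoostingRegressor` are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: LLM-based feature extraction from unstructured claim text,
# feeding a downstream cost model. `call_llm` is a placeholder for whatever
# chat-completion API is available; schema and model choice are illustrative.
import json
from sklearn.ensemble import GradientBoostingRegressor

FEATURE_PROMPT = """Extract the following fields from the claim description
and answer with JSON only:
{"injury": bool, "vehicle_damage_severity": 0-3, "third_party_involved": bool}
Claim: """

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call (hosted or local LLM)."""
    raise NotImplementedError("plug in your LLM client here")

def extract_features(claim_text: str) -> list[float]:
    """Ask the LLM for structured fields and convert them to numeric features."""
    fields = json.loads(call_llm(FEATURE_PROMPT + claim_text))
    return [
        float(fields["injury"]),
        float(fields["vehicle_damage_severity"]),
        float(fields["third_party_involved"]),
    ]

def fit_cost_model(claim_texts: list[str], costs: list[float]):
    """Train a simple regressor on the LLM-derived features."""
    X = [extract_features(t) for t in claim_texts]
    model = GradientBoostingRegressor()
    model.fit(X, costs)
    return model
```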

A Proposal for Evaluating the Operational Risk for Chatbots Based on Large Language Models

Researchers propose a new risk metric for evaluating security threats in Large Language Model (LLM) chatbots, covering system, user, and third-party risks. An empirical study of three chatbot models found that prompt-level protections help but are not sufficient to prevent high-impact threats such as misinformation and scams. Risk levels varied across industries and user age groups, highlighting the need for context-aware evaluation. The study contributes a structured risk-assessment methodology to the field of AI security, offering a practical tool for improving the safety of LLM-powered chatbots and for informing future research and regulatory frameworks.
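The summary does not reproduce the metric itself. The snippet below is only a toy illustration of how per-threat scores over the three risk dimensions named above (system, user, third-party) might be aggregated into a single number; the threat list, scales, and weights are arbitrary assumptions, not the authors' proposal.

```python
# Toy illustration (not the authors' metric): aggregate per-threat scores
# across system, user, and third-party risk dimensions. Weights, scales,
# and the example threats are arbitrary assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class ThreatScore:
    name: str
    likelihood: float   # 0..1, estimated probability of occurrence
    system: float       # impact on the chatbot operator, 0..5
    user: float         # impact on the end user, 0..5
    third_party: float  # impact on third parties, 0..5

def operational_risk(threats: list[ThreatScore],
                     weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted expected impact summed over all threats (illustrative only)."""
    w_sys, w_usr, w_3rd = weights
    return sum(
        t.likelihood * (w_sys * t.system + w_usr * t.user + w_3rd * t.third_party)
        for t in threats
    )

score = operational_risk([
    ThreatScore("misinformation", 0.30, system=2, user=4, third_party=3),
    ThreatScore("scam facilitation", 0.10, system=3, user=5, third_party=4),
])
print(f"aggregate risk score: {score:.2f}")
```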

Lessons from GDPR for AI Policymaking

The introduction of #ai #chatgpt has stirred discussions about AI regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under #euaiact has sparked concerns. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and delves into potential lessons from the #gdpr for effective regulation.

Taking AI Risks Seriously: A Proposal for the AI Act

"... we propose applying the #risk categories to specific #ai #scenarios, rather than solely to fields of application, using a #riskassessment #model that integrates the #aia [#eu #aiact] with the risk approach arising from the Intergovernmental Panel on Climate Change (#ipcc) and related literature. This model enables the estimation of the magnitude of AI risk by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We use large language models (#llms) as an example."