5 results for “llms”

A Proposal for Evaluating the Operational Risk for Chatbots Based on Large Language Models

Researchers proposed a new risk metric for evaluating security threats in Large Language Model (LLM) chatbots, considering system, user, and third-party risks. An empirical study using three chatbot models found that while prompt protection helps, it's not enough to prevent high-impact threats like misinformation and scams. Risk levels varied across industries and user age groups, highlighting the need for context-aware evaluation. The study contributes a structured risk assessment methodology to the field of AI security, offering a practical tool for improving LLM-powered chatbot safety and informing future research and regulatory frameworks.
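To make the shape of such a metric concrete, here is a minimal sketch of a composite operational-risk score spanning the system, user, and third-party dimensions the abstract mentions. The field names, weights, and likelihood-times-impact scoring are assumptions for illustration, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    # (likelihood, impact) pairs in [0, 1] for each risk dimension;
    # the three dimensions follow the system/user/third-party split
    # from the abstract, but the scoring scheme is hypothetical.
    system: tuple[float, float]
    user: tuple[float, float]
    third_party: tuple[float, float]

def operational_risk(profile: RiskProfile,
                     weights: dict[str, float] | None = None) -> float:
    """Weighted sum of likelihood x impact per dimension.

    Weights would be calibrated per industry and user group, echoing
    the paper's finding that risk levels vary with context.
    """
    weights = weights or {"system": 1/3, "user": 1/3, "third_party": 1/3}
    score = 0.0
    for name, (likelihood, impact) in (
        ("system", profile.system),
        ("user", profile.user),
        ("third_party", profile.third_party),
    ):
        score += weights[name] * likelihood * impact
    return score

# Example: a chatbot where third-party threats (scams, misinformation)
# dominate despite prompt protection lowering system-level likelihood.
bot = RiskProfile(system=(0.2, 0.4), user=(0.3, 0.5), third_party=(0.6, 0.9))
print(f"Composite risk: {operational_risk(bot):.2f}")
```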

Lessons from GDPR for AI Policymaking

The introduction of #chatgpt has stirred discussions about #ai regulation. The controversy over classifying systems like ChatGPT as "high-risk" AI under the #euaiact has sparked concerns. This paper explores how large language models (#llms) such as ChatGPT are shaping AI policy debates and draws potential lessons from the #gdpr for effective regulation.

Taking AI Risks Seriously: A Proposal for the AI Act

"... we propose applying the #risk categories to specific #ai #scenarios, rather than solely to fields of application, using a #riskassessment #model that integrates the #aia [#eu #aiact] with the risk approach arising from the Intergovernmental Panel on Climate Change (#ipcc) and related literature. This model enables the estimation of the magnitude of AI risk by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We use large language models (#llms) as an example."