24 results for "gdpr"
The paper "The Regulation of Data Privacy and Cybersecurity" by Jasmin Gider (Tilburg University, School of Economics and Management), Luc Renneboog (Tilburg University, Department of Finance), and Tal Strauss (European Central Bank) compares the regulatory landscapes of data privacy and cybersecurity in the EU and the US. It outlines the fragmented nature of US regulation, which often relies on state-specific laws and sectoral approaches, in contrast to the EU's more unified framework built around the GDPR and the NIS Directives. The paper details the rising cost and frequency of cyber incidents and argues that mandatory disclosure requirements in both regions remain insufficient. It also identifies gaps in current legislation and points to ongoing efforts, such as the EU's Cyber Resilience Act and the US CIRCIA, to enhance digital resilience and address underinvestment in cybersecurity.
The EU prioritizes cybersecurity and data protection in response to rising cyber threats and ongoing digital transformation, relying on instruments such as the GDPR for personal data and the NIS Directive for critical infrastructure resilience. This study analyzes their impact, challenges, and interplay, and compares them with approaches elsewhere to assess their effectiveness in safeguarding digital security and fostering trust.
This paper examines the interplay between the AI Act and the GDPR with respect to explainable AI, focusing on safeguards for individuals. It outlines the relevant rules, compares the explanation requirements under the two instruments, and reviews related EU frameworks. The paper argues that current law is insufficient and that broader, sector-specific regulation of explainable AI is needed.
Recent #ai developments, particularly Natural Language Processing (#nlp) models such as #gpt3, are now widely used. Ensuring safety and trust as NLP use grows requires robust guidelines. Global AI #regulations are evolving through initiatives such as the #euaiact, the #unesco recommendations, the #us AI Bill of Rights, and others. The EU AI Act's comprehensive approach sets a potential global benchmark, and NLP models are already subject to existing rules such as the #gdpr. This paper explores AI regulations, the GDPR's application to AI, the EU AI Act's #riskbasedapproach, and NLP's place within these frameworks.
“The origins of the discussion concerning the role of #risk in #datatransfers are difficult to trace. Despite this, Schrems II, a recent decision of the European Court of Justice (#cjeu), has given the topic new traction. This paper explores the risk-based approach (#rba) hypothesis for data transfers from a different perspective: the consequences of applying the 'two-step test' set out in Article 44 of the #gdpr. The main goal is to present the challenges of applying this test and the various questions it raises.”
The introduction of #ai systems such as #chatgpt has stirred discussions about AI regulation, and whether systems like ChatGPT should be classified as "high-risk" AI under the #euaiact has proven controversial. This paper explores how Large Language Models (#llms) such as ChatGPT are shaping AI policy debates and delves into potential lessons from the #gdpr for effective regulation.
This is a note on the #gdpr and the use of #us-based #cloudservers. It raises concerns about the #risk that US #intelligenceagencies can access #data transferred from the #eu to any US cloud, or access it directly even while it is still in the EU / #eea or in transit. The note discusses cases in #france, the #netherlands, and #germany that have addressed these issues, and concludes that the legality of using US cloud servers and solutions remains problematic.
This paper explores the #uncertainty around when #data is considered "#personaldata" under #dataprotection #laws. The authors propose that by focusing on the specific #risks to #fundamentalrights caused by #dataprocessing, the question of whether data falls within the scope of the #gdpr becomes clearer.
"By employing Big Data and Artificial Intelligence (AI), personal data that is categorized as sensitive data according to the GDPR Art. 9 can often be extracted. Art. 9(1) GDPR initially forbids this kind of processing. Almost no industrial control system functions without AI, even when considering the broad definition of the EU AI Regulation (EU AI Regulation-E)."
"... nothing meaningful for regulation can be determined solely by looking at the data itself. Data is what data does. Personal data is harmful when its use causes harm or creates a risk of harm. It is not harmful if it is not used in a way to cause harm or risk of harm."