EU/European Commission: Artificial Intelligence Questions and Answers Update – August 2024
If summer and rest bring answers to your existential questions, this August the Commission brings you answers to all your questions about Artificial Intelligence. On August 1, 2024, the European Commission updated its “Artificial Intelligence – Questions and Answers” page, an essential guide to navigating the AI regulatory landscape within the EU. So, what’s new since the 2023 version? Spoiler alert: quite a lot!
🛑 Risk Categories
The 2024 version provides further clarification on risk categories. Prohibitions for “unacceptable risks” are more detailed, including examples such as the manipulation and misuse of biometric data, with guidance planned before they come into force in 2025. “High risks” now include concrete cases, such as AI systems used in critical contexts in healthcare or by law enforcement.
🔍 High-Risk Use Cases: Valuable additions made
This version focuses on the criteria that make an AI system high-risk, particularly with regard to biometric recognition and democratic processes. In addition, it includes exemptions for certain AI systems that do not present significant risks to health, safety or fundamental rights, providing a better understanding of the applications concerned.
🧩 The role of standardization in the AI Act
In May 2023, the European Commission mandated CEN/CENELEC to develop harmonized standards by April 2025. These standards, once published in the OJEU, will offer a full or partial “presumption of conformity” to the regulations for AI systems that comply with them.
🏷️ Watermarking and labeling obligations: Nothing will escape you!
The AI Act requires providers of generative AI to mark content so that it can be identified as artificially generated. Deployers must disclose manipulated content, unless it has been human-edited. These rules, aimed at preventing manipulation and misinformation, will come into force on August 2, 2026.
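To make the marking obligation more concrete, here is a minimal, purely illustrative sketch of how a provider might attach a machine-readable “AI-generated” marker to an image using metadata. The AI Act does not prescribe this format, the metadata keys are hypothetical, and real-world solutions (such as C2PA-style provenance or robust watermarking) go much further; this only shows the general idea of a marker that downstream deployers can read back.

```python
# Illustrative sketch only: the AI Act requires machine-readable marking of
# AI-generated content but does not mandate this (or any specific) format.
# Assumes the Pillow library; the "ai_*" metadata keys are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_ai_disclosure(image: Image.Image, path: str, generator: str) -> None:
    """Save a PNG with a simple, machine-readable AI-generation marker."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")      # hypothetical marker key
    metadata.add_text("ai_generator", generator)   # e.g. name of the model/provider
    image.save(path, pnginfo=metadata)


def read_ai_disclosure(path: str) -> dict:
    """Read the marker back, e.g. so a platform can flag AI-generated content."""
    with Image.open(path) as img:
        return {k: v for k, v in img.text.items() if k.startswith("ai_")}


if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color="white")  # stand-in for generated output
    save_with_ai_disclosure(img, "output.png", generator="example-model")
    print(read_ai_disclosure("output.png"))
```

Metadata labels like this are easy to strip, which is why the Commission also points to more robust techniques such as watermarking embedded in the content itself.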
🛡️ Implementing the AI Act: Who does what?
National authorities are responsible for overseeing and enforcing rules on AI systems, while the EU is responsible for governing general-purpose AI models. A European Artificial Intelligence Council ensures cooperation between member states. The European AI Office provides strategic guidance, and two advisory bodies offer expert advice for a balanced approach to AI development.
🚀 There you have it, you’re up to date! nexialist wishes you a great summer, and for those who want to dig deeper into this subject, feel free to visit our section dedicated to Artificial Intelligence.