EU's Artificial Intelligence Act (AIA)
The EU's Artificial Intelligence Act (AIA) is an EU regulation that aims to establish a harmonized framework for the development, deployment, and use of artificial intelligence (AI) systems within the European Union.
- Context:
- It can (typically) seek to ensure the safe, ethical, and lawful application of AI technologies, balancing innovation with fundamental rights and consumer protections.
- It can (typically) establish a common regulatory and legal framework for AI across all sectors within the EU, excluding military applications.
- It can (typically) classify AI systems into four risk categories: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk.
- It can (often) prohibit certain AI practices that are deemed to pose an unacceptable risk to fundamental rights, including manipulative behaviors and mass surveillance technologies.
- It can (often) require AI systems classified as high-risk to undergo thorough conformity assessments before deployment, ensuring compliance with specific regulatory requirements related to transparency, accountability, and data protection.
- It can (often) promote transparency and ethical use of AI by mandating disclosures for AI-generated content and implementing measures against the use of AI in ways that could harm individuals or society.
- It can:
- Prohibit "social scoring" systems, in which individuals are rated based on observed behaviors or predicted personal traits.
- Regulate AI tools used in employment, healthcare, and law enforcement, which must meet high standards of accuracy and fairness.
- Require AI-generated deepfakes to be disclosed to prevent misinformation.
- ...
- Example(s):
- AIA, 2024.
- ...
- Counter-Example(s):
- General Data Protection Regulation (GDPR).
- AI systems used for military, defense, or national security purposes, which are exempt from the act's provisions.
- AI technologies designed solely for personal or non-commercial use, which may not be subject to the same level of scrutiny.
- See: Regulation (European Union), Conformity Assessment, Notified Body, Regulation (EU) 2017/745, Maximum Harmonisation.
References
2023
- GBard
- The EU's Artificial Intelligence Act (AIA) is a proposed regulation that aims to establish a harmonized framework for the development, deployment, and use of artificial intelligence (AI) systems within the European Union.
- Key objectives of the AIA include:
- Ensuring fundamental rights, safety, and fairness: The AIA prohibits the use of AI systems for certain purposes, such as social scoring, and sets out requirements for transparency, accountability, and explainability.
- Boosting research and development: The AIA encourages the development of safe and trustworthy AI by providing financial support and fostering collaboration between researchers, developers, and users.
- Promoting innovation and competitiveness: The AIA aims to create a level playing field for AI companies in the EU and to make Europe a global leader in AI development and deployment.
- The AIA classifies AI systems into four risk categories:
- Unacceptable Risk: These are AI systems that are prohibited from being used in the EU, such as those that manipulate human behavior or exploit vulnerabilities in individuals.
- High Risk: These are AI systems that require specific regulatory oversight, such as those used in critical infrastructure, law enforcement, or healthcare.
- Limited Risk: These are AI systems that pose a lower risk to individuals and society, such as those used in online advertising or chatbots.
- Minimal Risk: These are AI systems that pose a negligible risk to individuals and society, such as those used in simple games or spam filters.
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Artificial_Intelligence_Act Retrieved:2023-12-11.
- The Artificial Intelligence Act (AI Act) is a European Union regulation on artificial intelligence. Proposed by the European Commission on 21 April 2021, it aims to introduce a common regulatory and legal framework for artificial intelligence. Its scope encompasses all sectors (except the military) and all types of artificial intelligence. As a piece of product regulation, it would not confer rights on individuals, but would regulate the providers of artificial intelligence systems and the entities making use of them in a professional capacity. The proposed EU Artificial Intelligence Act aims to classify and regulate artificial intelligence applications based on their risk of causing harm. This classification primarily falls into three categories: banned practices, high-risk systems, and other AI systems. Banned practices are those that employ artificial intelligence to cause subliminal manipulation or exploit people's vulnerabilities in ways that may result in physical or psychological harm, make indiscriminate use of real-time remote biometric identification in public spaces for law enforcement, or use AI-derived 'social scores' by authorities to unfairly disadvantage individuals or groups. The Act prohibits these practices outright, except that an authorisation regime is proposed for real-time remote biometric identification in the context of law enforcement. High-risk systems, as per the Act, are those that pose significant threats to health, safety, or the fundamental rights of persons. They require a compulsory conformity assessment, undertaken as a self-assessment by the provider, before being placed on the market. Particularly critical applications, such as those for medical devices, require the provider's self-assessment under the AI Act's requirements to be considered by the notified body conducting the assessment under existing EU regulations, such as the Medical Devices Regulation.
AI systems outside the categories above are not subject to any regulation, with Member States largely prevented from further regulating them via maximum harmonisation. Existing national laws related to the design or use of such systems are disapplied. However, a voluntary code of conduct for such systems is envisaged, though not from the outset.[1] The Act further proposes the introduction of a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the European Union's General Data Protection Regulation, the AI Act could become a global standard. It is already having an impact beyond Europe; in September 2021, Brazil's Congress passed a bill that creates a legal framework for artificial intelligence. The European Council adopted its general approach on the AI Act on 6 December 2022. Germany supports the Council's position but still sees a need for further improvement, as formulated in an accompanying statement by the member state. Among the measures likely to be proposed is a requirement for developers of AI products such as OpenAI's ChatGPT to declare whether copyrighted material was used to train their technology.
- ↑ Veale, Michael; Zuiderveen Borgesius, Frederik (2021). "Demystifying the Draft EU Artificial Intelligence Act". Computer Law Review International.