EU AI Act Glossary
28 key definitions from the EU AI Act, lightly paraphrased for readability. Essential reading for compliance teams, legal counsel, and CTOs.
AI System
Art. 3(1): A machine-based system designed to operate with varying levels of autonomy that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, recommendations, decisions, or content that can influence physical or virtual environments.
General Purpose AI (GPAI) Model
Art. 3(63): An AI model trained on large amounts of data and capable of performing a wide range of tasks, such as large language models (LLMs). GPAI models with systemic risk face additional obligations.
Intended Purpose
Art. 3(12): The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
Placing on the Market
Art. 3(9): The first making available of an AI system or a general-purpose AI model on the Union market.
Putting into Service
Art. 3(11): The supply of an AI system for first use directly to the deployer, or for own use, in the Union for its intended purpose.
Reasonably Foreseeable Misuse
Art. 3(13): The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems.
Risk
Art. 3(2): The combination of the probability of an occurrence of harm and the severity of that harm.
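The Act defines risk qualitatively and prescribes no formula, but the "probability combined with severity" idea is easy to illustrate. The scoring helper below is a hypothetical sketch for intuition only, not anything the regulation specifies:

```python
# Illustrative only: the AI Act defines risk as the combination of the
# probability of harm and its severity (Art. 3(2)) without a formula.
# This toy helper multiplies the two to show why both dimensions matter.

def risk_score(probability: float, severity: float) -> float:
    """Combine probability (0..1) and severity (0..1) into a toy score."""
    if not (0.0 <= probability <= 1.0 and 0.0 <= severity <= 1.0):
        raise ValueError("probability and severity must be in [0, 1]")
    return probability * severity

# A frequent but mild failure can score lower than a rare but severe one:
frequent_mild = risk_score(0.9, 0.05)
rare_severe = risk_score(0.1, 0.95)
print(frequent_mild < rare_severe)  # True
```

The point of the sketch: neither probability nor severity alone determines risk, which is why risk management under the Act (see Art. 9 below) must consider both.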
Authorised Representative
Art. 3(5): A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider to carry out the obligations and procedures established in the EU AI Act on the provider's behalf.
Deployer
Art. 3(4): A natural or legal person who uses an AI system under their authority in a professional context, including startups using third-party AI APIs to build products for EU users.
Distributor
Art. 3(7): Any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties.
Importer
Art. 3(6): A natural or legal person established in the EU who places on the market an AI system bearing the name or trademark of a person established outside the EU.
Operator
Art. 3(8): Any of the following: provider, product manufacturer, deployer, authorised representative, importer, or distributor.
Provider
Art. 3(3): A natural or legal person who develops or has developed an AI system or GPAI model and places it on the market or puts it into service under their own name or trademark.
CE Marking
Art. 48: A mandatory conformity marking that high-risk AI systems must bear before being placed on the EU market, indicating compliance with applicable EU regulations including the AI Act.
Conformity Assessment
Art. 43: The process by which providers of high-risk AI systems demonstrate compliance with the EU AI Act requirements, either through internal control (self-assessment) or through assessment involving a notified body.
Fundamental Rights Impact Assessment (FRIA)
Art. 27: A mandatory assessment that deployers of certain high-risk AI systems must conduct to evaluate the impact on fundamental rights before deploying the system.
Post-Market Monitoring
Art. 72: An active system that providers must implement to collect and review experience from high-risk AI systems deployed in the market, identifying and reporting serious incidents.
Risk Management System
Art. 9: A continuous iterative process that providers of high-risk AI systems must establish to identify, analyse, estimate, and mitigate risks throughout the AI lifecycle.
Technical Documentation
Art. 11, Annex IV: Comprehensive documentation that high-risk AI system providers must maintain, covering system design, training data, performance metrics, risk management, and post-market monitoring.
EU AI Office
Art. 64: The central EU body within the European Commission responsible for overseeing GPAI models, supporting implementation of the AI Act, and coordinating between national authorities.
National Competent Authority (NCA)
Art. 70: A national authority designated by each EU member state to supervise the application of the EU AI Act within its jurisdiction and handle market surveillance.
High-Risk AI System
Art. 6, Annex III: An AI system that poses significant risks to health, safety, or fundamental rights, classified as high-risk under Art. 6 either as a product or safety component covered by Annex I legislation or as a use case listed in Annex III, and subject to strict conformity assessment and documentation requirements.
Limited-Risk AI System
Art. 50: AI systems subject to transparency obligations only, such as chatbots and deepfakes, where users must be informed they are interacting with an AI.
Minimal-Risk AI System
Recital 48: AI systems posing little to no risk (e.g., AI in video games or spam filters) that face no specific obligations under the EU AI Act, though voluntary codes of conduct are encouraged.
Prohibited AI Practice
Art. 5: AI applications that are entirely banned under the EU AI Act, including social scoring, subliminal manipulation, exploitation of vulnerabilities, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
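The four risk tiers above form the backbone of the Act's obligations. A minimal sketch of that structure as a lookup, with hypothetical example use cases (the tier assignments are illustrative; real classification requires legal analysis of Art. 5, Art. 6, Annex I/III, and Art. 50):

```python
# Hypothetical sketch of the AI Act's four-tier risk structure.
# The example mappings are illustrative, not legal determinations.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright (Art. 5)"
    HIGH = "conformity assessment and documentation (Art. 6, Annex III)"
    LIMITED = "transparency obligations (Art. 50)"
    MINIMAL = "no specific obligations; voluntary codes encouraged"

# Illustrative use cases mapped to tiers (not legal advice):
EXAMPLES = {
    "social scoring system": RiskTier.PROHIBITED,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

The design point this mirrors: obligations scale with the tier, from an outright ban down to purely voluntary measures, so classifying a system is the first compliance step.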
Systemic Risk
Art. 51: A risk specific to GPAI models with high-impact capabilities (presumed when cumulative training compute exceeds 10^25 FLOPs) that could cause widespread harm across multiple domains or sectors.
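To give the 10^25 FLOP figure some scale, the sketch below estimates training compute with the widely used ~6 × parameters × tokens heuristic for dense transformer training. The model size and token count are hypothetical, and the heuristic is an approximation, not anything the Act prescribes:

```python
# Illustrative sketch: comparing an estimated training-compute budget
# against the AI Act's 10^25 FLOP presumption threshold (Art. 51).
# Uses the common ~6 FLOPs per parameter per token heuristic for
# dense transformer training; all figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Rough dense-transformer training cost: ~6 FLOPs/param/token."""
    return 6.0 * parameters * tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"estimated compute: {flops:.2e} FLOPs")  # ~6.3e+24
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)    # False: below the threshold
```

Note that the threshold creates only a presumption of high-impact capability; the Commission can also designate models as posing systemic risk on other grounds.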
Human Oversight
Art. 14: The requirement that high-risk AI systems be designed and developed to be effectively overseen by natural persons during their use, with the ability to intervene, override, or shut down the system.
Transparency Obligation
Art. 50: Requirements for AI systems to be clear about their AI nature to end-users, including labelling AI-generated content and disclosing when users interact with AI chatbots or emotion recognition systems.
Definitions based on the EU AI Act (Regulation (EU) 2024/1689). For legal advice, consult a qualified professional.