Fill in this short form to assess whether your AI system may fall under the EU AI Act.
Is your AI system used in any of the 8 high-risk areas listed in Annex III of the Act (e.g., education, employment, migration, law enforcement)?
Is your AI system embedded in a product already CE-marked under EU product safety legislation (e.g., a medical device, toy, or machinery)?
Was the model trained using more than 10²⁵ FLOPs of cumulative compute? Under Article 51, this is the threshold above which a general-purpose AI model is presumed to pose systemic risk.
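If you're unsure how to estimate this, the sketch below uses the widely cited 6·N·D rule of thumb for dense transformers (total training FLOPs ≈ 6 × parameters × training tokens). The heuristic and the example figures are illustrative assumptions, not part of the Act.

```python
# Rough training-compute estimate using the common 6*N*D heuristic
# (FLOPs ~ 6 x parameters x training tokens), a community rule of
# thumb for dense transformers, not a method prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD = 1e25  # Art. 51(2) presumption threshold

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Illustrative example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs; above threshold: {flops > SYSTEMIC_RISK_THRESHOLD}")
```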
Can users access the system directly (e.g., chatbot, image generator) without human gatekeeping?
Are children, people with disabilities, or other vulnerable groups among your users or data subjects?
Do you have a documented risk management process (e.g., one aligned with ISO/IEC 42001, or an internal procedure)?
Do you store logs and technical documentation that could support a regulatory audit?
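As a rough illustration of audit-ready logging, the sketch below appends one JSON record per inference to an append-only file. The field names and hashing approach are assumptions for this example, not fields mandated by the Act.

```python
# Hypothetical audit log: one JSON record per inference, appended to a
# file. Field names are illustrative, not mandated by the AI Act.
import datetime
import hashlib
import json

def log_inference(path: str, model_version: str, prompt: str, output: str) -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash inputs/outputs so the log supports an audit trail
        # without storing raw personal data.
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_inference("audit.jsonl", "v1.3.0", "example prompt", "example output")
```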
Is there a designated person who can approve, intervene in, or shut down the system if needed?
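One common way to make "shut down if needed" concrete is a kill switch under the designated person's control. The sketch below gates every inference behind a flag file; the file path, mechanism, and `model_infer` stand-in are illustrative assumptions, not a prescribed design.

```python
# Minimal kill-switch sketch: refuse to serve predictions while a flag
# file exists. Path and mechanism are illustrative assumptions.
import os

KILL_SWITCH = "system.disabled"  # created by the designated operator

def model_infer(x):
    return x  # hypothetical stand-in for the real model call

def predict(x):
    if os.path.exists(KILL_SWITCH):
        raise RuntimeError("System disabled by human operator")
    return model_infer(x)
```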
Do you track performance metrics like accuracy, robustness, bias, or fairness for this system?
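If you track bias or fairness, one simple starting metric is the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The metric choice and toy data below are illustrative assumptions.

```python
# Demographic parity difference: |P(y=1 | group A) - P(y=1 | group B)|.
# Toy data; in practice compute this over real predictions per group.
import numpy as np

def demographic_parity_diff(y_pred: np.ndarray, group: np.ndarray) -> float:
    return abs(float(y_pred[group == 1].mean()) - float(y_pred[group == 0].mean()))

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(f"Demographic parity difference: {demographic_parity_diff(y_pred, group):.2f}")
```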
Have you defined a process to monitor performance and incidents after deployment?
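A post-deployment monitoring process can start as simply as a rolling accuracy check with an alert threshold. The baseline, tolerance, and window sizes below are illustrative assumptions.

```python
# Rolling accuracy monitor: flag degradation when accuracy over the
# last `window` outcomes falls below baseline - tolerance.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 500):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    def degraded(self) -> bool:
        if not self.outcomes:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.92)
monitor.record(True)
monitor.record(False)
print("Degraded:", monitor.degraded())
```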