Understanding the EU AI Act and its impact on UK education

Artificial intelligence is transforming education at pace, from adaptive learning platforms and automated marking to behaviour monitoring and admissions support tools. While the UK is no longer part of the European Union, the EU’s new artificial intelligence law, the EU AI Act, still matters for schools, colleges and academy trusts in England, Scotland, Wales and Northern Ireland.

UK education institutions are not directly regulated by the EU AI Act in the same way as EU-based organisations. However, the Act sets a powerful global benchmark. It will shape the AI-enabled products sold by edtech vendors, influence procurement and contractual terms, and provide a practical framework for thinking about risk, governance, safeguarding and ethics when using AI in education.


What is the EU AI Act and why does it exist?

The EU AI Act is the world’s first comprehensive, legally binding framework for artificial intelligence. Rather than banning AI outright, it adopts a risk-based approach, regulating AI systems according to the level of potential harm they may pose to individuals and society.

The Act aims to:

  • Protect fundamental rights such as privacy, non-discrimination and access to education

  • Increase trust and transparency in AI systems

  • Encourage innovation that is human-centred, lawful and trustworthy

The four risk categories of the EU AI Act explained

The EU AI Act groups artificial intelligence systems into four distinct risk categories, with obligations that scale with the potential for harm.

1. Unacceptable risk – Prohibited practices

AI practices considered a clear threat to fundamental rights are banned outright. These prohibitions, which became enforceable in February 2025, include:

  • Social scoring – systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to unjustified or disproportionate treatment.

  • Biometric categorisation – systems that use biometric data to infer sensitive characteristics such as race, political opinions, religious beliefs or sexual orientation.

  • Emotion recognition – the use of emotion-sensing AI in workplaces and educational institutions, except in very narrow medical or safety-related contexts.

2. High risk – Regulated systems

High-risk AI systems are those that may significantly affect safety or fundamental rights – such as those used in school admissions or student assessment. These systems are permitted only if they meet strict legal safeguards and transparency requirements.

3. Limited risk – Transparency obligations

This category includes AI systems such as chatbots or tools generating synthetic content. For these tools, compliance hinges on transparency; users must be informed that they are interacting with AI or AI-generated material.

4. Minimal or no risk

Most everyday AI applications fall into this category. These tools are subject to little or no regulation under the Act, allowing for continued innovation with minimal administrative burden.
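
To make these tiers concrete, here is a minimal sketch of how a school might record its AI tool inventory against the four categories. It is a hypothetical Python illustration only – the tool names and tier assignments are invented for the example, and a real classification always depends on a system's design and actual use.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted only with strict safeguards"
    LIMITED = "transparency obligations apply"
    MINIMAL = "little or no regulation"

@dataclass
class AITool:
    name: str
    purpose: str
    tier: RiskTier  # provisional judgement, not a legal classification

# Hypothetical inventory entries; the tier assignments are illustrative
# and would need review against each system's actual design and use.
inventory = [
    AITool("Exam proctoring monitor", "detects prohibited behaviour during tests", RiskTier.HIGH),
    AITool("Admissions triage model", "ranks applications for human review", RiskTier.HIGH),
    AITool("Revision chatbot", "answers pupils' study questions", RiskTier.LIMITED),
    AITool("Timetable optimiser", "schedules rooms and staff", RiskTier.MINIMAL),
]

for tool in inventory:
    print(f"{tool.name}: {tool.tier.name} ({tool.tier.value})")
```

A register like this is a useful starting point for the vendor conversations and governance steps discussed later in this article.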

Is your school’s data strategy ready for the AI era? Explore our Solutions to see how our Bett Innovation Award-nominated platform helps you move beyond reactive reporting to true predictive intelligence.

Explore all our solutions here!

Why education is a sensitive area under the Act

The EU recognises that some uses of AI in education can be particularly sensitive, especially where systems influence learners’ future opportunities, progression or access to education.

Education-related high-risk AI

Under Annex III of the EU AI Act, several education-related uses are explicitly classified as high risk:

  • AI systems used to determine access or admission to education and training institutions

  • AI systems used to evaluate learning outcomes, including where those outcomes are used to steer the learning process

  • AI systems used to assess the appropriate level of education an individual will receive or be able to access

  • AI systems used to monitor or detect prohibited behaviour during tests or examinations, particularly where they may affect assessment outcomes or lead to disciplinary action

Whether a particular product falls within these categories depends on its design and actual role. Systems that meaningfully influence admissions decisions, assessment outcomes or long-term educational trajectories – such as admissions decision-support tools, automated or AI-assisted grading, or systems used for streaming, progression or pathway decisions – attract the high-risk obligations because of their impact on fundamental rights.

Not all educational AI is high risk. Tools providing basic tutoring, revision support or administrative assistance may fall outside these strict obligations.

Safeguards Required for High-Risk AI Systems

Where an AI system is classified as high risk, the Act requires extensive safeguards before it can be placed on or used in the EU market. These include:

  • A documented risk management system

  • Use of high-quality, relevant and representative datasets to reduce bias

  • Appropriate human oversight, allowing decisions to be understood and challenged

  • Clear technical documentation and record-keeping

  • Conformity assessments and registration of the system in an EU-level public database

These obligations primarily fall on AI providers, but they have significant implications for organisations that deploy these systems.

Master the fundamentals of AI governance

Before navigating the complexities of European regulation, it is essential to understand the broader landscape of ethical technology in schools. If you are looking to build a robust AI strategy for your Multi-Academy Trust or educational institution, you must first establish a foundation of transparency and accountability.

Don't miss the essential first chapter of our series, where we explore how global AI frameworks like those from the OECD are shaping the future of classroom technology. Whether you are concerned about student data privacy, algorithmic bias, or the ethical deployment of predictive analytics, our deep dive provides the strategic roadmap you need.

Read the blog

What responsibilities do schools and colleges have?

While the EU AI Act’s primary legal burden rests on AI providers, it also creates significant duties for deployers – organisations that use AI systems under their own authority. In an educational context, this includes schools, colleges, and Multi-Academy Trusts (MATs).

Key duties for high-risk AI systems

For systems classified as high risk, deployers are expected to maintain rigorous standards of AI governance, including:

  • Adherence to instructions – using the system strictly in line with the provider’s technical instructions.

  • Human oversight – assigning effective oversight to trained and competent staff who can intervene if necessary.

  • Data monitoring – monitoring input data, where controlled by the deployer, to ensure it is relevant and representative.

  • Operational logging – keeping logs of system operation to ensure traceability and accountability (a minimal sketch follows this list).

  • Incident reporting – reporting serious incidents or malfunctions to the provider and relevant authorities.

  • Regulatory cooperation – cooperating with regulators and market surveillance bodies during audits.
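
To illustrate the operational logging duty above, here is a minimal sketch of what one traceability record might look like. The schema, field names and JSON Lines format are assumptions made for this example – the Act requires logging and traceability but does not prescribe any particular structure.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageLogEntry:
    """One traceability record for a single use of a high-risk AI system.

    The field names and file format here are assumptions made for the
    sketch; the Act requires traceability but prescribes no exact schema.
    """
    timestamp: str
    system_name: str
    input_reference: str   # pointer to the submitted data, not the data itself
    output_summary: str    # what the system recommended or decided
    human_reviewer: str    # the trained staff member exercising oversight
    overridden: bool       # whether the reviewer rejected the system's output
    notes: str = ""

def record_usage(entry: AIUsageLogEntry, path: str = "ai_usage_log.jsonl") -> None:
    """Append the record to a local JSON Lines file for later audit."""
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(entry)) + "\n")

# Example: a hypothetical admissions-support system whose suggestion
# was reviewed and accepted by a named member of staff.
record_usage(AIUsageLogEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_name="admissions-triage-v2",
    input_reference="application/2026-0187",
    output_summary="flagged for priority review",
    human_reviewer="j.smith (admissions lead)",
    overridden=False,
))
```

Keeping records like this makes it far easier to meet the incident-reporting and regulatory-cooperation duties listed above, because every decision can be traced back to an input, an output and a named human reviewer.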

Intersection with UK GDPR and data protection

In addition to these emerging standards, schools must continue to meet existing obligations under UK GDPR. This includes carrying out a Data Protection Impact Assessment (DPIA) where personal data risks are high. The AI Act also introduces Fundamental Rights Impact Assessments in certain public-sector contexts, which complement rather than replace existing data protection requirements.

While UK schools are not directly regulated by the EU AI Act unless they operate in the EU market, these expectations increasingly influence vendor behaviour, contracts, and procurement standards across the global edtech landscape.

Book your Deesha demo – Move from reactive reporting to predictive intelligence

Deesha is built with these global standards of transparency, accountability, and safety at its core. Navigating the complexities of AI governance and the EU AI Act doesn't have to be a solo journey for your leadership team.

Our platform is specifically designed to help Multi-Academy Trusts and independent schools unify their disparate data streams – from attendance and behaviour to finance and HR – into a single, ethical, and secure interface. By choosing Deesha, you aren't just buying a tool; you are investing in a future-proof data strategy that aligns with international benchmarks for high-risk AI systems.

Are you ready to see how predictive analytics can transform your student outcomes while maintaining the highest levels of data protection and human oversight?

Book a personalised Deesha demo today to explore our solutions and discover why we are the trusted partner for forward-thinking educational leaders.

Book your demo

AI Literacy: An Emerging Expectation

The EU AI Act introduces a clear expectation that organisations deploying AI systems, particularly high-risk ones, ensure an appropriate level of AI literacy among staff.

This does not mean a formal curriculum or qualification. Instead, it requires that staff involved in using or overseeing AI understand:

  • What the system does and does not do

  • Its limitations and potential risks

  • When and how to intervene or override outputs

  • How to escalate concerns or incidents

In practice, schools cannot simply purchase an AI tool and switch it on. Responsible deployment requires training, clear processes and informed oversight.

Implementation Timeline: What Applies and When

The EU AI Act is being implemented in phases:

  • February 2025: Prohibited AI practices, including emotion recognition in education, became enforceable

  • August 2025: Rules on general-purpose AI models and related transparency and governance obligations began to apply

  • August 2026: Most high-risk AI system requirements are scheduled to apply

  • August 2027: Some high-risk systems, particularly those embedded in regulated products, benefit from extended transition periods

There is ongoing political discussion within the EU about easing or delaying parts of the high-risk regime to reduce compliance burdens. These proposals are not yet final law, reinforcing that implementation is gradual and evolving.

Potential Challenges for UK Education Providers

While the EU AI Act strengthens protections, it also presents several challenges for the sector. One primary concern is the significant regulatory burden, as high-risk systems require extensive documentation and rigorous oversight. This may also impact innovation, as smaller edtech suppliers might struggle with the associated compliance costs. Furthermore, the UK–EU divergence – specifically the differences between the UK’s principles-based AI approach and the EU’s legislative model – could create additional complexity for providers. Finally, there are often grey areas in determining whether a specific educational AI tool truly qualifies as high risk, making the path to compliance less straightforward.

Key Takeaways for UK Education Leaders

Even where the EU AI Act does not apply directly, it offers a valuable blueprint for responsible AI use in education.

  1. Map your AI tools against risk categories
    Identify systems that influence admissions, assessment, monitoring or progression.

  2. Ask vendors tougher questions
    Request evidence of risk assessments, bias testing, data sources and oversight mechanisms.

  3. Embed governance and accountability
    Involve leadership, safeguarding, IT and data protection expertise in AI decisions.

  4. Invest in AI literacy
    Build understanding among staff, governors and, where appropriate, learners.

  5. Use the Act as a learning framework
    Treat it as guidance for ethical and responsible practice, not just legal compliance.

Final thoughts on AI governance in education

The EU AI Act represents far more than a simple piece of regulation. It serves as a clear signal to the global community that the use of AI in education carries serious responsibilities regarding ethics and safety.

For UK schools and colleges, the real value of this framework lies in the core principles it reinforces: fairness, transparency, accountability, and human oversight. By aligning with these principles now – even on a voluntary basis – education providers can adopt AI in ways that genuinely benefit learners while protecting their fundamental rights and maintaining institutional trust.

As we continue to navigate this evolving landscape, our next blog in this series will look beyond formal law and regulation to explore the influential ethical frameworks developed by international organisations such as the OECD and UNESCO.

Lead the way in ethical AI with Deesha

As a Bett Innovation Award Finalist, Deesha is committed to helping educational leaders implement these global standards through practical, data-driven solutions. Our platform provides the transparency and oversight required to ensure your school’s use of AI is both innovative and responsible.

Are you ready to see how a unified data strategy can simplify your compliance and enhance student outcomes?

Book a personalised Deesha demo today to speak with our team about your institution's journey toward responsible AI.

Book your demo

While you wait for the next post in this series, continue to build your expertise. Discover our other blogs on achieving data maturity, closing the attainment gap, and simplifying data infrastructure for MAT leaders.

Read our blogs