Europe is setting a global precedent with the European Union's AI Act, the world's first comprehensive regulation of artificial intelligence. Adopted by the European Parliament on March 13, 2024, the Act is being phased in: its next major milestone arrives on August 2, 2025, when governance rules and obligations for general-purpose AI models take effect, with full applicability following on August 2, 2026. The Act is designed to ensure that AI technologies used within the EU are safe, transparent, and in accordance with fundamental rights.

What is the EU AI Act?

The EU AI Act is a comprehensive legal framework proposed by the European Commission that sets out risk-based rules for developers and deployers of AI, tailored to specific uses of the technology.

The EU AI Act aims to ensure that Europeans can trust what AI has to offer by setting boundaries based on potential risk. It is intended to support innovation and investment while keeping AI safe, ethical, and beneficial, and it is accompanied by a voluntary AI Pact to ease the transition for those affected by the new rules.

EU AI Act Risk Classifications

The EU AI Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems pose a clear threat to the safety, rights, and livelihoods of people. 

For this classification, the AI Act prohibits eight practices: harmful AI-based manipulation and deception, such as subliminal techniques; harmful AI-based exploitation of vulnerabilities; social scoring; predicting the risk of an individual committing a criminal offense based solely on profiling or personality traits; untargeted scraping of the internet or CCTV material to create or expand facial recognition databases; emotion recognition tools in workplaces and educational institutions; biometric categorization to deduce protected characteristics such as race, religion, political views, or sexual orientation; and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, subject to narrow exceptions.

The AI functionalities classed as unacceptable risk are banned because they violate privacy, enable surveillance and discrimination, lack scientific reliability, and undermine human dignity and autonomy.

The high-risk class covers AI systems that pose serious risks to health, safety, or fundamental rights. Use cases that fall under high risk include AI in critical infrastructure (such as traffic control or electricity grids), AI in education, AI in medical devices or surgery, AI in hiring and workplace management, AI in determining access to essential services, AI in biometric and emotion recognition, AI in law enforcement, AI in migration and border control, and AI in justice systems and democratic processes.

According to the AI Act, before a high-risk AI system can be placed on the market, it must undergo adequate risk assessment and mitigation; be built on high-quality data sets that minimize discriminatory outcomes; log its operation so that decisions are traceable; come with clear technical documentation for authorities; provide clear and adequate information to the user; include human oversight measures; and achieve a high level of robustness, cybersecurity, and accuracy.
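
For teams tracking their own readiness, these obligations can be thought of as a simple checklist. The sketch below is a hypothetical illustration in Python: the field names are shorthand for the Act's requirements, not official terminology, and real conformity assessment is a legal process, not a boolean check.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Illustrative pre-market checklist mirroring the obligations above.
    Field names are invented shorthand, not terms from the Act itself."""
    risk_assessment_and_mitigation: bool = False
    high_quality_datasets: bool = False
    operation_logging: bool = False
    technical_documentation: bool = False
    user_information: bool = False
    human_oversight: bool = False
    robustness_cybersecurity_accuracy: bool = False

    def missing(self) -> list[str]:
        """Return the obligations that are not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def ready_for_market(self) -> bool:
        """Every obligation must be met before market placement."""
        return not self.missing()


checklist = HighRiskChecklist(risk_assessment_and_mitigation=True,
                              human_oversight=True)
print(checklist.ready_for_market())  # False: five obligations remain
print(checklist.missing())           # lists what is still outstanding
```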

Limited-risk AI systems do not pose major threats but still carry transparency obligations to ensure public trust and informed use: users must be told, for example, when they are interacting with a chatbot or viewing AI-generated content. This category includes chatbots and generative AI.

Minimal/no-risk AI systems are largely considered safe and do not require specific regulation under the AI Act. Examples of minimal-risk AI systems include AI in video games and spam filters in email systems.
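
To make the four-tier structure concrete, here is a minimal sketch in Python. The tier names come from the Act and the example use cases are drawn from this article, but the mapping and the lookup helper are purely illustrative and have no legal standing; classifying a real system requires contextual and legal analysis, not a keyword table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed only with strict obligations
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations under the Act

# Illustrative mapping of use cases named in this article to tiers.
EXAMPLE_TIERS: dict[str, RiskTier] = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "ai in medical devices": RiskTier.HIGH,
    "ai in hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def lookup_tier(use_case: str) -> RiskTier | None:
    """Return the illustrative tier for a known use case, if any."""
    return EXAMPLE_TIERS.get(use_case.lower())

if __name__ == "__main__":
    for case in ("AI in hiring", "email spam filter"):
        print(f"{case}: {lookup_tier(case).value}")
```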

Preparing for High-Risk AI Rules in 2025

Implementation and acceptance of Europe's AI Act has not been without criticism from stakeholders who argue that the Act could stifle innovation, particularly for startups and small enterprises.

To mitigate these concerns, the European Commission has proposed simplification initiatives to reduce the compliance burden without compromising the objectives of the AI Act. The Commission also plans to launch an AI Act service desk in summer 2025 to provide practical compliance guidelines and support for businesses navigating the new regulations.

The AI Act becomes fully applicable on August 2, 2026. Prohibitions and AI literacy obligations have applied since February 2, 2025, and from August 2, 2025 the governance rules and obligations for general-purpose AI models take effect. Rules for high-risk AI systems embedded in regulated products benefit from an extended transition period, running until August 2, 2027.

I am a content writer with over three years of experience. I specialize in creating clear, engaging, and value-driven content across diverse niches, and I’m now focused on the tech and business space. My strong research skills, paired with a natural storytelling ability, enable me to break down complex topics into compelling, reader-friendly articles. As an avid reader and music lover, I bring creativity, insight, and a sharp eye for detail to every piece I write.
