Phronews
    Artificial Intelligence & The Future

    Europe’s AI Act: Preparing for High-Risk AI Rules in 2025

By oluchi · June 4, 2025 (Updated: June 11, 2025)
Image: Punter Southall Law

Europe is setting a global precedent with the European Union's AI Act, the world's first comprehensive regulation of artificial intelligence. Adopted on March 13, 2024, the Act reaches its next major milestone on August 2, 2025, when key obligations take effect, and it is designed to ensure that AI technologies used within the EU are safe, transparent, and consistent with fundamental human rights.

    What is the EU AI Act?

The EU AI Act is a comprehensive legal framework, proposed by the European Commission, that establishes clear, risk-based rules for AI developers and deployers according to specific uses of AI.

The EU AI Act aims to ensure Europeans can trust what AI has to offer by setting boundaries based on potential risk. It is designed to support innovation and investment while keeping AI safe, ethical, and beneficial, and a voluntary pact helps ease the transition for those affected by the new rules.

    EU AI Act Risk Classifications

The EU AI Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable-risk AI systems pose a clear threat to the safety, rights, and livelihoods of people.

For this classification, the AI Act prohibits eight practices: harmful AI-based manipulation and deception; harmful AI-based exploitation of vulnerabilities; social scoring; predicting the risk of an individual committing a criminal offense based solely on profiling; untargeted scraping of the internet or CCTV footage to create or expand facial recognition databases; emotion recognition in workplaces and educational institutions; biometric categorization to deduce protected characteristics such as race, religion, or political views; and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.

These practices are banned because they violate privacy, open the door to surveillance and discrimination, often lack scientific reliability, and undermine human dignity and autonomy.

High-risk AI systems pose serious risks to health, safety, or fundamental rights. Use cases in this class include AI in critical infrastructure (such as traffic control or electricity grids), education, medical devices and surgery, hiring and workplace management, access to essential services, biometric and emotion recognition, law enforcement, migration and border control, and justice systems and democratic processes.

According to the AI Act, before a high-risk AI system can be placed on the market, it must undergo adequate risk assessment and mitigation; be built on high-quality data sets to minimize discriminatory outcomes; record how decisions are made and keep logs of its operation; provide clear technical documentation for authorities; give clear and adequate information to users; include human oversight measures; and achieve a high level of robustness, cybersecurity, and accuracy.

    Limited risk AI systems do not pose major threats but still require a level of transparency to ensure public trust and informed use. This category includes chatbots and generative AI.

    Minimal/no-risk AI systems are largely considered safe and do not require specific regulation under the AI Act. Examples of minimal-risk AI systems include AI in video games and spam filters in email systems.
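As a rough illustration (not part of the Act itself, and no substitute for legal analysis), the four tiers and their regulatory consequences described above can be sketched as a simple lookup table, using example systems named in the article:

```python
# Illustrative sketch only: a toy mapping of the EU AI Act's four risk tiers
# to their broad regulatory consequence. Tier assignments paraphrase the
# article's summary of the Act; this is not a compliance tool.

RISK_TIERS = {
    "unacceptable": {
        "consequence": "prohibited",
        "examples": ["social scoring", "untargeted facial-recognition scraping"],
    },
    "high": {
        "consequence": "strict obligations (risk assessment, logging, human oversight)",
        "examples": ["AI in critical infrastructure", "AI in hiring"],
    },
    "limited": {
        "consequence": "transparency obligations",
        "examples": ["chatbots", "generative AI"],
    },
    "minimal": {
        "consequence": "no specific obligations under the Act",
        "examples": ["spam filters", "AI in video games"],
    },
}

def consequence_for(tier: str) -> str:
    """Return the broad regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]["consequence"]

print(consequence_for("unacceptable"))  # prohibited
```

The point of the sketch is simply that obligations scale with the tier: everything above "minimal" carries at least a transparency duty, and the top tier is banned outright.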

    Preparing for High-Risk AI Rules in 2025

Implementation of Europe's AI Act has not been without criticism from stakeholders who argue it could stifle innovation, particularly among startups and small enterprises.

    To mitigate these concerns, the European Commission has proposed simplification initiatives to reduce the compliance burden without compromising the objectives of the AI Act. The commission also plans to launch an AI Act service desk in summer 2025 to provide practical compliance guidelines and support for businesses navigating the new regulations.

The AI Act becomes fully applicable on August 2, 2026. From August 2, 2025, however, the prohibitions, AI literacy obligations, and the governance rules and obligations for general-purpose AI models already apply. Rules for high-risk AI systems embedded in regulated products will follow by August 2, 2027.

    oluchi

    I am a content writer with over three years of experience. I specialize in creating clear, engaging, and value-driven content across diverse niches, and I’m now focused on the tech and business space. My strong research skills, paired with a natural storytelling ability, enable me to break down complex topics into compelling, reader-friendly articles. As an avid reader and music lover, I bring creativity, insight, and a sharp eye for detail to every piece I write.
