
The AI Act explained in 5 points

The AI Act, adopted by the European Union in 2024 (with progressive entry into application through 2026), aims to regulate the safe, transparent and ethical use of artificial intelligence. It is the first comprehensive legal framework to regulate artificial intelligence according to a risk-based approach. Its objective: to guarantee reliable, ethical, transparent and secure AI, while protecting the fundamental rights of European citizens.

  1. Identify the risk level of the AI system


The AI Act classifies AI systems into four risk categories (a classification sketch in code follows this list):


  • Unacceptable risk: Outright ban (e.g., AI exploiting the vulnerabilities of children or people in precarious situations, cognitive manipulation, social scoring, real-time biometric identification in public spaces).


  • High risk: Systems subject to strict requirements (conformity assessment, documentation, transparency, governance, cybersecurity), such as AI in healthcare, education, justice, employment and critical infrastructure.

See also: The risks of AI in the healthcare sector


  • Limited risk: Transparency obligations (e.g., chatbots, virtual assistants, content generators, recommendation systems).

  • Minimal risk: Little or no constraints (e.g., video games, photo filters, consumer AI with no social impact).
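To illustrate this first step, here is a minimal sketch in Python of how an organization might tag the systems in its AI inventory with these four tiers. The tier names follow the AI Act; the AISystem structure and the example entries are hypothetical.

from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # little or no constraints

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical inventory entries, tagged with the AI Act's four tiers.
inventory = [
    AISystem("triage-assistant", "patient triage in healthcare", RiskTier.HIGH),
    AISystem("support-chatbot", "customer support dialogue", RiskTier.LIMITED),
    AISystem("photo-filter", "cosmetic image filters", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value} risk ({system.purpose})")

Such an inventory is typically the starting point for deciding which of the obligations in the next section apply.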


  2. Safety measures required for high-risk AI


    Developers and providers must ensure the following:


    • Rigorous risk management

      • Identify, analyze and mitigate risks throughout the AI lifecycle.

      • Update the system to address new vulnerabilities.

    • Quality of training data

      • Relevant, representative data without discriminatory bias.

      • Cybersecurity measures to protect data sets.

    • Traceability and complete technical documentation (a logging sketch in code follows this list)

      • Continuous logging (logs).

      • Technical files detailing design, testing and incidents.

    • Mandatory human supervision

      • Humans must be able to intervene, correct or deactivate the system at any time.

      • Clear training and instructions for operators.

    • Robustness and cybersecurity

      • Resistance to attacks (e.g., data poisoning, adversarial attacks).

      • Continuous testing to ensure stability and security.

      • Obligation to provide secure software updates.
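As a purely illustrative sketch of two of these measures, the Python snippet below combines continuous logging (traceability) with a human deactivation hook (human supervision). The HumanSupervisedModel wrapper, the log format and the stand-in model are hypothetical choices, not anything prescribed by the regulation.

import logging

# Continuous, timestamped logging of every prediction (traceability).
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

class HumanSupervisedModel:
    """Wraps a model so a human operator can deactivate it at any time."""

    def __init__(self, model):
        self.model = model
        self.active = True  # flipped off by a human operator

    def deactivate(self, operator: str, reason: str):
        self.active = False
        logging.warning("Deactivated by %s: %s", operator, reason)

    def predict(self, features):
        if not self.active:
            raise RuntimeError("System deactivated by human supervisor")
        result = self.model(features)
        logging.info("input=%r output=%r", features, result)  # audit trail
        return result

# Hypothetical usage with a stand-in model.
wrapped = HumanSupervisedModel(lambda x: sum(x) > 1.0)
print(wrapped.predict([0.4, 0.9]))
wrapped.deactivate("operator-42", "suspected data drift")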


  3. Measures for generative AI systems and foundation models (high or moderate risk, depending on their use)


  • Do not produce dangerous or misleading content (disinformation, incitement, illicit content).

  • Clear labeling of generated content (e.g., “AI image”, “AI text”); a labeling sketch in code follows this list.

  • Transparency about training data and model limitations.

  • Protection against malicious use (deepfakes, manipulation, etc.).
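As a sketch of what clear labeling could look like at the application layer, the snippet below attaches a machine-readable provenance label to generated output. The field names are hypothetical; real deployments would more likely follow a provenance standard such as C2PA.

import json
import datetime

def label_generated_content(text: str, model_name: str) -> dict:
    """Attach a machine-readable 'AI-generated' label to model output.

    The field names here are illustrative, not taken from any standard.
    """
    return {
        "content": text,
        "generated_by_ai": True,
        "model": model_name,
        "label": "AI text",
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = label_generated_content("Draft summary...", "example-llm-v1")
print(json.dumps(record, indent=2))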



  4. Post-market surveillance and liability



  • Systems must be monitored after deployment.

  • Serious incidents must be reported to the competent authority within 15 days (a deadline sketch in code follows this list).

  • Providers must register their high-risk AI systems in the European database for high-risk AI.
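Here is a small sketch of how a compliance team might track the 15-day reporting window after becoming aware of a serious incident. The deadline arithmetic mirrors the bullet above; the IncidentRecord structure and the dates are hypothetical.

from dataclasses import dataclass
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # serious incidents: report within 15 days

@dataclass
class IncidentRecord:
    description: str
    awareness_date: date  # day the provider became aware of the incident

    @property
    def reporting_deadline(self) -> date:
        return self.awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)

    def days_remaining(self, today: date) -> int:
        return (self.reporting_deadline - today).days

# Hypothetical incident logged on 2025-03-01.
incident = IncidentRecord("Erroneous high-risk output", date(2025, 3, 1))
print("Report to the competent authority by:", incident.reporting_deadline)
print("Days remaining as of 2025-03-10:", incident.days_remaining(date(2025, 3, 10)))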



  5. Why should you care in Canada?


Although Bill C-27, which included the Artificial Intelligence and Data Act (AIDA) and aimed to establish both a federal data protection law and a regulatory regime for “high-impact” AI systems, has been put on hold for the time being, many frameworks and standards are already available for the use of trusted AI.


Only Canadian companies that serve the European market (for example, by placing AI systems on the EU market or offering them to EU users) have to comply with the AI Act.


Europe has long been a pioneer in personal data protection and risk regulation. France adopted its first data protection law in 1978, creating the Commission nationale de l'informatique et des libertés (CNIL).


Canada has the best of both worlds: it can draw on the expertise of these existing frameworks to avoid bias and errors without hampering innovation, while promoting responsible and ethical innovation.


Ultimately, the following recommendations remain the benchmark (a checklist sketch in code follows the list):


  • Perform an AI risk assessment before any deployment.

  • Ensure data and supply chain security.

  • Implement an audit and traceability process.

  • Define human supervision and emergency shutdown procedures.

  • Establish continuous robustness testing and alert mechanisms.
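To make this checklist actionable, here is a hypothetical pre-deployment gate that blocks release until every recommendation has been signed off. The check names simply mirror the bullets above; nothing about this structure is mandated.

# Hypothetical pre-deployment gate mirroring the recommendations above.
CHECKLIST = [
    "risk_assessment_done",
    "data_and_supply_chain_secured",
    "audit_and_traceability_in_place",
    "human_oversight_and_kill_switch_defined",
    "robustness_tests_and_alerts_running",
]

def ready_to_deploy(signoffs: dict[str, bool]) -> bool:
    missing = [item for item in CHECKLIST if not signoffs.get(item, False)]
    for item in missing:
        print(f"BLOCKED: {item} not signed off")
    return not missing

# Example: one item still open, so deployment is blocked.
status = dict.fromkeys(CHECKLIST, True)
status["robustness_tests_and_alerts_running"] = False
print("Deploy?", ready_to_deploy(status))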


 
 
 
