Unraveling the EU's AI Act - Another GDPR Move?


How will the EU's latest legislative endeavor, the AI Act, reshape the landscape of Artificial Intelligence (AI) and Machine Learning (ML) across Europe and beyond?

In a development that mirrors the impact of GDPR, the EU has agreed on the AI Act, which is expected to be finalized in the coming weeks. The legislation, set to be enacted on December 28th, 2024, with a two-year grace period, attempts to regulate all forms of AI and ML use-cases across providers and users. It aims to create safety and transparency around the use of AI systems.

With fines of up to 35 million euros or 6% of revenue (whichever is higher), or 3% of revenue for SMEs and startups, this is a regulation to be keenly aware of, especially if your company operates in or serves EU audiences.
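As a back-of-the-envelope illustration of the penalty ceilings described above, here is a hypothetical sketch (the Act's final fine tiers, revenue definitions, and SME thresholds may well differ from this simplified reading):

```python
def max_fine_eur(annual_revenue_eur: float, is_sme: bool = False) -> float:
    """Rough sketch of the penalty ceiling described above.

    Large firms: the higher of EUR 35M or 6% of revenue.
    SMEs/startups: capped at 3% of revenue (a hypothetical reading).
    """
    FLAT_CAP = 35_000_000
    if is_sme:
        return 0.03 * annual_revenue_eur
    return max(FLAT_CAP, 0.06 * annual_revenue_eur)

# For a firm with EUR 1B revenue, 6% (EUR 60M) exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

The takeaway is that for any firm with revenue above roughly 583 million euros, the percentage cap dominates the flat cap.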

The legislation focuses on three primary domains:

  1. General-purpose AI systems (TITLE Ia): This includes companies like OpenAI, Anthropic, AI21Labs, or StabilityAI, which build and provide general-purpose AI models used by others to create end-user products.
  2. Prohibited artificial intelligence practices (TITLE II): This section lists AI practices that are outright banned.
  3. High-risk AI systems (TITLE III): This includes use-cases the EU deems at risk of violating people's fundamental rights.

Central to this legislation is the protection of fundamental rights and managing the uncertainties arising from advanced algorithmic systems.


Prohibited artificial intelligence practices

  1. Prohibiting AI systems that use subliminal techniques to significantly distort behavior, causing physical or psychological harm.
  2. Banning AI systems exploiting vulnerabilities of specific groups (age, disability, socio-economic status) to materially distort behavior, potentially causing harm.
  3. Forbidding AI systems that evaluate or classify individuals based on their social behavior or personality for social scoring, leading to detrimental treatment unrelated to the context in which data was collected or disproportionate to the individual's behavior.
  4. Restricting real-time remote biometric identification by law enforcement to narrowly defined conditions.

The broad scope of prohibitions 1-3 raises key challenges for several sectors that rely on AI:

  • AI in Advertising and Social Networks: The use of AI for subtle consumer influence and extensive behavioral monitoring in advertising and on social networks could fall under the scrutiny of these regulations.
  • Recommendation Systems in Various Sectors: Hyper-personalization in recommendation systems, commonly used in e-commerce, finance, and education (think adaptive learning applications, course recommendations, student profiling for internship recommendations, etc.), could all be seen as manipulative under these rules.
  • Enterprise Applications Influencing Critical Decisions: AI applications in enterprise settings, such as workflow optimization, HR, Social Enterprise, and Upskilling, often make critical decisions that can influence employee tasks, career progression, and overall company operations. The provisions in Article 3(4) cover employment, the management of workers, task allocation, and the monitoring/evaluation of staff. This matters especially because algorithmic recommendations can compound issues of bias, discrimination, due process, and accountability when deployed without appropriate transparency and oversight safeguards alongside them.


High-Risk AI systems

In response to these challenges, the Act introduces intricate compliance processes for high-risk AI systems, akin to drug approval workflows (including clinical trials!). This includes National AI Regulatory Sandboxes for controlled development and testing, a public registry of high-risk AI systems, the formation of an EU Artificial Intelligence Board for consistency in enforcement, and strict penalties for non-compliance.


The use-cases defined as High-Risk AI are:

  1. Biometrics: Remote biometric identification.
  2. Critical Infrastructure: AI in safety management for critical digital infrastructure, traffic, and essential utilities (water, gas, electricity, heating).
  3. Education and Training: AI for access and assignments in educational/vocational institutions; evaluating learning outcomes.
  4. Employment and Management: AI in recruitment, candidate evaluation, promotion, contract termination, task allocation, and performance monitoring.
  5. Essential Services and Benefits: AI in public assistance eligibility, personal creditworthiness, emergency service dispatch, and life/health insurance risk assessment.
  6. Law Enforcement: AI for risk assessment of offending, polygraph and emotional state detection, evidence reliability, predicting offenses, and criminal profiling.
  7. Migration and Border Control: AI for polygraph/emotional state detection in immigration, assessing entry risks, and examining asylum/visa applications.
  8. Justice and Democracy: AI in judicial decision-making, fact interpretation, and law application.
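A compliance team might begin triaging its AI inventory with a simple keyword screen against the domains listed above. The following is a hypothetical sketch only; under the Act, classification is determined by its annexes and legal review, not keyword matching, and the keywords below are illustrative assumptions:

```python
# Hypothetical triage helper mapping each high-risk domain from the list
# above to example keywords. Real classification requires legal review.
HIGH_RISK_DOMAINS = {
    "biometrics": ["remote biometric identification"],
    "critical infrastructure": ["traffic", "water", "gas", "electricity", "heating"],
    "education": ["admission", "exam scoring", "learning outcomes"],
    "employment": ["recruitment", "promotion", "task allocation", "performance monitoring"],
    "essential services": ["creditworthiness", "public assistance", "insurance risk"],
    "law enforcement": ["polygraph", "criminal profiling", "risk assessment"],
    "migration": ["asylum", "visa", "border"],
    "justice": ["judicial decision"],
}

def screen_use_case(description: str) -> list[str]:
    """Return the domains whose keywords appear in a use-case description."""
    text = description.lower()
    return [domain for domain, keywords in HIGH_RISK_DOMAINS.items()
            if any(kw in text for kw in keywords)]

print(screen_use_case("AI ranking applicants during recruitment"))
# ['employment']
```

A screen like this only flags candidates for closer review; a use-case matching no keyword may still be high-risk under the Act's actual definitions.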


Let's dive a bit deeper into National AI Regulatory Sandboxes:

  • These are controlled environments and frameworks established by the national authorities of EU member states to facilitate the development, training, testing, and validation of innovative AI systems in live settings before full deployment.
  • They give regulators direct oversight through the monitoring and testing of systems in the sandbox, while helping companies continue innovating by getting systems operational and capturing real-world performance data to refine them.
  • No set format is specified, but implementing regulations cover issues like eligibility criteria, the application process, development and testing procedures, and exit plans, leaving flexibility for national authorities.
  • Participation is time-limited (no longer than 12 months), with a testing plan agreed up front between authorities and participating organizations, intended both to accelerate compliance-building and to review real-world performance.

On that last point, I wonder whether the EU will also fund these efforts and testing facilities, or whether that will become yet another burden on the companies developing AI/ML systems.


Impact on organization types

While the prohibitions apply to all organizations, the Act's impact will vary across different types of organizations:

  • Startups: While there are provisions to reduce compliance costs, those developing high-risk systems will still face considerable oversight, unless they partner with a large Enterprise (a relationship the Act does not properly define, and which can be interpreted to also cover cases where a startup provides technology to a large Enterprise).
  • Research Organizations: A supportive environment for R&D is proposed, with exemptions, though with potential limitations on accessing live systems.
  • Large Firms: These entities will face significant new compliance burdens and legal risks.
  • Government Agencies: As AI users and procurers, government agencies will now undertake extensive obligations around procurement, accountability, and transparency.


Yet another GDPR?

In summary, the legislation's broad definitions make it challenging to discern which AI applications will escape prohibition or high-risk classification. However, potential exceptions might include AI chatbots for non-critical customer service, coding assistants, industrial robotics in factory settings (excluding healthcare and public services), smart device applications like language translators, general analytics for business intelligence, and personal digital assistants for home or mobile use.

The Act adopts a highly cautious stance, reflecting the uncertainties surrounding AI's future development. Its focus seems to lie more on AI applications than on foundational models or research, which appear to have minimal, if any, compliance requirements. Notably, the legislation empowers the European Commission to continuously evaluate AI advancements and potentially expand the list of "high-risk" systems, including advanced general intelligence (AGI), should it become commercially viable.

While the Act introduces substantial consumer protections and enhances awareness of privacy and security, it also mirrors the General Data Protection Regulation (GDPR) in potentially imposing significant burdens, especially on small and medium-sized enterprises (SMEs) in the EU. These burdens include heavy compliance costs, complex and vague regulatory structures, and increased investment risks. This could hinder the EU's progress in adopting AI and ML systems, setting it back years compared to other regions rapidly advancing in this field.

Furthermore, while the legislation aims to address the handling and protection of confidential content, intellectual property rights, and sensitive company data during extensive review processes, it leaves open questions and risks. There are legitimate concerns about how expansive disclosure obligations might affect future innovation, and about the potential leakage of sensitive details through oversight processes, which could have far-reaching implications.
