
The EU AI Act: Implications and lessons for India

May 29, 2024 01:24 PM IST

Authored by Ananya Raj Kakoti and Gunwant Singh, scholars of international relations, Jawaharlal Nehru University, New Delhi.

In March 2024, the European Parliament approved the EU AI Act, a pioneering piece of legislation designed to establish the world's first comprehensive regulatory framework for Artificial Intelligence (AI). The landmark decision passed by a significant majority of 523 votes to 46, demonstrating the European Union's (EU) commitment to leading the charge in AI governance. The Act employs a risk-based approach to regulation, categorising AI applications by risk level and imposing proportionate regulatory requirements. As the AI Act is rolled out through 2027, its impact is expected to extend globally, influencing international tech companies and shaping the future of AI development. This article examines the key features and implications of the EU AI Act and explores what India can learn from this trailblazing regulation.


The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Practices deemed to pose unacceptable risks, such as social scoring by governments and real-time biometric identification in public spaces, are banned. High-risk AI applications, including those used in critical infrastructure, education, employment, and law enforcement, face stringent requirements. These include conducting fundamental rights impact assessments, ensuring human oversight, and maintaining robust data governance and cybersecurity measures. The Act also mandates transparency, requiring that users be informed when they are interacting with an AI system. To enforce compliance, it establishes a centralised EU database for high-risk AI systems and creates the EU AI Office to oversee regulation and ensure adherence to its provisions.

The EU AI Act is set to establish a new global benchmark for AI governance, with far-reaching implications. Its rigorous standards are likely to influence AI regulations worldwide, much as the General Data Protection Regulation (GDPR) shaped global data privacy laws. Countries looking to strengthen their AI regulatory frameworks may adopt or adapt elements of the EU's approach to align with these new standards. International companies operating within the EU or marketing AI technologies to EU consumers will need to comply with the Act's requirements, leading to significant operational changes, increased compliance costs, and potential restructuring of business models. Companies must invest in robust compliance programs, including transparency measures and risk assessments, to avoid severe penalties.

By enforcing strict regulations on high-risk AI applications and banning harmful practices, the Act encourages the development of ethical and safe AI technologies. This can foster a more trustworthy AI ecosystem globally, promoting innovation within ethical boundaries. Companies that meet these standards may gain a competitive edge by building consumer trust and enhancing their brand reputation.

Furthermore, as the EU AI Act sets a high bar for AI regulation, it can facilitate the harmonisation of AI standards across regions, promoting international collaboration and the interoperability of AI systems. Countries and international bodies might work together to establish compatible regulatory frameworks, reducing barriers to cross-border AI deployment. Finally, the Act's emphasis on protecting fundamental rights and preventing discriminatory practices sets a strong ethical foundation for AI development. This human-centric focus aligns with broader international efforts to ensure responsible AI use that does not infringe on human rights, potentially leading to wider adoption of these principles and mitigating the risks associated with AI.

For technology companies, the EU AI Act presents both challenges and opportunities. Companies must navigate the Act's complex compliance requirements, which may involve significant operational changes and increased costs. However, adhering to these standards can enhance their reputation for ethical practices and provide a competitive advantage in a market that increasingly values data privacy and security. Providers of high-risk AI systems will need to invest in robust compliance programs, ensure transparency, conduct thorough impact assessments, and maintain detailed documentation. Non-compliance could result in severe penalties, including fines of up to €35 million or 7% of global annual turnover, whichever is higher.

India, with its growing tech industry and expanding AI capabilities, can learn several lessons from the EU AI Act:

· Risk-based regulation: India’s AI policy could benefit from a risk-based approach, categorising AI applications by their potential societal impact. This would allow for targeted regulation that balances innovation with public safety and ethical considerations.

· Transparency and accountability: Ensuring AI systems are transparent and accountable is crucial. India should mandate clear disclosure when AI is used and establish mechanisms for human oversight, particularly for high-risk applications.

· Data governance and privacy: Robust data governance frameworks are essential to protect personal data and ensure its ethical use in AI systems. India can look to the EU's stringent data protection standards as a model for its own regulations.

· Global alignment: Since AI technology transcends borders, aligning India’s AI regulations with international standards can facilitate global cooperation and ensure that Indian companies remain competitive in the international market.

· Institutional framework: Establishing dedicated bodies to oversee AI implementation and compliance, similar to the EU AI Office, can help ensure consistent regulation and address emerging challenges in the AI landscape.

The EU AI Act represents a significant step towards creating a safe, ethical, and accountable AI ecosystem. While its stringent requirements pose challenges for businesses and researchers, they also set a global benchmark for AI governance. India, poised to become a major player in the global AI arena, can draw valuable lessons from the EU's approach, adopting a balanced regulatory framework that promotes innovation while safeguarding fundamental rights and public trust. By doing so, India can harness the full potential of AI technologies for sustainable and inclusive growth.

