Dr. Tatjana Evas’ Post


Legal and Policy Officer at European Commission DG CNECT, Artificial Intelligence policy development and coordination and Vice Chair of the OECD Working Party on Artificial Intelligence

Attending #CPDP2024 has left me with mixed feelings 🤔. 👍 It's incredible to witness almost all panels discussing the EU AI Act (check out the program!). This is truly amazing and the enthusiasm is encouraging. Joanna Bryson Ashley Casovan Sebastian Hallensleben Michel-Marie MAUDET Iakovina Kindylidi Emilia Gómez 👎 However, there is still quite a bit of factually (!) wrong information and analysis of the AI Act circulating, which is quite concerning. In light of this, in my personal opinion, the idea of professional certification for AI policy experts, as proposed by some scholars and practitioners, increasingly deserves serious consideration. Regrettably, I missed many excellent speakers and couldn't attend all the panels I wished to. Looking forward to connecting 'online' or 'offline' on other occasions to discuss the AI Act, requirements, risk management, standards, enforcement, compliance, global cooperation on AI in the G7, OECD, UN, GPAI, Safety Summits and much more! 🌐🤝 CPDP Conferences #AIAct #EU #AI

Alessandro Mantelero

Jean Monnet Chair in Mediterranean Digital Societies and Law | Independent expert on AI and human rights

1mo

Before focusing on certification and introducing additional bureaucratic elements, it would be nice to develop a proper methodology for risk management and impact assessment, and to do so in a truly transparent and participatory way, avoiding top-down or closed-door exercises.

Peter Hense 🇺🇦🇮🇱

Data | Privacy | Technology | Competition | Litigation | Travel & Hospitality Industry | Co-host @RegInt: Decoding AI Regulation

1mo

I completely agree with your analysis concerning quick judgments and limited knowledge. However, I do not believe that certifications will be beneficial because relying on *superficial* certifications has led to a false sense of achievement and trust within the privacy and data protection community.

Isabel Barberá

AI Advisor | Privacy Engineer | Co-founder of Rhite | Tech & Legal | PLOT4ai author | AI Risks | Member of ENISA Data Protection Engineering AHWG | Expert at ISO/IEC & CEN/CENELEC JTC21 developing AI technical standards

1mo

I share your feelings. Btw, this morning we also had a very interesting panel about the AI Act and medical devices, where we talked about risk management, the standardization work, transparency requirements and privacy, among other important concepts. Always happy to keep the discussion going online or offline 😊

Sebastian Hallensleben

VDE | OECD | CEN-CENELEC | UNESCO | EUOS StandICT | KI-Park | PAI

1mo

There are indeed still many misconceptions regarding AI regulation as well as standardisation in the EU. Thank you for your tireless efforts to communicate directly in so many different fora. It makes a big difference to hear about the AI Act from those who have actually been at the heart of the effort of crafting it, and who are now leading the implementation effort through the AI Office.

Joanna Bryson

Professor of Ethics and Technology; Founding member of the Hertie School Centre for Digital Governance

1mo

As they say about sci-fi books in a Chicago bookstore (the stars our limit), we can all agree half of them are crap, but we can't agree which half. I worry about certification processes that might get captured, and about who would be excluded. I'm often not able to attend meetings, and I recommend junior academics who, in my expert estimation, are likely to be helpful. Organisations have to be sure to hear from and/or amplify a number of diverse individuals, and panelists have to be ready to confront each other and really argue their corner. Audiences have to call out organisations that present biased panels failing to represent well-informed views. We have to work collectively in the area of discourse. At least, that's my opinion. Also, my certifications are my degrees, my publications, and the people who've (publicly) taken my advice. It's not that everyone's certification should match mine, just that those who assemble panels have to be able to recognise, and indeed seek out, diverse but fairly well-evidenced sources of expertise.

Chiara Gallese

Marie Skłodowska-Curie Postdoctoral Fellow - AI & Law, AI Ethics, Personal Data Protection Law, Health Law

1mo

The AI Act intersects with many other laws, European and national, first of all the EHDS and the GDPR. More institutional guidance is needed, in my opinion, like the guidance we had from the Article 29 Working Party. I have plans to analyze the intersection of these laws in an ERC project as an anticipatory regulation tool; I really hope I can get funded (my interview is on the 28th). 🍀🤞

Curious what the major areas of confusion or misinformation about the Act are.

Christina Hitrova

Responsible AI at PwC CZ | Technology for Good | Digital Ethics | Law and Regulation of Technology

1mo

I agree, it is hard sometimes to identify good-quality content in a quickly growing sea of information. Perhaps, as someone else mentioned, official guidance from the AI Office or the AI Board can help with some of these sticky issues, and in the meantime perhaps a FAQ of common misconceptions? It's a real issue; now how do we solve it? Certifications might be part of the solution, but surely not the whole of it. Even those creating certification content are themselves interpreting the law to the best of their abilities. Best would be if guidance and clarification came straight from the source.

Stephan Engberg

Specialist in trustworthy identity, security and data sharing

1mo

AI is a complex field building on top of other unresolved areas. The biggest problem is that real enforcement of the GDPR would resolve most of the issues poorly addressed in the AI Act - AI is secondary use of data and as such MUST BE ANONYMOUS BY DESIGN. The biggest problem in the space is the mis-implementation of eIDAS 2.0 - instead of empowering citizens to share data anonymously for research, as required by the regulation, the focus in the technical implementation (ARF) is on more surveillance and centralism. Much would work better if we could focus on AI without having to deal with the systemic abuse of personal data. This remains very poorly understood: https://www.edps.europa.eu/system/files/2022-07/03_-_stephan_engberg_-_edps_trustworthy_pki_engberg_20220622_en_0.pdf

Ashley Casovan

Managing Director, IAPP, AI Governance Center

1mo

Thanks for the shout-out. Sorry I missed you! Happy to hear you're thinking about certifications. I appreciate the comments about the challenges with certifications, which could also extend to some of the issues with standards. While inherently not perfect, IMO it's better to start establishing a bar of acceptability and continue to refine it over time. And it will support AI literacy objectives.
