Vadym Honcharenko’s Post

Senior Privacy Manager @ Grammarly Legal | AIGP, CIPP/E/US/C, CIPM/T, CDPSE

EDPB: Report of the work undertaken by the ChatGPT Taskforce. The report outlines the common denominator agreed upon by the EU SAs in interpreting the GDPR as it applies to OpenAI's ChatGPT. It also contains the questionnaire that different SAs used when interacting with OpenAI.

Key takeaways:

- One of the most crucial takeaways is that companies cannot rely on technical impossibility as an excuse for not implementing GDPR requirements. The EDPB points here to the data protection by design requirement, under which GDPR requirements must be considered both when determining the means of processing and at the time of the processing itself. In other words, recently raised issues such as whether LLMs can honor data rectification requests, or concerns that anonymization might make the data useless for improving services, should be addressed from the very beginning of building a model and service.

- When relying on the legitimate interest legal basis, remember that adequate safeguards play a special role in reducing the undue impact on data subjects and can therefore tip the balancing test in favor of the controller (e.g., define precise collection criteria; ensure that certain data categories, such as special categories of data, are not collected or are removed after collection; delete or anonymize personal data collected via web scraping before the training stage). Also, if you use prompts for training purposes, a sufficient level of transparency to users about this is another crucial factor in performing the LIA.

- Do not shift responsibility for GDPR compliance onto your users by stating in your Terms or other user-facing documents that users are responsible for what they put into prompts; data entered into your system is your responsibility. For example, here's how Google Gemini puts it (https://lnkd.in/gjkxwjgF): "Please don't enter confidential information in your conversations or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies."

- To comply with the transparency requirements, users should be informed that the generated text, although syntactically correct, may be biased or made up. Still, this cannot become an argument for not complying with the data accuracy principle; the report says you must abide by it in any case.

- It is also worth working through the questionnaire at the end of the report to see where your gaps might be (e.g., have your DPIAs, LIAs, and purpose compatibility assessments in good shape). Also check your contracts with processors or joint controllers, as you might need to show [if you are the sole controller] how you ensured that no other party (for example, another company) decides the purposes and means of the personal data processing in the context of the LLM and software infrastructure.

#GDPR #AI #privacy
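As a minimal illustration of one safeguard mentioned above (deleting or anonymizing personal data collected via web scraping before the training stage), here is a hedged Python sketch. It is not from the report; the regex patterns and placeholder labels are illustrative assumptions, and a production pipeline would rely on dedicated PII-detection tooling plus human review rather than two regexes.

```python
import re

# Illustrative sketch only (not the EDPB's or OpenAI's method): redact a
# couple of common PII patterns from scraped text before it enters a
# training corpus. Real pipelines need far broader coverage and review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders like [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact jane.doe@example.com or +1 (555) 123-4567."
print(redact_pii(sample))  # → Contact [EMAIL] or [PHONE].
```

The point of the sketch is the pipeline position, not the patterns: the redaction runs before training data is fixed, which is the "by design" timing the report emphasizes.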
