Izzy Neis’ Post


Head of Digital @ ModSquad | CX | Community | Trust & Safety | Moderation | Social Media |

Watching "AI in the Children’s Space: A Conversation with an FTC Official" with the amazing trio of Rukiya Bonner, Michelle Rosenthal, and Dona J. Fraser. https://lnkd.in/gXmcV5K6 My quick notes, bear with me: 1. FTC's interest: How does AI usage deceive the viewer? Is it clear what it is (the AI content)? Or is the content creating false trust through Gen AI deception. 2. What data is collected? Is the data used for training parsed so youth data is deleted? If youth data is collected, was it obtained through verifiable consent? 3. From my POV on this call, it seems: Federal Gov is more focused on research, partnership with operators, and learning from EU's AI act, DSA (for governance), and GDPR (for ops) before they create legislation. Meanwhile, States are proactive in this space - and in their quick response are somewhat overlapping rules (which may make it difficult for a federal approach?). 4. Imperative to consider children as a subset (automatically) of any reference in legislation/regulation focused on "sensitive data". 5. In virtual environments, they're focused on topics like "NPC statements vs NPC sales in-game and how that may be manipulative or challenging advertising laws. Or how deep-fakes are abusive of Generative AI and endangering youth (data, well-being, etc). Lots of resources: CARU's "pre-screening review", COPPA safe harbor (woot) & resources (they're great partners to support compliance), the FTC's various resources & current case reviews, NIST -- understand section 5, updates to COPPA, DSA, GDPR, the EU's AI laws, etc. Mad props to Dona Fraser. I've worked with her on/off for the last 15 years (from ESRB to now BBB). She's amazing.
