Very insightful panel on AI-based surveillance and mistreatment at #GIJC2023, with presentations by Garance Burke, Gabriel Geiger and Lam Thuy Vo. Authorities around the world are increasingly deploying biased and poorly trained AI systems to decide who gets care and who gets a ‘preventative’ interrogation. AI carries more risk than conventional algorithms because it creates rules ‘bottom-up’ from biased data; those rules are then baked into a model that is itself opaque. Thus, as one team found, being young and speaking Turkish at home ended up being incorporated into one algorithm as factors that led to people being flagged for intrusive interrogations.
Is your city / country using AI to sort and industrially process people this way? Try filing an FOI request to find out!