Many school district leaders have been asking what they should think about when adopting AI tools. Here is a list of some things to consider:
Must-haves:
-For under 13 (possibly under 18) users - teachers and/or administrators should have transparency into how students are using the tools (i.e., access to student conversations/transcripts and summaries of student AI activity)
-For under 18 users - there must be clear moderation mechanisms that keep students from engaging in harmful use cases (self-harm, hate, etc.). These mechanisms must proactively notify key stakeholders (e.g., teachers and administrators) when the system detects these situations (see the illustrative sketch after this list).
-AI vendor must use high-quality underlying models that minimize errors and hallucinations (today this means GPT-4, Gemini 1.5 Pro, Anthropic Claude 3, or better). GPT-3.5 is not acceptable for student/teacher use.
-AI vendor must have clear contracts with AI model creators stipulating that student data cannot be used to train the general, public models
-AI vendor must be clear that student/educator data will not be sold to third parties in ANY circumstance (even in the event of bankruptcy or an acquisition)
-AI vendor must have clean SOC 2 audits - this helps ensure that student and educator data is protected and secure
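To make the moderation must-have above more concrete, here is a minimal sketch of what "detect and proactively notify" could look like. This is purely illustrative, not any vendor's actual implementation: the category names, threshold, `classify` classifier, and `notify_stakeholders` hook are all hypothetical placeholders a district might ask a vendor about.

```python
# Illustrative sketch only -- names, categories, and threshold are hypothetical,
# not any specific vendor's implementation.
from dataclasses import dataclass

FLAGGED_CATEGORIES = {"self_harm", "hate", "violence"}  # example categories

@dataclass
class ModerationResult:
    category: str
    score: float  # 0.0 - 1.0 confidence from a moderation classifier

def check_and_notify(student_id: str, message: str,
                     classify,             # hypothetical moderation classifier
                     notify_stakeholders,  # hypothetical email/SMS alert hook
                     threshold: float = 0.8) -> bool:
    """Run a student message through moderation; alert staff if flagged."""
    result: ModerationResult = classify(message)
    if result.category in FLAGGED_CATEGORIES and result.score >= threshold:
        # Proactive notification: teachers/administrators are alerted right away,
        # rather than discovering the conversation in a transcript later.
        notify_stakeholders(
            student_id=student_id,
            category=result.category,
            excerpt=message[:200],
        )
        return True
    return False
```

The key point for districts is not the specific code but the behavior: flagged conversations should trigger an alert to designated staff automatically, not sit in a log waiting to be found.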
Nice-to-haves:
-AI vendor has taken care to prevent use of AI for cheating.
-AI vendor has added layers to the base model to minimize errors/hallucinations
-AI vendor has evaluation/benchmarking mechanisms to measure the AI error rate (a simple sketch of what this could look like follows this list)
-AI vendor has school/district/system level reporting so that AI use can be monitored centrally
-AI vendor has professional development for teachers to understand how to deploy in the classroom
-AI vendor has training for students to understand how to use the AI tools (and how to mitigate risks)
-AI tool supports multiple languages
-AI tool can be a "one-stop" solution across grades and subjects to avoid fragmenting the experience (and the district having to manage multiple solutions).
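As a rough illustration of the evaluation/benchmarking item above, here is a minimal sketch of how a vendor (or a district doing its own spot checks) might measure an error rate against a small set of questions with known answers. The `ask_model` callable and the substring grading are assumptions for illustration; real evaluations use much larger question sets and rubric-based or human grading.

```python
# Minimal, hypothetical error-rate benchmark -- for illustration only.
def benchmark_error_rate(ask_model, eval_set):
    """ask_model: callable that takes a question string and returns the model's answer.
    eval_set: list of (question, expected_answer) pairs with known-correct answers."""
    errors = 0
    for question, expected in eval_set:
        answer = ask_model(question)
        # Naive grading: count an error if the expected answer does not appear
        # in the response. Real benchmarks grade far more carefully.
        if expected.lower() not in answer.lower():
            errors += 1
    return errors / len(eval_set)

# Example usage with a toy evaluation set:
# rate = benchmark_error_rate(my_model, [("What is 7 x 8?", "56")])
# print(f"Error rate: {rate:.1%}")
```

Even a simple harness like this gives a district a concrete number to ask vendors about, and a way to compare tools over time.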
The Kira team is the best!