AI ratings system flags popular AI models as unsafe for kids

IANS

Popular artificial intelligence (AI) models like Snapchat’s My AI, DALL-E, and Stable Diffusion may not be safe for kids, showing an increased likelihood of emotional, physical, and psychological harm, a first-ever AI ratings system has revealed.

The ratings system was designed by Common Sense Media, a leading advocacy group for children and families, to assess the ethical use, transparency, safety, and impact of AI products.

Developed with input from a range of AI experts, the AI ratings system evaluates products against a set of “Common Sense AI Principles” that help the public to understand where these tools are using best practices — and where they may compromise human rights or data privacy, or perpetuate misinformation and unfair bias.

In reviewing 10 popular apps (including ChatGPT and Bard) on a five-point scale, Common Sense drew insights about today's AI landscape that will be useful for policymakers, educators, parents, and consumers.

On that five-point scale, ChatGPT and Bard received a rating of 3, while Snapchat’s My AI, DALL-E, and Stable Diffusion received just 1 or 2.

“Consumers must have access to a clear nutrition label for AI products that could compromise the safety and privacy of all Americans — but especially children and teens,” Common Sense Media founder and CEO James P. Steyer said.

The ratings will inform new legislative and regulatory efforts to keep kids safe online and to push for increased transparency from creators of these new products.

“AI is not always correct, nor is it values-neutral,” said Tracy Pizzo-Frey, Common Sense Media's senior advisor of AI, who helped bring the project to life.

“All generative AI, by virtue of the fact that the models are trained on massive amounts of internet data, host a wide variety of cultural, racial, socioeconomic, historical, and gender biases – and that is exactly what we found in our evaluations,” Pizzo-Frey added.