Accountability is Critical to Safer, More Responsible AI

We focused on the modes of communication that typical users employ with LLMs, aiming to assess how these models can generate and perpetuate bias and hate in users' everyday lives.

As artificial intelligence becomes embedded in how people learn, communicate, and make decisions, the risks of bias, misinformation, and extremism are no longer theoretical; they are operational.


Educator & Caregiver Guide for Responsible AI Adoption

Review our guidelines for using the ADL AI Index™ to responsibly evaluate risks and make values-aligned decisions on AI adoption.

Subscribe to extremely™ brief Newsletter

Your go-to resource for emerging trends and developments in extremism and hate.