Tools and Strategies

Artificial Intelligence: What do Parents, Caregivers and Educators Need to Know? 

Background 

Young people are using AI (artificial intelligence) tools every day for homework, curiosity, creativity, and social interaction. While exact statistics on daily AI usage are not yet available, recent Pew Research Center findings suggest that AI use begins to rise sharply in early adolescence, with widespread use among teens ages 13–17. Many young people see AI as a useful and credible source of information, but like any other online platform, it needs to be used critically and thoughtfully. Even though this innovative technology has powerful applications, research shows AI tools sometimes miss or repeat harmful stereotypes and misinformation in the areas tested. Because young people are spending more and more time with AI tools, the misinformation and bias they encounter there can fundamentally shape their understanding of the world, their beliefs, and even their safety online, according to a recent study from the Center for Digital Thriving at Project Zero.

What is AI and Why Does AI Literacy Matter? 

AI refers to computer systems that can perform tasks that typically require human intelligence, like answering questions, generating text, creating images, or making recommendations. Popular AI tools include ChatGPT, Claude, and Google's Gemini, as well as AI features built into social media and other social and educational platforms.

AI literacy means understanding how AI works, its limitations, and its potential for bias or misinformation. When children understand how AI works and why it sometimes gets things wrong, they become better equipped to recognize harmful narratives, including antisemitic ideas and conspiracy theories, on AI platforms and beyond. 

ADL research suggests that AI models vary in their ability to effectively detect and counter antisemitic and extremist narratives. This research highlights the need for caution and digital literacy when selecting and using AI models. 

Key Concerns About AI Use

The rise of, and increased reliance on, AI tools has generated a great deal of fear-mongering, especially around its implications for young people. While some of these fears are valid, as the results presented in the ADL AI Index suggest, they can be at least partially mitigated when young people (and all users of AI programs) are equipped with critical thinking, analytical, and media and digital literacy skills to engage thoughtfully and critically with AI-generated content. Young people, as digital natives, should develop these skills early, although the skills are equally relevant to anyone seeking to use AI responsibly.

  • AI doesn't "know" everything—it predicts patterns.
    • Most AI tools draw information from various sources online and use patterns to generate responses. They often don't verify facts or check for accuracy the way credible sources do. And while some AI platforms rigorously test their models, fact-check outputs, and employ accountability measures, many fall short. (For the curious, a simplified sketch of how pattern-based prediction works follows this list.)
  • Bias and stereotypes can appear in AI outputs.
    • AI sometimes gives answers that sound unfair, biased, or negative about groups of people because it learns from patterns that exist online—including harmful stereotypes.
  • Misinformation spreads quickly.
    • AI is designed to give fast, confident responses, but that doesn't mean the information is always correct. Young people need to understand that AI answers should be treated as a starting point, not necessarily the final answer.
  • Some AI tools are riskier than others.
    • ADL research suggests that some social media-connected AI tools are more prone to producing conspiratorial and other harmful content than other prominent models. 
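
For readers who want a concrete feel for what "predicting patterns" means, here is a deliberately tiny, hypothetical sketch in Python of the core idea: a toy model that can only continue word sequences it has already seen in its training text. Real AI systems are vastly more sophisticated, but the underlying point holds: outputs reflect the patterns in the training data, including any gaps, errors, or biases that data contains.

    from collections import defaultdict, Counter
    import random

    # Toy "training data": the only patterns this model will ever know.
    # If this text contained biased statements, the model would repeat
    # them just as readily; it cannot tell fair from unfair.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count which word tends to follow which (a simple bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Pick a next word in proportion to how often it followed `word`."""
        options = following[word]
        if not options:
            return None  # the model has never seen this word
        words, counts = zip(*options.items())
        return random.choices(words, weights=counts)[0]

    print(predict_next("the"))  # e.g., 'cat', 'dog', 'mat', or 'rug'
    print(predict_next("sat"))  # always 'on': the only pattern it has seen

Notice that the toy model can sound fluent within its tiny world while "knowing" nothing about cats or mats; it only continues familiar patterns. This is why an AI answer can sound confident and still be wrong.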

New ADL AI Index 

Recently ADL released the ADL AI Index, the first comprehensive evaluation of leading large language models (LLMs) assessing how effectively they detect and refute antisemitic and extremist tropes and narratives. The study evaluated six leading LLMs. While each model had gaps when responding to antisemitic and extremist material, overall performance varied substantially between models. To learn more about the specific findings of this study, explore the research here.

What Can We Do?

Teaching Critical Thinking and Responsible AI Use 

Teach young people a simple 3-Step Check to use before trusting an AI answer:

  1. Pause: "Does this answer feel right?"
  2. Verify: "Where else can I check this?" "What do other sources say?" 
  3. Ask: "Who can help me understand this topic better?" 

Based on research suggesting that AI use rises sharply by early adolescence, AI literacy education should begin before children start using AI themselves. Young children are often exposed to AI by watching older siblings, peers, or family members use these systems, so starting early helps them develop critical thinking skills before regular AI use becomes normalized. Importantly, AI literacy can be taught without requiring children to use AI tools directly: through discussions about how AI works, analyzing AI-generated content together, or conversations about how AI appears in daily life, parents and caregivers can help young people build the critical analysis skills they need.

  • Treat AI like a starting point, not the final answer
  • When AI provides information about Jewish people and other marginalized groups, cross-check with reputable sources
  • Take extra time to double-check information as part of using AI responsibly 

All AI assistants have some biases because they're trained on large datasets created by people, and those datasets often reflect real-world assumptions and societal biases. This is why it is important to cross-reference information when using AI tools. Talk with young people about which AI tools they're choosing and what they're finding when they use them. Share your concerns about their safety and ask them to be open with you about their experiences. Frame it as a partnership: work together on understanding AI safety rather than simply setting rules. Additionally, make sure you know your school's policies on AI use and adhere to them as required. If school policy allows students to choose which AI platforms to use, encourage young people to pick higher-safety tools. Consider an education-focused AI tool like ADL Ora, a closed, trusted system that pulls from expert-vetted resources designed for educators teaching sensitive and complex topics related to antisemitism.

Engage in the Conversation 

Age: 10 and up 

Questions to Start the Conversation 

  • What is AI and what examples of it do you see?
  • Where do you think AI gets its information from? And how does it develop responses? 
  • Why do you think the use of AI is growing? 
  • Is information from AI always accurate?  Why or why not? 
  • What experiences have you had with AI? What did you use it for? How did that experience go? 
  • What kinds of questions do you think AI is good at answering? What questions might not be best answered by AI?  

Questions to Dig Deeper 

  • If AI provides information about a group of people, how do you know if it’s true or a stereotype? Why do you think AI might sometimes give answers that sound unfair, biased, or negative about a group of people? 
  • What should you do if an AI tool gives an answer that feels wrong, biased or hurtful about an individual or group? 
  • If two sources say different things, what can you do next? How do you evaluate the reliability and credibility of information? 
  • What does it mean to use AI responsibly? 

Ideas for Taking Action 

Practice the 3-Step Check together

The next time someone in your family uses AI for homework or to answer a question, practice pausing, verifying, and asking together. 

Also, try these activities to build digital resilience: 

  • Compare and contrast: Ask two different AI tools the same question and discuss how the outputs differ.
  • Spot the problems: Look at examples of biased or misleading answers and talk about what makes them harmful. 
  • Fact-checking game: Take a surprising AI-generated fact and find evidence for or against it. 
  • Bias detective: Practice identifying when information is incomplete or biased. 
  • Model responsible AI use: Commit as a family to treating AI as a helpful tool, not a truth-teller. When you use AI, show young people how you verify information and when you choose to ask a person instead.

If a young person encounters antisemitism, hate, bias or misinformation through AI: 

  • Stay calm and invite them to talk about what they encountered. 
  • Explain why the information is hateful and/or inaccurate. 
  • Explain that some AI tools get these topics wrong because they learn patterns that exist online. 
  • Use it as a moment to reinforce empathy, respect, and critical thinking. 
  • Encourage them to bring questions to trusted adults.
    • If you have not done so already, consider this an opportunity to create a list of trusted adults they can bring questions to about information they encounter through AI.
    • Make learning about AI an ongoing family conversation: these tools are always changing and evolving. Rather than ignoring how much AI is becoming part of daily life, staying informed together is the best way to use it wisely and understand what it can and can't do.

Additional Resources

To learn more about AI and LLMs, consider some of the resources below: