The New Frontier: Understanding AI Usage Among Teens and Parents
In 2022, when tools like ChatGPT were first made available to the public, the academic community found itself navigating an unprecedented landscape. Gillian Hayes, vice provost for academic personnel at the University of California, Irvine, recalls how early adopters rushed to establish rules around artificial intelligence (AI) without a clear understanding of its implications.
A Revolutionary Moment
Hayes compared this moment to the industrial and agricultural revolutions. “People were just trying to make decisions with whatever they could get their hands on,” she remarked.
Researching the Unknown
Recognizing the need for clarity, Hayes and Candice L. Odgers, a professor of psychological science and informatics, initiated a national survey to delve into AI usage among adolescents, parents, and educators. Their aim was to gather comprehensive data that would illuminate how attitudes toward, and applications of, AI evolve over time.
Surveying the Landscape
In collaboration with foundry10, an education research organization, the team surveyed 1,510 adolescents aged 9 to 17 and 2,826 parents of K-12 students across the United States. They also conducted focus groups with parents, students, and educators to explore their knowledge, concerns, and everyday interactions with AI. Data collection concluded in the fall of 2024, and preliminary findings were released earlier this year.
Unexpected Insights
The survey’s findings surprised Hayes and her team. While many teens were aware of AI’s potential dangers, they often lacked guidelines for its responsible use. This absence of direction can create confusion and hinder ethical, productive interactions with the technology.
Moral Considerations
Hayes was particularly struck by how infrequently adolescents used AI: only about 7 percent reported using it daily, and most interactions occurred through search engines rather than chatbots. Many participants exhibited a strong moral compass, grappling with the ethical dilemmas AI presents, especially in educational settings.
Real-Life Scenarios
One noteworthy incident involved a teen who self-published a book featuring an AI-generated cover image and some AI-generated content. The teen's mother discussed appropriate AI use with her child, and the two concluded that while it was permissible for the book, it shouldn't be used for school assignments. The episode highlights a common gap among young people: they often do not recognize what constitutes cheating when using AI.
Defining Cheating
Hayes explains that many teens understand that cheating is wrong but are puzzled by what qualifies as dishonest behavior. For instance, some questioned why peer reviews were acceptable while using AI tools like Grammarly was not. “For the vast majority of adolescents, they know cheating is bad,” she stated. “They don’t want to be dishonest; they just find the boundaries unclear, and so do many teachers and parents.”
Critical Thinking Concerns
Teens also expressed anxiety over how AI could affect their critical thinking skills. According to Jennifer Rubin, a senior researcher at foundry10 involved in the study, young people acknowledge that AI is a tool they will likely rely on throughout their lives, yet they fear that irresponsible use could impede their education and future careers.
Equity and Access
Another surprising finding was the absence of significant equity gaps among AI users. Hayes noted that technological advancements often exacerbate disparities, but in this case the results showed few differences across social groups. This may reflect the novelty of AI: no group yet fully understands its capabilities.
A Changing Paradigm
Typically, parents with higher education or income have an advantage in teaching their children about new technologies. However, Hayes pointed out that in today’s AI-driven world, no one has a firm grasp of the technology. “It may be that everyone is working at a reduced capacity,” she suggested.
Parental Perspectives
The study also revealed varying parental understandings of AI capabilities. Some viewed it merely as a search engine, while others were unaware that it could produce inaccurate results. Divergent opinions about how to engage with AI were evident: some parents embraced it fully, while others preferred to navigate cautiously.
Consensus on Guidelines
Despite differences in understanding, a consensus emerged among parents that clear school district policies around AI usage are essential. Rubin emphasized that such guidelines would help students learn how to use AI safely and effectively.
Frameworks for Understanding
Some districts have already begun implementing color-coded systems to classify AI uses. Green uses might include brainstorming or idea development; yellow might mark a gray area, such as asking for help on a math problem; and red uses would be clearly inappropriate, such as asking an AI to write an essay for a school assignment.
Community Engagement
Many schools are also organizing listening sessions with parents to foster discussions on AI usage in the home. “It’s a fairly new technology; families often need guidance about how to approach it,” Rubin noted.
Guiding Educators
Karl Rectanus, chair of the EDSAFE AI Industry Council, advocates for applying the SAFE framework—Safe, Accountable, Fair, and Effective—when addressing AI usage. He believes that this framework can support both large organizations and individual educators in their approach.
Teaching, Not Banning
Rather than prohibiting AI, Hayes argues that educators should focus on teaching students how to use it responsibly. This approach is vital for preparing students for future workforce challenges.
Innovative Assessment Methods
At UC Irvine, one faculty member employs oral exams for computer science students. While students can utilize AI to write code, they are required to explain their work, ensuring comprehension of both AI’s role and the code itself.
Adapting to the Future
Hayes encourages educators to rethink learning outcomes in this era of generative AI. “What truly is my learning outcome here, and how can I teach it while accounting for the presence of AI?” she asks, acknowledging that AI is not going anywhere.
Conclusion
The evolving landscape of AI presents both challenges and opportunities for adolescents, parents, and educators. As technology continues to advance, creating a framework for understanding and guidance will be crucial to ensuring ethical and productive engagement with AI.
FAQs
1. What were the main findings of the survey on AI usage among teens?
The survey revealed that many teens are aware of the concerns surrounding AI but lack guidelines for its ethical use. Only about 7% used AI daily, mainly through search engines rather than chatbots.
2. How do teens define cheating in relation to AI?
Many teens understand that cheating is wrong; however, they often find it unclear what constitutes cheating with AI, leading to confusion in academic settings.
3. Did the study find any equity gaps among AI users?
Surprisingly, the survey showed no significant equity gaps, suggesting that in this new technology landscape, everyone is navigating the same uncertainties around AI.
4. What is the SAFE framework for teaching AI?
The SAFE framework stands for Safe, Accountable, Fair, and Effective and is recommended for educators to help address questions about AI usage in the classroom.
5. What innovative methods are educators using to teach AI?
Educators, including those at UC Irvine, are using methods like oral exams where students must explain their AI-generated code, ensuring a deeper understanding of the technology.