
AI in Education: Tool, Transformation, or Trouble?

This is a recap of a session entitled AI in EdTech: Tool, Transformation, or Trouble? from the 2025 ASU+GSV Summit, including insights from WGU’s David Morales.

Artificial intelligence is reshaping educational technology in profound ways, but how can we ensure that AI truly serves learners and educators rather than becoming a flashy add-on or, worse, a source of harm? This article dives into the thoughtful integration of AI in education, drawing on the expertise of leaders from Western Governors University, Quill.org, BrainPOP and MetaMetrics. Their insights highlight the promise and pitfalls of AI in education and offer a roadmap for building meaningful, ethical and impactful AI-powered learning tools.

Understanding the Role of AI in Education

AI in education is not just about embedding the latest technology into existing products. It’s about solving real problems faced by students and teachers through innovative, reliable and equitable solutions. This grounding principle ensures that AI serves a clear educational purpose and delivers value to learners.

From kindergarten classrooms to higher education, AI is being applied in diverse ways, whether as a tool for students, in support of teachers or by powering frameworks for educational providers. The challenge lies in balancing innovation with safety, trust, and reliability.

How AI is Transforming EdTech Products

Personalized Writing Feedback with Quill

Peter Gault, founder and executive director of Quill.org, a nonprofit focused on improving students’ writing skills, shared how AI can provide fast, fair, and effective feedback at scale. Writing is a complex skill that requires sustained practice and nuanced feedback, yet many students receive too little of either. Quill has been developing its own AI tool since 2018, emphasizing ethics and trustworthiness.

“A lot of students don’t get those opportunities to write and get enough feedback. AI can do an amazing job of evaluating writing, but we have a responsibility to make sure our tool is reliable and giving fair and equitable analysis of student work.”

Quill’s secret sauce lies in building custom datasets of authentic student writing paired with teacher-generated feedback. This approach allows them to fine-tune AI models that deliver reliable feedback directly to students in real time, without the need for a teacher to gate every response. As Gault explained, “You can send [large language models] up to a million words of context now. This is a real superpower: that we can customize AI with our own data.”
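Quill’s actual pipeline isn’t detailed in the session, but the kind of paired dataset Gault describes is commonly stored as JSONL: one record per example, pairing a prompt and an authentic student response with the teacher-written feedback the model should learn to produce. A minimal, hypothetical sketch (the field names and examples are illustrative, not Quill’s real schema):

```python
import json

# Hypothetical training records pairing student writing with
# teacher-written feedback, the kind of custom dataset described
# for fine-tuning a feedback model. Schema is illustrative only.
records = [
    {
        "prompt": "Combine the sentences using 'because'.",
        "student_response": "Plants need sunlight. They make food with it.",
        "teacher_feedback": "Good start! Try joining the two ideas with "
                            "'because' to show the cause.",
    },
    {
        "prompt": "Combine the sentences using 'because'.",
        "student_response": "Plants make food because they use sunlight.",
        "teacher_feedback": "Nice work: your sentence clearly shows the "
                            "cause-and-effect relationship.",
    },
]

# Serialize to JSONL (one JSON object per line), the format most
# fine-tuning APIs accept for custom training data.
jsonl = "\n".join(json.dumps(r) for r in records)
num_examples = jsonl.count("\n") + 1
```

In practice such a file would hold thousands of examples scored by teachers, which is what allows the fine-tuned model to return feedback to students in real time without a human gating each response.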

AI Literacy and Customized K-8 Learning at BrainPOP

Jay Chakrapani, chief product officer at BrainPOP, discussed their careful approach to AI, especially given their young audience (K-8). BrainPOP focuses on media, digital and AI literacy, helping kids learn joyfully while using technology responsibly. Their AI solution offers customized learning materials for both classrooms and homeschool settings.

“When you put something in front of a third grader, it can’t hallucinate. It has to be private and it can’t be wrong,” said Chakrapani, who emphasized that the company spends a lot of time to ensure their AI solutions are safe for kids.

BrainPOP also recognizes the importance of maintaining high standards and human oversight. For example, their AI-generated grading suggestions are always reviewed and approved by teachers before reaching students, ensuring accuracy and appropriateness.

Measuring Reading Growth with MetaMetrics

Jing Wei, VP of machine learning & engineering at MetaMetrics, shared the long history of AI in education, highlighting the Lexile Framework for Reading — a pioneering AI-driven tool the company developed over 40 years ago. Lexile uses machine learning to match students with reading materials at the right difficulty level, optimizing reading growth.

MetaMetrics emphasizes transparency and educator support, providing training and clear explanations about what Lexile measures can and cannot do. Wei stressed the importance of maintaining high-quality standards and openly sharing research results with the education community.

“We created a large corpus of millions of texts... and used natural language processing and machine learning to build a readability model... 35 million students receive Lexile measures each year,” said Wei. “We want to be extra cautious in terms of the LLMs we are using, because ultimately we are trying to solve an education problem; we are not just trying to build AI for AI’s sake.”  
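The Lexile model itself is proprietary, but readability models in this family typically predict text difficulty from surface features such as average sentence length and word rarity, fit against a large corpus like the one Wei describes. A toy sketch of the idea (the features and weights here are invented for illustration and bear no relation to the actual Lexile equation):

```python
def toy_readability(text: str, common_words: set) -> float:
    """Toy readability score: longer sentences and rarer words raise
    the score. Weights are invented for illustration and are NOT the
    Lexile formula."""
    # Crude sentence and word segmentation, sufficient for a sketch.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w]

    avg_sentence_len = len(words) / max(len(sentences), 1)
    rare_fraction = sum(1 for w in words if w not in common_words) / max(len(words), 1)

    # Arbitrary weights: difficulty grows with sentence length and rarity.
    return 10 * avg_sentence_len + 100 * rare_fraction

common = {"the", "a", "cat", "sat", "on", "mat"}
simple = toy_readability("The cat sat on the mat.", common)
harder = toy_readability(
    "Photosynthesis converts luminous energy into chemical potential.", common
)
# The denser sentence scores higher under this toy model.
```

A production system like Lexile trains such a model on millions of texts and calibrates the scale against measured student comprehension, which is what lets it match readers to appropriately difficult material.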

AI-Driven Personalization at WGU

David Morales, CIO and senior vice president of technology at WGU, described how higher education can leverage AI to personalize student experiences from program selection to graduation. WGU’s AI initiatives focus on three key goals: attainment, return on investment, and equitable access.

WGU is developing a “decision intelligence framework” that dynamically adapts services and learning pathways based on individual student data, including geographic and cultural context. This personalized approach aims to improve student success and social mobility.

“If I know that my student is in Texas and I know they are coming from this specific zip code, I already know and understand a little bit more about their background,” he said. “How do we enable that information to serve better that individual, to truly implement services and learning mechanisms for them, not for a standard consumption of education?”

Building Trustworthy and Ethical AI in Education

Across all organizations, a central theme is the responsibility to build AI systems that serve a specific student need. AI solutions must also be trustworthy, reliable and equitable. This requires significant investment in research, development, and continuous evaluation.

“We hired dozens of teachers and had them manually score thousands of outputs to train the AI,” said Chakrapani. “When it’s out in production, we still have a teacher gating the feedback before it goes to the students, being the quality control gate.”

Gault echoed the value of custom datasets and human-in-the-loop evaluation.

“We have a team of teachers doing this at scale,” he said. “We know we have good accuracy because we built those data sets in advance.”

Morales underscored the need for clear communication with students about the data used.

“Define the outcomes, not the outputs,” he said. “Make sure you create an operational process that enables you to track, measure and align to outputs such that the outcomes are correct. Make sure your students know what data you have and what data you’re using to serve them better.”

Evaluating Impact: Beyond Accuracy to Real-World Outcomes

Traditional long-term efficacy studies often take years, which is impractical given the rapid evolution of AI technologies. Quill is pioneering “rapid cycle evaluation,” inspired by pharmaceutical research phases, to quickly assess AI tools before scaling.

“Is this helping or is this harming?” asked Gault. “By doing these phase one trials, you can really see—is this working or not?”

Chakrapani described BrainPOP’s layered approach to evaluating product effectiveness, combining AI usage analytics, classroom observations, and longer-term efficacy studies aligned with state assessments.

Wei proposed a holistic validation framework inspired by educational assessment theory, incorporating accuracy, reliability, impact, and practicality to ensure AI solutions truly serve educational goals.

Gault stressed the scale of human oversight required for trustworthy AI.

“Our team of seven former educators review around 100,000 sentences per year manually,” he said. “The question is not if there is a human in the loop, but how many humans and how much evaluation is being done.”

Morales pointed out the need to question existing processes before automating them with AI.

“Let it not be automating a process that must be sunsetted,” he said. “Really figure out if that process should exist, and if it doesn’t, then how is AI enabling me to remove that process such that I’m really moving the needle for the outcome that I’m looking for?”

Key Takeaways for Building Meaningful AI-Enabled EdTech

  • Build your own datasets: Customize AI models with authentic, high-quality data that reflects your unique educational context.

  • Invest in rigorous evaluation: Continuous monitoring, human review, and rapid cycle testing are essential to maintain quality and trust.

  • Focus on outcomes, not just outputs: Measure real-world impact on student learning and success, not just technical accuracy or feature delivery.

  • Be transparent and ethical: Clearly communicate with learners and educators about data use and AI processes to build trust.

  • Reimagine processes: Use AI to innovate and transform education, not simply automate existing workflows.

Conclusion

Morales referenced a quote from Oren Harari, who said, “The electric light is not coming from the iteration or continued integration of candles.” AI in education is not about patching old methods but about illuminating new possibilities for personalized, equitable and effective learning experiences.  

AI in education presents both tremendous opportunities and significant challenges. The path forward requires intentionality, ethical design, and a relentless focus on the learner’s needs. By combining cutting-edge AI technology with deep educational expertise, transparency and continuous evaluation, we can harness AI’s potential to transform education for the better.

As the landscape evolves, educators, technologists, and policymakers must work together to ensure that AI tools empower every learner, support every teacher and uphold the highest standards of quality and equity.
