
Guiding Responsible Student Use of AI

Co-authors: Courtney Munday, Senior Manager of Evaluation, WGU, and Frank Wray, Manager of Instruction Faculty, WGU

The mission of Western Governors University (WGU) has always been to expand access to education that’s both meaningful and workforce-relevant, and using new and emerging technology is central to our success. As the world and the workplace continue to evolve, so must the way we teach and learn. Embracing artificial intelligence (AI) is therefore a natural step toward training students to use it responsibly and ethically, in both their educational pursuits and their careers.

AI holds real promise for education, and it also brings complexity. At WGU, we view this moment not with apprehension, but with intention. Integrating AI is part of our broader commitment to innovation, not as a buzzword, but as a way to serve students better. It requires thoughtful leadership, ethical frameworks, and a willingness to adapt.

Our hope is that WGU’s model for AI implementation can provide insight, and perhaps inspiration, for other institutions navigating similar questions.

Managing Student Use of AI

As AI continues to shape higher education, WGU distinguishes itself by adopting a holistic, institution-wide approach to AI integration. Unlike many academic institutions that implement fragmented AI policies across various academic departments, WGU recognizes the importance of embedding AI strategically and ethically into learning experiences to enhance, rather than complicate, education. This comprehensive strategy aligns with findings from recent studies by Hanover Research and Inside Higher Ed (Hanover), and the 2024 EDUCAUSE AI Landscape Study, authored by Jenay Robert (Robert), both of which emphasize the critical role of institutional leadership in AI adoption.

AI usage within WGU’s competency-based model—particularly in performance assessment-based courses—is expanding rapidly. Hanover notes that institutions are grappling with concerns regarding AI’s reliability, ethical implications, and impact on academic integrity. While some universities have responded with restrictive measures, such as banning AI tools or implementing detection software, others—such as Auburn University—have adopted proactive strategies, including workshops that educate faculty on AI use. WGU views AI as an opportunity to redefine student engagement with knowledge, ensuring that ethical AI use is taught alongside critical thinking.

Robert reinforces this perspective, highlighting that while AI presents ethical risks, it also offers substantial benefits in personalized learning, research support and administrative efficiency. WGU acknowledges that AI literacy is an essential skill for graduates entering the workforce and, therefore, embraces AI as a vital component of modern education. Rather than restricting AI usage, WGU is designing courses that encourage students to engage with AI critically and responsibly, preparing them to use these tools effectively in their future careers.

A prime example of this intentional integration is the transformation of WGU’s general education curriculum. Rather than treating AI as an optional or incidental tool, the curriculum is structured to incorporate AI-driven analysis, problem-solving exercises and ethical decision-making frameworks. This initiative ensures that students engage meaningfully with AI while reinforcing foundational skills such as reasoning, communication and evidence-based evaluation. Through this approach, students learn how to use AI and receive guided instruction regarding the ethical considerations of AI-generated content—one of the key concerns identified by Robert.

By adopting a unified approach to AI implementation across disciplines and recognizing AI as a foundational skill rather than a disruptive force, WGU positions itself as a leader in the evolution of higher education. As institutions struggle to balance AI’s potential with concerns about academic integrity, WGU’s forward-thinking model demonstrates that ethical AI adoption is an opportunity to transform education and better prepare students for an AI-driven workforce.

AI and the Competency-Based Model

WGU employs a disaggregated competency-based model that allows students to progress at their own pace by demonstrating mastery through objective and performance assessments rather than accumulating Carnegie Units, also known as credit hours. While WGU seeks to leverage AI as a learning tool to promote critical thinking and problem-solving skills, it also acknowledges the potential for its misuse within assessments. To mitigate bias and ensure authentic learning, instruction faculty and evaluation faculty collaborate to align rubric expectations and create coaching opportunities that reflect a commitment to student success while safeguarding the integrity of WGU degrees.

WGU’s similarity checker includes an AI detection tool that operates in the background, collecting data on the presence of AI-generated text in student submissions. This data is accessible only to the academic integrity team for analysis. Although the tool reports a false positive rate of just 0.2%, AI detection still relies on surface-level linguistic patterns, such as sentence length, hyphen use, average syllable count, and parts of speech, to flag potentially AI-generated content.

Most detection tools indicate only that content is “likely AI-generated”; they cannot definitively distinguish AI-generated writing from writing that simply reflects limited skill. Consequently, human analysis remains essential. Additionally, text polished with tools like Grammarly may be flagged because of their influence on sentence structure. Given that even one false positive is unacceptable, WGU relies on overt evidence of generative AI use to identify integrity violations.
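To make concrete why pattern-based detection is inherently probabilistic, the toy sketch below computes the kinds of surface-level signals named above (sentence length, syllable estimates, hyphen use). This is purely illustrative and is not WGU’s detector or any commercial product; real tools combine far richer statistical models, and every function name here is our own invention.

```python
# Toy feature extractor: the surface-level signals AI detectors typically
# combine. Illustrative only; not any real detection product.
import re

def estimate_syllables(word: str) -> int:
    """Rough syllable count: number of vowel runs, minimum of one."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def linguistic_features(text: str) -> dict:
    """Return average sentence length, average syllables per word, and hyphen count."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words or not sentences:
        return {"avg_sentence_len": 0.0, "avg_syllables": 0.0, "hyphens": 0}
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "avg_syllables": sum(estimate_syllables(w) for w in words) / len(words),
        "hyphens": text.count("-"),
    }
```

Because features like these describe style rather than authorship, a skilled human writer and a language model can produce identical profiles, which is exactly why human review remains essential.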

Academic integrity evaluators are trained to analyze performance assessment content for accuracy, originality, and indicators of AI generation. Academic integrity faculty manually evaluate submissions, examining the content for the following AI patterns:

  • Repetitive language

  • Grammatical errors

  • Vague content

  • Lack of personal perspective 

  • Factual errors

  • Fake, irretrievable sources 

A recent, prevalent issue concerning generative AI involves students citing fabricated sources in performance assessments. In response, instruction and evaluation faculty have adopted a coaching-based intervention. When unverifiable sources are identified, the evaluation team locks the submission and returns it without evaluation. The student must then meet with their instructor to revise the work, and the instructor must validate all sources before the evaluation is unlocked. This approach has led to a reduction in integrity violations by emphasizing education over punishment.
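The lock-coach-validate-unlock workflow described above can be sketched as a simple state transition, shown below. All class and function names are hypothetical, invented for illustration; WGU’s actual systems are not described here.

```python
# Hypothetical sketch of a coaching-based source-verification workflow:
# a submission citing unverifiable sources is locked and returned without
# evaluation, and is unlocked only after the instructor validates every source.
from dataclasses import dataclass

@dataclass
class Submission:
    sources: list
    status: str = "submitted"  # submitted -> locked -> unlocked

def screen_sources(sub: Submission, verifiable: set) -> Submission:
    """Lock the submission if any cited source cannot be retrieved."""
    if any(src not in verifiable for src in sub.sources):
        sub.status = "locked"  # returned to the student without evaluation
    return sub

def instructor_validates(sub: Submission, verifiable: set) -> Submission:
    """After coaching and revision, unlock only if every source checks out."""
    if sub.status == "locked" and all(src in verifiable for src in sub.sources):
        sub.status = "unlocked"  # evaluation may now proceed
    return sub
```

The design point is that the gate sits before evaluation, so a fabricated citation triggers coaching rather than a grade penalty.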

While instructor intervention is the ideal first response to students’ inappropriate use of AI, egregious integrity violations warrant academic warnings that require revision. Repeated violations result in coaching conversations involving the academic coaching center, instruction faculty, and the student conduct office.

Recommendations for Other Institutions

Prohibiting student use of AI tools is both impractical and counterproductive. In today’s workforce, AI literacy is essential. Rather than banning generative AI, institutions may benefit from integrating it into curricula in ways that promote critical thinking and responsible use. The following strategies can support this goal:

  • Develop a clear, accessible, and realistic university-wide AI policy. 

  • Provide faculty training on the policy to ensure consistent implementation across departments. 

  • Provide faculty training on similarity checkers and AI detection tools to ensure calibrated interpretation of results and reduce false positives.

  • Design curriculum and assessments that incorporate AI as a learning tool, while requiring personal reflection and critical analysis of AI-generated outputs. 

  • Adopt a coaching approach to inappropriate AI use. For example, help students understand the difference between using AI to brainstorm ideas versus using it to complete assignments.

Conclusion

Thoughtful, ethical use of AI in education isn’t just a compliance measure; it’s an opportunity to lead. WGU’s work in this space reflects our core belief that higher education should prepare learners not just for today’s workforce, but for tomorrow’s challenges.

By designing curriculum, support systems, and academic policies that incorporate AI with care, we’re helping students build the skills and judgment they’ll need in an AI-powered world. This is about more than tools; it’s about trust, personal accountability and lifelong learning.

We encourage other institutions to explore WGU’s approach as a model that can scale and adapt. We welcome partnership, dialogue, and shared learning as we all strive to ensure that AI supports, not replaces, the human potential at the heart of education.
