
Personal Vision of AI-Assisted Business Education 2035

by Jeonghwan (Jerry) Choi 2025. 6. 4.

Personal Vision of AI-Assisted Business Education for 2035:

By 2035, I envision a business education ecosystem that is universally affordable through Open Educational Resources, seamlessly integrated yet intellectually engaging, deeply immersive through augmented and AI-assisted technologies, and professionally transformative—empowering ethical, skilled problem solvers to thrive in an ever-evolving world.

Attachment: Vision_AI_Assisted_Biz_EDU.pptx (0.05 MB)

 

 

 

** The vision statement was derived from the debriefing process of the 'Teaching and Learning with AI' Conference 2025, Orlando, Florida.

 

Teaching and Learning with AI Conference Debriefing.

1. AI is actively transforming higher education—not as a threat, but as a collaborative partner in learning. Faculty are encouraged to embrace "productive AI friction" to promote critical thinking and engagement.

2. Assessment methods are evolving to emphasize human reasoning and personal expression through formats like oral presentations, multi-stage tasks, scaffolding methods, and reflective videos—reducing over-reliance on AI-generated content.

3. Ethical, inclusive integration is key. Institutions are adopting AI literacy frameworks and shared governance models to ensure access, equity, and academic integrity in the age of AI.

 

https://youtu.be/reiA21t2CMc?si=hbA-6pED7CNjs_2l

 

 


 

Debriefing Report

Reimagining Higher Education with Generative AI: Strategic Insights from the 2025 Teaching and Learning with AI Conference

 

Conference Attendance: May 28–30, 2025
Location: Orlando, Florida

Prepared by: Jeonghwan (Jerry) Choi, PhD, MBA, ME

[FOR INTERNAL USE ONLY WITHIN Higher Education]


Executive Summary

The "Teaching and Learning with AI" conference, held in Orlando, Florida from May 27 to 30, 2025, delivered a compelling and urgent message: artificial intelligence is no longer an emerging tool—it is a transformational force in education. Experts from institutions such as Georgia State, University of Michigan, Arizona State, and Grand Valley State University presented both innovative strategies and complex dilemmas facing universities as they respond to this paradigm shift.

Key insights emerged around the necessity to integrate “AI-assisted—rather than AI-resistant—learning”, and to reimagine assessment formats to align with human-centered, critical thinking goals. Faculty are being urged to act as learning designers, promoting transparency, reflection, and AI-literate pedagogy through methods such as “AI spotlights” and metacognitive footnotes. The implications are clear: institutions that fail to adapt risk obsolescence.

The report also draws attention to the rising equity divide created by access to paid versus free AI tools, as well as the increasing demand for campus-wide policies, graduate-level AI fluency, and interdisciplinary AI literacy development. The conference demonstrated how AI is reshaping faculty workload, student expectations, and institutional governance.

For Higher Education, the takeaway is unmistakable: now is the time to develop a comprehensive AI strategy rooted in ethics, accessibility, and innovation to empower students and educators alike in an era of generative intelligence.

Key Takeaways

  1. Reframe AI as a Learning Partner
    Use AI to foster critical thinking and reflection—not as a threat, but as a co-creator in student learning.
  2. Develop a Campus-Wide AI Policy Tailored by School (e.g., Graduate Programs)
    Create a flexible, faculty-led AI policy that promotes ethical use, academic freedom, and student empowerment.
  3. Redesign Assessments for AI Resilience
    Shift from essays to oral, process-based, and reflective assessments that emphasize authenticity, voice, and human judgment.
  4. Launch AI Literacy for Faculty and Students
    Offer self-paced AI training for faculty and embed prompt literacy and ethical AI use into general education and YourPace programs.
  5. Deploy AI for Student Support and Retention
    Pilot AI-powered advising chatbots and predictive tools to identify at-risk students and streamline academic support.
  6. Standardize Transparent AI Use Across Courses
    Adopt the AI Stoplight system to clarify expectations, promote trust, and ensure responsible AI use in assignments.
  7. Innovate Feedback with AI Micro-Apps
    Create AI-driven video feedback tools using avatars to boost student engagement, especially for Gen Z and online learners.

Introduction

The 2025 Teaching and Learning with AI Conference, hosted in Orlando, Florida, was a pivotal gathering of educators, technologists, and institutional leaders grappling with how artificial intelligence is reshaping higher education. The three-day event featured research-backed insights and case studies from global universities pioneering the integration of AI into instruction, student assessment, and academic governance.

Themes explored included the repositioning of AI from a “cheating risk” to a co-pilot in learning, the transformation of student assessments beyond text-heavy outputs, and the faculty’s evolving role in guiding ethical, personalized, and inclusive AI use. Several speakers advocated for reframing faculty responsibilities—not as AI detectors—but as ethical architects of learning.

A powerful metaphor from the keynote likened traditional instructors to Plato’s cave dwellers, urging faculty to stop mistaking educational shadows for the real possibilities that AI can bring.

From developing institution-wide AI policies, to using AI-powered analytics for student support and curriculum design, the conference made clear that transformative change is already underway. The future of higher education will depend on our ability to be proactive, not reactive, and to develop systems that prioritize human-AI collaboration, equity in access, and learner agency.

This report captures the conference’s most strategic insights and translates them into actionable recommendations tailored for Higher Education.

Day 1; Session 1: Friend, Tool, or Trouble? Striking a Balance with AI in the Classroom
Presenter: Eugenia Novokshanova, Ph.D., Associate Professor, Georgia State University, Perimeter College

Session Overview & Contents

This opening keynote questioned traditional higher education models through the allegory of Plato’s cave. Dr. Eugenia Novokshanova challenged educators to confront outdated assumptions and instead embrace Artificial Intelligence (AI) as a partner in pedagogical transformation. The central question: Should AI be treated as a rigid teacher, a neutral tool, or a transformative friend?

The speaker emphasized the need for educators to reframe their roles—not merely as content deliverers but as facilitators of cognitive dissonance and productive friction with AI. The keynote explored how AI can unlock deeper thinking when learners confront its limitations rather than passively consume its outputs.

Session Summary

“Are we, as higher education instructors, like the cave dwellers in Plato’s allegory, mistaking the shadows of traditional pedagogy for the true potential of learning? For too long, many of us have believed that we knew what education should look like and that we, the professoriate, should unilaterally determine our students’ learning needs and outcomes. However, Artificial Intelligence is rapidly transforming our understanding of pedagogy, especially in higher education, presenting us, higher education instructors, with a critical choice in how we integrate this powerful technology.

Should AI be positioned as a Teacher, a directive force that risks confining students within predetermined learning pathways—strengthening the chains of standardized curricula and limited perspectives that bind them to the cave of conventional learning? Or can AI evolve into a Friend, a collaborative partner that fosters student agency and empowers students as co-creators and allies in their own education as they move toward the essence of knowledge and genuine understanding? This shift requires us to confront the AI paradox: while students now have more access to knowledge than ever before, critical thinking only emerges when they experience productive friction with AI.”

Practical challenges included applying Socratic feedback in asynchronous or hybrid settings, where equity and feedback dynamics vary. Risks of AI misuse, including over-reliance on detection software that undermines trust, were also addressed. The call to action: make learners “troublemakers”—challengers of AI, not passive recipients.

Key Insights and Learnings

  • Productive Friction is Key: Real learning occurs when students challenge and evaluate AI responses.
  • AI as Learning Companion, Not Crutch: Position AI to enhance cognition, not automate outcomes.
  • Human Touch Still Matters: Socratic dialogue and feedback loops are difficult but necessary in hybrid formats.
  • Assessment Evolution: Move toward oral, dialogic, or process-based evaluation formats.
  • Equity Challenges Persist: Tools like Beatlebox, Zotero, or Adobe help level access, but premium AI services may create systemic disparity.
  • AI Literacy Needed: Students must learn to prompt, critique, and ethically engage with AI technologies.

Application for Higher Education

  • Curriculum Design: Embed “productive AI friction” in assignments—use critique loops, peer reviews, and reflective AI journaling.
  • Faculty Training: Train instructors to manage cognitive engagement with AI rather than police it.
  • Equity Lens: Higher Education must monitor access to paid versus free tools and explore institutional subscriptions (e.g., Scispace, Perplexity).
  • Assessment Shift: Expand oral presentations, live interviews, and metacognitive reflection across modalities (including YourPace).
  • Culture Shift: Promote AI not as a threat but as a co-pilot. Encourage student agency by training “AI troublemakers” who think critically and ethically.

Photo Placeholder

📸 Keynote Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Dr. Eugenia Novokshanova delivers a keynote urging educators to “make trouble” with AI and challenge their own assumptions.

Day 1; Session 2: Rethinking University Student Assignments in the Age of Generative AI
Presenter: Assoc. Prof. Dr. Evelina Jaleniauskiene, Kaunas University of Technology, Lithuania

Session Overview & Contents

This session examined how generative AI is challenging traditional academic assignments and reshaping how student learning is assessed. Dr. Evelina Jaleniauskiene from Kaunas University of Technology presented a compelling analysis of evolving instructional design and emphasized the need for universities to move beyond one-dimensional assignments like essays or reports.

The session highlighted the gradual stages of AI acceptance in the academic community and proposed a shift toward AI-resilient assessment models. Emphasis was placed on oral assessments, real-world problem solving, and multi-step learning activities that integrate both content generation and human refinement. These approaches aim to preserve academic integrity and deepen student engagement.

 

Session Summary

Prepared by Assoc. Prof. Dr. Evelina Jaleniauskiene (Kaunas University of Technology, Lithuania), this presentation explored how generative AI requires a redesign of student assignments. She outlined a spectrum of faculty responses—ranging from skepticism to experimentation—as universities adjust to the disruptive presence of AI.

One of the most notable developments is the rise of oral assessment formats that emphasize critical thinking and human judgment, mitigating risks of AI overuse. Other key strategies include multi-step assignments and real-world applications, where AI may serve as a support tool, but not the main executor. Dr. Jaleniauskiene explained how she has begun implementing this in courses such as MAOL 510, 610, and 640 starting in Summer 2025, where students engage with AI tools but must apply human reasoning and iteration to refine their outputs.

Key Insights and Learnings

  • AI acceptance follows distinct phases—awareness, resistance, cautious experimentation, and thoughtful integration.
  • Assignments that can be easily completed by AI are losing their value; meaningful assessment must be human-centric.
  • Oral assessments, reflective presentations, and iterative projects are gaining relevance.
  • Assignments should blend content creation by AI with refinement, critique, and application by students.
  • Real-world problem-based learning with AI support encourages innovation and deeper understanding.

Application for Higher Education

  • Higher Education faculty should be encouraged to implement oral assessment components and require reflective discussion on AI-generated content.
  • Faculty development initiatives can train instructors to restructure assignments into multi-phase activities that include critique, refinement, and human validation.
  • MAOL and MSB courses (especially capstone and applied learning formats) should embed this model of AI-human collaboration for deeper learning.
  • Instructional designers can assist faculty in developing AI-resilient rubrics and assignments using current models from MAOL 510, 610, and 640.
  • The YourPace modality should consider layered assessments to retain academic rigor in AI-augmented environments.

Photo Placeholder

📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Dr. Evelina Jaleniauskiene presenting her session on redesigning university assignments in the age of generative AI.

Day 1; Session 3: If You Can’t Beat Them, Join Them: Embracing AI Use by Students
Presenters: Dr. Jean Gordon (UNCW), Dr. Reid Oetjen, Dr. Dawn Oetjen (University of Central Florida, School of Global Health Management and Informatics)

Session Overview & Contents

This session tackled the practical shift from resisting student use of AI to responsibly embracing and integrating it within teaching and learning. Moderated by academic leaders in health and informatics, the session explored the evolving faculty workload, pedagogical shifts, and curriculum redesign needed in a GenAI-enabled academic world. The panel advocated for institutions to guide, not restrict, students in their AI usage.

The speakers provided strategies for faculty to recalibrate assessments and feedback mechanisms in response to AI-augmented student learning behaviors.

Session Summary

Dr. Jean Gordon (UNCW) emphasized that while AI may promise efficiencies, it will not reduce faculty workload. Instead, instructors can expect increased student enrollment, more intensive feedback expectations, and new demands for tech-savvy course design. These changes will require substantial adaptation across academic units.

Dr. Reid Oetjen (UCF) observed that by removing the burden of grammar, syntax, and formatting, AI tools can free students to focus on higher-level content development—but only if instructors intentionally design courses that support this shift.

Dr. Dawn Oetjen reinforced this view by proposing three practical strategies for AI-integrated learning:

  1. Set clear expectations for how AI can be used in the course.
  2. Require reflective writing or documentation of the student's process, especially in assignments involving AI.
  3. Design assessments that go beyond AI’s current capabilities, such as oral exams, reflection videos, and in-person presentations.

Key Insights and Learnings

  • Faculty workload may increase—not decrease—due to AI: more students, more tech-driven feedback, and higher interaction expectations.
  • Emphasizing “ideas over grammar” allows students to deepen learning—if scaffolded properly.
  • Reflection and process documentation are essential in AI-augmented assignments.
  • Oral and multimodal assessments can limit the overreliance on generative AI and ensure student authenticity.
  • Faculty must take a proactive role in defining AI usage boundaries within each course.

Application for Higher Education

  • Faculty Guidelines: Higher Education should publish unified guidance for faculty on acceptable AI use in coursework, modeled after the three principles above.
  • Assessment Redesign: Shift more assessments toward oral presentations, reflection videos, and synchronous check-ins to complement YourPace’s asynchronous format.
  • Faculty Development: Offer professional development workshops on designing AI-aware syllabi and multimodal evaluation strategies.
  • Transparency Protocols: Require students to submit AI-use statements or reflections to foster academic integrity and metacognition.
  • Program Alignment: Embed these approaches into MAOL, MSB, and YourPace programs to maintain learning rigor while supporting innovation.

Photo Placeholder

📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Panel introduction slide featuring Dr. Jean Gordon, Dr. Eric Richardson, Dr. Reid Oetjen, and Dr. Dawn Oetjen sharing strategic insights on embracing student AI use.

 

Day 1; Session 4: AI as a Strategic Partner: Leveraging AI to Advance Institutional Goals
Presenter: (Name not specified; session presenter shown visually on slide)

Session Overview & Contents

This session presented strategic case studies from universities across the United States that have successfully integrated artificial intelligence (AI) into their institutional operations. The goal was to demonstrate how AI can go beyond the classroom to solve systemic challenges such as retention, advising overload, and administrative inefficiencies.

Three higher education institutions—Georgia State, Arizona State, and Southern New Hampshire University—were highlighted for their AI implementation success. The session also encouraged attendees to think about AI as a strategic asset in addressing persistent campus-wide problems.

Session Summary

The presenter opened the session by positioning AI not merely as a classroom tool, but as a strategic enabler of institutional performance and student success. Using a case-study format, the following examples were shared:

  • Georgia State University: Leveraged predictive analytics to identify and intervene with at-risk students, leading to improved graduation rates.
  • Arizona State University: Deployed AI-powered chatbots to field common student questions, freeing up faculty and staff for higher-level advising.
  • Southern New Hampshire University: Implemented AI-supported advising systems, enabling more personalized academic pathways at scale.

The session concluded with a practical prompt to attendees: consider one real-world institutional challenge and identify how AI could help solve it. Example challenges discussed included advising overload, high course dropout rates, and administrative process delays.
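The Georgia State example turns on a simple idea: combine routine engagement signals into a risk score and flag students who cross a threshold. The sketch below illustrates that idea only; the signals, weights, and threshold are invented placeholders, not anything presented at the session or used by Georgia State.

```python
from dataclasses import dataclass

@dataclass
class StudentActivity:
    """Minimal LMS engagement signals; the fields here are illustrative only."""
    logins_last_14_days: int
    assignments_missed: int
    avg_quiz_score: float  # 0-100

def at_risk_score(s: StudentActivity) -> float:
    """Combine engagement signals into a 0-1 risk score.
    Weights are placeholders, not a validated predictive model."""
    score = 0.0
    if s.logins_last_14_days < 3:
        score += 0.4   # low engagement with the LMS
    if s.assignments_missed >= 2:
        score += 0.35  # missed work
    if s.avg_quiz_score < 70:
        score += 0.25  # weak performance
    return round(score, 2)

def flag_for_advising(students: dict[str, StudentActivity],
                      threshold: float = 0.5) -> list[str]:
    """Return student IDs whose risk score crosses the advising threshold."""
    return [sid for sid, s in students.items() if at_risk_score(s) >= threshold]
```

In practice the score would come from a trained model over historical outcomes; the point of the sketch is the pipeline shape—signals in, ranked flags out to advisors.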

Key Insights and Learnings

  • AI can directly support institutional KPIs (e.g., retention and graduation rates).
  • Operational AI tools like chatbots and predictive systems can enhance student services without expanding staff.
  • AI adoption needs campus-wide alignment and collaboration across departments.
  • Institutions must shift their mindset from “AI in the classroom” to “AI for campus transformation.”
  • Start small, solve real problems—build momentum through AI pilots aimed at visible, measurable pain points.

Application for Higher Education

  • Predictive Analytics for Student Success: Higher Education should pilot predictive tools in advising to flag at-risk students and improve YourPace progression rates.
  • AI Chatbots for Student Services: Develop a campus chatbot to handle common student queries related to registration, financial aid, and course planning.
  • Advising Efficiency: Adopt AI-supported advising tools to assist limited faculty resources and ensure personalized academic support.
  • Real-World Problem Framing: Establish a cross-functional AI Task Force to identify one campus pain point per semester that AI could address.
  • AI for Graduate Programs: Explore applications of AI in scholarship and administrative processes to boost competitiveness of Higher Education's graduate offerings.

Photo Placeholder

📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Presenter showcasing case studies of AI success from Georgia State, Arizona State, and Southern New Hampshire University.

 

Day 1; Session 5: Embracing AI in Graduate Scholarship: Policy Development and Faculty Governance
Presenters: Dr. Amy Campbell, Dr. Mark Staves, Dr. Lara Kessler, and Dr. Erica Hamilton, Grand Valley State University (GVSU)

Session Overview & Contents

This session detailed the institutional approach taken by Grand Valley State University to responsibly implement artificial intelligence in graduate education. GVSU’s Graduate Council led a multi-stakeholder policy development process grounded in faculty governance, academic freedom, and shared decision-making.

The presentation outlined a living AI policy framework that enables continual revision based on technological advances, disciplinary needs, and faculty/student input. The GVSU model emphasizes governance transparency, equitable access, and strategic integration across both teaching and administration.

Session Summary

Led by Dr. Amy Campbell and colleagues at GVSU, this session showcased a structured process for integrating AI into graduate policy and faculty governance. The GVSU Graduate Council developed an inclusive policy-making model that incorporated administrators, curriculum chairs, IT staff, and student voices. The policy design addressed not only use cases and ethical parameters but also governance, professional development, and shared responsibility.

Key features of the GVSU AI policy include:

  • Respect for academic freedom and disciplinary diversity
  • Faculty and student empowerment in AI use
  • Policy flexibility to adapt with technology
  • Clearly defined responsibilities for ethical AI application
  • Institution-wide workshops for professional development

Attendees were provided access to the GVSU AI policy site:
https://www.gvsu.edu/policies/policy.htm?policyId=3EDA3028-A1EF-7514-962877B732FDA124

 


 

 

Key Insights and Learnings

  • AI governance must be a collaborative and iterative process.
  • Policy must preserve academic freedom while offering guidance.
  • Faculty empowerment and training are essential for sustainable integration.
  • AI can support not only teaching but also institutional operations.
  • Graduate-level frameworks offer scalable models for other programs.
  • AI policies should include clarity, accountability, and flexibility.
  • Institutions must balance responsiveness with long-term strategy.

Application for Higher Education

  • Explicit Knowledge: Develop a Higher Education-wide AI policy aligned with shared governance, grounded in academic freedom, and reflective of Higher Education’s rural and online learning context.
  • Implicit Knowledge: Redesign learning outcomes and rubrics to prioritize human reflection, ethics, and problem-solving in AI-supported tasks.
  • Tacit Knowledge: Build a faculty learning community to share AI use cases and refine practices collaboratively.
  • Infrastructure Investment: Pilot AI-powered student support tools such as chatbots for advising and course registration.
  • Equity Focus: Provide subsidized or institutional access to premium AI tools for underserved learners.

Photo Placeholder

📸 Session Photos – Teaching and Learning with AI 2025 Conference, Orlando
Caption 1: Dr. Amy Campbell introduces GVSU’s AI Policy Development Process during the graduate scholarship session.

Day 1; Session 6: Build No-Code AI-Powered Apps for the Classroom
Presenter: (Name not specified; presented by OnMicron.AI representative)

Session Overview & Contents

This session introduced faculty to the power of no-code platforms for building AI-driven educational applications. Using tools like Canva and OnMicron.AI, instructors can create custom classroom utilities—such as automatic quiz generators, chatbots, and micro-apps—without needing programming knowledge.

The presenter also explored educational content delivery preferences for Generation Alpha learners, who increasingly favor short-form audiovisual content over traditional reading materials. The session emphasized customizability vs. technical complexity as a key framework in selecting AI tools for education.

Session Summary

Presented by a representative from OnMicron.AI, this session provided actionable insights into how instructors can design and deploy AI apps without writing code. Demonstrations included how platforms like Canva can automatically generate multiple choice questions from input text, and how no-code frameworks enable rapid prototyping of assessment tools, feedback generators, and intelligent tutors.

A core model positioned tools on a matrix of technical complexity and customizability. For instance:

  • Chatbots: Low tech, low customization
  • Micro-apps: Low tech, high customization
  • Commercial integrations (e.g., Canva): High tech, low customization
  • AI agents: High tech, high customization (e.g., Ollie, OnMicron’s agent)

The session concluded with an ideation proposal: use virtual personas to deliver audiovisual feedback in TikTok-style short clips to match the preferences of digital-native students. This could enhance student engagement and information retention.
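The quiz-generation demos relied on commercial no-code platforms, but the underlying idea can be shown in a toy, standard-library sketch. The sentence-splitting and longest-word heuristics below are my own illustrative stand-ins for what an LLM-backed tool like those demonstrated would do:

```python
import random
import re

def make_blank_questions(text: str, n: int = 2, seed: int = 42) -> list[dict]:
    """Turn each sentence into a fill-in-the-blank item by hiding its longest word.
    A toy stand-in for an LLM-powered no-code quiz generator."""
    rng = random.Random(seed)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Pool of words (4+ letters) across the whole text, used for distractors.
    vocabulary = {w for s in sentences for w in re.findall(r"[A-Za-z]{4,}", s)}
    questions = []
    for s in sentences[:n]:
        words = re.findall(r"[A-Za-z]{4,}", s)
        if not words:
            continue
        answer = max(words, key=len)  # hide the longest word in the sentence
        distractors = rng.sample(sorted(vocabulary - {answer}),
                                 k=min(3, len(vocabulary) - 1))
        questions.append({
            "prompt": s.replace(answer, "_____", 1),
            "answer": answer,
            "choices": sorted(distractors + [answer]),
        })
    return questions
```

A real micro-app would swap the longest-word rule for a model call, but the instructor-facing workflow—paste text, get items with choices and an answer key—is the same.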

Key Insights and Learnings

  • No-code tools democratize AI app creation for faculty without tech backgrounds.
  • Micro-apps and assessment generators offer quick classroom utility without IT support.
  • Younger generations prefer audiovisual, short-form content, not text-heavy formats.
  • AI assistants with personas may boost engagement in asynchronous online learning.
  • Tools must be selected based on their balance of customizability and complexity.

Application for Higher Education

  • Pilot micro-app development within Higher Education’s Center for Teaching and Learning.
  • Use Canva or similar tools to help instructors auto-generate quizzes and assessments.
  • Encourage MAOL and YourPace faculty to explore audiovisual feedback apps for student engagement.
  • Develop virtual assistant personas to provide peer-style learning nudges in short video or audio messages.
  • Create an internal knowledge hub mapping no-code tools to teaching needs by complexity/customization.

Photo Placeholder

📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: OnMicron.AI representative presenting the customization vs. complexity matrix for classroom AI tools.

 

Day 2; Session 1: Using Artificial Intelligence to Extract Curriculum Insights from Student Evaluations of Teaching in Higher Education
Presenter: Marcelo Urzola Vasquez, Texas Tech University

Session Overview & Contents
This session explored how Artificial Intelligence (AI) can be leveraged to extract meaningful curriculum-level insights from student evaluations of teaching (SETs). Marcelo Urzola Vasquez of Texas Tech University presented a model for categorizing evaluation comments into structured domains called “Curriculum Buckets,” providing a consistent framework for interpreting large volumes of qualitative data.

The session offered practical tools and interpretive models to separate signal from noise in student feedback and to guide curriculum redesign efforts without requiring full overhauls. It reframed evaluation data as a diagnostic tool for instructional improvement.

Session Summary
In this session, Marcelo Urzola Vasquez introduced a framework for extracting instructional value from student evaluations using AI categorization. At Texas Tech, courses with 60+ evaluations per section are analyzed through four “Curriculum Buckets”:

  • Pace: Speed, flow, and timing of instructional delivery
  • Communication: Clarity, tone, consistency, and responsiveness of instructor interactions
  • Integration: How well ideas and concepts are connected across the curriculum
  • Instructional Resources: Effectiveness of media, platforms, tools, and materials in supporting learning

AI helps sort qualitative responses into these categories, allowing institutions to ask strategic questions like:

  • What’s the signal versus the noise?
  • What does this course’s “diagnosis” reveal?
  • Is the problem major (“gorilla gap”) or minor (“squirrel problem”)?
  • What should be improved now without starting a curriculum revolution?
  • Would faculty feel confident teaching this course based on the report?

This approach transforms subjective comments into actionable insights for curriculum alignment and instructional development.
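A minimal sketch of the bucketing step is below, assuming a simple keyword heuristic in place of the AI classifier Texas Tech actually uses; the keyword lists are invented for illustration, not drawn from the session:

```python
# Four "Curriculum Buckets" with illustrative keyword sets (not Texas Tech's).
BUCKETS = {
    "Pace": {"fast", "slow", "rushed", "pacing", "deadline", "timing"},
    "Communication": {"unclear", "responsive", "tone", "explained", "email"},
    "Integration": {"connect", "disjointed", "cohesive", "overlap", "sequence"},
    "Instructional Resources": {"textbook", "video", "slides", "platform"},
}

def bucket_comment(comment: str) -> str:
    """Assign a student evaluation comment to the best-matching bucket,
    falling back to 'Unsorted' when no keyword hits."""
    words = set(comment.lower().split())
    scores = {b: len(words & kws) for b, kws in BUCKETS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unsorted"
```

An LLM or embedding model would replace the keyword match, but the output contract—every comment lands in exactly one named bucket—is what makes the downstream "diagnosis" questions answerable.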

Key Insights and Learnings

  • AI can efficiently process large sets of student comments into thematic insights
  • Organizing evaluation feedback by curriculum buckets increases clarity for instructional action
  • Diagnostic metaphors (“patient,” “flag,” “gorilla gap”) enhance interpretation and prioritization
  • This methodology allows for incremental improvements rather than full course redesigns
  • AI does not replace academic judgment—it enhances it by reducing noise and spotlighting critical issues

Application for Higher Education

  • Integrate the Curriculum Buckets model into Higher Education’s course evaluation analysis process
  • Pilot AI-based tools that categorize qualitative feedback into thematic domains for MAOL and YourPace
  • Develop “Action Dashboards” for instructors summarizing evaluation insights in clear categories
  • Use metaphors (diagnosis, flags) in faculty training to promote data-informed teaching revisions
  • Apply insights to improve YourPace instructional quality without increasing faculty workload

Photo Placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Marcelo Urzola Vasquez presents the ‘Curriculum Buckets’ framework to categorize SET data using AI.

 

Day 2; Session 2: Developing a self-paced course to bridge U-M’s GenAI faculty development gap
Presenter: Anoff Nicholas Cobblah, University of Michigan

Session Overview & Contents
This session addressed the growing need for scalable and accessible GenAI training for faculty across higher education. Anoff Nicholas Cobblah of the University of Michigan presented the development and implementation of a self-paced online course titled Teaching with GenAI, designed to close the university’s GenAI faculty development gap. The course serves as a critical response to institutional priorities established by U-M’s Generative AI Advisory (GAIA) Committee and aligns with their broader AI governance and instructional integration strategy.

U of Michigan’s Generative Artificial Intelligence Committee Report

https://genai.umich.edu/committee-report

 

The course is modular, practical, and aligned with current pedagogical, ethical, and technological standards to support faculty in responsibly navigating GenAI in teaching, research, and advising.

Session Summary
The self-paced course developed by U-M’s Center for Research on Learning & Teaching is structured into eight modules:

  1. Introduction to Generative AI
  2. GenAI Literacy and Critical Experimentation
  3. Prompt Literacy and Media Analysis
  4. Enhancing Education with GenAI
  5. Ethical Considerations
  6. GenAI and Equitable Learning Experiences
  7. Academic Integrity and GenAI
  8. Apply Your Knowledge

This modular format provides faculty with a scaffolded learning path that accommodates their teaching schedules and allows for personalized professional growth. The course is publicly accessible through the Canvas Commons and includes activities such as evaluating GenAI outputs, developing ethical classroom policies, and applying prompt engineering techniques.

All materials are available:

https://lor.instructure.com/resources/36768ea27ce844c58a39c3e682cf3b51?shared

 


 

https://www.youtube.com/watch?v=89D6phLTpTw&ab_channel=umichTECH

 

The course’s development follows recommendations from U-M’s GAIA Committee, which call for institution-wide governance structures, infrastructure investment, and instructional transparency. Additionally, Ohio University’s "AI Stoplight" model was presented as a complementary framework to guide students’ responsible AI use in assignments.

Key insights and learnings

  • Modular, self-paced GenAI courses allow scalable, just-in-time faculty development
  • U-M’s governance model emphasizes transparency, equity, and ethical application
  • GenAI pedagogy includes prompt literacy, academic integrity, and equity-focused design
  • Institutional infrastructure (e.g., Maizey RAG assistant) can provide real-time support in LMS
  • Models like the “AI Stoplight” help students engage responsibly and critically with AI tools

Application for Higher Education

  • Develop a self-paced Teaching with AI course tailored for Higher Education faculty and aligned with Higher Education’s AI use policy
  • Incorporate GenAI course materials into faculty onboarding and annual development
  • Adopt and adapt the AI Stoplight framework across YourPace and MAOL courses for transparent student engagement
  • Designate a team to curate, localize, and maintain Higher Education-specific GenAI modules within Brightspace
  • Explore AI integration in LMS tools like Maizey to automate routine FAQs and instructional support
  • Encourage the development of a community knowledge hub to share local use cases, assignments, and evaluation strategies

Photo placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Anoff Nicholas Cobblah presents the University of Michigan’s GenAI faculty training initiative through a self-paced course model.

Day 2; Session 3: Teaching transparently with GenAI spotlights
Presenter: Jennifer Lisy, Assistant Professor of Instruction, Ohio University, CTLA Fellow

Session overview & contents
This session from Ohio University showcased practical classroom strategies for guiding responsible student engagement with generative AI (GenAI). Jennifer Lisy and colleagues shared a portfolio of instructional cards and case studies that offer effective, transparent, and equitable methods for integrating GenAI in higher education. The highlight of the presentation was the AI Stoplight Framework, a visual guide for students to understand the boundaries and expectations for AI use on specific assignments.

In addition, the team addressed core issues with AI detection tools, discussed instructional strategies for personalization and equity, and advocated for aligning GenAI integration with evidence-based principles of teaching and learning.

Session summary
Ohio University’s Center for Teaching, Learning, and Assessment (CTLA) presented a collection of best practices for incorporating GenAI into classrooms through a transparent, student-centered lens. At the heart of the model is the AI Stoplight System, which categorizes assignment expectations into three zones:

  • Red – No AI Use Allowed: Suitable for personal reflection or assessments of core competencies.
  • Yellow – AI Use Is Limited: Students may use AI for drafting, editing, or feedback, but must submit a use statement (e.g., “I used ChatGPT to generate ideas… I had to change…”).
  • Green – AI Use Is Permitted: Students may use AI to design, create, or draft with full disclosure of tools used.

 

 

Complementary teaching strategies included:

  • Designing assignments that resist automation (e.g., oral presentations)
  • Supporting leaders in K–12 through GenAI-focused courses
  • Addressing intellectual property issues in student-generated work
  • Using GenAI for classroom engagement (e.g., simulated personas, tailored tutoring)
  • Supporting equity through personalized writing support
  • Developing prompt engineering skills
  • Encouraging critical discussion of ethics and transparency

The team warned against reliance on AI detectors due to high false positive rates and bias against non-native and neurodivergent students. Instead, they recommended examining assignment design to preempt academic misconduct.

Key insights and learnings

  • AI Stoplight fosters clarity, trust, and responsible student use of GenAI
  • AI detection tools are flawed and often inequitable; pedagogy is the first line of defense
  • Classroom strategies should balance personalization, interactivity, and ethical discourse
  • GenAI is a literacy: students should learn to analyze, critique, and responsibly use AI-generated content
  • Faculty should build course policies with clarity, equity, and transparency

Application for Higher Education

  • Implement the AI Stoplight Framework across MAOL, MSB, and YourPace courses to standardize expectations
  • Provide training and templates for faculty to craft assignment-specific AI usage policies
  • Avoid reliance on AI detection software; instead focus on instructional redesign and student reflection
  • Host a teaching with GenAI card series for faculty development, modeled on Ohio’s examples
  • Embed IP and ethics discussions into writing-intensive and creative courses
  • Promote prompt engineering and media critique as core learning outcomes in digital literacy curriculum
  • Encourage engineering and health programs to adopt GenAI critique assignments (e.g., “Learn from Mistakes with ChatGPT” in ET 3200)

Photo placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Ohio University’s CTLA GenAI Spotlights display instructional strategy cards, emphasizing transparency, critical engagement, and instructional innovation.

 

 

Higher Education AI Stoplight Policy for Academic Integrity and Innovation

A Shared Framework for Ethical and Effective AI Use in Teaching and Learning


This policy provides clear guidance for both faculty and students to foster ethical, transparent, and pedagogically sound integration of AI tools across all Higher Education academic programs, including master's, undergraduate, traditional, and online programs.


🟢 GREEN ZONE – AI Use is Permitted (Transparent and Supportive Use)

For Students:

  • Use AI for brainstorming, outlining, or drafting academic content.
  • Generate images, videos, or multimedia presentations with tools like Canva AI or Gamma.
  • Employ AI to summarize lectures or readings as a study aid.
  • Utilize grammar and clarity tools (e.g., Grammarly, ChatGPT) for revision.
  • Generate quiz questions for self-assessment.
  • Submit an AI Use Statement explaining the tools and their purpose.

For Faculty:

  • Design assignments that encourage responsible AI use and experimentation.
  • Allow AI-based creativity in projects, presentations, or design tasks.
  • Provide AI-integrated learning modules (e.g., using ChatGPT for case analysis).
  • Encourage prompt engineering for developing digital literacy.
  • Share approved tools and disclosure practices in syllabi.
  • Foster discussions on the benefits and limitations of AI.
  • Model transparent AI use in your own instructional materials.

🟡 YELLOW ZONE – AI Use is Limited (Guided and Disclosed Use)

For Students:

  • Use AI for ideation or early drafts but revise with personal analysis and input.
  • Obtain grammar or structure suggestions from AI without replacing original thought.
  • Submit side-by-side comparisons of AI output and personal edits.
  • Include a detailed AI Use Statement describing how AI influenced the final product.
  • Use AI as a feedback partner, not a final editor.
  • Apply AI in peer-reviewed or team settings only with group consent.
  • Engage in ethical critique of AI-generated information.

For Faculty:

  • Clarify when and how AI tools may be used in assignments.
  • Require AI Use Statements for transparency and reflection.
  • Review AI-generated drafts with students to teach editing and revision.
  • Provide exemplars of limited AI support (e.g., editing vs. writing).
  • Use AI to simulate classroom discussion prompts but not to replace teaching voice.
  • Train students to identify bias, ethical concerns, or misinformation in AI.
  • Assess student work on both content and clarity of AI integration.

🔴 RED ZONE – No AI Use Allowed (Authentic and Protected Work)

For Students:

  • Do not use AI for assessments requiring personal insight, reflection, or critical evaluation.
  • Avoid using AI in exams, tests, or quizzes.
  • Refrain from generating citations, code, or calculations via AI without verification.
  • Do not upload course materials or assignment prompts to AI platforms.
  • Do not use AI to impersonate you in discussions or recorded assignments.
  • Maintain academic honesty in peer reviews and collaborative work.
  • Recognize and honor tasks intended to build core professional competencies.

For Faculty:

  • Identify which assignments require strictly original student work (e.g., reflections, exams).
  • Clearly label "Red Zone" tasks in the syllabus and on Brightspace.
  • Avoid assigning activities easily completed by AI; instead, design reflective or applied learning tasks.
  • Use oral presentations, interviews, or simulations to assess individual competence.
  • Refrain from using AI to grade or write student feedback.
  • Maintain student trust by not relying on flawed AI detection tools.
  • Model academic integrity by completing grading and assessment authentically.

Implementation Support at Higher Education

  • Faculty Toolkits: Provide assignment templates and syllabus language for each zone.
  • Student Orientation: Integrate AI literacy and this policy in new student onboarding.
  • Center for Teaching & Learning: Host workshops and a "Teaching with AI" card series.
  • Brightspace Integration: Use rubric tags and color-coded banners to signal AI expectations.

This Stoplight Policy ensures ethical, equitable, and effective AI engagement across all levels of Higher Education instruction and learning.

 

 

Day 2; Session 4: Using AI to design aligned courses with OERs in higher education
Presenters: Ilana Grimes & Ramona Smith

Session overview & contents
This session focused on the strategic integration of Artificial Intelligence (AI) and Open Educational Resources (OERs) to design courses that are aligned, accessible, and outcomes-driven. Ilana Grimes and Ramona Smith introduced a practical, design-based approach that leverages AI to scaffold the course design process across disciplines and reduce both instructional workload and development costs.

Using AI in combination with OERs not only improves alignment between learning outcomes, instructional materials, and assessments, but also promotes cost-effective and inclusive education. The presenters provided sample AI prompts and an instructional workflow supported by an open-access AI-OER guidebook.

Session summary
Grimes and Smith demonstrated how AI can serve as a scaffolding engine that accelerates instructional alignment across four essential course design elements:

  1. Learning Objectives
  2. Instructional Materials (OERs)
  3. Student-Centered Activities
  4. Formative/Summative Assessments

AI prompts were used to guide instructional design tasks. Examples included:

  • “Write 3 measurable learning outcomes for a college-level unit on [topic].”
  • “List 5 OERs that support this objective: [objective].”
  • “Suggest student-centered activities to help achieve this outcome.”
  • “Write 3 aligned assessments (formative/summative) for this objective.”

The presenters stressed that AI is most useful not merely for content generation but for ensuring coherence and backward design throughout a course. Their AI + OER integration model supports instructional quality and adaptability across all fields, especially where access and affordability are critical.
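The prompt examples above lend themselves to simple templating, so that every design element stays tied to the same objective through the backward-design workflow. A minimal sketch, in which the step names and function are my own illustrative additions while the wording reuses the session's sample prompts:

```python
# Templates mirror the sample prompts from the session; the dictionary
# keys ("objectives", "materials", ...) are illustrative labels.
PROMPTS = {
    "objectives": "Write 3 measurable learning outcomes for a college-level unit on {topic}.",
    "materials": "List 5 OERs that support this objective: {objective}.",
    "activities": "Suggest student-centered activities to help achieve this outcome: {objective}.",
    "assessments": "Write 3 aligned assessments (formative/summative) for this objective: {objective}.",
}

def build_prompt(step: str, **kwargs: str) -> str:
    """Fill the design-step template so each prompt references the same objective."""
    return PROMPTS[step].format(**kwargs)

p = build_prompt("materials", objective="Explain supply and demand equilibrium")
```

Passing the same `objective` value through the materials, activities, and assessments steps is what keeps the four course elements aligned rather than independently generated.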

An accompanying AI-OER Guidebook was introduced:
https://pressbooks.pub/aicopyrightanddataprivacyineducation

 

Balancing AI, Copyright, and Data Privacy in Education: A Guidebook for Educators

 

Key insights and learnings

  • AI accelerates alignment by helping instructors scaffold backwards from learning objectives
  • OERs increase flexibility and affordability while supporting learning outcomes
  • Using structured AI prompts ensures that all course components are interconnected
  • Cross-discipline application is feasible and scalable with proper design templates
  • Faculty need guidance not just on AI tools, but on how to ask AI the right pedagogical questions

Application for Higher Education

  • Train Higher Education faculty to use AI-aligned course design workflows across Graduate, Undergraduate, Traditional, and Online programs
  • Develop a local OER + AI prompt library to support aligned, open-access curriculum
  • Embed sample prompts into course design templates and instructional design consultations
  • Host workshops using the AI-OER Guidebook to support sustainable, high-impact course development
  • Implement this alignment strategy in general education reform or instructional review initiatives
  • Encourage backward course design in all modalities (in-person, online, and competency-based)

Photo placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Ilana Grimes and Ramona Smith presenting how AI can be used to align course objectives, activities, materials, and assessments with OERs.

Day 2; Session 5: Leveraging AI for scalable and effective student evaluation in large-scale online courses
Presenter: Michal Ramot, The Hebrew University of Jerusalem

Session overview & contents
This session explored the challenges and opportunities in applying AI to student evaluation in massive open online courses (MOOCs) and other large-scale digital learning environments. Michal Ramot shared insights from a biology MOOC taught at The Hebrew University of Jerusalem, incorporating story-based content delivery, AI-assisted assessment, and peer engagement strategies.

With a focus on formative evaluation and learner-centered design, the course bypassed traditional exams in favor of AI-supported quizzes, discussion forums, and micro-feedback loops. The presenter emphasized that while AI is a valuable learning support tool, it creates significant challenges for academic integrity in evaluation.

Session summary
Ramot’s biology MOOC used a story-driven teaching model, combining multimedia and narrative to enhance learner engagement. The evaluation design included:

  • No exams
  • Quizzes after each video or reading (with multiple attempts allowed)
  • Social media-style posts and discussion forums, assessed using ChatGPT for rapid feedback
  • Persistent challenges with dishonesty and authenticity in student work

The session called attention to a major dilemma: while AI is exceptional at scaling instruction, it introduces vulnerabilities in high-stakes grading. The presenter labeled this “the elephant in the room.”

A proposed solution was AI-augmented peer evaluation. Students are asked to critique AI-written essays, using their judgment to assess content quality, ethics, and reasoning. This develops both critical thinking and GenAI literacy.

Additional innovations included the use of Notebook LM, a personalized learning application that allows students to ask questions, summarize readings, and track learning progress through generative feedback.
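The "micro-feedback" pattern described above can be sketched independently of any particular model. In the sketch below, the rubric dimensions are illustrative (not from the session), and `llm` is a placeholder for whatever chat-completion call an institution adopts; a stub keeps the loop runnable offline.

```python
from typing import Callable

# Illustrative rubric dimensions; a real deployment would align these
# with the course's stated learning outcomes.
RUBRIC = ("accuracy", "reasoning", "engagement")

def micro_feedback(post: str, llm: Callable[[str], str]) -> dict[str, str]:
    """Ask the model for one short formative comment per rubric dimension.

    `llm` is a stand-in for any chat-completion call (e.g., ChatGPT);
    injecting it keeps the loop testable without network access.
    """
    return {
        dim: llm(f"In one sentence, give formative feedback on the "
                 f"{dim} of this discussion post:\n{post}")
        for dim in RUBRIC
    }

def stub(prompt: str) -> str:
    """Offline stand-in for a real chat-completion call."""
    return "Looks reasonable; cite a source."

feedback = micro_feedback("Enzymes lower activation energy.", stub)
```

Because the feedback is formative and per-dimension rather than a single grade, this pattern sidesteps the high-stakes grading vulnerability the presenter flagged.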

Key insights and learnings

  • AI enhances scalable, formative evaluation in MOOCs and large courses
  • Traditional exams may not suit digital-first or AI-integrated learning environments
  • Peer evaluation and AI critique activities promote critical engagement and reduce plagiarism
  • “Micro-feedback” loops using generative AI can support personalized learning at scale
  • Academic integrity remains a key vulnerability that must be addressed through pedagogy, not just detection

Application for Higher Education

  • Apply AI-supported formative evaluations (e.g., open quizzes and forums) in YourPace and MAOL courses
  • Use Notebook LM or similar tools for self-regulated learning and feedback support
  • Design assignments where students critique AI-generated essays to build critical thinking and media literacy
  • Develop a framework for AI + peer + instructor grading to balance efficiency and academic rigor
  • Train faculty to use ChatGPT or similar tools to process open-ended responses and discussion posts at scale
  • Focus on story-driven instructional design to enhance engagement in online science, health, and technical courses

Photo placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Michal Ramot presents formative AI-powered evaluation strategies from The Hebrew University of Jerusalem's large-scale online biology course.

Day 2; Session 6: Lessons learned from real-world AI PhD projects
Presenters: Dr. Luann Fortune, Saybrook University & Dr. Sean Nufer, The Community Solution

Session overview & contents
This session explored practical applications of AI within graduate-level education, focusing on real-world doctoral projects and course-based assignments that scaffold AI literacy and critical engagement. Drs. Luann Fortune and Sean Nufer shared their experience with designing and implementing an AI-based critique assignment in a graduate core systems course, running since Fall 2023.

The presenters addressed institutional needs for university-wide AI policies and coordinated instructional strategies that address misinformation, academic honesty, and student confusion about reference quality, especially when working with AI-generated outputs.

Session summary
The central feature of the session was an AI-based critique assignment used in a graduate-level core course since Fall 2023. This assignment is scaffolded across the term and designed to help students engage with systems-level content using AI tools to analyze, critique, and refine information.

Assignment components include:

  • Using more than one prompt to compare and evaluate AI responses
  • Providing students with visual templates and video tutorials
  • Encouraging prompt iteration to generate more original or useful responses
  • Sharing university-wide AI use guidelines and citation policies

The presenters noted a rise in reference problems, including fictional or hallucinated citations that are increasingly hard to detect; copy leaks and misinformation are also growing concerns. They cited figures suggesting that OpenAI model hallucination rates have risen with newer versions (from roughly 16% in GPT-3.5 to as high as 79% in GPT-4.0 on some benchmarks), posing challenges in academic work, particularly in health-related areas such as vaccine science and clinical documentation.

The presenters called for AI policy development, institution-wide training, and scaffolded coursework to counteract misinformation and promote critical AI use.

Key insights and learnings

  • Scaffolded AI critique assignments help graduate students engage in higher-order thinking
  • Iterative prompting and critique reduce blind trust in AI outputs
  • AI-generated misinformation can pose serious risks in academic and professional domains
  • Providing tutorials, visual guides, and policy references empowers ethical AI use
  • Coordinated instructional efforts are needed to prevent spread of AI hallucinations in student work

Application for Higher Education

  • Incorporate AI-based critique assignments into MAOL and MSB graduate courses (e.g., AI generates → students evaluate)
  • Develop a university-wide AI misinformation awareness module, especially for health, education, and business fields
  • Train faculty and students to use multiple prompts and prompt engineering for information comparison
  • Share video tutorials and templates to guide responsible AI use
  • Collaborate with academic affairs to design and enforce AI citation and reference policies
  • Monitor AI evolution and hallucination rates to update instructional practices annually

Photo placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Slide showing Saybrook University’s AI-based critique assignment cycle: Planning, Monitoring, Evaluation in a graduate core course.

Day 2; Session 7: How AI will save the humanities
Presenter: (Name not provided in original file)

Session overview & contents
This session challenged prevailing fears about the impact of generative AI on creativity, academic integrity, and humanistic education. Instead of viewing AI as a threat, the presenter reframed it as a tool for revival—particularly within the declining field of the humanities. The discussion addressed the structural and cultural challenges facing humanities departments, from decreased reading stamina to the prioritization of measurable, career-focused outcomes.

Using real-world examples and the REACH prompting framework, the session illustrated how AI can restore curiosity, develop critical thinking, and reposition essential skills for the digital age.

Session summary
The session began by outlining the decline in humanities education:

  • Shrinking enrollments in literature, philosophy, and arts disciplines
  • Institutional pressure for “data-driven” outcomes
  • Students devaluing introspection and nuance in favor of career utility
  • A cultural shift from deep reading to scan-based content consumption

However, the session argued that AI can support a revival of humanities pedagogy through:

  • Interactive and curiosity-driven learning
  • Student-generated prompts that reframe inquiry as a creative, intellectual act
  • AI summaries or pre-reading guides to scaffold engagement for Gen Alpha learners

An assignment strategy was shared where students create their own AI prompts using the REACH Framework:

  • R (Role) – Define the AI’s persona
  • E (Explain) – Clarify the task
  • A (Audience) – Identify who the output is for
  • C (Context) – Provide narrative and purpose
  • H (Handoff) – Specify the expected response or format

Example: Students create a prompt asking AI to defend a literary character’s moral complexity or compare philosophical themes in different cultural contexts. They then share, critique, and revise these prompts in class, treating prompt engineering as a rhetorical and writing skill.
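As a concrete handle on prompt construction, the sketch below assembles a REACH-structured prompt from its five parts. The function name and parameter names are my own, mapped directly onto the R-E-A-C-H letters above; the sample values echo the literary-character example.

```python
def reach_prompt(role: str, explain: str, audience: str,
                 context: str, handoff: str) -> str:
    """Compose a REACH-structured prompt; each argument maps to one letter."""
    return "\n".join([
        f"Role: You are {role}.",
        f"Task: {explain}",
        f"Audience: {audience}",
        f"Context: {context}",
        f"Handoff: {handoff}",
    ])

p = reach_prompt(
    role="a literary critic",
    explain="Defend the moral complexity of Jay Gatsby.",
    audience="First-year literature students.",
    context="A seminar on unreliable narration in modernist fiction.",
    handoff="Reply with a 300-word argumentative paragraph.",
)
```

Treating each slot as a named parameter makes the peer-review step concrete: classmates can critique a single component (say, an underspecified Context) without rewriting the whole prompt.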

Key insights and learnings

  • AI can revitalize humanities education by encouraging questioning and curiosity
  • Prompt creation becomes a form of rhetorical inquiry and writing craft
  • Peer review of AI prompts fosters collaboration, creativity, and metacognition
  • Humanities should lead ethical GenAI use, bridging gaps across STEM and liberal arts
  • “Soft skills” can be reframed as “essential skills” in a GenAI-influenced workforce

Application for Higher Education

  • Launch AI-supported assignments in humanities and social science courses, using the REACH framework
  • Encourage students to design and refine their own prompts as a method of literary, historical, or philosophical analysis
  • Integrate AI-guided pre-reading summaries for learners with short attention spans or reading anxiety
  • Reframe YourPace general education courses to showcase humanities as essential to AI ethics, critical thinking, and communication
  • Use peer-review sessions to critique and improve prompts, emphasizing clarity and curiosity
  • Provide professional development for humanities faculty in AI-enhanced teaching and prompt-based learning design

 

Day 2; Session 8: The 4D framework for AI fluency
Presenter: Claude Sonnet, University College Cork (Ireland)

Session overview & contents
This session introduced a pedagogical framework designed to help students develop responsible and reflective AI literacy. Claude Sonnet from University College Cork presented the 4D Framework for AI Fluency, which provides a scaffold for engaging with AI tools in creative, academic, and professional contexts. The 4Ds—Delegation, Description, Discernment, and Diligence—establish a shared language for evaluating how students interact with generative AI throughout the learning process.

The framework was illustrated using a creative writing assignment, and the session included discussion on misuse, transparency, and the importance of diligence in AI-assisted work.

Session summary
Sonnet outlined the 4D Framework as follows:

  1. Delegation – Setting goals and deciding whether or not AI is appropriate for the task
  2. Description – Prompting AI clearly and effectively to generate useful output
  3. Discernment – Critically evaluating the quality, relevance, and reliability of AI responses
  4. Diligence – Taking full responsibility for how AI is used, including documentation and ethical reflection

A creative writing assignment was used to operationalize the framework:

  • Students selected a topic (Delegation)
  • They brainstormed with AI and prompted it to generate outlines and drafts (Description, Discernment)
  • Using “Track Changes,” students revised AI-generated content to improve style, substance, and alignment with course goals (Discernment)
  • Finally, they submitted a Diligence Statement, disclosing how AI was used and what parts they retained or modified

The session also highlighted misuse scenarios (e.g., using AI for personal reflections), demonstrating that simply citing AI use does not equal responsible engagement. To address these risks, Sonnet encouraged educators to incorporate AI misuse case studies, such as those found in the Artificial Intelligence Incident Database.

Key insights and learnings

  • The 4D Framework supports metacognitive reflection about AI use in learning
  • Structured prompting and revision teach students to engage actively, not passively, with AI tools
  • Diligence Statements encourage transparency and help clarify authorship boundaries
  • Not all tasks are suited to AI; misaligned delegation can dilute learning goals
  • Real-world incident databases can help students understand the stakes of AI misuse

Application for Higher Education

  • Incorporate the 4D Framework into GenAI policy education, particularly in MAOL, MSB, and YourPace writing courses
  • Require Diligence Statements on AI-assisted submissions, including reflections on how AI was used and evaluated
  • Train faculty to design assignments aligned with the 4D principles, especially in courses involving creative, analytical, or research writing
  • Use case studies from the AI Incident Database to develop students' ethical reasoning and discernment
  • Develop a GenAI rubric at Higher Education that evaluates student use of AI across the four D domains

Photo placeholder
📸 Session Photo – Teaching and Learning with AI 2025 Conference, Orlando
Caption: Claude Sonnet presents the 4D Framework for AI Fluency, illustrated through a creative writing assignment and a case of misaligned delegation.

Day 3; Session 1: AI-resistant assignments in critical writing for engineering majors
Presenter: Dr. Kenneth Berry, Lyle School of Engineering, Southern Methodist University (SMU)

Session overview & contents
This session examined AI-resilient pedagogical strategies in a critical reasoning course for engineering majors at SMU. Dr. Kenneth Berry discussed how the prevalence of generative AI in student writing has raised faculty concerns, with 67% of faculty still viewing AI use as cheating. In this context, Berry emphasized the need for assignment design that cultivates authentic voice, process-based writing, and reflective engagement—instead of merely attempting to detect AI misuse.

 

The course, typically limited to 12–30 students because of its high feedback demand, aims to meet engineering students where they are: teaching critical writing skills while acknowledging their likely reliance on AI.

Session summary
Dr. Berry presented three pillars for AI-resistant writing assessment:

  1. Personalization
    • Students write about personal experiences or reflections that are hard to convincingly generate with AI.
    • Assignments include:
      • “What should I know about you?” (Day 1 writing)
      • Author’s notes with each draft
      • Personal experience explanations in papers
      • Meta-cognitive footnotes describing how class concepts are being applied
    • Caveat: AI can still fabricate convincing stories if not monitored.
  2. Process over product
    • Large assignments are broken into smaller, traceable components:
      • Outlines with argument structure (e.g., hook, thesis, context)
      • Annotated bibliographies (summary, quote, purpose)
      • Drafting, peer review, and mind maps
      • Class presentations, debates, and critical reviews
    • Instructors track student progress and participation using quizzes, attendance, and name recognition
    • Meta-data analysis and engagement records help detect inauthentic work
  3. Voice and tone development
    • Students are coached to develop a distinct, conversational voice while avoiding:
      • Overuse of AI-generated filler
      • Jargon, excessive informality, and unnatural syntax
    • Rules include:
      • “Use ‘I’ and ‘me’ sparingly”
      • “Avoid split infinitives and wordy structures”

Berry also noted that graduate students can responsibly use AI in advocacy writing by optimizing clarity and structure while reflecting on accuracy, bias, and ethics of AI involvement.

Key insights and learnings

  • AI-resistant design focuses on authentic, personal, and process-oriented writing
  • Meta-cognitive prompts and structured outlines build transparency and skill
  • Personal voice development creates textual fingerprints harder to automate
  • Even AI-allowed assignments can be safeguarded through reflection and review stages
  • Faculty should reframe AI as a learning companion, not just a threat

Application for Higher Education

  • Integrate AI-resistant strategies into MAOL, YourPace, and writing-intensive courses
  • Adopt meta-cognitive footnotes and author’s notes in student assignments
  • Train faculty to use process-based writing stages (e.g., mind maps, outlines, annotated bibliographies)
  • Create workshops to support voice development in student writing
  • Encourage policy and advocacy students to use AI responsibly for clarity, impact, and ethical reflection
  • Use LMS analytics (metadata, version history, attendance) to verify engagement
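The last bullet's metadata idea can be prototyped on exported revision histories. The record format below is hypothetical (no LMS exposes exactly these fields), so treat it as a sketch of the signal rather than a Brightspace integration; the thresholds are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Revision:
    """One save event from a (hypothetical) LMS version-history export."""
    seconds_since_last: int  # time elapsed since the previous save
    chars_added: int         # net characters added in this save

def flag_paste_bursts(history: list[Revision],
                      min_chars: int = 800,
                      max_seconds: int = 120) -> list[int]:
    """Return indices of saves where a large block of text appeared almost at once.

    A burst is weak evidence, not proof: it should trigger a conversation
    (author's note, oral follow-up), never an automatic accusation.
    """
    return [i for i, r in enumerate(history)
            if r.chars_added >= min_chars and r.seconds_since_last <= max_seconds]

history = [Revision(3600, 250), Revision(90, 1200), Revision(600, 40)]
flags = flag_paste_bursts(history)
```

Used this way, metadata supports the process-over-product pillar: flagged drafts prompt a reflective review stage rather than a detector-style verdict.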

Day 3; Session 3.2: The hammer and hole analogy
Presenter: Wathall, J.

Session overview & contents

This session explored the nuanced ethical responsibilities and practical implications of integrating AI tools into nursing education. The metaphor of the "hammer and hole" was used to challenge educators: rather than viewing AI as a blunt instrument, we must critically consider its application and consequences within human-centered contexts. The session drew upon empirical studies highlighting how AI is being used by nursing students for both academic and practical adaptation.

Session summary

Key findings revealed that nursing students actively use AI for personalized learning, to bridge the theory–practice gap, and to manage academic time constraints. Notably, international students highlighted AI’s value for cultural adjustment and language support. A central theme was the "open secret" of AI usage among students, which indicates a notable misalignment between institutional policies and learner realities.

Repository

https://stars.library.ucf.edu/traiil/

 

Teaching Repository of AI-Infused Learning | University of Central Florida
 

Furthermore, Ghimire and Qiu (2025) emphasized the need for transparency, ethical responsiveness, and acknowledgment of student agency. These elements, if embraced, can enable nursing education to benefit from AI while preserving core values such as empathy, compassion, and human-centered care.

The session also outlined practical challenges and opportunities, including:

  • Legislative ambiguity and its impact on AI tool adoption
  • Issues related to blinded peer review within LMS platforms
  • Opportunities to enhance retrieval and use of peer reviews for constructive learning outcomes

Key Insights and Learnings

  • AI tools serve as crucial bridges for students navigating complex learning contexts, especially international and nursing students.
  • There is a clear need for institutions to evolve their AI policies to align with actual student practices.
  • Ethical AI frameworks must be co-developed with student input to maintain trust and relevance in professional fields like nursing.
  • The "open secret" of AI usage reflects systemic gaps in transparency and support.

Application for Higher Education

Higher Education could consider piloting AI-integrated support systems for nursing and international students that focus on time management, language enhancement, and adaptive feedback. Establishing a transparent policy and ethics forum—including student voices—may help address institutional disconnects while reinforcing Higher Education’s commitment to compassionate, human-centered education.

 

Conclusion:

Building Higher Education’s AI-Ready Future through Ethical, Equitable, and Evidence-Based Integration

The 2025 Teaching and Learning with AI Conference made one truth abundantly clear: artificial intelligence is not merely a classroom tool—it is a transformational force redefining every layer of the academic ecosystem. From instructional design and student evaluation to institutional governance and graduate policy, AI is prompting higher education to evolve in real time. The question is no longer whether to integrate AI, but how to do so responsibly, transparently, and strategically.

For Higher Education, the insights gathered from the conference call for a multi-tiered response grounded in faculty empowerment, student equity, and institutional foresight. This means embracing AI not as a substitute for learning but as a scaffold for deeper thinking, reflection, and engagement. It means viewing students not as passive users of AI but as co-creators of knowledge, guided by ethical fluency and digital discernment.

Above all, it requires a cultural shift—from resistance to readiness, from policing to partnering, and from disciplinary silos to collaborative innovation. Whether through the development of self-paced faculty AI courses, AI-supported micro-feedback systems, or a university-wide GenAI literacy framework, Higher Education has the opportunity to lead with clarity, courage, and compassion in this new educational frontier.

The table below captures the strategic themes and actionable insights from the Day 1 and Day 2 sessions attended at the conference.

Key Strategic Insights from the AI in Teaching & Learning Conference

Strategic Theme: AI as Learning Companion
  • Key Insight: AI should be a co-pilot, not a threat; students must engage critically and reflectively with AI outputs.
  • Action Item: Promote "productive friction" with AI via critique loops, peer reviews, and metacognitive journaling.

Strategic Theme: Transparent Teaching with AI
  • Key Insight: AI Spotlights and usage declarations foster student trust and ethical engagement.
  • Action Item: Adopt Ohio’s “AI Stoplight” model across programs; require AI use statements in key assignments.

Strategic Theme: Ethical AI Governance & Policy
  • Key Insight: Institutions need flexible, iterative AI governance rooted in faculty governance and academic freedom.
  • Action Item: Develop a campus-wide GenAI policy; engage stakeholders in ongoing review and governance forums.

Strategic Theme: Assessment Redesign
  • Key Insight: AI demands a shift from traditional text-based exams to oral, dialogic, and iterative assessments.
  • Action Item: Introduce oral presentations, process-based assessments, and critique of AI-generated content in MAOL/YourPace.

Strategic Theme: Faculty & Student AI Literacy
  • Key Insight: Both groups need structured training in prompt design, discernment, and AI ethics.
  • Action Item: Launch self-paced AI courses for faculty (based on the U-M model); embed GenAI modules in the YourPace and MAOL curriculum.

Strategic Theme: Scalable, Inclusive Course Design
  • Key Insight: AI + OERs support affordable, outcome-aligned, and accessible learning design.
  • Action Item: Train faculty on AI-OER integration using backward design; localize the campus’s own prompt libraries and design templates.

Strategic Theme: AI in Institutional Operations
  • Key Insight: AI can improve advising, retention, and service efficiency through chatbots and predictive analytics.
  • Action Item: Pilot AI chatbots for student FAQs; implement advising analytics to flag YourPace progression risks.

Strategic Theme: AI & Humanities Revival
  • Key Insight: AI can rekindle curiosity, collaboration, and reflection in declining humanities programs.
  • Action Item: Use the REACH prompt framework in Gen Ed courses; encourage prompt design as a rhetorical skill.

Strategic Theme: Graduate & Health Professions Use
  • Key Insight: Scaffolded AI assignments and critique models are key to ethical use in high-stakes professional fields.
  • Action Item: Deploy AI critique assignments in MAOL/MSB; develop hallucination-literacy and citation-accuracy modules.

Strategic Theme: Instructional Innovation & Engagement
  • Key Insight: Micro-apps, avatars, and no-code tools support next-gen learners with short-form, personalized feedback.
  • Action Item: Develop AI-powered microfeedback pilots using short video avatars; build an internal no-code tool map for instructors.

 

Final Reflection

AI is not a passing phase—it is a pedagogical paradigm. Higher Education must now act decisively to integrate generative AI in ways that honor our mission, uplift our students, and empower our educators. That journey begins not with detection software or one-size-fits-all bans, but with a deep commitment to equity, ethics, and excellence in innovation.

This report and its actionable insights serve as a blueprint for strategic, evidence-based implementation. As we move forward, we must build capacity not just in tools—but in trust, transparency, and transformation.

 

Appendix: Ideation #1

YourPace Program (1 credit):

AI Fluency for Learners: Designing Ethical, Equitable, and Effective Learning with Generative AI

Course Learning Objectives (CLOs):

  1. Understand and articulate the foundational concepts and educational potential of Generative AI.
  2. Critically evaluate the credibility, limitations, and ethical implications of AI-generated content in academic settings.
  3. Apply prompt engineering and inclusive strategies to personalize and enhance student learning with GenAI tools.
  4. Integrate GenAI responsibly and strategically into lifelong learning and professional development.

Module 1: Foundations of Generative AI for Learners
Module CLO 1: Understand and articulate the foundational concepts and educational potential of Generative AI.
Topics:

  • What Is Artificial Intelligence? Definitions and Basics
  • GenAI in Learning and Daily Academic Life
    Milestone #1: Write a 1,000–1,200 word reflection analyzing how GenAI could support your current or future learning goals, including use cases and ethical considerations.
    Reflection Video (2–3 minutes): Share your thoughts on how this module has changed or enhanced your understanding of AI's role in your education.

Module 2: Developing Critical and Ethical AI Literacy
Module CLO 2: Critically evaluate the credibility, limitations, and ethical implications of AI-generated content in academic settings.
Topics:

  • Evaluating the Quality and Credibility of AI Outputs
  • Responsible Use of GenAI and Academic Integrity
    Milestone #2: Critically review an AI-generated academic article summary. Evaluate its accuracy, identify any biases, and reflect on its ethical use in student work.
    Reflection Video (2–3 minutes): Discuss your experience evaluating AI-generated content and what this taught you about digital literacy and responsibility.

Module 3: Prompting, Personalization, and Inclusive Learning
Module CLO 3: Apply prompt engineering and inclusive strategies to personalize and enhance student learning with GenAI tools.
Topics:

  • Crafting Prompts for Academic and Study Support
  • Inclusive AI Tools for Diverse Learner Needs
    Milestone #3: Design a personalized GenAI-powered learning plan. Include prompt examples and discuss how this tool enhances accessibility or learning support for you or others.
    Reflection Video (2–3 minutes): Reflect on how prompt design helped you achieve specific learning goals and the inclusivity of AI tools.

Module 4: Learner Empowerment and Digital Confidence
Module CLO 4: Integrate GenAI responsibly and strategically into lifelong learning and professional development.
Topics:

  • AI and Student Autonomy: Becoming a Strategic Learner
  • Building Confidence with AI Tools in Academic and Career Contexts
    Milestone #4: Create a digital action plan to continue using GenAI as a learning partner. Describe how it will help you stay organized, motivated, and reflective in your studies.
    Reflection Video (2–3 minutes): Share how you envision continuing to use AI in your academic and professional life and the habits you’ll build around it.

Final Assessment: Capstone – Presenting Your AI-Enhanced Learning Strategy
Final Project: Prepare a 5–7 minute video or narrated presentation that synthesizes your learning from the course. Address the following:

  • A learning challenge you addressed using GenAI
  • The tools, prompts, and ethical strategies applied
  • Key learning outcomes and personal growth
  • Your future plan for AI-integrated academic or professional development
    Submit alongside a 1–2 page summary explaining your process, ethical reflection, and future learning roadmap.

 

Appendix: Ideation #2

Proposal: AI Micro-App for Audiovisual Feedback: Enhancing Engagement with Virtual Persona Technology

Proposal Summary:
This project proposes the development of a micro AI application designed to deliver short-form, personalized audiovisual feedback using AI-generated virtual personas. Recognizing the shift in student media consumption habits—especially among Gen Z and Alpha learners—this tool aims to reimagine traditional feedback practices by converting written comments into <60-second video or voice responses. These responses will be delivered through dynamic, emotionally resonant avatars tailored to student preferences.

The app will integrate open-source NLP models with avatar-based text-to-video platforms (e.g., Synthesia, HeyGen) to foster more engaging and digestible communication. Initial deployment will focus on one or two Higher Education courses to evaluate impact on learner engagement, motivation, and comprehension.
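As a rough sketch of the pipeline described above: before any avatar rendering, written feedback must be trimmed to a script that fits the sub-60-second target. The code below assumes an average text-to-speech pace of roughly 150 words per minute (an assumption, not a platform guarantee), and represents the avatar-rendering step (Synthesia, HeyGen, etc.) as a caller-supplied placeholder function, since those vendors' real APIs are not documented here.

```python
def script_for_60s(feedback_text, words_per_minute=150, max_seconds=60):
    """Trim written feedback to a script that fits the target video length.

    Returns (script, estimated_seconds). The 150 wpm pace is an assumed
    average for synthesized speech.
    """
    max_words = int(words_per_minute * max_seconds / 60)
    words = feedback_text.split()
    if len(words) <= max_words:
        return feedback_text, len(words) * 60 / words_per_minute
    return " ".join(words[:max_words]) + " ...", max_seconds

def feedback_to_video(feedback_text, render_fn, persona="friendly_mentor"):
    """Convert a written comment into an avatar video.

    `render_fn` stands in for a real text-to-video API call (hypothetical
    signature: render_fn(script=..., persona=...)).
    """
    script, _estimated_seconds = script_for_60s(feedback_text)
    return render_fn(script=script, persona=persona)
```

A pilot could log the estimated durations alongside student engagement data to test whether the sub-60-second constraint actually improves feedback uptake.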

 

Estimated Budget: $5,000–$10,000 (Prototype development, testing, and pilot deployment)

Expected Impact:

  • Increased student attention and retention of feedback
  • Stronger emotional and cognitive connection to learning
  • Data-driven insights to inform scalable AI feedback strategies for Higher Education

This initiative supports Higher Education’s broader mission to embrace pedagogical innovation through ethically applied AI.

 

CONFIDENTIALITY NOTICE: This document is confidential and intended solely for internal use by Dr. Jeonghwan (Jerry) Choi. Unauthorized sharing is prohibited.

INTELLECTUAL PROPERTY NOTICE: All intellectual content herein is the property of Dr. Jeonghwan (Jerry) Choi. Reproduction or distribution without permission is strictly prohibited.

 

© 2025 Dr. Jeonghwan (Jerry) Choi.

 

 


2025. 06. 19: Transformed to Personal Vision

 

 

General_TL_AI_Conference(20250523-30)_Debreifing_Ver3.pdf
10.71MB

 

 

2025. 06. 03: Initially archived 

 

 

 
