
AI-Proofing Assignments in OT Education
Heather Kuhaneck, PhD OTR/L FAOTA
Feb 2, 2025
Artificial Intelligence (AI) is transforming education, whether we all like it or not! This post discusses the prevalence of academic dishonesty with AI and provides practical strategies for “AI-proofing” assignments while leveraging AI as a learning tool. (Full disclosure: ChatGPT assisted with generating some of the ideas for outsmarting itself; others came from the references cited at the end.)
Why do we need to AI proof our Assignments?
While AI tools like ChatGPT can support student learning, they also present challenges to maintaining academic integrity. Evidence is beginning to accumulate suggesting that not only are students confused about what constitutes cheating in the age of AI, but many may be using AI for their work without attribution (see the reference list at the end). The numbers vary and are likely skewed by students’ willingness to answer truthfully; however, some recent surveys suggest that as many as 56% of students are using AI in their work. This may be a frustrating fact for educators, but it is a fact nonetheless. Research also suggests that it can be quite difficult for faculty to determine when students have used AI to assist with their work, and there are not yet tools that accurately identify AI-produced text. So, what do we do? Knowledge is power, and as occupational therapists, we have the skills to adapt!
What Makes an Assignment Vulnerable to AI?
Assignments are susceptible to AI-generated responses if they rely on factual recall or basic explanation (e.g., “Describe the sensory processing challenges in children with autism.”). Similarly, many basic essay prompts are easy for AI to answer. Assignments that lack a requirement for individualized reasoning, reflection, or problem-solving and those without multi-step processes or unique, hands-on learning experiences are also easier for students to complete using AI. Many of the typical written assignments in an occupational therapy program can be completed with AI or at least assisted with AI.
If you are a faculty member who has not yet tried this, I highly recommend putting every one of your assignments into ChatGPT to see what you get back. AI can now provide solid answers that may surprise you. AI can create, describe, or analyze fairly comprehensive case scenarios, and it can creatively answer even those prompts that require student reflection on a specific experience (the AI simply makes it up).
Strategies to AI-Proof OT Assignments
| Strategy | Example | Why it Works |
| --- | --- | --- |
| Require reflection and responses based on a specific observation from a YouTube video | Assign video analyses in which students must watch a recorded therapy session and: identify and discuss the best interventions; evaluate the therapist’s approach and suggest alternative strategies; apply a theoretical model. | AI cannot yet view YouTube videos, although it can analyze transcripts of video content, so be sure to focus questions on observations rather than on verbalizations that can be turned into text. |
| Intentionally and purposefully incorporate AI into the assignment | Ask students to use AI on a specific task and then critically analyze its output: “Use ChatGPT to generate an answer to this question. What did it get right? What key elements did it miss?” “Compare an AI-generated intervention plan with one you create manually. What are the strengths and weaknesses of each?” | This teaches students to engage critically with AI, improving their ability to assess AI-generated content. |
| Design case-based tasks with incomplete or ambiguous information | “You are working with a 6-year-old with sensory processing challenges who refuses to participate in fine motor activities. What additional information do you need to determine the interventions you would consider?” Or, make sure the scenarios are complex and allow for multiple possible solutions. Provide the solutions, but have students rank-order them and explain why one is better than another. | AI can generate general responses, but it struggles with nuanced problem-solving that requires judgment. Students who can provide explanations and justifications are more likely to have a true understanding. As of right now, AI tools are largely unable to generate workable solutions to complex, ambiguous problems. |
| Process-oriented work | Require submissions that include items such as concept maps, flow charts, diagrams, and drawings. Ask students to critique AI’s work in creating these types of visual aids, or to explain their rationale for how the visual aid was developed and why the relationships were structured the way they were. Alternatively, require hand-drawn concept maps, or have them created in class on paper with pens or markers. | AI can generate these documents but will have a more difficult time explaining the rationale behind the structure and the relationships highlighted. If students create them by hand in class, you know they are not AI generated. |
| Oral or live demonstrations, or proctored exams | Require students to create a video of themselves demonstrating a specific task, or have them engage in a live debate in a small group. Use live demonstration of skills following provision of a scenario or case. Have students do an activity analysis of themselves completing an activity via video, and ask specific questions about their performance, as opposed to a generic analysis of the activity (which AI *can* complete quite well). | If students are given case information ahead of time, AI might help them prepare, but they will have to complete the final assessment on their own. |
| Group projects | Have students work together to complete a project, analyze a case, or create an intervention. | Given the current prevalence of students who engage in AI “cheating,” it may be harder to get an entire group of students to agree to use AI work without attribution. |
Putting AI to the Test: A Surprising Strategy for Educators
One of the most effective ways to understand AI’s capabilities—and its potential impact on student work—is to test it yourself. You might be surprised by the results!
For example, I recently asked ChatGPT to complete an activity analysis of bike riding, and the response was thorough, well structured, and used appropriate OT terminology. It could even organize the analysis using the OTPF and generate a concept map to visually represent the information.
This strategy serves a dual purpose. Not only does it help educators assess how well AI can perform on specific assignments, but it also provides insight into recognizing AI-generated responses from students. Since AI tends to produce consistent output for similar prompts, reviewing AI-generated responses in advance can make it easier to detect when students rely too heavily on these tools.
Conclusion
AI is here to stay, and OT educators must adapt and ensure students develop the skills necessary for professional practice. While “AI-proofing” assignments is important, AI can also be a valuable tool for student learning when used ethically. Clear guidelines help students understand when and how AI tools are acceptable. AI literacy must now be part of every occupational therapy program. Instead of fearing AI, we can use it strategically—while maintaining the integrity of OT education.
How are you adapting your assignments in the age of AI? Share your experiences and ideas in the comments!
In addition to teaching AI literacy, and in an attempt to AI-proof our curriculum, at Southern CT State’s OT Program all exams will be in-person, competency-based demonstrations. We will use VideoAnt for assessment of observation skills, along with a variety of in-class, case-based learning activities in small groups, where learning will be assessed in real time, for example using strategies from “Making Thinking Visible.” All of our in-person courses will use flipped learning so that class time can focus on developing skills and assessing competencies.
References and Resources
See also: Handbook on AI and Quality Higher Education.
Agha, N. C. (2025). Exploring the Prevalence and Techniques of AI-Assisted Cheating in Higher Education. AI and Ethics, Academic Integrity and the Future of Quality Assurance in Higher Education, 11.
Amigud, A., & Lancaster, T. (2019). 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Computers & Education, 134, 98-107.
Bower, M., Torrington, J., Lai, J. W., Petocz, P., & Alfano, M. (2024). How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence? Outcomes of the ChatGPT teacher survey. Education and Information Technologies, 1-37.
Costley, J. (2019). Student Perceptions of Academic Dishonesty at a Cyber-University in South Korea. Journal of Academic Ethics, 1-13.
Ehrich, J., Howard, S. J., Mu, C., & Bokosmaty, S. (2016). A comparison of Chinese and Australian university students’ attitudes towards plagiarism. Studies in Higher Education, 41(2), 231-246.
Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209.
Gamage, K. A., Dehideniya, S. C., Xu, Z., & Tang, X. (2023). ChatGPT and higher education assessments: More opportunities than concerns? Journal of Applied Learning and Teaching, 6(2).
Guruge, D. B., Kadel, R., Shailendra, S., & Sharma, A. (2025). Building Academic Integrity: Evaluating the Effectiveness of a New Framework to Address and Prevent Contract Cheating. Societies, 15(1), 11.
Kizilcec, R. F., Huber, E., Papanastasiou, E. C., Cram, A., Makridis, C. A., Smolansky, A., … & Raduescu, C. (2024). Perceived impact of generative AI on assessments: Comparing educator and student perspectives in Australia, Cyprus, and the United States. Computers and Education: Artificial Intelligence, 7, 100269.
Lee, V. R., Pope, D., Miles, S., & Zárate, R. C. (2024). Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Computers and Education: Artificial Intelligence, 7, 100253.
Mahato, M., & Gaurav, K. (2023). Collegiate cheating: Understanding the prevalence, causes, and consequences. SocioEconomic Challenges, 7(3), 152-163. https://doi.org/10.61093/sec.7(3).152-163.2023.
Nwozor, A. (2025). Artificial intelligence (AI) and academic honesty-dishonesty nexus: Trends and preventive measures. AI and Ethics, Academic Integrity and the Future of Quality Assurance in Higher Education, 27.
Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT. Journal of Interactive Learning Research, 34(2), 213-237.
Xie, Y., Wu, S., & Chakravarty, S. (2023, October). AI meets AI: Artificial Intelligence and Academic Integrity-A Survey on Mitigating AI-Assisted Cheating in Computing Education. In Proceedings of the 24th Annual Conference on Information Technology Education (pp. 79-83).
