Within reasonable boundaries, LSA instructors can choose whether to use Generative AI tools in the classroom or as part of assignments. General guidelines include:
- AI should not be used with human subject research information.
- If you use or allow your students to use AI for any class activities, use the UM-GPT instance or Maizey to preserve student data security.
- Your department or discipline may have additional guidelines or recommendations, so check in with your department.
Explanations First
“No, no! The adventures first, explanations take such a dreadful time.” –Lewis Carroll
While it is tempting to dive directly into the adventure, whether that be use or avoidance of GenAI, we recommend first and foremost that instructors take the time to give students the explanation.
For those instructors who are concerned about use of GenAI to do classwork for students (write essays, answer test questions, etc.), this will mean spending some class time to discuss with your students why it is important for them to learn those skills for themselves. What is the value, to the students, in putting in the time and effort to learn those skills? While this may seem self-evident to instructors who have been in the field for years, remember that students are only just beginning. The rationale behind course activities is often not at all self-evident to them.
For those instructors who are excited about the possibilities of GenAI and want to let their classes sample this new tool, the explanation will mean making time to discuss with students what current AI capabilities and limitations are, and encouraging students to apply their own developing skills in evaluation, research, and critical thinking to any GenAI output.
In either case, it is important that expectations are clearly articulated in the syllabus and reinforced when assignments are given.
If you disallow Generative AI use:
Clearly state in the syllabus that GenAI tools are not allowed, that all work needs to be and is assumed to be the student’s own effort, and that GenAI use will be considered an act of academic dishonesty. Instructors should also give concrete reasons why the use of GenAI tools would mean missing out on development of critical skills.
If you allow Generative AI use:
Clearly state in the syllabus what activities GenAI tools will be used for, and that use of those tools is limited to those activities. Define any guidelines, such as how to attribute or indicate material generated by the AI tool. It may be helpful to note that these guidelines apply specifically to your course, and that other instructors may have other guidelines.
Sample Syllabus Statements
We have curated sample syllabus statements from a range of peer institutions. Feel free to adapt these for your own courses.
Sample syllabus/course policy statements
GenAI Tools
With the rise of numerous GenAI-based tools from a wide variety of vendors, who may or may not be trustworthy, it is important to use tools that maintain data security. Tools provided by the University of Michigan, such as U-M GPT, are private, secure, and free for U-M faculty and students. Data you share while using these tools will not be used to train the underlying models and hence is not at risk of unacceptable exposure.
LSA-licensed tools such as Harmonize and Tophat already have a Data Security Agreement that protects student and instructor data. Many of these vendors are adding GenAI-based features, such as assistance with improving assignment prompts, and these are also generally safe to use.
Prompt Literacy
Regardless of the GenAI tool, the most successful output comes from well-designed prompts. The resources below can help you build your prompt literacy skills.
- Prompt Literacy in Academics (website)
- Generative AI Prompt Literacy (self-paced course)
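To illustrate (this example is our own, not drawn from the resources above), a well-designed prompt typically specifies a role, a task, relevant context, constraints, and a desired output format, rather than posing a bare question:

```text
Vague prompt:
  Write discussion questions about the French Revolution.

Better-designed prompt:
  You are a teaching assistant for an introductory undergraduate
  European history course. Write five open-ended discussion
  questions about the causes of the French Revolution. Each
  question should require students to compare at least two causes,
  be answerable in a 10-minute small-group discussion, and avoid
  yes/no phrasing. Format the output as a numbered list.
```

As always, review and revise the output before using it with students.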
FAQs for GenAI
Can I grade student work with GenAI tools like ChatGPT?
No. Student work cannot be graded with AI tools. They are not sufficiently reliable, nor are they designed for holistic analysis and evaluation.
Additional reasons to avoid using AI to grade include:
- AI tools are not equipped to evaluate, or even identify, complex issues such as ethical dilemmas, cultural differences, or social context present in student work.
- AI models learn from whatever data they are trained on, and many studies have shown that such training data contains biases and lacks diversity. This means that the AI might perpetuate those biases in grading, potentially leading to unfair evaluations.
- AI-generated feedback lacks the personalized insights and guidance that instructors have a professional duty to provide. Students learn and improve most from individualized feedback tailored to their abilities and needs. Students may also need to seek clarification on grading feedback or challenge a grade, and AI would not be able to help in those situations.
- Using AI to grade student work raises ethical concerns about accountability and the abrogation of teaching responsibilities. Moreover, if faculty have clearly stated guidelines forbidding student use of GenAI in their assessments, students may rightly question the ethics of faculty using AI tools to evaluate that work.
Further, remember that student work remains the intellectual property of the student, whether it is submitted for a grade or not (see also our Copyright and Intellectual Property recommendations as regards student work). Currently, OpenAI’s privacy policy conflicts with LSA requirements. For this reason, using any non-UM AI tools for class assignments is strongly discouraged.
Can I use a GenAI detector to check student work for AI material?
No. LSA does not support the use of any plagiarism detection tools. Further, GenAI detectors cannot accurately detect AI output; there is currently no reliable automated way to check for AI material in student work. Instead, explain your expectations for AI use, and the consequences of misuse, in your syllabus and in class discussion. As with “classic” plagiarism, misuse is often the result of students genuinely not knowing or understanding what is allowed and why it is a problem.
What are the capabilities of GenAI as a teaching and learning tool?
While GenAI can’t replace the work faculty do in designing or teaching courses, it can assist with some course preparation tasks. If you use it this way, always verify that what it generates is accurate, and understand that anything you enter as a prompt may become part of the system’s dataset in ways you will not be able to withdraw.
With that caveat, here are some example uses for GenAI in course preparation:
- Course Activities: GenAI can assist in drafting activities, such as prompts, exercises, and rubrics.
- Discussion and Reflection: Chatbots like ChatGPT can simulate conversations related to the subject matter, enhancing student engagement and reflection on the topic. For language learning, this can include conversational practice in the language being studied. Remember that for student interactions, you must use UM-GPT to protect student data and privacy.
- Image Creation: Faculty can use GenAI to create images that adhere to specific design principles, such as balance, contrast, alignment, and hierarchy. Students can analyze these images to understand how those principles work in practice. GenAI can also generate diagrams and models to supplement course material.
What are the limitations of GenAI as a teaching and learning tool?
There are still many limitations to what we are calling generative AI. These include:
- It is not 100% accurate. In fact, ChatGPT is currently only about 70% accurate, even for simple factual queries. That means that all content created by GenAI must be verified by the user.
- It “hallucinates.” Not only are these tools not completely accurate, they will confidently provide made-up or false information. This is currently a consequence of how Large Language Models (LLMs) are designed.
- Answers can’t be replicated easily. LLMs incorporate randomness, so they rarely reply the same way twice. If a class of students each enter the same prompt, they may receive vastly different results, and it is hard to anticipate what the results of a particular prompt will be. This can be a significant difficulty in class activities.
- GenAI programs have been created using biased training data, which means the results they provide may also be biased. While some bias is easy to spot, it is often subtle enough to avoid simple detection. Be particularly careful with prompt wording, and examine all GenAI output for potential bias.
- GenAI has a “yes bias” when asked if it wrote something, so it cannot be used to accurately identify potential AI use.
- Prompt literacy matters. Prompt literacy refers to the skill of crafting effective prompts or input instructions to guide the AI in generating desired outputs. Writing effective prompts is crucial when working with GenAI because it directly influences the quality, relevance, and accuracy of the output generated by the LLM. In many cases, it will be less work to just write the desired results yourself.
Further Resources
GenAI at U-M site: This site offers some resources and access to the secure UM tools such as UM-GPT.
The Full Report on GenAI at U-M: Detailed guidelines are available from the full report authored by U-M Generative Artificial Intelligence Advisory (GAIA) Committee.
Using Generative AI for Scientific Research: Learn about the best practices for navigating the usage of Generative AI in your research.
Schedule a consultation with the Learning & Teaching Consultants to discuss GenAI in relation to your particular classes.