Too Cold, Too Hot, Just Right: Finding a Comfortable Standard for AI Policy 

Elise Romero

27 October 2023

ChatGPT is an artificial intelligence (AI) chatbot built on a large language model (LLM) that responds to prompts in human-like ways. Created by OpenAI, the tool has sent people into a frenzy of positivity, outrage, and confusion, especially teachers, given the many ways students could use the AI in their classrooms (1). Many secondary schools have moved from outright bans of the tool to hosting AI coding workshops with tech companies for their students, reflecting a larger cultural shift in attitudes toward AI (2). Nonetheless, policy on AI use in academic spaces remains unstandardized and varies from teacher to teacher, so it is immensely important that institutions clarify their expectations for students' use of ChatGPT, and specifically that they cultivate a critical relationship between students, chatbots, and writing.

AI tools like ChatGPT should never replace the writing process or do the job of writing for students. Learning to write well is a vital skill for young students, one that builds critical thinking abilities they will use for the rest of their lives. While AI writing tools can act as capable sounding boards and assistants that supplement a student's natural thought process, relying on them as a primary source of writing will stunt the student's creative reasoning and damage their overall educational development. There is a significant difference between using an AI writing tool as a sounding board or assistant and using it as your sole source of writing. While I firmly believe that a policy of moderation concerning the use of AI is necessary, banning these tools entirely is both impossible and undesirable. Truthfully, I am excited about the possibilities they offer for making writing more accessible, especially to young writers. All the same, guidance is absolutely necessary to uphold academic integrity and to ensure tools like ChatGPT are not mistakenly regarded as infallible resources.

Currently, discourse about AI policy is largely relegated to individual course curricula and syllabi, because many schools have left the decision of whether to permit AI writing tools to individual professors. The University of Pittsburgh Writing Institute has suggested potential policies for professors to adopt, focusing on syllabus language that makes prohibitions or limitations on the use of AI writing tools clear (3). For professors who do choose to allow AI writing tools, they ask that students cite where the writing is not their own and disclose their interactions with AI. Other schools, like the University of Pennsylvania, include a statement warning of a possible "stifl[ing of] your own independent thinking and creativity" as well as an advisory note "that the material generated by these programs may be inaccurate, incomplete, or otherwise problematic" (4). However, not all professors will choose to include such messages, and every class will take a different stance.

Considering the implications of AI technology, there is currently far too much variation in guidance concerning the use of these tools. While I agree that the decision of whether AI writing tools are allowed in class should remain a matter of individual preference, a national standard for what is unacceptable must be reached. Just as plagiarism is acknowledged across the United States as passing someone else's intellectual work off as your own, we must define what counts as unacceptable use of AI writing tools in the classroom. The European Network for Academic Integrity considered AI writing tools when proposing its definition of "unauthorized content generation": the production of academic work with unapproved assistance from people or technology (5). This definition marries the idea of academic integrity to AI policy without sacrificing succinctness. I also believe that more than just syllabus language needs to be provided, such as workshops and resources that make AI literacy accessible to professors, teachers, and students alike. Everyone should be involved in the process of understanding AI writing tools critically; merely trusting the outputs of the AI is not enough. We must question the results we receive and acknowledge that AI writing tools do not supersede other knowledge systems just because they are technological. AI writing tools are very good at some things, like drafting outlines or emails, but their generic and repetitive responses cannot sustain a student through an entire paper (at least, not well). Therefore, those teaching writing and composition must include critical engagement with AI, or they risk students accepting whatever ChatGPT returns without a second thought.

I advocate for this policy because AI will be a part of all of our futures, and ignoring it could create a gap in understanding that significantly affects future job outcomes (6). If we ignore these conversations in favor of outright bans, we only stand to put people at a disadvantage. Digital literacy is an increasingly important skill encompassing a person's ability to critically engage with online materials; just as knowing how to use Microsoft Office is useful today, companies predict that knowing how to use ChatGPT will set people ahead in the future job market. AI writing tools are undeniably part of our ecosystem, and if we want positive interactions with them, we must help create those interactions. Without proper guidelines, we may never see the benefits generative AI can have on writing; a critical, ethical engagement means we work with AI writing tools sparingly and never undervalue the importance of writing and thinking for ourselves. To fully reap the benefits of these tools as the technology continues to develop, clear guidelines governing the use of AI are an absolute necessity.



References

(1) Shrivastava, Rashi. 2022. "Teachers Fear ChatGPT Will Make Cheating Easier than Ever." Forbes, December 12, 2022. https://www.forbes.com/sites/rashishrivastava/2022/12/12/teachers-fear-chatgpt-will-make-cheating-easier-than-ever/?sh=3d2ef6031eef.

(2) Singer, Natasha. 2023. "Hey, Alexa, What Should Students Learn About A.I.?" The New York Times, June 8, 2023. https://www.nytimes.com/2023/06/08/business/ai-literacy-schools-amazon-alexa.html.

(3) University of Pittsburgh Writing Institute. n.d. "AI Academic Integrity Policy Suggestions." Accessed October 11, 2023. https://www.writinginstitute.pitt.edu/sites/default/files/PDFs/ai_policies_6.1.23.pdf.

(4) "Syllabi Policies for AI Generative Tools." n.d. Google Docs. https://docs.google.com/document/d/1RMVwzjc1o0Mi8Blw_-JUTcXv02b2WRH86vw7mi16W3U/edit.

(5) Foltýnek, Tomáš, Sonja Bjelobaba, Irene Glendinning, Zeenath Reza Khan, Rita Santos, Pegi Pavletic, and Július Kravjar. 2023. "ENAI Recommendations on the Ethical Use of Artificial Intelligence in Education." International Journal for Educational Integrity 19 (1). https://doi.org/10.1007/s40979-023-00133-4.

(6) Savov, Vlad, and Debby Wu. 2023. "Nvidia CEO Says Those without AI Expertise Will Be Left Behind." Bloomberg, May 28, 2023. https://www.bloomberg.com/news/articles/2023-05-28/nvidia-ceo-says-those-without-ai-expertise-will-be-left-behind?embedded-checkout=true.
