Campos Business Research self-publishes literature reviews focused on organizational management and business.
May 2025 Literature Review
“Faculty AI Literacy and the Detection of AI-Generated Student Work in Higher Education: A Review of Current Challenges and Responses”
By Steven Campos
Academic integrity is foundational in higher education: the work a student submits should be entirely their own. With the rise of generative AI tools like ChatGPT, however, that guarantee is being questioned. As one scholar puts it, “I argue that the way ChatGPT and other AI powered text generators are used could surely undermine academic integrity” (Eke, 2023). The concern is that students might use ChatGPT to write essays for them in an effort to save time and manage tasks better (Niloy et al., 2024), yet no one currently knows the best course of action to stop AI-generated homework or essays. One study shares this sentiment, noting that “there is a pressing need for adequate training and adaptation periods. These are essential for researchers to become proficient in effectively utilising AI tools, thereby maximising their potential in academic work” (Khalifa & Albadawy, 2024). This literature review explores one solution to this problem: faculty should learn to use ChatGPT and become AI literate in order to detect AI-generated student work in higher education. It does so by examining how AI literacy equips professors with the skills to detect AI-generated student work.
1. The rise of generative AI in higher education
Generative AI refers to AI systems that produce new content, such as text, on demand. According to a group of researchers in their paper titled “Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study” (Stöhr et al., 2024), who surveyed 6,000 university students in Sweden, one third of the students were using generative AI. But why do students in higher education use generative AI like ChatGPT to produce essays and homework when they could do the work themselves? A group of researchers answered that question in their paper titled “Why do students use ChatGPT? Answering through a triangulation approach” (Niloy et al., 2024), concluding that students use ChatGPT in an effort to save time and manage tasks efficiently. They also found other factors, including inseparability of content, ease of access, aided learning, and cognitive miserliness of the user, to have a statistically significant effect on the intention to use ChatGPT. This is understandable, because full-time students have a lot on their plate and using generative AI can lighten the load. However, not every student uses ChatGPT for this reason. According to the researchers behind “The use of generative AI by students with disabilities in higher education” (Zhao et al., 2025), students with disabilities are using ChatGPT to overcome barriers, and generative AI helped reduce anxiety around complex tasks for neurodivergent students.
2. Challenges in detecting AI-generated work
Some researchers are already trying to come up with ways of detecting AI-generated work. Ali Garib and Tina A. Coffelt, in their paper titled “DETECTing the anomalies: Exploring implications of qualitative research in identifying AI-generated text for AI-assisted composition instruction” (2024), proposed what they call the DETECT method, which follows six steps. First is topic modeling: they argue that people can tell whether something is AI-generated from characteristic patterns in the text’s topics. Second is textual analysis: humans write with an audience in mind while AI does not, so this contrast can help flag AI-generated work. Third is content analysis, which means finding unusual patterns within the text. Fourth is emotion and opinion mining: AI tends to stay in one emotional register when stating opinions, whereas shifting emotion is a nuanced characteristic of human writing. Fifth is accuracy analysis, since AI is sometimes wrong and will include false information in the text. Sixth is the tally, where the evaluator weighs all of the analyses together to judge whether the work was AI-produced or human-written; a minimal sketch of this tally step appears at the end of this section.

The limitation of this approach is that generative AI improves every day, meaning that as ChatGPT’s writing improves, the method could become obsolete. A group of researchers partially supports this view in their paper titled “Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays” (Fleckenstein et al., 2024). They concluded that, in general, teachers cannot differentiate between student-written texts and AI-generated texts, in part because ChatGPT can be instructed to write a low-quality essay on purpose. When generative AI produced a high-quality essay, it was easier to tell student-written and AI-generated texts apart; when it produced a low-quality essay, it was virtually impossible. One limitation of the DETECT method, then, is that students can bypass it by instructing generative AI to produce low-quality essays.

Another group of researchers sought a way around this issue in their paper titled “A comparative study of thematic choices and thematic progression patterns in human-written and AI-generated texts” (Yang et al., 2024), where they looked for distinguishing patterns between human-written and AI-generated texts. They found that, in interpersonal themes, the generative AI avoided expressing its viewpoints as positive, negative, or neutral compared with human-written texts, and that human-written texts are more dynamic and coherent than AI-generated texts, tending toward linear thematic progression.

A further group of researchers tried to differentiate human-written and AI-generated sentences with a model, in their paper titled “Classification of human-written and AI-generated sentences using a hybrid CNN-GRU model optimized by the spotted hyena algorithm” (Ragab et al., 2025). They developed a model called “CHWAIG-DLSHO,” which classified sentences into different categories and, surprisingly, was able to detect the difference with an accuracy rate of over 99%. The limitation here, however, is that a machine is spotting the difference rather than a human; a sketch of the general shape of such a hybrid model also appears below.
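To make the tally step concrete, here is a minimal sketch in Python. It assumes each of the first five analyses has already been reduced by the grader to an AI-likelihood score between 0 and 1; the score fields, equal weighting, and 0.6 threshold are my own illustrative assumptions, not part of Garib and Coffelt’s published method.

```python
# Hypothetical sketch of the DETECT tally step (step six).
# Assumes a grader has already scored the text on the five analyses,
# each as an AI-likelihood score from 0.0 (clearly human) to 1.0 (clearly AI).
from dataclasses import dataclass

@dataclass
class DetectScores:
    topic_modeling: float      # characteristic AI topic patterns
    textual_analysis: float    # lack of audience awareness
    content_analysis: float    # unusual patterns within the text
    emotion_mining: float      # flat, single-emotion opinions
    accuracy_analysis: float   # fabricated or false information

def tally(scores: DetectScores, threshold: float = 0.6) -> str:
    """Combine the five analyses into a single judgment."""
    values = [
        scores.topic_modeling,
        scores.textual_analysis,
        scores.content_analysis,
        scores.emotion_mining,
        scores.accuracy_analysis,
    ]
    mean_score = sum(values) / len(values)
    return "likely AI-generated" if mean_score >= threshold else "likely human-written"

# Example: a text that reads emotionally flat and contains factual errors
print(tally(DetectScores(0.7, 0.5, 0.6, 0.8, 0.9)))  # -> likely AI-generated
```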
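The CHWAIG-DLSHO pipeline itself is more elaborate (the spotted hyena algorithm tunes its hyperparameters), but the general shape of a hybrid CNN-GRU sentence classifier can be sketched in a few lines of Keras. The vocabulary size, sequence length, and layer widths below are illustrative assumptions, not the paper’s tuned values.

```python
# Minimal hybrid CNN-GRU binary classifier: human- vs. AI-generated sentences.
# Hyperparameters here are illustrative; the published CHWAIG-DLSHO model
# tunes such values with the spotted hyena optimization algorithm.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20_000  # assumed tokenizer vocabulary size
MAX_LEN = 64         # assumed maximum tokens per sentence

model = keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),        # token embeddings
    layers.Conv1D(64, 5, activation="relu"),  # CNN: local n-gram features
    layers.MaxPooling1D(2),                   # downsample feature maps
    layers.GRU(64),                           # GRU: sequential dependencies
    layers.Dense(1, activation="sigmoid"),    # P(sentence is AI-generated)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```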
3. The concept of AI literacy among faculty
First, let’s address what AI literacy is: the understanding of the ins and outs of using a generative AI engine. By learning to use AI and becoming literate in it, faculty can gain an understanding of what generative AI can be used for and what its limitations are. For example, by using ChatGPT to write example essays, professors can develop a feel for how ChatGPT writes and spot those nuanced details when a student submits work, rather than relying on complicated software and models to tell the difference. According to the researchers behind “The ChatGPT conundrum: Human-generated scientific manuscripts misidentified as AI creations by AI text detection tool” (Rashidi et al., 2023), an AI text detection tool ironically misidentified human-written texts as AI-generated 8.69% of the time. This lends credence to the idea that professors must learn to detect AI-generated texts on their own through AI literacy, rather than depending heavily on machines and programs to do it for them.
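The practical stakes of that false positive rate are easy to work out. As a quick illustration (the course size below is an assumed figure, not from the study):

```python
# Expected wrongful flags implied by a detector's false positive rate.
# The 8.69% rate is reported by Rashidi et al. (2023); the course size is assumed.
false_positive_rate = 0.0869
human_written_essays = 150  # hypothetical: every submission is genuinely human-written

expected_false_flags = false_positive_rate * human_written_essays
print(f"Honest essays expected to be flagged as AI: {expected_false_flags:.1f}")
# Roughly 13 of 150 honest students could be wrongly accused.
```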
4. Reassessing academic integrity
There is an irony here: if the solution to detecting AI-generated texts is itself AI, that challenges the notion of policing AI use at all, and suggests we should reassess. According to two researchers, Himendra Balalle and Sachini Pannilage, in their literature review titled “Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity” (2025), not all AI use is bad. I have already mentioned how AI has benefited disabled students and helped other students save time. Not all students use AI to produce AI-generated texts; some professors use AI in their classes, and some students use AI to learn. We should rethink whether using AI counts as academic dishonesty, as it is clear that we are heading into a future where AI is the norm.
5. Gaps in literature
In my review, I found a lack of empirical evidence relating to my theory of having professors learn to use generative AI deeply enough to understand how the engine actually behaves, as in the study where teachers did not anticipate that ChatGPT could be instructed to write a low-quality essay. The effectiveness of this theory could then be assessed with two groups: one group of professors with zero knowledge or understanding of generative AI engines, and a second group of professors with a deep understanding of them.
What is AI literacy?
In my research, I found that there is no clear definition of what it means to be AI literate. This is one gap I found, and I hope that a clear definition of AI literacy emerges in the future so that it can be tested more rigorously.
Comparative studies of professors who do and do not use generative AI
There is not enough research comparing professors who do not use generative AI with professors who use it every day in terms of their ability to spot AI-generated texts. Comparative studies of this kind would give us an understanding of how effective learning to use generative AI, and thereby becoming AI literate, is for spotting AI-generated work; a sketch of how such a comparison might be analyzed appears below.
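To sketch how such a comparison might be analyzed, a two-proportion test on each group’s detection rate would be one natural starting point. Everything in this snippet, including the counts, is a hypothetical placeholder, not data from any study:

```python
# Hypothetical analysis of the proposed comparison: do AI-literate professors
# identify AI-generated essays at a higher rate than AI-naive professors?
from statsmodels.stats.proportion import proportions_ztest

correct_calls = [78, 52]    # correct identifications per group (placeholder data)
essays_graded = [100, 100]  # essays judged per group (placeholder data)

z_stat, p_value = proportions_ztest(correct_calls, essays_graded, alternative="larger")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value would be evidence that AI literacy improves detection.
```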
6. Conclusion
Implications
If my theory holds water, there could be tangible benefits to including AI literacy training for new professors during the onboarding process. Human resource managers may need to create new training programs to make professors AI literate.
In my findings, humans seemed able to detect the difference between AI-generated and human-written text only when the AI-generated text was high quality, and could not tell the difference when it was low quality. Machines did a better job at detecting, with a reported accuracy rate of over 99%, while an AI text detection tool misidentified human-written texts as AI-generated 8.69% of the time. Clearly, a lot of research still needs to be done to assess the relative accuracy of humans, machines, and AI engines at spotting the difference.
References
Damian Okaibedi Eke, ChatGPT and the rise of generative AI: Threat to academic integrity?, Journal of Responsible Technology, Volume 13, 2023, 100060, ISSN 2666-6596, https://doi.org/10.1016/j.jrt.2023.100060.
Ahnaf Chowdhury Niloy, Md Ashraful Bari, Jakia Sultana, Rup Chowdhury, Fareha Meem Raisa, Afsana Islam, Saadman Mahmud, Iffat Jahan, Moumita Sarkar, Salma Akter, Nurunnahar Nishat, Muslima Afroz, Amit Sen, Tasnem Islam, Mehedi Hasan Tareq, Md Amjad Hossen, Why do students use ChatGPT? Answering through a triangulation approach, Computers and Education: Artificial Intelligence, Volume 6, 2024, 100208, ISSN 2666-920X, https://doi.org/10.1016/j.caeai.2024.100208.
Mohamed Khalifa, Mona Albadawy, Using artificial intelligence in academic writing and research: An essential productivity tool, Computer Methods and Programs in Biomedicine Update, Volume 5, 2024, 100145, ISSN 2666-9900, https://doi.org/10.1016/j.cmpbup.2024.100145.
Xin Zhao, Andrew Cox, Xuanning Chen, The use of generative AI by students with disabilities in higher education, The Internet and Higher Education, Volume 66, 2025, 101014, ISSN 1096-7516, https://doi.org/10.1016/j.iheduc.2025.101014.
Christian Stöhr, Amy Wanyu Ou, Hans Malmström, Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study, Computers and Education: Artificial Intelligence, Volume 7, 2024, 100259, ISSN 2666-920X, https://doi.org/10.1016/j.caeai.2024.100259.
Ali Garib, Tina A. Coffelt, DETECTing the anomalies: Exploring implications of qualitative research in identifying AI-generated text for AI-assisted composition instruction, Computers and Composition, Volume 73, 2024, 102869, ISSN 8755-4615, https://doi.org/10.1016/j.compcom.2024.102869.
Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, Jens Möller, Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays, Computers and Education: Artificial Intelligence, Volume 6, 2024, 100209, ISSN 2666-920X, https://doi.org/10.1016/j.caeai.2024.100209.
Shu Yang, Shukun Chen, Hailin Zhu, Jiayi Lin, Xi Wang, A comparative study of thematic choices and thematic progression patterns in human-written and AI-generated texts, System, Volume 126, 2024, 103494, ISSN 0346-251X, https://doi.org/10.1016/j.system.2024.103494.
Mahmoud Ragab, Ehab Bahaudien Ashary, Faris Kateb, Abeer Hakeem, Rayan Mosli, Nasser N. Albogami, Sameer Nooh, Classification of human-written and AI-generated sentences using a hybrid CNN-GRU model optimized by the spotted hyena algorithm, Alexandria Engineering Journal, Volume 126, 2025, Pages 116-130, ISSN 1110-0168, https://doi.org/10.1016/j.aej.2025.04.071.
Hooman H. Rashidi, Brandon D. Fennell, Samer Albahra, Bo Hu, Tom Gorbett, The ChatGPT conundrum: Human-generated scientific manuscripts misidentified as AI creations by AI text detection tool, Journal of Pathology Informatics, Volume 14, 2023, 100342, ISSN 2153-3539, https://doi.org/10.1016/j.jpi.2023.100342.
Himendra Balalle, Sachini Pannilage, Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity, Social Sciences & Humanities Open, Volume 11, 2025, 101299, ISSN 2590-2911, https://doi.org/10.1016/j.ssaho.2025.101299.