Annotated Bibliography - A conversation on artificial intelligence, chatbots, and plagiarism in higher education. (King, 2023)

King, Michael R., and ChatGPT. "A conversation on artificial intelligence, chatbots, and plagiarism in higher education." Cellular and Molecular Bioengineering 16.1 (2023): 1-2.

In the paper “A Conversation on Artificial Intelligence, Chatbots, and Plagiarism in Higher Education,” Prof. Michael R. King of Vanderbilt University conducts an interesting experiment with the large language model ChatGPT in a higher-education setting. The entire paper is a verbatim transcript of his conversation with ChatGPT.

Dr. King asks the LLM about the history of A.I., about ChatGPT itself, and about plagiarism in higher education, and its answers are accurate and credible. He then asks how students could potentially use A.I. to cheat on university assignments, and ChatGPT responds that they could write prompts and copy and paste the results, which is exactly what the professor was doing in producing this article.

The paper is also illustrated with depictions of A.I. being used for cheating in college.

The author then proceeds to ask the LLM what methods professors could use to detect cheating with ChatGPT. The A.I. suggests the use of plagiarism-detection software, as well as assessment formats that require students to engage interactively, such as oral presentations or group projects.

Dr. King then creates prompts that lead ChatGPT to state that A.I. is not inherently good or bad, but that it should be watched carefully for potential misuse. Finally, he argues, again through a ChatGPT prompt, that our notions of content and originality need to change as technology continues to advance.

The final prompt asks ChatGPT to “Create a list of references on chatbots, AI, and plagiarism, while trying to cite more women authors and people of color to make up for historical biases in scientific citation.” The result, however, is a list of made-up names and nonexistent articles, evidencing what is currently the biggest risk with ChatGPT: its tendency to “hallucinate” content.
