Name: Emma Plank, Rayney, Charlotte Tenebrini Steckart
An AI image generated by Charlotte Tenebrini Steckart using Canva’s Magic Media Tool
In 2019, an article published by the Brookings Institution predicted that artificial intelligence would soon be a problem in classrooms around the world. Looking back from 2024, we can say they were correct. Before 2019, teachers did not have to question whether a student's work was uniquely their own or written by a tool like ChatGPT. AI has become freely available to the general public, able to create something from instructions alone. Want a photo of yourself with Taylor Swift? AI can make that happen. Want an essay on why AI should be allowed in schools? AI can write that. The questions on everybody's mind, though, are these: how far is too far, when does asking for a definition become asking for the answers to a quiz on Canvas, and what ethical lines need to be drawn?
History of Artificial Intelligence
A brief history of AI starts with the Turing test, proposed by the English computer scientist Alan Turing, who during World War II helped break the German Enigma code. In 1950, Turing proposed what he called “The Imitation Game” (not to be confused with the 2014 movie starring Benedict Cumberbatch as Alan Turing): a test in which a machine succeeds by fooling a human into thinking another human is responding to them, when in actuality it is a computer.
Since then, AI has been ever evolving. To quote Professor Phil Clampitt, AI is the Turing test “on mega steroids.” Dr. Clampitt added that the important experiment now is to take a subject you know a lot about, ask AI to write an essay on it, and then ask yourself: to what extent do I agree or disagree with it? AI will not give you opposing viewpoints unless you ask for them. As Dr. Clampitt puts it, “equivocation is purposeful vagueness.”
At the University of Wisconsin-Green Bay, students were asked to participate in an anonymous survey regarding the use of AI in their studies at UWGB and in potential career paths. The survey posed statements and questions with five response options expressing levels of agreement/disagreement or frequency (always to never): a rating of 5 indicated strong agreement or “always,” while a rating of 1 indicated strong disagreement or “never.” Initially, our journalists expected that one specific major would use ChatGPT or other AI tools more than the others; however, that was not the case.
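For readers curious how the per-major averages behind the charts are computed, here is a minimal sketch. The responses below are invented for illustration, not actual survey data; only the 1-to-5 rating scale comes from the methodology described above.

```python
from statistics import mean

# Hypothetical responses: (major, rating on the 1-5 scale, where
# 5 = strong agreement/"always" and 1 = strong disagreement/"never").
responses = [
    ("Economics", 5), ("Economics", 4),
    ("Social Work", 1), ("Social Work", 2),
    ("Mathematics", 4), ("Mathematics", 5),
]

# Group the ratings by major.
by_major = {}
for major, rating in responses:
    by_major.setdefault(major, []).append(rating)

# Average each major's ratings; these averages are what each chart plots.
averages = {major: mean(ratings) for major, ratings in by_major.items()}
print(averages)
```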
The above chart represents the average answers for each major at UWGB.
The majors that thought they were most likely to use ChatGPT or other AI for help were chemistry, economics, HR management, information science, and mathematics. The majors least likely to use it were social work, human biology, art studies, and biology. This tracked with the frequency results: the majors that used AI the most for homework were economics and mathematics.
The above chart represents the average answers for each major at UWGB.
Economics again ranked among the highest majors: its students not only supported the use of ChatGPT or other AI tools but also used them the most and thought they should be allowed for schoolwork. Ranking low again were social work majors. Marketing majors supported AI use and thought it should be allowed in schools, but in frequency of use they ranked lowest, along with biology and social work majors.
The above chart represents the average answers for each major at UWGB.
When examining whether plagiarism related to guilt, political science and social work majors felt strongly that using AI for homework or quiz answers constituted plagiarism, and they felt mildly guilty when they used it; social work majors felt guiltier than political science majors. The remaining majors did not feel as guilty and did not strongly consider it plagiarism.
The above chart represents the average answers for each major at UWGB.
The interesting part of these survey results is the comparison between whether AI should be allowed in schools and whether using it counts as plagiarism. Many majors felt it was plagiarism, yet almost all felt it should be allowed in schools. HR and information science students felt it was not plagiarism and should be allowed; social work majors felt the opposite.
Recommendations and Future Outlook
An AI image generated by Charlotte Tenebrini Steckart using Canva’s Magic Media Tool
Leading the discussion today are universities. Many are reviewing their policies on AI, and those policies are in constant flux as AI keeps evolving. When looking into the University of Wisconsin-Green Bay's academic integrity policy, we found only one line about AI, under the examples section:
“Taking credit for the work or efforts of another without authorization or citation (this includes using, without Instructor authorization, generative artificial intelligence software or websites)”
This policy leaves a lot of room for personal interpretation, and that is where students are getting into trouble. Each professor has their own opinion and policy listed in the class syllabus, but should there be one policy that fits all? Based on our survey results, use varies from major to major.
In higher education, generative AI has opened real opportunities, from personalized learning experiences to enhanced research capabilities, and its impact spans disciplines and institutions. Amid the excitement and potential, though, the complexities and challenges that accompany the technology deserve equal attention. Responsible integration, including robust data privacy measures, transparent algorithms, and ongoing ethical evaluation, is essential to ensure equitable access and mitigate potential biases.