Western Political Thought II

About

Generative AI (GenAI) is a technology that has enjoyed a great deal of attention recently with the launch of a number of end-user services, including ChatGPT from OpenAI and Bard from Google. The technology generates new and seemingly original content, including prose and images, by learning patterns from existing data during pre-training.

In this way, GenAI is able to author seemingly well-composed papers from only the slightest prompt, which would seem to make it the perfect companion in higher education, and indeed in education everywhere, especially during peak periods when the workload may feel overwhelming.

Problems

There are, however, many problems associated with using GenAI in academia and elsewhere.

  • One of the more serious problems with GenAI technology, not only in a high-intensity learning environment but also in the world at large, is its inability to distinguish between truth and falsehood.
  • Generative AI is not in the business of providing reliable content, only of prediction and probability, and for that reason it is very prone to fabrication (so-called hallucinations).
  • In fact, to be able to utilize the technology with any kind of confidence, you would need to have a robust understanding of the topic at hand, an understanding that may take many years to cultivate. 
  • Generative AI can replicate or even amplify negative content and may perpetuate biases and stereotypes about people and ideas.
    • This issue has to do with the pre-training of the software. If the topic you are exploring is contentious, you may find yourself inadvertently reproducing fake news, conspiracy theories, etc., as the software is no better, no wiser, and no more beholden to the truth than the data from which it learned.
    • Online trolling happens on an industrial scale every single day, so if you happen to tap into a minefield of viciously competing ideas when working with these services, you could be seriously compromised in your exam submissions.
  • With the relative dominance of English-language training data, there is a high likelihood that the responses you get will reflect a worldview that is specific to that part of the world. More likely than not, content generated by GenAI services is prone to represent Western perspectives and people and to drown out or misrepresent alternative, non-Western experiences, attitudes, and ideas.
  • Finally, because of the costs involved, in terms of both money and energy, these services are not retrained on an ongoing basis and therefore cannot necessarily be relied on to reflect current ideas and insights. In other words, much of the output will quickly become outdated.

Academic Integrity

The classroom is specifically a space for learning and practicing invaluable writing and research processes that cannot be replicated by generative artificial intelligence (AI).

While the ever-changing (and exciting!) new developments with AI will find their place in our workforces and personal lives, in the realm of education, this kind of technology can counteract learning.

This is because the use of AI diminishes opportunities to learn from our experiences and from each other, to play with our creative freedoms, to problem-solve, and to contribute our ideas in authentic ways. 

In a nutshell, a college is a place for learning, and generative AI (e.g., ChatGPT) cannot do that learning for us.

Academic integrity plays a vital role in the learning that takes place in your classes at Lewis University, and submitting work as your own that was generated by AI is plagiarism. 

For all of these reasons, any work written, developed, created, or inspired by generative artificial intelligence does not lend itself to our learning goals and is a breach of ethical engagement.