This page discusses vulnerabilities in assignment requirements (in relation to AI) that can often be fixed, together with solutions and detailed examples. Four specific vulnerabilities are explained, each on a separate page.
By clicking the links, you can view the other pages or go back to the page on written assignments without supervision.
GenAI tends to ‘hallucinate’: due to the next-word-prediction logic embedded in these models, they generate text that seems plausible but is factually wrong or even entirely made up. This problem is particularly pronounced for academic references, especially with AI tools that cannot access the internet. At the time of writing (December 2023), the free version of ChatGPT (using GPT-3.5) cannot access the internet and struggles with academic references.
Since academic references always follow a particular logic, the free version of ChatGPT can produce references that are correctly formatted but do not exist. The tricky part is that the alleged author, paper or journal may indeed exist, but the distinct combination was ‘hallucinated’ by ChatGPT. To make matters more complicated, the free version of ChatGPT is also capable of producing existing references. References to seminal, well-known, often-cited papers in particular are likely to be generated correctly by any AI tool, as these have a significant presence in the training data. This means that genAI is generally much better at coming up with real references for topics with a long-established research tradition and for more general topics.
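To see why these fabricated references look so convincing, it helps to look at what next-word prediction actually does. Below is a minimal sketch using the open-source GPT-2 model via the Hugging Face transformers library (an assumption for illustration; ChatGPT itself cannot be run locally, and the prompt is hypothetical): the model simply continues a prompt with statistically likely words, with no step that checks whether the resulting ‘reference’ exists.

```python
# Minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library and the open-source GPT-2 model as a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with plausible-looking words; at no point
# does it verify that the completed 'reference' corresponds to a real paper.
prompt = "A seminal study on political trust is"
result = generator(prompt, max_new_tokens=25)
print(result[0]["generated_text"])
```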
Other tools, such as the paid version of ChatGPT or the free perplexity.ai tool (among others), are better at finding existing and relevant references. However, even these tools do not reference perfectly. Again, they perform worse when the required references are less well-known. Moreover, they tend to refer to scientific articles rather generically.
Assignments rarely ask for general references on a topic. Instead, academic referencing is used to support a specific claim or finding. This is where AI tools can be unreliable: they often refer to the main argument of a paper, but rarely to a specific argument or section of it. Hence, in light of genAI, we recommend that you require students to engage deeply with discipline-specific literature in their assignments and refer precisely to arguments or sections of such works.
Consider the following example, in which we ask ChatGPT for general references on a well-studied topic:

| Prompt | ChatGPT output |
| --- | --- |
| What are the most important academic references on political trust? | Sure, here are the references:<br>1. Hetherington, M.J. (1998). “The Political Relevance of Political Trust.” The American Political Science Review, 92(4), 791-808.<br>2. Putnam, R. D. (2000). “Bowling Alone: The Collapse and Revival of American Community.” Simon and Schuster.<br>3. Levi, M., & Stoker, L. (2000). “Political trust and trustworthiness.” Annual Review of Political Science, 3(1), 475-507.<br>4. Norris, P. (1999). “Critical Citizens: Global Support for Democratic Government.” Oxford University Press. |

We see from the output that ChatGPT does well on this assignment. All of the suggested references exist and are indeed seminal pieces of literature on political trust.
But when we ask for literature on a more niche topic, a specific subarea of a topic, or a relationship between two variables, ChatGPT often leads us astray.
| Prompt | ChatGPT output |
| --- | --- |
| Which empirical studies show that social media use has an effect on people’s trust in politics? | Sure, here are the references of empirical studies showing the impact of social media use on people’s trust in politics:<br>1. Lelkes, Y., & Westwood, S. J. (2017). The limited effects of partisan media on political attitudes and behavior. Political Behavior, 39(3), 709-736. Link: https://doi.org/10.1007/s11109-016-9373-2<br>2. Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science, 26(10), 1531-1542. Link: https://doi.org/10.1177/0956797615594620<br>3. Prior, M., Eshbaugh-Soha, M., & Trounstine, J. (2018). Political polarization on Twitter. Political Communication, 35(1), 1-24. Link: https://doi.org/10.1080/10584609.2017.1364814<br>4. Bode, L., & Vraga, E. K. (2018). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 68(4), 619-638. Link: https://doi.org/10.1093/joc/jqy044 |

With this more targeted request for references, we find that ChatGPT ‘hallucinates’. The first and the third suggestion flat-out don’t exist. Suggestions 2 and 4 do exist, but they don’t investigate the effect specified in the prompt. Hence, checking the references submitted by a student can be an effective way of detecting fraudulent use of AI.
Tip: Academic referencing is typically used to support a specific claim or finding, which is exactly where ChatGPT is unreliable. Furthermore, double-checking each reference is arguably more time-consuming for students than searching for references via common databases. Thus, to make their assignments less ChatGPT-friendly, teachers are advised to ask for specific academic references.
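Since the suspect references above carry DOI links, a quick first pass at checking them can even be automated. Below is a minimal sketch, assuming Python with the requests library and the public Crossref REST API (https://api.crossref.org): a DOI that Crossref does not recognize is a strong hint that the reference was hallucinated, although a few legitimate DOIs are registered with other agencies, and a resolving DOI still needs a human check that the paper actually supports the claim.

```python
# Minimal sketch: check whether DOIs are known to the public Crossref API.
# Assumes the `requests` library. A 404 suggests a hallucinated reference;
# a 200 only proves the DOI exists, not that the paper fits the claim.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the Crossref API can resolve this DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

# The four DOIs from the ChatGPT output above.
dois = [
    "10.1007/s11109-016-9373-2",
    "10.1177/0956797615594620",
    "10.1080/10584609.2017.1364814",
    "10.1093/joc/jqy044",
]
for doi in dois:
    print(doi, "->", "found" if doi_exists(doi) else "not found in Crossref")
```

Even when a DOI resolves, the metadata Crossref returns (title, authors, journal) should still be compared against what the student cited, since a fabricated reference can piggyback on a real DOI.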