Artificial intelligence has rapidly taken off in every sector, from education to business, finance, and even healthcare. Despite its promise, the unregulated expansion of AI has already harmed society in ways ranging from a chatbot encouraging the suicide of a 16-year-old (Chatterjee) to rising noise pollution and energy costs for communities that share a power grid with AI data centers. AI companies have also been fined under the European Union’s General Data Protection Regulation (GDPR), a comprehensive privacy law that establishes principles such as the right not to be subject to automated decision-making and the right to have one’s personal data deleted; OpenAI itself was fined 15 million euros under the GDPR (Reuters Staff). Therefore, while artificial intelligence does hold potential, it is currently on a trajectory that makes it more threatening to society than beneficial, because of its application in automated decision-making and the enormous corpus of data required for its development.
An important distinction must be made between generative AI (like ChatGPT) and traditional machine learning. Traditional machine-learning models are trained to perform one narrow task; generative AI is prompt-based, meaning a user can instruct it to perform tasks it was never specifically trained for. For example, a forum moderator could prompt it to “see if any of these messages contain bad words.” This sounds great, but generative AI models can often be “jailbroken” to bypass their safety features, including the safeguards meant to block dangerous output such as instructions for building a bomb.
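To make this concrete, the sketch below shows what such a prompt-based moderation task might look like in code. It is a hypothetical illustration only: it assumes the OpenAI Python client, a model name such as gpt-4o-mini, and invented forum messages.

    # Hypothetical prompt-based moderation: the model is asked to do a task
    # it was never specifically trained for. Assumes the OpenAI Python client
    # (pip install openai) and an API key in the OPENAI_API_KEY variable.
    from openai import OpenAI

    client = OpenAI()

    forum_posts = ["Great write-up, thanks!", "You absolute $%&#!", "First post!"]

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You moderate a forum. List the numbers of any "
                        "messages below that contain bad words."},
            {"role": "user",
             "content": "\n".join(f"{i}. {p}" for i, p in enumerate(forum_posts, 1))},
        ],
    )
    print(reply.choices[0].message.content)  # the model's verdict, e.g. "2"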
People argue that the issue with generative AI comes from the people using the tools and not the tools themselves, but generative AI companies often market their products as replacements for human workers: Artisan, for instance, released advertisements with headlines such as “Artisans [AI agents] won’t complain about work-life balance” and “Stop Hiring Humans” (Edwards, “AI Company Trolls San Francisco”), even though AI is inherently stupid. Generative AI models are designed to predict the next word based on mathematical probabilities. They cannot do real research or truly think like a human, however convincingly they imitate one. Generative AI cannot take the prompter’s intent into account: it does not know your background, your culture, or the nuances of your situation. Its only job is to predict the next word.
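That “predict the next word” claim can be illustrated with a toy model. The sketch below is a simple bigram predictor, far cruder than a real large language model, but the principle is the same: the next word is chosen from counted probabilities, not from understanding.

    import random
    from collections import Counter, defaultdict

    # Toy bigram "language model": learn which word tends to follow which,
    # then pick the next word by sampling from those counts.
    training_text = "the cat sat on the mat and the cat ate the fish".split()

    next_word_counts = defaultdict(Counter)
    for current, nxt in zip(training_text, training_text[1:]):
        next_word_counts[current][nxt] += 1

    def predict_next(word):
        counts = next_word_counts[word]
        candidates, weights = zip(*counts.items())
        # Sampling in proportion to frequency: probability, not comprehension.
        return random.choices(candidates, weights=weights)[0]

    print(predict_next("the"))  # usually "cat" -- the statistically common choice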
Cory Doctorow coined a term for this: “enshittification,” the progressive degradation of platforms as companies prioritize profit over their users (Bernstein). Generative AI has started to “enshittify” the web. We have already seen how AI models can be coaxed into doing illegal things and how they create major privacy violations. Websites that never used AI before (for example, Notion and Airtable) now have extensive AI features baked into their products. This does not make us more productive; it makes us more reliant on these tools.
The rise of AI has also brought new scrutiny to academia. It has fueled a wave of so-called “AI detection” tools, despite research showing that they do not work reliably and routinely flag students who never used generative AI (Fowler). OpenAI, the creator of ChatGPT, shut down its own AI detector due to a “low rate of accuracy” (Kelly). Yet faculty at many secondary and post-secondary institutions continue to use these tools and shift the burden of proof onto students, who are often forced to “dumb down” their word choice to appease the detector. This tends to disadvantage neurodivergent students in particular: autistic students, for instance, often demonstrate enhanced writing skills but also a tendency toward perfectionism, which can produce the polished, complex style that AI detectors mistakenly flag (Gillespie-Lynch et al.). Because AI models are trained on an enormous corpus of existing text, they generate whatever phrasing is statistically most common, making their output predictable and redundant; detectors that hunt for that predictability inevitably ensnare careful human writers too (Edwards, “Why AI Detectors”).
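To see why predictable output trips detectors, consider a toy stand-in for one. Real detectors typically score a text’s statistical predictability under a language model (its “perplexity”); the crude sketch below substitutes a hand-made word-frequency table, but it exhibits the same failure mode: conventional, polished prose scores as “predictable.”

    import math
    from collections import Counter

    # Crude stand-in for an AI detector: score how statistically common a
    # text's words are. Real detectors use a language model's perplexity;
    # this frequency table is an invented simplification, but the flaw is
    # the same -- careful, conventional human prose scores as "predictable."
    reference = ("the of and to a in that it is was for on are as with "
                 "his they at be this from or had by").split()
    freq = Counter(reference)
    total = sum(freq.values())

    def predictability(text):
        words = text.lower().split()
        # Average log-probability under the reference counts;
        # unseen words get a small floor value.
        scores = [math.log(freq.get(w, 0.5) / total) for w in words]
        return sum(scores) / len(scores)

    print(predictability("it is the case that the law was made for the people"))
    # Higher (closer to zero) = more "predictable" = more likely to be flagged.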
AI models also need a lot of data. To interpret users’ queries, they must know what is currently relevant, including slang such as “67” and “rizz” (the former meaning essentially nothing, the latter slang for “charisma”) (Murray; Merriam-Webster). To gather this volume of data, companies “scrape” the web: they programmatically visit pages, download their content, and convert it into data a machine can process. This traffic can put a heavy load on smaller websites, and scraping also lets companies “launder” other people’s words into a new format with no way to check the original sources.
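For illustration, a minimal scraper needs only a few lines of Python’s standard library; the target URL here is a placeholder.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    # Minimal scraper: fetch a page and reduce it to bare text, the
    # machine-processable form described above. Large AI crawlers repeat
    # this across millions of pages, which is where the load on smaller
    # sites comes from.
    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            if data.strip():
                self.chunks.append(data.strip())

    with urlopen("https://example.com/") as response:  # placeholder URL
        page = response.read().decode("utf-8", errors="replace")

    extractor = TextExtractor()
    extractor.feed(page)
    print(" ".join(extractor.chunks))  # the page, stripped down to raw text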
Others may say that AI’s benefits are real and transformative, such as reducing menial tasks like writing emails or reviewing financial documents. However, these benefits are isolated and do not outweigh the downsides that are ingrained in many generative AI systems. First, many of the impressive results come from well-funded, tightly controlled experiments with narrow applications; they do not emulate the real world, where users go off script and can make costly mistakes. AI is also being used to fire workers en masse and then “enshittify” the platforms left behind. Essentially, while AI can be beneficial in a narrow scope, the current trajectory is downward.
In conclusion, AI and its real-world applications do hold promise. However, the path we are currently on points only toward harm, whether it be threats to privacy, threats to life, or threats to academia. If we wish for AI to benefit society, we must regulate how AI is used in the workplace and in automated decision-making in everyday life, and limit how much data can be scraped to train a large language model. Absent these policies, the only “innovation” in AI’s application will be funneling more profit to those who make it and to big businesses eager to outsource all human input and output to artificial intelligence. For now, everyone, from academics relying on flawed GPT detectors to corporations automating human judgement, is facing the consequences of ceding authority to AI systems that lack real human judgement and knowledge. The threat is not a Terminator-style AI takeover but the quiet erosion of human creativity and accountability. This was understood decades ago, as evidenced by a 1979 IBM training manual: “A computer can never be held accountable, therefore a computer must never make a management decision” (Bonderud).
Works Cited
Bernstein, Joseph. “Can Cory Doctorow’s Book ‘Enshittification’ Change the Tech Debate?” The New York Times, 5 Oct. 2025, www.nytimes.com/2025/10/05/books/review/cory-doctorow-enshittification.html. Accessed 16 Oct. 2025.
Bonderud, Doug. “AI Decision Making: Where Do Businesses Draw the Line?” IBM, 31 Jan. 2025, www.ibm.com/think/insights/ai-decision-making-where-do-businesses-draw-the-line. Accessed 31 Oct. 2025.
Chatterjee, Rhitu. “Their Teenage Sons Died by Suicide. Now, They Are Sounding an Alarm about AI Chatbots.” NPR, 19 Sept. 2025, www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide. Accessed 16 Oct. 2025.
Edwards, Benj. “AI Company Trolls San Francisco with Billboards Saying ‘Stop Hiring Humans.’” Ars Technica, 10 Dec. 2024, arstechnica.com/information-technology/2024/12/ai-company-trolls-san-francisco-with-billboards-saying-stop-hiring-humans/. Accessed 21 Oct. 2025.
---. “Why AI Detectors Think the US Constitution Was Written by AI.” Ars Technica, 14 July 2023, arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/. Accessed 30 Oct. 2025.
Fowler, Geoffrey A. “We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student.” Washington Post, 3 Apr. 2023, www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/. Accessed 31 Oct. 2025.
Gillespie-Lynch, Kristen, et al. “Comparing the Writing Skills of Autistic and Nonautistic University Students: A Collaboration with Autistic University Students.” Autism, vol. 24, no. 7, 8 July 2020, https://doi.org/10.1177/1362361320929453.
Kelly, Samantha Murphy. “ChatGPT Creator Pulls AI Detection Tool Due to ‘Low Rate of Accuracy.’” CNN, 25 July 2023, edition.cnn.com/2023/07/25/tech/openai-ai-detection-tool. Accessed 31 Oct. 2025.
Merriam-Webster. “Rizz.” Merriam-Webster.com, 25 Sept. 2023, www.merriam-webster.com/dictionary/rizz. Accessed 21 Oct. 2025.
Murray, Conor. “Why Are So Many Kids Saying ‘67’? Viral TikTok Trend, Explained.” Forbes, 15 Oct. 2025, www.forbes.com/sites/conormurray/2025/10/15/what-is-67-viral-internet-brainrot-meme-frustrates-teachers-sparks-south-park-parody/. Accessed 21 Oct. 2025.
Reuters Staff. “Italy Fines OpenAI 15 Million Euros over Privacy Rules Breach.” Reuters, 20 Dec. 2024, www.reuters.com/technology/italy-fines-openai-15-million-euros-over-privacy-rules-breach-2024-12-20/. Accessed 16 Oct. 2025.