Brilliant but Blind: The Risks of Relying on AI

By: Natalie Terrani  |  October 26, 2025

Artificial intelligence (AI) is now integrated into almost every business and platform. Its accessibility is remarkable, benefiting people who are less technologically savvy. It saves time and labor on a massive scale and is an advantageous tool for handling large volumes of data. AI can perform simple tasks previously done by humans, through advancements such as automated checkouts and customer service bots. It can even provide students with tailored study help for little or no cost, benefiting those who can’t afford extra help. AI can also help maximize efficiency and effectiveness in areas such as the medical field.

However, despite all of these groundbreaking improvements AI makes in our lives, we must remember that most AI systems learn from massive amounts of internet data, making them susceptible to inaccuracies. Many users of AI platforms have little understanding of how AI works, how it is trained or how to use it to maximize its capabilities. AI is a powerful tool, but it is just that: a tool. It should never replace human intelligence. If we become overdependent on it, we will weaken our own critical thinking skills and come to lean entirely on systems that aren’t always reliable.

Simon Willison, an AI researcher who sits on the board of the Python Software Foundation, describes a dangerous combination of three capabilities in large language models (LLMs, the technology behind AI chatbots and assistants) as the “lethal trifecta.” He explains that the combination of an AI’s access to online sources, its access to people’s private information and its ability to send information out to the world can endanger both companies and consumers, especially those unaware of the risks. Many AI systems draw their information from thousands of data points across the web, and internet data is filled with unverified sources, which can lead to AI providing inaccurate information. When connected to personal accounts, LLMs can gain access to sensitive personal data. Combined with the ability to transmit information back out to the web, AI becomes an incredibly easy channel for scammers, hackers and the like to steal personal information. Willison says that removing just one component of the lethal trifecta significantly reduces the danger. But the fact that these same three capabilities are what make the tools useful demonstrates how quickly the convenience AI gives us can turn into a liability.

What should be obvious to us is often overlooked: AI cannot differentiate between good and bad, and it is incapable of the moral judgment that humans possess. Though it provides correct information most of the time, it is not able to simply “know” right from wrong. Data-driven AI systems produce outputs that follow patterns, and those patterns contain weaknesses that skilled hackers can study and exploit to their own advantage. The AI can’t know that it is being manipulated; it is just doing what it has been instructed to do.

These risks show why education about how AI works is essential for AI users of all ages, and it should be taught in schools, workplaces and beyond. If awareness and knowledge of AI systems spread widely, people will understand what they are dealing with and will be able to use it far more responsibly.

Photo Credit: Unsplash
