Hundreds of years ago, if you needed the opinion of an expert, you grabbed your horse and traveled until you found someone who could answer your question. Now? We have Google.
At our fingertips is a huge database of information. Want the latest information on the Trump court trial? Don't worry, you can choose to read about it from all the major news outlets. Conflicted on whether to drive four hours to see a total solar eclipse? Well, here are billions of links to people discussing the same topic.
Google is perfect when we want expert clarification. But what about everyday queries? These take only a few seconds of thought to answer: the "can I put a metal spoon in an air fryer?" and "should I wear sunscreen if I'm going to the beach?" questions. Still, humans are intrinsically efficient animals. Given an opportunity to be lazy, we almost never choose the option that requires hard work. So, we default to Google even for trivial questions.
Take a look at your most recent Google search. Imagine you lived in the age before electronic communication. Would you have saddled your horse and ridden to the next town, or could you have answered the question yourself with a moment of rational thought?
We are becoming less reliant on our own thoughts and more reliant on an easily accessible conglomerate of thoughts: the Google hivemind. This means we spend less time being curious and less time trying to satiate our curiosity. After all, we can always just "google it." This isn't a new revelation. For years, studies have shown that our dependence on technology negatively impacts our brains.
Large Language Models (LLMs) like ChatGPT have taken this reliance a step further. Strangely, they combine the best aspects of the horse-and-buggy and Google eras. To understand this, we must first examine the critical area in which search engines are lacking.
It's the 19th century, and your eyelid isn't feeling quite right. You take your horse and buggy and arrive at a town with an eye expert. This expert probably isn't the best in the world; there are a few things he might get wrong. He's your choice only out of convenience, and he's just okay. But the benefit of this one-on-one human communication is that you can be specific. You can start by saying, "My eyelid is feeling a little off." And that expert can ask whether you've felt pain, how long it's been since you noticed a change, whether you've had other symptoms, what you've done in the last few days...
On the other hand, you can give Google all that information ("my eyelid has felt numb for two weeks since I went to a carnival where...") and the first page of results will be about general eye pain. Yes, the articles were written by world-class experts. Yes, there are billions of articles on eye pain. Yes, one of them could be applicable to your situation. Regardless, you can't scour through all of them to find that hidden gem. Here we arrive at the principal problem with Google: its results aren't tailored to your specific needs.
The fact that Google is so bad at giving personalized advice is exactly why social media sites like Reddit and Quora have skyrocketed in popularity. However, on these platforms, you return to an exacerbated version of the horse-and-buggy problem: anybody can answer, and you aren't sure if you should trust someone who might not be qualified to give advice (especially medical advice!).
Then, ChatGPT comes rolling in. It's a conglomerate of expert minds. It can personalize its answers to the most specific query in the world. It answers almost immediately. It's pretty much your all-knowing personal assistant. Or doctor. Or really whatever you want it to be.
So, my point is this: People will come to rely on LLMs. Someday, people will chatgpt their queries just as they google them now, and this dependency may further erode our brains and our capacity for critical thinking.
Obviously, LLMs still have a long way to go before they become as widespread as search engines. But LLMs have only been spotlighted for a few years, and I've already started to observe this phenomenon.
Over 90% of the students at my high school admit to consistently using ChatGPT to assist with their schoolwork. After all, while Google can't write your To Kill a Mockingbird essay, ChatGPT will do so in five seconds.
The whole process of collecting evidence, forming an argument, and crafting a coherent piece of writing is difficult for most students. When there's an opportunity to use ChatGPT, it's a tall order to expect teenagers to take the moral high ground and spend hours grappling with their own writing, which probably reads like a dumpster fire next to ChatGPT's prose. At the end of the day, teenagers are humans, and humans are lazy animals. So: "just chatgpt it."
As LLMs grow more convenient, I fear that people will begin deferring all tasks, trivial and grand, to ChatGPT. We went from "horse-and-buggy it" to "google it"; who's to say we won't "chatgpt it"? If ChatGPT becomes as ubiquitous as Google, billions of brains could begin to collectively rely on it.
So, think about whether you should think. Write your own essays. Use a little less Google and a little more of your own brain.
Written by Chloe Xu.