My answer is unclear and controversial.

Google was once the go-to search engine for the internet. “Google it” even became a generic term for looking up something online. But there’s been a big shift to a different type of answer engine: AI chatbots like ChatGPT and others.
Let’s review the pros, the cons, the risks, and the impact of these choices.
First, we need to review the differences between the two.
AI for search
AI and search engines work differently, and each has strengths and weaknesses. Search is good for fresh, direct lookups, while AI is better for questions, summaries, or creative tasks. Both can be wrong or biased, so check sources. The best answer? Use both and always stay skeptical.
Different in concept
Search engines like Google, Bing, and others work by amassing huge indexes of information, built by crawling (that is, reading) all the webpages on all the websites they can access. These indexes, which weigh hundreds of different factors, are used to answer the question, “Which pages on the internet best represent the terms being searched for?”
AI services, on the other hand, are “trained” on massive datasets, ranging from carefully curated collections to that same “all the webpages on all the websites they can access”. The goal of all this training is to answer a different question: “What words would best follow the words typed in by the user?”
Put simply, a search engine is basically a lookup — get pages that relate to a word or phrase — while AI (specifically large language models) is glorified auto-complete — return the best next word, and the next, and the next, and so on.
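To make the contrast concrete, here’s a toy sketch in Python, using made-up pages and a made-up corpus (real search engines and language models are vastly more sophisticated, of course): a lookup in an inverted index versus most-likely-next-word auto-complete.

```python
from collections import defaultdict, Counter

# A tiny "search engine": an inverted index mapping each word to the
# pages that contain it. Answering a query is retrieval, not generation.
pages = {
    "page1": "the sky is blue",
    "page2": "the ocean is blue and deep",
}
index = defaultdict(set)
for name, text in pages.items():
    for word in text.split():
        index[word].add(name)

def search(term):
    """Return the pages that contain the term."""
    return sorted(index.get(term, set()))

# A tiny "language model": count which word follows which in a corpus,
# then repeatedly emit the most likely next word -- glorified auto-complete.
corpus = "the sky is blue the ocean is blue".split()
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def autocomplete(word, length=3):
    """Extend a word by picking the most common next word, repeatedly."""
    out = [word]
    for _ in range(length):
        nxt = following.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)
```

Note that `search()` can only return pages that exist in its index, while `autocomplete()` will happily string words together whether or not the result is true, which is exactly the hallucination problem discussed below.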
AI can work in place of a search engine because, generally, the “best next words” to follow a question or term are often the very answer we’re looking for.
Now that we have an idea about how each works, we can compare the accuracy, citation, timeliness, bias, and environmental impacts of both.
AI can hallucinate. Search can miss.
I’m sure you’ve heard of AI “hallucinations” — an AI response that is completely wrong. That whole “best next word” model has nothing to do with accuracy. If the most appropriate response to a question doesn’t exist, AI can make it up. AI doesn’t know how to say “I don’t know”.
This is all a function of the training data. The more data an AI is trained on, the more likely it is to have an appropriate and correct answer. But if the data it was trained on is wrong or misleading, the AI has no way to know.
Search isn’t perfect either, though. It relies on the way a search is phrased, so ambiguous and incomplete search terms can lead to wildly off-topic results. In addition, search engines are constantly being “gamed” by individuals trying to get their pages (often poor quality, irrelevant, or even malicious) to rank higher.
Both require us to maintain a healthy level of skepticism and independent confirmation. Unfortunately, many people skip this step for either search or AI.
An answer versus a reference
AI prioritizes giving you an answer to your question or search terms. It may or may not include references to where the answers came from.
A search engine prioritizes giving you references to webpages that presumably contain what you’re looking for. Many search engines also include an AI-generated summary that may or may not suffice as the answer you’re looking for.
Freshness
Most search engines constantly scour the web for up-to-date information. Websites that change often or produce timely information, such as news sites, are crawled more frequently to include their latest information in search results.
AIs typically train on a snapshot of data, meaning that what they ‘know’ is only as current as when that snapshot was taken. To overcome this limitation, some AIs augment their responses with searches that are then factored into their responses.
Bias
Search results are heavily influenced by ads (particularly when those ads are difficult to distinguish from organic search results), SEO tactics used by people trying to game the system, and general website and page popularity.1
More than anything, AI is a product of its training data. If that data is limited in scope or biased in some way, then the AI will be similarly limited and biased. AI is also subject to “guardrails” implemented by each provider attempting to prevent it from generating responses that would be considered inappropriate.
Impact
Perhaps the most controversial topic when it comes to AI (well, next to whether or not it’s making us dumber or will lead to our extinction) is its impact on the environment. AI uses a lot of energy. Sure, the Google data center powering Google Search uses a lot of energy as well, but depending on what you measure (electricity use, CO2 generation, or something else), AI is somewhere between ten and hundreds of times more energy-hungry.2
Why? Because a search is a lookup: take the search terms and look up the most appropriate pages from the index of pages crawled. An AI response is a computation, and that takes more work. It’s the difference between looking up a word in a book’s index versus having to write a paragraph about the word from scratch.
My take on it all
Here’s what I do.
- If I have something that feels like a traditional search — maybe I want to find the website for a company, look up the definition of a word, or something that needs to be as up-to-date as possible — I’ll use a traditional search engine. These days, for me, that’s Kagi3.
- If I have something that feels more like a question — perhaps even something I consider half-baked — I’ll throw it at an AI-based search engine like Perplexity4. AI seems to do a better job of “understanding” what I’m trying to ask. I’ve gotten great responses by just pasting in an error message as my “question”, for example. I particularly appreciate that Perplexity includes explicit references to the sources of its answers.
- And if I want AI to “do” something, like summarize an article, suggest aspects of a topic I might be missing, or perform similar creative tasks (including image generation), I head to ChatGPT5.
In other words, I use both search and AI for what I feel each is best at right now.
More important than anything else, though, is that I remain skeptical of what I’m told, regardless of where it comes from. I try very hard not to let any of these tools (or any others I might try) lull me into complacency. The answers have to pass the sniff test, and I check references if I’m not 100% certain.
Do this
ChatGPT actually put this the most succinctly:
“‘Instead of’ might not be the right question. AI and search engines serve different strengths.”
So that’s my suggestion: why not both? Experiment with AI and find out where it works best for you, but don’t hesitate to use a search engine for the things it’s best at.
Above all, remain skeptical of both. Never blindly accept answers without verifying their accuracy.
Footnotes & References
1: Normally, when I mention “bias”, I get a lot of commentary about political bias from people who believe that a search engine has been explicitly tailored to exclude results that don’t agree with some assumed political agenda. This amuses and frustrates me because I get it from both sides: the left believes search leans right, and the right believes that the same search engine leans left. I think it’s more likely that the individual doesn’t like or agree with the results that are, on the whole, relatively objective. Nonetheless, people will yell at me, convinced it’s otherwise.
2: I acknowledge this as a current issue. I think it’s an issue that will diminish over time as more and more renewable energy sources come online.
3: Kagi is a paid service. I do not get any compensation from them for mentioning them.
4: I pay for Perplexity as well, and they also do not compensate me for the mention.
5: I pay for ChatGPT. They don’t pay me.
I started using Kagi as my default search engine six months ago. I am firmly in the “Search Only” camp. I don’t do much as far as creating pictures or writing articles or letters, and my experiments with AI (Copilot, ChatGPT) left me with an “OK, that’s mildly interesting” feeling. Microsoft ticked me off with their efforts to push Copilot so much that I’ve removed it from my computer, except for Edge; I’ve not figured that one out yet.
I don’t feel that limiting myself to just using a search engine is causing me to miss something. I’ve learned that if the first set of results doesn’t yield what I’m looking for, refining the original search terms usually does.
This doesn’t mean I’m totally ignoring what is going on with AI. I just discovered recently that there is a thing called the “AI Darwin Awards”.
As to environmental concerns here in Georgia: Meta, Google, Microsoft, and other companies are planning data centers around the state. Georgia Power is keeping two coal-fired plants online and is requesting to add more gas-turbine plants to meet expected demand. That’s after bringing two additional nuclear plants online. Little to nothing about renewables.
Google and other search engines now give an AI summary before the search results, so it’s a matter of ignoring the AI summary if you want to check the search results. This is a disaster for sites like Ask Leo! because people searching for an answer receive an AI summary and don’t have to click on a link to access the article(s). This can wipe out a website’s ad revenue.
It’s the worst of both worlds, as many sites will go down due to a lack of revenue.