You don’t, but your concern shouldn’t stop there.

That’s a question I received from someone who had just gotten their first copy of my weekly newsletter, Confident Computing.
As I thought about it, I realized it’s a great question. Not because I want to defend myself as “not AI” (I’m not, by the way), but because the question applies to so much these days, and it’s only getting worse.
And AI is only getting better.

AI or not?
You can’t prove whether something was written by AI or not, and that’s been true of plenty of tools before AI came along. What matters is whether you can trust the source and the information. Be skeptical of everything, not just AI. Judge content by whether it helps you, not by how it was made.
This isn’t anti-AI
I want to be clear: this isn’t an anti-AI rant.
- I use AI myself. How I use it continues to evolve as AI capabilities increase, but I currently believe AI to be useful.
- AI is just a tool. What matters is not whether AI is used, but how it is used.
To that latter point, consider this from the original question: “I’m afraid to click on all the links you sent me to click.”
AI has nothing to do with whether or not the links are trustworthy or safe. Nothing.
- Someone could use AI to create a newsletter full of perfectly safe links.
- Someone could use AI to create a newsletter full of malicious links.
- Someone not using AI could create a newsletter full of perfectly safe links.
- Someone not using AI could create a newsletter full of malicious links.
AI is just a tool.
So, how do I know you’re not AI?
You don’t.
There is no way for you to determine that this article was not written by AI. I can and do claim I wrote it myself from scratch, but short of having you physically in my office watching me type, I have no way to prove that.
And before you start citing things like writing style, voice, and AI-detection tools, consider two things:
- AI trained on a specific writing style and voice is pretty darned good at creating content in that voice and style.
- Even if you think you can tell the difference today, in the near future, you won’t be able to. AI is only getting better. The way I put it is, the AI you see today is the worst AI you’ll ever see.
This applies to all media: the written word, photos, audio, and video. It can all be completely fabricated and indistinguishable from reality — if not by today’s AI, then by the next one to come along.
Great. What do we do with that?

Stay skeptical, my friend
You should already be skeptical when dealing with online information, regardless of the tools and techniques that create it.
- It’s long been possible to create fake photos and videos of things that never happened or don’t exist. AI just makes it easier.
- Altering audio to make someone sound like someone else isn’t new. AI just makes it easier.
- A good writer can create very official-sounding, convincing content. AI makes this easier, too.
Yes, someone with malicious intent can use AI to mislead.
Whether or not they do, you need to be on guard anyway.
AI information can be unvetted and misleading, or it can be clear, accurate, and genuinely helpful. The same is absolutely true for content generated by a real person.
If you get a clear solution to an issue you’re facing, does it really matter whether or not AI was part of the process that got it to you? Real humans are just as prone to errors[1] as AI. You should be skeptical of, and do your best to validate, all answers — regardless of the source.
AI-generated versus AI-assisted
In my article Why I Cringe When I Hear People Are Using ChatGPT to Look Things Up, I lament the blind trust people place in AI chatbots when using them as replacements for search engines. This is pure AI-generated content and should be met with an even higher degree of skepticism, because no human was part of the answer-creation process. While this, too, will certainly improve over time, it’s this use that generates the overconfident hallucinations and misinformation you need to be wary of.
AI-assisted is something else. I’d consider my own use of AI in this category. It’s a tool I use to streamline my process and generate better, faster results. In all cases, “better” is a judgment I — still a real human — make before hitting “publish” on any article.
Whether AI-generated, AI-assisted, or completely independent, it can be incorrect. Or it can be correct.
And while the level of skepticism required might ease somewhat as you build trust, in none of these cases should you blindly accept what you’re told.
It’s all about trust
What matters most is your trust in the information sources you use. Trust isn’t built overnight.
For example, be skeptical about that first issue of a newsletter you just subscribed to — even mine. Focus less on how it was generated and more on whether or not you get value from it. Hopefully, over time, subsequent issues will provide you enough helpful information to warrant your trust.
The same is true for any product or source of information. Trust must be earned, especially online, whether you build it through recommendations from friends, reviews, or your own experience over time.
And that’s completely independent of what tools — AI or otherwise — were used to create what you see.
Do this
By all means, be skeptical.
But don’t limit your skepticism to AI. Conversely, don’t dismiss something out of hand because AI might have been used.
What matters is the result, not necessarily how it was made[2]. Can you trust the answer? Can you trust the product? Does it add value, answer a question, or benefit you in some other way?
AI or not, I’d love the opportunity to earn your trust. Subscribe to Confident Computing! Less frustration and more confidence, solutions, answers, and tips in your inbox every week.
Footnotes & References
[1]: I’ve often thought that AI’s ability to “hallucinate” or confidently fabricate false information isn’t so much a bug as an accurate simulation of how real humans often do the same.
[2]: I do understand objections to what AI costs, literally, in terms of possible environmental damage. My position is that this is a problem that will be solved in the coming years. Many solutions in the form of more renewable energy and improved AI algorithms are underway already.