A lot of people look down on writers who use AI to help them write. I think that’s ridiculous.
Why does this happen? It usually comes down to two assumptions:
- If you use AI, you probably don’t know what you’re talking about.
- The content is probably full of hallucinations and bad facts.
But those same problems exist in human-written content too.
To distrust something just because it's AI-assisted, you'd have to believe the inverse of those assumptions:
- People who write without AI typically know a lot about the topic they’re writing about.
- Handwritten content contains fewer errors or lies than AI-generated content.
Neither of those holds up.
In both cases, the way to check for accuracy is the same: look at the sources. If there are none, that’s a red flag—AI or not.
Now, author credibility is trickier. Someone can churn out AI slop without doing a second of research. They might not even understand the topic well enough to be a decent amateur. But again, this happens with handwritten content too.
Here’s a good test:
Ask AI to write about something you know deeply. Don’t feed it any hints. Challenge it. The results probably won’t impress you.
But when you ask it about something you don’t know much about, it sounds great. That’s the trap. You can’t judge credibility based on content alone unless you already understand the topic.
So if you see a piece of content that screams “written by AI,” how should you evaluate it?
The same way you’d evaluate anything else:
- Look at the author’s background.
- Check if they cite good sources.
That’s it.
Don’t assume something’s garbage just because it has the AI “feel.” And definitely don’t use formatting quirks—like too many em dashes—as your litmus test for trust.
Evaluate the work, not the tool.