When Likes Matter More Than Truth
Another day, another viral post on LinkedIn misusing data to drive engagement.
This one struck a particular nerve, not for its content alone but for what it represented: a growing pattern of intellectual dishonesty I’ve watched spread across social media for years.
The post was deceptively simple: a screenshot of an article headline declaring "AI chatbots defeated doctors at diagnosing illness."
No link to the source. No context. No data. Just a clickbait-style headline accompanied by an influencer's bold proclamation: "In case you missed it, AI is now a better doctor than doctors, even when they are using AI."
I watched as the engagement numbers climbed: 661 likes, 163 comments, 72 reposts. The comments section became an echo chamber of intellectual laziness:
"It will soon be illegal for a top consultant not to consult AI before they make a final decision."
"AI will soon replace dermatologists."
"AI outperforming doctors is awesome progress for healthcare."
Each comment added another layer to my growing frustration, a frustration born not from a single post but from watching this pattern repeat itself over and over again. Social media influencers, armed with screenshots and clickbait, prey on a population eager to consume but hesitant to question, too busy scrolling to slow down and read beyond the headline.
My anger peaked when I finally located the actual study in JAMA (the Journal of the American Medical Association). Buried on page seven was the truth the influencer had conveniently left out of his post: "Results of this study should not be interpreted to indicate that LLM should be used for diagnosis autonomously without physician oversight." The research had tested the AI on just six curated clinical vignettes, each pre-summarized by human clinicians, a far cry from the complexity of real-world medical diagnosis.
Among the sea of naive responses, a few voices of reason emerged. One commenter pointed out the telling absence of medical professionals in the discussion: "I just went through 113 comments here and noticed there's not a single doctor contributing to the debate. Makes me wonder, what does that say?" Another cut straight to the heart of the problem: "AI enthusiasts will believe anything someone writes about AI, as long as it's good news for AI. There's no critical attitude, no worry about due diligence."
These influencers may view their actions as harmless engagement farming, but they're causing real damage. They're not just building their social media equity; they're eroding the foundation of informed discourse. Each oversimplified screenshot, each exaggerated claim, each misrepresented study chips away at our collective ability to engage with complex truths.
What's most troubling is the calculated nature of it all. These influencers, many of whom likely have the resources and capability to access and understand the original research, choose sensationalism over substance. They understand that nuance doesn't go viral, that complexity doesn't generate likes, that truth is often less engaging than fiction.
The cost of these intellectual shortcuts isn't just measured in misinformation; it's measured in the degradation of our public discourse, in the weakening of our critical thinking muscles, in the growing divide between headline-deep understanding and genuine knowledge.
It's gross. It's disgusting. It's unethical. And it needs to stop.
But until we collectively decide to value truth over virality, to prize substance over shares, these influencers will continue to find fertile ground for their misleading seeds.
Remember, the responsibility lies not just with those who spread misinformation but with all of us who choose whether to nurture it or call it out for what it is: a betrayal of trust for the sake of likes and shares.