The AI Mirage

A pot of coffee brewed, the office cleaned up: I was ready for the weekend coding project I had been looking forward to, a break from the day-to-day of data analysis to build something just for fun.

With twenty years of PHP and Python development under my belt, I decided to experiment with Tesla's Fleet API, bringing along an AI assistant, Claude, as my co-pilot. What unfolded over the weekend would become a perfect microcosm of why the current narrative about AI replacing skilled professionals is not just flawed, but dangerous.

As I worked with Claude, our interaction revealed a truth that many are overlooking in their rush to declare the end of human expertise. Even with the API documentation readily available, I watched as the AI repeatedly generated code that failed to adhere to the API's basic requirements. It was like working with an eager intern who had memorized all the syntax but couldn't grasp the deeper principles of why certain approaches worked while others failed.

The most revealing moment came when I tried to evolve my prototype into something more robust. In my local sandbox environment, everything seemed promising: the kind of early success that might lead someone to post triumphantly on LinkedIn about their AI-powered breakthroughs.

But as I began asking questions about deployment, about moving this prototype into a production environment, the limitations became stark. The AI's responses grew increasingly disconnected from the broader context of what we were building. It could solve individual problems, yes, but each solution seemed to exist in isolation, often creating new issues elsewhere in the application.

This experience mirrors what I've observed running an analytics agency. Just as people proclaim that AI will make everyone a software developer overnight, there's a parallel narrative that tools like ChatGPT will instantly transform anyone into a data analyst. The reality couldn't be further from the truth. The same fundamental misunderstanding plagues both fields: the belief that generating code or creating a basic visualization is the same as producing meaningful, actionable insights that can drive business decisions.

In both software development and data analytics, I feel a complex mix of emotions every time I open LinkedIn and see another post about someone with no background experience creating a "profitable app" or "revolutionary analysis" with ChatGPT over a weekend.

It's not anger at what I perceive as deception but rather a deep frustration born of understanding. These aren't people trying to sell snake oil (well, some are); they're enthusiasts who simply don't know what they don't know. They're celebrating building a rowboat without realizing they're planning to cross an ocean.

The gap between a working prototype and a production-ready application is massive, just as the gap between a basic data visualization and meaningful analytics is deep. Where is the robust error handling in the code? The security measures protecting user data? The authentication systems? The scalable infrastructure? In analytics, where is the data validation? The statistical rigor? The business context that transforms numbers into actionable insights? These are fundamental requirements for any serious work. Yet they often remain invisible to those who haven't spent years learning why they matter.
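To make that gap concrete, here is a minimal, entirely hypothetical sketch in Python. It is not the Tesla Fleet API and uses no real endpoints; it simply contrasts the weekend-prototype version of a data fetch with one that adds the retries, backoff, and payload validation that production work demands:

```python
# Hypothetical illustration only: contrasting a prototype API call with a
# production-minded one. No real service or library API is referenced.
import time
from typing import Callable

def prototype_fetch(call: Callable[[], dict]) -> dict:
    # The weekend version: assume the call always succeeds.
    return call()

def production_fetch(call: Callable[[], dict], retries: int = 3,
                     backoff: float = 0.01) -> dict:
    """Retry transient failures with backoff, validate the payload,
    and fail loudly when something is genuinely wrong."""
    last_error = None
    for attempt in range(retries):
        try:
            payload = call()
        except ConnectionError as exc:  # transient: back off and retry
            last_error = exc
            time.sleep(backoff * (2 ** attempt))
            continue
        if not isinstance(payload, dict) or "data" not in payload:
            raise ValueError(f"unexpected payload shape: {payload!r}")
        return payload
    raise RuntimeError(f"gave up after {retries} attempts") from last_error

# A flaky fake endpoint standing in for a remote API: fails twice, then works.
attempts = {"n": 0}
def flaky_endpoint() -> dict:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("temporary network blip")
    return {"data": {"battery_level": 72}}

result = production_fetch(flaky_endpoint)
print(result["data"]["battery_level"])  # prints 72 after two silent retries
```

The prototype version would have crashed on the first blip; the production version survives it. None of this logic is hard, but knowing it has to exist is exactly the experience the AI-generated prototype lacked.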

This is why the narrative of AI replacing skilled professionals is so problematic. It's not just wrong; it's misleading in a way that could have real consequences. Yes, these AI platforms are remarkable tools for prototyping, for brainstorming, for working through specific problems. They can even be valuable learning aids for those truly committed to understanding software development or data analytics or any other field. But they're not magical solutions that eliminate the need for expertise.

While AI can help accelerate certain aspects of development and analysis, it can't (at least not yet) replace the accumulated wisdom that comes from years of real-world experience. It can't anticipate all the ways a system might fail or understand the delicate balance between functionality and security or grasp the full complexity of creating something that's not just functional, but reliable, secure, and maintainable. In data analytics, it can't understand the nuanced business context that turns raw data into valuable insights or recognize when correlations are meaningful versus misleading.

The real danger isn't that AI will replace developers, analysts, or doctors; it's that we might convince ourselves it already has, leading to a generation of applications, analyses, and other products built on shaky foundations by people who believe that knowing how to prompt an AI is the same as knowing how to build software or analyze data.

For those willing to learn, these AI platforms can be powerful allies. But they're tools to augment human expertise, not replace it. And until we collectively acknowledge this truth, we risk building our digital future on a foundation of misunderstanding.

Jason Thompson

Jason Thompson is the CEO and co-founder of 33 Sticks, a boutique analytics company focused on helping businesses make human-centered decisions through data. He regularly speaks on topics related to data literacy and ethical analytics practices and is the co-author of the analytics children's book 'A is for Analytics'.

https://www.hippieceolife.com/