
I remember the first time I heard the term “Artificial Intelligence.” It must have been back in the early 1980s, when personal computers like the Apple II had just begun filtering into our homes and schools.
At that time, “AI” sounded like something out of a science fiction movie, not a technology that I would interact with on a daily basis. But here we are, decades later, and the term “AI” is practically everywhere. It is overused, overhyped, and applied to just about any computer program that exhibits the tiniest bit of adaptability.
I have spent a large portion of my career working with various forms of technology. Back in the 1980s, I tinkered with BASIC at rudimentary command-line interfaces, played text-based adventure games, and occasionally used rule-based systems that felt clever for their time.
Now, looking at the software and apps around me, I see them described with the grand, somewhat magical label “AI,” even when what they are doing is little more than applying a pre-coded set of rules or pattern-matching algorithms that we have been perfecting for decades. It can often feel like the language of marketing hype rather than a technological leap.
I believe it is necessary to clarify this overuse of the term “AI,” especially considering how often it is attached to tools that promise world-changing capabilities. Those promises frequently go unfulfilled. When I reflect on the tech I used in my earlier days, I realize that if we applied today’s extremely generous definition of AI to that old technology, we might consider the following “AI” by today’s standards:
- Text Adventure Games: Titles like “Zork” used simple branching logic to simulate intelligent responses.
- Early Spell Checkers and Grammar Checkers: These programs applied rudimentary pattern matching and rules to suggest corrections.
- Basic Voice Command Systems: DragonDictate recognized limited sets of spoken commands.
- Chess Programs: Chessmaster, or the early versions of Deep Blue, relied on brute-force search and heuristics rather than machine learning.
- Expert Systems: Programs designed for narrow tasks—diagnosing equipment malfunctions or suggesting treatments—using carefully crafted if-then rules (see the sketch after this list).
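To make that concrete, here is a minimal sketch of the kind of hand-written, rule-based logic these systems ran on. It is written in Python for readability, and the keywords and responses are invented for illustration rather than taken from any real program; the point is that every “intelligent” behavior is an explicit, pre-coded rule, with nothing learned from data.

```python
# A toy rule-based responder in the spirit of 1980s "smart software".
# Every behavior is a hand-authored if-then rule; nothing is learned.

RULES = [
    # (keyword to look for, canned response) -- invented for illustration
    ("lamp", "The brass lamp flickers to life, lighting the room."),
    ("north", "You walk north into a damp stone corridor."),
    ("inventory", "You are carrying: a brass lamp, a rusty key."),
]

def respond(command: str) -> str:
    """Return the first canned response whose keyword appears in the command."""
    command = command.lower()
    for keyword, response in RULES:
        if keyword in command:
            return response
    return "I don't understand that."  # the classic fallback

if __name__ == "__main__":
    print(respond("Turn on the lamp"))  # matches the "lamp" rule
    print(respond("Go north"))          # matches the "north" rule
    print(respond("Sing a song"))       # falls through to the default
```

Swap the keyword list for a decision tree or a large table of if-then clauses and you have, in miniature, the architecture of a text adventure parser or an expert system. By today’s marketing standards, even this could plausibly be billed as “AI.”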
We never would have called these systems “AI” back in the day. They were “expert systems” or “smart software.” They were innovative, but they did not carry the lofty, almost mystical connotations that “AI” does now.
Yet in the modern tech landscape, where any semblance of automated decision-making or pattern recognition is deemed “AI,” these old technologies could easily be rebranded. It is not that they suddenly became more intelligent; it is that our marketing-driven language has shifted.
The shift is not just about nostalgia. It has real consequences. For one thing, it sets wildly unrealistic expectations. A company announces it is using “AI” to revolutionize customer service, and we imagine near-human conversational skills. But what we often get is a glorified chatbot, stumbling over anything beyond the most basic queries.
Another company claims its “AI” can predict market trends, and we expect near-perfect foresight. Instead, we find a statistical model that fails to beat a decent analyst armed with spreadsheets and good intuition.
I saw this firsthand in recent years, when I experimented with the new generation of generative language models, including ChatGPT. The hype around these tools was enormous. They were supposed to write articles, summarize documents, answer questions, and even come up with new ideas.
At times, I got the sense that the world expected them to fully replace human writers, teachers, and advisors. And certainly, they showcased some remarkable capabilities. But they also had glaring and detrimental flaws like misinformation, nonsensical answers, and a tendency to sound confident while being completely wrong.
Using ChatGPT is like having a frustratingly unmotivated intern who continues to disappoint, but on rare occasions accidentally proves to be useful.
This is my honest assessment after a couple of years of interacting with it. Sometimes I would ask it a question and get a surprisingly nuanced, well-structured response. Then five minutes later, I would ask a similar question and receive a confused, off-topic answer that any human with a bit of training or care could have improved upon.
That inconsistency and unreliability felt reminiscent of an actual human who did not really want to be there working for you, a far cry from a revolutionary helper that would replace humans.
What reshaped my early expectations was the gap between the marketed image of AI and the practical reality of what the tech can deliver. In the news media, AI is often depicted as an unstoppable force, ready to revolutionize everything from healthcare to legal advice. But the real tools we have, like ChatGPT, are much more modest.
ChatGPT can assist us with tasks, suggest ideas, and even produce content at scale, but chatbots require oversight, verification, and a steady human hand. In other words, they do not run on autopilot and produce perfect results. Hence the intern comparison: any work produced has to be double- and triple-checked for accuracy and truth.
If so much of this modern technology is overhyped “AI,” what good is it really for? Why bother with these tools at all if they are not living up to the science-fiction future the hype envisions? Having spent decades around technology, I can still see the value of something like ChatGPT, especially if we view these programs as what they really are: advanced tools, not human replacements.
ChatGPT and similar language models are useful as a starting point. They are good for outlining projects, brainstorming, and summarizing vast amounts of text, and used that way they can save time and energy while enhancing the quality of the end product.
ChatGPT can spark ideas that might not otherwise have been considered, but taking the next step in development is where the tech quickly falls flat. Approaching the tool with that mindset helps manage expectations, whereas the hype misrepresents its capabilities and makes its weaknesses all the more glaring.
The relentless overuse of the term “AI” in marketing materials and headlines threatens to erode public trust in the technology. People see stories about AI transforming industries overnight, then experience the reality of clunky customer support bots that barely understand their requests.
If we continue to label every advanced automation as “AI,” we risk setting ourselves up for disappointment and backlash. When people’s expectations soar too high, the inevitable shortcomings feel like betrayals, and it becomes harder for genuine innovations to gain respect. In the end, that is what defines progress in technology: a gradual refinement of tools over time, guided by human insight and honesty.
We may call it AI, or we may not. The important thing is to keep our expectations in check and treat these tools as what they are — useful, frustrating, occasionally brilliant, and very much works in progress.