AI Isn't As Bad Or As Good As You (probably) Think
- nicoletheepickle
- Apr 9
- 7 min read
I love AI. I think AI has amazing potential to help us make the world a better place. If you've already decided AI is the bomb, the devil, or anything in between, try taking a peek at my take. I'd love to hear from people who see gaps in my logic. I'm curious about what I'm missing...
The AI In The Room
The public "discourse" around artificial intelligence is a study in extremes—tech fanboys, CEOs, and YouTube "gurus" promising digital transcendence versus existential doomsayers, concerned corporate responsibility overlords, and artistic types forecasting humanity's replacement. While some of the concerns driving these opinions are well-intentioned and valid, both narratives ultimately serve particular interests and fundamentally misrepresent the nature of current AI technology.
Have a seat while I lay out a more nuanced approach: a more balanced position that recognizes both the genuine utility of AI systems and their profound limitations. This means resisting the pull of simplistic narratives and acknowledging that we can use AI tools while maintaining critical distance from the corporate mythmaking and mostly overblown existential threats that surround them.
What Is AI?
AI systems are specialized computer programs designed to perform specific tasks that typically require human intelligence. Despite impressive advances, today's AI remains "narrow"—excelling at particular functions but unable to transfer that capability to unrelated tasks the way humans naturally do.
Think about the AI you encounter daily:
Email filters that separate important messages from spam
Voice assistants that recognize your commands
Photo apps that automatically organize your pictures
Recommendation systems that suggest movies or music
Each of these systems does one job well, but none can step outside its lane. Your spam filter can't suggest movies, and your music recommendation system can't identify faces in photos. This specialization is key to understanding AI's current state.
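To make the "one job well" point concrete, here's a minimal sketch of a narrow classifier, assuming scikit-learn is installed and using a handful of invented toy emails. It learns to separate spam from legitimate mail and can do nothing else; ask it about movies and it is still forced to answer "spam" or "ham", because that's the only question it knows.

```python
# A toy "narrow AI": a spam filter that can do exactly one thing.
# Assumes scikit-learn is installed; the training examples are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "claim your cash reward",   # spam
    "meeting moved to 3pm", "lunch on thursday?",       # legitimate ("ham")
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words counts + naive Bayes: pure pattern matching over word frequencies
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

print(spam_filter.predict(["free cash prize"]))       # ['spam']
print(spam_filter.predict(["recommend me a movie"]))  # still answers 'spam' or
# 'ham' -- the model has no concept of movies; it can only ever answer its one question
```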
By 2025, AI has become increasingly multimodal, integrating text, audio, and image processing into unified systems. Models can now plan multi-step actions over the course of a conversation, showing a more agentic quality than earlier iterations. Optimizations to AI infrastructure have also brought significant improvements in processing time and operating costs.
Language Models - Beyond Simple Mimicry
When you chat with AI assistants built on recent large language models (LLMs), you're interacting with sophisticated systems that can seem remarkably human-like, which leads to both fascination and concern.
But how do they actually work?
Imagine if you read every book, article, and website in existence and became extremely good at predicting what word should come next in any sentence. That's essentially what LLMs do—they're sophisticated pattern-matching systems trained on vast amounts of text.
When you ask an LLM a question, it's using statistical patterns to generate a response that would likely follow your question, based on similar patterns in its training data. While recent models have shown improved reasoning capabilities and contextual understanding beyond pure pattern matching, they still lack human-like understanding of the content they process.
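Here's a drastically simplified sketch of that prediction idea in plain Python: a bigram "model" that just counts which word follows which in a tiny invented corpus, then generates by always picking the most frequent follower. Real LLMs learn vastly richer patterns with billions of parameters, but the core job, predicting the next token, is the same.

```python
# A drastically simplified next-word predictor: count which word follows which,
# then "generate" by always picking the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training."""
    return followers[word].most_common(1)[0][0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
# Prints "the cat sat on the" -- fluent-looking, zero understanding
```

The output reads as grammatical English purely because the training text was grammatical English; no meaning is required anywhere in the process.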
What AI Can Do Well in 2025
Process and analyze enormous datasets faster than humans
Identify patterns in data that might be invisible to human analysts
Generate human-like text, images, and other media based on prompts
Perform specific tasks with superhuman precision when properly trained
What AI Can't Do Well
Despite these advancements, fundamental limitations persist:
Reasoning and causality: AI systems struggle with understanding cause-effect relationships that aren't explicitly represented in their training data
Common sense knowledge: Everyday knowledge that humans take for granted remains challenging for AI
Transferring knowledge across domains: Skills learned in one context rarely transfer to new situations
Hallucinations and factual accuracy: Models continue to generate plausible-sounding but incorrect information
Domain mismatch: Systems trained on broad datasets still struggle with specific or niche subjects
Follow the Incentives
A core principle of critical thinking is examining incentive structures. Who benefits from particular narratives about AI?
Corporate interests benefit from both hype and fear—the former drives investment and stock prices, while the latter creates urgency and FOMO (fear of missing out)
Media outlets gain clicks and attention from dramatic headlines about digital sentience or job apocalypse
Academic researchers often secure funding by promising revolutionary advances
Politicians find AI serves as a convenient distraction from more immediate societal problems
Understanding these incentives doesn't invalidate all claims, but it provides essential context for evaluating them.
Myth vs. Reality - Addressing Common Concerns
When it comes to AI, distinguishing between legitimate concerns and exaggerated fears helps us focus on what truly matters. Let's examine some common beliefs about AI and see how they compare with current reality:
Myth: AI Systems "Understand" Language
Reality: While recent models demonstrate improved contextual awareness, they fundamentally process language differently than humans. They recognize statistical patterns rather than comprehending meaning as humans do. Their "understanding" is more akin to sophisticated prediction than true comprehension.
Myth: Current AI Progress Leads Directly to AGI
Reality: The path from current narrow AI to hypothetical AGI is neither clear nor guaranteed. Many researchers argue that entirely different approaches may be necessary to achieve general intelligence.
As Melanie Mitchell, computer scientist and author, notes: "The history of AI is filled with examples of tasks that were once thought to require 'general intelligence' but were eventually solved by narrow methods."
Myth: AI Systems Can Reason Like Humans
Reality: Though reasoning capabilities have improved in recent models, AI still struggles with complex logical reasoning, especially when it requires integrating multiple knowledge domains or understanding implicit information.
Myth: "AI will become conscious and decide humans are a threat"
Reality: Today's AI systems have no consciousness, desires, or independent goals—they're specialized tools that follow programmed objectives.
Myth: "If AI isn't sentient today, it will be tomorrow"
Reality: Many leading AI researchers question whether true AGI is even possible with current approaches.
The Evidence Problem
Claims about AI capabilities often outpace rigorous evidence. When subjected to careful testing, many assertions about AI "understanding" or "reasoning" reveal significant limitations:
Tests show that small changes to inputs can produce dramatically different outputs, indicating brittle understanding
Models perform poorly on tasks requiring consistent reasoning across different scenarios
Performance degrades significantly on examples that differ from training distributions
For example, recent studies testing reasoning capabilities found that while models appear to solve complex problems, they often rely on superficial patterns rather than deep understanding of the underlying concepts.
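As a hedged illustration of that last point about distribution shift, here's a small experiment using a classical (non-LLM) model so it runs anywhere scikit-learn is installed: train a digit classifier on clean images, then score it on noisy versions it never saw during training. Exact numbers will vary, but the drop in accuracy under shift is the pattern those studies describe.

```python
# A small distribution-shift experiment: the model does well on data that looks
# like its training set and degrades on data that doesn't.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("in-distribution accuracy:", clf.score(X_test, y_test))

# Simulate a shift the model never trained on: heavy pixel noise
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(0, 4, size=X_test.shape)
print("shifted accuracy:        ", clf.score(X_shifted, y_test))
```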
Try asking AI systems these seemingly simple questions:
"Can a typical 3-year-old child lift a car over their head?"
"If I put a book in a freezer for an hour, will I be able to read it?"
"If I have two cups of water, and pour one into a tall glass and one into a short, wide bowl, which contains more water?"
Many AI systems still stumble over basic physical-reasoning questions like these, which most people answer effortlessly. That gap highlights the distance between current approaches and genuine understanding.
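If you'd like to run this probe yourself, here's a minimal sketch using the OpenAI Python client. The model name is an assumption; swap in whichever chat-capable model and client you actually use, and judge the answers for yourself.

```python
# A minimal probe script: send simple physical-reasoning questions to a chat
# model and eyeball the answers. Requires the `openai` package and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "Can a typical 3-year-old child lift a car over their head?",
    "If I put a book in a freezer for an hour, will I be able to read it?",
    "If I pour two equal cups of water into a tall glass and a short, wide "
    "bowl, which contains more water?",
]

for q in questions:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[{"role": "user", "content": q}],
    )
    print(q)
    print("->", reply.choices[0].message.content, "\n")
```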
The Anthropomorphism Trap
By 2025, anthropomorphic AI design has become more prevalent, raising new ethical questions about systems that behave in human-like ways. This trend makes it increasingly important to maintain clarity about what these systems actually are.
Humans naturally tend to anthropomorphize—attributing human characteristics to non-human entities. When an AI system uses first-person pronouns like "I" or discusses concepts like "thinking" or "feeling," it's easy to forget these are just patterns of language, not expressions of actual experience.
Our tendency to attribute human-like qualities to AI systems creates several problems:
It obscures technical understanding of how these systems actually work
It creates unrealistic expectations about AI capabilities
It can lead to inappropriate trust in AI systems for critical decisions
It complicates ethical discussions about AI development and deployment
Our brains evolved to understand other people, not algorithms. We're primed to interpret entities that use language as having minds similar to our own. This makes it particularly difficult to maintain a clear perspective on what language models are actually doing.
The "AI Effect"
There's a curious phenomenon in AI research sometimes called the "AI Effect"—when AI successfully solves a problem, people often say "that's not really intelligence." Chess-playing computers were once considered the height of AI research; now they're just seen as specialized algorithms.
This shifting definition creates a moving target where "true AI" is always just beyond current capabilities, feeding into narratives about imminent breakthroughs to general intelligence.
A Balanced Perspective
Rather than asking whether AGI is imminent or impossible, more productive questions include:
What specific capabilities are current AI systems demonstrating?
What rigorous evidence supports claims about these capabilities?
What fundamental limitations remain unaddressed?
What new research directions might address these limitations?
Questions to Ask About AI Headlines
When you encounter alarming AI news, consider asking:
Is this about a specific capability or general intelligence? Most breakthroughs involve narrow abilities, not general intelligence.
What measurable advance is actually being claimed? Look beyond vague terms like "breakthrough" or "human-like."
Who benefits from this framing? Consider whether company valuations or funding might be influenced by dramatic claims.
What do independent experts say? Seek perspectives from researchers without financial interest in the technology.
What are the actual limitations? Often buried deep in technical papers are important caveats not mentioned in headlines.
Resisting False Binaries
Perhaps most importantly, critical thinking about AI requires rejecting the false binary of "revolutionary technology" versus "nothing new here." Current AI systems represent significant advances in pattern recognition and statistical modeling without approaching general intelligence.
They are genuinely useful tools with specific limitations—neither magical thinking machines nor glorified search engines. The intellectually honest position acknowledges both the genuine utility and the fundamental constraints of these systems.
Conclusion
The path toward more capable AI systems requires both ambitious research and clear-eyed assessment. By avoiding both uncritical enthusiasm and dismissive skepticism, we can engage with AI developments in a way that acknowledges both remarkable progress and fundamental limitations.
As we continue to develop AI technology, maintaining this balanced perspective will be crucial for making informed decisions about research priorities, regulatory approaches, and practical applications.
In Plain English: AI systems are tools created by humans to serve human purposes. They reflect both our ingenuity and our limitations. Understanding what they can and cannot do allows us to approach them with neither unwarranted fear nor excessive expectation.
Glossary: AI Terms in Plain English
AI (Artificial Intelligence): Computer systems designed to perform tasks that typically require human intelligence
Narrow AI: Systems designed for specific tasks without general intelligence (all current AI)
Machine Learning: Systems that improve performance by learning patterns from data
LLM (Large Language Model): AI trained on vast text data to predict and generate human-like language
AGI (Artificial General Intelligence): Theoretical future AI with human-like general intelligence (does not currently exist)
Multimodal AI: Systems that can process and generate multiple types of data (text, images, audio)
Hallucination: When AI generates plausible-sounding but incorrect information
Training Data: Information used to teach AI systems patterns and relationships
Algorithm: Step-by-step procedure for solving a problem or performing a task
Neural Network: Machine learning approach inspired by brain structure
For Further Reading
If you're interested in exploring nuanced, expert perspectives on AI that cut through both hype and fear, consider these books by leading researchers and critics:
"Artificial Intelligence: A Guide for Thinking Humans" by Melanie Mitchell
"Rebooting AI: Building Artificial Intelligence We Can Trust" by Gary Marcus and Ernest Davis
"Atlas of AI" by Kate Crawford
"You Are Not a Gadget" by Jaron Lanier
"The Alignment Problem" by Brian Christian
"Human Compatible: Artificial Intelligence and the Problem of Control" by Stuart Russell
"The Myth of Artificial Intelligence" by Erik Larson