Living Through the AI Era: Bubble, Revolution, or Both?
Disclaimer: These are my personal reflections—anecdotal, observational, and written with the awareness that the AI landscape changes by the minute. By the time you finish reading this, parts of this essay may already be out of date.
The Financial Question—Are We Living Through Another Bubble?
The Investment Surge
Billions of dollars are flowing into AI start-ups, infrastructure, and model providers. As of October 2025, venture capitalists have poured £153 billion ($192.7 billion) into AI start-ups so far this year—setting new global records and putting 2025 on track to be the first year in which more than half of total VC dollars go to the industry.[1] In Q3 2025 alone, global venture funding jumped 38% to reach $97 billion, with the three largest rounds going to foundation model companies Anthropic ($13 billion), xAI ($5.3 billion), and Mistral AI ($2 billion).[2]
Established players like Microsoft, Google, and Amazon have committed staggering sums into AI-specific cloud infrastructure and GPU capacity. Microsoft is investing $80 billion in fiscal 2025, Amazon plans $100 billion, and Google is targeting $75 billion—with combined infrastructure investments from tech mega-caps projected to reach $364 billion.[3]
This surge feels eerily reminiscent of the dot-com bubble of 2000—a period when any company adding ".com" to its name could attract outsized valuations. The parallel is clear: FOMO-driven investment, speculative valuations, and promises of a "revolution" that few can fully articulate.
Yet there are differences. Unlike early web start-ups that could spin up overnight, AI at scale demands real physical capital: data centres, chips, fibre networks, and vast operational energy. That infrastructure is tangible, expensive, and (at least somewhat) defensible.
A Rational Bubble?
Could both things be true—that AI is overhyped, and that it will nonetheless transform industries? Possibly. Market bubbles often overstate short-term potential whilst underestimating long-term impact. The internet boom left behind massive overcapacity and bankruptcies, but also the foundations of today's online economy.
A similar pattern might play out here. Some AI start-ups will fade, but the infrastructure and capabilities they helped build could prove essential later.
I'm not qualified to call a definitive financial verdict—but we might be in a "mini-bubble" within a broader structural shift.
The Technical Question—Will AI Deliver?
"It's Just Pattern Matching"
Critics argue that modern AI isn't truly intelligent; it merely recombines statistical patterns from its training data. In this view, large language models (LLMs) are glorified autocomplete systems—powerful but fundamentally shallow. Recent research from Apple and IBM has highlighted these limitations. IBM's analysis notes that LLMs "can't do deductive reasoning" and are instead "set up to do pattern recognition", whilst Apple research demonstrates that AI models primarily rely on statistical pattern matching rather than genuine mathematical reasoning, with accuracy collapsing on high-complexity tasks. Columbia University Professor Vishal Misra has articulated similar concerns, arguing in his June 2025 essay that language models are fundamentally limited by their reliance on statistical patterns and cannot truly improve themselves through reasoning.[4]
The critique is fair: AI systems hallucinate, lack grounding in physical reality, and often fail at causal reasoning. But to dismiss them entirely misses something essential.
Humans Are Pattern Matchers Too
Human cognition relies heavily on pattern recognition. From reading facial expressions to completing sentences, much of our "thinking" involves comparing current stimuli with stored examples. We may encode richer context, but the process is still probabilistic and experience-based.
In that sense, AI's "pattern matching" is not alien—it's familiar. The question isn't whether AI uses patterns, but how far pattern-based learning can go before it hits cognitive or conceptual ceilings.
Counterarguments to "AI is Just Pattern Matching"
There are several compelling counterarguments to this reductive claim, and experts increasingly emphasise that, whilst pattern matching is a fundamental mechanism, modern AI systems operate in ways that go beyond this basic principle:
Hierarchical Reasoning and Abstraction: Advanced AI models leverage hierarchical representations that move beyond raw pattern recognition, enabling abstraction, analogical reasoning, and context-aware responses—facilitating problem-solving that resembles human cognition in specific domains. Recent research on hierarchical reasoning models demonstrates that AI systems inspired by the brain's multi-timescale processing can achieve exceptional performance on complex reasoning tasks, including the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities.[8]
Generalisation and Zero-Shot Learning: State-of-the-art AI models, especially large language models, demonstrate the ability to generalise from previously unseen data ("zero-shot" tasks), inferring solutions through context rather than simple memorisation of past patterns. Research has shown that large language models are capable of zero-shot reasoning on complex multi-step tasks, performing competently on problems they have never been explicitly trained to solve.[9]
Autonomous Planning and Multi-Step Reasoning: Agentic AI workflows not only identify patterns but plan workflows, adapt to novel data, and synthesise information across unstructured contexts, mimicking some aspects of human decision-making and creative synthesis.
Emergent Behaviours: Modern deep learning systems exhibit emergent behaviours (such as generating code, composing music, or playing strategic games) that were not explicitly trained into them, suggesting a more complex form of intelligence than rote statistical pattern matching. As language models scale up, they demonstrate emergent abilities—capabilities that appear unpredictably at certain model sizes and were not present in smaller versions.[10]
Human Cognition Relies on Patterns Too: Critiques of AI as "just pattern matching" sometimes overlook the reality that human cognition itself relies heavily on identifying, storing, and manipulating patterns; intelligence emerges from using those patterns adaptively.
Whilst pattern matching is a fundamental building block, contemporary AI systems utilise this capacity as a substrate for more sophisticated reasoning, planning, and abstraction—undermining claims that "AI is just pattern matching" and supporting a more nuanced view of how artificial intelligence operates in practice.
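The zero-shot point above can be made concrete. The sketch below contrasts a plain zero-shot prompt with the zero-shot chain-of-thought variant studied by Kojima et al.[9]; the `build_*` helpers and the example question are invented for illustration, and no particular LLM API is assumed.

```python
# Illustrative sketch of zero-shot vs zero-shot chain-of-thought prompting,
# in the spirit of Kojima et al. [9]. No model call is made here; a real
# system would send these prompts to an LLM of the reader's choosing.

def build_zero_shot_prompt(question: str) -> str:
    """Plain zero-shot: the model is asked a task it was never
    explicitly trained or fine-tuned to solve."""
    return f"Q: {question}\nA:"

def build_zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot chain-of-thought: a single trigger phrase elicits
    multi-step reasoning without any worked examples in the prompt."""
    return f"Q: {question}\nA: Let's think step by step."

question = "A shop sells pens in packs of 12. How many packs for 60 pens?"
print(build_zero_shot_prompt(question))
print(build_zero_shot_cot_prompt(question))
```

The only difference is the trailing trigger phrase, which is what makes the finding striking: a generic sentence, not task-specific training, unlocks markedly better multi-step performance.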
Technical Perspective: What Does AI Really Deliver?
In domains rich in structured data and textual context—summarisation, translation, search, code generation, documentation—AI has already proved transformative. Developers routinely use tools such as Amazon Q, Replit, Lovable, or Copilot-like assistants to draft, test, and debug code. Analysts use AI to summarise long documents or generate reports in minutes.
These are not trivial gains; they redefine productivity in knowledge work.
Many critics argue that today's AI systems are not "intelligent" in the way humans are, and are merely sophisticated pattern matchers or glorified text generators. The distinction between human and artificial intelligence is subtle: whilst humans certainly use pattern matching to solve problems, what separates us is our awareness of our limitations and our ability to self-correct.
That said, it's undeniable that AI has become remarkably effective in text generation, summarisation, and other language-heavy tasks. This has driven progress in fields like software development automation, customer service response, fraud detection, and medical research—domains where large volumes of data or text need to be managed rapidly and intelligently.
Agentic AI—A New Name for Workflow Automation?
A new term is circulating: agentic AI. It describes systems that can plan, act, and self-correct through multiple steps, rather than executing a single prompt.
On closer inspection, it's reminiscent of workflow automation tools like IFTTT, Zapier, or n8n—systems built around conditional logic: If This Then That. The difference is that agentic AI can dynamically reason about tasks, make decisions under uncertainty, and learn from context.
Traditional automation tools—think of systems like IFTTT or Zapier—operate within rigid, rule-based frameworks. They execute predefined sequences: if X happens, do Y. These systems are dependable but inflexible. If something unexpected occurs—say, a missing input, an API change, or a new context—they typically fail silently or require manual intervention. Their strength lies in predictability, but their weakness is brittleness. They can't reason about why a step failed or decide on an alternative course of action.
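The brittleness described above can be sketched in a few lines. The rule table and trigger names below are invented for illustration; real tools wire such triggers to third-party APIs:

```python
# Minimal sketch of rigid "if this then that" automation. Every trigger
# maps to exactly one predefined action; anything outside the table is
# dropped without comment.

RULES = {
    "new_invoice": lambda event: f"saved {event['file']} to archive",
}

def run_rule(event_type: str, event: dict):
    """Execute the predefined action for a trigger, if one exists.
    Unrecognised events are simply ignored -- the silent failure
    described above."""
    action = RULES.get(event_type)
    if action is None:
        return None  # unexpected event type: nothing happens
    return action(event)

print(run_rule("new_invoice", {"file": "inv-042.pdf"}))      # handled
print(run_rule("invoice_amended", {"file": "inv-042.pdf"}))  # silently None
```

The second call illustrates the failure mode: the system cannot reason about why no rule matched, let alone choose an alternative action.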
More sophisticated platforms like n8n occupy a middle ground. n8n allows for branching, looping, and dynamic adaptation, offering greater flexibility and customisation, particularly for developers building complex automations.[14] Unlike simpler tools, n8n provides a visual interface where different apps connect like puzzle pieces through nodes, and it supports custom code alongside pre-built integrations. However, even n8n operates primarily through structured, event-driven logic—when something happens, something else follows.
Agentic AI represents a step beyond this paradigm. Rather than relying on static rules or even complex conditional logic, agentic systems learn from context. They can interpret goals described in natural language, decompose them into sub-tasks, and plan multi-step actions autonomously. When something goes wrong, they don't just stop—they use reasoning and feedback loops to attempt recovery, such as re-running a step, reformulating a query, or selecting a different tool. This adaptability is often powered by large language models, which allow these agents to parse flexible instructions and dynamically adjust their behaviour. Whilst traditional automation tools like n8n excel at structured workflows and event-driven logic, agentic AI systems are better suited for building intelligent systems where agents can think through problems and assess outcomes.[15]
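The plan-act-recover loop described above can be sketched as follows. This is a minimal illustration under stated assumptions: `call_llm`, `fetch_report`, and `run_step` are placeholders, not any real framework's API.

```python
# Hedged sketch of an agentic loop: plan, act, observe, and retry a
# failed step rather than stopping. All functions below are stand-ins.

def call_llm(prompt: str) -> str:
    # Placeholder for a language-model call; returns a canned plan here.
    return "1. fetch report\n2. summarise report"

def fetch_report(attempt: int) -> str:
    if attempt == 0:
        raise TimeoutError("upstream API timed out")  # simulated failure
    return "quarterly figures ..."

def run_step(step: str, attempt: int) -> str:
    if "fetch" in step:
        return fetch_report(attempt)
    return f"summary of: {step}"

def agent(goal: str, max_retries: int = 2) -> list:
    # 1. Decompose the natural-language goal into sub-tasks.
    plan = call_llm(f"Break this goal into steps: {goal}").splitlines()
    results = []
    for step in plan:
        # 2. Execute each step, recovering from failures via retries
        #    instead of failing silently like a rigid rule engine.
        for attempt in range(max_retries + 1):
            try:
                results.append(run_step(step, attempt))
                break  # step succeeded
            except TimeoutError:
                continue  # observe the failure and try again
    return results

print(agent("summarise the quarterly report"))
```

A production agent would replace the retry with genuine reasoning (reformulating the query or switching tools), but the control flow—goal decomposition, execution, observation, recovery—is the essential difference from a static rule table.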
In short, agentic AI blends the predictability of automation with the fluid intelligence of contextual reasoning—moving automation from a rigid "workflow engine" to something more akin to a digital collaborator.
Where Is the World-Changing AI?
Take Sora 2, OpenAI's text-to-video model. Its results are visually stunning—cinematic clips rendered from short text prompts. Yet beyond creative or entertainment niches, its world-changing impact remains speculative.
Or look at Tesla's Optimus robot. Tesla released footage of Optimus performing kung fu–style movements, demonstrating fluid motion and balance.[11] It's an impressive technical feat, but one designed for controlled conditions. Whether such dexterity translates into robust, generalisable robotics is uncertain.
These are exciting glimpses, not revolutions. The path from demo to deployment is long and fraught with engineering, safety, and economic barriers. But, again, AI innovation is moving at terrific speed.
The broader point stands: whilst AI brings hype and headlines, genuine world-changing applications remain rare. Feats like these are technically impressive, but they are often achieved in highly controlled environments and are not yet generalised solutions for complex real-world problems.
Despite major advances across industries—from agriculture to autonomous vehicles and fraud detection—these accomplishments have yet to redefine society on the scale promised by AI's loudest advocates.
The Infrastructure Backdrop—Power, Scale, and Sustainability
The Cost of Intelligence
AI doesn't exist in a vacuum—it runs on silicon and electricity. According to the International Energy Agency, data centres currently account for around 1–1.5% of global electricity consumption.[5] The IEA projects that data centre electricity consumption will more than double by 2030, with AI being a chief driver of this growth.[6] However, data centre demand growth is expected to account for less than 10% of global electricity demand growth between 2024 and 2030.[7] Training a single frontier model can cost tens of millions of pounds and emit thousands of tonnes of CO₂ equivalent.
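At first glance, "more than doubles by 2030" and "less than 10% of demand growth" can seem in tension. A back-of-the-envelope check with round, assumed figures (not IEA data) shows how both can hold at once:

```python
# Illustrative arithmetic only -- the figures below are round assumptions
# for the sake of the reconciliation, not IEA numbers.

global_demand_2024 = 30_000              # TWh, assumed global consumption
dc_share_2024 = 0.015                    # data centres at ~1.5% of demand
dc_2024 = global_demand_2024 * dc_share_2024   # ~450 TWh
dc_2030 = dc_2024 * 2                    # "more than doubles" -> 2x here

global_growth = 5_000                    # TWh of assumed total demand growth
dc_growth = dc_2030 - dc_2024            # added data centre demand

share_of_growth = dc_growth / global_growth
print(f"Data centres: {share_of_growth:.0%} of demand growth")
```

Because data centres start from a small base, even a doubling contributes only a single-digit share of total growth when the rest of the grid (electric vehicles, heating, industry) is growing too.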
New hyperscale data centres are under construction worldwide, often near renewable-energy hubs. But the scale raises hard questions:
How sustainable is exponential growth in compute demand?
Can innovation in model efficiency outpace energy consumption?
What happens when power supply becomes a geopolitical bottleneck?
Scaling Limits
Model performance improvements have so far followed predictable scaling laws—but the cost curve is steepening. Each new "state-of-the-art" model demands disproportionately more compute, memory, and data. Diminishing returns are already visible. Recent reports from Bloomberg and The Information indicate that OpenAI, Google, and Anthropic are experiencing slower improvement rates despite massive investments in computing power and data.[12] Research from Epoch AI shows that training costs for frontier models have grown 2–3× per year over the past eight years, with today's most expensive models costing tens of millions of dollars to train—suggesting costs could exceed $1 billion by 2027.[13]
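The cost trajectory can be sanity-checked with simple compounding. The $100 million 2024 baseline below is an assumption for illustration, not a figure from the cited reports:

```python
# Compounding the reported ~2-3x annual growth in frontier training costs.
# The 2024 baseline is assumed at the "tens of millions" scale.

base_cost_2024 = 100e6   # dollars, assumed baseline
for factor in (2.0, 3.0):
    cost_2027 = base_cost_2024 * factor ** 3   # three years of growth
    print(f"at {factor}x/year: ${cost_2027/1e9:.1f}B by 2027")
```

At 2× per year the assumed baseline reaches roughly $0.8 billion by 2027; at 3× it passes $2.7 billion—so the "could exceed $1 billion by 2027" projection follows directly from the growth rate, whatever the exact starting figure.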
Future breakthroughs may depend less on bigger models and more on smarter architectures, efficiency gains, and hybrid reasoning systems that combine symbolic, causal, and neural elements.
Tentative Conclusions
Financially, AI may contain pockets of speculative excess, but the underlying infrastructure is real and strategically important. A correction, if it comes, will likely prune hype rather than kill the field.
Technically, AI delivers extraordinary capabilities in text-based reasoning and automation but remains narrow. General intelligence, common sense, and real-world generalisation remain unsolved.
Agentic AI could evolve into the backbone of digital operations—the next generation of workflow orchestration rather than an entirely new paradigm.
Sustainability will become the next frontier constraint: compute, power, and cooling capacity may shape AI's growth curve more than algorithms.
In short, AI is not a mirage—but neither is it omnipotent. It's a technological wave still searching for equilibrium between hype, utility, and feasibility.
"We may be living through both a bubble and a revolution—the two are not mutually exclusive."
References
[1]: Bloomberg. (October 2025). "AI Is Dominating 2025 VC Investing, Pulling in $192.7 Billion." https://www.bloomberg.com/news/articles/2025-10-03/ai-is-dominating-2025-vc-investing-pulling-in-192-7-billion
[2]: Crunchbase. (October 2025). "Q3 Venture Funding Jumps 38% As More Massive Rounds Go To AI Giants." https://news.crunchbase.com/venture/global-vc-funding-biggest-deals-q3-2025-ai-ma-data/
[3]: WebProNews. (October 2025). "Microsoft, Alphabet, Amazon Drive AI Supercycle with $364B Investments." https://www.webpronews.com/microsoft-alphabet-amazon-drive-ai-supercycle-with-364b-investments/; Microsoft On the Issues. (January 2025). "The golden opportunity for American AI." https://blogs.microsoft.com/on-the-issues/2025/01/03/the-golden-opportunity-for-american-ai/
[4]: IBM Think. (April 2025). "How smart is machine intelligence? AI aces games but fails basic reality check." https://www.ibm.com/think/news/mit-study-evaluating-world-model-ai; MLQ.ai. (2025). "Apple Research Exposes Limits of AI Reasoning Models." https://mlq.ai/news/apple-research-exposes-limits-of-ai-reasoning-models-ahead-of-wwdc-2025/; Misra, V. (June 2025). "The Illusion of Thinking: Why Language Models Can't Improve Themselves." https://medium.com/@vishalmisra/the-illusion-of-thinking-why-language-models-cant-improve-themselves-0c71a13811e2
[5]: International Energy Agency. (2024). "Data centres & networks." https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks
[6]: Nature. (April 2025). "Data centres will use twice as much energy by 2030—driven by AI." https://www.nature.com/articles/d41586-025-01113-z
[7]: International Energy Agency. (2024). "Energy demand from AI." https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
[8]: Sapient Intelligence. (June 2025). "Hierarchical Reasoning Model." https://arxiv.org/abs/2506.21734; ARC Prize. (2025). "The Hidden Drivers of HRM's Performance on ARC-AGI." https://arcprize.org/blog/hrm-analysis
[9]: Kojima, T., et al. (2022). "Large Language Models are Zero-Shot Reasoners." https://arxiv.org/abs/2205.11916
[10]: Wei, J., et al. (2022). "Emergent Abilities of Large Language Models." https://arxiv.org/abs/2206.07682; Center for Security and Emerging Technology. (April 2024). "Emergent Abilities in Large Language Models: An Explainer." https://cset.georgetown.edu/article/emergent-abilities-in-large-language-models-an-explainer/
[12]: Marketing AI Institute. (November 2024). "The Great AI Scaling Debate: Have We Hit a Wall?" https://www.marketingaiinstitute.com/blog/scaling-laws-ai-wall; Platformer. (November 2024). "AI companies hit a scaling wall." https://www.platformer.news/openai-google-scaling-laws-anthropic-ai/
[13]: Epoch AI. (January 2025). "How much does it cost to train frontier AI models?" https://epoch.ai/blog/how-much-does-it-cost-to-train-frontier-ai-models; Sevilla, J., et al. (May 2024). "The rising costs of training frontier AI models." https://arxiv.org/abs/2405.21015
[14]: n8n. (2025). "Advanced AI Workflow Automation Software & Tools." https://n8n.io/ai/; Hostinger. (September 2025). "What is n8n? Intro to a workflow automation tool." https://www.hostinger.com/tutorials/what-is-n8n
[15]: Gupta, M. (October 2025). "OpenAI AgentKit vs N8N: The best AI Workflow Builder?" https://medium.com/data-science-in-your-pocket/openai-agentkit-vs-n8n-the-best-ai-workflow-builder-da5eaf21aa10
