
Codemotion Magazine

We code the future. Together


Arnaldo Morena · May 6, 2026 · 6 min read

Ignorance is not a reason to start prompting…

AI/ML

“Insufficient data, human decision required…”

Every time the computer delivered that terse printout to Engineer Kano, the camera would cut immediately to Commander Koenig’s baffled face, waiting for help from a frowning Professor Bergman — both of them under the watchful eye of Dr. Russell, quietly hoping neither man would have a cardiac event from the stress.


Say what you will about the computer on Space: 1999 — it was a glorified stack of blinking hardware — but it had data in abundance, having drifted through half an uncharted galaxy collecting it.

Which left the seven-year-old version of me with a nagging suspicion: what if they’re just winging the data collection?


Fast forward to today.

Over the past two years, artificial intelligence has become the centerpiece of global digital transformation. Enterprises, public institutions and startups are racing to deploy AI assistants, automation pipelines, conversational systems and generative platforms. This is no longer a conversation about innovation — it’s a conversation about survival and competitive advantage.

And yet, behind the initial enthusiasm, many organizations are running headfirst into a problem that is both very modern and very old: having access to the best AI models available does not automatically produce real results. Demos work beautifully. Production rarely does.

The reason is straightforward: the real bottleneck in enterprise AI is not the model. It’s the data.

There’s a formula that captures this well:

AI Success = Solution Quality × Adoption Scale

It’s that second factor — the ability to actually embed AI into operational processes — that represents the defining challenge of this moment.
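The multiplicative form is the whole point: a weak factor on either side drags the product down. A toy reading in Python, with both factors on an assumed 0–1 scale (the scale is an illustration, not from the article):

```python
def ai_success(solution_quality: float, adoption_scale: float) -> float:
    """Toy reading of the formula, with both factors on a 0-1 scale."""
    return solution_quality * adoption_scale

# A brilliant demo that nobody embeds in real workflows...
demo_only = ai_success(0.9, 0.1)
# ...loses to a merely good solution the whole organization adopts.
widely_adopted = ai_success(0.7, 0.8)
assert widely_adopted > demo_only
```

Because the factors multiply rather than add, doubling adoption does as much for the outcome as doubling model quality.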


This isn’t just an Italian problem. But it hits harder here.

You’ll often hear that companies are experimenting heavily without managing to truly integrate AI into their operations. In reality, this isn’t a uniquely Italian phenomenon.

Across the US and Europe, many organizations are living through what analysts have started calling “pilot purgatory”: dozens of proof-of-concept projects that never make it to production. The difference is that many large international corporations had already spent years investing in cloud infrastructure, data governance, application integration and technical modernization. When AI arrived, they had fertile ground to build on.

Italy’s productive landscape, by contrast, amplifies the problem. The economy is built largely on SMEs — often running legacy software, artisanal workflows and systems that grew organically over decades without any coherent integration strategy. In many of these companies, multiple ERPs coexist, Excel spreadsheets serve as operational databases, documentation is scattered, knowledge bases are incomplete and core processes resist standardization.

AI, in this context, acts as a brutal stress test of your digital organization. Processes that functioned yesterday — held together by human judgment and tribal knowledge — become unmanageable the moment an AI agent or automated system tries to interpret them. The real question, then, isn’t “do we have AI?” It’s “have we built the conditions for AI to actually work?”


The real competitive advantage will be in data, not in models.

Most AI conversations today fixate on models. Which one is smarter, faster, more accurate. I’ve personally had to referee heated arguments — sometimes bordering on physical — between advocates of the LLM that performs best on culinary tasks and fans of the one that apparently excels at interpreting dog behavior.

But the market is shifting fast.

Within a few years, virtually every company will have access to extraordinarily powerful models through standardized platforms and increasingly commoditized services. The differentiator won’t be having AI — it’ll be having better data than everyone else, and knowing how to organize it. Feeling a sense of déjà vu?

In practice, models are trending toward interchangeability. What will make the difference is the quality of the context they receive. An AI assistant fed fragmented data produces fragmented answers. An AI agent running on incoherent information doesn’t accelerate productivity — it accelerates mistakes.

This is why the companies seeing the most significant returns aren’t necessarily the ones running the most sophisticated models. They’re the ones that invested early in building a solid data foundation.


The “single source of truth” becomes essential.

One of the most important concepts in the new enterprise AI architecture is the single source of truth: a unified, consistent, governed data layer. In practical terms, this means consolidating business information into a shared ecosystem — eliminating duplication, inconsistency and organizational silos.

This is where many companies are discovering the true cost of years of accumulated technical debt.

Consider a typical mid-sized manufacturer. The CRM holds one version of customer data. The ERP holds a different version. The support team runs on separate tickets. Documentation lives across shared drives and internal repositories. Operational data survives inside Excel files maintained by hand. When an AI system is connected to this ecosystem, the result is predictable: inconsistent answers, incomplete context, brittle automations.

More mature organizations are tackling this through progressive consolidation. Data gets centralized into shared platforms, semantically normalized, catalogued and made accessible through standardized APIs and pipelines. Only then do AI agents get integrated into real workflows. At that point, AI stops being a chatbot and becomes an operational layer on top of your business processes. That’s where you can actually measure ROI.
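A minimal sketch of what progressive consolidation means in practice: two hypothetical records for the same customer, one from a CRM and one from an ERP, merged into a single governed record with an explicit precedence rule instead of silent duplication. All system names, fields and values here are invented for illustration:

```python
# Hypothetical records for the same customer, as they might live in two silos.
crm_record = {"customer_id": "C-1042", "name": "Acme S.p.A.",
              "email": "info@acme.example", "address": None}
erp_record = {"customer_id": "C-1042", "name": "ACME SPA",
              "address": "Via Roma 1, Milano", "vat": "IT01234567890"}

def consolidate(primary: dict, secondary: dict) -> dict:
    """Single-source-of-truth merge: the primary system wins on conflicts,
    the secondary fills the gaps. A real platform would also log provenance."""
    merged = dict(secondary)
    merged.update({k: v for k, v in primary.items() if v is not None})
    return merged

golden_record = consolidate(crm_record, erp_record)
# One consistent record the AI layer can be pointed at,
# instead of two silos giving two different answers.
```

The interesting design decision is not the merge itself but the precedence rule: deciding, per field, which system is authoritative is exactly the governance work that turns scattered data into a single source of truth.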


Workflow integration matters more than the demo.

One of the most common mistakes is treating AI as a tool that sits alongside daily work rather than inside it. Value emerges when intelligence is embedded directly into existing processes. An isolated AI assistant can impress in a demo. An AI system woven into real workflows can transform an organization.

Think about a technical support team. A generic chatbot might shave a few tickets off the queue. But an AI agent connected to ticket history, technical documentation, operational logs and customer data can automatically retrieve context, suggest consistent solutions, flag recurring issues and support escalation. The difference is not marginal — it’s structural.
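What "connected to ticket history, documentation and logs" can look like at its simplest is a retrieval step that assembles context from several sources before any model is called. The data stores and fields below are invented for illustration; a real agent would sit on a vector index and governed APIs rather than in-memory dictionaries:

```python
# Invented in-memory stand-ins for the silos a support agent would query.
TICKETS = [
    {"id": 1, "customer": "C-1042", "text": "Export to CSV fails with timeout"},
    {"id": 2, "customer": "C-1042", "text": "Timeout on large report export"},
]
DOCS = {"export": "Exports over 10k rows should use the async endpoint."}

def build_context(customer: str, issue: str) -> str:
    """Gather prior tickets and matching doc snippets into one prompt context."""
    history = [t["text"] for t in TICKETS if t["customer"] == customer]
    doc_hits = [v for k, v in DOCS.items() if k in issue.lower()]
    return "\n".join(["Previous tickets:"] + history
                     + ["Relevant docs:"] + doc_hits)

context = build_context("C-1042", "CSV export timing out again")
# This assembled context, not the model choice, is what lets the agent
# spot the recurring issue and suggest a consistent fix.
```

Note that the model never appears in this sketch: the structural difference the article describes lives entirely in the retrieval and assembly step.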

The same principle applies to software development, cybersecurity, financial analysis, supply chain management and document handling. Effective enterprise AI isn’t the most spectacular. It’s the one that quietly removes friction from daily operations.


Does it still make sense to become a Data Engineer or Data Analyst?

In this landscape, a reasonable question emerges: is specializing in data still worth it?

The answer is yes — more than ever.

Many assumed AI would erode the value of roles like Data Analyst or Data Engineer. The opposite is happening. The more powerful models become, the more valuable it is to have people who can organize, govern and make data reliable. In an AI-first world, data infrastructure quality is becoming the primary competitive factor. Organizations need people who can build robust pipelines, orchestrate information flows, ensure semantic quality and monitor the full data lifecycle.
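"Ensuring semantic quality" can be made concrete with a tiny validation gate of the kind a Data Engineer would put in front of a pipeline. The rules and field names are illustrative assumptions, not a standard:

```python
import re

def validate_row(row: dict) -> list[str]:
    """Return a list of quality violations; an empty list means the row may pass.
    Checks completeness, format, and basic semantic sanity."""
    errors = []
    if not row.get("customer_id"):
        errors.append("missing customer_id")
    if row.get("vat") and not re.fullmatch(r"IT\d{11}", row["vat"]):
        errors.append("malformed Italian VAT number")
    if row.get("email") and "@" not in row["email"]:
        errors.append("malformed email")
    return errors

bad_row = {"customer_id": "", "vat": "12345", "email": "not-an-email"}
print(validate_row(bad_row))
# Rows that fail are quarantined for review, not silently fed to the AI layer.
```

The gate is trivial; the discipline of running every flow through one is the part that makes downstream AI answers trustworthy.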

The roles most likely to grow over the next few years are precisely those involved in building the operational context for AI:

  • Data Engineer
  • AI Engineer
  • Data Architect
  • MLOps Engineer
  • Data Governance Specialist

AI doesn’t eliminate the value of data. It makes data more central than it has ever been.


The risk for SMEs — everywhere.

For small and medium enterprises, the moment is particularly delicate. Many are adopting AI tactically and in fragments: a chatbot here, an isolated automation there, a text generator, a productivity assistant. It’s a natural first phase. But without an architectural strategy, it risks deepening the very fragmentation it was meant to solve.

The companies that will actually transform are likely those willing to invest in three things simultaneously: data consolidation, workflow integration and people development. That last one is consistently underestimated. AI doesn’t simply replace human work — it changes the way people interact with systems. What grows is the need for hybrid competencies: professionals who can navigate data, process design, automation and business context at the same time.


The real transformation is organizational.

The shift underway is not fundamentally about technology. It’s about how organizations build and preserve internal knowledge. For years, technical debt was treated as an IT problem. With AI, it’s becoming a strategic one — because intelligent agents amplify everything: incoherent information, chaotic workflows, duplication, incomplete documentation, non-standardized processes.

AI is forcing a clear choice on every organization: keep accumulating disconnected intelligent tools, or invest seriously in building a coherent, governed data ecosystem.

In the long run, value won’t come from the best prompt. It’ll come from the ability to turn business data into scalable operational knowledge.

Remember that when you’re building your next company or preparing your organization for a serious AI deployment: a brave commander, a computer expert, a brilliant doctor and a physics genius don’t amount to much if there’s no one responsible for collecting the data and keeping it in order.

The fact that in Space: 1999, that responsibility fell to Sandra Benes — whom the seven-year-old version of me was hopelessly in love with — probably influenced my career more than I’d care to admit.



Tagged as: ai adoption

Arnaldo Morena
My first steps into the world of computers were the beloved BASIC programs I wrote on a ZX Spectrum in the early '80s. In the '90s, while studying economics, I was often asked to help people use personal computers for everyday business: it's been a one-way ticket ever since. My first and lasting love was managing data, so I started using MS Access and SQL Server to build databases and to turn information into reports with tons and tons of Visual Basic code. My web career started with ASP and ASP.NET development, then I began to…
