November 26, 2025 | 15:24
Reading time: approx. 5 min

When AI Meets a Crumbling Foundation

For many, AI is the great promise of salvation: more efficiency, more ease, more future. Everyone is talking about it, so it must be true. And so many are jumping on the bandwagon, which, seen from the outside, looks like one big party.

My impression is that the discussion in medium-sized companies tends to focus more on opportunities and less on realities. There is a lack of honest assessment of the situation. And by that I don’t just mean the technology, but above all the non-technical governance.

What is governance and why do many people struggle with it?

In the context of an information security management system (ISMS), governance refers to the overarching framework of policies, responsibilities, processes and control mechanisms that ensures that information security is systematically planned, implemented, monitored and continuously improved.1 If an ISMS is the tactic, then governance is the greater strategy.

Many companies today do not even have governance for classic, fully deterministic systems. In other words, systems whose behaviour should in theory be completely predictable and controllable.

How did I come to this conclusion? A brief look at Bitkom’s figures on security incidents is convincing: 262 billion euros in total damage to the German economy, of which over 22 billion occurred last year.2 Every day, typical Windows-centric corporate networks built on the well-known triad of Windows, Office and AD are literally ‘switched off’. Many of them have specialist staff, risk assessments, IT guidelines, certificates and audits, as well as established ISMS structures. Quite obviously, all in vain.

Quick self-test: if you have to answer ‘no’ to one or more of the following statements, you should definitely read on:

  • You have a complete overview of all installed software, including dependencies and individual libraries.
  • You receive a notification when an unknown device appears.
  • You have no shadow IT in the form of private end devices.
  • You receive an immediate warning when a specific EventID is triggered, e.g. when users or logs are deleted (a minimal sketch of such a check follows this list).
  • You can determine within a few minutes which contractor worked remotely on which of your systems between 8 and 9 a.m. six months ago, and name that person.3
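To make the EventID item concrete, here is a minimal sketch of what such a check could look like. It assumes a Windows host and shells out to the built-in wevtutil tool; EventIDs 1102 (audit log cleared) and 4726 (user account deleted) are the standard Security-log IDs for exactly the deletions mentioned above. The on-demand query and the print-based alert are deliberate simplifications; a real setup would subscribe to events and forward alerts to a SIEM.

import subprocess

# Standard Windows Security-log EventIDs:
#   1102 = the audit log was cleared, 4726 = a user account was deleted
WATCHED = {1102: "audit log cleared", 4726: "user account deleted"}

def newest_events(event_id: int, count: int = 5) -> str:
    """Fetch the newest `count` Security-log entries for one EventID via
    the built-in wevtutil CLI (must run with administrative rights)."""
    xpath = f"*[System[(EventID={event_id})]]"
    result = subprocess.run(
        ["wevtutil", "qe", "Security", f"/q:{xpath}",
         f"/c:{count}", "/rd:true", "/f:text"],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    for eid, meaning in WATCHED.items():
        events = newest_events(eid)
        if events:
            # Stub: in practice this would go to a SIEM or on-call channel.
            print(f"ALERT - EventID {eid} ({meaning}):\n{events}\n")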

It’s great to have you on board.
Now we’re really getting started.

Non-deterministic AI on top

What characterises a non-deterministic system?

  • Two identical inputs can produce different outputs (a toy example follows this list).
  • Errors do not arise from fixed rules or code, but from probability distributions.
  • Security gaps arise as a by-product of model behaviour.
  • Reproducibility and auditability are limited or impossible.
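A toy illustration of the first two points, with no real model involved: the following sketch draws an answer from a fixed probability distribution, the way an LLM with a temperature above zero samples its next token. The vocabulary and the weights are invented for illustration; the point is only that the input is identical on every call while the output is not.

import random

# A fixed 'model': the prompt never changes, the output is drawn from a
# probability distribution -- as with an LLM sampling at temperature > 0.
VOCAB = ["approve", "reject", "escalate"]
WEIGHTS = [0.6, 0.3, 0.1]  # hypothetical probabilities

def answer(prompt: str) -> str:
    # The prompt is deliberately ignored: even a constant input
    # yields varying outputs, because only the sampling decides.
    return random.choices(VOCAB, weights=WEIGHTS, k=1)[0]

print([answer("Should this invoice be paid?") for _ in range(5)])
# e.g. ['approve', 'approve', 'reject', 'approve', 'escalate']

Any ‘error’ here does not come from fixed rules or code; it is a property of the distribution, which is exactly what makes reproducibility and auditing so hard.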

So we barely have deterministic systems under control, and now we’re adding non-deterministic AI on top of that. A system whose behaviour changes depending on input, context, questioner or relative moon humidity.

That’s not courageous.
It’s not modern either.
It’s negligent.

A quote from Eva Wolfangel for a little hope, source: https://media.ccc.de/v/god2025-56472-keynote-code-dark-age#t=2162

The governance gap

This is precisely where the central structural gap opens up in every area where AI is currently being tried out. At best, we have the knowledge, methods and tools for classic, deterministic systems. What we need, however, are approaches for non-deterministic systems.

What do AI manufacturers and vendors offer us beyond an input slot for an LLM or ready-made results? Not much, according to the BSI white paper:4

  • Little transparency about source code, model architecture or training data
  • Few mechanisms to limit model behaviour or AI bias
  • Diffusion of responsibility in the event of defects or wrong decisions

What remains are international reference standards from the OECD and the EU AI Act, which has been binding since 2024 and is likely known to very few. Companies are already required to document risk assessments and the AI competence of their employees.

At least the Heise Academy offers a webinar on responsible AI governance.

Conclusion

We manage modern systems that we don’t fully understand with tools that belong to a different era. AI isn’t the problem here. The lack of governance is.

Before companies unleash AI agents on customers, employees or internal processes, or use vibe coding5 to create products, they need:

  • Clean IT and security as a foundation
    Not as a side issue, but as a non-negotiable basic requirement.

  • A governance model for AI
    With the risk assessment and proof of employee competence already required by law, as well as documentation of traceability, clear boundaries and responsibilities.

  • Mandatory labelling of AI results and products
    So that it remains clear where AI is involved. The aforementioned EU AI Act limits this to images; in an IT context, source code or documentation would be more interesting (one possible convention is sketched after this list).
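What such labelling could look like for source code is an open question; no standard exists. Purely as a sketch, here is one invented convention: every file carries a machine-readable provenance comment, and a small checker fails when the tag is missing. Both the tag format and the checker are hypothetical, not taken from the EU AI Act or any other source.

import pathlib
import re
import sys

# Invented provenance tag -- one possible convention, not a standard:
#   # AI-Assisted: <tool>, reviewed-by: <human>
TAG = re.compile(r"^#\s*AI-Assisted:\s*[^,]+,\s*reviewed-by:\s*.+$")

def has_provenance_tag(path: pathlib.Path) -> bool:
    """True if the file declares its AI provenance within the first 10 lines."""
    head = path.read_text(encoding="utf-8").splitlines()[:10]
    return any(TAG.match(line) for line in head)

if __name__ == "__main__":
    missing = [p for p in map(pathlib.Path, sys.argv[1:])
               if not has_provenance_tag(p)]
    for p in missing:
        print(f"{p}: no AI provenance tag found")
    sys.exit(1 if missing else 0)

Run as a pre-commit hook over changed files, such a check would at least make the question ‘where is AI involved?’ answerable for a codebase; whether the tag belongs in comments, commit metadata or an SBOM is exactly the kind of decision a governance model has to make.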

When Microsoft CEO Satya Nadella claims that 30% of code is already AI-generated,6 this has implications for software quality.7

At the moment, many are loudly celebrating on a fast-moving train. It is unclear where the journey is headed and when the next stop will be. The digital future of entire companies is currently being built on rubble.

The few examples I know of personally where AI models are in use had to be readjusted afterwards at great effort and expense. MIT estimates that 95% of all AI projects fail due to a ‘weak foundation’.8

The issue is rarely the algorithm. The real problem is the weak foundations beneath them: untrusted data, identity systems that aren’t secure and infrastructure that cannot meet new demands. Without these basics, projects collapse before they create value.

I couldn’t explain it better.

Yours,
Tomas Jakobs
