In 2001, I built something that would clearly be called “AI” today — but from the symbolic, expert-systems era, not today’s deep-learning era. The timing feels almost ironic: many of the problems I targeted back then are returning now, only stronger, because AI is writing code at scale.
During my research at the University of Paderborn, I wrote my master’s thesis. The title says it plainly:
“Development of rule-based components for error detection in static program analysis.”
What I built was a static analysis system — think of it as an automated code review — designed to do four things well:
1. It represents programs as a program model (a structured representation of code).
2. It expresses that model as facts in Prolog.
3. It uses rules (also in Prolog) to detect error classes.
4. It is extendable: you can add new rules and new “error knowledge” without rewriting the system.
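To make this concrete, here is a minimal sketch of the idea in Prolog. The fact and rule names are illustrative, not the actual schema from the thesis:

```prolog
% Program model expressed as Prolog facts (illustrative schema,
% not the actual model from the thesis).
variable(total).
variable(count).
assignment(total, line(12)).
use(count, line(15)).

% One analysis rule for one error class:
% a variable that is used but never assigned a value.
used_without_assignment(Var, Line) :-
    use(Var, Line),
    \+ assignment(Var, _).
```

Querying used_without_assignment(V, L) yields count at line 15. Adding a new error class means adding another rule; the rest of the system stays untouched.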
I explicitly addressed a problem that was already clear in 2001: static analysis — and code review in particular — was not automated enough. I chose Prolog because it is strong at knowledge representation and reasoning, and it is still used today in high-value areas where logic, rules, and deterministic reasoning matter.
Architecturally, the solution was modular. I separated it into components such as general rules, language specification, model-specific rules, analysis rules, the software model, and an “error characteristic tree” (Fehlermerkmalbaum). On top of that, I implemented classic analysis building blocks: slicing, graphs (control flow and data flow), metrics (including McCabe’s cyclomatic complexity), anomaly handling, loop/if handling, reporting, and more. I even built a helper tool in Visual Basic to manage a large rule base and avoid duplicated work.
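As a small illustration of the metrics building block, McCabe’s cyclomatic complexity can be computed directly over control-flow facts. The node/edge schema below is invented for the example, not taken from the thesis:

```prolog
% Illustrative control-flow graph of one function:
% node/1 is a basic block, edge/2 is a control-flow edge.
node(entry).  node(cond).  node(then).  node(else).  node(exit).
edge(entry, cond).
edge(cond, then).
edge(cond, else).
edge(then, exit).
edge(else, exit).

% McCabe cyclomatic complexity for a single connected graph:
% M = edges - nodes + 2.
mccabe(M) :-
    findall(A-B, edge(A, B), Edges), length(Edges, E),
    findall(X, node(X), Nodes),      length(Nodes, N),
    M is E - N + 2.
```

For this graph, mccabe(M) gives M = 2, matching the single if/else decision.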
That was not “just programming.” That was knowledge engineering.
Why this was “AI” — long before today’s hype
What I built maps cleanly to a classic expert system architecture:
- The knowledge base is the set of rules and error classes.
- Facts are the program model represented as Prolog facts.
- The inference engine is Prolog’s logical resolution and rule execution.
- Explainability comes naturally: rules are explicit, so you can trace why something was flagged.
- Extensibility is built in: add rules to grow expertise over time.
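Explainability can be sketched like this (hypothetical predicate names, reusing the kind of program-model facts shown earlier): each finding carries the rule that fired and the evidence it was derived from.

```prolog
% Program-model facts (illustrative).
assignment(temp,  line(20)).
assignment(total, line(21)).
use(total, line(22)).

% A finding records the error class, the location, and the reason,
% so a report can show exactly why something was flagged.
finding(dead_assignment, Var, Line,
        because(assigned_at(Var, Line), never_used(Var))) :-
    assignment(Var, Line),
    \+ use(Var, _).
```

Querying finding/4 returns temp at line 20 together with the because(...) term, which is the traceable justification for the report.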
This is what “AI” meant for decades: symbolic reasoning. And in today’s language, this approach fits well into what people now call neuro-symbolic systems — neural models combined with symbolic, rule-based reasoning.
The part that feels very modern in 2026
AI has created a new, very practical problem:
AI can generate code fast — but it can also generate bugs fast.
And some of today’s “bugs” are not classic crashes. They look more like hallucinations: the code runs, tests may pass, but the logic is wrong and the output is incorrect.
The industry is re-learning something the symbolic era understood well:
Creation is cheap. Verification is expensive.
That is why companies suddenly care so much about automated reviews, governance, traceability, proof of correctness, and audit-friendly reasoning. This is exactly where rule-based static analysis becomes valuable again.
In simple terms, my thesis is an early blueprint for a modern idea:
Let AI write code — but let rules verify it.
LLMs are great at proposing and creating. Rule engines are great at checking and proving. In the AI era, that combination becomes powerful.
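A deliberately tiny sketch of that division of labor, with hypothetical predicate names: the violation/2 facts would come from running the rule base over the model of an AI-generated change, and the gate passes only when nothing fired.

```prolog
% Findings produced by the rule base over the model of an
% AI-generated change (one hand-written fact stands in for the
% results that would normally be derived by the analysis rules).
:- dynamic violation/2.
violation(unchecked_return_value, line(8)).

% The change is accepted only if no rule reported a violation.
accept_change :-
    \+ violation(_, _).

% Otherwise, collect the reasons for rejection.
rejection_reasons(Reasons) :-
    findall(Rule-Location, violation(Rule, Location), Reasons).
```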
What my thesis “predicted” (without trying to)
Even if I did not frame it like this in 2001, the core ideas match what matters now:
I aimed for language independence — not locking the analysis to one programming language. That’s the same direction modern tooling takes with abstract program representations, intermediate representations (IRs), and query engines.
My error knowledge base was essentially policy-as-code: rules defining what is acceptable and what is risky (a small sketch follows below). That mindset is now central in security, privacy, safety, and AI governance.
And the modular structure resembles what many people now call “agentic systems” — except my “agents” were modules and rules, not chat-based assistants.
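To make the policy-as-code point concrete, here is a small, hypothetical policy rule; the function names and the policy itself are invented for illustration:

```prolog
% Policy: calls to functions classified as risky must be flagged.
risky_function(system_exec).
risky_function(md5_hash).

% Call sites extracted from the program model (illustrative facts).
call_site(system_exec, line(40)).
call_site(log_write,   line(41)).

% A policy violation is any call to a risky function.
policy_violation(risky_call(Fun), Line) :-
    call_site(Fun, Line),
    risky_function(Fun).
```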
If you revived this today, how it connects to modern AI
A modern version could look like this:
- An LLM proposes new rules or detects patterns quickly and creatively.
- A rule engine validates findings and produces deterministic, auditable results.
- A human approves high-risk changes as part of governance.
- The system learns where rules are missing and highlights coverage gaps.
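One way to keep that loop honest, sketched with hypothetical predicates: an LLM-proposed rule is promoted only if it flags every known-bad example and none of the known-good ones, and known-bad examples that nothing currently flags are reported as coverage gaps.

```prolog
% Labeled examples used to evaluate a candidate rule (illustrative).
known_bad(snippet_with_race_condition).
known_bad(snippet_with_leak).
known_good(snippet_reviewed_ok).

% These two facts stand in for what an LLM-proposed rule flags
% when it is run over the labeled examples.
candidate_flags(snippet_with_race_condition).
candidate_flags(snippet_with_leak).

% Promote the candidate only if it catches every bad example
% and raises no false alarm on the good ones.
candidate_accepted :-
    forall(known_bad(Bad),   candidate_flags(Bad)),
    forall(known_good(Good), \+ candidate_flags(Good)).

% Findings of the existing rule base (illustrative).
flagged(snippet_with_leak).

% A coverage gap is a known-bad example that no existing rule flags.
coverage_gap(Example) :-
    known_bad(Example),
    \+ flagged(Example).
```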
A strong modern positioning would be:
From AI code generation to AI code verification — with explainable rule-based reasoning.
Suddenly, what I worked on 25 years ago becomes a practical blueprint for the next generation of software engineering — especially in high-risk domains like payments, POS, security, and compliance.
I built an explainable, rule-driven intelligence layer for software quality — and that is exactly what the AI era needs if we want AI-generated software to stay safe and reliable.
Sometimes the future doesn’t arrive out of nowhere. Sometimes it quietly waits inside an old thesis.
Maybe it’s time to continue that work — this time with a clear mission: verifying AI output at scale.