{"id":143103,"date":"2026-02-22T08:57:00","date_gmt":"2026-02-22T08:57:00","guid":{"rendered":"https:\/\/darkopavic.xyz\/?p=143103"},"modified":"2026-02-20T11:00:22","modified_gmt":"2026-02-20T11:00:22","slug":"i-built-ai-in-2001-now-the-world-needs-it-again","status":"publish","type":"post","link":"https:\/\/darkopavic.xyz\/index.php\/2026\/02\/22\/i-built-ai-in-2001-now-the-world-needs-it-again\/","title":{"rendered":"I Built \u201cAI\u201d in 2001 &#8211; Now the World Needs It Again"},"content":{"rendered":"\n<p id=\"ember2556\">In 2001, I built something that would clearly be called \u201cAI\u201d today \u2014 but from the symbolic, expert-systems era, not today\u2019s deep-learning era. The timing feels almost ironic, because many of the problems I targeted back then are returning now, only stronger, because AI is writing code at scale.<\/p>\n\n\n\n<p id=\"ember2557\">During my research at the University of Paderborn, I wrote my master\u2019s thesis. The title says it plainly:<\/p>\n\n\n\n<p id=\"ember2558\"><strong>\u201cDevelopment of rule-based components for error detection in static program analysis.\u201d<\/strong><\/p>\n\n\n\n<p id=\"ember2559\">What I built was a static analysis system \u2014 think of it as an automated code review \u2014 designed to do four things well:<\/p>\n\n\n\n<p id=\"ember2560\">It represents programs as a <strong>program model<\/strong> (a structured representation of code). It expresses that model as <strong>facts in Prolog<\/strong>. It uses <strong>rules (also in Prolog)<\/strong> to detect error classes. And it is <strong>extensible<\/strong>: you can add new rules and new \u201cerror knowledge\u201d without rewriting the system.<\/p>\n\n\n\n<p id=\"ember2561\">I explicitly addressed a problem that was already clear in 2001: static analysis \u2014 especially code review \u2014 lacked sufficient automation. I chose Prolog because it is strong for knowledge representation and reasoning. 
Prolog is still used today in high-value areas where logic, rules, and deterministic reasoning matter.<\/p>\n\n\n\n<p id=\"ember2562\">Architecturally, the solution was modular. I separated it into components such as general rules, language specification, model-specific rules, analysis rules, the software model, and an \u201cerror characteristic tree\u201d (<em>Fehlermerkmalbaum<\/em>). On top of that, I implemented classic analysis building blocks: slicing, graphs (control flow and data flow), metrics (including McCabe\u2019s cyclomatic complexity), anomaly handling, loop\/if handling, reporting, and more. I even built a helper tool in Visual Basic to manage a large rule base and avoid duplicated work.<\/p>\n\n\n\n<p id=\"ember2563\">That was not \u201cjust programming.\u201d That was <strong>knowledge engineering<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember2564\">Why this was \u201cAI\u201d \u2014 long before today\u2019s hype<\/h3>\n\n\n\n<p id=\"ember2565\">What I built maps cleanly to a classic expert system architecture:<\/p>\n\n\n\n<p id=\"ember2566\">The knowledge base is the set of rules and error classes. Facts are the program model represented as Prolog facts. The inference engine is Prolog\u2019s logical resolution and rule execution. Explainability comes naturally: rules are explicit, so you can trace why something was flagged. Extensibility is built in: add rules to grow expertise over time.<\/p>\n\n\n\n<p id=\"ember2567\">This is what \u201cAI\u201d meant for decades: symbolic reasoning. 
And in today\u2019s language, this approach fits well into what people now call <strong>neuro-symbolic<\/strong> systems \u2014 neural models combined with symbolic, rule-based reasoning.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember2568\">The part that feels very modern in 2026<\/h3>\n\n\n\n<p id=\"ember2569\">AI has created a new, very practical problem:<\/p>\n\n\n\n<p id=\"ember2570\">AI can generate code fast \u2014 but it can also generate bugs fast.<\/p>\n\n\n\n<p id=\"ember2571\">And some of today\u2019s \u201cbugs\u201d are not classic crashes. They look more like hallucinations: the code runs, tests may pass, but the logic is wrong and the output is incorrect.<\/p>\n\n\n\n<p id=\"ember2572\">The industry is re-learning something the symbolic era understood well:<\/p>\n\n\n\n<p id=\"ember2573\"><strong>Creation is cheap. Verification is expensive.<\/strong><\/p>\n\n\n\n<p id=\"ember2574\">That is why companies suddenly care so much about automated reviews, governance, traceability, proof of correctness, and audit-friendly reasoning. This is exactly where rule-based static analysis becomes valuable again.<\/p>\n\n\n\n<p id=\"ember2575\">In simple terms, my thesis is an early blueprint for a modern idea:<\/p>\n\n\n\n<p id=\"ember2576\">Let AI write code \u2014 but let rules verify it.<\/p>\n\n\n\n<p id=\"ember2577\">LLMs are great at proposing and creating. Rule engines are great at checking and proving. In the AI era, that combination becomes powerful.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember2578\">What my thesis \u201cpredicted\u201d (without trying to)<\/h3>\n\n\n\n<p id=\"ember2579\">Even if I did not frame it like this in 2001, the core ideas match what matters now:<\/p>\n\n\n\n<p id=\"ember2580\">I aimed for language independence \u2014 not locking analysis to one language. 
That\u2019s the same direction modern tooling takes with abstract representations, intermediate representations (IRs), and query engines.<\/p>\n\n\n\n<p id=\"ember2581\">My error knowledge base was essentially policy-as-code: rules defining what is acceptable and what is risky. That mindset is now central in security, privacy, safety, and AI governance.<\/p>\n\n\n\n<p id=\"ember2582\">And the modular structure resembles what many people now call \u201cagentic systems\u201d \u2014 except my \u201cagents\u201d were modules and rules, not chat-based assistants.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember2583\">Reviving this today: how it connects to modern AI<\/h3>\n\n\n\n<p id=\"ember2584\">A modern version could look like this:<\/p>\n\n\n\n<p id=\"ember2585\">An LLM proposes new rules or detects patterns quickly and creatively. A rule engine validates findings and produces deterministic, auditable results. A human approves high-risk changes as part of governance. The system learns where rules are missing and highlights coverage gaps.<\/p>\n\n\n\n<p id=\"ember2586\">A strong modern positioning would be:<\/p>\n\n\n\n<p id=\"ember2587\"><strong>From AI code generation to AI code verification \u2014 with explainable rule-based reasoning.<\/strong><\/p>\n\n\n\n<p id=\"ember2588\">Suddenly, what I worked on 25 years ago becomes a practical blueprint for the next generation of software engineering \u2014 especially in high-risk domains like payments, POS, security, and compliance.<\/p>\n\n\n\n<p id=\"ember2589\">I built an explainable, rule-driven intelligence layer for software quality \u2014 and that is exactly what the AI era needs if we want AI-generated software to stay safe and reliable.<\/p>\n\n\n\n<p id=\"ember2590\">Sometimes the future doesn\u2019t arrive out of nowhere. 
Sometimes it quietly waits inside an old thesis.<\/p>\n\n\n\n<p id=\"ember2591\">Maybe it\u2019s time to continue that work \u2014 this time with a clear mission: <strong>verifying AI output at scale.<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In 2001, I built something that would clearly be called \u201cAI\u201d today \u2014 but from the symbolic, expert-systems era, not today\u2019s deep-learning era. The timing feels almost ironic, because many of the problems I targeted back then are returning now, only stronger, because AI is writing code at scale. During my research at the&#8230;<\/p>\n","protected":false},"author":1,"featured_media":143104,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"episode_type":"","audio_file":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","filesize_raw":"","footnotes":""},"categories":[56],"tags":[83,84],"class_list":["post-143103","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology","tag-ai","tag-research"],"_links":{"self":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts\/143103","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/comments?post=143103"}],"version-history":[{"count":1,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts\/143103\/revisions"}],"predecessor-version":[{"id":143105,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp
\/v2\/posts\/143103\/revisions\/143105"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/media\/143104"}],"wp:attachment":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/media?parent=143103"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/categories?post=143103"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/tags?post=143103"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}