{"id":143176,"date":"2026-04-25T06:46:21","date_gmt":"2026-04-25T06:46:21","guid":{"rendered":"https:\/\/darkopavic.xyz\/?p=143176"},"modified":"2026-04-25T09:00:36","modified_gmt":"2026-04-25T09:00:36","slug":"here-is-why-ai-will-not-kill-software-but-it-will-kill-weak-software","status":"publish","type":"post","link":"https:\/\/darkopavic.xyz\/index.php\/2026\/04\/25\/here-is-why-ai-will-not-kill-software-but-it-will-kill-weak-software\/","title":{"rendered":"Here Is Why AI Will Not Kill Software \u2014 But It Will Kill Weak Software"},"content":{"rendered":"\n<p>Imagine a software company that does everything right. It sees a real market gap, designs a compelling new product, and uses a mix of human engineering and AI-assisted coding to get to market fast. The launch goes well. Customers notice. The market validates the idea. Then, only a few months later, a competitor rebuilds the visible parts of the product with even better AI tools, a leaner team, and a lower cost base. Suddenly the original innovator is no longer competing against yesterday\u2019s software economics. It is competing against a world in which software creation itself is getting cheaper, faster, and more contestable.<\/p>\n\n\n\n<p>That scenario is no longer theoretical. Stanford\u2019s <a href=\"https:\/\/hai.stanford.edu\/ai-index\/2025-ai-index-report\">AI Index 2025<\/a> reports that the inference cost for a system performing at the level of GPT-3.5 fell by more than 280-fold between November 2022 and October 2024, while open-weight models sharply narrowed the performance gap with closed models. An <a href=\"https:\/\/arxiv.org\/abs\/2302.06590\">experimental study on GitHub Copilot<\/a> found that developers with AI assistance completed a coding task 55.8% faster than a control group. 
And an <a href=\"https:\/\/www.oecd.org\/content\/dam\/oecd\/en\/publications\/reports\/2025\/06\/the-effects-of-generative-ai-on-productivity-innovation-and-entrepreneurship_da1d085d\/b21df222-en.pdf\">OECD review of generative AI and innovation<\/a> concludes that AI can lower entry barriers, reduce time-to-market, and let even non-technical users create functional prototypes.<\/p>\n\n\n\n<p>That is the new condition of competition. AI does not make innovation meaningless. But it does make shallow innovation easier to copy. In that world, the main strategic question is no longer simply, \u201cHow fast can we build?\u201d It is, \u201cHow do we innovate in a way that still lets us capture value after others can copy the surface of what we made?\u201d<\/p>\n\n\n\n<p>The answer, in my view, is that AI is not ending innovation. It is raising the bar for what counts as a defensible innovation. In the next phase of software, durable advantage will belong less to the company that merely ships first and more to the company that learns faster, embeds deeper, earns more trust, controls more of the workflow, and stands behind outcomes that matter.<\/p>\n\n\n\n<p>The question, then, is not whether AI helps competitors. It obviously does. The real question is where value still lives after code itself becomes easier to generate.<\/p>\n\n\n\n<p>That is where strategy begins.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Code Is Becoming Abundant<\/h1>\n\n\n\n<p>For two decades, most software strategy quietly depended on one underlying assumption: that building high-quality software was sufficiently difficult that the codebase itself could act as a moat. That assumption is breaking down. 
A larger share of the value chain is moving away from raw code production and toward the layers around it: problem selection, operational integration, domain judgment, proprietary data, customer trust, and commercial design.<\/p>\n\n\n\n<p>The <a href=\"https:\/\/www.oecd.org\/content\/dam\/oecd\/en\/publications\/reports\/2025\/06\/the-effects-of-generative-ai-on-productivity-innovation-and-entrepreneurship_da1d085d\/b21df222-en.pdf\">OECD\u2019s 2025 review<\/a> is especially useful here because it separates productivity gains from strategic gains. It shows that generative AI can automate tasks, speed up prototyping, foster creativity, and lower barriers to entrepreneurial entry. But it also notes that AI\u2019s effectiveness depends on the user\u2019s experience, the nature of the task, and the quality of human-AI collaboration. This is a crucial distinction. AI can reduce the cost of building something. It does not automatically reduce the cost of building something meaningful, reliable, or valuable.<\/p>\n\n\n\n<p>That distinction becomes even clearer in the innovation evidence cited by the OECD. AI often improves idea generation, but it can also make outputs less distinctive and more similar to one another. In one set of findings summarized by the OECD, AI-assisted creative work becomes more homogeneous even as it becomes more polished. In another, large language models can generate relevant ideas but still tend to produce generic or less diverse outputs without strong human framing. That matters because it means AI can raise the supply of software ideas while simultaneously reducing the uniqueness of what gets produced.<\/p>\n\n\n\n<p>In plain language, AI makes it easier to build, but it also makes it easier for many companies to build in the same direction. That is why innovation in the AI era cannot be understood only as faster product creation. 
It has to be understood as a race to create a position that remains valuable after the code is no longer rare.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">The First-Mover Myth Gets Harder in \u2018Rough Waters\u2019<\/h1>\n\n\n\n<p>Business culture still loves the pioneer story. Be first. Define the category. Capture the market. But the research tradition around first-mover advantage has always been more cautious than managerial folklore. In <a href=\"https:\/\/hbr.org\/2005\/04\/the-half-truth-of-first-mover-advantage\">The Half-Truth of First-Mover Advantage<\/a>, Fernando Suarez and Gianvito Lanzolla argue that early entry is \u201cfar less than a sure thing,\u201d and that durable first-mover advantage depends heavily on the pace of both technology change and market evolution.<\/p>\n\n\n\n<p>Their most useful idea for the AI era is the distinction between calm waters and rough waters. In calm waters, where technology and the market both evolve slowly, pioneers have time to educate customers, improve the product, and defend their position. In rough waters, where technology and the market both move quickly, durable first-mover advantage becomes much harder to secure. That is a near-perfect description of large parts of software today. The tools change quickly. Customer expectations change quickly. Interfaces change quickly. Pricing changes quickly. Distribution patterns change quickly. The pioneer pays for learning, category education, and early mistakes, while the follower can observe, simplify, and rebuild.<\/p>\n\n\n\n<p>That does not mean first movers are doomed. It means that being first is not the same as being safe. The durable advantage no longer comes from the launch date. It comes from what the first mover does with the time it has before imitation arrives. The real advantage is not launch velocity. It is learning velocity.<\/p>\n\n\n\n<p>This is where many software leaders still think too narrowly. 
A fast launch is useful, but only if it creates an early cycle of customer learning, workflow insight, operational data, and market trust that imitators will struggle to compress into the same time window. In AI-heavy markets, the first mover\u2019s prize is no longer the product alone. It is the right to learn first and to turn that learning into a harder-to-copy system.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">The Real Prize Has Moved from Product to Capture<\/h1>\n\n\n\n<p>The most important older strategy paper for understanding this moment may still be David Teece\u2019s <a href=\"https:\/\/www.edegan.com\/pdfs\/Teece%20%281986%29%20-%20Profiting%20From%20Technological%20Innovation%20Implications%20For%20Integration%20Collaboration%20Licensing%20And%20Public%20Policy.pdf\">Profiting from Technological Innovation<\/a>. Teece\u2019s argument was simple and durable: when imitation is easy, the profits from innovation may flow not to the original inventor, but to the owners of complementary assets. Those assets can include distribution, customer relationships, service networks, manufacturing, installed base, integration capability, and other advantages that sit around the invention rather than inside it.<\/p>\n\n\n\n<p>AI makes Teece\u2019s insight more relevant, not less. If visible product features can be rebuilt quickly, then value moves toward the things that are harder to copy: operational embedding, workflow ownership, distribution, customer trust, domain-specific judgment, contractual position, and data accumulated through use. In that sense, AI does not eliminate innovation. It changes the place where innovation must be defended.<\/p>\n\n\n\n<p>The same logic applies to business models. 
A <a href=\"https:\/\/www.hbs.edu\/ris\/Publication%20Files\/11-003_0f4a1eff-854b-4269-b1a8-e2531cdcd3fa.pdf\">Harvard Business School paper on business model innovation and imitation<\/a> makes a related point: business model innovations, like product and process innovations, can also be imitated. This is important because many software firms respond to feature commoditization by saying, in effect, \u201cThen our business model will be the differentiator.\u201d Sometimes that is true. But it is not automatically true. If the business model is visible and the operational design is not deeply embedded, it can be copied too.<\/p>\n\n\n\n<p>That is why the key question in the AI era is no longer just \u201cCan we build it?\u201d It is \u201cCan we build it in a way that lets us capture value after others can see it?\u201d<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">The New Moats: Where Defensibility Still Lives<\/h1>\n\n\n\n<p>The first defensible layer is workflow embedding. When software becomes the default way a customer runs a mission-critical process, replacing it is no longer a simple feature comparison. It becomes a change-management problem. A clone may replicate the screen, but it still has to replicate the approvals, the human habits, the exception paths, the integrations, and the operational confidence built around the incumbent product. Software that sits inside real work is harder to dislodge than software that merely decorates a process.<\/p>\n\n\n\n<p>The second defensible layer is context. In many software categories, the most valuable asset is not the model or the codebase but the accumulated understanding of a customer\u2019s specific environment: its data quirks, approval culture, exception history, process timing, and role-specific patterns. This kind of memory compounds with use. A late entrant can copy functionality, but it cannot instantly recreate years of context. 
That is why the most strategic AI products will increasingly act less like static software and more like living systems that learn the oddly specific shape of the work they support.<\/p>\n\n\n\n<p>The third defensible layer is edge-case density. The easiest 80% of a workflow is becoming easier to replicate. The hardest 20%, the contradictory rules, partial failures, strange local exceptions, and high-consequence cases, remains where much of the durable advantage lives. AI can help write the basic path quickly. But real software advantage often sits in the rare cases that break na\u00efve systems. This is especially true in domains where a single mistake can trigger legal, financial, or operational harm.<\/p>\n\n\n\n<p>The fourth defensible layer is trust and accountability. Here the picture splits sharply. In low-consequence consumer software, users may increasingly tolerate rough edges if the tool is cheap and useful. In higher-stakes environments, especially where there are audits, legal consequences, customer harm, or board-level exposure, software still needs someone who will stand behind the result. The <a href=\"https:\/\/nvlpubs.nist.gov\/nistpubs\/ai\/nist.ai.100-1.pdf\">NIST AI Risk Management Framework<\/a> stresses that AI risk management is central to trustworthy deployment and that accountable organizational practices matter across the AI lifecycle. The <a href=\"https:\/\/artificialintelligenceact.eu\/article\/14\/\">EU AI Act\u2019s human oversight requirements for high-risk systems<\/a> point in the same direction. In high-risk use cases, regulators are not moving toward a world of pure machine autonomy with no accountable structure around it. They are moving toward documentation, oversight, and the ability to understand limits, anomalies, and failure modes.<\/p>\n\n\n\n<p>The fifth defensible layer is distribution and ecosystem position. There is a second AI-era risk that many application builders are still underestimating: platform encroachment. 
<a href=\"https:\/\/www.brookings.edu\/articles\/what-happens-when-ai-companies-compete-with-their-customers\/\">A March 2026 Brookings analysis<\/a> notes that large AI model companies are moving into the application space and beginning to compete with the software developers that build on top of them. If your product is only a thin wrapper over another company\u2019s model, your danger is not just horizontal imitation from peers. It is vertical pressure from the platform layer above you. That makes ecosystem position and customer ownership more strategically important than before.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Why Compliance and Fiscalization Are Different<\/h1>\n\n\n\n<p>This is where the discussion becomes more concrete. Not all software categories are equally exposed to the same AI economics. If the software is not business-critical and not connected to legal or financial consequences, the vendor may face an even harsher commoditization problem than the general market. When the downside of failure is small, buyers become more willing to accept cheap, fast, AI-generated alternatives. In those categories, the question \u201cWhy you?\u201d becomes much harder to answer. Design, convenience, habit, and distribution may matter, but the penalty for switching is often low.<\/p>\n\n\n\n<p>Compliance software, tax software, and fiscalization software are different. In those categories, software does not just support a workflow. It participates in a chain of responsibility. It touches records, reporting, auditability, transaction logic, and legal interpretation. If a music recommendation app fails, a user is annoyed. If fiscal software fails, a retailer can face penalties, reporting errors, operational disruption, or damaged trust with tax authorities.<\/p>\n\n\n\n<p>That does not make regulated software immune to AI-driven imitation. It does, however, change where the value sits. 
In high-stakes software, value will increasingly sit in the combination of domain expertise, legal interpretation, update discipline, edge-case handling, traceability, and the vendor\u2019s willingness to stand behind outcomes. AI may commoditize more of the coding layer, but in regulated software it may actually increase the value of governance, accountability, and operational proof.<\/p>\n\n\n\n<p>This is one reason compliance and fiscalization vendors should not panic in the wrong way. Their danger is not that AI will suddenly make their work trivial. Their danger is that a new wave of vendors will use AI to produce convincing-looking products without equivalent depth, legal understanding, or operational resilience. That means the strategic response is not to deny AI. It is to move even more decisively toward the elements that are hardest to fake: reliability, documentation, exception logic, interpretability, regulatory change management, and trust.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Should You Wait, Watch, and Then Copy Better?<\/h1>\n\n\n\n<p>The temptation to become a fast follower is understandable. Why pay for category education if someone else will do it? Why take technical risk first if better AI tools will be available a few months later? Why not wait, observe what customers actually want, and then build a cleaner version with lower cost?<\/p>\n\n\n\n<p>In some markets, that logic is sound. Fast-following has always been a valid strategic choice, especially in markets where the concept is uncertain, the technology evolves quickly, and customer preferences are still taking shape. The first mover may bear the cost of educating buyers and exposing the weak points of the category. The follower can use that information to enter more efficiently.<\/p>\n\n\n\n<p>But waiting is not the same thing as strategy. A passive follower often arrives late with a technically adequate product and no meaningful advantage around it. 
A good follower uses the waiting period to prepare stronger distribution, better integration, sharper positioning, a better pricing model, or a more trust-rich product design. In other words, the right alternative to being first is not \u201cdo nothing until the market settles.\u201d It is \u201clearn from the pioneer while building a stronger capture model.\u201d<\/p>\n\n\n\n<p>My own view is that the old debate between first mover and fast follower is becoming less useful than a new one: shallow mover or deep mover. A shallow mover launches quickly but captures little. A deep mover may launch first or second, but it builds context, customer lock-in, workflow position, and value capture in parallel. In the AI era, that depth matters more than the order of entry.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Pricing Will Have to Change, Too<\/h1>\n\n\n\n<p>One reason AI is destabilizing software strategy is that it also destabilizes software pricing. When software was primarily a tool for human users, seat-based pricing felt natural. When software increasingly performs work on its own, seat-based pricing can understate value in one case and destroy margins in another.<\/p>\n\n\n\n<p>The market is already moving. Zendesk now prices AI agents by <a href=\"https:\/\/support.zendesk.com\/hc\/en-us\/articles\/6931689272090-Moving-to-automated-resolutions-from-existing-bot-pricing-plans\">automated resolutions<\/a>, not only by traditional software access metrics. Stripe\u2019s <a href=\"https:\/\/stripe.com\/resources\/more\/outcome-based-pricing\">guidance on outcome-based pricing<\/a> and <a href=\"https:\/\/stripe.com\/billing\/usage-based-billing\">usage-based billing for AI products<\/a> reflects the same shift: AI products increasingly need pricing that tracks consumption, completed work, or measurable business outcomes rather than seats alone.<\/p>\n\n\n\n<p>This matters because pricing is part of defensibility. 
If AI allows a product to perform more labor, the vendor has to decide whether it wants to charge for access, for usage, for workflow steps, or for outcomes. That decision changes not just revenue, but market position. Outcome-based pricing can signal confidence and align incentives, but it also requires measurement, attribution, contract clarity, and the ability to absorb more delivery risk. Not every company can do it well. That, in itself, becomes a differentiator.<\/p>\n\n\n\n<p>The larger point is that software companies can no longer assume that the old combination of seat pricing, feature roadmaps, and periodic upgrades will remain the default answer. In AI-driven markets, value is moving closer to what the software actually accomplishes. The business model has to move with it.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">What \u2018Winning\u2019 Means Now<\/h1>\n\n\n\n<p>In the AI era, winning should not be defined as being the first company to ship a feature. It should be defined as being the company that can still earn durable returns after the feature becomes easier to copy.<\/p>\n\n\n\n<p>That means winning is increasingly measured by a different set of questions. Can you become part of the customer\u2019s operating system, not just their software stack? Can you accumulate proprietary context and data loops that improve the product over time? Can you handle edge cases better than generic alternatives? Can you support, govern, explain, and update the system in ways that matter to the customer\u2019s risk profile? Can you build a business model that captures the value your automation creates? And can you keep customer ownership even if the platform layer above you becomes more aggressive?<\/p>\n\n\n\n<p>This is also why the most dangerous mistake for founders is to define innovation too narrowly. If innovation means only visible novelty, then yes, AI makes it easier for that novelty to be cloned. 
But if innovation includes workflow redesign, trust architecture, operational resilience, data capture, service design, contract structure, governance, and market education, then innovation remains not only possible but essential. It simply becomes harder, and therefore more strategic.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Direct Answers to the Strategic Questions<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">How should a software company innovate when AI helps everyone build faster?<\/h2>\n\n\n\n<p>It should treat speed as necessary but insufficient. The company still needs AI-assisted development, but it must pair that speed with a deeper plan for how value will be captured after imitation arrives. The real innovation agenda should include product, workflow, trust, data, integration, pricing, and governance, not just features.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Which innovator will win?<\/h2>\n\n\n\n<p>Usually not the one who is merely first. The likely winner is the company that converts early product insight into stronger customer embedding, richer context, better distribution, and a more defensible business model before the market catches up.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What does it mean to win?<\/h2>\n\n\n\n<p>It means durable value capture, not temporary applause. In practical terms, it means the ability to retain customers, hold pricing power, withstand cloning of visible features, and keep improving the product from a position of trust and learning.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Can innovation still be a differentiator?<\/h2>\n\n\n\n<p>Yes, but visible feature innovation is becoming a weaker moat. 
System innovation, spanning workflow design, context accumulation, edge-case mastery, accountability, and commercial architecture, is becoming the more durable differentiator.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Is it smarter to wait and copy later with better AI tools?<\/h2>\n\n\n\n<p>Sometimes, but only if the company uses the waiting period strategically. A passive follower is just late. A strong follower enters with better economics, sharper positioning, stronger integration, or a better capture model. Waiting without a plan is not strategy.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What is the value of innovation if it can be copied so quickly?<\/h2>\n\n\n\n<p>Its value increasingly lies in what it allows the company to build around the product: market learning, proprietary context, customer trust, process ownership, and stronger economics. Innovation still matters, but unprotected innovation matters less.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">The Defensibility Mandate<\/h1>\n\n\n\n<p>The deepest strategic shift in software today is not that AI can write more code. It is that AI is forcing leaders to ask a harder question: what remains scarce after code becomes easier to produce?<\/p>\n\n\n\n<p>My answer is that scarcity is moving toward context, accountability, workflow position, trust, and the ability to turn innovation into a system of value capture. The companies that treat AI only as a productivity tool will use it to ship faster. The companies that treat AI as a strategic shock to the economics of software will redesign how they build, price, govern, and defend what they create.<\/p>\n\n\n\n<p>That is why the right mandate for software leaders now is not simply \u201cinnovate faster.\u201d It is \u201cinnovate more defensibly.\u201d Build faster, yes. But also build deeper. 
Use AI to compress development time, but spend the time you save building the things that will still matter after the code is copied: trust, context, workflow, outcomes, and capture.<\/p>\n\n\n\n<p>In that sense, the AI era does not make innovation less important. It makes shallow innovation less valuable and strategic innovation more important than ever.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Sources<\/h1>\n\n\n\n<p><a href=\"https:\/\/hai.stanford.edu\/ai-index\/2025-ai-index-report\">Stanford HAI, The AI Index 2025 Report<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.oecd.org\/content\/dam\/oecd\/en\/publications\/reports\/2025\/06\/the-effects-of-generative-ai-on-productivity-innovation-and-entrepreneurship_da1d085d\/b21df222-en.pdf\">OECD, The effects of generative AI on productivity, innovation and entrepreneurship (2025)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/2302.06590\">Peng et al., The Impact of AI on Developer Productivity: Evidence from GitHub Copilot<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.brookings.edu\/articles\/the-effects-of-ai-on-firms-and-workers\/\">Brookings, The effects of AI on firms and workers (2025)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.edegan.com\/pdfs\/Teece%20%281986%29%20-%20Profiting%20From%20Technological%20Innovation%20Implications%20For%20Integration%20Collaboration%20Licensing%20And%20Public%20Policy.pdf\">David J. 
Teece, Profiting from Technological Innovation (1986)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/hbr.org\/2005\/04\/the-half-truth-of-first-mover-advantage\">Suarez and Lanzolla, The Half-Truth of First-Mover Advantage, Harvard Business Review (2005)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.hbs.edu\/ris\/Publication%20Files\/11-003_0f4a1eff-854b-4269-b1a8-e2531cdcd3fa.pdf\">Casadesus-Masanell and Zhu, Business Model Innovation and Competitive Imitation, Harvard Business School Working Paper<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.brookings.edu\/articles\/what-happens-when-ai-companies-compete-with-their-customers\/\">Brookings, What happens when AI companies compete with their customers? (2026)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/nvlpubs.nist.gov\/nistpubs\/ai\/nist.ai.100-1.pdf\">NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0)<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/artificialintelligenceact.eu\/article\/14\/\">EU Artificial Intelligence Act, Article 14: Human Oversight<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/support.zendesk.com\/hc\/en-us\/articles\/6931689272090-Moving-to-automated-resolutions-from-existing-bot-pricing-plans\">Zendesk Support, Moving to automated resolutions from existing bot pricing plans<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/stripe.com\/resources\/more\/outcome-based-pricing\">Stripe, Outcome-based pricing: A guide to linking revenue to results<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/stripe.com\/billing\/usage-based-billing\">Stripe, Usage-based billing for AI products<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Imagine a software company that does everything right. It sees a real market gap, designs a compelling new product, and uses a mix of human engineering and AI-assisted coding to get to market fast. The launch goes well. Customers notice. The market validates the idea. 
Then, only a few months later, a competitor rebuilds the&#8230;<\/p>\n","protected":false},"author":1,"featured_media":143178,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"episode_type":"","audio_file":"","cover_image":"","cover_image_id":"","duration":"","filesize":"","date_recorded":"","explicit":"","block":"","itunes_episode_number":"","itunes_title":"","itunes_season_number":"","itunes_episode_type":"","filesize_raw":"","footnotes":""},"categories":[3,1,56],"tags":[90],"class_list":["post-143176","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-business","category-uncategorized","category-technology","tag-innovation"],"_links":{"self":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts\/143176","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/comments?post=143176"}],"version-history":[{"count":1,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts\/143176\/revisions"}],"predecessor-version":[{"id":143177,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/posts\/143176\/revisions\/143177"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/media\/143178"}],"wp:attachment":[{"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/media?parent=143176"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/categories?post=143176"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/darkopavic.xyz\/index.php\/wp-json\/wp\/v2\/tags?post=143176"}],"curies
":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}