System engineering standards are missing the last step to reinvent Giambattista Vico

We, who proudly wear the mental scars of engineering a large project, who have seen code creatures bizarre and unthinkable, who have slain dragons and learned to kill hydras without cutting their necks, who grew weary and wise and know when not to start a fight and just let the beast be in its cave – it is to us that I address this enquiry. For we know what it is like to tend to our own work from months ago, trying to figure out its grand design. For we understand that there is something we lose when we complete the act of making. The cord is severed, and the most intimate connection between the maker and their creation is lost.

While I'm making, I know my decisions. Some pieces accidentally fall into the right places, but others I am placing deliberately. I name variables, functions, types; I come up with interfaces; I structure the types; I choose or create algorithms; I use specific languages and tools. All these moments will be lost in time, like my tears shed over every painful compromise. Another day comes and I stare back at my work in disbelief: you must be joking, Mister Me-of-yesterday, this function has 10 arguments? Don't you know anything about design? But I glance at my scars and I know better, and I do not touch it. Otherwise I will likely spend the rest of the day trying to fix it, only to rediscover that, for some contextual reason, this ugly monster of a function was the lesser evil in a freak zoo. If I'm lucky, the context has changed enough since then – some dependency or hardware got updated, some compiler bug got fixed – and I can properly rewrite this code. Good for the codebase, bad for me, as it provides an intermittent reward for my risky behavior. My code winks back after a facelift, inviting me closer into our toxic, codependent relationship. Here's another type that clearly does too much work – how about we decompose it?

It gets worse with other people's work, though it is really not much different. The root of the evil is the lost rationale. If I don't understand why this function was made with 10 arguments, I don't really know it. So I attempt to rediscover this lost knowledge, studying the context and inferring the missing bits. Imperfectly, of course.

Giambattista Vico, an eighteenth-century Neapolitan philosopher, formulated it: the true itself is made. In Latin: verum ipsum factum. Only the maker has full knowledge of his creation. Studying a man-made thing means understanding its history and the "why" of every decision.

If Vico understood that, what more can we, engineers, learn from him? Let's figure out how systems engineering currently treats two questions: what a system is made of, and how it changes.

Then we'll figure out Vico's point of view on them, see where they converge, and how his investigations can be useful to us.

Ontologies of system engineering

Engineers who build systems need a way to talk about the world. They need to agree on what exists, what matters, and how to describe it. Making an internet shop means automating a part of some organization, which has employees, clients, logistics, processes, reports to the tax office, and so on. Usually, a team of engineers works on this automation. That makes three systems involved: the organization and its clients are the supersystem, the automation is the target system, and the developer team is the third. Part of a systems engineer's work is to describe the interactions between these systems: the business processes, the structure of the organization, the software architecture of the automation. The tool they reach for is an ontology – a framework that carves the world into categories useful enough to work with.

Let's talk about a popular, "common sense" ontology. It descends, loosely, from Aristotle, and pictures the world in a straightforward and simple way:

  • The world is made of objects, which occupy three-dimensional space.
  • Each object carries attributes.
  • Objects have types, and types may belong to broader, more generic types.
  • Attributes of a particular object also have types which, in turn, may belong to broader supertypes, and so on.

Take Garfield. He is an object of type cat. The type cat belongs to the type animal, which is a subtype of living being, and so on. His attribute color is set to orange. This orangeness is an instance of the attribute orange of type color, which belongs to the attribute supertype visible quality.

If Garfield has a son, Barfield, we record a son attribute on Garfield pointing to Barfield, and a father attribute on Barfield pointing back. The relationship is spread over two separate places – one on each object – rather than living in a single, stand-alone description.

Now suppose Garfield goes for a walk. To represent this, we update his coordinates. We change an attribute, and then change it again, and again. But the ontology has no way to express the walk itself. It can describe where Garfield is; it cannot describe what Garfield is doing. Change, in this framework, gets lost in the cracks between discrete, isolated snapshots of reality.
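The split relationship and the vanishing walk can be sketched in a few lines of Python. This is purely illustrative – the class and attribute names are my own, not from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class Obj:
    """An object in the 'common sense' ontology: a type plus a bag of attributes."""
    type: str                              # e.g. "cat"; types may chain up to supertypes
    attrs: dict = field(default_factory=dict)

garfield = Obj("cat", {"color": "orange", "position": (0, 0)})
barfield = Obj("cat", {"color": "orange"})

# The father-son relationship is split across two objects.
# Fixing a wrong paternity claim means finding and editing BOTH halves.
garfield.attrs["son"] = barfield
barfield.attrs["father"] = garfield

# Garfield goes for a walk: each step overwrites the previous position.
for step in [(1, 0), (2, 0), (3, 1)]:
    garfield.attrs["position"] = step

# Only the last snapshot survives; the walk itself is nowhere in the model.
assert garfield.attrs["position"] == (3, 1)
```

Nothing in this model records that a walk happened, only that the position attribute now holds a different value.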

Software engineers lean on ontologies for at least two reasons. First, they need to describe the engineering process by which they build things – the teams, the customers, the platforms. Second, the software itself must model whatever real-world processes it automates. Ashby's law implies as much: a controller must match the variety of the system it controls, and a model that misses important structure will produce software that misses important behavior.

Change is a challenge

Any ontology used in systems engineering must handle four things well: particular objects, types, relationships, and change. The "common sense" ontology stumbles on relationships and struggles with change.

Relationships split apart. The bond between Garfield and Barfield is scattered across two attributes on two objects. If I discover that Garfield is not really the father, I must find and fix both halves independently. Nothing in the ontology ties them together.

It is worse with change. This ontology simply provides no way to express that things evolve. This matters enormously for software systems, which change constantly and unpredictably. A software system is a complex node in a network of interconnected, interacting systems, which are all changing and trying to adapt to each other. An ontology that can't describe change properly creates at least three problems:

The snapshot problem. Every description is a frozen moment. When something changes, you produce a new snapshot. But the ontology draws no line between the old picture and the new one. The connection exists only in someone's head.

The moving target problem. Describing a complex system takes time. If the system evolves quickly, the description will become obsolete and inaccurate before we even finish it.

The identity problem. What makes two things the same thing? If we gradually replace every plank in a ship, is it the same ship? If we rewrite every line in a software module, is it the same module, and does the old license still apply? If every member of a team is replaced, is it still the same team? And if it is, shouldn't it hold all the knowledge of the original team? The Aristotelian ontology has no principled answer. It tracks attributes of objects, but it has nothing to say about what persists when all the attributes change.

We might expect the major systems engineering standards to offer clean solutions, or at least systematic approaches to problems this pervasive. Unfortunately, they are still catching up.

How systems engineering standards struggle with change

Widely used enterprise architecture frameworks, such as TOGAF, Zachman, and ArchiMate, do not solve these four problems at the level of ontology. Instead, they work around them.

The usual approach goes like this:

  1. Describe the system as it is now.
  2. Describe how you want it to be.
  3. Identify the gaps between the current and desired systems, build a roadmap.

TOGAF's Architecture Development Method formalizes this into phases. Each phase produces documents and diagrams that represent a snapshot of the system. Engineers keep snapshots connected through reviews, processes, and conventions. But the ontology itself cannot represent a transition between snapshots. Transition is implied, not modeled.

The identity problem is left to human judgement, armed with the robust weaponry of "following conventions". You might ask: if the system has been migrated to the cloud, refactored, had its database replaced, and the development team rotated three times, is it the same system? TOGAF's answer: whatever the governance board decides. This risks inconsistency, and attempts to model both current and future states often end up blending the two together.

So, when a standard does not provide native ways of describing change, people try to make up for it with processes, discipline, and documentation. This brittle practice works until someone changes jobs or retires, budgets get cut, or the team just can't keep up with the complexity and pace of change.

However, there are ontologies in other systems engineering standards that handle change more gracefully. They place objects not in three-dimensional space but in four-dimensional space-time, so that an object extends through time the way it extends through space. Let's see how.

Benefits of perdurantism

The key is to stop thinking of objects as three-dimensional things that persist through time, and start thinking of them as four-dimensional things that extend through time. A cat does not exist "at" various moments; it is a long, winding entity stretched across space-time, spanning from its first cry to its last nap. We only get to see a thin slice of it – the cat right now. The cat-on-Tuesday and the cat-on-Wednesday are both connected parts of the cat, just as its ears are connected to its head. An Aristotelian cat was made of space, and time happened to it; a perdurantist cat is made of space and time. This idea is called 4D-extensionalism, or perdurantism.

This is not a trick of language. It restructures the ontology and takes on the problems we identified.

The snapshot problem disappears. Two snapshots of a system are no longer isolated pictures that someone must mentally connect – the system itself is the connection.

The moving target problem becomes manageable. You can describe whatever temporal extent you know about and leave the rest open. The system grows a new temporal part tomorrow; you add the description of that part when you have it. The existing description is not "outdated" — it is a complete and accurate account of the portion of space-time it covers. You are not racing to finish before reality moves on. You are building a description that extends alongside reality.

The identity problem gets a better framing. The ship with all new planks is the same 4D ship: an early temporal part has old planks, a later temporal part has new ones, and both parts belong to the same 4D ship. The software module rewritten line by line is one 4D entity whose later temporal parts contain different code. The team whose members have all been replaced is one 4D team: its early parts include Alice and Bob, its later parts include Alex and Rob. The knowledge that the team possesses, however, also gets localized in space-time, so it can be lost if all its bearers leave the team.
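The ship example can be sketched in Python under the perdurantist view. All names here are my own illustrative assumptions: an entity is nothing but the ordered collection of its temporal parts, and a "snapshot" is just a thin slice of it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalPart:
    """A stretch of an entity's life: an interval plus what holds during it."""
    start: int
    end: int
    state: frozenset

@dataclass
class FourDEntity:
    """A 4D object IS its temporal parts; identity is the whole extent."""
    name: str
    parts: list  # list[TemporalPart], ordered by time

    def slice_at(self, t):
        """The thin 3D slice we happen to observe at time t."""
        for p in self.parts:
            if p.start <= t < p.end:
                return p.state
        return None

ship = FourDEntity("Theseus", [
    TemporalPart(0, 10, frozenset({"plank-old-1", "plank-old-2"})),
    TemporalPart(10, 20, frozenset({"plank-new-1", "plank-old-2"})),
    TemporalPart(20, 30, frozenset({"plank-new-1", "plank-new-2"})),
])

# No plank survives from the first part to the last, yet it is one ship:
# the parts share no material, but they belong to one spatiotemporal whole.
assert ship.slice_at(5) != ship.slice_at(25)
assert ship.slice_at(25) == frozenset({"plank-new-1", "plank-new-2"})
```

Note the design choice: there is no mutable "current state" anywhere. Describing a new development is appending a part, never overwriting one, so earlier descriptions stay accurate forever.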

For this kind of power we pay with complexity. A 4D ontology is harder to learn, and the models it produces look unfamiliar to engineers raised on entity-relationship diagrams and class hierarchies. But the payoff is an ontology where change and identity are built into the foundations.

Where did the solutions come from?

The idea that objects extend through time did not originate in systems engineering. It came from philosophy.

Throughout the twentieth century, philosophers debated how ordinary objects persist. One camp — the endurantists — held that an object is wholly present at each moment it exists. The other — the perdurantists — argued that objects have temporal parts, just as they have spatial parts: your childhood self is not a past version of you but a temporal slice of the four-dimensional entity that is you. Quine developed early versions of this view; David Lewis gave it its most influential formulation; Ted Sider later provided rigorous formal defenses. The idea draws on the same intuition that Minkowski brought to physics in 1908, when he recast Einstein's special relativity in geometric terms and declared space and time a single four-dimensional manifold. But the philosophical and physical lines of thought developed largely independently, and the engineering work that followed drew on the philosophers, not the physicists.

The bridge from philosophy to engineering was built in the late 1980s by Chris Partridge and his colleagues at KPMG Consulting. Working on a legacy modernization project in the finance sector, they developed BORO — the Business Objects Reference Ontology. BORO adopted perdurantism and extensionalism as explicit foundational commitments: every business entity — a customer, a contract, a division — is a four-dimensional object with spatial and temporal extent, and identity is determined by that extent, not by names or descriptions. The approach proved effective for a practical reason: when two departments disagreed about whether a reorganized division was "the same" entity, BORO gave them a framework to describe exactly what had changed and what had not, without requiring anyone to win an argument about identity. Partridge described the method in Business Objects: Re-Engineering for Re-Use (1996), and BORO went on to be applied in finance, defence, energy, and oil and gas.

BORO's ideas fed directly into several standards. In 2005, the defence departments of Australia, Canada, the United Kingdom, and the United States established the IDEAS Group — the International Defence Enterprise Architecture Specification — with NATO as observers. The group set out to align their national architecture frameworks and needed a shared formal ontology for data exchange. They built it using the BORO method, with the ontology development led by Ian Bailey of Model Futures and major contributions from BORO Solutions. The result was a four-dimensionalist, extensional foundation that could represent defence systems, organizations, capabilities, and their evolution over time.

This foundation, in turn, was adopted by DODAF when it introduced DM2, the DoDAF Meta-Model, in version 2.0. Earlier versions of DODAF used a more conventional metamodel. DM2 inherited its formal foundation from IDEAS: four-dimensionalist, extensional, built on type theory, mereology, and 4D mereotopology. The adoption gave DODAF a principled basis for integrating and analyzing architectural data across the Department of Defense. The same IDEAS concepts also shaped MODAF in the UK.

ISO 15926, developed for the process industries, arrived at similar ontological commitments through a related but distinct path. The standard grew out of a European Union ESPRIT project called ProcessBase, begun in 1991, and was further developed by the EPISTLE consortium through 2003 under the lead of Matthew West at Shell. Its problem was lifecycle data integration for industrial plants — assets built, modified, maintained, and decommissioned over decades. The standard adopted a 4D data model with extensional identity, and its Part 2 reached International Standard status in 2003. Partridge himself identifies BORO as one of the bases for ISO 15926, and there was cross-pollination between the two efforts, but ISO 15926 also had its own independent development history rooted in industrial data management.

The pattern across all of these is worth noting. In every case, the 4D extensionalist ontology was adopted not because someone found it philosophically elegant, but because people ran into real problems — data integration failures, identity confusion, inability to track change — that conventional ontologies could not solve. The philosophy was available. The engineering caught up when the pain became acute enough.

Vico

Most engineers haven't heard of Giambattista Vico – even for philosophers he is a niche read. Yet this professor of rhetoric came surprisingly close to a modern philosophy of engineering three centuries ago.

Vico spent most of his career underpaid and ignored. He published his major work, the Scienza Nuova, in 1725, paying for the first edition by selling a ring. The book is strange, ambitious, and badly organized. It attempts nothing less than a unified science of human civilization: how nations arise, how languages form, how laws develop, how cultures mature and collapse. Vico argued that civilizations move through recurring cycles — a divine age of myth and ritual, a heroic age of aristocratic codes, a human age of reason and democracy. At the end of each cycle comes what he called the barbarism of reflection: a society so saturated with abstraction and formal rationality that it loses contact with the concrete, the particular, the made. It forgets how to understand its own institutions because it can only analyze them, not feel them. Then it collapses, and the cycle begins again.

This is where Vico is commonly misread. He is often labeled a Counter-Enlightenment thinker, an enemy of reason. He wasn't. He was an enemy of a specific kind of reason — the Cartesian kind, which starts from abstract axioms and deduces its way to truth. Descartes said: I think, therefore I am. Clear and distinct ideas are the foundation. Geometry is the model of knowledge. Vico replied: geometry works because we invented it. We can know it with certainty because we constructed it ourselves. He didn't reject science. He asked a sharper question: what can we truly know, and why?

His answer is compressed into a formula: verum ipsum factum. The true is the made. We have genuine knowledge of what we ourselves have made. Mathematics — yes, we made it, we can know it fully. History, law, language, institutions — yes, made by humans, knowable by humans. Nature — no. We didn't make it. Only God, its maker, has that knowledge. We can observe nature, measure it, build useful models, but we cannot know it the way we know our own creations.

Set aside the theology for a moment. The engineering insight is in the principle itself. Vico is not saying that made things happen to be easier to study. He is saying that making and knowing are the same act. When I build a system, every decision I make — this interface, this data structure, this trade-off — constitutes my knowledge of it. The knowledge isn't a description I write after the fact. The knowledge is the making. The moment I stop making, the knowledge begins to decay. Which is exactly the experience we started with: staring at our own code, unable to reconstruct the reasons.

Notice what this gives us that 4D extensionalism doesn't. The perdurantist says: your system is a four-dimensional object extended through time, and its identity is its complete spatiotemporal history. Good. But that history, for a perdurantist, is a sequence of states — temporal parts, one after another, like frames in a film. Vico says: for a man-made thing, the history is not a sequence of states. It is a sequence of acts of making. Each temporal part was produced by someone who chose it over alternatives, for reasons, under constraints. Strip out the acts of making and you have an accurate but inert skeleton. You know what the system looked like at every point in time. You do not know why.

This is an epistemic claim, not an ontological one. Vico is not saying made things are built out of different stuff than natural things. He is saying that knowing a made thing requires a different method than knowing a natural thing. For natural objects, observation and measurement are the best we have. For artifacts, we can do better — or rather, we must do better, because the observable properties of an artifact systematically underdetermine its design. Two systems can look identical in structure and behave identically in all tested scenarios, yet be different systems because they were made for different reasons, with different constraints, carrying different trade-offs. The reasons are invisible in the snapshot. They live in the history of making.

Vico's influence ran underground for a century, then surfaced everywhere at once. Hegel's idea that history is intelligible only through its own development — Vichian. Marx's claim that we understand society by understanding how it was produced — Vichian. Collingwood's argument that history is the re-enactment of past thought — Vichian. The hermeneutic tradition from Dilthey to Gadamer, with its insistence that understanding a human creation means recovering the intention behind it — all of this traces back, directly or indirectly, to the professor from Naples who couldn't get a better chair.

But none of these inheritors had our problem. They were studying civilizations, not configuring distributed systems. The question for us is narrower and more practical: can Vico's principle be made to work inside a formal ontology for systems engineering? Can we preserve the maker's knowledge — not just as documentation on the side, but as a structural feature of how we describe systems?

Incorporating Vico's insights

  • A complete framework would need both: Einstein's contribution (things are their spatiotemporal extents) and Vico's contribution (for artifacts, those extents are structured by intentionality).
  • Vico saw this earliest, but restricted it to the human domain. Physics universalized it. Applied ontology operationalized it for engineering. But the engineering tradition lost the intentional dimension that Vico considered essential.
  • The practical upshot: if you're building systems, modeling systems, or managing the evolution of systems, you need both the 4D spatiotemporal extent and the record of why each making-decision was made. The first without the second gives you a skeleton; the second without the first gives you folklore.
  • IBIS, QOC, DRL, and Kruchten's ontology capture design rationales, but none of them is integrated into any serious foundational ontology
  • Incorporating into ontology:
    • Making Acts as 4D individuals, related to system's temporal parts, agents, intentional content (see next point)
    • Intentional content: decision (selection of one of possible worlds + possible temporal parts) + rationale + Context (temporal parts of the environment)
    • Architecture Decision Records are part of the ontology now

Verum ipsum factum and large language models

  • Imagine if your code were not just text but
    • a history of diffs annotated with rationales behind every change
    • and also every entity has semantic links to relevant entities
    • agents would have much better context for every code entity, instead of inferring its meaning from the code alone (https://huggingface.co/papers/2503.15231?utm_source=chatgpt.com)

      For each “public” entity, store:

      • Spec hooks (tests/properties/protocol checks)
      • Invariants (“must hold” statements)
      • Decision record (rationale, alternatives, consequences)
      • Links (depends-on, refines, deprecates, implements, constrained-by, etc.)

The lost rationale problem. People change engineered systems for reasons. In this ontology, those reasons have nowhere to live. They float outside – in emails and Slack channels; they are shared over company-paid lunches and occasionally mentioned to new hires. Every developer who has built something nontrivial and returned to it a year later knows the feeling: a design decision that once seemed obviously right now looks baffling. Why did I do this? Was there a constraint I've forgotten? The knowledge of why is lost, leaving the developer afraid to touch anything. Everything becomes brittle, or at least seems so. Cleaning up a mess may cost you weeks, only for you to rediscover why said mess was the lesser evil all along. In the end: time spent, knowledge recovered, but no progress on the product itself.

Design rationale sits outside the metamodel entirely. Some teams keep Architecture Decision Records as separate documents, but these records have no formal connection to the architecture description: they share neither foundations nor language with it. They decay. They get abandoned. Often they are simply never written at all.

The lost rationale problem is not solved by 4D ontologies, but it might finally find a home there. Design decisions are localized in space-time and tie the system that was built to the alternatives that were rejected. To the best of my knowledge, no commonly used systems engineering standard puts design rationales into its ontology, but the 4D approach makes room for them.