Systems engineering standards are one step short of reinventing Giambattista Vico
We, who proudly wear the mental scars of engineering a large project, who have seen code creatures bizarre and unthinkable, who slew dragons and learned to kill hydras without cutting their necks, who grew weary and wise and know when not to start a fight and simply let the beast be in its cave – it is to us that I address this enquiry. For we know what it is like to tend to our own work from months ago, trying to figure out its grand design. For we understand that there is something we lose when we complete the act of making. The cord is severed, and the most intimate connection between the maker and their creation disappears.
While I'm making, I know my decisions. Some pieces accidentally fall into the right places, but others I am placing deliberately. I name variables, functions, types; I come up with interfaces; I structure the types; I choose or create algorithms; I use specific languages and tools. All these moments will be lost in time, like my tears shed over every painful compromise. Another day comes and I stare back at my work in disbelief: you must be joking, Mister Me-of-yesterday, this function has 10 arguments? Don't you know anything about design? But I glance at my scars and I know better, and I do not touch it. Otherwise I will likely spend the rest of the day trying to fix it, only to rediscover that, for some contextual reason, this ugly monster of a function was the lesser evil in a freak zoo. If I'm lucky, the context has changed enough since then: some dependency or hardware got updated, some compiler bug got fixed, and I can properly rewrite this code. Good for the codebase, bad for me, as it provides an intermittent reward for my risky behavior. My code winks back after a facelift, inviting me closer into our toxic, codependent relationship. Here's another type that clearly does too much work, how about we decompose it?
The fever intensifies when I touch other people's work, but it is bad enough with my own that the difference between me from a year ago and a colleague from yesterday hardly matters. The root cause is the same: the lost rationale. If I don't understand why this function was made with 10 arguments, I don't know it. And so I attempt to rediscover this lost knowledge, studying the context and inferring the missing bits. I recover some bits, and I have to be content with that.
Giambattista Vico, an eighteenth-century Neapolitan philosopher, formulated it: the true itself is the made. In Latin: verum ipsum factum. Only the maker has full knowledge of his creation. Studying a man-made thing means understanding its history and the rationale behind every decision.
If Vico demonstrates such a deep understanding of making, what more can we, engineers, learn from him? Let's figure out how systems engineering currently treats two questions: what is a (human-made) system, and what does it mean to know it? Then we'll see how to benefit from Vico's insights on both.
Ontologies of systems engineering
Engineers need a way to talk about the world. They need to agree on what exists, what matters, and how to describe it.
For example, making an online shop means automating a part of some organization which has employees, clients, logistics, processes, reports to the tax office, and so on. Usually, a team of engineers works on this automation. That makes three systems involved: the organization together with its clients forms the supersystem, the automation is the target system, and the developer team is the third one. The systems engineering approach includes describing these systems and their interactions.
For these descriptions, engineers use ontologies – frameworks that carve the world into categories useful enough to work with.
Common sense ontology
Let's talk about an averaged, popular, "common sense" ontology. It descends, loosely, from Aristotle, and pictures the world in a straightforward way:
- The world is made of objects, which occupy three-dimensional space.
- Each object carries attributes.
- Objects have types, and types may belong to broader, more generic types.
- Attributes of a particular object also have types which, in turn, may belong to broader supertypes, and so on.
If you have worked with any widely used programming language, this way of thinking is likely natural to you.
Take Garfield. He is an object of type cat. A cat belongs to a type animal, which is a subtype of living being, and so on. His attribute color is set to orange. This orangeness is an instance of the attribute orange of type color, belonging to an attribute supertype visible quality.
If Garfield has a son, Barfield, we record a son attribute on Garfield pointing to Barfield, and a father attribute on Barfield pointing back. The relationship is split across two separate places – one on each object – rather than living in a single, stand-alone description.
Now suppose Garfield goes for a walk. To represent this, we update his coordinates. We change an attribute, and then change it again, and again. But this ontology has no way to express the walk itself. It can describe where Garfield is; it cannot describe what Garfield is doing. Change, in this framework, gets lost in the cracks between discrete, isolated snapshots of reality.
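To make the picture concrete, here is a minimal sketch of this ontology in code. The `Cat` class, the coordinates, and the attribute names are purely illustrative; they mirror the running example, not any real library:

```python
# The "common sense" ontology as it naturally maps onto a mainstream
# object-oriented language: objects, attributes, types.

class Cat:
    def __init__(self, name, color):
        self.name = name
        self.color = color          # attribute of type "color"
        self.position = (0, 0)      # attribute: where the cat is
        self.father = None          # one half of a relationship...
        self.son = None             # ...the other half lives elsewhere

garfield = Cat("Garfield", "orange")
barfield = Cat("Barfield", "orange")

# The father-son bond is scattered across two objects; nothing ties
# the two halves together, so they can silently drift apart.
garfield.son = barfield
barfield.father = garfield

# "Going for a walk" can only be expressed as destructive updates.
# The walk itself leaves no trace: only the latest snapshot survives.
garfield.position = (1, 0)
garfield.position = (2, 0)
garfield.position = (3, 0)

print(garfield.position)  # only "where", never "what he is doing"
```

Note how the mutation of `position` discards each previous state: the ontology can say where Garfield is, but the walk exists only in the sequence of assignments, which the model does not record.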
Ontologies for software engineers
Software engineers in particular lean on ontologies for at least two reasons.
First, like all engineers, they need to describe the process by which they build things – the teams, the customers, the platforms.
Second, the software itself must model whatever real-world processes it automates. Ashby's law of requisite variety implies as much: a controller must match the variety of the system it controls, and a model that misses important structure will produce software that misses important behavior.
Can't we just use the tools directly: types in programming languages, databases, data formats? In practice, they fall short in expressivity. Some examples:
- A relational database cannot directly express a many-to-many relationship – say, which authors wrote which books – so we invent a junction table, a table that names no real thing in the domain and exists only to work around a limitation of the data model.
- Older versions of Java can't express "exactly one of these three shapes," so a sum type gets encoded as a class hierarchy with visitors and downcasting – service machinery that names nothing in the domain.
- C has no closures, so they are modeled using a function pointer and a `void *context` pair.
- Languages without optional types force us to use sentinel values – `null`, `-1`, the empty string – to encode absence, although "this value might not exist" is the real domain concept.
A fitting ontology describes the domain at the level we actually think about it – higher than any single implementation language, and free of its accidental machinery.
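To make the junction-table workaround from the first bullet concrete, here is a small sketch using an in-memory SQLite database (the author and book names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT);
    -- The junction table: it names no concept in the domain and
    -- exists only because the relational model cannot store a
    -- many-to-many link directly.
    CREATE TABLE author_book (
        author_id INTEGER REFERENCES author(id),
        book_id   INTEGER REFERENCES book(id),
        PRIMARY KEY (author_id, book_id)
    );
""")
conn.executescript("""
    INSERT INTO author VALUES (1, 'Kernighan'), (2, 'Ritchie');
    INSERT INTO book   VALUES (1, 'The C Programming Language');
    INSERT INTO author_book VALUES (1, 1), (2, 1);
""")

# Recovering "which authors wrote this book" requires joining
# through the artificial table:
rows = conn.execute("""
    SELECT a.name FROM author a
    JOIN author_book ab ON ab.author_id = a.id
    WHERE ab.book_id = 1
    ORDER BY a.name
""").fetchall()
print(rows)  # [('Kernighan',), ('Ritchie',)]
```

The domain statement is simply "these two people wrote this book"; `author_book` is pure accidental machinery of the data model.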
Change is a challenge
Not all ontologies are equally useful for modeling reality. An ontology used in systems engineering must handle four things well: particular things, types of things, relationships between things, and change. The "common sense" ontology stumbles on relationships and struggles with change.
Relationships split apart. The bond between Garfield and Barfield is scattered across two attributes on two objects. If I discover that Garfield is not really the father, I must find and fix both halves independently. Nothing in the ontology ties them together. We lose the intuitive and essential quality of the father-son relationship – that if you are someone's son, that someone must also be your father, and vice versa.
It is worse for change. This ontology simply provides no way to express that things evolve. But software systems evolve constantly and unpredictably. To expand: a software system is a complex node in a network of interconnected, interacting complex systems, all of which are changing and trying to adapt to each other. An ontology that can't describe change properly leads to at least three problems:
The snapshot problem. Every description is a frozen moment. When something changes, you produce a new snapshot. But the ontology draws no line between the old picture and the new one. The connection exists only in someone's head.
The moving target problem. Pragmatically, describing a complex system takes time. If the system evolves faster than we are able to describe its snapshots, the description will always be obsolete and inaccurate. What good is it then?
The identity problem. What makes two things the same thing? If we gradually replace every plank in a ship, is it the same ship? If we rewrite every line in a software module, is it the same module, and does the old license still apply? If every member of a team is replaced, is it still the same team, and does it hold the same knowledge? Common-sense ontology tracks attributes of objects, but when all the attributes change, it can't draw a connection between the object before and after changes.
Here is a funny example of how the identity problem bites us. Suppose my friend John had long hair but shaved his head; now he's bald. Let's consider two moments in time: before and after shaving. After shaving, I believe, John is still the same person:
John (before shaving, hair) = John (after shaving, no hair)
However, even before, John was separable from his hair – that's what made shaving him possible in the first place. Consider hairy John, but taken without his hair; obviously, because they are the exact same 3D object, the following holds:
John (before shaving, no hair) = John (after shaving, no hair)
By using symmetry and transitivity of equality, we infer:
John (before shaving, no hair) = John (before shaving, hair)
But these two are different 3D objects! Such intuitive reasoning about equality between 3D objects undergoing changes is inexact and inconsistent; sometimes, it quietly leads us astray.
We might expect the major systems engineering standards to offer clean solutions, or at least systematic approaches to problems this pervasive. Unfortunately, some of them are still catching up.
How systems engineering standards struggle with change
Widely used enterprise architecture frameworks, such as TOGAF, Zachman, and ArchiMate, do not solve these problems at the level of ontology. Instead, they work around them.
The usual approach goes like this:
- Describe the system as it is now.
- Describe how you want it to be.
- Identify the gaps between the current and desired systems, build a roadmap.
TOGAF's Architecture Development Method formalizes this into phases. Each phase produces documents and diagrams that represent a snapshot of the system. Engineers keep the snapshots connected through reviews, processes, and conventions. But the ontology itself cannot represent a transition between snapshots. Transition is implied, not modeled.
The identity problem is left to human judgement, armed with the robust weaponry of "following conventions". You might wonder: if the system has been migrated to the cloud, refactored, had its database replaced, while the development team rotated three times, is it the same system, team, process? TOGAF gives a clear answer: whatever the governance board decides. This risks inconsistency, and attempts to model both current and future states often end up blending the two together.
So, when a standard does not provide native ways of describing change, people try to make up for it with processes, discipline, and documentation. This brittle practice works until someone changes jobs or retires, budgets get cut, or the team simply can't keep up with the complexity and pace of change.
However, there are ontologies in other system engineering standards that handle changes more gracefully. They place objects not in three-dimensional space, but in four-dimensional space-time, so an object extends through time the way it extends through space.
Benefits of perdurantism
The key is to stop thinking of objects as three-dimensional things that persist through time, and start thinking of them as four-dimensional things that extend through time. A cat does not exist "at" various moments; it is a long, winding entity stretched across space-time, spanning from its first cry to its last nap. We only get to see a thin slice of it – the cat right now. The cat-on-Tuesday and the cat-on-Wednesday are both connected parts of the cat, just as its ears are connected to its head. An Aristotelian cat was made of space, and time happened to it; a perdurantist cat is made of space and time. This idea is called 4D extensionalism, or perdurantism. The common-sense ontology is an example of the opposite approach, endurantism, which posits that entities are wholly present at each moment.
Perdurantism is not a trick of language. It restructures the ontology and takes on the problems we identified.
The snapshot problem disappears. Two snapshots of a system are no longer isolated pictures that someone must mentally connect – the system itself is the connection.
The moving target problem becomes manageable. You can describe whatever temporal extent you know about and leave the rest open. The system grows a new temporal part tomorrow; you add the description of that part when you have it. The existing description is not "outdated" — it is a complete and accurate account of the portion of space-time it covers. You are not racing to finish before reality moves on. You are building a description that extends alongside reality.
The identity problem gets a better framing. The ship with all new planks is the same 4D ship: an early temporal part has old planks, a later temporal part has new ones, and both parts belong to the same 4D ship. The software module rewritten line by line is one 4D entity whose later temporal parts contain different code. The team whose members have all been replaced is one 4D team: its early parts include Alice and Bob, its later parts include Alex and Rob. The knowledge that the team possesses, however, also gets localized in space-time, so it can be lost if all its bearers leave the team.
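One way the ship example might look in code – a sketch with class names of my own invention, not any standard's vocabulary:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalPart:
    """A time-bounded slice [start, end) of a 4D individual,
    together with the state it carries in that interval."""
    start: int
    end: int
    planks: frozenset

class Ship4D:
    """A ship as a 4D entity: it does not *have* states that replace
    each other, it *is* the sum of its temporal parts. Identity is
    the whole, not any single slice."""
    def __init__(self, name):
        self.name = name
        self.parts = []

    def extend(self, start, end, planks):
        self.parts.append(TemporalPart(start, end, frozenset(planks)))

    def planks_at(self, t):
        for part in self.parts:
            if part.start <= t < part.end:
                return part.planks
        raise LookupError(f"no temporal part covers t={t}")

theseus = Ship4D("Theseus")
theseus.extend(0, 10, ["plank-%d" % i for i in range(3)])       # original planks
theseus.extend(10, 20, ["new-plank-%d" % i for i in range(3)])  # all replaced

# No plank is shared between the early and the late temporal part...
assert theseus.planks_at(5).isdisjoint(theseus.planks_at(15))
# ...yet both parts belong to one and the same 4D ship: `theseus`.
```

The identity question never arises as a puzzle here: "same ship" just means "temporal part of the same 4D individual", however different the states.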
Robustness of the perdurantist solution
In safety engineering there is a hierarchy of controls – a framework used to minimize or eliminate hazard exposure, ranked by effectiveness in preventing failures:
- Elimination – remove the hazard entirely;
- Substitution – replace with something safer;
- Engineering controls – physical barriers, interlocks; think of trying to fit a USB cable into a 3.5mm jack;
- Administrative controls – prevention through rules, procedures, training;
- Warnings and personal protective equipment – the last resort when everything else fails.
Classical endurantism can't solve the identity problem when the object has changed, so the answer is delegated to boards, conventions, and policies. The model itself is silent, so humans compensate with administrative controls. By contrast, in perdurantist frameworks the identity question dissolves at the ontological level, and the structure of the model precludes the ambiguity. That is a move to level 3 – engineering controls. The constraint now lives in the architecture of the representation itself, not in a human process layered on top.
This does not mean administrative controls are eliminated completely: we still need decisions about granularity (what warrants a new temporal part?) and event boundaries (what triggers a new slice?). But these questions are now formalizable within the model, so they can themselves be pushed toward level 3. Simpler frameworks can't even express the question clearly.
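As a sketch of what "formalizable within the model" might mean: the event-boundary rule becomes an executable predicate stored with the model, rather than a paragraph in a process document. The attribute names and the choice of "identity-bearing" attributes below are illustrative assumptions:

```python
def starts_new_part(old_state: dict, new_state: dict) -> bool:
    """Event-boundary rule: a new temporal part begins only when an
    'identity-bearing' attribute changes, not on every update.
    Which attributes count as identity-bearing is the (now explicit
    and reviewable) modeling decision."""
    IDENTITY_BEARING = {"owner", "deployment_target", "license"}
    return any(old_state.get(k) != new_state.get(k) for k in IDENTITY_BEARING)

# The system's description: a list of temporal parts.
history = [{"owner": "team-a", "deployment_target": "on-prem", "version": 1}]

def record_change(new_state):
    if starts_new_part(history[-1], new_state):
        history.append(new_state)   # open a new temporal part
    else:
        history[-1] = new_state     # same part, refreshed snapshot

# A routine version bump does not open a new part...
record_change({"owner": "team-a", "deployment_target": "on-prem", "version": 2})
# ...but a migration to the cloud does.
record_change({"owner": "team-a", "deployment_target": "cloud", "version": 3})

print(len(history))  # 2
```

The point is not this particular rule but where it lives: inside the model, where a tool can enforce it, instead of inside a convention that only holds while everyone remembers it.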
For this kind of power we pay with complexity. A 4D ontology is harder to learn, and the models it produces look unfamiliar to engineers raised on entity-relationship diagrams and class hierarchies. But the payoff is an ontology where change and identity are built into the foundations.
How did perdurantism infiltrate engineering?
The idea that objects extend through time came from philosophers, debating how ordinary objects persist. One camp – the endurantists – held that an object is wholly present at each moment it exists. The other – the perdurantists – argued that objects have temporal parts, just as they have spatial parts.
There was a related debate about time: presentists think that only the now exists, while eternalists hold that all moments of time are equally real, so what was in the past or will be in the future simply exists.
- Eternalism pairs naturally with perdurantism: if all times genuinely exist, then an object's temporal parts at different times all exist too.
- Presentism pairs naturally with endurantism: if only the present exists, there are no past or future temporal parts to have.
Other pairings are possible but problematic, so let's stick with the classical ones. The common-sense ontology is endurantist and presentist; 4D objects are perdurantist and eternalist.
Quine developed early versions of perdurantism; David Lewis gave it its most influential formulation; Ted Sider later provided rigorous formal defenses.
Chris Partridge and his colleagues at KPMG Consulting transferred the ideas from philosophy to engineering in the late 1980s. Working on a legacy modernization project in the finance sector, they developed BORO – the Business Objects Reference Ontology. BORO explicitly adopted perdurantism for a practical reason: when two departments disagreed about whether a reorganized division was "the same" entity, BORO gave them a framework to describe exactly what had changed and what had not, without requiring anyone to win an argument about identity. Partridge described the method in his book, and BORO went on to be applied in finance, defence, energy, and oil and gas.
From there, BORO's ideas passed into the IDEAS Group, which created a shared formal ontology for data exchange between the defence departments of the US and its allies. The ontology development was led by Ian Bailey of Model Futures, with contributions from BORO Solutions.
The US Department of Defense Architecture Framework, DoDAF, absorbed this foundation; its second version introduced DM2, the four-dimensional and extensional DoDAF Meta-Model.
Another branch: ISO 15926, a standard developed for the process industries. It grew out of a European Union ESPRIT project called ProcessBase, begun in 1991, and was further developed by the EPISTLE consortium through 2003 under the leadership of Matthew West at Shell. Its problem was lifecycle data integration for industrial plants: assets are built, modified, maintained, and decommissioned over decades. BORO seems to have been an influence here as well, but ISO 15926 also had its own independent development history rooted in industrial data management.
In all cases, the perdurantist approach was adopted not for its elegance, but because people ran into real problems that conventional ontologies could not solve. The philosophy provided solutions and useful ways of thinking.
The new science of Giambattista Vico
Now, how does Giambattista Vico fit into this picture? Most engineers haven't heard of him – even for philosophers he is a niche read. Vico was a Neapolitan professor of rhetoric who spent most of his career underpaid and ignored. I am impressed by how close his ideas got to the modern philosophy of engineering.
Vico published his major work, the Scienza Nuova (Italian for "New Science"), in 1725, paying for the first edition by selling his ring. The book is a strange, ambitious, and badly organized attempt to build a unified science of human civilization: how nations arise, how languages form, how laws develop, how cultures mature and collapse. Vico argued that civilizations move through recurring cycles – a divine age of myth and ritual, a heroic age of aristocratic codes, a human age of reason and democracy. At the end of each cycle comes what he called the barbarism of reflection. People start applying rational approaches universally and outside their scope: to the foundations of politics and ethics. A society saturated with abstraction and formal rationality loses contact with the concrete, the particular, the made. It forgets how to understand its own institutions because it can only analyze them, not know them. Then it collapses, and the cycle begins again.
Vico is commonly mistaken for an anti-Enlightenment thinker, but he wasn't an enemy of reason. For Descartes, cogito ergo sum and clear and distinct ideas are the foundation, and mathematics is the model of knowledge. Vico noticed that we doubt our mathematical knowledge less than our knowledge of reality precisely because we made mathematics. Makers know more about their creations than anyone else possibly can. So he asked: what can we truly know, and why?
His answer is a formula: verum ipsum factum, the true is the made. We have genuine knowledge of what we ourselves have made. Mathematics – we made it, so we can know it fully. History, law, language, institutions – the same: made by humans, hence knowable. Nature is God's creation, so full knowledge of nature is inaccessible to us. We can observe nature, measure it, build useful models of it, but that's it.
The engineering insight is in the principle itself. Vico means that making and knowing are the same act. When I build a system, every decision I make – this interface, this data structure, this trade-off – constitutes my knowledge of it. The knowledge isn't a description I write afterwards. The knowledge is the making. The moment I stop making, the knowledge begins to decay. Which is exactly the experience we started with: staring at our own code, unable to reconstruct the reasons.
So to know something, you need to deeply understand its history and the rationale behind its changes. Notice how this goes further than 4D extensionalism. For a perdurantist, anything is a four-dimensional object extended through time, and its history is a sequence of states. For Vico, a man-made thing is special: its history is not a sequence of states but a sequence of acts of making. Each temporal part was produced by someone who chose it over alternatives, for reasons, in a context, under constraints. Strip out the acts of making and you have a corpse where an organism used to be. You know what the system looked like at every point in time. You do not know why.
Make no mistake: Vico is not saying made things are built out of different stuff than natural things. He is saying that knowing a made thing requires a different method than knowing a natural thing. So, it's a claim about knowledge, not about being – an epistemic statement, not an ontological one. For natural objects, observation and measurement are the best we have. For artifacts, we can and must do better, because the observable properties of an artifact are not enough to describe its design. Two systems can look identical in structure and behave identically in all tested scenarios, yet be different systems because they were made for different reasons, with different constraints, carrying different trade-offs. The reasons are invisible in the snapshot. They live in the history of making.
Vico's influence ran underground for a century, then surfaced everywhere at once. Hegel's idea that history is intelligible only through its own development — Vichian. Marx's claim that we understand society by understanding how it was produced — Vichian. Collingwood's argument that history is the re-enactment of past thought — Vichian. The hermeneutic tradition from Dilthey to Gadamer, with its insistence that understanding a human creation means recovering the intention behind it — all of this traces back, directly or indirectly, to the professor from Naples who couldn't get a better chair.
But none of these inheritors had our problem. They were studying civilizations, not configuring distributed systems. The question for us is narrower and more practical: can Vico's principle be made to work inside a formal ontology for systems engineering? Can we preserve the maker's knowledge — not just as documentation on the side, but as a structural feature of how we describe systems?
Incorporating Vico's insights
Today, design rationale sits outside the metamodel entirely. Some teams keep Architecture Decision Records as separate documents, but these records have no connection to the formal architecture description. They share neither the same underlying foundations nor the same language. They decay. They get abandoned. Often they are simply never written at all.
The lost rationale is not yet addressed, but it might finally find a home. Design decisions are localized in space-time and tie the system that was built to the alternatives that were rejected. To the best of my knowledge, no commonly used systems engineering standard puts design rationale into its ontology, but the 4D approach makes room for it.
- The practical upshot: if you're building systems, modeling systems, or managing the evolution of systems, you need both the 4D spatiotemporal extent and the record of why each making-decision was made. The first without the second gives you a skeleton; the second without the first gives you folklore.
- Rationale-capture notations exist – IBIS, QOC, DRL, Kruchten's ontology – but none of them is integrated into a serious systems ontology.
- Incorporating rationale into a 4D ontology could look like this:
  - Making acts become 4D individuals, related to the system's temporal parts, to agents, and to their intentional content (see next point).
  - Intentional content combines a decision (the selection of one of the possible worlds, i.e. one set of possible temporal parts), a rationale, and a context (temporal parts of the environment).
  - Architecture Decision Records become part of the ontology itself.
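A hypothetical sketch of these bullets as data structures. None of this follows an existing standard; every name here is mine, and the example rationale is invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalPart:
    entity: str      # which 4D individual this slice belongs to
    start: int
    end: int
    state: str       # e.g. "function with 10 arguments"

@dataclass(frozen=True)
class MakingAct:
    """A making act is itself a 4D individual: it links the produced
    temporal part to its agent, the rejected alternatives, the
    rationale, and the environmental context."""
    agent: str
    produced: TemporalPart
    alternatives: tuple   # the possible temporal parts that were rejected
    rationale: str        # why this one won
    context: tuple        # temporal parts of the environment

ugly = TemporalPart("parser.parse", 0, 100, "function with 10 arguments")
act = MakingAct(
    agent="me-of-yesterday",
    produced=ugly,
    alternatives=("config object", "builder pattern"),
    rationale="hot path; the FFI boundary forbids allocating a struct here",
    context=(TemporalPart("compiler", 0, 100, "miscompiles packed structs"),),
)

# The rationale is now *in* the model: a later reader (or a tool) can
# ask why this temporal part exists instead of its alternatives.
print(act.rationale)
```

The key structural move is that the decision record is not a side document: it is an individual in the same ontology, attached to the very temporal parts it explains.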
Verum ipsum factum and large language models
- Imagine if your code were not just text but:
- a history of diffs annotated with the rationale behind every change,
- with every entity carrying semantic links to the entities relevant to it.
Agents would then have much better context for every code entity, instead of inferring its meaning from the code alone (https://huggingface.co/papers/2503.15231).
For each “public” entity, store:
- Spec hooks (tests/properties/protocol checks)
- Invariants (“must hold” statements)
- Decision record (rationale, alternatives, consequences)
- Links (depends-on, refines, deprecates, implements, constrained-by, etc.)
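As a sketch, such a per-entity record might look like the following. The schema is my guess at what the bullets describe, not an existing format, and the example entity and its contents are invented:

```python
from dataclasses import dataclass

@dataclass
class EntityRecord:
    """Maker's knowledge attached to one public code entity."""
    name: str
    spec_hooks: list   # tests / properties / protocol checks
    invariants: list   # "must hold" statements
    decision: dict     # rationale, alternatives, consequences
    links: dict        # depends-on, refines, deprecates, implements, ...

record = EntityRecord(
    name="orders.reserve_stock",
    spec_hooks=["test_reserve_stock_is_idempotent"],
    invariants=["reserved quantity never exceeds on-hand quantity"],
    decision={
        "rationale": "pessimistic locking: overselling costs more than latency",
        "alternatives": ["optimistic retry loop"],
        "consequences": ["throughput capped by lock contention"],
    },
    links={"depends-on": ["inventory.on_hand"], "implements": ["SPEC-112"]},
)

# An agent reading this entity starts from the maker's knowledge
# instead of reverse-engineering intent from the code text alone.
print(record.decision["rationale"])
```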