In search of the golden rule: a comparison of two theories of everything
A comparison of David Deutsch and Chiara Marletto’s constructor theory and Stephen Wolfram’s ruliad
I am grateful to Theophilus Human for his insights throughout the process of writing this article.
Table of Contents
· Introduction
· Modeling the universe
∘ The art of modeling
· Wolfram: the ruliad
∘ Motivation: emergence, complexity from simplicity
∘ The golden rule
∘ The spaces of the ruliad
∘ 1. Spacetime: graphing the trajectory of our universe
∘ 2. Branchial space: branching, merging, and entanglement
∘ 3. Rulial space: the heart of the ruliad
∘ Putting it all together: the three spaces
∘ Summary
· Deutsch and Marletto: constructor theory
∘ The problem with prevailing theories
∘ Key definitions
∘ Yet again, the mystery of emergence
∘ The explanatory power of constructor theory
∘ A worked example: the emergence of life
∘ Summary
· Similarities, differences, and the potential for unification
∘ Which model is better?
∘ Key differences
∘ Meta-unification?
· About the author
· Sources and further reading
∘ The ruliad
∘ Constructor theory
∘ Other
Introduction
In this article, we’ll explore two up-and-coming “theories of everything”: David Deutsch and Chiara Marletto’s constructor theory and Stephen Wolfram’s ruliad.
On the one hand, Wolfram’s ruliad seeks to decompose the emergent structures of our universe into simple computational rules, explaining complex structures such as the fabric of spacetime as the product of repeated, elementary dynamics and interactions.
On the other hand, Deutsch and Marletto’s constructor theory seeks to decompose our physical laws into possible and impossible tasks, reframing problems of exact trajectory prediction as the more tractable problems of whether certain tasks can occur in principle.
What motivates these theories? How would these theories be applied in practice? Is either theory better? We’ll explore all of these questions and more.
Modeling the universe
Imagine you have been hired by Meta for their next project: Metaverse 2.0, a project more ambitious than a simple shared virtual space for communal interaction (Metaverse 1.0). Metaverse 2.0, they tell you, will be a universe of its own. You are on the design team, and your task is to create a virtual reality generator capable of rendering the entirety of our universe so thoroughly and accurately that a user experiencing it would find it indistinguishable from physical reality.
How would you go about rendering such an environment? How would you describe it in the first place, in such a way that a human, a computer, or other machine capable of processing information could interpret the description?
These are problems of modeling, a difficult art form in technical spheres. In the same way an artist can choose a certain style or “creative framework” for representing a physical tree on a 2D canvas, so too can mathematicians, scientists, and engineers develop abstractions of physical processes (and even the entirety of the universe) with their own unique “flair.”
Usually, the relationship between models and the underlying theories they embody can be considered as follows: theories are answers to identified problems, or explanations of observed phenomena, while models are representations created to explain theories. Going forward, whenever we refer to the ruliad and constructor theory’s ability to model the universe, keep in mind that this corresponds to each underlying theory’s ability to explain the universe.
The art of modeling
Much like functions, models seek to describe relationships between inputs and outputs. Depending on the model, more importance might be given to one of the following:
- Generation: “illuminating the black box” and detailing the process by which inputs are transformed into outputs.
- Verification: verifying that certain arguments correspond to certain outputs, without necessarily determining a general form for the mapping.
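To make the distinction concrete, here is a minimal sketch in Python, using squaring as a stand-in “model” (the function names are illustrative, not drawn from any particular framework):

```python
# Generation vs. verification, with squaring as a toy model.

def generate(x):
    """Generation: spell out the process that turns the input into the output."""
    return x * x

def verify(x, y):
    """Verification: check that a claimed input/output pair is consistent,
    without providing a recipe for producing y from x."""
    return y == x * x

print(generate(3))    # 9
print(verify(3, 9))   # True
print(verify(3, 10))  # False
```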
Other interesting distinctions can be drawn between models. Wolfram recently developed a framework for conceptualizing the categorization of models, his so-called “general paradigms for theoretical science.” These do a good job of broad, high-level classification, but I like the way Roman Leventov, the author of Engineering Ideas, refines them further according to some Deutsch-esque criteria, namely referencing the way in which models are described and executed (for either generation or verification), and their relation to algorithms, theories, and explanations (in the sense that David Deutsch uses in The Beginning of Infinity).
For a full breakdown of the categories, see Roman’s original post. For our purposes, we can simply focus on the following types:
- Analytical/equation-based. Examples: many models of physics (classical mechanics, general relativity, thermodynamics, fluid mechanics, quantum mechanics), Structural Causal Models, Simulink models. This type corresponds to Wolfram’s “mathematical” paradigm.
- Algorithmic/rule-based (generation). Examples: imperative code, Knuth-style algorithms, cellular automata, tax law, rules of logic (including probabilistic logic!). This type corresponds to Wolfram’s “computational” paradigm.
- Principles/rules-based (verification only). Unlike all other types of models, this type only permits verifying results, not producing them. Examples: David Deutsch’s constructor theory, memory models of programming languages, traffic regulation, other codes of rules, codes of regulation, codes of conduct, etiquette.
At their core, the two models that are the subject of this article — the ruliad and constructor theory — are fundamentally different in their intent and modeling approach.
The ruliad is an algorithmic/rule-based (generative) model of the universe, seeking to understand its fundamental laws and principles via simulation. It takes a reductionist perspective, decomposing the system into “elementary units,” initial conditions, and governing dynamics, and explaining the complex patterns that emerge from the compounded interactions of these simple elements.
On the other hand, constructor theory is a principles/rules-based (verification) model, seeking to provide explanations more fundamental than even theories like quantum mechanics and general relativity, and to unify our existing understanding of the universe. It aims to be scale-invariant, pursuing principles that apply not only at the macroscopic or microscopic level but across domains. While a reductionist approach to physics takes the state of a system and its dynamics to be fundamental, constructor theory takes what is possible or impossible, and why, to be fundamental.
We will explore each in more detail in the following sections.
Wolfram: the ruliad
Motivation: emergence, complexity from simplicity
Imagine we could model the entirety of the universe with a simple program. Everything from subatomic particles to clusters of galaxies, from nanoseconds to billions of years into the past or future, from our existence in this reality to a web of infinite entangled realities, could all be simulated from basic mathematical rules and computations.
It seems like a long shot, one that would require gross estimations, simplifications, and abstractions for a computational description tidy enough to program.
Stephen Wolfram, however, would point out that complex behaviors often “emerge” from seemingly simple programs, a phenomenon aptly called emergence. Generally, emergence refers to properties or behaviors of a system that the individual parts don’t have on their own, only becoming apparent when the parts interact as a larger unit.
One well-studied example of simple programs giving rise to complex behaviors is cellular automata, mathematical objects invented several decades ago by John von Neumann in his studies on self-replicating machines.
Cellular automata consist of a grid of cells, each of which can exist in a finite number of states. The states of these cells are updated over discrete time steps according to update rules, or just rules, based on the states of neighboring cells. Despite the seeming simplicity of the rules, their interaction with initial conditions and the compounded effect of interactions among neighboring cells yields surprising complexity.
Interesting properties arise from even the simplest class of one-dimensional cellular automata, the so-called elementary cellular automata. These can be classified by their simple state and rule systems:
- States: Either 0 or 1 (binary value) for each cell
- Rules: When applied to a cell, the update depends only on…
- the state of the cell itself
- the state of the cell to the left
- the state of the cell to the right
Since there are 2³=8 possible binary states for the three cells considered in an update rule, there are a total of 2⁸=256 elementary cellular automata. Each update rule can be encoded with an 8-bit binary string, representing the sequence of single-bit “outputs” for all possible three-bit “inputs.” For example, rule 110 corresponds to 01101110:
The animation below shows the evolution of rule 110. Each T-shaped configuration of four cells under the heading “rule 110” shows how the three neighboring cells (highlighted in yellow) yield the next generation’s central cell (stacked directly below).
Many simple update rules give rise to surprising behavior. Rule 110 is often used as an example of this since, in addition to toeing the line between stability and chaos, it is Turing complete. In other words, given enough time and memory, this cellular automaton could perform any computation within the repertoire of a computer.
If the program is so simple, where does the complexity come from? The program’s rules appear to have little direct relationship to its behavior at an arbitrary time — as the program progresses, the update process repeats iteratively, and the cumulative interactions become harder to disentangle. For this reason, there’s no way to generalize the jump from the pattern at time step 0 to time step N; it’s necessary to iterate through every intermediate time step. This is the principle of computational irreducibility — for some systems, there are simply no shortcuts that can be taken to predict the outcome. You have to simulate the entire process.
Note that while computational irreducibility and emergence are certainly linked, it is contested whether a computationally irreducible system necessarily demonstrates emergence. For example, see here.
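As a hands-on illustration, here is a minimal Python sketch of an elementary cellular automaton. The rule number’s 8-bit binary expansion supplies the output bit for each of the eight possible three-cell neighborhoods; the starting row (a single 1 in the middle) is just one convenient choice:

```python
def step(cells, rule=110):
    """Advance one generation; cells beyond the edges are treated as 0."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2])) & 1
            for i in range(len(cells))]

def evolve(width=63, steps=30, rule=110):
    """Start from a single 1 in the middle and print each generation."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

evolve()  # try rule=30 or rule=90 for other classic behaviors
```

In line with computational irreducibility, note that the sketch can only produce generation N by computing every generation before it.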
The golden rule
If we’re adhering to this framework of “simple rules produce complex, emergent behaviors” (i.e., using an algorithmic/rule-based model), we can’t just observe the state of our universe at different time points and solve a system of equations to determine the “golden rule” as we could with an analytical/equation-based model. Due to the principle of computational irreducibility, we have to consider intermediate time steps and patiently compute every iteration of the simulation.
So, how would we find such a golden rule describing our universe? An alternative approach would be to construct our computational framework and brute-force search through all possible evolutions of states for the best match. It may not have the glitz and glamor of an analytical/equation-based model, but it’s the only way to feasibly find such a rule or set of rules.
This raises the question: what will the computational infrastructure of our model be? How will we computationally represent our universe?
The spaces of the ruliad
Wolfram’s ruliad can be broken down into the three key spaces it inhabits. If you’ve ever heard of Superstring Theory, or the ten dimensions it posits, you’ll see that each of the ruliad’s spaces roughly encompasses several of these ten dimensions:
- Spacetime
  - Physical space (dimensions 1–3)
  - Time (dimension 4)
- Branchial space (dimensions 5–6)
- Rulial space (dimensions 7–9)
(For a more detailed explanation of Superstring Theory’s hierarchy of dimensions, see QuestSeans’ 10 Dimensions of Reality article.)
1. Spacetime: graphing the trajectory of our universe
For the computational infrastructure, Wolfram has proposed spatial hypergraphs that describe relations between “atoms of space.” Hypergraphs are a generalization of graphs: their edges, called hyperedges, can join any number of nodes rather than just two.
One convenient way to represent our hypergraphs is using strings, where elements of the string correspond to hyperedges. We’ll see below that the string fully describes the state of the system at any given time, and updates can be made by substituting substrings according to given rules.
For example, let’s say characters “A” and “B” represent hyperedges connecting three nodes and four nodes (3-ary and 4-ary hyperedges), respectively. Thus, the string “ABBAB” corresponds to the hypergraph defined by the following set of hyperedges:
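As a rough sketch of this decoding (using one illustrative convention in which consecutive hyperedges share a node so the hypergraph stays connected; the figure’s exact layout may differ):

```python
# Decode a string of hyperedge symbols into a list of hyperedges (tuples of node IDs).
ARITY = {"A": 3, "B": 4}

def string_to_hyperedges(s):
    edges = []
    for ch in s:
        k = ARITY[ch]
        # Reuse the last node of the previous hyperedge so the edges stay connected.
        start = edges[-1][-1] if edges else 0
        edges.append(tuple(range(start, start + k)))
    return edges

print(string_to_hyperedges("ABBAB"))
# [(0, 1, 2), (2, 3, 4, 5), (5, 6, 7, 8), (8, 9, 10), (10, 11, 12, 13)]
```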
In the large-scale limit, Wolfram believes hypergraphs like these can show features of continuous space, with graph distances approximating distances in physical space, and the graph itself representing the “instantaneous” configuration of the universe on a spacelike hypersurface.
2. Branchial space: branching, merging, and entanglement
We must now consider the types of rule systems we’ll encounter. In our previous discussion of cellular automata, update rules were both simple and determinate: applying the same rule to the same state yielded the same outcome every time. Of course, the compounding of rules, time step after time step, could lead to unexpected behavior, but rules’ behavior was always known.
We will now be considering rule systems that are simple (in the sense that we’re just “swapping” hyperedges) and indeterminate: there may be multiple ways to update the state, and thus, multiple possibilities for the state at once. This is characteristic of a multiway system, a kind of substitution system in which rules define multiple possible successors for states. A simple example is a string substitution system, which is easily achieved by mapping hyperedges to string elements, as described above.
If we consider the spatial hypergraph to encapsulate the state of the system, then the evolution of (possible) states over time is embodied by a multiway graph, which connects states to their immediate successors obtained by applying update rules. To make it explicit, spatial hypergraphs are nodes in the multiway graph.
Let’s consider the set of string replacement rules {A → BA, B → A}. When applied to a hypergraph, these rules:
- Replace any instance of a 3-ary hyperedge (A) with a 4-ary hyperedge and 3-ary hyperedge joined by one node (BA)
- Replace any instance of a 4-ary hyperedge with a 3-ary hyperedge (A)
If we start with an initial condition A (meaning we initially set the hypergraph to be a single 3-ary hyperedge) and think about all the ways the rules can update the hypergraph, we form the multiway graph.
Let us analyze the first few “slices in computational time,” or “steps,” each of which corresponds to a row of nodes in the multiway graph below. Again, each node in the graph represents a complete state of our system, which corresponds to some configuration of a spatial hypergraph.
- t = 0, current state = A. Starting from A, only one of the rules can be applied (A → BA), so this is the path that is taken, and we arrive at BA.
- t = 1, current state = BA. From BA, now either of the two rules can be applied. Therefore, we arrive at a branch: if we apply A → BA, we will arrive at BBA, but if we apply B → A, we will arrive at AA.
- t = 2, current state = {AA, BBA}. By this point, multiple states are possible. Again, this is one of the key features of the multiway system.
We continue in this way, at each step applying the rules in all possible ways to each state.
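Here is a minimal Python sketch of this multiway evolution for the rules {A → BA, B → A}, applying every rule at every matching position of every current state (a simplified stand-in for Wolfram’s multiway systems):

```python
RULES = [("A", "BA"), ("B", "A")]

def successors(state):
    """All states reachable by one substitution anywhere in the string."""
    result = set()
    for lhs, rhs in RULES:
        for i in range(len(state)):
            if state.startswith(lhs, i):
                result.add(state[:i] + rhs + state[i + len(lhs):])
    return result

def multiway(initial="A", steps=4):
    """Layers of the multiway graph: layer t holds all states reachable at step t."""
    layers = [{initial}]
    for _ in range(steps):
        layers.append(set().union(*(successors(s) for s in layers[-1])))
    return layers

for t, layer in enumerate(multiway()):
    print(t, sorted(layer))
# 0 ['A']
# 1 ['BA']
# 2 ['AA', 'BBA']
# 3 ['ABA', 'BAA', 'BBBA']  <- 'ABA' and 'BAA' descend from both branches (merging)
```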
We can think of each path through this graph as defining a possible history for the system, leading to a complicated pattern of possible “threads of history,” sometimes branching and sometimes merging:
- Branching indicates a “fork” in the road and the creation of new states, i.e., there are multiple possibilities for what could happen next.
- Merging indicates the unification/convergence of states, i.e., multiple states end up producing the same state of the universe in the next time step.
From a “God’s eye” view, this branching and merging is constantly producing entangled threads of alternate and overlapping histories for the universe, yet from our humble human positions embedded within this branching universe, we perceive only a single thread of existence.
We can take a branchial slice across this system, giving us a view of universes that have diverged from ours, and construct a branchial graph by joining states that share an ancestor in the previous step:
To understand how this model relates to physics, we could interpret the nodes of these graphs as quantum states, so that the branchial graph effectively gives us a “map of quantum entanglements” between states.
Just like the large-scale limit of the spatial hypergraph represents physical space, the large-scale limit of this branchial graph gives us branchial space.
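Building on the multiway sketch above (and reusing its successors and multiway helpers), one rough way to extract a branchial slice is to connect two states in the same layer whenever they share a parent in the previous layer:

```python
from itertools import combinations

def branchial_edges(prev_layer, layer):
    """Pairs of states in `layer` that share a common ancestor in `prev_layer`.
    Relies on successors() and multiway() from the sketch above."""
    edges = set()
    for parent in prev_layer:
        siblings = sorted(successors(parent) & layer)
        edges |= set(combinations(siblings, 2))
    return edges

layers = multiway(steps=3)
print(branchial_edges(layers[1], layers[2]))
# {('AA', 'BBA')}: both states descend from 'BA', so they sit close in branchial space
```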
3. Rulial space: the heart of the ruliad
The pictures above are merely small toy models. The full ruliad involves applying all possible rules to all possible initial conditions for an infinite number of steps.
At this point, an analogy might be helpful. In the next few paragraphs, we’ll try to understand rulial space via parallels in language.
What are these rules really?
Rule systems are similar to language systems in that they are a mechanism for representation. Just as we use phonemes, words, sentences, and successive compounding of these symbolic units to convey ideas in language, rules are abstract encodings of the fundamental laws that govern the behavior of the universe. They are the language of models.
As opposed to using letters and sounds, rules may use equations, algorithms, and principles (corresponding to the mathematical, computational, and principles-based modeling approaches described above) as their fundamental building blocks of representation. However, since the ruliad is concerned with a computational map of the universe, algorithms will be our rule building block of choice.
Similar to how cellular automata rules dictate the evolution of patterns on a grid, the ruliad says that fundamental rules could govern the behavior of particles, space, and time, and that these rules are responsible for the emergence of more complex phenomena in the universe, including the laws of physics as we currently understand them.
As we will see in the next section, there are similarities between the ruliad’s conceptualization of rules and constructor theory’s conceptualization of transformations. Potentially, this is where constructor theory can clarify some of the ambiguities of the ruliad, or simply triumph as a better model.
What does it mean to occupy different sectors of rulial space?
Occupying different sectors of rulial space could be understood in the way different languages occupy different coordinates in the space of language representation, which we’ll call “lingual space.” Let’s use the example of English and Russian (not for political reasons, but because I also speak Russian). The two languages have different alphabets, grammatical constructs, colloquial connotations, idioms, etc., so they should have different coordinates in our system of understanding and communicating information.
How we define coordinates is subject to opinion. We could define one dimension for each feature mentioned above, e.g., the first few coordinates could correspond to the language’s alphabet, the next few to its grammar, and so forth, until we arrived at a very high-dimensional space. This would likely encode a lot of redundant information. If we could instead determine a lower-dimensional space of the most important features, in terms of which everything else, from the alphabet to idioms, could be expressed, we would prefer that.
Returning to rules, we first set our coordinate system in rule space by choosing a type of model, which in our case is hypergraph rewriting. Then, different coordinates in rulial space correspond to different rule systems or “rulial encodings” of our universe via this modeling approach.
Wolfram has suggested rules can be coordinatized by representing computations in terms of Turing machines, cellular automata, or any number of simple programs with inherent rule systems. Potentially, program size could serve as a unit of distance. Given any rule, we can imagine writing a program to execute that rule in some language (e.g., in Wolfram Language) and describing the size of the program via its number of tokens.
Wolfram has pointed out that it’s not enough to measure things in terms of “raw information content,” or ordinary bits, as discussed in information theory. Rather, we want some measure of “semantic information content” that directly tells us what computation to do. How we would do this, however, is unclear.
What does it mean to transition between rules? How do we move through rulial space?
Though English and Russian live in different areas of “lingual space,” they aren’t entirely cut off from one another. If they were, how would international relations be possible?
If an English speaker wants to understand something in Russian, she must traverse lingual space to get to the Russian speaker’s home turf. This traversal is achieved through translation, which, in the context of languages, is a mapping of the representation of information expressed in one language to an equivalent representation in the other.
Imagine there is also an alien species we need to communicate with. This alien species is very different from humankind and has greatly enhanced senses: their analogs of human eyes can perceive the full electromagnetic spectrum, their analogs of human ears can detect frequencies in the ranges of 1 Hz to 100 MHz (humans can detect 20 Hz to 20 kHz), and so on and so forth as far as our imagination takes us.
Given the wide range of sensations unlocked by the aliens’ sophisticated anatomies, their alien language contains many constructs for conveying these sensations without any direct correlates to human-understood notions. For example, the aliens have many colors that we humans can’t even conceive of and thus have no direct translation to human language.
Similarly, traversing rulial space means translating a representation of the universe in one rule system to another. Let’s first consider two simple rule systems, System A and System B. System A maps each 1-ary hyperedge (disconnected node) to a 2-ary hyperedge, and each 2-ary hyperedge to a 3-ary and 1-ary hyperedge. System B simply maps each 1-ary hyperedge to a 3-ary and 1-ary hyperedge.
- System A: {1-ary → 2-ary, 2-ary → (3-ary, 1-ary)}
- System B: {1-ary → (3-ary, 1-ary)}
Both rule systems only allow one thread of history, so the multiway systems they produce are trivial, as can be seen in the figure below. The main takeaway, however, is that System A takes two steps to reach the state that System B reaches in one. If both systems are configured in an initial state of one 1-ary hyperedge, for example, then the (2i)th state of System A matches the ith state of System B.
Because every state reached by System B can be matched to a state of System A (System A merely passes through intermediate states along the way), we can say that each system has a proper translation to the other. It is similar to our English-Russian scenario from before, where we were able to express all the concepts of one language in the language of the other. In the context of the ruliad, we might find that we would be able to trace a continuous path between the coordinates of System A and System B in rulial space.
Let’s now consider two rule systems System C and System D, each containing only one rule. System C simply maps 1-ary hyperedges (disconnected nodes) to 2-ary hyperedges. System D maps each 1-ary hyperedge to a 2-ary and 1-ary hyperedge.
- System C: {1-ary → 2-ary}
- System D: {1-ary → (2-ary, 1-ary)}
Though the rules of Systems C and D appear quite similar, they produce different behaviors. Because System C converts all 1-ary hyperedges to 2-ary hyperedges, after one time step, there are no more updates that can be made and the system’s state stagnates — a behavior known as termination. In fact, a graph will terminate unless the right-hand side of a rule contains some relations with the same “arity” as those on the left. On the other hand, since System D has a 1-ary relation on both the left and right-hand side of its singular rule, it continues to grow.
The set of states achievable by System C is clearly smaller than that achievable by System D. The idea that we can’t achieve all the states of System D via the rules of System C could be expressed as System D having no proper translation to System C. It is similar to our human-alien scenario from before, where we were unable to express all the concepts of the alien language in our human language. In the context of the ruliad, we might find that we wouldn’t be able to trace a continuous path between the coordinates of System C and System D in rulial space.
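To make the comparison concrete, here is a toy simulator over multisets of hyperedge arities (a simplification: it ignores which nodes the hyperedges share and tracks only their arities):

```python
def step(state, rules):
    """Apply the rules once to every matching hyperedge. `state` is a sorted tuple of
    arities; `rules` maps an arity to the tuple of arities that replaces it."""
    out = []
    for arity in state:
        out.extend(rules.get(arity, (arity,)))  # unmatched hyperedges are left alone
    return tuple(sorted(out))

def run(initial, rules, steps):
    states = [tuple(sorted(initial))]
    for _ in range(steps):
        states.append(step(states[-1], rules))
    return states

A = {1: (2,), 2: (3, 1)}   # System A
B = {1: (3, 1)}            # System B
C = {1: (2,)}              # System C
D = {1: (2, 1)}            # System D

a, b = run((1,), A, 6), run((1,), B, 3)
print(all(a[2 * i] == b[i] for i in range(4)))  # True: A's (2i)th state matches B's ith
print(run((1,), C, 3))  # [(1,), (2,), (2,), (2,)] -- terminates after one step
print(run((1,), D, 3))  # [(1,), (1, 2), (1, 2, 2), (1, 2, 2, 2)] -- keeps growing
```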
Thus, the topology of rulial space gives us an idea of the relationships between rule systems and which translations of models are possible.
Putting it all together: the three spaces
Tying together the information from above, the hypergraph representing our universe would exist in a realm described by three spaces:
- Spacetime
  a. Physical space. An emergent feature of the generated hypergraph. Graph distances in the hypergraph potentially approximate distances in physical space.
  b. Time. A reflection of the process of computation by which the spatial hypergraph is progressively updated.
- Branchial space. The space of possible “branches of history.” For physics, this corresponds to the space of quantum states, in which entangled states are nearby.
- Rulial space. The space defined by allowing all possible rules of a given model class (in our case, hypergraph rewriting) to be followed between states of a system.
Summary
- Emergence refers to properties or behaviors of a system that the individual parts don’t have on their own; instead, they emerge only when the parts interact as a larger unit.
- As demonstrated by cellular automata, complex behaviors often emerge from seemingly simple programs.
- Computational irreducibility is the idea that for some systems, there are simply no shortcuts that can be taken to predict the outcome; the entire process must be simulated. It is closely related to emergence but not equivalent.
- The ruliad is based on the idea that if simple programs can yield complex behavior, perhaps our whole universe can be represented as a simple program.
- The ruliad can be broken down into the three key spaces it inhabits: spacetime, branchial space, and rulial space. At the most fundamental level, all three spaces are generated via hypergraph rewriting.
- We start with a hypergraph representing the state of our system and a rule system describing how hyperedges are updated. At each time step, multiple rules may apply to one state, creating multiple possibilities of state evolution (“branches of history”) for the system. This is aptly described by a multiway system, in which rules allow multiple possible successors for states.
- Spacetime consists of physical space and time. Physical space is an emergent feature of the hypergraph, i.e., as the hypergraph is progressively updated, it may begin to demonstrate a structure that resembles atoms in physical space. On the other hand, time reflects the computational process by which the hypergraph is progressively updated.
- Branchial space encompasses all possible “branches of history” for the multiway system. We can glimpse a section (or hypersurface) in branchial space by slicing across the multiway graph at a certain time step. For physics, this corresponds to the space of quantum states.
- Rulial space is defined by allowing all possible rules of a given model class to be followed between states of a system. The topology of rulial space tells us the relationships between rule systems and which translations of models are possible.
Deutsch and Marletto: constructor theory
Because of the way in which scientific knowledge is created, ever more accurate predictive rules can be discovered only through ever better explanatory theories. So accurately rendering a physically possible environment depends on understanding its physics.
— David Deutsch (Fabric of Reality, p. 117)
The problem with prevailing theories
In general, we tend to deduce physical laws from observations of things that do happen. This usually corresponds to a predictive approach to science, as observations are used to extrapolate data at other, non-sampled points. For example, an observation recorded at one point in time may be used to predict data in the future or past. This is valuable, of course. We need accurate predictions to launch spacecraft without them exploding, to estimate our ETA for work, and really to execute most tasks in our daily lives.
Our prevailing theories of physics are wielded in predictive capacities: given the initial conditions of a system and the laws governing its dynamics, we simulate the trajectory of the system over a specified time interval.
However, considering only what does happen when deducing physical laws is a constrictive approach. As was touched upon in the ruliad section (even if only in the abstract, computational sense), many threads of history exist for our universe, all at once. Singling out one discards a whole range of phenomena that could have happened. After all, why throw out the baby with the bath water?
By shifting our focus from what did happen to what could have happened, we are no longer in a prediction paradigm. We are in an explanation paradigm.
Take, for example, the trajectory of a ball tossed into the air. We know the initial position and velocity of the ball, and we can derive the equations for projectile motion from Newton’s laws. Heck, we don’t even need to derive them if we can find them online! If we want a prediction of the exact path taken by the ball, we can simply plug and chug.
If we are instead concerned with all possible paths of the ball, not predicting a specific trajectory, we are forced to think about the constraints of our problem. Considering an extreme case, we know it’s impossible for the velocity of the ball to exceed the speed of light (special relativity). We also know, with more immediate relevancy, that it’s impossible for the ball to accelerate unless a net force acts on it (Newton’s 1st law). We may proceed like this, adding to our list of what we know is possible and impossible about the ball’s dynamics until we achieve a sufficiently bounded set of possible paths — or equivalently, a sufficiently simplified, unified, or deepened set of explanations.
This approach of framing our problem in terms of counterfactuals, fundamental laws about what can and can’t occur in principle, is at the heart of constructor theory. In other words, we start out by conceiving which physical tasks are possible vs impossible, then work out the consequences.
Note how the counterfactual rewording of a law tends to state what is impossible. This implicitly tells us every alternative to the impossibility is possible (subject to the constraints of other laws), and it is usually easier to define a single impossibility than list out every possibility.
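As a toy contrast between the two paradigms, consider the tossed ball again: the predictive route computes one trajectory from initial conditions, while the counterfactual route merely checks whether a proposed motion violates a known impossibility (the constraints below are illustrative, not constructor theory’s actual formalism):

```python
G = 9.81           # gravitational acceleration, m/s^2
C = 299_792_458.0  # speed of light, m/s

def predict_height(v0, t):
    """Prediction paradigm: plug and chug for one specific trajectory."""
    return v0 * t - 0.5 * G * t ** 2

def could_happen(speeds):
    """Counterfactual paradigm: rule a candidate motion in or out in principle."""
    return all(abs(v) <= C for v in speeds)  # impossible to exceed the speed of light

print(predict_height(20.0, 1.0))        # one specific trajectory point (~15.1 m)
print(could_happen([0.0, 20.0, 35.5]))  # True: nothing forbids these speeds
print(could_happen([0.0, 4e8]))         # False: ruled out regardless of the details
```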
What do we mean by “work out the consequences?” This can include:
- “Combining” counterfactual statements to infer other possible and impossible tasks.
  E.g., combine the counterfactual statements of:
  - 1st law of thermodynamics: Perpetual motion machines are impossible.
  - 2nd law of thermodynamics: It’s impossible to convert all heat into useful work.
  - Other possible/impossible tasks.
  → Work out that a limited-efficiency heat engine is possible.
- Simplifying, unifying, and deepening explanations.
  E.g., find higher-level principles (deeper explanations) underlying the counterfactual statements of:
  - Newton’s second law, Einstein field equations, etc.
  → Stationary action principle: It’s impossible for paths corresponding to non-stationary action to have net constructive interference. (I.e., the most probable path will result from stationary action and constructive interference, while other paths will destructively interfere.) We find that this principle encompasses explanations of the specific contexts (classical mechanics, general relativity, etc.) independently explained by the laws above.
As will be illustrated in more detail in the sections below, constructor theory expresses possible and impossible physical tasks in convenient abstractions, unencumbered by the pain of precise problem specification, sensitivity to experimental design (how we acquire data and parameters, etc.), the computational load of recomputing trajectories for new initializations, and the intractability of simulating complex dynamics forward or backward. Because we are concerned with all possibilities, and don’t care about distinguishing them from one another, we greatly reduce the burden associated with computing the details of each one.
In other words, using a predictive theory is like going fishing and recasting our line for every fish. We can be picky about the fish we retrieve, examining each catch (each simulated trajectory) as intensively as we like. Constructor theory, on the other hand, is like casting a net. It may not afford us as much selectivity (we can’t directly extract specific trajectories), but it’s more efficient. In general, we can say there is a tradeoff between:
- Constructor theory
  - Deep explanation
  - Effort required for individual application
- Predictive theories
  - Shallow explanation
  - Effort to generalize an individual observation to a whole range of possibilities
Arguably, applying a deep explanation to a specific instance is easier than taking a particular instance and trying to work backward to a general class or rule.
Constructor theory’s abstraction of physical tasks is what makes it so powerful. The detailed problem specification required by predictive theories inherently creates “jagged edges” when reconciling theories from different domains. Like puzzle pieces from different boxes, even if all the pieces contribute to the same grand picture, misalignments in formalism and underlying frameworks of understanding make a cohesive vision difficult. Constructor theory, however, draws all of its puzzle pieces from one, tidy box. The pieces may not be as intricate as those belonging to any of the predictive theories’ boxes. Yet, they are easy to connect and easily lend themselves to assembling clear and wide-spanning explanations.
Of course, the examples above were contrived, but we’ll get our hands dirty with actual applications of constructor theory in the section “A worked example: the emergence of life.” We’ll build our way up to it in the sections that follow.
Key definitions
First, let’s go over some key definitions. To provide a concrete example of the definitions, we’ll discuss them within the context of how one might rewrite the catalytic process performed by the enzyme glucosidase, which converts the sugar maltose into two glucose sugars (shown in the figure below), in these constructor-theoretic terms.
- Task. The specification of a transformation on some physical system.
  - Example: {1 maltose → 2 glucose}.
- Substrate. The object that is changed during the transformation.
  - Example: maltose.
- Constructor. A physical entity capable of carrying out a given task repeatedly, quite similar to a catalyst in biology. Car factories, robots, and living cells are all accurate approximations to constructors.
  - Example: the enzyme glucosidase.
- Possible/impossible task. A task is possible if a constructor capable of performing it exists (i.e., the laws of physics allow it). Otherwise, it is impossible.
  - Example: A possible task is {1 maltose → 2 glucose}. An impossible task is {1 maltose → 3 glucose}.
- Counterfactuals. Statements about which transformations are possible and which are impossible in a physical system.
  - Example: Anything other than the conversion of 1 maltose to 2 glucose is impossible.
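To make these definitions concrete, here is a minimal sketch in code; the classes and the possibility check are illustrative stand-ins, not constructor theory’s formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Specification of a transformation on some physical system."""
    inputs: tuple
    outputs: tuple

@dataclass(frozen=True)
class Constructor:
    """An entity that can repeatedly cause the tasks in its repertoire."""
    name: str
    repertoire: frozenset

def is_possible(task, constructors):
    """A task is possible if some constructor capable of performing it exists."""
    return any(task in c.repertoire for c in constructors)

hydrolysis = Task(inputs=("maltose",), outputs=("glucose", "glucose"))
glucosidase = Constructor("glucosidase", frozenset({hydrolysis}))

print(is_possible(hydrolysis, [glucosidase]))                            # True
print(is_possible(Task(("maltose",), ("glucose",) * 3), [glucosidase]))  # False
```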
Yet again, the mystery of emergence
In our earlier encounter with the ruliad, we discussed emergence. One of the most mysterious emergent phenomena that has been puzzling scientists for centuries is life. How could such artfully and intricately configured lifeforms like us have arisen from inert matter?
The question is especially perplexing when considered from a reductionist point of view. For example, we may work backward from our highly emergent human forms to consider the development of structure and function — the appearance of design — at the level of organ systems, then organs, then tissues, then cells, then organelles, then molecules, then “elementary objects” (e.g., particles, simple chemicals, etc.).
Many of our laws of physics aim to describe the properties and dynamics of elementary objects. If we imagine these elementary objects as Legos, building blocks that can combine to form other objects, the laws of physics provide no blueprint for the way these universal Legos coalesce and transform into larger, more complex structures that evolve over time. Even the formations that look intentional, like the product of following an instruction booklet or at least some guidance from a parent or guardian, were made by an unbothered toddler who coincidentally made some artistic choices in her process of experimentation.
In this sense, the laws of physics are no-design. Any hint of intentional design we think we see is, in actuality, a product of random luck, the trial-and-error self-improvement of evolution and natural selection.
How could living things have evolved given these no-design laws of physics? How do we explain any number of complex patterns of nature if none of those patterns are the result of an intentional design process?
The explanatory power of constructor theory
Explaining the emergence of life is exceedingly difficult using prevailing theories of physics. Predicting how, given certain laws and initial conditions, particles aggregated to form an organism such as a cat is — to use constructor-theoretic terms — an impossible task. The processes of replication, self-reproduction, and natural selection at the core of evolutionary adaptation — and thus the appearance of design — are highly emergent, involving the compounded interaction of countless particles.
Even if we could predict a trajectory of particles resulting in a cat, this wouldn’t explain whether a cat could have come about without design. The design of the cat, for all we know, could be some miraculous interaction of our initial conditions and laws of motion.
Rather than predict the formation of a cat, we need to explain whether and how a cat is possible under no‑design laws of physics. This is where constructor theory comes in.
Given constructor theory’s focus on counterfactuals, it can explain patterns in domains that are inherently unpredictable, such as evolution. Importantly, the counterfactual form of possibility vs impossibility allows the expression of transformations independently of their constructors. This, for instance, allows us to express which biological reproduction mechanisms are realizable in principle without worrying about the organism’s details.
A worked example: the emergence of life
In a recent paper, Chiara Marletto used constructor theory to provide exact physical formulations of the appearance of design, no-design laws, and the logic of self-reproduction and natural selection. To understand the explanatory power of constructor theory, let’s walk through a simplified version of Marletto’s derivation of the form constructors must take under no-design laws.
1) Frame no-design laws in constructor theoretic terms
First, we have to frame no-design laws in constructor theoretic terms.
We begin by defining generic resources: substrates that exist in effectively unlimited numbers. In the context of early life on this planet, these include only elementary entities such as photons, water, simple catalysts, and small organic molecules.
Now we outline the conditions our no-design laws must satisfy:
- Generic resources can only undergo certain “elementary transformations.” Equivalently, when generic resources are the substrates, no-design laws permit only certain “elementary tasks” to be performed. These tasks are physically simple and contain no design (of biological adaptations); spontaneous chemical reactions are one example.
- There are no good approximations to constructors for non-elementary tasks with only generic resources as substrates. In other words, you can’t spontaneously construct a human out of thin air.
As long as we don’t violate these conditions, we adhere to no-design laws! We can now use this definition to define tasks that are allowed by no-design laws.
2) No-design construction
Consider a possible, non-elementary task T and an object F that can perform T to a high accuracy. For instance, T could be the task of constructing a particular polypeptide chain (protein) from generic substrates, i.e., nucleotides and amino acids. These are simple enough to have naturally occurred in prebiotic environments, so we can consider them generic resources.
F could then be a bacterium. Though it performs other tasks (e.g., reproduction, metabolism, etc.), we can, for this purpose, consider it a protein synthesis factory, using nucleotides as instructions to convert amino acids into polypeptide chains.
Of course, F doesn’t just take in nucleotides and spit out proteins in one step. Per the definition of no-design laws above, there is no good approximation to a constructor for task T that takes the generic resources (nucleotides) as substrates and directly transforms them into polypeptide chains. In other words, T can’t be performed with just elementary tasks — it’s a non-elementary task, after all!
F must execute a procedure, or a recipe, to perform the task T. In particular, the recipe used by F to perform T must be decomposable into sub-recipes (not necessarily sequential) that are allowed by no-design laws and executed by sub-constructors contained in F.
Side note: If we think of transformations as rules, it’s similar to the principle of computational irreducibility described in the ruliad section: if we have a state A that consists of untransformed generic resources (e.g., nucleotides), and a state B that’s the result of applying many rounds of transformations to generic resources (e.g., proteins), we cannot jump from state A to state B with some simpler transformation that approximates all the compounded transformations applied. It is necessary to step through each transformation, one by one.
We therefore break down F into two parts, a recipe (P) and recipe-follower (V):
- V: Programmable constructor that blindly performs the sub-tasks.
- P: The recipe (program) that specifies the sequence of sub-tasks and programs V.
V[P] is a constructor for the task Tₚ, P is the program for the task Tₚ, and Tₚ is considered to be in the repertoire of V.
Because the recipe instantiated in P must be decomposable into sub-recipes, P takes on a modular structure P = (p₁, p₂, …, pₙ), where each subunit pᵢ serves as an instruction, telling V which sub-task to perform when provided substrates.
For example, if we consider the case of constructing a particular polypeptide chain:
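In code, a rough sketch of this separation might look as follows; the three-codon table is a tiny illustrative subset of the genetic code, and real translation machinery is of course far richer:

```python
# V is the recipe-follower (programmable constructor); P is the modular program.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": None}  # None = stop

def V(P, table=CODON_TABLE):
    """Blindly execute each subunit p_i of P, never 'knowing' the target protein."""
    chain = []
    for p_i in P:              # each codon is one instruction
        residue = table[p_i]
        if residue is None:    # stop instruction
            break
        chain.append(residue)
    return "-".join(chain)

P = ["AUG", "UUU", "GGC", "UAA"]  # the modular program (recipe)
print(V(P))                       # Met-Phe-Gly
```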
An important takeaway from all of this is that the ability to decompose a task T into sub-tasks non-specific to T — meaning the sub-tasks can be used to accomplish tasks other than T — implies the programmable constructor V responsible for executing the sub-tasks is also non-specific to T.
In constructor theoretic terms, if there are numerous tasks T₁, T₂, … Tₘ that can be achieved via arrangements of elementary instructions (p₁, p₂, …, pₙ) into different programs P₁, P₂, … Pₘ — and of course execution of those programs V[P₁], V[P₂], … V[Pₘ] — then the constructor V is not specific to any of the individual tasks T₁, T₂, … Tₘ. V has to be programmable to achieve all of the tasks in its repertoire. When V is programmed, it follows the given recipe blindly, which is completely fine under no-design laws.
Therefore, we can say that executing a recipe blindly means implementing non-specific sub-tasks, which are necessarily coded by modular programs.
Alright, V might’ve passed the no-design test, but what about P?
P is inherently specific to the task Tₚ — its very definition is that it’s a program for the task Tₚ! Since the recipe of P can’t be given in the laws of physics, the only other option is that the new instance of P is brought about by blind replication of the recipe P contained in the former instance. The modular structure of P assists with this. Therefore, the modular structure of P serves a dual purpose:
- Construction. (Sets of) subunits pᵢ serve as instructions, telling V which sub-task to perform when provided substrates. In the case of protein synthesis, each set of three nucleotides, a codon, serves as an instruction.
- Copying. Subunits pᵢ are the level at which P is copied. E.g., in the case of binary fission in a bacterium, P is copied one nucleotide at time before a new instance of F (a daughter bacterium) is created.
By replicating the subunits pᵢ (that are non-specific to the task of replication), we can achieve blind replication.
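A sketch of the copying role is even simpler: the copier only ever handles one subunit at a time and never needs to know what the program is a recipe for (again, a cartoon rather than the paper’s formal construction):

```python
def replicate(P):
    """Blind replication: copy P one elementary subunit at a time."""
    copy = []
    for p_i in P:           # no step depends on the task that P encodes
        copy.append(p_i)
    return copy

print(replicate(["AUG", "UUU", "GGC", "UAA"]))  # a new instance of the program P
```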
We conclude that, under no-design laws, the substrate instantiating a program P — the specific component of F — for an accurate constructor must be a modular replicator: a physical object that can be copied blindly, one elementary subunit at a time. In contrast, V — the non-specific component of F — is constructed anew from generic resources.
3) So what?
By extending this result to the specific case of replicators — key players in the theory of evolution as entities that can be copied from generation to generation — we can get the form that replication must take in constructor-theoretic terms to satisfy no-design laws. This, in turn, provides insights into how life emerged without physics having design explicit in its laws.
Summary
- The emergence of life has long puzzled scientists, as it’s surprising that the complex patterns of nature have resulted from a non-intentional design process.
- Though organisms often demonstrate the appearance of design — complex organization of units producing functions irreducible to any of the individual units themselves — the laws of physics are no-design, meaning they don’t inherently facilitate evolutionary adaptation.
- Prevailing theories of physics can predict what a physical system will do at a later time, given certain initial conditions and laws of motion. However, applying laws of motion to particles is an unnecessarily difficult way to express the appearance of design. Thus prevailing theories are ill-suited for problems like explaining evolution.
- On the other hand, constructor theory formulates physical laws as counterfactuals, i.e., in terms of which tasks are possible, which are impossible, and why. Because constructor theory is concerned with what can possibly happen, it can explain patterns in domains that are inherently unpredictable, such as evolution.
- In a recent paper, Chiara Marletto used constructor theory to provide exact physical formulations of the appearance of design, no-design laws, and the logic of self-reproduction and natural selection. A simplified version of Marletto’s derivation was provided above to explain the form constructors must take under no-design laws.
Similarities, differences, and the potential for unification
Which model is better?
So, which model is better? Which would give us a more accurate representation of our universe?
This question is rather ambiguous; how do we define an accurate representation? We might instead ask, for example, whether the ruliad or constructor theory would be better at tackling, or rather impersonating, Laplace’s demon — a demon that, knowing the precise location and dynamics of every atom in the universe, would be able to determine any of their past and future values for all eternity to a T.
It has long been known that irreversible processes would pose barriers to our demon’s clairvoyant abilities, introducing complications such as increasing entropy. In other words, irreversible processes defy the kind of perfect predictability that Laplace’s demon assumes, because they involve information loss or dispersal that cannot be perfectly reversed.
On the one hand, constructor theory might triumph in addressing these shortcomings, as one of the main values of redefining physics in terms of possible and impossible transformations is explaining which processes are and aren’t reversible. On the other hand, by its nature, constructor theory is not a generative model, and its intent is not to predict unknown states and dynamics given known values; it is to explain why we observe what we do. For this reason, the ruliad could serve as a better demon. Whether the ruliad could account for phenomena like irreversible processes remains to be seen.
This brings us back to the key point that the ruliad and constructor theory have different intents. Whereas the ruliad seeks to generate and predict, constructor theory seeks to verify and explain.
Key differences
Overall, there are some fundamental differences between the ruliad and constructor theory, a few of which can be attributed to Deutsch and Wolfram’s approaches to science in general.
First, two definitions pertaining to philosophy of science and epistemology:
- Scientific realism. The view that true explanations of phenomena in science (as well as art, philosophy, and history) do exist, even though we may never actually attain them but only get ever closer to the truth.
- Instrumentalism. The view that science cannot possibly come up with explanations of anything, only predictions. Instrumentalists regard science as a tool for anticipating the world and, with technology, controlling it.
Deutsch is a scientific realist, and rejects instrumentalism as a misguided approach. The primary purpose of science, he contends, is not power or prediction; it is understanding.
At first glance, this is a tempting mindset. Of course, if we could understand phenomena, rather than just predict them, that would be preferable. We all likely agree with Deutsch when he says that good theories are the ones that not only predict, but also provide an underlying why explaining those predictions. However, in practice, we often need to accept purely predictive theories as intermediates to more complete theories.
Although, to play devil’s advocate once more, a realist might say: yes, accept purely predictive theories if that’s all you have access to at the time! Predictions are an indirect means of testing explanations. Once explanations do reveal themselves, these can be used in addition to predictions to discriminate between theories.
It is harder to categorize Wolfram as strictly a realist or instrumentalist. In fact, in the field of simulation, realism and instrumentalism are often blurred. However, Wolfram does believe that all natural processes can be viewed as computations. If reality is equivalent to computation, saying that there are true scientific explanations — i.e., exact computational models of phenomena — would be equivalent to saying that we can “out-compute” the universe. If we believe this to be possible, we (more or less) adopt a realist perspective; otherwise, we adopt an instrumentalist perspective.
To quote Stephen Wolfram:
And actually even before that, we need to ask: if we had the right rule, would we even know it? As I mentioned earlier, there’s potentially a big problem here with computational irreducibility. Because whatever the underlying rule is, our actual universe has applied it perhaps 10⁵⁰⁰ times. And if there’s computational irreducibility — as there inevitably will be — then there won’t be a way to fundamentally reduce the amount of computational effort that’s needed to determine the outcome of all these rule applications.
But what we have to hope is that somehow — even though the complete evolution of the universe is computationally irreducible — there are still enough “tunnels of computational reducibility” that we’ll be able to figure out at least what’s needed to be able to compare with what we know in physics, without having to do all that computational work. And I have to say that our recent success in getting conclusions just from the general structure of our models makes me much more optimistic about this possibility.
I guess we could say that Wolfram is cautiously optimistic about the potential for the ruliad to offer an ultimate computational model of the universe. But the philosophical premise remains that the laws of physics, even if somehow deducible from the basic structure of the universe’s computation, will remain models of what’s happening, and not absolute truths.
We may ask ourselves: does the universe conform to certain truths? When we find a golden rule describing the universe via the ruliad, or some fundamental set of laws and principles via constructor theory, could these ever conceivably be truths, or would they be at most approximations, feeble attempts to describe the indescribable? Maybe the answers to these questions don’t even matter.
Meta-unification?
Though not specifically in reference to the ruliad, Deutsch has criticized Wolfram’s computational approach to understanding physical processes:
Wolfram seems to share a fundamental mistake with the majority of mathematicians and computer scientists, namely the belief that “simplicity” can be defined independently of the laws of physics. This tempts one to see computer programs “underlying” physical processes instead of vice-versa, and so to misconstrue the relationship between computation and physics.
Though the two may be reluctant to reconcile their differences, there is potential value in a merger of their frameworks. Maybe Deutsch’s own constructor theory could fill in the very gaps he identified.
On a superficial level, there appear to be similarities. A series of questions then arises about bridging these two theories:
- Could Wolfram’s rules be formulated as counterfactual physical principles?
- Could these rules then be used to define possible and impossible update events, formulated as constructor-theoretic transformations and tasks?
- Could branchial space be the spatial/network correlate to constructor-theoretic information, with its ability to encode superposition of states?
- Could constructor theory’s search for a universal constructor — a constructor capable of performing all physical transformations allowed by the laws of physics — resemble the ruliad’s search for a golden generative rule?
- Could a theory like assembly theory serve as the bridge between the ruliad’s generative/reductionist approach and constructor theory’s explanatory/anti-reductionist (though not holistic) approach?
I’m not enough of an expert to answer any of these questions. But hopefully, these spark some curiosity, and maybe lead you to do some research of your own :)
Sources and further reading
The ruliad
Constructor theory
https://royalsocietypublishing.org/doi/10.1098/rsif.2014.1226