The universal reach of human thought
Insights into the brain from physicist David Deutsch and lessons for AI
Table of contents
· Introduction
· Explanation and prediction
∘ The temptations of instrumentalism and reductionism in neuroscience
∘ What explanations don’t require
∘ What explanations do require
· Human ideation is a knowledge-creating process
∘ What is knowledge, and what constitutes a knowledge-creating process?
∘ Why processes of knowledge creation are unpredictable
· Reach as a defining characteristic of knowledge-creating processes
∘ Greater reach = greater explanatory power
∘ Understanding reach via linguistics
∘ The universal reach of human thought
∘ A constructor theoretic interpretation of reach
∘ Humans are universal constructors
· Why we haven’t achieved general AI
· Summary
· About the author
· Sources and further reading
Introduction
With the increasing popularity of AI, we have seen a large shift toward instrumentalism, the philosophy that the goal of science should be prediction rather than explanation. While AI researchers claim to design neural networks inspired by the structure and function of the brain, we have yet to mimic the distinctly human quality of creativity — namely, the ability to progressively improve our understanding of the world and to identify common abstract information in different physical processes, i.e., to generalize.
There’s a common saying that “In theory, there is no difference between theory and practice. But, in practice, there is.” While there is some frustrating truth to this, in the realm of entities capable of computation, humans are not too shabby at applying theory to practice. We are at least far better than the computers and algorithms we’ve designed to date.
Unlike neural networks, we hardly ever suffer from an imbalance between underfitting and overfitting — the so-called bias-variance tradeoff, which forces a compromise between capturing regularities in training data and generalizing well to unseen test data.
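To make that tradeoff concrete, here is a minimal sketch (my own illustration, not drawn from any of the sources): fitting noisy samples of a curve with polynomials of increasing degree shows how a model can be too rigid to capture the training data, or so flexible that it memorizes the noise.

```python
import numpy as np

# Minimal bias-variance illustration (assumed setup: noisy samples of a sine).
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit on the training data
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: test MSE = {test_mse:.3f}")

# A degree-1 fit is too inflexible (underfitting, high bias), while a very
# high degree chases the noise in the 20 training points (overfitting, high
# variance); intermediate degrees tend to strike the balance.
```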
We are masters of both specialization and generalization, both depth and breadth. We can immediately recognize that PowerPoint and Google Slides serve the same purpose, even though they are implemented in completely different ways. We can — less immediately, but still reliably — recognize that the problem ants solve in distributing food among their social hierarchy is similar to the one our brains solve in efficiently allocating energetic resources. (Though “interdisciplinary science” has become a buzzword, it does mark a special talent humans have for translating an understanding of problems from one field to another.) In terms of Marr’s three levels, we can recognize computational-level equivalence despite differences at the implementation level.
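To show what computational-level equivalence looks like in miniature (a toy of my own, not from Marr), here are two procedures that differ completely at the algorithmic/implementation level yet compute exactly the same thing:

```python
def sum_iterative(n: int) -> int:
    """Accumulate the total one term at a time (a loop-based 'implementation')."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total


def sum_closed_form(n: int) -> int:
    """Use Gauss's formula n(n + 1) / 2, a completely different mechanism."""
    return n * (n + 1) // 2


# Different implementations, identical computational-level description:
# both map n to the sum 1 + 2 + ... + n.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```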
In this article, I argue that human cognitive processes, in particular thought and reasoning, are unique in that they can create new explanatory knowledge with universal reach (as per the definition of David Deutsch). The power of human explanations, and their rarity as a type of knowledge in the universe, comes from their capacity to solve problems far beyond their originally intended scope.
In the sections that follow, we will explore the concepts of explanatory knowledge and reach, and discuss implications for both future neuroscience/cognitive science research and the development of general AI. The article is organized as follows:
- The difference between explanation and prediction-oriented science, and why the former is more powerful in the study of knowledge-creating processes such as human thought and reasoning.
- The property of reach and its value in distinguishing between knowledge with explanatory vs predictive power. Examples from evolution, linguistics, and Turing machines demonstrate how the recursive application of simple rules to structured elements permits potentially universal reach.
- Why we haven’t yet achieved general AI: the inherently limited reach of the “language” of neural networks, and why, in order to mimic human creativity, this language must evolve along with the networks it encodes.
Explanation and prediction
It seems neural networks today “know” more than we do, at least in niche domains like what your poop says about your gastrointestinal health. They can recognize patterns, intrinsic organization, and structural relationships in data that aren’t apparent to us at first glance (or many glances, for that matter). They automate the tedious process of finding models that best fit our observations. They allow us to hunt in N-dimensional solution spaces and forage for minima in treacherous objective-function landscapes whose only inhabitants are numbers. They allow us to ask questions and receive answers without any conception of how the answer came to be.
The idea of an omniscient black box that can spit out the correct output for any input is an appealing one. Why trouble yourself with the guts of the box, the twisted and tangled mess of nodes and connections that comprise the network’s inner layers, if you don’t have to? Why bother with explanation if you can have prediction?
If you’ve read anything by David Deutsch, you’ll have been inculcated with the importance of conducting science with the purpose of explanation rather than prediction. Maybe you think this is idealistic or naive. After all, prediction seems to be the money-maker.
But explanation is not just a romanticization of the scientific process. It’s not just the thing theorists mock experimentalists for lacking, or the thing experimentalists mock theorists for being neurotic about. Explanation is a necessary precursor to prediction and a far more powerful catalyst for innovation.
The temptations of instrumentalism and reductionism in neuroscience
Neuroscience is particularly susceptible to the sway of instrumentalism. As is often said, the brain is one of the most complex entities we know of, consisting of layers upon layers of emergence — “resolution into explicability at a higher, quasi-autonomous [almost self-contained] level” (Deutsch, 2012, p. 108).
The underpinnings of human thought and behavior can be studied at any of these levels of emergence: gene sequences and expression, neurotransmitters and neurons, neural populations and regions, neural circuits and networks, cognitive behavior, etc. Each level is emergent in the sense that it demonstrates properties or behaviors that the individual parts (lower-level constituents, e.g., the neurons that comprise a neural population) don’t have on their own, only becoming apparent when the parts interact as a larger unit (e.g., once activity of the neural population is measured).
The most difficult problem remains bridging these levels — understanding how activity at one level gives rise to phenomena at another. Some say that if we can’t explain it, we should simulate it from the bottom up. Once we develop a “simulome” capable of predicting how individual action potentials give rise to large-scale neural activity, we’ll understand everything!
As is evident from the simulome, instrumentalism is often accompanied by reductionism, a bottom-up approach to science that requires we explain things by analyzing them into components. In the same way some physicists believe we must explain everything in terms of subatomic particles, some neuroscientists believe we must explain all behavior and thought at the level of neurons.
A reductionist thinks that science is about analysing things into components. An instrumentalist thinks that it is about predicting things. To either of them, the existence of high-level sciences is merely a matter of convenience. Complexity prevents us from using fundamental physics to make high-level predictions, so instead we guess what those predictions would be if we could make them — emergence gives us a chance of doing that successfully — and supposedly that is what the higher-level sciences are about.
— Deutsch (1997, p. 21)
However, as David Deutsch points out, perfectly good explanations can exist within levels of emergence (Deutsch, 1997, p. 21). Just as you can explain your hunger in terms of skipping breakfast without reference to subatomic particles, you can explain human thought and behavior without reference to individual action potentials.
While some neuroscientists have been tempted by the blissful ignorance of prediction-oriented science, there are still many who hold their ground in prioritizing explanation. A good example is Carandini (2012), which advocates for identifying an intermediate stage of neural computation “between circuits and behavior, the equivalent of computer languages for brain operation” (Carandini, 2012, p. 507). I like to think of the paper as consisting of two parts: what explanations don’t require and what they do.
What explanations don’t require
First, Carandini tells us what explanations at the intermediate level don’t require. He argues that explanations at the intermediate level are self-contained, meaning we can explain mesoscale phenomena without needing to reference low-level biophysical details. Notably, this is a rejection of reductionism.
Let’s again consider the simulome, the effort to simulate brain circuits from the level of neurons and “[generate] unanticipated functional insights based on emergent properties of neuronal structure” (Carandini, 2012, p. 509). The instrumentalist misconception is aptly summarized as follows: “A mere catalog of data is not the same as an understanding of how and why a system works” (Marcus et al., 2014, p. 3).
As long as the simulome’s predictions align with observed neural behavior, the project will be deemed a success. But what will its creators have accomplished other than recreating the brain’s biophysical structure?
Similarly, imagine Deutsch’s example of an oracle that could predict the outcome of any experiment, including correctly mapping low-level neural activity to high-level behavior and vice versa (Deutsch, 1997, pp. 4–5). We would have no use for such an oracle unless we understood which observations conflicted with our best explanations and innate expectations of the world; only then would we be able to decide what experiments to run (have the oracle simulate) to resolve such conflicts. Explanation always precedes prediction.
What explanations do require
Second, Carandini tells us what explanations at the intermediate level do require, in particular, identifying a “core set of standard (canonical) neural computations: combined and repeated across brain regions and modalities” (Carandini, 2012, p. 508).
Just as repeated chunks of code in a program can be factored into subroutines, or complex functions like log can be approximated by sums of polynomials, computations like linear filtering and divisive normalization can serve as “elementary operations” or “fundamental building blocks” at the intermediate level of computation — a basis set from which most other arbitrary computations can be composed.
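As a rough sketch of how such building blocks compose (my own toy code, with arbitrary filter weights and an arbitrary normalization constant, not values from Carandini, 2012), consider a linear filter followed by divisive normalization:

```python
import numpy as np

def linear_filter(signal: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted sum over a local neighborhood (think: a receptive field)."""
    return np.convolve(signal, weights, mode="same")

def divisive_normalization(responses: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Divide each (squared) response by the pooled activity of the population."""
    pooled = sigma ** 2 + np.sum(responses ** 2)
    return responses ** 2 / pooled

# Compose two elementary operations into a more complex one.
stimulus = np.array([0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0])
filtered = linear_filter(stimulus, weights=np.array([0.25, 0.5, 0.25]))
normalized = divisive_normalization(filtered)
print(normalized)   # responses rescaled by the pooled population activity
```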
As we will soon see, the key to explaining complex behavior such as cognitive processes, biological evolution, and the formation of grammatical language is understanding that it emerges from the recursive application of simple rules to structured elements.
From the discussion above, we can summarize the difference between prediction and explanation in the context of neuroscience/cognitive science as follows:
- Prediction is determining specific thoughts and behaviors or patterns of neural activity, i.e., having a black box that spits out the correct output for every input.
- Explanation is understanding the repertoire of possible tasks that can be performed, given certain fundamental operations (e.g., linear filtering and divisive normalization) and rules or constructors (e.g., circuits) that compose them into more complex operations.
Thus, explanation is intimately related to distinguishing between the possible and impossible, while prediction is concerned with accurate mappings of states (e.g., from initial to final state).
Human ideation is a knowledge-creating process
Not only does prediction-oriented science lead to explanatory dead ends, but it may also be impossible to apply to high-level cognitive processes. Human thought and reasoning are likely beyond our ability to predict, not because of their emergence or their irreducibility to a physical level, but because they are processes of knowledge creation, and it is impossible to predict the outcomes of such processes.
What is knowledge, and what constitutes a knowledge-creating process?
As per the definition of David Deutsch (Deutsch, 2012, pp. 93–95), knowledge is abstract information that is useful (solves problems) and thus tends to keep itself physically instantiated. Processes of knowledge creation are inherently evolutionary, with rounds of variation and selection serving as the error-correcting mechanism that allows progress (permits asymptotic encoding of “objective truth” in the limit of infinite variation and selection).
While evolution is usually associated with genes and the development of adaptations, it is more generally a theory of abstract knowledge that tends to keep itself in existence via replicators — entities that (indirectly) contribute to their own copying — the best examples of which are genes and ideas (Deutsch, 2012, p. 93).
The most general way of stating the central assertion of the neo-Darwinian theory of evolution is that a population of replicators subject to variation (for instance by imperfect copying) will be taken over by those variants that are better than their rivals at causing themselves to be replicated. … both human knowledge and biological adaptations are abstract replicators: forms of information which, once they are embodied in a suitable physical system, tend to remain so while most variants of them do not.
— Deutsch (2012, pp. 93–95)
Thus, human theorizing is a knowledge-creating process: The abstract knowledge of ideas and theories is physically instantiated in our minds and behaviors. Through experimental tests, observation, and exposure to criticism, we compare our conjectures with relevant variants and select the best ones for further screening. What emerges after several rounds of variation and selection is the best explanation, which is often new explanatory knowledge — roughly that which answers the question of “why” and which contains knowledge that was not present in the initial conjectures.
This is similar to how evolution creates variations of a conjecture posited by a given gene via random mutation — where the conjecture can be understood as the gene’s guess of what will make it propagate best throughout the population — and subjects the variants to competition via survival and reproduction pressure from the environment.
While the knowledge yielded by evolution is limited in that it’s locally useful to an organism and environment, human theorizing is unique in that it can generate new explanatory knowledge with far-reaching applications.
Why processes of knowledge creation are unpredictable
Now, let us clarify why processes of knowledge creation are unpredictable. Some processes, such as a game of Russian roulette, are unpredictable because they are random. Although we cannot predict the outcome, we know the possible outcomes and the probability of each.
However, processes of knowledge creation, such as the way a scientist proposes a new theory, are unpredictable because they are unknowable. We cannot know the contents of a yet-to-be-created theory because the knowledge that any prophecy of that theory would have to contain has not yet been created. The possible outcomes, let alone their probabilities, are unknown (Deutsch, 2012, p. 197).
Note that this doesn’t mean we can’t explain a knowledge creation process, as for explanation, we don’t require exact predictions. Instead, we require the repertoire of possible physical transformations knowledge may help enact. With new insights into the neural underpinnings of our basis set of cognitive operations (e.g., Carandini, 2012) and the rules or constructors (e.g., circuits) that compose them, we increasingly have physical explanations for these repertoires of possibilities.
Reach as a defining characteristic of knowledge-creating processes
Now that we’ve discussed the difference between prediction and explanation, we can see that one property clearly distinguishes them: reach, which is approximately the applicability of given knowledge to problems beyond those the knowledge was originally intended to solve.
Greater reach = greater explanatory power
Knowledge with predictive power has inherently limited reach, as it is narrowly confined to the problem it was intended to solve. For example, the simulome may develop an algorithm to predict the redistribution of neural activity that occurs when someone becomes aroused, but it likely has to deploy a completely different algorithm to predict what happens when a lesion is created.
On the other hand, knowledge with explanatory power can have universal reach, as our best explanations can apply to problems in domains far beyond their originally intended scope. For example, the operation of divisive normalization was “developed to explain responses in [the] primary visual cortex,” but has recently been shown to “underlie operations as diverse as the representation of odors, the deployment of visual attention, the encoding of value and the integration of multisensory information” (Carandini, 2012). Additionally, one of the most famous models in neuroscience, Hodgkin and Huxley’s model of the action potential, applies knowledge of electrical circuits to approximate neural activity via voltage-sensitive ion channels (Carandini, 2012).
Understanding reach via linguistics
The concept of reach can be further understood from the perspective of linguistics: “With a finite brain and finite amount of linguistic data, we manage to create a grammar that allows us to say and understand an infinite range of sentences, in many cases by constructing larger sentences (like this one) out of smaller components, such as individual words and phrases” (Marcus & Davis, 2021, p. 3).
Our system of syntax is simple yet powerful. Given a finite set of category labels (sentence, noun, verb, etc.) and a finite set of grammatical/syntactic rules, an infinite set of complex structures can be parsed or generated.
As Everaert et al. (2015) put it, recursion underlies this finite–infinite distinction. Recursion boils down to allowing some categories to occur (directly or indirectly) inside categories of the same type. This permits recursive application of syntactic rules, where rules can be applied to the “output” of previous rule applications, thus allowing parsing and generation — unraveling and building complexity layer by layer, respectively — of an infinite range of sentences.
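A toy grammar makes this tangible (my own example, not from Everaert et al.): a handful of categories and rules, but because a sentence may occur inside a sentence, recursive rule application can generate an unbounded set of sentences.

```python
import random

GRAMMAR = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],   # S inside S: the recursive rule
    "NP": [["the cat"], ["the dog"], ["the linguist"]],
    "VP": [["sleeps"], ["sees", "NP"]],
}

def generate(symbol: str = "S", depth: int = 0) -> str:
    """Recursively expand a category until only words remain."""
    if symbol not in GRAMMAR:
        return symbol                                   # terminal word
    # Prefer the first (non-recursive) expansion at depth 3+ so generation halts.
    options = GRAMMAR[symbol] if depth < 3 else GRAMMAR[symbol][:1]
    expansion = random.choice(options)
    return " ".join(generate(s, depth + 1) for s in expansion)

random.seed(1)
for _ in range(3):
    print(generate())
```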
Everaert et al. (2015) also illustrate the power of recursion in terms of Turing machines:
The picture of Turing machine computation provides a useful explanation for why this [recursion underlying the finite–infinite distinction] is so. In a Turing machine, the output of a function f on some input x is determined via stepwise computation from some previously defined value, by carrying forward or ‘recursing’ on the Turing machine’s tape previously defined information. This enabled for the first time a precise, computational account of the notion of definition by induction (definition by recursion), with f(x) defined by prior computations on some earlier input y, f(y), y < x — crucially so as to strongly generate arbitrarily complex structures.
In this way, human language has universal reach. From a finite basis set of written symbols and grammatical rules governing their structure, an infinite number of ideas can be expressed via a composition of symbols in the basis set and recursive application of the grammatical rules.
Humanity’s achievement of universality in language required us to switch from representing basic ideas with pictograms — a “complete list” approach requiring a new one-to-one mapping for every new idea — to representing basic ideas with words — a “symbol + rule” approach using a finite set of grammatical rules operating on a finite set of symbols (letters).
The former system had finite reach, as it could only be used to represent ideas with existing entries in the complete list. Every new idea required modification of the basis set. The latter system has universal reach, as it can be used to represent ideas for which we have no formal representations. If we decide to express a new idea via a certain verbal identifier (composed of phonemes), we can easily translate this to a new word in our language without creating new letters. Similarly, we can compose words into phrases and sentences carrying more complex meanings.
The reach of the symbol + rule approach is in part because most alphabets encode how words sound, i.e., letters roughly correspond to phonemes. Our writing system thus exploits regularities in verbally expressed language, and our rules of composing and interpreting strings of letters as words contain more knowledge than a complete list of (pictogram, meaning) mappings ever could. Moreover, rules of morphology, syntax, and other grammatical constructs encode regularities in composing and interpreting strings of words, and contain more knowledge than a complete list of (string of pictograms/words, meaning) mappings ever could.
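As a toy contrast (my own illustration, with a made-up vocabulary), the difference between the two approaches looks roughly like this:

```python
# Complete-list approach: every idea needs its own dedicated pictogram entry.
pictograms = {"sun": "<pictogram 1>", "water": "<pictogram 2>"}

# Symbol + rule approach: a finite alphabet plus a general composition rule
# (spelling) covers words never anticipated when the symbol set was fixed.
alphabet = set("abcdefghijklmnopqrstuvwxyz")

def spell(word: str) -> str:
    assert set(word) <= alphabet   # only the finite symbol set is ever needed
    return " ".join(word)          # the rule: compose symbols in order

print(spell("sun"))                # works for old ideas...
print(spell("telescope"))          # ...and for ideas with no pictogram entry
# pictograms["telescope"] would raise a KeyError: the list must be extended
# for every new idea, which is why its reach is finite.
```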
The universal reach of human thought
For the same reasons, our thought and reasoning processes have universal reach. Our internal representations of ideas and the rules for composing them to form more complex ideas must encode some regularities of abstract relations.
A good example from Marcus and Davis (2021) is that we don’t recognize two people as a pair of siblings by comparing them to some memorized, internal catalog (complete list) of every pair we’ve encountered. Rather, we have internal representations of what siblings are and the rules by which we can deduce that people are siblings in relation to other abstract concepts (e.g., if two people have the same parents, then they are siblings). We thus use a symbol + rule approach.
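A minimal version of this symbol + rule deduction (my own sketch, echoing the example from Marcus and Davis, 2021) might look like the following, where sibling-hood is derived from an abstract rule over parent relations rather than looked up in a memorized catalog of pairs:

```python
parents = {
    "alice": {"carol", "dave"},
    "bob":   {"carol", "dave"},
    "erin":  {"frank", "grace"},
}

def are_siblings(x: str, y: str) -> bool:
    """Rule: two distinct people with the same parents are siblings."""
    return x != y and parents[x] == parents[y]

print(are_siblings("alice", "bob"))   # True, even if this pair was never seen before
print(are_siblings("alice", "erin"))  # False
```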
A constructor theoretic interpretation of reach
To use the terminology of constructor theory (Deutsch, 2013), reach is also related to the repertoire of tasks or transformations that are possible for a constructor, a physical entity capable of carrying out a given task repeatedly without changing its own composition. (See this article I wrote for an overview of constructor theory.) For instance, the constructor might be a ribosome, whose repertoire is the set of polypeptide chains it can construct, or a computer or brain, whose repertoire is the set of computations it can perform.
A good example of reach in relation to repertoires of possible tasks is the property of computational universality (Turing completeness) that a Turing machine may have. A Turing machine with this property can be said to have the combined repertoire of all other Turing machines (Deutsch, 1997, p. 132).
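To keep this from being too abstract, here is a minimal Turing-machine simulator (an illustrative sketch of my own; the rule table below just increments a binary number). Its entire repertoire is fixed by a finite rule table, and a universal machine is one that can read such rule tables from its own tape and emulate any of them.

```python
def run(rules, tape, state="start", head=0, max_steps=1_000):
    """Execute a finite rule table on an unbounded (sparse) tape."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")                  # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rule table: scan to the rightmost bit, then add 1 with carry.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run(increment, "1011"))   # -> "1100"  (11 + 1 = 12 in binary)
```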
In a Turing machine, complex patterns and functions emerge from the compounded interactions of simple elements and rules. In the same way “component reuse” in a Turing machine allows useful sub-patterns of the machine to be harnessed for even more complex functionality, the achievement of possible, stable states in evolution allows successive rounds of variation and selection to build upon existing adaptations and achieve an even greater appearance of design. By possible, I mean that the morphological (phenotypic) change could have appeared due to random mutations of genes. By stable, I mean that the morphological change gave the gene variant an advantage over other variants in spreading through the population.
This is important to understanding that the “appearance of design” in our evolved adaptations and complex anatomical structures does not necessitate a creator with a master plan; rather, the emergence of hard-to-vary adaptations (ones that, if even slightly altered, would not serve the same function, or at least not nearly as well) can be explained by no-design laws of physics.
Note: skip the next three paragraphs if you don’t want stuff that’s super technical!
Recently, David Deutsch and Chiara Marletto used constructor theory to explain how our no-design laws of physics permit the appearance of design (reliable and accurate transformations) in life forms. Consider the example I’ve previously illustrated here, in which a programmable constructor V[P] (e.g., a ribosome) executes its modular program P = (p1, p2, …) (e.g., the nucleotide sequence of an mRNA) and thus implements task T (e.g., constructing a particular polypeptide chain). V[P] has the appearance of design because it seems specific to the non-elementary task T, where non-elementary means it wasn’t just the product of “simple” transformations such as spontaneous chemical reactions. (Deutsch and Marletto actually have a more nuanced definition of the appearance of design related to the adaptation/constructor being “hard to vary,” but let’s use this for simplicity.)
However, if T can be decomposed into sub-tasks non-specific to T — meaning the sub-tasks can be used to accomplish tasks other than T — this implies the programmable constructor V responsible for executing the sub-tasks is also non-specific to T. In other words, this non-elementary task T could have arisen due to some random combination of existing sub-tasks and survived as a stable, recurring combination in successive generations because it conferred the “vehicle organism” greater evolutionary advantage. (By recursively applying this logic, you can arrive at the sub-tasks being elementary transformations.)
In terms of component reuse, we can consider this to mean that the appearance of design may occur when existing components are repeatedly and accidentally repurposed for new tasks that confer the vehicle organism greater evolutionary advantage. This is the epitome of reach — components initially serving one purpose are used for functions beyond their originally “intended” scope.
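Here is a very loose programmatic analogue (my own toy, not Deutsch and Marletto’s formalism): the constructor below is generic machinery that executes whatever modular program it is given, and any appearance of design in the finished product comes from the particular combination of generic sub-tasks.

```python
def fetch(chain):
    """Generic sub-task: add a raw building block to the chain."""
    return chain + ["monomer"]

def bond(chain):
    """Generic sub-task: join the last two elements into one unit."""
    if len(chain) < 2:
        return chain
    return chain[:-2] + [f"({chain[-2]}-{chain[-1]})"]

# Each sub-task is useful for many possible tasks, i.e., non-specific to T.
SUBTASKS = {"fetch": fetch, "bond": bond}

def constructor(program, state=None):
    """V[P]: apply the listed sub-tasks in order, like a ribosome reading mRNA."""
    state = [] if state is None else state
    for step in program:
        state = SUBTASKS[step](state)
    return state

# The 'designed-looking' part is the program P, a particular combination of
# otherwise generic steps.
P = ["fetch", "fetch", "bond", "fetch", "bond"]
print(constructor(P))   # ['((monomer-monomer)-monomer)']
```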
Humans are universal constructors
Because knowledge is a resource that a constructor can use to enact transformations, and because we humans can wield explanatory knowledge with universal reach (physically instantiating it in our minds), we are universal constructors: we “are factories for transforming anything into anything that the laws of nature allow” (Deutsch, 2012, p. 59). We have the capacity to exploit regularities in nature (understand them and abstractly encode them) and thus have the power to transform nature in a way constrained only by universal laws.
For example, we are never limited by the resources currently present in our environment, for unlike all other animals, we can convert those resources into others via our explanatory knowledge. We are thus capable of enacting any physical transformation permitted by the laws of physics, given the appropriate knowledge.
Why we haven’t achieved general AI
The property of reach is also helpful in understanding why we have yet to achieve general AI (GAI). Up to this point, we have only developed specialized AI systems confined to prediction in the domains they were designed for; they are incapable of creating explanatory knowledge the way humans can.
The key to this failure is the difference in the reach of the symbols and rules that comprise human thought and AI algorithms: The “language” of our thought and reasoning processes has universal reach, while the “language” of neural networks (or whatever the learning data structure is) and the programs that update them has finite reach.
Much of what we consider AI is based on machine learning, which constitutes an evolution-esque process of alternating variation and selection. Sometimes the program itself is varied (e.g., genetic and evolutionary programming), sometimes encodings of the solution are varied (e.g., the actions of agents in reinforcement learning), and sometimes the weights or structure of a learning network are varied (e.g., most other machine learning). Regarding selection, a specific objective function may be used as the selection criterion for choosing between variants, whatever the varied entity may be. This takes the name of the fitness function in genetic and evolutionary programming, the reward function in reinforcement learning, and simply the objective function in most supervised and unsupervised learning.
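A bare-bones version of this variation/selection loop (an illustrative sketch of my own; the bit-string encoding and objective are arbitrary) helps show where the knowledge actually lives:

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # implicitly encodes the 'right answer'

def objective(candidate):
    """Selection criterion: count matches with the target (chosen by me)."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    """Variation: flip each bit with some probability."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

random.seed(0)
best = [random.randint(0, 1) for _ in TARGET]
for _ in range(200):
    variant = mutate(best)
    if objective(variant) >= objective(best):   # selection
        best = variant

print(best, objective(best))
# The loop typically converges on the target, but all the knowledge it 'finds'
# was already present in the objective function the programmer wrote.
```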
The problem is that, unlike the evolutionary process of variation and selection that our thoughts and ideas undergo, an evolution-esque process on a computer does not create knowledge.
We often conceive of ideas that can’t be explained merely as a linear combination of the ideas that contributed to them, i.e., ideas that are emergent from their predecessors. However, current AI implementations are only evolution-esque and not evolutionary because no real “emergence of adaptations” occurs. Any sense of intelligence in AI can be attributed to the creativity of the programmer; knowledge is created during the program’s development, not while it runs. As David Deutsch puts it, “The analogue of the idea that AI could be achieved by an accumulation of chatbot tricks is Lamarckism, the theory that new adaptations could be explained by changes that are in reality just a manifestation of existing knowledge” (Deutsch, 2012, p. 158).
If we are to truly mimic an evolutionary (knowledge-creating) process with a computer, the language of the program must evolve along with the adaptations it’s expressing.
Our genetic code evolved to have universal reach — the ability to encode a program for constructing all possible organisms. This is evident in the fact that DNA, which evolved to describe bacteria, has enough reach to describe humans. Our neural representations of ideas also evolved to have universal reach — the ability to encode all possible ideas and support all possible thought-computing processes, in particular, explanation-producing processes. If AI is ever to become GAI, then the language of programs must similarly evolve to have universal reach.
Summary
Like other knowledge-creating processes such as biological evolution, human ideation subjects competing conjectures to trials of variation and selection, yet it yields the unusual product of explanation. This is largely attributable to the systems our brains have developed for physically instantiating abstract information.
Human writing systems developed universal reach with the advent of letters encoding regularities in human utterances and recursive grammatical rules encoding regularities in the expression of ideas. Some Turing machines have universal reach (computational universality or Turing completeness) due to binary strings encoding regularities in informative states and recursive update rules encoding regularities in logic and math.
Similarly, the “language” in which the brain represents abstract ideas has universal reach because it encodes regularities in the laws of physics and nature. Our ability to understand physical processes as complex (and physically distant) as the jets of a quasar comes from the brain’s ability to faithfully mimic the physical properties of our explanatory targets via its own physical processes (e.g., action potentials and electrical currents).
So, a computation or a proof is a physical process in which objects such as computers or brains physically model or instantiate abstract entities like numbers or equations, and mimic their properties. It is our window on the abstract. It works because we use such entities only in situations where we have good explanations saying that the relevant physical variables in those objects do indeed instantiate those abstract properties.
— Deutsch (2012)
Just as some infinities are larger than others, the universal reach of neural representation far exceeds that of language and Turing machine programs. This is due to the relation between reach and a constructor’s repertoire of possible tasks.
If a system of encoding information has universal reach, this implies that a constructor employing such a system can achieve (has a repertoire consisting of) all possible tasks 1) permitted by the constructor and 2) permitted by the laws of physics. A scribe employing a written language with universal reach can write all possible sentences. A Turing machine employing data-manipulation rules with universal reach can compute every function computable by Turing machines. A human brain employing a system of idea representation with universal reach can understand all possible explanations and can thus instruct its host body to enact all physically possible transformations.
While the set of all possible sentences and all possible computable functions are additionally bounded by the constraints of their domains (1), the set of all physically possible transformations is only bounded by the laws of physics (2). Once again, this is the remarkable value of explanatory knowledge and the systems that can encode it.
At the moment, only the human brain is capable of physically instantiating such knowledge. The brain is also unique as a knowledge medium in that it supports processes of knowledge creation.
If AI is to ever achieve human-like creativity, it must similarly become a general-purpose explainer, equipped with a system of idea representation that can encode all possible explanations and support knowledge-creating processes.
Sources and further reading
Deutsch, D. (2013). Constructor theory. Synthese, 190(18), 4331–4359.
Deutsch, D. (2012). The Beginning of Infinity: Explanations That Transform the World. Penguin Publishing Group.
Deutsch, D. (1997). The Fabric of Reality. Penguin Publishing Group.