The EQM as presently construed breaks down into three components: (i) an account of structure and ontology, (ii) a theory of evidence, and (iii) a theory of probability. I shall present some background to (i) and (iii), and talk briefly about (ii) in the context of Everett's original paper, 'Relative-state formulation of quantum mechanics'. This talk is loosely based on my paper 'The Everett interpretation of quantum mechanics: 50 years on'.
Decoherence is absolutely central to the Everett interpretation – but its significance is not immediately apparent. As was recognized by Gell-Mann and Hartle in the 1990s, the true importance of decoherence is that it creates – within the unitarily evolving quantum state – autonomous, stable, quasi-classical systems which are approximately isomorphic to classical universes. These are the “many worlds” of the Everett interpretation – not worlds in the sense of complete, isolated universes, but in the sense of approximately isolated, approximately classical chunks of a larger reality. To see this requires only the mathematics of decoherence theory and the right understanding of higher-order ontology – an understanding which, although commonplace in the special sciences and the philosophy of mind, is not widely understood in the foundations and the philosophy of physics. I will sketch the relevant physics and philosophy and defend the claim that, given decoherence, unitary quantum mechanics is and must be a “many-worlds theory”.
A common understanding of both the Many Worlds theory and the original version of the GRW theory holds that they are ontologically monistic, postulating only the existence of the wavefunction and nothing else. Sometimes an appeal to Ockham’s razor is used to promote this as a boon for these theories. But without more detailed argumentation, it is hard to see how such an austere ontology can make comprehensible contact with the experimental facts that inspired the development of quantum theory in the first place, or indeed with our whole pre-theoretical picture of the physical world.
Physicists rejected Lorentz’s rival theory in favor of Einstein’s special theory of relativity for two related reasons. Firstly, Lorentz’s theory proposes complex dynamical explanations for phenomena that the theory of relativity derives from kinematic considerations. Secondly, Lorentz assumes an underlying Galilean space-time structure that has no a priori merit over the Minkowskian structure—in fact, the Galilean space-time structure is not well-defined physically if Einstein’s light postulate is true.
Here we argue that an information-theoretic formulation of quantum mechanics is preferable to both Bohm’s theory and Everett’s interpretation. To develop the argument, we consider a ‘no cloning’ principle—there is no universal cloning machine—as the crucial principle demarcating classical from non-classical information theories (quantum theories, but also hypothetical theories with ‘superquantum’ correlations that violate the Tsirelson bound). We show that ‘no cloning’ entails that any measurement must sometimes irreversibly change the state of the measured system. Consequently, no complete dynamical account of measurement is possible in theories where the ‘no cloning’ principle is true.
These considerations lead us to reject two dogmas that underlie Bohm’s theory and the Everett interpretation: that the process of measurement should always be open to a complete dynamical analysis in principle, and that the quantum state has an ontological significance analogous to the ontological significance of the classical state as a (perhaps incomplete) representation of physical reality. We show that cloning is possible, in principle, in both Bohmian and Everettian versions of quantum mechanics, but is practically hard for contingent dynamical reasons. By analogy with relativity, we then argue that the information-theoretic interpretation of quantum mechanics is to be preferred for two related reasons. Firstly, these rival formulations/interpretations propose complicated dynamical explanations for the fact that universal cloning is practically impossible (this is what a ‘solution to the measurement problem’ amounts to). Secondly, they assume that a complete dynamical account of measurement is in principle possible, and this assumption has no a priori merit over the ‘no cloning’ principle—in fact, the notion of measurement presupposed is not well-defined physically if the principle is true.
Much of the evidence for quantum mechanics is statistical in nature. The Everett interpretation, if it is to be a candidate for serious consideration, must be capable of doing justice to reasoning on which statistical evidence, in which observed relative frequencies closely match calculated probabilities, counts as evidence in favour of a theory from which the probabilities are calculated. Since, on the Everett interpretation, all outcomes with nonzero amplitude are actualized on different branches, it is not obvious that sense can be made of ascribing probabilities to outcomes of experiments, and this poses a prima facie problem for statistical inference. It is incumbent on the Everettian either to make sense of ascribing probabilities to outcomes of experiments in the Everett interpretation, or to find a substitute on which the usual statistical analysis of experimental results continues to count as evidence for quantum mechanics, and, since it is the very evidence for quantum mechanics that is at stake, this must be done in a way that does not presuppose the correctness of Everettian quantum mechanics. This requires an account of theory confirmation that applies to branching-universe theories but does not presuppose the correctness of any such theory. In this paper, we supply and defend such an account. The account has the consequence that statistical evidence can confirm a branching-universe theory such as Everettian quantum mechanics in the same way in which it can confirm a probabilistic theory.
I will rehearse, and try to sharpen, a number of fairly widespread worries about the possibility of making sense of quantum-mechanical probabilities in the Everett picture - with particular attention to the various decision-theoretic strategies of Deutsch, Wallace, Greaves, and Saunders.
I introduce the Two-State Vector Formalism (TSVF), which describes a quantum system at a given time by a backward evolving quantum state in addition to the standard, forward evolving wave function. The analysis of the MWI in the framework of the TSVF leads to a certain change in the picture of the branching ``tree'' of worlds. Instead of a tree of worlds which starts from a single state common to all worlds and splits at every quantum measurement, the worlds split retroactively as a result of future measurements. More precisely, one can decompose a world into "classical" macroscopic objects rapidly measured by the environment, microscopic "quantum" objects measured only occasionally (at quantum measurements), and weakly coupled quantum objects which are influenced by the environment and other quantum objects only slightly. Ideal quantum measurements yield both forward and backward evolving quantum states, and these states are identical. Macroscopic objects, like detectors, are described by identical forward and backward evolving quantum states due to frequent measurements by the environment, and for them splitting happens as usual, at quantum measurements. But quantum objects are not measured by the environment between quantum measurements, and they are described by a backward evolving quantum state defined by a future measurement. Thus, these future measurements split the world retroactively. My argument for adding quantum objects, or future states of measuring devices, to the description of Everett worlds at a particular time is that this information is necessary for describing the time evolution of weakly coupled quantum objects. I conclude with a speculation about a multiple many-worlds interpretation which arises when we require maximal symmetry in our theory.
The overwhelming success of quantum mechanics on laboratory scales naturally gives rise to the conjecture that quantum theory also applies to the universe as a whole on the scales of cosmology. But familiar textbook formulations of the quantum mechanics of measurements and observers must be generalized for quantum cosmology. Fifty years ago Everett took the first steps toward this generalization by taking seriously the idea that quantum mechanics could apply to a closed system like the universe. His ideas have since been extended, clarified, and to some extent completed by the work of many to create the modern synthesis called variously `decoherent histories' or `consistent histories' quantum theory. This is a quantum framework adequate for cosmology when gross quantum fluctuations in the geometry of spacetime can be neglected. A further generalization of this quantum framework is needed to incorporate quantum spacetime, i.e., quantum gravity. This paper will review a fully four-dimensional, sum-over-histories, generalized quantum mechanics of cosmological spacetime geometry. In such generalizations, states of fields on spacelike surfaces and their unitary evolution are approximate notions that are appropriate only when spacetime geometry behaves classically and can supply the requisite notion of time. The Everettian idea of an evolving universal wave function which branches at definite times is only appropriate in special situations in this yet more general theory. The necessity of such generalizations and modifications, however, should not obscure the fact that they were made possible by the conceptual simplicity of Everett's formulation of quantum theory. These may yet supply a starting point for generalizing quantum mechanics itself.
I consider the ambiguity in quantum gravity that arises from the choice of clock. As I emphasize in earlier work (gr-qc/9408023) this ambiguity leads to an absolute lack of predictability for the laws of physics, or more specifically a complete absence of physical laws. I review the clock ambiguity and then consider possible ways forward given this seemingly critical failure. Remarkably, there is an approach that could lead to a certain amount of predictability in physics. I describe this approach and assess its prospects. I also draw attention to possible flaws in the original assumptions on which the clock ambiguity is based, with special emphasis on the definition of probabilities in the absence of time.
What is physical probability, and why does it have the characteristics that it has? How, in a fundamentally deterministic theory, can there be any room for probability? Because, in EQM, physical probability is identified with categorical physical properties and relations, these questions can be answered with considerably more clarity than in any of its rivals. In this talk I shall focus on (i) the place of uncertainty in EQM and a semantics for talk of uncertainty, and (ii) the explanation of the epistemology of probability, including such questions as: why there is no such thing as luck; why probability can't be measured directly; and why probability can only be manifested, with high credence, in statistics. This talk is loosely based on S. Saunders and D. Wallace, 'Branching and uncertainty'.
We reply to claims (by Tipler, Deutsch, Zeh, Brown and Wallace) that the pilot-wave theory of de Broglie and Bohm is really a many-worlds theory with a superfluous configuration appended to one of the worlds. Assuming that pilot-wave theory does contain an ontological pilot wave (a complex-valued field in configuration space), we show that such claims arise essentially from not interpreting pilot-wave theory on its own terms. Pilot-wave dynamics is intrinsically nonclassical, with its own (`subquantum') theory of measurement, and it is in general a `nonequilibrium' theory that violates the quantum Born rule. From the point of view of pilot-wave theory itself, an apparent multiplicity of worlds at the microscopic level (envisaged by some many-worlds theorists) stems from the generally mistaken assumption of `eigenvalue realism' (the assumption that eigenvalues have an ontological status), which in turn ultimately derives from the generally mistaken assumption that `quantum measurements' are true and proper measurements. At the macroscopic level, it might be argued that in the presence of quantum experiments the universal (and ontological) pilot wave can develop non-overlapping and localised branches that evolve just like parallel classical (decoherent) worlds, each containing atoms, people, planets, etc. If this occurred, each localised branch would constitute a piece of real `ontological Ψ-stuff' that is executing a classical evolution for a world, and so, it might be argued, our world may as well be regarded as just one of these among many others. 
This argument fails on two counts: (a) subquantum measurements (allowed in nonequilibrium pilot-wave theory) could track the actual de Broglie-Bohm trajectory without affecting the branching structure of the pilot wave, so that in principle one could distinguish the branch containing the configuration from the empty ones, where the latter would be regarded merely as concentrations of a complex-valued configuration-space field, and (b) such localised configuration-space branches are in any case unrealistic (especially in a world containing chaos). In realistic models of decoherence, the pilot wave is delocalised, and the identification of a set of parallel (approximately) classical worlds does not arise in terms of localised pieces of actual `Ψ-stuff' executing approximately classical motions; instead, such identification amounts to a reification of mathematical trajectories associated with the velocity field of the approximately Hamiltonian flow of the (approximately non-negative) Wigner function --- a move that is fair enough from a many-worlds perspective, but which is unnecessary and unjustified from a pilot-wave perspective because according to pilot-wave theory there is nothing actually moving along any of these trajectories except one (just as in classical mechanics or in the theory of test particles in external fields or a background spacetime geometry). In addition to being unmotivated, such reification begs the question of why the mathematical trajectories should not also be reified outside the classical limit for general wave functions, resulting in a theory of `many de Broglie-Bohm worlds'. Finally, because pilot-wave theory can accommodate violations of the Born rule and many-worlds theory (apparently) cannot, any attempt to argue that the former theory is really the latter theory (`in denial') must in any case fail.
At best, such arguments can only show that, if approximately classical experimenters are confined to the quantum equilibrium state, they will encounter a phenomenological appearance of many worlds (just as they will encounter a phenomenological appearance of locality, uncertainty, and of quantum physics generally). From the perspective of pilot-wave theory itself, many worlds are an illusion.