**What is the Problem of Measurement?**

Quantum mechanics, together with special relativity,
is the basis of most of what we know of the very small, the very light, and
the very fast. In these areas our knowledge is extensive; it would be hard to
say how *good* these theories are (arithmetic, one might say, is good
mathematics). For all that, there is a problem, a strange and amorphous
difficulty: the *problem of measurement.* This problem has become much
more interesting, and much more pressing, in recent years. Hitherto the
difficulty has been so peculiar, and so formless, that physicists have on the
whole thought it a matter of philosophy. But it has become increasingly clear
that the problem is not *purely* philosophical; that on the contrary, it
also has a *physical* dimension (some would say that it is *entirely*
a physical problem). The measurement problem is finally coming of age.

What is the problem of measurement? As a first stab, we can say this: there is a difficulty in accounting for the fact that measurements have any outcomes at all. In Heisenberg’s words: “it is the ‘factual’ character of an event describable in terms of the concepts of daily life which is not without further comment contained in the mathematical formalism of quantum theory, and which appears in the Copenhagen interpretation by the introduction of the observer.”[1] The interpretation to which Heisenberg refers is usually credited to Niels Bohr, one of the founders of quantum mechanics. It remained the orthodoxy for several decades; but consensus on this matter has since crumbled, and it no longer commands much agreement today.

First some remarks on the basic ideas of
quantum theory. A fundamental concept of the standard formalism is the *state*,
in many ways the analog of the classical concept of state, where one has an
exhaustive specification of all the properties of a physical system at a given
time. Classically, the state of a system changes in time according to dynamical
equations of motion that are fully deterministic. In quantum mechanics there
are also equations of motion for the state, called the “unitary” equations of
motion, which are *also* fully deterministic. Just like their classical
counterparts, they too respect the familiar space-time symmetries. The
important difference is that in quantum mechanics the equation of motion of the
state is invariably *linear*. This means that given two quite different
states, each a solution of the equations of motion (so each a physical
possibility), the sum of the two states (with arbitrary coefficients) is also a
solution (is also a physical possibility). From a mathematical point of view,
this feature of quantum mechanics is at the heart of the measurement problem:
it is called “the superposition principle”. Only linear wave theories have this
property in classical physics, and there of course we never see stable
localized objects, of the sort that makes up ordinary matter.
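The superposition principle can be put in symbols (a minimal sketch; the Schrödinger equation here stands in for whatever unitary equations of motion are in play):

```latex
% If \psi_1 and \psi_2 each solve the (linear) unitary equation of motion,
i\hbar\,\frac{\partial \psi_k}{\partial t} = \hat{H}\,\psi_k \qquad (k = 1, 2),
% then so does any superposition with constant coefficients:
\psi = \alpha\,\psi_1 + \beta\,\psi_2
\quad\Longrightarrow\quad
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi .
```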

The measurement problem put in these
terms is that the state, as it evolves under the unitary equations of motion,
does not correspond to any observed motion of matter, or, when it comes down to
it, to any *imaginable* motion of ordinary matter.

How then does quantum mechanics make
contact with ordinary physical systems, of the sort that we actually see? So
far as experiments are concerned there are special, additional principles,
the “measurement postulates”. According to these, one first determines the *kind*
of measurement that is about to be performed (whether, say, of particle
position, or momentum, or direction of spin). The state of the system at the
time of measurement is then written as a list of numbers, each of which gives
the *probability* of a particular outcome. The same list can be rewritten
in a number of ways, corresponding to the different types of experiments that
can be performed, with well-defined rules linking the various lists (rules
again involving only linear transformations). In this respect the choice of the
type of experiment resembles the choice of rectilinear axes (or “basis
vectors”) in Euclidean space; given these, and a vector in that space, its
components can also be specified (and again, these components change linearly
with change in basis). Since the state must also be “prepared” (it has to be
produced by a “state-preparation device”), the theory gives us a recipe for
providing a list of probabilities, conditional on the preparation device, for
each possible outcome of the measurement device. That is enough to provide the
theory with empirical tests; the observed relative frequencies of outcomes are
compared with the theoretically calculated probabilities. And that’s it:
nothing more than that is provided.
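In symbols, the recipe just sketched is the Born rule (a sketch in standard Dirac notation; the basis of states |aᵢ⟩ is the one fixed by the kind of measurement chosen):

```latex
\lvert \psi \rangle = \sum_i c_i \lvert a_i \rangle ,
\qquad
\Pr(\text{outcome } a_i) = \lvert c_i \rvert^{2},
\qquad
\sum_i \lvert c_i \rvert^{2} = 1 .
```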

In summary, although the state seems to
describe the properties of atoms and electrons and so on, in a way not so
different from classical physics; and although it undergoes a complicated and
interesting dynamical development in time, again similar to the classical
state; it nevertheless only has a clear meaning when the state is referred to *experiments*
that we do or might perform, and the probabilities of the various outcomes of
those experiments

What I have said so far is entirely uncontroversial. So far we have a kind of “minimal” interpretation of the standard formalism – the basic method for applying it to concrete experiments. The state is on the one hand bound up with the microscopic system, and on the other with the probabilities of measurement outcomes. At this point the problem of measurement is foreshadowed by the simple query as to how the two are to be related.

One response is to eliminate altogether the reference
to the microscopic system. The state is *only* a list of numbers relating
a preparation device with the probabilities of measurement outcomes, the
particular list depending on the kind of final measurement to be made.

This is clearly a form of instrumentalism, particularly if no further interpretation of the formalism of quantum mechanics is on offer – if, as Bohr liked to put it, the theory is only a “symbolic calculus”; if “there is no quantum reality” (although Bohr was not really an instrumentalist, as we shall see in a moment).

Of course, on an instrumentalist interpretation, we
do not *have* to deny that there is any underlying microscopic reality. We
may plead agnosticism instead. We may suppose that there is a microscopic
world, but that quantum mechanics does not describe it directly. The quantum
mechanical state, as a summary of the statistical relations between pairs of
macroscopic experiments (preparation and detection), does give us something to
go on in understanding the microscopic realm, but any puzzles here should not
impact on our understanding of what the state is (so says the instrumentalist).
There may be a problem of understanding the quantum world, but there is no
problem of measurement.[2]

Indeed, if quantum mechanics is understood in this
way, we do not so much have a problem of measurement as a problem in making
precise the phenomena to which the state *does* refer, the observable
preparation and detection processes. Why not describe these processes directly,
as undergoing the probabilistic changes that they undoubtedly *do* seem to
undergo? If we cannot find a deterministic way of doing this, why not describe
them in terms of *indeterministic* dynamical equations? (Nowadays we have
plenty of experience of equations like this.) Better still, why not look for a
theory which describes the macroscopic quite generally, treating “experiments”
merely as certain sorts of dynamical processes among others? Of course it may
be that if we do this we will have to modify or supplement the usual equations
of quantum mechanics, but why not make the attempt, so as to accurately depict
what we actually see?

There was a time when it was thought that there were
*impossibility proofs* obstructing this kind of program. If one makes
sufficiently stringent conditions on how much of the quantum mechanics of *microscopic*
objects must be preserved (in terms of relationships between certain dynamical
variables as given by the unitary dynamics) it is in fact quite easy to rule
out any of the alternatives. But this, it was eventually made clear, is not a
very convincing argument; it should never have cut any ice for an
instrumentalist; even a realist would agree that it is enough to develop a
theory of individual processes that agrees with the *observable* data.

Why isn’t standard quantum mechanics just such a
theory? Because the unitary formalism fails to describe the experimental apparatus
altogether. Neither do the measurement postulates (not even when supplemented
by the “projection postulate”, the postulate that is required to account for
repeatable measurements – where, if a measured quantity takes a certain value,
the state had better be replaced by one that yields that value with certainty,
if the theory is to predict what is observed). The measurement postulates make
no reference at all to the details of the apparatus, but only to what the
apparatus is *intended to* *measure* – some particular microscopic
dynamical variable, expressed in terms of a particular basis. Now if *this*
feature of the formalism is held up as a *necessary* (or even as a *desirable*)
feature, the philosophy on offer is unmasked as something much closer to
idealism. It is the view that quantum mechanics does not describe any physical
systems at all, but only our *investigations*, only the *state of our
knowledge*.

Once put in this light, it is hard to take this sort
of instrumentalism seriously. So long as there are alternatives on offer –
whether a modified formalism, or an alternative interpretation of the extant
formalism, which provide descriptions of observable phenomena at least – we are
bound to reject it. At the very least, we can surely supplement the formalism,
making use of the concepts and methods of *classical* physics, at least at
the macroscopic level. In practice the latter was never in question, and we do
better to clearly acknowledge it. With that the problem of measurement may be
stated: what is the relationship between quantum and classical physics? How are
the two theories to be reconciled with each other?

On occasions Bohr hinted at the more idealistic form
of instrumentalism just reviewed, but this was not the picture on offer in the
early and critical period, when quantum mechanics was newly created, and when
the question of whether or not it could be considered fundamental was a matter
of urgency (a question that had to be *settled*).

The most important difference is that according to Bohr the quantum mechanical state does describe the individual system. It must be referred to a context of experiment, but it is not defined in terms of the statistics of preparation and detection events.

This is the absolutely crucial move: with this,
there is a clear notion of “the microscopic system” in place, the proper object
of physical inquiry, no matter that – according to Bohr – there would be a
number of constraints on what can meaningfully be said of it. In fact Bohr
insisted on using fragments of *classical* descriptions (one of a number
of mutually exclusive “complementary” descriptions) – which fragment, on any
particular occasion, to be dictated by the experimental context (and the latter
too was of necessity to be described in classical terms). A context of
experiment must always be in place if anything meaningful was to be said about
the individual microscopic system.

Now for the measurement problem on this strategy. A first version is this: microscopic systems (and hence the macroscopic) are in some sense probabilistic. If the state says all there is to say about the microscopic, so that it is a “complete” description of the microscopic, then insofar as there are probabilistic microscopic events there will be corresponding changes in the state. But the state does not evolve in this way under the unitary equations of motion. It follows that these, the unitary equations, cannot be the whole story; either that or else the state is “incomplete”.[3]

The admission of incompleteness would be tantamount
to admitting that a better, *more* complete description of the
microscopic, might yet be found – and that would imply that quantum mechanics
was at best an interim theory that should eventually be superseded. Bohr on the
contrary insisted on the completeness of the state: its apparent deficiencies
were to be attributed to a kind of *philosophical* discovery, conditional
to be sure on an empirical one (the existence of Planck’s constant), that there
is a limitation to the classical “mode of description”. This leads in turn to
the restriction to one or other complementary descriptions, but in the first
instance the limitation is this: *no theory of the microscopic, in the sense
of a system of precise equations, can be applied to any objective phenomenon* (I
shall give the details of this argument shortly). In particular, the unitary
dynamical equations, in themselves, cannot dictate the course of events leading
to any observed phenomenon.

But Bohr did not formulate the Copenhagen
interpretation as a response to the measurement problem. Rather, the
constraint just mentioned, what I shall call “Bohr’s constraint,” was supposed
to follow from a more fundamental epistemological principle, namely that *an
objective phenomenon is only defined relative to an observation,* where the
latter must necessarily be described by means of classical concepts. I shall
call the latter “Bohr’s principle of significance” (or “Bohr’s principle” for
short); it has clear affinities with Kant’s philosophy.

For Bohr it was therefore more than satisfactory that the concept of observation should enter into the measurement postulates, although he had no particular inclination to bring in its wake subjective aspects of observation, or anything purely mentalistic.[4] Chiming with this, it was a matter of indifference as to whether one introduced only the experimental apparatus, or also the observer (so long as it was described classically). Further, it did not matter very much precisely where the apparatus was supposed to leave off, and where the system under measurement began: for Bohr the point of the apparatus was to establish a fragment of the classical mode of description (one or other complementary descriptions), to be applied to the quantum system too. (Here Bohr was on dangerous ground. It is true that the predicted probabilities are stable under changes in this boundary – the von Neumann “cut” – but not if it is pushed too far to the microscopic.[5])

Finally, the last and most important ingredient of
Bohr’s interpretation was what he called “the quantum postulate”. At times this
sounded like a purely empirical principle (the existence of Planck’s constant),
but the use Bohr made of it extended well beyond any empirical evidence: the
postulate “attributes to any atomic process an essential discontinuity, or
rather individuality, completely foreign to the classical theories and
symbolized by Planck’s constant of action.” Bohr used this to supplement the *classical*
description of the experimental process, to describe the effect on the
underlying microscopic system, and *vice versa*.

Here is Bohr’s account of the matter:

Now,
the quantum postulate implies that any observation of atomic phenomena will
involve an interaction with the agency of observation not to be neglected.
Accordingly, an independent reality in the ordinary physical sense can neither
be ascribed to the phenomena nor to the agencies of observation. After all,
the concept of observation is so far arbitrary as it depends upon which objects
are included in the system to be observed. Ultimately, every observation can,
of course, be reduced to our sense perceptions. The circumstance, however,
that in interpreting observations use has always to be made of the theoretical
notions entails that for every particular case it is a question of convenience
at which point the concept of observation involving the quantum postulate with
its inherent “irrationality” is brought in. (*op cit* p.54).

The “irrational” element – that shows up not in the
unitary dynamics, but in the marriage of quantum and classical concepts
(concepts that *must* be married, according to Bohr’s principle of
significance) – is not to be further explained. One can of course produce a
more encompassing description of the apparatus, referring it to some other
experimental context, but one gains nothing by that: thereby one only shifts the
von Neumann cut.

There is an “irrational” element to nature: so stands the measurement problem on Bohr’s philosophy. The unitary formalism cannot be used to illuminate the process of measurement; on the contrary, it is by appeal to the context of measurement, bringing into play the quantum postulate, that one interprets the unitary formalism in terms of one of a number of complementary descriptions.

The discrete change in the action, in multiples of Planck’s constant, was also called “the quantum jump”, and, in terms of wave mechanics, “the collapse of the wave-packet”. It can be incorporated into the measurement postulates (and it must be so incorporated for the special case of repeatable measurements) as the projection postulate. To put the conclusion of Bohr’s argument in these terms, one should not expect to analyze the process of reduction of state in terms of the unitary equations (or any other precise system of equations); it is already taken care of by the measurement postulates.
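In these terms the projection postulate can be written out explicitly (a sketch; Pᵢ is the projector onto the subspace associated with outcome aᵢ):

```latex
\lvert \psi \rangle \;\longrightarrow\;
\frac{P_i \lvert \psi \rangle}{\lVert P_i \lvert \psi \rangle \rVert}
\qquad \text{with probability } \lVert P_i \lvert \psi \rangle \rVert^{2} .
```

Repeating the measurement on the new state then yields aᵢ with certainty, as repeatable measurements require.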

The conclusion is attractive, as it
settles a number of embarrassing questions about the nature of state reduction
(non-local, indeterministic), which is quite unlike the unitary equations
(local, deterministic). Others tried to find simpler arguments to a similar effect.
A popular strategy was to view the observer in purely mentalistic terms: if
state reduction ultimately concerns the mental, as something irreducibly
distinct from the physical, then no wonder the equation for it is so peculiar,
and without any physical criterion for when it kicks into play.[6]
In less fanciful terms, the state is only to describe our *knowledge*;
state reduction merely reflects *changes* in our state of knowledge
(which, under the unitary equations, does *not* change). On this view the
state, along with quantum mechanical probabilities, is purely epistemic. But
once put in these terms, we are relinquishing the key feature of Bohr’s
approach: we no longer describe the individual system. It is to revert to
the instrumentalism-cum-idealism already considered. Or, on the uncomfortable
thought that there ought after all to be a correlate to a state of knowledge
(that our knowledge, and any change in our knowledge, is *about*
something, and reflects *changes* in something) it slides over into
another position entirely, namely that there are in addition goings on in
atomic systems not so different from quantum jumps – “potentia”, to use
Heisenberg’s term, that are or may be “actualized”. But the inevitable
implication of *this* point of view is the incompleteness of the state.
Heisenberg, in embracing this line of reasoning, did not appear to understand
how far he was departing from Bohr’s philosophy.[7] On this
view, there is no good reason why one should not attempt to describe this
underlying reality in more detail. Any interpretation of the state as
epistemic, which falls short of idealism, is likely to lead to this conclusion.
This was always perfectly obvious to Einstein.

It is likely that the Kantian background to Bohr’s philosophy was never much appreciated by his contemporaries; very likely they did not even understand it. But here was philosophy being used to give cover. It provided a rationale for making do with the theory as it was handed down to them. Any obscurity in the interpretation could be considered a purely philosophical difficulty – someone else’s problem. And of course, from a purely pragmatic point of view, none of this really mattered, for the theory (if only under the “minimal” interpretation) was proving to be amazingly successful. For physics it was business as usual; in fact, physics had never had it so good.

Business as usual included the thought that there are
macroscopic events. Pursuit of this thought, understanding that the quantum
state describes individual microscopic states of affairs, led to attempts to
apply the formalism of quantum mechanics in the description of individual *macroscopic*
states of affairs as well, particularly in thermal physics. This started quite
innocuously, with some exciting experimental physics (superfluidity;
superconductivity; the Ising model; phase transitions). The area continued to
flourish. Applications of quantum mechanics to condensed matter physics soon
became commonplace. This is the first stage in the unraveling of the Copenhagen
orthodoxy.

At this level it is only a shift of emphasis. Nothing in the Copenhagen interpretation prohibited the application of quantum mechanics to large systems. But proceeding in this way it was found that certain classical concepts, and laws, can after all be coded into the quantum formalism, without recourse to some background experimental context (of the sort that was supposed to define “the conditions of meaningful discourse”, the object as a “phenomenon”).

What was at issue with the alleged necessity of classical concepts? According to Bohr they are forced, no matter that they must be restricted by the quantum postulate:

The recognition of the limitation of our forms of
perception by no means implies that we can dispense with our customary ideas or
their direct verbal expressions when reducing our sense impressions to order.
No more is it likely that the fundamental concepts of the classical theories
will ever become superfluous for the description of physical experience. (*op
cit*, p.16).

The argument is on the face of it purely pragmatic, and as such it was widely accepted. Here is how Heisenberg put the matter:

Our actual situation in science is such that we do use
the classical concepts for the description of the experiments, and it was the
problem of quantum theory to find theoretical interpretation of the experiments
on this basis. There is no use in discussing what could be done if we were
other beings than we are. At this point we have to realize, as von Weizsäcker
has put it, that “Nature is earlier than man, but man is earlier than natural
science.” The first part of the sentence justifies classical physics, with its
ideal of complete objectivity. The second part tells us why we cannot escape
the paradox of quantum theory, namely, the necessity of using the classical
concepts. (*op cit*, p.55-6).

But with increasing
experience in the treatment of large systems we learn that there is no real
difficulty in the treatment of macroscopic systems either, at least as regards
their thermodynamic properties. In these (typically static) situations the
measurement postulates had been effectively bypassed. Of course any such
descriptions will still be constrained by the uncertainty relations; we do not
arrive at classical properties that involve absolutely precise position and
momentum (or velocity). But at the everyday level these imply no new
constraints at all on what we see, or even what we (classically) judge to be
the case. The uncertainties are minute for ordinary objects; they are already
swamped by thermal fluctuations. It is true that there remains the problem of
what it *means* to speak of uncertainty, or imprecision, but that is
nothing new to philosophy: most if not all of our everyday thing-words are
vague; the problem remains even if we take classical mechanics to be
fundamental.[8]

Success in applying quantum mechanics to macroscopic systems is in itself only a small step, but it leads on to another. It raises a question about Bohr’s principle of significance. It may be that we don’t actually have to have a context of measurement in place, in order to interpret the formalism – or that if we do, that we can do justice to Bohr’s principle by modeling the context of measurement itself in quantum mechanical terms. If so, then Bohr’s argument is completely undercut. We can if we wish respect his principle of significance, but it no longer implies his stricture against the application of quantum mechanics to closed systems; it no longer implies Bohr’s constraint.

We can see the lacuna in the argument in its very first appearance:

On one hand, the definition of the state
of a physical system, as ordinarily understood, claims the elimination of all
external disturbances. *But in that case, according to the quantum postulate,
any observation will be impossible*, and, above all, the concepts of space
and time lose their immediate sense. On the other hand, if in order to make
observation possible we permit certain interactions with suitable agencies of
measurement, not belonging to the system, an unambiguous definition of the
state of the system is no longer possible, and there can be no question of
causality in the ordinary sense of the word. The very nature of the quantum
theory thus forces us to regard the space-time co-ordination and the claim of
causality, the union of which characterizes the classical theories, as
complementary but exclusive features of the description, symbolizing the
idealization of observation and definition respectively. (*Op cit*, p.54,
emphasis mine.)

The argument, if sound, would show that no precise
theory of the microscopic could be applied to anything other than a closed
system – in which case it would no longer describe an objective phenomenon
(here, an object in space and time; later, the object as phenomenon). The
essential difference with classical physics is that in the latter case external
disturbances can be made as small as you like, and the system still be
considered an observed system; in the quantum case, because of the finiteness
of the quantum of action, that is no longer the case. But the argument is not
even valid: the fallacy occurs in the statement italicized, which stands in
need of a missing premise: that observation is always *external* to the system,
that it is never *from within* the system. But that premise appears quite
arbitrary; we can perfectly well incorporate Bohr’s principle by considering
the observer as *internal* to the system, with the whole being modeled in
terms of the unitary formalism. There is after all no *philosophical*
argument, from the existence of a lower bound to the action, to the conclusion
that there can be no precise description of an observed phenomenon. There is no
philosophical reason not to use the equations of quantum mechanics to model the
measurement process.

There is, of course, *another*
reason: the problem of measurement. Specifically, it is that whilst we can
interpret components of the state, without any need for introducing the
observer, we cannot interpret their superposition – and it is superpositions of
such states that we arrive at by the unitary dynamics. The logic of Bohr’s
position is then quite different from how it first appeared. It is because of
the *problem of measurement* that we cannot apply quantum mechanics to the
measurement process.

There is an important insight underlying this argument of Bohr’s, however. There is a kind of coupling, introduced by “observation” (or let us just say by the environment) that destroys interference effects in a way that is entirely independent of its strength. Interactions like this are ubiquitous; at ordinary temperatures and pressures they are also extremely rapid, occurring over timescales far shorter than those of even the most energetic elementary processes. If we decompose the state into components like these, we can explore the effects of the unitary equations of motion without ever having to worry about interference effects.

If so, it is a matter of indifference whether, considering one component of the state and its subsequent forward evolution in time, we bother to retain all the other components. If there are no interference effects, it will make no difference to the calculations of any of its transition probabilities. This is the reason why the probabilities are insensitive to the position of the von Neumann cut, supposing that as the cut is varied, one still has available the same non-interfering decomposition of the state. A basis like this is called a “decoherence” basis; the study of this general field is “decoherence theory”. It has even proved possible to set out a similar condition on the basis in the absence of a distinction between the system and its environment (as in the “decohering histories” formalism).

At the more mathematical level, interference effects
between components of the state depend on the phase relationships between these
components – just as sound waves reinforce or cancel one another, depending on
whether they are in phase or have opposite phase. The absence of interference
effects means that a superposition of states can be replaced by a catalog of
those states (a “mixed state”) with all their phase relations annulled. This
catalog of states will still contain information about their relative weights,
interpreted as the various probabilities that one or other of the states in the
mixture is the one that *actually obtains* (these probabilities can now be
viewed as *epistemic*). All along, what prevented one from interpreting a
superposition of states in this way was the presence of interference effects.
In other words, decoherence theory tells us when a superposition of states can
be replaced by a mixture; equivalently, when a pure state can be *interpreted*
as if it were a mixture.
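In density-matrix notation the replacement reads as follows (a sketch; the off-diagonal terms carry the phase relations that decoherence suppresses):

```latex
\rho = \lvert \psi \rangle\langle \psi \rvert
     = \sum_{i,j} c_i\, c_j^{*}\, \lvert a_i \rangle\langle a_j \rvert
\;\;\xrightarrow{\ \text{decoherence}\ }\;\;
\rho_{\text{mix}} = \sum_i \lvert c_i \rvert^{2}\, \lvert a_i \rangle\langle a_i \rvert .
```

Only when the i ≠ j terms are (effectively) zero can the weights |cᵢ|² be read as epistemic probabilities over the states in the catalog.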

The choice of basis, recall, is what, on the instrumentalist interpretation and in the Copenhagen philosophy alike, was bound up with the choice of experiment. A “complementary” description amounted to an interpretation of the state as a mixture, when the basis was dictated by what the experiment was intended to measure. But here it is a matter of how the macroscopic world is described in quantum mechanics, including the experimental apparatus. “Intentions” no longer have anything to do with it. The macroscopic world is treated as a whole; quantum mechanics is here applied to closed systems. Without any mention of “the observer,” quantum mechanics in and of itself has brought forth a description of the macroscopic world, all the way down to molecular levels.

**The Problem of Measurement.**

It might now
be thought that the problem of measurement has been solved; certainly decoherence
theory has transformed the nature of the problem. In fact, I suggest it has
only made the difficulty more immediate. What remains of the problem of
measurement? Only this: with the effective washing out of the relative phases
among components of the decoherence basis, it may be that interference effects
are effectively eliminated; but this process is *only* an effective one.
It always remains true (except in unrealistic infinite volume or infinite time
limits) that there are physical quantities whose predicted values (using the
measurement postulates) *do* depend on these phases. If, to get rid of
these, we model the replacement of the superposition by *one or another*
of the states entering into the mixture (and not by the “effective” mixture),
then something more radical is needed.

Nothing forbids us from modifying the unitary
formalism; all that is needed is a mechanism that takes us from a decohering
superposition of states, to one or another of those states, with the requisite
probability. But this is our old friend, the reduction of state. Using decoherence
theory – to be precise, decoherence theory as it arose in the study of the
quantum mechanics of “open” systems – we now know how this may be done, in a
way that makes no reference at all to the observer (but only to the physical
nature of the environment, or the experimental apparatus, described in *quantum
mechanical* terms). In effect the situation where “there can be no question
of causality in the ordinary sense of the word,” to use Bohr’s words, has
indeed been described by exact systems of equations.
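
The “exact systems of equations” in question are, in the open-systems literature, master equations of Lindblad form; a standard sketch follows, for orientation only (the specific operators $L_k$ are model-dependent).

```latex
% Lindblad master equation for the reduced state \rho of an open system:
\frac{d\rho}{dt}
  = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \Big( L_k\,\rho\,L_k^{\dagger}
      - \tfrac{1}{2}\,\{ L_k^{\dagger} L_k,\ \rho \} \Big)
% The L_k encode the coupling to the environment (or apparatus); in
% the decoherence basis the off-diagonal elements of \rho decay, and
% stochastic "unravelings" of this equation take the state to one or
% another component with the requisite probability.
```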

In fact, it is now perfectly clear that these
equations may be perfectly deterministic. The deterministic mechanism
proposed long ago by de Broglie accomplishes this quite simply. Here, in
the “pilot-wave theory”, the quantum formalism is supplemented by an equation,
the “guidance equation”, which provides a family of trajectories each of which
picks out one member of the basis at each instant of time (*which* one, we
can imagine, being settled in advance, as part of the initial data).
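
The guidance equation itself, in its standard non-relativistic form (reproduced here for orientation, not from the text):

```latex
% For n particles with configuration (q_1,...,q_n) and wave function
% \Psi(q_1,...,q_n,t) obeying the Schroedinger equation:
\frac{d\mathbf{q}_k}{dt}
  = \frac{\hbar}{m_k}\,
    \operatorname{Im}
    \left.\frac{\nabla_k \Psi}{\Psi}\right|_{(\mathbf{q}_1(t),\,\dots,\,\mathbf{q}_n(t))}
% The actual configuration (q_1(t),...,q_n(t)) rides along one
% component of a decohering superposition; which component is settled
% by the initial configuration, given as initial data.
```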

For an indeterministic model, one of the best known is the modification of the unitary equations due to Ghirardi, Rimini, and Weber (the so-called “GRW theory”). Here certain parameters, which in decoherence theory vary somewhat with the details of the application (as parameters of an “effective” theory), are now to be viewed as new and fundamental constants of nature.[9] Using these, one can define stochastic equations of motion.
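
In the original GRW model the new constants are a spontaneous-localization rate and width; the standardly quoted values, supplied here for orientation, are roughly $\lambda \approx 10^{-16}\,\mathrm{s}^{-1}$ per particle and $a \approx 10^{-7}\,\mathrm{m}$.

```latex
% GRW "hit": at mean rate \lambda, the wave function of a particle is
% multiplied by a Gaussian of width a centred at a random point x
% (chosen with probability given by the norm of the result):
\psi(q) \;\longmapsto\;
  \frac{ e^{-(q-x)^2 / 2a^2}\, \psi(q) }
       { \big\lVert\, e^{-(q-x)^2 / 2a^2}\, \psi \,\big\rVert }
% A single particle is hit only once in ~10^8 years; but in a
% macroscopic superposition of ~10^23 entangled particles the
% effective rate scales with the particle number, so reduction is
% effectively instantaneous.
```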

But the existence of these alternative, deviant
formalisms, and the recognition that they are perfectly adequate at least to
all known *non-relativistic* quantum effects, only renders the problem of
measurement more acute. For on the one hand they show conclusively that the
problem of measurement *can* be solved, by modifying the physical
formalism; but on the other hand they show that the modifications in question
are extremely significant – that, in point of fact, the obstacles confronting
the attempt to extend these sorts of theories to the relativistic domain are
truly horrendous.[10]
Lacking the relativistic symmetries, at least in the form in which they figure
in relativistic quantum field theory, these theories cannot as yet deal with
any high-energy phenomena at all. It is high-energy particle physics that now
poses the problem, not “the quantum postulate”, or anything emerging from the
Kantian tradition that Bohr found so beguiling. Here it seems that
fundamentally new *physical* ideas are needed – including, even, the
modification or abandonment of the relativity postulate itself.

The dilemma is all the more puzzling given that decoherence
theory grew out of the study of the unitary formalism of quantum mechanics (a
formalism which extends to relativity theory without any fundamental
difficulty). The very success of this enterprise appears to have called into
question the foundations of the discipline. Reflecting on the history of the
problem of measurement, we see that the key developments all revolve around the
notion that the quantum mechanical state is the correlate of individual states
of affairs. By a natural and perhaps inevitable progression, this leads to the
supposition that there is a quantum mechanical correlate to *macroscopic*
states of affairs as well. And now it seems that that must be correct; decoherence
theory turns on this, and its success cannot be gainsaid. We *have* obtained
a probabilistic, macroscopic description, as an effective theory; we cannot
turn our back on this result. But it seems we cannot push through with this
program either, to make of the theory a literal one and not only an effective
formalism, or not without penalty in the relativistic regime.

It is a torturous predicament. *In extremis*,
then, we come to the remaining, and incredible, alternative. It is to rest with
the unitary dynamics, according to which, with respect to the decoherence
basis, *superpositions* of macroscopic states of affairs indeed propagate
unitarily and covariantly. The entire superposition evolves in a perfectly
deterministic manner, respecting the relativistic symmetries (it is only if *one*
of these components is selected at each instant in time that one runs into
problems with relativity). And this is a possible (but incredible) alternative
because were the superposition retained, it would still *appear*, from the
point of view of a system described by one of the components, as if the state
were reduced on measurement. Or better (for talk of “appearance” is suspect), *it
will in fact be the case* that such a system (a macroscopic object among
others) will be correlated with precisely the same environment, *whether or
not* all other components of the state are killed off, so *whether or not*
there is in fact any state reduction.

The remarkable implication (as first pointed out by
Hugh Everett in 1957) is that the unitary evolution is in fact sufficient to
account for all the observable evidence – at the price that *all*
physically possible states of affairs are realized in some part of the
universal state (hence the “many-worlds” interpretation, as it was called by
DeWitt). And this, of course, is simply incredible. For many it is a price that
is far too high to pay.

We have arrived at the measurement problem as it
stands today. There are plenty of ways of solving it in the non-relativistic
case; there are plenty of avenues to be explored for solving it in the
high-energy case – but at the likely price of the relativistic symmetries.
There is no internal *philosophical* argument against proceeding with them
(unless it is the very general one that it is implausible that physicists are
about to *redo* the bulk of their theorizing over the last half century).
But against this, we do appear to *already* have a solution to the
measurement problem, which is perfectly consistent with the basic quantum
mechanical and relativistic symmetries, and which leaves particle physics
intact as it is – but which few of us can believe.

If we hold relativistic and quantum mechanical principles to be fundamental – and we can scarcely do otherwise – then we must hold the universal state to be the fundamental object of physics. If we hold the visible universe to be the fundamental object of physics – and we can scarcely do otherwise – then these principles cannot be fundamental.

The dilemma is inescapable, and, if I am right that the universal state can only be successfully interpreted along Everett’s lines, disquieting. Few in this situation can feel very sanguine in their choice. But equally, it is clear that the fundamental principles of relativity and quantum mechanics are in any case under challenge in the field of quantum gravity; there string theory and the canonical and path-integral quantization programs, theories which aim to respect the relativistic and quantum mechanical symmetries, have yet to deliver. It is entirely plausible to suppose that one or other of these symmetries will in any case have to be given up in this arena. If there is progress here – and ideally experimental discoveries – introduced by novel principles in conflict with the familiar ones, then the situation will be transformed. Better, then, as onlookers, to sit on the fence.

The actors here are the physicists: they will have to vote with their feet. If past form is anything to go by, there will be other problems that will take precedence over the problem of measurement – the problem of making any definite predictions, for example. It will be the ideas that work at this level, in the end, that will decide the matter. The dilemma as I have stated it is unlikely to be settled soon.

Assistance may come from another direction, however. It may be that an interpretation along Everett’s lines cannot after all be sustained. There are plenty of objections that can be raised against it, quite apart from the one that it is unbelievable; to date, however, all of them depend on philosophical arguments. Objections of a theoretical or experimental character may well arise in time, but the immediate objections are philosophical.

To get a sense of them, it is enough to sketch out
the ideas using terms familiar to philosophy. Lewis’s writings provide a ready
context. We are to suppose that there is only the *superposition* of
physically possible histories, determined by the choice of a decohering set of
variables (and their corresponding bases). These histories are therefore
effectively autonomous from each other (we do not have to consider interference
effects from other histories). Using center-of-mass variables we obtain histories
that are recognizably, approximately, classical; they can be described using
approximately classical laws (corrected of course when it comes to quantum
mechanical effects as evidenced at the macroscopic level). The essential
difference, now, from Lewis’s modal metaphysics, is that whilst these histories
all co-exist, they are not separate worlds in his sense. They are not
physically closed-off from one another. In Lewis’s terms they are worlds that
all *overlap* (that have segments in common); they are, in fact, worlds
that branch in one direction only (towards the future – this *defines*
what we mean by future). This branching involves fission, not divergence.
Branching, viewed from within one of the off-shoots, appears as the quantum
jump or reduction of state. It is only here that there is anything like a
concept of probability coming into play. Probabilities supervene on perfectly
categorical properties and relations – the relative norms of the states
entering into the branching – in accordance with Lewis’s Humean supervenience
thesis, as defined by perfectly deterministic laws.
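
In the simplest case the categorical basis for these probabilities can be put schematically as follows (a gloss on the “relative norms”, not notation used in the text):

```latex
% Universal state decomposed over a decohering family of
% quasi-classical branches \alpha:
\lvert\Psi\rangle = \sum_{\alpha} c_{\alpha}\,\lvert\alpha\rangle ,
\qquad
w_{\alpha} = \frac{ \lvert c_{\alpha} \rvert^{2} }
                  { \sum_{\beta} \lvert c_{\beta} \rvert^{2} }
% The weights w_alpha are categorical features of the
% deterministically evolving universal state; on the view sketched
% here, chances supervene on them, in the spirit of Humean
% supervenience.
```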

It should be evident that in the background of this interpretation is the structural view of time and personal identity that has long been at the center of the philosophical literature on these topics; [11] it is here being extended to the remaining physical modalities (physical possibility, necessity, and chance). It has long been clear that there are plenty of parallels between the modality of time and of possibility (between “the now”, as an indexical, and “the actual”, as an indexical, and the opposing views in both cases); more contentious is the interpretation of probability in terms of branching.

I have said enough to make clear some of the difficulties. There are any number of philosophical objections that can be raised against Everett’s proposal, but it will be for philosophers to decide if they are fatal to it. At issue is a question of physics, but of a metaphysical order that we have not seen since the time of Descartes. The problem of measurement has long been ignored by physicists; if they were right to ignore it, and the principles of relativity and quantum mechanics are after all to stay in place, then it can no longer be ignored by philosophers.

The concept of decoherence has long been familiar; it
already figured in Mott’s analysis of the cloud chamber in 1929. It resurfaced
in the “DLP” theory of measurement in the early ‘60s (after Daneri, Loinger,
and Prosperi). More than one champion of the Copenhagen interpretation
reconsidered in consequence (Leon Rosenfeld, Bohr’s faithful disciple, is an
example). General (and rigorous) treatments along these lines, taking the
thermodynamic limit, are due to K. Hepp, who supposed that the limiting
procedures used were as “natural [here] as elsewhere in microphysics.”
Subsequent investigators include B. Whitten-Wolfe, G. Emch, D. Lewis, L.
Thomas, and A. Frigerio, but the most important development from the point of
view of decoherence theory was the general theory of quantum semigroups as
systematically formulated by E. B. Davies (*The Quantum Mechanics of Open
Systems*, Academic Press, 1976). This was a major stimulus for the work of
G. Ghirardi, A. Rimini and T. Weber in 1986 (the GRW proposal), for whom the
phenomenological model was to be considered fundamental. Others who
contributed to this program included L. Diosi, P. Pearle, A. Barchielli, N. Gisin,
and I. Percival. References to these papers can be found in the works cited
below.

The same ideas, but explored in a way
that was supposed to involve no modification to the quantum formalism, were
applied in a variety of concrete physical models over the last two decades.
Here results of W. Unruh, W. Zurek, E. Joos, A. Caldeira, and A. Leggett were
particularly influential. The notion then in vogue was that of
“environmentally-induced superselection rules,” notwithstanding that it was quite
clear that the rigorous notion of superselection sectors does not permit the
unitary development of the state from one sector to another. For an
illustration of this debate, see the exchange between Zurek and others in the
pages of *Physics Today* (October 1991, April 1993); Zurek eventually made
clear that his approach is a variant of Everett’s.

Different again is the decoherent histories formalism (also called the *consistent*
histories formalism), beginning with the work of R. Griffiths in 1984.
Contributors include R. Omnès, M. Gell-Mann, J. Hartle, and J. Halliwell; the
advantage of this approach is that reference to the environment can be
dispensed with, but the constraints that it imposes are significantly weaker
than those of dynamical decoherence theory, as was made clear by A. Kent and F. Dowker
in 1994. This need not matter if the approach
is intended as a supplement to Everett’s interpretation, however (see my “Time, Quantum
Mechanics, and Decoherence”, *Synthese*, 102, p.235-66, which also
sketches parallels with the philosophy of time).

The connections between decoherence theory and the pilot-wave theory are
the subject of more recent studies. In fact it is often thought that the
pilot-wave theory stands quite independent of decoherence theory: so it does as
a piece of speculative theorizing, but its adequacy to observed
(non-relativistic) phenomena is another matter. But obviously it was not for
want of attention to this aspect of the theory that it was so long ignored: on
a charitable reading, it was ignored because no relativistic generalization of
it could be found. (For a less charitable reading, see J. Cushing, *Quantum
Mechanics: Historical Contingency and the Copenhagen Interpretation*, 1994.)
Its adequacy in the non-relativistic case should
have made clear what was wrong with Bohr’s philosophy (for an account of why it
did not, see M. Beller, *Quantum Dialogue*, Chicago, 1999).

[2]
The “statistical interpretation”, as advocated by Ballentine (“The Statistical
Interpretation of Quantum Mechanics”, *Reviews of Modern Physics*, 42, 1970), is an example of this sort of approach.
Another example is the *S-Matrix* school in the early ‘60s, which was
explicitly opposed to the use of quantum field theory, and indeed of any dynamical
theory in the usual sense, in hadron physics: the theory was to be formulated
in terms of the S-matrix alone (“S” for scattering; a matrix compiling the
input and output amplitudes among states of definite momentum). See J.
Cushing, *Theory Construction and Selection in Modern Physics: The S-Matrix*,
Cambridge, 1990.

[3]
Einstein, famously, concluded that the state was incomplete. For an
authoritative study of Einstein’s views on this matter, see A. Fine, *The Shaky
Game*, Chicago, 1986.

[4]
Correspondingly, Bohr later talked of “experimental conditions” rather than
“observation”. On the term “phenomenon”, we have from his biographer Abraham Pais:
“he sharpened his own language, one might say, by defining the term
`phenomenon’ to include both the object of study and the mode of observation” (*Niels
Bohr’s Times*, Oxford, 1991, p.432). Compare the physicist J. A. Wheeler:
“In today’s words Bohr’s point — and the central point of quantum theory — can
be put into a single, simple sentence. `No elementary phenomenon is a
phenomenon until it is a registered (observed) phenomenon.’” (in J. A. Wheeler
and W. Zurek, eds., *Quantum Theory and Measurement*, Princeton, 1983).

The quotations that follow are all taken from the Como
lecture of 1927, in which Bohr first presented his interpretation of quantum
mechanics (reprinted in N. Bohr, *Atomic Theory and the Description of Nature*,
Cambridge, 1934).

[5] The stability or otherwise of calculated probabilities under variation of the choice of cut is the business of decoherence theory to determine. See below.

[6] J.
von Neumann, F. London, and E. Wigner are the most important examples (see
Wheeler and Zurek, *op cit*). Their latter-day followers include D. Albert, B. Loewer,
M. Lockwood, and E. Squires; see e.g. E. Squires, *Conscious Mind in the
Physical World*, Adam Hilger, 1990.

[7]
See Heisenberg *op cit*, p.34-5. In other respects he took it to an
extreme not contemplated by Bohr. Thus: “...the equation of motion for the
probability function ... now contain[s] the influence of the interaction with
the measuring device. This influence introduces a new element of uncertainty,
since the measuring device is necessarily described in the terms of classical
physics; such a description contains all the uncertainties concerning the microscopic
structure of the device which we know from thermodynamics...It contains in fact
the uncertainties of the microscopic structure of the whole world...It is for
this reason that the results of the measurement cannot be predicted with
certainty” (*ibid*). This is tantamount to explaining the quantum
postulate itself – or at least indeterminacy – using classical concepts.

[8] I am not suggesting that quantum indeterminacy and classical vagueness will be dealt with in the same way; on the contrary, there are precise quantum properties corresponding to any quantum mechanical state. The parallel is rather between classical concepts and quantum mechanical ones, on the one hand, and between ordinary language concepts and classical ones, on the other.

[9] This is the one respect in which the GRW proposal, in its present form, may make definite predictions in conflict with the standard non-relativistic formalism.

[10]
This will be readily granted in the case of state-reduction theories, but in
the case of the pilot-wave theory it is not so well recognized. See my “The ‘Beables’
of Relativistic Pilot-Wave Theory”, in *From Physics to Philosophy*, J.
Butterfield, and C. Pagonis, eds., Cambridge, 1999, for detailed arguments.

[11]
The relevance of split-brain scenarios, and particularly the work of Parfit,
should be quite obvious (D. Parfit, *Reasons and Persons*, Oxford, 1984).