Gleason's theorem

Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics follows logically from particular assumptions about how measurements are represented mathematically. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and for the effort in quantum information theory to re-derive quantum mechanics from information-theoretic principles.

In quantum physics, an observable physical quantity of a quantum system, like the position, momentum or energy of a quantum particle, is mathematically represented by a Hermitian operator on a separable Hilbert space. A quantum state for a system is also a Hermitian operator, more specifically a positive-semidefinite operator with a trace equal to 1. The Born rule specifies how to compute the probabilities for obtaining the different possible values of such a quantity in an experiment, given a quantum state for that system. Mackey had asked whether the Born rule is a necessary consequence of a particular set of axioms for quantum mechanics, and more specifically whether every measure on the lattice of projections of a Hilbert space can be defined by a positive-semidefinite operator with unit trace. Gleason's theorem shows that this is true as long as the dimension of the Hilbert space is larger than 2, thereby establishing both the set of valid quantum states and the Born rule as consequences of the underlying Hilbert-space structure.

Overview

Consider a quantum system with a Hilbert space of dimension 3 or larger, and suppose that there exists some function that assigns a probability to each outcome of any possible measurement upon that system. The probability of any such outcome must be a real number between 0 and 1 inclusive, and in order to be consistent, for any individual measurement the probabilities of the different possible outcomes must add up to 1. Gleason's theorem shows that any such function, that is, any consistent assignment of probabilities to measurement outcomes, must be expressible in terms of a quantum-mechanical density operator and the Born rule. In other words, given that each quantum system is associated with a Hilbert space, and given that measurements are described by particular mathematical entities defined on that Hilbert space, both the structure of quantum-state space and the rule for calculating probabilities from a quantum state then follow.

For simplicity, we can assume that the dimension of the Hilbert space is finite. A quantum-mechanical observable is a self-adjoint operator on that Hilbert space. Equivalently, we can say that a measurement is defined by an orthonormal basis, with each possible outcome of that measurement corresponding to one of the vectors that make up the basis. A density operator is a positive-semidefinite operator whose trace is equal to 1. In the language of von Weizsäcker, a density operator is a "catalogue of probabilities": for each measurement that can be defined, we can compute the probability distribution over the outcomes of that measurement from the density operator.[1] We do so by applying the Born rule, which states that

$$P(x_i) = \operatorname{tr}(\Pi_i \rho),$$

where $\rho$ is the density operator, and $\Pi_i$ is the projection operator onto the basis vector associated with the measurement outcome $x_i$.
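To make the rule concrete, here is a minimal numerical sketch (in Python with NumPy; the particular state and basis are arbitrary choices for illustration, not part of the theorem):

```python
import numpy as np

# An illustrative density operator on a 3-dimensional Hilbert space:
# a convex mixture of two pure states, hence positive semidefinite with trace 1.
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
phi = np.array([0.0, 0.0, 1.0])
rho = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.outer(phi, phi.conj())

# A measurement in the standard basis: one projection operator per basis vector.
projectors = [np.outer(e, e.conj()) for e in np.eye(3)]

# Born rule: P(x_i) = tr(Pi_i rho).
probs = [np.trace(P @ rho).real for P in projectors]
print(probs, sum(probs))  # nonnegative values that sum to 1
```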

Let $f$ be a function from projection operators to the unit interval with the property that, if a set $\{\Pi_i\}$ of projection operators sum to the identity matrix (that is, if they correspond to an orthonormal basis), then

$$\sum_i f(\Pi_i) = 1.$$

Such a function expresses an assignment of probability values to the outcomes of measurements, an assignment that is "noncontextual" in the sense that the probability for an outcome does not depend upon which measurement that outcome is embedded within, but only upon the mathematical representation of that specific outcome, i.e., its projection operator.[2] Gleason's theorem states that for any such function $f$, there exists a positive-semidefinite operator $\rho$ with unit trace such that

$$f(\Pi_i) = \operatorname{tr}(\Pi_i \rho).$$

Both the Born rule and the fact that "catalogues of probability" are positive-semidefinite operators of unit trace follow from the assumptions that measurements are represented by orthonormal bases, and that probability assignments are "noncontextual". In order for Gleason's theorem to be applicable, the space on which measurements are defined must be a real or complex Hilbert space, or a quaternionic module.[3] (Gleason's argument is inapplicable if, for example, one tries to construct an analogue of quantum mechanics using p-adic numbers.)
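As a numerical illustration of such assignments (a sketch only; the random-sampling helpers below are ad hoc choices, not part of Gleason's construction), any frame function built from a density operator via the Born rule automatically satisfies the normalization condition for every orthonormal basis, i.e., for every measurement context in which a given projection operator may appear:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_density(n):
    # A random positive-semidefinite operator normalized to unit trace.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def random_orthonormal_basis(n):
    # QR decomposition of a random complex matrix yields a unitary matrix,
    # whose columns form an orthonormal basis.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(a)
    return q.T  # rows are orthonormal vectors

rho = random_density(3)
f = lambda P: np.trace(P @ rho).real  # a "regular" frame function

for _ in range(5):
    projectors = [np.outer(v, v.conj()) for v in random_orthonormal_basis(3)]
    print(round(sum(f(P) for P in projectors), 12))  # 1.0 for every basis
```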

Another way of phrasing the theorem uses the terminology of quantum logic, which makes heavy use of lattice theory. Quantum logic treats quantum events (or measurement outcomes) as logical propositions and studies the relationships and structures formed by these events, with specific emphasis on quantum measurement. In quantum logic, the logical propositions that describe events are organized into a lattice in which the distributive law, valid in classical logic, is weakened to reflect the fact that in quantum physics, not all pairs of quantities can be measured simultaneously.[4] The representation theorem in quantum logic shows that such a lattice is isomorphic to the lattice of subspaces of a vector space with a scalar product.[5] It remains an open problem in quantum logic to constrain the field $K$ over which the vector space is defined. Solèr's theorem implies that, granting certain hypotheses, the field $K$ must be either the real numbers, the complex numbers, or the quaternions.[6]

Let $\mathcal{H}$ denote the Hilbert space of finite dimension associated with the physical system, and let $L_{\mathcal{H}}$ denote the lattice of subspaces of $\mathcal{H}$. Let $A$ represent an observable with finitely many potential outcomes: the eigenvalues of the Hermitian operator $A$, i.e. $a_1, a_2, \ldots, a_n$. Then an "event" is a proposition $\mathfrak{a}_i$, which in natural language can be rendered "the outcome of measuring $A$ on the system is $a_i$". Each event $\mathfrak{a}_i$ is an atom of the lattice $L_{\mathcal{H}}$. The events generate a sublattice of $L_{\mathcal{H}}$ which is a finite Boolean algebra. A quantum probability function over $L_{\mathcal{H}}$ is a real function $P$ on the elements of $L_{\mathcal{H}}$ that has the following properties:

  1. $P(\mathbf{0}) = 0$, and $P(\mathfrak{a}) \geq 0$ for all elements $\mathfrak{a}$,
  2. $\sum_j P(\mathfrak{a}_j) = 1$, if $\{\mathfrak{a}_j\}$ are orthogonal atoms.

This means that for every lattice element $p$, the probability of obtaining $p$ as a measurement outcome is known, since it may be expressed as the union of the atoms under $p$:

$$P(p) = \sum_{\mathfrak{a} \leq p} P(\mathfrak{a}).$$

In this context, Gleason's theorem states:

Given a quantum probability function $P$ over a space of dimension greater than 2, there is a Hermitian, non-negative operator $W$ on $\mathcal{H}$, whose trace is unity, such that $P(\mathfrak{a}) = \langle v, W v \rangle$ for all atoms $\mathfrak{a}$, where $\langle \cdot, \cdot \rangle$ is the inner product, and $v$ is a unit vector along $\mathfrak{a}$.

As one consequence: if some atom $\mathfrak{s}$ with unit vector $s$ satisfies $P(\mathfrak{s}) = 1$, then $W$ is the projection onto the complex line spanned by $s$, and $P(\mathfrak{t}) = |\langle s, t \rangle|^2$ for all atoms $\mathfrak{t}$ with unit vector $t$.
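A short check of this consequence may be helpful (a sketch in the notation above, not Gleason's argument):

```latex
% Write W in an orthonormal eigenbasis, W = \sum_k w_k |e_k\rangle\langle e_k|,
% with w_k \ge 0 and \sum_k w_k = \operatorname{tr} W = 1. Then
P(\mathfrak{s}) = \langle s, W s \rangle
    = \sum_k w_k \,\lvert \langle e_k, s \rangle \rvert^{2}
    \le \max_k w_k \le 1 .
% Equality forces a single eigenvalue w_k = 1 whose eigenvector is proportional
% to s, so W = |s\rangle\langle s|; hence, for any atom \mathfrak{t} with unit
% vector t,  P(\mathfrak{t}) = \langle t, W t \rangle = \lvert\langle s, t\rangle\rvert^{2}.
```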

History and outline of Gleason's proof

In his 1932 textbook Mathematische Grundlagen der Quantenmechanik [Mathematical Foundations of Quantum Mechanics], John von Neumann addressed the question of whether the Born rule could be derived. He succeeded in showing that if an observable physical quantity is represented by a Hermitian operator $A$, then the expectation value of that quantity is given by $\operatorname{tr}(A\rho)$ for some density operator $\rho$. However, the assumptions on which von Neumann built his proof were rather strong and were eventually regarded as not well motivated.[7]
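As a quick numerical check of the trace formula (an illustrative sketch; the observable and state below are arbitrary random choices), $\operatorname{tr}(A\rho)$ agrees with the probability-weighted average of the eigenvalues of $A$ computed via the Born rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary Hermitian observable A and density operator rho on C^3.
m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (m + m.conj().T) / 2
b = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = b @ b.conj().T
rho = rho / np.trace(rho)

# Expectation value via the trace formula...
expect_trace = np.trace(A @ rho).real

# ...and via the Born rule: eigenvalue a_i weighted by P(a_i) = <v_i, rho v_i>.
eigvals, eigvecs = np.linalg.eigh(A)
probs = [np.vdot(v, rho @ v).real for v in eigvecs.T]
expect_born = sum(a * p for a, p in zip(eigvals, probs))

print(np.isclose(expect_trace, expect_born))  # True
```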

By the late 1940s, George Mackey had grown interested in the mathematical foundations of quantum physics, wondering in particular whether the Born rule was the only possible rule for calculating probabilities in a theory founded on Hilbert space.[8] Mackey discussed this problem with Irving Segal at the University of Chicago, who in turn raised it with Richard Kadison, then a graduate student. Kadison pointed out that Mackey's question could be answered negatively in dimension 2. That is, probability measures on the closed subspaces of a 2-dimensional Hilbert space do not have to correspond to quantum states and the Born rule. One result of Gleason's work is the discovery that this is unique to dimension 2.[9]

Gleason begins by crediting the problem of determining "all measures on the closed subspaces of a Hilbert space" to Mackey. Gleason's original proof proceeds in three stages.[10] In Gleason's terminology, a frame function is regular if it is derived in the standard way, that is, by the Born rule from a quantum state. Gleason derives a sequence of lemmas concerning when a frame function is necessarily regular, culminating in the final theorem. First, he establishes that every continuous frame function on the Hilbert space $\mathbb{R}^3$ is regular. This step makes use of the theory of spherical harmonics. Then, he proves that frame functions on $\mathbb{R}^3$ have to be continuous, which establishes the theorem for the special case of $\mathbb{R}^3$. This step is regarded as the most difficult of the proof.[11] Finally, he shows that the general problem can be reduced to this special case. Gleason credits one lemma used in this last stage of the proof to his doctoral student Richard Palais.[12]

Implications

Gleason's theorem highlights a number of fundamental issues in quantum measurement theory. Fuchs argues that the theorem "is an extremely powerful result", because "it indicates the extent to which the Born probability rule and even the state-space structure of density operators are dependent upon the theory's other postulates". As a consequence, quantum theory is "a tighter package than one might have first thought".[13]

The theorem is often taken to rule out the possibility of hidden variables in quantum mechanics. This is because the theorem implies that there can be no bivalent probability measures, i.e., probability measures having only the values 1 and 0. To see this, note that the mapping $u \mapsto \langle u, \rho u \rangle$ is continuous on the unit sphere of the Hilbert space for any density operator $\rho$. Since this unit sphere is connected, no continuous probability measure on it can take only the values of 0 and 1.[14] But a hidden-variable theory that is deterministic implies that the probability of a given outcome is always either 0 or 1: either the electron's spin is up, or it isn't. Gleason's theorem therefore suggests that quantum theory represents a deep and fundamental departure from the classical intuition that uncertainty is due to ignorance about hidden degrees of freedom.[15] More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but are also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity.[16]
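The connectedness argument can be made vivid numerically (a sketch; the particular path of unit vectors is an arbitrary choice): along a continuous path from one basis vector to an orthogonal one, the Born probability passes through every intermediate value, so no probability assignment of this form can be confined to the values 0 and 1:

```python
import numpy as np

# Density operator for the pure state e_1 in a 3-dimensional real example.
e1 = np.array([1.0, 0.0, 0.0])
rho = np.outer(e1, e1)

# A continuous path of unit vectors from e_1 to the orthogonal vector e_2.
for theta in np.linspace(0.0, np.pi / 2, 5):
    u = np.array([np.cos(theta), np.sin(theta), 0.0])
    print(round(float(u @ rho @ u), 3))  # <u, rho u>: 1.0, 0.854, 0.5, 0.146, 0.0
```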

Gleason's theorem motivated later work by John Stewart Bell, Ernst Specker and Simon Kochen that led to the result often called the Kochen–Specker theorem, which likewise shows that noncontextual hidden-variable models are incompatible with quantum mechanics. As noted above, Gleason's theorem shows that there is no bivalent probability measure over the rays of a Hilbert space (as long as the dimension of that space exceeds 2). The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no bivalent probability measure can be defined.[17] The fact that such a finite subset of rays must exist follows from Gleason's theorem by way of a logical compactness argument, but this method does not construct the desired set explicitly.[18]

A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., $P(x_i) = 1$ for some outcome $x_i$). Any mixed state can be written as a convex combination of pure states, though not in a unique way. Because Gleason's theorem yields the set of all quantum states, pure and mixed, it can be taken as an argument that pure and mixed states should be treated on the same conceptual footing, rather than viewing pure states as more fundamental.[19]
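The non-uniqueness of such convex decompositions is visible already for a single qubit, the standard example being the maximally mixed state (a sketch; $|+\rangle$ and $|-\rangle$ denote the usual Hadamard-basis states):

```python
import numpy as np

# Pure states as rank-1 projections.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

proj = lambda v: np.outer(v, v.conj())

# Two different equal-weight mixtures of pure states...
mix_z = 0.5 * proj(zero) + 0.5 * proj(one)
mix_x = 0.5 * proj(plus) + 0.5 * proj(minus)

# ...yield the same density operator, the maximally mixed state I/2.
print(np.allclose(mix_z, mix_x), np.allclose(mix_z, np.eye(2) / 2))  # True True
```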

Pitowsky uses Gleason's theorem to argue that quantum mechanics represents a new theory of probability, one in which the structure of the space of possible events is modified from the classical Boolean algebra. He regards this as analogous to the way that special relativity modifies the kinematics of Newtonian mechanics.[20] Alternatively, approaches such as relational quantum mechanics and some versions of quantum Bayesianism employ Gleason's theorem as an essential step in deriving the quantum formalism from information-theoretic postulates.[21]

The Gleason and Kochen–Specker theorems have been cited in support of various philosophies, including perspectivism, constructive empiricism and agential realism.[22]

Generalizations

Gleason originally proved the theorem assuming that the measurements applied to the system are of the von Neumann type, i.e., that each possible measurement corresponds to an orthonormal basis of the Hilbert space. Later, Busch, and independently Caves et al., proved an analogous result for a more general class of measurements, known as positive-operator-valued measures (POVMs). The proof of this result is simpler than Gleason's, and unlike the original theorem, the generalized version using POVMs also applies to the case of a single qubit, for which the dimension of the Hilbert space equals 2.[23] This has been interpreted as showing that the probabilities for outcomes of measurements upon a single qubit cannot be explained in terms of hidden variables, provided that the class of allowed measurements is sufficiently broad.[24]
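For concreteness, here is a sketch of a standard single-qubit POVM (the symmetric "trine" measurement; the construction below is one common choice, not drawn from the cited proofs): three positive operators that sum to the identity, for which the Born-rule probabilities $\operatorname{tr}(E_i \rho)$ remain well defined even though the effects are not mutually orthogonal projections:

```python
import numpy as np

# Three "trine" states: real unit vectors separated by 120 degrees on the Bloch circle.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in angles]

# POVM effects E_i = (2/3)|v_i><v_i|; positive by construction, and they sum to I.
effects = [(2 / 3) * np.outer(v, v) for v in states]
print(np.allclose(sum(effects), np.eye(2)))  # True

# Born-rule probabilities for an arbitrary pure state are nonnegative and sum to 1.
psi = np.array([np.cos(0.3), np.sin(0.3)])
rho = np.outer(psi, psi)
probs = [np.trace(E @ rho).real for E in effects]
print([round(p, 3) for p in probs], round(sum(probs), 12))
```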

Gleason's theorem, in its original version, does not hold if the Hilbert space is defined over the rational numbers, i.e., if the components of vectors in the Hilbert space are restricted to be rational numbers, or complex numbers with rational parts. However, when the set of allowed measurements is the set of all POVMs, the theorem holds.[25]

The original proof by Gleason was not constructive: one of the ideas on which it depends is the fact that every continuous function defined on a compact space attains its minimum. Because one cannot in all cases explicitly show where the minimum occurs, a proof that relies upon this principle will not be a constructive proof. However, the theorem can be reformulated in such a way that a constructive proof can be found.[26]

Gleason's theorem can be extended to some cases where the observables of the theory form a von Neumann algebra. Specifically, an analogue of Gleason's result can be shown to hold if the algebra of observables has no direct summand that is representable as the algebra of 2×2 matrices over a commutative von Neumann algebra (i.e., no direct summand of type $\mathrm{I}_2$). In essence, the only barrier to proving the theorem is the fact that Gleason's original result does not hold when the Hilbert space is that of a qubit.[27]

References

  1. Dreischner, Görnitz and von Weizsäcker (1988).
  2. Barnum et al. (2000); Pitowsky (2003), §1.3; Pitowsky (2006), §2.1; Kunjwal and Spekkens (2015).
  3. Piron (1972), §6; Drisch (1979); Horwitz et al. (1984); Razon et al. (1991); Varadarajan (2007), p. 83 ff.; Cassinelli and Lahti (2017), §2; Moretti and Oppio (2018).
  4. Dvurecenskij (1992).
  5. Pitowsky (2006), §2.
  6. Baez (2010); Cassinelli and Lahti (2017), §3; Moretti and Oppio (2019).
  7. Peres (1992); Mermin and Schack (2018).
  8. Mackey (1957); Chernoff (2009).
  9. Chernoff (2009).
  10. Hrushovski and Pitowsky (2004), §2.
  11. Pitowsky (1998).
  12. Gleason (1957), footnote 3.
  13. Fuchs (2011), pp. 94–95.
  14. Wilce (2017), §1.3.
  15. Mermin (1993).
  16. Shimony (1984); Mermin (1993).
  17. Peres (1991); Mermin (1993).
  18. Hrushovski and Pitowsky (2004), §1.
  19. Wallace (2017).
  20. Pitowsky (2006); Pitowsky (2013).
  21. Barnum et al. (2000); Wilce (2017), §1.4; Cassinelli and Lahti (2017), §2.
  22. Edwards (1979); van Fraassen (1991); Barad (2007).
  23. Busch (2003); Caves et al. (2004); Fuchs (2011), p. 116; Wright and Weigert (2019).
  24. Spekkens (2005).
  25. Caves et al. (2004), §3.D.
  26. Richman and Bridges (1999); Hrushovski and Pitowsky (2004).
  27. Hamhalter (2003).