## A Short History of Quantum Reference Frames, Part 2

Yesterday, I began outlining the history of quantum reference frames, starting with Marco Toller’s paper in 1977. What we saw were two somewhat separate lines of development. The first was a direct line from Toller through Rovelli and back again to Toller, in which a generalized (and distinctly operational) notion of a reference frame is developed that covers both quantum and classical systems. The second was a line beginning with Holevo and leading up through Peres and Scudo, in which a protocol is developed for sending information about a Cartesian coordinate system via a quantum channel.

The question I have is, where do the lines converge? Specifically, when on the informational side do people start referring to reference frames in the more general sense? Pinpointing an exact instance is difficult, but an early paper by Bartlett, Rudolph, and Spekkens (hereafter referred to as BRS) in 2003 seems to suggest that Toller’s idea had made its way into the quantum information community by that time. The first few sentences of their paper read:

> Quantum physics allows for powerful new communication tasks that are not possible classically, such as secure communication and entanglement-enhanced classical communication. In investigations of these and other communication tasks, considerable effort has been devoted to identifying the *physical* resources that are required for their implementation. It is generally presumed, at least implicitly, that a shared reference frame (SRF) between the communicating parties is such a resource, *with the precise nature of the reference frame being dictated by the particular physical systems involved*.

The emphasis on the word ‘physical’ in the second sentence is theirs. The emphasis in the last sentence is mine and is meant to show that Toller’s idea had gained some traction in the QI community by then. In a meaty review article published in 2007, BRS offered a slightly different take on Toller’s notion of a reference frame (and notably still did not cite his work, which makes me wonder whether they were, and still are, unaware of it, or whether they disagreed with his definition to some extent). They refer back to the work of Peres and Scudo and define two types of information: *fungible* and *nonfungible* (called ‘speakable’ and ‘unspeakable’, respectively, by Peres and Scudo). Fungible information is typically classical and refers to information for which the means of encoding does not matter. The example they give is that Shannon’s coding theorems don’t care one way or the other how the 0s and 1s are encoded (e.g. by magnetic tape, by voltages, or in some other manner). So this is fungible information. If, on the other hand, the encoding *does* make a difference to the information being encoded, then that information is said to be nonfungible (unspeakable). BRS then note that

> [w]e refer to the systems with respect to which unspeakable/nonfungible information is defined, clocks, gyroscopes, metre sticks and so forth, as *reference frames*.

The emphasis is theirs. How does this compare to Toller’s definition of a frame of reference as some material object that is of the same nature as the objects that form the system under investigation, as well as the instruments used to measure that system? It seems to me that the BRS definition is merely a more concise statement of Toller’s definition, which itself does not distinguish between fungible and nonfungible information. So I think we’re still talking about the same thing here.

Superficially, it is fairly easy to see that many quantum ideas are natural generalizations of classical ideas. For example, in his terrific (and freely available!) undergraduate course materials on the mathematics of theoretical physics, Karl Svozil defines a reference frame (i.e. a “coordinate system”) as a linear basis. This is, arguably, a non-operational alternative to Toller’s definition, but I think the two can essentially be made equivalent. In fact, he launches into a detailed discussion of the motivation behind defining the Cartesian reference frame, which he refers to as “the Cartesian basis.” So the Cartesian frame of reference is just a special case of the broader idea of a degree of freedom, as I pointed out in yesterday’s post. The point of all this is that even engineers will recognize that even the classical notion of a reference frame is more than purely Cartesian (aerospace engineers, for example, work in systems with six degrees of freedom: either the three spatial coordinates plus the Euler angles [referred to by flight dynamics people as pitch, roll, and yaw], or the three spatial coordinates plus the corresponding momenta in each of those directions, i.e. phase space).
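Svozil’s “reference frame = linear basis” idea can be made concrete in a few lines. The following is a toy sketch of my own (not from his notes): the same vector acquires different coordinates in different bases, which is exactly the sense in which a coordinate description is always relative to a chosen frame.

```python
import numpy as np

# The vector itself is frame-independent; only its coordinates change.
v = np.array([1.0, 2.0])

cartesian = np.eye(2)             # the "Cartesian basis"
theta = np.pi / 6                 # a second frame, rotated by 30 degrees
rotated = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

# Coordinates of v in a basis B satisfy B @ coords = v.
coords_cartesian = np.linalg.solve(cartesian, v)
coords_rotated = np.linalg.solve(rotated, v)

print(coords_cartesian)   # same numbers as v
print(coords_rotated)     # different numbers, same vector
```

Both coordinate lists describe the one vector `v`; only the choice of basis (frame) differs.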

At any rate, one of the key results given in the BRS review has to do with the relation between quantum systems and classical systems. It is built on the notion of a superselection rule (SSR), which I briefly mentioned in yesterday’s post. SSRs were introduced by Wick, Wightman, and Wigner in 1952. In essence, they are rules that place limits on certain types of measurements, but are perhaps better understood as rules that prohibit coherent superpositions of certain states. What BRS showed in their review was that SSRs are formally equivalent to the *lack* of a shared *classical* reference frame (in the general sense of the term). So, for instance, Wick, Wightman, and Wigner suggested that an SSR existed for states of opposite charge, i.e. one would never see a superposition of positive and negative charge. It makes perfect sense that this is equivalent to the lack of a *classical* reference frame for charge. In other words, we will never find a metal plate that is *simultaneously* positively charged and negatively charged. It is always one or the other or neutral. This might pose a problem for two parties (Alice and Bob again) who might, for some bizarre reason, have independently designed communication devices that rely on different charges. In a highly contrived example, suppose Bob lives on a planet that developed computers that run on positive charge somehow (maybe ionized atoms, for example), whereas Alice lives here on earth where our computers run on negative charge. If both use voltage readings to determine the 0s and 1s in their binary codes, then they will get completely opposite results if they try to interpret each other’s machines. This is a completely classical issue.
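The “SSR = missing classical frame” correspondence can be illustrated with a toy numerical sketch (my own illustration, not BRS’s notation): an observer who lacks a phase reference conjugate to charge can only describe a state averaged over an unknown U(1) rotation, and that averaging wipes out exactly the superpositions the SSR forbids.

```python
import numpy as np

# Toy "charge qubit": basis |+q> = (1, 0), |-q> = (0, 1).
Q = np.diag([+1.0, -1.0])                     # charge operator

def twirl(rho, n=1024):
    """Average rho over a uniform, unknown phase reference (the "twirl")."""
    out = np.zeros_like(rho, dtype=complex)
    for theta in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        U = np.diag(np.exp(1j * theta * np.diag(Q)))
        out += U @ rho @ U.conj().T
    return out / n

# An SSR-violating superposition of opposite charges...
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())

# ...looks, to the frame-less observer, like a classical 50/50 mixture:
# the off-diagonal (coherence) terms average to zero.
print(np.round(twirl(rho).real, 6))
```

The surviving diagonal entries are just classical probabilities, which is precisely the statistics a charge superselection rule dictates.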

On the other hand, at the quantum level, states exist that are a superposition of opposite charges despite the SSR (the first suggestion of this for charge was by Aharonov and Susskind in 1967, in the paper that essentially launched our work on CPT symmetry). So while Alice and Bob may lack a *classical* reference frame for their computers, a common *quantum* reference frame can be established by using these superposition states as building blocks. Understanding this point really requires understanding precisely what a reference frame is, as well as how it is used in communicating information between two parties. In other words, it requires understanding the difference between fungible and nonfungible information. It’s just another example of how an information-theoretic view can shed light on some of the deepest (and seemingly uncontroversial) problems in physics.
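The escape route can also be sketched numerically (again a toy illustration of my own, with a hypothetical encoding): if the information is carried by the *relative* charge of two systems, then the unknown phase reference hits both systems identically and the coherence survives untouched.

```python
import numpy as np

# Two "charge qubits", basis order |++>, |+->, |-+>, |-->,
# with total charges (+2, 0, 0, -2).
total_charge = np.array([+2.0, 0.0, 0.0, -2.0])

def collective_twirl(rho, n=1024):
    """Both systems see the SAME unknown phase reference."""
    out = np.zeros_like(rho, dtype=complex)
    for theta in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        U = np.diag(np.exp(1j * theta * total_charge))
        out += U @ rho @ U.conj().T
    return out / n

# A superposition living entirely in the total-charge-zero sector:
psi = np.zeros(4, dtype=complex)
psi[1] = psi[2] = 1.0 / np.sqrt(2.0)          # (|+-> + |-+>)/sqrt(2)
rho = np.outer(psi, psi.conj())

# Its coherence survives the twirl: the relational degree of freedom
# functions as a shared *quantum* reference frame.
assert np.allclose(collective_twirl(rho), rho)
print("relational coherence survives the twirl")
```

Contrast this with the single-system example above, where the same averaging destroyed the coherence; here the superposition sits in a single charge sector, so the twirl acts trivially on it.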

June 21, 2016 at 10:23 am

First, I am outside looking in: an engineer reading to keep the brain working, I guess. But the quantum frame seems surprisingly basic. Why was it not studied for so long? The Schrödinger equation for a free particle involves a partial derivative with respect to “x”, so doesn’t that mean the specific plane wave solution for the wave equation will depend for its direction on the “x” chosen, that is, depend on the inertial frame that’s chosen for the analysis? So there could be a plane wave going in whatever direction one chooses, just from choosing a different inertial frame which establishes the direction of “x.” And then, if the frame of proper time is chosen, momentum is zero. That is, in the Hamiltonian operator, the partial derivative of the wave function with respect to “x” vanishes, making the wave a constant, which then would have to be normalized over a finite space in order to get a non-zero probability of the particle being anywhere. It’s a bunch of solutions disagreeing with each other, all due to the choice of inertial frame. And it seems like it was there to begin with. But according to the literature you cite, for some reason it was not worked out till 1977! I wonder why.

LB