A quantum accelerometer using relativity

I haven’t posted in quite a while since I’ve been focused on Twitter, but the occasion of sending my latest co-authored paper to the arXiv seems a good opportunity to post something a bit longer, particularly since it relates to a series of blog posts I wrote about five years ago concerning a neutral pion in a large spherical cavity. The more general idea behind those posts was the concept of frustrated spontaneous emission (a rather unfortunate name for anyone with a sixth-grade sense of humor). Imagine a simple two-level quantum system that we’ll assume, for simplicity, emits a photon when it transitions from the excited state to the ground state. As most every first- or second-year undergraduate physics student should know, adding spatial constraints to a quantum system constrains the energy levels of the system. This is the famous “particle-in-a-box” model. It’s related to the idea behind “Planck’s oven,” which is that only certain wavelengths of light can exist inside a finite-sized oven. When a constrained quantum system transitions from one energy level to a lower one, conservation of energy requires that the energy lost by the system be taken up by something else. In atomic transitions, for instance, that energy is released as a photon. In other words, the amount of energy lost by the system, which is the difference between its initial and final energies, E_i - E_f = \Delta E, must equal the energy of the photon, hc/\lambda. But since the allowed wavelengths \lambda are set by the size of the constraining object (i.e. the “box”), not all transitions will be allowed. For example, suppose a simple quantum system has just two energy levels that we’ll call |e\rangle for the excited state and |g\rangle for the ground state. Suppose that the transition |e\rangle\to|g\rangle produces 4 eV of energy in the form of a photon.
That means the photon must have a wavelength of \lambda=hc/E=(1240\textrm{ eV}\cdot\textrm{nm})/(4\textrm{ eV})=310\textrm{ nm}. But suppose that the two-level system is in a box that only allows wavelengths of 100 nm, 200 nm, 300 nm, etc. (I’m just making these numbers up—they don’t necessarily correspond to any realistic system). Then the quantum system will be stuck in its excited state since the photon it would have to emit in order to transition to the lower state is not allowed to exist inside the box.
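The arithmetic above is simple enough to sketch in a few lines of Python. This is just a toy check of the frustrated-emission condition; the function names are mine, and the list of allowed modes matches the made-up numbers from the text:

```python
# hc in the convenient units used above: 1240 eV·nm
HC_EV_NM = 1240.0

def photon_wavelength_nm(delta_e_ev):
    """Wavelength (in nm) of the photon carrying off a transition energy given in eV."""
    return HC_EV_NM / delta_e_ev

def emission_allowed(delta_e_ev, allowed_wavelengths_nm, tol_nm=1e-9):
    """True if the required photon wavelength matches one of the box's allowed modes."""
    lam = photon_wavelength_nm(delta_e_ev)
    return any(abs(lam - a) < tol_nm for a in allowed_wavelengths_nm)

# The made-up modes from the text: 100 nm, 200 nm, 300 nm, ...
allowed = [100.0 * k for k in range(1, 11)]
print(photon_wavelength_nm(4.0))       # 310.0
print(emission_allowed(4.0, allowed))  # False: the photon has no mode to occupy, so the system stays excited
```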

So now suppose that we have one of these quantum systems that is in its excited state but is in a box that won’t allow it to decay. In other words, it is in a state of frustrated spontaneous emission (pause for sophomoric giggling). According to relativity, if we accelerate the box and the quantum system together, the box will Lorentz contract. At some point, if it continues to accelerate and thus contract, it will reach a length that is compatible with the wavelength of the photon. At this point, the system can now transition between the excited state and the ground state. Such a system, if it could be realized, would then be useful as an accelerometer.
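As a back-of-the-envelope illustration of the contraction mechanism, here is a short Python sketch. The numbers are made up (a cavity slightly longer than the 310 nm photon from the earlier example), the function names are mine, and this is only the instantaneous-velocity picture of Lorentz contraction; it glosses over the subtleties of properly accelerated frames that the actual paper handles:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def contracted_length(l0, v):
    """Observed length of an object with proper length l0 moving at speed v."""
    return l0 * math.sqrt(1.0 - (v / C) ** 2)

def speed_for_contraction(l0, l_target):
    """Speed at which an object of proper length l0 appears contracted to l_target."""
    return C * math.sqrt(1.0 - (l_target / l0) ** 2)

# Toy numbers: a 320 nm cavity that must contract to the 310 nm photon wavelength.
l0, l_target = 320e-9, 310e-9
v = speed_for_contraction(l0, l_target)
print(v / C)                      # fraction of c at which the cavity hits resonance
print(contracted_length(l0, v))   # recovers the 310 nm target length
```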

That, of course, is the simplified description of our idea. To be rigorous, we approached this problem from the standpoint of having a two-level system weakly coupled to a 1+1-dimensional cavity and used the Klein-Gordon equation to model the transition probabilities for the resonant and off-resonant modes. The proposal utilizes the Purcell effect. I would encourage those of you who are well-versed in the physics to read the paper and offer your comments and suggestions. We’re working on submitting to a journal soon. The paper was done in collaboration with Andrzej Dragan at the University of Warsaw and his undergraduate student Kacper Kożdoń. (That’s the same Andrzej Dragan as the Cannes Lion Award finalist and creator of the Dragan photo effect.)

Regarding actual implementation of this idea, we have several ideas in mind including a solid-state analogue. I’m hoping to get one of my own students working on it soon. There are also some intriguing aspects of the Purcell effect in relativistic settings that are worth exploring.

Finally, it should be noted that, due to the equivalence principle, such a device would also be able to measure motion in gravitational fields and might offer a means by which gravitational field anomalies could be mapped to high precision.

The legacy of Martin Luther King, Jr.

This is my first blog post in more than six months and it’s not about science, but it’s something that needs to be said (besides, who actually reads this blog anyway?).

Today, January 15, 2017, would have been the Rev. Dr. Martin Luther King, Jr.’s 88th birthday. I attend a Unitarian Universalist church here in Maine and today’s service was devoted to Dr. King’s words and legacy. As any good service will do, it got me thinking.

During today’s service, my friend Bruce wondered what the world might have been like had Dr. King lived. It is worth mentioning that Bruce grew up in Memphis. He was ten years old when Dr. King was shot. He remembers waiting in the car for his mother outside a store when she suddenly ran out to tell him the news.

I wasn’t alive in the ’60s, but it seems to me that our country had reached a pivotal moment by the end of the decade and was faced with a choice. The ’40s and ’50s had been decades in which the white middle class had grown substantially. America was at its economic apex and the two major political parties were both still relatively moderate, continuing a trend that began in the ’30s (not by coincidence, this era saw some of the greatest progress in American history). The nation was in the midst of a great social experiment and while African Americans had not immediately benefited from the economic expansion, thanks to people like Dr. King they had finally won many hard-fought freedoms. There was still much to be done but it seemed like a great deal of social and economic progress had been made. The year 1964 was particularly pivotal in that it saw the signing of the Civil Rights Act and LBJ’s declaration of a “war on poverty.”

The trouble was, as Dr. King astutely noted, such a war could never be won if the nation persisted in spending more money on military projects than on poverty itself. It’s hard to say when things began to unravel. I’m not a historian and I wasn’t born until February of 1974, one month prior to the lifting of the oil embargo that some historians consider the point at which the economic gains of the previous three decades began their long, steady decline.

To be sure, a good deal of progress has still been made since then—gay marriage is, for now, the law of the land and, despite his faults (which were many) and the vitriol he faced (which was considerable) we had a black president for eight years. Yet it seems to me that these things were accomplished in spite of nearly overwhelming contrary efforts and opposition. As one woman at my church pointed out this morning, Dr. King would likely be fighting the same exact battles he fought in the ’50s and ’60s were he alive today. What happened?

The way I see it, we fell victim to simple greed. The greed was then further mined for fear by a society that increasingly promoted a Randian sense of individual superiority over mutual interest. “Us” became “me.” Neo-liberalism or anarcho-capitalism or neo-corporatism or whatever one wishes to call it, reduced a person’s worth to what they could produce. Certain elements took advantage of rising materialism to begin to erode the four-decade-old social compact that had so painstakingly been built (and was still being built). Government, which is merely a tool, no different from any other tool, was (and is still) relentlessly derided as philosophically evil. The entire concept was bad because it might (could!) be used to suppress our individual materialist tendencies. In a political system built on compromise, it’s difficult to make progress when one side considers the entire process to be illegitimate.

Such a situation begs to be exploited. Those who were unhappy with the social progress of the ’50s and ’60s figured out that people will care less about minorities if their new-found religion of materialism and unchecked individuality felt threatened. It was a convenient way to deny minorities truly equal rights without actually bringing back segregation (yet). They were playing the long game anyway, counting on the fact that they could eventually find a way to dismantle even the very protections then (and, for now, still) enshrined in law. The de-legitimization of government in general and the anarcho-capitalist mantra turned out to be the perfect cover.

And so it is that Dr. King, were he alive today, would see the slow return of segregation in bathrooms and courthouses and bakeries around the country. Some of it, of course, is legally imposed since, to the cynically exploitative enemies of progress, government is only evil when it works against their own interests. Dr. King would see active voter suppression and a minority rule that showed, in some places (notably North Carolina), a disturbing similarity to the very earliest days of South Africa. It is worth remembering that the Cape Colony, one of the founding states of the Union of South Africa, included multi-racial suffrage and equal rights in its earliest days. Dr. King would see that, despite a general decline in what is officially designated “poverty,” there is an increasing wealth gap that calls into question the very meaning of the word “poverty.” Dr. King would see the very same maladies affecting our nation now as did in 1968.

Dr. King believed a radical revolution was necessary back then. He believed we were increasingly on the wrong side of history. He believed that we had taken on

the role of those who make peaceful revolution impossible by refusing to give up the privileges and pleasures that come from the immense profits of overseas investments.

He felt that

we as a nation must undergo a radical revolution of values. We must rapidly begin to shift from a “thing” orientated society to a “person” orientated society. When machines and computers and profit motive and property rights are considered more important than people, the giant triplets of racism, militarism, and economic exploitation are incapable of being conquered.

He was unequivocal. He believed that

[o]ne day we must come to see that the whole Jericho road must be changed so that men and women must not be constantly beaten and robbed as they make their journey on life’s highway. True compassion is more than slinging a coin to a beggar. A true revolution of values will soon look uneasily on the glaring contrast of poverty and wealth with righteous indignation.

These quotes are all culled from a sermon he gave on April 30, 1967 in which he railed against the very anarcho-capitalism and neo-corporatism (which are spiritually kin) that threatens the progress people like Dr. King gave up their lives for. He railed against a

nation that continues year after year to spend more money on military defense than on programs of social uplift

predicting that it would soon lead to “spiritual death.”

At the same time he was hopeful that the United States, and indeed all the nations of the West, would not forget their revolutionary roots. But the revolution required changing our loyalties to mankind as a whole, developing a

worldwide fellowship that lifts the neighborly concern beyond one’s tribe, race, class and nation.

I hasten to add that it must rise above our own individual self-interests as well. Indeed it is this rampant cult of the “self” that has done more to undermine Dr. King’s legacy than anything else.

As hopeful as he was, he was cognizant of the fact that the vision of an end to poverty put forth by LBJ in 1964 was already showing signs of wear in 1967. Speaking out against the war in Vietnam, he said that there was an obvious

and almost facile connection between the war in Vietnam and the struggle I and others have been waging in America. A few years ago, there was a shining moment in that struggle. It seems as if there was a real promise of hope for the poor, both black and white, through the poverty program. There were experiments, hopes, a new beginning. Then came the build-up in Vietnam and I watched the program broken up as if it was some idle political plaything … and I knew that America would never invest the necessary funds or energies in rehabilitation of its poor so long as ventures like Vietnam continued to draw men and skills and money like some demonic destructive suction tube.

He even went so far as to acknowledge the criticisms leveled at his non-violent approach, the most devastating of which was the fact that the nation as a whole had undertaken a violent solution to the problem in Vietnam. One can almost sense a crack of self-doubt in this passage of the sermon. The critique, in his own words, had

hit home and I knew that I could never again raise my voice against the violence of the oppressed in ghettoes without having just spoken clearly to the greatest purveyor of violence in the world today—my own government.

In a statement that holds as true today as it did then, he decried those who equated dissent with disloyalty, calling it a

dark day in our nation when high level authorities will seek to use every method to silence dissent.

Dr. King was clear that his condemnation of the war in Vietnam and the actions of America did not mean that he, in any way, endorsed the atrocities carried out by others. He was not offering his support for Castro or North Korea and he even went so far as to say that, despite being a pacifist, he would have likely fought against Hitler, whom he called

such an evil force in history.

What he decried was the encroaching neo-liberal worldview that, today, takes many forms but whose constant, underlying themes include profit at all costs, the reduction of others’ self-worth to their economic output, and a promotion of the self as the highest of ideals. It is this latter point that has perhaps been the most insidious, for it is through a cultivation of the self that we have lost any sense of empathy for others; we have, to paraphrase Dr. King, sacrificed truth at the altar of self-interest. And one of the many truths we have sacrificed is that we need one another. Self-interest and the myth of complete self-reliance produce complacency toward the suffering of others. Dr. King was not kind to such people, noting that he agreed with Dante

that the hottest places in Hell are reserved for those, who in a period of moral crisis, maintained their neutrality. There comes a time when silence is betrayal.

It is worth keeping all this in mind as we prepare to usher in the new administration in Washington, one that, though not even in office yet, appears poised to roll back much of the progress of the past seventy years and to accelerate the growing gap between those at the top and the rest of us. As Dr. King used to say, Pharaoh stayed in power by pitting his slaves against one another.

Dr. King called for unconditional love of all mankind and said

[w]hen I speak of love I am not speaking of some sentimental or weak response. I am speaking of that force which all of the great religions have seen as the supreme unifying principle of life.

He closed this sermon by saying

I have not lost faith, I am not in despair because I know there is a moral order. I have not lost faith because the arch [sic] of the moral universe bends towards justice.


The Collatz conjecture

I recently came across the book Mathematics: An Illustrated History of Numbers, which appears to be part of a series. One of the fun “ponderables” (as they are called) considered in the book is the Collatz conjecture.

The Collatz conjecture goes something like this. Pick a number n. If n is even, divide it in half. If it’s odd, then multiply it by three and add one. Repeat the process with the new number. Written as a function:

f(n) = \begin{cases} n/2 & \textrm{if } n \textrm{ is even} \\ 3n+1 & \textrm{if } n \textrm{ is odd} \end{cases}
The conjecture is that eventually, regardless of the size of the initial value of n, the sequence will converge to 1.
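The rule is trivial to play with in Python. Here's a minimal sketch (the names are mine) of the map and of the total stopping time mentioned below:

```python
def collatz_step(n):
    """One application of the Collatz map: halve if even, else 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def total_stopping_time(n):
    """Number of steps for n to reach 1 (never returns if the conjecture fails!)."""
    steps = 0
    while n != 1:
        n = collatz_step(n)
        steps += 1
    return steps

print([total_stopping_time(n) for n in range(1, 10)])
print(total_stopping_time(27))   # 111 steps, famously long for such a small start
```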

In 1970 H.S.M. Coxeter offered a $50 reward to anyone who could prove the conjecture. Paul Erdős later upped the ante to $500. More recently, according to a nice overview by Jeffrey Lagarias (who also edited a volume dedicated to the conjecture), Bryan Thwaites offered £1000.

A lot of time and effort has been put into studying the rate of convergence (or, rather, the total stopping time) for larger and larger values of n. Interestingly enough, there is a heuristic argument based on probabilistic reasoning. If you consider only the odd numbers in the sequence, each odd number ends up being, on average, 3/4 of the previous one. This argument, when extended, suggests that sequences should not diverge, but it is only heuristic: it is not a proof, and in particular it can’t rule out other cycles, i.e. loops of numbers other than 1 that a sequence could settle into. In addition, though we know the conjecture holds for every integer up to at least 100 million (thanks to the power of computing), this cannot be seen as proof. Indeed, there are other conjectures that turn out to fail only for very large numbers (e.g. the Pólya conjecture was disproven in 1958 by Haselgrove, who found a counterexample that was roughly equal to 1.845\times 10^{361}).
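The 3/4 heuristic is easy to probe numerically. Below is a small Python sketch (the names are mine) that samples random odd starting points, jumps from each to the next odd number in its Collatz sequence, and estimates the geometric mean of the ratio between consecutive odd terms:

```python
import math
import random

def next_odd(n):
    """From an odd n, apply 3n+1, then halve until the result is odd again."""
    n = 3 * n + 1
    while n % 2 == 0:
        n //= 2
    return n

# Estimate the typical multiplicative factor between consecutive odd terms.
random.seed(0)
log_ratios = []
for _ in range(10_000):
    n = 2 * random.randrange(1, 10**6) + 1   # a random odd starting point
    log_ratios.append(math.log(next_odd(n) / n))

geo_mean = math.exp(sum(log_ratios) / len(log_ratios))
print(geo_mean)   # hovers near 3/4, as the heuristic predicts
```

The 3/4 comes from the fact that 3n+1 is even, and each further halving happens with probability 1/2, so on average the 3n+1 is followed by two halvings.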

Might this be interesting to the physics community or is this a purely number theoretic problem? The study of the iterates of measure-preserving functions on a measure space, i.e. dynamical systems that include an invariant measure, is known as ergodic theory. In quantum mechanics, for instance, trace-preserving operations play a role analogous to measure-preserving maps. So while the Collatz conjecture might or might not have a direct physical corollary, proving (or disproving) it could have implications for ergodicity in general that might prove useful in physical systems.

But even if it had no useful physical corollary, it’s still a neat problem…


Mathematica: A world of numbers … and beyond

My favorite exhibit at Boston’s Museum of Science is called Mathematica: A world of numbers … and beyond. It had been closed for a while as it was moved to a back corner, only reachable by walking through the Theater of Electricity (which contains the world’s largest air-insulated Van de Graaff generator). I’m a little disappointed by this simply because it seems unlikely to get the same amount of traffic that it used to get when it was on the main level where everyone walked past it.

The exhibit was designed by Charles and Ray Eames, who are famous for, among other things, the short film Powers of Ten and the Eames Lounge Chair. Mathematica originally opened at the California Museum of Science and Industry (now the California Science Center) in March of 1961 after IBM was asked to contribute something to the then-relatively new museum. It finally closed in 1998 (the same year the museum changed its name).

In November of 1961 an exact duplicate was made and placed in Chicago’s Museum of Science and Industry. This duplicate version was moved to Boston in 1980. A second duplicate had several homes over the years including at IBM’s headquarters, but now resides with the Eames family who apparently will display portions of it at their office from time to time.

It is my sincere hope that this fantastic exhibit never goes away. It manages to convey complex mathematics in ways to which people can relate. It also demonstrates the beauty and whimsy of mathematics in a way that could only have been captured by someone with a background in design.

My son has always been a big fan of the Eameses and I’m beginning to appreciate their aesthetic. Certainly this exhibit is a triumph of their ability to work across disciplines. I just hope that it sticks around for another thirty years.

Reflections on a pale, blue dot

This summer marks the tenth anniversary of this blog. I don’t get many page views, but then I don’t post as often as I used to. The world seems to have moved to Twitter and I have less time to write deeply about things than I used to.

That said, I gave a sermon at the Unitarian church I attend back in April and have been meaning to post it for some time. I think it is perhaps appropriate given the current state of the world and the ongoing presidential campaign here in the US. So here it is:

On September 5, 1977 NASA launched the Voyager 1 probe on a mission to study the outer Solar System, a mission that continues to this day, more than 38 years later. Initially, the probe was only expected to work through its encounter with Saturn. When it passed the planet in 1980, astronomer Carl Sagan proposed to NASA the idea of having the spacecraft turn around and take one last picture of earth. He knew that such a picture would have little scientific value given how small the earth would appear in the image, but felt that it could have a meaningful impact on our perspective regarding our place in the universe. It took Sagan nearly a decade — a decade! — to convince NASA to take the picture, which they finally did on February 14, 1990. It inspired Sagan to write the following words:

From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Consider again that dot. That’s here. That’s home. That’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there — on a mote of dust suspended in a sunbeam.

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that in glory and triumph they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner. How frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Our posturings, our imagined self-importance, the delusion that we have some privileged position in the universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity — in all this vastness — there is no hint that help will come from elsewhere to save us from ourselves.

…There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another and to preserve and cherish the pale blue dot, the only home we’ve ever known.

Of course, even this image may not be sufficiently humbling to some. After all, the exploration of our Solar System might seem a bit routine these days. Supposedly even going to the moon became a bit normalized by the completion of the Apollo program, 13 months prior to my birth. And we haven’t been back since. So let me try to put some of this into perspective for you. If the age of the universe were condensed into a single year, the first multi-cellular life wouldn’t appear until the beginning of the final month and the entirety of human history would comprise the last 13 minutes of that year. If the diameter of the solar system were the width of a human hair, the universe would be (very approximately) 6000 miles wide. There are approximately 300 billion stars in our galaxy and approximately 100 billion galaxies in our universe. Simply put, we are but a tiny, almost imperceptible, mote of dust in a random corner of a vast cosmos.

Sometimes I naively think to myself, if I could just show Donald Trump or an ISIS commander the view from Height of Land just south of Rangeley, overlooking Mooselookmeguntic Lake and the mountains, I think maybe they’d glimpse something more than themselves. But then I realize that this would not likely change their worldview. And not necessarily for the reason you might think.

Let me state, right off the bat that I am not a complete relativist. There are absolute truths in the world. There are some things that are objectively wrong and some that are objectively right. Nevertheless, the world is emphatically NOT black and white. There are vast areas of nuance and grayness in between the right and the wrong. We’ve gotten used to looking at our world in a bimodal way. It’s a popular (and annoying) rhetorical trick to turn a nuanced comment into an all-or-nothing statement as a ploy to make someone look bad or to score points. Because, after all, a bimodal view of the world is a world of winners and losers and winning is everything in such a world.

But that vast grayness in the middle is where most of us live even if we tend to see it as black and white. Because what even the best of us (and I’m not even remotely close to the best) too often misses are the true roots of that nuance. We mistake sympathy for empathy and empathy for approval. While awe and wonderment may cause us to turn inwardly reflective — which is not a bad thing! — or outwardly reflective — also not a bad thing — it rarely causes us to turn objectively reflective.

So what do I mean by “objectively” reflective? Truthfully I had a hard time coming up with a word or phrase to describe the idea I was trying to get across. This was not the first thing I wrote down. But the idea is this. Human beings like narrative. We like to fill in the blanks and we like to read between the lines. In fact I have recently come to the conclusion that this is the single biggest failing of our criminal justice system, but that’s for another talk. We like stories. We don’t like fragments. And when we are presented with fragments — which is almost always! — we fill in the gaps of the narrative. And quite naturally, that narrative is colored and shaped by our own experiences. Sometimes that narrative turns out to be mostly right, but more often than not, it ends up reflecting our own emotions and experiences. To once again quote Carl Sagan, “Human beings have a demonstrated talent for self-deception when their emotions are stirred.” Being objectively reflective requires that we, at the very least, recognize that we are filling in a narrative that may or may not be true (more often than not, the truth is muddled anyway). Preferably, being objectively reflective asks us to free ourselves from any one, particular narrative. It asks us to clear our minds of pre-conceptions.

But how do we do that? It’s certainly not easy and I wish I could give you a simple, straightforward answer that you could file away in the back of your mind to pull out at appropriate times. But the truth is that I can’t. It’s a constant struggle. But I can at least share with you what has guided and continues to guide my thinking. It all begins with that pale blue dot. It begins with the question “why?” but never, ever ends with the answer to that question because final answers are rare — they do exist, but they are amazingly rare. Clearing our minds of pre-conceptions means never being fully satisfied with the answers. There should always be something more to learn, more to understand — and this is true even if the answers are “final” in a certain sense. This requires listening — deep listening — and a good deal of humility. The universe is a mysterious place that often defies common sense. Understanding it requires recognizing that the truth is often elusive and hard to pin down.

Of course, the danger in this is that we could fall into the trap of doubting everything. There are people who do not believe the moon landings were real and no amount of evidence will sway their opinion. A former acquaintance of mine stopped speaking to me last fall because he is absolutely convinced that anthropogenic global warming is a vast, worldwide conspiracy aimed at using taxpayer funds to support the “lavish” lifestyles of climate scientists and, since I foolishly attempted to explain the actual science to him, found myself included in the conspiracy. He uses doubt as a tool to enforce his narrative. We can’t fall into that trap any more than we can fall into the trap of surety. It’s a balance, one we won’t always get right. But it comes from recognizing both the bigger picture as well as how the pieces of the picture fit together. It comes from realizing that we truly are inconsequential in the grand scheme of the universe, but also realizing that we’re not inconsequential to one another. It means realizing that what my former acquaintance sees is real to him. It may be (in fact it is) a complete fantasy. But it’s not a fantasy to him and it is built up from legitimate concerns that shouldn’t be derisively dismissed. Getting at those legitimate concerns requires objectively reflecting on the origin of his anger and not automatically building a narrative in my own head about why he is the way he is. I may or may not ever speak to him again. That doesn’t mean that I shouldn’t try to objectively understand him.

I had this sermon written yesterday and loaded onto my iPad, ready to go. And then I went to see a show at Portland Stage. The show was a stage adaptation of Chaim Potok’s book My Name is Asher Lev. I’d never read the book, though I knew of it, so I had no idea what to expect. In it a hasidic Jew — Asher Lev — grows up to become a famous painter. The story deals with Asher’s struggle to reconcile two conflicting worlds (both of which he inhabits) and simultaneously deals with his father’s struggle to understand Asher. What struck me about it was how it dealt with nuance and metaphor, and how it dealt with our struggle, as humans, to find our place in this surprising and often unpredictable world. And I was struck by how it managed to show how a small piece fits in with a much larger puzzle.

Which brings me to one final quote from Carl Sagan who is vastly more lyrical than me.

As the ancient myth makers knew, we are children equally of the earth and the sky. In our tenure on this planet we’ve accumulated dangerous evolutionary baggage — propensities for aggression and ritual, submission to leaders, hostility to outsiders — all of which puts our survival in some doubt. But we’ve also acquired compassion for others, love for our children and desire to learn from history and experience, and a great soaring passionate intelligence — the clear tools for our continued survival and prosperity. Which aspects of our nature will prevail is uncertain, particularly when our visions and prospects are bound to one small part of the small planet Earth. But up there in the immensity of the Cosmos, an inescapable perspective awaits us. There are not yet any obvious signs of extraterrestrial intelligence and this makes us wonder whether civilizations like ours always rush implacably, headlong, toward self-destruction. National boundaries are not evident when we view the Earth from space. Fanatical ethnic or religious or national chauvinisms are a little difficult to maintain when we see our planet as a fragile blue crescent fading to become an inconspicuous point of light against the bastion and citadel of the stars. Travel is broadening.

Partially moving to Twitter

It’s been six months since I posted anything. This is mostly because I lack the time to put serious work into blogging. However, I have finally decided to join the world of Twitter and so Quantum Moxie will mostly be moving over there, though I plan to post a few longer pieces here once in a while. I’ve had bouts of success here but I think I can do more with Twitter than I can with this. Nevertheless, this site is not going away. Ideally I would love to figure out how to post to here from Twitter (rather than just have my Twitter feed embedded on the side). I’ll continue to work on that and I thank you (all two or three of you!) for your continued support!

The Weirdness of Neutrinos: beyond the 2015 Nobel Prize

As most of you are probably aware, the Nobel Prize in Physics was awarded this week to Takaaki Kajita and Arthur B. McDonald “for the discovery of neutrino oscillations, which shows that neutrinos have mass”. This is truly a fantastic discovery and one that is long overdue for recognition. To be clear, what Kajita and McDonald won the award for was the experimental verification of neutrino oscillations. The theoretical prediction of such oscillations was made as early as 1957 by Bruno Pontecorvo. The Standard Model long predicted three types of neutrinos, but it also predicted that they should be massless (this is one of the glaring holes in the Standard Model that people too often ignore). In 1968 Pontecorvo figured out that if neutrinos actually had mass then they could change into one another. This was suggested as a solution to the so-called solar neutrino problem. The individual neutrino flavors were subsequently detected experimentally (detections that led to previous Nobel Prizes in physics). What hadn’t been observed until Kajita and McDonald came along was the oscillation between the different types (masses), even though it was widely accepted that such oscillations existed.
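As an aside, the standard two-flavor oscillation formula is simple enough to play with numerically. Here is a minimal Python sketch (my own illustration, not taken from any of the papers above), using the textbook expression P = sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]) with roughly atmospheric-scale parameters (Δm² ≈ 2.5×10⁻³ eV², near-maximal mixing):

```python
import math

def transition_probability(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Two-flavor oscillation: probability that a neutrino produced in one
    flavor is detected as the other after travelling L_km at energy E_GeV.
    Textbook formula: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# A 1 GeV atmospheric-style neutrino:
p0 = transition_probability(0.0, 1.0)            # at the source: no oscillation yet
L_first_max = math.pi / 2 / (1.27 * 2.5e-3)      # distance to the first maximum (~495 km)
p_max = transition_probability(L_first_max, 1.0)  # probability peaks there
```

With maximal mixing the transition probability swings all the way from 0 at the source to 1 at the first oscillation maximum, which is why long-baseline and atmospheric experiments are sensitive to the effect.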

The thing is, neutrinos are puzzling for another reason. Neutrinos are neutral particles and so don’t have an electric charge. Unlike neutrons, however, they are fundamental which means, as far as we know, they can’t be broken down into constituent particles. This fact is important for what I am about to say. All particles that possess angular momentum also possess a magnetic dipole moment. So, for example, a dipole moment can arise from an electron’s orbit about a nucleus as well as its intrinsic spin. This is well-known, standard physics. Another piece of well-known, standard physics (Maxwell’s equations) tells us that magnetic fields are nothing more than electric fields viewed from a reference frame that is moving relative to the source of the electric field. This is why we typically refer to the singular field as “electromagnetic.” It’s just one field but it has two different (or seemingly different) behaviors depending on how it is viewed. The sources of such fields are electric charges. This actually helps explain the concept of magnetic moment. The presence of angular momentum in a system indicates the presence of relative motion between the portion of the system with the angular momentum and some observer. If the system possesses electric charge, then it makes sense that a magnetic field would be present as well due to the relative motion.

So now consider a neutron (not a neutrino just yet). It is a fermion and thus has a spin of 1/2 which suggests that it has a magnetic moment. In fact it does. But how does a neutral particle develop a magnetic moment if the source of magnetism ultimately has to be charge? Well, the standard way to answer this is to simply say that the neutron is a composite particle that is actually composed of quarks which are not neutral. So we could easily dismiss the magnetic moment of the neutron as being some kind of relic of the relative motion between an observer and the constituent quarks which do possess charge. But what about the neutrino?
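To make the “relic of the constituent quarks” idea concrete, here is a quick back-of-the-envelope sketch (my own illustration, not from the sources above) of the static constituent-quark model. Assuming equal up and down constituent masses, the quark charges (+2/3 and −1/3) give μ_d = −μ_u/2, and the nucleon spin-flavor wavefunctions give μ_p = (4μ_u − μ_d)/3 and μ_n = (4μ_d − μ_u)/3:

```python
from fractions import Fraction

# Constituent-quark-model moments. Assume equal u and d constituent masses,
# so mu_d = -mu_u/2 follows from the quark charges +2/3 and -1/3.
mu_u = Fraction(1)        # arbitrary units; only the ratio matters below
mu_d = -mu_u / 2

# Nucleon spin-flavor wavefunctions give these combinations of quark moments:
mu_p = (4 * mu_u - mu_d) / 3
mu_n = (4 * mu_d - mu_u) / 3

# The model predicts mu_n / mu_p = -2/3, close to the measured -0.685.
ratio = mu_n / mu_p
```

The predicted ratio of −2/3 ≈ −0.667, compared with the measured −0.685, is one of the classic successes of the quark model and is exactly why the neutron’s moment can be “dismissed” as an artifact of its charged constituents.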

This model would suggest that neutrinos shouldn’t have any magnetic moment at all because they are fundamental (i.e. they do not have any constituents). And yet, they do. But why? And how?

As it turns out, there is a connection between mass and magnetic moment according to ideas dating back to Dirac! Semi-classically, one can also show that any particle that possesses spin also possesses a magnetic dipole moment. Either way, however, since the Standard Model predicts a zero mass for the neutrino, any magnetic moment would suggest physics beyond the Standard Model. There are plenty of suggested minimal extensions of the Standard Model that produce a solution to this conundrum (e.g. you can use Feynman diagrams to show that there is a one-loop approximation of the neutrino as a “mixture” of a W+ and an e-). But none of these is universally accepted. And so we are left with another bizarre property of the neutrino: it is a neutral, fundamental particle that has electromagnetic characteristics! In short, it is the ultimate Standard Model contrarian.
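For a sense of scale, the standard one-loop result in the minimally extended Standard Model (due to Fujikawa and Shrock, 1980) is μ_ν = 3eG_F m_ν/(8√2π²), which works out to roughly 3.2×10⁻¹⁹ Bohr magnetons per eV of neutrino mass. Here is a quick numerical check of that number (my own, in natural units):

```python
import math

# Natural units (hbar = c = 1), energies in GeV.
G_F = 1.166e-5     # Fermi constant, GeV^-2
m_e = 0.511e-3     # electron mass, GeV
m_nu = 1e-9        # assume a 1 eV neutrino mass, just for scale

# One-loop moment (Fujikawa & Shrock): mu_nu = 3 e G_F m_nu / (8 sqrt(2) pi^2).
# Dividing by the Bohr magneton mu_B = e / (2 m_e) gives a dimensionless ratio:
mu_nu_over_mu_B = 3 * G_F * m_nu * m_e / (4 * math.sqrt(2) * math.pi ** 2)
# ~3e-19 Bohr magnetons: nonzero, but absurdly tiny.
```

That tininess is why no experiment has yet pinned down a neutrino magnetic moment directly; current bounds sit many orders of magnitude above this prediction.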

A Short History of Quantum Reference Frames, Part 2

Yesterday, I began outlining the history of quantum reference frames, beginning with Marco Toller’s paper in 1977. What we saw were two somewhat separate lines of development. The first was a direct line from Toller through Rovelli and back again to Toller, in which a generalized (and distinctly operational) notion of a reference frame is developed that covers both quantum and classical systems. The second was a line beginning with Holevo and leading up through Peres and Scudo, in which a protocol is developed for sending information about a Cartesian coordinate system via a quantum channel.

The question I have is, where do the lines converge? Specifically, when on the informational side do people start referring to reference frames in the more general sense? Pinpointing an exact instance is difficult, but an early paper by Bartlett, Rudolph, and Spekkens (hereafter referred to as BRS) in 2003 seems to suggest that Toller’s idea had made its way into the quantum information community by that time. The first few sentences of their paper read:

Quantum physics allows for powerful new communication tasks that are not possible classically, such as secure communication and entanglement-enhanced classical communication. In investigations of these and other communication tasks, considerable effort has been devoted to identifying the physical resources that are required for their implementation. It is generally presumed, at least implicitly, that a shared reference frame (SRF) between the communicating parties is such a resource, with the precise nature of the reference frame being dictated by the particular physical systems involved.

The emphasis on the word ‘physical’ in the second sentence is theirs. The emphasis in the last sentence is mine and meant to show that Toller’s idea had gained some traction in the QI community by then. In a meaty review article published in 2007, BRS offered a slightly different take on Toller’s notion of a reference frame (and notably still did not cite his work, which makes me wonder whether they were, and still are, unaware of it, or whether they disagreed with his definition to some extent). They refer back to the work by Peres and Scudo and define two types of information: fungible and nonfungible (referred to by Peres and Scudo as ‘speakable’ and ‘unspeakable’ respectively). Fungible information is typically classical and refers to information for which the means of encoding does not matter. The example they give is that Shannon’s coding theorems don’t care one way or the other how the 0’s and 1’s are encoded (e.g. by magnetic tape or by voltages or some other manner). So this is fungible information. If, on the other hand, the encoding of information does make a difference to the information being encoded, then that information is said to be nonfungible (unspeakable). BRS then note that

[w]e refer to the systems with respect to which unspeakable/nonfungible information is defined, clocks, gyroscopes, metre sticks and so forth, as reference frames.

The emphasis is theirs. How does this compare to Toller’s definition of a frame of reference as some material object that is of the same nature as the objects that form the system under investigation, as well as the instruments used to measure that system? It seems to me that the BRS definition is merely a more concise statement of Toller’s definition, which itself did not account for the difference between fungible and nonfungible information. So I think we’re still talking about the same thing here.

Superficially, it is fairly easy to see that many quantum ideas are natural generalizations of classical ideas. For example, in his terrific (and freely available!) undergraduate course materials on the mathematics of theoretical physics, Karl Svozil defines a reference frame (i.e. a “coordinate system”) as a linear basis. This is, arguably, a non-operational alternative to Toller’s definition, but I think the two can essentially be made equivalent. In fact he launches into a detailed discussion of the motivation behind defining the Cartesian reference frame which he refers to as “the Cartesian basis.” So the Cartesian frame of reference is just a special case of the broader idea of a degree of freedom as I pointed out in yesterday’s post. The point of all this is that even engineers will recognize that even the classical notion of a reference frame is more than purely Cartesian (aerospace engineers, for example, work in systems with six degrees of freedom – either the three spatial coordinates plus the Euler angles [referred to by flight dynamics people as pitch, roll, and yaw], or the three spatial coordinates plus the corresponding momenta in each of those directions, i.e. phase space).

At any rate, one of the key results given in the BRS review has to do with the relation between quantum systems and classical systems. It is built on the notion of a superselection rule (SSR) which I briefly mentioned in yesterday’s post. SSRs were introduced by Wick, Wightman, and Wigner in 1952. In essence, they are rules that place limits on certain types of measurements, but are perhaps better understood as rules that prohibit coherent superpositions of certain states. What BRS showed in their review was that SSRs are formally equivalent to the lack of a shared classical reference frame (in the general sense of the term). So for instance, Wick, Wightman, and Wigner suggested that an SSR existed for states of opposite charge, i.e. one would never see a superposition of positive and negative charge. It makes perfect sense that this is equivalent to the lack of a classical reference frame for charge. In other words, we will never find a metal plate that is simultaneously positively charged and negatively charged. It is always one or the other or neutral. This might pose a problem for two parties (Alice and Bob again) who might, for some bizarre reason, have independently designed communication devices that rely on different charges. In a highly contrived example, suppose Bob lives on a planet that developed computers that run on positive charge somehow (maybe ionized atoms, for example), whereas Alice lives here on earth where our computers run on negative charge. If both use voltage readings to determine 0’s and 1’s in their binary codes, then they will get completely opposite results if they try to interpret each other’s machines. This is a completely classical issue.
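Just to belabor the contrived example, here is a toy Python sketch (entirely hypothetical, of course) of what goes wrong when Alice and Bob lack the shared convention:

```python
def read_bits(voltages, positive_means_one):
    """Interpret raw voltage signs as bits. The mapping itself is pure
    convention -- it is the 'shared reference frame' of the example."""
    if positive_means_one:
        return [1 if v > 0 else 0 for v in voltages]
    return [0 if v > 0 else 1 for v in voltages]

signal = [+5, -5, -5, +5]   # hypothetical raw voltages on a wire

# Alice's (negative-charge) convention vs. Bob's (positive-charge) convention:
alice = read_bits(signal, positive_means_one=False)
bob = read_bits(signal, positive_means_one=True)
# Without a shared convention, Bob reads the complement of every bit
# Alice intended -- the same physical signal, two opposite messages.
```

The point is that the disagreement lives entirely in the convention, not in the physics of the signal, which is exactly what “lacking a shared classical reference frame” means here.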

On the other hand, at the quantum level, states exist that are a superposition of opposite charges despite the SSR (the first suggestion of this for charge was by Aharonov and Susskind in 1967 which is the paper that essentially launched our work on CPT symmetry). So while Alice and Bob may lack a classical reference frame for their computers, a common quantum reference frame can be established by using these superposition states as building blocks. Understanding this point really requires understanding precisely what a reference frame is as well as how it is used in communicating information between two parties. In other words, it requires understanding the difference between fungible and nonfungible information. It’s just another example of how an information theoretical view can shed light on some of the deepest (and seemingly uncontroversial) problems in physics.

A Short History of Quantum Reference Frames, Part 1

I was recently at the Relativistic Quantum Information – North (RQI-N) meeting at Dartmouth College where I presented some of the work I have been doing on quantum reference frames and how to use them to overcome certain superselection rules (SSRs), specifically the SSR associated with CPT symmetry. I received a lot of terrific comments and numerous discussions have been spawned from the presentation. In particular, I seem to have introduced a number of people to the concept of a quantum reference frame for the first time. In fact I’m meeting back up with a few folks next week to discuss some of these issues. In preparing for next week’s discussions, I decided to do a little historical research to find out where the concept originated and how it got to its current form. I thought a blog post on the topic might be somewhat useful to some people out there and it would also provide me with a permanent record of some of the background papers in the field.

So what is a quantum reference frame? The idea appears to have first been proposed by Marco Toller in 1977 in a paper entitled “An operational analysis of the space-time structure” that appeared in Il Nuovo Cimento B. Here I quote the abstract:

We discuss the concepts related to space-time in a quantum-relativistic theory by means of the analysis of the physical procedures used to construct a new frame of reference starting from a pre-existent frame (transformation procedures). The physical objects which form a frame of reference are allowed to interact with the other physical objects and follow the laws of quantum physics. We suggest that there are conceptual limitations which do not permit the exact realization of a transformation of the Poincaré group by means of physical procedures. We remark also that the operations performed in order to construct a frame of reference perturb the surrounding physical objects and are influenced by them. We propose some general theoretical schemes which take these facts into account and permit the separation of the geometrical effects of a transformation procedure from the physical ones. Finally we find the conditions which permit the construction of a Poincaré-invariant theory of the usual kind by means of the introduction of some ideal concepts which have no direct operational meaning.

In other words, the most general definition of a ‘frame of reference’ is as some material object that is of the same nature as the objects that form the system under investigation as well as the measuring instruments themselves (Bohr’s classical-quantum contrast notwithstanding). This idea was further developed by Aharonov and Kaufherr in 1984 in which they extended the principle of equivalence to quantum reference frames, and in a pair of articles written in 1991 by Carlo Rovelli (see here and here) which appear to have played some role in inspiring his relational interpretation of quantum mechanics. In this way, these ideas bear a striking resemblance to work attempted by Eddington in the 1930s and early 1940s (a topic I will leave for another blog post, but that served as the core topic of my long-forgotten PhD thesis).

Anyway, these ideas are clearly operational (Toller even uses the term in his original paper). They were, however, not necessarily informational, at least initially. However, in his 1982 book Probabilistic and Statistical Aspects of Quantum Theory, Alexander Holevo (who was just announced as the 2016 winner of the Claude E. Shannon award by the IEEE Information Theory Society) addressed the following question: can a system of N elementary spins (i.e. qubits, which weren’t yet named in 1982) be used to communicate, in a single transmission, the orientation of three mutually orthogonal unit vectors, i.e. a Cartesian reference frame? Holevo concluded that if the system had a well-defined total spin angular momentum J then, at best, only one of the three vectors could be communicated. A way around this limitation was found nearly two decades later by Bagan, Baig, and Muñoz-Tapia and, around the same time, Peres and Scudo found a way to do it with a single hydrogen atom. The idea was to allow two distant parties (i.e. our old friends Alice and Bob) to establish a common Cartesian reference frame simply using a quantum channel. Thus these papers, while informational in their focus, used the less general definition of a reference frame as a Cartesian coordinate system. In fact it is not entirely clear that any of these authors (or others working on similar ideas – see the previously mentioned paper by Bagan et al. for additional references) was aware of the more general definition of the reference frame originally proposed by Toller.

One of the key ideas in the early information-related papers was that the Cartesian frame, i.e. the concept of a spatial direction, could be encoded in a particle’s spin state. Somewhere along the line (it’s not quite clear to me yet exactly when) someone put these two ideas together and the more general concept of a quantum reference frame was born. It appears that somewhere around 2002 or 2003 someone realized that a spatial direction is an example of a degree of freedom. Of course even classical physicists – even many engineers – know that there are more general and abstract spaces that have more than three degrees of freedom (e.g. phase space). For decomposable systems, a distinction can be made between what might be called ‘collective’ degrees of freedom, i.e. those between a system and something external to it, and ‘relative’ degrees of freedom, i.e. those between the system’s constituent parts. Several authors (including John Preskill, who was at RQI-N) recognized that encoding information into the collective degrees of freedom posed a number of problems. Beginning, to the best of my knowledge, in 1997 with a paper by Zanardi and Rasetti, encoding information into the relative degrees of freedom of a system was shown to be advantageous in some situations. Hopefully, you can see where this is headed. The relational degrees of freedom hark back to the general frame of reference à la Rovelli and his relational interpretation of QM. For example, take a look at this early paper by Bartlett, Rudolph, and Spekkens. The first few paragraphs offer a fairly nice summary of some of the work that had just recently come out on relative quantum information, though the paper itself still primarily deals with something spatial.
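A concrete toy example in the spirit of Zanardi and Rasetti (my own illustration, not from their paper): the two-qubit singlet state lives entirely in a relative degree of freedom, so any collective rotation – the same unitary applied to both qubits – leaves it unchanged up to a global phase. A small numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit singlet (|01> - |10>)/sqrt(2): a purely *relative* state.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# A random single-qubit unitary from the QR decomposition of a complex matrix.
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
u, _ = np.linalg.qr(a)

# Apply the *same* rotation to both qubits (a collective rotation).
rotated = np.kron(u, u) @ singlet

# The singlet spans the one-dimensional antisymmetric subspace, so U (x) U
# only multiplies it by det(U) -- a global phase. The overlap magnitude is 1:
fidelity = abs(np.vdot(singlet, rotated))
```

Information stored in such relative degrees of freedom is immune to any collective noise or to the two parties disagreeing about the orientation of their axes, which is precisely why it matters for communication without a shared reference frame.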

As early as 1996, Toller himself recognized that limitations in representations of the Poincaré group necessitated taking “internal” degrees of freedom into account when working with quantum reference frames. An example of such an internal degree of freedom is electrical charge. In fact, in our first paper in PRL, we introduce a new quantum number that represents all of the universally conserved internal quantum degrees of freedom (which happen to only be electrical charge and the difference between baryon number and lepton number), though we were unaware of Toller’s paper at that point (in fact I was unaware of it until I started working on this blog post). It may well be, in fact, that we are the first to have considered internal degrees of freedom in such a manner.

At any rate, in Part 2 of this short history, I will attempt to nail down exactly who first suggested using a generalized reference frame in the manner of Toller in an information communication scheme. I will then discuss the relation to SSRs which play a vitally important role in this story.
