## My latest adventure in interferometry

Posted in Uncategorized on May 20, 2013 by quantummoxie

This weekend was graduation weekend, which meant two days of events in which we, the faculty, basically serve as eye candy, which in turn means listening to lots of speeches. Fortunately, our current Dean is a master at getting through all the graduates’ names quickly. At any rate, this being the first graduation with our newly reduced parking capacity (someone told us we had too much — I’m not kidding), traffic was worse than usual and so I hung out in the lab for a bit after the ceremony ended, waiting until I could get out of the parking lot in a timely fashion.

So, let me first say that I have an increasingly god-like reverence for experimentalists. Lesser mortals would go utterly insane from the combination of tedium and unexpected results. As a theorist, I figure that I’m already insane so it doesn’t matter. After an entire semester of getting nothing but parallel lines on my outputs, I ended up getting the “bull’s eye” pattern which is clearly a laser cavity mode (at that point, I was ready to beat my head against the wall).

Curiously (or not?), I got it when the arms were each 8 inches long or 16 inches long but not when the arms were 18 inches long or 20 inches long. In the latter two cases, absolutely nothing I did produced an interference pattern whereas it was pretty easy in the former two cases (the closer it was to a parallelogram, the better the pattern). Now, this summer I’m ordering some fully-gimbaled mirror holders that match the mirror surface up with the center line to make aligning easier. I’m hoping I can quantify some of the nuances of the alignment a bit better with these.

Anyway, that all leads me to two conclusions:

• the interference pattern in an MZI has something to do with cavity modes; and
• textbooks (and even some papers) on optics, particularly on MZIs and Michelsons, are complete crap.

On another note, in reply to my last post on this topic, someone noted that my calculation of the coherence length might be incorrect and should actually be closer to 300 microns. So in doing it again, I got a completely different number. Maybe someone can locate my error. The linewidth of the laser I’ve got (if I’m reading it correctly) is 2 nm. I have no idea if the lineshape is Lorentzian or Gaussian, but I’m just going to guess Lorentzian for now. Thus the coherence time is given by $\tau_{c}=1/\Delta\omega$. Now, I think, upon further reflection, that $\Delta\omega$ is the half width of the lineshape in angular frequency units. Since $\omega = 2\pi c/\lambda$, then $\Delta\omega = 2\pi c \left(\frac{1}{\lambda_{2}}-\frac{1}{\lambda_{1}}\right)$ which, for a half line width of 1 nm gives $\Delta\omega = 5.344\times 10^{14}$ rad/s. The coherence time is then $\tau_{c}=1.87\times 10^{-15}$ s. The coherence length is then $L_{c}=c\tau_{c}=5.61\times 10^{-7}$ m or 561 nm. If my mistake was in the linewidth and it is actually 2 nm, then the coherence length is actually 112 nm.

Now, if the lineshape is Gaussian instead of Lorentzian, the coherence time is actually $\tau_{c} = (8\pi\ln 2)^{1/2}/\Delta\omega$. This changes the coherence length for a half line width of 1 nm to 234 nm. So it doesn’t seem to matter what I do, I’m consistently getting a coherence length that is in the hundreds of nanometers. As was pointed out, I should be getting a number closer to 300 microns. Where’s my error?
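For what it’s worth, here is the same arithmetic as a quick Python sanity check (532 nm is my laser’s center wavelength; the 1 nm half width is the guess from above). Every version of the formula comes out in the tens to hundreds of microns rather than hundreds of nanometers, which makes me suspect the slip is in my numerical evaluation of $\Delta\omega$ rather than in the formulas themselves:

```python
import numpy as np

c = 3.0e8       # speed of light (m/s)
lam = 532e-9    # center wavelength (m)
dlam = 1e-9     # half line width, the guess from above (m)

# Since omega = 2*pi*c/lambda, a small spread dlam gives
# |d(omega)| ~ 2*pi*c*dlam/lam**2
domega = 2 * np.pi * c * dlam / lam**2
print(f"domega = {domega:.3e} rad/s")                  # ~6.7e12 rad/s

# Lorentzian lineshape: tau_c = 1/domega
tau_c = 1 / domega
print(f"Lorentzian: L_c = {c * tau_c * 1e6:.1f} um")   # ~45 um

# Gaussian lineshape: tau_c = sqrt(8*pi*ln(2))/domega (Fox)
tau_c_g = np.sqrt(8 * np.pi * np.log(2)) / domega
print(f"Gaussian:   L_c = {c * tau_c_g * 1e6:.1f} um") # ~188 um

# Textbook shortcut using ordinary frequency (tau_c = 1/dnu):
# L_c = lam**2/dlam, which is the ~300 um figure that was suggested
print(f"lam^2/dlam = {lam**2 / dlam * 1e6:.1f} um")    # ~283 um
```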

## New(ish) Planetary Geology blog

Posted in Uncategorized on May 19, 2013 by quantummoxie

My friend Irene Antonenko, who is a planetary geologist, has a new blog called Planetary Geo Log. So, does that make it a “glog?”

## A simple but definitive guide to Mach-Zehnder interferometers

Posted in Uncategorized on May 4, 2013 by quantummoxie

This semester I took the leap and ventured into the lab (and have yet to break anything). One of the things I have been working with is a Mach-Zehnder interferometer. Generally speaking, a Mach-Zehnder interferometer — or MZI — is a fairly simple and straightforward device. But there were a few oddities about it that were bugging me and they turned into a semester-long obsession. Attempting to find literature that fully explained what was going on turned out to be incredibly difficult and no one I ran into seemed to really know (or they all had differing opinions). But at long last, I think I have figured it out.

Regarding the notation that I will be using, the following image depicts a beam splitter in which the blue beam is transmitted and the red beam is reflected. The reflected beam on the side with the dot picks up a phase shift of π radians.

Technically there could be a phase shift at the mirrors as well (depending on how they are constructed), but since both arms pick up the same shift from these mirrors, we can safely ignore any mirror effects. So the general setup that I focused on was the following fairly standard form:

I’ve given the two arms of the interferometer different colors just to distinguish them. I made the output beams purple just to indicate that they are some mixture of light from the two arms.

Quantum mechanically, we can model this in a fairly simple manner if we consider the input to be $|0\rangle$. The first beam splitter (which is 50:50) is given by $\frac{1}{\sqrt{2}}\left(\begin{array}{cc}-1 & 1 \\ 1 & 1\end{array}\right)$ while the second beam splitter (which is also 50:50) is given by $\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1 & 1 \\ 1 & -1\end{array}\right)$. Applying the second after the first, together they are given by $\left(\begin{array}{cc}0 & 1\\-1 & 0\end{array}\right)$. As such, the output will be $|1\rangle$ (up to an overall sign). This means that, quantum mechanically, nothing should appear at Output 2: when we send single photons through the device, they always arrive at Output 1. (See Schumacher and Westmoreland, Chapter 2, for an excellent discussion of this.)
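For anyone who wants to check the arithmetic, here is the same calculation in a few lines of Python (just the matrix algebra above, nothing more):

```python
import numpy as np

# The two 50:50 beam splitters in the convention used above
BS1 = np.array([[-1, 1],
                [ 1, 1]]) / np.sqrt(2)
BS2 = np.array([[ 1,  1],
                [ 1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])      # input state |0>

out = BS2 @ BS1 @ ket0       # the second splitter acts after the first
print(np.abs(out)**2)        # [0. 1.]: every photon exits in state |1>,
                             # i.e. at Output 1, and none at Output 2
```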

If, however, we shine a bright laser through the MZI we actually see something like this (taken from my own setup — Output 2 is on the left and Output 1 is on the right):

I tossed in an extra mirror after Output 2 just so I could project the results onto the same screen. I also tacked on some lenses at the end just to blow up the pattern so you could see it. So, first of all, the obvious difference between this and the quantum case is that we now have photons reaching both outputs. This, of course, is inconsistent with the math we did above for the quantum case. The quantum result is not completely lost, however. If you look carefully, you will notice that the center of the interference pattern in Output 1 corresponds to a bright fringe whereas the center of the interference pattern in Output 2 corresponds to a dark fringe (note: it is wicked difficult to keep these things steady — the smallest movement, e.g. air conditioning, is enough to disturb it, which is why MZIs are used as sensors in a number of practical situations). Also note that an interference pattern as shown above only appears if the MZI is set up in a perfect square (actually a rhombus, as we’ll see) and in the same plane. If it isn’t in a perfect square (and in the same plane), then you still see light at both outputs, but you don’t see an interference pattern.

So suppose we could very slowly crank up the laser intensity such that more and more photons began going through together. At what point would we start to see photons showing up at Output 2? More importantly, why do they start to show up there? Where does the interference pattern come from and why does it “preserve” some aspect of that quantum prediction? Numerous people have tossed out ideas here and there but the only one that was even close to correct was Nathan Wiebe, with whom I discussed this at the APS March Meeting. Nathan suggested that decoherence had something to do with it. Of course, this is related to something Neil Bates has been trying to disprove for a while now. I’m still not sure if I understand his argument so I can’t say for certain whether or not he is correct, but I can say that a certain type of decoherence definitely does have something to do with it. Credit Neil, however, with being the first person to alert me to the differences between spatial and temporal coherence in the beam (more on that later). Rather than give a detailed accounting of the different types of decoherence (both classical and quantum), I will instead simply explain what is happening and you can draw your own conclusions based on your understanding of the various types of decoherence.

So, first of all, if we model the beam as a continuous wave, the interesting thing is that by carefully keeping track of the phase shifts and combinations throughout the setup, we should get the same exact result as in the single-photon case. For example, the upper arm picks up a phase shift of π radians at the first beam splitter. At the second beam splitter, a portion of each beam is transmitted and a portion is reflected. Looking at Output 1, we have a combination of the reflected lower beam, which picks up a phase shift here of π radians since it is on the side with the dot, and the transmitted upper beam, which already had a phase shift of π radians from the first beam splitter. So the phase shift on the reflected part of the lower beam has the effect of bringing the two beams back into phase with one another and we get perfectly constructive interference. Hence, we have light at Output 1. (Note that this implies that a single photon must travel through both arms simultaneously if we think of it as a wave packet!)

Looking at Output 2, however, the reflected portion of the upper beam, which combines with the transmitted portion of the lower beam, does not pick up a phase shift since it is not on the side with the dot! As such, the two beams are still out of phase by π radians and thus will destructively interfere, meaning we should not see any light at Output 2. So clearly the so-called “quantum” prediction is exactly the same as the so-called “classical” prediction, i.e. there’s only one prediction.
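The same phase bookkeeping can be written out directly as complex amplitudes. This is just a sketch of the two paragraphs above, with the $1/\sqrt{2}$ splitter factors dropped and the normalization put in by hand:

```python
import numpy as np

pi_shift = np.exp(1j * np.pi)   # a phase shift of pi radians, i.e. -1

upper = pi_shift   # upper arm: reflected at the first splitter (dotted side)
lower = 1.0        # lower arm: transmitted at the first splitter, no shift

# Output 1: transmitted upper beam + lower beam reflected at the dotted
# side of the second splitter (which adds another pi shift)
out1 = upper + lower * pi_shift

# Output 2: upper beam reflected at the non-dotted side (no extra shift)
# + transmitted lower beam
out2 = upper + lower

print(abs(out1)**2 / 4, abs(out2)**2 / 4)   # 1.0 0.0
```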

One possible explanation that I had set my sights on about a month ago had to do with the fact that the beam had a “width” to it which meant that not all parts of the beam were hitting the reflective portion of the beamsplitters in phase with one another. Notice, however, that regardless of where a particular part of the beam hits the reflecting part of the beamsplitter, it still forms a perfect square:

So while each part of the beam is out of phase with each other part, crucially they are never out of phase with themselves in such a way that the outputs flip. In other words, in every case you should still find light only at Output 1 (credit goes to our lab manager, Kathy Shartzer, for pointing that one out).

So then I figured that maybe it had something to do with the fact that the beam widens as it moves along (“beam spreading”), but if you perform the ray tracing as above, you will get a rhombus for the outer edges and if you keep track of the lengths and phases, it turns out you still should only get light at Output 1. Incidentally, this suggests that maybe it’s not that it has to be a perfect square, just a perfect rhombus. At any rate, it was at this point that I started to question the Law of Reflection (not to mention my sanity).

But then I started going back-and-forth between two books on optics: the classic one by Hecht and one on quantum optics by Fox (why did I not do this before?) and finally the light went on in my head (no pun intended). So here’s what’s happening.

First, I’ll address why there’s any light at Output 2 at all. When it finally occurred to me, it was a bit of a “well, duh” kind of moment. In order for the light to only appear at Output 1, the phases have to match up just as described above. But this means that the tolerances are very very small! For example, suppose that we add an extra length to the upper arm that gives it an additional phase shift of π radians. This would have the effect of sending all of the light to Output 2, now. For the 532 nm light I was working with, this merely corresponds to adding 266 nm to the length of the upper arm. So it’s pretty obvious that any slight deviations from an absolutely perfect correspondence between the lengths of the two legs will change the results. Since the mirror is not perfectly smooth and the beam has some width to it, it’s no surprise that this is nearly impossible (certainly in my lab).
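To put numbers on just how tight that tolerance is, here is the ideal (perfectly coherent) two-beam output as a function of the extra length $\Delta L$ in one arm. At 532 nm the two outputs completely swap every 266 nm:

```python
import numpy as np

lam = 532e-9                       # wavelength (m)

def outputs(dL):
    """Ideal two-beam output intensities for an extra path length dL."""
    phi = 2 * np.pi * dL / lam     # extra phase picked up in one arm
    return np.cos(phi / 2)**2, np.sin(phi / 2)**2   # Output 1, Output 2

for dL in [0.0, 133e-9, 266e-9]:   # 266 nm = lam/2 flips the outputs
    I1, I2 = outputs(dL)
    print(f"dL = {dL * 1e9:3.0f} nm: I1 = {I1:.2f}, I2 = {I2:.2f}")
```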

But that only explains the presence of light at both outputs. Why is there an interference pattern, why does it only occur when we are very close to a perfect rhombus, and why does it somehow preserve the expected result in the center fringe of the pattern? The answer to that has to do with temporal decoherence. This is quantified by the coherence time $\tau_{c}$ which is the time duration over which the phase remains stable. Coherence time is related to the spread of angular frequencies $\Delta \omega$ in the beam by

$\tau_{c}\approx\frac{1}{\Delta \omega}$.

In other words, only a perfectly monochromatic beam is fully coherent, i.e. has an infinite coherence time. All realistic beams are only partially coherent because there is always some spread to the angular frequencies (and thus wavelength), i.e. they’re not truly monochromatic. To quote from Fox,

If we know the phase of the wave at some position $z$ at time $t_{1}$, then the phase at the same position but at a different time $t_{2}$ will be known with a high degree of certainty when $|t_{2}-t_{1}|\ll\tau_{c}$, and with a very low degree when $|t_{2}-t_{1}|\gg\tau_{c}$

A more convenient measure is the coherence length, $L_{c}=c\tau_{c}$, where $c$ is the speed of light. So another way to state the above is to say that if we know the phase of the wave at $z_{1}$, then the phase at the same time at $z_{2}$ will only be known to a high degree of certainty if $|z_{2}-z_{1}|\ll L_{c}$. That means that in order to get the two arms to have just the right phase to produce an interference pattern, the difference in length between the two arms has to satisfy $2\Delta L\lesssim L_{c}$. This explains why we need nearly a perfect square (or rhombus) to get an interference pattern and it makes it clear that any such pattern is related to the natural variability in the beam. Anything else will simply produce light at both outputs. The only way to get the actual predicted result of light only appearing at Output 1 is to either dial it down to single photons (since I don’t think a single photon has a coherence time associated with it, but I could be wrong) or to have a perfectly monochromatic beam. (Note that a more accurate description involves the first-order correlation function, which includes an oscillating term that explains this rapid changing of the angular frequency.) Note that this relates to the interpretation of the single photon taking both paths simultaneously (see Fox, p. 302).
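One way to make the $2\Delta L\lesssim L_{c}$ condition quantitative: for a Lorentzian lineshape, the magnitude of the first-order correlation function (and hence the fringe visibility) decays as $e^{-|\tau|/\tau_{c}}$ (this is the standard result in, e.g., Fox). A quick sketch, using the ~283 µm coherence length from the $\lambda^{2}/\Delta\lambda$ estimate above:

```python
import numpy as np

L_c = 283e-6   # coherence length from lam**2/dlam at 532 nm, 1 nm half width (m)

def visibility(path_diff):
    """Fringe visibility for a Lorentzian lineshape: |g1| = exp(-dL/L_c)."""
    return np.exp(-abs(path_diff) / L_c)

for dL in [1e-6, 100e-6, 300e-6, 1e-3]:
    print(f"path difference {dL * 1e6:6.0f} um -> visibility {visibility(dL):.3f}")
```

So the fringes are essentially gone once the path difference is much beyond the coherence length, which is exactly the perfect-rhombus requirement.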

The question then becomes, why does the center of each output faithfully retain the information of the expected result and why, if we adjust the mirror angles, does the spacing between the fringes change? Actually, the center of the outputs will only retain the expected result if the setup is exactly a perfect square, or if the path difference corresponds to a whole multiple of 2π in phase, as discussed above. This explains why sometimes I got the opposite of my expected result. It also explains why the pattern seemed to constantly be shifting (and did so especially when there were vibrations in the air or on the optical bench). The alternating pattern then results from the fact that the mirrors are likely not exactly at 45 degree angles (remember how insanely small the tolerances are). So, for example, if we had mirrors that were exactly at 45 degree angles, what we would likely see would be light flashing back and forth between the two outputs, but no interference fringes.
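As for the fringe spacing itself, the standard two-beam result is that plane waves crossing at a small angle $\theta$ produce straight fringes with spacing $\Lambda = \lambda/(2\sin(\theta/2))\approx\lambda/\theta$. So even a fraction of a milliradian of residual tilt between the recombined beams gives fringes of easily visible spacing, and tilting a mirror changes that spacing. A quick check:

```python
import numpy as np

lam = 532e-9   # wavelength (m)

def fringe_spacing(theta):
    """Fringe spacing for two plane waves crossing at angle theta (radians)."""
    return lam / (2 * np.sin(theta / 2))

for theta_mrad in [0.1, 0.5, 1.0, 5.0]:
    spacing = fringe_spacing(theta_mrad * 1e-3)
    print(f"tilt {theta_mrad:4.1f} mrad -> fringe spacing {spacing * 1e3:5.2f} mm")
```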

So the only open question that I see is: if we start with single photons and slowly crank up the intensity, at what point does the coherence time come into play, i.e. at what point does temporal decoherence kick in? I suspect the answer lies in photon bunching, but I’ll have to do some more reading and thinking and, eventually, experimenting…

## ‘Is quantum mechanics it?’ Bell’s definition of ‘free choice’

Posted in Uncategorized on March 22, 2013 by quantummoxie

I haven’t blogged in a while thanks to an insane amount of work, but now’s as good a time as any to toss another post up here since I just got back from the APS March Meeting in Baltimore (where I wish I still was — no snow and a good deal warmer). Yesterday I attended a talk by Renato Renner on some work he has done with Roger Colbeck. Before I comment on it, let me just say that I had an interesting lunchtime conversation during which we discussed how fractured the foundations community is: people have their pet theories and foundations meetings tend to end up making no progress (as a whole) because they are either devoted to a single, common viewpoint or, if they are more general, people just end up yelling at each other. Well, maybe ‘yelling’ is a little harsh. But, anyway, the point is that I can’t say I’ve ever known a foundations person to change ‘camps,’ so to speak. I will say that there are some foundations people who are refreshingly open-minded. Terry Rudolph comes to mind as does Max Tegmark (speaking of which, it freaked me out when I discovered that Terry is about my age — Terry was already well-known before I finished my PhD…).

Anyway, Renato’s talk was built on Bell’s notion of ‘free choice.’ At the beginning, Renato said he didn’t explicitly rule out any other notion of free choice, but he claimed never to have seen one. I will say that I may have misinterpreted his aim with this talk as astutely and rationally pointed out to me by Mark Wilde, but I still think some of my argument holds. At any rate, I’ll come back to all of that later. First, let’s review Bell’s definition of ‘free choice’ and see what it implies.

Bell defined a ‘free choice’ to be one that was completely independent of any event in its future light cone. UPDATE: Mark Wilde noted that the official definition given in the various papers is that a ‘free choice’ is one in which the choice event is only correlated with events in its future light cone. I think they say the same thing, though. The idea is natural enough since it implies that nothing in the future can affect a prior choice, i.e. something in the past. Note, however, that this definition does not say anything about events in the past light cone of the ‘choice’ event. In other words, imagine we have a single, six-sided die that we wish to use as a counter for something. So, for example, suppose I wish to mark an event with the number 5. I can turn the die to the side with 5 facing upwards and can mark it. If at some later time I lose the die entirely (maybe the dog swallowed it), that event (of losing it) does not affect the prior choice I made to turn it to the side showing 5. So that is Bell’s definition of ‘free choice’ (and is supposedly the only one Renato is aware of).

But suppose I wish to mark some event with the number 8. Somewhere along the line, there was an event prior to my ‘choice’ event in which I ended up with a six-sided die so I am unable (with the tools I have) to mark the 8. My choice in this case is certainly not free. One might take the view that it was the ‘future’ event of me ending up needing an 8 that limited my choice, but needing an 8 does not necessarily need to be in the future of me ending up with a six-sided die (though the event of me gaining the knowledge of that need, is). At any rate, the point is that the definition of ‘free choice’ is both a little fuzzy and certainly can be interpreted as not necessarily being completely free. Nevertheless, this definition does appear to allow for the existence of some classically ‘free choices.’

Now, nothing is necessarily wrong with that in the Colbeck-Renner theory because they argue that quantum theory, in this situation, is maximally informative. In other words, because quantum theory is contextual, I don’t have to worry about having the six-sided die when I need to measure an 8. I can just choose a basis (i.e. a die) that allows me to measure an 8. That is (very roughly) contextuality: the result of my measurement (the 8) depends on my choice of basis. That, of course, is a truly free choice (and, in fact, in this case the ‘choice event’ is also independent of all events in the past light cone as well).

Now, it appears that, as a result of the fact that quantum theory is maximally informative under the more restrictive definition of ‘free choice’ (i.e. one that does not include the past light cone dependence), they assume that there is nothing more general. In other words, they ask whether, under this assumption of ‘free choice,’ there are ‘extensions’ of quantum theory. It should be obvious that if the definition of ‘free choice’ remains the same, then there really are no useful extensions of quantum theory since it is maximally informative. This, of course, implies that quantum mechanics is complete.

But here’s my problem: this takes a somewhat restrictive definition of ‘free choice’ and then proves that something that is ultimately less restrictive is thus maximally informative. That’s a bit like cherry-picking data in order to prove a point you already think is true. So, for example, suppose I have a deck of cards that only consists of the spades and clubs (so all the cards are black) but does not include the ace of spades and the ace of clubs. Suppose it is my intention to prove that I have a complete set of black cards. So I get another deck that now includes all four suits, but it still lacks the two aforementioned aces. Now I claim that since my new set includes everything I had before and nothing new (as far as the black cards are concerned), the old set is maximally informative. Of course, we know that this is not true because we know there are still two missing aces. It came across to me in Renato’s presentation that he was essentially claiming that his proof shows that those other aces don’t exist, i.e. quantum mechanics truly is ‘it.’ Now, as I said, Mark Wilde had a completely different impression and so perhaps this was never Renato’s intention. At any rate, if it is, then, to me, there are two specific objections to that claim (and the corresponding proof).

First, one can, of course, question the definition of ‘free choice.’ Why not define free choice in terms of the past light cone events as well? (This is where Mark noted that he thinks the point was to show the limitations of Bell’s definition, in which case this is actually pretty good work, though I wonder if it was necessary.) In other words, why not use the quantum case as the definition? To me, that seems like it’s more ‘free’ than Bell’s definition. Sure, maybe it implies that there are no truly free choices in classical physics, but personally I don’t find that all that hard to believe. I want a Porsche, but that doesn’t mean I can run out and buy one.

The second objection is that because the argument works ‘from the ground up,’ so to speak, and ‘finds’ quantum theory to be maximally informative, it rules out more general theories somewhat by fiat. In other words, what if quantum theory is actually a more specific case of some more general theory? For example, what if quantum theory is actually a special case of a generalized probabilistic theory? Because Colbeck and Renner used Bell’s definition of ‘free choice’ it seems to me that they were guaranteed to find that quantum mechanics was maximally informative. It is worth noting that here is another point at which Mark disagreed — he didn’t think it was that obvious.

I suppose there’s nothing inherently wrong with that, unless one is then claiming that there’s no point in looking for a more general theory or one is arguing that quantum mechanics is ‘it.’ I kind of got the impression that they were saying exactly that — there’s no point in pursuing anything deeper or more fundamental. I apologize to Renato and Roger if I misconstrued their work, but they can think of it this way: at least it succeeded in getting people to discuss it!

## A path to quantum gravity?

Posted in Uncategorized with tags , on February 17, 2013 by quantummoxie

Joe Fitzsimons, Jonathan Jones and Vlatko Vedral just put out a fascinating and brilliant paper. I’ll be honest (no personal offense meant!) and say that it is very much in the present style of writing physics and mathematics papers, which is to say as non-pedantic and jargon-laden as possible. This is a style I have come to dislike (think about this: we have freshman non-science majors read Einstein’s original SR paper at Saint Anselm), but that is another story entirely.

Anyway, after some discussion with Joe about it I now see what they are getting at – and I think it offers a very intriguing potential path to follow in search of quantum gravity. It also seems to support my suspicion that time and space (or at least their connection via the metric) is emergent.

So here’s the basic argument. In quantum mechanics, density matrices are used to define quantum states that, in theory (depending on one’s interpretation), can extend throughout space. In other words, we use such matrices to represent what are usually spacelike separated measurement events. Joe, Jonathan, and Vlatko (henceforth FJV) ask if it might be possible to extend these matrices in such a way as to cover a spread of time as well. In doing so they introduce the idea of a pseudo-density matrix (PDM). The PDM, like the usual density matrix, is positive semi-definite for spacelike separated events while it fails to be so for timelike separated events (they don’t mention lightlike separations – in theory it should also fail to be positive semi-definite in this case). Intriguingly, they actually present results of two-qubit NMR experiments that support these results.
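To make that concrete, here is a toy numerical version of (what I understand to be) the simplest example in the paper: a single maximally mixed qubit measured in the Pauli bases at two consecutive times with trivial evolution in between. The correlator values in the comment are my reading of the paper, not a quote from it. Building the pseudo-density matrix from the two-time Pauli correlators gives a negative eigenvalue, the signature of a timelike separation:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# For a maximally mixed qubit measured twice with nothing happening in
# between, the two-time correlators are <s_i s_j> = delta_ij (and the
# terms pairing a Pauli with the identity vanish), so the PDM is
R = 0.25 * (np.kron(I2, I2) + np.kron(sx, sx)
            + np.kron(sy, sy) + np.kron(sz, sz))

print(np.linalg.eigvalsh(R))   # [-0.5  0.5  0.5  0.5]: not positive
                               # semi-definite, i.e. timelike
```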

What really struck me is that they appear to be on the verge of being able to obtain the metric tensor of Minkowski space. In fact, Joe told me he had already recovered the signature which, given the results concerning the PDM, tells me they’ve already got the metric. In the signature convention (+,+,+,−), the interval is indeed positive for spacelike separations and not for timelike or lightlike separations.

The only thing I caution is that, though we like to proclaim the contrary (and FJV say it in their abstract), time and space are not the same in special relativity. If they were, time would have the same sign as space in the metric tensor (incidentally, Scott Aaronson agrees with me on this). In addition, if time and space were truly equal we would be able to go back in time.

Nevertheless, I think this is one of the most promising potential routes to quantum gravity that I’ve seen in years. It brings the “language” of quantum mechanics and relativity just a bit closer (something I’ve long thought about, but more in group theory terms) and it hints at the emergence of time (and maybe even space). It will be interesting to see where this leads.

## Royal Institution up for sale

Posted in Uncategorized on January 25, 2013 by quantummoxie

Sadly, the Royal Institution (RI) has put its historic building in Mayfair up for sale. The historic significance of this building can’t be overstated. Some of the defining experiments of physics and chemistry were performed there, notably by Faraday and Davy. The RI owes creditors about 7 million pounds after a 22 million pound refurbishment of the building undertaken just prior to the market crash of 2008. Oxford neuroscientist Colin Blakemore was quoted as saying “A fraction of the cost of a Picasso or a football club would save this venerable institution. Surely there’s a benefactor out there who wants to secure a place in history by rescuing it?” Unfortunately, we live in an age in which things like the RI are simply not valued by society as a whole.

## Why Sean Carroll is wrong

Posted in Uncategorized on January 6, 2013 by quantummoxie

Sean Carroll, who I do respect, has blogged no less than four times about the idea that the physics underlying the “world of everyday experience” is completely understood, bar none.  His most recent post on this subject claims to have put it all into a single equation.  In his response to critics he has made  a number of interesting claims including arguing – correctly – that there is a misperception about the nature of scientific theories.  They aren’t simply right or wrong.  They have ranges of applicability.  This is one of my greatest pet peeves, in fact.  It drives me bonkers when people, for instance, claim that Einstein proved Newton was wrong.  No, that in fact is not true.  Einstein proved Newton was only correct within a certain range of validity.  We rely on Newton having been correct every single day when we open a door or drive a car.  That’s not the issue here.  So what is the issue?

Let’s look at Sean’s claim one more time: that the physics underlying the world of everyday experience is perfectly well understood.  To quote from one of Sean’s earlier posts on the subject,

If you were to ask a contemporary scientist why a table is solid, they would give you an explanation that comes down to the properties of the molecules of which it is made, which in turn reflect a combination of the size of the atoms as determined by quantum mechanics, and the electrostatic interaction between those atoms. If you were to ask why the Sun shines, you would get a story in terms of protons and neutrons fusing and releasing energy. If you were to ask what happens when a person flexes a muscle, you would hear about signals sent through nerves by the transmission of ions across electromagnetic potentials and various chemical interactions.

And so on with innumerable other questions about how everyday phenomena work. In every single case, the basic underlying story (if that happens to be what you’re interested in, and again there are plenty of other interesting things out there) would involve the particles of the Standard Model, interacting through electromagnetism, gravity, and the nuclear forces, according to the principles of quantum mechanics and general relativity.

As simple as that sounds, what is he really trying to say?  It certainly appears as if he is implying that one can draw a direct line from the Standard Model (via the equation that he is now rolling out on his tour of England) to doors closing and muscles flexing.  Yet, when anyone challenges him on the fact that emergence and complexity, not to mention the quantum-classical contrast, are not sufficiently well-understood he (and his supporters) dismiss the argument as “tiresome.”  But he has fallen into his own trap by overextending the validity of a theory.

So let’s review what we know, without question.

1. We know the classical physics (and the “classical” part is crucial here) that describes how things like doors open and close, buildings stand up, and so on.  On an everyday level, this involves Newtonian mechanics.  Thus we know how macroscopic objects work at non-relativistic energies and speeds to great precision.
2. We know the classical physics of electromagnetism and we know that it helps govern how macroscopic objects interact (when you press on a door it is really electromagnetic repulsion between molecules that mediates the interaction between your hand and the door).
3. We know the classical physics of macroscopic objects at relativistic energies to a great precision.
4. We know the quantum physics that tells us how molecules are held together, i.e. we know chemistry.
5. We know the quantum physics that tells us about the sub-atomic particles that make up the atoms (and thus molecules) that constitute these things, i.e. we know QED and QCD.

What don’t we know?  Well, for starters, we do not know where the quantum world ends and the classical world begins.  In other words, we’re not 100% certain how Numbers 4 and 5 above connect to Numbers 1, 2, and 3.  For example, Sean’s equation seems to imply that spacetime and gravity, at least as regards everyday objects, are entirely explained by the two terms Sean has identified in that equation.  But Sean’s equation is fundamentally quantum in nature.  How can one include a gravitational term in a fundamentally quantum equation and claim it explains all of the physics underlying everyday life when we as yet have no well-developed theory of quantum gravity?  Do we know for certain that the gravitational interactions that affect everyday life can be traced to that one term?  Sean blithely claims

We don’t understand the full theory of quantum gravity, but we understand it perfectly well at the everyday level.

Really?  That’s a rather tall claim. Likewise, by including a group of terms broadly labeled “quantum mechanics” he is implying that we fully understand quantum mechanics, at least as it regards the physics of everyday life.  Presumably all of chemistry comes out of this particular set of terms, but there are an awful lot of things about quantum mechanics that we just don’t know.  Sean has conveniently brushed over some of the more complex aspects of biology in his description of muscle flexing (and then dismisses this criticism as “tiresome”).  But there are legitimate questions that could be asked about just how some of these neuro-chemical processes can legitimately come out of those terms (or others as we are apparently supposed to “not take them too seriously”) in Sean’s equation.  Thus even if I reluctantly granted him the gravity claim, he’s dodging certain problems with biochemistry by claiming criticism on this point is “tiresome” (if you do not see the problem in this, perhaps you should review a list of basic logical fallacies, notably this one).

Aside from the rather nebulously labeled term “other forces,” Sean also fails to account for certain interpretational problems inherent in the Standard Model, some of which have a direct bearing on everyday life.  In my very first FQXi essay I argued the point that there is an interpretational problem with the exclusion principle (and hence the spin-statistics theorem).  As we understand it at present, the Standard Model only fully explains three of the four fundamental forces of nature (already a problem for Sean’s claim as I have stated above).  Nevertheless, if we assume an extension of the Standard Model will someday include gravity, then

the four fundamental interactions would each be accompanied by a mediator particle – the photon for the electromagnetic, vector bosons (W+, W−, and Z0) for the weak nuclear, gluons for the color, and gravitons for the gravitational … Higher order microscopic interactions, such as the strong nuclear, possess their own mediator particle (e.g. the meson). One can theoretically use these as the building blocks for ordinary macroscopic matter with one glaring exception: the extended structure of the atom. In addition to two of the fundamental interactions, ‘building’ an atom requires invocation of the Pauli Exclusion Principle (PEP). PEP may be understood in the context of the Standard Model via the spin-statistics theorem – fields ultimately possess certain commutation properties that manifest themselves, after the action of a field operator, as bosons or fermions, the latter obeying PEP. In other words, we would find that a certain field has to be commuting (or perhaps anti-commuting) or else we get, in the words of Tony Zee, ‘a nonvanishing piece of junk’ in our mathematics …

So, as Tony Zee also put it,

[i]t is sometimes said that because of electromagnetism you do not sink through the floor and because of gravity you do not float to the ceiling, and you would be sinking or floating in total darkness were it not for the weak interaction, which regulates stellar burning. Without the spin statistics connection, electrons would not obey Pauli exclusion. Matter would just collapse.

Now here’s the rub.  We often connect the physics of everyday life with the physics underlying everyday life, but sometimes we have trouble.  In the former we like to do things such as draw free-body (force) diagrams to describe how the forces acting on a macroscopic object balance out.  While the following example is not part of everyday life, it is illustrative of the problem (since we know that the exclusion principle plays a major role in keeping all of matter from collapsing in on itself).  Consider a stable, macroscopic chunk of a white dwarf star.  Now draw a free-body diagram of that chunk.  In the radial direction there is, of course, a force due to gravity acting in the negative r direction and so I can draw an arrow and label it.  Now there ought to be an equal magnitude arrow pointing in the opposite direction since the chunk is stable.  But there isn’t because there is no force preventing collapse.  It is PEP that prevents collapse and acts against gravity here and PEP, in the Standard Model, is not a force.

That example was simply illustrative.  A similar argument can be made about all matter.  Indeed, as Tom Moore said to me once, PEP does present a problem for the interaction picture that is painted by the Standard Model.  Within the realm of the Standard Model itself, there is no problem.  But this is precisely where Sean falls into his own trap by over-extrapolating the realm of applicability of a theory: the interaction picture of the Standard Model matches up well with standard Newtonian physics except for this one case.  And we do not yet know why.  That alone should be enough to refute Sean’s claim. The claim is dubious at best and at worst is misleading enough to beguile even the best science journalists, especially when it comes from someone as well-known as Sean Carroll.

A cynic might say that the purpose of such a claim is merely to sell books.  But I think Sean really believes his claim.  Either way it’s another case of the particle physics and cosmology community making grandiose claims that are eaten up by the public, giving the impression that this sub-field of physics has a monopoly on truth, particularly when it comes to fundamental questions.  And not only is that wrong but it is potentially harmful to science.

Oh, and by the way, just because we have an equation that works doesn’t mean we understand it.  If you don’t believe me then google “interpreting the quantum mechanical wave function.”