Putting a price on physics as a discipline

Posted in Uncategorized on September 13, 2013 by quantummoxie

Update: the following blog post has been reposted over on the FQXi website.

It was announced today that the University of Southern Maine’s physics department will be shuttered and its major eliminated. This hits particularly close to home for me since I live outside of Portland and know some of the faculty members there. Closing the USM physics department would leave the University of Maine at Orono as the state’s only public university with a physics department. In particular, it leaves the largest city in the state without a public physics program. Along with the department, the wildly popular (though apparently money-losing) planetarium may be closed since it is currently operated by the department. Low enrollment (as compared to other departments in the university) and money (what else?) were cited as reasons by the university’s president.

I’m sure there are plenty of non-physicists out there who will welcome this move as pragmatic and inevitable in these tough economic times. “Shape up or ship out” seems to be the motto in a world in which we increasingly need to justify anything and everything in terms of short-term money and jobs. Physics, of course, is used to being treated as the bastard science in higher education, at least at the non-elite schools. Hundreds — indeed likely thousands — of colleges and small universities in the US have comparatively large biology and chemistry departments but no physics department. As a result, most high school physics teachers have a degree in something other than physics and frequently are teaching physics simply because someone has to (for now). Even at schools that have physics departments, they are often underfunded and under-appreciated. At my own institution, our department consists of three faculty members and one lab instructor while chemistry, which has roughly the same number of majors, has six faculty members and at least as many lab instructors.

So why is physics treated like this? I’m sure the answer to that is complicated and involves a lot of variables and a lot of history. I’m also sure that physicists are, themselves, partly to blame. But I’m also sure that a major component of this decline is the growing sense that everything must be practical and profit-driven, particularly in the short-term. This paradigm has also infected those physics departments that manage to survive by driving research away from fundamental discoveries and more toward discoveries that will have a short-term impact on the field, i.e. low-risk (which also happens to be low-reward). Perhaps you are someone who believes that this is a good thing. History should teach us, however, that it is not.

Stop and think for a moment about the things in your life that you take for granted. Your car? Your house? These days, perhaps your smart phone or GPS? Your television? Your electric razor? How about electricity in general? X-rays, CT scans, and MRIs? Your immediate response might be that all of those things were brought to you by engineers, not physicists. Sure, with the possible exception of the X-ray, my guess is that those devices were mostly developed by engineers. But who made the discoveries that were then turned into technology by the engineers? In every single case mentioned here, they were made by physicists. Newton, Galileo, Hooke, and others “discovered” classical mechanics which is what engineers use to build ever-more-complex buildings. Carnot, Clausius, Maxwell, Boltzmann and others “discovered” the laws of thermodynamics that literally fueled the Industrial Revolution. Faraday, Gauss, Ampère, Maxwell and others “discovered” the laws of electricity and magnetism that power nearly everything now (including our cars!). Marie Curie literally died from her research, not knowing the deleterious effects of radiation until it was too late. Robert Goddard, long-time chair of the Clark University physics department, whose webpage at one point notes that physics “is the most fundamental of the sciences,” pioneered the field of rocketry that allows us to put satellites in orbit that bring us such things as DirecTV and GPS. Do you get NFL Sunday Ticket (like me) or some other such thing? Thank a physicist. Don’t think relativity is of any practical importance? Think again. GPS satellites rely on it (without using it, GPS coordinates would be off by as much as 300 feet or more). Without Einstein, there’d be no GPS.
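That GPS claim is easy to sanity-check on the back of an envelope. Here’s a quick sketch (the orbital numbers are round values I’m assuming, not taken from any official GPS document):

```python
# Back-of-the-envelope: relativistic clock drift for a GPS satellite.
# All inputs are round-number assumptions.
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # mean Earth radius, m
r_orbit = 2.656e7    # GPS orbital radius (~20,200 km altitude), m
day = 86400          # seconds per day

v = math.sqrt(GM / r_orbit)   # orbital speed, roughly 3.9 km/s

# Special relativity: the orbiting clock runs SLOW by v^2/(2c^2)
sr_us_per_day = -(v**2 / (2 * c**2)) * day * 1e6

# General relativity: the clock higher in the gravity well runs FAST
gr_us_per_day = (GM / c**2) * (1 / R_earth - 1 / r_orbit) * day * 1e6

net_us_per_day = gr_us_per_day + sr_us_per_day
range_error_km = c * net_us_per_day * 1e-6 / 1000

print(f"SR effect: {sr_us_per_day:+.1f} microseconds/day")
print(f"GR effect: {gr_us_per_day:+.1f} microseconds/day")
print(f"Net drift: {net_us_per_day:+.1f} microseconds/day")
print(f"Uncorrected ranging error: ~{range_error_km:.1f} km/day")
```

The net drift comes out to roughly 38 microseconds per day, which at the speed of light is kilometers of ranging error per day; at that rate the ~300-foot figure above accumulates in a matter of minutes.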

Well, OK, you say, but what has physics done for me lately? After all, didn’t Sean Carroll recently declare that the physics of everyday life was completely understood? Plenty of physicists will likely bristle at the following suggestion, but the fact remains that we are moving toward a reality that includes quantum computers in some way, shape, or form. While the D-Wave One may not be a universal quantum computer — and its very quantumness may even still be up for debate — the fact of the matter is that it exists and people have plunked down a lot of money to buy one. Without quantum physicists, the very debate over the efficacy of the D-Wave One couldn’t happen. Without a vibrant foundational physics community, we risk turning over words like “quantum” to hucksters selling pseudo-science.

Beyond that, note that if you are reading this, you’re reading it on the internet. The internet has become a ubiquitous part of our lives. It has literally helped spawn revolutions. It has become a daily fixture in nearly all of our lives. And the World Wide Web — the layer of the internet you are using right now — was invented by a physicist working at a physics laboratory dedicated to fundamental discoveries, not practical ones. Does that mean it wouldn’t have been discovered in another setting? Certainly ARPANet existed before the web-based internet that we know today. But the point is that it wasn’t simply a fortuitous accident. It was a critical component of what was going on at CERN at the time.

So what does this — any of this — have to do with the closure of one small physics department in a sparsely-populated state in the far eastern corner of the country? Physics is the foundational science. Removing the foundation of a house risks causing it to collapse. Removing the species lowest on the food chain endangers every species further up that chain. We have no way of knowing where the next Einstein or Newton or Maxwell or Curie will come from. He or she could very well come from Maine. Why not? Who’s to say? On top of that, a true appreciation of the importance of physics can’t be properly imparted by a teacher with no real background in the subject. Eliminating a department capable of producing physics teachers threatens to further erode an appreciation of the importance of physics.

Of course, the other argument I often hear about physics is that no one majors in it because it is hard. Since when did this country back away from things that were hard? We went to the Moon for God’s sake. Sure it’s hard. So? Maybe if employers stopped placing a premium on grades and class ranks more people would go into physics and at least appreciate it for what it is (because a physicist is well-trained for nearly any career which is why so many of us have contributed to so many fields over the years).

In addition to my work in physics, I am an entrepreneur and veteran of six start-up companies, and I have learned that you can’t build an economy by just selling ideas. Once in a while a huckster comes along and makes some money selling nothing but an idea. But you can’t build an economy on that. There have to be tangible products — goods — for an economy to be sustainable. So if you are a business or marketing person, just remember that the products you sell and the businesses you build always have something tangible behind them that was developed by an engineer or inventor, and that engineer or inventor is exploiting the laws of physics (because every single system in the world, even biological ones, must obey the laws of physics) to create that product. Even corporations used to understand this. Researchers at IBM and the former Bell Labs have won numerous Nobel Prizes in physics. But many companies have gutted their R&D departments in the name of maximizing short-term capital. By systematically devaluing physics, we are slowly eroding the foundation on which our entire economy — indeed the progress of the human race itself — is based.

As a final note, while I appreciate the arguments of some physicists that we should be trying to encourage people to support us simply because we (as a country, as a species) should be asking these deep questions for their own sake, the reality is that if people don’t value that, it will be very difficult to change their minds. The fact is, our entire culture devalues that, and so getting people to buy that argument requires changing the entire culture. With physics departments — and fundamental research itself — so pressured and marginalized, I respectfully submit that now is not the time to appeal to these instincts, however laudable they may be. If we care about the long-term viability of our field (and our country and our species), we need to change the discourse by reminding people that fundamental science is important to the economy. So while USM may think it is doing a service to the taxpayers of the State of Maine, of which I am one, by eliminating its physics department, it is, in fact, contributing to the further erosion of the foundation of modern society as we know it. How can we put a price on that?

Astronomical Adventures of Karen (a guest blog post)

Posted in Uncategorized on August 5, 2013 by quantummoxie

The following was sent to me by Karen Banning after a recent meeting of our local astronomy club, the Astronomical Society of Northern New England (ASNNE). Karen and I sing in a choir together and she recently began coming to a few ASNNE events. While a few of us in ASNNE actually get paid to do this sort of thing, most are not professionals and are simply there for the love of astronomy.

Adventures of Karen: Star Date August 2nd, 2013

Driving around Arundel, I’m lost. My only landmark is black-faced sheep grazing on the side of the road. I call Bernie, asking how to get there. “Where?”, he asks. “There”, I reply. He tells me to go north. “Which way is that?” I ask. Do astronomers carry compasses? Oh. I forgot.

Arriving at the observatory, up a well-hidden (by overgrown grass) dirt road, a bunch of scientists and physicists are talking in Alien language. All I know is that Ian sings in my choir and Bernie writes an astronomy column, but the others look smart, until it gets dark and they just sound smart. I don’t really know if they are because I need a translator ring.

Ian has sired a 12 year old – Nate – who will be running NASA when he grows up in 2 years. Bernie is reading the news article he wrote and asks questions like the teacher in my nightmare about a class I did not know was on my schedule and I am in the final exam. “What happened on August 17th, in 1989?” he asks. (I am 120% sure that is NOT the date he asked about.) Someone answers that Saturn’s 5th moon was discovered and someone else dates when the 4th moon was found. I brace for the next inquisition. The only answer I have right now is: “the cotton gin” and I don’t know what question goes with that so I guess “Harvest Moon” and the song is immediately in my head, but I am told that is in September.

As the sky darkens, every star looks the same to me, but some are planets or airplanes. There is a huge telescope (or Earth person transfer station) that moves by remote control and NO it did not hit me in the head. Because someone warned me first. I don’t know who because I can’t see anything. And then, I see Saturn, in the thing you put your eye in. Viewfinder? That sounds like slides. It is about 1/4″ big. Saturn. Not the eye thing. So cute. And 3 pin dot moons and another thing that probably IS a pin dot.

M31 and M32 are next or M81 and M82. They are a bunch of stars. There are 181 Ms by the way [Editor's note: there are actually 110 Messier objects.] and it means menier or metienne. Close enough. They are star clusters in shapes. Bernie has a laser that shoots up 2 miles and points at stars. It’s like a light sword. It didn’t make that whooshie sound, though. He starts pointing out constellations and the memory of my 3rd grade science project of silver stars on navy paper floods into my mind. I ask if he knows everything. Well, he does. I can’t decide if it is creepy or cool that I feel totally inept.

We track a spy satellite. Really. They can be identified by polar orbit as they track toward Polaris. Nate babbles on and is not only tolerated but encouraged as he follows us around, spouting wisdom. I ask him if I can take him with me to be my own personal database. Every question I ask is answered, when the shadow-people can breathe again after laughing. “I think you need a chiropractor in this club” I suggest, as I hold the back of my neck to look up. The sky is glorious, full of wonder. I can’t name the sparkling pieces but I can be a speck, loving the sight of them.

The next day, I find an app on my smart phone that shows that Kohab is above me and Canopus below me on the other side of the world. Now I have to google those names to figure out what they are. They just might be street names in Arundel.

Contextuality as a unifying principle

Posted in Uncategorized on July 11, 2013 by quantummoxie

Summer has been a combination of lazy and busy (but lazy busy or busy lazy if that makes any sense?) so I haven’t posted in a while. But my latest FQXi essay contest entry has recently been posted. In it I use a combination of domain theory and category theory (with a smidgen of topos theory thrown in) to argue that quantum contextuality is behind the ever-increasing entropy of the universe. In the process I have hinted at the development of an algebraic quantification of contextuality. While the ratings don’t seem particularly high, there were a couple of halfway decent comments. I’m hoping to put together a longer version of this essay with more rigorous arguments soon, but I’d be curious to hear from anyone who has ideas in this regard, particularly regarding the construction of an algebra that describes contextuality.

As the dust settles … nothing has changed at Elsevier

Posted in Uncategorized on May 27, 2013 by quantummoxie

Nor, frankly, anywhere else, for that matter. Greg Martin, mathematician at the University of British Columbia, wrote this eloquent letter (reprinted with permission on Tim Gowers’ blog) describing why he resigned as a member of the Editorial Board for the Journal of Number Theory. While I appreciate that it costs money to produce a journal, how is it that journals have gotten so prohibitively expensive while the technology required to produce them has dropped so dramatically in cost? While it is easy to level charges at for-profit companies such as Elsevier, what excuse do the supposedly non-profit organizations have? For one professional organization that I won’t name for conflict-of-interest reasons, while individual online journal subscriptions for members aren’t all that expensive, I cannot locate institutional prices. In addition, these journals levy steep page charges that, frankly, not all researchers can afford to pay. Their attempt at “Open Access” simply shifts the cost from the subscriber to the author — open access publication in one of their journals, for example, costs $2700 per article. There seems to be an inherent assumption that only researchers at well-heeled institutions produce valid scientific results. This is on top of the $150+ annual membership dues.

I would have thought that the arXiv would have taught us at least a decade ago that there’s got to be a cheaper, more equitable model. I’m not saying “free” but at least “affordable.” I have what I consider a fairly sizable annual library budget. But the price of one “package” of good physics journals would eat up the entire budget. So, with the arXiv and interlibrary loan satisfying most needs, I use that money to literally buy piles of books. And that’s what’s so odd. If you consider the editorial work, paper, typesetting, etc. that goes into the books I buy (and books aren’t cheap either!), and then look at just how many I can buy for the price of essentially one or two journal subscriptions, it demonstrates just how inflated those subscription prices are. The problem is that the system is self-perpetuating because, frankly, our institutions (whether they can afford to or not) place a premium on publication in these journals. And so nothing changes at the top, as usual…

Update: According to both Gowers’ Google+ feed and Nassif Ghoussoub’s blog, Martin’s UBC colleague Mike Bennett has also resigned from the board of JNT.

My latest adventure in interferometry

Posted in Uncategorized on May 20, 2013 by quantummoxie

This weekend was graduation weekend, which meant two days of events in which we, the faculty, basically serve as eye candy. That means listening to lots of speeches. Fortunately, our current Dean is a master at getting through all the graduates’ names quickly. At any rate, this being the first graduation with our newly reduced parking capacity (someone told us we had too much — I’m not kidding), traffic was worse than usual and so I hung out in the lab for a bit after the ceremony ended waiting until I could get out of the parking lot in a timely fashion.

So, let me first say that I have an increasingly god-like reverence for experimentalists. Lesser mortals would go utterly insane from the combination of tedium and unexpected results. As a theorist, I figure that I’m already insane so it doesn’t matter. After an entire semester of getting nothing but parallel lines on my outputs, I ended up getting the “bull’s eye” pattern which is clearly a laser cavity mode (at that point, I was ready to beat my head against the wall).

Curiously (or not?), I got it when the arms were each 8 inches long or 16 inches long but not when the arms were 18 inches long or 20 inches long. In the latter two cases, absolutely nothing I did produced an interference pattern whereas it was pretty easy in the former two cases (the closer it was to a parallelogram, the better the pattern). Now, this summer I’m ordering some fully-gimbaled mirror holders that match the mirror surface up with the center line to make aligning easier. I’m hoping I can quantify some of the nuances of the alignment a bit better with these.

Anyway, that all leads me to two conclusions:

  • the interference pattern in an MZI has something to do with cavity modes; and
  • textbooks (and even some papers) on optics, particularly on MZIs and Michelsons, are complete crap.

On another note, in reply to my last post on this topic, someone noted that my calculation of the coherence length might be incorrect and should actually be closer to 300 microns. So in doing it again, I got a completely different number. Maybe someone can locate my error. The linewidth of the laser I’ve got (if I’m reading it correctly) is 2 nm. I have no idea if the lineshape is Lorentzian or Gaussian, but I’m just going to guess Lorentzian for now. Thus the coherence time is given by \tau_{c}=1/\Delta\omega . Now, I think, upon further reflection, that \Delta\omega is the half width of the lineshape in angular frequency units. Since \omega = 2\pi c/\lambda, then \Delta\omega = 2\pi c \left(\frac{1}{\lambda_{2}}-\frac{1}{\lambda_{1}}\right) which, for a half line width of 1 nm gives \Delta\omega = 5.344\times 10^{14} rad/s. The coherence time is then \tau_{c}=1.87\times 10^{-15} s. The coherence length is then L_{c}=c\tau_{c}=5.61\times 10^{-7} m or 561 nm. If my mistake was in the linewidth and it is actually 2 nm, then the coherence length is actually 112 nm.

Now, if the lineshape is Gaussian instead of Lorentzian, the coherence time is actually \tau_{c} = (8\pi\ln 2)^{1/2}/\Delta\omega. This changes the coherence length for a half line width of 1 nm to 234 nm. So it doesn’t seem to matter what I do: I’m consistently getting a coherence length that is in the hundreds of nanometers. As was pointed out, I should be getting a number closer to 300 microns. Where’s my error?
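For what it’s worth, here is the same calculation done numerically, with \Delta\omega written directly as 2\pi c\,\Delta\lambda/\lambda^{2} (which is just the difference-of-reciprocals expression above expanded to first order). A sketch, using the 532 nm wavelength and 2 nm linewidth from the post:

```python
# Coherence-length check for a 532 nm laser with a 2 nm linewidth.
# Convention assumptions are noted inline.
import math

c = 2.998e8      # speed of light, m/s
lam = 532e-9     # laser wavelength, m
d_lam = 2e-9     # full linewidth, m (2 nm, as read off the spec sheet)

# |d(omega)/d(lambda)| = 2*pi*c/lambda^2, so:
d_omega = 2 * math.pi * c * d_lam / lam**2   # rad/s

tau_c = 1 / d_omega      # Lorentzian-lineshape convention, tau_c = 1/d_omega
L_c = c * tau_c          # coherence length, m

# Cruder textbook convention L_c = lambda^2 / delta-lambda, with the 1 nm half width
L_c_simple = lam**2 / 1e-9

print(f"d_omega      = {d_omega:.3e} rad/s")
print(f"L_c          = {L_c * 1e6:.1f} microns")
print(f"lam^2/d_lam  = {L_c_simple * 1e6:.1f} microns")
```

This lands in the tens of microns, and the crude \lambda^{2}/\Delta\lambda convention with a 1 nm half width gives almost exactly the ~300 microns the commenter quoted. So the discrepancy appears to be in the numerical evaluation of \Delta\omega above, which comes out one to two orders of magnitude too large, rather than in the formula itself.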

New(ish) Planetary Geology blog

Posted in Uncategorized on May 19, 2013 by quantummoxie

My friend Irene Antonenko, who is a planetary geologist, has a new blog called Planetary Geo Log. So, does that make it a “glog?”

A simple but definitive guide to Mach-Zehnder interferometers

Posted in Uncategorized on May 4, 2013 by quantummoxie

This semester I took the leap and ventured into the lab (and have yet to break anything). One of the things I have been working with is a Mach-Zehnder interferometer. Generally speaking, a Mach-Zehnder interferometer — or MZI — is a fairly simple and straightforward device. But there were a few oddities about it that were bugging me and they turned into a semester-long obsession. Attempting to find literature that fully explained what was going on turned out to be incredibly difficult and no one I ran into seemed to really know (or they all had differing opinions). But at long last, I think I have figured it out.

Regarding the notation that I will be using, the following image depicts a beam splitter in which the blue beam is transmitted and the red beam is reflected. The reflected beam on the side with the dot picks up a phase shift of π radians.


Technically there could be a phase shift at the mirrors as well (depending on how they are constructed), but since both arms pick up the same shift from these mirrors, we can safely ignore any mirror effects. So the general setup that I focused on was the following fairly standard form:


I’ve given the two arms of the interferometer different colors just to distinguish them. I made the output beams purple just to indicate that they are some mixture of light from the two arms.

Quantum mechanically, we can model this in a fairly simple manner if we consider the input to be |0\rangle. The first beam splitter (which is 50:50) is given by \frac{1}{\sqrt{2}}\left(\begin{array}{cc}-1 & 1 \\ 1 & 1\end{array}\right) while the second beam splitter (which is also 50:50) is given by \frac{1}{\sqrt{2}}\left(\begin{array}{cc}1 & 1 \\ 1 & -1\end{array}\right). Applying the first and then the second, together they are given by \left(\begin{array}{cc}0 & 1\\-1 & 0\end{array}\right). As such, the output will be -|1\rangle, which is just |1\rangle up to an overall phase. This means that, quantum mechanically, nothing should appear at Output 2. Thus, when we send single photons through the device, they always arrive at Output 1. (See Schumacher and Westmoreland, Chapter 2, for an excellent discussion of this.)
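This bookkeeping is easy to check numerically. A minimal sketch (I apply the first beam splitter and then the second, so the combined matrix may differ from the one quoted by an overall sign, which doesn’t affect the probabilities; labeling index 1 as Output 1 is my own convention):

```python
import numpy as np

s = 1 / np.sqrt(2)
BS1 = s * np.array([[-1, 1], [1, 1]])   # first 50:50 beam splitter
BS2 = s * np.array([[1, 1], [1, -1]])   # second 50:50 beam splitter

ket_in = np.array([1.0, 0.0])           # input state |0>

out = BS2 @ BS1 @ ket_in                # BS1 acts first, then BS2

# Index 1 <-> Output 1, index 0 <-> Output 2 (my labeling assumption)
p_out1 = abs(out[1]) ** 2
p_out2 = abs(out[0]) ** 2
print(f"P(Output 1) = {p_out1:.3f}, P(Output 2) = {p_out2:.3f}")
# prints P(Output 1) = 1.000, P(Output 2) = 0.000
```

Every photon ends up at Output 1, exactly as the matrix algebra says.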

If, however, we shine a bright laser through the MZI we actually see something like this (taken from my own setup — Output 2 is on the left and Output 1 is on the right):


I tossed in an extra mirror after Output 2 just so I could project the results onto the same screen. I also tacked on some lenses at the end just to blow up the pattern so you could see it. So, first of all, the obvious difference between this and the quantum case is that we now have photons reaching both outputs. This, of course, is inconsistent with the math we did above for the quantum case. The quantum result is not completely lost, however. If you look carefully, you will notice that the center of the interference pattern at Output 1 corresponds to a bright fringe whereas the center of the interference pattern at Output 2 corresponds to a dark fringe (note: it is wicked difficult to keep these things steady — the smallest movement, e.g. air conditioning, is enough to disturb it, which is why MZIs are used as sensors in a number of practical situations). Also note that an interference pattern as shown above only appears if the MZI is set up in a perfect square (actually a rhombus, as we’ll see) and in the same plane. If it isn’t in a perfect square (and in the same plane), then you still see light at both outputs, but you don’t see an interference pattern.

So suppose we could very slowly crank up the laser intensity such that more and more photons began going through together. At what point would we start to see photons showing up at Output 2? More importantly, why do they start to show up there? Where does the interference pattern come from and why does it “preserve” some aspect of that quantum prediction? Numerous people have tossed out ideas here and there, but the only suggestion that came close was from Nathan Wiebe, with whom I discussed this at the APS March Meeting. Nathan suggested that decoherence had something to do with it. Of course, this is related to something Neil Bates has been trying to disprove for a while now. I’m still not sure I understand his argument so I can’t say for certain whether or not he is correct, but I can say that a certain type of decoherence definitely does have something to do with it. Credit Neil, however, with being the first person to alert me to the differences between spatial and temporal coherence in the beam (more on that later). Rather than give a detailed accounting of the different types of decoherence (both classical and quantum), I will instead simply explain what is happening and you can draw your own conclusions based on your understanding of the various types of decoherence.

So, first of all, if we model the beam as a continuous wave, the interesting thing is that by carefully keeping track of the phase shifts and combinations throughout the setup, we should get the same exact result as in the single-photon case. For example, the upper arm picks up a phase shift of π radians at the first beam splitter. At the second beam splitter, a portion of each beam is transmitted and a portion is reflected. Looking at Output 1, we have a combination of the reflected lower beam, which picks up a phase shift here of π radians since it is on the side with the dot, and the transmitted upper beam which already had a phase shift of π radians from the initial beam. So the phase shift on the reflected part of the lower beam has the effect of bringing the two beams back into phase with one another and we get perfectly constructive interference. Hence, we have light at Output 1. (Note that this implies that a single photon must travel through both arms simultaneously if we think of it as a wave packet!)

Looking at Output 2, however, the reflected portion of the upper beam, which combines with the transmitted portion of the lower beam, does not pick up a phase shift since it is not on the side with the dot! As such, the two beams are still out of phase by π radians and thus destructively interfere, meaning we should see no light at Output 2. So clearly the so-called “quantum” prediction is exactly the same as the so-called “classical” prediction, i.e. there’s only one prediction.

One possible explanation that I had set my sights on about a month ago had to do with the fact that the beam had a “width” to it which meant that not all parts of the beam were hitting the reflective portion of the beamsplitters in phase with one another. Notice, however, that regardless of where a particular part of the beam hits the reflecting part of the beamsplitter, it still forms a perfect square:


So while each part of the beam is out of phase with each other part, crucially they are never out of phase with themselves in such a way that the outputs flip. In other words, in every case you should still find light only at Output 1 (credit goes to our lab manager, Kathy Shartzer, for pointing that one out).

So then I figured that maybe it had something to do with the fact that the beam widens as it moves along (“beam spreading”), but if you perform the ray tracing as above, you will get a rhombus for the outer edges and if you keep track of the lengths and phases, it turns out you still should only get light at Output 1. Incidentally, this suggests that maybe it’s not that it has to be a perfect square, just a perfect rhombus. At any rate, it was at this point that I started to question the Law of Reflection (not to mention my sanity).

But then I started going back-and-forth between two books on optics: the classic one by Hecht and one on quantum optics by Fox (why did I not do this before?) and finally the light went on in my head (no pun intended). So here’s what’s happening.

First, I’ll address why there’s any light at Output 2 at all. When it finally occurred to me, it was a bit of a “well, duh” kind of moment. In order for the light to only appear at Output 1, the phases have to match up just as described above. But this means that the tolerances are very, very small! For example, suppose that we add an extra length to the upper arm that gives it an additional phase shift of π radians. This would have the effect of sending all of the light to Output 2. For the 532 nm light I was working with, this merely corresponds to adding 266 nm to the length of the upper arm. So it’s pretty obvious that any slight deviation from an absolutely perfect correspondence between the lengths of the two legs will change the results. Since the mirror is not perfectly smooth and the beam has some width to it, it’s no surprise that this is nearly impossible (certainly in my lab).
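This sensitivity is easy to quantify. For an ideal MZI the two output intensities go as cos² and sin² of half the phase difference \phi = 2\pi\Delta L/\lambda. A sketch (the cos²/sin² split is the textbook ideal-MZI result; assigning the cos² to Output 1 is my assumption, chosen to match the phase bookkeeping above):

```python
import math

wavelength = 532e-9   # green laser, m

def output_intensities(delta_L):
    """Ideal-MZI output fractions for a path-length difference delta_L (m)."""
    phi = 2 * math.pi * delta_L / wavelength   # extra phase in one arm
    I1 = math.cos(phi / 2) ** 2                # fraction at Output 1
    I2 = math.sin(phi / 2) ** 2                # fraction at Output 2
    return I1, I2

print(output_intensities(0))        # equal arms: all light at Output 1
print(output_intensities(266e-9))   # extra lambda/2: all light at Output 2
print(output_intensities(133e-9))   # extra lambda/4: a 50:50 split
```

A mere 266 nm of extra path, half a wavelength, completely swaps the two outputs, and a quarter wavelength already splits the light evenly.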

But that only explains the presence of light at both outputs. Why is there an interference pattern, why does it only occur when we are very close to a perfect rhombus, and why does it somehow preserve the expected result in the center fringe of the pattern? The answer to that has to do with temporal decoherence. This is quantified by the coherence time \tau_{c} which is the time duration over which the phase remains stable. Coherence time is related to the spread of angular frequencies \Delta \omega in the beam by

\tau_{c}\approx\frac{1}{\Delta \omega}.

In other words, only a perfectly monochromatic beam is fully coherent, i.e. has an infinite coherence time. All realistic beams are only partially coherent because there is always some spread to the angular frequencies (and thus wavelength), i.e. they’re not truly monochromatic. To quote from Fox,

If we know the phase of the wave at some position z at time t_{1}, then the phase at the same position but at a different time t_{2} will be known with a high degree of certainty when |t_{2}-t_{1}|\ll\tau_{c}, and with a very low degree when |t_{2}-t_{1}|\gg\tau_{c}

A more convenient measure is the coherence length, L_{c}=c\tau_{c} where c is the speed of light. So another way to state the above is to say that if we know the phase of the wave at z_{1}, then the phase at the same time at z_{2} will only be known to a high degree of certainty if |z_{2}-z_{1}|\ll L_{c}. That means that in order to get the two arms to have just the right phase to produce an interference pattern, the difference in length between the two arms has to satisfy 2\Delta L\lesssim L_{c}. This explains why we need nearly a perfect square (or rhombus) to get an interference pattern and it makes it clear that any such pattern is related to the natural variability in the beam. Anything else will simply produce light at both outputs. The only way to get the actual predicted result of light only appearing at Output 1 is to either dial it down to single photons (since I don’t think a single photon has a coherence time associated with it, though I could be wrong) or to have a perfectly monochromatic beam. (Note that a more accurate description involves the first-order correlation function which includes an oscillating term that explains this rapid changing of the angular frequency.) Note that this relates to the interpretation of the single photon taking both paths simultaneously (see Fox, p. 302).
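One way to picture the 2\Delta L\lesssim L_{c} condition: for a Lorentzian lineshape the magnitude of the first-order correlation function decays as e^{-|\tau|/\tau_{c}}, and the fringe visibility equals |g^{(1)}(\tau)|, so visibility falls off exponentially in the arm-length mismatch. A sketch (the exponential form is the standard Lorentzian result; the specific lengths are merely illustrative):

```python
import math

def visibility(delta_L, L_c):
    """Fringe visibility vs path-length mismatch for a Lorentzian lineshape:
    V = |g1(tau)| = exp(-|delta_L| / L_c), with tau = delta_L / c."""
    return math.exp(-abs(delta_L) / L_c)

L_c = 283e-6   # coherence length, m (illustrative value)
for dL_um in (0, 50, 283, 1000):
    print(f"delta_L = {dL_um:4d} um -> V = {visibility(dL_um * 1e-6, L_c):.3f}")
```

Fringes are crisp when the mismatch is a small fraction of L_{c}, already noticeably washed out at a mismatch of L_{c} itself, and essentially gone a few coherence lengths out, which is exactly the “nearly a perfect rhombus” behavior seen on the bench.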

The question then becomes, why does the center of each output faithfully retain the information of the expected result and why, if we adjust the mirror angles, does the spacing between the fringes change? Actually, the center of the outputs will only retain the expected result if it is exactly a perfect square or some proper multiple of the phase as discussed above. This explains why sometimes I got the opposite of my expected result. It also explains why the pattern seemed to constantly be shifting (and did so especially when there were vibrations in the air or on the optical bench). The alternating pattern then results from the fact that the mirrors are likely not exactly at 45 degree angles (remember how insanely small the tolerances are). So, for example, if we had mirrors that were exactly at 45 degree angles, what we would likely see would be light flashing back and forth between the two outputs, but no interference fringes.

So the only open question that I see is: if we start with single photons and slowly crank up the intensity, at what point does the coherence time come into play, i.e. at what point does temporal decoherence kick in? I suspect the answer lies in photon bunching, but I’ll have to do some more reading and thinking and, eventually, experimenting…

