To really get a sense of the nature of the wavefunction, let’s go back to the early days of contemporary physics and look at the nature of light. By ‘contemporary’ here I am referring roughly to post-Galilean physics. As far back as Newton and Huygens there was a debate over the nature of light – Newton held the corpuscular (particle) view, which likely stemmed from his work on optics, while Huygens held the wave view. The wave interpretation began to dominate the rhetoric after the implications of Young’s double-slit experiment (1801) were fully realized. It was then that the debate began to center on just what it was that was ‘waving,’ as it were. Maxwell, of course, answered that question by showing that light was merely the fluctuation of an electromagnetic field, though debate about this continued in the form of debate over the aether. Suffice it to say that, once the Michelson-Morley experiment was performed, and certainly by the time relativity came along, the consensus was that what was ‘waving’ was an electromagnetic field. (Of course, ironically, in the same year he introduced special relativity, Einstein also explained the corpuscular nature of light that seemed to be suggested by certain experimental results, namely the photoelectric effect.)

Since it had also become clear that light demonstrated corpuscular behavior as well, the wave-particle duality of light was firmly established. Louis de Broglie then proposed, in his PhD thesis, that all matter possessed this same duality. It took quite a while for physicists to determine exactly what was ‘waving’ in the case of the particles of matter, but, as QFT seems to indicate, it’s just some other type of field.

But is Young’s double-slit experiment *really* evidence for the wave nature of light? Passing single photons through the slits one at a time gives a *statistical* distribution on a detection screen. I’m not sure of the historical roots of this, but I suspect that this is what led to the interpretation of the wavefunction as a probability distribution. Somehow along the way, when this notion was melded with the modern formalism of quantum mechanics, the wavefunction appears to have lost its ontological status.
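The statistical character of the pattern is easy to demonstrate numerically. A minimal sketch (assuming an idealized cos² fringe profile in arbitrary units, not any particular experimental geometry): each “photon” is a single random detection, and only the histogram of many detections traces out the fringes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized far-field double-slit pattern: detection probability density
# proportional to cos^2 of the slit phase difference (arbitrary units).
x = np.linspace(-5, 5, 1000)      # screen position
p = np.cos(np.pi * x) ** 2        # unnormalized fringe profile
p /= p.sum()                      # discrete probability distribution

# Detect "photons" one at a time: each is one random screen position.
hits = rng.choice(x, size=50_000, p=p)

# Individually the hits look random; collectively they trace the fringes.
counts, _ = np.histogram(hits, bins=50, range=(-5, 5))
```

Any single detection is just a dot; the wave-like pattern only emerges statistically, which is exactly the tension described above.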

On the other hand, we know that we get an interference pattern regardless of the wavelength of the light, which means we can get interference with radio waves as well. In fact, this is easily demonstrated using a pair of broadcasting radio antennas acting in place of the slits. But most people never discuss “photons” in the context of radio waves because the notion somehow seems difficult to imagine (how big would radio photons be?). (Either they’re there and simply hard to imagine, or they’re not there. If it is the latter, at what wavelength does light suddenly ‘rid itself’ of photons?)

Given the fact that, starting essentially with de Broglie and Einstein, the photon was just another particle (in this case a massless boson), it made sense to treat the wavefunctions of all these particles in the same manner. Given that it took a while for QFT to really explain many of the other particles, and the fact that the probability distribution interpretation seemed to work just fine, it seemed as if the wavefunction no longer needed any ontological status and was divorced from its association with the fluctuating field. But by stripping it of that status we are forgetting the historical origins of the wavefunction as I’ve just outlined, and we are forgetting that, in the case of light at least, and really in the case of all particles, something physical – a field – really is fluctuating.

Now, given all of that, it is not particularly clear where the quantum-classical transition comes into play here. People often talk about ‘quantum’ versus ‘classical’ light, i.e. noting that light exhibits both quantum and classical behavior. But, according to Rovelli, Griffiths, and others, the world is entirely quantum and classicality is merely a perception on our part.

Given all of this and assuming for the moment that wavefunction collapse is real and that the act of measurement should cause this (again, just assume this for the moment whether or not you buy it), then shouldn’t the act of listening to the radio collapse the signal wave to a particular, localized state thereby preventing anyone else from listening to it on a different radio?

Regarding radio waves, of course they have photons. Yes, you are “collapsing the wavefunction” when you listen to the radio, but this has no effect on the signal because it is composed of an extremely large number of photons and you are only measuring a small portion of them. In fact, this isn’t quite accurate either because you are measuring the photons very weakly, so the wavefunction doesn’t completely collapse, but because there are a lot of them you get a strong signal anyway. In any case, what you are saying about the radio would be equivalent to me saying that I shouldn’t be able to see the tree outside my window because someone on the other side of the street is already looking at it, which is clearly nonsense.
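The numbers back this up. A back-of-envelope sketch (the 1 MHz carrier and 1 pW received power are illustrative choices, not measured values):

```python
# Back-of-envelope photon flux in a weak AM radio signal. The carrier
# frequency and received power are illustrative choices, not measurements.
h = 6.626e-34    # Planck constant, J*s
f = 1e6          # carrier frequency, Hz (AM band)
power = 1e-12    # received power at the antenna, W (a weak signal)

energy_per_photon = h * f                       # ~6.6e-28 J per radio photon
photons_per_second = power / energy_per_photon
print(f"{photons_per_second:.2e} photons/s")    # ~1.5e+15
```

Even a barely detectable signal delivers on the order of 10¹⁵ photons per second, so one receiver absorbing some of them has no practical effect on anyone else’s reception.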

Regarding the ontological status of the wavefunction, you are being a bit selective in your retelling of the history of QM. QM was developed from two sides simultaneously, i.e. in atomic physics and in statistical/optical physics. The former led to Heisenberg’s matrix mechanics and the latter to Schroedinger’s wave equation. Neither of these on their own was quite complete (despite erroneous claims that they were proved equivalent early on). Heisenberg lacked a good treatment of non-stationary states and Schroedinger lacked the Born rule. What we call modern quantum mechanics is a distillation of ideas taken from both these approaches, e.g. we treat the commutation relations as fundamental, which comes from Heisenberg, but we tend to stick the resulting differential operators into wave equations a la Schroedinger.

In my opinion, the Schroedinger approach does indeed suggest an ontological wavefunction, but the Heisenberg approach, and its whole history, suggests it should have a more epistemic status. After all, Heisenberg’s original idea was that atoms existed in stationary states (which were originally thought of as ontic states) and that there are random jumps between them, governed by the amplitudes of what we would now call the wavefunction. Because it governs probabilistic transitions from the outset, and the ontic states are something else, this is suggestive of an epistemic status.

Of course, we now know that neither the Heisenberg nor the Schroedinger approach to quantum theory, or at least neither of their original ontologies, is good enough, due to the various no-go theorems. The point of the story is just to suggest that we shouldn’t be swayed too much by historical arguments, because you can find historical arguments to back up pretty much any point of view. Instead, we have to deal with the theory as it is today, and in that theory the wavefunction is not the same as a classical electromagnetic wave, nor is it simply a classical probability distribution. Therefore, I think that the question of the ontic/epistemic nature of the wavefunction should be resolved by theorems we can prove about the modern theory and analogies that are relevant to the modern theory, as well as looking at which one gives the best intuition for solving problems and making future progress in physics. From that point of view, I think the epistemic approach is winning at the moment.

Hi Matt,

I skipped the history of the Schrödinger and Heisenberg pictures because, to me, the problem of wavefunction ontology is independent of those pictures (but then again I’m a bit radically empiricist in my views).

As I pointed out in my FQXi essay, from an empirical standpoint I find the idea that the commutation relations are fundamental to be a bit unsatisfying. To me they are a mathematical statement and, though their physical *consequence* is clear, their physical *meaning* is not.

Regarding the radio wave example, I honestly never thought about it until today. I probably spent too much time around radio astronomers earlier in my career – they don’t believe radio waves have a photonic nature (at least the ones I knew didn’t). I suppose this is where field theory really becomes handy since it allows for a “local collapse” of a single field.

At any rate, while I am aware that, for the most part, we have a different understanding of the wavefunction today than we used to, I find that understanding to be bothersome for the same reason I find the commutation relations bothersome. It is particularly irksome from the standpoint of the quantum-classical transition since, clearly, the fields *do* fluctuate, and there still needs to be some convergence of ideas around the double-slit experiment (as seemingly simple as it is). I also suspect that I am finally understanding why Feynman claimed that the quintessentially problematic “thing” in QM was the double-slit experiment (Schrödinger said it was entanglement, and there obviously is a relation, but I always thought entanglement was more obviously problematic – until now).

Oddly enough, I was agnostic on this issue until today (in fact that’s how I voted in the poll). But the fact that I’m leaning toward an ontological approach doesn’t mean I reject the epistemic one outright. I am, in fact, a relatively new convert to QBism, which seems to be an epistemic approach (actually, it appears that most current (quasi-)interpretations – QBist, epistemic, consistent histories, relational, et al., with the possible exception of MWI – are epistemic in some way). As a relationalist I see the lure of such arguments, but I worry that these approaches are becoming too solipsistic.

I hope that made some sense.

I prefer to think of it this way – the ideal particle properties come from looking at wave functions in the delta function basis. The wave properties come from looking at them in the momentum basis. The actual states are somewhere in between. Well, that is if you don’t consider things like the conserved quantum number of particles as being an exclusively particle-like property, anyway. At any rate, I’m pretty certain that wave function collapse is going to be found to be a consequence of imposing locality on the interaction terms in the Hamiltonian/Lagrangian.
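The basis-dependence point can be illustrated with a Gaussian wavepacket: the same state is narrow in one basis and broad in the other, the two representations being related by a Fourier transform. A sketch (the grid parameters and width σ are arbitrary illustrative choices):

```python
import numpy as np

# The same quantum state in the position ("particle-like") and momentum
# ("wave-like") bases, related by a Fourier transform. Grid parameters
# and the width sigma are arbitrary illustrative choices.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 0.5
psi_x = np.exp(-x**2 / (4 * sigma**2))            # Gaussian wavepacket
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)   # normalize in x

# Momentum-space amplitude via FFT (overall constants fixed by renormalizing).
psi_k = np.fft.fftshift(np.fft.fft(psi_x))
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
dk = k[1] - k[0]
psi_k /= np.sqrt(np.sum(np.abs(psi_k)**2) * dk)   # normalize in k

def rms_width(grid, amp, step):
    """Standard deviation of |amp|^2 treated as a probability density."""
    prob = np.abs(amp)**2 * step
    mean = np.sum(grid * prob)
    return np.sqrt(np.sum((grid - mean)**2 * prob))

dx_w = rms_width(x, psi_x, dx)    # ~0.5: localized, particle-like
dk_w = rms_width(k, psi_k, dk)    # ~1.0: spread out, wave-like
print(dx_w * dk_w)                # ~0.5, the minimum-uncertainty product
```

Squeezing the state toward a delta function in one basis necessarily spreads it in the other, which is one way to see why “particle” and “wave” are two views of the same object.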

Matt Leifer also raises some excellent points – you’re only localizing a few of a large ensemble of particles, so you do deplete the signal (i.e., cause interference), but not by a lot.

Also, if you’re looking at probing the quantum/classical boundary, there’s a pretty good paper by Tegmark from 1993 on the topic “Apparent wave function collapse caused by scattering” at

http://www.springerlink.com/content/v7671jgq5r11p8h7/

DOI: 10.1007/BF00662807

BlackGriffen

I think my point got a bit lost in the detail of what I wrote. I didn’t really want to emphasize the question of Schroedinger vs Heisenberg pictures, but rather I wanted to imply that there is as much of a historical precedent for being an epistemicist as there is for being an ontologist. If you consider the atomic models that were the forerunners of QM, as opposed to considering the optical effects, then I think that you can argue that an epistemic approach looks natural. Therefore, even if I buy your account of the optical effects, we can say that the tension between ontological and epistemic has existed right from the beginning. Basically, in pre-QM atomic models it is natural to think of the stationary states as “real” and the quantum probabilities as just fuzzy things that are responsible for transitions, which would naturally be interpreted epistemically. Since Heisenberg was thinking about atomic physics, it is no surprise that his approach has some affinities with epistemicism, but we don’t have to get that far into the history to appreciate that the ontology of quantum theory has never been obvious.

I agree that the interpretation of the commutation relations is problematic. My point in raising them was just to note that there is just as much precedent for viewing QM as a theory where something funny happens to the dynamical variables in Hamiltonian mechanics as there is for viewing it as a theory about a weird wavelike object satisfying a differential equation. Putting the emphasis on the structure of the dynamical variables is more conducive to an epistemic interpretation, since you can just view QM as a weird kind of probability over them. I’m not saying which point of view is right, just that they are both latent in the quantum formalism, and have been from the very start.

You are right that you have to be an epistemicist to be a QBist (unless you also want to be a contradictorian), but then again you probably shouldn’t be a QBist, at least not in the most extreme Fuchsian sense. Also, there are plenty of ontological interpretations on the table, e.g. MWI, Bohm, spontaneous collapse, and modal interpretations. If anything, I would say that ontic interpretations are more popular amongst people who identify as experts in foundations, particularly in the philosophy community. The status of consistent histories is unclear to me. Some people view it as a fleshing out of MWI, i.e. a way of defining worlds in MWI, in which case it would have ontic states. The Griffiths approach is probably vague enough to admit an epistemic interpretation, but it is also so vague that it doesn’t make sense to me. I do not believe that the relational interpretation makes coherent sense at the present stage of development, but I agree that it does want to have epistemic states. Therefore, I agree that there are a number of epistemic options on the table, but I think they should all be viewed as works in progress rather than completely coherent interpretations at the present stage. The ontic approaches are better worked out in general, so they provide good foils to orthodoxy, but I find it hard to believe that any of them can be correct.

OK, gotcha. I see your point now. Historically the origin of the use of wavefunctions in atomic physics essentially begins with Bohr – and perhaps indirectly de Broglie – but I would maintain that one could still draw an historical line back to the problem of light. If I were to “graph” the history of the wavefunction, I would envision a single path that forks at de Broglie, each of these two new paths eventually becoming associated with the Heisenberg and Schrödinger pictures respectively, before they start to “intertwine” again down the road.

To put it another way, Bohr had been thinking about atomic physics for quite some time before the “new” quantum theory appeared on the scene, and the stationary states were a part of that (since they helped explain the periodic table, etc.). Bohr’s model of hydrogen, which we know is overly simplistic, actually predates de Broglie, but when de Broglie’s ideas appeared it suddenly made sense how these “stationary” states arose and why they were quantized: the allowed orbits correspond to standing matter waves. This is the model’s important historical role – it is the primary place where something wave-like enters atomic physics. And since the de Broglie relations were themselves inspired by the duality of light, one could draw a line for the wavefunction all the way back to light.

As for QBism and the relational approach, I am presently working on “bucking the trend,” as it were, and proving that one can hold QBist-like views and still be an empiricist with a belief in certain ontic states. As with everything, I believe in a healthy balance (in this case between epistemic and ontic).

As for Bob Griffiths, I’ll admit to not having probed consistent histories in great depth, but it always seemed to me to be quite similar in underlying style to Rob’s epistemic theory. In fact their respective explanations of entanglement seem virtually identical (though I am basing this belief on informal conversations with both of them and thus this could merely be a superficial understanding).

Sean:

I suspect you’re right, by the way. I’ve been chatting with Frank Schroeck lately and he’s been working on using the phase space/Hilbert space dualism as a means for exploring the link between quantum and classical. There’s a natural fit there for your view about the Hamiltonian and Lagrangian.

A couple of quick questions from someone who hasn’t spent as much time in the field on this topic. QBist? What’s the critical distinction between ontology and epistemology?

Thanks,

BG

So QBism is short for “quantum Bayesianism.” It is a view that is probably best summed up by Chris Fuchs’ recent paper.

Ontology refers to states of being and existence while epistemology refers to states of knowledge. So in the discussion above if something has an ontological status (i.e. it is ontic) it means it actually exists or corresponds to something that actually exists (as opposed to being a useful mathematical model). If it is epistemic then it is merely a state of knowledge. The distinction between these two is most often discussed in relation to quantum states. The epistemic view takes quantum states to merely be states of knowledge (and, hence, many epistemic interpretations are often accused of being solipsistic). The ontic view, on the other hand, takes quantum states to be actual states of physical reality, independent of us.

Personally, I don’t view them (in this context) as necessarily being mutually exclusive ideas.

Hi Ian,

Nice discussion! I certainly encourage trying to find a “healthy balance” between the epistemic and ontic viewpoints, but there’s a difference between “balance” and trying to have something both ways…

Of course, you’re right, there’s nothing mutually exclusive about something being both an epistemic and an ontic state; if they’re the same thing, that just means that we actually know what’s really going on.

But the whole point of the psi-epistemic way of looking at things is that if our state of knowledge was different than the actual, ontic state of reality, then this would explain both the need for probability as well as an apparent collapse when we updated our knowledge via physical measurements.

The psi-epistemic camp isn’t saying there is no ontic state (not even Fuchs, I’m pretty sure) – just that whatever that ontic state is, it’s not psi. I’m with you in agreeing that the most important question that needs answering (from this perspective) is what that ontic state actually is.

But when you write a poll answer that effectively says “No, the wavefunction is not ontological… But it represents something very real”, you’re letting people have it both ways. If it “represents something very real”, then it’s ontological! (What else could it mean to be ontic?) There’s no middle ground on this one; that poll answer doesn’t make any sense.

The fact that this self-contradictory answer got a huge plurality of the votes may be evidence that most physicists have schizophrenic views when it comes to psi, and are often trying to have it both ways. IMHO, you should discourage this sloppy thinking by removing this option from any future polling that you conduct on this topic… 🙂

“the most important question that needs answering (from this perspective) is what that ontic state actually is.”

As I’ve noted before, perhaps it’s a mistake to view the universe via classical realism. No, I don’t mean classical physics’ way of imagining “what’s really there” v. the QM way (which basically punted the question). I mean, we want to assume there is a common objective reality composed of distributions of “something” in space and time that really do have properties or at least “values” – like psi amplitudes – at each point and moment. That fits in with math, with graphs and computer simulations and such.

But, given the problems understanding WF collapse and the temptations to use faulty lines of reasoning (see my other comments and elsewhere re the fallacious reasoning used to claim decoherence solves, or even helps solve, the issue) to prop up that failing realist model, maybe we should imagine the universe as a system of relations that can’t always be represented as a genuine “distribution” at each moment and place.

I don’t know just how it would work, but forcing non-classical physics into old philosophical classicality just isn’t working out. We can’t even claim there really are “values” as abstractions (like chance of detection) in a clean way – all the time, everywhere – because of wondering just what happens to them “at the moment of measurement.”

And worst of all is to use circular reasoning, etc., to put lipstick on the pig. The old school basically accepted this and passed on “what is really there,” and maybe we should too.

Hey Ken! How’s Australia? Anyway, thanks for the feedback. What I meant by that one poll question was that I was trying to distinguish between a purely mathematical wavefunction that has little or no physical association and a mathematical (epistemic) wavefunction that nevertheless describes a real process. It’s a subtle point and, honestly, is probably too subtle, i.e. I was making a division between ideas where there isn’t one.

If photons were really “particles” guided by a Bohmian wave, etc., then what of the EM field? What kind of particle could a photon be, considering it has to refract, etc. (not just diffract, which could be tortured into a guided delivery)? Does it have a size other than the WF spread? I don’t see any way to make light corpuscular and just guided and still recover known optics.

As for the wavefunction being a “mathematical construct” etc.: so then, what is really there? They act like waves, including EM restrictions like no radiation (meaning an orbiting electron should be a standing wave pattern, etc.).

Oh, about collapse preventing others from listening to the same radio program (or even the same moment): no problem at all, since there are googols of radio photons. You are collapsing some, and other people receive the others. If you want to prove radio waves are really quantized and not just classical EM, then rush towards them at 0.9999….c for a Doppler factor in the trillions and let that UV light up your counters, or even phosphor screens if fast enough.
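For what it’s worth, the speed required can be estimated from the relativistic Doppler factor for head-on approach, D = sqrt((1 + β)/(1 − β)). A quick sketch (the 1 MHz and 10¹⁵ Hz endpoints are illustrative choices):

```python
# Relativistic Doppler factor needed to blueshift a radio photon into the
# ultraviolet (head-on approach). Endpoint frequencies are illustrative.
f_radio = 1e6   # Hz, AM-band carrier
f_uv = 1e15     # Hz, ultraviolet

D = f_uv / f_radio                    # required Doppler factor: 1e9
# D = sqrt((1 + beta) / (1 - beta))  =>  1 - beta = 2 / (D**2 + 1)
one_minus_beta = 2 / (D**2 + 1)
print(one_minus_beta)                 # ~2e-18: within 2 parts in 10^18 of c
```

Computing 1 − β directly avoids the floating-point cancellation you’d get from evaluating β itself, which would round to exactly 1.0 at this extreme.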

Oh, I know they must be photonic. I mean, they’re light so if you redshift anything enough you get radio waves and shouldn’t really lose the photonic nature. What I meant really is that the photonic model seems to lose its meaning in the radio range.

Yeah, they lose their effective photonic nature as radio waves, since it’s hard to find an interaction that would absorb them as single photons (and hard to know you just made one, count them later, etc.). But “in principle” they are photonic (as inferred from red- and blue-shift issues).

Another interesting quirk is that photon number is itself uncertain, depending on what “state” the light is in (versus electron number). This is well known. No way IMHO to Procrusteanize photons into guided particles given that numerical messiness.
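The photon-number spread Neil mentions can be made concrete with a coherent state, the standard quantum model of a stable classical wave: its photon-number distribution is Poissonian, P(n) = e^(−|α|²) |α|^(2n) / n!. A sketch (the mean photon number of 4 is an arbitrary illustrative choice):

```python
import math

# Photon-number statistics of a coherent state |alpha>, the closest quantum
# analogue of a stable classical wave: P(n) = exp(-|a|^2) * |a|^(2n) / n!
# The mean photon number (|alpha|^2 = 4) is an arbitrary illustrative choice.
alpha_sq = 4.0

def p_n(n):
    return math.exp(-alpha_sq) * alpha_sq**n / math.factorial(n)

probs = [p_n(n) for n in range(25)]   # tail beyond n=24 is negligible
mean = sum(n * p for n, p in enumerate(probs))
var = sum((n - mean) ** 2 * p for n, p in enumerate(probs))
# Poissonian: mean ~ variance ~ |alpha|^2, so the photon number is genuinely
# indefinite even though its mean is perfectly well defined.
print(mean, var)
```

The mean is sharp but the number is not, which is the “numerical messiness” that resists a naive guided-particle picture for light.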

Neil, I appreciate your comments about photons and the EM field, although I am not quite sure why they are relevant to the discussion about ontic/epistemic wavefunctions. In any case, since you raised it, I wanted to point out that these issues are well-known to those who take the Bohmian and related approaches seriously. Instead of universally adopting a particle ontology, one can adopt a field configuration ontology for the fundamental bosonic fields and a particle ontology for the fundamental fermionic fields. IMHO, this is the best worked-out Bohmian approach to QFT at the moment, and it gels with the way things work out in the classical limit. Of course, one can also attempt a particle ontology for bosonic fields, and since we are dealing with QFT, it is no surprise that the number of particles is not conserved. Such theories have stochastic particle creation and annihilation events that can account for states with no definite photon number. The fact that the wavefunction does not determine a unique number of photons does not mean that there is not an actual determinate (but fluctuating) number of photons in the full ontic state in such theories. There are technical difficulties with these theories at the moment, but there is no reason to suspect that they can’t be worked out.