19 Responses to “Is the wavefunction ontological?”
Wave functions do indeed have an ontological status. They are a mode of Being, one way we encounter what is. It seems to me that wave functions are in fact more real than the physical manifestations of the object, which only appear to be real (arising from the flux) because of the consciousness apprehending them. Wave functions are the closest to pure being, the flux of reality. Physical objects are simply reflections, a play of light and shadow upon the water, while the wave function is the flux in that water that creates the appearance of a reality that is fleeting, and disappears when the wave changes.
As a scientist who avoids metaphysics, the only way I know of to answer the question “Is such-and-such real”, is to try and measure it. If you can’t measure it, even in principle, then either a) it’s just a bookkeeping device, an abstract concept, or some other ontological spook, or b) it might as well be. I’m a geophysicist, and I don’t deal with quantum physics on a professional level, but it seems to me that you could collect a bunch of measurements of identically prepared atoms (or whatever you people do) and plot the distribution of the resulting measurements and thereby measure the wavefunction. If so, the wavefunction is measurable, and therefore real. If I’m wrong about this, then that doesn’t mean it’s not real, it just means you have to define very specifically what you mean by ‘real’, before I’m willing to answer. In any case, I usually avoid speculating about the existence of things we have no access to, so I would call myself agnostic.
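Here is a minimal sketch of what I mean, in Python (the Gaussian wavefunction and its phase factor are just stand-ins of my choosing): repeated position measurements on identically prepared systems let you reconstruct the distribution, though, as I understand it, what you recover is |psi|^2, not the complex phase itself.

```python
import numpy as np

# Toy wavefunction (my choice): a Gaussian packet with an arbitrary phase factor
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(2j * x)
prob = np.abs(psi)**2
prob /= prob.sum() * dx                 # normalize to a probability density

# Simulate many position measurements on identically prepared systems
rng = np.random.default_rng(0)
samples = rng.choice(x, size=100_000, p=prob * dx)

# The histogram of outcomes approximates |psi|^2: the statistics are
# measurable, but the phase exp(2ix) leaves no trace in them.
hist, edges = np.histogram(samples, bins=50, range=(-5, 5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
err = np.max(np.abs(hist - np.interp(centers, x, prob)))
print(f"max deviation of histogram from |psi|^2: {err:.3f}")
```

So the distribution is empirically accessible in the sense I described, even if the full complex object needs more work to pin down.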
So before I vote, somebody either verify my reasoning or set me straight.
A conditional “yes.” Wave functions have to be “real” in some sense if the world is composed of objects and not just a structure of relations and “appearances.” We need WFs to explain their interference, even if what we find at the end is not “a wave.” Also, I may harp on this, but “decoherence” does not solve the collapse/measurement problem. DI enthusiasts forget that we need some kind of special intervention to explain getting any kind of statistics at all (coherent or incoherent as the case may be) from waves. Their argument is often circular, taking “statistics” oddly for granted and then simply comparing the statistics caused by decoherence to those expressed by mixtures. But we can’t take the statistics for granted: getting statistics at all (in principle, not merely which kind) is the very thing we’re trying to explain (“get off the ground”) from the WFs. See relevant thread(s) at my link, including how one might obtain experimental disproof of one claim of DI.
But if our world is a sort of “contrivance” of rules and relations (as per Kant?) then we don’t need to picture the “objects” in it clearly. So it’s hard to say, it all depends IMHO on the ultimate nature of reality more than quantum mechanics as a specific issue.
I like your points. That resonates with the empiricist in me. The trouble in quantum physics is that there is some disagreement as to the nature of the wavefunction which naturally means its ontological status is up for grabs.
I partly agree about circular arguments. Too many arguments in physics are actually somewhat circular without people realizing it (or they rely on arbitrary axioms that are mistaken for physical truth). Nevertheless, I’m still not entirely convinced about your argument related to DI.
I can always count on you for a poetic and philosophical response…
Yes, it represents a real, physical fluctuation (I skip the word “absolutely,” which sounds too authoritative): the periodic fluctuation of an object that can be represented by a vector, a straight line segment. The fluctuation of every such object that spins (arrows, needles, twirling batons…) can be described by a wavefunction.
This is a personal realistic interpretation. I could call it a common sense interpretation: the quantum state vector |psi> represents little spinning rods. Little spinning rods act like photons or electrons or quarks. The vector difference d|psi> is perpendicular to the vector |psi> itself, so you can describe their evolution by an equation of the type:
i d|psi> = omega dt |psi>
where the imaginary i indicates perpendicularity between d|psi> and |psi>. Further deductions are also compatible with the quantum formalism.
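A quick numerical check of this equation (with an arbitrary omega and time step of my own choosing) confirms that it describes pure rotation: at every step the increment d|psi> is perpendicular to |psi>, so the length of the vector stays fixed while only its orientation (phase) advances.

```python
import numpy as np

# The rod equation i*d|psi> = omega*dt*|psi>, i.e. dpsi/dt = -i*omega*psi,
# integrated for a single complex amplitude (omega and dt are arbitrary).
omega = 2.0
dt = 1e-4
psi = 1.0 + 0.0j                     # start the rod along the real axis

for _ in range(10_000):              # evolve to t = 1
    dpsi = -1j * omega * psi * dt
    # perpendicularity: the real inner product Re(psi* . dpsi) vanishes
    assert abs((psi.conjugate() * dpsi).real) < 1e-8
    psi += dpsi

# The exact solution is the pure rotation psi(t) = exp(-i*omega*t): the
# length of the rod is preserved and only its phase advances.
print(abs(psi), abs(psi - np.exp(-1j * omega * 1.0)))
```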
What metaphysical implications does this have for the classical questions:
What happens in the double slit?
What happens to the cat?
What about nonlocality?
Double slit: a little spinning needle in a cloud of spinning needles behaves like a particle in a wave continuum –> wave-particle dualism –> interference pattern on the other side of the double-slitted screen.
Schrödinger cat: nothing can be said before one has measured the state through a physical process (an interaction with another needle); until then it is in a state of superposition. After measurement, the state of the needle has “collapsed” into one state.
Nonlocality: a needle is extended in 2D. Measuring the position (or the velocity) of a needle with another needle gives a local “point” outcome, while the “real” needle position is spread over a range of values –> it is nonlocal.
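The double-slit case can be illustrated with a small sketch (the wavelength, slit separation and screen distance below are arbitrary choices): adding the two complex amplitudes gives the familiar fringes, while adding the two intensities, as one would for classical particles, gives no fringes at all.

```python
import numpy as np

# Two-slit pattern on a distant screen: add the complex amplitudes from the
# two slits, then square. Geometry (wavelength, separation d, distance L)
# is arbitrary.
wavelength, d, L = 1.0, 5.0, 1000.0
k = 2 * np.pi / wavelength
xs = np.linspace(-200, 200, 2001)           # positions on the screen

r1 = np.hypot(L, xs - d / 2)                # path lengths from each slit
r2 = np.hypot(L, xs + d / 2)
amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(amp)**2                   # interference: fringes from 0 to 4
classical = np.abs(np.exp(1j * k * r1))**2 + np.abs(np.exp(1j * k * r2))**2

print(f"with interference: {intensity.min():.2f}..{intensity.max():.2f}")
print(f"without (classical particles): {classical.min():.2f}..{classical.max():.2f}")
```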
Yes, I believe what I see happen experimentally: a particle and a wave, and when we observe, we observe independent, discrete dots on a screen. If you like, we could call it wavefunction collapse –> a particle described by a wavefunction that leaves a point trace on a detecting device.
Have you heard of the de Broglie-Bohm or Many-Worlds interpretations, in which the wavefunction does not collapse and the apparent “collapse” is induced by decoherence?
David: the argument that decoherence can in any way solve WF collapse is basically circular, as I noted. Supporters compare statistics found in coherent situations to statistics found in incoherent ones, and then compare the latter to the statistics of true mixtures (sometimes A, sometimes B, but not “both at the same time” as in superpositions). Then they say, “look, the statistics in the case of decoherence are like those of real mixtures.” The most radical go so far as to say that decoherence created a real mixture. (Or, effectively real – “FAPP” – which is “real” in an observation-dependent universe like ours is presumed to be.)
However, we wouldn’t have any “statistics” derived from WFs to compare to other WF statistics, or to “real” or FAPP mixtures, unless some intervention could extract “hits” from the WF in the first place! See where the circular argument comes in? Without some process that localized the WFs, they would just stay in that condition forever (eternal Schrödinger evolution – dead and alive!) and still superposed. Decoherence would just make the combined WFs messier. It can’t even make for the “appearance” (?!) of collapse as long as they are all evolving together across space and time.
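To make the comparison concrete, here is the standard textbook bookkeeping (not my argument itself, just what the DI supporters rely on): decoherence drives the off-diagonal terms of a superposition’s density matrix to zero, making it numerically identical to a 50/50 mixture. But reading either diagonal as “probabilities of outcomes” already presupposes that measurements yield statistics at all, which is exactly the point at issue.

```python
import numpy as np

# A qubit in the superposition (|0> + |1>)/sqrt(2)
psi = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())       # density matrix of the superposition

# "Decoherence": environmental entanglement suppresses the off-diagonal
# (coherence) terms; modeled crudely here by zeroing them outright.
rho_decohered = np.diag(np.diag(rho_pure))

# A genuine 50/50 statistical mixture of |0> and |1>
rho_mixture = 0.5 * np.outer([1, 0], [1, 0]) + 0.5 * np.outer([0, 1], [0, 1])

# The two matrices are numerically identical -- but calling their diagonal
# "outcome probabilities" already assumes the Born rule.
print(np.allclose(rho_decohered, rho_mixture))   # True
```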
Moxie doesn’t fully agree, but might be impressed as “an empiricist” if anyone runs my experiment about DI. What I proposed (at my blog) would show whether post-decoherence output was really a mixture. But here’s my main counter-argument against MWI: MWIers suppose that measurement really isn’t so special, and that the world keeps splitting up each time there’s some juncture or interaction (as I read them). But then, wouldn’t a photon hitting a beam splitter for the first time be split there into two universes (one for each direction), and not just at a later array of detectors? But if so, then we wouldn’t find interference patterns.
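The beam-splitter point can be checked with the usual amplitude rules for a Mach-Zehnder interferometer (a textbook sketch, with the standard 50/50 splitter convention): if both paths stay in superposition, the amplitudes cancel at one output port; if the photon had definitively “branched” at the first splitter, averaging the two branches separately gives 50/50 at both detectors and the interference disappears.

```python
import numpy as np

# 50/50 beam splitter acting on the two-path amplitude vector (|a>, |b>),
# standard convention: reflected amplitude picks up a factor i.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

# Coherent case: photon enters port a; both paths kept in superposition
# through both splitters of the interferometer.
out = BS @ BS @ np.array([1, 0])
p_coherent = np.abs(out)**2                  # all probability at one detector

# "Branched" case: after the first splitter the photon is definitely on one
# path; average the detector statistics over the two branches separately.
after_first = BS @ np.array([1, 0])
p_branched = sum(np.abs(after_first[i])**2 * np.abs(BS @ np.eye(2)[i])**2
                 for i in range(2))          # 50/50 at both detectors

print("coherent:", np.round(p_coherent, 3))
print("branched:", np.round(p_branched, 3))
```

Only the coherent case reproduces the interference actually seen in such experiments, which is the tension I mean.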
Arjen is referring to the “collapse” that turns a “wavefunction” into a localized event. It really doesn’t make sense to us, and thinkers shouldn’t indulge fallacies to pretend it can be incorporated into a rational progression. Indeed, I don’t think a fully realist model of the world can be found. Efforts like those of Penrose to show gravity etc. forcing collapse will not, IMHO, be fully successful. I think the world is a network of Kantian relations, a sort of phantasm or relative multiverse itself (not to be confused with other entire universes). This fits in with my “property-dualist” theory of mind.
David: the de Broglie-Bohm and MW interpretations are “super”-interpretations; they add to the whole but don’t give an idea of the “ontos” of the quantum particles: their structure, their constitution, and how the whole quantum formalism emerges from the physical reality of the particles.
Neil: it’s indeed important to have a rational progression. Therefore, quantum interpretation should start from the first principle – the elementary particle and its representation as a vector – and progress rationally, i.e. ask oneself what happens to a particle represented by a vector. In my experience, this makes quantum mechanics more intuitive.
Arjen, then what happens after a particle’s WF spreads out – but then needs to be localized at one spot? Whatever is spread out needs to contract in a very odd way, and I don’t see any way to make that “palatable” to physical intuition.
I see the spreading out of the particle’s WF as the pilot wave of the particle. The particle remains localized (within its structural range) but its pilot wave spreads out. This can be verified with ordinary particle / pilot wave systems, like the walking droplet: http://www.physorg.com/news78650511.html
‘The question of whether the waves are something “real” or a function to describe and predict phenomena in a convenient way is a matter of taste. I personally like to regard a probability wave, even in 3N-dimensional space, as a real thing, certainly as more than a tool for mathematical calculations … Quite generally, how could we rely on probability predictions if by this notion we do not refer to something real and objective?’ [Max Born, “Natural Philosophy of Cause and Chance”, Dover, 1964, p. 107]