Given that referees for most academic journals are not paid, this seems a bit outrageous. But then we already knew that, didn’t we? I got the graph below from this article in Scientific American, which in turn got it from somewhere else.

# Reductionism, the nature of science, and quantum states

My entry into the latest FQXi essay contest has been posted and was partly inspired by my last blog post. What is particularly interesting is that, rather unexpectedly, there were a few other entries talking about reductionism. Mine was the only one that attempted to defend the practice. The three (that I have read so far) that appear to be on the opposite side of the issue include Julian Barbour’s, George Ellis’, and Emily Adlam’s. The latter isn’t a take-down of reductionism per se, but it does call into question some of its usual precepts. Of those three, Ellis’ is the one I find to have the most compelling argument, though I still disagree with some of it. There clearly is some top-down causation in the universe. I simply think (and George actually agrees in part) that the causation moves “upward” (less complex to more complex) a bit like a random walk with drift, i.e. the overall trend is bottom-up as opposed to top-down.

With that said, the main thesis of my proposal is that science really consists not of the two branches of theory and experiment (with the possible third branch of computation), but rather of three *functions*: **measurement**, **description**, and **predictive explanation**. The predictive part is important because it leads to further measurements and thus the process is cyclic.

In regard to quantum states, I made a somewhat casual remark in the essay that might bear some further consideration. Consider the following generic quantum state,

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle.$$

Different interpretations of quantum mechanics interpret this state in different ways. A statistical or stochastic interpretation would assume that the values $\alpha$ and $\beta$ represent the results of repeated measurements. An ontic interpretation (or something similar) would interpret these values as literal, i.e. the system that is in the given state really *is* in a superposition of the two sub-states simultaneously. An epistemic or Bayesian view would see these values as representing a state of our knowledge that will be updated with a subsequent measurement, i.e. they are related to probabilities (via the Born rule, though there is a very big difference between the Bayesian approach and one that takes the Born rule at face value). Notice that these three interpretations of the state roughly correspond to the three functions I assigned to science: measurement, description, and predictive explanation. As I casually remarked in my essay, perhaps, instead of needing *no* interpretation, as Brukner has suggested, we really need *multiple* interpretations. Hmmm…
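To make the Born-rule reading concrete, here is a minimal sketch in Python. The particular amplitudes are illustrative choices of mine, not anything from the essay; the point is just that squaring the moduli of $\alpha$ and $\beta$ turns the state into the probabilities that the epistemic/Bayesian view would update upon measurement:

```python
import math

# A generic two-level state |psi> = alpha|0> + beta|1>, with
# illustrative amplitudes chosen so that |alpha|^2 + |beta|^2 = 1.
alpha = complex(1 / math.sqrt(2), 0)
beta = complex(0, 1 / math.sqrt(2))

# Born rule: the probability of each measurement outcome is the
# squared modulus of the corresponding amplitude.
p0 = abs(alpha) ** 2
p1 = abs(beta) ** 2

print(p0, p1)  # each is 0.5, up to floating-point rounding

# The probabilities of a normalized state sum to one.
assert math.isclose(p0 + p1, 1.0)
```

Note that the statistical interpretation would read these same numbers not as degrees of belief but as relative frequencies over repeated runs of the experiment.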