Archive for July, 2012

Problems with science? A belated response to Jonah Lehrer

Posted in Uncategorized on July 31, 2012 by quantummoxie

I’ve been working on this post for a while and it’s turning into more of an essay.  In the interim, word comes that Jonah Lehrer resigned his position at the New Yorker after admitting to fabricating quotes by Bob Dylan.  While I do criticize Lehrer in this post, frankly, I’m a little surprised he would resort to such a thing.  At any rate, back in January, he wrote an intriguing, but flawed, article about the state of science for Wired magazine.  Since I only recently found time to read it, this rejoinder is a bit delayed.  But, though the handful of actual readers of this blog may be the veritable “choir” to whom I preach, I could not let some of his assertions pass unchallenged, particularly since additional misinterpretations of science (and of how people, in general, view science) have cropped up elsewhere in my life.  Unfortunately, despite his mischaracterizations, some of his points do ring true in certain fields of science.  And that brings me to point number one.

The single most egregious aspect of Lehrer’s article was the way it was “sold” to an unsuspecting reader and the impression it might give to someone without a deeper knowledge of science.  The title implied that there was a serious, pervasive problem with science in general.  Science is a vast human enterprise involving tens of thousands (maybe even hundreds of thousands) of scientists in dozens of disciplines working in numerous cultures across the globe.  While in theory the scientific method is universal, it is true that institutional, cultural, and discipline-specific biases inevitably color scientific results since, at its heart, science is a human enterprise.  But, in the end, science studies the universe, and the universe is at least objective enough (quantum mechanics notwithstanding – more on this later) that its secrets reveal themselves in spite of human foibles.  In other words, no matter how much humans might want one thing to be true, the universe doesn’t listen to us (e.g. Lysenkoism).  Science, though, is an adaptable enterprise by its very nature, and one of its inherent strengths is its ability to learn from its mistakes.  Science, as a process and an ideal, is not broken.  There may be problems with certain communities of scientists, and, indeed, even with certain methods of science, but that does not invalidate the underlying nature of science.

That brings me to my next two points about Lehrer’s piece.  The first is that, despite its title and the way in which it was sold (all of science is broken!), Lehrer focuses almost exclusively on medical and pharmaceutical science.  If his intent was to make a broad generalization about all of science, then he should have demonstrated that the problems he highlighted exist across all scientific disciplines or, conversely, that the problem is with the nature of science itself.  It appears from his essay that he attempted to use the medical and pharmaceutical sciences as examples in an effort to demonstrate the limits of reductionism, but his reasoning is muddled enough that his purpose is not entirely clear.  Ironically, in one example, he labels the outcome of a drug trial a “tale of mistaken causation.”  But by somehow conflating statistical methods with reductionism, Lehrer’s entire article becomes a tale of mistaken causation and misapplied blame.  Simply put, statistics are not reductionist.  To better understand why, it is important to examine the origins of modern science.

The principles of modern science were largely developed in 17th-century Europe, and many were codified by the early documents and practices of the Royal Society of London, arguably the world’s oldest scientific organization.  In retrospect, one might view reductionism through the lens of Bishop Sprat, who wrote that the Royal Society’s purpose was “not the Artifice of Words, but a bare knowledge of things” and who argued for a “Mathematical plainness.”  To put it another way, as Henry Oldenburg, the founder of the Society’s journal Philosophical Transactions, described it, the Society’s members were “remarkably well versed in mathematics and experimental science.”  In other words, modern science is founded on a balance of mathematical precision and experimental results.  At the time the Royal Society was founded, this physics-centric worldview was applied across the sciences, including to medicine, and it was a worldview that was very mechanistic and deterministic (Newtonian).  Whether or not that is still true is debatable, but Lehrer argues that

[t]his assumption—that understanding a system’s constituent parts means we also understand the causes within the system—is not limited to the pharmaceutical industry or even to biology. It defines modern science. In general, we believe that the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts. Scientists refer to this process as reductionism. By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients.

I would argue that truly understanding a system in a mechanistic and deterministic way doesn’t merely mean understanding the constituent parts, but also how they fit together.  Even in a non-deterministic worldview it is still necessary to understand both the parts of a system and its whole.  How do we understand causation if we don’t take such a view?  For instance, if we can’t use such a mechanistic method for understanding, say, Condition X, how do we understand it?  The only alternative is to look at statistical trends in the data.  And there’s the rub.  If the mechanistic and deterministic view is reductionist as per Lehrer’s definition, then the statistical view, which is the only alternative, cannot possibly be!  (How can we be sure, by the way, that it is truly the only alternative?  That’s another blog post entirely, but suffice it to say for now that it’s all in the mathematics.)

We can reconcile reductionism with both mechanistic and statistical views if, instead, we equate it with Bishop Sprat’s ideal of science being “a bare knowledge of things.” In contemplating these issues, and in preparing a talk on cosmology for my local library in which I wanted to present a bit on the “core enterprise of science,” I came to a realization.  Usually, science is portrayed as having two equally balanced parts: theory and experiment.  In recent years, computational methods have been touted as the “third branch” of science, but I think there’s more to it than that, and I think there’s always been more to it than that.

The purpose of experimental science is to lend precision to our observations of the world.  As such, it differs from Aristotelian observation in that it demands repeatability.  But there are multiple ways to achieve repeatability, as quantum mechanics has shown.  There’s repeatability in the usual sense, i.e. the results of an experiment are the same every time, but there’s also another way to interpret repeatability.  As I like to tell my students, quantum mechanics is “predictably probabilistic.”  That is, while you can’t predict the exact behavior of, say, an individual particle, the ensemble behavior of a collection of such particles, as encoded in the associated probability amplitudes, is exceptionally predictable (indeed, I have a feeling that this is how causality arises, but that’s another story).
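To make “predictably probabilistic” concrete, here is a minimal sketch (my own illustration, not anything from Lehrer’s article), with made-up amplitudes for a two-outcome quantum measurement: any single outcome is unpredictable, while the ensemble frequencies track the probabilities given by the amplitudes to high precision.

```python
import numpy as np

# Illustrative only: a qubit prepared in the state a|0> + b|1>, measured
# in the computational basis.  The Born rule gives P(0) = |a|^2.
a, b = np.sqrt(0.3), np.sqrt(0.7)   # hypothetical amplitudes
p0 = abs(a) ** 2                    # predicted probability of outcome 0

rng = np.random.default_rng(seed=1)

# A single run: the individual outcome cannot be predicted.
print("one measurement:", rng.choice([0, 1], p=[p0, 1 - p0]))

# An ensemble of identically prepared systems: the outcome frequencies
# are exceptionally predictable.
n = 100_000
outcomes = rng.choice([0, 1], size=n, p=[p0, 1 - p0])
print("observed frequency of 0:", (outcomes == 0).mean())  # ~0.3
print("predicted from the amplitudes:", p0)                # 0.3
```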

But as quantum mechanics has taught us, it is possible to have mathematics without a singular, unifying theory.  Indeed, that is to be expected if we view mathematics itself as the “third branch” of science, the branch whose purpose is to describe the world.  In other words, mathematics – and the rigor to which it conforms – is the language we use to describe the processes we observe in the world.  Since probability and statistics are just branches of mathematics, they simply represent a different way of describing the world.  They are no less rigorous: all mathematics is both rigorous and self-consistent (unlike physics, for example, no branch of mathematics is incompatible with another branch) and, when properly applied, can be extraordinarily accurate.  But, by its very nature, statistics is an ensemble-based branch of mathematics and is thus not accurate when applied to non-ensemble constructs.  Similarly, computational methods are, at their heart, really just applied mathematics in the form of algorithms.  At any rate, the point is that mathematics as a descriptor is not the same thing as either experiment or theory.
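A trivial illustration of that ensemble caveat (again my own, not from the post): the expected value of a fair die roll is 3.5, which describes the ensemble superbly and no individual roll at all.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
rolls = rng.integers(1, 7, size=100_000)   # 100,000 fair die rolls

# The ensemble-level description is extremely accurate ...
print("mean roll:", rolls.mean())          # ~3.5, as predicted

# ... but it does not describe any individual roll: no single roll
# ever yields 3.5.
print("rolls equal to 3.5:", np.sum(rolls == 3.5))   # 0
```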

Theory, then, is that part of science that aims for a consistent explanation by bridging the gap between the mathematics and the experimental results.  This, incidentally, explains precisely why there are so many interpretations of quantum mechanics.  Really, these interpretations are theories that all provide alternative ways to bridge the gap between a set of interconnected mathematical functions and the results of experiment.  The aim of theory is consistency through a rigorous application of the rules of logic.  Unless one defines God as being equivalent to the rules of logic (which is fine, but unprovable in a scientific sense), God is not a valid theory.  In addition, the “God explanation” misses out on consistency in the sense that it does not explain the connections between differing sets of results and their associated models.  It is through theory that we aim to find out how descriptions of the various parts of the world fit together in a self-consistent whole.  That is reductionism.  Yes, science picks things apart as a way to understand them, but science, by its very nature, has always – even in Aristotelian times – tried to then understand how those parts fit back together.  To some extent, that was always the point!  And that’s where Lehrer was wrong.

With that said, some of the problems with science today, particularly as the study of complex systems becomes increasingly important, stem from a misunderstanding of these basic processes.  Many scientists conflate the mathematics (description) with the theory (explanation).  In addition, many scientists do not understand how to properly interpret different types of mathematics, e.g. taking ensemble mathematics as proof of sub-ensemble processes.  Others mistake experiment (measurement) for mathematics (description) in that they assume that the aim of experiment is to describe the world.  But, as I’ve argued above, it’s really not.  Still others (most notably in those sciences that heavily employ statistics and rely on passive observations rather than active experimentation) fail in the opposite direction by not realizing that “experiment” (which in some of these cases would include observation) demands precision and repeatability.  We’ve all heard stories of studies in which “bad data points” were tossed out, only for it to turn out later that those data points were crucial to understanding what was really going on.  Of course, sometimes there really are bad data points, but it is the experimenter’s job to find out why those data points were bad in the first place, not simply to toss them away.  Causal attribution is a part of theory, not experiment, and must be consistent with the mathematics.  As the old adage says, correlation does not equal causation.  Unfortunately, many scientists in fields dominated by statistics still make this mistake.
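A toy sketch of the adage, in case it helps (my own illustration; the variables and numbers are invented): two quantities driven by a hidden common cause correlate strongly, yet intervening on one leaves the other untouched.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 10_000

# A hidden common cause (confounder) drives both observed variables.
confounder = rng.normal(size=n)
x = confounder + 0.3 * rng.normal(size=n)
y = confounder + 0.3 * rng.normal(size=n)

# x and y are strongly correlated ...
print("corr(x, y):", np.corrcoef(x, y)[0, 1])          # roughly 0.9

# ... but setting x by hand, independently of the confounder, has no
# effect on y: correlation without causation.
x_set_by_hand = rng.normal(size=n)
print("corr(x_set, y):", np.corrcoef(x_set_by_hand, y)[0, 1])  # roughly 0
```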

The crux of this discussion comes down to the fact that science is still not well understood, even by those who think otherwise.  Nevertheless, for the most part, science gets it right, largely because properly executed experiments and correctly applied mathematics are hard to argue with and leave little room for debate.  So, while some people may believe the world is entirely subjective and “participatory” (as the late John Wheeler put it), its existence is entirely independent of our interpretation and, in that sense, is objective.  It can still be subjective in that the act of measurement “creates” it, but it is objective in the sense that there are certain rules and patterns that these measurements must follow.  As such, it is never – ever – about “belief.”  And that means that it is not a smorgasbord of things from which one can pick and choose what one wants to “believe.”  You either buy it or you don’t.  And since technology is applied science, you can’t accept the former without the latter (Creationists, please return your smartphones, cars, computers, and any wealth obtained through modern means to the reception desk and don’t let the door hit you on the way out).  That’s not to say that science should be free of criticism – I just criticized it myself.  It’s to say that any criticism needs to be informed and consistent.  Lehrer’s wasn’t.

Have we really found the Higgs?

Posted in Uncategorized on July 4, 2012 by quantummoxie

By now you have probably heard the news that CERN has confirmed the discovery of “a fundamental scalar” particle, i.e. the Higgs.  But as Sean Carroll pointed out on Google+ this morning, nothing in the data suggests it is fundamental.  It could be composite.  To understand why there could be room for some doubt, consider that the Higgs cannot be directly observed: it is electrically neutral, so it leaves no tracks, and it decays far too quickly to ever reach a detector.  The only way to “observe” the Higgs is to observe its decay products.  The Standard Model (SM) tells us what the Higgs will decay into based on its mass, but it does not predict the mass itself.  If the Higgs mass lies somewhere between roughly 115 GeV/c^2 and 180 GeV/c^2, the SM can remain consistent all the way up to the Planck energy.  So we look for decay products, and depending on which products we see, various conservation laws and the rules of the SM tell us the properties of the particle that produced them.
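As a concrete illustration of what “observing the decay products” means (my own addition, not something taken from the CERN announcement), consider the decay of the new particle into two photons, one of the channels used in the search.  With energies expressed in natural units, the invariant mass of the photon pair is

$$ m_{\gamma\gamma} = \sqrt{2\,E_1 E_2\,\left(1 - \cos\theta_{12}\right)}, $$

where $E_1$ and $E_2$ are the measured photon energies and $\theta_{12}$ is the opening angle between them.  A new particle shows up as an excess of events clustered around a particular value of $m_{\gamma\gamma}$, and that value is read off as the mass of the parent particle.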

There are several assumptions being made here.  The first is that the SM is correct, at least to within a certain degree of accuracy.  The second then follows from the first: that any particle found within the specified energy range must be the Higgs because that’s what the SM predicts.  In other words (and perhaps this is a separate assumption), the SM does not, at present, predict any other particles in this energy range with these decay properties so we assume that it must be the Higgs.

There’s nothing inherently wrong with the above assumptions as long as they are made clear, particularly to a media willing to embellish anything in the name of “selling” news.  It should be noted, however, that the SM is far from an unassailable theory.  Now, before I say more on this, I must emphasize an important point: there is a difference between a theory having problems and a theory having limitations.  Newton’s theory has limitations, but for the range of energies for which it is applicable, it is correct.  The SM, on the other hand, has some genuine problems.  In theory, it should apply to all fundamental particles, but one of its central tenets is the CPT theorem, and recent observations have hinted at CPT violation in neutrinos.  This is different from saying we have simply found a limit to the SM.  In this way, the SM is more akin to the de Broglie-Bohm theory that serves as a foil to standard quantum theory – it covers almost, but not quite, all of the phenomena predicted by standard QM.  In the case of the SM, it may be that we have simply stumbled on the alternative theory first, before we found the actual theory.  (Of course, it’s also possible that there simply is no “actual” theory of anything and the best we can do is come up with rough models.)

So, did we find the Higgs?  The folks at CERN certainly think so, or at least the media seem to be playing it that way.  As a foundations guy, I honestly think it doesn’t matter one way or the other.  It’s another particle.  Its existence was predicted by the theory, and so we’ve been operating under the assumption that it is real for decades.  I suppose if it hadn’t been found, that would have been pretty damning evidence against the SM, but you can’t prove a negative, i.e. we could never have ruled out the possibility that not finding it was simply due to our limited experimental abilities.  But finding it still doesn’t solve all the other problems with the SM, including the fact that neutrinos and anti-neutrinos might actually have differing masses (some recent data have suggested this is possible).  Now, that would be weird and would contradict a central tenet of the SM, the CPT theorem.

One final point I should make, however, is that if the SM does turn out to need either revision or replacement, that doesn’t suddenly invalidate all the amazing discoveries it has led to over the years.  It’s been an immensely useful tool, and there’s no denying that it has come close to correctly predicting nearly all the properties of all known fundamental particles.  These properties are measurable and thus independent of theory, to some extent (there’s a big caveat there that I will leave for another time).  So we wouldn’t need to suddenly revoke numerous Nobel Prizes if we found a hole in it.