Problems with science? A belated response to Jonah Lehrer

I’ve been working on this post for a while and it’s turning into more of an essay. In the interim, word comes that Jonah Lehrer has resigned his position at the New Yorker after admitting to fabricating quotes by Bob Dylan. While I do criticize Lehrer in this post, frankly, I’m a little surprised he would resort to such a thing. At any rate, back in January he wrote an intriguing, but flawed, article about the state of science for Wired magazine. Having only recently found time to read it, I offer this rejoinder a bit belatedly. But, though the handful of actual readers of this blog may be the veritable “choir” to whom I preach, I could not let some of his assertions pass unchallenged, particularly since additional misinterpretations of science (and of how people, in general, view science) have cropped up elsewhere in my life. Unfortunately, despite his mischaracterizations, some of his points do ring true in certain fields of science. And that brings me to point number one.

The single most egregious aspect of Lehrer’s article was the way it was “sold” to an unsuspecting reader and the impression it might give to someone without a deeper knowledge of science. The title implied that there was a serious, pervasive problem with science in general. Science is a vast human enterprise involving tens of thousands (maybe even hundreds of thousands) of scientists in dozens of disciplines working in numerous cultures across the globe. While in theory the scientific method is universal, it is true that institutional, cultural, and discipline-specific biases inevitably color scientific results since, at its heart, science is a human enterprise. But, in the end, science studies the universe, and the universe is at least objective enough (quantum mechanics notwithstanding; more on this later) that its secrets reveal themselves in spite of human foibles. In other words, no matter how much humans might want one thing to be true, the universe doesn’t listen to us (e.g. Lysenkoism). Science, though, is an adaptable enterprise by its very nature, and one of its inherent strengths is its ability to learn from its mistakes. Science, as a process and an ideal, is not broken. There may be problems with certain communities of scientists, and, indeed, even with certain methods of science, but that does not invalidate the underlying nature of science.

That brings me to my next two points about Lehrer’s piece. The first is that, despite its title and the way in which it was sold (all of science is broken!), Lehrer focuses almost exclusively on medical and pharmaceutical science. If his intent was to make a broad generalization about all of science, then he should have demonstrated that the problems he highlighted exist across all scientific disciplines or, conversely, that the problem is with the nature of science itself. It appears from his essay that he attempted to use the medical and pharmaceutical sciences as examples in an effort to demonstrate the limits of reductionism, but his reasoning is muddled enough that his purpose is not entirely clear. Ironically, in one example he labels the outcome of a drug trial a “tale of mistaken causation.” But in somehow conflating statistical methods with reductionism, Lehrer’s entire article is itself a tale of mistaken causation and misapplied blame. Simply put, statistics are not reductionist. To better understand why, it is important to examine the origins of modern science.

The principles of modern science were largely developed in 17th-century Europe, and many were codified by the early documents and practices of the Royal Society of London, arguably the world’s oldest scientific organization. In retrospect, one might view reductionism through the lens of Bishop Sprat, who wrote that the Royal Society’s purpose was “not the Artifice of Words, but a bare knowledge of things” and who argued for a “Mathematical plainness.” To put it another way, as Henry Oldenburg, the founder of the Society’s journal Philosophical Transactions, described it, the Society’s members were “remarkably well versed in mathematics and experimental science.” In other words, modern science is founded on a balance of mathematical precision and experimental results. At the time the Royal Society was founded, this physics-centric worldview was applied across the sciences, including medicine. Back then, a physics-centric worldview was thoroughly mechanistic and deterministic (Newtonian). Whether or not that is still true is debatable, but Lehrer argues that

[t]his assumption—that understanding a system’s constituent parts means we also understand the causes within the system—is not limited to the pharmaceutical industry or even to biology. It defines modern science. In general, we believe that the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts. Scientists refer to this process as reductionism. By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients.

I would argue that truly understanding a system in a mechanistic and deterministic way doesn’t merely mean understanding the constituent parts, but also how they fit together. Even in a non-deterministic worldview it is still necessary to understand both the parts of a system and its whole. How do we understand causation if we don’t take such a view? For instance, if we can’t use such a mechanistic method to understand, say, Condition X, how do we understand it? The only alternative is to look at statistical trends in the data. And there’s the rub. If the mechanistic and deterministic view is reductionist per Lehrer’s definition, then the statistical view, which is the only alternative, cannot possibly be! (How can we be sure, by the way, that it is truly the only alternative? That’s another blog post entirely, but suffice it to say for now that it’s all in the mathematics.)

We can reconcile reductionism with both the mechanistic and statistical views if, instead, we equate it with Bishop Sprat’s ideal of science being “a bare knowledge of things.” In contemplating these issues, and in preparing a talk on cosmology for my local library in which I wanted to present a bit on the “core enterprise of science,” I came to a realization. Usually, science is portrayed as having two equally balanced parts: theory and experiment. In recent years, computational methods have been touted as the “third branch” of science, but I think there’s more to it than that, and I think there’s always been more to it than that.

The purpose of experimental science is to lend precision to our observations of the world. As such, it differs from Aristotelian observation in that it demands repeatability. But there are multiple ways to achieve repeatability, as quantum mechanics has shown. There’s repeatability in the usual sense, i.e. the results of an experiment are the same every time, but there’s also another way to interpret repeatability. As I like to tell my students, quantum mechanics is “predictably probabilistic.” That is, while you can’t predict the exact behavior of, say, an individual particle, the ensemble behavior of a collection of such particles, in the form of the attached probability amplitudes, is exceptionally predictable (indeed, I have a feeling that this is how causality arises, but that’s another story).
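To make “predictably probabilistic” a bit more concrete, here is a minimal sketch of my own (not Lehrer’s, and with an arbitrary made-up amplitude) of a two-outcome measurement obeying the Born rule: no individual outcome can be predicted, but the ensemble frequency settles on the squared amplitude.

```python
import random

# Toy two-outcome "quantum" measurement: outcome 1 occurs with
# probability |amplitude|^2 (the Born rule). The amplitude is an
# arbitrary illustrative value, not taken from any real experiment.
amplitude = 0.6
p_one = amplitude ** 2   # = 0.36

def measure():
    """One individually unpredictable measurement: returns 1 or 0."""
    return 1 if random.random() < p_one else 0

# A single run tells you almost nothing...
print("one measurement:", measure())

# ...but the ensemble is "predictably probabilistic": the observed
# frequency converges on p_one as the number of trials grows.
for n in (100, 10_000, 1_000_000):
    freq = sum(measure() for _ in range(n)) / n
    print(f"{n:>9} trials: frequency of outcome 1 = {freq:.4f} (expected {p_one:.2f})")
```

Only the value 0.36 is specific to this invented example; the convergence of the ensemble frequency is the general point.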

But as quantum mechanics has taught us, it is possible to have mathematics without a singular, unifying theory. Indeed, that is to be expected if we view mathematics itself as the “third branch” of science whose purpose is to describe the world. In other words, mathematics, and the rigor to which it conforms, is the language we use to describe the processes we observe in the world. Since probability and statistics are just branches of mathematics, they simply represent a different way of describing the world. They are no less rigorous than any other branch; all mathematics is rigorous and self-consistent (unlike physics, for example, no branch of mathematics is incompatible with another), and, when properly applied, it can be extraordinarily accurate. But, by their very nature, probability and statistics are ensemble-based and thus are not accurate when applied to non-ensemble constructs. Similarly, computational methods are, at their heart, really just applied mathematics in the form of algorithms. At any rate, the point is that mathematics as a descriptor is neither experiment nor theory.
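As a rough illustration of that ensemble point, assuming a made-up population with invented parameters, consider how well the same statistic serves the ensemble versus any single member of it:

```python
import random
import statistics

# A made-up "population" (think heights in cm); the parameters are
# arbitrary numbers chosen purely for illustration.
random.seed(0)
population = [random.gauss(170.0, 10.0) for _ in range(100_000)]

# Ensemble level: a modest sample pins down the population mean very well.
sample = random.sample(population, 1_000)
est_mean = statistics.mean(sample)
true_mean = statistics.mean(population)
print(f"population mean {true_mean:.2f}, sample estimate {est_mean:.2f}")

# Individual level: that same number is a poor predictor of any one member.
typical_error = statistics.mean(abs(x - est_mean) for x in population)
print(f"typical error when applied to a single individual: {typical_error:.2f}")
```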

Theory, then, is that part of science that aims for a consistent explanation by bridging the gap between the mathematics and the experimental results. This, incidentally, explains precisely why there are so many interpretations of quantum mechanics. Really, these interpretations are theories that all provide alternative ways to bridge the gap between a set of interconnected mathematical functions and the results of experiment. The aim of theory is consistency through a rigorous application of the rules of logic. Unless one defines God as being equivalent to the rules of logic (which is fine, but unprovable in a scientific sense), God is not a valid theory. In addition, the “God explanation” misses out on consistency in the sense that it does not explain the connections between differing sets of results and their associated models. It is through theory that we aim to find how descriptions of the various parts of the world fit together into a self-consistent whole. That is reductionism. Yes, science picks things apart as a way to understand them, but science, by its very nature, has always, even in Aristotelian times, tried to then understand how those parts fit together. To some extent, that was always the point! And that’s where Lehrer was wrong.

With that said, some of the problems with science today, particularly as the study of complex systems becomes increasingly important, stem from a misunderstanding of these basic processes. Many scientists conflate the mathematics (description) with the theory (explanation). In addition, many scientists do not understand how to properly interpret different types of mathematics, e.g. taking ensemble mathematics as proof of sub-ensemble processes. Others mistake experiment (measurement) for mathematics (description) in that they assume the aim of experiment is to describe the world. But, as I’ve argued above, it really isn’t. Still others (most notably in those sciences that heavily employ statistics and rely on passive observations rather than active experimentation) fail in the opposite direction by not realizing that “experiment” (which would include observation in some of these cases) demands precision and repeatability. We’ve all heard stories of studies in which “bad data points” are tossed out, only for it to turn out later that those data points were crucial to understanding what was really going on. Of course, sometimes there really are bad data points, but it is the experimenter’s job to find out why those data points were bad in the first place, not simply to toss them away. Causal attribution is a part of theory, not experiment, and must be consistent with the mathematics. As the old adage says, correlation does not equal causation. Unfortunately, many scientists in fields dominated by statistics still make this mistake.
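To put the adage in code, here is a toy sketch of my own (all variable names and numbers invented) in which two quantities never influence one another yet correlate strongly because both track a hidden confounder; attributing a cause is the job of theory, not of the statistic:

```python
import random

# Two quantities, neither causing the other, both driven by a shared
# confounder z. The labels and numbers are invented for illustration only.
random.seed(1)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]          # the hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]         # e.g. "ice cream sales"
y = [zi + random.gauss(0, 0.5) for zi in z]         # e.g. "drownings"

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

# Strong correlation despite zero direct causal link between x and y;
# identifying the confounder is theory's job, not the statistic's.
print(f"correlation(x, y) = {pearson(x, y):.2f}")
```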

The crux of this discussion comes down to the fact that science is still not well understood, even by those who think otherwise. Nevertheless, for the most part, science gets it right, largely because properly executed experiments and correctly applied mathematics are hard to argue with and leave little room for debate. So, while some people may believe the world is entirely subjective and “participatory” (as the late John Wheeler put it), its existence is entirely independent of our interpretation and, in that sense, is objective. It can still be subjective in that the act of measurement “creates” it, but it is objective in the sense that there are certain rules and patterns that these measurements must follow. As such, it is never, ever, about “belief.” And that means that it is not a smorgasbord from which one can pick and choose what one wants to “believe.” You either buy it or you don’t. And since technology is applied science, you can’t accept the former without the latter (Creationists, please return your smartphones, cars, computers, and any wealth obtained through modern means to the reception desk, and don’t let the door hit you on the way out). That’s not to say that science should be free of criticism; I just criticized it myself. It’s to say that any criticism needs to be informed and consistent. Lehrer’s wasn’t.
