The sterilization of science

The following is an op-ed piece that is about to appear in the next issue of The Quantum Times (which is not out yet). Since some of my readers do not read The Times, I am posting it here, as I think it has fairly general relevance.

The greatest professional compliment I have ever received was from a former student who had taken my introductory physics course as part of the requirements for a life sciences degree (she is now a practicing nurse). At the end of my course, which is well-known for being hard, she told me that it had taught her to question everything (her emphasis). Most physicists likely share this penchant for skepticism, at least to a degree. After all, the process of formulating a theory or carrying out an experiment involves constant revision, which naturally entails questioning our own results. As Mike Fortun and Herb Bernstein (yes, that Herb Bernstein) put it, science can be “messy” and the process of doing science is often simply an act of “muddling through.”

That said, two things recently caught my eye that deserve mention. The first was an excellent post by GQI Chair Dave Bacon on his blog The Quantum Pontiff concerning the paper review process. Dave writes,

Science is dynamic. Sometimes this means that science is wrong, sometimes it means that science is messy. Mostly it is very self-correcting, given the current state of knowledge. At any given time the body of science knows a lot, but could be overturned when new evidence comes in. What we produce through all of this, however, at the end of the day, are polished journal articles.

This is more than just an issue of transparency. As someone who has done a fair amount of research in the history of science, I have noticed that one of the things we have lost in the digital age is “rough notes.” For papers more than about thirty years old, notes – from scraps of paper to entire notebooks – can frequently be found in archives and private collections that detail the “messy” process of science. The other thing we have lost, particularly with the advent of e-mail, is the written letter as a record. Some of the best ideas have come out of such letters (I cited several in my PhD thesis), and they often included hand-drawn diagrams, equations that were easier to read, and other items not found in the limiting form of an e-mail.

We also, often individually (i.e. with no real consensus), place limits on the questions we think science can legitimately ask. While this may be necessary, it is, to some degree, arbitrary and can have the effect of quenching legitimate scientific progress. Combined with the issues I raised above, it is also quenching what could be legitimate scientific dialogue.

That brings up the second thing that caught my eye recently. A letter was forwarded to me this spring in which a Nobel Laureate in Physics was disinvited from a conference in Italy due to their apparent interest in the “paranormal.” The letter goes on to say that “it would not be appropriate for someone with such research interests to attend a scientific conference.” While I agree that certain aspects of the paranormal do not belong at a scientific conference, where, precisely, do we draw the line? Would we disinvite the late Georges Lemaître, a student of Eddington and a father of modern cosmology, because he was a Jesuit priest and, as such, took vows that ostensibly implied his belief in transubstantiation, a doctrine he presumably affirmed every time he celebrated Mass? There was never any evidence that the disinvited person would make their paranormal beliefs a centerpiece of conversation. Did Lemaître babble on about Catholic theology at cosmology conferences?

Both of these points raise the question of whether some of the founding papers in our own discipline would get published in a leading journal today. Bohr’s writing, for example, was notoriously philosophical (and, some might say, impenetrable).

The end result is that science, which should rise above such things, is increasingly being shaped by modern society rather than shaping modern society. The “culture wars” are forcing upon science a narrowing of purpose while the digital age is destroying its transparency and making its development appear black and white. More than simply unfortunate, this is dangerous.

Thus, I call on you to question everything, including your strongest beliefs, and be open and transparent about it. Science is beautiful and powerful but it isn’t perfect. We should stop pretending it is.

27 thoughts on “The sterilization of science”

  1. Whilst I agree with the spirit of your comments, I think there are many ways in which things are actually much better today than they were in the past. Firstly, we are increasingly making permanent records of the talks and discussions at conferences and workshops, in the form of online videos, and these will be an invaluable resource for historians, one that isn’t available for earlier periods. Then there is the whole open notebook science movement and things like the Polymath projects, which leave a digital trail that is possibly even more detailed and full of mistakes and corrections than the research notes of previous eras. Admittedly, these methods of research are not yet widespread, but I think they will become more common in the not too distant future.

    Finally, since storage is so cheap these days, many people do not delete their email anymore, so it should be possible for future historians to reconstruct scientific discussions from them. It is also becoming more common to do things like taking photos of blackboards containing the results of discussions, which would previously have disappeared when wiped. Of course, for this to be useful to historians, there need to be policies on what happens to people’s passwords, computers and online accounts when they die. I think this whole area is much too vague at the moment, but in the future I think it will become more common for people to specify this in their will.

    Overall though, I think the internet era has the effect of increasing the trail of information that everyone, including scientists, leaves behind rather than decreasing it. The problem is that we haven’t yet learned to proactively preserve this information or to make it available to others. In many ways, this is the flip-side to the privacy concerns that are so often talked about these days, since it is much easier to preserve data when it is publicly available.

  2. Matt, while some of what you say ought to be true in principle, I think the reality isn’t quite there yet. I know that my mailbox size, for instance, is limited and our IT department only archives e-mails for 4 days. Granted, historians are not likely to be looking for my e-mails, but still. Also, in order to be truly useful to historians of science, e-mail needs to be archived in some meaningful and accessible way.

    You are correct about the open-notebook movement and similar ventures, but it’s still a very small movement.

    And a major aspect of my point was that this ought to be visible, in some way, to the public (perhaps via the press), in the sense that the “messiness” of science needs to be made apparent. Too many people do not understand how science works, and that’s bad for science.

  3. With regard to the “sterilization of science”, sometimes sterilization can come about as a side-effect of a too-sparse mathematical toolset.

    For example, in the present issue of The Quantum Times (vol. 4, no. 4), the two main articles (one by Jan Florjanczyk and one by Mark Wilde) both take pains to emphasize the underpinnings in linear algebra of the quantum mechanics curriculum.

    There is a contrary point-of-view, however, that a too-early over-emphasis on linear algebra can be a pedagogic disaster, not just for learning quantum dynamics, but even for learning classical dynamics.

    Expressions of this point-of-view can be found in Mac Lane’s Mathematics, Form, and Function, in which we find (p. 218):

    —————-

    “Many current texts of elementary linear algebra bury the ideas under a morass of muddled matrix manipulations with no understanding of the concepts.”

    —————-

    Even more acerbic is William Burke’s Div, Grad, and Curl are Dead, in which we find:

    “I am going to include some basic facts on linear algebra, multilinear algebra, affine algebra, and multi-affine algebra. Actually I would rather call these linear geometry, etc., but I follow the historical use here. You may have taken a course on linear algebra. This to repair the omissions of such a course, which now is typically only a course on matrix manipulation.

    The necessity for this has only slowly dawned on me, as the result of email with local mathematicians along the lines of:

    Mathematician: When do you guys (scientists and engineers) treat dual spaces in linear algebra?

    Scientist: We don’t.

    Mathematician: What! How can that be?”

    —————-

    Here the point is that the prevalent pedagogic methods for introducing students to quantum dynamics may work to obstruct the students’ understanding even of classical dynamics.

    A good example is Gilbert Strang’s pedagogic article The Fundamental Theorem of Linear Algebra, which does not even mention the concept of duality; from the Mac Lane/Burke point-of-view this is a regrettable pedagogic misstep.

    For students of dynamical flow on non-Euclidean state-spaces—which nowadays encompasses pretty much all students of engineering, science or mathematics—discussions of linear algebra without reference to tangent spaces have very little professional utility. And this is true whether a student is studying classical mechanics, quantum mechanics, or the (increasingly common) systems in which classical and quantum dynamical flows are hybridized.
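
    (As a concrete illustration of the duality point, here is a toy sketch of my own in Python/numpy, not anything from Burke: a covector’s components transform oppositely to a vector’s under a change of basis, which is exactly what makes their pairing basis-independent.)

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3))        # change-of-basis matrix (assumed invertible)
    v = rng.normal(size=3)             # vector components in the old basis
    w = rng.normal(size=3)             # covector (dual-space) components in the old basis

    v_new = np.linalg.solve(A, v)      # vectors transform contravariantly: v -> A^{-1} v
    w_new = w @ A                      # covectors transform covariantly:   w -> w A

    # the natural pairing <w, v> does not depend on the basis chosen
    print(np.isclose(w @ v, w_new @ v_new))   # prints True

    That invariance, rather than any particular matrix recipe, is the “duality” content that matrix-only courses omit.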

    For physics students especially, it’s well-worth reading iconoclastic textbooks like Div, Grad, and Curl are Dead, just to gain exposure to this geometrically sophisticated view of dynamics. The pedagogy of classical dynamics underwent a geometric revolution during the 1950s–1980s … now it’s (arguably) overdue for the pedagogy of quantum dynamics to embrace this revolution.

  4. John,

    Those are wonderful quotes and I do agree, though I also think that, rather than a sparse toolset, the problem has more to do with the ingrained mentality that math is nothing more than pure computation. In other words, we have a tendency to teach people the “mechanics” of the mathematics, meaning how to compute things, but we don’t teach them the underlying meaning (which, I think, is what you were getting at in your comments).

    That is the beauty of “ultra-simplified” category theory: it gives people (I think – I haven’t tried it in class yet) an intuitive feel for the purpose of functions. The same is true of physics in many ways – people (not me) tend to teach a plug-n-chug methodology and make almost no attempt to generate an understanding of the underlying physical processes. This is why I really like Tom Moore’s Six Ideas That Shaped Physics (which, above and beyond being a mere textbook, offers a philosophy of physics without realizing it) and Dan Schroeder’s Introduction to Thermal Physics (Dan and Tom have collaborated before). In mathematics, I particularly like Steve Awodey’s Category Theory, which is in the same vein as the others.
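
    (To make the “ultra-simplified” remark concrete, here is a minimal Python sketch of my own, not something taken from Awodey, of the point that what a category cares about is just how functions compose: composition must be associative and identities must act as units.)

    def compose(g, f):
        return lambda x: g(f(x))

    identity = lambda x: x

    f = lambda x: x + 1          # three arrows between (implicit) sets of numbers
    g = lambda x: 2 * x
    h = lambda x: x ** 2

    for x in range(5):
        # associativity: h o (g o f) == (h o g) o f
        assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)
        # identity laws: f o id == f == id o f
        assert compose(f, identity)(x) == f(x) == compose(identity, f)(x)

    print("category axioms hold on these samples")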

  5. Ian, thanks for the pointer to Steve Awodey’s Category Theory (which I downloaded).

    As a math book, Awodey’s book looks really great … but from an engineering point-of-view, it was strikingly short of practical examples … in fact, the word “practical” appears nowhere in its 455 pages.

    But maybe this is not surprising, because (also remarkably) the word “practical” does not appear anywhere in Mac Lane’s Categories for the working mathematician, either.

    Now, an opposing usage-gradient is associated with the word “naturality”. This word is ubiquitous in arXiv math preprints (2300 uses), common in physics (403 uses), occasional in computer science (59 uses), and yet it is never used at all in any of the arXiv preprints dealing with biology, finance, or statistics.

    For me, the main utilitarian purpose of category-theoretic techniques is the clean separation of design degrees of freedom from constraints imposed by naturality.

    Perhaps this means that we can expect to see, in coming years, increased usage of “practicality” in pure math and science, counter-balanced by increased usage of “naturality” in applied math and engineering.

    I hope so, anyway! 🙂

  6. Whoops … please pardon my error … the book that didn’t use the word “practical” at all was Jean-Pierre Demailly’s Complex Analytic and Differential Geometry.

    Whereas, the word “practical” appears in Steve Awodey’s Category Theory exactly … one time. 🙂

    Whereas, both authors are hugely fond of the word “natural” (69 and 90 usages, respectively).
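
    (For anyone who wants to reproduce rough counts like these, a minimal Python sketch, assuming you have plain-text versions of the books; the file names below are placeholders.)

    import re
    from collections import Counter

    def word_counts(path, words=("practical", "natural")):
        """Count whole-word occurrences of the given words in a plain-text file."""
        text = open(path, encoding="utf-8", errors="ignore").read().lower()
        counts = Counter(re.findall(r"[a-z]+", text))
        return {w: counts[w] for w in words}

    # placeholder file names for plain-text dumps of the two books
    for path in ("awodey_category_theory.txt", "demailly_cadg.txt"):
        print(path, word_counts(path))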

    In the face of countervailing usage gradients like this, it seems mighty likely that in coming years, “natural” and “practical” will be reconciled.

  7. These are excellent points, John. I agree that Awodey’s book could benefit from some practical examples (though the second edition is coming out soon, so there might be some in it). But I think it’s a more straightforward introduction than others. Unfortunately, textbooks generally suck.

  8. Ian concludes: Unfortunately, textbooks generally suck.

    Well, that’s true. Which makes the exceptions even more important to study!

    I’ve been looking for books that cover high-level “yellow book” mathematics … *and* include plenty of practical examples … *and* are outstandingly successful pedagogically (which includes substantial sales to a broad audience).

    Needless to say, there aren’t many examples. But there are *some*.

    Oddly enough, the single best example I’ve found (so far) is Nathaniel Bowditch’s New American Practical Navigator. This textbook is presently in its 208th year of printing, with more than 50 editions to date, and many millions of copies in-print.

    To appreciate the remarkable mathematical sophistication of Bowditch, it is only necessary to download the (free) 1826 edition of Bowditch and compare Bowditch’s discussion of non-Euclidean geometry (on page 66, under the heading “Middle Latitude Sailing”) with Gauss’ discussion—published one year after Bowditch—in Gauss’ Disquisitiones generales circa superficies curvas, in which Gauss’ celebrated Theorema Egregium first appeared.

    It is evident that (thanks to Bowditch) the common sailors of the early 19th century appreciated the mathematical foundations of what would today be called Gaussian curvature and Riemannian geometry … decades in advance of Gauss and Riemann.
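
    (For readers who have never opened Bowditch: “Middle Latitude Sailing” is, in modern notation, essentially the statement that meridians converge as the cosine of the latitude, so that departure ≈ difference of longitude × cos(middle latitude). A minimal Python paraphrase of my own, not Bowditch’s tables, follows.)

    import math

    def departure_nm(lat1_deg, lat2_deg, dlon_minutes):
        """Middle-latitude sailing: approximate east-west distance (nautical miles)
        between two points, given their difference of longitude in minutes of arc.
        One minute of longitude spans roughly cos(latitude) nautical miles."""
        mid_lat = math.radians((lat1_deg + lat2_deg) / 2.0)
        return dlon_minutes * math.cos(mid_lat)

    # example: two points near 45 N separated by one degree (60') of longitude
    print(round(departure_nm(44.5, 45.5, 60.0), 1))   # about 42.4 nm, not 60 nm

    That cos(latitude) factor is the non-Euclidean ingredient referred to above.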

    So perhaps we should be calling it “Bowditchian” geometry? 🙂

    In any case, the success of Bowditch demonstrates that textbooks can blend the most advanced elements of mathematical naturality, with solutions to urgent practical problems.

    This surely was true in Bowditch’s century and (perhaps) it will be true in our own century too.

  9. John,

    That’s interesting. I’ve never read Bowditch in full, though a bound copy sits on my bookshelf. I looked at that 1826 edition, but I’d be very curious to know if it truly is an 1826 edition. Bowditch has undergone so many uncredited revisions over the years (indeed, it is still regularly revised).

    Anyway, I should have a look at it next time I am in my office because it would indeed be a unique text in that regard. I consider Schroeder’s aforementioned text to be in that same category – profoundly rethinking the foundations of thermodynamics while simultaneously remaining practical (i.e. discussing such mundane topics as heat engines and refrigerators) and being excellent pedagogically.

  10. Ian, it’s impressive (and I’m jealous too!) that you have your own copy of Bowditch, from any year.

    You are right to be wary of the true publication date of any given edition … Google Books has a putative 1802 Bowditch that is in fact an English rip-off.

    However, if you dig into Bowditch’s biography, you will find that he indeed had precisely the appropriate background—in both mathematics and practical navigation—to anticipate the thinking of Gauss and Riemann.

    This topic interests me because it seems that nowadays, in the 21st century, we are well-positioned to repeat the succession Bowditch/Gauss/Riemann/Einstein … etc., but now on the symplectic state-space of quantum dynamics, as contrasted with classical dynamics. This will be a *BIG* adventure for our century (I hope!).

    Nielsen and Chuang is IMHO the nearest to a “Bowditch” that we have presently … but it is unfortunately a “Bowditch” that teaches no non-Euclidean geometry.

  11. Ian, on further digging, it appears that Bowditch’s (exceedingly modern) description of differential geometry appears as early as the fifth edition of August 1821; this is confirmed by the title page, a letter from the publisher, and the fact that the ephemeris tables for that edition of Bowditch are for the decade 1820-1830 (I do not yet know whether editions of Bowditch prior to 1821 were similarly modern).

    That Bowditch’s modern exposition of differential geometry appeared prior to Gauss’ is consistent with three of Saunders Mac Lane’s observations: (1) “Mechanics developed by the treatment of many specific problems.” and (2) “Analysis is full of ingenious changes of coordinates, clever substitutions, and astute manipulations. In some of these cases, one can find a conceptual background. When so, the ideas so revealed help us understand what’s what. We submit that this aim of understanding is a vital aspect of mathematics.” (3) “Effective or tricky formal manipulations are introduced by Mathematicians who doubtless have a guiding idea—but it is easier to state the manipulations than to formulate the idea in words. Just as the same idea can be realized in different forms, so can the same formal success be understood by a variety of ideas. A perspicacious exposition of a piece of Mathematics would let the ideas shine through the display of manipulations.”

    For me, there are three really nice aspects to studying Bowditch’s work. First (and mainly for fun), the early editions of Bowditch are thrilling for the same reason that Patrick O’Brian’s Aubrey/Maturin novels are thrilling, namely, they epitomize the Age of Romance in science, mathematics, and engineering.

    Second, the early editions of Bowditch from 1802-1827 provide a thrillingly concrete illustration of Mac Lane’s ideas regarding the form and function of mathematics.

    Third, when it comes to math, science, and engineering, history often does repeat itself. Just as the early editions of Bowditch, written for humble sailors, prepared the way for the glorious succession of Gauss/Riemann/Einstein/Arnold, we can reasonably regard today’s humble dynamical simulation codes (both classical and quantum), as having already prepared the way for a wonderful 21st century; a 21st century in which we can realistically hope to discover—via deep mathematics and incredibly ingenious experiments—much more than previous centuries have known, about the state-space of nature.

    The chance to participate in this great global adventure—which compares advantageously to the great global adventures of the generation of Bowditch/Cook/Banks/Gauss/Riemann/Darwin, etc.—is (for me) what quantum computation and quantum information theory are all about.

    The preceding views are, no doubt, exceedingly optimistic, yet we note that exceeding optimism was a distinguishing characteristic of Bowditch’s generation too. Subsequent history largely justified the optimism of Bowditch’s generation; perhaps the history of the 21st century will largely justify present-day optimism too.

    We can hope so, anyway! 🙂

  12. John,

    OK, now I may have to make a special trip to my office just to get my hands on my copy of Bowditch!

    At any rate, I have more to comment on, but I’m on my way out the door so shall do so later.

  13. For that subset of humanity (a small subset, perhaps) that enjoys Patrick O’Brian’s novels and takes an interest in the history of the mathematical idea that “mechanics is geometry in phase space” (Arnol’d’s phrase), the earliest version of Bowditch I have found that is complete, digital, and freely downloadable is Google Books’ Second Edition of May, 1807.

    Particularly recommended to Patrick O’Brian fans are the sections “Explanation of sea terms” and “Evolutions at sea” on pages 613-640 of the book (pp. 616-643 of the downloaded PDF file) … how I wish I’d read these tutorials *before* tackling O’Brian’s books!

    Fans of mathematical history will appreciate Bowditch’s explanation of the principles of differential geometry on p. 100 (p. 117 of the PDF); this passage remained unaltered at least from the Second Edition of 1807 through the Fifth Edition of 1827; thus Bowditch anticipates some of the key mathematical themes of Gauss’ Theorema Egregium by at least two decades.

    Given Gauss’ employment as a geodetic surveyor from 1817 to 1848, and the fact that the methods and tables in Bowditch for determining longitude from astronomical observations were widely acknowledged as the most accurate then available, it seems likely that Gauss (and other professional surveyors) were fully cognizant of Bowditch’s geometric ideas.

    Conversely, it seems likely Bowditch was writing not only for common sailors, but also for a mathematically sophisticated audience that included Gauss; certainly the clarity and rigor of Bowditch’s discussion of foundational issues in geometry is hard to account for otherwise.

    If anyone locates an electronic version of Bowditch’s First Edition (but not the pirated short version of 1802 that was edited by Kirby, which leaves out all the good stuff), please post a link to it.

    That’s all … have fun!

  14. As a confection, my colleague (and fellow Patrick O’Brian fan) Joe Garbini and I were laughing about the marked change in nautical terminology between the 1807 Second Edition and the 2002 Bicentennial Edition of Bowditch.

    For example, the entry for “bitter end” has been replaced by an entry for “bit map” …

    Captain: “Arrrgghhh .. flog the crew with bit-maps … served on a buggy 286 machine … running spyware-laden Windows XP … switch them to a low-amperage power-supply … and block YouTube and FaceBook!”

    First Mate: “Yer a hard man, Captain!”

  15. John,

    I am an avid fan of the Aubrey/Maturin series. At present, I have read through the seventh book (The Surgeon’s Mate). I never thought to have Bowditch handy for that, but now you’ve given me an idea!

    I’m trying to see if there’s a MacTutor entry for Bowditch, but the Math department servers seem to be down. If there isn’t, you should write one. Someone ought to write a paper about Bowditch and non-Euclidean geometry too.

  16. Whoops … without a preview feature, it sure is easy to botch the HTML formatting … here’s a version that’s (hopefully) fixed …
    ———————–

    Ian, the MacTutor suggestion is very good! Although the site is down, enough is visible in the Google cache to show that Bowditch *does* have an entry … hopefully the MacTutor server will be fixed in the coming week.

    Over on Gödel’s Lost Letter, I posted some bibliographic links to early 21st century notions of geometric mechanics.

    These relate to a long-term personal project—undertaken solely out of personal interest—that focuses on the past-and-future evolution of the concept of “naturality” in dynamical analysis and simulation.

    This project partakes in roughly equal measure of science, history, engineering, and mathematics (SHEM? 🙂 ); it’s lots of fun to read from these broad-spectrum references.

    By no means am I the only person who is taking an interest in these ideas: highly recommended is Jeremy Butterfield’s 2007 essay “On symplectic reduction in mechanics”, which is the erudite lead chapter in an excellent 2007 book Philosophy of Physics. Also highly recommended is Jonathan Israel’s (ongoing) multi-volume history of the Enlightenment, which provides an in-depth historical context for these ideas.

    The 21st century evolution of these ideas will (it seems to me) largely recapitulate the 19-20th century evolution … with a few innovations, of course.

    One innovation (which is well underway already) will be to unify our understanding of classical and quantum dynamics under the aegis of a broadened notion of “naturality”.

    One starting point for this classical/quantum unification is the recognition that symplectic reduction can be accomplished by multiple mechanisms; in the 20th century the main focus was on reduction-by-symmetry, and in the 21st century the focus will broaden to encompass reduction-by-entropy … in both cases the key idea is that dynamics becomes simpler (whether classical or quantum) whenever the available information is restricted.

    In the 20th century everyone came to appreciate that dynamical symmetry leads to great mathematics; now in the 21st century quantum information theory is teaching us that dynamical entropy leads to great mathematics too.

    From a 21st century engineering point-of-view, this is very good news, for the pragmatic reason that high-entropy dynamical systems are even more ubiquitous in everyday life than high-symmetry dynamical systems.

    Heck, we humans are hot, inhomogeneous, asymmetric dynamical systems … heck, even on cosmological scales the universe looks kinda hot, inhomogeneous, and asymmetric … so (heuristically) there *ought* be some great mathematics lurking around … and this proves to be true.

    These investigations are grounds for considerable optimism, that the 21st century is going to be a Quantum Age of Wonder, for much the same mathematical and physical reasons that the early 18th-19th centuries were a Classical Age of Wonder.

    And hopefully, our 21st century *will* be a Quantum Age of Wonder … because an increasingly hot, crowded, resource-poor planet … with ten billion people on it … needs all the wonder that it can muster.

    That’s my 2¢, anyway! 🙂

  17. I have heard an authority on the matter compare a proof p = np to finding God to be either right or left handed. To me this seems a little naive.

    I openly welcome your advice and a rigorous reproof:

    Sincerely,

    Professor X

    root@bt:~# help prove p = np true
    pushd: pushd [ dir | +N | -N ] [-n]
    Adds a directory to the top of the directory stack, or rotates the stack, making the new top of the stack the current working directory. With no arguments, exchanges the top two directories.

    +N Rotates the stack so the Nth directory (counting from the left of the list shown by ‘dirs’, starting with zero) is at the top.

    -N Rotates the stack so the Nth directory (counting from the right of the list shown by ‘dirs’, starting with zero) is at the top.

    -n suppress the normal change of directory when adding directories to the stack, so only the stack is manipulated.
    dir adds DIR to the directory stack at the top, making it the new current working directory.

    You can see the directory stack with the ‘dirs’ command.
    pwd: pwd [-LP]
    Print the current working directory. With the -P option, pwd prints the physical directory, without any symbolic links; the -L option makes pwd follow symbolic links.
    true: true
    Returns a successful result

    The portion below is from a student:

    Rodriguez-Ariz, Xochitl
    Period 4

    The math behind Game Theory

    Game Theory is a wide assortment of topics and themes having to do with daily life, business or the mechanics behind games like poker and checkers. Besides the game there is a mathematical side, such as with Misère play rules and P-position and N-position, which both stand for turn of players. P-position stands for previous player position and N-position for next player position. Both are not difficult to understand but become difficult to use and explain for certain games. Certain games like Nim Sum use difficult to understand rules. A certain rule that goes hand in hand with this game is Fibonacci Nim. Similar to regular Nim, the only change is the maximum that can be taken is double the amount the first player took. For example for 1 token taken in the game at your turn, you can take either 1 or 2; however if your opponent decides to take 5, you in turn may take 1 to 10. The sequence of numbers that can be taken is defined as F(1) = 1, F(2) = 2, and F(n+1) = F(n) + F(n−1) for n ≥ 2. So it can then be written as 1, 2, 3, 5, 8, 13, 21, 34, 55 and so on. Although there is a limited side of mathematics to Game Theory, it takes place as the foundation and explanation for many of the aspects of each game played.

    A, Xochitl please keep up the good work.
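
    (Aside, for anyone puzzled by the Fibonacci Nim rule quoted above: a minimal Python sketch of the sequence and the take-away limit, written only to illustrate the rule as stated.)

    def fibonacci_nim_sequence(n_terms):
        """F(1)=1, F(2)=2, F(n+1)=F(n)+F(n-1): 1, 2, 3, 5, 8, 13, 21, 34, 55, ..."""
        seq = [1, 2]
        while len(seq) < n_terms:
            seq.append(seq[-1] + seq[-2])
        return seq[:n_terms]

    def max_take(opponent_last_take):
        """A player may remove at most twice what the opponent just removed."""
        return 2 * opponent_last_take

    print(fibonacci_nim_sequence(9))   # [1, 2, 3, 5, 8, 13, 21, 34, 55]
    print(max_take(1), max_take(5))    # 2 10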

  18. John,

    > From a 21st century engineering point-of-view, this is very good news,
    > for the pragmatic reason that high-entropy dynamical systems are even
    > more ubiquitous in everyday life than high-symmetry dynamical
    > systems.

    Ah, see, NOW you’re talking. I am utterly fascinated by entropy. I have long sought to prove that the Cerf-Adami inequalities are merely a statement of the second law of thermodynamics. Actually, I thought I had proved this, but journal editors either thought otherwise or thought the result was uninteresting. Here are several pre-prints I’ve written on the topic:

    The non-conditional nature of the Cerf-Adami inequalities and implications for thermodynamics

    Are entropic Bell inequalities implied by the second law?

    A derivation of Bell’s inequalities in Wigner form from the fundamental assumption of statistical mechanics

    Limitations on entropic Bell inequalities
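
    (Background, for readers who haven’t met these quantities: entropic Bell-type inequalities are phrased in terms of conditional entropies, and the distinctly quantum feature is that the conditional von Neumann entropy S(A|B) = S(AB) − S(B) can be negative for entangled states, whereas classical conditional entropy never is. A minimal numerical sketch of my own, not taken from the preprints, for a maximally entangled pair:)

    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -Tr[rho log2 rho], in bits."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    # maximally entangled two-qubit (Bell) state (|00> + |11>)/sqrt(2)
    psi = np.zeros(4)
    psi[0] = psi[3] = 1 / np.sqrt(2)
    rho_ab = np.outer(psi, psi)

    # reduced state of B: partial trace over A
    rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

    S_ab = von_neumann_entropy(rho_ab)   # 0 bits (pure joint state)
    S_b = von_neumann_entropy(rho_b)     # 1 bit (maximally mixed marginal)
    print(S_ab - S_b)                    # S(A|B) = -1.0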

  19. Ian, those preprints contain some very thought-provoking ideas from a quantum systems engineering point-of-view. The following idea, in particular, might help overcome some of the referee objections. The idea is to extend your proof strategy to encompass some brand-new theorems.

    Let’s start with (what I take to be) the key idea of your preprints, namely, that classical and Hilbert state-spaces both respect the first and second laws of thermodynamics, and both are informatically causal with respect to measurement processes.

    Now we ask, do tensor network manifolds have these same two attributes, namely, that thermodynamics and causality hold? In particular, can we prove these two attributes by induction on the tensor network rank r, first for r=1, and then by induction for all tensor network manifolds of arbitrary rank, up-to-and-including Hilbert space?

    A lot of the needed ingredients are present: the dynamical flow is symplectic (second law), the Hamiltonian potential is conserved (first law), and Lindblad processes—when pulled back onto tensor network state-spaces—respect informatic causality at least locally (this is Theorem 1 of our QSE group’s NJP article Practical Recipes …).
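
    (A toy classical illustration, not the NJP recipes themselves, of the two ingredients just named: for a harmonic oscillator, one leapfrog step is a linear map whose Jacobian determinant is exactly 1, the volume-preserving/symplectic property invoked above, while the energy it transports stays bounded near the true value, the conservation ingredient.)

    import numpy as np

    omega, dt = 1.3, 0.1                      # oscillator frequency and time step

    # one velocity-Verlet (leapfrog) step for H = (p**2 + omega**2 * q**2) / 2,
    # written as a linear map (q, p) -> M @ (q, p)
    M = np.array([
        [1 - (omega * dt) ** 2 / 2,                       dt],
        [-omega**2 * dt * (1 - (omega * dt) ** 2 / 4),    1 - (omega * dt) ** 2 / 2],
    ])

    print(np.linalg.det(M))                   # 1.0: the step preserves phase-space volume

    z = np.array([1.0, 0.0])                  # initial (q, p)
    energies = []
    for _ in range(10000):
        z = M @ z
        energies.append(0.5 * (z[1] ** 2 + omega ** 2 * z[0] ** 2))
    print(max(energies) - min(energies))      # small and bounded: energy does not drift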

    Assuming the preceding program goes through, the (mathematically new!) result would be something like a Cerf-Adami inequality, defined for a suitably generalized definition of entropy, applying to families of trajectories on tensor network state-spaces.

    Such a result would bear directly on some of the quantum-related challenges that Scott Aaronson has been drawing attention to, in particular, the recognition that the 20th century challenge “build a working quantum computer” naturally generalizes to the 21st century challenge “build any physical oracle device whose capabilities extend beyond P”.

    Here the point is that the oracle challenge is almost certainly technically easier than building a working quantum computer, and yet this broader challenge is similarly fundamental (arguably) in what it tells us about the state-space of Nature.

    This program is easy to explain to the public too: “In the 20th century scientists measured the shape of classical space-time; now in the 21st century we are measuring the shape of quantum space-time, and these new measurements tell us a lot of what we need to learn about quantum dynamics, to help solve urgent practical problems here on earth.”

  20. Ian, just to provide a bit more background—and to get the morning off to an optimistic start!—we can begin with one of Roger Penrose’s historical ruminations: “The fact that Euclidean geometry seems so accurately to reflect the structure of the ‘space’ of our world has fooled us (or our ancestors!) into thinking that this geometry is a logical necessity, or into thinking we have an innate a priori grasp that Euclidean geometry must apply to the world in which we live (even the great Immanuel Kant claimed this). … Far from Euclidean geometry being a logical necessity, it is an empirical observational fact that this geometry applies so accurately—though not quite exactly—to the structure of our physical space.”

    Abhay Ashtekar and Troy Schilling (arXiv:gr-qc/9706069) were among the first to realize that the above considerations apply to Hilbert state-space too: “The linear structure which is at the forefront in text-book treatments of quantum mechanics is, primarily, only a technical convenience and the essential ingredients—the manifold of states, the symplectic structure and the Riemannian metric—do not share this linearity.”

    When we reflect that quantum state-space curvature is dimensionless, and we ask “What laws of nature can be described in terms of dimensionless ratios?”, the answers (in descending order of obviousness) include uncertainty relations, signal-to-noise ratios, Berry-type holonomic phases, and (perhaps?) gauge theory coupling constants … we thus have a panoply of links between quantum state-space geometry and quantum physics.

    Finally (and most excitingly IMHO) we have the radical repurposing of quantum information science, a repurposing that (IMHO) is implicit in the recent work of Aaronson and Arkhipov.

    The Aaronson-Arkhipov repurposing (as I understand it) can be summed up as “The grand experimental challenge of quantum information science is to either demonstrate an oracle capability that cannot be simulated in P, or alternatively, to observationally confirm that the geometric dynamics of quantum state-space obstructs such a demonstration.”

    A really wonderful aspect of the Ashtekar-Schilling/Aaronson-Arkhipov program (as I understand it, anyway) is that it broadens and deepens the whole program of quantum information science. In particular, there is the 21st century opportunity to revisit vast (and intimately inter-related) tracts of thermodynamics, information theory, communication theory, and complexity theory, on state-spaces that naturally interpolate between classical geometry and Hilbert geometry (as discussed above).

    And there is more good news: the great twentieth century dynamicists like Arnold, Mac Lane, and Berezin have provided us with all the mathematical tools we need to pursue the preceding ideas with rigor and vigor … the best of all possible mathematical worlds. And these mathematical tools are proving to be immensely useful to practical quantum system engineering … no matter what the state-space of nature turns out to be.

    The 19th and 20th centuries were wonderfully exciting for mathematicians, scientists and engineers … and the above are ample reasons to foresee that our 21st century will be even more wonderfully exciting, as we hybridize and unify the understanding of classical and quantum dynamics and informatics, that we achieved during the 19th and 20th centuries.

    These ideas, as they come to fruition, are beginning to open new career opportunities for young mathematicians, scientists, and engineers … and these opportunities are IMHO the best news of all.

    Now, Ian, how’s THAT for an optimistic start-the-morning essay? 🙂

  21. John,

    Wow, there’s a lot to digest there. First, regarding my pre-prints: if you honestly think there is anything salvageable in them, I would appreciate any assistance in (yet again) rewriting the ideas and submitting them to a journal. As has been well documented here, people have helped over and over to no avail. Some of the versions are so distinct from one another as to seem like different papers, thanks to the often conflicting input I have gotten. Terry Rudolph actually said,

    The Cerf-Adami extended result isn’t even alluded to in your paper until looong after most people will have stopped reading. If you simply make this the central result of the paper you can get it published easily (assuming its correct!).

    In all the crap reviews I got, no one ever questioned this. And yet, a year later, after he had helped me extensively rewrite the paper, Terry said he didn’t understand what my point was (even though he had actually stated it in the above quote). I have been working on this for four years – and am pretty sure I was the first person to suggest a link between Bell-type inequalities/entanglement and the second law of thermodynamics – yet Brandao was able to publish a paper not long after I first proposed the idea while I can’t get anywhere. Later, Brandao and Plenio published an idea that was creepily close to mine, though with different mathematics, and Plenio had been an editorial reviewer on my paper. Hmmm…

    Anyway, you’ve touched on a sore subject with me. Sorry about that.

    In any case, I’ll have to digest some of your other points before commenting further but thanks for the optimistic outlook!

  22. Yeah, peer review is a sore subject for pretty much every scientist, mathematician, and engineer. And yet we all acknowledge that peer review works pretty well overall. It was Winston Churchill who famously said of democracy that it was “the worst form of government except all those other forms that have been tried”, and so it is for peer review too.

    What I look for in an article is an early, explicit statement along the lines of: “We present the following new (1) theorem, or (2) physical law, or (3) efficient algorithm, or (4) simplified derivation, or (5) historical interpretation, or (6) philosophical interpretation”.

    These six elements are listed in descending order of likelihood of their passing smoothly through peer review … ’cuz heck … the tail-end historical/philosophical ideas are not what science is mainly about.

    There *is* a well-established path for including historical/philosophical ideas in the scientific literature, but it’s not an easy path. Namely, apply historical/philosophical heuristics to derive new theorems, physical laws, algorithms, etc., and then mention the guiding historical/philosophical ideas in the discussion.

    That’s the method that Spin microscopy’s heritage, achievements, and prospects used to discuss von Neumann’s work on microscopy. That article’s explicit calculations of Shannon channel capacity, and the design links to IBM’s spin microscope experiments, are what justified the historical discussion of von Neumann’s work.

    Thus, if you have a new method of deriving Cerf-Adami inequalities, then a compelling publication strategy is to apply that method to extend the set of dynamical state-spaces on which those inequalities provably apply. Of course, such extensions are soberingly tough to carry through explicitly … and that is why reviewers respect them.

  23. Well, like I said, four different people – all respected in the field – gave me four different versions of what an ideal article should contain and what my article, specifically, should say.

    My original point was that the Cerf-Adami inequalities struck me as a statement of the second law of thermodynamics. But then I discovered that no one can seem to agree on a formal statement of the second law!

  24. You know, I wonder whether, if I made this paper more pedagogical, I could get it into the American Journal of Physics. I never thought of that before, but it might be an interesting route to take. AJP articles actually get cited fairly regularly.
