Is Precision Overrated?


A historian of science strikes a Swiftian pose in The New York Times: statistically scrupulous scientists need to get their heads out of the clouds on climate change. But maybe that’s—precisely—where they need to be.


“He first took my Altitude by a Quadrant, and then with Rule and Compasses, described the Dimensions and Outlines of my whole Body, all which he entered upon Paper, and in Six Days brought my Clothes very ill made, and quite out of shape, by happening to mistake a figure in the Calculation. But my Comfort was, that I observed such Accidents very frequent and little regarded” — A Voyage to Laputa, Lemuel Gulliver, Gulliver’s Travels.


Thirty years after its English publication, Jonathan Swift’s satiric masterpiece appeared in French translation in 1757, a signal year in the history of astronomical calculation. In June, at the Palais Luxembourg, Alexis Claude Clairaut, Joseph Lalande and Nicole-Reine Lepaute began an immense task of computation that would, if successful, predict the return of Edmond Halley’s Comet and solve a problem essential to the validation of Isaac Newton’s theory of gravity. That is, if gravity impels bodies to move in particular paths, how was the path of Halley’s Comet affected by the movement and gravitational forces of the Sun, Saturn, and Jupiter?


For a critique of Oreskes’s understanding of statistical significance and confidence levels, click here.

As David Alan Grier notes in his wonderful history “When Computers Were Human,” the problem was computational, not conceptual, and its solution had eluded Newton and Halley. Think of the sun at the center of a clock with Jupiter and Saturn at the ends of the hour and minute hands; the comet is at the end of a second hand traveling in an ellipse in the opposite direction. Each degree of movement would require an adjustment in orbit due to the varying gravitational effects. To figure this out, Clairaut, Lalande, and Lepaute would have to break the path of the comet into steps small enough that these fluctuations could be measured. Their undertaking is memorable not just for being collaborative (a harbinger of an age when calculating problems would exceed the limits of individual capability), but for the brief recognition that a woman could be as gifted at math as any man.

To Swift, the astronomical obsession with measuring comets was both modish and an invitation to mockery. While scholars such as Colin Kiernan have noted that Swift was actually adept enough at astronomical calculation to use Kepler’s third law to calculate the orbital times of Phobos and Deimos, Mars’ two known moons, he thought the entire practice—along with Mars and its satellites—devoid of meaning for humanity, other than as a distraction from moral engagement with the world. Laputa, thus, is a levitating island where, literally, everyone’s heads are in the clouds. It is filled with men who are physically deformed from, and socially debilitated by, their devotion to both the microscope and the telescope; and the jab at Gulliver’s tailor “mistaking a figure in the Calculation” sounds to us like a cutting reference to a real computational error made by Newton—one that would be solved in 1749, 22 years after his death, by Clairaut.

The calculations to predict when Halley’s Comet would return continued day after day, from late morning to evening, until late September. Finally, on the 14th of November, Clairaut tentatively proposed April 15, 1759 as the date when the comet would reach its closest point to the sun. “You can see,” he said, “the caution with which I make such an announcement, since so many small quantities that must be neglected in methods of approximation can change the time by a month.”

The comet beat their projection by 33 days.

Yet, as Grier points out, they had achieved a remarkable success, improving on Halley’s own predicted margin of return—600 days—in the face of two spectacular “unknown unknowns,” each exerting gravitational force on the comet’s path: Uranus (only discovered in 1781) and Neptune (discovered in 1846).

In the course of the late 18th and early 19th centuries, the mathematician Pierre-Simon Laplace “opened the door to the statistical revolution,” as David Salsburg put it, by arguing that small errors in astronomical observation would have a probability distribution. Yet as errors were corrected, more precise measurement would uncover more variation and uncertainty, thereby requiring more sophisticated statistical description, until—at the microscopic end of things, so to speak—we are left with the probability distributions of quantum particles. A confident belief in the uniformity and regularity of the universe—a forgivable premise in the age of Newton—gave way to a complex understanding of the uncertainty in our own.

But are we too in thrall to uncertainty? Have we built a new Laputa out of science? Naomi Oreskes seems to think so and that it augurs our destruction:

“The 95 percent confidence limit reflects a long tradition in the history of science that valorizes skepticism as an antidote to religious faith. Even as scientists consciously rejected religion as a basis of natural knowledge, they held on to certain cultural presumptions about what kind of person had access to reliable knowledge. One of these presumptions involved the value of ascetic practices. Nowadays scientists do not live monastic lives, but they do practice a form of self-denial, denying themselves the right to believe anything that has not passed very high intellectual hurdles.”

“Climate policy on mitigation and decision-making on adaptation provide a rich field of evidence on the use and abuse of science and scientific language. We have a deep ignorance of what detailed weather the future will hold, even as we have a strong scientific basis for the belief that anthropogenic gases will warm the surface of the planet significantly. It seems rational to hold the probability that this is the case far in excess of the ‘1 in 200’ threshold which the financial sector is required to consider by law (regulation). Yet there is also an anti-science lobby which uses very scientific sounding words and graphs to bash well-meaning science and state-of-the-art modelling. If the response to this onslaught is to ‘circle the wagons’ and lower the profile of discussion of scientific error in the current science, one places the very foundation of science-based policy at risk. Failing to highlight the shortcomings of the current science will not only lead to poor decision-making, but is likely to generate a new generation of insightful academic sceptics, rightly sceptical of oversell, of any over-interpretation of statistical evidence, and of any unjustified faith in the relevance of model-based probabilities. Statisticians and physical scientists outside climate science (even those who specialise in processes central to weather and thus climate modelling) might become scientifically sceptical, sometimes wrongly, of the basic climate science in the face of unchecked oversell of model simulations. This mistrust will lead to a low assessment by these actors of the reliability3 (public reliability) of findings from such simulations, even where the reliability1 and reliability2 are relatively high (e.g. with respect to the attribution of climate change to significant increases in atmospheric CO2)” — Variations on Reliability: Connecting Climate Predictions to Climate Policy, by Leonard A. Smith and Arthur C. Petersen.

It may be unfair, given the necessity for journalistic simplicity in the New York Times, to criticize Oreskes for historical over-simplification; and yet, it is difficult to see how applied mathematics in the 18th century, or statistics in the 19th and 20th, was driven towards the “asceticism” of 95 percent confidence levels through skepticism about religion. Of course there were disputes over how to reconcile reason with faith; but to the mathematicians and statisticians who shaped our understanding of the world, these disputes—if disputes at all (Newton had few problems reconciling reason and nature into a rational form of religion)—were of a second order to the discourse of problem setting and solving, the play of mind and instrumentation.

As the historian Jan Golinski notes, precision measurement was “seen as the favored route by which other sciences sought to emulate the certainty and predictive power represented by Newton’s Principia.” And the value of measuring and predicting had the confirmation of practical effect, whether in determining the location of a ship at sea or the size of France’s population. As many scholars have argued, the “quantifying spirit” was animated less by the search for mathematical laws unifying and regulating the universe than by a quotidian need for reliable data in order to unify and regulate the world. The statistician that Oreskes singles out for the 95 percent confidence level, Ronald Fisher, did some of his key work at the Rothamsted Agricultural Experimental Station north of London in the early part of the 20th century looking at, among other things, the influence of rainfall on wheat yields.
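For readers unfamiliar with what a claim of “95 percent confidence” amounts to in practice, here is a minimal sketch in Python. The numbers, the seed, and the variable names are illustrative, not drawn from Fisher’s work; the interval uses a normal approximation (z = 1.96), where a small-sample analysis in Fisher’s tradition would use the t distribution. The idea is that, across many repeated experiments, intervals constructed this way would contain the true value about 95 percent of the time.

```python
import random
import statistics

random.seed(42)

# Hypothetical example: 30 noisy measurements of a quantity
# whose true value is 10.0, with measurement error sd = 0.5.
true_value = 10.0
measurements = [true_value + random.gauss(0, 0.5) for _ in range(30)]

mean = statistics.mean(measurements)
# Standard error of the mean: sample sd divided by sqrt(n).
sem = statistics.stdev(measurements) / len(measurements) ** 0.5

# 95% confidence interval under a normal approximation (z = 1.96).
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"estimate: {mean:.3f}, 95% CI: [{lower:.3f}, {upper:.3f}]")
```

The interval narrows as measurements accumulate or as their scatter shrinks, which is why more precise measurement and higher confidence thresholds go hand in hand in the essay’s account.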

In other words, there is a distinct temporal problem with Oreskes’s account of why we defer to supposedly “very high intellectual hurdles”: by the time skepticism about religion became an issue, the value of precision in measurement had already transformed the practice of science and the workings of the administrative state; measurement had its own discourse of accuracy and inaccuracy, success and failure.

As of now, that discourse has focused considerable attention on the volume of error in scientific research, due to inadequate measurement or inadequate statistical understanding, or both. But it seems odd to suggest that scientists have, as a consequence, become neo-Laputans, incapable of seeing the world’s problems due to a pointless obsession with controlling for error and randomness; in fact, one could argue the opposite: we see more clearly now than ever what is uncertain or unknown.

Epistemic humility, it would appear, strikes Oreskes as foolish in the face of climate change; and one can imagine Swift agreeing, thinking it absurd to focus on the problem of modeling, say, cloud effects below 60 square miles, when the general risk, as defined, presents an existential threat that requires no further scientific rationalization for action. That might be true. But it also might be true, as with Clairaut and his team sweating the impact of small computational errors on the path of Halley’s Comet, that small errors in climate modeling could have big effects, in either direction—as could unknown unknowns.

This is the challenge for politicians. But it would seem to be a challenge that is better clarified by those who understand statistics and the importance of uncertainty (there’s a fascinating example here) than those who don’t—or worse, those like Oreskes who are simultaneously imprecise as they attack the value of precision. Hiding uncertainty from policy makers and the public can only undermine science; one might argue that with climate change, it already has.


Trevor Butterworth is Editor of and Director of Sense About Science USA. He is a visiting fellow at Cornell University. He has written for The Financial Times, Wall Street Journal, Washington Post, and Harvard Business Review, among other publications. He has a BA (Hons) and M.Phil from Trinity College Dublin, and an MS from Columbia’s Graduate School of Journalism, where he won the Sevellon Brown Award for outstanding knowledge of the history of the American press.

