I recently had the good fortune to see a freshly minted Australian Nobel prizewinner speak. Brian Schmidt from the Australian National University arrived in all his Swedish pomp and glory to give his first post-Prize public talk at Monash about his role in figuring out that the expansion of the universe is still accelerating like a crazy man.
He's a well-regarded speaker and it was greatly entertaining to see him talk, with his zesty American accent and his generous appetite for wine and other good things of life. Mulling over his talk, I got to thinking about how one goes about doing big science, like measuring the speed of the entire friggin universe.
I. The Richard Hamming approach to science
If you've never read Richard Hamming's essay on research, "You and Your Research", do yourself a favor and read it now, for you will never find such a collection of earthy advice for doing top-notch research.
There are many gems in that essay, but one that struck me was the observation that the guys who had serial spectacular success in research were the ones who were always thinking about the big problems in their field. An important distinction, however, was that they were not actively working on those problems, because at any given time the available tools were typically just not good enough to solve them. This makes sense, as otherwise the problems would probably have been solved already. These guys were smart enough to know where the walls of innovation were, but not dumb enough to bash their foreheads continually against the bricks of failure.
Nevertheless, the reason that these guys ultimately succeeded was that they also shared another trait, that of openness – these are guys who loved to meet other researchers and find out about new methods, no matter how crazy. Actually, especially the crazy methods. These were the guys who were able to see before anyone else how some crazy new method from an overlooked discipline could be exploited to solve one of the great problems of their own field. They then had the strength of will to drop everything else and go for it.
During Brian Schmidt's talk, an actually decent question was asked at question time: "what was the key idea or moment that ultimately led to the measurement of the accelerating universe?" Schmidt's answer was instructive: "...it was the realization that type Ia supernovae could be used to measure the universe, and that we had the technology to look for it."
This was exactly how great science is done according to Richard Hamming. Earlier in the talk Brian Schmidt admitted that the measurement of the universe was something he had been fascinated by since he was a young lad. But desire and great intellect were not enough. Schmidt pinpointed the source of his great work as the moment of recognition that a new method could be used to solve a great problem in his field. Schmidt was uniquely gifted in recognizing that a new way had opened up at the intersection of a new theory of supernovae and the development of new machines – large-array telescopes and improved scientific workstations.
He didn't come up with either the theory or the tools, but he made a shotgun wedding of the two. And that is why he won the prize.
II. Rare measurements
In undergrad physics they teach you this rather quaint model of experimental measurement where you're supposed to repeat the measurement a kadjillion times and then calculate the mean and standard deviation from a statistical grind of your table of numbers. Looking back over a career in science, I now find this a rather soulless way of looking at measurements. It replaces a deep understanding of the instrument and the theory behind the measurement with a blind and inelegant method of brute-force statistics to determine the error.
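The Physics 101 recipe is trivial to write down, which is part of what makes it feel so soulless. A minimal sketch, using made-up pendulum-timing numbers purely for illustration:

```python
import statistics

# The undergrad recipe: repeat the measurement N times, then grind out
# the mean and standard deviation. (Illustrative readings only --
# say, repeated timings of a pendulum period, in seconds.)
readings = [2.04, 1.98, 2.01, 2.03, 1.97, 2.02, 1.99, 2.00]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)        # sample standard deviation
sem = stdev / len(readings) ** 0.5        # standard error of the mean

print(f"mean = {mean:.3f} s, stdev = {stdev:.3f} s, sem = {sem:.3f} s")
```

Note that nothing in this grind requires you to understand the pendulum, the stopwatch, or the physics at all – which is precisely the complaint.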
Indeed, I believe there is a serious issue in that a standard physics education fails to teach statistics properly, but that is a topic for another time.
The reality is that in actual science, and in particular cutting edge science like the measurement of the acceleration of the universe, what you measure is much more ephemeral than anything you will ever encounter in Physics 101. Physics labs are toy experiments that are deliberately designed to make obviously repeatable measurements. In actual research, if you can't get someone to accept the validity of one single measurement, making a thousand of them won't make them change their mind. In some great experiments of the past, only one measurement was ever made.
A case in point is the set of measurements that got Brian Schmidt the Nobel Prize. What he was trying to measure, in the 90s, were the properties of a truly ephemeral event – the explosion of a white dwarf star in a binary system, the so-called type Ia supernova. Because the theoretical model puts very stringent bounds on the masses of white dwarf stars, the maximum luminosity of the explosion has very uniform characteristics. Thus the distance and speed of these objects could be gauged by comparing the observed explosion to the theoretical explosion as it would appear at rest.
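The standard-candle arithmetic behind that comparison is old and simple: if you know the intrinsic brightness of the explosion, the observed brightness tells you the distance. A sketch of the distance-modulus relation, assuming the commonly quoted peak absolute magnitude of roughly -19.3 for a type Ia, with a made-up apparent magnitude for illustration:

```python
# Distance modulus: m - M = 5 * log10(d / 10 pc),
# so d = 10 ** ((m - M + 5) / 5) parsecs.
M_PEAK = -19.3   # typical peak absolute magnitude of a type Ia supernova

def distance_parsecs(apparent_mag: float, absolute_mag: float = M_PEAK) -> float:
    """Distance implied by a standard candle of known absolute magnitude."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A supernova observed at apparent magnitude 24 (illustrative value):
d_pc = distance_parsecs(24.0)
print(f"distance = {d_pc / 1e9:.2f} Gpc")
```

The comparison with the redshift-implied distance of the host galaxy is then what reveals whether the expansion is speeding up or slowing down.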
Schmidt and his large international group of rag-bag astronomers rescanned the entire sky with the large-array telescopes to look specifically for these binary star systems at their moment of death. That is, not only did they re-scan the entire sky, but they had to do this over several intervals to capture the transition from binary star to supernova. His group pushed their puny (in comparison to today's iPhones) workstations to their limits as they processed enormous images, gigabytes in size, to look for that rare event – a supernova explosion that happened at just the right moment in the past so that its light hit the earth exactly when the telescope was pointing at it.
The paper that came out presented only a handful of such events. After years of searching and untold amounts of image processing, they were lucky to find even these little needles in their observational haystack. The speeds of these exploding stars were plotted in the paper with gloriously large error bars. These bars were not calculated from statistical standard deviations but derived from deep analysis of the limits of the instruments and the theory behind the instruments.
Still, it has to be pointed out that a major reason the results were accepted so quickly was that there was another team (producing another winner of the Prize) measuring exactly the same thing, albeit on other telescopes. The fact that both groups found the same thing pretty much sealed the deal. Here is actually a case where it was better to have a competitor working on the same project, rather than live in abject fear of being scooped. Indeed, this represents a general principle of calculation from before the days of generally available computers. Often, long and involved computations done by hand had to be done by two different people. Only if both got the same result would it be accepted as the answer. This ought to be a working principle in the measurement of fancy new objects.
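The duplicate-calculation principle is easy to state in code: compute the same quantity by two genuinely independent routes, and accept the answer only if they agree. A toy sketch:

```python
# Two independent routes to the same quantity: the sum 1 + 2 + ... + n.
def sum_by_loop(n: int) -> int:
    """Brute-force route: add the terms one by one."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_by_formula(n: int) -> int:
    """Analytic route: Gauss's closed-form n(n+1)/2."""
    return n * (n + 1) // 2

n = 1000
a, b = sum_by_loop(n), sum_by_formula(n)
assert a == b, "independent computations disagree -- do not publish!"
print(a)
```

The point is that the two routes share no intermediate steps, so an error in one is very unlikely to be reproduced in the other – exactly the logic of the two competing supernova teams.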
III. The role of theory
You might think that the work behind a 2011 Nobel prize in astrophysics would involve some fancy new theoretical physics. The surprising thing listening to Schmidt's talk was how thoroughly bog-standard the theory was. Indeed, Schmidt spent the bulk of the introduction explaining Einstein's theory of general relativity, which was introduced to the world in 1915. Specifically, he invoked only the totally uniform solution of the equations of general relativity, which is among the simplest of all solutions and was one of the first to be worked out. In some ways, what was most surprising was what was not discussed in the talk. No M-branes. No superstrings. No 11-dimensional universes.
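For the curious, that totally uniform (homogeneous and isotropic) solution is governed by the standard textbook Friedmann equation, with $a(t)$ the scale factor of the universe, $\rho$ the energy density, $k$ the spatial curvature and $\Lambda$ the cosmological constant:

```latex
H^2 = \left(\frac{\dot{a}}{a}\right)^2
    = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}
```

Measuring how $\dot{a}$ changes over cosmic time – via those supernova distances – is what pins down the $\Lambda$ term, i.e. the dark energy.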
Indeed, the most important piece of theory was probably the models of the explosion of white dwarf stars. From what I understand, these models involved, at the lowest level, neutrino physics from the standard model. This is good solid astrophysics married to particle physics from the 1970s.
Of course, having won a Nobel Prize, Schmidt had earned the right to speculate like a crazy mountain hermit drunk on cactus juice, and he did so rather entertainingly, warning of the fearful death of the universe as space stretches so far apart in the distant future that every atom will live forever alone. Then there is the other possibility: that whatever the magic source of dark energy is, it will continually churn out energy as the universe expands, and so the universe will be filled with stuff.
Nevertheless, the point I want to make is that great science doesn't need fancy new theories, but rather a deep appreciation of old theories and damn clever ways of measuring new things.