The expanding universe
The launch of the Webb telescope reminds me that I’ve been meaning to write about the fascinating — to me anyway — history of the ongoing discovery of the size of the universe.
I should note that I’m a complete amateur in regard to this subject, so there may well be various mistakes in what follows, which I’m sure will be corrected in the charitable fashion for which the LGM commentariat is Internet-world renowned.
The first celestial object whose distance from the Earth was calculated with some precision was the Moon. More than 2100 years ago, the Greek astronomer Hipparchus managed, via the trigonometry of a lunar eclipse, to calculate that distance to within a few percent of the modern value. His estimate was around 233,000 to 265,000 miles, while on average the true — or I suppose “true” — value is about 239,000. So this was a fantastic accomplishment at the time.
The next big question in this branch of astronomy became: how far was the Sun from the Earth, and, beyond that, how far away were the other stars? The Ptolemaic system assumed the Sun’s sphere was much further from the Earth than the lunar sphere, and that the sphere in which the stars were fixed was much further still. But how much further?
It turns out that these questions remained the subject of little more than educated guesswork for about 1800 years after Hipparchus’s remarkably accurate measurement of the distance to the Moon. His method could not be used to measure the distance to the Sun unless the Sun’s diameter was known, and there was no way at the time to determine this.
In a remarkable coincidence that would eventually prove extremely significant to the development of astronomy, it turns out that the apparent diameters of the Sun and the Moon in the sky are almost exactly the same, because the proportions between the diameters of the two bodies and their respective distances from the Earth are also almost identical — the Sun is about 400 times larger, and about 400 times farther away.
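To see how closely the coincidence holds, the arithmetic is easy to check. Here is a minimal sketch in Python, using round modern values for the diameters and distances (the numbers are approximations for illustration, not precise figures):

```python
import math

# Round modern values in kilometers (approximations for illustration)
sun_diameter, sun_distance = 1_392_000, 149_600_000
moon_diameter, moon_distance = 3_474, 384_400

def apparent_size_degrees(diameter_km, distance_km):
    """Angular diameter as seen from Earth, in degrees."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

print(f"Sun:  {apparent_size_degrees(sun_diameter, sun_distance):.2f} degrees")
print(f"Moon: {apparent_size_degrees(moon_diameter, moon_distance):.2f} degrees")
print(f"Diameter ratio: {sun_diameter / moon_diameter:.0f}")
print(f"Distance ratio: {sun_distance / moon_distance:.0f}")
```

Both bodies come out at about half a degree across, which is why a total solar eclipse is possible at all.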
But this fact remained undiscovered for nearly 2000 years. Ancient and medieval estimates of the distance to the Sun remained largely guesses, and those guesses turned out to be far too conservative. For example, Ptolemy estimated the distance to be around four million miles, which is about 5% of the true value. Meanwhile the stars other than the Sun were guesstimated to be tens of millions of miles away.
It’s important not to indulge in too much Whig history in this regard: while these values, especially the latter one, turned out to be radically incorrect, keep in mind that millions and tens of millions of miles were still almost incomprehensibly vast distances, in a world in which the basic modes of transportation moved at only a few miles per hour.
In any case, anything resembling an accurate estimate of the distance from the Earth to the Sun would have to wait all the way until the 17th century. The first scientist who came close to determining the actual distance was the great Dutch polymath Christiaan Huygens, who in 1659 calculated the distance as 1.068 times what would eventually be defined as an astronomical unit (the average distance from the Earth to the Sun). So that was really close — but it turns out that historians of science are very harsh graders. Huygens made several large errors in his calculation that happened to pretty much cancel each other out, so he doesn’t get credit for making the first reasonably accurate calculation.
That achievement goes to Giovanni Cassini and his assistant Jean Richer, who between 1671 and 1673 observed the position of Mars simultaneously from Paris and from Cayenne, in French Guiana, which allowed them to calculate the distance to the planet via parallax. This in turn allowed them to estimate the scale of the entire known solar system — the Sun and the six known planets — to within about 7% of the figures accepted today.
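The geometry behind this is simple, even if the observations were heroically difficult: two observers a known distance apart see Mars shifted by a tiny angle against the background stars, and the small-angle formula converts that shift into a distance. Here is a hedged sketch of the idea; the numbers are illustrative round values, not Cassini’s actual measurements:

```python
import math

# Illustrative round numbers, not Cassini's actual data
baseline_km = 7_000        # roughly the Paris-to-Cayenne separation
parallax_arcsec = 20.0     # a plausible parallax for Mars near opposition

# Small-angle formula: distance = baseline / angle (angle in radians)
angle_rad = math.radians(parallax_arcsec / 3600)
mars_distance_km = baseline_km / angle_rad
print(f"Distance to Mars: {mars_distance_km:,.0f} km")

# Kepler's third law fixes the ratios of all the planetary orbits, so one
# absolute distance scales the whole solar system. Near opposition Mars is
# roughly 0.5 AU from the Earth, which implies the Earth-Sun distance:
print(f"Implied astronomical unit: {mars_distance_km / 0.5:,.0f} km")
```

The crucial point is the last step: once you have one absolute distance within the solar system, Kepler’s ratios hand you all the others.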
The true value was nailed down with even more precision a century later, using the transits of Venus in the 1760s — these produced estimates within two percent of the modern value, which is about 93 million miles.
Even so, the distance from the Earth to the other observable stars — just a few tens of thousands of them with the instruments of the time — again remained almost wholly speculative until towards the middle of the 19th century, when telescopes first became powerful enough to measure the first stellar parallaxes. (The first star to have its distance measured in this way was 61 Cygni, which was determined by Friedrich Bessel in 1838 to be about 10 light years from Earth.)
This should give us a certain amount of pause: there are still people today who have a grandparent who was alive at the time that the distance to the nearest stars — which turns out to be an almost indescribably tiny distance in the context of what is now the known universe — was determined for the first time. These initial measurements found values of between 4.3 and a few dozen light years for the stars whose parallaxes could be measured in the mid-19th century. A light year is the distance light travels in a year, which is about 63,000 times the distance from the Earth to the Sun. So even the very nearest stars turned out to be hundreds of thousands or millions of times further from the Earth than the Sun.
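The conversion involved is worth making concrete. A brief sketch using the modern convention, under which a star showing an annual parallax of p arcseconds lies at a distance of 1/p parsecs (a parsec being about 3.26 light years):

```python
PARSEC_IN_LY = 3.2616    # light years per parsec
AU_PER_LY = 63_241       # astronomical units per light year

def parallax_to_light_years(parallax_arcsec):
    """Distance implied by an annual stellar parallax, in light years."""
    return (1.0 / parallax_arcsec) * PARSEC_IN_LY

# Bessel's 1838 figure for 61 Cygni was roughly 0.31 arcseconds
d_ly = parallax_to_light_years(0.31)
print(f"61 Cygni: {d_ly:.1f} light years,")
print(f"or about {d_ly * AU_PER_LY:,.0f} times the Earth-Sun distance")
```

That works out to roughly ten and a half light years, or several hundred thousand astronomical units, which matches Bessel’s published result.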
Here’s a way of conceptualizing the distances involved in something other than purely abstract and mathematical terms: one one-thousandth of the speed of light is approximately 300 kilometers per second. A craft moving at this speed would cover the distance from New York to Chicago in four seconds, and then reach Seattle nine seconds later. This is approximately 27 times faster than the highest speed achieved by any crewed spacecraft (the Apollo capsules at their maximum re-entry speed), and still roughly twice as fast as the highest speed achieved by any spacecraft, period (the Parker Solar Probe at its closest approach to the Sun, when the Sun’s gravitational field accelerated it to its maximum velocity). But it’s still a speed that somebody who has taken a transcontinental flight and watched the continent slide by over the course of a few hours — nice weather down there — can at least roughly imagine.
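That arithmetic is easy to verify. A quick sketch, with rough round-number distances between the cities (assumed values for illustration):

```python
speed_km_s = 299_792.458 / 1000   # one one-thousandth of the speed of light

# Rough great-circle distances in kilometers (round numbers for illustration)
legs = {"New York -> Chicago": 1_150, "Chicago -> Seattle": 2_800}
for name, km in legs.items():
    print(f"{name}: {km / speed_km_s:.1f} seconds")

# At c/1000, a trip of d light years takes 1000 * d years:
print(f"Proxima Centauri (4.25 ly): {4.25 * 1000:,.0f} years")
```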
It would take thousands of years, traveling at this speed, to reach even the very nearest stars to the Earth, in what is just a tiny corner of the Milky Way galaxy.
Which brings us to the next big question facing astronomers: how big was the galaxy, and was the galaxy itself, as it were, It (that is, the entire universe)?
Telescopic technology improved rapidly throughout the 19th century, but even so only the parallaxes of the most nearby stars in the galaxy could be measured. Improved technology did, however, resolve one part of a long-standing question: were the so-called galactic nebulae themselves made up of individual stars? For a couple of centuries astronomers had speculated that at least some of the nebulae — the Latin word means “clouds” and was used to describe any diffuse astronomical object whose contours were difficult to discern — might actually be made up of extremely distant stars. In the 18th century Kant had speculated that they might even be what he called “island universes,” aka other galaxies outside the Milky Way itself.
By the early 20th century telescopes had improved enough to determine that some of the nebulae, such as the famous one in the Andromeda constellation, were indeed made up of vast collections of individual stars. But whether those collections were inside or outside the Milky Way remained a matter of intense and heated — for astronomers anyway — debate, since there was still no way to determine the distances to any but the closest stars.
Then in 1908 an astronomer at Harvard, Henrietta Swan Leavitt, made one of the most important discoveries in the history of science. Leavitt wasn’t actually considered a real astronomer by Harvard itself — she was a low-paid assistant whose job was to be a “computer” of the information collected on photographic plates taken of stars. While doing this work she studied the characteristics of thousands of Cepheid variables: stars that change in brightness over a well-defined, stable period. What Leavitt realized was that a Cepheid’s period is tightly correlated with its intrinsic brightness, which made it possible to establish the true luminosity — as opposed to the apparent luminosity — of these stars, which in turn meant they could be used to establish far vaster astronomical distances than could be determined via observations of stellar parallax.
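To make the logic concrete: once the period tells you a Cepheid’s absolute brightness, comparing that with how bright the star merely appears gives its distance directly. Here is a minimal sketch, using one later published calibration of the period-luminosity relation (the coefficients are modern and illustrative, not Leavitt’s original fit) together with the standard distance-modulus formula:

```python
import math

def cepheid_distance_ly(period_days, apparent_mag):
    """Distance to a classical Cepheid from its pulsation period and
    apparent magnitude, via an illustrative modern V-band calibration
    of the Leavitt law: M = -2.43 * (log10(P) - 1) - 4.05."""
    absolute_mag = -2.43 * (math.log10(period_days) - 1) - 4.05
    # Distance modulus: m - M = 5 * log10(d_parsecs) - 5
    distance_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return distance_pc * 3.2616   # parsecs -> light years

# Hypothetical example: a Cepheid with a 10-day period appearing at magnitude 12
print(f"{cepheid_distance_ly(10, 12.0):,.0f} light years")
```

The power of the method is that it needs no parallax at all: a period you can time and a brightness you can photograph are enough.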
This almost immediately changed the entire structure of astronomical debate regarding trans-solar distances. Within a decade, Harlow Shapley had used Cepheids to determine basic limits on the size and structure of the Milky Way, and to place the location of the Sun within it. (Leavitt died in 1921, and even though her work was initially appropriated by her supervisors with very little credit to her, her discoveries were so self-evidently of staggering importance that attempts were made in the mid-1920s to award her a Nobel Prize, which unfortunately can’t be awarded posthumously).
In 1920 Shapley and Heber Curtis held a famous debate regarding the size of the universe as it was then understood. Shapley argued that the galactic nebulae were part of, or at least adjacent to, the Milky Way, which he considered to be the entire universe. He also placed the Sun (correctly) toward the outer edges of the galaxy. Curtis argued that the nebulae were, as Kant had thought, completely separate galaxies. (Shapley thought this was impossible, because the distances involved for this to be true would have to be at least millions of light years — distances that astronomers at the time generally considered far too vast to be actually possible.)
Shortly afterwards, Edwin Hubble used the brand-new giant 100-inch telescope at the Mount Wilson observatory in California to determine the distance to a Cepheid variable in the Andromeda nebula, thereby conclusively proving that Curtis was right — the “nebula” was in fact an entire separate galaxy, at least one million light years from Earth. (The contemporary value is 2.5 million light years.) Amazingly enough, Hubble’s determination that there were in fact other galaxies in the universe wasn’t his most important discovery: that would be his finding that a galaxy’s redshift increases with its distance, which allowed the development of the entire structure of modern cosmology.
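What made the redshift discovery so powerful is the simple linear relation it implied between a galaxy’s distance and its recession velocity, now written as v = H0 * d. A sketch of how it is used, valid only for nearby galaxies where the redshift is small; the value of the Hubble constant below is one commonly quoted modern figure:

```python
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per megaparsec (one common value)

def distance_from_redshift_mpc(z):
    """Approximate distance for a small redshift z, via v = c*z = H0*d."""
    return C_KM_S * z / H0

# Hypothetical example: a galaxy whose spectral lines are shifted by z = 0.01
d_mpc = distance_from_redshift_mpc(0.01)
print(f"{d_mpc:.0f} megaparsecs, or about {d_mpc * 3.26:.0f} million light years")
```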
That structure would, over the course of the next few decades, determine that the observable universe was billions of light years in extent. (In the context of the Big Bang theory, which was initially being developed by Georges Lemaître, a Belgian priest, at the same time Hubble was making his discoveries, how much of the universe is actually observable from Earth is constrained by various absolute limits.) The current value is about 92 billion light years across, to be inexact.
These distances are basically inconceivable to the human mind. In summation: a universe that was considered almost immeasurably vast by ancient astronomers was found to be millions of times larger than that by astronomers of the early modern period three hundred years ago. Then, within a few generations, the universe — more properly the observable universe — was found to be millions of times larger still than the already vastly enlarged universe that had replaced the Ptolemaic model (which, again, was considered to be tens of millions of miles in extent).
This story, I think, should produce a certain amount of intellectual humility. It may well be that the Webb telescope — which is heading to its permanent location nearly one million miles from Earth, and is 100 times more powerful than the Hubble space telescope — will make discoveries that will make the current cosmological model look as outmoded as that of the early 20th century astronomers looks today.
In any event, this narrative produces in me a kind of intellectual vertigo, when I contemplate that the most basic facts about the structure of the universe (a term that, as Borges famously observed, is always an example of question-begging) have changed so radically even within the lifetimes of many people who are still alive today.