Thick as Two Short...
In the previous post on this topic, I talked about how a simple model of black-body radiation leads to what's known as the "Ultraviolet Catastrophe": the prediction that the amount of radiation emitted by an object at a fixed temperature should increase as you go to shorter wavelengths. This prediction more or less works for long wavelengths, but is a miserable failure at shorter ones.
This was a huge and troubling problem for physics in the late nineteenth century. There's nothing obviously wrong with the derivation, but the result is almost completely wrong. Something was missing from the description of the physics, but nobody was sure what.
The solution was eventually provided by Max Planck, using a somewhat unorthodox method. Planck took what was known about black-body radiation-- the Rayleigh-Jeans result, which works well at long wavelengths; the Wien displacement relation, which gives the peak of the black-body spectrum at a given temperature; and some other observations-- and fiddled around with them until he found a single expression that combined all the available observations (the Planck radiation formula). Then he tried to find a way to get that specific expression from first principles.
He eventually found it, but he had to use a mathematical trick to do it. The problem really comes in when you assume that the energy in the radiation field is smoothly and evenly distributed among the available modes. So what he did was to imagine that it comes in discrete chunks. These "quanta" of energy depend on the frequency of the radiation in a very simple way-- some constant h times the frequency (or hc/λ, where λ is the wavelength and c the speed of light).
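To put a number on the size of these chunks, here's a minimal sketch (constants rounded to four figures, SI units throughout):

```python
# Energy of one quantum of light: E = h*f = h*c/lambda
h = 6.626e-34   # Planck's constant, in joule-seconds
c = 2.998e8     # speed of light, in meters per second

def quantum_energy(wavelength_m):
    """Energy in joules of a single quantum at the given wavelength (meters)."""
    return h * c / wavelength_m

# A 500 nm (green) photon carries roughly 4e-19 J -- a tiny amount,
# which is why light looks continuous in everyday life.
print(quantum_energy(500e-9))
```

The key point is the inverse dependence on wavelength: halve the wavelength and each quantum carries twice the energy.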
If you think about it a bit, you can see how this would fix the problem. The amount of light at a given wavelength depends on the number of quanta associated with radiation at that wavelength, not the total energy allocated to modes in that region. As you move to shorter wavelengths (higher frequencies), the amount of energy allocated to a given range of wavelengths increases (because the number of possible modes in that range increases), but the number of quanta needed to account for that energy decreases (because the energy per quantum increases).
At long wavelengths (low frequencies), you have very few modes, so there's not a lot of energy, and very few quanta, so there's not much light. At short wavelengths (high frequencies), there are lots of possible modes, which means a lot of energy, but there are very few quanta, because each quantum contains a large amount of energy. Somewhere in the middle, you expect a peak-- a point where the total energy is fairly large and the energy per quantum isn't too big, so you end up with the maximum number of quanta of radiation.
(I should note that this is a huge hand-wave-- what's actually going on is a little more complex (obviously, the energy of the field isn't divided perfectly uniformly, because eventually you get to a point where the energy per quantum is greater than the energy per mode), but this gets the basic spirit of the solution without being too ridiculously wrong. I think-- if you want to call me an idiot, you know where the comments are.)
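You can see the difference numerically with a quick sketch comparing the classical Rayleigh-Jeans prediction to Planck's formula (constants rounded; the temperature is roughly that of the Sun's surface, my choice for illustration):

```python
import math

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def rayleigh_jeans(lam, T):
    """Classical spectral radiance: blows up as wavelength -> 0 (the catastrophe)."""
    return 2 * c * k * T / lam**4

def planck(lam, T):
    """Planck's formula: matches Rayleigh-Jeans at long wavelengths, falls off at short ones."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

T = 5800  # kelvin
for lam in [10e-6, 1e-6, 0.5e-6, 0.1e-6]:
    print(f"{lam*1e6:5.2f} um   R-J: {rayleigh_jeans(lam, T):.3e}   Planck: {planck(lam, T):.3e}")
```

At 10 microns the two formulas agree to within about 15 percent; at 0.1 microns the classical result keeps climbing while Planck's curve has already collapsed, with the peak sitting in between-- exactly the behavior described above.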
It's a neat trick (Planck borrowed it from statistical physics, which was his main area of research), and it works beautifully. The problem is, there's no real reason to expect it to be correct. In fact, Planck himself didn't really believe that it had any physical reality-- he thought it was just a slick bit of mathematics that he could use to make the solution work, and that eventually somebody else would work out the right way to get there. It turns out, he had the right way all along, but it took a while for that to be appreciated. Einstein played a big part in that, which is how this all got brought up in that Boskone panel, but I'll save his part of the story for the next post.
I'll end this by noting that Planck's name has since been attached to h, the constant that he introduced in order to make his mathematical trick work. Planck's constant is a tiny, tiny number-- 6.626 × 10^-34 joule-seconds (that's 0.0000000000000000000000000000000006626 J-s), but it shows up all over the place in quantum mechanics (including the uncertainty relations over in the left-hand column). In fact, the presence of an "h" in a formula is a pretty good indication that you're dealing with something quantum (unless the formula is something like mgh, which is gravitational potential energy in freshman mechanics), and if you'd like to make up your own quantum formula, it'd be a good idea to include an "h" somewhere in it.
The constant has units of energy multiplied by time, or kilograms times meters squared, divided by seconds. This means that you can put it together with the other major constants of the universe (the gravitational constant G and the speed of light c) to generate a set of "fundamental units" for length, mass, and time. These are sort of ridiculous for most normal purposes (the "Planck time" is 0.00000000000000000000000000000000000000000005 seconds), but they may be the natural units for describing certain extremely high-energy processes. If you hear string theorists talking about the "Planck scale," that's what they're talking about-- lengths comparable to the "fundamental" length you get from the proper arrangement of h, G and c (about 1.6 × 10^-35 meters, give or take).
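The "proper arrangement" is just a matter of combining the constants so the units work out; the conventional choice uses the reduced constant ħ = h/2π. A minimal sketch:

```python
import math

h = 6.626e-34            # Planck's constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck's constant (the conventional choice)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s

# Combine hbar, G, and c so the leftover units are a length, a time, a mass:
planck_length = math.sqrt(hbar * G / c**3)  # about 1.6e-35 m
planck_time   = math.sqrt(hbar * G / c**5)  # about 5.4e-44 s
planck_mass   = math.sqrt(hbar * c / G)     # about 2.2e-8 kg

print(planck_length, planck_time, planck_mass)
```

Note that the Planck time is just the Planck length divided by c, as you'd expect for the "natural" time it takes light to cross the fundamental length.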
Posted at 7:07 AM