In statistical physics one of the key underlying ideas is the following: For every macroscopic state (e.g., a pressure of 1 atmosphere and a temperature of around 300 K for the air in your room), there are many microstates (in this example, there are many possible arrangements of positions and momenta of oxygen and nitrogen molecules in the room that all look macroscopically about the same). The macroscopic states that we observe are those that have the most possible microstates associated with them. There is nothing physically forbidden about having all of the air in your room just in the upper 1 m of space; it's just that there are vastly more microstates where the air is roughly evenly distributed, so that's what we end up seeing.

To actually calculate anything with this idea, we need to be able to count microstates. For pointlike particles, that means counting how many possible positions and momenta they can have. Classically this is awkward because position and momentum are continuous variables; there are infinitely many possible positions and momenta even for one particle. Quantum mechanically, the uncertainty principle constrains things more, since we can never know the position and momentum precisely at the same time. So, the standard way of dealing with this is to divide up phase space (position x momentum) into "cells" of size h^d, where h is Planck's constant and d is the dimensionality. For 3d, we use h^3. Planck's constant comes into it via the uncertainty principle. Here's an example of a typical explanation.

Here's the problem: why h^3, when we learn in quantum mechanics that the uncertainty relation is, in 1d, (delta p)(delta x) >= hbar/2 (which is h/4 pi, for the nonexperts), not h? Now, for many results in classical and quantum statistical mechanics, the precise number used here is irrelevant. However, that's not always the case. For example, when one calculates the temperature at which Bose condensation takes place, the precise number used here actually matters. Since h^3 really does work for 3d, there must be some reason why it's right, rather than hbar^3 or some related quantity. I'm sure that there must be a nice geometrical argument, or some clever 3d quantum insight, but I'm having trouble getting this to work. If anyone can enlighten me, I'd appreciate it!

UPDATE: Thanks to those commenting on this. I'm afraid that I wasn't as clear as I'd wanted to be in the above; let me try to refine my question. I know that one can start from particle-in-a-box quantum mechanics, or assume periodic boundary conditions, and count up the allowed plane-wave modes within a volume. This is equivalent to Igor's (the first commenter's) discussion of applying the old-time Bohr-Sommerfeld quantization condition (that periodic orbits have actions quantized in units of h). My question is, really: why does h show up here, when we know that the minimal uncertainty product is actually hbar/2? Or, put another way, should all of the stat mech books that argue that the h^3 comes from uncertainty be reworded instead to say that it comes from Bohr-Sommerfeld quantization?
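As a quick check of the mode-counting argument, here is a minimal numerical sketch (not from the original discussion; illustrative units with h = L = 1). With periodic boundary conditions, the allowed momenta in a cubic box of side L are p = h n / L for integer vectors n, so the number of modes with |p| <= p_max should approach the phase-space volume divided by h^3, with no stray factors of 2 pi:

```python
import numpy as np

# Periodic boundary conditions in a cubic box of side L: allowed
# wavevectors are k = 2*pi*n/L, so allowed momenta are p = hbar*k = h*n/L
# for integer vectors n = (nx, ny, nz).  The 2*pi from Fourier space
# cancels the 2*pi in hbar = h/(2*pi), leaving the bare h.
h, L = 1.0, 1.0
p_max = 20.0  # cutoff momentum, in units of h/L

n_max = int(p_max * L / h)
n = np.arange(-n_max, n_max + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
p2 = (h / L) ** 2 * (nx**2 + ny**2 + nz**2)
exact_count = np.count_nonzero(p2 <= p_max**2)

# Phase-space estimate: (real-space volume) x (momentum-space ball) / h^3
V = L**3
estimate = V * (4.0 / 3.0) * np.pi * p_max**3 / h**3

print(exact_count, estimate, exact_count / estimate)  # ratio -> 1
```

The ratio approaches 1 as p_max grows; dividing by hbar^3 instead would be off by a factor of (2 pi)^3, roughly 248.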

## 11 comments:

It comes from counting states in the WKB approximation of QM. Or, even simpler (and less consistent), it follows from the Bohr-Sommerfeld quantization rules.

Another way to see it is that going to Fourier space introduces a factor of $2\pi$ into the scale for momentum.

Section 16.9 (p. 370) of H.B. Callen's Thermodynamics and an Introduction to Thermostatistics (second edition, 1985) might be of some help.

Callen begins by asking "how did Willard Gibbs invent statistical mechanics in the nineteenth century, long before the birth of quantum mechanics and the concept of discrete states?", and ends with "Gibbs' postulate ... must stand as one of the most inspired insights in the history of physics."

Thanks all. I updated the original post to better refine my question. I get the state-counting argument from Bohr-Sommerfeld quantization; I see that as almost a separate origin. That is, I don't have an intuitive feel for the connection between B-S quantization and the uncertainty principle inequality. I guess that's really what I'm asking about.

Doug,

this is one of the subjects in which the Path Integral approach is most illuminating, in my opinion. I recommend the second chapter of "Quantum Mechanics and Path Integrals" by Feynman and Hibbs.

Have a look at Callen. If you're teaching undergrads, it's one of the best texts for these sorts of questions, although probably not the best for teaching.

If I recall McQuarrie's book correctly, h is used even though hbar is in the actual bound simply because not everyone does quantum mechanics from the ground up, so a rough bound is good enough for these purposes. He also says that this circumstance is known and just kinda accepted.

But then, the prof I took stat mech from also walked in one day, and the first thing he said was along the lines of "Yeah, we're gonna hit chapter X soon ... ignore the book for that chapter. McQuarrie's a chemist, so in this case he doesn't have a clue what he's talking about..."

From a quantum information point of view, x and p are "complementary operators": this means that if you have a state for which one of the distributions is perfectly sharp (i.e., an eigenvector), then the other distribution is flat (i.e., no information, all probabilities equal).

The QIT people like to reduce QM to spin-1/2 qubits as this is the simplest system that exhibits complementarity. For this, the complementary variables correspond to states chosen from mutually unbiased bases.

For the Pauli algebra (spin-1/2), the equivalent of (delta x)(delta p_x) is (delta S_x)(delta S_y), i.e., the spin operators. From knowing that the spin-1/2 eigenvalues are \pm \hbar/2, you should be able to get the HUP, and it arises because the simplest nontrivial Hilbert space uses \hbar/2.

I know this doesn't answer the question, but I'm watching to see where it leads. It seems to me that qubits are the simplest way to get to the connection.
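The spin-1/2 uncertainty product above can be checked directly with Pauli matrices. Here is a quick numerical sketch (not from the original comment; hbar set to 1, using the Robertson bound (delta A)(delta B) >= |<[A,B]>|/2):

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

up = np.array([1, 0], dtype=complex)  # S_z eigenstate "spin up"

def expval(op, psi):
    # <psi| op |psi>, real part (op is Hermitian here)
    return np.vdot(psi, op @ psi).real

def delta(op, psi):
    # standard deviation sqrt(<op^2> - <op>^2)
    return np.sqrt(expval(op @ op, psi) - expval(op, psi) ** 2)

lhs = delta(Sx, up) * delta(Sy, up)
comm = Sx @ Sy - Sy @ Sx              # = i*hbar*Sz
rhs = abs(np.vdot(up, comm @ up)) / 2  # Robertson bound

print(lhs, rhs)  # both equal hbar^2/4 = 0.25: the bound is saturated
```

For an S_z eigenstate the product (delta S_x)(delta S_y) exactly saturates the bound, with hbar/2 (not h) as the natural scale, which is part of what makes the question in the post nontrivial.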

A good reference to follow is Landau and Lifshitz. In volume 5, Statistical Physics, section 7, "Entropy", they define entropy in quantum mechanics in terms of the number of microscopic states, an absolute value and a dimensionless magnitude. That's not possible in classical mechanics, where entropy can only be defined up to an additive constant. But they get the factor of h per degree of freedom by referring to volume 3, Quantum Mechanics (Non-Relativistic Theory), section 48, "Bohr-Sommerfeld quantization rule". No mention of the uncertainty principle; the factor comes from the study of the quasiclassical limit.

(Section and volume numbers are those of the Spanish translations of the original Russian works.)

My former student Lam Yu pointed out to me that another way to think about this issue is in terms of global vs. local structure of phase space. The Bohr-Sommerfeld quantization condition is a global statement that one makes about classical orbits in phase space. The uncertainty principle really comes from the commutation relations of the momentum and position operators, which can be thought of as a local statement about each point in phase space. In the end, I think the point is that the factor really comes from quantization, not uncertainty per se.

1) Uncertainty principle means, qualitatively: the number of half-waves is greater than unity.

2) The number of microstates in a box is counted semi-qualitatively and semi-classically as the number of half-waves along the box sides (more exactly, the number of corresponding "microcubes").

3) Cyclical frequencies and "hbar" are convenience artefacts. All measured quantities (magnetic flux and conductance quanta) appear only through the pristine Planck constant "h", without any "pi", as in the de Broglie relation.

Gennady Zebrev
