Today this blurb from the New Scientist caused a bit of excitement around the web. While it sounds at first glance like complete crackpottery, and is almost certainly a case of terrible science journalism, it does involve an interesting physics story that I first encountered back when I was looking at grad schools.

I visited Berkeley as a prospective student and got to meet Ray Chiao, who asked me how long it takes a particle with energy E to tunnel through a rectangular barrier of energetic height U > E and thickness d. He went to get a glass of water and wanted me to give a quick answer when he got back a couple of minutes later. Well, if I wasn't supposed to do a real calculation, I figured there were three obvious guesses: (1) d/c; (2) d/(hbar k/m), where k = sqrt(2m(U-E))/hbar - basically solving for the (magnitude of the imaginary) classical velocity and using that; (3) 0. It turns out that this tunneling time controversy is actually very subtle. When you think about it, it's a funny question from the standpoint of quantum mechanics: you're asking, of the particles that successfully traversed the barrier, how long they were in the classically forbidden region. This question has a long, glorious history that is discussed in detail here.
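For concreteness, here's a quick numerical sketch of the first two guesses. The parameters (a 5 eV electron, a 10 eV barrier, 1 nm thick) are purely illustrative choices, not anything from the actual conversation:

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # electron mass, kg
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # J per eV

def candidate_times(E_eV, U_eV, d):
    """The three back-of-the-envelope guesses for the time to tunnel
    through a rectangular barrier of height U > E and thickness d."""
    E, U = E_eV * EV, U_eV * EV
    kappa = math.sqrt(2 * M_E * (U - E)) / HBAR  # decay constant under the barrier
    t1 = d / C                      # (1) light-crossing time of the barrier
    t2 = d / (HBAR * kappa / M_E)   # (2) d over the "imaginary" classical speed
    t3 = 0.0                        # (3) no time at all
    return t1, t2, t3

# Illustrative numbers: 5 eV electron, 10 eV barrier, 1 nm thick
t1, t2, t3 = candidate_times(5.0, 10.0, 1e-9)
```

For these numbers, guess (1) comes out around a few attoseconds and guess (2) around a femtosecond, which at least shows how far apart the "obvious" answers can be.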

Amazingly, the answer is that the tunneling velocity (d / the tunneling time) can exceed c, the speed of light in a vacuum, depending on how it's defined. For example, you can consider a Gaussian wave packet incident on a barrier, and ask how fast the packet makes it through. There will be some (smaller than the incident) transmitted wave packet, and if you look at how long it takes the center of the transmitted wave packet to emerge from the barrier after the center of the incident packet hits it, you can get superluminal speeds for the center of the wave packet. (You can build up these distributions statistically by doing lots of single-photon counting experiments.) Amazingly, you can actually have a situation where the peak of the exiting pulse leaves the barrier before the peak of the entering pulse arrives. This would correspond to a negative (average) velocity (!), and has actually been demonstrated in the lab.

So, shouldn't this bother you? Why doesn't this violate causality and break special relativity? The conventional answer is that no information is actually going faster than light here. The wave packets we've been considering are all smooth, analytic functions, so the very leading tail of the incident packet contains all the information. Since that leading tail is, in Gaussian packets anyway, infinite in extent, all that's going on here is some kind of pulse re-shaping. The exiting pulse is, in some sense, just a re-shaped version of information that was already present. It all comes down to how one defines a signal velocity, as opposed to a phase velocity, group velocity, energy velocity, or any of the other concepts dreamed up by Sommerfeld back in the early 20th century, when people first worried about this.

Now, this kind of argument from analyticity isn't very satisfying to everyone, particularly Prof. Nimtz. He has long argued that something more subtle is at work here - that superluminal signalling is possible, but tradeoffs between bandwidth and message duration ensure that causality can't be violated.

Well, according to his quotes in today's news, apparently related to this 2-page thing on the arXiv, he is now making very strong statements about violating special relativity. The preprint is woefully brief and shows no actual data - for such an extraordinary claim in the popular press, this paper is completely inadequate. Anyway, it's a fun topic, and it really forces you to think about what causality and information transfer really mean.

## 4 comments:

This subject is VERY interesting. It is remarkable that the problem can be stated so simply, but still stump anyone from advanced undergraduates to full professors.

Technically it is not even a quantum mechanics problem as identical issues can arise in EM transmission through undersized waveguides.

I made my own peace with the whole issue of superluminality and signaling etc. when realizing that "a signal" must constitute a difference from some background level and must also be something discontinuous in the time domain. A signal that isn't somehow discontinuous in time, isn't a signal at all. Discontinuity in the time domain means it is composed of arbitrarily high frequencies in the Fourier domain. These high frequencies, being arbitrarily high, are always above the barrier height, or cutoff frequency and so they propagate with c or the free particle velocity.
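The Fourier-domain point here is easy to illustrate numerically: a pulse with a sharp edge in time has spectral weight extending far above any finite cutoff, while a smooth Gaussian pulse does not. A sketch (arbitrary units; the cutoff bin is an arbitrary stand-in for the barrier height or waveguide cutoff in the analogy):

```python
import numpy as np

N = 4096
t = np.linspace(-1.0, 1.0, N, endpoint=False)
gaussian = np.exp(-(t / 0.05) ** 2)               # smooth pulse
sharp = np.where(t >= 0, np.exp(-t / 0.05), 0.0)  # discontinuous turn-on at t=0

G = np.abs(np.fft.rfft(gaussian))
S = np.abs(np.fft.rfft(sharp))

# Fraction of spectral amplitude above an arbitrary "cutoff" frequency bin
cut = N // 8
g_tail = G[cut:].sum() / G.sum()
s_tail = S[cut:].sum() / S.sum()
print(f"smooth pulse above cutoff: {g_tail:.2e}")
print(f"sharp  pulse above cutoff: {s_tail:.2e}")
```

The Gaussian's spectrum falls off faster than any power of frequency, while the sharp edge gives a slowly decaying (roughly 1/f) tail, so an appreciable fraction of its spectral weight sits above any cutoff you pick.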

Well, to be precise, there's no such thing as a signal that is ideally discontinuous. After all, there is always a finite rise time for any signal. A more precise statement of your realization is that every signal must occupy a finite (i.e., non-vanishing) bandwidth. In which case, the ratio of the bandwidth of the signal to the height of the potential barrier becomes relevant. And, indeed, this ratio need not be larger (or smaller) than unity for real-world signals. But it cannot be zero, or infinite. I mean, on the face of it, it is clearly not true that all signals must contain arbitrarily high frequency components.

Peter - The problem I have with your wording is that it implies that infinite bandwidth is required to send information. I'm pretty sure my phone line does not pass arbitrarily high frequency components. So, how does a signal with some kind of nonanalyticity pass through a finite-bandwidth signal channel?

I think this would be in direct violation of a key tenet of Einstein's special theory of relativity, which states that nothing, under any circumstances, can exceed the speed of light.
