Gravity remains an enduring challenge in physics. Newton had the insight that he could understand many phenomena (e.g., the falling of an apple, the orbit of Halley's comet) if the gravitational interaction between two objects is an attractive force, acting along the line between the objects, proportional to the product of the objects' masses and inversely proportional to the square of the distance between them ( \(F = - G M_{1}M_{2}/r^{2}\) ). The constant of proportionality, \(G\), is Newton's gravitational constant. About 225 years later, Einstein had the insight that, more generally, one should think of gravity as actually distorting space-time; what looks like a force is really a case of freely falling objects moving along the (locally) straightest trajectories available to them. (Obligatory rubber sheet analogy here.) In that theory, general relativity (GR), Newton's constant \(G\) again appears as a constant of proportionality, setting the scale for the amount of space-time distortion produced by a given amount of stress-energy (rather than just good old-fashioned mass). GR has been very successful so far, though we have reasons to believe that it is the classical limit of some still-unknown quantum theory of gravity. Whatever that quantum theory is, \(G\) must still show up to set the scale of the gravitational interaction.
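To make the inverse-square law concrete, here is a quick back-of-the-envelope sketch in Python (the numerical values are standard rounded constants, not taken from any particular measurement):

```python
# Newton's law of gravitation: F = G * m1 * m2 / r**2 (magnitude of the
# attractive force along the line between the two objects).

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2 (rounded)

def gravitational_force(m1, m2, r):
    """Magnitude (N) of the force between point masses m1, m2 (kg) at separation r (m)."""
    return G * m1 * m2 / r**2

# Two 1 kg masses 1 m apart: the force is numerically just G, in newtons.
print(gravitational_force(1.0, 1.0, 1.0))   # ~6.7e-11 N -- tiny!

# A 0.1 kg apple at Earth's surface (M ~ 5.97e24 kg, R ~ 6.37e6 m):
print(gravitational_force(0.1, 5.97e24, 6.37e6))  # ~1 N, i.e., the apple's weight
```

The huge disparity between those two numbers is a preview of why measuring \(G\) in the lab is so hard: laboratory-scale masses pull on each other with almost nothing.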
It makes sense that we would like to know the numerical value of \(G\) as accurately and precisely as possible - it seems like the first thing you'd want to pin down, right? The challenge is that, as I've explained before, gravity is actually an incredibly weak force. To measure it well in absolute numbers, you need an apparatus that can measure small forces while not being influenced by other, faaaaaar stronger forces like electromagnetism, and you need to do something like measure the force (or the counter-force that you need to apply to null out the gravitational force) as a function of different configurations of test masses (such as tungsten spheres).
I'm revisiting this because of a couple (1, 2) of interesting papers that came out recently. As I'd said in that 2010 post, the challenge in measuring \(G\) is so difficult that different groups have obtained nominally high precision measurements (precise out to the fourth decimal place, such as \(G = (6.6730 \pm 0.00029) \times 10^{-11}~\mathrm{N\,m^{2}/kg^{2}}\)) that are mutually inconsistent with each other. See this plot (Fig. 1 from arxiv:1505.01774). The various symbols correspond to different published measurements of \(G\) over the last 35 years (!). The distressing thing is that there does not seem to be much sign of convergence. The recent papers are looking to see whether there is actually some periodicity to the results (as hinted by the sinusoid on the plot). To be clear: The authors are not suggesting that \(G\) really varies with a several year period - rather, they're exploring the possibility that there might be some unknown systematic effect that is skewing the results of some or all of the various measurement approaches. As both teams of authors say, the best solution would be to come up with a very clean experimental scheme and run it, undisturbed, continuously for years at a time. That's not easy or cheap. It's important to note that this is what real, careful measurement science looks like, not some of the stuff that has made web headlines lately.
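For the curious, the kind of periodicity check the papers describe is conceptually simple: once you fix a candidate period \(T\), the model \(G(t) = G_0 + A\sin(2\pi t/T) + B\cos(2\pi t/T)\) is linear in \((G_0, A, B)\) and ordinary least squares applies. The sketch below uses entirely synthetic data and an illustrative period of 5.9 years (the sort of several-year period the papers discuss); it is not the authors' actual analysis:

```python
import math

def fit_fixed_period(times, values, period):
    """Least-squares fit of G0 + A*sin + B*cos at a fixed period, via normal equations."""
    cols = [[1.0 for t in times],
            [math.sin(2 * math.pi * t / period) for t in times],
            [math.cos(2 * math.pi * t / period) for t in times]]
    # Normal equations M x = v for the 3 coefficients
    M = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(3)] for i in range(3)]
    v = [sum(c * y for c, y in zip(cols[i], values)) for i in range(3)]
    # Solve the 3x3 system by Gaussian elimination + back substitution
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
            v[j] -= f * v[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x  # [G0, A, B]

# Synthetic "measurements": a constant plus a small 5.9-year oscillation,
# sampled at irregular times (in years).
T = 5.9
times = [0.3, 1.1, 2.4, 3.0, 4.2, 5.5, 6.1, 7.8, 9.0, 10.3]
values = [6.674 + 0.0002 * math.sin(2 * math.pi * t / T) for t in times]
G0, A, B = fit_fixed_period(times, values, T)
print(G0, A, B)  # recovers ~6.674, ~0.0002, ~0
```

In practice one would scan over candidate periods and compare fit quality against the no-oscillation model; the hard part, as the post says, is the sparse, mutually inconsistent data, not the fitting.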
Thanks Doug! On this topic I have often wondered _why_ we think GR is a classical limit when it explains all observations so well. Is it just the feeling that everything has to be quantum eventually? Is it inconsistencies in cosmological models?
Anon@12:45, good question(s). I'd appreciate it if others would chime in, but here is my take. First, there is a bit of a bias based on the fact that our other classical field theories have turned out to be quantized. Second, there are important thought experiments that seem to highlight that our classical GR description is incompatible with quantum mechanics. For example, what happens if you make a low mass black hole, so that its Schwarzschild radius is comparable to its de Broglie wavelength? (In other words, can you use the singular solutions to GR to evade the uncertainty principle?) What happens if we consider an electromagnetic wave with a wavelength so short that it's comparable to what would be its Schwarzschild radius? It's arguments like that + dimensional analysis that lead to the ideas of the Planck length, a distance scale where, at minimum, you expect some kind of quantum gravity effects to be important. Third, there are other physical issues. Classical GR says that we can measure an absolute zero of energy density - that's what corresponds to a flat Minkowski spacetime. However, ordinary quantum mechanics assumes that the zero of energy is arbitrary; moreover, quantum field theories like the standard model tell us (with phenomena like the Lamb shift to back up the statement) that the "vacuum" is full of virtual particles and fields. (One possible way to reconcile this would be if, somehow, there were other quantum fields whose zero-point energy densities were negative, to exactly cancel out ordinary fields like EM whose zero-point energy densities are nominally positive. That's one heuristic argument in favor of supersymmetry, since superpartners would fit that description. However, even based just on the other arguments, it still seems like our classical picture of GR has to be modified in the really high energy limit.)
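The dimensional analysis I mentioned is a one-liner: \(\hbar\), \(c\), and \(G\) combine uniquely into a length, a mass, and a time. A quick numerical check (standard rounded constants):

```python
import math

# Planck scales from dimensional analysis on hbar, c, G.
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
m_planck = math.sqrt(hbar * c / G)      # ~2.2e-8 kg
t_planck = l_planck / c                 # ~5.4e-44 s

# The Planck mass is (up to factors of 2) where a particle's reduced Compton
# wavelength hbar/(m c) equals its Schwarzschild radius 2 G m / c^2 -- i.e.,
# where the black-hole thought experiments above start to bite.
print(l_planck, m_planck, t_planck)
```

Note how absurdly small the length and time scales are; that's why direct experimental probes of quantum gravity are so far out of reach.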
I've often heard gravity referred to as a very weak force but I'm unable to follow the reasoning behind this statement. It seems to me that there must be an implicit qualifier in this statement?
For instance, the gravitational force between the sun and earth is much stronger than any EM interaction between earth and sun. While the gravitational force between a pair of ions is much weaker than the EM force.
So when one says that gravity is a weak force, is there an implicit understanding that we are talking in the context of condensed matter?
Anon@1:51, Matt Strassler has an extremely lucid explanation of this issue here. The example I use in class: The electromagnetic forces between my shoes and the floor are easily enough to resist the attractive interaction between my mass (about 72 kg) and that of the entire planet earth (about 6e24 kg).
Follow-up: To summarize Strassler's post, with appropriate arguments, we can write down dimensionless numbers that characterize the relative strengths of various interactions. When we do that, we find that gravity is about 40 orders of magnitude weaker than the Coulomb interaction.
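One standard version of that dimensionless comparison: for a proton and an electron, both the gravitational and Coulomb attractions fall off as \(1/r^2\), so their ratio is independent of distance and cleanly characterizes relative strength. A quick check with standard rounded constants:

```python
# Ratio of gravitational to Coulomb attraction for a proton-electron pair.
# Both forces scale as 1/r^2, so the ratio is distance-independent.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9         # Coulomb constant, N m^2 / C^2
e = 1.602e-19       # elementary charge, C
m_p = 1.673e-27     # proton mass, kg
m_e = 9.109e-31     # electron mass, kg

ratio = (G * m_p * m_e) / (k * e**2)
print(ratio)  # ~4e-40: gravity is ~40 orders of magnitude weaker
```

The exact exponent depends on which particles you compare (two electrons give an even smaller ratio), but "dozens of orders of magnitude" is robust.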
This is Anon@1:51
I liked Strassler's explanation very much. Thanks for the link. Particularly the reasons why it survives out to longer distances.
But I don't agree with your example of standing on the floor. What if you stood in an (ionic) liquid?
Anon@1:51, yeah, my wording could have been more precise. How about: The electromagnetic interactions that provide rigidity to the floor and my shoes are enough to withstand the force of the entire planet earth pulling me downward.
This is Anon@1:51
I wasn't meaning to nitpick. My point is that EM interactions dominate when we're thinking of condensed matter regimes. Whereas gravity dominates at larger scales (Strassler also seems to make this point).
"The distressing thing is that there is not much sign of convergence." This might be distressing to supporters of Newtonian-Einsteinian gravitational theory, but not to supporters of Milgromian gravitational theory. Google "witten milgrom" for my viewpoint.
Why should there be a correlation between G and LOD (length of day)? I have suggested that the -1/2 in the standard form of Einstein's field equations should be replaced by -1/2 + dark-matter-compensation-constant, where this constant is approximately sqrt((60±10)/4) * 10^-5. The speed of the Earth's rotation influences the distance from a gravitational measuring instrument to the center of the earth. The gravitational experimenters compensate for this, but their compensation depends upon the (possibly false) assumption that dark-matter-compensation-constant = 0. If my theory is correct, the experimenters would be consistently slightly off in their assumptions concerning the slight gravitational redshifts influencing their instruments. Is my thinking wrong here?
google search on "improvements in measuring Newton's gravitational constant"
I need feedback on the following:
TITLE
Have gravitational metrologists discovered the ground-based analogue of the Anderson-Campbell-Ekelund-Ellis-Jordan flyby anomaly formula?
ABSTRACT
Measurements of Newton’s gravitational constant G have yielded inconsistencies suggesting that variations in measurements of G are correlated with Length Of Day (LOD). In 2007 Anderson et al. published an empirical formula that accurately described the flyby anomaly for 6 flybys of Earth. There might be one or more phenomena that explain the flyby anomaly and the inconsistencies in measurements of G.
MEASUREMENTS OF G
Measurements of Newton’s gravitational constant G show inconsistencies that are oscillatory over extended periods of time and are correlated with Length Of Day (LOD).[1]
FLYBY ANOMALY FORMULA
In 2007 Anderson et al. published an empirical formula that accurately described the flyby anomaly for 6 flybys of Earth. [2], [3] However, two Earth flybys (Rosetta spacecraft, 2007 and 2009) contradicted the predictions of the formula.
MODIFIED NEWTONIAN DYNAMICS
If the Pioneer anomaly is explained by a paint problem, this hypothesis should be confirmed by tests of the paint in a vacuum chamber. Fernández-Rañada suggested that there might be an anomalous redshift everywhere in the universe.[4] Kroupa, Pawlowski, and Milgrom have suggested that the empirical successes of MOND (Modified Newtonian Dynamics) require a new paradigm.[5] The simplest way to combine the ideas of Fernández-Rañada and Milgrom might be the Fernández-Rañada-Milgrom effect (replace the -1/2 in the standard form of Einstein’s field equations by -1/2 + dark-matter-compensation-constant, where this constant is approximately sqrt((60±10)/4) * 10^-5). Does the Fernández-Rañada-Milgrom effect approximately yield the Anderson-Campbell-Ekelund-Ellis-Jordan formula? Does an easy scaling argument show that the effect is approximately equivalent to MOND when gravitational accelerations are low?
Molecules in the Earth’s atmosphere, oceans, and crust undergo random molecular motions based upon frictional forces. On average, the molecules are prevented from orbital decay by friction. Whatever explains the Anderson-Campbell-Ekelund-Ellis-Jordan formula might also explain the metrological problems of ∆G/G because of anomalies influencing orbital decay.
REFERENCES
[1] J. D. Anderson, G. Schubert, V. Trimble, and M. R. Feldman, "Measurements of Newton's gravitational constant and the length of day," Europhysics Letters 110, no. 1 (Apr. 2015) 10002.
[2] J. D. Anderson, J. K. Campbell, J. E. Ekelund, J. Ellis, and J. F. Jordan, "Anomalous Orbital-Energy Changes Observed during Spacecraft Flybys of Earth," Physical Review Letters 100, no. 9 (Mar. 2008) 091102.
[3] R. Ellman, "Analysis of the anomalous orbital-energy changes observed in spacecraft flybys of Earth," arxiv.org.
[4] A. F. Rañada, "The Pioneer anomaly as acceleration of the clocks," Foundations of Physics 34, no. 12 (2004) 1955–1971.
[5] P. Kroupa, M. Pawlowski, and M. Milgrom, "The failures of the standard model of cosmology require a new paradigm," International Journal of Modern Physics D 21, no. 14 (Dec. 2012).