Tuesday, February 10, 2009

Experimental physics rules to live by?

I thought that it might be fun to have a discussion about "rules to live by" in experimental physics. Here are a few that I think may qualify, and of course I'd appreciate your suggestions for others....
  • Know your apparatus. Don't blindly use a piece of equipment as a black box. Understand how it works. Just because some hand-me-down voltage supply is supposed to put out a square wave doesn't mean that it actually does. You can't blindly use a 10 MOhm input impedance voltage amplifier to measure the voltage dropped across a 1 GOhm load.
  • When trying to understand something new, turn every experimental knob as much as you can. You'll be kicking yourself if you decide not to bother cooling the sample below 10 K, and then someone else finds an exciting effect at 9 K. Clearly one needs to strike a balance between time and likelihood of discovery, but in general, if you can tune a parameter, do so.
  • Estimate the expected signal size, in real, useful units. Double-check your calculation. My thesis advisor used to tell a story about some students in an advanced undergrad lab who thought their experiment was working well, but it turned out that a wire was actually disconnected, and they'd screwed up the calculation of the expected signal size so that the answer agreed with the output of the broken setup.
  • Turn knobs finely enough. There are multiple tales out there in physics of discoveries being missed or almost missed because someone was tuning some parameter in coarse steps and skipped over a big feature in the data. That's how superconductivity in MgB2 was missed back in the 1960s, and how SLAC almost didn't co-discover the J/Psi particle.
  • Yes, you really do need to reproduce that result. You can see anything once. If the wild, exciting effect you just observed is real, you should be able to see it again if you're careful and diligent.
  • Be your own harshest critic. If you won't, the referees surely will.
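The impedance point in the first rule is easy to make quantitative. Here's a quick back-of-the-envelope sketch of the loading error; the resistances are the ones from the rule above, and the divider model assumes the amplifier input simply sits in series with the source impedance:

```python
# Loading error when measuring a high-impedance source with a
# finite-input-impedance voltage amplifier. Resistances are the values
# from the rule above; the divider model is the usual idealization.
R_source = 1e9   # ohms: the 1 GOhm load whose voltage drop you want
R_input = 10e6   # ohms: the amplifier's 10 MOhm input impedance

# The amplifier input forms a voltage divider with the source impedance,
# so it only sees the fraction R_input / (R_input + R_source) of the
# true voltage.
fraction = R_input / (R_input + R_source)
print(f"Fraction of the true voltage actually measured: {fraction:.2%}")
```

The amplifier reports roughly 1% of the real voltage drop, a factor-of-100 error that no amount of averaging will fix.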

13 comments:

Andrew said...

I have an amendment to your "knob rules." You should turn every knob you can, and you should look finely enough, as you say. But in the age of computerized data taking, it is all too easy for students to set up 17-dimensional data scans where every knob is turned as finely as it can be tuned, and of course the data is almost incomprehensible and unviewable.

I would say, then: turn knobs BY HAND first. Get a feel for what you are looking at before turning the computer loose. Then try to formulate a good question, where you turn the right one or two knobs at a reasonable granularity.

Also, when setting up experiments that take a while to run -- whether cooling down a fridge to test some kind of fab problem, or setting up a three-day computerized data-taking marathon -- try to make sure that the likely outcomes are actually informative. It is all too easy to perform a "test" where 95% of the possible outcomes provide you with little or no information.

This often happens when something has gone wrong with sample fab for us. We will have a run of bad luck, where samples don't work for some reason or another. We'll make a test sample, changing some parameter to probe a probable cause. However, in making that sample, if something else unrelated but unusual occurs, this sample is virtually useless. If you test it, and things are still broken, you will not be sure whether it was broken because you didn't change the right parameter, or broken because something else unusual happened. So testing this sample takes a lot of time, but doesn't really provide you much data.

I find it very helpful to write down a list of possible outcomes, and what I would learn from each one. I'll often find in doing this that I'm asking the wrong question, and that I can ask a much better question and get a better answer.

On the flip side of this would be a third rule: sometimes the best experiments are complete surprises. Don't be so focused on doing "your" experiment that you miss out on some really exciting science that just sneaks up on you.

thm said...

On the knobs and MgB_2: perhaps another rule is to plot your data. Just after the discovery of superconductivity in MgB_2, I plotted up some heat capacity measurements (which someone else had dug up) that had been published in tabular form only, back in 1957 (R. Swift and D. White, J. Am. Chem. Soc. 79, 3641 (1957)). The plot showed a clear change in slope at what is now known to be the superconducting transition temperature.

Pascal Mickelson said...

I'd say that the zeroth rule of troubleshooting an experiment is to ask, "Is it plugged in?"

I also agree with Andrew's comment that one should first turn the knobs by hand to gain an intuition for the response of the system. That's a lesson I've learned the hard way.

Doug Natelson said...

Andrew - I agree completely about turning knobs by hand or spot-checking first. As you say, it's too easy these days to set up a computer-controlled experiment to churn for 24 hours of data taking, only to realize later that you're measuring the wrong thing or tuning the wrong parameter. I'll go further. If you set up a many-hour automated run, at least check to make sure that it hasn't done something screwy partway through. It's great that computers let us automate certain tasks, but you still have to check along the way that you're collecting useful data.

Anonymous said...

You can't blindly use a 10 MOhm input impedance voltage amplifier to measure the voltage dropped across a 1 GOhm load.

No kidding.

Uncle Al said...

Discover what you were told to discover as described in your grant funding request, on your PERT chart, in your budget. Anything else is felonious negligent discovery - embezzlement of laboratory resources outside the scope of your jurisdiction. A good manager would earn a fine performance bonus for crushing your insubordination.

CarlBrannen said...

Nathan Rynn told me that he never let a grad student work in his lab unless they answered the question "who changes the oil in your car" with "I do it myself of course".

That might be out of date. In any case, the rule is that the people, even the little people, matter as much as the equipment.

Anonymous said...

Haha, Carl. My father tried emphatically for many years to get me hooked on his hobby (fixing up cars) with little success--I just wasn't interested. One time after he mocked me for not changing my own oil, I replied "Dad, the reason I study so hard and pursue all these degrees is so I don't have to change my own oil".....needless to say, he didn't find that quite as funny as I did.

Ironically, I commented just the other day that one of the things I am most fond of from graduate school was all the plumbing and dirty work we had to do before we could even run the experiment. Even though I love building stuff with my hands and am rather good at it, I am not sure why I just can't get into cars......

Wolf said...

A corollary to the first equipment rule would be "read the manual". It saves one a lot of grief ("Make sure input does not exceed 1V"), and provides a lot of instrument specs that come in useful when one's trying to analyse the data.

Alex said...

When you're actually getting data, DO NOT GO HOME! That device will NOT work the same way in the morning, the sample will not show the same effect, and the stars will not align. It will take another 12 hours of fiddling with it to get where you were last night.

Scott said...

Perhaps not just for experimentalists, but keep on top of the literature beyond your own little corner of the physics world.

physicsandcake said...

"turn every experimental knob as much as you can"

But do it systematically - it's very easy to get lost in parameter space with 2 or more variables. Start by changing one variable (the one you think you understand the most, or the one that is experimentally the quickest) first. If the results look sensible, only then consider changing more parameters.

It's also a good idea to do a 'rough' sweep when taking data and practice using your analysis tools with that. I learnt the hard way not to spend hours taking a painstakingly detailed dataset just to realise that there was something wrong with the initial settings after doing the analysis.

As stated previously, if any interesting effect is real, it will be reproducible, and you can get the detailed data later.
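The rough-sweep-then-refine advice above (and Doug's "turn knobs finely enough" rule) can be illustrated with a toy example: a narrow dip in an otherwise smooth resistance-vs-temperature curve, sampled coarsely and then finely. All numbers here are invented purely for illustration:

```python
import numpy as np

# Toy model: smooth background with a narrow Lorentzian dip at T0 = 9 K,
# width ~0.2 K. All parameters are made up for illustration.
def resistance(T, T0=9.0, width=0.2):
    """Normalized resistance with a narrow dip at T0 (invented model)."""
    return 1.0 - 0.8 * width**2 / ((T - T0)**2 + width**2)

coarse = np.arange(4.0, 15.0, 2.0)   # 2 K steps: the dip is easily missed
fine = np.arange(4.0, 15.0, 0.05)    # 50 mK steps: the dip is resolved

print("minimum on coarse grid:", resistance(coarse).min())
print("minimum on fine grid:  ", resistance(fine).min())
```

The coarse scan barely registers the feature, while the fine scan shows the resistance dropping dramatically near 9 K. A cheap rough sweep tells you where to spend the fine steps.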

Doug Natelson said...

Good suggestions, everyone.

I'd offer a refinement of one, in the vernacular: Don't be "afraid to run". I've seen students time and again get so emotionally invested in setting up an experiment that they almost don't want to flip the "on" switch, for fear that it won't work or that the data will look poor. No one likes experiments that don't go as planned, but you've got to get past this if you're going to get anything done. It's much better to have a high attempt frequency, so to speak, than to let doubt paralyze you.