Recently I've had conversations with a couple of people, including someone involved in journalism, about what makes a physics experiment good. I've been trying to think of a good way to explain my views on this; I think it's important, particularly since the lay public (and many journalists) don't have the background to judge realistically for themselves which scientific results are good and which are bad.
There are different kinds of experiments, of course, each with its own special requirements. I'll limit myself to condensed matter/AMO sorts of work, rather than high energy or nuclear. Astro is a whole separate issue, where one is often an observer rather than an experimenter, per se. In the world of precision measurement, it's absolutely critical to understand all sources of error, since the whole point of such experiments is to establish new limits of precision (like the g factor of the electron, which serves as an exquisite test of quantum electrodynamics) or bounds on quantities (like the electric dipole moment of the electron, which is darned close to zero as far as anyone can tell, and if it were nonzero there would be some major implications). Less stringent but still important is the broad class of experiments where some property is measured and compared quantitatively with theoretical expectations, either to demonstrate a realization of a prediction or, conversely, to show that a theoretical explanation now exists that is consistent with some phenomenon. A third kind of experiment is more phenomenological: demonstrating some new effect and placing bounds on it, showing the trends (how the phenomenon depends on controllable parameters), and advancing a hypothesis to explain it. This last type of situation is moderately common in nanoscale science.
One major hallmark of a good experiment is reproducibility. In the nano world this can be challenging, since measured properties can sometimes depend critically on parameters over which we have no direct control (e.g., the precise configuration of atoms at some surface). Still, at least in macroscopic systems, one should reasonably expect the same experiment, with identical sample preparation, to give the same quantitative results when run multiple times. If it doesn't, that means (a) you don't actually have control of all the parameters that matter, and (b) it will be very difficult to figure out what's going on. If someone is reporting a surprising finding, how often is it seen? How readily is it reproduced, especially by independent researchers? This is an essential component of good work.
Likewise, clarity of design matters. How are the different parameters in the experiment inferred? Is the procedure for finding those values robust? Are there built-in crosschecks one can do to ensure that the measurements and related calculations make sense? Can the important independent variables be tweaked without affecting each other? Are the measurements really providing useful information?
Good analysis is also critical. Are there hidden assumptions? Are quantities normalized in sensible ways? Do the trends make sense? Are the data plotted in ways that are fair? In essence, are apples being compared to apples? Are the conclusions merely consistent with the data, or actually implied by it?
I know that some of this sounds vague. Anyone more eloquent than me want to try to articulate this more clearly?