
Monday, May 31, 2010

Demagnetization cooling

I've been meaning for some time to write a post about demagnetization cooling, a technique that is readily explained in an undergrad stat mech class, but has to be seen to be believed.  I was finally inspired to write this post by seeing this preprint.  Here's the basic idea.  Start with an ensemble of magnetic moments in what we will call a "demag stage".  The sample of interest will be thermally connected to this demag stage.  When I worked on this stuff in grad school, we used the nuclear magnetic moments of copper nuclei, but it's also possible to use electron magnetic moments in a paramagnetic salt of some kind.  Anyway, apply a large magnetic field to these magnetic moments (spins) while attached to a refrigeration source of some kind. It's energetically favorable for the moments to align with the applied field. When they flip to align, the energy that is released is carried away by the refrigerator.  Likewise, in the case of a metal like copper, the ramping up of the magnetic field can generate heat via eddy currents; that heat is also carried away by the refrigerator.  Now, once the spins are basically aligned, unhook the thermal connection between the demag stage and the refrigerator, and gently lower the applied magnetic field.  What happens?

First, the formalistic explanation.  Basic statistical physics tells us that the entropy of an ensemble of magnetic moments like those in our demag stage is only a function of the ratio B/T, where B is the applied magnetic field and T is the temperature of the moments.  If we are gentle in how we lower B, so that the entropy remains constant, then lowering B by a factor of two also lowers T by a factor of two.  When I first did this as a grad student, it seemed like magic.  We thermally isolated the demag stage (plus sample), and I used an ancient HP calculator to tell a power supply to ramp down the current in a superconducting magnet.  Voila - like magic, the temperature (as inferred via the capacitance of a special pressure transducer looking at a mixture of liquid and solid 3He) dropped like a stone, linear in B.  Amazing, and no moving parts!  
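For concreteness, here is the textbook version of that statement for non-interacting spin-1/2 moments (the actual copper nuclei are spin-3/2, but the scaling argument is identical):

```latex
% Entropy of N non-interacting spin-1/2 moments mu in a field B at temperature T
\[
  \frac{S}{N k_B} = \ln\!\bigl(2\cosh x\bigr) - x\tanh x,
  \qquad x \equiv \frac{\mu B}{k_B T} .
\]
% S depends on B and T only through the ratio B/T, so an isentropic ramp keeps B/T fixed:
\[
  \frac{B_i}{T_i} = \frac{B_f}{T_f}
  \quad\Longrightarrow\quad
  T_f = T_i\,\frac{B_f}{B_i} .
\]
```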

So, physically, what's really going on, and what are the limitations?  Well, the right way to think about the ensemble of magnetic moments is as an entropic "sink" of energy.  Equilibrium statistical physics is based on the idea that all microscopic states of a system that have the same total energy are equally likely.  When you create an ensemble of 10^23 magnetic moments all pointed in the same direction (that is, with an aligned population much greater than what one would expect in equilibrium based on the new value of B), the most likely place for thermal energy in your system to go is into flipping those spins, to try to bring the aligned population down and back into the new equilibrium.  That means that heat will flow out of your sample and out of, e.g., the lattice vibrations of the demag stage, and into flipping those spins.  The fortunate thing is that for reasonable numbers of moments (based on volumes of material) and accessible initial values of B and T, you can get lots of cooling.  This is the way to cool kilogram quantities of copper down to tens of microkelvin, starting from a few millikelvin.  It's a way to cool a magnetic salt (and attached sample) down from 4.2 K to below 100 mK, with no messy dilution refrigerator, and people sell such gadgets.  
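To put a rough number on how much heat such a spin bath can soak up, here is a crude back-of-the-envelope sketch in code.  The specific inputs (one mole of copper, the fields and temperatures, and treating the spin-3/2 Cu nuclei as simple two-level systems) are illustrative assumptions of mine, not the parameters of any real demag stage:

```python
import numpy as np

kB  = 1.380649e-23     # Boltzmann constant, J/K
muN = 5.0508e-27       # nuclear magneton, J/T
mu  = 2.2 * muN        # rough Cu nuclear moment (two-level-system caricature)
N   = 6.022e23         # one mole of copper nuclei

B_i, T_i = 8.0, 4e-3   # precool at 4 mK in an 8 T field (illustrative)
B_f      = 0.04        # stop the demag at 40 mT (illustrative)

# Adiabatic demag: S depends only on B/T, so B/T stays fixed
T_f = T_i * B_f / B_i
print("Final temperature: %.1f microkelvin" % (T_f * 1e6))

# Schottky heat capacity per spin for a two-level system with splitting 2*mu*B
def C_per_spin(T, B):
    x = 2 * mu * B / (kB * T)
    return kB * (x / 2)**2 / np.cosh(x / 2)**2

# Heat the spin bath absorbs while warming from T_f back up to 1 mK at fixed B_f
T = np.linspace(T_f, 1e-3, 20000)
C = N * C_per_spin(T, B_f)
Q = np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(T))   # trapezoidal rule
print("Heat absorbed warming to 1 mK: %.0f microjoules" % (Q * 1e6))
```

With numbers like these the budget is only a couple of hundred microjoules, so even a few nanowatts of stray heat leak would exhaust it in about a day, which is why the isolation and heat-leak issues below matter so much.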

There are practical limitations to this, of course.  For example, there is no point in reducing the external B below the value of the effective internal magnetic field due to spin-spin interactions or impurities.  Also, when demag-ed, the system is a closed box with a finite (though initially large) heat capacity.  Any measurement done on an attached sample will dump some heat into the stage, even if only through stray heat leaks from the rest of the world, limiting the amount of time the stage and sample remain cold before needing another demag cycle.  Finally, and most relevant to the preprint linked above, there are real issues with establishing thermal equilibrium.  For example, it is not hard to get the nuclei of copper to have a much lower effective temperature than the conduction electrons, with an equilibration time longer than the time the demag-ed spin system can be kept cold.  In other words, while the nuclei can get very cold for a while, the electrons are never able to reach similar temperatures.  Still, the whole concept of cooling through demagnetization is very interesting, and really brought home to me that all the abstract concepts I'd learned about entropy and spins had real consequences.   

Wednesday, May 26, 2010

Workshop: Negotiating the Ideal Faculty Position

For the last few years I have been involved with Rice's ADVANCE program, an NSF-supported initiative designed particularly to increase the number of women faculty members in the sciences and engineering.  This recent FSP post reminded me that now is the right time to advertise ADVANCE's upcoming workshop on negotiating the ideal faculty position, and this blog is one way to reach a wide audience.  Potential participants need to apply, since space is limited.  (Last year there were 1100 applications and 65 slots.)  View this as a way to practice a job talk in a friendly, constructively critical environment, and to discuss issues that come up in the faculty job hunt (e.g., how the process works; picking letter-writers; lab space + startup packages).  This is a way to talk to knowledgeable people in your own and related areas who are not your mentors or direct potential employers.    

Tuesday, May 25, 2010

Tidbits.

  • While modern communications tools are definitely improving, there is still no substitute for actually sitting down with collaborators at a table with pen and paper, and hashing things out face-to-face.   I just returned from a quick trip to talk with some theorist colleagues, and it was a great way to get a lot accomplished in a relatively short period of time.  Much higher bandwidth than repeated emails.
  • If you're ever invited to write a review article, and you have any concerns about the quality of the journal or the publisher, don't ignore your instincts.  My postdoc and I just went through a painful experience along these lines - in the end, everything's going to be fine (with another publisher!), but the original publisher (I'll name names some other time) was amazingly incompetent.  You'd think, for example, that a journal editor would have an email system that actually accepts attachments, particularly if their web-based system is utterly fubar.
  • A follow-up to my recent post about IR CCDs....  Anyone out there have experience with the MOSIR-950?  It's actually a Si CCD with a special front end that makes it sensitive from 950 nm out to around 1700 nm.
  • I'm very tempted to buy this.  (If you have never seen the tv show Lost, you won't get this.)

Thursday, May 20, 2010

This will be a big news story.

Craig Venter's company appears to have succeeded in creating a synthetic genome and getting it into an emptied-out (prokaryotic bacterial) cell, essentially changing the cell into a new species. This is going to be huge. Of course, we still don't actually understand what everything in that custom genome does, exactly - much of it is copied from another bacterial species. Still, it's an amazing achievement that one can design (on a computer) a DNA sequence, stitch it together via various methods, and get a cell to "run" that software.

Wednesday, May 19, 2010

IR CCD arrays for spectroscopy?

The charge-coupled device, or CCD, was the gadget behind part of this past year's Nobel prize in physics.  Far and away, the most common CCDs out there are based on silicon, and these devices are highly efficient from the visible out to the near-infrared, with efficiency really taking a major hit at wavelengths longer than about 1100 nm.  One advantage of CCDs is that generally their total efficiency is high:  an incident photon stands a good chance of producing some charge on a pixel, and that charge can be collected well, so that getting a "count" on a particular pixel requires only a couple of photons.  It turns out that one can also get CCDs based on InGaAs, a semiconductor with a smaller band gap than Si, and therefore sensitive to longer wavelengths, typically from around 950 nm out to 1700 nm or so.  I have been thinking about trying to get such a gadget for a few reasons, Raman spectroscopy in particular, and I would welcome reader recommendations.  For our application we really would like something with CCD-like sensitivity (as opposed to a linear array of photodiodes, which is considerably cheaper, but requires on the order of 100 photons to produce a single "count").  Feedback would be greatly appreciated.  I know that Princeton Instruments sells one gadget (though really for imaging rather than spectroscopy), and Newport appears (from press releases)  to offer something with more pixels, though it doesn't show up on their website....
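As a sanity check on those cutoff wavelengths, the long-wavelength limit of a detector is set by its band gap through lambda_max = hc/E_g.  A quick back-of-the-envelope (the band gap values are the usual textbook room-temperature numbers):

```python
# Long-wavelength cutoff from the band gap: lambda_max = h*c / E_g
h, c, e = 6.626e-34, 2.998e8, 1.602e-19   # Planck constant, speed of light, electron charge (SI)

for material, Eg_eV in [("Si", 1.12), ("In(0.53)Ga(0.47)As", 0.74)]:
    lam_nm = h * c / (Eg_eV * e) * 1e9
    print("%s: Eg = %.2f eV -> cutoff near %.0f nm" % (material, Eg_eV, lam_nm))
```

That works out to roughly 1100 nm for Si and roughly 1700 nm for standard InGaAs, consistent with the numbers above.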

Friday, May 14, 2010

Scale and perspective II

The title of this post harkens back to a previous example of stellar corporate governance.  Today the CEO of BP made the statement that "The Gulf of Mexico is a very big ocean. The amount of volume of oil and dispersant we are putting into it is tiny in relation to the total water volume". While that is literally true, as a physicist I have to ask, is that the right metric? I mean, are we worried about the total fraction of the Gulf of Mexico that is oil? No, because everyone knows that the relevant point of comparison is not the total volume of water, but the point at which the oil content is having catastrophic effects on the environment.  We can gain some perspective by comparing with other oil spills. According to experts who have viewed the (long delayed by BP) video of the leak, the flow rate of oil is somewhere around 70,000 barrels a day, or about one Exxon Valdez disaster (I think everyone sane agrees that it was a real mess) every four days. This has been going on for three weeks. Arguing that "the ocean is really big so this isn't that much of a problem" is just wrong.
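For a quick sanity check on that comparison (using the roughly 260,000 barrels commonly quoted for the Exxon Valdez spill - my number, not BP's):

```python
# Rough comparison of the leak rate above to the Exxon Valdez spill
leak_rate = 70000      # barrels per day, the estimate cited above
valdez    = 260000     # barrels, commonly quoted size of the 1989 Exxon Valdez spill
days      = 21         # about three weeks of leaking so far

print("One Valdez-equivalent every %.1f days" % (valdez / leak_rate))
print("About %.1f Valdez-equivalents so far" % (leak_rate * days / valdez))
```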

update:  It's increasingly clear that BP is far more worried about their liability than about actually fixing the problem.  Check out this quote from the NY Times:  

BP has resisted entreaties from scientists that they be allowed to use sophisticated instruments at the ocean floor that would give a far more accurate picture of how much oil is really gushing from the well.

“The answer is no to that,” a BP spokesman, Tom Mueller, said on Saturday. “We’re not going to take any extra efforts now to calculate flow there at this point. It’s not relevant to the response effort, and it might even detract from the response effort.” 


Right, because good engineering solutions have nothing at all to do with accurately understanding the problem you're trying to solve. Idiots.

Tuesday, May 11, 2010

What do fancy research tools really cost at a university?

Over the years, I've become convinced that there are lies, damned lies, and cost accounting.  What I mean by this is that "true costs" for various items in a business or at a university (a type of nonprofit business, after all) are sometimes allocated in whatever way is necessary to bolster a particular argument at hand.  If those making an argument want something to look like a bargain, no problem, there's a way to do the accounting for that.  If those making an argument want to make something look so expensive that it's economically unattractive, no problem, there's a way to do that, too.  I remember as a postdoc when the part of Lucent Technologies that dealt with real estate argued (apparently successfully) that they should get rid of the simple general stockroom because somehow having the square footage allocated to that use was losing money.  So they shut down the stockroom, and had a couple of hundred PhD scientists and engineers spending their (expensive) time ordering 4-40 screws from McMaster-Carr online or over the phone instead of just walking upstairs and grabbing some.

Let's take an electron microscope as a test case.  Suppose a university or company buys an SEM for $350,000 (for the sake of round numbers).  How much should they charge, fairly, for its use?  Let's assume that this is a shared tool and not just sitting in one person's lab.  This microscope and associated hardware take up something like 100 ft² of floor space.  The microscope also needs electricity (say 1 kW) and cooling water.  Now, a university is unlikely to charge a department or faculty member "rent" on the floorspace, but a large company may decide to "tax" a business unit for space at some rate.  The electricity and cooling water are likely part of a university's or business's "overhead".  Overhead charges are assessed for the kinds of costs of doing business that are difficult to attribute to any particular responsible source.  For example, the overhead rate at my institution is 52.5%.  That means that for every $1 of direct research cost (say a grad student's salary), the university charges my research account (and therefore the funding agency) $1.525.  That "extra" $0.525 goes to cover the university's costs in, e.g., keeping the lights on in my lab, the air handlers running for my fume hoods, and the road paved outside my building.

If the university or business wants to maintain the electron microscope, they probably want to buy an annual service contract for, say, $25K.  Now, in the absence of a staff person to run the system, you might think that a reasonable user fee would then be $25K divided by the number of hours the system is used (say 2000 hours per year).  Not so fast - you have to charge overhead.  Moreover, the university or business may decide to depreciate the SEM.  That means that they may have an interest in replacing the SEM eventually, so they are allowed to tack on a depreciation cost, too.  For our example, a typical depreciation schedule would be seven years, so in addition to the actual maintenance cost, they would tack on, in this case, $50K per year.  There are major federal rules about depreciation.  For example, you can't buy something with a federal grant (e.g., an NSF "Instrumentation for Materials Research" grant) and then also depreciate it - that would be like double-billing the government, and that's not allowed.


If the university or business does have some fraction of a staff person responsible for the instrument, it may be fair (depending on the discussion) to consider a fraction of that person's salary (plus fringe benefits [e.g. health insurance] plus overhead) as a cost to be recovered as well.  
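To make that arithmetic concrete, here is a toy version of how such an hourly rate could be assembled from the numbers above.  The staff fraction, salary, fringe rate, and the choice of which pieces carry overhead are purely illustrative assumptions on my part; real cost accounting varies by institution, which is rather the point of this post:

```python
# Toy cost-recovery calculation for a shared $350K SEM - all inputs illustrative
purchase_price   = 350000   # dollars
service_contract = 25000    # dollars per year
depreciation_yrs = 7        # typical depreciation schedule
overhead_rate    = 0.525    # the indirect-cost rate quoted above
hours_per_year   = 2000     # assumed annual usage

staff_fraction = 0.25       # assumed fraction of a staff scientist's time
staff_salary   = 60000      # assumed salary, dollars per year
fringe_rate    = 0.30       # assumed fringe-benefit rate

depreciation = purchase_price / depreciation_yrs
staff_cost   = staff_fraction * staff_salary * (1 + fringe_rate)

# Which items carry overhead is a policy choice; here everything but depreciation does.
annual_cost = (service_contract + staff_cost) * (1 + overhead_rate) + depreciation
print("Hourly rate to recover costs: $%.2f" % (annual_cost / hours_per_year))
```

Depending on which of those pieces you include (and which ones get quietly absorbed as subsidy), the same instrument can come out looking cheap or expensive per hour.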

So, the next time you are paying $30/hour for access to an electron microscope, and you're wondering where on earth that figure came from, now you have at least some idea.   You can also see how administrations can sometimes argue that they "lose money" on research - they cannot always recover the costs that they put into things (e.g., the actual overhead income may not cover the utility costs), and sometimes they choose not to  (e.g., by not charging rent for space).  This is all stuff about which I was blissfully ignorant back in my student days.

Thursday, May 06, 2010

Amateur economics

Perhaps a more economically savvy reader could comment, but is it fair to say that some fraction of the recent decline of the US stock markets (excepting dramatic short-term spikes like the one this afternoon between 14:30 and 15:00 EDT) is not a "real" decline, but a reflection of the increased value of the dollar relative to the euro?  From what I can see, the euro has fallen about 8.5% against the dollar since mid-March, and the US financial markets are actually down about 4% (mostly in the last week or two) over the same time period.  Naively, if dollars are worth more, one should see "deflation" on the dollar-denominated stock markets, I would guess....
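One crude way to frame the question is to ask what the same index did when quoted in euros rather than dollars.  Using just the rough percentages above (not actual market data):

```python
# If a dollar-denominated index fell ~4% while the euro fell ~8.5% against the dollar,
# what did the index do when measured in euros?
index_change_usd = -0.04    # change in the index, in dollar terms
eur_vs_usd       = -0.085   # change in the euro's value against the dollar

# Each index unit is worth (1 + index_change_usd) times as many dollars as before,
# and each dollar now buys 1/(1 + eur_vs_usd) times as many euros.
index_change_eur = (1 + index_change_usd) / (1 + eur_vs_usd) - 1
print("Index change in euro terms: %+.1f%%" % (100 * index_change_eur))
```

By that crude measure the market is actually up a few percent in euro terms, which is at least consistent with the idea that part of the dollar-denominated decline is a currency effect.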

Sunday, May 02, 2010

Manageable-sized, LaTeX-happy .eps figures

In much of physics, LaTeX is the standard for typesetting scholarly work, including research papers and theses.  Traditionally, when working with figures in LaTeX documents, the preferred format is encapsulated postscript (.eps).  There are any number of ways to produce figures in .eps format, but some people seem to have recurring problems doing this with economical file sizes.  For example, today, because I am on a committee reviewing doctoral theses for a departmental award, I had to download a thesis that was around 50 MB in size, entirely because the figures were unnecessarily huge.  Over the years I've come up with a few different ways of making small, good (in the sense that they render nicely and LaTeX likes them) .eps files.  Here's a quick how-to.  Mostly this is from the point of view of a Windows user, by the way.  (I have both PCs and Macs, fwiw).

On a PC, I cannot recommend gsview and ghostscript strongly enough - they're essential tools.  There are Linux versions of these as well, of course.  In general, if a postscript file opens cleanly in gsview, you're going to be fine with LaTeX compatibility.  gsview is also perfect for redefining bounding boxes, grabbing individual pages from a multipage postscript file, converting .ps into .eps, and other related tasks.  Another set of tools worth having is ImageMagick.  Very helpful for converting between formats.

Option 1:  Use an application that can natively export nice .eps (that is, vector format rather than automatically using bitmapping) with no attached preview.  For example, Origin can do this, as can Matlab or gnuplot if properly set up.  On the Mac, Inkscape is a great piece of vector drawing and editing software.  Adobe (who invented postscript, as far as I know) has programs like Illustrator and Photoshop that can do this.  The former seems better at producing economical output.  The latter, without careful intervention, produces bloated, bitmapped, preview-laden junk.  
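If you are a Python person, matplotlib can also export clean vector .eps directly; a minimal sketch (the plot and filename are arbitrary):

```python
# Export a plot as vector (not bitmapped) encapsulated postscript
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("figure.eps", format="eps", bbox_inches="tight")  # vector output, LaTeX-friendly
```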

Option 2:  Use a generic postscript printer driver that can print to a file in .eps format.  Adobe has one for the PC that lives here.  Using this, you can do things like use powerpoint as a cheesy compositing tool to draw something or put together multiple images, and then print a particular slide to an .eps file.  The result will be LaTeX-friendly, but not necessarily economical in file size.

Option 3:  Produce an image in a different, nicely economical format and then use a "wrapper" to produce a .eps file.  Here's one example.  Suppose I have a huge bitmap file.  I can use my favorite software (imagemagick, or even MS Paint or powerpoint) to save the image as .jpg or .png.  Then I can use jpeg2ps (in the former case) or bmeps (in either case) to produce a .eps file that is only slightly larger than the originating image.  

This last option provides a way out of the annoying situation of having a huge (say 10 MB) .eps file produced by some other application (like Matlab).  You can open the offending .eps file in gsview, and try to copy the on-screen image (zoomed as needed) to the buffer (that is, click on the displayed image and hit ctrl-c on a Windows PC).  Then paste the buffer into either paint or powerpoint, and export it as a .png file (nice format - lossless compression!).  Once you have the .png file, run bmeps to produce a new .eps, and you're all set.  Your 10 MB old .eps file can end up as a 70 kB new .eps file.  This wrapper strategy is also the one recommended by the arxiv folks.
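If you would rather script the bitmap-conversion step than click through paint or powerpoint, the Python imaging library (PIL/Pillow) can handle the downsample-and-save-as-PNG part; the filenames and target resolution here are just placeholders, and you still hand the result to bmeps or jpeg2ps as described above:

```python
# Shrink an oversized bitmap and re-save it as PNG before wrapping it in .eps
from PIL import Image

im = Image.open("huge_figure.bmp")    # placeholder name for the oversized bitmap
im = im.convert("RGB")                # drop alpha channels / palette oddities
im.thumbnail((1600, 1600))            # downsample in place; pick a resolution you can live with
im.save("figure.png", optimize=True)  # PNG is lossless and compresses well; then run bmeps on it
```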

This is by no means exhaustive, but if it saves anyone the pain of having monster .eps files that warp the final documents, then it was worth posting.  (I suspect that someone will comment about how things like this are one reason that journals are drifting toward MS Word.  Word carries with it many, many other problems for scientific writing, in my opinion, but I'd rather not get into a debate on the subject.)