Monday, November 05, 2007

This week in cond-mat

Several entries from the arxiv this week. My descriptions here are a bit brief b/c of continued real-world constraints.

arxiv:0711.0343 - Dietl, Origin and control of ferromagnetism in dilute magnetic semiconductors and oxides
arxiv:0711.0340 - Dietl, Origin of ferromagnetic response in diluted magnetic semiconductors and oxides
These are two review articles by Tomasz Dietl, one of the big names in the dilute magnetic semiconductor (DMS) game. DMS are semiconductor materials that exhibit ferromagnetic order, usually because of doping with transition metal atoms that contain unpaired d electrons, such as manganese. The idea of integrating magnetic materials directly with semiconductor devices, and ideally controlling magnetism via electrical or optical means, is quite appealing. However, it is very challenging to achieve high magnetic ordering temperatures (e.g., room temperature) and decent electronic properties at the same time. In many systems the high doping levels required for the magnetism go hand in hand with lots of disorder, in part because crystal growth must be performed under nonequilibrium conditions to force enough transition metal atoms to sit on the appropriate lattice sites. Anyway, these articles (one coming out in J. Phys.: Cond. Matt. and the other coming out in J. Appl. Phys.) should give you plenty of reading material if you're interested in this area.

arxiv:0711.0218 - Leek et al., Observation of Berry's phase in a solid state qubit
In basic quantum mechanics we learn that particles are described by a complex wavefunction that has a phase factor. Propagation of a particle in space racks up phase at a rate proportional to the particle's momentum. As Feynman would tell us, each possible trajectory of a particle from A to B then contributes some complex amplitude (with a phase). The total probability of finding the particle at B is the squared magnitude of the sum of all of those amplitudes, rather than the classical sum of the probabilities of each path. Phase differences between paths lead to interference terms, and are the sort of thing responsible for electron diffraction, for example. Besides propagating through space, there are other ways of accumulating phase. In the case of the Aharonov-Bohm effect, the vector potential leads to an additional phase factor that depends on trajectory. In the general case of Berry's phase, the slow variation of some external parameters (such as electric fields) can lead to a similar geometrical phase factor. The intro to this paper gives a nice discussion of the classical analog of this in terms of moving a little vector on the surface of a sphere. Anyway, this team has used a solid-state superconducting qubit to demonstrate this geometric phase explicitly. Quite nice.
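
If you want a feel for the geometric phase numerically, here is a minimal Python sketch of the classical analog mentioned above - a toy example of my own, not anything from the paper. It parallel-transports a tangent vector around a circle of constant latitude on a sphere; the vector comes back rotated by the enclosed solid angle, and for a spin-1/2 following a slowly swept field the Berry phase is minus half that solid angle.

```python
import numpy as np

def holonomy_on_latitude(theta, n_steps=20000):
    """Parallel-transport a tangent vector around a circle of constant polar
    angle theta on the unit sphere, by repeatedly projecting the vector onto
    the local tangent plane (a standard discrete approximation)."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_steps + 1)
    pts = np.column_stack((np.sin(theta) * np.cos(phis),
                           np.sin(theta) * np.sin(phis),
                           np.full_like(phis, np.cos(theta))))
    v = np.array([0.0, 1.0, 0.0])   # local "east" direction at phi = 0
    v0 = v.copy()
    for p in pts[1:]:
        v = v - np.dot(v, p) * p    # project onto the new tangent plane
        v = v / np.linalg.norm(v)   # keep unit length
    # unsigned rotation angle of the transported vector after one full loop
    return np.arccos(np.clip(np.dot(v, v0), -1.0, 1.0))

for theta_deg in (30.0, 45.0):
    theta = np.radians(theta_deg)
    alpha = holonomy_on_latitude(theta)
    omega = 2.0 * np.pi * (1.0 - np.cos(theta))   # solid angle enclosed by the loop
    print(f"theta = {theta_deg:4.1f} deg: rotation = {alpha:.4f} rad, "
          f"solid angle = {omega:.4f} rad, spin-1/2 Berry phase = {-omega/2:.4f} rad")
```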

arxiv:0710.5515 - Castelnovo et al., Magnetic monopoles in spin ice
One of the things that I find so interesting about condensed matter physics is the idea of emergent degrees of freedom. For example, phonons (quantized sound waves) are quantum mechanical quasiparticles in solids that can have well-defined quantum numbers, and arise because of the collective motion of large numbers of atoms. In a more exotic example, Cooper pairs in ordinary superconductors are objects with spin 0, charge -2e, yet are "built" out of electrons plus phonons. In a very exotic example, the quasiparticles in the fractional quantum Hall effect can have fractional charges and obey exotic statistics. In an even more extreme case, these authors propose that there are quasiparticle excitations in a kind of magnetically ordered insulator that act like magnetic monopoles. It seems that magnetic monopoles do not exist as elementary particles. Indeed, they would require a modification of Maxwell's equations. (In this solid state system the argument is that they exist as monopole/antimonopole pairs, so that the net divergence of the magnetic field is still zero). "Forbidden" particles emerging from the collective action of many electrons - a very neat idea, and it would appear that there may even be some experimental evidence for this already.

Wednesday, October 31, 2007

In honor of Halloween....

Three of my favorite science-related quotes from the movies, all from Ghostbusters:

Dean Yeager: Your theories are the worst kind of popular tripe; your methods are sloppy, and your conclusions are highly questionable. You are a poor scientist, Dr. Venkman.
---
Ray Stantz: Personally, I like the University. They gave us money and facilities, we didn't have to produce anything. You've never been out of college. You don't know what it's like out there. I've worked in the private sector. They expect results.
---
Peter Venkman: Back off, man! I'm a scientist!

Any other good ones to share? (Real science post coming in a day or two....)

Friday, October 26, 2007

Jobs jobs jobs

I figure it's probably a good idea to take advantage of the staggeringly enormous readership of this blog to point out several searches going on at Rice right now.

First, three searches are going on here at Rice in the Physics and Astronomy department at the moment. These are:
There is also an experimental nanophotonics search going on in Electrical and Computer Engineering.

Finally, the Chemistry department is doing a search for inorganic or physical chemists, broadly defined. The ad is on the departmental homepage.

Share and enjoy! If you want to discuss what Rice is like as a faculty member, please feel free to contact me and I'll be happy to talk.



Friday, October 19, 2007

Three papers and a video.

Three interesting papers on ASAP at Nano Letters at the moment:

http://dx.doi.org/10.1021/nl0717715 and http://dx.doi.org/10.1021/nl072090c are both papers where people have taken graphite flakes, oxidized them to make graphite oxide, and then suspended the resulting graphene oxide sheets in solvent. They then deposited the sheets onto substrates and made electronic devices out of them after trying to reduce the graphene oxide back to graphene. There are a couple of people here at Rice trying similar things from the chemistry side. Interesting that a number of groups are all working on this at about the same time. That's one reason why it can be dangerous to try to jump into a rapidly evolving hot topic - it's easy to get scooped.

This one is a cute paper titled "Carbon nanotube radio". The science is nicely done, though not exactly surprising. AM radio works by taking an rf carrier signal and demodulating it to get back just the envelope of that carrier signal. Back in the early 20th century (or more recently, if you bought an old kit somewhere), people used to do the demodulating using a diode made semi-reliably by jamming a metal needle (a "cat's whisker") into a lead sulfide crystal - hence the term "crystal radio". It's simple trig math to see that a nonlinear IV curve (one with a nonzero d^2I/dV^2) can rectify an ac signal of amplitude V0 to give a dc signal of (1/4)(d^2I/dV^2)V0^2. Well, in this case the nonlinear element is a nanotube device. Cute, though I have to admit that I found the media hype a bit much. Wilson Ho did the same essential thing very nicely with an STM, but didn't talk about atomic-scale radio receivers....
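
If you want to see that trig math in action, here is a little Python sketch with made-up, generic parameters (nothing to do with the actual nanotube device): a nonlinear I-V rectifies a small ac drive to give the (1/4)(d^2I/dV^2)V0^2 dc term, and the same nonlinearity plus a crude low-pass filter pulls the audio envelope off an AM carrier.

```python
import numpy as np

# A generic nonlinear I-V curve with a nonzero second derivative at V = 0;
# the "diode-like" parameters here are made up for illustration.
I_s, V_T = 1e-9, 0.025                         # amps, volts
I = lambda V: I_s * (np.exp(V / V_T) - 1.0)
d2I_dV2 = I_s / V_T**2                         # d^2I/dV^2 at V = 0

# 1) Check the rectification formula <I> ~ (1/4)(d^2I/dV^2) V0^2 for a pure carrier
t = np.linspace(0.0, 1e-3, 200_001)            # 1 ms of signal
fc, V0 = 1e6, 2e-3                             # 1 MHz carrier, 2 mV amplitude
I_dc_numeric = np.mean(I(V0 * np.cos(2 * np.pi * fc * t)))
I_dc_formula = 0.25 * d2I_dV2 * V0**2
print(I_dc_numeric, I_dc_formula)              # agree to within a few percent

# 2) AM demodulation: the nonlinearity rectifies, a low-pass filter keeps the envelope
fm = 1e4                                       # 10 kHz "audio" tone
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * fm * t)
i_t = I(V0 * envelope * np.cos(2 * np.pi * fc * t))
win = int(10 * (1 / fc) / (t[1] - t[0]))       # average over ~10 carrier periods
audio = np.convolve(i_t, np.ones(win) / win, mode="same")
# 'audio' now follows (1/4)(d^2I/dV^2)(V0 * envelope)^2, i.e. the squared envelope
```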

Lastly, via Scott Aaronson, a link to a fantastic math presentation. Watch the whole thing - this really is a model of clarity and public outreach. On a bitter-sweet note, in the credits at the end I realized that one of the people responsible for this was an acquaintance from college who has since passed away. Small world.

Tuesday, October 16, 2007

This week in cond-mat

Real life continues to be very busy this semester. Two interesting papers on the arxiv this week....

arxiv:0710.2845
- Fratini et al., Current saturation and Coulomb interactions in organic single-crystal transistors
The technology finally exists to do what He Who Must Not Be Named claimed to have done: use a field-effect geometry to gate significant charge densities (that is, a good fraction of a charge carrier per molecule) into the surface of a clean single crystal of an organic semiconductor. The Delft group has used Ta2O5 as a high-k gate dielectric, and is able to get 0.1 holes per rubrene molecule in a single-crystal FET geometry. In typical organic FETs, increasing the charge density in the channel improves transport by filling trap states and by moving the chemical potential in the channel toward the mobility edge in the density of states. Surprisingly, Fratini et al. have found that the channel conductance actually saturates at very high charge densities instead of continuing to increase. The reason for this appears to be Coulomb interactions in the channel due to the high carrier density and the polaronic nature of the holes. The strong coupling between the carriers and the dielectric layer leads to a tendency toward self-trapping; add strong repulsion and poor screening into the mix, and you have a more insulating state induced by this combination of effects. Very interesting!

arxiv:0710.2323 - Degen et al., Controlling spin noise in nanoscale ensembles of nuclear spins
Dan Rugar's group at IBM has been working on magnetic resonance force microscopy (MRFM) for a long time, and they've got the sensitivity to the point where they can detect hundreds of nuclear spins (!). (That may not seem impressive if you haven't been following this, but it's a tour de force experiment that's come very far from the initial work.) The basic idea of MRFM is to have a high-Q cantilever that is mechanically resonant at the spin resonance frequency and coupled via magnetic interactions to the sample - that way, as the polarized spins precess, they drive the cantilever's resonant mode. When they look at such a small number of spins, the statistical fluctuations in the spin polarization are readily detected. This is a problem for imaging, actually - the timescale for the natural fluctuations is long enough that the signal bops around quite a bit during a line scan. Fortunately, Degen et al. have demonstrated in this paper that one can deliberately randomize the magnetization with bursts of rf pi/2 pulses, and thus suppress the impact of the fluctuations on imaging by making the effective fluctuations much more rapid. This is a nice mix of pretty physics and very clever experimental technique.
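
To see why statistical polarization dominates for such tiny ensembles, here is a trivial Python sketch (generic numbers, nothing specific to the IBM experiment): the net moment of N randomly oriented spin-1/2's has an rms value of about sqrt(N)/2, so the fractional fluctuation goes like 1/sqrt(N) and is enormous when N is only a few hundred.

```python
import numpy as np

rng = np.random.default_rng(0)
snapshots = 20_000
for N in (100, 10_000, 1_000_000):
    # net moment (in units of a single spin's moment) of N random spin-1/2's;
    # the number of "up" spins is binomial, so the net moment is n_up - N/2
    net = rng.binomial(N, 0.5, size=snapshots) - N / 2
    print(f"N = {N:>9}: rms net moment = {net.std():9.1f}  "
          f"(sqrt(N)/2 = {np.sqrt(N) / 2:9.1f}),  "
          f"fractional rms = {net.std() / (N / 2):.1e}")
```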

Wednesday, October 10, 2007

Giant magnetoresistance

I think it's great that the physics Nobel this year went for giant magnetoresistance (GMR). GMR is intrinsically a quantum mechanical effect, an example of a nanoscale technology that's made it out of the lab and into products, and one of the big reasons that you can buy a 500GB hard drive for $100. (Good job, Sujit, for the advanced pick!).

The story in brief: Back in the ancient past (that is, the 1980s), the read heads on hard drives operated based on the anisotropic magnetoresistance (AMR). For band structure reasons, the electrical resistivity of ferromagnetic metals depends a bit on the relative orientations of M, the magnetization, and J, the current density. In the common NiFe alloy permalloy, for example, the resistivity is about 2% larger when M is parallel to J than when M is perpendicular to J. To read out the bits on magnetic media, a strip of magnetically soft (easily reoriented) material was used, and the fringing fields from the disk media could alter the direction of that strip's M, leading to changes in the resistance that were translated into voltage changes that correspond to 1s and 0s.

In the late 1980s, Fert and Grunberg demonstrated that stacks of nanoscale layers of alternating magnetic and nonmagnetic metals had remarkable magnetoresistive properties. When the magnetizations of the FM layers are aligned, the mobile electrons can move smoothly between the layers, leading to relatively low resistance. However, when the magnetizations of the FM layers are anti-aligned, there is a mismatch between the densities of states for spin-up and spin-down electrons between anti-aligned layers. The result is enhanced scattering of spin-polarized electrons at the interfaces between the normal and FM layers. (Crudely, a spin-down electron that comes from being the majority spin in one FM layer goes through the normal metal and runs into the anti-aligned FM layer, where that spin orientation is now the minority spin - there are too few empty states available for that electron in the new FM layer, so it is likely to be reflected from the interface.) More scattering = higher resistance. The resulting GMR effect can be 10x larger than AMR, meaning that read heads based on GMR multilayers could read much smaller bits (with smaller fringing fields) for the same signal-to-noise ratio.
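
That cartoon maps onto the standard two-current (Mott) resistor model. Here is a minimal Python sketch with arbitrary resistance values, just to show how parallel vs. antiparallel alignment of the layers produces a big resistance change:

```python
# Two-current (Mott) resistor model of a FM/normal-metal/FM trilayer.
# r = resistance a spin channel sees in a layer magnetized "its way" (majority),
# R = resistance in a layer magnetized the "wrong way" (minority).
# The values are illustrative, not material parameters.
r, R = 1.0, 5.0

def in_parallel(a, b):
    return a * b / (a + b)

# Parallel alignment: one spin channel is majority in BOTH layers (r + r),
# the other is minority in both (R + R); the two channels conduct in parallel.
R_P = in_parallel(r + r, R + R)

# Antiparallel alignment: each spin channel is majority in one layer and
# minority in the other (r + R), so there is no low-resistance shortcut.
R_AP = in_parallel(r + R, r + R)

print(f"R_parallel     = {R_P:.3f}")
print(f"R_antiparallel = {R_AP:.3f}")
print(f"GMR ratio (R_AP - R_P)/R_P = {(R_AP - R_P) / R_P:.2%}")
print(f"analytic (R - r)^2/(4 r R) = {(R - r)**2 / (4 * r * R):.2%}")
```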

Thursday, October 04, 2007

Challenges in measurement

This post is only going to be relevant directly for those people working on the same kind of stuff that my group does. Still, it gives a flavor of the challenges that can pop up unexpectedly in doing experimental work.

Often we are interested in measuring the electronic conductance of some nanodevice. One approach to doing this is to apply a small AC voltage to one end of the device, and connect the other end to something called a current preamplifier (or a current-to-voltage converter, or a glorified ammeter) to measure the amount of current that flows. It's possible to build your own current preamp, but many nanodevice labs have a couple of general purpose ones lying around. A common one is the SR570, made by Stanford Research. This gadget is pretty nice - it has up to a 1 MHz bandwidth, it has built-in filter stages, it is remotely programmable, and it has various different gain settings depending on whether you want to measure microamps or picoamps of current.

Here's the problem, though. One of my students observed that his devices seemed to fail at a surprisingly high rate when using the SR570, while the failure rate was dramatically lower when using a different (though more expensive) preamp, the Keithley 428. After careful testing he found that when the SR570 changes gain ranges (there is an audible click of an internal relay when this happens, as the input stage of the amplifier is switched), spikes of > 1 V (!) lasting tens of microseconds show up on the input of the amplifier (the part directly connected to the device), as seen by hooking that input up to an oscilloscope. Our nanoscale junctions are very fragile, and these spikes irreversibly damage the devices. The Keithley, on the other hand, doesn't do this and is very quiet. Talking to SRS, this appears to be an unavoidable trait of the SR570. We're working to mitigate this problem, but it's probably good for people out there in the community using these things to know about this.

Sunday, September 30, 2007

This week in cond-mat

Two recent papers in cond-mat this time, both rather thermodynamics-related. That's appropriate, since I'm teaching undergrad stat mech these days.

arxiv:0709.4181 - Kubala et al., Violation of Wiedemann-Franz law in a single-electron transistor
The Wiedemann-Franz law is one of those things taught in nearly every undergraduate solid-state physics class. It also happens to be extremely useful for doing cryogenic engineering, as I learned during my grad school days. The idea is simple: simple kinetic theory arguments (and dimensional analysis) imply that the conductivity for transport of some quantity via some excitations is given by the product (capacity of each excitation to carry that quantity)*(speed of the excitations)*(mean free path of the excitations), with some geometric factor out in front (e.g., 1/3 for three-dimensional diffusive motion of the excitations). For example, the electrical conductivity of a 3d, diffusive, ordinary metal is (1/3) e^2 g(E_F) v_F \ell, where e is the electronic charge, g(E_F) is the density of states at the Fermi level, v_F is the Fermi velocity for conduction electrons, and \ell is the mean free path for those electrons (at low T, \ell is set by impurity scattering or boundary scattering). Those same electrons also carry thermal energy, with a heat capacity per electron that scales like T, while the speed and mean free path of the electrons are as above. This implies the Wiedemann-Franz law: the ratio of the thermal conductivity to (electrical conductivity*T) in an ordinary metal should be a constant, the Lorenz number, (pi^2/3)(k_B/e)^2 ~ 2.44 x 10^-8 W Ohm/K^2. Deviations from the W-F law are indicators of interesting physics - basically that simple metal electrons either aren't the dominant carriers of the electrical current, or that the charge carriers don't carry thermal energy as normal. This paper is a theory piece by the Helsinki group showing that the W-F law fails badly for single-electron transistors. In particular, in the co-tunneling regime, when current is carried via quantum coherent processes, the Lorenz number is predicted to be renormalized upward by a factor of 9/5. This will be challenging to measure in experiments, but exquisite thermal conductivity measurements have been performed in similar systems in the past.
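
For the record, here is a quick Python check of the numbers (standard constants; the conductivity is just a copper-like ballpark, not data from the paper):

```python
import numpy as np

k_B = 1.380649e-23      # J/K
e   = 1.602176634e-19   # C

# Sommerfeld value of the Lorenz number, L0 = (pi^2/3)(k_B/e)^2
L0 = (np.pi**2 / 3.0) * (k_B / e)**2
print(f"L0 = {L0:.3e} W Ohm / K^2")       # ~2.44e-8

# Example: estimate a metal's thermal conductivity from its electrical
# conductivity, assuming W-F holds (sigma here is just a copper-like ballpark).
sigma, T = 6.0e7, 300.0                   # S/m, K
print(f"kappa ~ {L0 * sigma * T:.0f} W/(m K)")   # ~440, close to copper's ~400

# The renormalized value predicted in the cotunneling regime of an SET:
print(f"cotunneling prediction: {(9 / 5) * L0:.3e} W Ohm / K^2")
```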

arxiv:0709.4125 - Allahverdyan et al., Work extremum principle: structure and function of quantum heat engines
Marlan Scully (also here) caused a bit of a flurry of excitement a few years ago by proposing a form of heat engine that uses quantum coherence and its destruction to do work, in addition to the conventional approach of using two thermal baths at different temperatures. This paper is a theoretical analysis of some such quantum heat engines. Carnot can sleep easy - in the end you can't violate the Carnot efficiency even with quantum heat engines, if you dot all the "i"s and cross all the "t"s. Neat to think about, though, and of some experimental relevance to the cold atom community, who can prepare highly coherent atomic gases at very low temperatures. This paper is long and detailed and I don't claim to have read it in depth, but it looks interesting.

Tuesday, September 25, 2007

Revised: Primer on faculty searches, part I

It's that time of year again, with Chad Orzel and the Incoherent Ponderer both posting about the faculty job market and job hunting. So, I'm recycling a post of mine from last year describing the search process, at least the way it's done at Rice. I'm going to insert some revisions that are essentially tips to would-be candidates, though I think the IP has already done a good job on this, and some are basically common sense. An obvious disclaimer: this is based on my experience, and may not generalize well to other departments with vastly differing cultures or circumstances.

Here are the main steps in a search:
  • The search gets authorized. This is a big step - it determines what the position is, exactly: junior only vs. open to junior or senior; a new faculty line vs. a replacement vs. a bridging position (i.e. we'll hire now, and when X retires in three years, we won't look for a replacement then).
  • The search committee gets put together. In my dept., the chair asks people to serve. If the search is in condensed matter, for example, there will be several condensed matter people on the committee, as well as representation from the other major groups in the department, and one knowledgeable person from outside the department (in chemistry or ECE, for example). The chairperson or chairpeople of the committee meet with the committee or at least those in the focus area, and come up with draft text for the ad.
  • The ad gets placed, and canvassing begins of lots of people who might know promising candidates. A special effort is made to make sure that all qualified women and underrepresented minority candidates know about the position and are asked to apply (the APS has mailing lists to help with this, and direct recommendations are always appreciated - this is in the search plan). Generally, the ad really does list what the department is interested in. It's a huge waste of everyone's time to have an ad that draws a large number of inappropriate (i.e. don't fit the dept.'s needs) applicants. The exception to this is the generic ad typically placed by MIT and Berkeley: "We are looking for smart folks. Doing good stuff. In some area." They run the same ad every year, trolling for talent. They seem to do ok. The other exception is when a university already knows who they want to get for a senior position, and writes an ad so narrow that only one person is really qualified. I've never seen this personally, but I've heard anecdotes.
  • In the meantime, a search plan is formulated and approved by the dean. The plan details how the search will work, what the timeline is, etc. This plan is largely a checklist to make sure that we follow all the right procedures and don't screw anything up. It also brings to the fore the importance of "beating the bushes" - see above. A couple of people on the search committee will be particularly in charge of oversight on affirmative action/equal opportunity issues.
  • The dean meets with the committee and we go over the plan, including a refresher for everyone on what is or is not appropriate for discussion in an interview (for an obvious example, you can't ask about someone's religion.).
  • Applications come in and are sorted; rec letters are collated. Each candidate has a folder.
  • The committee begins to review the applications. Generally the members of the committee who are from the target discipline do a first pass, to at least weed out the inevitable applications from people who are not qualified according to the ad (i.e. no PhD; senior people wanting a senior position even though the ad is explicitly for a junior slot; people with research interests or expertise in the wrong area). Applications are roughly rated by everyone into a top, middle, and bottom category. Each committee member comes up with their own ratings, so there is naturally some variability from person to person. Some people are "harsh graders". Some value high impact publications more than numbers of papers. Others place more of an emphasis on the research plan, the teaching statement, or the rec letters. Yes, people do value the teaching statement - we wouldn't waste everyone's time with it if we didn't care. Interestingly, often (not always) the people who are the strongest researchers also have very good ideas and actually care about teaching. This shouldn't be that surprising. As a friend of mine at a large state school once half-joked to me: 15% of the faculty in any department do the best research; 15% do the best teaching; 15% do the most service and committee work; and it's often the same 15%.
  • Once all the folders have been reviewed and rated, a relatively short list (say 20-25 or so out of 120 applications) is arrived at, and the committee meets to hash that down to, in the end, five or so to invite for interviews. In my experience, this happens by consensus, with the target discipline members having a bit more sway in practice since they know the area and can appreciate subtleties - the feasibility and originality of the proposed research, the calibration of the letter writers (are they first-rate folks? Do they always claim every candidate is the best postdoc they've ever seen?). I'm not kidding about consensus; I can't recall a case where there really was a big, hard argument within the committee. I know I've been lucky in this respect, and that other institutions can be much more feisty. The best, meaning most useful, letters, by the way, are the ones that say things like "This candidate is very much like CCC and DDD were at this stage in their careers." Real comparisons like that are much more helpful than "The candidate is bright, creative, and a good communicator." Regarding research plans, the best ones (for me, anyway) give a good sense of near-term plans, medium-term ideas, and the long-term big picture, all while being relatively brief and written so that a general committee member can understand much of it (why the work is important, what is new) without being an expert in the target field. It's also good to know that, at least at my university, if we come across an applicant that doesn't really fit our needs, but meshes well with an open search in another department, we send over the file. This, like the consensus stuff above, is a benefit of good, nonpathological communication within the department and between departments.
That's pretty much it up to the interview stage. No big secrets. No automated ranking schemes based exclusively on h numbers or citation counts.

Tips for candidates:
  • Don't wrap your self-worth up in this any more than is unavoidable. It's a game of small numbers, and who gets interviewed where can easily be dominated by factors extrinsic to the candidates - what a department's pressing needs are, what the demographics of a subdiscipline are like, etc. Every candidate takes job searches personally to some degree because of our culture, but don't feel like this is some evaluation of you as a human being.
  • Don't automatically limit your job search because of geography unless you have some overwhelming personal reasons. I almost didn't apply to Rice because neither my wife nor I were particularly thrilled about Texas, despite the fact that neither of us had ever actually visited the place. Limiting my search that way would've been a really poor decision.
  • Really read the ads carefully and make sure that you don't leave anything out. If a place asks for a teaching statement, put some real thought into what you say - they want to see that you have actually given this some thought, or they wouldn't have asked for it.
  • Research statements are challenging because you need to appeal to both the specialists on the committee and the people who are way outside your area. My own research statement back in the day was around three pages. If you want to write a lot more, I recommend having a brief (2-3 page) summary at the beginning followed by more details for the specialists. It's good to identify near-term, mid-range, and long-term goals - you need to think about those timescales anyway. Don't get bogged down in specific technique details unless they're essential. You need committee members to come away from the proposal knowing "These are the Scientific Questions I'm trying to answer", not just "These are the kinds of techniques I know".
  • Be realistic about what undergrads, grad students, and postdocs are each capable of doing. If you're applying for a job at a four-year college, don't propose to do work that would require an experienced grad student putting in 60 hours a week.
  • Even if they don't ask for it, you need to think about what resources you'll need to accomplish your research goals. This includes equipment for your lab as well as space and shared facilities. Talk to colleagues and get a sense of what the going rate is for start-up in your area. Remember that four-year colleges do not have the resources of major research universities. Start-up packages at a four-year college are likely to be 1/4 of what they would be at a big research school (though there are occasional exceptions). Don't shave pennies - this is the one prime chance you get to ask for stuff! On the other hand, don't make unreasonable requests. No one is going to give a junior person a start-up package comparable to a mid-career scientist.
  • Pick letter-writers intelligently. Actually check with them that they're willing to write you a nice letter - it's polite and it's common sense. Beyond the obvious two (thesis advisor, postdoctoral mentor), it can sometimes be tough finding an additional person who can really say something about your research or teaching abilities. Sometimes you can ask those two for advice about this. Make sure your letter-writers know the deadlines and the addresses.
I'll revise more later if I have the time.

Monday, September 24, 2007

2007 Nobel Prize in Physics

Time for pointless speculation. I suggest Michael Berry and Yakir Aharonov for the 2007 physics Nobel, because of their seminal work on nonclassical phase factors in quantum mechanics. Thoughts?

Saturday, September 22, 2007

Two seminars this past week

I've been remiss by not posting more interesting physics, either arxiv or published. I'll try to be better about that, though usually those aren't the posts that actually seem to generate comments. For starters, I'll write a little about two interesting condensed matter seminars that we had this week. (We actually ended up with three in one week, which is highly unusual, but I was only able to go to two.)

First, my old friend Mike Manfra from Bell Labs came and gave a talk about the interesting things that one sees in two-dimensional hole systems (2dhs) on GaAs (100). Over the last 25 years, practically a whole subdiscipline (including two Nobel prizes) has sprung up out of our ability to make high quality two-dimensional electron systems (2des). If you have a single interface between GaAs below and AlxGa(1-x)As above, and you put silicon dopants in the AlGaAs close to the interface, charge transfer plus band alignment plus band bending combine to give you a layer of mobile electrons confined in a roughly triangular potential well at the interface. Those electrons are free to move within the plane of the interface, but they typically have no ability to move out of the plane. (That is, the energy to excite momentum in the z direction is greater than their Fermi energy.) Now it's become possible to grow extremely high quality 2dhs, using carbon as a dopant rather than silicon. The physics of these systems is more complicated than the electron case, because holes live in the valence band and experience strong spin-orbit effects (in contrast to electrons in the conduction band). In the electron system, it's known that at relatively low densities, low temperatures, and moderate magnetic fields, there is a competition between different possible ground states, including ones where the electron density is spatially complicated ("stripes", "bubbles", "nematics"). Manfra presented some nice work on the analogous case with holes, where the spin-orbit complications make things even more rich.

Then yesterday we had a talk by Satoru Nakatsuji from the ISSP at the University of Tokyo. He was talking about an extremely cool material, Pr2Ir2O7. This material is a metal, but because of its structure it has very complicated low temperature properties. For example, the Pr ions live on a pyrochlore lattice, which consists of corner-sharing tetrahedra. The ions are ferromagnetically coupled (they want to align their spins), but the lattice structure is a problem because it results in geometric frustration - not all the spins can be satisfied. As a result, the spins never order at nonzero temperature (at least, down to the milliKelvin range) despite having relatively strong couplings. This kind of frustration is important in things like water ice, too. In water ice, the hydrogens can be thought of as being at the corners of such tetrahedra, but the O-H bond lengths can't all be the same. For each tetrahedron, two are short (the covalent O-H bonds) and two are long (hydrogen bonds). The result is a ground state for water ice that is highly degenerate, leading to an unusual "extra" residual entropy at T = 0 of (R/2) ln(3/2) per mole of protons (in contrast to the classical third law of thermodynamics, which says entropy goes to zero at T = 0). The same kind of thing happens in Pr2Ir2O7 - the spins on the tetrahedron corners have to be "two-in" and "two-out" (see the link above), leading to the same kind of residual entropy as in water ice. This frustration physics is just the tip of the iceberg (sorry.) of what Nakatsuji discussed. Very neat.
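
If you want to see where that (R/2) ln(3/2) comes from, here is a back-of-the-envelope Python version of Pauling's counting argument; the only real approximation is treating the tetrahedra as independent.

```python
import numpy as np
from itertools import product

R_gas = 8.314462618   # J/(mol K)

# Count the "2-in, 2-out" states of a single tetrahedron directly:
# each of its 4 spins points either in (+1) or out (-1).
allowed = sum(1 for s in product((+1, -1), repeat=4) if sum(s) == 0)
print(f"{allowed} of {2**4} single-tetrahedron configurations are 2-in/2-out")

# Pauling-style estimate: N spins form N/2 corner-sharing tetrahedra, and each
# tetrahedron (treated as independent) keeps a fraction 6/16 of the 2^N states:
#   W ~ 2^N * (6/16)^(N/2) = (3/2)^(N/2),  so  S/N = (1/2) k_B ln(3/2)
S_per_mole = 0.5 * R_gas * np.log(1.5)
print(f"residual entropy ~ {S_per_mole:.2f} J/(mol K) per mole of spins (or protons)")
```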

Friday, September 14, 2007

The secret joys of running a lab II: equipment

The good news is that we're getting a cool new piece of equipment to be installed here next week. The bad news (apart from the fact that it uses liquid helium - see previous post) is that I've been spending my morning sifting through US import tariff codes trying to come up with a number that will make the shipping agent happy. You might think that the tariff code supplied by the vendor would be good enough. Apparently you'd be wrong. You might think that this would be the job of a customs broker. Again, apparently you'd be wrong. As the Incoherent Ponderer pointed out, there are many aspects of our jobs for which we never receive formal training. Customs agent is one. By the way: can anyone explain to me why US tariff codes are maintained by the US Census Bureau? Ok, so they're part of the Department of Commerce, but this is just odd.

Thursday, September 13, 2007

The secret joys of running a lab: helium.

In my lab, and in many condensed matter physics labs around the world, we use liquid helium to run many of our experiments. At low temperatures, many complicating effects in condensed matter systems are "frozen out", and it becomes easier to understand the effects that remain. Often we are interested in the ground state of some system and want to reduce thermal excitations. Quantum effects are usually more apparent at low temperatures because the inelastic processes that lead to decoherence are suppressed as T approaches zero. For example, the quantum coherence length (the distance scale over which the phase of an electron's wavefunction is well defined before it gets messed up due to inelastic effects of the environment) of an electron in a metal like silver at room temperature is on the order of 1 nm, while that length can be thousands of times longer at 4.2 K, the boiling point of liquid helium at atmospheric pressure. Those kinds of temperatures are also necessary for running good superconducting magnet systems.

The downside of liquid helium is that it's damned expensive, and getting more so by the minute. Running at full capacity I could blow through several thousand liters in a year, and at several dollars a liter minimum plus overhead, that's real money. As a bonus, lately our supplier of helium has become incredibly unreliable, missing orders and generally flaking out, while simultaneously raising prices because of actual production shortages. I just had to read the sales guy the riot act, and if service doesn't improve darn fast, we'll take our business elsewhere, as will the other users on campus. (Helium comes from the radioactive decay of uranium and other alpha emitters deep in the earth, and comes out of natural gas wells.) The long-term solutions are (a) set up as many cryogen-free systems as possible, and (b) get a helium liquefier to recycle the helium that we do use. Unfortunately, (a) requires an upfront cost comparable to about 8 years of a system's helium consumption per system, and (b) also necessitates big capital expenses as well as an ongoing maintenance issue. Of course none of these kinds of costs are the sort of thing that it's easy to convince a funding agency to support. Too boring and pedestrian.

Fortunately, when you work at really nanometer scales, interesting physics often happens at higher temperatures. I've been lucky that two major things going on in my lab right now don't require helium at all. Still, it's bad enough worrying about paying students without the added fun of helium concerns.

UPDATE: See here.

Sunday, September 09, 2007

Other Packard meeting highlights

I'm back from California, and the remainder of the Packard meeting was just as much intellectual fun as the first day. It's great to see so much good science and engineering outside my own discipline. Some fun things I learned:
  • Plants really can communicate by smell (that is, by giving off and detecting volatile compounds).
  • Many flying insects have evolutionarily found wing flap patterns that optimize for minimum energy consumption when hovering.
  • Most of the huge number of insect species in tropical rainforests (at least in New Guinea) are specialist feeders, preferring to eat only one type of plant.
  • When you split a molecular ion (say I2-) into a neutral atom and an atomic ion, the coherent superposition (in this case, 1/\sqrt(2) [(I + I-) + (I- + I)]) can persist even when the atom and ion are separated by more than 10 atomic diameters.
  • Super fancy mass spec plus amazing statistical capabilities can let you do serious proteomics.
  • There may have been as many as four supercontinent phases and two "snowball earth" phases in the last three billion years.
  • If you come up with a computationally efficient way to model viscoelastic materials (e.g. jello, human skin), you can develop virtual surgery tools for reconstructive surgeons, and win an Oscar for special effects by modeling Davy Jones for POTC II.
  • If you develop a DNA microarray chip that lets you cheaply and reliably identify any known virus or the nearest relative of an unknown virus, and you want to use this clinically, the established medical testing companies will react in a very negative way (because they're afraid that if you're successful, they won't be able to keep charging insurers $3K per possibly unnecessary blood test). The fact that you can save lives won't be of interest to them.
  • Comparing different measurement techniques can really tell you a lot about how cells sense and respond to touch.
  • You can design a Si photonic crystal to act as a superprism and show negative refraction and negative diffraction, all at the same time, over a useful bandwidth near 1.55 microns wavelength (the standard telecommunications band).
I know I'm leaving some out, too. Very fun stuff.

Friday, September 07, 2007

Packard meeting

I'm currently in Monterey thanking the Packard Foundation for their generous support. They're fantastic, and their fellowship has been a godsend that's really given me the flexibility in my research that I've needed. The best part about their annual meetings is that it's a chance for me to listen to good talks pitched to a general audience on an enormously broad set of science and engineering subjects. Some things that I learned yesterday:
  • It's possible to do successful astronomical planet-hunting surveys using 300mm camera lenses to make a telescope array.
  • There are molecules and molecular ions in astronomical gas clouds that are extremely difficult to make and study on earth (e.g., CH5-; C6H7+).
  • The human brain is 2% of the body's mass but uses 20% of the body's oxygen. It also has roughly 10x the concentration of iron, copper, and zinc of other soft tissues in the body.
  • Chemical "noise" (e.g., concentration fluctuations) is essential for some kinds of cell differentiation.
  • There are other photoactive parts in your eye besides rods and cones, and if those other parts are intact, your body clock can still re-set itself even in the absence of vision.
  • Soft tissue can (pretty convincingly) survive inside fossil bones dating back tens of millions of years.
  • Viral phylogeny shows convincingly that HIV did not start from contaminated polio vaccines grown in monkeys, and that HIV came from Africa first to Haiti, and then from Haiti to the US in the late 1960s.
  • Lots of microbes live as biofilms on the ocean floor via chemical energy gained from the decomposition of basaltic rock.

Wednesday, August 29, 2007

Invited talk suggestions, APS March Meeting 2008

Along with Eric Isaacs, I am co-organizing a focus topic at the March Meeting of the APS this year on "Fundamental Challenges in Transport Properties of Nanostructures". The description is:
This focus topic will address the fundamental issues that are critical to our understanding, characterization and control of electronic transport in electronic, optical, or mechanical nanostructures. Contributions are solicited in areas that reflect recent advances in our ability to synthesize, characterize and calculate the transport properties of individual quantum dots, molecules and self-assembled functional systems. Resolving open questions regarding transport in nanostructures can have a huge impact on a broad range of future technologies, from quantum computation to light harvesting for energy. Specific topics of interest include: fabrication or synthesis of nanostructures involved with charge transport; nanoscale structural characterization of materials and interfaces related to transport properties; advances in the theoretical treatment of electronic transport at the nanoscale; and experimental studies of charge transport in electronic, optical, or mechanical nanostructures.
The sorting category is 13.6.2, if you would like to submit a contributed talk. Until Friday August 31, we're still soliciting suggestions for invited speakers for this topic, and I would like to hear what you out there would want to see. If you've got a suggestion, feel free either to post below in the comments, or to email me with it, including the name of the suggested speaker and a brief description of why you think they'd be appropriate. The main restriction is that suggested speakers can't have given an invited talk at the 2007 meeting. Beyond that, while talks by senior people can be illuminating, it's a great opportunity for postdocs or senior students to present their work to an audience. Obviously space is limited, and I can make no promises, but suggestions would be appreciated. Thanks.

Tuesday, August 28, 2007

Quantum impurities from Germany II

A recurring theme at the workshop in Dresden last week was quantum impurities driven out of equilibrium. In general this is an extremely difficult problem! One of the approaches discussed was that of Natan Andrei's group, presented here and here. I don't claim to understand the details, but schematically the idea is to remap the general problem into a scattering language. You set up the nonequilibrium aspect (in the case of a quantum dot under bias, this corresponds to setting the chemical potentials of the leads at unequal values) as a boundary condition. By recasting things this way, you can use a clever ansatz to find eigenstates of the scattering form of the problem, and if you're sufficiently clever you can do this for different initial conditions and map out the full nonequilibrium response. Entropy production and eventual relaxation of the charge carriers far from the dot happens "at infinity". Andrei gives a good (if dense) talk, and this formalism seems very promising, though it also seems like actually calculating anything for a realistic system requires really solving for many-body wavefunctions for a given system.

Tuesday, August 21, 2007

Quantum impurities from Germany

I'm currently at a workshop on quantum impurity problems in nanostructures and molecular systems, sponsored by the Max Planck Institute for Complex Systems here in Dresden. A quantum impurity problem is defined by a localized subsystem (the impurity) with some specific quantum numbers (e.g. charge; spin) coupled to nonlocal degrees of freedom (e.g. a sea of delocalized conduction electrons; spin waves; phonons). The whole coupled system of impurity (or impurities) + environment can have extremely rich properties that are very challenging to deduce, even if the individual subsystems are relatively simple.

A classic example is the Kondo problem, with a localized impurity site coupled via tunneling to ordinary conduction electrons. The Coulomb repulsion is strong enough that the local site can really be occupied by only one electron at a time. However, the total energy of the system can be reduced if the localized electron can undergo high order virtual processes where it can pop into the conduction electron sea and back. The result is an effective magnetic exchange between the impurity site and the conduction electrons, as well as an enhanced density of states at the Fermi level for the conduction electrons. The ground state of this coupled system involves correlations between many electrons, and results in a net spin singlet. Like many impurity problems, the Kondo problem can't be solved by perturbation theory.

The point is, with nanostructures it is now possible to implement all kinds of impurity problems experimentally. What is really exciting is the prospect of using these kinds of tunable model systems to study strong correlation physics (e.g. quantum phase transitions in heavy fermion compounds; non-Fermi liquid "bad metals") in a very controlled setting, or in regimes that are otherwise hard to probe (e.g., impurities driven out of equilibrium). This workshop is about 70 or 80 people, a mix of theorists and experimentalists, all interested in this stuff. When I get back I'll highlight a couple of the talks.

Thursday, August 16, 2007

Superluminality



Today this blurb from the New Scientist caused a bit of excitement around the web. While it sounds at first glance like complete crackpottery, and is almost certainly a case of terrible science journalism, it does involve an interesting physics story that I first encountered back when I was looking at grad schools. I visited Berkeley as a prospective student and got to meet Ray Chiao, who asked me how long it takes a particle with energy E to tunnel through a rectangular barrier of height U > E and thickness d. He went to get a glass of water, and wanted me to give a quick answer when he got back a couple of minutes later. Well, if I wasn't supposed to do a real calculation, I figured there were three obvious guesses: (1) \( d/c\); (2) \(d/ (\hbar k/m)\), where \(k = \sqrt{2 m (U-E)}/\hbar\) - basically solving for the (magnitude of the imaginary) classical velocity and using that; (3) 0.

It turns out that this tunneling time controversy is actually very subtle. When you think about it, it's a funny question from the standpoint of quantum mechanics. You're asking, of the particles that successfully traversed the barrier, how long were they in the classically forbidden region? This has a long, glorious history that is discussed in detail here. Amazingly, the answer is that the tunneling velocity (d / the tunneling time) can exceed c, the speed of light in a vacuum, depending on how it's defined. For example, you can consider a gaussian wave packet incident on a barrier, and ask how fast does the packet make it through. There will be some (smaller than incident) transmitted wavepacket, and if you look at how long it takes the center of the transmitted wave packet to emerge from the barrier after the center of the incident packet hits the barrier, you can get superluminal speeds out for the center of the wavepacket. (You can build up these distributions statistically by doing lots of single-photon counting experiments.) Amazingly, you can actually have a situation where the exiting pulse leaves the barrier before the entering pulse peak hits the barrier. This would correspond to negative (average) velocity (!), and has actually been demonstrated in the lab.

So, shouldn't this bother you? Why doesn't this violate causality and break special relativity? The conventional answer is that no information is actually going faster than light here. The wavepackets we've been considering are all smooth, analytic functions, so that the very leading tail of the incident packet contains all the information. Since that leading tail is, in Gaussian packets anyway, infinite in extent, all that's going on here is some kind of pulse re-shaping. The exiting pulse is just a modified version in some sense of information that was already present there. It all comes down to how one defines a signal velocity, as opposed to a phase velocity, group velocity, energy velocity, or any of the other concepts dreamed up by Sommerfeld back in the early 20th century when people first worried about this. Now, this kind of argument from analyticity isn't very satisfying to everyone, particularly Prof. Nimtz. He has long argued that something more subtle is at work here - that superluminal signalling is possible, but tradeoffs between bandwidth and message duration ensure that causality can't be violated. Well, according to his quotes in today's news, apparently related to this 2-page thing on the arxiv, he is making very strong statements now about violating special relativity.
The preprint is woefully brief and shows no actual data - for such an extraordinary claim in the popular press, this paper is completely inadequate. Anyway, it's a fun topic, and it really forces you to think about what causality and information transfer really mean.
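
For the quantum-mechanical cousin of Chiao's question, here is a Python sketch for a textbook rectangular barrier and a nonrelativistic electron (parameters picked arbitrarily). It computes the stationary-phase ("Wigner") delay by numerically differentiating the phase of the transmission amplitude, and shows the Hartman effect: the delay saturates as the barrier gets thicker, so the nominal traversal velocity d/tau grows without bound - even though, per the discussion above, no actual signal is going faster than light.

```python
import numpy as np

hbar = 1.054571817e-34          # J s
m    = 9.1093837015e-31         # electron mass, kg
eV   = 1.602176634e-19          # J

def t_amp(E, U, d):
    """Transmission amplitude for a rectangular barrier of height U and width d
    at energy E < U (standard textbook result)."""
    k     = np.sqrt(2 * m * E) / hbar
    kappa = np.sqrt(2 * m * (U - E)) / hbar
    denom = np.cosh(kappa * d) + 1j * (kappa**2 - k**2) / (2 * k * kappa) * np.sinh(kappa * d)
    return np.exp(-1j * k * d) / denom

def phase_time(E, U, d, dE=1e-6 * eV):
    """Stationary-phase (Wigner) delay: tau = hbar d/dE arg[t(E) exp(ikd)]."""
    def extra_phase(En):
        k = np.sqrt(2 * m * En) / hbar
        return np.angle(t_amp(En, U, d) * np.exp(1j * k * d))
    return hbar * (extra_phase(E + dE) - extra_phase(E - dE)) / (2 * dE)

E, U = 0.5 * eV, 1.0 * eV
k     = np.sqrt(2 * m * E) / hbar
kappa = np.sqrt(2 * m * (U - E)) / hbar
for d_nm in (0.5, 1.0, 2.0, 4.0):
    d = d_nm * 1e-9
    tau = phase_time(E, U, d)
    print(f"d = {d_nm:3.1f} nm: tau = {tau * 1e15:6.2f} fs, d/tau = {d / tau:9.3e} m/s")
print(f"Hartman limit 2m/(hbar k kappa) = {2 * m / (hbar * k * kappa) * 1e15:.2f} fs; "
      f"free-particle speed hbar k/m = {hbar * k / m:.3e} m/s")
```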

Sunday, August 12, 2007

Kinds of papers

I've seen some recent writings about how theory papers come to be, and it got me thinking a bit about how experimental condensed matter papers come about, at least in my experience. Papers, or more accurately, scientific research projects and their results, seem to fall into three rough groupings for me:
  • The Specific Question. There's some particular piece of physics in an established area that isn't well understood, and after reading the literature and thinking hard, you've come up with an approach for getting the answer. Alternately, you may think that previous approaches that others have tried are inadequate, or are chasing the wrong idea. Either way, you've got a very specific physics goal in mind, a well-defined (in advance) set of experiments that will elucidate the situation, and a plan in place for the data analysis and how different types of data will allow you to distinguish between alternative physics explanations.
  • The New Capability. You've got an idea about a new experimental capability or technique, and you're out to develop and test this. If successful, you'll have a new tool in your kit for doing physics that you (and ideally everyone else) has never had before. While you can do cool science at this stage (and often you need to, if you want to publish in a good journal), pulling off this kind of project really sets the stage for a whole line of work along the lines of The Specific Question - applying your new skill to answer a variety of physics questions. The ideal examples of this would be the development of the scanning tunneling microscope or the atomic force microscope.
  • The (Well-Motivated) Surprise. You're trying to do either The Specific Question or The New Capability, and then all of the sudden you see something very intriguing, and that leads to a beautiful (to you, at least, and ideally to everyone else) piece of physics. This is the one that can get people hooked on doing research: you can know something about the universe that no one else knows. Luck naturally can play a role here, but "well-motivated" means that you make your own luck to some degree: you're much more likely to get this kind of surprise if you're looking at a system that is known to be physically interesting or rich, and/or using a new technique or tool.
Hopefully sometime in the future I'll give an anecdote or two about these. In the mean time, does anyone have suggestions on other categories that I've missed?

Behold the power of google

I am easily amused. They just put up google street-view maps of Houston, and while they didn't do every little road, they did index the driving routes through Rice University. In fact, you can clearly see my car here (it's the silver Saturn station wagon just to the right of the oak tree). Kind of cool, if a bit disturbing in terms of privacy.

Tuesday, August 07, 2007

This week in cond-mat

Another couple of papers that caught my eye recently....

arxiv:0707.2946 - Reilly et al., Fast single-charge sensing with an rf quantum point contact
arxiv:0708.0861 - Thalakulam et al., Shot-noise-limited operation of a fast quantum-point-contact charge sensor
It has become possible relatively recently to use the exquisite charge sensitivity of single-electron transistors (SETs) to detect motion of single electrons at MHz rates. The tricky bit is that a SET usually has a characteristic impedance on the order of tens of kOhms, much higher than either free space (377 Ohms) or typical radio-frequency hardware (50 Ohms). The standard approach that has developed is to terminate a coax line with an rf-SET; as the charge environment of the rf-SET changes, so does its impedance, and therefore so does the rf power reflected back up the coax. One can improve signal to noise by making an LC resonant circuit down at the rf-SET that has a resonance tuned to the carrier frequency used in the measurement. With some work, one can use a 1 GHz carrier wave and detect single charge motion near the rf-SET with MHz bandwidths. Well, these two papers use a gate-defined quantum point contact in a 2d electron gas instead of an rf-SET. See, rf-SETs are tough to make, are fragile, and have stability problems, all because they rely on ultrathin (2-3 nm) aluminum oxide tunnel barriers for their properties. In contrast, quantum point contacts (formed when a 2d electron gas is laterally constricted down to a size scale comparable to the Fermi wavelength of the electrons) are tunable, and like rf-SETs can be configured to have an impedance (typically 13 kOhms) that can be strongly dependent on the local charge configuration. Both the Harvard and Dartmouth groups have implemented these rf-QPCs, and the Dartmouth folks have demonstrated very nicely that theirs is as optimized as possible - its performance is limited by the fact that the current flowing through the QPC is composed of discrete electrons.
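
To make the impedance-matching trick concrete, here is a small Python sketch (the element values are illustrative guesses, not numbers from either paper): a series inductor plus the stray capacitance form a tank that transforms the ~13 kOhm QPC toward 50 Ohms at resonance, so changes in the QPC resistance show up as changes in the reflected rf power.

```python
import numpy as np

Z0 = 50.0          # transmission-line impedance, ohms
L  = 130e-9        # series inductance, H (illustrative value)
C  = 0.2e-12       # stray capacitance to ground, F (illustrative value)
f0 = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"resonance ~ {f0 / 1e9:.2f} GHz; on resonance Z_in ~ L/(R C)")

def gamma(f, R_load):
    """Reflection coefficient of a series-L / shunt-C network terminated by a
    resistive load R_load (the QPC or SET)."""
    w = 2 * np.pi * f
    Z_load = 1.0 / (1.0 / R_load + 1j * w * C)    # R_load in parallel with C
    Z_in = 1j * w * L + Z_load
    return (Z_in - Z0) / (Z_in + Z0)

for R in (12.9e3, 20e3, 50e3):   # near 2e^2/h, partly pinched off, nearly off
    print(f"R = {R / 1e3:5.1f} kOhm: reflected power fraction |Gamma|^2 = "
          f"{abs(gamma(f0, R))**2:.3f}")
```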

arxiv:0708.0646 - Hirsch, Does the h-index have predictive power?
*sigh*. The h-index is, like all attempts to quantify something inherently complex and multidimensional (in this case, scientific productivity and impact) in a single number, of limited utility. Here, Hirsch argues that the h-index is a good predictor of future scientific performance, and takes the opportunity to rebut criticisms that other metrics (e.g. average citations per paper) are better. This paper is a bit depressing to me. First, I think things like the citation index, etc. are a blessing and a curse. It's great to be able to follow reference trails around and learn new things. It's sociologically and psychologically of questionable good to be able to check on the impact of your own work and any competitor whose name you can spell. Second, Hirsch actually cites wikipedia as an authoritative source on how great the h-index is in academic fields beyond physics. I love wikipedia and use it all the time, but citing it in a serious context is silly. Ahh well. Back to trying to boost my own h-index by submitting papers.

Tuesday, July 31, 2007

Recent ACS + cond-mat

A couple of interesting recent results - a busy summer has really cut into my non-essential paper-reading, unfortunately.

One sideline that has popped up with the recent graphene feeding frenzy is trying to understand its optical properties. I don't mean anything terribly exotic - I mean just trying to get a good understanding of why it is possible, in a simple optical microscope, to see any optical contrast from atomically thin single layers of graphene. Papers that have looked at this include:
arxiv:0705.0259 - Blake et al., Making graphene visible
arxiv:0706.0029 - Jung et al., Simple approach for high-contrast optical imaging and characterization of graphene-based sheets
doi:10.1021/nl071254m (Nano Lett., in press) - Ni et al., Graphene thickness determination using reflection and contrast spectroscopy
UPDATE: Here's another one:
doi:10.1021/nl071158l (Nano Lett., in press) - Roddaro et al., The optical visibility of graphene: interference colors of ultrathin graphite on SiO2
It all comes down to the dielectric function of graphene sheets, how that evolves with thickness, and how that ultrathin dielectric layer interacts optically with the oxide coating on the substrate.
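
The whole effect is just thin-film interference, and a few lines of Python will reproduce it. The sketch below uses the standard recursive Fresnel (Airy) formula at normal incidence; the optical constants and the 0.34 nm graphene thickness are rough, literature-style values that should be treated as assumptions rather than authoritative numbers.

```python
import numpy as np

lam = 550e-9   # green light, where the contrast for ~300 nm oxide is near its best

def reflectance(n, d, lam):
    """Normal-incidence reflectance of a layer stack via the recursive (Airy)
    Fresnel formula. n = [ambient, layer1, ..., substrate] complex indices
    (convention n' + i*k); d = thicknesses (m) of the interior layers only."""
    r = (n[-2] - n[-1]) / (n[-2] + n[-1])            # deepest interface
    for j in range(len(d), 0, -1):                   # work back up the stack
        beta = 2 * np.pi * n[j] * d[j - 1] / lam
        r_top = (n[j - 1] - n[j]) / (n[j - 1] + n[j])
        phase = np.exp(2j * beta)
        r = (r_top + r * phase) / (1 + r_top * r * phase)
    return abs(r) ** 2

# Rough optical constants near 550 nm (assumed, literature-style values):
n_air, n_gr, n_ox, n_si = 1.0, 2.6 + 1.3j, 1.46, 4.15 + 0.05j
d_gr, d_ox = 0.34e-9, 300e-9                         # graphene and SiO2 thicknesses

R_bare = reflectance([n_air, n_ox, n_si], [d_ox], lam)
R_gr   = reflectance([n_air, n_gr, n_ox, n_si], [d_gr, d_ox], lam)
print(f"R (bare oxide)    = {R_bare:.4f}")
print(f"R (with graphene) = {R_gr:.4f}")
print(f"contrast (R_bare - R_gr)/R_bare = {(R_bare - R_gr) / R_bare:+.1%}")
```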

Another paper that looks important at a quick read is:
doi: 10.1021/nl071486l (Nano Lett., in press) - Beard et al., Multiple exciton generation in colloidal silicon nanocrystals
To excite the charge carriers in a (direct gap) semiconductor optically typically requires a photon with an energy exceeding the band gap, Eg, between the top of the valence band and the bottom of the conduction band. If an incident photon has excess energy, say 2Eg, what ordinarily happens is that a single electron-hole pair is produced, but that pair has excess kinetic energy. It's been shown recently that in certain direct-gap semiconductor nanocrystals, it's possible to generate multiple e-h pairs with single photons. That is, a photon with energy 3Eg might be able to make three e-h pairs. That's potentially big news for photovoltaics. In this new paper, Beard and coauthors have demonstrated the same sort of effect in Si nanocrystals. This is even more remarkable because bulk Si is an indirect gap semiconductor (this means that because of the crystal structure of Si, taking an electron from the top of the valence band to the bottom of the conduction band requires more momentum than can be provided by just a photon with energy Eg). At a quick read, I don't quite get how this works in this material, but the data are pretty exciting.

Thursday, July 26, 2007

Texas and education

Governor Perry, why did you have to go and ruin my week? It's bad enough that the Texas Republican Party platform explicitly declares that "America is a Christian nation" - so much for not establishing a preferred religion. Now our governor has gone and appointed a creationist anti-intellectual to be the head of the state board of education. Frankly I don't care what his personal religious beliefs are, but I am extremely bothered that the governor has appointed a man who believes that education and intellectualism are essentially useless ("The belief seems to be spreading that intellectuals are no wiser as mentors, or worthier as exemplars, than the witch doctors or priests of old. I share that scepticism.") to run the state educational system. Great move, Governor. Ever wonder why it's hard to convince high tech industry to create jobs here?

Wednesday, July 25, 2007

Ob: Potter

This is the obligatory Harry Potter post. Yes, I read the 7th book, and while it's got a few narrative problems (characters sometimes behaving in deliberately obtuse ways for dramatic necessity - like nearly every episode of Lost), on the whole it was a satisfying wrap-up of the series. If you don't care about spoilers, here is a great parody of the whole thing (via Chad Orzel).

Thursday, July 19, 2007

This week in cond-mat

It's been a busy summer, hence the sparseness of my recent postings. Here are a couple of papers that caught my eye this past week.

arxiv:0707.1923 - Hogele et al., Quantum light from a carbon nanotube
Here the authors do careful time-resolved photoluminescence experiments on individual single-walled carbon nanotubes. By studying the time distribution of photon production, they can get insights into the exciton (bound electron-hole) dynamics that lead to light emission. They find evidence that photons are produced one-at-a-time in these structures, and that multiphoton processes are strongly suppressed. Perhaps nanotubes could be useful as sources of single photons, strongly desired for quantum cryptography applications.
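The standard way to quantify such a claim is a Hanbury Brown-Twiss style measurement: split the emission onto two detectors and histogram the time delays between detection events. For a true single-photon emitter, the normalized coincidence rate near zero delay (g2(0)) drops below 0.5, since the emitter can't put out two photons at once. Here's a rough sketch of that analysis with made-up timestamps - my own illustration, not the authors' analysis code:

```
# Sketch of a Hanbury Brown-Twiss coincidence histogram from photon arrival times.
# The timestamps below are random (uncorrelated) stand-ins for real detector data.
import numpy as np

def g2_histogram(t1, t2, bin_width, max_delay):
    """Normalized coincidence histogram between sorted arrival-time arrays t1 and t2."""
    bins = np.arange(-max_delay, max_delay + bin_width, bin_width)
    delays = []
    for t in t1:
        lo = np.searchsorted(t2, t - max_delay)
        hi = np.searchsorted(t2, t + max_delay)
        delays.extend(t2[lo:hi] - t)
    counts, edges = np.histogram(delays, bins=bins)
    # normalize to the coincidence rate expected for uncorrelated (Poissonian) light
    T = max(t1.max(), t2.max()) - min(t1.min(), t2.min())
    expected = len(t1) * len(t2) * bin_width / T
    return edges[:-1] + bin_width / 2, counts / expected

rng = np.random.default_rng(0)
t1 = np.sort(rng.uniform(0, 1.0, 20000))   # detector 1 clicks (seconds)
t2 = np.sort(rng.uniform(0, 1.0, 20000))   # detector 2 clicks
centers, g2 = g2_histogram(t1, t2, bin_width=1e-6, max_delay=50e-6)
print(g2[len(g2) // 2])   # ~1 for uncorrelated light; < 0.5 flags a single-photon emitter
```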

arxiv:0707.2091 - Quek et al., Amine-gold linked single-molecule junctions: experiment and theory
This is a nice example of a mixed experiment/calculation paper in molecular electronics that actually has an interesting point. Very pretty experimental work by Venkataraman et al. at Columbia has shown that NH2-terminated molecules form better-defined contacts with Au electrodes than the conventional thiol (sulfur)-based chemistry. For example, looking at huge data sets from thousands of junction configurations, benzene diamine glommed into a Au break junction has a well-defined most likely conductance of around 0.0064 x 2e^2/h. Now theory collaborators have done a detailed examination via density functional theory of more than a dozen likely contact geometries and configurations for comparison. The calculations do show a well-defined junction conductance that's robust - however, the calculations overestimate the conductance by a factor of seven compared to experiment. The authors say that this shows that DFT likely misses important electronic correlation effects. Hmmm. It's a neat result, and now that they mention it, almost every non-resonant molecular conduction calculation I've ever seen based on DFT overestimates the conductance by nearly an order of magnitude. The only underestimates of molecular conduction that come to mind are in the case of Kondo-based mechanisms, which can strongly boost conductance and are always missed by ordinary DFT.
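Incidentally, the "most likely conductance" in these break junction experiments comes from a very simple statistical procedure: pool the measured conductance values from thousands of breaking traces into a histogram and locate the peak. A cartoon of that procedure with synthetic data standing in for real traces (my own sketch, not the authors' analysis code):

```
# Cartoon of extracting a most-likely junction conductance from a histogram.
# The "data" are synthetic: a log-normal molecular plateau plus a broad background.
import numpy as np

G0 = 2 * (1.602e-19) ** 2 / 6.626e-34   # conductance quantum 2e^2/h, in siemens

rng = np.random.default_rng(1)
plateaus = 0.0064 * np.exp(0.3 * rng.standard_normal(5000))   # units of G0
background = 10 ** rng.uniform(-4, -1, 20000)                 # featureless background
samples = np.concatenate([plateaus, background])

counts, edges = np.histogram(np.log10(samples), bins=200)
i = np.argmax(counts)
peak = 10 ** (0.5 * (edges[i] + edges[i + 1]))
print(f"most likely conductance ~ {peak:.4f} G0 = {peak * G0:.2e} S")
```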

Friday, July 13, 2007

This is just silly.

I got an email about an audio conference on faculty recruiting titled "How to Recruit Gen X Faculty Members". I shouldn't pre-judge, and I should be glad that anyone is trying to improve the faculty recruiting process, but it's sad that anyone needs to be told this stuff. The premise is this:
The era when colleges and universities could rely on prestige and a little cash to recruit top academic talent is gone. Increasingly, up-and-coming faculty talent is from Generation X, the much derided and little understood generation that is much more than the Gap-employee stereotype you heard about a decade ago. This generation has a different set of work priorities, and colleges that understand these priorities stand a better chance of landing the best candidates and keeping them.
Riiiggght. It must be because of their generational culture, not the fact that two-income families are vastly more common now, and there are many more women faculty candidates than forty years ago, etc. The topics to be covered include:
  • Why prestige and tenure may not matter as much to this generation as previous generations, and what that means for recruiting.

  • The importance of being "family friendly" and how job candidates judge that now that all colleges are claiming that they are.

  • How Gen X professors view hierarchy and what that means in the context of departments.

  • The importance of transparency and collegiality.
So, basically we can sum this up in a few words that generalize beyond the university setting: People don't want to work at places where they will be treated poorly. People may want to actually have lives outside of their jobs, and like to work at places that understand that. Smart, educated people don't like being told what to do by people who are clueless just because the clueless have seniority. People don't like it when their employers are rude or have obscure, byzantine policies. My goodness, those Gen X slackers are totally unreasonable.


    Tuesday, July 10, 2007

    Organic Microelectronics workshop

    I just spent two days at the 3rd Annual Organic Microelectronics Workshop, meeting this year in Seattle. The workshop, sponsored jointly by the ACS, MRS, IEEE, and APS, was really very good - about 90 participants, and most of the big movers in the field. The talks were a great mix from the very applied (e.g. trying to optimize solvent conditions to avoid the coffee ring problem when inkjet or gravure printing solution-processable organic semiconductors) to the basic physics and chemistry of these materials. Among the things that I learned:
    • Among the single-crystal organic semiconductors, rubrene is truly special in a number of ways. The most important point from the perspective of understanding electronic transport is that it can be made particularly pure, and oxidation in this material is reversible, unlike, e.g., pentacene.
    • With polymer electrolytes, it is possible to make field-effect devices with gated surface charge densities exceeding 10^14 carriers/cm^2 (a quick back-of-the-envelope estimate is sketched after this list). I'd seen a couple of papers on this, and it's looking very impressive as a technique.
    • Clever phase separation tricks can produce self-assembling organic devices that encapsulate themselves within a protective coating.
    • RFID tags from Si are very very cheap.
    • When developing a manufacturing process, "'Good enough' is good enough, and 'better' is not necessarily better."
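On the polymer electrolyte point, the huge gate-induced charge density follows from a one-line estimate (my own back-of-the-envelope numbers, nothing quoted at the workshop): the electric double layer acts like a capacitor of order microfarads per square centimeter, so a volt or two of gate bias moves more than 10^14 electrons per square centimeter - one to two orders of magnitude beyond what a conventional oxide gate dielectric provides.

```
# Back-of-the-envelope sheet carrier density from electric-double-layer gating, n = C*V/e.
# The capacitance and bias are assumed, order-of-magnitude values.
e = 1.602e-19      # C
C_area = 10e-6     # F/cm^2, typical double-layer capacitance scale
V_gate = 2.0       # V
n = C_area * V_gate / e
print(f"n ~ {n:.1e} carriers/cm^2")   # ~1e14, vs roughly 1e12-1e13 for an oxide-gated FET
```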

    Wednesday, July 04, 2007

    This ought to be fun.

    Looks like those folks at Steorn are going to do a 'demo' of their alleged free energy machine. I think I can safely predict (a) Steorn will claim success; (b) the reporting will generally give them the benefit of the doubt and "report the controversy"; and (c) we will not cure all the world's energy needs with magnet-based machines that violate the first law of thermodynamics.

    UPDATE: Wow - it turns out that I'd overestimated Steorn. They couldn't get their demo to work. Apparently they'd decided to ignore back-ups, rehearsals, and contingency planning in addition to the laws of physics. So, was this self-deception, the long con, a postmodern publicity stunt designed to show how effectively they could market vaporware, or something else?

    Tuesday, July 03, 2007

    four interesting ACS journal articles

    Here are four recent articles from ACS journals, two from Nano Letters and two from JACS, that made an impression on me.

    Dattoli et al., Fully transparent thin-film transistor devices based on SnO2 nanowires
    The authors of this paper have made fully functional n-type FETs based on lightly doped tin oxide nanowires with indium tin oxide source, drain, and gate electrodes, and the performance of these FETs is reasonable when compared with the ones currently driving the pixels in your flat panel display. Since the entire FET structure is very transparent in the visible, this could have some significant applications in display technologies.

    Angus et al., Gate-defined quantum dots in intrinsic silicon
    People have been making Coulomb blockade devices out of puddles of gate-confined two-dimensional electron gas for nearly two decades now. Mostly this has been done at the GaAs/AlGaAs interface, and more recently it's been achieved in nanotubes, semiconductor nanowires, and SiGe heterostructures. The authors of this work have managed to do this nicely at the Si/SiO2 interface in a MOSFET. What this really shows is how well the interface states at that junction are passivated, how nicely the authors can make gates without messing up the surrounding material, and that properly made Ohmic contacts in Si FETs can operate well down to cryogenic temperatures. This could be a very important paper if one can build on it to manipulate electron spins in these dots - unlike GaAs structures, there should be many fewer nuclear spins to worry about for effects like hyperfine-induced decoherence of electron spins.

    Albrecht et al., Intrinsic multistate switching of gold clusters through electrochemical gating
    Lots of people in the molecular electronics community have pointed out the similarities and differences between three-terminal (electrostatically gated) molecular devices and solution-based electrochemical oxidation/reduction experiments in electron transfer. These authors are some of the only experimentalists out there that I have seen really delving into this, trying to unravel how the electrochemical case really works. This experiment is analogous to the Coulomb blockade experiment of the preceding paper, but performed using an STM in an electrochemical medium, with ligand-protected gold clusters playing the role of the quantum dot.

    Shim et al., Control and measurement of the phase behavior of aqueous solutions using microfluidics
    This isn't particularly deep, but it sure is cool. Microfluidics has come a long way, and the extremely nice properties of polydimethylsiloxane (PDMS) have been a big help. That's the transparent silicone rubber used for many microfluidic applications, as well as being related to the silicone used for soft contact lenses and breast implants. The authors here have carefully used the water and gas permeability of thin PDMS layers to control the concentrations of solutes in water-based solutions, allowing them to do things like gently make supersaturated conditions to control crystallization of proteins. We're just at the leading edge of the potential applications for these kinds of systems.

    Tuesday, June 26, 2007

    This week in cond-mat

    Two good review articles appeared on cond-mat in the last week....

    arxiv:0706.3015 - Bibes et al., Oxide spintronics
    This is a nice overview of recent developments in using transition metal oxides, which often exhibit strong electronic correlations, for measurements and devices involving spin. This includes materials like the manganites (colossal magnetoresistance oxides), half-metals (magnetite, CrO2), magnetically doped oxides (TiO2, ZnO) as wide-band gap dilute magnetic semiconductors, and new multiferroic materials (ferroelectricity + magnetic order all wrapped up in one system). Good stuff.

    arxiv:0706.3369 - Saminadayar et al., Equilibrium properties of mesoscopic quantum conductors
    Despite being rendered in some species of pdf that my viewer finds nearly unreadable, this is a very nice article all about equilibrium quantum effects in nanostructures comparable in size to the electronic phase coherence length. This includes persistent currents in small metal and semiconductor loops. These persistent currents (flowing without dissipating!) result in part from the requirement that the electronic phase be single-valued when traversing a loop trajectory in a coherent manner. The persistent currents are very challenging to measure, and as far as I know there continues to be controversy about whether the magnitude and sign of the resulting magnetic moments are consistent with theory.
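To see where a persistent current comes from in the simplest possible setting, here is a toy calculation (mine, not from the review): non-interacting, spinless electrons on a clean one-dimensional ring threaded by a magnetic flux. Single-valuedness of the wavefunction quantizes the allowed momenta around the loop, the single-particle energies then depend on the flux, and the equilibrium current is I = -dE_total/dPhi. The ring radius and electron number below are assumed, and real samples are of course messier (disorder, interactions, ensembles of rings).

```
# Toy persistent current: spinless, non-interacting electrons on a clean 1D ring.
# Parameters are assumed, illustrative values.
import numpy as np

hbar, m_e, e, h = 1.055e-34, 9.109e-31, 1.602e-19, 6.626e-34
R = 1e-6           # ring radius (m)
N = 100            # number of electrons on the ring
phi0 = h / e       # flux quantum

def total_energy(phi_frac):
    """Ground-state energy vs flux (in units of phi0): fill the N lowest ring levels."""
    n = np.arange(-200, 201)
    E_n = (hbar ** 2 / (2 * m_e * R ** 2)) * (n - phi_frac) ** 2
    return np.sort(E_n)[:N].sum()

phis = np.linspace(-0.5, 0.5, 201)
E = np.array([total_energy(p) for p in phis])
I = -np.gradient(E, phis * phi0)              # equilibrium (persistent) current, in amperes
print(f"max |I| ~ {np.abs(I).max():.1e} A")   # ~1e-10 A for these clean, ballistic numbers
```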

    Thursday, June 21, 2007

    ACS journal articles

    One reason why I've been writing up arxiv preprints rather than published articles in PRL/APL/Science/Nature is that the APS Virtual Journals do a very good job of aggregating articles from those sources. The Virtual Journal of Nanoscale Science and Technology in particular is one of my favorite places to look for nano-themed condensed matter work. One unfortunate flaw of the virtual journals, however, is that they do not have a nice agreement in place to let them include links to articles published in ACS journals. That's really too bad, since an awful lot of very neat results have been showing up there, particularly in Nano Letters, and I suspect that the new longer-paper ACS Nano is going to be of similar high quality. So, I'm going to try pointing out a couple of JACS/Nano Lett/ACS Nano articles that catch my eye every week or two.

    Monday, June 18, 2007

    Prolific theorists

    How do they do it? No, really. How can some theorists be so prolific? I know they're not constrained by little things like having to get experiments to work, but surely it takes a certain amount of intellectual effort and creativity (or at least, supervision of students and postdocs, or correspondence with collaborators at other institutions) to produce a decent paper. At a little before the midpoint of the year, I can think of two CM theorists who have already produced, between the two of them, 23 preprints on the arxiv. That's something like one paper every 2.5 weeks for each of these people. Wow.

    Sunday, June 17, 2007

    Grand challenges

    As a condensed matter blogger, I am obligated to comment on the new report out from the National Research Council, titled "Condensed-Matter and Materials Physics: the Science of the World Around Us". This report is intended to list grand challenges for the discipline in the coming decade(s). I agree with the title, of course. As I wrote when I started this blog, while high energy physics and astrophysics grab much of the cachet and popular attention, it's very hard to dispute that condensed matter physics has had a much more direct impact on the daily lives of people living in developed societies. The transistor, the solid-state laser, and magnetic data storage are three prime examples of technologies that originated from condensed matter physics.

    I haven't read the full report yet, but I had read the interim report and know several of the people who put this thing together. I think the substance is definitely there, though I do wonder if the summary suffers because of the decision to write the grand challenges in language for the consumption of the lay public. The challenges are:
    1. How do complex phenomena emerge from simple ingredients? Phrased this way this challenge sounds rather naive; the whole point of condensed matter physics is that rich phenomena can be emergent from systems with many (simply) interacting degrees of freedom. Still, this gets to the heart of the discipline and many outstanding questions. Why can one material system exhibit metallic behavior, superconductivity, and antiferromagnetic insulating order with only minor tweaks in composition? Figure that one out, and win a trip to Stockholm.
    2. How will the energy demands of future generations be met? This is clearly not the purview of condensed matter alone, but there is little doubt that our discipline can contribute here. Photovoltaic materials, supercapacitor and battery electrodes, catalytically active materials, light/strong composites, novel superconductors for transmission.... There are any number of reasons why investing in CMMP is an intelligent component of a sound energy policy.
    3. What is the physics of life? This is really a biophysics question, though certainly condensed matter physics is closely relevant. At the very least, the principles and methods of condensed matter physics are highly likely to play roles in unraveling some of the basic questions in living systems (e.g., How does the chemical energy released in the conversion of ATP to ADP actually get translated into mechanical motion in the protein motor that turns the flagellum of a bacterium?).
    4. What happens far from equilibrium and why? This is a good one. Equilibrium statistical mechanics and its quantum form are tremendously useful, but nonequilibrium problems are very important and there exists no general formulation for treating them. Heck, any electronic transport measurement is a nonequilibrium experiment, and beyond linear response theory life can get very complicated. Add in strong electronic correlations, and you are at the frontiers of some of the most interesting work (to me, anyway) going on right now.
    5. What new discoveries await us in the nanoworld? Wow - this one really sounds like a sixth-grade filmstrip title. I would've preferred something like, "What new physics will be found when we control materials on the nanoscale?" The ability to manipulate and engineer systems with precision approaching the atomic scale lets us examine systems (e.g., single quantum impurities; candidate qubits) that can reveal rich physics as well as possible applications to technology.
    6. How will the information technology revolution be extended? I don't know.... While this is certainly a useful goal of CMMP, and this point clearly encompasses exciting physics relevant in quantum computation as well as things like plasmonics and nanophotonics, I'm not sure that this is really a physics grand challenge per se - more of an engineering challenge.
    So, what's missing? Well, I'm sure people will make suggestions in the comments, but here's one from me (though I'm sure that the NRC panelists consider this to be subsumed under point 1 above): Is there an efficient and exact computational method for finding the ground state of the general strongly interacting, strongly correlated many-electron problem? Basically I want something better than DFT that handles strong correlations. That would definitely be a grand challenge, though it's way too detailed ("physicsy") to fit the structure used in the above list.

    The report also emphasizes the fact that research funding in the physical sciences, particularly CMMP, is lagging that in other nations these days, and that this is probably not to our competitive advantage. The demise of long-term industrial R&D in the US has not helped matters. None of this is news, really, but one major purpose of reports like this one is to send a message to Congress. Hence the use of non-physicsy language for the challenges, I'm sure.

    Wednesday, June 13, 2007

    Albany Nanotech

    I returned today from a 1-day visit to Albany Nanotech, the absolutely enormous joint venture between SUNY Albany and a whole slew of collaborators, including International Sematech. In terms of facilities, this place is unparalleled. They have multiple photolithography tools for 300mm wafer processing, including standard (in-air, capable of 65 nm features), immersion (using the refractive index of very pure water to shrink the wavelength, allowing features down to 33 nm), EUV (reflective optics, 13.6 nm wavelength source, one of only two such systems in the world), and e-beam. They have every etching, deposition, polishing, and characterization tool you can think of. 80000 ft^2 of cleanroom space. I confess: I have facility envy. No other university could pull this off - this is an unprecedented confluence of industrial investment, educational initiative, and gobs of state funding, and seems to me like a sustainable model, at least for the next decade or more. No wonder Sematech is shifting lots (most?) of their operations to Albany.
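For a sense of scale on those lithography numbers: the usual Rayleigh-type estimate puts the minimum printable feature at roughly k1 x lambda / NA, and immersion in water (refractive index about 1.44) is what lets the numerical aperture climb above 1. A crude sketch with assumed k1 and NA values (my numbers, not Albany's tool specifications) lands in the right ballpark; the quoted feature sizes also lean on resolution-enhancement tricks beyond this bare estimate.

```
# Rough Rayleigh-criterion scaling for 193 nm lithography; k1 and NA are assumed values.
lam = 193.0   # nm, ArF excimer source
k1 = 0.3      # assumed process factor
for label, NA in [("dry", 0.93), ("water immersion", 1.35)]:
    print(f"{label:16s} NA = {NA:.2f}  ->  ~{k1 * lam / NA:.0f} nm features")
```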

    Saturday, June 09, 2007

    This week in cond-mat

    Two more papers that look interesting.

    arxiv:0706.0792 - Koop et al., Persistence of the 0.7 anomaly of quantum point contacts in high magnetic fields
    One of the neatest results (in my opinion) in mesoscopic physics is the appearance of conductance quantization in quantum point contacts, first shown in the late 1980s. The basic idea is simple. Start with a two-dimensional electron gas such as that formed at the interface between GaAs and modulation-doped AlGaAs. Metal gates on top of such a structure can be used to deplete the electron gas in particular places. Two closely spaced gates may be used to create a narrow constriction between two large reservoirs of 2d electron gas. As the constriction width is reduced until it is comparable to the Fermi wavelength of the confined electrons, the conductance through the constriction is quantized (at zero magnetic field) in integer multiples of G0 = 2e^2/h, the quantum of conductance (about 1/(13 kOhms)). That is, each spatial mode (each transverse subband of the constriction) can transport e^2/h worth of conductance per spin degree of freedom. Indeed, at very large magnetic fields, the conductance is quantized as integer multiples of G0/2, as one would expect if the different subbands are spin-split due to the Zeeman effect. This is all well explained by single-particle theory and the Landauer-Buttiker picture of conduction through small systems. In very clean quantum point contacts, additional structure is seen at 0.7 G0 - this is the so-called 0.7 anomaly. In the presence of a little bit of in-plane magnetic field, this approaches 0.5 G0, and therefore looks like there is some spontaneous spin-splitting, and this is a many-body effect that is the result of some kind of electron-electron correlation physics. This paper is an extensive study of 14 such point contacts, fully mapping out their magnetic field dependence and nonequilibrium (large bias voltage) properties.
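If you've never seen where those steps come from, the textbook cartoon is the Landauer picture with a smooth saddle-point constriction: each transverse subband contributes (nearly) 2e^2/h once its threshold drops below the Fermi energy. Here's a minimal sketch of that single-particle model with assumed, illustrative energy scales - note that this is precisely the picture that does not produce the 0.7 feature, which is the whole point of the paper.

```
# Single-particle Landauer conductance of a saddle-point constriction (illustrative scales).
import numpy as np

def qpc_conductance(E_F, hbar_omega_y=2.0, hbar_omega_x=0.5, n_modes=10):
    """Conductance in units of G0 = 2e^2/h; energies in arbitrary units (say meV)."""
    n = np.arange(n_modes)
    E_n = hbar_omega_y * (n + 0.5)                      # transverse subband thresholds
    T_n = 1.0 / (1.0 + np.exp(-2 * np.pi * (E_F - E_n) / hbar_omega_x))
    return T_n.sum()

for E_F in np.arange(0.0, 8.5, 0.5):                    # stand-in for sweeping the gate voltage
    print(f"E_F = {E_F:4.1f}   G = {qpc_conductance(E_F):5.2f} x 2e^2/h")
```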

    arxiv:0706.0906 - Clark et al., Nonclassical rotational inertia in single crystal helium
    The controversy over whether 4He has a true supersolid phase continues. This week this article appeared in Science, summarizing a number of recent experiments, and strongly suggesting that single crystals of pure 4He should not show a real supersolid phase - basically the claim is that the effects ascribed to such a phase are really due to disorder (glassy 4He at grain boundaries between crystals? 3He impurities somehow?). Now comes this paper from Moses Chan's group, arguing from new experiments that even carefully nucleated and grown single crystals of 4He show evidence of supersolid behavior (in the form of a nonclassical moment of rotational inertia). Hmmm. Neat, clever experimental design. It'll be interesting to see how this all pans out.

    Monday, June 04, 2007

    Link plus a couple of papers

    The Incoherent Ponderer has a fascinating analysis up of the statistics of the PhD-to-faculty pipeline in physics. The one thing missing (for lack of a good source of statistics) is how many physics PhDs go on to become faculty in a different discipline. This is increasingly common in this age of interdisciplinary work. For example, while by the IP's rankings Rice only places 1.9 percent of its PhDs as faculty members in top-50 physics departments, I can think of a few who are now faculty in, e.g., EE, Mat Sci, BioE, Chemistry, etc. It would be very interesting to look at the trends over the last twenty or thirty years. One reason for the pedigree effect is that good science is correlated with having cutting-edge resources - as fancier facilities (at least in condensed matter) have trickled down to the masses, so to speak, have things become more egalitarian?

    Two more points.... First, I have some nagging doubts about the validity of some of those numbers. I can already count 7 Stanford PhD alumni I know who hold assistant/assoc. faculty positions in top-50 universities. According to the AIP numbers, that's 25% of all of the ones out there. I find that hard to believe. Second, Chad Orzel has a very valid observation that goes to the heart of a pathology in our field. 93% of all colleges and universities are not in the top 50. As a discipline I think we do real sociological damage to our students when we brainwash them into thinking that the only successful outcome of a graduate degree is a tenured job at Harvard. That kind of snobbery is harmful, and probably has something to do with attrition rates. People should not decide that they're failures because R1 academia isn't what they want to do. I thought hard about taking a job offer from a college, and I still resent the fact that some people clearly thought I was loopy for even considering that path.

    arxiv:0706.0381 - Fiebig et al., Conservation of energy in coherent backscattering of light
    This paper is at once a very nice piece of experimental work, and an example of the kind of argument that I really don't like. In mesoscopic physics, there is a phenomenon known as weak localization for electrons. Consider an electron moving through a disordered medium, and look at one particular trajectory that contains a closed loop (made up of straight propagation pieces and elastic scattering events). Feynman says that the amplitude corresponding to this trajectory is a complex number whose phase is found by adding up the phase from propagation along the straight segments plus the phase shifts from the scattering events. Now consider a second trajectory, identical to the first, but traversing the loop in the opposite direction. It turns out that the amplitudes of these two trajectories interfere constructively for backscattering by the loop. That is, the quantum probability for getting through the loop is below the classical value, and the quantum probability for getting reflected by the loop exceeds the classical value. It turns out something very analogous to this can happen for light propagating through a diffusive medium, and this can be the basis for some really cool things, like random lasers (where the back-scattering itself acts like an effective cavity!). The authors of this paper show the physics of this beautifully, but they present it in the form of a straw man argument, saying that the coherent scattering result (with greater than classical backscattering) looks at first glance like it violates conservation of energy. No, it doesn't. It looks like coherent scattering. It doesn't look like a violation of conservation of energy any more than typical diffraction does.
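The factor-of-two enhancement at exact backscattering is easy to see in a toy numerical experiment (mine, not from the paper): a closed loop and its time-reversed partner accumulate exactly the same phase, so that pair always interferes constructively in the backscattering direction, while unrelated paths have random relative phases and their cross terms average away. Energy is conserved because the extra backscattered intensity is paid for by reduced intensity in other directions.

```
# Toy demonstration of coherent backscattering: time-reversed loop pairs add in phase.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_realizations = 200, 2000
enh_back, enh_away = [], []
for _ in range(n_realizations):
    A = np.exp(1j * rng.uniform(0, 2 * np.pi, n_paths))      # random loop amplitudes
    I_classical = 2 * np.sum(np.abs(A) ** 2)                  # incoherent sum: loop + reversed loop
    # exact backscattering: the time-reversed partner carries the identical phase
    enh_back.append(np.abs(np.sum(A + A)) ** 2 / I_classical)
    # away from backscattering: the partner picks up an unrelated extra phase
    A_rev = A * np.exp(1j * rng.uniform(0, 2 * np.pi, n_paths))
    enh_away.append(np.abs(np.sum(A + A_rev)) ** 2 / I_classical)

print("enhancement at backscattering ~", round(np.mean(enh_back), 2))   # ~2
print("enhancement away from it      ~", round(np.mean(enh_away), 2))   # ~1
```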

    arxiv:0705.4260 - Huang et al., Experimental realization of a silicon spin field-effect transistor
    For nearly 17 years people have been trying to make a spin transistor of the type discussed here. The idea is that spins are injected from a magnetically polarized source, traverse a channel region, and then try to leave through a magnetically polarized drain. Depending on the gate electric field, the moving spins precess and either get out of the system or not depending on their eventual alignment relative to the drain magnetization. This has historically been extremely difficult for many reasons, not the least of which are the difficulty in injecting highly polarized carriers into a semiconductor and the annoying fact that spin polarization, unlike charge, can relax away to nothing. Well, this is a pretty convincing demo of a device quite close in concept to the original idea, though it's not a field-effect geometry as first conceived. Very pretty data.
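For reference, the original proposal this field has been chasing (the Datta-Das transistor) fits in one formula: spins injected along the channel precess through an angle theta = 2 m* alpha L / hbar^2, where alpha is the gate-tunable Rashba spin-orbit coupling and L is the channel length, and for parallel source and drain magnetizations the transmitted spin signal goes roughly like cos^2(theta/2). Here's a cartoon of that scaling with assumed III-V-like numbers - emphatically not a model of the Si device in this paper, which operates differently.

```
# Cartoon of Datta-Das spin precession vs gate-tunable Rashba coupling (assumed numbers).
import numpy as np

hbar = 1.055e-34
m_star = 0.05 * 9.109e-31       # assumed effective mass (InGaAs-like)
L = 1e-6                        # channel length (m)

for alpha_eVm in np.linspace(0, 1e-11, 9):                   # Rashba parameter in eV*m
    alpha = alpha_eVm * 1.602e-19                            # convert to J*m
    theta = 2 * m_star * alpha * L / hbar ** 2               # net precession angle (rad)
    print(f"alpha = {alpha_eVm:.1e} eV*m   precession = {np.degrees(theta) % 360:6.1f} deg   "
          f"relative drain signal ~ {np.cos(theta / 2) ** 2:.2f}")
```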