Thursday, August 28, 2014

Two cool papers on the arxiv

The beginning of the semester is a crazy time, so blogging is a little light right now.  Still, here are a couple of recent papers from the arxiv that struck my fancy.

arxiv:1408.4831 - "Self-replicating cracks: A collaborative fracture mode in thin films," by Marthelot et al.
This is very cool classical physics.  In thin, brittle films that adhere only moderately to a substrate, there can be a competition between the stresses that drive crack propagation and those that drive delamination of the film from the substrate.  The result can be very pretty pattern formation and impressively rich behavior.  A side note:  all cracks are really nanoscale phenomena - the actual breaking of bonds at the tip of a propagating crack is firmly in the nano regime.

arxiv:1408.6496 - "Non-equilibrium probing of two-level charge fluctuators using the step response of a single electron transistor," by Pourkabirian et al.
I've written previously (wow, I've been blogging a while) about "two-level systems" (TLSs), the local dynamic degrees of freedom that are ubiquitous in disordered materials.  These little fluctuators have a statistically broad distribution of level asymmetries and tunneling times.  As a result, when perturbed, the ensemble of TLSs does not respond with a simple exponential decay (as a system with a single characteristic time scale would); instead, the ensemble response decays logarithmically in time.  For my PhD I studied such (agonizingly) slow relaxations in the dielectric and acoustic response of glasses (like SiO2) at cryogenic temperatures.   Here, the authors use the incredible charge sensitivity of a single-electron transistor (SET) to look at the relaxation of the local charge environment near such disordered dielectrics.  The TLSs often have electric dipole moments, so their relaxation changes the local electrostatic potential near the SET. Guess what:  logarithmic relaxations.  Cute, and it brings back memories of loooooong experiments from grad school.
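
(An aside for the technically curious: the logarithm is easy to see numerically.  Here's a minimal sketch of my own - not the authors' analysis - assuming relaxation times spread uniformly in log(tau), as in the standard tunneling model; the averaged response then falls off roughly linearly in log(t) rather than exponentially.)

    import numpy as np

    # Relaxation times spread uniformly in log(tau), as in the standard tunneling
    # model for an ensemble of two-level systems (arbitrary time units).
    rng = np.random.default_rng(0)
    taus = 10.0 ** rng.uniform(-3, 5, size=100_000)

    t = np.logspace(-2, 4, 30)   # observation times after the perturbation

    # Ensemble response = average over one exponential decay per fluctuator.
    response = np.mean(np.exp(-t[:, None] / taus[None, :]), axis=1)

    # A single tau would give exp(-t/tau); the broad ensemble instead decays
    # roughly linearly in log(t) between the shortest and longest tau.
    for ti, ri in zip(t, response):
        print(f"t = {ti:10.2f}   response = {ri:.3f}")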

Wednesday, August 20, 2014

Science and engineering research infrastructure - quo vadis?

I've returned from the NSF's workshop regarding the successor program to the NNIN.  While there, I learned a few interesting things, and I want to point out a serious issue facing science and engineering education and research (at least in the US).
  • The NNIN has been essentially level-funded since 2010 at $16M/yr for the whole program, and there are no indications that this will change in the foreseeable future.  (Inflation erodes the real value of that sum over time as well.)  The NNIN serves approximately 6000 users per year (with turnover of about 2200 users/yr).  For perspective, a single truly cutting-edge transmission electron microscope costs about $8M.  The idea that the NNIN program can directly create bleeding-edge shared research hardware across the nation is misguided.
  • For comparison, the US DOE has five nano centers.  The typical budget for each one is about $20M/yr, and each nano center can handle around 450 users/yr.  Note that these nano centers are very different things from NNIN sites - they do not charge user fees, and they are co-located with some truly unique characterization facilities (synchrotrons, neutron sources).  Still, the DOE is spending roughly seventeen times as much per user per year in their program as the NNIN (see the quick arithmetic after this list).
  • Even the DOE, with their much larger investment, doesn't really know how to handle "recapitalization".  That is, there was money available to buy cutting edge tools to set up their centers initially, but there is no clear, sustainable financial path to be able to replace aging instrumentation.  This is exactly the same problem faced by essentially every research university in the US.  Welcome to the party.  
  • Along those lines:  As far as I can tell (and please correct me if I'm wrong about this!), every US federal granting program intended to have a component for building shared research infrastructure at universities (this includes the NSF MRI program, MRSEC, STC, ERC, and CCI; DOE instrumentation grants; DOE centers like the EFRCs; and DOD equipment programs like DURIPs) is either level-funded or facing declining budgets.  Programs like these also tend to favor the acquisition of new, unusual tools over standard "bread-and-butter" instruments.  Universities are going to have to rely increasingly on internal investment to acquire and replace instrumentation.  Given that there is already considerable resentment and concern about perceived stratification of research universities into "haves" and "have-nots", it's hard to see how this is going to get much better any time soon.
  • To potential donors who are really interested in the problem of graduate (and advanced undergrad) hands-on science and engineering education:  PLEASE consider this situation.  A consortium of donors who raised, say, $300M in an endowment could support the equivalent of the NNIN on the investment returns for decades to come.  This could have an impact on thousands of students and postdocs per year, for years at a time.  The idea that this is something of a return to the medieval system of rich patrons supporting the sciences is distressing.  However, given the constraints of government finances and the enormous sums of money out there in the hands of some brilliant, tech-savvy people who appreciate the importance of an educated workforce, I hope someone will take this possibility seriously.  To put this in further perspective:  I heard on the radio yesterday that the college athletics complex being built at Texas A&M University costs $400M.  Think about that.  A university athletic booster organization was able to raise that kind of money for something that narrowly focused (sorry, Aggies, but you know it's true).
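
For the curious, here is the back-of-the-envelope arithmetic behind the per-user comparison and the endowment suggestion above (rounded figures from the bullets; the 5% payout rate is my assumption, not an official number):

    # Per-user spending, using the rounded figures quoted above.
    nnin_budget, nnin_users = 16e6, 6000        # $/yr and users/yr for the whole NNIN
    doe_budget, doe_users = 20e6, 450           # $/yr and users/yr for one DOE nano center

    nnin_per_user = nnin_budget / nnin_users    # ~ $2,700 per user per year
    doe_per_user = doe_budget / doe_users       # ~ $44,000 per user per year
    print(f"ratio: {doe_per_user / nnin_per_user:.0f}x")       # ~ 17x

    # Endowment estimate: an assumed ~5% annual payout on a $300M endowment
    # yields ~$15M/yr, comparable to the NNIN's entire annual budget.
    print(f"endowment payout: ${0.05 * 300e6 / 1e6:.0f}M/yr")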

Sunday, August 17, 2014

Distinguishable from magic?

Arthur C. Clarke's most famous epigram is that "Any sufficiently advanced technology is indistinguishable from magic."  A question that I've heard debated in recent years is, have we gone far enough down that road that it's adversely affecting the science and engineering education pipeline?  There was a time when young people interested in technology could rip things apart and actually get a moderately good sense of how those gadgets worked.  This learning-through-disassembly approach is still encouraged, but the scope is much more limited. 

For example, when I was a kid (back in the dim mists of time known as the 1970s and early 80s), I ripped apart transistor radios and at least one old, busted TV.  Inside the radios, I saw how the AM tuner worked by sliding a metal contact along a wire solenoid - I learned later that this was tuning an inductor-capacitor resonator, and that the then-mysterious diodes in there (the only parts on the circuit board with some kind of polarity stamped on them, aside from the electrolytic capacitors on the power-supply side) were somehow important in getting the signal out.  Inside the TV, I saw a whopping big transformer, some electromagnets, and a screen that was actually the front face of a big (13-inch diagonal!) vacuum tube.  My dad explained to me that the electromagnets rastered an electron beam back and forth in there, smacking it into phosphors on the inside of the screen.  Putting a big permanent magnet up against the front of the screen distorted the picture and warped the colors in a cool way that depended strongly on the distance between the magnet and the screen, and on the magnet's orientation, thanks to the magnet screwing with the electron beam's trajectory.
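
(For anyone curious about that tuning trick, the resonance condition is simple enough to sketch.  Here's a quick illustration with assumed, typical component values - not measurements from any particular radio - showing how changing the inductance sweeps the resonance across the AM band.)

    import math

    # Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C)).
    # Assumed, typical values for an AM receiver front end.
    C = 200e-12                      # ~200 pF of tuning capacitance
    for L_uH in (100, 200, 300):     # effective inductance changes as the tap slides along the coil
        L = L_uH * 1e-6
        f = 1.0 / (2 * math.pi * math.sqrt(L * C))
        print(f"L = {L_uH:3d} uH  ->  f ~ {f / 1e3:5.0f} kHz")   # lands in the AM broadcast band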

Now, a kid opening up an iPod or a little portable radio will find undocumented integrated circuits that do the digital tuning.  Flat-screen LCD TVs are also much more black-box-like (though the light source is obvious), again containing lots of integrated circuits.  Touch screens, the accelerometers that determine which way to orient the image on a cell phone's screen, the chip that actually takes the pictures in a cell phone camera - all of these seem almost magical, and they are either packaged monolithically (and inscrutably), or all the really cool bits are too small to see without a high-power optical microscope.  Even automobiles are harder to figure out, with lots of sensors, solid-state electronics, and an architecture that often actively hampers investigation.

I fully realize that I'm verging on sounding like a grumpy old man with an onion on his belt (non-US readers: see transcript here).  Still, the fact that an understanding of everyday technology is becoming increasingly inaccessible, disconnected from common sense and daily experience, does seem like a cause for concern.  Chemistry sets, electronics sets, Arduinos, and Raspberry Pis are all ways to fight this trend, and their use should be encouraged!

Tuesday, August 12, 2014

Some quick cool science links

Here are a few neat things that have cropped up recently:
  • The New Horizons spacecraft is finally getting close enough to Pluto to image Pluto and Charon orbiting their common center of mass (approximate, because of the other moons); a quick estimate of where that barycenter sits follows this list.
  • The Moore Foundation announced the awardees in the materials synthesis component of their big program titled Emergent Phenomena in Quantum Systems.  Congratulations all around.  
  • Here's a shock:  congressmen in the pockets of the United Launch Alliance don't like SpaceX.
  • Cute toy.
  • The Fields Medal finally goes to a woman, Maryam Mirzakhani.  Also getting a share, Manjul Bhargava, who gave the single clearest math talk I've ever seen, using only a blank transparency and a felt-tip pen.
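
(The barycenter estimate promised above - a rough sketch using approximate published values for the masses and separation, so treat the numbers as order-of-magnitude:)

    # Rough Pluto-Charon barycenter estimate (approximate literature values).
    m_pluto, m_charon = 1.31e22, 1.59e21    # kg
    separation = 19_600e3                    # m, center-to-center distance
    r_pluto = 1_188e3                        # m, Pluto's radius

    # Distance from Pluto's center to the two-body barycenter.
    r_bary = separation * m_charon / (m_pluto + m_charon)
    where = "outside" if r_bary > r_pluto else "inside"
    print(f"barycenter ~ {r_bary / 1e3:.0f} km from Pluto's center ({where} Pluto itself)")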

Saturday, August 09, 2014

Nanotubes by design

There is a paper in this week's issue of Nature (with an accompanying news commentary by my colleague Jim Tour) in which the authors appear to have solved a challenge that has stood for more than two decades: growing single-walled carbon nanotubes of one specific type.   For a general audience:  you can imagine rolling up a single graphene sheet and joining the edges to make a cylinder.  There are many different ways to do this.  The trouble is that different ways of rolling up the sheet lead to different electronic properties, while the energetic differences between the tube types are very small.  When people have tried to grow nanotubes, by any number of methods, they tend to end up with a mixture of tube types of similar diameters rather than just the one they want.
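
(For the more technically inclined: the different ways of rolling are labeled by a pair of integers (n, m).  Here's a little sketch using the standard textbook formulas - not anything from the paper itself - showing how close together the diameters of different tube types are, and which ones come out metallic.)

    import math

    A = 0.246  # nm, graphene lattice constant

    def diameter_nm(n, m):
        # Diameter of an (n, m) nanotube: d = a * sqrt(n^2 + n*m + m^2) / pi
        return A * math.sqrt(n * n + n * m + m * m) / math.pi

    def is_metallic(n, m):
        # To lowest order, an (n, m) tube is metallic when (n - m) is a multiple of 3.
        return (n - m) % 3 == 0

    # Several tube types with nearly identical diameters but different electronic character:
    for n, m in [(6, 6), (10, 1), (9, 3), (7, 5), (8, 4)]:
        kind = "metallic" if is_metallic(n, m) else "semiconducting"
        print(f"({n},{m}): d ~ {diameter_nm(n, m):.2f} nm, {kind}")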

The authors of this new paper have taken an approach with great visual appeal.  They have used synthetic chemistry to make a planar hydrocarbon molecule that looks as if they'd taken the geodesic hemispherical end-cap of their desired tube and cut it open to lay it flat - like making a funky projection to create a flat map of a globe.  When placed on a catalytically active Pt surface at elevated temperatures, this molecular seed can fold up into an end-cap and start growing as a nanotube.  The authors show Raman spectroscopic evidence that they produce only the desired tube type (in this case, a metallic nanotube).  The picture is nice, and the authors imply that they could do this for other desired tube types.  It's not clear whether this is scalable to large volumes, but it's certainly encouraging.

This is very cute.  People in the nanotube game have been trying to do selective synthesis for twenty years.  Earlier this summer, a competing group showed progress in this direction using nanoparticle seeds, an approach pursued by many over the years with limited success.  It will be fun to see where this goes.  This is a good example of how long it can take to solve some materials problems.

Monday, August 04, 2014

Does being a physicist ruin science fiction for me? Generally, no.

For the past few years, as I've been teaching honors freshman mechanics, I've tried to work in at least one homework problem based on a popular sci-fi movie.  Broadening that definition to include the Marvel Cinematic Universe, I've done Iron Man, Captain America, and the Avengers.  Yesterday I saw Guardians of the Galaxy, and I've already got a problem in mind.

I've been asked before, does being a physicist just ruin science fiction books and movies for me?  Does bad physics in movies or sci-fi books annoy me since I can't not see it?  Generally, the answer is "no".  I don't expect Star Trek or Star Wars to be a documentary, and I completely understand why bending physics rules can make a story more fun.  Iron Man would be a lot less entertaining if Tony Stark couldn't build an arc reactor with enough storage capacity and power density to fly long distances.  Trips through outer space that require years of narrative time just to get to Jupiter are less fun than superluminal travel.  If anything, I think well-done science fiction can be creatively inspiring.

One thing that does bug me is internally inconsistent bad physics or bad science.  For example, in the book Prey by Michael Crichton, it's established early on that any tiny breach in a window, etc., is enough for the malevolent nanocritters to get in, yet at the climax of the book the author miraculously forgets this (because if he'd remembered it, the protagonist would've died).  Another thing that gets me is trivially avoidable science mistakes.  For example, in a Star Trek: TNG episode (I looked it up - it was this one), they quote a surface temperature below absolute zero.  I'm happy to serve as a Hollywood science advisor to avoid these problems :-)