Tuesday, September 01, 2015

Nano and the oil industry

I went to an interesting lunchtime talk today by Sergio Kapusta, former chief scientist of Shell.  He gave a nice overview of the oil/gas industry and where nanoscience and nanotechnology fit in.   Clearly one of the main issues of interest is assessing (and eventually recovering) oil and gas trapped in porous rock, where the hydrocarbons can be trapped due to capillarity and the connectivity of the pores and cracks may be unknown.  Nanoparticles can be made with various chemical functionalizations (for example, dangling ligands known to be cleaved if the particle temperature exceeds some threshold) and then injected into a well; the particles can then be sought at another nearby well.  The particles act as "reporters".  The physics and chemistry of getting hydrocarbons out of these environments is all about the solid/liquid interface at the nanoscale.  More active sensor technologies for the aggressive, nasty down-hole environment are always of interest, too.

When asked about R&D spending in the oil industry, he pointed out something rather interesting:  R&D is actually cheap compared to the huge capital investments made by the major companies.  That means that it's relatively stable even in boom/bust cycles because it's only a minor perturbation on the flow of capital.  

Interesting numbers:  Total capital in hardware in the field for the petrochemical industry is on the order of $2T, built up over several decades.  Typical oil consumption worldwide is around 90M barrels equivalent per day (!).   When the supply swings over the range from 87M to 93M barrels per day, the price swings correspondingly from $120 down to $40/barrel.  Pretty wild.
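Those two endpoints imply a strikingly steep supply-price relationship. Here's a back-of-the-envelope sketch, assuming (purely for illustration - real markets are far more complicated) a linear fit through the two quoted points:

```python
# Back-of-the-envelope: how sensitive is the oil price to supply,
# assuming a simple linear fit through the two quoted endpoints?

def price_per_barrel(supply_mbd, lo=(87.0, 120.0), hi=(93.0, 40.0)):
    """Linearly interpolate price (USD/bbl) from supply (million bbl/day)."""
    s0, p0 = lo
    s1, p1 = hi
    slope = (p1 - p0) / (s1 - s0)  # about -13.3 USD/bbl per million bbl/day
    return p0 + slope * (supply_mbd - s0)

print(price_per_barrel(90.0))  # ~80 USD/bbl at the quoted 90M bbl/day consumption
```

A 3% change in supply moving the price by a factor of three is about as inelastic as markets get.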

Thursday, August 27, 2015

Short term-ism and industrial research

I have written multiple times (here and here, for example) about my concern that the structure of financial incentives and corporate governance has basically killed much of the American corporate research enterprise.  Simply put:  corporate officers are very heavily rewarded based on very short-term metrics (stock price, year-over-year change in rate of growth of profit).  When faced with whether to invest company resources in risky long-term research that may not pay off for years if ever, most companies opt out of that investment.  Companies that do make long-term investments in research are generally quasi-monopolies.  The definition of "research" has increasingly crept toward what used to be called "development"; the definition of "long term" has edged toward "one year horizon for a product"; and physical sciences and engineering research has massively eroded in favor of much less expensive (in infrastructure, at least) work on software and algorithms. 

I'm not alone in making these observations - Norm Augustine, former CEO of Lockheed Martin, basically says the same thing, for example.  Hillary Clinton has lately started talking about this issue.

Now, writing in The New Yorker this week, James Surowiecki claims that "short termism" is a myth.  Apparently companies love R&D and have been investing in it more heavily.  I think he's just incorrect, in part because I don't think he really appreciates the difference between research and development, and in part because I don't think he appreciates the sliding definitions of "research", "long term" and the difference between software development and physical sciences and engineering.  I'm not the only one who thinks his article has issues - see this article at Forbes.

No one disputes the long list of physical research enterprises that have been eliminated, gutted, strongly reduced, or refocused onto much shorter term projects.  A brief list includes IBM, Xerox, Bell Labs, Motorola, General Electric, Ford, General Motors, RCA, NEC, HP Labs, Seagate, 3M, Dupont, and others.  Even Microsoft has been cutting back.  No one disputes that corporate officers have often left these organizations with fat benefits packages after making long-term, irreversible reductions in research capacity (I'm looking at you, Carly Fiorina).   Perhaps "short termism" is too simple an explanation, but claiming that all is well in the world of industrial research just rings false.

Monday, August 24, 2015

News items: Feynman, superconductors, faculty shuffle

A few brief news items - our first week of classes this term is a busy time.

  • Here is a video of Richard Feynman, explaining why he can't readily explain permanent magnets to the interviewer.   This gets right to the heart of why explaining science in a popular, accessible way can be very difficult.  Sure, he could come up with really stretched and tortured analogies, but truly getting at the deeper science behind the permanent magnets and their interactions would require laying a ton of groundwork, way more than what an average person would want to hear.
  • Here is a freely available news article from Nature about superconductivity in H2S at very high pressures.   I was going to write at some length about this but haven't found the time.  The short version:  There have been predictions for a long time that hydrogen, at very high pressures like in the interior of Jupiter, should be metallic and possibly a relatively high temperature superconductor.  There are later predictions that hydrogen-rich alloys and compounds could also superconduct at pretty high temperatures.  Now it seems that hydrogen sulfide does just this.  Crank up the pressure to 1.5 million atmospheres, and that stinky gas becomes what seems to be a relatively conventional (!) superconductor, with a transition temperature close to 200 K.  The temperature is comparatively high because of a combination of an effectively high speed of sound (the material gets pretty stiff at those pressures), a large density of electrons available to participate, and a strong coupling between the electrons and those vibrations (so that the vibrations can provide an effective attractive interaction between the electrons that leads to pairing).    The important thing about this work is that it shows that there is no obvious reason why superconductivity at or near room temperature should be ruled out.
  • Congratulations to Prof. Laura Greene, incoming APS president, who has been named the new chief scientist of the National High Magnetic Field Lab.  
  • Likewise, congratulations to Prof. Meigan Aronson, who has been named Texas A&M University's new Dean of Science.  
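Returning to the superconductivity item above: the qualitative ingredients there (stiff lattice, hence high phonon frequencies; plenty of electrons; strong electron-phonon coupling) can be folded into one semi-quantitative expression, the McMillan formula for the transition temperature of a conventional superconductor. The input numbers below are purely illustrative, not the actual parameters for H2S:

```python
from math import exp

def mcmillan_tc(theta_D, lam, mu_star=0.1):
    """McMillan estimate of the superconducting Tc (in K), from the Debye
    temperature theta_D (K), the electron-phonon coupling lam, and the
    Coulomb pseudopotential mu_star."""
    return (theta_D / 1.45) * exp(-1.04 * (1 + lam) /
                                  (lam - mu_star * (1 + 0.62 * lam)))

# Illustrative only: a stiff, high-pressure lattice (large theta_D) plus
# strong coupling (large lam) pushes Tc up dramatically.
print(mcmillan_tc(300, 0.5))   # a "soft" conventional metal: Tc of a few K
print(mcmillan_tc(1500, 2.0))  # stiff lattice, strong coupling: Tc well over 100 K
```

The point of the exercise: nothing in the formula caps Tc at the tens of kelvin we're used to, which is exactly why the H2S result is so encouraging.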

Friday, August 21, 2015

Anecdote 5: Becoming an experimentalist, and the Force-o-Matic

As an undergrad, I was a mechanical engineering major doing an engineering physics program from the engineering side.  When I was a sophomore, my lab partner in the engineering fluid mechanics course, Brian, was doing the same program, but from the physics side.  Rather than doing a pre-made lab, we chose to take the opportunity to do an experiment of our own devising.   We had a great plan.  We wanted to compare the drag forces on different shapes of boat hulls.  The course professor got us permission to go to a nearby research campus, where we would be able to take our homemade models and run them in their open water flow channel (like an infinity pool for engineering experiments) for about three hours one afternoon.  

The idea was simple:  The flowing water would tend to push the boat hull downstream due to drag.  We would attach a string to the hull, run the string over a pulley, and hang known masses on the end of the string, until the weight of the masses (transmitted via the string) pulled upstream to balance out the drag force - that way, when we had the right amount of weight on there, the boat hull would sit motionless in the flow channel.  By plotting the weight vs. the flow velocity, we'd be able to infer the dependence of the drag force on the flow speed, and we could compare different hull designs. 
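The analysis step we were planning is simple enough to sketch in a few lines. The numbers here are made up for illustration; the idea is just that the slope of weight vs. flow speed on log-log axes gives the power-law exponent of the drag:

```python
import math

# At force balance, hanging weight = drag force. If F_drag ~ v^n, then
# log(F) vs. log(v) is a straight line with slope n.

def drag_exponent(speeds, forces):
    """Least-squares slope of log(F) vs log(v): the power-law exponent."""
    xs = [math.log(v) for v in speeds]
    ys = [math.log(f) for f in forces]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

speeds = [0.2, 0.4, 0.6, 0.8]            # m/s (made up)
forces = [0.002 * v**2 for v in speeds]  # fake data with quadratic drag
print(drag_exponent(speeds, forces))     # close to 2.0, as built in
```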

Like many great ideas, this was wonderful right up until we actually tried to implement it in practice.  Because we were sophomores and didn't really have a good feel for the numbers, we hadn't estimated anything and tacitly assumed that our approach would work.  Instead, the drag forces on our beautiful homemade wood hulls were much smaller than we'd envisioned, so much so that just the horizontal component of the force from the sagging string itself was enough to hold the boats in place.  With only a couple of hours at our disposal, we had to face the fact that our whole measurement scheme was not going to work.

What did we do?  With improvisation that would have made MacGyver proud, we used a protractor, chewing gum, and the spring from a broken ballpoint pen to create a much "softer" force measurement apparatus, dubbed the Force-o-Matic.  With the gum, we anchored one end of the stretched spring to the "origin" point of the protractor, with the other end attached to a pointer made out of the pen cap, oriented to point vertically relative to the water surface.  With fine thread instead of the heavier string, we connected the boat hull to the tip of the pointer, so that tension in the thread laterally deflected the extended spring by some angle.  We could then later calibrate the force required to produce a certain angular deflection.  We got usable data, an A on the project, and a real introduction, vividly memorable 25 years later, to real experimental work.
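For the curious, here's how a gadget like that turns an angle into a force. The spring constant and pointer length below are invented, not the actual Force-o-Matic numbers, which are long lost:

```python
import math

# A Force-o-Matic-style gauge: a spring of stiffness k resists lateral
# deflection of a pointer of length L, so a horizontal thread tension F
# deflects the pointer by angle theta, with F = k * L * sin(theta) for
# an ideal linear spring.

def force_from_angle(theta_deg, k=5.0, L=0.04):
    """Thread tension (N) for pointer deflection theta_deg (degrees),
    with assumed spring constant k (N/m) and pointer length L (m)."""
    return k * L * math.sin(math.radians(theta_deg))

for angle in (5, 10, 20):
    print(f"{angle:3d} deg -> {force_from_angle(angle) * 1000:.1f} mN")
```

Note the scale: tens of millinewtons per easily readable degree of deflection, which is why the soft spring succeeded where hanging masses on a heavy string failed.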

Friday, August 14, 2015

Drought balls and emergent properties

There has been a lot of interest online recently about the "drought balls" that the state of California is using to limit unwanted photochemistry and evaporation in its reservoirs.  These are hollow balls each about 10 cm in diameter, made from a polymer mixed with carbon black.  When dumped by the zillions into reservoirs, they don't just help conserve water:  They spontaneously become a teaching tool about condensed matter physics.

As you can see from the figure, the balls spontaneously assemble into "crystalline" domains.  The balls are spherically symmetric, and they experience a few interactions:  They are buoyant, so they float on the water surface; they are rigid objects, so they have what a physicist would call "hard-core, short-ranged repulsive interactions" and what a chemist would call "steric hindrance"; a regular person would say that you can't make two balls occupy the same place.  Because they float and distort the water surface, they also experience some amount of an effective attractive interaction.  They get agitated by the rippling of the water, but not too much.

Throw all those ingredients together, and amazing things happen:  The balls pack together in a very tight spatial arrangement.  The balls are spherically symmetric, and there's nothing about the surface of the water that picks out a particular direction.  Nonetheless, the balls "spontaneously break rotational symmetry in the plane" and pick out a directionality to their arrangement.  There's nothing about the surface of the water that picks out a particular spatial scale or "origin", but the balls "spontaneously break continuous translational symmetry", picking out special evenly-spaced lattice sites.  Physicists would say they preserve discrete rotational and translational symmetries.

The balls in different regions of the surface were basically isolated to begin with, so they broke those symmetries differently, leading to a "polycrystalline" arrangement, with "grain boundaries".  As the water jostles the system, there is a competition between the tendency to order and the ability to rearrange, and the grains rearrange over time.  This arrangement of balls has rigidity and supports collective motions (basically the analog of sound) within the layer that are meaningless when talking about the individual balls.  We can even spot some density of "point defects", where a ball is missing, or an "extra" ball is sitting on top.
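If you want to put a number on this kind of ordering, the standard tool is the bond-orientational order parameter psi_6: for each ball, average exp(6*i*theta) over the angles theta to its nearest neighbors. Its magnitude is 1 for a perfect hexagonal packing and drops toward 0 as order is lost. Here's a minimal sketch using a generated hexagonal patch rather than real drought-ball positions:

```python
import cmath
import math

def psi6(center, neighbors):
    """|psi_6| for one particle: magnitude of the average of exp(6i*theta)
    over the bond angles theta from center to each neighbor."""
    total = sum(cmath.exp(6j * math.atan2(y - center[1], x - center[0]))
                for x, y in neighbors)
    return abs(total / len(neighbors))

# Six neighbors of a ball in a perfect hexagonal lattice (unit spacing):
hex_neighbors = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
                 for k in range(6)]
print(psi6((0.0, 0.0), hex_neighbors))  # 1.0 for perfect hexagonal order

# Squash the lattice vertically and the order parameter drops well below 1:
squashed = [(x, 0.7 * y) for x, y in hex_neighbors]
print(psi6((0.0, 0.0), squashed))
```

In a real image of the reservoir surface, mapping |psi_6| ball by ball would light up the ordered grains and trace out the grain boundaries between them.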

What this tells us is that there are certain universal, emergent properties of what we think of as solids that really do not depend on the underlying microscopic details.   This is a pretty deep idea - that there are collective organizing principles that give emergent universal behaviors, even from very simple and generic microscopic rules.  Knowing that the balls are made deep down from quarks and leptons does not tell you anything about these properties.

Tuesday, August 11, 2015

Anecdote 4: Sometimes advisers are right.

When I was a first-year grad student, I started working in my adviser's lab, learning how to do experiments at extremely low temperatures.   This involved working quite a bit with liquid helium, which boils at atmospheric pressure at only 4.2 degrees above absolute zero, and is stored in big, vacuum-jacketed thermos bottles called dewars (named after James Dewar).   We had to transfer liquid helium from storage dewars into our experimental systems, and very often we were interested in knowing how much helium was left in the bottom of a storage dewar.

The easiest way to do this was to use a "thumper" - a skinny (maybe 1/8" diameter) thin-walled stainless steel tube,  a few feet long, open at the bottom, and silver-soldered to a larger (say 1" diameter) brass cylinder at the top, with the cylinder closed off by a stretched piece of latex glove.   When the bottom of the tube was inserted into the dewar (like a dipstick) and lowered into the cold gas, the rubber membrane at the top of the thumper would spontaneously start to pulse (hence the name).   The frequency of the thumping would go from a couple of beats per second when the bottom was immersed in liquid helium to more of a buzz when the bottom was raised into vapor.  You can measure the depth of the liquid left in the dewar this way, and look up the relevant volume of liquid on a sticker chart on the side of the dewar.
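The last step, converting a dipstick depth into liters, is just interpolation on the sticker chart. Here's a toy version with a made-up chart (real dewar charts differ, and aren't linear, since the dewar belly is wider than its neck):

```python
import bisect

# Hypothetical depth-to-volume chart for a storage dewar, in the spirit of
# the sticker chart described above. These numbers are invented.
CHART = [(0, 0.0), (5, 10.0), (10, 25.0), (15, 45.0), (20, 70.0)]  # (inches, liters)

def liters_from_depth(depth_in):
    """Linear interpolation between chart entries; clamps at the ends."""
    depths = [d for d, _ in CHART]
    i = bisect.bisect_left(depths, depth_in)
    if i == 0:
        return CHART[0][1]
    if i == len(CHART):
        return CHART[-1][1]
    (d0, v0), (d1, v1) = CHART[i - 1], CHART[i]
    return v0 + (v1 - v0) * (depth_in - d0) / (d1 - d0)

print(liters_from_depth(12.0))  # 33.0 liters by interpolation
```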

The "thumping" pulses are called Taconis oscillations.  They are an example of "thermoacoustic" oscillations.  The physics involved is actually pretty neat, and I'll explain it at the end of this post, but that's not really the point of this story.  I found this thumping business to be really weird, and I wanted to know how it worked, so I walked across the hall from the lab and knocked on my adviser's door, hoping to ask him for a reference.  He was clearly busy (being department chair at the time didn't help), and when I asked him "How do Taconis oscillations happen?" he said, after a brief pause, "Well, they're driven by the temperature difference between the hot and cold ends of the tube, and they're a complicated nonlinear phenomenon." in a tone that I thought was dismissive.  Doug O. loves explaining things, so I figured either he was trying to get rid of me, or (much less likely) he didn't really know.

I decided I really wanted to know.  I went to the physics library upstairs in Varian Hall and started looking through books and chasing journal articles.  Remember, this was back in the wee early dawn of the web, so there was no such thing as Google or Wikipedia.  Anyway, I somehow found this paper and its sequels.  In there are a collection of coupled partial differential equations looking at the pressure and density of the fluid, the flow of heat along the tube, the temperature everywhere, etc., and guess what:  They are complicated, nonlinear, and have oscillating solutions.  Damn.  Doug O. wasn't blowing me off - he was completely right (and knew that a more involved explanation would have been a huge mess).  I quickly got used to this situation.

Epilogue:  So, what is going on in Taconis oscillations, really?  Well, suppose you assume that there is gas rushing into the open end of the tube and moving upward toward the closed end.  That gas is getting compressed, so it would tend to get warmer.  Moreover, if the temperature gradient along the tube is steep enough, the upper walls of the tube can be warmer than the incoming gas, which then warms further by taking heat from the tube walls.  Now that the pressure of the gas has built up near the closed end, there is a pressure gradient that pushes the gas back down the tube.  The now warmed gas cools as it expands, but again if the tube walls have a steep temperature gradient, the gas can dump heat into the tube walls nearer the bottom.  This is discussed in more detail here.  Turns out that you have basically an engine, driven by the flow of heat from the top to the bottom, that cyclically drives gas pulses.  The pulse amplitude ratchets up until the dissipation in the whole system equals the work done per cycle on the gas.  More interesting than that:  Like some engines, you can run this one backwards.  If you drive pressure pulses properly, you can use the gas to pump heat from the cold side to the hot side - this is the basis for the thermoacoustic refrigerator.

Friday, August 07, 2015

Assorted items

Time is getting short before our semester starts here, and there is much to be done, so I'll be brief:


  • A library of nice online interactive physics applets/demos from the University of Colorado.  However, between the ongoing security pains of both Java and Flash, it's getting more and more difficult to have a resource like this that is of maximal use to students.  If someone finds a security hole in HTML5, then there will be no platform left for this kind of application, and that would be very disappointing.
  • A very impressive demonstration of the Magnus force.
  • The new Fantastic Four movie has been reviewed very unfavorably by critics.  "Legendary physicist" [sic] Michio Kaku was tapped to do four featurettes about the science of the film.  Coincidence?  
  • Finally, two months after publication in the UK, my book is available through Amazon in the US - see the advertisement in the upper right of this page.
  • If you're in the US (not sure the stream will work elsewhere) and you missed it, here is the recent PBS documentary "The Bomb", about the events of 70 years ago.   Likewise, here is their other timely documentary, "Twisting the Dragon's Tail", about uranium.  If books are more your thing, you can't do much better than The Making of the Atomic Bomb.
  • This is brilliant.