
Monday, January 29, 2018

Photonics West

A significant piece of my research program is optics-related, and thanks to an invited talk, I'm spending a couple of days at the SPIE Photonics West meeting in San Francisco, which mixes topics ranging from the very applied (that is, the details of device engineering and manufacturing) to the fundamental.  It's fun seeing talks on subjects outside my wheelhouse.

A couple of items of interest from talks so far today:

  • Andrew Rickman gave a talk about integrated Si photonics, touching on his ideas about why, while it's grown, it hasn't taken off in the same crazy exponential way as Moore's Law(s) in the microelectronics world.  On the economic side, he made a completely unsurprising argument:  For that kind of enormous growth, one needs high volume manufacturing with very high yield, and a market that is larger than just optical telecommunications.  One challenge of Si-based photonics is that Si is an indirect band gap material, so for many photonic purposes (including many laser sources and detectors) it needs to be integrated with III-V semiconductors like InP.  Similarly, getting optical signals on and off of chips usually requires integration with macroscopically large optical fibers.  His big pitch, presumably the basis for his recent founding of Rockley Photonics, is that you're better off making larger Si waveguides (say micron-scale, rather than the 220 nm scale, a standard size set by certain mode choices) - this allegedly gives you much more dimensional tolerance in manufacturing, easier integration with both III-V materials and fiber, good integration with electroabsorption modulators, etc.  One big market he's really interested in is cloud computing, where apparently people are now planning for the transition from 100 Gb/s to 400 Gb/s (!) for communication within racks and even on boards.  That is some serious throughput.
  • Min Gu at Royal Melbourne Institute of Technology spoke about work his group has been doing trying to take advantage of the superresolution approach of STED microscopy, but for patterning.  In STED, a diffraction-limited laser spot first illuminates a target area (with the idea of exciting fluorescence), and then a spot from a second laser source, in a mode that looks donut-shaped, also hits that location, depleting the fluorescence everywhere except at the location of the "donut hole".  The result is an optical imaging method with resolution at the tens of nm level (a quick sketch of the standard resolution scaling follows this list).  Gu's group has done work combining the STED approach with photopolymerization to do optical 3D printing of tiny structures.  They've been doing a lot with this, including making gyroid-based photonic crystals that can act as helicity-resolved beamsplitters for circularly polarized light.  It turns out that you can make special gyroid structures with broken symmetries, so that these photonic crystals support topologically protected (!) modes analogous to Weyl fermions.
  • Venky Narayanamurti gave a talk about how to think about research and its long-standing demarcation into "basic" and "applied".  This drew heavily from his recent book (which is now on my reading list).  The bottom line:  In hindsight, Vannevar Bush didn't necessarily do a good thing by intellectually partitioning science and engineering into "basic" vs. "applied".  Narayanamurti would prefer to think in terms of invention and discovery, defined such that "Invention is the accumulation and creation of knowledge that results in a new tool, device, or process that accomplishes a particular specific purpose; discovery is the creation of new knowledge and facts about the world."  Neither of these is a scheduled activity like development.  Research is "an unscheduled quest for new knowledge and the creation of new inventions, whose outcome cannot be predicted in advance, and in which both science and engineering are essential ingredients."  He issued a very strong call that the US needs to change the way it is thinking about funding research, and held up China as an example of a country that is investing enormous resources in scientific and engineering research.
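On the STED mechanism mentioned above: the usual textbook scaling for the effective resolution is \(d \approx \lambda/(2 \mathrm{NA} \sqrt{1 + I/I_{\mathrm{sat}}})\), where \(I\) is the depletion beam intensity and \(I_{\mathrm{sat}}\) is the dye's saturation intensity.  Here is a minimal sketch of how that scaling takes you from a diffraction-limited spot down to tens of nm; the wavelength and numerical aperture are illustrative values of my own choosing, not numbers from the talk.

```python
import numpy as np

# Illustrative numbers (my choices, not from the talk): the standard STED
# resolution scaling, d ~ lambda / (2 NA sqrt(1 + I/I_sat)), shows how the
# depletion ("donut") beam shrinks the effective fluorescent spot.
wavelength = 640e-9    # fluorescence wavelength in meters (assumed)
NA = 1.4               # objective numerical aperture (assumed)
I_over_Isat = np.array([0.0, 10.0, 100.0, 1000.0])  # depletion / saturation intensity

d_confocal = wavelength / (2 * NA)              # ordinary diffraction-limited spot
d_sted = d_confocal / np.sqrt(1 + I_over_Isat)  # effective STED resolution

for ratio, d in zip(I_over_Isat, d_sted):
    print(f"I/I_sat = {ratio:6.0f}  ->  resolution ~ {d * 1e9:5.1f} nm")
```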

Monday, January 22, 2018

In condensed matter, what is a "valley", and why should you care?

One big challenge of talking about condensed matter physics to a general audience is that there are a lot of important physical concepts that don't have easy-to-point-to, visible consequences.  One example of this is the idea of "valleys" in the electronic structure of materials. 

To explain the basic concept, you first have to get across several ideas:

You've heard about wave-particle duality.  A free particle in quantum mechanics can be described by a wavefunction that really looks like a wave, oscillating in space with some spatial frequency (\(k = 2\pi/\lambda\), where \(\lambda\) is the wavelength).  Momentum is proportional to that spatial frequency (\(p = \hbar k\)), and there is a relationship between kinetic energy and momentum (a "dispersion relation") that looks simple.  In the low-speed limit, K.E. \(= p^2/2m\), and in the extreme relativistic limit, K.E. \(= pc\).
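To put rough numbers on those free-particle relations (my own illustrative example, not one from the post): take an electron with a 1 nm de Broglie wavelength and work out \(k\), \(p\), and the kinetic energy in the two limits.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s
eV   = 1.602176634e-19   # J per electron-volt

lam = 1e-9                    # assumed de Broglie wavelength: 1 nm
k = 2 * np.pi / lam           # spatial frequency, k = 2*pi/lambda
p = hbar * k                  # momentum, p = hbar*k
KE_slow = p**2 / (2 * m_e)    # low-speed limit
KE_fast = p * c               # extreme relativistic limit

print(f"k = {k:.2e} 1/m,  p = {p:.2e} kg m/s")
print(f"KE (p^2/2m) = {KE_slow / eV:.2f} eV,  pc = {KE_fast / eV:.0f} eV")
```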

In a large crystal (let's ignore surfaces for the moment), atoms are arranged periodically in space.  This arrangement has lower symmetry than totally empty space, but can still have a lot of symmetries in there.  Depending on the direction one considers, the electron density can have all kinds of interesting spatial periodicities.  Because of the interactions between the electrons and that crystal lattice, the dispersion relation \(E(\mathbf{k})\) becomes direction-dependent (leading to spaghetti diagrams).  Some kinetic energies don't correspond to any allowed electronic states, meaning that there are "bands" in energy of allowed states, separated by gaps.  In a semiconductor, the highest filled (in the limit of zero temperature) band is called the valence band, and the lowest unoccupied band is called the conduction band.
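Here's a minimal toy-model sketch of how bands and gaps emerge, not the real silicon band structure but my own illustrative example: a 1D chain with two atoms per unit cell and alternating hopping amplitudes.  Diagonalizing the 2x2 Bloch Hamiltonian at each \(k\) gives two allowed bands separated by a gap.

```python
import numpy as np

# Toy model (for illustration only): a 1D chain with two atoms per unit cell
# and alternating hoppings t1, t2. The 2x2 Bloch Hamiltonian at each k has
# eigenvalues +/- |t1 + t2*exp(i*k*a)|, i.e. two bands of allowed states.
t1, t2 = 1.0, 0.6               # hopping energies (arbitrary units), assumed
a = 1.0                         # lattice constant
k = np.linspace(-np.pi / a, np.pi / a, 201)

off_diag = t1 + t2 * np.exp(1j * k * a)
E_valence    = -np.abs(off_diag)   # lower, filled band
E_conduction = +np.abs(off_diag)   # upper, empty band

gap = E_conduction.min() - E_valence.max()   # = 2*|t1 - t2| at the zone boundary
print(f"gap between bands = {gap:.2f} (energies in this window have no allowed states)")
```

With \(t_1 \neq t_2\) the two bands are separated by a gap of \(2|t_1 - t_2|\); kinetic energies in that window simply have no corresponding electronic states.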

Depending on the symmetry of the material, the lowest energy states in the conduction band might not be near where \(|\mathbf{k}| = 0\).  Instead, the lowest energy electronic states in the conduction band can be at nonzero \(\mathbf{k}\).  These are the conduction band valleys.  In the case of bulk silicon, for example, there are 6 valleys (!), as in the figure.
(Figure: the six valleys in the Si conduction band; the axes show the different components of \(\mathbf{k}\), and the blue dot is at \(\mathbf{k}=0\).)
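To make the valley picture a bit more quantitative, here is a sketch of the standard effective-mass description near one of those six minima.  The longitudinal and transverse masses below are the usual textbook values for Si, quoted from memory, so treat them as approximate.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
eV   = 1.602176634e-19   # J

# Effective-mass sketch near one Si conduction-band valley (textbook masses,
# quoted from memory): the dispersion is an anisotropic paraboloid centered
# at a wavevector k0 along a <100> direction,
#   E(k) ~ hbar^2 (k_par - k0)^2 / (2 m_l) + hbar^2 k_perp^2 / (2 m_t)
m_l = 0.92 * m_e   # longitudinal effective mass (approximate)
m_t = 0.19 * m_e   # transverse effective mass (approximate)

def valley_energy(dk_par, k_perp):
    """Energy above the valley minimum, in eV."""
    return (hbar**2 * dk_par**2 / (2 * m_l)
            + hbar**2 * k_perp**2 / (2 * m_t)) / eV

# The same |delta k| away from the minimum costs different energy depending
# on direction - that's the anisotropy of each cigar-shaped valley.
dk = 1e9  # 1/m
print(f"along the valley axis: {valley_energy(dk, 0.0):.3f} eV")
print(f"transverse:            {valley_energy(0.0, dk):.3f} eV")
```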

One way to think about the states at the bottom of these valleys is that there are different wavefunctions that all have the same kinetic energy, the lowest they can have and still be in the conduction band, but their actual spatial arrangements (how the electron probability density is arranged in the lattice) differ subtly. 

I'd written about the case of graphene before.  There are two valleys in graphene, and the states at the bottom of those valleys differ subtly in how charge is arranged between the two "sublattices" of carbon atoms that make up the graphene sheet.  What is special about graphene, and why some other materials are getting a lot of attention, is that you can do calculations about the valleys using the same math that gets used when talking about spin, the internal angular momentum of particles.  Instead of writing about being in one graphene valley or the other, you can write about having "pseudospin" up or down. 
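The "same math as spin" statement can be made concrete with the standard low-energy Hamiltonian of graphene near the two valleys, where the Pauli matrices act on the sublattice (pseudospin) degree of freedom and \(\tau = \pm 1\) labels the valley.  A minimal sketch, with the Fermi velocity set to the commonly quoted approximate value:

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
eV   = 1.602176634e-19   # J
v_F  = 1.0e6             # graphene Fermi velocity, ~1e6 m/s (approximate)

# Standard low-energy graphene Hamiltonian near the two valleys: the Pauli
# matrices act on the sublattice ("pseudospin") space, and tau = +1 / -1
# labels the K and K' valleys.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H_valley(kx, ky, tau):
    """2x2 Hamiltonian (in J) for momentum (kx, ky) measured from valley tau."""
    return hbar * v_F * (tau * kx * sigma_x + ky * sigma_y)

# Both valleys give the same linear dispersion, +/- hbar*v_F*|k|, but the
# eigenvectors (how each state weights the two sublattices) differ between them.
kx, ky = 2e8, 1e8  # 1/m, illustrative
for tau in (+1, -1):
    energies = np.linalg.eigvalsh(H_valley(kx, ky, tau)) / eV
    print(f"valley tau = {tau:+d}: E = {energies[0]:+.3f} eV, {energies[1]:+.3f} eV")
```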

Once you start thinking of valley-ness as a kind of internal degree of freedom of the electrons, one that, like spin, is conserved in many processes, you can consider all sorts of interesting ideas.  You can talk about "valley ferromagnetism", where the available electrons all hang out in one valley.  You can talk about the "valley Hall effect", where carriers from different valleys tend toward opposite transverse edges of the material.  Because of spin-orbit coupling, these valley effects can link to actual spin physics, and they are therefore of interest for possible information processing and optoelectronic ideas.

Saturday, January 13, 2018

About grants: What is cost sharing?

In addition to science, I occasionally use this forum as a way to try to explain to students and the public how sponsored research works in academia.  Previously I wrote about the somewhat mysterious indirect costs.  This time I'd like to discuss cost sharing.

Cost sharing is what it sounds like:  when researchers at a university propose a research project, the funding agency or foundation wants to see the university kick in funding as well (beyond obvious things like the lab space where the investigators work).  Many grants, such as NSF single-investigator awards, expressly forbid explicit cost sharing.  That has certain virtues:  To some extent, it levels the playing field, so that particularly wealthy universities don't have an even larger advantage.  Agencies would all like to see their money leveraged as far as possible, and if cost sharing were unrestricted, you could imagine a situation where wealthy institutions would effectively have an incentive to try to buy their way to grant success by offering big matching funds.   

In other programs, such as the NSF's major research instrumentation program, cost sharing is mandated, but the level is set at a fixed percentage of the total budget.  Similarly, some foundations make it known that they expect university matching at a certain percentage level.  While that might be a reach for some smaller, less-well-off universities when the budget is large, at least it's well-defined.    

Sometimes agencies try to finesse things, forbidding explicit cost sharing but still trying to get universities to put "skin in the game".  For the NSF materials research science and engineering center program, for example, cost sharing is forbidden (in the sense that explicit promises of $N in matching or institutional funding are not allowed), but proposals are required to include a discussion of "organizational commitment":  "Provide a description of the resources that the organization will provide to the project, should it be funded. Resources such as space, faculty release time, faculty and staff positions, capital equipment, access to existing facilities, collaborations, and support of outreach programs should be discussed, but not given as dollar equivalents."  First and foremost, the science and broader impacts drive the merit review, but there's no question that an institution that happens to be investing synergistically with the topic of such a proposal would look good.

The big challenge for universities is grants where cost sharing is not forbidden but no guidance is given about expectations.  There is a game theory dilemma at work, where institutions try to guess what level of cost sharing is really needed to be competitive.   

So where does the money for cost sharing come from on the university side?  Good question.  The details depend on the university.  Departments, deans, and the central administration typically have some financial resources that they can use to support cost sharing, but how these responsibilities get arranged and distributed varies.  

For the open-ended cost sharing situations, one question that comes up is, how much is too much?  As I'd discussed before, university administrations often argue that research is already a money-losing proposition, in the sense that the amount of indirect costs that they bring in does not actually come close to covering the true expenses of supporting the research enterprise.  That would argue in favor of minimizing cost sharing offers, except that schools really do want to land some of these awards.  (Clearly there are non-financial or indirect benefits to doing research, such as scholarly reputation, or universities would stop supporting that kind of work.)  It would be very interesting if someone would set up a rumor-mill-style site, so that institutions could share with peers roughly what they are offering up for certain programs - it would be revealing to see what it takes to be competitive.  

Sunday, January 07, 2018

Selected items

A few recent items that caught my eye:

  • The ever-creative McEuen and Cohen groups at Cornell worked together to make graphene-based origami widgets.   Access to the paper seems limited right now, but here is a link that has some of the figures.
  • Something else that the Cohen group has worked on in the past is complex fluids, such as colloidal suspensions.  The general statistical physics problem of large ensembles of interacting classical objects (e.g., maybe short-range rigid interactions, as in grains of sand, or perhaps M&Ms) is incredibly rich.  Sure, there are no quantum effects, but often you have to throw out the key simplifying assumption of statistical physics (that your system can readily explore all microscopic states compatible with overall constraints).  This can lead to some really weird effects, like dice packing themselves into an ordered array when stirred properly.  
  • When an ensemble of (relatively) hard classical objects really locks up collectively and starts acting like a solid, that's called jamming.  It's still a very active subject of study, and is of huge industrial importance.  It also explains why mayonnaise gets much more viscous all of a sudden as more and more oil is whipped in and the oil droplets crowd together.
  • I'd be remiss if I didn't highlight a really nice article in Quanta about one of the grand challenges of (condensed matter) physics:  Classifying all possible thermodynamic phases of matter.   While the popular audience thinks of a handful of phases (solid, liquid, gas, maybe plasma), the physics perspective is broader, because of ideas about order and symmetries.  Now we understand more than ever before that we need to consider phases with different  topological properties as well.  Classification is not just "stamp collecting".

Monday, January 01, 2018

The new year and another arbitrary milestone

Happy new year to all!  I'm sure 2018 will bring some exciting developments in the discipline - at minimum, there will surely be a lot of talk about quantum computing.  I will attempt to post more often, and to work further on ways to bring condensed matter and nanoscale physics to a broader audience, though other responsibilities continue to make that a challenge.  Still, to modify a quote from Winston Churchill, "Writing a [blog] is like having a friend and companion at your side, to whom you can always turn for comfort and amusement, and whose society becomes more attractive as a new and widening field of interest is lighted in the mind."

By the way, this is the 1000th post on Nanoscale Views.  As we all know, this has special significance because 1000 is a big, round number.