One of the hot topics at the workshop I attended was the proper role of "first principles" calculations in trying to understand electronic conduction at the atomic and molecular scale. In this business, there tend to be two approaches. The first, which I call for lack of a better term the "toy model" paradigm, constructs models that are highly idealized and minimalistic, in the hope that they contain the essential physics needed to describe real systems. An example of such a model would be the single-level Anderson-Holstein model of transport through a molecule. Instead of worrying about all of the detailed electronic levels of a molecule and the many-electron physics there, you would concentrate on a single electronic level that can be empty, singly occupied, or doubly occupied. Instead of worrying about the detailed band structure of the electrodes, you would treat them as ideal electronic reservoirs, with some coupling that allows electrons to hop between the level and the reservoirs. Instead of considering all of the possible molecular vibrations, you would assume a single characteristic vibrational mode that "lives" on the molecule, with some additional energy cost for having that vibration excited while there is an electron occupying the level. While this sounds complicated, it is still a comparatively idealized situation that can be described by a handful of characteristic energies, and it contains rich physics.
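In symbols, a minimal version of that model reads as follows (the notation here is generic rather than from any particular paper; conventions vary). Here $\varepsilon_0$ is the level energy, $U$ the extra Coulomb cost of double occupancy, $\omega_0$ the vibrational frequency, $\lambda$ the electron-vibration coupling, and $V_{k\alpha}$ the level-reservoir hopping:

$$ H = \varepsilon_0 \sum_\sigma \hat n_\sigma + U\, \hat n_\uparrow \hat n_\downarrow + \hbar\omega_0\, a^\dagger a + \lambda \left(a^\dagger + a\right) \sum_\sigma \hat n_\sigma + \sum_{k,\alpha,\sigma} \varepsilon_{k\alpha}\, c^\dagger_{k\alpha\sigma} c_{k\alpha\sigma} + \sum_{k,\alpha,\sigma} \left( V_{k\alpha}\, c^\dagger_{k\alpha\sigma} d_\sigma + \mathrm{h.c.} \right) $$

where $d_\sigma$ annihilates an electron of spin $\sigma$ on the level, $\hat n_\sigma = d^\dagger_\sigma d_\sigma$, and $c_{k\alpha\sigma}$ annihilates an electron in reservoir $\alpha \in \{L, R\}$. That really is just a handful of characteristic energies.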
On the other hand, one can consider trying to model a specific molecule in detail, worrying about the precise electronic and vibrational levels appropriate for exactly that molecule bonded in a particular configuration to a specific kind of metal electrode surface. While this sounds in some ways like it's what you "really" ought to do, this "first principles" approach is fraught with challenges. For example, just solving for the electronic levels of the molecule and their relative alignment with the electronic levels in the electrodes is extremely difficult in general. While there are impressive techniques that can work well in certain situations (e.g., density functional theory), the circumstances where those methods work best (quasi-equilibrium, far away from resonances, situations where electron correlation effects are minimal) are often not the most interesting ones.
It's interesting to watch the gradual convergence of these approaches. As computing power grows and increasingly sophisticated treatments are developed, it looks like first-principles calculations are getting better. One direction that seems popular now, as our condensed matter seminar speaker yesterday pointed out, is using such calculations as guidelines for correctly estimating the parameters that should be fed into the essential physics toy models. Interesting times are on the horizon.
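As a cartoon of what "feeding first-principles numbers into a toy model" might look like, here is a rough Python sketch. The parameter values are placeholders standing in for what one might extract from a DFT calculation of a real junction, and the model is the standard Landauer picture of a single noninteracting resonant level (vibrations and correlations omitted entirely):

```python
import numpy as np
from scipy.integrate import quad

# Stand-in parameters (all in eV): in the hybrid approach these would be
# extracted from a first-principles calculation of the actual junction.
eps0 = -0.5      # level energy relative to the equilibrium Fermi level
gamma_L = 0.05   # level broadening from coupling to the left electrode
gamma_R = 0.05   # level broadening from coupling to the right electrode
kT = 0.025       # thermal energy at room temperature

def transmission(E):
    """Breit-Wigner transmission of a single noninteracting level."""
    gamma = gamma_L + gamma_R
    return gamma_L * gamma_R / ((E - eps0) ** 2 + (gamma / 2.0) ** 2)

def fermi(E, mu):
    """Fermi function; the argument is clipped to avoid overflow warnings."""
    x = np.clip((E - mu) / kT, -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(x))

def current(V):
    """Landauer current at bias V, assuming a symmetric voltage drop."""
    mu_L, mu_R = +V / 2.0, -V / 2.0
    integrand = lambda E: transmission(E) * (fermi(E, mu_L) - fermi(E, mu_R))
    val, _ = quad(integrand, -5.0, 5.0, limit=200)
    return 7.748e-5 * val  # 2e/h, in amperes per eV of integrated weight

for V in (0.1, 0.5, 1.0, 1.5):
    print(f"V = {V:.1f} V  ->  I = {current(V):.3e} A")
```

The point is not the particular numbers, but that once the first-principles machinery hands you eps0 and the gammas, the toy model does the rest.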
It seems that many of the ingredients for understanding conduction through molecules are similar to those in the longstanding theory of luminescence--except that you also need to include the electrodes.
I remember being surprised, thirty years ago, to learn that even qualitative aspects of luminescence depend on the details of a particular system. For example, whether luminescence increases or decreases with temperature depends on both the energy levels and the electron-phonon coupling: does the relaxed excited state lie above or below the ground state distorted to the same atomic configuration, and by how much? (I think there's a similar situation in polaron transport.)
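For what it's worth, the usual configuration-coordinate shorthand for that temperature dependence is the Mott-Seitz quenching form (a textbook relation, stated here from memory), with $E_a$ the barrier from the relaxed excited state up to the crossing with the ground-state curve and $C$ a ratio of attempt frequencies:

$$ \eta(T) = \frac{1}{1 + C\, e^{-E_a / k_B T}} $$

Both $E_a$ and $C$ are set by exactly the details in question, the level positions and the electron-phonon coupling, which is why the answer is so system-specific.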
I don't know how much that field has changed over the years, but there seemed to be little hope that general arguments, like the toy models you describe, would apply widely. As you suggest, knowing which toy model to use still requires some microscopic detail, either from first principles or from experimental characterization.
"One direction that seems popular now, as our condensed matter seminar speaker yesterday pointed out, is using such calculations as guidelines for correctly estimating the parameters that should be fed into the essential physics toy models."
Yes, that's precisely right (to add a bit of naked self-promotion, that's the approach we take here).
Overall, I think models can do a pretty good job if there is some universality involved (e.g., Kondo physics, phase transitions). However, getting the details right (and a more quantitative comparison with experiments) can be quite a challenge. Combining models with first principles can be a way to go.
Even though it looks simple enough, there are a few challenges, for instance:
i) How to identify the relevant "ingredients" (as Don puts it) from first-principles calculations and build them into the model. This is not trivial, since what you call the "essential physics" can change depending on, say, the energy scales of the perturbations (bias, gate voltages, etc.). In other words, the contribution of particular terms in the model Hamiltonian can be more or less important depending on the energy scale of the process you are interested in.
I agree with Don that experimental characterization can be very helpful at this stage as well.
ii) Once that's done, you've got to ask yourself: how reliable are the parameters from first principles? Is there a strong dependence on the details of the calculations? Can you compare some of the parameters with experimental values? (A toy illustration of this kind of sensitivity check is sketched below.)
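To make (ii) concrete, here is a toy sensitivity check: take the same kind of single-resonant-level model and ask how much the zero-bias conductance moves when the "first-principles" inputs are wiggled by amounts comparable to switching functional or basis set. All the numbers, including the +/-50% spread, are invented for illustration:

```python
# Toy sensitivity check for first-principles-derived parameters (eV).

def conductance(eps0, gamma_L, gamma_R):
    """Zero-bias, low-temperature conductance in units of 2e^2/h:
    Breit-Wigner transmission evaluated at the Fermi energy E = 0."""
    gamma = gamma_L + gamma_R
    return gamma_L * gamma_R / (eps0 ** 2 + (gamma / 2.0) ** 2)

nominal = {"eps0": -0.5, "gamma_L": 0.05, "gamma_R": 0.05}
print(f"nominal: G = {conductance(**nominal):.3e} (2e^2/h)")

# Wiggle each parameter by +/-50% with the others held fixed.
for name in nominal:
    for factor in (0.5, 1.5):
        tweaked = dict(nominal, **{name: nominal[name] * factor})
        print(f"{name} x {factor}: G = {conductance(**tweaked):.3e} (2e^2/h)")
```

If the observable you care about swings by orders of magnitude under that kind of wiggle, the first-principles inputs had better be very good indeed.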
I would say that it all comes down to understanding the strengths and limitations of the two approaches (first principles/models). I think cooperation (rather than competition) between the two camps can go a long way.
Hi Don and Luis. It's pretty clear in this "first principles" business that you have to be very careful. As a colleague said, you'd better run the DFT calculation with multiple choices of functional, basis set, etc., and see which features of the results are persistent. For example, if you always find that there is a large gap between filled and empty molecular states, independent of the basis set, functional, and particular atomic geometry of the molecule-metal binding, then that's probably trustworthy. It is still scary to me, though, how some electronic structure calculations can be very good about some parameters (e.g., IR vibrational spectra w/ relative intensities of modes) and terrible about others (systematic errors of 2.5 eV in the ionization potential of a molecule).
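To make the "run it several ways and see what persists" advice concrete, here is a minimal sketch using PySCF (my choice of code, not anything anyone above endorsed), looping a small stand-in molecule over a few functionals and basis sets and printing the Kohn-Sham HOMO-LUMO gap each time. A molecule of actual transport interest bonded to a metal cluster would be the real target, and far more work:

```python
from pyscf import gto, dft

HARTREE_TO_EV = 27.2114

# Water as a stand-in; replace with the molecule/junction of interest.
ATOMS = "O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587"

for basis in ("sto-3g", "def2-svp", "def2-tzvp"):
    for xc in ("pbe", "b3lyp", "pbe0"):
        mol = gto.M(atom=ATOMS, basis=basis, verbose=0)
        mf = dft.RKS(mol)          # restricted Kohn-Sham DFT
        mf.xc = xc
        mf.kernel()                # run the SCF calculation
        nocc = mol.nelectron // 2
        homo, lumo = mf.mo_energy[nocc - 1], mf.mo_energy[nocc]
        print(f"{basis:>9s} / {xc:>6s}: "
              f"HOMO-LUMO gap = {(lumo - homo) * HARTREE_TO_EV:5.2f} eV")
```

If the gap stays put across that whole grid, it is probably a feature of the physics rather than an artifact of the method.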
Density functional theory is famous for being unable to get band gaps right without post-hoc fudging--I'm guessing the ionization potential is the same problem. My understanding is that this reflects the very different wavefunction of the excited state compared to the normally filled states.
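For the record, the standard way to state the problem (a textbook relation, hedged as my recollection of it): the fundamental gap is defined by total-energy differences, and the Kohn-Sham eigenvalue gap misses it by the derivative discontinuity $\Delta_{xc}$ of the exchange-correlation functional,

$$ E_{\mathrm{gap}} = I - A = \left[E(N-1) - E(N)\right] - \left[E(N) - E(N+1)\right] = \varepsilon_{\mathrm{LUMO}} - \varepsilon_{\mathrm{HOMO}} + \Delta_{xc}, $$

and $\Delta_{xc}$ is precisely the piece common approximate functionals get wrong. The same total-energy ("Delta-SCF") trick, computing $I = E(N-1) - E(N)$ directly, is one standard way to do better for molecular ionization potentials.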
It is nice to be able to estimate numbers (especially experimental ones) from first-principles calculations, since they can corroborate, and provide insight into, the particular physical regime being studied, which toy models can only predict qualitatively. However, worrying too much about quantitative accuracy is a bit pointless (I know some of the results from first principles in things like electron transport are off by orders of magnitude), since any reasonably interesting phenomenon should be robust over a range of parameter values.