A blog about condensed matter and nanoscale physics. Why should high energy and astro folks have all the fun?
Friday, January 15, 2016
What is a functional? Ex: the Action Principle
Working our way toward the biggest theory most people have never heard of, let's talk about functionals, using the non-rigorous language that physicists like and which annoys mathematicians.
Here's an analogy. You want to drive from your house to the store. There are many possible routes, and for each route we could come up with a single number that depends on the route - it could be the total distance traveled, or the total time it took to get from the house to the store, or it could be the total fuel consumed, or it could be the number of times you turned left minus the number of times you turned right. We could take all your possible routes, and we could somehow process each possible route into a number. The operation that chews on your route information and converts it to a number is a functional of your path from the house to the store. (Why would you want to do this? Well, perhaps you value your time, and you want to pick the route that has the least accumulated time. Perhaps you value fuel costs, and you want to pick the route that has the least fuel consumption. The point is, depending on what you care about, a functional can let you pick between alternatives, here the routes, that are described by a huge, effectively infinite number of variables.)
In the spirit of MTW, a function of a single variable is a machine that takes a number, chews on it, and spits out a number. This could be \(y(x) = x^{2}\), for example. A function of multiple variables is a machine that takes more than one number, chews on them, and spits out a number -- like \(y(x_{1}, x_{2}, x_{3}) = x_{1}^{2} + 3x_{2} - x_{3}\). For this example, for any set of three numbers \( \{x_{1}, x_{2}, x_{3}\} \), you can compute a value of \(y\).
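If it helps to see those two "machines" spelled out in code, here is a minimal sketch (in Python, purely for illustration; the function names are mine, not standard notation):

```python
# A "machine" that takes one number and returns a number: y(x) = x^2
def y_of_one(x):
    return x ** 2

# A "machine" that takes three numbers and returns a number:
# y(x1, x2, x3) = x1^2 + 3*x2 - x3
def y_of_three(x1, x2, x3):
    return x1 ** 2 + 3 * x2 - x3

print(y_of_one(2.0))              # 4.0
print(y_of_three(1.0, 2.0, 3.0))  # 1 + 6 - 3 = 4.0
```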
A functional is the "continuum limit" of a function of multiple variables - it's a machine that takes an infinite number of numbers (!), chews on them, and spits out a single number. We can cast our example of Fermat's principle of least time this way. Suppose light starts out at point P, and we let it take some wild path like the one shown in the figure. We're eventually going to have the light wind up at point Q. How long does it take the light to get from P to Q? Well, that depends on how you think it goes. If you knew all the intervening points \((x_{i},y_{i})\), you could compute the distance between successive points, divide each little distance by the local speed of light to get the time for that segment, and add up all the times. The transit time \(t_{\mathrm{tot}}\) depends on the whole trajectory that the light takes from P to Q. Instead of writing \(t_{\mathrm{tot}}(x_{1}, y_{1}, x_{2}, y_{2}, .....)\), we write \(t_{\mathrm{tot}}[x,y]\), where the square brackets indicate that this is a functional. For any goofy trajectory we could draw from P to Q, we could compute \(t_{\mathrm{tot}}\). Fermat's principle of least time says that the one actually taken by light is the one that gives the smallest value of \(t_{\mathrm{tot}}\). Why does this work? That's actually a very deep question, and I won't try to answer it now.
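Here is a sketch of what a transit-time functional could look like in practice, for a path chopped into straight segments. The geometry and indices (air above an interface at \(y = 0\), water below) are just assumptions for illustration, and evaluating \(n\) at each segment's midpoint is one crude discretization choice among many:

```python
import math

C = 3.0e8  # speed of light in vacuum, m/s

def n_of(y):
    """Index of refraction: air above the interface (y > 0), water below."""
    return 1.0 if y > 0 else 1.33

def transit_time(path):
    """Functional t_tot[x, y]: total travel time along a piecewise-linear path.

    'path' is a list of (x, y) points starting at P and ending at Q.  Each
    straight segment is traversed at speed c/n, with n evaluated at the
    segment midpoint (a crude discretization choice).
    """
    t = 0.0
    for (x1, y1), (x2, y2) in zip(path[:-1], path[1:]):
        length = math.hypot(x2 - x1, y2 - y1)
        n = n_of(0.5 * (y1 + y2))
        t += length * n / C
    return t

# Any "goofy" trajectory from P = (0, 1) to Q = (2, -1) gets boiled down to a single number:
wiggly      = [(0, 1), (1.5, 0.4), (0.8, 0.1), (1.2, -0.3), (2, -1)]
straightish = [(0, 1), (1.0, 0.0), (2, -1)]
print(transit_time(wiggly), transit_time(straightish))
```

The whole path goes in; one number comes out. That's all a functional is.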
The Action Principle is the most famous example of showing that functionals can be incredibly useful in physics. I'm going to do a simple 1d example involving mechanical motion of a particle, but everything I will say generalizes to much more complicated cases. Suppose we have a particle that starts at some initial position \(x_{\mathrm{i}}\) at some initial time \(t_{\mathrm{i}}\), and ends up at some final position \(x_{\mathrm{f}}\) at some final time \(t_{\mathrm{f}}\). We want to know, how does the particle get there? Which of the essentially infinite number of possible trajectories \(x(t)\) did the particle take? (Note that by allowing any arbitrary path \(x(t)\), we're also basically permitting any arbitrary velocity as a function of time in there.)
The local way to answer this problem is to start with the particle at the initial location and time, and apply Newton's laws. From its position find the force acting on the particle, use that force to find the acceleration, and take a little timestep forward, updating the particle's position and velocity. Now repeat this.
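In code, that local recipe could look something like this minimal sketch (the spring force law and the numbers are just placeholders for illustration, not anything special):

```python
def simulate(x0, v0, force, mass, dt, n_steps):
    """'Local' approach: march Newton's second law forward one small step at a time."""
    x, v = x0, v0
    trajectory = [x]
    for _ in range(n_steps):
        a = force(x) / mass   # acceleration from the force at the current position
        v += a * dt           # update the velocity
        x += v * dt           # update the position
        trajectory.append(x)
    return trajectory

# Example: a mass on a spring, F = -k x
k, m = 1.0, 1.0
path = simulate(x0=1.0, v0=0.0, force=lambda x: -k * x, mass=m, dt=0.01, n_steps=1000)
```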
The Action Principle is a global approach. It says that there is some functional called the action, \(S[x(t)]\). For any trajectory \(x(t)\), you can compute a number \(S\). The trajectory that a classical particle takes is the one that starts and ends in the right places and times, and produces the minimum* value of \(S\). The form of \(S\) contains all the physics. (For a 1d particle obeying Newton's laws, the correct form for \(S\) is the integral as a function of time over the whole trajectory of (the kinetic energy minus the potential energy).) This is one of the stranger things to learn when studying physics - with the right procedure for writing down an expression for \(S\), and the right procedure for minimizing it (techniques called variational calculus), it seems like the (global) Action Principle is nearly magical, giving you ways to solve problems that would seem hopelessly complex in traditional (local) approaches. Why does this actually work? Again, this is a deep question, and I'll revisit it some other time. The fact that you can actually come up with a functional-based formalism does indicate that there is "hidden" structure to nature beyond what you might guess just from, e.g., Newton's laws.
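To make that concrete, here's a sketch of computing a discretized action for a particle falling under gravity (potential energy \(mgx\), with \(x\) the height); the particular values of \(m\), \(g\), \(T\), the grid size, and the "wrong" trajectories are just choices for illustration. The true free-fall parabola comes out with a smaller action than the alternatives:

```python
import numpy as np

m, g, T = 1.0, 9.8, 1.0          # mass, gravitational acceleration, total time
t = np.linspace(0.0, T, 2001)    # discretized time grid
dt = t[1] - t[0]

def action(x):
    """Discretized S[x(t)] = integral of (kinetic energy - potential energy) dt."""
    v = np.diff(x) / dt                      # velocity on each small interval
    ke = 0.5 * m * v ** 2                    # kinetic energy on each interval
    pe = m * g * 0.5 * (x[:-1] + x[1:])      # potential energy at the interval midpoints
    return np.sum((ke - pe) * dt)

# Three trajectories with the same endpoints x(0) = x(T) = 0:
parabola = 0.5 * g * t * (T - t)                        # the true free-fall path (solves x'' = -g)
stay_put = np.zeros_like(t)                             # just sit at x = 0 the whole time
wiggle   = parabola + 0.3 * np.sin(4 * np.pi * t / T)   # an arbitrary detour

for name, x in [("parabola", parabola), ("stay put", stay_put), ("wiggle", wiggle)]:
    print(name, action(x))
# The parabola gives the smallest (most negative) action of the three.
```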
To revisit the analogy: If I told you that there was a way to predict how you would drive from home to the store based on a single number related to each possible route, you would realize: (1) you don't necessarily have to know all the detailed rules of driving to find the preferred route, just how to calculate that number; and (2) there clearly is some deeper principle at work than just the rules of driving that picks out the route you take.
Next time, I'll finally get to the point about density functional theory.
*Technically, a maximum could also work here, but for many many cases, there is no maximum possible value of \(S\).
Sunday, January 10, 2016
"Local" vs "global" ways to solve physics problems
Inspired by a recent post of Ross McKenzie, I thought it would be fun to try to write a popularly accessible piece about the enormously successful, wholly remarkable theory that most people have never heard of, density functional theory.
To get there will require a couple of steps. First, it's important to appreciate that sometimes, thanks to the mathematical structure of the universe, it is possible to think about and solve physics problems with two seemingly very different approaches - call them "local" and "global". In the local approach, we write down equations that describe the underlying problem in great detail, and by carefully working out their solution, we arrive at an answer. In the global approach, we come at the problem from an overview perspective of considering possible solutions and figuring out which one is correct.
For example, let's think about a light ray propagating from point P (in air) to point Q (in water), as shown in the figure (courtesy wikipedia). It turns out that light travels at a speed \(c/n\) in a medium, where \(c\) is the speed of light in vacuum, and \(n\) is the "index of refraction" that depends on the material and the frequency of the light. (This is already short-hand for solving the complicated problem of electromagnetic radiation and its interactions with a material containing charges, something that Feynman wrote about elegantly in this book, based on these lectures.) The "local" approach would be to write down the equations describing the electromagnetic light waves, and solve these, including the description of the air, the water, and their interface. The result we would find is so simple and compact that we teach it to freshmen, Snell's Law: \(n_{1}\sin(\theta_{1}) = n_{2}\sin(\theta_{2})\), where the angles are defined in the figure.
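As a quick numerical illustration of the law itself (the indices and incidence angle here are just representative numbers):

```python
import math

n_air, n_water = 1.00, 1.33      # representative indices of refraction
theta_1 = math.radians(30.0)     # angle of incidence in the air

# Snell's Law: n1 sin(theta1) = n2 sin(theta2)
theta_2 = math.asin(n_air * math.sin(theta_1) / n_water)
print(math.degrees(theta_2))     # about 22 degrees: the ray bends toward the normal
```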
The "global" way to solve this problem (and again arrive at Snell's Law) was found by Fermat (yes, the one with the "last" theorem). He didn't have the option of solving the microscopic equations governing the radiation, since he died two hundred years before Maxwell published them. Instead, Fermat knew that light seems to travel in straight lines within a given medium. Therefore, he considered all the possible paths that a light ray could take from P to Q (such as the blue and green alternatives shown in the modified figure), trying to figure out which combination of straight segments (and hence which angles) were picked out by nature. The answer he posited was that the correct path for the light is the one that minimizes the overall time taken by the light in going from P to Q. This does give Snell's Law as a consequence, and seems to hint at a deeper organizing principle or structure at work than just "we solved complex equations with tricky boundary conditions, and Snell's Law fell out". (These days, if a student is asked to derive the Snell's Law from Fermat's Principle of Least Time, they would use calculus to do so, since that plus coordinate geometry provides a clear way to right down an expression for the transit time and a way to minimize that function. Fermat couldn't do that, as modern calculus didn't exist at the time, though he was among the people thinking along those lines. He was pretty sharp.)
Next up: another example of a "global" approach, the Action Principle.