Here's an analogy. You want to drive from your house to the store. There are many possible routes, and for each route we could come up with a single number that depends on the route: the total distance traveled, the total time it took to get from the house to the store, the total fuel consumed, or even the number of times you turned left minus the number of times you turned right. The operation that chews on your route information and converts it into a number is a functional of your path from the house to the store. (Why would you want to do this? Well, perhaps you value your time, and you want to pick the route with the least accumulated time. Perhaps you value fuel costs, and you want to pick the route with the least fuel consumption. The point is, depending on what you care about, a functional can let you pick between alternatives, here the routes, that are described by a huge, effectively infinite number of variables.)
In the spirit of MTW, a function of a single variable is a machine that takes a number, chews on it, and spits out a number. This could be \(y(x) = x^{2}\), for example. A function of multiple variables is a machine that takes more than one number, chews on them, and spits out a number -- like \(y(x_{1}, x_{2}, x_{3}) = x_{1}^{2} + 3x_{2} - x_{3}\). For this example, for any set of three numbers \( \{x_{1}, x_{2}, x_{3}\} \), you can compute a value of \(y\).
A functional is the "continuum limit" of a function of multiple variables - it's a machine that takes an infinite number of numbers (!), chews on them, and spits out a single number. We can cast our example of Fermat's principle of least time this way. Suppose light starts out at point P, and we let it take some wild path like the one shown in the figure. We're eventually going to have the light wind up at point Q. How long does it take the light to get from P to Q? Well, that depends on how you think it goes. If you knew all the intervening points \((x_{i},y_{i})\), you could compute the distance between successive points, divide each distance by the speed of light along that stretch to get a time, and add up all of those times. The transit time \(t_{\mathrm{tot}}\) depends on the whole trajectory that the light takes from P to Q. Instead of writing \(t_{\mathrm{tot}}(x_{1}, y_{1}, x_{2}, y_{2}, \ldots)\), we write \(t_{\mathrm{tot}}[x,y]\), where the square brackets indicate that this is a functional. For any goofy trajectory we could draw from P to Q, we could compute \(t_{\mathrm{tot}}\). Fermat's principle of least time says that the path actually taken by light is the one that gives the smallest value of \(t_{\mathrm{tot}}\). Why does this work? That's actually a very deep question, and I won't try to answer it now.
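The sum described above is easy to sketch numerically. Here is a minimal Python sketch (the names are mine, and for simplicity the medium is assumed uniform, so the speed is a single constant): discretize the path into points \((x_i, y_i)\), and the transit-time functional becomes an ordinary sum over segments.

```python
import math

def transit_time(path, speed=1.0):
    """Total travel time along a piecewise-linear path.

    path  : list of (x, y) points from P to Q
    speed : speed of light in the medium (uniform here, for simplicity)
    """
    total = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        segment = math.hypot(x2 - x1, y2 - y1)  # distance between successive points
        total += segment / speed                # time to traverse this segment
    return total

# A straight path from P = (0, 0) to Q = (4, 3) ...
straight = [(0, 0), (4, 3)]
# ... and a "goofy" detour through the same endpoints.
goofy = [(0, 0), (1, 3), (2, -1), (4, 3)]

assert transit_time(straight) < transit_time(goofy)  # the straight path is faster
```

In a uniform medium the least-time path is just the straight line; the interesting cases are media where `speed` varies from place to place, which is where Fermat's principle earns its keep.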
The Action Principle is the most famous example of showing that functionals can be incredibly useful in physics. I'm going to do a simple 1d example involving mechanical motion of a particle, but everything I will say generalizes to much more complicated cases. Suppose we have a particle that starts at some initial position \(x_{\mathrm{i}}\) at some initial time \(t_{\mathrm{i}}\), and ends up at some final position \(x_{\mathrm{f}}\) at some final time \(t_{\mathrm{f}}\). We want to know, how does the particle get there? Which of the essentially infinite number of possible trajectories \(x(t)\) did the particle take? (Note that by allowing any arbitrary path \(x(t)\), we're also basically permitting any arbitrary velocity as a function of time in there.)
The local way to answer this problem is to start with the particle at the initial location and time, and apply Newton's laws. From its position find the force acting on the particle, use that force to find the acceleration, and take a little timestep forward, updating the particle's position and velocity. Now repeat this.
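That local, step-by-step procedure can be sketched in a few lines of Python (the function name and the spring-force example are my own, and this uses the simplest possible explicit-Euler update rather than anything sophisticated):

```python
import math

def newton_step(x, v, force, mass, dt):
    """One small explicit-Euler timestep of Newton's second law, a = F(x)/m:
    update position using the current velocity, and velocity using the
    current acceleration."""
    a = force(x) / mass
    return x + v * dt, v + a * dt

# Hypothetical example: a unit mass on a unit spring, F(x) = -x,
# released from rest at x = 1 and marched forward to t = 1.
dt = 1e-3
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = newton_step(x, v, lambda x: -x, 1.0, dt)

# The exact answer is x(1) = cos(1); with a small timestep, the march
# lands close to it.
```

The key point is that this approach never looks ahead: each update uses only the state of the particle right now.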
The Action Principle is a global approach. It says that there is some functional called the action, \(S[x(t)]\). For any trajectory \(x(t)\), you can compute a number \(S\). The trajectory that a classical particle takes is the one that starts and ends in the right places and times, and produces the minimum* value of \(S\). The form of \(S\) contains all the physics. (For a 1d particle obeying Newton's laws, the correct form for \(S\) is the integral over time, along the whole trajectory, of (the kinetic energy minus the potential energy).) This is one of the stranger things to learn when studying physics - with the right procedure for writing down an expression for \(S\), and the right procedure for minimizing it (techniques called variational calculus), it seems like the (global) Action Principle is nearly magical, giving you ways to solve problems that would seem hopelessly complex in traditional (local) approaches. Why does this actually work? Again, this is a deep question, and I'll revisit it some other time. The fact that you can actually come up with a functional-based formalism does indicate that there is "hidden" structure to nature beyond what you might guess just from, e.g., Newton's laws.
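A minimal numerical sketch of this idea (the names are my own, and I use a free particle, \(V = 0\), so that "kinetic minus potential" is just the kinetic energy): discretize a candidate trajectory \(x(t)\) into samples, approximate \(S\) as a sum, and compare the straight-line trajectory against a wiggly one with the same endpoints.

```python
import math

def action(xs, dt, mass=1.0, potential=lambda x: 0.0):
    """Discretized action: S ~ sum over segments of (KE - PE) * dt."""
    S = 0.0
    for x1, x2 in zip(xs, xs[1:]):
        v = (x2 - x1) / dt                      # segment velocity
        x_mid = 0.5 * (x1 + x2)                 # evaluate V at the midpoint
        S += (0.5 * mass * v**2 - potential(x_mid)) * dt
    return S

N = 1000
dt = 1.0 / N
# True free-particle trajectory from x = 0 at t = 0 to x = 1 at t = 1:
# a straight line at constant velocity.
xs_true = [i * dt for i in range(N + 1)]
# A wiggly trajectory with the same endpoints.
xs_wiggle = [i * dt + 0.1 * math.sin(math.pi * i * dt) for i in range(N + 1)]

assert action(xs_true, dt) < action(xs_wiggle, dt)  # the straight line wins
```

Any wiggle you add (with the endpoints held fixed) only increases the kinetic-energy integral, so the constant-velocity trajectory minimizes \(S\) - exactly what Newton's laws predict for a free particle.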
To revisit the analogy: If I told you that there was a way to predict how you would drive from home to the store based on a single number related to each possible route, you would realize: (1) you don't necessarily have to know all the detailed rules of driving to find the preferred route, just how to calculate that number; and (2) there clearly is some deeper principle at work than just the rules of driving that picks out the route you take.
Next time, I'll finally get to the point about density functional theory.
*Technically, a maximum could also work here, but for many, many cases, there is no maximum possible value of \(S\).
I'm not sure about the description of a functional in this post. Traditionally a distinction is made between the two along the lines of "a function eats *numbers* and shits numbers, a functional eats *the entire function* and shits a number". I've never seen someone claim that they are the same before, which this post does by eliding a function of infinite arguments with an infinite set of numbers.
Anon., I make no claims to rigor in what I wrote, though I don't think it's a crazy way to think about functionals. If a function f maps number(s) x to a number, and a functional g cares about the entire set (x, f(x)) to generate another number, then it seems like I could think of g as a function of all the possible values (x, f(x)), where x is allowed to vary continuously, and f has the specified relationship with x. I was trying to come up with some language or analogy that would be more explicit than "eats an entire function".
C'mon Doug, the finale to the very important DFT?