Tuesday, February 13, 2007

Quantum computing: are we there yet?

(Updated and corrected) As others in the blogging world have pointed out, today is the big day for D-Wave, a privately held, VC-financed Canadian company that plans a public demonstration of a 16 qubit quantum computer. One of the main ideas behind quantum computation is that, because of the way quantum mechanics works, performing a linear number of operations, N, allows you to build up quantum states that can be written as superpositions containing an exponentially large (e.g. 2^N) number of terms. If one can do this and not have decoherence (due to environmental interactions) mess up the superposition states, it is possible to use this property of quantum mechanics to do certain computations much faster than classical computers. Another way to view the power of this quantum parallelism: suppose you want to solve a math problem, and the input is an N-bit binary number. With a generic quantum computer, you can imagine preparing an initial state built out of N qubits that is actually a superposition of all 2^N possible inputs. Your quantum computer could then solve the problem, producing a superposition of all solutions corresponding to those inputs. Readout is the tricky bit, of course, since simple-minded measurement of the final state will only pick out one of those solutions.
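To make the counting concrete, here is a minimal classical simulation sketch (my own toy illustration in Python/numpy, nothing to do with D-Wave's hardware): starting from |00...0> and applying a Hadamard gate to each of N qubits produces a state vector in which all 2^N basis states appear with equal weight. The fact that we have to store all 2^N amplitudes explicitly is also exactly why classical simulation of this gets expensive so quickly.

```python
# Toy illustration: N Hadamard gates turn |00...0> into an equal superposition
# of all 2^N bit strings. Simulated classically with numpy, which requires
# storing all 2^N amplitudes explicitly -- the exponential cost is the point.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

N = 10
state = np.zeros(2**N)
state[0] = 1.0                                # start in |00...0>

# The N-qubit operator is the N-fold Kronecker product of single-qubit Hadamards.
full_H = np.array([1.0])
for _ in range(N):
    full_H = np.kron(full_H, H)
state = full_H @ state

print(len(state))                             # 2^N = 1024 amplitudes
print(np.count_nonzero(state))                # every one of the 1024 terms is present
print(np.allclose(state, 1 / np.sqrt(2**N)))  # all with equal amplitude
```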

There have been many ideas proposed for physical implementations of quantum computers. The requirement that decoherence be small is extremely restrictive. With so-called "fault-tolerant" quantum computation, one can beat down that requirement a bit by using additional qubits to do error correction. In the last few years, there has been great progress in using small superconducting systems as quantum mechanical bits (qubits), based either on the charge of small "Cooper pair box" metal islands or on persistent currents in superconducting loops with Josephson junctions. One can do a form of quantum computation using NMR, though the number of effective qubits in molecules is strongly limited. There have been proposals to use tunable hyperfine interactions in phosphorus-doped Si to get around that restriction. Some people want to do quantum computation using photons, through optical manipulation of excitons in semiconductor dots, or directly with individual electron spins in semiconductor nanostructures. The current records for producing superpositions like the ones I described above (6 qubits) and other related superpositions (8 qubits) have been set using trapped ions.
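As an aside on the error-correction point, here is a toy sketch of why redundant qubits help (assuming, for simplicity, purely classical bit-flip errors and ignoring the syndrome measurements a real quantum code needs): the three-qubit repetition idea turns a per-qubit error probability p into a logical error probability of roughly 3p^2, which is a win whenever p is small.

```python
# Toy sketch: the classical intuition behind the three-qubit bit-flip code.
# Encode one logical bit as three physical bits, flip each independently with
# probability p, and decode by majority vote. Real quantum error correction
# must also handle phase errors and measure syndromes without disturbing the
# data, but the scaling (p -> roughly 3p^2) is the essential point.
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, trials=200_000):
    flips = rng.random((trials, 3)) < p      # each physical bit flips with prob. p
    decoded_wrong = flips.sum(axis=1) >= 2   # majority vote fails if 2+ bits flipped
    return decoded_wrong.mean()

for p in (0.01, 0.05, 0.10):
    print(f"physical error {p:.2f} -> logical error {logical_error_rate(p):.4f} "
          f"(compare 3p^2 = {3 * p * p:.4f})")
```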

The D-Wave demo is an attempt to do adiabatic quantum computation. The idea is to formulate a problem so that the initial data are represented by the ground state (lowest energy state) of a system of interacting qubits. Then one very gently changes the Hamiltonian of the system so that the system never leaves its instantaneous ground state (that's the adiabatic part), while arranging matters so that the ground state of the final Hamiltonian represents the solution to the problem. The main proponent of this approach has been Seth Lloyd. Experimentally, the D-Wave folks are going to use 16 qubits made out of Nb loops and Josephson junctions (as explained here), and they cool this whole mess (128 filtered leads) down to 5 mK in a dilution refrigerator.
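For the curious, here is a minimal numerical sketch of the adiabatic idea (a toy model of my own in Python/numpy; it is not D-Wave's actual problem, hardware, or qubit count): interpolate H(s) = (1 - s) H_B + s H_P between a transverse-field "driver" Hamiltonian H_B, whose ground state is easy to prepare, and a small Ising "problem" Hamiltonian H_P, whose ground state is the bit string you want. The adiabatic theorem says that if the gap above the instantaneous ground state stays open and the sweep is slow enough, you end up in the ground state of H_P.

```python
# Toy model of adiabatic quantum computation on 3 qubits: interpolate
# H(s) = (1 - s) * H_B + s * H_P and watch the gap above the ground state.
# H_B = -sum_i X_i (easy ground state: uniform superposition); H_P is a tiny
# Ising cost function whose ground state is the answer we want to read out.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(single, site, n):
    """Embed a single-qubit operator acting on `site` into the n-qubit space."""
    out = np.array([1.0])
    for i in range(n):
        out = np.kron(out, single if i == site else I2)
    return out

n = 3
H_B = -sum(op(X, i, n) for i in range(n))                     # transverse-field driver
H_P = (op(Z, 0, n) @ op(Z, 1, n) + op(Z, 1, n) @ op(Z, 2, n)  # small Ising problem,
       - 0.5 * op(Z, 0, n))                                   # diagonal in the z basis

gaps = []
for s in np.linspace(0.0, 1.0, 51):
    evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(evals[1] - evals[0])
print(f"minimum gap along the sweep: {min(gaps):.3f}")        # stays open for this toy

# The answer is the lowest-energy bit string of H_P:
evals, evecs = np.linalg.eigh(H_P)
answer = int(np.argmax(np.abs(evecs[:, 0])))
print(f"ground state of H_P: |{answer:0{n}b}>")
```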

There seem to be three big questions here: (1) Is this really quantum computation? It's difficult for me to assess this, as I'm no expert. There seem to be arguments about which problems can really be solved in the adiabatic formulation that's implemented here, and about whether one can actually get significant improvements relative to classical algorithms. (2) Will the demo be fair? The high tech world is no stranger to rigged demos, and this is a particularly black-box affair. One has to trust that the stuff displayed on the screen of the PC controlling the electronics is really being determined by the chip at the bottom of the fridge, and not by some clever software. I'm willing to give them the benefit of the doubt, provided that they let some independent experts play with the system. (3) Why haven't they published everything in the open literature and let outsiders come in to verify that it's all legit? Well, I can't say I really blame them. The paper I linked to up there for their implementation never got into PRL, as far as I can see. I don't see Intel rushing to get outside approval for their new gate metallization. If these folks think they can actually get this to work and make a buck at it, more power to them. The truth will out.

4 comments:

  1. Anonymous 11:42 AM

    I just cruised in from the Quantum Pontiff's roundup of D-wave analyses, and I like your post! So, of course, I thought I'd nitpick a bit.

    1) The sentence "Your quantum computer could then solve the problem, producing a superposition of all 2^N possible solutions," is a bit warped. Making a superposition of all 2^N possible solutions (i.e., every possible bit string) isn't very helpful -- for instance, it doesn't actually encode any information about the problem! In order to "solve the problem," the QC has to do a bit more -- it has to single out the bitstring that actually solves the problem. Then we get to worry about measuring it.

    This misinterpretation usually comes from the picture of factoring a number N where you: (a) put register X in a superposition of all #s; (b) divide N by X; and (c) use interference to figure out which divisions had remainder zero. What's important to note is that the problem isn't even arguably "solved" until after step (b) -- it's the superposition of remainders (in this picture) that contains information about the solution, not the initial superposition of possible solutions.
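    (If it helps to make the bookkeeping concrete, here's a purely classical caricature of steps (a) and (b) -- just an explicit loop over every candidate X for a toy N. The quantum version holds all of these (X, remainder) pairs in one superposition after a single modular-arithmetic circuit, and step (c), the interference, is exactly what this loop cannot reproduce.)

    ```python
    # Classical caricature of steps (a) and (b) in the factoring picture above:
    # enumerate every candidate X and record the remainder N mod X. The quantum
    # computer prepares all of these pairs in superposition at once; extracting
    # the answer via interference (step (c)) has no analogue in this loop.
    N = 91  # toy number to factor (7 * 13)

    table = [(X, N % X) for X in range(2, N)]
    divisors = [X for X, r in table if r == 0]
    print(divisors)  # [7, 13]
    ```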

    2) The record for "Most number of qubits entangled" is actually 8, and is held by the Innsbruck ion trapping group (the paper, available at http://arxiv.org/abs/quant-ph/0603217, was published simultaneously in Nature with the Boulder paper that you cited). There might be some question as to whether they actually succeeded in demonstrating 8-ion entanglement, but (a) it's published, and (b) the experiment is sound.

    3) Fault tolerance is even more important than it seems in "The requirement that decoherence be small is very restrictive. With so-called 'fault-tolerant' quantum computation, one can beat down that requirement a bit..." Without fault-tolerant error correction, the requirement is basically that decoherence be zero, since useful computations take a lot (>10^11) of steps. Fault-tolerant design turns "zero error tolerated" into "0.1% error tolerated", which is the difference between "absolutely impossible" and "pretty hard".
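    To put rough numbers on that (just a back-of-the-envelope estimate using the figures above, not anything from a specific paper): with ~10^11 steps, an independent error probability p per step, and no error correction, the chance of finishing the whole computation without a single error is about (1 - p)^(10^11):

    ```python
    # Back-of-the-envelope: probability that a 10^11-step computation finishes
    # with zero errors if each step independently fails with probability p and
    # there is no error correction; exp(-p * steps) approximates (1 - p)**steps.
    import math

    steps = 1e11
    for p in (1e-15, 1e-12, 1e-9):
        print(f"p = {p:.0e}: P(no error) ~ {math.exp(-p * steps):.3e}")
    ```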

    Thanks again for blogging!

  2. Hi Robin -
    Thanks for your comments! I'll update the post to make the appropriate corrections. Your first point, in particular, was careless typing on my part. Your third point teaches me something I didn't know: that the number of operations required to do anything useful is so large. That surprises me a bit. Clearly you don't want decoherence while you're doing operations....

  3. Anonymous 6:47 PM

    Anybody got an update on what happened when D-wave turned on their machine for public viewing? I assume they didn't violate any laws of physics, or the world would have ended and I would have noticed that.

  4. Hey Dan - From what I can tell, their efforts emitted that most bogus of products: a press release. Other than that, I haven't seen much written anywhere about what they did or saw that had any technical detail.
