Much has been written about quantum computers and their prospects for doing remarkable things (see here for one example of a great primer), and Scott Aaronson's blog is an incredible resource if you want more technical discussions. This week's high-profile news about Microsoft investing heavily in one particular approach to quantum computation is a good prompt to revisit parts of this subject, both to summarize the science and to think a bit about corporate funding of research. It's good to see how far things have come since I wrote this almost ten years ago (!!).
Remember, to realize the benefits of general quantum computation, you need (without quibbling over the details) some good-sized set (say 1000-10000) of quantum degrees of freedom, qubits, that you can initialize, place in superpositions, entangle, and manipulate in deliberate ways to perform computational operations. On the one hand, you need to be able to couple the qubits to the outside world, both to do operations and to read out their state. On the other hand, you need the qubits to be isolated from the outside world, because when a quantum system becomes entangled with (many) environmental degrees of freedom whose quantum states you aren't tracking, you generally get decoherence - what is known colloquially as the collapse of the wavefunction.
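To make that concrete, here's a minimal illustrative sketch (Python with NumPy; the dephasing timescale \(T_2\) is just an assumed phenomenological parameter, not a model of any particular hardware). A Hadamard gate puts a single qubit into a superposition, and a simple dephasing channel kills the off-diagonal elements of the density matrix, leaving an incoherent classical mixture:

```python
import numpy as np

# Start in |0>, written as a density matrix rho = |0><0|.
ket0 = np.array([[1.0], [0.0]], dtype=complex)
rho = ket0 @ ket0.conj().T

# A Hadamard gate puts the qubit in the superposition (|0> + |1>)/sqrt(2);
# the off-diagonal elements of rho are the quantum coherences.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
rho = H @ rho @ H.conj().T
print(rho)  # all four entries equal 0.5

# Phenomenological dephasing: entanglement with untracked environmental
# degrees of freedom suppresses the coherences by exp(-t/T2).
def dephase(rho, t, T2=1.0):
    out = rho.copy()
    out[0, 1] *= np.exp(-t / T2)
    out[1, 0] *= np.exp(-t / T2)
    return out

print(dephase(rho, t=5.0))  # nearly diagonal: an incoherent classical mixture
```

The surviving diagonal entries are just classical probabilities; the quantum information lived in the off-diagonal terms that decoherence wiped out.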
The rival candidates for general-purpose quantum computing platforms make different tradeoffs between robustness of qubit coherence and scalability. There are error correction schemes, and implementations that combine several "physical" qubits into a single "logical" qubit that is supposed to be harder to screw up. Trapped ions can have very long coherence times and can be manipulated with great precision via optics, but scaling up to hundreds of qubits is very difficult (though see here for a claim of a breakthrough). Photons can be used for quantum computing, but since they fundamentally don't interact with each other under ordinary conditions, some operations are difficult, and scaling is really hard - to quote from that link, "About 100 billion optical components would be needed to create a practical quantum computer that uses light to process information." Electrons in semiconductor quantum dots might be more readily scaled, but their coherence is fleeting. Superconducting circuits are the approach of choice for the Yale and UC Santa Barbara groups.
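The "physical" versus "logical" qubit idea is easiest to see in a classical toy model. The sketch below (Python, purely illustrative; real quantum codes like the surface code are far subtler, since you can't simply copy a quantum state) is the three-bit repetition code: storing one logical bit in three physical bits and taking a majority vote turns an error rate \(p\) into roughly \(3p^2\):

```python
import random

# Classical three-bit repetition code: one "logical" bit is stored in
# three "physical" bits, and a majority vote corrects any single flip.
def encode(bit):
    return [bit, bit, bit]

def add_noise(bits, p):
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)  # majority vote

p, trials = 0.05, 200_000
raw = sum(random.random() < p for _ in range(trials))
encoded = sum(decode(add_noise(encode(0), p)) for _ in range(trials))
print(f"unprotected error rate: {raw / trials:.4f}")      # ~ p = 0.05
print(f"encoded error rate:     {encoded / trials:.4f}")  # ~ 3p^2 ~ 0.007
```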
Microsoft's approach, ever since they started funding quantum computing research, has been rooted in ideas about topology, perhaps unsurprising given that their effort has been led by Michael Freedman. If you can encode quantum information in something to do with topology, perhaps the qubits can be made more robust against decoherence. One way to get topology into the mix is to work with particular exotic quantum excitations in 2d that are non-Abelian. That is, if you take two such excitations and move them around each other in real space, the quantum state somehow transforms itself to remember that braiding, including whether you moved particle 2 around particle 1 or vice versa. Originally Microsoft was very interested in the \(\nu = 5/2\) fractional quantum Hall state as an example of a system supporting this kind of topological braiding. Now they've decided to bankroll the groups of Leo Kouwenhoven and Charlie Marcus, who are trying to implement topological quantum computing ideas using superconductor/semiconductor hybrid structures thought to host Majorana fermions.
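"The order of braiding matters" is, at bottom, a statement about non-commuting operators. Here's a toy check (Python with NumPy, illustrative only): in the standard \(2 \times 2\) representation, exchanging Majorana modes \(i\) and \(j\) acts on the encoded qubit as \(B_{ij} = (1 + \gamma_i \gamma_j)/\sqrt{2}\) (up to convention), and the two exchange orders give genuinely different results:

```python
import numpy as np

# Pauli matrices give a 2x2 representation of three Majorana operators
# (they square to the identity and anticommute, which is all we need).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g1, g2, g3 = sx, sy, sz

# Exchanging Majorana modes i and j acts on the encoded qubit as
# B_ij = (1 + gamma_i gamma_j) / sqrt(2), up to convention.
I2 = np.eye(2)
B12 = (I2 + g1 @ g2) / np.sqrt(2)
B23 = (I2 + g2 @ g3) / np.sqrt(2)

# Non-Abelian statistics: the two exchange orders differ.
print(np.allclose(B12 @ B23, B23 @ B12))  # False
```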
It's worth noting that Microsoft is not the only player investing serious money in quantum computing. Google has invested enormously in John Martinis' effort. Intel has put a decent amount of money into a silicon quantum dot effort practically down the hall from Kouwenhoven. This kind of industrial investment does raise some eyebrows, but as long as it doesn't kill publication or hamstring students and postdocs with weird constraints, it's hard to see big downsides. (Of course, Uber and Carnegie Mellon are a cautionary example of how this sort of relationship can work out poorly for the university involved.)
7 comments:
This is definitely one of the most exciting fields for condensed-matter physicists, because they can now use some fairly deep theoretical concepts and come up with exotic experiments that society at large is finally interested in and willing to fund generously. The classic field-effect transistor is nice, but its underlying physics can be explained to high-school students, while concepts such as Majorana zero modes are much more sophisticated, to say the least.
It may be worth pointing out that the Delft and Niels Bohr experiments realize (if their conclusions are right) a qubit that is only protected as a memory element, as Kitaev notes in his original paper; some other invention is needed to make the computation robust against decoherence. But I suspect it is only a matter of time before another smart physicist figures out how to build a topological quantum gate, rather than simply a memory element.
Also, the Marcus group experiment shows an interesting electronic signature and length dependence that is consistent with the Majorana modes they were searching for, but, as they point out, the real test will be when they are able to perform time-resolved measurements of the qubit, and demonstrate that it is indeed a very stable phase-coherent memory element.
So, following up on your post from almost 10 years (!) ago, whatever became of D-Wave? I see that they still haven't initiated the destruction of the universe.
DanM, they are alive and well, apparently, though their quantum annealer approach doesn't seem to have blown far past classical computers, even for their specialized problems, as had been aggressively forecast. By the way, my post seems to have been prescient, because Science published this a few days after I wrote the above.
The D-Wave machine, for all the publicity it gets, is essentially a glorified spin glass simulator. But you have to give them credit for raising interest in the field, and for spurring tech companies to fund other quantum computing projects.
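To make "glorified spin glass simulator" concrete, here's a minimal classical simulated-annealing sketch on a random Ising spin glass, \(H = -\sum_{i<j} J_{ij} s_i s_j\), which is the kind of optimization problem annealers are benchmarked against (an illustrative toy, not a model of the actual hardware or of quantum annealing itself):

```python
import math
import random

# Classical simulated annealing on a random Ising spin glass,
# H = -sum_{i<j} J_ij s_i s_j. Illustrative toy only.
N = 32
J = {(i, j): random.gauss(0, 1) for i in range(N) for j in range(i + 1, N)}
s = [random.choice([-1, 1]) for _ in range(N)]

def local_field(i):
    return sum(J[min(i, j), max(i, j)] * s[j] for j in range(N) if j != i)

steps = 20_000
for step in range(steps):
    T = 3.0 * (1 - step / steps) + 1e-3       # slowly cool toward T = 0
    i = random.randrange(N)
    dE = 2 * s[i] * local_field(i)            # energy cost of flipping spin i
    if dE < 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]                          # Metropolis acceptance rule

print("final energy:", -sum(J[i, j] * s[i] * s[j] for (i, j) in J))
```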
Just to point out: decoherence is not the same as the collapse of the wave function at all!
Anon@4:14, I was streamlining my language, but I would like to read your view on this. Are you taking the view that there really is an evolution of the wave function not described by the Schroedinger equation that takes place during a measurement?