Much has been written about quantum computers and their prospects for doing remarkable things (see here for one example of a great primer), and Scott Aaronson's blog is an incredible resource if you want more technical discussions. High-profile news this week about Microsoft investing heavily in one particular approach to quantum computation has been a good prompt to revisit parts of this subject, both to summarize the science and to think a bit about corporate funding of research. It's good to see how far things have come since I wrote this almost ten years ago (!!).
Remember, to realize the benefits of general quantum computation, you need (without quibbling over the details) some good-sized set (say 1000-10000) of quantum degrees of freedom, qubits, that you can initialize, place in superpositions and entangle, and manipulate in deliberate ways to perform computational operations. On the one hand, you need to be able to couple the qubits to the outside world, both to do operations and to read out their state. On the other hand, you need the qubits to be isolated from the outside world, because when a quantum system becomes entangled with (many) environmental degrees of freedom whose quantum states you aren't tracking, you generally get decoherence - what is known colloquially as the collapse of the wavefunction.
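To see what that entanglement-with-the-environment business does mathematically, here is a toy sketch (my own illustration, not anything from a real qubit platform): entangle one "system" qubit with one "environment" qubit, then trace the environment out. The off-diagonal elements of the system's density matrix - the coherences that quantum computation relies on - vanish.

```python
# Toy illustration (my own, not from any real platform): entangle a
# system qubit with an "environment" qubit, trace the environment out,
# and watch the off-diagonal (coherence) terms of the system's
# density matrix disappear.
import math

def reduced_density_matrix(psi):
    """psi[e][s] = amplitude for environment state e, system state s.
    Returns the 2x2 system density matrix with the environment traced out."""
    rho = [[0j, 0j], [0j, 0j]]
    for s in range(2):
        for sp in range(2):
            rho[s][sp] = sum(psi[e][s] * psi[e][sp].conjugate() for e in range(2))
    return rho

a = 1 / math.sqrt(2)

# Isolated superposition (|0> + |1>)/sqrt(2); environment stays in |0>.
product = [[a + 0j, a + 0j], [0j, 0j]]
# Entangled with the environment: (|00> + |11>)/sqrt(2).
entangled = [[a + 0j, 0j], [0j, a + 0j]]

print(abs(reduced_density_matrix(product)[0][1]))    # ~0.5: coherence intact
print(abs(reduced_density_matrix(entangled)[0][1]))  # 0.0: decohered
```

Once the environment "knows" which state the qubit is in, the superposition is effectively gone, even though no one deliberately measured anything.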
The rival candidates for general purpose quantum computing platforms make different tradeoffs in terms of robustness of qubit coherence and scalability. There are error correction schemes, and implementations that combine several "physical" qubits into a single "logical" qubit that is supposed to be harder to screw up. Trapped ions can have very long coherence times and be manipulated with great precision via optics, but scaling up to hundreds of qubits is very difficult (though see here for a claim of a breakthrough). Photons can be used for quantum computing, but since they fundamentally don't interact with each other under ordinary conditions, some operations are difficult, and scaling is really hard - to quote from that link, "About 100 billion optical components would be needed to create a practical quantum computer that uses light to process information." Electrons in semiconductor quantum dots might be more readily scaled, but coherence is fleeting. Superconducting approaches are the choices of the Yale and UC Santa Barbara groups.
The Microsoft approach, since they started funding quantum computing research, has always been rooted in ideas about topology, perhaps unsurprising since their effort has been led by Michael Freedman. If you can encode quantum information in something to do with topology, perhaps the qubits can be more robust to decoherence. One way to get topology in the mix is to work with particular exotic quantum excitations in 2d that are non-Abelian. That is, if you take two such excitations and move them around each other in real space, the quantum state somehow transforms itself to remember that braiding, including whether you moved particle 2 around particle 1, or vice versa. Originally Microsoft was very interested in the \(\nu = 5/2\) fractional quantum Hall state as an example of a system supporting this kind of topological braiding. Now, they've decided to bankroll the groups of Leo Kouwenhoven and Charlie Marcus, who are trying to implement topological quantum computing ideas using superconductor/semiconductor hybrid structures thought to exhibit Majorana fermions.
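A toy numerical way to see what "remembering the braiding" means: represent each exchange as a unitary matrix acting on a degenerate state space. The two 2x2 matrices below are loosely modeled on the braid matrices usually quoted for Ising anyons (treat the specific phases as illustrative, nothing from the Microsoft effort); the point is simply that they don't commute, so the order of exchanges changes the final state.

```python
# Toy sketch: in a non-Abelian system, each exchange ("braid") applies
# a unitary to the state, and the order of exchanges matters. These
# 2x2 unitaries are modeled loosely on the Ising-anyon braid matrices;
# treat the exact phases as illustrative.
import cmath, math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

r = cmath.exp(-1j * math.pi / 8)
s = cmath.exp(1j * math.pi / 8) / math.sqrt(2)

B1 = [[r, 0], [0, 1j * r]]           # exchange particles 1 and 2
B2 = [[s, -1j * s], [-1j * s, s]]    # exchange particles 2 and 3

forward = matmul(B1, B2)  # braid (1,2) then (2,3)
reverse = matmul(B2, B1)  # braid (2,3) then (1,2)

# The resulting unitaries differ: the state "remembers" the order.
diff = max(abs(forward[i][j] - reverse[i][j]) for i in range(2) for j in range(2))
print(diff)  # ~1.0: the two orders act very differently
```

An Abelian system (ordinary bosons or fermions) would only pick up a commuting overall phase, and the order of exchanges would be invisible.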
It's worth noting that Microsoft is not the only company investing serious money in quantum computing. Google has invested enormously in John Martinis' effort. Intel has put a decent amount of money into a silicon quantum dot effort practically down the hall from Kouwenhoven. This kind of industrial investment does raise some eyebrows, but as long as it doesn't kill publication or hamstring students and postdocs with weird constraints, it's hard to see big downsides. (Of course, Uber and Carnegie Mellon are a cautionary example of how this sort of relationship may not work out well for the relevant universities.)
Saturday, November 26, 2016
Monday, November 21, 2016
More short items, incl. postdoc opportunities
Some additional brief items:
- Rice's Smalley-Curl Institute has two competitive, endowed postdoctoral opportunities coming up: the J. Evans Attwell-Welch Postdoctoral Fellowship and the Peter M. and Ruth L. Nicholas Postdoctoral Fellowship in Nanotechnology. The competition is fierce, but they're great awards and come with separate funds for travel and research supplies. Applying requires working with a Rice faculty sponsor, and the deadline for applications is June 30, 2017, with an anticipated start date around the beginning of September 2017.
- This may be completely academic, but my colleagues at Rice's Baker Institute, including former NSF director and Presidential science adviser Neal Lane, have prepared a report with recommendations to the next science adviser regarding the Office of Science and Technology Policy and how to integrate science into policy making. Yeah. Sigh.
- Check out Funsize Physics! It's a repository of education and outreach content from NSF investigators, started by Shireen Adenwalla and Jocelyn Bosley and related to their NSF MRSEC.
- Anyone have strong opinions about Academic Analytics? The main questions are whether the quality control on the information is good, and whether the information can actually be useful.
Wednesday, November 16, 2016
short items
A handful of brief items:
- A biologist former colleague has some good advice on writing successful NSF proposals that translates well to other disciplines and agencies.
- An astronomy colleague has a nice page on the actual science behind the much-hyped supermoon business.
- Lately I've found myself recalling a book that I read as part of an undergraduate philosophy of science course twenty-five years ago, The Dilemmas of an Upright Man. It's the story of Max Planck and the compromises and choices he made while trying to preserve German science through two world wars. As the Nazis rose to power and began pressuring government scientific institutions such as the Berlin Academy and the Kaiser Wilhelm Institutes, Planck decided to remain in leadership roles and generally not speak out publicly, in part because he felt that if he abandoned his position, only awful people like the ardent Nazi Johannes Stark would be left behind. These decisions may have preserved German science, but they broke his relationship with Einstein, who never spoke to Planck again from 1937 until Planck's death in 1947. It's a good book and very much worth reading.
Wednesday, November 09, 2016
Lenses from metamaterials
As alluded to in my previous posts on metamaterials and metasurfaces, there have been some recently published papers that take these ideas and do impressive things.
- Khorasaninejad et al. have made a metasurface out of a 2d array of precisely designed TiO\(_2\) posts on a glass substrate. The posts vary in size and shape, and are carefully positioned and oriented on the substrate so that, for light incident from behind the glass, normal to the glass surface, and centered on the middle of the array, the light is focused to a spot 200 microns above the array surface. Each little TiO\(_2\) post acts like a sub-wavelength scatterer and imparts a phase shift on the passing light, so that the whole array together acts like a converging lens. This is very reminiscent of the phased array I'd mentioned previously. For a given array, different colors focus to different depths (chromatic aberration). Impressively, the arrays are designed so that there is no polarization dependence of the focusing properties for a given color.
- Hu et al. have made a different kind of metasurface, using plasmonically active gold nanoparticles on a glass surface. The remarkable achievement here is that the authors have used a genetic algorithm to find a pattern of nanoparticle shapes and sizes that somehow, through phased array magic, produces a metasurface that functions as an achromatic lens - different visible colors (red, green, blue) normally incident on the array focus to the same spot, albeit with a short focal length of a few microns.
- Finally, in more of a 3d metamaterial approach, Krueger et al. have leveraged their ability to create 3d designer structures of porous silicon. The porous silicon frameworks have an effective index of refraction at the desired wavelength. By controllably varying the porosity as a function of distance from the optical axis of the structure, these things can act as lenses. Moreover, because of designed anisotropy in the framework, they can make different polarizations of incident light experience different effective refractive indices and therefore have different focal lengths. Fabrication here is supposed to be considerably simpler than the complicated e-beam lithography needed to accomplish the same goal with 2d metasurfaces.
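For all three approaches, the underlying design rule is the same textbook one: to focus normally incident light of wavelength \(\lambda\) to a spot a distance \(f\) above the surface, the element at radius \(r\) from the optical axis must impart a phase \(\phi(r) = (2\pi/\lambda)(f - \sqrt{r^2 + f^2})\), so that every path to the focus accumulates the same total phase. A quick sanity check (my numbers, roughly matching the 200 micron focal length mentioned above):

```python
# Sketch of the metalens design rule: the imparted phase phi(r) must
# cancel the extra propagation path from radius r to the focal spot.
# Numbers are illustrative (green light, 200 um focal length).
import math

lam, f = 0.532, 200.0   # wavelength and focal length, in microns

def lens_phase(r_um):
    """Phase the surface must impart at radius r (radians)."""
    k = 2 * math.pi / lam
    return k * (f - math.hypot(r_um, f))

def total_phase(r_um):
    """Imparted phase plus propagation phase to the focal point."""
    k = 2 * math.pi / lam
    return lens_phase(r_um) + k * math.hypot(r_um, f)

# The total is independent of r: every ray arrives at the focus in phase.
print(total_phase(0.0) - total_phase(50.0))   # ~0
```

The hard part, of course, is building scatterers that actually deliver that prescribed phase at each position, which is exactly what the papers above do in three different ways.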
Friday, November 04, 2016
What is a metasurface?
As I alluded in my previous post, metamaterials are made out of building blocks, and thanks to the properties of those building blocks and their spatial arrangement, the aggregate system has, on longer distance scales, emergent properties (e.g., optical, thermal, acoustic, elastic) that can be very different from the traits of the individual building blocks. Classic examples are opal and butterfly wing, both of which are examples of structural coloration. The building blocks (silica spheres in opal; chitin structures in butterfly wing) have certain optical properties, but by properly shaping and arranging them, the metamaterial comprising them has brilliant iridescent color very different from that of bulk slabs of the underlying material.
This works because of wave interference of light. Light propagates more slowly in a dielectric (\(c/n(\omega)\), where \(n(\omega)\) is the frequency-dependent index of refraction). Light propagating through some thickness of material will pick up a phase shift relative to light that propagates through empty space. Moreover, additional phase shifts are picked up at interfaces between dielectrics. If you can control the relative phases of light rays that arrive at a particular location, then you can set up constructive interference or destructive interference.
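As a concrete sketch of that bookkeeping (my numbers, not from any particular paper): a ray crossing a thickness \(d\) of index-\(n\) material picks up an extra phase \((n-1)\,2\pi d/\lambda\) relative to a ray traversing the same distance in vacuum, and that relative phase decides whether two recombined rays interfere constructively or destructively.

```python
# Sketch of the phase bookkeeping, with made-up illustrative numbers:
# a ray crossing thickness d of index-n material gains phase
# (n - 1) * 2*pi*d / lambda relative to the same path in vacuum.
import cmath, math

def extra_phase(n, d_nm, wavelength_nm):
    """Extra phase (radians) relative to a vacuum path of equal length."""
    return (n - 1) * 2 * math.pi * d_nm / wavelength_nm

def interference_intensity(delta_phi):
    """Two equal-amplitude rays with relative phase delta_phi;
    normalized so perfect constructive interference gives 1."""
    return abs(1 + cmath.exp(1j * delta_phi)) ** 2 / 4

lam = 500.0      # nm, green-ish light (illustrative)
n_glass = 1.5    # illustrative index

# 500 nm of glass adds half a wavelength of delay: destructive.
print(interference_intensity(extra_phase(n_glass, 500.0, lam)))   # ~0
# 1000 nm adds a full wavelength: back in phase, constructive.
print(interference_intensity(extra_phase(n_glass, 1000.0, lam)))  # ~1
```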
This is precisely the same math that gives you diffraction patterns. You can also do this actively with radio transmitter antennas. If you set up an antenna array and drive each antenna at the same frequency but with a controlled phase relative to its neighbors, you can tune where the waves constructively or destructively interfere. This is the principle behind phased arrays.
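Here's a minimal sketch of that steering trick, assuming \(N\) identical emitters a distance \(d\) apart: drive element \(m\) with an extra phase \(-m k d \sin\theta_0\) (with \(k = 2\pi/\lambda\)), and the constructive-interference peak lands at angle \(\theta_0\).

```python
# Minimal phased-array sketch: N emitters, spacing d, progressive
# phase shift -m*k*d*sin(theta0) steers the beam to angle theta0.
import cmath, math

def array_factor(theta, n_elem, d_over_lambda, theta0):
    """Normalized far-field amplitude at angle theta (1 at the steered peak)."""
    k_d = 2 * math.pi * d_over_lambda
    phase_step = -k_d * math.sin(theta0)   # applied phase gradient
    total = sum(cmath.exp(1j * m * (k_d * math.sin(theta) + phase_step))
                for m in range(n_elem))
    return abs(total) / n_elem

steer = math.radians(30)
print(array_factor(steer, 8, 0.5, steer))  # 1.0: beam points at 30 degrees
print(array_factor(0.0, 8, 0.5, steer))    # ~0: almost nothing at broadside
```

The same arithmetic, with fixed rather than actively driven phase shifters, is what the metasurfaces below exploit.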
An optical metasurface is an interface that has structures on it that impose particular phase shifts on light that either is transmitted through or reflected off the interface. Like a metamaterial and for the same wave interference reasons, the optical properties of the interface on distance scales larger than those structures can be very different than those of the materials that constitute the structures. Bear in mind, the individual structures don't have to be boring - each by itself could have complicated frequency response, like acting as a dielectric or plasmonic resonator. We now have techniques that allow rich fabrication on surfaces with a variety of materials down to scales much smaller than the wavelength of visible light, and we have tremendous computational techniques that allow us to calculate the expected optical response from such structures. Put these together, and those capabilities enable some pretty amazing optical tricks. See here (pdf!) for a good slideshow covering this topic.
Controlling the relative phases between antennas in an array lets you steer radiation. (Image by Davidjessop - own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48304978)
Tuesday, November 01, 2016
What is a metamaterial?
(This is part of a lead-in to a brief discussion I'd like to do of two papers that just came out.) The wikipedia entry for metamaterial is actually rather good, but doesn't really give the "big picture". As you will hopefully see, that wording is a bit ironic.
"Ordinary" materials are built up out of atoms or molecules. The electronic, optical, and mechanical properties of a solid or liquid come about from the properties of the individual constituents, and how those constituents are spatially arranged and coupled together into the whole. On the length scale of the constituents (the size of atoms, say, in a piece of silicon), the local properties like electron density and local electric field vary enormously. However, on length scales large compared to the individual constituent atoms or molecules, it makes sense to think of the material as having some spatially-averaged "bulk" properties, like an index of refraction (describing how light propagates through the material), or a magnetic permeability (how the magnetic induction \(\mathbf{B}\) inside a material responds to an externally applied magnetic field \(\mathbf{H}\)), or an elastic modulus (how a material deforms in response to an applied stress).
A "metamaterial" takes this idea a step further. A metamaterial is built up out of constituent building blocks such as dielectric spheres or metallic rods. The properties of an individual building block arise as above from its own constituent atoms, of course. However, the properties of the metamaterial, on length scales long compared to the size of the building blocks, are emergent from the properties of those building blocks and how the building blocks are then arranged and coupled to each other. The most common metamaterials are probably dielectric mirrors, which are a subset of photonic band gap systems. You can take thin layers of nominally transparent dielectrics, stack them up in a periodic way, and end up with a mirror that is incredibly reflective at particular wavelengths - an emergent optical property that is not at all obvious at first glance from the properties of the constituent layers.
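You can check the dielectric-mirror claim with the standard transfer-matrix method from thin-film optics (textbook machinery, not code from any particular paper; the indices below are assumed, ballpark values for a TiO\(_2\)/SiO\(_2\) quarter-wave stack on glass):

```python
# Transfer-matrix sanity check of the dielectric-mirror claim
# (standard thin-film optics; indices are assumed ballpark values
# for a TiO2/SiO2 quarter-wave stack on glass).
import cmath, math

def layer_matrix(n, phase):
    """Characteristic matrix of one layer with index n and phase thickness."""
    return [[cmath.cos(phase), 1j * cmath.sin(phase) / n],
            [1j * n * cmath.sin(phase), cmath.cos(phase)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def stack_reflectance(indices, phases, n_in=1.0, n_sub=1.5):
    """Normal-incidence reflectance of a layer stack on a substrate."""
    M = [[1, 0], [0, 1]]
    for n, ph in zip(indices, phases):
        M = matmul(M, layer_matrix(n, ph))
    (A, B), (C, D) = M
    num = n_in * (A + n_sub * B) - (C + n_sub * D)
    den = n_in * (A + n_sub * B) + (C + n_sub * D)
    return abs(num / den) ** 2

n_hi, n_lo, pairs = 2.3, 1.45, 8
indices = [n_hi, n_lo] * pairs
phases = [math.pi / 2] * len(indices)   # quarter-wave thickness per layer

print(stack_reflectance(indices, phases))   # ~0.998: highly reflective
print(abs((1.0 - 1.5) / (1.0 + 1.5)) ** 2)  # ~0.04 for a bare glass surface
```

Sixteen transparent layers, each individually almost invisible, combine into a mirror better than 99% reflective at the design wavelength - that's the emergence.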
Depending on what property you're trying to engineer in the final metamaterial, you will need to structure the system on different length scales. If you want to mess with optical properties, generally the right ballpark distance scale is around a quarter of the wavelength (within the building block constituent) of the light. For microwaves, this can be in the cm range; for visible light, it's tens to hundreds of nm. If you want to make an acoustic metamaterial, you need to make building blocks on a scale comparable to a fraction of the wavelength of the sound you want to manipulate. Mechanical metamaterials, which have large-scale elastic properties far different from those of their individual building blocks, are trickier, and should be thought of as something more akin to a small truss or origami framework. These differ from optical and acoustic metamaterials because the latter rely crucially on interference phenomena between waves to build up their optical or acoustic properties, while structural systems rely on local properties (e.g., bending at vertices).
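The quarter-wavelength rule of thumb is easy to put numbers on (the indices here are assumed, illustrative values):

```python
# Back-of-the-envelope version of the length-scale estimates above:
# structure on roughly a quarter wavelength *inside* the building-block
# material, i.e. lambda / (4 n). Indices are illustrative.
def feature_scale(wavelength_m, n=1.0):
    """Rough structuring scale for a metamaterial, in meters."""
    return wavelength_m / (4 * n)

c = 3.0e8  # speed of light, m/s

# Microwaves at 10 GHz in air: cm-ish features.
print(feature_scale(c / 10e9))        # 0.0075 m, i.e. 7.5 mm
# Green light (500 nm) in a dielectric with n = 2: tens of nm.
print(feature_scale(500e-9, n=2.0))   # 6.25e-08 m, i.e. ~60 nm
```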
Bottom line: We now know a lot about how to build up larger structures from smaller building blocks, so that the resulting structures can have very different and interesting properties compared to those of the constituents themselves.