There is a nice article in Nature this past week talking about the possible impending end of Moore's Law. The quick summary: it really looks like we are approaching the end of one of Moore's laws (that the number of transistors on a chip doubles roughly every 18 months). Bear in mind that the endurance of this form of Moore's law is not an accident - the growth of transistor density went from an empirical observation made by Moore to a growth target adopted by the semiconductor industry decades ago.
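To get a feel for what that doubling cadence implies, here is a minimal back-of-the-envelope sketch of the scaling law N(t) = N0 * 2^(t/T). The 18-month doubling period is the figure quoted above; the time spans are round numbers chosen purely for illustration:

```python
# Minimal sketch of exponential scaling: N(t) = N0 * 2**(t / T),
# with T the doubling period. Values are illustrative, not industry data.
T = 1.5  # assumed doubling period in years (the "18 months" above)

for years in (3, 10, 30):
    factor = 2 ** (years / T)
    print(f"after {years:2d} years: x{factor:,.0f} more transistors")
# after  3 years: x4
# after 10 years: x102
# after 30 years: x1,048,576
```

A factor of roughly a hundred per decade, sustained for decades, is what made the industry's self-imposed target so remarkable - and so hard to keep meeting.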
There are many reasons why continued aggressive transistor scaling is difficult; I write about these at some length in my book. Clearly we are approaching the limit where devices are so small that atomic-scale differences in geometry and composition can start to affect performance. Power density, even when transistors are nominally "off" (leakage currents never truly vanish), is becoming a major problem. This is one reason mentioned in the article why clock speeds on processors have basically stopped climbing. (Oddly, I never hear anyone mention the other major reason that clock speeds have plateaued at a few GHz: going much higher in frequency turns layout and circuit design into a much more difficult microwave engineering task.)
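The microwave point is easy to quantify with rough numbers. Below is a hedged sketch of how far a signal can travel across a chip in one clock period; the effective dielectric constant and die size are illustrative assumptions, and real interconnect is slower still because it is RC-limited rather than an ideal transmission line:

```python
# Rough numbers behind the "microwave engineering" point: how far does a
# signal travel on-chip in one clock period? The effective permittivity
# (~4, SiO2-like dielectric) and ~2 cm die size are assumed round numbers.
c = 3.0e8              # speed of light in vacuum, m/s
eps_eff = 4.0          # assumed effective dielectric constant
v = c / eps_eff**0.5   # ideal signal velocity, ~0.5c

for f_ghz in (1, 4, 10):
    period = 1.0 / (f_ghz * 1e9)   # clock period, seconds
    dist_cm = v * period * 100     # distance covered in one period, cm
    print(f"{f_ghz:>2} GHz: ~{dist_cm:.1f} cm per clock period")
# At a few GHz this distance is already comparable to a ~2 cm die, so
# wires have to be treated as transmission lines, not ideal nodes.
```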
The article discusses possible radical shifts in strategy to extend the trend of increasing processor performance. These include major changes in materials (the obligatory mention of graphene and two-dimensional semiconductors) and architecture (going 3D in circuit design; quantum computing; the increasingly trendy neuromorphic computing). There are also major efforts to think about computing at lower powers. While it's cool to talk about these, I have to say that the enormous economic advantage of silicon (an individual Si transistor costs an infinitesimal fraction of a cent, and we know how to make a billion of them at a time and have them all work for a decade) makes it very difficult to see any competing material gaining significant ground for a long time to come.
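For a sense of just how infinitesimal that fraction of a cent is, here is a one-line estimate. The chip price and transistor count are round-number assumptions, not figures from the article:

```python
# The economics in one line: rough price per transistor for a modern CPU.
# Both numbers below are assumed round figures for illustration only.
chip_price_usd = 300.0   # assumed retail price of a processor
transistors = 2e9        # assumed transistor count, order of billions

cents_each = chip_price_usd * 100 / transistors
print(f"~{cents_each:.2e} cents per transistor")  # ~1.5e-05 cents
```

Any challenger material has to beat something like a hundred-thousandth of a cent per working device, delivered in the billions, with decade-scale reliability.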