I work in a field, semiconductors, that is the paradigmatic example of the Singularity. When people talk about technology zooming up the exponential growth curve, they're talking about what I do. Since the field began in the early 1960s, there has been more improvement in integrated circuits than in any other technology except magnetic storage. Transistors have not only gotten cheaper by many orders of magnitude; they've also gotten faster, more reliable, and less power-hungry. Yet in spite of all this, there's been a lot less change in the underlying ideas than you would think.
I've seen the progress part directly. The first chip I worked on, the V-11 microprocessor at DEC in the early 80s, had 110,000 transistors with a gate length of 3 µm and ran at 5 MHz. My latest has 130,000,000 transistors with a gate length of 28 nm, and runs at 750 MHz. That's not actually a big chip by today's standards! Yet it's a factor of about 1200 in density (130,000,000 / 110,000) and 150 in raw clock speed (750 MHz / 5 MHz). Overall it's about 2000 times faster, and 20 times cheaper to boot.
It's a story of terrific technological success, which naturally leads to vast hype. People have compared the invention of semiconductors to that of printing, writing, and even fire. Some writers – most notably the SF author and CS professor Vernor Vinge, and the inventor Ray Kurzweil – go farther still and claim that semiconductors will bring about the transcendence of humanity, what they call the Singularity. Things will change so much that no one alive today will be able to comprehend the people (or their uploaded avatars) of that era. We'll either be amoebas crushed by battling AIs, or we'll all be as gods.
Yeah right. I've been in this field for over 30 years, and what strikes me is NOT how unfathomably different it has become, but how constant the underlying technologies have actually remained. Let me start at the lowest level and work up:
- The base material of chips is still silicon, as it was in the early 60s. People have tried other compounds like gallium arsenide and silicon-germanium alloys, but they're only used for niche products, and fewer and fewer of those.
- The base device is the MOSFET transistor, invented around 1960. MOSFETs look a little different today – they're built out of etched fins on the chip surface instead of lying flat along it, and use hafnium oxide instead of silicon dioxide for the gate dielectric – but it's the same basic device that Noyce and Moore worked on. There are no more bipolar transistors, no tunnel diodes, no magnetic bubble memories, and no exotic non-volatile devices like MRAM. Maybe that last one will happen at some point, but people have been trying such schemes for decades.
- The base circuits are the CMOS static combinational gate, the 6-transistor SRAM cell, the DRAM capacitive storage bit cell, and the floating-gate flash memory cell, all from the 70s. These account for practically all of the 10^21 (a billion trillion) transistors made per year. Dozens of other circuit styles have been tried, and all have failed because of excess power or poor noise resistance. I've worked in some of these styles, such as bootstrapped NMOS, pre-charged dynamic logic, and cross-coupled cascode, and they're all gone. Few people even design at the circuit level anymore, except for those doing standard cell libraries (and most of those are auto-generated) or analog circuits. (A toy model of a static CMOS gate appears after this list.)
- The base data types are character, integer, fixed point, and floating point, all known to Mauchly and Wilkes in the late 1940s. Characters, at least, have expanded from the 5-bit Baudot code to the 8-to-32-bit UTF-8 encoding of Unicode. There are no logarithmic number systems (which make divide and square root easy), no redundant binary (which avoids carries), and hardly even any support for 128-bit floating point, which we built in the 80s. (A sketch of fixed-point arithmetic appears after this list.)
- The base processor architecture techniques were all discovered by the 1970s. Pipelining, caching, branch prediction, vector instructions, and out-of-order execution were all known by then. They used to be available only in supercomputers; now every widget that goes for more than $30 can afford an ARM Cortex-A9 processor, which uses all of them. Even the newer multi-core processors use ideas like write-back cache coherence, multi-threading, and distributed cache directories from the 80s. Massive SIMD and dataflow never made it. (A toy cache model appears after this list.)
- The base processor language is still C from the 70s. Lots of code doesn't even rely on the enhancements in the 1999 update, C99. People code today in a huge range of languages, but somewhere underneath them is usually an interpreter written in C, and C is in all the libraries and the OS. It's about the only language for which every machine has a native compiler. That and FORTRAN, which is even older. The object-oriented enhancements of C++ and Java are from the 80s. Lisp is still considered exotic, and it's from the 60s. (A minimal interpreter loop appears after this list.)
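To make the circuit bullet concrete, here's a toy switch-level model of a 2-input CMOS NAND gate (my own illustrative sketch in C, not anything from a real design flow). It shows why static CMOS beat the styles I listed: the PMOS pull-up network and the NMOS pull-down network are logical duals, so the output is always actively driven and there is no static current path from supply to ground.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy switch-level model of a 2-input CMOS NAND gate.
 * In a static CMOS gate the PMOS pull-up network and the NMOS
 * pull-down network are duals: exactly one of them conducts for
 * any input combination, so the output is always driven and no
 * steady current flows from supply to ground. */
static bool cmos_nand(bool a, bool b) {
    /* PMOS transistors conduct when their gate is low.
     * Here two sit in parallel between VDD and the output. */
    bool pull_up = !a || !b;
    /* NMOS transistors conduct when their gate is high.
     * Here two sit in series between the output and ground.
     * Duality guarantees pull_down == !pull_up. */
    bool pull_down = a && b;
    (void)pull_down;
    return pull_up;
}

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d -> %d\n", a, b, cmos_nand(a, b));
    return 0;
}
```

Running it prints the NAND truth table; every other gate in a standard cell library is built the same way, just with different pull-up and pull-down networks.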
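Fixed point also deserves a quick illustration, since it's the one data type on that list most readers haven't used. Here's a minimal sketch, assuming an arbitrary Q16.16 format (16 integer bits, 16 fraction bits); the type and helper names are mine, not any standard library's.

```c
#include <stdio.h>
#include <stdint.h>

/* Q16.16 fixed point: 16 integer bits, 16 fractional bits.
 * The format is chosen arbitrarily for illustration. */
typedef int32_t q16_16;

#define Q_ONE (1 << 16)

static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

/* Addition is plain integer addition; multiplication needs a
 * widening multiply and a shift to re-align the binary point. */
static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void) {
    q16_16 a = q_from_double(3.25);
    q16_16 b = q_from_double(1.5);
    printf("fixed: 3.25 * 1.5 = %f\n", q_to_double(q_mul(a, b)));
    printf("float: 3.25 * 1.5 = %f\n", 3.25 * 1.5);
    return 0;
}
```

The charm of fixed point is that it runs on a bare integer ALU, which is why it predates floating-point hardware and still shows up in DSPs and low-end microcontrollers.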
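And for caching, the oldest of those architecture tricks, here's a sketch of a direct-mapped cache's tag directory. The sizes are invented for illustration, and a real cache would also hold the data and handle write-backs; but the index/tag split below is exactly the structure that was already standard decades ago.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Sketch of a direct-mapped cache directory: 64 sets of 64-byte
 * lines (sizes invented for illustration). We track only tags
 * and hit/miss, not the cached data itself. */
#define LINE_BITS 6                  /* 64-byte lines */
#define SET_BITS  6                  /* 64 sets       */
#define NUM_SETS  (1 << SET_BITS)

static uint32_t tags[NUM_SETS];
static bool     valid[NUM_SETS];

static bool cache_access(uint32_t addr) {
    uint32_t set = (addr >> LINE_BITS) & (NUM_SETS - 1);
    uint32_t tag = addr >> (LINE_BITS + SET_BITS);
    if (valid[set] && tags[set] == tag)
        return true;                 /* hit */
    valid[set] = true;               /* miss: fill the line */
    tags[set]  = tag;
    return false;
}

int main(void) {
    /* Sequential access: the first touch of each line misses and
     * the rest of the line hits -- spatial locality at work. */
    int hits = 0;
    for (uint32_t addr = 0; addr < 4096; addr += 4)
        hits += cache_access(addr);
    printf("%d hits out of 1024 accesses\n", hits);
    return 0;
}
```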
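Finally, the "interpreter written in C" claim is easy to illustrate. Here's a toy stack-machine dispatch loop, a miniature of the pattern that, in far more elaborate form, sits underneath languages like Python and Lua. The opcodes are invented for this sketch.

```c
#include <stdio.h>

/* Toy stack-machine interpreter: the classic fetch-and-dispatch
 * loop. Real language runtimes add garbage collection, objects,
 * and hundreds of opcodes, but the skeleton is this. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[64], sp = 0;
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];        break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
        case OP_MUL:   sp--; stack[sp-1] *= stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[sp-1]);     break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* Computes (2 + 3) * 4 and prints 20. */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```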
Overall, it’s gratifying that the basic concepts of the field have persisted for so long. It means that one’s initial education has not become irrelevant. Learn the fundamental concepts and you can work for a long time in a wide set of sub-fields of the chip world.
But I don't mean to say that there haven't been any advances. Chips are now in every corner of the manufactured world. There have just been fewer underlying changes than the breathless promoters of futurity would have you believe. And I don't mean that there won't be significant consequences of further work. Twenty years from now we may be living in clouds of smart dust that support terabit links to our augmented-reality sensoria. But it's very likely that that smart dust will be made of silicon, use standard gates and arithmetic, run code on standard processors, and that that code will eventually depend on C. Any smart kid of the last 30 years can understand all of it.