For the most part, computers don't compute; they shuffle. About 75 percent of the time, they're shuffling information, in the form of ones and zeros, between various layers of memory, and between memory and the central processing unit (CPU), where the actual work of computing is done. That's because computers in 2015 work essentially the same way as ENIAC, the world's first general-purpose electronic computer, did in 1946.
ENIAC was unveiled just a year after math genius John von Neumann and others circulated the original blueprint for stored-program electronic computers. Then and now, mainframes, servers, laptops, tablets, phones, cameras and all the rest basically consist of two components: processor and memory. The processor calculates; the memory stores data together with the instructions (software or apps) on what to do with that data.
Memory currently comes in two basic forms: non-volatile, meaning the information stays intact when the power is turned off (flash memory such as thumb drives and camera chips, plus conventional hard drives), and volatile. The most common type of volatile memory, DRAM (dynamic random-access memory), operates at more or less CPU speed, thousands of times faster than non-volatile memory. When a computer is powered down, information stored in DRAM disappears.
Because of all the inefficient, non-processing time and energy spent moving information between non-volatile and volatile memory, computer engineers have long dreamt of combining the two. One way of doing this was mooted back in 1971, when UC Berkeley researcher Leon Chua conceptualized what he dubbed the "memristor," from memory + resistor. Chua saw the memristor as the "missing" fourth circuit element, to be added to the trinity of resistor, inductor and capacitor. The essential properties of a memristor, as Chua envisaged it, are responsiveness (an applied voltage changes its resistance almost instantaneously) and, crucially, non-volatility (it remembers its history even when the power is turned off). The memristor was thus the theoretical dream component engineers sought, combining and improving upon the best properties of existing memory: speed and non-volatility. Going one better, a memristor can hold intermediate values, not just the simple on and off states of conventional memory cells.
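For readers who want to see where the "missing fourth element" idea comes from, here is a rough sketch of Chua's bookkeeping, written in standard textbook notation (v for voltage, i for current, q for charge, φ for magnetic flux; the symbols are conventional, not anything defined in this column). Two of the pairings among those four quantities are simply definitions (i = dq/dt and v = dφ/dt), and three more already had circuit elements of their own; the memristor supplies the one that was left over, a link between charge and flux:

\[
\begin{aligned}
\text{resistor:} \quad & dv = R\,di\\
\text{capacitor:} \quad & dq = C\,dv\\
\text{inductor:} \quad & d\varphi = L\,di\\
\text{memristor:} \quad & d\varphi = M(q)\,dq
\end{aligned}
\]

Combining the memristor relation with the two definitions gives v = M(q) i: a resistance whose value depends on how much charge has already flowed through the device, which is exactly the "pipe that remembers" of the analogy below.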
As an analogy, think of a resistor as a pipe through which water flows (electric current); the higher the pressure (voltage) and larger the pipe (less resistance), the greater the flow. In contrast, a memristor is like a flexible pipe which expands in diameter when water flows one way (allowing greater flow) and shrinks when water flows the other way (less flow); when the water is turned off, the pipe retains its most recent diameter.
That analogy comes from Stan Williams, who in 2008 (37 years after Chua's insight) reported that his research group at Hewlett-Packard had built a working memristor. The devil was in the details, however, and progress from prototype to an actual memristor-based computer has been glacial. When HP announced, in June 2014, that its supercomputer "The Machine" would be based on a completely new architecture using memristor memory, it looked like a quantum leap in computer technology was just around the corner.
One year later, the company scaled back its ambitious vision. The Machine will still be built (yea!), but with DRAM memory (boo!). According to HP's CTO Martin Fink, "We way over-associated [The Machine] with the memristor." However, with IBM and other big-name manufacturers in the race, the promise of fast, cheap and powerful memristor-based computers is still very much alive. Just not quite yet.
Barry Evans' ([email protected]) three Field Notes anthologies are available at Eureka Books, Booklegger and Northtown Books.