The ripped and stained vinyl-covered seat of some lost color puffed a cloud of tobacco smoke when I sat down. It wasn't the worst seat in the room. One wall of the windowless basement room was a painted green chalkboard. Over forty years had gone by and it was still called the opium den. Someone had brought a box of doughnuts from the shop next door. Fifteen was their baker's dozen. I was lucky to have been in this place before the building was torn down. You could feel the presence of history.
In the late 1940s, at the Bell Telephone Laboratories, an acronym for inventing the future of electronics and telecommunications appeared in the technical journal:
RrdD
Big R, little r, little d, big D: pure research, applied research, investigatory/inventive development, and development at scale. It was supposed to be a path from a discovery in the laboratory to the next marvel you'd use anywhere from five to fifty years out. Parts of the Labs and the Western Electric manufacturing arm were organized to support that path. There was just one little problem. That's not how things happen in the real world. Some bits of it work some of the time, but it influenced how people think. That said, it's still a useful framework. You can identify the pieces, but they may not appear where you think they will. I have a lot of experience in R, r and d. I'll spend most of this post talking about little r, as it's the one that often brings dramatic change. It's probably the one to study if you worry about change, but it's often impossible to see the early signals unless you're a participant. You need to be playing the game.
First a bit on "pure" research. Science as we know it really didn't develop until the early 19th century. One of the great inventions of that century was the modern natural science laboratory. It was a radical departure from anything people had seen and was best exemplified by the creation of the Cavendish Laboratory at Cambridge around 1870. A half century later the modern large government lab grew out of Ernest Lawrence's Radiation Laboratory at Berkeley, and the Bell Telephone Laboratories started to become a formidable industrial lab. All of these places supported pure research - research whose primary aim was understanding Nature better. It was assumed that some of these discoveries would be useful, and it was important to have people intimately familiar with the discoveries involved in finding practical uses. It was an important step, as pure research was becoming too expensive to justify as pure curiosity.
Pure is a bit tainted. A feature of science is that when you look at something in a new way - a better telescope, microscope, particle accelerator - you often learn things beyond your wildest imagination. Technology drives instrumentation. Computers, exotic electronics and materials fuel better views of Nature, so scientists find themselves inventing and perfecting new tools in addition to using commercial tools that came from engineering organizations.
There's an interesting story about the global positioning system. Shortly after the USSR orbited Sputnik in 1957, the US launched a frantic effort to track satellites. It enlisted thousands of amateur astronomers and high school students with a dirt-cheap azimuthal spotting scope. You'd record your latitude and longitude, find the satellite, read off a couple of numbers and note the time. The largest source of error was timing. People were told to use a shortwave radio tuned to the WWV time standard broadcast, but even then human reflexes got in the way. A physicist on the team wondered what it would be like if these trackers had their own accurate clocks - atomic clocks. It wasn't practical, but he flipped the problem in his mind. What if the satellite had its own clock? If you had several in orbit, you could calculate your position to great accuracy. That was the holy grail of navigation. There were a lot of details to work out and it wasn't technically possible at the time, but it resurfaced at the government research organization DARPA about fifteen years later when they realized you could do it for a few tens of billions of dollars. A bargain given the need.
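To make the flipped idea concrete, here's a minimal sketch of the geometry: if each satellite carries an accurate clock and broadcasts its position and time of transmission, a receiver can recover its own position, plus its own clock error, from the measured travel times. The satellite positions, the `solve_position` helper and the solver choice below are all made up for illustration - this is not a real GPS implementation.

```python
# Illustrative only: recover a receiver position (and clock bias) from
# pseudoranges to satellites with known positions, via Gauss-Newton.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Solve for receiver x, y, z and clock bias b (all in meters)."""
    x = np.zeros(4)                      # start at Earth's center, zero bias
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pseudoranges - (ranges + x[3])
        # Jacobian: unit vectors from satellites toward the receiver,
        # plus a column of ones for the clock-bias term.
        J = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(sat_pos), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

# Four hypothetical satellites roughly 20,000 km up, receiver on the surface.
sats = np.array([[15e6, 10e6, 18e6], [-12e6, 14e6, 17e6],
                 [10e6, -15e6, 16e6], [-9e6, -11e6, 19e6]])
truth = np.array([1.2e6, 2.3e6, 6.0e6])
clock_bias = 2.5e-3 * C                  # a 2.5 ms receiver clock error, in meters
pr = np.linalg.norm(sats - truth, axis=1) + clock_bias

print(solve_position(sats, pr))          # recovers the position and the bias
```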
With that, on to little r:
A century ago a little polytechnic school in Pasadena created the premier astronomy program in the world. A few other brilliant people outside of astronomy were attracted and the school was transformed. They needed some technical goals to grow past pure science. The hottest game in the technical world was electrifying the country. It seemed like magic.. spin a bunch of copper wires in a magnetic field with steam or moving water and make a light bulb come on in a home a hundred miles away. For many it was more desirable than indoor plumbing, and universities, companies and politicians were falling over each other trying to make it happen. It seemed the natural problem for an electrical engineering program to work on. The folks at Caltech studied it and came to the conclusion that the fundamental problems had been solved - it wasn't worth the effort if you were trying to make a major impact. Rather, they decided to work in more basic areas and trust that practical developments would take place. Not in one person or even one department, but by cleverly linking departments and having people float around a bit. They went long and deep with the blessings of their leadership.
It paid off. They developed an understanding of earthquakes, aerodynamics, and biochemistry. They developed a rich understanding of the emerging field of quantum mechanics and were able to make contributions in solid state physics - the key to semiconductors and computing. But the big steps didn't seem to follow naturally. In the late forties the transistor from Bell Labs was regarded as an exciting curiosity. Transistors were difficult to manufacture and didn't amplify as well as tubes, and early devices had a high failure rate. Bell Labs saw them as being useful for long distance telephony. A bit later on, as the solid state physics of silicon developed, it became clear that you could capture the energy of the sun with silicon, and the solar cell was invented. And some people at Caltech were finally positioned to help change the world. But let's go back to the opium den.
My first director was a young Member of the Technical Staff at Bell Labs in the early fifties. The Labs, as part of AT&T's agreement with the government to exist as a regulated monopoly, had to make most of its inventions available for very modest and sometimes non-existent licensing fees. He had become an expert on one of the early types of transistors and the tricks used to make it, and was in Allentown, Pennsylvania teaching a master class to two visitors in the opium den. One of the two at the table liked the doughnuts, but both were probably wishing for sushi. Akio Morita and a Japanese physicist had traveled from the Tokyo Telecommunications Engineering Corporation - a tiny Japanese tape recorder company. They wanted to make smaller tape recorders and were hoping the lower power demands of a transistorized amplifier would make a difference. They were also interested in the high frequency characteristics and the possible use in a radio receiver. They asked a lot of questions and managed to survive the room and its doughnuts. Tokyo Telecommunications Engineering Corporation later became known as Sony, and its transistor radio was a breakthrough product.
By the late fifties a core of R and r people with roots in Bell Labs had left to start Fairchild in the Bay Area. There were some connections to Berkeley and Stanford and, for more fundamental work, to Caltech, where some of these people had done their degrees. Early transistors were about the size of the eraser on a pencil with three wire leads coming out. They were hand assembled and had to be individually tested and then soldered into circuit boards, mostly for military and telecommunication applications. Until it was realized you could turn them into switches. Suddenly it made sense to build transistorized computers. Transistors were still too expensive, and the tube industry wrote white papers telling people not to waste their time and money. Few realized one door was closing and another opening.
Radical change seems to come out of left field. There's rarely an "aha!" moment. People who have done the most impactful work will tell you they'd been frustrated, working at something for a long time and meeting with failure after failure. They end up trying something different and wonder why it seems to be working differently than expected. There are still failures, but also a series of promising developments. Some of these are little ahas!, but the final result is often anticlimactic.
There was a twenty year period when electrification technology went from almost zero to having almost all of the basics sorted out. There was still an enormous amount of work and investment required to bring electricity to America, and large areas of the Earth are still in the dark today, but during this amazingly intense period almost every good engineer seemed focused on the problem, until a certain maturity was achieved and progress slowed dramatically. About the time electrification was slowing, radio lit up. Ten to fifteen years and most of the basics had been worked out. We still use designs that date back to the twenties. Television followed a similar path during the 30s. The North American NTSC standard dates to 1941. Post WWII advances like color were much less radical in scale than the early developments. And the earliest precursor to the Internet sprang up in the early 70s. By 1985 most of the necessary technical leaps had been made.
After each one of these hot engineering periods people wonder if that's the end of progress. It never is. To drop in a bit deeper, consider the solid state electronics revolution that began at the Bell Telephone Laboratories.
The cold war created strong demands for radar and computers, as well as general miniaturization. Money was flowing and engineers were focused mostly on dealing with the problems of tube-based computers (there's a separate story on computer architecture that I'll leave out for now). Exotic materials in the tubes to make them last long enough that a computer might work for a day evolved into computers that could have sections die and still keep working. It had been discovered that transistors could be switches, and from that you could build all of the logic elements a digital computer required. The problem was cost and reliability. IBM started making some of the first practical solid state computers, but there was a series of whack-a-mole class problems.
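To see why the switch observation matters, here's a minimal sketch: treat a transistor as nothing more than an on/off switch, build a NAND gate out of it, and every other logic element - inverters, AND, OR, adders - follows. The Python below is purely illustrative and has nothing to do with how any real circuit was engineered.

```python
# Illustrative only: a transistor modeled as a switch is enough to build
# a NAND gate, and NAND alone generates the rest of digital logic.
def nand(a: bool, b: bool) -> bool:
    # Two "switches" in series pull the output low only when both are on.
    return not (a and b)

def inv(a):      return nand(a, a)
def and_(a, b):  return inv(nand(a, b))
def or_(a, b):   return nand(inv(a), inv(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Sum and carry bits - the seed of an arithmetic unit."""
    return xor(a, b), and_(a, b)

for a in (False, True):
    for b in (False, True):
        print(a, b, half_adder(a, b))
```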
Robert Noyce was also frustrated by these problems. He came up with the idea of connecting the transistors on the silicon with metallic runners that were created as part of the manufacturing process. He added a few passive components and had invented something he called the integrated circuit. It was completely out of left field, but it helped solve some of those pesky connection problems. Suddenly an entirely new direction had been found. The focus shifted to increasing the density of devices on the chip.
Carver Mead started wondering, assuming you had the manufacturing technique to make things arbitrarily small, what the practical limit of "micro" was. As part of what started as a back-of-the-envelope exercise, he found factors conspiring in such a way that it should be possible to make working complex devices near the region where quantum mechanics would become an issue - you should be able to make working ICs more than a million times denser than the state of the art at the time. Gordon Moore, who was also working in this area, observed the progress to date and, understanding Carver's clear blue sky, wrote a document that became known as Moore's Law. It isn't a natural law, like the laws of thermodynamics or gravity, but rather a permission. It provided people with the belief that fundamental road blocks didn't exist - or at least wouldn't for decades. You might only see a generation - a year and a half or so - ahead, but if you worked hard you'd get there. Then there would be a new set of hurdles you could set your eyes on and conquer - but it would be possible for someone to take them down. It was a permission slip and probably the most optimistic document in all of engineering and technology.
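A quick back-of-the-envelope check on those two numbers - a million-fold density headroom and a doubling roughly every year and a half - shows why the permission slip read as decades of runway rather than a hard wall. The figures below are illustrative only.

```python
# Illustrative arithmetic: how many doublings, and how many years, does a
# million-fold density improvement represent at ~18 months per doubling?
import math

doubling_period_years = 1.5
target_improvement = 1_000_000

doublings = math.log2(target_improvement)   # ~19.9 doublings
years = doublings * doubling_period_years   # ~30 years of runway

print(f"{doublings:.1f} doublings, about {years:.0f} years")
```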
There isn't any isolated pure research in solid state electronics. The tools and money are in advanced development and production, which is where that part of Nature is most easily studied. Researchers work in these areas informed by the fast-moving problems. It's somewhat different from the natural sciences, and you ignore manufacturing technology at your peril (arguably Intel has fallen victim, as have many others). Now back to the late sixties.
Laying out dozens and then hundreds of logic elements was becoming a difficult task. The design process had become a bottleneck for logic-based ICs, and people were inventing frustrating solutions that weren't solutions. Whack-a-mole. The herd works very hard, but some people get frustrated and realize there's something wrong with the zeitgeist. A few had the freedom - or managed to create it for themselves - and wandered off in other directions. There were failures, but enough little successes and hints to keep going. Federico Faggin had the idea to put enough of an array of logic elements on a chip that he could make it a programmable general purpose computer. His four-bit 4004 microprocessor, with a few thousand transistors, ignited a revolution.
Microprocessors were complex beasts and enormously difficult to design. Carver Mead had been working on improving design languages and ended up creating a new direction with his silicon compiler. It was now possible to design very complex chips in a language designers could work with and debug - not unlike the compilers and debugging tools used in software development, only rather than an executable program the output was a "tape out" that would let an electron beam writing tool create the set of photomasks used to manufacture the chips. It became possible for people to design chips without owning a manufacturing plant.
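The analogy to a software compiler is easier to see with a toy example. The sketch below is plain Python with invented names and has nothing to do with any real design tool: it describes a circuit at a high level and "compiles" it down to a flat netlist, the kind of intermediate artifact that, in a real flow, eventually drives mask making.

```python
# Illustrative only: a "design language" (ordinary function calls) compiled
# down to a flat netlist of gates and wires.
from itertools import count

_net_ids = count()
def new_net():
    return f"n{next(_net_ids)}"

netlist = []   # (gate_type, input_nets, output_net) records

def gate(kind, *inputs):
    out = new_net()
    netlist.append((kind, inputs, out))
    return out

# High-level description: compose gates the way you'd compose functions.
def half_adder(a, b):
    return gate("XOR", a, b), gate("AND", a, b)

s, c = half_adder("a", "b")
for kind, ins, out in netlist:
    print(f"{kind}({', '.join(ins)}) -> {out}")
```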
Even now there's a new revolution underway. People were building supercomputers in the 80s and 90s, as they had for two decades before, for scientific tasks, but it was felt they'd be needed for perceptive systems and the next generation of artificial intelligence. Conventional digital computers aren't exactly well matched to these fuzzy perceptive tasks. A few people had been playing with neural networks, a couple of architectural advances had been made, and there was the realization that the simple processors used in gaming computers could be used as building blocks. This new class of fuzzy computing is still being sorted out, but it works with a tiny fraction of the energy and cost required for equivalent work on a supercomputer. It even exists in smartphones.
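Why do gaming-class processors fit? Because the heart of a neural network layer is a large matrix multiply followed by a simple nonlinearity - exactly the massively parallel arithmetic those chips were built for. A minimal sketch follows, with made-up sizes and random weights standing in for a trained model.

```python
# Illustrative only: a tiny two-layer network forward pass. The whole thing
# is matrix multiplies plus an elementwise nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    return np.maximum(0.0, x @ W + b)      # matrix multiply + ReLU

# 784 inputs (say, a 28x28 image), 256 hidden units, 10 output scores.
W1, b1 = rng.normal(0, 0.05, (784, 256)), np.zeros(256)
W2, b2 = rng.normal(0, 0.05, (256, 10)), np.zeros(10)

batch = rng.normal(0, 1, (32, 784))        # a batch of 32 fake images
scores = layer(batch, W1, b1) @ W2 + b2    # forward pass
print(scores.shape)                        # (32, 10)
```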
I don't think we're at the beginning of that exciting twenty year period for quantum computing yet - but it's probably coming.
So that's what would have been called little r research .. the process that makes dramatic pivots in technology possible. You can't create them with market research .. that approach is exactly backwards and there are tragic examples. It's remarkable: you find yourself with something new that you intended to solve one problem with, and suddenly a thousand solutions to other problems - problems you couldn't have dreamt of - become available. This is a great example of why trying to predict the details of future technologies doesn't work.
What happens in what was once called little d .. advanced development .. is also fascinating. Only a few organizations have managed to pull it off more than a couple of times. I'll write about it somewhere down the road.