The ripped and stained vinyl-covered seat of some lost color puffed a cloud of tobacco smoke when I sat down. It wasn't the worst seat in the room. One wall of the windowless basement room was a painted green chalkboard. Over forty years had gone by and it was still called the opium den. Someone had brought a box of doughnuts from the shop next door. Fifteen was their baker's dozen. I was lucky to have been in this place before the building was torn down. You could feel the presence of history.
In the late 1940s an acronym for how the future of electronics and telecommunications would be invented appeared in the Bell Telephone Laboratories technical journal.
RrdD
Big R, little r, little d, big D: pure research, applied research, investigatory/inventive development, and development at scale. It was supposed to be a path from a discovery in the laboratory to the next marvel you'd use anywhere from five to fifty years out. Parts of the Labs and the Western Electric manufacturing arm were organized to support that path. There was just one little problem. That's not how things happen in the real world. Bits of it work some of the time, but it shaped how people think, and it's still a useful framework. You can identify the pieces, but they may not appear where you think they will. I have a lot of experience in R, r and d. I'll spend most of this post talking about little r, as it's the one that often brings dramatic change. It's probably the one to study if you worry about change, but it's often impossible to see the early signals unless you're a participant. You need to be playing the game.
First a bit on "pure" research. Science as we know it really didn't develop until the early 19th century. One of the great inventions of that century was the modern natural science laboratory. It was a radical departure from anything people had seen and was best exemplified by the creation of the Cavendish Laboratory at Cambridge around 1870. A half century later the modern large government lab grew out of Ernest Lawrence's Radiation Laboratory at Berkeley, and the Bell Telephone Laboratories started to become a formidable industrial lab. All of these places supported pure research - research whose primary aim was understanding Nature better. It was assumed that some of these discoveries would prove useful, and that it was important to have people intimately familiar with them involved in finding practical uses. It was an important step, as pure research was becoming too expensive to justify as pure curiosity.
Pure is a bit tainted. A feature of science is when you look at something in a new way - a better telescope, microscope, particle accelerator - you often learn things beyond your wildest imagination. Technology drives instrumentation. Computers and exotic electronics and materials fuel better views of Nature so scientists find themselves inventing and perfecting new tools in addition to using commercial tools that came from engineering organizations.
There's an interesting story about the global positioning system. Shortly after the USSR orbited Sputnik in 1957 the US launched a frantic effort to track satellites. It enlisted thousands of amateur astronomers and high school students with a dirt-cheap azimuthal spotting scope. You'd record your latitude and longitude, find the satellite, read off a couple of numbers and note the time. The largest source of error was timing. It was recommended people use a shortwave radio and tune to the time standard from WWV in Fort Collins, Colorado, but even then human reflexes got in the way. A physicist on the team wondered what it would be like if these trackers had their own accurate clocks - atomic clocks. It wasn't practical, but he flipped the problem in his mind. What if the satellite had its own clock? If you had several in orbit you could calculate your position to great accuracy. That was the holy grail of navigation. There were a lot of details to work out and it wasn't technically possible at the time, but it resurfaced at the government research organization DARPA about fifteen years later when they realized you could do it for a few tens of billions of dollars. A bargain given the need.
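The geometry behind that flip is simple enough to sketch. Here's a minimal illustration - not anything from the original tracking program or a real receiver - of the idea: each satellite broadcasts its position and a precise clock reading, the receiver measures arrival times (pseudoranges), and a small least-squares iteration recovers both the receiver's position and its own clock error. The satellite positions, the receiver location, and the `solve_position` helper are all invented for the example.

```python
# Toy sketch of position-from-satellite-clocks. All numbers are made up.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton solve for receiver (x, y, z) and clock bias."""
    x = np.zeros(4)  # initial guess: Earth's center, zero clock bias
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = ranges + x[3]               # clock bias kept in meters
        residuals = pseudoranges - predicted
        # Jacobian: unit vectors from receiver toward satellites, plus a bias column
        J = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(sat_pos), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3] / C  # position in meters, clock bias in seconds

# Four invented satellites roughly 20,000 km up and a made-up receiver.
sats = np.array([[15e6, 10e6, 20e6], [-12e6, 18e6, 19e6],
                 [10e6, -15e6, 21e6], [-8e6, -10e6, 22e6]])
true_pos = np.array([1.2e6, -0.8e6, 6.0e6])
true_bias = 3e-3                                  # a 3 ms receiver clock error
pr = np.linalg.norm(sats - true_pos, axis=1) + true_bias * C

pos, bias = solve_position(sats, pr)
print(pos, bias)   # recovers the invented position and clock error
```

With satellites carrying the good clocks, the receiver only needs cheap electronics and a little arithmetic - which is why flipping the problem mattered.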
With that, on to little r:
A century ago a little polytechnic school in Pasadena created the premier astronomy program in the world. A few other brilliant people outside of astronomy were attracted and the school was transformed. They needed some technical goals to grow past pure science. The hottest game in the technical world was electrifying the country. It seemed like magic.. spin a bunch of copper wires in a magnetic field with steam or moving water and make a light bulb come on in a home a hundred miles away. For many it was more desirable than indoor plumbing, and universities, companies and politicians were falling over each other trying to make it happen. It seemed the natural problem for an electrical engineering program to work on. The folks at Caltech studied it and came to the conclusion that the fundamental problems had been solved - it wasn't worth the effort if you were trying to make a major impact. Instead they decided to work in more basic areas and trust that practical developments would take place. Not in one person or even one department, but by cleverly linking departments and letting people float around a bit.. They went long and deep with the blessings of their leadership.
It paid off. They developed an understanding of earthquakes, aerodynamics, and biochemistry. They developed a rich understanding of the emerging field of quantum mechanics and were able to make contributions in solid state physics - the key to semiconductors and computing. But the big steps didn't seem to follow naturally. In the late forties the transistor from Bell Labs was regarded as an exciting curiosity. Transistors were difficult to manufacture and didn't amplify as well as tubes, and early devices had a high failure rate. Bell Labs saw them as being useful for long distance telephony. A bit later on, as the solid state physics of silicon developed, it became clear that you could capture the energy of the sun with silicon, and the solar cell was invented. And some people at Caltech were finally positioned to help change the world. But let's go back to the opium den.
My first director was a young Member of the Technical Staff at Bell Labs in the early fifties. The Labs, as part of AT&T's agreement with the government to exist as a regulated monopoly, had to make most of its inventions available for very modest and sometimes non-existent licensing fees. He had become an expert on one of the early types of transistors and the tricks used to make it, and was in Allentown, Pennsylvania teaching a master class to two visitors in the opium den. One of the two at the table liked the doughnuts, but both were probably wishing for sushi. Akio Morita and a Japanese physicist had traveled from the Tokyo Telecommunications Engineering Corporation - a tiny Japanese tape recorder company. They wanted to make smaller tape recorders and were hoping the lower power demands of a transistorized amplifier would make a difference. They were also interested in the high frequency characteristics and the possible use in a radio receiver. They asked a lot of questions and managed to survive the room and its doughnuts. Tokyo Telecommunications Engineering Corporation later became known as Sony, and the transistor radio was a breakthrough product.
By the late fifties a core of R and r people left William Shockley's new semiconductor venture - itself an offshoot of Bell Labs work - to start Fairchild in the Bay Area. There were some connections to Berkeley and Stanford and, for more fundamental work, to Caltech, where some of these people had done their degrees. Early transistors were about the size of the eraser on a pencil with three wire leads coming out. They were hand assembled, and each had to be individually tested and then soldered into circuit boards, mostly for military and telecommunication applications. Then it was realized you could turn them into switches. Suddenly it made sense to build transistorized computers. Transistors were still too expensive, and the tube industry wrote white papers telling people not to waste their time and money. Few realized one door was closing and another opening.
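The leap from "switch" to "computer" is small enough to show in a few lines. This is a toy sketch, not anything historical: let a NAND gate stand in for the transistor-level switch arrangement, and every other logic element a digital computer needs - up through the seed of an arithmetic unit - can be composed from it.

```python
# Toy illustration: NAND as the switch-level primitive, everything else built from it.
def NAND(a, b):          # two switches in series pulling the output low
    return 0 if (a and b) else 1

def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):    # one bit of addition: (sum, carry)
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Once that composition was understood, the only real questions left were the ones the post turns to next: cost and reliability.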
Radical change seems to come out of left field. There's rarely an "aha!" moment. People who have done the most impactful work will tell you they've been frustrated working at something for a long time, meeting with failure after failure. They end up trying something different and wonder why it seems to be working differently than expected. There are still failures, but also a series of promising developments. Some of these are little ahas!, but the final result is often anticlimactic.
There was a twenty year period when electrification technology went from almost zero to having almost all of the basics sorted out. There was still an enormous amount of work and investment required to bring electricity to America, and large areas of the Earth are still in the dark today, but during this amazingly intense period almost every good engineer seemed focused on the problem, until a certain maturity was achieved and progress slowed dramatically. About the time electrification was slowing, radio lit up. Ten to fifteen years and most of the basics had been worked out. We still use designs that date back to the twenties. Television followed a similar path during the 30s. The North American NTSC standard dates to 1941. Post-WWII advances like color were much less radical in scale than the early developments. And the earliest precursor to the Internet sprang up in the early 70s. By 1985 most of the necessary technical leaps had been made.
After each one of these hot engineering periods people wonder if that's the end of progress. It never is. To drop in a bit deeper, consider the solid state electronics revolution that began at the Bell Telephone Laboratories.
The cold war created strong demands for radar and computers, as well as general miniaturization. Money was flowing and engineers were focused mostly on dealing with the problems of tube-based computers (there's a separate story on computer architecture that I'll leave out for now). Exotic materials that made tubes last long enough for a computer to work for a day evolved into computers that could have sections die and still keep working. It had been discovered that transistors could be switches, and from that you could build all of the logic elements a digital computer required. The problem was cost and reliability. IBM started making some of the first practical solid state computers, but there were a series of whack-a-mole class problems.
Robert Noyce was also frustrated by these problems. He came up with the idea of connecting the devices on the silicon itself with metallic runners created as part of the manufacturing process. He added a few passive components and had invented something he called the integrated circuit. It was completely out of left field, but it helped solve some of those pesky connection problems. Suddenly an entirely new direction had been found. The focus shifted to increasing the density of devices on the chip.
Carver Mead started wondering, assuming you had the manufacturing technique to make things arbitrarily small, what the practical limit of "micro" was. As part of what started as a back-of-the-envelope exercise he found factors conspiring in such a way that it should be possible to make working complex devices near the region where quantum mechanics becomes an issue - you should be able to make working ICs more than a million times denser than the state of the art at the time. Gordon Moore, who was also working in this area, observed the progress to date and, understanding Carver's clear blue sky, wrote a document that became known as Moore's Law. It isn't a natural law, like the laws of thermodynamics or gravity, but rather a permission. It provided people with the belief that fundamental roadblocks didn't exist - or at least wouldn't for decades. You might only see a generation - a year and a half or so - ahead, but if you worked hard you'd get there. Then there would be a new set of hurdles you could set your eyes on and conquer - but it would be possible for someone to take them down. It was a permission slip, and probably the most optimistic document in all of engineering and technology.
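The arithmetic behind that permission slip is short. A rough back-of-the-envelope reading - my numbers, not Moore's or Mead's - is that a factor of a million in density is about twenty doublings, and at one doubling every eighteen months that's roughly three decades of generations to look forward to.

```python
# Illustrative arithmetic only: how far does a million-fold headroom stretch
# at an assumed 18-month doubling cadence?
import math

doubling_months = 18
headroom = 1_000_000                      # Mead's rough "million times denser"
doublings = math.log2(headroom)           # about 20 doublings
years = doublings * doubling_months / 12
print(f"{doublings:.1f} doublings ≈ {years:.0f} years of 18-month generations")
```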
There isn't any isolated pure research in solid state electronics. The tools and money are in advanced development and production. It is where that part of Nature is most easily studied. Researchers work in these areas informed by the fast moving problems. It's somewhat different from the natural sciences and you ignore manufacturing technology at your peril (arguably Intel has fallen victim as have many others). Now back to the late sixties.
Laying out dozens and then hundreds of logic elements was becoming a difficult task. The design process had become a bottleneck for logic-based ICs, and people were inventing frustrating solutions that weren't solutions. Whack-a-mole. The herd works very hard, but some people get frustrated and realize there's something wrong with the zeitgeist. A few had the freedom - or managed to create it for themselves - to wander off in other directions. There was failure, but enough little successes and hints to keep going. Federico Faggin had the idea to put enough of an array of logic elements on a chip that he could make it a programmable general purpose computer. His four-bit 4004 microprocessor, with a few thousand transistors, ignited a revolution.
Microprocessors were complex beasts and enormously difficult to design. Carver Mead had been working on improving design languages and ended up creating a new direction with his silicon compiler. It was now possible to design very complex chips in a language designers could work with and debug - not unlike the compilers and debugging tools used in software development, only rather than an executable program the output was a "tape out" that would let an electron beam writing tool create the set of photomasks used to manufacture the chips. It became possible for people to design chips without owning a manufacturing plant.
Even now there's a new revolution underway. People were building supercomputers for scientific tasks in the 80s and 90s, as they had for the two decades before, but it was felt they'd also be needed for perceptive systems and the next generation of artificial intelligence. Conventional digital computers aren't exactly well matched to these fuzzy perceptive tasks. A few people had been playing with neural networks, and a couple of architectural advances had been made along with the realization that the simple processors used in gaming computers could be used as building blocks. This new class of fuzzy computing is still being sorted out, but it works with a tiny fraction of the energy and cost required for equivalent work on a supercomputer. It even exists in smartphones.
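Why those gaming-class processors fit so well comes down to the shape of the work. A minimal sketch, with invented layer sizes and random weights: the core of a neural network layer is a big matrix multiply followed by a simple nonlinearity, which is exactly the dense, repetitive arithmetic graphics chips were built to do.

```python
# Toy forward pass through two dense layers. Shapes and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 784))        # a batch of 64 flattened "images"
W1 = rng.standard_normal((784, 256)) * 0.01
W2 = rng.standard_normal((256, 10)) * 0.01

def relu(z):
    return np.maximum(z, 0.0)             # the simple nonlinearity

hidden = relu(x @ W1)                     # layer 1: matrix multiply + nonlinearity
logits = hidden @ W2                      # layer 2: ten class scores per input
print(logits.shape)                       # (64, 10)
```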
I don't think we're at the beginning of that exciting twenty year period for quantum computing yet - but it's probably coming.
So that's what would have been called little r research .. the process that makes dramatic pivots in technology possible. You can't create them with market research .. that approach is exactly backwards and there are tragic examples. It's remarkable that you find yourself with something new that you intended to solve one problem, and suddenly a thousand solutions to other problems - problems you couldn't have dreamt of - become available. This is a great example of why trying to predict the detail of future technologies doesn't work.
What happens in what was once called little d .. advanced development .. is also fascinating. Only a few organizations have managed to pull it off more than a couple of times. I'll write about it somewhere down the road.
x marks the spot
That's a really stupid idea, Steve.
It was somewhere around 1992, and it wasn't the first or last time I heard something like it. In fact it's part of the process of doing science - I was just applying it somewhere else. The PR people and business units of AT&T were looking for futuristic communications ideas in the labs that could change its users' lives as the company moved beyond telephony.
I didn't start out as a technologist and still wonder why some people think of me that way. After some post-doc work in physics I joined the Bell Telephone Laboratories when it was still the best place on the planet for certain types. A wide range of technologies were not just represented, but were invented there. My work was a bit to the side of the hardcore technologists and developers. My background going in was narrow - I'd get lost in many of the talks. But if you're curious, read and ask questions, anyone can learn. They gave me first-hand tours explaining their ideas in languages I could understand and, if they couldn't, they'd find an interpreter. That was considered part of my job. You could see the trajectory of some of the technologies - threads moving forward in time and in some space. Threads that had a history that allowed you to compare your projections with what went on before. They had characteristics like cost, rate of development and dependencies on other technologies. It became a game to look for threads that could weave together into a history of the future. And every now and again a fabric began to emerge.
The idea that drew the rejection I began with was the most beautiful and poetic of the ones I wrote down.
Maybe in fifteen or twenty years you'll hold a piece of glass in your hand and watch a friend's face on the other side of the world as you wish them happy birthday. The glass will be wireless and buttonless and the service will be too cheap to meter.
It was a weaving of wireless, battery technology, displays, silicon, image compression, and some things that were happening with user interfaces. The need was there. I had learned it paid to create an experience people craved. AT&T had been after the videophone since 1928 with numerous failed attempts, but they kept hammering away at it. It was clear that video over the Internet was possible - heck, I was doing it then - and the other curves looked attractive. The weaving was there, but there were these pesky questions.. How could you get that much bandwidth over a cellular connection? Who would build out a wireless network?
When, How, Why, Who ...?
These were all temporal questions that deserved attention if you were looking into the future. You just needed an excuse, and the company arguably gave me too much freedom. There were a few ideas they liked. I started thinking about that distant point in technology space.
In math and physics when you're solving for something you often call it X.
Five years later I was part of a group specializing in computer-mediated human interactions. About a dozen really bright people who knew things I had never thought about. People with patience and with curiosity beyond their own backgrounds. I brought my own background to the group. We did a few neat things.
My favorite was a project a few of you were involved with - Air Graffiti. I won't go into it deeply other than to say it grew from the realization that phones would become location-aware computers with cameras. We hacked together a few computational bricks that gave us a rather clear view into a few things that happened ten to fifteen years later. We thought the basic technology of a location-aware computerphone with a camera would start to mature around 2005 - about the time cellular networks could begin to support links of a few tenths of a megabit per second. That'd be a start. There were all kinds of objections, but we learned the trick was to not say much in demos and just give someone the test brick and let them imagine and invent on their own. These were the most compelling demos I had seen, and the folks in the group were just amazing... Wayzen, Jessica, Gregg, Dave, Nancy and Steve (yes - we had two Steves). We all developed a healthy respect for user interface and user experience, and we had dreams of special hardware. I began to think Apple was going to be the future, but had no idea it would be a phone ..
The fundamental technologies are silicon and radio. The 1992 piece of glass looked very doable computationally. The requirements were lower than a first generation iPhone and it was just a simple Moore's Law extension. Batteries were more difficult. I was sure GPS would get there with some military funding. The camera and display were certainly possible - it was just a question of picture quality. The big IF was the buildout of the wireless network. I had been playing with folks building community networks in isolated areas in the West. The technology was there .. and the need was there. It was a question of buildout and price. Several of us thought it would happen, but we had no idea what the path and timetable would be.
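The "simple Moore's Law extension" is a one-liner of arithmetic. Assuming the usual eighteen-month doubling (my assumption for illustration, not a figure from the 1992 notes), the fifteen years between the sketch and the first iPhone buy roughly ten doublings - about a thousandfold increase in transistor budget to work with.

```python
# Illustrative only: extrapolating from 1992 to 2007 at an assumed 18-month doubling.
years = 2007 - 1992
doublings = years / 1.5
print(f"{doublings:.0f} doublings -> ~{2**doublings:,.0f}x more transistors to spend")
```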
But the fabric continues to move with new threads weaving their way in. That's what makes it so fascinating and keeps it so rich.
Oh yeah .. and hindsight is at least 20:20 .. I get a lot of things wrong too. But that's how you learn.