In this morning's surfing I came across a price list for Apple I and IIs c.1977. Times have changed - particularly when one considers what a dollar was worth in 1977.
About that time I was using the CDC 7600/6600 at Brookhaven National Laboratory. These were the extremely expensive supercomputers of their day, and beautifully designed. It struck me that I have some old emails with a history of computing at BNL, written by Kurt Fuchel in 1995 and 1996. I note that many Nobel Prize-winning efforts were supported by the computational infrastructure at BNL.
So pour yourself a cup of coffee or hot chocolate and enjoy a bit of history (at least for the techies out there).
A Personal History of Computing at BNL
This series of brief articles is not an authoritative History of Computing. It is a fairly subjective view of my experiences in the computing profession at BNL over the past 35 years. Its goal is not a recounting of facts and figures, but an attempt to convey the computing milieu of each period.
Part 1 - Pre-IBM
In 1960, fresh out of college, I started work at BNL in the Applied Mathematics Division, which was then part of the Director's Office, and indeed resided in the northwest quadrant of Building 460.
The Division was small in those days - probably no more than a dozen people - and was divided into three groups. One consisted of a handful of mathematicians, thus justifying the name of the Division (there was no such thing as `computer science' in those days). Another was a few people who prepared and submitted card decks for the nightly shuttle to NYU, where the newest and greatest machine, an IBM 704 (soon replaced by a 7090), processed them in time to return the voluminous paper output to BNL in the morning.
The largest part of the Division was dedicated to preparing for production the wondrous machine which had been designed and built by our staff. The machine was called Merlin. It was based on Los Alamos' Maniac computer, but had undergone many improvements and refinements in design. At that time it was believed that a small group of scientists, engineers and technicians could build something superior to what the commercial companies offered, and at a fraction of the cost. After all, not long before, IBM had been merely a tabulating-machine company, and its equipment was deemed inadequate for scientific usage. The Lab did own an IBM 610 and a 650, but they were used only by Payroll. The 650's storage was, if memory serves, a magnetic drum with 50 storage locations, while the 610 operated via electromechanical relays.
Beyond the belief that `we can do better', the motivation for Merlin was economic. At the time, magnetic core storage cost 20 cents a bit; the Williams-tube electrostatic memory which Merlin used cost only a nickel a bit. The bits were stored as points on the display face of the tube and had to be refreshed every 20 milliseconds or they would spread into neighboring areas. The unit of storage for Merlin was a 48-bit word; thus it used 53 memory tubes, the additional five allowing the correction of a single-bit parity error. When the machine was unable to refresh in the required time, or multiple parity errors were detected, a TILT light came on and the machine halted. The basic memory consisted of 4096 words and functioned with reasonable reliability (for the period). Moreover, a flick of a switch converted the memory to 8192 totally unreliable words. The entire circuitry of the machine used vacuum tubes, which required enormous cooling equipment; the time between failures was, at best, about an hour.
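Merlin's actual checking circuitry is beyond this account, but the textbook way a handful of parity bits can locate (and hence correct) a single flipped bit is a Hamming code. Here is a minimal sketch in C on a toy 4-bit word - the Hamming(7,4) code - purely to illustrate the principle; the same idea scales to wide words like Merlin's:

    #include <stdio.h>
    #include <stdint.h>

    /* Hamming(7,4): parity bits sit at positions 1, 2 and 4 and each
       covers the positions whose binary index includes it; a nonzero
       syndrome is the 1-based position of the single flipped bit. */
    static uint8_t encode(uint8_t data)          /* 4 data bits in */
    {
        uint8_t d1 = (data >> 0) & 1, d2 = (data >> 1) & 1,
                d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;               /* covers 3,5,7 */
        uint8_t p2 = d1 ^ d3 ^ d4;               /* covers 3,6,7 */
        uint8_t p3 = d2 ^ d3 ^ d4;               /* covers 5,6,7 */
        /* bit i of the result is position i+1: p1 p2 d1 p3 d2 d3 d4 */
        return (uint8_t)(p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
                         (d2 << 4) | (d3 << 5) | (d4 << 6));
    }

    static uint8_t correct(uint8_t w)
    {
        uint8_t syndrome = 0;
        for (int p = 0; p < 3; p++) {            /* checks at 1, 2, 4 */
            uint8_t parity = 0;
            for (int pos = 1; pos <= 7; pos++)
                if (pos & (1 << p))
                    parity ^= (w >> (pos - 1)) & 1;
            if (parity)
                syndrome |= (uint8_t)(1 << p);
        }
        if (syndrome)                            /* flip the culprit */
            w ^= (uint8_t)(1 << (syndrome - 1));
        return w;
    }

    int main(void)
    {
        uint8_t sent = encode(0xB);              /* data bits 1011 */
        uint8_t hit  = sent ^ (1 << 4);          /* single-bit error */
        printf("sent %02X, received %02X, corrected %02X\n",
               sent, hit, correct(hit));
        return 0;
    }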
My first task was to learn the assembly language for Merlin - again home-grown - and then develop the standard mathematical libraries. This work enchanted me, the more so as I had spent the previous four years working for a surveyor, where my days were spent computing trigonometric functions using a three-inch-thick book of ten-place logarithms (created as one of FDR's WPA projects) and a Monroe calculator. Obtaining the value of a sine function in less than a second seemed miraculous to me, especially as the sine was computed, not merely looked up.
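For the curious, such sine routines worked essentially by range reduction followed by a truncated power series. This is not Merlin's library code (that is long gone), just an illustrative C sketch of the technique:

    #include <stdio.h>

    /* Range-reduce, then sum the alternating Taylor series
       sin x = x - x^3/3! + x^5/5! - ...                     */
    static double sine(double x)
    {
        const double PI = 3.14159265358979323846;
        while (x >  PI) x -= 2.0 * PI;           /* reduce to [-pi,pi] */
        while (x < -PI) x += 2.0 * PI;
        double term = x, sum = x;
        for (int n = 1; n <= 9; n++) {           /* a few terms suffice */
            term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
            sum  += term;
        }
        return sum;
    }

    int main(void)
    {
        printf("sine(1.0) = %.9f\n", sine(1.0)); /* ~0.841470985 */
        return 0;
    }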
The Assembly program was punched onto paper tape and fed into the machine. The binary output was also on paper tape - yards and yards of it. Sometimes the automatic spoolers took off and tore the tape; fortunately, custom-designed splicers were available. Later on, one-inch magnetic tape drives were added to Merlin so that intermediate results, binaries and data could be stored on a less fragile medium than paper.
Merlin had several novel features. Each word included two `tag' bits, which could be tested by the hardware or by the program; this greatly facilitated list processing, as the last item in a list could simply be tagged. The console was a thing of beauty with its neat rows of 48 switches; instructions could be entered this way. Four T-registers for fast arithmetic operations were displayed. On Visitors Day (or off hours) a program which raced these registers against each other was popular; rumor had it that occasionally the betting was brisk. Monte Carlo calculations, with their use of random number generators, were in fact the most common application of the machine.
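For readers who have never met tag bits: marking the last word of a list means a traversal needs no separate length count or terminating null pointer. A rough C analogue (the tag here is an ordinary struct field rather than a hardware bit):

    #include <stdio.h>

    /* Merlin tagged the machine word itself; in C we can only mimic
       the idea with an explicit flag on each list node. */
    struct node {
        int value;
        int tagged;               /* set on the last item, like the tag bit */
        struct node *next;
    };

    static void walk(struct node *p)
    {
        for (;;) {                       /* no null test needed:       */
            printf("%d\n", p->value);    /* the tag ends the traversal */
            if (p->tagged) break;
            p = p->next;
        }
    }

    int main(void)
    {
        struct node c = {3, 1, 0}, b = {2, 0, &c}, a = {1, 0, &b};
        walk(&a);
        return 0;
    }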
In the absence of formal computer science training, the people in the computing part of the Division had an assortment of backgrounds. Perhaps the best programmer (and my mentor) was a former undertaker; others included a jeweler, a philosophy major who put himself through college playing bridge, a teacher and a surveyor (myself).
In one way these were the good old days at least for programmers: when your program bombed, there was an even chance that the problem really was hardware.
I am indebted to Les Lawrence and Susan Sevian for input to this section.
Part 2 - The First IBM Era
In the early 60s, IBM rapidly grew to be the dominant force in the computing field, although initially, it seemed to follow two parallel paths: the commercial and the scientific.
For BNL, the scientific path started with the IBM 704 at NYU, rapidly moving to our own 7090, upgraded to a 7094 the following year. These were machines with 32,768 36-bit words of individually addressable memory (octal 00000-77777), an ACCumulator register and an MQ (Multiplier/Quotient) register, and three Index Registers, increased to seven on the 7094.
The 36-bit word allowed 27-bit accuracy for scientific calculation, or about 7 significant digits, or held six characters, using the 6-bit code (uppercase only) of that period. There are still people at BNL who always input text fields in CAPITALS.
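To make the six-characters-per-word arithmetic concrete, here is a small C sketch that packs and unpacks six 6-bit codes in one 36-bit quantity (the codes are arbitrary; the real BCD character tables varied):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint8_t code[6] = {1, 2, 3, 4, 5, 6};   /* six 6-bit codes */
        uint64_t word = 0;
        for (int i = 0; i < 6; i++)
            word = (word << 6) | (code[i] & 077);     /* 36 bits total */
        printf("word = %012llo (octal)\n", (unsigned long long)word);
        for (int i = 5; i >= 0; i--)                  /* unpack again */
            printf("%o ", (unsigned)((word >> (6 * i)) & 077));
        printf("\n");
        return 0;
    }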
These machines ran in mono-program, batch mode and required off-line machines, specifically IBM 1401s, to process input and output. Users submitted their decks to the dispatcher, who accumulated the decks in large trays until a sufficient number was available. The cards were then fed into the card reader on the 1401, which copied them to magnetic tape. The tape was mounted on one of the drives on the 7090/94, the job processed, and the output (or more often diagnostics and error messages) went onto another tape. In due course this tape was carried back to the 1401, which printed the output on one of its 132-column printers. Operators separated the output, bagged it, and placed it in large pigeon holes for user pick-up. Turnaround was at least half an hour, and frequently much longer.
An input deck usually consisted of (a toy sketch of the card flow follows the list):
a Job Card, which contained the user's name and a problem number which validated the run and associated the cost with an account; later the card also contained a Priority, a maximum CPU run time, and other information used to optimize scheduling;
Control Card(s) specifying actions such as compilation, loading, execution, the mounting of tapes, or comments, followed by the cards comprising the module invoked; this sequence could be repeated several times in a deck;
an End-of-Record card, followed by data cards, if any;
an End-of-File card, usually a distinctive orange, making it easy to separate decks for return to the user.
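A toy sketch of the dispatcher's card flow in C - the card texts are invented, since actual control-card formats varied by installation:

    #include <stdio.h>
    #include <string.h>

    /* Gather cards, split jobs at end-of-file cards, and "process"
       each job in turn, echoing the batch flow described above. */
    int main(void)
    {
        const char *cards[] = {
            "JOB  SMITH  PROB=1234", "FORTRAN", "      END",
            "EOR", "3.14  2.71", "EOF",
            "JOB  JONES  PROB=5678", "EXECUTE", "EOF",
        };
        int ncards = sizeof cards / sizeof *cards, job = 0;
        for (int i = 0; i < ncards; i++) {
            if (strncmp(cards[i], "JOB", 3) == 0)
                printf("--- job %d ---\n", ++job);
            printf("card: %s\n", cards[i]);
            if (strcmp(cards[i], "EOF") == 0)
                printf("--- end of job %d ---\n", job);
        }
        return 0;
    }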
One of the more amusing Help Desk questions of the period was asked by a polite young lady who reported that the computer refused to mount her tape. The MOUNT command was:

    * MOUNT XXXX ON TAPE 1

She had written:

    * PLEASE MOUNT XXXX ON TAPE1

which the system took as a comment.
Our first 1401 had actually arrived at BNL in 1961, but was used mainly by Payroll. It had a memory of 4000 characters (one didn't talk in bytes in those days). In an attempt to learn the 1401's convoluted assembly language (variable length instructions), I wrote an arctangent subroutine for it. In retrospect, it is hard to imagine anything more useless, but I had spent the previous year writing mathematical subroutines for Merlin, and, "For a man with a hammer, everything looks like a nail."
Programming was done in Fortran, but system programmers, or those desiring extreme optimization coded in the FAP or MAP assembly language. FAP contained well over 100 distinct instructions; few people ever used more than half of them. This was not really surprising as the machines tried to satisfy both scientific and commercial users, and supplied both double-precision arithmetic and compound text processing instructions. Everybody's favorite instructions for indexing loops were known as Tix and Tixi (TIX and TXI), which counted up or down and Transferred (branched) when the end condition was met. Coding was a slow and laborious process, but efficient use of compute cycles and memory was at a premium as labor was cheap compared to the hardware costs. The average salary of a programmer was about $8,000 a year, while a 7094 cost over a million dollars, and had less power than what we now have sitting on our desks.
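From memory of the 7090 manuals (so treat the details as approximate), TIX performed a test, a subtract, and a branch on an index register in a single instruction. In C, the equivalent counted loop looks like this:

    #include <stdio.h>

    /* A TIX-style loop: the hardware subtracted a decrement from an
       index register and transferred while the index remained above
       it - test, subtract and branch in one instruction.  This is a
       paraphrase, not a cycle-accurate rendering. */
    int main(void)
    {
        int xr = 5;                 /* the index register */
        do {
            printf("pass %d\n", xr);
            xr -= 1;                /* the decrement */
        } while (xr > 0);           /* the transfer condition */
        return 0;
    }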
Some original work in computer development was being done at BNL. An oil-cooled memory buffer between the 7090 and the bubble chamber equipment allowed some real-time analysis of particle events, and the software to run background jobs was developed.
In 1964, IBM released DCS, the Directly Coupled System, comprising large disks shared by the 7094 and a 7044, a smaller, slower machine, but better suited for I/O operations. The 1401s were retired. However, the bandwidth of DCS was inadequate, and we ended up running the two machines more or less independently.
As communication links began to appear, interactive computing started to become appealing. Efforts were made to share the processor among multiple users or programs, which necessitated rolling jobs in and out of memory and storing them on disks which, by today's standards, were very small indeed. The idea of dividing a job into "pages", rolling in only those pages required, and rolling out the "least recently used" ones was appealing, and the attractive concept of "virtual memory" was born. Alas, the hardware was not up to the task, and early systems spent their time thrashing program segments back and forth while the expensive CPU stood idle and the online user fumed.
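The "least recently used" policy mentioned above is easy to state in code. A toy C simulation over three page frames - not any particular vendor's implementation - counts the faults such a system would take:

    #include <stdio.h>

    /* Toy LRU page replacement: on a fault with all frames full,
       evict the page untouched for longest. */
    #define FRAMES 3

    int main(void)
    {
        int refs[] = {1, 2, 3, 1, 4, 2, 5};    /* page reference string */
        int page[FRAMES], last[FRAMES], n = 0, faults = 0;
        for (int t = 0; t < (int)(sizeof refs / sizeof *refs); t++) {
            int hit = -1;
            for (int f = 0; f < n; f++)
                if (page[f] == refs[t]) hit = f;
            if (hit >= 0) { last[hit] = t; continue; }   /* in memory */
            faults++;
            if (n < FRAMES) {                  /* a free frame remains */
                page[n] = refs[t]; last[n] = t; n++;
            } else {
                int lru = 0;                   /* find the LRU frame */
                for (int f = 1; f < FRAMES; f++)
                    if (last[f] < last[lru]) lru = f;
                page[lru] = refs[t];           /* roll out, roll in */
                last[lru] = t;
            }
        }
        printf("%d faults\n", faults);
        return 0;
    }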
Part 3 - The Glory that was CDC's
By 1964 it was clear that the architecture of the IBM 7000-series machines had reached its end, limited as it was by the 15-bit address field and its single AC/MQ register processor. It was time to search for a replacement, and BNL (as well as most of its sister Labs) started major ADPE (Automatic Data Processing Equipment) acquisition processes.
There were three candidates: IBM, Univac and Control Data Corporation. IBM offered its new 360 Series, which started with a really small Model 30 and went up, up and up from there. These machines were supposed to handle both scientific and commercial applications, and offered vast amounts of memory (at a fantastic cost), multiprogramming, time sharing - everything. Univac offered a more traditional architecture and a limited number of models. CDC, however, presented a very different design, the brainchild of Seymour Cray, the computer genius of the '60s.
We had flirted with CDC prior to this major acquisition, having bought a CDC 924, a pleasant machine which offered hands-on operation and a nice console; its 24-bit word size made it essentially half of a CDC 3000-series machine. We acquired our first 6600 (Serial 11) in 1966, coincident with the move into our new building.
The 6600 was a radical departure from the traditional architecture of a uniprocessor and sophisticated (and very expensive) I/O channels. The 6600 had an extremely fast Central Processing (CP) Unit, with eight 60-bit X-Registers for arithmetic and a memory of 65,536 60-bit words (later increased to 131,072).
Ten Peripheral Processing (PP) Units handled the control functions and I/O. Each PP had its own memory of 4,096 12-bit words as well as access to all of the CP's memory, a fixed block of which was assigned to each PP for communication with the CP and other PPs, a clean approach which obviated the need for complicated interrupts. Instead of dividing memory into pages as did the IBM machines, each job had its own Field Length (FL) of contiguous central memory (CM). This scheme lacked flexibility but gained in efficiency, provided that several jobs could fit into CM at once.
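That communication scheme amounts to polled mailboxes in shared memory. A single-threaded C sketch of the idea (the real machine had ten PPs running concurrently, and the field layout here is invented):

    #include <stdio.h>

    /* Each PP owns a fixed block of central memory; both sides poll
       flags in it instead of taking interrupts. */
    struct mailbox { int request; int arg; int done; };

    static void pp_service(struct mailbox *m)    /* the "PP" side */
    {
        if (m->request && !m->done) {
            printf("PP handles request %d (arg %d)\n", m->request, m->arg);
            m->done = 1;                         /* post completion */
        }
    }

    int main(void)
    {
        struct mailbox box = {0, 0, 0};
        box.request = 1;                         /* CP posts a request */
        box.arg = 42;
        while (!box.done)                        /* CP polls for the flag */
            pp_service(&box);                    /* PP polls its block */
        printf("CP sees completion\n");
        return 0;
    }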
The hardware was "second generation" - discrete components - which were located between two circuit boards rather than on a single one, a 3-dimensional layout. Freon for cooling was pumped through the metal frame. I still have a memory module - 4096 12-bit words in a 6" cube - which weighs about 5 lbs and, when new, cost about $20,000!
The assembly codes for both CP and PP comprised a small number of instructions (compared to their IBM counterparts), and systems programmers found them easy to use and debug. Of course, users coded in Fortran: initially Fortran 2, then Fortran 4 (also known as Fortran 66, or FTN), and ultimately Fortran 77. Both IBM and CDC had developed a number of Fortran extensions, some of which made their way into future standards. Old habits die hard, and the emphasis was still on efficiency rather than portability. Initially, the 6600 ran under the Chippewa Operating System (COS), which was replaced by SCOPE in 1967 and NOS/BE (Network Operating System/Batch Environment) in 1980. Since both PP and CP utilization times were recorded, we attempted to classify jobs by their PP/CP ratio for scheduling purposes.
During the CDC years, the Applied Mathematics Department did work in the forefront of technology. In 1967, we got our second CDC 6600 (Serial 24). CDC then offered Extended Core Storage (ECS), a large block of fast-access memory, which was an adjunct to main memory except that code could not be executed from it; effectively, it was a fast swapping medium. ECS was linked to both 6600s; however, there was no system software to use it. Graham Campbell, Sidney Heller and I wrote code to incorporate the use of ECS into the operating system, and CDC included it in their future releases, paying us in extra disks and other hardware, a fairly common practice in those days.
Two other home-grown products are worth noting. One was the Operator eXtension (OX) system, which allowed users to get the status of their jobs. The 6600's structure made this kind of product relatively easy to develop and test, since the code resided in one of the PPs and, with care, debugging could be done during production time. The other, called QUEST, was, I believe, BNL's first patented program, under the title "A Computer Diagnostic Program With Inherent Fail Safety". Tony Kandiew was the author.
Around the same time, members of our engineering group developed Brooknet, one of the first Megabit communication links, which was used to transmit data from experiments directly into ECS where it could be processed in real time. The first link was to the Chemistry Department, with the AGS soon to follow. Brooknet was, for its era, a major engineering achievement.
The CDC 6600 was a magnificent machine. Far faster than anything else at the time of its introduction, it and its successor, the 7600, were the mainstay of computing at BNL for over 20 years.
In 1986 I wrote the article "Valentine's Day Massacre" (but only published a carefully edited version of it). For the benefit of the nostalgic, it appears next.
Valentine's Day Massacre
The party was over. Spilled wine mottled the polished white tiles, and half-empty plastic glasses, some with cigarette butts in them, stood in the gaping holes left by souvenir hunters. The event had started innocently enough when an announcement appeared on the bulletin boards and in people's mail slots: "Retirement Party" was the heading, followed by "After 20 years of Faithful Service", and then the name: "CDC 6600".
People were aware that an era was coming to an end. The venerable machine was finally going to be turned off and scrapped. A marvel in its day, the most powerful computer in the world in 1965, a title it had held for five years, was no longer cost effective, state of the art, or whatever euphemism was used to describe it.
The party was on Friday the 14th -- Valentine's Day. At 5 pm we moved to the sealed, climate-controlled partition in the machine room. The chief maintenance engineer swung open one of the huge, ponderous bays with its myriad screws holding the components in place. The inner side of the frame was a veritable rat's nest of multicolored wires, but that was not the side that drew his attention. His screwdriver approached a small component in the middle of the vast array, one that looked no different from the many others. He undid the two screws, pulled it out, then dropped it on the floor and stomped on it.
The machine looked no different; the monstrous cooling equipment continued to hum away, but we all knew that it was dead, irreversibly so, and that there was no way to bring it back to life. Its heart (the main clock module), which had beaten faithfully for 20 years near its rated 5 million pulses a second, was stilled. Even a transplant could not save it, since the entire machine had been carefully tuned to this one particular component, the lengths of its wires carefully adjusted for correct timing.
Earlier in the day, the public relations people had visited and the chief engineer had removed the component for the benefit of the photographer. Hardly had he taken it out when the telephone began to ring: against all expectations, people were still using the machine in spite of the availability of younger, more attractive machines.
Then the massacre started. People who had spent their entire careers servicing, using, loving, hating the machine wanted a part as a souvenir. I got a memory module: a heavy six-inch cube with a black steel faceplate on which a small plaque read: "STORAGE MODULE, SERIAL NO. 55309". Eighteen screws on the front plate alone held it together, and one could see a jumble of wires and tiny discrete components: multicolored resistors, silver transistors; deep inside, I knew, resided an array of incredibly fine wires with a doughnut-shaped piece of iron at each intersection. This was the summit of Second Generation computer technology. The part I held with both hands had cost close to $20,000 when new; nowadays one could buy ten times as much storage for around $8. (Editor's Note: in 1995, the cost of the equivalent memory is around a quarter!)
One might have expected there to be sadness in the air, but it was hard to pinpoint it. Perhaps programmers and engineers are an unsentimental lot, at least insofar as their machines are concerned. However, the party was subdued; a twenty-year milestone makes one think about the passing of time, one's own mortality, the good times and the might-have-beens.
Will the newer machines last as long? Chances are that they will be "upgraded" periodically, almost imperceptibly, so that when the final cabinet is hauled away, it will contain little of the original, and then be replaced by something "upwards compatible", so that the passing will scarcely be noticed.
Perhaps it's better that way. Still, it was nice to gather for the last time around the old machine, lift our glasses in tribute, and say: "they don't make 'em like they used to."
I am indebted to Paul Kessler for much of the information on the DEC years.
Part 4 - The Dependable DEC Years
Digital Equipment Corporation (DEC) started small - by manufacturing minicomputers. In the late 60s the PDP-8 made its appearance. It had 4096 12-bit words (like the PPs of the CDC 6600) and came, originally, with an assembler and little else. It served as a forerunner of the personal computer and in process control applications; I am sure that several still lurk in the machinery of the AGS, and were anyone to let it be known that he had one to spare, it would be snatched up within the hour. The PDP-8 was followed by the PDP-11, a popular and more substantial machine, but the DEC boom really started with the introduction of the VAX series in 1978. BNL acquired its first VAX 11/780 in 1980.
These machines were categorized as mid-size: moderately fast, with moderate size memory, they went about their assigned tasks without panache, producing reliable results within a reasonable time frame, provided that the scope was not beyond their capacity. Within a year, each of the major Lab departments owned at least one of them, and assigned it the mundane, routine computational tasks.
The operating system was VMS with the Digital Command Language (DCL) as its interface. DCL was - no, is - both powerful and easy to learn and use. Until quite recently, few believed that it would ever be superseded by Unix (except, of course, in universities).
The VAXs (or Vaxen for the purists) had a large number of programming languages and application packages available for them. The August 1987 issue of the BNL Computing Newsletter lists dozens of languages and programs on our five-node cluster (although not necessarily on all nodes), among them entries such as FTN/VAX TO RSX.
DEC connectivity was good, and several VAXs were eventually "clustered" together, a configuration which allowed any of the machines to execute any of the jobs in the input queue, and to use any of the disks on the system. CCD got its initial VAX Cluster of four VAX 11/780s in 1984, but they were soon upgraded to 11/785s which were 50% faster. DECNET proved to be a fine, general purpose communication system which allowed file transfer, mail and remote logins.
The machines continued to improve; the 6000 series supplanted the 785s, and there was even a symmetric multiprocessor machine which allowed parallel processing. The MicroVAX found its way onto some desktops, and the annual DECUS meeting attracted as many as 8,000 people. Until last year we still ran a VAX-VMS cluster, and VAXs are still used in many areas.
DEC still maintains a sizable presence at BNL. CCD currently runs an AXP-VMS cluster. AXP systems, called Alphas, are DEC's current 64-bit systems and they can run VMS, Unix or NT.
Before starting to write this article I surveyed people to try to find one word or phrase to characterize the DEC VAX era. Stable, reliable, pedestrian, user-friendly and "white bread" were some of the descriptors suggested. Perhaps Paul Kessler, BNL's longtime VAX System Manager, said it best when he called the VAX "the Levittown of Computing". Just as a Levittown house has been designated a national landmark, so should a VAX be enshrined in a computer museum. It wasn't much to look at, and it broke no technological limits or records, but it got the work done at a reasonable cost without giving its users and system managers high blood pressure. Half a million were sold, and, for a few years, DEC was the Number 2 computer manufacturer in the world. What caused DEC's rapid descent? Closed hardware and expensive software, the exact opposite of what drives the current boom.
I am indebted to Al Smith, Ed McFadden and Susan Sevian for input to this section.
Part 5 - The Second IBM Era

After the long CDC years at BNL, culminating in the retirement of the CDC 7600, we needed another mainframe computer, which had to be in the "supercomputer" class. Many of our wealthier sister Labs had moved to Crays, but these were beyond our financial reach. We wrote specs and went out for bids. IBM, perhaps nettled by their long exclusion from the BNL mainstream, was determined to recapture the site, and their bid was "an offer you couldn't refuse." We were also urged towards IBM by many BNL users who wanted compatibility with other "IBM sites", such as SLAC, FERMILAB and CERN.
Despite the anguished cries of many of our users, faced with horrendous conversion problems, we sprang for an IBM 3090-180 with Vector Facility, which was duly installed in late 1986 and put into production in February 1987. The original operating system was VM/SP HPO (High Performance Option), soon replaced by VM/XA (Virtual Machine/eXtended Architecture). Under this operating system ran CP, the Control Program, and under that CMS, the interface for interactive users. In addition, we adopted the SLAC Batch System, and, on top of the IBM "vanilla" code, we installed the HEP/VM user interface, developed at CERN, which brought a degree of compatibility to the High Energy Physics community. If all this sounds complicated ... it was!
The CDC 7600 was a 100% batch machine, which didn't even do its own input/output. The IBM 3090 inherited much of its batch user community, although there were many defections to the more approachable (and affordable) world of the DEC VAX. The 3090 did support Mail and a state-of-the-art network, called BITNET.
In July 1990, the 3090 was upgraded to a model 300E with three CPUs, one with the vector facility. Expanded Storage was added as a very fast swap device.
My own involvement with the machine was limited to database applications. This seems a good place to digress on database systems at BNL. We acquired our first commercial DBMS in 1974, a product called System 2000 from MRI, Inc., an offshoot of a group of researchers from the University of Texas at Austin. S2K, as it was known, ran on the three "biggies" in computing at the time: IBM, CDC and Univac. It was a hierarchical system; i.e., the only relationship between different records was parent-child. The modern relational database model, with records linked by key columns, was the subject of research papers, but such prototypes as existed at the time were woefully inefficient. S2K was easy to learn and use, and contained a rudimentary report writer.
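The difference between the two models is easy to see in miniature. In a hierarchical system the only access path is the parent-child link designed into the database; in the relational model the link is just data - a key column - so ad hoc joins are possible. A tiny C sketch of such a key-column join (tables and names invented):

    #include <stdio.h>

    /* Relational toy: departments and items are flat tables; the
       dept_id key column is the only link, so the "join" below is
       just a comparison. */
    struct dept { int dept_id; const char *name; };
    struct item { int dept_id; const char *name; };

    int main(void)
    {
        struct dept depts[] = {{1, "Chemistry"}, {2, "Physics"}};
        struct item items[] = {{1, "Spectrometer"},
                               {2, "Magnet"},
                               {1, "Centrifuge"}};
        for (int d = 0; d < 2; d++) {
            printf("%s:\n", depts[d].name);
            for (int i = 0; i < 3; i++)
                if (items[i].dept_id == depts[d].dept_id)  /* the join */
                    printf("  %s\n", items[i].name);
        }
        return 0;
    }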
One of the most successful applications was the Capital Equipment database. Keeping track of equipment at the Lab had been done by MIS on an IBM 360, but the volume of 30,000 items had exceeded the capacity of the machine. We were asked for help, but warned that it might be a lengthy undertaking. I requested the raw data, and a meeting between AMD and MIS management to discuss feasibility was scheduled for a few days hence. Thanks to S2K's ease of defining a database and loading data, I was able to demonstrate to the stupefied MIS representatives a database up and running and capable of responding to ad hoc queries. In its heyday, S2K supported over 50 databases from many departments. When the last of our CDC 6600s was retired, S2K was moved to a smaller machine, the CDC 830, but the end of the line was at hand, and it was clear that an alternative had to be found. The logical platform was our IBM 3090.
The decision was made to acquire IBM's SQL/DS, a choice driven by plans to acquire a product called PDM, which ran on top of SQL/DS, to keep track of the myriad of AutoCAD drawings the Lab produced. Eventually it was decided that PDM was too expensive and the project was killed, but we were stuck with SQL/DS. Whereas the Capital Equipment database had been put up in a matter of days (alright - it took a few weeks to polish it and write the reports), it took two people close to a year to port the application to the 3090 and SQL/DS. Not that the product was bad - far from it. It was enormously powerful and efficient, the ideal product for a brokerage house processing millions of transactions a day. Our biggest application was Capital Equipment, with its 30,000 items and a volume of around six megabytes. For us, SQL/DS was the proverbial eight-cylinder, supercharged, automatic-transmission insect crusher.
Of all the major "mainframes" BNL acquired, the IBM 3090 was used for the shortest time. It served us only six years before being superseded by the army of RISC workstations. However, the 3090 was by no means a bad machine; many are still in service throughout the world, and it's quite likely that ours, which we sold, is still running somewhere. Although, by 1993, its raw computing power could be matched by a machine costing 1/50th as much, its throughput was unparalleled. It was extremely reliable, the mean time between failures being measured in months. When it detected a hardware failure, it automatically "called home" (IBM), reported the problem, and worked around it. If one CPU broke, we often did not know until IBM personnel arrived with a replacement. The disks featured cross redundancy, and we never lost data.
The 3090 was a great machine but was, initially at least, a poor match for BNL's compute-bound needs. By the time we acquired the additional processors and memory, the inexpensive and very fast RISC workstation was seen by many as a better economic alternative, and a high-level committee advised dropping the 3090, perhaps prematurely. Whether economies were actually realized is debatable. Software and maintenance on the 3090 were expensive, but the cost of the much cheaper products for workstations, multiplied by the number of platforms, was almost certainly higher.
As we struggle with the personal workstation - "it's all yours, good luck, you'll need it!" - we can be nostalgic for the BIG IBM days, when engineers were on site and regularly did preventive maintenance for us, even to the extent of vacuuming the disks, albeit at a cost of $100,000 a year. Our software was maintained for us, including installation of applications and upgrades (about $180,000 a year), and friendly, knowledgeable, impeccably attired personnel were always on hand, ready to reassure us that the problem would be fixed very soon, although of course the machine would run much better if we bought more memory and more disks. Today, buying more hardware is usually the best way to improve performance and reliability; IBM was ahead of its time.