A curious Bell System business from the late twenties through the thirties was selling bundled outbound courtesy calls to the airlines. The public saw flying as risky, and it was common for an airline ticket to include a free one minute phone call to let your family know that you had made it to your destination in one piece. It wasn't great PR, but it was an appreciated courtesy.
Until the fifties airlines had a serious safety image problem. Flying had been genuinely unsafe. Going up in one of those contraptions was seen as taking your life into your own hands - fine, as long as you didn't kill anyone else. No safety standards existed for pilots or airplanes. The emerging airlines stayed afloat largely on airmail contracts with the post office. They pushed for safety and training regulations and tried to create an image of military-like professionalism with uniforms and codes of conduct. At the same time enormous progress was made in airframe, engine, instrument and control system design. Aerodynamics became a serious area of study during World War I with the creation of NASA's forerunner - the National Advisory Committee for Aeronautics. From the biplane to pre-jet fighters and bombers in twenty years.
Flying was doubly dangerous, as many of the aircraft of the day weren't stable and pilots weren't very skilled. The autopilot emerged and was soon lionized as making flight safe and easy.1 In fact the technology emerged slowly along with everything else. I'll skip the detailed history and focus on something just as interesting that is relevant to automated systems today - the reality and perception of accountability in complex human-machine interfaces.
Autopilots grew in sophistication through the thirties with a focus on flying an airplane when flight under visual flight rules (VFR) wasn't possible. After WWII serious experiments in hands-off flying were made, including a 1947 flight from Newfoundland to England using a system that read a series of punched cards.
The plane behaved as if an invisible crew were working her controls. … The commanding robot was a snarl of electronic equipment affectionately known as "the Brain." Everything it did on the long flight was "preset" before the start. It received radio signals from a U.S. Coast Guard cutter. Later it picked up a beam from Droitwich, England, and followed that for a while. When the plane neared Brize Norton, the wide-awake Brain concentrated on a special landing beam from an R.A.F. radio and made a conventional automatic landing. On the way over, the crew checked the course and watched the instruments. Most of them had little to do. They played cards and read books.
Time, 1947
The technical press were beside themselves. They saw a jump to a future where the pilot was completely out of the loop. The emergence of remote controlled military drones muddied the waters a bit, as some reported them as autonomous. There were predictions that pilots might become an endangered breed, and that the notoriously unstable helicopter would be rendered so easy to operate, with full or near autonomy, that one would quickly end up in every middle class garage.
It turned out to be a tad optimistic, but terrific progress continued, with the average commercial flight now needing pilot intervention only on takeoffs, landings and taxiing - and even then takeoffs and landings are easily automated.
The systems grew in function and complexity. The software effort has grown to the point where the software component of a new fighter or airliner often exceeds half the total development cost. Designing for, testing and integrating that complexity is a non-trivial task.2
Then there is the issue of what happens when something goes wrong. Almost always some human in the system is blamed. There may be a series of events that could have seen pilot intervention but didn't - or perhaps saw the wrong intervention. Pilot error... The systems and their designers are rarely blamed in aviation. In the early days autopilot and instrument makers would settle lawsuits out of court rather than take a reputation hit. Detailed analyses of crashes often center on the crew not being able to spool up quickly enough to realize what is going on and how to do something about it. More often than not the automatic system encountered something outside its design envelope (sometimes they fail within their envelope too) and the handoff of command to the crew fails.
It turns out these events aren't that uncommon, but usually the crew has enough time to react. The fundamental problem is that our conscious minds mostly single-task and are interrupt driven. If I'm not directly flying the plane and the computer says - here Steve, it's all yours - I need fifteen seconds to a minute to come up to speed. When something awful happens we blame the crew. It is as if they are human airbags that deploy to protect the technology.
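To put rough numbers on that recapture window, here's a back-of-the-envelope sketch. The fifteen to sixty second window is the one mentioned above; the speeds are my illustrative assumptions, not measured data.

```python
# Back-of-the-envelope: distance traveled during human "attention recapture."
MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def takeover_distance_m(speed_mph: float, takeover_s: float) -> float:
    """Distance covered (meters) while a human regains situational awareness."""
    return speed_mph * MPH_TO_MPS * takeover_s

# Assumed speeds (urban vs. highway) and the 15-60 s recapture window from above.
for speed_mph in (25, 65):
    for takeover_s in (15, 60):
        d = takeover_distance_m(speed_mph, takeover_s)
        print(f"{speed_mph} mph, {takeover_s:2d} s handoff -> {d:5.0f} m of unmonitored travel")
```

Even at the optimistic end, highway speed means hundreds of meters of effectively unmonitored travel - exactly the window where things go wrong.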
And that brings us to self-driving cars.
Autonomous cars are a much more difficult problem than autonomous aircraft. About a decade ago serious progress was made, and it has since been fueled by the addition of cubic money, with many expecting big things in the short term (say five to ten years). To sort things out, the six level automation taxonomy from the Society of Automotive Engineers is useful (a compact rendering in code follows the list):
level 0 no automation. A car from the 70s and before (some would argue automatic transmissions are level 0.5).
level 1 single task driver assistance. Cruise control, stability control, anti-lock brakes. Nearly universal on current American cars.
level 2 partial automation. Two or more subsystems combine - the combination of cruise control and lane keeping, for example. Common on many cars today.
level 3 conditional automation. Systems that drive and monitor, but assume human backup. Tesla is sort of an example.
level 4 high automation. Systems that do nearly all driver tasks, but only in certain regions and conditions. Interstate highways in daylight and dry weather, or a very low speed urban system that is tailored to just that environment.
level 5 full automation. The works. It would go anywhere you can imagine going, on or off road and in all conditions. No driver controls are necessary.
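For the programmers in the audience, here's a minimal sketch of the taxonomy as a Python enum. The constant names and one-line summaries are my paraphrase of the SAE J3016 levels above, not official definitions.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrasing the list above."""
    NO_AUTOMATION = 0           # the human does everything
    DRIVER_ASSISTANCE = 1       # a single assisted task: cruise control, ABS
    PARTIAL_AUTOMATION = 2      # two or more assists combined; human still drives
    CONDITIONAL_AUTOMATION = 3  # system drives and monitors; human is the fallback
    HIGH_AUTOMATION = 4         # no human needed, but only in a defined domain
    FULL_AUTOMATION = 5         # anywhere, any conditions, no driver controls

# IntEnum keeps the levels ordered, so "at least level 4" checks read naturally:
assert SAELevel.HIGH_AUTOMATION > SAELevel.CONDITIONAL_AUTOMATION
```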
Currently we're at level 2. Tesla assumes a human in command with their level 3 system, but the human attention recapture issue rears its head. Many, myself included, believe the problem is sufficiently difficult that level 4 happens before we have solid level 3 systems.3 Level 4 in certain carefully defined environments seems likely in five years. I've done a bit of work with some folks looking at the system and human level issues of such designs, and it gets thorny in a hurry. Some may plow ahead and use human airbags to protect their growing technology base, but that may be counterproductive for the industry.
Widespread level 4 will be a long rollout, like autopilots in aircraft were. The software is enormously complex, and some of us believe breakthroughs in software verification and testing are needed, along with a view of the problem that goes much deeper than just the technology. Expect certification and regular maintenance requirements. It may be that personal ownership won't be practical if maintenance requirements mirror the aircraft world. This isn't just rocket science - that's the easy part. Level 5 is way out there even if the tech exists. We could have it on airplanes but won't go there for social reasons.
In the end I'm optimistic about highly constrained level 4 systems, but I'm definitely not in the technology solutionist camp. If Apple, Toyota or any of the other players are doing this, I can see it in cooperation with existing transportation systems - probably in Europe or China - and not privately owned. I'm also very optimistic about potentially lower death counts, but much of that is the kinetic energy argument, as speeds are likely to be slower. There is a lot of detail here, but it's beyond the scope of an hour blog post. So many interesting scenarios...
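The kinetic energy argument, by the way, is just the v-squared scaling in KE = 1/2 mv^2. A quick illustration, with the speeds as my assumptions for a constrained urban system versus highway driving:

```python
# Kinetic energy scales with the square of speed: KE = 0.5 * m * v**2.
# Comparing an assumed 25 mph constrained urban system to 70 mph highway
# driving, the vehicle mass cancels and only the speed ratio matters.
urban_mph, highway_mph = 25.0, 70.0
ratio = (urban_mph / highway_mph) ** 2
print(f"Crash energy at {urban_mph:.0f} mph is about {ratio:.0%} of that at {highway_mph:.0f} mph")
# -> about 13%, i.e. nearly an order of magnitude less energy to dissipate
```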
__________
1 The article is from the New York Times, 27 Nov 1916
2 We complain that air traffic control could be much better. It could, but the dose of reality is that changes have to propagate across many systems and they need to be checked. We could rely completely on GPS, but what happens if the system goes down (not terribly likely, but possible) or if there is local GPS denial (this has happened)? Dozens of systems need to be in sync, and the mentality of careful testing and verification that the airlines desired beginning ninety years ago continues. No one wants to tamper with that safety record. I don't blame them.
3 Google seems to have taken this point of view. Their focus is now on highly constrained level 4 systems. They are very worried about the human attention recapture problem. The same is true for most of the Detroit and Euro systems I'm aware of.
__________