Walking in the woods a few months ago I had one of those moments of synthesis. A dozen or so threads I'd been thinking about for a long time suddenly wove together into a single something that made sense. It is something of a form of storytelling and I need to see if it really makes sense. I started writing about it, but it struck me that many people who see the post title will think of super intelligent AI. I'm talking about something much more immediate - something we're dealing with now. But I'll begin with my two bits on super intelligent AI to get it out of the way.
intelligence that is different and beyond
There's nothing in physics that says our brains are special. Intelligence is just a kind of information processing: information goes into some arrangement of elementary particles and a result emerges. The particle arrangements can take a variety of forms. A mouse is more intelligent than a teacup and most of us (with the exception of Douglas) think ourselves more intelligent than mice. There are some information tasks where other animals or machines beat us. A twenty dollar calculator (or a free app) destroys most of us when it comes to arithmetical calculations. As time goes on it's reasonable to assume computation will improve. There is the sticky wicket of "self-aware" and "thinking" ... I'll punt on those as we're unclear what they mean even for humans.
Science fiction often portrays malevolent super intelligent AIs. We probably shouldn't worry about how anthropomorphic its thought process is and should accept that it will be different. A more fruitful way of looking at the problem may be to ask if we share goals and to consider its competence. It would be good to at least share goals with a super competent AI. Think of it this way... we're more intelligent and usually more competent than insects. We might want to build a house on a field with a large number of them. We don't think of ourselves as malevolent, but our goals aren't aligned and in the end we wipe them out without noticing. On the other hand my ferrets are skilled at aligning their goals with mine. Dogs rule when it comes to aligning goals with humans, often outperforming other humans. Perhaps that's a reasonable model going forward.
There's a lot of hype and science fiction, but we're only taking baby steps on the fundamental problems. Every serious researcher I know, and there are a number, thinks this kind of intelligence is decades away - perhaps many decades. We have some time and we need to be thinking about how to handle it now. We need to think deeply about what kind of future we want and how we share decision making. These thinkers need to represent a wide range of backgrounds - individuals with great intellectual range as well as focused specialists forming groups that communicate. More than ten percent with engineering backgrounds is too high. There is time, but I worry that our track record dealing with such change has not been good.
There are other interesting aspects of super intelligent AI to think about - the sort I'm attracted to. You can think about the limits of intelligence in the Universe. It is a function of entropy, energy, the expansion of the universe, and limits to communication set by the speed of light. As you might imagine there's a lot of headroom, but the limits are harsh. And you can turn your focus in the other direction and compare the brain and human sociology to existing and theoretical computer architectures. Energy, entropy and communication speed are also important here. You can even put an upper bound on the optimal physical size for human style brains - we're fairly close to the limit. (Curiously, some animals have neat evolutionary tricks.) And then there are the alien intelligences already among us - the cephalopods.
rebooting:
humanism and the dawn of machine learning
Sometimes I try to teach it to someone else or at least play around with the idea of how I'd turn it into a course. Here's a quick pass at a syllabus for a hypothetical course.
° Eight generations of change - a brief history of the first industrial revolution. Per capita energy use increased by an order of magnitude and our lives changed. The distribution of impact was very uneven, ranging from increased slavery and war to the shrinking of the world.
° The scientific revolution and the electronic revolution that followed. How the invention of the scientific laboratory, and what grew out of it, was at least as important as the industrial revolution.
° The externalities of change. We never see the big picture in the beginning and as it emerges we adapt, but sometimes the adaptations are flawed. Can we begin to think about externalities early in the game and who should participate?
° The smartphone - the convergence of at least six technology curves bound together by a bit of social glue into a partner that is always with us. Is it one of those rare events that socially and culturally rewired us? Is it one that neurologically rewired us?
° Surveillance capitalism and big data - when the consumer is the product. Herds of winners and losers. Connections to consumerism and politics. The concentration of power in the hands of a few.
° Algorithms - what they are and how they are used and abused. Who gets to control them and do they understand the consequences? What would it mean to require oversight or openness?
° What is modern numeracy and algorithmic literacy? Where and when can it be taught?
° Security and privacy. Did we ever have them, and to what degree? Will we get back some of what we've lost? The fundamental differences between the large information companies.
° Brogrammers, sexism, racism and the sterile algorithmic culture.
° A brief history of machine learning. Where it wins and where it fails - reality and hype.
° Thread weaving (although this is done throughout). What will those in power do with ML and emerging AI? What are they doing now? Safeguards...
° Another look at the industrial and scientific revolutions.
I have a hard time seeing how to shoehorn something like this into a single semester, but the point is to refine my own thoughts.
Thanks for indulging me.
I would sign up for this course. TJ Coburn
Posted by: TJ Coburn | 09/05/2017 at 06:40 AM
This would be a great course - with you of course:-) I remember talking with you about the blind and not blind biases built into programs. I love your point that algorithms separate into winners and losers. And that story you told me about manga being used as a marker of good programmers and how that’s sexist and racist.
Posted by: Jheri | 09/05/2017 at 07:00 AM