Call it coincidence, serendipity, synchronicity, or just random, but last week I was accidentally exposed to two seemingly unrelated ideas that ended up seeming very related to me. And they gave me a fascinating whack on the side of the head. I thought artificial intelligence had run its course, but computers that learn could be much more important.
First, the book Blondie24, by David Fogel, describing how he and his team used evolutionary computing to develop a computer program that taught itself to play checkers. It’s well written, logical, easy to follow, and fascinating. Here’s a snippet (direct quote from the book):
Suppose we could harness the fundamental processes of natural evolution inside a computer. We could generate many thousands, or maybe millions, of solutions to problems, test these solutions, keep the ones that are better, and use them as parents of future improved solutions. We could write a computer program that uses an evolutionary algorithm to breed solutions to problems and perfect them over time.
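The loop Fogel describes, generate candidates, test them, keep the better ones, and breed from them, can be sketched in a few lines of code. This is a minimal illustration on a toy problem of my own choosing (matching a hidden list of numbers), not anything from the book; the fitness function, mutation scheme, and all parameters here are assumptions for demonstration only.

```python
import random

# Toy problem (illustrative, not from the book): evolve a list of
# numbers toward a hidden target. Fitness is higher the closer we get.
TARGET = [3.0, -1.0, 4.0, 1.0, 5.0]

def fitness(candidate):
    # Negative sum of squared errors: 0 is a perfect match.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(parent, scale=0.5):
    # Variation: a child is a slightly perturbed copy of its parent.
    return [g + random.gauss(0, scale) for g in parent]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    # Start from random candidate solutions.
    population = [[random.uniform(-10, 10) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction with variation: parents plus mutated offspring.
        population = parents + [mutate(p) for p in parents]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # close to 0 means close to the hidden target
```

No single step here is clever; the improvement comes entirely from repeating variation and selection many times, which is the whole point of the passage above. Blondie24 itself evolved neural-network checkers evaluators, a far richer search space than this toy, but the loop has the same shape.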
There is so much more there that I have trouble summarizing, but what it means is something way better than what we used to call artificial intelligence, which was rules-based computing in which humans summarized knowledge and experience into rules (sorry, that’s my definition, so I apologize to all the AI people out there). This, in stark contrast, is computers that learn, using a process that mimics evolution: It’s survival of the fittest, in compressed time.
Having suffered through some serious attempts to create a rules-based system for financial forecasting back in the 1980s, I am instantly intrigued by the change of perspective: let the algorithms learn by themselves. Don’t try to codify; just manage evolution. I’m sure that sounds very far-fetched, but the book takes the reader through actual cases with practical implications. Blondie24, as it turns out, is a program that taught itself to play checkers. It ends up sounding much more believable in the book than in any summary I can give it.
The day after reading that book I spent all morning with Dun & Bradstreet Credibility Corp, whose founder, Jeff Stibel, is the author of Wired for Thought, which has some striking parallels. Consider this paragraph (another direct quote) and compare it to the one above:
Think of it this way: evolution took hundreds of thousands of years to evolve the human brain to its current level of complexity and sophistication. The Internet will approximate that in a few generations. We will have experienced in cyberspace a replication of biological growth itself, as though it were the brain of a living thing. But more to the point, we will replicate not only the brain itself but also its by-product: thought.
Again, as with the quote above it, I ask you to trust me that this book too, like Blondie24, is very well written, easy to follow, and exciting. Both books deal in some high-flying ideas, but they guide the reader through them very considerately.
Which brings me to another coincidence: parallel thought. Although they don’t know each other (I don’t think), and they are approaching this from different directions, both authors ended up with the same example of what’s going on.
First, from Blondie24, David Fogel is making the point that the goal is not elaborating human thought into computers in sets of rules and conclusions, but generating an independent evolutionary process:
For example, suppose we wanted to design a flying machine. We might look to nature for inspiration and see a vast array of feathered birds flapping their wings. But in emulating those specific manifestations of flight, we’d be led astray. Neither feathers nor flapping wings is a cause, but rather an effect. It’s no surprise that we’ve failed to build a practical man-carrying ornithopter.
Alternatively, we can adopt a high-level and more abstract perspective that exploits the common ground found across all learning systems. This ‘top-down’ approach seeks out repeated patterns in systems and does not lead us astray. Considering my example of aerodynamics, the top-down approach focuses on the countering forces of lift and gravity, thrust and drag, and air flowing over an airfoil…
The pattern repeated in all natural learning systems is an evolutionary process of adaptation by variation and selection. Evolution, then, provides a simple yet complete prescription for programming a learning machine, a computer that can adapt its behavior to meet goals in a range of environments and to generate solutions to problems that we don’t yet know how to solve.
Interesting? Yes. And Dr. Fogel plays that out in actual cases involving some classic logic problems, playing checkers, and even some practical results in reading images to diagnose cancers.
And then I picked up Jeff’s book, where he uses a strikingly similar metaphor about solving the problem of flight. He’s talking about the development of the Internet as similar to nodes in a human brain, and the hope of developing something akin to learning and thought from the Internet:
This development is not unlike the evolution of flight. When the Wright brothers first flew … their intent was not to create a bird. To be sure, some innovators thought that building a ‘bird’ was the road to flight, but it was not. The Wright brothers harnessed the laws of flight, and not the body of a duck or a bluejay.
So here I was thinking that artificial intelligence had lost traction because it was impossible to mimic human knowledge and experience in a rules-based system, or maybe just not worth the effort. And then I discover that there’s some fascinating work going on, not in what we used to call artificial intelligence, but, rather, computers that learn, and the Internet as brain.
Final thought: are we the same species we were 500 years ago, the species we have been for a few thousand years? Or are we now a new species, with new powers of communication, of storing and retrieving information, of instant interaction over large distances?
What do you think?