An excerpted chapter from Nate Silver’s The Signal and the Noise is worth reading in its entirety. It deals with how Deep Blue beat Garry Kasparov in 1997:
Deep Blue had won. Only, it had done so less with a bang than an anticlimactic whimper. Was Kasparov simply exhausted, exacerbating his problems by playing an opening line with which he had little familiarity? Or, as the grandmaster Patrick Wolff concluded, had Kasparov thrown the game, to delegitimize Deep Blue’s accomplishment? Was there any significance to the fact that the line he had selected, the Caro-Kann, was a signature of Karpov, the rival whom he had so often vanquished?
But these subtleties were soon lost to the popular imagination. Machine had triumphed over man! It was like when HAL 9000 took over the spaceship. Like the moment when, exactly thirteen seconds into “Love Will Tear Us Apart,” the synthesizer overpowers the guitar riff, leaving rock and roll in its dust.
Except it wasn’t true. Kasparov had been the victim of a large amount of human frailty—and a tiny software bug.
The bug occurred when the computer, unable to select a best move, defaulted to a random one. The move was so divorced from anything that looked sound that Kasparov decided Deep Blue must actually be twenty steps ahead of the game. The idea that the computer had acted in error – out of a programming bug – never occurred to him, because computers do not make mistakes. Rattled, Kasparov resigned the game.
Clarke’s Third Law applies. When a computer — a device all of us use and almost none of us understand — can beat a human at something that humans find very difficult to do, some of us begin to wonder if we really are building something too powerful to control. If any sufficiently advanced technology is indistinguishable from magic, is any sufficiently advanced supercomputer indistinguishable from God?
Silver has his doubts:
Computers are very, very fast at making calculations. Moreover, they can be counted on to calculate faithfully—without getting tired or emotional or changing their mode of analysis in midstream.
But this does not mean that computers produce perfect forecasts, or even necessarily good ones. The acronym GIGO (“garbage in, garbage out”) sums up this problem. If you give a computer bad data, or devise a foolish set of instructions for it to analyze, it won’t spin straw into gold. Meanwhile, computers are not very good at tasks that require creativity and imagination, like devising strategies or developing theories about the way the world works.
A highly advanced machine remains a machine. It does what it is programmed to do. It does not program itself.
Of course, neither do we, but consider this:
[I]t is not really “artificial” intelligence if a human designed the artifice.