If I have one hope for my book, The Annotated Turing: A Guided Tour through Alan Turing's Historic Paper on Computability and the Turing Machine, it is to help readers understand the difference between Turing's original Turing Machine, and the Turing Machine as it's commonly encountered in college courses and textbooks.
Two decades after the paper in which Turing introduced his computing machines, the Turing Machine was reformulated by Stephen Kleene in Introduction to Metamathematics (1952) and Martin Davis in Computability and Unsolvability (1958). These reformulated machines — which generally compute functions — dominate the current literature on computability. The "halting problem" (the term is Martin Davis's) concerns the existence of a general algorithm to determine whether an arbitrary machine finishes properly and halts, or whether it goes bad and runs forever.
As I've mentioned before (2006-08-11, 2007-11-26, 2007-12-02, 2008-05-12), it is not correct to associate the halting problem with Turing's original work. Turing's Turing Machines are supposed to run forever.
Turing's paper describes some machines that compute functions, but these are only special cases. The original conception is a machine that computes the infinite digits of a real number. Of course, this is not typical computer activity (unless you're one of those people who set computers going calculating the infinite digits of π) but it's certainly a good example of an algorithm at work.
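A machine of this original kind never halts: it just keeps emitting digits. As a minimal sketch (in Python rather than Turing's tables, and for a rational number rather than π to keep it short), here is a "machine" that produces the decimal digits of a real number forever, via ordinary long division:

```python
from itertools import islice

def digits_of_fraction(numerator, denominator):
    """Yield the decimal digits of numerator/denominator (with 0 <= n/d < 1),
    one at a time, forever -- long division, never halting."""
    remainder = numerator
    while True:
        remainder *= 10
        yield remainder // denominator   # next decimal digit
        remainder %= denominator

# We can only ever look at finitely many digits of the infinite stream.
# 1/7 = 0.142857142857...
print(list(islice(digits_of_fraction(1, 7), 10)))
```

The generator is the natural Python analogue of Turing's setup: the computation is infinite by design, and "using" the machine means reading some finite prefix of its output.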
The advantage of Turing's approach is that certain theoretical results are available almost immediately. Once you see how a Turing Machine (and hence, any finite algorithm) can be described by a single integer, it becomes obvious that Turing Machines are enumerable. Since real numbers are not enumerable, it follows that the vast majority of real numbers cannot be computed algorithmically. (If you're hazy on the concept of enumerability, that's the focus of Chapter 2 of The Annotated Turing.) Digital computers are thus intrinsically limited in what they can do.
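The key step is that any finite machine description can be packed into a single integer. Turing defined a specific scheme for his "description numbers"; the toy encoding below is my own illustration, not his, but it makes the same point: descriptions are just integers, so the machines can be counted off one by one.

```python
def description_number(description: str) -> int:
    """Map a finite machine description (any non-empty string) to a unique
    positive integer -- loosely analogous to Turing's 'description numbers',
    though this byte-based encoding is just an illustration."""
    return int.from_bytes(description.encode("utf-8"), "big")

def description_from_number(n: int) -> str:
    """Recover the description from its number (the encoding is reversible)."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

# A made-up one-line machine description round-trips through its integer:
desc = "q1 S0 -> P1 R q2"
n = description_number(desc)
print(description_from_number(n) == desc)
```

Since every machine corresponds to an integer and the integers are enumerable, so are the machines — and hence so are the computable numbers, which is exactly what the uncountability of the reals collides with.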
If only a subset of the real numbers can be computed, what are all those other real numbers that can't be computed? The vast majority of real numbers are basically strings of random digits without any pattern whatsoever. You simply can't generate random digits algorithmically.
Georg Cantor proved the non-enumerability of real numbers in two very different ways (although both were reductio ad absurdum proofs). The second of Cantor's proofs involves the famous diagonalization process. (Again, see Chapter 2 of The Annotated Turing for a full discussion.) Suppose you could list the real numbers. You could then derive a new number from the diagonal of that list — the first digit of the first number, the second digit of the second number, and so on — changing each digit by 1. This must be a real number, yet it differs from every number in the list in at least one digit, so it's not in the list. The conclusion is that you really can't list the reals.
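The diagonal construction is mechanical enough to sketch in a few lines of Python. Here the "list" is necessarily finite (and the numbers are shown as bare digit strings, which sidesteps the 0.999… = 1.000… wrinkle), but the principle is the same: the constructed number disagrees with row n at digit n.

```python
def diagonal_number(rows):
    """Given a list of equal-length digit strings, build a new digit string
    that differs from row n in digit n (by adding 1 to it, mod 10)."""
    return "".join(str((int(row[i]) + 1) % 10) for i, row in enumerate(rows))

# An arbitrary (finite, for illustration) "list of real numbers" as digit strings:
rows = ["1415926", "7182818", "4142135",
        "0000000", "9999999", "3333333", "5555555"]

d = diagonal_number(rows)
print(d)
for i, row in enumerate(rows):
    assert d[i] != row[i]   # differs from every listed number
```

With an infinite list the same rule still defines every digit of the diagonal number, which is why the argument goes through for the reals as a whole.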
If you subject the computable numbers to Cantor's diagonalization process, what do you get? You're defining an algorithm to compute the diagonal from all the other computable numbers, so the diagonal must be a computable number. However, if it is a computable number, then it's not in the list, which means that computable numbers are not enumerable, and we already know that's not so.
The only way out of this seeming paradox is the conclusion that you actually can't construct a diagonal of the computable numbers. You can't construct it because you can't determine which Turing Machines compute numbers, and which get "jammed up" or stuck in undesirable loops. It also follows that there is no general process to determine whether a particular machine ever prints a particular digit, or a particular pattern of digits.
This is yet another limitation of digital computers. To determine what a Turing Machine (or algorithm) ultimately does, you essentially need to trace through the steps — in effect, to simulate the machine.
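Simulating a machine is itself straightforward; the limitation is only that simulation for any finite number of steps can never settle what the machine does *ultimately*. A toy simulator (my own simplified conventions, not Turing's notation — his machines, for instance, used alternate squares and more states) might look like this:

```python
def run_turing_machine(rules, start_state, steps):
    """Step a simple one-tape Turing machine for a fixed number of steps.
    rules: {(state, symbol): (write, move, next_state)}, move is -1 or +1.
    Returns the tape contents visited so far. Tape squares default to 0."""
    tape = {}                      # sparse tape
    head, state = 0, start_state
    for _ in range(steps):
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

# A machine that prints 0 1 0 1 ... moving right forever. (Turing's first
# example in the 1936 paper computes this same sequence, though with
# different conventions.)
rules = {
    ("b", 0): (0, +1, "c"),
    ("c", 0): (1, +1, "b"),
}
print(run_turing_machine(rules, "b", 8))   # [0, 1, 0, 1, 0, 1, 0, 1]
```

After eight steps we know eight squares of tape; whether the machine keeps behaving well for all eternity is precisely the question no general algorithm can answer.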
The indeterminacy of Turing Machines is a combination of bad news and good news. The bad news: we can't write a generalized "debugging" program that can analyze code and reveal all the bugs that hide inside. Extensive testing of software is still required, and even then we can't ever be fully confident that all the bugs have been eliminated.
But imagine if there did exist a finite algorithm that could determine the ultimate fate of Turing Machines. What would that imply about models of the human mind that are based on Turing Machines? Or cosmologies that visualize the universe as a giant Turing Machine? The nasty hobgoblin of determinism is not exactly eliminated by the indeterminacy of Turing Machines, but it's almost rendered impotent.
There is no algorithmic process to determine the future — whether it's the future of a computer program, a thought process of the human mind, or the universe as a whole.
Coming June 16, 2008! Available for pre-ordering.