Monday, January 12, 2009

History of Complexity

For an early blog post, I was planning to read up on the history of complexity, check out the original sources, and write a little about "who first thought of measuring running time in terms of input string length?" and "what were Cobham and Edmonds thinking when they suggested P as a good class to study?"

But after a bunch of research I found this article by Stephen Cook on the very subject, written on the occasion of his winning the Turing Award. And it seemed hard to imagine how I could write anything more authentic or instructive.

So I encourage you to read the article. But as you do so, let me add a few random comments that spring to my mind. Also, please post any questions or comments that you might have...

. Cook's article was written five years before the discovery of the famous 1956 letter from Gödel to von Neumann -- in which Gödel quite explicitly asks whether NP could be in P. Bear in mind that the notions of P and NP didn't even exist in 1956!

. I've never heard of Cook's problem about printing out $\sqrt{2}$ at a linear rate -- anyone got a reference for it? Is it still open?
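To make the question concrete, here is a minimal sketch (my own illustration in Python, not anything from Cook's article) of the naive approach, which produces a prefix of the digits with one big-integer square root:

```python
from math import isqrt  # integer square root, available in Python 3.8+

def sqrt2_prefix(n: int) -> str:
    """The leading '1' of sqrt(2) followed by its first n fractional digits.

    floor(sqrt(2) * 10^n) equals isqrt(2 * 10^(2n)), so converting that
    integer to a decimal string yields the digits directly.
    """
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_prefix(10))  # 14142135623, i.e. sqrt(2) = 1.4142135623...
```

Each call does a fresh square root on a number with about 2n digits, so streaming out digits this way costs far more than linear time in total -- which is exactly what makes the linear-rate question nontrivial.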

. As Cook himself notes, much of the key development of computational complexity took place among the graduate students at Princeton in the early-to-mid 1960's. I suppose this shouldn't be too surprising, given that it was at Princeton and the Institute for Advanced Study in the '30s and '40s that you had all the folks working on foundations of mathematics but with a "computer science" slant (Gödel, von Neumann, Church, Turing), plus other folks like Kuhn, Tucker, and Minsky. This is the gang that begat the complexity-theory founders Cook mentions -- Rabin, Stearns, Bennett, and Ritchie, as well as Aho, Hopcroft, and Ullman, and CMU's own Turing Award winners Manuel Blum and Dana Scott. The only founding complexity-theory folks from this era I could find who weren't Princetonians were Hartmanis (from Caltech) and the very mysterious Alan Cobham, who appears to have no PhD and to hail from parts unknown. Anyone got any information on him?

. These are all men, by the way; of course, Princeton didn't even have women undergraduates until the end of the '60s. Harvard, where Cook was, seems to have been the only non-backwards place of the time, producing several female complexity-theory folks, most notably Greibach.

. The "right" way to define a RAM (random access machine) is not as controversial today as I guess it was when Cook was writing. I think everyone agrees that most sensible definitions are fine and equivalent up to log factors, and that the "random access Turing machine" (see here) is a reasonable standard.

. Cook writes "The identification of P with the tractable (or feasible) problems has been generally accepted in the field since the early 1970's." I'm not completely sure about this today. Certainly everyone agrees that P is the most reasonable, natural, and best class to study, but there is a great deal of interest these days in linear, quasilinear (linear times log factors), and even sublinear time algorithms. In fact, it's not so unreasonable to argue that an algorithm is truly efficient in practice only if it is quasilinear time. Perhaps more on this in a future post...
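To put rough numbers on the quasilinear claim, here's a back-of-the-envelope sketch (my own; the throughput figure is a round-number assumption, not a measurement):

```python
import math

OPS_PER_SEC = 10**9  # assumed machine throughput; a ballpark, not a benchmark

for n in (10**6, 10**9):
    for name, ops in (("n", n),
                      ("n log n", n * math.log2(n)),
                      ("n^2", n**2)):
        print(f"n = {n:.0e}: {name:>7} ~ {ops:.1e} ops ~ {ops / OPS_PER_SEC:.1e} sec")
```

At n = 10^9, a perfectly realistic input size these days, a quasilinear algorithm finishes in about half a minute, while a quadratic one would need roughly 30 years.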

. Regarding Cook's list of the most interesting and important problems in P...

- More than 35 years after the Schönhage-Strassen algorithm Cook mentions, a faster multiplication algorithm was found. It runs in time $n \log n \cdot 2^{O(\log^* n)}$ (!!) and was given in an award-winning 2007 paper of Fürer over at Penn State. (See the sketch after this list for just how slowly $\log^* n$ grows.)

- Coppersmith and Winograd soon thereafter improved the matrix multiplication time to roughly $n^{2.38}$. This has stood for 20 years.

- Primes is in P, thanks to Agrawal, Kayal, and Saxena in 2004.

- Linear Programming being in P has subsequently proven to be of enormous importance.

- To be honest, there aren't so many additional notable polynomial-time algorithms discovered since then. I might point to the Convex Programming work due to Grötschel, Lovász and Schrijver (1981), the theory of Markov Chain Monte Carlo due to Jerrum-Sinclair and Dyer-Frieze-Kannan (1988), and recognizing minor-closed graph families due to Robertson and Seymour (1983--2004). Any other candidates?
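Back to Fürer's bound: to see just how negligible the $2^{O(\log^* n)}$ factor is, here is a minimal sketch (mine, in Python) of the iterated logarithm $\log^* n$ -- the number of times you must apply $\log_2$ before the value drops to 1 or below:

```python
import math

def log_star(n) -> int:
    """Iterated logarithm: how many applications of log2 bring n down to <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

for label, n in (("2^16", 2**16), ("2^65536", 2**65536)):
    print(f"log*({label}) = {log_star(n)}")
# Prints log*(2^16) = 4 and log*(2^65536) = 5: for any input you could
# ever write down, log* n is at most about 5.
```

So for all practical purposes, Fürer's running time is $n \log n$ times a constant.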

. Cook devotes a lot more space to #P-completeness than to NP-completeness (probably out of modesty), but the latter continues to be much more important and influential, though we will certainly study the former in our class.

. Similarly, the study of parallel complexity and the class NC has not proven quite so influential as it looked in the '80s, but we will certainly study it too.

. Re lower bounds: "Structured lower bounds" we're pretty good at. But genuine lower bounds? We still have absolutely no clue; people feel even less enthusiastic about the prospect of proving good time lower bounds than they did when Cook wrote. Cook's speculation about separating P from L at the end of the article looks overly optimistic today.

. That said, on the Section 4.3 topic of time-space product lower bounds, there has been a lot of good work recently, especially by recent CMU complexity grad Ryan Williams. We plan to cover some of this in our course.

3 comments:

  1. An early non-obvious polynomial-time algorithm was the algorithm to decode BCH codes (the method also works for Reed-Solomon codes) up to half the minimum distance. This was published in 1960 in "Encoding and error-correction procedures for the Bose-Chaudhuri codes" by W. W. Peterson.

    The 2nd paragraph of the introduction and the first paragraph on page 463 explicitly highlight that the running time is a polynomial with small exponent. While this is before both Cobham and Edmonds, the broader applicability of polynomial running time as a metaphor for good algorithms does not seem to have been realized (at least explicitly) in this paper.

  2. It looks like Alan Cobham did not appear out of nowhere after all, according to the following excerpt from an interview with Stephen Cook:

    Frana: Was Alan Cobham already at Harvard at this time?

    Cook: Well, no. Before I got there he was a graduate student in the Mathematics Department and was close to getting his Ph.D. He wrote a thesis, but he didn't bother to complete the minor thesis requirement. Instead, he just went off to work for IBM Research in Yorktown Heights.

  3. You might be interested in reading my blog "Financial crisis? It's a pyramid, stupid."

    http://gregpytel.blogspot.com/

    Basically, I use basic complexity tools (the growth of the exponential function) to look at the current financial crisis from the perspective of Cobham's thesis. Any comments will be welcome.

