Friday, January 23, 2009

The many characterizations of NP

When I first heard of nondeterministic Turing machines and the complexity class NP, I thought the idea was somewhat contrived. Alan Turing’s original work laid out a computational model which Turing believed captured all of the essential aspects of a person performing a computation. If you want to take a look at Turing’s paper (it’s a good read!), you can find a copy of it here: http://www.thocp.net/biographies/papers/turing_oncomputablenumbers_1936.pdf. Today, we are taught to think of Turing machines like specific computer programs or, in the case of universal TMs, like computers themselves. Most programmers would be aghast at the thought of a computer being able to guess inputs to functions or take different courses of action in the middle of a computation. It turns out that despite the strangeness of nondeterministic machines, the complexity class NP has many interesting equivalent definitions that make no use of nondeterminism. The goal of this blog post is to explore these equivalences in more detail.

We first saw robustness in our definitions when we showed that polytime multi-tape TMs have equivalent polytime single-tape TMs. Thus, classes like P and EXP are robust in the sense that it doesn’t matter whether we define them using single-tape or multi-tape TMs. In a similar-looking result, Savitch’s theorem bounds the space required by a deterministic simulation of a nondeterministic machine: for space-constructible $f(n) \geq \log n$, $\mathrm{NSPACE}(f(n)) \subseteq \mathrm{DSPACE}(f(n)^2)$. Hence, the class PSPACE is robust to whether we use deterministic or nondeterministic machines. In the area of low space complexity, we still don’t know whether L = NL. However, Immerman and Szelepcsényi independently showed that NL = co-NL. We’ll return to this later when we discuss descriptive complexity theory.
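To make the quadratic blowup concrete, here is a minimal sketch of the divide-and-conquer reachability idea behind Savitch’s theorem. The names `next_configs` and `all_configs` are hypothetical stand-ins for the machine’s transition relation and configuration space; this illustrates the recursion, it is not a real simulator.

```python
# Sketch of the reachability trick behind Savitch's theorem:
# can we get from configuration c1 to c2 in at most t steps?
# The recursion guesses a midpoint and solves two half-size
# subproblems, reusing the same stack space for both halves.
# `next_configs(c)` and `all_configs()` are hypothetical stand-ins
# for the machine's transition relation and configuration space.

def reachable(c1, c2, t, next_configs, all_configs):
    """True iff c2 is reachable from c1 in at most t steps."""
    if t == 0:
        return c1 == c2
    if t == 1:
        return c1 == c2 or c2 in next_configs(c1)
    half = t // 2
    # Try every possible midpoint configuration.
    return any(
        reachable(c1, cm, half, next_configs, all_configs)
        and reachable(cm, c2, t - half, next_configs, all_configs)
        for cm in all_configs()
    )
```

The recursion depth is $O(\log t)$ and each stack frame stores only a midpoint configuration of size $O(f(n))$; with $t = 2^{O(f(n))}$ possible configurations, this is where the $O(f(n)^2)$ space bound comes from.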

Of course, one of the most important questions about nondeterministic TMs is whether P = NP. While it’s straightforward to simulate a nondeterministic TM with only a quadratic space blowup, it’s not at all clear whether one can avoid an exhaustive search of computation paths in the simulation (sketched below). Steve Cook has an excellent overview of the P vs. NP problem on the Clay Mathematics website that you should check out here. Overall, the study of complexity started early on when von Neumann, Cobham, and Edmonds took interest in problems which have polytime solutions. Over time, there have been proofs that certain theories are difficult to decide (look at Presburger arithmetic or type-checking in ML, for example). Whether we should consider polynomial run time feasible is questionable: for an O(n^100) algorithm, doubling the input size multiplies the running time by a factor of 2^100. Alas, this argument seems like one best left to people actually implementing systems, where constant factors matter. :-)
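Here is a toy sketch of that exhaustive search, assuming a machine that makes one binary nondeterministic choice per step; `step` and `accepting` are hypothetical placeholders for the transition function and acceptance test.

```python
from itertools import product

# A toy exhaustive simulation of a nondeterministic machine that
# makes one binary choice per step. `step(config, bit)` is a
# hypothetical deterministic transition function; `accepting` tests
# the final configuration. We simply try all 2^t choice sequences.

def nondet_accepts(initial, step, accepting, t):
    """True iff some sequence of t choices leads to acceptance."""
    for choices in product([0, 1], repeat=t):
        config = initial
        for bit in choices:
            config = step(config, bit)
        if accepting(config):
            return True
    return False
```

Each individual path is cheap to check, but there are 2^t of them, and nobody knows how to avoid trying essentially all of them in general.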

The P vs. NP question is important from both a theoretical and a practical viewpoint, and it deserves a better response than "not unless P = 0 or N = 1." I want to argue that NP is 'natural' in the sense that, even without strange nondeterministic machines, we can still define NP in many different ways. So with that, let's take a look at a few different characterizations of NP!



Characterization 1: You only need to easily verify a 'solution'.

As we saw in class, NP is exactly those languages for which there’s a polynomial-time ‘verifier’ V for the language. We showed that for a language L in NP there’s some polytime TM V(.,.) and a constant c such that x is in L if and only if there exists a certificate string y with |y| < |x|^c for which V(x,y) accepts; if no such certificate exists, then x isn’t in L. The most basic and intuitive example we saw was SAT: given a propositional formula and a truth assignment, you can easily check whether the assignment satisfies the formula. If there is no satisfying assignment, then the formula is not in SAT. Sometimes it’s much more tangible to work with this characterization of NP: come up with an appropriate notion of a certificate and simply write a reasonably fast program which checks certificate validity, as in the sketch below. Furthermore, this definition of NP doesn’t require the machinery of nondeterministic TMs. A consequence of P = NP under this characterization would be that finding solutions (certificates) is no harder (up to polynomial factors) than verifying them. Alternatively, if P != NP, then there is some language for which it’s hard to find solutions. As SAT is NP-complete, SAT would have to be one such difficult language, and we couldn’t quickly find satisfying assignments for Boolean formulas.
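For concreteness, here is a minimal certificate checker for SAT in Python. The clause encoding (signed variable indices) is my own choice for the sketch, not anything from class.

```python
# A sketch of the verifier view of NP for SAT. The formula is a list
# of clauses, each clause a list of signed variable indices
# (e.g. -2 means "not x2"); the certificate is a truth assignment.
# Checking takes time linear in the formula size, even though
# *finding* a satisfying assignment may be hard.

def verify_sat(clauses, assignment):
    """Return True iff `assignment` (dict var -> bool) satisfies every clause."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # some clause has no true literal
    return True

# Example: (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: True, 3: False}))  # True
```

The check runs in time linear in the size of the formula, which is exactly what the verifier definition demands.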




Characterization 2: Probabilistically Checkable Proofs

For the sake of this blog post, I'll provide a high-level view of PCPs without going into the formalities. Arora and Barak’s complexity book provides nice coverage of interactive proof systems and PCPs. In the case of SAT, we can determine whether a formula A is in SAT by verifying some satisfying assignment to the variables. An incredible result known as the PCP Theorem tells us that every NP language has a probabilistic certificate verifier which looks at only a constant number of bits of the certificate w, and is allowed O(log n) random bits to perform the check. As we did in the case of BPP and RP, we look at a probability distribution over random input strings r. If x is in the language, there is a certificate on which the verifier always accepts (x,w,r). If x is not in the language, then for every claimed certificate, the probability that the verifier incorrectly accepts (x,w,r) must be less than 1/2.
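To get the flavor of such a spot check, here is a toy random clause checker for a 3-CNF under a claimed assignment. This is emphatically not the PCP theorem’s verifier (the real theorem requires the proof to be specially encoded so that any wrong proof violates a constant fraction of local tests); it only illustrates the idea of randomized local checking.

```python
import random

# Toy illustration of randomized spot-checking, NOT the actual PCP
# verifier. We sample q random clauses of a CNF and check them
# against the claimed assignment; if a delta-fraction of clauses is
# violated, each sample catches a violation with probability delta.

def spot_check(clauses, assignment, q, rng=random):
    for _ in range(q):
        clause = rng.choice(clauses)
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # caught a violated clause: reject
    return True  # all sampled clauses satisfied: accept
```

If a delta-fraction of clauses is violated, q = O(1/delta) samples reject a bad assignment with constant probability; the content of the PCP theorem is an encoding that forces delta to be a constant for every wrong proof.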

What implications does this have in practice? One example is that you can be arbitrarily certain of the correctness of a supposed satisfying assignment A to a 3-CNF formula Phi while checking only a constant number of variable assignments and using a small number of random bits. Mathematicians trying to verify that a long (but still polynomial-length) proof of a theorem is correct can do so with great certainty without reading the whole thing! Since theorems with polynomial-length proofs in a fixed formal system form a language in NP, reviewers would only need to spot-check the proof in order to be confident of its correctness. That sure cuts down on the amount of work a reviewer has to perform, but it surely doesn’t satisfy a mathematician’s desire for 100% certainty that a proof holds. The PCP theorem provides an interesting probabilistic definition of NP.




Characterization 3: Descriptive Complexity Theory and doing away with machines

Up to this point we’ve focused on looking at NP languages in terms of computational models. So far we’ve seen two verifier-based approaches, but what about a view of NP which makes no reference to Turing at all? Immerman’s “Descriptive Complexity: a Logician’s Approach to Computation” provides a nice introduction to the topic of descriptive complexity. In general, NP languages characterize some property of strings. For example, the language SAT contains those formulas which satisfy the property “This formula is satisfiable.” Descriptive complexity theory attempts to characterize the hardness of defining these properties in a logical framework.

Immerman says that “we view inputs as finite logical structures, e.g., a binary string w”: a binary string can be encoded as a structure whose universe is the set of bit positions, together with a unary relation telling us whether bit i is set and an order relation on the positions. We've done this sort of thing before when arguing that we only need to consider TMs over the alphabet {0,1}. Let’s look at how we can express the language of 3-colorable graphs in this setting using such structures, unary ‘color’ relations, a binary edge relation, and existential quantification. The structure contains an encoding of the graph’s vertices and edges, and the relation Edge(.,.) determines whether two vertices are adjacent. A graph is 3-colorable iff there’s an assignment of one of three colors to every vertex such that no two adjacent vertices are colored the same. Logically, this is expressed as

$\exists R \,\exists G \,\exists B \;\forall x \,\big[\, (R(x) \lor G(x) \lor B(x)) \;\land\; \forall y \,\big( \mathrm{Edge}(x,y) \rightarrow \neg(R(x) \land R(y)) \land \neg(G(x) \land G(y)) \land \neg(B(x) \land B(y)) \big) \,\big]$

where R, G, and B are unary relations.

The unary relations assign colors to all of the vertices, and the second part makes sure that no two adjacent vertices are colored the same. This formula is an example of a second-order existential formula: it begins with second-order existential quantification, and the rest of the formula is a first-order statement. Such formulas look like “There exist relations R, S, T, … such that (first-order formula over the universe).” The first result in descriptive complexity theory came when Ronald Fagin showed that a set of structures (e.g., collections of binary strings) Q is in NP if and only if Q is the set of structures which satisfy some second-order existential formula. Notice that this definition makes no mention of Turing machines *or* time! Earlier I made a comment about NL = co-NL, and I’ll finish that thought here. Immerman proves that NL is equivalent to “the set of problems describable in first-order logic with the addition of a transitive closure operation.” After playing around with the logic and the properties of transitive closure operations, Immerman is able to show that for $s(n) \geq \log n$, $\mathrm{NSPACE}[s(n)]$ is closed under complement. Describing complexity classes in terms of logical formulas provides a different perspective on computation and introduces an interesting set of tools for approaching various problems in complexity theory.
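The “guess relations, then check a first-order condition” shape of Fagin’s theorem can be mimicked, inefficiently, by brute force. Here is a sketch for 3-colorability in which the outer loop plays the second-order existential quantifier and the inner test is the first-order part of the formula; the graph encoding is my own choice for the illustration.

```python
from itertools import product

# Brute-force sketch of the ESO formula for 3-colorability: the
# outer loop plays the second-order existential quantifier, guessing
# the unary relations R, G, B as a color for each vertex, and the
# inner check is the first-order part of the formula. Exponential,
# of course -- the existential guess is exactly where the hardness lives.

def three_colorable(vertices, edges):
    for coloring in product("RGB", repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        # First-order part: no edge joins two same-colored vertices.
        if all(color[u] != color[v] for (u, v) in edges):
            return True
    return False

# Example: a triangle is 3-colorable (K4 would not be).
print(three_colorable([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))  # True
```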

Hopefully this post gives you some perspective about all kinds of interesting characterizations of NP. Based on some of the logic I've studied, I'm interested in looking more at descriptive complexity theory. If you're interested in exploring this approach to complexity theory in more detail, please get in touch with me.


Note to future bloggers: DO NOT write your post in MS Word and copy it into the rich text edit box. It ends up copying over a bunch of wacky HTML that you have to edit out by hand in order to get your post to work... :-(



2 comments:

  1. Some remarks on Descriptive Complexity (aka Finite Model Theory):
    1. To be a bit more precise, Fagin's theorem says a property of finite (relational) structures is in NP iff it's "definable" by an existential SO (ESO) sentence. Evidently co-NP then corresponds to universal SO (USO). So NP is different from co-NP iff there is a property of finite structures that is definable in USO but not in ESO, or the other way round. More interestingly, not long after Fagin's theorem it was proved that for monadic SO, i.e., when only unary predicates are involved, the existential and universal fragments can indeed be separated by simple properties; e.g., connectivity of a graph is definable in monadic USO but not in monadic ESO.
    2. Descriptive Complexity is concerned with ways of cooking up special logics to capture different complexity classes. A central question is: is there a logic that captures P? If it could be proved that there is no such logic, then P is not NP, since NP is easily captured by ESO. People have different opinions on this issue: many believe and try to prove it's impossible, while some are seriously working toward finding such a logic (surely different from ESO).

  2. A succinct summary of known characterizations of various complexity classes in terms of logics appears on Neil Immerman's page here.

