In the lectures we've skipped some technical definitions so as to not get bogged down. But these are good things to know; they'll come up again and again if you study Complexity Theory.
1. Constructibility: A function T(n) is time-constructible if there is a DTM which, on input 1^n, outputs T(n) (in binary) in time O(T(n)). A function S(n) is space-constructible if outputting S(n) is doable in space exactly S(n). As mentioned in class, we also invariably restrict attention to time bounds that are at least n and space bounds that are at least log n. In this case, every non-ridiculous time bound you would ever use is time-constructible, and similarly for space. The Time and Space Hierarchy Theorems require constructibility.
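To make the definition concrete, here is a toy Python sketch for the bound T(n) = n^2 (the function name and the choice of T are just for illustration): on input 1^n it outputs T(n) in binary, and an actual DTM could carry out this arithmetic well within O(T(n)) steps.

```python
def construct_time_bound(unary_input: str) -> str:
    """Toy illustration of time-constructibility for T(n) = n^2:
    on input 1^n, output T(n) in binary."""
    assert set(unary_input) <= {"1"}, "input must be of the form 1^n"
    n = len(unary_input)
    t = n * n          # T(n) = n^2, a typical constructible bound
    return bin(t)[2:]  # binary representation of T(n)

print(construct_time_bound("1111"))  # n = 4, T(4) = 16 -> "10000"
```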
2. Promise problems: This is a generalization of a "language". A promise problem is a pair (Pi_Y, Pi_N) of disjoint subsets of {0,1}*. A TM "decides" the promise problem if it accepts all strings in Pi_Y and rejects all strings in Pi_N. Note that a promise problem is a language iff Pi_Y ∪ Pi_N = {0,1}*. The name "promise" here refers to the fact that you can think of the TM as being "promised" that the input string is always in Pi_Y ∪ Pi_N. Generally speaking, if there is an "efficient" algorithm for checking whether x ∈ Pi_Y ∪ Pi_N then the promise problem is not much different from a language (one can put the "easily decidable" strings outside Pi_Y ∪ Pi_N into either Pi_Y or Pi_N, say).
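A tiny Python sketch of the definition, using small finite sets as a stand-in for subsets of {0,1}* (the particular sets and the decider are hypothetical examples):

```python
# A promise problem as a pair of disjoint sets of strings. The "decider"
# below is only required to be correct on strings inside the promise
# PI_Y ∪ PI_N; elsewhere its answer is arbitrary.
PI_Y = {"00", "11"}   # yes-instances (toy example)
PI_N = {"01", "10"}   # no-instances
assert PI_Y.isdisjoint(PI_N)

def decider(x: str) -> bool:
    # Happens to accept length-2 strings whose bits agree; correct on the
    # promise, unconstrained on strings like "0" or "111" outside it.
    return len(x) == 2 and x[0] == x[1]

assert all(decider(x) for x in PI_Y)
assert not any(decider(x) for x in PI_N)
```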
3. Syntactic/semantic classes: This is not a 100% rigorous definition; we illustrate it via examples. A complexity class defined via time-bounded machines (TMs, NTMs, counting TMs, etc.) is said to be syntactic if there is an "effective enumeration" of the machines defining the class. For example, if t(n) is a time-constructible function, the class DTIME(t(n)) is a syntactic class, since you can effectively enumerate the TMs running in time at most t(n). Similarly for NTIME(t(n)) -- the NTMs running in time at most t(n) are effectively enumerable. Classes like P, NP, PSPACE are "syntactic". Syntactic classes have Hierarchy Theorems and they also tend to have "generic complete problems". A semantic class is one which is not syntactic; generally, this means it is defined by a machine class "with a promise". The classic example is BPP: Here there is no way to effectively enumerate the randomized TMs with the property that on each input x they either accept with probability at least 2/3 or reject with probability at least 2/3. (Indeed, checking whether an RTM has this property is undecidable.) Instead, languages in BPP are defined only by machines M that satisfy the promise "M is a BPP machine".
4. Promise classes: For semantic classes like BPP and RP there is an associated "promise class" of promise problems, denoted PromiseBPP, PromiseRP, etc. For example, PromiseBPP is the class of all promise problems (Pi_Y, Pi_N) such that there is a polynomial-time RTM which accepts all strings in Pi_Y with probability at least 2/3 and accepts all strings in Pi_N with probability at most 1/3. The RTM may have any behavior on strings not in Pi_Y ∪ Pi_N.
5. Uniformity: Again, this is not 100% well-defined, but a complexity class is said to be uniform if it is defined by one machine that operates for all input lengths. It is said to be nonuniform if there is a different machine defining it for each input length. Roughly, classes with TM-based definitions are "uniform", and classes with circuit-based definitions are "nonuniform".
6. Reductions: We've talked about how it is important to specify the complexity of reductions (poly-time, log-space, even weaker), but there's also another distinction. A many-one (or Karp) reduction from A to B is one which maps x to f(x) such that x ∈ A iff f(x) ∈ B. An oracle (or Cook) reduction is one which solves A by using an algorithm for B as an oracle. Having a Karp reduction is "stronger" than having a Cook reduction, and we generally define completeness for complexity classes in terms of having a Karp reduction because we can: e.g., all textbook NP-completeness reductions are many-one. But sometimes it's better to define completeness in terms of Cook reductions, as we saw in class.
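The distinction can be sketched in Python. Below, a Karp reduction maps an INDEPENDENT-SET instance to a single CLIQUE instance (complement the graph), while a Cook reduction computes the maximum clique size by making many calls to a decision oracle. The graph encoding and the brute-force oracle are stand-ins chosen for the sketch, not part of any standard library.

```python
from itertools import combinations

def clique_oracle(graph, k):
    """Brute-force stand-in for a CLIQUE decision oracle: does `graph`
    (vertices 0..n-1, edges as frozensets) contain a k-clique?"""
    n, edges = graph["n"], graph["edges"]
    return any(all(frozenset(e) in edges for e in combinations(S, 2))
               for S in combinations(range(n), k))

def karp_reduce_indset_to_clique(graph, k):
    """Many-one (Karp) reduction: G has an independent set of size k
    iff the complement graph has a k-clique. One output, no oracle."""
    n = graph["n"]
    all_edges = {frozenset(e) for e in combinations(range(n), 2)}
    return {"n": n, "edges": all_edges - graph["edges"]}, k

def cook_max_clique(graph):
    """Oracle (Cook) reduction: solve "max clique size" by querying
    the decision oracle repeatedly."""
    return max((k for k in range(1, graph["n"] + 1) if clique_oracle(graph, k)),
               default=0)

# 4-cycle 0-1-2-3-0: max clique is an edge (size 2); independent sets
# {0,2} and {1,3} become 2-cliques in the complement.
G = {"n": 4, "edges": {frozenset(e) for e in [(0,1),(1,2),(2,3),(3,0)]}}
comp, k = karp_reduce_indset_to_clique(G, 2)
print(clique_oracle(comp, 2), cook_max_clique(G))  # True 2
```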
7. C-hard: For a class C we said that L is C-complete if: (a) L is in C; (b) every problem in C reduces to L. We say that L is C-hard when just (b) holds.
8. Self-reducibility: A language L is said to be downward self-reducible if one can decide L efficiently using an oracle for L on instances of strictly smaller size. This sounds funny, but it makes sense. The classic example is SAT: If you have a black box which can solve any SAT instance of size at most n-1, you can use it to efficiently solve any instance of size at most n; you use the classic trick of setting x_1 to 0 and checking satisfiability, and setting x_1 to 1 and checking satisfiability. A language L is said to be randomly self-reducible if one can decide L efficiently with high probability using an oracle for L which works with high probability on randomly chosen instances. An example (as we'll see at some point in class) is the 0-1 PERMANENT problem: If you can solve randomly chosen 0-1 PERMANENT instances with high probability, you can actually solve all 0-1 PERMANENT instances with high probability.
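Here's a Python sketch of the classic SAT trick, extended to extract a satisfying assignment: every oracle call is on a restricted formula (one fewer variable). The brute-force `sat_oracle` is a stand-in for the black box, and the DIMACS-style clause encoding is an assumption of the sketch.

```python
from itertools import product

def sat_oracle(clauses, n):
    """Brute-force stand-in for a SAT oracle. `clauses` is a list of
    lists of nonzero ints (DIMACS-style literals), variables 1..n."""
    return any(all(any((lit > 0) == assign[abs(lit) - 1] for lit in cl)
                   for cl in clauses)
               for assign in product([False, True], repeat=n))

def restrict(clauses, var, value):
    """Set `var` to `value`: drop satisfied clauses, shrink the rest."""
    out = []
    for cl in clauses:
        if (var if value else -var) in cl:
            continue  # clause already satisfied
        out.append([lit for lit in cl if abs(lit) != var])
    return out

def find_assignment(clauses, n):
    """Downward self-reduction: every oracle query is on a strictly
    smaller (restricted) formula."""
    assign = []
    for var in range(1, n + 1):
        for value in (False, True):
            trial = restrict(clauses, var, value)
            if sat_oracle(trial, n):
                assign.append(value)
                clauses = trial
                break
        else:
            return None  # neither restriction satisfiable: unsatisfiable
    return assign

# (x1 or x2) and (not x1 or x2): the reduction settles x1, then forces x2.
print(find_assignment([[1, 2], [-1, 2]], 2))  # [False, True]
```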
9. Padding: Padding is the general idea of taking a language L and considering the language pad(L) = { x10^(f(|x|)) : x ∈ L }, where f is some fast-growing function. I.e., it's the idea of "padding" all inputs with a large number of "blanks". It is used for classic upward collapse results such as "P = NP implies EXP = NEXP". To see this, suppose P = NP. Let L be a language in NEXP; say it's in NTIME(2^(n^c)). Let L' be the padded version of L, where f(n) = 2^(n^c). Now it's easy to see that L' is in NP: just run the NTIME(2^(n^c)) algorithm for L, which is now "polynomial time" because the input length has been padded. Hence by assumption, L' is in P. But then clearly L is in EXP: to decide x in exponential time, just pad x and then run the P-algorithm for L'.
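A minimal Python sketch of the padding map itself, using '0' as the "blank" symbol and a '1' separator so the original input can be recovered (these encoding choices are assumptions of the sketch):

```python
def pad(x: str, f) -> str:
    """pad(L) = { x 1 0^(f(|x|)) : x in L }: append a separator '1' and
    f(|x|) blank '0's, blowing the input length up to about f(|x|)."""
    return x + "1" + "0" * f(len(x))

def unpad(y: str) -> str:
    """Recover x: strip the trailing blanks and the separator."""
    return y.rstrip("0")[:-1]

f = lambda n: 2 ** n   # fast-growing padding, e.g. for an NTIME(2^n) language
px = pad("101", f)
print(len(px))         # 3 + 1 + 2^3 = 12
assert unpad(px) == "101"
# An algorithm running in time poly(len(px)) = poly(2^|x|) is "polynomial
# time" on the padded instance but exponential time measured in |x|.
```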
10. Succinctness: This is the spiritual opposite of padding. For some languages L, there is a version SUCCINCT-L where the inputs for L are succinctly described. The classic example is where L is CIRCUIT-VALUE, and SUCCINCT-L is SUCCINCT-CIRCUIT-VALUE. Here, an input for SUCCINCT-CIRCUIT-VALUE is a "succinct encoding" of an (exponentially large) circuit C: The input itself is a small circuit which, on input i, "outputs the ith gate in C" -- its type (AND, OR, NOT, input, output), and the indices of the gates that feed into it. Typically, the "succinct" version of a complete language is itself complete for the "exponentially larger" class. For example, you can check that since CIRCUIT-VALUE is P-complete, SUCCINCT-CIRCUIT-VALUE is EXP-complete. And since SAT is NP-complete, SUCCINCT-SAT (where the input is a circuit describing an exponentially long formula) is NEXP-complete.
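To see what a succinct encoding looks like, here is a Python sketch (standing in for the small circuit) that describes a complete OR-tree over 2^K inputs via heap indexing. The description has size poly(K) even though the circuit it describes has about 2^(K+1) gates, which we never write down; the gate-type labels are just this sketch's convention.

```python
K = 20  # the described circuit has ~2^(K+1) gates; we never build it

def gate(i: int):
    """Succinct description of an OR-tree over 2^K inputs, heap-indexed:
    given a gate index i, return its type and the indices feeding into it.
    This function plays the role of the input to SUCCINCT-CIRCUIT-VALUE."""
    if i == 1:
        return ("OUTPUT-OR", 2, 3)       # root
    if i < 2 ** K:
        return ("OR", 2 * i, 2 * i + 1)  # internal OR gate
    return ("INPUT", i - 2 ** K)         # leaf: the (i - 2^K)-th input bit

print(gate(1), gate(5), gate(2 ** K + 7))
```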
Saturday, January 31, 2009
By the way, here are some homework hints: 3a) complete languages? 3b) padding. 4b) try showing it for Sigma_4, with diagonalization; 4c) there's already a hint.