Join me, fellow students, in a quest to discover Complexity Theory's version of the 'Holy Grail': lower bounds on the size of circuits. This quest began in the 1970s, when the question of $P$ vs. $NP$ first arose. In those early days, when people were naive, optimistic, and still new to the question $P \stackrel {?} {=} NP$, they sought to separate $NP$ from $P$. One direction that initially seemed promising was through circuit size. Clearly, $P \subseteq P/poly$. Furthermore, Karp showed that many natural problems are $NP$-complete, and we know that functions with super-polynomial circuit complexity do exist (indeed, by a counting argument, most functions require exponential-size circuits). So the hope was that one could show that some function in $NP$ must have large circuit size, thus proving $P \neq NP$. In fact, a super-polynomial lower bound for some function in any level of the $PH$ would suffice for such a separation. And since Toda's theorem gives $PH \subseteq P^{\#P}$, the hope was that the Permanent, a $\#P$-complete function, requires super-polynomially large circuits.
Recall that the Permanent is a function of a matrix of numbers: $Perm_n(M) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} M_{i, \sigma(i)}$, whereas circuits were defined over boolean values. So we introduce arithmetic circuits. Arithmetic circuits have $\oplus, \otimes$ gates (with fan-in 2) and $\ominus$ gates (with fan-in 1), representing the arithmetic $+, \times, -$ operations over some field $F$, and may additionally use constants from $F$. Clearly, every polynomial (and in particular the Permanent and the Determinant) can be represented as an arithmetic circuit, and furthermore, one can simulate any boolean circuit by an arithmetic circuit (representing NOT as $1 \ominus x$ and AND as $x\otimes y$). Using arithmetic circuits, Valiant defined in 1979 the arithmetic analogs of $P$ and $NP$:
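For concreteness, here is how the simulation handles the remaining boolean gates on inputs from $\{0,1\}$: by De Morgan, $x \vee y = 1 \ominus ((1 \ominus x) \otimes (1 \ominus y))$, so, for example, the boolean formula $(x \wedge y) \vee \neg z$ becomes the arithmetic circuit computing $1 \ominus ((1 \ominus x\otimes y)\otimes z)$, which agrees with it on every $\{0,1\}$ assignment.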
Definition 1: A polynomial $p(\bar x)$ of degree $n^{O(1)}$ is in $VP$ if there exists an arithmetic circuit family of size $n^{O(1)}$ that computes it.
Definition 2: A polynomial $p(\bar x)$ of degree $n^{O(1)}$ is in $VNP$ if for some $m$ (polynomially bounded in $n$) there exists a family of polynomials $q(\bar x, \bar y)$ in $n+m$ variables, with $q \in VP$, s.t. $p(\bar x) = \sum_{\bar y \in \{0,1\}^m} q(\bar x, \bar y)$.
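As a toy example of Definition 2, take $q(\bar x, \bar y) = \prod_{i=1}^{n} (y_i x_i + 1 - y_i)$, which clearly lies in $VP$. Summing over the boolean cube gives $\sum_{\bar y \in \{0,1\}^n} q(\bar x, \bar y) = \prod_{i=1}^{n} (1 + x_i)$, since each $y_i$ independently selects either the factor $x_i$ (when $y_i = 1$) or the factor $1$ (when $y_i = 0$). This exhibits $\prod_i (1 + x_i)$ as a member of $VNP$ (this particular polynomial is of course also in $VP$); the Permanent itself lies in $VNP$, with $\bar y$ encoding the permutation $\sigma$.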
Valiant showed that any polynomial in $VNP$ is a projection of the Permanent (over fields of characteristic other than $2$); that is, it can be obtained from $Perm_m$, for some polynomially bounded $m$, by substituting variables and constants for the matrix entries. He also characterized arithmetic complexity in terms of the Determinant, which will take us back to linear algebra. Recall that an affine transformation of vector spaces is a combination of a linear transformation and a translation. We look at affine maps from the vector space $F^d$ (the space of assignments to the variables $x_1, x_2, \ldots, x_d$) to the vector space $M_n(F)$ of all $n\times n$ matrices; such a map sends a point $\bar x$ to a matrix each of whose entries is an affine function of $\bar x$.
Definition 3: The determinant complexity of a polynomial $p\in F[x_1, \ldots, x_d]$ is the smallest $n$ for which there exists an affine map $f:F^d \rightarrow M_n(F)$ s.t. $p \equiv \det_n\circ f$.
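A tiny example: $Perm_2\begin{pmatrix} a & b\\ c & d\end{pmatrix} = ad + bc$, and the affine map $f(a,b,c,d) = \begin{pmatrix} a & -b\\ c & d\end{pmatrix}$ satisfies $\det_2\circ f = ad + bc$, so the determinant complexity of $Perm_2$ is $2$. For $n \geq 3$, no such entry-wise sign change can turn the Permanent into a Determinant, which is a first hint that the Permanent is the harder of the two.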
Valiant showed that if $p$ can be computed by an arithmetic formula of size $c$, then its determinant complexity is at most $c+2$, so polynomials with small formulas have polynomially bounded determinant complexity. Hence one aspires to show that the determinant complexity of the Permanent is super-polynomial, a necessary step towards separating $VNP$ from $VP$. Alas, until 2004, the best determinant-complexity lower bounds for the Permanent were linear in $n$. In 2004, Mignon and Ressayre showed a quadratic lower bound:
Theorem: If the characteristic of $F$ is $0$, then the determinant-complexity of the Permanent is at least $n^2/2$.
The proof of this theorem naturally involves some linear algebra, and requires the definition of a special linear operator. If $f:V\rightarrow F$ is a differentiable function, then we can define the tangent map $Tf: V\rightarrow V^*$ by $Tf(v) = \langle \nabla f(v), \cdot \rangle$. I.e., we map every $v\in V$ to a linear functional $T_vf$, where $T_vf(w)$ is the scalar product of $\nabla f(v)$ and $w$. Using the tangent map and the fact that the space of linear functionals is itself a vector space, we can now define a tangent to the tangent. Let $T^2f: V \rightarrow \textrm{Hom}(V, V^*)$ be defined by $T^2f(v) = (\cdot)^T H_f(v)(\cdot)$, where $H_f(v) = \left(\frac {\partial^2 f}{\partial x_i \partial x_j}(v)\right)_{i,j}$ is the Hessian of $f$ at $v$. I.e., we map every $v\in V$ to the bilinear form $A_v(\cdot, \cdot)$ given by the 2nd order partial derivatives of $f$ at $v$. Alternatively, one can think of $T^2f(v)$ as a map $T_{v}^2f:V\rightarrow V^*$, where $(T_{v}^2f)(w)$ is the linear functional $\langle A_v w, \cdot\rangle$. Recall that any bilinear form (and in particular $A_v$) has a rank, which is the rank of its matrix (here, the Hessian $H_f(v)$).
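As a quick sanity check, take $V = M_2(F)$ and $f = Perm_2$, i.e. $f(X) = x_{11}x_{22} + x_{12}x_{21}$. Since this is a quadratic form, its Hessian is the same at every point: ordering the variables as $(x_{11}, x_{12}, x_{21}, x_{22})$, we get $H_f = \begin{pmatrix} 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix}$, which has full rank $4 = n^2$. So for $n=2$, the rank claim of Claim 2 below holds at every point.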
Here is the proof of Mignon and Ressayre's theorem in a nutshell. Let $f:M_n(F)\rightarrow M_m(F)$ be an affine function s.t. $Perm_n = \det_m\circ f$. Then the following 3 claims hold:
Claim 1. $Perm_n$ is isomorphic to the restriction of $\det_m$ to the image of $f$, i.e. $Perm_n \simeq g$ where $g = \det_m |_{\textrm{Im}(f)}$. Since $g$ is a restriction of $\det_m$ (its Hessian at a point is the restriction of the Hessian of $\det_m$ to a subspace), we get $\textrm{rank}(T_{f(A)}^{2} g) \leq \textrm{rank}(T_{f(A)}^{2}\det_m)$ for every $A$.
Claim 2. There exists some $A\in M_n(F)$ at which the Permanent vanishes (so that $f(A)$ is a non-invertible $m\times m$ matrix), s.t. $T_{A}^{2} Perm_n$ has rank $= n^2$. (This is the most non-trivial part.)
Claim 3. For any non-invertible matrix $A \in M_m(F)$ it must hold that $\textrm{rank}(T_{A}^{2} \det_m) \leq 2m$.
Combining the 3 claims, we deduce the chain of inequalities $n^2 = \textrm{rank}(T_{A}^2Perm_n) = \textrm{rank}(T_{f(A)}^2 g) \leq \textrm{rank}(T_{f(A)}^2\det_m) \leq 2m$, where $A$ is the matrix from Claim 2 and the last inequality is Claim 3 applied to the non-invertible matrix $f(A)$. Hence $m \geq n^2/2$.
None of the proofs of Claims 1, 2 or 3 is hard, and they all boil down to looking at the matrix generated by the $T^2$ operator. If $X$ is our variable matrix, then we denote $P = Perm_n(X)$. For the minor $X_{i,j}$ (removing the $i$th row and the $j$th column of $X$), we denote $P_{i,j} = Perm_{n-1}(X_{i,j})$. For any $i\neq i', j\neq j'$, we denote by $X_{\{i,i'\},\{j,j'\}}$ the minor of $X$ we get by removing both rows $i$ and $i'$ and columns $j$ and $j'$, and denote $P_{\{i,i'\},\{j,j'\}} = Perm_{n-2}(X_{\{i,i'\},\{j,j'\}})$. Then observe the following: $T^{2}Perm_n$ is the $n^2\times n^2$ matrix, with rows and columns indexed by pairs $(i,j)$, whose $((i,j),(i',j'))$ entry is $\frac{\partial^2 P}{\partial X_{i,j}\partial X_{i',j'}}$. For every $i \neq i'$ and $j \neq j'$ this entry equals $P_{\{i,i'\},\{j,j'\}}$, and it is $0$ whenever $i = i'$ or $j = j'$, since two entries of the same row or the same column never appear together in a monomial of the Permanent.
Proving Claim 2 is the result of evaluating this Hessian at a carefully chosen matrix $A$ at which the Permanent vanishes. By sheer computation, Mignon and Ressayre show that $T_A^{2}Perm_n$ can be brought into a block form built out of two particular, invertible, matrices $B$ and $C$, and that this yields an $n^2 \times n^2$ matrix of full rank. We comment that the entries of $B$ and $C$ are products of large factorials, so $B$ and $C$ are invertible because of the zero characteristic of $F$.
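To get a feel for the kind of computation involved, here is a small sympy sketch (mine, not from the paper) that builds the $n^2\times n^2$ matrix $T_A^{2}Perm_n$ symbolically and prints its rank at one convenient zero of the Permanent: the all-ones matrix with its top-left entry replaced by $1-n$. Whether this is the exact point Mignon and Ressayre use is an assumption on my part.

```python
# A minimal sketch using sympy. The evaluation point A (all ones, except the
# top-left entry set to 1-n) is an assumption: it is a convenient zero of the
# Permanent, not necessarily the exact point used by Mignon and Ressayre.
from itertools import permutations
import sympy as sp

n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x{i}{j}"))

# The Permanent, written out as a sum over all permutations.
perm = sum(sp.Mul(*(X[i, s[i]] for i in range(n))) for s in permutations(range(n)))

variables = [X[i, j] for i in range(n) for j in range(n)]
hess = sp.hessian(perm, variables)  # the symbolic n^2 x n^2 matrix of 2nd partials

# Evaluate at A: all entries 1, except the (0,0) entry which is 1-n.
A = {X[i, j]: 1 for i in range(n) for j in range(n)}
A[X[0, 0]] = 1 - n

print("Perm(A) =", perm.subs(A))              # evaluates to 0
print("rank(T^2_A Perm_n) =", hess.subs(A).rank())
```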
Proving Claim 3 uses a similar observation, based on the structure of $T^2$. If $X$ is a non-invertible matrix, then by a change of basis $X$ can be turned into $X'$, a diagonal matrix with some $0$ entries on the main diagonal, and $T_X^2\det$ and $T_{X'}^2\det$ have the same rank. Repeating the computation above with $\det$ in place of $Perm$, we get that $T_{X'}^2\det$ has the exact same structure as it had for the Permanent (replacing $P_{\{i,i'\},\{j,j'\}}$ with the determinant of the corresponding minor, up to sign). However, since $X'$ is diagonal and non-invertible, almost all of these minors have determinant $0$, and counting the few surviving non-zero entries shows that the rank is at most $2m$.
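For a concrete instance of Claim 3 with $m = 3$, take $X' = \mathrm{diag}(1,1,0)$. The only non-vanishing second partials of $\det_3$ at $X'$ are those $\frac{\partial^2 \det_3}{\partial x_{i,j}\partial x_{i',j'}}$ with $\{i,i'\} = \{j,j'\}$ and with the single remaining diagonal entry non-zero; for instance $\frac{\partial^2 \det_3}{\partial x_{2,2}\partial x_{3,3}} = x_{1,1} = 1$, while $\frac{\partial^2 \det_3}{\partial x_{2,3}\partial x_{3,2}} = -x_{1,1} = -1$. Counting the surviving entries, the $9\times 9$ Hessian has rank $6 = 2m$ here, matching the bound of Claim 3.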
Proving Claim 1 boils down to showing that $f$ is injective, and hence an isomorphism onto its image. If $f$ isn't injective, then there exists a non-zero $v$ in the kernel of the linear part of $f$. Hence, for all $x$ and all $\lambda \in F$ it holds that $Perm_n(x+\lambda v) = \det_m(f(x+\lambda v)) = \det_m(f(x)) = Perm_n(x)$, so the directional derivative of $Perm_n$ in the direction $v$ vanishes everywhere; i.e., every $T_xPerm_n$ maps $v$ to $0$. Differentiating once more, for every $y$ the bilinear form $T_y^2Perm_n$ maps every $x$ to a functional $\phi_x$ which is $0$ on $v$; in other words, $v$ lies in the kernel of the Hessian at every point, so $T_y^2Perm_n$ is never of full rank. In particular, $T_A^2Perm_n$ has rank strictly smaller than $n^2$, which contradicts Claim 2. This completes the proof of the theorem.
SPOILER ALERT: The quest for lower bounds on circuits is (thus far) unsuccessful. Obviously, we still don't know how to show super-polynomial lower bounds on the determinant complexity of the Permanent. To this day, lower bounds are known only for very restricted classes of circuits (such as $AC^0$ or monotone circuits). In fact, Razborov and Rudich showed in 1995 that proving lower bounds using the currently known techniques is intrinsically difficult. However, there is a positive sign: in 2004, Kabanets and Impagliazzo showed that if polynomial identity testing can be derandomized (e.g., if $coRP = P$), then either the Permanent has no polynomial-size arithmetic circuits or some function in $NEXP$ has no polynomial-size boolean circuits.