Given the blackbox model, the direct way to test whether the polynomial inside is identically zero is simply to try (possibly many) inputs. If it is the zero polynomial you always get the right answer; if it is not, you need a reasonable bound on how many tests are required before a nonzero value shows up. The key observation that makes an efficient randomized algorithm possible is the following:
(Schwartz-Zippel) Let $P(x_1,...,x_n)\in F[x_1,...,x_n]$ be a nonzero polynomial, let $d_i$ be its degree in $x_i$, and let $d=\sum_i d_i$. Pick points from $S^n$, where $S\subseteq\bar F$ is any finite set ($\bar F$ denotes the algebraic closure of $F$, which is just a technicality, since sometimes you need to go into extensions of $F$), and feed them to the blackbox: at most $d|S|^{n-1}$ of these points evaluate to zero.
An inductive argument (on the number of variables) easily shows its validity. This says: if you choose a reasonably big $S$, say $|S|\ge cd$ for some constant $c$, then the probability that a uniformly random point of $S^n$ evaluates to zero is at most $c^{-1}$. The number of random bits used is $n\log(cd)$. Evidently the error can be driven down arbitrarily at the cost of more random bits (a bigger $S$, or repeated runs).
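To make the resulting test concrete, here is a minimal sketch in Python; the blackbox, the degree bound, and the constant $c$ are illustrative assumptions, not anything from the original sources.

```python
import random

def schwartz_zippel_test(blackbox, n, d, c=100, trials=1):
    """One-sided test: True means 'probably the zero polynomial'.

    n: number of variables, d: an upper bound on the total degree.
    When P is nonzero, each trial errs with probability <= d/|S| = 1/c;
    when P is identically zero, the test never errs.
    """
    S = range(c * d)                       # any set with |S| = c*d works
    for _ in range(trials):                # repetition drives error to (1/c)^trials
        point = [random.choice(S) for _ in range(n)]
        if blackbox(*point) != 0:
            return False                   # a nonzero value is a certificate
    return True

# Hypothetical blackbox: (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
print(schwartz_zippel_test(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, n=2, d=2))
```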
This is a practical enough algorithm, but could there be magic inputs to feed the blackbox, so that zero polynomials can be detected deterministically from the output? Suppose, for example, that you know the polynomial has only integer coefficients. Perhaps you can feed it numbers so "weird" that no integer-coefficient combination can cancel them, so that the polynomial evaluates to zero iff all of its coefficients are zero. This is the idea of Chen-Kao: use irrational numbers.
Suppose we feed $P(x_1,...,x_n)$ (which has only integer coefficients) the input $(\pi_1,...,\pi_n)$ with $\pi_i=\sum_{j=1}^{\log (d_i+1)} \sqrt{p_{ij}}$, where the $p_{ij}$ are arbitrary distinct prime numbers. Then $P(x_1,...,x_n)\neq 0$ iff $P(\pi_1,...,\pi_n)\neq 0$. (Proof sketch, by induction on $n$: in the inductive step, first evaluate $P$ at $\pi_1,...,\pi_{n-1}$, which yields a univariate polynomial $P'(x_n)$ with coefficients in the extension field $F'=F[\pi_1,...,\pi_{n-1}]$; by induction $P'$ is nonzero. But the dimension of $F'[\pi_n]$ over $F'$ is $2^{\log(d_n+1)}=d_n+1$, which exceeds the degree $d_n$ of $P'$, so $\pi_n$ can never be a root of $P'(x_n)$.)
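As a sanity check of this fact, here is a tiny illustration in Python using sympy for exact arithmetic; the polynomial and the primes are arbitrary choices for the example.

```python
from sympy import sqrt, expand, simplify

# Example nonzero polynomial with integer coefficients: deg_x = 2, deg_y = 1.
def P(x, y):
    return x**2 * y - 3*x + 1

pi_1 = sqrt(2) + sqrt(3)   # ceil(log(d_1 + 1)) = 2 distinct primes for x
pi_2 = sqrt(5)             # ceil(log(d_2 + 1)) = 1 prime for y

value = simplify(expand(P(pi_1, pi_2)))
print(value)                      # a nonzero expression in radicals
print(value.equals(0))            # False: P cannot vanish at (pi_1, pi_2)
```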
But isn't that a deterministic algorithm already? No: you can never evaluate irrational numbers at infinite precision. We can only truncate the binary expansions of the irrationals at some position, say $l$, and randomly choose the signs of the $\sqrt{p_{ij}}$; the error probability then drops proportionally to $l^{-1}$ (the calculation is in Chen-Kao's original paper, Lemma 2.2). Now rethink this: the precision of the algorithm depends on $l$, which costs no random bits! So arbitrary precision can be achieved without changing the number of random bits, which is $\sum_{i=1}^n \log (d_i+1)$, the number of signs we pick.
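A rough sketch of this randomized version, with finite precision and random signs (the polynomial, the primes, the precision, and the crude "far from zero" threshold are all made-up illustrations; the actual error analysis is the cited Lemma 2.2):

```python
import random
from decimal import Decimal, getcontext

def chen_kao_style_eval(P, primes_per_var, digits):
    """Evaluate P at truncated sums of +-sqrt(p); the signs are the only randomness."""
    getcontext().prec = digits              # truncate every expansion at this precision
    point = []
    for primes in primes_per_var:
        pi = Decimal(0)
        for p in primes:                     # one random sign bit per prime
            sign = random.choice((1, -1))
            pi += sign * Decimal(p).sqrt()
        point.append(pi)
    return P(*point)

# Hypothetical example: P(x, y) = x*y - y - 1 with primes {2, 3} for x and {5} for y.
val = chen_kao_style_eval(lambda x, y: x*y - y - 1, [[2, 3], [5]], digits=50)
print(abs(val) > Decimal("1e-20"))           # declare 'nonzero' if far from zero
```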
Chen-Kao's ingenious method has one drawback: it depends on properties of the integers (coefficients need to be integers; prime numbers need to exist, which is not the case for, say, finite fields). This is where Lewin-Vadhan came in, with an abstraction of the method that applies to arbitrary fields.
The idea of Lewin-Vadhan is to find a substitute notion of irrationals in an arbitrary field. Recall that the underlying strategy of Chen-Kao is to find evaluation points at which nonzero polynomials provably cannot vanish. Lewin-Vadhan carefully showed that the following analogy works: prime numbers are replaced by irreducible polynomials in $F[x]$; the binary expansions of the square roots are replaced by the power series (in $F[[x]]$) of the square roots of those irreducible polynomials, approximated by truncating modulo some $x^l$.
For the whole scheme to work, two things need to be affirmed:
1. The irreducible polynomials indeed play the role of prime numbers, i.e., for any nonzero polynomial $P(x_1,...,x_n)$, setting $\pi_i=\sum_{j=1}^{\log (d_i+1)} \sqrt{f_{ij}}$ (the $f_{ij}$ are now distinct irreducible univariate polynomials), $P(\pi_1,...,\pi_n)$ is never zero (in $F[[x]]$!). The proof goes similarly to the simpler proof above for prime numbers.
2. The irreducible polynomials whose square roots have power series expansions (this is not always the case, since the constant term has to be a quadratic residue) need to be easy enough to find. In fact, it is proved using Hensel's Lemma that a polynomial has a square root in $F[[x]]$ iff its constant term is a nonzero quadratic residue (except for finite fields of characteristic 2, which are handled separately); a sketch of this lifting appears after this list. Also, the number of irreducible polynomials in $F[x]$ of degree less than $n$ is at least $(|F|^n-1)/n$. Please refer to the original paper (actually its extended version) for the full proofs.
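Here is a minimal sketch of the lifting step in the second point: computing $\sqrt f$ as a truncated power series, coefficient by coefficient, over a small prime field. Everything here (the field GF(7), the polynomial $f$, the truncation length) is an assumption for illustration, and the real construction of course also requires $f$ to be irreducible.

```python
p = 7                                       # assume an odd prime field GF(p)

def mul(a, b, l):
    """Multiply two coefficient lists modulo x^l and modulo p."""
    c = [0] * l
    for i, ai in enumerate(a[:l]):
        for j, bj in enumerate(b[:l - i]):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

def sqrt_series(f, l):
    """Return g with g*g = f (mod x^l, mod p); f[0] must be a nonzero QR mod p."""
    r0 = next(r for r in range(1, p) if r * r % p == f[0])  # sqrt of constant term
    g = [r0]
    for k in range(1, l):
        # coefficient of x^k in g*g is 2*g[0]*g[k] + sum_{i=1..k-1} g[i]*g[k-i];
        # equate it to f[k] and solve for g[k] (Hensel/Newton-style lifting).
        cross = sum(g[i] * g[k - i] for i in range(1, k)) % p
        g.append((f[k] - cross) * pow(2 * r0, -1, p) % p)
    return g

f = [2, 1, 0, 0, 5] + [0] * 6               # f(x) = 2 + x + 5x^4; f(0) = 2 = 3^2 mod 7
g = sqrt_series(f, l=10)
print(mul(g, g, 10))                        # reproduces f's coefficients mod x^10
```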
Finally, the approximation by power series needs to yield the desired error probability. For any given error $\epsilon$, the power series can be truncated at $x^l$ where $l=0.5\,\epsilon^{-1} d \max_{i,j} (\deg f_{ij})$. This is technically more involved than the straightforward probability calculation in the prime-number case, but (very "luckily", a word used multiple times in the paper) everything works out. The random bits are still used only for choosing the signs of the $\sqrt{f_{ij}}$, and their number is $\sum_i\log (d_i+1)$.
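Taking the stated bound at face value with some made-up numbers: for $d=8$, $\max\deg(f_{ij})=3$, and target error $\epsilon=10^{-2}$, one would truncate the series at $l = 0.5\cdot 10^{2}\cdot 8\cdot 3 = 1200$.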
In a sense, the problem is solved by Lewin-Vadhan's construction. It is provable that under the given model (i.e., only $n$ and the $d_i$ are known as parameters), this number of random bits is already essentially optimal. Any deterministic algorithm needs at least $\prod_i (d_i+1)$ queries to decide the polynomial: with fewer queries than $\prod_i (d_i+1)$, the number of possible monomials, there remain nonzero polynomials indistinguishable from the zero polynomial. Hence any randomized algorithm needs $r=(1-o(1))\sum_i \log(d_i+1)$ random bits, since the trivial derandomization of trying all $2^r$ seeds must make at least as many queries as a deterministic algorithm needs.
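Written out (under the assumption that the randomized tester makes $q$ blackbox queries per random seed), the counting is

$$2^r \cdot q \;\ge\; \prod_{i=1}^n (d_i+1) \quad\Longrightarrow\quad r \;\ge\; \sum_{i=1}^n \log(d_i+1) - \log q,$$

which is $(1-o(1))\sum_i \log(d_i+1)$ whenever $q$ is small compared to $\prod_i (d_i+1)$.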
This would be an unreservedly happy success story in randomized algorithm design, had it not been for the tragic death of Daniel Lewin on September 11, 2001.