In computational complexity theory, a function f is called b(n)-enumerable if some polynomial-time computable function, on each input x, narrows the value f(x) down to one of b(n) possibilities, where n is the input length. This paper investigates #GA, the function that computes the number of automorphisms of an undirected graph, and GI, the set of pairs of isomorphic graphs, and establishes the following connections between the enumerability of #GA and the computational complexity of GI.
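As an illustration of what the function #GA counts, here is a minimal brute-force sketch (not the paper's method, and exponential-time; #GA is not known to be polynomial-time computable): it counts the vertex permutations of a small undirected graph that map the edge set onto itself.

```python
from itertools import permutations

def count_automorphisms(n, edges):
    """Count the permutations of {0, ..., n-1} that map the edge set
    of an undirected graph onto itself (brute force, n! time)."""
    edge_set = {frozenset(e) for e in edges}
    count = 0
    for perm in permutations(range(n)):
        # perm is an automorphism iff every edge maps to an edge
        if all(frozenset({perm[u], perm[v]}) in edge_set for u, v in edges):
            count += 1
    return count
```

For example, the path on three vertices has exactly two automorphisms (identity and reversal), and the 4-cycle has the eight symmetries of its dihedral group.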
Randomizing reductions have provided new techniques for tackling average-case complexity problems. For example, although some NP-complete problems with uniform distributions on instances cannot be complete under deterministic one-one reductions [Wang and Belanger], they are complete under randomized reductions [Venkatesan and Levin]. In this paper we study randomized reductions that are one-one and honest mappings over certain input domains. These are reasonable restrictions, since all the randomized reductions in the literature that are used to prove average-case completeness results possess these properties. We consider whether randomized reductions can be inverted efficiently, which raises the issue of randomized isomorphisms. Generalizing the notion of isomorphism under deterministic reductions, we define what it means for two distributional problems to be isomorphic under randomized reductions. We then prove a randomized version of the Cantor-Bernstein-Myhill theorem, which provides a sufficient condition for two distributional problems to be isomorphic under randomized reductions. Based on that condition, we show that all the known average-case NP-complete problems (including those that are complete under deterministic reductions) are indeed isomorphic to each other under randomized reductions.
We study the class coNPMV of complements of NPMV functions. Though defined symmetrically to NPMV, this class exhibits very different properties. We clarify the complexity of coNPMV by showing that it is essentially the same as that of NPMV^NP. Complete functions for coNPMV are exhibited and central complexity-theoretic properties of this class are studied. We show that computing maximum satisfying assignments can be done in coNPMV, which leads us to a comparison of NPMV and coNPMV with Krentel's classes MaxP and MinP. The difference hierarchy for NPMV is related to the query hierarchy for coNPMV. Finally, we examine a functional analogue of Chang and Kadin's relationship between a collapse of the Boolean hierarchy over NP and a collapse of the polynomial-time hierarchy.
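For concreteness, the "maximum satisfying assignment" function mentioned above can be sketched by brute force (a toy encoding chosen for illustration, not the paper's construction): return the lexicographically largest satisfying assignment of a CNF formula, the canonical example of a Krentel-style maximization function.

```python
def max_sat_assignment(n_vars, clauses):
    """Lexicographically maximum satisfying assignment of a CNF formula,
    or None if it is unsatisfiable.  A clause is a list of nonzero ints:
    literal v means variable v-1 is true, -v means it is false.
    Brute force over all 2^n assignments, largest first."""
    for bits in range(2 ** n_vars - 1, -1, -1):
        assign = [(bits >> (n_vars - 1 - i)) & 1 for i in range(n_vars)]
        if all(any((assign[abs(l) - 1] == 1) == (l > 0) for l in c)
               for c in clauses):
            return assign
    return None
```

For (x1 or x2) and (not x1 or not x2), the satisfying assignments are 01 and 10, so the maximum is [1, 0].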
Recently, Watanabe proposed a framework for testing the correctness and average-case performance of algorithms that purport to solve a given NP search problem efficiently on average with respect to some distribution on the instances. The idea is to randomly generate certified instances under some distribution that resembles the input distribution. Watanabe showed that unless RE = NE, test instances cannot be generated for some NP search problems. We further discuss Watanabe's approach and show, as an upper bound, that test instances can be generated for every NP search problem with non-adaptive queries to an NP oracle.
We also introduce Las Vegas and Monte Carlo types of test instance generators and show that these generators can be used to find out (with high confidence) whether an algorithm is correct and efficient on average. It is shown that Monte Carlo generators can be easily constructed for all RP search problems and that Las Vegas generators exist for all ZPP search problems as well as for other problems such as prime factorization. On the other hand, we prove that Monte Carlo generators can only exist for problems in the intersection of NP and coAM.
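The generator idea for factorization can be sketched as follows (a hedged toy version, not Watanabe's or the paper's construction): build each test instance together with its certificate, so the tester knows a correct answer in advance and can check a purported solver's output.

```python
import random

def small_primes(limit):
    """Primes below limit, by the sieve of Eratosthenes."""
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def certified_factoring_instance(rng, limit=1000):
    """A factorization test instance N together with its certificate (p, q).
    Because the generator builds the instance from the answer, it can verify
    any algorithm's output on N against the known factors."""
    primes = small_primes(limit)
    p, q = rng.choice(primes), rng.choice(primes)
    return p * q, tuple(sorted((p, q)))

rng = random.Random(0)
N, (p, q) = certified_factoring_instance(rng)
```

A real generator would of course draw much larger primes and match the target input distribution; this only illustrates the instance-with-certificate idea.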
To analyze the complexity of decision problems on graphs, one normally assumes that the input size is polynomial in the number of vertices. Galperin and Wigderson and, later, Papadimitriou and Yannakakis investigated the complexity of these problems when the input graph is represented by a polylogarithmically succinct circuit. They showed that, under such a representation, certain trivial problems become intractable and that, in general, there is an exponential blowup in problem complexity. Later, Balcázar, Lozano, and Torán extended these results to problems whose inputs are structures other than graphs.
In this paper, we show that, when the input graph is represented by an ordered binary decision diagram (OBDD), there is an exponential blowup in the complexity of most graph problems. In particular, we show that the GAP and AGAP problems become complete for PSPACE and EXP, respectively, when the graphs are succinctly represented by OBDDs.
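The source of the blowup can be seen in a small sketch (illustrative only, and using an adjacency predicate rather than an OBDD): a graph on 2^n vertices can be described implicitly by a tiny predicate on n-bit vertex names, but reachability (GAP) must then search an exponentially large vertex set.

```python
from collections import deque

def succinct_gap(n_bits, adjacent, s, t):
    """Breadth-first reachability on an implicitly represented graph:
    the vertices are the integers 0 .. 2^n_bits - 1 and the edges are
    given only by the predicate adjacent(u, v).  The description is tiny,
    but the search space is exponential in n_bits -- the blowup that the
    succinct-representation results formalize."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in range(2 ** n_bits):
            if v not in seen and adjacent(u, v):
                seen.add(v)
                queue.append(v)
    return False

# Example: u -> v iff v == u + 1, a directed path through all 2^n vertices.
step = lambda u, v: v == u + 1
```

Here the predicate is a one-line description, yet deciding reachability from 0 to 2^n - 1 visits every vertex.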
This paper studies Hopfield neural networks from the perspective of self-stabilizing distributed computation. Known self-stabilization results on Hopfield networks are surveyed. Key ingredients of the proofs are given. Novel applications of self-stabilization---associative memories and optimization---arising from the context of neural networks are discussed. Two new results at the intersection of Hopfield nets and of distributed systems are obtained: One involves convergence under a fine-grained implementation; the other is on perturbation analysis. Some possibilities for further research at the intersection of these two fields are discussed.
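The convergence property that such self-stabilization results build on can be sketched as follows (a minimal discrete Hopfield model with sequential updates; the paper's fine-grained and perturbation settings are more general).

```python
def hopfield_run(W, theta, state, max_sweeps=1000):
    """Sequential Hopfield updates: unit i is repeatedly set to
    sign(sum_j W[i][j]*s[j] - theta[i]).  With symmetric weights and zero
    diagonal, the network's energy never increases under such updates, so
    the state converges to a fixed point from ANY initial configuration --
    i.e., the network self-stabilizes."""
    n = len(state)
    s = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n)) - theta[i]
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:       # a full sweep with no change: fixed point
            return s
    return s

# Two mutually excitatory units: the fixed points are (+1,+1) and (-1,-1).
out = hopfield_run([[0, 1], [1, 0]], [0, 0], [1, -1])
```

Started from the inconsistent state (+1, -1), the network settles into one of its two stable patterns.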
We show that the permanent cannot be computed by uniform constant-depth threshold circuits of size T(n), for any function T such that for all k, the k-fold composition T^(k)(n) is o(2^n). More generally, we show that any problem that is hard for the complexity class C_=P requires circuits of this size (on the uniform constant-depth threshold circuit model). In particular, this lower bound applies to any problem that is hard for the complexity classes PP or #P.
This extends a recent result by Caussinus, McKenzie, Thérien, and Vollmer, showing that there are problems in the counting hierarchy that require superpolynomial-size uniform TC^0 circuits. Their proof uses ``leaf languages'' as a tool in obtaining these separations, but it does not immediately yield larger lower bounds for the complexity of these problems, nor a lower bound for any particular problem at any fixed level of the counting hierarchy. (It only shows that hard problems must exist at some level of the hierarchy.)
We also present related and somewhat weaker lower bounds, extending the theorem of Caussinus et al. that ACC^0 is properly contained in ModPH.
We prove an OMEGA(n^(ceiling(r/2))) lower bound for the following
problem: For some fixed linear equation in r variables, given n
real numbers, do any r of them satisfy the equation? Our lower
bound holds in a restricted linear decision tree model, in which each
decision is based on the sign of an arbitrary linear combination
of r or fewer inputs. In this model, our lower bound is as large as
possible. Previously, this lower bound was known only for a few
special cases and only in more specialized models of computation.
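For intuition, the problem itself can be decided naively in O(n^r) time for fixed coefficients (a sketch of the problem statement only, unrelated to the lower-bound proof):

```python
from itertools import permutations

def any_tuple_satisfies(coeffs, c, xs):
    """Does some r-tuple of distinct inputs satisfy
    a_1*x_1 + ... + a_r*x_r = c?  Naive search over all ordered r-tuples,
    O(n^r) time; the abstract's OMEGA(n^(ceiling(r/2))) bound shows that
    linear decision trees cannot do much better than the square root of
    this exhaustive cost."""
    r = len(coeffs)
    return any(
        abs(sum(a * x for a, x in zip(coeffs, t)) - c) < 1e-9
        for t in permutations(xs, r)
    )
```

With r = 3 and the equation x + y + z = 0, this is the classic 3SUM problem.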
Our lower bound follows from an adversary argument. We show that for any algorithm, there is an input that contains OMEGA(n^(ceiling(r/2))) ``critical'' r-tuples, which have the following important property. None of the critical tuples satisfies the equation; however, if the algorithm does not directly test each critical tuple, then the adversary can modify the input, in a way that is undetectable to the algorithm, so that some untested tuple satisfies the equation.
We compare the costs of three well-known consistency guarantees (sequential consistency, linearizability, and hybrid consistency) for shared memory objects.
Distributed architectures and solutions are described for classes of constraint satisfaction problems.
In this paper, we prove a lemma that shows how to encode satisfying
solutions of a k-CNF (boolean formulae in conjunctive normal form
with at most k literals per clause) succinctly. Using this lemma,
which we call the
We extend the notion of bisimulation to Kripke structures with fairness. We define equivalences that preserve fairness and are akin to bisimulation. Specifically, we define an equivalence and show that it is complete in the sense that it is the coarsest equivalence that preserves the logic CTL* interpreted with respect to the fair paths. The addition of fairness might cause two Kripke structures that can be distinguished by a CTL* formula to become indistinguishable by any CTL formula. We define a weaker equivalence that is the weakest equivalence preserving CTL interpreted on the fair paths. As a consequence of our proofs, we also obtain characterizations of states in the fair structure in terms of CTL* and CTL formulae.
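For background, ordinary (fairness-free) bisimulation can be computed by naive partition refinement, sketched below; the fair equivalences in the abstract refine this notion, and this sketch is illustrative rather than the paper's algorithm.

```python
def bisimulation_classes(states, label, succ):
    """Coarsest ordinary bisimulation on a finite Kripke structure, by
    naive partition refinement: first group states with equal labellings,
    then repeatedly split blocks until any two states in the same block
    reach exactly the same set of blocks."""
    block = {s: label[s] for s in states}          # initial blocks: labels
    while True:
        # signature: own block plus the set of successor blocks
        sig = {s: (block[s], frozenset(block[t] for t in succ[s]))
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values()), key=repr))}
        new_block = {s: ids[sig[s]] for s in states}
        # refinement only ever splits blocks, so equal counts mean stability
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block
        block = new_block

# a and b carry the same label and both step to c, so they are bisimilar.
cls = bisimulation_classes(
    ["a", "b", "c"],
    {"a": "p", "b": "p", "c": "q"},
    {"a": ["c"], "b": ["c"], "c": []},
)
```

States in the same returned class satisfy exactly the same CTL* formulae over the unfair semantics, which is the analogy the fair equivalences generalize.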
Last modified: Fri Mar 19 16:06:49 CST 1999