Chicago Journal of Theoretical Computer Science

Volume 1999 Abstracts


  1. On Finding the Number of Graph Automorphisms by Robert Beals, Richard Chang, William Gasarch, and Jacobo Torán, 10 February 1999

    In computational complexity theory, a function f is called b(n)-enumerable if there exists a polynomial-time function that can restrict the output of f(x) to one of b(n) possible values. This paper investigates #GA, the function that computes the number of automorphisms of an undirected graph, and GI, the set of pairs of isomorphic graphs. The results in this paper show the following connections between the enumerability of #GA and the computational complexity of GI.

    1. #GA is e^(O(sqrt(n log n)))-enumerable.
    2. If #GA is polynomially enumerable, then GI is in R.
    3. For epsilon < 1/2, if #GA is n^epsilon-enumerable, then GI is in P.
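
    As a point of reference, the enumerability notion used above can be written out as follows (a standard formalization; the paper's precise definition may differ in inessential details): a function f is b(n)-enumerable if there is a polynomial-time computable function g that, on any input x of length n, outputs a short list guaranteed to contain the correct value,

        g(x) = \langle y_1, \ldots, y_{b(n)} \rangle \quad \text{with} \quad f(x) \in \{ y_1, \ldots, y_{b(n)} \}.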

  2. Randomized Reductions and Isomorphisms by Jie Wang (Special Issue on Computational Complexity, results from Dagstuhl-Seminar 1996, Eric Allender editor), 24 February 1999

    Randomized reductions have provided new techniques for tackling average-case complexity problems. For example, although some NP-complete problems with uniform distributions on instances cannot be complete under deterministic one-one reductions [Wang and Belanger], they are complete under randomized reductions [Venkatesan and Levin]. In this paper, we study randomized reductions that are one-one, honest mappings over certain input domains. These are reasonable assumptions, since all the randomized reductions in the literature that are used in proving average-case completeness results possess this property. We consider whether randomized reductions can be inverted efficiently, which gives rise to the issue of randomized isomorphisms. By generalizing the notion of isomorphisms under deterministic reductions, we define what it means for two distributional problems to be isomorphic under randomized reductions. We then show a randomized version of the Cantor-Bernstein-Myhill theorem, which provides a sufficient condition for two distributional problems to be isomorphic under randomized reductions. Based on that condition we show that all the known average-case NP-complete problems (including those that are complete under deterministic reductions) are indeed isomorphic to each other under randomized reductions.
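
    For orientation, a deterministic statement of the kind the randomized Cantor-Bernstein-Myhill theorem generalizes is the Berman-Hartmanis theorem, stated roughly here (this paraphrase is ours, not the paper's):

        A \le^p_1 B \;\wedge\; B \le^p_1 A \;\text{via one-one, length-increasing, polynomial-time invertible reductions} \;\Longrightarrow\; A \cong^p B .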

  3. Complements of Multivalued Functions by Stephen Fenner, Frederic Green, Steven Homer, Alan L. Selman, Thomas Thierauf, and Heribert Vollmer, 19 March 1999

    We study the class coNPMV of complements of NPMV functions. Though defined symmetrically to NPMV, this class exhibits very different properties. We clarify the complexity of coNPMV by showing that it is essentially the same as that of NPMV^NP. Complete functions for coNPMV are exhibited and central complexity-theoretic properties of this class are studied. We show that computing maximum satisfying assignments can be done in coNPMV, which leads us to a comparison of NPMV and coNPMV with Krentel's classes MaxP and MinP. The difference hierarchy for NPMV is related to the query hierarchy for coNPMV. Finally, we examine a functional analogue of Chang and Kadin's relationship between a collapse of the Boolean hierarchy over NP and a collapse of the polynomial-time hierarchy.
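
    As background for the classes mentioned above (the standard definition from the literature on multivalued functions, not a restatement of the paper's own definitions): a partial multivalued function f belongs to NPMV if there is a nondeterministic polynomial-time transducer N such that, for every input x,

        \mathrm{set}\text{-}f(x) \;=\; \{\, y : \text{some accepting computation path of } N \text{ on } x \text{ outputs } y \,\},

    and coNPMV consists of the complements of such functions in the sense studied in the paper.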

  4. The Complexity of Generating Test Instances by Christoph Karg, Johannes Köbler, and Rainer Schuler (Special Issue on Computational Complexity, results from Dagstuhl-Seminar 1996, Eric Allender editor), 22 April 1999

    Recently, Watanabe proposed a framework for testing the correctness and average-case performance of algorithms that purport to solve a given NP search problem efficiently on average with respect to some distribution on the instances. The idea is to randomly generate certified instances under some distribution that resembles the input distribution. Watanabe showed that unless RE=NE, test instances cannot be generated for some NP search problems. We further discuss Watanabe's approach and show, as an upper bound, that test instances can be generated for every NP search problem with non-adaptive queries to an NP oracle.

    We also introduce Las Vegas and Monte Carlo types of test instance generators and show that these generators can be used to find out (with high confidence) whether an algorithm is correct and efficient on average. It is shown that Monte Carlo generators can be easily constructed for all RP search problems and that Las Vegas generators exist for all ZPP search problems as well as for other problems such as prime factorization. On the other hand, we prove that Monte Carlo generators can only exist for problems in the intersection of NP and coAM.
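
    To make the testing idea concrete, here is a minimal Python sketch, not taken from the paper, of a Monte Carlo-style use of certified instances: a generator plants a satisfying assignment in a random 3-CNF formula (the certificate), and a purported SAT solver is then checked against many such instances. The generator, the distribution it induces, and the solver interface are illustrative assumptions only.

        import random

        def planted_3sat(n_vars, n_clauses, rng):
            """Return (clauses, certificate): a random 3-CNF together with a planted
            satisfying assignment.  Illustrative only; a real test-instance generator
            must resemble the input distribution the solver claims to handle."""
            certificate = [rng.choice([False, True]) for _ in range(n_vars)]
            clauses = []
            while len(clauses) < n_clauses:
                vs = rng.sample(range(n_vars), 3)
                clause = [(v, rng.choice([False, True])) for v in vs]   # (variable, negated?)
                if any(certificate[v] != neg for v, neg in clause):     # keep only satisfied clauses
                    clauses.append(clause)
            return clauses, certificate

        def satisfies(clauses, assignment):
            return all(any(assignment[v] != neg for v, neg in clause) for clause in clauses)

        def monte_carlo_test(solver, trials=200, n_vars=40, n_clauses=160, seed=0):
            """Every generated instance is certified satisfiable, so a correct solver
            must return a satisfying assignment on each trial."""
            rng = random.Random(seed)
            for _ in range(trials):
                clauses, _certificate = planted_3sat(n_vars, n_clauses, rng)
                answer = solver(clauses, n_vars)
                if answer is None or not satisfies(clauses, answer):
                    return False     # caught an incorrect (or prematurely failing) solver
            return True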

  5. The Complexity of Problems on Graphs Represented as OBDDs by Joan Feigenbaum, Sampath Kannan, Moshe Y. Vardi, and Mahesh Viswanathan, 6 August 1999

    To analyze the complexity of decision problems on graphs, one normally assumes that the input size is polynomial in the number of vertices. Galperin and Wigderson and, later, Papadimitriou and Yannakakis investigated the complexity of these problems when the input graph is represented by a polylogarithmically succinct circuit. They showed that, under such a representation, certain trivial problems become intractable and that, in general, there is an exponential blowup in problem complexity. Later, Balcázar, Lozano, and Torán extended these results to problems whose inputs are structures other than graphs.

    In this paper, we show that, when the input graph is represented by an ordered binary decision diagram (OBDD), there is an exponential blowup in the complexity of most graph problems. In particular, we show that the GAP and AGAP problems become complete for PSPACE and EXP, respectively, when the graphs are succinctly represented by OBDDs.
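
    As a small illustration of what succinct representation means here (a sketch under our own assumptions, with an ordinary Boolean edge predicate standing in for an actual OBDD): the graph below has 2^K vertices yet is described by a few lines, and deciding reachability (the GAP problem) by explicit search already costs time exponential in the size of that description.

        from collections import deque

        K = 16                      # vertices are bit-strings of length K, so 2**K of them

        def edge(u: int, v: int) -> bool:
            """Implicit edge predicate standing in for an OBDD: here there is an edge from
            u to v whenever v is u with exactly one bit flipped (a hypercube).  The
            description is tiny even though the graph has 2**K vertices."""
            x = u ^ v
            return x != 0 and (x & (x - 1)) == 0

        def reachable(source: int, target: int) -> bool:
            """GAP by explicit BFS on the implicitly represented graph: the work is
            exponential in the size of the succinct description.  For this particular
            predicate the neighbours are exactly the single-bit flips; for an arbitrary
            succinct predicate one would have to scan all 2**K candidate endpoints."""
            seen = {source}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                if u == target:
                    return True
                for bit in range(K):
                    v = u ^ (1 << bit)
                    if edge(u, v) and v not in seen:
                        seen.add(v)
                        queue.append(v)
            return False

        print(reachable(0, 2**K - 1))   # True: walks the whole exponential-size graph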

  6. Hopfield Neural Networks and Self-Stabilization by Arun Jagota, 6 August 1999

    This paper studies Hopfield neural networks from the perspective of self-stabilizing distributed computation. Known self-stabilization results on Hopfield networks are surveyed. Key ingredients of the proofs are given. Novel applications of self-stabilization arising from the context of neural networks, namely associative memories and optimization, are discussed. Two new results at the intersection of Hopfield nets and distributed systems are obtained: one involves convergence under a fine-grained implementation; the other concerns perturbation analysis. Some possibilities for further research at the intersection of these two fields are discussed.
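
    A minimal sketch, not from the paper, of the asynchronous Hopfield dynamics underlying these results: with symmetric weights and zero self-connections, the usual energy function never increases under single-unit updates, so the network reaches a stable state from an arbitrary initial configuration, which is the self-stabilization flavor the survey builds on. The Hebbian weight construction in the usage example is an illustrative assumption.

        import numpy as np

        def hopfield_run(W, theta, s, rng, max_sweeps=1000):
            """Asynchronous Hopfield dynamics: repeatedly pick a unit and set it to the
            sign of its net input.  Assumes W is symmetric with zero diagonal, so the
            energy E(s) = -1/2 s^T W s + theta . s is non-increasing and the dynamics
            reach a fixed point from any initial state."""
            n = len(s)
            s = np.array(s, dtype=int)
            for _ in range(max_sweeps):
                changed = False
                for i in rng.permutation(n):
                    new = 1 if W[i] @ s - theta[i] >= 0 else -1
                    if new != s[i]:
                        s[i] = new
                        changed = True
                if not changed:          # fixed point: no unit wants to flip
                    return s
            return s

        # Tiny usage example: two stored patterns via the Hebb rule (illustrative only).
        rng = np.random.default_rng(0)
        patterns = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
        W = (patterns.T @ patterns).astype(float)
        np.fill_diagonal(W, 0.0)
        theta = np.zeros(4)
        print(hopfield_run(W, theta, [1, 1, 1, 1], rng))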

  7. The Permanent Requires Large Uniform Threshold Circuits by Eric Allender, 6 August 1999

    We show that the permanent cannot be computed by uniform constant-depth threshold circuits of size T(n), for any function T such that, for all k, T^(k)(n) = o(2^n) (where T^(k) denotes the k-fold composition of T with itself). More generally, we show that any problem that is hard for the complexity class C_=P requires circuits of this size (on the uniform constant-depth threshold circuit model). In particular, this lower bound applies to any problem that is hard for the complexity classes PP or #P.
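
    For reference, the permanent of an n x n matrix A = (a_{i,j}) is

        \mathrm{perm}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},

    the determinant's sum without the alternating signs; for 0-1 matrices it counts the perfect matchings of the associated bipartite graph, which is the source of its #P-completeness.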

    This extends a recent result by Caussinus, McKenzie, Thérien, and Vollmer, showing that there are problems in the counting hierarchy that require superpolynomial-size uniform TC0 circuits. Their proof uses "leaf languages" as a tool in obtaining their separations, but it does not immediately yield larger lower bounds for the complexity of these problems, and it does not yield a lower bound for any particular problem at any fixed level of the counting hierarchy. (It only shows that hard problems must exist at some level of the counting hierarchy.)

    We also present related and somewhat weaker lower bounds, extending the theorem of Caussinus et al. showing that ACC^0 is properly contained in ModPH.

  8. Bounds for Linear Satisfiability Problems by Jeff Erickson, 6 August 1999

    We prove an Omega(n^(ceiling(r/2))) lower bound for the following problem: for some fixed linear equation in r variables, given n real numbers, do any r of them satisfy the equation? Our lower bound holds in a restricted linear decision tree model, in which each decision is based on the sign of an arbitrary linear combination of r or fewer inputs. In this model, our lower bound is as large as possible. Previously, this lower bound was known only for a few special cases, and only in more specialized models of computation. Our lower bound follows from an adversary argument. We show that for any algorithm, there is an input that contains Omega(n^(ceiling(r/2))) "critical" r-tuples, which have the following important property: none of the critical tuples satisfies the equation; however, if the algorithm does not directly test each critical tuple, then the adversary can modify the input, in a way that is undetectable to the algorithm, so that some untested tuple does satisfy the equation. A key step in the proof is the introduction of formal infinitesimals into the adversary's input. A theorem of Tarski implies that if we can construct a single input containing infinitesimals that is hard for every algorithm, then for every decision tree algorithm there exists a corresponding real-valued input that is hard for that algorithm.
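
    To fix the problem statement, here is a naive Python sketch, not from the paper, for the classic special case r = 3 with the equation x + y + z = 0 (the 3SUM problem). The exhaustive search below takes O(n^3) time; for this case the paper's lower bound is Omega(n^2) in the restricted linear decision tree model, and a matching O(n^2) algorithm is classical.

        from itertools import combinations

        def has_solution(nums, coeffs=(1, 1, 1), rhs=0):
            """Naive check for the fixed linear equation c1*x + c2*y + c3*z = rhs
            over distinct entries of nums (r = 3; with the default coefficients
            this is the 3SUM problem).  Exhaustive O(n^3) search, for illustration."""
            for triple in combinations(nums, 3):
                if sum(c * v for c, v in zip(coeffs, triple)) == rhs:
                    return True
            return False

        print(has_solution([3.5, -1.25, 8.0, -2.25]))   # 3.5 + (-1.25) + (-2.25) = 0 -> True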

  9. Time Bounds for Strong and Hybrid Consistency for Arbitrary Abstract Data Types by Martha J. Kosa, 6 August 1999

    We compare the costs of three well-known consistency guarantees (sequential consistency, linearizability, and hybrid consistency) for shared memory objects of arbitrary data types, generalizing previous work on specific types. We evaluate the impact of the desired consistency guarantee, the amount of synchrony among the users of the objects, and algebraic properties of a type on the worst-case time complexity of operations of the type. These algebraic properties are sufficient for proving many lower bounds and some upper bounds (even 0, meaning only local computation is required) on the worst-case times. Under certain reasonable assumptions, sequential consistency and linearizability are equally costly in perfectly synchronized systems, linearizable operations are more expensive in approximately synchronized systems than in perfectly synchronized systems, sequentially consistent operations are cheaper than linearizable operations in approximately synchronized systems, and hybrid consistency is not necessarily cheaper than sequential consistency or linearizability.

  10. Self-Stabilizing Distributed Constraint Satisfaction by Zeev Collin, Rina Dechter, and Shmuel Katz, 31 December 1999

    Distributed architectures and solutions are described for classes of constraint satisfaction problems, called network consistency problems. An inherent assumption of these architectures is that the communication network mimics the structure of the constraint problem. The solutions are required to be self-stabilizing and to treat arbitrary networks, which makes them suitable for dynamic or error-prone environments. We first show that even for relatively simple constraint networks, such as rings, there is no self-stabilizing solution that guarantees convergence from every initial state of the system using a completely uniform, asynchronous model (where all processors are identical). An almost uniform, asynchronous network consistency protocol with one specially designated node is shown and proven correct. We also show that some restricted topologies, such as trees, can accommodate the uniform, asynchronous model when neighboring nodes cannot take simultaneous steps.

  11. Satisfiability Coding Lemma by Ramamohan Paturi, Pavel Pudlák, and Francis Zane, 31 December 1999

    In this paper, we prove a lemma that shows how to encode satisfying solutions of a k-CNF (a boolean formula in conjunctive normal form with at most k literals per clause) succinctly. Using this lemma, which we call the satisfiability coding lemma, we prove tight lower bounds on depth-3 circuits and improved upper bounds for the k-SAT problem. We show an Omega(n^{1/4} 2^{sqrt(n)}) lower bound on the size of depth-3 circuits of AND, OR, and NOT gates computing the parity function. This bound is tight up to a constant factor. We also present and analyze two simple algorithms for finding satisfying assignments of k-CNFs. The first is a randomized algorithm which, with probability approaching 1, finds a satisfying assignment of a satisfiable k-CNF formula F in time O(n^2 |F| 2^{n-n/k}). The second algorithm is deterministic, and its running time approaches 2^{n-n/(2k)} for large n and k. The randomized algorithm improves on previous algorithms for k>3, though subsequent algorithms improve on it; the deterministic algorithm is the best known deterministic algorithm for k>4.
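
    The following Python sketch is our reconstruction of the style of randomized algorithm described, not the paper's own code: variables are processed in random order, a variable is forced whenever it is the last unassigned literal of some not-yet-satisfied clause, and is guessed uniformly otherwise; enough independent rounds then find a satisfying assignment of a satisfiable k-CNF with high probability.

        import random

        def ppz_round(clauses, n_vars, rng):
            """One round in the spirit of the Paturi-Pudlak-Zane randomized algorithm:
            process the variables in a random order; if some clause has the current
            variable as its only unassigned literal, set the variable to satisfy that
            clause, otherwise set it uniformly at random."""
            assignment = [None] * n_vars
            for v in rng.sample(range(n_vars), n_vars):          # random permutation
                forced = None
                for clause in clauses:
                    # literals are (var, negated?); a literal is true if value != negated
                    if any(assignment[w] is not None and assignment[w] != neg for w, neg in clause):
                        continue                                 # clause already satisfied
                    unset = [(w, neg) for w, neg in clause if assignment[w] is None]
                    if len(unset) == 1 and unset[0][0] == v:
                        forced = not unset[0][1]                 # make the last literal true
                        break
                assignment[v] = forced if forced is not None else rng.choice([False, True])
            return assignment

        def satisfies(clauses, assignment):
            return all(any(assignment[w] != neg for w, neg in clause) for clause in clauses)

        def ppz_solve(clauses, n_vars, rounds=10000, seed=0):
            rng = random.Random(seed)
            for _ in range(rounds):
                a = ppz_round(clauses, n_vars, rng)
                if satisfies(clauses, a):
                    return a
            return None                                          # probably unsatisfiable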

  12. Equivalences for Fair Kripke Structures by Adnan Aziz, Felice Balarin, Vigyan Singhal, Robert Brayton, and Alberto Sangiovanni-Vincentelli, 31 December 1999

    We extend the notion of bisimulation to Kripke structures with fairness. We define equivalences that preserve fairness and are akin to bisimulation. Specifically, we define an equivalence and show that it is complete in the sense that it is the coarsest equivalence that preserves the logic CTL* interpreted with respect to the fair paths. The addition of fairness might cause two Kripke structures that can be distinguished by a CTL* formula to become indistinguishable by any CTL formula. We define a weaker equivalence that is the weakest equivalence preserving CTL interpreted on the fair paths. As a consequence of our proofs, we also obtain characterizations of states in the fair structure in terms of CTL* and CTL formulae.
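
    For orientation, the ordinary (unfair) notion being extended is the following standard one, stated here for reference rather than quoted from the paper: a relation R between the state sets of two Kripke structures with labellings L_1 and L_2 is a bisimulation if, whenever s R t,

        L_1(s) = L_2(t), \qquad
        \forall\, s \to s' \;\exists\, t \to t' \text{ with } s' \,R\, t', \qquad
        \forall\, t \to t' \;\exists\, s \to s' \text{ with } s' \,R\, t'.

    The equivalences defined in the paper refine this notion so that only the fair paths matter.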


