**A concentration inequality for the overlap of a vector on a large set, with application to the communication complexity of the Gap-Hamming-Distance problem** by Thomas Vidick.

July 1, 2012.

Given two sets A, B in R^n, a measure of their correlation is the expected squared inner product between a random x in A and a random y in B. We prove an inequality showing that no two sets of large enough Gaussian measure (at least e^{-delta n} for some constant delta > 0) can have correlation substantially lower than two random sets of the same size would. Our proof is based on a concentration inequality for the overlap of a random Gaussian vector on a large set.
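For intuition about the correlation measure, the following Monte Carlo sketch estimates E[&lt;x, y&gt;^2] in a baseline setting chosen here for illustration (not the paper's setting): A = B = the whole unit sphere in R^n, where the expected squared inner product is 1/n, the value achieved by "random sets".

```python
import numpy as np

# Monte Carlo estimate of the correlation measure E[<x, y>^2]
# for x, y drawn independently and uniformly from the unit
# sphere in R^n (an illustrative baseline, not the paper's sets).
rng = np.random.default_rng(0)
n, trials = 100, 20000
x = rng.standard_normal((trials, n))
y = rng.standard_normal((trials, n))
# Normalize each row to the unit sphere.
x /= np.linalg.norm(x, axis=1, keepdims=True)
y /= np.linalg.norm(y, axis=1, keepdims=True)
est = float(np.mean(np.einsum('ij,ij->i', x, y) ** 2))
print(est)  # close to 1/n = 0.01
```

The inequality in the paper says that restricting x and y to any two sets of measure at least e^{-delta n} cannot push this quantity substantially below the random baseline.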

As an application, we show how our result can be combined with the partition bound of Jain and Klauck to give a simpler proof of a recent linear lower bound on the randomized communication complexity of the Gap-Hamming-Distance problem due to Chakrabarti and Regev.

**Minimum and maximum against k lies** by Michael Hoffmann, Jiří Matoušek, Yoshio Okamoto and Philipp Zumstein.

July 16, 2012

A neat 1972 result of Pohl asserts that ceiling(3n/2) - 2 comparisons are sufficient, and also necessary in the worst case, for finding both the minimum and the maximum of an n-element totally ordered set. The set is accessed via an oracle for pairwise comparisons. More recently, the problem has been studied in the context of the Rényi–Ulam liar games, where the oracle may give up to k false answers. For large k, an upper bound due to Aigner shows that (k + O(sqrt(k)))n comparisons suffice. We improve on this by providing an algorithm with at most (k + 1 + C)n + O(k^3) comparisons for some constant C. A recent result of Pálvölgyi provides a lower bound of the form (k + 1.5)n - D_k, so our upper bound for the coefficient of n is tight up to the value of C.
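Pohl's strategy in the lie-free case (k = 0) can be sketched as follows; the function name `min_and_max` and the comparison counter are choices made here for illustration. The key idea is to compare elements in pairs, then feed each loser only to the minimum search and each winner only to the maximum search, spending 3 comparisons per pair instead of 4.

```python
def min_and_max(a):
    """Pohl's pairwise strategy (lie-free case): return
    (minimum, maximum, number of comparisons used).
    Uses ceiling(3n/2) - 2 comparisons on an n-element list."""
    n = len(a)
    if n == 1:
        return a[0], a[0], 0
    comparisons = 0
    if n % 2 == 0:
        # Seed the running min and max with the first pair.
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        comparisons, start = 1, 2
    else:
        # Odd n: seed both with the lone first element, for free.
        lo = hi = a[0]
        start = 1
    for i in range(start, n, 2):
        # One comparison inside the pair, then one against the
        # current min and one against the current max.
        x, y = a[i], a[i + 1]
        small, big = (x, y) if x < y else (y, x)
        if small < lo:
            lo = small
        if big > hi:
            hi = big
        comparisons += 3
    return lo, hi, comparisons

print(min_and_max([5, 1, 9, 3, 7, 2, 8, 4, 6, 0]))  # (0, 9, 13)
```

For n = 10 this uses 1 + 3 * 4 = 13 = 3n/2 - 2 comparisons, matching Pohl's bound; the naive approach of comparing every element against both the running min and max would use about 2n.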

**Linear cover time for trees is exponentially unlikely** by Amir Yehudayoff.

July 28, 2012

The cover time of a graph is the time it takes for a random walk on it to visit all vertices. This note shows that for any tree on n vertices, the probability that the cover time of the tree is linear in n is exponentially small in n. This is true for any starting vertex.
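For intuition, here is a minimal simulation sketch (the function `cover_time_path` is a name chosen here) for the simplest tree, the path: a random walk started at an endpoint needs about n^2 steps to cover the n-vertex path in expectation, so a linear cover time is already atypical, and the note shows it is in fact exponentially unlikely for every tree and every starting vertex.

```python
import random

def cover_time_path(n, seed=None):
    """Simulate one random walk on the path with vertices
    0..n-1, started at vertex 0 (endpoints reflect); return
    the number of steps until every vertex has been visited."""
    rnd = random.Random(seed)
    visited, v, steps = {0}, 0, 0
    while len(visited) < n:
        if v == 0:
            v = 1
        elif v == n - 1:
            v = n - 2
        else:
            v += rnd.choice((-1, 1))
        visited.add(v)
        steps += 1
    return steps

# Average over a few runs: grows roughly like n^2, far above linear.
avg = sum(cover_time_path(50, seed=s) for s in range(100)) / 100
print(avg)
```

On the path started from an endpoint, covering means reaching the opposite endpoint, whose expected hitting time is (n-1)^2; for n = 50 the average above is typically in the low thousands, not near 50.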

**Hardness of the Covering Radius Problem on Lattices** by Ishay Haviv and Oded Regev.

August 11, 2012

We provide the first hardness result for the Covering Radius Problem on lattices (**CRP**). Namely, we show that for any large enough p there exists a constant c(p) such that **CRP** in the l_p norm is Π_2-hard to approximate to within any constant factor less than c(p). In particular, for the case p = infinity, we obtain the constant c = 3/2. This gets close to the factor 2 beyond which the problem is not believed to be Π_2-hard (Guruswami et al., Computational Complexity, 2005).

**Best Effort and Priority Queuing Policies for Buffered Crossbar Switches** by Alex Kesselman, Kirill Kogan and Michael Segal

August 16, 2012

The buffered crossbar switch architecture has recently gained considerable research attention. In such a switch, besides normal input and output queues, a small buffer is associated with each crosspoint. Due to the introduction of crossbar buffers, output and input contention is eliminated, and the scheduling process is greatly simplified. We analyze the performance of switch policies by means of competitive analysis, where a worst-case throughput guarantee is provided for all traffic patterns. The goal of the switch policy is to maximize the total value of packets sent out of the switch. For the case of unit length and value packets (Best Effort), we present a simple greedy switch policy that is at most 4-competitive and at least 3/2-competitive. Moreover, we demonstrate that any online policy for this case is at least 3/2-competitive for a special case of unit size buffers.

For the case of variable value packets, we consider the Priority Queueing (PQ) mechanism, which provides better Quality of Service (QoS) guarantees by decreasing the delay of real-time traffic. We propose a preemptive greedy switch policy with a preemption factor of β whose competitive ratio is at most (β + 2)^2 + 2/(β - 1) (16.24 for β = 1.53) and at least (2β - 1)/(β - 1) (3.87 for β = 1.53). The upper bounds hold for any value of the switch fabric speedup. Moreover, the presented policies incur low overhead and are amenable to efficient hardware implementation at wire speed. To the best of our knowledge, this is the first work on competitive analysis for the buffered crossbar switch architecture.

**Interactive proofs with efficient quantum prover for recursive Fourier sampling** by Matthew McKague

We consider the recursive Fourier sampling problem (RFS), and show that there exists an interactive proof for RFS with an efficient classical verifier and an efficient quantum prover.

**Computational Models with No Linear Speedup** by Amir M. Ben-Amram, Niels Christensen and Jakob Grue Simonsen

The linear speedup theorem states, informally, that constants do not matter: it is essentially always possible to find a program solving any decision problem a factor of 2 faster. This result is a classical theorem in computing, but also one of the most debated. The main ingredient of the typical proof of the linear speedup theorem is tape compression, where a fast machine is constructed with a tape alphabet or number of tapes far greater than that of the original machine. In this paper, we prove that limiting Turing machines to a fixed alphabet and a fixed number of tapes rules out linear speedup. Specifically, we describe a language that can be recognized in linear time (e.g., 1.51n), and provide a proof, based on Kolmogorov complexity, that the computation cannot be sped up (e.g., below 1.49n). Without the tape and alphabet limitation, the linear speedup theorem does hold and yields machines of time complexity of the form (1+ε)n for arbitrarily small ε > 0.

Earlier results negating linear speedup in alternative models of computation have often been based on the existence of very efficient universal machines. In the vernacular of programming language theory: These models have very efficient self-interpreters. As the second contribution of this paper, we define a class, PICSTI, of computation models that exactly captures this property, and we disprove the Linear Speedup Theorem for every model in this class, thus generalizing all similar, model-specific proofs.

**Almost all decision trees do not allow significant quantum speed-up** by Ashley Montanaro

We show that, for any *d*, all but a doubly exponentially small fraction of decision trees of depth at most *d* require Ω(*d*) quantum queries to be computed with bounded error. In other words, most efficient classical algorithms in the query complexity model do not admit a significant quantum speed-up. The proof is based on showing that, with high probability, the average sensitivity of a random decision tree is high.

**On Quantum Interactive Proofs with Short Messages** by Attila Pereszlényi

This paper proves one of the open problems posed by Beigi, Shor and Watrous in *Theory of Computing*, volume 7, article 7, pp. 101–117 (2011). We consider quantum interactive proof systems where, in the beginning, the verifier and prover send messages to each other, with the combined length of all messages being at most logarithmic (in the input length); and at the end, the prover sends a polynomial-length message to the verifier. We show that this class has the same expressive power as *QMA*.

**Learning graph based quantum query algorithms for finding constant-size subgraphs** by Troy Lee, Frédéric Magniez and Miklos Santha

Let H be a fixed k-vertex graph with m edges and minimum degree d > 0. We use the learning graph framework of Belovs to show that the bounded-error quantum query complexity of determining if an n-vertex graph contains H as a subgraph is O(n^(2 - 2/k - t)), where

t = max{A, B} > 0, with

A = (k^2 - 2(m+1)) / (k(k+1)(m+1))

and

B = (2k - d - 3) / (k(d+1)(m - d + 2)).
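As a worked check of the exponent (exact arithmetic via `fractions`; the triangle instance is chosen here for illustration, not taken from the abstract): for H a triangle we have k = 3, m = 3, d = 2, giving A = 1/48, B = 1/27, so t = 1/27 and the bound becomes O(n^(35/27)), matching the known learning-graph bound for triangle finding.

```python
from fractions import Fraction

def exponent(k, m, d):
    """The exponent 2 - 2/k - t from the abstract, where
    t = max{A, B} with A and B as defined above."""
    A = Fraction(k * k - 2 * (m + 1), k * (k + 1) * (m + 1))
    B = Fraction(2 * k - d - 3, k * (d + 1) * (m - d + 2))
    return 2 - Fraction(2, k) - max(A, B)

# Triangle: k = 3 vertices, m = 3 edges, minimum degree d = 2.
print(exponent(3, 3, 2))  # 35/27, i.e. roughly n^1.296
```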


Janos Simon