Published by the Department of Computer Science, The University of Chicago.
October 16, 2012
The linear speedup theorem states, informally, that constants do not matter: it is essentially always possible to find a program solving any decision problem a factor of 2 faster. This result is a classical theorem in computing, but also one of the most debated.
The main ingredient of the typical proof of the linear speedup theorem is tape
compression, where a fast machine is constructed with tape alphabet or
number of tapes far greater than that of the original machine. In this paper,
we prove that limiting Turing machines to a fixed
alphabet and a fixed number of tapes rules out linear speedup.
Specifically, we describe a language that can be recognized in linear time
(e.g., 1.51n), and provide a proof, based
on Kolmogorov complexity, that the computation cannot be sped up
(e.g., below 1.49n).
Without the tape and alphabet limitation, the linear speedup theorem does
hold and yields machines of time complexity of the form (1+ε)n
for arbitrarily small ε > 0.
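For reference, the classical speedup theorem (due to Hartmanis and Stearns) is commonly stated along the following lines; the exact additive terms vary between presentations, so this is a sketch rather than the paper's own formulation:

```latex
% Linear speedup, classical form (additive terms vary by presentation):
% if L is decided by a multitape Turing machine in time f(n), then for
% every eps > 0 there is a machine -- with a larger tape alphabet and
% possibly more tapes -- deciding L in time eps * f(n) + n + 2.
\[
  L \in \mathrm{DTIME}\bigl(f(n)\bigr)
  \;\Longrightarrow\;
  L \in \mathrm{DTIME}\bigl(\varepsilon\, f(n) + n + 2\bigr)
  \quad\text{for every } \varepsilon > 0 .
\]
```

The enlarged alphabet is what makes tape compression possible, and it is exactly this freedom that the fixed-alphabet, fixed-tape restriction above removes.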
Earlier results negating linear speedup in alternative models of computation have often been based on the existence of very efficient universal machines. In the vernacular of programming language theory: These models have very efficient self-interpreters. As the second contribution of this paper, we define a class, PICSTI, of computation models that exactly captures this property, and we disprove the Linear Speedup Theorem for every model in this class, thus generalizing all similar, model-specific proofs.
Submitted October 11, 2011, revised August 20, 2012, and October 1, 2012; published October 16, 2012.