% The Chicago Journal of Theoretical Computer Science, Volume 1997 Article 5
% Published by MIT Press, Cambridge, Massachusetts USA
% Copyright 1997 by Massachusetts Institute of Technology
%
\newtoks\rstyle
\rstyle={cjlook}
%
\documentclass[v1.1,published]{cjstruct}
%
% Local style resources for this article.
%
\usestyleresource{amstext}
%
% Local macro definitions for this article.
%
\catcode`\@=11
\ifx\nopagebreakbeginpar\undeftex
\newcommand{\nopagebreakbeginpar}{\@beginparpenalty10000}
\fi
\ifx\nopagebreakendpar\undeftex
\newcommand{\nopagebreakendpar}{\@endparpenalty10000}
\fi
\ifx\nopagebreakinteritem\undeftex
\newcommand{\nopagebreakinteritem}{\@itempenalty10000}
\fi
\catcode`\@=12
%
\newcommand{\prooffontshape}{\fontshape{n}\selectfont}
%
\newcommand{\setof}[2]{\left\{\,#1:#2\,\right\}}
\newcommand{\mult}{\cdot}
\newcommand{\ceiling}[1]{\left\lceil#1\right\rceil}
%
\usestyleresource{epic}
\usestyleresource{eepic}
\usestyleresource{latexsym}
\usestyleresource{amsmath}
%
\newcommand{\textuprm}[1]{\textup{\textrm{#1}}}
\mathstyleclass{\complexityclass}{\mathord}{\textuprm}
\newcommand{\clows}{\mbox{\({\cal W }\)}}
%
\def\registered{\({}^{\ooalign%
{\hfil\raise.07ex\hbox{{\tiny\textup{R}}}%
\hfil\crcr\scriptsize\mathhexbox20D\cr}}\)}
%
% end of local macro definitions for this article
%
\begin{document}
%
\begin{articleinfo}
%
\publisher{The MIT Press}
%
\publishercomment{%
{\Large
\begin{center}
Volume 1997, Article 5\\
\textit{31 December 1997}
\end{center}
}
\par\noindent
ISSN 1073-0486\@. MIT Press Journals, Five Cambridge Center, Cambridge, MA
02142-1493 USA; (617)253-2889;
\emph{journals-orders@mit.edu, journals-info@mit.edu}\@.
Published one article at a time in \LaTeX\ source form on the
Internet\@. Pagination varies from copy to copy\@. For more
information and other articles see:
\begin{itemize}
\item \emph{http://mitpress.mit.edu/CJTCS/}
\item \emph{http://www.cs.uchicago.edu/publications/cjtcs/}
\item \emph{ftp://mitpress.mit.edu/pub/CJTCS}
\item \emph{ftp://cs.uchicago.edu/pub/publications/cjtcs}
\end{itemize}
\sentence
\par\noindent
The \emph{Chicago Journal of Theoretical Computer Science} is
abstracted or indexed in \emph{Research Alert,\registered
SciSearch,\registered Current Contents\registered/Engineering
Computing \& Technology}, and \emph{CompuMath Citation Index\@.\registered}
}
%
\copyrightstmt{%
\copyright 1997 The Massachusetts Institute of Technology\@.
Subscribers are licensed to use journal articles in a variety of
ways, limited only as required to insure fair attribution to authors
and the journal, and to prohibit use in a competing commercial
product\@. See the journal's World Wide Web site for further
details\@. Address inquiries to the Subsidiary Rights Manager, MIT
Press Journals; (617)253-2864;
\emph{journals-rights@mit.edu}\@.\pagebreak[2]
}
%
\journal{Chicago\nolinebreak[1] Journal of Theoretical Computer\nolinebreak[1] Science}
\journalcomment{%
The \emph{Chicago Journal of Theoretical Computer Science}
is a peer-reviewed scholarly journal in theoretical computer
science\@. The journal is committed to providing a forum for
significant results on theoretical aspects of all topics in computer
science\@.
\par
\begin{trivlist}
\item[\emph{Editor in chief:}] Janos Simon
\item[\emph{Consulting editors:}]
Joseph Halpern, Stuart A.\ Kurtz, Raimund Seidel
\item[\emph{Editors:}]
\begin{tabular}[t]{lll}
Martin Abadi & Greg Frederickson & John Mitchell \\
Pankaj Agarwal & Andrew Goldberg & Ketan Mulmuley \\
Eric Allender & Georg Gottlob & Gil Neiger \\
Tetsuo Asano & Vassos Hadzilacos & David Peleg \\
L\'aszl\'o Babai & Juris Hartmanis & Andrew Pitts \\
Eric Bach & Maurice Herlihy & James Royer \\
Stephen Brookes & Ted Herman & Alan Selman \\
Jin-Yi Cai & Stephen Homer & Nir Shavit \\
Anne Condon & Neil Immerman & Eva Tardos \\
Cynthia Dwork & Howard Karloff & Sam Toueg \\
David Eppstein & Philip Klein & Moshe Vardi \\
Ronald Fagin & Phokion Kolaitis & Jennifer Welch \\
Lance Fortnow & Stephen Mahaney & Pierre Wolper \\
Steven Fortune & Michael Merritt
\end{tabular}
\vspace{1ex}
\item[\emph{Managing editor:}] Michael J.\ O'Donnell
\vspace{1ex}
\item[\emph{Electronic mail:}] \emph{chicago-journal@cs.uchicago.edu}
\end{trivlist}
\sentence
}
%
\bannerfile{cjtcs-banner.tex}
%
\volume{1997}
%
\articleid{5}
%
\selfcitation{
@article{cj97-05,
author={Meena Mahajan and V. Vinay},
title={Determinant: Combinatorics, Algorithms, and Complexity},
journal={Chicago Journal of Theoretical Computer Science},
volume={1997},
number={5},
publisher={MIT Press},
month={December},
year={1997},
}
}
%
\begin{retrievalinfo}
\end{retrievalinfo}
%
\title{Determinant: Combinatorics, Algorithms, and
Complexity\footnote{A preliminary version of this paper
appeared in \cite{cj97-05-16}\@.}}
\shorttitle{Determinant}
%
\begin{authorinfo}
%
% Correction to error in cjstruct.sty version 1.1.
%
\def\fixversion{1.1}
\ifx\cjversion\fixversion
\renewcommand{\printname}[1]{
\global\authorstring={#1}
\global\authorstrue
\ifinfo\item Printed name: #1\fi
}
\fi
%
% End of correction.
%
\printname{Meena Mahajan}
\collname{}{Mahajan, Meena}
\begin{institutioninfo}
\instname{Institute of Mathematical Sciences}
\department{Theoretical Computer Science Group}
\address{}{Chennai}{600 113}{India}{}
\end{institutioninfo}
\email{meena@imsc.ernet.in}
\end{authorinfo}
\begin{authorinfo}
\printname{V. Vinay}
\collname{}{Vinay, V.}
\begin{institutioninfo}
\instname{Indian Institute of Science}
\department{Department of Computer Science and Automation}
\address{}{Bangalore}{560 012}{India}{}
\end{institutioninfo}
\email{vinay@csa.iisc.ernet.in}
\end{authorinfo}
%
\shortauthors{Mahajan and Vinay}
%
\support{
Part of this work was done when Meena Mahajan was visiting the
Department of Computer Science and Automation, IISc, Bangalore.
}
%
\begin{editinfo}
\submitted{01}{08}{1997}\reported{18}{09}{1997}
\submitted{03}{10}{1997}\reported{09}{10}{1997}\published{31}{12}{1997}
\end{editinfo}
%
\end{articleinfo}
%
\begin{articletext}
%
\begin{abstract}
%
\apar{Abstract-1}
We prove a new combinatorial characterization of the {\em
determinant}\@. The characterization yields a simple combinatorial
algorithm for computing the determinant\@. Hitherto, all (known)
algorithms for the determinant have been based on linear algebra\@. Our
combinatorial algorithm requires no division, and works over arbitrary
commutative rings\@. It also lends itself to efficient parallel
implementations\@.
%
\apar{Abstract-2}
It has been known for some time now that the complexity class \(\complexityclass{GapL}\)
characterizes the complexity of computing the determinant of matrices
over the integers\@. We present a direct proof of this characterization\@.
\end{abstract}
\asection{1}{Introduction}
%
\apar{1-1}
The determinant has been a subject of study for over 200 years\@. Its
history can be traced back to Leibniz, Cramer, Vandermonde, Binet,
Cauchy, Jacobi, Gauss, and others\@. Given its importance in linear
algebra in particular and in geometry in general, it is not surprising
that a galaxy of great mathematicians investigated the determinant
{from} varied viewpoints\@.
%
\apar{1-2}
The algorithmic history of the determinant is as old as the
mathematical concept itself\@. After all, the determinant was invented
to solve systems of linear equations\@. Much of the initial effort was
expended on proving the so-called ``Cramer's rule,'' ``Laplace
expansion,'' and the ``Cauchy-Binet theorem,'' and these led to a
variety of interesting algebraic identities\@. The first definitions of
determinant used \emph{inversions} as a means of computing the signs of
permutations\@. Cauchy realized that the sign of a permutation can be
computed more easily from the permutation's cycle decomposition:
if \(k\) is the number of cycles in the decomposition of a
permutation in \(S_n\), he showed that the sign is
\((-1)^{n-k}\)\@. In a sense, Cauchy appears to have started the combinatorial
approach to determinants\@.
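% The agreement of the two sign definitions is easy to check directly.
% The following Python sketch (our illustration, not part of the article;
% all function names are ours) computes the sign both by counting
% inversions and by Cauchy's cycle-decomposition rule:

```python
from itertools import permutations

def sign_by_inversions(sigma):
    """Sign as (-1)^(number of inversions)."""
    n = len(sigma)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if sigma[i] > sigma[j])
    return (-1) ** inv

def sign_by_cycles(sigma):
    """Cauchy's rule: (-1)^(n - k), where k is the number of cycles."""
    n, seen, k = len(sigma), set(), 0
    for i in range(n):
        if i not in seen:
            k += 1
            j = i
            while j not in seen:      # walk the cycle containing i
                seen.add(j)
                j = sigma[j]
    return (-1) ** (n - k)

# The two definitions agree on every permutation of {0, ..., 3}.
assert all(sign_by_inversions(s) == sign_by_cycles(s)
           for s in permutations(range(4)))
```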
%
\apar{1-3}
The so-called ``Gaussian elimination'' is a standard procedure for
calculating the determinant\@. It converts a given matrix into an upper
triangular matrix using elementary row operations, which maintain the
value of the determinant, and uses \(O(n^3)\) operations\@. It can be
shown that the sizes of numbers in the intermediate steps are small,
and this gives rise to a polynomial time algorithm\@. This algorithm,
however, appears to be sequential\@. In its present form, the algorithm
would require division, rendering it useless over arbitrary
rings\@. To use this method over an integral domain, one computes in its
field of fractions (e.g., for computing the determinant over the
integers, one computes with rationals)\@. While theoretically correct, this procedure often
introduces a computational problem; for instance, over integers,
because of the divisions involved, this method may needlessly
introduce floating point errors\@. Thus, in several situations, a
division-free method that still has polynomial bit complexity would
be preferable to Gaussian elimination\@.
%
\apar{1-4}
Numerical analysts have looked closely at the problem of
computing the determinant, and also the associated problem of computing
the characteristic polynomial of a given matrix\@. An authoritative book
on this subject is due to Faddeev and Faddeeva \cite{cj97-05-12}\@. The book lists
more than half a dozen methods for computation of the
characteristic polynomial\@. The most important among them seem to be
Krylov's method, Leverrier's method, and Samuelson's
method\@. Csanky \cite{cj97-05-09} observes that Leverrier's method may be
implemented in \(\complexityclass{NC\(^2\)}\)\@. However, Leverrier's method uses division, and
hence is unsuitable over arbitrary fields\@. (The method is applicable
only over fields of characteristic zero or over fields with
characteristic greater than the dimension of the matrix, so the
algorithm cannot be used, in general, over finite fields\@.) Berkowitz
\cite{cj97-05-04} observes that Samuelson's method \cite{cj97-05-20} is
division free and may be implemented in \(\complexityclass{NC\(^2\)}\)\@. Valiant \cite{cj97-05-25}
analyzes the nature of monomials that result from Samuelson's method\@.
Independently, Chistov \cite{cj97-05-07} uses arithmetic over polynomials to
come up with a division-free \(\complexityclass{NC\(^2\)}\) algorithm\@. Thus the
Samuelson-Berkowitz algorithm, as well as Chistov's algorithm, can be
used over any commutative ring\@.
%
\apar{1-5}
Vinay~\cite{cj97-05-27}, Damm~\cite{cj97-05-10}, and Toda~\cite{cj97-05-22} observed
independently that \(\complexityclass{DET}\) (as a complexity class) has an exact
characterization\@. They showed that over integers, \(\complexityclass{DET}\) is exactly
\(\complexityclass{GapL}\)\@. That is, any function that is logspace reducible to computing
the determinant of a matrix over integers can be computed as the
difference of two \(\complexityclass{\#L}\) functions\@. Here, a \(\complexityclass{\#L}\) function
corresponds to the number of accepting paths in an \(\complexityclass{NL}\) machine; a
\(\complexityclass{GapL}\) function corresponds to the difference between the number of
accepting paths and the number of rejecting paths in an \(\complexityclass{NL}\) machine,
or equivalently, to the difference between two \(\complexityclass{\#L}\) functions\@. This
characterization of \(\complexityclass{DET}\) establishes a telling parallel between the
complexities of two major algorithmic problems, namely the
permanent and the determinant\@. While Valiant \cite{cj97-05-24} shows
that computing the permanent is \(\complexityclass{GapP}\) complete, the determinant is
complete for \(\complexityclass{GapL}\); both are complete for counting versions of
nondeterministic classes\@. An interesting feature of the three
independent proofs cited above is that they all rely on Samuelson's
method to convert the problem of computing the determinant to iterated
matrix multiplication\@. In this paper, we present a direct and
self-contained proof of this theorem\@.
%
\apar{1-6}
We give the first combinatorial algorithm for computing the
determinant\@. We do this by extending the definition of a permutation
to a \emph{clow sequence} (``clow'' being short for ``closed walk'')\@.
Using a combinatorial proof, we establish that all
clow sequences that are not permutations cancel each other, leaving
precisely the permutations\@. We then show how clow sequences may be
realized in a simple graph-theoretic model\@. The model is described by
a tuple \(\langle G,s,t_+,t_- \rangle\), where \(G\) is a directed acyclic
graph (DAG), and \(s\), \(t_+\), \(t_-\) are distinguished vertices in \(G\)\@.
Let \(paths(G,s,t)\) denote the number of paths from \(s\) to \(t\) in \(G\)\@.
Then the integer function computed by \(\langle G,s,t_+,t_- \rangle\) is
\(paths(G,s,t_+) - paths(G,s,t_-)\)\@. The model yields a polynomial time
algorithm via simple dynamic programming techniques (see
Table~1)\@. It characterizes \(\complexityclass{GapL}\) exactly, and also admits
characterizations in terms of arithmetic skew circuits
\cite{cj97-05-26,cj97-05-22} and arithmetic branching programs, yielding
\(\complexityclass{NC\(^2\)}\) and \(\complexityclass{GapL}\)
algorithms (see Tables~2 and~3)\@. These
results stand in contrast to Nisan's results \cite{cj97-05-17}, which
show that the determinant cannot be computed by a polynomial-size
branching program over a noncommutative semi-ring\@.
%
\apar{1-7}
The size of the DAG we construct is about \(O(n^4)\), with \(O(n^6)\)
edges, and may be implemented on an arithmetic skew circuit with
\(O(n^6)\) wires\@. This compares favorably with the \(O(n^{18})\)
implementation of Toda\@. (In \cite{cj97-05-23}, Toda notes that
Samuelson's method can be implemented on arithmetic skew circuits of
size \(n^{18}\)\@.)
%
\apar{1-8}
Our combinatorial proof is inspired by Straubing, who gives a purely
combinatorial interpretation and a very elegant proof of the
Cayley-Hamilton theorem \cite{cj97-05-21}\@.
%
\apar{1-9}
Various other parallel algorithms for computing the determinant
(including Chistov's method and the Samuelson-Berkowitz method) can
also be interpreted combinatorially, and correctness can also be
proved using purely combinatorial techniques\@. The objects generated by
these algorithms turn out to be variations of clow sequences\@.
Such interpretations for some algorithms are found in \cite{cj97-05-15}\@.
%
\apar{1-10}
Of course, the combinatorial approach cannot replace the algebraic one
altogether\@. However, it can (as we feel it does in this case) offer
interesting insights into the nature of a seemingly purely algebraic
problem\@.
\asection{2}{The Combinatorics}
%
\apar{2-1}
We will start with the definition of the determinant of an
\(n\)-dimensional matrix, \(A\):
\[\det(A)= \sum_{\sigma \in S_n}sgn(\sigma)\prod_i a_{i\sigma(i)}\]
The summation is over all permutations on \(n\) elements\@. The sign
of a permutation is defined in terms of the number of inversions:
\[sgn(\sigma)= (-1)^{\mbox{\scriptsize number of inversions in \(\sigma\)}}\]
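% Read literally, this definition yields an \(O(n!\mult n)\)-time
% algorithm. The Python sketch below (our illustration, not part of the
% article) implements the definition verbatim; it is useful only for
% checking the efficient algorithms on small matrices:

```python
from itertools import permutations

def det_by_definition(a):
    """Determinant via the permutation-sum definition (O(n! * n) time)."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        # sign via the inversion count: (-1)^(#inversions)
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        prod = 1
        for i in range(n):
            prod *= a[i][sigma[i]]
        total += (-1) ** inv * prod
    return total
```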
%
\apar{2-2}
To move to a combinatorial setting, we interpret the matrix \(A\) as a
weighted, directed graph \(G_A\) on \(n\) vertices, where the weight on
the directed edge \(\langle i,j\rangle \) is \(a_{ij}\)\@. A permutation in
\(S_n\) therefore corresponds to a \emph{cycle cover}: the cycle decomposition
of the permutation, when interpreted as a graph, induces a partition
on the vertex set into disjoint cycles\@.
%
\apar{2-3}
This definition cannot be directly converted into an efficient
algorithm for the determinant, because the number of monomials in the above
definition is \(n!\)\@. Since explicit enumeration is out of the question, any
efficient algorithm must implicitly sum over all monomials\@. The bottleneck
in doing so directly is that these permutations are not easily
``factorizable'' to allow for a simple implementation\@. We will get
around this problem by enlarging the summation from cycle covers to
clow sequences\@.
%
\apar{2-4}
A clow is a walk \(\langle w_1,\dots,
w_l\rangle \) starting from vertex \(w_1\) and ending at the same
vertex, where any \(\langle w_i,w_{i+1}\rangle \) is an edge in the
graph\@. Vertex \(w_1\) is the least-numbered vertex in the clow, and is called
the head of the clow\@. We also require that the head occur only
once in the clow\@. This means that there is exactly one incoming edge
(\(\langle w_l, w_1 \rangle \)) and one outgoing edge (\(\langle
w_1, w_2 \rangle \)) at \(w_1\) in the clow\@.
%
\apar{2-5}
A clow sequence is a sequence of clows \(\clows = \langle C_1,
\dots, C_k\rangle \) with two properties:
\begin{enumerate}
\item the sequence is ordered: \(\mbox{head}(C_1) < \mbox{head}(C_2) <
\dots < \mbox{head}(C_k)\), and
\item the total number of edges (counted with multiplicity) adds to exactly \(n\)\@.
\end{enumerate}
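% The two definitions above can be stated operationally. In the Python
% sketch below (our illustration, with 0-based vertex names; all
% function names are ours), a clow is written as its vertex sequence
% \(\langle w_1,\dots,w_l\rangle\), which contributes \(l\) edges,
% including the closing edge back to the head:

```python
def is_clow(c):
    """A clow (w1, ..., wl): the head w1 is the least-numbered vertex
    and occurs exactly once."""
    return len(c) >= 1 and all(v > c[0] for v in c[1:])

def is_clow_sequence(clows, n):
    """A clow sequence: heads strictly increase, and the edge counts
    (a clow on l vertices has l edges) sum to exactly n."""
    heads = [c[0] for c in clows]
    return (all(is_clow(c) for c in clows)
            and all(a < b for a, b in zip(heads, heads[1:]))
            and sum(len(c) for c in clows) == n)

assert is_clow_sequence([(1, 2), (3, 4)], 4)      # also a cycle cover
assert is_clow_sequence([(1, 2, 3, 2)], 4)        # vertex 2 repeats: not a cycle cover
assert not is_clow_sequence([(3, 4), (1, 2)], 4)  # heads out of order
```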
%
\apar{2-6}
A cycle cover is a special type of clow sequence\@. We will now show
how to associate a sign with a clow sequence that is consistent with
the definition of the sign of a cycle cover\@. The sign of a cycle cover
can be shown to be \((-1)^{n+k}\), where \(n\) is the number of vertices in
the graph, and \(k\) is the number of components in the cycle cover\@. The
sign of a clow sequence is defined to be \((-1)^{n+k}\), where \(n\) is the
number of vertices in the graph, and \(k\) is the number of clows in the
sequence\@.
%
\apar{2-7}
We will also associate a weight with a clow sequence that is
consistent with the contribution of a cycle cover\@. The weight of a
clow \(C\), \(w(C)\), is the product of the weights of the edges in the
walk while accounting for multiplicity\@. For example, \(w(\langle
1,2,3,2,3\rangle ) = a_{12}a_{23}^2a_{32}a_{31}\)\@. The weight of a
clow sequence \(\clows = \langle C_1, \dots, C_k\rangle \) is
\(w(\clows) = \prod_{i}w(C_i)\)\@.
%
\begin{alabsubtext}{Theorem}{1}{}
\[\det(A)= \sum_{\clows:\mbox{~a clow sequence} }sgn(\clows)w(\clows)\]
\end{alabsubtext}
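% The theorem can be verified by brute force on small matrices. The
% Python sketch below (our illustration; it is not the efficient
% algorithm of Section~3, and all names are ours) enumerates every clow
% sequence, using 0-based vertex names, and forms the signed sum:

```python
from itertools import product

def clow_weight_sum(a, h, length):
    """Sum of weights of all clows with head h and `length` edges."""
    n = len(a)
    total = 0
    # intermediate vertices of the walk, each strictly greater than the head
    for mid in product(range(h + 1, n), repeat=length - 1):
        walk = (h,) + mid + (h,)          # close the walk back at the head
        w = 1
        for x, y in zip(walk, walk[1:]):
            w *= a[x][y]
        total += w
    return total

def det_by_clow_sequences(a):
    """Theorem 1: determinant as the signed sum over all clow sequences."""
    n = len(a)
    def go(min_head, edges_left, k):
        if edges_left == 0:
            return (-1) ** (n + k)        # sign of a sequence with k clows
        total = 0
        for h in range(min_head, n):      # heads strictly increase
            for l in range(1, edges_left + 1):
                total += clow_weight_sum(a, h, l) * go(h + 1, edges_left - l, k + 1)
        return total
    return go(0, n, 0)
```

The cancellation asserted by the theorem is exercised already at \(n = 3\), where clow sequences that are not cycle covers first appear.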
%
\begin{figure*}[t]
%\input{clow.eepic}
\setlength{\unitlength}{0.0100in}
%
\begingroup\makeatletter\ifx\SetFigFont\undefined
% extract first six characters in \fmtname
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}%
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}%
\ifx\x\y % LaTeX or SliTeX?
\gdef\SetFigFont#1#2#3{%
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}%
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}%
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}%
\fi
\fi\endgroup
\begin{picture}(526,303)(0,-10)
\put(41,260){\ellipse{6}{6}}
\put(134,212){\ellipse{6}{6}}
\put(93,106){\ellipse{6}{6}}
\put(156,213){\ellipse{46}{46}}
\put(445,258){\ellipse{6}{6}}
\put(497,104){\ellipse{6}{6}}
\put(502,79){\ellipse{46}{46}}
\path(161,236)(166,234)
\drawline(51,240)(51,240)
\drawline(51,240)(51,240)
\path(51,240)(56,230)
\path(96,180)(91,175)
\path(61,165)(51,165)
\path(6,200)(6,210)
\drawline(455,238)(455,238)
\drawline(455,238)(455,238)
\path(455,238)(460,228)
\path(500,178)(495,173)
\path(465,163)(455,163)
\path(410,198)(410,208)
\path(525,75)(524,67)
\path(221,180)(321,180)(301,200)
\path(321,160)(221,160)(241,140)
\path(41,260) (46.732,255.237)
(49.095,249.841)
(50.207,246.688)
(51.293,243.345)
(52.371,239.898)
(53.455,236.432)
(54.562,233.031)
(55.708,229.782)
(58.180,224.077)
(61.000,220.000)
\path(61,220) (64.872,217.461)
(70.078,215.672)
(76.113,214.262)
(79.283,213.585)
(82.470,212.864)
(88.643,211.107)
(94.126,208.624)
(98.414,205.044)
(101.000,200.000)
\path(101,200) (101.645,195.414)
(101.155,191.140)
(99.684,187.136)
(97.385,183.358)
(94.411,179.764)
(90.915,176.309)
(87.051,172.952)
(82.973,169.648)
(78.833,166.356)
(74.784,163.031)
(70.980,159.632)
(67.575,156.114)
(64.721,152.435)
(62.571,148.552)
(61.280,144.421)
(61.000,140.000)
\path(61,140) (62.109,133.900)
(63.176,130.735)
(64.538,127.556)
(68.006,121.338)
(72.229,115.612)
(76.922,110.748)
(81.804,107.112)
(86.591,105.074)
(91.000,105.000)
\path(91,105) (94.987,107.123)
(98.570,111.132)
(101.561,116.586)
(102.776,119.716)
(103.771,123.040)
(104.523,126.504)
(105.009,130.052)
(105.204,133.628)
(105.086,137.178)
(104.630,140.645)
(103.813,143.975)
(101.000,150.000)
\path(101,150) (95.735,156.031)
(92.557,158.567)
(89.090,160.777)
(85.394,162.656)
(81.525,164.199)
(77.541,165.401)
(73.500,166.255)
(69.459,166.758)
(65.475,166.904)
(61.606,166.688)
(57.910,166.106)
(54.443,165.150)
(51.265,163.818)
(46.000,160.000)
\path(46,160) (43.052,155.457)
(41.451,149.707)
(41.133,146.520)
(41.124,143.200)
(41.414,139.802)
(41.995,136.383)
(42.858,132.999)
(43.992,129.706)
(47.041,123.618)
(51.068,118.566)
(56.000,115.000)
\path(56,115) (59.718,113.638)
(63.794,113.008)
(68.142,113.037)
(72.679,113.651)
(77.318,114.777)
(81.977,116.341)
(86.570,118.271)
(91.013,120.493)
(95.221,122.935)
(99.109,125.522)
(102.594,128.181)
(105.591,130.839)
(109.780,135.861)
(111.000,140.000)
\path(111,140) (106.890,143.867)
(102.662,144.544)
(97.246,144.598)
(94.154,144.442)
(90.839,144.189)
(87.325,143.862)
(83.636,143.478)
(79.798,143.059)
(75.834,142.624)
(71.769,142.194)
(67.629,141.788)
(63.436,141.427)
(59.216,141.131)
(54.993,140.920)
(50.792,140.813)
(46.638,140.831)
(42.554,140.994)
(38.565,141.321)
(34.697,141.834)
(30.972,142.552)
(27.416,143.495)
(24.054,144.683)
(20.910,146.136)
(15.372,149.917)
(11.000,155.000)
\path(11,155) (9.137,158.228)
(7.564,161.650)
(6.269,165.246)
(5.242,168.998)
(4.471,172.887)
(3.946,176.894)
(3.656,180.999)
(3.590,185.184)
(3.738,189.429)
(4.087,193.716)
(4.629,198.026)
(5.350,202.338)
(6.242,206.636)
(7.293,210.898)
(8.491,215.107)
(9.827,219.243)
(11.289,223.288)
(12.867,227.221)
(14.549,231.025)
(16.325,234.680)
(18.184,238.167)
(20.115,241.467)
(24.150,247.431)
(28.343,252.418)
(32.607,256.277)
(36.855,258.855)
(41.000,260.000)
\path(445,258) (450.732,253.237)
(453.095,247.841)
(454.207,244.688)
(455.293,241.345)
(456.371,237.898)
(457.455,234.432)
(458.562,231.031)
(459.708,227.782)
(462.180,222.077)
(465.000,218.000)
\path(465,218) (468.872,215.461)
(474.078,213.672)
(480.113,212.262)
(483.283,211.585)
(486.470,210.864)
(492.643,209.107)
(498.126,206.624)
(502.414,203.044)
(505.000,198.000)
\path(505,198) (505.645,193.414)
(505.155,189.140)
(503.684,185.136)
(501.385,181.358)
(498.411,177.764)
(494.915,174.309)
(491.051,170.952)
(486.973,167.648)
(482.833,164.356)
(478.784,161.031)
(474.980,157.632)
(471.575,154.114)
(468.721,150.435)
(466.571,146.552)
(465.280,142.421)
(465.000,138.000)
\path(465,138) (466.109,131.900)
(467.176,128.735)
(468.538,125.556)
(472.006,119.338)
(476.229,113.612)
(480.922,108.748)
(485.804,105.112)
(490.591,103.074)
(495.000,103.000)
\path(495,103) (498.987,105.123)
(502.570,109.132)
(505.561,114.586)
(506.776,117.716)
(507.771,121.040)
(508.523,124.504)
(509.009,128.052)
(509.204,131.628)
(509.086,135.178)
(508.630,138.645)
(507.813,141.975)
(505.000,148.000)
\path(505,148) (499.735,154.031)
(496.557,156.567)
(493.090,158.777)
(489.394,160.656)
(485.525,162.199)
(481.541,163.401)
(477.500,164.255)
(473.459,164.758)
(469.475,164.904)
(465.606,164.688)
(461.910,164.106)
(458.443,163.150)
(455.265,161.818)
(450.000,158.000)
\path(450,158) (447.052,153.457)
(445.451,147.707)
(445.133,144.520)
(445.124,141.200)
(445.414,137.802)
(445.995,134.383)
(446.858,130.999)
(447.992,127.706)
(451.041,121.618)
(455.068,116.566)
(460.000,113.000)
\path(460,113) (463.718,111.638)
(467.794,111.008)
(472.142,111.037)
(476.679,111.651)
(481.318,112.777)
(485.977,114.341)
(490.570,116.271)
(495.013,118.493)
(499.221,120.935)
(503.109,123.522)
(506.594,126.181)
(509.591,128.839)
(513.780,133.861)
(515.000,138.000)
\path(515,138) (510.890,141.867)
(506.662,142.544)
(501.246,142.598)
(498.154,142.442)
(494.839,142.189)
(491.325,141.862)
(487.636,141.478)
(483.798,141.059)
(479.834,140.624)
(475.769,140.194)
(471.629,139.788)
(467.436,139.427)
(463.216,139.131)
(458.993,138.920)
(454.792,138.813)
(450.638,138.831)
(446.554,138.994)
(442.565,139.321)
(438.697,139.834)
(434.972,140.552)
(431.416,141.495)
(428.054,142.683)
(424.910,144.136)
(419.372,147.917)
(415.000,153.000)
\path(415,153) (413.137,156.228)
(411.564,159.650)
(410.269,163.246)
(409.242,166.998)
(408.471,170.887)
(407.946,174.894)
(407.656,178.999)
(407.590,183.184)
(407.738,187.429)
(408.087,191.716)
(408.629,196.026)
(409.350,200.338)
(410.242,204.636)
(411.293,208.898)
(412.491,213.107)
(413.827,217.243)
(415.289,221.288)
(416.867,225.221)
(418.549,229.025)
(420.325,232.680)
(422.184,236.167)
(424.115,239.467)
(428.150,245.431)
(432.343,250.418)
(436.607,254.277)
(440.855,256.855)
(445.000,258.000)
\put(41,3){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}Case 1}}}}}
\put(444,0){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}Case 2}}}}}
\put(41,270){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}head}}}}}
\put(107,100){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}v}}}}}
\put(121,220){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}v}}}}}
\put(441,270){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}head}}}}}
\put(518,108){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}v}}}}}
\end{picture}
\afigcap{1}{Pairing clow sequences of opposing signs}
\end{figure*}
%
\begin{alabsubtext}{Proof of Theorem}{1}{}
\prooffontshape
%
\apar{Proof of Theorem 1-1}
We prove this by showing that the contribution of clow sequences that are
\emph{not} cycle covers is zero\@. Consequently, only the cycle covers
contribute to the summation, yielding exactly the determinant\@.
%
\apar{Proof of Theorem 1-2}
Our proof defines an involution on a signed set\@. An involution
\(\varphi\) on a set is a bijection with the property that \(\varphi^2\)
is the identity map on the set\@. The domain is the set of all clow
sequences, and their signs define a natural partition of the domain
into two sets\@.
%
\apar{Proof of Theorem 1-3}
We will now define an involution on this signed set\@.
It has the property that a clow sequence that is not a cycle cover is
paired with another clow sequence over the same multiset of edges, but
with opposing sign\@. The fixed points of the involution are precisely
the cycle covers\@. This is sufficient to establish the theorem\@.
%
\apar{Proof of Theorem 1-4}
The desired involution is the following\@. Let \(\clows = \langle C_1,
\dots, C_k \rangle\) be a clow sequence\@. Choose the smallest \(i\) such
that \(\langle C_{i+1}, \dots, C_{k} \rangle\) is a set of disjoint
(simple) cycles\@. If \(i = 0\), the involution maps \clows\ to itself\@.
These are obviously cycle covers and the only fixed points\@.
Otherwise, having chosen \(i\), traverse \(C_i\) starting from the head
until one of two things happens:
\begin{enumerate}
\item we hit a vertex that touches one of \(\langle C_{i+1}, \dots,
C_{k}\rangle\), or
\item we hit a vertex that completes a simple cycle within \(C_i\)\@.
\end{enumerate}
Let us call the vertex \(v\)\@. Given the way we chose \(i\), such a \(v\)
must exist\@. Vertex \(v\) cannot satisfy both of the above conditions: if
\(v\) completes a cycle \emph{and} it touches cycle \(C_j\), its previous
occurrence (which exists, or else there can be no cycle at \(v\)) also
touches \(C_j\), and the traversal would have stopped at that occurrence\@.
%
\begin{alabsubtext}{Case}{1}{}
\apar{Case 1-1}
\textup{
Suppose \(v\) touches \(C_j\)\@. We map \clows\ to a clow sequence:
\[\clows ' = \langle C_1, \dots,C_{i-1}, C_i', C_{i+1}, \dots ,
C_{j-1}, C_{j+1}, \dots C_k \rangle\] The modified clow, \(C_i'\), is
obtained by merging \(C_i\) and \(C_j\) as follows: insert the cycle \(C_j\)
into \(C_i\) at the first occurrence (from the head) of \(v\)\@. For
example, let \(C_i = \langle 8, 11, 10, 14 \rangle\) and \(C_j = \langle
9, 10, 12 \rangle\)\@. Then the new clow is \(\langle 8, 11, 10, 12, 9,
10, 14 \rangle\)\@. Figure~1 illustrates the mapping\@.
%
\apar{Case 1-2}
The head of \(C_i ' \) is clearly the head of \(C_i\)\@. The new sequence
has the same multiset of edges and hence the same weight as the
original sequence\@. It also has one component less than the original
sequence\@.
%
\apar{Case 1-3}
In the new sequence, vertex \(v\) in cycle \(C_i '\) would have
been chosen by our traversal, and it satisfies Case 2\@.
}
{\nopagebreakbeginpar\markendlst{Case 1}}
\end{alabsubtext}
%
\begin{alabsubtext}{Case}{2}{}
\textup{
\apar{Case 2-1}
Suppose \(v\) completes a simple cycle \(C\) in \(C_i\)\@. By our earlier
argument, cycle \(C\) cannot touch any of the later cycles\@. We now
modify the sequence \clows\ by deleting \(C\) from \(C_i\) and introducing
\(C\) as a new clow in an appropriate position, depending on the minimum
labeled vertex in \(C\), which we make the head of \(C\)\@. For example,
let \(C_i = \langle 8, 11, 10, 12, 9, 10, 14 \rangle\)\@. Then \(C_i\)
changes to \( \langle 8, 11, 10, 14 \rangle\), and the new cycle \(C =
\langle 9, 10, 12 \rangle\) is inserted in the clow sequence\@.
%
\apar{Case 2-2}
To show that the modified sequence continues to be a clow sequence,
note that the head of \(C\) is greater than the head of \(C_i\); hence
\(C\) occurs \emph{after} \(C_i\)\@. Also, the head of \(C\) is distinct from
the heads of \(C_j\) (\( i< j \le k\))\@. In fact, \(C\) is disjoint from all
cycles \(C_j\) (\( i< j \le k\))\@. (Otherwise, Case 1 would have been true\@.)
Further, the new sequence has the same multiset of edges and hence
the same weight as the original sequence\@. It also has one component
more than the original sequence\@.
%
\apar{Case 2-3}
Figure~1 illustrates the mapping\@.
In the new sequence, vertex \(v\) in cycle \(C_i '\) would have
been chosen by our traversal, and it satisfies Case 1\@.
}
{\nopagebreakbeginpar\markendlst{Case 2}}
\end{alabsubtext}
%
\apar{Proof of Theorem 1-5}
In both of the above cases, the new sequence constructed
maps back to the original sequence; therefore, the mapping is
a weight-preserving involution\@. Furthermore, the number of
clows in the two sequences differ by one, and hence the
signs are opposing\@. This completes the proof\@.
{\nopagebreakbeginpar\markendlst{Proof of Theorem 1}}
\end{alabsubtext}
%
\begin{alabsubtext}{Corollary}{1}{}
\[\det(A)= \sum_{\clows:\mbox{~a clow sequence with head of
first clow 1} }sgn(\clows)w(\clows)\]
\end{alabsubtext}
%
\begin{alabsubtext}{Proof of Corollary}{1}{}
\prooffontshape
In the involution defined above, the head of the first clow in the
clow sequence remains unchanged\@. Also, the head of the first cycle in
any cycle cover must be the vertex 1\@.
{\nopagebreakbeginpar\markendlst{Proof of Corollary 1}}
\end{alabsubtext}
\asection{3}{The Sequential Algorithm}
%
\apar{3-1}
Given an \(n \times n\) matrix \(A\), we define a layered, directed acyclic
graph \(H_A\) with three special vertices, \(s\), \(t_+\), and \(t_-\), having
the following property:
\[
\det(A) = \sum_{\rho : \mbox{~\(s \leadsto t_+\) path}}w(\rho) -
\sum_{\eta : \mbox{~\(s \leadsto t_-\) path}}w(\eta)
\]
Here the weight of a path is simply the product of the weights of the
edges appearing in it\@. The idea is that \(s\leadsto t_+\)
(\(s\leadsto t_-\) ) paths will be in one-to-one correspondence with
clow sequences of positive (negative) sign\@.
%
\apar{3-2}
The vertex set of \(H_A\) is \(\{s, t_+, t_- \} \cup \{ [p,h,u,i] \mid p
\in \{0,1 \}, h,u \in \{1, \dots, n\}, i \in \{0, \dots , n-1 \}
\}\)\@. If a path from \(s\) reaches a vertex of the form \([p,h,u,i]\), this
indicates that in the clow sequence being constructed along this path,
\(p\) is the parity of the quantity ``\(n\) + the number of components already
constructed,'' \(h\) is the head of the clow currently being constructed,
\(u\) is the vertex that the current clow has reached, and \(i\) edges
have been traversed so far (in this and preceding clows)\@. Finally, an
\(s\leadsto t_+\) (\(s\leadsto t_-\) ) path will correspond to a clow
sequence where \(n\) + the number of components in the sequence is even
(odd)\@.
%
\apar{3-3}
The edge set of \(H_A\) consists of the following types of edges:
\begin{enumerate}
\item \(\langle s, [b,h,h,0]\rangle \) for \(h \in \{1, \dots , n \}\),
\(b = n \bmod 2\); this edge has weight 1,
\item \(\langle [p,h,u,i],[p,h,v,i+1]\rangle \) if \(v > h\) and
\(i+1 < n\); this edge has weight \(a_{uv}\),
\item \(\langle [p,h,u,i],[\overline{p},h',h',i+1]\rangle \)
if \(h' > h\) and \(i+1 < n\); this edge has weight
\(a_{uh}\),
\item \(\langle [p,h,u,n-1], t_+\rangle \) if \(p = 1\); this
edge has weight \(a_{uh}\), and
\item \(\langle [p,h,u,n-1], t_-\rangle \) if \(p = 0\); this
edge has weight \(a_{uh}\)\@.
\end{enumerate}
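To make the construction concrete, the following Python sketch (our own illustrative transcription; the function name and the tuple encoding of vertices are not part of the formal development) enumerates the five edge types for a given matrix, with vertices 1-based and matrix entries 0-indexed:

```python
def build_edges(a):
    """Enumerate the weighted edges of H_A for the n x n matrix a.

    Vertices are 's', 't+', 't-', and tuples (p, h, u, i); the
    function returns (tail, head, weight) triples, one per edge of
    types 1-5 above.  Matrix entries are 0-indexed, vertices 1-based.
    """
    n = len(a)
    b = n % 2
    edges = [('s', (b, h, h, 0), 1) for h in range(1, n + 1)]  # type 1
    for p in (0, 1):
        for h in range(1, n + 1):
            for u in range(1, n + 1):
                for i in range(n - 1):           # edges into layers 1..n-1
                    for v in range(h + 1, n + 1):        # type 2: extend clow
                        edges.append(((p, h, u, i),
                                      (p, h, v, i + 1), a[u - 1][v - 1]))
                    for h2 in range(h + 1, n + 1):       # type 3: close, reopen
                        edges.append(((p, h, u, i),
                                      (1 - p, h2, h2, i + 1), a[u - 1][h - 1]))
                # types 4 and 5: close the final clow and exit
                t = 't+' if p == 1 else 't-'
                edges.append(((p, h, u, n - 1), t, a[u - 1][h - 1]))
    return edges
```

Summing the weights of all \(s \leadsto t_+\) paths in this edge list and subtracting those of all \(s \leadsto t_-\) paths then yields \(det(A)\).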
%
\begin{alabsubtext}{Theorem}{2}{}
For an \(n\)-dimensional matrix \(A\), let \(H_A\) be the graph described above\@.
Then
\[
det(A) = \sum_{\rho : \mbox{~\(s \leadsto t_+\) path}}w(\rho) -
\sum_{\eta : \mbox{~\(s \leadsto t_-\) path}}w(\eta)
\]
\end{alabsubtext}
%
\begin{alabsubtext}{Proof of Theorem}{2}{}
\prooffontshape
%
\apar{Proof of Theorem 2-1}
We will establish a one-to-one correspondence between \(s\leadsto t_+\)
(\(s\leadsto t_-\)) paths and clow sequences of positive (negative)
sign, preserving weights\@. The result then follows from
Theorem~1\@.
%
\apar{Proof of Theorem 2-2}
Let \(\clows = \langle C_1, \dots , C_{k} \rangle\) be a clow sequence
of positive sign (i.e., \(n+k\) is even)\@. We will demonstrate a path
{from} \(s\) to \(t_+\) in \(H_A\)\@. Let \(h_i\) be the head of clow \(C_i\), and
let \(n_i\) be the number of edges in clows \(C_1,\dots,C_{i-1}\)\@.
The path we construct will go through the vertices \([p,h_i,h_i,n_i]\),
where \(p = 0\) if \(n+i\) is odd, and \(p = 1\) otherwise\@. From \(s\),
clearly we can go to the first such vertex \([n\bmod 2,h_1,h_1,0]\)\@.
Assume that the path has reached \([p,h_i,h_i,n_i]\)\@. Let the clow
\(C_i\) be the sequence \(\langle h_i, v_1, \dots , v_{l-1} \rangle\), a
closed walk of length \(l\)\@. Starting from \([p,h_i, h_i,n_i]\), \(H_A\)
has a path through vertices \([p,h_i, v_1, n_i+1]\), \([p,h_i, v_2,
n_{i}+2], \dots, [p,h_i, v_{l-1}, n_{i}+(l-1)]\), and finally
\([\overline{p},h_{i+1}, h_{i+1}, n_{i}+l]\), which is the vertex \(
[\overline{p},h_{i+1}, h_{i+1}, n_{i+1}]\)\@. At the last clow, starting
{from} \([1,h_{k},h_{k},n_{k}]\), \(H_A\) has a path tracing out the
vertices of clow \(C_{k}\) and finally making a transition to
\(t_+\)\@. Clearly, the weight of the path is identical to the weight of
the clow sequence\@. See Figure~2\@.
%
\begin{figure*}
%\input{clowtopath.eepic}
\setlength{\unitlength}{0.0113in}
%
\begingroup\makeatletter\ifx\SetFigFont\undefined
% extract first six characters in \fmtname
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}%
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}%
\ifx\x\y % LaTeX or SliTeX?
\gdef\SetFigFont#1#2#3{%
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}%
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}%
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}%
\fi
\fi\endgroup
\begin{picture}(557,480)(0,-10)
\put(70,438){\ellipse{4}{4}}
\put(122,397){\ellipse{4}{4}}
\put(107,337){\ellipse{4}{4}}
\put(182,342){\ellipse{4}{4}}
\put(30,318){\ellipse{4}{4}}
\put(282,437){\ellipse{4}{4}}
\put(342,357){\ellipse{4}{4}}
\put(262,317){\ellipse{4}{4}}
\put(402,437){\ellipse{4}{4}}
\put(462,357){\ellipse{4}{4}}
\put(380,313){\ellipse{4}{4}}
\drawline(25,338)(25,338)
\path(96,425)(103,417)
\path(26,339)(24,356)
\path(183,317)(176,306)
\path(89,376)(96,386)
\path(137,376)(142,365)
\path(251,341)(252,354)
\path(333,338)(315,325)
\path(312,424)(322,413)
\path(433,423)(444,412)
\path(455,340)(444,329)
\path(371,337)(371,351)
\path(62,437) (65.731,438.135)
(69.658,438.365)
(73.740,437.779)
(77.934,436.467)
(82.199,434.520)
(86.495,432.029)
(90.780,429.083)
(95.011,425.772)
(99.149,422.187)
(103.151,418.419)
(106.976,414.557)
(110.583,410.691)
(113.930,406.912)
(116.977,403.311)
(119.680,399.977)
(122.000,397.000)
\path(122,397) (123.975,394.314)
(126.149,391.156)
(128.460,387.598)
(130.843,383.710)
(133.236,379.563)
(135.574,375.227)
(137.794,370.772)
(139.833,366.270)
(141.627,361.791)
(143.113,357.405)
(144.227,353.183)
(144.906,349.195)
(145.086,345.512)
(144.704,342.205)
(142.000,337.000)
\path(142,337) (139.560,335.216)
(136.436,333.971)
(132.738,333.236)
(128.575,332.982)
(124.056,333.180)
(119.290,333.801)
(114.385,334.818)
(109.451,336.201)
(104.596,337.921)
(99.931,339.950)
(95.562,342.259)
(91.601,344.820)
(88.155,347.603)
(85.333,350.579)
(83.245,353.722)
(82.000,357.000)
\path(82,357) (81.607,360.088)
(81.806,363.349)
(82.553,366.727)
(83.805,370.169)
(85.517,373.622)
(87.643,377.033)
(90.141,380.347)
(92.966,383.511)
(96.072,386.472)
(99.417,389.176)
(102.955,391.569)
(106.643,393.598)
(110.436,395.209)
(114.289,396.349)
(118.159,396.964)
(122.000,397.000)
\path(122,397) (125.365,396.619)
(128.715,395.965)
(132.042,395.051)
(135.338,393.891)
(138.595,392.499)
(141.804,390.887)
(144.958,389.071)
(148.048,387.063)
(151.066,384.878)
(154.004,382.528)
(156.854,380.028)
(159.608,377.391)
(162.258,374.632)
(164.796,371.762)
(167.213,368.798)
(169.501,365.751)
(171.652,362.635)
(173.659,359.465)
(175.513,356.254)
(177.206,353.016)
(178.729,349.764)
(180.076,346.512)
(181.236,343.274)
(182.204,340.063)
(182.969,336.893)
(183.525,333.778)
(183.863,330.731)
(183.975,327.766)
(183.853,324.897)
(183.488,322.137)
(182.000,317.000)
\path(182,317) (180.358,313.628)
(178.291,310.379)
(175.825,307.260)
(172.984,304.275)
(169.793,301.430)
(166.277,298.731)
(162.460,296.182)
(158.368,293.788)
(154.025,291.556)
(149.456,289.491)
(144.686,287.598)
(139.740,285.881)
(134.642,284.348)
(129.418,283.002)
(124.091,281.849)
(118.688,280.895)
(113.233,280.146)
(107.749,279.605)
(102.264,279.279)
(96.800,279.173)
(91.384,279.292)
(86.039,279.642)
(80.791,280.228)
(75.664,281.055)
(70.684,282.129)
(65.875,283.454)
(61.261,285.038)
(56.868,286.883)
(52.721,288.997)
(48.844,291.384)
(45.262,294.050)
(42.000,297.000)
\path(42,297) (39.047,300.257)
(36.370,303.824)
(33.967,307.679)
(31.832,311.796)
(29.960,316.152)
(28.347,320.723)
(26.989,325.486)
(25.879,330.416)
(25.015,335.490)
(24.390,340.683)
(24.002,345.973)
(23.844,351.335)
(23.912,356.745)
(24.202,362.180)
(24.709,367.615)
(25.428,373.027)
(26.355,378.391)
(27.484,383.685)
(28.812,388.884)
(30.333,393.965)
(32.044,398.903)
(33.939,403.675)
(36.013,408.256)
(38.263,412.624)
(40.682,416.754)
(43.268,420.622)
(46.014,424.204)
(48.917,427.478)
(51.972,430.418)
(55.174,433.001)
(58.518,435.203)
(62.000,437.000)
\path(282,437) (286.227,437.323)
(290.946,436.441)
(296.037,434.460)
(301.382,431.486)
(306.862,427.625)
(312.357,422.984)
(317.749,417.668)
(320.369,414.791)
(322.919,411.784)
(325.384,408.663)
(327.748,405.439)
(329.997,402.126)
(332.117,398.737)
(334.091,395.286)
(335.906,391.786)
(337.546,388.249)
(338.997,384.691)
(340.244,381.123)
(341.271,377.559)
(342.064,374.012)
(342.609,370.495)
(342.889,367.023)
(342.891,363.607)
(342.600,360.262)
(342.000,357.000)
\path(342,357) (341.184,354.148)
(340.105,351.350)
(337.220,345.929)
(333.467,340.780)
(328.970,335.943)
(323.853,331.458)
(321.101,329.360)
(318.239,327.365)
(315.285,325.479)
(312.253,323.707)
(309.159,322.053)
(306.018,320.523)
(302.845,319.121)
(299.657,317.853)
(296.468,316.724)
(293.294,315.739)
(290.150,314.903)
(287.053,314.222)
(284.016,313.699)
(281.057,313.341)
(278.190,313.152)
(275.430,313.138)
(270.296,313.653)
(265.778,314.927)
(262.000,317.000)
\path(262,317) (257.919,321.252)
(256.242,323.984)
(254.798,327.079)
(253.582,330.508)
(252.585,334.238)
(251.802,338.240)
(251.225,342.483)
(250.849,346.936)
(250.665,351.569)
(250.669,356.351)
(250.852,361.251)
(251.209,366.240)
(251.732,371.285)
(252.415,376.357)
(253.251,381.426)
(254.234,386.459)
(255.356,391.428)
(256.612,396.301)
(257.994,401.047)
(259.495,405.636)
(261.110,410.038)
(262.831,414.221)
(264.651,418.156)
(266.565,421.811)
(268.565,425.156)
(270.644,428.160)
(272.796,430.793)
(277.293,434.823)
(282.000,437.000)
\path(402,437) (406.191,437.341)
(410.869,436.498)
(415.918,434.572)
(421.219,431.665)
(426.657,427.876)
(432.114,423.309)
(437.473,418.063)
(440.079,415.217)
(442.617,412.239)
(445.072,409.143)
(447.430,405.940)
(449.675,402.643)
(451.794,399.265)
(453.771,395.818)
(455.592,392.316)
(457.242,388.770)
(458.707,385.194)
(459.972,381.600)
(461.022,378.001)
(461.844,374.409)
(462.421,370.836)
(462.740,367.297)
(462.786,363.802)
(462.544,360.366)
(462.000,357.000)
\path(462,357) (461.219,353.970)
(460.170,350.990)
(458.869,348.065)
(457.333,345.201)
(453.615,339.678)
(449.141,334.464)
(444.036,329.604)
(441.285,327.320)
(438.424,325.140)
(435.468,323.071)
(432.432,321.118)
(429.332,319.286)
(426.184,317.581)
(423.003,316.008)
(419.805,314.572)
(416.606,313.280)
(413.420,312.136)
(410.265,311.146)
(407.155,310.316)
(404.107,309.652)
(401.135,309.157)
(398.255,308.839)
(395.484,308.702)
(390.328,308.995)
(385.791,310.080)
(382.000,312.000)
\path(382,312) (377.841,316.184)
(376.126,318.932)
(374.645,322.072)
(373.392,325.574)
(372.360,329.403)
(371.543,333.528)
(370.934,337.915)
(370.528,342.532)
(370.316,347.346)
(370.294,352.325)
(370.454,357.436)
(370.790,362.645)
(371.295,367.922)
(371.964,373.232)
(372.789,378.543)
(373.764,383.822)
(374.883,389.038)
(376.139,394.156)
(377.526,399.145)
(379.037,403.971)
(380.665,408.602)
(382.405,413.006)
(384.249,417.149)
(386.192,421.000)
(388.227,424.524)
(390.347,427.690)
(392.546,430.465)
(397.154,434.711)
(402.000,437.000)
\put(67,447){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}1}}}}}
\put(127,402){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}2}}}}}
\put(192,342){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}4}}}}}
\put(22,307){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}7}}}}}
\put(102,322){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}5}}}}}
\put(282,447){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}3}}}}}
\put(322,357){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}8}}}}}
\put(407,442){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}6}}}}}
\put(252,302){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}10}}}}}
\put(472,357){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}9}}}}}
\put(367,297){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}11}}}}}
\put(75,163){\ellipse{4}{4}}
\put(127,122){\ellipse{4}{4}}
\put(112,62){\ellipse{4}{4}}
\put(187,67){\ellipse{4}{4}}
\put(35,43){\ellipse{4}{4}}
\put(287,162){\ellipse{4}{4}}
\put(347,82){\ellipse{4}{4}}
\put(407,162){\ellipse{4}{4}}
\put(467,82){\ellipse{4}{4}}
\put(265,43){\ellipse{4}{4}}
\put(387,47){\ellipse{4}{4}}
\put(2,136){\ellipse{4}{4}}
\put(543,155){\ellipse{4}{4}}
\drawline(30,63)(30,63)
\path(52,138)(41,129)
\path(63,23)(76,15)
\path(182,80)(176,89)
\path(92,111)(83,100)
\path(138,86)(133,94)
\path(184,162)(196,164)
\path(267,136)(260,126)
\path(297,42)(317,48)
\path(423,48)(435,55)
\path(346,149)(359,161)
\path(3,137)(75,163)
\path(398,124)(388,101)
\path(202,277)(202,237)
\path(222,277)(222,237)
\path(202,237)(182,237)(212,217)
(242,237)(222,237)
\path(202,277)(222,277)
\path(297,257)(357,182)(557,182)
(557,257)(297,257)
\path(77,162) (73.578,159.173)
(70.291,156.426)
(67.137,153.758)
(64.113,151.165)
(61.216,148.644)
(58.444,146.192)
(53.265,141.485)
(48.556,137.019)
(44.297,132.770)
(40.467,128.715)
(37.046,124.829)
(34.015,121.089)
(31.352,117.472)
(29.039,113.953)
(27.054,110.508)
(25.378,107.115)
(23.990,103.748)
(22.871,100.384)
(22.000,97.000)
\path(22,97) (21.439,93.845)
(21.099,90.481)
(20.975,86.944)
(21.063,83.271)
(21.358,79.498)
(21.856,75.664)
(22.552,71.804)
(23.442,67.955)
(24.522,64.155)
(25.786,60.439)
(27.230,56.846)
(28.850,53.412)
(30.642,50.173)
(32.600,47.167)
(37.000,42.000)
\path(37,42) (39.839,39.361)
(42.835,36.748)
(45.981,34.172)
(49.270,31.639)
(52.692,29.158)
(56.241,26.737)
(59.909,24.385)
(63.686,22.109)
(67.567,19.919)
(71.542,17.822)
(75.604,15.826)
(79.745,13.940)
(83.957,12.172)
(88.232,10.531)
(92.563,9.024)
(96.941,7.660)
(101.358,6.447)
(105.808,5.394)
(110.280,4.508)
(114.769,3.798)
(119.266,3.272)
(123.763,2.938)
(128.252,2.805)
(132.725,2.881)
(137.175,3.174)
(141.593,3.693)
(145.972,4.445)
(150.304,5.439)
(154.580,6.683)
(158.794,8.186)
(162.936,9.956)
(167.000,12.000)
\path(167,12) (170.375,14.102)
(173.445,16.551)
(176.216,19.311)
(178.691,22.345)
(180.875,25.617)
(182.773,29.092)
(184.389,32.732)
(185.728,36.503)
(186.795,40.367)
(187.593,44.288)
(188.127,48.231)
(188.402,52.159)
(188.422,56.036)
(188.192,59.826)
(187.717,63.493)
(187.000,67.000)
\path(187,67) (185.599,71.758)
(183.741,76.507)
(181.457,81.212)
(178.779,85.836)
(175.736,90.345)
(172.359,94.701)
(168.679,98.870)
(164.727,102.814)
(160.534,106.499)
(156.130,109.888)
(151.546,112.945)
(146.812,115.635)
(141.960,117.921)
(137.020,119.768)
(132.023,121.140)
(127.000,122.000)
\path(127,122) (123.735,122.242)
(120.328,122.223)
(116.822,121.946)
(113.261,121.416)
(109.687,120.637)
(106.145,119.615)
(102.677,118.354)
(99.328,116.859)
(96.140,115.133)
(93.158,113.182)
(87.981,108.622)
(84.146,103.216)
(82.840,100.207)
(82.000,97.000)
\path(82,97) (81.692,94.035)
(81.809,91.074)
(82.323,88.136)
(83.205,85.245)
(84.426,82.421)
(85.958,79.687)
(89.836,74.572)
(94.611,70.073)
(100.050,66.363)
(102.946,64.858)
(105.923,63.615)
(108.950,62.655)
(112.000,62.000)
\path(112,62) (115.872,61.801)
(119.711,62.390)
(123.426,63.670)
(126.928,65.543)
(130.126,67.910)
(132.931,70.674)
(135.252,73.737)
(137.000,77.000)
\path(137,77) (137.921,82.349)
(136.675,87.840)
(135.481,90.629)
(134.035,93.442)
(132.434,96.274)
(130.775,99.123)
(129.154,101.983)
(127.668,104.851)
(126.413,107.723)
(125.487,110.594)
(124.986,113.462)
(125.007,116.321)
(125.646,119.169)
(127.000,122.000)
\path(127,122) (129.385,125.697)
(131.866,129.219)
(134.450,132.565)
(137.143,135.740)
(139.954,138.743)
(142.890,141.578)
(145.958,144.245)
(149.165,146.748)
(152.519,149.087)
(156.027,151.264)
(159.696,153.282)
(163.534,155.142)
(167.548,156.846)
(171.746,158.395)
(176.134,159.792)
(180.720,161.039)
(185.511,162.137)
(190.515,163.087)
(195.739,163.893)
(201.191,164.556)
(204.004,164.834)
(206.876,165.077)
(209.810,165.285)
(212.804,165.459)
(215.861,165.598)
(218.981,165.702)
(222.166,165.773)
(225.415,165.810)
(228.730,165.814)
(232.113,165.784)
(235.563,165.721)
(239.081,165.626)
(242.670,165.497)
(246.329,165.337)
(250.059,165.144)
(253.862,164.919)
(257.738,164.663)
(261.688,164.375)
(265.714,164.056)
(269.815,163.706)
(273.994,163.325)
(278.250,162.914)
(282.585,162.472)
(287.000,162.000)
\path(287,162) (283.766,158.173)
(280.671,154.461)
(277.712,150.859)
(274.888,147.364)
(272.195,143.972)
(269.634,140.678)
(267.200,137.480)
(264.892,134.372)
(262.709,131.352)
(260.647,128.415)
(258.706,125.558)
(256.883,122.776)
(253.581,117.423)
(250.726,112.325)
(248.302,107.451)
(246.292,102.771)
(244.679,98.252)
(243.447,93.865)
(242.580,89.577)
(242.060,85.357)
(241.873,81.176)
(242.000,77.000)
\path(242,77) (242.817,71.885)
(244.525,66.632)
(247.007,61.416)
(250.150,56.411)
(253.836,51.793)
(257.950,47.735)
(262.376,44.413)
(267.000,42.000)
\path(267,42) (270.154,40.947)
(273.507,40.238)
(277.024,39.849)
(280.671,39.752)
(284.412,39.923)
(288.215,40.334)
(292.044,40.961)
(295.864,41.777)
(299.642,42.756)
(303.343,43.873)
(306.932,45.100)
(310.376,46.413)
(313.639,47.786)
(316.687,49.192)
(322.000,52.000)
\path(322,52) (325.458,54.370)
(329.131,57.506)
(332.863,61.223)
(336.500,65.335)
(339.887,69.655)
(342.869,73.996)
(345.292,78.173)
(347.000,82.000)
\path(347,82) (347.837,85.355)
(348.114,89.048)
(347.929,93.034)
(347.376,97.267)
(346.552,101.704)
(345.553,106.298)
(344.474,111.005)
(343.411,115.779)
(342.460,120.576)
(341.718,125.351)
(341.279,130.058)
(341.240,134.652)
(341.697,139.088)
(342.745,143.322)
(344.481,147.307)
(347.000,151.000)
\path(347,151) (349.837,154.009)
(353.102,156.781)
(356.730,159.302)
(360.654,161.555)
(364.809,163.525)
(369.129,165.195)
(373.550,166.551)
(378.004,167.577)
(382.427,168.256)
(386.754,168.574)
(390.917,168.513)
(394.853,168.060)
(398.494,167.198)
(401.776,165.911)
(407.000,162.000)
\path(407,162) (409.131,158.613)
(410.169,154.760)
(410.242,150.503)
(409.480,145.905)
(408.010,141.028)
(405.962,135.937)
(403.465,130.692)
(400.647,125.358)
(397.636,119.998)
(394.563,114.673)
(391.554,109.447)
(388.740,104.383)
(386.249,99.543)
(384.209,94.991)
(382.750,90.789)
(382.000,87.000)
\path(382,87) (381.467,82.770)
(380.755,77.586)
(380.412,74.752)
(380.120,71.817)
(379.911,68.826)
(379.816,65.827)
(379.868,62.864)
(380.098,59.984)
(381.219,54.654)
(383.435,50.204)
(387.000,47.000)
\path(387,47) (391.838,44.848)
(396.950,43.663)
(402.281,43.361)
(407.778,43.860)
(410.571,44.383)
(413.386,45.075)
(416.215,45.926)
(419.052,46.925)
(421.889,48.062)
(424.721,49.326)
(427.540,50.707)
(430.340,52.194)
(435.854,55.447)
(441.210,59.002)
(446.354,62.774)
(451.232,66.682)
(455.789,70.642)
(459.972,74.570)
(463.727,78.384)
(467.000,82.000)
\path(467,82) (470.187,87.354)
(471.298,90.548)
(472.157,94.014)
(472.818,97.697)
(473.332,101.544)
(473.753,105.499)
(474.132,109.508)
(474.523,113.517)
(474.978,117.471)
(475.550,121.316)
(476.291,124.998)
(477.254,128.461)
(478.491,131.652)
(482.000,137.000)
\path(482,137) (485.865,140.556)
(490.417,143.693)
(495.832,146.470)
(498.918,147.741)
(502.285,148.945)
(505.956,150.087)
(509.953,151.177)
(514.297,152.220)
(519.011,153.224)
(524.116,154.197)
(529.635,155.146)
(532.556,155.613)
(535.589,156.078)
(538.736,156.540)
(542.000,157.000)
\path(302,182) (301.524,185.569)
(301.111,188.895)
(300.762,191.989)
(300.476,194.864)
(300.095,200.011)
(299.968,204.439)
(300.095,208.249)
(300.476,211.544)
(302.000,217.000)
\path(302,217) (304.247,220.988)
(308.017,225.339)
(310.619,227.796)
(313.778,230.520)
(317.552,233.568)
(322.000,237.000)
\path(312,182) (311.453,185.024)
(310.980,187.843)
(310.251,192.912)
(309.813,197.295)
(309.668,201.080)
(309.813,204.355)
(310.251,207.208)
(312.000,212.000)
\path(312,212) (316.683,216.806)
(320.961,219.238)
(323.739,220.563)
(327.000,222.000)
\path(347,177)(347,180.921)
(347.000,185.000)
\path(347,185) (347.493,187.447)
(348.132,189.012)
(348.790,190.130)
(350.983,191.088)
(353.183,192.000)
(367.000,192.000)
\path(327,182) (324.456,187.401)
(323.475,191.467)
(323.475,197.540)
(326.000,200.540)
\path(326.000,200.540) (330.206,203.231)
(333.417,204.331)
(337.237,205.306)
(341.740,206.186)
(347.000,207.000)
\put(72,172){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}1}}}}}
\put(132,127){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}2}}}}}
\put(197,67){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}4}}}}}
\put(27,32){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}7}}}}}
\put(107,47){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}5}}}}}
\put(327,82){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}8}}}}}
\put(412,167){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}6}}}}}
\put(257,27){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}10}}}}}
\put(477,82){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}9}}}}}
\put(372,22){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}11}}}}}
\put(82,170){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 1, 1, 0]}}}}}
\put(15,14){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 1, 7, 1]}}}}}
\put(147,134){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 1, 2, 3]}}}}}
\put(147,115){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 1, 2, 5]}}}}}
\put(171,50){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 1, 4, 2]}}}}}
\put(113,64){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 1, 5, 4]}}}}}
\put(260,52){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[1, 3, 10, 7]}}}}}
\put(320,102){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[1, 3, 8, 8]}}}}}
\put(425,166){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 6, 6, 9]}}}}}
\put(389,51){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 6, 11, 10]}}}}}
\put(459,95){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[0, 6, 9, 11]}}}}}
\put(280,168){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}3}}}}}
\put(294,166){\makebox(0,0)[lb]{\smash{{{\SetFigFont{10}{14.4}{bf}[1, 3, 3, 6]}}}}}
\put(1,146){\makebox(0,0)[lb]{\smash{{{\SetFigFont{12}{14.4}{bf}s}}}}}
\put(327,237){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}parity of number of components}}}}}
\put(352,202){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}current vertex}}}}}
\put(377,187){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}edges seen so far}}}}}
\put(332,217){\makebox(0,0)[lb]{\smash{{{\SetFigFont{11}{13.2}{bf}head of current clow}}}}}
\end{picture}
\afigcap{2}{From a clow sequence to a path}
\end{figure*}
%
\apar{Proof of Theorem 2-3}
Conversely, let \(\rho \) be an \(s \leadsto t_+\) path in \(H_A\)\@. In the
sequence of vertices visited along this path, the second component of
the vertex labels is monotonically nondecreasing and takes, say, \(k\)
distinct values \(h_1, \dots , h_k\)\@. Also, the first component changes
exactly when the second component changes, and is \(n\bmod 2\) at \(h_1\) and
1 at \(h_k\) (to allow an edge to \(t_+\)), so \(n+k\) must be even\@.
%
\apar{Proof of Theorem 2-4}
Consider the maximal segment of the path with second component \(h_i\)\@.
The third components along this segment constitute a clow with head
\(h_i\) in \(G_A\)\@. When this clow is completely traversed, a new clow
with a larger head must be started, and the parity of the number of
components must change\@. But this is precisely modeled by the edges of
\(H_A\)\@. Therefore, \(\rho\) corresponds to a clow sequence
of positive sign in \(G_A\)\@.
%
\apar{Proof of Theorem 2-5}
A similar argument shows the correspondence between paths from \(s\) to
\(t_-\) and clow sequences with negative sign, preserving weights\@.
%
{\nopagebreakbeginpar\markendlst{Proof of Theorem 2}}
\end{alabsubtext}
%
\apar{3-4}
Now, to evaluate \(det(A)\), we merely need to evaluate the weighted
sums of paths\@. But this can easily be done using simple dynamic
programming techniques; we give a polynomial time algorithm that
evaluates this expression and hence computes \(det(A)\)\@.
%
\apar{3-5}
We say that a vertex \([p,h,u,i]\) is at layer \(i\) in \(H_A\)\@. Vertices \(t_+\) and
\(t_-\) are at layer \(n\)\@. The algorithm proceeds by computing, in
stages, the sum of the weighted paths from \(s\) to any vertex at layer \(i\)
in \(H_A\)\@. After \(n\) stages, it has the values at \(t_+\) and \(t_-\), and
therefore it also has \(det(A)\)\@. See Table~1\@.
%
\begin{table*}[t]
% table 1
\begin{center}
\framebox{\parbox{4in}{
\begin{algorithm}{informal}{}
\begin{tabbing}
\comment{\# Initialize values to 0} \\
\kwd{For} \= \(u,v,i \in [n]\), \(p \in \{0,1 \}\) \kwd{do}
\(V([p,u,v,i-1]) = 0\) \\
\(V(t_+) = 0\) \\
\(V(t_-) = 0\) \\
\comment{\# Set selected values at layer 0 to 1} \\
\(b = n\bmod 2\) \\
\kwd{For} \(u \in [n]\), \kwd{do} \(V([b,u,u,0]) = 1\) \\
\comment{\# Process outgoing edges from each layer}\\
\kwd{For} \(i = 0\) to \(n-2\) \kwd{do} \\
\> \kwd{For} \= \(u,v \in [n]\) such that \(u \le v\), and \(p \in \{0,1 \}\), \kwd{do} \\
\> \> \kwd{For} \= \(w \in \{u+1,\dots ,n \}\) \kwd{do}\\
\> \> \> \(V([p,u,w,i+1]) = V([p,u,w,i+1]) + V([p,u,v,i])\mult a_{vw}\) \\
\> \> \> \(V([\bar{p},w,w,i+1]) = V([\bar{p},w,w,i+1]) + V([p,u,v,i])\mult a_{vu}\) \\
\kwd{For} \(u,v \in [n]\) such that \(u \le v\), and \(p \in \{0,1 \}\), \kwd{do} \\
\> \(V(t_+) = V(t_+) + V([1,u,v,n-1])\mult a_{vu}\) \\
\> \(V(t_-) = V(t_-) + V([0,u,v,n-1])\mult a_{vu}\) \\
\comment{\# Compute the determinant} \\
Return \(V(t_+) - V(t_-)\)
\end{tabbing}
\end{algorithm}
}
}
\end{center}
\caption{A sequential algorithm for the determinant}
\end{table*}
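As a concrete illustration, Table~1 transcribes almost line for line into Python (a sketch with names of our choosing; \(V\) is a dictionary indexed by the vertices of \(H_A\)):

```python
def clow_det(a):
    """Determinant of a via the dynamic program of Table 1.

    V[(p, u, v, i)] holds the weighted sum of s-to-[p,u,v,i] paths in
    H_A; after the final layer, V(t+) - V(t-) equals det(a).
    """
    n = len(a)
    # Initialize values to 0 (layers 0 .. n-1)
    V = {(p, u, v, i): 0
         for p in (0, 1)
         for u in range(1, n + 1)
         for v in range(1, n + 1)
         for i in range(n)}
    t_plus = t_minus = 0
    # Set selected values at layer 0 to 1
    b = n % 2
    for u in range(1, n + 1):
        V[(b, u, u, 0)] = 1
    # Process outgoing edges from each layer
    for i in range(n - 1):
        for p in (0, 1):
            for u in range(1, n + 1):
                for v in range(u, n + 1):
                    val = V[(p, u, v, i)]
                    for w in range(u + 1, n + 1):
                        # extend the current clow with edge v -> w
                        V[(p, u, w, i + 1)] += val * a[v - 1][w - 1]
                        # close it with edge v -> u, open a clow at w
                        V[(1 - p, w, w, i + 1)] += val * a[v - 1][u - 1]
    # Edges into t+ and t- close the last clow
    for p in (0, 1):
        for u in range(1, n + 1):
            for v in range(u, n + 1):
                term = V[(p, u, v, n - 1)] * a[v - 1][u - 1]
                if p == 1:
                    t_plus += term
                else:
                    t_minus += term
    # Compute the determinant
    return t_plus - t_minus
```

Since only integer additions and multiplications are performed, the computation is division-free, as promised.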
%
\apar{3-6}
This algorithm processes each edge in \(H_A\) exactly once, and for each
edge, it performs one addition and one multiplication\@. The total
number of vertices in \(H_A\) is \(2n^3+3\)\@. However, \(H_A\) is quite a
sparse graph; the total number of edges is at most \(4n^4\)\@. The
overall running time is therefore \(O(n^4)\)\@. In fact, if \(G_A\) has \(m\)
edges, then \(H_A\) has only \(O(mn^2)\) edges, so for sparse matrices,
the algorithm is faster\@.
%
\apar{3-7}
The number of operations (addition or multiplication) is \(O(n^4)\)\@. The
largest partial product at any stage is \(m^n|a_{\mbox{{\scriptsize
max}}}|^n\), where \(a_{\mbox{{\scriptsize max}}}\) is the largest entry
in \(A\), \(m\) is the number of edges in \(G_A\), and \(m^n\) is an upper
bound on the number of clow sequences\@. This can be represented with \(N
= n\log m + n\log |a_{\mbox{{\scriptsize max}}}|\) bits, so each
operation needs at most \(M(N)\) time, where \(M(t)\) is the time required
to multiply two \(t\)-bit numbers\@. Clearly, even in terms of bit
complexity, the algorithm needs only polynomial time\@.
%
\apar{3-8}
The space used in this implementation is also polynomial: at any
stage, only the values at two adjacent layers need to be stored, so
\(O(n^2)\) values suffice\@. (Each value may require up to \(N\)
bits\@.)
\asection{4}{Computing the Characteristic Polynomial}
%
\apar{4-1}
Our technique can be used just as easily to compute \emph{all} coefficients
of the characteristic polynomial \(\Phi _A(\lambda ) = det(\lambda I_n
- A) = c_n\lambda^n + c_{n-1}\lambda^{n-1} + \dots + c_1\lambda +
c_0\) of the matrix \(A\)\@. In fact, we will show that the graph defined
in the previous section already does so\@. Rewriting \(det(\lambda I_n -
A)\) in terms of cycle covers and regrouping terms, we see that \(c_r\), the
coefficient of \(\lambda ^r\) in \(\Phi _A(\lambda )\), can be
computed by summing, over all permutations \(\sigma\) with at least \(r\)
fixed points, the weight of the permutation outside these fixed
points:
\[
c_r = \sum_{S \subseteq [1, \dots, n]: |S| = r}~~
\sum_{\sigma \in S_n: j \in S \Rightarrow \sigma (j) = j}
sgn(\sigma)\prod_{i \not \in S} (-a_{i\sigma(i)})
\]
If \(\sigma\) fixes all of the points in \(S\), let \(sgn(\sigma
|S)\) denote the sign determined by the parity of the number of
components of \(\sigma\), not counting the fixed-point cycles
on \(S\) (\(+1\) if this number is even, and \(-1\)
otherwise)\@. Then
\[
c_r = \sum_{S \subseteq [1, \dots, n]: |S| = r}~~
\sum_{\sigma \in S_n: j \in S \Rightarrow \sigma (j) = j}
sgn(\sigma |S)\prod_{i \not \in S} (a_{i\sigma(i)})
\]
But each term here is the weight of a partial cycle cover, covering
exactly \(n-r\) vertices\@. To compute this sum, not surprisingly, we look
at partial clow sequences! An \(l\)-clow sequence is a sequence of clows
(ordered by strictly increasing head) with total number of edges
exactly \(l\), accounting for multiplicity\@. Its sign is \((-1)^{k}\),
where \(k\) is the number of clows in the sequence\@. The involution on
the set of \(l\)-clow sequences is defined in the same fashion as in
Section~2, and shows that the net contribution of sequences
that are not partial cycle covers is zero\@. So now instead of \(H_A\), we
construct \(H_A(r)\) with \(n-r\) layers, with paths corresponding to
\((n-r)\)-clow sequences\@. We then compute the weights of \(s\leadsto
t_+\) and \(s\leadsto t_-\) paths in this reduced graph, and report the
difference as \(c_r\)\@.
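A sketch of this computation in Python (our transcription; here the parity bit tracks the number of clows started so far, which begins at one, so that closing a sequence after \(l = n-r\) edges contributes with sign \((-1)^k\) to \(c_r\)):

```python
def char_poly(a):
    """Coefficients [c_0, ..., c_n] of det(lambda*I - A).

    Runs the layered dynamic program once; the values closed off
    after l = n - r edges yield c_r.  The layer index is left
    implicit, since the same update applies at every layer; p is
    the parity of the number of clows started so far.
    """
    n = len(a)
    c = [0] * (n + 1)
    c[n] = 1                                   # c_n is always 1
    V = {(1, u, u): 1 for u in range(1, n + 1)}  # one clow started
    for l in range(1, n + 1):                  # l = number of edges used
        # closing the current clow (edge u -> h) completes an
        # l-clow sequence of sign (-1)^k = (-1)^p
        c[n - l] = sum((1 if p == 0 else -1) * val * a[u - 1][h - 1]
                       for (p, h, u), val in V.items())
        if l == n:
            break
        W = {}                                 # advance one layer
        for (p, h, u), val in V.items():
            for w in range(h + 1, n + 1):
                key = (p, h, w)                # extend the clow: u -> w
                W[key] = W.get(key, 0) + val * a[u - 1][w - 1]
                key = (1 - p, w, w)            # close u -> h, new head w
                W[key] = W.get(key, 0) + val * a[u - 1][h - 1]
        V = W
    return c
```

For instance, on a \(2 \times 2\) matrix this returns \([ad - bc,\, -(a+d),\, 1]\), the coefficients of \(\lambda^2 - (a+d)\lambda + (ad-bc)\).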
%
\apar{4-2}
Actually, all coefficients can be computed using the single graph
\(H_A\), by introducing \(n\) copies of \(t_+\) and \(t_-\) (one for each
coefficient)\@. Further, in this graph, if the \(i\)th layer reports a
nonzero value, we can immediately conclude that the matrix has rank
at least \(i\)\@. However, we do not know how to infer small rank from
this construction\@.
%
\apar{4-3}
Note that our graphs \(H_A\), or \(H_A(r)\) for computing the coefficient
\(c_r\), have a very regular structure: the edge connectivity across
layers is identical\@. By dropping layer information from the vertices,
we can construct a graph (finite-state automaton) with \(O(n^2)\)
vertices\@. Then to compute \(c_r\) for \(0 \le r \le n-1\), we find the
contribution of \(s\leadsto t_+ \mbox{~or~} t_-\) paths in this graph of
length exactly \(n-r\)\@.
\asection{5}{Improving the Algorithm}
%
\apar{5-1}
The algorithm of Section~3 can be made more efficient if the
number of vertices and edges in the graph \(H_A\) can be pruned\@. One
simple saving is obtained by noting that we do not really need two
copies of each vertex \([p,u,v,i]\) for the two values of \(p\)\@. Instead,
we can keep one copy, and where in the original \(H_A\) this component
was to be changed via an edge \(\langle [p,u,v,i],
[\overline{p},x,y,i+1] \rangle\) of weight \(w\), we now have an edge
{from} \([u,v,i]\) to \([x,y,i+1]\) with weight \(-w\)\@. This reduces the
number of vertices by a factor of 2\@. More crucially, it allows the
dynamic programming algorithm to do subtractions and cancellations at
earlier layers, so the size of partial products, and hence the bit
complexity, decreases\@.
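The folded dynamic program can be sketched as follows (our notation; since every clow-closing edge now carries a negated weight, a sequence of \(k\) clows picks up sign \((-1)^k\), and the folded path sum equals \((-1)^n det(A)\)):

```python
def clow_det_folded(a):
    """Determinant via the folded graph of Section 5.

    The parity bit p is dropped; each clow-closing edge carries
    weight -a_{vu}, so each clow sequence with k clows acquires
    sign (-1)^k and the total path sum is (-1)^n det(a).
    """
    n = len(a)
    V = {(u, u): 1 for u in range(1, n + 1)}   # layer 0
    for i in range(n - 1):
        W = {}
        for (h, u), val in V.items():
            for w in range(h + 1, n + 1):
                # extend the current clow with edge u -> w
                W[(h, w)] = W.get((h, w), 0) + val * a[u - 1][w - 1]
                # close it (edge u -> h, negated) and open a clow at w
                W[(w, w)] = W.get((w, w), 0) - val * a[u - 1][h - 1]
        V = W
    # close the last clow of every sequence, again with negated weight
    total = -sum(val * a[u - 1][h - 1] for (h, u), val in V.items())
    return total if n % 2 == 0 else -total
```

Cancellations now happen within each table entry as the layers are processed, which is exactly the bit-complexity saving described above.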
%
\apar{5-2}
Another pruning which also results in a saving by a constant factor
(but with a larger constant than above) follows from this simple
observation: paths going through vertices of the form \([p,h,u,i]\) with
\(h > i+1\) cannot correspond to cycle covers\@. This is because in a cycle
cover, all vertices are covered exactly once, so at layer \(i\), with
\(n-i\) vertices still to be covered, the head (minimum element) of the
current cycle cannot be greater than \(i\)\@. Alternatively, once \(h\)
becomes the head, at least \(h-1\) edges should have been seen in
preceding cycles\@. We can also require our clow sequences to satisfy
this property\@. We formalize the prefix property as follows: a clow sequence
\(\clows = \langle C_1, \dots, C_k\rangle \) has the prefix property if
for \(1 \le j \le k\), the total lengths of the clows \(C_1, \dots,
C_{j-1}\) are at least \(\mbox{head}(C_j) - 1\)\@.
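For illustration (the function and its encoding are ours), the prefix property is straightforward to check from the list of clows, each written as \(\langle h, v_1, \dots, v_{l-1}\rangle\):

```python
def has_prefix_property(clows):
    """Check the prefix property for a clow sequence, given as a
    list of clows, each a list of vertices beginning with its head.
    A clow listed as <h, v_1, ..., v_{l-1}> has l edges.
    """
    edges_so_far = 0
    for clow in clows:
        head = clow[0]
        if edges_so_far < head - 1:     # too few edges before this head
            return False
        edges_so_far += len(clow)       # this clow contributes l edges
    return True
```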
%
\apar{5-3}
It is easy to verify that the involution defined in Section~2
also works on such restricted clow sequences: \(\varphi(\clows)\) has
the prefix property if and only if \clows\ does\@. So we may instead
construct a pruned version of \(H_A\) which generates only such clow
sequences\@. (Consider the induced subgraph of \(H_A\) obtained by
deleting all vertices \([p,h,u,i]\) where \(h > i+1\)\@.) This will lead to
an algorithm with essentially the same complexity, but smaller
constants\@. Pruning, however, is not without its drawbacks: after pruning,
we can no longer directly extract the coefficients of the
characteristic polynomial\@.
%
\apar{5-4}
Interestingly, clow sequences with the prefix property are precisely
the terms computed by Samuelson's method for computing the
determinant\@. As observed by Valiant\footnote{There is a minor
technical error in Valiant's formulation\@. He claims that Samuelson's
algorithm generates all clow sequences, referred to there as loop
covers\@. However, his preceding discussion makes it clear that clow
sequences without the prefix property are not generated\@.} in
\cite{cj97-05-25}, the correctness of Samuelson's algorithm gives a proof,
based on linear algebra, that such sequences that are \emph{not} cycle
covers ``cancel'' out\@. Our involution gives a combinatorial proof of
this fact (for details, see \cite{cj97-05-15})\@.
%
\apar{5-5}
Of course, if the algorithm is to be used to compute the coefficient
\(c_r\) of the characteristic polynomial, then we cannot use this kind
of prefix property\@. The right prefix property for computing \(c_r\)
would be that in the clow sequence \(\clows = \langle C_1, \dots,
C_k\rangle \), for \(1 \le j \le k\), the total lengths of the clows
\(C_1, \dots, C_{j-1}\) should be at least \(\mbox{head}(C_j) - 1 - r\)\@. The graph
\(H_A (r)\) can be pruned consistent with this property\@. However, if a
single graph is to be used to compute all coefficients, then we must
work with the unpruned version\@.
\asection{6}{Parallel Algorithms: \(\complexityclass{GapL}\) and \(\complexityclass{NC}\) Implementations}
%
\apar{6-1}
In this section, we describe two different approaches toward
obtaining parallel algorithms that exploit the combinatorial
Theorem~1\@. The first approach is to apply the standard
divide-and-conquer technique to compute the contributions of all clow
sequences, and so directly obtain an \(\complexityclass{NC}\) or PRAM algorithm\@. The
second approach is indirect; we first show how our algorithm places the integer
determinant in the class of functions \(\complexityclass{GapL}\), and then appeal to
standard parallelizations of \(\complexityclass{GapL}\) functions\@. This approach is
particularly interesting from the complexity-theory point of view,
since the \(\complexityclass{GapL}\) implementation gives a very good instance of how to
effectively use nondeterminism in a space-bounded computation\@.
\asubsection{6.1}{PRAM and \(\complexityclass{NC}\) Algorithms}
%
\apar{6.1-1}
The signed and weighted sum of all clow sequences can be evaluated in
parallel using the standard divide-and-conquer technique, yielding an
\(\complexityclass{NC\(^2\)}\) algorithm for the determinant\@. We describe the algorithm below\@.
We first show how to construct an arithmetic \(\complexityclass{SAC\(^1\)}\) circuit for
computing the determinant (a polynomial-size arithmetic circuit
of \(O(\log n)\) depth, in which the \(+\) gates have unbounded fanin but
each \(\times \) gate has constant fanin)\@. We then show how to implement
this circuit as an OROW PRAM algorithm, requiring \(O(\log ^2 n)\)
parallel time\@. Owner-restricted PRAMs, first introduced in
\cite{cj97-05-11}, are the most restrictive of the PRAM models\@. In
owner-write-restricted PRAMs, each cell is ``owned'' by one processor,
and this is the only processor allowed to write into the cell\@. In
owner-read-restricted PRAMs, a processor is associated with each cell,
and only this processor can read the cell's contents\@. In an
owner-read owner-write (OROW) PRAM, the
processor owning a cell for writing is in general different from the
processor owning the same cell for reading\@. For more details, see
\cite{cj97-05-11,cj97-05-19,cj97-05-13}\@. We also analyze the bit complexity of our
algorithm, and show an implementation in Boolean \(\complexityclass{NC\(^2\)}\)\@.
%
\apar{6.1-2}
The goal is to sum up the contribution of all clow sequences at the
output gate of the circuit\@. The output gate is a sum, over all \(1 \le
k \le n\), of \(C_k\), where \(C_k\) is the sum of the contributions of all
clow sequences with exactly \(k\) clows\@. To compute \(C_k\), we use a
divide-and-conquer approach on the number of clows: any clow sequence
contributing to \(C_k\) can be suitably split into two partial clow
sequences, with the left sequence having \(2^{\lceil \log k \rceil -
1}\) clows\@. The heads of all clows in the left part must be less than
the head of the first clow in the right part, and the lengths of
the left and the right partial clow sequences must add up to \(n\)\@. We
can carry this information in the gate label\@. Let gate \(g[p,l,u,v]\)
sum up the weights of all partial clow sequences with \(p\) clows, \(l\)
edges, head of first clow \(u\), and heads of all clows at most \(v\)\@. (We
need not consider gates where \(l < p\) or \(u > v\)\@.) Then \(C_k =
g[k,n,1,n]\), \(D_k = (-1)^{n+k}C_k\), and the output is \(\sum_{k=1}^{n}
D_k\)\@. Further,
%
\[
g[p,l,u,v]=
\begin{cases}
\displaystyle
\sum_{%
\begin{gathered}
\scriptstyle 2^q\le r\le 2^q+(l-p)\\
\scriptstyle u<w\le v
\end{gathered}}
g[2^q,r,u,w-1]\mult g[p-2^q,l-r,w,v] & p>1 \\[2ex]
g[l,u] & p=1
\end{cases}
\]
%
where \(q = \lceil \log p \rceil - 1\), i.e., \(2^q < p \le 2^{q+1}\)\@. The
gate \(g[l,u]\) sums up the weights of all clows of length \(l\) with head
\(u\)\@. This gate is also evaluated in a divide-and-conquer fashion\@. A
clow with head \(u\) is either a self-loop if \(l=1\), or it must first
visit some vertex \(v > u\), find a path of length \(l-2\) to some vertex
\(w > u\) through vertices all greater than \(u\), and then return to
\(u\)\@. So
%
\[
g[l,u]=
\begin{cases}
a_{uu} & l=1\\[2ex]
\displaystyle{\sum_{v>u}a_{uv}\mult a_{vu}} & l=2\\[4ex]
\displaystyle{\sum_{v,w>u}a_{uv}\mult c[l-2,u,v,w]\mult a_{wu}} &
\text{otherwise}
\end{cases}
\]
%
The gate \(c[l,u,v,w]\) sums the weights of all length \(l\) paths from
\(v\) to \(w\) going through vertices greater than \(u\)\@. The required
values can be computed in \(O(\log n)\) layers as follows:
%
\begin{align*}
c[1,u,v,w] & = a_{vw}\\[2ex]
c[2^s+i,u,v,w] & = \sum_{x>u} c[2^s,u,v,x] \mult c[i,u,x,w]
\end{align*}
%
for \(s=0\) to \(\ceiling{\log n}-1\), \(i=1\) to \(2^s\)\@.
The final circuit description is summarized in a bottom-up
fashion in Table~2\@.
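For concreteness, these recurrences can also be evaluated sequentially; the following Python sketch (the function names, the memoized-recursion style, and the 0-indexed input convention are ours) mirrors \(c[l,u,v,w]\), \(g[l,u]\) (here \texttt{g1}), and \(g[p,l,u,v]\), and sums the signed terms \(D_k\).

```python
from functools import lru_cache

def det_clow(a):
    """Determinant of an integer matrix via the clow-sequence
    recurrences, evaluated sequentially with memoization.
    Vertices are 1..n; a is a 0-indexed list of rows."""
    n = len(a)
    A = lambda u, v: a[u - 1][v - 1]

    @lru_cache(maxsize=None)
    def c(l, u, v, w):
        # total weight of length-l walks v -> w through vertices > u
        if l == 1:
            return A(v, w)
        half = 1 << ((l - 1).bit_length() - 1)   # 2^s with 2^s < l <= 2^(s+1)
        return sum(c(half, u, v, x) * c(l - half, u, x, w)
                   for x in range(u + 1, n + 1))

    def g1(l, u):
        # g[l,u]: total weight of single clows of length l with head u
        if l == 1:
            return A(u, u)
        if l == 2:
            return sum(A(u, v) * A(v, u) for v in range(u + 1, n + 1))
        return sum(A(u, v) * c(l - 2, u, v, w) * A(w, u)
                   for v in range(u + 1, n + 1)
                   for w in range(u + 1, n + 1))

    @lru_cache(maxsize=None)
    def g(p, l, u, v):
        # g[p,l,u,v]: partial clow sequences with p clows, l edges,
        # head of first clow u, and all heads at most v
        if l < p or u > v:
            return 0
        if p == 1:
            return g1(l, u)
        left = 1 << ((p - 1).bit_length() - 1)   # 2^q clows in the left part
        return sum(g(left, r, u, w - 1) * g(p - left, l - r, w, v)
                   for r in range(left, left + (l - p) + 1)
                   for w in range(u + 1, v + 1))

    return sum((-1) ** (n + k) * g(k, n, 1, n) for k in range(1, n + 1))
```

The memoized recursion visits exactly the gates of the circuit, so the sketch doubles as a check that the recurrences compute the determinant.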
%
\begin{table*}[!t]
% table 2
\framebox{\parbox{5in}{
\begin{algorithm}{informal}{}
\begin{tabbing}
~~\= \comment{\# Initialize values for paths of length 1} \\
\> \kwd{For} \= \(u,v,w \in [n]: v,w > u\) \kwd{do} in parallel\\
\> \> \(c[1,u,v,w] = a_{vw}\) \\
\> \comment{\# Evaluate values of paths of lengths up to \(2^s\)} \\
\> \kwd{For} \(s = 0\) to \(\lceil \log n \rceil -1\) \\
\> \> \kwd{For} \= \(1 \le i \le 2^s\) and for \(u,v,w \in [n]: v,w > u\) \kwd{do} in parallel\\
\> \> \> \(c[2^s+i,u,v,w]\) = \(\sum_{x>u}c[2^s,u,v,x] \mult c[i,u,x,w]\)\\
\> \comment{\# Evaluate values of single clows} \\
\> \kwd{For} \(l,u \in [n]\) \kwd{do} in parallel\\
\> \> If \(l=1\) then \(g[l,u]\) = \(a_{uu}\) \\
\> \> If \(l=2\) then \(g[l,u]\) =
\(\sum_{v > u} a_{uv} \mult a_{vu}\) \\
\> \> If \(l>2\) then \(g[l,u]\) =
\(\sum_{v,w > u} a_{uv} \mult c[l-2,u,v,w] \mult a_{wu}\) \\
\> \comment{\# Initialize values of partial clow sequences with one clow} \\
\> \kwd{For} \(l,u,v \in [n]: u \le v\) \kwd{do} in parallel \\
\> \> \(g[1,l,u,v]\) = \(g[l,u]\)\\
\> \comment{\# Evaluate values of partial clow sequences with up to \(2^s\) clows} \\
\> \kwd{For} \(s = 0\) to \(\lceil \log n \rceil -1\) \\
\> \> \kwd{For} \(1 \le i \le 2^s\) and for \(l,u,v \in [n]: (u \le v)
\wedge (l \ge 2^s + i)\) \kwd{do} in parallel~~\\
\> \> \> \(g[2^s+i,l,u,v]\) = \(\displaystyle{\sum_{
\mbox{\scriptsize{
\(2^s\le r \le l-i;~ u < w \le v\)
}}}}
g[2^s,r,u,w-1] \mult g[i,l-r,w,v]\) \\
\> \comment{\# Evaluate the sum of clow sequences} \\
\> \kwd{For} \(k \in [n]\) \kwd{do} in parallel \\
\> \> \(D_k\) = \((-1)^{n+k}g[k,n,1,n]\)\\
\> \comment{\# Evaluate the determinant} \\
\> \kwd{Return} det(\(A\)) = \(\sum_{k=1}^n D_k\)
\end{tabbing}
\end{algorithm}
}}
\caption{An arithmetic \(\complexityclass{SAC\(^1\)}\) algorithm for the determinant}
\end{table*}
%
\apar{6.1-3}
Assigning a gate to each variable \(c[l,u,v,w]\) or \(g[l,u]\) or
\(g[p,l,u,v]\) or \(D_k\) in this algorithm, we obtain an arithmetic
circuit with \(O(n^4)\) gates and depth \(O(\log n)\)\@. Each \(+\) gate has
fanin at most \(O(n^2)\), and each \(\times \) gate has fanin at most
3\@. Therefore this is an arithmetic \(\complexityclass{SAC\(^1\)}\) circuit\@.
%
\apar{6.1-4}
To obtain the desired PRAM implementation, we must first eliminate the
large fanin and fanout gates\@. Suppose we replace each \(+\) gate
having fanin \(f\) by a binary tree of \(+\) gates (i.e., a bounded fanin
subcircuit)\@. The number of edges only doubles, and the depth increases
by \(\log f\)\@. To handle fanout, we reverse the process: we introduce an
inverted binary tree of ``copy'' gates above each gate (a
bounded fanout subcircuit)\@. Again, the number of edges only
doubles\@. Note that in the \(\complexityclass{SAC\(^1\)}\) circuit described above, the maximum
fanin of any \(+\) gate is \(O(n^2)\), so the total number of edges in the
circuit is \(O(n^6)\)\@. Applied to the circuit, this procedure yields a
bounded fanin, bounded fanout circuit of depth \(O(\log^2 n)\) with
\(O(n^6)\) edges\@. Placing a processor on
each gate and associating a memory cell with each edge gives an EREW
PRAM algorithm that requires \(O(n^6)\) processors and runs in \(O(\log
^2n)\) parallel time\@. Reusing
processors across layers will give an EREW PRAM algorithm performing
\(O(n^6)\) work and running in \(O(\log ^2n)\) parallel time\@. A little
reflection will convince the reader that this EREW implementation
satisfies the owner-write and owner-read restrictions; thus we have an
OROW PRAM implementation\@.
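The bookkeeping behind these claims can be replayed in a few lines of Python (a sketch under our own naming): replacing a fanin-\(f\) \(+\) gate level by level with fanin-2 gates uses \(f-1\) gates and \(2(f-1)\) wires, and adds depth \(\lceil \log f\rceil\).

```python
def binary_tree_stats(f):
    """Replace one fanin-f gate by a tree of fanin-2 gates, built
    level by level; returns (added depth, gates used, wires used)."""
    width, depth, gates, wires = f, 0, 0, 0
    while width > 1:
        pairs, odd = divmod(width, 2)
        gates += pairs          # one fanin-2 gate per pair of inputs
        wires += 2 * pairs      # two input wires per new gate
        width = pairs + odd     # an unpaired input passes through
        depth += 1
    return depth, gates, wires
```

Since the original gate had \(f\) input wires, the \(2(f-1)\) wires of the tree at most double the edge count, as stated above.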
%
\apar{6.1-5}
Let us consider the bit complexity of the above algorithm\@. (In the
PRAM model, each addition and each multiplication is considered to be
a unit-cost operation\@.) All operations in the \(\complexityclass{SAC\(^1\)}\) circuit are on
\(N\)-bit numbers; recall that \(N = n\log n + n\log
|a_{\mbox{{\scriptsize max}}}|\), where \(a_{\mbox{{\scriptsize max}}}\)
is the largest entry in \(A\)\@. Each operation in the above algorithm
involves either adding \(n^2\) numbers or multiplying three numbers, and can
be performed in \(\complexityclass{NC\(^1\)}\)\@. Plugging in these Boolean circuits at each gate
gives a Boolean \(\complexityclass{NC\(^2\)}\) circuit that computes the determinant\@.
\asubsection{6.2}{The Integer Determinant and \(\complexityclass{GapL}\)}
%
\apar{6.2-1}
In this section we demonstrate how \(\complexityclass{\#L}\) functions (first studied in
\cite{cj97-05-03}) can be used to compute the determinant\@. In particular,
we show that the determinant can be expressed as the difference of two
\(\complexityclass{\#L}\) functions, i.e., as a \(\complexityclass{GapL}\) function\@. This, coupled with the
fact that functions in \(\complexityclass{\#L}\) can be computed in Boolean
\(\complexityclass{NC\(^2\)}\) (and
subtraction is in \(\complexityclass{NC\(^1\)}\)), provides another approach toward building
small-depth circuits for the determinant\@.
%
\apar{6.2-2}
To place things in perspective, we first sketch the history of
research efforts directed toward the connections between the
determinant and the function class \(\complexityclass{\#L}\)\@. We denote by
\(\algo{}{}{\idr{\textsf{Det}}}\) the
function which, given a matrix with integer entries, evaluates to the
determinant of the matrix\@.
%
\apar{6.2-3}
Cook \cite{cj97-05-08} introduced \(\complexityclass{NC\(^1\)}\) reductions; he used
\(\complexityclass{NC\(^1\)}\) Turing
reductions to formally define and study many parallel complexity
classes\@. The decision to use reductions with oracle gates as opposed
to many-one \(\complexityclass{NC\(^1\)}\) reductions is deliberate; in Cook's view, oracle
reductions are more suitable for the study of functions than many-one
reductions\@. Among the parallel complexity classes introduced, Cook
defined \(\complexityclass{DET\(^*\)}\)\ to be the class of functions that are \(\complexityclass{NC\(^1\)}\) reducible to
\(\algo{}{}{\idr{\textsf{Det}}}\)\@. Cook also listed many complete problems for \(\complexityclass{DET\(^*\)}\); the most
important of them was Iterated Matrix Product (over integers)\@. He
observed that Samuelson's method in fact establishes an \(\complexityclass{NC\(^1\)}\)
reduction from \(\algo{}{}{\idr{\textsf{Det}}}\) to \(\operatorname{IterMatProd}\)\@. In the other direction,
\[\operatorname{IterMatProd} \leq \operatorname{MatrixPowering} \leq \operatorname{MatrixInversion} \leq \algo{}{}{\idr{\textsf{Det}}}\]
(All reductions are \(\complexityclass{NC\(^1\)}\) reductions\@.)
%
\apar{6.2-4}
Unfortunately, the relation \(\complexityclass{NC\(^1\)}\) \(\subseteq\)
\(\complexityclass{DLOG}\) does not
relativize \cite{cj97-05-29}\@. This caused some confusion earlier in many
papers dealing with the determinant as a complexity class\@.
%
\apar{6.2-5}
In particular, Damm's claim \cite{cj97-05-10} that the logspace many-one
reduction closure of \(\algo{}{}{\idr{\textsf{Det}}}\) is equivalent to \(\complexityclass{L}^{\complexityclass{\#L}}\), and Vinay's claim
\cite{cj97-05-27} that \(\complexityclass{DET\(^*\)}\)\ is equivalent to \(\complexityclass{L}^{\complexityclass{\#L}}\), are both in
error\@. Also, Immerman and Landau erroneously claimed in the conference
version of their paper \cite{cj97-05-14} that the quantifier-free projection
closure of \(\algo{}{}{\idr{\textsf{Det}}}\), qfp(\(\algo{}{}{\idr{\textsf{Det}}}\)), equals \(\complexityclass{DET\(^*\)}\)\@.
%
\apar{6.2-6}
Buntrock et al. \cite{cj97-05-05} show that \(\complexityclass{L}^{\complexityclass{\#L}}\) is contained in \(\complexityclass{DET\(^*\)}\)\@. Vinay
and Damm use the following chain to establish their results:
%
\[ \operatorname{IterMatProd}~\leq~\complexityclass{DiffL}~\subseteq~\complexityclass{L}^{\complexityclass{\#L}}~\subseteq~\complexityclass{DET\(^*\)}\]
%
Wilson's results \cite{cj97-05-29} imply that ``\(\operatorname{IterMatProd}\) is complete for \(\complexityclass{DET\(^*\)}\)'' is
inadequate to show that ``\(\complexityclass{DET\(^*\)}\)\
equals \(\complexityclass{L}^{\complexityclass{\#L}}\)''\@. And the proof technique
of Buntrock et al.\ \cite{cj97-05-05} fails when \(\complexityclass{DET\(^*\)}\)\ is replaced with
the weaker logspace many-one reduction closure of the determinant\@.
%
\apar{6.2-7}
In fact, the correct statement should be as follows \cite{cj97-05-28,cj97-05-22}:
%
\begin{alabsubtext}{Theorem}{3}{}
\(\complexityclass{DET\(^*\)} = \complexityclass{NC\(^1\)}(\complexityclass{\#L})\)\@.
\end{alabsubtext}
%
\apar{6.2-8}
Immerman and Landau \cite{cj97-05-14} and Toda \cite{cj97-05-22} observe that the
Samuelson-Berkowitz algorithm is in fact a (first-order) projection
{from} \(\algo{}{}{\idr{\textsf{Det}}}\) to \(\operatorname{IterMatProd}\)\@. Consequently, by defining (a possibly new class)
\(\complexityclass{DET}\) as the logspace many-one closure of the determinant, and by
defining \(\complexityclass{GapL}\) as the class containing differences of two \(\complexityclass{\#L}\)
functions, one can hope for the following theorem:
%
\begin{alabsubtext}{Theorem}{4}{}
\(\complexityclass{DET}\) = \(\complexityclass{GapL}\)\@.
\end{alabsubtext}
%
\apar{6.2-9}
This result was essentially discovered independently by Vinay, Damm,
and Toda (see \cite{cj97-05-27,cj97-05-28,cj97-05-10,cj97-05-22,cj97-05-23}) around
the same time\@. The proofs are somewhat different, though\@. Damm,
inspired by Babai and Fortnow's \cite{cj97-05-06} characterization of \(\complexityclass{\#P}\),
used (positive) arithmetic straight-line programs with restricted
multiplications (the (P)ARM model) to characterize \(\complexityclass{\#L}\)\@. These programs
are equivalent to skew arithmetic circuits; in fact, in the \(\complexityclass{\#P}\)
setting, this equivalence was already known (see \cite{cj97-05-06})\@. Toda and
Vinay show how \(\algo{}{}{\idr{\textsf{Det}}}\) is equivalent to the difference between the
number of \((s,t)\) paths in two directed acyclic graphs\@. All the
proofs rely on the Samuelson-Berkowitz algorithm\@. We will now present
a complete proof of this theorem\@.
%
\begin{alabsubtext}{Proof of Theorem}{4}{}
\prooffontshape
In the forward direction, the result can be seen in a particularly
direct way from our sequential implementation of clow sequences in
Section~3\@. Observe that given \(A\), the construction of \(H_A\)
is logspace uniform, and tracing out \(s\leadsto t_+\) or \(s\leadsto
t_-\) paths can be done by an \(\complexityclass{NL}\) machine\@. (We must be careful here:
the partial products along a path can be too large to be stored in
logspace\@. To traverse an edge of weight \(|w|\), the \(\complexityclass{NL}\) machine
should simply branch into \(w\) paths and remember only the sign\@. This
way, a path of weight \(w\) in \(H_A\) will generate \(|w|\)
accepting/rejecting paths of the \(\complexityclass{NL}\) machine\@.) Essentially, \(H_A\)
gives us a uniform polynomial size, polynomial width branching program,
corresponding precisely to \(\complexityclass{GapL}\)\@.
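As a sanity check (ours, not part of the proof), this branching process, in its clow-sequence form as listed in Table~3, can be simulated exhaustively: a path traversing an edge of weight \(w\) stands for \(w\) machine paths, so we carry weights instead of literally branching \(w\) ways, and the gap between accepting and rejecting path counts is the determinant.

```python
def det_gap(a):
    """Simulate the GapL machine for the determinant of a matrix with
    non-negative integer entries: trace all computation paths and
    return (#accepting - #rejecting).  Vertices are 1..n."""
    n = len(a)
    A = lambda u, v: a[u - 1][v - 1]
    acc = rej = 0

    def run(head, current, count, parity, weight):
        nonlocal acc, rej
        if count == n - 1:
            # the final edge closes the last clow and flips the parity
            w = weight * A(current, head)
            if (parity + 1) % 2 == 0:
                acc += w
            else:
                rej += w
            return
        for nxt in range(head, n + 1):
            w = weight * A(current, nxt)
            if w == 0:           # branching 0-ways: the path dies
                continue
            if nxt > head:
                run(head, nxt, count + 1, parity, w)
            else:                # nxt == head: close the clow, start a new one
                for newhead in range(head + 1, n + 1):
                    run(newhead, newhead, count + 1, (parity + 1) % 2, w)

    for head in range(1, n + 1):
        run(head, head, 0, n % 2, 1)   # parity starts at n mod 2
    return acc - rej
```

Each surviving computation path corresponds to one clow sequence of total length \(n\), with the parity recording the sign \((-1)^{n+k}\).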
%
\begin{table*}[!t]
% table 3
\begin{center}
\framebox{\parbox{4in}{
\begin{algorithm}{informal}{}
\begin{tabbing}
Nondeterministically choose \idr{head} \(\in \{1, \dots , n \}\).\\
\idr{current} = \idr{head}\\
\idr{count} = 0\\
\kwd{If} \(n\) is odd, \kwd{then} \idr{parity} = 1 \kwd{else} \idr{parity} = 0 \\
\comment{\# [parity, head, current, count] is the vertex traced out in }\\
\comment{\# ~~~~a clow sequence of \(G_A\) or an \(s\leadsto t_+\) or \(s\leadsto t_-\) path in \(H_A\)}\\
\kwd{While} \= \idr{count} \(< n-1\) \kwd{do}\\
\> Nondeterministically choose \idr{next} \(\in \{\idr{\mathrm{head}}, \dots , n \}\).\\
\> \idr{count} = \idr{count} + 1\\
\> \kwd{Branch} \(l\)-ways, \kwd{where} \(l = A[\idr{\mathrm{current}}, \idr{\mathrm{next}}]\).\\
\> \kwd{If} \idr{next} \(>\) \idr{head} \= \kwd{then} \= \idr{current} = \idr{next} \\
\> \kwd{else} \> Nondeterministically choose \idr{newhead} \(\in \{\idr{\mathrm{head}} + 1, \dots , n \}\).\\
\> \> \idr{parity} = (\idr{parity} + 1) \(\bmod{2}\) \\
\> \> \idr{head} = \idr{newhead}\\
\> \> \idr{current} = \idr{head}\\
\comment{\# At this point, \idr{count} = \(n-1\)} \\
\kwd{Branch} \(l\)-ways, \kwd{where} \(l = A[\idr{\mathrm{current}}, \idr{\mathrm{head}}]\).\\
\idr{parity} = (\idr{parity} + 1) \(\bmod{2}\) \\
\kwd{If} \idr{parity} = 0 \kwd{then} accept, \kwd{otherwise} reject.
\end{tabbing}
\end{algorithm}
}}
\end{center}
\caption{A \(\complexityclass{GapL}\) algorithm for the determinant over non-negative
integers}
\end{table*}
%
\begin{table*}[!t]
% table 4
\begin{center}
\framebox{\parbox{4in}{
\begin{algorithm}{informal}{}
\begin{tabbing}
Input: \= \(k\) bits \(a_{k-1}, \dots ,a_1,a_0\) specifying the number \(l = \sum_{i=0}^{k-1}2^ia_i\) \\
Goal: \> To produce exactly \(l\) paths ending in some prespecified configuration \(C\); \\
\> any extra paths produced should be rejecting\@. \\
\(\complexityclass{NL}\) algorithm \\
\> \(j=0\) \\
\> \kwd{While} \= \(j < k\) \kwd{do}\\
\> \> Branch 3-ways \\
\> \> On Branch 1, if \(a_j = 0\) \= then reject and exit loop \\
\> \> \> otherwise enter configuration \(C\) and exit loop\\
\> \> On Branches 2 and 3, increment \(j\)\\
\> \kwd{Endwhile}
\end{tabbing}
\end{algorithm}
}}
\end{center}
\caption{\(\complexityclass{NL}\) code to produce \(l\) accepting paths, given \(l\) in binary}
\end{table*}
%
\apar{6.2-10}
Table~3 lists the code for an \(\complexityclass{NL}\) machine computing,
through its gap function, the determinant of a matrix \(A\) with
non-negative integral entries\@. (Negative integers can be accommodated
by appropriately updating the parity variable\@.) Most steps are
easily seen to be possible on an \(\complexityclass{NL}\) machine\@. The \(l\)-way branching
is the only step which must be done carefully\@. Since \(l\) can be
\(n\) bits long, it cannot be stored on the worktape\@. Instead, the \(\complexityclass{NL}\)
machine has to step through the bit description of \(l\) on the input
tape, and branch accordingly\@. The details are shown in
Table~4\@.
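The counting behind Table~4 can be checked mechanically: just before the 3-way branch of iteration \(j\) there are \(2^j\) live computation paths, so the paths reaching configuration \(C\) number \(\sum_j a_j 2^j = l\)\@. The Python sketch below (naming ours) replays this count.

```python
def paths_reaching_C(bits):
    """Count the computation paths of the Table-4 gadget that reach
    the designated configuration C, given the bits a_0, ..., a_{k-1}
    of l (least significant first)."""
    live = 1          # paths entering iteration j = 0
    reached = 0
    for a_j in bits:
        if a_j == 1:
            reached += live   # branch 1 of every live path enters C
        live *= 2             # branches 2 and 3 continue the loop
    return reached
```

For instance, the bits of \(l = 5\) (namely \(1,0,1\)) yield exactly \(5\) paths reaching \(C\).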
%
\apar{6.2-11}
In the other direction, we follow Toda, who follows Valiant
\cite{cj97-05-24}\@. It essentially suffices to show that counting
\(s\leadsto t\) paths in an acyclic graph \(G_1\) (a canonical complete
problem for \(\complexityclass{\#L}\)) can be reduced to computing the determinant of a
matrix\@. We first replace each edge in \(G_1\) by a path of length 2, and add
edges \(\langle t,s \rangle\) and \(\{ \langle u,u \rangle \mid u \not
\in \{s,t \} \}\)\@. An \(s\leadsto t\) path in \(G_1\) then corresponds
exactly to a cycle cover in this new graph \(G_2\), and all cycle
covers in \(G_2\) have a positive sign\@. The number of \(s\leadsto t\)
paths in \(G_1\) therefore equals det(adj(\(G_2\))), which equals perm(adj(\(G_2\)))
(by adj(\(G_2\)) we mean the adjacency matrix of the graph \(G_2\))\@.
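This construction can be carried out mechanically; in the Python sketch below (function names ours), adj(\(G_2\)) is built from a DAG \(G_1\), and its determinant, computed here by naive cofactor expansion, counts the \(s\leadsto t\) paths.

```python
def det(m):
    """Naive cofactor expansion; adequate for small examples."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def count_paths_via_det(n, edges, s, t):
    """Number of s->t paths in a DAG on nodes 0..n-1, computed as
    det(adj(G2)): subdivide every edge, add the edge (t, s), and put
    self-loops on every vertex other than s and t."""
    new_edges, nxt = [], n
    for (u, v) in edges:
        new_edges += [(u, nxt), (nxt, v)]   # midpoint node subdivides (u,v)
        nxt += 1
    new_edges.append((t, s))
    new_edges += [(u, u) for u in range(nxt) if u not in (s, t)]
    adj = [[0] * nxt for _ in range(nxt)]
    for (u, v) in new_edges:
        adj[u][v] = 1
    return det(adj)
```

Subdividing each edge makes every \(s\leadsto t\) cycle odd, so every cycle cover has positive sign, as the argument above requires.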
%
\apar{6.2-12}
A canonical complete problem for \(\complexityclass{GapL}\) is counting the difference
between \(s\leadsto t_+\) paths and \(s\leadsto t_-\) paths in an acyclic
graph \(G_1\)\@. To reduce this to computing the determinant of a matrix,
a minor modification of the above procedure suffices\@. We first
replace each edge in \(G_1\) by a path of length 2 as above, then add
node \(x\), and add edges \(\langle t_+,s \rangle\), \(\langle t_-,x
\rangle\), \(\langle x,s \rangle\), and \(\{ \langle u,u \rangle \mid u
\neq s \}\)\@. Each \(s\leadsto t_+\) path in \(G_1\) then corresponds to a
cycle cover of positive sign in this new graph \(G_2\) (there are no
cycles of even length in the corresponding cycle cover)\@. Also, each
\(s\leadsto t_-\) path in \(G_1\) corresponds to a cycle cover of negative
sign in \(G_2\)\@. So the \(\complexityclass{GapL}\) function equals det(adj(\(G_2\)))\@.
{\nopagebreakbeginpar\markendlst{Proof of Theorem 4}}
\end{alabsubtext}
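The modified construction for the gap can be sketched in the same style (a Python illustration under our own naming; the extra node \(x\) routes \(s\leadsto t_-\) paths through an even cycle, giving them negative sign).

```python
def det(m):
    """Naive cofactor expansion; adequate for small examples."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def gap_via_det(n, edges, s, t_plus, t_minus):
    """#(s->t+ paths) - #(s->t- paths) in a DAG on nodes 0..n-1,
    computed as det(adj(G2)): subdivide every edge, add a node x and
    edges (t+, s), (t-, x), (x, s), and self-loops on every u != s."""
    new_edges, nxt = [], n
    for (u, v) in edges:
        new_edges += [(u, nxt), (nxt, v)]   # midpoint node subdivides (u,v)
        nxt += 1
    x = nxt
    nxt += 1
    new_edges += [(t_plus, s), (t_minus, x), (x, s)]
    new_edges += [(u, u) for u in range(nxt) if u != s]
    adj = [[0] * nxt for _ in range(nxt)]
    for (u, v) in new_edges:
        adj[u][v] = 1
    return det(adj)
```

An \(s\leadsto t_+\) path yields an odd cycle (sign \(+1\)), while an \(s\leadsto t_-\) path is lengthened by \(x\) into an even cycle (sign \(-1\)).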
%
\apar{6.2-13}
What is striking is a complexity-theoretic analog of the classical
\(\algo{}{}{\idr{\textsf{Det}}}\) vs. \(\operatorname{Perm}\) problem: they are complete for \(\complexityclass{GapL}\) and
\(\complexityclass{GapP}\),
respectively\@.\footnote{Here, \(\operatorname{Perm}\) denotes the function which,
given an \(n\)-dimensional matrix \(A\) with integer entries, evaluates to the
permanent of \(A\):
\[\operatorname{Perm}(A)= \sum_{\sigma \in S_n}\prod_{i=1}^{n} a_{i\sigma (i)}\]}
A corollary of the proof above shows that for every
integer matrix \(A\), there is an integer matrix \(B\) whose dimensions
are polynomially related to the dimensions of \(A\), such that the
determinant of \(A\) is the permanent of \(B\)\@. Of course, we do not know
if \(\operatorname{Perm}\) can be reduced to \(\algo{}{}{\idr{\textsf{Det}}}\) in logspace/polynomial time\@.
%
\apar{6.2-14}
The future of the class \(\complexityclass{DET\(^*\)}\)\ is not clear\@. Allender and Ogihara
\cite{cj97-05-02} note that there is no reason to believe that \(\complexityclass{NL}\) is a
subset of \(\complexityclass{GapL}\)(=\(\complexityclass{DET}\))\@. (We mean subset in the following sense: a
language is in \(\complexityclass{GapL}\) if its characteristic function is in \(\complexityclass{GapL}\)\@.)
This is because the 0--1 valued functions in \(\complexityclass{DET}\) must differ in at
most one accepting path\@. (However, a recent result of Reinhardt and
Allender \cite{cj97-05-18} shows that the inclusion is true in a nonuniform
setting\@.) On the other hand, notice that \(\complexityclass{NL}\) is contained in
\(\complexityclass{DET\(^*\)}\)\@. In fact, Allender and Ogihara \cite{cj97-05-02} consider
\(\complexityclass{AC\(^0\)(\(\algo{}{}{\idr{\textsf{Det}}}\))}\) and
show that \(\complexityclass{AC\(^0\)(\(\algo{}{}{\idr{\textsf{Det}}}\))}\) corresponds to a certain
counting hierarchy \(\complexityclass{LH}\)
that may be defined on \(\complexityclass{\#L}\)\@. They also claim (without proof) that if
\(\complexityclass{AC\(^0\)(\(\algo{}{}{\idr{\textsf{Det}}}\))}\) and \(\complexityclass{DET\(^*\)}\) coincide,
\(\complexityclass{LH}\) collapses\@.
%
\apar{6.2-15}
Two nice applications of the \(\complexityclass{GapL}\) characterization in
Theorem~4 have been the drastic simplification of
Jung's theorem on \(\complexityclass{PL}\) (see \cite{cj97-05-02}) and the characterization of the
complexity of computing the rank of a matrix (see \cite{cj97-05-01})\@.
\asection{Acknowledgements}{}
%
\apar{}
We thank Eric Allender for setting us on the track of pursuing the
determinant problem, and for supplying us with several interesting
references\@. We thank an anonymous referee for pointing out that our
EREW algorithm from Section~6.1 is even an OROW algorithm\@.
%
\end{articletext}
%
\begin{articlebib}
\bibliographystyle{alpha}
\bibliography{cj97-05}
\end{articlebib}
%
\end{document}