\documentclass[11pt]{article}
\usepackage{latexsym,array,ifthen,float}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{epsf}
\marginparwidth 0pt
\oddsidemargin 0pt
\evensidemargin 0pt
\marginparsep 0pt
\topmargin -0.6in
\textwidth 6.6in
\textheight 8.7 in
\floatstyle{ruled}
\newfloat{Algorithm}{thp}{lop}[section]
\newenvironment{proof}{\noindent{\bf Proof:}}{\hfill$\Box$}
\newcommand{\HRule}{\rule{\linewidth}{0.2mm}}
\newtheorem{definition}{Definition}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{observation}{Observation}[section]
\newtheorem{notation}{Notation}[section]
\newtheorem{remark}{Remark}[section]
\newtheorem{note}{Note}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{property}{Property}[section]
\begin{document}
\title {Self-Stabilizing Local Mutual Exclusion and Daemon Refinement
\thanks{A preliminary version of this work was presented at DISC'00
\cite{BDGM00}.
}}
\author {Joffroy Beauquier$^\dag$ \and Ajoy K. Datta$^\ddag$ \and
Maria Gradinariu$^\dag$ \and Frederic Magniette$^\dag$}
\date{
{\small $^\dag$ Laboratoire de Recherche en Informatique, Universit\'{e} de
Paris Sud, France}\\
{\small $^\ddag$ Department of Computer Science, University of Nevada Las
Vegas}
}
\maketitle
\begin{abstract}
Refining self-stabilizing algorithms that rely on tight scheduling
constraints (a weaker daemon) into corresponding algorithms that need weak
or no scheduling constraints (a stronger daemon), while preserving the
stabilization property, is both useful and challenging. Designing
transformation techniques for these refinements has been the subject of
serious investigation in recent years.
This paper proposes a new transformation technique for daemon refinement.
The core of the transformer is a self-stabilizing local mutual exclusion
algorithm. The local mutual exclusion problem is to grant a process
the privilege to enter the critical section if and only if none of its
neighbors holds the privilege. The contribution of this
paper is twofold. First, we present a bounded-memory self-stabilizing
local mutual exclusion algorithm for arbitrary networks, assuming an
arbitrary daemon. After stabilization, this algorithm maintains a bound
on the service time (the delay between two successive executions of the
critical section by a particular process). This bound is
$\frac{n(n-1)}{2}$, where $n$ is the network size.
Another nice feature of our algorithm is that it satisfies the
strong safety property:
in any configuration, there is at least one privileged process.
Second, we use the local
mutual exclusion algorithm to design two transformers which
convert the algorithms working under a weaker daemon to ones which work
under the distributed, arbitrary (or unfair) daemon. Both transformers
preserve the self-stabilizing property. The first transformer refines
algorithms written under the central daemon, while the second transformer
refines algorithms designed for the $k$-fair ($k \geq (n-1)$) daemon.
\medskip

\noindent
{\bf Keywords:} Local mutual exclusion, self-stabilization, transformer,
unfair daemon.
\end{abstract}
\section{Introduction}
One of the most inclusive approaches to fault-tolerance in distributed
systems is \emph{self-stabilization} \cite{Dij74,Dol00}. Introduced by
Dijkstra~\cite{Dij74}, this technique guarantees that, regardless of the
initial state, the system will eventually converge to the intended behavior.
The correctness of self-stabilizing algorithms is proven assuming some type of
scheduler (or daemon) as the adversary.
The two most common schedulers are the following:
the central scheduler (only one process can execute an atomic step at one
time) and the distributed scheduler (any nonempty subset of the enabled
processes can execute their atomic steps simultaneously). Although it is
easier to prove stabilization for algorithms working under the
central scheduler, those working under the distributed scheduler
support more practical implementations. So, it makes sense to design a
self-stabilizing algorithm under the central scheduler, prove its
correctness in that model, and then transform the algorithm
so that it works under the distributed scheduler while preserving
self-stabilization and other desirable properties. One of the main goals
of this paper is to design such a transformer.
The dining philosophers problem \cite{Dij71} deals with mutual exclusion
among neighboring processes in a ring. The local mutual exclusion problem
is the extension of this problem to any arbitrary network.
\medskip
\textbf{Related Work.}
There have been several attempts to develop transformers that
transform a program written and proven under the assumption of a
weak daemon to a program self-stabilizing under a strong
daemon. A special class of systems, called the alternator, was introduced in
\cite{GH97} for the linear topology and in \cite{GH99} for any arbitrary
topology. The idea of an alternator is the following:
Every process has an integer state variable which is bounded by $2d-1$, where
$d$ is the length of the longest simple cycle in the network. One of the main
features of the alternator is that no two enabled neighboring processes
can have the state variable equal to $2d-1$ at the same time. The processes
do some effective work only when they are in state $2d-1$.
This non-interference property is used in \cite{GH99} to design a transformer
from the central daemon to the distributed daemon.
The transformation idea is to compose the algorithm $A$ (written under the
central daemon) with the alternator such that the actions of the algorithm $A$
are executed only when the alternator is in state $2d-1$.
Another transformer was proposed in \cite{MN98}. This method
uses time-stamps to order the actions of any self-stabilizing algorithm.
One notable feature of \cite{MN98} is that it achieves the silent
stabilization~\cite{DGS96}, but the algorithm uses unbounded variables
and works under a weakly fair scheduler.
Another approach in designing transformation techniques is to implement
the local mutual exclusion among the neighboring processes.
Distributed, but non-stabilizing solutions to the local mutual exclusion
problem are presented in \cite{CM84} and \cite{AS90}.
The alternator~\cite{GH99} can also be considered a solution to the dining
philosophers problem. However, it is not an optimal solution,
for the following reason:
a process $p$ can enter the critical section only when $p$ is the one and
only process in its neighborhood in state $2d-1$. But, in this algorithm,
there are many configurations in which no process of some neighborhood is
in state $2d-1$, implying that no process in those neighborhoods is in the
critical section.
The algorithms in \cite{HP89} and \cite{Gou87} propose
self-stabilizing solutions to the dining philosophers problem
(and hence, to the local mutual exclusion problem).
But, both solutions use a central daemon and a distinguished process to
implement the token circulation. The process holding the token executes its
critical section. A self-stabilizing solution to the dining
philosophers problem has been proposed in~\cite{Hua00}. The
algorithm
in~\cite{Hua00} works under the read/write atomicity model~\cite{DIM93}, but
makes a very strong assumption---the links of the network are (initially)
colored in a special way.
A solution to the local mutual exclusion problem for arbitrary networks is
proposed in~\cite{AS99a}. This solution uses the read/write atomicity model.
The proposed algorithm is not uniform. It uses an underlying spanning tree.
This tree is used to bound the timestamps---the system is periodically
reset using the tree.
The algorithm disables the critical section access during the reset period,
and hence, is not an optimal solution.
The legitimate state depends on some parameters, one of which has to be larger
than the number of nodes. The authors claimed that the algorithm uses bounded
memory, but did not provide any bound.
The solution to the local mutual exclusion problem in tree networks is
presented in~\cite{AS99b} and \cite{JADT99}.
A transformer using the local mutual exclusion has been reported
in~\cite{NA99}. The main goal of this work was to refine the atomicity.
The algorithm works under the read/write atomicity and uses bounded variables.
However, since the algorithm assumes a weakly fair daemon, the service
time, although bounded, cannot be computed exactly (the bound depends on
the behavior of the daemon).
\medskip
\textbf{Our Contributions.}
We first present two solutions to the local mutual exclusion problem.
The first solution uses unbounded memory.
We then extend the first algorithm to design a bounded memory solution.
Our algorithms are self-stabilizing under any
arbitrary (unfair) distributed daemon. Moreover, after stabilization, the
service time is bounded by $\frac{n(n-1)}{2}$.
We use the local mutual exclusion algorithm to design two
stabilization-preserving transformers that convert algorithms written for a
weaker daemon into algorithms which work under the assumption of a stronger
daemon. The first transformer refines algorithms
written under the central daemon, while the second transformer takes as input
algorithms designed for the $k$-fair ($k \geq (n-1)$) daemon.
Both transformers convert the
input algorithms to self-stabilizing algorithms which work under the
assumption of arbitrary (even unfair) distributed daemon.
\medskip
\textbf{Outline of Paper.}
The rest of the paper is organized as follows: The model and specification of
the problem solved in this paper are presented in Section~\ref{sec:model}.
In Section \ref{sec:lme}, two local mutual exclusion algorithms are given.
The two transformers are discussed in Section~\ref{sec:trans}.
Section~\ref{sec:concl} provides some concluding remarks.
\section{Model and Specification} \label{sec:model}
A distributed system is a set of state machines called processes. Each
process can communicate with a subset of the processes called
neighbors. We will use ${\mathcal N}.x$ to denote the set of neighbors of
node $x$ and $|{\mathcal N}.x|$ to represent the number of neighbors of
$x$. The communication among neighboring processes is carried out using
the communication registers (called ``shared variables'' throughout
this paper).
We consider distributed systems consisting of $n$ processes where every
process has a unique identifier. The system's communication graph is drawn
by representing processes as nodes and the neighborhood relationship by
edges connecting the nodes.
Every process $p$ executes a program which consists of a set of shared variables
and a finite set of guarded actions of the form:
$\langle label\rangle::\ \langle guard \rangle\longrightarrow\langle
statement \rangle$, where each guard is a boolean expression involving the
variables of $p$ and its neighbors. The statement of an action of $p$ updates
one or more variables of $p$. An action can be executed only if its guard
evaluates to true.
A \emph{configuration} of a distributed system is an
instance of the state of the processes. A process
is \emph{enabled} in a given configuration if at least one of the guards
of its program is \emph{true}. We denote the set of enabled processes for
a given configuration by $E$.
A distributed system can be modeled by a transition system.
A transition system is a three-tuple
$S=({\mathcal C},{\mathcal T}, {\mathcal I})$
where $\mathcal C$ is the collection of all the configurations,
${\mathcal I}$ is a subset of ${\mathcal C}$ called the set of initial
configurations, and
$\mathcal T$ is a function ${\mathcal T}: {\mathcal C} \longrightarrow
{\mathcal C}$.
A transition, also called a {\em computation step}, is a tuple $(c_1,c_2)$
such that $c_2={\mathcal T}(c_1)$.
A \emph{computation} of an algorithm $\mathcal{P}$ is a \emph{maximal}
sequence of computation steps
$e=((c_0,c_{1})$ $(c_1,c_{2})$ $\ldots (c_i,c_{i+1}) \ldots)$
such that, for every $i \geq 0$, $c_{i+1}={\mathcal T}(c_i)$ if $c_{i+1}$
exists, or $c_{i}$ is a terminal configuration.
\emph{Maximality} means that the sequence is either infinite, or it is
finite and no process is enabled in the final configuration.
All computations considered in this paper are assumed to be maximal.
A {\em fragment} of a computation $e$ is a finite sequence of successive
computation steps of $e$.
In a computation, a transition $(c_i,c_{i+1})$ occurs due to the
execution of a nonempty subset of the enabled processes in configuration $c_i$.
In every computation step, this subset is chosen by the scheduler or daemon.
We refer to the following types of daemon in this paper:
{\it central daemon ---} in every computation step, only one of the enabled
processes is chosen by the daemon;
{\it $k$-fair daemon ---} a process cannot be selected more than $k$ times
by the daemon while another continuously enabled process remains
unselected;
{\it weakly fair daemon ---} if a process $p$ is continuously enabled, $p$
will be eventually chosen by the daemon to execute an action;
{\it distributed unfair daemon ---} during a computation step, any nonempty
subset of the enabled processes is chosen by the daemon.
We refer to the distributed unfair daemon as \textit{stronger daemon} and
all other daemons (defined above) as \textit{weaker daemons}.
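The distinction among these daemons can be illustrated with a small sketch
(in Python, with names of our own choosing, not from the formal model):
a daemon is simply a policy for picking a nonempty subset of the enabled
processes at each computation step.

```python
import random

# Illustrative sketch (hypothetical names): a daemon is a policy that
# chooses which enabled processes move in a computation step.

def central_daemon(enabled):
    # exactly one enabled process is chosen
    return {random.choice(sorted(enabled))}

def distributed_unfair_daemon(enabled):
    # any nonempty subset of the enabled processes may be chosen,
    # with no fairness guarantee across steps
    k = random.randint(1, len(enabled))
    return set(random.sample(sorted(enabled), k))

enabled = {1, 4, 7}
assert len(central_daemon(enabled)) == 1       # one process per step
choice = distributed_unfair_daemon(enabled)
assert choice and choice <= enabled            # nonempty subset of enabled
```

The fairness notions constrain which infinite sequences of such choices the
adversary may produce; the unfair daemon places no constraint at all.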
\paragraph{Self-Stabilization.}
In order to define self-stabilization for a distributed system, we use two
types of predicates: the legitimacy predicate (defined on the system
configurations and denoted by ${\mathcal L}$) and the problem
specification (defined on the system computations and denoted by
${\mathcal S}{\mathcal P}$).
Let ${\mathcal P}$ be an algorithm. The set of all computations of the
algorithm ${\mathcal P}$ is denoted by ${\mathcal E}_{\mathcal P}$.
Let ${\cal X}$ be a set and $Pred$ a predicate defined on the elements of
${\cal X}$. The notation $x \models Pred$ means that the element
$x \in {\cal X}$ satisfies the predicate $Pred$.
\begin{definition}[Self-Stabilization]
An algorithm ${\cal P}$ is \underline{self-stabilizing} for a specification
${\mathcal S}{\mathcal P}$ if and only if the following two properties hold:\\
{\it (1) Convergence:} All computations reach a configuration that
satisfies the legitimacy predicate. Formally,
$ \forall e \in {\cal E}_{\cal P}:: e=((c_0,c_{1})
(c_1,c_{2}) \ldots ): \exists i \geq 1, c_{i} \models {\mathcal L}$.\\
{\it (2) Correctness:} All computations starting in a configuration
satisfying the legitimacy predicate satisfy the problem
specification ${\mathcal S}{\mathcal P}$. Formally,
$\forall e \in {\cal E}_{\cal P}::$
$e=((c_0,c_{1})$ $(c_1,c_{2})$ $\ldots ): c_{0}
\models {\mathcal L}\Rightarrow e \models {\mathcal S}{\mathcal P}$.\\
\end{definition}
\paragraph{Local Mutual Exclusion.}
\label{lme}
The specification of the local mutual exclusion problem
(${\mathcal S}{\mathcal P}_{LME}$) is the conjunction of the following two
conditions defined in terms of ``the privilege to enter the critical
section'':
(i) {\em Safety:}
If a process holds a privilege,
then none of its neighbors holds the privilege.
(ii) {\em Liveness:} Every process holds the privilege
infinitely often.
\begin{definition}[Strong Safety]
Let ${\mathcal P}$ be a self-stabilizing mutual exclusion algorithm.
${\mathcal P}$ satisfies the strong safety property if in any configuration
there exists at least one privileged process.
\end{definition}
\begin{definition}[Fairness Index]
\label{def:fairness}
Let ${\mathcal P}$ be a self-stabilizing mutual exclusion
algorithm. ${\mathcal P}$ is considered to have a \underline{fairness index}
of $k$ if in any computation of ${\mathcal P}$ under any daemon,
between any two consecutive critical section executions of a process,
any other process can execute its critical section at most $k$ times.
\end{definition}
\begin{definition}[Service Time]
\label{def:stime}
Let ${\mathcal P}$ be a self-stabilizing local mutual exclusion algorithm.
The \underline{service time} of ${\mathcal P}$ is the maximum total number
of critical section executions by all other processes between two
successive executions of the critical section
by any process, regardless of the daemon.
\end{definition}
\paragraph{Virtual Orientation of the Communication Graph.}
We will use an ``adjacency'' relation, denoted by $\rhd$, over the
shared variables of the processes to define a virtual
orientation of the communication graph. The exact definition (or
implementation) of this relation will be specific to the two solutions to the
local mutual exclusion problem presented in Section~\ref{sec:lme}.
\begin{definition} [Virtual Orientation] \label{def:vor}
Let $x.p$ and $x.q$ be two shared variables of two neighboring
processes $p$ and
$q$. In the communication graph, the edge between $p$ and $q$ is said to be
\underline{virtually oriented} from $p$ to $q$ if and only if $x.p \rhd x.q$.
This edge is an incoming edge for $q$ and an outgoing edge for $p$.
\end{definition}
\begin{definition}[Privileged Process]
\label{def:priv}
A process is said to be \underline{privileged} in a configuration if in the
communication graph, all the edges adjacent to the process are oriented
toward it (i.e., all edges are incoming edges).
\end{definition}
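As a concrete illustration of the last two definitions, the following
Python sketch (an assumed instantiation, for illustration only) takes the
ordinary order $>$ on integer shared variables as a stand-in for $\rhd$ and
computes the privileged processes of a triangle; since $>$ is a strict
order, no two neighbors can be privileged simultaneously.

```python
# Sketch (assumed instantiation): x.q > x.p stands in for "the edge
# between p and q is virtually oriented from q toward p".

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # a triangle
x = {0: 3, 1: 5, 2: 9}                     # shared variable of each process

def oriented_toward(q, p):
    # edge p-q is an incoming edge for p
    return x[q] > x[p]

def privileged(p):
    # privileged: all incident edges are incoming
    return all(oriented_toward(q, p) for q in graph[p])

assert [p for p in graph if privileged(p)] == [0]
```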
\section{Local Mutual Exclusion} \label{sec:lme}
We present two solutions to the local mutual exclusion problem in this
section. We first present the unbounded space solution to
help understand the ideas behind the second solution. Next, we will give
the bounded solution which is our final solution. In the next section, we
will use the bounded solution to design two scheduler transformers.
Both solutions use the edge reversal mechanism of \cite{BG89}
to maintain the acyclic orientation of the communication graph.
\subsection{Unbounded Local Mutual Exclusion Algorithm}
The key feature of this algorithm is its $(n-1)$-fairness index under any
arbitrary distributed daemon.
\subsubsection{Algorithm ULME}
\begin{Algorithm}
\noindent
{\bf Constants}:\\
\hspace*{0.3cm}
$id.p$ : unique integer identifier of $p$;\\
\hspace*{0.3cm}
${\mathcal N}.p$ : the set of neighbors of $p$;\\
\noindent
{\bf Shared Variable}:\\
\hspace*{0.3cm}
$L.p$ : unbounded integer;\\
\noindent
{\bf Local variable}:\\
\hspace*{0.3cm}
$CS$ : boolean flag used to indicate if $p$ is in the critical section;\\
\noindent
\noindent
{\bf Function}:\\
\hspace*{0.3cm}
$(\forall q \in {\mathcal N}.p): p \prec q \equiv L.q \rhd L.p$\\
\noindent
{\bf Actions}:\\
\noindent
\hspace*{0.3cm}
${\mathcal A}:$
$\forall q \in {\mathcal N}.p:\ p \prec q$
$\longrightarrow$ \\
\hspace*{2cm}
$CS=1$;\\
\hspace*{2cm}
execute critical section;\\
\hspace*{2cm}
$L.p=\max\{L.q \mid q \in {\mathcal N}.p\}+1$;\\
\caption{Unbounded Local Mutual Exclusion (ULME) for process $p$}
\label{alg:unbounded}
\end{Algorithm}
\begin{note} \label{note:cs-u}
The variable $CS$ in Algorithm ULME is not necessary to solve the local
mutual exclusion problem. We added it to the code of
Algorithm~\ref{alg:unbounded} because we need it to design the transformers
in Section~\ref{sec:trans}.
\end{note}
We borrow the definition of the ``direction of an edge'' from \cite{GK93} to
define the adjacency relation ($\rhd$) for Algorithm ULME
(shown as Algorithm~\ref{alg:unbounded}).
\begin{definition}\label{def:uvor}
For any two neighboring processes $p_1$ and $p_2$ executing Algorithm ULME,
$L.p_2 \rhd L.p_1$ (i.e., the edge is virtually oriented from $p_2$ toward
$p_1$) iff $(L.p_1 < L.p_2) \vee (L.p_1 = L.p_2 \wedge id.p_1 < id.p_2)$.
\end{definition}
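To make the mechanics of Algorithm ULME concrete, here is a small Python
simulation (our own sketch, not part of the algorithm): ties on $L$ are
broken by the unique identifiers, following the convention borrowed
from \cite{GK93}, and the daemon below is the fully synchronous one,
which chooses every enabled process at each step.

```python
# Simulation sketch of Algorithm ULME (illustrative; assumptions noted
# above): process p is privileged iff (L.p, id.p) is lexicographically
# smaller than (L.q, id.q) for every neighbor q.

def enabled(graph, L):
    return {p for p in graph
            if all((L[p], p) < (L[q], q) for q in graph[p])}

def step(graph, L, chosen):
    # every chosen process executes its critical section and then
    # jumps above all its neighbors: L.p = max{L.q} + 1
    new_L = dict(L)
    for p in chosen:
        new_L[p] = max(L[q] for q in graph[p]) + 1
    return new_L

graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # a 4-cycle
L = {0: 5, 1: 0, 2: 7, 3: 3}    # arbitrary (possibly corrupted) start

executions = {p: 0 for p in graph}
for _ in range(20):
    E = enabled(graph, L)
    # safety: no two privileged processes are neighbors
    assert all(q not in E for p in E for q in graph[p])
    L = step(graph, L, E)       # synchronous distributed daemon
    for p in E:
        executions[p] += 1

assert all(c > 0 for c in executions.values())  # every process served
```

On this example, the privilege alternates between the two independent sets
of the cycle, so every process repeatedly executes its critical section
while no two neighbors ever hold the privilege together.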
\begin{note} \label{note:cs-r}
The actions of the weaker algorithm are executed only when $CS=1$,
i.e., when the process is allowed to execute its critical section. The
variable $CS$ is used only to design the transformers
(see Note~\ref{note:cs-u}).
\end{note}
We now present some properties of the composed algorithm.
\begin{lemma}
\label{maximal}
Any maximal computation of the composed algorithm has a maximal projection on
Algorithm ${\mathcal A}_{LME}$.
\end{lemma}
\begin{proof}
Let $e$ be a maximal computation of the composed algorithm.
Suppose that the projection of $e$ on the ${\mathcal A}_{LME}$
algorithm is not maximal.
This means that $e$ has a suffix in which no ${\mathcal A}_{LME}$ action
is executed and only weaker actions are executed. By the definition
of the composition, the weaker actions are executed only in conjunction
with the actions of ${\mathcal A}_{LME}$, which contradicts our assumption.
\end{proof}
\begin{lemma}
\label{cros}
The composed algorithm is a self-stabilizing local mutual exclusion
algorithm according to the specification
${\mathcal S}{\mathcal P}_{LME}$.
\end{lemma}
\begin{proof}
Let $e$ be a maximal computation of the composed algorithm. Consider the
maximal projection, $e_{st}$ of $e$ on the algorithm ${\mathcal A}_{LME}$
(the existence of the maximal projection is proven in
Lemma~\ref{maximal}). By assumption, a legitimate configuration for this
algorithm is reached during $e_{st}$,
and the suffix of $e_{st}$ (let us call it $e_{sts}$) starting from this
legitimate configuration satisfies the local mutual exclusion specification.
Let $e_s$ be the suffix
of $e$ corresponding to the projection $e_{sts}$ on ${\mathcal A}_{LME}$.
The computation $e_s$ then also starts in a legitimate configuration
for this algorithm, and satisfies the local mutual exclusion specification.
\end{proof}
\begin{lemma}
\label{cfair}
The fairness index of the composed algorithm is $(n-1)$.
\end{lemma}
\begin{proof}
Let $e$ be a computation of the composed algorithm.
Assume that in $e$, the fairness index is not $(n-1)$.
That implies that the fairness index of the
projection of $e$ on algorithm ${\mathcal A}_{LME}$ is not $n-1$,
which contradicts Lemma~\ref{lem:fairness} (where we established that the
fairness index of ${\mathcal A}_{LME}$ is $(n-1)$).
\end{proof}
\begin{lemma}
\label{max}
The projection of any maximal computation of the composed algorithm on the
weaker algorithm is a maximal computation.
\end{lemma}
\begin{proof}
Let $e$ be a maximal computation of the composed algorithm and
$e_w$ be the projection of $e$ on the weaker algorithm. Assume that
$e_w$ is not a maximal computation. Then there exists a suffix of $e$
in which no action of the weaker algorithm is executed, and only the
actions of Algorithm ${\mathcal A}_{LME}$ are executed. Let $c$
be the first configuration occurring in this suffix. So, in $c$ and any other
configuration reachable from $c$ in $e$, one of the two following conditions
is true: (i) No weaker guard of any process is enabled.
(ii) There exists a process $p$ enabled to execute a weaker action, but $p$ is
never chosen by the daemon to execute.
In Case (i), the prefix of $e$ ending at $c$ has a maximal projection on the
weaker algorithm.
In Case (ii), using the $(n-1)$-fairness property of the composed algorithm
(see Lemma \ref{cfair}), $p$ will eventually execute this weaker action in
conjunction with ${\mathcal A}_{LME}$ critical section execution.
Thus, the projection on the weaker algorithm is maximal.
\end{proof}
\begin{lemma}
\label{together}
The projection of any maximal computation of the composed algorithm on
the weaker algorithm has a maximal suffix where the neighboring
processes do not execute their actions simultaneously.
\end{lemma}
\begin{proof}
Let $e$ be a maximal computation of the composed algorithm.
From Lemma~\ref{cros}, $e$ has a maximal suffix, $f$ satisfying the local
mutual exclusion specification. Let $e_W$ be
the projection of $e$ on the weaker protocol. From Lemma~\ref{max},
the computation $e_W$ is maximal. Let $f_W$ be the projection of $f$
on the weaker protocol. Now, we need to show that in
the computation $f_W$, the neighboring processes do not execute their
actions simultaneously. Suppose that there exists a step in $f_W$
in which two neighboring processes execute their actions
simultaneously. This means that there is a step in $f$ in which
the critical section is executed simultaneously by two
neighboring processes, which contradicts the assumption that $f$ satisfies
the local mutual exclusion property.
\end{proof}
\subsection{Central Daemon to Distributed Daemon}
In this section, we show that we can transform a self-stabilizing
algorithm working under a central daemon into a self-stabilizing algorithm
under a distributed daemon.
Assume that, in the composition described in Section~\ref{transformation},
the weaker algorithm, ${\mathcal W}_{C}$, is a self-stabilizing
algorithm for the
specification ${\mathcal S}{\mathcal P}_{\mathcal C}$ which works
under a central daemon,
and ${\mathcal S}$ represents the composed algorithm.
\begin{lemma}
\label{eddtoecd}
For any algorithm ${\mathcal P}$, if in a computation the neighboring
processes never execute their actions simultaneously, then there exists a
corresponding computation of ${\mathcal P}$ under a central daemon.
\end{lemma}
\begin{proof}
Let $e$ be a computation of ${\mathcal P}$ under a distributed
daemon such that the neighboring processes do not execute their
actions simultaneously.
The computation $e_c$ (under the central daemon) corresponding to $e$
is obtained by constructing a computation fragment of
$e_c$ for every transition of $e$.
Let $c$ and $c^{\prime}$ be two successive configurations in $e$. Assume that
in $c$, processes $p_1,\ldots,p_k$ execute their actions to change the
configuration to $c^{\prime}$. These processes are not neighbors
of each other as per our assumption.
Consider the following fragment
$f=(c=c_0,c_1,c_2, \ldots c_k=c^\prime)$ where $c_i$ is
obtained from $c_{i-1}$ after process $p_{i}$ executes its action.
Since the processes $p_i$ $(i \in \{1,\ldots,k\})$ are not neighbors of
one another, the execution of $p_{i}$'s action has no influence on the
execution of other active processes, and hence, the fragment
$f$ satisfies the central daemon specification. Thus, we can consider
the fragment $f$ as the computation under the central daemon corresponding to
the transition $(c,c^\prime$) in $e$ under the distributed daemon.
We can repeat the above construction process to obtain the computation $e_c$
corresponding to $e$ (working under the distributed daemon).
\end{proof}
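The construction in this proof can be checked on a small example. The
Python sketch below (hypothetical names; the update rule is only a
placeholder reading each process's neighborhood) verifies that when the
processes chosen in a distributed step are pairwise non-neighbors,
executing them one at a time, in any order, reaches the same configuration.

```python
import itertools

graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # a 4-cycle

def local_rule(p, state):
    # any rule that reads only p's neighborhood; here, max + 1
    return max(state[q] for q in graph[p]) + 1

def apply_step(state, chosen):
    new = dict(state)
    for p in chosen:
        new[p] = local_rule(p, state)
    return new

state = {0: 5, 1: 0, 2: 7, 3: 3}
chosen = [1, 3]                      # pairwise non-neighbors in the cycle
together = apply_step(state, chosen) # one distributed step

for order in itertools.permutations(chosen):
    s = dict(state)
    for p in order:
        s = apply_step(s, [p])       # central-daemon steps, one at a time
    assert s == together             # same final configuration
```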
\begin{theorem}
\label{ss:proof}
Algorithm ${\mathcal S}$ is self-stabilizing
for Specification ${\mathcal S}{\mathcal P}_{\mathcal C}$ under a
distributed daemon.
\end{theorem}
\begin{proof}
From Lemma~\ref{cros}, the composed algorithm is a self-stabilizing
local mutual exclusion algorithm under a distributed daemon. So, any
computation of the composed algorithm has an infinite suffix which satisfies
the local mutual exclusion specification. Let $e$ be one such computation and
$e_{W}$ be the projection of $e$ on the weaker algorithm. From
Lemmas~\ref{max} and \ref{together}, the projection
$e_{W}$ is maximal and also satisfies the property that the
neighboring processes do not execute their actions simultaneously.
From Lemma~\ref{eddtoecd}, there is a computation $e^{c}_{W}$,
such that $e^{c}_{W}$ contains all the configurations which appeared in
$e_{W}$ and satisfies the central daemon specification.
Hence there exists a function which maps any computation of the composed
algorithm to a computation of the weaker algorithm under a central daemon.
Thus, since the weaker algorithm is self-stabilizing for
Specification ${\mathcal S}{\mathcal P}_{\mathcal C}$
under the central daemon, the composed algorithm
is self-stabilizing for Specification ${\mathcal S}{\mathcal P}_{\mathcal C}$.
\end{proof}
\subsection{Fair Daemon to Distributed Daemon}
We propose another application of the local mutual exclusion in this
section --- the transformation of a self-stabilizing algorithm under a
$k$-fair daemon into a self-stabilizing algorithm under a
distributed daemon.
Assume that in the composition described in Section~\ref{transformation},
the weaker algorithm, ${\mathcal W}_{kB}$ is a self-stabilizing algorithm
for the specification ${\mathcal S}{\mathcal P}_{\mathcal F}$ under a $k$-fair
daemon ($\forall k \geq (n-1)$), and
${\mathcal S}$ is the composed algorithm.
\begin{theorem}
Algorithm ${\mathcal S}$ is
self-stabilizing for Specification ${\mathcal S}{\mathcal P}_{\mathcal F}$
under any distributed daemon.
\end{theorem}
\begin{proof}
Let $e$ be a computation of Algorithm ${\mathcal S}$. From
Lemma \ref{cfair}, we can claim that $e$ is an
$(n-1)$-fair computation. Let $e_W$ be the projection of $e$ on the weaker
algorithm. Now, we need to prove that $e_W$ is a computation under a
$k$-fair ($\forall k \geq (n-1)$) daemon. Assume that
$e_W$ is not a computation under the $k$-fair daemon. This implies that
some process $p$ executes its actions more than $k$ times before another
process $q$ executes. This also means that $p$ executes its critical
section more than $k$ times before $q$ executes in $e$. But that is not
possible, because the fairness index of $e$ is $(n-1)$ and $k \geq (n-1)$.
\end{proof}
\section{Conclusions} \label{sec:concl}
We presented a transformation technique to transform
self-stabilizing algorithms under weak daemons into algorithms which
maintain the self-stabilization property and also work under the
stronger daemon,
e.g., any arbitrary distributed daemon (including the unfair daemon).
The key tool in designing the above is a self-stabilizing local mutual
exclusion algorithm, which by itself is a major contribution of this work.
One of the two local mutual exclusion algorithms presented in this paper is
a bounded memory self-stabilizing solution which is proven under the
unfair daemon.
Another nice feature of our local mutual exclusion
algorithms is that they achieve a bounded service time.
Our protocols can be easily transformed to work under the read/write
atomicity model (see \cite{BDGM00b} for this work), and hence
can be extended to the message-passing environment.
Another possible extension for our bounded solution is a
generalization of the unison problem defined in \cite{CFG92}.
In Algorithm BLME, the difference between the values of the local
variables of any two neighboring processes is bounded. Therefore, each
process can start a phase with the number equal to the value of its
local variable $L$. The main features of this extension are that the
phase difference between two processes is bounded and no two
neighboring processes are in the same phase.
\section{Acknowledgments}
We would like to thank the referees for their valuable comments, which
greatly improved the presentation of the paper. Special thanks to the
referee who helped us
simplify the composition scheme in Section~\ref{sec:trans}.
\bibliographystyle{alpha}
\bibliography{paper}
\end{document}