
(*<*)
theory Paper
imports CpsG ExtGG "~~/src/HOL/Library/LaTeXsugar"
begin
ML {*
  Printer.show_question_marks_default := false;
  *}
(*>*)

section {* Introduction *}

text {*
  Many real-time systems need to support processes with priorities and
  locking of resources. Locking of resources ensures mutual exclusion
  when accessing shared data or devices. Priorities allow scheduling
  of processes that need to finish their work within hard deadlines.
  Unfortunately, both features can interact in subtle ways, leading to
  a problem called \emph{Priority Inversion}. Suppose three processes
  with priorities $H$(igh), $M$(edium) and $L$(ow). We would expect
  that the process $H$ blocks any other process with lower priority
  and cannot itself be blocked by a process with lower priority. Alas,
  in a naive implementation of resource locking and priorities this
  property can be violated. Even worse, $H$ can be delayed
  indefinitely by processes with lower priorities. To see this, let $L$ be
  in possession of a lock for a resource that $H$ also needs. $H$
  must therefore wait for $L$ to release this lock. The problem is
  that $L$ might in turn be blocked by any process with priority $M$,
  and so $H$ sits there potentially waiting indefinitely. Since $H$ is
  blocked by processes with lower priorities, the problem is called
  Priority Inversion. It was first described in
  \cite{Lampson:Redell:cacm:1980} in the context of the Mesa
  programming language designed for concurrent programming.

  If the problem of Priority Inversion is ignored, real-time systems
  can become unpredictable and the resulting bugs can be hard to diagnose.
  The classic example where this happened is the software that
  controlled the Mars Pathfinder mission in 1997
  \cite{Reeves-Glenn-1998}.  Once the spacecraft landed, the software
  shut down at irregular intervals leading to loss of project time, as
  normal operation of the craft could only resume the next day (the
  mission and data already collected were fortunately not lost, because
  of a clever system design).  The reason for the shutdowns was that
  the scheduling software fell victim to Priority Inversion: a low
  priority task locking a resource prevented a high priority process
  from running in time, leading to a system reset. Once the problem was found,
  it was rectified by enabling the Priority Inheritance Protocol in
  the scheduling software.

  The idea behind the \emph{Priority Inheritance Protocol} (PIP) is to
  let the process $L$ temporarily inherit the high priority from $H$ 
  until $L$ releases the locked resource. This solves the problem of
  $H$ having to wait indefinitely, because $L$ cannot, for example, be blocked by 
  processes having priority $M$. This solution to the Priority
  Inversion problem has been known since \cite{Lampson:Redell:cacm:1980},
  but Sha et al.\ give the first thorough analysis and present a correctness
  proof for an algorithm \cite{journals/tc/ShaRL90}.

  However, there are further subtleties: just lowering the priority 
  of the process $L$ back to its original low priority, as proposed in ???, is 
  incorrect.\bigskip
  
  \noindent
  Priority inversion refers to the phenomenon where tasks with higher 
  priority are blocked by ones with lower priority. If priority inversion 
  is not controlled, there is no guarantee that urgent tasks will be 
  processed in time. As reported in \cite{Reeves-Glenn-1998}, 
  priority inversion caused software system resets and data loss in 
  JPL's Mars Pathfinder project. Therefore, avoiding, detecting and controlling 
  priority inversion is a key issue for attaining predictability in priority-based 
  real-time systems. 

  The priority inversion phenomenon was first published in \cite{Lampson:Redell:cacm:1980}. 
  The two protocols widely used to eliminate priority inversion, namely 
  PI (Priority Inheritance) and PCE (Priority Ceiling Emulation), were proposed 
  in \cite{journals/tc/ShaRL90}. PCE is less convenient to use because it requires 
  static analysis of programs. Therefore, PI is more commonly used in 
  practice \cite{locke-july02}. However, as pointed out in the literature, 
  the analysis of the Priority Inheritance protocol is quite subtle \cite{yodaiken-july02}. 
  A formal analysis will certainly be helpful for understanding and correctly 
  implementing PI. All existing formal analyses of PI
  \cite{conf/fase/JahierHR09,WellingsBSB07,Faria08} are based on model checking 
  technology. Because of the state explosion problem, model checking 
  is much like exhaustive testing of finite models of limited size. 
  The results obtained cannot be safely generalized to models of arbitrarily 
  large size. Worse still, since model checking is fully automatic, it gives little 
  insight into why the formal model is correct. It is therefore 
  desirable to analyze PI using theorem proving, which gives 
  more general results as well as deeper insight. This is the purpose 
  of this paper, which gives a formal analysis of PI in the interactive 
  theorem prover Isabelle using Higher Order Logic (HOL). The formalization 
  focuses on two issues:

  \begin{enumerate}
  \item The correctness of the protocol model itself. A series of desirable properties is 
    derived, until we are fully convinced that the formal model of PI does 
    eliminate priority inversion. In the process, a better understanding of PI is obtained. 
    For example, we found through the formalization that the choice of which 
    thread takes over when a 
    resource is released is irrelevant for the very basic property of PI to hold, 
    a point never mentioned in the literature. 
  \item The correctness of the implementation. A series of properties is derived whose meaning 
    can be used as guidelines on how PI can be implemented efficiently and correctly. 
  \end{enumerate} 

  The rest of the paper is organized as follows: Section \ref{overview} gives an overview 
  of PI. Section \ref{model} introduces the formal model of PI. Section \ref{general} 
  discusses a series of basic properties of PI. Section \ref{extension} shows formally 
  how priority inversion is controlled by PI. Section \ref{implement} gives properties 
  which can be used as guidelines for the implementation. Section \ref{related} discusses 
  related work. Section \ref{conclusion} concludes the paper.


  Contributions

  Despite the wide use of the Priority Inheritance Protocol in real-time operating
  systems, its correctness has never been formally proved and mechanically checked. 
  All existing verifications are based on model checking technology. Fully automatic
  verification gives little help in understanding why the protocol is correct, 
  and the results so obtained only apply to models of limited size. 
  This paper presents a formal verification based on theorem proving. 
  A machine-checked formal proof does help to gain a deeper understanding. We found 
  a fact not mentioned in the literature: the choice of which 
  thread takes over when a critical resource is released does not affect the correctness
  of the protocol. The paper also shows how formal proof can help to construct a 
  correct and efficient implementation.\bigskip 

*}

section {* An overview of priority inversion and priority inheritance \label{overview} *}

text {*

  Priority inversion refers to the phenomenon where a thread with high priority is blocked 
  by a thread with low priority. Priority inversion happens when the high priority thread requests 
  some critical resource already taken by the low priority thread. Since the high 
  priority thread has to wait for the low priority thread to complete, it is said to be 
  blocked by the low priority thread. Priority inversion might prevent the high priority 
  thread from fulfilling its task in time if the duration of priority inversion is indefinite 
  and unpredictable. Indefinite priority inversion happens when an indefinite number 
  of threads with medium priorities are activated during the period when the high 
  priority thread is blocked by the low priority thread. Although these medium 
  priority threads cannot preempt the high priority thread directly, they are able 
  to preempt the low priority thread and cause it to stay in its critical section for 
  an indefinitely long duration. In this way, the high priority thread may be blocked indefinitely. 
  
  Priority inheritance is one protocol proposed to avoid indefinite priority inversion. 
  The basic idea is to let the high priority thread donate its priority to the low priority 
  thread holding the critical resource, so that the latter will not be preempted by medium priority 
  threads. The thread with the highest priority will not be blocked unless it is requesting 
  some critical resource already taken by another thread. Viewed from a different angle, 
  any thread which is able to block the highest priority thread must already hold some 
  critical resource. Furthermore, it must already have held some critical resource at the 
  moment the highest priority thread is created; otherwise, it would never get the chance 
  to run and take hold of one. Since the number of such resource-holding lower priority 
  threads is finite, if every one of them finishes its own critical section within a bounded 
  duration, the duration for which the highest priority thread is blocked is bounded as well. 
  The key to guaranteeing that the lower priority threads finish within a bounded duration 
  is to donate them the highest priority. In such cases, the lower priority threads are said 
  to have inherited the highest priority. This explains the name of the protocol, 
  {\em Priority Inheritance}, and how Priority Inheritance prevents indefinite delay.
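
  The following is a minimal, informal sketch of this idea in Python (it is not
  part of the formalization; the names @{text "Thread"}, @{text "Resource"},
  @{text "acquire"} and @{text "release"} are hypothetical and only serve to
  illustrate priority donation and its withdrawal upon release):

\begin{verbatim}
# Minimal sketch of priority donation; "priority" is an integer,
# larger meaning more urgent.
class Thread:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority   # statically assigned priority
        self.priority = priority        # current (possibly inherited) priority

class Resource:
    def __init__(self):
        self.holder = None              # at most one holding thread
        self.waiters = []               # threads blocked on this resource

def acquire(thread, resource):
    if resource.holder is None:
        resource.holder = thread
    else:
        resource.waiters.append(thread)
        # Priority Inheritance: the holder inherits the waiter's priority
        # if it is higher, so medium-priority threads cannot preempt it.
        holder = resource.holder
        holder.priority = max(holder.priority, thread.priority)

def release(thread, resource):
    # Fall back to the base priority -- a deliberate simplification; with
    # several held resources the thread would have to fall back to the
    # highest priority among the threads still waiting for them.
    thread.priority = thread.base_priority
    if resource.waiters:
        resource.holder = resource.waiters.pop(0)  # choice is irrelevant
    else:
        resource.holder = None
\end{verbatim}

  \noindent
  For instance, if a low priority thread holds a resource and a high priority
  thread calls @{text "acquire"} on it, the holder's current priority is raised
  to the high priority, so medium priority threads can no longer preempt it.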

  The objectives of this paper are:
  \begin{enumerate}
  \item Build the above-mentioned idea into a formal model and prove a series of properties 
    until we are convinced that the formal model does fulfill the original idea. 
  \item Show how formally derived properties can be used as guidelines for correct 
    and efficient implementation.
  \end{enumerate}
  The proof is fully formal in the sense that every detail is reduced to the 
  very first principles of Higher Order Logic. The nature of interactive theorem 
  proving is for the human user to persuade a computer program to accept their arguments. 
  A clear and simple understanding of the problem at hand is both a prerequisite and a 
  byproduct of such an effort, because everything finally has to be reduced to 
  first principles in order to be checked mechanically. The intuitive explanation of 
  Priority Inheritance above is just such a byproduct. 
  *}

section {* Formal model of Priority Inheritance \label{model} *}
text {*
  \input{../../generated/PrioGDef}
*}

section {* General properties of Priority Inheritance \label{general} *}

text {*
  The following are several very basic properties (a small informal illustration follows the list):
  \begin{enumerate}
  \item All running threads must be ready (@{text "runing_ready"}):
          @{thm[display] "runing_ready"}  
  \item All ready threads must be living (@{text "readys_threads"}):
          @{thm[display] "readys_threads"} 
  \item There are finitely many living threads at any moment (@{text "finite_threads"}):
          @{thm[display] "finite_threads"} 
  \item No waiting queue contains duplicated elements (@{text "wq_distinct"}): 
          @{thm[display] "wq_distinct"} 
  \item All threads in waiting queues are living threads (@{text "wq_threads"}): 
          @{thm[display] "wq_threads"} 
  \item The only events that can put a thread into a waiting queue are @{term "P"}-events
         (@{text "block_pre"}): 
          @{thm[display] "block_pre"}   
  \item A thread may never wait for two different critical resources
         (@{text "waiting_unique"}): 
          @{thm[display] waiting_unique[of _ _ "cs\<^isub>1" "cs\<^isub>2"]}
  \item Every resource can only be held by one thread
         (@{text "held_unique"}): 
          @{thm[display] held_unique[of _ "th\<^isub>1" _ "th\<^isub>2"]}
  \item Every living thread has a unique precedence
         (@{text "preced_unique"}): 
          @{thm[display] preced_unique[of "th\<^isub>1" _ "th\<^isub>2"]}
  \end{enumerate}
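
  \noindent
  To give a concrete, informal picture of the state these properties speak about,
  the following Python sketch represents the waiting queues of two critical
  resources and checks three of the invariants above for one snapshot. The concrete
  names are hypothetical, and the convention that the head of a waiting queue is
  the thread holding the resource is an assumption of this sketch.

\begin{verbatim}
# One waiting queue per critical resource; by the convention assumed
# here, the head of a queue holds the resource, the rest are waiting.
wq = {
    "cs1": ["th1", "th3"],   # th1 holds cs1, th3 waits for it
    "cs2": ["th2"],          # th2 holds cs2, nobody waits
}
living_threads = {"th1", "th2", "th3"}

# wq_distinct: no waiting queue contains duplicated elements
assert all(len(q) == len(set(q)) for q in wq.values())

# wq_threads: all threads in waiting queues are living threads
assert all(th in living_threads for q in wq.values() for th in q)

# waiting_unique: every thread waits for at most one resource
waiting = [th for q in wq.values() for th in q[1:]]
assert len(waiting) == len(set(waiting))
\end{verbatim}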
*}

text {* \noindent
  The following lemmas show how the RAG is changed by the execution of events
  (an informal sketch of the @{term "P"}- and @{term "V"}-cases follows the list):
  \begin{enumerate}
  \item Execution of @{term "Set"} does not change RAG (@{text "depend_set_unchanged"}):
    @{thm[display] depend_set_unchanged}
  \item Execution of @{term "Create"} does not change RAG (@{text "depend_create_unchanged"}):
    @{thm[display] depend_create_unchanged}
  \item Execution of @{term "Exit"} does not change RAG (@{text "depend_exit_unchanged"}):
    @{thm[display] depend_exit_unchanged}
  \item The effect on the RAG of executing @{term "P"} (@{text "step_depend_p"}):
    @{thm[display] step_depend_p}
  \item The effect on the RAG of executing @{term "V"} (@{text "step_depend_v"}):
    @{thm[display] step_depend_v}
  \end{enumerate}
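
  \noindent
  As an informal illustration of the last two items, the following Python sketch
  updates a RAG, represented as a set of edges, when a thread requests or releases
  a resource. The names and the edge-direction convention are assumptions of this
  sketch; the precise shape of the RAG after @{term "P"} and @{term "V"} is given
  by the two lemmas above.

\begin{verbatim}
# RAG as a set of directed edges between threads and resources:
#   (th, cs) -- thread th waits for resource cs  ("waiting edge")
#   (cs, th) -- resource cs is held by thread th ("holding edge")
rag     = set()
holder  = {}     # resource -> holding thread (or None)
waiting = {}     # resource -> list of waiting threads

def step_P(th, cs):
    """Thread th requests resource cs."""
    if holder.get(cs) is None:        # cs is free: th takes it at once
        holder[cs] = th
        rag.add((cs, th))
    else:                             # cs is taken: th has to wait
        waiting.setdefault(cs, []).append(th)
        rag.add((th, cs))

def step_V(th, cs):
    """Thread th releases resource cs."""
    rag.discard((cs, th))
    holder[cs] = None
    if waiting.get(cs):               # hand cs over to a waiting thread
        nxt = waiting[cs].pop(0)      # which one is chosen is irrelevant
        rag.discard((nxt, cs))
        rag.add((cs, nxt))
        holder[cs] = nxt
\end{verbatim}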
  *}

text {* \noindent
  These properties are used to derive the following important results about the RAG:
  \begin{enumerate}
  \item The RAG is loop-free (@{text "acyclic_depend"}):
  @{thm [display] acyclic_depend}
  \item The RAG is finite (@{text "finite_depend"}):
  @{thm [display] finite_depend}
  \item Reverse paths in the RAG are well-founded (@{text "wf_dep_converse"}):
  @{thm [display] wf_dep_converse}
  \item The dependence relation represented by the RAG has a tree structure (@{text "unique_depend"}):
  @{thm [display] unique_depend[of _ _ "n\<^isub>1" "n\<^isub>2"]}
  \item All threads in the RAG are living threads 
    (@{text "dm_depend_threads"} and @{text "range_in"}):
    @{thm [display] dm_depend_threads range_in}
  \end{enumerate}
  *}

text {* \noindent
  The following lemmas show how every node in the RAG can be chased to a ready thread
  (an informal sketch of the chasing follows the list):
  \begin{enumerate}
  \item Every node in RAG can be chased to a ready thread (@{text "chain_building"}):
    @{thm [display] chain_building[rule_format]}
  \item The ready thread chased to is unique (@{text "dchain_unique"}):
    @{thm [display] dchain_unique[of _ _ "th\<^isub>1" "th\<^isub>2"]}
  \end{enumerate}
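
  \noindent
  Informally, the chasing works as in the following Python sketch: starting from a
  blocked thread, alternately follow the waiting edge and the holding edge until a
  thread without a waiting edge, i.e.\ a ready thread, is reached. The two maps are
  hypothetical names encoding the unique waiting and holding edges of the RAG
  (justified by @{text "waiting_unique"} and @{text "held_unique"} above); acyclicity
  of the RAG guarantees termination.

\begin{verbatim}
# waiting_for : thread -> the unique resource it waits for (or None)
# held_by     : resource -> the unique thread holding it

def chase(th, waiting_for, held_by):
    """Follow the RAG from thread th to the ready thread it depends on."""
    while waiting_for.get(th) is not None:
        cs = waiting_for[th]     # the resource th is blocked on
        th = held_by[cs]         # the thread currently holding that resource
    return th                    # waits for nothing, i.e. it is ready

# Example: th3 waits for cs1, held by th1, which waits for cs2, held by th2.
waiting_for = {"th3": "cs1", "th1": "cs2", "th2": None}
held_by     = {"cs1": "th1", "cs2": "th2"}
assert chase("th3", waiting_for, held_by) == "th2"
\end{verbatim}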
  *}

text {* \noindent
  Properties about @{term "next_th"}:
  \begin{enumerate}
  \item The thread taking over is different from the thread which is releasing
  (@{text "next_th_neq"}):
  @{thm [display] next_th_neq}
  \item The thread taking over is unique
  (@{text "next_th_unique"}):
  @{thm [display] next_th_unique[of _ _ _ "th\<^isub>1" "th\<^isub>2"]}  
  \end{enumerate}
  *}

text {* \noindent
  Some deeper results about the system (a sketch of the scheduling rule they suggest follows the list):
  \begin{enumerate}
  \item There can only be one running thread (@{text "runing_unique"}):
  @{thm [display] runing_unique[of _ "th\<^isub>1" "th\<^isub>2"]}
  \item The maximum of the @{term "cp"}-values equals the maximum of the @{term "preced"}-values (@{text "max_cp_eq"}):
  @{thm [display] max_cp_eq}
  \item There must be one ready thread having the max @{term "cp"}-value 
  (@{text "max_cp_readys_threads"}):
  @{thm [display] max_cp_readys_threads}
  \end{enumerate}
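
  \noindent
  These properties suggest the scheduling rule sketched below in Python: among the
  ready threads, run one with the maximal current precedence. The function name is
  hypothetical, and precedences are treated as plain numbers for illustration
  (in the formal model they are values of the form @{term "Prc prio tm"}).

\begin{verbatim}
# Scheduling rule suggested by the properties above: run a ready
# thread whose current precedence (cp) is maximal.
def pick_running(readys, cp):
    """readys: set of ready threads; cp: thread -> current precedence."""
    assert readys                 # there is always a ready thread to pick
    return max(readys, key=lambda th: cp[th])

# Example with numeric precedences (larger = more urgent):
cp = {"th1": 3, "th2": 7, "th3": 5}
assert pick_running({"th1", "th2", "th3"}, cp) == "th2"
\end{verbatim}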
  *}

text {* \noindent
  The relationship between the counts of @{text "P"} and @{text "V"} operations and the number of 
  critical resources held by a thread is given as follows:
  \begin{enumerate}
  \item The @{term "V"}-operation decreases the number of critical resources 
    a thread holds (@{text "cntCS_v_dec"}):
     @{thm [display]  cntCS_v_dec}
  \item The number of @{text "V"} never exceeds the number of @{text "P"} 
    (@{text "cnp_cnv_cncs"}):
    @{thm [display]  cnp_cnv_cncs}
  \item The number of @{text "V"} equals the number of @{text "P"} when 
    the relevant thread is not living
    (@{text "cnp_cnv_eq"}):
    @{thm [display]  cnp_cnv_eq}
  \item When a thread is not living, it does not hold any critical resource 
    (@{text "not_thread_holdents"}):
    @{thm [display] not_thread_holdents}
  \item When the number of @{text "P"} equals the number of @{text "V"}, the relevant 
    thread does not hold any critical resource, therefore no thread can depend on it
    (@{text "count_eq_dependents"}):
    @{thm [display] count_eq_dependents}
  \end{enumerate}
  *}

section {* Key properties \label{extension} *}

(*<*)
context extend_highest_gen
begin
(*>*)

text {*
  The essence of {\em Priority Inheritance} is to avoid indefinite priority inversion. For this 
  purpose, we need to investigate what happens after one thread takes the highest precedence. 
  A locale is used to describe such a situation, which assumes:
  \begin{enumerate}
  \item @{term "s"} is a valid state (@{text "vt_s"}):
    @{thm  vt_s}.
  \item @{term "th"} is a living thread in @{term "s"} (@{text "threads_s"}):
    @{thm threads_s}.
  \item @{term "th"} has the highest precedence in @{term "s"} (@{text "highest"}):
    @{thm highest}.
  \item The precedence of @{term "th"} is @{term "Prc prio tm"} (@{text "preced_th"}):
    @{thm preced_th}.
  \end{enumerate}
  *}

text {* \noindent
  Under these assumptions, some basic properties can be derived for @{term "th"}:
  \begin{enumerate}
  \item The current precedence of @{term "th"} equals its own precedence (@{text "eq_cp_s_th"}):
    @{thm [display] eq_cp_s_th}
  \item The current precedence of @{term "th"} is the highest precedence in 
    the system (@{text "highest_cp_preced"}):
    @{thm [display] highest_cp_preced}
  \item The precedence of @{term "th"} is the highest precedence 
    in the system (@{text "highest_preced_thread"}):
    @{thm [display] highest_preced_thread}
  \item The current precedence of @{term "th"} is the highest current precedence 
    in the system (@{text "highest'"}):
    @{thm [display] highest'}
  \end{enumerate}
  *}

text {* \noindent
  To analyze what happens after state @{term "s"}, a sub-locale is defined, which 
  assumes:
  \begin{enumerate}
  \item @{term "t"} is a valid extension of @{term "s"} (@{text "vt_t"}): @{thm vt_t}.
  \item Any thread created in @{term "t"} has a priority no higher than @{term "prio"}; therefore
    its precedence cannot be higher than that of @{term "th"}, and
    @{term "th"} remains the one with the highest precedence
    (@{text "create_low"}):
    @{thm [display] create_low}
  \item Any priority adjustment in 
    @{term "t"} does not affect @{term "th"} and 
    the priority being set is no higher than @{term "prio"}; therefore
    @{term "th"} remains the one with the highest precedence (@{text "set_diff_low"}):
    @{thm [display] set_diff_low}
  \item Since we are investigating what happens to @{term "th"}, it is assumed 
    that @{term "th"} does not exit during @{term "t"} (@{text "exit_diff"}):
    @{thm [display] exit_diff}
  \end{enumerate}
*}

text {* \noindent
  All these assumptions are put into a predicate @{term "extend_highest_gen"}. 
  It can be proved that @{term "extend_highest_gen"} holds 
  for any moment @{text "i"} of @{term "t"} (@{text "red_moment"}):
  @{thm [display] red_moment}
  
  From this, an induction principle can be derived for @{text "t"}, so that 
  properties already derived for @{term "t"} can be applied to any prefix 
  of @{text "t"} in the proof of new properties 
  about @{term "t"} (@{text "ind"}):
  \begin{center}
  @{thm[display] ind}
  \end{center}

  The following properties can be proved about @{term "th"} in @{term "t"}:
  \begin{enumerate}
  \item In @{term "t"}, thread @{term "th"} remains live and its 
    precedence is preserved as well
    (@{text "th_kept"}): 
    @{thm [display] th_kept}
  \item In @{term "t"}, thread @{term "th"}'s precedence is always the maximum among 
    all living threads
    (@{text "max_preced"}): 
    @{thm [display] max_preced}
  \item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum precedence
    among all living threads
    (@{text "th_cp_max_preced"}): 
    @{thm [display] th_cp_max_preced}
  \item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum current 
    precedence among all living threads
    (@{text "th_cp_max"}): 
    @{thm [display] th_cp_max}
  \item In @{term "t"}, thread @{term "th"}'s current precedence equals its precedence at moment 
    @{term "s"}
    (@{text "th_cp_preced"}): 
    @{thm [display] th_cp_preced}
  \end{enumerate}
  *}

text {* \noindent
  The main theorem of this part characterizes the running thread during @{term "t"} 
  (@{text "runing_inversion_2"}):
  @{thm [display] runing_inversion_2}
  According to this, if a thread is running, it is either @{term "th"} or was
  already live and held some resource 
  at moment @{text "s"} (expressed by: @{text "cntV s th' < cntP s th'"}).

  Since there are only finitely many threads that are live and hold some resource at any moment,
  if every such thread releases all its resources within a finite duration, then after a finite
  duration none of them can block @{term "th"} any more. So, no priority inversion can happen
  then.
  *}

(*<*)
end
(*>*)

section {* Properties to guide implementation \label{implement} *}

text {*
  The properties (especially @{text "runing_inversion_2"}) convinced us that the model defined 
  in Section \ref{model} does prevent indefinite priority inversion and therefore fulfills 
  the fundamental requirement of the Priority Inheritance protocol. Another purpose of this paper 
  is to show how this model can be used to guide a concrete implementation. As discussed in
  Section 5.6.5 of \cite{Vahalia:1996:UI}, the implementation of Priority Inheritance in Solaris 
  uses sophisticated linked data structures. Apart from discussing two scenarios to show how
  these data structures should be manipulated, many details of the implementation are missing. 
  In \cite{Faria08,conf/fase/JahierHR09,WellingsBSB07} the protocol is described formally 
  using different notations, but little information is given on how this protocol can be 
  implemented efficiently; in particular, there is no information on how these data structures 
  should be manipulated. 

  Because the scheduling of threads is based on current precedence, 
  the central issue in an implementation of Priority Inheritance is how to compute the precedence
  correctly and efficiently. As long as the precedence is correct, it is very easy to 
  modify the scheduling algorithm to select the correct thread to execute. 

  First, it can be proved that the computation of the current precedence @{term "cp"} of a thread
  only involves its children (@{text "cp_rec"}):
  @{thm [display] cp_rec} 
  where @{term "children s th"} represents the set of children of @{term "th"} in the current
  RAG: 
  \[
  @{thm (lhs) children_def} @{text "\<equiv>"} @{thm (rhs) children_def}
  \]
  where the definition of @{term "child"} is: 
  \[ @{thm (lhs) child_def} @{text "\<equiv>"}  @{thm (rhs) child_def}
  \]
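
  A direct reading of @{text "cp_rec"} suggests the recursive computation sketched
  below in Python. The sketch assumes that @{text "cp_rec"} states that the current
  precedence of a thread is the maximum of its own precedence and the @{term "cp"}-values
  of its children in the RAG; the concrete names and the numeric precedences are
  hypothetical.

\begin{verbatim}
# Recursive computation suggested by cp_rec, assuming
#   cp(th) = max({preced(th)} union {cp(c) | c child of th in the RAG}).
def cp(th, preced, children):
    """preced: thread -> its own precedence (an ordered value);
       children: thread -> set of child threads in the current RAG."""
    return max([preced[th]] + [cp(c, preced, children)
                               for c in children.get(th, set())])

# Example: th1 is blocked on a resource held by th0 (so th1 is a child
# of th0 in the RAG) and th0 inherits th1's higher precedence.
preced   = {"th0": 2, "th1": 9}
children = {"th0": {"th1"}}
assert cp("th0", preced, children) == 9
assert cp("th1", preced, children) == 9
\end{verbatim}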

  The aim of this section is to fill in the missing details of how the current precedence should
  be changed when events happen. Each event type is treated in one subsection,
  and the computation of @{term "cp"} uses lemma @{text "cp_rec"}.
  *}
 
subsection {* Event @{text "Set th prio"} *}

(*<*)
context step_set_cps
begin
(*>*)

text {*
  The context under which event @{text "Set th prio"} happens is formalized as follows:
  \begin{enumerate}
    \item The formation of @{term "s"} (@{text "s_def"}): @{thm s_def}.
    \item State @{term "s"} is a valid state (@{text "vt_s"}): @{thm vt_s}. This implies 
      event @{text "Set th prio"} is eligible to happen under state @{term "s'"} and
      state @{term "s'"} is a valid state.
  \end{enumerate}
  *}

text {* \noindent
  Under such a context, we investigated how the current precedence @{term "cp"} of 
  threads changes from state @{term "s'"} to @{term "s"} and obtained the following
  conclusions:
  \begin{enumerate}
  %% \item The RAG does not change (@{text "eq_dep"}): @{thm "eq_dep"}.
  \item All threads with no dependence relation with thread @{term "th"} have their
    @{term "cp"}-value unchanged (@{text "eq_cp"}):
    @{thm [display] eq_cp}
    This lemma implies that the @{term "cp"}-values of @{term "th"}
    and of those threads which have a dependence relation with @{term "th"} might need
    to be recomputed. The way to do this is to start from @{term "th"} 
    and follow the @{term "depend"}-chain to recompute the @{term "cp"}-value of every 
    encountered thread using lemma @{text "cp_rec"}. 
    Since the @{term "depend"}-relation is loop free, this procedure 
    always terminates. The following lemmas show that this procedure can actually stop earlier
    (a sketch of the resulting procedure follows this list).
  \item The following two lemmas show that if the re-computation for a thread
    gives an unchanged @{term "cp"}-value, the procedure described above can stop there. 
    \begin{enumerate}
      \item Lemma @{text "eq_up_self"} shows that if the re-computation of
        @{term "th"}'s @{term "cp"} gives the same result, the procedure can stop:
        @{thm [display] eq_up_self}
      \item Lemma @{text "eq_up"} shows that if the re-computation at an intermediate thread
        gives an unchanged result, the procedure can stop:
        @{thm [display] eq_up}
  \end{enumerate}
  \end{enumerate}
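
  \noindent
  The recomputation procedure described above can be sketched in Python as follows.
  The names are hypothetical: @{text "parent"} is assumed to give the next thread on
  the @{term "depend"}-chain (the thread holding the resource the current thread waits
  for), or @{text "None"} for a ready thread, and @{text "recompute_cp"} stands for a
  recomputation via @{text "cp_rec"}.

\begin{verbatim}
# After "Set th prio": follow the depend-chain upwards from th and
# recompute cp; stop early once a recomputation leaves the cached
# cp-value unchanged (lemmas eq_up_self and eq_up).
def propagate_cp(th, cp_value, recompute_cp, parent):
    """cp_value: thread -> cached cp before the event."""
    while th is not None:
        new_cp = recompute_cp(th)
        if new_cp == cp_value[th]:   # unchanged: threads above are unaffected
            break
        cp_value[th] = new_cp
        th = parent(th)              # one step up the depend-chain

# Tiny example: th's new precedence is 5; its parent th_p has its own
# precedence 2 and th as only child, so it inherits the new value too.
cp_value  = {"th": 3, "th_p": 3}
parent    = lambda t: "th_p" if t == "th" else None
recompute = lambda t: 5 if t == "th" else max(2, cp_value["th"])
propagate_cp("th", cp_value, recompute, parent)
assert cp_value == {"th": 5, "th_p": 5}
\end{verbatim}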
  *}

(*<*)
end
(*>*)

section {* Related work \label{related} *}

text {*
  \begin{enumerate}
  \item {\em Integrating Priority Inheritance Algorithms in the Real-Time Specification for Java}
    \cite{WellingsBSB07} models and verifies the combination of Priority Inheritance (PI) and 
    Priority Ceiling Emulation (PCE) protocols in the setting of Java virtual machine 
    using the extended Timed Automata (TA) formalism of the UPPAAL tool. Although a detailed 
    formal model of combined PI and PCE is given, the number of properties is quite 
    small and the focus is put on the harmonious working of PI and PCE. Most key features of PI 
    (as well as PCE) are not shown. Because of the limitation of the model checking technique
    used there, properties are shown only for a small number of scenarios. Therefore, 
    the verification does not show the correctness of the formal model itself in a 
    convincing way.  
  \item {\em Formal Development of Solutions for Real-Time Operating Systems with TLA+/TLC}
    \cite{Faria08}. A formal model of PI is given in TLA+. Only three properties are shown 
    for PI using model checking. The limitations of model checking are intrinsic to this work.
  \item {\em Synchronous modeling and validation of priority inheritance schedulers}
    \cite{conf/fase/JahierHR09}. This work gives a formal model
    of PI and PCE in AADL (Architecture Analysis \& Design Language) and checks 
    several properties using model checking. The number of properties shown there is 
    smaller than here and the scale is also limited by the model checking technique. 
  \item {\em The Priority Ceiling Protocol: Formalization and Analysis Using PVS}
    \cite{dutertre99b}. This work formalizes another protocol addressing Priority Inversion in the 
    interactive theorem proving system PVS.
\end{enumerate}


  There are several works on inversion avoidance:
  \begin{enumerate}
  \item {\em Solving the group priority inversion problem in a timed asynchronous system}
    \cite{Wang:2002:SGP}. The notion of Group Priority Inversion is introduced. The main 
    strategy is still inversion avoidance. The method works by reordering requests 
    in a client-server setting.
  \item {\em A Formalization of Priority Inversion} \cite{journals/rts/BabaogluMS93}. 
    This work formalizes the notion of Priority 
    Inversion and proposes methods to avoid it. 
  \end{enumerate}

  {\em Examples of inaccurate specification of the protocol ???}.

*}

section {* Conclusions \label{conclusion} *}

(*<*)
end
(*>*)