(* Journal/Paper.thy
   author: Christian Urban <urbanc@in.tum.de>
   Fri, 28 Apr 2017 13:20:44 +0100, changeset 163 2ec13cfbb81c *)

(*<*)
theory Paper
imports "../Implementation" 
        "../Correctness" 
        "~~/src/HOL/Library/LaTeXsugar"
begin

ML {*
 fun strip_quants ctxt trm =
   case trm of 
      Const("HOL.Trueprop", _) $ t => strip_quants ctxt t 
    | Const("Pure.imp", _) $ _ $ t => strip_quants ctxt t
    | Const("Pure.all", _) $ Abs(n, T, t) =>
         strip_quants ctxt (subst_bound (Free (n, T), t)) 
    | Const("HOL.All", _) $ Abs(n, T, t) =>
         strip_quants ctxt (subst_bound (Free (n, T), t)) 
    | Const("HOL.Ex", _) $ Abs(n, T, t) =>
         strip_quants ctxt (subst_bound (Free (n, T), t)) 
    | _ => trm
*}


setup {* Term_Style.setup @{binding "no_quants"} (Scan.succeed strip_quants) *}


declare [[show_question_marks = false]]

notation (latex output)
  Cons ("_::_" [78,77] 73) and
  If  ("(\<^raw:\textrm{>if\<^raw:}> (_)/ \<^raw:\textrm{>then\<^raw:}> (_)/ \<^raw:\textrm{>else\<^raw:}> (_))" 10) and
  vt ("valid'_state") and
  Prc ("'(_, _')") and
  holding_raw ("holds") and
  holding ("holds") and
  waiting_raw ("waits") and
  waiting ("waits") and
  dependants_raw ("dependants") and
  dependants ("dependants") and
  RAG_raw ("RAG") and
  RAG ("RAG") and
  Th ("T") and
  Cs ("C") and
  readys ("ready") and
  preced ("prec") and
  preceds ("precs") and
  cpreced ("cprec") and
  wq_fun ("wq") and
  cprec_fun ("cp'_fun") and
  holdents ("resources") and
  DUMMY  ("\<^raw:\mbox{$\_\!\_$}>") and
  cntP ("c\<^bsub>P\<^esub>") and
  cntV ("c\<^bsub>V\<^esub>")

 
(*>*)

section {* Introduction *}

text {*

  Many real-time systems need to support threads involving priorities
  and locking of resources. Locking of resources ensures mutual
  exclusion when accessing shared data or devices that cannot be
  preempted. Priorities allow scheduling of threads that need to
  finish their work within deadlines.  Unfortunately, both features
  can interact in subtle ways leading to a problem, called
  \emph{Priority Inversion}. Suppose there are three threads with priorities
  $H$(igh), $M$(edium) and $L$(ow). We would expect that the thread
  $H$ blocks any other thread with lower priority and the thread
  itself cannot be blocked indefinitely by threads with lower
  priority. Alas, in a naive implementation of resource locking and
  priorities this property can be violated. For this let $L$ be in the
  possession of a lock for a resource that $H$ also needs. $H$ must
  therefore wait for $L$ to exit the critical section and release this
  lock. The problem is that $L$ might in turn be blocked by any thread
  with priority $M$, and so $H$ sits there potentially waiting
  indefinitely. Since $H$ is blocked by threads with lower priorities,
  the problem is called Priority Inversion. It was first described in
  \cite{Lampson80} in the context of the Mesa programming language
  designed for concurrent programming.
 
  If the problem of Priority Inversion is ignored, real-time systems
  can become unpredictable and resulting bugs can be hard to diagnose.
  The classic example where this happened is the software that
  controlled the Mars Pathfinder mission in 1997 \cite{Reeves98}.  On
  Earth the software ran mostly without any problem, but once the
  spacecraft landed on Mars, it shut down at irregular, but frequent,
  intervals leading to loss of project time as normal operation of the
  craft could only resume the next day (the mission and data already
  collected were fortunately not lost, because of a clever system
  design).  The reason for the shutdowns was that the scheduling
  software fell victim to Priority Inversion: a low priority thread
  locking a resource prevented a high priority thread from running in
  time, leading to a system reset. Once the problem was found, it was
  rectified by enabling the \emph{Priority Inheritance Protocol} (PIP)
  \cite{Sha90}\footnote{Sha et al.~call it the \emph{Basic Priority
  Inheritance Protocol} \cite{Sha90} and others sometimes also call it
  \emph{Priority Boosting}, \emph{Priority Donation} or \emph{Priority
  Lending}.}  in the scheduling software.

  The idea behind PIP is to let the thread $L$ temporarily inherit the
  high priority from $H$ until $L$ leaves the critical section
  unlocking the resource. This solves the problem of $H$ having to
  wait indefinitely, because $L$ cannot be blocked by threads having
  priority $M$. While a few other solutions exist for the Priority
  Inversion problem, PIP is one that is widely deployed and
  implemented. This includes VxWorks (a proprietary real-time OS used
  in the Mars Pathfinder mission, in Boeing's 787 Dreamliner, Honda's
  ASIMO robot, etc.) and ThreadX (another proprietary real-time OS
  used in nearly all HP inkjet printers \cite{ThreadX}), but also the
  POSIX 1003.1c Standard realised for example in libraries for
  FreeBSD, Solaris and Linux.

  Two advantages of PIP are that it is deterministic and that
  increasing the priority of a thread can be performed dynamically by
  the scheduler.  This is in contrast to \emph{Priority Ceiling}
  \cite{Sha90}, another solution to the Priority Inversion problem,
  which requires static analysis of the program in order to prevent
  Priority Inversion, and also in contrast to the approach taken in
  the Windows NT scheduler, which avoids this problem by randomly
  boosting the priority of ready low-priority threads (see for
  instance~\cite{WINDOWSNT}).  However, there has also been strong
  criticism against PIP. For instance, PIP cannot prevent deadlocks
  when lock dependencies are circular, and also blocking times can be
  substantial (more than just the duration of a critical section).
  Most criticism against PIP, though, centres around unreliable
  implementations and PIP being too complicated and too inefficient.
  For example, Yodaiken writes in \cite{Yodaiken02}:

  \begin{quote}
  \it{}``Priority inheritance is neither efficient nor reliable. Implementations
  are either incomplete (and unreliable) or surprisingly complex and intrusive.''
  \end{quote}

  \noindent He suggests avoiding PIP altogether by designing the
  system so that no priority inversion may happen in the first
  place. However, such ideal designs may not always be achievable in
  practice.

  In our opinion, there is clearly a need for investigating correct
  algorithms for PIP. A few specifications for PIP exist (in informal
  English) and also a few high-level descriptions of implementations
  (e.g.~in the textbooks \cite[Section 12.3.1]{Liu00} and
  \cite[Section 5.6.5]{Vahalia96}), but they help little with actual
  implementations. That this is a problem in practice is shown by an
  email from Baker, who wrote on 13 July 2009 on the Linux Kernel
  mailing list:

  \begin{quote}
  \it{}``I observed in the kernel code (to my disgust), the Linux PIP
  implementation is a nightmare: extremely heavy weight, involving
  maintenance of a full wait-for graph, and requiring updates for a
  range of events, including priority changes and interruptions of
  wait operations.''
  \end{quote}

  \noindent The criticism by Yodaiken, Baker and others suggests
  another look at PIP from a more abstract level (but still concrete
  enough to inform an implementation), and makes PIP a good candidate
  for a formal verification. An additional reason is that the original
  specification of PIP~\cite{Sha90}, despite being informally
  ``proved'' correct, is actually \emph{flawed}.
  
  Yodaiken \cite{Yodaiken02} and also Moylan et
  al.~\cite{deinheritance} point to a subtlety that had been
  overlooked in the informal proof by Sha et al. They specify PIP in
  \cite{Sha90} so that after the thread (whose priority has been
  raised) completes its critical section and releases the lock, it
  ``{\it returns to its original priority level}''. This leads them to
  believe that an implementation of PIP is ``{\it rather
  straightforward}''~\cite{Sha90}.  Unfortunately, as Yodaiken and
  Moylan et al.~point out, this behaviour is too simplistic. Moylan et
  al.~write that there are ``{\it some hidden
  traps}''~\cite{deinheritance}.  Consider the case where the low
  priority thread $L$ locks \emph{two} resources, and two
  high-priority threads $H$ and $H'$ each wait for one of them.  If
  $L$ releases one resource so that $H$, say, can proceed, then we
  still have Priority Inversion with $H'$ (which waits for the other
  resource). The correct behaviour for $L$ is to switch to the highest
  remaining priority of the threads that it blocks.  A similar error
  is made in the textbook \cite[Section 2.3.1]{book} which specifies
  for a process that inherited a higher priority and exits a critical
  section ``{\it it resumes the priority it had at the point of entry
  into the critical section}''.  This error can also be found in the
  textbook \cite[Section 16.4.1]{LiYao03} where the authors write
  about this process: ``{\it its priority is immediately lowered to the level originally assigned}'';
  and also in the 
  more recent textbook \cite[Page 119]{Laplante11} where the authors
  state: ``{\it when [the task] exits the critical section that caused
  the block, it reverts to the priority it had when it entered that
  section}''. The textbook \cite[Page 286]{Liu00} contains a similar
  flawed specification and even goes on to develop pseudo-code based
  on this flawed specification. Accordingly, the operating system
  primitives for inheritance and restoration of priorities in
  \cite{Liu00} depend on maintaining a data structure called
  \emph{inheritance log}. This log is maintained for every thread and
  broadly specified as containing ``{\it [h]istorical information on
  how the thread inherited its current priority}'' \cite[Page
  527]{Liu00}. Unfortunately, the important information about actually
  computing the priority to be restored solely from this log is not
  explained in \cite{Liu00} but left as an ``{\it exercise}'' to the
  reader.  As we shall see, a correct version of PIP does not need to
  maintain this (potentially expensive) data structure at
  all. Surprisingly also the widely read and frequently updated
  textbook \cite{Silberschatz13} gives the wrong specification. For
  example on Page 254 the authors write: ``{\it Upon releasing the
  lock, the [low-priority] thread will revert to its original
  priority.}'' The same error is also repeated later in this popular textbook.

  
  While \cite{Laplante11,LiYao03,Liu00,book,Sha90,Silberschatz13} are the only
  formal publications we have found that specify the incorrect
  behaviour, it seems that many informal descriptions of PIP also overlook
  the possibility that another high-priority thread might wait for a
  low-priority process to finish.  A notable exception is the textbook
  \cite{buttazzo}, which gives the correct behaviour of resetting the
  priority of a thread to the highest remaining priority of the
  threads it blocks. This textbook also gives an informal proof for
  the correctness of PIP in the style of Sha et al. Unfortunately,
  this informal proof is too vague to be useful for formalising the
  correctness of PIP and the specification leaves out nearly all
  details in order to implement PIP efficiently.\medskip\smallskip
  %
  %The advantage of formalising the
  %correctness of a high-level specification of PIP in a theorem prover
  %is that such issues clearly show up and cannot be overlooked as in
  %informal reasoning (since we have to analyse all possible behaviours
  %of threads, i.e.~\emph{traces}, that could possibly happen).

  \noindent {\bf Contributions:} There have been earlier formal
  investigations into PIP \cite{Faria08,Jahier09,Wellings07}, but they
  employ model checking techniques. This paper presents a formalised
  and mechanically checked proof for the correctness of PIP. For this
  we needed to design a new correctness criterion for PIP. In contrast
  to model checking, our formalisation provides insight into why PIP
  is correct and allows us to prove stronger properties that, as we
  will show, can help with an efficient implementation of PIP. We
  illustrate this with an implementation of PIP in the educational
  PINTOS operating system \cite{PINTOS}.  For example, we found by
  ``playing'' with the formalisation that the choice of the next
  thread to take over a lock when a resource is released is irrelevant
  for PIP being correct---a fact that has not been mentioned in the
  literature and not been used in the reference implementation of PIP
  in PINTOS.  This fact, however, is important for an efficient
  implementation of PIP, because we can give the lock to the thread
  with the highest priority so that it terminates more quickly.  We
  are also able to generalise the scheduler of Sha et
  al.~\cite{Sha90} to the practically relevant case where critical
  sections can overlap; see Figure~\ref{overlap} \emph{a)} below for
  an example of this restriction. In the existing literature there is
  no proof and also no proving method that cover this generalised
  case.

  \begin{figure}
  \begin{center}
  \begin{tikzpicture}[scale=1]
  %%\draw[step=2mm] (0,0) grid (10,2);
  \draw [->,line width=0.6mm] (0,0) -- (10,0);
  \draw [->,line width=0.6mm] (0,1.5) -- (10,1.5);
  \draw [line width=0.6mm, pattern=horizontal lines] (0.8,0) rectangle (4,0.5);
  \draw [line width=0.6mm, pattern=north east lines] (3.0,0) rectangle (6,0.5);
  \draw [line width=0.6mm, pattern=vertical lines] (5.0,0) rectangle (9,0.5);

  \draw [line width=0.6mm, pattern=horizontal lines] (0.6,1.5) rectangle (4.0,2); 
  \draw [line width=0.6mm, pattern=north east lines] (1.0,1.5) rectangle (3.4,2); 
  \draw [line width=0.6mm, pattern=vertical lines] (5.0,1.5) rectangle (8.8,2); 
 
  \node at (0.8,-0.3) {@{term "P\<^sub>1"}};
  \node at (3.0,-0.3) {@{term "P\<^sub>2"}};
  \node at (4.0,-0.3) {@{term "V\<^sub>1"}}; 
  \node at (5.0,-0.3) {@{term "P\<^sub>3"}};
  \node at (6.0,-0.3) {@{term "V\<^sub>2"}};
  \node at (9.0,-0.3) {@{term "V\<^sub>3"}};
  
  \node at (0.6,1.2) {@{term "P\<^sub>1"}};
  \node at (1.0,1.2) {@{term "P\<^sub>2"}};
  \node at (3.4,1.2) {@{term "V\<^sub>2"}};
  \node at (4.0,1.2) {@{term "V\<^sub>1"}};
  \node at (5.0,1.2) {@{term "P\<^sub>3"}};
  \node at (8.8,1.2) {@{term "V\<^sub>3"}};
  \node at (10.3,0) {$t$};
  \node at (10.3,1.5) {$t$};

  \node at (-0.3,0.2) {$b)$};
  \node at (-0.3,1.7) {$a)$};
  \end{tikzpicture}\mbox{}\\[-10mm]\mbox{}
  \end{center}
  \caption{Assume a process is over time locking and unlocking, say, three resources.
  The locking requests are labelled @{term "P\<^sub>1"}, @{term "P\<^sub>2"}, and @{term "P\<^sub>3"} 
  respectively, and the corresponding unlocking operations are labelled
  @{term "V\<^sub>1"}, @{term "V\<^sub>2"}, and @{term "V\<^sub>3"}. 
  Then graph $a)$ shows \emph{properly nested} critical sections as required 
  by Sha et al.~\cite{Sha90} in their proof---the sections must either be contained within 
  each other
  (the section @{term "P\<^sub>2"}--@{term "V\<^sub>2"} is contained in @{term "P\<^sub>1"}--@{term "V\<^sub>1"}) or
  be independent (@{term "P\<^sub>3"}--@{term "V\<^sub>3"} is independent from the other 
  two). Graph $b)$ shows the general case where 
  the locking and unlocking of different critical sections can 
  overlap.\label{overlap}}
  \end{figure}
*}

section {* Formal Model of the Priority Inheritance Protocol\label{model} *}

text {*
  The Priority Inheritance Protocol, short PIP, is a scheduling
  algorithm for a single-processor system.\footnote{We shall come back
  later to the case of PIP on multi-processor systems.} 
  Following good experience in earlier work \cite{Wang09},  
  our model of PIP is based on Paulson's inductive approach for protocol
  verification \cite{Paulson98}. In this approach a \emph{state} of a system is
  given by a list of events that happened so far (with new events prepended to the list). 
  \emph{Events} of PIP fall
  into five categories defined as the datatype:

  \begin{isabelle}\ \ \ \ \ %%%
  \mbox{\begin{tabular}{r@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {\hspace{7mm}}l}
  \isacommand{datatype} event 
  & @{text "="} & @{term "Create thread priority\<iota>"}\\
  & @{text "|"} & @{term "Exit thread"} \\
  & @{text "|"} & @{term "Set thread priority\<iota>"} & {\rm reset of the priority for} @{text thread}\\
  & @{text "|"} & @{term "P thread cs"} & {\rm request of resource} @{text "cs"} {\rm by} @{text "thread"}\\
  & @{text "|"} & @{term "V thread cs"} & {\rm release of resource} @{text "cs"} {\rm by} @{text "thread"}
  \end{tabular}}
  \end{isabelle}

  \noindent
  whereby threads, priorities and (critical) resources are represented
  as natural numbers. The event @{term Set} models the situation that
  a thread obtains a new priority given by the programmer or
  user (for example via the {\tt nice} utility under UNIX).  For states
  we define the following type-synonym:

  \begin{isabelle}\ \ \ \ \ %%%
  \isacommand{type\_synonym} @{text "state = event list"}
  \end{isabelle}    
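  To make the event datatype and the list-based states concrete, here is a
  small Python sketch (not part of the formalisation; the tuple tags mirror
  the Isabelle constructors):

```python
# Illustrative sketch: events are tagged tuples mirroring the
# constructors Create, Exit, Set, P and V; a state is a list of events
# with the most recent event at the head.
def create(th, prio): return ("Create", th, prio)
def exit_(th):        return ("Exit", th)
def set_(th, prio):   return ("Set", th, prio)
def p(th, cs):        return ("P", th, cs)    # request of resource cs
def v(th, cs):        return ("V", th, cs)    # release of resource cs

def step(s, e):
    return [e] + s     # new events are prepended to the state

s = step(step(step([], create(1, 3)), set_(1, 5)), p(1, 0))
# s now reads, newest first: P 1 0, Set 1 5, Create 1 3
```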

  \noindent As in Paulson's work, we need to define functions that
  allow us to make some observations about states.  One function,
  called @{term threads}, calculates the set of ``live'' threads that
  we have seen so far in a state:

  \begin{isabelle}\ \ \ \ \ %%%
  \mbox{\begin{tabular}{lcl}
  @{thm (lhs) threads.simps(1)} & @{text "\<equiv>"} & 
    @{thm (rhs) threads.simps(1)}\\
  @{thm (lhs) threads.simps(2)} & @{text "\<equiv>"} & 
    @{thm (rhs) threads.simps(2)}\\
  @{thm (lhs) threads.simps(3)} & @{text "\<equiv>"} & 
    @{thm (rhs) threads.simps(3)}\\
  @{term "threads (DUMMY#s)"} & @{text "\<equiv>"} & @{term "threads s"}\\
  \end{tabular}}
  \end{isabelle}

  \noindent
  In this definition @{term "DUMMY # DUMMY"} stands for list-cons and @{term "[]"} for the empty list.
  Another function calculates the priority for a thread @{text "th"}, which is 
  defined as

  \begin{isabelle}\ \ \ \ \ %%%
  \mbox{\begin{tabular}{lcl}
  @{thm (lhs) priority.simps(1)} & @{text "\<equiv>"} & 
    @{thm (rhs) priority.simps(1)}\\
  @{thm (lhs) priority.simps(2)} & @{text "\<equiv>"} & 
    @{thm (rhs) priority.simps(2)}\\
  @{thm (lhs) priority.simps(3)} & @{text "\<equiv>"} & 
    @{thm (rhs) priority.simps(3)}\\
  @{term "priority th (DUMMY#s)"} & @{text "\<equiv>"} & @{term "priority th s"}\\
  \end{tabular}}
  \end{isabelle}

  \noindent
  In this definition we set @{text 0} as the default priority for
  threads that have not (yet) been created. The last function we need 
  calculates the ``time'', or index, at which a thread had its 
  priority last set.

  \begin{isabelle}\ \ \ \ \ %%%
  \mbox{\begin{tabular}{lcl}
  @{thm (lhs) last_set.simps(1)} & @{text "\<equiv>"} & 
    @{thm (rhs) last_set.simps(1)}\\
  @{thm (lhs) last_set.simps(2)} & @{text "\<equiv>"} & 
    @{thm (rhs) last_set.simps(2)}\\
  @{thm (lhs) last_set.simps(3)} & @{text "\<equiv>"} & 
    @{thm (rhs) last_set.simps(3)}\\
  @{term "last_set th (DUMMY#s)"} & @{text "\<equiv>"} & @{term "last_set th s"}\\
  \end{tabular}}
  \end{isabelle}

  \noindent
  In this definition @{term "length s"} stands for the length of the list
  of events @{text s}. Again the default value in this function is @{text 0}
  for threads that have not been created yet. An \emph{actor} of an event is
  defined as

  \begin{isabelle}\ \ \ \ \ %%%
  \mbox{\begin{tabular}{lcl}
  @{thm (lhs) actor.simps(5)} & @{text "\<equiv>"} & 
    @{thm (rhs) actor.simps(5)}\\
  @{thm (lhs) actor.simps(1)} & @{text "\<equiv>"} & 
    @{thm (rhs) actor.simps(1)}\\
  @{thm (lhs) actor.simps(4)} & @{text "\<equiv>"} & 
    @{thm (rhs) actor.simps(4)}\\
  @{thm (lhs) actor.simps(2)} & @{text "\<equiv>"} & 
    @{thm (rhs) actor.simps(2)}\\
  @{thm (lhs) actor.simps(3)} & @{text "\<equiv>"} & 
    @{thm (rhs) actor.simps(3)}\\
  \end{tabular}}
  \end{isabelle}

  \noindent
  This allows us to define what actions a set of threads @{text ths} might
  perform in a list of events @{text s}, namely

  \begin{isabelle}\ \ \ \ \ %%%
  @{thm actions_of_def[where ?s="s" and ?ths="ths", THEN eq_reflection]}.
  \end{isabelle}

  \noindent Here we use Isabelle's notation for list-comprehensions,
  which is very similar to the corresponding notation in
  Haskell.  A \emph{precedence} of a thread @{text th} in a
  state @{text s} is the pair of natural numbers defined as
  
  \begin{isabelle}\ \ \ \ \ %%%
  @{thm preced_def}
  \end{isabelle}

  \noindent
  We also use the abbreviation 

  \begin{isabelle}\ \ \ \ \ %%%
  @{abbrev "preceds ths s"}
  \end{isabelle}

  \noindent
  for the set of precedences of threads @{text ths} in state @{text s}.
  The point of precedences is to schedule threads not according to priorities (because what should
  we do in case two threads have the same priority?), but according to precedences. 
  Precedences allow us to always discriminate between two threads with equal priority by 
  taking into account the time when the priority was last set. We order precedences so 
  that threads with the same priority get a higher precedence if their priority has been 
  set earlier, since for such threads it is more urgent to finish their work. In an implementation
  this choice would translate to a quite natural FIFO-scheduling of threads with 
  the same priority. 
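  The combination of priorities, set-times and the FIFO tie-break can be
  illustrated by the following Python sketch (ours, not part of the
  formalisation; events are tagged tuples with the most recent event first,
  and the ``time'' of an event is the length of the event list that follows it):

```python
# Illustrative sketch: events are tagged tuples, newest first, e.g.
# ("Create", th, prio) or ("Set", th, prio).
def priority(th, s):
    for e in s:                      # scan from the most recent event
        if e[0] in ("Create", "Set") and e[1] == th:
            return e[2]
    return 0                         # default for uncreated threads

def last_set(th, s):
    for i, e in enumerate(s):
        if e[0] in ("Create", "Set") and e[1] == th:
            return len(s) - i - 1    # length of the list after this event
    return 0

def preced(th, s):
    # a precedence is the pair of priority and set-time
    return (priority(th, s), last_set(th, s))

def higher(p1, p2):
    # p1 beats p2: higher priority wins; on a tie the earlier
    # (smaller) set-time wins, giving FIFO among equal priorities
    return p1[0] > p2[0] or (p1[0] == p2[0] and p1[1] < p2[1])
```

  For example, of two threads created with the same priority, the one created
  earlier obtains the higher precedence.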
  
  Moylan et al.~\cite{deinheritance} considered the alternative of 
  ``time-slicing'' threads with equal priority, but found that it does not lead to 
  advantages in practice. On the contrary, according to their work having a policy 
  like our FIFO-scheduling of threads with equal priority reduces the number of
  tasks involved in the inheritance process and thus minimises the number
  of potentially expensive thread-switches. 
  
  %\endnote{{\bf NEEDED?} We will also need counters for @{term P} and @{term V} events of a thread @{term th}
  %in a state @{term s}. This can be straightforwardly defined in Isabelle as
  %
  %\begin{isabelle}\ \ \ \ \ %%%
  %\mbox{\begin{tabular}{@ {}l}
  %@{thm cntP_def}\\
  %@{thm cntV_def}
  %\end{tabular}}
  %\end{isabelle}
  % 
  %\noindent using the predefined function @{const count} for lists.}

  Next, we introduce the concept of \emph{waiting queues}. They are
  lists of threads associated with every resource. The first thread in
  this list (i.e.~the head, or short @{term hd}) is chosen to be the one 
  that is in possession of the
  ``lock'' of the corresponding resource. We model waiting queues as
  functions, below abbreviated as @{text wq}. They take a resource as
  argument and return a list of threads.  This allows us to define
  when a thread \emph{holds}, respectively \emph{waits} for, a
  resource @{text cs} given a waiting queue function @{text wq}.

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm holding_raw_def[where thread="th"]}\\
  @{thm waiting_raw_def[where thread="th"]}
  \end{tabular}
  \end{isabelle}

  \noindent
  In this definition we assume @{text "set"} converts a list into a set.
  Note that in the first definition the condition @{text "th \<in> set (wq cs)"} does not follow
  from @{text "th = hd (wq cs)"}, since the head of an empty list is undefined in Isabelle/HOL. 
  At the beginning, that is in the state where no thread is created yet, 
  the waiting queue function will be the function that returns the
  empty list for every resource.

  \begin{isabelle}\ \ \ \ \ %%%
  @{abbrev all_unlocked}\hfill\numbered{allunlocked}
  \end{isabelle}
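  A waiting-queue function and the two predicates above can be mimicked in
  Python as follows (an illustrative sketch under our own conventions, not
  the formalised definitions):

```python
# Illustrative sketch: a waiting queue function maps a resource to the
# list of threads queued on it; the head of a non-empty queue is the
# thread holding the lock.
def holds(wq, th, cs):
    q = wq(cs)
    return th in q and q[0] == th    # membership guards against empty queues

def waits(wq, th, cs):
    q = wq(cs)
    return th in q and q[0] != th

def all_unlocked(cs):
    return []                        # initial state: every queue is empty
```

  The membership test in @{text holds} plays the role of the condition
  discussed above, since indexing an empty list would fail.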

  \noindent
  Using @{term "holding_raw"} and @{term waiting_raw}, we can introduce \emph{Resource Allocation Graphs} 
  (RAG), which represent the dependencies between threads and resources.
  We choose to represent RAGs as relations using pairs of the form

  \begin{isabelle}\ \ \ \ \ %%%
  @{term "(Th th, Cs cs)"} \hspace{5mm}{\rm and}\hspace{5mm}
  @{term "(Cs cs, Th th)"}\hfill\numbered{pairs}
  \end{isabelle}

  \noindent
  where the first stands for a \emph{waiting edge} and the second for a 
  \emph{holding edge} (@{term Cs} and @{term Th} are constructors of a 
  datatype for vertices). Given a waiting queue function, a RAG is defined 
  as the union of the sets of waiting and holding edges, namely

  \begin{isabelle}\ \ \ \ \ %%%
  @{thm RAG_raw_def}
  \end{isabelle}
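  A RAG can be computed from a waiting queue function as in the following
  Python sketch (an illustration only; node tags @{text "T"}/@{text "C"}
  mirror the @{term Th} and @{term Cs} constructors):

```python
# Illustrative sketch: build the RAG as a set of edges from a waiting
# queue function; the head of each non-empty queue holds the resource,
# the remaining threads wait for it.
def rag(wq, resources):
    edges = set()
    for cs in resources:
        q = wq(cs)
        if q:
            edges.add((("C", cs), ("T", q[0])))      # holding edge
            for th in q[1:]:
                edges.add((("T", th), ("C", cs)))    # waiting edges
    return edges
```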


  \begin{figure}[t]
  \begin{center}
  \newcommand{\fnt}{\fontsize{7}{8}\selectfont}
  \begin{tikzpicture}[scale=1]
  %%\draw[step=2mm] (-3,2) grid (1,-1);

  \node (A) at (0,0) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>0"}};
  \node (B) at (2,0) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^sub>1"}};
  \node (C) at (4,0.7) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>1"}};
  \node (D) at (4,-0.7) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>2"}};
  \node (E) at (6,-0.7) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^sub>2"}};
  \node (E1) at (6, 0.2) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^sub>3"}};
  \node (F) at (8,-0.7) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>3"}};

  \node (X) at (0,-2) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>4"}};
  \node (Y) at (2,-2) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^sub>4"}};
  \node (Z) at (2,-2.9) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^sub>5"}};
  \node (U1) at (4,-2) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>5"}};
  \node (U2) at (4,-2.9) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^sub>6"}};
   \node (R) at (6,-2.9) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^sub>6"}};

  \draw [<-,line width=0.6mm] (A) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding}  (B);
  \draw [->,line width=0.6mm] (C) to node [pos=0.4,sloped,above=-0.5mm] {\fnt{}waiting}  (B);
  \draw [->,line width=0.6mm] (D) to node [pos=0.4,sloped,below=-0.5mm] {\fnt{}waiting}  (B);
  \draw [<-,line width=0.6mm] (D) to node [pos=0.54,sloped,below=-0.5mm] {\fnt{}holding}  (E);
  \draw [<-,line width=0.6mm] (D) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding}  (E1);
  \draw [->,line width=0.6mm] (F) to node [pos=0.45,sloped,below=-0.5mm] {\fnt{}waiting}  (E);

  \draw [->,line width=0.6mm] (U1) to node [pos=0.45,sloped,below=-0.5mm] {\fnt{}waiting}  (Y);
  \draw [->,line width=0.6mm] (U2) to node [pos=0.45,sloped,below=-0.5mm] {\fnt{}waiting}  (Z);
  \draw [<-,line width=0.6mm] (X) to node [pos=0.54,sloped,below=-0.5mm] {\fnt{}holding}  (Z);
  \draw [<-,line width=0.6mm] (X) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding}  (Y);
  \draw [<-,line width=0.6mm] (U2) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding}  (R);
  \end{tikzpicture}
  \end{center}
  \caption{An instance of a Resource Allocation Graph (RAG).\label{RAGgraph}}
  \end{figure}

  \noindent
  If there is no cycle, then every RAG can be pictured as a forest of trees, as
  for example in Figure~\ref{RAGgraph}.

  Because of the RAGs, we will need to formalise some results about
  graphs.  While a few formalisations of graphs are already
  available in Isabelle, we chose to introduce our own library of
  graphs for PIP. The justification is that we wanted to be able to
  reason about potentially infinite graphs (in the sense of infinitely
  branching and of infinite size): the property that our RAGs are
  actually forests of finitely branching trees of only finite
  depth should be something we can \emph{prove} for our model of
  PIP---it should not be an assumption we build into our
  model from the start. For our purposes the most convenient
  representation of graphs turned out to be binary relations given by sets of
  pairs, as shown in \eqref{pairs}. The pairs stand for the edges in
  graphs. This relation-based representation is convenient since we
  can use the notions of transitive closure @{term "trancl
  DUMMY"} and reflexive-transitive closure @{term "rtrancl DUMMY"}, as well as relation
  composition.  A \emph{forest} is defined as a relation @{text
  rel} that is \emph{single valued} and \emph{acyclic}:

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm single_valued_def[where ?r="rel", THEN eq_reflection]}\\
  @{thm acyclic_def[where ?r="rel", THEN eq_reflection]}
  \end{tabular}
  \end{isabelle} 

  \noindent
  The \emph{children}, \emph{subtree} and \emph{ancestors} of a node in a graph
  can be easily defined relationally as 

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm children_def[where ?r="rel" and ?x="node", THEN eq_reflection]}\\
  @{thm subtree_def[where ?r="rel" and ?x="node", THEN eq_reflection]}\\
  @{thm ancestors_def[where ?r="rel" and ?x="node", THEN eq_reflection]}\\
  \end{tabular}
  \end{isabelle}
  
  \noindent Note that forests can contain trees of infinite depth and
  nodes with infinitely many children.  A \emph{finite
  forest} is a forest that is well-founded and in which every node has
  only finitely many children (i.e.~it is finitely branching).
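  The three relational notions can be mimicked with a simple fixpoint
  computation, as in the following Python sketch (an illustration; the
  formal definitions use @{text rel}, its transitive closure and its
  reflexive-transitive closure directly):

```python
# Illustrative sketch: a relation is a set of (child, parent) edges,
# edges pointing towards the root as in the RAG.
def children(rel, x):
    return {a for (a, b) in rel if b == x}

def ancestors(rel, x):
    # everything reachable from x by following edges (image under r^+)
    seen, frontier = set(), {x}
    while frontier:
        frontier = {b for (a, b) in rel if a in frontier} - seen
        seen |= frontier
    return seen

def subtree(rel, x):
    # x together with every node from which x is reachable (image under r^*)
    seen, frontier = {x}, {x}
    while frontier:
        frontier = {a for (a, b) in rel if b in frontier} - seen
        seen |= frontier
    return seen
```

  The loops terminate for finite acyclic relations, which is exactly the
  well-founded, finitely branching case singled out above.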

  %\endnote{
  %\begin{isabelle}\ \ \ \ \ %%%
  %@ {thm rtrancl_path.intros}
  %\end{isabelle} 
  %
  %\begin{isabelle}\ \ \ \ \ %%%
  %@ {thm rpath_def}
  %\end{isabelle}
  %}


  %\endnote{{\bf Lemma about overlapping paths}}
  
  The locking mechanism of PIP ensures that for each thread node
  there can be many incoming holding edges in the RAG, but at most one
  outgoing waiting edge.  The reason is that when a thread asks for a
  resource that is already locked, then the thread is blocked and
  cannot ask for another resource.  Clearly, every resource can
  also have at most one outgoing holding edge---indicating that the
  resource is locked. So if the @{text "RAG"} is well-founded and
  finite, we can always start at a thread waiting for a resource and
  ``chase'' outgoing arrows leading to a single root of a tree.
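  This ``chasing'' of arrows can be sketched as follows (an illustration,
  assuming a finite, acyclic and single-valued relation):

```python
# Illustrative sketch: in a single-valued, acyclic and finite relation,
# repeatedly following the unique outgoing edge reaches a unique root.
def root(rel, x):
    while True:
        succs = [b for (a, b) in rel if a == x]
        if not succs:
            return x            # no outgoing edge: x is the root
        (x,) = succs            # single-valuedness: exactly one successor
```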
  
  The use of relations for representing RAGs allows us to conveniently define
  the notion of the \emph{dependants} of a thread

  \begin{isabelle}\ \ \ \ \ %%%
  @{thm dependants_raw_def}
  \end{isabelle}

  \noindent This definition needs to account for all threads that wait
  for a thread to release a resource. This means we need to include
  threads that transitively wait for a resource to be released (in the
  picture above this means the dependants of @{text "th\<^sub>0"} are
  @{text "th\<^sub>1"} and @{text "th\<^sub>2"}, which wait for
  resource @{text "cs\<^sub>1"}, but also @{text "th\<^sub>3"}, which
  cannot make any progress unless @{text "th\<^sub>2"} makes progress,
  which in turn needs to wait for @{text "th\<^sub>0"} to finish). If
  there is a cycle of dependencies in a RAG, then clearly we have a
  deadlock. Therefore when a thread requests a resource, we must
  ensure that the resulting RAG is not circular. In practice, the
  programmer has to ensure this. Our model will enforce that critical 
  resources can only be requested provided no circularity can arise.

  Next we introduce the notion of the \emph{current precedence} of a thread @{text th} in a 
  state @{text s}. It is defined as

  \begin{isabelle}\ \ \ \ \ %%%
  @{thm cpreced_def3}\hfill\numbered{cpreced}
  \end{isabelle}

  %\endnote{
  %\begin{isabelle}\ \ \ \ \ %%%
  %@ {thm cp_alt_def cp_alt_def1}
  %\end{isabelle}
  %}

  \noindent where the dependants of @{text th} are given by the
  waiting queue function.  While the precedence @{term prec} of any
  thread is determined statically (for example when the thread is
  created), the point of the current precedence is to dynamically
  increase this precedence, if needed according to PIP. Therefore the
  current precedence of @{text th} is given as the maximum of the
  precedence of @{text th} \emph{and} all threads that are dependants
  of @{text th} in the state @{text s}. Since the notion @{term
  "dependants"} is defined as the transitive closure of all dependent
  threads, we deal correctly with the problem in the informal
  algorithm by Sha et al.~\cite{Sha90} where a priority of a thread is
  lowered prematurely (see Introduction). We again introduce an abbreviation for the current
  precedences of a set of threads, written @{term "cprecs wq s ths"}.
  
  \begin{isabelle}\ \ \ \ \ %%%
  @{thm cpreceds_def}
  \end{isabelle}
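
  The maximum in \eqref{cpreced} can be illustrated concretely
  (illustrative Python, not the formal definition; the pair encoding of
  precedences below is an assumption made only for this sketch):

```python
# Illustrative sketch only: the current precedence of a thread is the maximum
# of its own static precedence and those of all its dependants. Precedences
# are modelled as (priority, -birth_time) pairs so that Python's tuple order
# prefers higher priorities and breaks ties towards earlier creation.

def cprec(prec, dependants, th):
    return max([prec[th]] + [prec[t] for t in dependants.get(th, ())])

prec = {"th0": (1, 0), "th1": (3, -1), "th2": (2, -2)}
deps = {"th0": {"th1", "th2"}}   # th1 and th2 are dependants of th0
```

  Here `cprec(prec, deps, "th0")` is `(3, -1)`: the low-priority thread
  `th0` inherits the precedence of its dependant `th1`, which is exactly
  the boosting behaviour PIP requires.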

  The next function, called @{term schs}, defines the behaviour of the scheduler. It will be defined
  by recursion on the state (a list of events); this function returns a \emph{schedule state}, which 
  we represent as a record consisting of two
  functions:

  \begin{isabelle}\ \ \ \ \ %%%
  @{text "\<lparr>wq_fun, cprec_fun\<rparr>"}
  \end{isabelle}

  \noindent
  The first function is a waiting queue function (that is, it takes a
  resource @{text "cs"} and returns the corresponding list of threads
  that hold it or wait for it); the second is a function that
  takes a thread and returns its current precedence (see
  the definition in \eqref{cpreced}). We assume the usual getter and setter methods for
  such records.

  In the initial state, the scheduler starts with all resources unlocked (the corresponding 
  function is defined in \eqref{allunlocked}) and the
  current precedence of every thread is initialised with @{term "Prc 0 0"}; that means 
  \mbox{@{abbrev initial_cprec}}. Therefore
  we have for the initial schedule state

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm (lhs) schs.simps(1)} @{text "\<equiv>"}\\ 
  \hspace{5mm}@{term "(|wq_fun = all_unlocked, cprec_fun = (\<lambda>_::thread. Prc 0 0)|)"}
  \end{tabular}
  \end{isabelle}

  \noindent
  The cases for @{term Create}, @{term Exit} and @{term Set} are also straightforward:
  we calculate the waiting queue function of the (previous) state @{text s}; 
  this waiting queue function @{text wq} is unchanged in the next schedule state---because
  none of these events lock or release any resource; 
  for calculating the next @{term "cprec_fun"}, we use @{text wq} and 
  @{term cpreced}. This gives the following three clauses for @{term schs}:

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm (lhs) schs.simps(2)} @{text "\<equiv>"}\\ 
  \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
  \hspace{8mm}@{term "(|wq_fun = wq\<iota>, cprec_fun = cpreced wq\<iota> (Create th prio # s)|)"}\smallskip\\
  @{thm (lhs) schs.simps(3)} @{text "\<equiv>"}\\
  \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
  \hspace{8mm}@{term "(|wq_fun = wq\<iota>, cprec_fun = cpreced wq\<iota> (Exit th # s)|)"}\smallskip\\
  @{thm (lhs) schs.simps(4)} @{text "\<equiv>"}\\ 
  \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
  \hspace{8mm}@{term "(|wq_fun = wq\<iota>, cprec_fun = cpreced wq\<iota> (Set th prio # s)|)"}
  \end{tabular}
  \end{isabelle}

  \noindent 
  More interesting are the cases where a resource, say @{text cs}, is requested or released. In these cases
  we need to calculate a new waiting queue function. For the event @{term "P th cs"}, we have to update
  the function so that the new thread list for @{text cs} is the old thread list with the thread @{text th} 
  appended at the end (remember that the head of this list is the thread in possession of the
  resource). This gives the clause

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm (lhs) schs.simps(5)} @{text "\<equiv>"}\\ 
  \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
  \hspace{5mm}@{text "let"} @{text "new_wq = wq(cs := (wq cs @ [th]))"} @{text "in"}\\
  \hspace{8mm}@{term "(|wq_fun = new_wq, cprec_fun = cpreced new_wq (P th cs # s)|)"}
  \end{tabular}
  \end{isabelle}
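
  The queue manipulation in this clause can be mirrored by a small
  functional update (illustrative Python, not the formal definition;
  names are hypothetical):

```python
# Illustrative sketch only: the waiting-queue update for a P-event. The head
# of each queue is the thread holding the resource; a requesting thread is
# appended at the end. The update is functional: the old state is unchanged.

def step_P(wq, th, cs):
    new_wq = dict(wq)                       # copy, as in wq(cs := ...)
    new_wq[cs] = wq.get(cs, []) + [th]
    return new_wq

wq = {"cs1": ["th0"]}                       # th0 holds cs1
wq1 = step_P(wq, "th1", "cs1")              # th1 requests cs1 and must wait
```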

  \noindent
  The clause for event @{term "V th cs"} is similar, except that we need to update the waiting queue function
  so that the thread that possessed the lock is deleted from the corresponding thread list. For this 
  list transformation, we use
  the auxiliary function @{term release}. A simple version of @{term release} would
  just delete this thread and return the remaining threads, namely

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}lcl}
  @{term "release []"} & @{text "\<equiv>"} & @{term "[]"}\\
  @{term "release (DUMMY # qs)"} & @{text "\<equiv>"} & @{term "qs"}\\
  \end{tabular}
  \end{isabelle}

  \noindent
  In practice, however, often the thread with the highest precedence in the list will get the
  lock next. We implemented this choice at first, but later found that which thread is
  chosen next is actually irrelevant for the correctness of PIP.
  Therefore we prove the stronger result where @{term release} is defined as

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}lcl}
  @{term "release []"} & @{text "\<equiv>"} & @{term "[]"}\\
  @{term "release (DUMMY # qs)"} & @{text "\<equiv>"} & @{term "SOME qs'. distinct qs' \<and> set qs' = set qs"}\\
  \end{tabular}
  \end{isabelle}

  \noindent where @{text "SOME"} stands for Hilbert's epsilon and
  implements an arbitrary choice for the next waiting list. It just
  has to be a list of distinct threads containing the same
  elements as @{text "qs"} (essentially @{text "qs'"} can be any
  reordering of the list @{text "qs"}). This gives for @{term V} the clause:
 
  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm (lhs) schs.simps(6)} @{text "\<equiv>"}\\
  \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
  \hspace{5mm}@{text "let"} @{text "new_wq = wq(cs := release (wq cs))"} @{text "in"}\\
  \hspace{8mm}@{term "(|wq_fun = new_wq, cprec_fun = cpreced new_wq (V th cs # s)|)"}
  \end{tabular}
  \end{isabelle}
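
  One admissible instance of the underspecified @{term release}
  (illustrative Python, not the formal definition) is the variant we
  first implemented, where the remaining thread with the highest
  precedence moves to the front of the queue:

```python
# Illustrative sketch only: one admissible instance of release. The formal
# definition leaves the reordering of the remaining queue unspecified
# (Hilbert's epsilon); here the remaining thread with the highest precedence
# (looked up via a caller-supplied prec function, an assumption) goes first.

def release(queue, prec):
    rest = queue[1:]                        # drop the thread holding the lock
    if not rest:
        return []
    nxt = max(rest, key=prec)               # highest precedence acquires next
    return [nxt] + [t for t in rest if t != nxt]

q = release(["th0", "th1", "th2"], prec={"th1": 1, "th2": 5}.get)
```

  Any such result is admissible: it is a duplicate-free list with the
  same elements as the remainder of the queue.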

  Having the scheduler function @{term schs} at our disposal, we can
  ``lift'', or overload, the notions @{term waiting}, @{term holding},
  @{term RAG}, @{term dependants} and @{term cp} to operate on states
  only.

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}rcl}
  @{thm (lhs) s_holding_abv}  & @{text "\<equiv>"} & @{thm (rhs) s_holding_abv}\\
  @{thm (lhs) s_waiting_abv}  & @{text "\<equiv>"} & @{thm (rhs) s_waiting_abv}\\
  @{thm (lhs) s_RAG_abv}      & @{text "\<equiv>"} & @{thm (rhs) s_RAG_abv}\\
  @{thm (lhs) s_dependants_abv}& @{text "\<equiv>"} & @{thm (rhs) s_dependants_abv}\\
  @{thm (lhs) cp_def}         & @{text "\<equiv>"} & @{thm (rhs) cp_def}\\  
  \end{tabular}
  \end{isabelle}

  \noindent
  With these abbreviations in place we can introduce 
  the notion of a thread being @{term ready} in a state (i.e.~threads
  that do not wait for any resource, which are the roots of the trees 
  in the RAG, see Figure~\ref{RAGgraph}). The @{term running} thread
  is then the thread with the highest current precedence of all ready threads.

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm readys_def}\\
  @{thm running_def}
  \end{tabular}
  \end{isabelle}

  \noindent
  %%In the second definition @{term "DUMMY ` DUMMY"} stands for the image of a set under a function.
  Note that in the initial state, that is where the list of events is empty, the set 
  @{term threads} is empty and therefore there is neither a thread ready nor running.
  If one or more threads are ready, then there can only be \emph{one} thread
  running, namely the one whose current precedence is equal to the maximum of all ready 
  threads. We use sets to capture both possibilities.
  We can now also conveniently define the set of resources that are locked by a thread in a
  given state and also when a thread is detached in a state (meaning the thread neither 
  holds nor waits for a resource---in the RAG this would correspond to an
  isolated node without any incoming and outgoing edges, see Figure~\ref{RAGgraph}):

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  @{thm holdents_def}\\
  @{thm detached_def}
  \end{tabular}
  \end{isabelle}
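
  The interplay of @{term readys} and @{term running} can again be
  sketched concretely (illustrative Python, not the formal definitions;
  names are hypothetical):

```python
# Illustrative sketch only: ready threads are alive threads that do not wait
# for any resource; the running set contains the ready thread(s) whose
# current precedence is maximal among all ready threads.

def readys(threads, waits):
    return {th for th in threads if not waits(th)}

def running(threads, waits, cp):
    rs = readys(threads, waits)
    if not rs:
        return set()                        # initial state: nothing runs
    top = max(cp(th) for th in rs)
    return {th for th in rs if cp(th) == top}

ths = {"th0", "th1", "th2"}
waits = lambda th: th in {"th1", "th2"}     # th1 and th2 are blocked
cp = {"th0": 3, "th1": 3, "th2": 1}.get
```

  With these inputs `running(ths, waits, cp)` is `{"th0"}`; on the empty
  thread set it is the empty set, mirroring the remark about the initial
  state above.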

  %\noindent
  %The second definition states that @{text th}  in @{text s}.
  
  Finally we can define what a \emph{valid state} is in our model of PIP. For
  example we cannot expect to be able to exit a thread if it has not been
  created yet. 
  These validity constraints on states are characterised by the
  inductive predicates @{term "step"} and @{term vt}. We first give five inference rules
  for @{term step}, relating a state and an event that can happen next.

  \begin{center}
  \begin{tabular}{c}
  @{thm[mode=Rule] thread_create[where thread=th]}\hspace{1cm}
  @{thm[mode=Rule] thread_exit[where thread=th]}
  \end{tabular}
  \end{center}

  \noindent
  The first rule states that a thread can only be created if it is not alive yet.
  Similarly, the second rule states that a thread can only be terminated if it is
  running and does not hold any resources anymore (this simplifies our model slightly;
  in practice we would expect that the operating system releases all locks held by a
  thread that is about to exit). The event @{text Set} can happen
  if the corresponding thread is running. 

  \begin{center}
  @{thm[mode=Rule] thread_set[where thread=th]}
  \end{center}

  \noindent If a thread wants to lock a resource, then the thread
  needs to be running and we also have to make sure that acquiring the
  lock does not lead to a cycle in the RAG (the purpose of the second
  premise in the rule below). In practice, ensuring the latter is the
  responsibility of the programmer.  In our formal model we brush
  aside these problematic cases in order to be able to make some
  meaningful statements about PIP.\footnote{This situation is similar
  to the infamous \emph{occurs check} in Prolog: In order to say
  anything meaningful about unification, one needs to perform an
  occurs check. But in practice the occurs check is omitted and the
  responsibility for avoiding problems rests with the programmer.}

 
  \begin{center}
  @{thm[mode=Rule] thread_P[where thread=th]}
  \end{center}
 
  \noindent
  Similarly, if a thread wants to release a lock on a resource, then
  it must be running and in the possession of that lock. This is
  formally given by the last inference rule of @{term step}.
 
  \begin{center}
  @{thm[mode=Rule] thread_V[where thread=th]}
  \end{center}

  \noindent
  Note, however, that apart from the circularity condition, we do not make any 
  assumption on how different resources can be locked and released relative to each 
  other. In our model it is possible that critical sections overlap. This is in 
  contrast to Sha et al.~\cite{Sha90}, who require that critical sections are 
  properly nested (recall Fig.~\ref{overlap}).

  A valid state of PIP can then conveniently be defined as follows:

  \begin{center}
  \begin{tabular}{c}
  @{thm[mode=Axiom] vt_nil}\hspace{1cm}
  @{thm[mode=Rule] vt_cons}
  \end{tabular}
  \end{center}

  \noindent
  This completes our formal model of PIP. In the next section we present
  a series of desirable properties derived from our model of PIP. This can
  be regarded as a validation of the correctness of our model.
*}

(*
section {* Preliminaries *}
*)

(*<*)
context valid_trace
begin
  (*>*)
(*<*)
text {*

  \endnote{In this section we prove facts that immediately follow from
  our definitions of valid traces.

  \begin{lemma}??\label{precedunique}
  @{thm [mode=IfThen] preced_unique[where ?th1.0=th\<^sub>1 and ?th2.0=th\<^sub>2]} 
  \end{lemma}


  We can verify that in any valid state, there can only be at most
  one running thread---if there are more than one running thread,
  say @{text "th\<^sub>1"} and @{text "th\<^sub>2"}, they must be 
  equal.

  \begin{lemma}
  @{thm [mode=IfThen] running_unique[where ?th1.0=th\<^sub>1 and ?th2.0=th\<^sub>2]} 
  \end{lemma}
  
  \begin{proof}
  Since @{text "th\<^sub>1"} and @{text "th\<^sub>2"} are running, they must be
  roots in the RAG.
  According to XXX, there exists a chain in the RAG-subtree of @{text "th\<^sub>1"}, 
  say starting from @{text "th'\<^sub>1"}, such that @{text "th'\<^sub>1"} has the 
  highest precedence in this subtree (@{text "th\<^sub>1"} inherited
  the precedence of @{text "th'\<^sub>1"}). We have a similar chain starting from, say 
  @{text "th'\<^sub>2"}, in the RAG-subtree of @{text "th\<^sub>2"}. Since @{text "th\<^sub>1"}
  and @{text "th\<^sub>2"} are running we know their cp-value must be the same, that is
  \begin{center}
  @{term "cp s th\<^sub>1 = cp s th\<^sub>2"} 
  \end{center}
  
  \noindent
  That means the precedences of @{text "th'\<^sub>1"} and @{text "th'\<^sub>2"}
  must be the same (they are the maxima in the respective RAG-subtrees). From this we can
  infer by Lemma~\ref{precedunique} that @{text "th'\<^sub>1"}
  and @{text "th'\<^sub>2"} are the same threads. However, this also means the
  roots @{text "th\<^sub>1"} and @{text "th\<^sub>2"} must be the same.\qed
  \end{proof}}

  *}
(*>*)
(*<*)end(*>*)

section {* The Correctness Proof *}

(*<*)
context extend_highest_gen
begin
(*>*)
text {* 

  Sha et al.~state their first correctness criterion for PIP in terms
  of the number of low-priority threads \cite[Theorem 3]{Sha90}: if
  there are @{text n} low-priority threads, then a blocked job with
  high priority can only be blocked a maximum of @{text n} times.
  Their second correctness criterion is given in terms of the number
  of critical resources \cite[Theorem 6]{Sha90}: if there are @{text
  m} critical resources, then a blocked job with high priority can
  only be blocked a maximum of @{text m} times. Both results on their
  own, strictly speaking, do \emph{not} prevent indefinite, or
  unbounded, Priority Inversion, because if a low-priority thread does
  not give up its critical resource (the one the high-priority thread
  is waiting for), then the high-priority thread can never run.  The
  argument of Sha et al.~is that \emph{if} threads release locked
  resources in a finite amount of time, then indefinite Priority
  Inversion cannot occur---the high-priority thread is guaranteed to
  run eventually. The assumption is that programmers must ensure that
  threads are programmed in this way.  However, even taking this
  assumption into account, the correctness properties of Sha et
  al.~are \emph{not} true for their version of PIP---despite being
  ``proved''. As Yodaiken \cite{Yodaiken02} and Moylan et
  al.~\cite{deinheritance} pointed out: if a low-priority thread
  holds locks on two resources for which two high-priority threads
  are waiting, then lowering its priority prematurely after giving
  up only one lock can cause indefinite Priority Inversion for one of
  the high-priority threads, invalidating their two bounds.

  Even when fixed, their proof idea does not seem to go through for
  us, because of the way we have set up our formal model of PIP.  One
  reason is that we allow critical sections, which start with a @{text
  P}-event and finish with a corresponding @{text V}-event, to
  arbitrarily overlap (something Sha et al.~explicitly exclude).
  Therefore we have designed a different correctness criterion for
  PIP. The idea behind our criterion is as follows: for all states
  @{text s}, we know the corresponding thread @{text th} with the
  highest precedence; we show that in every future state (denoted by
  @{text "s' @ s"}) in which @{text th} is still alive, either @{text
  th} is running or it is blocked by a thread that was alive in the
  state @{text s} and was waiting for or in the possession of a lock
  in @{text s}. Since in @{text s}, as in every state, the set of
  alive threads is finite, @{text th} can only be blocked a finite
  number of times. This is independent of how many threads of lower
  priority are created in @{text "s'"}. We will actually prove a
  stronger statement where we also provide the current precedence of
  the blocking thread. 

  However, this correctness criterion hinges upon a number of
  natural assumptions about the states @{text s} and @{text "s' @ s"}, the
  thread @{text th} and the events happening in @{text s'}. We list
  them next:

  \begin{quote}
  {\bf Assumptions on the states {\boldmath@{text s}} and 
  {\boldmath@{text "s' @ s"}:}} We need to require that @{text "s"} and 
  @{text "s' @ s"} are valid states:
  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{l}
  @{term "vt s"}, @{term "vt (s' @ s)"} 
  \end{tabular}
  \end{isabelle}
  \end{quote}

  \begin{quote}
  {\bf Assumptions on the thread {\boldmath@{text "th"}:}} 
  The thread @{text th} must be alive in @{text s} and 
  must have the highest precedence of all alive threads in @{text s}. Furthermore the
  priority of @{text th} is @{text prio} (we need this in the next assumptions).
  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{l}
  @{term "th \<in> threads s"}\\
  @{term "prec th s = Max (cprecs s (threads s))"}\\
  @{term "prec th s = (prio, DUMMY)"}
  \end{tabular}
  \end{isabelle}
  \end{quote}
  
  \begin{quote}
  {\bf Assumptions on the events in {\boldmath@{text "s'"}:}} We want to prove that @{text th} cannot
  be blocked indefinitely. Of course this can happen if threads with higher priority
  than @{text th} are continuously created in @{text s'}. Therefore we have to assume that  
  events in @{text s'} can only create (respectively set) threads with equal or lower 
  priority than @{text prio} of @{text th}. We also need to assume that the
  priority of @{text "th"} does not get reset and that all other priorities set in @{text "s'"} are
  less than or equal to @{text prio}. Moreover, we assume that @{text th} does
  not get ``exited'' in @{text "s'"}. This can be ensured by assuming the following three implications.
  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{l}
  {If}~~@{text "Create th' prio' \<in> set s'"}~~{then}~~@{text "prio' \<le> prio"}\\
  {If}~~@{text "Set th' prio' \<in> set s'"}~~{then}~~@{text "th' \<noteq> th"}~~{and}~~@{text "prio' \<le> prio"}\\
  {If}~~@{text "Exit th' \<in> set s'"}~~{then}~~@{text "th' \<noteq> th"}\\
  \end{tabular}
  \end{isabelle}
  \end{quote}

  \noindent The locale mechanism of Isabelle helps us to manage
  conveniently such assumptions~\cite{Haftmann08}.  Under these
  assumptions we shall prove the following correctness property:

  \begin{theorem}\label{mainthm}
  Given the assumptions about states @{text "s"} and @{text "s' @ s"},
  the thread @{text th} and the events in @{text "s'"}, then either
  \begin{itemize}
  \item[$\bullet$] @{term "th \<in> running (s' @ s)"} or\medskip

  \item[$\bullet$] there exists a thread @{term "th'"} with @{term "th' \<noteq> th"}
  and @{term "th' \<in> running (s' @ s)"} such that @{text "th' \<in> threads
  s"}, @{text "\<not> detached s th'"} and @{term "cp (s' @ s) th' = prec
  th s"}.
  \end{itemize}
  \end{theorem}

  \noindent This theorem ensures that the thread @{text th}, which has
  the highest precedence in the state @{text s}, is either running in
  state @{term "s' @ s"}, or can only be blocked in the state @{text
  "s' @ s"} by a thread @{text th'} that already existed in @{text s}
  and requested a resource or had a lock on at least one resource---that means
  the thread was not \emph{detached} in @{text s}.  As we shall see
  shortly, that means there are only finitely many threads that can
  block @{text th} in this way and then they need to run with the same
  precedence as @{text th}.

  

  Given our assumptions (on @{text th}), the first property we can
  show is that any running thread, say @{text "th'"}, has the same
  precedence as @{text th}:

  \begin{lemma}\label{runningpreced}
  @{thm [mode=IfThen] running_preced_inversion}
  \end{lemma}

  \begin{proof}
  By definition, the running thread has as current precedence the maximum of
  all ready threads, that is

  \begin{center}
  @{term "cp (t @ s) th' = Max (cp (t @ s) ` readys (t @ s))"}
  \end{center}

  \noindent
  We also know that this is equal to the maximum of current precedences of all threads,
  that is

  \begin{center}
  @{term "cp (t @ s) th' = Max (cp (t @ s) ` threads (t @ s))"}
  \end{center}

  \noindent
  This is because each ready thread, say @{text "th\<^sub>r"}, has the maximum
  current precedence of the subtree located at @{text "th\<^sub>r"}, and all these
  subtrees together contain every thread.
  But the maximum over all threads is the @{term "cp"} of @{text "th"},
  which is equal to the @{term preced} of @{text th}.\qed
  \end{proof}

  %\endnote{
  %@{thm "th_blockedE_pretty"} -- thm-blockedE??
  % 
  % @{text "th_kept"} shows that th is a thread in s'-s
  % }

  Next we show that a running thread @{text "th'"} must either wait for or
  hold a resource in state @{text s}.

  \begin{lemma}\label{notdetached}
  If @{term "th' \<in> running (s' @ s)"} and @{term "th \<noteq> th'"} then @{term "\<not>detached s th'"}.
  \end{lemma}

  \begin{proof} Let us assume @{text "th'"} is detached in state
  @{text "s"}. Then, according to the definition of detached, @{text
  "th'"} does not hold or wait for any resource. Hence the @{text
  cp}-value of @{text "th'"} in @{text s} is not boosted, that is
  @{term "cp s th' = prec th' s"}, and is therefore lower than the
  precedence (as well as the @{text "cp"}-value) of @{term "th"}. This
  means @{text "th'"} will not run as long as @{text "th"} is a
  live thread. In turn this means @{text "th'"} cannot acquire a resource
  and is still detached in state @{text "s' @ s"}.  Consequently
  @{text "th'"} is also not boosted in state @{text "s' @ s"} and
  would not run. This contradicts our assumption.\qed
  \end{proof}


  \begin{proof}[of Theorem 1] If @{term "th \<in> running (s' @ s)"},
  then there is nothing to show. So let us assume otherwise. Since the
  @{text "RAG"} is well-founded, we know there exists an ancestor of
  @{text "th"} that is the root of the corresponding subtree and
  therefore is ready. Let us call this thread @{text "th'"}. We know
  that @{text "th'"} has the highest precedence of all ready threads.
  Therefore it is running.  We have that @{term "th \<noteq> th'"}
  since we assumed @{text th} is not running.  By
  Lem.~\ref{notdetached} we have that @{term "\<not>detached s th'"}.
  Since @{text "th'"} is not detached in @{text s}, that is, it is either
  holding or waiting for a resource, it must be that @{term "th' \<in>
  threads s"}.  By Lem.~\ref{runningpreced} we have

  \begin{center}
  @{term "cp (t @ s) th' = preced th s"}
  \end{center}

  \noindent
  This concludes the proof of Theorem 1.\qed
  \end{proof}


  %\endnote{
  %In what follows we will describe properties of PIP that allow us to
  %prove Theorem~\ref{mainthm} and, when instructive, briefly describe
  %our argument. Recall we want to prove that in state @ {term "s' @ s"}
  %either @{term th} is either running or blocked by a thread @ {term
  %"th'"} (@{term "th \<noteq> th'"}) which was alive in state @{term s}. We
  %can show that


  %\begin{lemma}
  %If @{thm (prem 2) eq_pv_blocked}
  %then @{thm (concl) eq_pv_blocked}
  %\end{lemma}

  %\begin{lemma}
  %If @{thm (prem 2) eq_pv_persist}
  %then @{thm (concl) eq_pv_persist}
  %\end{lemma}}

%  \endnote{{\bf OUTLINE}

%  Since @{term "th"} is the most urgent thread, if it is somehow
%  blocked, people want to know why and wether this blocking is
%  reasonable.

%  @{thm [source] th_blockedE} @{thm th_blockedE}

%  if @{term "th"} is blocked, then there is a path leading from 
%  @{term "th"} to @{term "th'"}, which means:
%  there is a chain of demand leading from @{term th} to @{term th'}.

  %%% in other words
  %%% th -> cs1 -> th1 -> cs2 -> th2 -> ... -> csn -> thn -> cs -> th'. 
  %%% 
  %%% We says that th is blocked by "th'".

%  THEN

%  @ {thm [source] vat_t.th_chain_to_ready} @ {thm vat_t.th_chain_to_ready}

%  It is basic propery with non-trival proof. 

%  THEN

%  @ {thm [source] max_preced} @ {thm max_preced}

%  which says @{term "th"} holds the max precedence.

%  THEN
 
%  @ {thm [source] th_cp_max th_cp_preced th_kept}
%  @ {thm th_cp_max th_cp_preced th_kept}

%  THENTHEN

 %@ {thm [source] running_inversion_4} @  {thm running_inversion_4}

 % which explains what the @{term "th'"} looks like. Now, we have found the 
 % @{term "th'"} which blocks @{term th}, we need to know more about it.
 % To see what kind of thread can block @{term th}.

 % From these two lemmas we can see the correctness of PIP, which is
 % that: the blockage of th is reasonable and under control.

 % Lemmas we want to describe:

 % \begin{lemma}
 % @ {thm running_cntP_cntV_inv}
 % \end{lemma}

%  \noindent
%  Remember we do not have the well-nestedness restriction in our
%  proof, which means the difference between the counters @{const cntV}
%  and @{const cntP} can be larger than @{term 1}.

%  \begin{lemma}\label{runninginversion}
%  @ {thm running_inversion}
%  \end{lemma}
  
%  explain tRAG
%}

  
%  Suppose the thread @ {term th} is \emph{not} running in state @ {term
%  "t @ s"}, meaning that it should be blocked by some other thread.
%  It is first shown that there is a path in the RAG leading from node
%  @ {term th} to another thread @ {text "th'"}, which is also in the
%  @ {term readys}-set.  Since @ {term readys}-set is non-empty, there
%  must be one in it which holds the highest @ {term cp}-value, which,
%  by definition, is currently the @ {term running}-thread.  However, we
%  are going to show in the next lemma slightly more: this running
%  thread is exactly @ {term "th'"}.

%  \begin{lemma}
%  There exists a thread @{text "th'"}
%  such that @{thm (no_quants) th_blockedE_pretty}.
%  \end{lemma}

%  \begin{proof}
%  We know that @{term th} cannot be in @{term readys}, because it has
%  the highest precedence and therefore must be running. This violates our
%  assumption. So by ?? we have that there must be a @{term "th'"} such that
%  @{term "th' \<in> readys (t @ s)"} and @{term "(Th th, Th th') \<in> (RAG (t @ s))\<^sup>+"}.
%  We are going to first show that this @{term "th'"} must be running. For this we 
%  need to show that @{term th'} holds the highest @{term cp}-value.
%  By ?? we know that the @{term "cp"}-value of @{term "th'"} must
%  be the highest all precedences of all thread nodes in its @{term tRAG}-subtree.
%  That is 

%  \begin{center}
%  @ {term "cp (t @ s) th' = Max (the_preced (t @ s) ` 
%    (the_thread ` subtree (tRAG (t @ s)) (Th th')))"}
%  \end{center}

%  But since @{term th} is in this subtree the right-hand side is equal
%  to @{term "preced th (t @ s)"}.

  %Let me distinguish between cp (current precedence) and assigned precedence (the precedence the
  %thread ``normally'' has).
  %So we want to show what the cp of th' is in state t @ s.
  %We look at all the assingned precedences in the subgraph starting from th'
  %We are looking for the maximum of these assigned precedences.
  %This subgraph must contain the thread th, which actually has the highest precednence
  %so cp of th' in t @ s has this (assigned) precedence of th
  %We know that cp (t @ s) th' 
  %is the Maximum of the threads in the subgraph starting from th'
  %this happens to be the precedence of th
  %th has the highest precedence of all threads
%  \end{proof}

%  \begin{corollary}  
%  Using the lemma \ref{runninginversion} we can say more about the thread th'
%  \end{corollary}

%  \endnote{\subsection*{END OUTLINE}}

%  In what follows we will describe properties of PIP that allow us to prove 
%  Theorem~\ref{mainthm} and, when instructive, briefly describe our argument. 
%  It is relatively easy to see that:

%  \begin{isabelle}\ \ \ \ \ %%%
%  \begin{tabular}{@ {}l}
%  @ {text "running s \<subseteq> ready s \<subseteq> threads s"}\\
%  @ {thm[mode=IfThen]  finite_threads}
%  \end{tabular}
%  \end{isabelle}

%  \noindent
%  The second property is by induction on @{term vt}. The next three
%  properties are: 

%  \begin{isabelle}\ \ \ \ \ %%%
%  \begin{tabular}{@ {}l}
%  HERE??
  %@ {thm[mode=IfThen] waiting_unique[of _ _ "cs1" "cs2"]}\\
  %@ {thm[mode=IfThen] held_unique[of _ "th1" _ "th2"]}\\
  %@ {thm[mode=IfThen] running_unique[of _ "th1" "th2"]}
%  \end{tabular}
%  \end{isabelle}

%  \noindent
%  The first property states that every waiting thread can only wait for a single
%  resource (because it gets suspended after requesting that resource); the second 
%  that every resource can only be held by a single thread; 
%  the third property establishes that in every given valid state, there is
%  at most one running thread. We can also show the following properties 
%  about the @{term RAG} in @{text "s"}.

%  \begin{isabelle}\ \ \ \ \ %%%
%  \begin{tabular}{@ {}l}
%  HERE?? %@{text If}~@ {thm (prem 1) acyclic_RAG}~@{text "then"}:\\
%  \hspace{5mm}@{thm (concl) acyclic_RAG},
%  @{thm (concl) finite_RAG} and
%  %@ {thm (concl) wf_dep_converse},\\
%  %\hspace{5mm}@{text "if"}~@ {thm (prem 2) dm_RAG_threads}~@{text "then"}~@{thm (concl) dm_RAG_threads}
%  and\\
%  %\hspace{5mm}@{text "if"}~@ {thm (prem 2) range_in}~@{text "then"}~% @ {thm (concl) range_in}.
%  \end{tabular}
%  \end{isabelle}

%  \noindent
%  The acyclicity property follows from how we restricted the events in
%  @{text step}; similarly the finiteness and well-foundedness property.
%  The last two properties establish that every thread in a @{text "RAG"}
%  (either holding or waiting for a resource) is a live thread.

%  The key lemma in our proof of Theorem~\ref{mainthm} is as follows:

%  \begin{lemma}\label{mainlem}
%  Given the assumptions about states @{text "s"} and @{text "s' @ s"},
%  the thread @{text th} and the events in @{text "s'"},
%  if @{term "th' \<in> threads (s' @ s)"}, @{text "th' \<noteq> th"} and @{text "detached (s' @ s) th'"}\\
%  then @{text "th' \<notin> running (s' @ s)"}.
%  \end{lemma}

%  \noindent
%  The point of this lemma is that a thread different from @{text th} (which has the highest
%  precedence in @{text s}) and not holding any resource, cannot be running 
%  in the state @{text "s' @ s"}.

%  \begin{proof}
%  Since thread @{text "th'"} does not hold any resource, no thread can depend on it. 
%  Therefore its current precedence @{term "cp (s' @ s) th'"} equals its own precedence
%  @{term "prec th' (s' @ s)"}. Since @{text "th"} has the highest precedence in the 
%  state @{text "(s' @ s)"} and precedences are distinct among threads, we have
%  @{term "prec th' (s' @s ) < prec th (s' @ s)"}. From this 
%  we have @{term "cp (s' @ s) th' < prec th (s' @ s)"}.
%  Since @{text "prec th (s' @ s)"} is already the highest 
%  @{term "cp (s' @ s) th"} can not be higher than this and can not be lower either (by 
%  definition of @{term "cp"}). Consequently, we have @{term "prec th (s' @ s) = cp (s' @ s) th"}.
%  Finally we have @{term "cp (s' @ s) th' < cp (s' @ s) th"}.
%  By defintion of @{text "running"}, @{text "th'"} can not be running in state
%  @{text "s' @ s"}, as we had to show.\qed
%  \end{proof}

%  \noindent
%  Since @{text "th'"} is not able to run in state @{text "s' @ s"}, it is not able to 
%  issue a @{text "P"} or @{text "V"} event. Therefore if @{text "s' @ s"} is extended
%  one step further, @{text "th'"} still cannot hold any resource. The situation will 
%  not change in further extensions as long as @{text "th"} holds the highest precedence.

%  From this lemma we can deduce Theorem~\ref{mainthm}: that @{text th} can only be 
%  blocked by a thread @{text th'} that
%  held some resource in state @{text s} (that is not @{text "detached"}). And furthermore
%  that the current precedence of @{text th'} in state @{text "(s' @ s)"} must be equal to the 
%  precedence of @{text th} in @{text "s"}.
%  We show this theorem by induction on @{text "s'"} using Lemma~\ref{mainlem}.
%  This theorem gives a stricter bound on the threads that can block @{text th} than the
%  one obtained by Sha et al.~\cite{Sha90}:
%  only threads that were alive in state @{text s} and moreover held a resource.
%  This means our bound is in terms of both---alive threads in state @{text s}
%  and number of critical resources. Finally, the theorem establishes that the blocking threads have the
%  current precedence raised to the precedence of @{text th}.

%  We can furthermore prove that under our assumptions no deadlock exists in the state @{text "s' @ s"}
%  by showing that @{text "running (s' @ s)"} is not empty.

%  \begin{lemma}
%  Given the assumptions about states @{text "s"} and @{text "s' @ s"},
%  the thread @{text th} and the events in @{text "s'"},
%  @{term "running (s' @ s) \<noteq> {}"}.
%  \end{lemma}

%  \begin{proof}
%  If @{text th} is blocked, then by following its dependants graph, we can always 
%  reach a ready thread @{text th'}, and that thread must have inherited the 
%  precedence of @{text th}.\qed
%  \end{proof}


  %The following lemmas show how every node in RAG can be chased to ready threads:
  %\begin{enumerate}
  %\item Every node in RAG can be chased to a ready thread (@{text "chain_building"}):
  %  @   {thm [display] chain_building[rule_format]}
  %\item The ready thread chased to is unique (@{text "dchain_unique"}):
  %  @   {thm [display] dchain_unique[of _ _ "th1" "th2"]}
  %\end{enumerate}

  %Some deeper results about the system:
  %\begin{enumerate}
  %\item The maximum of @{term "cp"} and @{term "preced"} are equal (@{text "max_cp_eq"}):
  %@  {thm [display] max_cp_eq}
  %\item There must be one ready thread having the max @{term "cp"}-value 
  %(@{text "max_cp_readys_threads"}):
  %@  {thm [display] max_cp_readys_threads}
  %\end{enumerate}

  %The relationship between the count of @{text "P"} and @{text "V"} and the number of 
  %critical resources held by a thread is given as follows:
  %\begin{enumerate}
  %\item The @{term "V"}-operation decreases the number of critical resources 
  %  one thread holds (@{text "cntCS_v_dec"})
  %   @  {thm [display]  cntCS_v_dec}
  %\item The number of @{text "V"} never exceeds the number of @{text "P"} 
  %  (@  {text "cnp_cnv_cncs"}):
  %  @  {thm [display]  cnp_cnv_cncs}
  %\item The number of @{text "V"} equals the number of @{text "P"} when 
  %  the relevant thread is not living:
  %  (@{text "cnp_cnv_eq"}):
  %  @  {thm [display]  cnp_cnv_eq}
  %\item When a thread is not living, it does not hold any critical resource 
  %  (@{text "not_thread_holdents"}):
  %  @  {thm [display] not_thread_holdents}
  %\item When the number of @{text "P"} equals the number of @{text "V"}, the relevant 
  %  thread does not hold any critical resource, therefore no thread can depend on it
  %  (@{text "count_eq_dependants"}):
  %  @  {thm [display] count_eq_dependants}
  %\end{enumerate}

  %The reason that only threads which already held some resoures
  %can be running and block @{text "th"} is that if , otherwise, one thread 
  %does not hold any resource, it may never have its prioirty raised
  %and will not get a chance to run. This fact is supported by 
  %lemma @{text "moment_blocked"}:
  %@   {thm [display] moment_blocked}
  %When instantiating  @{text "i"} to @{text "0"}, the lemma means threads which did not hold any
  %resource in state @{text "s"} will not have a change to run latter. Rephrased, it means 
  %any thread which is running after @{text "th"} became the highest must have already held
  %some resource at state @{text "s"}.


  %When instantiating @{text "i"} to a number larger than @{text "0"}, the lemma means 
  %if a thread releases all its resources at some moment in @{text "t"}, after that, 
  %it may never get a change to run. If every thread releases its resource in finite duration,
  %then after a while, only thread @{text "th"} is left running. This shows how indefinite 
  %priority inversion can be avoided. 

  %All these assumptions are put into a predicate @{term "extend_highest_gen"}. 
  %It can be proved that @{term "extend_highest_gen"} holds 
  %for any moment @{text "i"} in it @{term "t"} (@{text "red_moment"}):
  %@   {thm [display] red_moment}
  
  %From this, an induction principle can be derived for @{text "t"}, so that 
  %properties already derived for @{term "t"} can be applied to any prefix 
  %of @{text "t"} in the proof of new properties 
  %about @{term "t"} (@{text "ind"}):
  %\begin{center}
  %@   {thm[display] ind}
  %\end{center}

  %The following properties can be proved about @{term "th"} in @{term "t"}:
  %\begin{enumerate}
  %\item In @{term "t"}, thread @{term "th"} is kept live and its 
  %  precedence is preserved as well
  %  (@{text "th_kept"}): 
  %  @   {thm [display] th_kept}
  %\item In @{term "t"}, thread @{term "th"}'s precedence is always the maximum among 
  %  all living threads
  %  (@{text "max_preced"}): 
  %  @   {thm [display] max_preced}
  %\item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum precedence
  %  among all living threads
  %  (@{text "th_cp_max_preced"}): 
  %  @   {thm [display] th_cp_max_preced}
  %\item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum current 
  %  precedence among all living threads
  %  (@{text "th_cp_max"}): 
  %  @   {thm [display] th_cp_max}
  %\item In @{term "t"}, thread @{term "th"}'s current precedence equals its precedence at moment 
  %  @{term "s"}
  %  (@{text "th_cp_preced"}): 
  %  @   {thm [display] th_cp_preced}
  %\end{enumerate}

  %The main theorem of this part is to characterizing the running thread during @{term "t"} 
  %(@{text "running_inversion_2"}):
  %@   {thm [display] running_inversion_2}
  %According to this, if a thread is running, it is either @{term "th"} or was
  %already live and held some resource 
  %at moment @{text "s"} (expressed by: @{text "cntV s th' < cntP s th'"}).

  %Since there are only finite many threads live and holding some resource at any moment,
  %if every such thread can release all its resources in finite duration, then after finite
  %duration, none of them may block @{term "th"} anymore. So, no priority inversion may happen
  %then.

 % NOTE: about bounds in sha et al and ours: they prove a bound on the length of individual
 % blocages. We prove a bound for the overall-blockage.

 % There are low priority threads, 
 % which do not hold any resources, 
 % such thread will not block th. 
 % Their Theorem 3 does not exclude such threads.

 % There are resources, which are not held by any low prioirty threads,
 % such resources can not cause blockage of th neither. And similiary, 
 % theorem 6 does not exlude them.

 % Our one bound excudle them by using a different formaulation. "

  *}
(*<*)
end
(*>*)

(*text {*
   explan why Thm1 roughly corresponds to Sha's Thm 3
*}*)

section {* A Finite Bound on Priority Inversion *}

(*<*)
context extend_highest_gen
begin
(*>*)
text {*

  As in the work by Sha et al., our result in Theorem~1 does not yet
  guarantee the absence of indefinite Priority Inversion. For this we
  further need the property that every thread gives up its resources
  after a finite amount of time. We found that this property is not so
  straightforward to formalise in our model. There are mainly two
  reasons for this: First, we do not specify what ``running'' the code
  of a thread means, for example by giving an operational semantics
  for machine instructions. Therefore we cannot characterise
  ``good'' programs, that is, programs that contain for every locking
  request also a corresponding unlocking request for a resource.
  Second, we need to distinguish between a thread that ``just'' locks
  a resource for a finite amount of time (even if it is very long) and
  one that locks it forever (there might be a loop between the locking
  and unlocking requests).

  Because of these problems, we decided in our earlier paper
  \cite{ZhangUrbanWu12} to leave out this property and let the
  programmer assume the responsibility to program threads in such a
  benign manner (in addition to causing no circularity in the
  RAG). This leave-it-to-the-programmer approach was also taken by
  Sha et al.~in their paper.  However, in this paper we can make an
  improvement: we can look at the \emph{events} that are happening
  after a Priority Inversion occurs. The events can be seen as a
  \textbf{rough} abstraction of the ``runtime behaviour'' of threads
  and also as an abstract notion of ``time''---when a new event
  happened, some time must have passed.  In this setting we can prove
  a more direct bound for the absence of indefinite Priority
  Inversion. This is what we shall show below.

  What we will establish in this section is that there can only be a
  finite amount of states after state @{term s} in which the thread
  @{term th} is blocked.  For this finiteness bound to exist, Sha et
  al.~assume in their work that there is a finite pool of threads
  (active or hibernating). However, we do not have this concept of
  active or hibernating threads in our model. Rather, in our model we
  can create or exit threads arbitrarily. Consequently, the avoidance
  of indefinite priority inversion we are trying to establish does not
  hold in our model, unless we require that the number of threads
  created is bounded in every valid future state after @{term s}. So
  our first assumption states:

  \begin{quote} {\bf Assumption on the number of threads created in
  every valid state after the state {\boldmath@{text s}}:} Given the
  state @{text s}, in every ``future'' valid state @{text "t @ s"}, we
  require that the number of created threads is less than
  a bound @{text "BC"}, that is 

  \[@{text "len (filter isCreate t) < BC"}\;.\]  
  \end{quote}

  \noindent Note that it is not enough to just state that there are
  only finitely many threads created in a single state @{text "s' @
  s"} after @{text s}.  Instead, we need to put this bound on the
  @{text "Create"} events for all valid states after @{text s}.

  For our second assumption about giving up resources after a finite
  amount of ``time'', let us introduce the following definition about
  threads that can potentially block @{text th}:

  \begin{isabelle}\ \ \ \ \ %%%
  @{thm blockers_def}
  \end{isabelle}

  \noindent This set contains all threads that are not detached in
  state @{text s} (i.e.~they have a lock on a resource) and therefore
  can potentially block @{text th} after state @{text s}. We need to
  make the following assumption about the threads in this set:

  \begin{quote}
  {\bf Assumptions on the threads {\boldmath{@{term "th' \<in> blockers"}}}:} 
  For each such @{text "th'"} there exists a finite bound @{text "BND(th')"} 
  such that for all future 
  valid states @{text "t @ s"},
  we have that if \mbox{@{term "\<not>(detached (t @ s) th')"}}, then 
  \[@{text "len (actions_of {th'} t) < BND(th')"}\] 
  \end{quote} 

  \noindent By this assumption we enforce that any thread potentially
  blocking @{term th} must become detached (that is, lock no resource
  anymore) after a finite number of events in @{text "t @ s"}. Again
  we have to state this bound to hold in all valid states after @{text
  s}. The bound reflects how each thread @{text "th'"} is programmed:
  though we cannot express what instructions a thread is executing,
  the events in our model correspond to the system calls made by a
  thread. Our @{text "BND(th')"} bounds the number of these calls.
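  Both assumptions are phrased as simple counts over the event trace. As a
  rough sketch (the event encoding and the names @{ML_text "EV_CREATE"},
  @{ML_text "count_creates"} and @{ML_text "count_actions_of"} are our own
  illustration, not part of the formal model), these counts could be
  computed as follows:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative event encoding (hypothetical, not from the formal model). */
enum ev_kind { EV_CREATE, EV_EXIT, EV_SET, EV_P, EV_V };

struct event {
  enum ev_kind kind;
  int thread;               /* the thread issuing the event */
};

/* len (filter isCreate t): number of Create events in the trace t. */
size_t count_creates(const struct event *t, size_t len)
{
  size_t n = 0;
  for (size_t i = 0; i < len; i++)
    if (t[i].kind == EV_CREATE) n++;
  return n;
}

/* len (actions_of {th'} t): number of events issued by thread th'. */
size_t count_actions_of(const struct event *t, size_t len, int th)
{
  size_t n = 0;
  for (size_t i = 0; i < len; i++)
    if (t[i].thread == th) n++;
  return n;
}
```

  The first assumption then requires @{ML_text "count_creates"} to stay
  below @{text "BC"} for every valid future trace; the second requires
  @{ML_text "count_actions_of"} for each blocker @{text "th'"} to stay below
  @{text "BND(th')"} as long as @{text "th'"} is not detached.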
  
  The main reason for these two assumptions is that we can prove: the
  number of states after @{text s} in which the thread @{text th} is
  not running (that is, where Priority Inversion occurs) can be
  bounded by the number of actions the threads in @{text blockers}
  perform and how many threads are newly created. This bound can be
  stated for all valid states @{term "t @ s"} that can happen after
  @{text s}. To state our bound we need to define what we
  mean by intermediate states: it is the list of traces/states starting
  from @{text s} and ending in @{text "t @ s"}:

  \begin{center}
  @{text "t @ s"},\; \ldots,\; @{text "e2 :: e1 :: s"},\;@{text "e1 :: s"},\;@{text "s"}
  \end{center}

  \noindent This can be defined by the following recursive functions

  \begin{center}
  \begin{tabular}{rcl}
  @{text "s upto t"} & $\equiv$ & @{text "if (t = []) then [s]"} \\
  & & @{text "else (t @ s) :: s upto (tail t)"}
  \end{tabular}
  \end{center}
  

  \noindent Given an extension @{text t}, this essentially defines, in
  our list representation of states, all the postfixes of
  @{text t}.
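  The recursion can be mirrored by a small count: @{text "s upto t"}
  contains one state per postfix of @{text t} plus the initial state
  @{text s}, that is @{text "len t + 1"} states in total. A minimal sketch
  (illustrative only, counting rather than building the states):

```c
#include <assert.h>

/* Mirrors the recursion of "s upto t", but only counts the states it
   produces: one for each postfix of t, plus the initial state s. */
int n_states_upto(int len_t)
{
  if (len_t == 0) return 1;            /* base case: just [s] */
  return 1 + n_states_upto(len_t - 1); /* (t @ s) :: s upto (tail t) */
}
```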

  Theorem: 

  \begin{isabelle}\ \ \ \ \ %%%
  @{text "len (filter (\<lambda>t'. th \<notin> running t') (s upto t)) \<le>
  1 + len (actions_of blockers t) + len (filter isCreate t)"}
  \end{isabelle}

  Proof:
  
  Consider the states @{text "s upto t"}. This list contains @{text "len t + 1"}
  states, so the number of states in which @{text "th"} runs plus the
  number of states in which @{text "th"} does not run equals
  @{text "len t + 1"}. That means

  \begin{center}
  @{text "states where th does not run = len t + 1 - states where th runs"} (*)
  \end{center}

  It also holds that the number of actions of @{text "th"} is less than or
  equal to the number of states in which @{text th} runs. That is

  \begin{center}
  @{text "len (actions_of {th} t) \<le> states where th runs"}
  \end{center}
  
  Substituting this into $(*)$ gives

  \begin{center}
  @{text "states where th does not run \<le> len t + 1 - len (actions_of {th} t)"}
  \end{center}

  If we look at all the events that can occur in @{text "s upto t"}, we have that

  \begin{center}
  @{text "len t = len (actions_of {th} t) + len (actions_of blockers t) + 
  len (filter isCreate t)"}
  \end{center}

  This gives us finally our theorem. \hfill\qed\medskip
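  The counting argument can be checked on concrete numbers. In the worst
  case every state in which @{text th} runs corresponds to one of its own
  actions, so the inequality becomes an equality; the following sketch
  (with made-up counts, purely for illustration) spells out this
  arithmetic:

```c
#include <assert.h>

/* Worst-case instance of the counting argument: the states in which th
   runs are exactly accounted for by th's own actions.  Since
     len t = actions_th + actions_blockers + creates
   and "s upto t" has len t + 1 states, the states in which th does not
   run amount to 1 + actions_blockers + creates. */
int nonrunning_states(int actions_th, int actions_blockers, int creates)
{
  int len_t  = actions_th + actions_blockers + creates;
  int states = len_t + 1;      /* number of states in "s upto t" */
  return states - actions_th;  /* states in which th does not run */
}
```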

  \noindent In order to now show the absence of indefinite Priority
  Inversion, we need to show that the number of actions of the @{text
  "blockers"} is bounded---the number of @{text "Creates"} is clearly
  bounded by our first assumption. The number of actions of each
  individual thread in @{text "blockers"} is bounded by our second
  assumption.  Since there can only be a finite number of @{text
  blockers} in state @{text s}, their overall sum is also bounded.
  This is actually the main conclusion we obtain for the Priority
  Inheritance Protocol: the above theorem shows that the set of @{text
  blockers} is fixed at state @{text s} when the Priority Inversion
  occurred and no additional blocker of @{text th} can appear after
  state @{text s}. In this way we can bound the number of states
  in which the thread @{text th} with the highest priority is prevented
  from running.


*}
(*<*)
end
(*>*)

section {* Properties for an Implementation\label{implement} *}

text {*
  While our formalised proof gives us confidence about the correctness of our model of PIP, 
  we found that the formalisation can even help us with efficiently implementing it.
  For example Baker complained that calculating the current precedence
  in PIP is quite ``heavy weight'' in Linux (see the Introduction).
  In our model of PIP the current precedence of a thread in a state @{text s}
  depends on all its dependants---a ``global'' transitive notion,
  which is indeed heavy weight (see Definition shown in \eqref{cpreced}).
  We can however improve upon this. For this let us define the notion
  of @{term children} of a thread @{text th} in a state @{text s} as

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  HERE?? %%@ {thm children_def2}
  \end{tabular}
  \end{isabelle}

  \noindent
  where a child is a thread that is only one ``hop'' away from the thread
  @{text th} in the @{term RAG} (and waiting for @{text th} to release
  a resource). We can prove the following lemma.

  \begin{lemma}\label{childrenlem}
  HERE %@{text "If"} @ {thm (prem 1) cp_rec} @{text "then"}
  \begin{center}
  %@ {thm (concl) cp_rec}.
  \end{center}
  \end{lemma}
  
  \noindent
  That means the current precedence of a thread @{text th} can be
  computed locally by considering only the current precedences of the children of @{text th}. In
  effect, it only needs to be recomputed for @{text th} when one of
  its children changes its current precedence.  Once the current 
  precedence is computed in this more efficient manner, the selection
  of the thread with highest precedence from a set of ready threads is
  a standard scheduling operation implemented in most operating
  systems.
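  To illustrate, the local recomputation suggested by
  Lemma~\ref{childrenlem} amounts to taking the maximum of a thread's own
  precedence and the @{term cp}-values of its children. A minimal C sketch
  (the tree representation and the names are our own illustration, not our
  PINTOS code):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative RAG fragment: each thread node stores its own precedence
   and the threads waiting on resources it holds (its children). */
struct thread_node {
  int prec;                       /* own precedence */
  struct thread_node **children;  /* children in the RAG */
  size_t n_children;
};

/* cp th = max of (prec th) and the cp-values of th's children --
   the local recursion suggested by Lemma childrenlem. */
int cp(const struct thread_node *th)
{
  int m = th->prec;
  for (size_t i = 0; i < th->n_children; i++) {
    int c = cp(th->children[i]);
    if (c > m) m = c;
  }
  return m;
}
```

  In an implementation this recursion is of course not rerun from scratch:
  a thread's @{term cp}-value can be cached and needs updating only when
  one of its children changes its @{term cp}-value.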

  %\begin{proof}[of Lemma~\ref{childrenlem}]
  %Test
  %\end{proof}

  Of course the main work for implementing PIP involves the
  scheduler and coding how it should react to events.  Below we
  outline how our formalisation guides this implementation for each
  kind of events.\smallskip
*}

text {*
  \noindent
  \colorbox{mygrey}{@{term "Create th prio"}:} We assume that the current state @{text s'} and 
  the next state @{term "s \<equiv> Create th prio#s'"} are both valid (meaning the event
  is allowed to occur). In this situation we can show that

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  HERE ?? %@ {thm eq_dep},\\
  @{thm valid_trace_create.eq_cp_th}, and\\
  @{thm[mode=IfThen] valid_trace_create.eq_cp}
  \end{tabular}
  \end{isabelle}

  \noindent
  This means in an implementation we neither have to recalculate the @{text RAG} nor any of the
  current precedences of the other threads. The current precedence of the created
  thread @{text th} is just its precedence, namely the pair @{term "(prio, length (s::event list))"}.
  \smallskip
  *}

text {*
  \noindent
  \colorbox{mygrey}{@{term "Exit th"}:} We again assume that the current state @{text s'} and 
  the next state @{term "s \<equiv> Exit th#s'"} are both valid. We can show that

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  HERE %@ {thm valid_trace_create.eq_dep}, and\\
  @{thm[mode=IfThen] valid_trace_create.eq_cp}
  \end{tabular}
  \end{isabelle}

  \noindent
  This means again we have to recalculate neither the @{text RAG} nor
  the current precedences of the other threads. Since @{term th} is not
  alive anymore in state @{term "s"}, there is no need to calculate its
  current precedence.
  \smallskip
*}

text {*
  \noindent
  \colorbox{mygrey}{@{term "Set th prio"}:} We assume that @{text s'} and 
  @{term "s \<equiv> Set th prio#s'"} are both valid. We can show that

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  %@ {thm[mode=IfThen] eq_dep}, and\\
  %@ {thm[mode=IfThen] valid_trace_create.eq_cp_pre}
  \end{tabular}
  \end{isabelle}

  \noindent
  The first property is again telling us we do not need to change the @{text RAG}. 
  The second shows that the @{term cp}-values of all threads other than @{text th} 
  are unchanged. The reason is that @{text th} is running; therefore it is not in 
  the @{term dependants} relation of any other thread. This in turn means that the 
  change of its priority cannot affect other threads.

  %The second
  %however states that only threads that are \emph{not} dependants of @{text th} have their
  %current precedence unchanged. For the others we have to recalculate the current
  %precedence. To do this we can start from @{term "th"} 
  %and follow the @{term "depend"}-edges to recompute  using Lemma~\ref{childrenlem} 
  %the @{term "cp"} of every 
  %thread encountered on the way. Since the @{term "depend"}
  %is assumed to be loop free, this procedure will always stop. The following two lemmas show, however, 
  %that this procedure can actually stop often earlier without having to consider all
  %dependants.
  %
  %\begin{isabelle}\ \ \ \ \ %%%
  %\begin{tabular}{@ {}l}
  %@ {thm[mode=IfThen] eq_up_self}\\
  %@{text "If"} @ {thm (prem 1) eq_up}, @ {thm (prem 2) eq_up} and @ {thm (prem 3) eq_up}\\
  %@{text "then"} @ {thm (concl) eq_up}.
  %\end{tabular}
  %\end{isabelle}
  %
  %\noindent
  %The first lemma states that if the current precedence of @{text th} is unchanged,
  %then the procedure can stop immediately (all dependent threads have their @{term cp}-value unchanged).
  %The second states that if an intermediate @{term cp}-value does not change, then
  %the procedure can also stop, because none of its dependent threads will
  %have their current precedence changed.
  \smallskip
  *}

text {*
  \noindent
  \colorbox{mygrey}{@{term "V th cs"}:} We assume that @{text s'} and 
  @{term "s \<equiv> V th cs#s'"} are both valid. We have to consider two
  subcases: one where there is a thread to ``take over'' the released
  resource @{text cs}, and one where there is not. Let us consider them
  in turn. Suppose in state @{text s}, the thread @{text th'} takes over
  resource @{text cs} from thread @{text th}. We can prove


  \begin{isabelle}\ \ \ \ \ %%%
  %@ {thm RAG_s}
  \end{isabelle}
  
  \noindent
  which shows how the @{text RAG} needs to be changed. The next lemma suggests
  how the current precedences need to be recalculated. For threads that are
  neither @{text "th"} nor @{text "th'"}, nothing needs to be changed, since we
  can show

  \begin{isabelle}\ \ \ \ \ %%%
  %@ {thm[mode=IfThen] cp_kept}
  \end{isabelle}
  
  \noindent
  For @{text th} and @{text th'} we need to use Lemma~\ref{childrenlem} to
  recalculate their current precedence since their children have changed. *}

text {*
  \noindent
  In the other case where there is no thread that takes over @{text cs}, we can show how
  to recalculate the @{text RAG} and also show that no current precedence needs
  to be recalculated.

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  %@ {thm RAG_s}\\
  %@ {thm eq_cp}
  \end{tabular}
  \end{isabelle}
  *}

text {*
  \noindent
  \colorbox{mygrey}{@{term "P th cs"}:} We assume that @{text s'} and 
  @{term "s \<equiv> P th cs#s'"} are both valid. We again have to analyse two subcases, namely
  the one where @{text cs} is not locked, and one where it is. We treat the former case
  first by showing that
  
  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  %@ {thm RAG_s}\\
  HERE %@ {thm eq_cp}
  \end{tabular}
  \end{isabelle}

  \noindent
  This means we need to add a holding edge to the @{text RAG} and no
  current precedence needs to be recalculated.*} 

text {*
  \noindent
  In the second case we know that resource @{text cs} is locked. We can show that
  
  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  %@ {thm RAG_s}\\
  HERE %@ {thm[mode=IfThen] eq_cp}
  \end{tabular}
  \end{isabelle}

  \noindent
  That means we have to add a waiting edge to the @{text RAG}. Furthermore
  the current precedences of all threads that are not dependants of @{text "th'"}
  are unchanged. For the others we need to follow the edges 
  in the @{text RAG} and recompute the @{term "cp"}-values. To do this we can
  start from @{term "th"} and follow the @{term "depend"}-edges, recomputing
  the @{term "cp"} of every thread encountered on the way using
  Lemma~\ref{childrenlem}. Since the @{term "depend"} relation
  is loop free, this procedure will always stop. The following lemma shows, however, 
  that this procedure can often stop earlier without having to consider all
  dependants.

  \begin{isabelle}\ \ \ \ \ %%%
  \begin{tabular}{@ {}l}
  %%@ {t hm[mode=IfThen] eq_up_self}\\
  HERE
  %@{text "If"} @ {thm (prem 1) eq_up}, @ {thm (prem 2) eq_up} and @ {thm (prem 3) eq_up}\\
  %@{text "then"} @ {thm (concl) eq_up}.
  \end{tabular}
  \end{isabelle}
 
  \noindent
  This lemma states that if an intermediate @{term cp}-value does not change, then
  the procedure can stop, because none of the dependent threads will
  have their current precedence changed.
  *}

text {*
  As can be seen, a pleasing byproduct of our formalisation is that the properties in
  this section closely inform an implementation of PIP, namely whether the
  RAG needs to be reconfigured or current precedences need to
  be recalculated for an event. This information is provided by the lemmas we proved.
  We confirmed that our observations translate into practice by implementing
  our version of PIP on top of PINTOS, a small operating system written in C and used for teaching at 
  Stanford University \cite{PINTOS}. An alternative would have been the small Xv6 operating 
  system used for teaching at MIT \cite{Xv6link,Xv6}. However this operating system implements
  a simple round robin scheduler that lacks stubs for dealing with priorities. This
  is inconvenient for our purposes.


  To implement PIP in PINTOS, we only need to modify the kernel 
  functions corresponding to the events in our formal model. The events translate to the following 
  function interface in PINTOS:

  \begin{center}
  \begin{tabular}{|l@ {\hspace{2mm}}|l@ {\hspace{2mm}}|}
  \hline
  {\bf Event} & {\bf PINTOS function} \\
  \hline
  @{text Create} & @{ML_text "thread_create"}\\
  @{text Exit}   & @{ML_text "thread_exit"}\\
  @{text Set}    & @{ML_text "thread_set_priority"}\\
  @{text P}      & @{ML_text "lock_acquire"}\\
  @{text V}      & @{ML_text "lock_release"}\\ 
  \hline
  \end{tabular}
  \end{center}

  \noindent
  Our implicit assumption that every event is an atomic operation is ensured by the architecture of 
  PINTOS (which allows disabling of interrupts when some operations are performed). The requirement that 
  an unlocked resource is given next to the waiting thread with the
  highest precedence is realised in our implementation by priority queues. We implemented
  them as \emph{Braun trees} \cite{Paulson96}, which provide efficient @{text "O(log n)"}-operations
  for accessing and updating. In the code we shall describe below, we use the function
  @{ML_text "queue_insert"}, for inserting a new element into a priority queue, and 
  the function @{ML_text "queue_update"}, for updating the position of an element that is already
  in a queue. Both functions take an extra argument that specifies the
  comparison function used for organising the priority queue.
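  The actual PINTOS code uses Braun trees; purely to illustrate the
  interface style (queue operations parameterised by a comparison
  function), here is a deliberately simplified, array-backed stand-in,
  not our real implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the priority queues in our PINTOS code: an
   unsorted array plus a comparison function. It illustrates only the
   interface; the real implementation uses Braun trees with
   O(log n) insert and update. */
#define QMAX 64

struct queue {
  int elems[QMAX];
  size_t len;
};

typedef int (*cmp_fn)(int a, int b);  /* nonzero if a precedes b */

/* Example comparator: higher number means higher precedence. */
int cmp_higher(int a, int b) { return a > b; }

void queue_insert(struct queue *q, cmp_fn cmp, int e)
{
  (void)cmp;            /* a real heap would use cmp while inserting */
  if (q->len < QMAX) q->elems[q->len++] = e;
}

/* Remove and return the element that precedes all others w.r.t. cmp. */
int queue_pop(struct queue *q, cmp_fn cmp)
{
  size_t best = 0;
  for (size_t i = 1; i < q->len; i++)
    if (cmp(q->elems[i], q->elems[best])) best = i;
  int e = q->elems[best];
  q->elems[best] = q->elems[--q->len];
  return e;
}
```

  A real @{ML_text "queue_update"} additionally repositions an element
  already in the queue after its key has changed; with Braun trees both
  operations stay within @{text "O(log n)"}.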
  
  Apart from having to implement relatively complex data\-structures in C
  using pointers, our experience with the implementation has been very positive: our specification 
  and formalisation of PIP translates smoothly to an efficient implementation in PINTOS. 
  Let us illustrate this with the C-code for the function @{ML_text "lock_acquire"}, 
  shown in Figure~\ref{code}.  This function implements the operation of requesting and, if free, 
  locking of a resource by the current running thread. The convention in the PINTOS
  code is to use the terminology \emph{locks} rather than resources. 
  A lock is represented as a pointer to the structure {\tt lock} (Line 1). 
  Lines 2 to 4 are taken from the original 
  code of @{ML_text "lock_acquire"} in PINTOS. They contain diagnostic code: first, 
  there is a check that 
  the lock is a ``valid'' lock 
  by testing whether it is not {\tt NULL}; second, a check that the code is not called
  as part of an interrupt---acquiring a lock should only be initiated by a 
  request from a (user) thread, not from an interrupt; third, it is ensured that the 
  current thread does not ask twice for a lock. These assertions are supposed
  to be satisfied because of the assumptions in PINTOS about how this code is called.
  If not, then the assertions indicate a bug in PINTOS and the result will be
  a ``kernel panic''. 



  \begin{figure}[tph]
  \begin{lstlisting}
void lock_acquire (struct lock *lock)
{ ASSERT (lock != NULL);
  ASSERT (!intr_context());
  ASSERT (!lock_held_by_current_thread (lock));

  enum intr_level old_level;
  old_level = intr_disable();
  if (lock->value == 0) {
    queue_insert(thread_cprec, &lock->wq, &thread_current()->helem); 
    thread_current()->waiting = lock;
    struct thread *pt;
    pt = lock->holder;
    while (pt) {
      queue_update(lock_cprec, &pt->held, &lock->helem);
      if (!(update_cprec(pt)))
        break;
      lock = pt->waiting;
      if (!lock) {
        queue_update(higher_cprec, &ready_queue, &pt->helem);
        break;
      };
      queue_update(thread_cprec, &lock->wq, &pt->helem);
      pt = lock->holder;
    };
    thread_block();
  } else {
    lock->value--;
    lock->holder = thread_current();
    queue_insert(lock_prec, &thread_current()->held, &lock->helem); 
  };
  intr_set_level(old_level);
}
  \end{lstlisting}
  \caption{Our version of the {\tt lock\_acquire} function for the small operating 
  system PINTOS.  It implements the operation corresponding to a @{text P}-event.\label{code}}
  \end{figure}

 
  Lines 6 and 7 of {\tt lock\_acquire} make the operation of acquiring a lock atomic by disabling all 
  interrupts, but saving them for resumption at the end of the function (Line 31).
  In Line 8, the interesting code with respect to scheduling starts: we 
  first check whether the lock is already taken (its value is then 0 indicating ``already 
  taken'', or 1 for being ``free''). In case the lock is taken, we enter the
  if-branch inserting the current thread into the waiting queue of this lock (Line 9).
  The waiting queue is referenced in the usual C-way as @{ML_text "&lock->wq"}. 
  Next, we record that the current thread is waiting for the lock (Line 10).
  Thus we established two pointers: one in the waiting queue of the lock pointing to the 
  current thread, and the other from the current thread pointing to the lock.
  According to our specification in Section~\ref{model} and the properties we were able 
  to prove for @{text P}, we need to ``chase'' all the dependants 
  in the RAG (Resource Allocation Graph) and update their
  current precedence; however we only have to do this as long as there is change in the 
  current precedence.

  The ``chase'' is implemented in the while-loop in Lines 13 to 24. 
  To initialise the loop, we 
  assign in Lines 11 and 12 the variable @{ML_text pt} to the owner 
  of the lock.
  Inside the loop, we first update the precedence of the lock held by @{ML_text pt} (Line 14).
  Next, we check whether there is a change in the current precedence of @{ML_text pt}. If not,
  then we leave the loop, since nothing else needs to be updated (Lines 15 and 16).
  If there is a change, then we have to continue our ``chase''. We check what lock the 
  thread @{ML_text pt} is waiting for (Lines 17 and 18). If there is none, then 
  the thread @{ML_text pt} is ready (the ``chase'' is finished with finding a root in the RAG). In this 
  case we update the ready-queue accordingly (Lines 19 and 20). If there is a lock  @{ML_text pt} is 
  waiting for, we update the waiting queue for this lock and we continue the loop with 
  the holder of that lock 
  (Lines 22 and 23). After all current precedences have been updated, we finally need 
  to block the current thread, because the lock it asked for was taken (Line 25). 

  If the lock the current thread asked for is \emph{not} taken, we proceed with the else-branch 
  (Lines 26 to 30). We first decrease the value of the lock to 0, meaning 
  it is taken now (Line 27). Second, we update the reference of the holder of 
  the lock (Line 28), and finally update the queue of locks the current 
  thread already possesses (Line 29).
  The very last step is to enable interrupts again, thus leaving the protected section.
  

  Similar operations need to be implemented for the @{ML_text lock_release} function, which
  we however do not show. The reader should note though that we did \emph{not} verify our C-code. 
  This is in contrast, for example, to the work on seL4, which actually verified in Isabelle/HOL
  that their C-code satisfies its specification, though this specification does not contain 
  anything about PIP \cite{sel4}.
  Our verification of PIP however provided us with the justification for designing 
  the C-code. It gave us confidence that leaving the ``chase'' early, whenever
  there is no change in the calculated current precedence, does not break the
  correctness of the algorithm.
*}

section {* Conclusion *}

text {* 
  The Priority Inheritance Protocol (PIP) is a classic textbook
  algorithm used in many real-time operating systems in order to avoid the problem of
  Priority Inversion.  Although classic and widely used, PIP does have
  its faults: for example it does not prevent deadlocks in cases where threads
  have circular lock dependencies.

  We had two goals in mind with our formalisation of PIP: One is to
  make the notions in the correctness proof by Sha et al.~\cite{Sha90}
  precise so that they can be processed by a theorem prover. The reason is
  that a mechanically checked proof avoids the flaws that crept into their
  informal reasoning. We achieved this goal: The correctness of PIP now
  only hinges on the assumptions behind our formal model. The reasoning, which is
  sometimes quite intricate and tedious, has been checked by Isabelle/HOL. 
  We can also confirm that Paulson's
  inductive method for protocol verification~\cite{Paulson98} is quite
  suitable for our formal model and proof. The traditional application
  area of this method is security protocols. 

  The second goal of our formalisation is to provide a specification for actually
  implementing PIP. Textbooks, for example \cite[Section 5.6.5]{Vahalia96},
  explain how to use various implementations of PIP and abstractly
  discuss their properties, but surprisingly lack most details important for a
  programmer who wants to implement PIP (similarly Sha et al.~\cite{Sha90}).  
  That this is an issue in practice is illustrated by the
  email from Baker we cited in the Introduction. We also achieved this
  goal: The formalisation allowed us to efficiently implement our version
  of PIP on top of PINTOS \cite{PINTOS}, a simple instructional operating system for the x86 
  architecture. It also gave the first author enough material to enable
  his undergraduate students to implement PIP (as part of their OS course).
  A byproduct of our formalisation effort is that nearly all
  design choices for the implementation of the PIP scheduler are backed up by a proved
  lemma. We were also able to establish the property that the choice of
  the next thread which takes over a lock is irrelevant for the correctness
  of PIP. Moreover, we eliminated a crucial restriction present in 
  the proof of Sha et al.: they require that critical sections nest properly, 
  whereas our scheduler allows critical sections to overlap. What we
  are not able to do is to mechanically ``synthesise'' an actual implementation 
  from our formalisation. To do so for C-code seems quite hard and is beyond 
  current technology available for Isabelle. Also our proof-method based
  on events is not ``computational'' in the sense of having a concrete
  algorithm behind it: our formalisation is really more about the 
  specification of PIP and ensuring that it has the desired properties
  (the informal specification by Sha et al.~did not). 
  

  PIP is a scheduling algorithm for single-processor systems. We are
  now living in a multi-processor world. Priority Inversion certainly
  also occurs there; see for example \cite{Brandenburg11,Davis11}.  
  However, there is very little ``foundational''
  work about PIP-algorithms on multi-processor systems.  We are not
  aware of any correctness proofs, not even informal ones. There is an
  implementation of a PIP-algorithm for multi-processors as part of the
  ``real-time'' effort in Linux, including an informal description of the implemented scheduling
  algorithm given in \cite{LINUX}.  We estimate that the formal
  verification of this algorithm, involving more fine-grained events,
  is an order of magnitude harder than the one we presented here, but still
  within reach of current theorem proving technology. We leave this
  for future work.

  To us, it seems that sound reasoning about scheduling algorithms is fiendishly difficult
  if done informally by ``pencil-and-paper''. We infer this from the flawed proof
  in the paper by Sha et al.~\cite{Sha90} and also from \cite{Regehr} where Regehr 
  points out an error in a paper about Preemption 
  Threshold Scheduling \cite{ThreadX}. The use of a theorem prover was
  invaluable to us in order to be confident about the correctness of our reasoning 
  (for example no corner case can be overlooked).   
  The most closely related work to ours is the formal verification in
  PVS of the Priority Ceiling Protocol done by Dutertre
  \cite{dutertre99b}---another solution to the Priority Inversion
  problem, which however needs static analysis of programs in order to
  avoid it. There have been earlier formal investigations
  into PIP \cite{Faria08,Jahier09,Wellings07}, but they employ model
  checking techniques. Their results, however, apply
  only to systems of a fixed size, such as a fixed number of 
  events and threads. In contrast, our result applies to systems of arbitrary
  size. Moreover, our result is a good 
  witness for one of the major reasons to be interested in machine checked 
  reasoning: gaining deeper understanding of the subject matter.

  Our formalisation
  consists of around 210 lemmas and overall 6950 lines of readable Isabelle/Isar
  code with a few apply-scripts interspersed. The formal model of PIP
  is 385 lines long; the formal correctness proof 3800 lines. Some auxiliary
  definitions and proofs span over 770 lines of code. The properties relevant
  for an implementation require 2000 lines. 
  The code of our formalisation 
  can be downloaded from the Mercurial repository at
  \url{http://www.dcs.kcl.ac.uk/staff/urbanc/cgi-bin/repos.cgi/pip}.

  %\medskip

  %\noindent
  %{\bf Acknowledgements:}
  %We are grateful for the comments we received from anonymous
  %referees.

  \bibliographystyle{plain}
  \bibliography{root}

  \theendnotes
*}


(*<*)
end
(*>*)