Journal/Paper.thy
       
     1 (*<*)
       
     2 theory Paper
       
     3 imports "../CpsG" "../ExtGG" "~~/src/HOL/Library/LaTeXsugar"
       
     4 begin
       
     5 
       
     6 (*
       
     7 find_unused_assms CpsG 
       
     8 find_unused_assms ExtGG 
       
     9 find_unused_assms Moment 
       
    10 find_unused_assms Precedence_ord 
       
    11 find_unused_assms PrioG 
       
    12 find_unused_assms PrioGDef
       
    13 *)
       
    14 
       
    15 ML {*
       
    16   open Printer;
       
    17   show_question_marks_default := false;
       
    18   *}
       
    19 
       
    20 notation (latex output)
       
    21   Cons ("_::_" [78,77] 73) and
       
    22   vt ("valid'_state") and
       
    23   runing ("running") and
       
    24   birthtime ("last'_set") and
       
    25   If  ("(\<^raw:\textrm{>if\<^raw:}> (_)/ \<^raw:\textrm{>then\<^raw:}> (_)/ \<^raw:\textrm{>else\<^raw:}> (_))" 10) and
       
    26   Prc ("'(_, _')") and
       
    27   holding ("holds") and
       
    28   waiting ("waits") and
       
    29   Th ("T") and
       
    30   Cs ("C") and
       
    31   readys ("ready") and
       
    32   depend ("RAG") and 
       
    33   preced ("prec") and
       
    34   cpreced ("cprec") and
       
    35   dependents ("dependants") and
       
    36   cp ("cprec") and
       
    37   holdents ("resources") and
       
    38   original_priority ("priority") and
       
    39   DUMMY  ("\<^raw:\mbox{$\_\!\_$}>")
       
    40 
       
    41 (*abbreviation
       
    42  "detached s th \<equiv> cntP s th = cntV s th"
       
    43 *)
       
    44 (*>*)
       
    45 
       
    46 section {* Introduction *}
       
    47 
       
    48 text {*
       
    49   Many real-time systems need to support threads involving priorities and
       
    50   locking of resources. Locking of resources ensures mutual exclusion
       
    51   when accessing shared data or devices that cannot be
       
    52   preempted. Priorities allow scheduling of threads that need to
       
    53   finish their work within deadlines.  Unfortunately, both features
       
    54   can interact in subtle ways leading to a problem, called
       
    55   \emph{Priority Inversion}. Suppose three threads with priorities
    56   $H$(igh), $M$(edium) and $L$(ow). We would expect that the thread
    57   $H$ blocks any other thread with lower priority and that it cannot itself
       
    58   be blocked indefinitely by threads with lower priority. Alas, in a naive
       
    59   implementation of resource locking and priorities this property can
       
    60   be violated. For this let $L$ be in the
       
    61   possession of a lock for a resource that $H$ also needs. $H$ must
       
    62   therefore wait for $L$ to exit the critical section and release this
       
    63   lock. The problem is that $L$ might in turn be blocked by any
       
    64   thread with priority $M$, and so $H$ sits there potentially waiting
       
    65   indefinitely. Since $H$ is blocked by threads with lower
       
    66   priorities, the problem is called Priority Inversion. It was first
       
    67   described in \cite{Lampson80} in the context of the
       
    68   Mesa programming language designed for concurrent programming.
       
    69 
       
    70   If the problem of Priority Inversion is ignored, real-time systems
       
    71   can become unpredictable and resulting bugs can be hard to diagnose.
       
    72   The classic example where this happened is the software that
       
    73   controlled the Mars Pathfinder mission in 1997 \cite{Reeves98}.
       
    74   Once the spacecraft landed, the software shut down at irregular
       
    75   intervals leading to loss of project time as normal operation of the
       
    76   craft could only resume the next day (the mission and data already
       
    77   collected were fortunately not lost, because of a clever system
       
    78   design).  The reason for the shutdowns was that the scheduling
       
    79   software fell victim to Priority Inversion: a low priority thread
       
    80   locking a resource prevented a high priority thread from running in
       
    81   time, leading to a system reset. Once the problem was found, it was
       
    82   rectified by enabling the \emph{Priority Inheritance Protocol} (PIP)
       
    83   \cite{Sha90}\footnote{Sha et al.~call it the \emph{Basic Priority
       
    84   Inheritance Protocol} \cite{Sha90} and others sometimes also call it
       
    85   \emph{Priority Boosting} or \emph{Priority Donation}.} in the scheduling software.
       
    86 
       
    87   The idea behind PIP is to let the thread $L$ temporarily inherit
       
    88   the high priority from $H$ until $L$ leaves the critical section
       
    89   unlocking the resource. This solves the problem of $H$ having to
       
    90   wait indefinitely, because $L$ cannot be blocked by threads having
       
    91   priority $M$. While a few other solutions exist for the Priority
       
    92   Inversion problem, PIP is one that is widely deployed and
       
    93   implemented. Examples include VxWorks (a proprietary real-time OS used
    94   in the Mars Pathfinder mission, in Boeing's 787 Dreamliner, Honda's
    95   ASIMO robot, etc.), as well as the POSIX 1003.1c Standard realised for
       
    96   example in libraries for FreeBSD, Solaris and Linux. 
       
    97 
       
    98   One advantage of PIP is that the increased priority of a thread
    99   can be calculated dynamically by the scheduler. This is in contrast
       
   100   to, for example, \emph{Priority Ceiling} \cite{Sha90}, another
       
   101   solution to the Priority Inversion problem, which requires static
       
   102   analysis of the program in order to prevent Priority
       
   103   Inversion. However, there has also been strong criticism against
       
   104   PIP. For instance, PIP cannot prevent deadlocks when lock
       
   105   dependencies are circular, and also blocking times can be
       
   106   substantial (more than just the duration of a critical section).
       
   107   However, most criticism against PIP centres around unreliable
       
   108   implementations and PIP being too complicated and too inefficient.
       
   109   For example, Yodaiken writes in \cite{Yodaiken02}:
       
   110 
       
   111   \begin{quote}
       
   112   \it{}``Priority inheritance is neither efficient nor reliable. Implementations
       
   113   are either incomplete (and unreliable) or surprisingly complex and intrusive.''
       
   114   \end{quote}
       
   115 
       
   116   \noindent
       
   117   He suggests avoiding PIP altogether by designing the system so that no 
       
   118   priority inversion may happen in the first place. However, such ideal designs may 
       
   119   not always be achievable in practice.
       
   120 
       
   121   In our opinion, there is clearly a need for investigating correct
       
   122   algorithms for PIP. A few specifications for PIP exist (in English)
       
   123   and also a few high-level descriptions of implementations (e.g.~in
       
   124   the textbook \cite[Section 5.6.5]{Vahalia96}), but they help little
       
   125   with actual implementations. That this is a problem in practice is
   126   shown by an email from Baker, who wrote on 13 July 2009 on the Linux
       
   127   Kernel mailing list:
       
   128 
       
   129   \begin{quote}
       
   130   \it{}``I observed in the kernel code (to my disgust), the Linux PIP
       
   131   implementation is a nightmare: extremely heavy weight, involving
       
   132   maintenance of a full wait-for graph, and requiring updates for a
       
   133   range of events, including priority changes and interruptions of
       
   134   wait operations.''
       
   135   \end{quote}
       
   136 
       
   137   \noindent
       
   138   The criticism by Yodaiken, Baker and others suggests another look
       
   139   at PIP from a more abstract level (but still concrete enough
       
   140   to inform an implementation), and makes PIP a good candidate for a
       
   141   formal verification. An additional reason is that the original
       
   142   presentation of PIP~\cite{Sha90}, despite being informally
       
   143   ``proved'' correct, is actually \emph{flawed}. 
       
   144 
       
   145   Yodaiken \cite{Yodaiken02} points to a subtlety that had been
       
   146   overlooked in the informal proof by Sha et al. They specify in
       
   147   \cite{Sha90} that after the thread (whose priority has been raised)
       
   148   completes its critical section and releases the lock, it ``returns
       
   149   to its original priority level.'' This leads them to believe that an
       
   150   implementation of PIP is ``rather straightforward''~\cite{Sha90}.
       
   151   Unfortunately, as Yodaiken points out, this behaviour is too
       
   152   simplistic.  Consider the case where the low priority thread $L$
       
   153   locks \emph{two} resources, and two high-priority threads $H$ and
       
   154   $H'$ each wait for one of them.  If $L$ releases one resource
       
   155   so that $H$, say, can proceed, then we still have Priority Inversion
       
   156   with $H'$ (which waits for the other resource). The correct
       
   157   behaviour for $L$ is to switch to the highest remaining priority of
       
   158   the threads that it blocks. The advantage of formalising the
       
   159   correctness of a high-level specification of PIP in a theorem prover
       
   160   is that such issues clearly show up and cannot be overlooked as in
       
   161   informal reasoning (since we have to analyse all possible behaviours
       
   162   of threads, i.e.~\emph{traces}).\medskip
       
   163 
       
   164   \noindent
       
   165   {\bf Contributions:} There have been earlier formal investigations
       
   166   into PIP \cite{Faria08,Jahier09,Wellings07}, but they employ model
       
   167   checking techniques. This paper presents a formalised and
       
   168   mechanically checked proof for the correctness of PIP (to our
       
   169   knowledge the first one).  In contrast to model checking, our
       
   170   formalisation provides insight into why PIP is correct and allows us
       
   171   to prove stronger properties that, as we will show, can help with an
       
   172   efficient implementation of PIP in the educational PINTOS operating
       
   173   system \cite{PINTOS}.  For example, we found by ``playing'' with the
       
   174   formalisation that the choice of the next thread to take over a lock
       
   175   when a resource is released is irrelevant for PIP being correct---a
       
   176   fact that has not been mentioned in the literature and not been used
       
   177   in the reference implementation of PIP in PINTOS.  This fact, however, is important
       
   178   for an efficient implementation of PIP, because we can give the lock
       
   179   to the thread with the highest priority so that it terminates more
       
   180   quickly.
       
   181 *}
       
   182 
       
   183 section {* Formal Model of the Priority Inheritance Protocol *}
       
   184 
       
   185 text {*
       
   186   The Priority Inheritance Protocol, short PIP, is a scheduling
       
   187   algorithm for a single-processor system.\footnote{We shall come back
       
   188   later to the case of PIP on multi-processor systems.} 
       
   189   Following good experience in earlier work \cite{Wang09},  
       
   190   our model of PIP is based on Paulson's inductive approach to protocol
       
   191   verification \cite{Paulson98}. In this approach a \emph{state} of a system is
       
   192   given by a list of events that happened so far (with new events prepended to the list). 
       
   193   \emph{Events} of PIP fall
       
   194   into five categories defined as the datatype:
       
   195 
       
   196   \begin{isabelle}\ \ \ \ \ %%%
       
   197   \mbox{\begin{tabular}{r@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {\hspace{7mm}}l}
       
   198   \isacommand{datatype} event 
       
   199   & @{text "="} & @{term "Create thread priority"}\\
       
   200   & @{text "|"} & @{term "Exit thread"} \\
       
   201   & @{text "|"} & @{term "Set thread priority"} & {\rm reset of the priority for} @{text thread}\\
       
   202   & @{text "|"} & @{term "P thread cs"} & {\rm request of resource} @{text "cs"} {\rm by} @{text "thread"}\\
       
   203   & @{text "|"} & @{term "V thread cs"} & {\rm release of resource} @{text "cs"} {\rm by} @{text "thread"}
       
   204   \end{tabular}}
       
   205   \end{isabelle}
       
   206 
       
   207   \noindent
       
   208   whereby threads, priorities and (critical) resources are represented
       
   209   as natural numbers. The event @{term Set} models the situation that
       
   210   a thread obtains a new priority given by the programmer or
       
   211   user (for example via the {\tt nice} utility under UNIX).  As in Paulson's work, we
       
   212   need to define functions that allow us to make some observations
       
   213   about states.  One, called @{term threads}, calculates the set of
       
   214   ``live'' threads that we have seen so far:
       
   215 
       
   216   \begin{isabelle}\ \ \ \ \ %%%
       
   217   \mbox{\begin{tabular}{lcl}
       
   218   @{thm (lhs) threads.simps(1)} & @{text "\<equiv>"} & 
       
   219     @{thm (rhs) threads.simps(1)}\\
       
   220   @{thm (lhs) threads.simps(2)[where thread="th"]} & @{text "\<equiv>"} & 
       
   221     @{thm (rhs) threads.simps(2)[where thread="th"]}\\
       
   222   @{thm (lhs) threads.simps(3)[where thread="th"]} & @{text "\<equiv>"} & 
       
   223     @{thm (rhs) threads.simps(3)[where thread="th"]}\\
       
   224   @{term "threads (DUMMY#s)"} & @{text "\<equiv>"} & @{term "threads s"}\\
       
   225   \end{tabular}}
       
   226   \end{isabelle}
       
   227 
       
   228   \noindent
       
   229   In this definition @{term "DUMMY # DUMMY"} stands for list-cons.
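  To give a concrete, executable reading of this recursion, the following
  Python sketch (ours, purely illustrative and not part of the formalisation)
  encodes events as tagged tuples and computes the set of live threads:

\begin{verbatim}
# A state is a list of events, newest first.  Events are encoded as tuples:
#   ("Create", th, prio), ("Exit", th), ("Set", th, prio),
#   ("P", th, cs), ("V", th, cs)
def threads(s):
    """Set of live threads after the events in s."""
    if not s:
        return set()
    e, rest = s[0], s[1:]
    if e[0] == "Create":
        return {e[1]} | threads(rest)
    if e[0] == "Exit":
        return threads(rest) - {e[1]}
    return threads(rest)
\end{verbatim}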
       
   230   Another function calculates the priority for a thread @{text "th"}, which is 
       
   231   defined as
       
   232 
       
   233   \begin{isabelle}\ \ \ \ \ %%%
       
   234   \mbox{\begin{tabular}{lcl}
       
   235   @{thm (lhs) original_priority.simps(1)[where thread="th"]} & @{text "\<equiv>"} & 
       
   236     @{thm (rhs) original_priority.simps(1)[where thread="th"]}\\
       
   237   @{thm (lhs) original_priority.simps(2)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} & 
       
   238     @{thm (rhs) original_priority.simps(2)[where thread="th" and thread'="th'"]}\\
       
   239   @{thm (lhs) original_priority.simps(3)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} & 
       
   240     @{thm (rhs) original_priority.simps(3)[where thread="th" and thread'="th'"]}\\
       
   241   @{term "original_priority th (DUMMY#s)"} & @{text "\<equiv>"} & @{term "original_priority th s"}\\
       
   242   \end{tabular}}
       
   243   \end{isabelle}
       
   244 
       
   245   \noindent
       
   246   In this definition we set @{text 0} as the default priority for
       
   247   threads that have not (yet) been created. The last function we need 
       
   248   calculates the ``time'', or index, at which a thread had its 
       
   249   priority last set.
       
   250 
       
   251   \begin{isabelle}\ \ \ \ \ %%%
       
   252   \mbox{\begin{tabular}{lcl}
       
   253   @{thm (lhs) birthtime.simps(1)[where thread="th"]} & @{text "\<equiv>"} & 
       
   254     @{thm (rhs) birthtime.simps(1)[where thread="th"]}\\
       
   255   @{thm (lhs) birthtime.simps(2)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} & 
       
   256     @{thm (rhs) birthtime.simps(2)[where thread="th" and thread'="th'"]}\\
       
   257   @{thm (lhs) birthtime.simps(3)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} & 
       
   258     @{thm (rhs) birthtime.simps(3)[where thread="th" and thread'="th'"]}\\
       
   259   @{term "birthtime th (DUMMY#s)"} & @{text "\<equiv>"} & @{term "birthtime th s"}\\
       
   260   \end{tabular}}
       
   261   \end{isabelle}
       
   262 
       
   263   \noindent
       
   264   In this definition @{term "length s"} stands for the length of the list
       
   265   of events @{text s}. Again the default value in this function is @{text 0}
       
   266   for threads that have not been created yet. A \emph{precedence} of a thread @{text th} in a 
       
   267   state @{text s} is the pair of natural numbers defined as
       
   268   
       
   269   \begin{isabelle}\ \ \ \ \ %%%
       
   270   @{thm preced_def[where thread="th"]}
       
   271   \end{isabelle}
       
   272 
       
   273   \noindent
       
   274   The point of precedences is to schedule threads not according to priorities (because it would be
   275   unclear what to do in case two threads have the same priority), but according to precedences. 
       
   276   Precedences allow us to always discriminate between two threads with equal priority by 
       
   277   taking into account the time when the priority was last set. We order precedences so 
       
   278   that threads with the same priority get a higher precedence if their priority has been 
       
   279   set earlier, since for such threads it is more urgent to finish their work. In an implementation
       
   280   this choice would translate to a quite natural FIFO-scheduling of processes with 
       
   281   the same priority.
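  The following Python sketch (again ours, only for illustration) shows one
  way to realise this ordering: precedences are encoded as pairs that compare
  lexicographically, with the priority first and the negated last-set index
  second, so that for equal priorities the thread whose priority was set
  earlier wins.

\begin{verbatim}
def original_priority(th, s):
    """Priority recorded by the most recent Create/Set event for th (default 0)."""
    for e in s:                                   # s is newest-first
        if e[0] in ("Create", "Set") and e[1] == th:
            return e[2]
    return 0

def birthtime(th, s):
    """Length of the older part of s at the last Create/Set event for th (default 0)."""
    for i, e in enumerate(s):
        if e[0] in ("Create", "Set") and e[1] == th:
            return len(s) - i - 1
    return 0

def precedence(th, s):
    # Higher priority wins; for equal priorities an earlier (smaller) last-set
    # index yields the larger pair.  This encoding is our choice; the
    # formalisation compares the pair Prc prio time directly.
    return (original_priority(th, s), -birthtime(th, s))
\end{verbatim}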
       
   282 
       
   283   Next, we introduce the concept of \emph{waiting queues}. They are
       
   284   lists of threads associated with every resource. The first thread in
       
   285   this list (i.e.~the head, or short @{term hd}) is chosen to be the one 
       
   286   that is in possession of the
       
   287   ``lock'' of the corresponding resource. We model waiting queues as
       
   288   functions, below abbreviated as @{text wq}. They take a resource as
       
   289   argument and return a list of threads.  This allows us to define
       
   290   when a thread \emph{holds}, respectively \emph{waits} for, a
       
   291   resource @{text cs} given a waiting queue function @{text wq}.
       
   292 
       
   293   \begin{isabelle}\ \ \ \ \ %%%
       
   294   \begin{tabular}{@ {}l}
       
   295   @{thm cs_holding_def[where thread="th"]}\\
       
   296   @{thm cs_waiting_def[where thread="th"]}
       
   297   \end{tabular}
       
   298   \end{isabelle}
       
   299 
       
   300   \noindent
       
   301   In this definition we assume @{text "set"} converts a list into a set.
       
   302   At the beginning, that is in the state where no thread is created yet, 
       
   303   the waiting queue function will be the function that returns the
       
   304   empty list for every resource.
       
   305 
       
   306   \begin{isabelle}\ \ \ \ \ %%%
       
   307   @{abbrev all_unlocked}\hfill\numbered{allunlocked}
       
   308   \end{isabelle}
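  A Python sketch of these definitions (ours, for illustration only; the
  waiting-queue function is modelled as a plain Python function from
  resources to lists of threads):

\begin{verbatim}
def all_unlocked(cs):
    """Initial waiting-queue function: every resource has an empty queue."""
    return []

def holds(wq, th, cs):
    """th holds cs iff it is the head of the queue for cs."""
    q = wq(cs)
    return bool(q) and q[0] == th

def waits(wq, th, cs):
    """th waits for cs iff it is in the queue for cs but not its head."""
    q = wq(cs)
    return th in q and q[0] != th
\end{verbatim}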
       
   309 
       
   310   \noindent
       
   311   Using @{term "holding"} and @{term waiting}, we can introduce \emph{Resource Allocation Graphs} 
       
   312   (RAG), which represent the dependencies between threads and resources.
       
   313   We represent RAGs as relations using pairs of the form
       
   314 
       
   315   \begin{isabelle}\ \ \ \ \ %%%
       
   316   @{term "(Th th, Cs cs)"} \hspace{5mm}{\rm and}\hspace{5mm}
       
   317   @{term "(Cs cs, Th th)"}
       
   318   \end{isabelle}
       
   319 
       
   320   \noindent
       
   321   where the first stands for a \emph{waiting edge} and the second for a 
       
   322   \emph{holding edge} (@{term Cs} and @{term Th} are constructors of a 
       
   323   datatype for vertices). Given a waiting queue function, a RAG is defined 
       
   324   as the union of the sets of waiting and holding edges, namely
       
   325 
       
   326   \begin{isabelle}\ \ \ \ \ %%%
       
   327   @{thm cs_depend_def}
       
   328   \end{isabelle}
       
   329 
       
   330   \noindent
       
   331   Given four threads and three resources, an instance of a RAG can be pictured 
       
   332   as follows:
       
   333 
       
   334   \begin{center}
       
   335   \newcommand{\fnt}{\fontsize{7}{8}\selectfont}
       
   336   \begin{tikzpicture}[scale=1]
       
   337   %%\draw[step=2mm] (-3,2) grid (1,-1);
       
   338 
       
   339   \node (A) at (0,0) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^isub>0"}};
       
   340   \node (B) at (2,0) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^isub>1"}};
       
   341   \node (C) at (4,0.7) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^isub>1"}};
       
   342   \node (D) at (4,-0.7) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^isub>2"}};
       
   343   \node (E) at (6,-0.7) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^isub>2"}};
       
   344   \node (E1) at (6, 0.2) [draw, circle, very thick, inner sep=0.4mm] {@{text "cs\<^isub>3"}};
       
   345   \node (F) at (8,-0.7) [draw, rounded corners=1mm, rectangle, very thick] {@{text "th\<^isub>3"}};
       
   346 
       
   347   \draw [<-,line width=0.6mm] (A) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding}  (B);
       
   348   \draw [->,line width=0.6mm] (C) to node [pos=0.4,sloped,above=-0.5mm] {\fnt{}waiting}  (B);
       
   349   \draw [->,line width=0.6mm] (D) to node [pos=0.4,sloped,below=-0.5mm] {\fnt{}waiting}  (B);
       
   350   \draw [<-,line width=0.6mm] (D) to node [pos=0.54,sloped,below=-0.5mm] {\fnt{}holding}  (E);
       
   351   \draw [<-,line width=0.6mm] (D) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding}  (E1);
       
   352   \draw [->,line width=0.6mm] (F) to node [pos=0.45,sloped,below=-0.5mm] {\fnt{}waiting}  (E);
       
   353   \end{tikzpicture}
       
   354   \end{center}
       
   355 
       
   356   \noindent
       
   357   The use of relations for representing RAGs allows us to conveniently define
       
   358   the notion of the \emph{dependants} of a thread using the transitive closure
       
   359   operation for relations. This gives
       
   360 
       
   361   \begin{isabelle}\ \ \ \ \ %%%
       
   362   @{thm cs_dependents_def}
       
   363   \end{isabelle}
       
   364 
       
   365   \noindent
       
   366   This definition needs to account for all threads that wait for a thread to
       
   367   release a resource. This means we need to include threads that transitively
       
   368   wait for a resource being released (in the picture above this means the dependants
       
   369   of @{text "th\<^isub>0"} are @{text "th\<^isub>1"} and @{text "th\<^isub>2"}, which wait for resource @{text "cs\<^isub>1"}, 
       
   370   but also @{text "th\<^isub>3"}, 
       
   371   which cannot make any progress unless @{text "th\<^isub>2"} makes progress, which
       
   372   in turn needs to wait for @{text "th\<^isub>0"} to finish). If there is a circle of dependencies 
       
   373   in a RAG, then clearly
       
   374   we have a deadlock. Therefore when a thread requests a resource,
       
   375   we must ensure that the resulting RAG is not circular. In practice, the 
       
   376   programmer has to ensure this.
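  The RAG and the dependants of a thread can be sketched in Python as follows
  (ours, for illustration; we pass the finite set of resources explicitly and
  encode vertices as tagged pairs):

\begin{verbatim}
def rag(wq, resources):
    """Waiting edges ("T",th)->("C",cs) and holding edges ("C",cs)->("T",th)."""
    edges = set()
    for cs in resources:
        q = wq(cs)
        if q:
            edges.add((("C", cs), ("T", q[0])))                    # holding edge
            edges.update((("T", th), ("C", cs)) for th in q[1:])   # waiting edges
    return edges

def dependants(wq, resources, th):
    """Threads with a non-empty path to ("T", th) in the RAG (transitive closure)."""
    edges = rag(wq, resources)
    seen, frontier = set(), {("T", th)}
    while frontier:                       # walk the edges backwards from th
        new = {src for (src, dst) in edges if dst in frontier} - seen
        seen |= new
        frontier = new
    return {x for (kind, x) in seen if kind == "T"}

# The RAG pictured above:
example_wq = lambda cs: {"cs1": ["th0", "th1", "th2"],
                         "cs2": ["th2", "th3"], "cs3": ["th2"]}.get(cs, [])
assert dependants(example_wq, ["cs1", "cs2", "cs3"], "th0") == {"th1", "th2", "th3"}
\end{verbatim}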
       
   377 
       
   378 
       
   379   Next we introduce the notion of the \emph{current precedence} of a thread @{text th} in a 
       
   380   state @{text s}. It is defined as
       
   381 
       
   382   \begin{isabelle}\ \ \ \ \ %%%
       
   383   @{thm cpreced_def2}\hfill\numbered{cpreced}
       
   384   \end{isabelle}
       
   385 
       
   386   \noindent
       
   387   where the dependants of @{text th} are given by the waiting queue function.
       
   388   While the precedence @{term prec} of a thread is determined statically 
       
   389   (for example when the thread is
       
   390   created), the point of the current precedence is to let the scheduler increase this
       
   391   precedence, if needed according to PIP. Therefore the current precedence of @{text th} is
       
   392   given as the maximum of the precedence @{text th} has in state @{text s} \emph{and} all 
       
   393   threads that are dependants of @{text th}. Since the notion @{term "dependants"} is
       
   394   defined as the transitive closure of all dependent threads, we deal correctly with the 
       
   395   problem in the informal algorithm by Sha et al.~\cite{Sha90} where a priority of a thread is
       
   396   lowered prematurely.
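  Continuing the Python sketch (with @{text "precedence"} and @{text "dependants"}
  as above), the current precedence can be computed as:

\begin{verbatim}
def cpreced(wq, resources, s, th):
    """Maximum of th's own precedence and the precedences of all its dependants."""
    precs = [precedence(th, s)]
    precs += [precedence(d, s) for d in dependants(wq, resources, th)]
    return max(precs)
\end{verbatim}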
       
   397   
       
   398   The next function, called @{term schs}, defines the behaviour of the scheduler. It will be defined
       
   399   by recursion on the state (a list of events); this function returns a \emph{schedule state}, which 
       
   400   we represent as a record consisting of two
       
   401   functions:
       
   402 
       
   403   \begin{isabelle}\ \ \ \ \ %%%
       
   404   @{text "\<lparr>wq_fun, cprec_fun\<rparr>"}
       
   405   \end{isabelle}
       
   406 
       
   407   \noindent
       
   408   The first function is a waiting queue function (that is, it takes a
       
   409   resource @{text "cs"} and returns the corresponding list of threads
       
   410   that lock, respectively wait for, it); the second is a function that
       
   411   takes a thread and returns its current precedence (see
       
   412   the definition in \eqref{cpreced}). We assume the usual getter and setter methods for
       
   413   such records.
       
   414 
       
   415   In the initial state, the scheduler starts with all resources unlocked (the corresponding 
       
   416   function is defined in \eqref{allunlocked}) and the
       
   417   current precedence of every thread is initialised with @{term "Prc 0 0"}; that means 
       
   418   \mbox{@{abbrev initial_cprec}}. Therefore
       
   419   we have for the initial schedule state
       
   420 
       
   421   \begin{isabelle}\ \ \ \ \ %%%
       
   422   \begin{tabular}{@ {}l}
       
   423   @{thm (lhs) schs.simps(1)} @{text "\<equiv>"}\\ 
       
   424   \hspace{5mm}@{term "(|wq_fun = all_unlocked, cprec_fun = (\<lambda>_::thread. Prc 0 0)|)"}
       
   425   \end{tabular}
       
   426   \end{isabelle}
       
   427 
       
   428   \noindent
       
   429   The cases for @{term Create}, @{term Exit} and @{term Set} are also straightforward:
       
   430   we calculate the waiting queue function of the (previous) state @{text s}; 
       
   431   this waiting queue function @{text wq} is unchanged in the next schedule state---because
       
   432   none of these events lock or release any resource; 
       
   433   for calculating the next @{term "cprec_fun"}, we use @{text wq} and 
       
   434   @{term cpreced}. This gives the following three clauses for @{term schs}:
       
   435 
       
   436   \begin{isabelle}\ \ \ \ \ %%%
       
   437   \begin{tabular}{@ {}l}
       
   438   @{thm (lhs) schs.simps(2)} @{text "\<equiv>"}\\ 
       
   439   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
       
   440   \hspace{8mm}@{term "(|wq_fun = wq\<iota>, cprec_fun = cpreced wq\<iota> (Create th prio # s)|)"}\smallskip\\
       
   441   @{thm (lhs) schs.simps(3)} @{text "\<equiv>"}\\
       
   442   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
       
   443   \hspace{8mm}@{term "(|wq_fun = wq\<iota>, cprec_fun = cpreced wq\<iota> (Exit th # s)|)"}\smallskip\\
       
   444   @{thm (lhs) schs.simps(4)} @{text "\<equiv>"}\\ 
       
   445   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
       
   446   \hspace{8mm}@{term "(|wq_fun = wq\<iota>, cprec_fun = cpreced wq\<iota> (Set th prio # s)|)"}
       
   447   \end{tabular}
       
   448   \end{isabelle}
       
   449 
       
   450   \noindent 
       
   451   More interesting are the cases where a resource, say @{text cs}, is locked or released. In these cases
       
   452   we need to calculate a new waiting queue function. For the event @{term "P th cs"}, we have to update
       
   453   the function so that the new thread list for @{text cs} is the old thread list with the thread @{text th} 
   454   appended to its end (recall that the head of this list is the thread in possession of this
   455   resource). This gives the clause
       
   456 
       
   457   \begin{isabelle}\ \ \ \ \ %%%
       
   458   \begin{tabular}{@ {}l}
       
   459   @{thm (lhs) schs.simps(5)} @{text "\<equiv>"}\\ 
       
   460   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
       
   461   \hspace{5mm}@{text "let"} @{text "new_wq = wq(cs := (wq cs @ [th]))"} @{text "in"}\\
       
   462   \hspace{8mm}@{term "(|wq_fun = new_wq, cprec_fun = cpreced new_wq (P th cs # s)|)"}
       
   463   \end{tabular}
       
   464   \end{isabelle}
       
   465 
       
   466   \noindent
       
   467   The clause for event @{term "V th cs"} is similar, except that we need to update the waiting queue function
       
   468   so that the thread that possessed the lock is deleted from the corresponding thread list. For this 
       
   469   list transformation, we use
       
   470   the auxiliary function @{term release}. A simple version of @{term release} would
       
   471   just delete this thread and return the remaining threads, namely
       
   472 
       
   473   \begin{isabelle}\ \ \ \ \ %%%
       
   474   \begin{tabular}{@ {}lcl}
       
   475   @{term "release []"} & @{text "\<equiv>"} & @{term "[]"}\\
       
   476   @{term "release (DUMMY # qs)"} & @{text "\<equiv>"} & @{term "qs"}\\
       
   477   \end{tabular}
       
   478   \end{isabelle}
       
   479 
       
   480   \noindent
       
   481   In practice, however, often the thread with the highest precedence in the list will get the
       
   482   lock next. We have implemented this choice, but later found out that the choice 
       
   483   of which thread is chosen next is actually irrelevant for the correctness of PIP.
       
   484   Therefore we prove the stronger result where @{term release} is defined as
       
   485 
       
   486   \begin{isabelle}\ \ \ \ \ %%%
       
   487   \begin{tabular}{@ {}lcl}
       
   488   @{term "release []"} & @{text "\<equiv>"} & @{term "[]"}\\
       
   489   @{term "release (DUMMY # qs)"} & @{text "\<equiv>"} & @{term "SOME qs'. distinct qs' \<and> set qs' = set qs"}\\
       
   490   \end{tabular}
       
   491   \end{isabelle}
       
   492 
       
   493   \noindent
       
   494   where @{text "SOME"} stands for Hilbert's epsilon and implements an arbitrary
       
   495   choice for the next waiting list. It just has to be a list of distinct threads and
       
   496   contain the same elements as @{text "qs"}. This gives for @{term V} the clause:
       
   497  
       
   498   \begin{isabelle}\ \ \ \ \ %%%
       
   499   \begin{tabular}{@ {}l}
       
   500   @{thm (lhs) schs.simps(6)} @{text "\<equiv>"}\\
       
   501   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
       
   502   \hspace{5mm}@{text "let"} @{text "new_wq = release (wq cs)"} @{text "in"}\\
       
   503   \hspace{8mm}@{term "(|wq_fun = new_wq, cprec_fun = cpreced new_wq (V th cs # s)|)"}
       
   504   \end{tabular}
       
   505   \end{isabelle}
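  Putting the pieces of the Python sketch together, one step of the scheduler
  might look as follows (ours, for illustration; queues are represented as a
  dictionary from resources to thread lists, and the nondeterministic choice
  expressed by @{text "SOME"} is modelled by an arbitrary permutation):

\begin{verbatim}
import random

def release(q):
    """Drop the current holder; any ordering of the remaining threads is allowed."""
    rest = list(q[1:])
    random.shuffle(rest)
    return rest

def schs_step(queues, event, s, resources):
    """queues: dict resource -> thread list; s: the new state with `event` prepended."""
    new_queues = dict(queues)
    if event[0] == "P":
        _, th, cs = event
        new_queues[cs] = queues.get(cs, []) + [th]     # th joins the end of the queue
    elif event[0] == "V":
        _, th, cs = event
        new_queues[cs] = release(queues.get(cs, []))   # the holder gives up the lock
    # Create, Exit and Set leave the waiting queues unchanged.
    wq_fun = lambda cs: new_queues.get(cs, [])
    cprec_fun = lambda th: cpreced(wq_fun, resources, s, th)
    return new_queues, cprec_fun
\end{verbatim}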
       
   506 
       
   507   Having the scheduler function @{term schs} at our disposal, we can ``lift'', or
       
   508   overload, the notions
       
   509   @{term waiting}, @{term holding}, @{term depend} and @{term cp} to operate on states only.
       
   510 
       
   511   \begin{isabelle}\ \ \ \ \ %%%
       
   512   \begin{tabular}{@ {}rcl}
       
   513   @{thm (lhs) s_holding_abv} & @{text "\<equiv>"} & @{thm (rhs) s_holding_abv}\\
       
   514   @{thm (lhs) s_waiting_abv} & @{text "\<equiv>"} & @{thm (rhs) s_waiting_abv}\\
       
   515   @{thm (lhs) s_depend_abv}  & @{text "\<equiv>"} & @{thm (rhs) s_depend_abv}\\
       
   516   @{thm (lhs) cp_def}        & @{text "\<equiv>"} & @{thm (rhs) cp_def}
       
   517   \end{tabular}
       
   518   \end{isabelle}
       
   519 
       
   520   \noindent
       
   521   With these abbreviations in place we can introduce 
       
   522   the notion of a thread being @{term ready} in a state (i.e.~threads
       
   523   that do not wait for any resource) and the running thread.
       
   524 
       
   525   \begin{isabelle}\ \ \ \ \ %%%
       
   526   \begin{tabular}{@ {}l}
       
   527   @{thm readys_def}\\
       
   528   @{thm runing_def}
       
   529   \end{tabular}
       
   530   \end{isabelle}
       
   531 
       
   532   \noindent
       
   533   In the second definition @{term "DUMMY ` DUMMY"} stands for the image of a set under a function.
       
   534   Note that in the initial state, that is where the list of events is empty, the set 
       
   535   @{term threads} is empty and therefore there is neither a thread ready nor running.
       
   536   If one or more threads are ready, then there can only be \emph{one} thread
   537   running, namely the one whose current precedence is equal to the maximum of the current precedences of all ready 
       
   538   threads. We use sets to capture both possibilities.
       
   539   We can now also conveniently define the set of resources that are locked by a thread in a
       
   540   given state and also when a thread is detached in a state (meaning the thread neither 
       
   541   holds nor waits for a resource):
       
   542 
       
   543   \begin{isabelle}\ \ \ \ \ %%%
       
   544   \begin{tabular}{@ {}l}
       
   545   @{thm holdents_def}\\
       
   546   @{thm detached_def}
       
   547   \end{tabular}
       
   548   \end{isabelle}
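  In the Python sketch these notions correspond to the following (again purely
  illustrative, reusing the helper functions from above and passing the waiting
  queues and the finite set of resources of the state explicitly):

\begin{verbatim}
def readys(s, wq, resources):
    """Live threads that are not waiting for any resource."""
    return {th for th in threads(s)
            if not any(waits(wq, th, cs) for cs in resources)}

def running(s, wq, resources, cp):
    """Ready threads whose current precedence is maximal among the ready ones."""
    rd = readys(s, wq, resources)
    return {th for th in rd if cp(th) == max(cp(t) for t in rd)} if rd else set()

def holdents(wq, resources, th):
    """Resources currently locked by th."""
    return {cs for cs in resources if holds(wq, th, cs)}

def detached(wq, resources, th):
    """th neither holds nor waits for any resource."""
    return not holdents(wq, resources, th) and \
           not any(waits(wq, th, cs) for cs in resources)
\end{verbatim}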
       
   549 
       
   550   %\noindent
       
   551   %The second definition states that @{text th}  in @{text s}.
       
   552   
       
   553   Finally we can define what a \emph{valid state} is in our model of PIP. For
       
   554   example we cannot expect to be able to exit a thread, if it was not
       
   555   created yet. 
       
   556   These validity constraints on states are characterised by the
       
   557   inductive predicate @{term "step"} and @{term vt}. We first give five inference rules
       
   558   for @{term step} relating a state and an event that can happen next.
       
   559 
       
   560   \begin{center}
       
   561   \begin{tabular}{c}
       
   562   @{thm[mode=Rule] thread_create[where thread=th]}\hspace{1cm}
       
   563   @{thm[mode=Rule] thread_exit[where thread=th]}
       
   564   \end{tabular}
       
   565   \end{center}
       
   566 
       
   567   \noindent
       
   568   The first rule states that a thread can only be created if it is not alive yet.
   569   Similarly, the second rule states that a thread can only be terminated if it was
   570   running and no longer holds any resources (this slightly simplifies our model;
   571   in practice we would expect that the operating system releases all locks held by a
       
   572   thread that is about to exit). The event @{text Set} can happen
       
   573   if the corresponding thread is running. 
       
   574 
       
   575   \begin{center}
       
   576   @{thm[mode=Rule] thread_set[where thread=th]}
       
   577   \end{center}
       
   578 
       
   579   \noindent
       
   580   If a thread wants to lock a resource, then the thread needs to be
       
   581   running and also we have to make sure that the resource lock does
       
   582   not lead to a cycle in the RAG. In practice, ensuring the latter
       
   583   is the responsibility of the programmer.  In our formal
       
   584   model we brush aside these problematic cases in order to be able to make
       
   585   some meaningful statements about PIP.\footnote{This situation is
       
   586   similar to the infamous \emph{occurs check} in Prolog: In order to say
       
   587   anything meaningful about unification, one needs to perform an occurs
       
   588   check. But in practice the occurs check is omitted and the
       
   589   responsibility for avoiding problems rests with the programmer.}
       
   590 
       
   591  
       
   592   \begin{center}
       
   593   @{thm[mode=Rule] thread_P[where thread=th]}
       
   594   \end{center}
       
   595  
       
   596   \noindent
       
   597   Similarly, if a thread wants to release a lock on a resource, then
       
   598   it must be running and in the possession of that lock. This is
       
   599   formally given by the last inference rule of @{term step}.
       
   600  
       
   601   \begin{center}
       
   602   @{thm[mode=Rule] thread_V[where thread=th]}
       
   603   \end{center}
       
   604 
       
   605   \noindent
       
   606   A valid state of PIP can then be conveniently defined as follows:
       
   607 
       
   608   \begin{center}
       
   609   \begin{tabular}{c}
       
   610   @{thm[mode=Axiom] vt_nil}\hspace{1cm}
       
   611   @{thm[mode=Rule] vt_cons}
       
   612   \end{tabular}
       
   613   \end{center}
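  The side conditions of these rules can be paraphrased in the Python sketch
  as an admissibility check for a single event (ours and only approximate; the
  precise conditions are the inference rules above):

\begin{verbatim}
def reachable(edges, src):
    """All vertices reachable from src by following edges forwards."""
    seen, frontier = set(), {src}
    while frontier:
        new = {dst for (a, dst) in edges if a in frontier} - seen
        seen |= new
        frontier = new
    return seen

def step_ok(s, wq, resources, cp, event):
    run = running(s, wq, resources, cp)
    kind = event[0]
    if kind == "Create":
        return event[1] not in threads(s)
    if kind == "Exit":
        return event[1] in run and not holdents(wq, resources, event[1])
    if kind == "Set":
        return event[1] in run
    if kind == "P":
        _, th, cs = event
        # the request must not close a cycle in the RAG
        return th in run and ("T", th) not in reachable(rag(wq, resources), ("C", cs))
    if kind == "V":
        _, th, cs = event
        return th in run and holds(wq, th, cs)
    return False

# A trace is valid (vt) when, starting from the empty state, every event in
# turn satisfies step_ok for the schedule state built from the earlier events.
\end{verbatim}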
       
   614 
       
   615   \noindent
       
   616   This completes our formal model of PIP. In the next section we present
       
   617   properties that show our model of PIP is correct.
       
   618 *}
       
   619 
       
   620 section {* The Correctness Proof *}
       
   621 
       
   622 (*<*)
       
   623 context extend_highest_gen
       
   624 begin
       
   625 (*>*)
       
   626 text {* 
       
   627   Sha et al.~state their first correctness criterion for PIP in terms
       
   628   of the number of low-priority threads \cite[Theorem 3]{Sha90}: if
       
   629   there are @{text n} low-priority threads, then a blocked job with
       
   630   high priority can only be blocked a maximum of @{text n} times.
       
   631   Their second correctness criterion is given
       
   632   in terms of the number of critical resources \cite[Theorem 6]{Sha90}: if there are
       
   633   @{text m} critical resources, then a blocked job with high priority
       
   634   can only be blocked a maximum of @{text m} times. Both results on their own, strictly speaking, do
       
   635   \emph{not} prevent indefinite, or unbounded, Priority Inversion,
       
   636   because if a low-priority thread does not give up its critical
       
   637   resource (the one the high-priority thread is waiting for), then the
       
   638   high-priority thread can never run.  The argument of Sha et al.~is
       
   639   that \emph{if} threads release locked resources in a finite amount
       
   640   of time, then indefinite Priority Inversion cannot occur---the high-priority
       
   641   thread is guaranteed to run eventually. The assumption is that
       
   642   programmers must ensure that threads are programmed in this way.  However, even
       
   643   taking this assumption into account, the correctness properties of
       
   644   Sha et al.~are
       
   645   \emph{not} true for their version of PIP---despite being ``proved''. As Yodaiken
       
   646   \cite{Yodaiken02} pointed out: If a low-priority thread possesses
       
   647   locks on two resources for which two high-priority threads are
   648   waiting, then lowering the priority prematurely after giving up
   649   only one lock can cause indefinite Priority Inversion for one of the
       
   650   high-priority threads, invalidating their two bounds.
       
   651 
       
   652   Even when fixed, their proof idea does not seem to go through for
       
   653   us, because of the way we have set up our formal model of PIP.  One
       
   654   reason is that we allow critical sections, which start with a @{text P}-event
       
   655   and finish with a corresponding @{text V}-event, to arbitrarily overlap
       
   656   (something Sha et al.~explicitly exclude).  Therefore we have
       
   657   designed a different correctness criterion for PIP. The idea behind
       
   658   our criterion is as follows: for all states @{text s}, we know the
       
   659   corresponding thread @{text th} with the highest precedence; we show
       
   660   that in every future state (denoted by @{text "s' @ s"}) in which
       
   661   @{text th} is still alive, either @{text th} is running or it is
       
   662   blocked by a thread that was alive in the state @{text s} and was waiting 
       
   663   for or in the possession of a lock in @{text s}. Since in @{text s}, as in
       
   664   every state, the set of alive threads is finite, @{text th} can only
       
   665   be blocked a finite number of times. This is independent of how many
       
   666   threads of lower priority are created in @{text "s'"}. We will actually prove a
       
   667   stronger statement where we also provide the current precedence of
       
   668   the blocking thread. However, this correctness criterion hinges upon
       
   669   a number of assumptions about the states @{text s} and @{text "s' @
       
   670   s"}, the thread @{text th} and the events happening in @{text
       
   671   s'}. We list them next:
       
   672 
       
   673   \begin{quote}
       
   674   {\bf Assumptions on the states {\boldmath@{text s}} and 
       
   675   {\boldmath@{text "s' @ s"}:}} We need to require that @{text "s"} and 
       
   676   @{text "s' @ s"} are valid states:
       
   677   \begin{isabelle}\ \ \ \ \ %%%
       
   678   \begin{tabular}{l}
       
   679   @{term "vt s"}, @{term "vt (s' @ s)"} 
       
   680   \end{tabular}
       
   681   \end{isabelle}
       
   682   \end{quote}
       
   683 
       
   684   \begin{quote}
       
   685   {\bf Assumptions on the thread {\boldmath@{text "th"}:}} 
       
   686   The thread @{text th} must be alive in @{text s} and 
       
   687   have the highest precedence of all alive threads in @{text s}. Furthermore the
       
   688   priority of @{text th} is @{text prio} (we need this in the next assumptions).
       
   689   \begin{isabelle}\ \ \ \ \ %%%
       
   690   \begin{tabular}{l}
       
   691   @{term "th \<in> threads s"}\\
       
   692   @{term "prec th s = Max (cprec s ` threads s)"}\\
       
   693   @{term "prec th s = (prio, DUMMY)"}
       
   694   \end{tabular}
       
   695   \end{isabelle}
       
   696   \end{quote}
       
   697   
       
   698   \begin{quote}
       
   699   {\bf Assumptions on the events in {\boldmath@{text "s'"}:}} We want to prove that @{text th} cannot
       
   700   be blocked indefinitely. Of course this can happen if threads with higher priority
       
   701   than @{text th} are continuously created in @{text s'}. Therefore we have to assume that  
       
   702   events in @{text s'} can only create (respectively set) threads with equal or lower 
       
   703   priority than @{text prio} of @{text th}. We also need to assume that the
       
   704   priority of @{text "th"} does not get reset and also that @{text th} does
       
   705   not get ``exited'' in @{text "s'"}. This can be ensured by assuming the following three implications. 
       
   706   \begin{isabelle}\ \ \ \ \ %%%
       
   707   \begin{tabular}{l}
       
   708   {If}~~@{text "Create th' prio' \<in> set s'"}~~{then}~~@{text "prio' \<le> prio"}\\
       
   709   {If}~~@{text "Set th' prio' \<in> set s'"}~~{then}~~@{text "th' \<noteq> th"}~~{and}~~@{text "prio' \<le> prio"}\\
       
   710   {If}~~@{text "Exit th' \<in> set s'"}~~{then}~~@{text "th' \<noteq> th"}\\
       
   711   \end{tabular}
       
   712   \end{isabelle}
       
   713   \end{quote}
       
   714 
       
   715   \noindent
       
   716   The locale mechanism of Isabelle helps us to manage conveniently such assumptions~\cite{Haftmann08}.
       
   717   Under these assumptions we shall prove the following correctness property:
       
   718 
       
   719   \begin{theorem}\label{mainthm}
       
   720   Given the assumptions about states @{text "s"} and @{text "s' @ s"},
       
   721   the thread @{text th} and the events in @{text "s'"},
       
   722   if @{term "th' \<in> running (s' @ s)"} and @{text "th' \<noteq> th"} then
       
   723   @{text "th' \<in> threads s"}, @{text "\<not> detached s th'"} and 
       
   724   @{term "cp (s' @ s) th' = prec th s"}.
       
   725   \end{theorem}
       
   726 
       
   727   \noindent
       
   728   This theorem ensures that the thread @{text th}, which has the
       
   729   highest precedence in the state @{text s}, can only be blocked in
       
   730   the state @{text "s' @ s"} by a thread @{text th'} that already
       
   731   existed in @{text s} and requested or had a lock on at least 
       
   732   one resource---that means the thread was not \emph{detached} in @{text s}. 
       
   733   As we shall see shortly, that means there are only finitely 
       
   734   many threads that can block @{text th} in this way and then they 
       
   735   need to run with the same current precedence as @{text th}.
       
   736 
       
   737   As in the argument by Sha et al.~our
       
   738   finite bound does not guarantee absence of indefinite Priority
       
   739   Inversion. For this we further have to assume that every thread
       
   740   gives up its resources after a finite amount of time. We found that
       
   741   this assumption is awkward to formalise in our model. Therefore we
       
   742   leave it out and let the programmer assume the responsibility to
       
   743   program threads in such a benign manner (in addition to causing no 
       
   744   circularity in the @{text RAG}). In this detail, we do not
       
   745   make any progress in comparison with the work by Sha et al.
       
   746   However, we are able to combine their two separate bounds into a
       
   747   single theorem improving their bound.
       
   748 
       
   749   In what follows we will describe properties of PIP that allow us to prove 
       
   750   Theorem~\ref{mainthm} and, when instructive, briefly describe our argument. 
       
   751   It is relatively easy to see that
       
   752 
       
   753   \begin{isabelle}\ \ \ \ \ %%%
       
   754   \begin{tabular}{@ {}l}
       
   755   @{text "running s \<subseteq> ready s \<subseteq> threads s"}\\
       
   756   @{thm[mode=IfThen]  finite_threads}
       
   757   \end{tabular}
       
   758   \end{isabelle}
       
   759 
       
   760   \noindent
       
   761   The second property follows by induction on @{term vt}. The next three
       
   762   properties are 
       
   763 
       
   764   \begin{isabelle}\ \ \ \ \ %%%
       
   765   \begin{tabular}{@ {}l}
       
   766   @{thm[mode=IfThen] waiting_unique[of _ _ "cs\<^isub>1" "cs\<^isub>2"]}\\
       
   767   @{thm[mode=IfThen] held_unique[of _ "th\<^isub>1" _ "th\<^isub>2"]}\\
       
   768   @{thm[mode=IfThen] runing_unique[of _ "th\<^isub>1" "th\<^isub>2"]}
       
   769   \end{tabular}
       
   770   \end{isabelle}
       
   771 
       
   772   \noindent
       
   773   The first property states that every waiting thread can only wait for a single
       
   774   resource (because it gets suspended after requesting that resource); the second 
       
   775   that every resource can only be held by a single thread; 
       
   776   the third property establishes that in every given valid state, there is
       
   777   at most one running thread. We can also show the following properties 
       
   778   about the @{term RAG} in @{text "s"}.
       
   779 
       
   780   \begin{isabelle}\ \ \ \ \ %%%
       
   781   \begin{tabular}{@ {}l}
       
   782   @{text If}~@{thm (prem 1) acyclic_depend}~@{text "then"}:\\
       
   783   \hspace{5mm}@{thm (concl) acyclic_depend},
       
   784   @{thm (concl) finite_depend} and
       
   785   @{thm (concl) wf_dep_converse},\\
       
   786   \hspace{5mm}@{text "if"}~@{thm (prem 2) dm_depend_threads}~@{text "then"}~@{thm (concl) dm_depend_threads}
       
   787   and\\
       
   788   \hspace{5mm}@{text "if"}~@{thm (prem 2) range_in}~@{text "then"}~@{thm (concl) range_in}.
       
   789   \end{tabular}
       
   790   \end{isabelle}
       
   791 
       
   792   \noindent
       
   793   The acyclicity property follows from how we restricted the events in
       
   794   @{text step}; similarly for the finiteness and well-foundedness properties.
       
   795   The last two properties establish that every thread in a @{text "RAG"}
       
   796   (either holding or waiting for a resource) is a live thread.
       
   797 
       
   798   The key lemma in our proof of Theorem~\ref{mainthm} is as follows:
       
   799 
       
   800   \begin{lemma}\label{mainlem}
       
   801   Given the assumptions about states @{text "s"} and @{text "s' @ s"},
       
   802   the thread @{text th} and the events in @{text "s'"},
       
   803   if @{term "th' \<in> threads (s' @ s)"}, @{text "th' \<noteq> th"} and @{text "detached (s' @ s) th'"}\\
       
   804   then @{text "th' \<notin> running (s' @ s)"}.
       
   805   \end{lemma}
       
   806 
       
   807   \noindent
       
   808   The point of this lemma is that a thread different from @{text th} (which has the highest
       
   809   precedence in @{text s}) and not holding any resource, cannot be running 
       
   810   in the state @{text "s' @ s"}.
       
   811 
       
   812   \begin{proof}
       
   813   Since thread @{text "th'"} does not hold any resource, no thread can depend on it. 
       
   814   Therefore its current precedence @{term "cp (s' @ s) th'"} equals its own precedence
       
   815   @{term "prec th' (s' @ s)"}. Since @{text "th"} has the highest precedence in the 
       
   816   state @{text "(s' @ s)"} and precedences are distinct among threads, we have
       
   817   @{term "prec th' (s' @ s) < prec th (s' @ s)"}. From this 
       
   818   we have @{term "cp (s' @ s) th' < prec th (s' @ s)"}.
       
   819   Since @{text "prec th (s' @ s)"} is already the highest, 
   820   @{term "cp (s' @ s) th"} cannot be higher than this and cannot be lower either (by 
       
   821   definition of @{term "cp"}). Consequently, we have @{term "prec th (s' @ s) = cp (s' @ s) th"}.
       
   822   Finally we have @{term "cp (s' @ s) th' < cp (s' @ s) th"}.
       
   823   By definition of @{text "running"}, @{text "th'"} cannot be running in state
       
   824   @{text "s' @ s"}, as we had to show.\qed
       
   825   \end{proof}
       
   826 
       
   827   \noindent
       
   828   Since @{text "th'"} is not able to run in state @{text "s' @ s"}, it is not able to 
       
   829   issue a @{text "P"} or @{text "V"} event. Therefore if @{text "s' @ s"} is extended
       
   830   one step further, @{text "th'"} still cannot hold any resource. The situation will 
       
   831   not change in further extensions as long as @{text "th"} holds the highest precedence.
       
   832 
       
   833   From this lemma we can deduce Theorem~\ref{mainthm}: that @{text th} can only be 
       
   834   blocked by a thread @{text th'} that
       
   835   held some resource in state @{text s} (that is, a thread that is not @{text "detached"}), and furthermore
       
   836   that the current precedence of @{text th'} in state @{text "(s' @ s)"} must be equal to the 
       
   837   precedence of @{text th} in @{text "s"}.
       
   838   We show this theorem by induction on @{text "s'"} using Lemma~\ref{mainlem}.
       
   839   This theorem gives a stricter bound on the threads that can block @{text th} than the
       
   840   one obtained by Sha et al.~\cite{Sha90}:
       
   841   only threads that were alive in state @{text s} and moreover held a resource.
       
   842   This means our bound is in terms of both the alive threads in state @{text s}
   843   and the number of critical resources. Finally, the theorem establishes that the blocking threads have their
       
   844   current precedence raised to the precedence of @{text th}.
       
   845 
       
   846   We can furthermore prove that under our assumptions no deadlock exists in the state @{text "s' @ s"}
       
   847   by showing that @{text "running (s' @ s)"} is not empty.
       
   848 
       
   849   \begin{lemma}
       
   850   Given the assumptions about states @{text "s"} and @{text "s' @ s"},
       
   851   the thread @{text th} and the events in @{text "s'"},
       
   852   @{term "running (s' @ s) \<noteq> {}"}.
       
   853   \end{lemma}
       
   854 
       
   855   \begin{proof}
       
   856   If @{text th} is blocked, then by following the chain of waiting and holding edges in the RAG, we can always 
       
   857   reach a ready thread @{text th'}, and that thread must have inherited the 
       
   858   precedence of @{text th}.\qed
       
   859   \end{proof}
       
   860 
       
   861 
       
   862   %The following lemmas show how every node in RAG can be chased to ready threads:
       
   863   %\begin{enumerate}
       
   864   %\item Every node in RAG can be chased to a ready thread (@{text "chain_building"}):
       
   865   %  @   {thm [display] chain_building[rule_format]}
       
   866   %\item The ready thread chased to is unique (@{text "dchain_unique"}):
       
   867   %  @   {thm [display] dchain_unique[of _ _ "th\<^isub>1" "th\<^isub>2"]}
       
   868   %\end{enumerate}
       
   869 
       
   870   %Some deeper results about the system:
       
   871   %\begin{enumerate}
       
   872   %\item The maximum of @{term "cp"} and @{term "preced"} are equal (@{text "max_cp_eq"}):
       
   873   %@  {thm [display] max_cp_eq}
       
   874   %\item There must be one ready thread having the max @{term "cp"}-value 
       
   875   %(@{text "max_cp_readys_threads"}):
       
   876   %@  {thm [display] max_cp_readys_threads}
       
   877   %\end{enumerate}
       
   878 
       
   879   %The relationship between the count of @{text "P"} and @{text "V"} and the number of 
       
   880   %critical resources held by a thread is given as follows:
       
   881   %\begin{enumerate}
       
   882   %\item The @{term "V"}-operation decreases the number of critical resources 
       
   883   %  one thread holds (@{text "cntCS_v_dec"})
       
   884   %   @  {thm [display]  cntCS_v_dec}
       
   885   %\item The number of @{text "V"} never exceeds the number of @{text "P"} 
       
   886   %  (@  {text "cnp_cnv_cncs"}):
       
   887   %  @  {thm [display]  cnp_cnv_cncs}
       
   888   %\item The number of @{text "V"} equals the number of @{text "P"} when 
       
   889   %  the relevant thread is not living:
       
   890   %  (@{text "cnp_cnv_eq"}):
       
   891   %  @  {thm [display]  cnp_cnv_eq}
       
   892   %\item When a thread is not living, it does not hold any critical resource 
       
   893   %  (@{text "not_thread_holdents"}):
       
   894   %  @  {thm [display] not_thread_holdents}
       
   895   %\item When the number of @{text "P"} equals the number of @{text "V"}, the relevant 
       
   896   %  thread does not hold any critical resource, therefore no thread can depend on it
       
   897   %  (@{text "count_eq_dependents"}):
       
   898   %  @  {thm [display] count_eq_dependents}
       
   899   %\end{enumerate}
       
   900 
       
   901   %The reason that only threads which already held some resources
       
   902   %can be running and block @{text "th"} is that, otherwise, a thread 
       
   903   %which does not hold any resource may never have its priority raised
       
   904   %and will not get a chance to run. This fact is supported by 
       
   905   %lemma @{text "moment_blocked"}:
       
   906   %@   {thm [display] moment_blocked}
       
   907   %When instantiating @{text "i"} to @{text "0"}, the lemma means threads which did not hold any
       
   908   %resource in state @{text "s"} will not have a chance to run later. Rephrased, it means 
       
   909   %any thread which is running after @{text "th"} became the highest must have already held
       
   910   %some resource at state @{text "s"}.
       
   911 
       
   912 
       
   913   %When instantiating @{text "i"} to a number larger than @{text "0"}, the lemma means 
       
   914   %if a thread releases all its resources at some moment in @{text "t"}, after that, 
       
   915   %it may never get a chance to run. If every thread releases its resources within a finite duration,
       
   916   %then after a while, only thread @{text "th"} is left running. This shows how indefinite 
       
   917   %priority inversion can be avoided. 
       
   918 
       
   919   %All these assumptions are put into a predicate @{term "extend_highest_gen"}. 
       
   920   %It can be proved that @{term "extend_highest_gen"} holds 
       
   921   %for any moment @{text "i"} in @{term "t"} (@{text "red_moment"}):
       
   922   %@   {thm [display] red_moment}
       
   923   
       
   924   %From this, an induction principle can be derived for @{text "t"}, so that 
       
   925   %properties already derived for @{term "t"} can be applied to any prefix 
       
   926   %of @{text "t"} in the proof of new properties 
       
   927   %about @{term "t"} (@{text "ind"}):
       
   928   %\begin{center}
       
   929   %@   {thm[display] ind}
       
   930   %\end{center}
       
   931 
       
   932   %The following properties can be proved about @{term "th"} in @{term "t"}:
       
   933   %\begin{enumerate}
       
   934   %\item In @{term "t"}, thread @{term "th"} is kept live and its 
       
   935   %  precedence is preserved as well
       
   936   %  (@{text "th_kept"}): 
       
   937   %  @   {thm [display] th_kept}
       
   938   %\item In @{term "t"}, thread @{term "th"}'s precedence is always the maximum among 
       
   939   %  all living threads
       
   940   %  (@{text "max_preced"}): 
       
   941   %  @   {thm [display] max_preced}
       
   942   %\item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum precedence
       
   943   %  among all living threads
       
   944   %  (@{text "th_cp_max_preced"}): 
       
   945   %  @   {thm [display] th_cp_max_preced}
       
   946   %\item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum current 
       
   947   %  precedence among all living threads
       
   948   %  (@{text "th_cp_max"}): 
       
   949   %  @   {thm [display] th_cp_max}
       
   950   %\item In @{term "t"}, thread @{term "th"}'s current precedence equals its precedence at moment 
       
   951   %  @{term "s"}
       
   952   %  (@{text "th_cp_preced"}): 
       
   953   %  @   {thm [display] th_cp_preced}
       
   954   %\end{enumerate}
       
   955 
       
   956   %The main theorem of this part characterises the running thread during @{term "t"} 
       
   957   %(@{text "runing_inversion_2"}):
       
   958   %@   {thm [display] runing_inversion_2}
       
   959   %According to this, if a thread is running, it is either @{term "th"} or was
       
   960   %already live and held some resource 
       
   961   %at moment @{text "s"} (expressed by: @{text "cntV s th' < cntP s th'"}).
       
   962 
       
   963   %Since there are only finite many threads live and holding some resource at any moment,
       
   964   %if every such thread can release all its resources in finite duration, then after finite
       
   965   %duration, none of them may block @{term "th"} anymore. So, no priority inversion may happen
       
   966   %then.
       
   967   *}
       
   968 (*<*)
       
   969 end
       
   970 (*>*)
       
   971 
       
   972 section {* Properties for an Implementation\label{implement} *}
       
   973 
       
   974 text {*
       
   975   While our formalised proof gives us confidence about the correctness of our model of PIP, 
       
   976   we found that the formalisation can even help us with efficiently implementing it.
       
   977 
       
   978   For example, Baker complained that calculating the current precedence
       
   979   in PIP is quite ``heavy weight'' in Linux (see the Introduction).
       
   980   In our model of PIP the current precedence of a thread in a state @{text s}
       
   981   depends on all its dependants---a ``global'' transitive notion,
       
   982   which is indeed heavy weight (see the definition shown in \eqref{cpreced}).
       
   983   We can however improve upon this. For this let us define the notion
       
   984   of @{term children} of a thread @{text th} in a state @{text s} as
       
   985 
       
   986   \begin{isabelle}\ \ \ \ \ %%%
       
   987   \begin{tabular}{@ {}l}
       
   988   @{thm children_def2}
       
   989   \end{tabular}
       
   990   \end{isabelle}
       
   991 
       
   992   \noindent
       
   993   where a child is a thread that is only one ``hop'' away from the thread
       
   994   @{text th} in the @{term RAG} (and waiting for @{text th} to release
       
   995   a resource). We can prove the following lemma.
       
   996 
       
   997   \begin{lemma}\label{childrenlem}
       
   998   @{text "If"} @{thm (prem 1) cp_rec} @{text "then"}
       
   999   \begin{center}
       
  1000   @{thm (concl) cp_rec}.
       
  1001   \end{center}
       
  1002   \end{lemma}
       
  1003   
       
  1004   \noindent
       
  1005   That means the current precedence of a thread @{text th} can be
       
  1006   computed locally by considering only the children of @{text th}. In
       
  1007   effect, it only needs to be recomputed for @{text th} when one of
       
  1008   its children changes its current precedence.  Once the current 
       
  1009   precedence is computed in this more efficient manner, the selection
       
  1010   of the thread with highest precedence from a set of ready threads is
       
  1011   a standard scheduling operation implemented in most operating
       
  1012   systems.
       
  1013 
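
  To make this concrete, the following C sketch illustrates the local
  recomputation suggested by Lemma~\ref{childrenlem}: the current precedence of
  a thread is the maximum of its own precedence and the current precedences of
  its children. This is our own illustration with hypothetical, simplified data
  structures, not the code of an actual scheduler; in particular the
  tie-breaking on time stamps is only an assumption made for this sketch.

  \begin{verbatim}
  /* Illustrative sketch only (hypothetical types and names).            */
  struct prec { int prio; int time; };      /* a precedence              */

  struct thread {
    struct prec    prec;                    /* static precedence         */
    struct prec    cprec;                   /* cached current precedence */
    struct thread **children;               /* one RAG-hop away          */
    int            n_children;
  };

  /* 1 if a has higher precedence than b; ties on the priority are broken
     here by the earlier time stamp (an assumption of this sketch)       */
  static int prec_gt(struct prec a, struct prec b)
  {
    return a.prio > b.prio || (a.prio == b.prio && a.time < b.time);
  }

  /* recompute cprec of th from its own precedence and its children only */
  void recompute_cprec(struct thread *th)
  {
    struct prec m = th->prec;
    for (int i = 0; i < th->n_children; i++)
      if (prec_gt(th->children[i]->cprec, m))
        m = th->children[i]->cprec;
    th->cprec = m;
  }
  \end{verbatim}

  The sketches for the individual events below reuse this hypothetical
  @{text "recompute_cprec"} function.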
       
  1014   Of course the main work for implementing PIP involves the
       
  1015   scheduler and coding how it should react to events.  Below we
       
  1016   outline how our formalisation guides this implementation for each
       
  1017   kind of events.\smallskip
       
  1018 *}
       
  1019 
       
  1020 (*<*)
       
  1021 context step_create_cps
       
  1022 begin
       
  1023 (*>*)
       
  1024 text {*
       
  1025   \noindent
       
  1026   \colorbox{mygrey}{@{term "Create th prio"}:} We assume that the current state @{text s'} and 
       
  1027   the next state @{term "s \<equiv> Create th prio#s'"} are both valid (meaning the event
       
  1028   is allowed to occur). In this situation we can show that
       
  1029 
       
  1030   \begin{isabelle}\ \ \ \ \ %%%
       
  1031   \begin{tabular}{@ {}l}
       
  1032   @{thm eq_dep},\\
       
  1033   @{thm eq_cp_th}, and\\
       
  1034   @{thm[mode=IfThen] eq_cp}
       
  1035   \end{tabular}
       
  1036   \end{isabelle}
       
  1037 
       
  1038   \noindent
       
  1039   This means that in an implementation we do not have to recalculate the @{text RAG}, nor any of the
       
  1040   current precedences of the other threads. The current precedence of the created
       
  1041   thread @{text th} is just its precedence, namely the pair @{term "(prio, length (s::event list))"}.
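
  As an illustration only (hypothetical types; not the actual
  @{text "thread_create"} of PINTOS), the corresponding handler merely records
  this precedence and uses it as the initial current precedence:

  \begin{verbatim}
  /* Illustrative sketch only.  "now" plays the role of the length of
     the state in the formal model.                                   */
  struct prec   { int prio; int time; };
  struct thread { struct prec prec, cprec; };

  void pip_on_create(struct thread *th, int prio, int now)
  {
    th->prec.prio = prio;
    th->prec.time = now;       /* precedence is the pair (prio, now)  */
    th->cprec     = th->prec;  /* current precedence = own precedence */
    /* no RAG update; no other thread's current precedence changes    */
  }
  \end{verbatim}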
       
  1042   \smallskip
       
  1043   *}
       
  1044 (*<*)
       
  1045 end
       
  1046 context step_exit_cps
       
  1047 begin
       
  1048 (*>*)
       
  1049 text {*
       
  1050   \noindent
       
  1051   \colorbox{mygrey}{@{term "Exit th"}:} We again assume that the current state @{text s'} and 
       
  1052   the next state @{term "s \<equiv> Exit th#s'"} are both valid. We can show that
       
  1053 
       
  1054   \begin{isabelle}\ \ \ \ \ %%%
       
  1055   \begin{tabular}{@ {}l}
       
  1056   @{thm eq_dep}, and\\
       
  1057   @{thm[mode=IfThen] eq_cp}
       
  1058   \end{tabular}
       
  1059   \end{isabelle}
       
  1060 
       
  1061   \noindent
       
  1062   This means that again we do not have to recalculate the @{text RAG},
       
  1063   nor the current precedences of the other threads. Since @{term th} is not
       
  1064   alive anymore in state @{term "s"}, there is no need to calculate its
       
  1065   current precedence.
       
  1066   \smallskip
       
  1067 *}
       
  1068 (*<*)
       
  1069 end
       
  1070 context step_set_cps
       
  1071 begin
       
  1072 (*>*)
       
  1073 text {*
       
  1074   \noindent
       
  1075   \colorbox{mygrey}{@{term "Set th prio"}:} We assume that @{text s'} and 
       
  1076   @{term "s \<equiv> Set th prio#s'"} are both valid. We can show that
       
  1077 
       
  1078   \begin{isabelle}\ \ \ \ \ %%%
       
  1079   \begin{tabular}{@ {}l}
       
  1080   @{thm[mode=IfThen] eq_dep}, and\\
       
  1081   @{thm[mode=IfThen] eq_cp_pre}
       
  1082   \end{tabular}
       
  1083   \end{isabelle}
       
  1084 
       
  1085   \noindent
       
  1086   The first property is again telling us we do not need to change the @{text RAG}. 
       
  1087   The second shows that the @{term cp}-values of all threads other than @{text th} 
       
  1088   are unchanged. The reason is that @{text th} is running; therefore it is not in 
       
  1089   the @{term dependants} relation of any other thread. This in turn means that the 
       
  1090   change of its priority cannot affect other threads.
       
  1091 
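
  As an illustration only (hypothetical types; the recomputation function is
  the one assumed in the sketch further above), the corresponding handler
  updates the precedence of the running thread and recomputes only its own
  current precedence:

  \begin{verbatim}
  /* Illustrative sketch only (hypothetical types and helper).          */
  struct prec   { int prio; int time; };
  struct thread { struct prec prec, cprec; };

  extern void recompute_cprec(struct thread *th);   /* earlier sketch   */

  void pip_on_set_priority(struct thread *th, int prio, int now)
  {
    th->prec.prio = prio;
    th->prec.time = now;    /* the priority was last set "now"          */
    recompute_cprec(th);    /* only th itself: since th is running, it
                               is not a dependant of any other thread,
                               so no other current precedence changes   */
  }
  \end{verbatim}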
       
  1092   %The second
       
  1093   %however states that only threads that are \emph{not} dependants of @{text th} have their
       
  1094   %current precedence unchanged. For the others we have to recalculate the current
       
  1095   %precedence. To do this we can start from @{term "th"} 
       
  1096   %and follow the @{term "depend"}-edges to recompute  using Lemma~\ref{childrenlem} 
       
  1097   %the @{term "cp"} of every 
       
  1098   %thread encountered on the way. Since the @{term "depend"}
       
  1099   %is assumed to be loop free, this procedure will always stop. The following two lemmas show, however, 
       
  1100   %that this procedure can actually stop often earlier without having to consider all
       
  1101   %dependants.
       
  1102   %
       
  1103   %\begin{isabelle}\ \ \ \ \ %%%
       
  1104   %\begin{tabular}{@ {}l}
       
  1105   %@{thm[mode=IfThen] eq_up_self}\\
       
  1106   %@{text "If"} @{thm (prem 1) eq_up}, @{thm (prem 2) eq_up} and @{thm (prem 3) eq_up}\\
       
  1107   %@{text "then"} @{thm (concl) eq_up}.
       
  1108   %\end{tabular}
       
  1109   %\end{isabelle}
       
  1110   %
       
  1111   %\noindent
       
  1112   %The first lemma states that if the current precedence of @{text th} is unchanged,
       
  1113   %then the procedure can stop immediately (all dependent threads have their @{term cp}-value unchanged).
       
  1114   %The second states that if an intermediate @{term cp}-value does not change, then
       
  1115   %the procedure can also stop, because none of its dependent threads will
       
  1116   %have their current precedence changed.
       
  1117   \smallskip
       
  1118   *}
       
  1119 (*<*)
       
  1120 end
       
  1121 context step_v_cps_nt
       
  1122 begin
       
  1123 (*>*)
       
  1124 text {*
       
  1125   \noindent
       
  1126   \colorbox{mygrey}{@{term "V th cs"}:} We assume that @{text s'} and 
       
  1127   @{term "s \<equiv> V th cs#s'"} are both valid. We have to consider two
       
  1128   subcases: one where there is a thread to ``take over'' the released
       
  1129   resource @{text cs}, and one where there is not. Let us consider them
       
  1130   in turn. Suppose in state @{text s}, the thread @{text th'} takes over
       
  1131   resource @{text cs} from thread @{text th}. We can prove
       
  1132 
       
  1133 
       
  1134   \begin{isabelle}\ \ \ \ \ %%%
       
  1135   @{thm depend_s}
       
  1136   \end{isabelle}
       
  1137   
       
  1138   \noindent
       
  1139   which shows how the @{text RAG} needs to be changed. The next lemma suggests
       
  1140   how the current precedences need to be recalculated. For threads that are
       
  1141   not @{text "th"} and @{text "th'"} nothing needs to be changed, since we
       
  1142   can show
       
  1143 
       
  1144   \begin{isabelle}\ \ \ \ \ %%%
       
  1145   @{thm[mode=IfThen] cp_kept}
       
  1146   \end{isabelle}
       
  1147   
       
  1148   \noindent
       
  1149   For @{text th} and @{text th'} we need to use Lemma~\ref{childrenlem} to
       
  1150   recalculate their current precedence since their children have changed. 
  *}
(*<*)
end
context step_v_cps_nnt
begin
(*>*)
text {*
       
  1151   \noindent
       
  1152   In the other case where there is no thread that takes over @{text cs}, we can show how
       
  1153   to recalculate the @{text RAG} and also show that no current precedence needs
       
  1154   to be recalculated.
       
  1155 
       
  1156   \begin{isabelle}\ \ \ \ \ %%%
       
  1157   \begin{tabular}{@ {}l}
       
  1158   @{thm depend_s}\\
       
  1159   @{thm eq_cp}
       
  1160   \end{tabular}
       
  1161   \end{isabelle}
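
  \noindent
  Both subcases fit into the following illustrative C sketch (hypothetical
  types and helper functions; in particular the priority queue and the
  children-based recomputation are only assumed here): if some thread is
  waiting for @{text cs}, the one with the highest precedence takes over and
  only the current precedences of the releasing and the taking-over thread are
  recomputed; otherwise nothing needs to be recomputed.

  \begin{verbatim}
  /* Illustrative sketch only (hypothetical types and helpers).         */
  struct thread;
  struct pqueue;                              /* e.g. a Braun tree       */

  struct resource {
    struct thread *holder;                    /* NULL if free            */
    struct pqueue *waiters;                   /* ordered by precedence   */
  };

  extern int            pqueue_empty(struct pqueue *q);
  extern struct thread *pqueue_pop_max(struct pqueue *q);
  extern void           recompute_cprec(struct thread *th);

  void pip_on_release(struct thread *th, struct resource *cs)
  {
    cs->holder = NULL;                     /* remove the holding edge    */
    if (!pqueue_empty(cs->waiters)) {
      struct thread *next = pqueue_pop_max(cs->waiters);
      cs->holder = next;                   /* next takes over cs         */
      recompute_cprec(th);                 /* children of th changed     */
      recompute_cprec(next);               /* children of next changed   */
    }
    /* no waiting thread: only the RAG edge disappears, nothing else     */
  }
  \end{verbatim}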
       
  1162   *}
       
  1163 (*<*)
       
  1164 end
       
  1165 context step_P_cps_e
       
  1166 begin
       
  1167 (*>*)
       
  1168 text {*
       
  1169   \noindent
       
  1170   \colorbox{mygrey}{@{term "P th cs"}:} We assume that @{text s'} and 
       
  1171   @{term "s \<equiv> P th cs#s'"} are both valid. We again have to analyse two subcases, namely
       
  1172   the one where @{text cs} is not locked, and one where it is. We treat the former case
       
  1173   first by showing that
       
  1174   
       
  1175   \begin{isabelle}\ \ \ \ \ %%%
       
  1176   \begin{tabular}{@ {}l}
       
  1177   @{thm depend_s}\\
       
  1178   @{thm eq_cp}
       
  1179   \end{tabular}
       
  1180   \end{isabelle}
       
  1181 
       
  1182   \noindent
       
  1183   This means we need to add a holding edge to the @{text RAG} and no
       
  1184   current precedence needs to be recalculated.
  *}
(*<*)
end
context step_P_cps_ne
begin
(*>*)
text {*
       
  1185   \noindent
       
  1186   In the second case we know that resource @{text cs} is locked. We can show that
       
  1187   
       
  1188   \begin{isabelle}\ \ \ \ \ %%%
       
  1189   \begin{tabular}{@ {}l}
       
  1190   @{thm depend_s}\\
       
  1191   @{thm[mode=IfThen] eq_cp}
       
  1192   \end{tabular}
       
  1193   \end{isabelle}
       
  1194 
       
  1195   \noindent
       
  1196   That means we have to add a waiting edge to the @{text RAG}. Furthermore
       
  1197   the current precedences of all threads that are not dependants of @{text th}
       
  1198   are unchanged. For the others we need to follow the edges 
       
  1199   in the @{text RAG} and recompute their @{term "cp"}-values. To do this we can start from @{term "th"} 
       
  1200   and follow the @{term "depend"}-edges to recompute, using Lemma~\ref{childrenlem}, 
       
  1201   the @{term "cp"} of every 
       
  1202   thread encountered on the way. Since the @{term "depend"} relation
       
  1203   is loop free, this procedure always terminates. The following lemma shows, however, 
       
  1204   that this procedure can often stop earlier without having to consider all
       
  1205   dependants.
       
  1206 
       
  1207   \begin{isabelle}\ \ \ \ \ %%%
       
  1208   \begin{tabular}{@ {}l}
       
  1209   %%@ {t hm[mode=IfThen] eq_up_self}\\
       
  1210   @{text "If"} @{thm (prem 1) eq_up}, @{thm (prem 2) eq_up} and @{thm (prem 3) eq_up}\\
       
  1211   @{text "then"} @{thm (concl) eq_up}.
       
  1212   \end{tabular}
       
  1213   \end{isabelle}
       
  1214 
       
  1215   \noindent
       
  1216   This lemma states that if an intermediate @{term cp}-value does not change, then
       
  1217   the procedure can stop at that thread, because none of the threads further up the chain will
       
  1218   have their current precedence changed.
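
  This suggests the following illustrative propagation loop (again only a
  sketch with hypothetical types; the children-based recomputation is the one
  assumed earlier): after adding the waiting edge we walk up the @{text RAG}
  from @{text th}, recomputing the current precedence of every holder
  encountered, and stop as soon as one recomputation leaves the value
  unchanged.

  \begin{verbatim}
  /* Illustrative sketch only (hypothetical types and helpers).          */
  struct prec     { int prio; int time; };
  struct resource;
  struct thread   { struct prec prec, cprec;
                    struct resource *waiting_for; };
  struct resource { struct thread *holder; };

  extern void recompute_cprec(struct thread *th);    /* earlier sketch   */

  static int prec_eq(struct prec a, struct prec b)
  { return a.prio == b.prio && a.time == b.time; }

  void pip_on_blocked_acquire(struct thread *th, struct resource *cs)
  {
    th->waiting_for = cs;                  /* add the waiting edge        */
    struct thread *t = cs->holder;
    while (t != NULL) {                    /* walk up the RAG             */
      struct prec old = t->cprec;
      recompute_cprec(t);
      if (prec_eq(old, t->cprec))
        break;                             /* unchanged: stop early       */
      t = t->waiting_for ? t->waiting_for->holder : NULL;
    }
  }
  \end{verbatim}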
       
  1219   *}
       
  1220 (*<*)
       
  1221 end
       
  1222 (*>*)
       
  1223 text {*
       
  1224   \noindent
       
  1225   As can be seen, a pleasing byproduct of our formalisation is that the properties in
       
  1226   this section closely inform an implementation of PIP, namely whether the
       
  1227   @{text RAG} needs to be reconfigured or current precedences need to
       
  1228   be recalculated for an event. This information is provided by the lemmas we proved.
       
  1229   We confirmed that our observations translate into practice by implementing
       
  1230   our version of PIP on top of PINTOS, a small operating system written in C and used for teaching at 
       
  1231   Stanford University \cite{PINTOS}. To implement PIP, we only need to modify the kernel 
       
  1232   functions corresponding to the events in our formal model. The events translate to the following 
       
  1233   function interface in PINTOS:
       
  1234 
       
  1235   \begin{center}
       
  1236   \begin{tabular}{|l@ {\hspace{2mm}}|l@ {\hspace{2mm}}|}
       
  1237   \hline
       
  1238   {\bf Event} & {\bf PINTOS function} \\
       
  1239   \hline
       
  1240   @{text Create} & @{text "thread_create"}\\
       
  1241   @{text Exit}   & @{text "thread_exit"}\\
       
  1242   @{text Set}    & @{text "thread_set_priority"}\\
       
  1243   @{text P}      & @{text "lock_acquire"}\\
       
  1244   @{text V}      & @{text "lock_release"}\\ 
       
  1245   \hline
       
  1246   \end{tabular}
       
  1247   \end{center}
       
  1248 
       
  1249   \noindent
       
  1250   Our implicit assumption that every event is an atomic operation is ensured by the architecture of 
       
  1251   PINTOS. The case where an unlocked resource is next given to the waiting thread with the
       
  1252   highest precedence is realised in our implementation by priority queues. We implemented
       
  1253   them as \emph{Braun trees} \cite{Paulson96}, which provide efficient @{text "O(log n)"}-operations
       
  1254   for accessing and updating. Apart from having to implement relatively complex data\-structures in C
       
  1255   using pointers, our experience with the implementation has been very positive: our specification 
       
  1256   and formalisation of PIP translates smoothly to an efficient implementation in PINTOS. 
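
  To give a flavour of this data structure, the following sketch shows
  Braun-tree insertion for a max-priority queue ordered by precedence (again
  hypothetical illustration code, not the C code of our PINTOS implementation;
  deletion follows a similar pattern and is omitted). The two subtrees of a
  Braun tree differ in size by at most one, which bounds the depth, and hence
  the cost of an insertion, by @{text "O(log n)"}.

  \begin{verbatim}
  /* Illustrative sketch only (hypothetical types).                     */
  #include <stdlib.h>

  struct prec { int prio; int time; };
  struct node { struct prec val; struct node *left, *right; };

  /* ties on the priority broken by the earlier time stamp (assumption) */
  static int prec_gt(struct prec a, struct prec b)
  { return a.prio > b.prio || (a.prio == b.prio && a.time < b.time); }

  /* keep the maximum at the root, insert into the right subtree and
     then swap the subtrees, which maintains the Braun invariant;
     allocation failure is not handled in this sketch                   */
  struct node *braun_insert(struct node *t, struct prec x)
  {
    if (t == NULL) {
      struct node *n = malloc(sizeof *n);
      n->val = x; n->left = n->right = NULL;
      return n;
    }
    struct prec top = t->val, rest = x;
    if (prec_gt(x, t->val)) { top = x; rest = t->val; }
    struct node *l = braun_insert(t->right, rest);
    t->right = t->left;
    t->left  = l;
    t->val   = top;
    return t;
  }
  \end{verbatim}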
       
  1257 *}
       
  1258 
       
  1259 section {* Conclusion *}
       
  1260 
       
  1261 text {* 
       
  1262   The Priority Inheritance Protocol (PIP) is a classic textbook
       
  1263   algorithm used in many real-time operating systems in order to avoid the problem of
       
  1264   Priority Inversion.  Although classic and widely used, PIP does have
       
  1265   its faults: for example it does not prevent deadlocks in cases where threads
       
  1266   have circular lock dependencies.
       
  1267 
       
  1268   We had two goals in mind with our formalisation of PIP: One is to
       
  1269   make the notions in the correctness proof by Sha et al.~\cite{Sha90}
       
  1270   precise so that they can be processed by a theorem prover. The reason is
       
  1271   that a mechanically checked proof avoids the flaws that crept into their
       
  1272   informal reasoning. We achieved this goal: The correctness of PIP now
       
  1273   only hinges on the assumptions behind our formal model. The reasoning, which is
       
  1274   sometimes quite intricate and tedious, has been checked by Isabelle/HOL. 
       
  1275   We can also confirm that Paulson's
       
  1276   inductive method for protocol verification~\cite{Paulson98} is quite
       
  1277   suitable for our formal model and proof. The traditional application
       
  1278   area of this method is security protocols. 
       
  1279 
       
  1280   The second goal of our formalisation is to provide a specification for actually
       
  1281   implementing PIP. Textbooks, for example \cite[Section 5.6.5]{Vahalia96},
       
  1282   explain how to use various implementations of PIP and abstractly
       
  1283   discuss their properties, but surprisingly lack most details important for a
       
  1284   programmer who wants to implement PIP (similarly Sha et al.~\cite{Sha90}).  
       
  1285   That this is an issue in practice is illustrated by the
       
  1286   email from Baker we cited in the Introduction. We also achieved this
       
  1287   goal: The formalisation allowed us to efficiently implement our version
       
  1288   of PIP on top of PINTOS \cite{PINTOS}, a simple instructional operating system for the x86 
       
  1289   architecture. It also gives the first author enough data to enable
       
  1290   his undergraduate students to implement PIP (as part of their OS course).
       
  1291   A byproduct of our formalisation effort is that nearly all
       
  1292   design choices for the PIP scheduler are backed up with a proved
       
  1293   lemma. We were also able to establish the property that the choice of
       
  1294   the next thread which takes over a lock is irrelevant for the correctness
       
  1295   of PIP. 
       
  1296 
       
  1297   PIP is a scheduling algorithm for single-processor systems. We are
       
  1298   now living in a multi-processor world. Priority Inversion certainly
       
  1299   occurs also there.  However, there is very little ``foundational''
       
  1300   work about PIP-algorithms on multi-processor systems.  We are not
       
  1301   aware of any correctness proofs, not even informal ones. There is an
       
  1302   implementation of a PIP-algorithm for multi-processors as part of the
       
  1303   ``real-time'' effort in Linux, including an informal description of the implemented scheduling
       
  1304   algorithm given in \cite{LINUX}.  We estimate that the formal
       
  1305   verification of this algorithm, involving more fine-grained events,
       
  1306   is an order of magnitude harder than the one we presented here, but still
       
  1307   within reach of current theorem proving technology. We leave this
       
  1308   for future work.
       
  1309 
       
  1310   The most closely related work to ours is the formal verification in
       
  1311   PVS of the Priority Ceiling Protocol done by Dutertre
       
  1312   \cite{dutertre99b}---another solution to the Priority Inversion
       
  1313   problem, which however needs static analysis of programs in order to
       
  1314   avoid it. There have been earlier formal investigations
       
  1315   into PIP \cite{Faria08,Jahier09,Wellings07}, but they employ model
       
  1316   checking techniques. The results obtained by them apply,
       
  1317   however, only to systems with a fixed size, such as a fixed number of 
       
  1318   events and threads. In contrast, our result applies to systems of arbitrary
       
  1319   size. Moreover, our result is a good 
       
  1320   witness for one of the major reasons to be interested in machine checked 
       
  1321   reasoning: gaining deeper understanding of the subject matter.
       
  1322 
       
  1323   Our formalisation
       
  1324   consists of around 210 lemmas and overall 6950 lines of readable Isabelle/Isar
       
  1325   code with a few apply-scripts interspersed. The formal model of PIP
       
  1326   is 385 lines long; the formal correctness proof 3800 lines. Some auxiliary
       
  1327   definitions and proofs span over 770 lines of code. The properties relevant
       
  1328   for an implementation require 2000 lines. 
       
  1329   %The code of our formalisation 
       
  1330   %can be downloaded from
       
  1331   %\url{http://www.inf.kcl.ac.uk/staff/urbanc/pip.html}.
       
  1332 
       
  1333   \medskip
       
  1334 
       
  1335   \noindent
       
  1336   {\bf Acknowledgements:}
       
  1337   We are grateful for the comments we received from anonymous
       
  1338   referees.
       
  1339 
       
  1340   \bibliographystyle{plain}
       
  1341   \bibliography{root}
       
  1342 *}
       
  1343 
       
  1344 
       
  1345 (*<*)
       
  1346 end
       
  1347 (*>*)