Journal/Paper.thy
changeset 204 5191a09d9928
parent 203 fe3dbfd9123b
child 205 12a8dfcb55a7
203:fe3dbfd9123b 204:5191a09d9928
   208   how the thread inherited its current priority}'' \cite[Page
   208   how the thread inherited its current priority}'' \cite[Page
   209   527]{Liu00}. Unfortunately, the important information about actually
   209   527]{Liu00}. Unfortunately, the important information about actually
   210   computing the priority to be restored solely from this log is not
   210   computing the priority to be restored solely from this log is not
   211   explained in \cite{Liu00} but left as an ``{\it exercise}'' to the
   211   explained in \cite{Liu00} but left as an ``{\it exercise}'' to the
   212   reader.  As we shall see, a correct version of PIP does not need to
   212   reader.  As we shall see, a correct version of PIP does not need to
   213   maintain this (potentially expensive) data structure at
   213   maintain this (potentially expensive) log data structure at
   214   all. Surprisingly, also the widely read and frequently updated
   214   all. Surprisingly, also the widely read and frequently updated
   215   textbook \cite{Silberschatz13} gives the wrong specification. On Page 254 the authors write: ``{\it Upon releasing the
   215   textbook \cite{Silberschatz13} gives the wrong specification. On Page 254 the authors write: ``{\it Upon releasing the
   216   lock, the [low-priority] thread will revert to its original
   216   lock, the [low-priority] thread will revert to its original
   217   priority.}'' The same error is also repeated later in this popular textbook.
   217   priority.}'' The same error is also repeated later in this popular textbook.
   218 
   218 
   227   remaining priority of the threads it blocks. This textbook also
   227   remaining priority of the threads it blocks. This textbook also
   228   gives an informal proof for the correctness of PIP in the style of
   228   gives an informal proof for the correctness of PIP in the style of
   229   Sha et al. Unfortunately, this informal proof is too vague to be
   229   Sha et al. Unfortunately, this informal proof is too vague to be
   230   useful for formalising the correctness of PIP and the specification
   230   useful for formalising the correctness of PIP and the specification
   231   leaves out nearly all details needed to implement PIP
   231   leaves out nearly all details needed to implement PIP
   232   efficiently.\medskip\smallskip % %The advantage of formalising the
   232   efficiently.\medskip\smallskip
   233   %correctness of a high-level specification of PIP in a theorem
   233 
   234   prover %is that such issues clearly show up and cannot be overlooked
       
   235   as in %informal reasoning (since we have to analyse all possible
       
   236   behaviours %of threads, i.e.~\emph{traces}, that could possibly
       
   237   happen).
       
   238 
   234 
   239   \noindent {\bf Contributions:} There have been earlier formal
   235   \noindent {\bf Contributions:} There have been earlier formal
   240   investigations into PIP \cite{Faria08,Jahier09,Wellings07}, but they
   236   investigations into PIP \cite{Faria08,Jahier09,Wellings07}, but they
   241   employ model checking techniques. This paper presents a formalised
   237   employ model checking techniques. This paper presents a formalised
   242   and mechanically checked proof for the correctness of PIP. For this
   238   and mechanically checked proof for the correctness of PIP. For this
   243   we needed to design a new correctness criterion for PIP. In contrast
   239   we needed to design a new correctness criterion for PIP. In contrast
   244   to model checking, our formalisation provides insight into why PIP
   240   to model checking, our formalisation provides insight into why PIP
   245   is correct and allows us to prove stronger properties that, as we
   241   is correct and allows us to prove stronger properties that, as we
   246   will show, can help with an efficient implementation of PIP. We
   242   will show, can help with an efficient implementation of PIP. We
   247   illustrate this with an implementation of PIP in the educational
   243   illustrate this with an implementation of PIP in the educational
   248   PINTOS operating system \cite{PINTOS}.  For example, we found by
   244   operating system PINTOS \cite{PINTOS}.  For example, we found by
   249   ``playing'' with the formalisation that the choice of the next
   245   ``playing'' with the formalisation that the choice of the next
   250   thread to take over a lock when a resource is released is irrelevant
   246   thread to take over a lock when a resource is released is irrelevant
   251   for PIP being correct---a fact that has not been mentioned in the
   247   for PIP being correct---a fact that has not been mentioned in the
   252   literature and not been used in the reference implementation of PIP
   248   literature and not been used in the reference implementation of PIP
   253   in PINTOS.  This fact, however, is important for an efficient
   249   in PINTOS.  This fact, however, is important for an efficient
   566   \noindent
   562   \noindent
   567   If there is no cycle, then every @{text RAG} can be pictured as a forest of trees, as
   563   If there is no cycle, then every @{text RAG} can be pictured as a forest of trees, as
   568   for example in Figure~\ref{RAGgraph}.
   564   for example in Figure~\ref{RAGgraph}.
   569 
   565 
   570   Because of the @{text RAG}s, we will need to formalise some results
   566   Because of the @{text RAG}s, we will need to formalise some results
   571   about graphs.  While there are a few formalisations for graphs already
   567   about graphs.
   572   implemented in Isabelle, we choose to introduce our own library of
   568   It
   573   graphs for PIP. The justification for this is that we wanted to have
       
   574   a more general theory of graphs which is capable of representing
       
   575   potentially infinite graphs (in the sense of infinitely branching
       
   576   and infinite size): the property that our @{text RAG}s are actually
       
   577   forests of finitely branching trees having only a finite depth
       
   578   should be something we can \emph{prove} for our model of PIP---it
       
   579   should not be an assumption we already build into our model. It
       
   580   seems for our purposes the most convenient representation of
   569   seems for our purposes the most convenient representation of
   581   graphs is as binary relations given by sets of pairs shown in
   570   graphs is as binary relations given by sets of pairs shown in
   582   \eqref{pairs}. The pairs stand for the edges in graphs. This
   571   \eqref{pairs}. The pairs stand for the edges in graphs. This
   583   relation-based representation has the advantage that the notions
   572   relation-based representation has the advantage that the notions
   584   @{text "waiting"} and @{text "holding"} are already defined in
   573   @{text "waiting"} and @{text "holding"} are already defined in
   585   terms of relations amongst threads and resources. Also, we can easily
   574   terms of relations amongst threads and resources. Also, we can easily
   586   re-use the standard notions for transitive closure operations @{term
   575   re-use the standard notions for transitive closure operations @{term
   587   "trancl DUMMY"} and @{term "rtrancl DUMMY"}, as well as relation
   576   "trancl DUMMY"} and @{term "rtrancl DUMMY"}, as well as relation
   588   composition for our graphs.  A \emph{forest} is defined in our
   577   composition for our graphs.
       
   578   While there are a few formalisations for graphs already
       
   579   implemented in Isabelle, we choose to introduce our own library of
       
   580   graphs for PIP. The justification for this is that we wanted to have
       
   581   a more general theory of graphs which is capable of representing
       
   582   potentially infinite graphs (in the sense of infinitely branching
       
   583   and infinite size): the property that our @{text RAG}s are actually
       
   584   forests of finitely branching trees having only a finite depth
       
   585   should be something we can \emph{prove} for our model of PIP---it
       
   586   should not be an assumption we already build into our model.
       
   587   A \emph{forest} is defined in our
   589   representation as the relation @{text rel} that is \emph{single
   588   representation as the relation @{text rel} that is \emph{single
   590   valued} and \emph{acyclic}:
   589   valued} and \emph{acyclic}:
   591 
   590 
   592   \begin{isabelle}\ \ \ \ \ %%%
   591   \begin{isabelle}\ \ \ \ \ %%%
   593   \begin{tabular}{@ {}l}
   592   \begin{tabular}{@ {}l}
   611   
   610   
   612   \noindent Note that forests can have trees with infinite depth
   611   \noindent Note that forests can have trees with infinite depth
   613   containing nodes with infinitely many children.  A \emph{finite
   612   containing nodes with infinitely many children.  A \emph{finite
   614   forest} is a forest whose underlying relation is
   613   forest} is a forest whose underlying relation is
   615   well-founded\footnote{For \emph{well-founded} we use the quite natural
   614   well-founded\footnote{For \emph{well-founded} we use the quite natural
   616   definition from Isabelle.}
   615   definition from Isabelle/HOL.}
   617   and every node has finitely many children (is only finitely
   616   and every node has finitely many children (is only finitely
   618   branching).
   617   branching).
   619 
   618 
   620   %\endnote{
   619   %\endnote{
   621   %\begin{isabelle}\ \ \ \ \ %%%
   620   %\begin{isabelle}\ \ \ \ \ %%%
   628   %}
   627   %}
   629 
   628 
   630 
   629 
   631   %\endnote{{\bf Lemma about overlapping paths}}
   630   %\endnote{{\bf Lemma about overlapping paths}}
   632   
   631   
   633   The locking mechanism of PIP ensures that for each thread node,
   632   The locking mechanism ensures that for each thread node,
   634   there can be many incoming holding edges in the @{text RAG}, but at most one
   633   there can be many incoming holding edges in the @{text RAG}, but at most one
   635   outgoing waiting edge.  The reason is that when a thread asks for
   634   outgoing waiting edge.  The reason is that when a thread asks for
   636   a resource that is locked already, then the thread is blocked and
   635   a resource that is locked already, then the thread is blocked and
   637   cannot ask for another resource.  Clearly, also every resource can
   636   cannot ask for another resource.  Clearly, also every resource can
   638   only have at most one outgoing holding edge---indicating that the
   637   only have at most one outgoing holding edge---indicating that the
   661   there is a cycle of dependencies in a @{text RAG} (and thus @{text TDG}), then clearly we have a
   660   there is a cycle of dependencies in a @{text RAG} (and thus @{text TDG}), then clearly we have a
   662   deadlock. Therefore when a thread requests a resource, we must
   661   deadlock. Therefore when a thread requests a resource, we must
   663   ensure that the resulting @{text RAG} and @{text TDG} are not circular. In practice, the
   662   ensure that the resulting @{text RAG} and @{text TDG} are not circular. In practice, the
   664   programmer has to ensure this. Our model will enforce that critical 
   663   programmer has to ensure this. Our model will enforce that critical 
   665   resources can only be requested provided no circularity can arise (but 
   664   resources can only be requested provided no circularity can arise (but 
   666   they can overlap, see Figure~\ref{overlap}).
   665   critical sections can overlap, see Figure~\ref{overlap}).
   667 
   666 
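  To make this non-circularity requirement concrete, the following C
  sketch shows one way an implementation might check whether granting a
  request would close a cycle in the @{text RAG}. It exploits the
  structural facts above (every resource has at most one holder and
  every thread waits for at most one resource); the names
  @{text "holder_of"} and @{text "waiting_for"} and the fixed bounds are
  hypothetical and not part of our formal model.

\begin{verbatim}
#include <stdbool.h>

#define MAX_THREADS   8
#define MAX_RESOURCES 8
#define NO_THREAD     (-1)
#define NO_RESOURCE   (-1)

/* Illustrative RAG encoding: each resource is held by at most one
   thread and each thread waits for at most one resource.            */
static int holder_of[MAX_RESOURCES];  /* resource -> holder or NO_THREAD     */
static int waiting_for[MAX_THREADS];  /* thread -> awaited cs or NO_RESOURCE */

/* Would adding the waiting edge (th, cs) close a cycle in the RAG?
   Follow the unique chain cs -> holder -> awaited resource -> ...
   and report a cycle if it reaches th again.                         */
static bool request_creates_cycle(int th, int cs)
{
  int t = holder_of[cs];
  while (t != NO_THREAD) {
    if (t == th)
      return true;                   /* dependency chain leads back to th */
    int r = waiting_for[t];
    if (r == NO_RESOURCE)
      return false;                  /* chain ends at an unblocked thread */
    t = holder_of[r];
  }
  return false;                      /* cs is currently free */
}
\end{verbatim}

  Since the @{text RAG} is acyclic before the request, the chain
  followed by this loop is finite and the check terminates.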
   668   Next we introduce the notion of the \emph{current precedence} of a thread @{text th} in a 
   667   Next we introduce the notion of the \emph{current precedence} of a thread @{text th} in a 
   669   state @{text s}. It is defined as
   668   state @{text s}. It is defined as
   670 
   669 
   671   \begin{isabelle}\ \ \ \ \ %%%
   670   \begin{isabelle}\ \ \ \ \ %%%
   675 
   674 
   676    \noindent 
   675    \noindent 
   677   While the precedence @{term prec} of any
   676   While the precedence @{term prec} of any
   678   thread is determined statically (for example when the thread is
   677   thread is determined statically (for example when the thread is
   679   created), the point of the current precedence is to dynamically
   678   created), the point of the current precedence is to dynamically
   680   increase this precedence, if needed according to PIP. Therefore the
   679   boost this precedence, if needed according to PIP. Therefore the
   681   current precedence of @{text th} is given as the maximum of the
   680   current precedence of @{text th} is given as the maximum of the
   682   precedences of all threads in its subtree (which includes by definition 
   681   precedences of all threads in its subtree (which includes by definition 
   683   @{text th} itself). Since the notion of current precedence is defined as the 
   682   @{text th} itself). Since the notion of current precedence is defined as the 
   684   transitive closure of the dependent
   683   transitive closure of the dependent
   685   threads in the @{text TDG}, we deal correctly with the problem in the informal
   684   threads in the @{text TDG}, we deal correctly with the problem in the informal
   703   \noindent
   702   \noindent
   704   The first function is a waiting queue function (that is, it takes a
   703   The first function is a waiting queue function (that is, it takes a
   705   resource @{text "cs"} and returns the corresponding list of threads
   704   resource @{text "cs"} and returns the corresponding list of threads
   706   that lock or wait for it); the second is a function that
   705   that lock or wait for it); the second is a function that
   707   takes a thread and returns its current precedence (see
   706   takes a thread and returns its current precedence (see
   708   the definition in \eqref{cpreced}). We assume the usual getter and setter methods for
   707   the @{text wq} in \eqref{cpreced}). We assume the usual getter and setter methods for
   709   such records.
   708   such records.
   710 
   709 
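  As a purely illustrative aside, the record just described might be
  rendered in C roughly as follows, with @{text wq} mirroring the
  waiting queue function and @{text cprec} mirroring the current
  precedence function; the struct name, field names and fixed bounds
  are ours and not part of the formalisation.

\begin{verbatim}
#define MAX_THREADS   8   /* illustrative fixed bounds only */
#define MAX_RESOURCES 8

/* a precedence is a pair of values, cf. Prc 0 0 in the model */
struct precedence { int prio; int stamp; };

/* rough C counterpart of the scheduler record */
struct schedule_state {
  int               wq[MAX_RESOURCES][MAX_THREADS]; /* threads locking or */
  int               wq_len[MAX_RESOURCES];          /* waiting for each cs */
  struct precedence cprec[MAX_THREADS];             /* current precedence */
};
\end{verbatim}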
   711   In the initial state, the scheduler starts with all resources unlocked (the corresponding 
   710   In the initial state, the scheduler starts with all resources unlocked (the corresponding 
   712   function is defined in \eqref{allunlocked}) and the
   711   function is defined in \eqref{allunlocked}) and the
   713   current precedence of every thread is initialised with @{term "Prc 0 0"}; that means 
   712   current precedence of every thread is initialised with @{term "Prc 0 0"}; that means 
   725   The cases for @{term Create}, @{term Exit} and @{term Set} are also straightforward:
   724   The cases for @{term Create}, @{term Exit} and @{term Set} are also straightforward:
   726   we calculate the waiting queue function of the (previous) state @{text s}; 
   725   we calculate the waiting queue function of the (previous) state @{text s}; 
   727   this waiting queue function @{text wq} is unchanged in the next schedule state---because
   726   this waiting queue function @{text wq} is unchanged in the next schedule state---because
   728   none of these events lock or release any resource; 
   727   none of these events lock or release any resource; 
   729   for calculating the next @{term "cprec_fun"}, we use @{text wq} and 
   728   for calculating the next @{term "cprec_fun"}, we use @{text wq} and 
   730   @{term cpreced}. This gives the following three clauses for @{term schs}:
   729   @{term cpreced} defined above. This gives the following three clauses for @{term schs}:
   731 
   730 
   732   \begin{isabelle}\ \ \ \ \ %%%
   731   \begin{isabelle}\ \ \ \ \ %%%
   733   \begin{tabular}{@ {}l}
   732   \begin{tabular}{@ {}l}
   734   @{thm (lhs) schs.simps(2)} @{text "\<equiv>"}\\ 
   733   @{thm (lhs) schs.simps(2)} @{text "\<equiv>"}\\ 
   735   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
   734   \hspace{5mm}@{text "let"} @{text "wq = wq_fun (schs s)"} @{text "in"}\\
  1043   @{text "s' @ s"}) in which @{text th} is still alive, either @{text
  1042   @{text "s' @ s"}) in which @{text th} is still alive, either @{text
  1044   th} is running or it is blocked by a thread that was alive in the
  1043   th} is running or it is blocked by a thread that was alive in the
  1045   state @{text s} and was waiting for or in the possession of a lock
  1044   state @{text s} and was waiting for or in the possession of a lock
  1046   in @{text s}. Since in @{text s}, as in every state, the set of
  1045   in @{text s}. Since in @{text s}, as in every state, the set of
  1047   alive threads is finite, @{text th} can only be blocked by a finite
  1046   alive threads is finite, @{text th} can only be blocked by a finite
  1048   number of threads.  We will actually prove a
  1047   number of threads.
  1049   stronger statement where we also provide the current precedence of
  1048   
  1050   the blocking thread. 
       
  1051 
  1049 
  1052   However, the theorem we are going to prove hinges upon a number of
  1050   However, the theorem we are going to prove hinges upon a number of
  1053   natural assumptions about the states @{text s} and @{text "s' @ s"}, the
  1051   natural assumptions about the states @{text s} and @{text "s' @ s"}, the
  1054   thread @{text th} and the events happening in @{text s'}. We list
  1052   thread @{text th} and the events happening in @{text s'}. We list
  1055   them next:
  1053   them next:
  1164   %@{thm "th_blockedE_pretty"} -- thm-blockedE??
  1162   %@{thm "th_blockedE_pretty"} -- thm-blockedE??
  1165   % 
  1163   % 
  1166   % @{text "th_kept"} shows that th is a thread in s'-s
  1164   % @{text "th_kept"} shows that th is a thread in s'-s
  1167   % }
  1165   % }
  1168 
  1166 
   1169   Given our assumptions (on @{text th}), the first property we show is
  1167   The next lemma is part of the proof for Theorem~1:
       
   1168   Given our assumptions (on~@{text th}), the first property we show is
  1170   that a running thread @{text "th'"} must either wait for or hold a
  1169   that a running thread @{text "th'"} must either wait for or hold a
  1171   resource in state @{text s}.
  1170   resource in state @{text s}.
  1172 
  1171 
  1173   \begin{lemma}\label{notdetached}
  1172   \begin{lemma}\label{notdetached}
  1174   If @{term "th' \<in> running (s' @ s)"} and @{term "th \<noteq> th'"} then @{term "\<not>detached s th'"}.
  1173   If @{term "th' \<in> running (s' @ s)"} and @{term "th \<noteq> th'"} then @{term "\<not>detached s th'"}.
  1175   \end{lemma}
  1174   \end{lemma}
  1176 
  1175 
   1177   \begin{proof} Let us assume otherwise, that is, @{text "th'"} is detached in state
   1176   \begin{proof} Let us assume otherwise, that is, @{text "th'"} is detached in state
   1178   @{text "s"}. Then, according to the definition of detached, @{text
   1177   @{text "s"}. Then, according to the definition of detached, @{text
  1179   "th'"} does not hold or wait for any resource. Hence the @{text
  1178   "th'"} does not hold or wait for any resource. Hence the @{text
  1180   cp}-value of @{text "th'"} in @{text s} is not boosted, that is
  1179   cprec}-value of @{text "th'"} in @{text s} is not boosted, that is
  1181   @{term "cp s th' = prec th' s"}, and is therefore lower than the
  1180   @{term "cp s th' = prec th' s"}, and is therefore lower than the
  1182   precedence (as well as the @{text "cp"}-value) of @{term "th"}. This
  1181   precedence (as well as the @{text "cprec"}-value) of @{term "th"}. This
  1183   means @{text "th'"} will not run as long as @{text "th"} is a live
  1182   means @{text "th'"} will not run as long as @{text "th"} is a live
  1184   thread. In turn this means @{text "th'"} cannot take any action in
  1183   thread. In turn this means @{text "th'"} cannot take any action in
  1185   state @{text "s' @ s"} to change its current status; therefore
  1184   state @{text "s' @ s"} to change its current status; therefore
  1186   @{text "th'"} is still detached in state @{text "s' @ s"}.
  1185   @{text "th'"} is still detached in state @{text "s' @ s"}.
  1187   Consequently @{text "th'"} is also not boosted in state @{text "s' @
  1186   Consequently @{text "th'"} is also not boosted in state @{text "s' @
  1645   (active or hibernating) and second, each of these threads will give up its
  1644   (active or hibernating) and second, each of these threads will give up its
  1646   resources after a finite amount of time.  However, we do not have
  1645   resources after a finite amount of time.  However, we do not have
  1647   this concept of active or hibernating threads in our model.  In fact
  1646   this concept of active or hibernating threads in our model.  In fact
  1648   we can dispense with the first assumption altogether and allow that
  1647   we can dispense with the first assumption altogether and allow that
  1649   in our model we can create new threads or exit existing threads
  1648   in our model we can create new threads or exit existing threads
  1650   arbitrarily. Consequently, the avoidance of indefinite priority
  1649   arbitrarily. Consequently, the absence of indefinite priority
  1651   inversion we are trying to establish is in our model not true,
  1650   inversion we are trying to establish in our model is not true,
  1652   unless we stipulate an upper bound on the number of threads that
  1651   unless we stipulate an upper bound on the number of threads that
  1653   have been created during the time leading to any future state after
  1652   have been created during the time leading to any future state after
  1654   @{term s}. Otherwise our PIP scheduler could be ``swamped'' with
  1653   @{term s}. Otherwise our PIP scheduler could be ``swamped'' with
  1655   @{text "Create"}-requests of lower priority threads.  So our first
  1654   @{text "Create"}-requests of lower priority threads.  So our first
  1656   assumption states:
  1655   assumption states:
   1669   \noindent Note that it is not enough to just state that there is
   1668   \noindent Note that it is not enough to just state that there is
   1670   only a finite number of threads created up until a single state @{text
   1669   only a finite number of threads created up until a single state @{text
  1671   "s' @ s"} after @{text s}.  Instead, we need to put this bound on
  1670   "s' @ s"} after @{text s}.  Instead, we need to put this bound on
  1672   the @{text "Create"} events for all valid states after @{text s}.
  1671   the @{text "Create"} events for all valid states after @{text s}.
  1673   This ensures that no matter which ``future'' state is reached, the
  1672   This ensures that no matter which ``future'' state is reached, the
  1674   number of @{text "Create"}-events is finite. We use @{text "es @ s"}
  1673   number of @{text "Create"}-events is finite. This bound @{text BC} is assumed 
  1675   to stand for \emph{future states} after @{text s}---it is @{text s}
  1674  with respect to all future states @{text "es @ s"}
  1676   extended with some list @{text es} of events.
  1675   of @{text s}, not just a single one.
  1677 
  1676 
  1678   For our second assumption about giving up resources after a finite
  1677   For our second assumption about giving up resources after a finite
  1679   amount of ``time'', let us introduce the following definition about
  1678   amount of ``time'', let us introduce the following definition about
  1680   threads that can potentially block @{text th}:
  1679   threads that can potentially block @{text th}:
  1681 
  1680 
  1685 
  1684 
  1686   \noindent This set contains all threads that are not detached in
  1685   \noindent This set contains all threads that are not detached in
   1687   state @{text s}. According to our definition of @{text "detached"},
   1686   state @{text s}. According to our definition of @{text "detached"},
  1688   this means a thread in @{text "blockers"} either holds or waits for
  1687   this means a thread in @{text "blockers"} either holds or waits for
   1689   some resource in state @{text s}. Our Thm.~1 implies that any of
   1688   some resource in state @{text s}. Our Thm.~1 implies that any of
   1690   those threads can potentially block @{text th} after state
   1689   such threads can potentially block @{text th} after state
  1691   @{text s}. We need to make the following assumption about the
  1690   @{text s}. We need to make the following assumption about the
  1692   threads in the @{text "blockers"}-set:
  1691   threads in the @{text "blockers"}-set:
  1693 
  1692 
  1694   \begin{quote}
  1693   \begin{quote}
  1695   {\bf Assumptions on the threads {\boldmath{@{term "th' \<in> blockers"}}}:} 
  1694   {\bf Assumptions on the threads {\boldmath{@{term "th' \<in> blockers"}}}:} 
  1777   & & +\;
  1776   & & +\;
  1778   @{text len}\, [@{text "s'"} \leftarrow @{text "s upto es"}.\;\; @{text "th"} \not\in @{text "running s'"}]
  1777   @{text len}\, [@{text "s'"} \leftarrow @{text "s upto es"}.\;\; @{text "th"} \not\in @{text "running s'"}]
  1779   \end{array} 
  1778   \end{array} 
  1780   \end{equation}
  1779   \end{equation}
  1781 
  1780 
  1782   \noindent Second by Thm~\ref{mainthm}, the events are either the
  1781   \noindent 
  1783   actions of @{text th} or @{text "Create"}-events or actions of the
  1782   The actions in @{text es} can be partitioned into the actions of @{text th} and the
  1784   threads in blockers. That is
  1783   actions of threads other than @{text th}. The latter can further be divided
       
  1784   into actions of existing threads and the actions to create new
       
  1785   ones. Moreover, the actions of existing threads other than @{text th}
       
   1786   are, by Thm.~1, the actions of blockers. This gives rise to
  1785   %
  1787   %
  1786   \begin{equation}\label{secondeq}
  1788   \begin{equation}\label{secondeq}
  1787   \begin{array}{lcl}
  1789   \begin{array}{lcl}
  1788   @{text "len es"} & \;=\; & @{text "len (actions_of {th} es)"}\\
  1790   @{text "len es"} & \;=\; & @{text "len (actions_of {th} es)"}\\
  1789   & & +\; @{text "len (filter isCreate es)"}\\ 
  1791   & & +\; @{text "len (filter isCreate es)"}\\ 
  1887   precedence is computed in this more efficient manner, the selection
  1889   precedence is computed in this more efficient manner, the selection
  1888   of the thread with highest precedence from a set of ready threads is
  1890   of the thread with highest precedence from a set of ready threads is
  1889   a standard scheduling operation and implemented in most operating
  1891   a standard scheduling operation and implemented in most operating
  1890   systems.
  1892   systems.
  1891 
  1893 
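  For illustration only, this selection might look as follows in C; the
  names and the assumed ordering on precedences (a larger priority wins,
  ties are broken by the earlier time stamp) are assumptions of this
  sketch, not part of the formalisation or of any particular operating
  system.

\begin{verbatim}
#include <stdbool.h>

struct precedence { int prio; int stamp; };

/* assumed ordering on precedences for this sketch */
static bool prec_less(struct precedence a, struct precedence b)
{
  return a.prio < b.prio || (a.prio == b.prio && a.stamp > b.stamp);
}

/* return the position in ready[] of a thread with maximal current
   precedence; cprec[] plays the role of the current precedence function */
static int pick_highest(const int *ready, int n_ready,
                        const struct precedence *cprec)
{
  int best = 0;
  for (int i = 1; i < n_ready; i++)
    if (prec_less(cprec[ready[best]], cprec[ready[i]]))
      best = i;
  return best;
}
\end{verbatim}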
  1892   %\begin{proof}[of Lemma~\ref{childrenlem}]
  1894   Below we
  1893   %Test
  1895   outline how our formalisation guides the efficient calculation of @{text cprecs}
  1894   %\end{proof}
   1896   in response to each kind of event.\smallskip
  1895 
       
  1896   Of course the main work for implementing PIP involves the
       
  1897   scheduler and coding how it should react to events.  Below we
       
  1898   outline how our formalisation guides this implementation for each
       
   1899   kind of event.\smallskip
       
  1900 *}
  1897 *}
  1901 
  1898 
  1902 text {*
  1899 text {*
  1903   \noindent
  1900   \noindent
  1904   \colorbox{mygrey}{@{term "Create th prio"}:} We assume that the current state @{text s} and 
  1901   \colorbox{mygrey}{@{term "Create th prio"}:} We assume that the current state @{text s} and 
  2011   not @{text "th"} and @{text "th'"} nothing needs to be changed, since we
  2008   not @{text "th"} and @{text "th'"} nothing needs to be changed, since we
  2012   can show
  2009   can show
  2013 
  2010 
  2014   \begin{isabelle}\ \ \ \ \ %%%
  2011   \begin{isabelle}\ \ \ \ \ %%%
  2015   @{text "If"} @{text "th'' \<noteq> th"} and 
  2012   @{text "If"} @{text "th'' \<noteq> th"} and 
  2016   @{text "th'' \<noteq> th'"}
  2013   @{text "th'' \<noteq> th'"} \;\;@{text then}\;\;
  2017   @{thm (concl) valid_trace_v_n.cp_kept[where ?th1.0="th''"]}
  2014   @{thm (concl) valid_trace_v_n.cp_kept[where ?th1.0="th''"]}
  2018   \hfill\numbered{fone}
  2015   \hfill\numbered{fone}
  2019   \end{isabelle}
  2016   \end{isabelle}
  2020   
  2017   
  2021   \noindent
  2018   \noindent
  2094   \end{tabular}
  2091   \end{tabular}
  2095   \end{isabelle}
  2092   \end{isabelle}
  2096 
  2093 
  2097   \noindent That means we have to add a waiting edge to the @{text
  2094   \noindent That means we have to add a waiting edge to the @{text
   2098   RAG}. Furthermore, the current precedences for all threads that are
   2095   RAG}. Furthermore, the current precedences for all threads that are
  2099   not ancestors of @{text "th"} are unchanged. For the ancestors of 
  2096   not ancestors of @{text "th"} (in the new @{text RAG} or @{text TDG}) are unchanged. For the ancestors of 
  2100   @{text th} we need
  2097   @{text th} we need
  2101   to follow the edges in the @{text TDG} and recompute the @{term
  2098   to follow the edges in the @{text TDG} and recompute the @{term
   2102   "cprecs"}. Whereas for all other events we might have to make
   2099   "cprecs"}. Whereas for all other events we might have to make
  2103   modifications to the @{text "RAG"}, no recalculation of @{text
  2100   modifications to the @{text "RAG"}, no recalculation of @{text
  2104   cprec} depends on the @{text RAG}. This is the only case where
  2101   cprec} depends on the @{text RAG}. This is the only case where
  2245   taken'', or 1 for being ``free''). In case the lock is taken, we enter the
  2242   taken'', or 1 for being ``free''). In case the lock is taken, we enter the
  2246   if-branch inserting the current thread into the waiting queue of this lock (Line 9).
  2243   if-branch inserting the current thread into the waiting queue of this lock (Line 9).
  2247   The waiting queue is referenced in the usual C-way as @{ML_text "&lock->wq"}. 
  2244   The waiting queue is referenced in the usual C-way as @{ML_text "&lock->wq"}. 
  2248   Next, we record that the current thread is waiting for the lock (Line 10).
  2245   Next, we record that the current thread is waiting for the lock (Line 10).
   2249   Thus we have established two pointers: one in the waiting queue of the lock pointing to the 
   2246   Thus we have established two pointers: one in the waiting queue of the lock pointing to the 
  2250   current thread, and the other from the currend thread pointing to the lock.
  2247   current thread, and the other from the current thread pointing to the lock.
  2251   According to our specification in Section~\ref{model} and the properties we were able 
  2248   According to our specification in Section~\ref{model} and the properties we were able 
  2252   to prove for @{text P}, we need to ``chase'' all the ancestor threads
  2249   to prove for @{text P}, we need to ``chase'' all the ancestor threads
  2253   in the @{text RAG} and update their
  2250   in the @{text RAG} and update their
   2254   current precedence; however, we only have to do this as long as there is a change in the 
   2251   current precedence; however, we only have to do this as long as there is a change in the 
  2255   current precedence.
  2252   current precedence.
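  To make the ``chasing'' loop concrete, it might be sketched in C
  roughly as follows. The structure fields and helper names are
  hypothetical and the fragment is not the actual PINTOS code; it only
  mirrors the early-termination property just described, namely that we
  stop as soon as an ancestor's current precedence does not change.

\begin{verbatim}
#include <stdbool.h>
#include <stddef.h>

struct precedence { int prio; int stamp; };

struct lock_t;                             /* forward declaration       */
struct thread_t {
  struct precedence  cprec;                /* current precedence        */
  struct lock_t     *waiting_for;          /* lock this thread waits on */
};
struct lock_t {
  struct thread_t   *holder;               /* thread holding the lock   */
};

/* assumed ordering on precedences (an assumption of this sketch) */
static bool prec_less(struct precedence a, struct precedence b)
{
  return a.prio < b.prio || (a.prio == b.prio && a.stamp > b.stamp);
}

/* After waiter has been put into the waiting queue of lock: walk up
   the chain of ancestors and boost their current precedence, stopping
   as soon as nothing changes any more.                                */
static void chase_ancestors(struct thread_t *waiter, struct lock_t *lock)
{
  struct precedence p = waiter->cprec;
  struct thread_t  *t = lock->holder;
  while (t != NULL && prec_less(t->cprec, p)) {
    t->cprec = p;                          /* inherit the higher precedence */
    t = t->waiting_for ? t->waiting_for->holder : NULL;
  }
}
\end{verbatim}

  The loop terminates because every thread waits for at most one
  resource and the @{text RAG} is a forest, so the chain of ancestors
  followed here is finite.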