Journal/Paper.thy
changeset 135 9b5da0327d43
parent 134 8a13b37b4d95
child 136 fb3f52fe99d1
--- a/Journal/Paper.thy	Fri Jul 08 01:25:19 2016 +0100
+++ b/Journal/Paper.thy	Tue Jul 12 15:09:09 2016 +0100
@@ -185,8 +185,8 @@
   section ``{\it it resumes the priority it had at the point of entry
   into the critical section}''.  This error can also be found in the
   textbook \cite[Section 16.4.1]{LiYao03} where the authors write
-  ``{\it its priority is immediately lowered to the level originally assigned}'';
-  or in the 
+  about this process: ``{\it its priority is immediately lowered to the level originally assigned}'';
+  and also in the 
   more recent textbook \cite[Page 119]{Laplante11} where the authors
   state: ``{\it when [the task] exits the critical section that caused
   the block, it reverts to the priority it had when it entered that
@@ -201,13 +201,13 @@
   527]{Liu00}. Unfortunately, the important information about actually
   computing the priority to be restored solely from this log is not
   explained in \cite{Liu00} but left as an ``{\it excercise}'' to the
-  reader.  Of course, a correct version of PIP does not need to
+  reader.  As we shall see, a correct version of PIP does not need to
   maintain this (potentially expensive) data structure at
   all. Surprisingly also the widely read and frequently updated
   textbook \cite{Silberschatz13} gives the wrong specification. For
   example on Page 254 the authors write: ``{\it Upon releasing the
   lock, the [low-priority] thread will revert to its original
-  priority.}'' The same error is also repeated later in this textbook.
+  priority.}'' The same error is also repeated later in this popular textbook.
 
   
   While \cite{Laplante11,LiYao03,Liu00,book,Sha90,Silberschatz13} are the only
@@ -246,7 +246,7 @@
   in PINTOS.  This fact, however, is important for an efficient
   implementation of PIP, because we can give the lock to the thread
   with the highest priority so that it terminates more quickly.  We
-  were also being able to generalise the scheduler of Sha et
+  are also able to generalise the scheduler of Sha et
   al.~\cite{Sha90} to the practically relevant case where critical
   sections can overlap; see Figure~\ref{overlap} \emph{a)} below for
   an example of this restriction. In the existing literature there is
@@ -330,10 +330,17 @@
   whereby threads, priorities and (critical) resources are represented
   as natural numbers. The event @{term Set} models the situation that
   a thread obtains a new priority given by the programmer or
-  user (for example via the {\tt nice} utility under UNIX).  As in Paulson's work, we
-  need to define functions that allow us to make some observations
-  about states.  One, called @{term threads}, calculates the set of
-  ``live'' threads that we have seen so far:
+  user (for example via the {\tt nice} utility under UNIX).  For states
+  we define the following type-synonym:
+
+  \begin{isabelle}\ \ \ \ \ %%%
+  \isacommand{type\_synonym} @{text "state = event list"}
+  \end{isabelle}    
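+
+  \noindent To give a first taste of this representation (the concrete
+  event syntax is given here only for illustration), a state such as
+
+  \begin{isabelle}\ \ \ \ \ %%%
+  @{text "[P th cs, Set th prio]"}
+  \end{isabelle}
+
+  \noindent records that the thread @{text th} was first given the
+  priority @{text prio} and afterwards requested the resource
+  @{text cs}; the most recent event is kept at the head of the list.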
+
+  \noindent As in Paulson's work, we need to define functions that
+  allow us to make some observations about states.  One function,
+  called @{term threads}, calculates the set of ``live'' threads that
+  we have seen so far in a state:
 
   \begin{isabelle}\ \ \ \ \ %%%
   \mbox{\begin{tabular}{lcl}
@@ -417,7 +424,7 @@
   tasks involved in the inheritance process and thus minimises the number
   of potentially expensive thread-switches. 
 
-  We will also need counters for @{term P} and @{term V} events of a thread @{term th}
+  {\bf NEEDED?} We will also need counters for @{term P} and @{term V} events of a thread @{term th}
   in a state @{term s}. This can be straightforwardly defined in Isabelle as
 
   \begin{isabelle}\ \ \ \ \ %%%
@@ -521,7 +528,48 @@
   If there is no cycle, then every RAG can be pictured as a forrest of trees, as
   for example in Figure~\ref{RAGgraph}.
 
-  We will design our scheduler  
+  While a few formalisations of graphs are already implemented in
+  Isabelle, we chose to introduce our own library of graphs for
+  PIP. The reason for this is that we wanted to be able to reason about
+  potentially infinite graphs (in the sense of infinitely branching
+  and of infinite size): the property that our RAGs are forests of
+  finitely branching trees of finite depth should be something we can
+  \emph{prove} for our model of PIP---it should not be an assumption
+  we build into our model from the start. For our purposes the most
+  convenient representation of graphs turned out to be binary relations
+  given as sets of pairs, where the pairs stand for the edges of the
+  graphs. This relation-based representation is convenient since we can
+  directly use the standard transitive closure operations @{term "trancl DUMMY"} and
+  @{term "rtrancl DUMMY"}, as well as relation composition @{term "DUMMY O DUMMY"}.
+  A \emph{forest} is defined as a relation @{text rel} that is \emph{single valued}
+  and \emph{acyclic}:
+
+  \begin{isabelle}\ \ \ \ \ %%%
+  \begin{tabular}{@ {}l}
+  @{thm single_valued_def[where ?r="rel", THEN eq_reflection]}\\
+  @{thm acyclic_def[where ?r="rel", THEN eq_reflection]}
+  \end{tabular}
+  \end{isabelle} 
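+
+  \noindent For example, for distinct nodes @{text a}, @{text b} and
+  @{text c}, the relation @{text "{(a, b), (c, b)}"} is a forest
+  according to this definition: it is single valued, since every node
+  has at most one outgoing edge, and it is clearly acyclic. In
+  contrast, @{text "{(a, b), (a, c)}"} is not single valued (the node
+  @{text a} has two outgoing edges) and @{text "{(a, b), (b, a)}"} is
+  not acyclic.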
+
+  \noindent
+  The \emph{children}, \emph{subtree} and \emph{ancestors} of a node in a graph are 
+  defined as
+
+  \begin{isabelle}\ \ \ \ \ %%%
+  \begin{tabular}{@ {}l}
+  @{thm children_def[where ?r="rel" and ?x="node", THEN eq_reflection]}\\
+  @{thm subtree_def[where ?r="rel" and ?x="node", THEN eq_reflection]}\\
+  @{thm ancestors_def[where ?r="rel" and ?x="node", THEN eq_reflection]}\\
+  \end{tabular}
+  \end{isabelle}
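+
+  \noindent To illustrate these definitions, consider the relation
+  @{text "{(a, b), (c, b), (d, c)}"} over distinct nodes, which
+  according to the definitions above is a forest in which every edge
+  points from a node to its parent: taking this relation as
+  @{text rel}, we have @{text "children rel b = {a, c}"},
+  @{text "subtree rel b = {a, b, c, d}"} and
+  @{text "ancestors rel d = {c, b}"}.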
+  
+  \noindent Note that forests can contain trees of infinite depth and
+  nodes with infinitely many children.  A \emph{finite
+  forest} is a forest that is well-founded and in which every node has
+  only finitely many children (that is, it is finitely branching).
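+  For example, the relation that contains an edge from @{text "n + 1"}
+  to @{text n} for every natural number @{text n} is a finitely
+  branching forest, but not a well-founded one: its single tree, rooted
+  at @{text "0"}, has infinite depth. Conversely, the relation with an
+  edge from @{text n} to @{text "0"} for every @{text "n > 0"} is
+  well-founded, but its root @{text "0"} has infinitely many children.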
+
+
+  We will design our PIP-scheduler  
   so that every thread can be in the possession of several resources, that is 
   it has potentially several incoming holding edges in the RAG, but has at most one outgoing  
   waiting edge. The reason is that when a thread asks for resource that is locked