Journal/Paper.thy
changeset 11 8e02fb168350
parent 8 5ba3d79622da
child 12 85116bc854c0
--- a/Journal/Paper.thy	Thu Dec 20 12:23:44 2012 +0000
+++ b/Journal/Paper.thy	Thu Dec 20 14:54:06 2012 +0000
@@ -82,7 +82,8 @@
   rectified by enabling the \emph{Priority Inheritance Protocol} (PIP)
   \cite{Sha90}\footnote{Sha et al.~call it the \emph{Basic Priority
   Inheritance Protocol} \cite{Sha90} and others sometimes also call it
-  \emph{Priority Boosting} or \emph{Priority Donation}.} in the scheduling software.
+  \emph{Priority Boosting}, \emph{Priority Donation} or \emph{Priority Lending}.} 
+  in the scheduling software.
 
   The idea behind PIP is to let the thread $L$ temporarily inherit
   the high priority from $H$ until $L$ leaves the critical section
@@ -177,7 +178,9 @@
   in the reference implementation of PIP in PINTOS.  This fact, however, is important
   for an efficient implementation of PIP, because we can give the lock
   to the thread with the highest priority so that it terminates more
-  quickly.
+  quickly.  We were also able to generalise the scheduler of Sha
+  et al.~\cite{Sha90} to the practically relevant case where critical 
+  sections can overlap.
 *}
 
 section {* Formal Model of the Priority Inheritance Protocol *}
@@ -612,6 +615,12 @@
   \end{center}
 
   \noindent
+  Note, however, that apart from the circularity condition, we do not make any 
+  assumption on how different resources can be locked and released relative to each 
+  other. In our model it is possible that critical sections overlap. For example, a 
+  thread might lock a resource $cs_1$, then lock $cs_2$, and release $cs_1$ before 
+  releasing $cs_2$. This is in 
+  contrast to Sha et al.~\cite{Sha90} who require that critical sections are 
+  properly nested.
+
  A valid state of PIP can then conveniently be defined as follows:
 
   \begin{center}
@@ -1246,23 +1255,39 @@
   \hline
   {\bf Event} & {\bf PINTOS function} \\
   \hline
-  @{text Create} & @{text "thread_create"}\\
-  @{text Exit}   & @{text "thread_exit"}\\
-  @{text Set}    & @{text "thread_set_priority"}\\
-  @{text P}      & @{text "lock_acquire"}\\
-  @{text V}      & @{text "lock_release"}\\ 
+  @{text Create} & @{ML_text "thread_create"}\\
+  @{text Exit}   & @{ML_text "thread_exit"}\\
+  @{text Set}    & @{ML_text "thread_set_priority"}\\
+  @{text P}      & @{ML_text "lock_acquire"}\\
+  @{text V}      & @{ML_text "lock_release"}\\ 
   \hline
   \end{tabular}
   \end{center}
 
   \noindent
   Our implicit assumption that every event is an atomic operation is ensured by the architecture of 
-  PINTOS. The case where an unlocked resource is given next to the waiting thread with the
+  PINTOS (which allows interrupts to be disabled while certain operations are performed). The case where 
+  an unlocked resource is given next to the waiting thread with the
   highest precedence is realised in our implementation by priority queues. We implemented
   them as \emph{Braun trees} \cite{Paulson96}, which provide efficient @{text "O(log n)"}-operations
   for accessing and updating. Apart from having to implement relatively complex data\-structures in C
   using pointers, our experience with the implementation has been very positive: our specification 
   and formalisation of PIP translates smoothly to an efficent implementation in PINTOS. 
+  Let us illustrate this with the C-implementation of the function {\tt lock\_acquire}, 
+  shown in Figure~\ref{code}.  This function implements the operation by which
+  the currently running thread asks for the lock of a specific resource. 
+  In C such a lock is represented as a pointer to the structure {\tt lock} (Line 1). 
+
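+  The following sketch shows one possible layout of this structure. It is only an 
+  illustration: the field names {\tt value}, {\tt holder} and {\tt wq} are chosen to 
+  match the code fragments discussed below, and the type {\tt struct heap} for the 
+  waiting queue is an assumption, not part of our formalisation.
+
+  \begin{lstlisting}
+  struct lock {
+    int value;              /* 1 if the lock is free, 0 if it is taken    */
+    struct thread *holder;  /* thread currently holding the lock, if any  */
+    struct heap wq;         /* waiting queue, implemented as a Braun tree */
+  };
+  \end{lstlisting}
+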
+  Lines 2 to 4 of {\tt lock\_acquire} contain diagnostic code: first we check that 
+  the lock is a ``valid'' lock 
+  by testing that it is not {\tt NULL}; second we check that the code is not called
+  as part of an interrupt---acquiring a lock should only be initiated by a 
+  request from a (user) thread, not by an interrupt; third we make sure the 
+  current thread does not ask twice for the same lock. These assertions are supposed
+  to be satisfied because of the assumptions in PINTOS about how this code is called.
+  If they fail, then they indicate a bug in PINTOS.
+
+
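+  To give an impression of how these checks look in C, the following is a minimal 
+  sketch in the style of PINTOS; it assumes the standard PINTOS assertion macro 
+  {\tt ASSERT} and the helper functions {\tt intr\_context} and 
+  {\tt lock\_held\_by\_current\_thread}.
+
+  \begin{lstlisting}
+  /* diagnostic checks at the start of lock_acquire (Lines 2 to 4) */
+  ASSERT (lock != NULL);                          /* the lock is a valid pointer   */
+  ASSERT (!intr_context ());                      /* not called from an interrupt  */
+  ASSERT (!lock_held_by_current_thread (lock));   /* no second request by a holder */
+  \end{lstlisting}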
 
   \begin{figure}
   \begin{lstlisting}
@@ -1288,7 +1313,7 @@
         break;
       };
       heap_update(thread_preced, &lock->wq, &pt->helem);
-      pt = lock -> holder;
+      pt = lock->holder;
     };
     thread_block();
   } else {
@@ -1303,33 +1328,19 @@
   PINTOS.\label{code}}
   \end{figure}
 
-  Let us illustrate that our specification translates relatively smoothly
-  into C-code. The function {\tt lock\_aquire}, shown in Figure~\ref{code}, 
-  implements the operation that
-  the currently running thread asks for the lock of a specific resource. 
-  In C such a lock is represented as a pointer to the structure {\tt lock}. Lines 2 to 
-  4 contain diagnostic code for PINTOS: first we check that the lock is a ``valid'' lock 
-  by testing it is not {\tt NULL}; second we check that the code is not called
-  as part of an interrupt---aquiring a lock should have been only initiated by a 
-  request from a (user) thread, not an interrupt; third we make sure the 
-  current thread does not ask twice for a lock. These assertions are supposed
-  to be always satisfied because of the assumptions in PINTOS of how this code is called.
-  If not, then there is a bug in PINTOS.
-
-  Line 6 and 7 make the operation of aquiring a lock atomic by disabling all 
-  interrupts, but saving them for resumption at the ond of the function (Line 31).
-  In Line 8, the interesting code of this function starts: we 
-  test whether lock is already taken (its value is 0 for indicating ``already 
-  taken'' and 1 for being ``free''). In case the lock is taken, we need to
-  insert the current thread into the waiting queue of the lock (Line 9,
-  the waiting queue is referenced as @{ML_text "&lock-wq"}). The queues are
-  implemented as Braun Trees providing an heap interface, therefore
-  the function is called @{ML_text "heap_insert"}. Next we
-  record that the current thread is waiting for the lock (Line 10).
+ 
+  Lines 6 and 7 of {\tt lock\_acquire} make the operation of acquiring a lock atomic by disabling all 
+  interrupts, but saving the previous interrupt state so that it can be restored at the end 
+  of the function (Line 31).
+  In Line 8, the interesting code of the scheduler starts: we 
+  test whether the lock is already taken (a value of 0 indicates ``already 
+  taken'', and 1 ``free''). In case the lock is taken, we need to
+  insert the current thread into the waiting queue of this lock (Line 9;
+  the waiting queue is referenced as @{ML_text "&lock->wq"}). 
+  Next we record that the current thread is waiting for the lock (Line 10).
   According to our specification, we need to ``chase'' the holders of locks
-  in the @{text RAG} (Resource Allocation Graph). In Lines 11 and 12 we
-  assign to the variable @{ML_text pt} the owner of the lock, and enter
-  a while-loop in Lines 13 to 24, which implements the ``chase''.  
+  in the RAG (Resource Allocation Graph). For this we assign in Lines 11 and 12 
+  the variable @{ML_text pt} to the owner of the lock, and enter
+  the while-loop in Lines 13 to 24. This loop implements the ``chase''.  
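+
+  To summarise the overall shape of this part of {\tt lock\_acquire}, the following is a 
+  minimal sketch of the disable/restore idiom from Lines 6, 7 and 31 together with the 
+  test in Line 8. It is only schematic: the elided parts are indicated by comments, and 
+  the field name {\tt value} is taken from the description above.
+
+  \begin{lstlisting}
+  enum intr_level old_level = intr_disable ();  /* Lines 6 and 7: switch off interrupts */
+  if (lock->value == 0) {                       /* Line 8: the lock is already taken    */
+    /* Lines 9 to 24: enqueue the current thread in &lock->wq, record that it
+       is waiting for this lock, and chase the lock holders in the RAG */
+  } else {
+    /* the lock is free, so the current thread takes it */
+  }
+  intr_set_level (old_level);                   /* Line 31: restore the interrupt state */
+  \end{lstlisting}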
 
 *}
 
@@ -1366,10 +1377,13 @@
   architecture. It also gives the first author enough data to enable
   his undergraduate students to implement PIP (as part of their OS course).
   A byproduct of our formalisation effort is that nearly all
-  design choices for the PIP scheduler are backed up with a proved
+  design choices for the implementation of the PIP scheduler are backed up with a proved
   lemma. We were also able to establish the property that the choice of
   the next thread which takes over a lock is irrelevant for the correctness
-  of PIP. 
+  of PIP. Moreover, we eliminated a crucial restriction present in 
+  the proof of Sha et al.~\cite{Sha90}: they require that critical sections nest properly, 
+  whereas our scheduler allows critical sections to overlap. Allowing such overlaps 
+  is the default in implementations of PIP. 
 
   PIP is a scheduling algorithm for single-processor systems. We are
   now living in a multi-processor world. Priority Inversion certainly