--- a/Journal/Paper.thy Fri Dec 21 13:30:14 2012 +0000
+++ b/Journal/Paper.thy Fri Dec 21 18:06:00 2012 +0000
@@ -97,7 +97,7 @@
example in libraries for FreeBSD, Solaris and Linux.
One advantage of PIP is that increasing the priority of a thread
- can be dynamically calculated by the scheduler. This is in contrast
+ can be performed dynamically by the scheduler. This is in contrast
to, for example, \emph{Priority Ceiling} \cite{Sha90}, another
solution to the Priority Inversion problem, which requires static
analysis of the program in order to prevent Priority
@@ -183,7 +183,7 @@
sections can overlap.
*}
-section {* Formal Model of the Priority Inheritance Protocol *}
+section {* Formal Model of the Priority Inheritance Protocol\label{model} *}
text {*
The Priority Inheritance Protocol, short PIP, is a scheduling
@@ -1293,18 +1293,21 @@
using pointers, our experience with the implementation has been very positive: our specification
and formalisation of PIP translates smoothly to an efficient implementation in PINTOS.
Let us illustrate this with the C-code of the function {\tt lock\_acquire},
- shown in Figure~\ref{code}. This function implements the operation that
- the currently running thread asks for the lock of a resource.
- In C such a lock is represented as a pointer to the structure {\tt lock} (Line 1).
+ shown in Figure~\ref{code}. This function implements the operation of requesting and, if free,
+ locking of a resource by the currently running thread. The convention in the PINTOS
+ code is to use the terminology \emph{locks} rather than resources.
+ A lock is represented as a pointer to the structure {\tt lock} (Line 1).
- Lines 2 to 4 of {\tt lock\_acquire} contain diagnostic code: first, we check that
+ Lines 2 to 4 of the function @{ML_text "lock_acquire"} contain diagnostic code: first, we check that
the lock is a ``valid'' lock
- by testing it is not {\tt NULL}; second, we check that the code is not called
+ by testing whether it is not {\tt NULL}; second, we check that the code is not called
as part of an interrupt---acquiring a lock should only be initiated by a
request from a (user) thread, not an interrupt; third, we make sure the
current thread does not ask twice for a lock. These assertions are supposed
to be satisfied because of the assumptions in PINTOS about how this code is called.
- If not, then the assertions indicate a bug in PINTOS.
+ If not, then the assertions indicate a bug in PINTOS and the result will be
+ a \emph{kernel panic}. We took these three lines from the original code of
+ @{ML_text "lock_acquire"} in PINTOS.
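+
+ For illustration, these three assertions could take roughly the following
+ form, using PINTOS' @{ML_text ASSERT} macro together with the helper
+ functions @{ML_text "intr_context"} and @{ML_text "lock_held_by_current_thread"}
+ from the PINTOS code base:
+
+\begin{verbatim}
+  ASSERT (lock != NULL);
+  ASSERT (!intr_context ());
+  ASSERT (!lock_held_by_current_thread (lock));
+\end{verbatim}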
@@ -1351,35 +1354,48 @@
Lines 6 and 7 of {\tt lock\_acquire} make the operation of acquiring a lock atomic by disabling all
interrupts, but saving them for resumption at the end of the function (Line 31).
In Line 8, the interesting code with respect to scheduling starts: we
- test whether the lock is already taken (its value is then 0 indicating ``already
+ first check whether the lock is already taken (its value is then 0 indicating ``already
taken'', or 1 for being ``free''). In case the lock is taken, we enter the
- if-branch by first inserting the current thread into the waiting queue of this lock (Line 9).
+ if-branch by inserting the current thread into the waiting queue of this lock (Line 9).
The waiting queue is referenced in the usual C way as @{ML_text "&lock->wq"}.
Next we record that the current thread is waiting for the lock (Line 10).
- According to our specification, we need to next ``chase'' the dependants
- in the RAG (Resource Allocation Graph), but we can stop the ``chase'' whenever
- there is no change in the @{text cprec}. To start the loop, we
+ Thus we have established two pointers: one in the waiting queue of the lock pointing to the
+ current thread, and the other from the current thread pointing to the lock.
+ According to our specification in Section~\ref{model} and the properties we were able
+ to prove for @{text P}, we need to ``chase'' all the dependants
+ in the RAG (Resource Allocation Graph) and update their
+ current precedence; however, we only have to continue this ``chase'' as long as the
+ current precedence of the thread at hand actually changes.
+
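+ For illustration, the two pointers of Lines 9 and 10 could be set up roughly
+ as in the following sketch; the field name @{ML_text wq} is taken from the
+ actual code, whereas @{ML_text waiting} and the use of the PINTOS list
+ interface are assumptions on our part:
+
+\begin{verbatim}
+  /* Line 9: insert the current thread into the waiting queue */
+  list_push_back (&lock->wq, &thread_current ()->elem);
+
+  /* Line 10: record that the current thread waits for this lock */
+  thread_current ()->waiting = lock;
+\end{verbatim}
+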
+ The ``chase'' is implemented in the while-loop in Lines 13 to 24.
+ To initialise the loop, we
assign in Lines 11 and 12 the variable @{ML_text pt} to the owner
- of the lock, and then enter the while-loop in Lines 13 to 24.
-
+ of the lock.
Inside the loop, we first update the precedence of the lock held by @{ML_text pt} (Line 14).
- Next, we test if there is a change in the current precedence of @{ML_text pt}. If not,
- then we leave the loop since nothing else needs to be updated (Lines 15 and 16).
- If there is a change, then we have to continue our ``chase''. We check the lock
- @{ML_text pt} is waiting for (Lines 17 and 18). If there isn't one, the thread @{ML_text pt}
- is ready (the ``chase'' finished with finding a root) and we update the ready-queue accordingly
- (Lines 19 and 20). If there is a lock @{ML_text pt} is waiting for, we update the
- waiting queue for this lock and we continue the loop with the holder of that lock
- (Lines 22 and 23). Finally we need to block the current thread, since the lock
- it asked for was taken (Line 25).
+ Next, we check whether there is a change in the current precedence of @{ML_text pt}. If not,
+ then we leave the loop, since nothing else needs to be updated (Lines 15 and 16).
+ If there is a change, then we have to continue our ``chase''. We check what lock the
+ thread @{ML_text pt} is waiting for (Lines 17 and 18). If there is none, then
+ the thread @{ML_text pt} is ready (the ``chase'' has finished by finding a root). In this
+ case we update the ready-queue accordingly (Lines 19 and 20). If there is a lock @{ML_text pt} is
+ waiting for, we update the waiting queue for this lock and we continue the loop with
+ the holder of that lock
+ (Lines 22 and 23). After all current precedences have been updated, we finally need
+ to block the current thread, because the lock it asked for was taken (Line 25).
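+
+ In code, the ``chase'' could be sketched roughly as follows, where the helper
+ functions @{ML_text "update_cprec"}, @{ML_text "cprec_changed"},
+ @{ML_text "update_ready_queue"} and @{ML_text "update_waiting_queue"} stand in
+ for the corresponding operations in Figure~\ref{code}, but are not
+ necessarily the names used there:
+
+\begin{verbatim}
+  struct lock *l;
+  while (1) {
+    update_cprec (pt);            /* Line 14: update the precedence  */
+    if (!cprec_changed (pt))      /* Lines 15, 16: no change, so     */
+      break;                      /*   leave the loop                */
+    l = pt->waiting;              /* Lines 17, 18: the lock pt is    */
+    if (l == NULL) {              /*   waiting for, if any           */
+      update_ready_queue (pt);    /* Lines 19, 20: pt is ready       */
+      break;
+    }
+    update_waiting_queue (l);     /* Lines 22, 23: re-sort the queue */
+    pt = l->holder;               /*   and continue with the holder  */
+  }
+  thread_block ();                /* Line 25: block current thread   */
+\end{verbatim}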
- If the lock was not taken, we take the else-branch (Lines 26 to 30). We first
- decrease the value of the lock to 0, meaning it is taken now (Line 27).
- We update the reference of holder of the lock (Line 28) and finally update
- the queue of locks the current thread already possesses (Line 29).
- Finally, we enable interrupts again, leaving the protected section of PINTOS.
+ If the lock the current thread asked for is \emph{not} taken, we proceed with the else-branch
+ (Lines 26 to 30). We first decrease the value of the lock to 0, meaning
+ it is taken now (Line 27). Second, we update the reference of the holder of
+ the lock (Line 28), and finally update the queue of locks the current
+ thread already possesses (Line 29).
+ The very last step is to enable interrupts again, thus leaving the protected section of PINTOS.
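+
+ A corresponding sketch of the else-branch, with @{ML_text holder} being the
+ standard field of a PINTOS lock and the remaining field names being
+ assumptions on our part, is:
+
+\begin{verbatim}
+  lock->value--;                     /* Line 27: lock is now taken    */
+  lock->holder = thread_current ();  /* Line 28: record the holder    */
+  list_push_back (&thread_current ()->locks,
+                  &lock->elem);      /* Line 29: record the held lock */
+\end{verbatim}
+
+ Re-enabling the interrupts in Line 31 then amounts to restoring the interrupt
+ level saved at the beginning of the function, for instance via PINTOS'
+ @{ML_text "intr_set_level"}.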
+
+
Similar operations need to be implemented for the @{ML_text lock_release} function, which
- we omit.
+ we do not show here. The reader should note that we did not verify our C-code. The
+ verification of the specification, however, provided us with the justification for
+ designing the C-code in this way. In particular, it gave us the confidence that
+ leaving the ``chase'' early, whenever there is no change in the calculated current
+ precedence, is sound.
*}
section {* Conclusion *}
@@ -1420,8 +1436,7 @@
the next thread which takes over a lock is irrelevant for the correctness
of PIP. Moreover, we eliminated a crucial restriction present in
the proof of Sha et al.: they require that critical sections nest properly,
- whereas our scheduler allows critical sections to overlap. This is the default
- in implementations of PIP.
+ whereas our scheduler allows critical sections to overlap.
PIP is a scheduling algorithm for single-processor systems. We are
now living in a multi-processor world. Priority Inversion certainly