when a resource is released is irrelevant for PIP being correct---a
fact that has not been mentioned in the literature and has not been used
in the reference implementation of PIP in PINTOS. This fact, however, is important
for an efficient implementation of PIP, because we can give the lock
to the thread with the highest priority so that it terminates more
quickly. We were also able to generalise the scheduler of Sha
et al.~\cite{Sha90} to the practically relevant case where critical
sections can overlap.
*}

section {* Formal Model of the Priority Inheritance Protocol *}

\draw [<-,line width=0.6mm] (X) to node [pos=0.54,sloped,below=-0.5mm] {\fnt{}holding} (Z);
\draw [<-,line width=0.6mm] (X) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding} (Y);
\draw [<-,line width=0.6mm] (U2) to node [pos=0.54,sloped,above=-0.5mm] {\fnt{}holding} (R);
\end{tikzpicture}
\end{center}
\caption{An instance of a Resource Allocation Graph (RAG).\label{RAGgraph}}
\end{figure}

\noindent
We will design our scheduler
so that every thread can be in the possession of several resources, that is,
it can have several incoming holding edges in the RAG, but it has at most one outgoing
waiting edge. The reason is that when a thread asks for a resource that is already locked,
then the thread is blocked and cannot ask for another resource.
Clearly, also every resource can have at most one outgoing holding edge---indicating
that the resource is locked. In this way we can always start at a thread waiting for a
resource and ``chase'' the outgoing arrows, leading to a single root of a tree.

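Informally, if we write $\mathit{th} \rightarrow \mathit{cs}$ for a thread
@{text th} waiting for a resource @{text cs}, and $\mathit{cs} \rightarrow
\mathit{th}$ for @{text cs} being held by @{text th}, then such a ``chase''
always has the shape

\begin{center}
$\mathit{th}_0 \;\rightarrow\; \mathit{cs}_1 \;\rightarrow\; \mathit{th}_1
\;\rightarrow\; \mathit{cs}_2 \;\rightarrow\; \cdots \;\rightarrow\; \mathit{th}_n$
\end{center}

\noindent
where the thread $\mathit{th}_n$ at the end has no outgoing edge, that is, it is
not waiting for any resource. Since every node in the RAG has at most one
outgoing edge, this chain is uniquely determined by the thread $\mathit{th}_0$
we start from.
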
The use of relations for representing RAGs allows us to conveniently define
the notion of the \emph{dependants} of a thread using the transitive closure
operation for relations. This gives
If there are one or more threads ready, then there can only be \emph{one} thread
running, namely the one whose current precedence is equal to the maximum of all ready
threads. We use sets to capture both possibilities.
We can now also conveniently define the set of resources that are locked by a thread in a
given state and also when a thread is detached in a state (meaning the thread neither
holds nor waits for a resource---in the RAG this would correspond to an
isolated node without any incoming or outgoing edges, see Figure~\ref{RAGgraph}):

\begin{isabelle}\ \ \ \ \ %%%
\begin{tabular}{@ {}l}
@{thm holdents_def}\\
@{thm detached_def}
the state @{text "s' @ s"} by a thread @{text th'} that already
existed in @{text s} and requested or had a lock on at least
one resource---that means the thread was not \emph{detached} in @{text s}.
As we shall see shortly, this means there are only finitely
many threads that can block @{text th} in this way, and that they
need to run with the same precedence as @{text th}.

As in the argument by Sha et al., our
finite bound does not guarantee absence of indefinite Priority
Inversion. For this we further have to assume that every thread
gives up its resources after a finite amount of time. We found that
highest precedence is realised in our implementation by priority queues. We implemented
them as \emph{Braun trees} \cite{Paulson96}, which provide efficient @{text "O(log n)"}-operations
for accessing and updating. Apart from having to implement relatively complex data\-structures in C
using pointers, our experience with the implementation has been very positive: our specification
and formalisation of PIP translates smoothly to an efficient implementation in PINTOS.
Let us illustrate this with the C-code of the function {\tt lock\_acquire},
shown in Figure~\ref{code}. This function implements the operation of
the currently running thread requesting the lock of a resource.
In C such a lock is represented as a pointer to the structure {\tt lock} (Line 1).

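The listing in Figure~\ref{code} accesses several fields of the structures
{\tt lock} and {\tt thread} without showing their declarations. The following
sketch lists only the fields the code relies on; the queue types and the exact
declarations are merely assumed here for illustration and are not the actual
PINTOS definitions.

\begin{lstlisting}[numbers=none]
/* Illustrative field sketch only; not the actual PINTOS declarations. */
struct lock
  {
    int value;               /* 1 if the lock is free, 0 if taken          */
    struct thread *holder;   /* thread currently holding the lock          */
    struct heap wq;          /* priority queue of threads waiting for it   */
    struct heap_elem helem;  /* handle inside the holder's held-queue      */
  };

struct thread
  {
    struct lock *waiting;    /* the (at most one) lock this thread waits for  */
    struct heap held;        /* priority queue of locks this thread holds     */
    struct heap_elem helem;  /* handle inside waiting queues and ready queue  */
    /* ...precedence data consulted by the comparison functions...            */
  };
\end{lstlisting}
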
Lines 2 to 4 of {\tt lock\_acquire} contain diagnostic code: first, we check that
the lock is a ``valid'' lock
by testing that it is not {\tt NULL}; second, we check that the code is not called
as part of an interrupt---acquiring a lock should only be initiated by a
request from a (user) thread, not an interrupt; third, we make sure the
current thread does not ask twice for the same lock. These assertions are supposed
to be satisfied because of the assumptions in PINTOS about how this code is called.
If not, then the assertions indicate a bug in PINTOS.

  ASSERT (!lock_held_by_current_thread (lock));

  enum intr_level old_level;
  old_level = intr_disable();
  if (lock->value == 0) {
    queue_insert(thread_cprec, &lock->wq, &thread_current()->helem);
    thread_current()->waiting = lock;
    struct thread *pt;
    pt = lock->holder;
    while (pt) {
      queue_update(lock_cprec, &pt->held, &lock->helem);
      if (!(update_cprec(pt)))
        break;
      lock = pt->waiting;
      if (!lock) {
        queue_update(higher_cprec, &ready_queue, &pt->helem);
        break;
      };
      queue_update(thread_cprec, &lock->wq, &pt->helem);
      pt = lock->holder;
    };
    thread_block();
  } else {
    lock->value--;
    lock->holder = thread_current();
    queue_insert(lock_prec, &thread_current()->held, &lock->helem);
  };
  intr_set_level(old_level);
}
\end{lstlisting}
\caption{Our version of the {\tt lock\_acquire} function in
PINTOS. It implements the operation corresponding to a @{text P}-event.\label{code}}
\end{figure}

Lines 6 and 7 of {\tt lock\_acquire} make the operation of acquiring a lock atomic by disabling all
interrupts, but saving them for resumption at the end of the function (Line 31).
In Line 8, the interesting code with respect to scheduling starts: we
test whether the lock is already taken (its value is then 0 indicating ``already
taken'', or 1 for being ``free''). In case the lock is taken, we enter the
if-branch by first inserting the current thread into the waiting queue of this lock (Line 9).
The waiting queue is referenced in the usual C-way as @{ML_text "&lock->wq"}.
Next we record that the current thread is waiting for the lock (Line 10).
According to our specification, we next need to ``chase'' the dependants
in the RAG (Resource Allocation Graph), but we can stop the ``chase'' whenever
there is no change in the @{text cprec}. To start the loop, we
assign in Lines 11 and 12 the variable @{ML_text pt} to the owner
of the lock, and then enter the while-loop in Lines 13 to 24.

Inside the loop, we first update the precedence of the lock held by @{ML_text pt} (Line 14).
Next, we test whether there is a change in the current precedence of @{ML_text pt}. If not,
then we leave the loop, since nothing else needs to be updated (Lines 15 and 16).
If there is a change, then we have to continue our ``chase''. We check which lock
@{ML_text pt} is waiting for (Lines 17 and 18). If there is none, then the thread @{ML_text pt}
is ready (the ``chase'' has reached a root) and we update the ready-queue accordingly
(Lines 19 and 20). If there is a lock @{ML_text pt} is waiting for, we update the
waiting queue for this lock and continue the loop with the holder of that lock
(Lines 22 and 23). Finally, we need to block the current thread, since the lock
it asked for was taken (Line 25).

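The helper function @{ML_text update_cprec} used in Lines 15 and 16 is not
shown in Figure~\ref{code}. Its task, as described above, is to recompute the
current precedence of a thread and to report whether this value changed. A
minimal sketch of such a function is given below; the types, fields and helper
functions beyond those appearing in Figure~\ref{code} are assumptions made for
this illustration only, not the concrete PINTOS code.

\begin{lstlisting}[numbers=none]
/* Illustrative sketch only.  Recompute the current precedence of thread T
   as the maximum of its own precedence and the precedence recorded for the
   ``best'' lock it holds; return true if and only if the value changed. */
static bool
update_cprec (struct thread *t)
{
  struct prec old_cprec = t->cprec;
  struct prec cprec = t->prec;            /* start from the own precedence */
  struct lock *l = heap_top (&t->held);   /* held lock whose top waiter has
                                             the greatest current precedence */
  if (l != NULL && prec_gt (waiter_cprec (l), cprec))
    cprec = waiter_cprec (l);
  t->cprec = cprec;
  return !prec_eq (cprec, old_cprec);
}
\end{lstlisting}
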
|
If the lock was not taken, we take the else-branch (Lines 26 to 30). We first
decrease the value of the lock to 0, meaning it is taken now (Line 27).
We update the reference to the holder of the lock (Line 28) and finally update
the queue of locks the current thread already possesses (Line 29).
At the end, we enable interrupts again, leaving the protected section of PINTOS (Line 31).
Similar operations need to be implemented for the @{ML_text lock_release} function, which
we omit.
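
The listing relies on a small priority-queue interface, used via
@{ML_text queue_insert} and @{ML_text queue_update} together with comparison
functions such as @{ML_text thread_cprec}, @{ML_text lock_cprec},
@{ML_text higher_cprec} and @{ML_text lock_prec}. We do not reproduce the
corresponding declarations; the prototypes below are only meant to indicate
the assumed shape of this interface as it is used in Figure~\ref{code}.

\begin{lstlisting}[numbers=none]
/* Illustrative prototypes only; not the actual PINTOS declarations. */
typedef bool heap_less_func (const struct heap_elem *a,
                             const struct heap_elem *b);

/* Insert ELEM into the Braun-tree based priority queue Q,
   ordered by the comparison function LESS; O(log n). */
void queue_insert (heap_less_func *less, struct heap *q,
                   struct heap_elem *elem);

/* Restore the ordering of Q after the key of ELEM has changed;
   also O(log n). */
void queue_update (heap_less_func *less, struct heap *q,
                   struct heap_elem *elem);
\end{lstlisting}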
*}

section {* Conclusion *}

text {*