(*<*)
theory Paper
imports CpsG ExtGG (* "~~/src/HOL/Library/LaTeXsugar" *) LaTeXsugar
begin
ML {*
open Printer;
show_question_marks_default := false;
*}

notation (latex output)
Cons ("_::_" [78,77] 73) and
vt ("valid'_state") and
runing ("running") and
birthtime ("last'_set") and
If ("(\<^raw:\textrm{>if\<^raw:}> (_)/ \<^raw:\textrm{>then\<^raw:}> (_)/ \<^raw:\textrm{>else\<^raw:}> (_))" 10) and
Prc ("'(_, _')") and
DUMMY ("\<^raw:\mbox{$\_\!\_$}>")
(*>*)

section {* Introduction *}

text {*
Many real-time systems need to support threads involving priorities and
locking of resources. Locking of resources ensures mutual exclusion
when accessing shared data or devices that cannot be
preempted. Priorities allow scheduling of threads that need to
finish their work within deadlines. Unfortunately, both features
can interact in subtle ways leading to a problem, called
\emph{Priority Inversion}. Suppose three threads having priorities
$H$(igh), $M$(edium) and $L$(ow). We would expect that the thread
$H$ blocks any other thread with lower priority and itself cannot
be blocked by any thread with lower priority. Alas, in a naive
implementation of resource locking and priorities this property can
be violated. Even worse, $H$ can be delayed indefinitely by
threads with lower priorities. For this let $L$ be in the
possession of a lock for a resource that also $H$ needs. $H$ must
therefore wait for $L$ to exit the critical section and release this
lock. The problem is that $L$ might in turn be blocked by any
thread with priority $M$, and so $H$ sits there potentially waiting
indefinitely. Since $H$ is blocked by threads with lower
priorities, the problem is called Priority Inversion. It was first
described in \cite{Lampson80} in the context of the
Mesa programming language designed for concurrent programming.

If the problem of Priority Inversion is ignored, real-time systems
can become unpredictable and resulting bugs can be hard to diagnose.
The classic example where this happened is the software that
controlled the Mars Pathfinder mission in 1997 \cite{Reeves98}.
Once the spacecraft landed, the software shut down at irregular
intervals leading to loss of project time as normal operation of the
craft could only resume the next day (the mission and data already
collected were fortunately not lost, because of a clever system
design). The reason for the shutdowns was that the scheduling
software fell victim to Priority Inversion: a low priority thread
locking a resource prevented a high priority thread from running in
time, leading to a system reset. Once the problem was found, it was
rectified by enabling the \emph{Priority Inheritance Protocol} (PIP)
\cite{Sha90}\footnote{Sha et al.~call it the \emph{Basic Priority
Inheritance Protocol} \cite{Sha90} and others sometimes also call it
\emph{Priority Boosting}.} in the scheduling software.

The idea behind PIP is to let the thread $L$ temporarily inherit
the high priority from $H$ until $L$ leaves the critical section
unlocking the resource. This solves the problem of $H$ having to
wait indefinitely, because $L$ cannot be blocked by threads having
priority $M$. While a few other solutions exist for the Priority
Inversion problem, PIP is one that is widely deployed and
implemented. This includes VxWorks (a proprietary real-time OS used
in the Mars Pathfinder mission, in Boeing's 787 Dreamliner, Honda's
ASIMO robot, etc.), but also the POSIX 1003.1c Standard realised for
example in libraries for FreeBSD, Solaris and Linux.

One advantage of PIP is that the increased priority of a thread
can be dynamically calculated by the scheduler. This is in contrast
to, for example, \emph{Priority Ceiling} \cite{Sha90}, another
solution to the Priority Inversion problem, which requires static
analysis of the program in order to prevent Priority
Inversion. However, there has also been strong criticism against
PIP. For instance, PIP cannot prevent deadlocks when lock
dependencies are circular, and also blocking times can be
substantial (more than just the duration of a critical section).
However, most criticism against PIP centres around unreliable
implementations and PIP being too complicated and too inefficient.
For example, Yodaiken writes in \cite{Yodaiken02}:

\begin{quote}
\it{}``Priority inheritance is neither efficient nor reliable. Implementations
are either incomplete (and unreliable) or surprisingly complex and intrusive.''
\end{quote}

\noindent
He suggests avoiding PIP altogether by not allowing critical
sections to be preempted. Unfortunately, this solution does not
help in real-time systems with low latency \emph{requirements}.

In our opinion, there is clearly a need for investigating correct
algorithms for PIP. A few specifications for PIP exist (in English)
and also a few high-level descriptions of implementations (e.g.~in
the textbook \cite[Section 5.6.5]{Vahalia96}), but they help little
with actual implementations. That this is a problem in practice is
proved by an email from Baker, who wrote on 13 July 2009 on the Linux
Kernel mailing list:

\begin{quote}
\it{}``I observed in the kernel code (to my disgust), the Linux PIP
implementation is a nightmare: extremely heavy weight, involving
maintenance of a full wait-for graph, and requiring updates for a
range of events, including priority changes and interruptions of
wait operations.''
\end{quote}

\noindent
The criticism by Yodaiken, Baker and others suggests to us to look
again at PIP from a more abstract level (but still concrete enough
to inform an implementation), and makes PIP an ideal candidate for a
formal verification. One reason, of course, is that the original
presentation of PIP~\cite{Sha90}, despite being informally
``proved'' correct, is actually \emph{flawed}.

Yodaiken \cite{Yodaiken02} points to a subtlety that had been
overlooked in the informal proof by Sha et al. They specify in
\cite{Sha90} that after the thread (whose priority has been raised)
completes its critical section and releases the lock, it ``returns
to its original priority level.'' This leads them to believe that an
implementation of PIP is ``rather straightforward''~\cite{Sha90}.
Unfortunately, as Yodaiken points out, this behaviour is too
simplistic. Consider the case where the low priority thread $L$
locks \emph{two} resources, and two high-priority threads $H$ and
$H'$ each wait for one of them. If $L$ then releases one resource
so that $H$, say, can proceed, then we still have Priority Inversion
with $H'$ (which waits for the other resource). The correct
behaviour for $L$ is to revert to the highest remaining priority of
the threads that it blocks. The advantage of formalising the
correctness of a high-level specification of PIP in a theorem prover
is that such issues clearly show up and cannot be overlooked as in
informal reasoning (since we have to analyse all possible behaviours
of threads, i.e.~\emph{traces}, that could possibly happen).
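
To make the subtlety concrete, the flawed behaviour can be traced through
a small, purely illustrative sequence of steps (the step numbering is ours
and not part of the formalisation):

\begin{enumerate}
\item $L$ acquires resources $cs_1$ and $cs_2$.
\item $H$ requests $cs_1$ and blocks; $L$ inherits $H$'s priority.
\item $H'$ requests $cs_2$ and blocks; $L$ now blocks both $H$ and $H'$.
\item $L$ releases $cs_1$ and $H$ proceeds. If $L$ now dropped back to its
\emph{original} priority, as stated in \cite{Sha90}, any thread with priority
$M$ could preempt $L$ while it still holds $cs_2$, and $H'$ would again be
subject to Priority Inversion.
\end{enumerate}

\noindent
The correct behaviour in the last step is for $L$ to continue with the
priority of $H'$, the highest priority among the threads it still blocks.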

There have been earlier formal investigations into PIP, but ...\cite{Faria08}

vt (valid trace) was introduced earlier, cite

distributed PIP

Paulson's method has not been used outside security field, except
work by Zhang et al.

no clue about multi-processor case according to \cite{Steinberg10}
*}

section {* Formal Model of the Priority Inheritance Protocol *}

text {*
The Priority Inheritance Protocol, short PIP, is a scheduling
algorithm for a single-processor system.\footnote{We shall come back
later to the case of PIP on multi-processor systems.} Our model of
PIP is based on Paulson's inductive approach to protocol
verification \cite{Paulson98}, where the \emph{state} of a system is
given by a list of events that happened so far. \emph{Events} fall
into five categories defined as the datatype

\begin{isabelle}\ \ \ \ \ %%%
\mbox{\begin{tabular}{r@ {\hspace{2mm}}c@ {\hspace{2mm}}l@ {\hspace{7mm}}l}
\isacommand{datatype} event
& @{text "="} & @{term "Create thread priority"}\\
& @{text "|"} & @{term "Exit thread"} \\
& @{text "|"} & @{term "Set thread priority"} & {\rm reset of the priority for} @{text thread}\\
& @{text "|"} & @{term "P thread cs"} & {\rm request of resource} @{text "cs"} {\rm by} @{text "thread"}\\
& @{text "|"} & @{term "V thread cs"} & {\rm release of resource} @{text "cs"} {\rm by} @{text "thread"}
\end{tabular}}
\end{isabelle}

\noindent
whereby threads, priorities and (critical) resources are represented
as natural numbers. The event @{term Set} models the situation that
a thread obtains a new priority given by the programmer or
user (for example via the {\tt nice} utility under UNIX). As in Paulson's work, we
need to define functions that allow one to make some observations
about states. One, called @{term threads}, calculates the set of
``live'' threads that we have seen so far in a state:

\begin{isabelle}\ \ \ \ \ %%%
\mbox{\begin{tabular}{lcl}
@{thm (lhs) threads.simps(1)} & @{text "\<equiv>"} &
@{thm (rhs) threads.simps(1)}\\
@{thm (lhs) threads.simps(2)[where thread="th"]} & @{text "\<equiv>"} &
@{thm (rhs) threads.simps(2)[where thread="th"]}\\
@{thm (lhs) threads.simps(3)[where thread="th"]} & @{text "\<equiv>"} &
@{thm (rhs) threads.simps(3)[where thread="th"]}\\
@{term "threads (DUMMY#s)"} & @{text "\<equiv>"} & @{term "threads s"}\\
\end{tabular}}
\end{isabelle}
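
\noindent
As a small, purely illustrative example (the concrete thread identifiers and
priorities are ours and not taken from the formalisation): assuming, as in
Paulson's approach, that the most recent event is at the head of the list, the
state $s = [\textit{P}\ 1\ cs_1,\ \textit{Create}\ 2\ 2,\ \textit{Create}\ 1\ 3]$
records that threads $1$ and $2$ were created, after which thread $1$ requested
resource $cs_1$; for this state @{term threads} returns the set $\{1, 2\}$,
since no @{term Exit} event has occurred.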

\noindent
Another calculates the priority for a thread @{text "th"}, defined as

\begin{isabelle}\ \ \ \ \ %%%
\mbox{\begin{tabular}{lcl}
@{thm (lhs) original_priority.simps(1)[where thread="th"]} & @{text "\<equiv>"} &
@{thm (rhs) original_priority.simps(1)[where thread="th"]}\\
@{thm (lhs) original_priority.simps(2)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} &
@{thm (rhs) original_priority.simps(2)[where thread="th" and thread'="th'"]}\\
@{thm (lhs) original_priority.simps(3)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} &
@{thm (rhs) original_priority.simps(3)[where thread="th" and thread'="th'"]}\\
@{term "original_priority th (DUMMY#s)"} & @{text "\<equiv>"} & @{term "original_priority th s"}\\
\end{tabular}}
\end{isabelle}

\noindent
In this definition we set @{text 0} as the default priority for
threads that have not (yet) been created. The last function we need
calculates the ``time'', or index, at which a thread last had its
priority set.

\begin{isabelle}\ \ \ \ \ %%%
\mbox{\begin{tabular}{lcl}
@{thm (lhs) birthtime.simps(1)[where thread="th"]} & @{text "\<equiv>"} &
@{thm (rhs) birthtime.simps(1)[where thread="th"]}\\
@{thm (lhs) birthtime.simps(2)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} &
@{thm (rhs) birthtime.simps(2)[where thread="th" and thread'="th'"]}\\
@{thm (lhs) birthtime.simps(3)[where thread="th" and thread'="th'"]} & @{text "\<equiv>"} &
@{thm (rhs) birthtime.simps(3)[where thread="th" and thread'="th'"]}\\
@{term "birthtime th (DUMMY#s)"} & @{text "\<equiv>"} & @{term "birthtime th s"}\\
\end{tabular}}
\end{isabelle}

\footnote{Can Precedence be the real birth-time / or must be time precedence last set?}

\noindent
A \emph{precedence} of a thread @{text th} in a state @{text s} is a pair of
natural numbers defined as

\begin{isabelle}\ \ \ \ \ %%%
@{thm (rhs) preced_def[where thread="th"]}
\end{isabelle}

\noindent
The point of precedences is to schedule threads not according to priorities (what should
we do in case two threads have the same priority?), but according to precedences.
Precedences allow us to always discriminate between two threads with equal priority by
taking into account the time when the priority was last set. We order precedences so
that threads with the same priority get a higher precedence if their priority has been
set earlier, since for such threads it is more urgent to finish. In an implementation
this choice would translate to a quite natural FIFO-scheduling of processes with
the same priority.
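
\noindent
As a small, purely illustrative example (the concrete numbers are ours): if in a
state @{text s} both @{text "th\<^isub>1"} and @{text "th\<^isub>2"} have priority
@{text 3}, but @{text "th\<^isub>1"} last had its priority set at time @{text 2}
and @{text "th\<^isub>2"} at time @{text 5}, then their precedences are the pairs
@{text "(3, 2)"} and @{text "(3, 5)"} respectively, and the ordering sketched
above regards @{text "th\<^isub>1"}'s precedence as the higher one, since its
priority was set earlier.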

Next, we introduce the concept of \emph{waiting queues}. They are
lists of threads associated with every resource. The first thread in
this list is the one that is in possession of the
``lock'' of the corresponding resource. We model waiting queues as
functions, below abbreviated as @{text wq}, taking a resource as
argument and returning a list of threads. This allows us to define
when a thread \emph{holds}, respectively \emph{waits} for, a
resource @{text cs}.

\begin{isabelle}\ \ \ \ \ %%%
\begin{tabular}{@ {}l}
@{thm cs_holding_def[where thread="th"]}\\
@{thm cs_waiting_def[where thread="th"]}
\end{tabular}
\end{isabelle}
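
\noindent
For example (an illustration of the intended reading, not a definition from the
formalisation): if the waiting queue of a resource @{text cs} is the list
@{text "wq cs = [th\<^isub>1, th\<^isub>2, th\<^isub>3]"}, then @{text "th\<^isub>1"}
holds the ``lock'' of @{text cs}, while @{text "th\<^isub>2"} and @{text "th\<^isub>3"}
wait for it; as soon as @{text "th\<^isub>1"} releases @{text cs}, one of the
waiting threads can take over the lock.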


resources

step relation:

\begin{center}
\begin{tabular}{c}
@{thm[mode=Rule] thread_create[where thread=th]}\hspace{1cm}
@{thm[mode=Rule] thread_exit[where thread=th]}\medskip\\

@{thm[mode=Rule] thread_P[where thread=th]}\medskip\\
@{thm[mode=Rule] thread_V[where thread=th]}\\
\end{tabular}
\end{center}

valid state:

\begin{center}
\begin{tabular}{c}
@{thm[mode=Axiom] vt_nil[where cs=step]}\hspace{1cm}
@{thm[mode=Rule] vt_cons[where cs=step]}
\end{tabular}
\end{center}


To define events, the identifiers of {\em threads},
{\em priority} and {\em critical resources} (abbreviated as @{text "cs"})
need to be represented. All three are represented using the standard
Isabelle/HOL type @{typ "nat"}:
*}

text {*
\bigskip
The priority inversion phenomenon was first published in
\cite{Lampson80}. The two protocols widely used to eliminate
priority inversion, namely PI (Priority Inheritance) and PCE
(Priority Ceiling Emulation), were proposed in \cite{Sha90}. PCE is
less convenient to use because it requires static analysis of
programs. Therefore, PI is more commonly used in
practice~\cite{locke-july02}. However, as pointed out in the
literature, the analysis of the priority inheritance protocol is quite
subtle~\cite{yodaiken-july02}. A formal analysis will certainly be
helpful for us to understand and correctly implement PI. All
existing formal analyses of PI
\cite{conf/fase/JahierHR09,WellingsBSB07,Faria08} are based on
model checking technology. Because of the state explosion problem,
model checking is much like an exhaustive testing of finite models of
limited size. The results obtained cannot be safely generalised to
models of arbitrarily large size. Worse still, since model
checking is fully automatic, it gives little insight into why the
formal model is correct. It is therefore definitely desirable to
analyse PI using theorem proving, which gives more general results
as well as deeper insight. And this is the purpose of this paper,
which gives a formal analysis of PI in the interactive theorem
prover Isabelle using Higher Order Logic (HOL). The formalisation
focuses on two issues:

\begin{enumerate}
\item The correctness of the protocol model itself. A series of desirable properties is
derived until we are fully convinced that the formal model of PI does
eliminate priority inversion. A better understanding of PI is obtained
in due course. For example, we find through the formalisation that the choice of
the next thread to take hold when a
resource is released is irrelevant for the very basic property of PI to hold,
a point never mentioned in the literature.
\item The correctness of the implementation. A series of properties is derived, the meaning
of which can be used as guidelines on how PI can be implemented efficiently and correctly.
\end{enumerate}

The rest of the paper is organised as follows: Section \ref{overview} gives an overview
of PI. Section \ref{model} introduces the formal model of PI. Section \ref{general}
discusses a series of basic properties of PI. Section \ref{extension} shows formally
how priority inversion is controlled by PI. Section \ref{implement} gives properties
which can be used as guidelines for an implementation. Section \ref{related} discusses
related work. Section \ref{conclusion} concludes the whole paper.

The basic priority inheritance protocol has two problems:

It does not prevent a deadlock from happening in a program with circular lock dependencies.

A chain of blocking may be formed; blocking duration can be substantial, though bounded.


Contributions

Despite the wide use of the Priority Inheritance Protocol in real-time operating
systems, its correctness has never been formally proved and mechanically checked.
All existing verifications are based on model checking technology. Fully automatic
verification gives little help in understanding why the protocol is correct.
Moreover, the results so obtained only apply to models of limited size.
This paper presents a formal verification based on theorem proving.
A machine-checked formal proof does help to get a deeper understanding. We found
a fact not mentioned in the literature, namely that the choice of the next
thread to take over when a critical resource is released does not affect the correctness
of the protocol. The paper also shows how formal proofs can help to construct a
correct and efficient implementation.\bigskip
*}

section {* An overview of priority inversion and priority inheritance \label{overview} *}

text {*

Priority inversion refers to the phenomenon where a thread with high priority is blocked
by a thread with low priority. Priority inversion happens when the high priority thread
requests some critical resource already taken by the low priority thread. Since the high
priority thread has to wait for the low priority thread to complete, it is said to be
blocked by the low priority thread. Priority inversion might prevent the high priority
thread from fulfilling its task in time if the duration of the priority inversion is indefinite
and unpredictable. Indefinite priority inversion happens when an indefinite number
of threads with medium priorities are activated during the period when the high
priority thread is blocked by the low priority thread. Although these medium
priority threads cannot preempt the high priority thread directly, they are able
to preempt the low priority thread and cause it to stay in its critical section for
an indefinitely long duration. In this way, the high priority thread may be blocked indefinitely.

Priority inheritance is one protocol proposed to avoid indefinite priority inversion.
The basic idea is to let the high priority thread donate its priority to the low priority
thread holding the critical resource, so that it will not be preempted by medium priority
threads. The thread with the highest priority will not be blocked unless it is requesting
some critical resource already taken by other threads. Viewed from a different angle,
any thread which is able to block the highest priority thread must already hold some
critical resource. Furthermore, it must have held some critical resource at the
moment the highest priority thread was created; otherwise, it would never get a chance
to run and acquire one. Since the number of such resource-holding lower priority threads is finite,
if every one of them finishes its own critical section within a definite duration,
the duration for which the highest priority thread is blocked is definite as well. The key to
guaranteeing that lower priority threads finish within a definite duration is to donate to them
the highest priority. In such cases, the lower priority threads are said to have inherited the
highest priority. This explains the name of the protocol,
{\em Priority Inheritance}, and how Priority Inheritance prevents indefinite delay.

The objectives of this paper are:
\begin{enumerate}
\item Build the above mentioned idea into a formal model and prove a series of properties
until we are convinced that the formal model does fulfill the original idea.
\item Show how formally derived properties can be used as guidelines for a correct
and efficient implementation.
\end{enumerate}
The proof is fully formal in the sense that every detail is reduced to the
very first principles of Higher Order Logic. The nature of interactive theorem
proving is for the human user to persuade a computer program to accept their arguments.
A clear and simple understanding of the problem at hand is both a prerequisite and a
byproduct of such an effort, because everything finally has to be reduced to
first principles to be checked mechanically. The intuitive explanation of
Priority Inheritance given above is just such a byproduct.
*}

section {* Formal model of Priority Inheritance \label{model} *}
text {*
\input{../../generated/PrioGDef}
*}

section {* General properties of Priority Inheritance \label{general} *}

text {*
The following are several very basic properties:
\begin{enumerate}
\item All running threads must be ready (@{text "runing_ready"}):
@{thm[display] "runing_ready"}
\item All ready threads must be living (@{text "readys_threads"}):
@{thm[display] "readys_threads"}
\item There are finitely many living threads at any moment (@{text "finite_threads"}):
@{thm[display] "finite_threads"}
\item Every waiting queue does not contain duplicated elements (@{text "wq_distinct"}):
@{thm[display] "wq_distinct"}
\item All threads in waiting queues are living threads (@{text "wq_threads"}):
@{thm[display] "wq_threads"}
\item The only events that can get a thread into a waiting queue are @{term "P"}-events
(@{text "block_pre"}):
@{thm[display] "block_pre"}
\item A thread may never wait for two different critical resources
(@{text "waiting_unique"}):
@{thm[display] waiting_unique[of _ _ "cs\<^isub>1" "cs\<^isub>2"]}
\item Every resource can only be held by one thread
(@{text "held_unique"}):
@{thm[display] held_unique[of _ "th\<^isub>1" _ "th\<^isub>2"]}
\item Every living thread has a unique precedence
(@{text "preced_unique"}):
@{thm[display] preced_unique[of "th\<^isub>1" _ "th\<^isub>2"]}
\end{enumerate}
*}

text {* \noindent
The following lemmas show how the RAG is changed by the execution of events:
\begin{enumerate}
\item Execution of @{term "Set"} does not change the RAG (@{text "depend_set_unchanged"}):
@{thm[display] depend_set_unchanged}
\item Execution of @{term "Create"} does not change the RAG (@{text "depend_create_unchanged"}):
@{thm[display] depend_create_unchanged}
\item Execution of @{term "Exit"} does not change the RAG (@{text "depend_exit_unchanged"}):
@{thm[display] depend_exit_unchanged}
\item Execution of @{term "P"} (@{text "step_depend_p"}):
@{thm[display] step_depend_p}
\item Execution of @{term "V"} (@{text "step_depend_v"}):
@{thm[display] step_depend_v}
\end{enumerate}
*}

text {* \noindent
These properties are used to derive the following important results about the RAG:
\begin{enumerate}
\item The RAG is loop free (@{text "acyclic_depend"}):
@{thm [display] acyclic_depend}
\item RAGs are finite (@{text "finite_depend"}):
@{thm [display] finite_depend}
\item Reverse paths in the RAG are well founded (@{text "wf_dep_converse"}):
@{thm [display] wf_dep_converse}
\item The dependence relation represented by the RAG has a tree structure (@{text "unique_depend"}):
@{thm [display] unique_depend[of _ _ "n\<^isub>1" "n\<^isub>2"]}
\item All threads in the RAG are living threads
(@{text "dm_depend_threads"} and @{text "range_in"}):
@{thm [display] dm_depend_threads range_in}
\end{enumerate}
*}

text {* \noindent
The following lemmas show how every node in the RAG can be chased to a ready thread:
\begin{enumerate}
\item Every node in the RAG can be chased to a ready thread (@{text "chain_building"}):
@{thm [display] chain_building[rule_format]}
\item The ready thread chased to is unique (@{text "dchain_unique"}):
@{thm [display] dchain_unique[of _ _ "th\<^isub>1" "th\<^isub>2"]}
\end{enumerate}
*}

text {* \noindent
Properties about @{term "next_th"}:
\begin{enumerate}
\item The thread taking over is different from the thread which is releasing
(@{text "next_th_neq"}):
@{thm [display] next_th_neq}
\item The thread taking over is unique
(@{text "next_th_unique"}):
@{thm [display] next_th_unique[of _ _ _ "th\<^isub>1" "th\<^isub>2"]}
\end{enumerate}
*}

text {* \noindent
Some deeper results about the system:
\begin{enumerate}
\item There can only be one running thread (@{text "runing_unique"}):
@{thm [display] runing_unique[of _ "th\<^isub>1" "th\<^isub>2"]}
\item The maximum of the @{term "cp"}-values and the maximum of the @{term "preced"}-values are equal (@{text "max_cp_eq"}):
@{thm [display] max_cp_eq}
\item There must be one ready thread having the maximal @{term "cp"}-value
(@{text "max_cp_readys_threads"}):
@{thm [display] max_cp_readys_threads}
\end{enumerate}
*}

text {* \noindent
The relationship between the count of @{text "P"} and @{text "V"} operations and the number of
critical resources held by a thread is given as follows:
\begin{enumerate}
\item The @{term "V"}-operation decreases the number of critical resources
a thread holds (@{text "cntCS_v_dec"}):
@{thm [display] cntCS_v_dec}
\item The number of @{text "V"} operations never exceeds the number of @{text "P"} operations
(@{text "cnp_cnv_cncs"}):
@{thm [display] cnp_cnv_cncs}
\item The number of @{text "V"} operations equals the number of @{text "P"} operations when
the relevant thread is not living
(@{text "cnp_cnv_eq"}):
@{thm [display] cnp_cnv_eq}
\item When a thread is not living, it does not hold any critical resource
(@{text "not_thread_holdents"}):
@{thm [display] not_thread_holdents}
\item When the number of @{text "P"} operations equals the number of @{text "V"} operations, the relevant
thread does not hold any critical resource, and therefore no thread can depend on it
(@{text "count_eq_dependents"}):
@{thm [display] count_eq_dependents}
\end{enumerate}
*}

section {* Key properties \label{extension} *}

(*<*)
context extend_highest_gen
begin
(*>*)

text {*
The essence of {\em Priority Inheritance} is to avoid indefinite priority inversion. For this
purpose, we need to investigate what happens after one thread takes the highest precedence.
A locale is used to describe such a situation, which assumes:
\begin{enumerate}
\item @{term "s"} is a valid state (@{text "vt_s"}):
@{thm vt_s}.
\item @{term "th"} is a living thread in @{term "s"} (@{text "threads_s"}):
@{thm threads_s}.
\item @{term "th"} has the highest precedence in @{term "s"} (@{text "highest"}):
@{thm highest}.
\item The precedence of @{term "th"} is @{term "Prc prio tm"} (@{text "preced_th"}):
@{thm preced_th}.
\end{enumerate}
*}

text {* \noindent
Under these assumptions, some basic properties can be derived for @{term "th"}:
\begin{enumerate}
\item The current precedence of @{term "th"} equals its own precedence (@{text "eq_cp_s_th"}):
@{thm [display] eq_cp_s_th}
\item The current precedence of @{term "th"} is the highest precedence in
the system (@{text "highest_cp_preced"}):
@{thm [display] highest_cp_preced}
\item The precedence of @{term "th"} is the highest precedence
in the system (@{text "highest_preced_thread"}):
@{thm [display] highest_preced_thread}
\item The current precedence of @{term "th"} is the highest current precedence
in the system (@{text "highest'"}):
@{thm [display] highest'}
\end{enumerate}
*}

text {* \noindent
To analyse what happens after state @{term "s"}, a sub-locale is defined, which
assumes:
\begin{enumerate}
\item @{term "t"} is a valid extension of @{term "s"} (@{text "vt_t"}): @{thm vt_t}.
\item Any thread created in @{term "t"} has a priority no higher than @{term "prio"}; therefore
its precedence cannot be higher than that of @{term "th"}, and therefore
@{term "th"} remains the one with the highest precedence
(@{text "create_low"}):
@{thm [display] create_low}
\item Any adjustment of priority in
@{term "t"} does not happen to @{term "th"} and
the priority set is no higher than @{term "prio"}; therefore
@{term "th"} remains the one with the highest precedence (@{text "set_diff_low"}):
@{thm [display] set_diff_low}
\item Since we are investigating what happens to @{term "th"}, it is assumed that
@{term "th"} does not exit during @{term "t"} (@{text "exit_diff"}):
@{thm [display] exit_diff}
\end{enumerate}
*}

text {* \noindent
All these assumptions are put into a predicate @{term "extend_highest_gen"}.
It can be proved that @{term "extend_highest_gen"} holds
for any moment @{text "i"} in @{term "t"} (@{text "red_moment"}):
@{thm [display] red_moment}

From this, an induction principle can be derived for @{text "t"}, so that
properties already derived for @{term "t"} can be applied to any prefix
of @{text "t"} in the proof of new properties
about @{term "t"} (@{text "ind"}):
\begin{center}
@{thm[display] ind}
\end{center}

The following properties can be proved about @{term "th"} in @{term "t"}:
\begin{enumerate}
\item In @{term "t"}, thread @{term "th"} is kept live and its
precedence is preserved as well
(@{text "th_kept"}):
@{thm [display] th_kept}
\item In @{term "t"}, thread @{term "th"}'s precedence is always the maximum among
all living threads
(@{text "max_preced"}):
@{thm [display] max_preced}
\item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum precedence
among all living threads
(@{text "th_cp_max_preced"}):
@{thm [display] th_cp_max_preced}
\item In @{term "t"}, thread @{term "th"}'s current precedence is always the maximum current
precedence among all living threads
(@{text "th_cp_max"}):
@{thm [display] th_cp_max}
\item In @{term "t"}, thread @{term "th"}'s current precedence equals its precedence at moment
@{term "s"}
(@{text "th_cp_preced"}):
@{thm [display] th_cp_preced}
\end{enumerate}
*}

text {* \noindent
The main theorem of this part characterises the running threads during @{term "t"}
(@{text "runing_inversion_2"}):
@{thm [display] runing_inversion_2}
According to this, if a thread is running, it is either @{term "th"} or was
already live and held some resource
at moment @{text "s"} (expressed by: @{text "cntV s th' < cntP s th'"}).

Since there are only finitely many threads live and holding some resource at any moment,
if every such thread can release all its resources within a finite duration, then after a finite
duration none of them can block @{term "th"} anymore; so no priority inversion can happen
then.
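
As a rough, back-of-the-envelope consequence (this bound is our own reading and is
not part of the formalisation): if at moment @{text "s"} there are @{text "n"} such
resource-holding threads and each of them releases all its resources within at most
@{text "B"} time units once it runs, then on a single processor @{term "th"} can be
delayed by them for at most @{text "n * B"} time units, after which only @{term "th"}
itself can be the running thread.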
*}

(*<*)
end
(*>*)

section {* Properties to guide implementation \label{implement} *}

text {*
The properties (especially @{text "runing_inversion_2"}) convinced us that the model defined
in Section \ref{model} does prevent indefinite priority inversion and therefore fulfills
the fundamental requirement of the Priority Inheritance protocol. Another purpose of this paper
is to show how this model can be used to guide a concrete implementation. As discussed in
Section 5.6.5 of \cite{Vahalia96}, the implementation of Priority Inheritance in Solaris
uses sophisticated linked data structures. Apart from discussing two scenarios to show how
these data structures should be manipulated, a lot of details of the implementation are missing.
In \cite{Faria08,conf/fase/JahierHR09,WellingsBSB07} the protocol is described formally
using different notations, but little information is given on how this protocol can be
implemented efficiently; in particular, there is no information on how these data structures
should be manipulated.

Because the scheduling of threads is based on current precedence,
the central issue in an implementation of Priority Inheritance is how to compute the precedence
correctly and efficiently. As long as the precedence is correct, it is very easy to
modify the scheduling algorithm to select the correct thread to execute.

First, it can be proved that the computation of the current precedence @{term "cp"} of a thread
only involves its children (@{text "cp_rec"}):
@{thm [display] cp_rec}
where @{term "children s th"} represents the set of children of @{term "th"} in the current
RAG:
\[
@{thm (lhs) children_def} @{text "\<equiv>"} @{thm (rhs) children_def}
\]
where the definition of @{term "child"} is:
\[ @{thm (lhs) child_def} @{text "\<equiv>"} @{thm (rhs) child_def}
\]
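
\noindent
To illustrate the recursion that @{text "cp_rec"} suggests (the concrete numbers are
ours and only meant as an illustration): if in the current RAG thread @{text "th"}
has exactly two children @{text "th\<^isub>1"} and @{text "th\<^isub>2"}, i.e.~two
threads waiting for resources held by @{text "th"}, with
@{text "cp s th\<^isub>1 = (3, 2)"} and @{text "cp s th\<^isub>2 = (4, 5)"}, and
@{text "th"}'s own precedence is @{text "(2, 7)"}, then @{text "cp s th"} is the
maximum of these three precedences, namely @{text "(4, 5)"}; the @{term "cp"}-values
of threads further down in the RAG do not have to be inspected.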

The aim of this section is to fill in the missing details of how the current precedence should
be changed with the occurrence of events, with each event type treated in one subsection,
where the computation of @{term "cp"} uses lemma @{text "cp_rec"}.
*}

subsection {* Event @{text "Set th prio"} *}

(*<*)
context step_set_cps
begin
(*>*)

text {*
The context under which event @{text "Set th prio"} happens is formalized as follows:
\begin{enumerate}
\item The formation of @{term "s"} (@{text "s_def"}): @{thm s_def}.
\item State @{term "s"} is a valid state (@{text "vt_s"}): @{thm vt_s}. This implies that
event @{text "Set th prio"} is eligible to happen under state @{term "s'"} and that
state @{term "s'"} is a valid state.
\end{enumerate}
*}

text {* \noindent
Under such a context, we investigated how the current precedence @{term "cp"} of
threads changes from state @{term "s'"} to @{term "s"} and obtained the following
conclusions:
\begin{enumerate}
%% \item The RAG does not change (@{text "eq_dep"}): @{thm "eq_dep"}.
\item All threads with no dependence relation with thread @{term "th"} have their
@{term "cp"}-value unchanged (@{text "eq_cp"}):
@{thm [display] eq_cp}
This lemma implies that the @{term "cp"}-value of @{term "th"}
and of those threads which have a dependence relation with @{term "th"} might need
to be recomputed. The way to do this is to start from @{term "th"}
and follow the @{term "depend"}-chain to recompute the @{term "cp"}-value of every
encountered thread using lemma @{text "cp_rec"}
(a small illustration of this procedure is given after this list).
Since the @{term "depend"}-relation is loop free, this procedure
always terminates. The following lemmas show that the procedure can actually stop earlier.
\item The following two lemmas show that if the re-computation for a thread
gives an unchanged @{term "cp"}-value, the procedure described above can stop.
\begin{enumerate}
\item Lemma @{text "eq_up_self"} shows that if the re-computation of
@{term "th"}'s @{term "cp"} gives the same result, the procedure can stop:
@{thm [display] eq_up_self}
\item Lemma @{text "eq_up"} shows that if the re-computation at an intermediate thread
gives an unchanged result, the procedure can stop:
@{thm [display] eq_up}
\end{enumerate}
\end{enumerate}
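
\noindent
As a small illustration of this procedure (our own reading of the two lemmas above,
with a made-up chain): suppose in the RAG thread @{text "th"} waits for a resource
held by @{text "th\<^isub>1"}, which in turn waits for a resource held by
@{text "th\<^isub>2"}. After @{text "Set th prio"}, one first recomputes
@{text "cp"} of @{text "th"}; if it is unchanged, by @{text "eq_up_self"} no other
@{text "cp"}-value changes and the procedure stops. Otherwise one recomputes
@{text "cp"} of @{text "th\<^isub>1"}; if that value is unchanged, by @{text "eq_up"}
the @{text "cp"}-value of @{text "th\<^isub>2"} is unchanged as well and the
procedure stops there.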
*}

(*<*)
end
(*>*)

subsection {* Event @{text "V th cs"} *}

(*<*)
context step_v_cps_nt
begin
(*>*)

text {*
The context under which event @{text "V th cs"} happens is formalized as follows:
\begin{enumerate}
\item The formation of @{term "s"} (@{text "s_def"}): @{thm s_def}.
\item State @{term "s"} is a valid state (@{text "vt_s"}): @{thm vt_s}. This implies that
event @{text "V th cs"} is eligible to happen under state @{term "s'"} and that
state @{term "s'"} is a valid state.
\end{enumerate}
*}

text {* \noindent
Under such a context, we investigated how the current precedence @{term "cp"} of
threads changes from state @{term "s'"} to @{term "s"}.

Two sub-cases are considered,
where the first is that there exists a @{term "th'"}
such that
@{thm [display] nt}
holds, which means there exists a thread @{term "th'"} to take over
the resource released by thread @{term "th"}.
In this sub-case, the following results are obtained:
\begin{enumerate}
\item The change of the RAG is given by lemma @{text "depend_s"}:
@{thm [display] "depend_s"}
which shows that two edges are removed while one is added. These changes imply how
the current precedences should be re-computed.
\item First, all threads different from @{term "th"} and @{term "th'"} have their
@{term "cp"}-value kept, and therefore do not need a re-computation
(@{text "cp_kept"}): @{thm [display] cp_kept}
This lemma also implies that only the @{term "cp"}-values of @{term "th"} and @{term "th'"}
need to be recomputed.
\end{enumerate}
*}

(*<*)
end

context step_v_cps_nnt
begin
(*>*)

text {*
The other sub-case is when for all @{text "th'"}
@{thm [display] nnt}
holds, i.e.~no such thread exists. The following results can be obtained for this
sub-case:
\begin{enumerate}
\item The change of the RAG is given by lemma @{text "depend_s"}:
@{thm [display] depend_s}
which means only one edge is removed.
\item In this case, no re-computation is needed (@{text "eq_cp"}):
@{thm [display] eq_cp}
\end{enumerate}
*}

(*<*)
end
(*>*)


subsection {* Event @{text "P th cs"} *}

(*<*)
context step_P_cps_e
begin
(*>*)

text {*
The context under which event @{text "P th cs"} happens is formalized as follows:
\begin{enumerate}
\item The formation of @{term "s"} (@{text "s_def"}): @{thm s_def}.
\item State @{term "s"} is a valid state (@{text "vt_s"}): @{thm vt_s}. This implies that
event @{text "P th cs"} is eligible to happen under state @{term "s'"} and that
state @{term "s'"} is a valid state.
\end{enumerate}

This case is further divided into two sub-cases. The first is when @{thm ee} holds.
The following results can be obtained:
\begin{enumerate}
\item One edge is added to the RAG (@{text "depend_s"}):
@{thm [display] depend_s}
\item No re-computation is needed (@{text "eq_cp"}):
@{thm [display] eq_cp}
\end{enumerate}
*}

(*<*)
end

context step_P_cps_ne
begin
(*>*)

text {*
The second is when @{thm ne} holds.
The following results can be obtained:
\begin{enumerate}
\item One edge is added to the RAG (@{text "depend_s"}):
@{thm [display] depend_s}
\item Threads with no dependence relation with @{term "th"} do not need a re-computation
of their @{term "cp"}-values (@{text "eq_cp"}):
@{thm [display] eq_cp}
This lemma implies that all threads with a dependence relation with @{term "th"} may need
re-computation.
\item Similar to the case of @{term "Set"}, the computation procedure can stop earlier
(@{text "eq_up"}):
@{thm [display] eq_up}
\end{enumerate}

*}

(*<*)
end
(*>*)

subsection {* Event @{text "Create th prio"} *}

(*<*)
context step_create_cps
begin
(*>*)

text {*
The context under which event @{text "Create th prio"} happens is formalized as follows:
\begin{enumerate}
\item The formation of @{term "s"} (@{text "s_def"}): @{thm s_def}.
\item State @{term "s"} is a valid state (@{text "vt_s"}): @{thm vt_s}. This implies that
event @{text "Create th prio"} is eligible to happen under state @{term "s'"} and that
state @{term "s'"} is a valid state.
\end{enumerate}
The following results can be obtained under this context:
\begin{enumerate}
\item The RAG does not change (@{text "eq_dep"}):
@{thm [display] eq_dep}
\item All threads other than @{term "th"} do not need re-computation (@{text "eq_cp"}):
@{thm [display] eq_cp}
\item The @{term "cp"}-value of @{term "th"} equals its precedence
(@{text "eq_cp_th"}):
@{thm [display] eq_cp_th}
\end{enumerate}

*}

(*<*)
end
(*>*)

subsection {* Event @{text "Exit th"} *}

(*<*)
context step_exit_cps
begin
(*>*)

text {*
The context under which event @{text "Exit th"} happens is formalized as follows:
\begin{enumerate}
\item The formation of @{term "s"} (@{text "s_def"}): @{thm s_def}.
\item State @{term "s"} is a valid state (@{text "vt_s"}): @{thm vt_s}. This implies that
event @{text "Exit th"} is eligible to happen under state @{term "s'"} and that
state @{term "s'"} is a valid state.
\end{enumerate}
The following results can be obtained under this context:
\begin{enumerate}
\item The RAG does not change (@{text "eq_dep"}):
@{thm [display] eq_dep}
\item All threads other than @{term "th"} do not need re-computation (@{text "eq_cp"}):
@{thm [display] eq_cp}
\end{enumerate}
Since @{term th} does not live in state @{term "s"}, there is no need to compute
its @{term cp}-value.
*}

(*<*)
end
(*>*)

section {* Related work \label{related} *}

text {*
\begin{enumerate}
\item {\em Integrating Priority Inheritance Algorithms in the Real-Time Specification for Java}
\cite{WellingsBSB07} models and verifies the combination of the Priority Inheritance (PI) and
Priority Ceiling Emulation (PCE) protocols in the setting of the Java virtual machine,
using the extended Timed Automata (TA) formalism of the UPPAAL tool. Although a detailed
formal model of combined PI and PCE is given, the number of properties is quite
small and the focus is put on the harmonious working of PI and PCE. Most key features of PI
(as well as PCE) are not shown. Because of the limitation of the model checking technique
used there, properties are shown only for a small number of scenarios. Therefore,
the verification does not show the correctness of the formal model itself in a
convincing way.
\item {\em Formal Development of Solutions for Real-Time Operating Systems with TLA+/TLC}
\cite{Faria08}. A formal model of PI is given in TLA+. Only three properties are shown
for PI using model checking. The limitation of model checking is intrinsic to the work.
\item {\em Synchronous modeling and validation of priority inheritance schedulers}
\cite{conf/fase/JahierHR09}. This work gives a formal model
of PI and PCE in AADL (Architecture Analysis \& Design Language) and checks
several properties using model checking. The number of properties shown there is
smaller than here and the scale is also limited by the model checking technique.
\item {\em The Priority Ceiling Protocol: Formalization and Analysis Using PVS}
\cite{dutertre99b}. This work formalises another protocol against Priority Inversion in the
interactive theorem proving system PVS.
\end{enumerate}


There are several works on inversion avoidance:
\begin{enumerate}
\item {\em Solving the group priority inversion problem in a timed asynchronous system}
\cite{Wang:2002:SGP}. The notion of Group Priority Inversion is introduced. The main
strategy is still inversion avoidance. The method is by reordering requests
in the setting of Client-Server.
\item {\em A Formalization of Priority Inversion} \cite{journals/rts/BabaogluMS93}.
This work formalises the notion of Priority
Inversion and proposes methods to avoid it.
\end{enumerate}

{\em Examples of inaccurate specification of the protocol ???}.

*}

section {* Conclusions \label{conclusion} *}

text {*
The work in this paper only deals with single-CPU configurations. The
"one CPU" assumption is essential for our formalisation, because the
main lemma fails in multi-CPU configurations. The lemma says that any
running thread must be the one with the highest priority or must already have held
some resource when the highest priority thread was initiated. When
there are multiple CPUs, it may well be the case that a thread did
not hold any resource when the highest priority thread was initiated,
but that thread still runs after that moment on a separate CPU. In
this way, the main lemma does not hold anymore.


There are some works dealing with priority inversion in multi-CPU
configurations[???], but none of them have given a formal correctness
proof. The extension of our formal proof to deal with multi-CPU
configurations is not obvious. One possibility, as suggested in paper
[???], is to change our formal model (the definition of "schs") to give
the released resource to the thread with the highest priority. In this
way, indefinite priority inversion can be avoided, but for a quite
different reason from the one formalised in this paper (because the
"main lemma" will be different). This means a formal correctness proof
for multi-CPU configurations would be quite different from the one given
in this paper. The solution of the priority inversion problem in multi-CPU
configurations is a different problem, which needs different solutions and
which is outside the scope of this paper.

*}

(*<*)
end
(*>*)