handouts/ho01.tex
changeset 180 a95782c2f046
parent 179 1cacbe5c67cf
child 181 a736a0c324a3
--- a/handouts/ho01.tex	Thu Sep 25 07:49:22 2014 +0100
+++ b/handouts/ho01.tex	Thu Sep 25 11:33:45 2014 +0100
@@ -87,17 +87,20 @@
 but plays an important role. To illustrate this, let's look at
 an example. 
 
-The questions is whether the Chip-and-PIN system with credit
-cards is more secure than the older method of signing receipts
-at the till. On first glance Chip-and-PIN seems obviously more
-secure and improved security was also the central plank in the
-``marketing speak'' of the banks behind Chip-and-PIN. The
-earlier system was based on a magnetic stripe or a mechanical
-imprint on the card and required customers to sign receipts at
-the till whenever they bought something. This signature
-authorised the transactions. Although in use for a long time,
-this system had some crucial security flaws, including making
-clones of credit cards and forging signatures. 
+\subsubsection*{Chip-and-PIN is Surely More Secure?}
+
+The question is whether the Chip-and-PIN system used with
+modern credit cards is more secure than the older method of
+signing receipts at the till. At first glance the answer seems
+obvious: Chip-and-PIN must be more secure, and indeed improved
+security was the central plank in the ``marketing speak'' of
+the banks behind Chip-and-PIN. The earlier system was based on
+a magnetic stripe or a mechanical imprint on the cards and
+required customers to sign receipts at the till whenever they
+bought something. This signature authorised the transaction.
+Although in use for a long time, this system had some crucial
+security flaws, including how easily credit cards could be
+cloned and signatures forged. 
 
 Chip-and-PIN, as the name suggests, relies on data being
 stored on a chip on the card and a PIN number for
@@ -107,7 +110,7 @@
 (especially the group around Ross Anderson). To begin with,
 the Chip-and-PIN system introduced a ``new player'' that
 needed to be trusted: the PIN terminals and their
-manufacturers. It was claimed that these terminals are
+manufacturers. It was claimed that these terminals were
 tamper-resistant, but needless to say this was a weak link in
 the system, which criminals successfully attacked. Some
 terminals were even so skilfully manipulated that they
@@ -116,16 +119,16 @@
 Chip-and-PIN, you need to vet quite closely the supply chain
 of such terminals.
 
-Later on Ross Anderson and his group managed to launch a
+Later on, Ross Anderson and his group were able to perform
 man-in-the-middle attacks against Chip-and-PIN. Essentially
 they made the terminal think the correct PIN was entered and
-the card think that a signature was used. This was a more
-serious security problem. The flaw was mitigated by requiring
-that a link between the card and the bank is established at
-every time the card is used. Even later this group found
-another problem with Chip-and-PIN and ATMs which do not
-generate random enough numbers (nonces) on which the security
-of the underlying protocols relies. 
+the card think that a signature was used. This is a kind of
+\emph{protocol failure}. After its discovery, the flaw was
+mitigated by requiring that a link between the card and the
+bank be established every time the card is used. Later still,
+this group found another problem with Chip-and-PIN and ATMs
+that did not generate sufficiently random numbers (nonces), on
+which the security of the underlying protocols relies. 
 
 The problem with all this is that the banks who introduced
 Chip-and-PIN managed with the new system to shift the
@@ -141,8 +144,8 @@
 Since banks managed to successfully claim that their
 Chip-and-PIN system is secure, they were under the new system
 able to point the finger at the customer when fraud occurred:
-they must have been negligent loosing their PIN. The customer
-had almost no means to defend themselves in such situations.
+customers must have been negligent in losing their PIN and they
+had almost no way of defending themselves in such situations.
 That is why the work of \emph{ethical} hackers like Ross
 Anderson's group was so important, because they and others
 established that the banks' claim that their system is secure
@@ -156,43 +159,47 @@
 improve security, also needs to bear the financial losses if
 things go wrong. Otherwise, you end up with an insecure
 system. In the case of the Chip-and-PIN system, no good security
-engineer would claim that it is secure beyond reproach: the
-specification of the EMV protocol (underlying Chip-and-PIN) is
-some 700 pages long, but still leaves out many things (like
-how to implement a good random number generator). No human
-being is able to scrutinise such a specification and ensure it
-contains no flaws. Moreover, banks can add their own
-sub-protocols to EMV. With all the experience we already have,
-it is as clear as day that criminals were eventually able to
-poke holes into it and measures need to be taken to address
-them. However, with how the system was set up, the banks had
-no real incentive to come up with a system that is really
-secure. Getting the incentives right in favour of security is
-often a tricky business.
+engineer would dare claim that it is secure beyond reproach:
+the specification of the EMV protocol (underlying
+Chip-and-PIN) is some 700 pages long, but still leaves out
+many things (like how to implement a good random number
+generator). No human being is able to scrutinise such a
+specification and ensure it contains no flaws. Moreover, banks
+can add their own sub-protocols to EMV. With all the
+experience we already have, it is as clear as day that
+criminals would eventually be able to poke holes in it and
+that measures would need to be taken to address them. However,
+given how the system was set up, the banks had no real
+incentive to come up with a system that is really secure.
+Getting the incentives right in favour of security is often a
+tricky business. From a customer's point of view the system
+was much less secure than the old signature-based method.
 
 \subsection*{Of Cookies and Salts}
 
-Lets look at another example which should helps with
+Let's look at another example which will help with
 understanding how passwords should be verified and stored.
 Imagine you need to develop a web-application that has the
 feature of recording how many times a customer visits a page.
-For example to give a discount whenever the customer visited a
-webpage some $x$ number of times (say $x$ equal $5$). There is
-one more constraint: we want to store the information about
-the number of times a customer has visited inside a cookie. I
-think, for a number of years the webpage of the New York Times
-operated in this way: it allowed you to read ten articles per
-months for free; if you wanted to read more, you had to pay.
-My guess is it used cookies for recording how many times their
-pages was visited, because if you switched browsers you could
-easily circumvent the restriction about ten articles.
+For example, in order to give a discount whenever the customer
+has visited a webpage some $x$ number of times (say $x$ equals
+$5$). There is one more constraint: we want to store the
+information about the number of visits as a cookie on the
+browser. I think that, for a number of years, the webpage of
+the New York Times operated in this way: it allowed you to
+read ten articles per month for free; if you wanted to read
+more, you had to pay. My best guess is that it used cookies
+for recording how many times its pages were visited, because
+if I switched browsers I could easily circumvent the
+restriction of ten articles.
 
 To implement our web-application it is good to look under the
-hood what happens when a webpage is requested. A typical
-web-application works as follows: The browser sends a GET
-request for a particular page to a server. The server answers
-this request. A simple JavaScript program that realises a
-``hello world'' webpage is as follows:
+hood at what happens when a webpage is displayed in a browser.
+A typical web-application works as follows: The browser sends
+a GET request for a particular page to a server. The server
+answers this request with a webpage in HTML (we can ignore
+these details here). A simple JavaScript program that realises
+a ``hello world'' webpage is as follows:
 
 \begin{center}
 \lstinputlisting{../progs/ap0.js}
@@ -201,16 +208,20 @@
 \noindent The interesting lines are 4 to 7 where the answer to
 the GET request is generated\ldots in this case it is just a
 simple string. This program is run on the server and will be
-executed whenever a browser initiates such a GET request.
+executed whenever a browser initiates such a GET request. You
+can run this program on your computer and then direct a
+browser to the address \pcode{localhost:8000} in order to
+simulate a request over the internet.
+
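+For concreteness, here is a minimal sketch of such a server in
+Node.js (only a sketch; the actual program shown above may
+differ in its details, and the port \pcode{8000} is chosen to
+match the address above):
+
+\begin{lstlisting}
+// minimal sketch of a "hello world" web-server; run it with
+// Node.js and then visit localhost:8000 in a browser
+var http = require('http');
+
+var server = http.createServer(function (request, response) {
+  // answer every GET request with a simple string
+  response.writeHead(200, {'Content-Type': 'text/plain'});
+  response.end('Hello World!');
+});
+
+// listen on port 8000
+server.listen(8000);
+\end{lstlisting}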
 
 For our web-application, the feature of interest is that the
 server, when answering the request, can store some information
-at the client's side. This information is called a
+on the client's side. This information is called a
 \emph{cookie}. The next time the browser makes another GET
-request to the same webpage, this cookie can be read by the
-server. We can use cookies in order to store a counter that
-records the number of times our webpage has been visited. This
-can be realised with the following small program
+request to the same webpage, this cookie can be read again by
+the server. We can use cookies in order to store a counter
+that records the number of times our webpage has been visited.
+This can be realised with the following small program:
 
 \begin{center}
 \lstinputlisting{../progs/ap2.js}
@@ -239,13 +250,13 @@
 when it reaches a threshold. If the client deletes the cookie,
 then the counter will just be reset to zero. This does not
 bother us, because the purported discount will just not be
-granted. In this way we do not lose us any (hypothetical)
-money. What we need to be concerned about is, however, when a
-client artificially increases this counter without having
-visited our web-page. This is actually a trivial task for a
-knowledgeable person, since there are convenient tools that
-allow one to set a cookie to an arbitrary value, for example
-above our threshold for the discount. 
+granted. In this way we do not lose any (hypothetical) money.
+What we do need to be concerned about, however, is a client
+artificially increasing this counter without having visited our
+web-page. This is actually a trivial task for a knowledgeable
+person, since there are convenient tools that allow one to set
+a cookie to an arbitrary value, for example above our
+threshold for the discount. 
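+For example, assuming the counter is stored in a cookie named
+\pcode{counter} that is accessible to scripts (the actual name
+used by our program may differ), it can be overwritten directly
+in the browser's JavaScript console:
+
+\begin{lstlisting}
+// overwrite the cookie with an arbitrary counter value;
+// the cookie name "counter" is only an assumption
+document.cookie = "counter=100";
+\end{lstlisting}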
 
 There seems to be no real way to prevent this kind of
 tampering with cookies, because the whole purpose of cookies
@@ -270,13 +281,13 @@
 
 Fortunately, \emph{hash functions} seem to be more suitable
 for our purpose. Like encryption, hash functions scramble data
-in such a way that it is easy to calculate the output of a has
-function from the input. But it is hard (i.e.~practically
+in such a way that it is easy to calculate the output of a
+hash function from the input. But it is hard (i.e.~practically
 impossible) to calculate the input from knowing the output.
 Therefore hash functions are often called \emph{one-way
 functions}. There are several such hashing functions. For
 example, SHA-1 would hash the string \pcode{"hello world"} to
-produce
+produce the hash-value
 
 \begin{center}
 \pcode{2aae6c35c94fcfb415dbe95f408b9ce91ee846ed}
@@ -296,25 +307,26 @@
 
 We can use hashes in our web-application and store in the
 cookie the value of the counter in plain text but together
-with its hash. We need to store both pieces of data such we
-can extract both components (below I will just separate them
-using a \pcode{"-"}). If we now read back the cookie when the
-client visits our webpage, we can extract the counter, hash it
-again and compare the result to the stored hash value inside
-the cookie. If these hashes disagree, then we can deduce that
-the cookie has been tampered with. Unfortunately, if they
-agree, we can still not be entirely sure that not a clever
-hacker has tampered with the cookie. The reason is that the
-hacker can see the clear text part of the cookie, say
-\pcode{3}, and also its hash. It does not take much trial and
-error to find out that we used the SHA-1 hashing functions and
-then graft a cookie accordingly. This is eased by the fact
-that for SHA-1 many strings and corresponding hashvalues are
-precalculated. Type, for example, into Google the hash value
-for \pcode{"hello world"} and you will actually pretty quickly
-find that it was generated by input string \pcode{"hello
-wolrd"}. This defeats the purpose of a hashing functions and
-thus would not help us for our web-applications. 
+with its hash. We need to store both pieces of data in such a
+way that we can extract both components again (below I will
+just separate them using a \pcode{"-"}). If we now read back
+the cookie when the client visits our webpage, we can extract
+the counter, hash it again and compare the result to the
+stored hash value inside the cookie. If these hashes disagree,
+then we can deduce that the cookie has been tampered with.
+Unfortunately, if they agree, we can still not be entirely
+sure that a clever hacker has not tampered with the cookie.
+The reason is that the hacker can see the clear text part of
+the cookie, say \pcode{3}, and also its hash. It does not take
+much trial and error to find out that we used the SHA-1
+hashing function and then the hacker can craft a cookie
+accordingly. This is eased by the fact that for SHA-1 many
+strings and their corresponding hash-values have been
+precalculated. Type, for example, the hash value for
+\pcode{"hello world"} into Google and you will pretty quickly
+find that it was generated by the input string \pcode{"hello
+world"}. This defeats the purpose of a hashing function and
+thus would not help us in our web-application.
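+
+To make this scheme concrete, here is a small sketch in
+Node.js of how such a cookie value could be generated and
+checked (a sketch only; the function names below are not taken
+from the actual program):
+
+\begin{lstlisting}
+// sketch of the counter-plus-hash scheme described above
+var crypto = require('crypto');
+
+// hash a string with SHA-1, returning the hex digest
+function hash(s) {
+  return crypto.createHash('sha1').update(s).digest('hex');
+}
+
+// store the counter in clear text together with its hash,
+// separated by "-", e.g. "3-77de68..."
+function makeCookie(counter) {
+  return counter + "-" + hash(String(counter));
+}
+
+// read back a cookie: re-hash the clear-text part and
+// compare the result with the stored hash
+function checkCookie(cookie) {
+  var parts = cookie.split("-");
+  return hash(parts[0]) === parts[1];
+}
+\end{lstlisting}
+
+\noindent Note that \pcode{checkCookie} only detects that the
+two parts are inconsistent: anyone who knows the scheme can
+simply recompute a matching hash for a forged counter, which
+is exactly the problem described above.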