\documentclass{article}
\usepackage{../style}
\usepackage{../langs}
\lstset{language=JavaScript}

\begin{document}
\fnote{\copyright{} Christian Urban, King's College London, 2014, 2015, 2016}

% passwords at dropbox
%https://blogs.dropbox.com/tech/2016/09/how-dropbox-securely-stores-your-passwords/

%Ross Anderson
%https://youtu.be/FY2YKxBxOkg

%http://www.scmagazineuk.com/amazon-launches-open-source-tls-implementation-s2n/article/424360/

%Singapore's authorities go offline

% how to store passwords
%https://nakedsecurity.sophos.com/2013/11/20/serious-security-how-to-store-your-users-passwords-safely/

%hashes
%http://web.archive.org/web/20071226014140/http://www.cits.rub.de/MD5Collisions/
%https://blog.codinghorror.com/speed-hashing/

% Hello Kitty database stolen
% https://nakedsecurity.sophos.com/2017/01/10/stolen-details-of-3-3m-hello-kitty-fans-including-kids-published-online/

%% IoT
% https://nakedsecurity.sophos.com/2015/10/26/the-internet-of-things-stop-the-things-i-want-to-get-off/

% cloning credit cards and passports
%https://www.youtube.com/watch?v=-4_on9zj-zs

\section*{Handout 1 (Security Engineering)}

Much of the material and inspiration in this module is taken from the
works of Bruce Schneier, Ross Anderson and Alex Halderman. I think they
are the world experts in the area of security engineering. I especially
like that they argue that a security engineer requires a certain
\emph{security mindset}. Bruce Schneier, for example, writes:

\begin{quote}
\it ``Security engineers --- at least the good ones --- see the world
differently. They can't walk into a store without noticing how they
might shoplift. They can't use a computer without wondering about the
security vulnerabilities. They can't vote without trying to figure out
how to vote twice. They just can't help it.''
\end{quote}

\noindent
and

\begin{quote}
\it ``Security engineering\ldots requires you to think differently.
You need to figure out not how something works, but how something can
be made to not work. You have to imagine an intelligent and malicious
adversary inside your system\ldots, constantly trying new ways to
subvert it. You have to consider all the ways your system can fail,
most of them having nothing to do with the design itself. You have to
look at everything backwards, upside down, and sideways. You have to
think like an alien.''
\end{quote}

\noindent In this module I would like to teach you this security
mindset. This might be a mindset that you think is very foreign to
you---after all, we are all good citizens and do not hack into things.
However, I beg to differ: you already had this mindset in school when
you were thinking, at least hypothetically, about ways in which you
could cheat in an exam (whether by hiding notes or by looking over the
shoulders of your fellow pupils). Right? To defend a system, you need
to have this kind of mindset and be able to think like an attacker.
This will include understanding techniques that can be used to
compromise security and privacy in systems. Often this will result in
insights showing that well-intended security mechanisms actually made a
system less secure.\medskip

\noindent {\Large\bf Warning!} However, don't be evil! Using those
techniques in the real world may violate the law or King's rules, and
it may be unethical. Under some circumstances, even probing for
weaknesses of a system may result in severe penalties, up to and
including expulsion, fines and jail time. Acting lawfully and ethically
is your responsibility. Ethics requires you to refrain from doing harm.
Always respect the privacy and rights of others. Do not tamper with any
of King's systems. If you try out a technique, always make doubly sure
you are working in a safe environment so that you cannot cause any
harm, not even accidentally. Don't be evil.
Be an ethical hacker.\medskip

\noindent In this lecture I want to make you familiar with the security
mindset and dispel the myth that encryption is the answer to all
security problems (it is certainly often part of an answer, but almost
never a sufficient one). This is actually an important thread going
through the whole course: we will assume that encryption works
perfectly, but still attack ``things''. By ``works perfectly'' we mean
that we will assume encryption is a black box and, for example, will
not look at the underlying mathematics and break the
algorithms.\footnote{Though fascinating this might be.}

For a secure system, it seems, four requirements need to come together:
first a security policy (what is supposed to be achieved?); second a
mechanism (cipher, access controls, tamper resistance etc); third the
assurance we obtain from the mechanism (the amount of reliance we can
put on the mechanism); and finally the incentives (the motive that the
people guarding and maintaining the system have to do their job
properly, and also the motive that the attackers have to try to defeat
your policy). The last point is often overlooked, but plays an
important role. To illustrate this, let's look at an example.

\subsubsection*{Chip-and-PIN is Surely More Secure, No?}

The question is whether the Chip-and-PIN system used with modern credit
cards is more secure than the older method of signing receipts at the
till. At first glance the answer seems obvious: Chip-and-PIN must be
more secure, and indeed improved security was the central plank in the
``marketing speak'' of the banks behind Chip-and-PIN. The earlier
system was based on a magnetic stripe or a mechanical imprint on the
cards and required customers to sign receipts at the till whenever they
bought something. This signature authorised the transactions. Although
in use for a long time, this system had some crucial security flaws,
including the ease of cloning credit cards and forging signatures.
Chip-and-PIN, as the name suggests, relies on data being stored on a
chip on the card and a PIN number for authorisation. Even though the
banks involved trumpeted their system as being absolutely secure and
indeed fraud rates initially went down, security researchers were not
convinced (especially not the group around Ross
Anderson).\footnote{Actually, historical data about fraud showed that
fraud rates first went up (while early problems to do with the
introduction of Chip-and-PIN were exploited), then down, but recently
up again (because criminals got more familiar with the technology and
how it can be exploited).}

To begin with, the Chip-and-PIN system introduced a ``new player'' into
the system that needed to be trusted: the PIN terminals and their
manufacturers. It was claimed that these terminals were
tamper-resistant, but needless to say this was a weak link in the
system, which criminals successfully attacked. Some terminals were even
so skilfully manipulated that they transmitted skimmed PIN numbers via
built-in mobile phone connections. To mitigate this flaw in the
security of Chip-and-PIN, you need to be able to vet quite closely the
supply chain of such terminals. This is something that is mostly beyond
the control of customers who need to use these terminals.

To make matters worse for Chip-and-PIN, around 2009 Ross Anderson and
his group were able to perform man-in-the-middle attacks against
Chip-and-PIN. Essentially they made the terminal think the correct PIN
was entered and the card think that a signature was used. This is a
kind of \emph{protocol failure}. After discovery, the flaw was
mitigated by requiring that a link between the card and the bank is
established every time the card is used. Even later this group found
another problem with Chip-and-PIN and ATMs which did not generate
sufficiently random numbers (cryptographic nonces), on which the
security of the underlying protocols relies.
The overarching problem with all this is that the banks who introduced
Chip-and-PIN managed with the new system to shift the liability for any
fraud and the burden of proof onto the customer. In the old system, the
banks had to prove that the customer used the card, which they often
did not bother with. In effect, if fraud occurred the customers were
either refunded fully or lost only a small amount of money. This taking
responsibility for potential fraud was part of the ``business plan'' of
the banks and did not reduce their profits too much.

Since banks managed to successfully claim that their Chip-and-PIN
system was secure, they were under the new system able to point the
finger at the customer when fraud occurred: customers must have been
negligent in losing their PIN, and customers had almost no way of
defending themselves in such situations. That is why the work of
\emph{ethical} hackers like Ross Anderson's group is so important: they
and others established that the banks' claim---that their system was
secure and it must have been the customer's fault---was bogus. In 2009
the law changed and the burden of proof went back to the banks. They
now need to prove whether it was really the customer who used a card or
not. The current state of affairs, however, is that standing up for
your rights requires you to be knowledgeable, potentially having to go
to court\ldots{}if not, the banks are happy to take advantage of you.

This is a classic example where a security design principle was
violated: namely, the one who is in the position to improve security
also needs to bear the financial losses if things go wrong. Otherwise,
you end up with an insecure system. In the case of the Chip-and-PIN
system, no good security engineer would dare to claim that it is secure
beyond reproach: the specification of the EMV protocol (underlying
Chip-and-PIN) is some 700 pages long, but still leaves out many things
(like how to implement a good random number generator).
No human being is able to scrutinise such a specification and ensure it
contains no flaws. Moreover, banks can add their own sub-protocols to
EMV. With all the experience we already have, it was as clear as day
that criminals would eventually be able to poke holes into it and that
measures would need to be taken to address them. However, with how the
system was set up, the banks had no real incentive to come up with a
system that is really secure. Getting the incentives right in favour of
security is often a tricky business. From a customer's point of view,
the Chip-and-PIN system was much less secure than the old
signature-based method. The customer could now lose significant amounts
of money.

If you want to watch an entertaining talk about attacking Chip-and-PIN
cards, then this talk from the 2014 Chaos Computer Club conference is
for you:

\begin{center}
\url{https://goo.gl/zuwVHb}
\end{center}

\noindent They claim that they are able to clone Chip-and-PIN cards
such that they get all the data that was on the magstripe, except for
three digits (the CVV number). Remember, Chip-and-PIN cards were
introduced precisely to prevent this. Ross Anderson also talked about
his research at the BlackHat Conference in 2014:

\begin{center}
\url{https://www.youtube.com/watch?v=ET0MFkRorbo}
\end{center}

\noindent An article about reverse-engineering a PIN-number skimmer is
at

\begin{center}\small
\url{https://trustfoundry.net/reverse-engineering-a-discovered-atm-skimmer/}
\end{center}

\noindent including a scary video of how a PIN-pad overlay is installed
by some crooks.

\subsection*{Of Cookies and Salts}

Let us look at another example which will help with understanding how
passwords should be verified and stored. Imagine you need to develop a
web-application that has the feature of recording how many times a
customer visits a page, for example in order to give a discount
whenever the customer has visited a webpage some $x$ number of times
(say $x$ equals $5$).
There is one more constraint: we want to store the information about
the number of visits as a cookie in the browser. I think for a number
of years the webpage of the New York Times operated in this way: it
allowed you to read ten articles per month for free; if you wanted to
read more, you had to pay. My best guess is that it used cookies for
recording how many times their pages were visited, because if I
switched browsers I could easily circumvent the restriction to ten
articles.\footnote{Another online medium that works in this way is the
Times Higher Education \url{http://www.timeshighereducation.co.uk}. It
also seems to use cookies to restrict the number of free articles to
five.}

To implement our web-application it is good to look under the hood at
what happens when a webpage is displayed in a browser. A typical
web-application works as follows: the browser sends a GET request for a
particular page to a server. The server answers this request with a
webpage in HTML (for our purposes we can ignore the details about
HTML). A simple JavaScript program that realises a server answering
with a ``Hello World'' webpage is as follows:

\begin{center}
\lstinputlisting{../progs/ap0.js}
\end{center}

\noindent The interesting lines are 4 to 7, where the answer to the GET
request is generated\ldots in this case it is just a simple string.
This program is run on the server and will be executed whenever a
browser initiates such a GET request. You can run this program on your
computer and then direct a browser to the address \pcode{localhost:8000}
in order to simulate a request over the internet. You are encouraged to
try this out\ldots{}theory is always good, but practice is better.

Of interest for our web-application is the feature that the server,
when answering a request, can store some information on the client's
side. This information is called a \emph{cookie}. The next time the
browser makes another GET request to the same webpage, this cookie can
be read again by the server.
We can use cookies in order to store a counter that records the number
of times our webpage has been visited. This can be realised with the
following small program:

\begin{center}
\lstinputlisting{../progs/ap2.js}
\end{center}

\noindent The overall structure of this program is the same as the
earlier one: Lines 7 to 17 generate the answer to a GET request. The
new part is in Line 8, where we read the cookie called \pcode{counter}.
If present, this cookie will be sent together with the GET request from
the client. The value of this counter will come in the form of a
string, therefore we use the function \pcode{parseInt} in order to
transform it into an integer. In case the cookie is not present, we
default the counter to zero. The odd-looking construction \code{...||0}
realises this defaulting in JavaScript. In Line 9 we increase the
counter by one and store it back on the client (under the name
\pcode{counter}, since potentially more than one value could be
stored). In Lines 10 to 15 we test whether this counter is greater than
or equal to 5 and accordingly send a specially crafted message back to
the client.

Let us step back and analyse this program from a security point of
view. We store a counter in plain text in the client's browser (which
is not under our control). Depending on this value we want to unlock a
resource (like a discount) when it reaches a threshold. If the client
deletes the cookie, then the counter will just be reset to zero. This
does not bother us, because the purported discount will just not be
granted. In this way we do not lose any (hypothetical) money. What we
need to be concerned about is, however, when a client artificially
increases this counter without having visited our webpage. This is
actually a trivial task for a knowledgeable person, since there are
convenient tools that allow one to set a cookie to an arbitrary value,
for example above our threshold for the discount.
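To see how little stands in the attacker's way, consider the following small sketch (my own illustration, not the module's code) of how a server might extract such a counter from the \pcode{Cookie} header; nothing ties the transmitted value to our server:

```javascript
// Sketch (assumption: the server extracts the counter from the Cookie
// header roughly like this -- the value is trusted as sent).
function readCounter(cookieHeader) {
  const m = /(?:^|;\s*)counter=(\d+)/.exec(cookieHeader || "");
  return m ? parseInt(m[1], 10) : 0;   // missing cookie defaults to 0
}

// An honest browser might send "counter=3"; an attacker simply sends:
console.log(readCounter("counter=9999"));  // 9999 -- well past the threshold
```

Whatever tool the client uses---browser developer tools, \pcode{curl}, or a few lines of script---the server cannot tell a forged value from an honest one.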
There seems to be no simple way to prevent this kind of tampering with
cookies, because the whole purpose of cookies is that they are stored
on the client's side, which from the server's perspective is a
potentially hostile environment. What we need to ensure is the
integrity of this counter in this hostile environment. We could think
of encrypting the counter. But this has two drawbacks to do with the
keys for encryption. If you use a single, global key for all the
clients that visit our site, then we risk that our whole ``business''
might collapse in the event this key becomes known to the outside
world. Then all cookies we might have set in the past can be decrypted
and manipulated. If, on the other hand, we use many ``private'' keys
for the clients, then we have to solve the problem of having to
securely store these keys on our server side (obviously we cannot store
a key with the client, because then the client again has all the data
needed to tamper with the counter; and obviously we also cannot encrypt
the key, unless we can solve an impossible chicken-and-egg problem). So
encryption does not seem to solve the problem we face with the
integrity of our counter.

Fortunately, \emph{cryptographic hash functions} seem to be more
suitable for our purpose. Like encryption, hash functions scramble data
in such a way that it is easy to calculate the output of a hash
function from the input. But it is hard (i.e.~practically impossible)
to calculate the input from knowing the output. This is often called
\emph{preimage resistance}. Cryptographic hash functions also ensure
that, given a message and a hash, it is computationally infeasible to
find another message with the same hash. This is called
\emph{collision resistance}. Because of these properties, hash
functions are often called \emph{one-way functions}: you cannot go back
from the output to the input (without some tricks, see below). There
are several such hash functions.
For example, SHA-1 would hash the string \pcode{"hello world"} to
produce the hash-value

\begin{center}
\pcode{2aae6c35c94fcfb415dbe95f408b9ce91ee846ed}
\end{center}

\noindent Another handy feature of hash functions is that if the input
changes only a little, the output changes drastically. For example,
\pcode{"iello world"} produces under SHA-1 the output

\begin{center}
\pcode{d2b1402d84e8bcef5ae18f828e43e7065b841ff1}
\end{center}

\noindent That means it is not predictable what the output will be from
just looking at input that is ``close by''.

We can use hashes in our web-application and store in the cookie the
value of the counter in plain text, but together with its hash. We need
to store both pieces of data in such a way that we can extract them
again later on. In the code below I will just separate them using a
\pcode{"-"}. For the counter \pcode{1}, for example, this gives

\begin{center}
\pcode{1-356a192b7913b04c54574d18c28d46e6395428ab}
\end{center}

\noindent If we now read back the cookie when the client visits our
webpage, we can extract the counter, hash it again and compare the
result to the stored hash value inside the cookie. If these hashes
disagree, then we can deduce that the cookie has been tampered with.
Unfortunately, if they agree, we still cannot be entirely sure that a
clever hacker has not tampered with the cookie. The reason is that the
hacker can see the clear text part of the cookie, say \pcode{3}, and
also its hash. It does not take much trial and error to find out that
we used the SHA-1 hash function, and then the hacker can craft a cookie
accordingly. This is eased by the fact that for SHA-1 many strings and
corresponding hash-values are precalculated. Type, for example, into
Google the hash value for \pcode{"hello world"} and you will actually
pretty quickly find that it was generated by the input string
\pcode{"hello world"}. Similarly for the hash-value for \pcode{1}.
This defeats the purpose of a hash function and thus would not help us
with our web-application, and later also not with how to store
passwords properly.

There is one ingredient missing, which happens to be called a
\emph{salt}. Salts are random keys which are added to the counter
before the hash is calculated. In our case we must keep the salt
secret. As can be seen in Figure~\ref{hashsalt}, we need to extract
from the cookie the counter value and its hash (Lines 19 and 20). But
before hashing the counter again (Line 22) we need to add the secret
salt. Similarly, when we set the new increased counter, we will need to
add the salt before hashing (this is done in Line 15). Our
web-application will now store cookies like

\begin{figure}[p]
\lstinputlisting{../progs/App4.js}
\caption{A Node.js web-app that sets a cookie in the client's browser
for counting the number of visits to a page.\label{hashsalt}}
\end{figure}

\begin{center}\tt
\begin{tabular}{l}
1 + salt - 8189effef4d4f7411f4153b13ff72546dd682c69\\
2 + salt - 1528375d5ceb7d71597053e6877cc570067a738f\\
3 + salt - d646e213d4f87e3971d9dd6d9f435840eb6a1c06\\
4 + salt - 5b9e85269e4461de0238a6bf463ed3f25778cbba\\
...\\
\end{tabular}
\end{center}

\noindent These hashes allow us to read and set the value of the
counter, and also give us confidence that the counter has not been
tampered with. This of course depends on being able to keep the salt
secret. Once the salt is public, we had better ignore all cookies and
start setting them again with a new salt.

There is an interesting and very subtle point to note with respect to
the `New York Times' way of checking the number of visits. Essentially
they have their `resource' unlocked at the beginning and lock it only
when the data in the cookie states that the allowed number of free
visits is up. As said before, this can be easily circumvented by just
deleting the cookie or by switching the browser. This would mean the
New York Times will lose revenue whenever this kind of tampering
occurs.
The `quick fix' to require that a cookie must always be present does
not work, because then this newspaper would cut off any new readers, or
anyone who gets a new computer. In contrast, our web-application has
the resource (discount) locked at the beginning and only unlocks it if
the cookie data says so. If the cookie is deleted, well, then the
resource just does not get unlocked. No major harm will result to us.
You can see: the same security mechanism behaves rather differently
depending on whether the ``resource'' needs to be locked or unlocked.
Apart from thinking about the difference very carefully, I do not know
of any good ``theory'' that could help with solving such security
intricacies in any other way.

\subsection*{How to Store Passwords Properly?}

While admittedly quite silly, the simple web-application in the
previous section should help with the more important question of how
passwords should be verified and stored. It is unbelievable that
nowadays systems still do this with passwords in plain text. The idea
behind such plain-text passwords is of course that if the user typed in
\pcode{foobar} as password, we need to verify whether it matches the
password that is already stored for this user in the system. Why not do
this with plain-text passwords? Unfortunately, doing this verification
in plain text is a really bad idea. Alas, evidence suggests it is still
a widespread practice. I leave you to think about why verifying
passwords in plain text is a bad idea.

Using hash functions, like in our web-application, we can do better.
They allow us to avoid storing passwords in plain text for verifying
whether a password matches or not. We can just hash the password and
store the hash-value. And whenever the user types in the password
again, we hash it again and check whether the hash-values agree. Just
like in the web-application before.

Let's analyse what happens when a hacker gets hold of such a hashed
password database.
That is the scenario we want to defend against.\footnote{If we could
assume our servers can never be broken into, then storing passwords in
plain text would be no problem. The point, however, is that servers are
never absolutely secure.} The hacker then has a list of user names and
associated hash-values, like

\begin{center}
\pcode{urbanc:2aae6c35c94fcfb415dbe95f408b9ce91ee846ed}
\end{center}

\noindent For a beginner-level hacker this information is of no use. It
would not work to type in the hash value instead of the password,
because it will go through the hash function again and then the
resulting two hash-values will not match. One attack a hacker can try,
however, is called a \emph{brute force attack}. Essentially this means
trying out exhaustively all strings

\begin{center}
\pcode{a},
\pcode{aa},
\pcode{...},
\pcode{ba},
\pcode{...},
\pcode{zzz},
\pcode{...}
\end{center}

\noindent and so on, hashing them and checking whether they match a
hash-value in the database. Such brute force attacks are surprisingly
effective. With modern technology (usually GPU graphic cards),
passwords of moderate length need only seconds or hours to be cracked.
The only defence we have against such brute force attacks is to make
passwords longer and force users to use the whole spectrum of letters
and keys for their passwords. The hope is that this makes the search
space too big for an effective brute force attack.

Unfortunately, clever hackers have another ace up their sleeves. These
are called \emph{dictionary attacks}. The idea behind a dictionary
attack is the observation that only a few people are competent enough
to use sufficiently strong passwords. Most users (at least too many)
use passwords like

\begin{center}
\pcode{123456},
\pcode{password},
\pcode{qwerty},
\pcode{letmein},
\pcode{...}
\end{center}

\noindent So an attacker just needs to compile a list, as large as
possible, of such likely password candidates and also compute their
hash-values.
The difference to a brute force attack, where maybe $2^{80}$ strings
need to be considered, is that a dictionary attack might get away with
checking only 10 million words (remember, the English language ``only''
contains around 600,000 words). This is a drastic simplification for
attackers. Now, if the attacker knows the hash-value of a password is

\begin{center}
\pcode{5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8}
\end{center}

\noindent then just a lookup in the dictionary will reveal that the
plain-text password was \pcode{password}. What is good about this
attack is that the dictionary can be precompiled in the ``comfort of
the hacker's home'' before an actual attack is launched. It just needs
sufficient storage space, which nowadays is pretty cheap. A hacker
might in this way not be able to crack all passwords in our database,
but even being able to crack 50\% can be serious damage for a large
company (because then you have to think about how to make users change
their old passwords---a major hassle). And hackers are very industrious
in compiling these dictionaries: for example they definitely include
variations like \pcode{passw0rd} and also include rules that cover
cases like \pcode{passwordpassword} or \pcode{drowssap} (password
reversed).\footnote{Some entertaining rules for creating effective
dictionaries are described in the book ``Applied Cryptography'' by
Bruce Schneier (in case you can find it in the library), and also in
the original research literature, which can be accessed for free from
\url{http://www.klein.com/dvk/publications/passwd.pdf}.}

Historically, compiling a list for a dictionary attack was not as
simple as it might seem. At the beginning only ``real'' dictionaries
were available (like the Oxford English Dictionary), but such
dictionaries are not optimised for the purpose of cracking passwords.
The first real hard data about actually used passwords was obtained
when a company called RockYou ``lost'' 32 million plain-text passwords
at the end of 2009.
With this data about real-life passwords, dictionary attacks took off.
Compiling such dictionaries is nowadays very easy with the help of
off-the-shelf tools.

These dictionary attacks can be prevented by using salts. Remember, a
hacker needs to hash the most likely password candidates and calculate
their hash-values. If, before hashing a password, we add a random salt
like \pcode{mPX2aq}, then the string \pcode{passwordmPX2aq} will almost
certainly not be in the dictionary. Like in the web-application in the
previous section, a salt does not prevent us from verifying a password;
we just need to add the salt whenever the password is typed in again.

There is a question of whether we should use a single random salt for
every password in our database. A single salt would already make
dictionary attacks considerably more difficult. It turns out, however,
that in the case of password databases every password should get its
own salt. This salt is generated at the time when the password is first
set.

If you look at a Unix password file you will find entries like

\begin{center}
\pcode{urbanc:$6$3WWbKfr1$4vblknvGr6FcDeF92R5xFn3mskfdnEn...$...}
\end{center}

\noindent where the first part is the login-name, followed by a field
\pcode{$6$}, which specifies which hash function is used. After that
follows the salt \pcode{3WWbKfr1}, and after that the hash-value that
is stored for the password (which includes the salt). I leave it to you
to figure out how the password verification would need to work based on
this data.

There is a non-obvious benefit of using a separate salt for each
password. Recall that \pcode{123456} is a popular password that is most
likely used by several of your users (especially if the database
contains millions of entries). If we use no salt or one global salt,
all hash-values will be the same for this password. So if a hacker is
in the business of cracking as many passwords as possible, then it is a
good idea to concentrate on those very popular passwords.
This is not possible if each password gets its own salt: since we
assume the salt is generated randomly, each version of \pcode{123456}
will be associated with a different hash-value. This makes life harder
for an attacker.

Note another interesting point. The web-application from the previous
section was only secure when the salt was secret. In the password case,
this is not needed. The salt can be public, as shown above in the Unix
password file, where it is actually stored as part of the password
entry. Knowing the salt does not give the attacker any advantage, but
it prevents dictionaries from being precompiled. While salts do not
solve every problem, they help with protecting against dictionary
attacks on password files. They protect people who have the same
passwords on multiple machines. But they do not protect against a
focused attack on a single password, and they also do not make poorly
chosen passwords any better. Still, the moral is that you should never
store passwords in plain text. Never ever.

\subsubsection*{Further Reading}

A readable article by Bruce Schneier on ``How Security Companies
Sucker Us with Lemons'':

\begin{center}
\url{http://archive.wired.com/politics/security/commentary/securitymatters/2007/04/securitymatters_0419}
\end{center}

\noindent A recent research paper about surveillance using cookies is

\begin{center}
\url{http://randomwalker.info/publications/cookie-surveillance-v2.pdf}
\end{center}

\noindent A slightly different point of view about the economics of
password cracking:

\begin{center}
\url{http://xkcd.com/538/}
\end{center}

\noindent If you want to know more about passwords, the book by Bruce
Schneier about Applied Cryptography can be recommended, though it is
quite expensive.
There is also another expensive book about penetration testing, but the
readable chapter about password attacks (Chapter 9) is free:

\begin{center}
\url{http://www.nostarch.com/pentesting}
\end{center}

\noindent Even the government recently handed out some advice about
passwords:

\begin{center}
\url{http://goo.gl/dIzqMg}
\end{center}

\noindent Here is an interesting blog-post about how a group
efficiently ``cracked'' millions of bcrypt passwords from the Ashley
Madison leak:

\begin{center}
\url{http://goo.gl/83Ho0N}
\end{center}

\noindent Or the passwords from eHarmony:

\begin{center}
\url{https://goo.gl/W63Xhw}
\end{center}

\noindent The attack used dictionaries with up to 15 billion
entries.\footnote{Compare this with the full brute-force space of
$62^8$.} If eHarmony had properly salted their passwords, the attack
would have taken 31 years.

Clearly, passwords are a technology that is coming to the end of its
usefulness: brute force attacks become more and more powerful, and it
is unlikely that humans will get any better at remembering (securely)
longer and longer passwords. The big question is which technology can
replace passwords\ldots\medskip

\end{document}

%%% fingerprints vs. passwords (what is better)
% https://www.youtube.com/watch?v=VVxL9ymiyAU&feature=youtu.be

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: