How to persuade average people that security matters?
Frankly, I believe the only way to achieve that is to offer no other choice - or at least to make doing the right thing much easier than the alternatives. All of the points you raise put a burden on the user to get some aspect (however small) of information security correct; but users are not, for the most part, information security experts. They are bricklayers, or physics researchers, or project managers: we should let them get on with laying bricks, or researching physics, or managing projects. Those are the things they are expert at; we should be doing the security for them.
To pick on one example, "don't click on links you don't trust". Just how should an administrative assistant, say, decide whether a link is trustworthy? Look at the WHOIS record for the domain (having ensured that their DNS is being resolved through a 'trusted' channel)? Examine the details of the SSL certificate? Phone the webmaster and verify the key fingerprint? Only, why should they trust the phone number, the phone network, the person on the other end of the phone... It's a ridiculous position: we give people hyperlinks that can point anywhere, then tell them to exercise judgement about which ones to follow.
So I might be busy ranting, but am I going to suggest an alternative? Yes: stop using backwards compatibility, or familiarity with the existing workflow, to justify continued use of broken systems. To address the trusted-link example, we have two problems:
- we can't identify all websites reliably, because not all of them have identification data (currently, an SSL certificate);
- we don't necessarily trust the people (the certificate authorities) who are telling us to trust SSL certificate holders.
Point 1 can be easily, if expensively, addressed.
Point 2 is harder. From a technological perspective, you can imagine issuing multiple certs from multiple authorities, but then you need UI for examining and evaluating multiple certs; yuck. Or you could imagine a 'web of trust' model where people countersign keys from sites they trust, and from people they trust to trust, and so on; but now we're back in the position we were in before, where we have to know whom to trust (this is essentially what Moxie Marlinspike went on to implement in Convergence). Or you could adopt the approach of current web filtering tools: you don't trust a vendor to tell you who is good, but you trust them enough to declare that some sites are definitely bad.
This rough description fits with what Microsoft's SDL team has called NEAT security UI: the interface should be Necessary, Explained, Actionable and Tested. Compare that with the current UI for trusted websites: you get the interface (the padlock icon) when the trust is OK, i.e. when it's not Necessary. Clicking on that usually lets you see the details of the SSL certificate, but did anyone (and should anyone) explain what all of the fields in an SSL certificate mean to the user? Not Explained. Also, most browsers only stop you proceeding to sites with obviously broken certs, so in many cases the UI is not Actionable either.
The point is that if the UI were NEAT, then it would become more valuable: users would see the UI at the point that they (not we) need to make a decision; they would be told the supporting information relevant to their decision; their decision would have a meaningful outcome; and we would know how they react in both benign and hostile conditions.
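Going back to the Convergence-style option above, here is a minimal sketch of the "ask several independent observers" idea: trust a site's certificate only if enough notaries report seeing the same certificate we do. The notary lookup itself is left as a hypothetical placeholder (real notary protocols vary); the point is only how agreement between independent observers could stand in for a single authority's say-so.

```python
# Minimal sketch of a Convergence-style quorum check, with a hypothetical
# notary lookup: trust a site's certificate only if enough independent
# observers report seeing the same certificate that we do.
import hashlib
import socket
import ssl

def observed_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the certificate the site presents to us."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the raw certificate bytes
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def notary_fingerprint(notary: str, host: str) -> str:
    """Hypothetical: ask one notary which certificate it sees for `host`."""
    raise NotImplementedError("depends entirely on the notary's own protocol")

def site_looks_trustworthy(host: str, notaries: list[str], quorum: int = 2) -> bool:
    """Trust the certificate only if `quorum` notaries saw the same one we did."""
    ours = observed_fingerprint(host)
    agreeing = sum(1 for n in notaries if notary_fingerprint(n, host) == ours)
    return agreeing >= quorum
```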
I pretty much agree with Graham: security these days is too important to be left to the layman, or even to the programmers themselves. It is best to have a dedicated team ensuring conformance to standards, including those which could impact the user-facing side of operations.
Along those lines:
- Rather than telling the user to use HTTPS when available, the program should itself pick HTTPS if possible, and disabling it should be awkward enough to put off a layman (a minimal sketch of this 'HTTPS first' behaviour appears just after this list). (I remember a famous organization offering HTTPS as an 'opt-in' for users rather than enforcing it. :P)
- Most software does offer auto-update. Having said that, some people do not like 'nosy' software that does things beyond their knowledge, especially connecting to the internet. In such a scenario, it becomes paramount that each release is reasonably secure in itself.
- Session management is another important aspect, and in today's era of data mining and targeted advertising its use is often in conflict with security goals. A site may want to sustain a session, track user activity and collect data on partner websites, as many social networking sites do. But a longer, sustained session implies a greater security risk. A good way to tackle this is to ask again for just the password when the user returns to the site after some specified idle time, after activity on other external sites, or when they access it through new tabs/windows (sketched in the second example after this list).
- Similarly, content filtering also needs to be taken care of, if not by the producers of the content, then by whoever brings the content to the user. E.g. anti-spam may be implemented at your mail provider, but it also needs to be implemented in your mail software such as Thunderbird or Outlook, in link detection in browsers, and so on.
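To illustrate the first point, here is a minimal sketch (standard-library Python; the fallback policy and names are illustrative, not any particular product's code) of a client that silently prefers HTTPS and only falls back to plain HTTP when the secure endpoint is unreachable:

```python
# Minimal sketch: prefer HTTPS transparently, without asking the user.
# Standard library only; the scheme order and timeout are illustrative.
import urllib.error
import urllib.request

def fetch(host: str, path: str = "/") -> bytes:
    """Try HTTPS first; fall back to plain HTTP only if HTTPS is unreachable."""
    last_error = None
    for scheme in ("https", "http"):
        url = f"{scheme}://{host}{path}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # HTTPS not offered or unreachable; try the next scheme
    raise ConnectionError(f"could not reach {host} over HTTPS or HTTP") from last_error
```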
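And for the session point, a minimal sketch of the 'ask again for just the password' policy; the field names and the 15-minute threshold are assumptions for illustration only:

```python
# Minimal sketch: keep the session, but require the password again once the
# session has been idle too long or is opened from a new tab/window.
import time

REAUTH_AFTER_SECONDS = 15 * 60  # assumed policy: 15 minutes of inactivity

def needs_reauthentication(session: dict, now: float | None = None) -> bool:
    """Should the user be asked for their password again before continuing?"""
    now = now if now is not None else time.time()
    idle = now - session.get("last_seen", 0.0)
    return idle > REAUTH_AFTER_SECONDS or session.get("opened_in_new_window", False)

def record_activity(session: dict) -> None:
    """Note activity so an actively working user is not interrupted."""
    session["last_seen"] = time.time()
```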
Most users do not get imaginative while using software; they generally follow a simple pattern. This can be used to security's advantage or to its disadvantage!
My experience is that average Joes do care about security – hasn't every member of this site been asked by someone outside the computer industry which antivirus software is best? And I mean a question posed as more than a feigned attempt to find some common ground with you; a direct question that clearly indicates they would like to hear what you think they can do to better their situation.
Maybe persuasion is not what’s needed.
Maybe what's needed is easier ways for average Joes to pick up Internet street knowledge.
Many already have security anxiety rooted in not knowing whether they're infected, owned, or doing stupid things. That means the power of incentives is already in play. They are just not aware of the plenitude of dangers you alluded to; using open Wi-Fi at Starbucks is crazy*, but knowing that is street knowledge.
Knowing why it's bad to walk down that street or to click that link takes work – you either have to experience it or be taught it. I wonder if there are any shortcuts to gaining such wisdom?
I think most people would be happy to spend a little extra time doing something right (i.e. securely) if they knew what that meant.
Alexis Conran (co-writer/actor for The Real Hustle) did a couple of keynotes at RSA Security conferences – if I remember right, his advice centered on making yourself more knowledgeable about the bad things out there so you can avoid getting owned. I guess that's another way of saying: to be secure, don't be stupid.
(*http://www.immunitysec.com/products-silica.shtml is a Starbucks Wi-Fi auto-owning tool)