Security Absolutism... The Enemy of Security
This may be a bit of a controversial topic, but I personally believe that security absolutism is the enemy of security. It's closely related to FUD, a topic I recently wrote about. I even posted a Tweet thread to this effect:
We see this often. Your password is not absolutely secure because of "insufficient entropy". You are using TLS 1.1. You are using SMS 2FA. And so on and so on... For the vast majority of people this is absolutely meaningless! They are not going to know what entropy is, and most likely won't really care either. What they want to know is whether their password will allow an attacker to break into their account. Not whether some attacker with a supercomputer could brute force it after months of running (something only a nation state is likely to have). The problem with many of these arguments is that they are largely theoretical and highly unlikely to affect the vast majority of users, or the alternatives simply don't exist (for example, a site that only supports SMS 2FA).
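To put rough numbers on why the "insufficient entropy" argument is mostly theoretical for ordinary users, here's a small back-of-the-envelope sketch. The character-set size, password length, and guess rate below are illustrative assumptions, not claims about any real attacker:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    # Entropy of a uniformly random password: length * log2(charset size)
    return length * math.log2(charset_size)

def years_to_crack(bits: float, guesses_per_second: float) -> float:
    # Average case: the attacker searches half the keyspace
    seconds = (2 ** bits) / 2 / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# Assumption: a 12-character random password drawn from the 94
# printable ASCII characters
bits = entropy_bits(94, 12)

# Assumption: an attacker capable of a trillion guesses per second
# (an aggressive, nation-state-scale offline rate)
years = years_to_crack(bits, 1e12)

print(f"{bits:.1f} bits, ~{years:,.0f} years on average")
```

Even under that very generous assumption about the attacker, the average crack time comes out to thousands of years; for the password most people actually have, the practical risk is account takeover via phishing or reuse, not brute force.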
Recently someone on Twitter stated that a reporter, whom I have great respect for, was wrong to advise their readers to update to iOS 14.8. Their argument was that the reporter should instead have encouraged readers to update to iOS 15, since iOS 15 had greater security improvements. While not technically wrong, they were missing a few things. Firstly, Apple still officially supports iOS 14.8. Secondly, iOS 15 is still relatively new, and many people wait at least a few months before moving to a new major version. In fact, iOS 15 shipped with a few issues, most notably with CarPlay:
The point is that people operate differently, and crucially have their own threat tolerances and threat models. Above all, most ordinary people will almost always value functionality over security, to the extent that some will actively go out of their way to prevent updates. There are several reasons for this, but the most common is that updates break things, especially major updates. So many users typically wait for the dust to settle before installing them. I know many who do this. The security-absolutist position that users must always update to the latest and greatest sounds great on paper, but in reality forcing users to do so can often have the opposite effect.
The point I'm trying to make is that we need to stop viewing security through our own prism, and instead start viewing it through the prism of ordinary users. Things aren't black and white; there are varying degrees. While it would be great to reduce risk to zero, the reality is that this will likely never happen. The effort and burden it imposes on the user would make a system almost unusable, or perhaps unusable entirely (a rock is pretty darn secure, but pretty useless as an IT system). There is a constant tug of war between functionality (and usability) and security, so we have to aim for the best balance. My view is that if we approach security willing to weigh in a bit on the usability side, we will have much better success. In the iOS case above, for example, iOS 14.8 lets users stick with the older OS and let the dust settle, while still getting important security updates. Is there a chance they could be compromised? Yes, there certainly is. But there's also a chance a meteorite could fall out of the sky and strike you on the head. For ordinary users the chances are so insignificant as to be basically negligible.
I've spent several years involved with vulnerability management (from an application security perspective). Applying appropriate thought to risk, and weighing the security implications against the functional impact and the effort to fix, was a very big part of this. It's why I live and breathe this way of thinking, and it's something I believe many cybersecurity professionals should actually spend time doing: viewing things from the other side is a very important part of it. The same applies when giving security advice to ordinary users, who often have limited or no security knowledge.
I love this blog post by Troy Hunt. In it he highlights that a good enough solution is a perfectly reasonable solution. At the end of the day, security is about managing risk, and often we just need to get that risk to a level we find acceptable to live with. There will never be a time when you can completely remove risk (i.e. have something "unhackable"). Trying to force users toward that ideal will only increase their frustrations and, as a result, make them resent security further. We have to approach this from the other angle: get them on board with us. To do that, we take the steps that help them, and this to me is why the "good enough" solution is the ideal solution. It's the balancing point between security and functionality. It's the point where users face fewer frustrations and, hopefully, take a greater vested interest in security as a result. And even if they don't, their risk will be such that the chances of them being compromised are about as high as being struck by a meteorite.