The ground is shaking under one of cybersecurity’s favorite acronyms. Dr. Karen Renaud, Chancellor’s Fellow at the University of Strathclyde, and Dr. Marc Dupuis, Assistant Professor at the University of Washington Bothell, believe that fear, uncertainty, and doubt (FUD) aren’t all they’re cracked up to be.
Listen to the full podcast here, or read on for their top three takeaways.
Too much fear burns people out and makes them less responsive to fear appeals
KR: The literature tells us that when people are targeted by a fear appeal they can respond in one of two ways. They can either engage in a danger-control response or a fear-control response.
A danger-control response is generally aligned with what the designer of the appeal intended. So if a fear appeal is trying to encourage a user to back up their files, a danger-control response would involve the user making the backup.
Alternatively, a fear-control response sees the user try to combat the fear itself. They don’t like the feeling of fear, so they act to stop feeling it – they attack the fear rather than the danger. This response is undesirable: the user might go into denial or become angry with the person or organization that exposed them to the fear appeal. Ultimately, the user is unlikely to take the recommended action.
When we consider events such as the COVID-19 pandemic, we can see how adding cybersecurity fear appeals to people’s pre-existing fear risks users feeling overwhelmed and having a fear-control response. People are already seeing so many fear appeals that they are likely to go into denial and refuse to take the message on board.
Fear appeals can encourage people to take more risks
MD: I have a three-and-a-half-year-old son. Unlike my daughter, if I tell him to not do something like stand on a chair, and explain that he might crack his head open if he does, he’ll do it. So, he’ll climb on the chair, and then if he doesn’t crack his head open he’ll say ‘See daddy, I didn’t crack my head open!’, and in his mind, my warning has been disproved.
This scenario with my son speaks to another point on fear appeals – we scare people to try and get them to not do something, but when they do it anyway and nothing bad happens it only reinforces the idea that the consequences aren’t that bad.
KR: You can see examples of this kind of thing throughout history. If you look back at the German bombing of London during the Second World War, something similar happened. Though the Germans’ goal was to get Britain to capitulate, the bombings provoked a totally different response – the British people became more defiant. People get tired of being afraid, and we need to consider this when designing cybersecurity training and messaging.
People need to be viewed as the solution instead of the problem
MD: We are all responsible for changing the narrative in cybersecurity away from fear, uncertainty, and doubt (FUD), and it starts with conversations like this. It is easy to criticize something, but the question we then need to answer is what we can replace it with. We know self-efficacy is the major player – but what is that going to look like? I believe that approaches will vary between organizations, but the underlying concepts will be the same: creating a less punitive system and building a sense of togetherness.
KR: When you treat your users as a problem it informs the way you manage them. Currently, many organizations see their employees as a problem – they’ll train them, they’ll constrain them, and then they’ll blame them when things go wrong! Unfortunately, this method stops users from being part of the solution and creates the very problem you’re trying to solve.
To improve cybersecurity, it is crucial that you make everyone feel like they’re part of the defense of the organization.
My research with the Technical University of Darmstadt looked into what kind of things we could do to make this happen, and it really comes down to a few core principles:
Encourage collaboration and communication between colleagues – So we can support each other.
Build resilience as well as resistance – Currently, there is a huge focus on resisting security threats, but we also need to know how to bounce back when things do go wrong.
Make security training and awareness policies flexible and responsive – We treat security training and awareness policies as one-size-fits-all, but this is outdated. We need to ask people whether what we are proposing is possible for them and the role they do, and adapt accordingly.
Learn from successes, not just mistakes – What did some people spot in a phishing message that others didn’t? Teach other people those techniques.
Recent examples in other industries, such as safety, have shown that putting power into employees’ hands can be revolutionary. We have yet to see it done in cybersecurity, but I’m certain that it is right around the corner.
Dr. Karen Renaud
Chancellor's Fellow, The University of Strathclyde
Dr. Karen Renaud is an esteemed Professor and Computing Scientist from Abertay University whose research focuses on all aspects of Human-Centred Security and Privacy. She is especially interested in the interplay between users and security in societal and industrial contexts, and her goal is to improve the boundary where humans and cybersecurity meet.