
Ransomware and ‘ransom-war’: why we all need to be ready for cyberattacks – on Radio Davos

Robin Pomeroy, Podcast Editor, World Economic Forum

Listen to the interview on Radio Davos

It’s boom time for cyber criminals trying to make easy money by taking computer data hostage and demanding ransom. As online working surged during the pandemic, so did cybercrime – ransomware attacks rose 151% in 2021. The World Economic Forum’s Global Cybersecurity Outlook found there were on average 270 cyberattacks per organization that year, with each successful cyber breach costing a company $3.6m.

On this podcast, we speak to Jim Guinn, Senior Managing Director – Security, Strategy and Consulting Lead at Accenture, a company that had its own well-publicized ransomware attack last year, and to Algirde Pipikaite, Cybersecurity Strategy Lead at the World Economic Forum.

And to talk about how ransomware can often be considered ‘ransom-war’, we speak to Alex Klimburg, head of the World Economic Forum’s Centre for Cybersecurity.

Read the Forum’s Global Cybersecurity Outlook 2022.

This is a transcript of the interviews from the Radio Davos episode: ‘Ransom war’

Alex Klimburg, Head, Centre for Cybersecurity, World Economic Forum: Ransomware is one of the rising political weapons in cyberspace. I actually, in my publications, have referred to political ransomware attacks as ‘ransom war’. It’s certainly the weapon of choice in cyber conflict in the last couple of years.

Ransomware attacks have been part of the global landscape for a while now, but we can go back to around 2015, 2016, when a group of cyber actors – criminal hackers, but probably an intelligence service – secured themselves an NSA cyber weapon called EternalBlue. EternalBlue was a massive Windows exploit, targeting a security vulnerability in the Windows operating system that was very difficult to patch and was unknown at that time. So, basically, anybody who had this exploit would have access to any vulnerable Windows machine.

And that tool, supposedly maintained by the US intelligence services, was stolen by this hacker community and put online. Now, the suspicion in 2016, around the US elections, was that this was actually Russian military intelligence trying not only to embarrass US intelligence but also to cause problems by creating more cyber criminal activity – the more cyber criminal activity occurs, the busier the cyber defenders are, and the more difficulty they have dealing with cyber crime as well as with cyber war and cyber intelligence activities. So, basically, it's sometimes a bit of a win-win for some actors to increase the level of cybercrime.

So they put out this exploit called EternalBlue and said: hey, does somebody want to pick this up? Does somebody want to use this? Nobody actually did for a couple of years, which was kind of bizarre.

And then suddenly, in 2017, actors associated with North Korea put out something called WannaCry. WannaCry was an extremely destructive ransomware attack that hit, for instance, the UK National Health Service so badly that a quarter of its hospitals were at one point offline. So, to put that in context, there's no doubt that many people died as a result of this attack. This was quite clearly an attempt to road-test the EternalBlue vulnerability, but also to get other actors to see the attraction of ransomware, because basically you could pay to have your data decrypted, restored and returned to you. It wasn't destroyed in the WannaCry context.

The more cyber criminal activity occurs, the busier cyber defenders are and the more difficulty they have dealing with cyber crime activities, cyber war and cyber intelligence activities.

—Alex Klimburg, Head, Centre for Cybersecurity, World Economic Forum

In theory, it was possible to pay money and get your data back, or at least get your system to operate again. It turned out, however, that the hackers were not very responsive to individuals' requests to have their data released. So it became rather obvious that this was more of a political attack: the attackers were more interested in causing damage than in making money.

So, that attack came and went, and it was pretty bad. But what happened afterwards, only a matter of weeks later, was even worse: NotPetya. NotPetya is now considered the most destructive cyberattack ever, and it came out of Ukraine. It looks like a Russian intelligence or cyber operator intentionally infected a Ukrainian business software company that had links to a number of external companies all over the world, including, for instance, FedEx and Maersk.

And the ransomware spread so quickly – within two days – and caused such severe disruption to Maersk, FedEx and dozens of other companies that the total damage has now been assessed at over $2 billion, which is quite an astronomical sum. For Maersk alone the damage was around $300 million over that period, one of the highest cyber damages that has ever been put on paper.

Ukraine was always connected with field testing new cyber weapons. Ransom war attacks were first tested in Ukraine.

—Alex Klimburg, Head, Centre for Cybersecurity, World Economic Forum

And the thing about this attack is that even though it was theoretically a criminal attack – there were claims that you could get your data back if you only paid a bit of ransom – there was actually no way to pay. Nobody ever answered the email and no data was ever decrypted, so it was a fake ransomware attack. It was a 'ransom war' attack. The primary intent was to cause damage and political disruption.

So, Ukraine was always connected with field testing new cyber weapons. Ransom war attacks were first tested in Ukraine, and a lot of the activity that we see right now internationally is sometimes assumed to be ransomware done by actual cyber criminals who are only interested in money. But sometimes it might be politically minded actors who are more interested in causing disruption. This is the lesson we learnt from 2017: it's an extremely effective weapon, both for causing political insecurity and for raising the temperature overall. So, it's basically a very efficient tool to use if you want to cause disruption on a massive scale.

Robin Pomeroy: And are we seeing that increase now since the invasion of Ukraine?

Alex Klimburg: What we are seeing is a very high level of ransomware that was already active before the invasion. The invasion alert more or less started back in October of last year, and from that period onwards we did see an increase in ransomware attacks, in particular across Europe. Europe lagged behind North America for quite a while, which is another indication that most ransomware attacks were actually political and not criminal – Europe would have been just as juicy a target as most American enterprises, but, for political reasons, the US was the primary target. Then the focus started to shift and more European enterprises were hit.

We saw, for instance, a rather significant attack on the fuel retail business in Germany, as well as on a large oil refinery in Rotterdam. And there have been a number of other attacks reported, for instance in transport companies and similar. Sometimes these attacks have a payment option, but the payment option is either fake or takes so long to exercise that it is effectively useless. Or the amount of data that has been encrypted and needs to be recovered is so large that recovery is basically pointless – the data is effectively destroyed.

The cyber weapon of choice these days is ransomware, and very often it is political and therefore really ransom war – although the lines are intentionally blurred between political actors and cyber criminals.

Company leaders over-confident on cyber risk?

Algirde Pipikaite, Cybersecurity Strategy Lead, World Economic Forum: Actually, I would love to say I'm surprised by the statistics you just mentioned but, sadly, I don't think they surprised the cybersecurity community. I think what did surprise a lot of CEOs and board members is how much more confident they feel about their resilience and about their ability to respond to an incident if one occurs. Jim, what's your take there?

Jim Guinn, Senior Managing Director – Security, Strategy and Consulting Lead, Accenture: You hit it right on the head. We've seen the same sort of trend in all the years I've spent trying to help secure critical infrastructure organisations. And the trend is a belief, at the executive level, that we have this problem called cybersecurity, we understand it and we can conquer it. But once you go down into the organisation, to the people who have to live it every day, they're less confident in the ability to thwart an attack, because they know that attacks are constantly evolving, constantly changing.

So, when you give a board presentation or a board update at the macro level, at the very high level, you say: here's where we are, here are the things we've done to become more cyber resilient, and here's our journey to continue on that path. In three weeks that may have changed, but the report is already out, your executives have seen it, and they're thinking: gosh, I feel very confident we have a handle on this. But then the climate changes, or the environment changes, or there are Eastern European tensions between two countries which elevate the risk level for all countries. Things happen so rapidly in cyberspace that a senior executive feeling very comfortable about the cybersecurity posture and the organisation actually being strong and resilient are not necessarily always connected. They ebb and flow at various times, and it's an unfortunate reality of the world we live in today.

Algirde Pipikaite: Do you think this reality is upon us because we are so massively connected? Does COVID have any role to play in it? Or do you foresee that, with the dawn of the [end of the] pandemic, the cybersecurity situation will hopefully improve? What's your take there?

If you increase the number of attachments or connections, you increase the attack surface. And when you increase the attack surface you give bad actors a better opportunity to try to navigate to get in.

—Jim Guinn, Senior Managing Director – Security, Strategy and Consulting Lead, Accenture

Jim Guinn: Roll back the clock to circa 2019. In 2019, there were a significant number of cyber events occurring on a continuous basis. And we can go all the way back – whether it's Stuxnet, WannaCry, Wannacrypto or any number of significant global cyber events. And when the world had to shift from working the way we used to circa 2019 to the way we work today – very, very connected on home networks, on mobile devices, on potentially unsecured networks, to be able to communicate with corporate assets – we did see an increase in cyber activity simply because of the connectivity.

Now, I am one who subscribes to the belief that we will never go back to the way it was circa 2019 in terms of how we worked and what we did. There will be an evolution of more remote working, or continual remote working, and that's going to increase the number of, in the most simplistic terms, IP-addressable assets. And if you have an IP-addressable asset – meaning a human working on a thing, attaching to a thing and doing their job – then if you increase the number of attachments or connections, you increase the attack surface. And when you increase the attack surface you give bad actors a better opportunity to try to navigate their way in. If you pivot to things like the metaverse, responsible AI and 5G – the things that are really going to accelerate the adoption of technology across a broader spectrum – I think this problem is only going to get worse. It's not going to get easier or better. I think it's going to continue to evolve, and we just have to be very vigilant in how we approach the cyber measures we have in our organisations today.

Algirde Pipikaite: Let me follow up on AI, blockchain and 5G, and the new technologies and new realities we are introducing, like augmented reality and the metaverse – combined with what we've seen with ransomware attacks, which rose last year, in 2021, by at least 150%, if not more, and with cryptocurrency being used on an enormous scale for payments and for tracking those payments. My question is: do you still see us suffering from ransomware through very basic attacks? Or do you foresee that the introduction of new technologies, or the combination of technologies we will be using, will bring really sophisticated attacks?

If somebody wants to break in and they are funded by a nation-state or an affiliate, it is going to be impossible to stop them.

—Jim Guinn, Senior Managing Director – Security, Strategy and Consulting Lead, Accenture

Jim Guinn: I think both, and history has proven that it will be both. One of the common things we have seen – not always, but a common theme – is not having simple things like multifactor authentication enabled on various devices. And therefore someone can get in because they have harvested a credential by some other means.

Until we can get the fundamentals right – like multi-factor authentication across the enterprise, and I'm talking about everything from operational technology assets to enterprise IT assets to our mobile devices, all of our endpoints – until we can get that done, you will still see the less sophisticated actors get in and do harm: cyber gangs trying to make a quick buck.

The second thing is – with all respect to all nations – if a nation-state with really strong cyber capabilities, as well-funded as they are, wanted to do harm, directly or through an affiliate, you cannot stop them. It is simply a matter of time.

There’s been a number of recent multinational ‘zero-days’ – no one knew it was coming – nation-state affiliates and/or direct nation-states who have caused some real upheaval. And it’s near impossible. If somebody wants to break in and they are funded by a nation-state or an affiliate, it is going to be impossible to stop them. It’s a matter of time.

So, I think you’re going to have both. I think you going to have the very less than sophisticated gangs that try to make a quick buck by leveraging harvesting credentials and get in, and you’re going to have very sophisticated nation-state actors who want to cause disruption in the globe.

Robin Pomeroy: That’s pretty scary. So, in the first category there you have to plug all those holes like a leaky ship in some ways. If you plug those holes – ways of accessing those networks – then you should be able to stop criminal ransomware gangs. But the other category you are talking about – these state actors – which have been going on now for well over a decade, to my amateur knowledge – you’re saying it’s impossible to stop them. So what do we do to stop a health service from being stopped in its tracks or the energy grid of a country being blocked out? You have to tackle that after the fact, is that what the situation is?

Jim Guinn: Yes. And I will stick with your analogy because it's a very good one and it's used quite often. A great philosopher once said that 'the moment that the ship was created, we also created the shipwreck' – meaning that once you build ships and sail them, one will crash at some point, and it is a tragedy. Having spent the early part of my career working offshore on vessels, the first thing you learn is how to exit the vessel in the event of an emergency. What that means is you have a safety plan and a security plan, you know where to muster, and everybody knows what their role is.

We always try to avoid a shipwreck – a cyber shipwreck. However, if it does happen, it's not about the fact that it happened; it's about resilience, and how quickly you can isolate, contain, recover and respond in a very orderly fashion. To stick with your ship analogy: if you did have a major catastrophe on a ship and everyone panicked, it would be very tragic. So it's about planning, it's about execution, it's about being deliberate in your moves, so that when it happens to an organisation you're part of, in whatever form it appears, you have a very well-tested, tried-and-true plan, you can be more resilient, and you can recover from it very quickly.

We’re all going to get sick. We’re going to get the flu, we’re going to get a cold or, you know, God forbid, we’ll get COVID, but we will all get sick. But we should all be able to recover if we have the right plan. If we have the right pharmaceutical capabilities, we can all get better. Same thing with cyber, it’s going to happen. So let’s talk about how we recover from it in a very logical and structured way so that we minimise the impacts to the organisation or our customers or our suppliers in the future.

Algirde Pipikaite: At the very beginning, Robin mentioned that on average it takes around 280 days for a company or organisation to identify and start responding to an incident. That means that if someone was hacked on 1 January, they would potentially only find out about that hack around mid-October.

If resilience is our mantra, how can we actually prevent that, and how would you even tackle an attacker who sits in our networks, potentially for 280 days? The reconnaissance they are doing, the way they identify your sensitive data, your vulnerabilities, your access management and everything else – their navigation through your network over nine or 10 months is spectacular. So do you still believe it's a winnable battle, or is it a case of 'once we find it, then we will try to prevent that situation'?

Precautions against cyberattacks

Jim Guinn: That may be some organisations’ strategy: ‘once we find it, we’ll try to prevent it’. That’s not necessarily what we would advise, even with our own experience.

It’s about the preparedness. It’s about planning in advance. Because the closer you can get to impact – meaning day one: infiltration occurs or someone has gotten in, some bad actor’s gotten in. Now it’s about trying to roll the clock back from hundreds of days down to a handful of days so that you can reduce the impact.

And there’s a there’s a lot of things that we can do. And I would gladly share a good reference architecture with anyone that listens to the podcast here or beyond. There are some very, albeit difficult to execute, simple philosophies. If you use them, you can actually decrease that time from impact to extrication – removing them, getting them out of your network.

It’s things like multifactor authentication. If every device needs multiple credentials to get in it’s really hard to move laterally in the network.

Zero trust. What does zero trust mean? I don’t trust anybody that’s on the network, so I need to have multiple ways to authenticate to a given set of assets in a business context to protect them so that we don’t have loss of IP or loss of data.
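To make the 'zero trust' idea concrete, here is a minimal sketch in Python of the kind of per-request access decision the model implies: nothing is trusted on network location alone, and every request is checked against identity, second factor, device state and role. The field names, policy table and thresholds are illustrative assumptions for this sketch, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    role: str                 # e.g. "finance-analyst"
    resource: str             # e.g. "payroll-db"
    mfa_verified: bool        # second factor completed for this session
    device_managed: bool      # device enrolled in corporate management
    geo: str                  # coarse location of the request

# Hypothetical policy: which roles may touch which resources.
ROLE_RESOURCE_POLICY = {
    "finance-analyst": {"payroll-db", "expense-reports"},
    "engineer": {"source-repo", "build-system"},
}

ALLOWED_GEOS = {"GB", "US", "DE"}  # illustrative allow-list

def evaluate(request: AccessRequest) -> bool:
    """Zero-trust style check: deny unless every signal passes."""
    if not request.mfa_verified:
        return False                       # no second factor, no access
    if not request.device_managed:
        return False                       # unmanaged device, no access
    if request.geo not in ALLOWED_GEOS:
        return False                       # unexpected location
    allowed = ROLE_RESOURCE_POLICY.get(request.role, set())
    return request.resource in allowed     # role must explicitly include the resource

# Example: an engineer asking for payroll data is refused even with MFA.
print(evaluate(AccessRequest("u42", "engineer", "payroll-db", True, True, "GB")))  # False
```

The point of the sketch is the shape of the decision – deny by default, allow only when every signal passes – rather than the specific rules.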

There’s some really fundamental things. And, again, they’re not easy to implement and they cost both time and money to do. But the the most spectacular – and I’m using that word in a negative context, not a positive context – the most spectacular cyber events we have seen have been on flat networks, without multifactor authentication, without the ability to see the environment, to know what’s happening.

We try to describe this in simple terms: if you can't see it, you can't protect it. Literally, if you cannot see it, you cannot protect it. So the ability to see into your environment across the entire landscape, and to correlate events, gives you a better chance of reducing that dwell time from hundreds of days down to a few hours.
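As a toy illustration of what 'correlating events' across the landscape can mean, the sketch below – with an assumed log format and thresholds, not a real detection product – groups authentication events by account and flags any account that suddenly touches many distinct hosts within a short window: the kind of lateral-movement signal that shortens dwell time when it is caught early.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative log records: (timestamp, account, host)
events = [
    (datetime(2022, 3, 1, 2, 0), "svc-backup", "hr-01"),
    (datetime(2022, 3, 1, 2, 3), "svc-backup", "fin-02"),
    (datetime(2022, 3, 1, 2, 5), "svc-backup", "eng-07"),
    (datetime(2022, 3, 1, 2, 6), "svc-backup", "dc-01"),
    (datetime(2022, 3, 1, 9, 0), "alice", "alice-laptop"),
]

WINDOW = timedelta(minutes=30)   # assumed correlation window
HOST_THRESHOLD = 3               # assumed "too many distinct hosts" cut-off

def flag_lateral_movement(events):
    """Return accounts that touched an unusual number of hosts in one window."""
    by_account = defaultdict(list)
    for ts, account, host in sorted(events):
        by_account[account].append((ts, host))

    flagged = []
    for account, records in by_account.items():
        for i, (start, _) in enumerate(records):
            hosts = {h for ts, h in records[i:] if ts - start <= WINDOW}
            if len(hosts) >= HOST_THRESHOLD:
                flagged.append((account, sorted(hosts)))
                break
    return flagged

print(flag_lateral_movement(events))
# [('svc-backup', ['dc-01', 'eng-07', 'fin-02', 'hr-01'])]
```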

And even in our own case – the event last year, where a LockBit ransomware gang was able to infiltrate a particular set of servers that, quite frankly, were misconfigured – we had them locked out in under five hours. So we know it can be done. In our own experience of what we had to live through because of a misconfiguration with multifactor authentication, we were able to identify, isolate and eradicate a threat actor in less than five hours.

Algirde Pipikaite: With the blurred lines of working from home – many people using work devices for their personal lives and personal errands, or their personal devices for work – is it easy to identify when an intruder gets into the network? So many different devices are connected, and so many routers that are not corporate-secured – that are home-grade, or maybe not secured at all – are sitting on our networks. How are you identifying intruders? And once you identify one, what are the steps? What do you do to actually contain an incident, and how do you know it's not Jim, it's not Kelly, it's not Diana sitting on the network?

The best protection from a threat actor is your own people. Our own people are probably our greatest strength and potentially our biggest weakness when it comes to cyber events.

—Jim Guinn, Senior Managing Director – Security, Strategy and Consulting Lead, Accenture

Jim Guinn: There are so many things that go into a cyber strategy, but a big one is being able to understand usage patterns – to understand personas and what each persona should be doing and how they interact with their job, also known as 'roles'. What is my role in the organisation? What should I be interacting with? What data should I be touching? When do I typically touch it?

There’s a lot of things that we can use – data analytics and AI – responsible AI, meaning understanding it and not using it for ill gotten gains, but responsible AI – there’s a lot of things that we can learn about our environments just watching the way that humans interact with technology and interact with machines.

The fundamentals of something like zero trust are really about having a set of roles, personas and IDs that are going to interact with a set of systems in a certain way, knowing what that is, and then allowing access through a 'trust yet verify' mechanism.
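A minimal sketch of what a role or persona baseline might look like in code follows – the data shapes and slack values are assumptions for illustration, not any vendor's analytics engine. It builds a per-role profile of which resources are normally touched and at what hours, then flags accesses that fall outside that profile.

```python
from collections import defaultdict

# Illustrative historical access records: (role, resource, hour_of_day)
history = [
    ("finance-analyst", "expense-reports", 10),
    ("finance-analyst", "payroll-db", 14),
    ("finance-analyst", "expense-reports", 15),
    ("engineer", "source-repo", 11),
    ("engineer", "build-system", 16),
]

def build_baseline(records):
    """Per role: the resources normally used and the usual working hours."""
    resources = defaultdict(set)
    hours = defaultdict(set)
    for role, resource, hour in records:
        resources[role].add(resource)
        hours[role].add(hour)
    return resources, hours

def is_anomalous(role, resource, hour, baseline, hour_slack=2):
    """Flag access to an unseen resource, or at an unusual hour for the role."""
    resources, hours = baseline
    if resource not in resources.get(role, set()):
        return True
    usual = hours.get(role, set())
    return all(abs(hour - h) > hour_slack for h in usual)

baseline = build_baseline(history)
print(is_anomalous("engineer", "payroll-db", 11, baseline))        # True: wrong resource for the role
print(is_anomalous("finance-analyst", "payroll-db", 3, baseline))  # True: 3am is far from usual hours
print(is_anomalous("finance-analyst", "payroll-db", 13, baseline)) # False: normal resource, near usual hours
```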

And the unfortunate thing is – we talked about it in the report – that many smaller organisations, which are very, very critical to the ecosystem, do have folks using multiple devices to attach to corporate assets. They do have less sophisticated cyber postures, because of the cost and time to implement. You may have an organisation of 100 people that is a critical part of the supply chain for durable goods or pharmaceuticals; they can have a vulnerability, and someone can infiltrate that organisation to get to your organisation. Those attacks are not impossible to stop, but they are very difficult to stop.

And it goes back to this: if you don't know what individuals are supposed to be doing while they're on a network, then you can't really tell when they're doing something they shouldn't. So it's back to 'if you can't see it, you can't protect it', or 'if you don't know, you can't protect it'. It's not easy. It's doable, but it takes time and it takes some fortitude to really invest in corporate assets, intellectual assets and our people.

Let’s not forget our people. The best thing to to avoid a threat actor from getting in is your own people. Our own people being diligent, not clicking on links, not having shared assets where I’m reading my web mail on the same thing as my computer and someone sends me a video of something that’s supposed to be funny and it had malware and now it’s on my corporate asset, and now it can traverse the network and get in. So our own people are probably our greatest strength and potentially our biggest weakness as it comes to cyber events.

Robin Pomeroy: Can we talk about what it's like to be a victim of a ransomware attack? When people think about ransom, they imagine a note through the door with letters cut out of a newspaper: 'we're holding your puppy, give us $50 and you'll get it back'. In a way, it's not that far off, is it? I was reading about the attack you experienced last year – is it true to say they paste something up as wallpaper on your screen that actually says: 'Yeah, we've now taken over this computer. Here we are. Ha ha ha. Send us the money'? I mean, it's kind of that basic. Is that how you first realise you're under attack?

Jim Guinn: Hopefully not, because at that point they have gone completely laterally across your network, they’ve infiltrated multiple devices, and now they have what effectively would be a choke hold on your infrastructure. By the time that comes up, it’s really bad. I mean, it’s really, really bad. Ideally, you will catch them in the act. You will see network traffic that is anomalous. You will see activity that is not normal.

You know, take your MXDR platform. We've got one of the best in the world; we use it for hundreds and hundreds of clients. When it detects an anomaly and pops a flare, you immediately go and investigate what happened. So hopefully you catch it before you get that notification. But if you do get that notification, there's a whole new series of activities you're going to have to go through that are going to be less than ideal for yourself, your people, your clients and your suppliers. So hopefully you get to it before that occurs. Does that make sense?

Robin Pomeroy: Yeah, absolutely. The acronym caught me out though – MXDR – could you explain what that is?

Jim Guinn: It’s ‘manage, extend, detect and respond’. It’s a set of technologies that is a big, if you will, kaleidoscope of all the activities going across your network and everything – it enables the seeing of what’s happening.

And it’s the ability to manage and extend and detect and respond to a particular event. It may be anomalous. It may be good. It may be bad. But it’s something that’s different. So you have to investigate it. So it’s a bit of fabric that lays over the top of the network that identifies a ripple. And then we go look at the ripple and figure out what that is and why is it correlating in other areas and why is it appearing in other places. So it’s a way to better detect when there is an incident or the potential of an incident.

Robin Pomeroy: So you’ll be able to tell us what you are able to tell us about this attack last year, but it’s been fairly widely reported this ransomware group LockBit – maybe you could tell us who they are or who they might be. They were asking for $50 million in ransom. So how did it all unfold from where you were sitting?

Jim Guinn: We were immensely transparent with all of our clients. The event occurred on 30 July – I want to say that was a Sunday; I'm having to roll back the clock in my mind, but I believe it was a Sunday. And I remember getting up on the Monday morning. My boss at the time had sent me a text, and I got up at about 5 or 5.15 in the morning. Usually when I get up I take the dog for a walk, and I was still in the outfit I'd walked the dogs in. I don't keep my phone with me, so it was only when I walked past it, going into the office, that I tapped the screen and saw I'd gotten the message. And I thought: that's never good. It's just not good to have a message at that time in the morning.

So I looked at the message, and the world changed.

And, you know, what I will say is you can't always believe what you read from a threat actor, for a couple of different reasons. Number one, it was not $50 million. We don't disclose what they wanted, but I can tell you it was significantly less. And within less than five hours we had it contained. From the time someone got in, our alerts went off, we knew something was anomalous and we went to investigate, it was isolated inside of five hours. As for the data they were able to exfiltrate – as it turned out, and as we widely shared with many clients, it was mostly seating charts, internal communication emails, some benchmarking data that was publicly available, just a series of things. The value of the data was not there.

Threat actors – LockBit, any of them – like to boast about what they have, so that if they can't get what they want from the entity they harvested it from, they can try to sell it on the dark web by inflating its perceived value. In this case, it was a rock that had been spray-painted gold. Yes, there was something there. It was material – not material in a financial sense, but it was data. They spray-painted it gold to make it seem more important or bigger than it was. And thank goodness it wasn't, because that would have been a different set of circumstances. But the reality is they realised what they had, and when we said, yeah, we're not going to pay, then it became widely public. Once we knew what they had, and once we felt comfortable that we had it contained and isolated, it became more of a stalling technique than a real negotiation – a way to defer until what we knew, and what our internal CERT [computer emergency response team] and our internal CISO [chief information security officer] were trying to get their arms around, became publicly known. So it was certainly not that amount of ransom. And, to be really clear, we did not pay.

Ransomware: to pay or not to pay

Algirde Pipikaite: How hard is the decision to pay or not to pay? Do companies normally decide to pay and get rid of the problem? Or is it less complicated not to pay and to try to rebuild the network? There are two different philosophies. Which one do you normally lean towards, and which do you see happening more in the market?

If it is going to impact health, safety or the environment, you need to have a protocol for decision-making as to whether you’re going to pay or not.

—Jim Guinn, Senior Managing Director – Security, Strategy and Consulting Lead, Accenture

Jim Guinn: That is a fantastic question, and it's not an easy one to answer. Some believe you should always pay; some believe you should never pay. What I try to distill it down to, at least with the clients that I and my team serve, especially in critical infrastructure, is this: if it is going to impact health, safety or the environment – I want to be really clear: health, safety, the environment – you need to have a protocol for decision-making as to whether you're going to pay or not. If there is going to be a significant impact on the environment in a very, very negative way, or on the safety of our people or others, or on the health of patients and patient care, you have to have a decision tree that you've already run through numerous times in tabletop exercises and executive discussions, to figure out: if we pay, what does that mean? If we don't pay, what does that mean? So that you're not trying to work it out in real time.

I personally subscribe to the view that you have to determine what is a payable event and what is not, well before you actually have an event. Then you have to run those protocols when, or if, you do get into that set of scenarios, and you have to live by the decisions that you made – because when you're trying to make massively critical decisions in real time, with emotions running high, in the midst of a cyber event, generally speaking you may not make the best decision.

So think about it when times are calm, when people are not upset and it's not a panic or a hair-on-fire situation. The value of planning for war is not necessarily the plan itself; it's the act of planning. And going through that plan, both for a cyber event and for business-critical operations, gives you a better chance of making the best decision at that moment in time.

So I don’t subscribe to either. I am actually right down the middle of the road. It’s that every company needs to have the ability. within the legal requirements of the jurisdiction they operate in, to determine whether or not they should or should not pay, because of impacts to health, safety and environmental concerns.

Robin Pomeroy: So do companies actually run those scenarios, those war games? I'm assuming it's big companies that have the capacity to do that, but are you aware of companies doing that kind of thing?

Jim Guinn: Yes, they do. We've evolved from the late '90s and early 2000s – '98, '99, 2000, 2001, 2002 – when some of these emerging technologies were really taking off. We've evolved quite significantly: really large institutions that have a decent enough budget will go through red-teaming exercises, tabletop exercises and decision-making processes to figure out how they would respond to a particular event and what the protocol should be to guide that decision.

And for smaller organisations, there are now mechanisms through many governments – whether in Australia, Singapore, the EU, the United States or Canada – that facilitate those sorts of capabilities for smaller organisations to leverage, albeit not at the scale and capacity that every organisation might need, or want, at a given moment.

But we’re getting better. And those that actually do prepare are probably in the best position to to shrink that window, not just of dwell time, but of potential loss of IP, data or other material assets.
