The CSIRT at Orange Cyberdefense has had to handle unprecedented levels of cybersecurity incidents. A steady flow of Microsoft Office 365 email compromises has been leveraged in large-scale ransomware attacks. None of them have been “nation-state attacks”, and the majority have not been what we would classify as overly sophisticated. However, they were all causing severe damage before we were called in. This section looks at a small selection of the mistakes we saw last year and the damage they caused.
This is the stuff of IT nightmares, and a fable as old as IT itself: “no one will hack us, we don’t have anything worth stealing”. So why bother with even the most basic industry best practices? That is exactly the attitude we found: a totally flat network, no backups, over 30 domain admin accounts, and no centralized logging.
The latter meant that when someone opened a macro-laden Word document, no one spotted that their antivirus had alerted on (but not blocked) a download of Emotet, nor did anyone spot that, shortly afterwards, a local admin account was used to install network mapping tools.
A good Security Operations Centre (SOC) could have issued an early warning on either of those events. Everything could have been cleaned up, and the end user could have been given some training to help prevent such incidents from happening again. But that’s not what happened.
The attackers were lucky: protection of the local admin account on the endpoint they had access to was, to be polite, very weak. Worrying on its own, this gets terrifying when you factor in that the local admin password was the same on every endpoint on the network, including servers and hypervisors. This gave the attackers total access to the entire network, with no one watching what they were doing.
So off the attackers went: deleting backups, disabling antivirus, creating domain admin accounts, using BloodHound to map out the entire network, and opening up the firewall to inbound Remote Desktop Protocol (RDP) connections from the outside.
The attack reached a devastating crescendo when the ever-popular Ryuk ransomware was placed in a hidden share folder on the client’s domain controller, accompanied by a list of over 4,000 Microsoft Windows endpoints in a simple “.txt” file, a lone “.bat” file, and a copy of the legitimate Windows “PsExec” binary. With one click, the “.bat” file unleashed Ryuk across the network, encrypting every usable file and grinding the business to a total standstill.
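As an aside for defenders: remote execution with PsExec leaves a well-known fingerprint, because the tool installs a temporary Windows service (named “PSEXESVC” by default), which is recorded in the System event log as Event ID 7045 (“A service was installed in the system”). The sketch below shows one minimal way to hunt for that indicator; it assumes the System logs have already been exported from centralized logging into a CSV file with host, event_id, service_name and timestamp columns, which is purely an illustrative format rather than anything produced by the tools named in this story.

```python
import csv

# Hypothetical CSV export of Windows System event logs pulled from central logging.
# Assumed columns: host, event_id, service_name, timestamp
EVENT_EXPORT = "system_events.csv"

# Event ID 7045 = "A service was installed in the system".
# PsExec installs its helper service under the name PSEXESVC by default.
SERVICE_INSTALL_EVENT = "7045"
PSEXEC_SERVICE_NAME = "PSEXESVC"


def find_psexec_installs(path):
    """Return service-install events whose service name matches PsExec's default."""
    hits = []
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            if (row["event_id"] == SERVICE_INSTALL_EVENT
                    and PSEXEC_SERVICE_NAME in row["service_name"].upper()):
                hits.append(row)
    return hits


if __name__ == "__main__":
    for hit in find_psexec_installs(EVENT_EXPORT):
        print(f"{hit['timestamp']}  {hit['host']}: PsExec-style service install "
              f"({hit['service_name']})")
```

Attackers can rename the service with PsExec’s -r option, so a check like this only catches default usage; the point is simply to illustrate the kind of signal a SOC with centralized logging would have had at its disposal.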
In total, the Orange Cyberdefense CSIRT worked for four weeks to get the network back up and running. Against all advice from Orange Cyberdefense, the client paid half a million euros to the attackers for the decryption keys. On top of that, they had to pay a law firm hundreds of thousands in fees to handle the payment (which raises the question of who the real criminals are here), and well over half a million more in network upgrades and policy changes to get the damaged network back to a clean and trustworthy state.
Lessons learned
So what should you take away from this tale of horror? The majority of weaknesses in this network could have been easily addressed: network segmentation is probably the most basic of security measures, along with strong password policies and restrictions on user rights. These measures have some impact on how IT staff work, but they don’t cost a lot to implement.
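On the password front, the shared local admin password that did so much damage in this story is also cheap to audit for. The sketch below shows one way to flag reuse, assuming you have already collected an inventory of local administrator password hashes per host into a CSV with host and hash columns; the file name and column layout here are illustrative assumptions, not the output of any specific product.

```python
import csv
from collections import defaultdict

# Hypothetical inventory export: one row per endpoint.
# Assumed columns: host, hash (the local administrator password hash)
INVENTORY = "local_admin_hashes.csv"


def find_reused_hashes(path):
    """Group hosts by local admin password hash and keep hashes shared by more than one host."""
    hosts_by_hash = defaultdict(list)
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            hosts_by_hash[row["hash"]].append(row["host"])
    return {h: hosts for h, hosts in hosts_by_hash.items() if len(hosts) > 1}


if __name__ == "__main__":
    for pw_hash, hosts in find_reused_hashes(INVENTORY).items():
        print(f"{len(hosts)} hosts share the same local admin hash "
              f"({pw_hash[:8]}…): {', '.join(sorted(hosts))}")
```

Randomizing local administrator passwords per machine (Microsoft’s LAPS is one widely used option) removes this entire class of lateral movement in a single step.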
Admittedly, retrofitting a SOC is a big project, but that is exactly why you ensure your network implements best practices to begin with. The scariest part of this story: we have left a lot of the details out for privacy reasons. In reality, it was much worse.
Gartner: “The Importance of Rapid Incident Response to Augment Threat Monitoring and Detection Is Growing”
Our Incident Response team helps you to recover completely from an incident.