
South Carolina Hack Attack Root Causes


Recently, the South Carolina Department of Revenue was hacked, losing tax records on 3.6 million people — that is, most of South Carolina’s population. These contained Social Security numbers at the very least, as well as 3.3 million bank account numbers, and may have been full tax returns (they haven’t said).

There’s been the usual casting of blame after such an incident, but it’s quite interesting to read over the incident response report they had Mandiant prepare for them. Despite being “PCI-Compliant”, they had a number of vulnerabilities that let the hackers break in. But what could they really have done to protect themselves? From the report, the attacker went through 16 steps:

1. August 13, 2012: A malicious (phishing) email was sent to multiple Department of Revenue employees. At least one Department of Revenue user clicked on the embedded link, unwittingly executed malware, and became compromised. The malware likely stole the user’s username and password. This theory is based on other facts discovered during the investigation; however, Mandiant was unable to conclusively determine if this is how the user’s credentials were obtained by the attacker.

It’s not clear here if this was untargeted spam phishing with off-the-shelf malware, or a spear-phishing attack on the DOR with custom malware. If it’s the former, then this would have been prevented by any decent mail security product (to block spam and phishing) and desktop anti-malware software with current signatures & centralized monitoring. Since I would think any “PCI-Compliant” institution would have this, my guess is that this was a spear-phishing attack. The unfortunate fact is that there’s basically nothing you can do about spear-phishing and targeted malware; by its nature it evades automated detection, and security awareness training is of limited effectiveness against a phishing mail customized for your employees. So far there’s no sign that the state DOR screwed up here.
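As an aside, one of the cheaper tricks a mail security product uses against garden-variety phishing is flagging links whose visible text names one domain while the underlying href points somewhere else. Here is a minimal sketch of that heuristic in Python; the message body is invented, and this stands in for no particular product:

```python
# A minimal sketch of one heuristic mail filters use against phishing: flag a
# link whose visible text names one domain while its href points at another.
# Illustration only; this is not how any particular product works.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect (href, anchor text) pairs from an HTML mail body."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href, self._text = dict(attrs).get("href"), []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def domain_of(text):
    """Best-effort hostname from a URL or bare domain; '' if it isn't one."""
    host = urlparse(text if "://" in text else "//" + text).netloc.lower()
    return host if "." in host and " " not in host else ""


def suspicious_links(html_body):
    """Links whose anchor text is a domain that differs from the href's."""
    parser = LinkExtractor()
    parser.feed(html_body)
    return [(href, text) for href, text in parser.links
            if domain_of(text) and domain_of(href)
            and domain_of(text) != domain_of(href)]


# Hypothetical example: the text claims the SC tax site, the href goes elsewhere.
body = '<a href="http://malware.example.net/x">www.sctax.org</a>'
print(suspicious_links(body))  # [('http://malware.example.net/x', 'www.sctax.org')]
```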

2. August 27, 2012: The attacker logged into the remote access service (Citrix) using legitimate Department of Revenue user credentials. The credentials used belonged to one of the users who had received and opened the malicious email on August 13, 2012. The attacker used the Citrix portal to log into the user’s workstation and then leveraged the user’s access rights to access other Department of Revenue systems and databases with the user’s credentials.

And right here in step 2 I think we’ve found the root cause of the attack. They had an external remote access service that allowed single-factor login — coming in through the perimeter from the Internet using only a password. Given that spear-phishing & targeted malware are not preventable, you have to assume that passwords will be stolen and have barriers in place to keep password-bearing attackers out; two-factor auth on remote access services should be a bare minimum, whether that’s SecurID tokens, smart cards, or other mechanisms.
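To make that bare minimum concrete, here is a minimal sketch of a time-based one-time password check (RFC 6238, the scheme behind most authenticator tokens and apps), using only the Python standard library. A real deployment would use an audited implementation; this just shows the shape of the second factor:

```python
# A minimal RFC 6238 (TOTP) sketch: even with a stolen password, the attacker
# also needs a code derived from a secret held on the user's token or phone.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Current RFC 6238 time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password_ok, submitted_code, secret_b32):
    """Both factors must pass: the password AND a fresh code from the token."""
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))

# e.g., with a secret shared at enrollment (this one is a well-known test value):
# verify_login(password_ok=True, submitted_code="123456", secret_b32="JBSWY3DPEHPK3PXP")
```

The point is the conjunction in verify_login: a phished password alone no longer gets the attacker through the Citrix portal.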

3. August 29, 2012: The attacker executed utilities designed to obtain user account passwords on six servers.

Dumping the LSA secrets requires administrative privileges. It’s possible the credentials the attacker acquired in step 1 were administrative on some servers, in which case there’s no new exploit here. But if they weren’t, the attacker elevated privileges in some way, implying that the DOR might have had a patch-management problem. Once again, though, it’s not clear that there’s much they could have done about it — patching inside of 30-60 days is actually very difficult for an enterprise of decent size, even a mature, technically competent one. If the attacker used a recent exploit, then the DOR might well have been no worse off, patching-wise, than everyone else. On the other hand, if the attacker used something ancient, this might be another failure on the DOR’s part. That said, with proper authentication on the remote access service, the attacker shouldn’t even have gotten this far.
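For what it’s worth, measuring the 30-60 day window is the trivial part; the hard part is actually shipping the patches. A minimal sketch of the audit, with hypothetical host names and dates standing in for what a real enterprise would pull out of its patch-management system:

```python
# A minimal patch-latency audit sketch: flag hosts whose most recent installed
# update is older than the SLA window. All host names and dates are hypothetical.
from datetime import date, timedelta

PATCH_SLA = timedelta(days=60)  # the outer edge of the 30-60 day window

# host -> date of its most recently installed update (illustrative data only)
last_patched = {
    "dor-web01": date(2012, 8, 20),
    "dor-db01": date(2012, 5, 2),   # months out of date
    "dor-fs01": date(2012, 7, 30),
}

def overdue_hosts(inventory, today):
    """Hosts whose newest installed patch is older than the SLA window."""
    return sorted(host for host, patched in inventory.items()
                  if today - patched > PATCH_SLA)

print(overdue_hosts(last_patched, today=date(2012, 9, 1)))  # ['dor-db01']
```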

4. September 1, 2012: The attacker executed a utility to obtain user account passwords for all Windows user accounts. The attacker also installed malicious software (“backdoor”) on one server.

At this point the attacker is a domain administrator; if he’s dumping “all Windows user accounts” he’s got at least a network login on the domain controller. Chances are that a domain admin had logged onto the first compromised server at some point, and thus the attacker captured his cached credentials. No new attacks or exploits here.
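The standard defense against exactly this is to keep domain-admin logons off ordinary servers in the first place, so there are no cached admin credentials sitting around to steal. A minimal sketch of an audit for that, assuming logon events have been exported to CSV; the account names, host names, and column layout are all hypothetical:

```python
# A minimal audit sketch: flag domain-admin logons to machines that are not
# domain controllers, since interactive logons leave cached credentials behind.
# Account names, host names, and the CSV layout are hypothetical.
import csv

DOMAIN_ADMINS = {"DOR\\da-jsmith", "DOR\\da-backup"}  # hypothetical admin accounts
DOMAIN_CONTROLLERS = {"DOR-DC01", "DOR-DC02"}         # hypothetical DC names

def risky_admin_logons(event_csv_path):
    """Yield (account, host) for admin logons to machines that are not DCs.

    Expects a CSV with Account, Host, and LogonType columns; interactive (2)
    and RDP (10) logons are the ones that cache credentials on the machine.
    """
    with open(event_csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if (row["Account"] in DOMAIN_ADMINS
                    and row["Host"] not in DOMAIN_CONTROLLERS
                    and row["LogonType"] in ("2", "10")):
                yield row["Account"], row["Host"]
```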

5. September 2, 2012: The attacker interacted with twenty one servers using a compromised account and performed reconnaissance activities. The attacker also authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
6. September 3, 2012: The attacker interacted with eight servers using a compromised account and performed reconnaissance activities. The attacker again authenticated to a web server that handled payment maintenance information for the Department of Revenue, but was not able to accomplish anything malicious.
7. September 4, 2012: The attacker interacted with six systems using a compromised account and performed reconnaissance activities.
8. September 5 – 10, 2012: No evidence of attacker activity was identified.
9. September 11, 2012: The attacker interacted with three systems using a compromised account and performed reconnaissance activities.

Nothing interesting here. Very few enterprises could have detected the above; it would require the sort of aggressive NIDS with extensive monitoring that’s normally only found in classified environments.

10. September 12, 2012: The attacker copied database backup files to a staging directory.
11. September 13 and 14, 2012: The attacker compressed the database backup files into fourteen (of the fifteen total) encrypted 7-zip archives. The attacker then moved the 7-zip archives from the database server to another server and sent the data to a system on the Internet. The attacker then deleted the backup files and 7-zip archives.

This was the exfiltration itself: over 8 gigabytes of database backups sent out to a system on the Internet. This is actually one thing a NIDS could be effective against, if tuned properly.
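That tuning can start as simply as per-host outbound volume accounting. A minimal sketch, with hypothetical addresses and thresholds standing in for whatever flow records a real NIDS or netflow collector would provide:

```python
# A minimal egress-volume sketch: total outbound bytes per internal host and
# flag anyone pushing far more data out than normal. Addresses, thresholds,
# and the flow format are hypothetical.
from collections import defaultdict

EXFIL_THRESHOLD = 1 * 1024 ** 3  # flag more than 1 GiB outbound in one window

def exfil_suspects(flows):
    """flows: iterable of (src_ip, dst_ip, bytes_sent); return heavy senders."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        # Crude internal/external split; a real sensor knows its own netblocks.
        if src.startswith("10.") and not dst.startswith("10."):
            totals[src] += nbytes
    return {ip: total for ip, total in totals.items() if total > EXFIL_THRESHOLD}

# Hypothetical flows: a database server pushing ~8 GB to one outside address,
# as in steps 10-11, next to ordinary browsing-sized traffic.
sample = [
    ("10.1.2.30", "203.0.113.9", 8 * 1024 ** 3),
    ("10.1.2.15", "198.51.100.7", 40 * 1024 ** 2),
]
print(exfil_suspects(sample))  # {'10.1.2.30': 8589934592}
```

Real tooling works from sensor data rather than tuples in a list, but the tuning decision (what outbound volume is anomalous for which host) is the same.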

The remainder of the attack steps were just some more reconnaissance, backdoor testing, and other probes, followed by Mandiant shutting down the attacker’s entry point.

The interesting thing here is that, assuming this was spear-phishing with targeted malware, the only mistakes the DOR seems to have made were insufficient IDS tuning (which is honestly usually high-effort, low-payoff security work) and having single-factor remote access (which is catastrophic). There’s nothing in this report that makes it look like the DOR’s IT department was run by a gang of idiots (as it did in, say, last year’s many Sony attacks); it looks like an organization that was doing most things right but had failed to deploy two-factor remote access. I’d wager their IT security guys wanted to, too, but were blocked by either the inconvenience to users or the cost of rolling out tokens or smart cards.

Having spent more than $14 million recovering from this incident, I’d bet two-factor auth is looking pretty cheap now.
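Back-of-envelope, with every figure except the reported $14 million being an assumption of mine:

```python
# Back-of-envelope only: every figure except the ~$14M recovery cost is a
# hypothetical assumption (headcount, per-user token price).
recovery_cost = 14_000_000  # reported cost of the incident, USD
employees = 4_000           # hypothetical DOR headcount
token_cost = 50             # hypothetical per-user hardware token, USD
rollout = employees * token_cost
print(f"2FA rollout ~${rollout:,} vs ${recovery_cost:,} cleanup "
      f"({recovery_cost / rollout:.0f}x difference)")  # roughly 70x
```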

