A “DevOps Firewall” Could’ve Prevented the Capital One Hack

Recently, federal prosecutors charged a Seattle woman with stealing more than 100 million credit applications from Capital One.

As the details of the attack became public, Capital One's AWS environment came under scrutiny across both the DevOps community and the media.

It's now accepted that the hacker's attack vector began with a misconfigured firewall: ephemeral AWS credentials were extracted from the instance role and used to raid data from under-restricted S3 buckets.
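
To make those mechanics concrete, here is a minimal, purely illustrative Python sketch of how EC2's instance metadata service hands out the temporary credentials attached to an instance role. The endpoint is only reachable from the instance itself, which is exactly the assumption a server-side request forgery through a misconfigured firewall defeats.

```python
# Illustrative sketch: the EC2 instance metadata endpoints that expose an
# instance role's temporary credentials. An SSRF hole in a misconfigured
# firewall can let an outsider make the instance fetch these URLs for them.
import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# First request returns the name of the role attached to the instance.
role_name = urllib.request.urlopen(IMDS).read().decode().strip()

# Second request returns ephemeral keys for that role: AccessKeyId,
# SecretAccessKey, a session Token, and an Expiration timestamp.
creds = json.loads(urllib.request.urlopen(IMDS + role_name).read())
print(creds["AccessKeyId"], creds["Expiration"])
```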

Several things immediately stand out about this attack. Most notably:

  1. A misconfigured firewall should not cause such a vast security breach. Failsafe measures should catch intruders; the lack of defense in depth suggests more systemic security issues.
  2. With a compliance tool like Matter, the hack would never have occurred. Broader security architecture reviews should have highlighted the extra S3 permissions and eliminated them from the role, or limited those permissions to a WAF-logging-specific bucket if needed (see the policy sketch after this list). Matter would've flagged this in the compliance stage of migration, and on an ongoing basis ensured that permissions were structured appropriately.
  3. Why weren't the S3 buckets, which were filled with highly sensitive information, restricted to known IP ranges only? These settings can be managed and continuously monitored with automated compliance tools like Matter; the same sketch below shows what such a restriction looks like.
  4. Capital One acknowledged that the web application firewall (WAF) role never made API calls like "List Buckets" or "Sync" until the day of the hack, yet nothing in the system flagged the role's change in behavior. When a credential set suddenly begins behaving atypically, such as scanning and looting S3 buckets, it's possible to flag the behavior for review (see the monitoring sketch below). Amazon Macie could have caught this abnormal behavior and alerted Capital One immediately.
  5. Netflix recently released RepoKid, an open-source tool that removes permissions that go unused; stripping the WAF role of S3 permissions it never exercised could have stopped the attack before it happened.
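
On points 2 and 3, here is a minimal boto3 sketch of what properly scoped permissions could look like. Every bucket name, role name, and IP range below is hypothetical, and a real deployment would manage these policies through code review and a compliance tool rather than ad hoc calls.

```python
# Hedged sketch (all names hypothetical): scope the WAF role's S3 access to a
# single logging bucket, and lock sensitive buckets down to known IP ranges.
import json
import boto3

iam = boto3.client("iam")
s3 = boto3.client("s3")

# Point 2: the instance role may write WAF logs to one bucket and nothing else.
waf_logging_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::example-waf-logs/*",  # hypothetical bucket
    }],
}
iam.put_role_policy(
    RoleName="example-waf-role",  # hypothetical role name
    PolicyName="waf-logging-only",
    PolicyDocument=json.dumps(waf_logging_policy),
)

# Point 3: deny all access to a sensitive bucket from outside known IP ranges.
restrict_by_ip = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-sensitive-data",
                     "arn:aws:s3:::example-sensitive-data/*"],
        "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
    }],
}
s3.put_bucket_policy(Bucket="example-sensitive-data",
                     Policy=json.dumps(restrict_by_ip))
```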
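
And on point 4, a hedged sketch of the kind of behavioral check that would have flagged the WAF role's change in behavior: scan recent CloudTrail events for API calls the role has never made before. Managed services like Amazon GuardDuty and Macie do this far more thoroughly; the role name and baseline below are hypothetical.

```python
# Hedged sketch: flag CloudTrail events where a role makes an API call
# outside its known baseline. Names and baseline here are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

# API calls considered "normal" for a log-shipping WAF role; anything
# else (e.g. ListBuckets, GetObject sweeps) is flagged for review.
BASELINE = {"PutObject"}

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username",
                       "AttributeValue": "example-waf-role"}],  # hypothetical
    MaxResults=50,
)
for event in events["Events"]:
    if event["EventName"] not in BASELINE:
        print(f"ALERT: unexpected call {event['EventName']} "
              f"at {event['EventTime']}")
```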

That said, Capital One got a few things right. As a best practice, logging must always be enabled across all public cloud accounts, and those logs should be sent to a protected and dedicated logging account.
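
As a rough illustration of that practice, here is a minimal boto3 sketch of a multi-region trail delivering logs to a bucket owned by a separate, locked-down logging account. The trail and bucket names are hypothetical, and the receiving bucket's policy must separately grant CloudTrail write access.

```python
# Hedged sketch: enable org-wide CloudTrail logging into a bucket owned by
# a dedicated logging account. Trail and bucket names are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="org-wide-audit-trail",             # hypothetical trail name
    S3BucketName="example-central-logging",  # owned by the logging account
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,            # tamper-evident log digests
)
cloudtrail.start_logging(Name="org-wide-audit-trail")
```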

And if an incident does happen, it's imperative to have a response plan prepared, so organizations know how they'll react before a compromise ever occurs.

On the last two points, Capital One was actually pretty successful. They logged everything, and once they discovered the attack it was immediately clear what had occurred. At minimum, Capital One can retrace the exact steps taken to breach their security and extract key learnings. In this case, the principle of least privilege would have prevented the breach.

Though Capital One gave up a scary amount of critical consumer data, they were also rapid and accountable in their response, which is surely worth something. In the end, the lesson is clear: build the right security in the first time. The public cloud is far more secure than on-premises data centers, but it isn't impenetrable, which is why it's imperative that the automated tools monitoring your compliance, and the DevOps teams building your public cloud, are paying attention.