TL;DR: Four upstream failures in the LastPass breach, each fixable with patterns that existed in 2022.

  1. Source code contained cleartext secrets instead of references to secrets fetched at runtime from a secrets manager.
  2. The decryption key for customer vault backups was stored in a LastPass vault instead of an HSM, and the rotation cost of that arrangement distorted the incident response in a dangerous direction.
  3. Engineers with privileged access were permitted to run non-current macOS versions, for which Apple demonstrably delays patches, when it ships them at all. A browser-to-kernel exploit chain, actively exploited in the wild during the LastPass compromise window, was patched on the current macOS twenty-six days before the previous supported version received the fix.
  4. AWS credentials in the compromised vault had no IP, MFA, or VPC restrictions, so they worked from anywhere the moment the attacker had them.

The detailed arguments and fixes are below.

Intro

The 2022 LastPass breach is the security industry’s reference case for what happens when a sophisticated attacker breaches a company that handles other people’s secrets. The attacker exfiltrated encrypted customer vault backups that have been slowly cracked offline for the last three years, producing a long tail of follow-on attacks against LastPass customers. The on-chain analysis firm TRM Labs has traced hundreds of millions of dollars in cryptocurrency theft to the breach, much of it flowing through Russian-associated infrastructure, including a single $150 million XRP theft from Ripple co-founder Chris Larsen in 2024 [1]. LastPass itself has faced a £1.2 million penalty from the UK Information Commissioner’s Office [2], an $8.2 million class action settlement [3], and the kind of reputational damage that’s hard to quantify but easy to see in the company’s diminished market position.

Three years of post-breach root cause analysis has focused on the downstream effects and on the obvious entry point of the DevOps engineer’s personal Plex server. The downstream effects are important. The Plex story is important too, and has been covered in depth. What has not been covered is the framing LastPass used to deflect the Plex criticism, and the four upstream failures that mattered more.

LastPass put cleartext credentials in source code, which turned the loss of fourteen repositories into the loss of working production secrets. They protected their customer vault backups with a key stored in a LastPass vault, and discovered mid-incident that the cost of rotating the Key Encryption Key had distorted their response in a dangerous direction. They allowed engineers with privileged access to the production environment to use Macs that were at least one major version behind, during a window when an actively-exploited browser-to-kernel exploit chain was patched on the current macOS version and remained unpatched on the previous one for another twenty-six days. And they issued the AWS credentials that the attacker eventually exfiltrated as long-lived keys with no IP, MFA, or VPC restrictions, which meant that those credentials worked from anywhere the moment the attacker had them. The point is not to pile on LastPass. The point is that all four failures are common, all four are fixable, and the LastPass breach is the reference case for what happens when you skip them.

1) Secrets in Source Code

LastPass’s March 2023 incident report acknowledged that the 14 exfiltrated repositories included cleartext embedded credentials, stored digital certificates, and encrypted credentials used for production. Most coverage mentions this in passing before moving on to the downstream effects. It deserves more attention because it is the failure that made everything else consequential. If the source code had contained references to secrets rather than secrets themselves, Incident 1 ends with the attacker holding architectural documentation and nothing else useful. The DevOps engineer in Incident 2 still gets compromised via Plex, but when the attacker pulls the contents of his vault, there is no decryption key for the backup infrastructure to find, because there is no decryption key in a vault to begin with.

The correct pattern is that source code never contains secrets, only references to them. At runtime, the application fetches the actual secret from a dedicated secrets manager using its workload identity, and the secret never touches disk on the developer’s machine, the build artifact, or the repository. AWS Secrets Manager, HashiCorp Vault, Google Secret Manager, and Azure Key Vault all implement this pattern. The mechanics differ slightly but the shape is the same: the application asks “give me the secret named X,” the secrets manager checks whether the caller’s identity is authorized to read X, and if so returns it.

Here is what this looks like in practice for a Python service running on ECS or Lambda:

import boto3
import json
import psycopg2

def get_database_credentials():
    client = boto3.client('secretsmanager')
    response = client.get_secret_value(SecretId='prod/db/primary')
    return json.loads(response['SecretString'])

creds = get_database_credentials()
conn = psycopg2.connect(
    host=creds['host'],
    user=creds['username'],
    password=creds['password'],
    dbname=creds['dbname'],
)

The application code contains the name of the secret (prod/db/primary), not the secret itself. The boto3 client uses the task’s IAM role to authenticate to Secrets Manager. The IAM policy on that role is what controls access:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db/primary-*",
    "Condition": {
      "StringEquals": {
        "aws:SourceVpce": "vpce-0abc123def456"
      }
    }
  }]
}

Three things are happening in this policy that matter. The Resource is scoped to one specific secret, not *, so a compromised task can only read what it needs. The aws:SourceVpce condition restricts the call to a specific VPC endpoint, so the secret cannot be retrieved from outside the VPC even with valid credentials. And there are no long-lived access keys anywhere in this picture; the ECS task assumes a role and gets short-lived credentials that rotate automatically.
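The role itself is constrained at a second layer: its trust policy determines who may assume it in the first place. A minimal sketch, assuming an ECS task role (the shape below is the standard ECS trust relationship; nothing here is LastPass-specific):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ecs-tasks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
```

Only the ECS task service can assume the role, and the credentials it hands out are short-lived STS tokens, so there is no static key to leak.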

The repository now contains a config file like this:

database:
  secret_id: prod/db/primary
  region: us-east-1

If an attacker exfiltrates this repository, they get the name of a secret they have no way to retrieve. They learn that the application uses a PostgreSQL database in us-east-1, which is architectural information of roughly the same value as the README. They do not get credentials.

Detection is the other half of the pattern. Pre-commit hooks like gitleaks or trufflehog scan staged changes for things that look like secrets and refuse the commit if they find one. CI pipelines run the same scanners on every push as a backstop. GitHub’s secret scanning runs continuously on public repositories and notifies partner providers (AWS, Stripe, Slack, etc.) when it finds a credential they issued, and the provider then revokes it automatically. None of this is exotic, none of it is expensive, and all of it existed in 2022.
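As an illustration of what these scanners match on, here is a toy sketch, not gitleaks itself; real rule sets run to hundreds of patterns plus entropy heuristics:

```python
import re

# Toy versions of a few well-known credential shapes. Real scanners
# like gitleaks combine many such rules with entropy-based checks.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of every credential pattern found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# A diff that hardcodes a key is flagged; one that names a secret is not.
print(scan('AWS_KEY = "AKIAIOSFODNN7EXAMPLE"'))  # ['aws_access_key_id']
print(scan("secret_id: prod/db/primary"))        # []
```

In a pre-commit hook, the same check runs over the staged diff and exits nonzero on any hit, which blocks the commit.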

2) Encryption Keys Belong in an HSM

LastPass encrypted their S3 backups with SSE-C, which means the customer generates the key, sends it to S3 on every API call, and AWS discards it after each operation. The decryption key for the SSE-C key was stored in the personal LastPass vaults of four senior engineers. If the threat model says LastPass could not trust AWS KMS with their key material, the correct answer was CloudHSM or an on-premises HSM with client-side encryption, not a 256-bit value stashed in a password vault. LastPass was already running physical datacenters for production, so the infrastructure was in place. An HSM keeps the key material inside a hardware boundary, logs every cryptographic operation with the calling identity, and lets you revoke access with a policy change that takes effect in seconds.

An HSM would not have prevented the initial compromises. Someone authorized to decrypt backups for disaster recovery can still be impersonated by an attacker who compromises their workstation. What an HSM would have changed is the shape of LastPass’s response to Incident 1. After Incident 1, LastPass knew the attacker had obtained the encrypted SSE-C key from the compromised repository. They rotated the AWS access keys between August 16 and 18. They did not rotate the SSE-C key, because rotation meant re-encrypting terabytes of backup data and they assessed that the decryption key was still safely held. Two days later, one of those vaults was compromised. With an HSM, the equivalent precautionary response is revoking the compromised principal’s access, which is cheap enough that there is no cost-benefit calculation to get wrong. The SSE-C rotation cost distorted the response in a dangerous direction, and the attacker’s window between Incident 1 and Incident 2 was exactly the kind of window that a cheaper response would have closed.
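The revocation point is concrete. With a KMS- or CloudHSM-backed key, cutting off a compromised principal is a policy edit, not a re-encryption job. A hedged sketch of a key policy statement that does this (the account ID and user name are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "RevokeCompromisedEngineer",
    "Effect": "Deny",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/compromised-engineer" },
    "Action": "kms:Decrypt",
    "Resource": "*"
  }]
}
```

The deny takes effect within seconds of the policy update, and every decrypt call is logged with the calling identity, which is exactly the audit property a key sitting in a password vault cannot provide.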

3) The Browser-to-Kernel Chain Apple Patched Nine Days Late

The LastPass Incident 1 post-mortem indicates that the Software Developer’s laptop was compromised starting August 8, 2022, and that a scheduled operating system upgrade coinciding with the incident wiped out logs relevant to the investigation. The word “upgrade” is meaningful in Apple parlance: it signals a major version transition, not a point release. In August 2022, the current macOS was Monterey; the previous supported versions were Big Sur and Catalina, with Mojave out of support since July 2021. Whichever of the older versions the engineer was running, Apple’s pattern of delaying patches on non-current operating systems made it a worse position to be in than Monterey.

Apple has a systematic pattern of delaying patches on non-current operating systems even when they are still officially supported. Joshua Long has documented how patches often arrive weeks after the current-OS fix, leaving an estimated 35–40% of all supported Macs exposed to actively-exploited vulnerabilities at any given time [4]. CVE-2022-22675, an actively-exploited out-of-bounds write in the AppleAVD media decoder, was patched on Monterey in 12.3.1 on March 31, 2022 and on Big Sur in 11.6.6 on May 16, 2022, leaving Big Sur users exposed for forty-six days [5]. Within a week of Apple shipping the Monterey 12.3.1 fix, security researcher Mickey Jin reverse-engineered the patch and publicly verified that Big Sur was still vulnerable [4]. The gap between “Apple ships the fix on Monterey” and “Apple ships the fix on Big Sur” is also the gap during which the patched code itself becomes the most reliable available description of the vulnerability for any attacker willing to read it.

The pattern matters more than usual in this specific case because of what Apple patched on August 17, 2022, nine days after the LastPass developer’s laptop was first compromised. Monterey 12.5.1 was an emergency release containing fixes for exactly two CVEs, both flagged by Apple as actively exploited in the wild [6]. CVE-2022-32893 is an out-of-bounds write in WebKit, the rendering engine that powers Safari, that allows arbitrary code execution from maliciously crafted web content. CVE-2022-32894 is an out-of-bounds write in the kernel that allows arbitrary code execution with kernel privileges. The two were almost certainly meant to be chained: WebKit gives an attacker initial code execution as a sandboxed browser process, and the kernel bug elevates that to ring zero. This is the canonical shape of every modern macOS exploit chain that matters in the wild: drive-by browser RCE for entry, kernel LPE for full control. Apple shipped both halves together because they were responding to a known in-the-wild exploit chain, not two unrelated bugs. Both CVEs are listed in the CISA Known Exploited Vulnerabilities catalog [7].

Apple patched the chain on Big Sur on September 12, 2022 in macOS Big Sur 11.7, twenty-six days after the Monterey patch and thirty-five days after the LastPass compromise began [8]. Neither CVE has been publicly attributed to a specific actor or campaign. Apple’s standard language (“Apple is aware of a report that this issue may have been actively exploited”) typically means at least one targeted attack has been observed, but neither Citizen Lab nor Google’s Threat Analysis Group has published a technical writeup of this particular chain. The lack of attribution cuts both ways. It does not prove the chain reached the LastPass developer’s machine, and it does not prove it did not.

The LastPass post-mortem describes Incident 1 as follows: an unknown initial threat vector, an EDR agent that was tampered with and not triggered, anti-forensic activity that destroyed evidence of how the compromise happened, and the assessment that “no privilege escalation was identified or required” for the lateral movement into the cloud development environment. The chain Apple patched in 12.5.1 fits this description in three independent ways. The browser-RCE entry vector explains why no email artifact, no malicious attachment, and no obvious payload was found, because there isn’t one to find. The kernel privilege escalation explains how the attacker disabled the EDR agent, which on macOS requires kernel-level access to remove the EDR product’s hooks. The combination explains the anti-forensic activity, which is also kernel-privileged work. None of this proves the chain was used. The logs were destroyed and Mandiant could not determine the entry vector. But on August 8, 2022, this chain was an unpatched zero-day on every version of macOS that mattered. Whoever had it could use it against anything.

The chain may or may not have been the entry vector for the LastPass compromise. The evidence is circumstantial and the logs are gone. But the question of whether this specific chain was used is less important than the structural reality the chain illustrates. Apple patched it on Monterey on August 17, 2022, and on Big Sur on September 12, 2022. That twenty-six day gap exists every time Apple patches a zero-day. Sometimes shorter, sometimes longer, but the gap is built into how Apple supports older macOS versions, and sophisticated attackers operate inside that gap by design. The conventional wisdom that staying one version behind is “safe” introduces months of unintended exposure given this pattern. Monterey reached 12.2 by January 2022, which gives a reasonable amount of time to test compatibility before the August incident. The defensible posture is to stay on the current major version of macOS, accept the compatibility risk that comes with it, and treat one-version-behind as a temporary state rather than a stable home.

4) Allowing Personal Devices

The LastPass public disclosures about device controls use a rhetorical pattern I see constantly in vendor risk assessments. The company is careful to specify what employees could not do: access the “network” without a company-provided laptop, MFA, and Cisco AnyConnect [9]. Read quickly, this sounds like a strong statement about device trust. Read carefully, it is a statement about one specific network, with everything else left unsaid. Everything in it is technically true, and the gaps it leaves open are where the breach happened.

What the statement does not say is what “network” actually covers. It does not cover the LastPass web application itself, which was reachable from any browser. It does not cover corporate vaults, which were accessed from whatever machine the engineer happened to be using. It does not cover the backup infrastructure, which was a separate cloud environment that ran on AWS. The scope of systems an engineer needed to compromise to reach customer vault backups does not overlap with the scope of systems LastPass’s device policy actually governed.

The ICO found that a keylogger captured the DevOps engineer’s master password, and a stolen trusted device cookie allowed the attacker to bypass MFA on the LastPass web application entirely. The web application was not on the “network,” so device policy never mattered. The second incident followed the same pattern at a different layer. The entry point was not a company laptop failing a control; it was an exploitable Plex server running on an engineer’s personal computer, which was not a LastPass device at all. The engineer was using that personal computer to access his corporate LastPass vault, which held the decryption keys the attacker eventually used to unlock the exfiltrated customer backups. The personal computer was outside the scope of any LastPass device policy, because LastPass’s device policy was scoped to their internal network, and a personal computer on an engineer’s home LAN is not on their internal network by definition.

The credential failure compounds the framing failure. Once an attacker was on the engineer’s personal machine with access to his LastPass vault, the AWS credentials in that vault worked from anywhere. There was no constraint on where they could be used from, no constraint on how they could be used, and no constraint on how long they remained valid. AWS provides several IAM condition keys that constrain where and how credentials can be used, and a sufficiently restrictive combination would have made the exfiltrated credentials nearly useless outside the corporate environment. The strongest pattern is restricting the credentials to specific VPC endpoints, which keeps the traffic off the public internet entirely:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "aws:SourceVpce": "vpce-0abc123def456"
      },
      "Bool": {
        "aws:MultiFactorAuthPresent": "true"
      }
    }
  }]
}

Three things are happening in this policy. The aws:SourceVpce condition restricts the call to a specific VPC endpoint, which means the credentials cannot be used over the public internet at all, even with valid keys. The aws:MultiFactorAuthPresent condition requires MFA at the API call level, not just at the console login level, so exfiltrated long-lived keys without MFA tokens cannot be used. And the policy could be combined with short-lived STS credentials issued by an SSO provider rather than long-lived access keys, which means the credentials a vault contained would expire on their own within hours of exfiltration regardless of any other control.

Where VPC endpoint restrictions are not possible, IP-based conditions restricting use to the corporate VPN range are a weaker but reasonable fallback. Policies using aws:SourceIp need to be written carefully: AWS services making calls on behalf of the user appear to originate from AWS infrastructure rather than the user’s IP, so production policies often need a companion aws:ViaAWSService condition to allow legitimate service-chained calls.
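As a sketch, that fallback typically takes the shape of an explicit deny (the CIDR range here is illustrative, standing in for a corporate VPN egress range):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideCorpNetwork",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] },
      "BoolIfExists": { "aws:ViaAWSService": "false" }
    }
  }]
}
```

The BoolIfExists test on aws:ViaAWSService exempts calls that AWS services make on the principal’s behalf, which would otherwise fail the IP check because they originate from AWS infrastructure rather than the user’s address.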

The framing question I started this section with applies here too. A vendor who tells you their “internal systems” require MFA is not answering the question you asked. The question is which systems an attacker has to reach to hurt your customers, and whether the controls the vendor described cover all of them. An exfiltrated vault containing credentials that only work from inside a specific VPC, with MFA, and that expire in an hour, is an exfiltrated vault that does not unlock customer backup infrastructure two days later. LastPass’s answer to the framing question, in retrospect, was no.

Closing

None of the four failures above is exotic. Secrets management is a solved problem with free and commercial tooling that runs on every commit. HSM-backed key management is an off-the-shelf cloud service. Operating system patch cadence is a policy decision that any organization can make and enforce. Conditional access with device trust is a mature pattern that Okta, Microsoft Entra, and every other identity provider has supported for years, and IAM condition keys that pin credentials to a network and require MFA have been available just as long. The LastPass breach happened not because these controls were unavailable or unknown, but because the organization did not apply them rigorously enough to the specific systems that needed them most.

Sources


  1. TRM Labs, “TRM Traces Stolen Crypto From 2022 LastPass Breach; On-Chain Indicators Suggest Russian Cybercriminal Involvement,” https://www.trmlabs.com/resources/blog/trm-traces-stolen-crypto-from-2022-lastpass-breach-on-chain-indicators-suggest-russian-cybercriminal-involvement. Ian Allison, “Ripple Co-Founder’s $150M XRP Heist Related to LastPass Hack: ZachXBT,” CoinDesk, March 8, 2025, https://www.coindesk.com/tech/2025/03/08/ripple-co-founder-s-usd150m-xrp-heist-related-to-lastpass-hack-zachxbt.

  2. UK Information Commissioner’s Office, Penalty Notice – LastPass UK Ltd, November 2025.

  3. In re LastPass Data Security Incident Litigation, settlement details at https://www.lastpasssettlement.com/.

  4. Joshua Long, “Apple Neglects to Patch Two Zero-Day, Wild Vulnerabilities for macOS Big Sur, Catalina,” Intego Mac Security Blog, https://www.intego.com/mac-security-blog/apple-neglects-to-patch-zero-day-wild-vulnerabilities-for-macos-big-sur-catalina/. The 35–40% figure on Mac patch coverage gaps and Mickey Jin’s reverse-engineering of the CVE-2022-22675 Monterey patch are both sourced from this article.

  5. Apple, “About the security content of macOS Monterey 12.3.1,” March 31, 2022, https://support.apple.com/en-gb/HT213220. Apple, “About the security content of macOS Big Sur 11.6.6,” May 16, 2022, https://support.apple.com/en-us/102873.

  6. Apple, “About the security content of macOS Monterey 12.5.1,” August 17, 2022, https://support.apple.com/en-us/103006. The two CVEs patched in this release, CVE-2022-32893 and CVE-2022-32894, are both flagged by Apple as actively exploited in the wild.

  7. Cybersecurity and Infrastructure Security Agency, “Known Exploited Vulnerabilities Catalog,” https://www.cisa.gov/known-exploited-vulnerabilities-catalog. Both CVE-2022-32893 and CVE-2022-32894 are listed.

  8. Apple, “About the security content of macOS Big Sur 11.7,” September 12, 2022, https://support.apple.com/en-us/102840. The temporal alignment between this CVE pair’s disclosure window and the LastPass Incident 1 timeline is my own observation.

  9. LastPass, “Incident 1 – Additional details of the attack,” March 2023 incident disclosure. The direct quote about company-provided laptops and the Cisco AnyConnect VPN requirement is from this document.