Supply chain attacks are difficult to detect by design. The adversary's entry point is not your environment — it's a vendor you trust, a library you depend on, an update pipeline you have no visibility into. By the time malicious code reaches your systems, it arrives with legitimate signing certificates and expected behavior patterns. Traditional detection focuses on anomalies; supply chain compromises are engineered to look normal.

What follows is a technical account of a detection case from our threat research team — a software supply chain compromise we identified and disrupted before the payload stage. Specific company and vendor names have been omitted, but the technical details are real.

How the detection started

The initial signal was not an alert generated by endpoint tooling or a SIEM rule. It was infrastructure. During routine external attack surface enumeration for a customer in the defense industrial base, our discovery process identified a new subdomain added under a domain belonging to one of the customer's software vendors. The DNS record was three days old. The hostname pattern — a combination of a legitimate-looking product name and version string — was consistent with what a vendor might use for a software distribution or update endpoint.

What made it suspicious was the hosting. The IP address resolved to a datacenter allocation in Eastern Europe that we had associated with C2 infrastructure used in prior campaigns by a specific threat actor group. The hosting provider had no other relationship to the vendor's infrastructure footprint. The host's TLS certificate had been issued the same day the DNS record was created.

Alone, none of these indicators would trigger a high-confidence alert. Combined, they produced a pattern consistent with adversary-controlled infrastructure masquerading as vendor update infrastructure. We flagged it for investigation rather than dismissal.
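The triage logic described above can be sketched as a simple weighted combination of weak signals, where no single indicator crosses the investigation threshold but their co-occurrence does. This is a minimal illustrative sketch; the field names, weights, and threshold are assumptions, not our production scoring model.

```python
from dataclasses import dataclass

@dataclass
class SubdomainObservation:
    age_days: int                        # days since the DNS record first appeared
    hosting_matches_known_c2: bool       # IP overlaps known adversary infrastructure
    provider_in_vendor_footprint: bool   # hosting provider seen elsewhere for this vendor
    cert_issued_same_day: bool           # certificate minted the day the record appeared

def suspicion_score(obs: SubdomainObservation) -> float:
    """Combine weak indicators into a single triage score.

    Each signal alone is low-confidence; the score only crosses the
    investigation threshold when several co-occur.
    """
    score = 0.0
    if obs.age_days <= 7:
        score += 0.2  # newly created record
    if obs.hosting_matches_known_c2:
        score += 0.4  # strongest single signal
    if not obs.provider_in_vendor_footprint:
        score += 0.2  # host has no prior link to the vendor
    if obs.cert_issued_same_day:
        score += 0.2  # cert issued at infrastructure-setup time
    return score

# Hypothetical threshold: flag for analyst review, not for dismissal.
INVESTIGATE_THRESHOLD = 0.7
```

The design point is that the output routes the finding to investigation rather than generating a high-severity alert: any one signal scores well below the threshold, matching the "flagged for investigation rather than dismissal" workflow.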

Widening the scope

The next step was establishing whether this infrastructure was being used — whether any customer systems were communicating with it, and whether the legitimate vendor's systems showed any signs of compromise that would explain how attacker-controlled infrastructure was positioned as update endpoints.

On the customer environment side, we queried DNS resolution logs for the suspicious subdomain. No internal hosts had queried it in the window we could observe. That ruled out active exploitation in the customer environment but didn't resolve the broader question.
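The DNS-log check above amounts to filtering resolution records for the suspicious name or any child of it. A minimal sketch, assuming log records are dicts with `qname` and `src_ip` fields (the field names and the hostname are placeholders, not the actual tooling or vendor domain):

```python
def hosts_querying(records, target):
    """Return source IPs that resolved `target` or any subdomain of it.

    Normalizes trailing dots and case so "host.example.com." and
    "HOST.example.com" both match.
    """
    target = target.rstrip(".").lower()
    hits = set()
    for rec in records:
        qname = rec["qname"].rstrip(".").lower()
        if qname == target or qname.endswith("." + target):
            hits.add(rec["src_ip"])
    return hits
```

An empty result over the observable retention window, as in this case, rules out active communication from monitored hosts but says nothing about hosts outside the logging scope — which is why the investigation continued on the vendor side.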

On the vendor side, we ran external enumeration specifically focused on the vendor's infrastructure. This surfaced two findings. First, the vendor's legitimate software distribution domain had, over the preceding two weeks, added several new DNS records pointing to IP ranges with no prior association to the vendor. Second, the vendor's developer portal — accessible via a subdomain that had been in their DNS for two years — was running an outdated version of a collaboration platform with a known unauthenticated remote code execution vulnerability (CVE-2023-22527, a Confluence server template injection RCE with a CVSS score of 10.0 that had been actively exploited in the wild since early 2024).
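The first finding is essentially a diff of the vendor's current resolution data against a historical baseline of addresses previously associated with them. A minimal sketch of that comparison, with hypothetical data structures (a mapping of hostname to resolved IPs, and a flat set of baseline IPs):

```python
def new_unassociated_ips(current: dict, baseline_ips: set) -> dict:
    """Map each hostname to resolved IPs never before seen for this vendor.

    `current`:      {hostname: set of IPs it currently resolves to}
    `baseline_ips`: all IPs historically associated with the vendor
    Hosts whose addresses are all in the baseline are omitted.
    """
    return {
        host: ips - baseline_ips
        for host, ips in current.items()
        if ips - baseline_ips
    }
```

In practice the baseline would come from passive DNS history rather than a hand-maintained set, but the detection logic is the same: new records pointing outside the vendor's established address space are the anomaly.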

This was the plausible entry point. An attacker with access to the vendor's development infrastructure, gained through exploitation of the Confluence instance, could modify the software build pipeline and insert malicious code into update packages before signing.

The notification decision

At this stage, we had a credible hypothesis but not confirmed evidence of a compromise in the vendor's environment — because we did not have access to the vendor's internal systems. What we had was a set of external indicators consistent with a pre-stage supply chain attack: potentially compromised developer infrastructure, new distribution-adjacent subdomains on attacker-associated hosting, and a timing pattern that fit the preparation phase of a deployment.

We notified the customer with a full technical brief. The customer, in turn, contacted the vendor directly with our findings. The vendor's internal investigation confirmed that the Confluence instance had been exploited and that a threat actor had maintained access to their build environment for approximately 11 days. Build artifacts produced during that window were quarantined pending forensic analysis. The suspicious subdomain was confirmed as attacker-controlled infrastructure that had not yet been activated.

The attacker's intended deployment mechanism — modified update packages pushed to customers via the vendor's auto-update pipeline — had not been triggered. The detection window was the gap between infrastructure setup and payload activation, which our external enumeration landed in.

What made the detection possible

Three factors combined to produce an early detection that wouldn't have happened through conventional endpoint or SIEM monitoring:

Continuous external enumeration beyond your own perimeter. The initial signal came from monitoring our customer's vendor ecosystem, not just our customer's direct infrastructure. Most organizations have no visibility into the external attack surface of their suppliers. Threat actors know this and treat the supply chain as a low-detection-risk entry vector.

Infrastructure correlation over time. A single new subdomain registered to a vendor is not suspicious in isolation. The suspicion came from correlating the hosting characteristics against known adversary infrastructure patterns. That kind of correlation requires maintaining a longitudinal database of adversary infrastructure — not just current IOC feeds — and the ability to match new observations against historical patterns.

Treating external enumeration findings as potential intelligence triggers. The natural workflow for ASM findings is: new asset discovered, assess risk, add to inventory or flag for remediation. This detection required treating an unexpected vendor-side finding as a potential adversary indicator and following it analytically rather than just logging it. That requires an integrated intelligence and ASM capability, not two separate programs with separate workflows.
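The longitudinal-matching idea behind the second factor can be sketched as a lookup of a newly observed IP against historically attributed network ranges, not just a current IOC feed. Everything here is illustrative: the store layout, the actor labels, and the example ranges (which are RFC 5737 documentation addresses) are assumptions.

```python
import ipaddress
from datetime import date

# Hypothetical longitudinal store: (network, first_seen, attributed actor).
# A real store would be a database with observation history, not a list.
HISTORICAL_C2 = [
    (ipaddress.ip_network("203.0.113.0/24"), date(2022, 5, 1), "ACTOR-A"),
    (ipaddress.ip_network("198.51.100.0/25"), date(2023, 9, 12), "ACTOR-B"),
]

def historical_matches(ip_str: str):
    """Return (network, actor, first_seen) for every historical range
    containing the observed IP — including ranges long aged out of
    current IOC feeds."""
    ip = ipaddress.ip_address(ip_str)
    return [
        (str(net), actor, first_seen)
        for net, first_seen, actor in HISTORICAL_C2
        if ip in net
    ]
```

The `first_seen` date is the point: a range attributed years ago still matters for correlation even after it has dropped out of active feeds, which is why retention of historical attribution data is what makes this factor work.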

Implications for supply chain risk programs

This case illustrates a gap in how most supply chain risk programs operate. Third-party risk assessments typically evaluate vendors' security programs through questionnaires, SOC 2 reports, and periodic audits. None of those mechanisms would have surfaced the Confluence vulnerability or the new infrastructure before exploitation was underway.

External attack surface enumeration of your critical vendor ecosystem — applied continuously, not just at onboarding — gives you a signal that vendors' own security teams may not surface through internal monitoring. The adversary doesn't distinguish between your perimeter and your vendor's; neither should your detection program.
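Applied continuously, that vendor-ecosystem monitoring reduces to a recurring job: diff each enumeration snapshot against the last, then flag additions hosted outside the vendor's known address space. A minimal sketch under those assumptions (the netblocks and the injected `resolve` callable are placeholders for real enumeration and resolution tooling):

```python
import ipaddress

def flag_new_vendor_assets(prev_assets, curr_assets, vendor_netblocks, resolve):
    """Diff two enumeration snapshots of a vendor's DNS footprint and flag
    additions hosted outside the vendor's known address space.

    `prev_assets`, `curr_assets`: sets of hostnames from successive scans
    `vendor_netblocks`: CIDR strings historically associated with the vendor
    `resolve`: callable mapping hostname -> IP string (injected for testing)
    """
    nets = [ipaddress.ip_network(n) for n in vendor_netblocks]
    flagged = []
    for host in sorted(curr_assets - prev_assets):
        ip = ipaddress.ip_address(resolve(host))
        if not any(ip in n for n in nets):
            flagged.append((host, str(ip)))
    return flagged
```

Run at onboarding only, this catches nothing; run on every scan cycle, it is exactly the check that surfaced the staging subdomain in this case before the update pipeline was triggered.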