DevSec Station
DevSec Station is a security-focused podcast for software developers who want to create amazing applications. Hosted by Tanya Janca, also known as SheHacksPurple, these short lessons will help you level up.
The Anatomy of a Modern Supply Chain Attack
What if a supply chain attack didn’t start with a sophisticated exploit… but with something totally normal?
A typo.
A copy-paste.
An AI suggestion.
In this episode, Tanya Janca walks through how modern supply chain attacks actually happen, and why they’re less about “elite hackers” and more about everyday developer workflows.
You’ll learn why these attacks are not a single event, but a sequence of small, reasonable decisions that quietly introduce risk into our systems.
What You’ll Learn
- Why supply chain attacks are a process, not a moment
- How attackers exploit normal developer behaviour
- A realistic, step-by-step walkthrough of a modern attack
- Why traditional SCA approaches often fail
- How to focus on real risk instead of noise
A Realistic Attack, Step by Step
This episode walks through a common pattern seen in real-world incidents:
- An attacker identifies a package name used internally
- They publish a lookalike or typo-squatted package
- Malicious behaviour is hidden in install scripts or dependencies
- A developer installs it, often unintentionally
- The system continues working… but access is now compromised
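The lookalike/typo-squat step can be made concrete with a quick check. Below is a minimal sketch, assuming a made-up allowlist of internal package names (`acme-auth-client` and `acme-logging` are hypothetical): it normalizes dash/underscore confusion and flags any name within a small edit distance of a known internal package.

```python
# Sketch: flag dependency names that look suspiciously like internal packages.
# The INTERNAL set is hypothetical; in practice it would come from your
# private registry's package list.

INTERNAL = {"acme-auth-client", "acme-logging"}

def normalize(name: str) -> str:
    """Collapse common lookalike tricks: case and dash/underscore swaps."""
    return name.lower().replace("_", "-")

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_squat(candidate: str) -> bool:
    """True if candidate is close to, but not exactly, an internal name."""
    for internal in INTERNAL:
        if candidate == internal:
            continue  # exact match: presumably resolved from your own registry
        if edit_distance(normalize(candidate), normalize(internal)) <= 2:
            return True
    return False

print(looks_like_squat("acme_auth_client"))  # True: underscore-for-dash swap
print(looks_like_squat("acme-loging"))       # True: one-character typo
print(looks_like_squat("left-pad"))          # False: nothing like an internal name
```

A production check would also consider registry scopes and homoglyphs; this only illustrates the idea.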
Bad / Better / Best: Managing Supply Chain Risk
Bad: Ignore supply chain risk or abandon tools due to noise
Better: Run SCA and review the findings, but without reachability context or prioritization
Best: Use SCA with reachability or runtime analysis
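The "best" row boils down to one filter. A minimal sketch, assuming findings shaped like the hypothetical records below (real SCA tools emit richer output, and field names vary by vendor):

```python
# Sketch: triage SCA output down to what is worth acting on.
# The finding records are hypothetical; field names vary by SCA vendor.

def actionable(findings: list[dict]) -> list[dict]:
    """Keep only findings that are high severity AND reachable from app code."""
    return [f for f in findings if f["severity"] == "high" and f["reachable"]]

findings = [
    {"id": "CVE-2024-0001", "severity": "high", "reachable": True},   # real risk
    {"id": "CVE-2024-0002", "severity": "high", "reachable": False},  # not called
    {"id": "CVE-2024-0003", "severity": "low",  "reachable": True},   # low priority
]

print([f["id"] for f in actionable(findings)])  # ['CVE-2024-0001']
```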
If You Do Just One Thing This Week
Run an SCA tool with reachability enabled, and take action on one issue.
- Run SCA on your current project
- Filter to: high severity + reachable
- Fix one issue (remove, upgrade, or replace)
- Add one guardrail:
- Pin versions and use lockfiles
- Restrict registries
- Fail CI on high + reachable findings
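The last guardrail, failing CI only on high + reachable findings, can be sketched as a small gate script. The finding format is hypothetical; in practice you would feed it your SCA tool's actual export.

```python
# Sketch: a CI gate that blocks the build only on high + reachable findings,
# so the pipeline is noisy only when it matters. Finding shape is hypothetical.

def ci_gate(findings: list[dict]) -> int:
    """Return a process exit code: 1 if any finding should block the build."""
    blocking = [f for f in findings if f["severity"] == "high" and f["reachable"]]
    for f in blocking:
        print(f"BLOCKING: {f['id']} is high severity and reachable")
    return 1 if blocking else 0

# High but unreachable: nothing logged, build passes.
print(ci_gate([{"id": "CVE-X", "severity": "high", "reachable": False}]))  # 0
# High and reachable: build fails.
print(ci_gate([{"id": "CVE-Y", "severity": "high", "reachable": True}]))   # 1
```

In a pipeline you would end with something like `sys.exit(ci_gate(...))` so a nonzero code fails the job.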
You don’t need to fix everything. But you do need to start.
🚉 About DevSec Station
DevSec Station is a security-focused podcast for developers.
Please like and subscribe. Hosted by Tanya Janca | SheHacksPurple
If I told you a supply chain attack could start with a typo, a copy-paste, or an AI suggestion, would that surprise you? Because that's how a lot of them actually start now. Not with elite hackers using zero days, but with very normal developer behaviour.

Hi, I'm Tanya Janca, also known as SheHacksPurple. Welcome to DevSec Station, a podcast for software developers who want to build more secure software. In each episode, I'll share a short, practical lesson about secure coding, software security, and how to build safer systems without slowing development down. You can jump in at any episode, at any time. No homework required.

If you've ever installed a dependency without thinking too hard about it, trusted a tool because it looked super legit, or copied code from a place that felt safe, this one's probably for you.

When people hear "supply chain attack," they often imagine something really dramatic: nation-state hackers, zero days strung together, a huge breach that starts deep inside a production environment. But most modern supply chain attacks don't start there. They start really, really quietly, and they unfold over time. We often don't see them until it's way too late. A supply chain attack is not usually one single mistake. It's often a series of small, very reasonable-seeming steps that add up to a huge problem later.

I'd like to walk through a realistic example with you, step by step, of how this could happen, because attacks like this are happening every day.

Step one: an attacker notices a popular internal package. The name is mentioned in a public repo, a blog post, or a job posting. The attacker takes note.

Step two: the attacker publishes a package with the same name, or a very similar name. Maybe there's a typo, maybe a slightly different scope, maybe an underscore instead of a dash. Something intended to confuse.

Step three: that package includes something very small and sneaky.
Perhaps it's a post-install script, a dependency that phones home (we call this command and control), or a line of code that collects environment variables.

Then we get to step four: you, the unsuspecting software developer, install it. Or an AI assistant suggests it, or a build system resolves it automatically because the name matches. Nothing in your app breaks. The tests still pass. The app works. Except now the attacker has a copy of your credentials, or a token, or access to a build pipeline.

So notice what did not happen here. No one hacked your app. No one broke your cryptography. No one was sitting there brute-forcing a password so that you could detect it with your detection tools. This attack succeeds because the system trusted something it should not have, and because a developer did exactly what they were supposed to do: ship code. That's our job, right?

So how do we fix this? A very common bad approach is not dealing with supply chain security at all, just ignoring it. Dependencies keep piling up. You install whatever you want. Vulnerabilities pile up too, and usually no one's actually paying attention. This is not really being managed whatsoever. Or your team buys a software composition analysis (SCA) tool, runs it once, sees hundreds or thousands of findings, panics, and then quietly ignores the tool and the findings because, oh my gosh, that is way too much.

Both of these approaches fail because attackers do not care how long and noisy your report is. They only need one way in, one exploitable path, and they are in. They don't care about the rest.

A better approach is running a software composition analysis tool and trying to work through that long list. This is definitely an improvement, because at least you can see the risk and you're looking at it. But it's not great, because without reachability features, you're still underwater.
You don't know what's actually being used. You don't know what's actually exploitable. That leaves developers drowning in findings rather than focusing on serious risks, and most of those findings don't matter. That's how a software composition analysis tool becomes background noise instead of a useful security control.

The best approach is combining SCA with reachability features or runtime usage analysis (there are a lot of different names for this, depending on the vendor's marketing team), and then acting on the dependencies that pose real, genuine risk to your organization. Think of high severity plus reachable as the key thing you're looking for.

So what do these features do? They tell you if the vulnerability exists, if the vulnerable code path is actually executed, and if this is something an attacker could realistically get at, so the problem doesn't quietly come back and bite you later. When we do this, security stops creating a backlog and instead becomes a normal part of maintenance. The tool starts to support you instead of hindering you.

If you do just one thing after this episode, please do this: run a software composition analysis tool with reachability turned on, and then action the top results. And if your tool doesn't have reachability, throw it out and buy a new one. We are not trying to fix every single low-risk thing we can find. We are trying to remove the stuff that can genuinely hurt us, our customers, our colleagues, and the citizens where we live.

Step one, because obviously I'm going to give you some steps: run an SCA tool against your repo, or against whatever app you're currently working on. If your tool supports it, and I hope it does, turn on reachability, call graph analysis, or runtime usage tracking. The name of the feature changes from tool to tool, but what you want has the same outcome.
Show me the vulnerabilities that are in code paths that actually get executed. That's what we want to know.

Step two: filter those results down to high severity and reachable from within my code. This is your real list, the one I want you to start working on. That's the stuff that can be exploited, not just stuff that exists somewhere in your dependency graph. I would love it if you also looked at mediums, but I'm willing to accept just the highs for today.

Step three: take action on the top one. Pick the simplest one to fix first, and go in this order. One, remove the dependency if you don't need it. Two, upgrade it to a patched version if possible. Three, if neither of those is possible, replace it, especially if it's no longer supported or if the effort to upgrade it seems unrealistic.

Step four: prevent future you from redoing this work by adding at least one guardrail. Here is a list of guardrails. One, pin versions and use lockfiles to ensure that nothing changes on you by surprise. Pinning doesn't make dependencies safe, but it does make changes deliberate, and as you can imagine, us security nerds do not like those kinds of surprises. Two, be strict about which registries you use, and only download dependencies from trusted registries. Three (or do all of them if you want), add an SCA tool to your pipeline and set your CI to fail on high plus reachable, so it's only noisy when it actually matters.

You don't need to boil the ocean today. Just run the SCA tool once, fix one finding, and add one guardrail, and you've done a really good job. This is what I mean when I say security should be practical. You're not doing security for the sake of it; you're doing real work that keeps attackers out. Reachability can be your secret weapon because it saves you time.
It helps you focus on real problems instead of drowning in theoretical risks, often called findings.

Thanks for listening to DevSec Station. If you enjoyed this episode, please subscribe, share it with a friend, or leave a review. It helps more people discover the show. If you'd like to learn more, I'm Tanya Janca, also known as SheHacksPurple, and I teach secure coding training for software developers. You can find me online at shehackspurple.ca. Thank you for being here.