The specifics of what happens inside CI/CD pipelines are infamously opaque. Despite having written the YAML config file, the pipeline's list of instructions, how can you be certain that everything happens precisely as described? Even worse, most pipelines are entirely transient, so even after a malfunction there is little evidence beyond the logs, which may or may not include the problem's details.
Rapid development is achieved through the use of automated Continuous Integration/Continuous Delivery (CI/CD) pipelines. Having triggers or schedules that compile, build, test, and ship your code automatically is fantastic. Most pipelines, however, aren't built with security in mind, having been designed for speed and ease of use. Since pipelines typically require internet access to download dependencies and files, once a pipeline is compromised, the attacker has a variety of options to disrupt your operation or exfiltrate information or secrets.
In this article, I'll cover some of the best practices you can put in place to protect your CI/CD pipeline. For our purposes, it doesn't matter which automation tools or systems you're using – the security principles remain valid. You just need to find the right tool for the job of securing each section of your pipeline.
What Is the CI/CD (Continuous Integration/Continuous Delivery) Pipeline?
A CI/CD pipeline is an automated process for building, testing, and publishing your software, application, or artifact. These pipelines are becoming more and more commonplace and intricate. Pipelines are an excellent tool for increasing team productivity and producing software artifacts more consistently and predictably. The significance of automating these procedures becomes much more evident when you take into account that larger businesses may have hundreds of interconnected, choreographed pipelines, all of which depend on one another to function well.
Continuous integration (CI) is the practice of automatically and regularly building and testing code changes into a new end product. Continuous delivery and/or deployment (CD) is a two-step process in which code changes are delivered, tested, and integrated. Continuous deployment pushes the updates into the production environment automatically, while continuous delivery stops just before automatic production deployment. Whether your pipeline uses one or the other is entirely up to you and the way your environments and deliverables are set up.
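To make the CI/CD split concrete, here is a minimal sketch of a GitHub Actions-style workflow; the job names, make targets, and deploy script are hypothetical. The approval gate on the production environment is what separates continuous delivery from continuous deployment:

```yaml
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  build-test:                 # CI: integrate and test every change
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build       # hypothetical build target
      - run: make test        # hypothetical test target

  deploy:                     # CD: ship only after CI succeeds
    needs: build-test
    runs-on: ubuntu-latest
    # An environment configured with required reviewers acts as a manual
    # approval gate (continuous delivery); removing the gate makes this
    # continuous deployment.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh   # hypothetical deploy script
```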
The Importance of CI/CD security for your software supply chain
Most companies rely on CI/CD tools to automate their pipelines. That means that, as with many other software supply chain attacks, all bad actors need to do is breach a single target to achieve a vast blast radius. One of the key weaknesses is the pipeline's need to download and integrate dependencies into the final product or artifact. Even one bad dependency is enough to give an unwanted element a foothold in the pipeline. Since the pipeline has access to the source code and various other elements of your infrastructure (as needed), an attacker who escalates privileges can access, and later change or exfiltrate, almost any part of the product created in that particular pipeline.
A simple example can be found in our explanation of cache or dependency poisoning.
In the past few years, several large companies have suffered software supply chain attacks that had a CI/CD pipeline as their point of origin. For example, look at CircleCI's breach in January 2023, Argo CD's compromise in January 2022, and the Codecov breach in April 2021.
The potential ramifications of such attacks are severe, so it makes sense to do whatever you can to make your pipelines as secure as possible.
CI/CD Security Best Practices
Whatever CI/CD platform or tools you're using, there are a few things you can do to strengthen your security and lessen the potential damage should a hostile actor manage to gain access to your pipeline or network.
Monitoring & alerting – A breach can occur even if you have trained your engineers to be wary of phishing and other social engineering scams. Since most pipeline environments are transient, once the work is finished there won't be many traces left behind unless you actively log them. As you work through each PR, merge, build, and test, make sure that any modifications to the environment or configuration files are logged, along with the relevant user data, so the record can be examined if an issue calls for it. The aim is to be able to reconstruct a breach and determine what went wrong and how. Decide in advance which events should trigger an alert, and make sure the appropriate parties are informed. Take care not to overwhelm people with pointless or overly sensitive alerts; that leads to alert fatigue, which makes them ignore alerts or react much later than is prudent.
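A minimal sketch of the kind of structured audit record this calls for, assuming a hypothetical `audit` helper and event names; in a real pipeline the records would be shipped to external, append-only storage rather than printed, since the runner itself is transient:

```python
import json
import os
import time

def audit(event: str, **details) -> dict:
    """Emit one structured audit record for a pipeline step as a JSON line."""
    record = {
        "ts": time.time(),                          # when it happened
        "user": os.environ.get("USER", "ci-bot"),   # who triggered it
        "event": event,                             # what happened
        "details": details,                         # what exactly changed
    }
    # In a real pipeline, send this to external append-only storage;
    # here we simply print a JSON line for illustration.
    print(json.dumps(record))
    return record

# Log a configuration change during a build, so the run can be reconstructed later.
rec = audit("config_changed", file="pipeline.yml", action="modified")
```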
Use the RBAC principle combined with least privilege – Role-Based Access Control (RBAC) grants access to system resources based on a user's designated role or job function within an organization. Users are assigned roles that specify their access rights and permissions to different system resources, such as files, folders, and programs. The principle of least privilege, on the other hand, is the practice of giving users the minimal access and privileges required to carry out their job duties: users can reach the resources necessary to complete their assigned tasks and nothing more. Least privilege and RBAC are frequently applied in tandem as complementary security concepts: RBAC assigns roles that grant users the right amount of access to the resources they need, and least privilege ensures those grants are as small as possible. Together, these guidelines help maintain a well-managed, relatively safe system. As an extra layer of security, you can require multiple user authorizations for critical system actions; use this strategy carefully, since it can noticeably slow the development process.
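The combination can be sketched in a few lines; the role and permission names here are illustrative, not taken from any real system. The key property is deny-by-default: a role grants only what its job function requires, and everything else is refused.

```python
# Minimal sketch of RBAC plus least privilege: each role maps to the
# smallest permission set its job function needs, and anything not
# explicitly granted is denied.
ROLE_PERMISSIONS = {
    "developer": {"read_source", "trigger_build"},
    "release_manager": {"read_source", "trigger_build", "deploy_prod"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles get an empty permission set, so the default answer is "no".
    return action in ROLE_PERMISSIONS.get(role, set())

dev_can_build = is_allowed("developer", "trigger_build")
dev_can_deploy = is_allowed("developer", "deploy_prod")  # least privilege: denied
```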
Keep Pipeline Provenance as an Immutable Log – Provenance is verifiable information about software artifacts that describes where, when, and how something was created. A record of precisely which files entered a pipeline and what happened to them can be captured as a provenance file, forming an unfalsifiable log of that pipeline run. To be secure, provenance must be created independently of any user, as anything a user can disrupt or modify is not entirely trustworthy. Scribe's Valint enables you to establish provenance in your pipeline for a wide range of SCM systems. Each provenance file (JSON) remains accessible later, so you can review it to determine whether anything unexpected or undesirable occurred. By the way, generating and managing provenance files from throughout your pipelines is at the heart of the SLSA framework.
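To show the idea, here is a simplified sketch of generating such a record: every input file is hashed so the record can later be checked against what actually went into the build. The field names are loose simplifications inspired by SLSA-style provenance, not the actual schema Valint or SLSA emits:

```python
import hashlib
import json
import time

def make_provenance(input_files: dict, builder_id: str) -> str:
    """Produce a JSON provenance record: where/when/how, plus hashed inputs."""
    materials = [
        {"name": name, "sha256": hashlib.sha256(data).hexdigest()}
        for name, data in sorted(input_files.items())
    ]
    statement = {
        "builder": builder_id,    # where/how the artifact was built
        "built_at": time.time(),  # when it was built
        "materials": materials,   # exactly which inputs went in
    }
    return json.dumps(statement, indent=2)

# Hypothetical builder id and input file, for illustration only.
prov = make_provenance({"app.py": b"print('hi')"}, builder_id="ci-runner-01")
```

Because each material is identified by its digest, tampering with an input after the fact produces a mismatch that the record exposes.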
Fully utilize your SBOM – In case you missed a few of its potential uses: an SBOM generated at the end of the pipeline lists all the open-source packages used. Comparing that list against known CVEs tells you what potential vulnerabilities exist in your final product. You can also use the list to check whether you're still using outdated versions of open-source packages, and even use something like the OpenSSF Scorecard to check the 'health' of the packages you rely on. Since new CVEs constantly come to light, you should use a continuous monitoring service, rather than a one-time SAST scan, that alerts you when a new CVE is discovered in one of your existing packages. Scribe's service can help you do all of that automatically.
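The SBOM-to-CVE comparison reduces to a lookup per package. The package names, versions, and CVE ids below are invented stand-ins, and real SBOMs (SPDX, CycloneDX) and vulnerability feeds are far richer, but the matching logic is the same:

```python
# Cross-check an SBOM's package list against a known-vulnerability feed.
# All data here is illustrative, not real packages or CVEs.
sbom_packages = [
    {"name": "libfoo", "version": "1.2.0"},
    {"name": "libbar", "version": "0.9.1"},
]

# (package, version) -> list of CVE ids affecting that exact version
known_cves = {
    ("libbar", "0.9.1"): ["CVE-2024-0001"],
}

def scan(sbom):
    """Return every (package, version, cve) hit found in the SBOM."""
    findings = []
    for pkg in sbom:
        for cve in known_cves.get((pkg["name"], pkg["version"]), []):
            findings.append((pkg["name"], pkg["version"], cve))
    return findings

hits = scan(sbom_packages)
```

Re-running this scan whenever the feed updates, rather than once at build time, is what catches CVEs disclosed after you shipped.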
Verify compliance with your policies – Each company, and sometimes each pipeline, has policies that must be followed to make sure everything is in order. Some policies are generic (like requiring a two-person verification process), and others are unique (like making sure that Mike signs off on the latest change before it ships to production). Using a cryptographic sign-verify mechanism and a dedicated policy file, you can now attach the required policies to each pipeline and verify (to yourself and others) that they were followed. It's a human weakness that, under deadline pressure, some requirements get skipped and some rules get bent. With this measure in place, people can no longer bend the rules, which helps protect your pipeline from both inside and outside threats. Scribe has developed a novel way to enforce such policies and even allows you to write your own. Check it out here.
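A policy check like the two examples above can be expressed as a small predicate evaluated before shipping; the function name and approver names are hypothetical, and a real system (such as Scribe's) would verify signed evidence rather than a plain set of strings:

```python
def verify_policy(approvals: set, required_approvers: int = 2,
                  must_include: frozenset = frozenset()) -> bool:
    """Generic rule: at least N distinct approvals.
    Unique rule: specific named sign-offs must be present."""
    return len(approvals) >= required_approvers and must_include <= approvals

# The generic two-person rule passes...
ok_generic = verify_policy({"alice", "bob"})
# ...but the unique "Mike must sign off" rule fails for the same approvals.
ok_named = verify_policy({"alice", "bob"}, must_include=frozenset({"mike"}))
```

Gating the deploy step on this predicate is what turns a written policy into one that cannot be quietly bent under deadline pressure.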
Secure The Pipeline's Instruction File – Threat actors may "poison" the CI pipeline using a technique known as poisoned pipeline execution (PPE), which modifies the pipeline stages or their sequence as originally specified in the pipeline instruction file. The technique manipulates the build process by abusing permissions in source code management (SCM) repositories. By inserting malicious code or commands into the build pipeline configuration, an attacker can cause malicious code to execute while the build runs. You won't be able to tell that your builds aren't operating as intended unless you check the pipeline instruction file. To be sure your pipelines run as you intended, verify the instruction file before each run. Cryptographically signing the file and adding signature verification as the first step of the pipeline is one way to achieve that. Scribe's Valint sign and verify functions are one way to confirm that your instruction file has remained unaltered before you initiate any new pipeline run.
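The check itself is simple to sketch. This example uses a plain SHA-256 digest for brevity; a real deployment would use a cryptographic signature with the key held outside the repository (as Valint does), since an attacker who can edit the file could also edit a co-located digest. The file name and injected step are hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Digest of a file's exact bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Record the trusted digest when the instruction file is committed...
with open("pipeline.yml", "w") as f:
    f.write("steps:\n  - build\n  - test\n")
trusted = sha256_of("pipeline.yml")

# ...and re-check it as step zero of every run: an unmodified file passes.
assert sha256_of("pipeline.yml") == trusted

# A PPE-style injected step changes the bytes, so verification fails.
with open("pipeline.yml", "a") as f:
    f.write("  - exfiltrate\n")
tampered_ok = sha256_of("pipeline.yml") == trusted
```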
Secure Your End Result – Why should an attacker work hard to disrupt your pipeline when replacing your final product with a fraudulent version is much easier? Since the producing company appears to be the source of the image in this type of attack, a valid certificate protecting the download is insufficient; if anything, it lends the fake extra credibility. The solution is to cryptographically sign whatever final artifact the pipeline produces and allow the end user to verify that signature. Scribe's Valint can sign and verify a large variety of artifacts, giving you the extra assurance that your users are getting exactly what you intended for them to get.
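As a minimal illustration of the sign-then-verify principle, here is an HMAC-based sketch. Note the simplification: HMAC is symmetric, so verifiers would need the secret key; real artifact signing (e.g., Valint or Sigstore-style tooling) uses asymmetric keys so end users verify with only a public key. The key and artifact bytes are placeholders:

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # placeholder; in reality a key held in a KMS, never in the repo

def sign(artifact: bytes) -> str:
    """Produce a MAC over the artifact's exact bytes."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str) -> bool:
    """Constant-time comparison against a freshly computed MAC."""
    return hmac.compare_digest(sign(artifact), signature)

artifact = b"final-build-output"
sig = sign(artifact)

genuine = verify(artifact, sig)                  # the real artifact verifies
fake = verify(b"trojaned-build-output", sig)     # a swapped-in fake does not
```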
Looking To The Future
Nobody is going to quit using automation techniques like CI/CD to expedite their work. Quite the contrary, in the world we live in, we are always pushing for ever-faster software update iterations. We should, at the very least, make sure that we approach the task with caution, taking care not to jeopardize our production environment or our source code in the process.
The crucial thing is to consider the potential consequences of someone gaining unauthorized access to your pipeline, your environment, or your source code. I'm sure you'll be able to take the appropriate action to stop or mitigate potential leaks once you realize how dangerous it could be and where your pipelines and network are most susceptible.
As interconnected pipelines are only going to increase in complexity, it's vital to maintain your overall environment security (segmented network, RBAC, zero trust, etc.) as the first step in protecting your pipelines. After that, look to create solid, unfalsifiable evidence and employ cryptographic signing and verification of data to mitigate as much as possible the potential of a software supply chain attack that might poison your pipeline or pipeline's cache. Staying alert and suspicious could save your company untold headaches.