Using the 3CX Desktop App Attack To Illustrate the Importance of Signing and Verifying Software


In late March 2023, security researchers disclosed a complex software supply chain attack on business communication software from 3CX, specifically the company’s voice- and video-calling desktop app. The researchers warned that the app had been trojanized and that using it could expose an organization to a possible data exfiltration scheme by the threat actor. The attack was dubbed ‘Smooth Operator’, and there is some evidence suggesting it had been underway for months.

So what exactly happened, how does using this trojanized version put you at risk, and how could the attack have been prevented by signing and verifying software?

First Things First: What is 3CX?

3CX is a software-based, open-standards IP PBX (Private Branch Exchange) that replaces a traditional hardware PBX. It is designed to let businesses make and receive calls using VoIP (Voice over Internet Protocol) technology, which transmits voice communications over the Internet. 3CX also includes advanced features such as video conferencing, presence, and instant messaging, and can be deployed on-premises or in the cloud. The desktop app is available for Windows, macOS, and Linux; the client is also accessible in the browser through a Chrome extension, has a PWA version, and is available as a mobile application for Android and iOS devices.

You can get some idea of the potential impact of a software supply chain attack from the 3CX website, which boasts 600,000 customer companies and over 12 million daily users.

A Sneak Peek Into the Attack: What You Need To Know

This is a little complex so we’ll break it down into steps:

  1. You download a trojanized version of the desktop app or you already have it installed and simply update it with a trojanized version.
  2. The 3CXDesktopApp.exe executable loads a malicious dynamic link library (DLL) called ffmpeg.dll.
  3. The ffmpeg.dll is used to extract an encrypted payload from d3dcompiler_47.dll and execute it.
  4. The malware then downloads innocent-looking icon files hosted on GitHub that contain Base64-encoded strings appended to the end of the image data (see the sketch after this list).
  5. That encoded data is then decoded and used to download another stage, containing the encrypted C&C server that the backdoor connects to in order to retrieve the possible final payload.
  6. In the final phase, the info-stealer functionality kicks in, gathering system information and browser data from the Chrome, Edge, Brave, and Firefox browsers. This can include querying browsing history from the browsers’ history databases (the Places table in Firefox’s case, and potentially the History table in the Chromium-based browsers).
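To make steps 4 and 5 concrete, here is a minimal sketch of the ‘Base64 appended after the image’ trick. The file name and the idea of scanning backwards for a trailing Base64-looking run are illustrative assumptions for this post, not the malware’s actual parsing logic:

```python
import base64
import string

# Characters that may appear in Base64 text.
B64_CHARS = set((string.ascii_letters + string.digits + "+/=").encode())

# Hypothetical local copy of one of the downloaded icon files.
data = open("icon0.ico", "rb").read()

# Walk backwards from the end of the file while the bytes still look like
# Base64, isolating the tail appended after the legitimate image data.
start = len(data)
while start > 0 and data[start - 1] in B64_CHARS:
    start -= 1
tail = data[start:]

# Trim any leading bytes that keep the tail from being a multiple of 4,
# then decode to recover the next-stage data described in the write-ups.
tail = tail[len(tail) % 4:]
payload = base64.b64decode(tail)
print(f"recovered {len(payload)} bytes of appended data")
```

Because the appended blob is plain ASCII riding behind a perfectly valid image, casual inspection of the icon shows nothing wrong, which is exactly why the attackers chose this hiding spot.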

Initially, 3CX downplayed the attack, but it later acknowledged the threat and advised customers to uninstall the app and reinstall it following the company’s specific instructions, or to use the PWA version in the meantime while the company untangled the incident and mitigated it.

Another very important factor to keep in mind is that the trojanized binaries appear to carry a valid code signing certificate. Well, not exactly: the attackers actually exploited a known vulnerability, CVE-2013-3900 (originally published in 2013, with its guidance updated in 2022 and again following this incident), which allows extra data to ride along in a signed executable without invalidating its Authenticode signature, making the binaries appear legitimately signed.
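For illustration, here is one way a defender might heuristically probe for this class of abuse: since CVE-2013-3900 concerns unauthenticated bytes smuggled into a signed PE file, a rough first check is whether anything sits in or beyond the file’s Authenticode certificate table that the signature does not cover. This sketch uses the third-party pefile package; the file name is hypothetical, and this is a heuristic illustration, not the researchers’ actual detection logic:

```python
import pefile  # third-party: pip install pefile

# Hypothetical local copy of the executable under inspection.
pe = pefile.PE("3CXDesktopApp.exe")

# This data directory entry describes the Authenticode certificate table.
sec = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
    pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]
]

if sec.Size == 0:
    print("File carries no Authenticode signature at all.")
else:
    # For this directory, VirtualAddress is a raw file offset, not an RVA.
    cert_end = sec.VirtualAddress + sec.Size
    file_size = len(pe.__data__)
    if file_size > cert_end:
        print(f"{file_size - cert_end} unauthenticated bytes trail the "
              "certificate table - suspicious.")
    else:
        print("No trailing bytes; note that CVE-2013-3900 abuse can also "
              "hide data in padding inside the certificate table itself.")
```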

Déjà Vu: This Has Happened Before

If this story of a trojanized 3CX version sounds familiar, it’s because it has happened before.

In the current case, it is unclear whether an upstream open-source library the company uses became infected or whether an attack breached the company’s development environment directly.

In other famous attacks, from ‘Kingslayer’ (2016) to ‘CCleaner’ (2017), ‘VestaCP‘ (2018), ‘WIZVERA VeraPort’ (2020), and all the way up to ‘SolarWinds’ (2020), it has been common threat actor practice to compromise a company’s servers, its build environment, or its actual downloadable executables. After all, disguising something dangerous as something trustworthy is a great way to get people to download your payload.

That is part of the definition of a software supply chain attack: the attackers compromise the software supply chain to distribute malicious software to a large number of victims. In each of these famous cases, the attackers were able to inject malicious code into legitimate software packages, which were then distributed to users. They were often able to do this by compromising a trusted piece of the vendor’s infrastructure, such as a software update server or a code signing certificate.

By getting unsuspecting customers to download a modified version of a legitimate application, the attackers can hide almost anything inside.

And here’s the main problem: ‘unsuspecting’. After all, the executable, binary, or image came from the company that created it, apparently approved by it, and it even carries a signature backed by a certificate. What more can a customer do? Call the company to verify each update? Scan the code (if available) for back doors? That is preposterous and unrealistic. But there is something that can be done.

How Can You Add a Layer of Trust Beyond a Certificate? 


The proposed model is fairly simple and is based on the same idea as that of code signing certificates. A code signing certificate is a digital certificate issued by a third party that is used to digitally sign software or code. When software is signed with a code signing certificate, it allows users to verify the authenticity and integrity of the software before installing or executing it.

Code signing certificates are issued by trusted third-party certificate authorities (CAs), which verify the identity of the software publisher before issuing the certificate. The publisher then uses the private key bound to that certificate to create a digital signature of the software, which is included in the signed code. When a user attempts to install or execute the software, their system checks the digital signature to ensure that it matches the software’s current contents. If it does, the software is considered authentic and is deemed not to have been tampered with since it was signed.

This system is based on public-key cryptography, also known as asymmetric cryptography – a method of cryptography that uses two different keys, a public key and a private key, to encrypt and decrypt data. In the context of code signing, a private-public key pair is used to sign software and code.

In this process, the software publisher generates a private-public key pair, where the private key is kept secret and the public key is made available to others. The publisher then uses the private key to create a digital signature of the software or code they wish to sign. This digital signature is produced by running the software through a cryptographic hash function and then encrypting the resulting hash value with the publisher’s private key.
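As a concrete illustration, here is a minimal sketch of that signing step using Python’s cryptography library. The key size, padding scheme, and file names are assumptions made for this example, not anyone’s actual release process:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The publisher generates a key pair once; the private key stays secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Sign the release artifact: hash it and encrypt the digest with the private key.
artifact = open("release.bin", "rb").read()  # hypothetical artifact name
signature = private_key.sign(artifact, padding.PKCS1v15(), hashes.SHA256())

# Publish the detached signature and the public key alongside the artifact.
with open("release.bin.sig", "wb") as f:
    f.write(signature)
with open("publisher_pub.pem", "wb") as f:
    f.write(private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    ))
```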

When a user downloads the signed software or code, their system uses the software publisher’s public key to decrypt the digital signature and verify that it matches the hash value of the downloaded software or code. If the digital signature is valid, then the user can be confident that the software or code has not been tampered with since it was signed by the software publisher.
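And the matching verification on the user’s side, continuing the same sketch:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the publisher's public key and the downloaded artifact + signature.
public_key = serialization.load_pem_public_key(
    open("publisher_pub.pem", "rb").read()
)
artifact = open("release.bin", "rb").read()
signature = open("release.bin.sig", "rb").read()

try:
    # verify() recomputes the artifact's hash and raises if it does not
    # match the one recovered from the signature.
    public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: artifact unmodified since signing.")
except InvalidSignature:
    print("Signature INVALID: do not install this artifact.")
```

Note that this whole check is a few milliseconds of computation, which is what makes the ask in the next paragraph realistic.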

Based on this simple concept, the proposed remediation is to sign every new release, binary, and image directly with the company’s key or the build pipeline’s key, and to ask users to verify the signature whenever they download or update the software.

Of course, things aren’t always that simple. If the bad actors have infiltrated the build server, then signing the build there is already pointless. If the key infrastructure has been compromised, the whole exercise is likewise pointless.

But asking users to verify a signature, something fast and easy that can be done automatically, is a small price to pay to help prevent the next software supply chain attack.

But wait, you might be saying, what if an upstream open-source library is actually the source of the contamination? In such a case, signing the build is, again, pointless, since the compromised code is ‘built in’.

This is where we need to start considering an ecosystem of trust based on signing artifacts and verifying those signatures. If these open-source packages were signed, and the signatures were verified when the packages were incorporated into the company’s code, the likelihood of such a breach would be reduced.
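As a sketch of what such an intake gate could look like, the check below refuses to let a third-party package into the build unless its detached signature verifies against the maintainer’s published key. All names and paths here are hypothetical, and real ecosystems (Sigstore, for instance) offer richer, keyless variants of the same idea:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def dependency_is_trusted(tarball: str, sig: str, pubkey_pem: str) -> bool:
    """Return True only if the package verifies against the expected key."""
    key = serialization.load_pem_public_key(open(pubkey_pem, "rb").read())
    try:
        key.verify(
            open(sig, "rb").read(),
            open(tarball, "rb").read(),
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

# Fail the build, not just warn, when verification fails.
if not dependency_is_trusted("libfoo-1.2.3.tar.gz",
                             "libfoo-1.2.3.tar.gz.sig",
                             "libfoo-maintainer.pem"):
    raise SystemExit("Dependency failed signature verification; aborting build.")
```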

Where Scribe Comes In

Scribe has implemented a tool called Valint that enables you to sign and verify files, folders, and images. Without requiring you to maintain a complicated PKI system, the tool takes the novel approach of using your already-established verified identity (your Google, Microsoft, GitHub, or AWS identity, for example) to sign the desired artifact. You can later use the same tool to verify that the artifact was signed and to see which identity signed it.

Let’s say your build pipeline produces a container image as its final artifact. Right after that image is created, you should sign it and upload the signed version to the repository from which your clients download it. Once signed, the image can no longer be modified without invalidating the signature. Anyone who wants to can check both that the image is signed and that the signing identity matches what the company published.

This tool is only part of the capabilities you get by implementing the Scribe SaaS solution for your organization. With the objective of improving both your software supply chain security and your overall transparency, there is every reason to go and check out what Scribe can offer you.


This content is brought to you by Scribe Security, a leading end-to-end software supply chain security solution provider – delivering state-of-the-art security to code artifacts and code development and delivery processes throughout the software supply chains. Learn more.