Safari Adventure
In a recent research project, we focused on three CVEs that Black Lantern Security operators frequently encounter in customer environments. Leveraging BBOT as our primary tool, we set out to identify and enumerate these vulnerabilities across the internet. In this article, we’ll share the journey of our exploration—tracking down these “herds” of technologies, overcoming challenges along the way, and uncovering key insights during our adventure.
Background
One of the most compelling use cases for BBOT is its ability to move directly from the discovery phase to identifying exploitable vulnerabilities in a single step. In many cases, it can quickly pinpoint serious issues with minimal, non-intrusive checks. This is largely due to powerful modules unique to BBOT, such as Badsecrets.
For large-scale internet vulnerability scanning, Nuclei is often the first tool people think of, and for good reason. We have a lot of respect for Nuclei and see it as an essential part of the infosec toolkit. However, some vulnerabilities are simply too complex to be handled within the limits of a Nuclei template, and this is where BBOT really shines.
A number of these vulnerabilities have a habit of resurfacing and have appeared repeatedly in BLS customer environments. This got us wondering: just how widespread are these issues across the internet? While testing for a specific customer is manageable, performing the same checks at internet scale is a much bigger challenge.
We have already talked about how BBOT can explode with results if you are too inclusive with your targets; with a few careless configuration settings, those results could quickly become the entire internet. Recursion is BBOT’s secret ingredient, but it can also spiral into the unending depths of the internet if not carefully controlled and limited.
With this in mind, we began a research project to capture the percentage of internet-facing systems vulnerable to exploitation through some of the most prevalent unauthenticated web vulnerabilities we discover, focusing on those that are readily identifiable using BBOT’s built-in modules.
To do so, we knew we needed to focus our efforts on these specific technologies. But how do you go about compiling a list of those? Thankfully, sites like Shodan.io, BuiltWith, WhatRuns, and Ful.io have already done much of the heavy lifting. These platforms catalog and inventory externally facing web technologies, providing us with a comprehensive starting point to target the specific technologies associated with the vulnerabilities we were researching.
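As a rough illustration of how such a list might be seeded (this is not our exact method; the API key, query string, page count, and output file name below are all placeholders), Shodan’s Python library can pull candidate hosts for a given technology:
import shodan

# Seed a candidate target list from Shodan. BuiltWith, WhatRuns, and Ful.io
# would be queried separately and merged. Key, query, and file name are
# placeholders for illustration only.
api = shodan.Shodan("YOUR_API_KEY")
hosts = set()
for page in range(1, 3):  # first couple of result pages only
    results = api.search('http.component:"Telerik Web UI"', page=page)
    for match in results["matches"]:
        hosts.update(match.get("hostnames", []))

with open("telerik_targets.txt", "w") as f:
    f.write("\n".join(sorted(hosts)) + "\n")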
The Elephant
We focused our efforts on three major web technologies:
Telerik: A suite of UI components and tools for building web applications, most commonly used with .NET and JavaScript frameworks.
DotNetNuke: An open-source content management system and web application framework for building and managing websites on the .NET platform.
AjaxPro: A third-party library that enables AJAX calls to server-side methods in ASP.NET applications.
We used BBOT’s modules to validate each web technology and then detect whether the specific CVE existed on the website:
Telerik: the telerik module (CVE-2017-9248)
DotNetNuke: the dotnetnuke module (CVE-2017-9822)
AjaxPro: the ajaxpro module (CVE-2021-23758)
Using the aforementioned services for inventorying web technologies across the web, we set out to catalog each site associated with the technology and began our analysis. A few caveats need to be defined before we continue.
Caveats
We used the services mentioned to generate a list of sites using the particular web technology. This is not all-inclusive and does not account for custom implementations of the technology.
We did not perform directory brute forcing or conduct any other in-depth discovery efforts to find custom endpoints of the web technology. All technologies were assumed to be in their default install locations/configurations.
We did not do any additional analysis on the sites beyond BBOT’s default detection mechanisms. Operators can choose to do additional scanning with BBOT’s recursion engine; however, this was outside the scope of our research.
All scanning was conducted passively using BBOT and its modules, which simply browsed publicly accessible web pages to identify version information and technology fingerprints. No intrusive or active exploitation techniques were used. While we developed some custom tooling to assist in validating version-based vulnerabilities, these tools operated without interacting with the sites in any harmful or unauthorized manner. We will not be releasing the specific methods or technical details used for validation.
Our approach focused on targeting the most easily identifiable and vulnerable web technologies—the sickest and weakest of the attack surface herd. These were systems that could be quickly observed and validated without the need for extensive analysis or additional tools beyond the BBOT scan.
Finding the Elephant
To validate each web technology, we first needed to execute a BBOT scan that would take the list of targets and check them against the detection logic defined in each module. For example, the most common Telerik endpoint is Telerik.Web.UI.DialogHandler.aspx, which, combined with the detection logic in the module, validates that Telerik is actually present. Using this as our indicator, we could let the BBOT module do the rest of the work for us.
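For illustration only (this is not BBOT’s module code), a minimal probe for this endpoint might look like the sketch below; the indicator string is an assumption based on publicly documented DialogHandler behavior, and the helper is hypothetical:
import requests

def has_dialoghandler(base_url: str) -> bool:
    # Probe the default DialogHandler location with a junk "dp" parameter.
    # A live handler typically responds that it cannot deserialize the
    # dialog parameters, which doubles as a fingerprint.
    url = f"{base_url.rstrip('/')}/Telerik.Web.UI.DialogHandler.aspx"
    try:
        resp = requests.get(url, params={"dp": "1"}, timeout=10)
    except requests.RequestException:
        return False
    return "Cannot deserialize dialog parameters" in resp.text

print(has_dialoghandler("https://example.com"))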
Problem 1
The first problem we encountered was the sheer size of the elephant we were trying to identify. Typical BBOT scans start with a high-level target (e.g., example.com) and then use the recursion to find other in-scope assets. BBOT is designed to discover its own additional targets. We may manually provide some additional domains if we have them, but providing thousands of domains is not typical for most BBOT use cases.
With Telerik UI alone, we inventoried over 120,000 different sites that reported using this technology. To cover them all, we had to execute 12 different scans on this web technology alone. By taking smaller bites, we were still able to consume the meal, albeit at a slower pace. One of the magical things about BBOT is its recursion, but this magic can be a double-edged sword. At the time this research was being conducted, there was a 10,000-domain limit per scan in place; however, this has since been fixed in a recent revision. [1][2]
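Carving the inventory into scan-sized batches is straightforward; here is a rough sketch of that kind of splitting (the file names are ours, and the batch size matches the 10,000-domain limit in place at the time):
from pathlib import Path

# Split a large target inventory into 10,000-line batch files, one per
# BBOT scan. File names are illustrative.
BATCH_SIZE = 10_000
targets = Path("telerik_targets.txt").read_text().splitlines()

for i in range(0, len(targets), BATCH_SIZE):
    batch = targets[i : i + BATCH_SIZE]
    Path(f"telerik_batch_{i // BATCH_SIZE:02d}.txt").write_text("\n".join(batch) + "\n")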
Problem 2
BBOT’s magic lies within its recursive capabilities, which enable the discovery of an organization’s vast digital landscape by repeatedly expanding on initial targets. Starting with a relatively small target list, BBOT can use the modules and recursion to uncover an expansive surface hidden to the untrained eye.
The recursion talk linked above discusses this process in more depth. We knew that our target list was already larger than most scan results, and that recursion can blow up scans, both in the data returned and the time to completion.
A recent update to BBOT made this easier with the introduction of Presets. BBOT’s Presets feature allows us to tailor the scope and focus of our scans by selecting specific modules and configuring target discovery parameters. This let us limit a scan with predefined web-technology modules to the discovered targets, ensuring a more efficient and targeted exploration. An example preset YAML file that could be used:
config:
  scope:
    strict: true
modules:
  - portscan
  - telerik
output_modules:
  - json
  - txt
This would force BBOT to keep a strict scope of only the targets listed in the target file, isolate just the portscan and telerik modules, and output to JSON and TXT formats. This solved our problem of trying to eat the entire herd of elephants instead of just the specific elephant we were after.
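With the preset saved as, say, telerik.yml (the file name is ours), each batch file from Problem 1 can then be fed to BBOT with a command along the lines of bbot -t telerik_batch_00.txt -p ./telerik.yml, where -t supplies the targets and -p loads the preset.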
Problem 3
Besides the hard limits imposed by the size of the elephant, and making sure we stuck to our specific target elephant, we also had to deal with the hardware requirements for the survey. Typically, a BBOT scan doesn’t require more than 2 GB of memory and 2 CPUs. A VM of this size can easily accomplish the vast majority of discovery work when the targets number under 1,000 and a moderate selection of modules is used. However, when targeting web technologies as pervasive as the three we were looking at, a VM of this size just doesn’t have enough juice.
For this research, we used a VM with 4 vCPUs and 8 GB of memory, with 4 GB of swap space. The larger resource allocation allowed us to run the (majority of the) scans to completion. Most scans took an average of 1 hour and 30 minutes to complete. For comparison, when we work with our enterprise customers for our Attack Surface Management (ASM) service and execute intense scans, those can last well over 8 hours (depending on the modules used and configurations set).
Problem 4
Another issue that can arise with intense scans is ending up on a deny list. If the target uses a reputation-based service or WAF such as BrightCloud or Cloudflare, our scans can be blocked, yielding a false negative. BBOT has built-in features that allow it to run in an agent mode, enabling decentralized scanning infrastructure. Our roadmap for an upcoming release includes our I/O feature set, which will extend this capability.
We did not attempt to resolve this issue; if a WAF was present and blocked our detection mechanism, we simply moved on to the next target.
Eating the Elephant
Now that we had all of our targets, configuration, and assets ready, we could finally head out on this safari. First, we used our preset configuration and a subset of our target list and began enumeration.
Telerik Herd
Background
CVE-2017-9248 is a cryptographic weakness in Telerik UI for ASP.NET AJAX DialogHandler, allowing attackers to gain access to a file manager utility that supports arbitrary file uploads, often resulting in remote code execution (RCE). The vulnerability arises from an information leak in error messages during the decryption of Telerik “DialogParameters,” a set of encrypted configuration values echoed back to the server as user input.
Attackers can exploit these error messages to systematically deduce the Telerik.Web.UI.DialogParametersEncryptionKey. With this key, they can decrypt and re-encrypt the parameters, gaining unauthorized access to the file upload utility, which they can then abuse to upload and execute files on the server.
Discovery
Targeting the Telerik software first, we kicked off our initial segment scan to identify endpoints reported by the list we generated. Out of the first 10,000 assets, we were only able to discover 567 “DialogHandler” endpoints, roughly a 5.7% true positive rate. While other Telerik endpoints that may be associated with other CVEs were discovered, we focused on this specific endpoint for our research. Of the 567 endpoints we validated, only 31 were found to be vulnerable, approximately 5.5% of the total validated endpoints. This is still a fairly large number for an 8-year-old vulnerability.
Extending this logic to the rest of the target list yielded 7,635 total sites that had the “DialogHandler” endpoint, of which 1,291 were still vulnerable to the CVE. In other words, approximately 17% (roughly one in six) of the publicly accessible Telerik sites exposing this endpoint could be exploited to allow unauthorized file uploads. The discovery of Telerik endpoints in assessments is always exciting for BLS operators, as it often presents a high-probability, low-effort opportunity for success.
One specific consideration regarding the Telerik herd is that these sites could also be vulnerable to multiple other Telerik CVEs (e.g., CVE-2017-11317, CVE-2019-18935, CVE-2024-1801). However, these additional CVEs were not in our scope to assess. Additionally, not all Telerik sites have the “DialogHandler” endpoint enabled, and not all installations of Telerik use the default locations; this was evident in the gap between the number of sites reported as using Telerik and the true positive endpoints discovered (12,755/122,698; roughly 10%).
DotNetNuke Herd
Background
CVE-2017-9822 is a deserialization vulnerability in DotNetNuke (DNN). It affects versions 5.0.0 through 9.3.0 and can lead to RCE. The vulnerability is tied to the DNNPersonalization cookie, which is used to store personalization settings for anonymous users.
Exploitation occurs when custom application code (or pages, such as custom 404 error pages, a common default) processes the DNNPersonalization cookie without properly validating its content. This allows an attacker to craft a malicious serialized object, embed it in the cookie, and trigger its deserialization on the server.
Discovery
For the DotNetNuke web technology, we had to use custom detection tooling (which we will not be releasing) to validate the vulnerability. The default BBOT module performs a benign proof-of-concept validation that nonetheless executes code on the server, so it should only be run with authorization. We obtained a total of 59,702 sites from the list. The first scan found 794 of the first 10,000 sites vulnerable to the CVE (roughly 8%, higher than the rate from the first Telerik scan).
Expanding this scan to the rest of the cataloged sites resulted in 4,485 vulnerable sites running DotNetNuke; overall, 7.5% of the listed sites were vulnerable to the CVE, allowing for code execution. Again, this comes with the caveat that the BBOT module only examines default exploitable locations within DotNetNuke; the CVE often manifests through custom pages, but no additional analysis was performed against the sites. Across all of the scans, DotNetNuke was positively observed on a total of 48,741 sites, and of those, 9.2% were vulnerable to the CVE.
AjaxPro Herd
Background
CVE-2021-23758 is a critical vulnerability in Ajax.NET Professional (AjaxPro) versions prior to 21.11.29.1 that allows attackers to achieve RCE. The issue arises from the framework’s deserialization process, which fails to adequately validate user-provided JSON data. Attackers can craft malicious payloads that contain specially formatted type information to exploit this weakness, enabling them to execute arbitrary code on the server.
A key aspect of this vulnerability is its unauthenticated nature, which makes it particularly easy to exploit. Many versions of AjaxPro ship with a class, ICartService, that is often enabled by default and exposes a method that accepts arbitrary objects. The combination of a default exploitable class and a lack of authentication greatly increases the risk to applications using the vulnerable framework.
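As an aside, once a version string is in hand, a version-gated check like this one reduces to a simple comparison against the fixed release. The sketch below is hypothetical and deliberately omits how the version would be obtained from a target:
# Hypothetical version gate for CVE-2021-23758: AjaxPro releases prior to
# 21.11.29.1 are treated as vulnerable. Obtaining the version string from
# a live site is out of scope here.
FIXED_VERSION = (21, 11, 29, 1)

def is_vulnerable(version: str) -> bool:
    # "21.10.30.1" -> (21, 10, 30, 1); tuples compare element-wise
    parts = tuple(int(p) for p in version.strip().split("."))
    return parts < FIXED_VERSION

print(is_vulnerable("21.10.30.1"))  # True
print(is_vulnerable("21.11.29.1"))  # False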
Discovery
The final herd we went after on our safari was AjaxPro. Again, for this vulnerability we had to develop custom tooling to validate that a site was vulnerable. For AjaxPro, our list contained 12,036 sites running the software, definitely a smaller scale compared to the first two herds.
Out of the 12,036 sites, 1,755 were confirmed to be vulnerable to the CVE allowing deserialization; in other words, roughly 14% of all the AjaxPro sites cataloged were vulnerable to this CVE. Of the herds that required custom validation tooling, this one had the highest false-positive rate. A large caveat for this web technology is the custom install locations we often observed; the other two herds are more often deployed in their default locations and configurations. We’ll be looking for ways to improve the module’s detection accuracy in the future.
After-Meal Thoughts
After completing our research, we were surprised by the alarmingly high percentages of true positive vulnerabilities identified in these web technologies. It is concerning that despite some of the CVEs dating back as far as 2017, double-digit or near-double-digit percentages of hosts remain vulnerable for each technology. Combining these vulnerabilities with supplementary discovery methods greatly increases the likelihood of identifying additional exploitable weaknesses.
While we specifically and carefully enumerate our customers’ organizations and businesses for our ASM service, for this exercise, we adopted a strategy much closer to the way a real attacker’s campaign would be structured. If an attacker’s goal is just to find as many vulnerable systems as possible, we have demonstrated how they could leverage the numerous online services that exist solely to identify and track the technologies used by websites. Once a specific technology is identified, vulnerabilities associated with it become much easier to exploit, especially for older technologies with CVEs that have publicly known exploits. Real threat actors are conducting this research constantly, looking for systems to exploit.
This highlights the need for any company to leverage a robust ASM service, like the one we provide, to continuously monitor and assess their digital footprint. While Black Lantern’s ASM service does conduct these scans and discover these vulnerabilities, we take it one step further, and our analysts conduct in-depth analyses of attack surfaces. By implementing an ASM program, a business can identify and mitigate these risks before they are exploited.
At Black Lantern Security, we understand the importance of staying ahead of emerging threats. That’s why our enterprise ASM service, powered by BBOT, continuously monitors your attack surface for the latest vulnerabilities and provides proactive coverage against emerging threats. Start protecting your organization today by signing up for our ASM service. Contact us now to get started and secure your digital footprint!
1. This limitation was solved in a recent pull request that removed the YARA limitations. For targeted scans, a --fast option was also implemented as a way to run these scans without doing a full enumeration scan.
2. Another recent improvement for memory optimization was also pushed to help with larger scans.