70% of successful breaches are perpetrated by external actors whose attacks originate on the internet. Since these actors don’t have access to your organization’s internal assets or networks, they rely on data available on the internet. With 8.5 billion records compromised in 2019 alone, adversaries can find an employee’s credentials or your organization’s API keys within a few hours, allowing them to infiltrate your organization, spread malware and ransomware, or steal intellectual property and sensitive documents.
Apart from the direct operational impacts, cyber-attacks also affect an organization’s hard-earned reputation and revenue. Snapchat shares dropped by 3.4% the day after their source code leak was made public. And beyond the immediate backlash, companies that have experienced a breach underperform the market by more than 15%, even three years later.
Considering the stakes, it is important to take a closer look at the types of leaked data that threat actors seek out, and ways to effectively prevent them from getting their hands on it.
In almost all cyber-attacks affecting an organization, credentials are involved, either as the target of theft or as a means of furthering access within a network. This includes email credentials and hardcoded access credentials that can be used to access confidential emails, systems, and documents.
Target was breached using stolen credentials
In one of the first major breaches of its kind, threat actors uploaded BlackPOS to Target’s point-of-sale (PoS) network, allowing them to steal customers’ credit card information and other personal details. It was later found that the threat actors compromised Target’s servers using credentials stolen from Fazio Mechanical Services, Target’s HVAC vendor, which had access to Target’s servers. And since the network was not properly segmented, the threat actors were able to reach Target’s PoS network.
While source code can be exposed on purpose by malicious insiders, most often it is exposed by developers who are careless while pushing code from their machines to GitHub. Leaked source code can expose SSH keys (digital certificates that unlock online resources), Application Programming Interface (API) keys, and other sensitive tokens. Using the source code, threat actors can also find vulnerabilities that can be exploited to launch cyber-attacks on the company.
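To make the risk concrete, here is a minimal sketch of the kind of pattern-based check that secret scanners run over code before (or after) it is pushed. The patterns below are simplified, illustrative assumptions; production scanners such as gitleaks or truffleHog ship far larger rule sets and add entropy analysis.

```python
import re

# Illustrative patterns only; real scanners use many more rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, line_number) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Running a check like this as a pre-commit hook catches hardcoded keys before they ever reach a public repository.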
After discovering one of Daimler AG’s Git web portals, a researcher registered an account on Daimler’s code-hosting portal and downloaded 580 Git repositories from the company’s server. The repositories contained the source code of onboard logic units (OLUs) used in Mercedes vans, which provide live vehicle data. The researcher then uploaded the files to file-hosting service MEGA, the Internet Archive, and on his own GitLab server, thus making it public.
Sensitive data such as credit card details, healthcare information, customer PII, etc. often end up on the dark web after being exposed on unsecured databases or cloud storage. This information could be used to launch phishing attacks. It could also lead to your intellectual property being exposed to the public.
540 million Facebook users’ records were exposed on unsecured S3 buckets
Mexico-based digital media company Cultura Colectiva exposed 146 GB of Facebook user data, including comments, likes, account names, reactions, and Facebook IDs, on an unsecured Amazon S3 bucket. Another S3 bucket, belonging to Facebook-integrated app At The Pool, exposed 22,000 Facebook users’ friend lists, interests, photos, group memberships, and check-ins.
How to eliminate the low-hanging fruit that expedites attacks?
As the above examples show, despite their best efforts, Target, Mercedes, and Facebook were not able to prevent their data from leaking. This can be attributed to the highly distributed, interconnected, and globalized nature of modern businesses: there simply aren’t enough resources to monitor every employee, vendor, and vendor’s vendor. The good news is that if you can detect data leaks in time, and have them taken down, their impact is greatly reduced.
The average data breach lifecycle is 279 days: 206 days to identify the breach and 73 days to contain it. If a data leak can be identified within a few hours instead of 206 days, its spread across the surface web and dark web can be contained. However, this cannot be done manually. The only way to effectively identify and curb data leaks is to adopt AI-driven, real-time monitoring.
Continuous monitoring for leaked or exposed data
Incorporate processes and tools that ensure data leaks related to your organization are monitored continuously. This includes real-time monitoring of the surface web, deep web, and dark web for credentials, source code, and sensitive information. Deploy a comprehensive threat monitoring tool such as CloudSEK’s XVigil, whose AI-driven engine scours the internet for threats and data leaks related to your organization, prioritizes them by severity, and provides real-time alerts, giving you enough time to neutralize data leaks before they can have adverse impacts on your business.
With cyber threats on the rise, and the recent implementation of remote work across businesses and organizations, in-house IT teams are struggling to preserve their security posture. Furthermore, an increasing number of employees are using applications, hardware, software, and web services that their IT departments are not aware of. A Forbes Insights survey found that more than 1 in 5 organizations have experienced a security incident due to shadow IT resources.
Amidst the COVID-19 crisis, with entire workforces confined to their homes, the use of personal networks and devices is growing rapidly. This allows employees to install or work with external applications and infrastructure that complements their skills and/ or requirements. While this may improve employee productivity, it exposes employees and their organizations to a wide range of cyber threats.
What is Shadow IT?
Shadow IT refers to the use of diverse Information Technology (IT) systems, devices, software, applications, and services, without the authorization of IT departments. Although shadow IT enhances efficiency, it also subjects users and their organizations to heightened risks of data breaches, noncompliance issues, unforeseen costs, etc.
Microsoft 365; work management apps such as Slack, Asana, and Jira; messaging apps like WhatsApp; and cloud storage, sharing, and synchronization apps such as OneDrive and Dropbox are the most common examples of shadow IT. These applications are not inherently threatening, and are usually installed with the best intentions, but they tend to endanger the overall security of the organization in the event of misuse or negligence.
What are the different forms of shadow IT and which is the most popular one?
Users employ various forms of shadow IT applications and services. Broadly, they can be classified as:
Hardware: Personal devices, systems, servers and other assets.
Ready-to-use software: Adobe Photoshop, MS Office, etc.
Cloud services: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) services.
While users subscribe to various IT services that are not administered by their IT departments, the most common form of shadow IT is SaaS-based cloud services. SaaS applications are gaining popularity across workforces, regardless of industry or sector, because such publicly available applications often outperform on-premise applications and infrastructure.
Why do employees prefer shadow IT?
Research by the Everest Group found that shadow IT accounts for 50% or more of IT spending in large organizations. So, dismantling shadow IT means organizations have to devote more funds to building and maintaining approved applications and infrastructure. Yet employees prefer external applications even when in-house applications are available, simply because the external ones are comparatively sophisticated.
Here are some common reasons for employees opting for shadow IT solutions:
Efficiency and agility
This is probably the most common reason behind the increasing use of shadow IT: users employ external IT resources to produce better results, and because it makes their work easier. Research by Entrust Datacard reported that 77% of the surveyed IT employees believed that organizations could be frontrunners if they successfully met the shadow IT needs of their employees.
Poor communication and coordination between various teams and the IT department hampers productivity, and can therefore cause employees to choose shadow IT over onsite software and applications.
If customers’ programs cannot be integrated with the organization’s systems/ software, employees may resort to using external services for better results.
Readily available tools
Clearance from the IT department can be time-consuming. So, when the necessary software, service, or hardware is readily available and compatible with any device, employees naturally choose to use it.
What are the potential risks associated with shadow IT?
Where employees use shadow IT, security is the principal concern. As IT departments are not aware of certain applications that employees use, it is impossible for them to provide security updates and patches, or to test the newly adopted applications. Unpatched vulnerabilities can cost organizations a fortune, as in the case of Maersk in 2017, when hackers exploited its computers because they lacked the latest Microsoft security patches. The incident cost Maersk over $200 million in lost revenue.
Data breaches, leaks
Shadow IT applications that support file sharing, storage, and collaboration are prevalent among employees of every organization. As effective as they are, they can cause data breaches and leaks: since IT departments are not aware of this additional software deployed on their networks, they eventually lose control over the organization’s data. In 2018, Gartner predicted that by 2020, one-third of successful attacks on organizations would come through data located in shadow IT resources, including shadow IoT.
Non-compliance and violation of regulations
If organizations fail to conduct risk assessments and take preventive measures with regard to unauthorized applications, they can face severe sanctions for non-compliance, and risk violating regulations such as HIPAA, GDPR, etc. On becoming aware of shadow IT applications in use within the organization, they are forced to conduct a separate security audit, which results in unforeseen costs.
What can organizations do to avoid these risks?
Regular monitoring of networks and vulnerability scanning
Monitor your organization’s network continuously for any shadow IT applications, and scan such applications, along with other in-house assets, for vulnerabilities that could expose your organization to cyber threats. Ensure the latest updates are installed.
The IT department could set up a system of SaaS Management or simply Software Asset Management, to keep track of all the applications used within the organization.
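A very simple form of this tracking can be sketched as a comparison of observed traffic against a sanctioned-app inventory. The domains below are hypothetical placeholders; a real SaaS management tool would pull the approved list from an asset database and the observed domains from DNS or proxy logs.

```python
# Hypothetical sanctioned-app inventory (assumption, not a real policy).
APPROVED_DOMAINS = {"office365.com", "slack.com", "jira.example-corp.com"}

def find_shadow_it(observed_domains):
    """Flag domains seen in DNS/proxy logs that are not on the approved list."""
    flagged = set()
    for domain in observed_domains:
        # Treat subdomains of an approved domain as approved too.
        if not any(domain == d or domain.endswith("." + d) for d in APPROVED_DOMAINS):
            flagged.add(domain)
    return flagged
```

Flagged domains are candidates for review, not proof of misuse; many will turn out to be legitimate tools that simply need to be added to the inventory.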
Internal monitoring tools
We would also encourage organizations to leverage digital risk monitoring tools such as CloudSEK’s XVigil. XVigil detects data leaks pertinent to the organization, including those caused by shadow IT, early on, giving you sufficient time to address these issues before they affect your security posture.
Security/IT teams should create awareness among employees. This could also give you an idea of the various shadow IT devices or applications that your employees use. While they are at it, security/IT teams may also want to educate employees on the different types of data they deal with and the responsibilities that come with it.
Address employees’ technology needs
Organizations should address employees’ technology requirements to eliminate the need for external applications. Employees often cite long approval processes and delays in acquiring sanctioned applications as reasons for adopting external solutions to meet their immediate needs.
Prepare a list of usable applications or devices
Keeping in mind that not all applications or devices pose a threat, organizations could prepare a list of approved applications/ devices and encourage employees to use them.
As more businesses migrate to cloud environments, making it easier for customers to access their services and products, we have witnessed a sharp rise in the number of online businesses employing web applications. Also known as web apps, they have assumed great significance in this digital era, allowing businesses to develop and achieve their objectives expeditiously.
Well designed web apps allow organizations to gain competitive advantage and appeal to more customers. Hence, it is essential to have measurable or quantifiable metrics to gauge the quality of a web app.
What is a web application?
Web apps are software programs that require a web browser for interaction. Unlike other applications, users need not install software to run web applications; all they require is a web browser. Web applications range from small-scale online games to video streaming applications like Netflix.
What are Software Quality Metrics?
Software quality metrics gauge the quality of the software, its development and maintenance, and the execution of the project itself. In essence, software quality metrics record not only the number of defects or security flaws in the software, but also the entire process of development of the project, as well as the product.
Classification of Quality Metrics
Based on the components and features, software quality metrics can be classified into:
Product quality metrics
In-process quality metrics
Project quality metrics
A user grades the quality of an application based on their experience with its features/functionalities, the value it provides, and after-sales services such as maintenance, upgrades, etc. However, the quality of the software is also measured based on the project, the teams involved, project cost, etc.
Six major quality metrics to consider for better web applications
Usability of the web application:
Usability testing assesses the ease with which end-users use the application. It ensures effective interaction between the user and the app. Web applications that have a complicated design or interface are the least preferred by users.
In order to test the usability of web apps, its navigation, content, and other user-facing features should be tested.
Images and other non-text content should be placed appropriately, so as to avoid distractions.
The options “Search” and “Contact us” should be easy to find.
Performance of the web application:
Performance testing determines the behaviour of the application under different settings and configurations. For example: Performance during high usage vs normal usage. Performance of a web app contributes to its adoption, continued usage, and overall success.
Types of performance testing
Load testing
Web stress testing
In load testing, we evaluate the performance of the web app when multiple users access it concurrently. This helps to ascertain if the app can sustain peak hours, handle large user requests or simultaneous database access requests, etc.
In web stress testing, the system is tested beyond the limits of standard conditions. The objective is to assess how the app behaves under volatile conditions, such as when web pages time out or there is a delay between requests and responses, and how it recovers from crashes.
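The structure of a basic load test can be sketched as follows. This is a minimal illustration, not a production harness: `handle_request` is a stand-in that simulates server work, and in a real test it would be replaced by an actual HTTP call (and the results fed into percentile reporting).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for a real HTTP request; replace with an actual call in practice."""
    time.sleep(0.01)  # simulated server processing time
    return 200

def load_test(concurrent_users, requests_per_user):
    """Fire requests from many simulated users at once and collect latencies."""
    latencies = []  # list.append is thread-safe in CPython

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            status = handle_request()
            latencies.append(time.perf_counter() - start)
            assert status == 200  # a failed request under load is a finding

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(user_session)
    return latencies
```

Comparing the latency distribution at, say, 5 concurrent users versus 500 is precisely the "high usage vs normal usage" comparison described above; dedicated tools such as JMeter or Locust automate this at scale.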
Compatibility on different platforms and browsers:
The quality of the software also depends on whether the application is compatible with different browsers, hardware, operating systems, applications, network environments, and devices.
If developers intend to have a mobile version of a web application, they ought to address and resolve any issues that may arise in that scenario.
While performing various actions such as printing or downloading, from a web application, the elements on the page, including text, images, etc., should be fixed in place, and properly aligned to fit on the page.
Traceability of requirements:
This parameter traces and maps user requirements throughout their life (from their source, through the stages of development and deployment), using test cases. It checks whether every user requirement is met, and defines the purpose of each requirement and the factors it depends on.
Modes of requirement traceability
Based on the direction of tracing, requirement traceability can be classified into:
Forward traceability: Tracing the requirement sources to the resulting requirement, to ensure coherence.
Backward traceability: Tracing the various components of design or implementation back to their source requirements, to verify that requirements are kept updated.
Bidirectional traceability: Tracing both backward and forward.
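A traceability matrix makes both directions checkable mechanically. The sketch below is an assumed, minimal data model (requirement IDs mapped to test cases); real tools attach far more metadata, but the two checks are the same.

```python
def traceability_report(requirements, test_cases):
    """requirements: {req_id: description}
    test_cases: {test_id: [req_ids the test covers]}
    Returns (untested requirements, tests tracing to unknown requirements)."""
    covered = {r for reqs in test_cases.values() for r in reqs}
    # Forward traceability: every requirement should map to at least one test.
    untested = set(requirements) - covered
    # Backward traceability: every test should trace back to a known requirement.
    orphan_tests = {t for t, reqs in test_cases.items()
                    if any(r not in requirements for r in reqs)}
    return untested, orphan_tests
```

Running both checks together gives the bidirectional traceability described above: a clean report means no requirement is untested and no test exists without a requirement.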
Reliability of the web application:
A web application is not reliable if it does not produce consistent results. Ideally, the application must operate failure-free, for a specified period of time, in a particular environment.
For example, a medical thermometer is only reliable if it measures the accurate temperature every time it is used.
Security testing for the web application:
The security implementation of a web application is another factor that determines its success. As one study shows, hackers can attack users in 9 out of 10 web applications. These attacks include redirecting users to a malicious site, stealing credentials, and spreading malware. Ignoring this factor can cause serious damage to users and their businesses.
To test the security of web applications, we test the URLs that a user can and cannot access. If an online document has an ID/identifier such as ID=”456″ or identifier=”zm9vdC0xNl8yMDE5…” at the end of its URL, only authorized users should be able to access that document. If a user tries to alter the ID/identifier in the URL, they should receive an appropriate error message.
Automated traffic can be prevented by using CAPTCHA.
Types of security testing
Dynamic Application Security Testing (DAST): It detects indicators of security vulnerabilities in applications that are running.
Static Application Security Testing (SAST): It analyzes the application’s source code, and/or compiled versions of the code, for patterns indicative of security vulnerabilities.
Application Penetration Testing: It assesses how applications defend against possible attacks.
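To make the SAST idea concrete, here is a toy static check built on Python's `ast` module. The two rules are deliberately simplistic assumptions (real SAST tools such as Bandit ship hundreds of rules), but the principle of walking the syntax tree for risky patterns is the same.

```python
import ast

def find_static_issues(source):
    """Flag two simple issue classes: calls to eval()/exec(), and string
    literals assigned to names that look like passwords."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        # Rule 1: dangerous dynamic-execution calls.
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in ("eval", "exec")):
            issues.append(("dangerous-call", node.lineno))
        # Rule 2: hardcoded credentials.
        if (isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant)
                and isinstance(node.value.value, str)):
            for target in node.targets:
                if isinstance(target, ast.Name) and "password" in target.id.lower():
                    issues.append(("hardcoded-password", node.lineno))
    return issues
```

Because SAST works on the code itself, it catches issues like these before the application ever runs, which is exactly what distinguishes it from DAST.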
Additional components to be considered
To ensure that the web application is fully functional in all aspects, the following components should be inspected:
Links that direct users to another section on the same page
Orphan pages in web applications
Forms or other input fields
Verify all validations
Check default values
Links to update forms, edit forms, delete forms, etc. (if any)
Review data integrity while editing, deleting, and updating forms
Check if data is being retrieved and updated correctly
Check whether the cookies are encrypted or not
Evaluate application behavior after deleting cookies
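The cookie checks in the list above can be partially automated by auditing the `Set-Cookie` headers a response carries. This sketch checks only the standard security attributes (Secure, HttpOnly, SameSite); whether the cookie *value* itself is encrypted or signed still has to be verified against the application's design.

```python
def audit_set_cookie(header_value):
    """Check one Set-Cookie header for the flags a security review expects."""
    attrs = [part.strip().lower() for part in header_value.split(";")[1:]]
    problems = []
    if "secure" not in attrs:
        problems.append("missing Secure")      # cookie may travel over plain HTTP
    if "httponly" not in attrs:
        problems.append("missing HttpOnly")    # cookie readable by page scripts
    if not any(a.startswith("samesite=") for a in attrs):
        problems.append("missing SameSite")    # no CSRF mitigation declared
    return problems
```

An empty list means the header carries all three attributes; anything returned is a finding for the test report.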
According to a Ponemon study, 59% of the surveyed companies had experienced a data breach due to their third-party vendors. While data breaches can be caused by several sources, those that involve a third party have been found to increase the total cost of a data breach by approximately $370,000. And considering that data breaches affect an organization’s reputation, revenue, and compliance, third-party vendor risk management can no longer be an afterthought.
Given the level of access most vendors have to an organization’s network, traditional risk management frameworks fall short. Traditional strategies focus on vetting vendors, having a robust onboarding process, and periodic assessments. However, a rapidly evolving cyber threat landscape renders these assessments and findings obsolete, within a few days or weeks.
The failure of traditional vendor risk management is evident in several high-profile breaches. From the Target breach in 2013 to the recent Facebook and Airbus breaches, they were all traced back to their respective third-party vendors. This calls for a more dynamic vendor risk management approach, one that covers a wide range of vendor-related risks.
In this article, we explore:
Risks associated with third-party vendors
Common pitfalls in traditional vendor risk management strategies
Ways to upgrade your vendor risk management, and effectively reduce associated risks
Risks associated with third-party vendors
Outsourcing is an integral part of most businesses because vendors provide:
Flexibility: Offering a dynamic workforce and adaptable operations.
Scalability: Reaching new markets and serving more customers.
Expertise: Catering to different sectors and industries.
Cost cutting: Saving on infrastructure and operational costs.
For these reasons, outsourcing is here to stay. However, as vendors and organizations become more interconnected, the cybersecurity risks also multiply. Vendors serve as an entry point for threat actors to make their way into a company’s networks by:
Exploiting vulnerabilities in a vendor’s systems
While a business has control over patching and updating its own assets, it cannot monitor a vendor’s systems and ensure the vendor does the same.
Ticketmaster’s data breach was due to a vulnerability in their vendor’s system:
A data breach at Ticketmaster, an American ticket sales and distribution company, was traced back to Inbenta, a third-party, which powers Ticketmaster’s customer support agent. Inbenta was one of the 800 victims targeted by Magecart’s digital credit card skimming campaign. An attacker targeted Inbenta’s front-end servers, where they stored code libraries used by Ticketmaster. Then, by exploiting a number of vulnerabilities, the attacker modified the code to steal customer data.
Using network/ system credentials exposed by vendors
Vendors usually need remote access to a company’s systems in order to access data and applications, or to carry out maintenance activities. Vendors could leave your network credentials exposed, or threat actors could compromise a vendor’s network to steal the credentials. This is especially damaging if there is no proper network segmentation, which gives the threat actor unbridled access to the company.
Threat actors used stolen vendor credentials to access Target’s PoS network
In one of the first major breaches of its kind, threat actors uploaded BlackPOS to Target’s point-of-sale (PoS) network, allowing them to steal credit card information and other personal details. It was later found that the threat actors compromised Target’s servers using credentials stolen from Fazio Mechanical Services, Target’s HVAC vendor, which had access to Target’s servers. And due to improper network segmentation, the threat actors were able to reach Target’s PoS network.
Using source code leaked by vendors
Most companies keep their source code confidential. So, unlike open-source software, the public cannot view or modify their source code. Leaked source code usually finds its way to dark web sites, where the code will be available to hackers even after it has been taken down from the original location. Hackers then use the source code to find vulnerabilities that can be exploited to launch cyber-attacks on the company and its customers.
Partners leaked the Team Fortress 2 and CS:GO source code
The Team Fortress 2 and Counter-Strike: Global Offensive (CS:GO) source code was found online and then uploaded to torrent sites. The CS:GO team confirmed that the code was originally shared with partners in 2017 and subsequently leaked. And despite reassurances that the leak doesn’t affect current players, several screenshots and videos made the rounds, purporting to show Remote Code Execution (RCE) exploits based on the leaked code, impacting the games’ reputations.
Sensitive information exposed by vendors
In the recent past, there have been several cases of vendors exposing Amazon storage buckets and databases that can be accessed over the internet. This gives threat actors easy access to sensitive information, which they then sell on the dark web, to the highest bidder.
Vendors exposed 540 million Facebook users’ records
Common pitfalls in traditional vendor risk management strategies
While traditional vendor risk management frameworks are a good starting point, there are a few areas they need to address to be effective in a hyper-connected world. Dynamic third-party risk management should:
Address fourth/ nth party vendors
A 2019 survey found that only 2% of organizations identify and monitor all their subcontractors. And 8% of organizations monitor subcontractors only for critical infrastructure and IT. The remaining 90% said they lacked the required skills to monitor fourth/ nth parties.
Adapt to a constantly evolving cyberthreat landscape
Organizations generally perform vendor risk assessments at the time of onboarding, and at regular intervals thereafter. During the intervals between assessments, new vulnerabilities, exploits, and malware and ransomware strains show up. And assessments don’t account for these unknowns.
Leverage automation and technology
Standard vendor risk management frameworks don’t offer a common, integrated platform that tracks the end-to-end process, from risk identification and prioritization to issue tracking and mitigation. Nor do they provide actionable intelligence that organizations can leverage to make better cybersecurity decisions.
Ways to upgrade your vendor risk management, and effectively reduce associated risks
Companies need to upgrade their standard vendor risk management processes to ensure their vendors are not putting their data and networks at risk. Organizations can do this by incorporating a few effective tools and processes, such as:
Updating contractual standards
Update contracts to account for new regulatory and data privacy requirements, and ensure your vendors are obligated to disclose risks and data breaches in a timely manner. It also helps to have defined processes to mitigate risks and respond to data breaches.
Focusing on nth party risk management
Ensure you have complete visibility into your vendors’ vendors. Determine whether products and services are provided directly by the vendor or by a subcontractor, and have contractual agreements with vendors that mandate such disclosures.
Continuous vendor risk monitoring
Incorporate processes and tools that ensure vendor-related risks are monitored even between regular assessments. This includes real-time monitoring of the surface web, deep web, and dark web for source code, sensitive information, and credentials. An IBM study found that the mean time to identify (MTTI) a breach is 197 days. It is during this interval that a comprehensive SaaS platform such as CloudSEK’s XVigil helps. XVigil’s AI-driven engine scours the internet for threats related to your organization, prioritizes them by severity, and provides real-time alerts, giving you enough time to mitigate threats before they can have adverse impacts on your business.
A recently uncovered spear phishing campaign, orchestrated by the PerSwaysion group and targeting 150+ executives across the globe, is a prime example of the growing trend of concerted cyber attacks on CXOs and VIPs. This type of targeted attack on VIPs is commonly known as whaling. Whaling tactics are similar to general spear phishing, but differ in that they specifically target high-level and important individuals within an organization.
Threat actors are slowly moving from large-scale, low-value attacks, which target the general population, to small-scale, high-value attacks, which target the key personnel of an organization. Furthermore, the Verizon 2019 Data Breach Investigations Report found that senior executives are 12 times more likely to be targets of social incidents, and 9 times more likely to be targets of social breaches. This is because high-profile personnel have exclusive clearances and privileges, such as:
Access to confidential and sensitive information, including financials, trade secrets, etc.
The authority to direct other employees in the organization to carry out certain tasks.
Access to valuable assets, including networks, devices, and facilities.
How do threat actors target C-level executives?
Research and reconnaissance
To orchestrate a typical attack, threat actors perform extensive reconnaissance and research, to understand an organization’s structure and functions.
Using this information, they narrow down the list of potential targets and their associates.
They then collect personal information about the shortlisted VIPs. Most companies publish their executives’ details on social media, news media, and their own websites. Thus, a simple Google search will give the threat actor access to this information. Moreover, the executives themselves have personal accounts on platforms such as Facebook and LinkedIn. And often, the privacy settings on these accounts are lax.
They further search for exposed account credentials from previous data leaks. Given that most of us, executives included, reuse the same password across multiple accounts, the exposed credentials can be used to gain access to the executive’s official email account.
Data theft attacks
Once hackers have obtained access to C-suite executives’ accounts, through brute-force attacks or other means, they steal valuable information. This may include client lists, customer data, financial data, internal processes, business strategies and plans, and more.
Threat actors could also hijack executives’ social media accounts and post harmful messages, tarnishing the reputation of the executive and their organization.
Using this email access, threat actors decipher the communication frequencies and styles within the organization. For example, if there is a trail of audit-related emails, threat actors can send requests for audit-related details in continuation of the ongoing communication.
If threat actors cannot get access to an executive’s credentials, they create fake email IDs that closely resemble one of the executives’ email IDs, or that of the HR or Accounting department. From the fake ID, they send an urgent, actionable, and believable email to a C-level executive.
Threat actors bank on executives having limited time, or relying on assistants, to read and respond to emails. They also ensure the emails are believable, adding references to the executive’s interests and hobbies gleaned from their social media profiles. The emails usually ask the recipient, who is also an executive or VIP, for sensitive information or wire transfers, or to download an attachment.
If the recipient falls for the trap, they end up revealing sensitive information or authorizing someone else to do so. They could also authorize transfers to the fake account details shared by the threat actor. A malicious attachment could drop a malware or ransomware payload on their systems. The recent PerSwaysion campaign used a fake Microsoft Outlook login page, through which the attackers collected 150+ executives’ login credentials. These credentials can be used to orchestrate other attacks, or sold on the dark web to the highest bidder.
How to protect C-level executives from these attacks?
Given the heightened risk to VIPs, here are a few measures to combat and mitigate threats:
Deploy a real-time monitoring tool that scours the internet – surface web, deep web, and dark web – for potential threats. A comprehensive SaaS platform such as CloudSEK’s XVigil tracks VIPs’ personal email IDs for their presence in past security breaches. Organizations are alerted to such threats immediately, along with other significant details pertaining to the risk.
Review social media presence
Ensure the executives’ social media accounts have the highest level of privacy. Report duplicate accounts and delete dormant accounts on a regular basis.
Enable Multi Factor Authentication (MFA) for all their accounts, including email, company assets and network.
Regular cybersecurity refreshers
Since threat actors are constantly changing and upgrading their whaling tactics and ruses, periodic training will help executives spot and avoid such traps.
An attack on a VIP doesn't just affect them personally, it also affects their organization's revenue and brand image. Threat actors could gain access to the company's central database, steal employee and customer details, and leak or even sell them. It takes years of painstaking effort to build a company's brand image, and any damage to this intangible asset can have very serious and far-reaching consequences. Hence it is important to enable processes, and tools such as XVigil, to continuously monitor and protect VIPs and their organizations.
Urban Dictionary defines IoT as: an acronym for "Internet of Things", e.g. everyday objects (such as light bulbs or refrigerators) that can be accessed and possibly controlled via the Internet. The letter 's' in the acronym stands for data and communication security.
Still wondering where the ‘s’ is?
Although the security of IoT devices demands immediate attention, their sheer abundance has resulted in a lack of it. There are more than 40 billion connected devices at present, and a significant number of new IoT devices are deployed every day.
Internet routers, smart TVs, watches, refrigerators, speakers, and security systems such as cameras and home automation devices, are the most common IoT devices. Some of the lesser-known examples are smart vending machine services like BigBasket’s BBInsta, smart electricity meters, bluetooth-activated rental scooters such as Vogo and Bounce, and smart RO water purifiers like DrinkPrime. And most of these devices have already become indispensable parts of our lives.
Why is it important to secure IoT devices?
The growing demand for smart devices makes it essential to prioritize their security. The following reasons are also notable:
1. Prolonged use:
Unlike other technological devices, connected devices are used for a longer period of time – ADSL Broadband routers released in the late 2000s with software components from early 2000s are still alive and online. However, most of these devices no longer receive security updates.
2. Low attack protection:
Most connected devices run on low power and low memory, making it impractical to deploy modern defense techniques, especially against memory-corruption vulnerabilities such as buffer overflows. Mitigations such as stack protection and ASLR are usually found disabled.
3. Uncharted terrains:
The security industry's primary focus is on web/ desktop applications, leaving the security of a large number of IoT devices neglected.
How to detect vulnerabilities in IoT devices?
There are multiple ways to detect the vulnerabilities in IoT devices. We will explore:
1. Firmware Analysis
The advantage of this approach is that it does not require the physical presence of the target device. As we discuss the various ways to detect vulnerabilities in connected devices, I will explain how I discovered a remotely exploitable code execution vulnerability in a widely deployed internet router.
Firstly, download the latest firmware from the device manufacturer's website, usually found on the support page for that device. Manufacturers provide it, along with user guides, for manual software updates or for recovering bricked hardware.
The preferred tool for this approach is binwalk, an easy-to-use tool for analyzing, reverse engineering, and extracting firmware images; it works on any unknown binary file. It scans for known file-type signatures within the file, and detects filesystems and known compressed stream types.
Here is a demo of running binwalk on TP-Link Archer C5’s firmware, the default router issued by ACT, Bangalore.
binwalk then detects three things within the file:
U-Boot – A bootloader often used in embedded devices,
Some compressed data, and
A SquashFS filesystem – the root filesystem image and data that are mounted on the device. It contains all the binaries, scripts, and configuration.
To extract the SquashFS image and other files, one can use binwalk itself (`binwalk -e firmware_file`) or `unsquashfs`. However, depending on the filesystem, one might need to download additional tools to extract the image.
If binwalk fails to identify the filesystem or identifies false positives instead, we can also try manual analysis. We will discuss this, later in the article. Now that we have the code and the binaries that run on the device, we can start testing.
Upon running binwalk on the firmware for JioFi 2, it detects a lot of files stored directly in plain text, not enclosed in a filesystem. Further, open the firmware file in a hex editor and search for the first few bytes (also called magic bytes). The file will be identified as an FBF (Flash Binary File).
In the event that this doesn’t work, we shall assess whether the file is encrypted using entropy analysis with `binwalk -E`.
Encrypted firmware usually means that proceeding further is difficult. In that case, one could try reverse engineering the header to see if the decryption metadata (key, algorithm) is stored there, though this is highly unlikely.
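As a quick sketch of what `binwalk -E` measures, here is a minimal Shannon-entropy check in Python. The block size and the 7.9-bits-per-byte threshold are illustrative assumptions, not binwalk's exact internals: near-maximal entropy across the whole image suggests encryption (or compression), while a plain filesystem shows clear dips.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def looks_encrypted(data: bytes, block_size: int = 4096,
                    threshold: float = 7.9) -> bool:
    """Heuristic: if nearly every block is at near-maximal entropy, the
    image is probably encrypted or compressed end-to-end; unencrypted
    firmware has low-entropy regions (headers, strings, padding)."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    high = sum(1 for b in blocks if shannon_entropy(b) >= threshold)
    return high / len(blocks) > 0.95
```

Running this over a firmware file's bytes gives the same signal as binwalk's entropy graph, just without the plot.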
If the required firmware is not available, or it is impossible to extract anything, there are other ways to proceed.
2. Service Exploitation
An IoT device will have a network interface. So, we can fire up nmap and scan the host for open services.
Routers, for example, have an HTTP server with a web interface for configuration, status information, etc., which is an easy target for bugs.
The most important vulnerability to look for during such black-box testing of the web UI is command injection. A lot of the web UI functionality is just a wrapper for internal Linux utilities like iptables, ping, traceroute, etc.
The actions on the web interface are passed to these utilities as parameterized shell commands, which can lead to command injection if the input is not sanitized. Apart from this, we should also look for unauthenticated action execution, or pages that fail to implement auth checks.
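To illustrate the bug class, here is a hypothetical sketch (not code from any actual router) of how such a ping wrapper goes wrong, and the shape of a fix: validate the input against an allowlist and pass argv directly instead of building a shell string.

```python
import re

# Crude hostname/IP allowlist -- letters, digits, dots, hyphens only.
HOST_RE = re.compile(r"^[A-Za-z0-9.\-]{1,253}$")

def build_ping_command_unsafe(host: str) -> str:
    # What many embedded web UIs do: splice raw user input into a shell
    # string. host = "8.8.8.8; telnetd -l /bin/sh" runs a second command.
    return f"ping -c 1 {host}"

def build_ping_command_safe(host: str) -> list[str]:
    # Validate first, then return an argv list; executing it with
    # subprocess.run(argv) involves no shell, so metacharacters are inert.
    if not HOST_RE.match(host):
        raise ValueError("invalid host")
    return ["ping", "-c", "1", host]
```

The unsafe variant is exactly the pattern behind most router command injections: the injected `;`, `|`, or `&&` survives into `sh -c`.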
Here is one such injection I found in a large ISP issued router:
Once we achieve command injection, we can escalate it into full shell access. Usually we will be able to find a telnet binary on the system; if not, we can download one. Subsequently, start a telnet listener with an injection such as: `127.0.0.1 && /usr/sbin/utelnetd -l /bin/sh -p 2512`.
Then, we explore the processes that are running.
All these files and data expand the attack surface. Inspecting these binaries and their configuration files reveals whether they are custom or off-the-shelf tools. We can leverage reverse engineering toolkits like Ghidra to analyse these binaries and ascertain their susceptibility to memory corruption issues, such as buffer overflows, or logic bugs.
At this point, we can also explore the filesystem for configuration files or conduct a static source code analysis of the web UI backend. The most prized bugs to seek are remotely exploitable pre-auth RCEs. Also, try to find services that listen on the WAN interface and use that to find a bug.
One of the bugs I found during this process was a telnet binary listening on the WAN, which used a custom executable `/bin/login` that only worked if supplied with a hardcoded password.
Such low-hanging vulnerabilities are not very rare. Developers often leave hard-coded backdoor passwords exposed. These are a couple of instances that prove the same:
Similarly, we can find routers, printers, security systems, etc. with default passwords enabled.
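As a rough illustration of how such hardcoded secrets surface during firmware analysis, a minimal `strings`-style scan over a binary blob might look like this (the keyword list and the sample blob are invented for illustration):

```python
import re

def extract_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Minimal clone of the Unix `strings` utility: runs of printable
    ASCII characters of at least min_len bytes."""
    return [m.group().decode()
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

def suspicious_strings(blob: bytes) -> list[str]:
    """Flag candidate hardcoded credentials for manual review."""
    keywords = ("passw", "secret", "admin", "telnet", "backdoor")
    return [s for s in extract_strings(blob)
            if any(k in s.lower() for k in keywords)]
```

In practice one runs `strings` (or Ghidra's defined-strings view) over the extracted binaries and triages the hits by hand; this sketch just shows why backdoor passwords are such low-hanging fruit.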
3. Hardware Engagement
If black-box testing fails to find vulnerabilities in the firmware or any running services, there are other ways to detect vulnerabilities:
3.1 Serial Interface
Most IoT devices run a full Linux kernel on a MIPS- or ARM-powered box. A serial interface is not uncommon on these types of devices.
Typically, one can find a UART over an RS-232 or TTL interface on the chip of the IoT device. An RS-232 interface will have a 9-pin connector, and a TTL interface will have 3-5 pins. The chip, within the outer case, will have markings indicating the connectors. Use a USB-TTL converter, soldering the connection between the chip and the converter.
Then, connect to the serial console and use device admin credentials to log in.
These interfaces are usually provided by manufacturers to de-brick the device. At the time of booting the device, we have access to additional functionality such as loading firmware over the network.
Once a shell prompt is initiated, we can use techniques discussed previously, for further testing.
In any case, if the device doesn’t run a full fledged OS or the hardware doesn’t provide a serial connection, there is an even lower level approach we could try.
3.2 JTAG
JTAG is another common hardware interface that enables direct communication with the microcontroller on a board. Even though JTAG was initially used by manufacturers to test all the connections on a board, it is now also used for low-level debugging.
JTAG connection directions are marked on the chip. Otherwise, the spec sheet of the microcontroller/ processor will have details of the same. Solder directly to the JTAG pins on the microcontroller, to access the debugging interface.
3.3 What can you do with JTAG?
Pause and step through an operation
Write bytes directly into memory,
Inject code into the process or process memory
Dump the contents of the bootloader
Bypass logins, and so on
What can hackers do after finding bugs in these devices?
The Mirai Botnet attack
In 2016, security vulnerabilities in several brands of security cameras almost toppled the internet. The Mirai botnet launched 623 Gbps distributed denial-of-service attacks on multiple targets, with the traffic originating from thousands of such security cameras. The next year its variant, Mirai Okiru, was launched, targeting Huawei routers.
The proliferation of IoT devices has made it almost impossible to handle the increasing number of attacks they encounter.
Most smart devices are frequently exploited to encroach on the privacy of their users:
Smart speakers are exploited to listen to interactions.
Security devices such as CCTV cameras are abused to gain access to sensitive visuals.
Vulnerabilities in routers can lead to internet traffic being compromised. Hackers can see the sites visited through plaintext DNS queries. Further, they can perform MiTM attacks and steal credentials or sessions. These vulnerabilities also expose internal devices to the attacker, bypassing the NAT firewall and causing severe damage.
Since the coinage of the term in 1956, Artificial Intelligence (AI) has evolved considerably. From its metaphorical reference in Mary Shelley's Frankenstein, to its most popular recent application in autonomous cars, AI has made a progressive shift over the years. It influences all the major industries such as transportation, communication, banking, education, healthcare, media, etc.
When it comes to cybersecurity, AI is changing how we detect and respond to threats. However, with the benefits, comes the risk of the potential misuse of AI capabilities. Is the primary catalyst for cybersecurity, also a threat to it?
How do we use AI in our daily life?
Social media users encounter AI on a daily basis and probably don’t recognize it at all. Online shopping recommendations, image recognition, personal assistants such as Siri and Alexa, and smart email replies, are the most popular examples.
For instance, Facebook identifies individual faces in a photo, and helps users “tag” and notify them. Businesses often embed chatbots in their websites and applications. These AI-driven chatbots detect words in the questions entered by customers, to predict and deliver prompt responses.
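The keyword-matching idea behind such chatbots can be sketched in a few lines; the keywords and canned answers below are invented for illustration, and production bots layer statistical language models on top of this basic lookup:

```python
import re

# Hypothetical keyword-to-answer table for a shopping-site chatbot.
RESPONSES = {
    ("price", "cost", "pricing"): "Our plans start at $10/month.",
    ("refund", "cancel"): "You can request a refund within 30 days.",
    ("hello", "hi"): "Hi! How can I help you today?",
}

def reply(question: str) -> str:
    # Tokenize the question and return the first canned answer whose
    # keyword set intersects the question's words.
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, answer in RESPONSES.items():
        if words & set(keywords):
            return answer
    return "Let me connect you to a human agent."
```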
How do malicious actors abuse and weaponize AI?
To orchestrate attacks, cyber criminals often tinker with existing AI systems, instead of developing new AI programs and tools. Some common attacks that exploit Artificial Intelligence include:
Misusing the nature of AI algorithms/ systems: AI capabilities such as efficiency, speed and accuracy can be used to devise precise and undetectable attacks like targeted phishing attacks, delivering fake news, etc.
Input attacks/ adversarial attacks: Attackers can feed altered inputs into AI systems, to trigger unexpected/incorrect results.
Data Poisoning: Malicious actors corrupt AI training data sets by poisoning them with bad data, affecting the system’s accuracy.
Examples of how AI can be weaponized
GPT-2 text generator/ language models
In November 2019, OpenAI released the latest and largest version of GPT-2 (Generative Pretrained Transformer 2). This language model is trained to generate unique textual content based on a given input, and it even tailors the output's style and subject to the input. So, if you input a specific topic or theme, GPT-2 will yield a few lines of text. GPT-2 is exceptional in that it doesn't reproduce pre-existing strings, but generates original content that didn't exist before the model created it.
Drawbacks of GPT-2
The language model is built with 1.5 billion parameters and has a "credibility score" of 6.9 out of 10. The model was trained on 8 million text documents. As a result, OpenAI claims that "GPT-2 outperforms other language models." The text generated by GPT-2 is as good as text composed by a human. Since detecting this synthetic text is challenging, it becomes easier to create spam emails and messages, spread fake news, or perform targeted phishing attacks.
Image recognition software
Image recognition is the process of identifying pixels and patterns to detect objects in digital images. The latest smartphones (for biometric authentication), social networking platforms, Google reverse image search, etc. use facial recognition. AI-based face recognition software detects faces in the camera's field of vision. Given its multiple uses across industries and domains, researchers expect the image recognition software market to reach a whopping USD 39 billion by 2021.
Drawbacks of image recognition software
Major smartphone brands are now using facial recognition instead of fingerprint recognition, in their biometric authentication systems. Since this cutting-edge technology is popular among consumers, cyber criminals have found ways to exploit it.
Tricking facial recognition: It has been demonstrated that Apple’s Face ID can be duped using 3D masks. There are also other instances of deceiving facial recognition with infrared lights, glasses, etc. Identical twins, such as myself, can swap our smartphones to trick even the most efficient algorithms, currently available.
Blocking automated facial recognition: As facial recognition depends on key features of the face, an alteration made to the features can block automated facial recognition. Similarly, researchers are exploring various ways by which automated facial recognition can be blocked.
For example: researchers found that minor modifications to a stop sign confuse autonomous cars. If exploited in real life, such attacks could have severe consequences.
Poisoned training sets
Machine learning algorithms that power Artificial Intelligence, learn from data sets (training sets) or by extracting patterns from data sets.
Drawbacks of Machine Learning algorithms
Attackers can poison training sets with bad data, to alter a system’s accuracy. They can even “teach” the model to behave differently, through a backdoor or otherwise. As a result, the model fails to work in the intended way, and will remain corrupted.
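A toy example of label poisoning, using a deliberately simple 1-D nearest-centroid "malicious score" classifier with entirely hypothetical data: injecting a few mislabeled high-score samples drags the benign centroid toward the malicious region, so an input the clean model flags now slips through.

```python
def centroid_classifier(samples):
    """Fit a 1-D nearest-centroid model; samples is [(value, label)],
    label 1 = malicious, 0 = benign. Returns a predict function."""
    pos = [v for v, y in samples if y == 1]
    neg = [v for v, y in samples if y == 0]
    c1, c0 = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda v: 1 if abs(v - c1) < abs(v - c0) else 0

clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
# The attacker poisons the training set: high-score samples are injected
# with a "benign" label, pulling the benign centroid from 0.2 up to 0.55.
poisoned = clean + [(0.85, 0), (0.9, 0), (0.95, 0)]

predict_clean = centroid_classifier(clean)
predict_poisoned = centroid_classifier(poisoned)
# A 0.65-scoring input: flagged by the clean model, missed after poisoning.
```

Real poisoning attacks play the same trick against far larger models, which is why training data provenance matters as much as model architecture.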
In the most unusual of ways, Microsoft's AI chatbot, Tay, was corrupted by Twitter trolls. The smart chatbot was released on an experimental basis, to engage people in "playful conversations." However, Twitter users deluged the chatbot with racist, misogynistic, and anti-semitic tweets, turning Tay into a mouthpiece for a terrifying ideology in under a day.
AI is here to stay. So, as we build Artificial Intelligence systems that can efficiently detect and respond to cyber threats, we should take small steps to ensure they are not exploited:
Focus on basic cybersecurity hygiene including network security and anti-malware systems.
Ensure there is some human monitoring/ intervention even for the most advanced AI systems.
Teach AI systems to detect foreign data based on timestamps, data quality etc.
To meet the growing needs of customers, banks are increasingly adopting Information Technology (IT) solutions to carry out daily operations, which makes them attractive targets for escalating cyber attacks. To ensure that Indian banks function in a cyber-resilient environment, the Reserve Bank of India (RBI) issues regular guidelines. In one of its recent circulars, in addition to distinguishing cybersecurity from information security, the RBI advises banks to establish mechanisms for:
Continuous surveillance to protect personal data
A focused approach towards cybersecurity
Board/ Top Management to be aware of the bank’s threat quotient
Board/ Top Management to proactively monitor, share, and mitigate threats
The RBI guidelines advocate the following measures to help banks improve their overall security posture:
1. Provision for continuous surveillance
Cyber attacks are not preceded by warnings or timelines. Hence, the RBI recommends that banks set up continuous surveillance to stay abreast of emerging cyber threats.
XVigil helps you anticipate and mitigate threats
XVigil, CloudSEK’s digital risk monitoring platform, offers continuous monitoring across the surface and the dark web. Specifically focusing on: mentions of the bank, its brand, and its infrastructure.
2. Ensure protection of customer data
Financial institutions depend on technology to function smoothly. It also helps them deliver cutting-edge digital products to address their customers’ needs. However, in the process, banks collect customers’ personal and sensitive information.
Banks should take appropriate steps to ensure uncompromised confidentiality, integrity, and availability of this data. Moreover, as custodians of such information, it is incumbent on banks to preserve data, in transit and in storage, within their environment or that of third party vendors. To this end, banks should establish suitable systems and processes, across the data/ information lifecycle.
XVigil detects data leaks
XVigil proactively monitors the web for data leaks. Subsequently, it alerts banks to leaks involving their customers’ information, credit card details, or debit card details. The platform also reports 3rd party data leaks that could affect banks and their customers.
3. Report cybersecurity incidents to RBI
Banks also need to notify the RBI of all unusual cybersecurity activities and incidents, irrespective of the success or failure of the attempts.
XVigil generates reports to notify the RBI
XVigil prepares reports, listing major incidents that may be submitted to the RBI, adhering to compliance standards.
4. Manage inventory of IT assets
Banks need to maintain an up-to-date inventory of assets including their infrastructure and business applications.
XVigil scans your assets every day
XVigil performs daily asset scans, to track all internet-facing assets, including domains, sub-domains, IPs, WebApps, etc.
5. Prevent execution of unauthorized software
Banks should maintain an updated, and preferably centralized, inventory of authorized/ unauthorized software.
XVigil monitors for Shadow IT threats
XVigil runs infrastructure scans every day and alerts banks to any threats. As a result, it keeps Shadow IT threats in check.
6. Secure configuration
Banks must document and apply baseline security requirements/ configurations to all categories of devices.
XVigil detects misconfigured assets
XVigil detects and reports misconfiguration of internet-facing assets, in addition to the Open Web Application Security Project (OWASP) top 10 vulnerabilities.
7. Vendor risk management
Banks are accountable for appropriate management of security risks pertaining to outsourced and partner arrangements.
XVigil detects third-party leaks
XVigil monitors and reports on any third-party sources that leak sensitive information, thus fulfilling the RBI’s requirement to manage vendor risk.
8. Advanced real-time threat defence and management
The RBI advocates for banks to:
Build a robust defence system against the installation, spread, and execution of malicious code, at multiple points in the enterprise
Consider whitelisting of internet websites/ systems
Consider implementing secure web gateways with capabilities to deep-scan network packets, thus securing (HTTPS, etc.) traffic passing through the web/ internet gateway.
XVigil provides real-time alerts
XVigil monitors and provides real-time alerts, on threats that impact banks’ brand or infrastructure, from various sources across the surface web and the dark web. In addition, the platform scans open ports, misconfigured SSLs, leaky S3 buckets, and XSS vulnerabilities.
9. Anti-phishing services
Banks have been advised to subscribe to anti-phishing/ anti-rogue app services from external service providers, since this will help them identify and take down phishing websites/ rogue applications.
XVigil detects and initiates takedowns
XVigil detects phishing/ rogue apps, fake domains, and fake social media accounts. CloudSEK also offers takedowns of such phishing websites/ rogue applications.
10. Data leak prevention strategy
Banks should develop a comprehensive data loss/ leakage prevention strategy to safeguard sensitive, proprietary, and confidential business and customer data.
XVigil monitors data leaks
XVigil scans for data leaks, including third-party leaks, and additionally gives banks timely and actionable threat intelligence.
11. Vulnerability Assessment, Penetration Test, and Red Team Exercises
Banks should conduct periodic vulnerability assessment and pen-testing exercises on all the critical systems, particularly the internet-facing ones.
XVigil runs periodic tests
XVigil runs basic level vulnerability assessments, as well as pen-testing exercises, every day. And subsequently alerts banks to open ports, misconfigured SSLs, leaky S3 buckets, and XSS vulnerabilities.
12. Forensic support
Banks must make arrangements for forensic investigation support, in case they do not have such capabilities in-house.
CloudSEK offers forensic services and support
CloudSEK offers forensic services, together with unlimited support.
13. External Integration
While delivering services to customers, several stakeholders are involved, directly or indirectly, and their expertise is indispensable. Besides, integrating with multiple tools gives organizations a view of the entire security landscape, encouraging better decision-making.
XVigil can be integrated with ease
XVigil can be easily integrated with multiple SIEM, SOAR, and other platforms, giving banks a single view of their entire security landscape.
In the recent past, several security vulnerabilities have been discovered in widely used software products. Since these products are installed on a significant number of internet-connected devices, they entice threat actors to develop botnets, steal sensitive data, and more.
In this article we explore:
Vulnerabilities detected in some popular products.
Target identification and exploitation techniques employed by intrusive threat actors.
Threat actors’ course of action in the event of identifying a flaw in widely used internet products/technology.
Popular Target Vulnerabilities and their Exploitation
Ghostcat: Apache Tomcat Vulnerability
All Apache Tomcat Server versions are vulnerable to Local File Inclusion and potential RCE. The issue resides in the AJP protocol, an optimised version of the HTTP protocol, where a component handled a request attribute improperly. The AJP protocol, enabled by default, listens on TCP port 8009. Multiple scanners, exploit scripts, and honeypots surfaced within days of the original disclosure by Apache.
Stats published by researchers indicate a large number of affected systems, the numbers being much greater than originally predicted.
Citrix ADC and Gateway Vulnerabilities
Recently, Directory Traversal and RCE vulnerabilities in Citrix ADC and Gateway products affected at least 80,000 systems. Shortly after the disclosure, multiple entities (ProjectZeroIndia, TrustedSec) publicly released PoC scripts, engendering a slew of exploit attempts from multiple actors in the wild.
Jira Sensitive Data Exposure
A few months ago, researchers found Jira instances leaking sensitive information such as names, roles, and email IDs of employees. Additionally, internal project details, such as milestones, current projects, owner and subscriber details, etc., were also accessible to anyone making a request to certain unauthenticated JIRA endpoints.
Avinash Jain, from Grofers, tested the vulnerability on multiple targets and discovered a large number of vulnerable Jira instances revealing sensitive data belonging to various companies, such as NASA, Google, and Yahoo, and their employees.
Spring Boot Data Leakage via Actuators
Spring Boot is an open-source Java-based MVC framework that enables developers to quickly set up routes to serve data over HTTP. Most apps using the Spring MVC framework now also use the Boot utility, which helps developers configure which components to add and set up the framework faster.
An added feature of the tool called Actuator, enables developers to monitor and manage their applications/REST API, by storing and serving request dumps, metrics, audit details, and environment settings.
In the event of a misconfiguration, these Actuators could be a back door to the servers, making exposed applications susceptible to breaches. The misconfiguration in Spring Boot Versions 1 to 1.4 granted access to Actuator endpoints without authentication. Although later versions secure these endpoints by default, and allow access only after authentication, developers still tend to ignore the misconfiguration before deploying the application.
The following Actuator endpoints leak sensitive data:
`/dump` – performs and returns a thread dump
`/trace` – returns the dump of HTTP requests received by the app
`/logfile` – returns the app-logged content
`/shutdown` – commands the app to shut down gracefully
`/mappings` – returns a list of all the @RequestMapping paths
`/env` – exposes all of Spring's ConfigurableEnvironment values
`/health` – returns the application's health information
There are other such defective Actuator endpoints, that provide sensitive information to:
Gain system information
Send requests as authenticated users (by leveraging session values obtained from the request dumps)
Execute critical commands, etc.
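A sketch of how one might enumerate and triage these endpoints when assessing one's own application; the target URL and the response signatures below are illustrative assumptions, not an exhaustive detection rule:

```python
# Spring Boot 1.x actuator endpoints that commonly leak data.
SENSITIVE_ENDPOINTS = ["/dump", "/trace", "/logfile", "/mappings", "/env", "/health"]

def probe_urls(base_url: str) -> list[str]:
    """Build the list of candidate actuator URLs for a base URL."""
    return [base_url.rstrip("/") + ep for ep in SENSITIVE_ENDPOINTS]

def is_exposed(status: int, body: str) -> bool:
    """Classify a probe response: HTTP 200 plus actuator-style keys in
    the JSON body, to weed out generic 200 pages."""
    signatures = ("java.lang", "profiles", "systemProperties", "threadName")
    return status == 200 and any(s in body for s in signatures)
```

Feeding each URL from `probe_urls` through an HTTP client and filtering with `is_exposed` gives a quick inventory of unauthenticated actuators to lock down.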
Webmin RCE via backdoored functionality
Webmin is a popular web-based system configuration tool. A zero-day pre-auth RCE vulnerability affects some of its versions, between 1.882 and 1.921, in the remote password change functionality. The Webmin code repository on SourceForge was backdoored with malicious code, granting remote command execution (RCE) capability on affected endpoints.
The attacker sends their commands, piped with the password change parameters, through `password_change.cgi` on a vulnerable host running Webmin. And if the Webmin app is hosted with root privileges, the adversary can execute malicious commands as an administrator.
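The request body has roughly the following shape; the parameter names follow public advisories for this bug (CVE-2019-15107) and should be treated as an assumption, and this sketch only builds the string, for studying the vulnerability in a lab environment:

```python
from urllib.parse import urlencode

def webmin_payload(command: str) -> str:
    """Shape of the backdoored password-change request: the injected
    command is piped onto the 'old password' field that
    password_change.cgi passes to the shell. Lab use only."""
    params = {
        "user": "root",
        "pam": "",
        "expired": "2",
        "old": f"dummy|{command}",  # pipe smuggles the attacker's command
        "new1": "dummy",
        "new2": "dummy",
    }
    return urlencode(params)
```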
Why do threat actors exploit vulnerabilities?
Breach user/company data: Data exfiltration of Sensitive/PII data
Computing power: Infecting systems to mine Cryptocurrency, serve malicious files
Botnets, serving malicious files: Exploits targeted at adding more bots to a larger botnet
Service disruption and eventually Ransom: Locking users out of the devices
Political reasons, cyber war, angry user, etc.
How do adversaries exploit vulnerabilities?
On the disclosure of such vulnerabilities, adversaries probe the internet for technical details and exploit code to launch attacks. RAND Corporation's research and analysis on zero-day vulnerabilities states that, after a vulnerability disclosure, it takes 6 to 37 days, with a median of 22 days, to develop a fully functional exploit. But when an exploit disclosure comes with a patch, developers and administrators immediately patch the vulnerable software. Auto-updates, regular security updates, and large-scale coverage of such disclosures help to contain attacks. However, several systems run unpatched versions of software or applications and become easy targets for such attacks.
Steps involved in vulnerability exploitation
Once a bad actor decides to exploit a vulnerability they have to:
Obtain a working exploit or develop an exploit (in case of a zero-day vulnerability)
Utilize Proof of Concept (PoC) attached to a bug report (in case of a bug disclosure)
Identify as many hosts as possible that are vulnerable to the exploit
Maximise the number of targets to maximise profits.
Even though vendors patch reported vulnerabilities, searching GitHub, or for specific CVEs on ExploitDB, turns up PoC scripts for the issues. Usually a PoC script takes a host/ URL as input and reports whether the exploit succeeded.
Adversaries identify a vulnerable host through their signatures/ behaviour, to generate a list of exploitable hosts. The following components possess signatures that determine whether a host is vulnerable or not:
Default ports
Default paths
Subdomains
Indexed content/ URLs
Many commonly used software packages have specific default installation ports. If a port is not configured, the software installs on a pre-set port, and in most cases software runs on its default port. For example, MySQL uses default port 3306, and Elasticsearch port 9200. So, by curating a list of all servers with port 9200 open, a threat actor can identify systems likely running Elasticsearch. However, port 9200 can be used by other services/ software as well.
Using port scans to discover targets to exploit the Webmin RCE vulnerabilities
Determine that the default port Webmin listens on after installation is port 10000.
Get a working PoC for the Webmin exploit.
Execute a port scan on all hosts connected to the internet for port 10000.
This will lead to a discovery of all possible Webmin installations that could be vulnerable to the exploit.
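The port-scan step above can be sketched with nothing more than the standard library; real internet-wide sweeps use raw-packet tools such as masscan, but the logic is the same TCP connect check fanned out over many hosts:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect scan of a single host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(hosts, port: int, workers: int = 128) -> list:
    """Check one port across many hosts in parallel; returns the hosts
    that accepted the connection."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        flags = list(pool.map(lambda h: port_open(h, port), hosts))
    return [h for h, up in zip(hosts, flags) if up]
```

`sweep(candidate_ips, 10000)` would yield the Webmin candidates; only scan hosts you are authorized to test.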
In addition, tools like Shodan make port-based target discovery effortless. At the same time, if Shodan does not index the target port, attackers leverage tools like Masscan or ZMap to run an internet-wide scan themselves. The latter approach hardly takes a day if the attacker has enough resources.
Similarly, an attacker in search of an easy way to find a list of systems affected by Ghostcat, will port scan all the target IPs and narrow down on machines with port 8009 open.
Software/ services are commonly installed on a distinct default path, so the software can be fingerprinted by observing that signature path. For instance, WordPress installations can be identified if the path 'wp-login.php' is detected on the server. This also makes the service easy to locate from a web browser.
For example, when the phpMyAdmin utility is installed, by default it installs on the path '/phpmyadmin', and a user can access the utility through this path. In this case, a port scan won't help, because the utility doesn't install on a specific port.
Using distinct paths to discover targets to exploit Spring Boot Data Leakage
Gather a list of hosts that run Spring Boot. Since default Spring Boot applications start on port 8080, it helps to begin with a list of hosts that have this port open.
Hit specific endpoints like ‘/trace’ and ‘/env’ on each host and check the responses for sensitive content.
Web path scanners and web fuzzer tools such as Dirsearch or Ffuf facilitate this process.
Though responses may include false positives, actors can use techniques such as signature matching or static rule checks to narrow down the list of vulnerable hosts. Since this method operates on HTTP requests and responses, it is much slower than mass-scale port scans. Shodan can also fetch hosts from its index based on HTTP responses.
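The “static rule check” mentioned above can be as simple as scanning an actuator-style JSON response for sensitive key names. A minimal sketch; the keyword list is illustrative, not exhaustive:

```python
import json
import re

# Illustrative keyword list; real rules would be broader.
SENSITIVE = re.compile(r"password|secret|api[_-]?key|token", re.I)

def looks_leaky(body: str) -> bool:
    """Flag a Spring Boot '/env'-style JSON response that exposes secrets."""
    try:
        data = json.loads(body)
    except ValueError:
        return False  # not JSON: likely a false positive from the path scan
    return bool(SENSITIVE.search(json.dumps(data)))
```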
Software is commonly installed on a dedicated subdomain, since this is an easy, standard, and convenient way to operate it.
For example, Jira is commonly found on a subdomain such as ‘jira.domain.com’ or ‘bug-jira.domain.com’. Even though there are no rules when it comes to subdomains, adversaries can identify certain patterns. Other services usually installed on a subdomain include GitLab, FTP, webmail, Redmine, and Jenkins.
SecurityTrails, Circl.lu, and Rapid7 Open Data maintain passive DNS records that can be queried for such subdomains. Sites such as Crt.sh and Censys complement these: they regularly collect SSL certificate records, which also expose subdomains, and support queries over them.
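Beyond querying passive DNS datasets, attackers can simply guess: generate the obvious service subdomains and check which ones resolve. A sketch using the service list above (function names are our own; `socket.getaddrinfo` performs the DNS lookup):

```python
import socket

# Services commonly hosted on their own subdomain.
COMMON_SERVICES = ["jira", "gitlab", "ftp", "webmail", "redmine", "jenkins"]

def candidate_subdomains(domain, services=COMMON_SERVICES):
    """Build the obvious service subdomains for a target domain."""
    return [f"{svc}.{domain}" for svc in services]

def resolves(hostname: str) -> bool:
    """Check whether a hostname has a DNS record."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```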
The content published by a service is generally unique. Using search engines such as Google to find pages that match particular signatures yields a list of URLs running a particular service. This is one of the most common and easiest techniques for hunting down targets.
This is commonly known as ‘Google Dorking’. For instance, adversaries can quickly curate a short list of all cPanel login pages using the following dork in Google Search: “site:cpanel.*.* intitle:”login” -site:forums.cpanel.net”. The Google Hacking Database contains numerous such dorks, and once the search mechanism is understood, it is easy to write new queries.
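Dorks are just strings assembled from a few search operators, so they are easy to generate programmatically. A toy builder (the function and its parameters are our own illustration):

```python
def build_dork(site=None, intitle=None, exclude_sites=()):
    """Assemble a Google dork from common search operators."""
    parts = []
    if site:
        parts.append(f"site:{site}")
    if intitle:
        parts.append(f'intitle:"{intitle}"')
    parts.extend(f"-site:{s}" for s in exclude_sites)
    return " ".join(parts)
```

Calling `build_dork(site="cpanel.*.*", intitle="login", exclude_sites=["forums.cpanel.net"])` reproduces the cPanel dork above.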
There have been multiple honeypot experiments studying mass-scale exploration and exploitation in the wild. Setting up honeypots is not only a good way to understand attack patterns; it also helps identify malicious actors trying to enumerate targets or exploit vulnerable systems, whose IPs and networks end up on various public blacklists. Research attempts with diverse honeypots have studied the techniques used to gain access: most attempts rely on default credentials and originate mainly from already-blacklisted IP addresses.
Another interesting observation is that most honeypot-detected traffic appears to originate from China. It is also common to see honeypots specific to a zero-day surface on GitHub soon after the release of an exploit. The Citrix ADC vulnerability (CVE-2019-19781), for example, saw a few honeypots published on GitHub shortly after the first exploit PoC was released.
Research carried out by Sophos used honeypots to highlight the high rate of activity against exposed targets. As reported in the research paper, the first attack on an exposed target took anywhere from under a minute to two hours. Therefore, if an accidental misconfiguration leaves a system exposed to the internet even for a short period, it should not be assumed that the system was not exploited.
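At its core, a honeypot is just a listener that records who connects. A minimal TCP sketch (names are our own; real honeypots such as Cowrie also emulate a service to capture credentials and commands):

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Start a minimal TCP honeypot that records connecting peer IPs.

    Returns the bound port and the list that the background thread
    appends peer addresses to.
    """
    log = []
    srv = socket.socket()
    srv.bind((host, port))      # port 0 = pick any free port
    srv.listen(5)

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr[0])  # record the source IP
            conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1], log
```

The recorded IPs are what feed the public blacklists mentioned above.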
Payment gateways, such as Wibmo, CCAvenue, and PayUbiz, facilitate payments on thousands of online portals, and customers implicitly trust them to secure their transactions. But, as reported by a security researcher, a flaw in the logical design of a previous version of the Wibmo payment gateway put its customers at risk: the gateway did not distinguish between transactions initiated within the same time frame.
Payment gateways serve as a channel of communication between merchants and banks to conduct secure transactions. The gateway encrypts the transaction information, which includes the credit/debit card number, CVV, expiry date, etc., and passes it on to the payment processor, which acts as the link between the customer’s bank and the merchant’s bank. Unless the information is incorrect, the gateway confirms the payment, and the processor then settles it with the merchant’s bank.
One Time Passwords for gateways
To secure transactions, 3-D Secure payment gateways add time-based One Time Passwords (OTPs) as an additional layer of authentication. The gateway only accepts an OTP submitted within the permitted time frame; after that, the OTP is no longer valid. Even though this additional layer should secure transactions, a vulnerable gateway reduces its efficacy: a gateway that cannot distinguish between transactions could permit unauthorized ones.
Flaw in the design of Wibmo Payment Gateway
Wibmo fails to distinguish between transactions processed during a single 180-second time frame.
So the OTP generated for one transaction is valid for any other transaction in the same time period, irrespective of the amount or geo-location.
This vulnerability increases the possibility of a man-in-the-middle (MITM) attack, in which the attacker intercepts and forges the request.
And if the OTP remains unused for the first few seconds or minutes, attackers can conduct fraudulent transactions within its validity period.
Explaining the flaw through a scenario
A user initiates a legitimate transaction for Re.1.
They receive an OTP, on their registered mobile number, which is valid for 180 seconds.
Before the user applies the OTP to that transaction, an attacker intercepts it and uses it to process a transaction for Rs.1000. Irrespective of the attacker’s location and transaction amount, the fraudulent transaction is considered legitimate, and the attacker successfully receives the amount.
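The logical flaw can be illustrated with a toy validator (the names and structure are our own sketch, not Wibmo’s code). The flawed check keys the OTP only to the user and a time window; the fix binds it to a specific transaction and consumes it on first use:

```python
import time

class FlawedOtpStore:
    """OTP keyed only to the user: any transaction in the window passes."""
    def __init__(self, ttl=180):
        self.ttl = ttl
        self._otps = {}  # user -> (otp, issued_at)

    def issue(self, user, otp):
        self._otps[user] = (otp, time.time())

    def validate(self, user, otp, txn_id):
        rec = self._otps.get(user)  # txn_id is ignored -- the flaw
        return bool(rec) and rec[0] == otp and time.time() - rec[1] <= self.ttl

class FixedOtpStore:
    """OTP bound to one transaction and consumed on first use."""
    def __init__(self, ttl=180):
        self.ttl = ttl
        self._otps = {}  # (user, txn_id) -> (otp, issued_at)

    def issue(self, user, txn_id, otp):
        self._otps[(user, txn_id)] = (otp, time.time())

    def validate(self, user, txn_id, otp):
        rec = self._otps.get((user, txn_id))
        if rec and rec[0] == otp and time.time() - rec[1] <= self.ttl:
            del self._otps[(user, txn_id)]  # one-time use
            return True
        return False
```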
Verification of the Wibmo Payment Gateway flaw
CloudSEK’s research team tested Wibmo against various banking systems to confirm the flaw. We found that the same OTP is valid for 180 seconds or more for any transaction, provided it has not already been used. The screenshots below demonstrate this:
With the increasing number of online transactions, flaws such as Wibmo’s leave users vulnerable to threat actors. Apart from financial losses, such flaws could damage the reputation of the payment gateway and of the online portals using it.
Note: Wibmo was made aware of this flaw on 3 August 2019. The security team at Wibmo closed the issue and marked it as known functionality on 12 August 2019, and the flaw was publicly disclosed on 25 August 2019. Wibmo recommends that portals using its payment gateway fix the vulnerability to avoid security incidents.