Generate a hosts file on a cPanel server quickly

So here’s how to quickly generate a hosts file on a cPanel server, or rather how to print out a bunch of lines of text that can be appended to someone’s local “hosts file”:

# set www="no" if you don't want a www entry printed for each domain.
www="yes"
cd /var/cpanel/users 2>/dev/null && for userfile in *; do
	echo "#$userfile"
	ip=$(awk -F= '$1 == "IP" {print $2}' "$userfile")
	domains=$(egrep '^DNS([0-9]+)?=' "$userfile" | awk -F= '{print $2}')
	for domain in $domains; do
		echo "$ip $domain"
		if [[ "$www" = "yes" ]]; then
			echo "$ip www.$domain"
		fi
	done
done || { echo "Error: Is this a cPanel server?" >&2; exit 1; }

Quick one-liner you can paste directly on a server over ssh:

www="yes"; cd /var/cpanel/users 2>/dev/null && for userfile in *; do echo "#$userfile"; ip=$(awk -F= '$1 == "IP" {print $2}' "$userfile"); domains=$(egrep '^DNS([0-9]+)?=' "$userfile" | awk -F= '{print $2}'); for domain in $domains; do echo "$ip $domain"; if [[ "$www" = "yes" ]]; then echo "$ip www.$domain"; fi; done; done || { echo "Error: Is this a cPanel server?" >&2; exit 1; }
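If you want to sanity-check the parsing logic without a real cPanel box, you can point the same loop at a mock user file. The `IP=`/`DNS=` field names below follow the `/var/cpanel/users` format; the account name, IP, and domains are made up:

```shell
# Create a mock cPanel user file (data is invented for illustration).
mkdir -p /tmp/mock-users
cat > /tmp/mock-users/bob <<'EOF'
IP=203.0.113.10
DNS=example.com
DNS1=example.org
EOF

# Same parsing logic as the one-liner, run against the mock directory.
cd /tmp/mock-users && for userfile in *; do
	echo "#$userfile"
	ip=$(awk -F= '$1 == "IP" {print $2}' "$userfile")
	domains=$(grep -E '^DNS([0-9]+)?=' "$userfile" | awk -F= '{print $2}')
	for domain in $domains; do
		echo "$ip $domain"
		echo "$ip www.$domain"
	done
done
```

You should see one `#bob` comment line followed by an `ip domain` pair (and its www variant) for each DNS entry in the file.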

Quick and dirty. Mostly useful if you are a sysadmin for a hosting company that uses cPanel, or you use cPanel and need to generate a “hosts file” quickly for testing a server migration.

I wonder if there is a better way to do this using cPanel’s API? I’ve never bothered looking into it. Might be a fun project for someone out there.

Cloudflare + DDoS

Quick post here. If you’re using something like Cloudflare for the sole purpose of mitigating a distributed denial of service attack (DDoS), please be smart with your DNS records.

I have a lot of respect for Cloudflare as a service. They use a combination of anycast, cache header injection, and their own Content Delivery Network (CDN) for static files (JS, CSS, images, etc.).

If you are using this service as a frontend to mitigate a DDoS, please mind your DNS records. Putting your site behind Cloudflare and changing your actual origin server IP can very easily mitigate a DDoS and keep your site online. However, it does no good if someone knows your “real IP” / “origin IP”.

How would the attacker be able to figure this out? Well, if you want to stop a hacker, start thinking like one. The first thing I would do is query common subdomain records for your domain.

# example.com stands in for the target domain
for sub in mail cpanel dev staging real direct web1 web2 web3 db1 db2 db3; do
  echo "$sub: $(dig +short "$sub.example.com")"
done

Oops! You may have forgotten to remove a DNS record that exposes the “real IP” / “origin IP”. So keep this in mind. Delete those records if you don’t use them. If you do need an A record that points directly to the server, consider handling this at the server level, or at least creating a less “guessable” subdomain that points to the real IP.

Security via obscurity is never the right solution, but it’s better than nothing sometimes.

Control Panels, Cross Site Request Forgery, and Case 74889

The rise of web hosting control panels has changed the landscape of the web hosting industry dramatically. They reduce the barrier to entry for server administration by automating configuration and management tasks within a web-based GUI. Before this, server administrators had to configure their systems by hand, or through a suite of their own custom shell scripts and such.

While this has allowed more people to enter the business and become resellers and administrators, it has its drawbacks. Some would argue that this has been damaging to the industry, as it’s ushered in a wave of administrators who aren’t qualified or knowledgeable enough to properly manage a server. In the right hands though, these panels can actually be a big help to both camps.

That argument aside, something has never felt right to me about a web interface that you log into as root to run commands. It’s essentially a rootkit with a pretty front-end. That said, there are protections in place to prevent exploitation, and cPanel in particular has a great security track record; their security team, in my experience, was responsive and a pleasure to work with.

One problem that I don’t think gets enough attention with control panels like cPanel, Plesk, Webmin, DirectAdmin, and others is the possibility of self-induced Cross Site Request Forgery (CSRF), for lack of a better term.

Consider this hypothetical attack:

1) Compromise one or more vulnerable sites on the server (perhaps an outdated Joomla or WordPress site), and inject code like the following:


$ref = $_SERVER['HTTP_REFERER']; // the referer leaks the panel URL and its session token
if( preg_match('/https?:\/\/.*\/sess_token[0-9]{10}/', $ref, $matches) ) {
        echo "<script>window.open('$url','_blank');</script>"; // $url: a forged panel request built from $matches[0]
}

2) Wait for a systems administrator, logged into the panel as root, to click the URL link for the hacked domain from within the control panel. The above would have the sysadmin unknowingly create an account with full privileges. It doesn’t take much creativity to do far worse: adding a public ssh key through the panel, changing the root password, or anything else they are allowed to do.

The CSRF session token, designed to prevent this type of attack, will be useless because the administrator just provided it to the hacked site via the referer (if their browser is configured to pass referers, which is the default in most browsers). The method above creates a new tab in the victim administrator’s browser to make the malicious control panel request, but you could easily make this action less transparent. Note also that my regular expression matches an optional https, but I believe this attack will only work if the administrator is using non-secure access to the panel. * – See #2 below.
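As a quick sanity check of what that regular expression does and doesn’t match, you can run a couple of hypothetical referer URLs through the same pattern with grep (the URLs and the `sess_token` path segment are made up for illustration):

```shell
# The same pattern as the injected PHP check, in ERE form.
pattern='https?://.*/sess_token[0-9]{10}'

# A referer carrying a 10-digit session token trips the check...
echo 'http://panel.example.com:2086/sess_token0123456789/ssl/manage' | grep -Eq "$pattern" && echo leaked

# ...while a token-free referer does not.
echo 'http://panel.example.com:2086/login' | grep -Eq "$pattern" || echo safe
```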

I submitted a very similar, working POC to cPanel’s security team, and they corrected the issue (Case 74889) within a few weeks. I am willing to bet, however, that it’s a problem in other panels like Plesk, and perhaps in other parts of cPanel that provide URL links to sites on the server. In this case, the POC was very similar to the code above, and the vulnerability was in the URLs in the “Manage SSL Hosts” section of the panel. They corrected it by cleansing the requests before redirecting you to the destination domain, so as to remove the referer.

The reason I call it a self-induced CSRF is because it’s not exactly a plain-vanilla CSRF attack. Typically you would post a malicious link on a remote forum or chat, or email it to the victim, hoping they already have an active session at some other site. For example:

<a href="http://192.168.1.1/admin/reboot">Funny Cat Gifs LOL</a>
(the href here is a hypothetical router admin URL)

The above could be easily mitigated if the router’s admin panel used a randomized CSRF token in the URL.

However, in the case of the control panel attack, it bypasses any CSRF protections because the malicious link is clicked from the same origin being exploited, and the token is provided to us free of charge. If there’s a correct name for this type of attack, I don’t know it.

The moral of the story is:

1) Server Admin Control Panels will never really be secure.
2) * – Always use https if you have to use them. Browsers will not pass a referer when the scheme drops from https to http, so a panel served over https won’t leak its URL to a plain-http destination site. (Referers are generally still sent on https-to-https navigations, so don’t rely on this alone.)
3) Just don’t click remote URLs from within a control panel if you don’t have to. Copy/paste the URL into a fresh tab instead; no referer is sent that way. (Middle-clicking a link is still a normal link navigation, and the referer does get sent.)

Advanced Troubleshooting with Strace

Sometimes a site is performing erratically, or loading slowly and it’s not evident what the problem is. When you’ve run out of standard troubleshooting methods, it might be time to go deeper.

We need to go deeper.
One way to do that is with a tool called strace. Strace allows you to track the system calls to the kernel in real time.

You can pass it a process id, or run it in front of a command.

Quick example:

Let’s use the -e trace option to tell strace what type of system call we’re interested in. We want to see what files it’s opening. We have a suspicion that running the host command will attempt to check our /etc/resolv.conf before querying the internet for an A record, so let’s verify that.

$ strace -e trace=open host example.com 2>&1 | grep resolv.conf
open("/etc/resolv.conf", O_RDONLY|O_LARGEFILE) = 6

As we expected, it does make an attempt to open that file.

Note that I redirected STDERR to STDOUT so I could grep the output, because strace writes its output to STDERR. I won’t go into much more detail about strace for now, but you get the idea.
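That 2>&1 trick is worth keeping in your pocket for any tool that writes to STDERR; here’s a tiny self-contained illustration using ls (nothing strace-specific about it):

```shell
# ls writes its error message to STDERR; without 2>&1, grep would see nothing.
ls /nonexistent-dir 2>&1 | grep -c "No such file"
```

With the redirection in place, grep counts the one matching error line; drop the 2>&1 and the count is 0 while the error still prints to your terminal.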

Now back to our hypothetical slow or erratic website issue. The first step to troubleshooting an issue is duplicating the problem. The second step is making it repeatable. The third step is isolating the problem so you can pick it apart and examine it. When dealing with a busy webserver, the trouble with that last step is that you don’t know which apache PID is serving your request, so you have nothing to isolate.

There are some hacky workarounds for isolating the apache process id that’s serving your HTTP requests. You can telnet to the server from the server, and find the pid via lsof, or netstat:

$ telnet localhost 80
GET / HTTP/1.1
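If the server uses name-based virtual hosts, the request also needs a Host header; the full thing typed into the telnet session would look like this (example.com is a placeholder), with a trailing blank line to actually send it:

```
GET / HTTP/1.1
Host: example.com

```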

Then open another screen on the server, and find your telnet pid with netstat:

$ netstat -tapn

tcp6       0      0 ::ffff:127.0.0.1:80     ::ffff:127.0.0.1:52144  ESTABLISHED 20008/apache2
tcp        0      0 127.0.0.1:52144         127.0.0.1:80            ESTABLISHED 23955/telnet
(the loopback addresses and the ephemeral port 52144 here are illustrative)

From this we know that process id 20008 is serving my telnet request, because the source and destination ports of the two sockets line up. Then you can strace that PID, and quickly give the HTTP request in your telnet session its final carriage return to send the request. But this is clunky, has race condition issues, and frankly it’s hard to get right.

But there is a better way. You can launch another instance of apache on different ports, say 81 and 444 (for https). Set the MaxClients value to 1 so only one request is served at a time, then add an iptables rule to only allow your remote IP to access those destination ports.
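The iptables rule can be as simple as the following; 198.51.100.7 is a placeholder for your own workstation’s IP, and this is a sketch rather than a hardened firewall policy:

```shell
# Drop traffic to the debug ports 81,444 from everyone except our own IP (placeholder).
iptables -I INPUT -p tcp -m multiport --dports 81,444 ! -s 198.51.100.7 -j DROP
```

Remember to delete the rule (iptables -D INPUT 1, assuming it's still the first rule) when you’re done.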

Here’s an example of how you can do this on a cPanel server. Keep in mind, you may not need to copy everything like I did, but I just wanted to make sure I had an exact replica running on the alternate ports. You might want to exclude large log files and such if your apache directory is large.

Clone the apache directory in full (binaries, conf, everything)

cp -r /usr/local/apache /usr/local/apache-tmp

Change ports for http and https so we can run ours without affecting the regular apache
$ cd /usr/local/apache-tmp
$ find . -type f -exec sed -i 's/:80/:81/g' {} \;
$ find . -type f -exec sed -i 's/:443/:444/g' {} \;

Only allow one maxclient, so we can find the apache pid serving us when we hit the site
$ find . -type f -exec sed -i 's/MaxClients.*/MaxClients\ 1/g' {} \;

Modify all absolute path references to the normal apache dir to our cloned one
$ find . -type f -exec sed -i 's|/usr/local/apache|/usr/local/apache-tmp|g' {} \;
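Before running those find/sed passes over the real clone, it’s worth sanity-checking the substitutions on a scratch file. The httpd.conf lines below are a made-up minimal sample containing just the directives the passes rewrite:

```shell
# Mock httpd.conf with the directives our sed passes touch.
mkdir -p /tmp/sed-check && cd /tmp/sed-check
cat > httpd.conf <<'EOF'
Listen 0.0.0.0:80
Listen 0.0.0.0:443
MaxClients 150
ServerRoot "/usr/local/apache"
EOF

# Same substitutions as the find/sed commands above, applied to one file.
sed -i 's/:80/:81/g' httpd.conf
sed -i 's/:443/:444/g' httpd.conf
sed -i 's/MaxClients.*/MaxClients 1/g' httpd.conf
sed -i 's|/usr/local/apache|/usr/local/apache-tmp|g' httpd.conf
cat httpd.conf
```

The ports, MaxClients, and ServerRoot should all come out rewritten; if anything unexpected matched (say, a :80 buried inside another string), you’ll see it here before it bites you on the live clone.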

Now we can start our cloned apache on alternate ports 81,444 with just one maxclient allowed. You should then be able to access every site on the server via the alternate ports.
$ /usr/local/apache-tmp/bin/httpd -d /usr/local/apache-tmp -f /usr/local/apache-tmp/conf/httpd.conf

That launches the root httpd process with one child PID, as expected.
Now find the CHILD pid:
$ ps auxf | grep apache-tmp

Now to attach strace to the one and only apache process.
$ strace -p PID_HERE -f -s 2048

The -f option tells strace to follow child processes.
The -s option specifies how many bytes of each call to capture. 2048 might be overkill, so feel free to adjust this.

Then make the http request:

$ curl http://example.com:81/   # substitute the domain you're testing

This is definitely a drastic troubleshooting method, but it’s great for those times when you hit a brick wall diagnosing a slow-loading or erratically behaving site and feel compelled to find the issue.

Note: cPanel changes directory structure with updates from time to time. This was done a few months ago on a cPanel 11.40 build, I believe. YMMV; use this tactic with caution.