SaltStack is pretty awesome. If you don't know what it is and are looking for a configuration management tool, I highly recommend it.
All of the major configuration management tools (Puppet, Chef, Ansible, Salt) are mostly the same, but the SaltStack community is great. The IRC channel on Freenode is an excellent resource, and their GitHub issue tracker is actively monitored if you submit a bug report (they'll even respond to questions, although the issue tracker isn't really the place for those).
Anyway, a project I'm working on has become complex enough that I really need to start writing functional tests. Salt has some built-in testing classes that you can use and extend, but to my knowledge they're mostly for unit and integration testing. I'm mostly interested in simple functional testing right now, i.e.: after I run my "highstate", which does a ton of complicated things to a cluster of 10 servers, each with unique roles, does the website respond correctly?
That is to say, I need to go above and beyond the result output and return status of my highstate runs and orchestrations, and actually test whether I've achieved the state I desire on the remote minions.
I don't need fancy testing or continuous integration right now; what I need are just some simple functional tests. When I started thinking about this tonight, I realized I would definitely need access to Salt's internals to get pillars, grains, minions, etc. After researching for only 10 minutes (which is pretty typical for me before I just start playing), I gave up and started writing code. The best option seemed to be writing my own salt runner.
My initial goal was just to write a simple test of the HTTP response for each website on the resulting cluster. Here is the salt-runner code, [shell]/srv/salt/runners/test_websites.py[/shell]:
[python]
# Import Salt and third-party modules
import salt.client

import requests


def responding(tgt, outputter=None):
    '''
    Check that every website in the target minion's pillar data
    responds through the load balancer.
    '''
    # Use the master's LocalClient to pull pillar data from the target minion
    local = salt.client.LocalClient()
    pillar = local.cmd(tgt, 'pillar.items')
    websites = pillar[tgt]['websites']
    lb_ip = pillar[tgt]['lb']

    results = {}
    for website in websites:
        if website != 'default':
            # Hit the load balancer directly, selecting the site via the
            # Host header (verify=False skips TLS certificate validation)
            headers = {
                'Host': website,
                'User-Agent': 'Salt Testing Agent'
            }
            r = requests.get('https://{0}'.format(lb_ip),
                             headers=headers, verify=False)
            results[website] = {'status_code': r.status_code}
    return {'outputter': outputter, 'data': results}
[/python]
This goes in a directory listed in [shell]runner_dirs[/shell] (for me that's /srv/salt/runners), which is defined on your salt-master in [shell]/etc/salt/master[/shell] (or wherever your Salt master configuration file is).
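If you haven't set that up yet, it's a one-line addition to the master config. A minimal sketch (the [shell]runner_dirs[/shell] option takes a list of paths):
[shell]
# /etc/salt/master
runner_dirs:
  - /srv/salt/runners
[/shell]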
Example usage and output:
[shell]
# salt-run test_websites.responding web1.foo.fqdn.com
somesite1.com:
    ----------
    status_code:
        200
somesite2.com:
    ----------
    status_code:
        200
somesite3.com:
    ----------
    status_code:
        200
[/shell]
The script gets the pillar data from the 'tgt' specified on the command line, then makes an HTTPS request to the load balancer IP, passing in the Host header for each site that should be responding.
This is specific to my case, but you can see how it's a quick way for me to do "poor man's functional testing" and ensure that specific aspects of the resulting state are actually being achieved.
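If you wanted to push this slightly further, you could have the runner compare each response against an expected status code and flag failures explicitly. A rough sketch along those lines (the [shell]check[/shell] function, its [shell]expected[/shell] parameter, and the 'passed' field are my own hypothetical additions, not part of the runner above):
[python]
import salt.client

import requests


def check(tgt, expected=200, outputter=None):
    '''
    Like responding(), but compare each site's status code against
    an expected value and report a pass/fail flag per site.
    '''
    local = salt.client.LocalClient()
    pillar = local.cmd(tgt, 'pillar.items')
    lb_ip = pillar[tgt]['lb']

    results = {}
    for website in pillar[tgt]['websites']:
        if website == 'default':
            continue
        r = requests.get('https://{0}'.format(lb_ip),
                         headers={'Host': website}, verify=False)
        results[website] = {
            'status_code': r.status_code,
            'passed': r.status_code == expected
        }
    return {'outputter': outputter, 'data': results}
[/python]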
I know this isn’t real functional testing in the true sense of the term. I’m doing this for now as a way to do regression testing against future bugs.
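In practice that just means running the checks right after a highstate, something like this (the target pattern here is illustrative):
[shell]
# apply the highstate, then verify the result
salt 'web*' state.highstate
salt-run test_websites.responding web1.foo.fqdn.com
[/shell]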
I hope that helps someone out there. If anyone has a better way to do this kind of thing I would love to hear it.