kitchen-gce 0.2.0 is out

From the changelog:

* #10: Deprecate "area" in configuration for "region"
* #11: Fix name length, via @pdunnavant
* #12: Generate instance names that are valid for GCE

The biggest change here is to move away from the concept of “areas” (something I invented myself) to Google’s “regions”. So, instead of putting this in your .kitchen.yml:

area: europe

you should now start putting this:

region: europe-west1

However, backwards compatibility with the old “area” key will be maintained, at least until a 1.0 release.
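
For context, the driver section of a .kitchen.yml using the new key ends up looking something like this – a minimal sketch assuming Test Kitchen 1.x-style driver configuration, with the project and credential settings you'll also need omitted:

driver:
  name: gce
  region: europe-west1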

kitchen-gce 0.1.2 Released

A very minor release of kitchen-gce this morning, with two changes:

If you just want to use the asia-east1 region, an upgrade from last month’s 0.1.0 is not required – this release only adds documentation changes around the new region, no code changes were required.

As always, bug reports, feature requests and pull requests are welcome on GitHub.

kitchen-gce 0.1.0

I’ve just released version 0.1.0 of kitchen-gce to RubyGems.org. Kitchen-gce is a Google Compute Engine driver for Test-Kitchen; big changes in this release are:

As always, feedback and pull requests are welcome!

Kitchen-gce 0.0.6 Released

Just a quick post to announce: I’ve released kitchen-gce 0.0.6, the latest version of my Google Compute Engine driver for Test Kitchen. This release adds support for specifying the test instance’s network in GCE, and for tagging instances.

Kitchen-gce currently has a dependency on Fog 1.19.0 (only); the next release will support Fog 1.20.0 (and greater), but will likely require changes to .kitchen.yml. More details in the GitHub issue.
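
If you manage your test dependencies with Bundler, pinning Fog to the supported release alongside kitchen-gce is one way to avoid surprises in the meantime; a minimal Gemfile sketch:

group :integration do
  gem 'test-kitchen'
  gem 'kitchen-gce', '0.0.6'
  gem 'fog', '1.19.0'
end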

Introducing kitchen-gce – a Test Kitchen Driver for Google Compute Engine

Test Kitchen is an integration test framework for, among other things, Chef development. To create its testing environment, it builds on virtualization tools and cloud providers such as Docker, EC2, Rackspace and Vagrant. My own contribution – currently in 0.0.1 release, but undergoing active development – is kitchen-gce, a driver for Google Compute Engine (GCE).

Compared to other cloud providers, GCE has a couple of advantages for use with Test Kitchen, namely:

  • (Subjectively) faster instance launch times than other providers; and
  • Sub-hour billing.

One brief usage note:

Kitchen-gce uses Fog to interface with GCE. Fog’s GCE provider code has been under active development, and until the next releases of Fog and kitchen-gce come out (greater than 1.18.0 and 0.0.1, respectively), I recommend using the development versions of each. For instance, in a Gemfile:

group :integration do
  gem 'test-kitchen'
  gem 'kitchen-gce', :git => 'https://github.com/anl/kitchen-gce.git'
  gem 'fog', :git => 'https://github.com/fog/fog.git', :ref => 'c5e6e2ae868b7fdec8e4fd8ef729fcab3b199b15'
end

If you use Test Kitchen, please check out kitchen-gce. Feedback, issues and pull requests welcome.

Acknowledgements: Google has given me a free trial plan for GCE, which has, as of this writing, covered about $0.18 of compute time used in developing kitchen-gce. I spent about $0.55 of my own money before that – sub-hour billing really does make economic sense…

Puppet Module Development with Vagrant

By way of background, I recently switched employers and, with that change, moved from Puppet to Chef at work. Along the way, I’ve become a big fan of a Vagrant-driven development workflow. However, on my own time, I still hack around with Puppet – and find myself wanting a similar development process when writing modules as opposed to cookbooks.

As far as my searching could turn up, most of the documentation on the web about using Puppet and Vagrant together covers using the Vagrant Puppet provisioner to set up an application development environment, not using Vagrant to actively test Puppet modules during their development. This was momentarily frustrating, but the configuration I came up with is pretty simple. My solution? Use a slightly non-standard module directory layout, add an additional “vagrant.pp” file, and include this in your Vagrantfile:

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'vagrant'
    puppet.manifest_file = 'vagrant.pp'
    puppet.module_path = '../'
  end
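
For context, a complete Vagrantfile for a module tested this way can stay very small; the sketch below assumes Vagrant’s v2 configuration format, and the box name is just an example:

# Minimal Vagrantfile sketch for testing a single Puppet module.
Vagrant.configure('2') do |config|
  config.vm.box = 'precise64'   # example box name

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'vagrant'    # directory holding vagrant.pp
    puppet.manifest_file = 'vagrant.pp'  # entry-point manifest (see below)
    puppet.module_path = '../'           # parent dir, so Puppet finds the module by name
  end
end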

The Puppet module directory layout looks like this:

module/
├── Gemfile
├── Gemfile.lock
├── Modulefile
├── README
├── Vagrantfile
├── manifests
│   └── init.pp
├── spec
│   └── spec_helper.rb
├── tests
│   └── init.pp
└── vagrant
    └── vagrant.pp

In particular, in addition to the Vagrantfile, note the “vagrant” directory and the “vagrant.pp” file contained within it; said “vagrant.pp” file is short and sweet:

# Include this module
include <module>

(Where “<module>” is replaced with the actual module name.)

Finally, note that your development directory name must match the class name – otherwise the Puppet provisioner will be unable to find the included class. So, if you namespace your Puppet modules on GitHub with a leading “puppet-” or something similar (e.g. “puppet-ntp” for an “ntp” module), you’ll need to rename your development directory accordingly.

This layout does conflict with the Puppet module layout specification (by adding additional files/directories), although I haven’t yet noticed any ill effects. I expect – and hope – that test-kitchen for Puppet will render this blog post obsolete in the near future.

Edit 9/28/2013, 10/1/2013: If you are running a somewhat dated Vagrant box, you may find that “package” resources fail, ultimately with an error similar to the following:

E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

If updating your box isn’t an immediate option, you can work around this on Ubuntu by adding the following to your Vagrantfile before the Puppet provisioner block:

config.vm.provision :shell, :inline => '/usr/bin/apt-get update'

To avoid running this every time, you can wrap it in a test for an environment variable:

unless ENV['NO_VAGRANT_APTGET']
  config.vm.provision :shell, :inline => '/usr/bin/apt-get update'
end

and then run “vagrant provision” as follows:

NO_VAGRANT_APTGET=1 vagrant provision

fio vs. ZFS compression

Summary: When testing ZFS read performance with fio, compression settings on the file system may cause you to test cache performance instead of physical disk performance.

Background: Testing was done on a FreeBSD 8.3-STABLE system, with an eleven-disk, non-root zpool:

$ zpool status
  pool: tank01
 state: ONLINE
  scan: scrub repaired 0 in 0h11m with 0 errors on Fri Jun 14 17:20:12 2013
config:

	NAME        STATE     READ WRITE CKSUM
	tank01      ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	  mirror-2  ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	    da6     ONLINE       0     0     0
	  mirror-3  ONLINE       0     0     0
	    da7     ONLINE       0     0     0
	    da8     ONLINE       0     0     0
	  mirror-4  ONLINE       0     0     0
	    da9     ONLINE       0     0     0
	    da10    ONLINE       0     0     0
	logs
	  da11      ONLINE       0     0     0

errors: No known data errors

ZFS and zpool versions were 5 and 28, respectively. Recordsize was 128K. Compression was set to “on”.

Data disks were Toshiba model MK1001TRKB; the separate log was an STEC ZeusRAM C018. Drives were connected to a single LSI SAS2008 HBA.

The system had a single 2.13GHz Intel Xeon E5606 processor, and 12GB of RAM; via vfs.zfs.arc_max in /boot/loader.conf, ARC size was limited to 6GB.
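
For reference, that limit amounts to a single line in /boot/loader.conf (value in bytes; 6GB shown here):

vfs.zfs.arc_max="6442450944"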

Tests run:

For $test types of “read” and “randread”, fio was run as follows, five times for each test:

fio --directory=. \
  --name=$filename \
  --rw=$test \
  --bs=128k \
  --size=36G \
  --numjobs=1 \
  --time_based \
  --runtime=60 \
  --group_reporting

System activity was monitored during each run using a combination of sysctl, iostat, vmstat and top. Fio “IO file” size was measured using “du -h”.
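
In practice the monitoring commands were along the lines of the sketch below; the intervals and the particular arcstats counters shown are illustrative rather than an exact record of what was run:

# ZFS ARC activity, including the MRU/MFU hit breakdown
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses \
    kstat.zfs.misc.arcstats.mru_hits kstat.zfs.misc.arcstats.mfu_hits

# Disk, memory and CPU activity during each run
iostat -x 5
vmstat 5
top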

IO files were then written to using the following command:

fio --directory=. \
  --name=$filename \
  --rw=write \
  --bs=128k \
  --size=36G \
  --numjobs=1 \
  --group_reporting

After writing to the IO files, tests were re-run, again five times per test.

Results:

Fio appears to create new IO files in a highly-compressible format. With compression on, files that have not been written to occupy only 512 bytes; without compression, they occupy the full size specified on the fio command line (36GB in this case). Once written to on a file system with compression turned on, the files grew to about 14GB (i.e. a compression ratio slightly better than 2:1).

Performance was substantially better on files that had not been written to than on those that had; run-to-run variation, as measured by the standard deviation, was larger for the written files:

Test       mean MB/s    stdev MB/s   mean IOPS    stdev IOPS   mean MB/s   stdev MB/s   mean IOPS   stdev IOPS
           (unwritten)  (unwritten)  (unwritten)  (unwritten)  (written)   (written)    (written)   (written)
read       2458.4       11.7         19666        93           897.4       33.4         7178        267
randread   2337.8       2.3          18703        20           40.6        14.2         324         114

The vmstat, iostat and top values suggest that benchmark performance was bounded by the CPU for the unwritten files, and by the zpool disks for the written files.

sysctl counters and iostat indicated that effectively no reads of the unwritten files were served from disk; they came instead from the (prefetch) cache. The written files did exercise the disks, but when their data was served from cache, it came predominantly from the ARC.

In aggregate, the randread results against the written files show far greater variation as a proportion of the achieved performance; looking at the individual runs, an interesting pattern emerges:

Test number   MB/s   IOPS   ARC cache hit ratio   MRU hits   MFU hits
1             20.1   160    0.15                  856        646
2             36.2   289    0.51                  6060       2790
3             40.4   323    0.56                  5504       5289
4             47.8   382    0.63                  1348       12992
5             58.3   466    0.69                  1761       17633

Specifically, performance got better with each run, apparently as a result of ARC caching. Initially, cache hits seem to be drawn from the MRU, but by the fourth and fifth tests, the MFU is more heavily used. The ARC caches uncompressed data, but even at 36GB of file data with a 6GB ARC, it is reasonable that some proportion of “random” read data will be served from the ARC. The possible implication that the ARC is successfully able to adapt to fio’s random read workload would be interesting to look at in greater depth.

Conclusion: Although this is a limited set of data, two conclusions can reasonably be drawn from it:

  • The combination of cache and compression in ZFS can have impressive performance benefits; and
  • ARC and prefetch cache are each relevant in different performance domains.

Acknowledgements: I am indebted to my former employer for allowing me free usage of the system tested in this blog post.

Filling in the Missing Parts of NetApp’s API

Late last year, NetApp released long-overdue Python and Ruby support in their SDK, officially known as the NetApp Manageability SDK. The SDK download is – oddly and unfortunately – still buried behind a registration wall: you have to submit a web form about how you plan to use it to get access to the download, but otherwise it’s available to all.

But perhaps there’s good reason for hiding the download away: There are still large gaps in the API. For instance, say you want to change the security mode of a qtree? You’re out of luck. (Makes one wonder how NetApp implements this functionality in OnCommand System Manager – are they eating their own dogfood?)

That said, if you’re willing to venture off the beaten (and supported) path, you can use the undocumented system-cli API call, which passes arbitrary Data ONTAP CLI commands through the API. I’m using it in a Python wrapper I’m working on that makes the SDK feel a little bit less like handling thinly-varnished XML.
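
The wrapper itself is still taking shape, but the general shape of a system-cli call through the SDK’s Python bindings looks roughly like the sketch below. Treat it as illustrative only: the hostname, credentials, ONTAPI version and qtree path are placeholders, and the NaServer/NaElement calls reflect my reading of the SDK rather than anything officially documented for system-cli.

# Sketch only: change a qtree's security style via the undocumented system-cli call.
# Hostname, credentials, ONTAPI version and qtree path below are placeholders.
from NaServer import NaServer
from NaElement import NaElement

filer = NaServer('filer01.example.com', 1, 15)
filer.set_style('LOGIN')
filer.set_admin_user('root', 'secret')
filer.set_transport_type('HTTPS')

# system-cli takes an <args> element containing one <arg> per CLI word
cli = NaElement('system-cli')
args = NaElement('args')
for word in ['qtree', 'security', '/vol/vol0/example', 'unix']:
    args.child_add_string('arg', word)
cli.child_add(args)

result = filer.invoke_elem(cli)
if result.results_status() == 'failed':
    raise RuntimeError(result.results_reason())
print(result.child_get_string('cli-output'))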

Git pre-commit hook for DNS zone data

If you’re storing your DNS configuration in Git, a pre-commit hook that automatically runs named-checkzone before zone file changes are committed may be useful to you. The pre-commit hook I use assumes that zone files (and only zone files) are named in the format db.<zonename> (e.g. “db.andyleonard.com”), and it only tests zone files (i.e. named-checkconf is not run against configuration files).

This pre-commit hook’s structure is based heavily on a Puppet 2.7 pre-commit published elsewhere.
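
In outline, the hook looks something like the sketch below – a simplified version rather than the exact script I use, with the staged-file detection and zone-name extraction shown only as one reasonable approach:

#!/bin/sh
# pre-commit sketch: run named-checkzone against staged db.<zonename> files
# before allowing the commit; any failing zone aborts the commit.

rc=0
for file in $(git diff --cached --name-only --diff-filter=ACM | grep -E '(^|/)db\.'); do
    zone=$(basename "$file" | sed 's/^db\.//')
    if ! named-checkzone "$zone" "$file"; then
        echo "named-checkzone failed for $file" >&2
        rc=1
    fi
done
exit $rc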