In September, the FUUG Foundation gave me a grant to buy some hardware for Obnam development. I used this money to buy a new desktop-ish machine; see below for details. It sits in a corner, and I use it as a server: it's not normally connected to a monitor or keyboard. It runs Obnam benchmarks. Before this, I ran Obnam benchmarks and experiments on my laptop, or on BigV virtual servers donated by Bytemark.

The hardware

The parts:

  • CPU: Intel Core i7 4790K 4.0 GHz (4 cores, total of 8 hyperthreads)
  • RAM: Kingston HyperX Beast 32 GB
  • Mainboard: Asus H97M-Plus Intel H97 LGA 1150 micro-ATX
  • SSD: Samsung 850 EVO SSD 120 GB
  • HDD: 4 x WD Red 4 TB
  • PSU: Corsair CX750M ATX
  • Case: BitFenix Phenom micro-ATX

Not to brag, but it's a nice machine: much more powerful than my 2012-era laptop.

The SSD is the system drive, the HDDs are for running Obnam benchmarks on. The HDDs are not RAIDed. Each drive is a PV for LVM2. All the data on those drives is scratch data: it's not valuable, and I do not care if it is lost. In fact, most of the data gets created and deleted during a benchmark run, and usually the disks are empty. The SSD contains the host operating system, and the virtual disks for all the virtual machines.

I assembled the machine myself, with the help of a friend, and installed Debian jessie on it. The Debian installation is pretty bare-bones: just enough to run and manage a bunch of virtual machines using libvirt and Ansible. All the actual work, including the benchmarks, is run in virtual machines.

The benchmarks

The benchmarks are run by a kludge called obbench, and the results are published at http://benchmark.obnam.org/. Here's a snapshot of the results so far:

    date         many files   one big file
    2015-09-28       2165.0         1381.9
    2015-12-06       1461.6          384.0

In a bit over two months, I've made some significant progress, I think.
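For a rough sense of scale, the relative improvement between the two runs in the table can be computed directly (the ratio is independent of whatever units obbench reports):

```python
# Relative improvement between the two benchmark runs in the table above.
# Figures copied from the results snapshot.
runs = {
    "many files":   (2165.0, 1461.6),
    "one big file": (1381.9, 384.0),
}

for name, (before, after) in runs.items():
    improvement = 100.0 * (1 - after / before)
    print(f"{name}: {improvement:.0f}% faster")
```

That works out to roughly 32% faster for the many-files case and 72% faster for the one-big-file case.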

The two benchmarks that I currently run are:

  • A million tiny files, each containing a single random byte.
  • A single 10 GB file.

These are two extreme cases of what a backup needs to deal with: in one the cost is dominated by the file metadata, in the other by the file content. Because they stress a backup program in different ways, there are two benchmarks.

In both cases, the benchmark consists of an initial backup, a restore, and a second backup made without any changes to the live data. The second backup is an extreme version of the usual case: most data doesn't change between backups, so it's important to keep that case in mind when optimising.

The above benchmarks are synthetic: they use data that's generated by a program (genbackupdata), so that they can be reproduced. Synthetic benchmarks are useful, especially for looking at particular aspects of a program's operation for optimisation. However, they do not necessarily reflect how a program behaves in actual use.

I also run, by hand, experiments with real data. I have a snapshot of our home file server and my laptop on the benchmark machine. The snapshots are static, and do not get updated. I experiment by running Obnam backups manually, the initial full backup and a no-change incremental one. In early October, I couldn't finish the initial full backups. They took too long, more than a week. Now I can finish them in about a day. This remarkable change is not evident from the synthetic benchmarks.

In numbers: 572986 files in the live data, totalling 4.5 TiB. The initial backup takes about 18.5 hours; the no-change incremental backup, 4m13s. This is from a local disk to a local disk.
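As a back-of-the-envelope check, those figures correspond to the following sustained rates (taking 4.5 TiB as 4.5 × 2^40 bytes):

```python
# Figures from the text: 572986 files, 4.5 TiB, 18.5 h full backup.
files = 572986
size_bytes = 4.5 * 2**40           # 4.5 TiB
full_seconds = 18.5 * 3600         # initial backup
incremental_seconds = 4 * 60 + 13  # no-change incremental: 4m13s

mib_per_s = size_bytes / full_seconds / 2**20
files_per_s = files / full_seconds
print(f"initial backup: ~{mib_per_s:.0f} MiB/s, ~{files_per_s:.1f} files/s")
```

That's roughly 71 MiB/s sustained over the whole initial backup.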

In addition to these, I've run numerous experiments on the new machine. These would have been much harder to run on my laptop, and so I probably wouldn't have run them at all. Running benchmarks on the laptop was always painful: it doesn't have the necessary disk space, and I'd rather use it for other things.

The results

Thanks to the benchmarks and experiments I've been able to take the in-development version of Obnam from being quite impractical for real use to being in experimental use for real data. I now use the new version as my primary backup of my laptop, with two secondary backups (with the old Obnam version, and rsync) in parallel. This would not have happened this year without the extra hardware.

In addition to the Obnam work, I've used the new machine to develop a test suite for vmdebootstrap.

My actual development still happens on my laptop, except for things that are heavy enough to be slow there. I've made sure I can do most development purely on my laptop, while offline, including running a CI system and testing things on two architectures and three releases of Debian. I do not want my development to depend on incidental things such as network access, unless I'm doing something that by its nature depends on the network, such as publishing changes or releases.