OUnit 2.0, official release
After 1.5 months of work, I am proud to officially release OUnit 2.0.0. This is a major rewrite of OUnit that adds various features I think were missing from OUnit 1. The very good news is that porting the OASIS test suite has shown that this new version of OUnit can drastically reduce the running time of a test suite.
OUnit is a unit test framework for OCaml. It allows one to easily create unit-tests for OCaml code. It is based on HUnit, a unit testing framework for Haskell. It is similar to JUnit, and other XUnit testing frameworks.
The basic features:
- better configuration setup:
  - environment variables
  - command-line options
  - configuration files
- improved output of the tests:
  - allow vim quickfix to jump to the place in the log file where the error happened
  - output HTML reports
  - output JUnit reports
  - systematic logging (verbose always on), with the log written to a file
- choose how to run your tests:
  - run tests in parallel using processes (auto-detect the number of CPUs and run as many worker processes)
  - run tests concurrently using threads
  - use the old sequential runner
- choose which tests to run, with a chooser that can do smart selection of tests:
  - simple: just run the tests in sequence
  - failfirst: run the tests that failed in the last run first, and skip the formerly successful ones if there are still failures
- some refactoring:
  - bracket: now uses a registration in the test context, which is easier to use
  - remove all useless functions from the OUnit2 interface
  - non-fatal sections: allow a failure inside a non-fatal section without immediately aborting the whole test
  - allow OUnit1 tests inside OUnit2 (to smooth the transition)
- a timer that makes tests fail if they take too long, only when using the processes runner (I was not able to do it cleanly using threads or the sequential runner)
- allow parametrized filenames, e.g. `$(suite_name)` is replaced by the test suite name
- create locks to avoid accessing the same resources within a single process or across the whole application (typically to avoid doing a `chdir` while another thread is doing one)
- create an `in_testdata_dir` function to locate test data, if any
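Put together, a minimal OUnit2 test touching a few of these points might look like the following sketch. It assumes the 2.0 context-based API (`test_ctxt`, `bracket_tmpdir`, `logf`); the file name and suite name are made up for illustration:

```ocaml
(* Minimal OUnit2 suite: every test receives a test_ctxt, which is
   used for logging and for registering clean-up via brackets. *)
open OUnit2

let test_tmpfile test_ctxt =
  (* bracket_tmpdir creates a temporary directory and registers
     its removal in the test context. *)
  let dir = bracket_tmpdir test_ctxt in
  logf test_ctxt `Info "working in %s" dir;
  let fn = Filename.concat dir "hello.txt" in
  let oc = open_out fn in
  output_string oc "hello";
  close_out oc;
  assert_bool "file exists" (Sys.file_exists fn)

let suite = "demo" >::: [ "tmpfile" >:: test_tmpfile ]

let () = run_test_tt_main suite
```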
OUnit 2.0.0 still provides the
OUnit module, which is exactly the same as in the last OUnit 1.x version. This way, you are not forced to migrate. However, it means that you gain no advantage from the new release, and you may even see some slowdown due to the increased complexity of the code. So I strongly recommend upgrading to OUnit2.
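During the transition, an OUnit1 test can also be mounted inside an OUnit2 suite. Here is a sketch assuming the compatibility conversion function `ounit2_of_ounit1` in the `OUnit` module; check the interface of your installed version for the exact name:

```ocaml
(* Sketch: reusing an existing OUnit1 test inside an OUnit2 suite.
   OUnit.ounit2_of_ounit1 is assumed here; verify it against the
   OUnit 2.0.0 interface you have installed. *)
let legacy_test : OUnit.test =
  let open OUnit in
  "legacy" >:: (fun () -> assert_equal 4 (2 + 2))

let suite : OUnit2.test =
  let open OUnit2 in
  "all" >::: [
    OUnit.ounit2_of_ounit1 legacy_test;   (* old test, new runner *)
    "fresh" >:: (fun _test_ctxt -> ());   (* new-style test *)
  ]

let () = OUnit2.run_test_tt_main suite
```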
Here is a checklist for the migration:
- replace all `open OUnit` with `open OUnit2`
- the test functions now take a `test_ctxt` argument, so replace all `(fun () -> ...)` with `(fun test_ctxt -> ...)`
- brackets are now inlined, so `bracket setUp f tearDown` becomes `let x = bracket setUp tearDown test_ctxt in ...`
- make sure that you don't change global process state (e.g. with `Unix.putenv`) and that no test relies on a previous test setting something up for it
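As a sketch of the bracket change, here is what a hypothetical file-reading test looks like after migration (the file name `data.txt` and the test body are made up for illustration):

```ocaml
(* The OUnit1 version was roughly:
     "read_data" >:: bracket setUp do_read tearDown
   In OUnit2, the bracket is inlined inside the test function. *)
open OUnit2

let suite =
  "migrated" >::: [
    "read_data" >:: (fun test_ctxt ->
      (* The set-up runs now; the tear-down is registered in
         test_ctxt and runs when the test finishes, pass or fail. *)
      let chan =
        bracket
          (fun _ctxt -> open_in "data.txt")   (* setUp *)
          (fun chan _ctxt -> close_in chan)   (* tearDown *)
          test_ctxt
      in
      ignore (input_line chan));
  ]
```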
The OASIS test suite migration
In order to check that everything was working correctly, I migrated the OASIS test suite to OUnit2. This is a big test suite (210 test cases) and it includes some quite long sequences of tests (end-to-end tests going from calling `oasis setup` to compiling and installing the results). The migration was really time consuming, and I wished to see a significant speedup of the tests with OUnit2.
You can see the result, in terms of code, of the full migration here.
Here are the results on my Intel Core i7 920/SSD:
- Pristine test suite (210 tests):
  - oUnit v1: 52.36s (i.e. latest OUnit v1.x, reference time)
  - oUnit1 over oUnit2: 60.39s (OUnit v2.0.0 using the OUnit v1 layer)
- Migration to OUnit2 (166 tests):
  - processes (8 shards): 10.12s
  - processes (auto-detect, 4 shards): 12.99s
  - sequential: 58.77s
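For reference, the runner for each row is selected on the command line of the test binary. These invocations are a sketch; the flag names are assumptions based on the feature list above, so check `-help` on your test binary to confirm them:

```shell
# Runner selection for an OUnit2 test binary (flag names assumed;
# run ./test -help to confirm on your version).
./test -runner processes -shards 8   # parallel, 8 worker processes
./test -runner processes             # auto-detect the number of CPUs
./test -runner sequential            # old sequential runner
```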
The migration was quite heavy because this test suite had a big design problem: it used in-place modification of the test data. I think I picked this design because I thought it was a good way to decrease the running time. As a matter of fact, it was a huge mistake, which kept producing failed test cases whenever one of the previous tests had failed. I have refactored all this, and now we start by copying the test data into a temporary directory, which ensures that every test always starts from pristine test data.
During the redesign, I decided to reduce the number of tests by merging some of them. This should have no big impact on the running time, although it means this is not a pure 1:1 comparison with OUnit v1; it still tests exactly the same things. This explains the loss of ~50 tests, which have in fact been merged into other tests.
The overall speedup is 4x compared to OUnit v1 when using processes. However, there is a 12% slowdown compared to OUnit v1 for the sequential runner, and a 15% slowdown when using the OUnit v1 compatibility layer. While these are not very good scores, I hope they are small enough to be outweighed by the huge win of being able to run tests in parallel with processes.
And now, the magic!
At this point, if you read the numbers carefully, you will have noticed a 4.5x speedup when comparing the sequential runner to 4 processes for OUnit2. Since we only actively test in 4 shards, this looks strange: I don't expect a super-linear speedup just from using processes. I have checked that every test was indeed running, and found no solution to this mystery. Right now, I think it is because we run fewer tests in each of several processes, which lightens the load on the GC (which may not trigger at all). I am not sure about this explanation, and I welcome any bug report that shows a problem in the implementation of either the sequential or the processes runner. Still, this is great.
Help still wanted
If you find any bugs in OUnit v2, now is the time to submit them to the OUnit BTS.
If you want to try to fix bugs by yourself, please checkout the latest version of OUnit:
$> darcs get http://forge.ocamlcore.org/anonscm/darcs/ounit/ounit
Patches are always welcome.