Catch2 vs Google Test

[ c++  tdd  catch2  clion  ]
Written on October 17, 2018

I’ve moved all my tests for the Vulkan sprite renderer from Google Test to Catch2.

There are two main reasons why I like Catch2 over Google Test: it’s a header-only library, and its test cases with sections make fixtures unnecessary.

Header only

Using Catch2 is ridiculously easy - I download one header file, stick it in my extern folder and include it.
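Getting a test binary going takes only a couple of lines. One translation unit asks Catch2 to provide main(), and every other test file just includes the same header (a minimal sketch - the file names are mine):

// main.cpp - let Catch2 supply the test runner’s main()
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

// my_test.cpp - any other test file only needs the header
#include "catch.hpp"

TEST_CASE("Sanity") {
    REQUIRE(1 + 1 == 2);
}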

The setup I had with Google Test wasn’t bad - I had a CMakeLists-googletest.txt.in file, included from my main CMakeLists.txt, that pulled Google Test down from the GitHub repo and built it, so I didn’t need to install it explicitly beforehand. I used this approach in an earlier project, based on this blog post.

I had set up a Travis job for testing, and pulling in the dependencies this way made it easier to build the project on any machine - the build process itself knows how to get what it needs.

So, for the Vulkan sprites I just copied this setup and all was good. It worked on my Ubuntu desktop, and when I cloned the project over to my MacBook Pro it all worked fine there as well - until, at some point, I was working without an Internet connection (I was on an airplane) and CMake decided it needed to talk to GitHub.

With Catch2, needless to say, the single header I’ve added to my project just works - no CMake voodoo.

No fixtures

My renderer needs a window to initialize with, so for Google Test I had set up a fixture (see my previous blog post) for creating and destroying that window for each test.
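For reference, such a fixture looks roughly like this (a sketch rather than my exact code - the fixture name and test body are illustrative):

#include <gtest/gtest.h>
#include <GLFW/glfw3.h>

class RendererTest : public ::testing::Test {
protected:
    // Google Test calls SetUp() before and TearDown() after every TEST_F.
    void SetUp() override {
        glfwInit();
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
        window = glfwCreateWindow(800, 600, "RendererTest", nullptr, nullptr);
    }

    void TearDown() override {
        glfwDestroyWindow(window);
        glfwTerminate();
    }

    GLFWwindow* window = nullptr;
};

TEST_F(RendererTest, Initializes) {
    // ... test body uses the window member ...
}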

Catch2 uses a different approach, allowing you to split test cases into sections. For each section the test case is executed from the start, so any code before the first section can perform whatever common setup you need - inline with your test code rather than in a separate fixture class.

#include <stdexcept>

#include <GLFW/glfw3.h>

#include "catch.hpp"
#include "Renderer.h" // the renderer under test - header name assumed

TEST_CASE("Renderer") {
    // Everything before the first SECTION runs afresh for each section.
    glfwInit();
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
    auto window = glfwCreateWindow(800, 600, "RendererTest", nullptr, nullptr);

    SECTION("Initialize") {
        SECTION("without validation") {
            Renderer r;
            r.Initialize(window, Renderer::DISABLE_VALIDATION);
            REQUIRE(r.IsInitialized());
            REQUIRE(!r.GetDebugMessenger());
        }

        SECTION("with validation") {
            Renderer r;
            r.Initialize(window, Renderer::ENABLE_VALIDATION);
            REQUIRE(r.IsInitialized());
            REQUIRE(r.GetDebugMessenger());
            REQUIRE(r.GetDebugMessenger()->GetErrorAndWarningCount() == 0);
        }

        SECTION("multiple calls throw") {
            Renderer r;
            r.Initialize(window, Renderer::ENABLE_VALIDATION);
            REQUIRE_THROWS_AS(r.Initialize(window, Renderer::ENABLE_VALIDATION), std::runtime_error);
        }
    }
    ...

    // Code after the sections runs on every pass through the test case,
    // so the window can be cleaned up here.
    glfwDestroyWindow(window);
    glfwTerminate();
}

Sections can be nested, and I generally set up a test case per class, with a section per method and a sub-section per test I need to do on that method.
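As a sketch of that convention, with a made-up Sprite class:

TEST_CASE("Sprite") {
    Sprite sprite;

    SECTION("SetPosition") {
        SECTION("moves the sprite") {
            sprite.SetPosition(10.0f, 20.0f);
            REQUIRE(sprite.GetPosition().x == 10.0f);
        }

        SECTION("does not change the size") {
            auto size = sprite.GetSize();
            sprite.SetPosition(10.0f, 20.0f);
            REQUIRE(sprite.GetSize() == size);
        }
    }

    SECTION("SetRotation") {
        // ... one sub-section per behaviour of SetRotation ...
    }
}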

Other niceties

I like the simplicity of the REQUIRE macro, just relying on natural C++ syntax for conditions.
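Where Google Test has a family of macros (EXPECT_EQ, EXPECT_GT, and so on), Catch2 gets by with one, and it still decomposes the expression to show both operands when an assertion fails. A quick comparison (the renderer getters here are invented for illustration):

// Google Test - a dedicated macro per comparison:
EXPECT_EQ(renderer.GetSpriteCount(), 4u);
EXPECT_GT(renderer.GetFrameTime(), 0.0);

// Catch2 - one macro with natural C++ conditions, and failures
// still print both sides of the comparison:
REQUIRE(renderer.GetSpriteCount() == 4u);
REQUIRE(renderer.GetFrameTime() > 0.0);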

My development environment of choice these days is CLion. It has very good support for both Google Test and Catch2, but Catch2 makes it easier to produce a descriptive test report within CLion: with Google Test the test name has to be a valid C++ identifier, whereas Catch2 lets you use a regular string.
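The difference shows up directly in CLion’s test tree:

// Google Test - the name must be a valid identifier:
TEST(RendererTest, InitializeWithoutValidationSucceeds) { /* ... */ }

// Catch2 - the name is a free-form string, shown as-is in the report:
TEST_CASE("Renderer initializes without validation") { /* ... */ }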

But…

There’s always a but, right? I have found one drawback: I can’t seem to be able to set breakpoints in tests. Sometimes when tests fail it’s nice to set a breakpoint in the failing test, run the suite again under the debugger, and step through the code. For some reason that isn’t working for me with Catch2 the way it did with Google Test.

It’s not a showstopper for me - with granular unit tests the need for a debugger is reduced anyway. Rather than re-running the suite with a breakpoint in the failing test, I can set breakpoints in the code under test and run only the failing test.

Finally

I’m happy with the switch, and I look forward to writing more tests with Catch2. I find the tests more readable, and the test report within CLion more descriptive.

That said, this is a small project with only about 70 tests. For a larger project with any significant number of tests it might be hard to justify switching, as Google Test does the job quite nicely.

This blog post describes the author’s process of porting his tests in a semi-automated fashion.

When converting tests is not an issue, I’d highly recommend Catch2 over Google Test.