This eRFC lays out a path for stabilizing programmatic output for libtest.


libtest is the test harness used by default for tests in cargo projects. It provides the CLI that cargo calls into, and it enumerates and runs the tests discovered in that binary. It ships with rustup and has the same compatibility guarantees as the standard library.

Before Rust 1.70, anyone could pass --format json despite it being unstable. Fixing this to require nightly showed how much people had come to rely on the programmatic output.
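As an illustration, running a test binary with --format json on nightly emits newline-delimited JSON events roughly along these lines (the exact fields are part of the unstable format and may change):

```json
{ "type": "suite", "event": "started", "test_count": 1 }
{ "type": "test", "event": "started", "name": "tests::it_works" }
{ "type": "test", "name": "tests::it_works", "event": "ok" }
{ "type": "suite", "event": "ok", "passed": 1, "failed": 0, "ignored": 0, "measured": 0, "filtered_out": 0 }
```

Consumers are expected to treat each line as an independent JSON object rather than parsing the stream as a single document.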

Cargo could also benefit from programmatic test output to improve user interactions, including

Most of that involves shifting responsibilities from the test harness to the test runner, which has the side effects of:

  • Allowing more powerful experiments with custom test runners (e.g. cargo nextest) as they’ll have more information to operate on
  • Lowering the barrier for custom test harnesses (like libtest-mimic) as UI responsibilities are shifted to the test runner (cargo test)

Guide-level explanation

The intended outcomes of this experiment are:

  • Updates to libtest’s unstable output
  • A stabilization request to T-libs-api using the process of their choosing

Additional outcomes we hope for are:

  • A change proposal for T-cargo for cargo test and cargo bench to provide their own UX on top of the programmatic output
  • A change proposal for T-cargo to allow users of custom test harnesses to opt-in to the new UX using programmatic output

While having a plan for evolution takes some burden off of the format, we should still do some due diligence in ensuring the format works well for our intended uses. Our rough plan for vetting a proposal is:

  1. Create an experimental test harness where each --format <mode> is a skin over a common internal serde structure, emulating the relationship between libtest and cargo on a smaller scale for faster iteration
  2. Transition libtest to this proposed interface
  3. Add experimental support for cargo to interact with test binaries through the unstable programmatic output
  4. Create a stabilization report for programmatic output for T-libs-api and a cargo RFC for custom test harnesses to opt into this new protocol
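To make step 1 concrete, here is a minimal sketch of what "each format is a skin over a common internal structure" could look like. The event type, its variants, and the renderer names are all hypothetical; a real implementation would likely derive serde's Serialize rather than hand-writing JSON as done here for brevity.

```rust
/// Hypothetical internal event type; every `--format <mode>` renders these.
enum TestEvent {
    Started { name: String },
    Ok { name: String },
    Failed { name: String },
}

/// The `json` skin: one JSON object per line.
fn render_json(event: &TestEvent) -> String {
    match event {
        TestEvent::Started { name } => {
            format!(r#"{{ "type": "test", "event": "started", "name": "{name}" }}"#)
        }
        TestEvent::Ok { name } => {
            format!(r#"{{ "type": "test", "event": "ok", "name": "{name}" }}"#)
        }
        TestEvent::Failed { name } => {
            format!(r#"{{ "type": "test", "event": "failed", "name": "{name}" }}"#)
        }
    }
}

/// The `pretty` skin: human-oriented lines, as `cargo test` prints today.
fn render_pretty(event: &TestEvent) -> Option<String> {
    match event {
        TestEvent::Started { .. } => None, // pretty output only reports results
        TestEvent::Ok { name } => Some(format!("test {name} ... ok")),
        TestEvent::Failed { name } => Some(format!("test {name} ... FAILED")),
    }
}

fn main() {
    let event = TestEvent::Ok { name: "tests::it_works".into() };
    println!("{}", render_json(&event));
    if let Some(line) = render_pretty(&event) {
        println!("{line}");
    }
}
```

Because every mode consumes the same events, adding a new output format (or handing rendering over to cargo) only requires a new skin, not changes to the harness's execution logic.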

It is expected that the experimental test harness have functional parity with libtest, including:

  • Ignored tests
  • Parallel running of tests
  • Benches being both a bench and a test
  • Test discovery
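Two of the parity requirements above, skipping ignored tests and running the rest in parallel, can be sketched with the standard library alone. The names and types here are hypothetical; libtest's real scheduler also handles panic capture, output capture, and bounding concurrency via --test-threads.

```rust
use std::thread;

/// Hypothetical description of one test.
struct Trial {
    name: &'static str,
    ignored: bool,
    run: fn() -> bool, // true = pass
}

/// Runs non-ignored tests in parallel; returns (passed, ignored) counts.
fn run_all(trials: Vec<Trial>) -> (usize, usize) {
    let mut handles = Vec::new();
    let mut ignored = 0;
    for trial in trials {
        if trial.ignored {
            ignored += 1; // reported, but never executed
            continue;
        }
        // One thread per test for simplicity; libtest bounds this pool.
        handles.push(thread::spawn(move || (trial.run)()));
    }
    let passed = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .filter(|&passed| passed)
        .count();
    (passed, ignored)
}

fn main() {
    let trials = vec![
        Trial { name: "adds", ignored: false, run: || 1 + 1 == 2 },
        Trial { name: "slow_io", ignored: true, run: || false },
    ];
    let (passed, ignored) = run_all(trials);
    println!("{passed} passed; {ignored} ignored");
}
```

An experimental harness would emit events (started, ok, ignored, and so on) from inside this loop rather than just tallying counts, which is where the common serde structure comes in.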

We should evaluate the design against the capabilities of test runners from different ecosystems to ensure the format can expand to cover what people may do with custom test harnesses or cargo test, including:

Warning: This doesn’t mean they’ll all be supported in the initial stabilization, just that we feel confident the format will be able to support them.

We also need to evaluate how we’ll support evolving the format. An important consideration is the compile-time burden we put on custom test harnesses, as that will be an important factor in people’s willingness to pull them in; libtest comes pre-built today.

Custom test harnesses are important for this discussion because

Reference-level explanation


Comments made on libtest’s format


Rationale and alternatives

See also


Prior art

Existing formats

Unresolved questions

Future possibilities

Improve custom test harness experience

With less of a burden being placed on custom test harnesses, we can more easily explore what is needed to make them a first-class experience.