Rerun fails with coverage and "dependent" subtests #274

@neiser

Description

I've started to use the --rerun-fails option and noticed that when coverage profiling is enabled, the produced coverage.out file only contains the coverage from the re-executed failed sub-tests; the coverage from the initial full run appears to be overwritten.

For example:

$ gotestsum --rerun-fails --packages=. -- -coverprofile=coverage.out -coverpkg="." && go tool cover -func coverage.out
✖  . (2ms) (coverage: 76.9% of statements in .)

DONE 4 tests, 2 failures in 0.271s

✓  . (2ms) (coverage: 46.2% of statements in .)

=== Failed
=== FAIL: . Test_sayHello/Voldemort_makes_it_fail_every_second_time (0.00s)
    main_test.go:16: 
                Error Trace:    /home/agr/oss/gotestsum-rerun-dependent-subtests/main_test.go:16
                Error:          Received unexpected error:
                                afraid of Voldemort, cannot say hello
                Test:           Test_sayHello/Voldemort_makes_it_fail_every_second_time
    --- FAIL: Test_sayHello/Voldemort_makes_it_fail_every_second_time (0.00s)

=== FAIL: . Test_sayHello (0.00s)

DONE 2 runs, 6 tests, 2 failures in 0.509s
github.com/neiser/gotestsum-rerun-dependent-subtests/main.go:11:        sayHello        46.2%
total:                                                                  (statements)    46.2%

The example above comes from a little demo project that illustrates my issue; see the run-test.sh script there.

I've tried to capture the coverage.out from the first run using --post-run-command="bash -c 'mv coverage.out $(mktemp coverage.out.XXXXXXXXXX)'", but that only produced one file. I assume that's because gotestsum executes the post-run command only after all re-runs are done.

Do you have a suggestion for how to calculate coverage correctly and/or generate a "merged" coverage.out file from all re-runs?
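In the meantime, one workaround might be to write each run's profile to a separate file and combine them afterwards. Below is a minimal sketch of such a merge for "mode: set" profiles (a hypothetical helper, not part of gotestsum or the Go toolchain): a block counts as covered if any run covered it. For "count"/"atomic" modes the counts would be summed instead of OR-ed.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// mergeSetProfiles merges several "mode: set" coverage profiles by
// OR-ing the covered flag per code block.
func mergeSetProfiles(profiles ...string) string {
	covered := map[string]int{}
	for _, p := range profiles {
		for _, line := range strings.Split(strings.TrimSpace(p), "\n") {
			if line == "" || strings.HasPrefix(line, "mode:") {
				continue
			}
			// Profile line format: file.go:startLine.col,endLine.col numStmts count
			i := strings.LastIndex(line, " ")
			block, count := line[:i], line[i+1:]
			if count != "0" {
				covered[block] = 1
			} else if _, seen := covered[block]; !seen {
				covered[block] = 0
			}
		}
	}
	blocks := make([]string, 0, len(covered))
	for b, c := range covered {
		blocks = append(blocks, fmt.Sprintf("%s %d", b, c))
	}
	sort.Strings(blocks)
	return "mode: set\n" + strings.Join(blocks, "\n") + "\n"
}

func main() {
	// Hypothetical profiles from the initial run and a re-run.
	first := "mode: set\nmain.go:11.2,13.5 2 1\nmain.go:14.2,16.5 2 0\n"
	rerun := "mode: set\nmain.go:11.2,13.5 2 0\nmain.go:14.2,16.5 2 1\n"
	fmt.Print(mergeSetProfiles(first, rerun))
}
```

The merged output can then be fed to go tool cover -func as usual.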

Furthermore, I'd like to see a command line option such as --rerun-all-subtests, which would re-run not only the one specific failed sub-test but essentially all tests of the test case containing it (i.e. find the root of a failed sub-test, separated by /, and run that one again). That would help when a sophisticated integration test is split up into sub-tests and later sub-tests need all previous sub-tests to have run in order to succeed. I can also prepare an illustration of this problem, but I hope you get the idea. What do you think?
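To make the proposal concrete: finding the root of a failed sub-test is essentially a string operation on the test name, since go test separates sub-test levels with "/". A sketch of that helper (--rerun-all-subtests and rootTest are only a proposal here, not an existing gotestsum feature):

```go
package main

import (
	"fmt"
	"strings"
)

// rootTest returns the top-level test containing a (possibly nested)
// sub-test, i.e. everything before the first "/".
func rootTest(name string) string {
	if i := strings.Index(name, "/"); i >= 0 {
		return name[:i]
	}
	return name
}

func main() {
	failed := "Test_sayHello/Voldemort_makes_it_fail_every_second_time"
	// With the proposed flag, gotestsum would re-run the whole root test
	// (e.g. via -run '^Test_sayHello$') instead of only the failed sub-test.
	fmt.Println(rootTest(failed)) // Test_sayHello
}
```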

Labels: enhancement (New feature or request)