phd: could assert expected number of passed/failed/skipped #1085

@iximeow

Description

phd-tests/runner/src/main.rs is where we produce the nice test result line: `... X passed; Y failed; Z skipped; W not run; finished in ...`. But the actual numbers of passed/failed/skipped tests vary by guest fixture.

While one would hope there are no failures, some tests involve Linux-isms that don't survive contact with cmd.exe and fail for boring reasons, while other tests skip on guest OSes that don't provide all the desired test facilities (usually some tool not present on Alpine, like lshw). Finally, at least the CPU topology test is (currently) Intel-only and Linux-only.

It'd be nice for phd-runner, when running all tests, to assert that the numbers of passed/failed/skipped tests actually match our expectations for the given test image adapter, hardware, etc. If a test moves from passing to skipped without us knowing, that's bad!
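One possible shape for this: a table of expected counts keyed by fixture, checked against the totals the runner already computes. This is only a sketch; the `ExpectedCounts` type, the `expected_for` lookup, the fixture names, and all of the counts below are hypothetical, not part of phd-runner's actual API or real results.

```rust
/// Hypothetical expected-result totals for one guest fixture.
#[derive(Debug, PartialEq, Eq)]
struct ExpectedCounts {
    passed: usize,
    failed: usize,
    skipped: usize,
}

/// Hypothetical lookup: expectations for a given fixture name.
/// The names and numbers here are made up for illustration.
fn expected_for(fixture: &str) -> Option<ExpectedCounts> {
    match fixture {
        "alpine" => Some(ExpectedCounts { passed: 40, failed: 0, skipped: 3 }),
        "windows-server-2022" => Some(ExpectedCounts { passed: 38, failed: 0, skipped: 5 }),
        _ => None,
    }
}

/// Compare the runner's actual totals against expectations; an unknown
/// fixture is also an error, so new fixtures must record expectations.
fn check_totals(fixture: &str, actual: &ExpectedCounts) -> Result<(), String> {
    match expected_for(fixture) {
        Some(expected) if &expected == actual => Ok(()),
        Some(expected) => Err(format!(
            "fixture {fixture}: expected {expected:?}, got {actual:?}"
        )),
        None => Err(format!("no expectations recorded for fixture {fixture}")),
    }
}
```

A check like this would catch exactly the failure mode described above: a test silently moving from passing to skipped changes the totals, and the run fails instead of quietly printing different numbers.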

    Labels

testing: Related to testing and/or the PHD test framework.
