1 change: 1 addition & 0 deletions AUTHORS
@@ -325,6 +325,7 @@ Michał Zięba
Mickey Pashov
Mihai Capotă
Mihail Milushev
Mike Fiedler (miketheman)
Mike Hoyle (hoylemd)
Mike Lundy
Milan Lesnek
1 change: 1 addition & 0 deletions changelog/14371.feature.rst
@@ -0,0 +1 @@
Added :option:`--max-warnings` command-line option and :confval:`max_warnings` configuration option to fail the test run when the number of warnings exceeds a given threshold -- by :user:`miketheman`.
40 changes: 40 additions & 0 deletions doc/en/how-to/capture-warnings.rst
Member:

The filterwarnings examples show the best practice of starting with the error value from the beginning. And considering that docs examples tend to end up in people's projects unchanged, it'd be good to show the new setting set to 0. Personally, I'll build that into my configs once this is shipped.

Author:

I disagree a bit here. I think that folks who have warnings may explore this and use it to ratchet down. If you're already a warnings expert 😉 you're likely already using filterwarnings: error:* or such, in which case this feature does not really apply to you.

Supplying a default of 0 (or even 1) has the potential to confuse the reader, as 0/1 values are also often used to denote true/false in other configurations (not pytest as far as I know), so showing a "real" number makes it clearer to me.

Member:

I mean, we could have a code comment in the config examples:
# Zero allowed warnings errors out on any warning leak, unlike the unset default, which is a no-op


I think you raise an interesting UX point regarding disabling the feature, though: it's not documented how to disable a pre-enabled setting. Imagine having this set in the config file but wanting to override it for a one-time CLI invocation. For an external plugin, you'd just do a -p no:plugin_importable (except when it's set via addopts, where that would break the parser). But how does an end-user reset the option to the default/unset value? Some plugins introduce additional options for this (pytest-cov has --no-cov and --cov-reset, for example). It's probably a good idea to special-case an empty string, so people could pass a bare --max-warnings= without a numeric value to disable the check. Or use some kind of sentinel value.
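That empty-string idea can be sketched with a plain argparse type callable (hypothetical names; a sketch of the proposal, not pytest's actual implementation):

```python
import argparse

def max_warnings_type(value: str):
    # Hypothetical parser: an empty string resets the option to "unset",
    # so a bare `--max-warnings=` on the CLI overrides a config-file value.
    if value == "":
        return None
    return int(value)

parser = argparse.ArgumentParser()
parser.add_argument("--max-warnings", type=max_warnings_type, default=None)

assert parser.parse_args(["--max-warnings=10"]).max_warnings == 10
assert parser.parse_args(["--max-warnings="]).max_warnings is None
```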

@@ -204,6 +204,46 @@ decorator or to all tests in a module by setting the :globalvar:`pytestmark` var

.. _`pytest-warnings`: https://github.com/fschulze/pytest-warnings

Setting a maximum number of warnings
-------------------------------------

.. versionadded:: 9.1

You can use the :option:`--max-warnings` command-line option to fail the test run
if the total number of warnings exceeds a given threshold:

.. code-block:: bash

pytest --max-warnings=10

If all tests pass but the number of warnings exceeds the threshold, pytest will exit with code ``6``
(:class:`~pytest.ExitCode` ``MAX_WARNINGS_ERROR``). This is useful for gradually
ratcheting down warnings in a codebase.

Note that :confval:`filtered warnings <filterwarnings>` do not count toward this maximum total.
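
A minimal illustration of that rule using the standard ``warnings`` machinery (an illustrative helper, not pytest's internal counting code):

```python
import warnings

def count_unfiltered(emit, filters):
    # Record only the warnings that survive the given filters; an
    # "ignore" filter drops a warning before it can be counted.
    with warnings.catch_warnings(record=True) as caught:
        warnings.resetwarnings()
        for action, category in filters:
            warnings.filterwarnings(action, category=category)
        emit()
    return len(caught)

n = count_unfiltered(
    lambda: (warnings.warn("kept", UserWarning),
             warnings.warn("dropped", RuntimeWarning)),
    [("always", UserWarning), ("ignore", RuntimeWarning)],
)
assert n == 1  # the ignored RuntimeWarning does not count toward the maximum
```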

The threshold can also be set in the configuration file using :confval:`max_warnings`:

.. tab:: toml

.. code-block:: toml

[pytest]
max_warnings = 10

.. tab:: ini

.. code-block:: ini

[pytest]
max_warnings = 10

.. note::

If tests fail, the exit code will be ``1`` (:class:`~pytest.ExitCode` ``TESTS_FAILED``)
regardless of the warning count. ``MAX_WARNINGS_ERROR`` is only reported when all tests pass
but the warning threshold is exceeded.

Disabling warnings summary
--------------------------

3 changes: 2 additions & 1 deletion doc/en/reference/exit-codes.rst
@@ -3,14 +3,15 @@
Exit codes
========================================================

-Running ``pytest`` can result in six different exit codes:
+Running ``pytest`` can result in seven different exit codes:

:Exit code 0: All tests were collected and passed successfully
:Exit code 1: Tests were collected and run but some of the tests failed
:Exit code 2: Test execution was interrupted by the user
:Exit code 3: Internal error happened while executing tests
:Exit code 4: pytest command line usage error
:Exit code 5: No tests were collected
:Exit code 6: Maximum number of warnings exceeded (see :option:`--max-warnings`)
Author:

I debated whether to add an explicit code, and since this case doesn't match ExitCode 1, I chose to add another.


They are represented by the :class:`pytest.ExitCode` enum. The exit codes, being part of the public API, can be imported and accessed directly using:

39 changes: 39 additions & 0 deletions doc/en/reference/reference.rst
@@ -1630,6 +1630,34 @@ passed multiple times. The expected format is ``name=value``. For example::
into errors. For more information please refer to :ref:`warnings`.


.. confval:: max_warnings
:type: ``int``

.. versionadded:: 9.1

Maximum number of warnings allowed before the test run is considered a failure.
When all tests pass, but the total number of warnings exceeds this value, pytest exits with
:class:`pytest.ExitCode` ``MAX_WARNINGS_ERROR`` (code ``6``).

.. tab:: toml

.. code-block:: toml

[pytest]
max_warnings = 10

.. tab:: ini

.. code-block:: ini

[pytest]
max_warnings = 10

Note that :confval:`filtered warnings <filterwarnings>` do not count toward this maximum total.

Can also be set via the :option:`--max-warnings` command-line option.


.. confval:: junit_duration_report
:type: ``str``
:default: ``"total"``
@@ -3121,6 +3149,12 @@ Warnings
Set which warnings to report, see ``-W`` option of Python itself.
Can be specified multiple times.

.. option:: --max-warnings=NUM

Exit with :class:`pytest.ExitCode` ``MAX_WARNINGS_ERROR`` (code ``6``) if all the tests pass, but the number
of warnings exceeds the given threshold. By default there is no limit.
Can also be set via the :confval:`max_warnings` configuration option.

Doctest
~~~~~~~

@@ -3409,6 +3443,8 @@ All the command-line flags can also be obtained by running ``pytest --help``::
-W, --pythonwarnings PYTHONWARNINGS
Set which warnings to report, see -W option of
Python itself
--max-warnings=num Exit with error if all tests pass but the number
of warnings exceeds this threshold

collection:
--collect-only, --co Only collect tests, don't execute them
@@ -3531,6 +3567,9 @@ All the command-line flags can also be obtained by running ``pytest --help``::
Each line specifies a pattern for
warnings.filterwarnings. Processed after
-W/--pythonwarnings.
max_warnings (string):
Maximum number of warnings allowed before failing
the test run
norecursedirs (args): Directory patterns to avoid for recursion
testpaths (args): Directories to search for tests when no files or
directories are given on the command line
2 changes: 2 additions & 0 deletions src/_pytest/config/__init__.py
@@ -117,6 +117,8 @@ class ExitCode(enum.IntEnum):
USAGE_ERROR = 4
#: pytest couldn't find tests.
NO_TESTS_COLLECTED = 5
#: All tests pass, but maximum number of warnings exceeded.
MAX_WARNINGS_ERROR = 6

__module__ = "pytest"

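For context, the new member slots into the enum like this (a local stand-in mirroring the diff, not an import of pytest itself):

```python
import enum

class ExitCode(enum.IntEnum):
    # Values as documented in exit-codes.rst; MAX_WARNINGS_ERROR is the
    # addition from this PR.
    OK = 0
    TESTS_FAILED = 1
    INTERRUPTED = 2
    INTERNAL_ERROR = 3
    USAGE_ERROR = 4
    NO_TESTS_COLLECTED = 5
    MAX_WARNINGS_ERROR = 6

# IntEnum members compare equal to plain ints, so shell-level checks
# against exit status 6 line up with the enum value.
assert ExitCode.MAX_WARNINGS_ERROR == 6
assert ExitCode(6) is ExitCode.MAX_WARNINGS_ERROR
```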
13 changes: 13 additions & 0 deletions src/_pytest/main.py
@@ -125,13 +125,26 @@ def pytest_addoption(parser: Parser) -> None:
action="append",
help="Set which warnings to report, see -W option of Python itself",
)
group.addoption(
"--max-warnings",
action="store",
type=int,
default=None,
metavar="num",
dest="max_warnings",
help="Exit with error if all tests pass but the number of warnings exceeds this threshold",
)
parser.addini(
"filterwarnings",
type="linelist",
help="Each line specifies a pattern for "
"warnings.filterwarnings. "
"Processed after -W/--pythonwarnings.",
)
parser.addini(
"max_warnings",
help="Exit with error if all tests pass but the number of warnings exceeds this threshold",
)

group = parser.getgroup("collect", "collection")
group.addoption(
22 changes: 22 additions & 0 deletions src/_pytest/terminal.py
@@ -966,11 +966,23 @@ def pytest_sessionfinish(
ExitCode.INTERRUPTED,
ExitCode.USAGE_ERROR,
ExitCode.NO_TESTS_COLLECTED,
ExitCode.MAX_WARNINGS_ERROR,
)
if exitstatus in summary_exit_codes and not self.no_summary:
self.config.hook.pytest_terminal_summary(
terminalreporter=self, exitstatus=exitstatus, config=self.config
)
# Check --max-warnings threshold after all warnings have been collected.
max_warnings = self._get_max_warnings()
if max_warnings is not None and session.exitstatus == ExitCode.OK:
warning_count = len(self.stats.get("warnings", []))
if warning_count > max_warnings:
session.exitstatus = ExitCode.MAX_WARNINGS_ERROR
self.write_line(
"Tests pass, but maximum allowed warnings exceeded: "
f"{warning_count} > {max_warnings}",
red=True,
)
if session.shouldfail:
self.write_sep("!", str(session.shouldfail), red=True)
if exitstatus == ExitCode.INTERRUPTED:
@@ -1057,6 +1069,16 @@ def _getcrashline(self, rep):
except AttributeError:
return ""

def _get_max_warnings(self) -> int | None:
"""Return the max_warnings threshold, from CLI or INI, or None if unset."""
value = self.config.option.max_warnings
if value is not None:
return int(value)
ini_value = self.config.getini("max_warnings")
if ini_value:
return int(ini_value)
return None

#
# Summaries for sessionfinish.
#
153 changes: 153 additions & 0 deletions testing/test_warnings.py
@@ -5,6 +5,7 @@
import sys
import warnings

from _pytest.config import ExitCode
from _pytest.fixtures import FixtureRequest
from _pytest.pytester import Pytester
import pytest
@@ -885,3 +886,155 @@ def test_resource_warning(tmp_path):
else []
)
result.stdout.fnmatch_lines([*expected_extra, "*1 passed*"])


class TestMaxWarnings:
"""Tests for the --max-warnings feature."""

PYFILE = """
import warnings
def test_one():
warnings.warn(UserWarning("warning one"))
def test_two():
warnings.warn(UserWarning("warning two"))
"""

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_not_set(self, pytester: Pytester) -> None:
"""Without --max-warnings, warnings don't affect exit code."""
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest()
result.assert_outcomes(passed=2, warnings=2)
assert result.ret == ExitCode.OK

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_not_exceeded(self, pytester: Pytester) -> None:
"""When warning count is below the threshold, exit code is OK."""
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest("--max-warnings", "10")
result.assert_outcomes(passed=2, warnings=2)
assert result.ret == ExitCode.OK

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_exceeded(self, pytester: Pytester) -> None:
"""When warning count exceeds threshold, exit code is MAX_WARNINGS_ERROR."""
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest("--max-warnings", "1")
assert result.ret == ExitCode.MAX_WARNINGS_ERROR

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_equal_to_count(self, pytester: Pytester) -> None:
"""When warning count equals threshold exactly, exit code is OK."""
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest("--max-warnings", "2")
result.assert_outcomes(passed=2, warnings=2)
assert result.ret == ExitCode.OK

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_zero(self, pytester: Pytester) -> None:
"""--max-warnings 0 means no warnings are allowed."""
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest("--max-warnings", "0")
assert result.ret == ExitCode.MAX_WARNINGS_ERROR

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_exceeded_message(self, pytester: Pytester) -> None:
"""Verify the output message when max warnings is exceeded."""
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest("--max-warnings", "1")
result.stdout.fnmatch_lines(
["*Tests pass, but maximum allowed warnings exceeded: 2 > 1*"]
)

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_ini_option(self, pytester: Pytester) -> None:
"""max_warnings can be set via INI configuration."""
pytester.makeini(
"""
[pytest]
max_warnings = 1
"""
)
pytester.makepyfile(self.PYFILE)
result = pytester.runpytest()
assert result.ret == ExitCode.MAX_WARNINGS_ERROR

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_with_test_failure(self, pytester: Pytester) -> None:
"""When tests fail AND warnings exceed max, TESTS_FAILED takes priority."""
pytester.makepyfile(
"""
import warnings
def test_fail():
warnings.warn(UserWarning("a warning"))
assert False
"""
)
result = pytester.runpytest("--max-warnings", "0")
assert result.ret == ExitCode.TESTS_FAILED

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_with_filterwarnings_ignore(self, pytester: Pytester) -> None:
"""Filtered (ignored) warnings don't count toward max_warnings."""
pytester.makepyfile(
"""
import warnings
def test_one():
warnings.warn(UserWarning("counted"))
warnings.warn(RuntimeWarning("ignored"))
"""
)
result = pytester.runpytest(
"--max-warnings",
"1",
"-W",
"ignore::RuntimeWarning",
)
result.assert_outcomes(passed=1, warnings=1)
assert result.ret == ExitCode.OK

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_with_filterwarnings_error(self, pytester: Pytester) -> None:
"""Warnings turned into errors via filterwarnings don't count as warnings."""
pytester.makepyfile(
"""
import warnings
def test_one():
warnings.warn(UserWarning("still a warning"))
def test_two():
warnings.warn(RuntimeWarning("becomes an error"))
"""
)
result = pytester.runpytest(
"--max-warnings",
"0",
"-W",
"error::RuntimeWarning",
)
# The RuntimeWarning becomes a test error, so TESTS_FAILED takes priority.
assert result.ret == ExitCode.TESTS_FAILED

@pytest.mark.filterwarnings("default::UserWarning")
def test_max_warnings_with_filterwarnings_ini_ignore(
self, pytester: Pytester
) -> None:
"""Warnings ignored via ini filterwarnings don't count toward max_warnings."""
pytester.makeini(
"""
[pytest]
filterwarnings =
ignore::RuntimeWarning
max_warnings = 1
"""
)
pytester.makepyfile(
"""
import warnings
def test_one():
warnings.warn(UserWarning("counted"))
warnings.warn(RuntimeWarning("ignored by ini"))
"""
)
result = pytester.runpytest()
result.assert_outcomes(passed=1, warnings=1)
assert result.ret == ExitCode.OK