
docs: Add prescriptive testing strategy guide #278

Open
PredictiveManish wants to merge 2 commits into openml:main from PredictiveManish:API-Docs

Conversation


@PredictiveManish PredictiveManish commented Mar 17, 2026

Summary

Updates docs/contributing/tests.md with a clear, prescriptive testing strategy for the OpenML server API project.

Fixes #169

Changes

  • Defines four test levels: migration, integration, database, and unit tests
  • Uses prescriptive language ("MUST", "SHOULD", "DO", "DO NOT")
  • Documents database fixture rollback behavior and when to use persist=True
  • Preserves performance benchmark data showing fixture overhead (~60ms for TestClient vs ~2ms for DB)
  • Adds fixtures reference table and actionable guidelines
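
The rollback and persist=True behavior described above can be sketched with a minimal, hypothetical fixture. This is a schematic only: stdlib sqlite3 stands in for the project's real session, and the `flow` table, `make_test_db`, and `db_session` names are assumptions, not the project's actual API.

```python
import sqlite3


def make_test_db() -> sqlite3.Connection:
    """Create a throwaway in-memory database with a hypothetical `flow` table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE flow (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()
    return conn


def db_session(conn: sqlite3.Connection, persist: bool = False):
    """Generator body that an @pytest.fixture would wrap: yield the connection,
    then decide on teardown whether the test's writes survive."""
    yield conn
    if persist:
        conn.commit()    # persist=True: writes stay visible across the API boundary
    else:
        conn.rollback()  # default: every write the test made is undone
```

By default nothing a test writes survives teardown; only a session opened with persist=True commits, which is exactly the case that then needs explicit cleanup.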

Testing

  • Documentation changes only, no code changes
  • Follows existing documentation structure

Related

This addresses review feedback from a previous PR attempt #256.

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • The unit test example for flow_exists still depends on an expdb_test fixture and patches the DB layer, which blurs the line between unit and DB tests; consider adjusting the example to avoid a DB fixture entirely so it better illustrates a pure unit test pattern.
  • There’s some potential confusion between py_api as a fixture and the httpx.AsyncClient/“FastAPI TestClient” terminology used in the text and tables; aligning the naming in examples, the fixtures reference, and the performance section would make it clearer which concrete fixture the ~60ms overhead refers to.
  • In the persisted_flow fixture example you show manual commit/rollback with a # Cleanup... comment; it might help to either show an explicit cleanup step (e.g. delete query) or briefly note that this is schematic so readers don’t copy an incomplete pattern into real fixtures.
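
A pure unit test along the lines the first bullet suggests could look like the sketch below. The `flow_exists` stand-in and its signature are assumptions for illustration; the point is that the persistence layer is stubbed with `unittest.mock`, so no DB fixture (and no ~60ms TestClient overhead) is involved.

```python
from unittest import mock


def flow_exists(name: str, external_version: str, connection) -> bool:
    """Stand-in for the real function: asks the DB layer for a matching row."""
    row = connection.execute(
        "SELECT id FROM flow WHERE name = ? AND external_version = ?",
        (name, external_version),
    ).fetchone()
    return row is not None


def test_flow_exists_true_when_row_found():
    connection = mock.Mock()
    connection.execute.return_value.fetchone.return_value = (42,)
    assert flow_exists("weka.J48", "1.0", connection) is True


def test_flow_exists_false_when_no_row():
    connection = mock.Mock()
    connection.execute.return_value.fetchone.return_value = None
    assert flow_exists("weka.J48", "1.0", connection) is False
```

Because the connection is a mock, each test exercises only the function's own logic, which is the boundary the guide draws between unit and database tests.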
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The unit test example for `flow_exists` still depends on an `expdb_test` fixture and patches the DB layer, which blurs the line between unit and DB tests; consider adjusting the example to avoid a DB fixture entirely so it better illustrates a pure unit test pattern.
- There’s some potential confusion between `py_api` as a fixture and the `httpx.AsyncClient`/“FastAPI TestClient” terminology used in the text and tables; aligning the naming in examples, the fixtures reference, and the performance section would make it clearer which concrete fixture the ~60ms overhead refers to.
- In the `persisted_flow` fixture example you show manual `commit`/`rollback` with a `# Cleanup...` comment; it might help to either show an explicit cleanup step (e.g. delete query) or briefly note that this is schematic so readers don’t copy an incomplete pattern into real fixtures.
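
A schematic version of the `persisted_flow` fixture with the cleanup made explicit, as the last point asks for. Table and column names are assumptions, and stdlib sqlite3 stands in for the project's real session; only the shape (commit on setup, explicit delete on teardown) is the point.

```python
import sqlite3


def persisted_flow(conn: sqlite3.Connection):
    """Generator body an @pytest.fixture would wrap: commit a row so other
    connections/APIs can see it, then delete it explicitly on teardown."""
    cursor = conn.execute("INSERT INTO flow (name) VALUES ('test-flow')")
    flow_id = cursor.lastrowid
    conn.commit()  # persisted: visible across the API boundary during the test
    yield flow_id
    # Explicit cleanup instead of a bare "# Cleanup..." comment:
    conn.execute("DELETE FROM flow WHERE id = ?", (flow_id,))
    conn.commit()
```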


Contributor

coderabbitai bot commented Mar 17, 2026

Walkthrough

The pull request updates docs/contributing/tests.md with a comprehensive testing strategy overview. The documentation now includes four testing levels: Migration, Integration, Database, and Unit. The update replaces previous terse guidance with detailed explanations, example snippets, fixture definitions, and test marker specifications. Additional sections address data persistence for cross-API visibility and fixture overhead considerations. The changes consolidate test execution commands and markers into unified documentation. This is purely a documentation revision with no modifications to executable code or runtime behavior.

🚥 Pre-merge checks: ✅ 5 passed

  • Title check ✅ Passed: The title clearly and concisely describes the main change: adding a prescriptive testing strategy guide to the documentation.
  • Linked Issues check ✅ Passed: The PR comprehensively addresses issue #169 requirements: defines four test levels (migration, integration, database, unit), documents fixture behavior, includes performance benchmarks, and provides prescriptive guidance for test authors.
  • Out of Scope Changes check ✅ Passed: All changes are in-scope documentation updates to tests.md; no unrelated or out-of-scope modifications are present.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Description check ✅ Passed: The PR description clearly relates to the changeset, describing documentation updates to tests.md with a prescriptive testing strategy for the OpenML server API.



Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
docs/contributing/tests.md (1)

182-188: Clarify the test execution command comment.

The comment on line 187 says "Run all tests including migration" but the command pytest -m "mut" only runs tests marked with mut (migration tests), not all tests.

📝 Suggested clarification

```diff
 # Run only fast tests locally
 pytest -m "not mut"

-# Run all tests including migration
+# Run only migration tests
 pytest -m "mut"
```

Or if you want to document running all tests:

```diff
 # Run only fast tests locally
 pytest -m "not mut"

-# Run all tests including migration
+# Run only migration tests
 pytest -m "mut"
+
+# Run all tests (fast + migration)
+pytest
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/contributing/tests.md` around lines 182 - 188, Update the misleading
comment: clarify that pytest -m "not mut" runs fast/non-migration tests and
pytest -m "mut" runs only tests marked with the mut marker (migration tests),
and optionally add the explicit command to run all tests (pytest) so readers
aren't misled by the current "Run all tests including migration" phrasing.
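
For completeness, the `mut` marker used in these commands has to be registered so pytest does not warn about an unknown marker. A minimal conftest.py sketch, assuming the marker name from the diff above (the description string is an assumption):

```python
# conftest.py (sketch): register the custom "mut" marker used by `pytest -m`.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "mut: migration tests (slow; excluded locally via `pytest -m 'not mut'`)",
    )
```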

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: c91ec3ec-dc4b-4a0a-886b-4e98fcc91444

📥 Commits

Reviewing files that changed from the base of the PR and between 5357b01 and 00a1721.

📒 Files selected for processing (1)
  • docs/contributing/tests.md


Development

Successfully merging this pull request may close these issues.

Write developer documentation on writing tests

1 participant