feat: merkle tree implementation for block transaction commitments #18
arunabha003 wants to merge 10 commits into StabilityNexus:main from
Conversation
Walkthrough

This PR establishes the foundational infrastructure for MiniChain, a Python blockchain project. It introduces build configuration (Makefile, pyproject.toml), GitHub Actions CI, core cryptographic modules for identity and signatures, transaction signing and verification, Merkle tree construction, canonical serialization utilities, node orchestration scaffolding, and a comprehensive test suite.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Transaction
    participant Serialization
    participant Crypto
    participant Verification
    Client->>Transaction: create(sender, recipient, amount, nonce, fee, timestamp)
    Client->>Transaction: sign(signing_key)
    Transaction->>Serialization: serialize transaction for signing
    Serialization-->>Transaction: signing_bytes
    Transaction->>Crypto: sign_message(signing_bytes, signing_key)
    Crypto-->>Transaction: signature (hex)
    Transaction->>Crypto: derive_address(verify_key)
    Crypto-->>Transaction: public_key (hex)
    Note over Client,Verification: Transaction now signed and ready for transmission
    Client->>Verification: receive Transaction
    Verification->>Transaction: verify()
    Transaction->>Serialization: serialize_transaction()
    Serialization-->>Transaction: signing_bytes
    Transaction->>Crypto: deserialize_verify_key(public_key_hex)
    Crypto-->>Transaction: verify_key
    Transaction->>Crypto: verify_signature(signing_bytes, signature, verify_key)
    Crypto-->>Transaction: signature_valid (bool)
    Transaction->>Crypto: derive_address(verify_key)
    Crypto-->>Transaction: derived_address
    Transaction->>Transaction: assert derived_address == sender
    Transaction-->>Verification: verification_result (bool)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (2 passed)
Actionable comments posted: 15
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 15-18: The workflow step using actions/setup-python@v5 (the "Setup
Python" step) should enable pip dependency caching by adding the cache input
(cache: "pip") to the step configuration so the runner uses the built-in pip
cache instead of re-downloading packages on every CI run; update the step that
references actions/setup-python@v5 and retains python-version: "3.11" while
adding the cache: "pip" input to speed up dependency installation.
- Line 18: Replace the fixed python-version with a test matrix so CI runs
multiple Python versions; update the workflow step that currently uses the
"python-version" key to accept a matrix (e.g., include "3.10", "3.11", "3.12")
and adjust any job definitions that reference that key so they iterate over
matrix.python-version (look for the "python-version" property in the CI job
configuration and the job's use of the actions/setup-python action, updating it to
matrix.python-version).
In @.gitignore:
- Line 340: The .gitignore currently ignores the entire "docs/" directory which
can hide hand-authored documentation; update the ignore rules to only exclude
generated output (e.g., replace or remove the "docs/" entry and instead ignore
the build artifacts like "docs/_build/" and other generated folders) while
leaving source docs tracked (ensure patterns reference "docs/_build/" and
similar generated paths rather than the whole "docs/" directory).
- Around line 261-270: The Python ignore entries (__pycache__/, *.py[cod],
.pytest_cache/, .ruff_cache/, .venv/) and the docs entries (issues.md,
architectureProposal.md, docs/) are currently placed between the LaTeX sections
marked by comments like "# easy-todo" and "# xcolor"; move these Python/docs
rules out of the LaTeX block and consolidate them into a dedicated "Python /
virtualenv / docs" section at the bottom of the .gitignore so they are grouped
and clearly separated from LaTeX rules.
In `@README.md`:
- Around line 9-11: Update the module listing that currently enumerates
placeholder components to include the new modules `serialization` and `merkle`
and remove the implication they are only placeholders; specifically, edit the
README section that lists component modules (the line enumerating `crypto`,
`transaction`, `block`, `state`, `mempool`, `consensus`, `network`, `storage`,
`node`) to add `serialization` and `merkle` to that list and adjust the wording
so those modules are presented as actual components rather than placeholders.
- Around line 51-71: Update the "Repository Layout" section in README.md to
reflect the new files added by this PR: add top-level Makefile and
pyproject.toml, and under src/minichain include serialization.py and merkle.py,
and under tests include test_crypto.py, test_merkle.py, test_serialization.py,
and test_transaction.py; make sure the layout block matches the repository tree
format used currently and preserves existing entries like __init__.py,
__main__.py and tests/test_scaffold.py.
In `@src/minichain/crypto.py`:
- Around line 41-44: The docstring for blake2b_digest claims a 32-byte digest
but relies on PyNaCl's default; modify the blake2b_digest function to explicitly
request a 32-byte digest by passing digest_size=32 to blake2b and update the
docstring to state it explicitly uses digest_size=32 so the 32-byte guarantee is
self-contained (refer to function blake2b_digest and the blake2b call).
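As a sketch of the suggested change, assuming a stdlib `hashlib.blake2b` stand-in for the module's PyNaCl-backed helper (the function name matches the review, but the body here is illustrative, not the repository's code):

```python
import hashlib


def blake2b_digest(data: bytes) -> bytes:
    """Return a BLAKE2b digest, explicitly requesting 32 bytes.

    Passing digest_size=32 makes the 32-byte guarantee self-contained
    instead of relying on a library default.
    """
    return hashlib.blake2b(data, digest_size=32).digest()
```

With the explicit parameter, the docstring's claim holds regardless of what the underlying library defaults to.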
In `@src/minichain/merkle.py`:
- Around line 12-27: The current compute_merkle_root duplicates the last leaf
for odd counts and uses the same raw concatenation for leaves and internal
nodes, enabling second-preimage attacks; update compute_merkle_root to avoid the
duplicate-last shortcut (either raise an error on odd-length input or append a
sentinel like blake2b_digest(b"") instead of duplicating the exact last leaf)
and ensure leaves are hashed with a leaf domain tag before building the tree
(e.g., call blake2b_digest(b"\x00" + leaf) when initializing level), and change
_hash_pair to perform domain-separated internal-node hashing (e.g.,
blake2b_digest(b"\x01" + left + right)) so internal and leaf hashes cannot
collide; keep using blake2b_digest but only via these tagged inputs.
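To make the tagged scheme concrete, here is a minimal sketch of a domain-separated `compute_merkle_root`. It uses `hashlib.blake2b` as a stand-in for the module's `blake2b_digest`, and the odd-count sentinel (`_h(b"")`) is one of the two options suggested above, not the repository's current behavior:

```python
import hashlib


def _h(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()


def compute_merkle_root(leaves: list[bytes]) -> bytes:
    if not leaves:
        return _h(b"")
    # Tag leaves with 0x00 so a leaf hash can never collide with an internal node.
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            # Pad with a fixed sentinel instead of duplicating the last node.
            level.append(_h(b""))
        # Tag internal nodes with 0x01 for domain separation.
        level = [
            _h(b"\x01" + level[i] + level[i + 1])
            for i in range(0, len(level), 2)
        ]
    return level[0]
```

Note that under this scheme a single leaf is tag-hashed rather than returned unchanged, which is the price of making leaf and internal hashes structurally distinct.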
In `@src/minichain/serialization.py`:
- Around line 28-44: The function _to_field_map currently treats Mapping inputs
by validating unexpected keys and raising ValueError, while for plain object
inputs it silently ignores extra attributes; add a brief docstring or inline
comment inside _to_field_map describing this asymmetry (mentioning parameters
value and field_order and behavior for Mapping vs object), clarifying that
Mapping keys are strictly validated but object attributes are filtered to
field_order, so future maintainers understand this intentional design choice.
- Line 6: Replace the deprecated typing.Mapping import with the modern
collections.abc.Mapping: update the import statement that currently reads "from
typing import Any, Mapping" to import Mapping from collections.abc while keeping
Any from typing (or move Any to typing if still needed); ensure the symbol
Mapping used in this module (e.g., function signatures or type hints in
serialization.py) now refers to collections.abc.Mapping and keep the existing
"from __future__ import annotations" intact.
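For illustration, a self-contained sketch of the asymmetric behavior described above; the name `to_field_map` stands in for the module's private `_to_field_map`, and the body is an assumption about its shape, not the repository's code:

```python
from collections.abc import Mapping
from typing import Any


def to_field_map(value: Any, field_order: tuple[str, ...]) -> dict[str, Any]:
    """Normalize value into an ordered field map.

    Mapping inputs are strictly validated: unexpected keys raise ValueError.
    Plain objects are filtered: attributes outside field_order are ignored.
    """
    if isinstance(value, Mapping):
        unexpected = set(value) - set(field_order)
        if unexpected:
            raise ValueError(f"unexpected keys: {sorted(unexpected)}")
        return {name: value[name] for name in field_order}
    # Objects commonly carry extra attributes (methods, caches), so filtering
    # rather than rejecting is the intentional choice here.
    return {name: getattr(value, name) for name in field_order}
```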
In `@src/minichain/transaction.py`:
- Around line 70-76: The sign method currently types signing_key as object and
directly accesses signing_key.verify_key causing AttributeError for wrong types;
change the annotation to the concrete SigningKey type using a string forward
reference (e.g. "SigningKey") and add an explicit runtime type check
(isinstance(signing_key, SigningKey) or check for verify_key attribute) that
raises a clear TypeError if invalid, so sign(self, signing_key: "SigningKey") ->
None is type-safe and fails fast before calling signing_bytes(),
serialize_verify_key, or sign_message.
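A minimal, self-contained sketch of the fail-fast guard; the `SigningKey` class here is a stub standing in for `nacl.signing.SigningKey` so the snippet runs without PyNaCl, and `check_signing_key` is a hypothetical helper, not part of the module:

```python
class SigningKey:
    """Stub for nacl.signing.SigningKey, used only to make the sketch runnable."""

    def __init__(self) -> None:
        self.verify_key = object()


def check_signing_key(signing_key: "SigningKey") -> "SigningKey":
    """Raise a clear TypeError before any .verify_key access can fail obscurely."""
    if not isinstance(signing_key, SigningKey):
        raise TypeError(
            f"expected SigningKey, got {type(signing_key).__name__}"
        )
    return signing_key
```

Calling a guard like this at the top of `sign()` turns a confusing `AttributeError` deep in the call into an immediate, descriptive `TypeError`.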
In `@tests/test_crypto.py`:
- Around line 40-47: The variable signing_key in
test_invalid_signature_is_rejected is unused and triggers RUF059; rename it to
_signing_key (or prefix with an underscore) to signal intentional unused
variable while keeping the rest of the test unchanged—locate the initializer
generate_key_pair() call in test_invalid_signature_is_rejected and update the
first returned binding name from signing_key to _signing_key.
In `@tests/test_merkle.py`:
- Around line 13-37: Add a new unit test in tests/test_merkle.py named
test_single_leaf_returns_leaf_unchanged that creates a single leaf with
blake2b_digest(b"only-tx") and asserts compute_merkle_root([leaf]) == leaf; this
covers the single-leaf path in compute_merkle_root and documents the expected
behavior that a lone leaf is returned unchanged (no extra hashing or pairing).
In `@tests/test_scaffold.py`:
- Around line 22-25: Refactor the test_component_modules_are_importable function
to parametrize over COMPONENT_MODULES so each module becomes its own test case:
add "import pytest" at the top, replace the loop with a pytest.mark.parametrize
decorator over the symbol COMPONENT_MODULES and a test function that calls
importlib.import_module(f"minichain.{module}") and asserts the result is not
None (referencing test_component_modules_are_importable, COMPONENT_MODULES, and
importlib.import_module).
In `@tests/test_transaction.py`:
- Around line 49-55: In test_mismatched_public_key_and_sender_is_rejected
replace the unused local other_signing_key with a blank identifier in the
destructuring from generate_key_pair() (e.g., use "_, other_verify_key =
generate_key_pair()") and remove the redundant assignment line `_ =
other_signing_key`; this keeps the test intent while eliminating the unnecessary
variable (refer to the symbols test_mismatched_public_key_and_sender_is_rejected
and generate_key_pair).
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"
```
🧹 Nitpick | 🔵 Trivial
Add pip dependency caching to speed up CI runs.
actions/setup-python@v5 natively supports pip caching via the cache key. Without it, every CI run re-downloads all dependencies from scratch.
⚡ Proposed improvement

```diff
 - name: Setup Python
   uses: actions/setup-python@v5
   with:
     python-version: "3.11"
+    cache: "pip"
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"
    cache: "pip"
```
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"
```
🧹 Nitpick | 🔵 Trivial
Consider a python-version matrix for broader compatibility coverage.
Currently only Python 3.11 is tested. As the project matures, adding 3.12 (and potentially 3.10) to a version matrix will catch compatibility regressions early.
🔧 Example matrix expansion (future consideration)

```diff
 test:
   runs-on: ubuntu-latest
+  strategy:
+    matrix:
+      python-version: ["3.10", "3.11", "3.12"]
   steps:
     ...
     - name: Setup Python
       uses: actions/setup-python@v5
       with:
-        python-version: "3.11"
+        python-version: ${{ matrix.python-version }}
+        cache: "pip"
```

📝 Committable suggestion

```yaml
test:
  runs-on: ubuntu-latest
  strategy:
    matrix:
      python-version: ["3.10", "3.11", "3.12"]
  steps:
    ...
    - name: Setup Python
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}
        cache: "pip"
```
```
# MiniChain local planning docs (do not commit)
issues.md
architectureProposal.md

# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
```
🧹 Nitpick | 🔵 Trivial
Python entries are embedded mid-file inside LaTeX template sections.
The Python cache/venv rules (lines 261–270) land between the # easy-todo and # xcolor LaTeX blocks rather than in a dedicated section at the end of the file. This is functionally harmless but makes the file harder to scan.
🔧 Suggested reorganisation

Move the Python block (and the docs/ entry) to a clearly delineated section at the bottom of the file, e.g.:

```diff
-# MiniChain local planning docs (do not commit)
-issues.md
-architectureProposal.md
-
-# Python caches and virtualenvs
-__pycache__/
-*.py[cod]
-.pytest_cache/
-.ruff_cache/
-.venv/
```

Place this block (along with docs/) after the last existing LaTeX rule:

```diff
+# ---- MiniChain project ----
+# Local planning docs (do not commit)
+issues.md
+architectureProposal.md
+
+# Python caches and virtualenvs
+__pycache__/
+*.py[cod]
+.pytest_cache/
+.ruff_cache/
+.venv/
+
+# Generated/local docs
+docs/
```
```
#*Notes.bib

docs/
```
Blanket docs/ ignore may silently suppress committed documentation.
If Sphinx source files or other hand-authored docs are placed under docs/, they will not be tracked. Consider scoping the ignore to generated output only (e.g., docs/_build/) rather than the entire directory.
🔧 Suggested fix

```diff
-docs/
+docs/_build/
```
```markdown
- Python package layout under `src/minichain`
- Placeholder component modules for:
  - `crypto`, `transaction`, `block`, `state`, `mempool`, `consensus`, `network`, `storage`, `node`
```
Update the module listing to include serialization and merkle.
Line 11 lists placeholder component modules but omits serialization and merkle, both of which are introduced in this PR and are no longer just placeholders.
```python
def sign(self, signing_key: object) -> None:
    """Sign this transaction in-place and populate auth fields."""
    if not self._validate_common_fields():
        raise ValueError("Invalid transaction fields")
    verify_key = signing_key.verify_key
    self.public_key = serialize_verify_key(verify_key)
    self.signature = sign_message(self.signing_bytes(), signing_key).hex()
```
🧹 Nitpick | 🔵 Trivial
signing_key parameter typed as object — loses type safety.
Line 74 accesses signing_key.verify_key without any guard, so passing anything other than a SigningKey will raise AttributeError. Since from __future__ import annotations is active, you can use the actual type as a string annotation without a runtime import dependency.
♻️ Proposed fix

```diff
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+    from nacl.signing import SigningKey as _SigningKey
+
 ...
-    def sign(self, signing_key: object) -> None:
+    def sign(self, signing_key: _SigningKey) -> None:
```

📝 Committable suggestion

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from nacl.signing import SigningKey as _SigningKey


def sign(self, signing_key: _SigningKey) -> None:
    """Sign this transaction in-place and populate auth fields."""
    if not self._validate_common_fields():
        raise ValueError("Invalid transaction fields")
    verify_key = signing_key.verify_key
    self.public_key = serialize_verify_key(verify_key)
    self.signature = sign_message(self.signing_bytes(), signing_key).hex()
```
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 73-73: Avoid specifying long messages outside the exception class
(TRY003)
```python
def test_invalid_signature_is_rejected() -> None:
    signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"

    wrong_signature = sign_message(message, other_signing_key)

    assert not verify_signature(message, wrong_signature, verify_key)
```
🧹 Nitpick | 🔵 Trivial
Unused signing_key — prefix with _ per static analysis hint.
Ruff RUF059 flags signing_key on line 41 as unused.
♻️ Proposed fix

```diff
-    signing_key, verify_key = generate_key_pair()
+    _, verify_key = generate_key_pair()
```

📝 Committable suggestion

```python
def test_invalid_signature_is_rejected() -> None:
    _, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"
    wrong_signature = sign_message(message, other_signing_key)
    assert not verify_signature(message, wrong_signature, verify_key)
```
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 41-41: Unpacked variable signing_key is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
```python
def test_empty_leaf_list_has_well_defined_root() -> None:
    assert compute_merkle_root([]) == blake2b_digest(b"")


def test_merkle_root_is_deterministic() -> None:
    leaves = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b"), blake2b_digest(b"tx-c")]
    first = compute_merkle_root(leaves)
    second = compute_merkle_root(list(leaves))
    assert first == second


def test_merkle_root_changes_when_leaf_changes() -> None:
    base = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b"), blake2b_digest(b"tx-c")]
    modified = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b-mutated"), blake2b_digest(b"tx-c")]
    assert compute_merkle_root(base) != compute_merkle_root(modified)


def test_odd_leaf_count_duplicates_last_leaf() -> None:
    leaves = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b"), blake2b_digest(b"tx-c")]

    left = blake2b_digest(leaves[0] + leaves[1])
    right = blake2b_digest(leaves[2] + leaves[2])
    expected = blake2b_digest(left + right)

    assert compute_merkle_root(leaves) == expected
```
🧹 Nitpick | 🔵 Trivial
Good test coverage for core cases — consider adding a single-leaf test.
The current suite covers empty, odd-count, determinism, and mutation. A single-leaf case is worth adding since it exercises the "no pairing needed" path and documents the expected behavior (the leaf is returned as-is without re-hashing).
💚 Suggested additional test

```python
def test_single_leaf_returns_leaf_unchanged() -> None:
    leaf = blake2b_digest(b"only-tx")
    assert compute_merkle_root([leaf]) == leaf
```
```python
def test_component_modules_are_importable() -> None:
    for module in COMPONENT_MODULES:
        imported = importlib.import_module(f"minichain.{module}")
        assert imported is not None
```
🧹 Nitpick | 🔵 Trivial
Consider parametrizing the module import test for better failure diagnostics.
If one module fails to import, the current loop produces a single test failure with no indication of which module is broken. Parametrizing gives one test item per module.
♻️ Suggested improvement
♻️ Suggested improvement

```diff
-def test_component_modules_are_importable() -> None:
-    for module in COMPONENT_MODULES:
-        imported = importlib.import_module(f"minichain.{module}")
-        assert imported is not None
+@pytest.mark.parametrize("module", COMPONENT_MODULES)
+def test_component_modules_are_importable(module: str) -> None:
+    imported = importlib.import_module(f"minichain.{module}")
+    assert imported is not None
```

This requires adding `import pytest` at the top of the file.
```python
def test_mismatched_public_key_and_sender_is_rejected() -> None:
    tx, _ = _build_signed_transaction()
    other_signing_key, other_verify_key = generate_key_pair()
    _ = other_signing_key
    tampered = replace(tx, public_key=serialize_verify_key(other_verify_key))

    assert not tampered.verify()
```
🧹 Nitpick | 🔵 Trivial
Simplify unused variable handling.
Line 52 (_ = other_signing_key) is unnecessary — use _ directly in the destructuring.
♻️ Proposed fix
def test_mismatched_public_key_and_sender_is_rejected() -> None:
tx, _ = _build_signed_transaction()
- other_signing_key, other_verify_key = generate_key_pair()
- _ = other_signing_key
+ _, other_verify_key = generate_key_pair()
tampered = replace(tx, public_key=serialize_verify_key(other_verify_key))
assert not tampered.verify()🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_transaction.py` around lines 49 - 55, In
test_mismatched_public_key_and_sender_is_rejected replace the unused local
other_signing_key with a blank identifier in the destructuring from
generate_key_pair() (e.g., use "_, other_verify_key = generate_key_pair()") and
remove the redundant assignment line `_ = other_signing_key`; this keeps the
test intent while eliminating the unnecessary variable (refer to the symbols
test_mismatched_public_key_and_sender_is_rejected and generate_key_pair).
Addressed Issues:
Part of #8
Depends on #12
Summary
Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.