feat: block data structure with header hashing and merkle commitments #20

arunabha003 wants to merge 13 commits into StabilityNexus:main from
Conversation
Walkthrough

This PR establishes the foundational structure for MiniChain, a Python-based blockchain project. It introduces a complete package layout with cryptographic utilities, transaction and block data models, serialization helpers, a CLI entry point, comprehensive test coverage, and development infrastructure including a Makefile and GitHub Actions CI.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 24
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 15-23: Add dependency caching to the "Setup Python" step by
enabling actions/setup-python@v5's built-in pip cache: update the step that uses
actions/setup-python@v5 (the "Setup Python" step) to include cache: 'pip' so the
subsequent pip install commands (python -m pip install --upgrade pip and python
-m pip install -e .[dev]) reuse cached wheels and speed CI.
In @.gitignore:
- Around line 338-340: Remove the blanket "docs/" ignore entry and replace it
with a scoped pattern that only ignores generated output (for example
"docs/_build/" or "docs/**/*.html" depending on your generator); update the
.gitignore entry that currently contains "docs/" to a more specific pattern so
committed, hand-written documentation won't be silently ignored while still
excluding generator output.
- Around line 265-270: The .gitignore currently ignores ".venv/" but not the
common dotless virtual environment directory "venv/"; update the .gitignore by
adding an entry for "venv/" alongside ".venv/" so contributors who create
virtualenvs with "python -m venv venv" won't produce untracked files.
- Around line 261-270: Move the project-specific .gitignore entries (the
MiniChain docs lines "issues.md" and "architectureProposal.md" and the Python
cache/virtualenv lines "__pycache__/", "*.py[cod]", ".pytest_cache/",
".ruff_cache/", ".venv/") out of the middle of the LaTeX template block and
place them together either at the top or bottom of the file (before or after the
LaTeX template) so the LaTeX entries remain alphabetized and contiguous; ensure
you leave a clear comment header like "# Project-specific" for that group and
preserve surrounding blank lines for readability.
In `@Makefile`:
- Line 3: Add a new phony Makefile target named "clean" and include it in the
existing .PHONY line so maintainers can run make clean; implement the target to
recursively remove Python build/artifact directories and files such as
__pycache__ directories, dist/, build/, and any *.egg-info to prevent stale
bytecode and packaging artifacts from lingering; update the .PHONY declaration
(the line containing ".PHONY: install dev-install test lint format start-node")
to also list clean so it’s treated as a phony target.
In `@README.md`:
- Around line 51-70: The README's "Repository Layout" section is missing files
added in this PR; update the layout block to include the new module filenames
(serialization.py, merkle.py) under src/minichain and the new test files
(test_crypto.py, test_merkle.py, test_transaction.py, test_serialization.py,
test_block.py) under tests so the documented tree matches the actual repository
contents; ensure the entries use the same indentation and formatting as the
existing listing for consistency.
- Around line 5-15: Update the "## Current Status" section to reflect PR `#20` by
adding a line (or new subsection) that documents the addition of block data
structures, header hashing, and Merkle commitments; specifically mention the new
components introduced by PR `#20` (e.g., block data structures, header hashing
implementation, Merkle tree/commitment logic) alongside the existing scaffolding
items so the README accurately describes both Issue `#1` and the PR `#20`
enhancements.
In `@src/minichain/__init__.py`:
- Around line 1-4: The package currently hardcodes __version__ ("0.1.0") which
duplicates pyproject.toml; replace the literal with a single-source approach:
remove the hardcoded assignment to __version__ and set __version__ by reading
the installed package version (e.g. via importlib.metadata.version("minichain")
or importlib.metadata.metadata lookup) so __all__ still exposes "__version__"
but the value comes from package metadata; also update pyproject.toml to enable
dynamic versioning (add "version" to dynamic) so hatchling is the single source
of truth.
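  A minimal sketch of that single-source pattern, assuming the distribution name is `minichain` (the real `__all__` likely exposes more names):

  ```python
  from importlib.metadata import PackageNotFoundError, version

  try:
      # hatchling writes the version into distribution metadata at build
      # time, so the installed package becomes the single source of truth.
      __version__ = version("minichain")
  except PackageNotFoundError:
      # Source checkout that was never pip-installed; fall back to a stub.
      __version__ = "0.0.0"

  __all__ = ["__version__"]
  ```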
In `@src/minichain/block.py`:
- Around line 29-30: Add concise docstrings to all public Block methods:
hash_hex, transaction_hashes, computed_merkle_root, computed_merkle_root_hex,
update_header_merkle_root, has_valid_merkle_root, and hash. For each method add
a one-line summary of what it does, and include parameter and return
descriptions where applicable (e.g., return type and meaning for
computed_merkle_root/hex and hash), matching the project's docstring style and
tone; place them directly above each method definition (e.g., def hash_hex(self)
-> str:) so tooling and reviewers can understand the API in this
consensus-critical module.
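  As an illustration of the requested style, one of the shorter methods might read like this (the body is assumed from the method names above, not taken from the actual module):

  ```python
  def computed_merkle_root_hex(self) -> str:
      """Recompute the Merkle root from this block's transactions.

      Returns:
          The recomputed root as a lowercase hex string, independent of
          the value currently stored in the header.
      """
      return self.computed_merkle_root().hex()
  ```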
In `@src/minichain/crypto.py`:
- Around line 77-84: The verify_signature function currently only catches
BadSignatureError so other exceptions (e.g., nacl.exceptions.CryptoError,
ValueError on wrong-length signature) can propagate; update verify_signature to
catch these broader verification-related exceptions thrown by VerifyKey.verify
(BadSignatureError, CryptoError, ValueError) and return False on any of them,
ensuring verify_key.verify(message, signature) remains in the try block and no
verification-related exception escapes the function.
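  A sketch of the widened handler, assuming the function's existing signature; note that `BadSignatureError` subclasses `CryptoError` in PyNaCl, so listing both is explicit rather than strictly necessary:

  ```python
  from nacl.exceptions import BadSignatureError, CryptoError
  from nacl.signing import VerifyKey


  def verify_signature(message: bytes, signature: bytes, verify_key: VerifyKey) -> bool:
      """Return True only when the signature verifies; never raise."""
      try:
          verify_key.verify(message, signature)
          return True
      except (BadSignatureError, CryptoError, ValueError):
          # Wrong-length or malformed signatures surface as ValueError or
          # CryptoError rather than BadSignatureError; all mean "rejected".
          return False
  ```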
- Around line 21-24: The _require_nacl() guard is brittle because it checks for
"blake2b" in globals() instead of the actual import sentinel; update
_require_nacl() to explicitly test the import sentinel _NACL_IMPORT_ERROR (e.g.,
if _NACL_IMPORT_ERROR is not None) and raise RuntimeError using that sentinel as
the cause so the error is deterministic and self-documenting; locate the
function _require_nacl and replace the "blake2b" globals() check with a direct
check of _NACL_IMPORT_ERROR, keeping the existing message and raising from
_NACL_IMPORT_ERROR.
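  A sketch of the sentinel-based guard (exact error message illustrative):

  ```python
  try:
      from nacl.signing import SigningKey, VerifyKey  # noqa: F401
      _NACL_IMPORT_ERROR: ImportError | None = None
  except ImportError as exc:
      _NACL_IMPORT_ERROR = exc


  def _require_nacl() -> None:
      """Fail fast with a clear error when PyNaCl is not installed."""
      if _NACL_IMPORT_ERROR is not None:
          raise RuntimeError("PyNaCl is required for this operation") from _NACL_IMPORT_ERROR
  ```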
In `@src/minichain/merkle.py`:
- Around line 8-9: _hash_pair currently computes blake2b_digest(left + right)
which lacks domain separation; change it to call blake2b_digest(b'\x01' + left +
right) and update all leaf hashing sites that call blake2b_digest(data) (or the
leaf-hash helper, if present) to use blake2b_digest(b'\x00' + data) so internal
and leaf hashes are distinct; ensure you only add the single-byte prefixes and
keep the same blake2b_digest function signature so callers remain compatible.
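  A sketch of the prefixing scheme, assuming the module's existing `blake2b_digest` helper:

  ```python
  _LEAF_PREFIX = b"\x00"  # prepended when hashing raw transaction bytes
  _NODE_PREFIX = b"\x01"  # prepended when hashing a pair of child digests


  def _hash_leaf(data: bytes) -> bytes:
      # A leaf digest can never collide with an internal-node digest,
      # because the first byte fed to the hash always differs.
      return blake2b_digest(_LEAF_PREFIX + data)


  def _hash_pair(left: bytes, right: bytes) -> bytes:
      return blake2b_digest(_NODE_PREFIX + left + right)
  ```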
- Around line 18-25: The current duplicate-last-leaf approach in
compute_merkle_root (the while loop that appends level[-1]) makes different tx
lists collide; instead, when building next_level, do NOT duplicate the last
node—carry it up unchanged: in the for-loop over indices (0..len(level) step 2)
call _hash_pair(level[i], level[i+1]) when i+1 exists, otherwise append level[i]
directly to next_level (promote the odd node), then set level = next_level; this
preserves uniqueness without storing a leaf count.
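  A sketch of the promotion loop, keeping the existing empty-input behavior and the `_hash_pair` helper:

  ```python
  def compute_merkle_root(leaves: list[bytes]) -> bytes:
      if not leaves:
          return blake2b_digest(b"")  # existing well-defined empty root
      level = list(leaves)
      while len(level) > 1:
          next_level = []
          for i in range(0, len(level), 2):
              if i + 1 < len(level):
                  next_level.append(_hash_pair(level[i], level[i + 1]))
              else:
                  # Odd node: carry it up unchanged instead of pairing it
                  # with a duplicate of itself.
                  next_level.append(level[i])
          level = next_level
      return level[0]
  ```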
In `@src/minichain/serialization.py`:
- Around line 8-25: The constants TRANSACTION_FIELD_ORDER and
BLOCK_HEADER_FIELD_ORDER are intended to define canonical key order but
json.dumps is being called with sort_keys=True which alphabetically reorders
keys and overrides those tuples; locate the serialization code that builds
canonical_map and calls json.dumps (search for canonical_map and json.dumps
usage) and remove the sort_keys=True argument so json.dumps uses insertion order
from canonical_map and the FIELD_ORDER tuples control canonical serialization.
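  A sketch of the intended call site; the field tuple and the compact separators are illustrative, not the module's actual values:

  ```python
  import json

  BLOCK_HEADER_FIELD_ORDER = ("version", "previous_hash", "merkle_root")  # illustrative subset


  def canonical_json(canonical_map: dict) -> bytes:
      # No sort_keys=True: dicts preserve insertion order, so the FIELD_ORDER
      # tuple that built canonical_map now controls the byte layout.
      return json.dumps(canonical_map, separators=(",", ":")).encode("utf-8")
  ```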
- Around line 31-44: The extras check currently only runs for Mapping inputs
because for objects you build source from field_order, so extra attributes on
the object are omitted and never detected; update the object branch in
serialization.py (the code that builds source from value when not
isinstance(value, Mapping)) to also collect the object's actual attribute names
(e.g., via getattr/vars(value), and handle __slots__ if used) and compute extras
= sorted(set(actual_attrs) - set(field_order)) and raise the same ValueError
when extras exist, ensuring parity with the Mapping branch; reference the
existing field_order and the object-handling branch so the extras detection
logic is unified for both Mapping and object inputs.
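  A sketch of the unified check; `vars()` assumes `__dict__`-backed objects, with a `__slots__` fallback for slotted classes:

  ```python
  from collections.abc import Mapping


  def _check_extras(value: object, field_order: tuple[str, ...]) -> None:
      if isinstance(value, Mapping):
          actual = set(value)
      elif hasattr(value, "__dict__"):
          actual = set(vars(value))
      else:
          # Slotted objects list their attribute names in __slots__.
          actual = set(getattr(type(value), "__slots__", ()))
      extras = sorted(actual - set(field_order))
      if extras:
          raise ValueError(f"unexpected fields not in canonical order: {extras}")
  ```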
- Line 6: Replace the deprecated typing.Mapping import with the
collections.abc.Mapping import: update the import line that currently reads
"from typing import Any, Mapping" so that Mapping is imported from
collections.abc (e.g., keep Any from typing and import Mapping from
collections.abc) to remove the use of typing.Mapping in the module
serialization.py and satisfy the Ruff UP035 recommendation.
In `@src/minichain/transaction.py`:
- Line 80: The sign method's parameter is typed too broadly; update def
sign(self, signing_key: object) -> None to require a SigningKey (or a small
Protocol declaring .verify_key and .sign) so type-checkers catch
incompatibilities; use the existing guarded import of SigningKey and change the
parameter annotation in the sign method (or define a SigningKeyLike Protocol and
annotate signing_key: SigningKeyLike) and adjust any tests/ call sites
accordingly.
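  One possible shape for that Protocol (return types left loose on purpose so both real keys and test doubles satisfy it):

  ```python
  from typing import Any, Protocol


  class SigningKeyLike(Protocol):
      """Structural stand-in for nacl.signing.SigningKey."""

      @property
      def verify_key(self) -> Any: ...

      def sign(self, message: bytes) -> Any: ...
  ```

  The method would then be annotated as `def sign(self, signing_key: SigningKeyLike) -> None:`.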
- Around line 97-100: The try/except around
deserialize_verify_key(self.public_key) is too broad; replace the bare except
with a narrow catch for the expected deserialization failures (ValueError and
PyNaCl's CryptoError) so unrelated errors aren't swallowed: change the except to
except (ValueError, CryptoError): return False and add/import CryptoError from
nacl.exceptions near the top of the module if it's not already imported; keep
deserialize_verify_key and the return False behavior intact.
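  A sketch of the narrowed handler inside `Transaction.verify` (the remainder of the method is elided, not reproduced from the actual source):

  ```python
  from nacl.exceptions import CryptoError


  def verify(self) -> bool:
      try:
          verify_key = deserialize_verify_key(self.public_key)
      except (ValueError, CryptoError):
          # Only expected deserialization failures are swallowed;
          # anything else still propagates for debugging.
          return False
      ...  # existing signature check continues unchanged
  ```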
In `@tests/test_block.py`:
- Around line 73-78: Add a unit test that covers the empty-block edge case:
create a BlockHeader and a Block with transactions=[], call
Block.update_header_merkle_root() (which uses compute_merkle_root([])) and then
assert Block.has_valid_merkle_root() returns True; reference the Block,
BlockHeader, compute_merkle_root, update_header_merkle_root, and
has_valid_merkle_root symbols so the test verifies that an empty transaction
list yields the expected blake2b_digest(b"") merkle root and validates
correctly.
- Around line 49-51: Replace the weak idempotency check in
test_block_hash_is_deterministic by constructing a Block with fully
deterministic, hardcoded inputs (use explicit values for sender, recipient,
amount, nonce, timestamp, and payload instead of _make_block()) and assert
block.hash() equals a precomputed expected hash byte string; reference the test
function name test_block_hash_is_deterministic, the Block constructor (or
factory used in your code) and the block.hash() method so you locate where to
build the deterministic Block and compare against the known expected hash value.
Ensure the timestamp and any numeric types are set to fixed literals and compute
the expected hash externally (or from a one-time run) so the assertion checks
cross-invocation determinism.
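  A sketch of the reworked test; the header fields mirror the empty-block example further down this review, the transaction list is left empty for brevity (a hardcoded transaction with a fixed keypair would exercise the full path), and the expected digest is a placeholder to be captured once from a trusted run:

  ```python
  # Paste the hex digest captured from a one-time known-good run here.
  EXPECTED_HASH_HEX = "0000000000000000000000000000000000000000000000000000000000000000"


  def test_block_hash_is_deterministic() -> None:
      header = BlockHeader(
          version=0,
          previous_hash="00" * 32,
          merkle_root="11" * 32,
          timestamp=1_739_800_000,
          difficulty_target=1_000_000,
          nonce=7,
          block_height=1,
      )
      block = Block(header=header, transactions=[])
      assert block.hash().hex() == EXPECTED_HASH_HEX
  ```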
In `@tests/test_crypto.py`:
- Around line 40-47: The test test_invalid_signature_is_rejected unpacks
generate_key_pair into signing_key and verify_key but never uses signing_key,
triggering Ruff RUF059; rename the unused variable to _signing_key (or prefix
with an underscore) when calling generate_key_pair to silence the linter while
keeping verify_key intact (e.g., change the left-hand side to _signing_key,
verify_key = generate_key_pair()).
In `@tests/test_merkle.py`:
- Around line 13-27: Add a new unit test to cover the single-leaf edge case:
verify that compute_merkle_root([leaf]) returns the leaf as-is (use
blake2b_digest to produce the leaf), e.g. create a test function named like
test_single_leaf_root_equals_leaf that constructs leaf =
blake2b_digest(b"tx-only") and asserts compute_merkle_root([leaf]) == leaf so
the behavior of compute_merkle_root with a single pre-hashed leaf is explicitly
tested.
In `@tests/test_serialization.py`:
- Around line 68-69: The parametrize call uses a comma-separated string for
parameter names; update the pytest.mark.parametrize invocation so the names
argument is a tuple of strings instead of a single comma-joined string (i.e.,
change the "payload,serializer,expected" names argument to ("payload",
"serializer", "expected") in the pytest.mark.parametrize call).
In `@tests/test_transaction.py`:
- Around line 49-55: The test_mismatched_public_key_and_sender_is_rejected
contains a redundant assignment `_ = other_signing_key` after unpacking
generate_key_pair(); remove that redundant line and unpack only the needed names
(e.g., `_, other_verify_key = generate_key_pair()` or `other_signing_key,
other_verify_key = generate_key_pair()` and drop the `_` assignment) so the test
uses serialize_verify_key(other_verify_key) and replace(tx, public_key=...) then
asserts not tampered.verify() without the unnecessary throwaway reassignment.
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"

- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    python -m pip install -e .[dev]
```
🧹 Nitpick | 🔵 Trivial
No pip cache configured — consider adding cache: 'pip' for faster CI
actions/setup-python@v5 natively supports dependency caching. Without it, every run installs all packages from scratch, which is slow and wastes bandwidth.
⚡ Proposed improvement

```diff
  - name: Setup Python
    uses: actions/setup-python@v5
    with:
      python-version: "3.11"
+     cache: "pip"
```
```gitignore
# MiniChain local planning docs (do not commit)
issues.md
architectureProposal.md

# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
```
🧹 Nitpick | 🔵 Trivial
Project-specific entries are inserted mid-file among LaTeX entries.
The LaTeX template is organized alphabetically by package/topic. The MiniChain planning-doc and Python cache blocks land between # easy-todo and # xcolor, fragmenting that structure. Moving them to the top or bottom of the file (ahead of or after the LaTeX template block) keeps the project-specific section easy to find and doesn't interleave unrelated concerns.
```gitignore
# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
```
🧹 Nitpick | 🔵 Trivial
Consider also ignoring venv/ alongside .venv/.
python -m venv venv (dotless) is still very common. Without it, a contributor using that convention will end up with an untracked directory.
🔧 Suggested addition

```diff
 .venv/
+venv/
```
```gitignore
docs/
```
docs/ blanket-ignore will suppress any future committed documentation.
If the project ever adopts a checked-in docs/ folder (e.g., for static site content or hand-written guides), contributors will not notice that Git is silently ignoring it. If the intent is only to ignore auto-generated output, prefer a scoped pattern (e.g., docs/_build/) rather than the entire directory.
```makefile
PYTHON ?= python3

.PHONY: install dev-install test lint format start-node
```
🧹 Nitpick | 🔵 Trivial
Consider adding a clean target.
A clean target to remove build artifacts (__pycache__, *.egg-info, dist/, build/) is a common convenience for developer workflows and avoids stale bytecode issues.
♻️ Suggested addition

```diff
-.PHONY: install dev-install test lint format start-node
+.PHONY: install dev-install test lint format start-node clean
```

Then add at the end:

```makefile
clean:
	find . -type d -name __pycache__ -exec rm -rf {} +
	rm -rf dist build *.egg-info src/*.egg-info
```

🧰 Tools
🪛 checkmake (0.2.2)
[warning] 3-3: Missing required phony target "all" (minphony)
[warning] 3-3: Missing required phony target "clean" (minphony)
```python
def test_header_merkle_root_matches_transaction_body() -> None:
    block = _make_block()
    assert block.has_valid_merkle_root()

    block.transactions[0].amount += 1
    assert not block.has_valid_merkle_root()
```
🧹 Nitpick | 🔵 Trivial
Missing edge-case: empty block (zero transactions)
compute_merkle_root([]) has a defined code path that returns blake2b_digest(b""). There is no test verifying that a Block with no transactions produces and validates a correct Merkle root. Consider adding:
```python
def test_empty_block_has_valid_merkle_root() -> None:
    header = BlockHeader(
        version=0,
        previous_hash="00" * 32,
        merkle_root="",
        timestamp=1_739_800_000,
        difficulty_target=1_000_000,
        nonce=0,
        block_height=0,
    )
    block = Block(header=header, transactions=[])
    block.update_header_merkle_root()
    assert block.has_valid_merkle_root()
```
```python
def test_invalid_signature_is_rejected() -> None:
    signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"

    wrong_signature = sign_message(message, other_signing_key)

    assert not verify_signature(message, wrong_signature, verify_key)
```
Prefix unused signing_key with _ to silence the Ruff RUF059 warning.
signing_key is unpacked but never referenced in the test body.
🔧 Proposed fix

```diff
 def test_invalid_signature_is_rejected() -> None:
-    signing_key, verify_key = generate_key_pair()
+    _, verify_key = generate_key_pair()
     other_signing_key, _ = generate_key_pair()
```
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 41-41: Unpacked variable `signing_key` is never used. Prefix it with an underscore or any other dummy variable pattern. (RUF059)
```python
def test_empty_leaf_list_has_well_defined_root() -> None:
    assert compute_merkle_root([]) == blake2b_digest(b"")


def test_merkle_root_is_deterministic() -> None:
    leaves = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b"), blake2b_digest(b"tx-c")]
    first = compute_merkle_root(leaves)
    second = compute_merkle_root(list(leaves))
    assert first == second


def test_merkle_root_changes_when_leaf_changes() -> None:
    base = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b"), blake2b_digest(b"tx-c")]
    modified = [blake2b_digest(b"tx-a"), blake2b_digest(b"tx-b-mutated"), blake2b_digest(b"tx-c")]
    assert compute_merkle_root(base) != compute_merkle_root(modified)
```
🧹 Nitpick | 🔵 Trivial
Consider adding a single-leaf edge-case test.
compute_merkle_root with exactly one leaf skips the while loop entirely and returns the raw leaf bytes unchanged — i.e., compute_merkle_root([leaf]) == leaf, not blake2b_digest(leaf). This behaviour is intentional (leaves are pre-hashed), but it is currently untested and may surprise future readers.
🧪 Suggested additional test

```python
def test_single_leaf_root_equals_leaf() -> None:
    leaf = blake2b_digest(b"tx-only")
    assert compute_merkle_root([leaf]) == leaf
```
```python
@pytest.mark.parametrize(
    "payload,serializer,expected",
```
Use a tuple for the pytest.mark.parametrize names argument (Ruff PT006).
The Ruff/flake8-pytest-style convention for multiple parameters is a tuple, e.g. ("name1", "name2"), rather than a comma-separated string.
🔧 Proposed fix

```diff
 @pytest.mark.parametrize(
-    "payload,serializer,expected",
+    ("payload", "serializer", "expected"),
     [
```
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 69-69: Wrong type passed to first argument of pytest.mark.parametrize; expected tuple. Use a tuple for the first argument. (PT006)
```python
def test_mismatched_public_key_and_sender_is_rejected() -> None:
    tx, _ = _build_signed_transaction()
    other_signing_key, other_verify_key = generate_key_pair()
    _ = other_signing_key
    tampered = replace(tx, public_key=serialize_verify_key(other_verify_key))

    assert not tampered.verify()
```
🧹 Nitpick | 🔵 Trivial
Simplify the unpacking — _ = other_signing_key is redundant.
Assigning an already-bound name to _ after destructuring is unnecessary. Unpack directly:
♻️ Proposed fix

```diff
 def test_mismatched_public_key_and_sender_is_rejected() -> None:
     tx, _ = _build_signed_transaction()
-    other_signing_key, other_verify_key = generate_key_pair()
-    _ = other_signing_key
+    _, other_verify_key = generate_key_pair()
     tampered = replace(tx, public_key=serialize_verify_key(other_verify_key))
```
Addressed Issues:
Part of #8
Depends on #18
Summary
- `BlockHeader` and `Block` data structures

Validation

- `python -m ruff check src tests`
- `python -m pytest`

Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit
Release Notes
New Features
Documentation
Chores
Tests