feat: block propagation protocol with validation and dedup #40
arunabha003 wants to merge 41 commits into StabilityNexus:main from
Conversation
Walkthrough
This PR introduces a complete minimal blockchain implementation (MiniChain) with core primitives for blocks, transactions, and cryptography; chain management with consensus and PoW mining; a mempool for transaction pooling; peer-to-peer networking with gossip support; SQLite persistence; node orchestration; and a comprehensive CLI with subcommands for key generation, balance queries, and block/transaction management. Configuration, packaging, CI/CD, and extensive test coverage are included.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 18
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.gitignore:
- Around line 265-270: The Python-cache lines (__pycache__/, *.py[cod],
.pytest_cache/, .ruff_cache/, .venv/) are currently inserted between the LaTeX
section markers (# easy-todo and # xcolor); move that entire block so it sits
after the LaTeX-related entries (i.e., after the # xcolor section) and just
before the docs/ line to keep LaTeX entries contiguous and maintain
organization.
In `@pyproject.toml`:
- Around line 16-23: The CI fails because tests import MiniChainNetwork and
NetworkConfig from minichain.network but the "network" optional extra isn't
installed; update pyproject.toml by adding the "network" extra into the dev
extras list (include "py-libp2p>=0.2.0" under the dev array) or update CI to
install the package with both extras (e.g., pip install .[dev,network]); ensure
the identifiers referenced (MiniChainNetwork, NetworkConfig and the
[project.optional-dependencies] sections "network" and "dev") are satisfied so
tests like tests/test_transaction_gossip.py, tests/test_network.py, and
tests/test_block_gossip.py can import minichain.network successfully.
In `@src/minichain/chain.py`:
- Around line 98-116: add_block currently inserts block into _blocks_by_hash and
_heights before calling _replay_state_for_tip, but only removes them on
ChainValidationError; modify add_block so any exception raised by
_replay_state_for_tip (not just ChainValidationError) triggers cleanup of the
inserted entries from _blocks_by_hash and _heights and then re-raises an
appropriate ChainValidationError (or wraps the original exception) so unexpected
errors don't leave the chain manager in an inconsistent state; reference
add_block, _blocks_by_hash, _heights, _replay_state_for_tip, and
ChainValidationError when applying the fix.
In `@src/minichain/crypto.py`:
- Around line 77-84: The verify_signature function currently only catches
BadSignatureError from VerifyKey.verify, but VerifyKey.verify can also raise
ValueError for malformed signatures (wrong length); update the exception
handling in verify_signature to also catch ValueError and return False on
ValueError, keeping the existing BadSignatureError handling and return True only
after a successful verify call.
In `@src/minichain/genesis.py`:
- Around line 41-74: The genesis merkle_root is computed directly with
blake2b_digest(b"").hex() which duplicates logic; replace both occurrences in
create_genesis_block and apply_genesis_block to call compute_merkle_root([]) so
the genesis header and the expected_merkle_root check use the shared helper
(update the header.merkle_root assignment in create_genesis_block and the
expected_merkle_root computation in apply_genesis_block to use
compute_merkle_root([])).
In `@src/minichain/mempool.py`:
- Around line 63-106: add_transaction currently returns the txid even if the
newly inserted entry was immediately evicted by evict; after calling
self.evict(...), check whether transaction_id still exists in
self._entries_by_id (or whether the sender pool still contains the entry) and if
it was removed, clean up any remaining references (e.g., remove nonce_key from
self._id_by_sender_nonce and entry from pool.entries if necessary) and raise a
MempoolValidationError (e.g., "Transaction evicted due to mempool full") instead
of returning the id; keep references to add_transaction, evict,
_recompute_sender_pool, _PoolEntry, self._entries_by_id and
self._id_by_sender_nonce to locate and update the logic.
In `@src/minichain/mining.py`:
- Around line 20-65: build_candidate_block currently calls
create_coinbase_transaction with coinbase_amount that may be non-positive and
lets ValueError escape; wrap the coinbase creation in a try/except that catches
ValueError and raises BlockConstructionError instead, and add an explicit guard
that if coinbase_amount <= 0 raise BlockConstructionError (or treat as
no-coinbase case) before calling create_coinbase_transaction; reference the
coinbase_amount variable, the create_coinbase_transaction call, and the
BlockConstructionError raised by build_candidate_block so callers receive
consistent error types.
In `@src/minichain/network.py`:
- Around line 751-772: _read_message currently wraps all reads in
asyncio.wait_for using self.config.connect_timeout_seconds, which causes idle
peers to be disconnected; change _read_message to accept an optional timeout
parameter (default None) and only call asyncio.wait_for when timeout is not
None, then update callers: pass timeout=self.config.connect_timeout_seconds from
handshake/initial connection code paths but call _read_message() with no timeout
from _peer_reader_loop so steady‑state reads block indefinitely instead of
timing out; reference symbols: _read_message, _peer_reader_loop, and
connect_timeout_seconds.
- Around line 206-216: The code currently marks blocks as seen before acceptance
causing permanently ignored retries; update submit_block and
_handle_block_gossip so they only call _remember_seen_block after _accept_block
returns True (i.e., move the _remember_seen_block call to after successful
acceptance), or alternatively implement a separate orphan/pending cache (e.g.,
_orphan_blocks or _pending_blocks) to record blocks that failed _accept_block so
they can be reprocessed later; adjust logic in submit_block,
_handle_block_gossip, and any _seen_blocks checks to consult the new pending
cache and only add to _seen_blocks when _accept_block succeeds.
In `@src/minichain/node.py`:
- Around line 92-111: In start(), ensure the SQLiteStorage instance is closed if
subsequent initialization fails: create the SQLiteStorage(db_path) and assign to
a local variable (or self._storage), then wrap the calls to
self._initialize_chain_manager(...) and Mempool(...) in a try/except; on
exception call storage.close() (or self._storage.close() if assigned) to release
the DB handle, then re-raise the error; only set self._storage,
self._chain_manager, self._mempool and self._running = True after successful
initialization to avoid leaking the SQLite connection when
_initialize_chain_manager or Mempool raises.
In `@src/minichain/serialization.py`:
- Around line 47-56: The current serialize_canonical function builds an ordered
mapping via _to_field_map but calls json.dumps(..., sort_keys=True) which
re-sorts keys and breaks the explicit canonical order; change
serialize_canonical to call json.dumps without sort_keys (or with
sort_keys=False) so the insertion order from _to_field_map is preserved when
encoding (keep ensure_ascii, separators, and the utf-8 encode as-is) to
guarantee consistent canonical serialization for consensus hashing.
In `@tests/test_consensus.py`:
- Around line 34-35: Add a noqa comment to the ValueError raise to suppress Ruff
TRY003: locate the check where heights and timestamps lengths are compared (the
raise ValueError("heights and timestamps must have the same length") inside the
function that uses heights and timestamps) and append " # noqa: TRY003" to that
raise statement so the linter ignores the TRY003 rule for this built-in
exception message.
- Around line 76-96: Refactor the two tests to use pytest.raises context
managers: add "import pytest" at top, change test_mining_honors_stop_event to
wrap the mine_block_header(...) call in "with pytest.raises(MiningInterrupted,
match='interrupted')" and change test_mining_raises_when_nonce_range_exhausted
to wrap mine_block_header(...) in "with pytest.raises(RuntimeError, match='No
valid nonce found')" instead of the try/except/else blocks; keep the same header
inputs and stop_event/start_nonce/max_nonce arguments so the assertions still
target mine_block_header and the MiningInterrupted/RuntimeError cases.
In `@tests/test_crypto.py`:
- Around line 40-47: The test test_invalid_signature_is_rejected defines an
unused variable signing_key which triggers a linter warning; rename it to
_signing_key (or _) to indicate it's intentionally unused while keeping the rest
of the test logic (calls to generate_key_pair, sign_message, and
verify_signature) unchanged so the test still creates keys, signs with
other_signing_key, and asserts verification fails.
In `@tests/test_mempool.py`:
- Around line 120-124: Split the compound assertions that use `and` into
separate assertions to satisfy PT018: replace `assert selected[2].sender ==
sender_a and selected[2].nonce == 0` with two asserts `assert selected[2].sender
== sender_a` and `assert selected[2].nonce == 0`, and similarly replace `assert
selected[3].sender == sender_a and selected[3].nonce == 1` with `assert
selected[3].sender == sender_a` and `assert selected[3].nonce == 1` in the test
that calls `mempool.get_transactions_for_mining(state, limit=4)` so failures
report the specific failing condition.
In `@tests/test_serialization.py`:
- Around line 68-101: The pytest.mark.parametrize decorator in
test_required_and_unexpected_fields_are_rejected uses a CSV string
"payload,serializer,expected" which violates PT006; change the argnames to an
explicit tuple ("payload", "serializer", "expected") so pytest receives multiple
parameter names correctly when calling the test with payload, serializer, and
expected (this affects the decorator on
test_required_and_unexpected_fields_are_rejected in
tests/test_serialization.py).
In `@tests/test_storage.py`:
- Line 59: The test function test_store_and_load_block_round_trip uses the wrong
type for the tmp_path fixture; change the parameter annotation from
pytest.TempPathFactory to pathlib.Path, and add/import pathlib.Path if not
already imported so the signature becomes tmp_path: pathlib.Path; ensure any
other references expect a pathlib.Path (not a TempPathFactory) and remove the
incorrect pytest.TempPathFactory annotation.
In `@tests/test_transaction_gossip.py`:
- Around line 61-63: The test call to _wait_until currently passes timeout=3.0
and triggers ASYNC109/TRY003; update the _wait_until function signature (the
async function named _wait_until) to rename the parameter from timeout to
timeout_seconds and change its TimeoutError raise to raise TimeoutError() with
no message, then update this call site (where seen_by_node is used) to pass
timeout_seconds=3.0 instead of timeout=3.0 so all callers match the new
parameter name.
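Several of the findings above come down to subtle stdlib behavior. The serialize_canonical finding, for instance, is that json.dumps(..., sort_keys=True) re-sorts an already-ordered mapping. A minimal sketch of the difference; the field names below are illustrative, not MiniChain's actual schema:

```python
import json

# Illustrative field map in an explicit canonical order (not MiniChain's real schema).
field_map = {"version": 1, "previous_hash": "00" * 32, "nonce": 7}

# sort_keys=True re-sorts keys alphabetically, discarding the intended order...
sorted_bytes = json.dumps(
    field_map, ensure_ascii=True, separators=(",", ":"), sort_keys=True
).encode("utf-8")

# ...whereas the default preserves dict insertion order (guaranteed since Python 3.7).
ordered_bytes = json.dumps(
    field_map, ensure_ascii=True, separators=(",", ":")
).encode("utf-8")

print(sorted_bytes[:12])   # begins with the "nonce" key (alphabetical order)
print(ordered_bytes[:14])  # begins with the "version" key (insertion order)
```

Either form is deterministic on its own; the hazard is mixing the two conventions, because consensus hashes must byte-match across nodes.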
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (39)
.github/workflows/ci.yml, .gitignore, Makefile, README.md, pyproject.toml, src/minichain/__init__.py, src/minichain/__main__.py, src/minichain/block.py, src/minichain/chain.py, src/minichain/consensus.py, src/minichain/crypto.py, src/minichain/genesis.py, src/minichain/mempool.py, src/minichain/merkle.py, src/minichain/mining.py, src/minichain/network.py, src/minichain/node.py, src/minichain/serialization.py, src/minichain/state.py, src/minichain/storage.py, src/minichain/transaction.py, tests/test_block.py, tests/test_block_gossip.py, tests/test_chain.py, tests/test_cli.py, tests/test_consensus.py, tests/test_crypto.py, tests/test_genesis.py, tests/test_mempool.py, tests/test_merkle.py, tests/test_mining.py, tests/test_network.py, tests/test_node.py, tests/test_scaffold.py, tests/test_serialization.py, tests/test_state.py, tests/test_storage.py, tests/test_transaction.py, tests/test_transaction_gossip.py
```
# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
```
🧹 Nitpick | 🔵 Trivial
Python-cache entries are inserted mid-LaTeX-section
These entries land between # easy-todo and # xcolor, splitting an otherwise coherent LaTeX-auxiliary block. Move them after all the LaTeX entries (e.g. just before the new docs/ line) to keep the file organized.
♻️ Proposed reorganization

```diff
-# easy-todo
-*.lod
-
-# MiniChain local planning docs (do not commit)
-issues.md
-architectureProposal.md
-
-# Python caches and virtualenvs
-__pycache__/
-*.py[cod]
-.pytest_cache/
-.ruff_cache/
-.venv/
-
-# xcolor
+# easy-todo
+*.lod
+
+# xcolor
```

Then append at the end of the file:

```diff
+# MiniChain local planning docs (do not commit)
+issues.md
+architectureProposal.md
+
+# Python caches and virtualenvs
+__pycache__/
+*.py[cod]
+.pytest_cache/
+.ruff_cache/
+.venv/
+
+docs/
```
```toml
[project.optional-dependencies]
network = [
    "py-libp2p>=0.2.0",
]
dev = [
    "pytest>=8.0",
    "ruff>=0.7.0",
]
```
🧩 Analysis chain
🏁 Script executed:

```shell
rg -n --type=py -C2 'minichain\.network|libp2p' tests src
```

Repository: StabilityNexus/MiniChain
Length of output: 947
Include network extra in dev dependencies, or configure CI to install .[dev,network].
Three test files import from minichain.network, which requires the network extra:
- tests/test_transaction_gossip.py (line 8)
- tests/test_network.py (line 8)
- tests/test_block_gossip.py (line 16)
Since CI installs only .[dev], these tests will fail when they attempt to import MiniChainNetwork and NetworkConfig.
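If the dev-extra route is chosen, one low-duplication variant is a self-referencing extra, which modern pip supports; this assumes the project name in pyproject.toml is minichain:

```toml
[project.optional-dependencies]
network = [
    "py-libp2p>=0.2.0",
]
dev = [
    "minichain[network]",  # self-referencing extra: dev installs also pull in network
    "pytest>=8.0",
    "ruff>=0.7.0",
]
```

This keeps the libp2p pin in one place rather than repeating it in both extras.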
```python
def add_block(self, block: Block) -> str:
    """Add a block to chain storage and update canonical tip when appropriate."""
    block_hash = block.hash().hex()
    if block_hash in self._blocks_by_hash:
        return "duplicate"

    parent_hash = block.header.previous_hash
    if parent_hash not in self._blocks_by_hash:
        raise ChainValidationError(f"Unknown parent block: {parent_hash}")

    self._blocks_by_hash[block_hash] = block
    self._heights[block_hash] = block.header.block_height

    try:
        candidate_path, candidate_state = self._replay_state_for_tip(block_hash)
    except ChainValidationError:
        self._blocks_by_hash.pop(block_hash, None)
        self._heights.pop(block_hash, None)
        raise
```
Rollback on any replay exception to avoid corrupt state.
add_block inserts into internal indexes before validation. If replay throws a non-ChainValidationError, the block stays indexed and the chain manager becomes inconsistent. Clean up and wrap unexpected exceptions.
🛠️ Proposed fix

```diff
     try:
         candidate_path, candidate_state = self._replay_state_for_tip(block_hash)
     except ChainValidationError:
         self._blocks_by_hash.pop(block_hash, None)
         self._heights.pop(block_hash, None)
         raise
+    except Exception as exc:
+        self._blocks_by_hash.pop(block_hash, None)
+        self._heights.pop(block_hash, None)
+        raise ChainValidationError(f"Block validation failed: {exc}") from exc
```

🧰 Tools
🪛 Ruff (0.15.2)
[warning] 106-106: Avoid specifying long messages outside the exception class
(TRY003)
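The rollback pattern described above can be sketched in isolation. Everything here other than the cleanup bookkeeping (the class name, the fake replay, the string block hashes) is a stand-in, not MiniChain's real code:

```python
class ChainValidationError(Exception):
    """Raised when a block cannot be validated against the chain."""


class TinyChainManager:
    """Stand-in chain manager; only the rollback bookkeeping is the point."""

    def __init__(self) -> None:
        self._blocks_by_hash: dict[str, int] = {}
        self._heights: dict[str, int] = {}

    def _replay_state_for_tip(self, block_hash: str):
        # Stand-in replay: simulate an *unexpected* failure (not a
        # ChainValidationError) for one particular hash.
        if block_hash == "bad":
            raise KeyError("missing account during replay")
        return [block_hash], {}

    def add_block(self, block_hash: str, height: int) -> str:
        if block_hash in self._blocks_by_hash:
            return "duplicate"
        # Speculative insert, mirroring the reviewed add_block.
        self._blocks_by_hash[block_hash] = height
        self._heights[block_hash] = height
        try:
            self._replay_state_for_tip(block_hash)
        except ChainValidationError:
            self._blocks_by_hash.pop(block_hash, None)
            self._heights.pop(block_hash, None)
            raise
        except Exception as exc:
            # Roll back on *any* failure, then normalize the error type so
            # callers always see ChainValidationError.
            self._blocks_by_hash.pop(block_hash, None)
            self._heights.pop(block_hash, None)
            raise ChainValidationError(f"Block validation failed: {exc}") from exc
        return "extended"


manager = TinyChainManager()
try:
    manager.add_block("bad", 1)
except ChainValidationError as err:
    print(type(err).__name__)            # ChainValidationError, not KeyError
print("bad" in manager._blocks_by_hash)  # False: the speculative insert was rolled back
```

An alternative design is to validate before inserting at all, which removes the need for rollback but requires _replay_state_for_tip to work against a not-yet-indexed block.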
```python
def verify_signature(message: bytes, signature: bytes, verify_key: VerifyKey) -> bool:
    """Verify a detached Ed25519 signature."""
    _require_nacl()
    try:
        verify_key.verify(message, signature)
    except BadSignatureError:
        return False
    return True
```
🧩 Analysis chain
🌐 Web query:
PyNaCl VerifyKey.verify exceptions BadSignatureError ValueError documentation
💡 Result:
nacl.signing.VerifyKey.verify() can raise:
- nacl.exceptions.BadSignatureError: when the signed message fails verification (signature forged/corrupt). This is explicitly documented in the method docstring, and it comes from the underlying nacl.bindings.crypto_sign_open(...) call. [1]
- nacl.exceptions.ValueError (a CryptoError + stdlib ValueError mix-in): when you pass a detached signature= whose length is not exactly nacl.bindings.crypto_sign_BYTES (64 bytes for Ed25519). [1][2]
- nacl.exceptions.TypeError: when you pass a detached signature= that is not a bytes object. [1][2]
Also, if your encoder.decode(...) fails (e.g., invalid base64/hex input), that decode step can raise exceptions before verification happens (commonly TypeError/ValueError, depending on the encoder). [1][2]
Sources: PyNaCl VerifyKey.verify implementation/docs [1] and PyNaCl exceptions documentation (noting ValueError is a CryptoError subclass) [2].
[1] (pynacl.readthedocs.io)
[2] (pynacl.readthedocs.io)
Citations:
- 1: https://pynacl.readthedocs.io/en/latest/_modules/nacl/signing/
- 2: https://pynacl.readthedocs.io/en/latest/exceptions/?utm_source=openai
🏁 Script executed:

```shell
cd src && grep -r "verify_signature" . -B 3 -A 3
```

Repository: StabilityNexus/MiniChain
Length of output: 1225

🏁 Script executed:

```shell
cat -n src/minichain/crypto.py | head -100
```

Repository: StabilityNexus/MiniChain
Length of output: 3375
Catch ValueError from signature length validation.
VerifyKey.verify() raises ValueError if the detached signature is not exactly 64 bytes (for Ed25519). The current code only catches BadSignatureError, so a malformed signature will raise an exception instead of returning False. Update the exception handler:
Suggested fix

```diff
     try:
         verify_key.verify(message, signature)
-    except BadSignatureError:
+    except (BadSignatureError, ValueError):
         return False
```
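The resulting control flow is easy to isolate. This sketch deliberately does not import PyNaCl; BadSignatureError and StubVerifyKey below are stand-ins that reproduce the documented failure modes, so only the exception-handling shape is demonstrated:

```python
class BadSignatureError(Exception):
    """Stand-in for nacl.exceptions.BadSignatureError."""


class StubVerifyKey:
    """Stub reproducing VerifyKey.verify's documented failure modes."""

    def __init__(self, expected_signature: bytes) -> None:
        self._expected = expected_signature

    def verify(self, message: bytes, signature: bytes) -> bytes:
        if len(signature) != 64:  # Ed25519 detached signatures are 64 bytes
            raise ValueError("signature must be exactly 64 bytes long")
        if signature != self._expected:
            raise BadSignatureError("signature forged or corrupt")
        return message


def verify_signature(message: bytes, signature: bytes, verify_key: StubVerifyKey) -> bool:
    """Return True only on success; malformed input yields False, not an exception."""
    try:
        verify_key.verify(message, signature)
    except (BadSignatureError, ValueError):
        return False
    return True


key = StubVerifyKey(expected_signature=b"\x01" * 64)
print(verify_signature(b"msg", b"\x01" * 64, key))  # True
print(verify_signature(b"msg", b"\x02" * 64, key))  # False: bad signature
print(verify_signature(b"msg", b"too-short", key))  # False: wrong length, no raise
```

The design choice is that verify_signature presents a boolean predicate API, so every anticipated input problem should map to False rather than leak library-specific exceptions to callers.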
```python
def create_genesis_block(config: GenesisConfig) -> Block:
    """Build the genesis block (height 0, no PoW check required)."""
    config.validate()
    header = BlockHeader(
        version=config.version,
        previous_hash=GENESIS_PREVIOUS_HASH,
        merkle_root=blake2b_digest(b"").hex(),
        timestamp=config.timestamp,
        difficulty_target=config.difficulty_target,
        nonce=0,
        block_height=0,
    )
    return Block(header=header, transactions=[])


def apply_genesis_block(state: State, block: Block, config: GenesisConfig) -> None:
    """Apply genesis allocations to an empty state."""
    config.validate()
    if state.accounts:
        raise ValueError("Genesis can only be applied to an empty state")
    if block.header.block_height != 0:
        raise ValueError("Genesis block height must be 0")
    if block.header.previous_hash != GENESIS_PREVIOUS_HASH:
        raise ValueError("Genesis previous_hash must be all zeros")
    if block.transactions:
        raise ValueError("Genesis block must not contain transactions")

    expected_merkle_root = blake2b_digest(b"").hex()
    if block.header.merkle_root != expected_merkle_root:
        raise ValueError("Genesis merkle_root must commit to an empty tx list")

    for address, balance in config.initial_balances.items():
        state.set_account(address, Account(balance=balance, nonce=0))
```
🧩 Analysis chain
🏁 Script executed:

```shell
find src -name "merkle.py" -type f
```

Repository: StabilityNexus/MiniChain
Length of output: 91

🏁 Script executed:

```shell
fd -t f "merkle.py" src/
```

Repository: StabilityNexus/MiniChain
Length of output: 91

🏁 Script executed:

```shell
rg -n "def compute_merkle_root" -A 10 src/
```

Repository: StabilityNexus/MiniChain
Length of output: 713

🏁 Script executed:

```shell
rg -n "blake2b_digest" -B 2 -A 2 src/minichain/
```

Repository: StabilityNexus/MiniChain
Length of output: 3600

🏁 Script executed:

```shell
rg -n "compute_merkle_root" src/minichain/ -B 2 -A 2
```

Repository: StabilityNexus/MiniChain
Length of output: 957

🏁 Script executed:

```shell
rg -n "has_valid_merkle_root\|merkle_root" src/minichain/block.py -B 2 -A 2
```

Repository: StabilityNexus/MiniChain
Length of output: 50

🏁 Script executed:

```shell
rg -n "def.*valid\|merkle_root" src/minichain/block.py -B 1 -A 1
```

Repository: StabilityNexus/MiniChain
Length of output: 50

🏁 Script executed:

```shell
cat -n src/minichain/block.py | head -80
```

Repository: StabilityNexus/MiniChain
Length of output: 3382
Use compute_merkle_root([]) for genesis block merkle_root to maintain consistent empty-tree convention.
Genesis block currently hardcodes blake2b_digest(b"") for the merkle root. Since compute_merkle_root([]) uses the same convention (merkle.py:15), consolidating to the helper function would centralize the consensus rule and avoid duplication. This improves maintainability when the empty-tree rule is updated.
🛠️ Suggested fix

```diff
-from minichain.crypto import blake2b_digest
+from minichain.merkle import compute_merkle_root
@@
-        merkle_root=blake2b_digest(b"").hex(),
+        merkle_root=compute_merkle_root([]).hex(),
@@
-    expected_merkle_root = blake2b_digest(b"").hex()
+    expected_merkle_root = compute_merkle_root([]).hex()
```

🧰 Tools
🪛 Ruff (0.15.2)
[warning] 60-60: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 62-62: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 64-64: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 66-66: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 70-70: Avoid specifying long messages outside the exception class
(TRY003)
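The consolidation is small enough to sketch standalone. The blake2b parameters here are an assumption (MiniChain's actual digest configuration isn't shown in this excerpt); the point is only that both genesis call sites route through one helper:

```python
import hashlib


def blake2b_digest(data: bytes) -> bytes:
    # Assumed: default blake2b parameters; MiniChain's real config may differ.
    return hashlib.blake2b(data).digest()


def compute_merkle_root(txids: list) -> bytes:
    """Merkle root over transaction ids; the empty tree hashes empty bytes."""
    if not txids:
        return blake2b_digest(b"")
    raise NotImplementedError("pairwise reduction elided; only the empty case matters here")


# Both the genesis header and the genesis check route through the same helper,
# so the empty-tree convention is defined in exactly one place.
genesis_merkle_root = compute_merkle_root([]).hex()
expected_merkle_root = compute_merkle_root([]).hex()
print(genesis_merkle_root == expected_merkle_root)  # True
```

If the empty-tree rule ever changes (say, to a domain-separated hash), only compute_merkle_root needs editing and the genesis check stays consistent automatically.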
```python
def test_invalid_signature_is_rejected() -> None:
    signing_key, verify_key = generate_key_pair()
    other_signing_key, _ = generate_key_pair()
    message = b"minichain-message"

    wrong_signature = sign_message(message, other_signing_key)

    assert not verify_signature(message, wrong_signature, verify_key)
```
Fix unused variable per Ruff warning.
signing_key is unused; use _signing_key or _ to keep lint clean.
✅ Minimal fix

```diff
-    signing_key, verify_key = generate_key_pair()
+    _signing_key, verify_key = generate_key_pair()
```

🧰 Tools
🪛 Ruff (0.15.2)
[warning] 41-41: Unpacked variable signing_key is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
```python
    selected = mempool.get_transactions_for_mining(state, limit=4)

    assert [tx.fee for tx in selected] == [8, 4, 1, 10]
    assert selected[2].sender == sender_a and selected[2].nonce == 0
    assert selected[3].sender == sender_a and selected[3].nonce == 1
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n tests/test_mempool.py | sed -n '110,135p'
```

Repository: StabilityNexus/MiniChain
Length of output: 1094
🌐 Web query:
PT018 pytest linting rule
💡 Result:
PT018 is the flake8-pytest-style / Ruff (PT) rule pytest-composite-assertion: it flags assert statements that combine multiple independent conditions (e.g., via and/or) and recommends splitting them into separate asserts to make failures easier to diagnose. [1][2]
Example (violates PT018):

```python
assert a and b
assert not (a or b)
```

Preferred:

```python
assert a
assert b
assert not a
assert not b
```

[2]
Where you’ll see it:
- Ruff rule docs: pytest-composite-assertion (PT018) [2]
- flake8-pytest-style rule list: “assertion should be broken down into multiple parts” [1]
Sources
[1] flake8-pytest-style (PyPI) — PT018 description
[2] Ruff docs — pytest-composite-assertion (PT018)
Split compound asserts to satisfy PT018 and improve test failure clarity.
Lines 123-124 use and operators in assertions, which violates PT018 (pytest-composite-assertion). Splitting them into separate assertions makes failures more specific—for example, if the nonce check fails, the error message will pinpoint exactly which value caused the issue.
🔧 Suggested fix
```diff
-    assert selected[2].sender == sender_a and selected[2].nonce == 0
-    assert selected[3].sender == sender_a and selected[3].nonce == 1
+    assert selected[2].sender == sender_a
+    assert selected[2].nonce == 0
+    assert selected[3].sender == sender_a
+    assert selected[3].nonce == 1
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 123-123: Assertion should be broken down into multiple parts
Break down assertion into multiple parts
(PT018)
[warning] 124-124: Assertion should be broken down into multiple parts
Break down assertion into multiple parts
(PT018)
```python
@pytest.mark.parametrize(
    "payload,serializer,expected",
    [
        (
            {
                "sender": "aa" * 20,
                "recipient": "bb" * 20,
                "amount": 1,
                "nonce": 1,
                "timestamp": 1,
            },
            serialize_transaction,
            "Missing required fields: fee",
        ),
        (
            {
                "version": 0,
                "previous_hash": "00" * 32,
                "merkle_root": "11" * 32,
                "timestamp": 1,
                "difficulty_target": 1,
                "nonce": 1,
                "block_height": 1,
                "extra": "x",
            },
            serialize_block_header,
            "Unexpected fields: extra",
        ),
    ],
)
def test_required_and_unexpected_fields_are_rejected(
    payload: dict[str, object], serializer: Callable[[dict[str, object]], bytes], expected: str
) -> None:
    with pytest.raises(ValueError, match=expected):
```
🧩 Analysis chain
🌐 Web query:
Ruff PT006 pytest parametrize rule
💡 Result:
Ruff rule PT006 = pytest-parametrize-names-wrong-type. It flags when the first argument (argnames) passed to pytest.mark.parametrize is the “wrong” type (e.g., using ("x",) for a single name, or using a CSV string / list when your config expects a tuple for multiple names). [1]
How to fix

- Single parameter: use a string:

  ```python
  @pytest.mark.parametrize("param", [1, 2, 3])
  def test_foo(param): ...
  ```

- Multiple parameters: use the type required by your config (commonly a tuple of strings):

  ```python
  @pytest.mark.parametrize(("foo", "bar"), [(1, 1), (2, 2)])
  def test_foo(foo, bar): ...
  ```

[1]
Configure the expected style (Ruff)
Set lint.flake8-pytest-style.parametrize-names-type to control what Ruff expects for multiple parameter names. [1]
Sources
[1] Ruff docs: PT006 pytest-parametrize-names-wrong-type rule description and option (lint.flake8-pytest-style.parametrize-names-type).
🏁 Script executed:
# Find and examine the test file
find . -name "test_serialization.py" -type fRepository: StabilityNexus/MiniChain
Length of output: 97
🏁 Script executed:
# Check for Ruff configuration in the repository
fd -e toml -e ini -e yaml -e yml | head -20Repository: StabilityNexus/MiniChain
Length of output: 92
🏁 Script executed:
# Check the actual code in test_serialization.py around lines 68-101
sed -n '60,110p' ./tests/test_serialization.py
Repository: StabilityNexus/MiniChain
Length of output: 1247
🏁 Script executed:
# Check Ruff configuration to see what parametrize-names-type is set to
cat ruff.toml
Repository: StabilityNexus/MiniChain
Length of output: 1883
🏁 Script executed:
# Check if there's a more specific configuration for PT006/parametrize-names-type in pyproject.toml
cat pyproject.toml | grep -A 10 -B 5 "parametrize-names-type\|flake8-pytest-style"
Repository: StabilityNexus/MiniChain
Length of output: 50
🌐 Web query:
Ruff PT006 parametrize-names-type default configuration
💡 Result:
For Ruff rule PT006 (pytest-parametrize-names-wrong-type), the expected type for multiple @pytest.mark.parametrize argument names is controlled by:
lint.flake8-pytest-style.parametrize-names-type (docs.astral.sh)
Default: tuple (docs.astral.sh)
Configure (pyproject.toml):
[tool.ruff.lint.flake8-pytest-style]
parametrize-names-type = "tuple"  # or: "list" | "csv"
Supported values:
- "csv" → "name1,name2"
- "tuple" (default) → ("name1", "name2")
- "list" → ["name1", "name2"]
(docs.astral.sh)
Citations:
- 1: https://docs.astral.sh/ruff/rules/pytest-parametrize-names-wrong-type/?utm_source=openai
- 2: https://docs.astral.sh/ruff/settings/?utm_source=openai
- 3: https://docs.astral.sh/ruff/settings/?utm_source=openai
Use tuple format for parametrize argnames to comply with PT006.
The code currently uses a CSV string "payload,serializer,expected" for multiple parameter names. Ruff's PT006 rule (enabled in this project) expects a tuple ("payload", "serializer", "expected") by default when passing multiple argnames to pytest.mark.parametrize.
🔧 Suggested fix
-@pytest.mark.parametrize(
- "payload,serializer,expected",
+@pytest.mark.parametrize(
+ ("payload", "serializer", "expected"),
[
(
            {
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 69-69: Wrong type passed to first argument of pytest.mark.parametrize; expected tuple
Use a tuple for the first argument
(PT006)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_serialization.py` around lines 68 - 101, The
pytest.mark.parametrize decorator in
test_required_and_unexpected_fields_are_rejected uses a CSV string
"payload,serializer,expected" which violates PT006; change the argnames to an
explicit tuple ("payload", "serializer", "expected") so pytest receives multiple
parameter names correctly when calling the test with payload, serializer, and
expected (this affects the decorator on
test_required_and_unexpected_fields_are_rejected in
tests/test_serialization.py).
    return block


def test_store_and_load_block_round_trip(tmp_path: pytest.TempPathFactory) -> None:
🧩 Analysis chain
🏁 Script executed:
cat -n tests/test_storage.py | head -150
Repository: StabilityNexus/MiniChain
Length of output: 5596
🏁 Script executed:
head -15 tests/test_storage.py
Repository: StabilityNexus/MiniChain
Length of output: 512
🌐 Web query:
pytest tmp_path fixture type annotation pathlib.Path TempPathFactory
💡 Result:
- `tmp_path` fixture is a `pathlib.Path`, so annotate it as `Path`. [1][2]
- `tmp_path_factory` fixture is a `pytest.TempPathFactory` (class `TempPathFactory`), so annotate it as `TempPathFactory` (import from `pytest`). [2]
from pathlib import Path
import pytest
from pytest import TempPathFactory

def test_something(tmp_path: Path) -> None:
    p = tmp_path / "file.txt"
    p.write_text("hi", encoding="utf-8")

@pytest.fixture(scope="session")
def shared_dir(tmp_path_factory: TempPathFactory) -> Path:
    return tmp_path_factory.mktemp("data")  # returns a Path
[1][2]
🏁 Script executed:
grep -n "tmp_path.*TempPathFactory" tests/test_storage.py
Repository: StabilityNexus/MiniChain
Length of output: 238
tmp_path should be typed as pathlib.Path (not pytest.TempPathFactory).
tmp_path is a Path fixture; TempPathFactory is for tmp_path_factory. This avoids type-checker confusion.
🔧 Suggested fix
+from pathlib import Path
import pytest
@@
-def test_store_and_load_block_round_trip(tmp_path: pytest.TempPathFactory) -> None:
+def test_store_and_load_block_round_trip(tmp_path: Path) -> None:
@@
def test_state_and_metadata_persist_across_restart(
- tmp_path: pytest.TempPathFactory,
+ tmp_path: Path,
) -> None:
@@
def test_atomic_persist_rolls_back_on_metadata_failure(
- tmp_path: pytest.TempPathFactory,
+ tmp_path: Path,
) -> None:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_storage.py` at line 59, The test function
test_store_and_load_block_round_trip uses the wrong type for the tmp_path
fixture; change the parameter annotation from pytest.TempPathFactory to
pathlib.Path, and add/import pathlib.Path if not already imported so the
signature becomes tmp_path: pathlib.Path; ensure any other references expect a
pathlib.Path (not a TempPathFactory) and remove the incorrect
pytest.TempPathFactory annotation.
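To make the distinction concrete without pulling in pytest, here is a minimal stand-in sketch (`store_and_load_block` and the `chain.db` file name are illustrative, not from this PR) showing that a `tmp_path`-style argument is just a `pathlib.Path` pointing at a fresh directory:

```python
import tempfile
from pathlib import Path

# Illustrative stand-in for what the fixture provides (not pytest internals):
# the test body annotated `tmp_path: Path` simply receives a Path to a
# fresh temporary directory and uses ordinary pathlib operations on it.
def store_and_load_block(tmp_path: Path) -> str:
    db_file = tmp_path / "chain.db"  # hypothetical database file name
    db_file.write_text("block-bytes", encoding="utf-8")
    return db_file.read_text(encoding="utf-8")

with tempfile.TemporaryDirectory() as tmp_dir:
    result = store_and_load_block(Path(tmp_dir))
print(result)  # → block-bytes
```

Annotating the parameter as `Path` lets type checkers verify the `/` joins and `write_text` calls, which `pytest.TempPathFactory` would not.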
await _wait_until(
    lambda: len(seen_by_node["b"]) == 1 and len(seen_by_node["c"]) == 1,
    timeout=3.0,
🧩 Analysis chain
🏁 Script executed:
# First, let's check if the file exists and examine the relevant lines
cd tests && wc -l test_transaction_gossip.py
Repository: StabilityNexus/MiniChain
Length of output: 98
🏁 Script executed:
# Read the file to see the current state around lines 61-63
sed -n '55,70p' tests/test_transaction_gossip.py
Repository: StabilityNexus/MiniChain
Length of output: 725
🏁 Script executed:
# Read around lines 97-103
sed -n '90,110p' tests/test_transaction_gossip.py
Repository: StabilityNexus/MiniChain
Length of output: 496
🏁 Script executed:
# Find the _wait_until function definition
rg -n "async def _wait_until" tests/
Repository: StabilityNexus/MiniChain
Length of output: 256
🌐 Web query:
ASYNC109 TRY003 linter rules Python
💡 Result:
- ASYNC109 (Ruff / flake8-async): async-function-with-timeout — flags async def functions that accept a timeout parameter (e.g., async def task(timeout): ...). The intent is to avoid "passing timeouts around" and instead apply timeouts at the call site using the async framework's timeout context manager (e.g., asyncio.timeout, trio.fail_after, anyio.move_on_after). [1]
- TRY003 (Ruff / tryceratops): raise-vanilla-args — flags "long" exception messages constructed at the raise site (e.g., raise MyError(f"...")). The recommendation is to move the message formatting into the exception class (e.g., define __init__ and raise MyError(data)), so messages stay consistent and the exception is more reusable. [2]
Sources: [1] [2]
🏁 Script executed:
# Check if test_block_gossip.py has the same issue
sed -n '138,150p' tests/test_block_gossip.py
Repository: StabilityNexus/MiniChain
Length of output: 383
🏁 Script executed:
# Search for all calls to _wait_until in both files to confirm the pattern
rg "_wait_until" tests/ -A 2Repository: StabilityNexus/MiniChain
Length of output: 1066
Rename the timeout parameter to timeout_seconds and drop the custom exception message to avoid ASYNC109/TRY003.
The _wait_until function (lines 97–103) has a parameter named timeout in an async function, which triggers ASYNC109. It also raises TimeoutError with a literal message string, which triggers TRY003. Renaming the parameter and raising the exception without a message resolves both violations.
🔧 Suggested fix
-async def _wait_until(predicate, *, timeout: float) -> None:
- deadline = asyncio.get_running_loop().time() + timeout
+async def _wait_until(predicate, *, timeout_seconds: float) -> None:
+ deadline = asyncio.get_running_loop().time() + timeout_seconds
while asyncio.get_running_loop().time() < deadline:
if predicate():
return
await asyncio.sleep(0.05)
+ raise TimeoutError
Update the call site (lines 61–63) accordingly:
+ raise TimeoutErrorUpdate the call site (lines 61–63) accordingly:
await _wait_until(
lambda: len(seen_by_node["b"]) == 1 and len(seen_by_node["c"]) == 1,
- timeout=3.0,
+ timeout_seconds=3.0,
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:
await _wait_until(
    lambda: len(seen_by_node["b"]) == 1 and len(seen_by_node["c"]) == 1,
    timeout=3.0,
After:
await _wait_until(
    lambda: len(seen_by_node["b"]) == 1 and len(seen_by_node["c"]) == 1,
    timeout_seconds=3.0,
)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_transaction_gossip.py` around lines 61 - 63, The test call to
_wait_until currently passes timeout=3.0 and triggers ASYNC109/TRY003; update
the _wait_until function signature (the async function named _wait_until) to
rename the parameter from timeout to timeout_seconds and change its TimeoutError
raise to raise TimeoutError() with no message, then update this call site (where
seen_by_node is used) to pass timeout_seconds=3.0 instead of timeout=3.0 so all
callers match the new parameter name.
Superseded by #41. Closing this PR to keep review history in one place.
Addressed Issues:
PR body:
Part of #8
Depends on #39
Summary
Validation
Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit
New Features
Chores