feat: genesis block creation and initial state bootstrap #24

arunabha003 wants to merge 17 commits into StabilityNexus:main
Conversation
Walkthrough

Introduces foundational infrastructure and core blockchain modules for the MiniChain project. Adds a build system (Makefile, pyproject.toml), a CI/CD pipeline (GitHub Actions), project configuration (.gitignore, README), cryptographic and transaction handling, block structures with Merkle root validation, state management with transaction application, genesis block initialization, and a comprehensive test suite covering all major components.
Sequence Diagram

sequenceDiagram
participant User
participant CLI as CLI/Main
participant Crypto
participant Transaction
participant State
participant Block
participant Merkle
User->>CLI: Invoke node with host/port
CLI->>CLI: Parse arguments
Note over CLI: (host: 127.0.0.1, port: 7000)
User->>Crypto: Generate keypair
Crypto-->>User: (signing_key, verify_key)
User->>Crypto: derive_address(verify_key)
Crypto-->>User: sender_address
User->>Transaction: Create transaction
User->>Transaction: sign(signing_key)
Transaction->>Crypto: serialize_verify_key
Transaction->>Crypto: sign_message
Transaction-->>User: Signed transaction
User->>Transaction: verify()
Transaction->>Crypto: verify_signature
Crypto-->>Transaction: signature valid
Transaction-->>User: verification result
User->>State: Initialize state
State-->>User: empty state
User->>State: apply_transaction(tx)
State->>Transaction: verify()
State->>State: Validate nonce & balance
State->>State: Update balances & nonce
State-->>User: State updated
User->>Block: Create block with transactions
Block->>Merkle: compute_merkle_root
Merkle-->>Block: merkle_root
Block->>Block: update_header_merkle_root
Block-->>User: Block with header
User->>State: apply_block(block)
State->>State: Take snapshot
loop For each transaction
State->>State: apply_transaction(tx)
end
alt All transactions valid
State-->>User: Block applied
else Transaction fails
State->>State: Restore snapshot
State-->>User: Block rejected, state rolled back
end
sequenceDiagram
participant Caller
participant Genesis
participant GenesisConfig
participant Block
participant State
participant Crypto
Caller->>GenesisConfig: Create with balances, timestamp, etc.
GenesisConfig->>GenesisConfig: validate()
GenesisConfig-->>Caller: Config validated
Caller->>Genesis: create_genesis_block(config)
Genesis->>Genesis: Build BlockHeader (height=0, nonce=0, prev_hash=zeros)
Genesis->>Crypto: blake2b_digest(b"")
Crypto-->>Genesis: merkle_root
Genesis->>Block: Create Block with header, empty transactions
Genesis-->>Caller: Genesis Block
Caller->>State: Initialize empty state
State-->>Caller: Empty state
Caller->>Genesis: apply_genesis_block(state, block, config)
Genesis->>Genesis: Validate block fields & state is empty
Genesis->>Genesis: Verify merkle_root for empty transactions
loop For each initial_balance entry
Genesis->>State: set_account(address, Account(balance, nonce=0))
end
Genesis-->>Caller: State initialized
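For orientation, the bootstrap flow in this diagram corresponds to roughly the following caller-side code. This is an illustrative sketch only: the module paths and the GenesisConfig constructor arguments are assumptions inferred from identifiers used elsewhere in this PR.

```python
# Illustrative sketch of the genesis bootstrap flow diagrammed above.
# Module paths (minichain.genesis, minichain.state) and the GenesisConfig
# constructor arguments are assumptions based on names used in this PR.
from minichain.genesis import GenesisConfig, create_genesis_block, apply_genesis_block
from minichain.state import State

config = GenesisConfig(initial_balances={"aa" * 20: 1_000})  # address -> starting balance
block = create_genesis_block(config)       # validates config; height=0, nonce=0, zero previous_hash
state = State()                            # must be empty before genesis is applied

apply_genesis_block(state, block, config)  # credits the initial balances
assert state.get_account("aa" * 20).balance == 1_000
```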
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 28
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 15-23: Update the CI workflow to add pip caching and a Python
version matrix: change the "Setup Python" step to use a matrix for
python-version (e.g., include multiple versions like 3.10/3.11/3.12) and update
dependent steps to run across the matrix; modify the "Install dependencies" step
to restore and save a pip cache (use actions/cache with a key that includes
runner OS and python-version and a restore-keys pattern, and base the cache path
on ~/.cache/pip) so installs use the cache and reduce re-downloads; ensure the
cache key also incorporates a lockfile or requirements hash
(pyproject.lock/requirements.txt) to invalidate when dependencies change.
In @.gitignore:
- Around line 261-270: Move the Python ignore patterns out of the LaTeX
auxiliary section and consolidate them into a dedicated "Python" block (grouping
__pycache__/, *.py[cod], .pytest_cache/, .ruff_cache/, .venv/) rather than
leaving them mid-LaTeX; additionally add standard build/dist ignores such as
dist/, build/, and *.egg-info/ (optionally add wheel artifacts like *.whl) so
that editable installs and builds won’t accidentally commit artifacts. Identify
the existing entries __pycache__, *.py[cod], .pytest_cache, .ruff_cache, .venv
in the diff and relocate them into the new Python section and append dist/,
build/, and *.egg-info/ to that block.
In `@Makefile`:
- Line 3: The Makefile is missing conventional "all" and "clean" targets which
checkmake flags; add an "all" phony target that depends on the default
build/test targets (e.g., include "install" or "test" as appropriate) and add a
"clean" phony target that removes common build/test artifacts (e.g.,
__pycache__, .pytest_cache, .ruff_cache, dist/build directories, .venv,
.mypy_cache) so callers can reset state; update the .PHONY line (currently
listing install dev-install test lint format start-node) to include all and
clean and ensure the "clean" recipe uses safe rm commands (or git clean) to
remove those caches.
In `@pyproject.toml`:
- Around line 17-19: The dependency list under the "network" extra in
pyproject.toml uses the wrong PyPI package name "py-libp2p"; change it to the
correct PyPI package name "libp2p" (e.g., replace "py-libp2p>=0.2.0" with
"libp2p>=0.2.0") so pip can resolve the package when installing the network
extra.
In `@README.md`:
- Around line 68-70: The README's "Repository Layout" code snippet incorrectly
lists gitignored files issues.md and architectureProposal.md; update the
README.md snippet by removing the lines referencing issues.md and
architectureProposal.md (the Repository Layout block) and, if desired, add a
brief parenthetical note that those artifacts are intentionally gitignored so
new clones won't contain them; ensure you update the exact snippet text in
README.md that currently enumerates test_scaffold.py, issues.md, and
architectureProposal.md.
In `@src/minichain/block.py`:
- Around line 29-56: Add concise public docstrings to each public method listed:
hash_hex, transaction_hashes, computed_merkle_root, computed_merkle_root_hex,
update_header_merkle_root, has_valid_merkle_root, and Block.hash; for each
method include a one-line description of behavior, any important inputs/state
used (e.g., transactions or header), and the return type/value (e.g., hex string
or bytes or bool), and for update_header_merkle_root note the side effect on
header.merkle_root; attach these docstrings directly above the method
definitions in the Block class so callers have clear API contracts.
- Around line 13-30: BlockHeader currently accepts invalid values; add a
__post_init__ method on the BlockHeader dataclass that validates fields and
raises ValueError on bad input. Specifically, check that version, block_height,
timestamp, difficulty_target, and nonce are integers and non-negative; ensure
previous_hash and merkle_root are non-empty strings and valid hex (attempt
bytes.fromhex or similar) and optionally validate expected byte lengths if
known; provide clear error messages mentioning the offending field (e.g.,
"BlockHeader.previous_hash invalid hex") so callers can diagnose failures. Use
the class name BlockHeader and its existing methods (hash, hash_hex) to locate
where to add __post_init__.
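A minimal sketch of the validation described in these two findings follows. The field names match those cited above; the 32-byte hash length check and the exact dataclass layout are assumptions.

```python
# Sketch only: BlockHeader.__post_init__ validation as suggested above.
# Field names come from the finding; the 32-byte digest length is an
# assumption (BLAKE2b-256 is used elsewhere in this PR).
from dataclasses import dataclass


@dataclass
class BlockHeader:
    version: int
    block_height: int
    timestamp: int
    difficulty_target: int
    nonce: int
    previous_hash: str
    merkle_root: str

    def __post_init__(self) -> None:
        """Reject malformed headers early with a field-specific ValueError."""
        for name in ("version", "block_height", "timestamp", "difficulty_target", "nonce"):
            value = getattr(self, name)
            if not isinstance(value, int) or value < 0:
                raise ValueError(f"BlockHeader.{name} must be a non-negative integer")
        for name in ("previous_hash", "merkle_root"):
            value = getattr(self, name)
            if not isinstance(value, str) or not value:
                raise ValueError(f"BlockHeader.{name} must be a non-empty hex string")
            try:
                raw = bytes.fromhex(value)
            except ValueError as exc:
                raise ValueError(f"BlockHeader.{name} invalid hex") from exc
            if len(raw) != 32:  # assumed digest length
                raise ValueError(f"BlockHeader.{name} must encode 32 bytes")
```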
In `@src/minichain/crypto.py`:
- Around line 41-44: The blake2b_digest function currently relies on the library
default for output length; update the call in blake2b_digest to pass
digest_size=32 explicitly (while keeping RawEncoder) so the function matches its
32-byte docstring and is robust to future defaults; locate the blake2b_digest
function and add the digest_size=32 keyword to the blake2b(...) invocation.
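For example, assuming blake2b_digest wraps PyNaCl's nacl.hash.blake2b as the finding implies, the explicit digest size would look like this:

```python
# Sketch: make the 32-byte output explicit instead of relying on the
# library default. Assumes the existing helper wraps nacl.hash.blake2b
# with RawEncoder, as described in the finding above.
from nacl.encoding import RawEncoder
from nacl.hash import blake2b


def blake2b_digest(data: bytes) -> bytes:
    """Return the 32-byte BLAKE2b digest of ``data``."""
    return blake2b(data, digest_size=32, encoder=RawEncoder)
```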
In `@src/minichain/genesis.py`:
- Line 24: The initial_balances dataclass field is declared as a plain dict
(initial_balances: dict[str, int]) so even with frozen=True its contents remain
mutable; add a __post_init__ on GenesisConfig that wraps the incoming dict with
types.MappingProxyType using object.__setattr__(self, "initial_balances",
MappingProxyType(orig_dict)) to enforce read-only semantics (or alternatively
document the mutability caveat), and import MappingProxyType from types; locate
the GenesisConfig class and update its __post_init__ accordingly.
- Around line 76-81: create_genesis_state currently triggers duplicate
validation because create_genesis_block calls config.validate() and
apply_genesis_block calls it again; fix by adding a non-validating path and
using it in create_genesis_state: add an internal helper (e.g.,
create_genesis_block_no_validate or a create_genesis_block(..., validate: bool =
True) flag) or a corresponding apply_genesis_block option to skip the second
validate, then have create_genesis_state call the validating
create_genesis_block once and pass the non-validating variant into
apply_genesis_block, updating function signatures and internal calls to respect
the new skip-validation parameter.
- Around line 56-73: The function apply_genesis_block currently validates
structural invariants but doesn't ensure the block header fields match the
GenesisConfig; update apply_genesis_block to compare block.header.version with
config.version, block.header.timestamp with config.timestamp, and
block.header.difficulty_target with config.difficulty_target and raise
ValueError with clear messages when any mismatch occurs so the genesis block and
config remain consistent.
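Taken together, the GenesisConfig and apply_genesis_block findings could be addressed along these lines. Field names follow the findings above; the defaults and the standalone helper are assumptions, not the PR's actual code.

```python
# Sketch covering two of the genesis findings: a read-only initial_balances
# mapping and a header/config consistency check.
from dataclasses import dataclass
from types import MappingProxyType


@dataclass(frozen=True)
class GenesisConfig:
    initial_balances: dict[str, int]
    version: int = 1
    timestamp: int = 0
    difficulty_target: int = 0

    def __post_init__(self) -> None:
        # frozen=True does not freeze the dict's contents; wrap it so callers
        # cannot mutate balances after construction.
        object.__setattr__(self, "initial_balances", MappingProxyType(dict(self.initial_balances)))


def check_header_matches_config(block, config: GenesisConfig) -> None:
    """Raise ValueError when the genesis header disagrees with the config."""
    if block.header.version != config.version:
        raise ValueError("genesis block version does not match config")
    if block.header.timestamp != config.timestamp:
        raise ValueError("genesis block timestamp does not match config")
    if block.header.difficulty_target != config.difficulty_target:
        raise ValueError("genesis block difficulty_target does not match config")
```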
In `@src/minichain/merkle.py`:
- Around line 8-9: _hash_pair concatenates left and right with no domain
separation, allowing internal-node hashes to collide with leaf hashes; change
internal-node hashing to include an internal-node prefix (e.g., b'\x01') before
left+right in _hash_pair and ensure leaf hashing uses a distinct leaf prefix
(e.g., b'\x00') in the function that produces leaf hashes (update the leaf-hash
function if present) so leaves and internal nodes are cryptographically
separated; adjust any callers/tests that depend on the raw concatenated digest
values accordingly.
- Around line 19-20: The current Merkle tree builder duplicates the last element
when a level has odd length (the line using level.append(level[-1])), which
allows a collision attack; replace this behavior by padding with a constant
zero-hash instead: define a module-level ZERO_HASH computed with the same hash
function used for node hashing (e.g., HASH_FUNC(b"").digest()) and, inside the
function that iterates over 'level' (the block that currently does
level.append(level[-1])), change it to append ZERO_HASH when len(level) % 2 == 1
so the tree is padded deterministically; ensure ZERO_HASH has the same byte
length/type as other node hashes and update any tests that assumed duplication
behavior.
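Both Merkle findings amount to the same hardening pattern: prefix leaves and internal nodes differently, and pad odd-length levels with a fixed zero hash. A self-contained sketch (hashlib is used here for illustration; the project's own blake2b_digest helper could be substituted):

```python
# Sketch of a Merkle root with domain separation and zero-hash padding,
# as suggested in the two findings above.
from hashlib import blake2b

LEAF_PREFIX = b"\x00"   # prefix for leaf hashes
NODE_PREFIX = b"\x01"   # prefix for internal-node hashes

def _digest(data: bytes) -> bytes:
    return blake2b(data, digest_size=32).digest()

ZERO_HASH = _digest(b"")

def _hash_leaf(payload: bytes) -> bytes:
    return _digest(LEAF_PREFIX + payload)

def _hash_pair(left: bytes, right: bytes) -> bytes:
    return _digest(NODE_PREFIX + left + right)

def compute_merkle_root(payloads: list[bytes]) -> bytes:
    if not payloads:
        return ZERO_HASH
    level = [_hash_leaf(p) for p in payloads]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(ZERO_HASH)  # pad deterministically; never duplicate the last node
        level = [_hash_pair(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```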
In `@src/minichain/serialization.py`:
- Line 6: Replace the deprecated typing.Mapping import by importing Mapping from
collections.abc: update the import statement that currently references typing
(the line with "from typing import Any, Mapping") so that Mapping comes from
collections.abc while keeping Any from typing; ensure any other references to
Mapping in the module remain unchanged (e.g., function signatures using Mapping)
and run linters to confirm Ruff UP035 is resolved.
- Around line 47-56: The function serialize_canonical currently passes
sort_keys=True to json.dumps which alphabetically reorders keys and overrides
the intended insertion order from _to_field_map; remove the sort_keys=True
argument so json.dumps uses the insertion order provided by canonical_map
(thereby making field_order control the canonical key order), and update the
docstring or tests if needed to reflect that serialize_canonical relies on
_to_field_map insertion order for canonicalization.
- Around line 28-29: The parameter annotation for _to_field_map currently uses
Mapping[str, Any] | object which collapses to object and loses type-safety;
update the signature to accurately express "mapping or attribute-bearing object"
by either replacing the union with Mapping[str, Any] | Any (simplest) or define
and use a small Protocol (e.g., an AttributeAccess protocol with __getattr__ /
attribute names) and use Mapping[str, Any] | AttributeAccessProtocol; update any
related type imports and adjust callers if needed to satisfy the new type.
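The three serialization findings combine naturally; a standalone sketch follows (the real _to_field_map and serialize_canonical signatures in serialization.py may differ):

```python
# Sketch of the serialization fixes above: Mapping imported from
# collections.abc, and json.dumps left to follow the field map's insertion
# order (no sort_keys). Signatures are illustrative, not the module's own.
import json
from collections.abc import Mapping
from typing import Any


def _to_field_map(payload: Mapping[str, Any] | Any, field_order: tuple[str, ...]) -> dict[str, Any]:
    """Build an insertion-ordered dict that follows field_order."""
    if isinstance(payload, Mapping):
        return {name: payload[name] for name in field_order}
    return {name: getattr(payload, name) for name in field_order}


def serialize_canonical(payload: Mapping[str, Any] | Any, field_order: tuple[str, ...]) -> bytes:
    canonical_map = _to_field_map(payload, field_order)
    # No sort_keys=True: field_order alone controls the canonical key order.
    return json.dumps(canonical_map, separators=(",", ":")).encode("utf-8")
```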
In `@src/minichain/state.py`:
- Around line 58-67: The code in apply_transaction currently deducts total_cost
= transaction.amount + transaction.fee from sender.balance but only credits
recipient.balance with transaction.amount, effectively burning transaction.fee;
either make this explicit in documentation or (recommended) add a fee recipient
and credit the fee: modify apply_transaction (and callers such as apply_block)
to accept a fee_recipient (or fee_pool) parameter and after sender.balance -=
total_cost and recipient.balance += transaction.amount also do
fee_recipient.balance += transaction.fee (or accumulate into a fee_pool
account), updating any nonce/emit logic accordingly so the fee is not silently
destroyed.
- Around line 47-76: The long inline exception messages violate TRY003; define
concise StateTransitionError subclasses (e.g., SignatureVerificationError,
NonceMismatchError, InsufficientBalanceError, BlockApplicationError) that
encapsulate formatting logic and any detail attributes, then update
apply_transaction to raise those subclasses (replace the long messages in the
signature check, the nonce check, and the balance check) and update apply_block
to raise a short BlockApplicationError while chaining the original exception;
ensure each new exception class formats its full message internally so raise
sites in apply_transaction and apply_block remain terse.
- Around line 40-43: get_account currently mutates self.accounts by
auto-creating Account() for unseen addresses which causes "ghost" zero-balance
accounts to persist when apply_transaction fails; change checks to use a
non-mutating read-only lookup (e.g., use self.accounts.get(address) or a new
peek_account(address) that returns None instead of creating) during validation
in apply_transaction (for both sender and recipient/non-existence checks) and
only call the mutating path (or assign new Account()) after all nonce/balance
checks pass; keep apply_block behavior unchanged but ensure apply_transaction
does not perform side-effectful writes during validation.
- Around line 69-76: The apply_block method currently applies transactions
without verifying the block header's Merkle root; update apply_block to call
block.has_valid_merkle_root() before applying transactions, raise a
StateTransitionError (or re-use existing exception handling path) if the check
fails, and ensure the state revert (using snapshot = self.copy() and restoring
self.accounts) is performed only after a failed validation or transaction error;
reference apply_block and Block.has_valid_merkle_root() to locate the change.
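The four state.py findings fit together; one way they might compose is sketched below. Attribute names beyond those quoted above (tx.sender, tx.recipient) are assumptions, and the real module's snapshot/copy helpers may differ.

```python
# Sketch tying the state.py findings together: terse exception subclasses,
# a non-mutating lookup during validation, an explicit fee recipient, and a
# Merkle-root check with rollback in apply_block. amount, fee, nonce, and
# has_valid_merkle_root() come from the findings; other names are assumed.
from dataclasses import dataclass, field


class StateTransitionError(Exception):
    pass

class SignatureVerificationError(StateTransitionError):
    def __init__(self) -> None:
        super().__init__("transaction signature verification failed")

class NonceMismatchError(StateTransitionError):
    def __init__(self, expected: int, got: int) -> None:
        super().__init__(f"nonce mismatch: expected {expected}, got {got}")

class InsufficientBalanceError(StateTransitionError):
    def __init__(self, needed: int, available: int) -> None:
        super().__init__(f"insufficient balance: need {needed}, have {available}")

class BlockApplicationError(StateTransitionError):
    def __init__(self) -> None:
        super().__init__("block could not be applied")


@dataclass
class Account:
    balance: int = 0
    nonce: int = 0


@dataclass
class State:
    accounts: dict[str, Account] = field(default_factory=dict)

    def peek_account(self, address: str) -> Account | None:
        """Read-only lookup that never creates a ghost account."""
        return self.accounts.get(address)

    def apply_transaction(self, tx, fee_recipient: str) -> None:
        if not tx.verify():
            raise SignatureVerificationError()
        sender = self.peek_account(tx.sender) or Account()
        if tx.nonce != sender.nonce:
            raise NonceMismatchError(sender.nonce, tx.nonce)
        total_cost = tx.amount + tx.fee
        if sender.balance < total_cost:
            raise InsufficientBalanceError(total_cost, sender.balance)
        # All checks passed; only now write to the mapping.
        self.accounts[tx.sender] = sender
        recipient = self.accounts.setdefault(tx.recipient, Account())
        miner = self.accounts.setdefault(fee_recipient, Account())
        sender.balance -= total_cost
        sender.nonce += 1
        recipient.balance += tx.amount
        miner.balance += tx.fee  # fee is credited, not silently burned

    def apply_block(self, block, fee_recipient: str) -> None:
        if not block.has_valid_merkle_root():
            raise BlockApplicationError()
        snapshot = {addr: Account(acc.balance, acc.nonce) for addr, acc in self.accounts.items()}
        try:
            for tx in block.transactions:
                self.apply_transaction(tx, fee_recipient)
        except StateTransitionError as exc:
            self.accounts = snapshot  # roll back every partial change
            raise BlockApplicationError() from exc
```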
In `@src/minichain/transaction.py`:
- Around line 97-100: The code currently catches all exceptions when calling
deserialize_verify_key(self.public_key) and should instead catch
nacl.exceptions.CryptoError; change the except Exception block to except
CryptoError and add the appropriate import (e.g., from nacl.exceptions import
CryptoError) near the top of src/minichain/transaction.py so only PyNaCl crypto
errors are handled while other exceptions propagate.
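A small sketch of the narrower exception handling, assuming deserialize_verify_key is exposed by minichain.crypto and that the caller treats bad key bytes as a failed verification:

```python
# Sketch: catch only PyNaCl's CryptoError when deserializing stored key
# bytes, so unrelated bugs propagate instead of being swallowed.
from nacl.exceptions import CryptoError

from minichain.crypto import deserialize_verify_key


def verify_key_or_none(public_key: bytes):
    """Return the deserialized verify key, or None if the bytes are not valid key material."""
    try:
        return deserialize_verify_key(public_key)
    except CryptoError:
        return None
```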
In `@tests/test_crypto.py`:
- Line 41: The variable signing_key returned by generate_key_pair is unused and
triggers RUF059; rename it to _signing_key (i.e., unpack as _signing_key,
verify_key = generate_key_pair()) so the linter treats it as intentionally
unused, leaving verify_key unchanged; update any nearby assertions or uses to
reference verify_key only.
In `@tests/test_genesis.py`:
- Around line 1-15: Add a runtime guard to skip tests when PyNaCl is unavailable
by importing pytest and calling pytest.importorskip("nacl") at the top of the
file before importing anything from minichain.crypto (e.g., before the
blake2b_digest import or any imports that pull in nacl); this ensures tests
referencing blake2b_digest, create_genesis_block, create_genesis_state,
apply_genesis_block, GENESIS_PREVIOUS_HASH, Account, and State are skipped
cleanly instead of raising ImportError during collection.
- Around line 60-85: Replace the try/except/else anti-pattern in the two tests
with pytest.raises: in test_genesis_requires_empty_state and
test_genesis_block_rejects_wrong_previous_hash, call apply_genesis_block(state,
block, config) inside a with pytest.raises(ValueError) as excinfo: context and
then assert the expected substring appears in str(excinfo.value) (e.g., "empty
state" and "previous_hash" respectively); ensure you import pytest at top if not
already present.
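For reference, the skip guard described above is a two-line addition at the top of the test module (shown here with an assumed first import):

```python
# Sketch: skip the whole module cleanly when PyNaCl is missing, before any
# minichain.crypto import pulls in nacl. The import below is an example.
import pytest

pytest.importorskip("nacl")

from minichain.crypto import blake2b_digest  # noqa: E402  (import after the skip guard)
```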
In `@tests/test_scaffold.py`:
- Around line 23-26: The test_component_modules_are_importable test currently
asserts imported is not None which is redundant because importlib.import_module
raises ModuleNotFoundError on failure; remove the assert imported is not None
and either leave the import call (so failures raise) or replace the assertion
with a meaningful check (e.g., use hasattr(imported, "expected_symbol") to
validate an exported attribute). Update the test to iterate COMPONENT_MODULES,
call importlib.import_module(f"minichain.{module}"), and perform either no
further assertion or a targeted hasattr check for richer validation.
In `@tests/test_serialization.py`:
- Around line 68-69: The parametrize decorator is using a string for the
argument name list which triggers Ruff PT006; update the pytest.mark.parametrize
call in tests/test_serialization.py to use a tuple of argument names instead of
the string (i.e., replace the "payload,serializer,expected" argument with a
tuple like ("payload", "serializer", "expected") so the decorator call on the
test uses a tuple for the parameter names.
In `@tests/test_state.py`:
- Around line 154-157: The test is relying on get_account's lazy-creation to
pass after rollback; instead assert absence from the in-memory mapping before
calling get_account. Modify the assertions around state and rollback to first
check that recipient_address is not present in state.accounts (e.g., using a
membership or key lookup on state.accounts) to confirm rollback removed it, then
optionally call state.get_account(recipient_address) only if you need to inspect
a created account; keep existing checks for sender_address using
state.get_account(sender_address) and its balance/nonce as before. Ensure you
reference state.accounts and get_account and assert recipient_address absence
before any get_account call.
- Around line 53-54: The tests repeat an unused-key pattern by assigning `_ =
recipient_key` after unpacking from generate_key_pair(); instead destructure
only the used value to avoid the lint-visible unused variable. Replace
occurrences like "recipient_key, recipient_verify = generate_key_pair(); _ =
recipient_key" with a single-variable unpack that discards the unused key (e.g.,
"_, recipient_verify = generate_key_pair()") in each test in tests/test_state.py
where generate_key_pair() is called.
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"

- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    python -m pip install -e .[dev]
🧹 Nitpick | 🔵 Trivial
Add pip caching and a Python version matrix for broader coverage.
Every CI run re-downloads all packages from scratch (no caching), and only Python 3.11 is tested. Both are easy wins:
⚙️ Suggested improvements
- name: Setup Python
  uses: actions/setup-python@v5
  with:
-   python-version: "3.11"
+   python-version: ${{ matrix.python-version }}
+   cache: "pip"

And, at the job level rather than inside the step:

+ strategy:
+   matrix:
+     python-version: ["3.11", "3.12", "3.13"]
# MiniChain local planning docs (do not commit)
issues.md
architectureProposal.md

# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
🧹 Nitpick | 🔵 Trivial
Python cache/venv ignores are inserted mid-LaTeX block; also missing standard build artifact patterns.
The Python entries are placed between LaTeX auxiliary sections, which makes the file harder to read. More importantly, dist/, build/, and *.egg-info/ are absent — running pip install -e . or python -m build will produce these and they could be accidentally committed.
🔧 Suggested additions
Add a dedicated Python section (ideally at the end of the file, grouped with the other Python entries):
# Python caches and virtualenvs
__pycache__/
*.py[cod]
.pytest_cache/
.ruff_cache/
.venv/
+
+# Python build/packaging artifacts
+dist/
+build/
+*.egg-info/
@@ -0,0 +1,21 @@
PYTHON ?= python3

.PHONY: install dev-install test lint format start-node
🧹 Nitpick | 🔵 Trivial
Consider adding conventional all and clean targets.
checkmake flags their absence. A clean target is particularly useful for resetting build/cache artifacts (__pycache__, .pytest_cache, .ruff_cache, etc.).
🔧 Suggested additions
-.PHONY: install dev-install test lint format start-node
+.PHONY: all install dev-install test lint format start-node clean
+all: dev-install
+clean:
+	find . -type d -name "__pycache__" -exec rm -rf {} +
+	find . -type d -name ".pytest_cache" -exec rm -rf {} +
+	find . -type d -name ".ruff_cache" -exec rm -rf {} +
+	find . -type d -name "*.egg-info" -exec rm -rf {} +
🧰 Tools
🪛 checkmake (0.2.2)
[warning] 3-3: Missing required phony target "all"
(minphony)
[warning] 3-3: Missing required phony target "clean"
(minphony)
network = [
    "py-libp2p>=0.2.0",
]
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Verify that py-libp2p does not exist on PyPI, and libp2p does.
curl -s -o /dev/null -w "%{http_code}" https://pypi.org/pypi/py-libp2p/json
echo " <- py-libp2p (expect 404)"
curl -s -o /dev/null -w "%{http_code}" https://pypi.org/pypi/libp2p/json
echo " <- libp2p (expect 200)"

Repository: StabilityNexus/MiniChain
Length of output: 124
Wrong PyPI package name for the libp2p dependency.
The Python libp2p library is published on PyPI as libp2p — py-libp2p is only the GitHub repository name. Running pip install -e .[network] with py-libp2p>=0.2.0 will fail with a "package not found" error.
Proposed fix
[project.optional-dependencies]
network = [
-    "py-libp2p>=0.2.0",
+    "libp2p>=0.2.0",
]
test_scaffold.py
issues.md
architectureProposal.md
issues.md and architectureProposal.md are listed in the repo layout but are gitignored.
These files will not be present in any fresh clone, making the layout section misleading for new contributors. Remove them from the "Repository Layout" snippet since they are intentionally excluded from version control.
📝 Proposed fix
tests/
  test_scaffold.py
-issues.md
-architectureProposal.md
def test_genesis_requires_empty_state() -> None:
    config = GenesisConfig(initial_balances={"dd" * 20: 1})
    block = create_genesis_block(config)
    state = State()
    state.set_account("ff" * 20, Account(balance=1, nonce=0))

    try:
        apply_genesis_block(state, block, config)
    except ValueError as exc:
        assert "empty state" in str(exc)
    else:
        raise AssertionError("Expected ValueError for non-empty state")


def test_genesis_block_rejects_wrong_previous_hash() -> None:
    config = GenesisConfig(initial_balances={"ee" * 20: 10})
    block = create_genesis_block(config)
    block.header = replace(block.header, previous_hash="11" * 32)
    state = State()

    try:
        apply_genesis_block(state, block, config)
    except ValueError as exc:
        assert "previous_hash" in str(exc)
    else:
        raise AssertionError("Expected ValueError for invalid previous_hash")
🛠️ Refactor suggestion | 🟠 Major
Replace try/except/else exception checks with pytest.raises().
The try/except/else idiom in test_genesis_requires_empty_state and test_genesis_block_rejects_wrong_previous_hash is an anti-pattern in pytest (PT017). pytest.raises() provides cleaner, more idiomatic assertion and better failure messages.
♻️ Proposed fix
def test_genesis_requires_empty_state() -> None:
    config = GenesisConfig(initial_balances={"dd" * 20: 1})
    block = create_genesis_block(config)
    state = State()
    state.set_account("ff" * 20, Account(balance=1, nonce=0))
-    try:
-        apply_genesis_block(state, block, config)
-    except ValueError as exc:
-        assert "empty state" in str(exc)
-    else:
-        raise AssertionError("Expected ValueError for non-empty state")
+    with pytest.raises(ValueError, match="empty state"):
+        apply_genesis_block(state, block, config)


def test_genesis_block_rejects_wrong_previous_hash() -> None:
    config = GenesisConfig(initial_balances={"ee" * 20: 10})
    block = create_genesis_block(config)
    block.header = replace(block.header, previous_hash="11" * 32)
    state = State()
-    try:
-        apply_genesis_block(state, block, config)
-    except ValueError as exc:
-        assert "previous_hash" in str(exc)
-    else:
-        raise AssertionError("Expected ValueError for invalid previous_hash")
+    with pytest.raises(ValueError, match="previous_hash"):
+        apply_genesis_block(state, block, config)

🧰 Tools
🪛 Ruff (0.15.1)
[warning] 69-69: Found assertion on exception exc in except block, use pytest.raises() instead
(PT017)
[warning] 71-71: Avoid specifying long messages outside the exception class
(TRY003)
[warning] 83-83: Found assertion on exception exc in except block, use pytest.raises() instead
(PT017)
[warning] 85-85: Avoid specifying long messages outside the exception class
(TRY003)
def test_component_modules_are_importable() -> None:
    for module in COMPONENT_MODULES:
        imported = importlib.import_module(f"minichain.{module}")
        assert imported is not None
🧹 Nitpick | 🔵 Trivial
assert imported is not None is always True — the real guard is the implicit ModuleNotFoundError.
importlib.import_module never returns None; it either returns the module or raises. The assertion adds no detection value. Drop it (or replace with a hasattr / attribute check if you want richer validation):
🔧 Proposed simplification
def test_component_modules_are_importable() -> None:
    for module in COMPONENT_MODULES:
-        imported = importlib.import_module(f"minichain.{module}")
-        assert imported is not None
+        importlib.import_module(f"minichain.{module}")
@pytest.mark.parametrize(
    "payload,serializer,expected",
🧹 Nitpick | 🔵 Trivial
Use a tuple for the pytest.mark.parametrize argument name list.
Ruff PT006 flags the string form; the tuple form is idiomatic.
♻️ Proposed fix
@pytest.mark.parametrize(
-    "payload,serializer,expected",
+    ("payload", "serializer", "expected"),
    [
🧰 Tools
🪛 Ruff (0.15.1)
[warning] 69-69: Wrong type passed to first argument of pytest.mark.parametrize; expected tuple
Use a tuple for the first argument
(PT006)
recipient_key, recipient_verify = generate_key_pair()
_ = recipient_key
🧹 Nitpick | 🔵 Trivial
Simplify the repeated unused-key pattern.
The _ = recipient_key assignment after unpack appears in five test functions. Destructuring directly is more idiomatic and avoids the lint-visible unused variable.
♻️ Proposed fix (apply to all five occurrences)
-recipient_key, recipient_verify = generate_key_pair()
-_ = recipient_key
+_, recipient_verify = generate_key_pair()
assert state.get_account(sender_address).balance == 100
assert state.get_account(sender_address).nonce == 0
assert state.get_account(recipient_address).balance == 0
assert state.get_account(recipient_address).nonce == 0
🧹 Nitpick | 🔵 Trivial
Rollback assertions rely on get_account's lazy-creation side effect.
After a block rollback, recipient_address is absent from state.accounts. get_account auto-creates it with balance=0, so the assertions pass — but they pass for the wrong reason regardless of whether the rollback worked. A stronger check is to assert the address is absent from state.accounts before calling get_account.
♻️ Proposed fix
assert state.get_account(sender_address).balance == 100
assert state.get_account(sender_address).nonce == 0
-assert state.get_account(recipient_address).balance == 0
-assert state.get_account(recipient_address).nonce == 0
+assert recipient_address not in state.accounts
Addressed Issues:
Part of #8
Depends on #21
Summary
Validation
python -m ruff check src tests
python -m pytest

Checklist
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact.
Summary by CodeRabbit
New Features
Chores
Tests