Implement keetanetwork-block crate with block and operation types #3
Conversation
Pull request overview
This PR introduces a new keetanetwork-block crate that provides foundational block and operation type definitions for the Keeta blockchain. The implementation emphasizes zero-copy design with lifetime parameters to support no_std environments while maintaining flexibility through optional alloc and std features.
Changes:
- Added shared type definitions for blocks (V1 and V2) and all 11 blockchain operations
- Implemented address validation based on algorithm prefix (secp256k1, Ed25519, secp256r1)
- Configured feature flags to support `no_std`, `alloc`, and `std` environments
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| keetanetwork-block/Cargo.toml | Defines feature flags for std/alloc/no_std support and removes unused dependencies |
| keetanetwork-block/src/lib.rs | Exports public API with conditional compilation for alloc-dependent types |
| keetanetwork-block/src/types.rs | Implements all block types, operation structures, and address validation logic |
@rkeene do you suggest putting the remaining types here or in a different PR?

I think it's fine to include that change in this PR since it's a dependency -- otherwise you would need to re-org and re-base without gaining much from it here.
```rust
pub enum AdjustMethodRelative {
    Add,
    Remove,
    Unknown(u8),
}
```
I am not sure this is a good approach. We should just return a `Result` instead when using an unsupported `AdjustMethod`.
keetanetwork-block/src/types.rs (outdated)

```rust
pub fn new(header: BlockHeader<'a>, operations: Vec<Operation<'a>>) -> Self {
    Self { version: BlockVersion::V1, header, operations }
}

/// Create a new V2 block
pub fn new_v2(header: BlockHeader<'a>, operations: Vec<Operation<'a>>) -> Self {
    Self { version: BlockVersion::V2, header, operations }
}
```
We may want a `KeetaBlockBuilder` instead of `new_v*`:

```rust
let block = KeetaBlockBuilder::from(BlockVersion::V*)
    .with_block_header(header)
    .with_operations(operations)
    .build();
```

Something like so.
I added KeetaBlockBuilder, but I set the header in the constructor since it's not optional for either version.
The builder should be in a draft PR and not here.
I think the changes cover the keetanetwork-block work we scoped. Summary of implementation:

Unrelated Changes

All tests passing. Ready for review.
```rust
#[derive(Debug, Clone, Copy, Sequence)]
pub struct TokenRate<'a> {
    /// Token public key
    pub token: Bytes<'a>,
```
Why do we use undifferentiated `Bytes` here instead of something higher-level and more specific (account)?
This layer mirrors the ASN.1 (all OCTET STRING); semantic validation happens upstream in TS via assertKeyType(). We can add aliases for clarity: `pub type TokenId<'a> = Bytes<'a>;`. That gives no compile-time safety, but it documents intent. Would you prefer newtype wrappers instead?
We could use the existing types but we would need to make those crates partially no_std.
a newtype wrapper would be better, something like:

```rust
pub struct TokenId<'a>(Bytes<'a>);

impl<'a> TokenId<'a> {
    /// Validate algorithm prefix and length
    pub fn assert_key_type(&self) -> Result<(), Error> {
        todo!()
    }

    pub fn as_bytes(&self) -> &'a [u8] {
        self.0.as_bytes()
    }
}
```
I will implement these types for Bytes fields.
Will be resolved when using the account crate.
```rust
    pub signature: &'a [u8],
}

/// Vote staple - bundles blocks with their votes
```
Vote staples make a few other guarantees as well:
- All votes have the same set of block hashes in the same order
- All blocks are referenced by the votes
- All blocks referenced in the votes are present
- All votes are of the same level of permanence (temporary or permanent)
Interesting design for packaging - I'm familiar with OCSP stapling, which bundles certificate status proofs so verifiers don't need extra roundtrips; a similar concept here with votes and blocks. These invariants make sense for that model. Want me to add them as doc comments on VoteStaple, implement a validate() method, or both?
Either is fine at this point, but the validation is important to do at some point, because it's not a valid vote staple unless those conditions are met.
Makes sense. Before jumping into the implementation, it might be worth discussing how vote staple validation should behave under different circumstances: e.g., should validate() be called on construction (ensuring invalid staples can't exist), on deserialization from the network, or deferred to the consumer? The answer likely affects where the method lives and what it returns. Happy to sync on this and then implement accordingly.
```rust
// network
let network: u64 = seq.decode()?;

// subnet
let subnet = decode_null_or_integer(seq)?;

// date
let date = read_generalized_time_bytes(seq)?;

// account
let account: OctetStringRef = seq.decode()?;

// signer
let signer = decode_v1_signer(seq)?;

// previous
let previous: OctetStringRef = seq.decode()?;

// operations
let operations = decode_operations(seq)?;

// signature
let signature: OctetStringRef = seq.decode()?;
```
These comments are pointless.
```rust
// network
let network: u64 = seq.decode()?;

// date
let date = read_generalized_time_bytes(seq)?;

// purpose
let purpose_val: u8 = seq.decode()?;
let purpose = BlockPurpose::try_from(purpose_val).map_err(|_| Tag::Integer.value_error())?;

// account
let account: OctetStringRef = seq.decode()?;

// signer
let signer = decode_v2_signer(seq)?;

// previous
let previous: OctetStringRef = seq.decode()?;

// operations
let operations = decode_operations(seq)?;

// signatures
let signatures = decode_signatures(seq)?;
```
These comments are pointless.
```rust
// date
let date_len = encode_generalized_time_len(self.header.date)?;

// account
let account_len = OctetStringRef::new(self.header.account)?.encoded_len()?;

// signer
let signer_len = encode_signer_len(&self.header.signer)?;

// previous
let previous_len = OctetStringRef::new(self.header.previous)?.encoded_len()?;

// operations
let ops_len = self.operations_encoded_len()?;

// signature
let sig_len = if !self.signatures.is_empty() {
    OctetStringRef::new(self.signatures[0])?.encoded_len()?
} else {
    Length::ZERO
```
These comments are pointless.
```rust
// network
let network_len = self.header.network.encoded_len()?;

// date
let date_len = encode_generalized_time_len(self.header.date)?;

// purpose
let purpose_len = (self.header.purpose as u8).encoded_len()?;

// account
let account_len = OctetStringRef::new(self.header.account)?.encoded_len()?;

// signer
let signer_len = encode_signer_len(&self.header.signer)?;

// previous
let previous_len = OctetStringRef::new(self.header.previous)?.encoded_len()?;

// operations
let ops_len = self.operations_encoded_len()?;

// signatures
let sig_len = self.signatures_encoded_len()?;

network_len + date_len + purpose_len + account_len + signer_len + previous_len + ops_len + sig_len
```
These comments are pointless.
```rust
#[cfg(any(feature = "alloc", feature = "std"))]
fn encode_generalized_time_len(date_bytes: &[u8]) -> der::Result<Length> {
    let content_len = Length::try_from(date_bytes.len())?;
    Header::new(Tag::GeneralizedTime, content_len)?.encoded_len() + content_len
}

/// Encodes `GeneralizedTime` from raw bytes.
#[cfg(any(feature = "alloc", feature = "std"))]
fn encode_generalized_time(date_bytes: &[u8], writer: &mut impl Writer) -> der::Result<()> {
    let content_len = Length::try_from(date_bytes.len())?;
    Header::new(Tag::GeneralizedTime, content_len)?.encode(writer)?;
    writer.write(date_bytes)
}
```
One mutates (it writes to the writer) and the other only returns a value, but you would not know this from the naming scheme.
```rust
if let Some(value) = raw.asset_id {
    let bytes = value.as_bytes();
    let copy_len = bytes.len().min(MAX_FIELD_LEN);
    result.asset_id[..copy_len].copy_from_slice(&bytes[..copy_len]);
    result.asset_id_len = copy_len;
    found_any = true;
}

if let Some(value) = raw.authority {
    let bytes = value.as_bytes();
    let copy_len = bytes.len().min(MAX_FIELD_LEN);
    result.authority[..copy_len].copy_from_slice(&bytes[..copy_len]);
    result.authority_len = copy_len;
    found_any = true;
}

if let Some(value) = raw.symbol {
    let bytes = value.as_bytes();
    let copy_len = bytes.len().min(MAX_FIELD_LEN);
    result.symbol[..copy_len].copy_from_slice(&bytes[..copy_len]);
    result.symbol_len = copy_len;
    found_any = true;
}
```
You could probably DRY this.
```rust
#[test]
fn test_decode_metadata_unknown_fields_only() {
    let mut buf = [0u8; 512];

    let json = r#"{"foo":"bar","baz":123}"#;
    let b64 = base64_encode_for_test(json.as_bytes());

    match decode_metadata(&b64, &mut buf) {
        MetadataDisplay::Unknown => {}
        _ => panic!("Expected Unknown variant"),
    }
}

#[test]
fn test_decode_metadata_invalid_base64() {
    let mut buf = [0u8; 100];
    match decode_metadata("not-valid-base64!!!", &mut buf) {
        MetadataDisplay::Invalid => {}
        _ => panic!("Expected Invalid variant"),
    }
}

#[test]
fn test_decode_metadata_invalid_json() {
    let mut buf = [0u8; 100];
    let b64 = base64_encode_for_test(b"not json at all");

    match decode_metadata(&b64, &mut buf) {
        MetadataDisplay::Invalid => {}
        _ => panic!("Expected Invalid variant"),
    }
}
```
You could data-drive these tests.
```rust
#[test]
fn test_malformed_empty_input() {
    let result = KeetaBlock::from_der(&[]);
    assert!(result.is_err(), "Empty input should fail to parse");
}

/// Test that truncated data returns an error
#[test]
fn test_malformed_truncated_data() {
    // Take a valid sample and truncate it
    let valid = samples::SET_REP;
    let truncated = &valid[..valid.len() / 2];
    let result = KeetaBlock::from_der(truncated);
    assert!(result.is_err(), "Truncated data should fail to parse");
}

/// Test that invalid outer tag returns an error
#[test]
fn test_malformed_invalid_tag() {
    // Use OCTET STRING tag (0x04) instead of SEQUENCE (0x30) or [1] (0xa1)
    let invalid = [0x04, 0x05, 0x01, 0x02, 0x03, 0x04, 0x05];
    let result = KeetaBlock::from_der(&invalid);
    assert!(result.is_err(), "Invalid outer tag should fail to parse");
}

/// Test that length exceeding data returns an error
#[test]
fn test_malformed_length_overflow() {
    // SEQUENCE with length 0xFF but only 5 bytes of data
    let invalid = [0x30, 0x81, 0xff, 0x01, 0x02, 0x03, 0x04, 0x05];
    let result = KeetaBlock::from_der(&invalid);
    assert!(result.is_err(), "Length overflow should fail to parse");
}

/// Test that extra trailing bytes are handled
#[test]
fn test_malformed_trailing_data() {
    // Take a valid sample and append extra bytes
    let valid = samples::SET_REP;
    let mut with_trailing = valid.to_vec();
```
You can probably data-drive all of these KeetaBlock::from_der tests.



This PR adds the following features:

- `keetanetwork-block` crate for block and operation types

TODO