// ---- crates/storage/codecs/src/test_utils.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/test_utils.rs

//! Test utilities for `Compact` derive macro
/// Macro to ensure that derived `Compact` types can be extended with new fields while maintaining
/// backwards compatibility.
///
/// Verifies that the unused bits in the bitflag struct remain as expected: `Zero` or `NotZero`. For
/// more on bitflag struct: [`reth_codecs_derive::Compact`].
///
/// Possible failures:
/// ### 1. `NotZero` -> `Zero`
/// This would leave no unused bits, so no new fields could be added in the future. Instead of
/// adding the new field to `T` directly, give `T` an `Option<TExtension>` field and put the new
/// user field in the `TExtension` type. **Only then, update the test to expect `Zero` for `T` and
/// add a new test for `TExtension`.**
///
/// **Goal:**
///
/// ```rust,ignore
/// {
/// struct T {
/// // ... other fields
/// ext: Option<TExtension>
/// }
///
/// // Use an extension type for new fields:
/// struct TExtension {
/// new_field_b: Option<u8>,
/// }
///
/// // Change tests
/// validate_bitflag_backwards_compat!(T, UnusedBits::Zero);
/// validate_bitflag_backwards_compat!(TExtension, UnusedBits::NotZero);
/// }
/// ```
///
/// ### 2. `Zero` -> `NotZero`
/// If the unused bits become `NotZero`, backwards compatibility is broken. There is no general
/// action item; such a change must be handled with care on a case-by-case basis.
#[macro_export]
macro_rules! validate_bitflag_backwards_compat {
($type:ty, $expected_unused_bits:expr) => {
let actual_unused_bits = <$type>::bitflag_unused_bits();
match $expected_unused_bits {
UnusedBits::NotZero => {
assert_ne!(
actual_unused_bits,
0,
"Assertion failed: `bitflag_unused_bits` for type `{}` unexpectedly went from non-zero to zero!",
stringify!($type)
);
}
UnusedBits::Zero => {
assert_eq!(
actual_unused_bits,
0,
"Assertion failed: `bitflag_unused_bits` for type `{}` unexpectedly went from zero to non-zero!",
stringify!($type)
);
}
}
};
}
/// Whether the unused bits of a `Compact` bitflag struct are zero or non-zero.
///
/// To be used with [`validate_bitflag_backwards_compat`].
#[derive(Debug)]
pub enum UnusedBits {
/// Zero bits available for a new field.
Zero,
/// Bits available for a new field.
NotZero,
}
impl UnusedBits {
/// Returns true if the variant is [`Self::NotZero`].
pub const fn not_zero(&self) -> bool {
matches!(self, Self::NotZero)
}
}
/// Tests decoding and re-encoding to ensure correctness.
pub fn test_decode<T: crate::Compact>(buf: &[u8]) {
let (decoded, _) = T::from_compact(buf, buf.len());
let mut encoded = Vec::with_capacity(buf.len());
decoded.to_compact(&mut encoded);
assert_eq!(buf, &encoded[..]);
}
// language: rust | license: Apache-2.0 | commit: 62834bd8deb86513778624a3ba33f55f4d6a1471 | retrieved: 2026-01-04T20:20:17Z
// ---- crates/storage/codecs/src/txtype.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/txtype.rs

//! Commonly used constants for transaction types.
/// Identifier parameter for legacy transaction
pub const COMPACT_IDENTIFIER_LEGACY: usize = 0;
/// Identifier parameter for EIP-2930 transaction
pub const COMPACT_IDENTIFIER_EIP2930: usize = 1;
/// Identifier parameter for EIP-1559 transaction
pub const COMPACT_IDENTIFIER_EIP1559: usize = 2;
/// For backwards-compatibility purposes, only 2 bits of the transaction type are encoded in the
/// identifier parameter. In the case of [`COMPACT_EXTENDED_IDENTIFIER_FLAG`], the full transaction
/// type is read from the buffer as a single byte.
pub const COMPACT_EXTENDED_IDENTIFIER_FLAG: usize = 3;
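The scheme described by these constants can be sketched with plain `std` code: only transaction types 0–2 fit in the 2-bit identifier, and any larger type sets the extended flag and writes the full type byte into the buffer. The function names here are illustrative, not this crate's API:

```rust
/// Encode a transaction type into a 2-bit identifier, spilling into the
/// buffer when the type does not fit (illustrative sketch, not crate API).
fn encode_tx_type(tx_type: u8, buf: &mut Vec<u8>) -> usize {
    match tx_type {
        0 => 0, // COMPACT_IDENTIFIER_LEGACY
        1 => 1, // COMPACT_IDENTIFIER_EIP2930
        2 => 2, // COMPACT_IDENTIFIER_EIP1559
        _ => {
            // COMPACT_EXTENDED_IDENTIFIER_FLAG: full type byte goes in the buffer
            buf.push(tx_type);
            3
        }
    }
}

/// Decode the transaction type, consuming one buffer byte only in the extended case.
fn decode_tx_type(identifier: usize, buf: &mut &[u8]) -> u8 {
    match identifier {
        0 | 1 | 2 => identifier as u8,
        3 => {
            let ty = buf[0];
            *buf = &buf[1..];
            ty
        }
        _ => unreachable!("identifier is only ever 2 bits"),
    }
}
```

Note how the common pre-4844 types cost zero buffer bytes, which is exactly why the 2-bit layout was kept for backwards compatibility.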
// ---- crates/storage/codecs/src/private.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/private.rs

pub use modular_bitfield;
pub use bytes::{self, Buf};
// ---- crates/storage/codecs/src/alloy/withdrawal.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/withdrawal.rs

//! Compact implementation for [`AlloyWithdrawal`]
use crate::Compact;
use alloc::vec::Vec;
use alloy_eips::eip4895::{Withdrawal as AlloyWithdrawal, Withdrawals};
use alloy_primitives::Address;
use reth_codecs_derive::add_arbitrary_tests;
/// `Withdrawal` acts as a bridge that simplifies the `Compact` implementation for
/// [`AlloyWithdrawal`].
///
/// Notice: Make sure this struct stays 1:1 with `alloy_eips::eip4895::Withdrawal`.
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct Withdrawal {
/// Monotonically increasing identifier issued by consensus layer.
index: u64,
/// Index of validator associated with withdrawal.
validator_index: u64,
/// Target address for withdrawn ether.
address: Address,
/// Value of the withdrawal in gwei.
amount: u64,
}
impl Compact for AlloyWithdrawal {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let withdrawal = Withdrawal {
index: self.index,
validator_index: self.validator_index,
address: self.address,
amount: self.amount,
};
withdrawal.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (withdrawal, _) = Withdrawal::from_compact(buf, len);
let alloy_withdrawal = Self {
index: withdrawal.index,
validator_index: withdrawal.validator_index,
address: withdrawal.address,
amount: withdrawal.amount,
};
(alloy_withdrawal, buf)
}
}
impl Compact for Withdrawals {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
self.as_ref().to_compact(buf)
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
let (withdrawals, new_buf) = Vec::from_compact(buf, buf.len());
buf = new_buf;
let alloy_withdrawals = Self::new(withdrawals);
(alloy_withdrawals, buf)
}
}
#[cfg(test)]
mod tests {
use super::*;
use proptest::proptest;
use proptest_arbitrary_interop::arb;
proptest! {
#[test]
fn roundtrip_withdrawal(withdrawal in arb::<AlloyWithdrawal>()) {
let mut compacted_withdrawal = Vec::<u8>::new();
let len = withdrawal.to_compact(&mut compacted_withdrawal);
let (decoded, _) = AlloyWithdrawal::from_compact(&compacted_withdrawal, len);
assert_eq!(withdrawal, decoded)
}
#[test]
fn roundtrip_withdrawals(withdrawals in arb::<Withdrawals>()) {
let mut compacted_withdrawals = Vec::<u8>::new();
let len = withdrawals.to_compact(&mut compacted_withdrawals);
let (decoded, _) = Withdrawals::from_compact(&compacted_withdrawals, len);
assert_eq!(withdrawals, decoded);
}
}
// Each value in the database has an extra field named `flags` that encodes metadata about
// the other fields in the value, e.g. offset and length.
//
// This check ensures we do not inadvertently add too many fields to a struct, which would
// expand the `flags` field and break backwards compatibility.
#[test]
fn test_ensure_backwards_compatibility() {
assert_eq!(Withdrawal::bitflag_encoded_bytes(), 2);
}
}
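The flags mechanism mentioned in the test above can be sketched with plain `std` code: presence of optional fields is packed into bits of leading flag bytes, so adding a field consumes bits and, once a byte boundary is crossed, grows the encoded width and breaks old readers. This is an illustrative simplification, not the derive's exact layout:

```rust
/// Pack the presence of up to 8 optional fields into one flags byte,
/// field 0 in the least significant bit (illustrative; the real derive
/// also encodes variable-length field sizes in these flag bytes).
fn pack_flags(present: &[bool]) -> u8 {
    assert!(present.len() <= 8, "one byte holds at most 8 presence bits");
    present
        .iter()
        .enumerate()
        .fold(0u8, |acc, (i, &p)| acc | ((p as u8) << i))
}

/// Check whether the field at `field_index` was marked present.
fn is_present(flags: u8, field_index: usize) -> bool {
    flags & (1 << field_index) != 0
}
```

With two flag bytes (as `Withdrawal::bitflag_encoded_bytes() == 2` asserts), a struct can grow until those 16 bits are exhausted; the next field would require a third byte, which is the regression the test guards against.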
// ---- crates/storage/codecs/src/alloy/signature.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/signature.rs

//! Compact implementation for [`Signature`]
use crate::Compact;
use alloy_primitives::{Signature, U256};
impl Compact for Signature {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
buf.put_slice(&self.r().as_le_bytes());
buf.put_slice(&self.s().as_le_bytes());
self.v() as usize
}
fn from_compact(mut buf: &[u8], identifier: usize) -> (Self, &[u8]) {
use bytes::Buf;
assert!(buf.len() >= 64);
let r = U256::from_le_slice(&buf[0..32]);
let s = U256::from_le_slice(&buf[32..64]);
buf.advance(64);
(Self::new(r, s, identifier != 0), buf)
}
}
// ---- crates/storage/codecs/src/alloy/log.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/log.rs

//! Native Compact codec impl for primitive alloy log types.
use crate::Compact;
use alloc::vec::Vec;
use alloy_primitives::{Address, Bytes, Log, LogData};
use bytes::BufMut;
/// Implement `Compact` for `LogData` and `Log`.
impl Compact for LogData {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
let mut buffer = Vec::new();
self.topics().specialized_to_compact(&mut buffer);
self.data.to_compact(&mut buffer);
buf.put(&buffer[..]);
buffer.len()
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
let (topics, new_buf) = Vec::specialized_from_compact(buf, buf.len());
buf = new_buf;
let (data, buf) = Bytes::from_compact(buf, buf.len());
let log_data = Self::new_unchecked(topics, data);
(log_data, buf)
}
}
impl Compact for Log {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
let mut buffer = Vec::new();
self.address.to_compact(&mut buffer);
self.data.to_compact(&mut buffer);
buf.put(&buffer[..]);
buffer.len()
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
let (address, new_buf) = Address::from_compact(buf, buf.len());
buf = new_buf;
let (log_data, new_buf) = LogData::from_compact(buf, buf.len());
buf = new_buf;
let log = Self { address, data: log_data };
(log, buf)
}
}
#[cfg(test)]
mod tests {
use super::{Compact, Log};
use alloy_primitives::{Address, Bytes, LogData, B256};
use proptest::proptest;
use serde::Deserialize;
proptest! {
#[test]
fn roundtrip(log: Log) {
let mut buf = Vec::<u8>::new();
let len = log.to_compact(&mut buf);
let (decoded, _) = Log::from_compact(&buf, len);
assert_eq!(log, decoded);
}
}
#[derive(Deserialize)]
struct CompactLogTestVector {
topics: Vec<B256>,
address: Address,
data: Bytes,
encoded_bytes: Bytes,
}
#[test]
fn test_compact_log_codec() {
let test_vectors: Vec<CompactLogTestVector> =
serde_json::from_str(include_str!("../../testdata/log_compact.json"))
.expect("Failed to parse test vectors");
for test_vector in test_vectors {
let log_data = LogData::new_unchecked(test_vector.topics, test_vector.data);
let log = Log { address: test_vector.address, data: log_data };
let mut buf = Vec::<u8>::new();
let len = log.clone().to_compact(&mut buf);
assert_eq!(test_vector.encoded_bytes, buf);
let (decoded, _) = Log::from_compact(&test_vector.encoded_bytes, len);
assert_eq!(log, decoded);
}
}
}
// ---- crates/storage/codecs/src/alloy/authorization_list.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/authorization_list.rs

//! Compact implementation for [`AlloyAuthorization`]
use crate::Compact;
use alloy_eips::eip7702::{Authorization as AlloyAuthorization, SignedAuthorization};
use alloy_primitives::{Address, U256};
use bytes::Buf;
use core::ops::Deref;
use reth_codecs_derive::add_arbitrary_tests;
/// `Authorization` acts as a bridge that simplifies the `Compact` implementation for
/// [`AlloyAuthorization`].
///
/// Notice: Make sure this struct stays 1:1 with `alloy_eips::eip7702::Authorization`.
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct Authorization {
chain_id: U256,
address: Address,
nonce: u64,
}
impl Compact for AlloyAuthorization {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let authorization =
Authorization { chain_id: self.chain_id, address: self.address, nonce: self.nonce() };
authorization.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (authorization, buf) = Authorization::from_compact(buf, len);
let alloy_authorization = Self {
chain_id: authorization.chain_id,
address: authorization.address,
nonce: authorization.nonce,
};
(alloy_authorization, buf)
}
}
impl Compact for SignedAuthorization {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
buf.put_u8(self.y_parity());
buf.put_slice(self.r().as_le_slice());
buf.put_slice(self.s().as_le_slice());
// `to_compact` doesn't write the length to the buffer.
// By placing the variable-length part last, we don't need to store its length either.
1 + 32 + 32 + self.deref().to_compact(buf)
}
fn from_compact(mut buf: &[u8], len: usize) -> (Self, &[u8]) {
let y_parity = buf.get_u8();
let r = U256::from_le_slice(&buf[0..32]);
buf.advance(32);
let s = U256::from_le_slice(&buf[0..32]);
buf.advance(32);
let (auth, buf) = AlloyAuthorization::from_compact(buf, len);
(Self::new_unchecked(auth, y_parity, r, s), buf)
}
}
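The length-last trick used in `SignedAuthorization::to_compact` generalizes: because `Compact` carries lengths out of band, the one variable-sized part can go last and its size at decode time is simply "whatever bytes remain". A minimal `std`-only sketch of that layout (illustrative names, not crate API):

```rust
/// Fixed-size parts first, variable "tail" last: the tail's length never
/// needs to be written, because the decoder takes whatever bytes remain.
fn encode_record(fixed: [u8; 4], tail: &[u8], buf: &mut Vec<u8>) -> usize {
    buf.extend_from_slice(&fixed);
    buf.extend_from_slice(tail);
    // The total length is returned out of band, not stored in the buffer.
    4 + tail.len()
}

/// Split the fixed prefix back off; everything after it is the tail.
fn decode_record(buf: &[u8]) -> ([u8; 4], &[u8]) {
    let mut fixed = [0u8; 4];
    fixed.copy_from_slice(&buf[..4]);
    (fixed, &buf[4..])
}
```

This is why the signature bytes (fixed 1 + 32 + 32) are written before the inner `Authorization`: only one component per record can rely on "the rest of the buffer", and it must come last.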
#[cfg(test)]
mod tests {
use super::*;
use alloy_primitives::{address, b256};
#[test]
fn test_roundtrip_compact_authorization_list_item() {
let authorization = AlloyAuthorization {
chain_id: U256::from(1),
address: address!("0xdac17f958d2ee523a2206206994597c13d831ec7"),
nonce: 1,
}
.into_signed(alloy_primitives::Signature::new(
b256!("0x1fd474b1f9404c0c5df43b7620119ffbc3a1c3f942c73b6e14e9f55255ed9b1d").into(),
b256!("0x29aca24813279a901ec13b5f7bb53385fa1fc627b946592221417ff74a49600d").into(),
false,
));
let mut compacted_authorization = Vec::<u8>::new();
let len = authorization.to_compact(&mut compacted_authorization);
let (decoded_authorization, _) =
SignedAuthorization::from_compact(&compacted_authorization, len);
assert_eq!(authorization, decoded_authorization);
}
}
// ---- crates/storage/codecs/src/alloy/header.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/header.rs

//! Compact implementation for [`AlloyHeader`]
use crate::Compact;
use alloy_consensus::Header as AlloyHeader;
use alloy_primitives::{Address, BlockNumber, Bloom, Bytes, B256, U256};
/// Block header
///
/// This helper type exists so that `Compact` can be derived instead of manually managing the
/// `bitfield`.
///
/// By deriving `Compact` here, any future changes or enhancements to the `Compact` derive
/// will automatically apply to this type.
///
/// Notice: Make sure this struct is 1:1 with [`alloy_consensus::Header`]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(serde::Serialize, serde::Deserialize, arbitrary::Arbitrary)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Default, Compact)]
#[reth_codecs(crate = "crate")]
pub(crate) struct Header {
parent_hash: B256,
ommers_hash: B256,
beneficiary: Address,
state_root: B256,
transactions_root: B256,
receipts_root: B256,
withdrawals_root: Option<B256>,
logs_bloom: Bloom,
difficulty: U256,
number: BlockNumber,
gas_limit: u64,
gas_used: u64,
timestamp: u64,
mix_hash: B256,
nonce: u64,
base_fee_per_gas: Option<u64>,
blob_gas_used: Option<u64>,
excess_blob_gas: Option<u64>,
parent_beacon_block_root: Option<B256>,
extra_fields: Option<HeaderExt>,
extra_data: Bytes,
}
/// [`Header`] extension struct.
///
/// All new fields should be added here in the form of an `Option<T>`, since `Option<HeaderExt>` is
/// used as a field of [`Header`] for backwards compatibility.
///
/// More information: <https://github.com/paradigmxyz/reth/issues/7820> & [`reth_codecs_derive::Compact`].
#[cfg_attr(
any(test, feature = "test-utils"),
derive(serde::Serialize, serde::Deserialize, arbitrary::Arbitrary)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Default, Compact)]
#[reth_codecs(crate = "crate")]
pub(crate) struct HeaderExt {
requests_hash: Option<B256>,
}
impl HeaderExt {
/// Converts into [`Some`] if any of the fields are set. Otherwise, returns [`None`].
///
/// Required since [`Header`] uses `Option<HeaderExt>` as a field.
const fn into_option(self) -> Option<Self> {
if self.requests_hash.is_some() {
Some(self)
} else {
None
}
}
}
impl Compact for AlloyHeader {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let extra_fields = HeaderExt { requests_hash: self.requests_hash };
let header = Header {
parent_hash: self.parent_hash,
ommers_hash: self.ommers_hash,
beneficiary: self.beneficiary,
state_root: self.state_root,
transactions_root: self.transactions_root,
receipts_root: self.receipts_root,
withdrawals_root: self.withdrawals_root,
logs_bloom: self.logs_bloom,
difficulty: self.difficulty,
number: self.number,
gas_limit: self.gas_limit,
gas_used: self.gas_used,
timestamp: self.timestamp,
mix_hash: self.mix_hash,
nonce: self.nonce.into(),
base_fee_per_gas: self.base_fee_per_gas,
blob_gas_used: self.blob_gas_used,
excess_blob_gas: self.excess_blob_gas,
parent_beacon_block_root: self.parent_beacon_block_root,
extra_fields: extra_fields.into_option(),
extra_data: self.extra_data.clone(),
};
header.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (header, _) = Header::from_compact(buf, len);
let alloy_header = Self {
parent_hash: header.parent_hash,
ommers_hash: header.ommers_hash,
beneficiary: header.beneficiary,
state_root: header.state_root,
transactions_root: header.transactions_root,
receipts_root: header.receipts_root,
withdrawals_root: header.withdrawals_root,
logs_bloom: header.logs_bloom,
difficulty: header.difficulty,
number: header.number,
gas_limit: header.gas_limit,
gas_used: header.gas_used,
timestamp: header.timestamp,
mix_hash: header.mix_hash,
nonce: header.nonce.into(),
base_fee_per_gas: header.base_fee_per_gas,
blob_gas_used: header.blob_gas_used,
excess_blob_gas: header.excess_blob_gas,
parent_beacon_block_root: header.parent_beacon_block_root,
requests_hash: header.extra_fields.as_ref().and_then(|h| h.requests_hash),
extra_data: header.extra_data,
};
(alloy_header, buf)
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_consensus::EMPTY_OMMER_ROOT_HASH;
use alloy_primitives::{address, b256, bloom, bytes, hex};
/// Holesky block #1947953
const HOLESKY_BLOCK: Header = Header {
parent_hash: b256!("0x8605e0c46689f66b3deed82598e43d5002b71a929023b665228728f0c6e62a95"),
ommers_hash: EMPTY_OMMER_ROOT_HASH,
beneficiary: address!("0xc6e2459991bfe27cca6d86722f35da23a1e4cb97"),
state_root: b256!("0xedad188ca5647d62f4cca417c11a1afbadebce30d23260767f6f587e9b3b9993"),
transactions_root: b256!("0x4daf25dc08a841aa22aa0d3cb3e1f159d4dcaf6a6063d4d36bfac11d3fdb63ee"),
receipts_root: b256!("0x1a1500328e8ade2592bbea1e04f9a9fd8c0142d3175d6e8420984ee159abd0ed"),
withdrawals_root: Some(b256!("0xd0f7f22d6d915be5a3b9c0fee353f14de5ac5c8ac1850b76ce9be70b69dfe37d")),
logs_bloom: bloom!("36410880400480e1090a001c408880800019808000125124002100400048442220020000408040423088300004d0000050803000862485a02020011600a5010404143021800881e8e08c402940404002105004820c440051640000809c000011080002300208510808150101000038002500400040000230000000110442800000800204420100008110080200088c1610c0b80000c6008900000340400200200210010111020000200041a2010804801100030a0284a8463820120a0601480244521002a10201100400801101006002001000008000000ce011011041086418609002000128800008180141002003004c00800040940c00c1180ca002890040"),
difficulty: U256::ZERO,
number: 0x1db931,
gas_limit: 0x1c9c380,
gas_used: 0x440949,
timestamp: 0x66982980,
mix_hash: b256!("0x574db0ff0a2243b434ba2a35da8f2f72df08bca44f8733f4908d10dcaebc89f1"),
nonce: 0,
base_fee_per_gas: Some(0x8),
blob_gas_used: Some(0x60000),
excess_blob_gas: Some(0x0),
parent_beacon_block_root: Some(b256!("0xaa1d9606b7932f2280a19b3498b9ae9eebc6a83f1afde8e45944f79d353db4c1")),
extra_data: bytes!("726574682f76312e302e302f6c696e7578"),
extra_fields: None,
};
#[test]
fn test_ensure_backwards_compatibility() {
assert_eq!(Header::bitflag_encoded_bytes(), 4);
assert_eq!(HeaderExt::bitflag_encoded_bytes(), 1);
}
#[test]
fn test_backwards_compatibility() {
let holesky_header_bytes = hex!("81a121788605e0c46689f66b3deed82598e43d5002b71a929023b665228728f0c6e62a951dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347c6e2459991bfe27cca6d86722f35da23a1e4cb97edad188ca5647d62f4cca417c11a1afbadebce30d23260767f6f587e9b3b99934daf25dc08a841aa22aa0d3cb3e1f159d4dcaf6a6063d4d36bfac11d3fdb63ee1a1500328e8ade2592bbea1e04f9a9fd8c0142d3175d6e8420984ee159abd0edd0f7f22d6d915be5a3b9c0fee353f14de5ac5c8ac1850b76ce9be70b69dfe37d36410880400480e1090a001c408880800019808000125124002100400048442220020000408040423088300004d0000050803000862485a02020011600a5010404143021800881e8e08c402940404002105004820c440051640000809c000011080002300208510808150101000038002500400040000230000000110442800000800204420100008110080200088c1610c0b80000c6008900000340400200200210010111020000200041a2010804801100030a0284a8463820120a0601480244521002a10201100400801101006002001000008000000ce011011041086418609002000128800008180141002003004c00800040940c00c1180ca0028900401db93101c9c38044094966982980574db0ff0a2243b434ba2a35da8f2f72df08bca44f8733f4908d10dcaebc89f101080306000000aa1d9606b7932f2280a19b3498b9ae9eebc6a83f1afde8e45944f79d353db4c1726574682f76312e302e302f6c696e7578");
let (decoded_header, _) =
Header::from_compact(&holesky_header_bytes, holesky_header_bytes.len());
assert_eq!(decoded_header, HOLESKY_BLOCK);
let mut encoded_header = Vec::with_capacity(holesky_header_bytes.len());
assert_eq!(holesky_header_bytes.len(), decoded_header.to_compact(&mut encoded_header));
assert_eq!(encoded_header, holesky_header_bytes);
}
#[test]
fn test_extra_fields() {
let mut header = HOLESKY_BLOCK;
header.extra_fields = Some(HeaderExt { requests_hash: Some(B256::random()) });
let mut encoded_header = vec![];
let len = header.to_compact(&mut encoded_header);
assert_eq!(header, Header::from_compact(&encoded_header, len).0);
}
#[test]
fn test_extra_fields_missing() {
let mut header = HOLESKY_BLOCK;
header.extra_fields = None;
let mut encoded_header = vec![];
let len = header.to_compact(&mut encoded_header);
assert_eq!(header, Header::from_compact(&encoded_header, len).0);
}
}
// ---- crates/storage/codecs/src/alloy/access_list.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/access_list.rs

//! Compact implementation for [`AccessList`]
use crate::Compact;
use alloc::vec::Vec;
use alloy_eips::eip2930::{AccessList, AccessListItem};
use alloy_primitives::Address;
/// Implement `Compact` for `AccessListItem` and `AccessList`.
impl Compact for AccessListItem {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let mut buffer = Vec::new();
self.address.to_compact(&mut buffer);
self.storage_keys.specialized_to_compact(&mut buffer);
buf.put(&buffer[..]);
buffer.len()
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
let (address, new_buf) = Address::from_compact(buf, buf.len());
buf = new_buf;
let (storage_keys, new_buf) = Vec::specialized_from_compact(buf, buf.len());
buf = new_buf;
let access_list_item = Self { address, storage_keys };
(access_list_item, buf)
}
}
impl Compact for AccessList {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let mut buffer = Vec::new();
self.0.to_compact(&mut buffer);
buf.put(&buffer[..]);
buffer.len()
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
let (access_list_items, new_buf) = Vec::from_compact(buf, buf.len());
buf = new_buf;
let access_list = Self(access_list_items);
(access_list, buf)
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_primitives::Bytes;
use proptest::proptest;
use proptest_arbitrary_interop::arb;
use serde::Deserialize;
proptest! {
#[test]
fn test_roundtrip_compact_access_list_item(access_list_item in arb::<AccessListItem>()) {
let mut compacted_access_list_item = Vec::<u8>::new();
let len = access_list_item.to_compact(&mut compacted_access_list_item);
let (decoded_access_list_item, _) = AccessListItem::from_compact(&compacted_access_list_item, len);
assert_eq!(access_list_item, decoded_access_list_item);
}
}
proptest! {
#[test]
fn test_roundtrip_compact_access_list(access_list in arb::<AccessList>()) {
let mut compacted_access_list = Vec::<u8>::new();
let len = access_list.to_compact(&mut compacted_access_list);
let (decoded_access_list, _) = AccessList::from_compact(&compacted_access_list, len);
assert_eq!(access_list, decoded_access_list);
}
}
#[derive(Deserialize)]
struct CompactAccessListTestVector {
access_list: AccessList,
encoded_bytes: Bytes,
}
#[test]
fn test_compact_access_list_codec() {
let test_vectors: Vec<CompactAccessListTestVector> =
serde_json::from_str(include_str!("../../testdata/access_list_compact.json"))
.expect("Failed to parse test vectors");
for test_vector in test_vectors {
let mut buf = Vec::<u8>::new();
let len = test_vector.access_list.clone().to_compact(&mut buf);
assert_eq!(test_vector.encoded_bytes.0, buf);
let (decoded, _) = AccessList::from_compact(&test_vector.encoded_bytes, len);
assert_eq!(test_vector.access_list, decoded);
}
}
}
// ---- crates/storage/codecs/src/alloy/mod.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/mod.rs

//! Implements Compact for alloy types.
/// Expands each listed module as `pub` when the `test-utils` feature is enabled, and as
/// `pub(crate)` otherwise.
macro_rules! cond_mod {
($($mod_name:ident),*) => {
$(
#[cfg(feature = "test-utils")]
pub mod $mod_name;
#[cfg(not(feature = "test-utils"))]
pub(crate) mod $mod_name;
)*
};
}
cond_mod!(
access_list,
authorization_list,
genesis_account,
header,
log,
signature,
trie,
txkind,
withdrawal
);
pub mod transaction;
#[cfg(test)]
mod tests {
use crate::{
alloy::{
genesis_account::{GenesisAccount, GenesisAccountRef, StorageEntries, StorageEntry},
header::{Header, HeaderExt},
transaction::{
eip1559::TxEip1559, eip2930::TxEip2930, eip4844::TxEip4844, eip7702::TxEip7702,
legacy::TxLegacy,
},
withdrawal::Withdrawal,
},
test_utils::UnusedBits,
validate_bitflag_backwards_compat,
};
#[test]
fn validate_bitflag_backwards_compat() {
// In case of failure, refer to the documentation of the
// [`validate_bitflag_backwards_compat`] macro for detailed instructions on handling
// it.
validate_bitflag_backwards_compat!(Header, UnusedBits::Zero);
validate_bitflag_backwards_compat!(HeaderExt, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(TxEip2930, UnusedBits::Zero);
validate_bitflag_backwards_compat!(StorageEntries, UnusedBits::Zero);
validate_bitflag_backwards_compat!(StorageEntry, UnusedBits::NotZero); // Seismic broke backwards compatibility here for the is_private flag
validate_bitflag_backwards_compat!(GenesisAccountRef<'_>, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(GenesisAccount, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(TxEip1559, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(TxEip4844, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(TxEip7702, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(TxLegacy, UnusedBits::NotZero);
validate_bitflag_backwards_compat!(Withdrawal, UnusedBits::NotZero);
}
}
// ---- crates/storage/codecs/src/alloy/genesis_account.rs ----
// https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/codecs/src/alloy/genesis_account.rs

//! Compact implementation for [`AlloyGenesisAccount`]
use crate::Compact;
use alloc::vec::Vec;
use alloy_primitives::{Bytes, FlaggedStorage, B256, U256};
use reth_codecs_derive::add_arbitrary_tests;
use seismic_alloy_genesis::GenesisAccount as AlloyGenesisAccount;
/// `GenesisAccount` acts as a bridge that simplifies the `Compact` implementation for
/// `AlloyGenesisAccount`.
///
/// Notice: Make sure this struct stays 1:1 with `alloy_genesis::GenesisAccount`.
#[derive(Debug, Clone, PartialEq, Eq, Compact)]
#[reth_codecs(crate = "crate")]
pub(crate) struct GenesisAccountRef<'a> {
/// The nonce of the account at genesis.
nonce: Option<u64>,
/// The balance of the account at genesis.
balance: &'a U256,
/// The account's bytecode at genesis.
code: Option<&'a Bytes>,
/// The account's storage at genesis.
storage: Option<StorageEntries>,
/// The account's private key. Should only be used for testing.
private_key: Option<&'a B256>,
}
/// Acts as bridge which simplifies Compact implementation for
/// `AlloyGenesisAccount`.
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct GenesisAccount {
/// The nonce of the account at genesis.
nonce: Option<u64>,
/// The balance of the account at genesis.
balance: U256,
/// The account's bytecode at genesis.
code: Option<Bytes>,
/// The account's storage at genesis.
storage: Option<StorageEntries>,
/// The account's private key. Should only be used for testing.
private_key: Option<B256>,
}
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct StorageEntries {
entries: Vec<StorageEntry>,
}
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct StorageEntry {
key: B256,
value: B256,
is_private: bool,
}
impl Compact for AlloyGenesisAccount {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let account = GenesisAccountRef {
nonce: self.nonce,
balance: &self.balance,
code: self.code.as_ref(),
storage: self.storage.as_ref().map(|s| StorageEntries {
entries: s
.iter()
.map(|(key, value)| StorageEntry {
key: *key,
value: value.value.into(),
is_private: value.is_private,
})
.collect(),
}),
private_key: self.private_key.as_ref(),
};
account.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (account, _) = GenesisAccount::from_compact(buf, len);
let alloy_account = Self {
nonce: account.nonce,
balance: account.balance,
code: account.code,
            storage: account.storage.map(|s| {
                s.entries
                    .into_iter()
                    .map(|entry| {
                        (
                            entry.key,
                            FlaggedStorage::new(
                                U256::from_be_bytes(entry.value.0),
                                entry.is_private,
                            ),
                        )
                    })
                    .collect()
            }),
private_key: account.private_key,
};
(alloy_account, buf)
}
}
// ===========================================================================
// File: crates/storage/codecs/src/alloy/txkind.rs
// Repo: SeismicSystems/seismic-reth (Apache-2.0, commit 62834bd8deb86513778624a3ba33f55f4d6a1471)
// ===========================================================================

//! Native Compact codec impl for primitive alloy [`TxKind`].
use crate::Compact;
use alloy_primitives::{Address, TxKind};
/// Identifier for [`TxKind::Create`]
const TX_KIND_TYPE_CREATE: usize = 0;
/// Identifier for [`TxKind::Call`]
const TX_KIND_TYPE_CALL: usize = 1;
impl Compact for TxKind {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
match self {
Self::Create => TX_KIND_TYPE_CREATE,
Self::Call(address) => {
address.to_compact(buf);
TX_KIND_TYPE_CALL
}
}
}
fn from_compact(buf: &[u8], identifier: usize) -> (Self, &[u8]) {
match identifier {
TX_KIND_TYPE_CREATE => (Self::Create, buf),
TX_KIND_TYPE_CALL => {
let (addr, buf) = Address::from_compact(buf, buf.len());
(addr.into(), buf)
}
_ => {
unreachable!("Junk data in database: unknown TransactionKind variant")
}
}
}
}
// ===========================================================================
// File: crates/storage/codecs/src/alloy/trie.rs
// ===========================================================================

//! Native Compact codec impl for alloy-trie types.
use crate::Compact;
use alloc::vec::Vec;
use alloy_primitives::B256;
use alloy_trie::{
hash_builder::{HashBuilderValue, HashBuilderValueRef},
BranchNodeCompact, TrieMask,
};
use bytes::{Buf, BufMut};
/// Identifier for [`HashBuilderValueRef::Hash`]
const HASH_BUILDER_TYPE_HASH: u8 = 0;
/// Identifier for [`HashBuilderValueRef::Bytes`]
const HASH_BUILDER_TYPE_BYTES: u8 = 1;
impl Compact for HashBuilderValue {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
match self.as_ref() {
HashBuilderValueRef::Hash(hash) => {
buf.put_u8(HASH_BUILDER_TYPE_HASH);
1 + hash.to_compact(buf)
}
HashBuilderValueRef::Bytes(bytes) => {
buf.put_u8(HASH_BUILDER_TYPE_BYTES);
1 + bytes.to_compact(buf)
}
}
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
let mut this = Self::default();
let buf = match buf.get_u8() {
HASH_BUILDER_TYPE_HASH => {
let (hash, buf) = B256::from_compact(buf, 32);
this.set_from_ref(HashBuilderValueRef::Hash(&hash));
buf
}
HASH_BUILDER_TYPE_BYTES => {
let (bytes, buf) = Vec::from_compact(buf, 0);
this.set_bytes_owned(bytes);
buf
}
_ => unreachable!("Junk data in database: unknown HashBuilderValue variant"),
};
(this, buf)
}
}
impl Compact for BranchNodeCompact {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let mut buf_size = 0;
buf_size += self.state_mask.to_compact(buf);
buf_size += self.tree_mask.to_compact(buf);
buf_size += self.hash_mask.to_compact(buf);
if let Some(root_hash) = self.root_hash {
buf_size += B256::len_bytes();
buf.put_slice(root_hash.as_slice());
}
for hash in self.hashes.iter() {
buf_size += B256::len_bytes();
buf.put_slice(hash.as_slice());
}
buf_size
}
fn from_compact(buf: &[u8], _len: usize) -> (Self, &[u8]) {
let hash_len = B256::len_bytes();
// The buffer must contain the three 2-byte masks (6 bytes) followed by a whole number of hashes.
assert_eq!(buf.len() % hash_len, 6);
// Consume the masks.
let (state_mask, buf) = TrieMask::from_compact(buf, 0);
let (tree_mask, buf) = TrieMask::from_compact(buf, 0);
let (hash_mask, buf) = TrieMask::from_compact(buf, 0);
let mut buf = buf;
let mut num_hashes = buf.len() / hash_len;
let mut root_hash = None;
// Check if the root hash is present
if hash_mask.count_ones() as usize + 1 == num_hashes {
root_hash = Some(B256::from_slice(&buf[..hash_len]));
buf.advance(hash_len);
num_hashes -= 1;
}
// Consume all remaining hashes.
let mut hashes = Vec::<B256>::with_capacity(num_hashes);
for _ in 0..num_hashes {
hashes.push(B256::from_slice(&buf[..hash_len]));
buf.advance(hash_len);
}
(Self::new(state_mask, tree_mask, hash_mask, hashes, root_hash), buf)
}
}
impl Compact for TrieMask {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
buf.put_u16(self.get());
2
}
fn from_compact(mut buf: &[u8], _len: usize) -> (Self, &[u8]) {
let mask = buf.get_u16();
(Self::new(mask), buf)
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_primitives::hex;
#[test]
fn node_encoding() {
let n = BranchNodeCompact::new(
0xf607,
0x0005,
0x4004,
vec![
hex!("90d53cd810cc5d4243766cd4451e7b9d14b736a1148b26b3baac7617f617d321").into(),
hex!("cc35c964dda53ba6c0b87798073a9628dbc9cd26b5cce88eb69655a9c609caf1").into(),
],
Some(hex!("aaaabbbb0006767767776fffffeee44444000005567645600000000eeddddddd").into()),
);
let mut out = Vec::new();
let compact_len = n.to_compact(&mut out);
assert_eq!(BranchNodeCompact::from_compact(&out, compact_len).0, n);
}
}
// ===========================================================================
// File: crates/storage/codecs/src/alloy/transaction/ethereum.rs
// ===========================================================================

use crate::{Compact, Vec};
use alloy_consensus::{
transaction::RlpEcdsaEncodableTx, EthereumTxEnvelope, Signed, Transaction, TxEip1559,
TxEip2930, TxEip7702, TxLegacy, TxType,
};
use alloy_primitives::Signature;
use bytes::{Buf, BufMut};
/// A trait for extracting the transaction, without its type and signature, and serializing it
/// using [`Compact`] encoding.
///
/// It is not a responsibility of this trait to encode transaction type and signature. Likely this
/// will be a part of a serialization scenario with a greater scope where these values are
/// serialized separately.
///
/// See [`ToTxCompact::to_tx_compact`].
pub trait ToTxCompact {
/// Serializes inner transaction using [`Compact`] encoding. Writes the result into `buf`.
///
/// The written bytes do not contain the signature and transaction type. This information
/// needs to be serialized separately if required.
fn to_tx_compact(&self, buf: &mut (impl BufMut + AsMut<[u8]>));
}
/// A trait for deserializing transaction without type and signature using [`Compact`] encoding.
///
/// It is not a responsibility of this trait to extract transaction type and signature, but both
/// are needed to create the value. While these values can come from anywhere, likely this will be
/// a part of a deserialization scenario with a greater scope where these values are deserialized
/// separately.
///
/// See [`FromTxCompact::from_tx_compact`].
pub trait FromTxCompact {
/// The transaction type that represents the set of transactions.
type TxType;
/// Deserializes inner transaction using [`Compact`] encoding. The concrete type is determined
/// by `tx_type`. The `signature` is added to create a typed and signed transaction.
///
/// Returns a tuple of 2 elements. The first element is the deserialized value and the second
/// is a byte slice created from `buf` with a starting position advanced by the exact amount
/// of bytes consumed for this process.
fn from_tx_compact(buf: &[u8], tx_type: Self::TxType, signature: Signature) -> (Self, &[u8])
where
Self: Sized;
}
impl<Eip4844: Compact + Transaction> ToTxCompact for EthereumTxEnvelope<Eip4844> {
fn to_tx_compact(&self, buf: &mut (impl BufMut + AsMut<[u8]>)) {
match self {
Self::Legacy(tx) => tx.tx().to_compact(buf),
Self::Eip2930(tx) => tx.tx().to_compact(buf),
Self::Eip1559(tx) => tx.tx().to_compact(buf),
Self::Eip4844(tx) => tx.tx().to_compact(buf),
Self::Eip7702(tx) => tx.tx().to_compact(buf),
};
}
}
impl<Eip4844: Compact + Transaction> FromTxCompact for EthereumTxEnvelope<Eip4844> {
type TxType = TxType;
fn from_tx_compact(buf: &[u8], tx_type: TxType, signature: Signature) -> (Self, &[u8]) {
match tx_type {
TxType::Legacy => {
let (tx, buf) = TxLegacy::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Legacy(tx), buf)
}
TxType::Eip2930 => {
let (tx, buf) = TxEip2930::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip2930(tx), buf)
}
TxType::Eip1559 => {
let (tx, buf) = TxEip1559::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip1559(tx), buf)
}
TxType::Eip4844 => {
let (tx, buf) = Eip4844::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip4844(tx), buf)
}
TxType::Eip7702 => {
let (tx, buf) = TxEip7702::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip7702(tx), buf)
}
}
}
}
/// A trait for types convertible from a compact transaction type.
pub trait Envelope: FromTxCompact<TxType: Compact> {
/// Returns the signature.
fn signature(&self) -> &Signature;
/// Returns the tx type.
fn tx_type(&self) -> Self::TxType;
}
impl<Eip4844: Compact + Transaction + RlpEcdsaEncodableTx> Envelope
for EthereumTxEnvelope<Eip4844>
{
fn signature(&self) -> &Signature {
Self::signature(self)
}
fn tx_type(&self) -> Self::TxType {
Self::tx_type(self)
}
}
/// Compact serialization for transaction envelopes with compression and bitfield packing.
pub trait CompactEnvelope: Sized {
/// Takes a buffer which can be written to. Returns the number of bytes written.
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>;
/// Takes a buffer which can be read from. Returns the object and `buf` with its internal cursor
/// advanced (e.g. `.advance(len)`).
///
/// `len` can either be the `buf` remaining length, or the length of the compacted type.
///
/// It will panic if `len` is smaller than `buf.len()`.
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]);
}
impl<T: Envelope + ToTxCompact + Transaction + Send + Sync> CompactEnvelope for T {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
let start = buf.as_mut().len();
// Placeholder for bitflags.
// The first byte packs 4 flag bits: Signature[bit 0], TxType[bits 1-2], IsCompressed[bit 3]
buf.put_u8(0);
let sig_bit = self.signature().to_compact(buf) as u8;
let zstd_bit = self.input().len() >= 32;
let tx_bits = if zstd_bit {
// compress the tx prefixed with txtype
let mut tx_buf = Vec::with_capacity(256);
let tx_bits = self.tx_type().to_compact(&mut tx_buf) as u8;
self.to_tx_compact(&mut tx_buf);
buf.put_slice(
&{
#[cfg(feature = "std")]
{
reth_zstd_compressors::TRANSACTION_COMPRESSOR.with(|compressor| {
let mut compressor = compressor.borrow_mut();
compressor.compress(&tx_buf)
})
}
#[cfg(not(feature = "std"))]
{
let mut compressor = reth_zstd_compressors::create_tx_compressor();
compressor.compress(&tx_buf)
}
}
.expect("Failed to compress"),
);
tx_bits
} else {
let tx_bits = self.tx_type().to_compact(buf) as u8;
self.to_tx_compact(buf);
tx_bits
};
let flags = sig_bit | (tx_bits << 1) | ((zstd_bit as u8) << 3);
buf.as_mut()[start] = flags;
buf.as_mut().len() - start
}
fn from_compact(mut buf: &[u8], _len: usize) -> (Self, &[u8]) {
let flags = buf.get_u8() as usize;
let sig_bit = flags & 1;
let tx_bits = (flags & 0b110) >> 1;
let zstd_bit = flags >> 3;
let (signature, buf) = Signature::from_compact(buf, sig_bit);
let (transaction, buf) = if zstd_bit != 0 {
#[cfg(feature = "std")]
{
reth_zstd_compressors::TRANSACTION_DECOMPRESSOR.with(|decompressor| {
let mut decompressor = decompressor.borrow_mut();
let decompressed = decompressor.decompress(buf);
let (tx_type, tx_buf) = T::TxType::from_compact(decompressed, tx_bits);
let (tx, _) = Self::from_tx_compact(tx_buf, tx_type, signature);
(tx, buf)
})
}
#[cfg(not(feature = "std"))]
{
let mut decompressor = reth_zstd_compressors::create_tx_decompressor();
let decompressed = decompressor.decompress(buf);
let (tx_type, tx_buf) = T::TxType::from_compact(decompressed, tx_bits);
let (tx, _) = Self::from_tx_compact(tx_buf, tx_type, signature);
(tx, buf)
}
} else {
let (tx_type, buf) = T::TxType::from_compact(buf, tx_bits);
Self::from_tx_compact(buf, tx_type, signature)
};
(transaction, buf)
}
}
impl<Eip4844: Compact + RlpEcdsaEncodableTx + Transaction + Send + Sync> Compact
for EthereumTxEnvelope<Eip4844>
{
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
<Self as CompactEnvelope>::to_compact(self, buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
<Self as CompactEnvelope>::from_compact(buf, len)
}
}
// ===========================================================================
// File: crates/storage/codecs/src/alloy/transaction/optimism.rs
// ===========================================================================

//! Compact implementation for [`AlloyTxDeposit`]
use crate::{
alloy::transaction::ethereum::{CompactEnvelope, Envelope, FromTxCompact, ToTxCompact},
generate_tests,
txtype::{
COMPACT_EXTENDED_IDENTIFIER_FLAG, COMPACT_IDENTIFIER_EIP1559, COMPACT_IDENTIFIER_EIP2930,
COMPACT_IDENTIFIER_LEGACY,
},
Compact,
};
use alloy_consensus::{
constants::EIP7702_TX_TYPE_ID, Signed, TxEip1559, TxEip2930, TxEip7702, TxLegacy,
};
use alloy_primitives::{Address, Bytes, Sealed, Signature, TxKind, B256, U256};
use bytes::BufMut;
use op_alloy_consensus::{OpTxEnvelope, OpTxType, OpTypedTransaction, TxDeposit as AlloyTxDeposit};
use reth_codecs_derive::add_arbitrary_tests;
/// Deposit transactions, also known as deposits, are initiated on L1 and executed on L2.
///
/// This is a helper type to use derive on it instead of manually managing `bitfield`.
///
/// By deriving `Compact` here, any future changes or enhancements to the `Compact` derive
/// will automatically apply to this type.
///
/// Notice: Make sure this struct is 1:1 with [`op_alloy_consensus::TxDeposit`]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Default, Compact)]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[reth_codecs(crate = "crate")]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct TxDeposit {
source_hash: B256,
from: Address,
to: TxKind,
mint: Option<u128>,
value: U256,
gas_limit: u64,
is_system_transaction: bool,
input: Bytes,
}
impl Compact for AlloyTxDeposit {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxDeposit {
source_hash: self.source_hash,
from: self.from,
to: self.to,
mint: match self.mint {
0 => None,
v => Some(v),
},
value: self.value,
gas_limit: self.gas_limit,
is_system_transaction: self.is_system_transaction,
input: self.input.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxDeposit::from_compact(buf, len);
let alloy_tx = Self {
source_hash: tx.source_hash,
from: tx.from,
to: tx.to,
mint: tx.mint.unwrap_or_default(),
value: tx.value,
gas_limit: tx.gas_limit,
is_system_transaction: tx.is_system_transaction,
input: tx.input,
};
(alloy_tx, buf)
}
}
impl crate::Compact for OpTxType {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
use crate::txtype::*;
match self {
Self::Legacy => COMPACT_IDENTIFIER_LEGACY,
Self::Eip2930 => COMPACT_IDENTIFIER_EIP2930,
Self::Eip1559 => COMPACT_IDENTIFIER_EIP1559,
Self::Eip7702 => {
buf.put_u8(EIP7702_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
Self::Deposit => {
buf.put_u8(op_alloy_consensus::DEPOSIT_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
}
}
// For backwards compatibility purposes only 2 bits of the type are encoded in the identifier
// parameter. In the case of a [`COMPACT_EXTENDED_IDENTIFIER_FLAG`], the full transaction type
// is read from the buffer as a single byte.
fn from_compact(mut buf: &[u8], identifier: usize) -> (Self, &[u8]) {
use bytes::Buf;
(
match identifier {
COMPACT_IDENTIFIER_LEGACY => Self::Legacy,
COMPACT_IDENTIFIER_EIP2930 => Self::Eip2930,
COMPACT_IDENTIFIER_EIP1559 => Self::Eip1559,
COMPACT_EXTENDED_IDENTIFIER_FLAG => {
let extended_identifier = buf.get_u8();
match extended_identifier {
EIP7702_TX_TYPE_ID => Self::Eip7702,
op_alloy_consensus::DEPOSIT_TX_TYPE_ID => Self::Deposit,
_ => panic!("Unsupported OpTxType identifier: {extended_identifier}"),
}
}
_ => panic!("Unknown identifier for TxType: {identifier}"),
},
buf,
)
}
}
impl Compact for OpTypedTransaction {
fn to_compact<B>(&self, out: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let identifier = self.tx_type().to_compact(out);
match self {
Self::Legacy(tx) => tx.to_compact(out),
Self::Eip2930(tx) => tx.to_compact(out),
Self::Eip1559(tx) => tx.to_compact(out),
Self::Eip7702(tx) => tx.to_compact(out),
Self::Deposit(tx) => tx.to_compact(out),
};
identifier
}
fn from_compact(buf: &[u8], identifier: usize) -> (Self, &[u8]) {
let (tx_type, buf) = OpTxType::from_compact(buf, identifier);
match tx_type {
OpTxType::Legacy => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Legacy(tx), buf)
}
OpTxType::Eip2930 => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Eip2930(tx), buf)
}
OpTxType::Eip1559 => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Eip1559(tx), buf)
}
OpTxType::Eip7702 => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Eip7702(tx), buf)
}
OpTxType::Deposit => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Deposit(tx), buf)
}
}
}
}
impl ToTxCompact for OpTxEnvelope {
fn to_tx_compact(&self, buf: &mut (impl BufMut + AsMut<[u8]>)) {
match self {
Self::Legacy(tx) => tx.tx().to_compact(buf),
Self::Eip2930(tx) => tx.tx().to_compact(buf),
Self::Eip1559(tx) => tx.tx().to_compact(buf),
Self::Eip7702(tx) => tx.tx().to_compact(buf),
Self::Deposit(tx) => tx.to_compact(buf),
};
}
}
impl FromTxCompact for OpTxEnvelope {
type TxType = OpTxType;
fn from_tx_compact(buf: &[u8], tx_type: OpTxType, signature: Signature) -> (Self, &[u8]) {
match tx_type {
OpTxType::Legacy => {
let (tx, buf) = TxLegacy::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Legacy(tx), buf)
}
OpTxType::Eip2930 => {
let (tx, buf) = TxEip2930::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip2930(tx), buf)
}
OpTxType::Eip1559 => {
let (tx, buf) = TxEip1559::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip1559(tx), buf)
}
OpTxType::Eip7702 => {
let (tx, buf) = TxEip7702::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip7702(tx), buf)
}
OpTxType::Deposit => {
let (tx, buf) = op_alloy_consensus::TxDeposit::from_compact(buf, buf.len());
let tx = Sealed::new(tx);
(Self::Deposit(tx), buf)
}
}
}
}
const DEPOSIT_SIGNATURE: Signature = Signature::new(U256::ZERO, U256::ZERO, false);
impl Envelope for OpTxEnvelope {
fn signature(&self) -> &Signature {
match self {
Self::Legacy(tx) => tx.signature(),
Self::Eip2930(tx) => tx.signature(),
Self::Eip1559(tx) => tx.signature(),
Self::Eip7702(tx) => tx.signature(),
Self::Deposit(_) => &DEPOSIT_SIGNATURE,
}
}
fn tx_type(&self) -> Self::TxType {
Self::tx_type(self)
}
}
impl Compact for OpTxEnvelope {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
CompactEnvelope::to_compact(self, buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
CompactEnvelope::from_compact(buf, len)
}
}
generate_tests!(#[crate, compact] OpTypedTransaction, OpTypedTransactionTests);
// ===========================================================================
// File: crates/storage/codecs/src/alloy/transaction/eip4844.rs
// ===========================================================================

//! Compact implementation for [`AlloyTxEip4844`]
use crate::{Compact, CompactPlaceholder};
use alloc::vec::Vec;
use alloy_consensus::TxEip4844 as AlloyTxEip4844;
use alloy_eips::eip2930::AccessList;
use alloy_primitives::{Address, Bytes, ChainId, B256, U256};
use reth_codecs_derive::add_arbitrary_tests;
/// [EIP-4844 Blob Transaction](https://eips.ethereum.org/EIPS/eip-4844#blob-transaction)
///
/// This is a helper type to use derive on it instead of manually managing `bitfield`.
///
/// By deriving `Compact` here, any future changes or enhancements to the `Compact` derive
/// will automatically apply to this type.
///
/// Notice: Make sure this struct is 1:1 with [`alloy_consensus::TxEip4844`]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(any(test, feature = "test-utils"), derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct TxEip4844 {
chain_id: ChainId,
nonce: u64,
gas_limit: u64,
max_fee_per_gas: u128,
max_priority_fee_per_gas: u128,
/// TODO(debt): this should be removed if we break the DB.
/// Makes sure that the Compact bitflag struct has one bit after the above field:
/// <https://github.com/paradigmxyz/reth/pull/8291#issuecomment-2117545016>
#[cfg_attr(
feature = "test-utils",
serde(
serialize_with = "serialize_placeholder",
deserialize_with = "deserialize_placeholder"
)
)]
placeholder: Option<CompactPlaceholder>,
to: Address,
value: U256,
access_list: AccessList,
blob_versioned_hashes: Vec<B256>,
max_fee_per_blob_gas: u128,
input: Bytes,
}
impl Compact for AlloyTxEip4844 {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxEip4844 {
chain_id: self.chain_id,
nonce: self.nonce,
gas_limit: self.gas_limit,
max_fee_per_gas: self.max_fee_per_gas,
max_priority_fee_per_gas: self.max_priority_fee_per_gas,
placeholder: Some(()),
to: self.to,
value: self.value,
access_list: self.access_list.clone(),
blob_versioned_hashes: self.blob_versioned_hashes.clone(),
max_fee_per_blob_gas: self.max_fee_per_blob_gas,
input: self.input.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxEip4844::from_compact(buf, len);
let alloy_tx = Self {
chain_id: tx.chain_id,
nonce: tx.nonce,
gas_limit: tx.gas_limit,
max_fee_per_gas: tx.max_fee_per_gas,
max_priority_fee_per_gas: tx.max_priority_fee_per_gas,
to: tx.to,
value: tx.value,
access_list: tx.access_list,
blob_versioned_hashes: tx.blob_versioned_hashes,
max_fee_per_blob_gas: tx.max_fee_per_blob_gas,
input: tx.input,
};
(alloy_tx, buf)
}
}
#[cfg(any(test, feature = "test-utils"))]
impl<'a> arbitrary::Arbitrary<'a> for TxEip4844 {
fn arbitrary(u: &mut arbitrary::Unstructured<'a>) -> arbitrary::Result<Self> {
Ok(Self {
chain_id: ChainId::arbitrary(u)?,
nonce: u64::arbitrary(u)?,
gas_limit: u64::arbitrary(u)?,
max_fee_per_gas: u128::arbitrary(u)?,
max_priority_fee_per_gas: u128::arbitrary(u)?,
// Should always be Some for TxEip4844
placeholder: Some(()),
to: Address::arbitrary(u)?,
value: U256::arbitrary(u)?,
access_list: AccessList::arbitrary(u)?,
blob_versioned_hashes: Vec::<B256>::arbitrary(u)?,
max_fee_per_blob_gas: u128::arbitrary(u)?,
input: Bytes::arbitrary(u)?,
})
}
}
#[cfg(feature = "test-utils")]
fn serialize_placeholder<S>(value: &Option<()>, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
// Required because otherwise `serde_json` would serialize it as null, which would decode
// back to `None`.
match value {
Some(()) => serializer.serialize_str("placeholder"), // Custom serialization
None => serializer.serialize_none(),
}
}
#[cfg(feature = "test-utils")]
fn deserialize_placeholder<'de, D>(deserializer: D) -> Result<Option<()>, D::Error>
where
D: serde::Deserializer<'de>,
{
use serde::de::Deserialize;
let s: Option<String> = Option::deserialize(deserializer)?;
match s.as_deref() {
Some("placeholder") => Ok(Some(())),
None => Ok(None),
_ => Err(serde::de::Error::custom("unexpected value")),
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_primitives::{address, bytes};
#[test]
fn backwards_compatible_txkind_test() {
// TxEip4844 encoded with TxKind on to field
// holesky tx hash: <0xa3b1668225bf0fbfdd6c19aa6fd071fa4ff5d09a607c67ccd458b97735f745ac>
let tx = bytes!("224348a100426844cb2dc6c0b2d05e003b9aca0079c9109b764609df928d16fc4a91e9081f7e87db09310001019101fb28118ceccaabca22a47e35b9c3f12eb2dcb25e5c543d5b75e6cd841f0a05328d26ef16e8450000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000052000000000000000000000000000000000000000000000000000000000000004c000000000000000000000000000000000000000000000000000000000000000200000000000000000000000007b399987d24fc5951f3e94a4cb16e87414bf22290000000000000000000000001670090000000000000000000000000000010001302e31382e302d64657600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000420000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000200000000000000000000000009e640a6aadf4f664cf467b795c31332f44acbe6c000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000002c00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006614c2d1000000000000000000000000000000000000000000000000000000000014012c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001e000000000000000000000000000000000000000000000000000000000000000030000000000000000000000000000000000000000000000000000000000000064000000000000000000000000000000000000000000000000000000000000093100000000000000000000000000000000000000000000000000000000000000c8000000000000000000000000000000000000000000000000000000000000093100000000000000000000000000000000000000000000000000000000000003e800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041f06fd78f4dcdf089263524731620941747b9b93fd8f631557e25b23845a78b685bd82f9d36bce2f4cc812b6e5191df52479d349089461ffe76e9f2fa2848a0fe1b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000410819f04aba17677807c61ae72afdddf7737f26931ecfa8af05b7c669808b36a2587e32c90bb0ed2100266dd7797c80121a109a2b0fe941ca5a580e438988cac81c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000");
let (tx, _) = TxEip4844::from_compact(&tx, tx.len());
assert_eq!(tx.to, address!("0x79C9109b764609df928d16fC4a91e9081F7e87DB"));
assert_eq!(tx.placeholder, Some(()));
assert_eq!(tx.input, bytes!("ef16e8450000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000052000000000000000000000000000000000000000000000000000000000000004c000000000000000000000000000000000000000000000000000000000000000200000000000000000000000007b399987d24fc5951f3e94a4cb16e87414bf22290000000000000000000000001670090000000000000000000000000000010001302e31382e302d64657600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000420000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000200000000000000000000000009e640a6aadf4f664cf467b795c31332f44acbe6c000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000002c00000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000006614c2d1000000000000000000000000000000000000000000000000000000000014012c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001e00000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000006400000000000000000000000000000000000000000000000000000000000009310000000000000000000000000000000000000000000000000000000000000c800000000000000000000000000000000000000000000000000000000000093100000000000000000000000000000000000000000000000000000000000003e800000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000041f06fd78f4dcdf089263524731620941747b9b93fd8f631557e25b23845a78b685bd82f9d36bce2f4cc812b6e5191df52479d349089461ffe76e9f2fa2848a0fe1b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000410819f04aba17677807c61ae72afdddf7737f26931ecfa8af05b7c669808b36a2587e32c90bb0ed2100266dd7797c80121a109a2b0fe941ca5a580e438988cac81c000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"));
}
}
// ===========================================================================
// File: crates/storage/codecs/src/alloy/transaction/eip2930.rs
// ===========================================================================

//! Compact implementation for [`AlloyTxEip2930`]
use crate::Compact;
use alloy_consensus::TxEip2930 as AlloyTxEip2930;
use alloy_eips::eip2930::AccessList;
use alloy_primitives::{Bytes, ChainId, TxKind, U256};
use reth_codecs_derive::add_arbitrary_tests;
/// Transaction with an [`AccessList`] ([EIP-2930](https://eips.ethereum.org/EIPS/eip-2930)).
///
/// This is a helper type to use derive on it instead of manually managing `bitfield`.
///
/// By deriving `Compact` here, any future changes or enhancements to the `Compact` derive
/// will automatically apply to this type.
///
/// Notice: Make sure this struct is 1:1 with [`alloy_consensus::TxEip2930`]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct TxEip2930 {
chain_id: ChainId,
nonce: u64,
gas_price: u128,
gas_limit: u64,
to: TxKind,
value: U256,
access_list: AccessList,
input: Bytes,
}
impl Compact for AlloyTxEip2930 {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxEip2930 {
chain_id: self.chain_id,
nonce: self.nonce,
gas_price: self.gas_price,
gas_limit: self.gas_limit,
to: self.to,
value: self.value,
access_list: self.access_list.clone(),
input: self.input.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxEip2930::from_compact(buf, len);
let alloy_tx = Self {
chain_id: tx.chain_id,
nonce: tx.nonce,
gas_price: tx.gas_price,
gas_limit: tx.gas_limit,
to: tx.to,
value: tx.value,
access_list: tx.access_list,
input: tx.input,
};
(alloy_tx, buf)
}
}
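The mirror-struct pattern above (a local 1:1 copy of a foreign type so the derive can run on it, with conversions on both sides) can be sketched with a toy `Compact`-like trait. All names here (`ToyCompact`, `ForeignTx`, `TxMirror`) and the fixed-width little-endian encoding are illustrative assumptions, not the real derive's layout.

```rust
/// Toy stand-in for the `Compact` trait.
trait ToyCompact: Sized {
    fn to_compact(&self, buf: &mut Vec<u8>) -> usize;
    fn from_compact(buf: &[u8]) -> (Self, &[u8]);
}

/// Stand-in for a type from another crate that we cannot derive on.
#[derive(Debug, Clone, PartialEq)]
struct ForeignTx {
    nonce: u64,
    gas_limit: u64,
}

/// Local 1:1 mirror; in the real code this is where `#[derive(Compact)]` goes.
struct TxMirror {
    nonce: u64,
    gas_limit: u64,
}

impl ToyCompact for TxMirror {
    fn to_compact(&self, buf: &mut Vec<u8>) -> usize {
        buf.extend_from_slice(&self.nonce.to_le_bytes());
        buf.extend_from_slice(&self.gas_limit.to_le_bytes());
        16
    }
    fn from_compact(buf: &[u8]) -> (Self, &[u8]) {
        let mut n = [0u8; 8];
        n.copy_from_slice(&buf[..8]);
        let mut g = [0u8; 8];
        g.copy_from_slice(&buf[8..16]);
        (Self { nonce: u64::from_le_bytes(n), gas_limit: u64::from_le_bytes(g) }, &buf[16..])
    }
}

impl ToyCompact for ForeignTx {
    fn to_compact(&self, buf: &mut Vec<u8>) -> usize {
        // Convert into the mirror, then reuse its derived-style encoding.
        TxMirror { nonce: self.nonce, gas_limit: self.gas_limit }.to_compact(buf)
    }
    fn from_compact(buf: &[u8]) -> (Self, &[u8]) {
        let (m, rest) = TxMirror::from_compact(buf);
        (Self { nonce: m.nonce, gas_limit: m.gas_limit }, rest)
    }
}

fn main() {
    let tx = ForeignTx { nonce: 7, gas_limit: 21_000 };
    let mut buf = Vec::new();
    tx.to_compact(&mut buf);
    let (decoded, rest) = ForeignTx::from_compact(&buf);
    assert_eq!(decoded, tx);
    assert!(rest.is_empty());
}
```

The cost of the pattern is the `clone()` of heap-backed fields (`access_list`, `input`) on the encode path, which the real impls above accept for the sake of keeping the derive usable.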
// crates/storage/codecs/src/alloy/transaction/eip7702.rs
//! Compact implementation for [`AlloyTxEip7702`]
use crate::Compact;
use alloc::vec::Vec;
use alloy_consensus::TxEip7702 as AlloyTxEip7702;
use alloy_eips::{eip2930::AccessList, eip7702::SignedAuthorization};
use alloy_primitives::{Address, Bytes, ChainId, U256};
use reth_codecs_derive::add_arbitrary_tests;
/// [EIP-7702 Set Code Transaction](https://eips.ethereum.org/EIPS/eip-7702)
///
/// This is a helper type to use derive on it instead of manually managing `bitfield`.
///
/// By deriving `Compact` here, any future changes or enhancements to the `Compact` derive
/// will automatically apply to this type.
///
/// Notice: Make sure this struct is 1:1 with [`alloy_consensus::TxEip7702`]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
#[add_arbitrary_tests(crate, compact)]
pub(crate) struct TxEip7702 {
chain_id: ChainId,
nonce: u64,
gas_limit: u64,
max_fee_per_gas: u128,
max_priority_fee_per_gas: u128,
to: Address,
value: U256,
access_list: AccessList,
authorization_list: Vec<SignedAuthorization>,
input: Bytes,
}
impl Compact for AlloyTxEip7702 {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxEip7702 {
chain_id: self.chain_id,
nonce: self.nonce,
max_fee_per_gas: self.max_fee_per_gas,
max_priority_fee_per_gas: self.max_priority_fee_per_gas,
gas_limit: self.gas_limit,
to: self.to,
value: self.value,
input: self.input.clone(),
access_list: self.access_list.clone(),
authorization_list: self.authorization_list.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxEip7702::from_compact(buf, len);
let alloy_tx = Self {
chain_id: tx.chain_id,
nonce: tx.nonce,
max_fee_per_gas: tx.max_fee_per_gas,
max_priority_fee_per_gas: tx.max_priority_fee_per_gas,
gas_limit: tx.gas_limit,
to: tx.to,
value: tx.value,
input: tx.input,
access_list: tx.access_list,
authorization_list: tx.authorization_list,
};
(alloy_tx, buf)
}
}
// crates/storage/codecs/src/alloy/transaction/mod.rs
//! Compact implementation for transaction types
use crate::Compact;
use alloy_consensus::{
transaction::{RlpEcdsaEncodableTx, TxEip1559, TxEip2930, TxEip7702, TxLegacy},
EthereumTypedTransaction, TxType,
};
use alloy_primitives::bytes::BufMut;
impl<Eip4844> Compact for EthereumTypedTransaction<Eip4844>
where
Eip4844: Compact + RlpEcdsaEncodableTx,
{
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
let identifier = self.tx_type().to_compact(buf);
match self {
Self::Legacy(tx) => tx.to_compact(buf),
Self::Eip2930(tx) => tx.to_compact(buf),
Self::Eip1559(tx) => tx.to_compact(buf),
Self::Eip4844(tx) => tx.to_compact(buf),
Self::Eip7702(tx) => tx.to_compact(buf),
};
identifier
}
fn from_compact(buf: &[u8], identifier: usize) -> (Self, &[u8]) {
let (tx_type, buf) = TxType::from_compact(buf, identifier);
match tx_type {
TxType::Legacy => {
let (tx, buf) = TxLegacy::from_compact(buf, buf.len());
(Self::Legacy(tx), buf)
}
TxType::Eip4844 => {
let (tx, buf) = Eip4844::from_compact(buf, buf.len());
(Self::Eip4844(tx), buf)
}
TxType::Eip7702 => {
let (tx, buf) = TxEip7702::from_compact(buf, buf.len());
(Self::Eip7702(tx), buf)
}
TxType::Eip1559 => {
let (tx, buf) = TxEip1559::from_compact(buf, buf.len());
(Self::Eip1559(tx), buf)
}
TxType::Eip2930 => {
let (tx, buf) = TxEip2930::from_compact(buf, buf.len());
(Self::Eip2930(tx), buf)
}
}
}
}
cond_mod!(eip1559, eip2930, eip4844, eip7702, legacy, seismic, txtype);
mod ethereum;
pub use ethereum::{CompactEnvelope, Envelope, FromTxCompact, ToTxCompact};
#[cfg(all(feature = "test-utils", feature = "op"))]
pub mod optimism;
#[cfg(all(not(feature = "test-utils"), feature = "op"))]
mod optimism;
#[cfg(test)]
mod tests {
// Each value in the database has an extra field named `flags` that encodes metadata about
// other fields in the value, e.g. offset and length.
//
// This check ensures we do not inadvertently add too many fields to a struct, which would
// expand the flags field and break backwards compatibility.
use crate::{
alloy::{
header::Header,
transaction::{
eip1559::TxEip1559, eip2930::TxEip2930, eip4844::TxEip4844, eip7702::TxEip7702,
legacy::TxLegacy,
},
},
test_utils::test_decode,
};
use alloy_primitives::hex;
#[test]
fn test_ensure_backwards_compatibility() {
assert_eq!(TxEip4844::bitflag_encoded_bytes(), 5);
assert_eq!(TxLegacy::bitflag_encoded_bytes(), 3);
assert_eq!(TxEip1559::bitflag_encoded_bytes(), 4);
assert_eq!(TxEip2930::bitflag_encoded_bytes(), 3);
assert_eq!(TxEip7702::bitflag_encoded_bytes(), 4);
}
#[cfg(feature = "op")]
#[test]
fn test_ensure_backwards_compatibility_optimism() {
assert_eq!(crate::alloy::transaction::optimism::TxDeposit::bitflag_encoded_bytes(), 2);
}
#[test]
fn test_decode_header() {
test_decode::<Header>(&hex!(
"01000000fbbb564baeafd064b979c2ac032df5cd987098066a8c6969514dfb8ecfbf043e667fa19efcc00d1dd197c309a3cc42dec820cd627af8f7f38f3274f842406891b22624431d0ea858422db8415b1181f8d19befbd21287debaf98a94e84b3ec20be846f35abfbf743ee3eda4fdda6a6f9124d295da97e26eaa1cedd09936f0a3c560b6bc10316dba5e82abd21afcf519a985feb09a6ce7fba2e8163b10f06c99828b8049c29b993d88d1d112dca60a03ebd8ebc6d69a7e1f301ca6d67c21fe0949d67bca251edf36c96a2cf7c84d98fc60a53988ac95820f434eb35280d98c8ba4d7484e7ee8fefd63591ad4c937ccaaea23871d05c77bac754c5759b34cf9b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
));
}
#[test]
fn test_decode_eip1559() {
test_decode::<TxEip1559>(&hex!(
"88086110b81b05bc5bb59ec3e4cd44e895a9dcb2656d5003e2f64ecb2e15443898cc1cc19af19ca96fc2b4eafc4abc26e4bbd70a3ddb10b7530b65eea128f4095c97164f712c04239902c1b08acf3949d4687123cdd72d5c73df113d2dc6ed7e519f410ace5553ca805975240a208b57013532de78c5cb407423ea11921ab11b13e93ef35d4d01c9a23166c4d627987545fe4675528d0ab111b0a1dc83fba0a4e1cd5c826a94db3f"
));
}
#[test]
fn test_decode_eip2930() {
test_decode::<TxEip2930>(&hex!(
"7810833fce14e3e2921e94fd3727eb71e91551d2c1e029697a654bfab510f3963aa57074015e152065d1c807f8830079fb0aeadc251d248eaec7147e78580ed638c4e667827775e24270edd5aad475776533ece65373afa71722bfeba3c900"
));
}
#[test]
fn test_decode_eip4844() {
test_decode::<TxEip4844>(&hex!(
"88086110025c359180ea680b5007c856f9e1ad4d1be7a5019feb42133f4fc4bdf74da1b457ab787462385a28a1bf8edb401adabf3ff21ac18f695e30180348ea67246fc4dc25e88add12b7c317651a0ce08946d98dbbe5b38883aa758a0f247e23b0fe3ac1bcc43d7212c984d6ccc770d70135890c9a07d715cacb9032c90d539d0b3d209a8d600178bcfb416fd489e5d5dd56d9cfc6addae810ae70bdaee65672b871dc2b3f35ec00dbaa0d872f78cb58b3199984c608c8ba"
));
}
#[test]
fn test_decode_eip7702() {
test_decode::<TxEip7702>(&hex!(
"8808210881415c034feba383d7a6efd3f2601309b33a6d682ad47168cac0f7a5c5136a33370e5e7ca7f570d5530d7a0d18bf5eac33583fdc27b6580f61e8cbd34d6de596f925c1f353188feb2c1e9e20de82a80b57f0be425d8c5896280d4f5f66cdcfba256d0c9ac8abd833859a62ec019501b4585fa176f048de4f88b93bdefecfcaf4d8f0dd04767bc683a4569c893632e44ba9d53f90d758125c9b24c0192a649166520cd5eecbc110b53eda400cf184b8ef9932c81d0deb2ea27dfa863392a87bfd53af3ec67379f20992501e76e387cbe3933861beead1b49649383cf8b2a2d5c6d04b7edc376981ed9b12cf7199fe7fabf5198659e001bed40922969b82a6cd000000000000"
));
}
#[test]
fn test_decode_legacy() {
test_decode::<TxLegacy>(&hex!(
"112210080a8ba06a8d108540bb3140e9f71a0812c46226f9ea77ae880d98d19fe27e5911801175c3b32620b2e887af0296af343526e439b775ee3b1c06750058e9e5fc4cd5965c3010f86184"
));
}
#[cfg(feature = "op")]
#[test]
fn test_decode_deposit() {
test_decode::<op_alloy_consensus::TxDeposit>(&hex!(
"8108ac8f15983d59b6ae4911a00ff7bfcd2e53d2950926f8c82c12afad02861c46fcb293e776204052725e1c08ff2e9ff602ca916357601fa972a14094891fe3598b718758f22c46f163c18bcaa6296ce87e5267ef3fd932112842fbbf79011548cdf067d93ce6098dfc0aaf5a94531e439f30d6dfd0c6"
));
}
}
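The pinned `bitflag_encoded_bytes` values asserted above can be modeled with a small helper: each field contributes a fixed number of flag bits (per the bit-size table in the `Compact` derive docs), and the total is packed into whole bytes. This is a toy model of why adding a field can change the on-disk layout, not the derive's actual code; the per-field bit costs are taken from the documented table and are otherwise an assumption.

```rust
/// Number of whole bytes needed to hold the given per-field flag bits.
fn bitflag_bytes(field_bits: &[usize]) -> usize {
    let total: usize = field_bits.iter().sum();
    (total + 7) / 8
}

fn main() {
    // TxEip2930: chain_id u64 (4) + nonce u64 (4) + gas_price u128 (5)
    // + gas_limit u64 (4) + to TxKind (1) + value U256 (6) = 24 bits = 3 bytes,
    // matching the `TxEip2930::bitflag_encoded_bytes() == 3` assertion above.
    assert_eq!(bitflag_bytes(&[4, 4, 5, 4, 1, 6]), 3);

    // One more 1-bit field (e.g. an `Option<T>`) crosses into a 4th byte,
    // which is exactly the backwards-compatibility break the tests guard against.
    assert_eq!(bitflag_bytes(&[4, 4, 5, 4, 1, 6, 1]), 4);
}
```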
// crates/storage/codecs/src/alloy/transaction/legacy.rs
//! Compact implementation for [`AlloyTxLegacy`]
use crate::Compact;
use alloy_consensus::TxLegacy as AlloyTxLegacy;
use alloy_primitives::{Bytes, ChainId, TxKind, U256};
/// Legacy transaction.
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize),
crate::add_arbitrary_tests(crate, compact)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
pub(crate) struct TxLegacy {
/// Added as EIP-155: Simple replay attack protection
chain_id: Option<ChainId>,
/// A scalar value equal to the number of transactions sent by the sender; formally Tn.
nonce: u64,
/// A scalar value equal to the number of Wei to be paid per unit of gas for all
/// computation costs incurred as a result of the execution of this transaction;
/// formally Tp.
///
/// As Ethereum's circulating supply is around 120 million ETH as of 2022, i.e. roughly
/// 120000000000000000000000000 wei, `u128` is safe to use since its maximum is
/// 340282366920938463463374607431768211455.
gas_price: u128,
/// A scalar value equal to the maximum
/// amount of gas that should be used in executing
/// this transaction. This is paid up-front, before any
/// computation is done and may not be increased
/// later; formally Tg.
gas_limit: u64,
/// The 160-bit address of the message call’s recipient or, for a contract creation
/// transaction, ∅, used here to denote the only member of B0 ; formally Tt.
to: TxKind,
/// A scalar value equal to the number of Wei to
/// be transferred to the message call’s recipient or,
/// in the case of contract creation, as an endowment
/// to the newly created account; formally Tv.
value: U256,
/// Input has two uses depending on whether the transaction is Create or Call (i.e.
/// whether the `to` field is None or Some):
/// - init: an unlimited size byte array specifying the EVM code for the account
///   initialisation procedure (CREATE),
/// - data: an unlimited size byte array specifying the input data of the message call;
///   formally Td.
input: Bytes,
}
impl Compact for AlloyTxLegacy {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxLegacy {
chain_id: self.chain_id,
nonce: self.nonce,
gas_price: self.gas_price,
gas_limit: self.gas_limit,
to: self.to,
value: self.value,
input: self.input.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxLegacy::from_compact(buf, len);
let alloy_tx = Self {
chain_id: tx.chain_id,
nonce: tx.nonce,
gas_price: tx.gas_price,
gas_limit: tx.gas_limit,
to: tx.to,
value: tx.value,
input: tx.input,
};
(alloy_tx, buf)
}
}
// crates/storage/codecs/src/alloy/transaction/txtype.rs
//! Compact implementation for [`TxType`]
use crate::txtype::{
COMPACT_EXTENDED_IDENTIFIER_FLAG, COMPACT_IDENTIFIER_EIP1559, COMPACT_IDENTIFIER_EIP2930,
COMPACT_IDENTIFIER_LEGACY,
};
use alloy_consensus::{constants::*, TxType};
impl crate::Compact for TxType {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
use crate::txtype::*;
match self {
Self::Legacy => COMPACT_IDENTIFIER_LEGACY,
Self::Eip2930 => COMPACT_IDENTIFIER_EIP2930,
Self::Eip1559 => COMPACT_IDENTIFIER_EIP1559,
Self::Eip4844 => {
buf.put_u8(EIP4844_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
Self::Eip7702 => {
buf.put_u8(EIP7702_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
}
}
// For backwards compatibility purposes only 2 bits of the type are encoded in the identifier
// parameter. In the case of a [`COMPACT_EXTENDED_IDENTIFIER_FLAG`], the full transaction type
// is read from the buffer as a single byte.
fn from_compact(mut buf: &[u8], identifier: usize) -> (Self, &[u8]) {
use bytes::Buf;
(
match identifier {
COMPACT_IDENTIFIER_LEGACY => Self::Legacy,
COMPACT_IDENTIFIER_EIP2930 => Self::Eip2930,
COMPACT_IDENTIFIER_EIP1559 => Self::Eip1559,
COMPACT_EXTENDED_IDENTIFIER_FLAG => {
let extended_identifier = buf.get_u8();
match extended_identifier {
EIP4844_TX_TYPE_ID => Self::Eip4844,
EIP7702_TX_TYPE_ID => Self::Eip7702,
_ => panic!("Unsupported TxType identifier: {extended_identifier}"),
}
}
_ => panic!("Unknown identifier for TxType: {identifier}"),
},
buf,
)
}
}
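The two-bit identifier scheme implemented above can be sketched in isolation: the three pre-Cancun types fit directly in the 2-bit identifier, and the fourth value is a flag meaning "the real type byte follows in the buffer". Names and constants here are toy stand-ins (the extended byte values mirror the EIP type ids used above, but treat the exact values as assumptions).

```rust
const ID_LEGACY: usize = 0;
const ID_EIP2930: usize = 1;
const ID_EIP1559: usize = 2;
const EXTENDED_FLAG: usize = 3; // "read one more byte for the real type"
const EIP4844_BYTE: u8 = 3;
const EIP7702_BYTE: u8 = 4;

#[derive(Debug, PartialEq, Clone, Copy)]
enum Ty {
    Legacy,
    Eip2930,
    Eip1559,
    Eip4844,
    Eip7702,
}

/// Returns the 2-bit identifier; extended types also append one byte to `buf`.
fn encode(ty: Ty, buf: &mut Vec<u8>) -> usize {
    match ty {
        Ty::Legacy => ID_LEGACY,
        Ty::Eip2930 => ID_EIP2930,
        Ty::Eip1559 => ID_EIP1559,
        Ty::Eip4844 => {
            buf.push(EIP4844_BYTE);
            EXTENDED_FLAG
        }
        Ty::Eip7702 => {
            buf.push(EIP7702_BYTE);
            EXTENDED_FLAG
        }
    }
}

fn decode(buf: &[u8], identifier: usize) -> (Ty, &[u8]) {
    match identifier {
        ID_LEGACY => (Ty::Legacy, buf),
        ID_EIP2930 => (Ty::Eip2930, buf),
        ID_EIP1559 => (Ty::Eip1559, buf),
        EXTENDED_FLAG => match buf[0] {
            EIP4844_BYTE => (Ty::Eip4844, &buf[1..]),
            EIP7702_BYTE => (Ty::Eip7702, &buf[1..]),
            b => panic!("unknown extended type byte: {b}"),
        },
        i => panic!("identifier out of 2-bit range: {i}"),
    }
}

fn main() {
    for ty in [Ty::Legacy, Ty::Eip2930, Ty::Eip1559, Ty::Eip4844, Ty::Eip7702] {
        let mut buf = Vec::new();
        let id = encode(ty, &mut buf);
        assert!(id <= 3, "identifier must fit in 2 bits");
        let (back, rest) = decode(&buf, id);
        assert_eq!(back, ty);
        assert!(rest.is_empty());
    }
}
```

The key property, as the comment above notes, is backwards compatibility: old payloads that only ever used identifiers 0..=2 keep decoding unchanged, while new types cost one extra byte.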
#[cfg(test)]
mod tests {
use super::*;
use rstest::rstest;
use crate::Compact;
use alloy_consensus::constants::{EIP4844_TX_TYPE_ID, EIP7702_TX_TYPE_ID};
#[rstest]
#[case(TxType::Legacy, COMPACT_IDENTIFIER_LEGACY, vec![])]
#[case(TxType::Eip2930, COMPACT_IDENTIFIER_EIP2930, vec![])]
#[case(TxType::Eip1559, COMPACT_IDENTIFIER_EIP1559, vec![])]
#[case(TxType::Eip4844, COMPACT_EXTENDED_IDENTIFIER_FLAG, vec![EIP4844_TX_TYPE_ID])]
#[case(TxType::Eip7702, COMPACT_EXTENDED_IDENTIFIER_FLAG, vec![EIP7702_TX_TYPE_ID])]
fn test_txtype_to_compact(
#[case] tx_type: TxType,
#[case] expected_identifier: usize,
#[case] expected_buf: Vec<u8>,
) {
let mut buf = vec![];
let identifier = tx_type.to_compact(&mut buf);
assert_eq!(identifier, expected_identifier, "Unexpected identifier for TxType {tx_type:?}",);
assert_eq!(buf, expected_buf, "Unexpected buffer for TxType {tx_type:?}",);
}
#[rstest]
#[case(TxType::Legacy, COMPACT_IDENTIFIER_LEGACY, vec![])]
#[case(TxType::Eip2930, COMPACT_IDENTIFIER_EIP2930, vec![])]
#[case(TxType::Eip1559, COMPACT_IDENTIFIER_EIP1559, vec![])]
#[case(TxType::Eip4844, COMPACT_EXTENDED_IDENTIFIER_FLAG, vec![EIP4844_TX_TYPE_ID])]
#[case(TxType::Eip7702, COMPACT_EXTENDED_IDENTIFIER_FLAG, vec![EIP7702_TX_TYPE_ID])]
fn test_txtype_from_compact(
#[case] expected_type: TxType,
#[case] identifier: usize,
#[case] buf: Vec<u8>,
) {
let (actual_type, remaining_buf) = TxType::from_compact(&buf, identifier);
assert_eq!(actual_type, expected_type, "Unexpected TxType for identifier {identifier}");
assert!(remaining_buf.is_empty(), "Buffer not fully consumed for identifier {identifier}");
}
}
// crates/storage/codecs/src/alloy/transaction/eip1559.rs
//! Compact implementation for [`AlloyTxEip1559`]
use crate::Compact;
use alloy_consensus::TxEip1559 as AlloyTxEip1559;
use alloy_eips::eip2930::AccessList;
use alloy_primitives::{Bytes, ChainId, TxKind, U256};
/// [EIP-1559 Transaction](https://eips.ethereum.org/EIPS/eip-1559)
///
/// This is a helper type to use derive on it instead of manually managing `bitfield`.
///
/// By deriving `Compact` here, any future changes or enhancements to the `Compact` derive
/// will automatically apply to this type.
///
/// Notice: Make sure this struct is 1:1 with [`alloy_consensus::TxEip1559`]
#[derive(Debug, Clone, PartialEq, Eq, Hash, Compact, Default)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize)
)]
#[cfg_attr(any(test, feature = "test-utils"), crate::add_arbitrary_tests(crate, compact))]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
pub(crate) struct TxEip1559 {
chain_id: ChainId,
nonce: u64,
gas_limit: u64,
max_fee_per_gas: u128,
max_priority_fee_per_gas: u128,
to: TxKind,
value: U256,
access_list: AccessList,
input: Bytes,
}
impl Compact for AlloyTxEip1559 {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxEip1559 {
chain_id: self.chain_id,
nonce: self.nonce,
gas_limit: self.gas_limit,
max_fee_per_gas: self.max_fee_per_gas,
max_priority_fee_per_gas: self.max_priority_fee_per_gas,
to: self.to,
value: self.value,
access_list: self.access_list.clone(),
input: self.input.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxEip1559::from_compact(buf, len);
let alloy_tx = Self {
chain_id: tx.chain_id,
nonce: tx.nonce,
gas_limit: tx.gas_limit,
max_fee_per_gas: tx.max_fee_per_gas,
max_priority_fee_per_gas: tx.max_priority_fee_per_gas,
to: tx.to,
value: tx.value,
access_list: tx.access_list,
input: tx.input,
};
(alloy_tx, buf)
}
}
// crates/storage/codecs/src/alloy/transaction/seismic.rs
//! Compact implementation for [`AlloyTxSeismic`]
use crate::{
txtype::{
COMPACT_EXTENDED_IDENTIFIER_FLAG, COMPACT_IDENTIFIER_EIP1559, COMPACT_IDENTIFIER_EIP2930,
COMPACT_IDENTIFIER_LEGACY,
},
Compact,
};
use alloy_consensus::{
    transaction::{TxEip1559, TxEip2930, TxEip7702, TxLegacy},
    Signed, TxEip4844, TxEip4844Variant,
};
use alloy_eips::eip2718::{EIP4844_TX_TYPE_ID, EIP7702_TX_TYPE_ID};
use alloy_primitives::{aliases::U96, Bytes, ChainId, Signature, TxKind, U256};
use bytes::{Buf, BufMut, BytesMut};
use seismic_alloy_consensus::{
transaction::TxSeismicElements, SeismicTxEnvelope, SeismicTxType, SeismicTypedTransaction,
TxSeismic as AlloyTxSeismic, SEISMIC_TX_TYPE_ID,
};
use super::ethereum::{CompactEnvelope, Envelope, FromTxCompact, ToTxCompact};
/// Seismic transaction.
#[derive(Debug, Clone, PartialEq, Eq, Default, Compact)]
#[reth_codecs(crate = "crate")]
#[cfg_attr(
any(test, feature = "test-utils"),
derive(arbitrary::Arbitrary, serde::Serialize, serde::Deserialize),
crate::add_arbitrary_tests(crate, compact)
)]
#[cfg_attr(feature = "test-utils", allow(unreachable_pub), visibility::make(pub))]
pub(crate) struct TxSeismic {
/// Added as EIP-155: Simple replay attack protection
chain_id: ChainId,
/// A scalar value equal to the number of transactions sent by the sender; formally Tn.
nonce: u64,
/// A scalar value equal to the number of Wei to be paid per unit of gas for all
/// computation costs incurred as a result of the execution of this transaction;
/// formally Tp.
///
/// As Ethereum's circulating supply is around 120 million ETH as of 2022, i.e. roughly
/// 120000000000000000000000000 wei, `u128` is safe to use since its maximum is
/// 340282366920938463463374607431768211455.
gas_price: u128,
/// A scalar value equal to the maximum
/// amount of gas that should be used in executing
/// this transaction. This is paid up-front, before any
/// computation is done and may not be increased
/// later; formally Tg.
gas_limit: u64,
/// The 160-bit address of the message call’s recipient or, for a contract creation
/// transaction, ∅, used here to denote the only member of B0 ; formally Tt.
to: TxKind,
/// A scalar value equal to the number of Wei to
/// be transferred to the message call’s recipient or,
/// in the case of contract creation, as an endowment
/// to the newly created account; formally Tv.
value: U256,
/// seismic elements
seismic_elements: TxSeismicElements,
/// Input has two uses depending on whether the transaction is Create or Call (i.e.
/// whether the `to` field is None or Some):
/// - init: an unlimited size byte array specifying the EVM code for the account
///   initialisation procedure (CREATE),
/// - data: an unlimited size byte array specifying the input data of the message call;
///   formally Td.
input: Bytes,
}
impl Compact for TxSeismicElements {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let mut len = 0;
len += self.encryption_pubkey.serialize().to_compact(buf);
buf.put_u8(self.message_version);
len += core::mem::size_of::<u8>();
let mut cache = BytesMut::new();
let nonce_len = self.encryption_nonce.to_compact(&mut cache);
buf.put_u8(nonce_len as u8);
buf.put_slice(&cache);
len += nonce_len + 1;
len
}
fn from_compact(mut buf: &[u8], _len: usize) -> (Self, &[u8]) {
let encryption_pubkey_compressed_bytes =
&buf[..seismic_enclave::secp256k1::constants::PUBLIC_KEY_SIZE];
let encryption_pubkey =
seismic_enclave::secp256k1::PublicKey::from_slice(encryption_pubkey_compressed_bytes)
.unwrap();
buf.advance(seismic_enclave::secp256k1::constants::PUBLIC_KEY_SIZE);
let (message_version, buf) = (buf[0], &buf[1..]);
let (nonce_len, buf) = (buf[0], &buf[1..]);
let (encryption_nonce, buf) = U96::from_compact(buf, nonce_len as usize);
(Self { encryption_pubkey, encryption_nonce, message_version }, buf)
}
}
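The wire layout hand-rolled for `TxSeismicElements` above — a fixed-size compressed public key (33 bytes), one version byte, then a 1-byte length prefix followed by a variable-width nonce — can be sketched with std types only. The field sizes follow the code above; the minimal big-endian nonce encoding is an assumption about how `U96::to_compact` behaves, and `u128` stands in for `U96`.

```rust
/// Encode: 33-byte pubkey || version byte || nonce_len byte || nonce bytes.
/// Returns the number of bytes written.
fn encode(pubkey: &[u8; 33], version: u8, nonce: u128, buf: &mut Vec<u8>) -> usize {
    buf.extend_from_slice(pubkey);
    buf.push(version);
    // Minimal big-endian encoding of the nonce (assumed codec behavior);
    // an all-zero nonce still gets one byte.
    let nonce_bytes = nonce.to_be_bytes();
    let first = nonce_bytes.iter().position(|&b| b != 0).unwrap_or(15);
    let trimmed = &nonce_bytes[first..];
    buf.push(trimmed.len() as u8);
    buf.extend_from_slice(trimmed);
    33 + 1 + 1 + trimmed.len()
}

/// Decode the same layout, returning the remaining buffer like `from_compact`.
fn decode(buf: &[u8]) -> ([u8; 33], u8, u128, &[u8]) {
    let mut pubkey = [0u8; 33];
    pubkey.copy_from_slice(&buf[..33]);
    let version = buf[33];
    let len = buf[34] as usize;
    let mut nonce = 0u128;
    for &b in &buf[35..35 + len] {
        nonce = (nonce << 8) | b as u128;
    }
    (pubkey, version, nonce, &buf[35 + len..])
}

fn main() {
    let pk = [2u8; 33];
    let mut buf = Vec::new();
    let written = encode(&pk, 85, 0xDEAD_BEEF, &mut buf);
    assert_eq!(written, buf.len());
    let (pk2, version, nonce, rest) = decode(&buf);
    assert_eq!((pk2, version, nonce), (pk, 85, 0xDEAD_BEEF));
    assert!(rest.is_empty());
}
```

Note that the length prefix is what lets the decoder skip past the variable-width nonce without any out-of-band length information, which is why the real impl writes `nonce_len` before the cached nonce bytes.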
impl Compact for AlloyTxSeismic {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let tx = TxSeismic {
chain_id: self.chain_id,
nonce: self.nonce,
gas_price: self.gas_price,
gas_limit: self.gas_limit,
to: self.to,
value: self.value,
seismic_elements: self.seismic_elements,
input: self.input.clone(),
};
tx.to_compact(buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (tx, _) = TxSeismic::from_compact(buf, len);
let alloy_tx = Self {
chain_id: tx.chain_id,
nonce: tx.nonce,
gas_price: tx.gas_price,
gas_limit: tx.gas_limit,
to: tx.to,
value: tx.value,
seismic_elements: tx.seismic_elements,
input: tx.input,
};
(alloy_tx, buf)
}
}
impl Compact for SeismicTxType {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
match self {
Self::Legacy => COMPACT_IDENTIFIER_LEGACY,
Self::Eip2930 => COMPACT_IDENTIFIER_EIP2930,
Self::Eip1559 => COMPACT_IDENTIFIER_EIP1559,
Self::Eip4844 => {
buf.put_u8(EIP4844_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
Self::Eip7702 => {
buf.put_u8(EIP7702_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
Self::Seismic => {
buf.put_u8(SEISMIC_TX_TYPE_ID);
COMPACT_EXTENDED_IDENTIFIER_FLAG
}
}
}
fn from_compact(mut buf: &[u8], identifier: usize) -> (Self, &[u8]) {
use bytes::Buf;
(
match identifier {
COMPACT_IDENTIFIER_LEGACY => Self::Legacy,
COMPACT_IDENTIFIER_EIP2930 => Self::Eip2930,
COMPACT_IDENTIFIER_EIP1559 => Self::Eip1559,
COMPACT_EXTENDED_IDENTIFIER_FLAG => {
let extended_identifier = buf.get_u8();
match extended_identifier {
EIP7702_TX_TYPE_ID => Self::Eip7702,
SEISMIC_TX_TYPE_ID => Self::Seismic,
_ => panic!("Unsupported TxType identifier: {extended_identifier}"),
}
}
_ => panic!("Unknown identifier for TxType: {identifier}"),
},
buf,
)
}
}
impl Compact for SeismicTypedTransaction {
fn to_compact<B>(&self, out: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let identifier = self.tx_type().to_compact(out);
match self {
Self::Legacy(tx) => tx.to_compact(out),
Self::Eip2930(tx) => tx.to_compact(out),
Self::Eip1559(tx) => tx.to_compact(out),
Self::Eip4844(tx) => {
match tx {
TxEip4844Variant::TxEip4844(tx) => tx.to_compact(out),
TxEip4844Variant::TxEip4844WithSidecar(tx) => {
// we do not have a way to encode the sidecar, so we just encode the inner
let inner: &TxEip4844 = tx.tx();
inner.to_compact(out)
}
}
}
Self::Eip7702(tx) => tx.to_compact(out),
Self::Seismic(tx) => tx.to_compact(out),
};
identifier
}
fn from_compact(buf: &[u8], identifier: usize) -> (Self, &[u8]) {
let (tx_type, buf) = SeismicTxType::from_compact(buf, identifier);
match tx_type {
SeismicTxType::Legacy => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Legacy(tx), buf)
}
SeismicTxType::Eip2930 => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Eip2930(tx), buf)
}
SeismicTxType::Eip1559 => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Eip1559(tx), buf)
}
SeismicTxType::Eip4844 => {
let (tx, buf): (TxEip4844, _) = Compact::from_compact(buf, buf.len());
let tx = TxEip4844Variant::TxEip4844(tx);
(Self::Eip4844(tx), buf)
}
SeismicTxType::Eip7702 => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Eip7702(tx), buf)
}
SeismicTxType::Seismic => {
let (tx, buf) = Compact::from_compact(buf, buf.len());
(Self::Seismic(tx), buf)
}
}
}
}
impl ToTxCompact for SeismicTxEnvelope {
fn to_tx_compact(&self, buf: &mut (impl BufMut + AsMut<[u8]>)) {
match self {
Self::Legacy(tx) => tx.tx().to_compact(buf),
Self::Eip2930(tx) => tx.tx().to_compact(buf),
Self::Eip1559(tx) => tx.tx().to_compact(buf),
Self::Eip4844(tx) => match tx.tx() {
TxEip4844Variant::TxEip4844(tx) => tx.to_compact(buf),
TxEip4844Variant::TxEip4844WithSidecar(tx) => Compact::to_compact(&tx.tx(), buf),
},
Self::Eip7702(tx) => tx.tx().to_compact(buf),
Self::Seismic(tx) => tx.tx().to_compact(buf),
};
}
}
impl FromTxCompact for SeismicTxEnvelope {
type TxType = SeismicTxType;
fn from_tx_compact(buf: &[u8], tx_type: SeismicTxType, signature: Signature) -> (Self, &[u8]) {
match tx_type {
SeismicTxType::Legacy => {
let (tx, buf) = TxLegacy::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Legacy(tx), buf)
}
SeismicTxType::Eip2930 => {
let (tx, buf) = TxEip2930::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip2930(tx), buf)
}
SeismicTxType::Eip1559 => {
let (tx, buf) = TxEip1559::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip1559(tx), buf)
}
SeismicTxType::Eip4844 => {
let (variant_tag, rest) = buf.split_first().expect("buffer should not be empty");
match variant_tag {
0 => {
let (tx, buf) = TxEip4844::from_compact(rest, rest.len());
let tx = Signed::new_unhashed(TxEip4844Variant::TxEip4844(tx), signature);
(Self::Eip4844(tx), buf)
}
1 => unreachable!("seismic does not serialize sidecars yet"),
_ => panic!("Unknown EIP-4844 variant tag: {}", variant_tag),
}
}
SeismicTxType::Eip7702 => {
let (tx, buf) = TxEip7702::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Eip7702(tx), buf)
}
SeismicTxType::Seismic => {
let (tx, buf) = AlloyTxSeismic::from_compact(buf, buf.len());
let tx = Signed::new_unhashed(tx, signature);
(Self::Seismic(tx), buf)
}
}
}
}
impl Envelope for SeismicTxEnvelope {
fn signature(&self) -> &Signature {
match self {
Self::Legacy(tx) => tx.signature(),
Self::Eip2930(tx) => tx.signature(),
Self::Eip1559(tx) => tx.signature(),
Self::Eip4844(tx) => tx.signature(),
Self::Eip7702(tx) => tx.signature(),
Self::Seismic(tx) => tx.signature(),
}
}
fn tx_type(&self) -> Self::TxType {
Self::tx_type(self)
}
}
impl Compact for SeismicTxEnvelope {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: BufMut + AsMut<[u8]>,
{
CompactEnvelope::to_compact(self, buf)
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
CompactEnvelope::from_compact(buf, len)
}
}
// Custom test module that excludes EIP4844 cases to avoid proptest failures
#[cfg(test)]
mod seismic_typed_transaction_tests {
use super::*;
use crate::Compact;
use proptest::prelude::*;
use proptest_arbitrary_interop::arb;
#[test]
fn proptest() {
let config = ProptestConfig::with_cases(100);
proptest::proptest!(config, |(field in arb::<SeismicTypedTransaction>())| {
// Skip EIP4844 cases as they have incomplete serialization support
if matches!(&field, SeismicTypedTransaction::Eip4844(_)) {
    return Ok(());
}
let mut buf = vec![];
let len = field.clone().to_compact(&mut buf);
let (decoded, _): (SeismicTypedTransaction, _) = Compact::from_compact(&buf, len);
assert_eq!(field, decoded, "maybe_generate_tests::compact");
});
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_primitives::{hex, Bytes, TxKind};
use bytes::BytesMut;
use seismic_enclave::secp256k1::PublicKey;
#[test]
fn test_seismic_tx_compact_roundtrip() {
// Create a test transaction with representative values
let tx = AlloyTxSeismic {
chain_id: 1166721750861005481,
nonce: 13985005159674441909,
gas_price: 296133358425745351516777806240018869443,
gas_limit: 6091425913586946366,
to: TxKind::Create,
value: U256::from_str_radix(
"30997721070913355446596643088712595347117842472993214294164452566768407578853",
10,
)
.unwrap(),
seismic_elements: TxSeismicElements {
encryption_pubkey: PublicKey::from_slice(
&hex::decode(
"02d211b6b0a191b9469bb3674e9c609f453d3801c3e3fd7e0bb00c6cc1e1d941df",
)
.unwrap(),
)
.unwrap(),
encryption_nonce: U96::from_str_radix("11856476099097235301", 10).unwrap(),
message_version: 85,
},
input: Bytes::from_static(&[0x24]),
};
// Encode to compact format
let mut buf = BytesMut::new();
let encoded_size = tx.to_compact(&mut buf);
// Decode from compact format
let (decoded_tx, _) = AlloyTxSeismic::from_compact(&buf, encoded_size);
// Verify the roundtrip
assert_eq!(tx.chain_id, decoded_tx.chain_id);
assert_eq!(tx.nonce, decoded_tx.nonce);
assert_eq!(tx.gas_price, decoded_tx.gas_price);
assert_eq!(tx.gas_limit, decoded_tx.gas_limit);
assert_eq!(tx.to, decoded_tx.to);
assert_eq!(tx.value, decoded_tx.value);
assert_eq!(tx.input, decoded_tx.input);
// Check seismic elements
assert_eq!(
tx.seismic_elements.encryption_pubkey.serialize(),
decoded_tx.seismic_elements.encryption_pubkey.serialize()
);
assert_eq!(
tx.seismic_elements.encryption_nonce,
decoded_tx.seismic_elements.encryption_nonce
);
assert_eq!(
tx.seismic_elements.message_version,
decoded_tx.seismic_elements.message_version
);
}
}
// crates/storage/codecs/derive/src/lib.rs
//! Derive macros for the Compact codec traits.
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![allow(unreachable_pub, missing_docs)]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
use proc_macro::TokenStream;
use quote::{format_ident, quote};
use syn::{
bracketed,
parse::{Parse, ParseStream},
parse_macro_input, DeriveInput, Result, Token,
};
mod arbitrary;
mod compact;
#[derive(Clone)]
pub(crate) struct ZstdConfig {
compressor: syn::Path,
decompressor: syn::Path,
}
/// Derives the `Compact` trait for custom structs, optimizing serialization with a possible
/// bitflag struct.
///
/// ## Implementation:
/// The derived `Compact` implementation leverages a bitflag struct when needed to manage the
/// presence of certain field types, primarily for compacting fields efficiently. This bitflag
/// struct records information about fields that require a small, fixed number of bits for their
/// encoding, such as `bool`, `Option<T>`, or other small types.
///
/// ### Bit Sizes for Fields:
/// The number of bits used to store a field's length is determined by the field's type. For
/// specific types, a fixed number of bits is allocated (from `fn get_bit_size`):
/// - `bool`, `Option<T>`, `TransactionKind`, `Signature`: **1 bit**
/// - `TxType`: **2 bits**
/// - `u64`, `BlockNumber`, `TxNumber`, `ChainId`, `NumTransactions`: **4 bits**
/// - `u128`: **5 bits**
/// - `U256`: **6 bits**
///
/// ### Warning: Extending structs, unused bits and backwards compatibility:
/// When the bitflag only has one bit left (for example, when adding many `Option<T>` fields),
/// you should introduce a new struct (e.g., `TExtension`) with additional fields, and use
/// `Option<TExtension>` in the original struct. This approach allows further field extensions while
/// maintaining backward compatibility.
///
/// ### Limitations:
/// - Fields not listed above, such as `Vec` or other large composite types, must manage their
///   own encoding and do not rely on the bitflag struct.
/// - `Bytes` fields and any types containing a `Bytes` field should be placed last to ensure
/// efficient decoding.
#[proc_macro_derive(Compact, attributes(maybe_zero, reth_codecs))]
pub fn derive(input: TokenStream) -> TokenStream {
compact::derive(parse_macro_input!(input as DeriveInput), None)
}
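The bit-size table in the doc comment above can be mirrored in plain Rust for illustration. This is a hypothetical sketch: the function name `bit_size` and the string keys stand in for the derive crate's internal `get_bit_size` lookup, which operates on parsed type names rather than strings.

```rust
/// Hypothetical mirror of the derive macro's bit-size table (illustrative only;
/// the real lookup lives inside the derive crate).
fn bit_size(ftype: &str) -> Option<u8> {
    match ftype {
        // 1 bit is enough to record presence (e.g. whether an `Option<T>` is set).
        "bool" | "Option" | "TransactionKind" | "Signature" => Some(1),
        "TxType" => Some(2),
        // 4 bits describe a compact length of 0..=15 bytes; a zero-trimmed
        // `u64` needs at most 8 bytes, so it fits.
        "u64" | "BlockNumber" | "TxNumber" | "ChainId" | "NumTransactions" => Some(4),
        "u128" => Some(5), // up to 16 bytes -> needs 5 bits
        "U256" => Some(6), // up to 32 bytes -> needs 6 bits
        _ => None, // not a flag type: manages its own encoding
    }
}
```

Types returning `None` here are the ones the "Limitations" section refers to: they write and read their own lengths instead of borrowing bits from the flag struct.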
/// Adds `zstd` compression to derived [`Compact`].
#[proc_macro_derive(CompactZstd, attributes(maybe_zero, reth_codecs, reth_zstd))]
pub fn derive_zstd(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as DeriveInput);
let mut compressor = None;
let mut decompressor = None;
for attr in &input.attrs {
if attr.path().is_ident("reth_zstd") {
if let Err(err) = attr.parse_nested_meta(|meta| {
if meta.path.is_ident("compressor") {
let value = meta.value()?;
let path: syn::Path = value.parse()?;
compressor = Some(path);
} else if meta.path.is_ident("decompressor") {
let value = meta.value()?;
let path: syn::Path = value.parse()?;
decompressor = Some(path);
} else {
return Err(meta.error("unsupported attribute"))
}
Ok(())
}) {
return err.to_compile_error().into()
}
}
}
let (Some(compressor), Some(decompressor)) = (compressor, decompressor) else {
return quote! {
compile_error!("missing compressor or decompressor attribute");
}
.into()
};
compact::derive(input, Some(ZstdConfig { compressor, decompressor }))
}
/// Generates tests for given type.
///
/// If `compact` or `rlp` is passed to `add_arbitrary_tests`, proptest roundtrip tests will be
/// generated. An integer argument limits the number of generated proptest cases (default: 256).
///
/// Examples:
/// * `#[add_arbitrary_tests]`: will derive arbitrary with no tests.
/// * `#[add_arbitrary_tests(rlp)]`: will derive arbitrary and generate rlp roundtrip proptests.
/// * `#[add_arbitrary_tests(rlp, 10)]`: will derive arbitrary and generate rlp roundtrip proptests.
/// Limited to 10 cases.
/// * `#[add_arbitrary_tests(compact, rlp)]`: will derive arbitrary and generate rlp and compact
/// roundtrip proptests.
#[proc_macro_attribute]
pub fn add_arbitrary_tests(args: TokenStream, input: TokenStream) -> TokenStream {
let ast = parse_macro_input!(input as DeriveInput);
let tests =
arbitrary::maybe_generate_tests(args, &ast.ident, &format_ident!("{}Tests", ast.ident));
quote! {
#ast
#tests
}
.into()
}
struct GenerateTestsInput {
args: TokenStream,
ty: syn::Type,
mod_name: syn::Ident,
}
impl Parse for GenerateTestsInput {
fn parse(input: ParseStream<'_>) -> Result<Self> {
input.parse::<Token![#]>()?;
let args;
bracketed!(args in input);
let args = args.parse::<proc_macro2::TokenStream>()?;
let ty = input.parse()?;
input.parse::<Token![,]>()?;
let mod_name = input.parse()?;
Ok(Self { args: args.into(), ty, mod_name })
}
}
/// Generates tests for given type based on passed parameters.
///
/// See `arbitrary::maybe_generate_tests` for more information.
///
/// Examples:
/// * `generate_tests!(#[rlp] MyType, MyTypeTests)`: will generate rlp roundtrip tests for `MyType`
/// in a module named `MyTypeTests`.
/// * `generate_tests!(#[compact, 10] MyType, MyTypeTests)`: will generate compact roundtrip tests
/// for `MyType` limited to 10 cases.
#[proc_macro]
pub fn generate_tests(input: TokenStream) -> TokenStream {
let input = parse_macro_input!(input as GenerateTestsInput);
arbitrary::maybe_generate_tests(input.args, &input.ty, &input.mod_name).into()
}
// File: crates/storage/codecs/derive/src/arbitrary.rs
use proc_macro::TokenStream;
use proc_macro2::{Ident, TokenStream as TokenStream2};
use quote::{quote, ToTokens};
/// If `compact` or `rlp` is passed as an argument, this function will generate the
/// corresponding proptest roundtrip tests.
///
/// It accepts an optional integer for the number of proptest cases. Otherwise, it defaults to
/// 256, matching proptest's own default.
pub fn maybe_generate_tests(
args: TokenStream,
type_ident: &impl ToTokens,
mod_tests: &Ident,
) -> TokenStream2 {
    // Default number of proptest cases, same as proptest's own default
    let mut default_cases = 256;
let mut traits = vec![];
let mut roundtrips = vec![];
let mut additional_tests = vec![];
let mut is_crate = false;
let mut iter = args.into_iter().peekable();
// we check if there's a crate argument which is used from inside the codecs crate directly
if let Some(arg) = iter.peek() {
if arg.to_string() == "crate" {
is_crate = true;
iter.next();
}
}
for arg in iter {
if arg.to_string() == "compact" {
let path = if is_crate {
quote! { use crate::Compact; }
} else {
quote! { use reth_codecs::Compact; }
};
traits.push(path);
roundtrips.push(quote! {
{
let mut buf = vec![];
let len = field.clone().to_compact(&mut buf);
let (decoded, _): (super::#type_ident, _) = Compact::from_compact(&buf, len);
assert_eq!(field, decoded, "maybe_generate_tests::compact");
}
});
} else if arg.to_string() == "rlp" {
traits.push(quote! { use alloy_rlp::{Encodable, Decodable}; });
roundtrips.push(quote! {
{
let mut buf = vec![];
let len = field.encode(&mut buf);
let mut b = &mut buf.as_slice();
let decoded: super::#type_ident = Decodable::decode(b).unwrap();
assert_eq!(field, decoded, "maybe_generate_tests::rlp");
// ensure buffer is fully consumed by decode
assert!(b.is_empty(), "buffer was not consumed entirely");
}
});
additional_tests.push(quote! {
#[test]
fn malformed_rlp_header_check() {
use rand::RngCore;
// get random instance of type
let mut raw = vec![0u8; 1024];
rand::rng().fill_bytes(&mut raw);
let mut unstructured = arbitrary::Unstructured::new(&raw[..]);
let val: Result<super::#type_ident, _> = arbitrary::Arbitrary::arbitrary(&mut unstructured);
if val.is_err() {
// this can be flaky sometimes due to not enough data for iterator based types like Vec
return
}
let val = val.unwrap();
let mut buf = vec![];
let len = val.encode(&mut buf);
// malformed rlp-header check
let mut decode_buf = &mut buf.as_slice();
let mut header = alloy_rlp::Header::decode(decode_buf).expect("failed to decode header");
header.payload_length+=1;
let mut b = Vec::with_capacity(decode_buf.len());
header.encode(&mut b);
b.extend_from_slice(decode_buf);
let res: Result<super::#type_ident, _> = Decodable::decode(&mut b.as_ref());
assert!(res.is_err(), "malformed header was decoded");
}
});
} else if let Ok(num) = arg.to_string().parse() {
default_cases = num;
}
}
let mut tests = TokenStream2::default();
if !roundtrips.is_empty() {
tests = quote! {
#[expect(non_snake_case)]
#[cfg(test)]
mod #mod_tests {
#(#traits)*
use proptest_arbitrary_interop::arb;
#[test]
fn proptest() {
let mut config = proptest::prelude::ProptestConfig::with_cases(#default_cases as u32);
proptest::proptest!(config, |(field in arb::<super::#type_ident>())| {
#(#roundtrips)*
});
}
#(#additional_tests)*
}
}
}
tests
}
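The roundtrip property that `maybe_generate_tests` emits (for both the `compact` and `rlp` arms) can be sketched with a toy codec. This is a hand-written stand-in, not the generated code: `encode`/`decode` here use a hypothetical one-byte length prefix in place of `Compact`/RLP, but the assertions mirror what the generated proptest checks, including the buffer-fully-consumed check in the rlp arm.

```rust
// Toy length-prefixed codec standing in for `Compact`/RLP (hypothetical scheme).
fn encode(value: &[u8], buf: &mut Vec<u8>) -> usize {
    buf.push(value.len() as u8); // 1-byte length prefix
    buf.extend_from_slice(value);
    buf.len()
}

fn decode(buf: &[u8]) -> (Vec<u8>, &[u8]) {
    let len = buf[0] as usize;
    (buf[1..1 + len].to_vec(), &buf[1 + len..])
}

/// The property every generated roundtrip test asserts:
/// decode(encode(x)) == x, and the buffer is fully consumed.
fn roundtrip_holds(value: &[u8]) -> bool {
    let mut buf = vec![];
    encode(value, &mut buf);
    let (decoded, rest) = decode(&buf);
    decoded == value && rest.is_empty()
}
```

The generated tests run this property over arbitrary instances produced by `proptest_arbitrary_interop::arb`, rather than over raw byte slices as here.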
// File: crates/storage/codecs/derive/src/compact/structs.rs
use super::*;
#[derive(Debug)]
pub struct StructHandler<'a> {
fields_iterator: std::iter::Peekable<std::slice::Iter<'a, FieldTypes>>,
lines: Vec<TokenStream2>,
pub is_wrapper: bool,
}
impl<'a> StructHandler<'a> {
pub fn new(fields: &'a FieldList) -> Self {
StructHandler {
lines: vec![],
fields_iterator: fields.iter().peekable(),
is_wrapper: false,
}
}
pub fn next_field(&mut self) -> Option<&'a FieldTypes> {
self.fields_iterator.next()
}
pub fn generate_to(mut self) -> Vec<TokenStream2> {
while let Some(field) = self.next_field() {
match field {
FieldTypes::EnumVariant(_) | FieldTypes::EnumUnnamedField(_) => unreachable!(),
FieldTypes::StructField(field_descriptor) => self.to(field_descriptor),
}
}
self.lines
}
pub fn generate_from(&mut self, known_types: &[&str]) -> Vec<TokenStream2> {
while let Some(field) = self.next_field() {
match field {
FieldTypes::EnumVariant(_) | FieldTypes::EnumUnnamedField(_) => unreachable!(),
FieldTypes::StructField(field_descriptor) => {
self.from(field_descriptor, known_types)
}
}
}
self.lines.clone()
}
/// Generates `to_compact` code for a struct field.
fn to(&mut self, field_descriptor: &StructFieldDescriptor) {
let StructFieldDescriptor { name, ftype, is_compact, use_alt_impl, is_reference: _ } =
field_descriptor;
let to_compact_ident = if *use_alt_impl {
format_ident!("specialized_to_compact")
} else {
format_ident!("to_compact")
};
// Should only happen on wrapper structs like `Struct(pub Field)`
if name.is_empty() {
self.is_wrapper = true;
self.lines.push(quote! {
let _len = self.0.#to_compact_ident(&mut buffer);
});
if is_flag_type(ftype) {
self.lines.push(quote! {
flags.set_placeholder_len(_len as u8);
})
}
return
}
let name = format_ident!("{name}");
let set_len_method = format_ident!("set_{name}_len");
let len = format_ident!("{name}_len");
// B256 with #[maybe_zero] attribute for example
if *is_compact && !is_flag_type(ftype) {
let itype = format_ident!("{ftype}");
let set_bool_method = format_ident!("set_{name}");
self.lines.push(quote! {
if self.#name != #itype::zero() {
flags.#set_bool_method(true);
self.#name.#to_compact_ident(&mut buffer);
};
});
} else {
self.lines.push(quote! {
let #len = self.#name.#to_compact_ident(&mut buffer);
});
}
if is_flag_type(ftype) {
self.lines.push(quote! {
flags.#set_len_method(#len as u8);
})
}
}
/// Generates `from_compact` code for a struct field.
fn from(&mut self, field_descriptor: &StructFieldDescriptor, known_types: &[&str]) {
let StructFieldDescriptor { name, ftype, is_compact, use_alt_impl, .. } = field_descriptor;
let (name, len) = if name.is_empty() {
self.is_wrapper = true;
// Should only happen on wrapper structs like `Struct(pub Field)`
(format_ident!("placeholder"), format_ident!("placeholder_len"))
} else {
(format_ident!("{name}"), format_ident!("{name}_len"))
};
let from_compact_ident = if *use_alt_impl {
format_ident!("specialized_from_compact")
} else {
format_ident!("from_compact")
};
// ! Be careful before changing the following assert ! Especially if the type does not
// implement proptest tests.
//
// The limitation of the last placed field applies to fields with potentially large sizes,
// like the `Transaction` field. These fields may have inner "Bytes" fields, sometimes even
// nested further, making it impossible to check with `proc_macro`. The goal is to place
// such fields as the last ones, so we don't need to store their length separately. Instead,
// we can simply read them until the end of the buffer.
//
// However, certain types don't require this approach because they don't contain inner
// "Bytes" fields. For these types, we can add them to a "known_types" list so it doesn't
// trigger this error. These types can handle their own deserialization without
// relying on the length provided by the higher-level deserializer. For example, a
// type "T" with two "u64" fields doesn't need the length parameter from
// "T::from_compact(buf, len)" since the length of "u64" is known internally (bitpacked).
assert!(
known_types.contains(&ftype.as_str()) ||
is_flag_type(ftype) ||
self.fields_iterator.peek().is_none(),
"`{ftype}` field should be placed as the last one since it's not known.
If it's an alias type (which are not supported by proc_macro), be sure to add it to either `known_types` or `get_bit_size` lists in the derive crate."
);
if ftype == "Bytes" {
self.lines.push(quote! {
let mut #name = Bytes::new();
(#name, buf) = Bytes::from_compact(buf, buf.len() as usize);
})
} else {
let ident_type = format_ident!("{ftype}");
if !is_flag_type(ftype) {
// It's a type that handles its own length requirements. (B256, Custom, ...)
self.lines.push(quote! {
let (#name, new_buf) = #ident_type::#from_compact_ident(buf, buf.len());
})
} else if *is_compact {
self.lines.push(quote! {
let (#name, new_buf) = #ident_type::#from_compact_ident(buf, flags.#len() as usize);
});
} else {
todo!()
}
self.lines.push(quote! {
buf = new_buf;
});
}
}
}
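The assert in `from` above enforces the last-field rule explained in its comment: a field whose encoded length is neither stored in the flags nor known internally can only be decoded by consuming the rest of the buffer. A toy sketch (hypothetical `Record` layout, not the generated code) shows why that only works for the final field:

```rust
// Why a `Bytes`-like field must come last: it stores no length of its own
// and simply consumes whatever remains in the buffer.
struct Record {
    fixed: u8,     // self-describing; its size is known statically
    tail: Vec<u8>, // "Bytes"-like; no length stored, reads to the end
}

fn from_compact(buf: &[u8]) -> Record {
    let fixed = buf[0];
    // Because `tail` is last, the decoder can take everything that remains
    // instead of spending flag bits on its length. A second field after
    // `tail` would be unrecoverable: we'd have no idea where `tail` ends.
    let tail = buf[1..].to_vec();
    Record { fixed, tail }
}
```

Types in the `known_types` list are exempt because, like `fixed` here, they can recover their own extent from their contents.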
// File: crates/storage/codecs/derive/src/compact/enums.rs
use super::*;
#[derive(Debug)]
pub struct EnumHandler<'a> {
current_variant_index: u8,
fields_iterator: std::iter::Peekable<std::slice::Iter<'a, FieldTypes>>,
enum_lines: Vec<TokenStream2>,
}
impl<'a> EnumHandler<'a> {
pub fn new(fields: &'a FieldList) -> Self {
EnumHandler {
current_variant_index: 0u8,
enum_lines: vec![],
fields_iterator: fields.iter().peekable(),
}
}
pub fn next_field(&mut self) -> Option<&'a FieldTypes> {
self.fields_iterator.next()
}
pub fn generate_to(mut self, ident: &Ident) -> Vec<TokenStream2> {
while let Some(field) = self.next_field() {
match field {
// The following method will advance the
// `fields_iterator` by itself and stop right before the next variant.
FieldTypes::EnumVariant(name) => self.to(name, ident),
FieldTypes::EnumUnnamedField(_) | FieldTypes::StructField(_) => unreachable!(),
}
}
self.enum_lines
}
pub fn generate_from(mut self, ident: &Ident) -> Vec<TokenStream2> {
while let Some(field) = self.next_field() {
match field {
// The following method will advance the
// `fields_iterator` by itself and stop right before the next variant.
FieldTypes::EnumVariant(name) => self.from(name, ident),
FieldTypes::EnumUnnamedField(_) | FieldTypes::StructField(_) => unreachable!(),
}
}
self.enum_lines
}
/// Generates `from_compact` code for an enum variant.
///
/// `fields_iterator` might look something like \[`VariantUnit`, `VariantUnnamedField`, Field,
/// `VariantUnit`...\].
pub fn from(&mut self, variant_name: &str, ident: &Ident) {
let variant_name = format_ident!("{variant_name}");
let current_variant_index = self.current_variant_index;
if let Some(next_field) = self.fields_iterator.peek() {
match next_field {
FieldTypes::EnumUnnamedField((next_ftype, use_alt_impl)) => {
// This variant is of the type `EnumVariant(UnnamedField)`
let field_type = format_ident!("{next_ftype}");
let from_compact_ident = if *use_alt_impl {
format_ident!("specialized_from_compact")
} else {
format_ident!("from_compact")
};
// Unnamed type
self.enum_lines.push(quote! {
#current_variant_index => {
let (inner, new_buf) = #field_type::#from_compact_ident(buf, buf.len());
buf = new_buf;
#ident::#variant_name(inner)
}
});
self.fields_iterator.next();
}
FieldTypes::EnumVariant(_) => self.enum_lines.push(quote! {
#current_variant_index => #ident::#variant_name,
}),
FieldTypes::StructField(_) => unreachable!(),
};
} else {
// This variant has no fields: Unit type
self.enum_lines.push(quote! {
#current_variant_index => #ident::#variant_name,
});
}
self.current_variant_index += 1;
}
/// Generates `to_compact` code for an enum variant.
///
    /// `fields_iterator` might look something like \[`VariantUnit`, `VariantUnnamedField`, Field,
    /// `VariantUnit`...\].
pub fn to(&mut self, variant_name: &str, ident: &Ident) {
let variant_name = format_ident!("{variant_name}");
let current_variant_index = self.current_variant_index;
if let Some(next_field) = self.fields_iterator.peek() {
match next_field {
FieldTypes::EnumUnnamedField((_, use_alt_impl)) => {
let to_compact_ident = if *use_alt_impl {
format_ident!("specialized_to_compact")
} else {
format_ident!("to_compact")
};
// Unnamed type
self.enum_lines.push(quote! {
#ident::#variant_name(field) => {
field.#to_compact_ident(&mut buffer);
#current_variant_index
},
});
self.fields_iterator.next();
}
FieldTypes::EnumVariant(_) => self.enum_lines.push(quote! {
#ident::#variant_name => #current_variant_index,
}),
FieldTypes::StructField(_) => unreachable!(),
};
} else {
// This variant has no fields: Unit type
self.enum_lines.push(quote! {
#ident::#variant_name => #current_variant_index,
});
}
self.current_variant_index += 1;
}
}
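A hand-written equivalent of what `EnumHandler` generates may make the variant-index scheme concrete: each variant is assigned a sequential index (stored in the bitflag struct's `variant: B8` byte), unnamed payloads are written to the side buffer, and decoding matches on the index with an unreachable fallback. The toy `Kind` enum below is hypothetical; the generated code additionally threads the buffer through `Compact` calls.

```rust
// Toy enum mirroring the shape of the derived `to_compact`/`from_compact`.
#[derive(Debug, PartialEq)]
enum Kind {
    Unit,        // variant index 0, no payload
    Payload(u8), // variant index 1, payload written to the side buffer
}

fn to_compact(k: &Kind, buf: &mut Vec<u8>) -> u8 {
    match k {
        Kind::Unit => 0,
        Kind::Payload(v) => {
            buf.push(*v); // payload bytes go in the buffer...
            1             // ...the match arm yields the variant index
        }
    }
}

fn from_compact(variant: u8, buf: &[u8]) -> Kind {
    match variant {
        0 => Kind::Unit,
        1 => Kind::Payload(buf[0]),
        _ => unreachable!(), // same fallback the generated match uses
    }
}
```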
// File: crates/storage/codecs/derive/src/compact/flags.rs
use super::*;
use syn::Attribute;
/// Generates the flag fieldset struct that is going to be used to store the length of fields and
/// their potential presence.
pub(crate) fn generate_flag_struct(
ident: &Ident,
attrs: &[Attribute],
has_lifetime: bool,
fields: &FieldList,
is_zstd: bool,
) -> TokenStream2 {
let is_enum = fields.iter().any(|field| matches!(field, FieldTypes::EnumVariant(_)));
let flags_ident = format_ident!("{ident}Flags");
let mod_flags_ident = format_ident!("{ident}_flags");
let reth_codecs = parse_reth_codecs_path(attrs).unwrap();
let mut field_flags = vec![];
let total_bits = if is_enum {
field_flags.push(quote! {
pub variant: B8,
});
8
} else {
build_struct_field_flags(
fields
.iter()
.filter_map(|f| {
if let FieldTypes::StructField(f) = f {
return Some(f)
}
None
})
.collect::<Vec<_>>(),
&mut field_flags,
is_zstd,
)
};
if total_bits == 0 {
return placeholder_flag_struct(ident, &flags_ident)
}
let (total_bytes, unused_bits) = pad_flag_struct(total_bits, &mut field_flags);
    // Emits one `buf.get_u8()` read per byte of the flag struct.
let readable_bytes = vec![
quote! {
buf.get_u8(),
};
total_bytes.into()
];
let docs = format!(
"Fieldset that facilitates compacting the parent type. Used bytes: {total_bytes} | Unused bits: {unused_bits}"
);
let bitflag_encoded_bytes = format!("Used bytes by [`{flags_ident}`]");
let bitflag_unused_bits = format!("Unused bits for new fields by [`{flags_ident}`]");
let impl_bitflag_encoded_bytes = if has_lifetime {
quote! {
impl<'a> #ident<'a> {
#[doc = #bitflag_encoded_bytes]
pub const fn bitflag_encoded_bytes() -> usize {
#total_bytes as usize
}
#[doc = #bitflag_unused_bits]
pub const fn bitflag_unused_bits() -> usize {
#unused_bits as usize
}
}
}
} else {
quote! {
impl #ident {
#[doc = #bitflag_encoded_bytes]
pub const fn bitflag_encoded_bytes() -> usize {
#total_bytes as usize
}
#[doc = #bitflag_unused_bits]
pub const fn bitflag_unused_bits() -> usize {
#unused_bits as usize
}
}
}
};
// Generate the flag struct.
quote! {
#impl_bitflag_encoded_bytes
pub use #mod_flags_ident::#flags_ident;
#[expect(non_snake_case)]
mod #mod_flags_ident {
use #reth_codecs::__private::Buf;
use #reth_codecs::__private::modular_bitfield;
use #reth_codecs::__private::modular_bitfield::prelude::*;
#[doc = #docs]
#[bitfield]
#[derive(Clone, Copy, Debug, Default)]
pub struct #flags_ident {
#(#field_flags)*
}
impl #flags_ident {
/// Deserializes this fieldset and returns it, alongside the original slice in an advanced position.
pub fn from(mut buf: &[u8]) -> (Self, &[u8]) {
(#flags_ident::from_bytes([
#(#readable_bytes)*
]), buf)
}
}
}
}
}
/// Builds the flag struct for the user struct fields.
///
/// Returns the total number of bits necessary.
fn build_struct_field_flags(
fields: Vec<&StructFieldDescriptor>,
field_flags: &mut Vec<TokenStream2>,
is_zstd: bool,
) -> u8 {
let mut total_bits = 0;
// Find out the adequate bit size for the length of each field, if applicable.
for field in fields {
let StructFieldDescriptor { name, ftype, is_compact, use_alt_impl: _, is_reference: _ } =
field;
        // This happens when dealing with a wrapper struct, e.g. `Struct(pub U256)`.
let name = if name.is_empty() { "placeholder" } else { name };
if *is_compact {
if is_flag_type(ftype) {
let name = format_ident!("{name}_len");
let bitsize = get_bit_size(ftype);
let bsize = format_ident!("B{bitsize}");
total_bits += bitsize;
field_flags.push(quote! {
pub #name: #bsize ,
});
} else {
let name = format_ident!("{name}");
field_flags.push(quote! {
pub #name: bool ,
});
total_bits += 1;
}
}
}
if is_zstd {
field_flags.push(quote! {
pub __zstd: B1,
});
total_bits += 1;
}
total_bits
}
/// Total number of bits should be divisible by 8, so we might need to pad the struct with an unused
/// skipped field.
///
/// Returns the total number of bytes used by the flags struct and how many unused bits.
fn pad_flag_struct(total_bits: u8, field_flags: &mut Vec<TokenStream2>) -> (u8, u8) {
let remaining = 8 - total_bits % 8;
if remaining == 8 {
(total_bits / 8, 0)
} else {
let bsize = format_ident!("B{remaining}");
field_flags.push(quote! {
#[skip]
unused: #bsize ,
});
((total_bits + remaining) / 8, remaining)
}
}
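The padding arithmetic above is small enough to restate as a standalone function for illustration (same formula, hypothetical name `padded`):

```rust
/// Same arithmetic as `pad_flag_struct`: round the bit count up to whole
/// bytes and report how many padding bits go unused.
fn padded(total_bits: u8) -> (u8, u8) {
    let remaining = 8 - total_bits % 8;
    if remaining == 8 {
        // Already byte-aligned: no padding field needed.
        (total_bits / 8, 0)
    } else {
        // Pad to the next byte boundary with a `#[skip]`-ed field.
        ((total_bits + remaining) / 8, remaining)
    }
}
```

Those unused bits are exactly what `bitflag_unused_bits()` reports, and what the `validate_bitflag_backwards_compat!` test macro checks to guard forward extensibility.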
/// Placeholder struct for when there are no bitfields to be added.
fn placeholder_flag_struct(ident: &Ident, flags: &Ident) -> TokenStream2 {
let bitflag_encoded_bytes = format!("Used bytes by [`{flags}`]");
let bitflag_unused_bits = format!("Unused bits for new fields by [`{flags}`]");
quote! {
impl #ident {
#[doc = #bitflag_encoded_bytes]
pub const fn bitflag_encoded_bytes() -> usize {
0
}
#[doc = #bitflag_unused_bits]
pub const fn bitflag_unused_bits() -> usize {
0
}
}
/// Placeholder struct for when there is no need for a fieldset. Doesn't actually write or read any data.
#[derive(Debug, Default)]
pub struct #flags {
}
impl #flags {
/// Placeholder: does not read any value.
pub fn from(mut buf: &[u8]) -> (Self, &[u8]) {
(#flags::default(), buf)
}
/// Placeholder: returns an empty array.
pub fn into_bytes(self) -> [u8; 0] {
[]
}
}
}
}
// File: crates/storage/codecs/derive/src/compact/generator.rs
//! Code generator for the `Compact` trait.
use super::*;
use crate::ZstdConfig;
use convert_case::{Case, Casing};
use syn::{Attribute, LitStr};
/// Generates code to implement the `Compact` trait for a data type.
pub fn generate_from_to(
ident: &Ident,
attrs: &[Attribute],
has_lifetime: bool,
fields: &FieldList,
zstd: Option<ZstdConfig>,
) -> TokenStream2 {
let flags = format_ident!("{ident}Flags");
let reth_codecs = parse_reth_codecs_path(attrs).unwrap();
let to_compact = generate_to_compact(fields, ident, zstd.clone(), &reth_codecs);
let from_compact = generate_from_compact(fields, ident, zstd);
let snake_case_ident = ident.to_string().to_case(Case::Snake);
let fuzz = format_ident!("fuzz_test_{snake_case_ident}");
let test = format_ident!("fuzz_{snake_case_ident}");
let lifetime = if has_lifetime {
quote! { 'a }
} else {
quote! {}
};
let impl_compact = if has_lifetime {
quote! {
impl<#lifetime> #reth_codecs::Compact for #ident<#lifetime>
}
} else {
quote! {
impl #reth_codecs::Compact for #ident
}
};
let has_ref_fields = fields.iter().any(|field| {
if let FieldTypes::StructField(field) = field {
field.is_reference
} else {
false
}
});
let fn_from_compact = if has_ref_fields {
quote! { unimplemented!("from_compact not supported with ref structs") }
} else {
quote! {
let (flags, mut buf) = #flags::from(buf);
#from_compact
}
};
let fuzz_tests = if has_lifetime {
quote! {}
} else {
quote! {
#[cfg(test)]
#[expect(dead_code)]
#[test_fuzz::test_fuzz]
fn #fuzz(obj: #ident) {
use #reth_codecs::Compact;
let mut buf = vec![];
let len = obj.clone().to_compact(&mut buf);
let (same_obj, buf) = #ident::from_compact(buf.as_ref(), len);
assert_eq!(obj, same_obj);
}
#[test]
#[expect(missing_docs)]
pub fn #test() {
#fuzz(#ident::default())
}
}
};
// Build function
quote! {
#fuzz_tests
#impl_compact {
fn to_compact<B>(&self, buf: &mut B) -> usize where B: #reth_codecs::__private::bytes::BufMut + AsMut<[u8]> {
let mut flags = #flags::default();
let mut total_length = 0;
#(#to_compact)*
total_length
}
fn from_compact(mut buf: &[u8], len: usize) -> (Self, &[u8]) {
#fn_from_compact
}
}
}
}
/// Generates code to implement the `Compact` trait method `from_compact`.
fn generate_from_compact(
fields: &FieldList,
ident: &Ident,
zstd: Option<ZstdConfig>,
) -> TokenStream2 {
let mut lines = vec![];
let mut known_types = vec![
"B256",
"Address",
"Bloom",
"Vec",
"TxHash",
"BlockHash",
"FixedBytes",
"Cow",
"TxSeismicElements",
];
// Only types without `Bytes` should be added here. It's currently manually added, since
// it's hard to figure out with derive_macro which types have Bytes fields.
//
// This removes the requirement of the field to be placed last in the struct.
known_types.extend_from_slice(&["TxKind", "AccessList", "Signature", "CheckpointBlockRange"]);
let is_enum = fields.iter().any(|field| matches!(field, FieldTypes::EnumVariant(_)));
if is_enum {
let enum_lines = EnumHandler::new(fields).generate_from(ident);
// Builds the object instantiation.
lines.push(quote! {
let obj = match flags.variant() {
#(#enum_lines)*
_ => unreachable!()
};
});
} else {
let mut struct_handler = StructHandler::new(fields);
lines.append(&mut struct_handler.generate_from(known_types.as_slice()));
// Builds the object instantiation.
if struct_handler.is_wrapper {
lines.push(quote! {
let obj = #ident(placeholder);
});
} else {
let fields = fields.iter().filter_map(|field| {
if let FieldTypes::StructField(field) = field {
let ident = format_ident!("{}", field.name);
return Some(quote! {
#ident: #ident,
})
}
None
});
lines.push(quote! {
let obj = #ident {
#(#fields)*
};
});
}
}
// If the type has compression support, then check the `__zstd` flag. Otherwise, use the default
// code branch. However, even if it's a type with compression support, not all values are
// to be compressed (thus the zstd flag). Ideally only the bigger ones.
if let Some(zstd) = zstd {
let decompressor = zstd.decompressor;
quote! {
if flags.__zstd() != 0 {
#decompressor.with(|decompressor| {
let decompressor = &mut decompressor.borrow_mut();
let decompressed = decompressor.decompress(buf);
let mut original_buf = buf;
let mut buf: &[u8] = decompressed;
#(#lines)*
(obj, original_buf)
})
} else {
#(#lines)*
(obj, buf)
}
}
} else {
quote! {
#(#lines)*
(obj, buf)
}
}
}
/// Generates code to implement the `Compact` trait method `to_compact`.
fn generate_to_compact(
fields: &FieldList,
ident: &Ident,
zstd: Option<ZstdConfig>,
reth_codecs: &syn::Path,
) -> Vec<TokenStream2> {
let mut lines = vec![quote! {
let mut buffer = #reth_codecs::__private::bytes::BytesMut::new();
}];
let is_enum = fields.iter().any(|field| matches!(field, FieldTypes::EnumVariant(_)));
if is_enum {
let enum_lines = EnumHandler::new(fields).generate_to(ident);
lines.push(quote! {
flags.set_variant(match self {
#(#enum_lines)*
});
})
} else {
lines.append(&mut StructHandler::new(fields).generate_to());
}
// Just because a type supports compression, doesn't mean all its values are to be compressed.
    // We skip the smaller ones, and thus require a `__zstd` flag to specify whether this value is
    // compressed or not.
if zstd.is_some() {
lines.push(quote! {
let mut zstd = buffer.len() > 7;
if zstd {
flags.set___zstd(1);
}
});
}
// Places the flag bits.
lines.push(quote! {
let flags = flags.into_bytes();
total_length += flags.len() + buffer.len();
buf.put_slice(&flags);
});
if let Some(zstd) = zstd {
let compressor = zstd.compressor;
lines.push(quote! {
if zstd {
#compressor.with(|compressor| {
let mut compressor = compressor.borrow_mut();
let compressed = compressor.compress(&buffer).expect("Failed to compress.");
buf.put(compressed.as_slice());
});
} else {
buf.put(buffer);
}
});
} else {
lines.push(quote! {
buf.put(buffer);
})
}
lines
}
/// Function to extract the crate path from `reth_codecs(crate = "...")` attribute.
pub(crate) fn parse_reth_codecs_path(attrs: &[Attribute]) -> syn::Result<syn::Path> {
let mut reth_codecs_path: syn::Path = syn::parse_quote!(reth_codecs);
for attr in attrs {
if attr.path().is_ident("reth_codecs") {
attr.parse_nested_meta(|meta| {
if meta.path.is_ident("crate") {
let value = meta.value()?;
let lit: LitStr = value.parse()?;
reth_codecs_path = syn::parse_str(&lit.value())?;
Ok(())
} else {
Err(meta.error("unsupported attribute"))
}
})?;
}
}
Ok(reth_codecs_path)
}
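The overall wire layout that `generate_to_compact` produces — flag bytes first, then the side buffer holding the fields' variable-length encodings — can be sketched end to end with a single `u64` field. This is a simplified, hypothetical layout (one flag byte whose low nibble holds the field length), not the generated code, which packs multiple length fields via `modular_bitfield`:

```rust
// Toy sketch of the flags + buffer layout: a `u64` is trimmed of leading
// zero bytes and its byte count (0..=8, fitting in 4 bits) is recorded in
// the low nibble of a one-byte flag struct.
fn to_compact_u64(value: u64, out: &mut Vec<u8>) {
    let bytes = value.to_be_bytes();
    let skip = bytes.iter().take_while(|b| **b == 0).count();
    let len = (8 - skip) as u8; // 0 for value == 0
    out.push(len); // "flags" byte: low nibble = field length
    out.extend_from_slice(&bytes[skip..]); // side buffer appended after flags
}

fn from_compact_u64(buf: &[u8]) -> (u64, &[u8]) {
    let len = (buf[0] & 0x0F) as usize; // read length back out of the flags
    let mut bytes = [0u8; 8];
    bytes[8 - len..].copy_from_slice(&buf[1..1 + len]);
    (u64::from_be_bytes(bytes), &buf[1 + len..])
}
```

This also shows why `get_bit_size` gives `u64` four flag bits: the trimmed encoding is at most 8 bytes, so its length always fits.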
// File: crates/storage/codecs/derive/src/compact/mod.rs
use proc_macro::TokenStream;
use proc_macro2::{Ident, TokenStream as TokenStream2};
use quote::{format_ident, quote};
use syn::{Data, DeriveInput, Generics};
mod generator;
use generator::*;
mod enums;
use enums::*;
mod flags;
use flags::*;
mod structs;
use structs::*;
use crate::ZstdConfig;
// Helper Alias type
type FieldType = String;
/// `Compact` has alternative functions that can be used as a workaround for type
/// specialization of fixed sized types.
///
/// Example: `Vec<B256>` vs `Vec<U256>`. The first does not
/// require the len of the element, while the latter one does.
type UseAlternative = bool;
// Helper Alias type
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct StructFieldDescriptor {
name: String,
ftype: String,
is_compact: bool,
use_alt_impl: bool,
is_reference: bool,
}
// Helper Alias type
type FieldList = Vec<FieldTypes>;
#[derive(Debug, Clone, Eq, PartialEq)]
pub enum FieldTypes {
StructField(StructFieldDescriptor),
EnumVariant(String),
EnumUnnamedField((FieldType, UseAlternative)),
}
/// Derives the `Compact` trait and its from/to implementations.
pub fn derive(input: DeriveInput, zstd: Option<ZstdConfig>) -> TokenStream {
let mut output = quote! {};
let DeriveInput { ident, data, generics, attrs, .. } = input;
let has_lifetime = has_lifetime(&generics);
let fields = get_fields(&data);
output.extend(generate_flag_struct(&ident, &attrs, has_lifetime, &fields, zstd.is_some()));
output.extend(generate_from_to(&ident, &attrs, has_lifetime, &fields, zstd));
output.into()
}
pub fn has_lifetime(generics: &Generics) -> bool {
generics.lifetimes().next().is_some()
}
/// Given a list of fields on a struct, extract their fields and types.
pub fn get_fields(data: &Data) -> FieldList {
let mut fields = vec![];
match data {
Data::Struct(data) => match data.fields {
syn::Fields::Named(ref data_fields) => {
for field in &data_fields.named {
load_field(field, &mut fields, false);
}
assert_eq!(fields.len(), data_fields.named.len(), "get_fields");
}
syn::Fields::Unnamed(ref data_fields) => {
assert_eq!(
data_fields.unnamed.len(),
1,
"Compact only allows one unnamed field. Consider making it a struct."
);
load_field(&data_fields.unnamed[0], &mut fields, false);
}
syn::Fields::Unit => todo!(),
},
Data::Enum(data) => {
for variant in &data.variants {
fields.push(FieldTypes::EnumVariant(variant.ident.to_string()));
match &variant.fields {
syn::Fields::Named(_) => {
panic!(
"Not allowed to have enum variants with named fields. Make it a struct instead."
)
}
syn::Fields::Unnamed(data_fields) => {
assert_eq!(
data_fields.unnamed.len(),
1,
"Compact only allows one unnamed field. Consider making it a struct."
);
load_field(&data_fields.unnamed[0], &mut fields, true);
}
syn::Fields::Unit => (),
}
}
}
Data::Union(_) => todo!(),
}
fields
}
fn load_field(field: &syn::Field, fields: &mut FieldList, is_enum: bool) {
match field.ty {
syn::Type::Reference(ref reference) => match &*reference.elem {
syn::Type::Path(path) => {
load_field_from_segments(&path.path.segments, is_enum, fields, field)
}
_ => unimplemented!("{:?}", &field.ident),
},
syn::Type::Path(ref path) => {
load_field_from_segments(&path.path.segments, is_enum, fields, field)
}
_ => unimplemented!("{:?}", &field.ident),
}
}
fn load_field_from_segments(
segments: &syn::punctuated::Punctuated<syn::PathSegment, syn::token::PathSep>,
is_enum: bool,
fields: &mut Vec<FieldTypes>,
field: &syn::Field,
) {
if !segments.is_empty() {
let mut ftype = String::new();
let mut use_alt_impl: UseAlternative = false;
for (index, segment) in segments.iter().enumerate() {
ftype.push_str(&segment.ident.to_string());
if index < segments.len() - 1 {
ftype.push_str("::");
}
use_alt_impl = should_use_alt_impl(&ftype, segment);
}
if is_enum {
fields.push(FieldTypes::EnumUnnamedField((ftype, use_alt_impl)));
} else {
let should_compact = is_flag_type(&ftype) ||
field.attrs.iter().any(|attr| {
attr.path().segments.iter().any(|path| path.ident == "maybe_zero")
});
fields.push(FieldTypes::StructField(StructFieldDescriptor {
name: field.ident.as_ref().map(|i| i.to_string()).unwrap_or_default(),
ftype,
is_compact: should_compact,
use_alt_impl,
is_reference: matches!(field.ty, syn::Type::Reference(_)),
}));
}
}
}
/// Since there's no impl specialization in stable Rust yet, once we encounter a
/// `Vec`/`Option` we try to find out whether it wraps a fixed-size data type, e.g. `Vec<B256>`.
///
/// If so, we use an alternative impl to encode/decode its data.
fn should_use_alt_impl(ftype: &str, segment: &syn::PathSegment) -> bool {
if ftype == "Vec" || ftype == "Option" {
if let syn::PathArguments::AngleBracketed(ref args) = segment.arguments {
if let Some(syn::GenericArgument::Type(syn::Type::Path(arg_path))) = args.args.last() {
if let (Some(path), 1) =
(arg_path.path.segments.first(), arg_path.path.segments.len())
{
if [
"B256",
"Address",
"Bloom",
"TxHash",
"BlockHash",
"CompactPlaceholder",
]
.contains(&path.ident.to_string().as_str())
{
return true
}
}
}
}
}
false
}
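The alternative impls exist because fixed-size elements can be concatenated with no per-element length prefix and decoded back by chunking. A minimal std-only sketch of that idea (the names here are illustrative, not the real `specialized_*` API):

```rust
/// Encode a slice of fixed-size (32-byte) elements by plain concatenation:
/// no per-element length prefix is needed.
fn encode_fixed(items: &[[u8; 32]]) -> Vec<u8> {
    items.iter().flatten().copied().collect()
}

/// Decode by chunking the buffer back into 32-byte elements.
fn decode_fixed(buf: &[u8]) -> Vec<[u8; 32]> {
    buf.chunks_exact(32).map(|c| c.try_into().unwrap()).collect()
}
```

A variable-size element type like `U256` cannot be decoded this way, which is why its length has to be tracked per element.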
/// Given the field type in string format, returns the number of bits necessary to store its
/// maximum encoded length.
pub fn get_bit_size(ftype: &str) -> u8 {
match ftype {
"TransactionKind" | "TxKind" | "bool" | "Option" | "Signature" => 1,
"TxType" | "OpTxType" | "SeismicTxType" => 2,
"u64" | "BlockNumber" | "TxNumber" | "ChainId" | "NumTransactions" => 4,
"u128" => 5,
"U256" => 6,
"u8" => 1,
_ => 0,
}
}
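These bit counts follow from the maximum compact-encoded byte length of each type: the flag stores a length in `0..=max_len`, so a `u64` (at most 8 bytes) needs 4 bits, a `u128` (16 bytes) 5 bits, and a `U256` (32 bytes) 6 bits. A std-only sketch of that relationship (the helper name is illustrative):

```rust
/// Number of bits needed to represent every value in `0..=max_len`.
const fn bits_for_max_len(max_len: u32) -> u32 {
    32 - max_len.leading_zeros()
}
```

For example, `bits_for_max_len(8)` is 4, matching the `u64` arm above, and `bits_for_max_len(3)` is 2, matching the 2-bit `TxType` discriminants.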
/// Given the field type in a string format, checks if its type should be added to the
/// `StructFlags`.
pub fn is_flag_type(ftype: &str) -> bool {
get_bit_size(ftype) > 0
}
#[cfg(test)]
mod tests {
use super::*;
use similar_asserts::assert_eq;
use syn::parse2;
#[test]
fn compact_codec() {
let f_struct = quote! {
#[derive(Debug, PartialEq, Clone)]
pub struct TestStruct {
f_u64: u64,
f_u256: U256,
f_bool_t: bool,
f_bool_f: bool,
f_option_none: Option<U256>,
f_option_some: Option<B256>,
f_option_some_u64: Option<u64>,
f_vec_empty: Vec<U256>,
f_vec_some: Vec<Address>,
}
};
// Generate code that will impl the `Compact` trait.
let mut output = quote! {};
let DeriveInput { ident, data, attrs, .. } = parse2(f_struct).unwrap();
let fields = get_fields(&data);
output.extend(generate_flag_struct(&ident, &attrs, false, &fields, false));
output.extend(generate_from_to(&ident, &attrs, false, &fields, None));
// Expected output in a TokenStream format. Commas matter!
let should_output = quote! {
impl TestStruct {
#[doc = "Used bytes by [`TestStructFlags`]"]
pub const fn bitflag_encoded_bytes() -> usize {
2u8 as usize
}
#[doc = "Unused bits for new fields by [`TestStructFlags`]"]
pub const fn bitflag_unused_bits() -> usize {
1u8 as usize
}
}
pub use TestStruct_flags::TestStructFlags;
#[expect(non_snake_case)]
mod TestStruct_flags {
use reth_codecs::__private::Buf;
use reth_codecs::__private::modular_bitfield;
use reth_codecs::__private::modular_bitfield::prelude::*;
#[doc = "Fieldset that facilitates compacting the parent type. Used bytes: 2 | Unused bits: 1"]
#[bitfield]
#[derive(Clone, Copy, Debug, Default)]
pub struct TestStructFlags {
pub f_u64_len: B4,
pub f_u256_len: B6,
pub f_bool_t_len: B1,
pub f_bool_f_len: B1,
pub f_option_none_len: B1,
pub f_option_some_len: B1,
pub f_option_some_u64_len: B1,
#[skip]
unused: B1,
}
impl TestStructFlags {
#[doc = r" Deserializes this fieldset and returns it, alongside the original slice in an advanced position."]
pub fn from(mut buf: &[u8]) -> (Self, &[u8]) {
(
TestStructFlags::from_bytes([buf.get_u8(), buf.get_u8(),]),
buf
)
}
}
}
#[cfg(test)]
#[expect(dead_code)]
#[test_fuzz::test_fuzz]
fn fuzz_test_test_struct(obj: TestStruct) {
use reth_codecs::Compact;
let mut buf = vec![];
let len = obj.clone().to_compact(&mut buf);
let (same_obj, buf) = TestStruct::from_compact(buf.as_ref(), len);
assert_eq!(obj, same_obj);
}
#[test]
#[expect(missing_docs)]
pub fn fuzz_test_struct() {
fuzz_test_test_struct(TestStruct::default())
}
impl reth_codecs::Compact for TestStruct {
fn to_compact<B>(&self, buf: &mut B) -> usize where B: reth_codecs::__private::bytes::BufMut + AsMut<[u8]> {
let mut flags = TestStructFlags::default();
let mut total_length = 0;
let mut buffer = reth_codecs::__private::bytes::BytesMut::new();
let f_u64_len = self.f_u64.to_compact(&mut buffer);
flags.set_f_u64_len(f_u64_len as u8);
let f_u256_len = self.f_u256.to_compact(&mut buffer);
flags.set_f_u256_len(f_u256_len as u8);
let f_bool_t_len = self.f_bool_t.to_compact(&mut buffer);
flags.set_f_bool_t_len(f_bool_t_len as u8);
let f_bool_f_len = self.f_bool_f.to_compact(&mut buffer);
flags.set_f_bool_f_len(f_bool_f_len as u8);
let f_option_none_len = self.f_option_none.to_compact(&mut buffer);
flags.set_f_option_none_len(f_option_none_len as u8);
let f_option_some_len = self.f_option_some.specialized_to_compact(&mut buffer);
flags.set_f_option_some_len(f_option_some_len as u8);
let f_option_some_u64_len = self.f_option_some_u64.to_compact(&mut buffer);
flags.set_f_option_some_u64_len(f_option_some_u64_len as u8);
let f_vec_empty_len = self.f_vec_empty.to_compact(&mut buffer);
let f_vec_some_len = self.f_vec_some.specialized_to_compact(&mut buffer);
let flags = flags.into_bytes();
total_length += flags.len() + buffer.len();
buf.put_slice(&flags);
buf.put(buffer);
total_length
}
fn from_compact(mut buf: &[u8], len: usize) -> (Self, &[u8]) {
let (flags, mut buf) = TestStructFlags::from(buf);
let (f_u64, new_buf) = u64::from_compact(buf, flags.f_u64_len() as usize);
buf = new_buf;
let (f_u256, new_buf) = U256::from_compact(buf, flags.f_u256_len() as usize);
buf = new_buf;
let (f_bool_t, new_buf) = bool::from_compact(buf, flags.f_bool_t_len() as usize);
buf = new_buf;
let (f_bool_f, new_buf) = bool::from_compact(buf, flags.f_bool_f_len() as usize);
buf = new_buf;
let (f_option_none, new_buf) = Option::from_compact(buf, flags.f_option_none_len() as usize);
buf = new_buf;
let (f_option_some, new_buf) = Option::specialized_from_compact(buf, flags.f_option_some_len() as usize);
buf = new_buf;
let (f_option_some_u64, new_buf) = Option::from_compact(buf, flags.f_option_some_u64_len() as usize);
buf = new_buf;
let (f_vec_empty, new_buf) = Vec::from_compact(buf, buf.len());
buf = new_buf;
let (f_vec_some, new_buf) = Vec::specialized_from_compact(buf, buf.len());
buf = new_buf;
let obj = TestStruct {
f_u64: f_u64,
f_u256: f_u256,
f_bool_t: f_bool_t,
f_bool_f: f_bool_f,
f_option_none: f_option_none,
f_option_some: f_option_some,
f_option_some_u64: f_option_some_u64,
f_vec_empty: f_vec_empty,
f_vec_some: f_vec_some,
};
(obj, buf)
}
}
};
assert_eq!(
syn::parse2::<syn::File>(output).unwrap(),
syn::parse2::<syn::File>(should_output).unwrap()
);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/lockfile.rs | crates/storage/db/src/lockfile.rs | //! Storage lock utils.
#![cfg_attr(feature = "disable-lock", allow(dead_code))]
use reth_storage_errors::lockfile::StorageLockError;
use std::{
path::{Path, PathBuf},
process,
sync::{Arc, OnceLock},
};
use sysinfo::{ProcessRefreshKind, RefreshKind, System};
/// File lock name.
const LOCKFILE_NAME: &str = "lock";
/// A file lock for a storage directory to ensure exclusive read-write access across different
/// processes.
///
/// This lock stores the PID of the process holding it and is released (deleted) on a graceful
/// shutdown. On resuming from a crash, the stored PID helps verify that no other process holds the
/// lock.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct StorageLock(Arc<StorageLockInner>);
impl StorageLock {
/// Tries to acquire a write lock on the target directory, returning [`StorageLockError`] if
/// unsuccessful.
///
/// Note: In-process exclusivity is not in scope. If called from the same process (or another
/// with the same PID), it will succeed.
pub fn try_acquire(path: &Path) -> Result<Self, StorageLockError> {
#[cfg(feature = "disable-lock")]
{
let file_path = path.join(LOCKFILE_NAME);
// Too expensive for ef-tests to write/read lock to/from disk.
Ok(Self(Arc::new(StorageLockInner { file_path })))
}
#[cfg(not(feature = "disable-lock"))]
Self::try_acquire_file_lock(path)
}
/// Acquire a file write lock.
#[cfg(any(test, not(feature = "disable-lock")))]
fn try_acquire_file_lock(path: &Path) -> Result<Self, StorageLockError> {
let file_path = path.join(LOCKFILE_NAME);
if let Some(process_lock) = ProcessUID::parse(&file_path)? {
if process_lock.pid != (process::id() as usize) && process_lock.is_active() {
reth_tracing::tracing::error!(
target: "reth::db::lockfile",
path = ?file_path,
pid = process_lock.pid,
start_time = process_lock.start_time,
"Storage lock already taken."
);
return Err(StorageLockError::Taken(process_lock.pid))
}
}
Ok(Self(Arc::new(StorageLockInner::new(file_path)?)))
}
}
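The acquire path boils down to: read any existing PID from the lock file, bail if it belongs to a different live process, otherwise (re)write our own PID. A simplified std-only sketch of that protocol (hypothetical helper; it omits the `start_time` and liveness checks the real code performs via `sysinfo`):

```rust
use std::{fs, io, path::Path, process};

/// Returns `Ok(true)` if the lock was (re)acquired, `Ok(false)` if another
/// PID is recorded in the file. The real implementation additionally checks
/// whether that PID belongs to a still-running process.
fn try_acquire_demo(lock_file: &Path) -> io::Result<bool> {
    if let Ok(contents) = fs::read_to_string(lock_file) {
        if let Ok(pid) = contents.trim().parse::<u32>() {
            if pid != process::id() {
                // Another PID holds the lock; without a liveness check we
                // conservatively refuse to take it over.
                return Ok(false);
            }
        }
    }
    fs::write(lock_file, process::id().to_string())?;
    Ok(true)
}
```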
impl Drop for StorageLockInner {
fn drop(&mut self) {
// The lockfile is not created in disable-lock mode, so we don't need to delete it.
#[cfg(any(test, not(feature = "disable-lock")))]
{
let file_path = &self.file_path;
if file_path.exists() {
if let Ok(Some(process_uid)) = ProcessUID::parse(file_path) {
// Only remove if the lock file belongs to our process
if process_uid.pid == process::id() as usize {
if let Err(err) = reth_fs_util::remove_file(file_path) {
reth_tracing::tracing::error!(%err, "Failed to delete lock file");
}
} else {
reth_tracing::tracing::warn!(
"Lock file belongs to different process (PID: {}), not removing",
process_uid.pid
);
}
} else {
// If we can't parse the lock file, still try to remove it
// as it might be corrupted or from a previous run
if let Err(err) = reth_fs_util::remove_file(file_path) {
reth_tracing::tracing::error!(%err, "Failed to delete lock file");
}
}
}
}
}
}
#[derive(Debug, PartialEq, Eq)]
struct StorageLockInner {
file_path: PathBuf,
}
impl StorageLockInner {
/// Creates lock file and writes this process PID into it.
fn new(file_path: PathBuf) -> Result<Self, StorageLockError> {
// Create the directory if it doesn't exist
if let Some(parent) = file_path.parent() {
reth_fs_util::create_dir_all(parent).map_err(StorageLockError::other)?;
}
// Write this process unique identifier (pid & start_time) to file
ProcessUID::own().write(&file_path)?;
Ok(Self { file_path })
}
}
#[derive(Clone, Debug)]
struct ProcessUID {
/// OS process identifier
pid: usize,
/// Process start time
start_time: u64,
}
impl ProcessUID {
/// Creates [`Self`] for the provided PID.
fn new(pid: usize) -> Option<Self> {
let mut system = System::new();
let pid2 = sysinfo::Pid::from(pid);
system.refresh_processes_specifics(
sysinfo::ProcessesToUpdate::Some(&[pid2]),
true,
ProcessRefreshKind::nothing(),
);
system.process(pid2).map(|process| Self { pid, start_time: process.start_time() })
}
/// Creates [`Self`] from own process.
fn own() -> Self {
static CACHE: OnceLock<ProcessUID> = OnceLock::new();
CACHE.get_or_init(|| Self::new(process::id() as usize).expect("own process")).clone()
}
/// Parses [`Self`] from a file.
fn parse(path: &Path) -> Result<Option<Self>, StorageLockError> {
if path.exists() {
if let Ok(contents) = reth_fs_util::read_to_string(path) {
let mut lines = contents.lines();
if let (Some(Ok(pid)), Some(Ok(start_time))) = (
lines.next().map(str::trim).map(str::parse),
lines.next().map(str::trim).map(str::parse),
) {
return Ok(Some(Self { pid, start_time }));
}
}
}
Ok(None)
}
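The on-disk payload written by `write` below is just two lines, `<pid>\n<start_time>`. A self-contained sketch of the same parse (hypothetical free function, returning a tuple instead of `ProcessUID`):

```rust
/// Parse the two-line lock-file payload: `<pid>\n<start_time>`.
/// Returns `None` if either line is missing or not a number.
fn parse_lock_contents(contents: &str) -> Option<(usize, u64)> {
    let mut lines = contents.lines();
    let pid = lines.next()?.trim().parse().ok()?;
    let start_time = lines.next()?.trim().parse().ok()?;
    Some((pid, start_time))
}
```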
/// Whether a process with this `pid` and `start_time` exists.
fn is_active(&self) -> bool {
System::new_with_specifics(
RefreshKind::nothing().with_processes(ProcessRefreshKind::nothing()),
)
.process(self.pid.into())
.is_some_and(|p| p.start_time() == self.start_time)
}
/// Writes `pid` and `start_time` to a file.
fn write(&self, path: &Path) -> Result<(), StorageLockError> {
reth_fs_util::write(path, format!("{}\n{}", self.pid, self.start_time))
.map_err(StorageLockError::other)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::{Mutex, MutexGuard, OnceLock};
// helper to ensure some tests are run serially
static SERIAL: OnceLock<Mutex<()>> = OnceLock::new();
fn serial_lock() -> MutexGuard<'static, ()> {
SERIAL.get_or_init(|| Mutex::new(())).lock().unwrap()
}
#[test]
fn test_lock() {
let _guard = serial_lock();
let temp_dir = tempfile::tempdir().unwrap();
let lock = StorageLock::try_acquire_file_lock(temp_dir.path()).unwrap();
// Same process can re-acquire the lock
assert_eq!(Ok(lock.clone()), StorageLock::try_acquire_file_lock(temp_dir.path()));
// A lock of a non-existent PID can be acquired.
let lock_file = temp_dir.path().join(LOCKFILE_NAME);
let mut fake_pid = 1337;
let system = System::new_all();
while system.process(fake_pid.into()).is_some() {
fake_pid += 1;
}
ProcessUID { pid: fake_pid, start_time: u64::MAX }.write(&lock_file).unwrap();
assert_eq!(Ok(lock.clone()), StorageLock::try_acquire_file_lock(temp_dir.path()));
let mut pid_1 = ProcessUID::new(1).unwrap();
// If a parsed `ProcessUID` exists, the lock can NOT be acquired.
pid_1.write(&lock_file).unwrap();
assert_eq!(
Err(StorageLockError::Taken(1)),
StorageLock::try_acquire_file_lock(temp_dir.path())
);
// A lock of a different but existing PID can be acquired ONLY IF the start_time differs.
pid_1.start_time += 1;
pid_1.write(&lock_file).unwrap();
assert_eq!(Ok(lock), StorageLock::try_acquire_file_lock(temp_dir.path()));
}
#[test]
fn test_drop_lock() {
let _guard = serial_lock();
let temp_dir = tempfile::tempdir().unwrap();
let lock_file = temp_dir.path().join(LOCKFILE_NAME);
let lock = StorageLock::try_acquire_file_lock(temp_dir.path()).unwrap();
assert!(lock_file.exists());
drop(lock);
assert!(!lock_file.exists());
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/mdbx.rs | crates/storage/db/src/mdbx.rs | //! Helper functions for initializing and opening a database.
use crate::{is_database_empty, TableSet, Tables};
use eyre::Context;
use std::path::Path;
pub use crate::implementation::mdbx::*;
pub use reth_libmdbx::*;
/// Creates a new database at the specified path if it doesn't exist. Does NOT create tables. Check
/// [`init_db`].
pub fn create_db<P: AsRef<Path>>(path: P, args: DatabaseArguments) -> eyre::Result<DatabaseEnv> {
use crate::version::{check_db_version_file, create_db_version_file, DatabaseVersionError};
let rpath = path.as_ref();
if is_database_empty(rpath) {
reth_fs_util::create_dir_all(rpath)
.wrap_err_with(|| format!("Could not create database directory {}", rpath.display()))?;
create_db_version_file(rpath)?;
} else {
match check_db_version_file(rpath) {
Ok(_) => (),
Err(DatabaseVersionError::MissingFile) => create_db_version_file(rpath)?,
Err(err) => return Err(err.into()),
}
}
Ok(DatabaseEnv::open(rpath, DatabaseEnvKind::RW, args)?)
}
/// Opens up an existing database or creates a new one at the specified path. Creates tables defined
/// in [`Tables`] if necessary. Read/Write mode.
pub fn init_db<P: AsRef<Path>>(path: P, args: DatabaseArguments) -> eyre::Result<DatabaseEnv> {
init_db_for::<P, Tables>(path, args)
}
/// Opens up an existing database or creates a new one at the specified path. Creates tables defined
/// in the given [`TableSet`] if necessary. Read/Write mode.
pub fn init_db_for<P: AsRef<Path>, TS: TableSet>(
path: P,
args: DatabaseArguments,
) -> eyre::Result<DatabaseEnv> {
let client_version = args.client_version().clone();
let db = create_db(path, args)?;
db.create_tables_for::<TS>()?;
db.record_client_version(client_version)?;
Ok(db)
}
/// Opens up an existing database. Read-only mode. It doesn't create the database or its tables if
/// they are missing.
pub fn open_db_read_only(
path: impl AsRef<Path>,
args: DatabaseArguments,
) -> eyre::Result<DatabaseEnv> {
let path = path.as_ref();
DatabaseEnv::open(path, DatabaseEnvKind::RO, args)
.with_context(|| format!("Could not open database at path: {}", path.display()))
}
/// Opens up an existing database. Read/Write mode with `WriteMap` enabled. It doesn't create the
/// database or its tables if they are missing.
pub fn open_db(path: impl AsRef<Path>, args: DatabaseArguments) -> eyre::Result<DatabaseEnv> {
fn open(path: &Path, args: DatabaseArguments) -> eyre::Result<DatabaseEnv> {
let client_version = args.client_version().clone();
let db = DatabaseEnv::open(path, DatabaseEnvKind::RW, args)
.with_context(|| format!("Could not open database at path: {}", path.display()))?;
db.record_client_version(client_version)?;
Ok(db)
}
open(path.as_ref(), args)
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/lib.rs | crates/storage/db/src/lib.rs | //! MDBX implementation for reth's database abstraction layer.
//!
//! This crate is an implementation of `reth-db-api` for MDBX, as well as a few other common
//! database types.
//!
//! # Overview
//!
//! An overview of the current data model of reth can be found in the [`mod@tables`] module.
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
mod implementation;
pub mod lockfile;
#[cfg(feature = "mdbx")]
mod metrics;
pub mod static_file;
#[cfg(feature = "mdbx")]
mod utils;
pub mod version;
#[cfg(feature = "mdbx")]
pub mod mdbx;
pub use reth_storage_errors::db::{DatabaseError, DatabaseWriteOperation};
#[cfg(feature = "mdbx")]
pub use utils::is_database_empty;
#[cfg(feature = "mdbx")]
pub use mdbx::{create_db, init_db, open_db, open_db_read_only, DatabaseEnv, DatabaseEnvKind};
pub use models::ClientVersion;
pub use reth_db_api::*;
/// Collection of database test utilities
#[cfg(any(test, feature = "test-utils"))]
pub mod test_utils {
use super::*;
use crate::mdbx::DatabaseArguments;
use parking_lot::RwLock;
use reth_db_api::{
database::Database, database_metrics::DatabaseMetrics, models::ClientVersion,
};
use reth_fs_util;
use reth_libmdbx::MaxReadTransactionDuration;
use std::{
fmt::Formatter,
path::{Path, PathBuf},
sync::Arc,
};
use tempfile::TempDir;
/// Error during database open
pub const ERROR_DB_OPEN: &str = "could not open the database file";
/// Error during database creation
pub const ERROR_DB_CREATION: &str = "could not create the database file";
/// Error during database creation
pub const ERROR_STATIC_FILES_CREATION: &str = "could not create the static file path";
/// Error during table creation
pub const ERROR_TABLE_CREATION: &str = "could not create tables in the database";
/// Error during tempdir creation
pub const ERROR_TEMPDIR: &str = "could not create a temporary directory";
/// A database wrapper that deletes its directory when dropped.
pub struct TempDatabase<DB> {
db: Option<DB>,
path: PathBuf,
/// Executed right before a database transaction is created.
pre_tx_hook: RwLock<Box<dyn Fn() + Send + Sync>>,
/// Executed right after a database transaction is created.
post_tx_hook: RwLock<Box<dyn Fn() + Send + Sync>>,
}
impl<DB: std::fmt::Debug> std::fmt::Debug for TempDatabase<DB> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("TempDatabase").field("db", &self.db).field("path", &self.path).finish()
}
}
impl<DB> Drop for TempDatabase<DB> {
fn drop(&mut self) {
if let Some(db) = self.db.take() {
drop(db);
let _ = reth_fs_util::remove_dir_all(&self.path);
}
}
}
impl<DB> TempDatabase<DB> {
/// Create new [`TempDatabase`] instance.
pub fn new(db: DB, path: PathBuf) -> Self {
Self {
db: Some(db),
path,
pre_tx_hook: RwLock::new(Box::new(|| ())),
post_tx_hook: RwLock::new(Box::new(|| ())),
}
}
/// Returns the reference to inner db.
pub const fn db(&self) -> &DB {
self.db.as_ref().unwrap()
}
/// Returns the path to the database.
pub fn path(&self) -> &Path {
&self.path
}
/// Convert temp database into inner.
pub fn into_inner_db(mut self) -> DB {
self.db.take().unwrap() // take out db to avoid clean path in drop fn
}
/// Sets [`TempDatabase`] new pre transaction creation hook.
pub fn set_pre_transaction_hook(&self, hook: Box<dyn Fn() + Send + Sync>) {
let mut db_hook = self.pre_tx_hook.write();
*db_hook = hook;
}
/// Sets [`TempDatabase`] new post transaction creation hook.
pub fn set_post_transaction_hook(&self, hook: Box<dyn Fn() + Send + Sync>) {
let mut db_hook = self.post_tx_hook.write();
*db_hook = hook;
}
}
impl<DB: Database> Database for TempDatabase<DB> {
type TX = <DB as Database>::TX;
type TXMut = <DB as Database>::TXMut;
fn tx(&self) -> Result<Self::TX, DatabaseError> {
self.pre_tx_hook.read()();
let tx = self.db().tx()?;
self.post_tx_hook.read()();
Ok(tx)
}
fn tx_mut(&self) -> Result<Self::TXMut, DatabaseError> {
self.db().tx_mut()
}
}
impl<DB: DatabaseMetrics> DatabaseMetrics for TempDatabase<DB> {
fn report_metrics(&self) {
self.db().report_metrics()
}
}
/// Create `static_files` path for testing
#[track_caller]
pub fn create_test_static_files_dir() -> (TempDir, PathBuf) {
let temp_dir = TempDir::with_prefix("reth-test-static-").expect(ERROR_TEMPDIR);
let path = temp_dir.path().to_path_buf();
(temp_dir, path)
}
/// Get a temporary directory path to use for the database
pub fn tempdir_path() -> PathBuf {
let builder = tempfile::Builder::new().prefix("reth-test-").rand_bytes(8).tempdir();
builder.expect(ERROR_TEMPDIR).keep()
}
/// Create read/write database for testing
#[track_caller]
pub fn create_test_rw_db() -> Arc<TempDatabase<DatabaseEnv>> {
let path = tempdir_path();
let emsg = format!("{ERROR_DB_CREATION}: {path:?}");
let db = init_db(
&path,
DatabaseArguments::new(ClientVersion::default())
.with_max_read_transaction_duration(Some(MaxReadTransactionDuration::Unbounded)),
)
.expect(&emsg);
Arc::new(TempDatabase::new(db, path))
}
/// Create read/write database for testing
#[track_caller]
pub fn create_test_rw_db_with_path<P: AsRef<Path>>(path: P) -> Arc<TempDatabase<DatabaseEnv>> {
let path = path.as_ref().to_path_buf();
let db = init_db(
path.as_path(),
DatabaseArguments::new(ClientVersion::default())
.with_max_read_transaction_duration(Some(MaxReadTransactionDuration::Unbounded)),
)
.expect(ERROR_DB_CREATION);
Arc::new(TempDatabase::new(db, path))
}
/// Create read only database for testing
#[track_caller]
pub fn create_test_ro_db() -> Arc<TempDatabase<DatabaseEnv>> {
let args = DatabaseArguments::new(ClientVersion::default())
.with_max_read_transaction_duration(Some(MaxReadTransactionDuration::Unbounded));
let path = tempdir_path();
{
init_db(path.as_path(), args.clone()).expect(ERROR_DB_CREATION);
}
let db = open_db_read_only(path.as_path(), args).expect(ERROR_DB_OPEN);
Arc::new(TempDatabase::new(db, path))
}
}
#[cfg(test)]
mod tests {
use crate::{
init_db,
mdbx::DatabaseArguments,
open_db, tables,
version::{db_version_file_path, DatabaseVersionError},
};
use assert_matches::assert_matches;
use reth_db_api::{
cursor::DbCursorRO, database::Database, models::ClientVersion, transaction::DbTx,
};
use reth_libmdbx::MaxReadTransactionDuration;
use std::time::Duration;
use tempfile::tempdir;
#[test]
fn db_version() {
let path = tempdir().unwrap();
let args = DatabaseArguments::new(ClientVersion::default())
.with_max_read_transaction_duration(Some(MaxReadTransactionDuration::Unbounded));
// Database is empty
{
let db = init_db(&path, args.clone());
assert_matches!(db, Ok(_));
}
// Database is not empty, current version is the same as in the file
{
let db = init_db(&path, args.clone());
assert_matches!(db, Ok(_));
}
// Database is not empty, version file is malformed
{
reth_fs_util::write(path.path().join(db_version_file_path(&path)), "invalid-version")
.unwrap();
let db = init_db(&path, args.clone());
assert!(db.is_err());
assert_matches!(
db.unwrap_err().downcast_ref::<DatabaseVersionError>(),
Some(DatabaseVersionError::MalformedFile)
)
}
// Database is not empty, version file contains not matching version
{
reth_fs_util::write(path.path().join(db_version_file_path(&path)), "0").unwrap();
let db = init_db(&path, args);
assert!(db.is_err());
assert_matches!(
db.unwrap_err().downcast_ref::<DatabaseVersionError>(),
Some(DatabaseVersionError::VersionMismatch { version: 0 })
)
}
}
#[test]
fn db_client_version() {
let path = tempdir().unwrap();
// Empty client version is not recorded
{
let db = init_db(&path, DatabaseArguments::new(ClientVersion::default())).unwrap();
let tx = db.tx().unwrap();
let mut cursor = tx.cursor_read::<tables::VersionHistory>().unwrap();
assert_matches!(cursor.first(), Ok(None));
}
// Client version is recorded
let first_version = ClientVersion { version: String::from("v1"), ..Default::default() };
{
let db = init_db(&path, DatabaseArguments::new(first_version.clone())).unwrap();
let tx = db.tx().unwrap();
let mut cursor = tx.cursor_read::<tables::VersionHistory>().unwrap();
assert_eq!(
cursor
.walk_range(..)
.unwrap()
.map(|x| x.map(|(_, v)| v))
.collect::<Result<Vec<_>, _>>()
.unwrap(),
vec![first_version.clone()]
);
}
// Same client version is not duplicated.
{
let db = init_db(&path, DatabaseArguments::new(first_version.clone())).unwrap();
let tx = db.tx().unwrap();
let mut cursor = tx.cursor_read::<tables::VersionHistory>().unwrap();
assert_eq!(
cursor
.walk_range(..)
.unwrap()
.map(|x| x.map(|(_, v)| v))
.collect::<Result<Vec<_>, _>>()
.unwrap(),
vec![first_version.clone()]
);
}
// Different client version is recorded
std::thread::sleep(Duration::from_secs(1));
let second_version = ClientVersion { version: String::from("v2"), ..Default::default() };
{
let db = init_db(&path, DatabaseArguments::new(second_version.clone())).unwrap();
let tx = db.tx().unwrap();
let mut cursor = tx.cursor_read::<tables::VersionHistory>().unwrap();
assert_eq!(
cursor
.walk_range(..)
.unwrap()
.map(|x| x.map(|(_, v)| v))
.collect::<Result<Vec<_>, _>>()
.unwrap(),
vec![first_version.clone(), second_version.clone()]
);
}
// Different client version is recorded on db open.
std::thread::sleep(Duration::from_secs(1));
let third_version = ClientVersion { version: String::from("v3"), ..Default::default() };
{
let db = open_db(path.path(), DatabaseArguments::new(third_version.clone())).unwrap();
let tx = db.tx().unwrap();
let mut cursor = tx.cursor_read::<tables::VersionHistory>().unwrap();
assert_eq!(
cursor
.walk_range(..)
.unwrap()
.map(|x| x.map(|(_, v)| v))
.collect::<Result<Vec<_>, _>>()
.unwrap(),
vec![first_version, second_version, third_version]
);
}
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/version.rs | crates/storage/db/src/version.rs | //! Database version utils.
use std::{
fs, io,
path::{Path, PathBuf},
};
/// The name of the file that contains the version of the database.
pub const DB_VERSION_FILE_NAME: &str = "database.version";
/// The version of the database stored in the [`DB_VERSION_FILE_NAME`] file in the same directory as
/// database.
pub const DB_VERSION: u64 = 2;
/// Error when checking a database version using [`check_db_version_file`]
#[derive(thiserror::Error, Debug)]
pub enum DatabaseVersionError {
/// Unable to determine the version of the database; the file is missing.
#[error("unable to determine the version of the database, file is missing")]
MissingFile,
/// Unable to determine the version of the database; the file is malformed.
#[error("unable to determine the version of the database, file is malformed")]
MalformedFile,
/// Breaking database change detected.
///
/// Your database version is incompatible with the latest database version.
#[error(
"breaking database change detected: your database version (v{version}) \
is incompatible with the latest database version (v{DB_VERSION})"
)]
VersionMismatch {
/// The detected version in the database.
version: u64,
},
/// IO error occurred while reading the database version file.
#[error("IO error occurred while reading {path}: {err}")]
IORead {
/// The encountered IO error.
err: io::Error,
/// The path to the database version file.
path: PathBuf,
},
}
/// Checks the database version file with [`DB_VERSION_FILE_NAME`] name.
///
/// Returns [Ok] if the file is found and contains a single line equal to [`DB_VERSION`].
/// Otherwise, returns one of the [`DatabaseVersionError`] variants.
pub fn check_db_version_file<P: AsRef<Path>>(db_path: P) -> Result<(), DatabaseVersionError> {
let version = get_db_version(db_path)?;
if version != DB_VERSION {
return Err(DatabaseVersionError::VersionMismatch { version })
}
Ok(())
}
/// Returns the database version from file with [`DB_VERSION_FILE_NAME`] name.
///
/// Returns [Ok] if the file is found and contains a valid version.
/// Otherwise, returns one of the [`DatabaseVersionError`] variants.
pub fn get_db_version<P: AsRef<Path>>(db_path: P) -> Result<u64, DatabaseVersionError> {
let version_file_path = db_version_file_path(db_path);
match fs::read_to_string(&version_file_path) {
Ok(raw_version) => {
Ok(raw_version.parse::<u64>().map_err(|_| DatabaseVersionError::MalformedFile)?)
}
Err(err) if err.kind() == io::ErrorKind::NotFound => Err(DatabaseVersionError::MissingFile),
Err(err) => Err(DatabaseVersionError::IORead { err, path: version_file_path }),
}
}
/// Creates a database version file with [`DB_VERSION_FILE_NAME`] name containing [`DB_VERSION`]
/// string.
///
/// This function will create a file if it does not exist,
/// and will entirely replace its contents if it does.
pub fn create_db_version_file<P: AsRef<Path>>(db_path: P) -> io::Result<()> {
fs::write(db_version_file_path(db_path), DB_VERSION.to_string())
}
/// Returns a database version file path.
pub fn db_version_file_path<P: AsRef<Path>>(db_path: P) -> PathBuf {
db_path.as_ref().join(DB_VERSION_FILE_NAME)
}
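The check flow above (missing file, malformed contents, version mismatch) can be exercised in isolation. This std-only sketch mirrors it with string errors instead of the `DatabaseVersionError` enum (the names here are illustrative):

```rust
use std::{fs, io, path::Path};

const DEMO_DB_VERSION: u64 = 2;

/// Simplified mirror of `check_db_version_file`: a missing file, malformed
/// contents, and a version mismatch each map to a distinct error.
fn demo_check_version(version_file: &Path) -> Result<(), String> {
    match fs::read_to_string(version_file) {
        Ok(raw) => {
            let version: u64 = raw.trim().parse().map_err(|_| "malformed".to_string())?;
            if version != DEMO_DB_VERSION {
                return Err(format!("version mismatch: v{version}"));
            }
            Ok(())
        }
        Err(err) if err.kind() == io::ErrorKind::NotFound => Err("missing".to_string()),
        Err(err) => Err(err.to_string()),
    }
}
```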
#[cfg(test)]
mod tests {
use super::{check_db_version_file, db_version_file_path, DatabaseVersionError};
use assert_matches::assert_matches;
use std::fs;
use tempfile::tempdir;
#[test]
fn missing_file() {
let dir = tempdir().unwrap();
let result = check_db_version_file(&dir);
assert_matches!(result, Err(DatabaseVersionError::MissingFile));
}
#[test]
fn malformed_file() {
let dir = tempdir().unwrap();
fs::write(db_version_file_path(&dir), "invalid-version").unwrap();
let result = check_db_version_file(&dir);
assert_matches!(result, Err(DatabaseVersionError::MalformedFile));
}
#[test]
fn version_mismatch() {
let dir = tempdir().unwrap();
fs::write(db_version_file_path(&dir), "0").unwrap();
let result = check_db_version_file(&dir);
assert_matches!(result, Err(DatabaseVersionError::VersionMismatch { version: 0 }));
}
}
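The parse-then-compare flow of `get_db_version` and `check_db_version_file` can be sketched without filesystem access. The names `parse_db_version`/`check_db_version`, the simplified error enum, and the `DB_VERSION` value of `2` below are illustrative stand-ins, not crate API:

```rust
/// Simplified stand-ins for the relevant `DatabaseVersionError` variants.
#[derive(Debug, PartialEq)]
enum VersionError {
    MalformedFile,
    VersionMismatch { version: u64 },
}

/// Assumed current version, for illustration only.
const DB_VERSION: u64 = 2;

/// Mirrors `get_db_version`'s parsing of the raw file contents.
fn parse_db_version(raw: &str) -> Result<u64, VersionError> {
    raw.parse::<u64>().map_err(|_| VersionError::MalformedFile)
}

/// Mirrors `check_db_version_file`'s equality check against the expected version.
fn check_db_version(raw: &str) -> Result<(), VersionError> {
    let version = parse_db_version(raw)?;
    if version != DB_VERSION {
        return Err(VersionError::VersionMismatch { version });
    }
    Ok(())
}
```

This mirrors the three error paths exercised by the tests above: a missing file (handled at the I/O layer), a malformed file, and a version mismatch.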
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/utils.rs | crates/storage/db/src/utils.rs | //! Utils crate for `db`.
use std::path::Path;
/// Returns the default page size that can be used in this OS.
pub(crate) fn default_page_size() -> usize {
let os_page_size = page_size::get();
// source: https://gitflic.ru/project/erthink/libmdbx/blob?file=mdbx.h#line-num-821
let libmdbx_max_page_size = 0x10000;
// Reducing this further may lead to errors because of the potential size of the data.
let min_page_size = 4096;
os_page_size.clamp(min_page_size, libmdbx_max_page_size)
}
/// Checks if a database is empty. This provides no information about the validity of the
/// data in it. We consider a database non-empty when its path is a non-empty directory.
pub fn is_database_empty<P: AsRef<Path>>(path: P) -> bool {
let path = path.as_ref();
if !path.exists() {
true
} else if path.is_file() {
false
} else if let Ok(mut dir) = path.read_dir() {
// Check if directory has any entries without counting all of them
dir.next().is_none()
} else {
true
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn is_database_empty_false_if_db_path_is_a_file() {
let db_file = tempfile::NamedTempFile::new().unwrap();
let result = is_database_empty(&db_file);
assert!(!result);
}
}
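The clamping in `default_page_size` can be exercised in isolation. The constants below copy the values used above; `clamp_page_size` is a hypothetical helper that takes the OS page size as a parameter so the boundary behavior is easy to check:

```rust
/// Maximum page size supported by libmdbx (copied from `default_page_size` above).
const LIBMDBX_MAX_PAGE_SIZE: usize = 0x10000;
/// Minimum page size used by the crate (copied from `default_page_size` above).
const MIN_PAGE_SIZE: usize = 4096;

/// Same clamp as `default_page_size`, parameterized over the OS-reported page size.
fn clamp_page_size(os_page_size: usize) -> usize {
    os_page_size.clamp(MIN_PAGE_SIZE, LIBMDBX_MAX_PAGE_SIZE)
}
```

An OS page size inside the range passes through unchanged; values outside are pulled to the nearest bound.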
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/metrics.rs | crates/storage/db/src/metrics.rs | use crate::Tables;
use metrics::Histogram;
use reth_metrics::{metrics::Counter, Metrics};
use rustc_hash::FxHashMap;
use std::time::{Duration, Instant};
use strum::{EnumCount, EnumIter, IntoEnumIterator};
const LARGE_VALUE_THRESHOLD_BYTES: usize = 4096;
/// Caches metric handles for database environment to make sure handles are not re-created
/// on every operation.
///
/// Requires a metric recorder to be registered before creating an instance of this struct.
/// Otherwise, metric recording is a no-op.
#[derive(Debug)]
pub(crate) struct DatabaseEnvMetrics {
/// Caches `OperationMetrics` handles for each table and operation tuple.
operations: FxHashMap<(&'static str, Operation), OperationMetrics>,
/// Caches `TransactionMetrics` handles for counters grouped by only transaction mode.
/// Updated both at tx open and close.
transactions: FxHashMap<TransactionMode, TransactionMetrics>,
/// Caches `TransactionOutcomeMetrics` handles for counters grouped by transaction mode and
/// outcome. Can only be updated at tx close, as outcome is only known at that point.
transaction_outcomes:
FxHashMap<(TransactionMode, TransactionOutcome), TransactionOutcomeMetrics>,
}
impl DatabaseEnvMetrics {
pub(crate) fn new() -> Self {
// Pre-populate metric handle maps with all possible combinations of labels
// to avoid runtime locks on the map when recording metrics.
Self {
operations: Self::generate_operation_handles(),
transactions: Self::generate_transaction_handles(),
transaction_outcomes: Self::generate_transaction_outcome_handles(),
}
}
/// Generate a map of all possible operation handles for each table and operation tuple.
/// Used for tracking all operation metrics.
fn generate_operation_handles() -> FxHashMap<(&'static str, Operation), OperationMetrics> {
let mut operations = FxHashMap::with_capacity_and_hasher(
Tables::COUNT * Operation::COUNT,
Default::default(),
);
for table in Tables::ALL {
for operation in Operation::iter() {
operations.insert(
(table.name(), operation),
OperationMetrics::new_with_labels(&[
(Labels::Table.as_str(), table.name()),
(Labels::Operation.as_str(), operation.as_str()),
]),
);
}
}
operations
}
/// Generate a map of all possible transaction modes to metric handles.
/// Used for tracking a counter of open transactions.
fn generate_transaction_handles() -> FxHashMap<TransactionMode, TransactionMetrics> {
TransactionMode::iter()
.map(|mode| {
(
mode,
TransactionMetrics::new_with_labels(&[(
Labels::TransactionMode.as_str(),
mode.as_str(),
)]),
)
})
.collect()
}
/// Generate a map of all possible transaction mode and outcome handles.
/// Used for tracking various stats for finished transactions (e.g. commit duration).
fn generate_transaction_outcome_handles(
) -> FxHashMap<(TransactionMode, TransactionOutcome), TransactionOutcomeMetrics> {
let mut transaction_outcomes = FxHashMap::with_capacity_and_hasher(
TransactionMode::COUNT * TransactionOutcome::COUNT,
Default::default(),
);
for mode in TransactionMode::iter() {
for outcome in TransactionOutcome::iter() {
transaction_outcomes.insert(
(mode, outcome),
TransactionOutcomeMetrics::new_with_labels(&[
(Labels::TransactionMode.as_str(), mode.as_str()),
(Labels::TransactionOutcome.as_str(), outcome.as_str()),
]),
);
}
}
transaction_outcomes
}
/// Record a metric for a database operation executed in `f`.
///
/// If no metric handle is found for the given table and operation, the closure is
/// executed without recording anything.
pub(crate) fn record_operation<R>(
&self,
table: &'static str,
operation: Operation,
value_size: Option<usize>,
f: impl FnOnce() -> R,
) -> R {
if let Some(metrics) = self.operations.get(&(table, operation)) {
metrics.record(value_size, f)
} else {
f()
}
}
/// Record metrics for opening a database transaction.
pub(crate) fn record_opened_transaction(&self, mode: TransactionMode) {
self.transactions
.get(&mode)
.expect("transaction mode metric handle not found")
.record_open();
}
/// Record metrics for closing a database transaction.
#[cfg(feature = "mdbx")]
pub(crate) fn record_closed_transaction(
&self,
mode: TransactionMode,
outcome: TransactionOutcome,
open_duration: Duration,
close_duration: Option<Duration>,
commit_latency: Option<reth_libmdbx::CommitLatency>,
) {
self.transactions
.get(&mode)
.expect("transaction mode metric handle not found")
.record_close();
self.transaction_outcomes
.get(&(mode, outcome))
.expect("transaction outcome metric handle not found")
.record(open_duration, close_duration, commit_latency);
}
}
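The pre-population pattern used above (one metric handle per label combination, built once at construction so the hot path only ever does an immutable lookup) can be sketched with plain `std` types. `Mode`, `Outcome`, and `prepopulate` below are simplified stand-ins, and a `u64` stands in for a metric handle:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Mode { ReadOnly, ReadWrite }

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Outcome { Commit, Abort, Drop }

/// Builds every `(mode, outcome)` handle up front, mirroring
/// `generate_transaction_outcome_handles`: the map is never mutated afterwards,
/// so recording metrics needs no locking or rehashing.
fn prepopulate() -> HashMap<(Mode, Outcome), u64> {
    let modes = [Mode::ReadOnly, Mode::ReadWrite];
    let outcomes = [Outcome::Commit, Outcome::Abort, Outcome::Drop];
    let mut handles = HashMap::with_capacity(modes.len() * outcomes.len());
    for mode in modes {
        for outcome in outcomes {
            // `0` stands in for a real metric handle created with labels.
            handles.insert((mode, outcome), 0);
        }
    }
    handles
}
```

The real implementation does the same cartesian-product walk for tables × operations and modes × outcomes, which is why every `get` on the maps is expected to succeed.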
/// Transaction mode for the database, either read-only or read-write.
#[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, EnumCount, EnumIter)]
pub(crate) enum TransactionMode {
/// Read-only transaction mode.
ReadOnly,
/// Read-write transaction mode.
ReadWrite,
}
impl TransactionMode {
/// Returns the transaction mode as a string.
pub(crate) const fn as_str(&self) -> &'static str {
match self {
Self::ReadOnly => "read-only",
Self::ReadWrite => "read-write",
}
}
/// Returns `true` if the transaction mode is read-only.
pub(crate) const fn is_read_only(&self) -> bool {
matches!(self, Self::ReadOnly)
}
}
/// Transaction outcome after a database operation: commit, abort, or drop.
#[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, EnumCount, EnumIter)]
pub(crate) enum TransactionOutcome {
/// Successful commit of the transaction.
Commit,
/// Aborted transaction.
Abort,
/// Dropped transaction.
Drop,
}
impl TransactionOutcome {
/// Returns the transaction outcome as a string.
pub(crate) const fn as_str(&self) -> &'static str {
match self {
Self::Commit => "commit",
Self::Abort => "abort",
Self::Drop => "drop",
}
}
/// Returns `true` if the transaction outcome is a commit.
pub(crate) const fn is_commit(&self) -> bool {
matches!(self, Self::Commit)
}
}
/// Types of operations conducted on the database: get, put, delete, and various cursor operations.
#[derive(Debug, Clone, Copy, Eq, PartialEq, Hash, EnumCount, EnumIter)]
pub(crate) enum Operation {
/// Database get operation.
Get,
/// Database put operation.
Put,
/// Database delete operation.
Delete,
/// Database cursor upsert operation.
CursorUpsert,
/// Database cursor insert operation.
CursorInsert,
/// Database cursor append operation.
CursorAppend,
/// Database cursor append duplicates operation.
CursorAppendDup,
/// Database cursor delete current operation.
CursorDeleteCurrent,
/// Database cursor delete current duplicates operation.
CursorDeleteCurrentDuplicates,
}
impl Operation {
/// Returns the operation as a string.
pub(crate) const fn as_str(&self) -> &'static str {
match self {
Self::Get => "get",
Self::Put => "put",
Self::Delete => "delete",
Self::CursorUpsert => "cursor-upsert",
Self::CursorInsert => "cursor-insert",
Self::CursorAppend => "cursor-append",
Self::CursorAppendDup => "cursor-append-dup",
Self::CursorDeleteCurrent => "cursor-delete-current",
Self::CursorDeleteCurrentDuplicates => "cursor-delete-current-duplicates",
}
}
}
/// Enum defining labels for various aspects used in metrics.
enum Labels {
/// Label representing a table.
Table,
/// Label representing a transaction mode.
TransactionMode,
/// Label representing a transaction outcome.
TransactionOutcome,
/// Label representing a database operation.
Operation,
}
impl Labels {
/// Converts each label variant into its corresponding string representation.
pub(crate) const fn as_str(&self) -> &'static str {
match self {
Self::Table => "table",
Self::TransactionMode => "mode",
Self::TransactionOutcome => "outcome",
Self::Operation => "operation",
}
}
}
#[derive(Metrics, Clone)]
#[metrics(scope = "database.transaction")]
pub(crate) struct TransactionMetrics {
/// Total number of opened database transactions (cumulative)
opened_total: Counter,
/// Total number of closed database transactions (cumulative)
closed_total: Counter,
}
impl TransactionMetrics {
pub(crate) fn record_open(&self) {
self.opened_total.increment(1);
}
pub(crate) fn record_close(&self) {
self.closed_total.increment(1);
}
}
#[derive(Metrics, Clone)]
#[metrics(scope = "database.transaction")]
pub(crate) struct TransactionOutcomeMetrics {
/// The time a database transaction has been open
open_duration_seconds: Histogram,
/// The time it took to close a database transaction
close_duration_seconds: Histogram,
/// The time it took to prepare a transaction commit
commit_preparation_duration_seconds: Histogram,
/// Duration of GC update during transaction commit by wall clock
commit_gc_wallclock_duration_seconds: Histogram,
/// The time it took to conduct audit of a transaction commit
commit_audit_duration_seconds: Histogram,
/// The time it took to write dirty/modified data pages to a filesystem during transaction
/// commit
commit_write_duration_seconds: Histogram,
/// The time it took to sync written data to the disk/storage during transaction commit
commit_sync_duration_seconds: Histogram,
/// The time it took to release resources during transaction commit
commit_ending_duration_seconds: Histogram,
/// The total duration of a transaction commit
commit_whole_duration_seconds: Histogram,
/// User-mode CPU time spent on GC update during transaction commit
commit_gc_cputime_duration_seconds: Histogram,
}
impl TransactionOutcomeMetrics {
/// Record transaction closing with the duration it was open and the duration it took to close
/// it.
#[cfg(feature = "mdbx")]
pub(crate) fn record(
&self,
open_duration: Duration,
close_duration: Option<Duration>,
commit_latency: Option<reth_libmdbx::CommitLatency>,
) {
self.open_duration_seconds.record(open_duration);
if let Some(close_duration) = close_duration {
self.close_duration_seconds.record(close_duration)
}
if let Some(commit_latency) = commit_latency {
self.commit_preparation_duration_seconds.record(commit_latency.preparation());
self.commit_gc_wallclock_duration_seconds.record(commit_latency.gc_wallclock());
self.commit_audit_duration_seconds.record(commit_latency.audit());
self.commit_write_duration_seconds.record(commit_latency.write());
self.commit_sync_duration_seconds.record(commit_latency.sync());
self.commit_ending_duration_seconds.record(commit_latency.ending());
self.commit_whole_duration_seconds.record(commit_latency.whole());
self.commit_gc_cputime_duration_seconds.record(commit_latency.gc_cputime());
}
}
}
#[derive(Metrics, Clone)]
#[metrics(scope = "database.operation")]
pub(crate) struct OperationMetrics {
/// Total number of database operations made
calls_total: Counter,
/// The time it took to execute a database operation (`put/upsert/insert/append/append_dup`)
/// with value larger than [`LARGE_VALUE_THRESHOLD_BYTES`] bytes.
large_value_duration_seconds: Histogram,
}
impl OperationMetrics {
/// Record operation metric.
///
/// The duration it took to execute the closure is recorded only if the provided `value_size` is
/// larger than [`LARGE_VALUE_THRESHOLD_BYTES`].
pub(crate) fn record<R>(&self, value_size: Option<usize>, f: impl FnOnce() -> R) -> R {
self.calls_total.increment(1);
// Record duration only for large values to prevent the performance hit of clock syscall
// on small operations
if value_size.is_some_and(|size| size > LARGE_VALUE_THRESHOLD_BYTES) {
let start = Instant::now();
let result = f();
self.large_value_duration_seconds.record(start.elapsed());
result
} else {
f()
}
}
}
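The threshold gate in `OperationMetrics::record` (only pay for the clock syscalls when the value is large enough to matter) reduces to the following sketch. `record_gated` is a hypothetical helper that returns the duration that would have been recorded, so the gating is observable:

```rust
use std::time::{Duration, Instant};

/// Same threshold as the constant at the top of this module.
const LARGE_VALUE_THRESHOLD_BYTES: usize = 4096;

/// Runs `f`, timing it only when `value_size` exceeds the threshold.
/// Returns the closure's result together with the duration that would be recorded.
fn record_gated<R>(value_size: Option<usize>, f: impl FnOnce() -> R) -> (R, Option<Duration>) {
    if value_size.is_some_and(|size| size > LARGE_VALUE_THRESHOLD_BYTES) {
        let start = Instant::now();
        let result = f();
        (result, Some(start.elapsed()))
    } else {
        // Small or unknown value sizes: skip the clock calls entirely.
        (f(), None)
    }
}
```

Read-side operations pass `None` for the value size, so they always take the untimed branch.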
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/implementation/mod.rs | crates/storage/db/src/implementation/mod.rs | #[cfg(feature = "mdbx")]
pub(crate) mod mdbx;
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/implementation/mdbx/cursor.rs | crates/storage/db/src/implementation/mdbx/cursor.rs | //! Cursor wrapper for libmdbx-sys.
use super::utils::*;
use crate::{
metrics::{DatabaseEnvMetrics, Operation},
DatabaseError,
};
use reth_db_api::{
common::{PairResult, ValueOnlyResult},
cursor::{
DbCursorRO, DbCursorRW, DbDupCursorRO, DbDupCursorRW, DupWalker, RangeWalker,
ReverseWalker, Walker,
},
table::{Compress, Decode, Decompress, DupSort, Encode, Table},
};
use reth_libmdbx::{Error as MDBXError, TransactionKind, WriteFlags, RO, RW};
use reth_storage_errors::db::{DatabaseErrorInfo, DatabaseWriteError, DatabaseWriteOperation};
use std::{borrow::Cow, collections::Bound, marker::PhantomData, ops::RangeBounds, sync::Arc};
/// Read only Cursor.
pub type CursorRO<T> = Cursor<RO, T>;
/// Read write cursor.
pub type CursorRW<T> = Cursor<RW, T>;
/// Cursor wrapper to access KV items.
#[derive(Debug)]
pub struct Cursor<K: TransactionKind, T: Table> {
/// Inner `libmdbx` cursor.
pub(crate) inner: reth_libmdbx::Cursor<K>,
/// Cache buffer that receives compressed values.
buf: Vec<u8>,
/// Reference to metric handles in the DB environment. If `None`, metrics are not recorded.
metrics: Option<Arc<DatabaseEnvMetrics>>,
/// Phantom data to enforce encoding/decoding.
_dbi: PhantomData<T>,
}
impl<K: TransactionKind, T: Table> Cursor<K, T> {
pub(crate) const fn new_with_metrics(
inner: reth_libmdbx::Cursor<K>,
metrics: Option<Arc<DatabaseEnvMetrics>>,
) -> Self {
Self { inner, buf: Vec::new(), metrics, _dbi: PhantomData }
}
/// If `self.metrics` is `Some(...)`, record a metric with the provided operation and value
/// size.
///
/// Otherwise, just execute the closure.
fn execute_with_operation_metric<R>(
&mut self,
operation: Operation,
value_size: Option<usize>,
f: impl FnOnce(&mut Self) -> R,
) -> R {
if let Some(metrics) = self.metrics.clone() {
metrics.record_operation(T::NAME, operation, value_size, || f(self))
} else {
f(self)
}
}
}
/// Decodes a `(key, value)` pair from the database.
#[expect(clippy::type_complexity)]
pub fn decode<T>(
res: Result<Option<(Cow<'_, [u8]>, Cow<'_, [u8]>)>, impl Into<DatabaseErrorInfo>>,
) -> PairResult<T>
where
T: Table,
T::Key: Decode,
T::Value: Decompress,
{
res.map_err(|e| DatabaseError::Read(e.into()))?.map(decoder::<T>).transpose()
}
/// Some types don't support compression (e.g. `B256`), and we don't want to copy them into the
/// allocated buffer when we can just use their reference.
macro_rules! compress_to_buf_or_ref {
($self:expr, $value:expr) => {
if let Some(value) = $value.uncompressable_ref() {
Some(value)
} else {
$self.buf.clear();
$value.compress_to_buf(&mut $self.buf);
None
}
};
}
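The borrow-or-compress contract of `compress_to_buf_or_ref!` can be sketched without the `Compress` trait. `maybe_compress` below is a hypothetical stand-in, with a plain copy standing in for real compression:

```rust
/// Returns a borrowed slice when the value needs no compression (the macro's
/// `uncompressable_ref` branch); otherwise clears the reusable `buf`, writes the
/// "compressed" bytes into it, and returns `None`, signalling the caller to read
/// from `buf` instead — the same contract as the macro.
fn maybe_compress<'a>(value: &'a [u8], compressible: bool, buf: &mut Vec<u8>) -> Option<&'a [u8]> {
    if !compressible {
        Some(value)
    } else {
        buf.clear();
        buf.extend_from_slice(value); // stand-in for real compression
        None
    }
}
```

This is why every write method below uses `value.unwrap_or(&this.buf)`: `Some` means "use the borrow", `None` means "the bytes are in the buffer".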
impl<K: TransactionKind, T: Table> DbCursorRO<T> for Cursor<K, T> {
fn first(&mut self) -> PairResult<T> {
decode::<T>(self.inner.first())
}
fn seek_exact(&mut self, key: <T as Table>::Key) -> PairResult<T> {
decode::<T>(self.inner.set_key(key.encode().as_ref()))
}
fn seek(&mut self, key: <T as Table>::Key) -> PairResult<T> {
decode::<T>(self.inner.set_range(key.encode().as_ref()))
}
fn next(&mut self) -> PairResult<T> {
decode::<T>(self.inner.next())
}
fn prev(&mut self) -> PairResult<T> {
decode::<T>(self.inner.prev())
}
fn last(&mut self) -> PairResult<T> {
decode::<T>(self.inner.last())
}
fn current(&mut self) -> PairResult<T> {
decode::<T>(self.inner.get_current())
}
fn walk(&mut self, start_key: Option<T::Key>) -> Result<Walker<'_, T, Self>, DatabaseError> {
let start = if let Some(start_key) = start_key {
decode::<T>(self.inner.set_range(start_key.encode().as_ref())).transpose()
} else {
self.first().transpose()
};
Ok(Walker::new(self, start))
}
fn walk_range(
&mut self,
range: impl RangeBounds<T::Key>,
) -> Result<RangeWalker<'_, T, Self>, DatabaseError> {
let start = match range.start_bound().cloned() {
Bound::Included(key) => self.inner.set_range(key.encode().as_ref()),
Bound::Excluded(_key) => {
unreachable!("Rust doesn't allow for Bound::Excluded in starting bounds");
}
Bound::Unbounded => self.inner.first(),
};
let start = decode::<T>(start).transpose();
Ok(RangeWalker::new(self, start, range.end_bound().cloned()))
}
fn walk_back(
&mut self,
start_key: Option<T::Key>,
) -> Result<ReverseWalker<'_, T, Self>, DatabaseError> {
let start = if let Some(start_key) = start_key {
decode::<T>(self.inner.set_range(start_key.encode().as_ref()))
} else {
self.last()
}
.transpose();
Ok(ReverseWalker::new(self, start))
}
}
impl<K: TransactionKind, T: DupSort> DbDupCursorRO<T> for Cursor<K, T> {
/// Returns the next `(key, value)` pair of a DUPSORT table.
fn next_dup(&mut self) -> PairResult<T> {
decode::<T>(self.inner.next_dup())
}
/// Returns the next `(key, value)` pair skipping the duplicates.
fn next_no_dup(&mut self) -> PairResult<T> {
decode::<T>(self.inner.next_nodup())
}
/// Returns the next `value` of a duplicate `key`.
fn next_dup_val(&mut self) -> ValueOnlyResult<T> {
self.inner
.next_dup()
.map_err(|e| DatabaseError::Read(e.into()))?
.map(decode_value::<T>)
.transpose()
}
fn seek_by_key_subkey(
&mut self,
key: <T as Table>::Key,
subkey: <T as DupSort>::SubKey,
) -> ValueOnlyResult<T> {
self.inner
.get_both_range(key.encode().as_ref(), subkey.encode().as_ref())
.map_err(|e| DatabaseError::Read(e.into()))?
.map(decode_one::<T>)
.transpose()
}
/// Depending on its arguments, returns an iterator starting at:
/// - `Some(key)`, `Some(subkey)`: the item of `key` whose data is >= `subkey`
/// - `Some(key)`, `None`: the first item of the given `key`
/// - `None`, `Some(subkey)`: like the first case, but using the first key of the table
/// - `None`, `None`: the first item of the table
fn walk_dup(
&mut self,
key: Option<T::Key>,
subkey: Option<T::SubKey>,
) -> Result<DupWalker<'_, T, Self>, DatabaseError> {
let start = match (key, subkey) {
(Some(key), Some(subkey)) => {
// encode key and decode it after.
let key: Vec<u8> = key.encode().into();
self.inner
.get_both_range(key.as_ref(), subkey.encode().as_ref())
.map_err(|e| DatabaseError::Read(e.into()))?
.map(|val| decoder::<T>((Cow::Owned(key), val)))
}
(Some(key), None) => {
let key: Vec<u8> = key.encode().into();
self.inner
.set(key.as_ref())
.map_err(|e| DatabaseError::Read(e.into()))?
.map(|val| decoder::<T>((Cow::Owned(key), val)))
}
(None, Some(subkey)) => {
if let Some((key, _)) = self.first()? {
let key: Vec<u8> = key.encode().into();
self.inner
.get_both_range(key.as_ref(), subkey.encode().as_ref())
.map_err(|e| DatabaseError::Read(e.into()))?
.map(|val| decoder::<T>((Cow::Owned(key), val)))
} else {
Some(Err(DatabaseError::Read(MDBXError::NotFound.into())))
}
}
(None, None) => self.first().transpose(),
};
Ok(DupWalker::<'_, T, Self> { cursor: self, start })
}
}
impl<T: Table> DbCursorRW<T> for Cursor<RW, T> {
/// Database operation that will update an existing row if a specified key already
/// exists in a table, and insert a new row if it doesn't.
///
/// For a DUPSORT table, `upsert` will not actually update-or-insert. If the key already exists,
/// it will append the value to the subkey, even if the subkeys are the same. So if you want
/// to properly upsert, you'll need to `seek_exact` & `delete_current` if the key+subkey was
/// found, before calling `upsert`.
fn upsert(&mut self, key: T::Key, value: &T::Value) -> Result<(), DatabaseError> {
let key = key.encode();
let value = compress_to_buf_or_ref!(self, value);
self.execute_with_operation_metric(
Operation::CursorUpsert,
Some(value.unwrap_or(&self.buf).len()),
|this| {
this.inner
.put(key.as_ref(), value.unwrap_or(&this.buf), WriteFlags::UPSERT)
.map_err(|e| {
DatabaseWriteError {
info: e.into(),
operation: DatabaseWriteOperation::CursorUpsert,
table_name: T::NAME,
key: key.into(),
}
.into()
})
},
)
}
fn insert(&mut self, key: T::Key, value: &T::Value) -> Result<(), DatabaseError> {
let key = key.encode();
let value = compress_to_buf_or_ref!(self, value);
self.execute_with_operation_metric(
Operation::CursorInsert,
Some(value.unwrap_or(&self.buf).len()),
|this| {
this.inner
.put(key.as_ref(), value.unwrap_or(&this.buf), WriteFlags::NO_OVERWRITE)
.map_err(|e| {
DatabaseWriteError {
info: e.into(),
operation: DatabaseWriteOperation::CursorInsert,
table_name: T::NAME,
key: key.into(),
}
.into()
})
},
)
}
/// Appends the data to the end of the table. Consequently, the append operation
/// will fail if the inserted key is less than the last table key.
fn append(&mut self, key: T::Key, value: &T::Value) -> Result<(), DatabaseError> {
let key = key.encode();
let value = compress_to_buf_or_ref!(self, value);
self.execute_with_operation_metric(
Operation::CursorAppend,
Some(value.unwrap_or(&self.buf).len()),
|this| {
this.inner
.put(key.as_ref(), value.unwrap_or(&this.buf), WriteFlags::APPEND)
.map_err(|e| {
DatabaseWriteError {
info: e.into(),
operation: DatabaseWriteOperation::CursorAppend,
table_name: T::NAME,
key: key.into(),
}
.into()
})
},
)
}
fn delete_current(&mut self) -> Result<(), DatabaseError> {
self.execute_with_operation_metric(Operation::CursorDeleteCurrent, None, |this| {
this.inner.del(WriteFlags::CURRENT).map_err(|e| DatabaseError::Delete(e.into()))
})
}
}
impl<T: DupSort> DbDupCursorRW<T> for Cursor<RW, T> {
fn delete_current_duplicates(&mut self) -> Result<(), DatabaseError> {
self.execute_with_operation_metric(Operation::CursorDeleteCurrentDuplicates, None, |this| {
this.inner.del(WriteFlags::NO_DUP_DATA).map_err(|e| DatabaseError::Delete(e.into()))
})
}
fn append_dup(&mut self, key: T::Key, value: T::Value) -> Result<(), DatabaseError> {
let key = key.encode();
let value = compress_to_buf_or_ref!(self, value);
self.execute_with_operation_metric(
Operation::CursorAppendDup,
Some(value.unwrap_or(&self.buf).len()),
|this| {
this.inner
.put(key.as_ref(), value.unwrap_or(&this.buf), WriteFlags::APPEND_DUP)
.map_err(|e| {
DatabaseWriteError {
info: e.into(),
operation: DatabaseWriteOperation::CursorAppendDup,
table_name: T::NAME,
key: key.into(),
}
.into()
})
},
)
}
}
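The start-bound dispatch in `walk_range` (an included bound seeks with `set_range`, an excluded start is unreachable for Rust range literals, and an unbounded start goes to `first`) follows the standard `RangeBounds` shape. A self-contained sketch over a sorted slice, with `range_start_index` as a hypothetical helper:

```rust
use std::ops::{Bound, RangeBounds};

/// Returns the index a `walk_range`-style cursor would initially position on
/// for the given range's start bound, or `None` if no entry qualifies.
fn range_start_index(sorted: &[u64], range: impl RangeBounds<u64>) -> Option<usize> {
    let idx = match range.start_bound() {
        // Like `set_range`: the first key >= the included bound.
        Bound::Included(key) => sorted.iter().position(|k| k >= key)?,
        // Rust range literals cannot start with an excluded bound,
        // which is why `walk_range` treats this arm as unreachable.
        Bound::Excluded(_) => unreachable!("no excluded start bounds in Rust range literals"),
        // Unbounded start: the cursor goes to `first`.
        Bound::Unbounded => 0,
    };
    (idx < sorted.len()).then_some(idx)
}
```

The end bound is handled later, by `RangeWalker` checking each yielded key against `range.end_bound()`.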
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/implementation/mdbx/tx.rs | crates/storage/db/src/implementation/mdbx/tx.rs | //! Transaction wrapper for libmdbx-sys.
use super::{cursor::Cursor, utils::*};
use crate::{
metrics::{DatabaseEnvMetrics, Operation, TransactionMode, TransactionOutcome},
DatabaseError,
};
use reth_db_api::{
table::{Compress, DupSort, Encode, Table, TableImporter},
transaction::{DbTx, DbTxMut},
};
use reth_libmdbx::{ffi::MDBX_dbi, CommitLatency, Transaction, TransactionKind, WriteFlags, RW};
use reth_storage_errors::db::{DatabaseWriteError, DatabaseWriteOperation};
use reth_tracing::tracing::{debug, trace, warn};
use std::{
backtrace::Backtrace,
marker::PhantomData,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
time::{Duration, Instant},
};
/// Duration after which we emit the log about long-lived database transactions.
const LONG_TRANSACTION_DURATION: Duration = Duration::from_secs(60);
/// Wrapper for the libmdbx transaction.
#[derive(Debug)]
pub struct Tx<K: TransactionKind> {
/// Libmdbx-sys transaction.
pub inner: Transaction<K>,
/// Handler for metrics with its own [Drop] implementation for cases when the transaction isn't
/// closed by [`Tx::commit`] or [`Tx::abort`], but we still need to report it in the metrics.
///
/// If [Some], then metrics are reported.
metrics_handler: Option<MetricsHandler<K>>,
}
impl<K: TransactionKind> Tx<K> {
/// Creates new `Tx` object with a `RO` or `RW` transaction.
#[inline]
pub const fn new(inner: Transaction<K>) -> Self {
Self::new_inner(inner, None)
}
/// Creates new `Tx` object with a `RO` or `RW` transaction and optionally enables metrics.
#[inline]
#[track_caller]
pub(crate) fn new_with_metrics(
inner: Transaction<K>,
env_metrics: Option<Arc<DatabaseEnvMetrics>>,
) -> reth_libmdbx::Result<Self> {
let metrics_handler = env_metrics
.map(|env_metrics| {
let handler = MetricsHandler::<K>::new(inner.id()?, env_metrics);
handler.env_metrics.record_opened_transaction(handler.transaction_mode());
handler.log_transaction_opened();
Ok(handler)
})
.transpose()?;
Ok(Self::new_inner(inner, metrics_handler))
}
#[inline]
const fn new_inner(inner: Transaction<K>, metrics_handler: Option<MetricsHandler<K>>) -> Self {
Self { inner, metrics_handler }
}
/// Gets this transaction ID.
pub fn id(&self) -> reth_libmdbx::Result<u64> {
self.metrics_handler.as_ref().map_or_else(|| self.inner.id(), |handler| Ok(handler.txn_id))
}
/// Gets a table database handle if it exists, otherwise creates it.
pub fn get_dbi<T: Table>(&self) -> Result<MDBX_dbi, DatabaseError> {
self.inner
.open_db(Some(T::NAME))
.map(|db| db.dbi())
.map_err(|e| DatabaseError::Open(e.into()))
}
/// Create db Cursor
pub fn new_cursor<T: Table>(&self) -> Result<Cursor<K, T>, DatabaseError> {
let inner = self
.inner
.cursor_with_dbi(self.get_dbi::<T>()?)
.map_err(|e| DatabaseError::InitCursor(e.into()))?;
Ok(Cursor::new_with_metrics(
inner,
self.metrics_handler.as_ref().map(|h| h.env_metrics.clone()),
))
}
/// If `self.metrics_handler == Some(_)`, measure the time it takes to execute the closure and
/// record a metric with the provided transaction outcome.
///
/// Otherwise, just execute the closure.
fn execute_with_close_transaction_metric<R>(
mut self,
outcome: TransactionOutcome,
f: impl FnOnce(Self) -> (R, Option<CommitLatency>),
) -> R {
let run = |tx| {
let start = Instant::now();
let (result, commit_latency) = f(tx);
let total_duration = start.elapsed();
if outcome.is_commit() {
debug!(
target: "storage::db::mdbx",
?total_duration,
?commit_latency,
is_read_only = K::IS_READ_ONLY,
"Commit"
);
}
(result, commit_latency, total_duration)
};
if let Some(mut metrics_handler) = self.metrics_handler.take() {
metrics_handler.close_recorded = true;
metrics_handler.log_backtrace_on_long_read_transaction();
let (result, commit_latency, close_duration) = run(self);
let open_duration = metrics_handler.start.elapsed();
metrics_handler.env_metrics.record_closed_transaction(
metrics_handler.transaction_mode(),
outcome,
open_duration,
Some(close_duration),
commit_latency,
);
result
} else {
run(self).0
}
}
/// If `self.metrics_handler == Some(_)`, measure the time it takes to execute the closure and
/// record a metric with the provided operation.
///
/// Otherwise, just execute the closure.
fn execute_with_operation_metric<T: Table, R>(
&self,
operation: Operation,
value_size: Option<usize>,
f: impl FnOnce(&Transaction<K>) -> R,
) -> R {
if let Some(metrics_handler) = &self.metrics_handler {
metrics_handler.log_backtrace_on_long_read_transaction();
metrics_handler
.env_metrics
.record_operation(T::NAME, operation, value_size, || f(&self.inner))
} else {
f(&self.inner)
}
}
}
#[derive(Debug)]
struct MetricsHandler<K: TransactionKind> {
/// Cached internal transaction ID provided by libmdbx.
txn_id: u64,
/// The time when the transaction was started.
start: Instant,
/// Duration after which we emit the log about long-lived database transactions.
long_transaction_duration: Duration,
/// If `true`, the metric about transaction closing has already been recorded and we don't need
/// to do anything on [`Drop::drop`].
close_recorded: bool,
/// If `true`, the backtrace of the transaction will be recorded and logged.
/// See [`MetricsHandler::log_backtrace_on_long_read_transaction`].
record_backtrace: bool,
/// If `true`, the backtrace of the transaction has already been recorded and logged.
/// See [`MetricsHandler::log_backtrace_on_long_read_transaction`].
backtrace_recorded: AtomicBool,
/// Shared database environment metrics.
env_metrics: Arc<DatabaseEnvMetrics>,
/// Backtrace of the location where the transaction has been opened. Reported only with debug
/// assertions, because capturing the backtrace on every transaction opening is expensive.
#[cfg(debug_assertions)]
open_backtrace: Backtrace,
_marker: PhantomData<K>,
}
impl<K: TransactionKind> MetricsHandler<K> {
fn new(txn_id: u64, env_metrics: Arc<DatabaseEnvMetrics>) -> Self {
Self {
txn_id,
start: Instant::now(),
long_transaction_duration: LONG_TRANSACTION_DURATION,
close_recorded: false,
record_backtrace: true,
backtrace_recorded: AtomicBool::new(false),
#[cfg(debug_assertions)]
open_backtrace: Backtrace::force_capture(),
env_metrics,
_marker: PhantomData,
}
}
const fn transaction_mode(&self) -> TransactionMode {
if K::IS_READ_ONLY {
TransactionMode::ReadOnly
} else {
TransactionMode::ReadWrite
}
}
/// Logs the caller location and ID of the transaction that was opened.
#[track_caller]
fn log_transaction_opened(&self) {
trace!(
target: "storage::db::mdbx",
caller = %core::panic::Location::caller(),
id = %self.txn_id,
mode = %self.transaction_mode().as_str(),
"Transaction opened",
);
}
/// Logs the backtrace of current call if the duration that the read transaction has been open
/// is more than [`LONG_TRANSACTION_DURATION`] and `record_backtrace == true`.
/// The backtrace is recorded and logged just once, guaranteed by `backtrace_recorded` atomic.
///
/// NOTE: Backtrace is recorded using [`Backtrace::force_capture`], so `RUST_BACKTRACE` env var
/// is not needed.
fn log_backtrace_on_long_read_transaction(&self) {
if self.record_backtrace &&
!self.backtrace_recorded.load(Ordering::Relaxed) &&
self.transaction_mode().is_read_only()
{
let open_duration = self.start.elapsed();
if open_duration >= self.long_transaction_duration {
self.backtrace_recorded.store(true, Ordering::Relaxed);
#[cfg(debug_assertions)]
let message = format!(
"The database read transaction has been open for too long. Open backtrace:\n{}\n\nCurrent backtrace:\n{}",
self.open_backtrace,
Backtrace::force_capture()
);
#[cfg(not(debug_assertions))]
let message = format!(
"The database read transaction has been open for too long. Backtrace:\n{}",
Backtrace::force_capture()
);
warn!(
target: "storage::db::mdbx",
?open_duration,
%self.txn_id,
"{message}"
);
}
}
}
}
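The once-only gate in `log_backtrace_on_long_read_transaction` (an `AtomicBool` checked, then set) can be sketched standalone. Note this sketch uses `compare_exchange` so that, under the assumption of concurrent callers, exactly one wins; the separate load-then-store above is a slightly weaker variant that tolerates a rare duplicate log. `log_once` is a hypothetical helper:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Runs `log` at most once across all calls sharing `recorded`.
/// Returns `true` for the call that actually logged.
fn log_once(recorded: &AtomicBool, log: impl FnOnce()) -> bool {
    // Only the caller that flips `false -> true` gets to log.
    if recorded.compare_exchange(false, true, Ordering::Relaxed, Ordering::Relaxed).is_ok() {
        log();
        true
    } else {
        false
    }
}
```

`Relaxed` ordering suffices here because the flag guards only the logging itself, not any other shared data.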
impl<K: TransactionKind> Drop for MetricsHandler<K> {
fn drop(&mut self) {
if !self.close_recorded {
self.log_backtrace_on_long_read_transaction();
self.env_metrics.record_closed_transaction(
self.transaction_mode(),
TransactionOutcome::Drop,
self.start.elapsed(),
None,
None,
);
}
}
}
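The once-only backtrace recording above is guarded by an `AtomicBool` checked with relaxed ordering. A minimal, self-contained sketch of the same guard pattern (the `OnceGuard` name and `record_once` API are illustrative, not from reth; reth uses a separate `load` + `store`, while `swap` shown here makes the guard race-free even across threads):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Guard that lets an expensive action (e.g. capturing a backtrace) run at most once.
struct OnceGuard {
    recorded: AtomicBool,
}

impl OnceGuard {
    const fn new() -> Self {
        Self { recorded: AtomicBool::new(false) }
    }

    /// Runs `action` only on the first call; later calls are no-ops.
    /// `swap` returns the previous value, so exactly one caller observes `false`.
    fn record_once(&self, action: impl FnOnce()) -> bool {
        if !self.recorded.swap(true, Ordering::Relaxed) {
            action();
            return true;
        }
        false
    }
}
```

Relaxed ordering suffices here because the flag only deduplicates the action; it does not synchronize any other data.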
impl TableImporter for Tx<RW> {}
impl<K: TransactionKind> DbTx for Tx<K> {
type Cursor<T: Table> = Cursor<K, T>;
type DupCursor<T: DupSort> = Cursor<K, T>;
fn get<T: Table>(&self, key: T::Key) -> Result<Option<<T as Table>::Value>, DatabaseError> {
self.get_by_encoded_key::<T>(&key.encode())
}
fn get_by_encoded_key<T: Table>(
&self,
key: &<T::Key as Encode>::Encoded,
) -> Result<Option<T::Value>, DatabaseError> {
self.execute_with_operation_metric::<T, _>(Operation::Get, None, |tx| {
tx.get(self.get_dbi::<T>()?, key.as_ref())
.map_err(|e| DatabaseError::Read(e.into()))?
.map(decode_one::<T>)
.transpose()
})
}
fn commit(self) -> Result<bool, DatabaseError> {
self.execute_with_close_transaction_metric(TransactionOutcome::Commit, |this| {
match this.inner.commit().map_err(|e| DatabaseError::Commit(e.into())) {
Ok((v, latency)) => (Ok(v), Some(latency)),
Err(e) => (Err(e), None),
}
})
}
fn abort(self) {
self.execute_with_close_transaction_metric(TransactionOutcome::Abort, |this| {
(drop(this.inner), None)
})
}
/// Iterates over read-only values in the database.
fn cursor_read<T: Table>(&self) -> Result<Self::Cursor<T>, DatabaseError> {
self.new_cursor()
}
/// Iterates over read-only values in a dup-sorted table.
fn cursor_dup_read<T: DupSort>(&self) -> Result<Self::DupCursor<T>, DatabaseError> {
self.new_cursor()
}
/// Returns the number of entries in the table using a cheap DB stats invocation.
fn entries<T: Table>(&self) -> Result<usize, DatabaseError> {
Ok(self
.inner
.db_stat_with_dbi(self.get_dbi::<T>()?)
.map_err(|e| DatabaseError::Stats(e.into()))?
.entries())
}
/// Disables long-lived read transaction safety guarantees, such as backtrace recording and
/// timeout.
fn disable_long_read_transaction_safety(&mut self) {
if let Some(metrics_handler) = self.metrics_handler.as_mut() {
metrics_handler.record_backtrace = false;
}
self.inner.disable_timeout();
}
}
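`get_by_encoded_key` above turns an optional raw value into `Result<Option<T::Value>, _>` via `.map(decode_one::<T>)` followed by `.transpose()`. A standalone sketch of that shape (the `decode` function here is a stand-in for the real decoder, and a little-endian `u32` stands in for a table value):

```rust
/// Stand-in decoder: interprets raw bytes as a little-endian u32.
fn decode(raw: &[u8]) -> Result<u32, String> {
    let bytes: [u8; 4] = raw.try_into().map_err(|_| "wrong length".to_string())?;
    Ok(u32::from_le_bytes(bytes))
}

/// Mirrors the lookup shape: a missing key is `Ok(None)`, a present key is
/// decoded, and a decode failure surfaces as `Err` instead of being swallowed.
fn lookup(raw: Option<&[u8]>) -> Result<Option<u32>, String> {
    raw.map(decode).transpose()
}
```

`transpose` is what flips `Option<Result<V, E>>` into `Result<Option<V>, E>`, so the caller can use `?` and still distinguish "not found" from "corrupt".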
impl DbTxMut for Tx<RW> {
type CursorMut<T: Table> = Cursor<RW, T>;
type DupCursorMut<T: DupSort> = Cursor<RW, T>;
fn put<T: Table>(&self, key: T::Key, value: T::Value) -> Result<(), DatabaseError> {
let key = key.encode();
let value = value.compress();
self.execute_with_operation_metric::<T, _>(
Operation::Put,
Some(value.as_ref().len()),
|tx| {
tx.put(self.get_dbi::<T>()?, key.as_ref(), value, WriteFlags::UPSERT).map_err(|e| {
DatabaseWriteError {
info: e.into(),
operation: DatabaseWriteOperation::Put,
table_name: T::NAME,
key: key.into(),
}
.into()
})
},
)
}
fn delete<T: Table>(
&self,
key: T::Key,
value: Option<T::Value>,
) -> Result<bool, DatabaseError> {
let mut data = None;
let value = value.map(Compress::compress);
if let Some(value) = &value {
data = Some(value.as_ref());
};
self.execute_with_operation_metric::<T, _>(Operation::Delete, None, |tx| {
tx.del(self.get_dbi::<T>()?, key.encode(), data)
.map_err(|e| DatabaseError::Delete(e.into()))
})
}
fn clear<T: Table>(&self) -> Result<(), DatabaseError> {
self.inner.clear_db(self.get_dbi::<T>()?).map_err(|e| DatabaseError::Delete(e.into()))?;
Ok(())
}
fn cursor_write<T: Table>(&self) -> Result<Self::CursorMut<T>, DatabaseError> {
self.new_cursor()
}
fn cursor_dup_write<T: DupSort>(&self) -> Result<Self::DupCursorMut<T>, DatabaseError> {
self.new_cursor()
}
}
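`delete` above takes an optional value: for a `DUPSORT` table, passing `Some` removes only the exact `(key, value)` pair, while `None` removes everything under the key. A sketch of those semantics over a plain `BTreeMap` (this models the observable behavior, not MDBX itself; names are illustrative):

```rust
use std::collections::BTreeMap;

/// Toy dup-sorted table: each key maps to multiple byte-string values.
type DupTable = BTreeMap<u64, Vec<Vec<u8>>>;

/// Returns `true` if anything was deleted, mirroring `DbTxMut::delete`.
fn delete(table: &mut DupTable, key: u64, value: Option<&[u8]>) -> bool {
    match value {
        // Exact-pair delete: remove only the matching duplicate value.
        Some(v) => {
            let Some(values) = table.get_mut(&key) else { return false };
            let before = values.len();
            values.retain(|stored| stored.as_slice() != v);
            let deleted = values.len() < before;
            if values.is_empty() {
                table.remove(&key);
            }
            deleted
        }
        // No value given: drop the key and all of its duplicates.
        None => table.remove(&key).is_some(),
    }
}
```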
#[cfg(test)]
mod tests {
use crate::{mdbx::DatabaseArguments, tables, DatabaseEnv, DatabaseEnvKind};
use reth_db_api::{database::Database, models::ClientVersion, transaction::DbTx};
use reth_libmdbx::MaxReadTransactionDuration;
use reth_storage_errors::db::DatabaseError;
use std::{sync::atomic::Ordering, thread::sleep, time::Duration};
use tempfile::tempdir;
#[test]
fn long_read_transaction_safety_disabled() {
const MAX_DURATION: Duration = Duration::from_secs(1);
let dir = tempdir().unwrap();
let args = DatabaseArguments::new(ClientVersion::default())
.with_max_read_transaction_duration(Some(MaxReadTransactionDuration::Set(
MAX_DURATION,
)));
let db = DatabaseEnv::open(dir.path(), DatabaseEnvKind::RW, args).unwrap().with_metrics();
let mut tx = db.tx().unwrap();
tx.metrics_handler.as_mut().unwrap().long_transaction_duration = MAX_DURATION;
tx.disable_long_read_transaction_safety();
// Give the `TxnManager` some time to time out the transaction.
sleep(MAX_DURATION + Duration::from_millis(100));
// Transaction has not timed out.
assert_eq!(
tx.get::<tables::Transactions>(0),
Err(DatabaseError::Open(reth_libmdbx::Error::NotFound.into()))
);
// Backtrace is not recorded.
assert!(!tx.metrics_handler.unwrap().backtrace_recorded.load(Ordering::Relaxed));
}
#[test]
fn long_read_transaction_safety_enabled() {
const MAX_DURATION: Duration = Duration::from_secs(1);
let dir = tempdir().unwrap();
let args = DatabaseArguments::new(ClientVersion::default())
.with_max_read_transaction_duration(Some(MaxReadTransactionDuration::Set(
MAX_DURATION,
)));
let db = DatabaseEnv::open(dir.path(), DatabaseEnvKind::RW, args).unwrap().with_metrics();
let mut tx = db.tx().unwrap();
tx.metrics_handler.as_mut().unwrap().long_transaction_duration = MAX_DURATION;
// Give the `TxnManager` some time to time out the transaction.
sleep(MAX_DURATION + Duration::from_millis(100));
// Transaction has timed out.
assert_eq!(
tx.get::<tables::Transactions>(0),
Err(DatabaseError::Open(reth_libmdbx::Error::ReadTransactionTimeout.into()))
);
// Backtrace is recorded.
assert!(tx.metrics_handler.unwrap().backtrace_recorded.load(Ordering::Relaxed));
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/implementation/mdbx/utils.rs | crates/storage/db/src/implementation/mdbx/utils.rs | //! Small database table utilities and helper functions.
use crate::{
table::{Decode, Decompress, Table, TableRow},
DatabaseError,
};
use std::borrow::Cow;
/// Helper function to decode a `(key, value)` pair.
pub(crate) fn decoder<'a, T>(
(k, v): (Cow<'a, [u8]>, Cow<'a, [u8]>),
) -> Result<TableRow<T>, DatabaseError>
where
T: Table,
T::Key: Decode,
T::Value: Decompress,
{
Ok((
match k {
Cow::Borrowed(k) => Decode::decode(k)?,
Cow::Owned(k) => Decode::decode_owned(k)?,
},
match v {
Cow::Borrowed(v) => Decompress::decompress(v)?,
Cow::Owned(v) => Decompress::decompress_owned(v)?,
},
))
}
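The borrowed/owned split in `decoder` lets zero-copy decoding run on `Cow::Borrowed` data while `Cow::Owned` buffers are consumed without an extra clone. A minimal illustration of the dispatch with a toy UTF-8 "decoder" standing in for `Decode`/`Decompress`:

```rust
use std::borrow::Cow;

/// Dispatches to a borrowing or an owning decode path depending on the Cow
/// variant, mirroring the decoder/decode_value/decode_one helpers.
fn decode_utf8(data: Cow<'_, [u8]>) -> Result<String, std::str::Utf8Error> {
    match data {
        // Borrowed path: validate in place, copy only the validated bytes.
        Cow::Borrowed(b) => Ok(std::str::from_utf8(b)?.to_owned()),
        // Owned path: take ownership of the buffer; no second allocation.
        Cow::Owned(o) => String::from_utf8(o).map_err(|e| e.utf8_error()),
    }
}
```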
/// Helper function to decode only a value from a `(key, value)` pair.
pub(crate) fn decode_value<'a, T>(
kv: (Cow<'a, [u8]>, Cow<'a, [u8]>),
) -> Result<T::Value, DatabaseError>
where
T: Table,
{
Ok(match kv.1 {
Cow::Borrowed(v) => Decompress::decompress(v)?,
Cow::Owned(v) => Decompress::decompress_owned(v)?,
})
}
/// Helper function to decode a value. It can be a key or subkey.
pub(crate) fn decode_one<T>(value: Cow<'_, [u8]>) -> Result<T::Value, DatabaseError>
where
T: Table,
{
Ok(match value {
Cow::Borrowed(v) => Decompress::decompress(v)?,
Cow::Owned(v) => Decompress::decompress_owned(v)?,
})
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/implementation/mdbx/mod.rs | crates/storage/db/src/implementation/mdbx/mod.rs | //! Module that interacts with MDBX.
use crate::{
lockfile::StorageLock,
metrics::DatabaseEnvMetrics,
tables::{self, Tables},
utils::default_page_size,
DatabaseError, TableSet,
};
use eyre::Context;
use metrics::{gauge, Label};
use reth_db_api::{
cursor::{DbCursorRO, DbCursorRW},
database::Database,
database_metrics::DatabaseMetrics,
models::ClientVersion,
transaction::{DbTx, DbTxMut},
};
use reth_libmdbx::{
ffi, DatabaseFlags, Environment, EnvironmentFlags, Geometry, HandleSlowReadersReturnCode,
MaxReadTransactionDuration, Mode, PageSize, SyncMode, RO, RW,
};
use reth_storage_errors::db::LogLevel;
use reth_tracing::tracing::error;
use std::{
ops::{Deref, Range},
path::Path,
sync::Arc,
time::{SystemTime, UNIX_EPOCH},
};
use tx::Tx;
pub mod cursor;
pub mod tx;
mod utils;
/// 1 KB in bytes
pub const KILOBYTE: usize = 1024;
/// 1 MB in bytes
pub const MEGABYTE: usize = KILOBYTE * 1024;
/// 1 GB in bytes
pub const GIGABYTE: usize = MEGABYTE * 1024;
/// 1 TB in bytes
pub const TERABYTE: usize = GIGABYTE * 1024;
/// MDBX allows up to 32767 readers (`MDBX_READERS_LIMIT`), but we limit it to slightly below that
const DEFAULT_MAX_READERS: u64 = 32_000;
/// Space that a read-only transaction can occupy until the warning is emitted.
/// See [`reth_libmdbx::EnvironmentBuilder::set_handle_slow_readers`] for more information.
const MAX_SAFE_READER_SPACE: usize = 10 * GIGABYTE;
/// Environment used when opening a MDBX environment. RO/RW.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum DatabaseEnvKind {
/// Read-only MDBX environment.
RO,
/// Read-write MDBX environment.
RW,
}
impl DatabaseEnvKind {
/// Returns `true` if the environment is read-write.
pub const fn is_rw(&self) -> bool {
matches!(self, Self::RW)
}
}
/// Arguments for database initialization.
#[derive(Clone, Debug)]
pub struct DatabaseArguments {
/// Client version that accesses the database.
client_version: ClientVersion,
/// Database geometry settings.
geometry: Geometry<Range<usize>>,
/// Database log level. If [None], the default value is used.
log_level: Option<LogLevel>,
/// Maximum duration of a read transaction. If [None], the default value is used.
max_read_transaction_duration: Option<MaxReadTransactionDuration>,
/// Open environment in exclusive/monopolistic mode. If [None], the default value is used.
///
/// This can be used as a replacement for `MDB_NOLOCK`, which is not supported by MDBX. In this
/// way, you can get the minimal overhead, but with the correct multi-process and multi-thread
/// locking.
///
/// If `true` = open environment in exclusive/monopolistic mode or return `MDBX_BUSY` if
/// environment already used by other process. The main feature of the exclusive mode is the
/// ability to open the environment placed on a network share.
///
/// If `false` = open environment in cooperative mode, i.e. for multi-process
/// access/interaction/cooperation. The main requirements of the cooperative mode are:
/// - Data files MUST be placed in the LOCAL file system, but NOT on a network share.
/// - Environment MUST be opened only by LOCAL processes, but NOT over a network.
/// - OS kernel (i.e. file system and memory mapping implementation) and all processes that
/// open the given environment MUST be running in the physically single RAM with
/// cache-coherency. The only exception to the cache-consistency requirement is Linux on the
/// MIPS architecture, but this case has not been tested for a long time.
///
/// This flag affects only at environment opening but can't be changed after.
exclusive: Option<bool>,
/// MDBX allows up to 32767 readers (`MDBX_READERS_LIMIT`). This arg is to configure the max
/// readers.
max_readers: Option<u64>,
}
impl Default for DatabaseArguments {
fn default() -> Self {
Self::new(ClientVersion::default())
}
}
impl DatabaseArguments {
/// Create new database arguments with given client version.
pub fn new(client_version: ClientVersion) -> Self {
Self {
client_version,
geometry: Geometry {
size: Some(0..(4 * TERABYTE)),
growth_step: Some(4 * GIGABYTE as isize),
shrink_threshold: Some(0),
page_size: Some(PageSize::Set(default_page_size())),
},
log_level: None,
max_read_transaction_duration: None,
exclusive: None,
max_readers: None,
}
}
/// Sets the upper size limit of the db environment, the maximum database size in bytes.
pub const fn with_geometry_max_size(mut self, max_size: Option<usize>) -> Self {
if let Some(max_size) = max_size {
self.geometry.size = Some(0..max_size);
}
self
}
/// Configures the database growth step in bytes.
pub const fn with_growth_step(mut self, growth_step: Option<usize>) -> Self {
if let Some(growth_step) = growth_step {
self.geometry.growth_step = Some(growth_step as isize);
}
self
}
/// Set the log level.
pub const fn with_log_level(mut self, log_level: Option<LogLevel>) -> Self {
self.log_level = log_level;
self
}
/// Set the maximum duration of a read transaction.
pub const fn max_read_transaction_duration(
&mut self,
max_read_transaction_duration: Option<MaxReadTransactionDuration>,
) {
self.max_read_transaction_duration = max_read_transaction_duration;
}
/// Set the maximum duration of a read transaction.
pub const fn with_max_read_transaction_duration(
mut self,
max_read_transaction_duration: Option<MaxReadTransactionDuration>,
) -> Self {
self.max_read_transaction_duration(max_read_transaction_duration);
self
}
/// Set the mdbx exclusive flag.
pub const fn with_exclusive(mut self, exclusive: Option<bool>) -> Self {
self.exclusive = exclusive;
self
}
/// Set `max_readers` flag.
pub const fn with_max_readers(mut self, max_readers: Option<u64>) -> Self {
self.max_readers = max_readers;
self
}
/// Returns the client version if any.
pub const fn client_version(&self) -> &ClientVersion {
&self.client_version
}
}
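`DatabaseArguments` uses consuming `with_*` builder methods, several of them `const fn`, so a fully-configured value can be produced in a constant context. A tiny standalone version of the pattern (the `Args` struct and its fields are illustrative):

```rust
/// Minimal builder-style config in the spirit of `DatabaseArguments`.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Args {
    max_readers: Option<u64>,
    exclusive: Option<bool>,
}

impl Args {
    const fn new() -> Self {
        Self { max_readers: None, exclusive: None }
    }

    /// Consuming setter; `const` so the chain can run at compile time.
    const fn with_max_readers(mut self, max_readers: Option<u64>) -> Self {
        self.max_readers = max_readers;
        self
    }

    const fn with_exclusive(mut self, exclusive: Option<bool>) -> Self {
        self.exclusive = exclusive;
        self
    }
}

/// Because every setter is a `const fn`, this whole chain is a constant.
const DEFAULT_ARGS: Args = Args::new().with_max_readers(Some(32_000)).with_exclusive(Some(false));
```

Taking `mut self` by value (rather than `&mut self`) is what makes the chaining ergonomic and `const`-compatible.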
/// Wrapper for the libmdbx environment: [Environment]
#[derive(Debug)]
pub struct DatabaseEnv {
/// Libmdbx-sys environment.
inner: Environment,
/// Cache for metric handles. If `None`, metrics are not recorded.
metrics: Option<Arc<DatabaseEnvMetrics>>,
/// Write lock for when dealing with a read-write environment.
_lock_file: Option<StorageLock>,
}
impl Database for DatabaseEnv {
type TX = tx::Tx<RO>;
type TXMut = tx::Tx<RW>;
fn tx(&self) -> Result<Self::TX, DatabaseError> {
Tx::new_with_metrics(
self.inner.begin_ro_txn().map_err(|e| DatabaseError::InitTx(e.into()))?,
self.metrics.clone(),
)
.map_err(|e| DatabaseError::InitTx(e.into()))
}
fn tx_mut(&self) -> Result<Self::TXMut, DatabaseError> {
Tx::new_with_metrics(
self.inner.begin_rw_txn().map_err(|e| DatabaseError::InitTx(e.into()))?,
self.metrics.clone(),
)
.map_err(|e| DatabaseError::InitTx(e.into()))
}
}
impl DatabaseMetrics for DatabaseEnv {
fn report_metrics(&self) {
for (name, value, labels) in self.gauge_metrics() {
gauge!(name, labels).set(value);
}
}
fn gauge_metrics(&self) -> Vec<(&'static str, f64, Vec<Label>)> {
let mut metrics = Vec::new();
let _ = self
.view(|tx| {
for table in Tables::ALL.iter().map(Tables::name) {
let table_db = tx.inner.open_db(Some(table)).wrap_err("Could not open db.")?;
let stats = tx
.inner
.db_stat(&table_db)
.wrap_err(format!("Could not find table: {table}"))?;
let page_size = stats.page_size() as usize;
let leaf_pages = stats.leaf_pages();
let branch_pages = stats.branch_pages();
let overflow_pages = stats.overflow_pages();
let num_pages = leaf_pages + branch_pages + overflow_pages;
let table_size = page_size * num_pages;
let entries = stats.entries();
metrics.push((
"db.table_size",
table_size as f64,
vec![Label::new("table", table)],
));
metrics.push((
"db.table_pages",
leaf_pages as f64,
vec![Label::new("table", table), Label::new("type", "leaf")],
));
metrics.push((
"db.table_pages",
branch_pages as f64,
vec![Label::new("table", table), Label::new("type", "branch")],
));
metrics.push((
"db.table_pages",
overflow_pages as f64,
vec![Label::new("table", table), Label::new("type", "overflow")],
));
metrics.push((
"db.table_entries",
entries as f64,
vec![Label::new("table", table)],
));
}
Ok::<(), eyre::Report>(())
})
.map_err(|error| error!(%error, "Failed to read db table stats"));
if let Ok(freelist) =
self.freelist().map_err(|error| error!(%error, "Failed to read db.freelist"))
{
metrics.push(("db.freelist", freelist as f64, vec![]));
}
if let Ok(stat) = self.stat().map_err(|error| error!(%error, "Failed to read db.stat")) {
metrics.push(("db.page_size", stat.page_size() as f64, vec![]));
}
metrics.push((
"db.timed_out_not_aborted_transactions",
self.timed_out_not_aborted_transactions() as f64,
vec![],
));
metrics
}
}
impl DatabaseEnv {
/// Opens the database at the specified path with the given `EnvKind`.
///
/// It does not create the tables; for that, call [`DatabaseEnv::create_tables`].
pub fn open(
path: &Path,
kind: DatabaseEnvKind,
args: DatabaseArguments,
) -> Result<Self, DatabaseError> {
let _lock_file = if kind.is_rw() {
StorageLock::try_acquire(path)
.map_err(|err| DatabaseError::Other(err.to_string()))?
.into()
} else {
None
};
let mut inner_env = Environment::builder();
let mode = match kind {
DatabaseEnvKind::RO => Mode::ReadOnly,
DatabaseEnvKind::RW => {
// enable writemap mode in RW mode
inner_env.write_map();
Mode::ReadWrite { sync_mode: SyncMode::Durable }
}
};
// Note: We set max dbs to 256 here to allow for custom tables. This needs to be set on
// environment creation.
debug_assert!(Tables::ALL.len() <= 256, "number of tables exceeds max dbs");
inner_env.set_max_dbs(256);
inner_env.set_geometry(args.geometry);
fn is_current_process(id: u32) -> bool {
#[cfg(unix)]
{
id == std::os::unix::process::parent_id() || id == std::process::id()
}
#[cfg(not(unix))]
{
id == std::process::id()
}
}
extern "C" fn handle_slow_readers(
_env: *const ffi::MDBX_env,
_txn: *const ffi::MDBX_txn,
process_id: ffi::mdbx_pid_t,
thread_id: ffi::mdbx_tid_t,
read_txn_id: u64,
gap: std::ffi::c_uint,
space: usize,
retry: std::ffi::c_int,
) -> HandleSlowReadersReturnCode {
if space > MAX_SAFE_READER_SPACE {
let message = if is_current_process(process_id as u32) {
"Current process has a long-lived database transaction that grows the database file."
} else {
"External process has a long-lived database transaction that grows the database file. \
Use shorter-lived read transactions or shut down the node."
};
reth_tracing::tracing::warn!(
target: "storage::db::mdbx",
?process_id,
?thread_id,
?read_txn_id,
?gap,
?space,
?retry,
"{message}"
)
}
reth_libmdbx::HandleSlowReadersReturnCode::ProceedWithoutKillingReader
}
inner_env.set_handle_slow_readers(handle_slow_readers);
inner_env.set_flags(EnvironmentFlags {
mode,
// We disable readahead because it improves performance for linear scans, but
// worsens it for random access (which is our access pattern outside of sync)
no_rdahead: true,
coalesce: true,
exclusive: args.exclusive.unwrap_or_default(),
..Default::default()
});
// Configure more readers
inner_env.set_max_readers(args.max_readers.unwrap_or(DEFAULT_MAX_READERS));
// This parameter sets the maximum size of the "reclaimed list", and the unit of measurement
// is "pages". Reclaimed list is the list of freed pages that's populated during the
// lifetime of DB transaction, and through which MDBX searches when it needs to insert new
// record with overflow pages. The flow is roughly the following:
// 0. We need to insert a record that requires N number of overflow pages (in consecutive
// sequence inside the DB file).
// 1. Get some pages from the freelist, put them into the reclaimed list.
// 2. Search through the reclaimed list for the sequence of size N.
// 3. a. If found, return the sequence.
// 3. b. If not found, repeat steps 1-3. If the reclaimed list size is larger than
// the `rp augment limit`, stop the search and allocate new pages at the end of the file:
// https://github.com/paradigmxyz/reth/blob/2a4c78759178f66e30c8976ec5d243b53102fc9a/crates/storage/libmdbx-rs/mdbx-sys/libmdbx/mdbx.c#L11479-L11480.
//
// Basically, this parameter controls how long we search through the freelist before
// trying to allocate new pages. A smaller value makes MDBX fall back to
// allocation faster, while a higher value forces MDBX to search through the freelist
// longer until the sequence of pages is found.
//
// The default value of this parameter is set depending on the DB size. The bigger the
// database, the larger the `rp augment limit`.
// https://github.com/paradigmxyz/reth/blob/2a4c78759178f66e30c8976ec5d243b53102fc9a/crates/storage/libmdbx-rs/mdbx-sys/libmdbx/mdbx.c#L10018-L10024.
//
// Previously, MDBX set this value as a `256 * 1024` constant. Let's fall back to this,
// because we want to prioritize freelist lookup speed over database growth.
// https://github.com/paradigmxyz/reth/blob/fa2b9b685ed9787636d962f4366caf34a9186e66/crates/storage/libmdbx-rs/mdbx-sys/libmdbx/mdbx.c#L16017.
inner_env.set_rp_augment_limit(256 * 1024);
if let Some(log_level) = args.log_level {
// Levels higher than [LogLevel::Notice] require libmdbx to be built with the `MDBX_DEBUG` option.
let is_log_level_available = if cfg!(debug_assertions) {
true
} else {
matches!(
log_level,
LogLevel::Fatal | LogLevel::Error | LogLevel::Warn | LogLevel::Notice
)
};
if is_log_level_available {
inner_env.set_log_level(match log_level {
LogLevel::Fatal => 0,
LogLevel::Error => 1,
LogLevel::Warn => 2,
LogLevel::Notice => 3,
LogLevel::Verbose => 4,
LogLevel::Debug => 5,
LogLevel::Trace => 6,
LogLevel::Extra => 7,
});
} else {
return Err(DatabaseError::LogLevelUnavailable(log_level))
}
}
if let Some(max_read_transaction_duration) = args.max_read_transaction_duration {
inner_env.set_max_read_transaction_duration(max_read_transaction_duration);
}
let env = Self {
inner: inner_env.open(path).map_err(|e| DatabaseError::Open(e.into()))?,
metrics: None,
_lock_file,
};
Ok(env)
}
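The `rp_augment_limit` comment inside `open` describes searching a reclaimed list for `N` consecutive pages, bounded by a cap before giving up and growing the file. A simplified, self-contained model of that search (page IDs and the cap are illustrative; the real logic lives in `mdbx.c`):

```rust
/// Searches a sorted list of free page IDs for `n` consecutive pages,
/// inspecting at most `limit` entries before giving up (the rp-augment cap).
/// Returns the first page of the run, or `None` to signal "allocate at EOF".
fn find_consecutive_run(free_pages: &[u64], n: usize, limit: usize) -> Option<u64> {
    let scanned = &free_pages[..free_pages.len().min(limit)];
    if n == 0 || scanned.is_empty() {
        return None;
    }
    if n == 1 {
        // Any single free page is a run of length 1.
        return Some(scanned[0]);
    }
    let mut start = 0; // index where the current consecutive run begins
    for i in 1..scanned.len() {
        if scanned[i] != scanned[i - 1] + 1 {
            start = i; // a gap breaks the run
        }
        if i - start + 1 >= n {
            return Some(scanned[start]);
        }
    }
    None // no run found within the cap: allocate new pages at end of file
}
```

The cap is the trade-off the comment describes: a larger `limit` searches harder before growing the database file.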
/// Enables metrics on the database.
pub fn with_metrics(mut self) -> Self {
self.metrics = Some(DatabaseEnvMetrics::new().into());
self
}
/// Creates all the tables defined in [`Tables`], if necessary.
pub fn create_tables(&self) -> Result<(), DatabaseError> {
self.create_tables_for::<Tables>()
}
/// Creates all the tables defined in the given [`TableSet`], if necessary.
pub fn create_tables_for<TS: TableSet>(&self) -> Result<(), DatabaseError> {
let tx = self.inner.begin_rw_txn().map_err(|e| DatabaseError::InitTx(e.into()))?;
for table in TS::tables() {
let flags =
if table.is_dupsort() { DatabaseFlags::DUP_SORT } else { DatabaseFlags::default() };
tx.create_db(Some(table.name()), flags)
.map_err(|e| DatabaseError::CreateTable(e.into()))?;
}
tx.commit().map_err(|e| DatabaseError::Commit(e.into()))?;
Ok(())
}
/// Records version that accesses the database with write privileges.
pub fn record_client_version(&self, version: ClientVersion) -> Result<(), DatabaseError> {
if version.is_empty() {
return Ok(())
}
let tx = self.tx_mut()?;
let mut version_cursor = tx.cursor_write::<tables::VersionHistory>()?;
let last_version = version_cursor.last()?.map(|(_, v)| v);
if Some(&version) != last_version.as_ref() {
version_cursor.upsert(
SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_secs(),
&version,
)?;
tx.commit()?;
}
Ok(())
}
}
impl Deref for DatabaseEnv {
type Target = Environment;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
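The `Deref` impl above is what lets `DatabaseEnv` call the inner `Environment`'s read-only API directly (e.g. `self.freelist()` and `self.stat()` in `gauge_metrics`). The pattern in isolation, with a `String` standing in for `Environment`:

```rust
use std::ops::Deref;

/// Newtype wrapper that forwards read access to its inner value,
/// like `DatabaseEnv` deref-ing to `Environment`.
struct Env {
    inner: String,
}

impl Deref for Env {
    type Target = String;

    fn deref(&self) -> &Self::Target {
        &self.inner
    }
}
```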
#[cfg(test)]
mod tests {
use super::*;
use crate::{
tables::{
AccountsHistory, CanonicalHeaders, Headers, PlainAccountState, PlainStorageState,
},
test_utils::*,
AccountChangeSets,
};
use alloy_consensus::Header;
use alloy_primitives::{address, Address, B256, U256};
use reth_db_api::{
cursor::{DbDupCursorRO, DbDupCursorRW, ReverseWalker, Walker},
models::{AccountBeforeTx, IntegerList, ShardedKey},
table::{Encode, Table},
};
use reth_libmdbx::Error;
use reth_primitives_traits::{Account, StorageEntry};
use reth_storage_errors::db::{DatabaseWriteError, DatabaseWriteOperation};
use std::str::FromStr;
use tempfile::TempDir;
/// Create database for testing
fn create_test_db(kind: DatabaseEnvKind) -> Arc<DatabaseEnv> {
Arc::new(create_test_db_with_path(
kind,
&tempfile::TempDir::new().expect(ERROR_TEMPDIR).keep(),
))
}
/// Create database for testing with specified path
fn create_test_db_with_path(kind: DatabaseEnvKind, path: &Path) -> DatabaseEnv {
let env = DatabaseEnv::open(path, kind, DatabaseArguments::new(ClientVersion::default()))
.expect(ERROR_DB_CREATION);
env.create_tables().expect(ERROR_TABLE_CREATION);
env
}
const ERROR_DB_CREATION: &str = "Not able to create the mdbx file.";
const ERROR_PUT: &str = "Not able to insert value into table.";
const ERROR_APPEND: &str = "Not able to append the value to the table.";
const ERROR_UPSERT: &str = "Not able to upsert the value to the table.";
const ERROR_GET: &str = "Not able to get value from table.";
const ERROR_DEL: &str = "Not able to delete from table.";
const ERROR_COMMIT: &str = "Not able to commit transaction.";
const ERROR_RETURN_VALUE: &str = "Mismatching result.";
const ERROR_INIT_TX: &str = "Failed to create a MDBX transaction.";
const ERROR_ETH_ADDRESS: &str = "Invalid address.";
#[test]
fn db_creation() {
create_test_db(DatabaseEnvKind::RW);
}
#[test]
fn db_manual_put_get() {
let env = create_test_db(DatabaseEnvKind::RW);
let value = Header::default();
let key = 1u64;
// PUT
let tx = env.tx_mut().expect(ERROR_INIT_TX);
tx.put::<Headers>(key, value.clone()).expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
// GET
let tx = env.tx().expect(ERROR_INIT_TX);
let result = tx.get::<Headers>(key).expect(ERROR_GET);
assert_eq!(result.expect(ERROR_RETURN_VALUE), value);
tx.commit().expect(ERROR_COMMIT);
}
#[test]
fn db_dup_cursor_delete_first() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
let tx = db.tx_mut().expect(ERROR_INIT_TX);
let mut dup_cursor = tx.cursor_dup_write::<PlainStorageState>().unwrap();
let entry_0 = StorageEntry { key: B256::with_last_byte(1), value: U256::from(0).into() };
let entry_1 = StorageEntry { key: B256::with_last_byte(1), value: U256::from(1).into() };
dup_cursor.upsert(Address::with_last_byte(1), &entry_0).expect(ERROR_UPSERT);
dup_cursor.upsert(Address::with_last_byte(1), &entry_1).expect(ERROR_UPSERT);
assert_eq!(
dup_cursor.walk(None).unwrap().collect::<Result<Vec<_>, _>>(),
Ok(vec![(Address::with_last_byte(1), entry_0), (Address::with_last_byte(1), entry_1),])
);
let mut walker = dup_cursor.walk(None).unwrap();
walker.delete_current().expect(ERROR_DEL);
assert_eq!(walker.next(), Some(Ok((Address::with_last_byte(1), entry_1))));
// Check the tx view - it correctly holds entry_1
assert_eq!(
tx.cursor_dup_read::<PlainStorageState>()
.unwrap()
.walk(None)
.unwrap()
.collect::<Result<Vec<_>, _>>(),
Ok(vec![
(Address::with_last_byte(1), entry_1), // This is ok - we removed entry_0
])
);
// Check the remainder of walker
assert_eq!(walker.next(), None);
}
#[test]
fn db_cursor_walk() {
let env = create_test_db(DatabaseEnvKind::RW);
let value = Header::default();
let key = 1u64;
// PUT
let tx = env.tx_mut().expect(ERROR_INIT_TX);
tx.put::<Headers>(key, value.clone()).expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
// Cursor
let tx = env.tx().expect(ERROR_INIT_TX);
let mut cursor = tx.cursor_read::<Headers>().unwrap();
let first = cursor.first().unwrap();
assert!(first.is_some(), "First should be our put");
// Walk
let walk = cursor.walk(Some(key)).unwrap();
let first = walk.into_iter().next().unwrap().unwrap();
assert_eq!(first.1, value, "First next should be put value");
}
#[test]
fn db_cursor_walk_range() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
// PUT (0, 0), (1, 0), (2, 0), (3, 0)
let tx = db.tx_mut().expect(ERROR_INIT_TX);
vec![0, 1, 2, 3]
.into_iter()
.try_for_each(|key| tx.put::<CanonicalHeaders>(key, B256::ZERO))
.expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
let tx = db.tx().expect(ERROR_INIT_TX);
let mut cursor = tx.cursor_read::<CanonicalHeaders>().unwrap();
// [1, 3)
let mut walker = cursor.walk_range(1..3).unwrap();
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((2, B256::ZERO))));
assert_eq!(walker.next(), None);
// next() returns None after walker is done
assert_eq!(walker.next(), None);
// [1, 2]
let mut walker = cursor.walk_range(1..=2).unwrap();
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((2, B256::ZERO))));
// next() returns None after walker is done
assert_eq!(walker.next(), None);
// [1, ∞)
let mut walker = cursor.walk_range(1..).unwrap();
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((2, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((3, B256::ZERO))));
// next() returns None after walker is done
assert_eq!(walker.next(), None);
// [2, 4)
let mut walker = cursor.walk_range(2..4).unwrap();
assert_eq!(walker.next(), Some(Ok((2, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((3, B256::ZERO))));
assert_eq!(walker.next(), None);
// next() returns None after walker is done
assert_eq!(walker.next(), None);
// (∞, 3)
let mut walker = cursor.walk_range(..3).unwrap();
assert_eq!(walker.next(), Some(Ok((0, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((2, B256::ZERO))));
// next() returns None after walker is done
assert_eq!(walker.next(), None);
// (∞, ∞)
let mut walker = cursor.walk_range(..).unwrap();
assert_eq!(walker.next(), Some(Ok((0, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((2, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((3, B256::ZERO))));
// next() returns None after walker is done
assert_eq!(walker.next(), None);
}
#[test]
fn db_cursor_walk_range_on_dup_table() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
let address0 = Address::ZERO;
let address1 = Address::with_last_byte(1);
let address2 = Address::with_last_byte(2);
let tx = db.tx_mut().expect(ERROR_INIT_TX);
tx.put::<AccountChangeSets>(0, AccountBeforeTx { address: address0, info: None })
.expect(ERROR_PUT);
tx.put::<AccountChangeSets>(0, AccountBeforeTx { address: address1, info: None })
.expect(ERROR_PUT);
tx.put::<AccountChangeSets>(0, AccountBeforeTx { address: address2, info: None })
.expect(ERROR_PUT);
tx.put::<AccountChangeSets>(1, AccountBeforeTx { address: address0, info: None })
.expect(ERROR_PUT);
tx.put::<AccountChangeSets>(1, AccountBeforeTx { address: address1, info: None })
.expect(ERROR_PUT);
tx.put::<AccountChangeSets>(1, AccountBeforeTx { address: address2, info: None })
.expect(ERROR_PUT);
tx.put::<AccountChangeSets>(2, AccountBeforeTx { address: address0, info: None }) // <- should not be returned by the walker
.expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
let tx = db.tx().expect(ERROR_INIT_TX);
let mut cursor = tx.cursor_read::<AccountChangeSets>().unwrap();
let entries = cursor.walk_range(..).unwrap().collect::<Result<Vec<_>, _>>().unwrap();
assert_eq!(entries.len(), 7);
let mut walker = cursor.walk_range(0..=1).unwrap();
assert_eq!(walker.next(), Some(Ok((0, AccountBeforeTx { address: address0, info: None }))));
assert_eq!(walker.next(), Some(Ok((0, AccountBeforeTx { address: address1, info: None }))));
assert_eq!(walker.next(), Some(Ok((0, AccountBeforeTx { address: address2, info: None }))));
assert_eq!(walker.next(), Some(Ok((1, AccountBeforeTx { address: address0, info: None }))));
assert_eq!(walker.next(), Some(Ok((1, AccountBeforeTx { address: address1, info: None }))));
assert_eq!(walker.next(), Some(Ok((1, AccountBeforeTx { address: address2, info: None }))));
assert_eq!(walker.next(), None);
}
#[expect(clippy::reversed_empty_ranges)]
#[test]
fn db_cursor_walk_range_invalid() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
// PUT (0, 0), (1, 0), (2, 0), (3, 0)
let tx = db.tx_mut().expect(ERROR_INIT_TX);
vec![0, 1, 2, 3]
.into_iter()
.try_for_each(|key| tx.put::<CanonicalHeaders>(key, B256::ZERO))
.expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
let tx = db.tx().expect(ERROR_INIT_TX);
let mut cursor = tx.cursor_read::<CanonicalHeaders>().unwrap();
// start bound greater than end bound
let mut res = cursor.walk_range(3..1).unwrap();
assert_eq!(res.next(), None);
// start bound greater than end bound
let mut res = cursor.walk_range(15..=2).unwrap();
assert_eq!(res.next(), None);
// returning nothing
let mut walker = cursor.walk_range(1..1).unwrap();
assert_eq!(walker.next(), None);
}
#[test]
fn db_walker() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
// PUT (0, 0), (1, 0), (3, 0)
let tx = db.tx_mut().expect(ERROR_INIT_TX);
vec![0, 1, 3]
.into_iter()
.try_for_each(|key| tx.put::<CanonicalHeaders>(key, B256::ZERO))
.expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
let tx = db.tx().expect(ERROR_INIT_TX);
let mut cursor = tx.cursor_read::<CanonicalHeaders>().unwrap();
let mut walker = Walker::new(&mut cursor, None);
assert_eq!(walker.next(), Some(Ok((0, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((3, B256::ZERO))));
assert_eq!(walker.next(), None);
// transform to ReverseWalker
let mut reverse_walker = walker.rev();
assert_eq!(reverse_walker.next(), Some(Ok((3, B256::ZERO))));
assert_eq!(reverse_walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(reverse_walker.next(), Some(Ok((0, B256::ZERO))));
assert_eq!(reverse_walker.next(), None);
}
#[test]
fn db_reverse_walker() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
// PUT (0, 0), (1, 0), (3, 0)
let tx = db.tx_mut().expect(ERROR_INIT_TX);
vec![0, 1, 3]
.into_iter()
.try_for_each(|key| tx.put::<CanonicalHeaders>(key, B256::ZERO))
.expect(ERROR_PUT);
tx.commit().expect(ERROR_COMMIT);
let tx = db.tx().expect(ERROR_INIT_TX);
let mut cursor = tx.cursor_read::<CanonicalHeaders>().unwrap();
let mut reverse_walker = ReverseWalker::new(&mut cursor, None);
assert_eq!(reverse_walker.next(), Some(Ok((3, B256::ZERO))));
assert_eq!(reverse_walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(reverse_walker.next(), Some(Ok((0, B256::ZERO))));
assert_eq!(reverse_walker.next(), None);
// transform to Walker
let mut walker = reverse_walker.forward();
assert_eq!(walker.next(), Some(Ok((0, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((1, B256::ZERO))));
assert_eq!(walker.next(), Some(Ok((3, B256::ZERO))));
assert_eq!(walker.next(), None);
}
#[test]
fn db_walk_back() {
let db: Arc<DatabaseEnv> = create_test_db(DatabaseEnvKind::RW);
// PUT (0, 0), (1, 0), (3, 0)
let tx = db.tx_mut().expect(ERROR_INIT_TX);
vec![0, 1, 3]
.into_iter()
.try_for_each(|key| tx.put::<CanonicalHeaders>(key, B256::ZERO))
.expect(ERROR_PUT);
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/static_file/cursor.rs | crates/storage/db/src/static_file/cursor.rs | use super::mask::{ColumnSelectorOne, ColumnSelectorThree, ColumnSelectorTwo};
use alloy_primitives::B256;
use derive_more::{Deref, DerefMut};
use reth_db_api::table::Decompress;
use reth_nippy_jar::{DataReader, NippyJar, NippyJarCursor};
use reth_static_file_types::SegmentHeader;
use reth_storage_errors::provider::{ProviderError, ProviderResult};
use std::sync::Arc;
/// Cursor of a static file segment.
#[derive(Debug, Deref, DerefMut)]
pub struct StaticFileCursor<'a>(NippyJarCursor<'a, SegmentHeader>);
/// Type alias for column results with optional values.
type ColumnResult<T> = ProviderResult<Option<T>>;
impl<'a> StaticFileCursor<'a> {
/// Returns a new [`StaticFileCursor`].
pub fn new(jar: &'a NippyJar<SegmentHeader>, reader: Arc<DataReader>) -> ProviderResult<Self> {
Ok(Self(NippyJarCursor::with_reader(jar, reader).map_err(ProviderError::other)?))
}
/// Returns the current `BlockNumber` or `TxNumber` of the cursor depending on the kind of
/// static file segment.
pub fn number(&self) -> Option<u64> {
self.jar().user_header().start().map(|start| self.row_index() + start)
}
/// Gets a row of values.
pub fn get(
&mut self,
key_or_num: KeyOrNumber<'_>,
mask: usize,
) -> ProviderResult<Option<Vec<&'_ [u8]>>> {
if self.jar().rows() == 0 {
return Ok(None)
}
let row = match key_or_num {
KeyOrNumber::Key(_) => unimplemented!(),
KeyOrNumber::Number(n) => match self.jar().user_header().start() {
Some(offset) => {
if offset > n {
return Ok(None)
}
self.row_by_number_with_cols((n - offset) as usize, mask)
}
None => Ok(None),
},
}
.map_or(None, |v| v);
Ok(row)
}
/// Gets one column value from a row.
pub fn get_one<M: ColumnSelectorOne>(
&mut self,
key_or_num: KeyOrNumber<'_>,
) -> ColumnResult<M::FIRST> {
let row = self.get(key_or_num, M::MASK)?;
match row {
Some(row) => Ok(Some(M::FIRST::decompress(row[0])?)),
None => Ok(None),
}
}
/// Gets two column values from a row.
pub fn get_two<M: ColumnSelectorTwo>(
&mut self,
key_or_num: KeyOrNumber<'_>,
) -> ColumnResult<(M::FIRST, M::SECOND)> {
let row = self.get(key_or_num, M::MASK)?;
match row {
Some(row) => Ok(Some((M::FIRST::decompress(row[0])?, M::SECOND::decompress(row[1])?))),
None => Ok(None),
}
}
/// Gets three column values from a row.
pub fn get_three<M: ColumnSelectorThree>(
&mut self,
key_or_num: KeyOrNumber<'_>,
) -> ColumnResult<(M::FIRST, M::SECOND, M::THIRD)> {
let row = self.get(key_or_num, M::MASK)?;
match row {
Some(row) => Ok(Some((
M::FIRST::decompress(row[0])?,
M::SECOND::decompress(row[1])?,
M::THIRD::decompress(row[2])?,
))),
None => Ok(None),
}
}
}
/// Either a key _or_ a block/tx number
#[derive(Debug)]
pub enum KeyOrNumber<'a> {
/// A slice used as a key. Usually a block/tx hash
Key(&'a [u8]),
/// A block/tx number
Number(u64),
}
impl<'a> From<&'a B256> for KeyOrNumber<'a> {
fn from(value: &'a B256) -> Self {
KeyOrNumber::Key(value.as_slice())
}
}
impl From<u64> for KeyOrNumber<'_> {
fn from(value: u64) -> Self {
KeyOrNumber::Number(value)
}
}
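The number-based branch of `StaticFileCursor::get` maps a global block/tx number onto a row index by subtracting the segment's start offset, and yields nothing for numbers that precede the segment. A minimal standalone sketch of that translation (the `row_index` helper is invented for illustration, not part of this crate):

```rust
// Hypothetical helper, not this crate's API: mirrors the offset logic used by
// `StaticFileCursor::get` when given `KeyOrNumber::Number`.
fn row_index(number: u64, segment_start: Option<u64>) -> Option<usize> {
    match segment_start {
        // The row index is the distance from the segment's first number.
        Some(start) if start <= number => Some((number - start) as usize),
        // Number precedes the segment, or the segment has no start offset.
        _ => None,
    }
}

fn main() {
    assert_eq!(row_index(1_000_105, Some(1_000_000)), Some(105));
    assert_eq!(row_index(42, Some(1_000_000)), None); // before the segment
    assert_eq!(row_index(42, None), None); // segment without a start
}
```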
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/static_file/masks.rs | crates/storage/db/src/static_file/masks.rs | use crate::{
add_static_file_mask,
static_file::mask::{ColumnSelectorOne, ColumnSelectorTwo},
HeaderTerminalDifficulties,
};
use alloy_primitives::BlockHash;
use reth_db_api::table::Table;
// HEADER MASKS
add_static_file_mask! {
#[doc = "Mask for selecting a single header from Headers static file segment"]
HeaderMask<H>, H, 0b001
}
add_static_file_mask! {
#[doc = "Mask for selecting a total difficulty value from Headers static file segment"]
TotalDifficultyMask, <HeaderTerminalDifficulties as Table>::Value, 0b010
}
add_static_file_mask! {
#[doc = "Mask for selecting a block hash value from Headers static file segment"]
BlockHashMask, BlockHash, 0b100
}
add_static_file_mask! {
#[doc = "Mask for selecting a header along with block hash from Headers static file segment"]
HeaderWithHashMask<H>, H, BlockHash, 0b101
}
add_static_file_mask! {
#[doc = "Mask for selecting a total difficulty along with block hash from Headers static file segment"]
TDWithHashMask,
<HeaderTerminalDifficulties as Table>::Value,
BlockHash,
0b110
}
// RECEIPT MASKS
add_static_file_mask! {
#[doc = "Mask for selecting a single receipt from Receipts static file segment"]
ReceiptMask<R>, R, 0b1
}
// TRANSACTION MASKS
add_static_file_mask! {
#[doc = "Mask for selecting a single transaction from Transactions static file segment"]
TransactionMask<T>, T, 0b1
}
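The Headers masks above follow a one-bit-per-column layout, so the combined selectors are just bitwise ORs of the single-column masks. A small standalone check (the constants are restated here purely for illustration):

```rust
// Bit layout of the Headers static file segment, as declared above.
const HEADER: usize = 0b001; // HeaderMask
const TOTAL_DIFFICULTY: usize = 0b010; // TotalDifficultyMask
const BLOCK_HASH: usize = 0b100; // BlockHashMask

fn main() {
    // Combined masks are the OR of their single-column parts.
    assert_eq!(HEADER | BLOCK_HASH, 0b101); // HeaderWithHashMask
    assert_eq!(TOTAL_DIFFICULTY | BLOCK_HASH, 0b110); // TDWithHashMask
}
```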
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/static_file/mask.rs | crates/storage/db/src/static_file/mask.rs | use reth_db_api::table::Decompress;
/// Trait for specifying a mask to select one column value.
pub trait ColumnSelectorOne {
/// First desired column value
type FIRST: Decompress;
/// Mask to obtain desired values; it should correspond to the order of columns in a `static_file`.
const MASK: usize;
}
/// Trait for specifying a mask to select two column values.
pub trait ColumnSelectorTwo {
/// First desired column value
type FIRST: Decompress;
/// Second desired column value
type SECOND: Decompress;
/// Mask to obtain desired values; it should correspond to the order of columns in a `static_file`.
const MASK: usize;
}
/// Trait for specifying a mask to select three column values.
pub trait ColumnSelectorThree {
/// First desired column value
type FIRST: Decompress;
/// Second desired column value
type SECOND: Decompress;
/// Third desired column value
type THIRD: Decompress;
/// Mask to obtain desired values; it should correspond to the order of columns in a `static_file`.
const MASK: usize;
}
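The `MASK` constants of these selector traits are bitflags over a row's physical columns: bit `i` set means column `i` is fetched. A hypothetical standalone sketch of that selection (the `select_columns` helper is invented, not this crate's API):

```rust
// Hypothetical sketch: bit `i` of `mask` selects the `i`-th physical column
// of a row, mirroring how `MASK` is interpreted by the cursor.
fn select_columns<'a>(row: &[&'a [u8]], mask: usize) -> Vec<&'a [u8]> {
    row.iter()
        .enumerate()
        .filter(|(i, _)| mask & (1 << i) != 0)
        .map(|(_, col)| *col)
        .collect()
}

fn main() {
    // A three-column row: header, total difficulty, block hash.
    let row: [&[u8]; 3] = [b"header", b"td", b"hash"];
    // 0b101 selects the first and third columns.
    assert_eq!(select_columns(&row, 0b101), vec![b"header" as &[u8], b"hash"]);
}
```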
#[macro_export]
/// Add mask to select `N` column values from a specific static file segment row.
macro_rules! add_static_file_mask {
($(#[$attr:meta])* $mask_struct:ident $(<$generic:ident>)?, $type1:ty, $mask:expr) => {
$(#[$attr])*
#[derive(Debug)]
pub struct $mask_struct$(<$generic>)?$((std::marker::PhantomData<$generic>))?;
impl$(<$generic>)? ColumnSelectorOne for $mask_struct$(<$generic>)?
where
$type1: Send + Sync + std::fmt::Debug + reth_db_api::table::Decompress,
{
type FIRST = $type1;
const MASK: usize = $mask;
}
};
($(#[$attr:meta])* $mask_struct:ident $(<$generic:ident>)?, $type1:ty, $type2:ty, $mask:expr) => {
$(#[$attr])*
#[derive(Debug)]
pub struct $mask_struct$(<$generic>)?$((std::marker::PhantomData<$generic>))?;
impl$(<$generic>)? ColumnSelectorTwo for $mask_struct$(<$generic>)?
where
$type1: Send + Sync + std::fmt::Debug + reth_db_api::table::Decompress,
$type2: Send + Sync + std::fmt::Debug + reth_db_api::table::Decompress,
{
type FIRST = $type1;
type SECOND = $type2;
const MASK: usize = $mask;
}
};
($(#[$attr:meta])* $mask_struct:ident $(<$generic:ident>)?, $type1:ty, $type2:ty, $type3:ty, $mask:expr) => {
$(#[$attr])*
#[derive(Debug)]
pub struct $mask_struct$(<$generic>)?$((std::marker::PhantomData<$generic>))?;
impl$(<$generic>)? ColumnSelectorThree for $mask_struct$(<$generic>)?
where
$type1: Send + Sync + std::fmt::Debug + reth_db_api::table::Decompress,
$type2: Send + Sync + std::fmt::Debug + reth_db_api::table::Decompress,
$type3: Send + Sync + std::fmt::Debug + reth_db_api::table::Decompress,
{
type FIRST = $type1;
type SECOND = $type2;
type THIRD = $type3;
const MASK: usize = $mask;
}
};
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/src/static_file/mod.rs | crates/storage/db/src/static_file/mod.rs | //! reth's static file database table import and access
use std::{
collections::{hash_map::Entry, HashMap},
path::Path,
};
mod cursor;
pub use cursor::StaticFileCursor;
mod mask;
pub use mask::*;
use reth_nippy_jar::{NippyJar, NippyJarError};
mod masks;
pub use masks::*;
use reth_static_file_types::{SegmentHeader, SegmentRangeInclusive, StaticFileSegment};
/// Alias type for a map of [`StaticFileSegment`] and sorted lists of existing static file ranges.
type SortedStaticFiles =
HashMap<StaticFileSegment, Vec<(SegmentRangeInclusive, Option<SegmentRangeInclusive>)>>;
/// Given the `static_files` directory path, returns a map of the existing static files
/// organized by [`StaticFileSegment`]. Each segment maps to a sorted list of block ranges and
/// transaction ranges, as presented in the file configuration.
pub fn iter_static_files(path: &Path) -> Result<SortedStaticFiles, NippyJarError> {
if !path.exists() {
reth_fs_util::create_dir_all(path).map_err(|err| NippyJarError::Custom(err.to_string()))?;
}
let mut static_files = SortedStaticFiles::default();
let entries = reth_fs_util::read_dir(path)
.map_err(|err| NippyJarError::Custom(err.to_string()))?
.filter_map(Result::ok);
for entry in entries {
if entry.metadata().is_ok_and(|metadata| metadata.is_file()) {
if let Some((segment, _)) =
StaticFileSegment::parse_filename(&entry.file_name().to_string_lossy())
{
let jar = NippyJar::<SegmentHeader>::load(&entry.path())?;
let (block_range, tx_range) = (
jar.user_header().block_range().copied(),
jar.user_header().tx_range().copied(),
);
if let Some(block_range) = block_range {
match static_files.entry(segment) {
Entry::Occupied(mut entry) => {
entry.get_mut().push((block_range, tx_range));
}
Entry::Vacant(entry) => {
entry.insert(vec![(block_range, tx_range)]);
}
}
}
}
}
}
for range_list in static_files.values_mut() {
// Sort by block end range.
range_list.sort_by_key(|(r, _)| r.end());
}
Ok(static_files)
}
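The grouping and sorting performed by `iter_static_files` can be sketched standalone with toy types (segment names and ranges below are invented; real code uses `SegmentRangeInclusive`):

```rust
use std::collections::HashMap;

// Toy stand-in mirroring only the grouping + sort-by-end-block step of
// `iter_static_files`.
fn group_and_sort(
    files: &[(&'static str, (u64, u64))],
) -> HashMap<&'static str, Vec<(u64, u64)>> {
    let mut static_files: HashMap<&'static str, Vec<(u64, u64)>> = HashMap::new();
    // Files are discovered in arbitrary directory order...
    for &(segment, block_range) in files {
        static_files.entry(segment).or_default().push(block_range);
    }
    // ...so each segment's range list is sorted by its end block.
    for range_list in static_files.values_mut() {
        range_list.sort_by_key(|(_, end)| *end);
    }
    static_files
}

fn main() {
    let files = [("headers", (500_000, 999_999)), ("headers", (0, 499_999))];
    assert_eq!(group_and_sort(&files)["headers"], vec![(0, 499_999), (500_000, 999_999)]);
}
```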
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/benches/hash_keys.rs | crates/storage/db/benches/hash_keys.rs | #![allow(missing_docs)]
use std::{collections::HashSet, path::Path, sync::Arc};
use criterion::{
criterion_group, criterion_main, measurement::WallTime, BenchmarkGroup, Criterion,
};
use proptest::{
arbitrary::Arbitrary,
prelude::any_with,
strategy::{Strategy, ValueTree},
test_runner::TestRunner,
};
use reth_db::{test_utils::create_test_rw_db_with_path, DatabaseEnv, TransactionHashNumbers};
use reth_db_api::{
cursor::DbCursorRW,
database::Database,
table::{Table, TableRow},
transaction::DbTxMut,
};
use reth_fs_util as fs;
use std::hint::black_box;
mod utils;
use utils::*;
criterion_group! {
name = benches;
config = Criterion::default();
targets = hash_keys
}
criterion_main!(benches);
/// Benchmarks the insertion of rows into a table where `Keys` are hashes.
/// * `append_all`/`append_input`: Table starts empty; the input is sorted during the benchmark
///   (`append_all` also writes the rows the other scenarios use as preload).
/// * `insert_sorted`/`put_sorted`: Table is preloaded with rows (as many as the batch size);
///   the input is sorted during the benchmark.
/// * `insert_unsorted`/`put_unsorted`: Table is preloaded with rows (as many as the batch size);
///   the input stays unsorted.
///
/// Each scenario is run with batches of `10_000`, `100_000` and `1_000_000` rows. At the end,
/// the table statistics are shown (e.g. number of pages, table size, ...).
pub fn hash_keys(c: &mut Criterion) {
let mut group = c.benchmark_group("Hash-Keys Table Insertion");
group.sample_size(10);
for size in [10_000, 100_000, 1_000_000] {
// Too slow.
#[expect(unexpected_cfgs)]
if cfg!(codspeed) && size > 10_000 {
continue;
}
measure_table_insertion::<TransactionHashNumbers>(&mut group, size);
}
}
fn measure_table_insertion<T>(group: &mut BenchmarkGroup<'_, WallTime>, size: usize)
where
T: Table,
T::Key: Default
+ Clone
+ for<'de> serde::Deserialize<'de>
+ Arbitrary
+ serde::Serialize
+ Ord
+ std::hash::Hash,
T::Value: Default + Clone + for<'de> serde::Deserialize<'de> + Arbitrary + serde::Serialize,
{
let bench_db_path = Path::new(BENCH_DB_PATH);
let scenarios: Vec<(fn(_, _) -> _, &str)> = vec![
(append::<T>, "append_all"),
(append::<T>, "append_input"),
(insert::<T>, "insert_unsorted"),
(insert::<T>, "insert_sorted"),
(put::<T>, "put_unsorted"),
(put::<T>, "put_sorted"),
];
// `preload` is to be inserted into the database during the setup phase in all scenarios but
// `append`.
let (preload, unsorted_input) = generate_batches::<T>(size);
for (scenario, scenario_str) in scenarios {
// Append does not preload the table
let mut preload_size = size;
let mut input_size = size;
if scenario_str.contains("append") {
if scenario_str == "append_all" {
input_size = size * 2;
}
preload_size = 0;
}
// Setup phase before each benchmark iteration
let setup = || {
// Reset DB
let _ = fs::remove_dir_all(bench_db_path);
let db = Arc::try_unwrap(create_test_rw_db_with_path(bench_db_path)).unwrap();
let db = db.into_inner_db();
let mut unsorted_input = unsorted_input.clone();
if scenario_str == "append_all" {
unsorted_input.extend_from_slice(&preload);
}
if preload_size > 0 {
db.update(|tx| {
for (key, value) in &preload {
let _ = tx.put::<T>(key.clone(), value.clone());
}
})
.unwrap();
}
(unsorted_input, db)
};
// Iteration to be benchmarked
let execution = |(input, db)| {
let mut input: Vec<TableRow<T>> = input;
if scenario_str.contains("_sorted") || scenario_str.contains("append") {
input.sort_by(|a, b| a.0.cmp(&b.0));
}
scenario(db, input)
};
group.bench_function(
format!(
"{} | {scenario_str} | preload: {} | writing: {} ",
T::NAME,
preload_size,
input_size
),
|b| {
b.iter_with_setup(setup, execution);
},
);
// Execute once more to show table stats (doesn't count for benchmarking speed)
let db = execution(setup());
get_table_stats::<T>(db);
}
}
/// Generates two batches. The first is to be inserted into the database before running the
/// benchmark. The second is to be benchmarked with.
fn generate_batches<T>(size: usize) -> (Vec<TableRow<T>>, Vec<TableRow<T>>)
where
T: Table,
T::Key: std::hash::Hash + Arbitrary,
T::Value: Arbitrary,
{
let strategy = proptest::collection::vec(
any_with::<TableRow<T>>((
<T::Key as Arbitrary>::Parameters::default(),
<T::Value as Arbitrary>::Parameters::default(),
)),
size,
)
.no_shrink()
.boxed();
let mut runner = TestRunner::deterministic();
let mut preload = strategy.new_tree(&mut runner).unwrap().current();
let mut input = strategy.new_tree(&mut runner).unwrap().current();
let mut unique_keys = HashSet::with_capacity(preload.len() + input.len());
preload.retain(|(k, _)| unique_keys.insert(k.clone()));
input.retain(|(k, _)| unique_keys.insert(k.clone()));
(preload, input)
}
fn append<T>(db: DatabaseEnv, input: Vec<(<T as Table>::Key, <T as Table>::Value)>) -> DatabaseEnv
where
T: Table,
{
{
let tx = db.tx_mut().expect("tx");
let mut crsr = tx.cursor_write::<T>().expect("cursor");
black_box({
for (k, v) in input {
crsr.append(k, &v).expect("submit");
}
tx.inner.commit().unwrap()
});
}
db
}
fn insert<T>(db: DatabaseEnv, input: Vec<(<T as Table>::Key, <T as Table>::Value)>) -> DatabaseEnv
where
T: Table,
{
{
let tx = db.tx_mut().expect("tx");
let mut crsr = tx.cursor_write::<T>().expect("cursor");
black_box({
for (k, v) in input {
crsr.insert(k, &v).expect("submit");
}
tx.inner.commit().unwrap()
});
}
db
}
fn put<T>(db: DatabaseEnv, input: Vec<(<T as Table>::Key, <T as Table>::Value)>) -> DatabaseEnv
where
T: Table,
{
{
let tx = db.tx_mut().expect("tx");
black_box({
for (k, v) in input {
tx.put::<T>(k, v).expect("submit");
}
tx.inner.commit().unwrap()
});
}
db
}
#[derive(Debug)]
#[expect(dead_code)]
struct TableStats {
page_size: usize,
leaf_pages: usize,
branch_pages: usize,
overflow_pages: usize,
num_pages: usize,
size: usize,
}
fn get_table_stats<T>(db: DatabaseEnv)
where
T: Table,
{
db.view(|tx| {
let table_db = tx.inner.open_db(Some(T::NAME)).map_err(|_| "Could not open db.").unwrap();
println!(
"{:?}\n",
tx.inner
.db_stat(&table_db)
.map_err(|_| format!("Could not find table: {}", T::NAME))
.map(|stats| {
let num_pages =
stats.leaf_pages() + stats.branch_pages() + stats.overflow_pages();
let size = num_pages * stats.page_size() as usize;
TableStats {
page_size: stats.page_size() as usize,
leaf_pages: stats.leaf_pages(),
branch_pages: stats.branch_pages(),
overflow_pages: stats.overflow_pages(),
num_pages,
size,
}
})
.unwrap()
);
})
.unwrap();
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/benches/utils.rs | crates/storage/db/benches/utils.rs | #![allow(missing_docs)]
#![cfg(feature = "test-utils")]
use alloy_primitives::Bytes;
use reth_db::{test_utils::create_test_rw_db_with_path, DatabaseEnv};
use reth_db_api::{
table::{Compress, Encode, Table, TableRow},
transaction::DbTxMut,
Database,
};
use reth_fs_util as fs;
use std::{path::Path, sync::Arc};
/// Path where the DB is initialized for benchmarks.
#[allow(dead_code)]
pub(crate) const BENCH_DB_PATH: &str = "/tmp/reth-benches";
/// Used for `RandomRead` and `RandomWrite` benchmarks.
#[allow(dead_code)]
pub(crate) const RANDOM_INDEXES: [usize; 10] = [23, 2, 42, 5, 3, 99, 54, 0, 33, 64];
/// Returns bench vectors in the format: `Vec<(Key, EncodedKey, Value, CompressedValue)>`.
#[allow(dead_code)]
pub(crate) fn load_vectors<T: Table>() -> Vec<(T::Key, Bytes, T::Value, Bytes)>
where
T::Key: Clone + for<'de> serde::Deserialize<'de>,
T::Value: Clone + for<'de> serde::Deserialize<'de>,
{
let path =
format!("{}/../../../testdata/micro/db/{}.json", env!("CARGO_MANIFEST_DIR"), T::NAME);
let list: Vec<TableRow<T>> = serde_json::from_reader(std::io::BufReader::new(
std::fs::File::open(&path)
.unwrap_or_else(|_| panic!("Test vectors not found. They can be generated from the workspace by calling `cargo run --bin reth --features dev -- test-vectors tables`: {path:?}"))
))
.unwrap();
list.into_iter()
.map(|(k, v)| {
(
k.clone(),
Bytes::copy_from_slice(k.encode().as_ref()),
v.clone(),
Bytes::copy_from_slice(v.compress().as_ref()),
)
})
.collect::<Vec<_>>()
}
/// Sets up a clear database at `bench_db_path`.
#[expect(clippy::ptr_arg)]
#[allow(dead_code)]
pub(crate) fn set_up_db<T>(
bench_db_path: &Path,
pair: &Vec<(<T as Table>::Key, Bytes, <T as Table>::Value, Bytes)>,
) -> DatabaseEnv
where
T: Table,
T::Key: Clone,
T::Value: Clone,
{
// Reset DB
let _ = fs::remove_dir_all(bench_db_path);
let db = Arc::try_unwrap(create_test_rw_db_with_path(bench_db_path)).unwrap();
{
// Prepare data to be read
let tx = db.tx_mut().expect("tx");
for (k, _, v, _) in pair.clone() {
tx.put::<T>(k, v).expect("submit");
}
tx.inner.commit().unwrap();
}
db.into_inner_db()
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/benches/criterion.rs | crates/storage/db/benches/criterion.rs | #![allow(missing_docs)]
use std::{path::Path, sync::Arc};
use criterion::{
criterion_group, criterion_main, measurement::WallTime, BenchmarkGroup, Criterion,
};
use reth_db::test_utils::create_test_rw_db_with_path;
use reth_db_api::{
cursor::{DbCursorRO, DbCursorRW, DbDupCursorRO, DbDupCursorRW},
database::Database,
table::{Compress, Decode, Decompress, DupSort, Encode, Table},
tables::*,
transaction::{DbTx, DbTxMut},
};
use reth_fs_util as fs;
mod utils;
use utils::*;
criterion_group! {
name = benches;
config = Criterion::default();
targets = db, serialization
}
criterion_main!(benches);
pub fn db(c: &mut Criterion) {
let mut group = c.benchmark_group("tables_db");
group.measurement_time(std::time::Duration::from_millis(200));
group.warm_up_time(std::time::Duration::from_millis(200));
measure_table_db::<CanonicalHeaders>(&mut group);
measure_table_db::<HeaderTerminalDifficulties>(&mut group);
measure_table_db::<HeaderNumbers>(&mut group);
measure_table_db::<Headers>(&mut group);
measure_table_db::<BlockBodyIndices>(&mut group);
measure_table_db::<BlockOmmers>(&mut group);
measure_table_db::<TransactionHashNumbers>(&mut group);
measure_table_db::<Transactions>(&mut group);
measure_dupsort_db::<PlainStorageState>(&mut group);
measure_table_db::<PlainAccountState>(&mut group);
}
pub fn serialization(c: &mut Criterion) {
let mut group = c.benchmark_group("tables_serialization");
group.measurement_time(std::time::Duration::from_millis(200));
group.warm_up_time(std::time::Duration::from_millis(200));
measure_table_serialization::<CanonicalHeaders>(&mut group);
measure_table_serialization::<HeaderTerminalDifficulties>(&mut group);
measure_table_serialization::<HeaderNumbers>(&mut group);
measure_table_serialization::<Headers>(&mut group);
measure_table_serialization::<BlockBodyIndices>(&mut group);
measure_table_serialization::<BlockOmmers>(&mut group);
measure_table_serialization::<TransactionHashNumbers>(&mut group);
measure_table_serialization::<Transactions>(&mut group);
measure_table_serialization::<PlainStorageState>(&mut group);
measure_table_serialization::<PlainAccountState>(&mut group);
}
/// Measures `Encode`, `Decode`, `Compress` and `Decompress`.
fn measure_table_serialization<T>(group: &mut BenchmarkGroup<'_, WallTime>)
where
T: Table,
T::Key: Clone + for<'de> serde::Deserialize<'de>,
T::Value: Clone + for<'de> serde::Deserialize<'de>,
{
let input = &load_vectors::<T>();
group.bench_function(format!("{}.KeyEncode", T::NAME), move |b| {
b.iter_with_setup(
|| input.clone(),
|input| {
for (k, _, _, _) in input {
k.encode();
}
},
)
});
group.bench_function(format!("{}.KeyDecode", T::NAME), |b| {
b.iter_with_setup(
|| input.clone(),
|input| {
for (_, k, _, _) in input {
let _ = <T as Table>::Key::decode(&k);
}
},
)
});
group.bench_function(format!("{}.ValueCompress", T::NAME), move |b| {
b.iter_with_setup(
|| input.clone(),
|input| {
for (_, _, v, _) in input {
v.compress();
}
},
)
});
group.bench_function(format!("{}.ValueDecompress", T::NAME), |b| {
b.iter_with_setup(
|| input.clone(),
|input| {
for (_, _, _, v) in input {
let _ = <T as Table>::Value::decompress(&v);
}
},
)
});
}
/// Measures `SeqWrite`, `RandomWrite`, `SeqRead` and `RandomRead` using `cursor` and `tx.put`.
fn measure_table_db<T>(group: &mut BenchmarkGroup<'_, WallTime>)
where
T: Table,
T::Key: Clone + for<'de> serde::Deserialize<'de>,
T::Value: Clone + for<'de> serde::Deserialize<'de>,
{
let input = &load_vectors::<T>();
let bench_db_path = Path::new(BENCH_DB_PATH);
group.bench_function(format!("{}.SeqWrite", T::NAME), |b| {
b.iter_with_setup(
|| {
// Reset DB
let _ = fs::remove_dir_all(bench_db_path);
(
input.clone(),
Arc::try_unwrap(create_test_rw_db_with_path(bench_db_path)).unwrap(),
)
},
|(input, db)| {
// Create TX
let tx = db.tx_mut().expect("tx");
let mut crsr = tx.cursor_write::<T>().expect("cursor");
for (k, _, v, _) in input {
crsr.append(k, &v).expect("submit");
}
tx.inner.commit().unwrap()
},
)
});
group.bench_function(format!("{}.RandomWrite", T::NAME), |b| {
b.iter_with_setup(
|| {
// Reset DB
let _ = fs::remove_dir_all(bench_db_path);
(input, Arc::try_unwrap(create_test_rw_db_with_path(bench_db_path)).unwrap())
},
|(input, db)| {
// Create TX
let tx = db.tx_mut().expect("tx");
let mut crsr = tx.cursor_write::<T>().expect("cursor");
for index in RANDOM_INDEXES {
let (k, _, v, _) = input.get(index).unwrap().clone();
crsr.insert(k, &v).expect("submit");
}
tx.inner.commit().unwrap()
},
)
});
group.bench_function(format!("{}.SeqRead", T::NAME), |b| {
let db = set_up_db::<T>(bench_db_path, input);
b.iter(|| {
// Create TX
let tx = db.tx().expect("tx");
let mut cursor = tx.cursor_read::<T>().expect("cursor");
let walker = cursor.walk(Some(input.first().unwrap().0.clone())).unwrap();
for element in walker {
element.unwrap();
}
})
});
group.bench_function(format!("{}.RandomRead", T::NAME), |b| {
let db = set_up_db::<T>(bench_db_path, input);
b.iter(|| {
// Create TX
let tx = db.tx().expect("tx");
for index in RANDOM_INDEXES {
let mut cursor = tx.cursor_read::<T>().expect("cursor");
cursor.seek_exact(input.get(index).unwrap().0.clone()).unwrap();
}
})
});
}
/// Measures `SeqWrite`, `RandomWrite` and `SeqRead` using `cursor_dup` and `tx.put`.
fn measure_dupsort_db<T>(group: &mut BenchmarkGroup<'_, WallTime>)
where
T: Table + DupSort,
T::Key: Clone + for<'de> serde::Deserialize<'de>,
T::Value: Clone + for<'de> serde::Deserialize<'de>,
T::SubKey: Default + Clone + for<'de> serde::Deserialize<'de>,
{
let input = &load_vectors::<T>();
let bench_db_path = Path::new(BENCH_DB_PATH);
group.bench_function(format!("{}.SeqWrite", T::NAME), |b| {
b.iter_with_setup(
|| {
// Reset DB
let _ = fs::remove_dir_all(bench_db_path);
(
input.clone(),
Arc::try_unwrap(create_test_rw_db_with_path(bench_db_path)).unwrap(),
)
},
|(input, db)| {
// Create TX
let tx = db.tx_mut().expect("tx");
let mut crsr = tx.cursor_dup_write::<T>().expect("cursor");
for (k, _, v, _) in input {
crsr.append_dup(k, v).expect("submit");
}
tx.inner.commit().unwrap()
},
)
});
group.bench_function(format!("{}.RandomWrite", T::NAME), |b| {
b.iter_with_setup(
|| {
// Reset DB
let _ = fs::remove_dir_all(bench_db_path);
(input, Arc::try_unwrap(create_test_rw_db_with_path(bench_db_path)).unwrap())
},
|(input, db)| {
// Create TX
let tx = db.tx_mut().expect("tx");
for index in RANDOM_INDEXES {
let (k, _, v, _) = input.get(index).unwrap().clone();
tx.put::<T>(k, v).unwrap();
}
tx.inner.commit().unwrap();
},
)
});
group.bench_function(format!("{}.SeqRead", T::NAME), |b| {
let db = set_up_db::<T>(bench_db_path, input);
b.iter(|| {
// Create TX
let tx = db.tx().expect("tx");
let mut cursor = tx.cursor_dup_read::<T>().expect("cursor");
let walker = cursor.walk_dup(None, Some(T::SubKey::default())).unwrap();
for element in walker {
element.unwrap();
}
})
});
// group.bench_function(format!("{}.RandomRead", T::NAME), |b| {});
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db/benches/get.rs | crates/storage/db/benches/get.rs | #![allow(missing_docs)]
use alloy_primitives::TxHash;
use criterion::{criterion_group, criterion_main, Criterion};
use reth_db::{test_utils::create_test_rw_db_with_path, Database, TransactionHashNumbers};
use reth_db_api::transaction::DbTx;
use std::{fs, sync::Arc};
mod utils;
use utils::BENCH_DB_PATH;
criterion_group! {
name = benches;
config = Criterion::default();
targets = get
}
criterion_main!(benches);
// Small benchmark showing that [get_by_encoded_key] is slightly faster than [get]
// for a reference key, as [get] requires copying or cloning the key first.
fn get(c: &mut Criterion) {
let mut group = c.benchmark_group("Get");
// Random keys to get
let mut keys = Vec::new();
for _ in 0..10_000_000 {
let key = TxHash::random();
keys.push(key);
}
// We don't bother mocking/populating the DB, to reduce noise from DB I/O, value decoding, etc.
let _ = fs::remove_dir_all(BENCH_DB_PATH);
let db = Arc::try_unwrap(create_test_rw_db_with_path(BENCH_DB_PATH)).unwrap();
let tx = db.tx().expect("tx");
group.bench_function("get", |b| {
b.iter(|| {
for key in &keys {
tx.get::<TransactionHashNumbers>(*key).unwrap();
}
})
});
group.bench_function("get_by_encoded_key", |b| {
b.iter(|| {
for key in &keys {
tx.get_by_encoded_key::<TransactionHashNumbers>(key).unwrap();
}
})
});
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-common/src/lib.rs | crates/storage/db-common/src/lib.rs | //! Common db operations
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
pub mod init;
mod db_tool;
pub use db_tool::*;
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-common/src/init.rs | crates/storage/db-common/src/init.rs | //! Reth genesis initialization utility functions.
use alloy_consensus::BlockHeader;
use alloy_primitives::{keccak256, map::HashMap, Address, B256, U256};
use reth_chainspec::EthChainSpec;
use reth_codecs::Compact;
use reth_config::config::EtlConfig;
use reth_db_api::{tables, transaction::DbTxMut, DatabaseError};
use reth_etl::Collector;
use reth_execution_errors::StateRootError;
use reth_primitives_traits::{Account, Bytecode, GotExpected, NodePrimitives, StorageEntry};
use reth_provider::{
errors::provider::ProviderResult, providers::StaticFileWriter, writer::UnifiedStorageWriter,
BlockHashReader, BlockNumReader, BundleStateInit, ChainSpecProvider, DBProvider,
DatabaseProviderFactory, ExecutionOutcome, HashingWriter, HeaderProvider, HistoryWriter,
OriginalValuesKnown, ProviderError, RevertsInit, StageCheckpointReader, StageCheckpointWriter,
StateWriter, StaticFileProviderFactory, StorageLocation, TrieWriter,
};
use reth_stages_types::{StageCheckpoint, StageId};
use reth_static_file_types::StaticFileSegment;
use reth_trie::{
prefix_set::{TriePrefixSets, TriePrefixSetsMut},
IntermediateStateRootState, Nibbles, StateRoot as StateRootComputer, StateRootProgress,
};
use reth_trie_db::DatabaseStateRoot;
use serde::{Deserialize, Serialize};
use std::io::BufRead;
use tracing::{debug, error, info, trace};
use seismic_alloy_genesis::GenesisAccount;
/// Default soft limit for number of bytes to read from state dump file, before inserting into
/// database.
///
/// Default is 1 GB.
pub const DEFAULT_SOFT_LIMIT_BYTE_LEN_ACCOUNTS_CHUNK: usize = 1_000_000_000;
/// Approximate number of accounts per 1 GB of state dump file. One account is approximately 3.5 KB.
///
/// Approximately 285,228 accounts per GB.
//
// (14.05 GB OP mainnet state dump at Bedrock block / 4 007 565 accounts in file > 3.5 KB per
// account)
pub const AVERAGE_COUNT_ACCOUNTS_PER_GB_STATE_DUMP: usize = 285_228;
/// Soft limit for the number of flushed updates after which to log progress summary.
const SOFT_LIMIT_COUNT_FLUSHED_UPDATES: usize = 1_000_000;
/// Storage initialization error type.
#[derive(Debug, thiserror::Error, Clone)]
pub enum InitStorageError {
/// Genesis header found on static files but the database is empty.
#[error(
        "static files found, but the database is uninitialized. If attempting to re-sync, delete both."
)]
UninitializedDatabase,
/// An existing genesis block was found in the database, and its hash did not match the hash of
/// the chainspec.
#[error(
"genesis hash in the storage does not match the specified chainspec: chainspec is {chainspec_hash}, database is {storage_hash}"
)]
GenesisHashMismatch {
/// Expected genesis hash.
chainspec_hash: B256,
/// Actual genesis hash.
storage_hash: B256,
},
/// Provider error.
#[error(transparent)]
Provider(#[from] ProviderError),
/// State root error while computing the state root
#[error(transparent)]
StateRootError(#[from] StateRootError),
/// State root doesn't match the expected one.
#[error("state root mismatch: {_0}")]
StateRootMismatch(GotExpected<B256>),
}
impl From<DatabaseError> for InitStorageError {
fn from(error: DatabaseError) -> Self {
Self::Provider(ProviderError::Database(error))
}
}
/// Write the genesis block if it has not already been written
pub fn init_genesis<PF>(factory: &PF) -> Result<B256, InitStorageError>
where
PF: DatabaseProviderFactory
+ StaticFileProviderFactory<Primitives: NodePrimitives<BlockHeader: Compact>>
+ ChainSpecProvider
+ StageCheckpointReader
+ BlockHashReader,
PF::ProviderRW: StaticFileProviderFactory<Primitives = PF::Primitives>
+ StageCheckpointWriter
+ HistoryWriter
+ HeaderProvider
+ HashingWriter
+ StateWriter
+ TrieWriter
+ AsRef<PF::ProviderRW>,
PF::ChainSpec: EthChainSpec<Header = <PF::Primitives as NodePrimitives>::BlockHeader>,
{
let chain = factory.chain_spec();
let genesis = chain.genesis();
let hash = chain.genesis_hash();
// Check if we already have the genesis header or if we have the wrong one.
match factory.block_hash(0) {
Ok(None) | Err(ProviderError::MissingStaticFileBlock(StaticFileSegment::Headers, 0)) => {}
Ok(Some(block_hash)) => {
if block_hash == hash {
// Some users will at times attempt to re-sync from scratch by just deleting the
// database. Since `factory.block_hash` will only query the static files, we need to
// make sure that our database has been written to, and throw error if it's empty.
if factory.get_stage_checkpoint(StageId::Headers)?.is_none() {
error!(target: "reth::storage", "Genesis header found on static files, but database is uninitialized.");
return Err(InitStorageError::UninitializedDatabase)
}
debug!("Genesis already written, skipping.");
return Ok(hash)
}
return Err(InitStorageError::GenesisHashMismatch {
chainspec_hash: hash,
storage_hash: block_hash,
})
}
Err(e) => {
debug!(?e);
return Err(e.into());
}
}
debug!("Writing genesis block.");
let alloc = &genesis.alloc;
// use transaction to insert genesis header
let provider_rw = factory.database_provider_rw()?;
insert_genesis_hashes(&provider_rw, alloc.iter())?;
insert_genesis_history(&provider_rw, alloc.iter())?;
// Insert header
insert_genesis_header(&provider_rw, &chain)?;
insert_genesis_state(&provider_rw, alloc.iter())?;
// compute state root to populate trie tables
compute_state_root(&provider_rw, None)?;
// insert sync stage
for stage in StageId::ALL {
provider_rw.save_stage_checkpoint(stage, Default::default())?;
}
let static_file_provider = provider_rw.static_file_provider();
// Static file segments start empty, so we need to initialize the genesis block.
let segment = StaticFileSegment::Receipts;
static_file_provider.latest_writer(segment)?.increment_block(0)?;
let segment = StaticFileSegment::Transactions;
static_file_provider.latest_writer(segment)?.increment_block(0)?;
    // `commit_unwind` will first commit the DB and then the static file provider, which is
    // necessary on `init_genesis`.
UnifiedStorageWriter::commit_unwind(provider_rw)?;
Ok(hash)
}
/// Inserts the genesis state into the database.
pub fn insert_genesis_state<'a, 'b, Provider>(
provider: &Provider,
alloc: impl Iterator<Item = (&'a Address, &'b GenesisAccount)>,
) -> ProviderResult<()>
where
Provider: StaticFileProviderFactory
+ DBProvider<Tx: DbTxMut>
+ HeaderProvider
+ StateWriter
+ AsRef<Provider>,
{
insert_state(provider, alloc, 0)
}
/// Inserts state at given block into database.
pub fn insert_state<'a, 'b, Provider>(
provider: &Provider,
alloc: impl Iterator<Item = (&'a Address, &'b GenesisAccount)>,
block: u64,
) -> ProviderResult<()>
where
Provider: StaticFileProviderFactory
+ DBProvider<Tx: DbTxMut>
+ HeaderProvider
+ StateWriter
+ AsRef<Provider>,
{
let capacity = alloc.size_hint().1.unwrap_or(0);
let mut state_init: BundleStateInit =
HashMap::with_capacity_and_hasher(capacity, Default::default());
let mut reverts_init = HashMap::with_capacity_and_hasher(capacity, Default::default());
let mut contracts: HashMap<B256, Bytecode> =
HashMap::with_capacity_and_hasher(capacity, Default::default());
for (address, account) in alloc {
let bytecode_hash = if let Some(code) = &account.code {
match Bytecode::new_raw_checked(code.clone()) {
Ok(bytecode) => {
let hash = bytecode.hash_slow();
contracts.insert(hash, bytecode);
Some(hash)
}
Err(err) => {
error!(%address, %err, "Failed to decode genesis bytecode.");
return Err(DatabaseError::Other(err.to_string()).into());
}
}
} else {
None
};
// get state
let storage = account
.storage
.as_ref()
.map(|m| {
m.iter()
.map(|(key, &flagged_value)| {
(
*key,
(alloy_primitives::FlaggedStorage::public(U256::ZERO), flagged_value),
)
})
.collect::<HashMap<_, _>>()
})
.unwrap_or_default();
reverts_init.insert(
*address,
(
Some(None),
storage.keys().map(|k| StorageEntry { key: *k, ..Default::default() }).collect(),
),
);
state_init.insert(
*address,
(
None,
Some(Account {
nonce: account.nonce.unwrap_or_default(),
balance: account.balance,
bytecode_hash,
}),
storage,
),
);
}
let all_reverts_init: RevertsInit = HashMap::from_iter([(block, reverts_init)]);
let execution_outcome = ExecutionOutcome::new_init(
state_init,
all_reverts_init,
contracts,
Vec::default(),
block,
Vec::new(),
);
provider.write_state(
&execution_outcome,
OriginalValuesKnown::Yes,
StorageLocation::Database,
)?;
trace!(target: "reth::cli", "Inserted state");
Ok(())
}
/// Inserts hashes for the genesis state.
pub fn insert_genesis_hashes<'a, 'b, Provider>(
provider: &Provider,
alloc: impl Iterator<Item = (&'a Address, &'b GenesisAccount)> + Clone,
) -> ProviderResult<()>
where
Provider: DBProvider<Tx: DbTxMut> + HashingWriter,
{
// insert and hash accounts to hashing table
let alloc_accounts = alloc.clone().map(|(addr, account)| (*addr, Some(Account::from(account))));
provider.insert_account_for_hashing(alloc_accounts)?;
trace!(target: "reth::cli", "Inserted account hashes");
let alloc_storage = alloc.filter_map(|(addr, account)| {
// only return Some if there is storage
account.storage.as_ref().map(|storage| {
(*addr, storage.clone().into_iter().map(|(key, value)| StorageEntry { key, value }))
})
});
provider.insert_storage_for_hashing(alloc_storage)?;
trace!(target: "reth::cli", "Inserted storage hashes");
Ok(())
}
/// Inserts history indices for genesis accounts and storage.
pub fn insert_genesis_history<'a, 'b, Provider>(
provider: &Provider,
alloc: impl Iterator<Item = (&'a Address, &'b GenesisAccount)> + Clone,
) -> ProviderResult<()>
where
Provider: DBProvider<Tx: DbTxMut> + HistoryWriter,
{
insert_history(provider, alloc, 0)
}
/// Inserts history indices for genesis accounts and storage.
pub fn insert_history<'a, 'b, Provider>(
provider: &Provider,
alloc: impl Iterator<Item = (&'a Address, &'b GenesisAccount)> + Clone,
block: u64,
) -> ProviderResult<()>
where
Provider: DBProvider<Tx: DbTxMut> + HistoryWriter,
{
let account_transitions = alloc.clone().map(|(addr, _)| (*addr, [block]));
provider.insert_account_history_index(account_transitions)?;
trace!(target: "reth::cli", "Inserted account history");
let storage_transitions = alloc
.filter_map(|(addr, account)| account.storage.as_ref().map(|storage| (addr, storage)))
.flat_map(|(addr, storage)| storage.keys().map(|key| ((*addr, *key), [block])));
provider.insert_storage_history_index(storage_transitions)?;
trace!(target: "reth::cli", "Inserted storage history");
Ok(())
}
/// Inserts header for the genesis state.
pub fn insert_genesis_header<Provider, Spec>(
provider: &Provider,
chain: &Spec,
) -> ProviderResult<()>
where
Provider: StaticFileProviderFactory<Primitives: NodePrimitives<BlockHeader: Compact>>
+ DBProvider<Tx: DbTxMut>,
Spec: EthChainSpec<Header = <Provider::Primitives as NodePrimitives>::BlockHeader>,
{
let (header, block_hash) = (chain.genesis_header(), chain.genesis_hash());
let static_file_provider = provider.static_file_provider();
match static_file_provider.block_hash(0) {
Ok(None) | Err(ProviderError::MissingStaticFileBlock(StaticFileSegment::Headers, 0)) => {
let (difficulty, hash) = (header.difficulty(), block_hash);
let mut writer = static_file_provider.latest_writer(StaticFileSegment::Headers)?;
writer.append_header(header, difficulty, &hash)?;
}
Ok(Some(_)) => {}
Err(e) => return Err(e),
}
provider.tx_ref().put::<tables::HeaderNumbers>(block_hash, 0)?;
provider.tx_ref().put::<tables::BlockBodyIndices>(0, Default::default())?;
Ok(())
}
/// Reads account state from a [`BufRead`] reader and initializes it at the highest block that can
/// be found in the database.
///
/// It's similar to [`init_genesis`] but supports importing state too big to fit in memory, and can
/// be set to the highest block present. One practical use case is importing OP mainnet state at the
/// Bedrock transition block.
pub fn init_from_state_dump<Provider>(
mut reader: impl BufRead,
provider_rw: &Provider,
etl_config: EtlConfig,
) -> eyre::Result<B256>
where
Provider: StaticFileProviderFactory
+ DBProvider<Tx: DbTxMut>
+ BlockNumReader
+ BlockHashReader
+ ChainSpecProvider
+ StageCheckpointWriter
+ HistoryWriter
+ HeaderProvider
+ HashingWriter
+ TrieWriter
+ StateWriter
+ AsRef<Provider>,
{
if etl_config.file_size == 0 {
return Err(eyre::eyre!("ETL file size cannot be zero"))
}
let block = provider_rw.last_block_number()?;
let hash = provider_rw
.block_hash(block)?
.ok_or_else(|| eyre::eyre!("Block hash not found for block {}", block))?;
let expected_state_root = provider_rw
.header_by_number(block)?
.ok_or_else(|| ProviderError::HeaderNotFound(block.into()))?
.state_root();
// first line can be state root
let dump_state_root = parse_state_root(&mut reader)?;
if expected_state_root != dump_state_root {
error!(target: "reth::cli",
?dump_state_root,
?expected_state_root,
"State root from state dump does not match state root in current header."
);
return Err(InitStorageError::StateRootMismatch(GotExpected {
got: dump_state_root,
expected: expected_state_root,
})
.into())
}
debug!(target: "reth::cli",
block,
chain=%provider_rw.chain_spec().chain(),
"Initializing state at block"
);
// remaining lines are accounts
let collector = parse_accounts(&mut reader, etl_config)?;
// write state to db and collect prefix sets
let mut prefix_sets = TriePrefixSetsMut::default();
dump_state(collector, provider_rw, block, &mut prefix_sets)?;
info!(target: "reth::cli", "All accounts written to database, starting state root computation (may take some time)");
// compute and compare state root. this advances the stage checkpoints.
let computed_state_root = compute_state_root(provider_rw, Some(prefix_sets.freeze()))?;
if computed_state_root == expected_state_root {
info!(target: "reth::cli",
?computed_state_root,
"Computed state root matches state root in state dump"
);
} else {
error!(target: "reth::cli",
?computed_state_root,
?expected_state_root,
"Computed state root does not match state root in state dump"
);
return Err(InitStorageError::StateRootMismatch(GotExpected {
got: computed_state_root,
expected: expected_state_root,
})
.into())
}
// insert sync stages for stages that require state
for stage in StageId::STATE_REQUIRED {
provider_rw.save_stage_checkpoint(stage, StageCheckpoint::new(block))?;
}
Ok(hash)
}
/// Parses and returns the expected state root.
fn parse_state_root(reader: &mut impl BufRead) -> eyre::Result<B256> {
let mut line = String::new();
reader.read_line(&mut line)?;
let expected_state_root = serde_json::from_str::<StateRoot>(&line)?.root;
trace!(target: "reth::cli",
root=%expected_state_root,
"Read state root from file"
);
Ok(expected_state_root)
}
/// Parses accounts and pushes them to a [`Collector`].
fn parse_accounts(
mut reader: impl BufRead,
etl_config: EtlConfig,
) -> Result<Collector<Address, GenesisAccount>, eyre::Error> {
let mut line = String::new();
let mut collector = Collector::new(etl_config.file_size, etl_config.dir);
while let Ok(n) = reader.read_line(&mut line) {
if n == 0 {
break
}
let GenesisAccountWithAddress { genesis_account, address } = serde_json::from_str(&line)?;
collector.insert(address, genesis_account)?;
if !collector.is_empty() &&
collector.len().is_multiple_of(AVERAGE_COUNT_ACCOUNTS_PER_GB_STATE_DUMP)
{
info!(target: "reth::cli",
parsed_new_accounts=collector.len(),
);
}
line.clear();
}
Ok(collector)
}
/// Takes a [`Collector`] and processes all accounts.
fn dump_state<Provider>(
mut collector: Collector<Address, GenesisAccount>,
provider_rw: &Provider,
block: u64,
prefix_sets: &mut TriePrefixSetsMut,
) -> Result<(), eyre::Error>
where
Provider: StaticFileProviderFactory
+ DBProvider<Tx: DbTxMut>
+ HeaderProvider
+ HashingWriter
+ HistoryWriter
+ StateWriter
+ AsRef<Provider>,
{
let accounts_len = collector.len();
let mut accounts = Vec::with_capacity(AVERAGE_COUNT_ACCOUNTS_PER_GB_STATE_DUMP);
let mut total_inserted_accounts = 0;
for (index, entry) in collector.iter()?.enumerate() {
let (address, account) = entry?;
let (address, _) = Address::from_compact(address.as_slice(), address.len());
let (account, _) = GenesisAccount::from_compact(account.as_slice(), account.len());
// Add to prefix sets
let hashed_address = keccak256(address);
prefix_sets.account_prefix_set.insert(Nibbles::unpack(hashed_address));
// Add storage keys to prefix sets if storage exists
if let Some(ref storage) = account.storage {
for key in storage.keys() {
let hashed_key = keccak256(key);
prefix_sets
.storage_prefix_sets
.entry(hashed_address)
.or_default()
.insert(Nibbles::unpack(hashed_key));
}
}
accounts.push((address, account));
if (index > 0 && index.is_multiple_of(AVERAGE_COUNT_ACCOUNTS_PER_GB_STATE_DUMP)) ||
index == accounts_len - 1
{
total_inserted_accounts += accounts.len();
info!(target: "reth::cli",
total_inserted_accounts,
"Writing accounts to db"
);
// use transaction to insert genesis header
insert_genesis_hashes(
provider_rw,
accounts.iter().map(|(address, account)| (address, account)),
)?;
insert_history(
provider_rw,
accounts.iter().map(|(address, account)| (address, account)),
block,
)?;
// block is already written to static files
insert_state(
provider_rw,
accounts.iter().map(|(address, account)| (address, account)),
block,
)?;
accounts.clear();
}
}
Ok(())
}
/// Computes the state root (from scratch) based on the accounts and storages present in the
/// database.
fn compute_state_root<Provider>(
provider: &Provider,
prefix_sets: Option<TriePrefixSets>,
) -> Result<B256, InitStorageError>
where
Provider: DBProvider<Tx: DbTxMut> + TrieWriter,
{
trace!(target: "reth::cli", "Computing state root");
let tx = provider.tx_ref();
let mut intermediate_state: Option<IntermediateStateRootState> = None;
let mut total_flushed_updates = 0;
loop {
let mut state_root =
StateRootComputer::from_tx(tx).with_intermediate_state(intermediate_state);
if let Some(sets) = prefix_sets.clone() {
state_root = state_root.with_prefix_sets(sets);
}
match state_root.root_with_progress()? {
StateRootProgress::Progress(state, _, updates) => {
let updated_len = provider.write_trie_updates(&updates)?;
total_flushed_updates += updated_len;
trace!(target: "reth::cli",
last_account_key = %state.account_root_state.last_hashed_key,
updated_len,
total_flushed_updates,
"Flushing trie updates"
);
intermediate_state = Some(*state);
if total_flushed_updates.is_multiple_of(SOFT_LIMIT_COUNT_FLUSHED_UPDATES) {
info!(target: "reth::cli",
total_flushed_updates,
"Flushing trie updates"
);
}
}
StateRootProgress::Complete(root, _, updates) => {
let updated_len = provider.write_trie_updates(&updates)?;
total_flushed_updates += updated_len;
trace!(target: "reth::cli",
%root,
updated_len,
total_flushed_updates,
"State root has been computed"
);
return Ok(root)
}
}
}
}
/// Type to deserialize state root from state dump file.
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
struct StateRoot {
root: B256,
}
/// An account as in the state dump file. This contains a [`GenesisAccount`] and the account's
/// address.
#[derive(Debug, Serialize, Deserialize)]
struct GenesisAccountWithAddress {
/// The account's balance, nonce, code, and storage.
#[serde(flatten)]
genesis_account: GenesisAccount,
/// The account's address.
address: Address,
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_consensus::constants::{
HOLESKY_GENESIS_HASH, MAINNET_GENESIS_HASH, SEPOLIA_GENESIS_HASH,
};
use reth_chainspec::{Chain, ChainSpec, HOLESKY, MAINNET, SEPOLIA};
use reth_db::DatabaseEnv;
use reth_db_api::{
cursor::DbCursorRO,
models::{storage_sharded_key::StorageShardedKey, IntegerList, ShardedKey},
table::{Table, TableRow},
transaction::DbTx,
Database,
};
use reth_provider::{
test_utils::{create_test_provider_factory_with_chain_spec, MockNodeTypesWithDB},
ProviderFactory,
};
use seismic_alloy_genesis::Genesis;
use std::{collections::BTreeMap, sync::Arc};
fn collect_table_entries<DB, T>(
tx: &<DB as Database>::TX,
) -> Result<Vec<TableRow<T>>, InitStorageError>
where
DB: Database,
T: Table,
{
Ok(tx.cursor_read::<T>()?.walk_range(..)?.collect::<Result<Vec<_>, _>>()?)
}
#[test]
fn success_init_genesis_mainnet() {
let genesis_hash =
init_genesis(&create_test_provider_factory_with_chain_spec(MAINNET.clone())).unwrap();
// actual, expected
assert_eq!(genesis_hash, MAINNET_GENESIS_HASH);
}
#[test]
fn success_init_genesis_sepolia() {
let genesis_hash =
init_genesis(&create_test_provider_factory_with_chain_spec(SEPOLIA.clone())).unwrap();
// actual, expected
assert_eq!(genesis_hash, SEPOLIA_GENESIS_HASH);
}
#[test]
fn success_init_genesis_holesky() {
let genesis_hash =
init_genesis(&create_test_provider_factory_with_chain_spec(HOLESKY.clone())).unwrap();
// actual, expected
assert_eq!(genesis_hash, HOLESKY_GENESIS_HASH);
}
#[test]
fn fail_init_inconsistent_db() {
let factory = create_test_provider_factory_with_chain_spec(SEPOLIA.clone());
let static_file_provider = factory.static_file_provider();
init_genesis(&factory).unwrap();
// Try to init db with a different genesis block
let genesis_hash = init_genesis(&ProviderFactory::<MockNodeTypesWithDB>::new(
factory.into_db(),
MAINNET.clone(),
static_file_provider,
));
assert!(matches!(
genesis_hash.unwrap_err(),
InitStorageError::GenesisHashMismatch {
chainspec_hash: MAINNET_GENESIS_HASH,
storage_hash: SEPOLIA_GENESIS_HASH
}
))
}
#[test]
fn init_genesis_history() {
let address_with_balance = Address::with_last_byte(1);
let address_with_storage = Address::with_last_byte(2);
let storage_key = B256::with_last_byte(1);
let chain_spec = Arc::new(ChainSpec {
chain: Chain::from_id(1),
genesis: Genesis {
alloc: BTreeMap::from([
(
address_with_balance,
GenesisAccount { balance: U256::from(1), ..Default::default() },
),
(
address_with_storage,
GenesisAccount {
storage: Some(
seismic_alloy_genesis::convert_fixedbytes_map_to_flagged_storage(
BTreeMap::from([(storage_key, B256::random())]),
),
),
..Default::default()
},
),
]),
..Default::default()
},
hardforks: Default::default(),
paris_block_and_final_difficulty: None,
deposit_contract: None,
..Default::default()
});
let factory = create_test_provider_factory_with_chain_spec(chain_spec);
init_genesis(&factory).unwrap();
let provider = factory.provider().unwrap();
let tx = provider.tx_ref();
assert_eq!(
collect_table_entries::<Arc<DatabaseEnv>, tables::AccountsHistory>(tx)
.expect("failed to collect"),
vec![
(ShardedKey::new(address_with_balance, u64::MAX), IntegerList::new([0]).unwrap()),
(ShardedKey::new(address_with_storage, u64::MAX), IntegerList::new([0]).unwrap())
],
);
assert_eq!(
collect_table_entries::<Arc<DatabaseEnv>, tables::StoragesHistory>(tx)
.expect("failed to collect"),
vec![(
StorageShardedKey::new(address_with_storage, storage_key, u64::MAX),
IntegerList::new([0]).unwrap()
)],
);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-common/src/db_tool/mod.rs | crates/storage/db-common/src/db_tool/mod.rs | //! Common db operations
use boyer_moore_magiclen::BMByte;
use eyre::Result;
use reth_db_api::{
cursor::{DbCursorRO, DbDupCursorRO},
database::Database,
table::{Decode, Decompress, DupSort, Table, TableRow},
transaction::{DbTx, DbTxMut},
DatabaseError, RawTable, TableRawRow,
};
use reth_fs_util as fs;
use reth_node_types::NodeTypesWithDB;
use reth_provider::{providers::ProviderNodeTypes, ChainSpecProvider, DBProvider, ProviderFactory};
use std::{path::Path, rc::Rc, sync::Arc};
use tracing::info;
/// Wrapper over DB that implements many useful DB queries.
#[derive(Debug)]
pub struct DbTool<N: NodeTypesWithDB> {
/// The provider factory that the db tool will use.
pub provider_factory: ProviderFactory<N>,
}
impl<N: NodeTypesWithDB> DbTool<N> {
/// Get an [`Arc`] to the underlying chainspec.
pub fn chain(&self) -> Arc<N::ChainSpec> {
self.provider_factory.chain_spec()
}
/// Grabs the contents of the table within a certain index range and places the
/// entries into a [`HashMap`][std::collections::HashMap].
///
/// [`ListFilter`] can be used to further
    /// filter down the desired results (e.g. list only rows which include `0xd3adbeef`).
pub fn list<T: Table>(&self, filter: &ListFilter) -> Result<(Vec<TableRow<T>>, usize)> {
let bmb = Rc::new(BMByte::from(&filter.search));
if bmb.is_none() && filter.has_search() {
eyre::bail!("Invalid search.")
}
let mut hits = 0;
let data = self.provider_factory.db_ref().view(|tx| {
let mut cursor =
tx.cursor_read::<RawTable<T>>().expect("Was not able to obtain a cursor.");
let map_filter = |row: Result<TableRawRow<T>, _>| {
if let Ok((k, v)) = row {
let (key, value) = (k.into_key(), v.into_value());
if key.len() + value.len() < filter.min_row_size {
return None
}
if key.len() < filter.min_key_size {
return None
}
if value.len() < filter.min_value_size {
return None
}
let result = || {
if filter.only_count {
return None
}
Some((
<T as Table>::Key::decode(&key).unwrap(),
<T as Table>::Value::decompress(&value).unwrap(),
))
};
match &*bmb {
Some(searcher) => {
if searcher.find_first_in(&value).is_some() ||
searcher.find_first_in(&key).is_some()
{
hits += 1;
return result()
}
}
None => {
hits += 1;
return result()
}
}
}
None
};
if filter.reverse {
Ok(cursor
.walk_back(None)?
.skip(filter.skip)
.filter_map(map_filter)
.take(filter.len)
.collect::<Vec<(_, _)>>())
} else {
Ok(cursor
.walk(None)?
.skip(filter.skip)
.filter_map(map_filter)
.take(filter.len)
.collect::<Vec<(_, _)>>())
}
})?;
Ok((data.map_err(|e: DatabaseError| eyre::eyre!(e))?, hits))
}
}
impl<N: ProviderNodeTypes> DbTool<N> {
/// Takes a DB where the tables have already been created.
pub fn new(provider_factory: ProviderFactory<N>) -> eyre::Result<Self> {
// Disable timeout because we are entering a TUI which might read for a long time. We
// disable on the [`DbTool`] level since it's only used in the CLI.
provider_factory.provider()?.disable_long_read_transaction_safety();
Ok(Self { provider_factory })
}
/// Grabs the content of the table for the given key
pub fn get<T: Table>(&self, key: T::Key) -> Result<Option<T::Value>> {
self.provider_factory.db_ref().view(|tx| tx.get::<T>(key))?.map_err(|e| eyre::eyre!(e))
}
/// Grabs the content of the `DupSort` table for the given key and subkey
pub fn get_dup<T: DupSort>(&self, key: T::Key, subkey: T::SubKey) -> Result<Option<T::Value>> {
self.provider_factory
.db_ref()
.view(|tx| tx.cursor_dup_read::<T>()?.seek_by_key_subkey(key, subkey))?
.map_err(|e| eyre::eyre!(e))
}
/// Drops the database, the static files and ExEx WAL at the given paths.
pub fn drop<P: AsRef<Path>>(
&self,
db_path: P,
static_files_path: P,
exex_wal_path: P,
) -> Result<()> {
let db_path = db_path.as_ref();
info!(target: "reth::cli", "Dropping database at {:?}", db_path);
fs::remove_dir_all(db_path)?;
let static_files_path = static_files_path.as_ref();
info!(target: "reth::cli", "Dropping static files at {:?}", static_files_path);
fs::remove_dir_all(static_files_path)?;
fs::create_dir_all(static_files_path)?;
if exex_wal_path.as_ref().exists() {
let exex_wal_path = exex_wal_path.as_ref();
info!(target: "reth::cli", "Dropping ExEx WAL at {:?}", exex_wal_path);
fs::remove_dir_all(exex_wal_path)?;
}
Ok(())
}
/// Drops the provided table from the database.
pub fn drop_table<T: Table>(&self) -> Result<()> {
self.provider_factory.db_ref().update(|tx| tx.clear::<T>())??;
Ok(())
}
}
/// Filters the results coming from the database.
#[derive(Debug)]
pub struct ListFilter {
/// Skip first N entries.
pub skip: usize,
/// Take N entries.
pub len: usize,
/// Sequence of bytes that will be searched on values and keys from the database.
pub search: Vec<u8>,
/// Minimum row size.
pub min_row_size: usize,
/// Minimum key size.
pub min_key_size: usize,
/// Minimum value size.
pub min_value_size: usize,
/// Reverse order of entries.
pub reverse: bool,
/// Only counts the number of filtered entries without decoding and returning them.
pub only_count: bool,
}
impl ListFilter {
    /// Returns `true` if `search` contains a byte sequence to filter rows by.
pub const fn has_search(&self) -> bool {
!self.search.is_empty()
}
/// Updates the page with new `skip` and `len` values.
pub const fn update_page(&mut self, skip: usize, len: usize) {
self.skip = skip;
self.len = len;
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-models/src/lib.rs | crates/storage/db-models/src/lib.rs | //! Models used in storage module.
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/paradigmxyz/reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
extern crate alloc;
/// Accounts
pub mod accounts;
pub use accounts::AccountBeforeTx;
/// Blocks
pub mod blocks;
pub use blocks::{StaticFileBlockWithdrawals, StoredBlockBodyIndices, StoredBlockWithdrawals};
/// Client Version
pub mod client_version;
pub use client_version::ClientVersion;
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-models/src/blocks.rs | crates/storage/db-models/src/blocks.rs | use alloy_eips::eip4895::Withdrawals;
use alloy_primitives::TxNumber;
use core::ops::Range;
/// Total number of transactions.
pub type NumTransactions = u64;
/// The storage of the block body indices.
///
/// It has the pointer to the transaction Number of the first
/// transaction in the block and the total number of transactions.
#[derive(Debug, Default, Eq, PartialEq, Clone, Copy)]
#[cfg_attr(any(test, feature = "arbitrary"), derive(arbitrary::Arbitrary))]
#[cfg_attr(any(test, feature = "reth-codec"), derive(reth_codecs::Compact))]
#[cfg_attr(any(test, feature = "reth-codec"), reth_codecs::add_arbitrary_tests(compact))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct StoredBlockBodyIndices {
/// The number of the first transaction in this block
///
/// Note: If the block is empty, this is the number of the first transaction
/// in the next non-empty block.
pub first_tx_num: TxNumber,
/// The total number of transactions in the block
///
    /// NOTE: The number of state transitions equals the number of transactions, plus one
    /// additional transition for the block change if the block has a block reward or withdrawals.
pub tx_count: NumTransactions,
}
impl StoredBlockBodyIndices {
/// Return the range of transaction ids for this block.
pub const fn tx_num_range(&self) -> Range<TxNumber> {
self.first_tx_num..self.first_tx_num + self.tx_count
}
    /// Return the index of the last transaction in this block, unless the block
    /// is empty, in which case it refers to the last transaction in a previous
    /// non-empty block.
pub const fn last_tx_num(&self) -> TxNumber {
self.first_tx_num.saturating_add(self.tx_count).saturating_sub(1)
}
/// First transaction index.
///
/// Caution: If the block is empty, this is the number of the first transaction
/// in the next non-empty block.
pub const fn first_tx_num(&self) -> TxNumber {
self.first_tx_num
}
/// Return the index of the next transaction after this block.
pub const fn next_tx_num(&self) -> TxNumber {
self.first_tx_num + self.tx_count
}
    /// Returns `true` if the block is empty.
pub const fn is_empty(&self) -> bool {
self.tx_count == 0
}
/// Return number of transaction inside block
///
/// NOTE: This is not the same as the number of transitions.
pub const fn tx_count(&self) -> NumTransactions {
self.tx_count
}
}
/// The storage representation of block withdrawals.
#[derive(Debug, Default, Eq, PartialEq, Clone)]
#[cfg_attr(any(test, feature = "arbitrary"), derive(arbitrary::Arbitrary))]
#[cfg_attr(any(test, feature = "reth-codec"), derive(reth_codecs::Compact))]
#[cfg_attr(any(test, feature = "reth-codec"), reth_codecs::add_arbitrary_tests(compact))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct StoredBlockWithdrawals {
/// The block withdrawals.
pub withdrawals: Withdrawals,
}
/// A storage representation of block withdrawals that is static file friendly. An inner `None`
/// represents a pre-merge block.
#[derive(Debug, Default, Eq, PartialEq, Clone)]
#[cfg_attr(any(test, feature = "arbitrary"), derive(arbitrary::Arbitrary))]
#[cfg_attr(any(test, feature = "reth-codec"), reth_codecs::add_arbitrary_tests(compact))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct StaticFileBlockWithdrawals {
/// The block withdrawals. A `None` value represents a pre-merge block.
pub withdrawals: Option<Withdrawals>,
}
#[cfg(any(test, feature = "reth-codec"))]
impl reth_codecs::Compact for StaticFileBlockWithdrawals {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
buf.put_u8(self.withdrawals.is_some() as u8);
if let Some(withdrawals) = &self.withdrawals {
return 1 + withdrawals.to_compact(buf);
}
1
}
fn from_compact(mut buf: &[u8], _: usize) -> (Self, &[u8]) {
use bytes::Buf;
if buf.get_u8() == 1 {
let (w, buf) = Withdrawals::from_compact(buf, buf.len());
(Self { withdrawals: Some(w) }, buf)
} else {
(Self { withdrawals: None }, buf)
}
}
}
#[cfg(test)]
mod tests {
use crate::StoredBlockBodyIndices;
#[test]
fn block_indices() {
let first_tx_num = 10;
let tx_count = 6;
let block_indices = StoredBlockBodyIndices { first_tx_num, tx_count };
assert_eq!(block_indices.first_tx_num(), first_tx_num);
assert_eq!(block_indices.last_tx_num(), first_tx_num + tx_count - 1);
assert_eq!(block_indices.next_tx_num(), first_tx_num + tx_count);
assert_eq!(block_indices.tx_count(), tx_count);
assert_eq!(block_indices.tx_num_range(), first_tx_num..first_tx_num + tx_count);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
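The manual `Compact` impl for `StaticFileBlockWithdrawals` above encodes the `Option` with a one-byte presence flag followed by the payload. A minimal, self-contained sketch of the same flag-byte scheme (using `Vec<u8>` instead of the `bytes` traits, and a plain `u64` standing in for `Withdrawals`; the function names are illustrative, not part of reth):

```rust
/// Encode an optional payload behind a one-byte presence flag, mirroring
/// `StaticFileBlockWithdrawals::to_compact`. Returns the number of bytes written.
fn to_compact_opt(value: Option<u64>, buf: &mut Vec<u8>) -> usize {
    buf.push(value.is_some() as u8);
    if let Some(v) = value {
        buf.extend_from_slice(&v.to_be_bytes());
        return 1 + 8;
    }
    1
}

/// Read the flag byte and, if set, the fixed-width payload; like `from_compact`,
/// the decoder returns the value together with the remaining buffer.
fn from_compact_opt(buf: &[u8]) -> (Option<u64>, &[u8]) {
    let (flag, rest) = buf.split_first().expect("buffer must hold the flag byte");
    if *flag == 1 {
        let (bytes, rest) = rest.split_at(8);
        (Some(u64::from_be_bytes(bytes.try_into().unwrap())), rest)
    } else {
        (None, rest)
    }
}
```

Round-tripping through these two functions mirrors the `to_compact`/`from_compact` contract: the encoder reports how many bytes it wrote, and the decoder hands back the untouched tail of the buffer.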
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-models/src/client_version.rs | crates/storage/db-models/src/client_version.rs | //! Client version model.
use alloc::string::String;
/// Client version that accessed the database.
#[derive(Clone, Eq, PartialEq, Debug, Default)]
#[cfg_attr(any(test, feature = "arbitrary"), derive(arbitrary::Arbitrary))]
#[cfg_attr(any(test, feature = "reth-codec"), reth_codecs::add_arbitrary_tests(compact))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
pub struct ClientVersion {
/// Client version
pub version: String,
/// The git commit sha
pub git_sha: String,
/// Build timestamp
pub build_timestamp: String,
}
impl ClientVersion {
/// Returns `true` if no version fields are set.
pub const fn is_empty(&self) -> bool {
self.version.is_empty() && self.git_sha.is_empty() && self.build_timestamp.is_empty()
}
}
#[cfg(any(test, feature = "reth-codec"))]
impl reth_codecs::Compact for ClientVersion {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
let version_size = self.version.to_compact(buf);
let git_sha_size = self.git_sha.to_compact(buf);
let build_timestamp_size = self.build_timestamp.to_compact(buf);
version_size + git_sha_size + build_timestamp_size
}
fn from_compact(buf: &[u8], len: usize) -> (Self, &[u8]) {
let (version, buf) = String::from_compact(buf, len);
let (git_sha, buf) = String::from_compact(buf, len);
let (build_timestamp, buf) = String::from_compact(buf, len);
let client_version = Self { version, git_sha, build_timestamp };
(client_version, buf)
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
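`ClientVersion`'s `Compact` impl simply chains the three field encodings back to back and sums their lengths, and the decoder peels them off in the same order. The chaining pattern can be sketched with explicit length-prefixed strings (the `u32` prefix is an assumption for illustration only; reth's actual `Compact` string encoding differs):

```rust
/// Append one length-prefixed string, analogous to a single `to_compact` call
/// for a `ClientVersion` field. Returns the number of bytes written.
fn put_str(s: &str, buf: &mut Vec<u8>) -> usize {
    buf.extend_from_slice(&(s.len() as u32).to_be_bytes());
    buf.extend_from_slice(s.as_bytes());
    4 + s.len()
}

/// Read back one length-prefixed string, returning it and the rest of the
/// buffer so the next field can be decoded from where this one ended.
fn get_str(buf: &[u8]) -> (String, &[u8]) {
    let (len_bytes, rest) = buf.split_at(4);
    let len = u32::from_be_bytes(len_bytes.try_into().unwrap()) as usize;
    let (body, rest) = rest.split_at(len);
    (String::from_utf8(body.to_vec()).expect("valid utf-8"), rest)
}
```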
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/db-models/src/accounts.rs | crates/storage/db-models/src/accounts.rs | use alloy_primitives::Address;
use reth_primitives_traits::Account;
/// Account as it is saved in the database.
///
/// [`Address`] is the subkey.
#[derive(Debug, Default, Clone, Eq, PartialEq)]
#[cfg_attr(any(test, feature = "arbitrary"), derive(arbitrary::Arbitrary))]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[cfg_attr(any(test, feature = "reth-codec"), reth_codecs::add_arbitrary_tests(compact))]
pub struct AccountBeforeTx {
/// Address for the account. Acts as `DupSort::SubKey`.
pub address: Address,
/// Account state before the transaction.
pub info: Option<Account>,
}
// NOTE: We drop the `reth_codec` derive here and manually encode the subkey,
// compressing only the second part of the value. If compression covered the
// whole value (even the subkey), it would break fetching values with `seek_by_key_subkey`.
#[cfg(any(test, feature = "reth-codec"))]
impl reth_codecs::Compact for AccountBeforeTx {
fn to_compact<B>(&self, buf: &mut B) -> usize
where
B: bytes::BufMut + AsMut<[u8]>,
{
// For now, write the full bytes; compression can be added later.
buf.put_slice(self.address.as_slice());
let acc_len = if let Some(account) = self.info { account.to_compact(buf) } else { 0 };
acc_len + 20
}
fn from_compact(mut buf: &[u8], len: usize) -> (Self, &[u8]) {
use bytes::Buf;
let address = Address::from_slice(&buf[..20]);
buf.advance(20);
let info = (len - 20 > 0).then(|| {
let (acc, advanced_buf) = Account::from_compact(buf, len - 20);
buf = advanced_buf;
acc
});
(Self { address, info }, buf)
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/zstd-compressors/src/lib.rs | crates/storage/zstd-compressors/src/lib.rs | //! Commonly used zstd [`Compressor`] and [`Decompressor`] for reth types.
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
extern crate alloc;
use crate::alloc::string::ToString;
use alloc::vec::Vec;
use zstd::bulk::{Compressor, Decompressor};
/// Compression/Decompression dictionary for `Receipt`.
pub static RECEIPT_DICTIONARY: &[u8] = include_bytes!("../receipt_dictionary.bin");
/// Compression/Decompression dictionary for `Transaction`.
pub static TRANSACTION_DICTIONARY: &[u8] = include_bytes!("../transaction_dictionary.bin");
#[cfg(feature = "std")]
pub use locals::*;
#[cfg(feature = "std")]
mod locals {
use super::*;
use core::cell::RefCell;
// We use `thread_local` compressors and decompressors because dictionaries can be quite big,
// and zstd-rs recommends using one context/compressor per thread.
std::thread_local! {
/// Thread Transaction compressor.
pub static TRANSACTION_COMPRESSOR: RefCell<Compressor<'static>> = RefCell::new(
Compressor::with_dictionary(0, TRANSACTION_DICTIONARY)
.expect("failed to initialize transaction compressor"),
);
/// Thread Transaction decompressor.
pub static TRANSACTION_DECOMPRESSOR: RefCell<ReusableDecompressor> =
RefCell::new(ReusableDecompressor::new(
Decompressor::with_dictionary(TRANSACTION_DICTIONARY)
.expect("failed to initialize transaction decompressor"),
));
/// Thread receipt compressor.
pub static RECEIPT_COMPRESSOR: RefCell<Compressor<'static>> = RefCell::new(
Compressor::with_dictionary(0, RECEIPT_DICTIONARY)
.expect("failed to initialize receipt compressor"),
);
/// Thread receipt decompressor.
pub static RECEIPT_DECOMPRESSOR: RefCell<ReusableDecompressor> =
RefCell::new(ReusableDecompressor::new(
Decompressor::with_dictionary(RECEIPT_DICTIONARY)
.expect("failed to initialize receipt decompressor"),
));
}
}
/// Creates a transaction [`Compressor`].
pub fn create_tx_compressor() -> Compressor<'static> {
    Compressor::with_dictionary(0, TRANSACTION_DICTIONARY)
        .expect("Failed to instantiate tx compressor")
}
/// Creates a transaction [`Decompressor`].
pub fn create_tx_decompressor() -> ReusableDecompressor {
ReusableDecompressor::new(
Decompressor::with_dictionary(TRANSACTION_DICTIONARY)
.expect("Failed to instantiate tx decompressor"),
)
}
/// Creates a receipt [`Compressor`].
pub fn create_receipt_compressor() -> Compressor<'static> {
Compressor::with_dictionary(0, RECEIPT_DICTIONARY)
.expect("Failed to instantiate receipt compressor")
}
/// Creates a receipt [`Decompressor`].
pub fn create_receipt_decompressor() -> ReusableDecompressor {
ReusableDecompressor::new(
Decompressor::with_dictionary(RECEIPT_DICTIONARY)
.expect("Failed to instantiate receipt decompressor"),
)
}
/// Reusable decompressor that uses its own internal buffer.
#[expect(missing_debug_implementations)]
pub struct ReusableDecompressor {
/// The `zstd` decompressor.
decompressor: Decompressor<'static>,
/// The buffer to decompress to.
buf: Vec<u8>,
}
impl ReusableDecompressor {
fn new(decompressor: Decompressor<'static>) -> Self {
Self { decompressor, buf: Vec::with_capacity(4096) }
}
/// Decompresses `src` reusing the decompressor and its internal buffer.
pub fn decompress(&mut self, src: &[u8]) -> &[u8] {
// If the decompression fails because the buffer is too small, we try to reserve more space
// by getting the upper bound and retry the decompression.
let mut reserved_upper_bound = false;
while let Err(err) = self.decompressor.decompress_to_buffer(src, &mut self.buf) {
let err = err.to_string();
assert!(
err.contains("Destination buffer is too small"),
"Failed to decompress {} bytes: {err}",
src.len()
);
let additional = 'b: {
// Try to get the upper bound of the decompression for the given source.
// Do this only once as it might be expensive and will be the same for the same
// source.
if !reserved_upper_bound {
reserved_upper_bound = true;
if let Some(upper_bound) = Decompressor::upper_bound(src) {
if let Some(additional) = upper_bound.checked_sub(self.buf.capacity()) {
break 'b additional
}
}
}
// Otherwise, double the capacity of the buffer.
// This should normally not be reached as the upper bound should be enough.
self.buf.capacity() + 24_000
};
self.reserve(additional, src.len());
}
// `decompress_to_buffer` sets the length of the vector to the number of bytes written, so
// we can safely return it as a slice.
&self.buf
}
#[track_caller]
fn reserve(&mut self, additional: usize, src_len: usize) {
if let Err(e) = self.buf.try_reserve(additional) {
panic!(
"failed to allocate to {existing} + {additional} bytes \
for the decompression of {src_len} bytes: {e}",
existing = self.buf.capacity(),
);
}
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
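`ReusableDecompressor::decompress` retries on a too-small destination buffer: on the first failure it reserves up to `Decompressor::upper_bound`, and only falls back to capacity-plus-fixed-slack growth if that is still not enough. The control flow can be sketched without `zstd`, with a simple capacity check standing in for `decompress_to_buffer` and a `needed` parameter standing in for the upper bound (both stand-ins are assumptions for illustration):

```rust
/// Grow-and-retry loop modeled on `ReusableDecompressor::decompress`.
/// Succeeds once `buf` has at least `needed` bytes of capacity, reserving
/// the upper bound first and a fixed slack afterwards.
fn decompress_with_retry(needed: usize, buf: &mut Vec<u8>) -> usize {
    let mut tried_upper_bound = false;
    loop {
        // Stand-in for `decompress_to_buffer`: "succeeds" only if the
        // destination capacity suffices.
        if buf.capacity() >= needed {
            buf.resize(needed, 0);
            return needed;
        }
        let additional = if !tried_upper_bound {
            tried_upper_bound = true;
            // First failure: reserve exactly up to the upper bound.
            needed - buf.capacity()
        } else {
            // Fallback, as in the original: grow by capacity plus fixed slack.
            buf.capacity() + 24_000
        };
        buf.reserve(additional);
    }
}
```

Doing the upper-bound reservation only once matters because computing it can be expensive and it does not change for the same source, which is exactly why the original tracks `reserved_upper_bound`.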
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/lib.rs | crates/storage/provider/src/lib.rs | //! Collection of traits and trait implementations for common database operations.
//!
//! ## Feature Flags
//!
//! - `test-utils`: Export utilities for testing
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
/// Various provider traits.
mod traits;
pub use traits::*;
/// Provider trait implementations.
pub mod providers;
pub use providers::{
DatabaseProvider, DatabaseProviderRO, DatabaseProviderRW, HistoricalStateProvider,
HistoricalStateProviderRef, LatestStateProvider, LatestStateProviderRef, ProviderFactory,
StaticFileAccess, StaticFileWriter,
};
#[cfg(any(test, feature = "test-utils"))]
/// Common test helpers for mocking the Provider.
pub mod test_utils;
/// Re-export provider error.
pub use reth_storage_errors::provider::{ProviderError, ProviderResult};
pub use reth_static_file_types as static_file;
pub use static_file::StaticFileSegment;
pub use reth_execution_types::*;
pub mod bundle_state;
/// Re-export `OriginalValuesKnown`
pub use revm_database::states::OriginalValuesKnown;
/// Writer standalone type.
pub mod writer;
pub use reth_chain_state::{
CanonStateNotification, CanonStateNotificationSender, CanonStateNotificationStream,
CanonStateNotifications, CanonStateSubscriptions,
};
// reexport traits to avoid breaking changes
pub use reth_storage_api::{HistoryWriter, StatsReader};
pub(crate) fn to_range<R: std::ops::RangeBounds<u64>>(bounds: R) -> std::ops::Range<u64> {
let start = match bounds.start_bound() {
std::ops::Bound::Included(&v) => v,
std::ops::Bound::Excluded(&v) => v + 1,
std::ops::Bound::Unbounded => 0,
};
let end = match bounds.end_bound() {
std::ops::Bound::Included(&v) => v + 1,
std::ops::Bound::Excluded(&v) => v,
std::ops::Bound::Unbounded => u64::MAX,
};
start..end
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/writer/mod.rs | crates/storage/provider/src/writer/mod.rs | use crate::{
providers::{StaticFileProvider, StaticFileWriter as SfWriter},
BlockExecutionWriter, BlockWriter, HistoryWriter, StateWriter, StaticFileProviderFactory,
StorageLocation, TrieWriter,
};
use alloy_consensus::BlockHeader;
use reth_chain_state::{ExecutedBlock, ExecutedBlockWithTrieUpdates};
use reth_db_api::transaction::{DbTx, DbTxMut};
use reth_errors::{ProviderError, ProviderResult};
use reth_primitives_traits::{NodePrimitives, SignedTransaction};
use reth_static_file_types::StaticFileSegment;
use reth_storage_api::{DBProvider, StageCheckpointWriter, TransactionsProviderExt};
use reth_storage_errors::writer::UnifiedStorageWriterError;
use revm_database::OriginalValuesKnown;
use std::sync::Arc;
use tracing::debug;
/// [`UnifiedStorageWriter`] is responsible for managing the writing to storage with both database
/// and static file providers.
#[derive(Debug)]
pub struct UnifiedStorageWriter<'a, ProviderDB, ProviderSF> {
database: &'a ProviderDB,
static_file: Option<ProviderSF>,
}
impl<'a, ProviderDB, ProviderSF> UnifiedStorageWriter<'a, ProviderDB, ProviderSF> {
/// Creates a new instance of [`UnifiedStorageWriter`].
///
/// # Parameters
/// - `database`: A reference to the database provider.
/// - `static_file`: An optional static file instance.
pub const fn new(database: &'a ProviderDB, static_file: Option<ProviderSF>) -> Self {
Self { database, static_file }
}
/// Creates a new instance of [`UnifiedStorageWriter`] from a database provider and a static
/// file instance.
pub fn from<P>(database: &'a P, static_file: ProviderSF) -> Self
where
P: AsRef<ProviderDB>,
{
Self::new(database.as_ref(), Some(static_file))
}
/// Creates a new instance of [`UnifiedStorageWriter`] from a database provider.
pub fn from_database<P>(database: &'a P) -> Self
where
P: AsRef<ProviderDB>,
{
Self::new(database.as_ref(), None)
}
/// Returns a reference to the database writer.
const fn database(&self) -> &ProviderDB {
self.database
}
/// Returns a reference to the static file instance.
///
/// # Panics
/// If the static file instance is not set.
const fn static_file(&self) -> &ProviderSF {
self.static_file.as_ref().expect("should exist")
}
/// Ensures that the static file instance is set.
///
/// # Returns
/// - `Ok(())` if the static file instance is set.
/// - `Err(StorageWriterError::MissingStaticFileWriter)` if the static file instance is not set.
#[expect(unused)]
const fn ensure_static_file(&self) -> Result<(), UnifiedStorageWriterError> {
if self.static_file.is_none() {
return Err(UnifiedStorageWriterError::MissingStaticFileWriter)
}
Ok(())
}
}
impl UnifiedStorageWriter<'_, (), ()> {
/// Commits both storage types in the right order.
///
/// For non-unwinding operations it makes more sense to commit the static files first, since
/// if the process is interrupted before the database commit, the static files can simply be
/// truncated according to the checkpoints on the next start-up.
///
/// NOTE: If unwinding data from storage, use `commit_unwind` instead!
pub fn commit<P>(provider: P) -> ProviderResult<()>
where
P: DBProvider<Tx: DbTxMut> + StaticFileProviderFactory,
{
let static_file = provider.static_file_provider();
static_file.commit()?;
provider.commit()?;
Ok(())
}
/// Commits both storage types in the right order for an unwind operation.
///
/// For unwinding it makes more sense to commit the database first, since if the process is
/// interrupted before the static files commit, the static files can simply be truncated
/// according to the checkpoints on the next start-up.
///
/// NOTE: Should only be used after unwinding data from storage!
pub fn commit_unwind<P>(provider: P) -> ProviderResult<()>
where
P: DBProvider<Tx: DbTxMut> + StaticFileProviderFactory,
{
let static_file = provider.static_file_provider();
provider.commit()?;
static_file.commit()?;
Ok(())
}
}
impl<ProviderDB> UnifiedStorageWriter<'_, ProviderDB, &StaticFileProvider<ProviderDB::Primitives>>
where
ProviderDB: DBProvider<Tx: DbTx + DbTxMut>
+ BlockWriter
+ TransactionsProviderExt
+ TrieWriter
+ StateWriter
+ HistoryWriter
+ StageCheckpointWriter
+ BlockExecutionWriter
+ AsRef<ProviderDB>
+ StaticFileProviderFactory,
{
/// Writes executed blocks and receipts to storage.
pub fn save_blocks<N>(&self, blocks: Vec<ExecutedBlockWithTrieUpdates<N>>) -> ProviderResult<()>
where
N: NodePrimitives<SignedTx: SignedTransaction>,
ProviderDB: BlockWriter<Block = N::Block> + StateWriter<Receipt = N::Receipt>,
{
if blocks.is_empty() {
debug!(target: "provider::storage_writer", "Attempted to write empty block range");
return Ok(())
}
// NOTE: checked non-empty above
let first_block = blocks.first().unwrap().recovered_block();
let last_block = blocks.last().unwrap().recovered_block();
let first_number = first_block.number();
let last_block_number = last_block.number();
debug!(target: "provider::storage_writer", block_count = %blocks.len(), "Writing blocks and execution data to storage");
// TODO: Do performant / batched writes for each type of object
// instead of a loop over all blocks,
// meaning:
// * blocks
// * state
// * hashed state
// * trie updates (cannot naively extend, need helper)
// * indices (already done basically)
// Insert the blocks
for ExecutedBlockWithTrieUpdates {
block: ExecutedBlock { recovered_block, execution_output, hashed_state },
trie,
} in blocks
{
let block_hash = recovered_block.hash();
self.database()
.insert_block(Arc::unwrap_or_clone(recovered_block), StorageLocation::Both)?;
// Write state and changesets to the database.
// Must be written after blocks because of the receipt lookup.
self.database().write_state(
&execution_output,
OriginalValuesKnown::No,
StorageLocation::StaticFiles,
)?;
// insert hashes and intermediate merkle nodes
self.database()
.write_hashed_state(&Arc::unwrap_or_clone(hashed_state).into_sorted())?;
self.database().write_trie_updates(
trie.as_ref().ok_or(ProviderError::MissingTrieUpdates(block_hash))?,
)?;
}
// update history indices
self.database().update_history_indices(first_number..=last_block_number)?;
// Update pipeline progress
self.database().update_pipeline_stages(last_block_number, false)?;
debug!(target: "provider::storage_writer", range = ?first_number..=last_block_number, "Appended block data");
Ok(())
}
/// Removes all block, transaction and receipt data above the given block number from the
/// database and static files. This is exclusive, i.e., it only removes blocks above
/// `block_number`, and does not remove `block_number`.
pub fn remove_blocks_above(&self, block_number: u64) -> ProviderResult<()> {
// IMPORTANT: `remove_block_and_execution_above` is exclusive, so only data strictly ABOVE
// `block_number` is removed.
debug!(target: "provider::storage_writer", ?block_number, "Removing blocks from database above block_number");
self.database().remove_block_and_execution_above(block_number, StorageLocation::Both)?;
// Get highest static file block for the total block range
let highest_static_file_block = self
.static_file()
.get_highest_static_file_block(StaticFileSegment::Headers)
.expect("todo: error handling, headers should exist");
// IMPORTANT: we use `highest_static_file_block.saturating_sub(block_number)` to make sure
// we remove only what is ABOVE the block.
//
// i.e., if the highest static file block is 8 and we want to remove everything above
// block 5, there are three blocks to remove: 8, 7, and 6.
debug!(target: "provider::storage_writer", ?block_number, "Removing static file blocks above block_number");
self.static_file()
.get_writer(block_number, StaticFileSegment::Headers)?
.prune_headers(highest_static_file_block.saturating_sub(block_number))?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
test_utils::create_test_provider_factory, AccountReader, StorageTrieWriter, TrieWriter,
};
use alloy_primitives::{keccak256, map::HashMap, Address, B256, U256};
use reth_db_api::{
cursor::{DbCursorRO, DbCursorRW, DbDupCursorRO},
models::{AccountBeforeTx, BlockNumberAddress},
tables,
transaction::{DbTx, DbTxMut},
};
use reth_ethereum_primitives::Receipt;
use reth_execution_types::ExecutionOutcome;
use reth_primitives_traits::{Account, StorageEntry};
use reth_storage_api::{DatabaseProviderFactory, HashedPostStateProvider};
use reth_trie::{
test_utils::{state_root, storage_root_prehashed},
HashedPostState, HashedStorage, StateRoot, StorageRoot, StorageRootProgress,
};
use reth_trie_db::{DatabaseStateRoot, DatabaseStorageRoot};
use revm_database::{
states::{
bundle_state::BundleRetention, changes::PlainStorageRevert, PlainStorageChangeset,
},
BundleState, State,
};
use revm_database_interface::{DatabaseCommit, EmptyDB};
use revm_state::{
Account as RevmAccount, AccountInfo as RevmAccountInfo, AccountStatus, EvmStorageSlot,
FlaggedStorage,
};
use std::{collections::BTreeMap, str::FromStr};
#[test]
fn wiped_entries_are_removed() {
let provider_factory = create_test_provider_factory();
let addresses = (0..10).map(|_| Address::random()).collect::<Vec<_>>();
let destroyed_address = *addresses.first().unwrap();
let destroyed_address_hashed = keccak256(destroyed_address);
let slot = B256::with_last_byte(1);
let hashed_slot = keccak256(slot);
{
let provider_rw = provider_factory.provider_rw().unwrap();
let mut accounts_cursor =
provider_rw.tx_ref().cursor_write::<tables::HashedAccounts>().unwrap();
let mut storage_cursor =
provider_rw.tx_ref().cursor_write::<tables::HashedStorages>().unwrap();
for address in addresses {
let hashed_address = keccak256(address);
accounts_cursor
.insert(hashed_address, &Account { nonce: 1, ..Default::default() })
.unwrap();
storage_cursor
.insert(
hashed_address,
&StorageEntry { key: hashed_slot, value: FlaggedStorage::public(1) },
)
.unwrap();
}
provider_rw.commit().unwrap();
}
let mut hashed_state = HashedPostState::default();
hashed_state.accounts.insert(destroyed_address_hashed, None);
hashed_state.storages.insert(destroyed_address_hashed, HashedStorage::new(true));
let provider_rw = provider_factory.provider_rw().unwrap();
assert!(matches!(provider_rw.write_hashed_state(&hashed_state.into_sorted()), Ok(())));
provider_rw.commit().unwrap();
let provider = provider_factory.provider().unwrap();
assert_eq!(
provider.tx_ref().get::<tables::HashedAccounts>(destroyed_address_hashed),
Ok(None)
);
assert_eq!(
provider
.tx_ref()
.cursor_read::<tables::HashedStorages>()
.unwrap()
.seek_by_key_subkey(destroyed_address_hashed, hashed_slot),
Ok(None)
);
}
#[test]
fn write_to_db_account_info() {
let factory = create_test_provider_factory();
let provider = factory.provider_rw().unwrap();
let address_a = Address::ZERO;
let address_b = Address::repeat_byte(0xff);
let account_a = RevmAccountInfo { balance: U256::from(1), nonce: 1, ..Default::default() };
let account_b = RevmAccountInfo { balance: U256::from(2), nonce: 2, ..Default::default() };
let account_b_changed =
RevmAccountInfo { balance: U256::from(3), nonce: 3, ..Default::default() };
let mut state = State::builder().with_bundle_update().build();
state.insert_not_existing(address_a);
state.insert_account(address_b, account_b.clone());
// 0x00.. is created
state.commit(HashMap::from_iter([(
address_a,
RevmAccount {
info: account_a.clone(),
status: AccountStatus::Touched | AccountStatus::Created,
storage: HashMap::default(),
transaction_id: 0,
},
)]));
// 0xff.. is changed (balance + 1, nonce + 1)
state.commit(HashMap::from_iter([(
address_b,
RevmAccount {
info: account_b_changed.clone(),
status: AccountStatus::Touched,
storage: HashMap::default(),
transaction_id: 0,
},
)]));
state.merge_transitions(BundleRetention::Reverts);
let mut revm_bundle_state = state.take_bundle();
// Write plain state and reverts separately.
let reverts = revm_bundle_state.take_all_reverts().to_plain_state_reverts();
let plain_state = revm_bundle_state.to_plain_state(OriginalValuesKnown::Yes);
assert!(plain_state.storage.is_empty());
assert!(plain_state.contracts.is_empty());
provider.write_state_changes(plain_state).expect("Could not write plain state to DB");
assert_eq!(reverts.storage, [[]]);
provider.write_state_reverts(reverts, 1).expect("Could not write reverts to DB");
let reth_account_a = account_a.into();
let reth_account_b = account_b.into();
let reth_account_b_changed = (&account_b_changed).into();
// Check plain state
assert_eq!(
provider.basic_account(&address_a).expect("Could not read account state"),
Some(reth_account_a),
"Account A state is wrong"
);
assert_eq!(
provider.basic_account(&address_b).expect("Could not read account state"),
Some(reth_account_b_changed),
"Account B state is wrong"
);
// Check change set
let mut changeset_cursor = provider
.tx_ref()
.cursor_dup_read::<tables::AccountChangeSets>()
.expect("Could not open changeset cursor");
assert_eq!(
changeset_cursor.seek_exact(1).expect("Could not read account change set"),
Some((1, AccountBeforeTx { address: address_a, info: None })),
"Account A changeset is wrong"
);
assert_eq!(
changeset_cursor.next_dup().expect("Changeset table is malformed"),
Some((1, AccountBeforeTx { address: address_b, info: Some(reth_account_b) })),
"Account B changeset is wrong"
);
let mut state = State::builder().with_bundle_update().build();
state.insert_account(address_b, account_b_changed.clone());
// 0xff.. is destroyed
state.commit(HashMap::from_iter([(
address_b,
RevmAccount {
status: AccountStatus::Touched | AccountStatus::SelfDestructed,
info: account_b_changed,
storage: HashMap::default(),
transaction_id: 0,
},
)]));
state.merge_transitions(BundleRetention::Reverts);
let mut revm_bundle_state = state.take_bundle();
// Write plain state and reverts separately.
let reverts = revm_bundle_state.take_all_reverts().to_plain_state_reverts();
let plain_state = revm_bundle_state.to_plain_state(OriginalValuesKnown::Yes);
// Account B self-destructed, so the wipe flag should be present for it.
assert_eq!(
plain_state.storage,
[PlainStorageChangeset { address: address_b, wipe_storage: true, storage: vec![] }]
);
assert!(plain_state.contracts.is_empty());
provider.write_state_changes(plain_state).expect("Could not write plain state to DB");
assert_eq!(
reverts.storage,
[[PlainStorageRevert { address: address_b, wiped: true, storage_revert: vec![] }]]
);
provider.write_state_reverts(reverts, 2).expect("Could not write reverts to DB");
// Check new plain state for account B
assert_eq!(
provider.basic_account(&address_b).expect("Could not read account state"),
None,
"Account B should be deleted"
);
// Check change set
assert_eq!(
changeset_cursor.seek_exact(2).expect("Could not read account change set"),
Some((2, AccountBeforeTx { address: address_b, info: Some(reth_account_b_changed) })),
"Account B changeset is wrong after deletion"
);
}
#[test]
fn write_to_db_storage() {
let factory = create_test_provider_factory();
let provider = factory.database_provider_rw().unwrap();
let address_a = Address::ZERO;
let address_b = Address::repeat_byte(0xff);
let address_c = Address::random();
let account_b = RevmAccountInfo { balance: U256::from(2), nonce: 2, ..Default::default() };
let account_c = RevmAccountInfo { balance: U256::from(1), nonce: 3, ..Default::default() };
let mut state = State::builder().with_bundle_update().build();
state.insert_not_existing(address_a);
state.insert_account_with_storage(
address_b,
account_b.clone(),
HashMap::from_iter([(U256::from(1), FlaggedStorage::new(1, false))]),
);
state.insert_account_with_storage(
address_c,
account_c.clone(),
HashMap::from_iter([(U256::from(3), FlaggedStorage::new(1, false))]),
);
state.commit(HashMap::from_iter([
(
address_a,
RevmAccount {
status: AccountStatus::Touched | AccountStatus::Created,
info: RevmAccountInfo::default(),
// 0x00 => 0 => 1
// 0x01 => 0 => 2
storage: HashMap::from_iter([
(
U256::from(0),
EvmStorageSlot {
present_value: FlaggedStorage::new(1, true),
..Default::default()
},
),
(
U256::from(1),
EvmStorageSlot {
present_value: FlaggedStorage::new(2, true),
..Default::default()
},
),
]),
transaction_id: 0,
},
),
(
address_b,
RevmAccount {
status: AccountStatus::Touched,
info: account_b,
// 0x01 => 1 => 2
storage: HashMap::from_iter([(
U256::from(1),
EvmStorageSlot {
present_value: FlaggedStorage::new(2, false),
original_value: FlaggedStorage::new(1, false),
..Default::default()
},
)]),
transaction_id: 0,
},
),
(
address_c,
RevmAccount {
status: AccountStatus::Touched,
info: account_c,
// 0x03 => {private: false, value: 1} => {private: true, value: 2}
storage: HashMap::from_iter([(
U256::from(3),
EvmStorageSlot {
present_value: FlaggedStorage::new(2, true),
original_value: FlaggedStorage::new(1, false),
..Default::default()
},
)]),
transaction_id: 0,
},
),
]));
state.merge_transitions(BundleRetention::Reverts);
let outcome = ExecutionOutcome::new(state.take_bundle(), Default::default(), 1, Vec::new());
provider
.write_state(&outcome, OriginalValuesKnown::Yes, StorageLocation::Database)
.expect("Could not write bundle state to DB");
// Check plain storage state
let mut storage_cursor = provider
.tx_ref()
.cursor_dup_read::<tables::PlainStorageState>()
.expect("Could not open plain storage state cursor");
assert_eq!(
storage_cursor.seek_exact(address_a).unwrap(),
Some((address_a, StorageEntry { key: B256::ZERO, value: FlaggedStorage::private(1) })),
"Slot 0 for account A should be a private 1"
);
assert_eq!(
storage_cursor.next_dup().unwrap(),
Some((
address_a,
StorageEntry {
key: B256::from(U256::from(1).to_be_bytes()),
value: FlaggedStorage::private(2),
}
)),
"Slot 1 for account A should be a private 2"
);
assert_eq!(
storage_cursor.next_dup().unwrap(),
None,
"Account A should only have 2 storage slots"
);
assert_eq!(
storage_cursor.seek_exact(address_b).unwrap(),
Some((
address_b,
StorageEntry {
key: B256::from(U256::from(1).to_be_bytes()),
value: FlaggedStorage::public(2),
}
)),
"Slot 1 for account B should be a public 2"
);
assert_eq!(
storage_cursor.next_dup().unwrap(),
None,
"Account B should only have 1 storage slot"
);
assert_eq!(
storage_cursor.seek_exact(address_c).unwrap(),
Some((
address_c,
StorageEntry {
key: B256::from(U256::from(3).to_be_bytes()),
value: FlaggedStorage::private(2),
}
)),
"Slot 3 for account C should be a private 2"
);
assert_eq!(
storage_cursor.next_dup().unwrap(),
None,
"Account C should only have 1 storage slot"
);
// Check change set
let mut changeset_cursor = provider
.tx_ref()
.cursor_dup_read::<tables::StorageChangeSets>()
.expect("Could not open storage changeset cursor");
assert_eq!(
changeset_cursor.seek_exact(BlockNumberAddress((1, address_a))).unwrap(),
Some((
BlockNumberAddress((1, address_a)),
StorageEntry { key: B256::ZERO, value: FlaggedStorage::ZERO }
)),
"Slot 0 for account A should have changed from a public 0"
);
assert_eq!(
changeset_cursor.next_dup().unwrap(),
Some((
BlockNumberAddress((1, address_a)),
StorageEntry {
key: B256::from(U256::from(1).to_be_bytes()),
value: FlaggedStorage::ZERO,
}
)),
"Slot 1 for account A should have changed from a public 0"
);
assert_eq!(
changeset_cursor.next_dup().unwrap(),
None,
"Account A should only be in the changeset 2 times"
);
assert_eq!(
changeset_cursor.seek_exact(BlockNumberAddress((1, address_b))).unwrap(),
Some((
BlockNumberAddress((1, address_b)),
StorageEntry {
key: B256::from(U256::from(1).to_be_bytes()),
value: FlaggedStorage::public(1),
}
)),
"Slot 1 for account B should have changed from a public 1"
);
assert_eq!(
changeset_cursor.next_dup().unwrap(),
None,
"Account B should only be in the changeset 1 time"
);
assert_eq!(
changeset_cursor.seek_exact(BlockNumberAddress((1, address_c))).unwrap(),
Some((
BlockNumberAddress((1, address_c)),
StorageEntry {
key: B256::from(U256::from(3).to_be_bytes()),
value: FlaggedStorage::public(1),
}
)),
"Slot 3 for account C should have changed from a public 1"
);
assert_eq!(
changeset_cursor.next_dup().unwrap(),
None,
"Account C should only be in the changeset 1 time"
);
// Delete account A
let mut state = State::builder().with_bundle_update().build();
state.insert_account(address_a, RevmAccountInfo::default());
state.commit(HashMap::from_iter([(
address_a,
RevmAccount {
status: AccountStatus::Touched | AccountStatus::SelfDestructed,
info: RevmAccountInfo::default(),
storage: HashMap::default(),
transaction_id: 0,
},
)]));
state.merge_transitions(BundleRetention::Reverts);
let outcome = ExecutionOutcome::new(state.take_bundle(), Default::default(), 2, Vec::new());
provider
.write_state(&outcome, OriginalValuesKnown::Yes, StorageLocation::Database)
.expect("Could not write bundle state to DB");
assert_eq!(
storage_cursor.seek_exact(address_a).unwrap(),
None,
"Account A should have no storage slots after deletion"
);
assert_eq!(
changeset_cursor.seek_exact(BlockNumberAddress((2, address_a))).unwrap(),
Some((
BlockNumberAddress((2, address_a)),
StorageEntry { key: B256::ZERO, value: FlaggedStorage::private(1) }
)),
"Slot 0 for account A should have changed from a private 1 on deletion"
);
assert_eq!(
changeset_cursor.next_dup().unwrap(),
Some((
BlockNumberAddress((2, address_a)),
StorageEntry {
key: B256::from(U256::from(1).to_be_bytes()),
value: FlaggedStorage::private(2),
}
)),
"Slot 1 for account A should have changed from a private 2 on deletion"
);
assert_eq!(
changeset_cursor.next_dup().unwrap(),
None,
"Account A should only be in the changeset 2 times on deletion"
);
}
#[test]
fn write_to_db_multiple_selfdestructs() {
let factory = create_test_provider_factory();
let provider = factory.database_provider_rw().unwrap();
let address1 = Address::random();
let account_info = RevmAccountInfo { nonce: 1, ..Default::default() };
// Block #0: initial state.
let mut init_state = State::builder().with_bundle_update().build();
init_state.insert_not_existing(address1);
init_state.commit(HashMap::from_iter([(
address1,
RevmAccount {
info: account_info.clone(),
status: AccountStatus::Touched | AccountStatus::Created,
// 0x00 => 0 => 1
// 0x01 => 0 => 2
storage: HashMap::from_iter([
(
U256::ZERO,
EvmStorageSlot {
present_value: FlaggedStorage::public(1),
..Default::default()
},
),
(
U256::from(1),
EvmStorageSlot {
present_value: FlaggedStorage::public(2),
..Default::default()
},
),
]),
transaction_id: 0,
},
)]));
init_state.merge_transitions(BundleRetention::Reverts);
let outcome =
ExecutionOutcome::new(init_state.take_bundle(), Default::default(), 0, Vec::new());
provider
.write_state(&outcome, OriginalValuesKnown::Yes, StorageLocation::Database)
.expect("Could not write bundle state to DB");
let mut state = State::builder().with_bundle_update().build();
state.insert_account_with_storage(
address1,
account_info.clone(),
HashMap::from_iter([
(U256::ZERO, FlaggedStorage::public(1)),
(U256::from(1), FlaggedStorage::public(2)),
]),
);
// Block #1: change storage.
state.commit(HashMap::from_iter([(
address1,
RevmAccount {
status: AccountStatus::Touched,
info: account_info.clone(),
// 0x00 => 1 => 2
storage: HashMap::from_iter([(
U256::ZERO,
EvmStorageSlot {
original_value: FlaggedStorage::public(1),
present_value: FlaggedStorage::public(2),
..Default::default()
},
)]),
transaction_id: 0,
},
)]));
state.merge_transitions(BundleRetention::Reverts);
// Block #2: destroy account.
state.commit(HashMap::from_iter([(
address1,
RevmAccount {
status: AccountStatus::Touched | AccountStatus::SelfDestructed,
info: account_info.clone(),
storage: HashMap::default(),
transaction_id: 0,
},
)]));
state.merge_transitions(BundleRetention::Reverts);
// Block #3: re-create account.
state.commit(HashMap::from_iter([(
address1,
RevmAccount {
status: AccountStatus::Touched | AccountStatus::Created,
info: account_info.clone(),
storage: HashMap::default(),
transaction_id: 0,
},
)]));
state.merge_transitions(BundleRetention::Reverts);
// Block #4: change storage.
state.commit(HashMap::from_iter([(
address1,
RevmAccount {
status: AccountStatus::Touched,
info: account_info.clone(),
// 0x00 => 0 => 2
// 0x02 => 0 => 4
// 0x06 => 0 => 6
storage: HashMap::from_iter([
(
U256::ZERO,
EvmStorageSlot {
present_value: FlaggedStorage::public(2),
..Default::default()
},
),
(
U256::from(2),
EvmStorageSlot {
present_value: FlaggedStorage::public(4),
..Default::default()
},
),
(
U256::from(6),
EvmStorageSlot {
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/consistent.rs | crates/storage/provider/src/providers/consistent.rs | use super::{DatabaseProviderRO, ProviderFactory, ProviderNodeTypes};
use crate::{
providers::StaticFileProvider, AccountReader, BlockHashReader, BlockIdReader, BlockNumReader,
BlockReader, BlockReaderIdExt, BlockSource, ChainSpecProvider, ChangeSetReader, HeaderProvider,
ProviderError, PruneCheckpointReader, ReceiptProvider, ReceiptProviderIdExt,
StageCheckpointReader, StateReader, StaticFileProviderFactory, TransactionVariant,
TransactionsProvider,
};
use alloy_consensus::{transaction::TransactionMeta, BlockHeader};
use alloy_eips::{
eip2718::Encodable2718, BlockHashOrNumber, BlockId, BlockNumHash, BlockNumberOrTag,
HashOrNumber,
};
use alloy_primitives::{
map::{hash_map, HashMap},
Address, BlockHash, BlockNumber, TxHash, TxNumber, B256, U256,
};
use reth_chain_state::{BlockState, CanonicalInMemoryState, MemoryOverlayStateProviderRef};
use reth_chainspec::ChainInfo;
use reth_db_api::models::{AccountBeforeTx, BlockNumberAddress, StoredBlockBodyIndices};
use reth_execution_types::{BundleStateInit, ExecutionOutcome, RevertsInit};
use reth_node_types::{BlockTy, HeaderTy, ReceiptTy, TxTy};
use reth_primitives_traits::{Account, BlockBody, RecoveredBlock, SealedHeader, StorageEntry};
use reth_prune_types::{PruneCheckpoint, PruneSegment};
use reth_stages_types::{StageCheckpoint, StageId};
use reth_storage_api::{
BlockBodyIndicesProvider, DatabaseProviderFactory, NodePrimitivesProvider, StateProvider,
StorageChangeSetReader, TryIntoHistoricalStateProvider,
};
use reth_storage_errors::provider::ProviderResult;
use revm_database::states::PlainStorageRevert;
use std::{
ops::{Add, Bound, RangeBounds, RangeInclusive, Sub},
sync::Arc,
};
use tracing::trace;
/// Type that interacts with a snapshot view of the blockchain (storage and in-memory) taken at
/// the time of instantiation, EXCEPT for the pending, safe and finalized blocks, which might
/// change while holding this provider.
///
/// CAUTION: Avoid holding this provider for too long or the inner database transaction will
/// time-out.
#[derive(Debug)]
#[doc(hidden)] // triggers ICE for `cargo docs`
pub struct ConsistentProvider<N: ProviderNodeTypes> {
/// Storage provider.
storage_provider: <ProviderFactory<N> as DatabaseProviderFactory>::Provider,
/// Head block at time of [`Self`] creation
head_block: Option<Arc<BlockState<N::Primitives>>>,
/// In-memory canonical state. This is not a snapshot, and can change! Use with caution.
canonical_in_memory_state: CanonicalInMemoryState<N::Primitives>,
}
impl<N: ProviderNodeTypes> ConsistentProvider<N> {
/// Creates a new provider from a [`ProviderFactory`] and a [`CanonicalInMemoryState`].
///
/// Underneath, it takes a snapshot by fetching [`CanonicalInMemoryState::head_state`] and
/// [`ProviderFactory::database_provider_ro`], effectively maintaining a single snapshotted
/// view of memory and database.
pub fn new(
storage_provider_factory: ProviderFactory<N>,
state: CanonicalInMemoryState<N::Primitives>,
) -> ProviderResult<Self> {
// Each one provides a snapshot at the time of instantiation, but the order matters.
//
// If we acquired the database provider first, the in-memory chain could flush blocks to
// disk before its snapshot is instantiated. Our database provider would then not have
// access to the flushed blocks (since it works under an older view), while the in-memory
// state may have deleted them entirely, resulting in gaps in the range.
let head_block = state.head_state();
let storage_provider = storage_provider_factory.database_provider_ro()?;
Ok(Self { storage_provider, head_block, canonical_in_memory_state: state })
}
// Helper function to convert range bounds
fn convert_range_bounds<T>(
&self,
range: impl RangeBounds<T>,
end_unbounded: impl FnOnce() -> T,
) -> (T, T)
where
T: Copy + Add<Output = T> + Sub<Output = T> + From<u8>,
{
let start = match range.start_bound() {
Bound::Included(&n) => n,
Bound::Excluded(&n) => n + T::from(1u8),
Bound::Unbounded => T::from(0u8),
};
let end = match range.end_bound() {
Bound::Included(&n) => n,
Bound::Excluded(&n) => n - T::from(1u8),
Bound::Unbounded => end_unbounded(),
};
(start, end)
}
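The helper above can be exercised in isolation. The following is a minimal sketch of the same bound-conversion logic, specialized to `u64` (the free-function form is hypothetical, not the crate's API):

```rust
use std::ops::{Bound, RangeBounds};

/// Convert any `RangeBounds<u64>` into an inclusive `(start, end)` pair,
/// falling back to `end_unbounded()` when no upper bound is given.
fn convert_range_bounds(
    range: impl RangeBounds<u64>,
    end_unbounded: impl FnOnce() -> u64,
) -> (u64, u64) {
    let start = match range.start_bound() {
        Bound::Included(&n) => n,
        Bound::Excluded(&n) => n + 1,
        Bound::Unbounded => 0,
    };
    let end = match range.end_bound() {
        Bound::Included(&n) => n,
        // An excluded end of 0 would underflow; block/tx numbers never hit this.
        Bound::Excluded(&n) => n - 1,
        Bound::Unbounded => end_unbounded(),
    };
    (start, end)
}
```

For example, `convert_range_bounds(5..10, || 99)` yields `(5, 9)` and `convert_range_bounds(7.., || 42)` falls back to the provided tip, yielding `(7, 42)`.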
/// Storage provider for latest block
fn latest_ref<'a>(&'a self) -> ProviderResult<Box<dyn StateProvider + 'a>> {
trace!(target: "providers::blockchain", "Getting latest block state provider");
// use latest state provider if the head state exists
if let Some(state) = &self.head_block {
trace!(target: "providers::blockchain", "Using head state for latest state provider");
Ok(self.block_state_provider_ref(state)?.boxed())
} else {
trace!(target: "providers::blockchain", "Using database state for latest state provider");
Ok(self.storage_provider.latest())
}
}
fn history_by_block_hash_ref<'a>(
&'a self,
block_hash: BlockHash,
) -> ProviderResult<Box<dyn StateProvider + 'a>> {
trace!(target: "providers::blockchain", ?block_hash, "Getting history by block hash");
self.get_in_memory_or_storage_by_block(
block_hash.into(),
|_| self.storage_provider.history_by_block_hash(block_hash),
|block_state| {
let state_provider = self.block_state_provider_ref(block_state)?;
Ok(Box::new(state_provider))
},
)
}
/// Returns a state provider indexed by the given block number or tag.
fn state_by_block_number_ref<'a>(
&'a self,
number: BlockNumber,
) -> ProviderResult<Box<dyn StateProvider + 'a>> {
let hash =
self.block_hash(number)?.ok_or_else(|| ProviderError::HeaderNotFound(number.into()))?;
self.history_by_block_hash_ref(hash)
}
/// Return the last N blocks of state, recreating the [`ExecutionOutcome`].
///
/// If the range is empty, or there are no blocks for the given range, then this returns `None`.
pub fn get_state(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Option<ExecutionOutcome<ReceiptTy<N>>>> {
if range.is_empty() {
return Ok(None)
}
let start_block_number = *range.start();
let end_block_number = *range.end();
// We are not removing block meta as it is used to get block changesets.
let mut block_bodies = Vec::new();
for block_num in range.clone() {
let block_body = self
.block_body_indices(block_num)?
.ok_or(ProviderError::BlockBodyIndicesNotFound(block_num))?;
block_bodies.push((block_num, block_body))
}
// get transaction receipts
let Some(from_transaction_num) = block_bodies.first().map(|body| body.1.first_tx_num())
else {
return Ok(None)
};
let Some(to_transaction_num) = block_bodies.last().map(|body| body.1.last_tx_num()) else {
return Ok(None)
};
let mut account_changeset = Vec::new();
for block_num in range.clone() {
let changeset =
self.account_block_changeset(block_num)?.into_iter().map(|elem| (block_num, elem));
account_changeset.extend(changeset);
}
let mut storage_changeset = Vec::new();
for block_num in range {
let changeset = self.storage_changeset(block_num)?;
storage_changeset.extend(changeset);
}
let (state, reverts) =
self.populate_bundle_state(account_changeset, storage_changeset, end_block_number)?;
let mut receipt_iter =
self.receipts_by_tx_range(from_transaction_num..=to_transaction_num)?.into_iter();
let mut receipts = Vec::with_capacity(block_bodies.len());
// Group the flat receipt iterator into per-block receipt lists.
for (_, block_body) in block_bodies {
let mut block_receipts = Vec::with_capacity(block_body.tx_count as usize);
for tx_num in block_body.tx_num_range() {
let receipt = receipt_iter
.next()
.ok_or_else(|| ProviderError::ReceiptNotFound(tx_num.into()))?;
block_receipts.push(receipt);
}
receipts.push(block_receipts);
}
Ok(Some(ExecutionOutcome::new_init(
state,
reverts,
// We skip new contracts since we never delete them from the database
Vec::new(),
receipts,
start_block_number,
Vec::new(),
)))
}
/// Populate a [`BundleStateInit`] and [`RevertsInit`] using cursors over the
/// [`reth_db::PlainAccountState`] and [`reth_db::PlainStorageState`] tables, based on the given
/// storage and account changesets.
fn populate_bundle_state(
&self,
account_changeset: Vec<(u64, AccountBeforeTx)>,
storage_changeset: Vec<(BlockNumberAddress, StorageEntry)>,
block_range_end: BlockNumber,
) -> ProviderResult<(BundleStateInit, RevertsInit)> {
let mut state: BundleStateInit = HashMap::default();
let mut reverts: RevertsInit = HashMap::default();
let state_provider = self.state_by_block_number_ref(block_range_end)?;
// add account changeset changes
for (block_number, account_before) in account_changeset.into_iter().rev() {
let AccountBeforeTx { info: old_info, address } = account_before;
match state.entry(address) {
hash_map::Entry::Vacant(entry) => {
let new_info = state_provider.basic_account(&address)?;
entry.insert((old_info, new_info, HashMap::default()));
}
hash_map::Entry::Occupied(mut entry) => {
// overwrite old account state.
entry.get_mut().0 = old_info;
}
}
// insert old info into reverts.
reverts.entry(block_number).or_default().entry(address).or_default().0 = Some(old_info);
}
// add storage changeset changes
for (block_and_address, old_storage) in storage_changeset.into_iter().rev() {
let BlockNumberAddress((block_number, address)) = block_and_address;
// get account state or insert from plain state.
let account_state = match state.entry(address) {
hash_map::Entry::Vacant(entry) => {
let present_info = state_provider.basic_account(&address)?;
entry.insert((present_info, present_info, HashMap::default()))
}
hash_map::Entry::Occupied(entry) => entry.into_mut(),
};
// match storage.
match account_state.2.entry(old_storage.key) {
hash_map::Entry::Vacant(entry) => {
let new_storage_value =
state_provider.storage(address, old_storage.key)?.unwrap_or_default();
entry.insert((old_storage.value, new_storage_value));
}
hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().0 = old_storage.value;
}
};
reverts
.entry(block_number)
.or_default()
.entry(address)
.or_default()
.1
.push(old_storage);
}
Ok((state, reverts))
}
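The reverse walk above can be illustrated with a simplified model: iterating changesets newest to oldest means the last write into an entry's `old` slot comes from the oldest changeset, while the current value is read once from plain state. The types below are simplified stand-ins, not the crate's `BundleStateInit`:

```rust
use std::collections::HashMap;

/// For each `(block_number, slot, value_before_change)` record, reconstruct
/// `(oldest_value, current_value)` per slot by iterating newest -> oldest:
/// the final write into the `old` slot comes from the oldest changeset entry.
fn reconstruct(
    changesets: Vec<(u64, u64, u64)>, // (block_number, slot, value_before)
    plain_state: &HashMap<u64, u64>,  // current on-disk value per slot
) -> HashMap<u64, (u64, u64)> {
    let mut state: HashMap<u64, (u64, u64)> = HashMap::new();
    for (_block, slot, old_value) in changesets.into_iter().rev() {
        let current = *plain_state.get(&slot).unwrap_or(&0);
        state
            .entry(slot)
            .and_modify(|e| e.0 = old_value) // an older entry overwrites `old`
            .or_insert((old_value, current));
    }
    state
}
```

With slot 7 going `1 -> 2` at block 1 and `2 -> 3` at block 2 (plain state now 3), the reconstructed entry is `(1, 3)`: oldest pre-state paired with the present value.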
/// Fetches a range of data from both in-memory state and persistent storage while a predicate
/// is met.
///
/// Creates a snapshot of the in-memory chain state and database provider to prevent
/// inconsistencies. Splits the range into in-memory and storage sections, prioritizing
/// recent in-memory blocks in case of overlaps.
///
/// * `fetch_db_range` function (`F`) provides access to the database provider, allowing the
/// user to retrieve the required items from the database using [`RangeInclusive`].
/// * `map_block_state_item` function (`G`) provides each block of the range in the in-memory
/// state, allowing for selection or filtering for the desired data.
fn get_in_memory_or_storage_by_block_range_while<T, F, G, P>(
&self,
range: impl RangeBounds<BlockNumber>,
fetch_db_range: F,
map_block_state_item: G,
mut predicate: P,
) -> ProviderResult<Vec<T>>
where
F: FnOnce(
&DatabaseProviderRO<N::DB, N>,
RangeInclusive<BlockNumber>,
&mut P,
) -> ProviderResult<Vec<T>>,
G: Fn(&BlockState<N::Primitives>, &mut P) -> Option<T>,
P: FnMut(&T) -> bool,
{
// Each one provides a snapshot at the time of instantiation, but the order matters.
//
// If we acquired the database provider first, the in-memory chain could flush blocks to
// disk before its snapshot is instantiated. Our database provider would then not have
// access to the flushed blocks (since it works under an older view), while the in-memory
// state may have deleted them entirely, resulting in gaps in the range.
let mut in_memory_chain =
self.head_block.as_ref().map(|b| b.chain().collect::<Vec<_>>()).unwrap_or_default();
let db_provider = &self.storage_provider;
let (start, end) = self.convert_range_bounds(range, || {
// the first block is the highest one.
in_memory_chain
.first()
.map(|b| b.number())
.unwrap_or_else(|| db_provider.last_block_number().unwrap_or_default())
});
if start > end {
return Ok(vec![])
}
// Split the range into a storage range and an in-memory range. If the in-memory range is
// not necessary, drop it early.
//
// The last block of `in_memory_chain` has the lowest block number.
let (in_memory, storage_range) = match in_memory_chain.last().as_ref().map(|b| b.number()) {
Some(lowest_memory_block) if lowest_memory_block <= end => {
let highest_memory_block =
in_memory_chain.first().as_ref().map(|b| b.number()).expect("qed");
// The database may temporarily overlap with in-memory-chain blocks. In
// case of a re-org, the database blocks may belong to a forked chain, so
// we should prioritize the overlapping in-memory blocks.
let in_memory_range =
lowest_memory_block.max(start)..=end.min(highest_memory_block);
// If the requested range starts inside the in-memory range, drop the in-memory
// blocks below it
in_memory_chain.truncate(
in_memory_chain
.len()
.saturating_sub(start.saturating_sub(lowest_memory_block) as usize),
);
let storage_range =
(lowest_memory_block > start).then(|| start..=lowest_memory_block - 1);
(Some((in_memory_chain, in_memory_range)), storage_range)
}
_ => {
// Drop the in-memory chain so we don't hold blocks in memory.
drop(in_memory_chain);
(None, Some(start..=end))
}
};
let mut items = Vec::with_capacity((end - start + 1) as usize);
if let Some(storage_range) = storage_range {
let mut db_items = fetch_db_range(db_provider, storage_range.clone(), &mut predicate)?;
items.append(&mut db_items);
// If the number of items differs from the expected count, the predicate was not met
// somewhere in the range, so we return what we have.
if items.len() as u64 != storage_range.end() - storage_range.start() + 1 {
return Ok(items)
}
}
if let Some((in_memory_chain, in_memory_range)) = in_memory {
for (num, block) in in_memory_range.zip(in_memory_chain.into_iter().rev()) {
debug_assert!(num == block.number());
if let Some(item) = map_block_state_item(block, &mut predicate) {
items.push(item);
} else {
break
}
}
}
Ok(items)
}
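The storage/in-memory range split performed above can be sketched independently of `BlockState`. The function below is a hypothetical simplification over plain block numbers, returning inclusive `(start, end)` pairs for each source:

```rust
/// Split a requested inclusive block range into the part served from
/// storage and the part served from the in-memory chain, preferring
/// in-memory blocks wherever the two overlap.
fn split_range(
    start: u64,
    end: u64,
    lowest_memory_block: Option<u64>,
    highest_memory_block: u64,
) -> (Option<(u64, u64)>, Option<(u64, u64)>) {
    match lowest_memory_block {
        Some(lowest) if lowest <= end => {
            let in_memory = (lowest.max(start), end.min(highest_memory_block));
            // Storage only serves blocks strictly below the in-memory chain.
            let storage = (lowest > start).then(|| (start, lowest - 1));
            (storage, Some(in_memory))
        }
        // The in-memory chain is empty or sits entirely above the request.
        _ => (Some((start, end)), None),
    }
}
```

If blocks 8..=12 live in memory, requesting 5..=10 splits into storage 5..=7 plus in-memory 8..=10, while requesting 1..=3 is served from storage alone.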
/// This uses a given [`BlockState`] to initialize a state provider for that block.
fn block_state_provider_ref(
&self,
state: &BlockState<N::Primitives>,
) -> ProviderResult<MemoryOverlayStateProviderRef<'_, N::Primitives>> {
let anchor_hash = state.anchor().hash;
let latest_historical = self.history_by_block_hash_ref(anchor_hash)?;
let in_memory = state.chain().map(|block_state| block_state.block()).collect();
Ok(MemoryOverlayStateProviderRef::new(latest_historical, in_memory))
}
/// Fetches data from either in-memory state or persistent storage for a range of transactions.
///
/// * `fetch_from_db`: has a `DatabaseProviderRO` and the storage specific range.
/// * `fetch_from_block_state`: has a [`RangeInclusive`] of elements that should be fetched from
/// [`BlockState`]. [`RangeInclusive`] is necessary to handle partial look-ups of a block.
fn get_in_memory_or_storage_by_tx_range<S, M, R>(
&self,
range: impl RangeBounds<BlockNumber>,
fetch_from_db: S,
fetch_from_block_state: M,
) -> ProviderResult<Vec<R>>
where
S: FnOnce(
&DatabaseProviderRO<N::DB, N>,
RangeInclusive<TxNumber>,
) -> ProviderResult<Vec<R>>,
M: Fn(RangeInclusive<usize>, &BlockState<N::Primitives>) -> ProviderResult<Vec<R>>,
{
let in_mem_chain = self.head_block.iter().flat_map(|b| b.chain()).collect::<Vec<_>>();
let provider = &self.storage_provider;
// Get the last block number stored in storage, which does NOT overlap with the
// in-memory chain.
let last_database_block_number = in_mem_chain
.last()
.map(|b| Ok(b.anchor().number))
.unwrap_or_else(|| provider.last_block_number())?;
// Get the next tx number for the last block stored in the storage, which marks the start of
// the in-memory state.
let last_block_body_index = provider
.block_body_indices(last_database_block_number)?
.ok_or(ProviderError::BlockBodyIndicesNotFound(last_database_block_number))?;
let mut in_memory_tx_num = last_block_body_index.next_tx_num();
let (start, end) = self.convert_range_bounds(range, || {
in_mem_chain
.iter()
.map(|b| b.block_ref().recovered_block().body().transactions().len() as u64)
.sum::<u64>() +
last_block_body_index.last_tx_num()
});
if start > end {
return Ok(vec![])
}
let mut tx_range = start..=end;
// If the range is entirely before the first in-memory transaction number, fetch from
// storage
if *tx_range.end() < in_memory_tx_num {
return fetch_from_db(provider, tx_range);
}
let mut items = Vec::with_capacity((tx_range.end() - tx_range.start() + 1) as usize);
// If the range spans storage and memory, get elements from storage first.
if *tx_range.start() < in_memory_tx_num {
// Determine the range that needs to be fetched from storage.
let db_range = *tx_range.start()..=in_memory_tx_num.saturating_sub(1);
// Set the remaining transaction range for in-memory
tx_range = in_memory_tx_num..=*tx_range.end();
items.extend(fetch_from_db(provider, db_range)?);
}
// Iterate from the lowest block to the highest in-memory chain
for block_state in in_mem_chain.iter().rev() {
let block_tx_count =
block_state.block_ref().recovered_block().body().transactions().len();
let remaining = (tx_range.end() - tx_range.start() + 1) as usize;
// If the transaction range start is equal or higher than the next block first
// transaction, advance
if *tx_range.start() >= in_memory_tx_num + block_tx_count as u64 {
in_memory_tx_num += block_tx_count as u64;
continue
}
// This should only be more than 0 once, in case of a partial range inside a block.
let skip = (tx_range.start() - in_memory_tx_num) as usize;
items.extend(fetch_from_block_state(
skip..=skip + (remaining.min(block_tx_count - skip) - 1),
block_state,
)?);
in_memory_tx_num += block_tx_count as u64;
// Break if the range has been fully processed
if in_memory_tx_num > *tx_range.end() {
break
}
// Set updated range
tx_range = in_memory_tx_num..=*tx_range.end();
}
Ok(items)
}
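The skip/remaining arithmetic above amounts to mapping a global transaction range onto per-block index ranges. A simplified, self-contained sketch (hypothetical names, plain slices instead of `BlockState`):

```rust
/// Map a global, inclusive transaction range onto per-block index ranges.
/// `next_tx_num` is the global number of the first block's first transaction;
/// `block_tx_counts` lists each block's transaction count, lowest block first.
/// Returns `(block_index, first_tx_index, last_tx_index)` for every block
/// that intersects the range.
fn per_block_ranges(
    tx_start: u64,
    tx_end: u64,
    mut next_tx_num: u64,
    block_tx_counts: &[u64],
) -> Vec<(usize, u64, u64)> {
    let mut out = Vec::new();
    for (block_idx, &count) in block_tx_counts.iter().enumerate() {
        if count == 0 {
            continue; // empty blocks hold no part of the range
        }
        let block_first = next_tx_num;
        next_tx_num += count;
        let block_last = next_tx_num - 1;
        if tx_end < block_first || tx_start > block_last {
            continue; // no overlap with this block
        }
        // Partial overlap only happens at the edges of the requested range.
        let skip = tx_start.max(block_first) - block_first;
        let take_end = tx_end.min(block_last) - block_first;
        out.push((block_idx, skip, take_end));
    }
    out
}
```

With blocks holding 3, 2, and 4 transactions starting at global tx number 10, requesting 12..=15 touches index 2 of block 0, indices 0..=1 of block 1, and index 0 of block 2.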
/// Fetches data from either in-memory state or persistent storage by transaction
/// [`HashOrNumber`].
fn get_in_memory_or_storage_by_tx<S, M, R>(
&self,
id: HashOrNumber,
fetch_from_db: S,
fetch_from_block_state: M,
) -> ProviderResult<Option<R>>
where
S: FnOnce(&DatabaseProviderRO<N::DB, N>) -> ProviderResult<Option<R>>,
M: Fn(usize, TxNumber, &BlockState<N::Primitives>) -> ProviderResult<Option<R>>,
{
let in_mem_chain = self.head_block.iter().flat_map(|b| b.chain()).collect::<Vec<_>>();
let provider = &self.storage_provider;
// Get the last block number stored in the database, which does NOT overlap with the
// in-memory chain.
let last_database_block_number = in_mem_chain
.last()
.map(|b| Ok(b.anchor().number))
.unwrap_or_else(|| provider.last_block_number())?;
// Get the next tx number for the last block stored in the database and consider it the
// first tx number of the in-memory state
let last_block_body_index = provider
.block_body_indices(last_database_block_number)?
.ok_or(ProviderError::BlockBodyIndicesNotFound(last_database_block_number))?;
let mut in_memory_tx_num = last_block_body_index.next_tx_num();
// If the transaction number is less than the first in-memory transaction number, make a
// database lookup
if let HashOrNumber::Number(id) = id {
if id < in_memory_tx_num {
return fetch_from_db(provider)
}
}
// Iterate from the lowest block to the highest
for block_state in in_mem_chain.iter().rev() {
let executed_block = block_state.block_ref();
let block = executed_block.recovered_block();
for tx_index in 0..block.body().transactions().len() {
match id {
HashOrNumber::Hash(tx_hash) => {
if tx_hash == block.body().transactions()[tx_index].trie_hash() {
return fetch_from_block_state(tx_index, in_memory_tx_num, block_state)
}
}
HashOrNumber::Number(id) => {
if id == in_memory_tx_num {
return fetch_from_block_state(tx_index, in_memory_tx_num, block_state)
}
}
}
in_memory_tx_num += 1;
}
}
// Not found in-memory, so check database.
if let HashOrNumber::Hash(_) = id {
return fetch_from_db(provider)
}
Ok(None)
}
/// Fetches data from either in-memory state or persistent storage by [`BlockHashOrNumber`].
pub(crate) fn get_in_memory_or_storage_by_block<S, M, R>(
&self,
id: BlockHashOrNumber,
fetch_from_db: S,
fetch_from_block_state: M,
) -> ProviderResult<R>
where
S: FnOnce(&DatabaseProviderRO<N::DB, N>) -> ProviderResult<R>,
M: Fn(&BlockState<N::Primitives>) -> ProviderResult<R>,
{
if let Some(Some(block_state)) = self.head_block.as_ref().map(|b| b.block_on_chain(id)) {
return fetch_from_block_state(block_state)
}
fetch_from_db(&self.storage_provider)
}
/// Consumes the provider and returns a state provider for the specific block hash.
pub(crate) fn into_state_provider_at_block_hash(
self,
block_hash: BlockHash,
) -> ProviderResult<Box<dyn StateProvider>> {
let Self { storage_provider, head_block, .. } = self;
let into_history_at_block_hash = |block_hash| -> ProviderResult<Box<dyn StateProvider>> {
let block_number = storage_provider
.block_number(block_hash)?
.ok_or(ProviderError::BlockHashNotFound(block_hash))?;
storage_provider.try_into_history_at_block(block_number)
};
if let Some(Some(block_state)) =
head_block.as_ref().map(|b| b.block_on_chain(block_hash.into()))
{
let anchor_hash = block_state.anchor().hash;
let latest_historical = into_history_at_block_hash(anchor_hash)?;
return Ok(Box::new(block_state.state_provider(latest_historical)));
}
into_history_at_block_hash(block_hash)
}
}
impl<N: ProviderNodeTypes> ConsistentProvider<N> {
/// Ensures that the given block number is canonical (synced)
///
/// This is a helper for guarding the `HistoricalStateProvider` against block numbers that are
/// out of range and would lead to invalid results, mainly during initial sync.
///
/// Verifying the `block_number` directly would be expensive, since we would need to look up
/// the sync table. Instead, we ensure that the `block_number` is within the range of
/// [`Self::best_block_number`], which is updated when a block is synced.
#[inline]
pub(crate) fn ensure_canonical_block(&self, block_number: BlockNumber) -> ProviderResult<()> {
let latest = self.best_block_number()?;
if block_number > latest {
Err(ProviderError::HeaderNotFound(block_number.into()))
} else {
Ok(())
}
}
}
impl<N: ProviderNodeTypes> NodePrimitivesProvider for ConsistentProvider<N> {
type Primitives = N::Primitives;
}
impl<N: ProviderNodeTypes> StaticFileProviderFactory for ConsistentProvider<N> {
fn static_file_provider(&self) -> StaticFileProvider<N::Primitives> {
self.storage_provider.static_file_provider()
}
}
impl<N: ProviderNodeTypes> HeaderProvider for ConsistentProvider<N> {
type Header = HeaderTy<N>;
fn header(&self, block_hash: &BlockHash) -> ProviderResult<Option<Self::Header>> {
self.get_in_memory_or_storage_by_block(
(*block_hash).into(),
|db_provider| db_provider.header(block_hash),
|block_state| Ok(Some(block_state.block_ref().recovered_block().clone_header())),
)
}
fn header_by_number(&self, num: BlockNumber) -> ProviderResult<Option<Self::Header>> {
self.get_in_memory_or_storage_by_block(
num.into(),
|db_provider| db_provider.header_by_number(num),
|block_state| Ok(Some(block_state.block_ref().recovered_block().clone_header())),
)
}
fn header_td(&self, hash: &BlockHash) -> ProviderResult<Option<U256>> {
if let Some(num) = self.block_number(*hash)? {
self.header_td_by_number(num)
} else {
Ok(None)
}
}
fn header_td_by_number(&self, number: BlockNumber) -> ProviderResult<Option<U256>> {
let number = if self.head_block.as_ref().map(|b| b.block_on_chain(number.into())).is_some()
{
// If the block exists in memory, we should return a TD for it.
//
// The canonical in memory state should only store post-merge blocks. Post-merge blocks
// have zero difficulty. This means we can use the total difficulty for the last
// finalized block number if present (so that we are not affected by reorgs), if not the
// last number in the database will be used.
if let Some(last_finalized_num_hash) =
self.canonical_in_memory_state.get_finalized_num_hash()
{
last_finalized_num_hash.number
} else {
self.last_block_number()?
}
} else {
// Otherwise, return what we have on disk for the input block
number
};
self.storage_provider.header_td_by_number(number)
}
fn headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Self::Header>> {
self.get_in_memory_or_storage_by_block_range_while(
range,
|db_provider, range, _| db_provider.headers_range(range),
|block_state, _| Some(block_state.block_ref().recovered_block().header().clone()),
|_| true,
)
}
fn sealed_header(
&self,
number: BlockNumber,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
self.get_in_memory_or_storage_by_block(
number.into(),
|db_provider| db_provider.sealed_header(number),
|block_state| Ok(Some(block_state.block_ref().recovered_block().clone_sealed_header())),
)
}
fn sealed_headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
self.get_in_memory_or_storage_by_block_range_while(
range,
|db_provider, range, _| db_provider.sealed_headers_range(range),
|block_state, _| Some(block_state.block_ref().recovered_block().clone_sealed_header()),
|_| true,
)
}
fn sealed_headers_while(
&self,
range: impl RangeBounds<BlockNumber>,
predicate: impl FnMut(&SealedHeader<Self::Header>) -> bool,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
self.get_in_memory_or_storage_by_block_range_while(
range,
|db_provider, range, predicate| db_provider.sealed_headers_while(range, predicate),
|block_state, predicate| {
let header = block_state.block_ref().recovered_block().sealed_header();
predicate(header).then(|| header.clone())
},
predicate,
)
}
}
impl<N: ProviderNodeTypes> BlockHashReader for ConsistentProvider<N> {
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
self.get_in_memory_or_storage_by_block(
number.into(),
|db_provider| db_provider.block_hash(number),
|block_state| Ok(Some(block_state.hash())),
)
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
self.get_in_memory_or_storage_by_block_range_while(
start..end,
|db_provider, inclusive_range, _| {
db_provider
.canonical_hashes_range(*inclusive_range.start(), *inclusive_range.end() + 1)
},
|block_state, _| Some(block_state.hash()),
|_| true,
)
}
}
impl<N: ProviderNodeTypes> BlockNumReader for ConsistentProvider<N> {
fn chain_info(&self) -> ProviderResult<ChainInfo> {
let best_number = self.best_block_number()?;
Ok(ChainInfo { best_hash: self.block_hash(best_number)?.unwrap_or_default(), best_number })
}
fn best_block_number(&self) -> ProviderResult<BlockNumber> {
self.head_block.as_ref().map(|b| Ok(b.number())).unwrap_or_else(|| self.last_block_number())
}
fn last_block_number(&self) -> ProviderResult<BlockNumber> {
self.storage_provider.last_block_number()
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/mod.rs | crates/storage/provider/src/providers/mod.rs | //! Contains the main provider types and traits for interacting with the blockchain's storage.
use reth_chainspec::EthereumHardforks;
use reth_db_api::table::Value;
use reth_node_types::{FullNodePrimitives, NodeTypes, NodeTypesWithDB};
mod database;
pub use database::*;
mod static_file;
pub use static_file::{
StaticFileAccess, StaticFileJarProvider, StaticFileProvider, StaticFileProviderRW,
StaticFileProviderRWRefMut, StaticFileWriter,
};
mod state;
pub use state::{
historical::{HistoricalStateProvider, HistoricalStateProviderRef, LowestAvailableBlocks},
latest::{LatestStateProvider, LatestStateProviderRef},
};
mod consistent_view;
pub use consistent_view::{ConsistentDbView, ConsistentViewError};
mod blockchain_provider;
pub use blockchain_provider::BlockchainProvider;
mod consistent;
pub use consistent::ConsistentProvider;
/// Helper trait to bound [`NodeTypes`] so that, combined with a database, they satisfy
/// [`ProviderNodeTypes`].
pub trait NodeTypesForProvider
where
Self: NodeTypes<
ChainSpec: EthereumHardforks,
Storage: ChainStorage<Self::Primitives>,
Primitives: FullNodePrimitives<SignedTx: Value, Receipt: Value, BlockHeader: Value>,
>,
{
}
impl<T> NodeTypesForProvider for T where
T: NodeTypes<
ChainSpec: EthereumHardforks,
Storage: ChainStorage<T::Primitives>,
Primitives: FullNodePrimitives<SignedTx: Value, Receipt: Value, BlockHeader: Value>,
>
{
}
/// Helper trait keeping common requirements of providers for [`NodeTypesWithDB`].
pub trait ProviderNodeTypes
where
Self: NodeTypesForProvider + NodeTypesWithDB,
{
}
impl<T> ProviderNodeTypes for T where T: NodeTypesForProvider + NodeTypesWithDB {}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/consistent_view.rs | crates/storage/provider/src/providers/consistent_view.rs | use crate::{BlockNumReader, DatabaseProviderFactory, HeaderProvider};
use alloy_primitives::B256;
pub use reth_storage_errors::provider::ConsistentViewError;
use reth_storage_errors::provider::ProviderResult;
/// A consistent view over state in the database.
///
/// The view is initialized with the latest tip, or with an explicitly provided one.
/// Upon every attempt to create a database provider, the view will
/// perform a consistency check of the current tip against the initial one.
///
/// ## Usage
///
/// The view should only be used outside of staged-sync.
/// Otherwise, any attempt to create a provider will result in [`ConsistentViewError::Syncing`].
///
/// When using the view, the consumer should either:
/// 1) have a failover for when the state changes and handle [`ConsistentViewError::Inconsistent`]
///    appropriately, or
/// 2) be sure that the state does not change.
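///
/// ## Example
///
/// A minimal usage sketch (the `factory` value and the retry policy are assumptions, not part
/// of this module): pin a view to the latest tip and treat a consistency failure as retryable.
///
/// ```rust,ignore
/// let view = ConsistentDbView::new_with_latest_tip(factory)?;
/// match view.provider_ro() {
///     // The tip is unchanged; reads through `provider` see a consistent state.
///     Ok(provider) => { /* read from `provider` */ }
///     // The database tip moved or reorged since the view was created;
///     // re-create the view and retry, or propagate the error.
///     Err(err) => return Err(err),
/// }
/// ```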
#[derive(Clone, Debug)]
pub struct ConsistentDbView<Factory> {
factory: Factory,
tip: Option<(B256, u64)>,
}
impl<Factory> ConsistentDbView<Factory>
where
Factory: DatabaseProviderFactory<Provider: BlockNumReader + HeaderProvider>,
{
/// Creates new consistent database view.
pub const fn new(factory: Factory, tip: Option<(B256, u64)>) -> Self {
Self { factory, tip }
}
/// Creates new consistent database view with latest tip.
pub fn new_with_latest_tip(provider: Factory) -> ProviderResult<Self> {
let provider_ro = provider.database_provider_ro()?;
let last_num = provider_ro.last_block_number()?;
let tip = provider_ro.sealed_header(last_num)?.map(|h| (h.hash(), last_num));
Ok(Self::new(provider, tip))
}
/// Creates new read-only provider and performs consistency checks on the current tip.
pub fn provider_ro(&self) -> ProviderResult<Factory::Provider> {
// Create a new provider.
let provider_ro = self.factory.database_provider_ro()?;
// Check that the currently stored tip is included on-disk.
// This means that the database may have moved, but the view was not reorged.
//
// NOTE: We must use `sealed_header` with the block number here, because if we are using
// the consistent view provider while we're persisting blocks, we may enter a race
// condition. Recall that we always commit to static files first, then the database, and
// that block hash to block number indexes are contained in the database. If we were to
// fetch the block by hash while we're persisting, the following situation may occur:
//
// 1. Persistence appends the latest block to static files.
// 2. We initialize the consistent view provider, which fetches based on `last_block_number`
// and `sealed_header`, which both check static files, setting the tip to the newly
// committed block.
// 3. We attempt to fetch a header by hash, using for example the `header` method. This
// checks the database first, to fetch the number corresponding to the hash. Because the
// database has not been committed yet, this fails, and we return
// `ConsistentViewError::Reorged`.
// 4. Some time later, the database commits.
//
        // To ensure this doesn't happen, we just have to make sure that we fetch from the same
        // data source that we used during initialization. In this case, that is static files.
if let Some((hash, number)) = self.tip {
if provider_ro.sealed_header(number)?.is_none_or(|header| header.hash() != hash) {
return Err(ConsistentViewError::Reorged { block: hash }.into())
}
}
Ok(provider_ro)
}
}
#[cfg(test)]
mod tests {
use reth_errors::ProviderError;
use std::str::FromStr;
use super::*;
use crate::{
test_utils::create_test_provider_factory_with_chain_spec, BlockWriter,
StaticFileProviderFactory, StaticFileWriter,
};
use alloy_primitives::Bytes;
use assert_matches::assert_matches;
use reth_chainspec::{EthChainSpec, MAINNET};
use reth_ethereum_primitives::{Block, BlockBody};
use reth_primitives_traits::{block::TestBlock, RecoveredBlock, SealedBlock};
use reth_static_file_types::StaticFileSegment;
use reth_storage_api::StorageLocation;
#[test]
fn test_consistent_view_extend() {
let provider_factory = create_test_provider_factory_with_chain_spec(MAINNET.clone());
let genesis_header = MAINNET.genesis_header();
let genesis_block =
SealedBlock::<Block>::seal_parts(genesis_header.clone(), BlockBody::default());
let genesis_hash: B256 = genesis_block.hash();
let genesis_block = RecoveredBlock::new_sealed(genesis_block, vec![]);
// insert the block
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.insert_block(genesis_block, StorageLocation::StaticFiles).unwrap();
provider_rw.commit().unwrap();
// create a consistent view provider and check that a ro provider can be made
let view = ConsistentDbView::new_with_latest_tip(provider_factory.clone()).unwrap();
// ensure successful creation of a read-only provider.
assert_matches!(view.provider_ro(), Ok(_));
// generate a block that extends the genesis
let mut block = Block::default();
block.header_mut().parent_hash = genesis_hash;
block.header_mut().number = 1;
let sealed_block = SealedBlock::seal_slow(block);
let recovered_block = RecoveredBlock::new_sealed(sealed_block, vec![]);
// insert the block
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.insert_block(recovered_block, StorageLocation::StaticFiles).unwrap();
provider_rw.commit().unwrap();
// ensure successful creation of a read-only provider, based on this new db state.
assert_matches!(view.provider_ro(), Ok(_));
        // generate another block at height 2; its parent hash is irrelevant for the tip
        // consistency check, which only compares the header at a fixed height
let mut block = Block::default();
block.header_mut().parent_hash = genesis_hash;
block.header_mut().number = 2;
let sealed_block = SealedBlock::seal_slow(block);
let recovered_block = RecoveredBlock::new_sealed(sealed_block, vec![]);
// insert the block
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.insert_block(recovered_block, StorageLocation::StaticFiles).unwrap();
provider_rw.commit().unwrap();
// check that creation of a read-only provider still works
assert_matches!(view.provider_ro(), Ok(_));
}
#[test]
fn test_consistent_view_remove() {
let provider_factory = create_test_provider_factory_with_chain_spec(MAINNET.clone());
let genesis_header = MAINNET.genesis_header();
let genesis_block =
SealedBlock::<Block>::seal_parts(genesis_header.clone(), BlockBody::default());
let genesis_hash: B256 = genesis_block.hash();
let genesis_block = RecoveredBlock::new_sealed(genesis_block, vec![]);
// insert the block
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.insert_block(genesis_block, StorageLocation::Both).unwrap();
provider_rw.0.static_file_provider().commit().unwrap();
provider_rw.commit().unwrap();
// create a consistent view provider and check that a ro provider can be made
let view = ConsistentDbView::new_with_latest_tip(provider_factory.clone()).unwrap();
// ensure successful creation of a read-only provider.
assert_matches!(view.provider_ro(), Ok(_));
// generate a block that extends the genesis
let mut block = Block::default();
block.header_mut().parent_hash = genesis_hash;
block.header_mut().number = 1;
let sealed_block = SealedBlock::seal_slow(block);
let recovered_block = RecoveredBlock::new_sealed(sealed_block.clone(), vec![]);
// insert the block
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.insert_block(recovered_block, StorageLocation::Both).unwrap();
provider_rw.0.static_file_provider().commit().unwrap();
provider_rw.commit().unwrap();
// create a second consistent view provider and check that a ro provider can be made
let view = ConsistentDbView::new_with_latest_tip(provider_factory.clone()).unwrap();
let initial_tip_hash = sealed_block.hash();
// ensure successful creation of a read-only provider, based on this new db state.
assert_matches!(view.provider_ro(), Ok(_));
// remove the block above the genesis block
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.remove_blocks_above(0, StorageLocation::Both).unwrap();
let sf_provider = provider_rw.0.static_file_provider();
sf_provider.get_writer(1, StaticFileSegment::Headers).unwrap().prune_headers(1).unwrap();
sf_provider.commit().unwrap();
provider_rw.commit().unwrap();
// ensure unsuccessful creation of a read-only provider, based on this new db state.
let Err(ProviderError::ConsistentView(boxed_consistent_view_err)) = view.provider_ro()
else {
panic!("expected reorged consistent view error, got success");
};
let unboxed = *boxed_consistent_view_err;
assert_eq!(unboxed, ConsistentViewError::Reorged { block: initial_tip_hash });
// generate a block that extends the genesis with a different hash
let mut block = Block::default();
block.header_mut().parent_hash = genesis_hash;
block.header_mut().number = 1;
block.header_mut().extra_data =
Bytes::from_str("6a6f75726e657920746f20697468616361").unwrap();
let sealed_block = SealedBlock::seal_slow(block);
let recovered_block = RecoveredBlock::new_sealed(sealed_block, vec![]);
// reinsert the block at the same height, but with a different hash
let provider_rw = provider_factory.provider_rw().unwrap();
provider_rw.insert_block(recovered_block, StorageLocation::Both).unwrap();
provider_rw.0.static_file_provider().commit().unwrap();
provider_rw.commit().unwrap();
// ensure unsuccessful creation of a read-only provider, based on this new db state.
let Err(ProviderError::ConsistentView(boxed_consistent_view_err)) = view.provider_ro()
else {
panic!("expected reorged consistent view error, got success");
};
let unboxed = *boxed_consistent_view_err;
assert_eq!(unboxed, ConsistentViewError::Reorged { block: initial_tip_hash });
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/blockchain_provider.rs | crates/storage/provider/src/providers/blockchain_provider.rs | #![allow(unused)]
use crate::{
providers::{ConsistentProvider, ProviderNodeTypes, StaticFileProvider},
AccountReader, BlockHashReader, BlockIdReader, BlockNumReader, BlockReader, BlockReaderIdExt,
BlockSource, CanonChainTracker, CanonStateNotifications, CanonStateSubscriptions,
ChainSpecProvider, ChainStateBlockReader, ChangeSetReader, DatabaseProvider,
DatabaseProviderFactory, FullProvider, HashedPostStateProvider, HeaderProvider, ProviderError,
ProviderFactory, PruneCheckpointReader, ReceiptProvider, ReceiptProviderIdExt,
StageCheckpointReader, StateProviderBox, StateProviderFactory, StateReader,
StaticFileProviderFactory, TransactionVariant, TransactionsProvider,
};
use alloy_consensus::{transaction::TransactionMeta, Header};
use alloy_eips::{
eip4895::{Withdrawal, Withdrawals},
BlockHashOrNumber, BlockId, BlockNumHash, BlockNumberOrTag,
};
use alloy_primitives::{Address, BlockHash, BlockNumber, Sealable, TxHash, TxNumber, B256, U256};
use alloy_rpc_types_engine::ForkchoiceState;
use reth_chain_state::{
BlockState, CanonicalInMemoryState, ForkChoiceNotifications, ForkChoiceSubscriptions,
MemoryOverlayStateProvider,
};
use reth_chainspec::{ChainInfo, EthereumHardforks};
use reth_db_api::{
models::{AccountBeforeTx, BlockNumberAddress, StoredBlockBodyIndices},
transaction::DbTx,
Database,
};
use reth_ethereum_primitives::{Block, EthPrimitives, Receipt, TransactionSigned};
use reth_evm::{ConfigureEvm, EvmEnv};
use reth_execution_types::ExecutionOutcome;
use reth_node_types::{BlockTy, HeaderTy, NodeTypesWithDB, ReceiptTy, TxTy};
use reth_primitives_traits::{
Account, BlockBody, NodePrimitives, RecoveredBlock, SealedBlock, SealedHeader, StorageEntry,
};
use reth_prune_types::{PruneCheckpoint, PruneSegment};
use reth_stages_types::{StageCheckpoint, StageId};
use reth_storage_api::{
BlockBodyIndicesProvider, DBProvider, NodePrimitivesProvider, StorageChangeSetReader,
};
use reth_storage_errors::provider::ProviderResult;
use reth_trie::{HashedPostState, KeccakKeyHasher};
use revm_database::BundleState;
use std::{
ops::{Add, RangeBounds, RangeInclusive, Sub},
sync::Arc,
time::Instant,
};
use tracing::trace;
/// The main type for interacting with the blockchain.
///
/// This type serves as the main entry point for interacting with the blockchain and provides data
/// from database storage and from the in-memory canonical state (pending state etc.). It is a
/// simple wrapper type that holds the provider factory and the canonical in-memory state.
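///
/// ## Example
///
/// A construction sketch (`factory` is assumed to be an already configured [`ProviderFactory`];
/// error handling elided): initialize from the latest database header and read the current
/// chain info.
///
/// ```rust,ignore
/// let provider = BlockchainProvider::new(factory)?;
/// let info = provider.chain_info()?;
/// // `info.best_number` / `info.best_hash` reflect the tracked canonical head.
/// ```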
#[derive(Debug)]
pub struct BlockchainProvider<N: NodeTypesWithDB> {
/// Provider factory used to access the database.
pub(crate) database: ProviderFactory<N>,
    /// Tracks the chain info with respect to forkchoice updates and the in-memory
    /// canonical state.
pub(crate) canonical_in_memory_state: CanonicalInMemoryState<N::Primitives>,
}
impl<N: NodeTypesWithDB> Clone for BlockchainProvider<N> {
fn clone(&self) -> Self {
Self {
database: self.database.clone(),
canonical_in_memory_state: self.canonical_in_memory_state.clone(),
}
}
}
impl<N: ProviderNodeTypes> BlockchainProvider<N> {
/// Create a new [`BlockchainProvider`] using only the storage, fetching the latest
/// header from the database to initialize the provider.
pub fn new(storage: ProviderFactory<N>) -> ProviderResult<Self> {
let provider = storage.provider()?;
let best = provider.chain_info()?;
match provider.header_by_number(best.best_number)? {
Some(header) => {
drop(provider);
Ok(Self::with_latest(storage, SealedHeader::new(header, best.best_hash))?)
}
None => Err(ProviderError::HeaderNotFound(best.best_number.into())),
}
}
    /// Creates a new provider instance that wraps the database, using the provided latest
    /// header to initialize the chain info tracker.
    ///
    /// This returns a `ProviderResult` since it tries to retrieve the last finalized header from
    /// `database`.
pub fn with_latest(
storage: ProviderFactory<N>,
latest: SealedHeader<HeaderTy<N>>,
) -> ProviderResult<Self> {
let provider = storage.provider()?;
let finalized_header = provider
.last_finalized_block_number()?
.map(|num| provider.sealed_header(num))
.transpose()?
.flatten();
let safe_header = provider
.last_safe_block_number()?
.or_else(|| {
// for the purpose of this we can also use the finalized block if we don't have the
// safe block
provider.last_finalized_block_number().ok().flatten()
})
.map(|num| provider.sealed_header(num))
.transpose()?
.flatten();
Ok(Self {
database: storage,
canonical_in_memory_state: CanonicalInMemoryState::with_head(
latest,
finalized_header,
safe_header,
),
})
}
/// Gets a clone of `canonical_in_memory_state`.
pub fn canonical_in_memory_state(&self) -> CanonicalInMemoryState<N::Primitives> {
self.canonical_in_memory_state.clone()
}
    /// Returns a provider with a created `DbTx` inside, which allows fetching data from the
    /// database using different types of providers, e.g. [`HeaderProvider`] or
    /// [`BlockHashReader`]. This may fail if the inner read database transaction fails to open.
#[track_caller]
pub fn consistent_provider(&self) -> ProviderResult<ConsistentProvider<N>> {
ConsistentProvider::new(self.database.clone(), self.canonical_in_memory_state())
}
/// This uses a given [`BlockState`] to initialize a state provider for that block.
fn block_state_provider(
&self,
state: &BlockState<N::Primitives>,
) -> ProviderResult<MemoryOverlayStateProvider<N::Primitives>> {
let anchor_hash = state.anchor().hash;
let latest_historical = self.database.history_by_block_hash(anchor_hash)?;
Ok(state.state_provider(latest_historical))
}
/// Return the last N blocks of state, recreating the [`ExecutionOutcome`].
///
/// If the range is empty, or there are no blocks for the given range, then this returns `None`.
pub fn get_state(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Option<ExecutionOutcome<ReceiptTy<N>>>> {
self.consistent_provider()?.get_state(range)
}
}
impl<N: NodeTypesWithDB> NodePrimitivesProvider for BlockchainProvider<N> {
type Primitives = N::Primitives;
}
impl<N: ProviderNodeTypes> DatabaseProviderFactory for BlockchainProvider<N> {
type DB = N::DB;
type Provider = <ProviderFactory<N> as DatabaseProviderFactory>::Provider;
type ProviderRW = <ProviderFactory<N> as DatabaseProviderFactory>::ProviderRW;
fn database_provider_ro(&self) -> ProviderResult<Self::Provider> {
self.database.database_provider_ro()
}
fn database_provider_rw(&self) -> ProviderResult<Self::ProviderRW> {
self.database.database_provider_rw()
}
}
impl<N: ProviderNodeTypes> StaticFileProviderFactory for BlockchainProvider<N> {
fn static_file_provider(&self) -> StaticFileProvider<Self::Primitives> {
self.database.static_file_provider()
}
}
impl<N: ProviderNodeTypes> HeaderProvider for BlockchainProvider<N> {
type Header = HeaderTy<N>;
fn header(&self, block_hash: &BlockHash) -> ProviderResult<Option<Self::Header>> {
self.consistent_provider()?.header(block_hash)
}
fn header_by_number(&self, num: BlockNumber) -> ProviderResult<Option<Self::Header>> {
self.consistent_provider()?.header_by_number(num)
}
fn header_td(&self, hash: &BlockHash) -> ProviderResult<Option<U256>> {
self.consistent_provider()?.header_td(hash)
}
fn header_td_by_number(&self, number: BlockNumber) -> ProviderResult<Option<U256>> {
self.consistent_provider()?.header_td_by_number(number)
}
fn headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Self::Header>> {
self.consistent_provider()?.headers_range(range)
}
fn sealed_header(
&self,
number: BlockNumber,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
self.consistent_provider()?.sealed_header(number)
}
fn sealed_headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
self.consistent_provider()?.sealed_headers_range(range)
}
fn sealed_headers_while(
&self,
range: impl RangeBounds<BlockNumber>,
predicate: impl FnMut(&SealedHeader<Self::Header>) -> bool,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
self.consistent_provider()?.sealed_headers_while(range, predicate)
}
}
impl<N: ProviderNodeTypes> BlockHashReader for BlockchainProvider<N> {
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
self.consistent_provider()?.block_hash(number)
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
self.consistent_provider()?.canonical_hashes_range(start, end)
}
}
impl<N: ProviderNodeTypes> BlockNumReader for BlockchainProvider<N> {
fn chain_info(&self) -> ProviderResult<ChainInfo> {
Ok(self.canonical_in_memory_state.chain_info())
}
fn best_block_number(&self) -> ProviderResult<BlockNumber> {
Ok(self.canonical_in_memory_state.get_canonical_block_number())
}
fn last_block_number(&self) -> ProviderResult<BlockNumber> {
self.database.last_block_number()
}
fn earliest_block_number(&self) -> ProviderResult<BlockNumber> {
self.database.earliest_block_number()
}
fn block_number(&self, hash: B256) -> ProviderResult<Option<BlockNumber>> {
self.consistent_provider()?.block_number(hash)
}
}
impl<N: ProviderNodeTypes> BlockIdReader for BlockchainProvider<N> {
fn pending_block_num_hash(&self) -> ProviderResult<Option<BlockNumHash>> {
Ok(self.canonical_in_memory_state.pending_block_num_hash())
}
fn safe_block_num_hash(&self) -> ProviderResult<Option<BlockNumHash>> {
Ok(self.canonical_in_memory_state.get_safe_num_hash())
}
fn finalized_block_num_hash(&self) -> ProviderResult<Option<BlockNumHash>> {
Ok(self.canonical_in_memory_state.get_finalized_num_hash())
}
}
impl<N: ProviderNodeTypes> BlockReader for BlockchainProvider<N> {
type Block = BlockTy<N>;
fn find_block_by_hash(
&self,
hash: B256,
source: BlockSource,
) -> ProviderResult<Option<Self::Block>> {
self.consistent_provider()?.find_block_by_hash(hash, source)
}
fn block(&self, id: BlockHashOrNumber) -> ProviderResult<Option<Self::Block>> {
self.consistent_provider()?.block(id)
}
fn pending_block(&self) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Ok(self.canonical_in_memory_state.pending_recovered_block())
}
fn pending_block_and_receipts(
&self,
) -> ProviderResult<Option<(RecoveredBlock<Self::Block>, Vec<Self::Receipt>)>> {
Ok(self.canonical_in_memory_state.pending_block_and_receipts())
}
    /// Returns the block with senders with matching number or hash from the database.
///
/// **NOTE: If [`TransactionVariant::NoHash`] is provided then the transactions have invalid
/// hashes, since they would need to be calculated on the spot, and we want fast querying.**
///
/// Returns `None` if block is not found.
fn recovered_block(
&self,
id: BlockHashOrNumber,
transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
self.consistent_provider()?.recovered_block(id, transaction_kind)
}
fn sealed_block_with_senders(
&self,
id: BlockHashOrNumber,
transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
self.consistent_provider()?.sealed_block_with_senders(id, transaction_kind)
}
fn block_range(&self, range: RangeInclusive<BlockNumber>) -> ProviderResult<Vec<Self::Block>> {
self.consistent_provider()?.block_range(range)
}
fn block_with_senders_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
self.consistent_provider()?.block_with_senders_range(range)
}
fn recovered_block_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
self.consistent_provider()?.recovered_block_range(range)
}
}
impl<N: ProviderNodeTypes> TransactionsProvider for BlockchainProvider<N> {
type Transaction = TxTy<N>;
fn transaction_id(&self, tx_hash: TxHash) -> ProviderResult<Option<TxNumber>> {
self.consistent_provider()?.transaction_id(tx_hash)
}
fn transaction_by_id(&self, id: TxNumber) -> ProviderResult<Option<Self::Transaction>> {
self.consistent_provider()?.transaction_by_id(id)
}
fn transaction_by_id_unhashed(
&self,
id: TxNumber,
) -> ProviderResult<Option<Self::Transaction>> {
self.consistent_provider()?.transaction_by_id_unhashed(id)
}
fn transaction_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Transaction>> {
self.consistent_provider()?.transaction_by_hash(hash)
}
fn transaction_by_hash_with_meta(
&self,
tx_hash: TxHash,
) -> ProviderResult<Option<(Self::Transaction, TransactionMeta)>> {
self.consistent_provider()?.transaction_by_hash_with_meta(tx_hash)
}
fn transaction_block(&self, id: TxNumber) -> ProviderResult<Option<BlockNumber>> {
self.consistent_provider()?.transaction_block(id)
}
fn transactions_by_block(
&self,
id: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Transaction>>> {
self.consistent_provider()?.transactions_by_block(id)
}
fn transactions_by_block_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Transaction>>> {
self.consistent_provider()?.transactions_by_block_range(range)
}
fn transactions_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Transaction>> {
self.consistent_provider()?.transactions_by_tx_range(range)
}
fn senders_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Address>> {
self.consistent_provider()?.senders_by_tx_range(range)
}
fn transaction_sender(&self, id: TxNumber) -> ProviderResult<Option<Address>> {
self.consistent_provider()?.transaction_sender(id)
}
}
impl<N: ProviderNodeTypes> ReceiptProvider for BlockchainProvider<N> {
type Receipt = ReceiptTy<N>;
fn receipt(&self, id: TxNumber) -> ProviderResult<Option<Self::Receipt>> {
self.consistent_provider()?.receipt(id)
}
fn receipt_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Receipt>> {
self.consistent_provider()?.receipt_by_hash(hash)
}
fn receipts_by_block(
&self,
block: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Receipt>>> {
self.consistent_provider()?.receipts_by_block(block)
}
fn receipts_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Receipt>> {
self.consistent_provider()?.receipts_by_tx_range(range)
}
fn receipts_by_block_range(
&self,
block_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Receipt>>> {
self.consistent_provider()?.receipts_by_block_range(block_range)
}
}
impl<N: ProviderNodeTypes> ReceiptProviderIdExt for BlockchainProvider<N> {
fn receipts_by_block_id(&self, block: BlockId) -> ProviderResult<Option<Vec<Self::Receipt>>> {
self.consistent_provider()?.receipts_by_block_id(block)
}
}
impl<N: ProviderNodeTypes> BlockBodyIndicesProvider for BlockchainProvider<N> {
fn block_body_indices(
&self,
number: BlockNumber,
) -> ProviderResult<Option<StoredBlockBodyIndices>> {
self.consistent_provider()?.block_body_indices(number)
}
fn block_body_indices_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<StoredBlockBodyIndices>> {
self.consistent_provider()?.block_body_indices_range(range)
}
}
impl<N: ProviderNodeTypes> StageCheckpointReader for BlockchainProvider<N> {
fn get_stage_checkpoint(&self, id: StageId) -> ProviderResult<Option<StageCheckpoint>> {
self.consistent_provider()?.get_stage_checkpoint(id)
}
fn get_stage_checkpoint_progress(&self, id: StageId) -> ProviderResult<Option<Vec<u8>>> {
self.consistent_provider()?.get_stage_checkpoint_progress(id)
}
fn get_all_checkpoints(&self) -> ProviderResult<Vec<(String, StageCheckpoint)>> {
self.consistent_provider()?.get_all_checkpoints()
}
}
impl<N: ProviderNodeTypes> PruneCheckpointReader for BlockchainProvider<N> {
fn get_prune_checkpoint(
&self,
segment: PruneSegment,
) -> ProviderResult<Option<PruneCheckpoint>> {
self.consistent_provider()?.get_prune_checkpoint(segment)
}
fn get_prune_checkpoints(&self) -> ProviderResult<Vec<(PruneSegment, PruneCheckpoint)>> {
self.consistent_provider()?.get_prune_checkpoints()
}
}
impl<N: NodeTypesWithDB> ChainSpecProvider for BlockchainProvider<N> {
type ChainSpec = N::ChainSpec;
fn chain_spec(&self) -> Arc<N::ChainSpec> {
self.database.chain_spec()
}
}
impl<N: ProviderNodeTypes> StateProviderFactory for BlockchainProvider<N> {
    /// Returns a state provider for the latest block.
fn latest(&self) -> ProviderResult<StateProviderBox> {
trace!(target: "providers::blockchain", "Getting latest block state provider");
// use latest state provider if the head state exists
if let Some(state) = self.canonical_in_memory_state.head_state() {
trace!(target: "providers::blockchain", "Using head state for latest state provider");
Ok(self.block_state_provider(&state)?.boxed())
} else {
trace!(target: "providers::blockchain", "Using database state for latest state provider");
self.database.latest()
}
}
/// Returns a [`StateProviderBox`] indexed by the given block number or tag.
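    ///
    /// For example (a sketch; `provider` is assumed to implement [`StateProviderFactory`]):
    ///
    /// ```rust,ignore
    /// // `Latest` uses the in-memory head state when available,
    /// // `Finalized`/`Safe` resolve through their block hashes,
    /// // and `Number(n)` resolves through the canonical hash at height `n`.
    /// let state = provider.state_by_block_number_or_tag(BlockNumberOrTag::Latest)?;
    /// ```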
fn state_by_block_number_or_tag(
&self,
number_or_tag: BlockNumberOrTag,
) -> ProviderResult<StateProviderBox> {
match number_or_tag {
BlockNumberOrTag::Latest => self.latest(),
BlockNumberOrTag::Finalized => {
// we can only get the finalized state by hash, not by num
let hash =
self.finalized_block_hash()?.ok_or(ProviderError::FinalizedBlockNotFound)?;
self.state_by_block_hash(hash)
}
BlockNumberOrTag::Safe => {
// we can only get the safe state by hash, not by num
let hash = self.safe_block_hash()?.ok_or(ProviderError::SafeBlockNotFound)?;
self.state_by_block_hash(hash)
}
BlockNumberOrTag::Earliest => {
self.history_by_block_number(self.earliest_block_number()?)
}
BlockNumberOrTag::Pending => self.pending(),
BlockNumberOrTag::Number(num) => {
let hash = self
.block_hash(num)?
.ok_or_else(|| ProviderError::HeaderNotFound(num.into()))?;
self.state_by_block_hash(hash)
}
}
}
fn history_by_block_number(
&self,
block_number: BlockNumber,
) -> ProviderResult<StateProviderBox> {
trace!(target: "providers::blockchain", ?block_number, "Getting history by block number");
let provider = self.consistent_provider()?;
provider.ensure_canonical_block(block_number)?;
let hash = provider
.block_hash(block_number)?
.ok_or_else(|| ProviderError::HeaderNotFound(block_number.into()))?;
provider.into_state_provider_at_block_hash(hash)
}
fn history_by_block_hash(&self, block_hash: BlockHash) -> ProviderResult<StateProviderBox> {
trace!(target: "providers::blockchain", ?block_hash, "Getting history by block hash");
self.consistent_provider()?.into_state_provider_at_block_hash(block_hash)
}
fn state_by_block_hash(&self, hash: BlockHash) -> ProviderResult<StateProviderBox> {
trace!(target: "providers::blockchain", ?hash, "Getting state by block hash");
if let Ok(state) = self.history_by_block_hash(hash) {
// This could be tracked by a historical block
Ok(state)
} else if let Ok(Some(pending)) = self.pending_state_by_hash(hash) {
// .. or this could be the pending state
Ok(pending)
} else {
// if we couldn't find it anywhere, then we should return an error
Err(ProviderError::StateForHashNotFound(hash))
}
}
/// Returns the state provider for pending state.
///
/// If there's no pending block available then the latest state provider is returned:
/// [`Self::latest`]
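    ///
    /// For example (a sketch; `provider` is assumed to implement [`StateProviderFactory`]):
    ///
    /// ```rust,ignore
    /// // Resolves to the pending in-memory state if one exists,
    /// // otherwise falls back to the latest state.
    /// let state = provider.pending()?;
    /// ```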
fn pending(&self) -> ProviderResult<StateProviderBox> {
trace!(target: "providers::blockchain", "Getting provider for pending state");
if let Some(pending) = self.canonical_in_memory_state.pending_state() {
// we have a pending block
return Ok(Box::new(self.block_state_provider(&pending)?));
}
// fallback to latest state if the pending block is not available
self.latest()
}
fn pending_state_by_hash(&self, block_hash: B256) -> ProviderResult<Option<StateProviderBox>> {
if let Some(pending) = self.canonical_in_memory_state.pending_state() {
if pending.hash() == block_hash {
return Ok(Some(Box::new(self.block_state_provider(&pending)?)));
}
}
Ok(None)
}
fn maybe_pending(&self) -> ProviderResult<Option<StateProviderBox>> {
if let Some(pending) = self.canonical_in_memory_state.pending_state() {
return Ok(Some(Box::new(self.block_state_provider(&pending)?)))
}
Ok(None)
}
}
impl<N: NodeTypesWithDB> HashedPostStateProvider for BlockchainProvider<N> {
fn hashed_post_state(&self, bundle_state: &BundleState) -> HashedPostState {
HashedPostState::from_bundle_state::<KeccakKeyHasher>(bundle_state.state())
}
}
impl<N: ProviderNodeTypes> CanonChainTracker for BlockchainProvider<N> {
type Header = HeaderTy<N>;
fn on_forkchoice_update_received(&self, _update: &ForkchoiceState) {
// update timestamp
self.canonical_in_memory_state.on_forkchoice_update_received();
}
fn last_received_update_timestamp(&self) -> Option<Instant> {
self.canonical_in_memory_state.last_received_update_timestamp()
}
fn set_canonical_head(&self, header: SealedHeader<Self::Header>) {
self.canonical_in_memory_state.set_canonical_head(header);
}
fn set_safe(&self, header: SealedHeader<Self::Header>) {
self.canonical_in_memory_state.set_safe(header);
}
fn set_finalized(&self, header: SealedHeader<Self::Header>) {
self.canonical_in_memory_state.set_finalized(header);
}
}
impl<N: ProviderNodeTypes> BlockReaderIdExt for BlockchainProvider<N>
where
Self: ReceiptProviderIdExt,
{
fn block_by_id(&self, id: BlockId) -> ProviderResult<Option<Self::Block>> {
self.consistent_provider()?.block_by_id(id)
}
fn header_by_number_or_tag(
&self,
id: BlockNumberOrTag,
) -> ProviderResult<Option<Self::Header>> {
self.consistent_provider()?.header_by_number_or_tag(id)
}
fn sealed_header_by_number_or_tag(
&self,
id: BlockNumberOrTag,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
self.consistent_provider()?.sealed_header_by_number_or_tag(id)
}
fn sealed_header_by_id(
&self,
id: BlockId,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
self.consistent_provider()?.sealed_header_by_id(id)
}
fn header_by_id(&self, id: BlockId) -> ProviderResult<Option<Self::Header>> {
self.consistent_provider()?.header_by_id(id)
}
}
impl<N: ProviderNodeTypes> CanonStateSubscriptions for BlockchainProvider<N> {
fn subscribe_to_canonical_state(&self) -> CanonStateNotifications<Self::Primitives> {
self.canonical_in_memory_state.subscribe_canon_state()
}
}
impl<N: ProviderNodeTypes> ForkChoiceSubscriptions for BlockchainProvider<N> {
type Header = HeaderTy<N>;
fn subscribe_safe_block(&self) -> ForkChoiceNotifications<Self::Header> {
let receiver = self.canonical_in_memory_state.subscribe_safe_block();
ForkChoiceNotifications(receiver)
}
fn subscribe_finalized_block(&self) -> ForkChoiceNotifications<Self::Header> {
let receiver = self.canonical_in_memory_state.subscribe_finalized_block();
ForkChoiceNotifications(receiver)
}
}
impl<N: ProviderNodeTypes> StorageChangeSetReader for BlockchainProvider<N> {
fn storage_changeset(
&self,
block_number: BlockNumber,
) -> ProviderResult<Vec<(BlockNumberAddress, StorageEntry)>> {
self.consistent_provider()?.storage_changeset(block_number)
}
}
impl<N: ProviderNodeTypes> ChangeSetReader for BlockchainProvider<N> {
fn account_block_changeset(
&self,
block_number: BlockNumber,
) -> ProviderResult<Vec<AccountBeforeTx>> {
self.consistent_provider()?.account_block_changeset(block_number)
}
}
impl<N: ProviderNodeTypes> AccountReader for BlockchainProvider<N> {
/// Get basic account information.
fn basic_account(&self, address: &Address) -> ProviderResult<Option<Account>> {
self.consistent_provider()?.basic_account(address)
}
}
impl<N: ProviderNodeTypes> StateReader for BlockchainProvider<N> {
type Receipt = ReceiptTy<N>;
/// Re-constructs the [`ExecutionOutcome`] from in-memory and database state, if necessary.
///
/// If data for the block does not exist, this will return [`None`].
///
/// NOTE: This cannot be called safely in a loop outside of the blockchain tree thread. This is
/// because the [`CanonicalInMemoryState`] could change during a reorg, causing results to be
/// inconsistent. Currently this can safely be called within the blockchain tree thread,
/// because the tree thread is responsible for modifying the [`CanonicalInMemoryState`] in the
/// first place.
fn get_state(
&self,
block: BlockNumber,
) -> ProviderResult<Option<ExecutionOutcome<Self::Receipt>>> {
StateReader::get_state(&self.consistent_provider()?, block)
}
}
#[cfg(test)]
mod tests {
use crate::{
providers::BlockchainProvider,
test_utils::{
create_test_provider_factory, create_test_provider_factory_with_chain_spec,
MockNodeTypesWithDB,
},
writer::UnifiedStorageWriter,
BlockWriter, CanonChainTracker, ProviderFactory, StaticFileProviderFactory,
StaticFileWriter,
};
use alloy_eips::{BlockHashOrNumber, BlockNumHash, BlockNumberOrTag};
use alloy_primitives::{BlockNumber, TxNumber, B256};
use itertools::Itertools;
use rand::Rng;
use reth_chain_state::{
test_utils::TestBlockBuilder, CanonStateNotification, CanonStateSubscriptions,
CanonicalInMemoryState, ExecutedBlock, ExecutedBlockWithTrieUpdates, ExecutedTrieUpdates,
NewCanonicalChain,
};
use reth_chainspec::{
ChainSpec, ChainSpecBuilder, ChainSpecProvider, EthereumHardfork, MAINNET,
};
use reth_db_api::{
cursor::DbCursorRO,
models::{AccountBeforeTx, StoredBlockBodyIndices},
tables,
transaction::DbTx,
};
use reth_errors::ProviderError;
use reth_ethereum_primitives::{Block, EthPrimitives, Receipt};
use reth_execution_types::{Chain, ExecutionOutcome};
use reth_primitives_traits::{
BlockBody, RecoveredBlock, SealedBlock, SignedTransaction, SignerRecoverable,
};
use reth_static_file_types::StaticFileSegment;
use reth_storage_api::{
BlockBodyIndicesProvider, BlockHashReader, BlockIdReader, BlockNumReader, BlockReader,
BlockReaderIdExt, BlockSource, ChangeSetReader, DatabaseProviderFactory, HeaderProvider,
ReceiptProvider, ReceiptProviderIdExt, StateProviderFactory, TransactionVariant,
TransactionsProvider,
};
use reth_testing_utils::generators::{
self, random_block, random_block_range, random_changeset_range, random_eoa_accounts,
random_receipt, BlockParams, BlockRangeParams,
};
use revm_database::BundleState;
use std::{
ops::{Bound, Deref, Range, RangeBounds},
sync::Arc,
time::Instant,
};
const TEST_BLOCKS_COUNT: usize = 5;
const TEST_TRANSACTIONS_COUNT: u8 = 4;
fn random_blocks(
rng: &mut impl Rng,
database_blocks: usize,
in_memory_blocks: usize,
requests_count: Option<Range<u8>>,
withdrawals_count: Option<Range<u8>>,
tx_count: impl RangeBounds<u8>,
) -> (Vec<SealedBlock<Block>>, Vec<SealedBlock<Block>>) {
let block_range = (database_blocks + in_memory_blocks - 1) as u64;
        let tx_start = match tx_count.start_bound() {
            Bound::Included(&n) => n,
            Bound::Excluded(&n) => n + 1,
            Bound::Unbounded => u8::MIN,
        };
        let tx_end = match tx_count.end_bound() {
            Bound::Included(&n) => n + 1,
            Bound::Excluded(&n) => n,
            Bound::Unbounded => u8::MAX,
        };
let blocks = random_block_range(
rng,
0..=block_range,
BlockRangeParams {
parent: Some(B256::ZERO),
tx_count: tx_start..tx_end,
requests_count,
withdrawals_count,
},
);
let (database_blocks, in_memory_blocks) = blocks.split_at(database_blocks);
(database_blocks.to_vec(), in_memory_blocks.to_vec())
}
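For reference, the `tx_count` bound normalization above can be modeled as a standalone helper that turns any `RangeBounds<u8>` into a half-open `(start, end)` pair (`half_open` is an illustration-only name; `random_blocks` inlines this logic):

```rust
use std::ops::{Bound, RangeBounds};

/// Normalize any `RangeBounds<u8>` into a half-open `(start, end)` pair,
/// so that `start..end` covers exactly the requested transaction counts.
fn half_open(bounds: impl RangeBounds<u8>) -> (u8, u8) {
    let start = match bounds.start_bound() {
        Bound::Included(&n) => n,
        Bound::Excluded(&n) => n + 1, // exclusive start begins one past n
        Bound::Unbounded => u8::MIN,
    };
    let end = match bounds.end_bound() {
        Bound::Included(&n) => n + 1, // inclusive end covers n itself
        Bound::Excluded(&n) => n,
        Bound::Unbounded => u8::MAX,
    };
    (start, end)
}
```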
#[expect(clippy::type_complexity)]
fn provider_with_chain_spec_and_random_blocks(
rng: &mut impl Rng,
chain_spec: Arc<ChainSpec>,
database_blocks: usize,
in_memory_blocks: usize,
block_range_params: BlockRangeParams,
) -> eyre::Result<(
BlockchainProvider<MockNodeTypesWithDB>,
Vec<SealedBlock<Block>>,
Vec<SealedBlock<Block>>,
Vec<Vec<Receipt>>,
)> {
let (database_blocks, in_memory_blocks) = random_blocks(
rng,
database_blocks,
in_memory_blocks,
block_range_params.requests_count,
block_range_params.withdrawals_count,
block_range_params.tx_count,
);
let receipts: Vec<Vec<_>> = database_blocks
.iter()
.chain(in_memory_blocks.iter())
            .map(|block| block.body().transactions.iter())
            .map(|txs| txs.map(|tx| random_receipt(rng, tx, Some(2), None)).collect())
.collect();
let factory = create_test_provider_factory_with_chain_spec(chain_spec);
let provider_rw = factory.database_provider_rw()?;
let static_file_provider = factory.static_file_provider();
        // Write transactions to static files with the right `tx_num`
let mut tx_num = provider_rw
.block_body_indices(database_blocks.first().as_ref().unwrap().number.saturating_sub(1))?
.map(|indices| indices.next_tx_num())
.unwrap_or_default();
// Insert blocks into the database
for (block, receipts) in database_blocks.iter().zip(&receipts) {
// TODO: this should be moved inside `insert_historical_block`: <https://github.com/paradigmxyz/reth/issues/11524>
// Source: SeismicSystems/seismic-reth @ 62834bd8deb86513778624a3ba33f55f4d6a1471 (Apache-2.0)
// File: crates/storage/provider/src/providers/state/macros.rs
//! Helper macros for implementing traits for various [`StateProvider`](crate::StateProvider)
//! implementations
/// A macro that delegates trait implementations to the `as_ref` function of the type.
///
/// Used to implement provider traits.
macro_rules! delegate_impls_to_as_ref {
(for $target:ty => $($trait:ident $(where [$($generics:tt)*])? { $(fn $func:ident$(<$($generic_arg:ident: $generic_arg_ty:path),*>)?(&self, $($arg:ident: $argty:ty),*) -> $ret:path;)* })* ) => {
$(
impl<'a, $($($generics)*)?> $trait for $target {
$(
fn $func$(<$($generic_arg: $generic_arg_ty),*>)?(&self, $($arg: $argty),*) -> $ret {
self.as_ref().$func($($arg),*)
}
)*
}
)*
};
}
pub(crate) use delegate_impls_to_as_ref;
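Written out by hand, the forwarding code that `delegate_impls_to_as_ref!` generates looks like the following minimal sketch (`Greeter`, `Inner`, and `Wrapper` are illustration-only names, not types from this crate):

```rust
// A trait to delegate, and an inner type that implements it.
trait Greeter {
    fn greet(&self, name: &str) -> String;
}

struct Inner;
impl Greeter for Inner {
    fn greet(&self, name: &str) -> String {
        format!("hello, {name}")
    }
}

// A wrapper exposing `as_ref`, like `LatestStateProvider::as_ref`.
struct Wrapper(Inner);
impl Wrapper {
    const fn as_ref(&self) -> &Inner {
        &self.0
    }
}

// The macro would generate exactly this kind of forwarding impl:
impl Greeter for Wrapper {
    fn greet(&self, name: &str) -> String {
        self.as_ref().greet(name)
    }
}
```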
/// Delegates the provider trait implementations to the `as_ref` function of the type:
///
/// [`AccountReader`](crate::AccountReader)
/// [`BlockHashReader`](crate::BlockHashReader)
/// [`StateProvider`](crate::StateProvider)
macro_rules! delegate_provider_impls {
($target:ty $(where [$($generics:tt)*])?) => {
$crate::providers::state::macros::delegate_impls_to_as_ref!(
for $target =>
AccountReader $(where [$($generics)*])? {
fn basic_account(&self, address: &alloy_primitives::Address) -> reth_storage_errors::provider::ProviderResult<Option<reth_primitives_traits::Account>>;
}
BlockHashReader $(where [$($generics)*])? {
fn block_hash(&self, number: u64) -> reth_storage_errors::provider::ProviderResult<Option<alloy_primitives::B256>>;
fn canonical_hashes_range(&self, start: alloy_primitives::BlockNumber, end: alloy_primitives::BlockNumber) -> reth_storage_errors::provider::ProviderResult<Vec<alloy_primitives::B256>>;
}
StateProvider $(where [$($generics)*])? {
fn storage(&self, account: alloy_primitives::Address, storage_key: alloy_primitives::StorageKey) -> reth_storage_errors::provider::ProviderResult<Option<revm_state::FlaggedStorage>>;
}
BytecodeReader $(where [$($generics)*])? {
fn bytecode_by_hash(&self, code_hash: &alloy_primitives::B256) -> reth_storage_errors::provider::ProviderResult<Option<reth_primitives_traits::Bytecode>>;
}
StateRootProvider $(where [$($generics)*])? {
fn state_root(&self, state: reth_trie::HashedPostState) -> reth_storage_errors::provider::ProviderResult<alloy_primitives::B256>;
fn state_root_from_nodes(&self, input: reth_trie::TrieInput) -> reth_storage_errors::provider::ProviderResult<alloy_primitives::B256>;
fn state_root_with_updates(&self, state: reth_trie::HashedPostState) -> reth_storage_errors::provider::ProviderResult<(alloy_primitives::B256, reth_trie::updates::TrieUpdates)>;
fn state_root_from_nodes_with_updates(&self, input: reth_trie::TrieInput) -> reth_storage_errors::provider::ProviderResult<(alloy_primitives::B256, reth_trie::updates::TrieUpdates)>;
}
StorageRootProvider $(where [$($generics)*])? {
fn storage_root(&self, address: alloy_primitives::Address, storage: reth_trie::HashedStorage) -> reth_storage_errors::provider::ProviderResult<alloy_primitives::B256>;
fn storage_proof(&self, address: alloy_primitives::Address, slot: alloy_primitives::B256, storage: reth_trie::HashedStorage) -> reth_storage_errors::provider::ProviderResult<reth_trie::StorageProof>;
fn storage_multiproof(&self, address: alloy_primitives::Address, slots: &[alloy_primitives::B256], storage: reth_trie::HashedStorage) -> reth_storage_errors::provider::ProviderResult<reth_trie::StorageMultiProof>;
}
StateProofProvider $(where [$($generics)*])? {
fn proof(&self, input: reth_trie::TrieInput, address: alloy_primitives::Address, slots: &[alloy_primitives::B256]) -> reth_storage_errors::provider::ProviderResult<reth_trie::AccountProof>;
fn multiproof(&self, input: reth_trie::TrieInput, targets: reth_trie::MultiProofTargets) -> reth_storage_errors::provider::ProviderResult<reth_trie::MultiProof>;
fn witness(&self, input: reth_trie::TrieInput, target: reth_trie::HashedPostState) -> reth_storage_errors::provider::ProviderResult<Vec<alloy_primitives::Bytes>>;
}
HashedPostStateProvider $(where [$($generics)*])? {
fn hashed_post_state(&self, bundle_state: &revm_database::BundleState) -> reth_trie::HashedPostState;
}
);
}
}
pub(crate) use delegate_provider_impls;
// File: crates/storage/provider/src/providers/state/mod.rs
//! [`StateProvider`](crate::StateProvider) implementations
pub(crate) mod historical;
pub(crate) mod latest;
pub(crate) mod macros;
// File: crates/storage/provider/src/providers/state/latest.rs
use crate::{
providers::state::macros::delegate_provider_impls, AccountReader, BlockHashReader,
HashedPostStateProvider, StateProvider, StateRootProvider,
};
use alloy_primitives::{Address, BlockNumber, Bytes, StorageKey, B256};
use reth_db_api::{cursor::DbDupCursorRO, tables, transaction::DbTx};
use reth_primitives_traits::{Account, Bytecode};
use reth_storage_api::{BytecodeReader, DBProvider, StateProofProvider, StorageRootProvider};
use reth_storage_errors::provider::{ProviderError, ProviderResult};
use reth_trie::{
proof::{Proof, StorageProof},
updates::TrieUpdates,
witness::TrieWitness,
AccountProof, HashedPostState, HashedStorage, KeccakKeyHasher, MultiProof, MultiProofTargets,
StateRoot, StorageMultiProof, StorageRoot, TrieInput,
};
use reth_trie_db::{
DatabaseProof, DatabaseStateRoot, DatabaseStorageProof, DatabaseStorageRoot,
DatabaseTrieWitness,
};
use revm_state::FlaggedStorage;
/// State provider over the latest state that borrows a provider reference.
///
/// Wraps a [`DBProvider`] to get access to the database.
#[derive(Debug)]
pub struct LatestStateProviderRef<'b, Provider>(&'b Provider);
impl<'b, Provider: DBProvider> LatestStateProviderRef<'b, Provider> {
/// Create new state provider
pub const fn new(provider: &'b Provider) -> Self {
Self(provider)
}
fn tx(&self) -> &Provider::Tx {
self.0.tx_ref()
}
}
impl<Provider: DBProvider> AccountReader for LatestStateProviderRef<'_, Provider> {
/// Get basic account information.
fn basic_account(&self, address: &Address) -> ProviderResult<Option<Account>> {
self.tx().get_by_encoded_key::<tables::PlainAccountState>(address).map_err(Into::into)
}
}
impl<Provider: BlockHashReader> BlockHashReader for LatestStateProviderRef<'_, Provider> {
/// Get block hash by number.
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
self.0.block_hash(number)
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
self.0.canonical_hashes_range(start, end)
}
}
impl<Provider: DBProvider + Sync> StateRootProvider for LatestStateProviderRef<'_, Provider> {
fn state_root(&self, hashed_state: HashedPostState) -> ProviderResult<B256> {
StateRoot::overlay_root(self.tx(), hashed_state)
.map_err(|err| ProviderError::Database(err.into()))
}
fn state_root_from_nodes(&self, input: TrieInput) -> ProviderResult<B256> {
StateRoot::overlay_root_from_nodes(self.tx(), input)
.map_err(|err| ProviderError::Database(err.into()))
}
fn state_root_with_updates(
&self,
hashed_state: HashedPostState,
) -> ProviderResult<(B256, TrieUpdates)> {
StateRoot::overlay_root_with_updates(self.tx(), hashed_state)
.map_err(|err| ProviderError::Database(err.into()))
}
fn state_root_from_nodes_with_updates(
&self,
input: TrieInput,
) -> ProviderResult<(B256, TrieUpdates)> {
StateRoot::overlay_root_from_nodes_with_updates(self.tx(), input)
.map_err(|err| ProviderError::Database(err.into()))
}
}
impl<Provider: DBProvider + Sync> StorageRootProvider for LatestStateProviderRef<'_, Provider> {
fn storage_root(
&self,
address: Address,
hashed_storage: HashedStorage,
) -> ProviderResult<B256> {
StorageRoot::overlay_root(self.tx(), address, hashed_storage)
.map_err(|err| ProviderError::Database(err.into()))
}
fn storage_proof(
&self,
address: Address,
slot: B256,
hashed_storage: HashedStorage,
) -> ProviderResult<reth_trie::StorageProof> {
StorageProof::overlay_storage_proof(self.tx(), address, slot, hashed_storage)
.map_err(ProviderError::from)
}
fn storage_multiproof(
&self,
address: Address,
slots: &[B256],
hashed_storage: HashedStorage,
) -> ProviderResult<StorageMultiProof> {
StorageProof::overlay_storage_multiproof(self.tx(), address, slots, hashed_storage)
.map_err(ProviderError::from)
}
}
impl<Provider: DBProvider + Sync> StateProofProvider for LatestStateProviderRef<'_, Provider> {
fn proof(
&self,
input: TrieInput,
address: Address,
slots: &[B256],
) -> ProviderResult<AccountProof> {
Proof::overlay_account_proof(self.tx(), input, address, slots).map_err(ProviderError::from)
}
fn multiproof(
&self,
input: TrieInput,
targets: MultiProofTargets,
) -> ProviderResult<MultiProof> {
Proof::overlay_multiproof(self.tx(), input, targets).map_err(ProviderError::from)
}
fn witness(&self, input: TrieInput, target: HashedPostState) -> ProviderResult<Vec<Bytes>> {
TrieWitness::overlay_witness(self.tx(), input, target)
.map_err(ProviderError::from)
.map(|hm| hm.into_values().collect())
}
}
impl<Provider: DBProvider + Sync> HashedPostStateProvider for LatestStateProviderRef<'_, Provider> {
fn hashed_post_state(&self, bundle_state: &revm_database::BundleState) -> HashedPostState {
HashedPostState::from_bundle_state::<KeccakKeyHasher>(bundle_state.state())
}
}
impl<Provider: DBProvider + BlockHashReader> StateProvider
for LatestStateProviderRef<'_, Provider>
{
/// Get storage.
fn storage(
&self,
account: Address,
storage_key: StorageKey,
) -> ProviderResult<Option<FlaggedStorage>> {
let mut cursor = self.tx().cursor_dup_read::<tables::PlainStorageState>()?;
if let Some(entry) = cursor.seek_by_key_subkey(account, storage_key)? {
if entry.key == storage_key {
return Ok(Some(entry.into()))
}
}
Ok(None)
}
}
impl<Provider: DBProvider + BlockHashReader> BytecodeReader
for LatestStateProviderRef<'_, Provider>
{
/// Get account code by its hash
fn bytecode_by_hash(&self, code_hash: &B256) -> ProviderResult<Option<Bytecode>> {
self.tx().get_by_encoded_key::<tables::Bytecodes>(code_hash).map_err(Into::into)
}
}
/// State provider for the latest state.
#[derive(Debug)]
pub struct LatestStateProvider<Provider>(Provider);
impl<Provider: DBProvider> LatestStateProvider<Provider> {
/// Create new state provider
pub const fn new(db: Provider) -> Self {
Self(db)
}
    /// Returns a new provider that borrows the underlying `Provider`
#[inline(always)]
const fn as_ref(&self) -> LatestStateProviderRef<'_, Provider> {
LatestStateProviderRef::new(&self.0)
}
}
// Delegates all provider impls to [LatestStateProviderRef]
delegate_provider_impls!(LatestStateProvider<Provider> where [Provider: DBProvider + BlockHashReader ]);
#[cfg(test)]
mod tests {
use super::*;
const fn assert_state_provider<T: StateProvider>() {}
#[expect(dead_code)]
const fn assert_latest_state_provider<T: DBProvider + BlockHashReader>() {
assert_state_provider::<LatestStateProvider<T>>();
}
}
// File: crates/storage/provider/src/providers/state/historical.rs
use crate::{
providers::state::macros::delegate_provider_impls, AccountReader, BlockHashReader,
HashedPostStateProvider, ProviderError, StateProvider, StateRootProvider,
};
use alloy_eips::merge::EPOCH_SLOTS;
use alloy_primitives::{Address, BlockNumber, Bytes, StorageKey, B256};
use reth_db_api::{
cursor::{DbCursorRO, DbDupCursorRO},
models::{storage_sharded_key::StorageShardedKey, ShardedKey},
table::Table,
tables,
transaction::DbTx,
BlockNumberList,
};
use reth_primitives_traits::{Account, Bytecode};
use reth_storage_api::{
BlockNumReader, BytecodeReader, DBProvider, StateProofProvider, StorageRootProvider,
};
use reth_storage_errors::provider::ProviderResult;
use reth_trie::{
proof::{Proof, StorageProof},
updates::TrieUpdates,
witness::TrieWitness,
AccountProof, HashedPostState, HashedStorage, KeccakKeyHasher, MultiProof, MultiProofTargets,
StateRoot, StorageMultiProof, StorageRoot, TrieInput,
};
use reth_trie_db::{
DatabaseHashedPostState, DatabaseHashedStorage, DatabaseProof, DatabaseStateRoot,
DatabaseStorageProof, DatabaseStorageRoot, DatabaseTrieWitness,
};
use std::fmt::Debug;
use revm_state::FlaggedStorage;
/// State provider for a given block number which takes a tx reference.
///
/// The historical state provider exposes the state at the start of the provided block number,
/// i.e. all changes made within that block are excluded.
///
/// Historical state provider reads the following tables:
/// - [`tables::AccountsHistory`]
/// - [`tables::Bytecodes`]
/// - [`tables::StoragesHistory`]
/// - [`tables::AccountChangeSets`]
/// - [`tables::StorageChangeSets`]
#[derive(Debug)]
pub struct HistoricalStateProviderRef<'b, Provider> {
/// Database provider
provider: &'b Provider,
    /// The block number that serves as the main index into the historical state of accounts and
    /// storages.
block_number: BlockNumber,
/// Lowest blocks at which different parts of the state are available.
lowest_available_blocks: LowestAvailableBlocks,
}
/// Result of a lookup in the account or storage history index.
#[derive(Debug, Eq, PartialEq)]
pub enum HistoryInfo {
    /// The key has no write at or before the target block (it may only be written later, or
    /// never).
    NotYetWritten,
    /// The value as of the target block is recorded in the changeset of the given block number.
    InChangeset(u64),
    /// The last write happened before the history shard's tail; read the plain state.
    InPlainState,
    /// History may have been pruned for this key; fall back to a plain state lookup.
    MaybeInPlainState,
}
impl<'b, Provider: DBProvider + BlockNumReader> HistoricalStateProviderRef<'b, Provider> {
/// Create new `StateProvider` for historical block number
pub fn new(provider: &'b Provider, block_number: BlockNumber) -> Self {
Self { provider, block_number, lowest_available_blocks: Default::default() }
}
/// Create new `StateProvider` for historical block number and lowest block numbers at which
/// account & storage histories are available.
pub const fn new_with_lowest_available_blocks(
provider: &'b Provider,
block_number: BlockNumber,
lowest_available_blocks: LowestAvailableBlocks,
) -> Self {
Self { provider, block_number, lowest_available_blocks }
}
/// Lookup an account in the `AccountsHistory` table
pub fn account_history_lookup(&self, address: Address) -> ProviderResult<HistoryInfo> {
if !self.lowest_available_blocks.is_account_history_available(self.block_number) {
return Err(ProviderError::StateAtBlockPruned(self.block_number))
}
// history key to search IntegerList of block number changesets.
let history_key = ShardedKey::new(address, self.block_number);
self.history_info::<tables::AccountsHistory, _>(
history_key,
|key| key.key == address,
self.lowest_available_blocks.account_history_block_number,
)
}
/// Lookup a storage key in the `StoragesHistory` table
pub fn storage_history_lookup(
&self,
address: Address,
storage_key: StorageKey,
) -> ProviderResult<HistoryInfo> {
if !self.lowest_available_blocks.is_storage_history_available(self.block_number) {
return Err(ProviderError::StateAtBlockPruned(self.block_number))
}
// history key to search IntegerList of block number changesets.
let history_key = StorageShardedKey::new(address, storage_key, self.block_number);
self.history_info::<tables::StoragesHistory, _>(
history_key,
|key| key.address == address && key.sharded_key.key == storage_key,
self.lowest_available_blocks.storage_history_block_number,
)
}
/// Checks and returns `true` if distance to historical block exceeds the provided limit.
fn check_distance_against_limit(&self, limit: u64) -> ProviderResult<bool> {
let tip = self.provider.last_block_number()?;
Ok(tip.saturating_sub(self.block_number) > limit)
}
/// Retrieve revert hashed state for this history provider.
fn revert_state(&self) -> ProviderResult<HashedPostState> {
if !self.lowest_available_blocks.is_account_history_available(self.block_number) ||
!self.lowest_available_blocks.is_storage_history_available(self.block_number)
{
return Err(ProviderError::StateAtBlockPruned(self.block_number))
}
if self.check_distance_against_limit(EPOCH_SLOTS)? {
tracing::warn!(
target: "provider::historical_sp",
                block_number = self.block_number,
"Attempt to calculate state root for an old block might result in OOM"
);
}
Ok(HashedPostState::from_reverts::<KeccakKeyHasher>(self.tx(), self.block_number)?)
}
/// Retrieve revert hashed storage for this history provider and target address.
fn revert_storage(&self, address: Address) -> ProviderResult<HashedStorage> {
if !self.lowest_available_blocks.is_storage_history_available(self.block_number) {
return Err(ProviderError::StateAtBlockPruned(self.block_number))
}
if self.check_distance_against_limit(EPOCH_SLOTS * 10)? {
tracing::warn!(
target: "provider::historical_sp",
                block_number = self.block_number,
"Attempt to calculate storage root for an old block might result in OOM"
);
}
Ok(HashedStorage::from_reverts(self.tx(), address, self.block_number)?)
}
fn history_info<T, K>(
&self,
key: K,
key_filter: impl Fn(&K) -> bool,
lowest_available_block_number: Option<BlockNumber>,
) -> ProviderResult<HistoryInfo>
where
T: Table<Key = K, Value = BlockNumberList>,
{
let mut cursor = self.tx().cursor_read::<T>()?;
        // Lookup the history chunk in the history index. If the key does not appear in the
        // index, the first chunk for the next key will be returned, so we filter out chunks
        // with a different key.
if let Some(chunk) = cursor.seek(key)?.filter(|(key, _)| key_filter(key)).map(|x| x.1 .0) {
// Get the rank of the first entry before or equal to our block.
let mut rank = chunk.rank(self.block_number);
// Adjust the rank, so that we have the rank of the first entry strictly before our
// block (not equal to it).
if rank.checked_sub(1).and_then(|rank| chunk.select(rank)) == Some(self.block_number) {
rank -= 1
};
let block_number = chunk.select(rank);
// If our block is before the first entry in the index chunk and this first entry
// doesn't equal to our block, it might be before the first write ever. To check, we
// look at the previous entry and check if the key is the same.
            // This check is worth it: the `cursor.prev()` call is rarely triggered (the `if`
            // short-circuits) and, when it passes, we save a full seek into the changeset/plain
            // state table.
if rank == 0 &&
block_number != Some(self.block_number) &&
!cursor.prev()?.is_some_and(|(key, _)| key_filter(&key))
{
if let (Some(_), Some(block_number)) = (lowest_available_block_number, block_number)
{
// The key may have been written, but due to pruning we may not have changesets
// and history, so we need to make a changeset lookup.
Ok(HistoryInfo::InChangeset(block_number))
} else {
// The key is written to, but only after our block.
Ok(HistoryInfo::NotYetWritten)
}
} else if let Some(block_number) = block_number {
// The chunk contains an entry for a write after our block, return it.
Ok(HistoryInfo::InChangeset(block_number))
} else {
// The chunk does not contain an entry for a write after our block. This can only
// happen if this is the last chunk and so we need to look in the plain state.
Ok(HistoryInfo::InPlainState)
}
} else if lowest_available_block_number.is_some() {
// The key may have been written, but due to pruning we may not have changesets and
// history, so we need to make a plain state lookup.
Ok(HistoryInfo::MaybeInPlainState)
} else {
// The key has not been written to at all.
Ok(HistoryInfo::NotYetWritten)
}
}
/// Set the lowest block number at which the account history is available.
pub const fn with_lowest_available_account_history_block_number(
mut self,
block_number: BlockNumber,
) -> Self {
self.lowest_available_blocks.account_history_block_number = Some(block_number);
self
}
/// Set the lowest block number at which the storage history is available.
pub const fn with_lowest_available_storage_history_block_number(
mut self,
block_number: BlockNumber,
) -> Self {
self.lowest_available_blocks.storage_history_block_number = Some(block_number);
self
}
}
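The rank-then-adjust dance in `history_info` boils down to "find the first shard entry at or after the target block". A standalone model of that step, using a plain sorted slice in place of the compressed `BlockNumberList` (`first_write_at_or_after` is an illustration-only name):

```rust
/// Returns the block number of the first changeset entry at or after
/// `block_number` in a sorted shard; `None` means the shard has no entry
/// after the target block, so the value lives in the plain state.
fn first_write_at_or_after(shard: &[u64], block_number: u64) -> Option<u64> {
    // Index of the first entry >= block_number; this is what the
    // rank/select adjustment in `history_info` computes.
    let idx = shard.partition_point(|&b| b < block_number);
    shard.get(idx).copied()
}
```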
impl<Provider: DBProvider + BlockNumReader> HistoricalStateProviderRef<'_, Provider> {
fn tx(&self) -> &Provider::Tx {
self.provider.tx_ref()
}
}
impl<Provider: DBProvider + BlockNumReader> AccountReader
for HistoricalStateProviderRef<'_, Provider>
{
/// Get basic account information.
fn basic_account(&self, address: &Address) -> ProviderResult<Option<Account>> {
match self.account_history_lookup(*address)? {
HistoryInfo::NotYetWritten => Ok(None),
HistoryInfo::InChangeset(changeset_block_number) => Ok(self
.tx()
.cursor_dup_read::<tables::AccountChangeSets>()?
.seek_by_key_subkey(changeset_block_number, *address)?
.filter(|acc| &acc.address == address)
.ok_or(ProviderError::AccountChangesetNotFound {
block_number: changeset_block_number,
address: *address,
})?
.info),
HistoryInfo::InPlainState | HistoryInfo::MaybeInPlainState => {
Ok(self.tx().get_by_encoded_key::<tables::PlainAccountState>(address)?)
}
}
}
}
impl<Provider: DBProvider + BlockNumReader + BlockHashReader> BlockHashReader
for HistoricalStateProviderRef<'_, Provider>
{
/// Get block hash by number.
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
self.provider.block_hash(number)
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
self.provider.canonical_hashes_range(start, end)
}
}
impl<Provider: DBProvider + BlockNumReader> StateRootProvider
for HistoricalStateProviderRef<'_, Provider>
{
fn state_root(&self, hashed_state: HashedPostState) -> ProviderResult<B256> {
let mut revert_state = self.revert_state()?;
revert_state.extend(hashed_state);
StateRoot::overlay_root(self.tx(), revert_state)
.map_err(|err| ProviderError::Database(err.into()))
}
fn state_root_from_nodes(&self, mut input: TrieInput) -> ProviderResult<B256> {
input.prepend(self.revert_state()?);
StateRoot::overlay_root_from_nodes(self.tx(), input)
.map_err(|err| ProviderError::Database(err.into()))
}
fn state_root_with_updates(
&self,
hashed_state: HashedPostState,
) -> ProviderResult<(B256, TrieUpdates)> {
let mut revert_state = self.revert_state()?;
revert_state.extend(hashed_state);
StateRoot::overlay_root_with_updates(self.tx(), revert_state)
.map_err(|err| ProviderError::Database(err.into()))
}
fn state_root_from_nodes_with_updates(
&self,
mut input: TrieInput,
) -> ProviderResult<(B256, TrieUpdates)> {
input.prepend(self.revert_state()?);
StateRoot::overlay_root_from_nodes_with_updates(self.tx(), input)
.map_err(|err| ProviderError::Database(err.into()))
}
}
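The `state_root*` methods above share one pattern: compute the revert state for the historical block, then extend it with the caller's overlay so the overlay wins on conflicts. A minimal sketch of that merge, assuming plain `HashMap`s stand in for the hashed post-state types:

```rust
use std::collections::HashMap;

/// Merge the revert state (values as of the historical block) with
/// caller-supplied overrides; mirrors `revert_state.extend(hashed_state)`.
fn overlay(reverts: HashMap<u64, u64>, overrides: HashMap<u64, u64>) -> HashMap<u64, u64> {
    let mut state = reverts;
    state.extend(overrides); // later entries replace earlier ones
    state
}
```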
impl<Provider: DBProvider + BlockNumReader> StorageRootProvider
for HistoricalStateProviderRef<'_, Provider>
{
fn storage_root(
&self,
address: Address,
hashed_storage: HashedStorage,
) -> ProviderResult<B256> {
let mut revert_storage = self.revert_storage(address)?;
revert_storage.extend(&hashed_storage);
StorageRoot::overlay_root(self.tx(), address, revert_storage)
.map_err(|err| ProviderError::Database(err.into()))
}
fn storage_proof(
&self,
address: Address,
slot: B256,
hashed_storage: HashedStorage,
) -> ProviderResult<reth_trie::StorageProof> {
let mut revert_storage = self.revert_storage(address)?;
revert_storage.extend(&hashed_storage);
StorageProof::overlay_storage_proof(self.tx(), address, slot, revert_storage)
.map_err(ProviderError::from)
}
fn storage_multiproof(
&self,
address: Address,
slots: &[B256],
hashed_storage: HashedStorage,
) -> ProviderResult<StorageMultiProof> {
let mut revert_storage = self.revert_storage(address)?;
revert_storage.extend(&hashed_storage);
StorageProof::overlay_storage_multiproof(self.tx(), address, slots, revert_storage)
.map_err(ProviderError::from)
}
}
impl<Provider: DBProvider + BlockNumReader> StateProofProvider
for HistoricalStateProviderRef<'_, Provider>
{
/// Get account and storage proofs.
fn proof(
&self,
mut input: TrieInput,
address: Address,
slots: &[B256],
) -> ProviderResult<AccountProof> {
input.prepend(self.revert_state()?);
Proof::overlay_account_proof(self.tx(), input, address, slots).map_err(ProviderError::from)
}
fn multiproof(
&self,
mut input: TrieInput,
targets: MultiProofTargets,
) -> ProviderResult<MultiProof> {
input.prepend(self.revert_state()?);
Proof::overlay_multiproof(self.tx(), input, targets).map_err(ProviderError::from)
}
fn witness(&self, mut input: TrieInput, target: HashedPostState) -> ProviderResult<Vec<Bytes>> {
input.prepend(self.revert_state()?);
TrieWitness::overlay_witness(self.tx(), input, target)
.map_err(ProviderError::from)
.map(|hm| hm.into_values().collect())
}
}
impl<Provider: Sync> HashedPostStateProvider for HistoricalStateProviderRef<'_, Provider> {
fn hashed_post_state(&self, bundle_state: &revm_database::BundleState) -> HashedPostState {
HashedPostState::from_bundle_state::<KeccakKeyHasher>(bundle_state.state())
}
}
impl<Provider: DBProvider + BlockNumReader + BlockHashReader> StateProvider
for HistoricalStateProviderRef<'_, Provider>
{
/// Get storage.
fn storage(
&self,
address: Address,
storage_key: StorageKey,
) -> ProviderResult<Option<FlaggedStorage>> {
match self.storage_history_lookup(address, storage_key)? {
HistoryInfo::NotYetWritten => Ok(None),
HistoryInfo::InChangeset(changeset_block_number) => Ok(Some(
self.tx()
.cursor_dup_read::<tables::StorageChangeSets>()?
.seek_by_key_subkey((changeset_block_number, address).into(), storage_key)?
.filter(|entry| entry.key == storage_key)
.ok_or_else(|| ProviderError::StorageChangesetNotFound {
block_number: changeset_block_number,
address,
storage_key: Box::new(storage_key),
})?
.into(),
)),
HistoryInfo::InPlainState | HistoryInfo::MaybeInPlainState => Ok(self
.tx()
.cursor_dup_read::<tables::PlainStorageState>()?
.seek_by_key_subkey(address, storage_key)?
.filter(|entry| entry.key == storage_key)
.map(|entry| entry.into())
.or(Some(FlaggedStorage::ZERO))),
}
}
}
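The `storage` lookup above resolves to one of three outcomes depending on `HistoryInfo`. A minimal, self-contained sketch of that decision table, with plain integers and a `HashMap` standing in for the database tables (these are illustrative stand-ins, not reth types):

```rust
// Hypothetical, simplified stand-in for reth's HistoryInfo.
#[derive(Debug, PartialEq)]
enum HistoryInfo {
    NotYetWritten,
    InChangeset(u64), // block number of the changeset holding the old value
    InPlainState,     // also covers MaybeInPlainState in the real provider
}

/// Resolve a storage value the way `HistoricalStateProviderRef::storage` does:
/// never written -> None; in a changeset -> that changeset's value (an error if
/// the entry is missing); otherwise -> plain state, defaulting to zero.
fn resolve(
    info: HistoryInfo,
    changesets: &std::collections::HashMap<u64, u64>,
    plain: Option<u64>,
) -> Result<Option<u64>, &'static str> {
    match info {
        HistoryInfo::NotYetWritten => Ok(None),
        HistoryInfo::InChangeset(block) => changesets
            .get(&block)
            .copied()
            .map(Some)
            .ok_or("storage changeset not found"),
        HistoryInfo::InPlainState => Ok(Some(plain.unwrap_or(0))),
    }
}

fn main() {
    let changesets = std::collections::HashMap::from([(7u64, 42u64)]);
    assert_eq!(resolve(HistoryInfo::NotYetWritten, &changesets, Some(1)), Ok(None));
    assert_eq!(resolve(HistoryInfo::InChangeset(7), &changesets, Some(1)), Ok(Some(42)));
    assert_eq!(resolve(HistoryInfo::InPlainState, &changesets, None), Ok(Some(0)));
}
```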
impl<Provider: DBProvider + BlockNumReader> BytecodeReader
for HistoricalStateProviderRef<'_, Provider>
{
/// Get account code by its hash
fn bytecode_by_hash(&self, code_hash: &B256) -> ProviderResult<Option<Bytecode>> {
self.tx().get_by_encoded_key::<tables::Bytecodes>(code_hash).map_err(Into::into)
}
}
/// State provider for a given block number.
/// For more detailed description, see [`HistoricalStateProviderRef`].
#[derive(Debug)]
pub struct HistoricalStateProvider<Provider> {
/// Database provider.
provider: Provider,
/// State at the block number is the main indexer of the state.
block_number: BlockNumber,
/// Lowest blocks at which different parts of the state are available.
lowest_available_blocks: LowestAvailableBlocks,
}
impl<Provider: DBProvider + BlockNumReader> HistoricalStateProvider<Provider> {
/// Create new `StateProvider` for historical block number
pub fn new(provider: Provider, block_number: BlockNumber) -> Self {
Self { provider, block_number, lowest_available_blocks: Default::default() }
}
/// Set the lowest block number at which the account history is available.
pub const fn with_lowest_available_account_history_block_number(
mut self,
block_number: BlockNumber,
) -> Self {
self.lowest_available_blocks.account_history_block_number = Some(block_number);
self
}
/// Set the lowest block number at which the storage history is available.
pub const fn with_lowest_available_storage_history_block_number(
mut self,
block_number: BlockNumber,
) -> Self {
self.lowest_available_blocks.storage_history_block_number = Some(block_number);
self
}
/// Returns a new provider that takes the inner `Provider` by reference.

#[inline(always)]
const fn as_ref(&self) -> HistoricalStateProviderRef<'_, Provider> {
HistoricalStateProviderRef::new_with_lowest_available_blocks(
&self.provider,
self.block_number,
self.lowest_available_blocks,
)
}
}
// Delegates all provider impls to [HistoricalStateProviderRef]
delegate_provider_impls!(HistoricalStateProvider<Provider> where [Provider: DBProvider + BlockNumReader + BlockHashReader ]);
/// Lowest blocks at which different parts of the state are available.
/// They may be [Some] if pruning is enabled.
#[derive(Clone, Copy, Debug, Default)]
pub struct LowestAvailableBlocks {
/// Lowest block number at which the account history is available. It may not be available if
/// [`reth_prune_types::PruneSegment::AccountHistory`] was pruned.
/// [`Option::None`] means all history is available.
pub account_history_block_number: Option<BlockNumber>,
/// Lowest block number at which the storage history is available. It may not be available if
/// [`reth_prune_types::PruneSegment::StorageHistory`] was pruned.
/// [`Option::None`] means all history is available.
pub storage_history_block_number: Option<BlockNumber>,
}
impl LowestAvailableBlocks {
/// Check if account history is available at the provided block number, i.e. lowest available
/// block number for account history is less than or equal to the provided block number.
pub fn is_account_history_available(&self, at: BlockNumber) -> bool {
self.account_history_block_number.map(|block_number| block_number <= at).unwrap_or(true)
}
/// Check if storage history is available at the provided block number, i.e. lowest available
/// block number for storage history is less than or equal to the provided block number.
pub fn is_storage_history_available(&self, at: BlockNumber) -> bool {
self.storage_history_block_number.map(|block_number| block_number <= at).unwrap_or(true)
}
}
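The availability checks above reduce to a single `Option` rule: `None` means nothing was pruned, so all history is available, while `Some(n)` means history only starts at block `n`. A stripped-down sketch (only the account field, for brevity; not the real struct):

```rust
// Minimal stand-in for LowestAvailableBlocks: None = no pruning,
// Some(n) = history is only available from block n onwards.
struct LowestAvailableBlocks {
    account_history_block_number: Option<u64>,
}

impl LowestAvailableBlocks {
    /// History at `at` is available iff the lowest available block <= `at`,
    /// or nothing was pruned at all.
    fn is_account_history_available(&self, at: u64) -> bool {
        self.account_history_block_number.map(|n| n <= at).unwrap_or(true)
    }
}

fn main() {
    let pruned = LowestAvailableBlocks { account_history_block_number: Some(100) };
    assert!(!pruned.is_account_history_available(99)); // state at 99 was pruned
    assert!(pruned.is_account_history_available(100));
    let full = LowestAvailableBlocks { account_history_block_number: None };
    assert!(full.is_account_history_available(0)); // no pruning: always available
}
```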
#[cfg(test)]
mod tests {
use crate::{
providers::state::historical::{HistoryInfo, LowestAvailableBlocks},
test_utils::create_test_provider_factory,
AccountReader, HistoricalStateProvider, HistoricalStateProviderRef, StateProvider,
};
use alloy_primitives::{address, b256, Address, B256, U256};
use reth_db_api::{
models::{storage_sharded_key::StorageShardedKey, AccountBeforeTx, ShardedKey},
tables,
transaction::{DbTx, DbTxMut},
BlockNumberList,
};
use reth_primitives_traits::{Account, StorageEntry};
use reth_storage_api::{BlockHashReader, BlockNumReader, DBProvider, DatabaseProviderFactory};
use reth_storage_errors::provider::ProviderError;
use revm_state::FlaggedStorage;
const ADDRESS: Address = address!("0x0000000000000000000000000000000000000001");
const HIGHER_ADDRESS: Address = address!("0x0000000000000000000000000000000000000005");
const STORAGE: B256 =
b256!("0x0000000000000000000000000000000000000000000000000000000000000001");
const fn assert_state_provider<T: StateProvider>() {}
#[expect(dead_code)]
const fn assert_historical_state_provider<T: DBProvider + BlockNumReader + BlockHashReader>() {
assert_state_provider::<HistoricalStateProvider<T>>();
}
#[test]
fn history_provider_get_account() {
let factory = create_test_provider_factory();
let tx = factory.provider_rw().unwrap().into_tx();
tx.put::<tables::AccountsHistory>(
ShardedKey { key: ADDRESS, highest_block_number: 7 },
BlockNumberList::new([1, 3, 7]).unwrap(),
)
.unwrap();
tx.put::<tables::AccountsHistory>(
ShardedKey { key: ADDRESS, highest_block_number: u64::MAX },
BlockNumberList::new([10, 15]).unwrap(),
)
.unwrap();
tx.put::<tables::AccountsHistory>(
ShardedKey { key: HIGHER_ADDRESS, highest_block_number: u64::MAX },
BlockNumberList::new([4]).unwrap(),
)
.unwrap();
let acc_plain = Account { nonce: 100, balance: U256::ZERO, bytecode_hash: None };
let acc_at15 = Account { nonce: 15, balance: U256::ZERO, bytecode_hash: None };
let acc_at10 = Account { nonce: 10, balance: U256::ZERO, bytecode_hash: None };
let acc_at7 = Account { nonce: 7, balance: U256::ZERO, bytecode_hash: None };
let acc_at3 = Account { nonce: 3, balance: U256::ZERO, bytecode_hash: None };
let higher_acc_plain = Account { nonce: 4, balance: U256::ZERO, bytecode_hash: None };
// setup
tx.put::<tables::AccountChangeSets>(1, AccountBeforeTx { address: ADDRESS, info: None })
.unwrap();
tx.put::<tables::AccountChangeSets>(
3,
AccountBeforeTx { address: ADDRESS, info: Some(acc_at3) },
)
.unwrap();
tx.put::<tables::AccountChangeSets>(
4,
AccountBeforeTx { address: HIGHER_ADDRESS, info: None },
)
.unwrap();
tx.put::<tables::AccountChangeSets>(
7,
AccountBeforeTx { address: ADDRESS, info: Some(acc_at7) },
)
.unwrap();
tx.put::<tables::AccountChangeSets>(
10,
AccountBeforeTx { address: ADDRESS, info: Some(acc_at10) },
)
.unwrap();
tx.put::<tables::AccountChangeSets>(
15,
AccountBeforeTx { address: ADDRESS, info: Some(acc_at15) },
)
.unwrap();
// setup plain state
tx.put::<tables::PlainAccountState>(ADDRESS, acc_plain).unwrap();
tx.put::<tables::PlainAccountState>(HIGHER_ADDRESS, higher_acc_plain).unwrap();
tx.commit().unwrap();
let db = factory.provider().unwrap();
// run
assert!(matches!(
HistoricalStateProviderRef::new(&db, 1).basic_account(&ADDRESS),
Ok(None)
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 2).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at3
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 3).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at3
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 4).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at7
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 7).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at7
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 9).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at10
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 10).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at10
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 11).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_at15
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 16).basic_account(&ADDRESS),
Ok(Some(acc)) if acc == acc_plain
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 1).basic_account(&HIGHER_ADDRESS),
Ok(None)
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 1000).basic_account(&HIGHER_ADDRESS),
Ok(Some(acc)) if acc == higher_acc_plain
));
}
#[test]
fn history_provider_get_storage() {
let factory = create_test_provider_factory();
let tx = factory.provider_rw().unwrap().into_tx();
tx.put::<tables::StoragesHistory>(
StorageShardedKey {
address: ADDRESS,
sharded_key: ShardedKey { key: STORAGE, highest_block_number: 7 },
},
BlockNumberList::new([3, 7]).unwrap(),
)
.unwrap();
tx.put::<tables::StoragesHistory>(
StorageShardedKey {
address: ADDRESS,
sharded_key: ShardedKey { key: STORAGE, highest_block_number: u64::MAX },
},
BlockNumberList::new([10, 15]).unwrap(),
)
.unwrap();
tx.put::<tables::StoragesHistory>(
StorageShardedKey {
address: HIGHER_ADDRESS,
sharded_key: ShardedKey { key: STORAGE, highest_block_number: u64::MAX },
},
BlockNumberList::new([4]).unwrap(),
)
.unwrap();
let higher_entry_plain = StorageEntry { key: STORAGE, value: FlaggedStorage::public(1000) };
let higher_entry_at4 = StorageEntry { key: STORAGE, value: FlaggedStorage::public(0) };
let entry_plain = StorageEntry { key: STORAGE, value: FlaggedStorage::public(100) };
let entry_at15 = StorageEntry { key: STORAGE, value: FlaggedStorage::public(15) };
let entry_at10 = StorageEntry { key: STORAGE, value: FlaggedStorage::public(10) };
let entry_at7 = StorageEntry { key: STORAGE, value: FlaggedStorage::public(7) };
let entry_at3 = StorageEntry { key: STORAGE, value: FlaggedStorage::public(0) };
// setup
tx.put::<tables::StorageChangeSets>((3, ADDRESS).into(), entry_at3).unwrap();
tx.put::<tables::StorageChangeSets>((4, HIGHER_ADDRESS).into(), higher_entry_at4).unwrap();
tx.put::<tables::StorageChangeSets>((7, ADDRESS).into(), entry_at7).unwrap();
tx.put::<tables::StorageChangeSets>((10, ADDRESS).into(), entry_at10).unwrap();
tx.put::<tables::StorageChangeSets>((15, ADDRESS).into(), entry_at15).unwrap();
// setup plain state
tx.put::<tables::PlainStorageState>(ADDRESS, entry_plain).unwrap();
tx.put::<tables::PlainStorageState>(HIGHER_ADDRESS, higher_entry_plain).unwrap();
tx.commit().unwrap();
let db = factory.provider().unwrap();
// run
assert!(matches!(
HistoricalStateProviderRef::new(&db, 0).storage(ADDRESS, STORAGE),
Ok(None)
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 3).storage(ADDRESS, STORAGE),
Ok(Some(FlaggedStorage::ZERO))
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 4).storage(ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == entry_at7.value.into()
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 7).storage(ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == entry_at7.value.into()
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 9).storage(ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == entry_at10.value.into()
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 10).storage(ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == entry_at10.value.into()
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 11).storage(ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == entry_at15.value.into()
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 16).storage(ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == entry_plain.value.into()
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 1).storage(HIGHER_ADDRESS, STORAGE),
Ok(None)
));
assert!(matches!(
HistoricalStateProviderRef::new(&db, 1000).storage(HIGHER_ADDRESS, STORAGE),
Ok(Some(expected_value)) if expected_value == higher_entry_plain.value.into()
));
}
#[test]
fn history_provider_unavailable() {
let factory = create_test_provider_factory();
let db = factory.database_provider_rw().unwrap();
// provider block_number < lowest available block number,
// i.e. state at provider block is pruned
let provider = HistoricalStateProviderRef::new_with_lowest_available_blocks(
&db,
2,
LowestAvailableBlocks {
account_history_block_number: Some(3),
storage_history_block_number: Some(3),
},
);
assert!(matches!(
provider.account_history_lookup(ADDRESS),
Err(ProviderError::StateAtBlockPruned(number)) if number == provider.block_number
));
assert!(matches!(
provider.storage_history_lookup(ADDRESS, STORAGE),
Err(ProviderError::StateAtBlockPruned(number)) if number == provider.block_number
));
// provider block_number == lowest available block number,
// i.e. state at provider block is available
let provider = HistoricalStateProviderRef::new_with_lowest_available_blocks(
&db,
2,
LowestAvailableBlocks {
account_history_block_number: Some(2),
storage_history_block_number: Some(2),
},
);
assert!(matches!(
provider.account_history_lookup(ADDRESS),
Ok(HistoryInfo::MaybeInPlainState)
));
assert!(matches!(
provider.storage_history_lookup(ADDRESS, STORAGE),
Ok(HistoryInfo::MaybeInPlainState)
));
// provider block_number == lowest available block number,
// i.e. state at provider block is available
// SeismicSystems/seismic-reth: crates/storage/provider/src/providers/static_file/manager.rs
use super::{
metrics::StaticFileProviderMetrics, writer::StaticFileWriters, LoadedJar,
StaticFileJarProvider, StaticFileProviderRW, StaticFileProviderRWRefMut,
};
use crate::{
to_range, BlockHashReader, BlockNumReader, BlockReader, BlockSource, HeaderProvider,
ReceiptProvider, StageCheckpointReader, StatsReader, TransactionVariant, TransactionsProvider,
TransactionsProviderExt,
};
use alloy_consensus::{
transaction::{SignerRecoverable, TransactionMeta},
Header,
};
use alloy_eips::{eip2718::Encodable2718, BlockHashOrNumber};
use alloy_primitives::{
b256, keccak256, Address, BlockHash, BlockNumber, TxHash, TxNumber, B256, U256,
};
use dashmap::DashMap;
use notify::{RecommendedWatcher, RecursiveMode, Watcher};
use parking_lot::RwLock;
use reth_chainspec::{ChainInfo, ChainSpecProvider, EthChainSpec, NamedChain};
use reth_db::{
lockfile::StorageLock,
static_file::{
iter_static_files, BlockHashMask, HeaderMask, HeaderWithHashMask, ReceiptMask,
StaticFileCursor, TDWithHashMask, TransactionMask,
},
};
use reth_db_api::{
cursor::DbCursorRO,
models::StoredBlockBodyIndices,
table::{Decompress, Table, Value},
tables,
transaction::DbTx,
};
use reth_ethereum_primitives::{Receipt, TransactionSigned};
use reth_nippy_jar::{NippyJar, NippyJarChecker, CONFIG_FILE_EXTENSION};
use reth_node_types::{FullNodePrimitives, NodePrimitives};
use reth_primitives_traits::{RecoveredBlock, SealedHeader, SignedTransaction};
use reth_stages_types::{PipelineTarget, StageId};
use reth_static_file_types::{
find_fixed_range, HighestStaticFiles, SegmentHeader, SegmentRangeInclusive, StaticFileSegment,
DEFAULT_BLOCKS_PER_STATIC_FILE,
};
use reth_storage_api::{BlockBodyIndicesProvider, DBProvider};
use reth_storage_errors::provider::{ProviderError, ProviderResult};
use std::{
collections::{hash_map::Entry, BTreeMap, HashMap},
fmt::Debug,
marker::PhantomData,
ops::{Deref, Range, RangeBounds, RangeInclusive},
path::{Path, PathBuf},
sync::{atomic::AtomicU64, mpsc, Arc},
};
use tracing::{debug, info, trace, warn};
/// Alias type for a map that can be queried for the block ranges of a transaction
/// segment. It uses `TxNumber` to represent the transaction end of a static file
/// range.
type SegmentRanges = HashMap<StaticFileSegment, BTreeMap<TxNumber, SegmentRangeInclusive>>;
/// Access mode on a static file provider. RO/RW.
#[derive(Debug, Default, PartialEq, Eq)]
pub enum StaticFileAccess {
/// Read-only access.
#[default]
RO,
/// Read-write access.
RW,
}
impl StaticFileAccess {
/// Returns `true` if read-only access.
pub const fn is_read_only(&self) -> bool {
matches!(self, Self::RO)
}
/// Returns `true` if read-write access.
pub const fn is_read_write(&self) -> bool {
matches!(self, Self::RW)
}
}
/// [`StaticFileProvider`] manages all existing [`StaticFileJarProvider`].
///
/// "Static files" contain immutable chain history data, such as:
/// - transactions
/// - headers
/// - receipts
///
/// This provider type is responsible for reading and writing to static files.
#[derive(Debug)]
pub struct StaticFileProvider<N>(pub(crate) Arc<StaticFileProviderInner<N>>);
impl<N> Clone for StaticFileProvider<N> {
fn clone(&self) -> Self {
Self(self.0.clone())
}
}
impl<N: NodePrimitives> StaticFileProvider<N> {
/// Creates a new [`StaticFileProvider`] with the given [`StaticFileAccess`].
fn new(path: impl AsRef<Path>, access: StaticFileAccess) -> ProviderResult<Self> {
let provider = Self(Arc::new(StaticFileProviderInner::new(path, access)?));
provider.initialize_index()?;
Ok(provider)
}
/// Creates a new [`StaticFileProvider`] with read-only access.
///
/// Set `watch_directory` to `true` to track the most recent changes in static files. Otherwise,
/// new data won't be detected or queryable.
///
/// Watching is recommended if the read-only provider is used on a directory that an active node
/// instance is modifying.
///
/// See also [`StaticFileProvider::watch_directory`].
pub fn read_only(path: impl AsRef<Path>, watch_directory: bool) -> ProviderResult<Self> {
let provider = Self::new(path, StaticFileAccess::RO)?;
if watch_directory {
provider.watch_directory();
}
Ok(provider)
}
/// Creates a new [`StaticFileProvider`] with read-write access.
pub fn read_write(path: impl AsRef<Path>) -> ProviderResult<Self> {
Self::new(path, StaticFileAccess::RW)
}
/// Watches the directory for changes and updates the in-memory index when modifications
/// are detected.
///
/// This may be necessary, since a non-node process that owns a [`StaticFileProvider`] does not
/// receive `update_index` notifications from a node that appends/truncates data.
pub fn watch_directory(&self) {
let provider = self.clone();
std::thread::spawn(move || {
let (tx, rx) = std::sync::mpsc::channel();
let mut watcher = RecommendedWatcher::new(
move |res| tx.send(res).unwrap(),
notify::Config::default(),
)
.expect("failed to create watcher");
watcher
.watch(&provider.path, RecursiveMode::NonRecursive)
.expect("failed to watch path");
// Some backends send repeated modified events
let mut last_event_timestamp = None;
while let Ok(res) = rx.recv() {
match res {
Ok(event) => {
// We only care about modified data events
if !matches!(
event.kind,
notify::EventKind::Modify(_) |
notify::EventKind::Create(_) |
notify::EventKind::Remove(_)
) {
continue
}
// We only trigger a re-initialization if a configuration file was modified.
// This means that a static_file_provider.commit() was called on the node
// after appending/truncating rows.
for segment in event.paths {
// Ensure it's a file with the .conf extension
if segment
.extension()
.is_none_or(|s| s.to_str() != Some(CONFIG_FILE_EXTENSION))
{
continue
}
// Ensure it's well formatted static file name
if StaticFileSegment::parse_filename(
&segment.file_stem().expect("qed").to_string_lossy(),
)
.is_none()
{
continue
}
// If we can read the metadata and modified timestamp, ensure this is
// not an old or repeated event.
if let Ok(current_modified_timestamp) =
std::fs::metadata(&segment).and_then(|m| m.modified())
{
if last_event_timestamp.is_some_and(|last_timestamp| {
last_timestamp >= current_modified_timestamp
}) {
continue
}
last_event_timestamp = Some(current_modified_timestamp);
}
info!(target: "providers::static_file", updated_file = ?segment.file_stem(), "re-initializing static file provider index");
if let Err(err) = provider.initialize_index() {
warn!(target: "providers::static_file", "failed to re-initialize index: {err}");
}
break
}
}
Err(err) => warn!(target: "providers::watcher", "watch error: {err:?}"),
}
}
});
}
}
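The watcher loop above drops old or repeated events by comparing file modification timestamps: an event is only acted on if its timestamp is strictly newer than the last one handled. The rule in isolation, with a hypothetical helper name and std types only:

```rust
use std::time::{Duration, SystemTime};

// Sketch of the "skip repeated events" rule used by the directory watcher:
// only strictly newer modification timestamps are handled. `should_handle`
// is an illustrative name, not a reth API.
fn should_handle(last: &mut Option<SystemTime>, current: SystemTime) -> bool {
    if last.is_some_and(|l| l >= current) {
        return false; // old or repeated event from the watch backend
    }
    *last = Some(current);
    true
}

fn main() {
    let t0 = SystemTime::UNIX_EPOCH;
    let t1 = t0 + Duration::from_secs(1);
    let mut last = None;
    assert!(should_handle(&mut last, t0)); // first event is always handled
    assert!(!should_handle(&mut last, t0)); // same timestamp: skipped
    assert!(should_handle(&mut last, t1)); // strictly newer: handled
}
```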
impl<N: NodePrimitives> Deref for StaticFileProvider<N> {
type Target = StaticFileProviderInner<N>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
/// [`StaticFileProviderInner`] manages all existing [`StaticFileJarProvider`].
#[derive(Debug)]
pub struct StaticFileProviderInner<N> {
/// Maintains a map which allows for concurrent access to different `NippyJars`, over different
/// segments and ranges.
map: DashMap<(BlockNumber, StaticFileSegment), LoadedJar>,
/// Min static file range for each segment.
/// This index is initialized on launch to keep track of the lowest, non-expired static file
/// per segment.
///
/// This tracks the lowest static file per segment together with the block range in that
/// file. E.g. if static files are batched in 500K block intervals, then the lowest static
/// file is [0..499K], i.e. the block range with start = 0 and end = 499K.
/// This index is mainly used for history expiry, which targets transactions; e.g. pre-merge
/// history expiry would lead to removing all static files below the merge height.
static_files_min_block: RwLock<HashMap<StaticFileSegment, SegmentRangeInclusive>>,
/// An additional index that tracks the expired height: the highest block number that has
/// been expired (i.e. is missing). The first non-expired block is
/// `expired_history_height + 1`.
///
/// This is effectively the transaction range that has been expired (see
/// [`StaticFileProvider::delete_transactions_below`]) and mirrors
/// `static_files_min_block[transactions] - blocks_per_file`.
///
/// This additional tracker exists for more efficient lookups because the node must be aware of
/// the expired height.
earliest_history_height: AtomicU64,
/// Max static file block for each segment
static_files_max_block: RwLock<HashMap<StaticFileSegment, u64>>,
/// Available static file block ranges on disk indexed by max transactions.
static_files_tx_index: RwLock<SegmentRanges>,
/// Directory where `static_files` are located
path: PathBuf,
/// Maintains a writer set of [`StaticFileSegment`].
writers: StaticFileWriters<N>,
/// Metrics for the static files.
metrics: Option<Arc<StaticFileProviderMetrics>>,
/// Access rights of the provider.
access: StaticFileAccess,
/// Number of blocks per file.
blocks_per_file: u64,
/// Write lock for when access is [`StaticFileAccess::RW`].
_lock_file: Option<StorageLock>,
/// Node primitives
_pd: PhantomData<N>,
}
impl<N: NodePrimitives> StaticFileProviderInner<N> {
/// Creates a new [`StaticFileProviderInner`].
fn new(path: impl AsRef<Path>, access: StaticFileAccess) -> ProviderResult<Self> {
let _lock_file = if access.is_read_write() {
StorageLock::try_acquire(path.as_ref()).map_err(ProviderError::other)?.into()
} else {
None
};
let provider = Self {
map: Default::default(),
writers: Default::default(),
static_files_min_block: Default::default(),
earliest_history_height: Default::default(),
static_files_max_block: Default::default(),
static_files_tx_index: Default::default(),
path: path.as_ref().to_path_buf(),
metrics: None,
access,
blocks_per_file: DEFAULT_BLOCKS_PER_STATIC_FILE,
_lock_file,
_pd: Default::default(),
};
Ok(provider)
}
pub const fn is_read_only(&self) -> bool {
self.access.is_read_only()
}
/// Each static file has a fixed number of blocks. This returns the range in which the
/// requested block is positioned.
pub const fn find_fixed_range(&self, block: BlockNumber) -> SegmentRangeInclusive {
find_fixed_range(block, self.blocks_per_file)
}
}
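`find_fixed_range` buckets a block number into its file's fixed block range. A standalone sketch of that arithmetic, assuming files start at multiples of `blocks_per_file` (this mirrors, but is not, the real `find_fixed_range` from `reth_static_file_types`):

```rust
// Map a block number to the fixed (start, end) range of the static file
// holding it, under the assumption that file boundaries sit at multiples
// of `blocks_per_file`.
fn find_fixed_range(block: u64, blocks_per_file: u64) -> (u64, u64) {
    let start = (block / blocks_per_file) * blocks_per_file;
    (start, start + blocks_per_file - 1)
}

fn main() {
    // With 500K blocks per file, block 1_250_000 lives in the third file.
    assert_eq!(find_fixed_range(1_250_000, 500_000), (1_000_000, 1_499_999));
    assert_eq!(find_fixed_range(0, 500_000), (0, 499_999));
}
```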
impl<N: NodePrimitives> StaticFileProvider<N> {
/// Set a custom number of blocks per file.
#[cfg(any(test, feature = "test-utils"))]
pub fn with_custom_blocks_per_file(self, blocks_per_file: u64) -> Self {
let mut provider =
Arc::try_unwrap(self.0).expect("should be called when initializing only");
provider.blocks_per_file = blocks_per_file;
Self(Arc::new(provider))
}
/// Enables metrics on the [`StaticFileProvider`].
pub fn with_metrics(self) -> Self {
let mut provider =
Arc::try_unwrap(self.0).expect("should be called when initializing only");
provider.metrics = Some(Arc::new(StaticFileProviderMetrics::default()));
Self(Arc::new(provider))
}
/// Reports metrics for the static files.
pub fn report_metrics(&self) -> ProviderResult<()> {
let Some(metrics) = &self.metrics else { return Ok(()) };
let static_files = iter_static_files(&self.path).map_err(ProviderError::other)?;
for (segment, ranges) in static_files {
let mut entries = 0;
let mut size = 0;
for (block_range, _) in &ranges {
let fixed_block_range = self.find_fixed_range(block_range.start());
let jar_provider = self
.get_segment_provider(segment, || Some(fixed_block_range), None)?
.ok_or_else(|| {
ProviderError::MissingStaticFileBlock(segment, block_range.start())
})?;
entries += jar_provider.rows();
let data_size = reth_fs_util::metadata(jar_provider.data_path())
.map(|metadata| metadata.len())
.unwrap_or_default();
let index_size = reth_fs_util::metadata(jar_provider.index_path())
.map(|metadata| metadata.len())
.unwrap_or_default();
let offsets_size = reth_fs_util::metadata(jar_provider.offsets_path())
.map(|metadata| metadata.len())
.unwrap_or_default();
let config_size = reth_fs_util::metadata(jar_provider.config_path())
.map(|metadata| metadata.len())
.unwrap_or_default();
size += data_size + index_size + offsets_size + config_size;
}
metrics.record_segment(segment, size, ranges.len(), entries);
}
Ok(())
}
/// Gets the [`StaticFileJarProvider`] of the requested segment and block.
pub fn get_segment_provider_from_block(
&self,
segment: StaticFileSegment,
block: BlockNumber,
path: Option<&Path>,
) -> ProviderResult<StaticFileJarProvider<'_, N>> {
self.get_segment_provider(
segment,
|| self.get_segment_ranges_from_block(segment, block),
path,
)?
.ok_or(ProviderError::MissingStaticFileBlock(segment, block))
}
/// Gets the [`StaticFileJarProvider`] of the requested segment and transaction.
pub fn get_segment_provider_from_transaction(
&self,
segment: StaticFileSegment,
tx: TxNumber,
path: Option<&Path>,
) -> ProviderResult<StaticFileJarProvider<'_, N>> {
self.get_segment_provider(
segment,
|| self.get_segment_ranges_from_transaction(segment, tx),
path,
)?
.ok_or(ProviderError::MissingStaticFileTx(segment, tx))
}
/// Gets the [`StaticFileJarProvider`] of the requested segment and block or transaction.
///
/// `fn_range` should make sure the range goes through `find_fixed_range`.
pub fn get_segment_provider(
&self,
segment: StaticFileSegment,
fn_range: impl Fn() -> Option<SegmentRangeInclusive>,
path: Option<&Path>,
) -> ProviderResult<Option<StaticFileJarProvider<'_, N>>> {
// If we have a path, then get the block range from its name.
// Otherwise, check `self.available_static_files`
let block_range = match path {
Some(path) => StaticFileSegment::parse_filename(
&path
.file_name()
.ok_or_else(|| {
ProviderError::MissingStaticFilePath(segment, path.to_path_buf())
})?
.to_string_lossy(),
)
.and_then(|(parsed_segment, block_range)| {
if parsed_segment == segment {
return Some(block_range)
}
None
}),
None => fn_range(),
};
// Return the cached `LoadedJar`, inserting it first if it isn't cached yet.
if let Some(block_range) = block_range {
return Ok(Some(self.get_or_create_jar_provider(segment, &block_range)?))
}
Ok(None)
}
/// Given a segment and block range it removes the cached provider from the map.
///
/// CAUTION: cached provider should be dropped before calling this or IT WILL deadlock.
pub fn remove_cached_provider(
&self,
segment: StaticFileSegment,
fixed_block_range_end: BlockNumber,
) {
self.map.remove(&(fixed_block_range_end, segment));
}
/// This handles history expiry by deleting all transaction static files below the given block.
///
/// For example, if `block` is 1M and there are 500K blocks per file, this will delete all
/// individual files below 1M, i.e. 0-499K and 500K-999K.
///
/// This will not delete the file that contains the block itself, because files can only be
/// removed entirely.
pub fn delete_transactions_below(&self, block: BlockNumber) -> ProviderResult<()> {
// Nothing to delete if block is 0.
if block == 0 {
return Ok(())
}
loop {
let Some(block_height) =
self.get_lowest_static_file_block(StaticFileSegment::Transactions)
else {
return Ok(())
};
if block_height >= block {
return Ok(())
}
debug!(
target: "provider::static_file",
?block_height,
"Deleting transaction static file below block"
);
// Now we need to wipe the static file. This takes care of updating the index and
// advancing the lowest tracked block height for the transactions segment.
self.delete_jar(StaticFileSegment::Transactions, block_height)
.inspect_err(|err| {
warn!(target: "provider::static_file", %block_height, ?err, "Failed to delete transaction static file below block")
})?;
}
}
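The deletion rule documented on `delete_transactions_below` — remove every file lying entirely below `block`, keep the file containing `block` — can be sketched as pure arithmetic (illustrative helper, not a reth API; a file starting at `start` is assumed to cover `start ..= start + blocks_per_file - 1`):

```rust
// Return the (start, end) ranges of every static file whose whole block range
// lies strictly below `block`; the file containing `block` itself survives.
fn files_deleted_below(block: u64, blocks_per_file: u64) -> Vec<(u64, u64)> {
    (0..)
        .map(|i| i * blocks_per_file)
        .take_while(|start| start + blocks_per_file - 1 < block)
        .map(|start| (start, start + blocks_per_file - 1))
        .collect()
}

fn main() {
    // block = 1M with 500K blocks per file: files 0-499_999 and 500_000-999_999 go.
    assert_eq!(
        files_deleted_below(1_000_000, 500_000),
        vec![(0, 499_999), (500_000, 999_999)]
    );
    // Nothing strictly below block 0 can be deleted.
    assert!(files_deleted_below(0, 500_000).is_empty());
}
```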
/// Given a segment and block, it deletes the jar and all files from the respective block range.
///
/// CAUTION: destructive. Deletes files on disk.
///
/// This will re-initialize the index after deletion, so all files are tracked.
pub fn delete_jar(&self, segment: StaticFileSegment, block: BlockNumber) -> ProviderResult<()> {
let fixed_block_range = self.find_fixed_range(block);
let key = (fixed_block_range.end(), segment);
let jar = if let Some((_, jar)) = self.map.remove(&key) {
jar.jar
} else {
let file = self.path.join(segment.filename(&fixed_block_range));
debug!(
target: "provider::static_file",
?file,
?fixed_block_range,
?block,
"Loading static file jar for deletion"
);
NippyJar::<SegmentHeader>::load(&file).map_err(ProviderError::other)?
};
jar.delete().map_err(ProviderError::other)?;
self.initialize_index()?;
Ok(())
}
/// Given a segment and block range it returns a cached [`StaticFileJarProvider`].
///
/// TODO(joshie): we should check the size and pop N if there are too many.
fn get_or_create_jar_provider(
&self,
segment: StaticFileSegment,
fixed_block_range: &SegmentRangeInclusive,
) -> ProviderResult<StaticFileJarProvider<'_, N>> {
let key = (fixed_block_range.end(), segment);
// Avoid using `entry` directly to avoid a write lock in the common case.
trace!(target: "provider::static_file", ?segment, ?fixed_block_range, "Getting provider");
let mut provider: StaticFileJarProvider<'_, N> = if let Some(jar) = self.map.get(&key) {
trace!(target: "provider::static_file", ?segment, ?fixed_block_range, "Jar found in cache");
jar.into()
} else {
trace!(target: "provider::static_file", ?segment, ?fixed_block_range, "Creating jar from scratch");
let path = self.path.join(segment.filename(fixed_block_range));
let jar = NippyJar::load(&path).map_err(ProviderError::other)?;
self.map.entry(key).insert(LoadedJar::new(jar)?).downgrade().into()
};
if let Some(metrics) = &self.metrics {
provider = provider.with_metrics(metrics.clone());
}
Ok(provider)
}
/// Gets a static file segment's block range from the provider inner block
/// index.
fn get_segment_ranges_from_block(
&self,
segment: StaticFileSegment,
block: u64,
) -> Option<SegmentRangeInclusive> {
self.static_files_max_block
.read()
.get(&segment)
.filter(|max| **max >= block)
.map(|_| self.find_fixed_range(block))
}
/// Gets a static file segment's fixed block range from the provider inner
/// transaction index.
fn get_segment_ranges_from_transaction(
&self,
segment: StaticFileSegment,
tx: u64,
) -> Option<SegmentRangeInclusive> {
let static_files = self.static_files_tx_index.read();
let segment_static_files = static_files.get(&segment)?;
// It's more probable that the request comes from a newer tx height, so we iterate
// the static_files in reverse.
let mut static_files_rev_iter = segment_static_files.iter().rev().peekable();
while let Some((tx_end, block_range)) = static_files_rev_iter.next() {
if tx > *tx_end {
// request tx is higher than highest static file tx
return None
}
let tx_start = static_files_rev_iter.peek().map(|(tx_end, _)| *tx_end + 1).unwrap_or(0);
if tx_start <= tx {
return Some(self.find_fixed_range(block_range.end()))
}
}
None
}
/// Updates the inner transaction and block indexes alongside the internal cached providers in
/// `self.map`.
///
/// Any entry higher than `segment_max_block` will be deleted from the previous structures.
///
/// If `segment_max_block` is None it means there's no static file for this segment.
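    ///
    /// A hedged usage sketch (`provider` is an assumed handle to this
    /// [`StaticFileProvider`], and the block numbers are illustrative only):
    ///
    /// ```rust,ignore
    /// // entries whose block range starts at or after the current fixed range
    /// // are dropped, and the surviving file's entry is refreshed
    /// provider.update_index(StaticFileSegment::Transactions, Some(15))?;
    /// // passing `None` removes the segment from both indexes entirely
    /// provider.update_index(StaticFileSegment::Headers, None)?;
    /// ```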
pub fn update_index(
&self,
segment: StaticFileSegment,
segment_max_block: Option<BlockNumber>,
) -> ProviderResult<()> {
let mut max_block = self.static_files_max_block.write();
let mut tx_index = self.static_files_tx_index.write();
match segment_max_block {
Some(segment_max_block) => {
// Update the max block for the segment
max_block.insert(segment, segment_max_block);
let fixed_range = self.find_fixed_range(segment_max_block);
let jar = NippyJar::<SegmentHeader>::load(
&self.path.join(segment.filename(&fixed_range)),
)
.map_err(ProviderError::other)?;
// Updates the tx index by first removing all entries which have a higher
// block_start than our current static file.
if let Some(tx_range) = jar.user_header().tx_range() {
let tx_end = tx_range.end();
                    // The current block range has the same block start as `fixed_range`, but the
                    // block end might differ if we are still filling this static file.
if let Some(current_block_range) = jar.user_header().block_range().copied() {
// Considering that `update_index` is called when we either append/truncate,
// we are sure that we are handling the latest data
// points.
//
// Here we remove every entry of the index that has a block start higher or
// equal than our current one. This is important in the case
// that we prune a lot of rows resulting in a file (and thus
// a higher block range) deletion.
tx_index
.entry(segment)
.and_modify(|index| {
index.retain(|_, block_range| {
block_range.start() < fixed_range.start()
});
index.insert(tx_end, current_block_range);
})
.or_insert_with(|| BTreeMap::from([(tx_end, current_block_range)]));
}
} else if segment.is_tx_based() {
                    // The unwound file has no more transactions/receipts. However, the highest
                    // block is within this file's block range. We only retain
                    // entries with block ranges before the current one.
tx_index.entry(segment).and_modify(|index| {
index.retain(|_, block_range| block_range.start() < fixed_range.start());
});
// If the index is empty, just remove it.
if tx_index.get(&segment).is_some_and(|index| index.is_empty()) {
tx_index.remove(&segment);
}
}
// Update the cached provider.
self.map.insert((fixed_range.end(), segment), LoadedJar::new(jar)?);
// Delete any cached provider that no longer has an associated jar.
self.map.retain(|(end, seg), _| !(*seg == segment && *end > fixed_range.end()));
}
None => {
tx_index.remove(&segment);
max_block.remove(&segment);
}
};
Ok(())
}
    /// Initializes the inner transaction and block indexes.
pub fn initialize_index(&self) -> ProviderResult<()> {
let mut min_block = self.static_files_min_block.write();
let mut max_block = self.static_files_max_block.write();
let mut tx_index = self.static_files_tx_index.write();
min_block.clear();
max_block.clear();
tx_index.clear();
for (segment, ranges) in iter_static_files(&self.path).map_err(ProviderError::other)? {
// Update first and last block for each segment
if let Some((first_block_range, _)) = ranges.first() {
min_block.insert(segment, *first_block_range);
}
if let Some((last_block_range, _)) = ranges.last() {
max_block.insert(segment, last_block_range.end());
}
// Update tx -> block_range index
for (block_range, tx_range) in ranges {
if let Some(tx_range) = tx_range {
let tx_end = tx_range.end();
match tx_index.entry(segment) {
Entry::Occupied(mut index) => {
index.get_mut().insert(tx_end, block_range);
}
Entry::Vacant(index) => {
index.insert(BTreeMap::from([(tx_end, block_range)]));
}
};
}
}
}
// If this is a re-initialization, we need to clear this as well
self.map.clear();
// initialize the expired history height to the lowest static file block
if let Some(lowest_range) = min_block.get(&StaticFileSegment::Transactions) {
// the earliest height is the lowest available block number
self.earliest_history_height
.store(lowest_range.start(), std::sync::atomic::Ordering::Relaxed);
}
Ok(())
}
/// Ensures that any broken invariants which cannot be healed on the spot return a pipeline
/// target to unwind to.
///
    /// Two types of consistency checks are done:
///
/// 1) When a static file fails to commit but the underlying data was changed.
/// 2) When a static file was committed, but the required database transaction was not.
///
/// For 1) it can self-heal if `self.access.is_read_only()` is set to `false`. Otherwise, it
/// will return an error.
/// For 2) the invariants below are checked, and if broken, might require a pipeline unwind
/// to heal.
///
/// For each static file segment:
/// * the corresponding database table should overlap or have continuity in their keys
/// ([`TxNumber`] or [`BlockNumber`]).
/// * its highest block should match the stage checkpoint block number if it's equal or higher
/// than the corresponding database table last entry.
///
    /// Returns an [`Option`] of [`PipelineTarget::Unwind`] if further healing is required.
///
/// WARNING: No static file writer should be held before calling this function, otherwise it
/// will deadlock.
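    ///
    /// A hedged usage sketch (`factory` and `pipeline` are assumed bindings,
    /// not part of this module):
    ///
    /// ```rust,ignore
    /// let provider = factory.provider()?;
    /// if let Some(PipelineTarget::Unwind(target)) =
    ///     static_file_provider.check_consistency(&provider, has_receipt_pruning)?
    /// {
    ///     // a broken invariant was found; unwind the pipeline to `target`
    ///     pipeline.unwind(target, None)?;
    /// }
    /// ```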
pub fn check_consistency<Provider>(
&self,
provider: &Provider,
has_receipt_pruning: bool,
) -> ProviderResult<Option<PipelineTarget>>
where
Provider: DBProvider + BlockReader + StageCheckpointReader + ChainSpecProvider,
N: NodePrimitives<Receipt: Value, BlockHeader: Value, SignedTx: Value>,
{
        // OVM historical import is broken and does not work with this check: it imports
        // duplicated receipts, resulting in more receipts than the expected transaction
        // range.
//
// If we detect an OVM import was done (block #1 <https://optimistic.etherscan.io/block/1>), skip it.
// More on [#11099](https://github.com/paradigmxyz/reth/pull/11099).
if provider.chain_spec().is_optimism() &&
reth_chainspec::Chain::optimism_mainnet() == provider.chain_spec().chain_id()
{
// check whether we have the first OVM block: <https://optimistic.etherscan.io/block/0xbee7192e575af30420cae0c7776304ac196077ee72b048970549e4f08e875453>
const OVM_HEADER_1_HASH: B256 =
b256!("0xbee7192e575af30420cae0c7776304ac196077ee72b048970549e4f08e875453");
if provider.block_number(OVM_HEADER_1_HASH)?.is_some() {
info!(target: "reth::cli",
"Skipping storage verification for OP mainnet, expected inconsistency in OVM chain"
);
return Ok(None)
}
}
info!(target: "reth::cli", "Verifying storage consistency.");
let mut unwind_target: Option<BlockNumber> = None;
let mut update_unwind_target = |new_target: BlockNumber| {
if let Some(target) = unwind_target.as_mut() {
*target = (*target).min(new_target);
} else {
unwind_target = Some(new_target);
}
};
for segment in StaticFileSegment::iter() {
if has_receipt_pruning && segment.is_receipts() {
// Pruned nodes (including full node) do not store receipts as static files.
continue
}
if segment.is_receipts() &&
(NamedChain::Gnosis == provider.chain_spec().chain_id() ||
NamedChain::Chiado == provider.chain_spec().chain_id())
{
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/static_file/mod.rs | crates/storage/provider/src/providers/static_file/mod.rs | mod manager;
pub use manager::{StaticFileAccess, StaticFileProvider, StaticFileWriter};
mod jar;
pub use jar::StaticFileJarProvider;
mod writer;
pub use writer::{StaticFileProviderRW, StaticFileProviderRWRefMut};
mod metrics;
use reth_nippy_jar::NippyJar;
use reth_static_file_types::{SegmentHeader, StaticFileSegment};
use reth_storage_errors::provider::{ProviderError, ProviderResult};
use std::{ops::Deref, sync::Arc};
/// Alias type for each specific `NippyJar`.
type LoadedJarRef<'a> = dashmap::mapref::one::Ref<'a, (u64, StaticFileSegment), LoadedJar>;
/// Helper type that holds a static file's mmap handle so created cursors can reuse it.
#[derive(Debug)]
pub struct LoadedJar {
jar: NippyJar<SegmentHeader>,
mmap_handle: Arc<reth_nippy_jar::DataReader>,
}
impl LoadedJar {
fn new(jar: NippyJar<SegmentHeader>) -> ProviderResult<Self> {
match jar.open_data_reader() {
Ok(data_reader) => {
let mmap_handle = Arc::new(data_reader);
Ok(Self { jar, mmap_handle })
}
Err(e) => Err(ProviderError::other(e)),
}
}
/// Returns a clone of the mmap handle that can be used to instantiate a cursor.
fn mmap_handle(&self) -> Arc<reth_nippy_jar::DataReader> {
self.mmap_handle.clone()
}
const fn segment(&self) -> StaticFileSegment {
self.jar.user_header().segment()
}
}
impl Deref for LoadedJar {
type Target = NippyJar<SegmentHeader>;
fn deref(&self) -> &Self::Target {
&self.jar
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
test_utils::create_test_provider_factory, HeaderProvider, StaticFileProviderFactory,
};
use alloy_consensus::{Header, SignableTransaction, Transaction, TxLegacy};
use alloy_primitives::{BlockHash, Signature, TxNumber, B256, U256};
use rand::seq::SliceRandom;
use reth_db::test_utils::create_test_static_files_dir;
use reth_db_api::{
transaction::DbTxMut, CanonicalHeaders, HeaderNumbers, HeaderTerminalDifficulties, Headers,
};
use reth_ethereum_primitives::{EthPrimitives, Receipt, TransactionSigned};
use reth_static_file_types::{
find_fixed_range, SegmentRangeInclusive, DEFAULT_BLOCKS_PER_STATIC_FILE,
};
use reth_storage_api::{ReceiptProvider, TransactionsProvider};
use reth_testing_utils::generators::{self, random_header_range};
use std::{fmt::Debug, fs, ops::Range, path::Path};
fn assert_eyre<T: PartialEq + Debug>(got: T, expected: T, msg: &str) -> eyre::Result<()> {
if got != expected {
eyre::bail!("{msg} | got: {got:?} expected: {expected:?}");
}
Ok(())
}
#[test]
fn test_snap() {
// Ranges
let row_count = 100u64;
let range = 0..=(row_count - 1);
// Data sources
let factory = create_test_provider_factory();
let static_files_path = tempfile::tempdir().unwrap();
let static_file = static_files_path.path().join(
StaticFileSegment::Headers
.filename(&find_fixed_range(*range.end(), DEFAULT_BLOCKS_PER_STATIC_FILE)),
);
// Setup data
let mut headers = random_header_range(
&mut generators::rng(),
*range.start()..(*range.end() + 1),
B256::random(),
);
let mut provider_rw = factory.provider_rw().unwrap();
let tx = provider_rw.tx_mut();
let mut td = U256::ZERO;
for header in headers.clone() {
td += header.header().difficulty;
let hash = header.hash();
tx.put::<CanonicalHeaders>(header.number, hash).unwrap();
tx.put::<Headers>(header.number, header.clone_header()).unwrap();
tx.put::<HeaderTerminalDifficulties>(header.number, td.into()).unwrap();
tx.put::<HeaderNumbers>(hash, header.number).unwrap();
}
provider_rw.commit().unwrap();
// Create StaticFile
{
let manager = factory.static_file_provider();
let mut writer = manager.latest_writer(StaticFileSegment::Headers).unwrap();
let mut td = U256::ZERO;
for header in headers.clone() {
td += header.header().difficulty;
let hash = header.hash();
writer.append_header(&header.unseal(), td, &hash).unwrap();
}
writer.commit().unwrap();
}
// Use providers to query Header data and compare if it matches
{
let db_provider = factory.provider().unwrap();
let manager = db_provider.static_file_provider();
let jar_provider = manager
.get_segment_provider_from_block(StaticFileSegment::Headers, 0, Some(&static_file))
.unwrap();
assert!(!headers.is_empty());
// Shuffled for chaos.
headers.shuffle(&mut generators::rng());
for header in headers {
let header_hash = header.hash();
let header = header.unseal();
// Compare Header
assert_eq!(header, db_provider.header(&header_hash).unwrap().unwrap());
assert_eq!(header, jar_provider.header_by_number(header.number).unwrap().unwrap());
// Compare HeaderTerminalDifficulties
assert_eq!(
db_provider.header_td(&header_hash).unwrap().unwrap(),
jar_provider.header_td_by_number(header.number).unwrap().unwrap()
);
}
}
}
#[test]
fn test_header_truncation() {
let (static_dir, _) = create_test_static_files_dir();
let blocks_per_file = 10; // Number of headers per file
let files_per_range = 3; // Number of files per range (data/conf/offset files)
let file_set_count = 3; // Number of sets of files to create
let initial_file_count = files_per_range * file_set_count;
let tip = blocks_per_file * file_set_count - 1; // Initial highest block (29 in this case)
// [ Headers Creation and Commit ]
{
let sf_rw = StaticFileProvider::<EthPrimitives>::read_write(&static_dir)
.expect("Failed to create static file provider")
.with_custom_blocks_per_file(blocks_per_file);
let mut header_writer = sf_rw.latest_writer(StaticFileSegment::Headers).unwrap();
// Append headers from 0 to the tip (29) and commit
let mut header = Header::default();
for num in 0..=tip {
header.number = num;
header_writer
.append_header(&header, U256::default(), &BlockHash::default())
.unwrap();
}
header_writer.commit().unwrap();
}
// Helper function to prune headers and validate truncation results
fn prune_and_validate(
writer: &mut StaticFileProviderRWRefMut<'_, EthPrimitives>,
sf_rw: &StaticFileProvider<EthPrimitives>,
static_dir: impl AsRef<Path>,
prune_count: u64,
expected_tip: Option<u64>,
expected_file_count: u64,
) -> eyre::Result<()> {
writer.prune_headers(prune_count)?;
writer.commit()?;
// Validate the highest block after pruning
assert_eyre(
sf_rw.get_highest_static_file_block(StaticFileSegment::Headers),
expected_tip,
"block mismatch",
)?;
if let Some(id) = expected_tip {
assert_eyre(
sf_rw.header_by_number(id)?.map(|h| h.number),
expected_tip,
"header mismatch",
)?;
}
// Validate the number of files remaining in the directory
assert_eyre(
count_files_without_lockfile(static_dir)?,
expected_file_count as usize,
"file count mismatch",
)?;
Ok(())
}
// [ Test Cases ]
type PruneCount = u64;
type ExpectedTip = u64;
type ExpectedFileCount = u64;
let mut tmp_tip = tip;
let test_cases: Vec<(PruneCount, Option<ExpectedTip>, ExpectedFileCount)> = vec![
// Case 0: Pruning 1 header
{
tmp_tip -= 1;
(1, Some(tmp_tip), initial_file_count)
},
// Case 1: Pruning remaining rows from file should result in its deletion
{
tmp_tip -= blocks_per_file - 1;
(blocks_per_file - 1, Some(tmp_tip), initial_file_count - files_per_range)
},
// Case 2: Pruning more headers than a single file has (tip reduced by
// blocks_per_file + 1) should result in a file set deletion
{
tmp_tip -= blocks_per_file + 1;
(blocks_per_file + 1, Some(tmp_tip), initial_file_count - files_per_range * 2)
},
// Case 3: Pruning all remaining headers from the file except the genesis header
{
(
tmp_tip,
Some(0), // Only genesis block remains
files_per_range, // The file set with block 0 should remain
)
},
// Case 4: Pruning the genesis header (should not delete the file set with block 0)
{
(
1,
None, // No blocks left
files_per_range, // The file set with block 0 remains
)
},
];
// Test cases execution
{
let sf_rw = StaticFileProvider::read_write(&static_dir)
.expect("Failed to create static file provider")
.with_custom_blocks_per_file(blocks_per_file);
assert_eq!(sf_rw.get_highest_static_file_block(StaticFileSegment::Headers), Some(tip));
assert_eq!(
count_files_without_lockfile(static_dir.as_ref()).unwrap(),
initial_file_count as usize
);
let mut header_writer = sf_rw.latest_writer(StaticFileSegment::Headers).unwrap();
for (case, (prune_count, expected_tip, expected_file_count)) in
test_cases.into_iter().enumerate()
{
prune_and_validate(
&mut header_writer,
&sf_rw,
&static_dir,
prune_count,
expected_tip,
expected_file_count,
)
.map_err(|err| eyre::eyre!("Test case {case}: {err}"))
.unwrap();
}
}
}
/// 3 block ranges are built
///
/// for `blocks_per_file = 10`:
/// * `0..=9` : except genesis, every block has a tx/receipt
/// * `10..=19`: no txs/receipts
/// * `20..=29`: only one tx/receipt
fn setup_tx_based_scenario(
sf_rw: &StaticFileProvider<EthPrimitives>,
segment: StaticFileSegment,
blocks_per_file: u64,
) {
fn setup_block_ranges(
writer: &mut StaticFileProviderRWRefMut<'_, EthPrimitives>,
sf_rw: &StaticFileProvider<EthPrimitives>,
segment: StaticFileSegment,
block_range: &Range<u64>,
mut tx_count: u64,
next_tx_num: &mut u64,
) {
let mut receipt = Receipt::default();
let mut tx = TxLegacy::default();
for block in block_range.clone() {
writer.increment_block(block).unwrap();
// Append transaction/receipt if there's still a transaction count to append
if tx_count > 0 {
if segment.is_receipts() {
// Used as ID for validation
receipt.cumulative_gas_used = *next_tx_num;
writer.append_receipt(*next_tx_num, &receipt).unwrap();
} else {
// Used as ID for validation
tx.nonce = *next_tx_num;
let tx: TransactionSigned =
tx.clone().into_signed(Signature::test_signature()).into();
writer.append_transaction(*next_tx_num, &tx).unwrap();
}
*next_tx_num += 1;
tx_count -= 1;
}
}
writer.commit().unwrap();
// Calculate expected values based on the range and transactions
let expected_block = block_range.end - 1;
let expected_tx = if tx_count == 0 { *next_tx_num - 1 } else { *next_tx_num };
// Perform assertions after processing the blocks
assert_eq!(sf_rw.get_highest_static_file_block(segment), Some(expected_block),);
assert_eq!(sf_rw.get_highest_static_file_tx(segment), Some(expected_tx),);
}
// Define the block ranges and transaction counts as vectors
let block_ranges = [
0..blocks_per_file,
blocks_per_file..blocks_per_file * 2,
blocks_per_file * 2..blocks_per_file * 3,
];
let tx_counts = [
blocks_per_file - 1, // First range: tx per block except genesis
0, // Second range: no transactions
1, // Third range: 1 transaction in the second block
];
let mut writer = sf_rw.latest_writer(segment).unwrap();
let mut next_tx_num = 0;
// Loop through setup scenarios
for (block_range, tx_count) in block_ranges.iter().zip(tx_counts.iter()) {
setup_block_ranges(
&mut writer,
sf_rw,
segment,
block_range,
*tx_count,
&mut next_tx_num,
);
}
// Ensure that scenario was properly setup
let expected_tx_ranges = vec![
Some(SegmentRangeInclusive::new(0, 8)),
None,
Some(SegmentRangeInclusive::new(9, 9)),
];
block_ranges.iter().zip(expected_tx_ranges).for_each(|(block_range, expected_tx_range)| {
assert_eq!(
sf_rw
.get_segment_provider_from_block(segment, block_range.start, None)
.unwrap()
.user_header()
.tx_range(),
expected_tx_range.as_ref()
);
});
// Ensure transaction index
let tx_index = sf_rw.tx_index().read();
let expected_tx_index =
vec![(8, SegmentRangeInclusive::new(0, 9)), (9, SegmentRangeInclusive::new(20, 29))];
assert_eq!(
tx_index.get(&segment).map(|index| index.iter().map(|(k, v)| (*k, *v)).collect()),
(!expected_tx_index.is_empty()).then_some(expected_tx_index),
"tx index mismatch",
);
}
#[test]
fn test_tx_based_truncation() {
let segments = [StaticFileSegment::Transactions, StaticFileSegment::Receipts];
let blocks_per_file = 10; // Number of blocks per file
let files_per_range = 3; // Number of files per range (data/conf/offset files)
let file_set_count = 3; // Number of sets of files to create
let initial_file_count = files_per_range * file_set_count;
#[expect(clippy::too_many_arguments)]
fn prune_and_validate(
sf_rw: &StaticFileProvider<EthPrimitives>,
static_dir: impl AsRef<Path>,
segment: StaticFileSegment,
prune_count: u64,
last_block: u64,
expected_tx_tip: Option<u64>,
expected_file_count: i32,
expected_tx_index: Vec<(TxNumber, SegmentRangeInclusive)>,
) -> eyre::Result<()> {
let mut writer = sf_rw.latest_writer(segment)?;
// Prune transactions or receipts based on the segment type
if segment.is_receipts() {
writer.prune_receipts(prune_count, last_block)?;
} else {
writer.prune_transactions(prune_count, last_block)?;
}
writer.commit()?;
// Verify the highest block and transaction tips
assert_eyre(
sf_rw.get_highest_static_file_block(segment),
Some(last_block),
"block mismatch",
)?;
assert_eyre(sf_rw.get_highest_static_file_tx(segment), expected_tx_tip, "tx mismatch")?;
// Verify that transactions and receipts are returned correctly. Uses
// cumulative_gas_used & nonce as ids.
if let Some(id) = expected_tx_tip {
if segment.is_receipts() {
assert_eyre(
expected_tx_tip,
sf_rw.receipt(id)?.map(|r| r.cumulative_gas_used),
"tx mismatch",
)?;
} else {
assert_eyre(
expected_tx_tip,
sf_rw.transaction_by_id(id)?.map(|t| t.nonce()),
"tx mismatch",
)?;
}
}
// Ensure the file count has reduced as expected
assert_eyre(
count_files_without_lockfile(static_dir)?,
expected_file_count as usize,
"file count mismatch",
)?;
// Ensure that the inner tx index (max_tx -> block range) is as expected
let tx_index = sf_rw.tx_index().read();
assert_eyre(
tx_index.get(&segment).map(|index| index.iter().map(|(k, v)| (*k, *v)).collect()),
(!expected_tx_index.is_empty()).then_some(expected_tx_index),
"tx index mismatch",
)?;
Ok(())
}
for segment in segments {
let (static_dir, _) = create_test_static_files_dir();
let sf_rw = StaticFileProvider::read_write(&static_dir)
.expect("Failed to create static file provider")
.with_custom_blocks_per_file(blocks_per_file);
setup_tx_based_scenario(&sf_rw, segment, blocks_per_file);
let sf_rw = StaticFileProvider::read_write(&static_dir)
.expect("Failed to create static file provider")
.with_custom_blocks_per_file(blocks_per_file);
let highest_tx = sf_rw.get_highest_static_file_tx(segment).unwrap();
// Test cases
            // (prune_count, last_block, expected_tx_tip, expected_file_count, expected_tx_index)
let test_cases = vec![
// Case 0: 20..=29 has only one tx. Prune the only tx of the block range.
// It ensures that the file is not deleted even though there are no rows, since the
// `last_block` which is passed to the prune method is the first
// block of the range.
(
1,
blocks_per_file * 2,
Some(highest_tx - 1),
initial_file_count,
vec![(highest_tx - 1, SegmentRangeInclusive::new(0, 9))],
),
                // Case 1: 10..=19 has no txs. There are no txs in the whole block range, but we
                // want to unwind to block 9. Ensures that the 20..=29 and 10..=19 files
                // are deleted.
(
0,
blocks_per_file - 1,
Some(highest_tx - 1),
files_per_range,
vec![(highest_tx - 1, SegmentRangeInclusive::new(0, 9))],
),
// Case 2: Prune most txs up to block 1.
(
highest_tx - 1,
1,
Some(0),
files_per_range,
vec![(0, SegmentRangeInclusive::new(0, 1))],
),
// Case 3: Prune remaining tx and ensure that file is not deleted.
(1, 0, None, files_per_range, vec![]),
];
// Loop through test cases
for (
case,
(prune_count, last_block, expected_tx_tip, expected_file_count, expected_tx_index),
) in test_cases.into_iter().enumerate()
{
prune_and_validate(
&sf_rw,
&static_dir,
segment,
prune_count,
last_block,
expected_tx_tip,
expected_file_count,
expected_tx_index,
)
.map_err(|err| eyre::eyre!("Test case {case}: {err}"))
.unwrap();
}
}
}
    /// Returns the number of files in the provided path, excluding the `lock` file.
fn count_files_without_lockfile(path: impl AsRef<Path>) -> eyre::Result<usize> {
let is_lockfile = |entry: &fs::DirEntry| {
entry.path().file_name().map(|name| name == "lock").unwrap_or(false)
};
let count = fs::read_dir(path)?
.filter_map(|entry| entry.ok())
.filter(|entry| !is_lockfile(entry))
.count();
Ok(count)
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/static_file/jar.rs | crates/storage/provider/src/providers/static_file/jar.rs | use super::{
metrics::{StaticFileProviderMetrics, StaticFileProviderOperation},
LoadedJarRef,
};
use crate::{
to_range, BlockHashReader, BlockNumReader, HeaderProvider, ReceiptProvider,
TransactionsProvider,
};
use alloy_consensus::transaction::{SignerRecoverable, TransactionMeta};
use alloy_eips::{eip2718::Encodable2718, BlockHashOrNumber};
use alloy_primitives::{Address, BlockHash, BlockNumber, TxHash, TxNumber, B256, U256};
use reth_chainspec::ChainInfo;
use reth_db::static_file::{
BlockHashMask, HeaderMask, HeaderWithHashMask, ReceiptMask, StaticFileCursor, TDWithHashMask,
TotalDifficultyMask, TransactionMask,
};
use reth_db_api::table::{Decompress, Value};
use reth_node_types::NodePrimitives;
use reth_primitives_traits::{SealedHeader, SignedTransaction};
use reth_storage_errors::provider::{ProviderError, ProviderResult};
use std::{
fmt::Debug,
ops::{Deref, RangeBounds, RangeInclusive},
sync::Arc,
};
/// Provider over a specific `NippyJar` and range.
#[derive(Debug)]
pub struct StaticFileJarProvider<'a, N> {
/// Main static file segment
jar: LoadedJarRef<'a>,
/// Another kind of static file segment to help query data from the main one.
auxiliary_jar: Option<Box<Self>>,
/// Metrics for the static files.
metrics: Option<Arc<StaticFileProviderMetrics>>,
/// Node primitives
_pd: std::marker::PhantomData<N>,
}
impl<'a, N: NodePrimitives> Deref for StaticFileJarProvider<'a, N> {
type Target = LoadedJarRef<'a>;
fn deref(&self) -> &Self::Target {
&self.jar
}
}
impl<'a, N: NodePrimitives> From<LoadedJarRef<'a>> for StaticFileJarProvider<'a, N> {
fn from(value: LoadedJarRef<'a>) -> Self {
StaticFileJarProvider {
jar: value,
auxiliary_jar: None,
metrics: None,
_pd: Default::default(),
}
}
}
impl<'a, N: NodePrimitives> StaticFileJarProvider<'a, N> {
/// Provides a cursor for more granular data access.
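    ///
    /// A sketch of reading a single column through the cursor (`jar_provider`
    /// and `num` are assumed bindings; mask types come from
    /// `reth_db::static_file`):
    ///
    /// ```rust,ignore
    /// let mut cursor = jar_provider.cursor()?;
    /// // fetch only the header column for block `num`
    /// let header = cursor.get_one::<HeaderMask<Header>>(num.into())?;
    /// ```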
pub fn cursor<'b>(&'b self) -> ProviderResult<StaticFileCursor<'a>>
where
'b: 'a,
{
let result = StaticFileCursor::new(self.value(), self.mmap_handle())?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
self.segment(),
StaticFileProviderOperation::InitCursor,
None,
);
}
Ok(result)
}
/// Adds a new auxiliary static file to help query data from the main one
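    ///
    /// For example, `receipt_by_hash` needs a transactions jar to resolve the
    /// hash to a tx number first (a sketch; the jar provider bindings are
    /// assumed):
    ///
    /// ```rust,ignore
    /// let receipts = receipts_jar_provider.with_auxiliary(transactions_jar_provider);
    /// let receipt = receipts.receipt_by_hash(tx_hash)?;
    /// ```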
pub fn with_auxiliary(mut self, auxiliary_jar: Self) -> Self {
self.auxiliary_jar = Some(Box::new(auxiliary_jar));
self
}
/// Enables metrics on the provider.
pub fn with_metrics(mut self, metrics: Arc<StaticFileProviderMetrics>) -> Self {
self.metrics = Some(metrics);
self
}
}
impl<N: NodePrimitives<BlockHeader: Value>> HeaderProvider for StaticFileJarProvider<'_, N> {
type Header = N::BlockHeader;
fn header(&self, block_hash: &BlockHash) -> ProviderResult<Option<Self::Header>> {
Ok(self
.cursor()?
.get_two::<HeaderWithHashMask<Self::Header>>(block_hash.into())?
.filter(|(_, hash)| hash == block_hash)
.map(|(header, _)| header))
}
fn header_by_number(&self, num: BlockNumber) -> ProviderResult<Option<Self::Header>> {
self.cursor()?.get_one::<HeaderMask<Self::Header>>(num.into())
}
fn header_td(&self, block_hash: &BlockHash) -> ProviderResult<Option<U256>> {
Ok(self
.cursor()?
.get_two::<TDWithHashMask>(block_hash.into())?
.filter(|(_, hash)| hash == block_hash)
.map(|(td, _)| td.into()))
}
fn header_td_by_number(&self, num: BlockNumber) -> ProviderResult<Option<U256>> {
Ok(self.cursor()?.get_one::<TotalDifficultyMask>(num.into())?.map(Into::into))
}
fn headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Self::Header>> {
let range = to_range(range);
let mut cursor = self.cursor()?;
let mut headers = Vec::with_capacity((range.end - range.start) as usize);
for num in range {
if let Some(header) = cursor.get_one::<HeaderMask<Self::Header>>(num.into())? {
headers.push(header);
}
}
Ok(headers)
}
fn sealed_header(
&self,
number: BlockNumber,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
Ok(self
.cursor()?
.get_two::<HeaderWithHashMask<Self::Header>>(number.into())?
.map(|(header, hash)| SealedHeader::new(header, hash)))
}
fn sealed_headers_while(
&self,
range: impl RangeBounds<BlockNumber>,
mut predicate: impl FnMut(&SealedHeader<Self::Header>) -> bool,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
let range = to_range(range);
let mut cursor = self.cursor()?;
let mut headers = Vec::with_capacity((range.end - range.start) as usize);
for number in range {
if let Some((header, hash)) =
cursor.get_two::<HeaderWithHashMask<Self::Header>>(number.into())?
{
let sealed = SealedHeader::new(header, hash);
if !predicate(&sealed) {
break
}
headers.push(sealed);
}
}
Ok(headers)
}
}
impl<N: NodePrimitives> BlockHashReader for StaticFileJarProvider<'_, N> {
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
self.cursor()?.get_one::<BlockHashMask>(number.into())
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
let mut cursor = self.cursor()?;
let mut hashes = Vec::with_capacity((end - start) as usize);
for number in start..end {
if let Some(hash) = cursor.get_one::<BlockHashMask>(number.into())? {
hashes.push(hash)
}
}
Ok(hashes)
}
}
impl<N: NodePrimitives> BlockNumReader for StaticFileJarProvider<'_, N> {
fn chain_info(&self) -> ProviderResult<ChainInfo> {
// Information on live database
Err(ProviderError::UnsupportedProvider)
}
fn best_block_number(&self) -> ProviderResult<BlockNumber> {
// Information on live database
Err(ProviderError::UnsupportedProvider)
}
fn last_block_number(&self) -> ProviderResult<BlockNumber> {
// Information on live database
Err(ProviderError::UnsupportedProvider)
}
fn block_number(&self, hash: B256) -> ProviderResult<Option<BlockNumber>> {
let mut cursor = self.cursor()?;
Ok(cursor
.get_one::<BlockHashMask>((&hash).into())?
.and_then(|res| (res == hash).then(|| cursor.number()).flatten()))
}
}
impl<N: NodePrimitives<SignedTx: Decompress + SignedTransaction>> TransactionsProvider
for StaticFileJarProvider<'_, N>
{
type Transaction = N::SignedTx;
fn transaction_id(&self, hash: TxHash) -> ProviderResult<Option<TxNumber>> {
let mut cursor = self.cursor()?;
Ok(cursor
.get_one::<TransactionMask<Self::Transaction>>((&hash).into())?
.and_then(|res| (res.trie_hash() == hash).then(|| cursor.number()).flatten()))
}
fn transaction_by_id(&self, num: TxNumber) -> ProviderResult<Option<Self::Transaction>> {
self.cursor()?.get_one::<TransactionMask<Self::Transaction>>(num.into())
}
fn transaction_by_id_unhashed(
&self,
num: TxNumber,
) -> ProviderResult<Option<Self::Transaction>> {
self.cursor()?.get_one::<TransactionMask<Self::Transaction>>(num.into())
}
fn transaction_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Transaction>> {
self.cursor()?.get_one::<TransactionMask<Self::Transaction>>((&hash).into())
}
fn transaction_by_hash_with_meta(
&self,
_hash: TxHash,
) -> ProviderResult<Option<(Self::Transaction, TransactionMeta)>> {
// Information required on indexing table [`tables::TransactionBlocks`]
Err(ProviderError::UnsupportedProvider)
}
fn transaction_block(&self, _id: TxNumber) -> ProviderResult<Option<BlockNumber>> {
// Information on indexing table [`tables::TransactionBlocks`]
Err(ProviderError::UnsupportedProvider)
}
fn transactions_by_block(
&self,
_block_id: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Transaction>>> {
// Related to indexing tables. Live database should get the tx_range and call static file
// provider with `transactions_by_tx_range` instead.
Err(ProviderError::UnsupportedProvider)
}
fn transactions_by_block_range(
&self,
_range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Transaction>>> {
// Related to indexing tables. Live database should get the tx_range and call static file
// provider with `transactions_by_tx_range` instead.
Err(ProviderError::UnsupportedProvider)
}
fn transactions_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Transaction>> {
let range = to_range(range);
let mut cursor = self.cursor()?;
let mut txes = Vec::with_capacity((range.end - range.start) as usize);
for num in range {
if let Some(tx) = cursor.get_one::<TransactionMask<Self::Transaction>>(num.into())? {
txes.push(tx)
}
}
Ok(txes)
}
fn senders_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Address>> {
let txs = self.transactions_by_tx_range(range)?;
Ok(reth_primitives_traits::transaction::recover::recover_signers(&txs)?)
}
fn transaction_sender(&self, num: TxNumber) -> ProviderResult<Option<Address>> {
Ok(self
.cursor()?
.get_one::<TransactionMask<Self::Transaction>>(num.into())?
.and_then(|tx| tx.recover_signer().ok()))
}
}
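The hash-based lookup in `transaction_id` above guards against false positives: the cursor may return a candidate for the queried key, so the candidate's full hash is re-checked before its number is accepted. A minimal sketch of that idiom, with all names and the toy "prefix index" being illustrative rather than reth API:

```rust
/// Look up a record by hash in a store whose index may return false
/// positives (here simulated by matching only on the first byte).
fn lookup(store: &[(u64, [u8; 4])], hash: [u8; 4]) -> Option<u64> {
    store
        .iter()
        // Simulate a cursor hit: first candidate with a matching prefix byte.
        .find(|(_, h)| h[0] == hash[0])
        // Accept the candidate only if the full hash matches, mirroring
        // `(res.trie_hash() == hash).then(|| cursor.number()).flatten()`.
        .and_then(|(num, h)| (*h == hash).then_some(*num))
}

fn main() {
    let store = [(7, [0xaa, 1, 2, 3]), (9, [0xbb, 4, 5, 6])];
    assert_eq!(lookup(&store, [0xaa, 1, 2, 3]), Some(7));
    // Same prefix byte, different full hash: filtered out.
    assert_eq!(lookup(&store, [0xaa, 9, 9, 9]), None);
}
```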
impl<N: NodePrimitives<SignedTx: Decompress + SignedTransaction, Receipt: Decompress>>
ReceiptProvider for StaticFileJarProvider<'_, N>
{
type Receipt = N::Receipt;
fn receipt(&self, num: TxNumber) -> ProviderResult<Option<Self::Receipt>> {
self.cursor()?.get_one::<ReceiptMask<Self::Receipt>>(num.into())
}
fn receipt_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Receipt>> {
if let Some(tx_static_file) = &self.auxiliary_jar {
if let Some(num) = tx_static_file.transaction_id(hash)? {
return self.receipt(num)
}
}
Ok(None)
}
fn receipts_by_block(
&self,
_block: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Receipt>>> {
// Related to indexing tables. StaticFile should get the tx_range and call static file
// provider with `receipt()` instead for each
Err(ProviderError::UnsupportedProvider)
}
fn receipts_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Receipt>> {
let range = to_range(range);
let mut cursor = self.cursor()?;
let mut receipts = Vec::with_capacity((range.end - range.start) as usize);
for num in range {
if let Some(tx) = cursor.get_one::<ReceiptMask<Self::Receipt>>(num.into())? {
receipts.push(tx)
}
}
Ok(receipts)
}
fn receipts_by_block_range(
&self,
_block_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Receipt>>> {
// Related to indexing tables. StaticFile should get the tx_range and call static file
// provider with `receipt()` instead for each
Err(ProviderError::UnsupportedProvider)
}
}
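`receipt_by_hash` above chains two lookups: the auxiliary transactions jar resolves the hash to a transaction number, and the receipt is then fetched by that number. A sketch of the same control flow using plain maps as stand-ins for the jar providers (all names illustrative):

```rust
use std::collections::HashMap;

/// Resolve a receipt by hash via an optional auxiliary transaction index,
/// returning `None` if either the index is absent or the hash is unknown.
fn receipt_by_hash(
    tx_index: Option<&HashMap<&'static str, u64>>, // stand-in for the auxiliary jar
    receipts: &HashMap<u64, &'static str>,
    hash: &'static str,
) -> Option<&'static str> {
    // `?` short-circuits on a missing auxiliary jar or an unknown hash,
    // mirroring the nested `if let` chain above.
    let num = tx_index?.get(hash)?;
    receipts.get(num).copied()
}

fn main() {
    let tx_index = HashMap::from([("0xabc", 7u64)]);
    let receipts = HashMap::from([(7u64, "receipt-7")]);
    assert_eq!(receipt_by_hash(Some(&tx_index), &receipts, "0xabc"), Some("receipt-7"));
    // Without the auxiliary jar, the hash cannot be resolved.
    assert_eq!(receipt_by_hash(None, &receipts, "0xabc"), None);
}
```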
// ---- crates/storage/provider/src/providers/static_file/writer.rs ----
use super::{
manager::StaticFileProviderInner, metrics::StaticFileProviderMetrics, StaticFileProvider,
};
use crate::providers::static_file::metrics::StaticFileProviderOperation;
use alloy_consensus::BlockHeader;
use alloy_primitives::{BlockHash, BlockNumber, TxNumber, U256};
use parking_lot::{lock_api::RwLockWriteGuard, RawRwLock, RwLock};
use reth_codecs::Compact;
use reth_db_api::models::CompactU256;
use reth_nippy_jar::{NippyJar, NippyJarError, NippyJarWriter};
use reth_node_types::NodePrimitives;
use reth_static_file_types::{SegmentHeader, SegmentRangeInclusive, StaticFileSegment};
use reth_storage_errors::provider::{ProviderError, ProviderResult, StaticFileWriterError};
use std::{
borrow::Borrow,
fmt::Debug,
path::{Path, PathBuf},
sync::{Arc, Weak},
time::Instant,
};
use tracing::debug;
/// Static file writers for every known [`StaticFileSegment`].
///
/// WARNING: Trying to use more than one writer for the same segment type **will result in a
/// deadlock**.
#[derive(Debug)]
pub(crate) struct StaticFileWriters<N> {
headers: RwLock<Option<StaticFileProviderRW<N>>>,
transactions: RwLock<Option<StaticFileProviderRW<N>>>,
receipts: RwLock<Option<StaticFileProviderRW<N>>>,
}
impl<N> Default for StaticFileWriters<N> {
fn default() -> Self {
Self {
headers: Default::default(),
transactions: Default::default(),
receipts: Default::default(),
}
}
}
impl<N: NodePrimitives> StaticFileWriters<N> {
pub(crate) fn get_or_create(
&self,
segment: StaticFileSegment,
create_fn: impl FnOnce() -> ProviderResult<StaticFileProviderRW<N>>,
) -> ProviderResult<StaticFileProviderRWRefMut<'_, N>> {
let mut write_guard = match segment {
StaticFileSegment::Headers => self.headers.write(),
StaticFileSegment::Transactions => self.transactions.write(),
StaticFileSegment::Receipts => self.receipts.write(),
};
if write_guard.is_none() {
*write_guard = Some(create_fn()?);
}
Ok(StaticFileProviderRWRefMut(write_guard))
}
pub(crate) fn commit(&self) -> ProviderResult<()> {
for writer_lock in [&self.headers, &self.transactions, &self.receipts] {
let mut writer = writer_lock.write();
if let Some(writer) = writer.as_mut() {
writer.commit()?;
}
}
Ok(())
}
}
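The `get_or_create` flow above (take the write lock, lazily initialize the slot, hand back the guard) can be sketched with std's `RwLock`; `parking_lot` behaves analogously. `Writers` and the `String` payload are illustrative stand-ins, not reth types:

```rust
use std::sync::{RwLock, RwLockWriteGuard};

struct Writers {
    headers: RwLock<Option<String>>, // stand-in for Option<StaticFileProviderRW>
}

impl Writers {
    /// Initialize the writer on first access and return the held write guard,
    /// so the caller has exclusive access for the guard's lifetime.
    fn get_or_create(
        &self,
        create: impl FnOnce() -> String,
    ) -> RwLockWriteGuard<'_, Option<String>> {
        let mut guard = self.headers.write().unwrap();
        if guard.is_none() {
            // First caller pays the creation cost; later callers reuse it.
            *guard = Some(create());
        }
        guard
    }
}

fn main() {
    let w = Writers { headers: RwLock::new(None) };
    assert_eq!(w.get_or_create(|| "writer".to_string()).as_deref(), Some("writer"));
    // The second call must not re-run the creation closure.
    assert_eq!(w.get_or_create(|| unreachable!()).as_deref(), Some("writer"));
}
```

Holding the same guard twice deadlocks, which is exactly the warning on `StaticFileWriters` above.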
/// Mutable reference to a [`StaticFileProviderRW`] behind a [`RwLockWriteGuard`].
#[derive(Debug)]
pub struct StaticFileProviderRWRefMut<'a, N>(
pub(crate) RwLockWriteGuard<'a, RawRwLock, Option<StaticFileProviderRW<N>>>,
);
impl<N> std::ops::DerefMut for StaticFileProviderRWRefMut<'_, N> {
fn deref_mut(&mut self) -> &mut Self::Target {
// This is always created by [`StaticFileWriters::get_or_create`]
self.0.as_mut().expect("static file writer provider should be init")
}
}
impl<N> std::ops::Deref for StaticFileProviderRWRefMut<'_, N> {
type Target = StaticFileProviderRW<N>;
fn deref(&self) -> &Self::Target {
// This is always created by [`StaticFileWriters::get_or_create`]
self.0.as_ref().expect("static file writer provider should be init")
}
}
/// Extends `StaticFileProvider` with writing capabilities.
#[derive(Debug)]
pub struct StaticFileProviderRW<N> {
    /// Reference back to the provider. We need [`Weak`] here because [`StaticFileProviderRW`] is
    /// stored in a [`dashmap::DashMap`] inside the parent [`StaticFileProvider`], which is an
    /// [`Arc`]. If we were to use an [`Arc`] here, we would create a reference cycle.
reader: Weak<StaticFileProviderInner<N>>,
/// A [`NippyJarWriter`] instance.
writer: NippyJarWriter<SegmentHeader>,
/// Path to opened file.
data_path: PathBuf,
/// Reusable buffer for encoding appended data.
buf: Vec<u8>,
/// Metrics.
metrics: Option<Arc<StaticFileProviderMetrics>>,
/// On commit, does the instructed pruning: number of lines, and if it applies, the last block
/// it ends at.
prune_on_commit: Option<(u64, Option<BlockNumber>)>,
}
impl<N: NodePrimitives> StaticFileProviderRW<N> {
/// Creates a new [`StaticFileProviderRW`] for a [`StaticFileSegment`].
///
    /// Before use, transaction-based segments should ensure the block end range is the expected
    /// one, and heal it if not. For more, see `Self::ensure_end_range_consistency`.
pub fn new(
segment: StaticFileSegment,
block: BlockNumber,
reader: Weak<StaticFileProviderInner<N>>,
metrics: Option<Arc<StaticFileProviderMetrics>>,
) -> ProviderResult<Self> {
let (writer, data_path) = Self::open(segment, block, reader.clone(), metrics.clone())?;
let mut writer = Self {
writer,
data_path,
buf: Vec::with_capacity(100),
reader,
metrics,
prune_on_commit: None,
};
writer.ensure_end_range_consistency()?;
Ok(writer)
}
fn open(
segment: StaticFileSegment,
block: u64,
reader: Weak<StaticFileProviderInner<N>>,
metrics: Option<Arc<StaticFileProviderMetrics>>,
) -> ProviderResult<(NippyJarWriter<SegmentHeader>, PathBuf)> {
let start = Instant::now();
let static_file_provider = Self::upgrade_provider_to_strong_reference(&reader);
let block_range = static_file_provider.find_fixed_range(block);
let (jar, path) = match static_file_provider.get_segment_provider_from_block(
segment,
block_range.start(),
None,
) {
Ok(provider) => (
NippyJar::load(provider.data_path()).map_err(ProviderError::other)?,
provider.data_path().into(),
),
Err(ProviderError::MissingStaticFileBlock(_, _)) => {
let path = static_file_provider.directory().join(segment.filename(&block_range));
(create_jar(segment, &path, block_range), path)
}
Err(err) => return Err(err),
};
let result = match NippyJarWriter::new(jar) {
Ok(writer) => Ok((writer, path)),
Err(NippyJarError::FrozenJar) => {
                // This static file has been frozen, so it can no longer be written to.
Err(ProviderError::FinalizedStaticFile(segment, block))
}
Err(e) => Err(ProviderError::other(e)),
}?;
if let Some(metrics) = &metrics {
metrics.record_segment_operation(
segment,
StaticFileProviderOperation::OpenWriter,
Some(start.elapsed()),
);
}
Ok(result)
}
/// If a file level healing happens, we need to update the end range on the
/// [`SegmentHeader`].
///
/// However, for transaction based segments, the block end range has to be found and healed
/// externally.
///
/// Check [`reth_nippy_jar::NippyJarChecker`] &
/// [`NippyJarWriter`] for more on healing.
fn ensure_end_range_consistency(&mut self) -> ProviderResult<()> {
// If we have lost rows (in this run or previous), we need to update the [SegmentHeader].
let expected_rows = if self.user_header().segment().is_headers() {
self.user_header().block_len().unwrap_or_default()
} else {
self.user_header().tx_len().unwrap_or_default()
};
let pruned_rows = expected_rows - self.writer.rows() as u64;
if pruned_rows > 0 {
self.user_header_mut().prune(pruned_rows);
}
self.writer.commit().map_err(ProviderError::other)?;
        // Updates the [`StaticFileProvider`] reader index
self.update_index()?;
Ok(())
}
/// Commits configuration changes to disk and updates the reader index with the new changes.
pub fn commit(&mut self) -> ProviderResult<()> {
let start = Instant::now();
// Truncates the data file if instructed to.
if let Some((to_delete, last_block_number)) = self.prune_on_commit.take() {
match self.writer.user_header().segment() {
StaticFileSegment::Headers => self.prune_header_data(to_delete)?,
StaticFileSegment::Transactions => self
.prune_transaction_data(to_delete, last_block_number.expect("should exist"))?,
StaticFileSegment::Receipts => {
self.prune_receipt_data(to_delete, last_block_number.expect("should exist"))?
}
}
}
if self.writer.is_dirty() {
// Commits offsets and new user_header to disk
self.writer.commit().map_err(ProviderError::other)?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
self.writer.user_header().segment(),
StaticFileProviderOperation::CommitWriter,
Some(start.elapsed()),
);
}
debug!(
target: "provider::static_file",
segment = ?self.writer.user_header().segment(),
path = ?self.data_path,
duration = ?start.elapsed(),
"Commit"
);
self.update_index()?;
}
Ok(())
}
/// Commits configuration changes to disk and updates the reader index with the new changes.
///
/// CAUTION: does not call `sync_all` on the files.
#[cfg(feature = "test-utils")]
pub fn commit_without_sync_all(&mut self) -> ProviderResult<()> {
let start = Instant::now();
// Commits offsets and new user_header to disk
self.writer.commit_without_sync_all().map_err(ProviderError::other)?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
self.writer.user_header().segment(),
StaticFileProviderOperation::CommitWriter,
Some(start.elapsed()),
);
}
debug!(
target: "provider::static_file",
segment = ?self.writer.user_header().segment(),
path = ?self.data_path,
duration = ?start.elapsed(),
"Commit"
);
self.update_index()?;
Ok(())
}
/// Updates the `self.reader` internal index.
fn update_index(&self) -> ProviderResult<()> {
// We find the maximum block of the segment by checking this writer's last block.
//
// However if there's no block range (because there's no data), we try to calculate it by
// subtracting 1 from the expected block start, resulting on the last block of the
// previous file.
//
// If that expected block start is 0, then it means that there's no actual block data, and
// there's no block data in static files.
let segment_max_block = self
.writer
.user_header()
.block_range()
.as_ref()
.map(|block_range| block_range.end())
.or_else(|| {
(self.writer.user_header().expected_block_start() > 0)
.then(|| self.writer.user_header().expected_block_start() - 1)
});
self.reader().update_index(self.writer.user_header().segment(), segment_max_block)
}
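The fallback logic documented inside `update_index` above reduces to a small pure function: use the end of the current block range if one exists; otherwise a nonzero expected start means the previous file ends at `start - 1`, while a zero start means there is no block data at all. Names here are illustrative:

```rust
/// Compute the maximum block covered by a segment, following the same
/// `or_else` fallback as `update_index` above.
fn segment_max_block(block_range_end: Option<u64>, expected_block_start: u64) -> Option<u64> {
    block_range_end.or_else(|| (expected_block_start > 0).then(|| expected_block_start - 1))
}

fn main() {
    // Current file has data: its range end wins.
    assert_eq!(segment_max_block(Some(499_999), 500_000), Some(499_999));
    // Empty file with a nonzero expected start: previous file's tip.
    assert_eq!(segment_max_block(None, 500_000), Some(499_999));
    // Empty file expected to start at genesis: no block data anywhere.
    assert_eq!(segment_max_block(None, 0), None);
}
```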
    /// Increments the [`SegmentHeader`] end block. It will commit the current static file and
    /// create the next one if we are past the end range.
pub fn increment_block(&mut self, expected_block_number: BlockNumber) -> ProviderResult<()> {
let segment = self.writer.user_header().segment();
self.check_next_block_number(expected_block_number)?;
let start = Instant::now();
if let Some(last_block) = self.writer.user_header().block_end() {
// We have finished the previous static file and must freeze it
if last_block == self.writer.user_header().expected_block_end() {
// Commits offsets and new user_header to disk
self.commit()?;
// Opens the new static file
let (writer, data_path) =
Self::open(segment, last_block + 1, self.reader.clone(), self.metrics.clone())?;
self.writer = writer;
self.data_path = data_path;
*self.writer.user_header_mut() = SegmentHeader::new(
self.reader().find_fixed_range(last_block + 1),
None,
None,
segment,
);
}
}
self.writer.user_header_mut().increment_block();
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
segment,
StaticFileProviderOperation::IncrementBlock,
Some(start.elapsed()),
);
}
Ok(())
}
    /// Returns the block number that comes right after the current tip of the static files.
pub fn next_block_number(&self) -> u64 {
// The next static file block number can be found by checking the one after block_end.
// However, if it's a new file that hasn't been added any data, its block range will
// actually be None. In that case, the next block will be found on `expected_block_start`.
self.writer
.user_header()
.block_end()
.map(|b| b + 1)
.unwrap_or_else(|| self.writer.user_header().expected_block_start())
}
/// Verifies if the incoming block number matches the next expected block number
/// for a static file. This ensures data continuity when adding new blocks.
fn check_next_block_number(&self, expected_block_number: u64) -> ProviderResult<()> {
let next_static_file_block = self.next_block_number();
if expected_block_number != next_static_file_block {
return Err(ProviderError::UnexpectedStaticFileBlockNumber(
self.writer.user_header().segment(),
expected_block_number,
next_static_file_block,
))
}
Ok(())
}
    /// Truncates a number of rows from disk. It deletes and loads an older static file if the
    /// truncation reaches beyond the start of the current block range.
    ///
    /// **`last_block`** should be passed only with transaction-based segments.
///
/// # Note
/// Commits to the configuration file at the end.
fn truncate(&mut self, num_rows: u64, last_block: Option<u64>) -> ProviderResult<()> {
let mut remaining_rows = num_rows;
let segment = self.writer.user_header().segment();
while remaining_rows > 0 {
let len = if segment.is_block_based() {
self.writer.user_header().block_len().unwrap_or_default()
} else {
self.writer.user_header().tx_len().unwrap_or_default()
};
if remaining_rows >= len {
// If there's more rows to delete than this static file contains, then just
// delete the whole file and go to the next static file
let block_start = self.writer.user_header().expected_block_start();
// We only delete the file if it's NOT the first static file AND:
// * it's a Header segment OR
// * it's a tx-based segment AND `last_block` is lower than the first block of this
// file's block range. Otherwise, having no rows simply means that this block
// range has no transactions, but the file should remain.
if block_start != 0 &&
(segment.is_headers() || last_block.is_some_and(|b| b < block_start))
{
self.delete_current_and_open_previous()?;
} else {
// Update `SegmentHeader`
self.writer.user_header_mut().prune(len);
self.writer.prune_rows(len as usize).map_err(ProviderError::other)?;
break
}
remaining_rows -= len;
} else {
// Update `SegmentHeader`
self.writer.user_header_mut().prune(remaining_rows);
// Truncate data
self.writer.prune_rows(remaining_rows as usize).map_err(ProviderError::other)?;
remaining_rows = 0;
}
}
        // Only applies to transaction-based segments (Transactions and Receipts)
if let Some(last_block) = last_block {
let mut expected_block_start = self.writer.user_header().expected_block_start();
if num_rows == 0 {
// Edge case for when we are unwinding a chain of empty blocks that goes across
// files, and therefore, the only reference point to know which file
// we are supposed to be at is `last_block`.
while last_block < expected_block_start {
self.delete_current_and_open_previous()?;
expected_block_start = self.writer.user_header().expected_block_start();
}
}
self.writer.user_header_mut().set_block_range(expected_block_start, last_block);
}
// Commits new changes to disk.
self.commit()?;
Ok(())
}
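The multi-file loop in `truncate` above prunes rows from the newest file backwards: when a whole file's worth of rows must go (and it is not the first file), the file is deleted and the previous one is opened; otherwise only part of the current file is pruned. A toy sketch where files are modeled as plain row counts (everything here is illustrative, including the simplified "never delete the first file" rule):

```rust
/// Remove `remaining` rows from the back of a sequence of files, deleting
/// fully consumed files except the very first one, which is only emptied.
fn truncate(files: &mut Vec<u64>, mut remaining: u64) {
    while remaining > 0 {
        let len = *files.last().expect("at least one file");
        if remaining >= len && files.len() > 1 {
            // More rows to delete than this file holds: drop the whole file
            // and continue on the previous one.
            files.pop();
            remaining -= len;
        } else {
            // Prune only part of the current (or first) file, then stop.
            *files.last_mut().unwrap() = len.saturating_sub(remaining);
            break;
        }
    }
}

fn main() {
    let mut files = vec![100, 100, 40];
    truncate(&mut files, 90); // 40 from the last file, 50 from the middle one
    assert_eq!(files, vec![100, 50]);

    let mut files = vec![10];
    truncate(&mut files, 25); // the first file is never deleted, only emptied
    assert_eq!(files, vec![0]);
}
```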
/// Delete the current static file, and replace this provider writer with the previous static
/// file.
fn delete_current_and_open_previous(&mut self) -> Result<(), ProviderError> {
let current_path = self.data_path.clone();
let (previous_writer, data_path) = Self::open(
self.user_header().segment(),
self.writer.user_header().expected_block_start() - 1,
self.reader.clone(),
self.metrics.clone(),
)?;
self.writer = previous_writer;
self.writer.set_dirty();
self.data_path = data_path;
NippyJar::<SegmentHeader>::load(¤t_path)
.map_err(ProviderError::other)?
.delete()
.map_err(ProviderError::other)?;
Ok(())
}
/// Appends column to static file.
fn append_column<T: Compact>(&mut self, column: T) -> ProviderResult<()> {
self.buf.clear();
column.to_compact(&mut self.buf);
self.writer.append_column(Some(Ok(&self.buf))).map_err(ProviderError::other)?;
Ok(())
}
    /// Appends a value to a tx number-based static file, ensuring that `tx_num` immediately
    /// follows the current tx range (or starts it, if the file has no tx range yet).
fn append_with_tx_number<V: Compact>(
&mut self,
tx_num: TxNumber,
value: V,
) -> ProviderResult<()> {
if let Some(range) = self.writer.user_header().tx_range() {
let next_tx = range.end() + 1;
if next_tx != tx_num {
return Err(ProviderError::UnexpectedStaticFileTxNumber(
self.writer.user_header().segment(),
tx_num,
next_tx,
))
}
self.writer.user_header_mut().increment_tx();
} else {
self.writer.user_header_mut().set_tx_range(tx_num, tx_num);
}
self.append_column(value)?;
Ok(())
}
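The contiguity check in `append_with_tx_number` above can be reduced to its core: an append is accepted only if it is exactly one past the current range end, or if no range exists yet (first row of the file). A sketch with illustrative names, returning the `(got, expected)` pair on mismatch:

```rust
/// Validate that `tx_num` is the next contiguous transaction number.
fn check_append(tx_range_end: Option<u64>, tx_num: u64) -> Result<(), (u64, u64)> {
    match tx_range_end {
        Some(end) => {
            let next = end + 1;
            // Gaps and replays are both rejected; only `next` is accepted.
            if tx_num == next { Ok(()) } else { Err((tx_num, next)) }
        }
        // An empty file accepts any first tx number and starts the range there.
        None => Ok(()),
    }
}

fn main() {
    assert_eq!(check_append(None, 42), Ok(()));            // first row sets the range
    assert_eq!(check_append(Some(42), 43), Ok(()));        // contiguous append
    assert_eq!(check_append(Some(42), 45), Err((45, 43))); // gap rejected
}
```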
/// Appends header to static file.
///
/// It **CALLS** `increment_block()` since the number of headers is equal to the number of
/// blocks.
pub fn append_header(
&mut self,
header: &N::BlockHeader,
total_difficulty: U256,
hash: &BlockHash,
) -> ProviderResult<()>
where
N::BlockHeader: Compact,
{
let start = Instant::now();
self.ensure_no_queued_prune()?;
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Headers);
self.increment_block(header.number())?;
self.append_column(header)?;
self.append_column(CompactU256::from(total_difficulty))?;
self.append_column(hash)?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
StaticFileSegment::Headers,
StaticFileProviderOperation::Append,
Some(start.elapsed()),
);
}
Ok(())
}
/// Appends transaction to static file.
///
/// It **DOES NOT CALL** `increment_block()`, it should be handled elsewhere. There might be
/// empty blocks and this function wouldn't be called.
pub fn append_transaction(&mut self, tx_num: TxNumber, tx: &N::SignedTx) -> ProviderResult<()>
where
N::SignedTx: Compact,
{
let start = Instant::now();
self.ensure_no_queued_prune()?;
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Transactions);
self.append_with_tx_number(tx_num, tx)?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
StaticFileSegment::Transactions,
StaticFileProviderOperation::Append,
Some(start.elapsed()),
);
}
Ok(())
}
/// Appends receipt to static file.
///
/// It **DOES NOT** call `increment_block()`, it should be handled elsewhere. There might be
/// empty blocks and this function wouldn't be called.
pub fn append_receipt(&mut self, tx_num: TxNumber, receipt: &N::Receipt) -> ProviderResult<()>
where
N::Receipt: Compact,
{
let start = Instant::now();
self.ensure_no_queued_prune()?;
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Receipts);
self.append_with_tx_number(tx_num, receipt)?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
StaticFileSegment::Receipts,
StaticFileProviderOperation::Append,
Some(start.elapsed()),
);
}
Ok(())
}
/// Appends multiple receipts to the static file.
///
/// Returns the current [`TxNumber`] as seen in the static file, if any.
pub fn append_receipts<I, R>(&mut self, receipts: I) -> ProviderResult<Option<TxNumber>>
where
I: Iterator<Item = Result<(TxNumber, R), ProviderError>>,
R: Borrow<N::Receipt>,
N::Receipt: Compact,
{
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Receipts);
let mut receipts_iter = receipts.into_iter().peekable();
// If receipts are empty, we can simply return None
if receipts_iter.peek().is_none() {
return Ok(None);
}
let start = Instant::now();
self.ensure_no_queued_prune()?;
        // At this point the iterator contains at least one receipt, so `tx_number` will be
        // overwritten below.
let mut tx_number = 0;
let mut count: u64 = 0;
for receipt_result in receipts_iter {
let (tx_num, receipt) = receipt_result?;
self.append_with_tx_number(tx_num, receipt.borrow())?;
tx_number = tx_num;
count += 1;
}
if let Some(metrics) = &self.metrics {
metrics.record_segment_operations(
StaticFileSegment::Receipts,
StaticFileProviderOperation::Append,
count,
Some(start.elapsed()),
);
}
Ok(Some(tx_number))
}
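`append_receipts` above uses `peekable` to short-circuit on empty input before doing any setup work, then folds the iterator while tracking the last key. The same shape in isolation (names and the `&str` payload are illustrative):

```rust
/// Return the last key of a non-empty iterator, or `None` without doing any
/// work if the iterator is empty, mirroring the `peek` check above.
fn last_key<I: Iterator<Item = (u64, &'static str)>>(items: I) -> Option<u64> {
    let mut iter = items.peekable();
    iter.peek()?; // empty input: bail out before any setup cost
    let mut last = 0;
    for (num, _payload) in iter {
        // In the real code each item would be appended here.
        last = num;
    }
    Some(last)
}

fn main() {
    assert_eq!(last_key(std::iter::empty()), None);
    assert_eq!(last_key(vec![(5, "a"), (6, "b")].into_iter()), Some(6));
}
```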
/// Adds an instruction to prune `to_delete` transactions during commit.
///
    /// Note: `last_block` refers to the block at which the unwind ends.
pub fn prune_transactions(
&mut self,
to_delete: u64,
last_block: BlockNumber,
) -> ProviderResult<()> {
debug_assert_eq!(self.writer.user_header().segment(), StaticFileSegment::Transactions);
self.queue_prune(to_delete, Some(last_block))
}
/// Adds an instruction to prune `to_delete` receipts during commit.
///
    /// Note: `last_block` refers to the block at which the unwind ends.
pub fn prune_receipts(
&mut self,
to_delete: u64,
last_block: BlockNumber,
) -> ProviderResult<()> {
debug_assert_eq!(self.writer.user_header().segment(), StaticFileSegment::Receipts);
self.queue_prune(to_delete, Some(last_block))
}
/// Adds an instruction to prune `to_delete` headers during commit.
pub fn prune_headers(&mut self, to_delete: u64) -> ProviderResult<()> {
debug_assert_eq!(self.writer.user_header().segment(), StaticFileSegment::Headers);
self.queue_prune(to_delete, None)
}
/// Adds an instruction to prune `to_delete` elements during commit.
///
    /// Note: `last_block` refers to the block at which the unwind ends, if dealing with
    /// transaction-based data.
fn queue_prune(
&mut self,
to_delete: u64,
last_block: Option<BlockNumber>,
) -> ProviderResult<()> {
self.ensure_no_queued_prune()?;
self.prune_on_commit = Some((to_delete, last_block));
Ok(())
}
    /// Returns an error if there is a queued pruning instruction that has not been applied yet.
fn ensure_no_queued_prune(&self) -> ProviderResult<()> {
if self.prune_on_commit.is_some() {
return Err(ProviderError::other(StaticFileWriterError::new(
"Pruning should be committed before appending or pruning more data",
)));
}
Ok(())
}
/// Removes the last `to_delete` transactions from the data file.
fn prune_transaction_data(
&mut self,
to_delete: u64,
last_block: BlockNumber,
) -> ProviderResult<()> {
let start = Instant::now();
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Transactions);
self.truncate(to_delete, Some(last_block))?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
StaticFileSegment::Transactions,
StaticFileProviderOperation::Prune,
Some(start.elapsed()),
);
}
Ok(())
}
/// Prunes the last `to_delete` receipts from the data file.
fn prune_receipt_data(
&mut self,
to_delete: u64,
last_block: BlockNumber,
) -> ProviderResult<()> {
let start = Instant::now();
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Receipts);
self.truncate(to_delete, Some(last_block))?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
StaticFileSegment::Receipts,
StaticFileProviderOperation::Prune,
Some(start.elapsed()),
);
}
Ok(())
}
/// Prunes the last `to_delete` headers from the data file.
fn prune_header_data(&mut self, to_delete: u64) -> ProviderResult<()> {
let start = Instant::now();
debug_assert!(self.writer.user_header().segment() == StaticFileSegment::Headers);
self.truncate(to_delete, None)?;
if let Some(metrics) = &self.metrics {
metrics.record_segment_operation(
StaticFileSegment::Headers,
StaticFileProviderOperation::Prune,
Some(start.elapsed()),
);
}
Ok(())
}
fn reader(&self) -> StaticFileProvider<N> {
Self::upgrade_provider_to_strong_reference(&self.reader)
}
/// Upgrades a weak reference of [`StaticFileProviderInner`] to a strong reference
/// [`StaticFileProvider`].
///
/// # Panics
///
/// Panics if the parent [`StaticFileProvider`] is fully dropped while the child writer is still
/// active. In reality, it's impossible to detach the [`StaticFileProviderRW`] from the
/// [`StaticFileProvider`].
fn upgrade_provider_to_strong_reference(
provider: &Weak<StaticFileProviderInner<N>>,
) -> StaticFileProvider<N> {
provider.upgrade().map(StaticFileProvider).expect("StaticFileProvider is dropped")
}
/// Helper function to access [`SegmentHeader`].
pub const fn user_header(&self) -> &SegmentHeader {
self.writer.user_header()
}
/// Helper function to access a mutable reference to [`SegmentHeader`].
pub const fn user_header_mut(&mut self) -> &mut SegmentHeader {
self.writer.user_header_mut()
}
/// Helper function to override block range for testing.
#[cfg(any(test, feature = "test-utils"))]
pub const fn set_block_range(&mut self, block_range: std::ops::RangeInclusive<BlockNumber>) {
self.writer.user_header_mut().set_block_range(*block_range.start(), *block_range.end())
}
/// Helper function to override block range for testing.
#[cfg(any(test, feature = "test-utils"))]
pub const fn inner(&mut self) -> &mut NippyJarWriter<SegmentHeader> {
&mut self.writer
}
}
fn create_jar(
segment: StaticFileSegment,
path: &Path,
expected_block_range: SegmentRangeInclusive,
) -> NippyJar<SegmentHeader> {
let mut jar = NippyJar::new(
segment.columns(),
path,
SegmentHeader::new(expected_block_range, None, None, segment),
);
    // Transactions and Receipts already use a compression scheme (zstd dictionary) natively in
    // their encoding.
if segment.is_headers() {
jar = jar.with_lz4();
}
jar
}
// ---- crates/storage/provider/src/providers/static_file/metrics.rs ----
use std::{collections::HashMap, time::Duration};
use itertools::Itertools;
use metrics::{Counter, Gauge, Histogram};
use reth_metrics::Metrics;
use reth_static_file_types::StaticFileSegment;
use strum::{EnumIter, IntoEnumIterator};
/// Metrics for the static file provider.
#[derive(Debug)]
pub struct StaticFileProviderMetrics {
segments: HashMap<StaticFileSegment, StaticFileSegmentMetrics>,
segment_operations: HashMap<
(StaticFileSegment, StaticFileProviderOperation),
StaticFileProviderOperationMetrics,
>,
}
impl Default for StaticFileProviderMetrics {
fn default() -> Self {
Self {
segments: StaticFileSegment::iter()
.map(|segment| {
(
segment,
StaticFileSegmentMetrics::new_with_labels(&[("segment", segment.as_str())]),
)
})
.collect(),
segment_operations: StaticFileSegment::iter()
.cartesian_product(StaticFileProviderOperation::iter())
.map(|(segment, operation)| {
(
(segment, operation),
StaticFileProviderOperationMetrics::new_with_labels(&[
("segment", segment.as_str()),
("operation", operation.as_str()),
]),
)
})
.collect(),
}
}
}
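The `Default` impl above eagerly builds one metrics entry per (segment, operation) pair, which is why the later `expect("segment operation metrics should exist")` lookups are safe. A std-only sketch of that construction, with plain nested loops standing in for itertools' `cartesian_product` and label strings standing in for the metrics handles (all names illustrative):

```rust
use std::collections::HashMap;

/// Build one labeled entry for every (segment, operation) pair, so lookups
/// over the full product can never miss.
fn build_labels(
    segments: &[&'static str],
    ops: &[&'static str],
) -> HashMap<(&'static str, &'static str), String> {
    let mut map = HashMap::new();
    for &seg in segments {
        for &op in ops {
            // Same label shape as `new_with_labels(&[("segment", ..), ("operation", ..)])`.
            map.insert((seg, op), format!("segment={seg},operation={op}"));
        }
    }
    map
}

fn main() {
    let map = build_labels(&["headers", "receipts"], &["append", "prune"]);
    assert_eq!(map.len(), 4); // full cartesian product
    assert_eq!(map[&("headers", "prune")], "segment=headers,operation=prune");
}
```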
impl StaticFileProviderMetrics {
pub(crate) fn record_segment(
&self,
segment: StaticFileSegment,
size: u64,
files: usize,
entries: usize,
) {
self.segments.get(&segment).expect("segment metrics should exist").size.set(size as f64);
self.segments.get(&segment).expect("segment metrics should exist").files.set(files as f64);
self.segments
.get(&segment)
.expect("segment metrics should exist")
.entries
.set(entries as f64);
}
pub(crate) fn record_segment_operation(
&self,
segment: StaticFileSegment,
operation: StaticFileProviderOperation,
duration: Option<Duration>,
) {
self.segment_operations
.get(&(segment, operation))
.expect("segment operation metrics should exist")
.calls_total
.increment(1);
if let Some(duration) = duration {
self.segment_operations
.get(&(segment, operation))
.expect("segment operation metrics should exist")
.write_duration_seconds
.record(duration.as_secs_f64());
}
}
pub(crate) fn record_segment_operations(
&self,
segment: StaticFileSegment,
operation: StaticFileProviderOperation,
count: u64,
duration: Option<Duration>,
) {
self.segment_operations
.get(&(segment, operation))
.expect("segment operation metrics should exist")
.calls_total
.increment(count);
if let Some(duration) = duration {
self.segment_operations
.get(&(segment, operation))
.expect("segment operation metrics should exist")
.write_duration_seconds
.record(duration.as_secs_f64() / count as f64);
}
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, EnumIter)]
pub(crate) enum StaticFileProviderOperation {
InitCursor,
OpenWriter,
Append,
Prune,
IncrementBlock,
CommitWriter,
}
impl StaticFileProviderOperation {
const fn as_str(&self) -> &'static str {
match self {
Self::InitCursor => "init-cursor",
Self::OpenWriter => "open-writer",
Self::Append => "append",
Self::Prune => "prune",
Self::IncrementBlock => "increment-block",
Self::CommitWriter => "commit-writer",
}
}
}
/// Metrics for a specific static file segment.
#[derive(Metrics)]
#[metrics(scope = "static_files.segment")]
pub(crate) struct StaticFileSegmentMetrics {
/// The size of a static file segment
size: Gauge,
/// The number of files for a static file segment
files: Gauge,
/// The number of entries for a static file segment
entries: Gauge,
}
#[derive(Metrics)]
#[metrics(scope = "static_files.jar_provider")]
pub(crate) struct StaticFileProviderOperationMetrics {
/// Total number of static file jar provider operations made.
calls_total: Counter,
/// The time it took to execute the static file jar provider operation that writes data.
write_duration_seconds: Histogram,
}
// ---- crates/storage/provider/src/providers/database/builder.rs ----
//! Helper builder entrypoint to instantiate a [`ProviderFactory`].
//!
//! This also includes general purpose staging types that provide builder style functions that lead
//! up to the intended build target.
use crate::{providers::StaticFileProvider, ProviderFactory};
use reth_db::{
mdbx::{DatabaseArguments, MaxReadTransactionDuration},
open_db_read_only, DatabaseEnv,
};
use reth_db_api::{database_metrics::DatabaseMetrics, Database};
use reth_node_types::{NodeTypes, NodeTypesWithDBAdapter};
use std::{
marker::PhantomData,
path::{Path, PathBuf},
sync::Arc,
};
/// Helper type to create a [`ProviderFactory`].
///
/// This type is the entry point for a stage based builder.
///
/// The intended staging is:
/// 1. Configure the database: [`ProviderFactoryBuilder::db`]
/// 2. Configure the chainspec: `chainspec`
/// 3. Configure the [`StaticFileProvider`]: `static_file`
#[derive(Debug)]
pub struct ProviderFactoryBuilder<N> {
_types: PhantomData<N>,
}
impl<N> ProviderFactoryBuilder<N> {
/// Maps the [`NodeTypes`] of this builder.
pub fn types<T>(self) -> ProviderFactoryBuilder<T> {
ProviderFactoryBuilder::default()
}
/// Configures the database.
pub fn db<DB>(self, db: DB) -> TypesAnd1<N, DB> {
TypesAnd1::new(db)
}
/// Opens the database with the given chainspec and [`ReadOnlyConfig`].
///
/// # Open a monitored instance
///
/// This is recommended when the new read-only instance is used with an active node.
///
/// ```no_run
/// use reth_chainspec::MAINNET;
/// use reth_node_types::NodeTypes;
/// use reth_provider::providers::ProviderFactoryBuilder;
///
/// fn demo<N: NodeTypes<ChainSpec = reth_chainspec::ChainSpec>>() {
/// let provider_factory = ProviderFactoryBuilder::<N>::default()
/// .open_read_only(MAINNET.clone(), "datadir")
/// .unwrap();
/// }
/// ```
///
/// # Open an unmonitored instance
///
/// This is recommended when no changes to the static files are expected (e.g. no active node)
///
/// ```no_run
/// use reth_chainspec::MAINNET;
/// use reth_node_types::NodeTypes;
///
/// use reth_provider::providers::{ProviderFactoryBuilder, ReadOnlyConfig};
///
/// fn demo<N: NodeTypes<ChainSpec = reth_chainspec::ChainSpec>>() {
/// let provider_factory = ProviderFactoryBuilder::<N>::default()
/// .open_read_only(MAINNET.clone(), ReadOnlyConfig::from_datadir("datadir").no_watch())
/// .unwrap();
/// }
/// ```
///
/// # Open an instance with disabled read-transaction timeout
///
/// By default, read transactions are automatically terminated after a timeout to prevent
/// database free list growth. However, if the database is static (no writes occurring), this
/// safety mechanism can be disabled using
/// [`ReadOnlyConfig::disable_long_read_transaction_safety`].
///
/// ```no_run
/// use reth_chainspec::MAINNET;
/// use reth_node_types::NodeTypes;
///
/// use reth_provider::providers::{ProviderFactoryBuilder, ReadOnlyConfig};
///
/// fn demo<N: NodeTypes<ChainSpec = reth_chainspec::ChainSpec>>() {
/// let provider_factory = ProviderFactoryBuilder::<N>::default()
/// .open_read_only(
/// MAINNET.clone(),
/// ReadOnlyConfig::from_datadir("datadir").disable_long_read_transaction_safety(),
/// )
/// .unwrap();
/// }
/// ```
pub fn open_read_only(
self,
chainspec: Arc<N::ChainSpec>,
config: impl Into<ReadOnlyConfig>,
) -> eyre::Result<ProviderFactory<NodeTypesWithDBAdapter<N, Arc<DatabaseEnv>>>>
where
N: NodeTypes,
{
let ReadOnlyConfig { db_dir, db_args, static_files_dir, watch_static_files } =
config.into();
Ok(self
.db(Arc::new(open_db_read_only(db_dir, db_args)?))
.chainspec(chainspec)
.static_file(StaticFileProvider::read_only(static_files_dir, watch_static_files)?)
.build_provider_factory())
}
}
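The staged flow above (`db` → `chainspec` → `static_file` → `build_provider_factory`) is a typestate builder: each step consumes the current stage and returns the next, so the compiler rejects out-of-order or incomplete construction. A minimal self-contained sketch of the idea, with `String` placeholders standing in for the database and chainspec (all names here are illustrative, not reth APIs):

```rust
use std::marker::PhantomData;

// Marker type standing in for a concrete node-types configuration.
struct NodeA;

// Stage 0: only the node types are fixed, no values yet.
struct Builder<N> {
    _types: PhantomData<N>,
}

impl<N> Builder<N> {
    fn new() -> Self {
        Self { _types: PhantomData }
    }

    // Each stage consumes `self` and returns the next stage, so the
    // compiler enforces that values are supplied in order.
    fn db(self, db: String) -> WithDb<N> {
        WithDb { _types: PhantomData, db }
    }
}

struct WithDb<N> {
    _types: PhantomData<N>,
    db: String,
}

impl<N> WithDb<N> {
    fn chainspec(self, spec: String) -> WithDbAndSpec<N> {
        WithDbAndSpec { _types: PhantomData, db: self.db, spec }
    }
}

struct WithDbAndSpec<N> {
    _types: PhantomData<N>,
    db: String,
    spec: String,
}

impl<N> WithDbAndSpec<N> {
    // Only the fully staged type exposes the final build step.
    fn build(self) -> (String, String) {
        (self.db, self.spec)
    }
}

fn main() {
    let (db, spec) = Builder::<NodeA>::new()
        .db("mdbx".to_string())
        .chainspec("mainnet".to_string())
        .build();
    assert_eq!((db.as_str(), spec.as_str()), ("mdbx", "mainnet"));
    println!("{db} {spec}");
}
```

Because every intermediate stage is a distinct type, calling `build` before `chainspec` is a compile error rather than a runtime one.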
impl<N> Default for ProviderFactoryBuilder<N> {
fn default() -> Self {
Self { _types: Default::default() }
}
}
/// Settings for how to open the database and static files.
///
/// The default derivation from a path assumes the path is the datadir:
/// [`ReadOnlyConfig::from_datadir`]
#[derive(Debug, Clone)]
pub struct ReadOnlyConfig {
/// The path to the database directory.
pub db_dir: PathBuf,
/// How to open the database
pub db_args: DatabaseArguments,
/// The path to the static file dir
pub static_files_dir: PathBuf,
/// Whether the static files should be watched for changes.
pub watch_static_files: bool,
}
impl ReadOnlyConfig {
/// Derives the [`ReadOnlyConfig`] from the datadir.
///
/// By default this assumes the following datadir layout:
///
/// ```text
/// -`datadir`
/// |__db
/// |__static_files
/// ```
///
/// By default this watches the static file directory for changes, see also
/// [`StaticFileProvider::read_only`]
pub fn from_datadir(datadir: impl AsRef<Path>) -> Self {
let datadir = datadir.as_ref();
Self::from_dirs(datadir.join("db"), datadir.join("static_files"))
}
/// Disables long-lived read transaction safety guarantees.
///
    /// Caution: Keeping a database transaction open indefinitely can cause the free list to grow
    /// if changes to the database are made.
pub const fn disable_long_read_transaction_safety(mut self) -> Self {
self.db_args.max_read_transaction_duration(Some(MaxReadTransactionDuration::Unbounded));
self
}
/// Derives the [`ReadOnlyConfig`] from the database dir.
///
/// By default this assumes the following datadir layout:
///
    /// ```text
    /// - db
    /// - static_files
    /// ```
///
/// By default this watches the static file directory for changes, see also
/// [`StaticFileProvider::read_only`]
///
/// # Panics
///
/// If the path does not exist
pub fn from_db_dir(db_dir: impl AsRef<Path>) -> Self {
let db_dir = db_dir.as_ref();
let static_files_dir = std::fs::canonicalize(db_dir)
.unwrap()
.parent()
.unwrap()
.to_path_buf()
.join("static_files");
Self::from_dirs(db_dir, static_files_dir)
}
    /// Creates the config for the given paths.
    ///
    /// By default this watches the static file directory for changes, see also
    /// [`StaticFileProvider::read_only`]
pub fn from_dirs(db_dir: impl AsRef<Path>, static_files_dir: impl AsRef<Path>) -> Self {
Self {
static_files_dir: static_files_dir.as_ref().into(),
db_dir: db_dir.as_ref().into(),
db_args: Default::default(),
watch_static_files: true,
}
}
/// Configures the db arguments used when opening the database.
pub fn with_db_args(mut self, db_args: impl Into<DatabaseArguments>) -> Self {
self.db_args = db_args.into();
self
}
/// Configures the db directory.
pub fn with_db_dir(mut self, db_dir: impl Into<PathBuf>) -> Self {
self.db_dir = db_dir.into();
self
}
/// Configures the static file directory.
pub fn with_static_file_dir(mut self, static_file_dir: impl Into<PathBuf>) -> Self {
self.static_files_dir = static_file_dir.into();
self
}
    /// Whether the static file directory should be watched for changes, see also
    /// [`StaticFileProvider::read_only`]
pub const fn set_watch_static_files(&mut self, watch_static_files: bool) {
self.watch_static_files = watch_static_files;
}
/// Don't watch the static files for changes.
///
/// This is only recommended if this is used without a running node instance that modifies
/// static files.
pub const fn no_watch(mut self) -> Self {
self.set_watch_static_files(false);
self
}
}
impl<T> From<T> for ReadOnlyConfig
where
T: AsRef<Path>,
{
fn from(value: T) -> Self {
Self::from_datadir(value.as_ref())
}
}
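The blanket `impl<T: AsRef<Path>> From<T> for ReadOnlyConfig` above is what lets callers pass a bare `"datadir"` string wherever `impl Into<ReadOnlyConfig>` is accepted (as `open_read_only` does). A reduced sketch of the same pattern (hypothetical `Config`/`open` names, not reth types):

```rust
use std::path::{Path, PathBuf};

#[derive(Debug, PartialEq)]
struct Config {
    dir: PathBuf,
}

impl Config {
    // Interpret a path as a datadir and derive the db dir from it.
    fn from_datadir(datadir: impl AsRef<Path>) -> Self {
        Config { dir: datadir.as_ref().join("db") }
    }
}

// Blanket conversion: anything path-like (&str, String, PathBuf, ...)
// becomes a Config via the datadir interpretation. This is coherent
// because Config itself does not implement AsRef<Path>.
impl<T: AsRef<Path>> From<T> for Config {
    fn from(value: T) -> Self {
        Config::from_datadir(value.as_ref())
    }
}

// An API accepting `impl Into<Config>` now takes plain strings too.
fn open(config: impl Into<Config>) -> Config {
    config.into()
}

fn main() {
    let c = open("datadir");
    assert_eq!(c.dir, PathBuf::from("datadir").join("db"));
    println!("{:?}", c.dir);
}
```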
/// This is a staging type that contains the configured types and _one_ value.
#[derive(Debug)]
pub struct TypesAnd1<N, Val1> {
_types: PhantomData<N>,
val_1: Val1,
}
impl<N, Val1> TypesAnd1<N, Val1> {
/// Creates a new instance with the given types and one value.
pub fn new(val_1: Val1) -> Self {
Self { _types: Default::default(), val_1 }
}
/// Configures the chainspec.
pub fn chainspec<C>(self, chainspec: Arc<C>) -> TypesAnd2<N, Val1, Arc<C>> {
TypesAnd2::new(self.val_1, chainspec)
}
}
/// This is a staging type that contains the configured types and _two_ values.
#[derive(Debug)]
pub struct TypesAnd2<N, Val1, Val2> {
_types: PhantomData<N>,
val_1: Val1,
val_2: Val2,
}
impl<N, Val1, Val2> TypesAnd2<N, Val1, Val2> {
/// Creates a new instance with the given types and two values.
pub fn new(val_1: Val1, val_2: Val2) -> Self {
Self { _types: Default::default(), val_1, val_2 }
}
/// Returns the first value.
pub const fn val_1(&self) -> &Val1 {
&self.val_1
}
/// Returns the second value.
pub const fn val_2(&self) -> &Val2 {
&self.val_2
}
/// Configures the [`StaticFileProvider`].
pub fn static_file(
self,
static_file_provider: StaticFileProvider<N::Primitives>,
) -> TypesAnd3<N, Val1, Val2, StaticFileProvider<N::Primitives>>
where
N: NodeTypes,
{
TypesAnd3::new(self.val_1, self.val_2, static_file_provider)
}
}
/// This is a staging type that contains the configured types and _three_ values.
#[derive(Debug)]
pub struct TypesAnd3<N, Val1, Val2, Val3> {
_types: PhantomData<N>,
val_1: Val1,
val_2: Val2,
val_3: Val3,
}
impl<N, Val1, Val2, Val3> TypesAnd3<N, Val1, Val2, Val3> {
/// Creates a new instance with the given types and three values.
pub fn new(val_1: Val1, val_2: Val2, val_3: Val3) -> Self {
Self { _types: Default::default(), val_1, val_2, val_3 }
}
}
impl<N, DB> TypesAnd3<N, DB, Arc<N::ChainSpec>, StaticFileProvider<N::Primitives>>
where
N: NodeTypes,
DB: Database + DatabaseMetrics + Clone + Unpin + 'static,
{
/// Creates the [`ProviderFactory`].
pub fn build_provider_factory(self) -> ProviderFactory<NodeTypesWithDBAdapter<N, DB>> {
let Self { _types, val_1, val_2, val_3 } = self;
ProviderFactory::new(val_1, val_2, val_3)
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/database/chain.rs | crates/storage/provider/src/providers/database/chain.rs | use crate::{providers::NodeTypesForProvider, DatabaseProvider};
use reth_db_api::transaction::{DbTx, DbTxMut};
use reth_node_types::FullNodePrimitives;
use reth_primitives_traits::{FullBlockHeader, FullSignedTx};
use reth_storage_api::{ChainStorageReader, ChainStorageWriter, EthStorage};
/// Trait that provides access to implementations of [`ChainStorage`]
pub trait ChainStorage<Primitives: FullNodePrimitives>: Send + Sync {
/// Provides access to the chain reader.
fn reader<TX, Types>(&self) -> impl ChainStorageReader<DatabaseProvider<TX, Types>, Primitives>
where
TX: DbTx + 'static,
Types: NodeTypesForProvider<Primitives = Primitives>;
/// Provides access to the chain writer.
fn writer<TX, Types>(&self) -> impl ChainStorageWriter<DatabaseProvider<TX, Types>, Primitives>
where
TX: DbTxMut + DbTx + 'static,
Types: NodeTypesForProvider<Primitives = Primitives>;
}
impl<N, T, H> ChainStorage<N> for EthStorage<T, H>
where
T: FullSignedTx,
H: FullBlockHeader,
N: FullNodePrimitives<
Block = alloy_consensus::Block<T, H>,
BlockHeader = H,
BlockBody = alloy_consensus::BlockBody<T, H>,
SignedTx = T,
>,
{
fn reader<TX, Types>(&self) -> impl ChainStorageReader<DatabaseProvider<TX, Types>, N>
where
TX: DbTx + 'static,
Types: NodeTypesForProvider<Primitives = N>,
{
self
}
fn writer<TX, Types>(&self) -> impl ChainStorageWriter<DatabaseProvider<TX, Types>, N>
where
TX: DbTxMut + DbTx + 'static,
Types: NodeTypesForProvider<Primitives = N>,
{
self
}
}
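`ChainStorage` returns `impl ChainStorageReader<…>` / `impl ChainStorageWriter<…>` from trait methods, i.e. return-position `impl Trait` in traits (stable since Rust 1.75), and `EthStorage` satisfies both by simply returning `self`. A minimal sketch of that shape (toy `Reader`/`Storage` traits, not the reth ones):

```rust
// A capability trait, analogous to ChainStorageReader.
trait Reader {
    fn read(&self) -> u64;
}

// A storage trait whose method returns "some reader" without naming
// the concrete type -- return-position impl Trait in a trait.
trait Storage {
    fn reader(&self) -> impl Reader + '_;
}

struct EthLike(u64);

// The storage's own reference type acts as the reader.
impl Reader for &EthLike {
    fn read(&self) -> u64 {
        self.0
    }
}

impl Storage for EthLike {
    fn reader(&self) -> impl Reader + '_ {
        // Mirrors EthStorage returning `self` above.
        self
    }
}

fn main() {
    let s = EthLike(7);
    assert_eq!(s.reader().read(), 7);
    println!("{}", s.reader().read());
}
```

Each implementor picks its own concrete reader type; callers only see the trait bound, which keeps the storage abstraction object-free and allocation-free.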
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/providers/database/mod.rs | crates/storage/provider/src/providers/database/mod.rs | use crate::{
providers::{state::latest::LatestStateProvider, StaticFileProvider},
to_range,
traits::{BlockSource, ReceiptProvider},
BlockHashReader, BlockNumReader, BlockReader, ChainSpecProvider, DatabaseProviderFactory,
HashedPostStateProvider, HeaderProvider, HeaderSyncGapProvider, ProviderError,
PruneCheckpointReader, StageCheckpointReader, StateProviderBox, StaticFileProviderFactory,
TransactionVariant, TransactionsProvider,
};
use alloy_consensus::transaction::TransactionMeta;
use alloy_eips::BlockHashOrNumber;
use alloy_primitives::{Address, BlockHash, BlockNumber, TxHash, TxNumber, B256, U256};
use core::fmt;
use reth_chainspec::ChainInfo;
use reth_db::{init_db, mdbx::DatabaseArguments, DatabaseEnv};
use reth_db_api::{database::Database, models::StoredBlockBodyIndices};
use reth_errors::{RethError, RethResult};
use reth_node_types::{
BlockTy, HeaderTy, NodeTypes, NodeTypesWithDB, NodeTypesWithDBAdapter, ReceiptTy, TxTy,
};
use reth_primitives_traits::{RecoveredBlock, SealedHeader};
use reth_prune_types::{PruneCheckpoint, PruneModes, PruneSegment};
use reth_stages_types::{StageCheckpoint, StageId};
use reth_static_file_types::StaticFileSegment;
use reth_storage_api::{
BlockBodyIndicesProvider, NodePrimitivesProvider, TryIntoHistoricalStateProvider,
};
use reth_storage_errors::provider::ProviderResult;
use reth_trie::HashedPostState;
use revm_database::BundleState;
use std::{
ops::{RangeBounds, RangeInclusive},
path::Path,
sync::Arc,
};
use tracing::trace;
mod provider;
pub use provider::{DatabaseProvider, DatabaseProviderRO, DatabaseProviderRW};
use super::ProviderNodeTypes;
use reth_trie::KeccakKeyHasher;
mod builder;
pub use builder::{ProviderFactoryBuilder, ReadOnlyConfig};
mod metrics;
mod chain;
pub use chain::*;
/// A common provider that fetches data from a database or static file.
///
/// This provider implements most provider and provider factory traits.
pub struct ProviderFactory<N: NodeTypesWithDB> {
/// Database instance
db: N::DB,
/// Chain spec
chain_spec: Arc<N::ChainSpec>,
/// Static File Provider
static_file_provider: StaticFileProvider<N::Primitives>,
/// Optional pruning configuration
prune_modes: PruneModes,
/// The node storage handler.
storage: Arc<N::Storage>,
}
impl<N: NodeTypes> ProviderFactory<NodeTypesWithDBAdapter<N, Arc<DatabaseEnv>>> {
/// Instantiates the builder for this type
pub fn builder() -> ProviderFactoryBuilder<N> {
ProviderFactoryBuilder::default()
}
}
impl<N: NodeTypesWithDB> ProviderFactory<N> {
/// Create new database provider factory.
pub fn new(
db: N::DB,
chain_spec: Arc<N::ChainSpec>,
static_file_provider: StaticFileProvider<N::Primitives>,
) -> Self {
Self {
db,
chain_spec,
static_file_provider,
prune_modes: PruneModes::none(),
storage: Default::default(),
}
}
/// Enables metrics on the static file provider.
pub fn with_static_files_metrics(mut self) -> Self {
self.static_file_provider = self.static_file_provider.with_metrics();
self
}
/// Sets the pruning configuration for an existing [`ProviderFactory`].
pub fn with_prune_modes(mut self, prune_modes: PruneModes) -> Self {
self.prune_modes = prune_modes;
self
}
/// Returns reference to the underlying database.
pub const fn db_ref(&self) -> &N::DB {
&self.db
}
#[cfg(any(test, feature = "test-utils"))]
/// Consumes Self and returns DB
pub fn into_db(self) -> N::DB {
self.db
}
}
impl<N: NodeTypesWithDB<DB = Arc<DatabaseEnv>>> ProviderFactory<N> {
/// Create new database provider by passing a path. [`ProviderFactory`] will own the database
/// instance.
pub fn new_with_database_path<P: AsRef<Path>>(
path: P,
chain_spec: Arc<N::ChainSpec>,
args: DatabaseArguments,
static_file_provider: StaticFileProvider<N::Primitives>,
) -> RethResult<Self> {
Ok(Self {
db: Arc::new(init_db(path, args).map_err(RethError::msg)?),
chain_spec,
static_file_provider,
prune_modes: PruneModes::none(),
storage: Default::default(),
})
}
}
impl<N: ProviderNodeTypes> ProviderFactory<N> {
/// Returns a provider with a created `DbTx` inside, which allows fetching data from the
/// database using different types of providers. Example: [`HeaderProvider`]
/// [`BlockHashReader`]. This may fail if the inner read database transaction fails to open.
///
/// This sets the [`PruneModes`] to [`None`], because they should only be relevant for writing
/// data.
#[track_caller]
pub fn provider(&self) -> ProviderResult<DatabaseProviderRO<N::DB, N>> {
Ok(DatabaseProvider::new(
self.db.tx()?,
self.chain_spec.clone(),
self.static_file_provider.clone(),
self.prune_modes.clone(),
self.storage.clone(),
))
}
/// Returns a provider with a created `DbTxMut` inside, which allows fetching and updating
/// data from the database using different types of providers. Example: [`HeaderProvider`]
/// [`BlockHashReader`]. This may fail if the inner read/write database transaction fails to
/// open.
#[track_caller]
pub fn provider_rw(&self) -> ProviderResult<DatabaseProviderRW<N::DB, N>> {
Ok(DatabaseProviderRW(DatabaseProvider::new_rw(
self.db.tx_mut()?,
self.chain_spec.clone(),
self.static_file_provider.clone(),
self.prune_modes.clone(),
self.storage.clone(),
)))
}
/// State provider for latest block
#[track_caller]
pub fn latest(&self) -> ProviderResult<StateProviderBox> {
trace!(target: "providers::db", "Returning latest state provider");
Ok(Box::new(LatestStateProvider::new(self.database_provider_ro()?)))
}
/// Storage provider for state at that given block
pub fn history_by_block_number(
&self,
block_number: BlockNumber,
) -> ProviderResult<StateProviderBox> {
let state_provider = self.provider()?.try_into_history_at_block(block_number)?;
trace!(target: "providers::db", ?block_number, "Returning historical state provider for block number");
Ok(state_provider)
}
/// Storage provider for state at that given block hash
pub fn history_by_block_hash(&self, block_hash: BlockHash) -> ProviderResult<StateProviderBox> {
let provider = self.provider()?;
let block_number = provider
.block_number(block_hash)?
.ok_or(ProviderError::BlockHashNotFound(block_hash))?;
let state_provider = provider.try_into_history_at_block(block_number)?;
trace!(target: "providers::db", ?block_number, %block_hash, "Returning historical state provider for block hash");
Ok(state_provider)
}
}
impl<N: NodeTypesWithDB> NodePrimitivesProvider for ProviderFactory<N> {
type Primitives = N::Primitives;
}
impl<N: ProviderNodeTypes> DatabaseProviderFactory for ProviderFactory<N> {
type DB = N::DB;
type Provider = DatabaseProvider<<N::DB as Database>::TX, N>;
type ProviderRW = DatabaseProvider<<N::DB as Database>::TXMut, N>;
fn database_provider_ro(&self) -> ProviderResult<Self::Provider> {
self.provider()
}
fn database_provider_rw(&self) -> ProviderResult<Self::ProviderRW> {
self.provider_rw().map(|provider| provider.0)
}
}
impl<N: NodeTypesWithDB> StaticFileProviderFactory for ProviderFactory<N> {
/// Returns static file provider
fn static_file_provider(&self) -> StaticFileProvider<Self::Primitives> {
self.static_file_provider.clone()
}
}
impl<N: ProviderNodeTypes> HeaderSyncGapProvider for ProviderFactory<N> {
type Header = HeaderTy<N>;
fn local_tip_header(
&self,
highest_uninterrupted_block: BlockNumber,
) -> ProviderResult<SealedHeader<Self::Header>> {
self.provider()?.local_tip_header(highest_uninterrupted_block)
}
}
impl<N: ProviderNodeTypes> HeaderProvider for ProviderFactory<N> {
type Header = HeaderTy<N>;
fn header(&self, block_hash: &BlockHash) -> ProviderResult<Option<Self::Header>> {
self.provider()?.header(block_hash)
}
fn header_by_number(&self, num: BlockNumber) -> ProviderResult<Option<Self::Header>> {
self.static_file_provider.get_with_static_file_or_database(
StaticFileSegment::Headers,
num,
|static_file| static_file.header_by_number(num),
|| self.provider()?.header_by_number(num),
)
}
fn header_td(&self, hash: &BlockHash) -> ProviderResult<Option<U256>> {
self.provider()?.header_td(hash)
}
fn header_td_by_number(&self, number: BlockNumber) -> ProviderResult<Option<U256>> {
self.provider()?.header_td_by_number(number)
}
fn headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Self::Header>> {
self.static_file_provider.get_range_with_static_file_or_database(
StaticFileSegment::Headers,
to_range(range),
|static_file, range, _| static_file.headers_range(range),
|range, _| self.provider()?.headers_range(range),
|_| true,
)
}
fn sealed_header(
&self,
number: BlockNumber,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
self.static_file_provider.get_with_static_file_or_database(
StaticFileSegment::Headers,
number,
|static_file| static_file.sealed_header(number),
|| self.provider()?.sealed_header(number),
)
}
fn sealed_headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
self.sealed_headers_while(range, |_| true)
}
fn sealed_headers_while(
&self,
range: impl RangeBounds<BlockNumber>,
predicate: impl FnMut(&SealedHeader<Self::Header>) -> bool,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
self.static_file_provider.get_range_with_static_file_or_database(
StaticFileSegment::Headers,
to_range(range),
|static_file, range, predicate| static_file.sealed_headers_while(range, predicate),
|range, predicate| self.provider()?.sealed_headers_while(range, predicate),
predicate,
)
}
}
impl<N: ProviderNodeTypes> BlockHashReader for ProviderFactory<N> {
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
self.static_file_provider.get_with_static_file_or_database(
StaticFileSegment::Headers,
number,
|static_file| static_file.block_hash(number),
|| self.provider()?.block_hash(number),
)
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
self.static_file_provider.get_range_with_static_file_or_database(
StaticFileSegment::Headers,
start..end,
|static_file, range, _| static_file.canonical_hashes_range(range.start, range.end),
|range, _| self.provider()?.canonical_hashes_range(range.start, range.end),
|_| true,
)
}
}
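Several of the reads above go through `get_with_static_file_or_database`, which consults the immutable static-file segment first and only falls back to the database closure when the segment does not cover the requested height. A simplified sketch of that dispatch (hypothetical signature; the real helper also handles errors and segment metadata):

```rust
// Try the cheap, frozen primary source when it covers the requested
// number; otherwise evaluate the fallback (database) closure.
fn get_with_primary_or_fallback<T>(
    highest_in_primary: u64,
    number: u64,
    primary: impl FnOnce() -> Option<T>,
    fallback: impl FnOnce() -> Option<T>,
) -> Option<T> {
    if number <= highest_in_primary {
        // The static segment is authoritative for this height.
        primary()
    } else {
        fallback()
    }
}

fn main() {
    let static_files = vec!["h0", "h1", "h2"]; // heights 0..=2 are frozen
    let db = vec!["h3", "h4"]; // newer heights still live in the database

    let lookup = |n: u64| {
        get_with_primary_or_fallback(
            2,
            n,
            || static_files.get(n as usize).copied(),
            // Only evaluated for n > 2, so the subtraction is safe here.
            || db.get(n as usize - 3).copied(),
        )
    };

    assert_eq!(lookup(1), Some("h1"));
    assert_eq!(lookup(4), Some("h4"));
    println!("{:?} {:?}", lookup(1), lookup(4));
}
```

Taking closures keeps both lookups lazy: the database transaction path is never touched when the static file can answer.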
impl<N: ProviderNodeTypes> BlockNumReader for ProviderFactory<N> {
fn chain_info(&self) -> ProviderResult<ChainInfo> {
self.provider()?.chain_info()
}
fn best_block_number(&self) -> ProviderResult<BlockNumber> {
self.provider()?.best_block_number()
}
fn last_block_number(&self) -> ProviderResult<BlockNumber> {
self.provider()?.last_block_number()
}
fn earliest_block_number(&self) -> ProviderResult<BlockNumber> {
// earliest history height tracks the lowest block number that has __not__ been expired, in
// other words, the first/earliest available block.
Ok(self.static_file_provider.earliest_history_height())
}
fn block_number(&self, hash: B256) -> ProviderResult<Option<BlockNumber>> {
self.provider()?.block_number(hash)
}
}
impl<N: ProviderNodeTypes> BlockReader for ProviderFactory<N> {
type Block = BlockTy<N>;
fn find_block_by_hash(
&self,
hash: B256,
source: BlockSource,
) -> ProviderResult<Option<Self::Block>> {
self.provider()?.find_block_by_hash(hash, source)
}
fn block(&self, id: BlockHashOrNumber) -> ProviderResult<Option<Self::Block>> {
self.provider()?.block(id)
}
fn pending_block(&self) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
self.provider()?.pending_block()
}
fn pending_block_and_receipts(
&self,
) -> ProviderResult<Option<(RecoveredBlock<Self::Block>, Vec<Self::Receipt>)>> {
self.provider()?.pending_block_and_receipts()
}
fn recovered_block(
&self,
id: BlockHashOrNumber,
transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
self.provider()?.recovered_block(id, transaction_kind)
}
fn sealed_block_with_senders(
&self,
id: BlockHashOrNumber,
transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
self.provider()?.sealed_block_with_senders(id, transaction_kind)
}
fn block_range(&self, range: RangeInclusive<BlockNumber>) -> ProviderResult<Vec<Self::Block>> {
self.provider()?.block_range(range)
}
fn block_with_senders_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
self.provider()?.block_with_senders_range(range)
}
fn recovered_block_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
self.provider()?.recovered_block_range(range)
}
}
impl<N: ProviderNodeTypes> TransactionsProvider for ProviderFactory<N> {
type Transaction = TxTy<N>;
fn transaction_id(&self, tx_hash: TxHash) -> ProviderResult<Option<TxNumber>> {
self.provider()?.transaction_id(tx_hash)
}
fn transaction_by_id(&self, id: TxNumber) -> ProviderResult<Option<Self::Transaction>> {
self.static_file_provider.get_with_static_file_or_database(
StaticFileSegment::Transactions,
id,
|static_file| static_file.transaction_by_id(id),
|| self.provider()?.transaction_by_id(id),
)
}
fn transaction_by_id_unhashed(
&self,
id: TxNumber,
) -> ProviderResult<Option<Self::Transaction>> {
self.static_file_provider.get_with_static_file_or_database(
StaticFileSegment::Transactions,
id,
|static_file| static_file.transaction_by_id_unhashed(id),
|| self.provider()?.transaction_by_id_unhashed(id),
)
}
fn transaction_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Transaction>> {
self.provider()?.transaction_by_hash(hash)
}
fn transaction_by_hash_with_meta(
&self,
tx_hash: TxHash,
) -> ProviderResult<Option<(Self::Transaction, TransactionMeta)>> {
self.provider()?.transaction_by_hash_with_meta(tx_hash)
}
fn transaction_block(&self, id: TxNumber) -> ProviderResult<Option<BlockNumber>> {
self.provider()?.transaction_block(id)
}
fn transactions_by_block(
&self,
id: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Transaction>>> {
self.provider()?.transactions_by_block(id)
}
fn transactions_by_block_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Transaction>>> {
self.provider()?.transactions_by_block_range(range)
}
fn transactions_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Transaction>> {
self.provider()?.transactions_by_tx_range(range)
}
fn senders_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Address>> {
self.provider()?.senders_by_tx_range(range)
}
fn transaction_sender(&self, id: TxNumber) -> ProviderResult<Option<Address>> {
self.provider()?.transaction_sender(id)
}
}
impl<N: ProviderNodeTypes> ReceiptProvider for ProviderFactory<N> {
type Receipt = ReceiptTy<N>;
fn receipt(&self, id: TxNumber) -> ProviderResult<Option<Self::Receipt>> {
self.static_file_provider.get_with_static_file_or_database(
StaticFileSegment::Receipts,
id,
|static_file| static_file.receipt(id),
|| self.provider()?.receipt(id),
)
}
fn receipt_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Receipt>> {
self.provider()?.receipt_by_hash(hash)
}
fn receipts_by_block(
&self,
block: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Receipt>>> {
self.provider()?.receipts_by_block(block)
}
fn receipts_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Receipt>> {
self.static_file_provider.get_range_with_static_file_or_database(
StaticFileSegment::Receipts,
to_range(range),
|static_file, range, _| static_file.receipts_by_tx_range(range),
|range, _| self.provider()?.receipts_by_tx_range(range),
|_| true,
)
}
fn receipts_by_block_range(
&self,
block_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Receipt>>> {
self.provider()?.receipts_by_block_range(block_range)
}
}
impl<N: ProviderNodeTypes> BlockBodyIndicesProvider for ProviderFactory<N> {
fn block_body_indices(
&self,
number: BlockNumber,
) -> ProviderResult<Option<StoredBlockBodyIndices>> {
self.provider()?.block_body_indices(number)
}
fn block_body_indices_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<StoredBlockBodyIndices>> {
self.provider()?.block_body_indices_range(range)
}
}
impl<N: ProviderNodeTypes> StageCheckpointReader for ProviderFactory<N> {
fn get_stage_checkpoint(&self, id: StageId) -> ProviderResult<Option<StageCheckpoint>> {
self.provider()?.get_stage_checkpoint(id)
}
fn get_stage_checkpoint_progress(&self, id: StageId) -> ProviderResult<Option<Vec<u8>>> {
self.provider()?.get_stage_checkpoint_progress(id)
}
fn get_all_checkpoints(&self) -> ProviderResult<Vec<(String, StageCheckpoint)>> {
self.provider()?.get_all_checkpoints()
}
}
impl<N: NodeTypesWithDB> ChainSpecProvider for ProviderFactory<N> {
type ChainSpec = N::ChainSpec;
fn chain_spec(&self) -> Arc<N::ChainSpec> {
self.chain_spec.clone()
}
}
impl<N: ProviderNodeTypes> PruneCheckpointReader for ProviderFactory<N> {
fn get_prune_checkpoint(
&self,
segment: PruneSegment,
) -> ProviderResult<Option<PruneCheckpoint>> {
self.provider()?.get_prune_checkpoint(segment)
}
fn get_prune_checkpoints(&self) -> ProviderResult<Vec<(PruneSegment, PruneCheckpoint)>> {
self.provider()?.get_prune_checkpoints()
}
}
impl<N: ProviderNodeTypes> HashedPostStateProvider for ProviderFactory<N> {
fn hashed_post_state(&self, bundle_state: &BundleState) -> HashedPostState {
HashedPostState::from_bundle_state::<KeccakKeyHasher>(bundle_state.state())
}
}
impl<N> fmt::Debug for ProviderFactory<N>
where
N: NodeTypesWithDB<DB: fmt::Debug, ChainSpec: fmt::Debug, Storage: fmt::Debug>,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { db, chain_spec, static_file_provider, prune_modes, storage } = self;
f.debug_struct("ProviderFactory")
.field("db", &db)
.field("chain_spec", &chain_spec)
.field("static_file_provider", &static_file_provider)
.field("prune_modes", &prune_modes)
.field("storage", &storage)
.finish()
}
}
impl<N: NodeTypesWithDB> Clone for ProviderFactory<N> {
fn clone(&self) -> Self {
Self {
db: self.db.clone(),
chain_spec: self.chain_spec.clone(),
static_file_provider: self.static_file_provider.clone(),
prune_modes: self.prune_modes.clone(),
storage: self.storage.clone(),
}
}
}
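`Clone` is implemented by hand here because `#[derive(Clone)]` on a generic struct would add an `N: Clone` bound on the node-types parameter, even though only the field values (mostly `Arc`s) need cloning. A minimal illustration of why the manual impl is needed (toy types, not reth's):

```rust
use std::marker::PhantomData;
use std::sync::Arc;

// A marker type that is deliberately not Clone, standing in for a
// node-types parameter that is never instantiated as a value.
struct NotClone;

struct Factory<N> {
    shared: Arc<String>,
    _types: PhantomData<N>,
}

// Manual impl: clones the fields without requiring `N: Clone`.
// `#[derive(Clone)]` would have emitted `impl<N: Clone> Clone ...`,
// making `Factory::<NotClone>::clone` unusable.
impl<N> Clone for Factory<N> {
    fn clone(&self) -> Self {
        Factory { shared: self.shared.clone(), _types: PhantomData }
    }
}

fn main() {
    let f = Factory::<NotClone> {
        shared: Arc::new("chainspec".to_string()),
        _types: PhantomData,
    };
    let g = f.clone(); // works even though NotClone is not Clone
    assert_eq!(*g.shared, "chainspec");
    assert_eq!(Arc::strong_count(&f.shared), 2);
    println!("{}", g.shared);
}
```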
#[cfg(test)]
mod tests {
use super::*;
use crate::{
providers::{StaticFileProvider, StaticFileWriter},
test_utils::{blocks::TEST_BLOCK, create_test_provider_factory, MockNodeTypesWithDB},
BlockHashReader, BlockNumReader, BlockWriter, DBProvider, HeaderSyncGapProvider,
StorageLocation, TransactionsProvider,
};
use alloy_primitives::{TxNumber, B256, U256};
use assert_matches::assert_matches;
use reth_chainspec::ChainSpecBuilder;
use reth_db::{
mdbx::DatabaseArguments,
test_utils::{create_test_static_files_dir, ERROR_TEMPDIR},
};
use reth_db_api::tables;
use reth_primitives_traits::SignerRecoverable;
use reth_prune_types::{PruneMode, PruneModes};
use reth_storage_errors::provider::ProviderError;
use reth_testing_utils::generators::{self, random_block, random_header, BlockParams};
use std::{ops::RangeInclusive, sync::Arc};
#[test]
fn common_history_provider() {
let factory = create_test_provider_factory();
let _ = factory.latest();
}
#[test]
fn default_chain_info() {
let factory = create_test_provider_factory();
let provider = factory.provider().unwrap();
let chain_info = provider.chain_info().expect("should be ok");
assert_eq!(chain_info.best_number, 0);
assert_eq!(chain_info.best_hash, B256::ZERO);
}
#[test]
fn provider_flow() {
let factory = create_test_provider_factory();
let provider = factory.provider().unwrap();
provider.block_hash(0).unwrap();
let provider_rw = factory.provider_rw().unwrap();
provider_rw.block_hash(0).unwrap();
provider.block_hash(0).unwrap();
}
#[test]
fn provider_factory_with_database_path() {
let chain_spec = ChainSpecBuilder::mainnet().build();
let (_static_dir, static_dir_path) = create_test_static_files_dir();
let factory = ProviderFactory::<MockNodeTypesWithDB<DatabaseEnv>>::new_with_database_path(
tempfile::TempDir::new().expect(ERROR_TEMPDIR).keep(),
Arc::new(chain_spec),
DatabaseArguments::new(Default::default()),
StaticFileProvider::read_write(static_dir_path).unwrap(),
)
.unwrap();
let provider = factory.provider().unwrap();
provider.block_hash(0).unwrap();
let provider_rw = factory.provider_rw().unwrap();
provider_rw.block_hash(0).unwrap();
provider.block_hash(0).unwrap();
}
#[test]
fn insert_block_with_prune_modes() {
let factory = create_test_provider_factory();
let block = TEST_BLOCK.clone();
{
let provider = factory.provider_rw().unwrap();
assert_matches!(
provider
.insert_block(block.clone().try_recover().unwrap(), StorageLocation::Database),
Ok(_)
);
assert_matches!(
provider.transaction_sender(0), Ok(Some(sender))
if sender == block.body().transactions[0].recover_signer().unwrap()
);
assert_matches!(
provider.transaction_id(*block.body().transactions[0].tx_hash()),
Ok(Some(0))
);
}
{
let prune_modes = PruneModes {
sender_recovery: Some(PruneMode::Full),
transaction_lookup: Some(PruneMode::Full),
..PruneModes::none()
};
let provider = factory.with_prune_modes(prune_modes).provider_rw().unwrap();
assert_matches!(
provider
.insert_block(block.clone().try_recover().unwrap(), StorageLocation::Database),
Ok(_)
);
assert_matches!(provider.transaction_sender(0), Ok(None));
assert_matches!(
provider.transaction_id(*block.body().transactions[0].tx_hash()),
Ok(None)
);
}
}
#[test]
fn take_block_transaction_range_recover_senders() {
let factory = create_test_provider_factory();
let mut rng = generators::rng();
let block =
random_block(&mut rng, 0, BlockParams { tx_count: Some(3), ..Default::default() });
let tx_ranges: Vec<RangeInclusive<TxNumber>> = vec![0..=0, 1..=1, 2..=2, 0..=1, 1..=2];
for range in tx_ranges {
let provider = factory.provider_rw().unwrap();
assert_matches!(
provider
.insert_block(block.clone().try_recover().unwrap(), StorageLocation::Database),
Ok(_)
);
let senders = provider.take::<tables::TransactionSenders>(range.clone());
assert_eq!(
senders,
Ok(range
.clone()
.map(|tx_number| (
tx_number,
block.body().transactions[tx_number as usize].recover_signer().unwrap()
))
.collect())
);
let db_senders = provider.senders_by_tx_range(range);
assert!(matches!(db_senders, Ok(ref v) if v.is_empty()));
}
}
#[test]
fn header_sync_gap_lookup() {
let factory = create_test_provider_factory();
let provider = factory.provider_rw().unwrap();
let mut rng = generators::rng();
// Genesis
let checkpoint = 0;
let head = random_header(&mut rng, 0, None);
// Empty database
assert_matches!(
provider.local_tip_header(checkpoint),
Err(ProviderError::HeaderNotFound(block_number))
if block_number.as_number().unwrap() == checkpoint
);
// Checkpoint and no gap
let static_file_provider = provider.static_file_provider();
let mut static_file_writer =
static_file_provider.latest_writer(StaticFileSegment::Headers).unwrap();
static_file_writer.append_header(head.header(), U256::ZERO, &head.hash()).unwrap();
static_file_writer.commit().unwrap();
drop(static_file_writer);
let local_head = provider.local_tip_header(checkpoint).unwrap();
assert_eq!(local_head, head);
}
}

// crates/storage/provider/src/providers/database/metrics.rs

use metrics::Histogram;
use reth_metrics::Metrics;
use std::time::{Duration, Instant};
#[derive(Debug)]
pub(crate) struct DurationsRecorder {
start: Instant,
current_metrics: DatabaseProviderMetrics,
pub(crate) actions: Vec<(Action, Duration)>,
latest: Option<Duration>,
}
impl Default for DurationsRecorder {
fn default() -> Self {
Self {
start: Instant::now(),
actions: Vec::new(),
latest: None,
current_metrics: DatabaseProviderMetrics::default(),
}
}
}
impl DurationsRecorder {
    /// Records the duration since the last record, saves it for future logging, and immediately
    /// reports it as a metric labeled with the given `action`.
pub(crate) fn record_relative(&mut self, action: Action) {
let elapsed = self.start.elapsed();
let duration = elapsed - self.latest.unwrap_or_default();
self.actions.push((action, duration));
self.current_metrics.record_duration(action, duration);
self.latest = Some(elapsed);
}
}
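The relative-recording logic above can be sketched in isolation. This is a hypothetical, self-contained miniature (not the crate's `DurationsRecorder`, with the metrics reporting omitted) showing how each action is charged only the time elapsed since the previous record:

```rust
use std::time::Duration;

// Hypothetical stand-in for `DurationsRecorder`: converts readings of a
// cumulative clock into per-action deltas.
struct Recorder {
    latest: Option<Duration>,
    actions: Vec<(&'static str, Duration)>,
}

impl Recorder {
    fn new() -> Self {
        Self { latest: None, actions: Vec::new() }
    }

    /// `elapsed` is the total time since the recorder was created.
    fn record_relative(&mut self, action: &'static str, elapsed: Duration) {
        // Charge this action only for the time since the previous record.
        let duration = elapsed - self.latest.unwrap_or_default();
        self.actions.push((action, duration));
        self.latest = Some(elapsed);
    }
}

fn main() {
    let mut rec = Recorder::new();
    rec.record_relative("insert_block", Duration::from_millis(5));
    rec.record_relative("insert_state", Duration::from_millis(12));
    assert_eq!(rec.actions[0].1, Duration::from_millis(5));
    // Only the 7ms that elapsed after the first record are attributed here.
    assert_eq!(rec.actions[1].1, Duration::from_millis(7));
}
```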
#[derive(Debug, Copy, Clone)]
pub(crate) enum Action {
InsertStorageHashing,
InsertAccountHashing,
InsertMerkleTree,
InsertBlock,
InsertState,
InsertHashes,
InsertHistoryIndices,
UpdatePipelineStages,
InsertCanonicalHeaders,
InsertHeaders,
InsertHeaderNumbers,
InsertHeaderTerminalDifficulties,
InsertBlockBodyIndices,
InsertTransactionBlocks,
GetNextTxNum,
GetParentTD,
}
/// Database provider metrics
#[derive(Metrics)]
#[metrics(scope = "storage.providers.database")]
struct DatabaseProviderMetrics {
/// Duration of insert storage hashing
insert_storage_hashing: Histogram,
/// Duration of insert account hashing
insert_account_hashing: Histogram,
/// Duration of insert merkle tree
insert_merkle_tree: Histogram,
/// Duration of insert block
insert_block: Histogram,
/// Duration of insert state
insert_state: Histogram,
/// Duration of insert hashes
insert_hashes: Histogram,
/// Duration of insert history indices
insert_history_indices: Histogram,
/// Duration of update pipeline stages
update_pipeline_stages: Histogram,
/// Duration of insert canonical headers
insert_canonical_headers: Histogram,
/// Duration of insert headers
insert_headers: Histogram,
/// Duration of insert header numbers
insert_header_numbers: Histogram,
/// Duration of insert header TD
insert_header_td: Histogram,
/// Duration of insert block body indices
insert_block_body_indices: Histogram,
/// Duration of insert transaction blocks
insert_tx_blocks: Histogram,
/// Duration of get next tx num
get_next_tx_num: Histogram,
/// Duration of get parent TD
get_parent_td: Histogram,
}
impl DatabaseProviderMetrics {
/// Records the duration for the given action.
pub(crate) fn record_duration(&self, action: Action, duration: Duration) {
match action {
Action::InsertStorageHashing => self.insert_storage_hashing.record(duration),
Action::InsertAccountHashing => self.insert_account_hashing.record(duration),
Action::InsertMerkleTree => self.insert_merkle_tree.record(duration),
Action::InsertBlock => self.insert_block.record(duration),
Action::InsertState => self.insert_state.record(duration),
Action::InsertHashes => self.insert_hashes.record(duration),
Action::InsertHistoryIndices => self.insert_history_indices.record(duration),
Action::UpdatePipelineStages => self.update_pipeline_stages.record(duration),
Action::InsertCanonicalHeaders => self.insert_canonical_headers.record(duration),
Action::InsertHeaders => self.insert_headers.record(duration),
Action::InsertHeaderNumbers => self.insert_header_numbers.record(duration),
Action::InsertHeaderTerminalDifficulties => self.insert_header_td.record(duration),
Action::InsertBlockBodyIndices => self.insert_block_body_indices.record(duration),
Action::InsertTransactionBlocks => self.insert_tx_blocks.record(duration),
Action::GetNextTxNum => self.get_next_tx_num.record(duration),
Action::GetParentTD => self.get_parent_td.record(duration),
}
}
}

// crates/storage/provider/src/providers/database/provider.rs

use crate::{
bundle_state::StorageRevertsIter,
providers::{
database::{chain::ChainStorage, metrics},
static_file::StaticFileWriter,
NodeTypesForProvider, StaticFileProvider,
},
to_range,
traits::{
AccountExtReader, BlockSource, ChangeSetReader, ReceiptProvider, StageCheckpointWriter,
},
AccountReader, BlockBodyWriter, BlockExecutionWriter, BlockHashReader, BlockNumReader,
BlockReader, BlockWriter, BundleStateInit, ChainStateBlockReader, ChainStateBlockWriter,
DBProvider, HashingWriter, HeaderProvider, HeaderSyncGapProvider, HistoricalStateProvider,
HistoricalStateProviderRef, HistoryWriter, LatestStateProvider, LatestStateProviderRef,
OriginalValuesKnown, ProviderError, PruneCheckpointReader, PruneCheckpointWriter, RevertsInit,
StageCheckpointReader, StateProviderBox, StateWriter, StaticFileProviderFactory, StatsReader,
StorageLocation, StorageReader, StorageTrieWriter, TransactionVariant, TransactionsProvider,
TransactionsProviderExt, TrieWriter,
};
use alloy_consensus::{
transaction::{SignerRecoverable, TransactionMeta},
BlockHeader, Header, TxReceipt,
};
use alloy_eips::{eip2718::Encodable2718, BlockHashOrNumber};
use alloy_primitives::{
keccak256,
map::{hash_map, B256Map, HashMap, HashSet},
Address, BlockHash, BlockNumber, TxHash, TxNumber, B256, U256,
};
use itertools::Itertools;
use rayon::slice::ParallelSliceMut;
use reth_chainspec::{ChainInfo, ChainSpecProvider, EthChainSpec, EthereumHardforks};
use reth_db_api::{
cursor::{DbCursorRO, DbCursorRW, DbDupCursorRO, DbDupCursorRW},
database::Database,
models::{
sharded_key, storage_sharded_key::StorageShardedKey, AccountBeforeTx, BlockNumberAddress,
ShardedKey, StoredBlockBodyIndices,
},
table::Table,
tables,
transaction::{DbTx, DbTxMut},
BlockNumberList, DatabaseError, PlainAccountState, PlainStorageState,
};
use reth_execution_types::{Chain, ExecutionOutcome};
use reth_node_types::{BlockTy, BodyTy, HeaderTy, NodeTypes, ReceiptTy, TxTy};
use reth_primitives_traits::{
Account, Block as _, BlockBody as _, Bytecode, GotExpected, NodePrimitives, RecoveredBlock,
SealedHeader, SignedTransaction, StorageEntry,
};
use reth_prune_types::{
PruneCheckpoint, PruneMode, PruneModes, PruneSegment, MINIMUM_PRUNING_DISTANCE,
};
use reth_stages_types::{StageCheckpoint, StageId};
use reth_static_file_types::StaticFileSegment;
use reth_storage_api::{
BlockBodyIndicesProvider, BlockBodyReader, NodePrimitivesProvider, StateProvider,
StorageChangeSetReader, TryIntoHistoricalStateProvider,
};
use reth_storage_errors::provider::{ProviderResult, RootMismatch};
use reth_trie::{
prefix_set::{PrefixSet, PrefixSetMut, TriePrefixSets},
updates::{StorageTrieUpdates, TrieUpdates},
HashedPostStateSorted, Nibbles, StateRoot, StoredNibbles,
};
use reth_trie_db::{DatabaseStateRoot, DatabaseStorageTrieCursor};
use revm_database::states::{
PlainStateReverts, PlainStorageChangeset, PlainStorageRevert, StateChangeset,
};
use std::{
cmp::Ordering,
collections::{BTreeMap, BTreeSet},
fmt::Debug,
ops::{Deref, DerefMut, Range, RangeBounds, RangeInclusive},
sync::{mpsc, Arc},
};
use tracing::{debug, trace};
/// A [`DatabaseProvider`] that holds a read-only database transaction.
pub type DatabaseProviderRO<DB, N> = DatabaseProvider<<DB as Database>::TX, N>;
/// A [`DatabaseProvider`] that holds a read-write database transaction.
///
/// Ideally this would be an alias type. However, a compiler error (<https://github.com/rust-lang/rust/issues/102211>) forces us to wrap this in a struct instead.
/// Once that issue is solved, we can probably revert to an alias type.
#[derive(Debug)]
pub struct DatabaseProviderRW<DB: Database, N: NodeTypes>(
pub DatabaseProvider<<DB as Database>::TXMut, N>,
);
impl<DB: Database, N: NodeTypes> Deref for DatabaseProviderRW<DB, N> {
type Target = DatabaseProvider<<DB as Database>::TXMut, N>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<DB: Database, N: NodeTypes> DerefMut for DatabaseProviderRW<DB, N> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl<DB: Database, N: NodeTypes> AsRef<DatabaseProvider<<DB as Database>::TXMut, N>>
for DatabaseProviderRW<DB, N>
{
fn as_ref(&self) -> &DatabaseProvider<<DB as Database>::TXMut, N> {
&self.0
}
}
impl<DB: Database, N: NodeTypes + 'static> DatabaseProviderRW<DB, N> {
/// Commit database transaction and static file if it exists.
pub fn commit(self) -> ProviderResult<bool> {
self.0.commit()
}
/// Consume `DbTx` or `DbTxMut`.
pub fn into_tx(self) -> <DB as Database>::TXMut {
self.0.into_tx()
}
}
impl<DB: Database, N: NodeTypes> From<DatabaseProviderRW<DB, N>>
for DatabaseProvider<<DB as Database>::TXMut, N>
{
fn from(provider: DatabaseProviderRW<DB, N>) -> Self {
provider.0
}
}
/// A provider struct that fetches data from the database.
/// Wrapper around a [`DbTx`] or [`DbTxMut`] transaction. Implements provider traits such as
/// [`HeaderProvider`] and [`BlockHashReader`].
#[derive(Debug)]
pub struct DatabaseProvider<TX, N: NodeTypes> {
/// Database transaction.
tx: TX,
/// Chain spec
chain_spec: Arc<N::ChainSpec>,
/// Static File provider
static_file_provider: StaticFileProvider<N::Primitives>,
/// Pruning configuration
prune_modes: PruneModes,
/// Node storage handler.
storage: Arc<N::Storage>,
}
impl<TX, N: NodeTypes> DatabaseProvider<TX, N> {
/// Returns reference to prune modes.
pub const fn prune_modes_ref(&self) -> &PruneModes {
&self.prune_modes
}
}
impl<TX: DbTx + 'static, N: NodeTypes> DatabaseProvider<TX, N> {
/// State provider for latest state
pub fn latest<'a>(&'a self) -> Box<dyn StateProvider + 'a> {
trace!(target: "providers::db", "Returning latest state provider");
Box::new(LatestStateProviderRef::new(self))
}
/// Storage provider for state at that given block hash
pub fn history_by_block_hash<'a>(
&'a self,
block_hash: BlockHash,
) -> ProviderResult<Box<dyn StateProvider + 'a>> {
let mut block_number =
self.block_number(block_hash)?.ok_or(ProviderError::BlockHashNotFound(block_hash))?;
if block_number == self.best_block_number().unwrap_or_default() &&
block_number == self.last_block_number().unwrap_or_default()
{
return Ok(Box::new(LatestStateProviderRef::new(self)))
}
// +1 as the changeset that we want is the one that was applied after this block.
block_number += 1;
let account_history_prune_checkpoint =
self.get_prune_checkpoint(PruneSegment::AccountHistory)?;
let storage_history_prune_checkpoint =
self.get_prune_checkpoint(PruneSegment::StorageHistory)?;
let mut state_provider = HistoricalStateProviderRef::new(self, block_number);
// If we pruned account or storage history, we can't return state on every historical block.
// Instead, we should cap it at the latest prune checkpoint for corresponding prune segment.
if let Some(prune_checkpoint_block_number) =
account_history_prune_checkpoint.and_then(|checkpoint| checkpoint.block_number)
{
state_provider = state_provider.with_lowest_available_account_history_block_number(
prune_checkpoint_block_number + 1,
);
}
if let Some(prune_checkpoint_block_number) =
storage_history_prune_checkpoint.and_then(|checkpoint| checkpoint.block_number)
{
state_provider = state_provider.with_lowest_available_storage_history_block_number(
prune_checkpoint_block_number + 1,
);
}
Ok(Box::new(state_provider))
}
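The two `+ 1` adjustments above can be stated as plain arithmetic: the changeset needed to view state *at* block `N` is the one applied after it, and pruned history caps the earliest viewable block at the checkpoint plus one. A minimal sketch (helper names hypothetical):

```rust
// State at block `target` is reconstructed from the changeset applied
// after that block, hence `target + 1`.
fn changeset_block(target: u64) -> u64 {
    target + 1
}

// If history was pruned up to `checkpoint`, the earliest block whose state
// can still be reconstructed is `checkpoint + 1`; with no pruning, genesis.
fn lowest_available(prune_checkpoint: Option<u64>) -> u64 {
    prune_checkpoint.map(|b| b + 1).unwrap_or(0)
}

fn main() {
    assert_eq!(changeset_block(41), 42);
    assert_eq!(lowest_available(None), 0);
    assert_eq!(lowest_available(Some(100)), 101);
}
```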
#[cfg(feature = "test-utils")]
/// Sets the prune modes for provider.
pub fn set_prune_modes(&mut self, prune_modes: PruneModes) {
self.prune_modes = prune_modes;
}
}
impl<TX, N: NodeTypes> NodePrimitivesProvider for DatabaseProvider<TX, N> {
type Primitives = N::Primitives;
}
impl<TX, N: NodeTypes> StaticFileProviderFactory for DatabaseProvider<TX, N> {
/// Returns a static file provider
fn static_file_provider(&self) -> StaticFileProvider<Self::Primitives> {
self.static_file_provider.clone()
}
}
impl<TX: Debug + Send + Sync, N: NodeTypes<ChainSpec: EthChainSpec + 'static>> ChainSpecProvider
for DatabaseProvider<TX, N>
{
type ChainSpec = N::ChainSpec;
fn chain_spec(&self) -> Arc<Self::ChainSpec> {
self.chain_spec.clone()
}
}
impl<TX: DbTxMut, N: NodeTypes> DatabaseProvider<TX, N> {
/// Creates a provider with an inner read-write transaction.
pub const fn new_rw(
tx: TX,
chain_spec: Arc<N::ChainSpec>,
static_file_provider: StaticFileProvider<N::Primitives>,
prune_modes: PruneModes,
storage: Arc<N::Storage>,
) -> Self {
Self { tx, chain_spec, static_file_provider, prune_modes, storage }
}
}
impl<TX, N: NodeTypes> AsRef<Self> for DatabaseProvider<TX, N> {
fn as_ref(&self) -> &Self {
self
}
}
impl<TX: DbTx + DbTxMut + 'static, N: NodeTypesForProvider> DatabaseProvider<TX, N> {
/// Unwinds trie state for the given range.
///
    /// This includes calculating the resulting state root and comparing it with the parent
    /// block's state root.
pub fn unwind_trie_state_range(
&self,
range: RangeInclusive<BlockNumber>,
) -> ProviderResult<()> {
let changed_accounts = self
.tx
.cursor_read::<tables::AccountChangeSets>()?
.walk_range(range.clone())?
.collect::<Result<Vec<_>, _>>()?;
// Unwind account hashes. Add changed accounts to account prefix set.
let hashed_addresses = self.unwind_account_hashing(changed_accounts.iter())?;
let mut account_prefix_set = PrefixSetMut::with_capacity(hashed_addresses.len());
let mut destroyed_accounts = HashSet::default();
for (hashed_address, account) in hashed_addresses {
account_prefix_set.insert(Nibbles::unpack(hashed_address));
if account.is_none() {
destroyed_accounts.insert(hashed_address);
}
}
// Unwind account history indices.
self.unwind_account_history_indices(changed_accounts.iter())?;
let storage_range = BlockNumberAddress::range(range.clone());
let changed_storages = self
.tx
.cursor_read::<tables::StorageChangeSets>()?
.walk_range(storage_range)?
.collect::<Result<Vec<_>, _>>()?;
// Unwind storage hashes. Add changed account and storage keys to corresponding prefix
// sets.
let mut storage_prefix_sets = B256Map::<PrefixSet>::default();
let storage_entries = self.unwind_storage_hashing(changed_storages.iter().copied())?;
for (hashed_address, hashed_slots) in storage_entries {
account_prefix_set.insert(Nibbles::unpack(hashed_address));
let mut storage_prefix_set = PrefixSetMut::with_capacity(hashed_slots.len());
for slot in hashed_slots {
storage_prefix_set.insert(Nibbles::unpack(slot));
}
storage_prefix_sets.insert(hashed_address, storage_prefix_set.freeze());
}
// Unwind storage history indices.
self.unwind_storage_history_indices(changed_storages.iter().copied())?;
// Calculate the reverted merkle root.
// This is the same as `StateRoot::incremental_root_with_updates`, only the prefix sets
// are pre-loaded.
let prefix_sets = TriePrefixSets {
account_prefix_set: account_prefix_set.freeze(),
storage_prefix_sets,
destroyed_accounts,
};
let (new_state_root, trie_updates) = StateRoot::from_tx(&self.tx)
.with_prefix_sets(prefix_sets)
.root_with_updates()
.map_err(reth_db_api::DatabaseError::from)?;
let parent_number = range.start().saturating_sub(1);
let parent_state_root = self
.header_by_number(parent_number)?
.ok_or_else(|| ProviderError::HeaderNotFound(parent_number.into()))?
.state_root();
        // The state root should always be correct, as we are reverting state.
        // But for the sake of double verification, we check it again.
if new_state_root != parent_state_root {
let parent_hash = self
.block_hash(parent_number)?
.ok_or_else(|| ProviderError::HeaderNotFound(parent_number.into()))?;
return Err(ProviderError::UnwindStateRootMismatch(Box::new(RootMismatch {
root: GotExpected { got: new_state_root, expected: parent_state_root },
block_number: parent_number,
block_hash: parent_hash,
})))
}
self.write_trie_updates(&trie_updates)?;
Ok(())
}
/// Removes receipts from all transactions starting with provided number (inclusive).
fn remove_receipts_from(
&self,
from_tx: TxNumber,
last_block: BlockNumber,
remove_from: StorageLocation,
) -> ProviderResult<()> {
if remove_from.database() {
// iterate over block body and remove receipts
self.remove::<tables::Receipts<ReceiptTy<N>>>(from_tx..)?;
}
if remove_from.static_files() && !self.prune_modes.has_receipts_pruning() {
let static_file_receipt_num =
self.static_file_provider.get_highest_static_file_tx(StaticFileSegment::Receipts);
let to_delete = static_file_receipt_num
.map(|static_num| (static_num + 1).saturating_sub(from_tx))
.unwrap_or_default();
self.static_file_provider
.latest_writer(StaticFileSegment::Receipts)?
.prune_receipts(to_delete, last_block)?;
}
Ok(())
}
}
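The static-file pruning arithmetic in `remove_receipts_from` can be sketched in isolation; `receipts_to_delete` is a hypothetical helper mirroring the `(static_num + 1).saturating_sub(from_tx)` computation:

```rust
// Number of receipts to prune from static files when removing everything
// from `from_tx` onward, given the highest receipt tx number stored (if any).
fn receipts_to_delete(highest_static_tx: Option<u64>, from_tx: u64) -> u64 {
    highest_static_tx
        // Tx numbers are 0-based, so `highest + 1` receipts exist in total;
        // `saturating_sub` handles `from_tx` past the stored range.
        .map(|highest| (highest + 1).saturating_sub(from_tx))
        .unwrap_or(0)
}

fn main() {
    assert_eq!(receipts_to_delete(Some(9), 5), 5); // txs 5..=9 are removed
    assert_eq!(receipts_to_delete(Some(9), 12), 0); // nothing stored at/after 12
    assert_eq!(receipts_to_delete(None, 0), 0); // no static receipts at all
}
```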
impl<TX: DbTx + 'static, N: NodeTypes> TryIntoHistoricalStateProvider for DatabaseProvider<TX, N> {
fn try_into_history_at_block(
self,
mut block_number: BlockNumber,
) -> ProviderResult<StateProviderBox> {
// if the block number is the same as the currently best block number on disk we can use the
// latest state provider here
if block_number == self.best_block_number().unwrap_or_default() {
return Ok(Box::new(LatestStateProvider::new(self)))
}
// +1 as the changeset that we want is the one that was applied after this block.
block_number += 1;
let account_history_prune_checkpoint =
self.get_prune_checkpoint(PruneSegment::AccountHistory)?;
let storage_history_prune_checkpoint =
self.get_prune_checkpoint(PruneSegment::StorageHistory)?;
let mut state_provider = HistoricalStateProvider::new(self, block_number);
// If we pruned account or storage history, we can't return state on every historical block.
// Instead, we should cap it at the latest prune checkpoint for corresponding prune segment.
if let Some(prune_checkpoint_block_number) =
account_history_prune_checkpoint.and_then(|checkpoint| checkpoint.block_number)
{
state_provider = state_provider.with_lowest_available_account_history_block_number(
prune_checkpoint_block_number + 1,
);
}
if let Some(prune_checkpoint_block_number) =
storage_history_prune_checkpoint.and_then(|checkpoint| checkpoint.block_number)
{
state_provider = state_provider.with_lowest_available_storage_history_block_number(
prune_checkpoint_block_number + 1,
);
}
Ok(Box::new(state_provider))
}
}
impl<
Tx: DbTx + DbTxMut + 'static,
N: NodeTypesForProvider<Primitives: NodePrimitives<BlockHeader = Header>>,
> DatabaseProvider<Tx, N>
{
// TODO: uncomment below, once `reth debug_cmd` has been feature gated with dev.
// #[cfg(any(test, feature = "test-utils"))]
/// Inserts an historical block. **Used for setting up test environments**
pub fn insert_historical_block(
&self,
block: RecoveredBlock<<Self as BlockWriter>::Block>,
) -> ProviderResult<StoredBlockBodyIndices> {
let ttd = if block.number() == 0 {
block.header().difficulty()
} else {
let parent_block_number = block.number() - 1;
let parent_ttd = self.header_td_by_number(parent_block_number)?.unwrap_or_default();
parent_ttd + block.header().difficulty()
};
let mut writer = self.static_file_provider.latest_writer(StaticFileSegment::Headers)?;
// Backfill: some tests start at a forward block number, but static files require no gaps.
let segment_header = writer.user_header();
if segment_header.block_end().is_none() && segment_header.expected_block_start() == 0 {
for block_number in 0..block.number() {
let mut prev = block.clone_header();
prev.number = block_number;
writer.append_header(&prev, U256::ZERO, &B256::ZERO)?;
}
}
writer.append_header(block.header(), ttd, &block.hash())?;
self.insert_block(block, StorageLocation::Database)
}
}
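The total-difficulty computation in `insert_historical_block` is a running sum over block difficulties, with genesis contributing its own difficulty. A minimal sketch over plain integers (helper name hypothetical):

```rust
// Cumulative total difficulty per block: ttd(n) = ttd(n - 1) + difficulty(n),
// with ttd(0) = difficulty(0) for genesis.
fn ttds(difficulties: &[u64]) -> Vec<u64> {
    let mut ttd = 0u64;
    difficulties
        .iter()
        .map(|d| {
            ttd += d;
            ttd
        })
        .collect()
}

fn main() {
    // Blocks 0, 1, 2 with difficulties 100, 50, 25.
    assert_eq!(ttds(&[100, 50, 25]), vec![100, 150, 175]);
    assert_eq!(ttds(&[]), Vec::<u64>::new());
}
```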
/// For a given key, unwind all history shards that contain block numbers at or above the given
/// block number.
///
/// S - Sharded key subtype.
/// T - Table to walk over.
/// C - Cursor implementation.
///
/// This function walks the entries from the given start key and deletes all shards that belong to
/// the key and contain block numbers at or above the given block number. Shards entirely below
/// the block number are preserved.
///
/// The boundary shard (the shard that spans across the block number) is removed from the database.
/// Any indices in it that fall below the block number are filtered out and returned for
/// reinsertion, so the historical data below the unwinding point is preserved.
fn unwind_history_shards<S, T, C>(
cursor: &mut C,
start_key: T::Key,
block_number: BlockNumber,
mut shard_belongs_to_key: impl FnMut(&T::Key) -> bool,
) -> ProviderResult<Vec<u64>>
where
T: Table<Value = BlockNumberList>,
T::Key: AsRef<ShardedKey<S>>,
C: DbCursorRO<T> + DbCursorRW<T>,
{
// Start from the given key and iterate through shards
let mut item = cursor.seek_exact(start_key)?;
while let Some((sharded_key, list)) = item {
// If the shard does not belong to the key, break.
if !shard_belongs_to_key(&sharded_key) {
break
}
// Always delete the current shard from the database first
// We'll decide later what (if anything) to reinsert
cursor.delete_current()?;
// Get the first (lowest) block number in this shard
// All block numbers in a shard are sorted in ascending order
let first = list.iter().next().expect("List can't be empty");
// Case 1: Entire shard is at or above the unwinding point
// Keep it deleted (don't return anything for reinsertion)
if first >= block_number {
item = cursor.prev()?;
continue
}
// Case 2: This is a boundary shard (spans across the unwinding point)
// The shard contains some blocks below and some at/above the unwinding point
else if block_number <= sharded_key.as_ref().highest_block_number {
// Return only the block numbers that are below the unwinding point
// These will be reinserted to preserve the historical data
return Ok(list.iter().take_while(|i| *i < block_number).collect::<Vec<_>>())
}
// Case 3: Entire shard is below the unwinding point
// Return all block numbers for reinsertion (preserve entire shard)
return Ok(list.iter().collect::<Vec<_>>())
}
// No shards found or all processed
Ok(Vec::new())
}
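The three cases above can be exercised on a plain sorted list. This hypothetical helper mirrors the per-shard decision, with `highest` standing in for the sharded key's `highest_block_number`:

```rust
// Decide what to reinsert for one shard when unwinding to `block_number`.
fn unwind_shard(list: &[u64], highest: u64, block_number: u64) -> Vec<u64> {
    let first = *list.first().expect("shard can't be empty");
    if first >= block_number {
        // Case 1: whole shard at/above the unwind point -> keep it deleted.
        Vec::new()
    } else if block_number <= highest {
        // Case 2: boundary shard -> reinsert only entries below the unwind point.
        list.iter().copied().take_while(|i| *i < block_number).collect()
    } else {
        // Case 3: whole shard below the unwind point -> reinsert it all.
        list.to_vec()
    }
}

fn main() {
    assert_eq!(unwind_shard(&[10, 20, 30], 30, 5), Vec::<u64>::new()); // case 1
    assert_eq!(unwind_shard(&[10, 20, 30], 30, 25), vec![10, 20]); // case 2
    assert_eq!(unwind_shard(&[10, 20, 30], 30, 40), vec![10, 20, 30]); // case 3
}
```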
impl<TX: DbTx + 'static, N: NodeTypesForProvider> DatabaseProvider<TX, N> {
/// Creates a provider with an inner read-only transaction.
pub const fn new(
tx: TX,
chain_spec: Arc<N::ChainSpec>,
static_file_provider: StaticFileProvider<N::Primitives>,
prune_modes: PruneModes,
storage: Arc<N::Storage>,
) -> Self {
Self { tx, chain_spec, static_file_provider, prune_modes, storage }
}
/// Consume `DbTx` or `DbTxMut`.
pub fn into_tx(self) -> TX {
self.tx
}
/// Pass `DbTx` or `DbTxMut` mutable reference.
pub const fn tx_mut(&mut self) -> &mut TX {
&mut self.tx
}
/// Pass `DbTx` or `DbTxMut` immutable reference.
pub const fn tx_ref(&self) -> &TX {
&self.tx
}
/// Returns a reference to the chain specification.
pub fn chain_spec(&self) -> &N::ChainSpec {
&self.chain_spec
}
}
impl<TX: DbTx + 'static, N: NodeTypesForProvider> DatabaseProvider<TX, N> {
fn transactions_by_tx_range_with_cursor<C>(
&self,
range: impl RangeBounds<TxNumber>,
cursor: &mut C,
) -> ProviderResult<Vec<TxTy<N>>>
where
C: DbCursorRO<tables::Transactions<TxTy<N>>>,
{
self.static_file_provider.get_range_with_static_file_or_database(
StaticFileSegment::Transactions,
to_range(range),
|static_file, range, _| static_file.transactions_by_tx_range(range),
|range, _| self.cursor_collect(cursor, range),
|_| true,
)
}
fn recovered_block<H, HF, B, BF>(
&self,
id: BlockHashOrNumber,
_transaction_kind: TransactionVariant,
header_by_number: HF,
construct_block: BF,
) -> ProviderResult<Option<B>>
where
H: AsRef<HeaderTy<N>>,
HF: FnOnce(BlockNumber) -> ProviderResult<Option<H>>,
BF: FnOnce(H, BodyTy<N>, Vec<Address>) -> ProviderResult<Option<B>>,
{
let Some(block_number) = self.convert_hash_or_number(id)? else { return Ok(None) };
let Some(header) = header_by_number(block_number)? else { return Ok(None) };
// Get the block body
//
// If the body indices are not found, this means that the transactions either do not exist
        // in the database yet, or they do exist but are not indexed. If they exist but are not
        // indexed, we don't have enough information to return the block anyway, so we return
// `None`.
let Some(body) = self.block_body_indices(block_number)? else { return Ok(None) };
let tx_range = body.tx_num_range();
let (transactions, senders) = if tx_range.is_empty() {
(vec![], vec![])
} else {
(self.transactions_by_tx_range(tx_range.clone())?, self.senders_by_tx_range(tx_range)?)
};
let body = self
.storage
.reader()
.read_block_bodies(self, vec![(header.as_ref(), transactions)])?
.pop()
.ok_or(ProviderError::InvalidStorageOutput)?;
construct_block(header, body, senders)
}
/// Returns a range of blocks from the database.
///
/// Uses the provided `headers_range` to get the headers for the range, and `assemble_block` to
/// construct blocks from the following inputs:
    /// - Header
    /// - Block body
    /// - Range of transaction numbers
fn block_range<F, H, HF, R>(
&self,
range: RangeInclusive<BlockNumber>,
headers_range: HF,
mut assemble_block: F,
) -> ProviderResult<Vec<R>>
where
H: AsRef<HeaderTy<N>>,
HF: FnOnce(RangeInclusive<BlockNumber>) -> ProviderResult<Vec<H>>,
F: FnMut(H, BodyTy<N>, Range<TxNumber>) -> ProviderResult<R>,
{
if range.is_empty() {
return Ok(Vec::new())
}
let len = range.end().saturating_sub(*range.start()) as usize;
let mut blocks = Vec::with_capacity(len);
let headers = headers_range(range.clone())?;
let mut tx_cursor = self.tx.cursor_read::<tables::Transactions<TxTy<N>>>()?;
        // If the body indices are not found, this means that the transactions either do not
        // exist in the database yet, or they do exist but are not indexed. If they exist but
        // are not indexed, we don't have enough information to return the block anyway, so we
        // skip the block.
let present_headers = self
.block_body_indices_range(range)?
.into_iter()
.map(|b| b.tx_num_range())
.zip(headers)
.collect::<Vec<_>>();
let mut inputs = Vec::new();
for (tx_range, header) in &present_headers {
let transactions = if tx_range.is_empty() {
Vec::new()
} else {
self.transactions_by_tx_range_with_cursor(tx_range.clone(), &mut tx_cursor)?
};
inputs.push((header.as_ref(), transactions));
}
let bodies = self.storage.reader().read_block_bodies(self, inputs)?;
for ((tx_range, header), body) in present_headers.into_iter().zip(bodies) {
blocks.push(assemble_block(header, body, tx_range)?);
}
Ok(blocks)
}
/// Returns a range of blocks from the database, along with the senders of each
/// transaction in the blocks.
///
/// Uses the provided `headers_range` to get the headers for the range, and `assemble_block` to
/// construct blocks from the following inputs:
    /// - Header
    /// - Block body
    /// - Senders
fn block_with_senders_range<H, HF, B, BF>(
&self,
range: RangeInclusive<BlockNumber>,
headers_range: HF,
assemble_block: BF,
) -> ProviderResult<Vec<B>>
where
H: AsRef<HeaderTy<N>>,
HF: Fn(RangeInclusive<BlockNumber>) -> ProviderResult<Vec<H>>,
BF: Fn(H, BodyTy<N>, Vec<Address>) -> ProviderResult<B>,
{
let mut senders_cursor = self.tx.cursor_read::<tables::TransactionSenders>()?;
self.block_range(range, headers_range, |header, body, tx_range| {
let senders = if tx_range.is_empty() {
Vec::new()
} else {
// fetch senders from the senders table
let known_senders =
senders_cursor
.walk_range(tx_range.clone())?
.collect::<Result<HashMap<_, _>, _>>()?;
let mut senders = Vec::with_capacity(body.transactions().len());
for (tx_num, tx) in tx_range.zip(body.transactions()) {
match known_senders.get(&tx_num) {
None => {
// recover the sender from the transaction if not found
let sender = tx.recover_signer_unchecked()?;
senders.push(sender);
}
Some(sender) => senders.push(*sender),
}
}
senders
};
assemble_block(header, body, senders)
})
}
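The cache-or-recover pattern above (prefer senders cached in the senders table, fall back to signer recovery per transaction) can be sketched with plain maps; the helper and the fake `recover` closure are hypothetical:

```rust
use std::collections::HashMap;

// Resolve a sender per (tx_number, tx): cached value if present,
// otherwise recover it from the transaction itself.
fn senders_with_fallback<'a>(
    known: &HashMap<u64, &'a str>,
    txs: &[(u64, &'a str)],
    recover: impl Fn(&str) -> &'a str,
) -> Vec<&'a str> {
    txs.iter()
        .map(|(num, tx)| known.get(num).copied().unwrap_or_else(|| recover(tx)))
        .collect()
}

fn main() {
    // Only tx 0 has a cached sender; tx 1 must be recovered.
    let known = HashMap::from([(0u64, "alice")]);
    let txs = [(0u64, "tx0"), (1, "tx1")];
    let senders = senders_with_fallback(&known, &txs, |_| "recovered");
    assert_eq!(senders, vec!["alice", "recovered"]);
}
```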
/// Populate a [`BundleStateInit`] and [`RevertsInit`] using cursors over the
/// [`PlainAccountState`] and [`PlainStorageState`] tables, based on the given storage and
/// account changesets.
fn populate_bundle_state<A, S>(
&self,
account_changeset: Vec<(u64, AccountBeforeTx)>,
storage_changeset: Vec<(BlockNumberAddress, StorageEntry)>,
plain_accounts_cursor: &mut A,
plain_storage_cursor: &mut S,
) -> ProviderResult<(BundleStateInit, RevertsInit)>
where
A: DbCursorRO<PlainAccountState>,
S: DbDupCursorRO<PlainStorageState>,
{
        // Iterate over the changesets and pair each old value with the current plain state value.
        // The double `Option` around `Account` represents whether the account state is known
        // (outer option) and whether the account was removed (inner option).
        let mut state: BundleStateInit = HashMap::default();
        // NOTE: This does not work for blocks that are not at the tip, as the plain state is not
        // the state at the end of the range. We should rename these functions or add support for
        // accessing historical state. Accessing historical state can be tricky, but currently we
        // would not gain anything from it.
let mut reverts: RevertsInit = HashMap::default();
// add account changeset changes
for (block_number, account_before) in account_changeset.into_iter().rev() {
let AccountBeforeTx { info: old_info, address } = account_before;
match state.entry(address) {
hash_map::Entry::Vacant(entry) => {
let new_info = plain_accounts_cursor.seek_exact(address)?.map(|kv| kv.1);
entry.insert((old_info, new_info, HashMap::default()));
}
hash_map::Entry::Occupied(mut entry) => {
// overwrite old account state.
entry.get_mut().0 = old_info;
}
}
// insert old info into reverts.
reverts.entry(block_number).or_default().entry(address).or_default().0 = Some(old_info);
}
// add storage changeset changes
for (block_and_address, old_storage) in storage_changeset.into_iter().rev() {
let BlockNumberAddress((block_number, address)) = block_and_address;
// get account state or insert from plain state.
let account_state = match state.entry(address) {
hash_map::Entry::Vacant(entry) => {
let present_info = plain_accounts_cursor.seek_exact(address)?.map(|kv| kv.1);
entry.insert((present_info, present_info, HashMap::default()))
}
hash_map::Entry::Occupied(entry) => entry.into_mut(),
};
// match storage.
match account_state.2.entry(old_storage.key) {
hash_map::Entry::Vacant(entry) => {
let new_storage = plain_storage_cursor
.seek_by_key_subkey(address, old_storage.key)?
.filter(|storage| storage.key == old_storage.key)
.unwrap_or_default();
entry.insert((old_storage.value, new_storage.value));
}
hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().0 = old_storage.value;
}
};
reverts
.entry(block_number)
.or_default()
.entry(address)
.or_default()
.1
.push(old_storage);
}
Ok((state, reverts))
}
}
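The "oldest pre-state wins, plain state looked up only on first touch" behavior of `populate_bundle_state` can be sketched with simplified types (addresses as strings, accounts as integers; all names hypothetical):

```rust
use std::collections::HashMap;

// Build (old, new) pairs per address by replaying account changesets
// newest-first. The first touch fetches the current plain-state value;
// later (earlier-block) entries overwrite only `old`.
fn bundle_init(
    changes: Vec<(u64, &'static str, Option<u64>)>, // (block, address, pre-state)
    plain: &HashMap<&'static str, u64>,
) -> HashMap<&'static str, (Option<u64>, Option<u64>)> {
    let mut state: HashMap<&'static str, (Option<u64>, Option<u64>)> = HashMap::new();
    for (_block, addr, old) in changes.into_iter().rev() {
        state
            .entry(addr)
            .and_modify(|entry| entry.0 = old)
            .or_insert((old, plain.get(addr).copied()));
    }
    state
}

fn main() {
    let plain = HashMap::from([("a", 99u64), ("b", 7)]);
    let changes = vec![(1u64, "a", Some(10u64)), (2, "a", Some(20)), (2, "b", None)];
    let state = bundle_init(changes, &plain);
    // "a": oldest pre-state is 10, current plain-state value is 99.
    assert_eq!(state["a"], (Some(10), Some(99)));
    // "b": created at block 2 (no pre-state), current value 7.
    assert_eq!(state["b"], (None, Some(7)));
}
```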
impl<TX: DbTxMut + DbTx + 'static, N: NodeTypes> DatabaseProvider<TX, N> {
/// Commit database transaction.
pub fn commit(self) -> ProviderResult<bool> {
Ok(self.tx.commit()?)
}
    /// Loads a shard and removes it from the database. If the returned list is empty, the last
    /// shard was full or there are no shards at all.
fn take_shard<T>(
&self,
cursor: &mut <TX as DbTxMut>::CursorMut<T>,
key: T::Key,
) -> ProviderResult<Vec<u64>>
where
T: Table<Value = BlockNumberList>,
{
if let Some((_, list)) = cursor.seek_exact(key)? {
// delete old shard so new one can be inserted.
cursor.delete_current()?;
let list = list.iter().collect::<Vec<_>>();
return Ok(list)
}
Ok(Vec::new())
}
/// Insert history index to the database.
///
/// For each updated partial key, this function removes the last shard from
/// the database (if any), appends the new indices to it, chunks the resulting integer list and
/// inserts the new shards back into the database.
///
/// This function is used by history indexing stages.
fn append_history_index<P, T>(
&self,
index_updates: impl IntoIterator<Item = (P, impl IntoIterator<Item = u64>)>,
mut sharded_key_factory: impl FnMut(P, BlockNumber) -> T::Key,
) -> ProviderResult<()>
where
P: Copy,
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
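The `append_history_index` flow above (take the key's last shard, append the new block numbers, re-chunk, reinsert) can be sketched on plain `Vec<u64>` shards. This is an illustrative stand-in, not reth's API: `SHARD_LEN` and the `BTreeMap`-backed store replace `BlockNumberList` shards and the database cursor, and shards are keyed here by their highest block number rather than reth's sharded-key encoding.

```rust
use std::collections::BTreeMap;

const SHARD_LEN: usize = 4; // illustrative shard capacity; real shards are much larger

/// Append new block numbers for one key: pop the key's last shard (if any),
/// extend it, then write back fixed-size chunks.
fn append_history_index(
    store: &mut BTreeMap<(&'static str, u64), Vec<u64>>,
    key: &'static str,
    new_blocks: impl IntoIterator<Item = u64>,
) {
    // Take the last (highest) shard for this key, like `take_shard` + `delete_current`.
    let last = store.range((key, 0)..=(key, u64::MAX)).next_back().map(|(k, _)| *k);
    let mut list = match last {
        Some(k) => store.remove(&k).unwrap(),
        None => Vec::new(),
    };
    list.extend(new_blocks);

    // Re-chunk and insert shards back, keyed by each shard's highest block number.
    for chunk in list.chunks(SHARD_LEN) {
        let highest = *chunk.last().unwrap();
        store.insert((key, highest), chunk.to_vec());
    }
}

fn main() {
    let mut store = BTreeMap::new();
    append_history_index(&mut store, "addr", [1, 2, 3]);
    append_history_index(&mut store, "addr", [4, 5]);
    // One full shard [1, 2, 3, 4] and a partial trailing shard [5].
    assert_eq!(store[&("addr", 4)], vec![1, 2, 3, 4]);
    assert_eq!(store[&("addr", 5)], vec![5]);
}
```

Because only the last shard is ever rewritten, appends stay cheap even as the index for a hot key grows.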
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/traits/mod.rs | crates/storage/provider/src/traits/mod.rs | //! Collection of common provider traits.
// Re-export all the traits
pub use reth_storage_api::*;
pub use reth_chainspec::ChainSpecProvider;
mod static_file_provider;
pub use static_file_provider::StaticFileProviderFactory;
mod full;
pub use full::FullProvider;
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/traits/static_file_provider.rs | crates/storage/provider/src/traits/static_file_provider.rs | use reth_storage_api::NodePrimitivesProvider;
use crate::providers::StaticFileProvider;
/// Static file provider factory.
pub trait StaticFileProviderFactory: NodePrimitivesProvider {
    /// Create a new instance of the static file provider.
fn static_file_provider(&self) -> StaticFileProvider<Self::Primitives>;
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/traits/full.rs | crates/storage/provider/src/traits/full.rs | //! Helper provider traits to encapsulate all provider traits for simplicity.
use crate::{
AccountReader, BlockReader, BlockReaderIdExt, ChainSpecProvider, ChangeSetReader,
DatabaseProviderFactory, HashedPostStateProvider, StageCheckpointReader, StateProviderFactory,
StateReader, StaticFileProviderFactory,
};
use reth_chain_state::{CanonStateSubscriptions, ForkChoiceSubscriptions};
use reth_node_types::{BlockTy, HeaderTy, NodeTypesWithDB, ReceiptTy, TxTy};
use reth_storage_api::NodePrimitivesProvider;
use std::fmt::Debug;
/// Helper trait to unify all provider traits for simplicity.
pub trait FullProvider<N: NodeTypesWithDB>:
DatabaseProviderFactory<DB = N::DB, Provider: BlockReader>
+ NodePrimitivesProvider<Primitives = N::Primitives>
+ StaticFileProviderFactory<Primitives = N::Primitives>
+ BlockReaderIdExt<
Transaction = TxTy<N>,
Block = BlockTy<N>,
Receipt = ReceiptTy<N>,
Header = HeaderTy<N>,
> + AccountReader
+ StateProviderFactory
+ StateReader
+ HashedPostStateProvider
+ ChainSpecProvider<ChainSpec = N::ChainSpec>
+ ChangeSetReader
+ CanonStateSubscriptions
+ ForkChoiceSubscriptions<Header = HeaderTy<N>>
+ StageCheckpointReader
+ Clone
+ Debug
+ Unpin
+ 'static
{
}
impl<T, N: NodeTypesWithDB> FullProvider<N> for T where
T: DatabaseProviderFactory<DB = N::DB, Provider: BlockReader>
+ NodePrimitivesProvider<Primitives = N::Primitives>
+ StaticFileProviderFactory<Primitives = N::Primitives>
+ BlockReaderIdExt<
Transaction = TxTy<N>,
Block = BlockTy<N>,
Receipt = ReceiptTy<N>,
Header = HeaderTy<N>,
> + AccountReader
+ StateProviderFactory
+ StateReader
+ HashedPostStateProvider
+ ChainSpecProvider<ChainSpec = N::ChainSpec>
+ ChangeSetReader
+ CanonStateSubscriptions
+ ForkChoiceSubscriptions<Header = HeaderTy<N>>
+ StageCheckpointReader
+ Clone
+ Debug
+ Unpin
+ 'static
{
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
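`FullProvider` above follows a common Rust pattern: a helper trait whose supertraits list every required capability, plus a blanket impl so any type satisfying the bounds implements the helper trait for free. A minimal standalone sketch, with hypothetical `ReadA`/`ReadB` capabilities standing in for the provider traits:

```rust
// Two capability traits standing in for e.g. BlockReader / StateProviderFactory.
trait ReadA {
    fn a(&self) -> u32;
}
trait ReadB {
    fn b(&self) -> u32;
}

/// Helper trait unifying all capabilities, like `FullProvider`.
trait Full: ReadA + ReadB + Clone + 'static {}

/// Blanket impl: anything with the capabilities is automatically `Full`.
impl<T> Full for T where T: ReadA + ReadB + Clone + 'static {}

#[derive(Clone)]
struct Provider;
impl ReadA for Provider {
    fn a(&self) -> u32 {
        1
    }
}
impl ReadB for Provider {
    fn b(&self) -> u32 {
        2
    }
}

// Downstream code can now take one bound instead of repeating the whole list.
fn use_full<P: Full>(p: &P) -> u32 {
    p.a() + p.b()
}

fn main() {
    assert_eq!(use_full(&Provider), 3);
}
```

The blanket impl is what makes the trait purely a shorthand: it can never be implemented selectively, so it adds no new behavior, only a shorter bound.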
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/bundle_state/state_reverts.rs | crates/storage/provider/src/bundle_state/state_reverts.rs | use alloy_primitives::B256;
use revm_database::states::RevertToSlot;
use revm_state::FlaggedStorage;
use std::iter::Peekable;
/// Iterator over storage reverts.
/// See [`StorageRevertsIter::next`] for more details.
#[expect(missing_debug_implementations)]
pub struct StorageRevertsIter<R: Iterator, W: Iterator> {
reverts: Peekable<R>,
wiped: Peekable<W>,
}
impl<R, W> StorageRevertsIter<R, W>
where
R: Iterator<Item = (B256, RevertToSlot)>,
W: Iterator<Item = (B256, FlaggedStorage)>,
{
/// Create a new iterator over storage reverts.
pub fn new(
reverts: impl IntoIterator<IntoIter = R>,
wiped: impl IntoIterator<IntoIter = W>,
) -> Self {
Self { reverts: reverts.into_iter().peekable(), wiped: wiped.into_iter().peekable() }
}
/// Consume next revert and return it.
fn next_revert(&mut self) -> Option<(B256, FlaggedStorage)> {
self.reverts.next().map(|(key, revert)| (key, revert.to_previous_value()))
}
/// Consume next wiped storage and return it.
fn next_wiped(&mut self) -> Option<(B256, FlaggedStorage)> {
self.wiped.next()
}
}
impl<R, W> Iterator for StorageRevertsIter<R, W>
where
R: Iterator<Item = (B256, RevertToSlot)>,
W: Iterator<Item = (B256, FlaggedStorage)>,
{
type Item = (B256, FlaggedStorage);
    /// Iterate over storage reverts and wiped entries, returning items in sorted key order.
/// NOTE: The implementation assumes that inner iterators are already sorted.
fn next(&mut self) -> Option<Self::Item> {
match (self.reverts.peek(), self.wiped.peek()) {
(Some(revert), Some(wiped)) => {
// Compare the keys and return the lesser.
use std::cmp::Ordering;
match revert.0.cmp(&wiped.0) {
Ordering::Less => self.next_revert(),
Ordering::Greater => self.next_wiped(),
Ordering::Equal => {
// Keys are the same, decide which one to return.
let (key, revert_to) = *revert;
let value = match revert_to {
// If the slot is some, prefer the revert value.
RevertToSlot::Some(value) => value,
// If the slot was destroyed, prefer the database value.
RevertToSlot::Destroyed => wiped.1,
};
// Consume both values from inner iterators.
self.next_revert();
self.next_wiped();
Some((key, value))
}
}
}
(Some(_revert), None) => self.next_revert(),
(None, Some(_wiped)) => self.next_wiped(),
(None, None) => None,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_storage_reverts_iter_empty() {
// Create empty sample data for reverts and wiped entries.
let reverts: Vec<(B256, RevertToSlot)> = vec![];
let wiped: Vec<(B256, FlaggedStorage)> = vec![];
// Create the iterator with the empty data.
let iter = StorageRevertsIter::new(reverts, wiped);
// Iterate and collect results into a vector for verification.
let results: Vec<_> = iter.collect();
// Verify that the results are empty.
assert_eq!(results, vec![]);
}
#[test]
fn test_storage_reverts_iter_reverts_only() {
// Create sample data for only reverts.
let reverts = vec![
(B256::from_slice(&[4; 32]), RevertToSlot::Destroyed),
(B256::from_slice(&[5; 32]), RevertToSlot::Some(FlaggedStorage::new_from_value(40))),
];
// Create the iterator with only reverts and no wiped entries.
let iter = StorageRevertsIter::new(reverts, vec![]);
// Iterate and collect results into a vector for verification.
let results: Vec<_> = iter.collect();
// Verify the output order and values.
assert_eq!(
results,
vec![
                (B256::from_slice(&[4; 32]), FlaggedStorage::ZERO), // Destroyed slot reverts to zero
(B256::from_slice(&[5; 32]), FlaggedStorage::new_from_value(40)), /* Only revert
* present. */
]
);
}
#[test]
fn test_storage_reverts_iter_wiped_only() {
// Create sample data for only wiped entries.
let wiped = vec![
(B256::from_slice(&[6; 32]), FlaggedStorage::new_from_value(50)),
(B256::from_slice(&[7; 32]), FlaggedStorage::new_from_value(60)),
];
// Create the iterator with only wiped entries and no reverts.
let iter = StorageRevertsIter::new(vec![], wiped);
// Iterate and collect results into a vector for verification.
let results: Vec<_> = iter.collect();
// Verify the output order and values.
assert_eq!(
results,
vec![
(B256::from_slice(&[6; 32]), FlaggedStorage::new_from_value(50)), /* Only wiped
* present. */
(B256::from_slice(&[7; 32]), FlaggedStorage::new_from_value(60)), /* Only wiped
* present. */
]
);
}
#[test]
fn test_storage_reverts_iter_interleaved() {
// Create sample data for interleaved reverts and wiped entries.
let reverts = vec![
(B256::from_slice(&[8; 32]), RevertToSlot::Some(FlaggedStorage::new_from_value(70))),
(B256::from_slice(&[9; 32]), RevertToSlot::Some(FlaggedStorage::new_from_value(80))),
// Some higher key than wiped
(B256::from_slice(&[15; 32]), RevertToSlot::Some(FlaggedStorage::new_from_value(90))),
];
let wiped = vec![
(B256::from_slice(&[8; 32]), FlaggedStorage::new_from_value(75)), // Same key as revert
(B256::from_slice(&[10; 32]), FlaggedStorage::new_from_value(85)), // Wiped with new key
];
// Create the iterator with the sample data.
let iter = StorageRevertsIter::new(reverts, wiped);
// Iterate and collect results into a vector for verification.
let results: Vec<_> = iter.collect();
// Verify the output order and values.
assert_eq!(
results,
vec![
(B256::from_slice(&[8; 32]), FlaggedStorage::new_from_value(70)), /* Revert takes priority. */
(B256::from_slice(&[9; 32]), FlaggedStorage::new_from_value(80)), /* Only revert
* present. */
(B256::from_slice(&[10; 32]), FlaggedStorage::new_from_value(85)), // Wiped entry.
(B256::from_slice(&[15; 32]), FlaggedStorage::new_from_value(90)), /* Greater revert entry */
]
);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
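`StorageRevertsIter` above is a sorted two-way merge over `Peekable` iterators, preferring the revert value when both sides hold the same key. The same shape in miniature over `(u32, &str)` pairs (names here are illustrative, not reth API):

```rust
use std::cmp::Ordering;
use std::iter::Peekable;

/// Merge two key-sorted iterators; on equal keys, take `left`'s value
/// and consume both sides (mirroring revert-over-wiped precedence).
struct MergeSorted<L: Iterator, R: Iterator> {
    left: Peekable<L>,
    right: Peekable<R>,
}

impl<L, R> Iterator for MergeSorted<L, R>
where
    L: Iterator<Item = (u32, &'static str)>,
    R: Iterator<Item = (u32, &'static str)>,
{
    type Item = (u32, &'static str);

    fn next(&mut self) -> Option<Self::Item> {
        match (self.left.peek(), self.right.peek()) {
            (Some(l), Some(r)) => match l.0.cmp(&r.0) {
                Ordering::Less => self.left.next(),
                Ordering::Greater => self.right.next(),
                Ordering::Equal => {
                    // Same key: left wins, but advance both iterators.
                    self.right.next();
                    self.left.next()
                }
            },
            (Some(_), None) => self.left.next(),
            (None, Some(_)) => self.right.next(),
            (None, None) => None,
        }
    }
}

fn main() {
    let merged: Vec<_> = MergeSorted {
        left: vec![(1, "revert"), (3, "revert")].into_iter().peekable(),
        right: vec![(2, "wiped"), (3, "wiped")].into_iter().peekable(),
    }
    .collect();
    assert_eq!(merged, vec![(1, "revert"), (2, "wiped"), (3, "revert")]);
}
```

As in the real iterator, correctness hinges on both inputs already being sorted by key; the merge itself is a single O(n + m) pass.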
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/bundle_state/mod.rs | crates/storage/provider/src/bundle_state/mod.rs | //! Bundle state module.
//! This module contains all the logic related to bundle state.
mod state_reverts;
pub use state_reverts::StorageRevertsIter;
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/test_utils/noop.rs | crates/storage/provider/src/test_utils/noop.rs | //! Additional testing support for `NoopProvider`.
use crate::{providers::StaticFileProvider, StaticFileProviderFactory};
use reth_primitives_traits::NodePrimitives;
use std::path::PathBuf;
/// Re-exported for convenience
pub use reth_storage_api::noop::NoopProvider;
impl<C: Send + Sync, N: NodePrimitives> StaticFileProviderFactory for NoopProvider<C, N> {
fn static_file_provider(&self) -> StaticFileProvider<Self::Primitives> {
StaticFileProvider::read_only(PathBuf::default(), false).unwrap()
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/test_utils/mod.rs | crates/storage/provider/src/test_utils/mod.rs | use crate::{
providers::{ProviderNodeTypes, StaticFileProvider},
HashingWriter, ProviderFactory, TrieWriter,
};
use alloy_primitives::B256;
use reth_chainspec::{ChainSpec, MAINNET};
use reth_db::{
test_utils::{create_test_rw_db, create_test_static_files_dir, TempDatabase},
DatabaseEnv,
};
use reth_errors::ProviderResult;
use reth_ethereum_engine_primitives::EthEngineTypes;
use reth_node_types::{NodeTypes, NodeTypesWithDBAdapter};
use reth_primitives_traits::{Account, StorageEntry};
use reth_trie::StateRoot;
use reth_trie_db::DatabaseStateRoot;
use std::sync::Arc;
pub mod blocks;
mod mock;
mod noop;
pub use mock::{ExtendedAccount, MockEthProvider};
pub use noop::NoopProvider;
pub use reth_chain_state::test_utils::TestCanonStateSubscriptions;
/// Mock [`reth_node_types::NodeTypes`] for testing.
pub type MockNodeTypes = reth_node_types::AnyNodeTypesWithEngine<
reth_ethereum_primitives::EthPrimitives,
reth_ethereum_engine_primitives::EthEngineTypes,
reth_chainspec::ChainSpec,
crate::EthStorage,
EthEngineTypes,
>;
/// Mock [`reth_node_types::NodeTypesWithDB`] for testing.
pub type MockNodeTypesWithDB<DB = TempDatabase<DatabaseEnv>> =
NodeTypesWithDBAdapter<MockNodeTypes, Arc<DB>>;
/// Creates test provider factory with mainnet chain spec.
pub fn create_test_provider_factory() -> ProviderFactory<MockNodeTypesWithDB> {
create_test_provider_factory_with_chain_spec(MAINNET.clone())
}
/// Creates test provider factory with provided chain spec.
pub fn create_test_provider_factory_with_chain_spec(
chain_spec: Arc<ChainSpec>,
) -> ProviderFactory<MockNodeTypesWithDB> {
create_test_provider_factory_with_node_types::<MockNodeTypes>(chain_spec)
}
/// Creates test provider factory with provided chain spec and node types.
pub fn create_test_provider_factory_with_node_types<N: NodeTypes>(
chain_spec: Arc<N::ChainSpec>,
) -> ProviderFactory<NodeTypesWithDBAdapter<N, Arc<TempDatabase<DatabaseEnv>>>> {
let (static_dir, _) = create_test_static_files_dir();
let db = create_test_rw_db();
ProviderFactory::new(
db,
chain_spec,
StaticFileProvider::read_write(static_dir.keep()).expect("static file provider"),
)
}
/// Inserts the genesis alloc from the provided chain spec into the trie.
pub fn insert_genesis<N: ProviderNodeTypes<ChainSpec = ChainSpec>>(
provider_factory: &ProviderFactory<N>,
chain_spec: Arc<N::ChainSpec>,
) -> ProviderResult<B256> {
let provider = provider_factory.provider_rw()?;
// Hash accounts and insert them into hashing table.
let genesis = chain_spec.genesis();
let alloc_accounts =
genesis.alloc.iter().map(|(addr, account)| (*addr, Some(Account::from(account))));
provider.insert_account_for_hashing(alloc_accounts).unwrap();
let alloc_storage = genesis.alloc.clone().into_iter().filter_map(|(addr, account)| {
// Only return `Some` if there is storage.
account.storage.map(|storage| {
(
addr,
storage.into_iter().map(|(key, value)| StorageEntry {
key,
value: value.into(),
..Default::default()
}),
)
})
});
provider.insert_storage_for_hashing(alloc_storage)?;
let (root, updates) = StateRoot::from_tx(provider.tx_ref())
.root_with_updates()
.map_err(reth_db::DatabaseError::from)?;
provider.write_trie_updates(&updates).unwrap();
provider.commit()?;
Ok(root)
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/test_utils/mock.rs | crates/storage/provider/src/test_utils/mock.rs | use crate::{
traits::{BlockSource, ReceiptProvider},
AccountReader, BlockHashReader, BlockIdReader, BlockNumReader, BlockReader, BlockReaderIdExt,
ChainSpecProvider, ChangeSetReader, HeaderProvider, ReceiptProviderIdExt, StateProvider,
StateProviderBox, StateProviderFactory, StateReader, StateRootProvider, TransactionVariant,
TransactionsProvider,
};
use alloy_consensus::{constants::EMPTY_ROOT_HASH, transaction::TransactionMeta, BlockHeader};
use alloy_eips::{BlockHashOrNumber, BlockId, BlockNumberOrTag};
use alloy_primitives::{
keccak256, map::HashMap, Address, BlockHash, BlockNumber, Bytes, StorageKey, TxHash, TxNumber,
B256, U256,
};
use parking_lot::Mutex;
use reth_chain_state::{CanonStateNotifications, CanonStateSubscriptions};
use reth_chainspec::{ChainInfo, EthChainSpec};
use reth_db_api::{
mock::{DatabaseMock, TxMock},
models::{AccountBeforeTx, StoredBlockBodyIndices},
};
use reth_ethereum_primitives::EthPrimitives;
use reth_execution_types::ExecutionOutcome;
use reth_primitives_traits::{
Account, Block, BlockBody, Bytecode, GotExpected, NodePrimitives, RecoveredBlock, SealedHeader,
SignedTransaction, SignerRecoverable,
};
use reth_prune_types::PruneModes;
use reth_stages_types::{StageCheckpoint, StageId};
use reth_storage_api::{
BlockBodyIndicesProvider, BytecodeReader, DBProvider, DatabaseProviderFactory,
HashedPostStateProvider, NodePrimitivesProvider, StageCheckpointReader, StateProofProvider,
StorageRootProvider,
};
use reth_storage_errors::provider::{ConsistentViewError, ProviderError, ProviderResult};
use reth_trie::{
updates::TrieUpdates, AccountProof, HashedPostState, HashedStorage, MultiProof,
MultiProofTargets, StorageMultiProof, StorageProof, TrieInput,
};
use std::{
collections::BTreeMap,
fmt::Debug,
ops::{RangeBounds, RangeInclusive},
sync::Arc,
};
use tokio::sync::broadcast;
use revm_state::FlaggedStorage;
/// A mock implementation for Provider interfaces.
#[derive(Debug)]
pub struct MockEthProvider<T: NodePrimitives = EthPrimitives, ChainSpec = reth_chainspec::ChainSpec>
{
    /// Local block store
pub blocks: Arc<Mutex<HashMap<B256, T::Block>>>,
/// Local header store
pub headers: Arc<Mutex<HashMap<B256, <T::Block as Block>::Header>>>,
/// Local receipt store indexed by block number
pub receipts: Arc<Mutex<HashMap<BlockNumber, Vec<T::Receipt>>>>,
/// Local account store
pub accounts: Arc<Mutex<HashMap<Address, ExtendedAccount>>>,
/// Local chain spec
pub chain_spec: Arc<ChainSpec>,
/// Local state roots
pub state_roots: Arc<Mutex<Vec<B256>>>,
/// Local block body indices store
pub block_body_indices: Arc<Mutex<HashMap<BlockNumber, StoredBlockBodyIndices>>>,
tx: TxMock,
prune_modes: Arc<PruneModes>,
}
impl<T: NodePrimitives, ChainSpec> Clone for MockEthProvider<T, ChainSpec>
where
T::Block: Clone,
{
fn clone(&self) -> Self {
Self {
blocks: self.blocks.clone(),
headers: self.headers.clone(),
receipts: self.receipts.clone(),
accounts: self.accounts.clone(),
chain_spec: self.chain_spec.clone(),
state_roots: self.state_roots.clone(),
block_body_indices: self.block_body_indices.clone(),
tx: self.tx.clone(),
prune_modes: self.prune_modes.clone(),
}
}
}
impl<T: NodePrimitives> MockEthProvider<T, reth_chainspec::ChainSpec> {
/// Create a new, empty instance
pub fn new() -> Self {
Self {
blocks: Default::default(),
headers: Default::default(),
receipts: Default::default(),
accounts: Default::default(),
chain_spec: Arc::new(reth_chainspec::ChainSpecBuilder::mainnet().build()),
state_roots: Default::default(),
block_body_indices: Default::default(),
tx: Default::default(),
prune_modes: Default::default(),
}
}
}
impl<T: NodePrimitives, ChainSpec> MockEthProvider<T, ChainSpec> {
/// Add block to local block store
pub fn add_block(&self, hash: B256, block: T::Block) {
self.add_header(hash, block.header().clone());
self.blocks.lock().insert(hash, block);
}
/// Add multiple blocks to local block store
pub fn extend_blocks(&self, iter: impl IntoIterator<Item = (B256, T::Block)>) {
for (hash, block) in iter {
self.add_header(hash, block.header().clone());
self.add_block(hash, block)
}
}
/// Add header to local header store
pub fn add_header(&self, hash: B256, header: <T::Block as Block>::Header) {
self.headers.lock().insert(hash, header);
}
/// Add multiple headers to local header store
pub fn extend_headers(
&self,
iter: impl IntoIterator<Item = (B256, <T::Block as Block>::Header)>,
) {
for (hash, header) in iter {
self.add_header(hash, header)
}
}
/// Add account to local account store
pub fn add_account(&self, address: Address, account: ExtendedAccount) {
self.accounts.lock().insert(address, account);
}
    /// Add multiple accounts to local account store
pub fn extend_accounts(&self, iter: impl IntoIterator<Item = (Address, ExtendedAccount)>) {
for (address, account) in iter {
self.add_account(address, account)
}
}
/// Add receipts to local receipt store
pub fn add_receipts(&self, block_number: BlockNumber, receipts: Vec<T::Receipt>) {
self.receipts.lock().insert(block_number, receipts);
}
/// Add multiple receipts to local receipt store
pub fn extend_receipts(&self, iter: impl IntoIterator<Item = (BlockNumber, Vec<T::Receipt>)>) {
for (block_number, receipts) in iter {
self.add_receipts(block_number, receipts);
}
}
/// Add block body indices to local store
pub fn add_block_body_indices(
&self,
block_number: BlockNumber,
indices: StoredBlockBodyIndices,
) {
self.block_body_indices.lock().insert(block_number, indices);
}
/// Add state root to local state root store
pub fn add_state_root(&self, state_root: B256) {
self.state_roots.lock().push(state_root);
}
/// Set chain spec.
pub fn with_chain_spec<C>(self, chain_spec: C) -> MockEthProvider<T, C> {
MockEthProvider {
blocks: self.blocks,
headers: self.headers,
receipts: self.receipts,
accounts: self.accounts,
chain_spec: Arc::new(chain_spec),
state_roots: self.state_roots,
block_body_indices: self.block_body_indices,
tx: self.tx,
prune_modes: self.prune_modes,
}
}
}
impl Default for MockEthProvider {
fn default() -> Self {
Self::new()
}
}
/// An extended account for local store
#[derive(Debug, Clone)]
pub struct ExtendedAccount {
account: Account,
bytecode: Option<Bytecode>,
storage: HashMap<StorageKey, FlaggedStorage>,
}
impl ExtendedAccount {
/// Create new instance of extended account
pub fn new(nonce: u64, balance: U256) -> Self {
Self {
account: Account { nonce, balance, bytecode_hash: None },
bytecode: None,
storage: Default::default(),
}
}
/// Set bytecode and bytecode hash on the extended account
pub fn with_bytecode(mut self, bytecode: Bytes) -> Self {
let hash = keccak256(&bytecode);
self.account.bytecode_hash = Some(hash);
self.bytecode = Some(Bytecode::new_raw(bytecode));
self
}
/// Add storage to the extended account. If the storage key is already present,
/// the value is updated.
pub fn extend_storage(
mut self,
storage: impl IntoIterator<Item = (StorageKey, FlaggedStorage)>,
) -> Self {
self.storage.extend(storage);
self
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + Clone + 'static> DatabaseProviderFactory
for MockEthProvider<T, ChainSpec>
{
type DB = DatabaseMock;
type Provider = Self;
type ProviderRW = Self;
fn database_provider_ro(&self) -> ProviderResult<Self::Provider> {
Err(ConsistentViewError::Syncing { best_block: GotExpected::new(0, 0) }.into())
}
fn database_provider_rw(&self) -> ProviderResult<Self::ProviderRW> {
Err(ConsistentViewError::Syncing { best_block: GotExpected::new(0, 0) }.into())
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + 'static> DBProvider
for MockEthProvider<T, ChainSpec>
{
type Tx = TxMock;
fn tx_ref(&self) -> &Self::Tx {
&self.tx
}
fn tx_mut(&mut self) -> &mut Self::Tx {
&mut self.tx
}
fn into_tx(self) -> Self::Tx {
self.tx
}
fn prune_modes_ref(&self) -> &PruneModes {
&self.prune_modes
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + Send + Sync + 'static> HeaderProvider
for MockEthProvider<T, ChainSpec>
{
type Header = <T::Block as Block>::Header;
fn header(&self, block_hash: &BlockHash) -> ProviderResult<Option<Self::Header>> {
let lock = self.headers.lock();
Ok(lock.get(block_hash).cloned())
}
fn header_by_number(&self, num: u64) -> ProviderResult<Option<Self::Header>> {
let lock = self.headers.lock();
Ok(lock.values().find(|h| h.number() == num).cloned())
}
fn header_td(&self, hash: &BlockHash) -> ProviderResult<Option<U256>> {
let lock = self.headers.lock();
Ok(lock.get(hash).map(|target| {
lock.values()
.filter(|h| h.number() < target.number())
.fold(target.difficulty(), |td, h| td + h.difficulty())
}))
}
fn header_td_by_number(&self, number: BlockNumber) -> ProviderResult<Option<U256>> {
let lock = self.headers.lock();
let sum = lock
.values()
.filter(|h| h.number() <= number)
.fold(U256::ZERO, |td, h| td + h.difficulty());
Ok(Some(sum))
}
fn headers_range(
&self,
range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Self::Header>> {
let lock = self.headers.lock();
let mut headers: Vec<_> =
lock.values().filter(|header| range.contains(&header.number())).cloned().collect();
headers.sort_by_key(|header| header.number());
Ok(headers)
}
fn sealed_header(
&self,
number: BlockNumber,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
Ok(self.header_by_number(number)?.map(SealedHeader::seal_slow))
}
fn sealed_headers_while(
&self,
range: impl RangeBounds<BlockNumber>,
mut predicate: impl FnMut(&SealedHeader<Self::Header>) -> bool,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
Ok(self
.headers_range(range)?
.into_iter()
.map(SealedHeader::seal_slow)
.take_while(|h| predicate(h))
.collect())
}
}
impl<T, ChainSpec> ChainSpecProvider for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: EthChainSpec + 'static + Debug + Send + Sync,
{
type ChainSpec = ChainSpec;
fn chain_spec(&self) -> Arc<Self::ChainSpec> {
self.chain_spec.clone()
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + 'static> TransactionsProvider
for MockEthProvider<T, ChainSpec>
{
type Transaction = T::SignedTx;
fn transaction_id(&self, tx_hash: TxHash) -> ProviderResult<Option<TxNumber>> {
let lock = self.blocks.lock();
let tx_number = lock
.values()
.flat_map(|block| block.body().transactions())
.position(|tx| *tx.tx_hash() == tx_hash)
.map(|pos| pos as TxNumber);
Ok(tx_number)
}
fn transaction_by_id(&self, id: TxNumber) -> ProviderResult<Option<Self::Transaction>> {
let lock = self.blocks.lock();
let transaction =
lock.values().flat_map(|block| block.body().transactions()).nth(id as usize).cloned();
Ok(transaction)
}
fn transaction_by_id_unhashed(
&self,
id: TxNumber,
) -> ProviderResult<Option<Self::Transaction>> {
let lock = self.blocks.lock();
let transaction =
lock.values().flat_map(|block| block.body().transactions()).nth(id as usize).cloned();
Ok(transaction)
}
fn transaction_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Transaction>> {
Ok(self.blocks.lock().iter().find_map(|(_, block)| {
block.body().transactions_iter().find(|tx| *tx.tx_hash() == hash).cloned()
}))
}
fn transaction_by_hash_with_meta(
&self,
hash: TxHash,
) -> ProviderResult<Option<(Self::Transaction, TransactionMeta)>> {
let lock = self.blocks.lock();
for (block_hash, block) in lock.iter() {
for (index, tx) in block.body().transactions_iter().enumerate() {
if *tx.tx_hash() == hash {
let meta = TransactionMeta {
tx_hash: hash,
index: index as u64,
block_hash: *block_hash,
block_number: block.header().number(),
base_fee: block.header().base_fee_per_gas(),
excess_blob_gas: block.header().excess_blob_gas(),
timestamp: block.header().timestamp(),
};
return Ok(Some((tx.clone(), meta)))
}
}
}
Ok(None)
}
fn transaction_block(&self, id: TxNumber) -> ProviderResult<Option<BlockNumber>> {
let lock = self.blocks.lock();
let mut current_tx_number: TxNumber = 0;
for block in lock.values() {
if current_tx_number + (block.body().transaction_count() as TxNumber) > id {
return Ok(Some(block.header().number()))
}
current_tx_number += block.body().transaction_count() as TxNumber;
}
Ok(None)
}
fn transactions_by_block(
&self,
id: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Transaction>>> {
Ok(self.block(id)?.map(|b| b.body().clone_transactions()))
}
fn transactions_by_block_range(
&self,
range: impl RangeBounds<alloy_primitives::BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Transaction>>> {
// init btreemap so we can return in order
let mut map = BTreeMap::new();
for (_, block) in self.blocks.lock().iter() {
if range.contains(&block.header().number()) {
map.insert(block.header().number(), block.body().clone_transactions());
}
}
Ok(map.into_values().collect())
}
fn transactions_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Transaction>> {
let lock = self.blocks.lock();
let transactions = lock
.values()
.flat_map(|block| block.body().transactions())
.enumerate()
.filter(|&(tx_number, _)| range.contains(&(tx_number as TxNumber)))
.map(|(_, tx)| tx.clone())
.collect();
Ok(transactions)
}
fn senders_by_tx_range(
&self,
range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Address>> {
let lock = self.blocks.lock();
let transactions = lock
.values()
.flat_map(|block| block.body().transactions())
.enumerate()
.filter_map(|(tx_number, tx)| {
if range.contains(&(tx_number as TxNumber)) {
tx.recover_signer().ok()
} else {
None
}
})
.collect();
Ok(transactions)
}
fn transaction_sender(&self, id: TxNumber) -> ProviderResult<Option<Address>> {
self.transaction_by_id(id).map(|tx_option| tx_option.map(|tx| tx.recover_signer().unwrap()))
}
}
impl<T, ChainSpec> ReceiptProvider for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: Send + Sync + 'static,
{
type Receipt = T::Receipt;
fn receipt(&self, _id: TxNumber) -> ProviderResult<Option<Self::Receipt>> {
Ok(None)
}
fn receipt_by_hash(&self, _hash: TxHash) -> ProviderResult<Option<Self::Receipt>> {
Ok(None)
}
fn receipts_by_block(
&self,
block: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Receipt>>> {
let receipts_lock = self.receipts.lock();
match block {
BlockHashOrNumber::Hash(hash) => {
// Find block number by hash first
let headers_lock = self.headers.lock();
if let Some(header) = headers_lock.get(&hash) {
Ok(receipts_lock.get(&header.number()).cloned())
} else {
Ok(None)
}
}
BlockHashOrNumber::Number(number) => Ok(receipts_lock.get(&number).cloned()),
}
}
fn receipts_by_tx_range(
&self,
_range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Receipt>> {
Ok(vec![])
}
fn receipts_by_block_range(
&self,
block_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Receipt>>> {
let receipts_lock = self.receipts.lock();
let headers_lock = self.headers.lock();
let mut result = Vec::new();
for block_number in block_range {
// Only include blocks that exist in headers (i.e., have been added to the provider)
if headers_lock.values().any(|header| header.number() == block_number) {
if let Some(block_receipts) = receipts_lock.get(&block_number) {
result.push(block_receipts.clone());
} else {
// If block exists but no receipts found, add empty vec
result.push(vec![]);
}
}
}
Ok(result)
}
}
impl<T, ChainSpec> ReceiptProviderIdExt for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
Self: ReceiptProvider + BlockIdReader,
{
}
impl<T: NodePrimitives, ChainSpec: Send + Sync + 'static> BlockHashReader
for MockEthProvider<T, ChainSpec>
{
fn block_hash(&self, number: u64) -> ProviderResult<Option<B256>> {
let lock = self.headers.lock();
let hash =
lock.iter().find_map(|(hash, header)| (header.number() == number).then_some(*hash));
Ok(hash)
}
fn canonical_hashes_range(
&self,
start: BlockNumber,
end: BlockNumber,
) -> ProviderResult<Vec<B256>> {
let lock = self.headers.lock();
let mut hashes: Vec<_> =
lock.iter().filter(|(_, header)| (start..end).contains(&header.number())).collect();
hashes.sort_by_key(|(_, header)| header.number());
Ok(hashes.into_iter().map(|(hash, _)| *hash).collect())
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync + 'static> BlockNumReader
for MockEthProvider<T, ChainSpec>
{
fn chain_info(&self) -> ProviderResult<ChainInfo> {
let best_block_number = self.best_block_number()?;
let lock = self.headers.lock();
Ok(lock
.iter()
.find(|(_, header)| header.number() == best_block_number)
.map(|(hash, header)| ChainInfo { best_hash: *hash, best_number: header.number() })
.unwrap_or_default())
}
fn best_block_number(&self) -> ProviderResult<BlockNumber> {
let lock = self.headers.lock();
lock.iter()
.max_by_key(|h| h.1.number())
.map(|(_, header)| header.number())
.ok_or(ProviderError::BestBlockNotFound)
}
fn last_block_number(&self) -> ProviderResult<BlockNumber> {
self.best_block_number()
}
fn block_number(&self, hash: B256) -> ProviderResult<Option<alloy_primitives::BlockNumber>> {
let lock = self.headers.lock();
Ok(lock.get(&hash).map(|header| header.number()))
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + Send + Sync + 'static> BlockIdReader
for MockEthProvider<T, ChainSpec>
{
fn pending_block_num_hash(&self) -> ProviderResult<Option<alloy_eips::BlockNumHash>> {
Ok(None)
}
fn safe_block_num_hash(&self) -> ProviderResult<Option<alloy_eips::BlockNumHash>> {
Ok(None)
}
fn finalized_block_num_hash(&self) -> ProviderResult<Option<alloy_eips::BlockNumHash>> {
Ok(None)
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + Send + Sync + 'static> BlockReader
for MockEthProvider<T, ChainSpec>
{
type Block = T::Block;
fn find_block_by_hash(
&self,
hash: B256,
_source: BlockSource,
) -> ProviderResult<Option<Self::Block>> {
self.block(hash.into())
}
fn block(&self, id: BlockHashOrNumber) -> ProviderResult<Option<Self::Block>> {
let lock = self.blocks.lock();
match id {
BlockHashOrNumber::Hash(hash) => Ok(lock.get(&hash).cloned()),
BlockHashOrNumber::Number(num) => {
Ok(lock.values().find(|b| b.header().number() == num).cloned())
}
}
}
fn pending_block(&self) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Ok(None)
}
fn pending_block_and_receipts(
&self,
) -> ProviderResult<Option<(RecoveredBlock<Self::Block>, Vec<T::Receipt>)>> {
Ok(None)
}
fn recovered_block(
&self,
_id: BlockHashOrNumber,
_transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Ok(None)
}
fn sealed_block_with_senders(
&self,
_id: BlockHashOrNumber,
_transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Ok(None)
}
fn block_range(&self, range: RangeInclusive<BlockNumber>) -> ProviderResult<Vec<Self::Block>> {
let lock = self.blocks.lock();
let mut blocks: Vec<_> = lock
.values()
.filter(|block| range.contains(&block.header().number()))
.cloned()
.collect();
blocks.sort_by_key(|block| block.header().number());
Ok(blocks)
}
fn block_with_senders_range(
&self,
_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
Ok(vec![])
}
fn recovered_block_range(
&self,
_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
Ok(vec![])
}
}
impl<T, ChainSpec> BlockReaderIdExt for MockEthProvider<T, ChainSpec>
where
ChainSpec: EthChainSpec + Send + Sync + 'static,
T: NodePrimitives,
{
fn block_by_id(&self, id: BlockId) -> ProviderResult<Option<T::Block>> {
match id {
BlockId::Number(num) => self.block_by_number_or_tag(num),
BlockId::Hash(hash) => self.block_by_hash(hash.block_hash),
}
}
fn sealed_header_by_id(
&self,
id: BlockId,
) -> ProviderResult<Option<SealedHeader<<T::Block as Block>::Header>>> {
self.header_by_id(id)?.map_or_else(|| Ok(None), |h| Ok(Some(SealedHeader::seal_slow(h))))
}
fn header_by_id(&self, id: BlockId) -> ProviderResult<Option<<T::Block as Block>::Header>> {
match self.block_by_id(id)? {
None => Ok(None),
Some(block) => Ok(Some(block.into_header())),
}
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> AccountReader for MockEthProvider<T, ChainSpec> {
fn basic_account(&self, address: &Address) -> ProviderResult<Option<Account>> {
Ok(self.accounts.lock().get(address).cloned().map(|a| a.account))
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> StageCheckpointReader
for MockEthProvider<T, ChainSpec>
{
fn get_stage_checkpoint(&self, _id: StageId) -> ProviderResult<Option<StageCheckpoint>> {
Ok(None)
}
fn get_stage_checkpoint_progress(&self, _id: StageId) -> ProviderResult<Option<Vec<u8>>> {
Ok(None)
}
fn get_all_checkpoints(&self) -> ProviderResult<Vec<(String, StageCheckpoint)>> {
Ok(vec![])
}
}
impl<T, ChainSpec> StateRootProvider for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: Send + Sync,
{
fn state_root(&self, _state: HashedPostState) -> ProviderResult<B256> {
Ok(self.state_roots.lock().pop().unwrap_or_default())
}
fn state_root_from_nodes(&self, _input: TrieInput) -> ProviderResult<B256> {
Ok(self.state_roots.lock().pop().unwrap_or_default())
}
fn state_root_with_updates(
&self,
_state: HashedPostState,
) -> ProviderResult<(B256, TrieUpdates)> {
let state_root = self.state_roots.lock().pop().unwrap_or_default();
Ok((state_root, Default::default()))
}
fn state_root_from_nodes_with_updates(
&self,
_input: TrieInput,
) -> ProviderResult<(B256, TrieUpdates)> {
let state_root = self.state_roots.lock().pop().unwrap_or_default();
Ok((state_root, Default::default()))
}
}
impl<T, ChainSpec> StorageRootProvider for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: Send + Sync,
{
fn storage_root(
&self,
_address: Address,
_hashed_storage: HashedStorage,
) -> ProviderResult<B256> {
Ok(EMPTY_ROOT_HASH)
}
fn storage_proof(
&self,
_address: Address,
slot: B256,
_hashed_storage: HashedStorage,
) -> ProviderResult<reth_trie::StorageProof> {
Ok(StorageProof::new(slot))
}
fn storage_multiproof(
&self,
_address: Address,
_slots: &[B256],
_hashed_storage: HashedStorage,
) -> ProviderResult<StorageMultiProof> {
Ok(StorageMultiProof::empty())
}
}
impl<T, ChainSpec> StateProofProvider for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: Send + Sync,
{
fn proof(
&self,
_input: TrieInput,
address: Address,
_slots: &[B256],
) -> ProviderResult<AccountProof> {
Ok(AccountProof::new(address))
}
fn multiproof(
&self,
_input: TrieInput,
_targets: MultiProofTargets,
) -> ProviderResult<MultiProof> {
Ok(MultiProof::default())
}
fn witness(&self, _input: TrieInput, _target: HashedPostState) -> ProviderResult<Vec<Bytes>> {
Ok(Vec::default())
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + 'static> HashedPostStateProvider
for MockEthProvider<T, ChainSpec>
{
fn hashed_post_state(&self, _state: &revm_database::BundleState) -> HashedPostState {
HashedPostState::default()
}
}
impl<T, ChainSpec> StateProvider for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: EthChainSpec + Send + Sync + 'static,
{
fn storage(
&self,
account: Address,
storage_key: StorageKey,
) -> ProviderResult<Option<FlaggedStorage>> {
let lock = self.accounts.lock();
Ok(lock.get(&account).and_then(|account| account.storage.get(&storage_key)).copied())
}
}
impl<T, ChainSpec> BytecodeReader for MockEthProvider<T, ChainSpec>
where
T: NodePrimitives,
ChainSpec: Send + Sync,
{
fn bytecode_by_hash(&self, code_hash: &B256) -> ProviderResult<Option<Bytecode>> {
let lock = self.accounts.lock();
Ok(lock.values().find_map(|account| {
match (account.account.bytecode_hash.as_ref(), account.bytecode.as_ref()) {
(Some(bytecode_hash), Some(bytecode)) if bytecode_hash == code_hash => {
Some(bytecode.clone())
}
_ => None,
}
}))
}
}
impl<T: NodePrimitives, ChainSpec: EthChainSpec + Send + Sync + 'static> StateProviderFactory
for MockEthProvider<T, ChainSpec>
{
fn latest(&self) -> ProviderResult<StateProviderBox> {
Ok(Box::new(self.clone()))
}
fn state_by_block_number_or_tag(
&self,
number_or_tag: BlockNumberOrTag,
) -> ProviderResult<StateProviderBox> {
match number_or_tag {
BlockNumberOrTag::Latest => self.latest(),
BlockNumberOrTag::Finalized => {
// we can only get the finalized state by hash, not by num
let hash =
self.finalized_block_hash()?.ok_or(ProviderError::FinalizedBlockNotFound)?;
// only look at historical state
self.history_by_block_hash(hash)
}
BlockNumberOrTag::Safe => {
// we can only get the safe state by hash, not by num
let hash = self.safe_block_hash()?.ok_or(ProviderError::SafeBlockNotFound)?;
self.history_by_block_hash(hash)
}
BlockNumberOrTag::Earliest => {
self.history_by_block_number(self.earliest_block_number()?)
}
BlockNumberOrTag::Pending => self.pending(),
BlockNumberOrTag::Number(num) => self.history_by_block_number(num),
}
}
fn history_by_block_number(&self, _block: BlockNumber) -> ProviderResult<StateProviderBox> {
Ok(Box::new(self.clone()))
}
fn history_by_block_hash(&self, _block: BlockHash) -> ProviderResult<StateProviderBox> {
Ok(Box::new(self.clone()))
}
fn state_by_block_hash(&self, _block: BlockHash) -> ProviderResult<StateProviderBox> {
Ok(Box::new(self.clone()))
}
fn pending(&self) -> ProviderResult<StateProviderBox> {
Ok(Box::new(self.clone()))
}
fn pending_state_by_hash(&self, _block_hash: B256) -> ProviderResult<Option<StateProviderBox>> {
Ok(Some(Box::new(self.clone())))
}
fn maybe_pending(&self) -> ProviderResult<Option<StateProviderBox>> {
Ok(Some(Box::new(self.clone())))
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> BlockBodyIndicesProvider
for MockEthProvider<T, ChainSpec>
{
fn block_body_indices(&self, num: u64) -> ProviderResult<Option<StoredBlockBodyIndices>> {
Ok(self.block_body_indices.lock().get(&num).copied())
}
fn block_body_indices_range(
&self,
_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<StoredBlockBodyIndices>> {
Ok(vec![])
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> ChangeSetReader for MockEthProvider<T, ChainSpec> {
fn account_block_changeset(
&self,
_block_number: BlockNumber,
) -> ProviderResult<Vec<AccountBeforeTx>> {
Ok(Vec::default())
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> StateReader for MockEthProvider<T, ChainSpec> {
type Receipt = T::Receipt;
fn get_state(
&self,
_block: BlockNumber,
) -> ProviderResult<Option<ExecutionOutcome<Self::Receipt>>> {
Ok(None)
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> CanonStateSubscriptions
for MockEthProvider<T, ChainSpec>
{
fn subscribe_to_canonical_state(&self) -> CanonStateNotifications<T> {
broadcast::channel(1).1
}
}
impl<T: NodePrimitives, ChainSpec: Send + Sync> NodePrimitivesProvider
for MockEthProvider<T, ChainSpec>
{
type Primitives = T;
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_consensus::Header;
use alloy_primitives::BlockHash;
use reth_ethereum_primitives::Receipt;
#[test]
fn test_mock_provider_receipts() {
let provider = MockEthProvider::<EthPrimitives>::new();
let block_hash = BlockHash::random();
let block_number = 1u64;
let header = Header { number: block_number, ..Default::default() };
let receipt1 = Receipt { cumulative_gas_used: 21000, success: true, ..Default::default() };
let receipt2 = Receipt { cumulative_gas_used: 42000, success: true, ..Default::default() };
let receipts = vec![receipt1, receipt2];
provider.add_header(block_hash, header);
provider.add_receipts(block_number, receipts.clone());
let result = provider.receipts_by_block(block_hash.into()).unwrap();
assert_eq!(result, Some(receipts.clone()));
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/provider/src/test_utils/blocks.rs | crates/storage/provider/src/test_utils/blocks.rs | //! Dummy blocks and data for tests
use crate::{DBProvider, DatabaseProviderRW, ExecutionOutcome};
use alloy_consensus::{Header, TxLegacy, EMPTY_OMMER_ROOT_HASH};
use alloy_eips::eip4895::{Withdrawal, Withdrawals};
use alloy_primitives::{
    b256, hex_literal::hex, map::HashMap, Address, BlockNumber, Bytes, Log, Signature, TxKind,
    B256, U256,
};
use reth_db_api::{database::Database, models::StoredBlockBodyIndices, tables};
use reth_ethereum_primitives::{BlockBody, Receipt, Transaction, TransactionSigned, TxType};
use reth_node_types::NodeTypes;
use reth_primitives_traits::{Account, RecoveredBlock, SealedBlock, SealedHeader};
use reth_trie::root::{state_root_unhashed, storage_root_unhashed};
use revm_database::BundleState;
use revm_state::{AccountInfo, FlaggedStorage};
use std::{str::FromStr, sync::LazyLock};
/// Assert genesis block
pub fn assert_genesis_block<DB: Database, N: NodeTypes>(
provider: &DatabaseProviderRW<DB, N>,
g: SealedBlock<reth_ethereum_primitives::Block>,
) {
let n = g.number;
let h = B256::ZERO;
let tx = provider;
    // the genesis entries should be present; all other tables should be empty
assert_eq!(tx.table::<tables::Headers>().unwrap(), vec![(g.number, g.header().clone())]);
assert_eq!(tx.table::<tables::HeaderNumbers>().unwrap(), vec![(h, n)]);
assert_eq!(tx.table::<tables::CanonicalHeaders>().unwrap(), vec![(n, h)]);
assert_eq!(
tx.table::<tables::HeaderTerminalDifficulties>().unwrap(),
vec![(n, g.difficulty.into())]
);
assert_eq!(
tx.table::<tables::BlockBodyIndices>().unwrap(),
vec![(0, StoredBlockBodyIndices::default())]
);
assert_eq!(tx.table::<tables::BlockOmmers>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::BlockWithdrawals>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::Transactions>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::TransactionBlocks>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::TransactionHashNumbers>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::Receipts>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::PlainAccountState>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::PlainStorageState>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::AccountsHistory>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::StoragesHistory>().unwrap(), vec![]);
// TODO check after this gets done: https://github.com/paradigmxyz/reth/issues/1588
// Bytecodes are not reverted assert_eq!(tx.table::<tables::Bytecodes>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::AccountChangeSets>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::StorageChangeSets>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::HashedAccounts>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::HashedStorages>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::AccountsTrie>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::StoragesTrie>().unwrap(), vec![]);
assert_eq!(tx.table::<tables::TransactionSenders>().unwrap(), vec![]);
// StageCheckpoints is not updated in tests
}
pub(crate) static TEST_BLOCK: LazyLock<SealedBlock<reth_ethereum_primitives::Block>> =
LazyLock::new(|| {
SealedBlock::from_sealed_parts(
SealedHeader::new(
Header {
parent_hash: hex!(
"c86e8cc0310ae7c531c758678ddbfd16fc51c8cef8cec650b032de9869e8b94f"
)
.into(),
ommers_hash: EMPTY_OMMER_ROOT_HASH,
beneficiary: hex!("2adc25665018aa1fe0e6bc666dac8fc2697ff9ba").into(),
state_root: hex!(
"50554882fbbda2c2fd93fdc466db9946ea262a67f7a76cc169e714f105ab583d"
)
.into(),
transactions_root: hex!(
"0967f09ef1dfed20c0eacfaa94d5cd4002eda3242ac47eae68972d07b106d192"
)
.into(),
receipts_root: hex!(
"e3c8b47fbfc94667ef4cceb17e5cc21e3b1eebd442cebb27f07562b33836290d"
)
.into(),
difficulty: U256::from(131_072),
number: 1,
gas_limit: 1_000_000,
gas_used: 14_352,
timestamp: 1_000,
..Default::default()
},
hex!("cf7b274520720b50e6a4c3e5c4d553101f44945396827705518ce17cb7219a42").into(),
),
BlockBody {
transactions: vec![TransactionSigned::new_unhashed(
Transaction::Legacy(TxLegacy {
gas_price: 10,
gas_limit: 400_000,
to: TxKind::Call(hex!("095e7baea6a6c7c4c2dfeb977efac326af552d87").into()),
..Default::default()
}),
Signature::new(
U256::from_str(
"51983300959770368863831494747186777928121405155922056726144551509338672451120",
)
.unwrap(),
U256::from_str(
"29056683545955299640297374067888344259176096769870751649153779895496107008675",
)
.unwrap(),
false,
)
)],
..Default::default()
},
)
});
/// Test chain with genesis, blocks, and execution results
/// that have valid changesets.
#[derive(Debug)]
pub struct BlockchainTestData {
/// Genesis
pub genesis: SealedBlock<reth_ethereum_primitives::Block>,
    /// Blocks with their execution results
pub blocks: Vec<(RecoveredBlock<reth_ethereum_primitives::Block>, ExecutionOutcome)>,
}
impl BlockchainTestData {
    /// Create test data with five connected blocks, starting at the given first block number.
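    ///
    /// A usage sketch (field and method names as defined on this type):
    ///
    /// ```rust,ignore
    /// let data = BlockchainTestData::default_from_number(10);
    /// assert_eq!(data.blocks.len(), 5);
    /// assert_eq!(data.blocks[0].0.number, 10);
    /// ```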
pub fn default_from_number(first: BlockNumber) -> Self {
let one = block1(first);
let mut extended_execution_outcome = one.1.clone();
let two = block2(first + 1, one.0.hash(), &extended_execution_outcome);
extended_execution_outcome.extend(two.1.clone());
let three = block3(first + 2, two.0.hash(), &extended_execution_outcome);
extended_execution_outcome.extend(three.1.clone());
let four = block4(first + 3, three.0.hash(), &extended_execution_outcome);
extended_execution_outcome.extend(four.1.clone());
let five = block5(first + 4, four.0.hash(), &extended_execution_outcome);
Self { genesis: genesis(), blocks: vec![one, two, three, four, five] }
}
}
impl Default for BlockchainTestData {
    fn default() -> Self {
        // equivalent to building the five connected blocks starting at number 1
        Self::default_from_number(1)
    }
}
/// Genesis block
pub fn genesis() -> SealedBlock<reth_ethereum_primitives::Block> {
SealedBlock::from_sealed_parts(
SealedHeader::new(
Header { number: 0, difficulty: U256::from(1), ..Default::default() },
B256::ZERO,
),
Default::default(),
)
}
fn bundle_state_root(execution_outcome: &ExecutionOutcome) -> B256 {
state_root_unhashed(execution_outcome.bundle_accounts_iter().filter_map(
|(address, account)| {
account.info.as_ref().map(|info| {
(
address,
Account::from(info).into_trie_account(storage_root_unhashed(
account
.storage
.iter()
.filter(|(_, value)| !value.present_value.is_zero())
.map(|(slot, value)| ((*slot).into(), value.present_value)),
)),
)
})
},
))
}
/// Block one that points to genesis
fn block1(
number: BlockNumber,
) -> (RecoveredBlock<reth_ethereum_primitives::Block>, ExecutionOutcome) {
// block changes
let account1: Address = [0x60; 20].into();
let account2: Address = [0x61; 20].into();
let slot = U256::from(5);
let info = AccountInfo { nonce: 1, balance: U256::from(10), ..Default::default() };
let execution_outcome = ExecutionOutcome::new(
BundleState::builder(number..=number)
.state_present_account_info(account1, info.clone())
.revert_account_info(number, account1, Some(None))
.state_present_account_info(account2, info)
.revert_account_info(number, account2, Some(None))
.state_storage(
account1,
HashMap::from_iter([(
slot,
(FlaggedStorage::ZERO, FlaggedStorage::new_from_value(10)),
)]),
)
.build(),
vec![vec![Receipt {
tx_type: TxType::Eip2930,
success: true,
cumulative_gas_used: 300,
logs: vec![Log::new_unchecked(
Address::new([0x60; 20]),
vec![B256::with_last_byte(1), B256::with_last_byte(2)],
Bytes::default(),
)],
}]],
number,
Vec::new(),
);
let state_root = bundle_state_root(&execution_outcome);
assert_eq!(
state_root,
b256!("0x5d035ccb3e75a9057452ff060b773b213ec1fc353426174068edfc3971a0b6bd")
);
let (mut header, mut body) = TEST_BLOCK.clone().split_header_body();
body.withdrawals = Some(Withdrawals::new(vec![Withdrawal::default()]));
header.number = number;
header.state_root = state_root;
header.parent_hash = B256::ZERO;
let block = SealedBlock::seal_parts(header, body);
(RecoveredBlock::new_sealed(block, vec![Address::new([0x30; 20])]), execution_outcome)
}
/// Block two that points to block 1
fn block2(
number: BlockNumber,
parent_hash: B256,
prev_execution_outcome: &ExecutionOutcome,
) -> (RecoveredBlock<reth_ethereum_primitives::Block>, ExecutionOutcome) {
// block changes
let account: Address = [0x60; 20].into();
let slot = U256::from(5);
let execution_outcome = ExecutionOutcome::new(
BundleState::builder(number..=number)
.state_present_account_info(
account,
AccountInfo { nonce: 3, balance: U256::from(20), ..Default::default() },
)
.state_storage(
account,
HashMap::from_iter([(
slot,
(FlaggedStorage::ZERO, FlaggedStorage::new_from_value(15)),
)]),
)
.revert_account_info(
number,
account,
Some(Some(AccountInfo { nonce: 1, balance: U256::from(10), ..Default::default() })),
)
.revert_storage(
number,
account,
Vec::from([(slot, FlaggedStorage::new_from_value(10))]),
)
.build(),
vec![vec![Receipt {
tx_type: TxType::Eip1559,
success: false,
cumulative_gas_used: 400,
logs: vec![Log::new_unchecked(
Address::new([0x61; 20]),
vec![B256::with_last_byte(3), B256::with_last_byte(4)],
Bytes::default(),
)],
}]],
number,
Vec::new(),
);
let mut extended = prev_execution_outcome.clone();
extended.extend(execution_outcome.clone());
let state_root = bundle_state_root(&extended);
assert_eq!(
state_root,
b256!("0x90101a13dd059fa5cca99ed93d1dc23657f63626c5b8f993a2ccbdf7446b64f8")
);
let (mut header, mut body) = TEST_BLOCK.clone().split_header_body();
body.withdrawals = Some(Withdrawals::new(vec![Withdrawal::default()]));
header.number = number;
header.state_root = state_root;
// parent_hash points to block1 hash
header.parent_hash = parent_hash;
let block = SealedBlock::seal_parts(header, body);
(RecoveredBlock::new_sealed(block, vec![Address::new([0x31; 20])]), execution_outcome)
}
/// Block three that points to block 2
fn block3(
number: BlockNumber,
parent_hash: B256,
prev_execution_outcome: &ExecutionOutcome,
) -> (RecoveredBlock<reth_ethereum_primitives::Block>, ExecutionOutcome) {
let address_range = 1..=20;
let slot_range = 1..=100;
let mut bundle_state_builder = BundleState::builder(number..=number);
for idx in address_range {
let address = Address::with_last_byte(idx);
bundle_state_builder = bundle_state_builder
.state_present_account_info(
address,
AccountInfo { nonce: 1, balance: U256::from(idx), ..Default::default() },
)
.state_storage(
address,
HashMap::from_iter(slot_range.clone().map(|slot| {
(U256::from(slot), (FlaggedStorage::ZERO, FlaggedStorage::new_from_value(slot)))
})),
)
.revert_account_info(number, address, Some(None))
.revert_storage(number, address, Vec::new());
}
let execution_outcome = ExecutionOutcome::new(
bundle_state_builder.build(),
vec![vec![Receipt {
tx_type: TxType::Eip1559,
success: true,
cumulative_gas_used: 400,
logs: vec![Log::new_unchecked(
Address::new([0x61; 20]),
vec![B256::with_last_byte(3), B256::with_last_byte(4)],
Bytes::default(),
)],
}]],
number,
Vec::new(),
);
let mut extended = prev_execution_outcome.clone();
extended.extend(execution_outcome.clone());
let state_root = bundle_state_root(&extended);
let (mut header, mut body) = TEST_BLOCK.clone().split_header_body();
body.withdrawals = Some(Withdrawals::new(vec![Withdrawal::default()]));
header.number = number;
header.state_root = state_root;
    // parent_hash points to block2 hash
header.parent_hash = parent_hash;
let block = SealedBlock::seal_parts(header, body);
(RecoveredBlock::new_sealed(block, vec![Address::new([0x31; 20])]), execution_outcome)
}
/// Block four that points to block 3
fn block4(
number: BlockNumber,
parent_hash: B256,
prev_execution_outcome: &ExecutionOutcome,
) -> (RecoveredBlock<reth_ethereum_primitives::Block>, ExecutionOutcome) {
let address_range = 1..=20;
let slot_range = 1..=100;
let mut bundle_state_builder = BundleState::builder(number..=number);
for idx in address_range {
let address = Address::with_last_byte(idx);
        // increase the balance of every even account and destroy every odd one
bundle_state_builder = if idx.is_multiple_of(2) {
bundle_state_builder
.state_present_account_info(
address,
AccountInfo { nonce: 1, balance: U256::from(idx * 2), ..Default::default() },
)
.state_storage(
address,
HashMap::from_iter(slot_range.clone().map(|slot| {
(
U256::from(slot),
(
FlaggedStorage::new_from_value(slot),
FlaggedStorage::new_from_value(slot * 2),
),
)
})),
)
} else {
bundle_state_builder.state_address(address).state_storage(
address,
HashMap::from_iter(slot_range.clone().map(|slot| {
(U256::from(slot), (FlaggedStorage::new_from_value(slot), FlaggedStorage::ZERO))
})),
)
};
// record previous account info
bundle_state_builder = bundle_state_builder
.revert_account_info(
number,
address,
Some(Some(AccountInfo {
nonce: 1,
balance: U256::from(idx),
..Default::default()
})),
)
.revert_storage(
number,
address,
Vec::from_iter(
slot_range
.clone()
.map(|slot| (U256::from(slot), FlaggedStorage::new_from_value(slot))),
),
);
}
let execution_outcome = ExecutionOutcome::new(
bundle_state_builder.build(),
vec![vec![Receipt {
tx_type: TxType::Eip1559,
success: true,
cumulative_gas_used: 400,
logs: vec![Log::new_unchecked(
Address::new([0x61; 20]),
vec![B256::with_last_byte(3), B256::with_last_byte(4)],
Bytes::default(),
)],
}]],
number,
Vec::new(),
);
let mut extended = prev_execution_outcome.clone();
extended.extend(execution_outcome.clone());
let state_root = bundle_state_root(&extended);
let (mut header, mut body) = TEST_BLOCK.clone().split_header_body();
body.withdrawals = Some(Withdrawals::new(vec![Withdrawal::default()]));
header.number = number;
header.state_root = state_root;
    // parent_hash points to block3 hash
header.parent_hash = parent_hash;
let block = SealedBlock::seal_parts(header, body);
(RecoveredBlock::new_sealed(block, vec![Address::new([0x31; 20])]), execution_outcome)
}
/// Block five that points to block 4
fn block5(
number: BlockNumber,
parent_hash: B256,
prev_execution_outcome: &ExecutionOutcome,
) -> (RecoveredBlock<reth_ethereum_primitives::Block>, ExecutionOutcome) {
let address_range = 1..=20;
let slot_range = 1..=100;
let mut bundle_state_builder = BundleState::builder(number..=number);
for idx in address_range {
let address = Address::with_last_byte(idx);
        // update every even account and recreate every odd one with only half of its slots
bundle_state_builder = bundle_state_builder
.state_present_account_info(
address,
AccountInfo { nonce: 1, balance: U256::from(idx * 2), ..Default::default() },
)
.state_storage(
address,
HashMap::from_iter(slot_range.clone().take(50).map(|slot| {
(
U256::from(slot),
(
FlaggedStorage::new_from_value(slot),
FlaggedStorage::new_from_value(slot * 4),
),
)
})),
);
bundle_state_builder = if idx.is_multiple_of(2) {
bundle_state_builder
.revert_account_info(
number,
address,
Some(Some(AccountInfo {
nonce: 1,
balance: U256::from(idx * 2),
..Default::default()
})),
)
.revert_storage(
number,
address,
slot_range
.clone()
.map(|slot| (U256::from(slot), FlaggedStorage::new_from_value(slot * 2)))
.collect(),
)
} else {
bundle_state_builder.revert_address(number, address)
};
}
let execution_outcome = ExecutionOutcome::new(
bundle_state_builder.build(),
vec![vec![Receipt {
tx_type: TxType::Eip1559,
success: true,
cumulative_gas_used: 400,
logs: vec![Log::new_unchecked(
Address::new([0x61; 20]),
vec![B256::with_last_byte(3), B256::with_last_byte(4)],
Bytes::default(),
)],
}]],
number,
Vec::new(),
);
let mut extended = prev_execution_outcome.clone();
extended.extend(execution_outcome.clone());
let state_root = bundle_state_root(&extended);
let (mut header, mut body) = TEST_BLOCK.clone().split_header_body();
body.withdrawals = Some(Withdrawals::new(vec![Withdrawal::default()]));
header.number = number;
header.state_root = state_root;
    // parent_hash points to block4 hash
header.parent_hash = parent_hash;
let block = SealedBlock::seal_parts(header, body);
(RecoveredBlock::new_sealed(block, vec![Address::new([0x31; 20])]), execution_outcome)
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/rpc-provider/src/lib.rs | crates/storage/rpc-provider/src/lib.rs | //! # RPC Blockchain Provider for Reth
//!
//! This crate provides an RPC-based implementation of reth's `StateProviderFactory` and related
//! traits that fetches blockchain data via RPC instead of from a local database.
//!
//! Similar to the [`BlockchainProvider`](../../provider/src/providers/blockchain_provider.rs)
//! which provides access to local blockchain data, this crate offers the same functionality but for
//! remote blockchain access via RPC.
//!
//! Originally created by [cakevm](https://github.com/cakevm/alloy-reth-provider).
//!
//! ## Features
//!
//! - Implements `StateProviderFactory` for remote RPC state access
//! - Supports Ethereum and Optimism networks
//! - Useful for testing without requiring a full database
//! - Can be used with reth ExEx (Execution Extensions) for testing
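//!
//! ## Example
//!
//! A minimal sketch of wiring the provider to a remote endpoint. The
//! `ProviderBuilder` call and `EthereumNode` type parameter are illustrative
//! assumptions; substitute your own Alloy provider and node types:
//!
//! ```rust,ignore
//! // Connect an Alloy provider to a remote node (illustrative endpoint).
//! let provider = ProviderBuilder::new().connect_http("http://localhost:8545".parse()?);
//!
//! // Wrap it so it can serve as a `StateProviderFactory`.
//! let rpc_provider = RpcBlockchainProvider::<_, EthereumNode>::new(provider);
//!
//! // Fetch a state provider for the latest block via RPC.
//! let state = rpc_provider.latest()?;
//! ```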
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/paradigmxyz/reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
use alloy_consensus::{constants::KECCAK_EMPTY, BlockHeader};
use alloy_eips::{BlockHashOrNumber, BlockNumberOrTag};
use alloy_network::{primitives::HeaderResponse, BlockResponse};
use alloy_primitives::{
map::HashMap, Address, BlockHash, BlockNumber, FlaggedStorage, StorageKey, TxHash, TxNumber,
B256, U256,
};
use alloy_provider::{ext::DebugApi, network::Network, Provider};
use alloy_rpc_types::{AccountInfo, BlockId};
use alloy_rpc_types_engine::ForkchoiceState;
use parking_lot::RwLock;
use reth_chainspec::{ChainInfo, ChainSpecProvider};
use reth_db_api::{
mock::{DatabaseMock, TxMock},
models::StoredBlockBodyIndices,
};
use reth_errors::{ProviderError, ProviderResult};
use reth_node_types::{
Block, BlockBody, BlockTy, HeaderTy, NodeTypes, PrimitivesTy, ReceiptTy, TxTy,
};
use reth_primitives::{Account, Bytecode, RecoveredBlock, SealedHeader, TransactionMeta};
use reth_provider::{
AccountReader, BlockHashReader, BlockIdReader, BlockNumReader, BlockReader, BytecodeReader,
CanonChainTracker, CanonStateNotification, CanonStateNotifications, CanonStateSubscriptions,
ChainStateBlockReader, ChainStateBlockWriter, ChangeSetReader, DatabaseProviderFactory,
HeaderProvider, PruneCheckpointReader, ReceiptProvider, StageCheckpointReader, StateProvider,
StateProviderBox, StateProviderFactory, StateReader, StateRootProvider, StorageReader,
TransactionVariant, TransactionsProvider,
};
use reth_prune_types::{PruneCheckpoint, PruneSegment};
use reth_rpc_convert::{TryFromBlockResponse, TryFromReceiptResponse, TryFromTransactionResponse};
use reth_stages_types::{StageCheckpoint, StageId};
use reth_storage_api::{
BlockBodyIndicesProvider, BlockReaderIdExt, BlockSource, DBProvider, NodePrimitivesProvider,
ReceiptProviderIdExt, StatsReader,
};
use reth_trie::{updates::TrieUpdates, AccountProof, HashedPostState, MultiProof, TrieInput};
use std::{
collections::BTreeMap,
future::{Future, IntoFuture},
ops::{RangeBounds, RangeInclusive},
sync::Arc,
};
use tokio::{runtime::Handle, sync::broadcast};
use tracing::{trace, warn};
/// Configuration for `RpcBlockchainProvider`
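///
/// A usage sketch of the builder-style setters defined below:
///
/// ```rust,ignore
/// let config = RpcBlockchainProviderConfig::default()
///     .with_compute_state_root(true)
///     .with_reth_rpc_support(false);
/// ```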
#[derive(Debug, Clone)]
pub struct RpcBlockchainProviderConfig {
/// Whether to compute state root when creating execution outcomes
pub compute_state_root: bool,
/// Whether to use Reth-specific RPC methods for better performance
///
/// If enabled, the node will use Reth's RPC methods (`debug_codeByHash` and
/// `eth_getAccountInfo`) to speed up account information retrieval. When disabled, it will
/// use multiple standard RPC calls to get account information.
pub reth_rpc_support: bool,
}
impl Default for RpcBlockchainProviderConfig {
fn default() -> Self {
Self { compute_state_root: false, reth_rpc_support: true }
}
}
impl RpcBlockchainProviderConfig {
/// Sets whether to compute state root when creating execution outcomes
pub const fn with_compute_state_root(mut self, compute: bool) -> Self {
self.compute_state_root = compute;
self
}
/// Sets whether to use Reth-specific RPC methods for better performance
pub const fn with_reth_rpc_support(mut self, support: bool) -> Self {
self.reth_rpc_support = support;
self
}
}
/// An RPC-based blockchain provider that fetches blockchain data via remote RPC calls.
///
/// This is the RPC equivalent of
/// [`BlockchainProvider`](../../provider/src/providers/blockchain_provider.rs), implementing
/// the same `StateProviderFactory` and related traits but fetching data from a remote node instead
/// of local storage.
///
/// This provider is useful for:
/// - Testing without requiring a full local database
/// - Accessing blockchain state from remote nodes
/// - Building light clients or tools that don't need full node storage
///
/// The provider type is generic over the network type N (defaulting to `AnyNetwork`),
/// but the current implementation is specialized for `alloy_network::AnyNetwork`
/// as it needs to access block header fields directly.
#[derive(Clone)]
pub struct RpcBlockchainProvider<P, Node, N = alloy_network::AnyNetwork>
where
Node: NodeTypes,
{
/// The underlying Alloy provider
provider: P,
/// Node types marker
node_types: std::marker::PhantomData<Node>,
/// Network marker
network: std::marker::PhantomData<N>,
/// Broadcast channel for canon state notifications
canon_state_notification: broadcast::Sender<CanonStateNotification<PrimitivesTy<Node>>>,
/// Configuration for the provider
config: RpcBlockchainProviderConfig,
/// Cached chain spec
chain_spec: Arc<Node::ChainSpec>,
}
impl<P, Node: NodeTypes, N> std::fmt::Debug for RpcBlockchainProvider<P, Node, N> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("RpcBlockchainProvider").field("config", &self.config).finish()
}
}
impl<P, Node: NodeTypes, N> RpcBlockchainProvider<P, Node, N> {
/// Creates a new `RpcBlockchainProvider` with default configuration
pub fn new(provider: P) -> Self
where
Node::ChainSpec: Default,
{
Self::new_with_config(provider, RpcBlockchainProviderConfig::default())
}
/// Creates a new `RpcBlockchainProvider` with custom configuration
pub fn new_with_config(provider: P, config: RpcBlockchainProviderConfig) -> Self
where
Node::ChainSpec: Default,
{
let (canon_state_notification, _) = broadcast::channel(1);
Self {
provider,
node_types: std::marker::PhantomData,
network: std::marker::PhantomData,
canon_state_notification,
config,
chain_spec: Arc::new(Node::ChainSpec::default()),
}
}
/// Use a custom chain spec for the provider
pub fn with_chain_spec(self, chain_spec: Arc<Node::ChainSpec>) -> Self {
Self {
provider: self.provider,
node_types: std::marker::PhantomData,
network: std::marker::PhantomData,
canon_state_notification: self.canon_state_notification,
config: self.config,
chain_spec,
}
}
    /// Helper function to execute async operations in a blocking context.
    ///
    /// Note that [`tokio::task::block_in_place`] panics when called from a
    /// current-thread runtime, so this helper must be used from within a
    /// multi-threaded tokio runtime.
fn block_on_async<F, T>(&self, fut: F) -> T
where
F: Future<Output = T>,
{
tokio::task::block_in_place(move || Handle::current().block_on(fut))
}
/// Get a reference to the canon state notification sender
pub const fn canon_state_notification(
&self,
) -> &broadcast::Sender<CanonStateNotification<PrimitivesTy<Node>>> {
&self.canon_state_notification
}
}
impl<P, Node, N> RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
/// Helper function to create a state provider for a given block ID
fn create_state_provider(&self, block_id: BlockId) -> RpcBlockchainStateProvider<P, Node, N> {
RpcBlockchainStateProvider::with_chain_spec(
self.provider.clone(),
block_id,
self.chain_spec.clone(),
)
.with_compute_state_root(self.config.compute_state_root)
.with_reth_rpc_support(self.config.reth_rpc_support)
}
/// Helper function to get state provider by block number
fn state_by_block_number(
&self,
block_number: BlockNumber,
) -> Result<StateProviderBox, ProviderError> {
Ok(Box::new(self.create_state_provider(BlockId::number(block_number))))
}
}
// Implementation note: While the types are generic over Network N, the trait implementations
// are specialized for AnyNetwork because they need to access block header fields.
// This allows the types to be instantiated with any network while the actual functionality
// requires AnyNetwork. Future improvements could add trait bounds for networks with
// compatible block structures.
impl<P, Node, N> BlockHashReader for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
fn block_hash(&self, number: BlockNumber) -> Result<Option<B256>, ProviderError> {
let block = self.block_on_async(async {
self.provider.get_block_by_number(number.into()).await.map_err(ProviderError::other)
})?;
Ok(block.map(|b| b.header().hash()))
}
fn canonical_hashes_range(
&self,
_start: BlockNumber,
_end: BlockNumber,
) -> Result<Vec<B256>, ProviderError> {
// Would need to make multiple RPC calls
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> BlockNumReader for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
fn chain_info(&self) -> Result<reth_chainspec::ChainInfo, ProviderError> {
self.block_on_async(async {
let block = self
.provider
.get_block(BlockId::Number(BlockNumberOrTag::Latest))
.await
.map_err(ProviderError::other)?
.ok_or(ProviderError::HeaderNotFound(0.into()))?;
Ok(ChainInfo { best_hash: block.header().hash(), best_number: block.header().number() })
})
}
fn best_block_number(&self) -> Result<BlockNumber, ProviderError> {
self.block_on_async(async {
self.provider.get_block_number().await.map_err(ProviderError::other)
})
}
fn last_block_number(&self) -> Result<BlockNumber, ProviderError> {
self.best_block_number()
}
fn block_number(&self, hash: B256) -> Result<Option<BlockNumber>, ProviderError> {
let block = self.block_on_async(async {
self.provider.get_block_by_hash(hash).await.map_err(ProviderError::other)
})?;
Ok(block.map(|b| b.header().number()))
}
}
impl<P, Node, N> BlockIdReader for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
fn block_number_for_id(&self, block_id: BlockId) -> Result<Option<BlockNumber>, ProviderError> {
match block_id {
BlockId::Hash(hash) => {
let block = self.block_on_async(async {
self.provider
.get_block_by_hash(hash.block_hash)
.await
.map_err(ProviderError::other)
})?;
Ok(block.map(|b| b.header().number()))
}
BlockId::Number(number_or_tag) => match number_or_tag {
alloy_rpc_types::BlockNumberOrTag::Number(num) => Ok(Some(num)),
alloy_rpc_types::BlockNumberOrTag::Latest => self.block_on_async(async {
self.provider.get_block_number().await.map(Some).map_err(ProviderError::other)
}),
_ => Ok(None),
},
}
}
fn pending_block_num_hash(&self) -> Result<Option<alloy_eips::BlockNumHash>, ProviderError> {
// RPC doesn't provide pending block number and hash together
Err(ProviderError::UnsupportedProvider)
}
fn safe_block_num_hash(&self) -> Result<Option<alloy_eips::BlockNumHash>, ProviderError> {
// RPC doesn't provide safe block number and hash
Err(ProviderError::UnsupportedProvider)
}
fn finalized_block_num_hash(&self) -> Result<Option<alloy_eips::BlockNumHash>, ProviderError> {
// RPC doesn't provide finalized block number and hash
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> HeaderProvider for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
BlockTy<Node>: TryFromBlockResponse<N>,
{
type Header = HeaderTy<Node>;
fn header(&self, block_hash: &BlockHash) -> ProviderResult<Option<Self::Header>> {
let block_response = self.block_on_async(async {
self.provider.get_block_by_hash(*block_hash).await.map_err(ProviderError::other)
})?;
let Some(block_response) = block_response else {
// If the block was not found, return None
return Ok(None);
};
// Convert the network block response to primitive block
let block = <BlockTy<Node> as TryFromBlockResponse<N>>::from_block_response(block_response)
.map_err(ProviderError::other)?;
Ok(Some(block.into_header()))
}
fn header_by_number(&self, num: u64) -> ProviderResult<Option<Self::Header>> {
let Some(sealed_header) = self.sealed_header(num)? else {
// If the block was not found, return None
return Ok(None);
};
Ok(Some(sealed_header.into_header()))
}
    fn header_td(&self, hash: &BlockHash) -> ProviderResult<Option<U256>> {
        // `self.header` already returns a `ProviderError`, so no extra wrapping is needed
        let header = self.header(hash)?;
        Ok(header.map(|b| b.difficulty()))
    }
    fn header_td_by_number(&self, number: BlockNumber) -> ProviderResult<Option<U256>> {
        let header = self.header_by_number(number)?;
        Ok(header.map(|b| b.difficulty()))
    }
fn headers_range(
&self,
_range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Self::Header>> {
Err(ProviderError::UnsupportedProvider)
}
fn sealed_header(
&self,
number: BlockNumber,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
let block_response = self.block_on_async(async {
self.provider.get_block_by_number(number.into()).await.map_err(ProviderError::other)
})?;
let Some(block_response) = block_response else {
// If the block was not found, return None
return Ok(None);
};
let block_hash = block_response.header().hash();
// Convert the network block response to primitive block
let block = <BlockTy<Node> as TryFromBlockResponse<N>>::from_block_response(block_response)
.map_err(ProviderError::other)?;
Ok(Some(SealedHeader::new(block.into_header(), block_hash)))
}
fn sealed_headers_while(
&self,
_range: impl RangeBounds<BlockNumber>,
_predicate: impl FnMut(&SealedHeader<Self::Header>) -> bool,
) -> ProviderResult<Vec<SealedHeader<Self::Header>>> {
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> BlockBodyIndicesProvider for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
fn block_body_indices(&self, _num: u64) -> ProviderResult<Option<StoredBlockBodyIndices>> {
Err(ProviderError::UnsupportedProvider)
}
fn block_body_indices_range(
&self,
_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<StoredBlockBodyIndices>> {
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> BlockReader for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
BlockTy<Node>: TryFromBlockResponse<N>,
TxTy<Node>: TryFromTransactionResponse<N>,
ReceiptTy<Node>: TryFromReceiptResponse<N>,
{
type Block = BlockTy<Node>;
fn find_block_by_hash(
&self,
_hash: B256,
_source: BlockSource,
) -> ProviderResult<Option<Self::Block>> {
Err(ProviderError::UnsupportedProvider)
}
fn block(&self, id: BlockHashOrNumber) -> ProviderResult<Option<Self::Block>> {
let block_response = self.block_on_async(async {
self.provider.get_block(id.into()).full().await.map_err(ProviderError::other)
})?;
let Some(block_response) = block_response else {
// If the block was not found, return None
return Ok(None);
};
// Convert the network block response to primitive block
let block = <BlockTy<Node> as TryFromBlockResponse<N>>::from_block_response(block_response)
.map_err(ProviderError::other)?;
Ok(Some(block))
}
fn pending_block(&self) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Err(ProviderError::UnsupportedProvider)
}
fn pending_block_and_receipts(
&self,
) -> ProviderResult<Option<(RecoveredBlock<Self::Block>, Vec<Self::Receipt>)>> {
Err(ProviderError::UnsupportedProvider)
}
fn recovered_block(
&self,
_id: BlockHashOrNumber,
_transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Err(ProviderError::UnsupportedProvider)
}
fn sealed_block_with_senders(
&self,
_id: BlockHashOrNumber,
_transaction_kind: TransactionVariant,
) -> ProviderResult<Option<RecoveredBlock<Self::Block>>> {
Err(ProviderError::UnsupportedProvider)
}
fn block_range(&self, _range: RangeInclusive<BlockNumber>) -> ProviderResult<Vec<Self::Block>> {
Err(ProviderError::UnsupportedProvider)
}
fn block_with_senders_range(
&self,
_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
Err(ProviderError::UnsupportedProvider)
}
fn recovered_block_range(
&self,
_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<RecoveredBlock<Self::Block>>> {
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> BlockReaderIdExt for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
BlockTy<Node>: TryFromBlockResponse<N>,
TxTy<Node>: TryFromTransactionResponse<N>,
ReceiptTy<Node>: TryFromReceiptResponse<N>,
{
fn block_by_id(&self, id: BlockId) -> ProviderResult<Option<Self::Block>> {
match id {
BlockId::Hash(hash) => self.block_by_hash(hash.block_hash),
BlockId::Number(number_or_tag) => self.block_by_number_or_tag(number_or_tag),
}
}
fn sealed_header_by_id(
&self,
id: BlockId,
) -> ProviderResult<Option<SealedHeader<Self::Header>>> {
match id {
BlockId::Hash(hash) => self.sealed_header_by_hash(hash.block_hash),
BlockId::Number(number_or_tag) => self.sealed_header_by_number_or_tag(number_or_tag),
}
}
fn header_by_id(&self, id: BlockId) -> ProviderResult<Option<Self::Header>> {
match id {
BlockId::Hash(hash) => self.header_by_hash_or_number(hash.block_hash.into()),
BlockId::Number(number_or_tag) => self.header_by_number_or_tag(number_or_tag),
}
}
}
impl<P, Node, N> ReceiptProvider for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
ReceiptTy<Node>: TryFromReceiptResponse<N>,
{
type Receipt = ReceiptTy<Node>;
fn receipt(&self, _id: TxNumber) -> ProviderResult<Option<Self::Receipt>> {
Err(ProviderError::UnsupportedProvider)
}
fn receipt_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Receipt>> {
let receipt_response = self.block_on_async(async {
self.provider.get_transaction_receipt(hash).await.map_err(ProviderError::other)
})?;
let Some(receipt_response) = receipt_response else {
// If the receipt was not found, return None
return Ok(None);
};
// Convert the network receipt response to primitive receipt
let receipt =
<ReceiptTy<Node> as TryFromReceiptResponse<N>>::from_receipt_response(receipt_response)
.map_err(ProviderError::other)?;
Ok(Some(receipt))
}
fn receipts_by_block(
&self,
block: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Receipt>>> {
self.block_on_async(async {
let receipts_response = self
.provider
.get_block_receipts(block.into())
.await
.map_err(ProviderError::other)?;
let Some(receipts) = receipts_response else {
// If the receipts were not found, return None
return Ok(None);
};
// Convert the network receipts response to primitive receipts
let receipts = receipts
.into_iter()
.map(|receipt_response| {
<ReceiptTy<Node> as TryFromReceiptResponse<N>>::from_receipt_response(
receipt_response,
)
.map_err(ProviderError::other)
})
.collect::<Result<Vec<_>, _>>()?;
Ok(Some(receipts))
})
}
fn receipts_by_tx_range(
&self,
_range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Receipt>> {
Err(ProviderError::UnsupportedProvider)
}
fn receipts_by_block_range(
&self,
_block_range: RangeInclusive<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Receipt>>> {
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> ReceiptProviderIdExt for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
ReceiptTy<Node>: TryFromReceiptResponse<N>,
{
}
impl<P, Node, N> TransactionsProvider for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
BlockTy<Node>: TryFromBlockResponse<N>,
TxTy<Node>: TryFromTransactionResponse<N>,
{
type Transaction = TxTy<Node>;
fn transaction_id(&self, _tx_hash: TxHash) -> ProviderResult<Option<TxNumber>> {
Err(ProviderError::UnsupportedProvider)
}
fn transaction_by_id(&self, _id: TxNumber) -> ProviderResult<Option<Self::Transaction>> {
Err(ProviderError::UnsupportedProvider)
}
fn transaction_by_id_unhashed(
&self,
_id: TxNumber,
) -> ProviderResult<Option<Self::Transaction>> {
Err(ProviderError::UnsupportedProvider)
}
fn transaction_by_hash(&self, hash: TxHash) -> ProviderResult<Option<Self::Transaction>> {
let transaction_response = self.block_on_async(async {
self.provider.get_transaction_by_hash(hash).await.map_err(ProviderError::other)
})?;
let Some(transaction_response) = transaction_response else {
// If the transaction was not found, return None
return Ok(None);
};
// Convert the network transaction response to primitive transaction
let transaction = <TxTy<Node> as TryFromTransactionResponse<N>>::from_transaction_response(
transaction_response,
)
.map_err(ProviderError::other)?;
Ok(Some(transaction))
}
fn transaction_by_hash_with_meta(
&self,
_hash: TxHash,
) -> ProviderResult<Option<(Self::Transaction, TransactionMeta)>> {
Err(ProviderError::UnsupportedProvider)
}
fn transaction_block(&self, _id: TxNumber) -> ProviderResult<Option<BlockNumber>> {
Err(ProviderError::UnsupportedProvider)
}
fn transactions_by_block(
&self,
block: BlockHashOrNumber,
) -> ProviderResult<Option<Vec<Self::Transaction>>> {
let block_response = self.block_on_async(async {
self.provider.get_block(block.into()).full().await.map_err(ProviderError::other)
})?;
let Some(block_response) = block_response else {
// If the block was not found, return None
return Ok(None);
};
// Convert the network block response to primitive block
let block = <BlockTy<Node> as TryFromBlockResponse<N>>::from_block_response(block_response)
.map_err(ProviderError::other)?;
Ok(Some(block.into_body().into_transactions()))
}
fn transactions_by_block_range(
&self,
_range: impl RangeBounds<BlockNumber>,
) -> ProviderResult<Vec<Vec<Self::Transaction>>> {
Err(ProviderError::UnsupportedProvider)
}
fn transactions_by_tx_range(
&self,
_range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Self::Transaction>> {
Err(ProviderError::UnsupportedProvider)
}
fn senders_by_tx_range(
&self,
_range: impl RangeBounds<TxNumber>,
) -> ProviderResult<Vec<Address>> {
Err(ProviderError::UnsupportedProvider)
}
fn transaction_sender(&self, _id: TxNumber) -> ProviderResult<Option<Address>> {
Err(ProviderError::UnsupportedProvider)
}
}
impl<P, Node, N> StateProviderFactory for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
fn latest(&self) -> Result<StateProviderBox, ProviderError> {
Ok(Box::new(self.create_state_provider(self.best_block_number()?.into())))
}
fn state_by_block_id(&self, block_id: BlockId) -> Result<StateProviderBox, ProviderError> {
Ok(Box::new(self.create_state_provider(block_id)))
}
fn state_by_block_number_or_tag(
&self,
number_or_tag: alloy_rpc_types::BlockNumberOrTag,
) -> Result<StateProviderBox, ProviderError> {
match number_or_tag {
alloy_rpc_types::BlockNumberOrTag::Latest => self.latest(),
alloy_rpc_types::BlockNumberOrTag::Pending => self.pending(),
alloy_rpc_types::BlockNumberOrTag::Number(num) => self.state_by_block_number(num),
_ => Err(ProviderError::UnsupportedProvider),
}
}
fn history_by_block_number(
&self,
block_number: BlockNumber,
) -> Result<StateProviderBox, ProviderError> {
self.state_by_block_number(block_number)
}
fn history_by_block_hash(
&self,
block_hash: BlockHash,
) -> Result<StateProviderBox, ProviderError> {
self.state_by_block_hash(block_hash)
}
fn state_by_block_hash(
&self,
block_hash: BlockHash,
) -> Result<StateProviderBox, ProviderError> {
trace!(target: "alloy-provider", ?block_hash, "Getting state provider by block hash");
let block = self.block_on_async(async {
self.provider
.get_block_by_hash(block_hash)
.await
.map_err(ProviderError::other)?
.ok_or(ProviderError::BlockHashNotFound(block_hash))
})?;
let block_number = block.header().number();
Ok(Box::new(self.create_state_provider(BlockId::number(block_number))))
}
fn pending(&self) -> Result<StateProviderBox, ProviderError> {
trace!(target: "alloy-provider", "Getting pending state provider");
self.latest()
}
fn pending_state_by_hash(
&self,
_block_hash: B256,
) -> Result<Option<StateProviderBox>, ProviderError> {
// RPC provider doesn't support pending state by hash
Err(ProviderError::UnsupportedProvider)
}
fn maybe_pending(&self) -> Result<Option<StateProviderBox>, ProviderError> {
Ok(None)
}
}
impl<P, Node, N> DatabaseProviderFactory for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
type DB = DatabaseMock;
type Provider = RpcBlockchainStateProvider<P, Node, N>;
type ProviderRW = RpcBlockchainStateProvider<P, Node, N>;
fn database_provider_ro(&self) -> Result<Self::Provider, ProviderError> {
// RPC provider returns a new state provider
let block_number = self.block_on_async(async {
self.provider.get_block_number().await.map_err(ProviderError::other)
})?;
Ok(self.create_state_provider(BlockId::number(block_number)))
}
fn database_provider_rw(&self) -> Result<Self::ProviderRW, ProviderError> {
// RPC provider returns a new state provider
let block_number = self.block_on_async(async {
self.provider.get_block_number().await.map_err(ProviderError::other)
})?;
Ok(self.create_state_provider(BlockId::number(block_number)))
}
}
impl<P, Node, N> CanonChainTracker for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
type Header = alloy_consensus::Header;
fn on_forkchoice_update_received(&self, _update: &ForkchoiceState) {
// No-op for RPC provider
}
fn last_received_update_timestamp(&self) -> Option<std::time::Instant> {
None
}
fn set_canonical_head(&self, _header: SealedHeader<Self::Header>) {
// No-op for RPC provider
}
fn set_safe(&self, _header: SealedHeader<Self::Header>) {
// No-op for RPC provider
}
fn set_finalized(&self, _header: SealedHeader<Self::Header>) {
// No-op for RPC provider
}
}
impl<P, Node, N> NodePrimitivesProvider for RpcBlockchainProvider<P, Node, N>
where
P: Send + Sync,
N: Send + Sync,
Node: NodeTypes,
{
type Primitives = PrimitivesTy<Node>;
}
impl<P, Node, N> CanonStateSubscriptions for RpcBlockchainProvider<P, Node, N>
where
P: Provider<N> + Clone + 'static,
N: Network,
Node: NodeTypes,
{
fn subscribe_to_canonical_state(&self) -> CanonStateNotifications<PrimitivesTy<Node>> {
trace!(target: "alloy-provider", "Subscribing to canonical state notifications");
self.canon_state_notification.subscribe()
}
}
impl<P, Node, N> ChainSpecProvider for RpcBlockchainProvider<P, Node, N>
where
P: Send + Sync,
N: Send + Sync,
Node: NodeTypes,
Node::ChainSpec: Default,
{
type ChainSpec = Node::ChainSpec;
fn chain_spec(&self) -> Arc<Self::ChainSpec> {
self.chain_spec.clone()
}
}
/// RPC-based state provider implementation that fetches blockchain state via remote RPC calls.
///
/// This is the state provider counterpart to `RpcBlockchainProvider`, handling state queries
/// at specific block heights via RPC instead of local database access.
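///
/// A hedged construction sketch (`alloy_provider`, `MyNode`, and `chain_spec` are
/// illustrative assumptions):
///
/// ```rust,ignore
/// let state = RpcBlockchainStateProvider::<_, MyNode>::with_chain_spec(
///     alloy_provider,
///     BlockId::latest(),
///     chain_spec,
/// )
/// .with_compute_state_root(false);
/// ```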
pub struct RpcBlockchainStateProvider<P, Node, N = alloy_network::AnyNetwork>
where
Node: NodeTypes,
{
/// The underlying Alloy provider
provider: P,
/// The block ID to fetch state at
block_id: BlockId,
/// Node types marker
node_types: std::marker::PhantomData<Node>,
/// Network marker
network: std::marker::PhantomData<N>,
/// Cached chain spec (shared with parent provider)
chain_spec: Option<Arc<Node::ChainSpec>>,
/// Whether to enable state root calculation
compute_state_root: bool,
/// Cached bytecode for accounts
    ///
    /// Since the state provider is short-lived, unbounded growth of this cache is not a
    /// concern.
code_store: RwLock<HashMap<B256, Bytecode>>,
/// Whether to use Reth-specific RPC methods for better performance
reth_rpc_support: bool,
}
impl<P: std::fmt::Debug, Node: NodeTypes, N> std::fmt::Debug
for RpcBlockchainStateProvider<P, Node, N>
{
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("RpcBlockchainStateProvider")
.field("provider", &self.provider)
.field("block_id", &self.block_id)
.finish()
}
}
impl<P: Clone, Node: NodeTypes, N> RpcBlockchainStateProvider<P, Node, N> {
/// Creates a new state provider for the given block
pub fn new(
provider: P,
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/cursor.rs | crates/storage/nippy-jar/src/cursor.rs | use crate::{
compression::{Compression, Compressors, Zstd},
DataReader, NippyJar, NippyJarError, NippyJarHeader, RefRow,
};
use std::{ops::Range, sync::Arc};
use zstd::bulk::Decompressor;
/// Simple cursor implementation to retrieve data from [`NippyJar`].
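///
/// A hedged usage sketch (assumes an already-loaded `jar`):
///
/// ```rust,ignore
/// let mut cursor = NippyJarCursor::new(&jar)?;
/// // Iterate every row; each row is a `Vec<&[u8]>` with one slice per column.
/// while let Some(row) = cursor.next_row()? {
///     // process the column slices...
/// }
/// ```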
#[derive(Clone)]
pub struct NippyJarCursor<'a, H = ()> {
/// [`NippyJar`] which holds most of the required configuration to read from the file.
jar: &'a NippyJar<H>,
/// Data and offset reader.
reader: Arc<DataReader>,
/// Internal buffer to unload data to without reallocating memory on each retrieval.
internal_buffer: Vec<u8>,
/// Cursor row position.
row: u64,
}
impl<H: NippyJarHeader> std::fmt::Debug for NippyJarCursor<'_, H> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("NippyJarCursor").field("config", &self.jar).finish_non_exhaustive()
}
}
impl<'a, H: NippyJarHeader> NippyJarCursor<'a, H> {
/// Creates a new instance of [`NippyJarCursor`] for the given [`NippyJar`].
pub fn new(jar: &'a NippyJar<H>) -> Result<Self, NippyJarError> {
let max_row_size = jar.max_row_size;
Ok(Self {
jar,
reader: Arc::new(jar.open_data_reader()?),
// Makes sure that we have enough buffer capacity to decompress any row of data.
internal_buffer: Vec::with_capacity(max_row_size),
row: 0,
})
}
/// Creates a new instance of [`NippyJarCursor`] with the specified [`NippyJar`] and data
/// reader.
pub fn with_reader(
jar: &'a NippyJar<H>,
reader: Arc<DataReader>,
) -> Result<Self, NippyJarError> {
let max_row_size = jar.max_row_size;
Ok(Self {
jar,
reader,
// Makes sure that we have enough buffer capacity to decompress any row of data.
internal_buffer: Vec::with_capacity(max_row_size),
row: 0,
})
}
/// Returns a reference to the related [`NippyJar`]
pub const fn jar(&self) -> &NippyJar<H> {
self.jar
}
/// Returns current row index of the cursor
pub const fn row_index(&self) -> u64 {
self.row
}
/// Resets cursor to the beginning.
pub const fn reset(&mut self) {
self.row = 0;
}
/// Returns a row by its number.
pub fn row_by_number(&mut self, row: usize) -> Result<Option<RefRow<'_>>, NippyJarError> {
self.row = row as u64;
self.next_row()
}
    /// Returns the row at the current cursor position and advances the cursor.
pub fn next_row(&mut self) -> Result<Option<RefRow<'_>>, NippyJarError> {
self.internal_buffer.clear();
if self.row as usize >= self.jar.rows {
// Has reached the end
return Ok(None)
}
let mut row = Vec::with_capacity(self.jar.columns);
// Retrieve all column values from the row
for column in 0..self.jar.columns {
self.read_value(column, &mut row)?;
}
self.row += 1;
Ok(Some(
row.into_iter()
.map(|v| match v {
ValueRange::Mmap(range) => self.reader.data(range),
ValueRange::Internal(range) => &self.internal_buffer[range],
})
.collect(),
))
}
/// Returns a row by its number by using a `mask` to only read certain columns from the row.
pub fn row_by_number_with_cols(
&mut self,
row: usize,
mask: usize,
) -> Result<Option<RefRow<'_>>, NippyJarError> {
self.row = row as u64;
self.next_row_with_cols(mask)
}
    /// Returns the row at the current cursor position and advances the cursor.
///
/// Uses a `mask` to only read certain columns from the row.
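    ///
    /// Bit `i` of `mask` selects column `i`. For example, with three columns:
    ///
    /// ```
    /// let mask = 0b101usize;
    /// let selected: Vec<usize> = (0..3).filter(|c| mask & (1 << c) != 0).collect();
    /// assert_eq!(selected, vec![0, 2]);
    /// ```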
pub fn next_row_with_cols(&mut self, mask: usize) -> Result<Option<RefRow<'_>>, NippyJarError> {
self.internal_buffer.clear();
if self.row as usize >= self.jar.rows {
// Has reached the end
return Ok(None)
}
let columns = self.jar.columns;
let mut row = Vec::with_capacity(columns);
for column in 0..columns {
if mask & (1 << column) != 0 {
self.read_value(column, &mut row)?
}
}
self.row += 1;
Ok(Some(
row.into_iter()
.map(|v| match v {
ValueRange::Mmap(range) => self.reader.data(range),
ValueRange::Internal(range) => &self.internal_buffer[range],
})
.collect(),
))
}
    /// Reads the value range of the given column for the current row and pushes it onto `row`.
fn read_value(
&mut self,
column: usize,
row: &mut Vec<ValueRange>,
) -> Result<(), NippyJarError> {
        // Find the offset of the column value. Offsets are stored row-major, so e.g. in a
        // 3-column jar, (row 2, column 1) maps to offset index 2 * 3 + 1 = 7.
let offset_pos = self.row as usize * self.jar.columns + column;
let value_offset = self.reader.offset(offset_pos)? as usize;
let column_offset_range = if self.jar.rows * self.jar.columns == offset_pos + 1 {
// It's the last column of the last row
value_offset..self.reader.size()
} else {
let next_value_offset = self.reader.offset(offset_pos + 1)? as usize;
value_offset..next_value_offset
};
if let Some(compression) = self.jar.compressor() {
let from = self.internal_buffer.len();
match compression {
Compressors::Zstd(z) if z.use_dict => {
                    // The dictionaries are guaranteed to exist and to be loaded at this
                    // point, since they are loaded during deserialization. If they are
                    // not, something went wrong earlier and we cannot recover here anyway.
let dictionaries = z.dictionaries.as_ref().expect("dictionaries to exist")
[column]
.loaded()
.expect("dictionary to be loaded");
let mut decompressor = Decompressor::with_prepared_dictionary(dictionaries)?;
Zstd::decompress_with_dictionary(
self.reader.data(column_offset_range),
&mut self.internal_buffer,
&mut decompressor,
)?;
}
_ => {
// Uses the chosen default decompressor
compression.decompress_to(
self.reader.data(column_offset_range),
&mut self.internal_buffer,
)?;
}
}
let to = self.internal_buffer.len();
row.push(ValueRange::Internal(from..to));
} else {
// Not compressed
row.push(ValueRange::Mmap(column_offset_range));
}
Ok(())
}
}
/// Helper type that stores the range of the decompressed column value either on a `mmap` slice or
/// on the internal buffer.
enum ValueRange {
Mmap(Range<usize>),
Internal(Range<usize>),
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/lib.rs | crates/storage/nippy-jar/src/lib.rs | //! Immutable data store format.
//!
//! *Warning*: The `NippyJar` encoding format and its implementations are
//! designed for storing and retrieving data internally. They are not hardened
//! to safely read potentially malicious data.
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
use memmap2::Mmap;
use serde::{Deserialize, Serialize};
use std::{
error::Error as StdError,
fs::File,
io::Read,
ops::Range,
path::{Path, PathBuf},
};
use tracing::*;
/// Compression algorithms supported by `NippyJar`.
pub mod compression;
#[cfg(test)]
use compression::Compression;
use compression::Compressors;
/// Empty enum kept for backwards compatibility.
#[derive(Debug, Serialize, Deserialize)]
#[cfg_attr(test, derive(PartialEq, Eq))]
pub enum Functions {}
/// Empty enum kept for backwards compatibility.
#[derive(Debug, Serialize, Deserialize)]
#[cfg_attr(test, derive(PartialEq, Eq))]
pub enum InclusionFilters {}
mod error;
pub use error::NippyJarError;
mod cursor;
pub use cursor::NippyJarCursor;
mod writer;
pub use writer::NippyJarWriter;
mod consistency;
pub use consistency::NippyJarChecker;
/// The version number of the Nippy Jar format.
const NIPPY_JAR_VERSION: usize = 1;
/// The file extension used for index files.
const INDEX_FILE_EXTENSION: &str = "idx";
/// The file extension used for offsets files.
const OFFSETS_FILE_EXTENSION: &str = "off";
/// The file extension used for configuration files.
pub const CONFIG_FILE_EXTENSION: &str = "conf";
/// A [`RefRow`] is a list of column value slices pointing to either an internal buffer or a
/// memory-mapped file.
type RefRow<'a> = Vec<&'a [u8]>;
/// Alias type for a column value wrapped in `Result`.
pub type ColumnResult<T> = Result<T, Box<dyn StdError + Send + Sync>>;
/// A trait for the user-defined header of [`NippyJar`].
pub trait NippyJarHeader:
Send + Sync + Serialize + for<'b> Deserialize<'b> + std::fmt::Debug + 'static
{
}
// Blanket implementation for all types that implement the required traits.
impl<T> NippyJarHeader for T where
T: Send + Sync + Serialize + for<'b> Deserialize<'b> + std::fmt::Debug + 'static
{
}
/// `NippyJar` is a specialized storage format designed for immutable data.
///
/// Data is organized into a columnar format, enabling column-based compression. Data retrieval
/// entails consulting an offset list and fetching the data from file via `mmap`.
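///
/// # Example
///
/// A hedged sketch; the path is illustrative, and populating rows is done through
/// the separate [`NippyJarWriter`]:
///
/// ```rust,ignore
/// // Two columns, LZ4-compressed, no user header.
/// let jar = NippyJar::new_without_header(2, Path::new("/tmp/jar-example")).with_lz4();
/// ```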
#[derive(Serialize, Deserialize)]
#[cfg_attr(test, derive(PartialEq))]
pub struct NippyJar<H = ()> {
/// The version of the `NippyJar` format.
version: usize,
/// User-defined header data.
    /// Defaults to the zero-sized unit type, i.e. no header data.
user_header: H,
/// Number of data columns in the jar.
columns: usize,
/// Number of data rows in the jar.
rows: usize,
/// Optional compression algorithm applied to the data.
compressor: Option<Compressors>,
#[serde(skip)]
/// Optional field for backwards compatibility
filter: Option<InclusionFilters>,
#[serde(skip)]
/// Optional field for backwards compatibility
phf: Option<Functions>,
/// Maximum uncompressed row size of the set. This will enable decompression without any
/// resizing of the output buffer.
max_row_size: usize,
/// Data path for file. Supporting files will have a format `{path}.{extension}`.
#[serde(skip)]
path: PathBuf,
}
impl<H: NippyJarHeader> std::fmt::Debug for NippyJar<H> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("NippyJar")
.field("version", &self.version)
.field("user_header", &self.user_header)
.field("rows", &self.rows)
.field("columns", &self.columns)
.field("compressor", &self.compressor)
.field("filter", &self.filter)
.field("phf", &self.phf)
.field("path", &self.path)
.field("max_row_size", &self.max_row_size)
.finish_non_exhaustive()
}
}
impl NippyJar<()> {
    /// Creates a new [`NippyJar`] without user-defined header data.
pub fn new_without_header(columns: usize, path: &Path) -> Self {
Self::new(columns, path, ())
}
/// Loads the file configuration and returns [`Self`] on a jar without user-defined header data.
pub fn load_without_header(path: &Path) -> Result<Self, NippyJarError> {
Self::load(path)
}
}
impl<H: NippyJarHeader> NippyJar<H> {
    /// Creates a new [`NippyJar`] with user-defined header data.
pub fn new(columns: usize, path: &Path, user_header: H) -> Self {
Self {
version: NIPPY_JAR_VERSION,
user_header,
columns,
rows: 0,
max_row_size: 0,
compressor: None,
filter: None,
phf: None,
path: path.to_path_buf(),
}
}
/// Adds [`compression::Zstd`] compression.
pub fn with_zstd(mut self, use_dict: bool, max_dict_size: usize) -> Self {
self.compressor =
Some(Compressors::Zstd(compression::Zstd::new(use_dict, max_dict_size, self.columns)));
self
}
/// Adds [`compression::Lz4`] compression.
pub fn with_lz4(mut self) -> Self {
self.compressor = Some(Compressors::Lz4(compression::Lz4::default()));
self
}
/// Gets a reference to the user header.
pub const fn user_header(&self) -> &H {
&self.user_header
}
/// Gets total columns in jar.
pub const fn columns(&self) -> usize {
self.columns
}
/// Gets total rows in jar.
pub const fn rows(&self) -> usize {
self.rows
}
/// Gets a reference to the compressor.
pub const fn compressor(&self) -> Option<&Compressors> {
self.compressor.as_ref()
}
/// Gets a mutable reference to the compressor.
pub const fn compressor_mut(&mut self) -> Option<&mut Compressors> {
self.compressor.as_mut()
}
/// Loads the file configuration and returns [`Self`].
///
/// **The user must ensure the header type matches the one used during the jar's creation.**
pub fn load(path: &Path) -> Result<Self, NippyJarError> {
        // Read [`Self`] from the config file.
let config_path = path.with_extension(CONFIG_FILE_EXTENSION);
let config_file = File::open(&config_path)
.map_err(|err| reth_fs_util::FsPathError::open(err, config_path))?;
let mut obj = Self::load_from_reader(config_file)?;
obj.path = path.to_path_buf();
Ok(obj)
}
/// Deserializes an instance of [`Self`] from a [`Read`] type.
pub fn load_from_reader<R: Read>(reader: R) -> Result<Self, NippyJarError> {
Ok(bincode::deserialize_from(reader)?)
}
/// Returns the path for the data file
pub fn data_path(&self) -> &Path {
self.path.as_ref()
}
/// Returns the path for the index file
pub fn index_path(&self) -> PathBuf {
self.path.with_extension(INDEX_FILE_EXTENSION)
}
/// Returns the path for the offsets file
pub fn offsets_path(&self) -> PathBuf {
self.path.with_extension(OFFSETS_FILE_EXTENSION)
}
/// Returns the path for the config file
pub fn config_path(&self) -> PathBuf {
self.path.with_extension(CONFIG_FILE_EXTENSION)
}
    /// Deletes this [`NippyJar`] from disk, alongside every satellite file.
pub fn delete(self) -> Result<(), NippyJarError> {
// TODO(joshie): ensure consistency on unexpected shutdown
for path in
[self.data_path().into(), self.index_path(), self.offsets_path(), self.config_path()]
{
if path.exists() {
debug!(target: "nippy-jar", ?path, "Removing file.");
reth_fs_util::remove_file(path)?;
}
}
Ok(())
}
    /// Returns a [`DataReader`] for the data and offsets files.
pub fn open_data_reader(&self) -> Result<DataReader, NippyJarError> {
DataReader::new(self.data_path())
}
/// Writes all necessary configuration to file.
fn freeze_config(&self) -> Result<(), NippyJarError> {
Ok(reth_fs_util::atomic_write_file(&self.config_path(), |file| {
bincode::serialize_into(file, &self)
})?)
}
}
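The satellite-file scheme above (`{path}.idx`, `{path}.off`, `{path}.conf`) relies on `std::path::Path::with_extension`. A minimal standalone sketch, using hypothetical paths that are not part of this crate, shows the derivation and one subtlety: `with_extension` *replaces* an existing extension rather than appending.

```rust
use std::path::PathBuf;

fn main() {
    // A jar stored at `static/headers` derives its satellite files by extension.
    let data = PathBuf::from("static/headers");
    assert_eq!(data.with_extension("idx"), PathBuf::from("static/headers.idx"));
    assert_eq!(data.with_extension("off"), PathBuf::from("static/headers.off"));
    assert_eq!(data.with_extension("conf"), PathBuf::from("static/headers.conf"));

    // `with_extension` replaces an existing extension, so a data path that
    // already has one would still map to the same satellite file names.
    let dat = PathBuf::from("static/headers.dat");
    assert_eq!(dat.with_extension("idx"), PathBuf::from("static/headers.idx"));
}
```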
#[cfg(test)]
impl<H: NippyJarHeader> NippyJar<H> {
    /// If required, prepares any compression algorithm with an early pass over the data.
pub fn prepare_compression(
&mut self,
columns: Vec<impl IntoIterator<Item = Vec<u8>>>,
) -> Result<(), NippyJarError> {
// Makes any necessary preparations for the compressors
if let Some(compression) = &mut self.compressor {
debug!(target: "nippy-jar", columns=columns.len(), "Preparing compression.");
compression.prepare_compression(columns)?;
}
Ok(())
}
/// Writes all data and configuration to a file and the offset index to another.
pub fn freeze(
self,
columns: Vec<impl IntoIterator<Item = ColumnResult<Vec<u8>>>>,
total_rows: u64,
) -> Result<Self, NippyJarError> {
self.check_before_freeze(&columns)?;
debug!(target: "nippy-jar", path=?self.data_path(), "Opening data file.");
// Creates the writer, data and offsets file
let mut writer = NippyJarWriter::new(self)?;
// Append rows to file while holding offsets in memory
writer.append_rows(columns, total_rows)?;
// Flushes configuration and offsets to disk
writer.commit()?;
debug!(target: "nippy-jar", ?writer, "Finished writing data.");
Ok(writer.into_jar())
}
/// Safety checks before creating and returning a [`File`] handle to write data to.
fn check_before_freeze(
&self,
columns: &[impl IntoIterator<Item = ColumnResult<Vec<u8>>>],
) -> Result<(), NippyJarError> {
if columns.len() != self.columns {
return Err(NippyJarError::ColumnLenMismatch(self.columns, columns.len()))
}
if let Some(compression) = &self.compressor {
if !compression.is_ready() {
return Err(NippyJarError::CompressorNotReady)
}
}
Ok(())
}
}
/// Manages the reading of static file data using memory-mapped files.
///
/// Holds file and mmap descriptors of the data and offsets files of a `static_file`.
#[derive(Debug)]
pub struct DataReader {
/// Data file descriptor. Needs to be kept alive as long as `data_mmap` handle.
#[expect(dead_code)]
data_file: File,
/// Mmap handle for data.
data_mmap: Mmap,
/// Offset file descriptor. Needs to be kept alive as long as `offset_mmap` handle.
offset_file: File,
/// Mmap handle for offsets.
offset_mmap: Mmap,
/// Number of bytes that represent one offset.
offset_size: u8,
}
impl DataReader {
/// Reads the respective data and offsets file and returns [`DataReader`].
pub fn new(path: impl AsRef<Path>) -> Result<Self, NippyJarError> {
let data_file = File::open(path.as_ref())?;
// SAFETY: File is read-only and its descriptor is kept alive as long as the mmap handle.
let data_mmap = unsafe { Mmap::map(&data_file)? };
let offset_file = File::open(path.as_ref().with_extension(OFFSETS_FILE_EXTENSION))?;
// SAFETY: File is read-only and its descriptor is kept alive as long as the mmap handle.
let offset_mmap = unsafe { Mmap::map(&offset_file)? };
// First byte is the size of one offset in bytes
let offset_size = offset_mmap[0];
// Ensure that the size of an offset is at most 8 bytes.
if offset_size > 8 {
return Err(NippyJarError::OffsetSizeTooBig { offset_size })
} else if offset_size == 0 {
return Err(NippyJarError::OffsetSizeTooSmall { offset_size })
}
Ok(Self { data_file, data_mmap, offset_file, offset_size, offset_mmap })
}
/// Returns the offset for the requested data index
pub fn offset(&self, index: usize) -> Result<u64, NippyJarError> {
        // The `+ 1` accounts for the offset-size `u8` stored at the beginning of the file.
let from = index * self.offset_size as usize + 1;
self.offset_at(from)
}
/// Returns the offset for the requested data index starting from the end
pub fn reverse_offset(&self, index: usize) -> Result<u64, NippyJarError> {
let offsets_file_size = self.offset_file.metadata()?.len() as usize;
if offsets_file_size > 1 {
let from = offsets_file_size - self.offset_size as usize * (index + 1);
self.offset_at(from)
} else {
Ok(0)
}
}
/// Returns total number of offsets in the file.
/// The size of one offset is determined by the file itself.
pub fn offsets_count(&self) -> Result<usize, NippyJarError> {
Ok((self.offset_file.metadata()?.len().saturating_sub(1) / self.offset_size as u64)
as usize)
}
    /// Reads one offset (whose byte width is determined by the offset file) as a `u64`,
    /// starting at the provided byte index.
fn offset_at(&self, index: usize) -> Result<u64, NippyJarError> {
let mut buffer: [u8; 8] = [0; 8];
let offset_end = index.saturating_add(self.offset_size as usize);
if offset_end > self.offset_mmap.len() {
return Err(NippyJarError::OffsetOutOfBounds { index })
}
buffer[..self.offset_size as usize].copy_from_slice(&self.offset_mmap[index..offset_end]);
Ok(u64::from_le_bytes(buffer))
}
/// Returns number of bytes that represent one offset.
pub const fn offset_size(&self) -> u8 {
self.offset_size
}
/// Returns the underlying data as a slice of bytes for the provided range.
pub fn data(&self, range: Range<usize>) -> &[u8] {
&self.data_mmap[range]
}
/// Returns total size of data
pub fn size(&self) -> usize {
self.data_mmap.len()
}
}
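The offsets-file layout handled above (one leading size byte, then fixed-width little-endian entries) can be sketched independently of this crate. `decode_offset` below mirrors the bounds and size checks of `offset_at` on an in-memory buffer instead of an mmap; the function name and buffer are illustrative, not crate API.

```rust
/// Reads the `index`-th offset from a toy offsets buffer whose first byte
/// stores the width (in bytes) of each little-endian offset entry.
fn decode_offset(buf: &[u8], index: usize) -> Option<u64> {
    let size = *buf.first()? as usize;
    if size == 0 || size > 8 {
        return None; // mirrors OffsetSizeTooSmall / OffsetSizeTooBig
    }
    let start = 1 + index * size; // +1 skips the leading size byte
    let end = start.checked_add(size)?;
    if end > buf.len() {
        return None; // mirrors OffsetOutOfBounds
    }
    // Widen the `size`-byte little-endian value into a full u64.
    let mut word = [0u8; 8];
    word[..size].copy_from_slice(&buf[start..end]);
    Some(u64::from_le_bytes(word))
}

fn main() {
    // Width 2, then offsets 5 and 300 (0x012C) encoded little-endian.
    let buf = [2u8, 5, 0, 44, 1];
    assert_eq!(decode_offset(&buf, 0), Some(5));
    assert_eq!(decode_offset(&buf, 1), Some(300));
    assert_eq!(decode_offset(&buf, 2), None); // out of bounds
}
```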
#[cfg(test)]
mod tests {
use super::*;
use compression::Compression;
use rand::{rngs::SmallRng, seq::SliceRandom, RngCore, SeedableRng};
use std::{fs::OpenOptions, io::Read};
type ColumnResults<T> = Vec<ColumnResult<T>>;
type ColumnValues = Vec<Vec<u8>>;
fn test_data(seed: Option<u64>) -> (ColumnValues, ColumnValues) {
let value_length = 32;
let num_rows = 100;
let mut vec: Vec<u8> = vec![0; value_length];
let mut rng = seed.map(SmallRng::seed_from_u64).unwrap_or_else(SmallRng::from_os_rng);
let mut entry_gen = || {
(0..num_rows)
.map(|_| {
rng.fill_bytes(&mut vec[..]);
vec.clone()
})
.collect()
};
(entry_gen(), entry_gen())
}
fn clone_with_result(col: &ColumnValues) -> ColumnResults<Vec<u8>> {
col.iter().map(|v| Ok(v.clone())).collect()
}
#[test]
fn test_config_serialization() {
let file = tempfile::NamedTempFile::new().unwrap();
let jar = NippyJar::new_without_header(23, file.path()).with_lz4();
jar.freeze_config().unwrap();
let mut config_file = OpenOptions::new().read(true).open(jar.config_path()).unwrap();
let config_file_len = config_file.metadata().unwrap().len();
assert_eq!(config_file_len, 37);
let mut buf = Vec::with_capacity(config_file_len as usize);
config_file.read_to_end(&mut buf).unwrap();
assert_eq!(
vec![
1, 0, 0, 0, 0, 0, 0, 0, 23, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0
],
buf
);
let mut read_jar = bincode::deserialize_from::<_, NippyJar>(&buf[..]).unwrap();
// Path is not ser/de
read_jar.path = file.path().to_path_buf();
assert_eq!(jar, read_jar);
}
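The 37-byte config size asserted above is consistent with a field-by-field accounting of the serialized struct, assuming bincode's default fixed-width integer encoding (8-byte `usize`, 1-byte `Option` tag, 4-byte enum variant index) and that the `Lz4` compressor carries no serialized payload; `filter`, `phf`, and `path` are `serde(skip)`.

```rust
fn main() {
    // NippyJar<()> fields that survive serde(skip), under assumed bincode defaults:
    let version = 8; // usize
    let user_header = 0; // zero-sized unit type ()
    let columns = 8; // usize
    let rows = 8; // usize
    let compressor_some_tag = 1; // Option discriminant byte
    let compressor_variant = 4; // enum variant index as u32
    let lz4_payload = 0; // assumed zero-sized
    let max_row_size = 8; // usize
    assert_eq!(
        version + user_header + columns + rows + compressor_some_tag +
            compressor_variant + lz4_payload + max_row_size,
        37
    );
}
```

This also matches the byte dump in the test: eight bytes of `version = 1`, eight of `columns = 23`, eight of `rows = 0`, a `1` for `Some`, four bytes for the `Lz4` variant, then eight zero bytes for `max_row_size`.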
#[test]
fn test_zstd_with_dictionaries() {
let (col1, col2) = test_data(None);
let num_rows = col1.len() as u64;
let num_columns = 2;
let file_path = tempfile::NamedTempFile::new().unwrap();
let nippy = NippyJar::new_without_header(num_columns, file_path.path());
assert!(nippy.compressor().is_none());
let mut nippy =
NippyJar::new_without_header(num_columns, file_path.path()).with_zstd(true, 5000);
assert!(nippy.compressor().is_some());
if let Some(Compressors::Zstd(zstd)) = &mut nippy.compressor_mut() {
assert!(matches!(zstd.compressors(), Err(NippyJarError::CompressorNotReady)));
// Make sure the number of column iterators match the initial set up ones.
assert!(matches!(
zstd.prepare_compression(vec![col1.clone(), col2.clone(), col2.clone()]),
Err(NippyJarError::ColumnLenMismatch(columns, 3)) if columns == num_columns
));
}
// If ZSTD is enabled, do not write to the file unless the column dictionaries have been
// calculated.
assert!(matches!(
nippy.freeze(vec![clone_with_result(&col1), clone_with_result(&col2)], num_rows),
Err(NippyJarError::CompressorNotReady)
));
let mut nippy =
NippyJar::new_without_header(num_columns, file_path.path()).with_zstd(true, 5000);
assert!(nippy.compressor().is_some());
nippy.prepare_compression(vec![col1.clone(), col2.clone()]).unwrap();
if let Some(Compressors::Zstd(zstd)) = &nippy.compressor() {
assert!(matches!(
(&zstd.state, zstd.dictionaries.as_ref().map(|dict| dict.len())),
(compression::ZstdState::Ready, Some(columns)) if columns == num_columns
));
}
let nippy = nippy
.freeze(vec![clone_with_result(&col1), clone_with_result(&col2)], num_rows)
.unwrap();
let loaded_nippy = NippyJar::load_without_header(file_path.path()).unwrap();
assert_eq!(nippy.version, loaded_nippy.version);
assert_eq!(nippy.columns, loaded_nippy.columns);
assert_eq!(nippy.filter, loaded_nippy.filter);
assert_eq!(nippy.phf, loaded_nippy.phf);
assert_eq!(nippy.max_row_size, loaded_nippy.max_row_size);
assert_eq!(nippy.path, loaded_nippy.path);
if let Some(Compressors::Zstd(zstd)) = loaded_nippy.compressor() {
assert!(zstd.use_dict);
let mut cursor = NippyJarCursor::new(&loaded_nippy).unwrap();
// Iterate over compressed values and compare
let mut row_index = 0usize;
while let Some(row) = cursor.next_row().unwrap() {
assert_eq!(
(row[0], row[1]),
(col1[row_index].as_slice(), col2[row_index].as_slice())
);
row_index += 1;
}
} else {
panic!("Expected Zstd compressor")
}
}
#[test]
fn test_lz4() {
let (col1, col2) = test_data(None);
let num_rows = col1.len() as u64;
let num_columns = 2;
let file_path = tempfile::NamedTempFile::new().unwrap();
let nippy = NippyJar::new_without_header(num_columns, file_path.path());
assert!(nippy.compressor().is_none());
let nippy = NippyJar::new_without_header(num_columns, file_path.path()).with_lz4();
assert!(nippy.compressor().is_some());
let nippy = nippy
.freeze(vec![clone_with_result(&col1), clone_with_result(&col2)], num_rows)
.unwrap();
let loaded_nippy = NippyJar::load_without_header(file_path.path()).unwrap();
assert_eq!(nippy, loaded_nippy);
if let Some(Compressors::Lz4(_)) = loaded_nippy.compressor() {
let mut cursor = NippyJarCursor::new(&loaded_nippy).unwrap();
// Iterate over compressed values and compare
let mut row_index = 0usize;
while let Some(row) = cursor.next_row().unwrap() {
assert_eq!(
(row[0], row[1]),
(col1[row_index].as_slice(), col2[row_index].as_slice())
);
row_index += 1;
}
} else {
panic!("Expected Lz4 compressor")
}
}
#[test]
fn test_zstd_no_dictionaries() {
let (col1, col2) = test_data(None);
let num_rows = col1.len() as u64;
let num_columns = 2;
let file_path = tempfile::NamedTempFile::new().unwrap();
let nippy = NippyJar::new_without_header(num_columns, file_path.path());
assert!(nippy.compressor().is_none());
let nippy =
NippyJar::new_without_header(num_columns, file_path.path()).with_zstd(false, 5000);
assert!(nippy.compressor().is_some());
let nippy = nippy
.freeze(vec![clone_with_result(&col1), clone_with_result(&col2)], num_rows)
.unwrap();
let loaded_nippy = NippyJar::load_without_header(file_path.path()).unwrap();
assert_eq!(nippy, loaded_nippy);
if let Some(Compressors::Zstd(zstd)) = loaded_nippy.compressor() {
assert!(!zstd.use_dict);
let mut cursor = NippyJarCursor::new(&loaded_nippy).unwrap();
// Iterate over compressed values and compare
let mut row_index = 0usize;
while let Some(row) = cursor.next_row().unwrap() {
assert_eq!(
(row[0], row[1]),
(col1[row_index].as_slice(), col2[row_index].as_slice())
);
row_index += 1;
}
} else {
panic!("Expected Zstd compressor")
}
}
/// Tests `NippyJar` with everything enabled.
#[test]
fn test_full_nippy_jar() {
let (col1, col2) = test_data(None);
let num_rows = col1.len() as u64;
let num_columns = 2;
let file_path = tempfile::NamedTempFile::new().unwrap();
let data = vec![col1.clone(), col2.clone()];
let block_start = 500;
#[derive(Serialize, Deserialize, Debug)]
struct BlockJarHeader {
block_start: usize,
}
// Create file
{
let mut nippy =
NippyJar::new(num_columns, file_path.path(), BlockJarHeader { block_start })
.with_zstd(true, 5000);
nippy.prepare_compression(data.clone()).unwrap();
nippy
.freeze(vec![clone_with_result(&col1), clone_with_result(&col2)], num_rows)
.unwrap();
}
// Read file
{
let loaded_nippy = NippyJar::<BlockJarHeader>::load(file_path.path()).unwrap();
assert!(loaded_nippy.compressor().is_some());
assert_eq!(loaded_nippy.user_header().block_start, block_start);
if let Some(Compressors::Zstd(_zstd)) = loaded_nippy.compressor() {
let mut cursor = NippyJarCursor::new(&loaded_nippy).unwrap();
// Iterate over compressed values and compare
let mut row_num = 0usize;
while let Some(row) = cursor.next_row().unwrap() {
assert_eq!(
(row[0], row[1]),
(data[0][row_num].as_slice(), data[1][row_num].as_slice())
);
row_num += 1;
}
// Shuffled for chaos.
let mut data = col1.iter().zip(col2.iter()).enumerate().collect::<Vec<_>>();
data.shuffle(&mut rand::rng());
for (row_num, (v0, v1)) in data {
// Simulates `by_number` queries
let row_by_num = cursor.row_by_number(row_num).unwrap().unwrap();
assert_eq!((&row_by_num[0].to_vec(), &row_by_num[1].to_vec()), (v0, v1));
}
}
}
}
#[test]
fn test_selectable_column_values() {
let (col1, col2) = test_data(None);
let num_rows = col1.len() as u64;
let num_columns = 2;
let file_path = tempfile::NamedTempFile::new().unwrap();
let data = vec![col1.clone(), col2.clone()];
// Create file
{
let mut nippy =
NippyJar::new_without_header(num_columns, file_path.path()).with_zstd(true, 5000);
nippy.prepare_compression(data).unwrap();
nippy
.freeze(vec![clone_with_result(&col1), clone_with_result(&col2)], num_rows)
.unwrap();
}
// Read file
{
let loaded_nippy = NippyJar::load_without_header(file_path.path()).unwrap();
if let Some(Compressors::Zstd(_zstd)) = loaded_nippy.compressor() {
let mut cursor = NippyJarCursor::new(&loaded_nippy).unwrap();
// Shuffled for chaos.
let mut data = col1.iter().zip(col2.iter()).enumerate().collect::<Vec<_>>();
data.shuffle(&mut rand::rng());
// Imagine `Blocks` static file has two columns: `Block | StoredWithdrawals`
const BLOCKS_FULL_MASK: usize = 0b11;
// Read both columns
for (row_num, (v0, v1)) in &data {
// Simulates `by_number` queries
let row_by_num = cursor
.row_by_number_with_cols(*row_num, BLOCKS_FULL_MASK)
.unwrap()
.unwrap();
assert_eq!((&row_by_num[0].to_vec(), &row_by_num[1].to_vec()), (*v0, *v1));
}
// Read first column only: `Block`
const BLOCKS_BLOCK_MASK: usize = 0b01;
for (row_num, (v0, _)) in &data {
// Simulates `by_number` queries
let row_by_num = cursor
.row_by_number_with_cols(*row_num, BLOCKS_BLOCK_MASK)
.unwrap()
.unwrap();
assert_eq!(row_by_num.len(), 1);
assert_eq!(&row_by_num[0].to_vec(), *v0);
}
            // Read second column only: `StoredWithdrawals`
const BLOCKS_WITHDRAWAL_MASK: usize = 0b10;
for (row_num, (_, v1)) in &data {
// Simulates `by_number` queries
let row_by_num = cursor
.row_by_number_with_cols(*row_num, BLOCKS_WITHDRAWAL_MASK)
.unwrap()
.unwrap();
assert_eq!(row_by_num.len(), 1);
assert_eq!(&row_by_num[0].to_vec(), *v1);
}
// Read nothing
const BLOCKS_EMPTY_MASK: usize = 0b00;
for (row_num, _) in &data {
// Simulates `by_number` queries
assert!(cursor
.row_by_number_with_cols(*row_num, BLOCKS_EMPTY_MASK)
.unwrap()
.unwrap()
.is_empty());
}
}
}
}
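The column masks exercised above are plain bit tests: bit 0 selects the first column, bit 1 the second, and so on. A standalone sketch (names are illustrative, not crate API) of how such a mask filters a row:

```rust
/// Returns the values of `row` whose column bit is set in `mask`
/// (bit 0 = first column, bit 1 = second column, ...).
fn select_columns<'a>(row: &'a [&'a [u8]], mask: usize) -> Vec<&'a [u8]> {
    row.iter()
        .enumerate()
        .filter(|(col, _)| mask & (1 << col) != 0)
        .map(|(_, value)| *value)
        .collect()
}

fn main() {
    let block: &[u8] = b"block";
    let withdrawals: &[u8] = b"withdrawals";
    let row = [block, withdrawals];
    assert_eq!(select_columns(&row, 0b11), vec![block, withdrawals]); // both columns
    assert_eq!(select_columns(&row, 0b01), vec![block]); // first column only
    assert_eq!(select_columns(&row, 0b10), vec![withdrawals]); // second column only
    assert!(select_columns(&row, 0b00).is_empty()); // nothing selected
}
```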
#[test]
fn test_writer() {
let (col1, col2) = test_data(None);
let num_columns = 2;
let file_path = tempfile::NamedTempFile::new().unwrap();
append_two_rows(num_columns, file_path.path(), &col1, &col2);
// Appends a third row and prunes two rows, to make sure we prune from memory and disk
// offset list
prune_rows(num_columns, file_path.path(), &col1, &col2);
// Should be able to append new rows
append_two_rows(num_columns, file_path.path(), &col1, &col2);
// Simulate an unexpected shutdown before there's a chance to commit, and see that it
// unwinds successfully
test_append_consistency_no_commit(file_path.path(), &col1, &col2);
// Simulate an unexpected shutdown during commit, and see that it unwinds successfully
test_append_consistency_partial_commit(file_path.path(), &col1, &col2);
}
#[test]
fn test_pruner() {
let (col1, col2) = test_data(None);
let num_columns = 2;
let num_rows = 2;
// (missing_offsets, expected number of rows)
// If a row wasn't fully pruned, then it should clear it up as well
let missing_offsets_scenarios = [(1, 1), (2, 1), (3, 0)];
for (missing_offsets, expected_rows) in missing_offsets_scenarios {
let file_path = tempfile::NamedTempFile::new().unwrap();
append_two_rows(num_columns, file_path.path(), &col1, &col2);
simulate_interrupted_prune(num_columns, file_path.path(), num_rows, missing_offsets);
let nippy = NippyJar::load_without_header(file_path.path()).unwrap();
assert_eq!(nippy.rows, expected_rows);
}
}
fn test_append_consistency_partial_commit(
file_path: &Path,
col1: &[Vec<u8>],
col2: &[Vec<u8>],
) {
let nippy = NippyJar::load_without_header(file_path).unwrap();
        // Set the baseline that should be unwound to
let initial_rows = nippy.rows;
let initial_data_size =
File::open(nippy.data_path()).unwrap().metadata().unwrap().len() as usize;
let initial_offset_size =
File::open(nippy.offsets_path()).unwrap().metadata().unwrap().len() as usize;
assert!(initial_data_size > 0);
assert!(initial_offset_size > 0);
// Appends a third row
let mut writer = NippyJarWriter::new(nippy).unwrap();
writer.append_column(Some(Ok(&col1[2]))).unwrap();
writer.append_column(Some(Ok(&col2[2]))).unwrap();
        // Makes sure it doesn't write the last offset (which is the expected file data size)
let _ = writer.offsets_mut().pop();
// `commit_offsets` is not a pub function. we call it here to simulate the shutdown before
// it can flush nippy.rows (config) to disk.
writer.commit_offsets().unwrap();
// Simulate an unexpected shutdown of the writer, before it can finish commit()
drop(writer);
let nippy = NippyJar::load_without_header(file_path).unwrap();
assert_eq!(initial_rows, nippy.rows);
// Data was written successfully
let new_data_size =
File::open(nippy.data_path()).unwrap().metadata().unwrap().len() as usize;
assert_eq!(new_data_size, initial_data_size + col1[2].len() + col2[2].len());
// It should be + 16 (two columns were added), but there's a missing one (the one we pop)
assert_eq!(
initial_offset_size + 8,
File::open(nippy.offsets_path()).unwrap().metadata().unwrap().len() as usize
);
        // The writer will run a consistency check: it first detects that the offset list on
        // disk doesn't match `nippy.rows` and prunes it, then prunes the data file
        // accordingly as well.
let writer = NippyJarWriter::new(nippy).unwrap();
assert_eq!(initial_rows, writer.rows());
assert_eq!(
initial_offset_size,
File::open(writer.offsets_path()).unwrap().metadata().unwrap().len() as usize
);
assert_eq!(
initial_data_size,
File::open(writer.data_path()).unwrap().metadata().unwrap().len() as usize
);
}
fn test_append_consistency_no_commit(file_path: &Path, col1: &[Vec<u8>], col2: &[Vec<u8>]) {
let nippy = NippyJar::load_without_header(file_path).unwrap();
        // Set the baseline that should be unwound to
let initial_rows = nippy.rows;
let initial_data_size =
File::open(nippy.data_path()).unwrap().metadata().unwrap().len() as usize;
let initial_offset_size =
File::open(nippy.offsets_path()).unwrap().metadata().unwrap().len() as usize;
assert!(initial_data_size > 0);
assert!(initial_offset_size > 0);
// Appends a third row, so we have an offset list in memory, which is not flushed to disk,
// while the data has been.
let mut writer = NippyJarWriter::new(nippy).unwrap();
writer.append_column(Some(Ok(&col1[2]))).unwrap();
writer.append_column(Some(Ok(&col2[2]))).unwrap();
// [end of file: content truncated | rust | Apache-2.0 | commit 62834bd8deb86513778624a3ba33f55f4d6a1471]

// File: crates/storage/nippy-jar/src/error.rs
// Source: https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/error.rs
use std::path::PathBuf;
use thiserror::Error;
/// Errors associated with [`crate::NippyJar`].
#[derive(Error, Debug)]
pub enum NippyJarError {
/// An internal error occurred, wrapping any type of error.
#[error(transparent)]
Internal(#[from] Box<dyn core::error::Error + Send + Sync>),
/// An error occurred while disconnecting, wrapping a standard I/O error.
#[error(transparent)]
Disconnect(#[from] std::io::Error),
/// An error related to the file system occurred, wrapping a file system path error.
#[error(transparent)]
FileSystem(#[from] reth_fs_util::FsPathError),
/// A custom error message provided by the user.
#[error("{0}")]
Custom(String),
/// An error occurred during serialization/deserialization with Bincode.
#[error(transparent)]
Bincode(#[from] Box<bincode::ErrorKind>),
/// An error occurred with the Elias-Fano encoding/decoding process.
#[error(transparent)]
EliasFano(#[from] anyhow::Error),
/// Compression was enabled, but the compressor is not ready yet.
#[error("compression was enabled, but it's not ready yet")]
CompressorNotReady,
/// Decompression was enabled, but the decompressor is not ready yet.
#[error("decompression was enabled, but it's not ready yet")]
DecompressorNotReady,
/// The number of columns does not match the expected length.
#[error("number of columns does not match: {0} != {1}")]
ColumnLenMismatch(usize, usize),
/// An unexpected missing value was encountered at a specific row and column.
#[error("unexpected missing value: row:col {0}:{1}")]
UnexpectedMissingValue(u64, u64),
/// The size of an offset exceeds the maximum allowed size of 8 bytes.
#[error("the size of an offset must be at most 8 bytes, got {offset_size}")]
OffsetSizeTooBig {
/// The read offset size in number of bytes.
offset_size: u8,
},
/// The size of an offset is less than the minimum allowed size of 1 byte.
#[error("the size of an offset must be at least 1 byte, got {offset_size}")]
OffsetSizeTooSmall {
/// The read offset size in number of bytes.
offset_size: u8,
},
/// An attempt was made to read an offset that is out of bounds.
#[error("attempted to read an out of bounds offset: {index}")]
OffsetOutOfBounds {
/// The index of the offset that was being read.
index: usize,
},
/// The output buffer is too small for the compression or decompression operation.
#[error("compression or decompression requires a bigger destination output")]
OutputTooSmall,
/// A dictionary is not loaded when it is required for operations.
#[error("dictionary is not loaded.")]
DictionaryNotLoaded,
/// It's not possible to generate a compressor after loading a dictionary.
#[error("it's not possible to generate a compressor after loading a dictionary.")]
CompressorNotAllowed,
/// The number of offsets is smaller than the requested prune size.
#[error("number of offsets ({0}) is smaller than prune request ({1}).")]
InvalidPruning(u64, u64),
/// The jar has been frozen and cannot be modified.
#[error("jar has been frozen and cannot be modified.")]
FrozenJar,
/// The file is in an inconsistent state.
#[error("File is in an inconsistent state.")]
InconsistentState,
/// A specified file is missing.
#[error("Missing file: {}", .0.display())]
MissingFile(PathBuf),
}
// File: crates/storage/nippy-jar/src/consistency.rs
// Source: https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/consistency.rs
use crate::{writer::OFFSET_SIZE_BYTES, NippyJar, NippyJarError, NippyJarHeader};
use std::{
cmp::Ordering,
fs::{File, OpenOptions},
io::{BufWriter, Seek, SeekFrom},
path::Path,
};
/// Performs consistency checks or healing on the [`NippyJar`] file
/// * Is the offsets file size expected?
/// * Is the data file size expected?
///
/// This is based on the assumption that the [`NippyJar`] configuration is **always** the last
/// thing to be updated when something is written, as the `NippyJarWriter::commit()` function shows.
///
/// **For checks (read-only) use `check_consistency` method.**
///
/// **For heals (read-write) use `ensure_consistency` method.**
#[derive(Debug)]
pub struct NippyJarChecker<H: NippyJarHeader = ()> {
/// Associated [`NippyJar`], containing all necessary configurations for data
/// handling.
pub(crate) jar: NippyJar<H>,
/// File handle to where the data is stored.
pub(crate) data_file: Option<BufWriter<File>>,
/// File handle to where the offsets are stored.
pub(crate) offsets_file: Option<BufWriter<File>>,
}
impl<H: NippyJarHeader> NippyJarChecker<H> {
/// Creates a new instance of [`NippyJarChecker`] with the provided [`NippyJar`].
///
/// This method initializes the checker without any associated file handles for
/// the data or offsets files. The [`NippyJar`] passed in contains all necessary
/// configurations for handling data.
pub const fn new(jar: NippyJar<H>) -> Self {
Self { jar, data_file: None, offsets_file: None }
}
    /// It will throw an error if the [`NippyJar`] is in an inconsistent state.
pub fn check_consistency(&mut self) -> Result<(), NippyJarError> {
self.handle_consistency(ConsistencyFailStrategy::ThrowError)
}
    /// It will attempt to heal if the [`NippyJar`] is in an inconsistent state.
///
/// **ATTENTION**: disk commit should be handled externally by consuming `Self`
pub fn ensure_consistency(&mut self) -> Result<(), NippyJarError> {
self.handle_consistency(ConsistencyFailStrategy::Heal)
}
fn handle_consistency(&mut self, mode: ConsistencyFailStrategy) -> Result<(), NippyJarError> {
self.load_files(mode)?;
let mut reader = self.jar.open_data_reader()?;
// When an offset size is smaller than the initial (8), we are dealing with immutable
// data.
if reader.offset_size() != OFFSET_SIZE_BYTES {
return Err(NippyJarError::FrozenJar)
}
        let expected_offsets_file_size: u64 = (1 + // first byte is the size of one offset
            OFFSET_SIZE_BYTES as usize * self.jar.rows * self.jar.columns + // `offset size * num rows * num columns`
            OFFSET_SIZE_BYTES as usize) as u64; // trailing offset: the expected size of the data file
let actual_offsets_file_size = self.offsets_file().get_ref().metadata()?.len();
if mode.should_err() &&
expected_offsets_file_size.cmp(&actual_offsets_file_size) != Ordering::Equal
{
return Err(NippyJarError::InconsistentState)
}
// Offsets configuration wasn't properly committed
match expected_offsets_file_size.cmp(&actual_offsets_file_size) {
Ordering::Less => {
// Happened during an appending job
// TODO: ideally we could truncate until the last offset of the last column of the
// last row inserted
// Windows has locked the file with the mmap handle, so we need to drop it
drop(reader);
self.offsets_file().get_mut().set_len(expected_offsets_file_size)?;
reader = self.jar.open_data_reader()?;
}
Ordering::Greater => {
// Happened during a pruning job
                // `num rows = (file size - 1 - size of one offset) / (num columns * offset size)`
self.jar.rows = ((actual_offsets_file_size.
saturating_sub(1). // first byte is the size of one offset
saturating_sub(OFFSET_SIZE_BYTES as u64) / // expected size of the data file
(self.jar.columns as u64)) /
OFFSET_SIZE_BYTES as u64) as usize;
// Freeze row count changed
self.jar.freeze_config()?;
}
Ordering::Equal => {}
}
// last offset should match the data_file_len
let last_offset = reader.reverse_offset(0)?;
let data_file_len = self.data_file().get_ref().metadata()?.len();
if mode.should_err() && last_offset.cmp(&data_file_len) != Ordering::Equal {
return Err(NippyJarError::InconsistentState)
}
// Offset list wasn't properly committed
match last_offset.cmp(&data_file_len) {
Ordering::Less => {
// Windows has locked the file with the mmap handle, so we need to drop it
drop(reader);
// Happened during an appending job, so we need to truncate the data, since there's
// no way to recover it.
self.data_file().get_mut().set_len(last_offset)?;
}
Ordering::Greater => {
// Happened during a pruning job, so we need to reverse iterate offsets until we
// find the matching one.
for index in 0..reader.offsets_count()? {
let offset = reader.reverse_offset(index + 1)?;
// It would only be equal if the previous row was fully pruned.
if offset <= data_file_len {
let new_len = self
.offsets_file()
.get_ref()
.metadata()?
.len()
.saturating_sub(OFFSET_SIZE_BYTES as u64 * (index as u64 + 1));
// Windows has locked the file with the mmap handle, so we need to drop it
drop(reader);
self.offsets_file().get_mut().set_len(new_len)?;
// Since we decrease the offset list, we need to check the consistency of
// `self.jar.rows` again
self.handle_consistency(ConsistencyFailStrategy::Heal)?;
break
}
}
}
Ordering::Equal => {}
}
self.offsets_file().seek(SeekFrom::End(0))?;
self.data_file().seek(SeekFrom::End(0))?;
Ok(())
}
/// Loads data and offsets files.
fn load_files(&mut self, mode: ConsistencyFailStrategy) -> Result<(), NippyJarError> {
let load_file = |path: &Path| -> Result<BufWriter<File>, NippyJarError> {
let path = path
.exists()
.then_some(path)
.ok_or_else(|| NippyJarError::MissingFile(path.to_path_buf()))?;
Ok(BufWriter::new(OpenOptions::new().read(true).write(mode.should_heal()).open(path)?))
};
self.data_file = Some(load_file(self.jar.data_path())?);
self.offsets_file = Some(load_file(&self.jar.offsets_path())?);
Ok(())
}
/// Returns a mutable reference to offsets file.
///
/// **Panics** if it does not exist.
const fn offsets_file(&mut self) -> &mut BufWriter<File> {
self.offsets_file.as_mut().expect("should exist")
}
/// Returns a mutable reference to data file.
///
/// **Panics** if it does not exist.
const fn data_file(&mut self) -> &mut BufWriter<File> {
self.data_file.as_mut().expect("should exist")
}
}
/// Strategy on encountering an inconsistent state on [`NippyJarChecker`].
#[derive(Debug, Copy, Clone)]
enum ConsistencyFailStrategy {
/// Writer should heal.
Heal,
/// Writer should throw an error.
ThrowError,
}
impl ConsistencyFailStrategy {
/// Whether writer should heal.
const fn should_heal(&self) -> bool {
matches!(self, Self::Heal)
}
/// Whether writer should throw an error.
const fn should_err(&self) -> bool {
matches!(self, Self::ThrowError)
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
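The row-count recovery in the `Ordering::Greater` branch of the checker divides out both the column count and the offset width. A minimal sketch of that arithmetic in plain std Rust; the constant mirrors `OFFSET_SIZE_BYTES` and the helper name is an illustrative assumption, not part of the crate:

```rust
/// Size of one offset entry in bytes, mirroring `OFFSET_SIZE_BYTES`.
const OFFSET_SIZE_BYTES: u64 = 8;

/// Recovers the row count from an offsets file size: the first byte stores
/// the offset width, the last entry stores the data file length, and every
/// remaining entry is one column offset.
fn rows_from_offsets_file_size(file_size: u64, columns: u64) -> usize {
    ((file_size
        .saturating_sub(1) // first byte: size of one offset
        .saturating_sub(OFFSET_SIZE_BYTES) // last entry: data file length
        / columns) /
        OFFSET_SIZE_BYTES) as usize
}

fn main() {
    // 3 columns * 10 rows = 30 offsets, plus the header byte and the
    // trailing data-file-length entry.
    let file_size = 1 + (30 + 1) * OFFSET_SIZE_BYTES;
    assert_eq!(rows_from_offsets_file_size(file_size, 3), 10);
    // An empty jar: header byte plus the single data-file-length entry.
    assert_eq!(rows_from_offsets_file_size(1 + OFFSET_SIZE_BYTES, 3), 0);
}
```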
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/writer.rs | crates/storage/nippy-jar/src/writer.rs | use crate::{
compression::Compression, ColumnResult, NippyJar, NippyJarChecker, NippyJarError,
NippyJarHeader,
};
use std::{
fs::{File, OpenOptions},
io::{BufWriter, Read, Seek, SeekFrom, Write},
path::Path,
};
/// Size of one offset in bytes.
pub(crate) const OFFSET_SIZE_BYTES: u8 = 8;
/// Writer of [`NippyJar`]. Handles table data and offsets only.
///
/// Table data is written directly to disk, while offsets and configuration need to be flushed by
/// calling `commit()`.
///
/// ## Offset file layout
/// The first byte is the size of a single offset in bytes, `m`.
/// Then, the file contains `n` entries, each with a size of `m`. Each entry represents an offset,
/// except for the last entry, which represents both the total size of the data file and the
/// next offset at which new data will be written.
///
/// ## Data file layout
/// The data file is represented as a plain sequence of bytes, without any delimiters.
#[derive(Debug)]
pub struct NippyJarWriter<H: NippyJarHeader = ()> {
/// Associated [`NippyJar`], containing all necessary configurations for data
/// handling.
jar: NippyJar<H>,
/// File handle to where the data is stored.
data_file: BufWriter<File>,
/// File handle to where the offsets are stored.
offsets_file: BufWriter<File>,
/// Temporary buffer to reuse when compressing data.
tmp_buf: Vec<u8>,
/// Used to find the maximum uncompressed size of a row in a jar.
uncompressed_row_size: usize,
/// Partial offset list which hasn't been flushed to disk.
offsets: Vec<u64>,
/// Column where writer is going to write next.
column: usize,
/// Whether the writer has changed data that needs to be committed.
dirty: bool,
}
impl<H: NippyJarHeader> NippyJarWriter<H> {
/// Creates a [`NippyJarWriter`] from [`NippyJar`].
///
    /// It will **always** attempt to heal any inconsistent state when called.
pub fn new(jar: NippyJar<H>) -> Result<Self, NippyJarError> {
let (data_file, offsets_file, is_created) =
Self::create_or_open_files(jar.data_path(), &jar.offsets_path())?;
let (jar, data_file, offsets_file) = if is_created {
// Makes sure we don't have dangling data and offset files when we just created the file
jar.freeze_config()?;
(jar, BufWriter::new(data_file), BufWriter::new(offsets_file))
} else {
// If we are opening a previously created jar, we need to check its consistency, and
// make changes if necessary.
let mut checker = NippyJarChecker::new(jar);
checker.ensure_consistency()?;
let NippyJarChecker { jar, data_file, offsets_file } = checker;
            // Calling `ensure_consistency` above fills `data_file` and `offsets_file`
(jar, data_file.expect("qed"), offsets_file.expect("qed"))
};
let mut writer = Self {
jar,
data_file,
offsets_file,
tmp_buf: Vec::with_capacity(1_000_000),
uncompressed_row_size: 0,
offsets: Vec::with_capacity(1_000_000),
column: 0,
dirty: false,
};
if !is_created {
// Commit any potential heals done above.
writer.commit()?;
}
Ok(writer)
}
/// Returns a reference to `H` of [`NippyJar`]
pub const fn user_header(&self) -> &H {
&self.jar.user_header
}
/// Returns a mutable reference to `H` of [`NippyJar`].
///
    /// Since there's no way of knowing whether `H` has actually been changed, this sets
    /// `self.dirty` to `true`.
pub const fn user_header_mut(&mut self) -> &mut H {
self.dirty = true;
&mut self.jar.user_header
}
/// Returns whether there are changes that need to be committed.
pub const fn is_dirty(&self) -> bool {
self.dirty
}
/// Sets writer as dirty.
pub const fn set_dirty(&mut self) {
self.dirty = true
}
/// Gets total writer rows in jar.
pub const fn rows(&self) -> usize {
self.jar.rows()
}
/// Consumes the writer and returns the associated [`NippyJar`].
pub fn into_jar(self) -> NippyJar<H> {
self.jar
}
fn create_or_open_files(
data: &Path,
offsets: &Path,
) -> Result<(File, File, bool), NippyJarError> {
let is_created = !data.exists() || !offsets.exists();
if !data.exists() {
// File::create is write-only (no reading possible)
File::create(data)?;
}
let mut data_file = OpenOptions::new().read(true).write(true).open(data)?;
data_file.seek(SeekFrom::End(0))?;
if !offsets.exists() {
// File::create is write-only (no reading possible)
File::create(offsets)?;
}
let mut offsets_file = OpenOptions::new().read(true).write(true).open(offsets)?;
if is_created {
let mut buf = Vec::with_capacity(1 + OFFSET_SIZE_BYTES as usize);
// First byte of the offset file is the size of one offset in bytes
buf.write_all(&[OFFSET_SIZE_BYTES])?;
// The last offset should always represent the data file len, which is 0 on
// creation.
buf.write_all(&[0; OFFSET_SIZE_BYTES as usize])?;
offsets_file.write_all(&buf)?;
offsets_file.seek(SeekFrom::End(0))?;
}
Ok((data_file, offsets_file, is_created))
}
/// Appends rows to data file. `fn commit()` should be called to flush offsets and config to
/// disk.
///
/// `column_values_per_row`: A vector where each element is a column's values in sequence,
/// corresponding to each row. The vector's length equals the number of columns.
pub fn append_rows(
&mut self,
column_values_per_row: Vec<impl IntoIterator<Item = ColumnResult<impl AsRef<[u8]>>>>,
num_rows: u64,
) -> Result<(), NippyJarError> {
let mut column_iterators = column_values_per_row
.into_iter()
.map(|v| v.into_iter())
.collect::<Vec<_>>()
.into_iter();
for _ in 0..num_rows {
let mut iterators = Vec::with_capacity(self.jar.columns);
for mut column_iter in column_iterators {
self.append_column(column_iter.next())?;
iterators.push(column_iter);
}
column_iterators = iterators.into_iter();
}
Ok(())
}
/// Appends a column to data file. `fn commit()` should be called to flush offsets and config to
/// disk.
pub fn append_column(
&mut self,
column: Option<ColumnResult<impl AsRef<[u8]>>>,
) -> Result<(), NippyJarError> {
self.dirty = true;
match column {
Some(Ok(value)) => {
if self.offsets.is_empty() {
// Represents the offset of the soon to be appended data column
self.offsets.push(self.data_file.stream_position()?);
}
let written = self.write_column(value.as_ref())?;
// Last offset represents the size of the data file if no more data is to be
// appended. Otherwise, represents the offset of the next data item.
self.offsets.push(self.offsets.last().expect("qed") + written as u64);
}
None => {
return Err(NippyJarError::UnexpectedMissingValue(
self.jar.rows as u64,
self.column as u64,
))
}
Some(Err(err)) => return Err(err.into()),
}
Ok(())
}
/// Writes column to data file. If it's the last column of the row, call `finalize_row()`
fn write_column(&mut self, value: &[u8]) -> Result<usize, NippyJarError> {
self.uncompressed_row_size += value.len();
let len = if let Some(compression) = &self.jar.compressor {
let before = self.tmp_buf.len();
let len = compression.compress_to(value, &mut self.tmp_buf)?;
self.data_file.write_all(&self.tmp_buf[before..before + len])?;
len
} else {
self.data_file.write_all(value)?;
value.len()
};
self.column += 1;
if self.jar.columns == self.column {
self.finalize_row();
}
Ok(len)
}
/// Prunes rows from data and offsets file and updates its configuration on disk
pub fn prune_rows(&mut self, num_rows: usize) -> Result<(), NippyJarError> {
self.dirty = true;
self.offsets_file.flush()?;
self.data_file.flush()?;
// Each column of a row is one offset
let num_offsets = num_rows * self.jar.columns;
// Calculate the number of offsets to prune from in-memory list
let offsets_prune_count = num_offsets.min(self.offsets.len().saturating_sub(1)); // last element is the expected size of the data file
let remaining_to_prune = num_offsets.saturating_sub(offsets_prune_count);
// Prune in-memory offsets if needed
if offsets_prune_count > 0 {
// Determine new length based on the offset to prune up to
let new_len = self.offsets[(self.offsets.len() - 1) - offsets_prune_count]; // last element is the expected size of the data file
self.offsets.truncate(self.offsets.len() - offsets_prune_count);
// Truncate the data file to the new length
self.data_file.get_mut().set_len(new_len)?;
}
// Prune from on-disk offset list if there are still rows left to prune
if remaining_to_prune > 0 {
// Get the current length of the on-disk offset file
let length = self.offsets_file.get_ref().metadata()?.len();
// Handle non-empty offset file
if length > 1 {
// first byte is reserved for `bytes_per_offset`, which is 8 initially.
let num_offsets = (length - 1) / OFFSET_SIZE_BYTES as u64;
if remaining_to_prune as u64 > num_offsets {
return Err(NippyJarError::InvalidPruning(
num_offsets,
remaining_to_prune as u64,
))
}
let new_num_offsets = num_offsets.saturating_sub(remaining_to_prune as u64);
// If all rows are to be pruned
if new_num_offsets <= 1 {
// <= 1 because the one offset would actually be the expected file data size
self.offsets_file.get_mut().set_len(1)?;
self.data_file.get_mut().set_len(0)?;
} else {
// Calculate the new length for the on-disk offset list
let new_len = 1 + new_num_offsets * OFFSET_SIZE_BYTES as u64;
// Seek to the position of the last offset
self.offsets_file
.seek(SeekFrom::Start(new_len.saturating_sub(OFFSET_SIZE_BYTES as u64)))?;
// Read the last offset value
let mut last_offset = [0u8; OFFSET_SIZE_BYTES as usize];
self.offsets_file.get_ref().read_exact(&mut last_offset)?;
let last_offset = u64::from_le_bytes(last_offset);
// Update the lengths of both the offsets and data files
self.offsets_file.get_mut().set_len(new_len)?;
self.data_file.get_mut().set_len(last_offset)?;
}
} else {
return Err(NippyJarError::InvalidPruning(0, remaining_to_prune as u64))
}
}
self.offsets_file.get_ref().sync_all()?;
self.data_file.get_ref().sync_all()?;
self.offsets_file.seek(SeekFrom::End(0))?;
self.data_file.seek(SeekFrom::End(0))?;
self.jar.rows = self.jar.rows.saturating_sub(num_rows);
if self.jar.rows == 0 {
self.jar.max_row_size = 0;
}
self.jar.freeze_config()?;
Ok(())
}
/// Updates [`NippyJar`] with the new row count and maximum uncompressed row size, while
/// resetting internal fields.
fn finalize_row(&mut self) {
self.jar.max_row_size = self.jar.max_row_size.max(self.uncompressed_row_size);
self.jar.rows += 1;
self.tmp_buf.clear();
self.uncompressed_row_size = 0;
self.column = 0;
}
/// Commits configuration and offsets to disk. It drains the internal offset list.
pub fn commit(&mut self) -> Result<(), NippyJarError> {
self.data_file.flush()?;
self.data_file.get_ref().sync_all()?;
self.commit_offsets()?;
// Flushes `max_row_size` and total `rows` to disk.
self.jar.freeze_config()?;
self.dirty = false;
Ok(())
}
/// Commits changes to the data file and offsets without synchronizing all data to disk.
///
/// This function flushes the buffered data to the data file and commits the offsets,
/// but it does not guarantee that all data is synchronized to persistent storage.
#[cfg(feature = "test-utils")]
pub fn commit_without_sync_all(&mut self) -> Result<(), NippyJarError> {
self.data_file.flush()?;
self.commit_offsets_without_sync_all()?;
// Flushes `max_row_size` and total `rows` to disk.
self.jar.freeze_config()?;
self.dirty = false;
Ok(())
}
/// Flushes offsets to disk.
pub(crate) fn commit_offsets(&mut self) -> Result<(), NippyJarError> {
self.commit_offsets_inner()?;
self.offsets_file.get_ref().sync_all()?;
Ok(())
}
#[cfg(feature = "test-utils")]
fn commit_offsets_without_sync_all(&mut self) -> Result<(), NippyJarError> {
self.commit_offsets_inner()
}
/// Flushes offsets to disk.
///
/// CAUTION: Does not call `sync_all` on the offsets file and requires a manual call to
/// `self.offsets_file.get_ref().sync_all()`.
fn commit_offsets_inner(&mut self) -> Result<(), NippyJarError> {
// The last offset on disk can be the first offset of `self.offsets` given how
// `append_column()` works alongside commit. So we need to skip it.
let mut last_offset_ondisk = if self.offsets_file.get_ref().metadata()?.len() > 1 {
self.offsets_file.seek(SeekFrom::End(-(OFFSET_SIZE_BYTES as i64)))?;
let mut buf = [0u8; OFFSET_SIZE_BYTES as usize];
self.offsets_file.get_ref().read_exact(&mut buf)?;
Some(u64::from_le_bytes(buf))
} else {
None
};
self.offsets_file.seek(SeekFrom::End(0))?;
// Appends new offsets to disk
for offset in self.offsets.drain(..) {
if let Some(last_offset_ondisk) = last_offset_ondisk.take() {
if last_offset_ondisk == offset {
continue
}
}
self.offsets_file.write_all(&offset.to_le_bytes())?;
}
self.offsets_file.flush()?;
Ok(())
}
/// Returns the maximum row size for the associated [`NippyJar`].
#[cfg(test)]
pub const fn max_row_size(&self) -> usize {
self.jar.max_row_size
}
/// Returns the column index of the current checker instance.
#[cfg(test)]
pub const fn column(&self) -> usize {
self.column
}
/// Returns a reference to the offsets vector.
#[cfg(test)]
pub fn offsets(&self) -> &[u64] {
&self.offsets
}
/// Returns a mutable reference to the offsets vector.
#[cfg(test)]
pub const fn offsets_mut(&mut self) -> &mut Vec<u64> {
&mut self.offsets
}
/// Returns the path to the offsets file for the associated [`NippyJar`].
#[cfg(test)]
pub fn offsets_path(&self) -> std::path::PathBuf {
self.jar.offsets_path()
}
/// Returns the path to the data file for the associated [`NippyJar`].
#[cfg(test)]
pub fn data_path(&self) -> &Path {
self.jar.data_path()
}
/// Returns a mutable reference to the buffered writer for the data file.
#[cfg(any(test, feature = "test-utils"))]
pub const fn data_file(&mut self) -> &mut BufWriter<File> {
&mut self.data_file
}
/// Returns a reference to the associated [`NippyJar`] instance.
#[cfg(any(test, feature = "test-utils"))]
pub const fn jar(&self) -> &NippyJar<H> {
&self.jar
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
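The offset-file layout documented on `NippyJarWriter` (one leading size byte, then fixed-width little-endian `u64` entries, the last of which doubles as the data file length) can be reproduced in a few lines of plain std Rust. The helper names below are illustrative only, not part of the crate:

```rust
const OFFSET_SIZE_BYTES: u8 = 8;

/// Serializes offsets using the documented layout: a leading byte with the
/// offset width, followed by each offset as little-endian bytes.
fn write_offsets(offsets: &[u64]) -> Vec<u8> {
    let mut buf = vec![OFFSET_SIZE_BYTES];
    for offset in offsets {
        buf.extend_from_slice(&offset.to_le_bytes());
    }
    buf
}

/// Parses the layout back into (offset width, entries); the final entry
/// represents the total data file length.
fn read_offsets(buf: &[u8]) -> (u8, Vec<u64>) {
    let width = buf[0];
    let entries = buf[1..]
        .chunks_exact(width as usize)
        .map(|chunk| u64::from_le_bytes(chunk.try_into().unwrap()))
        .collect();
    (width, entries)
}

fn main() {
    // Two rows of one column each, then the total data length (40 bytes).
    let encoded = write_offsets(&[0, 16, 40]);
    assert_eq!(encoded.len(), 1 + 3 * OFFSET_SIZE_BYTES as usize);
    let (width, entries) = read_offsets(&encoded);
    assert_eq!(width, OFFSET_SIZE_BYTES);
    assert_eq!(entries, vec![0, 16, 40]);
    assert_eq!(*entries.last().unwrap(), 40); // data file length
}
```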
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/compression/zstd.rs | crates/storage/nippy-jar/src/compression/zstd.rs | use crate::{compression::Compression, NippyJarError};
use derive_more::Deref;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use std::{
fs::File,
io::{Read, Write},
sync::Arc,
};
use tracing::*;
use zstd::bulk::Compressor;
pub use zstd::{bulk::Decompressor, dict::DecoderDictionary};
type RawDictionary = Vec<u8>;
/// Represents the state of a Zstandard compression operation.
#[derive(Debug, Default, PartialEq, Eq, Serialize, Deserialize)]
pub enum ZstdState {
/// The compressor is pending a dictionary.
#[default]
PendingDictionary,
/// The compressor is ready to perform compression.
Ready,
}
#[cfg_attr(test, derive(PartialEq))]
#[derive(Debug, Serialize, Deserialize)]
/// Zstd compression structure. Supports a compression dictionary per column.
pub struct Zstd {
/// State. Should be ready before compressing.
pub(crate) state: ZstdState,
/// Compression level. A level of `0` uses zstd's default (currently `3`).
pub(crate) level: i32,
/// Uses custom dictionaries to compress data.
pub use_dict: bool,
/// Max size of a dictionary
pub(crate) max_dict_size: usize,
/// List of column dictionaries.
#[serde(with = "dictionaries_serde")]
pub(crate) dictionaries: Option<Arc<ZstdDictionaries<'static>>>,
/// Number of columns to compress.
columns: usize,
}
impl Zstd {
/// Creates new [`Zstd`].
pub const fn new(use_dict: bool, max_dict_size: usize, columns: usize) -> Self {
Self {
state: if use_dict { ZstdState::PendingDictionary } else { ZstdState::Ready },
level: 0,
use_dict,
max_dict_size,
dictionaries: None,
columns,
}
}
/// Sets the compression level for the Zstd compression instance.
pub const fn with_level(mut self, level: i32) -> Self {
self.level = level;
self
}
/// Creates a list of [`Decompressor`] if using dictionaries.
pub fn decompressors(&self) -> Result<Vec<Decompressor<'_>>, NippyJarError> {
if let Some(dictionaries) = &self.dictionaries {
debug_assert!(dictionaries.len() == self.columns);
return dictionaries.decompressors()
}
Ok(vec![])
}
/// If using dictionaries, creates a list of [`Compressor`].
pub fn compressors(&self) -> Result<Option<Vec<Compressor<'_>>>, NippyJarError> {
match self.state {
ZstdState::PendingDictionary => Err(NippyJarError::CompressorNotReady),
ZstdState::Ready => {
if !self.use_dict {
return Ok(None)
}
if let Some(dictionaries) = &self.dictionaries {
debug!(target: "nippy-jar", count=?dictionaries.len(), "Generating ZSTD compressor dictionaries.");
return Ok(Some(dictionaries.compressors()?))
}
Ok(None)
}
}
}
/// Compresses a value using a dictionary. Reserves additional capacity for `buffer` if
/// necessary.
pub fn compress_with_dictionary(
column_value: &[u8],
buffer: &mut Vec<u8>,
handle: &mut File,
compressor: Option<&mut Compressor<'_>>,
) -> Result<(), NippyJarError> {
if let Some(compressor) = compressor {
            // The compressor requires the destination buffer to be big enough to write to,
            // otherwise it fails. However, we don't know in advance how big the compressed
            // output will be: for small enough inputs, the compressed form can actually be
            // larger than the input. So we keep growing the buffer and retrying; if it
            // still fails after a few attempts, it's probably a different kind of error.
let mut multiplier = 1;
while let Err(err) = compressor.compress_to_buffer(column_value, buffer) {
buffer.reserve(column_value.len() * multiplier);
multiplier += 1;
if multiplier == 5 {
return Err(NippyJarError::Disconnect(err))
}
}
handle.write_all(buffer)?;
buffer.clear();
} else {
handle.write_all(column_value)?;
}
Ok(())
}
/// Appends a decompressed value using a dictionary to a user provided buffer.
pub fn decompress_with_dictionary(
column_value: &[u8],
output: &mut Vec<u8>,
decompressor: &mut Decompressor<'_>,
) -> Result<(), NippyJarError> {
let previous_length = output.len();
// SAFETY: We're setting len to the existing capacity.
unsafe {
output.set_len(output.capacity());
}
match decompressor.decompress_to_buffer(column_value, &mut output[previous_length..]) {
Ok(written) => {
// SAFETY: `decompress_to_buffer` can only write if there's enough capacity.
// Therefore, it shouldn't write more than our capacity.
unsafe {
output.set_len(previous_length + written);
}
Ok(())
}
Err(_) => {
// SAFETY: we are resetting it to the previous value.
unsafe {
output.set_len(previous_length);
}
Err(NippyJarError::OutputTooSmall)
}
}
}
}
impl Compression for Zstd {
fn decompress_to(&self, value: &[u8], dest: &mut Vec<u8>) -> Result<(), NippyJarError> {
let mut decoder = zstd::Decoder::with_dictionary(value, &[])?;
decoder.read_to_end(dest)?;
Ok(())
}
fn decompress(&self, value: &[u8]) -> Result<Vec<u8>, NippyJarError> {
let mut decompressed = Vec::with_capacity(value.len() * 2);
let mut decoder = zstd::Decoder::new(value)?;
decoder.read_to_end(&mut decompressed)?;
Ok(decompressed)
}
fn compress_to(&self, src: &[u8], dest: &mut Vec<u8>) -> Result<usize, NippyJarError> {
let before = dest.len();
let mut encoder = zstd::Encoder::new(dest, self.level)?;
encoder.write_all(src)?;
let dest = encoder.finish()?;
Ok(dest.len() - before)
}
fn compress(&self, src: &[u8]) -> Result<Vec<u8>, NippyJarError> {
let mut compressed = Vec::with_capacity(src.len());
self.compress_to(src, &mut compressed)?;
Ok(compressed)
}
fn is_ready(&self) -> bool {
matches!(self.state, ZstdState::Ready)
}
#[cfg(test)]
/// If using it with dictionaries, prepares a dictionary for each column.
fn prepare_compression(
&mut self,
columns: Vec<impl IntoIterator<Item = Vec<u8>>>,
) -> Result<(), NippyJarError> {
if !self.use_dict {
return Ok(())
}
        // There's a 2GB hard limit per column data set for training
// REFERENCE: https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md#dictionary-builder
// ```
// -M#, --memory=#: Limit the amount of sample data loaded for training (default: 2 GB).
// Note that the default (2 GB) is also the maximum. This parameter can be useful in
// situations where the training set size is not well controlled and could be potentially
// very large. Since speed of the training process is directly correlated to the size of the
        // training sample set, a smaller sample set leads to faster training.
// ```
if columns.len() != self.columns {
return Err(NippyJarError::ColumnLenMismatch(self.columns, columns.len()))
}
let mut dictionaries = Vec::with_capacity(columns.len());
for column in columns {
// ZSTD requires all training data to be continuous in memory, alongside the size of
// each entry
let mut sizes = vec![];
let data: Vec<_> = column
.into_iter()
.flat_map(|data| {
sizes.push(data.len());
data
})
.collect();
dictionaries.push(zstd::dict::from_continuous(&data, &sizes, self.max_dict_size)?);
}
debug_assert_eq!(dictionaries.len(), self.columns);
self.dictionaries = Some(Arc::new(ZstdDictionaries::new(dictionaries)));
self.state = ZstdState::Ready;
Ok(())
}
}
mod dictionaries_serde {
use super::*;
pub(crate) fn serialize<S>(
dictionaries: &Option<Arc<ZstdDictionaries<'static>>>,
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match dictionaries {
Some(dicts) => serializer.serialize_some(dicts.as_ref()),
None => serializer.serialize_none(),
}
}
pub(crate) fn deserialize<'de, D>(
deserializer: D,
) -> Result<Option<Arc<ZstdDictionaries<'static>>>, D::Error>
where
D: Deserializer<'de>,
{
let dictionaries: Option<Vec<RawDictionary>> = Option::deserialize(deserializer)?;
Ok(dictionaries.map(|dicts| Arc::new(ZstdDictionaries::load(dicts))))
}
}
/// List of [`ZstdDictionary`]
#[cfg_attr(test, derive(PartialEq))]
#[derive(Serialize, Deserialize, Deref)]
pub(crate) struct ZstdDictionaries<'a>(Vec<ZstdDictionary<'a>>);
impl std::fmt::Debug for ZstdDictionaries<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ZstdDictionaries").field("num", &self.len()).finish_non_exhaustive()
}
}
impl ZstdDictionaries<'_> {
#[cfg(test)]
/// Creates [`ZstdDictionaries`].
pub(crate) fn new(raw: Vec<RawDictionary>) -> Self {
Self(raw.into_iter().map(ZstdDictionary::Raw).collect())
}
/// Loads a list [`RawDictionary`] into a list of [`ZstdDictionary::Loaded`].
pub(crate) fn load(raw: Vec<RawDictionary>) -> Self {
Self(
raw.into_iter()
.map(|dict| ZstdDictionary::Loaded(DecoderDictionary::copy(&dict)))
.collect(),
)
}
/// Creates a list of decompressors from a list of [`ZstdDictionary::Loaded`].
pub(crate) fn decompressors(&self) -> Result<Vec<Decompressor<'_>>, NippyJarError> {
Ok(self
.iter()
.flat_map(|dict| {
dict.loaded()
.ok_or(NippyJarError::DictionaryNotLoaded)
.map(Decompressor::with_prepared_dictionary)
})
.collect::<Result<Vec<_>, _>>()?)
}
/// Creates a list of compressors from a list of [`ZstdDictionary::Raw`].
pub(crate) fn compressors(&self) -> Result<Vec<Compressor<'_>>, NippyJarError> {
Ok(self
.iter()
.flat_map(|dict| {
dict.raw()
.ok_or(NippyJarError::CompressorNotAllowed)
.map(|dict| Compressor::with_dictionary(0, dict))
})
.collect::<Result<Vec<_>, _>>()?)
}
}
/// A Zstd dictionary. It's created and serialized with [`ZstdDictionary::Raw`], and deserialized as
/// [`ZstdDictionary::Loaded`].
pub(crate) enum ZstdDictionary<'a> {
#[cfg_attr(not(test), expect(dead_code))]
Raw(RawDictionary),
Loaded(DecoderDictionary<'a>),
}
impl ZstdDictionary<'_> {
/// Returns a reference to the expected `RawDictionary`
pub(crate) const fn raw(&self) -> Option<&RawDictionary> {
match self {
ZstdDictionary::Raw(dict) => Some(dict),
ZstdDictionary::Loaded(_) => None,
}
}
/// Returns a reference to the expected `DecoderDictionary`
pub(crate) const fn loaded(&self) -> Option<&DecoderDictionary<'_>> {
match self {
ZstdDictionary::Raw(_) => None,
ZstdDictionary::Loaded(dict) => Some(dict),
}
}
}
impl<'de> Deserialize<'de> for ZstdDictionary<'_> {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
let dict = RawDictionary::deserialize(deserializer)?;
Ok(Self::Loaded(DecoderDictionary::copy(&dict)))
}
}
impl Serialize for ZstdDictionary<'_> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match self {
ZstdDictionary::Raw(r) => r.serialize(serializer),
ZstdDictionary::Loaded(_) => unreachable!(),
}
}
}
#[cfg(test)]
impl PartialEq for ZstdDictionary<'_> {
fn eq(&self, other: &Self) -> bool {
if let (Self::Raw(a), Self::Raw(b)) = (self, &other) {
return a == b
}
unimplemented!(
"`DecoderDictionary` can't be compared. So comparison should be done after decompressing a value."
);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
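`Zstd::decompress_with_dictionary` above appends into a caller-provided buffer by temporarily exposing the buffer's full capacity and then trimming `len` back to what was actually written. The same bookkeeping can be shown without zstd, using a stand-in `fill` function; everything here is an illustrative sketch (the sketch zero-extends via `resize` instead of using `unsafe` `set_len`, trading a little speed for safety):

```rust
/// Stand-in for a decompressor: copies `src` into `out` and reports the
/// number of bytes written, or `None` if `out` is too small.
fn fill(src: &[u8], out: &mut [u8]) -> Option<usize> {
    if out.len() < src.len() {
        return None;
    }
    out[..src.len()].copy_from_slice(src);
    Some(src.len())
}

/// Appends `src` to `output` through its spare capacity, mirroring the
/// length bookkeeping in `Zstd::decompress_with_dictionary`.
fn append_via_spare_capacity(src: &[u8], output: &mut Vec<u8>) -> Result<(), &'static str> {
    let previous_length = output.len();
    // Expose the whole allocation as writable space (zero-initialized here,
    // where the original uses `set_len` over uninitialized capacity).
    output.resize(output.capacity(), 0);
    match fill(src, &mut output[previous_length..]) {
        Some(written) => {
            // Keep only what was actually written.
            output.truncate(previous_length + written);
            Ok(())
        }
        None => {
            // Restore the original length on failure.
            output.truncate(previous_length);
            Err("output too small")
        }
    }
}

fn main() {
    let mut out = Vec::with_capacity(8);
    out.extend_from_slice(b"ab");
    append_via_spare_capacity(b"cdef", &mut out).unwrap();
    assert_eq!(out, b"abcdef".to_vec());
    // No spare capacity at all: the buffer is left unchanged.
    let mut small: Vec<u8> = Vec::new();
    assert!(append_via_spare_capacity(b"xyz", &mut small).is_err());
    assert!(small.is_empty());
}
```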
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/compression/lz4.rs | crates/storage/nippy-jar/src/compression/lz4.rs | use crate::{compression::Compression, NippyJarError};
use serde::{Deserialize, Serialize};
/// Wrapper type for `lz4_flex` that implements [`Compression`].
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Default)]
#[non_exhaustive]
pub struct Lz4;
impl Compression for Lz4 {
fn decompress_to(&self, value: &[u8], dest: &mut Vec<u8>) -> Result<(), NippyJarError> {
let previous_length = dest.len();
// Create a mutable slice from the spare capacity
let spare_capacity = dest.spare_capacity_mut();
// SAFETY: This is safe because we're using MaybeUninit's as_mut_ptr
let output = unsafe {
std::slice::from_raw_parts_mut(
spare_capacity.as_mut_ptr() as *mut u8,
spare_capacity.len(),
)
};
match lz4_flex::decompress_into(value, output) {
Ok(written) => {
                // SAFETY: `decompress_into` can only write if there's enough capacity.
                // Therefore, it shouldn't write more than our capacity.
unsafe {
dest.set_len(previous_length + written);
}
Ok(())
}
Err(_) => Err(NippyJarError::OutputTooSmall),
}
}
fn decompress(&self, value: &[u8]) -> Result<Vec<u8>, NippyJarError> {
let mut multiplier = 1;
loop {
match lz4_flex::decompress(value, multiplier * value.len()) {
Ok(v) => return Ok(v),
Err(err) => {
multiplier *= 2;
if multiplier == 16 {
return Err(NippyJarError::Custom(err.to_string()))
}
}
}
}
}
fn compress_to(&self, src: &[u8], dest: &mut Vec<u8>) -> Result<usize, NippyJarError> {
let previous_length = dest.len();
// Create a mutable slice from the spare capacity
let spare_capacity = dest.spare_capacity_mut();
// SAFETY: This is safe because we're using MaybeUninit's as_mut_ptr
let output = unsafe {
std::slice::from_raw_parts_mut(
spare_capacity.as_mut_ptr() as *mut u8,
spare_capacity.len(),
)
};
match lz4_flex::compress_into(src, output) {
Ok(written) => {
// SAFETY: `compress_into` can only write if there's enough capacity. Therefore, it
// shouldn't write more than our capacity.
unsafe {
dest.set_len(previous_length + written);
}
Ok(written)
}
Err(_) => Err(NippyJarError::OutputTooSmall),
}
}
fn compress(&self, src: &[u8]) -> Result<Vec<u8>, NippyJarError> {
Ok(lz4_flex::compress(src))
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/storage/nippy-jar/src/compression/mod.rs | crates/storage/nippy-jar/src/compression/mod.rs | use crate::NippyJarError;
use serde::{Deserialize, Serialize};
mod zstd;
pub use self::zstd::{DecoderDictionary, Decompressor, Zstd, ZstdState};
mod lz4;
pub use self::lz4::Lz4;
/// Trait that will compress column values
pub trait Compression: Serialize + for<'a> Deserialize<'a> {
/// Appends decompressed data to the dest buffer. Requires `dest` to have sufficient capacity.
fn decompress_to(&self, value: &[u8], dest: &mut Vec<u8>) -> Result<(), NippyJarError>;
/// Returns decompressed data.
fn decompress(&self, value: &[u8]) -> Result<Vec<u8>, NippyJarError>;
/// Appends compressed data from `src` to `dest`. Requires `dest` to have sufficient
/// capacity.
///
/// Returns number of bytes written to `dest`.
fn compress_to(&self, src: &[u8], dest: &mut Vec<u8>) -> Result<usize, NippyJarError>;
/// Compresses data from `src`
fn compress(&self, src: &[u8]) -> Result<Vec<u8>, NippyJarError>;
/// Returns `true` if it's ready to compress.
///
    /// Example: it will return `false` if `zstd` with a dictionary is configured but the
    /// dictionary hasn't been generated yet.
fn is_ready(&self) -> bool {
true
}
#[cfg(test)]
/// If required, prepares compression algorithm with an early pass on the data.
fn prepare_compression(
&mut self,
_columns: Vec<impl IntoIterator<Item = Vec<u8>>>,
) -> Result<(), NippyJarError> {
Ok(())
}
}
/// Enum with different [`Compression`] types.
#[derive(Debug, Serialize, Deserialize)]
#[cfg_attr(test, derive(PartialEq))]
pub enum Compressors {
/// Zstandard compression algorithm with custom settings.
Zstd(Zstd),
/// LZ4 compression algorithm with custom settings.
Lz4(Lz4),
}
impl Compression for Compressors {
fn decompress_to(&self, value: &[u8], dest: &mut Vec<u8>) -> Result<(), NippyJarError> {
match self {
Self::Zstd(zstd) => zstd.decompress_to(value, dest),
Self::Lz4(lz4) => lz4.decompress_to(value, dest),
}
}
fn decompress(&self, value: &[u8]) -> Result<Vec<u8>, NippyJarError> {
match self {
Self::Zstd(zstd) => zstd.decompress(value),
Self::Lz4(lz4) => lz4.decompress(value),
}
}
fn compress_to(&self, src: &[u8], dest: &mut Vec<u8>) -> Result<usize, NippyJarError> {
let initial_capacity = dest.capacity();
loop {
let result = match self {
Self::Zstd(zstd) => zstd.compress_to(src, dest),
Self::Lz4(lz4) => lz4.compress_to(src, dest),
};
match result {
Ok(v) => return Ok(v),
Err(err) => match err {
NippyJarError::OutputTooSmall => {
dest.reserve(initial_capacity);
}
_ => return Err(err),
},
}
}
}
fn compress(&self, src: &[u8]) -> Result<Vec<u8>, NippyJarError> {
match self {
Self::Zstd(zstd) => zstd.compress(src),
Self::Lz4(lz4) => lz4.compress(src),
}
}
fn is_ready(&self) -> bool {
match self {
Self::Zstd(zstd) => zstd.is_ready(),
Self::Lz4(lz4) => lz4.is_ready(),
}
}
#[cfg(test)]
fn prepare_compression(
&mut self,
columns: Vec<impl IntoIterator<Item = Vec<u8>>>,
) -> Result<(), NippyJarError> {
match self {
Self::Zstd(zstd) => zstd.prepare_compression(columns),
Self::Lz4(lz4) => lz4.prepare_compression(columns),
}
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
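`Compressors::compress_to` retries with a growing destination whenever the inner compressor reports `OutputTooSmall`. A generic version of that loop, with a stand-in error type and compressor (all names here are illustrative assumptions, and the sketch grows total capacity on every retry so the loop provably makes progress):

```rust
#[derive(Debug, PartialEq)]
enum SketchError {
    OutputTooSmall,
}

/// Stand-in compressor: needs `src.len()` bytes of spare capacity in `dest`.
fn compress_to(src: &[u8], dest: &mut Vec<u8>) -> Result<usize, SketchError> {
    if dest.capacity() - dest.len() < src.len() {
        return Err(SketchError::OutputTooSmall);
    }
    dest.extend_from_slice(src);
    Ok(src.len())
}

/// Retries compression, growing `dest` on each `OutputTooSmall`, mirroring
/// the retry loop in `Compressors::compress_to`.
fn compress_with_retry(src: &[u8], dest: &mut Vec<u8>) -> Result<usize, SketchError> {
    let initial_capacity = dest.capacity().max(1);
    loop {
        match compress_to(src, dest) {
            Ok(written) => return Ok(written),
            Err(SketchError::OutputTooSmall) => {
                // Grow total capacity by at least `initial_capacity` so the
                // loop always terminates.
                let spare = dest.capacity() - dest.len();
                dest.reserve(spare + initial_capacity);
            }
        }
    }
}

fn main() {
    let mut dest = Vec::with_capacity(4);
    let written = compress_with_retry(&[7u8; 16], &mut dest).unwrap();
    assert_eq!(written, 16);
    assert_eq!(dest, vec![7u8; 16]);
}
```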
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/optimism/evm/src/config.rs | crates/optimism/evm/src/config.rs | use alloy_consensus::BlockHeader;
use op_revm::OpSpecId;
use reth_optimism_forks::OpHardforks;
use revm::primitives::{Address, Bytes, B256};
/// Context relevant for execution of the next block w.r.t. OP.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct OpNextBlockEnvAttributes {
/// The timestamp of the next block.
pub timestamp: u64,
/// The suggested fee recipient for the next block.
pub suggested_fee_recipient: Address,
/// The randomness value for the next block.
pub prev_randao: B256,
/// Block gas limit.
pub gas_limit: u64,
/// The parent beacon block root.
pub parent_beacon_block_root: Option<B256>,
    /// Encoded EIP-1559 parameters to include in the block's `extra_data` field.
pub extra_data: Bytes,
}
/// Maps the latest active hardfork at the given header's timestamp to a revm [`OpSpecId`].
pub fn revm_spec(chain_spec: impl OpHardforks, header: impl BlockHeader) -> OpSpecId {
revm_spec_by_timestamp_after_bedrock(chain_spec, header.timestamp())
}
/// Returns the revm [`OpSpecId`] at the given timestamp.
///
/// # Note
///
/// This is only intended to be used after Bedrock, when hardforks are activated by
/// timestamp.
pub fn revm_spec_by_timestamp_after_bedrock(
chain_spec: impl OpHardforks,
timestamp: u64,
) -> OpSpecId {
if chain_spec.is_interop_active_at_timestamp(timestamp) {
OpSpecId::INTEROP
} else if chain_spec.is_isthmus_active_at_timestamp(timestamp) {
OpSpecId::ISTHMUS
} else if chain_spec.is_holocene_active_at_timestamp(timestamp) {
OpSpecId::HOLOCENE
} else if chain_spec.is_granite_active_at_timestamp(timestamp) {
OpSpecId::GRANITE
} else if chain_spec.is_fjord_active_at_timestamp(timestamp) {
OpSpecId::FJORD
} else if chain_spec.is_ecotone_active_at_timestamp(timestamp) {
OpSpecId::ECOTONE
} else if chain_spec.is_canyon_active_at_timestamp(timestamp) {
OpSpecId::CANYON
} else if chain_spec.is_regolith_active_at_timestamp(timestamp) {
OpSpecId::REGOLITH
} else {
OpSpecId::BEDROCK
}
}
#[cfg(feature = "rpc")]
impl<H: alloy_consensus::BlockHeader> reth_rpc_eth_api::helpers::pending_block::BuildPendingEnv<H>
for OpNextBlockEnvAttributes
{
fn build_pending_env(parent: &crate::SealedHeader<H>) -> Self {
Self {
timestamp: parent.timestamp().saturating_add(12),
suggested_fee_recipient: parent.beneficiary(),
prev_randao: alloy_primitives::B256::random(),
gas_limit: parent.gas_limit(),
parent_beacon_block_root: parent.parent_beacon_block_root(),
extra_data: parent.extra_data().clone(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_consensus::Header;
use reth_chainspec::ChainSpecBuilder;
use reth_optimism_chainspec::{OpChainSpec, OpChainSpecBuilder};
#[test]
fn test_revm_spec_by_timestamp_after_merge() {
#[inline(always)]
fn op_cs(f: impl FnOnce(OpChainSpecBuilder) -> OpChainSpecBuilder) -> OpChainSpec {
let cs = ChainSpecBuilder::mainnet().chain(reth_chainspec::Chain::from_id(10)).into();
f(cs).build()
}
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.interop_activated()), 0),
OpSpecId::INTEROP
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.isthmus_activated()), 0),
OpSpecId::ISTHMUS
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.holocene_activated()), 0),
OpSpecId::HOLOCENE
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.granite_activated()), 0),
OpSpecId::GRANITE
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.fjord_activated()), 0),
OpSpecId::FJORD
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.ecotone_activated()), 0),
OpSpecId::ECOTONE
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.canyon_activated()), 0),
OpSpecId::CANYON
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.bedrock_activated()), 0),
OpSpecId::BEDROCK
);
assert_eq!(
revm_spec_by_timestamp_after_bedrock(op_cs(|cs| cs.regolith_activated()), 0),
OpSpecId::REGOLITH
);
}
#[test]
fn test_to_revm_spec() {
#[inline(always)]
fn op_cs(f: impl FnOnce(OpChainSpecBuilder) -> OpChainSpecBuilder) -> OpChainSpec {
let cs = ChainSpecBuilder::mainnet().chain(reth_chainspec::Chain::from_id(10)).into();
f(cs).build()
}
assert_eq!(
revm_spec(op_cs(|cs| cs.isthmus_activated()), Header::default()),
OpSpecId::ISTHMUS
);
assert_eq!(
revm_spec(op_cs(|cs| cs.holocene_activated()), Header::default()),
OpSpecId::HOLOCENE
);
assert_eq!(
revm_spec(op_cs(|cs| cs.granite_activated()), Header::default()),
OpSpecId::GRANITE
);
assert_eq!(revm_spec(op_cs(|cs| cs.fjord_activated()), Header::default()), OpSpecId::FJORD);
assert_eq!(
revm_spec(op_cs(|cs| cs.ecotone_activated()), Header::default()),
OpSpecId::ECOTONE
);
assert_eq!(
revm_spec(op_cs(|cs| cs.canyon_activated()), Header::default()),
OpSpecId::CANYON
);
assert_eq!(
revm_spec(op_cs(|cs| cs.bedrock_activated()), Header::default()),
OpSpecId::BEDROCK
);
assert_eq!(
revm_spec(op_cs(|cs| cs.regolith_activated()), Header::default()),
OpSpecId::REGOLITH
);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
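A self-contained sketch of the newest-first selection performed by `revm_spec_by_timestamp_after_bedrock` above. The fork names mirror a subset of `OpSpecId`, but the activation timestamps here are made up for the demo and are not the real OP schedule.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Spec {
    Bedrock,
    Regolith,
    Canyon,
    Ecotone,
}

/// Returns the latest spec active at `timestamp`. The checks must run
/// newest-first so the most recently activated fork wins.
fn spec_by_timestamp(timestamp: u64) -> Spec {
    const ECOTONE_TIME: u64 = 3_000;
    const CANYON_TIME: u64 = 2_000;
    const REGOLITH_TIME: u64 = 1_000;
    if timestamp >= ECOTONE_TIME {
        Spec::Ecotone
    } else if timestamp >= CANYON_TIME {
        Spec::Canyon
    } else if timestamp >= REGOLITH_TIME {
        Spec::Regolith
    } else {
        Spec::Bedrock
    }
}

fn main() {
    assert_eq!(spec_by_timestamp(0), Spec::Bedrock);
    assert_eq!(spec_by_timestamp(1_500), Spec::Regolith);
    assert_eq!(spec_by_timestamp(2_500), Spec::Canyon);
    assert_eq!(spec_by_timestamp(10_000), Spec::Ecotone);
    println!("ok");
}
```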
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/optimism/evm/src/lib.rs | crates/optimism/evm/src/lib.rs | //! EVM config for vanilla optimism.
#![doc(
html_logo_url = "https://raw.githubusercontent.com/paradigmxyz/reth/main/assets/reth-docs.png",
html_favicon_url = "https://avatars0.githubusercontent.com/u/97369466?s=256",
issue_tracker_base_url = "https://github.com/SeismicSystems/seismic-reth/issues/"
)]
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
#![cfg_attr(not(feature = "std"), no_std)]
#![cfg_attr(not(test), warn(unused_crate_dependencies))]
extern crate alloc;
use alloc::sync::Arc;
use alloy_consensus::{BlockHeader, Header};
use alloy_eips::Decodable2718;
use alloy_evm::{FromRecoveredTx, FromTxWithEncoded};
use alloy_op_evm::block::receipt_builder::OpReceiptBuilder;
use alloy_primitives::U256;
use core::fmt::Debug;
use op_alloy_consensus::EIP1559ParamError;
use op_alloy_rpc_types_engine::OpExecutionData;
use op_revm::{OpSpecId, OpTransaction};
use reth_chainspec::EthChainSpec;
use reth_evm::{
ConfigureEngineEvm, ConfigureEvm, EvmEnv, EvmEnvFor, ExecutableTxIterator, ExecutionCtxFor,
};
use reth_optimism_chainspec::OpChainSpec;
use reth_optimism_forks::OpHardforks;
use reth_optimism_primitives::{DepositReceipt, OpPrimitives};
use reth_primitives_traits::{
NodePrimitives, SealedBlock, SealedHeader, SignedTransaction, TxTy, WithEncoded,
};
use reth_storage_errors::any::AnyError;
use revm::{
context::{BlockEnv, CfgEnv, TxEnv},
context_interface::block::BlobExcessGasAndPrice,
primitives::hardfork::SpecId,
};
mod config;
pub use config::{revm_spec, revm_spec_by_timestamp_after_bedrock, OpNextBlockEnvAttributes};
mod execute;
pub use execute::*;
pub mod l1;
pub use l1::*;
mod receipts;
pub use receipts::*;
mod build;
pub use build::OpBlockAssembler;
mod error;
pub use error::OpBlockExecutionError;
pub use alloy_op_evm::{OpBlockExecutionCtx, OpBlockExecutorFactory, OpEvm, OpEvmFactory};
/// Optimism-related EVM configuration.
#[derive(Debug)]
pub struct OpEvmConfig<
ChainSpec = OpChainSpec,
N: NodePrimitives = OpPrimitives,
R = OpRethReceiptBuilder,
> {
/// Inner [`OpBlockExecutorFactory`].
pub executor_factory: OpBlockExecutorFactory<R, Arc<ChainSpec>>,
/// Optimism block assembler.
pub block_assembler: OpBlockAssembler<ChainSpec>,
_pd: core::marker::PhantomData<N>,
}
impl<ChainSpec, N: NodePrimitives, R: Clone> Clone for OpEvmConfig<ChainSpec, N, R> {
fn clone(&self) -> Self {
Self {
executor_factory: self.executor_factory.clone(),
block_assembler: self.block_assembler.clone(),
_pd: self._pd,
}
}
}
impl<ChainSpec: OpHardforks> OpEvmConfig<ChainSpec> {
/// Creates a new [`OpEvmConfig`] with the given chain spec for OP chains.
pub fn optimism(chain_spec: Arc<ChainSpec>) -> Self {
Self::new(chain_spec, OpRethReceiptBuilder::default())
}
}
impl<ChainSpec: OpHardforks, N: NodePrimitives, R> OpEvmConfig<ChainSpec, N, R> {
/// Creates a new [`OpEvmConfig`] with the given chain spec.
pub fn new(chain_spec: Arc<ChainSpec>, receipt_builder: R) -> Self {
Self {
block_assembler: OpBlockAssembler::new(chain_spec.clone()),
executor_factory: OpBlockExecutorFactory::new(
receipt_builder,
chain_spec,
OpEvmFactory::default(),
),
_pd: core::marker::PhantomData,
}
}
/// Returns the chain spec associated with this configuration.
pub const fn chain_spec(&self) -> &Arc<ChainSpec> {
self.executor_factory.spec()
}
}
impl<ChainSpec, N, R> ConfigureEvm for OpEvmConfig<ChainSpec, N, R>
where
ChainSpec: EthChainSpec<Header = Header> + OpHardforks,
N: NodePrimitives<
Receipt = R::Receipt,
SignedTx = R::Transaction,
BlockHeader = Header,
BlockBody = alloy_consensus::BlockBody<R::Transaction>,
Block = alloy_consensus::Block<R::Transaction>,
>,
OpTransaction<TxEnv>: FromRecoveredTx<N::SignedTx> + FromTxWithEncoded<N::SignedTx>,
R: OpReceiptBuilder<Receipt: DepositReceipt, Transaction: SignedTransaction>,
Self: Send + Sync + Unpin + Clone + 'static,
{
type Primitives = N;
type Error = EIP1559ParamError;
type NextBlockEnvCtx = OpNextBlockEnvAttributes;
type BlockExecutorFactory = OpBlockExecutorFactory<R, Arc<ChainSpec>>;
type BlockAssembler = OpBlockAssembler<ChainSpec>;
fn block_executor_factory(&self) -> &Self::BlockExecutorFactory {
&self.executor_factory
}
fn block_assembler(&self) -> &Self::BlockAssembler {
&self.block_assembler
}
fn evm_env(&self, header: &Header) -> EvmEnv<OpSpecId> {
let spec = config::revm_spec(self.chain_spec(), header);
let cfg_env = CfgEnv::new().with_chain_id(self.chain_spec().chain().id()).with_spec(spec);
let blob_excess_gas_and_price = spec
.into_eth_spec()
.is_enabled_in(SpecId::CANCUN)
.then_some(BlobExcessGasAndPrice { excess_blob_gas: 0, blob_gasprice: 1 });
let block_env = BlockEnv {
number: U256::from(header.number()),
beneficiary: header.beneficiary(),
timestamp: U256::from(header.timestamp()),
difficulty: if spec.into_eth_spec() >= SpecId::MERGE {
U256::ZERO
} else {
header.difficulty()
},
prevrandao: if spec.into_eth_spec() >= SpecId::MERGE {
header.mix_hash()
} else {
None
},
gas_limit: header.gas_limit(),
basefee: header.base_fee_per_gas().unwrap_or_default(),
// EIP-4844 excess blob gas of this block, introduced in Cancun
blob_excess_gas_and_price,
};
EvmEnv { cfg_env, block_env }
}
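A sketch of the merge-dependent rules applied when filling the block env in `evm_env` above: post-merge, `difficulty` is forced to zero and `prevrandao` is taken from the header's mix hash; pre-merge, the header difficulty is kept and there is no prevrandao. Types are simplified stand-ins for `U256`/`B256`.

```rust
fn difficulty_and_prevrandao(
    post_merge: bool,
    header_difficulty: u128,
    mix_hash: [u8; 32],
) -> (u128, Option<[u8; 32]>) {
    if post_merge {
        // Post-merge: difficulty is zero, randomness comes from the mix hash.
        (0, Some(mix_hash))
    } else {
        // Pre-merge: keep the header difficulty, no prevrandao.
        (header_difficulty, None)
    }
}

fn main() {
    assert_eq!(difficulty_and_prevrandao(true, 42, [1; 32]), (0, Some([1; 32])));
    assert_eq!(difficulty_and_prevrandao(false, 42, [1; 32]), (42, None));
    println!("ok");
}
```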
fn next_evm_env(
&self,
parent: &Header,
attributes: &Self::NextBlockEnvCtx,
) -> Result<EvmEnv<OpSpecId>, Self::Error> {
// ensure we're not missing any timestamp based hardforks
let spec_id = revm_spec_by_timestamp_after_bedrock(self.chain_spec(), attributes.timestamp);
// configure evm env based on parent block
let cfg_env =
CfgEnv::new().with_chain_id(self.chain_spec().chain().id()).with_spec(spec_id);
// if the parent block did not have excess blob gas (i.e. it was pre-cancun), but it is
        // cancun now, we need to set the excess blob gas to the default value (0)
let blob_excess_gas_and_price = spec_id
.into_eth_spec()
.is_enabled_in(SpecId::CANCUN)
.then_some(BlobExcessGasAndPrice { excess_blob_gas: 0, blob_gasprice: 1 });
let block_env = BlockEnv {
number: U256::from(parent.number() + 1),
beneficiary: attributes.suggested_fee_recipient,
timestamp: U256::from(attributes.timestamp),
difficulty: U256::ZERO,
prevrandao: Some(attributes.prev_randao),
gas_limit: attributes.gas_limit,
// calculate basefee based on parent block's gas usage
basefee: self
.chain_spec()
.next_block_base_fee(parent, attributes.timestamp)
.unwrap_or_default(),
// calculate excess gas based on parent block's blob gas usage
blob_excess_gas_and_price,
};
Ok(EvmEnv { cfg_env, block_env })
}
fn context_for_block(&self, block: &'_ SealedBlock<N::Block>) -> OpBlockExecutionCtx {
OpBlockExecutionCtx {
parent_hash: block.header().parent_hash(),
parent_beacon_block_root: block.header().parent_beacon_block_root(),
extra_data: block.header().extra_data().clone(),
}
}
fn context_for_next_block(
&self,
parent: &SealedHeader<N::BlockHeader>,
attributes: Self::NextBlockEnvCtx,
) -> OpBlockExecutionCtx {
OpBlockExecutionCtx {
parent_hash: parent.hash(),
parent_beacon_block_root: attributes.parent_beacon_block_root,
extra_data: attributes.extra_data,
}
}
}
impl<ChainSpec, N, R> ConfigureEngineEvm<OpExecutionData> for OpEvmConfig<ChainSpec, N, R>
where
ChainSpec: EthChainSpec<Header = Header> + OpHardforks,
N: NodePrimitives<
Receipt = R::Receipt,
SignedTx = R::Transaction,
BlockHeader = Header,
BlockBody = alloy_consensus::BlockBody<R::Transaction>,
Block = alloy_consensus::Block<R::Transaction>,
>,
OpTransaction<TxEnv>: FromRecoveredTx<N::SignedTx> + FromTxWithEncoded<N::SignedTx>,
R: OpReceiptBuilder<Receipt: DepositReceipt, Transaction: SignedTransaction>,
Self: Send + Sync + Unpin + Clone + 'static,
{
fn evm_env_for_payload(&self, payload: &OpExecutionData) -> EvmEnvFor<Self> {
let timestamp = payload.payload.timestamp();
let block_number = payload.payload.block_number();
let spec = revm_spec_by_timestamp_after_bedrock(self.chain_spec(), timestamp);
let cfg_env = CfgEnv::new().with_chain_id(self.chain_spec().chain().id()).with_spec(spec);
let blob_excess_gas_and_price = spec
.into_eth_spec()
.is_enabled_in(SpecId::CANCUN)
.then_some(BlobExcessGasAndPrice { excess_blob_gas: 0, blob_gasprice: 1 });
let block_env = BlockEnv {
number: U256::from(block_number),
beneficiary: payload.payload.as_v1().fee_recipient,
timestamp: U256::from(timestamp),
difficulty: if spec.into_eth_spec() >= SpecId::MERGE {
U256::ZERO
} else {
payload.payload.as_v1().prev_randao.into()
},
prevrandao: (spec.into_eth_spec() >= SpecId::MERGE)
.then(|| payload.payload.as_v1().prev_randao),
gas_limit: payload.payload.as_v1().gas_limit,
basefee: payload.payload.as_v1().base_fee_per_gas.to(),
// EIP-4844 excess blob gas of this block, introduced in Cancun
blob_excess_gas_and_price,
};
EvmEnv { cfg_env, block_env }
}
fn context_for_payload<'a>(&self, payload: &'a OpExecutionData) -> ExecutionCtxFor<'a, Self> {
OpBlockExecutionCtx {
parent_hash: payload.parent_hash(),
parent_beacon_block_root: payload.sidecar.parent_beacon_block_root(),
extra_data: payload.payload.as_v1().extra_data.clone(),
}
}
fn tx_iterator_for_payload(
&self,
payload: &OpExecutionData,
) -> impl ExecutableTxIterator<Self> {
payload.payload.transactions().clone().into_iter().map(|encoded| {
let tx = TxTy::<Self::Primitives>::decode_2718_exact(encoded.as_ref())
.map_err(AnyError::new)?;
let signer = tx.try_recover().map_err(AnyError::new)?;
Ok::<_, AnyError>(WithEncoded::new(encoded, tx.with_signer(signer)))
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_consensus::{Header, Receipt};
use alloy_eips::eip7685::Requests;
use alloy_genesis::Genesis;
use alloy_primitives::{bytes, map::HashMap, Address, LogData, B256};
use op_revm::OpSpecId;
use reth_chainspec::ChainSpec;
use reth_evm::execute::ProviderError;
use reth_execution_types::{
AccountRevertInit, BundleStateInit, Chain, ExecutionOutcome, RevertsInit,
};
use reth_optimism_chainspec::{OpChainSpec, BASE_MAINNET};
use reth_optimism_primitives::{OpBlock, OpPrimitives, OpReceipt};
use reth_primitives_traits::{Account, RecoveredBlock};
use revm::{
database::{BundleState, CacheDB},
database_interface::EmptyDBTyped,
inspector::NoOpInspector,
primitives::Log,
state::AccountInfo,
};
use std::sync::Arc;
fn test_evm_config() -> OpEvmConfig {
OpEvmConfig::optimism(BASE_MAINNET.clone())
}
#[test]
fn test_fill_cfg_and_block_env() {
// Create a default header
let header = Header::default();
// Build the ChainSpec for Ethereum mainnet, activating London, Paris, and Shanghai
// hardforks
let chain_spec = ChainSpec::builder()
.chain(0.into())
.genesis(Genesis::default())
.london_activated()
.paris_activated()
.shanghai_activated()
.build();
        // Use the `OpEvmConfig` to create the `cfg_env` and `block_env` based on the
        // ChainSpec and Header
let EvmEnv { cfg_env, .. } =
OpEvmConfig::optimism(Arc::new(OpChainSpec { inner: chain_spec.clone() }))
.evm_env(&header);
// Assert that the chain ID in the `cfg_env` is correctly set to the chain ID of the
// ChainSpec
assert_eq!(cfg_env.chain_id, chain_spec.chain().id());
}
#[test]
fn test_evm_with_env_default_spec() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
let evm_env = EvmEnv::default();
let evm = evm_config.evm_with_env(db, evm_env.clone());
        // Check that the EVM environment is initialized with the given environment
assert_eq!(evm.cfg, evm_env.cfg_env);
}
#[test]
fn test_evm_with_env_custom_cfg() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
// Create a custom configuration environment with a chain ID of 111
let cfg = CfgEnv::new().with_chain_id(111).with_spec(OpSpecId::default());
let evm_env = EvmEnv { cfg_env: cfg.clone(), ..Default::default() };
let evm = evm_config.evm_with_env(db, evm_env);
// Check that the EVM environment is initialized with the custom environment
assert_eq!(evm.cfg, cfg);
}
#[test]
fn test_evm_with_env_custom_block_and_tx() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
        // Create custom block and tx env
let block = BlockEnv {
basefee: 1000,
gas_limit: 10_000_000,
number: U256::from(42),
..Default::default()
};
let evm_env = EvmEnv { block_env: block, ..Default::default() };
let evm = evm_config.evm_with_env(db, evm_env.clone());
// Verify that the block and transaction environments are set correctly
assert_eq!(evm.block, evm_env.block_env);
}
#[test]
fn test_evm_with_spec_id() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
let evm_env =
EvmEnv { cfg_env: CfgEnv::new().with_spec(OpSpecId::ECOTONE), ..Default::default() };
let evm = evm_config.evm_with_env(db, evm_env.clone());
assert_eq!(evm.cfg, evm_env.cfg_env);
}
#[test]
fn test_evm_with_env_and_default_inspector() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
let evm_env = EvmEnv { cfg_env: Default::default(), ..Default::default() };
let evm = evm_config.evm_with_env_and_inspector(db, evm_env.clone(), NoOpInspector {});
// Check that the EVM environment is set to default values
assert_eq!(evm.block, evm_env.block_env);
assert_eq!(evm.cfg, evm_env.cfg_env);
}
#[test]
fn test_evm_with_env_inspector_and_custom_cfg() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
let cfg = CfgEnv::new().with_chain_id(111).with_spec(OpSpecId::default());
let block = BlockEnv::default();
let evm_env = EvmEnv { block_env: block, cfg_env: cfg.clone() };
let evm = evm_config.evm_with_env_and_inspector(db, evm_env.clone(), NoOpInspector {});
// Check that the EVM environment is set with custom configuration
assert_eq!(evm.cfg, cfg);
assert_eq!(evm.block, evm_env.block_env);
}
#[test]
fn test_evm_with_env_inspector_and_custom_block_tx() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
// Create custom block and tx environment
let block = BlockEnv {
basefee: 1000,
gas_limit: 10_000_000,
number: U256::from(42),
..Default::default()
};
let evm_env = EvmEnv { block_env: block, ..Default::default() };
let evm = evm_config.evm_with_env_and_inspector(db, evm_env.clone(), NoOpInspector {});
// Verify that the block and transaction environments are set correctly
assert_eq!(evm.block, evm_env.block_env);
}
#[test]
fn test_evm_with_env_inspector_and_spec_id() {
let evm_config = test_evm_config();
let db = CacheDB::<EmptyDBTyped<ProviderError>>::default();
let evm_env =
EvmEnv { cfg_env: CfgEnv::new().with_spec(OpSpecId::ECOTONE), ..Default::default() };
let evm = evm_config.evm_with_env_and_inspector(db, evm_env.clone(), NoOpInspector {});
// Check that the spec ID is set properly
assert_eq!(evm.cfg, evm_env.cfg_env);
assert_eq!(evm.block, evm_env.block_env);
}
#[test]
fn receipts_by_block_hash() {
// Create a default recovered block
let block: RecoveredBlock<OpBlock> = Default::default();
// Define block hashes for block1 and block2
let block1_hash = B256::new([0x01; 32]);
let block2_hash = B256::new([0x02; 32]);
// Clone the default block into block1 and block2
let mut block1 = block.clone();
let mut block2 = block;
// Set the hashes of block1 and block2
block1.set_block_number(10);
block1.set_hash(block1_hash);
block2.set_block_number(11);
block2.set_hash(block2_hash);
// Create a random receipt object, receipt1
let receipt1 = OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![],
status: true.into(),
});
// Create another random receipt object, receipt2
let receipt2 = OpReceipt::Legacy(Receipt {
cumulative_gas_used: 1325345,
logs: vec![],
status: true.into(),
});
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![receipt1.clone()], vec![receipt2]];
// Create an ExecutionOutcome object with the created bundle, receipts, an empty requests
// vector, and first_block set to 10
let execution_outcome = ExecutionOutcome::<OpReceipt> {
bundle: Default::default(),
receipts,
requests: vec![],
first_block: 10,
};
// Create a Chain object with a BTreeMap of blocks mapped to their block numbers,
// including block1_hash and block2_hash, and the execution_outcome
let chain: Chain<OpPrimitives> =
Chain::new([block1, block2], execution_outcome.clone(), None);
// Assert that the proper receipt vector is returned for block1_hash
assert_eq!(chain.receipts_by_block_hash(block1_hash), Some(vec![&receipt1]));
// Create an ExecutionOutcome object with a single receipt vector containing receipt1
let execution_outcome1 = ExecutionOutcome {
bundle: Default::default(),
receipts: vec![vec![receipt1]],
requests: vec![],
first_block: 10,
};
// Assert that the execution outcome at the first block contains only the first receipt
assert_eq!(chain.execution_outcome_at_block(10), Some(execution_outcome1));
// Assert that the execution outcome at the tip block contains the whole execution outcome
assert_eq!(chain.execution_outcome_at_block(11), Some(execution_outcome));
}
#[test]
fn test_initialization() {
// Create a new BundleState object with initial data
let bundle = BundleState::new(
vec![(Address::new([2; 20]), None, Some(AccountInfo::default()), HashMap::default())],
vec![vec![(Address::new([2; 20]), None, vec![])]],
vec![],
);
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![Some(OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![],
status: true.into(),
}))]];
// Create a Requests object with a vector of requests
let requests = vec![Requests::new(vec![bytes!("dead"), bytes!("beef"), bytes!("beebee")])];
// Define the first block number
let first_block = 123;
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let exec_res = ExecutionOutcome {
bundle: bundle.clone(),
receipts: receipts.clone(),
requests: requests.clone(),
first_block,
};
// Assert that creating a new ExecutionOutcome using the constructor matches exec_res
assert_eq!(
ExecutionOutcome::new(bundle, receipts.clone(), first_block, requests.clone()),
exec_res
);
// Create a BundleStateInit object and insert initial data
let mut state_init: BundleStateInit = HashMap::default();
state_init
.insert(Address::new([2; 20]), (None, Some(Account::default()), HashMap::default()));
// Create a HashMap for account reverts and insert initial data
let mut revert_inner: HashMap<Address, AccountRevertInit> = HashMap::default();
revert_inner.insert(Address::new([2; 20]), (None, vec![]));
// Create a RevertsInit object and insert the revert_inner data
let mut revert_init: RevertsInit = HashMap::default();
revert_init.insert(123, revert_inner);
// Assert that creating a new ExecutionOutcome using the new_init method matches
// exec_res
assert_eq!(
ExecutionOutcome::new_init(
state_init,
revert_init,
vec![],
receipts,
first_block,
requests,
),
exec_res
);
}
#[test]
fn test_block_number_to_index() {
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![Some(OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![],
status: true.into(),
}))]];
// Define the first block number
let first_block = 123;
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let exec_res = ExecutionOutcome {
bundle: Default::default(),
receipts,
requests: vec![],
first_block,
};
// Test before the first block
assert_eq!(exec_res.block_number_to_index(12), None);
// Test after the first block but index larger than receipts length
assert_eq!(exec_res.block_number_to_index(133), None);
// Test after the first block
assert_eq!(exec_res.block_number_to_index(123), Some(0));
}
#[test]
fn test_get_logs() {
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![Log::<LogData>::default()],
status: true.into(),
})]];
// Define the first block number
let first_block = 123;
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let exec_res = ExecutionOutcome {
bundle: Default::default(),
receipts,
requests: vec![],
first_block,
};
// Get logs for block number 123
let logs: Vec<&Log> = exec_res.logs(123).unwrap().collect();
// Assert that the logs match the expected logs
assert_eq!(logs, vec![&Log::<LogData>::default()]);
}
#[test]
fn test_receipts_by_block() {
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![Some(OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![Log::<LogData>::default()],
status: true.into(),
}))]];
// Define the first block number
let first_block = 123;
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let exec_res = ExecutionOutcome {
bundle: Default::default(), // Default value for bundle
receipts, // Include the created receipts
requests: vec![], // Empty vector for requests
first_block, // Set the first block number
};
// Get receipts for block number 123 and convert the result into a vector
let receipts_by_block: Vec<_> = exec_res.receipts_by_block(123).iter().collect();
// Assert that the receipts for block number 123 match the expected receipts
assert_eq!(
receipts_by_block,
vec![&Some(OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![Log::<LogData>::default()],
status: true.into(),
}))]
);
}
#[test]
fn test_receipts_len() {
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![Some(OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![Log::<LogData>::default()],
status: true.into(),
}))]];
// Create an empty Receipts object
let receipts_empty = vec![];
// Define the first block number
let first_block = 123;
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let exec_res = ExecutionOutcome {
bundle: Default::default(), // Default value for bundle
receipts, // Include the created receipts
requests: vec![], // Empty vector for requests
first_block, // Set the first block number
};
// Assert that the length of receipts in exec_res is 1
assert_eq!(exec_res.len(), 1);
// Assert that exec_res is not empty
assert!(!exec_res.is_empty());
// Create a ExecutionOutcome object with an empty Receipts object
let exec_res_empty_receipts: ExecutionOutcome<OpReceipt> = ExecutionOutcome {
bundle: Default::default(), // Default value for bundle
receipts: receipts_empty, // Include the empty receipts
requests: vec![], // Empty vector for requests
first_block, // Set the first block number
};
// Assert that the length of receipts in exec_res_empty_receipts is 0
assert_eq!(exec_res_empty_receipts.len(), 0);
// Assert that exec_res_empty_receipts is empty
assert!(exec_res_empty_receipts.is_empty());
}
#[test]
fn test_revert_to() {
// Create a random receipt object
let receipt = OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![],
status: true.into(),
});
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![vec![Some(receipt.clone())], vec![Some(receipt.clone())]];
// Define the first block number
let first_block = 123;
// Create a request.
let request = bytes!("deadbeef");
// Create a vector of Requests containing the request.
let requests =
vec![Requests::new(vec![request.clone()]), Requests::new(vec![request.clone()])];
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let mut exec_res =
ExecutionOutcome { bundle: Default::default(), receipts, requests, first_block };
// Assert that the revert_to method returns true when reverting to the initial block number.
assert!(exec_res.revert_to(123));
// Assert that the receipts are properly cut after reverting to the initial block number.
assert_eq!(exec_res.receipts, vec![vec![Some(receipt)]]);
// Assert that the requests are properly cut after reverting to the initial block number.
assert_eq!(exec_res.requests, vec![Requests::new(vec![request])]);
// Assert that the revert_to method returns false when attempting to revert to a block
// number greater than the initial block number.
assert!(!exec_res.revert_to(133));
// Assert that the revert_to method returns false when attempting to revert to a block
// number less than the initial block number.
assert!(!exec_res.revert_to(10));
}
#[test]
fn test_extend_execution_outcome() {
// Create a Receipt object with specific attributes.
let receipt = OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![],
status: true.into(),
});
// Create a Receipts object containing the receipt.
let receipts = vec![vec![Some(receipt.clone())]];
// Create a request.
let request = bytes!("deadbeef");
// Create a vector of Requests containing the request.
let requests = vec![Requests::new(vec![request.clone()])];
// Define the initial block number.
let first_block = 123;
// Create an ExecutionOutcome object.
let mut exec_res =
ExecutionOutcome { bundle: Default::default(), receipts, requests, first_block };
// Extend the ExecutionOutcome object by itself.
exec_res.extend(exec_res.clone());
// Assert the extended ExecutionOutcome matches the expected outcome.
assert_eq!(
exec_res,
ExecutionOutcome {
bundle: Default::default(),
receipts: vec![vec![Some(receipt.clone())], vec![Some(receipt)]],
requests: vec![Requests::new(vec![request.clone()]), Requests::new(vec![request])],
first_block: 123,
}
);
}
#[test]
fn test_split_at_execution_outcome() {
// Create a random receipt object
let receipt = OpReceipt::Legacy(Receipt {
cumulative_gas_used: 46913,
logs: vec![],
status: true.into(),
});
// Create a Receipts object with a vector of receipt vectors
let receipts = vec![
vec![Some(receipt.clone())],
vec![Some(receipt.clone())],
vec![Some(receipt.clone())],
];
// Define the first block number
let first_block = 123;
// Create a request.
let request = bytes!("deadbeef");
// Create a vector of Requests containing the request.
let requests = vec![
Requests::new(vec![request.clone()]),
Requests::new(vec![request.clone()]),
Requests::new(vec![request.clone()]),
];
// Create a ExecutionOutcome object with the created bundle, receipts, requests, and
// first_block
let exec_res =
ExecutionOutcome { bundle: Default::default(), receipts, requests, first_block };
// Split the ExecutionOutcome at block number 124
let result = exec_res.clone().split_at(124);
// Define the expected lower ExecutionOutcome after splitting
let lower_execution_outcome = ExecutionOutcome {
bundle: Default::default(),
receipts: vec![vec![Some(receipt.clone())]],
requests: vec![Requests::new(vec![request.clone()])],
first_block,
};
// Define the expected higher ExecutionOutcome after splitting
let higher_execution_outcome = ExecutionOutcome {
bundle: Default::default(),
receipts: vec![vec![Some(receipt.clone())], vec![Some(receipt)]],
requests: vec![Requests::new(vec![request.clone()]), Requests::new(vec![request])],
first_block: 124,
};
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | true |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/optimism/evm/src/l1.rs | crates/optimism/evm/src/l1.rs | //! Optimism-specific implementation and utilities for the executor
use crate::{error::L1BlockInfoError, revm_spec_by_timestamp_after_bedrock, OpBlockExecutionError};
use alloy_consensus::Transaction;
use alloy_primitives::{hex, U256};
use op_revm::L1BlockInfo;
use reth_execution_errors::BlockExecutionError;
use reth_optimism_forks::OpHardforks;
use reth_primitives_traits::BlockBody;
/// The function selector of the "setL1BlockValuesEcotone" function in the `L1Block` contract.
const L1_BLOCK_ECOTONE_SELECTOR: [u8; 4] = hex!("440a5e20");
/// The function selector of the "setL1BlockValuesIsthmus" function in the `L1Block` contract.
const L1_BLOCK_ISTHMUS_SELECTOR: [u8; 4] = hex!("098999be");
/// Extracts the [`L1BlockInfo`] from the L2 block. The L1 info transaction is always the first
/// transaction in the L2 block.
///
/// Returns an error if the L1 info transaction is not found, i.e. if the block body is empty.
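///
/// # Example
///
/// A minimal usage sketch; the `block` value here is hypothetical (a decoded L2 block with the
/// L1 info transaction at index 0 -- see the tests below for a concrete setup):
///
/// ```rust,ignore
/// // The L1 info tx must be the first tx in the body.
/// let l1_info = extract_l1_info(&block.body)?;
/// println!("l1 base fee: {}", l1_info.l1_base_fee);
/// ```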
pub fn extract_l1_info<B: BlockBody>(body: &B) -> Result<L1BlockInfo, OpBlockExecutionError> {
let l1_info_tx = body
.transactions()
.first()
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::MissingTransaction))?;
extract_l1_info_from_tx(l1_info_tx)
}
/// Extracts the [`L1BlockInfo`] from the L1 info transaction (first transaction) in the L2
/// block.
///
/// Returns an error if the calldata is shorter than 4 bytes.
pub fn extract_l1_info_from_tx<T: Transaction>(
tx: &T,
) -> Result<L1BlockInfo, OpBlockExecutionError> {
let l1_info_tx_data = tx.input();
if l1_info_tx_data.len() < 4 {
return Err(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::InvalidCalldata));
}
parse_l1_info(l1_info_tx_data)
}
/// Parses the input of the first transaction in the L2 block, into [`L1BlockInfo`].
///
/// Returns an error if the data has an incorrect length.
///
/// Caution: this expects that the input is the calldata of the [`L1BlockInfo`] transaction (first
/// transaction) in the L2 block.
///
/// # Panics
/// If the input is shorter than 4 bytes.
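///
/// # Example
///
/// A sketch of the selector dispatch; the calldata below is hypothetical (the Ecotone selector
/// followed by 160 zero bytes of packed arguments) -- only the 4-byte prefix picks the variant:
///
/// ```rust,ignore
/// let mut calldata = hex!("440a5e20").to_vec(); // Ecotone selector
/// calldata.extend_from_slice(&[0u8; 160]);      // packed arguments (all zero here)
/// let info = parse_l1_info(&calldata)?;         // dispatches to parse_l1_info_tx_ecotone
/// ```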
pub fn parse_l1_info(input: &[u8]) -> Result<L1BlockInfo, OpBlockExecutionError> {
// Parse the L1 info transaction into an L1BlockInfo struct, depending on the function selector.
// There are currently 3 variants:
// - Isthmus
// - Ecotone
// - Bedrock
if input[0..4] == L1_BLOCK_ISTHMUS_SELECTOR {
parse_l1_info_tx_isthmus(input[4..].as_ref())
} else if input[0..4] == L1_BLOCK_ECOTONE_SELECTOR {
parse_l1_info_tx_ecotone(input[4..].as_ref())
} else {
parse_l1_info_tx_bedrock(input[4..].as_ref())
}
}
/// Parses the calldata of the [`L1BlockInfo`] transaction pre-Ecotone hardfork.
pub fn parse_l1_info_tx_bedrock(data: &[u8]) -> Result<L1BlockInfo, OpBlockExecutionError> {
    // The setL1BlockValues tx calldata must be exactly 260 bytes long (4-byte function selector
    // + 256 bytes of arguments). The selector has already been stripped, so `data` must be
    // exactly 256 bytes. Detailed breakdown of the arguments:
// 32 bytes for the block number
// + 32 bytes for the block timestamp
// + 32 bytes for the base fee
// + 32 bytes for the block hash
// + 32 bytes for the block sequence number
// + 32 bytes for the batcher hash
// + 32 bytes for the fee overhead
// + 32 bytes for the fee scalar
if data.len() != 256 {
return Err(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::UnexpectedCalldataLength));
}
let l1_base_fee = U256::try_from_be_slice(&data[64..96])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BaseFeeConversion))?;
let l1_fee_overhead = U256::try_from_be_slice(&data[192..224])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::FeeOverheadConversion))?;
let l1_fee_scalar = U256::try_from_be_slice(&data[224..256])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::FeeScalarConversion))?;
let mut l1block = L1BlockInfo::default();
l1block.l1_base_fee = l1_base_fee;
l1block.l1_fee_overhead = Some(l1_fee_overhead);
l1block.l1_base_fee_scalar = l1_fee_scalar;
Ok(l1block)
}
/// Parses the [`L1BlockInfo`] transaction calldata for an Ecotone upgraded chain.
/// Params are packed and passed in as raw msg.data instead of ABI to reduce calldata size.
/// Params are expected to be in the following order:
/// 1. _baseFeeScalar L1 base fee scalar
/// 2. _blobBaseFeeScalar L1 blob base fee scalar
/// 3. _sequenceNumber Number of L2 blocks since epoch start.
/// 4. _timestamp L1 timestamp.
/// 5. _number L1 blocknumber.
/// 6. _basefee L1 base fee.
/// 7. _blobBaseFee L1 blob base fee.
/// 8. _hash L1 blockhash.
/// 9. _batcherHash Versioned hash to authenticate batcher by.
///
/// <https://github.com/ethereum-optimism/optimism/blob/957e13dd504fb336a4be40fb5dd0d8ba0276be34/packages/contracts-bedrock/src/L2/L1Block.sol#L136>
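///
/// As a sketch, the packed fields parsed below map to these slices of `data` (the 4-byte
/// selector is already stripped, so offsets are 4 less than in the contract layout):
///
/// ```rust,ignore
/// let base_fee_scalar      = &data[0..4];   // uint32  _basefeeScalar
/// let blob_base_fee_scalar = &data[4..8];   // uint32  _blobBaseFeeScalar
/// let base_fee             = &data[32..64]; // uint256 _basefee
/// let blob_base_fee        = &data[64..96]; // uint256 _blobBaseFee
/// ```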
pub fn parse_l1_info_tx_ecotone(data: &[u8]) -> Result<L1BlockInfo, OpBlockExecutionError> {
if data.len() != 160 {
return Err(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::UnexpectedCalldataLength));
}
// https://github.com/ethereum-optimism/op-geth/blob/60038121c7571a59875ff9ed7679c48c9f73405d/core/types/rollup_cost.go#L317-L328
//
// data layout assumed for Ecotone:
// offset type varname
// 0 <selector>
// 4 uint32 _basefeeScalar (start offset in this scope)
// 8 uint32 _blobBaseFeeScalar
// 12 uint64 _sequenceNumber,
// 20 uint64 _timestamp,
// 28 uint64 _l1BlockNumber
// 36 uint256 _basefee,
// 68 uint256 _blobBaseFee,
// 100 bytes32 _hash,
// 132 bytes32 _batcherHash,
let l1_base_fee_scalar = U256::try_from_be_slice(&data[..4])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BaseFeeScalarConversion))?;
let l1_blob_base_fee_scalar = U256::try_from_be_slice(&data[4..8]).ok_or({
OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BlobBaseFeeScalarConversion)
})?;
let l1_base_fee = U256::try_from_be_slice(&data[32..64])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BaseFeeConversion))?;
let l1_blob_base_fee = U256::try_from_be_slice(&data[64..96])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BlobBaseFeeConversion))?;
let mut l1block = L1BlockInfo::default();
l1block.l1_base_fee = l1_base_fee;
l1block.l1_base_fee_scalar = l1_base_fee_scalar;
l1block.l1_blob_base_fee = Some(l1_blob_base_fee);
l1block.l1_blob_base_fee_scalar = Some(l1_blob_base_fee_scalar);
Ok(l1block)
}
/// Parses the [`L1BlockInfo`] transaction calldata for an Isthmus upgraded chain.
/// Params are packed and passed in as raw msg.data instead of ABI to reduce calldata size.
/// Params are expected to be in the following order:
/// 1. _baseFeeScalar L1 base fee scalar
/// 2. _blobBaseFeeScalar L1 blob base fee scalar
/// 3. _sequenceNumber Number of L2 blocks since epoch start.
/// 4. _timestamp L1 timestamp.
/// 5. _number L1 blocknumber.
/// 6. _basefee L1 base fee.
/// 7. _blobBaseFee L1 blob base fee.
/// 8. _hash L1 blockhash.
/// 9. _batcherHash Versioned hash to authenticate batcher by.
/// 10. _operatorFeeScalar Operator fee scalar
/// 11. _operatorFeeConstant Operator fee constant
pub fn parse_l1_info_tx_isthmus(data: &[u8]) -> Result<L1BlockInfo, OpBlockExecutionError> {
if data.len() != 172 {
return Err(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::UnexpectedCalldataLength));
}
// https://github.com/ethereum-optimism/op-geth/blob/60038121c7571a59875ff9ed7679c48c9f73405d/core/types/rollup_cost.go#L317-L328
//
    // data layout assumed for Isthmus:
// offset type varname
// 0 <selector>
// 4 uint32 _basefeeScalar (start offset in this scope)
// 8 uint32 _blobBaseFeeScalar
// 12 uint64 _sequenceNumber,
// 20 uint64 _timestamp,
// 28 uint64 _l1BlockNumber
// 36 uint256 _basefee,
// 68 uint256 _blobBaseFee,
// 100 bytes32 _hash,
// 132 bytes32 _batcherHash,
// 164 uint32 _operatorFeeScalar
// 168 uint64 _operatorFeeConstant
let l1_base_fee_scalar = U256::try_from_be_slice(&data[..4])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BaseFeeScalarConversion))?;
let l1_blob_base_fee_scalar = U256::try_from_be_slice(&data[4..8]).ok_or({
OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BlobBaseFeeScalarConversion)
})?;
let l1_base_fee = U256::try_from_be_slice(&data[32..64])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BaseFeeConversion))?;
let l1_blob_base_fee = U256::try_from_be_slice(&data[64..96])
.ok_or(OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::BlobBaseFeeConversion))?;
let operator_fee_scalar = U256::try_from_be_slice(&data[160..164]).ok_or({
OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::OperatorFeeScalarConversion)
})?;
let operator_fee_constant = U256::try_from_be_slice(&data[164..172]).ok_or({
OpBlockExecutionError::L1BlockInfo(L1BlockInfoError::OperatorFeeConstantConversion)
})?;
let mut l1block = L1BlockInfo::default();
l1block.l1_base_fee = l1_base_fee;
l1block.l1_base_fee_scalar = l1_base_fee_scalar;
l1block.l1_blob_base_fee = Some(l1_blob_base_fee);
l1block.l1_blob_base_fee_scalar = Some(l1_blob_base_fee_scalar);
l1block.operator_fee_scalar = Some(operator_fee_scalar);
l1block.operator_fee_constant = Some(operator_fee_constant);
Ok(l1block)
}
/// An extension trait for [`L1BlockInfo`] that allows us to calculate the L1 cost of a transaction
/// based off of the chain spec's activated hardfork.
pub trait RethL1BlockInfo {
    /// Forwards an L1 transaction cost calculation to revm and returns the L1 data fee.
///
/// ### Takes
/// - `chain_spec`: The chain spec for the node.
/// - `timestamp`: The timestamp of the current block.
/// - `input`: The calldata of the transaction.
/// - `is_deposit`: Whether or not the transaction is a deposit.
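    ///
    /// A usage sketch; the `l1_info`, `chain_spec`, `header`, and `tx` values here are assumed
    /// to be in scope (e.g. an `L1BlockInfo` obtained via `extract_l1_info`):
    ///
    /// ```rust,ignore
    /// let fee = l1_info.l1_tx_data_fee(chain_spec, header.timestamp, tx.input(), false)?;
    /// ```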
fn l1_tx_data_fee(
&mut self,
chain_spec: impl OpHardforks,
timestamp: u64,
input: &[u8],
is_deposit: bool,
) -> Result<U256, BlockExecutionError>;
/// Computes the data gas cost for an L2 transaction.
///
/// ### Takes
/// - `chain_spec`: The chain spec for the node.
/// - `timestamp`: The timestamp of the current block.
/// - `input`: The calldata of the transaction.
fn l1_data_gas(
&self,
chain_spec: impl OpHardforks,
timestamp: u64,
input: &[u8],
) -> Result<U256, BlockExecutionError>;
}
impl RethL1BlockInfo for L1BlockInfo {
fn l1_tx_data_fee(
&mut self,
chain_spec: impl OpHardforks,
timestamp: u64,
input: &[u8],
is_deposit: bool,
) -> Result<U256, BlockExecutionError> {
if is_deposit {
return Ok(U256::ZERO);
}
let spec_id = revm_spec_by_timestamp_after_bedrock(&chain_spec, timestamp);
Ok(self.calculate_tx_l1_cost(input, spec_id))
}
fn l1_data_gas(
&self,
chain_spec: impl OpHardforks,
timestamp: u64,
input: &[u8],
) -> Result<U256, BlockExecutionError> {
let spec_id = revm_spec_by_timestamp_after_bedrock(&chain_spec, timestamp);
Ok(self.data_gas(input, spec_id))
}
}
#[cfg(test)]
mod tests {
use super::*;
use alloy_consensus::{Block, BlockBody};
use alloy_eips::eip2718::Decodable2718;
use reth_optimism_chainspec::OP_MAINNET;
use reth_optimism_forks::OpHardforks;
use reth_optimism_primitives::OpTransactionSigned;
#[test]
fn sanity_l1_block() {
use alloy_consensus::Header;
use alloy_primitives::{hex_literal::hex, Bytes};
let bytes = Bytes::from_static(&hex!(
"7ef9015aa044bae9d41b8380d781187b426c6fe43df5fb2fb57bd4466ef6a701e1f01e015694deaddeaddeaddeaddeaddeaddeaddeaddead000194420000000000000000000000000000000000001580808408f0d18001b90104015d8eb900000000000000000000000000000000000000000000000000000000008057650000000000000000000000000000000000000000000000000000000063d96d10000000000000000000000000000000000000000000000000000000000009f35273d89754a1e0387b89520d989d3be9c37c1f32495a88faf1ea05c61121ab0d1900000000000000000000000000000000000000000000000000000000000000010000000000000000000000002d679b567db6187c0c8323fa982cfb88b74dbcc7000000000000000000000000000000000000000000000000000000000000083400000000000000000000000000000000000000000000000000000000000f4240"
));
let l1_info_tx = OpTransactionSigned::decode_2718(&mut bytes.as_ref()).unwrap();
let mock_block = Block {
header: Header::default(),
body: BlockBody { transactions: vec![l1_info_tx], ..Default::default() },
};
let l1_info: L1BlockInfo = extract_l1_info(&mock_block.body).unwrap();
assert_eq!(l1_info.l1_base_fee, U256::from(652_114));
assert_eq!(l1_info.l1_fee_overhead, Some(U256::from(2100)));
assert_eq!(l1_info.l1_base_fee_scalar, U256::from(1_000_000));
assert_eq!(l1_info.l1_blob_base_fee, None);
assert_eq!(l1_info.l1_blob_base_fee_scalar, None);
}
#[test]
fn sanity_l1_block_ecotone() {
// rig
// OP mainnet ecotone block 118024092
// <https://optimistic.etherscan.io/block/118024092>
const TIMESTAMP: u64 = 1711603765;
assert!(OP_MAINNET.is_ecotone_active_at_timestamp(TIMESTAMP));
// First transaction in OP mainnet block 118024092
//
// https://optimistic.etherscan.io/getRawTx?tx=0x88501da5d5ca990347c2193be90a07037af1e3820bb40774c8154871c7669150
const TX: [u8; 251] = hex!(
"7ef8f8a0a539eb753df3b13b7e386e147d45822b67cb908c9ddc5618e3dbaa22ed00850b94deaddeaddeaddeaddeaddeaddeaddeaddead00019442000000000000000000000000000000000000158080830f424080b8a4440a5e2000000558000c5fc50000000000000000000000006605a89f00000000012a10d90000000000000000000000000000000000000000000000000000000af39ac3270000000000000000000000000000000000000000000000000000000d5ea528d24e582fa68786f080069bdbfe06a43f8e67bfd31b8e4d8a8837ba41da9a82a54a0000000000000000000000006887246668a3b87f54deb3b94ba47a6f63f32985"
);
let tx = OpTransactionSigned::decode_2718(&mut TX.as_slice()).unwrap();
let block: Block<OpTransactionSigned> = Block {
body: BlockBody { transactions: vec![tx], ..Default::default() },
..Default::default()
};
// expected l1 block info
let expected_l1_base_fee = U256::from_be_bytes(hex!(
"0000000000000000000000000000000000000000000000000000000af39ac327" // 47036678951
));
let expected_l1_base_fee_scalar = U256::from(1368);
let expected_l1_blob_base_fee = U256::from_be_bytes(hex!(
"0000000000000000000000000000000000000000000000000000000d5ea528d2" // 57422457042
));
let expected_l1_blob_base_fee_scalar = U256::from(810949);
// test
let l1_block_info: L1BlockInfo = extract_l1_info(&block.body).unwrap();
assert_eq!(l1_block_info.l1_base_fee, expected_l1_base_fee);
assert_eq!(l1_block_info.l1_base_fee_scalar, expected_l1_base_fee_scalar);
assert_eq!(l1_block_info.l1_blob_base_fee, Some(expected_l1_blob_base_fee));
assert_eq!(l1_block_info.l1_blob_base_fee_scalar, Some(expected_l1_blob_base_fee_scalar));
}
#[test]
fn parse_l1_info_fjord() {
// rig
// L1 block info for OP mainnet block 124665056 (stored in input of tx at index 0)
//
// https://optimistic.etherscan.io/tx/0x312e290cf36df704a2217b015d6455396830b0ce678b860ebfcc30f41403d7b1
const DATA: &[u8] = &hex!(
"440a5e200000146b000f79c500000000000000040000000066d052e700000000013ad8a3000000000000000000000000000000000000000000000000000000003ef1278700000000000000000000000000000000000000000000000000000000000000012fdf87b89884a61e74b322bbcf60386f543bfae7827725efaaf0ab1de2294a590000000000000000000000006887246668a3b87f54deb3b94ba47a6f63f32985"
);
        // expected l1 block info, verified against the expected l1 fee for the tx (the l1 tx fee
        // is listed on the OP mainnet block scanner)
//
// https://github.com/bluealloy/revm/blob/fa5650ee8a4d802f4f3557014dd157adfb074460/crates/revm/src/optimism/l1block.rs#L414-L443
let l1_base_fee = U256::from(1055991687);
let l1_base_fee_scalar = U256::from(5227);
let l1_blob_base_fee = Some(U256::from(1));
let l1_blob_base_fee_scalar = Some(U256::from(1014213));
// test
let l1_block_info = parse_l1_info(DATA).unwrap();
assert_eq!(l1_block_info.l1_base_fee, l1_base_fee);
assert_eq!(l1_block_info.l1_base_fee_scalar, l1_base_fee_scalar);
assert_eq!(l1_block_info.l1_blob_base_fee, l1_blob_base_fee);
assert_eq!(l1_block_info.l1_blob_base_fee_scalar, l1_blob_base_fee_scalar);
}
#[test]
fn parse_l1_info_isthmus() {
// rig
// L1 block info from a devnet with Isthmus activated
const DATA: &[u8] = &hex!(
"098999be00000558000c5fc500000000000000030000000067a9f765000000000000002900000000000000000000000000000000000000000000000000000000006a6d09000000000000000000000000000000000000000000000000000000000000000172fcc8e8886636bdbe96ba0e4baab67ea7e7811633f52b52e8cf7a5123213b6f000000000000000000000000d3f2c5afb2d76f5579f326b0cd7da5f5a4126c3500004e2000000000000001f4"
);
// expected l1 block info verified against expected l1 fee and operator fee for tx.
let l1_base_fee = U256::from(6974729);
let l1_base_fee_scalar = U256::from(1368);
let l1_blob_base_fee = Some(U256::from(1));
let l1_blob_base_fee_scalar = Some(U256::from(810949));
let operator_fee_scalar = Some(U256::from(20000));
let operator_fee_constant = Some(U256::from(500));
// test
let l1_block_info = parse_l1_info(DATA).unwrap();
assert_eq!(l1_block_info.l1_base_fee, l1_base_fee);
assert_eq!(l1_block_info.l1_base_fee_scalar, l1_base_fee_scalar);
assert_eq!(l1_block_info.l1_blob_base_fee, l1_blob_base_fee);
assert_eq!(l1_block_info.l1_blob_base_fee_scalar, l1_blob_base_fee_scalar);
assert_eq!(l1_block_info.operator_fee_scalar, operator_fee_scalar);
assert_eq!(l1_block_info.operator_fee_constant, operator_fee_constant);
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/optimism/evm/src/execute.rs | crates/optimism/evm/src/execute.rs | //! Optimism block execution strategy.
/// Helper type with backwards compatible methods to obtain executor providers.
pub type OpExecutorProvider = crate::OpEvmConfig;
#[cfg(test)]
mod tests {
use crate::{OpEvmConfig, OpRethReceiptBuilder};
use alloc::sync::Arc;
use alloy_consensus::{Block, BlockBody, Header, SignableTransaction, TxEip1559};
use alloy_primitives::{b256, Address, Signature, StorageKey, StorageValue, U256};
use op_alloy_consensus::TxDeposit;
use op_revm::constants::L1_BLOCK_CONTRACT;
use reth_chainspec::MIN_TRANSACTION_GAS;
use reth_evm::execute::{BasicBlockExecutor, Executor};
use reth_optimism_chainspec::{OpChainSpec, OpChainSpecBuilder};
use reth_optimism_primitives::{OpReceipt, OpTransactionSigned};
use reth_primitives_traits::{Account, RecoveredBlock};
use reth_revm::{database::StateProviderDatabase, test_utils::StateProviderTest};
use revm::state::FlaggedStorage;
use std::{collections::HashMap, str::FromStr};
fn create_op_state_provider() -> StateProviderTest {
let mut db = StateProviderTest::default();
let l1_block_contract_account =
Account { balance: U256::ZERO, bytecode_hash: None, nonce: 1 };
let mut l1_block_storage = HashMap::default();
// base fee
l1_block_storage
.insert(StorageKey::with_last_byte(1), FlaggedStorage::new_from_value(1000000000));
// l1 fee overhead
l1_block_storage.insert(StorageKey::with_last_byte(5), FlaggedStorage::new_from_value(188));
// l1 fee scalar
l1_block_storage
.insert(StorageKey::with_last_byte(6), FlaggedStorage::new_from_value(684000));
        // l1 fee scalars post ecotone
l1_block_storage.insert(
StorageKey::with_last_byte(3),
FlaggedStorage::new_from_value(
StorageValue::from_str(
"0x0000000000000000000000000000000000001db0000d27300000000000000005",
)
.unwrap(),
),
);
db.insert_account(L1_BLOCK_CONTRACT, l1_block_contract_account, None, l1_block_storage);
db
}
fn evm_config(chain_spec: Arc<OpChainSpec>) -> OpEvmConfig {
OpEvmConfig::new(chain_spec, OpRethReceiptBuilder::default())
}
#[test]
fn op_deposit_fields_pre_canyon() {
let header = Header {
timestamp: 1,
number: 1,
gas_limit: 1_000_000,
gas_used: 42_000,
receipts_root: b256!(
"0x83465d1e7d01578c0d609be33570f91242f013e9e295b0879905346abbd63731"
),
..Default::default()
};
let mut db = create_op_state_provider();
let addr = Address::ZERO;
let account = Account { balance: U256::MAX, ..Account::default() };
db.insert_account(addr, account, None, HashMap::default());
let chain_spec = Arc::new(OpChainSpecBuilder::base_mainnet().regolith_activated().build());
let tx: OpTransactionSigned = TxEip1559 {
chain_id: chain_spec.chain.id(),
nonce: 0,
gas_limit: MIN_TRANSACTION_GAS,
to: addr.into(),
..Default::default()
}
.into_signed(Signature::test_signature())
.into();
let tx_deposit: OpTransactionSigned = TxDeposit {
from: addr,
to: addr.into(),
gas_limit: MIN_TRANSACTION_GAS,
..Default::default()
}
.into();
let provider = evm_config(chain_spec);
let mut executor = BasicBlockExecutor::new(provider, StateProviderDatabase::new(&db));
// make sure the L1 block contract state is preloaded.
executor.with_state_mut(|state| {
state.load_cache_account(L1_BLOCK_CONTRACT).unwrap();
});
// Attempt to execute a block with one deposit and one non-deposit transaction
let output = executor
.execute(&RecoveredBlock::new_unhashed(
Block {
header,
body: BlockBody { transactions: vec![tx, tx_deposit], ..Default::default() },
},
vec![addr, addr],
))
.unwrap();
let receipts = &output.receipts;
let tx_receipt = &receipts[0];
let deposit_receipt = &receipts[1];
assert!(!matches!(tx_receipt, OpReceipt::Deposit(_)));
// deposit_nonce is present only in deposit transactions
let OpReceipt::Deposit(deposit_receipt) = deposit_receipt else {
panic!("expected deposit")
};
assert!(deposit_receipt.deposit_nonce.is_some());
// deposit_receipt_version is not present in pre canyon transactions
assert!(deposit_receipt.deposit_receipt_version.is_none());
}
#[test]
fn op_deposit_fields_post_canyon() {
// ensure_create2_deployer will fail if timestamp is set to less than 2
let header = Header {
timestamp: 2,
number: 1,
gas_limit: 1_000_000,
gas_used: 42_000,
receipts_root: b256!(
"0xfffc85c4004fd03c7bfbe5491fae98a7473126c099ac11e8286fd0013f15f908"
),
..Default::default()
};
let mut db = create_op_state_provider();
let addr = Address::ZERO;
let account = Account { balance: U256::MAX, ..Account::default() };
db.insert_account(addr, account, None, HashMap::default());
let chain_spec = Arc::new(OpChainSpecBuilder::base_mainnet().canyon_activated().build());
let tx: OpTransactionSigned = TxEip1559 {
chain_id: chain_spec.chain.id(),
nonce: 0,
gas_limit: MIN_TRANSACTION_GAS,
to: addr.into(),
..Default::default()
}
.into_signed(Signature::test_signature())
.into();
let tx_deposit: OpTransactionSigned = TxDeposit {
from: addr,
to: addr.into(),
gas_limit: MIN_TRANSACTION_GAS,
..Default::default()
}
.into();
let provider = evm_config(chain_spec);
let mut executor = BasicBlockExecutor::new(provider, StateProviderDatabase::new(&db));
// make sure the L1 block contract state is preloaded.
executor.with_state_mut(|state| {
state.load_cache_account(L1_BLOCK_CONTRACT).unwrap();
});
        // Attempt to execute a block with one deposit and one non-deposit transaction; this
        // should not fail while canyon is active
let output = executor
.execute(&RecoveredBlock::new_unhashed(
Block {
header,
body: BlockBody { transactions: vec![tx, tx_deposit], ..Default::default() },
},
vec![addr, addr],
))
.expect("Executing a block while canyon is active should not fail");
let receipts = &output.receipts;
let tx_receipt = &receipts[0];
let deposit_receipt = &receipts[1];
// deposit_receipt_version is set to 1 for post canyon deposit transactions
assert!(!matches!(tx_receipt, OpReceipt::Deposit(_)));
let OpReceipt::Deposit(deposit_receipt) = deposit_receipt else {
panic!("expected deposit")
};
assert_eq!(deposit_receipt.deposit_receipt_version, Some(1));
// deposit_nonce is present only in deposit transactions
assert!(deposit_receipt.deposit_nonce.is_some());
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |
SeismicSystems/seismic-reth | https://github.com/SeismicSystems/seismic-reth/blob/62834bd8deb86513778624a3ba33f55f4d6a1471/crates/optimism/evm/src/error.rs | crates/optimism/evm/src/error.rs | //! Error types for the Optimism EVM module.
use reth_evm::execute::BlockExecutionError;
/// L1 Block Info specific errors
#[derive(Debug, Clone, PartialEq, Eq, thiserror::Error)]
pub enum L1BlockInfoError {
/// Could not find L1 block info transaction in the L2 block
#[error("could not find l1 block info tx in the L2 block")]
MissingTransaction,
/// Invalid L1 block info transaction calldata
#[error("invalid l1 block info transaction calldata in the L2 block")]
InvalidCalldata,
/// Unexpected L1 block info transaction calldata length
#[error("unexpected l1 block info tx calldata length found")]
UnexpectedCalldataLength,
/// Base fee conversion error
#[error("could not convert l1 base fee")]
BaseFeeConversion,
/// Fee overhead conversion error
#[error("could not convert l1 fee overhead")]
FeeOverheadConversion,
/// Fee scalar conversion error
#[error("could not convert l1 fee scalar")]
FeeScalarConversion,
/// Base Fee Scalar conversion error
#[error("could not convert base fee scalar")]
BaseFeeScalarConversion,
/// Blob base fee conversion error
#[error("could not convert l1 blob base fee")]
BlobBaseFeeConversion,
/// Blob base fee scalar conversion error
#[error("could not convert l1 blob base fee scalar")]
BlobBaseFeeScalarConversion,
/// Operator fee scalar conversion error
#[error("could not convert operator fee scalar")]
OperatorFeeScalarConversion,
/// Operator fee constant conversion error
#[error("could not convert operator fee constant")]
OperatorFeeConstantConversion,
/// Optimism hardforks not active
#[error("Optimism hardforks are not active")]
HardforksNotActive,
}
/// Optimism Block Executor Errors
#[derive(Debug, Clone, PartialEq, Eq, thiserror::Error)]
pub enum OpBlockExecutionError {
/// Error when trying to parse L1 block info
#[error(transparent)]
L1BlockInfo(#[from] L1BlockInfoError),
/// Thrown when force deploy of create2deployer code fails.
#[error("failed to force create2deployer account code")]
ForceCreate2DeployerFail,
/// Thrown when a database account could not be loaded.
#[error("failed to load account {_0}")]
AccountLoadFailed(alloy_primitives::Address),
}
impl From<OpBlockExecutionError> for BlockExecutionError {
fn from(err: OpBlockExecutionError) -> Self {
Self::other(err)
}
}
| rust | Apache-2.0 | 62834bd8deb86513778624a3ba33f55f4d6a1471 | 2026-01-04T20:20:17.218210Z | false |