Mirror of https://github.com/sigp/lighthouse.git, synced 2026-03-10 04:01:51 +00:00
* Attestation superstruct changes for EIP 7549 (#5644)
* update
* experiment
* superstruct changes
* revert
* superstruct changes
* fix tests
* indexed attestation
* indexed attestation superstruct
* updated TODOs
* `superstruct` the `AttesterSlashing` (#5636)
* `superstruct` Attester Fork Variants
* Push a little further
* Deal with Encode / Decode of AttesterSlashing
* not so sure about this..
* Stop Encode/Decode Bounds from Propagating Out
* Tons of Changes..
* More Conversions to AttestationRef
* Add AsReference trait (#15)
* Add AsReference trait
* Fix some snafus
* Got it Compiling! :D
* Got Tests Building
* Get beacon chain tests compiling
---------
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Merge remote-tracking branch 'upstream/unstable' into electra_attestation_changes
* Make EF Tests Fork-Agnostic (#5713)
* Finish EF Test Fork Agnostic (#5714)
* Superstruct `AggregateAndProof` (#5715)
* Upgrade `superstruct` to `0.8.0`
* superstruct `AggregateAndProof`
* Merge remote-tracking branch 'sigp/unstable' into electra_attestation_changes
* cargo fmt
* Merge pull request #5726 from realbigsean/electra_attestation_changes
Merge unstable into Electra attestation changes
* EIP7549 `get_attestation_indices` (#5657)
* get attesting indices electra impl
* fmt
* get tests to pass
* fmt
* fix some beacon chain tests
* fmt
* fix slasher test
* fmt got me again
* fix more tests
* fix tests
* Some small changes (#5739)
* cargo fmt (#5740)
* Sketch op pool changes
* fix get attesting indices (#5742)
* fix get attesting indices
* better errors
* fix compile
* only get committee index once
* Ef test fixes (#5753)
* attestation related ef test fixes
* delete commented out stuff
* Fix Aggregation Pool for Electra (#5754)
* Fix Aggregation Pool for Electra
* Remove Outdated Interface
* fix ssz (#5755)
* Get `electra_op_pool` up to date (#5756)
* fix get attesting indices (#5742)
* fix get attesting indices
* better errors
* fix compile
* only get committee index once
* Ef test fixes (#5753)
* attestation related ef test fixes
* delete commented out stuff
* Fix Aggregation Pool for Electra (#5754)
* Fix Aggregation Pool for Electra
* Remove Outdated Interface
* fix ssz (#5755)
---------
Co-authored-by: realbigsean <sean@sigmaprime.io>
* Revert "Get `electra_op_pool` up to date (#5756)" (#5757)
This reverts commit ab9e58aa3d.
* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into electra_op_pool
* Compute on chain aggregate impl (#5752)
* add compute_on_chain_agg impl to op pool changes
* fmt
* get op pool tests to pass
* update the naive agg pool interface (#5760)
* Fix bugs in cross-committee aggregation
* Add comment to max cover optimisation
* Fix assert
* Merge pull request #5749 from sigp/electra_op_pool
Optimise Electra op pool aggregation
* update committee offset
* Fix Electra Fork Choice Tests (#5764)
* Subscribe to the correct subnets for electra attestations (#5782)
* subscribe to the correct att subnets for electra
* subscribe to the correct att subnets for electra
* cargo fmt
* fix slashing handling
* Merge remote-tracking branch 'upstream/unstable'
* Send unagg attestation based on fork
* Publish all aggregates
* just one more check bro plz..
* Merge pull request #5832 from ethDreamer/electra_attestation_changes_merge_unstable
Merge `unstable` into `electra_attestation_changes`
* Merge pull request #5835 from realbigsean/fix-validator-logic
Fix validator logic
* Merge pull request #5816 from realbigsean/electra-attestation-slashing-handling
Electra slashing handling
* Electra attestation changes rm decode impl (#5856)
* Remove Crappy Decode impl for Attestation
* Remove Inefficient Attestation Decode impl
* Implement Schema Upgrade / Downgrade
* Update beacon_node/beacon_chain/src/schema_change/migration_schema_v20.rs
Co-authored-by: Michael Sproul <micsproul@gmail.com>
---------
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Fix failing attestation tests and misc electra attestation cleanup (#5810)
* - get attestation related beacon chain tests to pass
- observed attestations are now keyed off of data + committee index
- rename op pool attestationref to compactattestationref
- remove unwraps in agg pool and use options instead
- cherry pick some changes from ef-tests-electra
* cargo fmt
* fix failing test
* Revert dockerfile changes
* make committee_index return option
* function args shouldnt be a ref to attestation ref
* fmt
* fix dup imports
---------
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
* fix some todos (#5817)
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes
* add consolidations to merkle calc for inclusion proof
* Remove Duplicate KZG Commitment Merkle Proof Code (#5874)
* Remove Duplicate KZG Commitment Merkle Proof Code
* s/tree_lists/fields/
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes
* fix compile
* Fix slasher tests (#5906)
* Fix electra tests
* Add electra attestations to double vote tests
* Update superstruct to 0.8
* Merge remote-tracking branch 'origin/unstable' into electra_attestation_changes
* Small cleanup in slasher tests
* Clean up Electra observed aggregates (#5929)
* Use consistent key in observed_attestations
* Remove unwraps from observed aggregates
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes
* De-dup attestation constructor logic
* Remove unwraps in Attestation construction
* Dedup match_attestation_data
* Remove outdated TODO
* Use ForkName Ord in fork-choice tests
* Use ForkName Ord in BeaconBlockBody
* Make to_electra not fallible
* Remove TestRandom impl for IndexedAttestation
* Remove IndexedAttestation faulty Decode impl
* Drop TestRandom impl
* Add PendingAttestationInElectra
* Indexed att on disk (#35)
* indexed att on disk
* fix lints
* Update slasher/src/migrate.rs
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
---------
Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
* add electra fork enabled fn to ForkName impl (#36)
* add electra fork enabled fn to ForkName impl
* remove inadvertent file
* Update common/eth2/src/types.rs
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
* Dedup attestation constructor logic in attester cache
* Use if let Ok for committee_bits
* Dedup Attestation constructor code
* Diff reduction in tests
* Fix beacon_chain tests
* Diff reduction
* Use Ord for ForkName in pubsub
* Resolve into_attestation_and_indices todo
* Remove stale TODO
* Fix beacon_chain tests
* Test spec invariant
* Use electra_enabled in pubsub
* Remove get_indexed_attestation_from_signed_aggregate
* Use ok_or instead of if let else
* committees are sorted
* remove dup method `get_indexed_attestation_from_committees`
* Merge pull request #5940 from dapplion/electra_attestation_changes_lionreview
Electra attestations #5712 review
* update default persisted op pool deserialization
* ensure aggregate and proof uses serde untagged on ref
* Fork aware ssz static attestation tests
* Electra attestation changes from Lions review (#5971)
* dedup/cleanup and remove unneeded hashset use
* remove irrelevant TODOs
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes
* Electra attestation changes sean review (#5972)
* instantiate empty bitlist in unreachable code
* clean up error conversion
* fork enabled bool cleanup
* remove a couple todos
* return bools instead of options in `aggregate` and use the result
* delete commented out code
* use map macros in simple transformations
* remove signers_disjoint_from
* get ef tests compiling
* get ef tests compiling
* update intentionally excluded files
* Avoid changing slasher schema for Electra
* Delete slasher schema v4
* Fix clippy
* Fix compilation of beacon_chain tests
* Update database.rs
* Add electra lightclient types
* Update slasher/src/database.rs
* fix imports
* Merge pull request #5980 from dapplion/electra-lightclient
Add electra lightclient types
* Merge pull request #5975 from michaelsproul/electra-slasher-no-migration
Avoid changing slasher schema for Electra
* Update beacon_node/beacon_chain/src/attestation_verification.rs
* Update beacon_node/beacon_chain/src/attestation_verification.rs
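The changelog above repeatedly refers to "superstruct"-ing types such as `Attestation`, `AttesterSlashing`, and `AggregateAndProof`: the `superstruct` macro generates one struct per fork plus a top-level enum (and `Ref` types) with accessors for fields common to all variants. As a rough illustration only, a hand-written sketch of the shape it produces might look like this; the field set and names here are simplified assumptions, not the real Lighthouse types.

```rust
// Hand-rolled sketch of what `superstruct` generates for a fork-variant
// container. Real Lighthouse attestations carry many more fields; the
// structs below are illustrative only.

#[derive(Debug, Clone, PartialEq)]
pub struct AttestationBase {
    pub slot: u64,
    // Pre-Electra, the committee index lives in the attestation data.
    pub committee_index: u64,
}

#[derive(Debug, Clone, PartialEq)]
pub struct AttestationElectra {
    pub slot: u64,
    // Post-EIP-7549, committee membership moves into a separate bitfield.
    pub committee_bits: Vec<bool>,
}

/// Top-level enum with one variant per fork, as `superstruct` would emit.
#[derive(Debug, Clone, PartialEq)]
pub enum Attestation {
    Base(AttestationBase),
    Electra(AttestationElectra),
}

impl Attestation {
    /// Accessor for a field shared by every variant; `superstruct`
    /// generates these automatically for common fields.
    pub fn slot(&self) -> u64 {
        match self {
            Attestation::Base(a) => a.slot,
            Attestation::Electra(a) => a.slot,
        }
    }
}

fn main() {
    let att = Attestation::Electra(AttestationElectra {
        slot: 42,
        committee_bits: vec![true, false],
    });
    // Callers can read common fields without matching on the fork.
    println!("slot = {}", att.slot());
}
```

The point of the pattern, as the commits above apply it, is that fork-agnostic code can use the shared accessors while fork-specific code matches on the variant.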
408 lines
14 KiB
Rust
use beacon_chain::{BeaconChain, BeaconChainError, BeaconChainTypes};
use eth2::lighthouse::{
    BlockPackingEfficiency, BlockPackingEfficiencyQuery, ProposerInfo, UniqueAttestation,
};
use parking_lot::Mutex;
use state_processing::{
    per_epoch_processing::EpochProcessingSummary, BlockReplayError, BlockReplayer,
};
use std::collections::{HashMap, HashSet};
use std::marker::PhantomData;
use std::sync::Arc;
use types::{
    AttestationRef, BeaconCommittee, BeaconState, BeaconStateError, BlindedPayload, ChainSpec,
    Epoch, EthSpec, Hash256, OwnedBeaconCommittee, RelativeEpoch, SignedBeaconBlock, Slot,
};
use warp_utils::reject::{beacon_chain_error, custom_bad_request, custom_server_error};

/// Load blocks from block roots in chunks to reduce load on memory.
const BLOCK_ROOT_CHUNK_SIZE: usize = 100;

#[derive(Debug)]
// We don't use the inner values directly, but they're used in the Debug impl.
enum PackingEfficiencyError {
    BlockReplay(#[allow(dead_code)] BlockReplayError),
    BeaconState(#[allow(dead_code)] BeaconStateError),
    CommitteeStoreError(#[allow(dead_code)] Slot),
    InvalidAttestationError,
}

impl From<BlockReplayError> for PackingEfficiencyError {
    fn from(e: BlockReplayError) -> Self {
        Self::BlockReplay(e)
    }
}

impl From<BeaconStateError> for PackingEfficiencyError {
    fn from(e: BeaconStateError) -> Self {
        Self::BeaconState(e)
    }
}

struct CommitteeStore {
    current_epoch_committees: Vec<OwnedBeaconCommittee>,
    previous_epoch_committees: Vec<OwnedBeaconCommittee>,
}

impl CommitteeStore {
    fn new() -> Self {
        CommitteeStore {
            current_epoch_committees: Vec::new(),
            previous_epoch_committees: Vec::new(),
        }
    }
}

struct PackingEfficiencyHandler<E: EthSpec> {
    current_slot: Slot,
    current_epoch: Epoch,
    prior_skip_slots: u64,
    available_attestations: HashSet<UniqueAttestation>,
    included_attestations: HashMap<UniqueAttestation, u64>,
    committee_store: CommitteeStore,
    _phantom: PhantomData<E>,
}

impl<E: EthSpec> PackingEfficiencyHandler<E> {
    fn new(
        start_epoch: Epoch,
        starting_state: BeaconState<E>,
        spec: &ChainSpec,
    ) -> Result<Self, PackingEfficiencyError> {
        let mut handler = PackingEfficiencyHandler {
            current_slot: start_epoch.start_slot(E::slots_per_epoch()),
            current_epoch: start_epoch,
            prior_skip_slots: 0,
            available_attestations: HashSet::new(),
            included_attestations: HashMap::new(),
            committee_store: CommitteeStore::new(),
            _phantom: PhantomData,
        };

        handler.compute_epoch(start_epoch, &starting_state, spec)?;
        Ok(handler)
    }

    fn update_slot(&mut self, slot: Slot) {
        self.current_slot = slot;
        if slot % E::slots_per_epoch() == 0 {
            self.current_epoch = Epoch::new(slot.as_u64() / E::slots_per_epoch());
        }
    }

    fn prune_included_attestations(&mut self) {
        let epoch = self.current_epoch;
        self.included_attestations.retain(|x, _| {
            x.slot >= Epoch::new(epoch.as_u64().saturating_sub(2)).start_slot(E::slots_per_epoch())
        });
    }

    fn prune_available_attestations(&mut self) {
        let slot = self.current_slot;
        self.available_attestations
            .retain(|x| x.slot >= (slot.as_u64().saturating_sub(E::slots_per_epoch())));
    }

    fn apply_block(
        &mut self,
        block: &SignedBeaconBlock<E, BlindedPayload<E>>,
    ) -> Result<usize, PackingEfficiencyError> {
        let block_body = block.message().body();
        let attestations = block_body.attestations();

        let mut attestations_in_block = HashMap::new();
        for attestation in attestations {
            match attestation {
                AttestationRef::Base(attn) => {
                    for (position, voted) in attn.aggregation_bits.iter().enumerate() {
                        if voted {
                            let unique_attestation = UniqueAttestation {
                                slot: attn.data.slot,
                                committee_index: attn.data.index,
                                committee_position: position,
                            };
                            let inclusion_distance: u64 = block
                                .slot()
                                .as_u64()
                                .checked_sub(attn.data.slot.as_u64())
                                .ok_or(PackingEfficiencyError::InvalidAttestationError)?;

                            self.available_attestations.remove(&unique_attestation);
                            attestations_in_block.insert(unique_attestation, inclusion_distance);
                        }
                    }
                }
                AttestationRef::Electra(attn) => {
                    for (position, voted) in attn.aggregation_bits.iter().enumerate() {
                        if voted {
                            let unique_attestation = UniqueAttestation {
                                slot: attn.data.slot,
                                committee_index: attn.data.index,
                                committee_position: position,
                            };
                            let inclusion_distance: u64 = block
                                .slot()
                                .as_u64()
                                .checked_sub(attn.data.slot.as_u64())
                                .ok_or(PackingEfficiencyError::InvalidAttestationError)?;

                            self.available_attestations.remove(&unique_attestation);
                            attestations_in_block.insert(unique_attestation, inclusion_distance);
                        }
                    }
                }
            }
        }

        // Remove duplicate attestations as these yield no reward.
        attestations_in_block.retain(|x, _| !self.included_attestations.contains_key(x));
        self.included_attestations
            .extend(attestations_in_block.clone());

        Ok(attestations_in_block.len())
    }

    fn add_attestations(&mut self, slot: Slot) -> Result<(), PackingEfficiencyError> {
        let committees = self.get_committees_at_slot(slot)?;
        for committee in committees {
            for position in 0..committee.committee.len() {
                let unique_attestation = UniqueAttestation {
                    slot,
                    committee_index: committee.index,
                    committee_position: position,
                };
                self.available_attestations.insert(unique_attestation);
            }
        }

        Ok(())
    }

    fn compute_epoch(
        &mut self,
        epoch: Epoch,
        state: &BeaconState<E>,
        spec: &ChainSpec,
    ) -> Result<(), PackingEfficiencyError> {
        // Free some memory by pruning old attestations from the included set.
        self.prune_included_attestations();

        let new_committees = if state.committee_cache_is_initialized(RelativeEpoch::Current) {
            state
                .get_beacon_committees_at_epoch(RelativeEpoch::Current)?
                .into_iter()
                .map(BeaconCommittee::into_owned)
                .collect::<Vec<_>>()
        } else {
            state
                .initialize_committee_cache(epoch, spec)?
                .get_all_beacon_committees()?
                .into_iter()
                .map(BeaconCommittee::into_owned)
                .collect::<Vec<_>>()
        };

        self.committee_store
            .previous_epoch_committees
            .clone_from(&self.committee_store.current_epoch_committees);

        self.committee_store.current_epoch_committees = new_committees;

        Ok(())
    }

    fn get_committees_at_slot(
        &self,
        slot: Slot,
    ) -> Result<Vec<OwnedBeaconCommittee>, PackingEfficiencyError> {
        let mut committees = Vec::new();

        for committee in &self.committee_store.current_epoch_committees {
            if committee.slot == slot {
                committees.push(committee.clone());
            }
        }
        for committee in &self.committee_store.previous_epoch_committees {
            if committee.slot == slot {
                committees.push(committee.clone());
            }
        }

        if committees.is_empty() {
            return Err(PackingEfficiencyError::CommitteeStoreError(slot));
        }

        Ok(committees)
    }
}

pub fn get_block_packing_efficiency<T: BeaconChainTypes>(
    query: BlockPackingEfficiencyQuery,
    chain: Arc<BeaconChain<T>>,
) -> Result<Vec<BlockPackingEfficiency>, warp::Rejection> {
    let spec = &chain.spec;

    let start_epoch = query.start_epoch;
    let start_slot = start_epoch.start_slot(T::EthSpec::slots_per_epoch());
    let prior_slot = start_slot - 1;

    let end_epoch = query.end_epoch;
    let end_slot = end_epoch.end_slot(T::EthSpec::slots_per_epoch());

    // Check query is valid.
    if start_epoch > end_epoch || start_epoch == 0 {
        return Err(custom_bad_request(format!(
            "invalid start and end epochs: {}, {}",
            start_epoch, end_epoch
        )));
    }

    let prior_epoch = start_epoch - 1;
    let start_slot_of_prior_epoch = prior_epoch.start_slot(T::EthSpec::slots_per_epoch());

    // Load block roots.
    let mut block_roots: Vec<Hash256> = chain
        .forwards_iter_block_roots_until(start_slot_of_prior_epoch, end_slot)
        .map_err(beacon_chain_error)?
        .collect::<Result<Vec<(Hash256, Slot)>, _>>()
        .map_err(beacon_chain_error)?
        .iter()
        .map(|(root, _)| *root)
        .collect();
    block_roots.dedup();

    let first_block_root = block_roots
        .first()
        .ok_or_else(|| custom_server_error("no blocks were loaded".to_string()))?;

    let first_block = chain
        .get_blinded_block(first_block_root)
        .and_then(|maybe_block| {
            maybe_block.ok_or(BeaconChainError::MissingBeaconBlock(*first_block_root))
        })
        .map_err(beacon_chain_error)?;

    // Load state for block replay.
    let starting_state_root = first_block.state_root();

    let starting_state = chain
        .get_state(&starting_state_root, Some(prior_slot))
        .and_then(|maybe_state| {
            maybe_state.ok_or(BeaconChainError::MissingBeaconState(starting_state_root))
        })
        .map_err(beacon_chain_error)?;

    // Initialize response vector.
    let mut response = Vec::new();

    // Initialize handler.
    let handler = Arc::new(Mutex::new(
        PackingEfficiencyHandler::new(prior_epoch, starting_state.clone(), spec)
            .map_err(|e| custom_server_error(format!("{:?}", e)))?,
    ));

    let pre_slot_hook =
        |_, state: &mut BeaconState<T::EthSpec>| -> Result<(), PackingEfficiencyError> {
            // Add attestations to `available_attestations`.
            handler.lock().add_attestations(state.slot())?;
            Ok(())
        };

    let post_slot_hook = |state: &mut BeaconState<T::EthSpec>,
                          _summary: Option<EpochProcessingSummary<T::EthSpec>>,
                          is_skip_slot: bool|
     -> Result<(), PackingEfficiencyError> {
        handler.lock().update_slot(state.slot());

        // Check if this is a new epoch.
        if state.slot() % T::EthSpec::slots_per_epoch() == 0 {
            handler.lock().compute_epoch(
                state.slot().epoch(T::EthSpec::slots_per_epoch()),
                state,
                spec,
            )?;
        }

        if is_skip_slot {
            handler.lock().prior_skip_slots += 1;
        }

        // Remove expired attestations.
        handler.lock().prune_available_attestations();

        Ok(())
    };

    let pre_block_hook = |_state: &mut BeaconState<T::EthSpec>,
                          block: &SignedBeaconBlock<_, BlindedPayload<_>>|
     -> Result<(), PackingEfficiencyError> {
        let slot = block.slot();

        let block_message = block.message();
        // Get block proposer info.
        let proposer_info = ProposerInfo {
            validator_index: block_message.proposer_index(),
            graffiti: block_message.body().graffiti().as_utf8_lossy(),
        };

        // Store the count of available attestations at this point.
        // In the future it may be desirable to check that the number of available attestations
        // does not exceed the maximum possible amount given the length of available committees.
        let available_count = handler.lock().available_attestations.len();

        // Get all attestations included in the block.
        let included = handler.lock().apply_block(block)?;

        let efficiency = BlockPackingEfficiency {
            slot,
            block_hash: block.canonical_root(),
            proposer_info,
            available_attestations: available_count,
            included_attestations: included,
            prior_skip_slots: handler.lock().prior_skip_slots,
        };

        // Write to response.
        if slot >= start_slot {
            response.push(efficiency);
        }

        handler.lock().prior_skip_slots = 0;

        Ok(())
    };

    // Build BlockReplayer.
    let mut replayer = BlockReplayer::new(starting_state, spec)
        .no_state_root_iter()
        .no_signature_verification()
        .minimal_block_root_verification()
        .pre_slot_hook(Box::new(pre_slot_hook))
        .post_slot_hook(Box::new(post_slot_hook))
        .pre_block_hook(Box::new(pre_block_hook));

    // Iterate through the block roots, loading blocks in chunks to reduce load on memory.
    for block_root_chunks in block_roots.chunks(BLOCK_ROOT_CHUNK_SIZE) {
        // Load blocks from the block root chunks.
        let blocks = block_root_chunks
            .iter()
            .map(|root| {
                chain
                    .get_blinded_block(root)
                    .and_then(|maybe_block| {
                        maybe_block.ok_or(BeaconChainError::MissingBeaconBlock(*root))
                    })
                    .map_err(beacon_chain_error)
            })
            .collect::<Result<Vec<_>, _>>()?;

        replayer = replayer
            .apply_blocks(blocks, None)
            .map_err(|e: PackingEfficiencyError| custom_server_error(format!("{:?}", e)))?;
    }

    drop(replayer);

    Ok(response)
}
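Each `BlockPackingEfficiency` entry returned by the endpoint above pairs the number of attestations that were available to the proposer with the number actually included. Consumers of the endpoint typically reduce that pair to a percentage; the helper below is a minimal sketch of that calculation and is not part of the Lighthouse API.

```rust
/// Illustrative helper (not from Lighthouse): packing efficiency as a
/// percentage, guarding against division by zero when no attestations
/// were available (e.g. for blocks right after genesis).
fn packing_efficiency_pct(available: usize, included: usize) -> f64 {
    if available == 0 {
        0.0
    } else {
        100.0 * included as f64 / available as f64
    }
}

fn main() {
    // A proposer that included 96 of 128 available attestations.
    println!("{:.1}%", packing_efficiency_pct(128, 96));
}
```

Note that `included_attestations` counts only non-duplicate votes (see `apply_block`), so the ratio never exceeds 100% for well-formed data.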