lighthouse/lcli/src/transition_blocks.rs
ethDreamer c52c598f69 Electra: Remaining Consensus Data Structures (#5712)
2024-06-24 21:08:07 +00:00


//! # Transition Blocks
//!
//! Use this tool to apply a `SignedBeaconBlock` to a `BeaconState`. Useful for benchmarking or
//! troubleshooting consensus failures.
//!
//! It can load states and blocks from file or pull them from a beaconAPI. Objects pulled from a
//! beaconAPI can be saved to disk to reduce future calls to that server.
//!
//! Logging output is controlled via the `RUST_LOG` environment variable. For example, `export
//! RUST_LOG=debug`.
//!
//! ## Examples
//!
//! ### Run using a block from a beaconAPI
//!
//! Download the 0x6c69 block and its pre-state (the state from its parent block) from the
//! beaconAPI. Advance the pre-state to the slot of the 0x6c69 block and apply that block to the
//! pre-state.
//!
//! ```ignore
//! lcli transition-blocks \
//! --beacon-url http://localhost:5052 \
//! --block-id 0x6c69cf50a451f1ec905e954bf1fa22970f371a72a5aa9f8e3a43a18fdd980bec \
//! --runs 10
//! ```
//!
//! ### Download a block and pre-state from a beaconAPI to the filesystem
//!
//! Download a block and pre-state to the filesystem, without performing any transitions:
//!
//! ```ignore
//! lcli transition-blocks \
//! --beacon-url http://localhost:5052 \
//! --block-id 0x6c69cf50a451f1ec905e954bf1fa22970f371a72a5aa9f8e3a43a18fdd980bec \
//! --runs 0 \
//! --block-output-path /tmp/block-0x6c69.ssz \
//! --pre-state-output-path /tmp/pre-state-0x6c69.ssz
//! ```
//!
//! ### Use a block and pre-state from the filesystem
//!
//! Do one run over the block and pre-state downloaded in the previous example and save the post
//! state to file:
//!
//! ```ignore
//! lcli transition-blocks \
//! --block-path /tmp/block-0x6c69.ssz \
//! --pre-state-path /tmp/pre-state-0x6c69.ssz \
//! --post-state-output-path /tmp/post-state-0x6c69.ssz
//! ```
//!
//! ### Isolate block processing for benchmarking
//!
//! Try to isolate block processing as much as possible for benchmarking:
//!
//! ```ignore
//! lcli transition-blocks \
//! --block-path /tmp/block-0x6c69.ssz \
//! --pre-state-path /tmp/pre-state-0x6c69.ssz \
//! --runs 10 \
//! --exclude-cache-builds \
//! --exclude-post-block-thc
//! ```
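//!
//! ### Skip signature verification
//!
//! BLS signature verification often dominates runtime, so skipping it (via the
//! `--no-signature-verification` flag parsed by this tool) isolates the remaining state
//! transition logic:
//!
//! ```ignore
//! lcli transition-blocks \
//! --block-path /tmp/block-0x6c69.ssz \
//! --pre-state-path /tmp/pre-state-0x6c69.ssz \
//! --runs 10 \
//! --no-signature-verification
//! ```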
use beacon_chain::{
test_utils::EphemeralHarnessType, validator_pubkey_cache::ValidatorPubkeyCache,
};
use clap::ArgMatches;
use clap_utils::{parse_optional, parse_required};
use environment::{null_logger, Environment};
use eth2::{
types::{BlockId, StateId},
BeaconNodeHttpClient, SensitiveUrl, Timeouts,
};
use eth2_network_config::Eth2NetworkConfig;
use log::{debug, info};
use ssz::Encode;
use state_processing::state_advance::complete_state_advance;
use state_processing::{
block_signature_verifier::BlockSignatureVerifier, per_block_processing, AllCaches,
BlockSignatureStrategy, ConsensusContext, VerifyBlockRoot,
};
use std::borrow::Cow;
use std::fs::File;
use std::io::prelude::*;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::{Duration, Instant};
use store::HotColdDB;
use types::{BeaconState, ChainSpec, EthSpec, Hash256, SignedBeaconBlock};
/// Timeout applied to all beaconAPI HTTP requests.
const HTTP_TIMEOUT: Duration = Duration::from_secs(10);
/// Flags controlling which phases are performed and timed.
#[derive(Debug)]
struct Config {
    /// Skip BLS signature verification of the block.
    no_signature_verification: bool,
    /// Build all caches before the timed runs so cache construction is excluded from timings.
    exclude_cache_builds: bool,
    /// Skip the post-block tree hash cache (THC) update.
    exclude_post_block_thc: bool,
}
/// Run the `transition-blocks` subcommand: load a block and pre-state, perform `runs`
/// transitions, and optionally write the artifacts to disk.
pub fn run<E: EthSpec>(
env: Environment<E>,
network_config: Eth2NetworkConfig,
matches: &ArgMatches,
) -> Result<(), String> {
let spec = &network_config.chain_spec::<E>()?;
let executor = env.core_context().executor;
/*
* Parse (most) CLI arguments.
*/
let pre_state_path: Option<PathBuf> = parse_optional(matches, "pre-state-path")?;
let block_path: Option<PathBuf> = parse_optional(matches, "block-path")?;
let post_state_output_path: Option<PathBuf> =
parse_optional(matches, "post-state-output-path")?;
let pre_state_output_path: Option<PathBuf> = parse_optional(matches, "pre-state-output-path")?;
let block_output_path: Option<PathBuf> = parse_optional(matches, "block-output-path")?;
let beacon_url: Option<SensitiveUrl> = parse_optional(matches, "beacon-url")?;
let runs: usize = parse_required(matches, "runs")?;
let config = Config {
no_signature_verification: matches.get_flag("no-signature-verification"),
exclude_cache_builds: matches.get_flag("exclude-cache-builds"),
exclude_post_block_thc: matches.get_flag("exclude-post-block-thc"),
};
info!("Using {} spec", E::spec_name());
info!("Doing {} runs", runs);
info!("{:?}", &config);
/*
* Load the block and pre-state from disk or beaconAPI URL.
*/
let (mut pre_state, mut state_root_opt, block) = match (pre_state_path, block_path, beacon_url)
{
(Some(pre_state_path), Some(block_path), None) => {
info!("Block path: {:?}", block_path);
info!("Pre-state path: {:?}", pre_state_path);
let pre_state = load_from_ssz_with(&pre_state_path, spec, BeaconState::from_ssz_bytes)?;
let block = load_from_ssz_with(&block_path, spec, SignedBeaconBlock::from_ssz_bytes)?;
(pre_state, None, block)
}
(None, None, Some(beacon_url)) => {
let block_id: BlockId = parse_required(matches, "block-id")?;
let client = BeaconNodeHttpClient::new(beacon_url, Timeouts::set_all(HTTP_TIMEOUT));
executor
.handle()
.ok_or("shutdown in progress")?
.block_on(async move {
let block = client
.get_beacon_blocks(block_id)
.await
.map_err(|e| format!("Failed to download block: {:?}", e))?
.ok_or_else(|| format!("Unable to locate block at {:?}", block_id))?
.data;
if block.slot() == spec.genesis_slot {
return Err("Cannot run on the genesis block".to_string());
}
let parent_block: SignedBeaconBlock<E> = client
.get_beacon_blocks(BlockId::Root(block.parent_root()))
.await
.map_err(|e| format!("Failed to download parent block: {:?}", e))?
.ok_or_else(|| format!("Unable to locate parent block at {:?}", block_id))?
.data;
let state_root = parent_block.state_root();
let state_id = StateId::Root(state_root);
let pre_state = client
.get_debug_beacon_states::<E>(state_id)
.await
.map_err(|e| format!("Failed to download state: {:?}", e))?
.ok_or_else(|| format!("Unable to locate state at {:?}", state_id))?
.data;
Ok((pre_state, Some(state_root), block))
})
.map_err(|e| format!("Failed to complete task: {:?}", e))?
}
_ => {
return Err(
"must supply *both* --pre-state-path and --block-path *or* only --beacon-url"
.into(),
)
}
};
// Compute the block root.
let block_root = block.canonical_root();
/*
* Create a `BeaconStore` and `ValidatorPubkeyCache` for block signature verification.
*/
let store = HotColdDB::open_ephemeral(
<_>::default(),
spec.clone(),
null_logger().map_err(|e| format!("Failed to create null_logger: {:?}", e))?,
)
.map_err(|e| format!("Failed to create ephemeral store: {:?}", e))?;
let store = Arc::new(store);
debug!("Building pubkey cache (might take some time)");
let validator_pubkey_cache = ValidatorPubkeyCache::new(&pre_state, store)
.map_err(|e| format!("Failed to create pubkey cache: {:?}", e))?;
/*
* If cache builds are excluded from the timings, build them early so they are available for
* each run.
*/
if config.exclude_cache_builds {
pre_state
.build_all_caches(spec)
.map_err(|e| format!("Unable to build caches: {:?}", e))?;
let state_root = pre_state
.update_tree_hash_cache()
.map_err(|e| format!("Unable to build THC: {:?}", e))?;
if state_root_opt.map_or(false, |expected| expected != state_root) {
return Err(format!(
"State root mismatch! Expected {}, computed {}",
state_root_opt.unwrap(),
state_root
));
}
state_root_opt = Some(state_root);
}
/*
* Perform the core "runs".
*/
let mut output_post_state = None;
let mut saved_ctxt = None;
for i in 0..runs {
let pre_state = pre_state.clone();
let block = block.clone();
let start = Instant::now();
let post_state = do_transition(
pre_state,
block_root,
block,
state_root_opt,
&config,
&validator_pubkey_cache,
&mut saved_ctxt,
spec,
)?;
let duration = Instant::now().duration_since(start);
info!("Run {}: {:?}", i, duration);
if output_post_state.is_none() {
output_post_state = Some(post_state)
}
}
/*
* Write artifacts to disk, if required.
*/
if let Some(path) = post_state_output_path {
let output_post_state = output_post_state.ok_or_else(|| {
format!(
"Post state was not computed, cannot save to disk (runs = {})",
runs
)
})?;
let mut output_file =
File::create(path).map_err(|e| format!("Unable to create output file: {:?}", e))?;
output_file
.write_all(&output_post_state.as_ssz_bytes())
.map_err(|e| format!("Unable to write to output file: {:?}", e))?;
}
if let Some(path) = pre_state_output_path {
let mut output_file =
File::create(path).map_err(|e| format!("Unable to create output file: {:?}", e))?;
output_file
.write_all(&pre_state.as_ssz_bytes())
.map_err(|e| format!("Unable to write to output file: {:?}", e))?;
}
if let Some(path) = block_output_path {
let mut output_file =
File::create(path).map_err(|e| format!("Unable to create output file: {:?}", e))?;
output_file
.write_all(&block.as_ssz_bytes())
.map_err(|e| format!("Unable to write to output file: {:?}", e))?;
}
drop(pre_state);
Ok(())
}
/// Apply `block` to `pre_state`, timing each phase (cache builds, state advance, signature
/// verification, block processing, post-block tree hashing) according to `config`.
#[allow(clippy::too_many_arguments)]
fn do_transition<E: EthSpec>(
mut pre_state: BeaconState<E>,
block_root: Hash256,
block: SignedBeaconBlock<E>,
mut state_root_opt: Option<Hash256>,
config: &Config,
validator_pubkey_cache: &ValidatorPubkeyCache<EphemeralHarnessType<E>>,
saved_ctxt: &mut Option<ConsensusContext<E>>,
spec: &ChainSpec,
) -> Result<BeaconState<E>, String> {
if !config.exclude_cache_builds {
let t = Instant::now();
pre_state
.build_all_caches(spec)
.map_err(|e| format!("Unable to build caches: {:?}", e))?;
debug!("Build caches: {:?}", t.elapsed());
let t = Instant::now();
let state_root = pre_state
.update_tree_hash_cache()
.map_err(|e| format!("Unable to build tree hash cache: {:?}", e))?;
debug!("Initial tree hash: {:?}", t.elapsed());
if state_root_opt.map_or(false, |expected| expected != state_root) {
return Err(format!(
"State root mismatch! Expected {}, computed {}",
state_root_opt.unwrap(),
state_root
));
}
state_root_opt = Some(state_root);
}
let state_root = state_root_opt.ok_or("Failed to compute state root, internal error")?;
// Transition the parent state to the block slot.
let t = Instant::now();
complete_state_advance(&mut pre_state, Some(state_root), block.slot(), spec)
.map_err(|e| format!("Unable to perform complete advance: {e:?}"))?;
debug!("Slot processing: {:?}", t.elapsed());
// Slot and epoch processing should keep the caches fully primed.
assert!(pre_state.all_caches_built());
let t = Instant::now();
pre_state
.build_all_caches(spec)
.map_err(|e| format!("Unable to build caches: {:?}", e))?;
debug!("Build all caches (again): {:?}", t.elapsed());
    let mut ctxt = if let Some(ctxt) = saved_ctxt {
        ctxt.clone()
    } else {
        ConsensusContext::new(pre_state.slot())
            .set_current_block_root(block_root)
            .set_proposer_index(block.message().proposer_index())
    };
if !config.no_signature_verification {
let get_pubkey = move |validator_index| {
validator_pubkey_cache
.get(validator_index)
.map(Cow::Borrowed)
};
let decompressor = move |pk_bytes| {
// Map compressed pubkey to validator index.
let validator_index = validator_pubkey_cache.get_index(pk_bytes)?;
// Map validator index to pubkey (respecting guard on unknown validators).
get_pubkey(validator_index)
};
let t = Instant::now();
BlockSignatureVerifier::verify_entire_block(
&pre_state,
get_pubkey,
decompressor,
&block,
&mut ctxt,
spec,
)
.map_err(|e| format!("Invalid block signature: {:?}", e))?;
debug!("Batch verify block signatures: {:?}", t.elapsed());
// Signature verification should prime the indexed attestation cache.
assert_eq!(
ctxt.num_cached_indexed_attestations(),
block.message().body().attestations_len()
);
}
let t = Instant::now();
per_block_processing(
&mut pre_state,
&block,
BlockSignatureStrategy::NoVerification,
VerifyBlockRoot::True,
&mut ctxt,
spec,
)
.map_err(|e| format!("State transition failed: {:?}", e))?;
debug!("Process block: {:?}", t.elapsed());
if !config.exclude_post_block_thc {
let t = Instant::now();
pre_state
.update_tree_hash_cache()
.map_err(|e| format!("Unable to build tree hash cache: {:?}", e))?;
debug!("Post-block tree hash: {:?}", t.elapsed());
}
Ok(pre_state)
}
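/// Read `path` and SSZ-decode its contents with `decoder`, timing the decode step.
///
/// A minimal sketch of intended usage, mirroring the calls made in `run` above (the `path`
/// variable is illustrative):
///
/// ```ignore
/// let pre_state: BeaconState<E> =
///     load_from_ssz_with(&path, spec, BeaconState::from_ssz_bytes)?;
/// let block: SignedBeaconBlock<E> =
///     load_from_ssz_with(&path, spec, SignedBeaconBlock::from_ssz_bytes)?;
/// ```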
pub fn load_from_ssz_with<T>(
path: &Path,
spec: &ChainSpec,
decoder: impl FnOnce(&[u8], &ChainSpec) -> Result<T, ssz::DecodeError>,
) -> Result<T, String> {
let mut file =
File::open(path).map_err(|e| format!("Unable to open file {:?}: {:?}", path, e))?;
let mut bytes = vec![];
file.read_to_end(&mut bytes)
.map_err(|e| format!("Unable to read from file {:?}: {:?}", path, e))?;
let t = Instant::now();
    let result = decoder(&bytes, spec).map_err(|e| format!("SSZ decode failed: {:?}", e));
debug!("SSZ decoding {}: {:?}", path.display(), t.elapsed());
result
}