## Overview
This rather extensive PR achieves two primary goals:
1. Uses the finalized/justified checkpoints of fork choice (FC), rather than that of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.
Additionally, it achieves:
- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
- I had to do this to deal with sending blocks into spawned tasks (see the sketch after this list).
- Previously we were cloning the beacon block at least twice during each block processing; these clones have either been removed or turned into cheaper `Arc` clones.
- We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
- Avoids cloning *all the blocks* in *every chain segment* during sync.
- It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this; my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
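
A rough sketch of the `Arc` + spawned-task pattern described above; the names and function bodies here are hypothetical stand-ins, not the real Lighthouse internals:

```
use std::sync::Arc;

// Hypothetical stand-in for the real `SignedBeaconBlock`.
struct SignedBeaconBlock {
    slot: u64,
}

// Hypothetical stand-ins for payload verification and per-block processing.
async fn verify_execution_payload(_block: Arc<SignedBeaconBlock>) {}
async fn per_block_processing(block: Arc<SignedBeaconBlock>) {
    println!("processing block at slot {}", block.slot);
}

#[tokio::main]
async fn main() {
    let block = Arc::new(SignedBeaconBlock { slot: 42 });

    // Cloning the `Arc` bumps a reference count instead of deep-copying the
    // block, so the block can be sent into a spawned task cheaply...
    let handle = tokio::spawn(verify_execution_payload(Arc::clone(&block)));

    // ...while per-block processing runs concurrently with it.
    per_block_processing(block).await;
    handle.await.unwrap();
}
```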
For the motivation behind this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273
## Changes to `canonical_head` and `fork_choice`
Previously, the `BeaconChain` had two separate fields:
```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```
Now, we have grouped these values under a single struct:
```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```
Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
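
A minimal sketch of the pattern, using `std::sync::RwLock` and hypothetical types for illustration:

```
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the head snapshot.
struct Snapshot {
    head_slot: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
}

impl CanonicalHead {
    /// Clone the `Arc` (cheap) and release the read lock immediately, so a
    /// caller never holds the `cached_head` lock while acquiring other locks
    /// such as `fork_choice`.
    fn cached_head(&self) -> Arc<Snapshot> {
        Arc::clone(&self.cached_head.read().unwrap())
    }
}

fn main() {
    let head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_slot: 0 })),
    };
    // By the time we read from the snapshot, the lock is already released.
    assert_eq!(head.cached_head().head_slot, 0);
}
```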
## Breaking Changes
### The `state` (root) field in the `finalized_checkpoint` SSE event
Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:
1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.
Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).
I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
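
A self-contained toy of the distinction, where "state roots" are just hashes of a toy state and all names are hypothetical:

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy state: hashing it stands in for computing a state root.
#[derive(Clone, Hash)]
struct State {
    slot: u64,
}

fn state_root(state: &State) -> u64 {
    let mut hasher = DefaultHasher::new();
    state.hash(&mut hasher);
    hasher.finish()
}

// Advance the state through one skip slot. Only the slot changes here, but
// that alone is enough to change the state root.
fn per_slot_processing(state: &mut State) {
    state.slot += 1;
}

fn main() {
    let start_slot_n = 320; // e.g. start_slot(10) with 32 slots per epoch.

    // The finalized block sits at slot 319 because slot 320 was skipped.
    let post_block_state = State { slot: 319 };

    // (1) New behaviour (matches Teku): the finalized block's own state root.
    let option_1 = state_root(&post_block_state);

    // (2) Old behaviour: "skip forward" through the skipped slots first.
    let mut skipped = post_block_state.clone();
    while skipped.slot < start_slot_n {
        per_slot_processing(&mut skipped);
    }
    let option_2 = state_root(&skipped);

    // The two roots differ whenever start_slot(n) is skipped.
    assert_ne!(option_1, option_2);
}
```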
## Notes for Reviewers
I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice, and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.
I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way, and since I don't think we were making any promises about SSE event ordering, it's not "breaking".
I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading fork choice from the store, where a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot when we saved it.
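
Roughly the shape of that change, with hypothetical names standing in for the real fork choice types:

```
// Hypothetical stand-ins for the real fork choice types.
struct CachedHead {
    head_slot: u64,
}

struct ForkChoice {
    // `get_head` runs in the constructor, so this cache is never empty and
    // doesn't need to be an `Option<CachedHead>`.
    cached_head: CachedHead,
}

impl ForkChoice {
    // The constructor takes a `slot` so the initial `get_head` run knows
    // *when* it's happening.
    fn new(current_slot: u64) -> Self {
        let cached_head = Self::get_head(current_slot);
        Self { cached_head }
    }

    fn get_head(slot: u64) -> CachedHead {
        CachedHead { head_slot: slot }
    }
}

fn main() {
    let fork_choice = ForkChoice::new(100);
    assert_eq!(fork_choice.cached_head.head_slot, 100);
}
```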
I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.
Since we're using FC for the finalized/justified checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` expected the `0x00..00` alias and patched it here: 3f26ac3e2.
You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; not so efficient, but hopefully insignificant.
I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
Co-authored-by: Mac L <mjladson@pm.me>
```
//! These two `batch_...` functions provide verification of batches of attestations. They provide
//! significant CPU-time savings by performing batch verification of BLS signatures.
//!
//! In each function, attestations are "indexed" (i.e., the `IndexedAttestation` is computed), to
//! determine if they should progress to signature verification. Then, all attestations which were
//! successfully indexed have their signatures verified in a batch. If that signature batch fails
//! then all attestation signatures are verified independently.
//!
//! The outcome of each function is a `Vec<Result>` with a one-to-one mapping to the attestations
//! supplied as input. Each result provides the exact success or failure result of the corresponding
//! attestation, with no loss of fidelity when compared to individual verification.

use super::{
    CheckAttestationSignature, Error, IndexedAggregatedAttestation, IndexedUnaggregatedAttestation,
    VerifiedAggregatedAttestation, VerifiedUnaggregatedAttestation,
};
use crate::{
    beacon_chain::VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT, metrics, BeaconChain, BeaconChainError,
    BeaconChainTypes,
};
use bls::verify_signature_sets;
use state_processing::signature_sets::{
    indexed_attestation_signature_set_from_pubkeys, signed_aggregate_selection_proof_signature_set,
    signed_aggregate_signature_set,
};
use std::borrow::Cow;
use types::*;

/// Verify aggregated attestations using batch BLS signature verification.
///
/// See module-level docs for more info.
pub fn batch_verify_aggregated_attestations<'a, T, I>(
    aggregates: I,
    chain: &BeaconChain<T>,
) -> Result<Vec<Result<VerifiedAggregatedAttestation<'a, T>, Error>>, Error>
where
    T: BeaconChainTypes,
    I: Iterator<Item = &'a SignedAggregateAndProof<T::EthSpec>> + ExactSizeIterator,
{
    let mut num_indexed = 0;
    let mut num_failed = 0;

    // Perform indexing of all attestations, collecting the results.
    let indexing_results = aggregates
        .map(|aggregate| {
            let result = IndexedAggregatedAttestation::verify(aggregate, chain);
            if result.is_ok() {
                num_indexed += 1;
            } else {
                num_failed += 1;
            }
            result
        })
        .collect::<Vec<_>>();

    // May be set to `No` if batch verification succeeds.
    let mut check_signatures = CheckAttestationSignature::Yes;

    // Perform batch BLS verification, if any attestation signatures are worth checking.
    if num_indexed > 0 {
        let signature_setup_timer =
            metrics::start_timer(&metrics::ATTESTATION_PROCESSING_BATCH_AGG_SIGNATURE_SETUP_TIMES);

        let pubkey_cache = chain
            .validator_pubkey_cache
            .try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
            .ok_or(BeaconChainError::ValidatorPubkeyCacheLockTimeout)?;

        let fork = chain.canonical_head.cached_head().head_fork();

        let mut signature_sets = Vec::with_capacity(num_indexed * 3);

        // Iterate, flattening to get only the `Ok` values.
        for indexed in indexing_results.iter().flatten() {
            let signed_aggregate = &indexed.signed_aggregate;
            let indexed_attestation = &indexed.indexed_attestation;

            signature_sets.push(
                signed_aggregate_selection_proof_signature_set(
                    |validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
                    signed_aggregate,
                    &fork,
                    chain.genesis_validators_root,
                    &chain.spec,
                )
                .map_err(BeaconChainError::SignatureSetError)?,
            );
            signature_sets.push(
                signed_aggregate_signature_set(
                    |validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
                    signed_aggregate,
                    &fork,
                    chain.genesis_validators_root,
                    &chain.spec,
                )
                .map_err(BeaconChainError::SignatureSetError)?,
            );
            signature_sets.push(
                indexed_attestation_signature_set_from_pubkeys(
                    |validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
                    &indexed_attestation.signature,
                    indexed_attestation,
                    &fork,
                    chain.genesis_validators_root,
                    &chain.spec,
                )
                .map_err(BeaconChainError::SignatureSetError)?,
            );
        }

        metrics::stop_timer(signature_setup_timer);

        let _signature_verification_timer =
            metrics::start_timer(&metrics::ATTESTATION_PROCESSING_BATCH_AGG_SIGNATURE_TIMES);

        if verify_signature_sets(signature_sets.iter()) {
            // Since all the signatures verified in a batch, there's no reason for them to be
            // checked again later.
            check_signatures = CheckAttestationSignature::No
        }
    }

    // Complete the attestation verification, potentially verifying all signatures independently.
    let final_results = indexing_results
        .into_iter()
        .map(|result| match result {
            Ok(indexed) => {
                VerifiedAggregatedAttestation::from_indexed(indexed, chain, check_signatures)
            }
            Err(e) => Err(e),
        })
        .collect();

    Ok(final_results)
}

/// Verify unaggregated attestations using batch BLS signature verification.
///
/// See module-level docs for more info.
pub fn batch_verify_unaggregated_attestations<'a, T, I>(
    attestations: I,
    chain: &BeaconChain<T>,
) -> Result<Vec<Result<VerifiedUnaggregatedAttestation<'a, T>, Error>>, Error>
where
    T: BeaconChainTypes,
    I: Iterator<Item = (&'a Attestation<T::EthSpec>, Option<SubnetId>)> + ExactSizeIterator,
{
    let mut num_partially_verified = 0;
    let mut num_failed = 0;

    // Perform partial verification of all attestations, collecting the results.
    let partial_results = attestations
        .map(|(attn, subnet_opt)| {
            let result = IndexedUnaggregatedAttestation::verify(attn, subnet_opt, chain);
            if result.is_ok() {
                num_partially_verified += 1;
            } else {
                num_failed += 1;
            }
            result
        })
        .collect::<Vec<_>>();

    // May be set to `No` if batch verification succeeds.
    let mut check_signatures = CheckAttestationSignature::Yes;

    // Perform batch BLS verification, if any attestation signatures are worth checking.
    if num_partially_verified > 0 {
        let signature_setup_timer = metrics::start_timer(
            &metrics::ATTESTATION_PROCESSING_BATCH_UNAGG_SIGNATURE_SETUP_TIMES,
        );

        let fork = chain.canonical_head.cached_head().head_fork();

        let pubkey_cache = chain
            .validator_pubkey_cache
            .try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
            .ok_or(BeaconChainError::ValidatorPubkeyCacheLockTimeout)?;

        let mut signature_sets = Vec::with_capacity(num_partially_verified);

        // Iterate, flattening to get only the `Ok` values.
        for partially_verified in partial_results.iter().flatten() {
            let indexed_attestation = &partially_verified.indexed_attestation;

            let signature_set = indexed_attestation_signature_set_from_pubkeys(
                |validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
                &indexed_attestation.signature,
                indexed_attestation,
                &fork,
                chain.genesis_validators_root,
                &chain.spec,
            )
            .map_err(BeaconChainError::SignatureSetError)?;

            signature_sets.push(signature_set);
        }

        metrics::stop_timer(signature_setup_timer);

        let _signature_verification_timer =
            metrics::start_timer(&metrics::ATTESTATION_PROCESSING_BATCH_UNAGG_SIGNATURE_TIMES);

        if verify_signature_sets(signature_sets.iter()) {
            // Since all the signatures verified in a batch, there's no reason for them to be
            // checked again later.
            check_signatures = CheckAttestationSignature::No
        }
    }

    // Complete the attestation verification, potentially verifying all signatures independently.
    let final_results = partial_results
        .into_iter()
        .map(|result| match result {
            Ok(partial) => {
                VerifiedUnaggregatedAttestation::from_indexed(partial, chain, check_signatures)
            }
            Err(e) => Err(e),
        })
        .collect();

    Ok(final_results)
}
```
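
The module docs above describe the core strategy: verify all signatures in one batch, and only fall back to independent verification when the batch fails. A self-contained toy of that batch-then-fallback pattern, with a boolean "signature" standing in for BLS and all names hypothetical:

```
// Toy "signature": valid or not.
#[derive(Clone, Copy)]
struct Sig {
    valid: bool,
}

// Stand-in for `bls::verify_signature_sets`: a batch succeeds only if every
// signature in it is valid.
fn verify_batch(sigs: &[Sig]) -> bool {
    sigs.iter().all(|sig| sig.valid)
}

fn verify_one(sig: Sig) -> bool {
    sig.valid
}

fn main() {
    let sigs = [Sig { valid: true }, Sig { valid: false }, Sig { valid: true }];

    let results: Vec<bool> = if verify_batch(&sigs) {
        // Fast path: the batch verified, so every signature is known-good and
        // none need re-checking (`CheckAttestationSignature::No` above).
        vec![true; sigs.len()]
    } else {
        // Slow path: the batch failed, so verify independently to preserve
        // the exact per-attestation result.
        sigs.iter().map(|&sig| verify_one(sig)).collect()
    };

    assert_eq!(results, vec![true, false, true]);
}
```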