## Overview
This rather extensive PR achieves two primary goals:
1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.
Additionally, it achieves:
- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
- I had to do this to deal with sending blocks into spawned tasks.
- Previously we were cloning the beacon block at least twice during each block processing; these clones are now either removed or turned into cheaper `Arc` clones (see the sketch after this list).
- We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
- Avoids cloning *all the blocks* in *every chain segment* during sync.
- It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
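As a rough illustration of the `Arc`-ification above, here's a minimal sketch (the `SignedBeaconBlock` below is a trivial stand-in for the real type, and a plain thread stands in for the spawned task):
```
use std::sync::Arc;
use std::thread;

// Trivial stand-in for the real `SignedBeaconBlock` (which is large and never
// mutated during block processing, making `Arc` a good fit).
struct SignedBeaconBlock {
    slot: u64,
}

fn main() {
    let block = Arc::new(SignedBeaconBlock { slot: 42 });

    // Cloning the `Arc` only bumps a reference count; the block is not copied.
    let block_for_task = Arc::clone(&block);
    let handle = thread::spawn(move || {
        // The spawned task reads the block without needing its own deep copy.
        println!("processing block at slot {}", block_for_task.slot);
    });

    handle.join().unwrap();
    println!("original still available at slot {}", block.slot);
}
```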
For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273
## Changes to `canonical_head` and `fork_choice`
Previously, the `BeaconChain` had two separate fields:
```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```
Now, we have grouped these values under a single struct:
```
canonical_head: CanonicalHead {
cached_head: RwLock<Arc<Snapshot>>,
fork_choice: RwLock<BeaconForkChoice>
}
```
Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
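A minimal sketch of the access pattern this enables, with heavily simplified stand-ins for `Snapshot`, `BeaconForkChoice` and the `cached_head()` accessor:
```
use std::sync::{Arc, RwLock};

// Greatly simplified stand-ins for the real types.
struct Snapshot {
    head_block_root: u64,
}
struct BeaconForkChoice;

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>,
}

impl CanonicalHead {
    /// Clone the `Arc` and drop the lock immediately, so callers never hold
    /// `cached_head` while also taking the `fork_choice` lock.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}

fn main() {
    let canonical_head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_block_root: 1 })),
        fork_choice: RwLock::new(BeaconForkChoice),
    };

    // The `cached_head` lock is released as soon as `cached_head()` returns,
    // so reading values from the snapshot cannot deadlock with `fork_choice`.
    let head = canonical_head.cached_head();
    let _fork_choice = canonical_head.fork_choice.write().unwrap();
    println!("head block root: {}", head.head_block_root);
}
```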
## Breaking Changes
### The `state` (root) field in the `finalized_checkpoint` SSE event
Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:
1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.
Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).
I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
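To make the difference concrete, here's a toy sketch of the two options (all types and lookups below are illustrative stand-ins, not the Lighthouse API):
```
use std::collections::HashMap;

// Illustrative stand-ins only.
type Root = u64;
type Slot = u64;

struct Block {
    state_root: Root,
}

fn start_slot(epoch: u64, slots_per_epoch: u64) -> Slot {
    epoch * slots_per_epoch
}

fn main() {
    let blocks: HashMap<Root, Block> = HashMap::from([(0xbeef, Block { state_root: 100 })]);
    let state_roots_by_slot: HashMap<Slot, Root> = HashMap::from([(32, 101)]);

    let finalized_root: Root = 0xbeef;
    let finalized_epoch = 1;

    // Option (1): the state root of the finalized block itself (what Teku, and
    // now Lighthouse, report).
    let option_1 = blocks[&finalized_root].state_root;

    // Option (2): the state root at `start_slot(n)`, i.e. option (1) advanced
    // through any skip slots (what Lighthouse previously reported).
    let option_2 = state_roots_by_slot[&start_slot(finalized_epoch, 32)];

    println!("option 1: {option_1}, option 2: {option_2}");
}
```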
## Notes for Reviewers
I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.
I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".
I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot when we saved it.
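Roughly, the idea looks like this (constructor and field names below are illustrative, not the real `ForkChoice` API):
```
// Illustrative only; the real `ForkChoice` constructor and `get_head` have
// different signatures.
struct CachedHead {
    head_block_root: u64,
}

struct ForkChoice {
    // Because `get_head` runs during construction, this never needs to be an
    // `Option<CachedHead>`.
    cached_head: CachedHead,
}

impl ForkChoice {
    fn new(current_slot: u64) -> Self {
        // Run `get_head` once at construction so a cached result always exists.
        let head_block_root = Self::get_head(current_slot);
        Self {
            cached_head: CachedHead { head_block_root },
        }
    }

    fn get_head(_current_slot: u64) -> u64 {
        // Placeholder for the real fork choice algorithm.
        0
    }
}

fn main() {
    let fc = ForkChoice::new(100);
    println!("cached head: {:#x}", fc.cached_head.head_block_root);
}
```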
I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.
Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` expected the `0x00..00` alias and patched it here: 3f26ac3e2.
You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be async `tokio` tests.
- Adding `.await` to fork choice, block processing and block production functions (see the sketch after this list).
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In the `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test, not so efficient but hopefully insignificant.
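A hedged sketch of what that test pattern looks like (the `Chain` type and its methods below are trivial stand-ins for the real `BeaconChain` API):
```
use std::sync::Arc;

// Stand-ins for the real `BeaconChain` API, just to show the async/`Arc` shape
// the tests now use.
struct SignedBeaconBlock;
struct Chain;

impl Chain {
    async fn process_block(&self, _block: Arc<SignedBeaconBlock>) -> Result<(), ()> {
        Ok(())
    }
    async fn recompute_head(&self) -> Result<(), ()> {
        Ok(())
    }
}

// Tests are now async `tokio` tests, and calls into the chain are awaited.
#[tokio::test]
async fn processes_a_block() {
    let chain = Chain;
    let block = Arc::new(SignedBeaconBlock);

    chain.process_block(block).await.unwrap();
    chain.recompute_head().await.unwrap();
}
```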
I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
Co-authored-by: Mac L <mjladson@pm.me>
```
//! Contains the handler for the `GET validator/duties/attester/{epoch}` endpoint.

use crate::state_id::StateId;
use beacon_chain::{
    BeaconChain, BeaconChainError, BeaconChainTypes, MAXIMUM_GOSSIP_CLOCK_DISPARITY,
};
use eth2::types::{self as api_types};
use slot_clock::SlotClock;
use state_processing::state_advance::partial_state_advance;
use types::{
    AttestationDuty, BeaconState, ChainSpec, CloneConfig, Epoch, EthSpec, Hash256, RelativeEpoch,
};

/// The struct that is returned to the requesting HTTP client.
type ApiDuties = api_types::DutiesResponse<Vec<api_types::AttesterData>>;

/// Handles a request from the HTTP API for attester duties.
pub fn attester_duties<T: BeaconChainTypes>(
    request_epoch: Epoch,
    request_indices: &[u64],
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    let current_epoch = chain
        .epoch()
        .map_err(warp_utils::reject::beacon_chain_error)?;

    // Determine what the current epoch would be if we fast-forward our system clock by
    // `MAXIMUM_GOSSIP_CLOCK_DISPARITY`.
    //
    // Most of the time, `tolerant_current_epoch` will be equal to `current_epoch`. However, during
    // the first `MAXIMUM_GOSSIP_CLOCK_DISPARITY` duration of the epoch `tolerant_current_epoch`
    // will equal `current_epoch + 1`
    let tolerant_current_epoch = chain
        .slot_clock
        .now_with_future_tolerance(MAXIMUM_GOSSIP_CLOCK_DISPARITY)
        .ok_or_else(|| warp_utils::reject::custom_server_error("unable to read slot clock".into()))?
        .epoch(T::EthSpec::slots_per_epoch());

    if request_epoch == current_epoch
        || request_epoch == tolerant_current_epoch
        || request_epoch == current_epoch + 1
        || request_epoch == tolerant_current_epoch + 1
    {
        cached_attestation_duties(request_epoch, request_indices, chain)
    } else if request_epoch > current_epoch + 1 {
        Err(warp_utils::reject::custom_bad_request(format!(
            "request epoch {} is more than one epoch past the current epoch {}",
            request_epoch, current_epoch
        )))
    } else {
        // request_epoch < current_epoch
        compute_historic_attester_duties(request_epoch, request_indices, chain)
    }
}

fn cached_attestation_duties<T: BeaconChainTypes>(
    request_epoch: Epoch,
    request_indices: &[u64],
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    let head_block_root = chain.canonical_head.cached_head().head_block_root();

    let (duties, dependent_root, _execution_status) = chain
        .validator_attestation_duties(request_indices, request_epoch, head_block_root)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    convert_to_api_response(duties, request_indices, dependent_root, chain)
}

/// Compute some attester duties by reading a `BeaconState` from disk, completely ignoring the
/// shuffling cache.
fn compute_historic_attester_duties<T: BeaconChainTypes>(
    request_epoch: Epoch,
    request_indices: &[u64],
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    // If the head is quite old then it might still be relevant for a historical request.
    //
    // Use the `with_head` function to read & clone in a single call to avoid race conditions.
    let state_opt = chain
        .with_head(|head| {
            if head.beacon_state.current_epoch() <= request_epoch {
                Ok(Some((
                    head.beacon_state_root(),
                    head.beacon_state
                        .clone_with(CloneConfig::committee_caches_only()),
                )))
            } else {
                Ok(None)
            }
        })
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let mut state = if let Some((state_root, mut state)) = state_opt {
        // If we've loaded the head state it might be from a previous epoch, ensure it's in a
        // suitable epoch.
        ensure_state_knows_attester_duties_for_epoch(
            &mut state,
            state_root,
            request_epoch,
            &chain.spec,
        )?;
        state
    } else {
        StateId::slot(request_epoch.start_slot(T::EthSpec::slots_per_epoch())).state(chain)?
    };

    // Sanity-check the state lookup.
    if !(state.current_epoch() == request_epoch || state.current_epoch() + 1 == request_epoch) {
        return Err(warp_utils::reject::custom_server_error(format!(
            "state epoch {} not suitable for request epoch {}",
            state.current_epoch(),
            request_epoch
        )));
    }

    let relative_epoch =
        RelativeEpoch::from_epoch(state.current_epoch(), request_epoch).map_err(|e| {
            warp_utils::reject::custom_server_error(format!("invalid epoch for state: {:?}", e))
        })?;

    state
        .build_committee_cache(relative_epoch, &chain.spec)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let dependent_root = state
        // The only block which decides its own shuffling is the genesis block.
        .attester_shuffling_decision_root(chain.genesis_block_root, relative_epoch)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let duties = request_indices
        .iter()
        .map(|&validator_index| {
            state
                .get_attestation_duties(validator_index as usize, relative_epoch)
                .map_err(BeaconChainError::from)
        })
        .collect::<Result<_, _>>()
        .map_err(warp_utils::reject::beacon_chain_error)?;

    convert_to_api_response(duties, request_indices, dependent_root, chain)
}

fn ensure_state_knows_attester_duties_for_epoch<E: EthSpec>(
    state: &mut BeaconState<E>,
    state_root: Hash256,
    target_epoch: Epoch,
    spec: &ChainSpec,
) -> Result<(), warp::reject::Rejection> {
    // Protect against an inconsistent slot clock.
    if state.current_epoch() > target_epoch {
        return Err(warp_utils::reject::custom_server_error(format!(
            "state epoch {} is later than target epoch {}",
            state.current_epoch(),
            target_epoch
        )));
    } else if state.current_epoch() + 1 < target_epoch {
        // Since there's a one-epoch look-ahead on attester duties, it suffices to only advance to
        // the prior epoch.
        let target_slot = target_epoch
            .saturating_sub(1_u64)
            .start_slot(E::slots_per_epoch());

        // A "partial" state advance is adequate since attester duties don't rely on state roots.
        partial_state_advance(state, Some(state_root), target_slot, spec)
            .map_err(BeaconChainError::from)
            .map_err(warp_utils::reject::beacon_chain_error)?;
    }

    Ok(())
}

/// Convert the internal representation of attester duties into the format returned to the HTTP
/// client.
fn convert_to_api_response<T: BeaconChainTypes>(
    duties: Vec<Option<AttestationDuty>>,
    indices: &[u64],
    dependent_root: Hash256,
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    // Protect against an inconsistent slot clock.
    if duties.len() != indices.len() {
        return Err(warp_utils::reject::custom_server_error(format!(
            "duties length {} does not match indices length {}",
            duties.len(),
            indices.len()
        )));
    }

    let usize_indices = indices.iter().map(|i| *i as usize).collect::<Vec<_>>();
    let index_to_pubkey_map = chain
        .validator_pubkey_bytes_many(&usize_indices)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let data = duties
        .into_iter()
        .zip(indices)
        .filter_map(|(duty_opt, &validator_index)| {
            let duty = duty_opt?;
            Some(api_types::AttesterData {
                pubkey: *index_to_pubkey_map.get(&(validator_index as usize))?,
                validator_index,
                committees_at_slot: duty.committees_at_slot,
                committee_index: duty.index,
                committee_length: duty.committee_len as u64,
                validator_committee_index: duty.committee_position as u64,
                slot: duty.slot,
            })
        })
        .collect::<Vec<_>>();

    Ok(api_types::DutiesResponse {
        dependent_root,
        data,
    })
}
```