Use async code when interacting with EL (#3244)
## Overview
This rather extensive PR achieves two primary goals:
1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.
Additionally, it achieves:
- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
- I had to do this to deal with sending blocks into spawned tasks.
- Previously we were cloning the beacon block at least twice during each block processing; these clones are now either removed or turned into cheaper `Arc` clones (see the sketch after this list).
- We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
- Avoids cloning *all the blocks* in *every chain segment* during sync.
- It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
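To make the concurrency and `Arc` points concrete, here's a minimal sketch of the pattern. This is not the PR's actual code; `pack_attestations` and `get_execution_payload` are hypothetical stand-ins for the two halves of block production:
```rust
use std::sync::Arc;

// Illustrative stand-in for the real `SignedBeaconBlock`.
struct SignedBeaconBlock;

// Hypothetical stand-ins for the two halves of block production.
async fn pack_attestations() -> Vec<u8> {
    vec![]
}

async fn get_execution_payload() -> Vec<u8> {
    vec![]
}

#[tokio::main]
async fn main() {
    // Block production: run "block packing" and execution payload retrieval
    // concurrently rather than one after the other.
    let (_attestations, _payload) =
        tokio::join!(pack_attestations(), get_execution_payload());

    // Block processing: the block is never mutated, so wrap it in an `Arc`
    // and hand cheap pointer clones to spawned tasks instead of deep-cloning.
    let block = Arc::new(SignedBeaconBlock);
    let block_for_task = Arc::clone(&block); // pointer copy, not a deep clone
    tokio::spawn(async move {
        // ... verify the execution payload against `block_for_task` ...
        drop(block_for_task);
    })
    .await
    .unwrap();
}
```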
For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273
## Changes to `canonical_head` and `fork_choice`
Previously, the `BeaconChain` had two separate fields:
```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```
Now, we have grouped these values under a single struct:
```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```
Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
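As a minimal sketch of that pattern (illustrative types; `cached_head()` mirrors the accessor mentioned in the test notes below, and the standard library's `RwLock` stands in for whatever lock the real code uses):
```rust
use std::sync::{Arc, RwLock};

// Illustrative stand-ins for the real types.
struct Snapshot; // head block, state, roots, ...
struct BeaconForkChoice;

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>,
}

impl CanonicalHead {
    /// Clone the `Arc` under the lock, then release the lock immediately.
    /// Callers get a consistent snapshot without ever holding `cached_head`
    /// at the same time as `fork_choice`.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}

fn main() {
    let head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot)),
        fork_choice: RwLock::new(BeaconForkChoice),
    };
    let _snapshot = head.cached_head(); // no lock held after this returns
}
```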
## Breaking Changes
### The `state` (root) field in the `finalized_checkpoint` SSE event
Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might use in the `finalized_checkpoint` SSE event:
1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.
Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).
I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
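For illustration, a hedged sketch of the two candidates (all types and helpers here are hypothetical, not Lighthouse's API):
```rust
// Hypothetical types and helpers, for illustration only.
#[derive(Clone, Copy)]
struct Hash256(u64);

struct Checkpoint {
    root: Hash256, // root of the finalized block
    epoch: u64,
}

// (1) The state root recorded in the finalized block itself.
fn state_root_of_block(_block_root: Hash256) -> Hash256 {
    Hash256(1)
}

// (2) That state "skipped forward" through any skip slots to start_slot(n).
fn state_root_at_start_slot(_epoch: u64) -> Hash256 {
    Hash256(2)
}

fn finalized_event_state_root(finalized: &Checkpoint) -> Hash256 {
    // The old behaviour was (2); Lighthouse now emits (1), matching Teku.
    let _old = state_root_at_start_slot(finalized.epoch);
    state_root_of_block(finalized.root)
}

fn main() {
    let cp = Checkpoint { root: Hash256(0), epoch: 5 };
    let _root = finalized_event_state_root(&cp);
}
```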
## Notes for Reviewers
I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.
I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way, and I don't think we were making any promises about SSE event ordering, so it's not "breaking".
I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading fork choice from the store, where a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot when we saved it.
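A small sketch of why running it once in the constructor removes the `Option` (names and types are illustrative):
```rust
// Illustrative types only.
#[derive(Clone, Copy)]
struct Slot(u64);
struct CachedHead;

fn compute_head(_current_slot: Slot) -> CachedHead {
    CachedHead
}

struct ForkChoice {
    // Plain `CachedHead`, not `Option<CachedHead>`: the constructor always
    // runs the get_head logic once, so the cache is never empty.
    cached_head: CachedHead,
}

impl ForkChoice {
    fn from_anchor(current_slot: Slot) -> Self {
        Self {
            cached_head: compute_head(current_slot),
        }
    }
}

fn main() {
    let _fc = ForkChoice::from_anchor(Slot(0));
}
```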
I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.
Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything, I think it'll be better, since the genesis alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the `network` expected the 0x00..00 alias and patched it here: 3f26ac3e2.
You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated by an async function. We just generate it in each test; not as efficient, but hopefully insignificant.
I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
Co-authored-by: Mac L <mjladson@pm.me>
```diff
@@ -1,4 +1,4 @@
-use crate::beacon_chain::{BEACON_CHAIN_DB_KEY, ETH1_CACHE_DB_KEY, OP_POOL_DB_KEY};
+use crate::beacon_chain::{CanonicalHead, BEACON_CHAIN_DB_KEY, ETH1_CACHE_DB_KEY, OP_POOL_DB_KEY};
 use crate::eth1_chain::{CachingEth1Backend, SszEth1};
 use crate::fork_choice_signal::ForkChoiceSignalTx;
 use crate::fork_revert::{reset_fork_choice_to_finalization, revert_to_fork_boundary};
@@ -245,6 +245,7 @@ where
             let fork_choice =
                 BeaconChain::<Witness<TSlotClock, TEth1Backend, _, _, _>>::load_fork_choice(
                     store.clone(),
+                    &self.spec,
                 )
                 .map_err(|e| format!("Unable to load fork choice from disk: {:?}", e))?
                 .ok_or("Fork choice not found in store")?;
@@ -337,7 +338,7 @@ where
         Ok((
             BeaconSnapshot {
                 beacon_block_root,
-                beacon_block,
+                beacon_block: Arc::new(beacon_block),
                 beacon_state,
             },
             self,
@@ -352,12 +353,15 @@ where
         self = updated_builder;

         let fc_store = BeaconForkChoiceStore::get_forkchoice_store(store, &genesis);
+        let current_slot = None;
+
         let fork_choice = ForkChoice::from_anchor(
             fc_store,
             genesis.beacon_block_root,
             &genesis.beacon_block,
             &genesis.beacon_state,
+            current_slot,
             &self.spec,
         )
         .map_err(|e| format!("Unable to initialize ForkChoice: {:?}", e))?;
@@ -455,17 +459,20 @@ where

         let snapshot = BeaconSnapshot {
             beacon_block_root: weak_subj_block_root,
-            beacon_block: weak_subj_block,
+            beacon_block: Arc::new(weak_subj_block),
             beacon_state: weak_subj_state,
         };

         let fc_store = BeaconForkChoiceStore::get_forkchoice_store(store, &snapshot);

+        let current_slot = Some(snapshot.beacon_block.slot());
         let fork_choice = ForkChoice::from_anchor(
             fc_store,
             snapshot.beacon_block_root,
             &snapshot.beacon_block,
             &snapshot.beacon_state,
+            current_slot,
             &self.spec,
         )
         .map_err(|e| format!("Unable to initialize ForkChoice: {:?}", e))?;
@@ -638,17 +645,18 @@ where
                 head_block_root,
                 &head_state,
                 store.clone(),
+                Some(current_slot),
                 &self.spec,
             )?;
         }

-        let mut canonical_head = BeaconSnapshot {
+        let mut head_snapshot = BeaconSnapshot {
             beacon_block_root: head_block_root,
-            beacon_block: head_block,
+            beacon_block: Arc::new(head_block),
             beacon_state: head_state,
         };

-        canonical_head
+        head_snapshot
             .beacon_state
             .build_all_caches(&self.spec)
             .map_err(|e| format!("Failed to build state caches: {:?}", e))?;
@@ -658,25 +666,17 @@ where
         //
         // This is a sanity check to detect database corruption.
         let fc_finalized = fork_choice.finalized_checkpoint();
-        let head_finalized = canonical_head.beacon_state.finalized_checkpoint();
-        if fc_finalized != head_finalized {
-            let is_genesis = head_finalized.root.is_zero()
-                && head_finalized.epoch == fc_finalized.epoch
-                && fc_finalized.root == genesis_block_root;
-            let is_wss = store.get_anchor_slot().map_or(false, |anchor_slot| {
-                fc_finalized.epoch == anchor_slot.epoch(TEthSpec::slots_per_epoch())
-            });
-            if !is_genesis && !is_wss {
-                return Err(format!(
-                    "Database corrupt: fork choice is finalized at {:?} whilst head is finalized at \
-                     {:?}",
-                    fc_finalized, head_finalized
-                ));
-            }
+        let head_finalized = head_snapshot.beacon_state.finalized_checkpoint();
+        if fc_finalized.epoch < head_finalized.epoch {
+            return Err(format!(
+                "Database corrupt: fork choice is finalized at {:?} whilst head is finalized at \
+                 {:?}",
+                fc_finalized, head_finalized
+            ));
         }

         let validator_pubkey_cache = self.validator_pubkey_cache.map(Ok).unwrap_or_else(|| {
-            ValidatorPubkeyCache::new(&canonical_head.beacon_state, store.clone())
+            ValidatorPubkeyCache::new(&head_snapshot.beacon_state, store.clone())
                 .map_err(|e| format!("Unable to init validator pubkey cache: {:?}", e))
         })?;
@@ -691,7 +691,7 @@ where
         if let Some(slot) = slot_clock.now() {
             validator_monitor.process_valid_state(
                 slot.epoch(TEthSpec::slots_per_epoch()),
-                &canonical_head.beacon_state,
+                &head_snapshot.beacon_state,
             );
         }
@@ -725,10 +725,18 @@ where
             .do_atomically(self.pending_io_batch)
             .map_err(|e| format!("Error writing chain & metadata to disk: {:?}", e))?;

+        let genesis_validators_root = head_snapshot.beacon_state.genesis_validators_root();
+        let genesis_time = head_snapshot.beacon_state.genesis_time();
+        let head_for_snapshot_cache = head_snapshot.clone();
+        let canonical_head = CanonicalHead::new(fork_choice, Arc::new(head_snapshot));
+
         let beacon_chain = BeaconChain {
             spec: self.spec,
             config: self.chain_config,
             store,
             task_executor: self
                 .task_executor
                 .ok_or("Cannot build without task executor")?,
             store_migrator,
             slot_clock,
             op_pool: self.op_pool.ok_or("Cannot build without op pool")?,
@@ -758,18 +766,18 @@ where
             observed_attester_slashings: <_>::default(),
             eth1_chain: self.eth1_chain,
             execution_layer: self.execution_layer,
-            genesis_validators_root: canonical_head.beacon_state.genesis_validators_root(),
-            canonical_head: TimeoutRwLock::new(canonical_head.clone()),
+            genesis_validators_root,
+            genesis_time,
+            canonical_head,
             genesis_block_root,
             genesis_state_root,
-            fork_choice: RwLock::new(fork_choice),
             fork_choice_signal_tx,
             fork_choice_signal_rx,
             event_handler: self.event_handler,
             head_tracker,
             snapshot_cache: TimeoutRwLock::new(SnapshotCache::new(
                 DEFAULT_SNAPSHOT_CACHE_SIZE,
-                canonical_head,
+                head_for_snapshot_cache,
             )),
             shuffling_cache: TimeoutRwLock::new(ShufflingCache::new()),
             beacon_proposer_cache: <_>::default(),
@@ -787,9 +795,7 @@ where
             validator_monitor: RwLock::new(validator_monitor),
         };

-        let head = beacon_chain
-            .head()
-            .map_err(|e| format!("Failed to get head: {:?}", e))?;
+        let head = beacon_chain.head_snapshot();

         // Prime the attester cache with the head state.
         beacon_chain
@@ -992,10 +998,10 @@ mod test {
             .build()
             .expect("should build");

-        let head = chain.head().expect("should get head");
+        let head = chain.head_snapshot();

-        let state = head.beacon_state;
-        let block = head.beacon_block;
+        let state = &head.beacon_state;
+        let block = &head.beacon_block;

         assert_eq!(state.slot(), Slot::new(0), "should start from genesis");
         assert_eq!(
@@ -1014,7 +1020,7 @@ mod test {
                 .get_blinded_block(&Hash256::zero())
                 .expect("should read db")
                 .expect("should find genesis block"),
-            block.clone().into(),
+            block.clone_as_blinded(),
             "should store genesis block under zero hash alias"
         );
         assert_eq!(
```