Use async code when interacting with EL (#3244)

## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least twice during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - Avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
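
As a rough illustration of the concurrency items above, here is a minimal sketch of driving payload retrieval and block packing at the same time. The type and function names are hypothetical stand-ins, not the real Lighthouse API:

```
// Hypothetical stand-ins; this only shows the shape of the concurrency.
struct Payload;
struct Packing;

async fn get_execution_payload() -> Result<Payload, String> {
    // In the real code, a round-trip to the execution layer.
    Ok(Payload)
}

async fn pack_attestations_etc() -> Result<Packing, String> {
    // In the real code, gathering attestations, slashings, exits, etc.
    Ok(Packing)
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // Drive both futures concurrently; block production waits for whichever
    // finishes last instead of running the two steps back-to-back.
    let (_payload, _packing) =
        tokio::try_join!(get_execution_payload(), pack_attestations_etc())?;
    Ok(())
}
```

The same shape applies to the other concurrent pairs (forkchoice updates vs. cache pruning, per-block processing vs. payload verification).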

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
  cached_head: RwLock<Arc<Snapshot>>,
  fork_choice: RwLock<BeaconForkChoice>
} 
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
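
To illustrate, here is a heavily simplified sketch (hypothetical field and method names) of how the `Arc` lets callers take a cheap clone of the snapshot and drop the lock immediately:

```
use std::sync::Arc;
use parking_lot::RwLock;

// Hypothetical, heavily simplified snapshot type.
struct Snapshot {
    head_block_root: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
}

impl CanonicalHead {
    /// Hold the lock only long enough to clone the `Arc` (a cheap refcount
    /// bump), then read from the snapshot with no lock held.
    fn cached_head(&self) -> Arc<Snapshot> {
        let guard = self.cached_head.read();
        Arc::clone(&guard)
    }
}

fn head_block_root(canonical_head: &CanonicalHead) -> u64 {
    // No lock is held here, so this can't end up holding the `cached_head`
    // and `fork_choice` locks at the same time.
    canonical_head.cached_head().head_block_root
}

fn main() {
    let canonical_head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_block_root: 42 })),
    };
    assert_eq!(head_block_root(&canonical_head), 42);
}
```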

## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
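
To make the two options concrete, here is a hedged sketch using simplified, hypothetical types and accessors rather than the real Lighthouse API:

```
// Hypothetical, heavily simplified types and accessors.
type Hash256 = [u8; 32];
type Slot = u64;
type Epoch = u64;

const SLOTS_PER_EPOCH: u64 = 32;

struct Block {
    state_root: Hash256,
}

trait ChainAccess {
    fn block(&self, root: Hash256) -> Block;
    fn state_root_at_slot(&self, slot: Slot) -> Hash256;
}

fn start_slot(epoch: Epoch) -> Slot {
    epoch * SLOTS_PER_EPOCH
}

/// Option (1), what Lighthouse now emits: the state root recorded in the
/// finalized block itself.
fn finalized_state_root_v1<C: ChainAccess>(chain: &C, finalized_root: Hash256) -> Hash256 {
    chain.block(finalized_root).state_root
}

/// Option (2), the old behaviour: the state from (1), advanced through any
/// skip slots to `start_slot(n)`.
fn finalized_state_root_v2<C: ChainAccess>(chain: &C, finalized_epoch: Epoch) -> Hash256 {
    chain.state_root_at_slot(start_slot(finalized_epoch))
}
```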

## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Renaming it ensured that every previous use of fork choice broke at compile time, and I also find the new name more descriptive: it describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way, and since I don't think we were making any promises about SSE event ordering, it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached value doesn't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot at which we saved it.
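
Roughly, the idea is the sketch below (hypothetical, simplified names): because the constructor runs `get_head` once, the cached result can be stored as a plain value rather than an `Option`:

```
// Hypothetical, simplified names; not the real fork choice types.
struct CachedHead {
    head_root: u64,
}

struct ForkChoice {
    cached_head: CachedHead, // not Option<CachedHead>
}

impl ForkChoice {
    /// `current_slot` tells the constructor *when* it's being run. When
    /// loading from the store without a slot clock handy, the slot saved in
    /// the store is used instead.
    fn new(current_slot: u64) -> Self {
        let cached_head = Self::get_head(current_slot);
        Self { cached_head }
    }

    fn get_head(_current_slot: u64) -> CachedHead {
        // Real fork choice would walk the block tree here.
        CachedHead { head_root: 0 }
    }
}

fn main() {
    let fork_choice = ForkChoice::new(0);
    // `get_head` has run at least once, so no `Option` handling is needed.
    assert_eq!(fork_choice.cached_head.head_root, 0);
}
```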

I added a `genesis_time: u64` field to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything, I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` expected the `0x00..00` alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `async` tests (e.g., `#[tokio::test]`).
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; not as efficient, but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
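
For context, the failure mode is roughly the one sketched below: tokio panics if you `block_on` from a thread that's already inside the runtime, which can happen when `rayon` runs part of a parallel iterator on the calling (runtime) thread. This is an illustrative sketch, not the actual test code:

```
use tokio::runtime::Handle;

#[tokio::main]
async fn main() {
    // `Handle::block_on` panics when called from within an async context,
    // because tokio refuses to block one of its own worker threads.
    let result = std::panic::catch_unwind(|| Handle::current().block_on(async { 42 }));
    assert!(result.is_err(), "block_on inside the runtime should panic");
}
```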

Co-authored-by: Mac L <mjladson@pm.me>
Paul Hauner
2022-07-03 05:36:50 +00:00
parent e5212f1320
commit be4e261e74
106 changed files with 6515 additions and 4538 deletions

@@ -684,26 +684,20 @@ where
if let Some(execution_layer) = beacon_chain.execution_layer.as_ref() {
// Only send a head update *after* genesis.
if let Ok(current_slot) = beacon_chain.slot() {
let head = beacon_chain
.head_info()
.map_err(|e| format!("Unable to read beacon chain head: {:?}", e))?;
// Issue the head to the execution engine on startup. This ensures it can start
// syncing.
if head
.execution_payload_block_hash
.map_or(false, |h| h != ExecutionBlockHash::zero())
let params = beacon_chain
.canonical_head
.cached_head()
.forkchoice_update_parameters();
if params
.head_hash
.map_or(false, |hash| hash != ExecutionBlockHash::zero())
{
// Spawn a new task using the "async" fork choice update method, rather than
// using the "blocking" method.
//
// Using the blocking method may cause a panic if this code is run inside an
// async context.
// Spawn a new task to update the EE without waiting for it to complete.
let inner_chain = beacon_chain.clone();
runtime_context.executor.spawn(
async move {
let result = inner_chain
.update_execution_engine_forkchoice_async(current_slot)
.update_execution_engine_forkchoice(current_slot, params)
.await;
// No need to exit early if setting the head fails. It will be set again if/when the
@@ -811,8 +805,16 @@ where
self.db_path = Some(hot_path.into());
self.freezer_db_path = Some(cold_path.into());
let inner_spec = spec.clone();
let schema_upgrade = |db, from, to| {
migrate_schema::<Witness<TSlotClock, TEth1Backend, _, _, _>>(db, datadir, from, to, log)
migrate_schema::<Witness<TSlotClock, TEth1Backend, _, _, _>>(
db,
datadir,
from,
to,
log,
&inner_spec,
)
};
let store = HotColdDB::open(

@@ -1,5 +1,5 @@
use crate::metrics;
use beacon_chain::{BeaconChain, BeaconChainTypes, HeadSafetyStatus};
use beacon_chain::{BeaconChain, BeaconChainTypes, ExecutionStatus};
use lighthouse_network::{types::SyncState, NetworkGlobals};
use parking_lot::Mutex;
use slog::{crit, debug, error, info, warn, Logger};
@@ -100,15 +100,10 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
current_sync_state = sync_state;
}
let head_info = match beacon_chain.head_info() {
Ok(head_info) => head_info,
Err(e) => {
error!(log, "Failed to get beacon chain head info"; "error" => format!("{:?}", e));
break;
}
};
let head_slot = head_info.slot;
let cached_head = beacon_chain.canonical_head.cached_head();
let head_slot = cached_head.head_slot();
let head_root = cached_head.head_block_root();
let finalized_checkpoint = cached_head.finalized_checkpoint();
metrics::set_gauge(&metrics::NOTIFIER_HEAD_SLOT, head_slot.as_u64() as i64);
@@ -125,9 +120,6 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
};
let current_epoch = current_slot.epoch(T::EthSpec::slots_per_epoch());
let finalized_epoch = head_info.finalized_checkpoint.epoch;
let finalized_root = head_info.finalized_checkpoint.root;
let head_root = head_info.block_root;
// The default is for regular sync but this gets modified if backfill sync is in
// progress.
@@ -177,8 +169,8 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
log,
"Slot timer";
"peers" => peer_count_pretty(connected_peer_count),
"finalized_root" => format!("{}", finalized_root),
"finalized_epoch" => finalized_epoch,
"finalized_root" => format!("{}", finalized_checkpoint.root),
"finalized_epoch" => finalized_checkpoint.epoch,
"head_block" => format!("{}", head_root),
"head_slot" => head_slot,
"current_slot" => current_slot,
@@ -264,35 +256,29 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
head_root.to_string()
};
let block_hash = match beacon_chain.head_safety_status() {
Ok(HeadSafetyStatus::Safe(hash_opt)) => hash_opt
.map(|hash| format!("{} (verified)", hash))
.unwrap_or_else(|| "n/a".to_string()),
Ok(HeadSafetyStatus::Unsafe(block_hash)) => {
let block_hash = match beacon_chain.canonical_head.head_execution_status() {
Ok(ExecutionStatus::Irrelevant(_)) => "n/a".to_string(),
Ok(ExecutionStatus::Valid(hash)) => format!("{} (verified)", hash),
Ok(ExecutionStatus::Optimistic(hash)) => {
warn!(
log,
"Head execution payload is unverified";
"execution_block_hash" => ?block_hash,
"Head is optimistic";
"info" => "chain not fully verified, \
block and attestation production disabled until execution engine syncs",
"execution_block_hash" => ?hash,
);
format!("{} (unverified)", block_hash)
format!("{} (unverified)", hash)
}
Ok(HeadSafetyStatus::Invalid(block_hash)) => {
Ok(ExecutionStatus::Invalid(hash)) => {
crit!(
log,
"Head execution payload is invalid";
"msg" => "this scenario may be unrecoverable",
"execution_block_hash" => ?block_hash,
"execution_block_hash" => ?hash,
);
format!("{} (invalid)", block_hash)
}
Err(e) => {
error!(
log,
"Failed to read head safety status";
"error" => ?e
);
"n/a".to_string()
format!("{} (invalid)", hash)
}
Err(_) => "unknown".to_string(),
};
info!(
@@ -300,8 +286,8 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
"Synced";
"peers" => peer_count_pretty(connected_peer_count),
"exec_hash" => block_hash,
"finalized_root" => format!("{}", finalized_root),
"finalized_epoch" => finalized_epoch,
"finalized_root" => format!("{}", finalized_checkpoint.root),
"finalized_epoch" => finalized_checkpoint.epoch,
"epoch" => current_epoch,
"block" => block_info,
"slot" => current_slot,
@@ -312,8 +298,8 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
log,
"Searching for peers";
"peers" => peer_count_pretty(connected_peer_count),
"finalized_root" => format!("{}", finalized_root),
"finalized_epoch" => finalized_epoch,
"finalized_root" => format!("{}", finalized_checkpoint.root),
"finalized_epoch" => finalized_checkpoint.epoch,
"head_slot" => head_slot,
"current_slot" => current_slot,
);
@@ -332,57 +318,52 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
fn eth1_logging<T: BeaconChainTypes>(beacon_chain: &BeaconChain<T>, log: &Logger) {
let current_slot_opt = beacon_chain.slot().ok();
if let Ok(head_info) = beacon_chain.head_info() {
// Perform some logging about the eth1 chain
if let Some(eth1_chain) = beacon_chain.eth1_chain.as_ref() {
// No need to do logging if using the dummy backend.
if eth1_chain.is_dummy_backend() {
return;
}
if let Some(status) =
eth1_chain.sync_status(head_info.genesis_time, current_slot_opt, &beacon_chain.spec)
{
debug!(
log,
"Eth1 cache sync status";
"eth1_head_block" => status.head_block_number,
"latest_cached_block_number" => status.latest_cached_block_number,
"latest_cached_timestamp" => status.latest_cached_block_timestamp,
"voting_target_timestamp" => status.voting_target_timestamp,
"ready" => status.lighthouse_is_cached_and_ready
);
if !status.lighthouse_is_cached_and_ready {
let voting_target_timestamp = status.voting_target_timestamp;
let distance = status
.latest_cached_block_timestamp
.map(|latest| {
voting_target_timestamp.saturating_sub(latest)
/ beacon_chain.spec.seconds_per_eth1_block
})
.map(|distance| distance.to_string())
.unwrap_or_else(|| "initializing deposits".to_string());
warn!(
log,
"Syncing eth1 block cache";
"est_blocks_remaining" => distance,
);
}
} else {
error!(
log,
"Unable to determine eth1 sync status";
);
}
// Perform some logging about the eth1 chain
if let Some(eth1_chain) = beacon_chain.eth1_chain.as_ref() {
// No need to do logging if using the dummy backend.
if eth1_chain.is_dummy_backend() {
return;
}
if let Some(status) = eth1_chain.sync_status(
beacon_chain.genesis_time,
current_slot_opt,
&beacon_chain.spec,
) {
debug!(
log,
"Eth1 cache sync status";
"eth1_head_block" => status.head_block_number,
"latest_cached_block_number" => status.latest_cached_block_number,
"latest_cached_timestamp" => status.latest_cached_block_timestamp,
"voting_target_timestamp" => status.voting_target_timestamp,
"ready" => status.lighthouse_is_cached_and_ready
);
if !status.lighthouse_is_cached_and_ready {
let voting_target_timestamp = status.voting_target_timestamp;
let distance = status
.latest_cached_block_timestamp
.map(|latest| {
voting_target_timestamp.saturating_sub(latest)
/ beacon_chain.spec.seconds_per_eth1_block
})
.map(|distance| distance.to_string())
.unwrap_or_else(|| "initializing deposits".to_string());
warn!(
log,
"Syncing eth1 block cache";
"est_blocks_remaining" => distance,
);
}
} else {
error!(
log,
"Unable to determine eth1 sync status";
);
}
} else {
error!(
log,
"Unable to get head info";
);
}
}