Use async code when interacting with EL (#3244)

## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing (see the sketch after this list).
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least twice during each block processing; these clones have either been removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - Avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
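
To illustrate the concurrency and `Arc` points above, here's a minimal sketch of the pattern; the types and function bodies are stand-ins rather than Lighthouse's actual API:

```rust
use std::sync::Arc;

// Stand-in type; the real `SignedBeaconBlock` is much larger, which is why
// cloning it on every hand-off was worth avoiding.
#[derive(Debug)]
struct SignedBeaconBlock {
    slot: u64,
}

async fn per_block_processing(block: Arc<SignedBeaconBlock>) -> Result<(), String> {
    // Consensus-layer state transition would happen here.
    println!("processing block at slot {}", block.slot);
    Ok(())
}

async fn verify_execution_payload(block: Arc<SignedBeaconBlock>) -> Result<(), String> {
    // The call out to the execution layer (e.g. `newPayload`) would happen here.
    println!("verifying payload for slot {}", block.slot);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let block = Arc::new(SignedBeaconBlock { slot: 1 });

    // Spawning requires a `'static` future, so the spawned task gets a cheap
    // `Arc` clone of the block rather than a full copy.
    let payload_verification = tokio::spawn(verify_execution_payload(block.clone()));

    // Meanwhile, run consensus processing concurrently on the current task.
    per_block_processing(block.clone()).await?;

    // Wait for payload verification and propagate its result.
    payload_verification.await.map_err(|e| e.to_string())?
}
```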

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
  cached_head: RwLock<Arc<Snapshot>>,
  fork_choice: RwLock<BeaconForkChoice>
} 
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
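
As a rough sketch of that pattern (the names are illustrative, not the exact Lighthouse types), reading the cached head now amounts to cloning the `Arc` out of the lock, so the lock is released before the snapshot is actually used:

```rust
// Assumes the `parking_lot` crate for `RwLock`; std's `RwLock` would also work.
use parking_lot::RwLock;
use std::sync::Arc;

// Illustrative stand-ins for the real types.
struct Snapshot {
    head_block_root: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    // fork_choice: RwLock<BeaconForkChoice>, // elided
}

impl CanonicalHead {
    /// Clone the `Arc` and drop the read lock immediately. Callers can then
    /// use the snapshot for as long as they like without holding the
    /// `cached_head` lock, so they never hold it together with `fork_choice`.
    fn cached_head(&self) -> Arc<Snapshot> {
        Arc::clone(&self.cached_head.read())
    }
}

fn main() {
    let canonical_head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_block_root: 42 })),
    };
    // The read lock is taken and released inside `cached_head()`.
    let head = canonical_head.cached_head();
    println!("head block root: {}", head.head_block_root);
}
```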

## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.

## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached value doesn't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like it would be a faithful representation of the slot when we saved it.
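
A compressed, self-contained sketch of that idea, with hypothetical names standing in for the real fork choice types:

```rust
#[derive(Clone, Copy, Debug)]
struct Slot(u64);

#[derive(Clone, Copy, Debug)]
struct CachedHead {
    head_root: u64,
    slot: Slot,
}

struct ForkChoice {
    // A plain value, never `Option<CachedHead>`, because the constructor
    // computes a head before the struct is ever handed to a caller.
    cached_head: CachedHead,
}

impl ForkChoice {
    /// `current_slot` comes from the slot clock, or from the slot persisted
    /// in the fork choice store when loading from disk without a clock handy.
    fn new(current_slot: Slot) -> Self {
        Self {
            cached_head: Self::compute_head(current_slot),
        }
    }

    fn compute_head(slot: Slot) -> CachedHead {
        // Real fork choice would walk the block tree here.
        CachedHead { head_root: 0, slot }
    }

    fn cached_head(&self) -> CachedHead {
        self.cached_head
    }
}

fn main() {
    let fork_choice = ForkChoice::new(Slot(64));
    println!("{:?}", fork_choice.cached_head());
}
```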

I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything, I think it'll be better, since the genesis alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the `network` expected the 0x00..00 alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In the `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; not as efficient, but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.

Co-authored-by: Mac L <mjladson@pm.me>
Author: Paul Hauner
Date: 2022-07-03 05:36:50 +00:00
Parent: e5212f1320
Commit: be4e261e74

106 changed files with 6515 additions and 4538 deletions


```diff
@@ -56,7 +56,7 @@ fn get_harness(validator_count: usize) -> BeaconChainHarness<EphemeralHarnessTyp
 fn get_valid_unaggregated_attestation<T: BeaconChainTypes>(
     chain: &BeaconChain<T>,
 ) -> (Attestation<T::EthSpec>, usize, usize, SecretKey, SubnetId) {
-    let head = chain.head().expect("should get head");
+    let head = chain.head_snapshot();
     let current_slot = chain.slot().expect("should get slot");
     let mut valid_attestation = chain
@@ -106,7 +106,8 @@ fn get_valid_aggregated_attestation<T: BeaconChainTypes>(
     chain: &BeaconChain<T>,
     aggregate: Attestation<T::EthSpec>,
 ) -> (SignedAggregateAndProof<T::EthSpec>, usize, SecretKey) {
-    let state = &chain.head().expect("should get head").beacon_state;
+    let head = chain.head_snapshot();
+    let state = &head.beacon_state;
     let current_slot = chain.slot().expect("should get slot");
     let committee = state
@@ -155,7 +156,8 @@ fn get_non_aggregator<T: BeaconChainTypes>(
     chain: &BeaconChain<T>,
     aggregate: &Attestation<T::EthSpec>,
 ) -> (usize, SecretKey) {
-    let state = &chain.head().expect("should get head").beacon_state;
+    let head = chain.head_snapshot();
+    let state = &head.beacon_state;
     let current_slot = chain.slot().expect("should get slot");
     let committee = state
@@ -213,15 +215,17 @@ struct GossipTester {
 }
 impl GossipTester {
-    pub fn new() -> Self {
+    pub async fn new() -> Self {
         let harness = get_harness(VALIDATOR_COUNT);
         // Extend the chain out a few epochs so we have some chain depth to play with.
-        harness.extend_chain(
-            MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
-            BlockStrategy::OnCanonicalHead,
-            AttestationStrategy::AllValidators,
-        );
+        harness
+            .extend_chain(
+                MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
+                BlockStrategy::OnCanonicalHead,
+                AttestationStrategy::AllValidators,
+            )
+            .await;
         // Advance into a slot where there have not been blocks or attestations produced.
         harness.advance_slot();
@@ -395,9 +399,10 @@ impl GossipTester {
     }
 }
 /// Tests verification of `SignedAggregateAndProof` from the gossip network.
-#[test]
-fn aggregated_gossip_verification() {
+#[tokio::test]
+async fn aggregated_gossip_verification() {
     GossipTester::new()
+        .await
         /*
          * The following two tests ensure:
          *
@@ -511,8 +516,7 @@ fn aggregated_gossip_verification() {
             let committee_len = tester
                 .harness
                 .chain
-                .head()
-                .unwrap()
+                .head_snapshot()
                 .beacon_state
                 .get_beacon_committee(tester.slot(), a.message.aggregate.data.index)
                 .expect("should get committees")
@@ -612,7 +616,7 @@ fn aggregated_gossip_verification() {
                 tester.valid_aggregate.message.aggregate.clone(),
                 None,
                 &sk,
-                &chain.head_info().unwrap().fork,
+                &chain.canonical_head.cached_head().head_fork(),
                 chain.genesis_validators_root,
                 &chain.spec,
             )
@@ -669,9 +673,10 @@ fn aggregated_gossip_verification() {
 }
 /// Tests the verification conditions for an unaggregated attestation on the gossip network.
-#[test]
-fn unaggregated_gossip_verification() {
+#[tokio::test]
+async fn unaggregated_gossip_verification() {
     GossipTester::new()
+        .await
         /*
          * The following test ensures:
          *
@@ -684,8 +689,7 @@ fn unaggregated_gossip_verification() {
             a.data.index = tester
                 .harness
                 .chain
-                .head()
-                .unwrap()
+                .head_snapshot()
                 .beacon_state
                 .get_committee_count_at_slot(a.data.slot)
                 .unwrap()
@@ -924,16 +928,18 @@ fn unaggregated_gossip_verification() {
 /// Ensures that an attestation that skips epochs can still be processed.
 ///
 /// This also checks that we can do a state lookup if we don't get a hit from the shuffling cache.
-#[test]
-fn attestation_that_skips_epochs() {
+#[tokio::test]
+async fn attestation_that_skips_epochs() {
     let harness = get_harness(VALIDATOR_COUNT);
     // Extend the chain out a few epochs so we have some chain depth to play with.
-    harness.extend_chain(
-        MainnetEthSpec::slots_per_epoch() as usize * 3 + 1,
-        BlockStrategy::OnCanonicalHead,
-        AttestationStrategy::SomeValidators(vec![]),
-    );
+    harness
+        .extend_chain(
+            MainnetEthSpec::slots_per_epoch() as usize * 3 + 1,
+            BlockStrategy::OnCanonicalHead,
+            AttestationStrategy::SomeValidators(vec![]),
+        )
+        .await;
     let current_slot = harness.chain.slot().expect("should get slot");
     let current_epoch = harness.chain.epoch().expect("should get epoch");
@@ -992,16 +998,18 @@ fn attestation_that_skips_epochs() {
         .expect("should gossip verify attestation that skips slots");
 }
-#[test]
-fn attestation_to_finalized_block() {
+#[tokio::test]
+async fn attestation_to_finalized_block() {
     let harness = get_harness(VALIDATOR_COUNT);
     // Extend the chain out a few epochs so we have some chain depth to play with.
-    harness.extend_chain(
-        MainnetEthSpec::slots_per_epoch() as usize * 4 + 1,
-        BlockStrategy::OnCanonicalHead,
-        AttestationStrategy::AllValidators,
-    );
+    harness
+        .extend_chain(
+            MainnetEthSpec::slots_per_epoch() as usize * 4 + 1,
+            BlockStrategy::OnCanonicalHead,
+            AttestationStrategy::AllValidators,
+        )
+        .await;
     let finalized_checkpoint = harness
         .chain
@@ -1067,16 +1075,18 @@ fn attestation_to_finalized_block() {
         .contains(earlier_block_root));
 }
-#[test]
-fn verify_aggregate_for_gossip_doppelganger_detection() {
+#[tokio::test]
+async fn verify_aggregate_for_gossip_doppelganger_detection() {
     let harness = get_harness(VALIDATOR_COUNT);
     // Extend the chain out a few epochs so we have some chain depth to play with.
-    harness.extend_chain(
-        MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
-        BlockStrategy::OnCanonicalHead,
-        AttestationStrategy::AllValidators,
-    );
+    harness
+        .extend_chain(
+            MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
+            BlockStrategy::OnCanonicalHead,
+            AttestationStrategy::AllValidators,
+        )
+        .await;
     // Advance into a slot where there have not been blocks or attestations produced.
     harness.advance_slot();
@@ -1124,16 +1134,18 @@ fn verify_aggregate_for_gossip_doppelganger_detection() {
         .expect("should check if gossip aggregator was observed"));
 }
-#[test]
-fn verify_attestation_for_gossip_doppelganger_detection() {
+#[tokio::test]
+async fn verify_attestation_for_gossip_doppelganger_detection() {
     let harness = get_harness(VALIDATOR_COUNT);
     // Extend the chain out a few epochs so we have some chain depth to play with.
-    harness.extend_chain(
-        MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
-        BlockStrategy::OnCanonicalHead,
-        AttestationStrategy::AllValidators,
-    );
+    harness
+        .extend_chain(
+            MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
+            BlockStrategy::OnCanonicalHead,
+            AttestationStrategy::AllValidators,
+        )
+        .await;
     // Advance into a slot where there have not been blocks or attestations produced.
     harness.advance_slot();
```