mirror of
https://github.com/sigp/lighthouse.git
synced 2026-04-17 21:08:32 +00:00
Use async code when interacting with EL (#3244)
## Overview
This rather extensive PR achieves two primary goals:
1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.
Additionally, it achieves:
- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
- I had to do this to deal with sending blocks into spawned tasks.
- Previously we were cloning the beacon block at least twice during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
- We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
- Avoids cloning *all the blocks* in *every chain segment* during sync.
- It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
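The `Arc`-ification above can be sketched as follows. Note this is an illustrative stand-in, not Lighthouse code: the real `SignedBeaconBlock` lives in the `types` crate and is much larger, which is exactly what makes deep clones expensive.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for the real `SignedBeaconBlock`.
struct SignedBeaconBlock {
    slot: u64,
}

fn main() {
    let block = Arc::new(SignedBeaconBlock { slot: 42 });

    // Cloning the `Arc` only bumps a reference count; the block itself is
    // never copied, so it can be moved into a spawned task cheaply.
    let block_for_task = Arc::clone(&block);
    let handle = thread::spawn(move || block_for_task.slot);

    assert_eq!(handle.join().unwrap(), 42);
    // The original handle remains usable after the task took its clone.
    assert_eq!(block.slot, 42);
}
```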
For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273
## Changes to `canonical_head` and `fork_choice`
Previously, the `BeaconChain` had two separate fields:
```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```
Now, we have grouped these values under a single struct:
```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```
Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
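A minimal sketch of how the `Arc` wrapper avoids holding the lock (names follow the PR; the field types here are simplified stand-ins, not the real Lighthouse types):

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-ins for the real snapshot and fork choice types.
struct Snapshot {
    head_block_root: u64,
}
struct BeaconForkChoice;

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>,
}

impl CanonicalHead {
    // Clone the `Arc` out and release the lock immediately. Callers can then
    // read snapshot values without holding `cached_head`, so they never hold
    // it and `fork_choice` simultaneously.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}

fn main() {
    let canonical_head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_block_root: 7 })),
        fork_choice: RwLock::new(BeaconForkChoice),
    };

    let head = canonical_head.cached_head(); // `cached_head` lock already released
    let _fc = canonical_head.fork_choice.write().unwrap(); // no lock-ordering risk
    assert_eq!(head.head_block_root, 7);
}
```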
## Breaking Changes
### The `state` (root) field in the `finalized_checkpoint` SSE event
Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:
1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.
Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).
I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
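Option (1) reduces to a simple lookup. A sketch, using a hypothetical in-memory block store in place of the beacon chain's real store, with `u64` standing in for `Hash256`:

```rust
use std::collections::HashMap;

// Hypothetical store mapping block_root -> the block's own state_root.
struct BlockStore {
    state_roots: HashMap<u64, u64>,
}

impl BlockStore {
    // Option (1): `get_block(finalized_checkpoint.root).state_root`. No
    // skip-slot state advancing is required, which is why it is cheaper to
    // compute than option (2).
    fn finalized_state_root(&self, finalized_block_root: u64) -> Option<u64> {
        self.state_roots.get(&finalized_block_root).copied()
    }
}

fn main() {
    let store = BlockStore {
        state_roots: HashMap::from([(0xaa, 0xbb)]),
    };
    assert_eq!(store.finalized_state_root(0xaa), Some(0xbb));
    assert_eq!(store.finalized_state_root(0xcc), None);
}
```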
## Notes for Reviewers
I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.
I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".
I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot when we saved it.
I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.
Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue, if anything I think it'll be better since the genesis-alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the `network` expected the 0x00..00 alias and patched it here: 3f26ac3e2.
You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We now generate it in each test; not as efficient, but hopefully insignificant.
I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
Co-authored-by: Mac L <mjladson@pm.me>
```
@@ -38,7 +38,7 @@ use tree_hash_derive::TreeHash;
         derive(Debug, PartialEq, TreeHash),
         tree_hash(enum_behaviour = "transparent")
     ),
-    map_ref_into(BeaconBlockBodyRef),
+    map_ref_into(BeaconBlockBodyRef, BeaconBlock),
     map_ref_mut_into(BeaconBlockBodyRefMut)
 )]
 #[derive(Debug, Clone, Serialize, Deserialize, Encode, TreeHash, Derivative)]
```
```
@@ -541,6 +541,50 @@ impl_from!(BeaconBlockBase, <E, FullPayload<E>>, <E, BlindedPayload<E>>, |body:
 impl_from!(BeaconBlockAltair, <E, FullPayload<E>>, <E, BlindedPayload<E>>, |body: BeaconBlockBodyAltair<_, _>| body.into());
 impl_from!(BeaconBlockMerge, <E, FullPayload<E>>, <E, BlindedPayload<E>>, |body: BeaconBlockBodyMerge<_, _>| body.into());
 
+// We can clone blocks with payloads to blocks without payloads, without cloning the payload.
+macro_rules! impl_clone_as_blinded {
+    ($ty_name:ident, <$($from_params:ty),*>, <$($to_params:ty),*>) => {
+        impl<E: EthSpec> $ty_name<$($from_params),*>
+        {
+            pub fn clone_as_blinded(&self) -> $ty_name<$($to_params),*> {
+                let $ty_name {
+                    slot,
+                    proposer_index,
+                    parent_root,
+                    state_root,
+                    body,
+                } = self;
+
+                $ty_name {
+                    slot: *slot,
+                    proposer_index: *proposer_index,
+                    parent_root: *parent_root,
+                    state_root: *state_root,
+                    body: body.clone_as_blinded(),
+                }
+            }
+        }
+    }
+}
+
+impl_clone_as_blinded!(BeaconBlockBase, <E, FullPayload<E>>, <E, BlindedPayload<E>>);
+impl_clone_as_blinded!(BeaconBlockAltair, <E, FullPayload<E>>, <E, BlindedPayload<E>>);
+impl_clone_as_blinded!(BeaconBlockMerge, <E, FullPayload<E>>, <E, BlindedPayload<E>>);
+
+// A reference to a full beacon block can be cloned into a blinded beacon block, without cloning the
+// execution payload.
+impl<'a, E: EthSpec> From<BeaconBlockRef<'a, E, FullPayload<E>>>
+    for BeaconBlock<E, BlindedPayload<E>>
+{
+    fn from(
+        full_block: BeaconBlockRef<'a, E, FullPayload<E>>,
+    ) -> BeaconBlock<E, BlindedPayload<E>> {
+        map_beacon_block_ref_into_beacon_block!(&'a _, full_block, |inner, cons| {
+            cons(inner.clone_as_blinded())
+        })
+    }
+}
+
 impl<E: EthSpec> From<BeaconBlock<E, FullPayload<E>>>
     for (
         BeaconBlock<E, BlindedPayload<E>>,
```
```
@@ -251,6 +251,53 @@ impl<E: EthSpec> From<BeaconBlockBodyMerge<E, FullPayload<E>>>
     }
 }
 
+// We can clone a full block into a blinded block, without cloning the payload.
+impl<E: EthSpec> BeaconBlockBodyBase<E, FullPayload<E>> {
+    pub fn clone_as_blinded(&self) -> BeaconBlockBodyBase<E, BlindedPayload<E>> {
+        let (block_body, _payload) = self.clone().into();
+        block_body
+    }
+}
+
+impl<E: EthSpec> BeaconBlockBodyAltair<E, FullPayload<E>> {
+    pub fn clone_as_blinded(&self) -> BeaconBlockBodyAltair<E, BlindedPayload<E>> {
+        let (block_body, _payload) = self.clone().into();
+        block_body
+    }
+}
+
+impl<E: EthSpec> BeaconBlockBodyMerge<E, FullPayload<E>> {
+    pub fn clone_as_blinded(&self) -> BeaconBlockBodyMerge<E, BlindedPayload<E>> {
+        let BeaconBlockBodyMerge {
+            randao_reveal,
+            eth1_data,
+            graffiti,
+            proposer_slashings,
+            attester_slashings,
+            attestations,
+            deposits,
+            voluntary_exits,
+            sync_aggregate,
+            execution_payload: FullPayload { execution_payload },
+        } = self;
+
+        BeaconBlockBodyMerge {
+            randao_reveal: randao_reveal.clone(),
+            eth1_data: eth1_data.clone(),
+            graffiti: *graffiti,
+            proposer_slashings: proposer_slashings.clone(),
+            attester_slashings: attester_slashings.clone(),
+            attestations: attestations.clone(),
+            deposits: deposits.clone(),
+            voluntary_exits: voluntary_exits.clone(),
+            sync_aggregate: sync_aggregate.clone(),
+            execution_payload: BlindedPayload {
+                execution_payload_header: From::from(execution_payload),
+            },
+        }
+    }
+}
+
 impl<E: EthSpec> From<BeaconBlockBody<E, FullPayload<E>>>
     for (
         BeaconBlockBody<E, BlindedPayload<E>>,
```
```
@@ -34,32 +34,34 @@ fn default_values() {
     assert!(cache.get_beacon_committees_at_slot(Slot::new(0)).is_err());
 }
 
-fn new_state<T: EthSpec>(validator_count: usize, slot: Slot) -> BeaconState<T> {
+async fn new_state<T: EthSpec>(validator_count: usize, slot: Slot) -> BeaconState<T> {
     let harness = get_harness(validator_count);
     let head_state = harness.get_current_state();
     if slot > Slot::new(0) {
-        harness.add_attested_blocks_at_slots(
-            head_state,
-            Hash256::zero(),
-            (1..slot.as_u64())
-                .map(Slot::new)
-                .collect::<Vec<_>>()
-                .as_slice(),
-            (0..validator_count).collect::<Vec<_>>().as_slice(),
-        );
+        harness
+            .add_attested_blocks_at_slots(
+                head_state,
+                Hash256::zero(),
+                (1..slot.as_u64())
+                    .map(Slot::new)
+                    .collect::<Vec<_>>()
+                    .as_slice(),
+                (0..validator_count).collect::<Vec<_>>().as_slice(),
+            )
+            .await;
     }
     harness.get_current_state()
 }
 
-#[test]
+#[tokio::test]
 #[should_panic]
-fn fails_without_validators() {
-    new_state::<MinimalEthSpec>(0, Slot::new(0));
+async fn fails_without_validators() {
+    new_state::<MinimalEthSpec>(0, Slot::new(0)).await;
 }
 
-#[test]
-fn initializes_with_the_right_epoch() {
-    let state = new_state::<MinimalEthSpec>(16, Slot::new(0));
+#[tokio::test]
+async fn initializes_with_the_right_epoch() {
+    let state = new_state::<MinimalEthSpec>(16, Slot::new(0)).await;
     let spec = &MinimalEthSpec::default_spec();
 
     let cache = CommitteeCache::default();
```
```
@@ -75,13 +77,13 @@ fn initializes_with_the_right_epoch() {
     assert!(cache.is_initialized_at(state.next_epoch().unwrap()));
 }
 
-#[test]
-fn shuffles_for_the_right_epoch() {
+#[tokio::test]
+async fn shuffles_for_the_right_epoch() {
     let num_validators = MinimalEthSpec::minimum_validator_count() * 2;
     let epoch = Epoch::new(6);
     let slot = epoch.start_slot(MinimalEthSpec::slots_per_epoch());
 
-    let mut state = new_state::<MinimalEthSpec>(num_validators, slot);
+    let mut state = new_state::<MinimalEthSpec>(num_validators, slot).await;
     let spec = &MinimalEthSpec::default_spec();
 
     let distinct_hashes: Vec<Hash256> = (0..MinimalEthSpec::epochs_per_historical_vector())
```
```
@@ -25,7 +25,7 @@ lazy_static! {
     static ref KEYPAIRS: Vec<Keypair> = generate_deterministic_keypairs(MAX_VALIDATOR_COUNT);
 }
 
-fn get_harness<E: EthSpec>(
+async fn get_harness<E: EthSpec>(
     validator_count: usize,
     slot: Slot,
 ) -> BeaconChainHarness<EphemeralHarnessType<E>> {
```
```
@@ -41,24 +41,26 @@ fn get_harness<E: EthSpec>(
             .map(Slot::new)
             .collect::<Vec<_>>();
         let state = harness.get_current_state();
-        harness.add_attested_blocks_at_slots(
-            state,
-            Hash256::zero(),
-            slots.as_slice(),
-            (0..validator_count).collect::<Vec<_>>().as_slice(),
-        );
+        harness
+            .add_attested_blocks_at_slots(
+                state,
+                Hash256::zero(),
+                slots.as_slice(),
+                (0..validator_count).collect::<Vec<_>>().as_slice(),
+            )
+            .await;
     }
     harness
 }
 
-fn build_state<E: EthSpec>(validator_count: usize) -> BeaconState<E> {
+async fn build_state<E: EthSpec>(validator_count: usize) -> BeaconState<E> {
     get_harness(validator_count, Slot::new(0))
+        .await
         .chain
-        .head_beacon_state()
-        .unwrap()
+        .head_beacon_state_cloned()
 }
 
-fn test_beacon_proposer_index<T: EthSpec>() {
+async fn test_beacon_proposer_index<T: EthSpec>() {
     let spec = T::default_spec();
 
     // Get the i'th candidate proposer for the given state and slot
```
```
@@ -85,20 +87,20 @@ fn test_beacon_proposer_index<T: EthSpec>() {
 
     // Test where we have one validator per slot.
     // 0th candidate should be chosen every time.
-    let state = build_state(T::slots_per_epoch() as usize);
+    let state = build_state(T::slots_per_epoch() as usize).await;
     for i in 0..T::slots_per_epoch() {
         test(&state, Slot::from(i), 0);
     }
 
     // Test where we have two validators per slot.
     // 0th candidate should be chosen every time.
-    let state = build_state((T::slots_per_epoch() as usize).mul(2));
+    let state = build_state((T::slots_per_epoch() as usize).mul(2)).await;
     for i in 0..T::slots_per_epoch() {
         test(&state, Slot::from(i), 0);
     }
 
     // Test with two validators per slot, first validator has zero balance.
-    let mut state = build_state::<T>((T::slots_per_epoch() as usize).mul(2));
+    let mut state = build_state::<T>((T::slots_per_epoch() as usize).mul(2)).await;
     let slot0_candidate0 = ith_candidate(&state, Slot::new(0), 0, &spec);
     state.validators_mut()[slot0_candidate0].effective_balance = 0;
     test(&state, Slot::new(0), 1);
```
```
@@ -107,9 +109,9 @@ fn test_beacon_proposer_index<T: EthSpec>() {
     }
 }
 
-#[test]
-fn beacon_proposer_index() {
-    test_beacon_proposer_index::<MinimalEthSpec>();
+#[tokio::test]
+async fn beacon_proposer_index() {
+    test_beacon_proposer_index::<MinimalEthSpec>().await;
 }
 
 /// Test that
```
```
@@ -144,11 +146,11 @@ fn test_cache_initialization<T: EthSpec>(
     );
 }
 
-#[test]
-fn cache_initialization() {
+#[tokio::test]
+async fn cache_initialization() {
     let spec = MinimalEthSpec::default_spec();
 
-    let mut state = build_state::<MinimalEthSpec>(16);
+    let mut state = build_state::<MinimalEthSpec>(16).await;
 
     *state.slot_mut() =
         (MinimalEthSpec::genesis_epoch() + 1).start_slot(MinimalEthSpec::slots_per_epoch());
```
```
@@ -211,11 +213,11 @@ fn test_clone_config<E: EthSpec>(base_state: &BeaconState<E>, clone_config: Clon
     }
 }
 
-#[test]
-fn clone_config() {
+#[tokio::test]
+async fn clone_config() {
     let spec = MinimalEthSpec::default_spec();
 
-    let mut state = build_state::<MinimalEthSpec>(16);
+    let mut state = build_state::<MinimalEthSpec>(16).await;
 
     state.build_all_caches(&spec).unwrap();
     state
```
```
@@ -314,7 +316,7 @@ mod committees {
         assert!(expected_indices_iter.next().is_none());
     }
 
-    fn committee_consistency_test<T: EthSpec>(
+    async fn committee_consistency_test<T: EthSpec>(
         validator_count: usize,
         state_epoch: Epoch,
        cache_epoch: RelativeEpoch,
@@ -322,7 +324,7 @@
        let spec = &T::default_spec();
 
         let slot = state_epoch.start_slot(T::slots_per_epoch());
-        let harness = get_harness::<T>(validator_count, slot);
+        let harness = get_harness::<T>(validator_count, slot).await;
         let mut new_head_state = harness.get_current_state();
 
         let distinct_hashes: Vec<Hash256> = (0..T::epochs_per_historical_vector())
```
```
@@ -350,7 +352,7 @@
         );
     }
 
-    fn committee_consistency_test_suite<T: EthSpec>(cached_epoch: RelativeEpoch) {
+    async fn committee_consistency_test_suite<T: EthSpec>(cached_epoch: RelativeEpoch) {
         let spec = T::default_spec();
 
         let validator_count = spec
@@ -359,13 +361,15 @@
             .mul(spec.target_committee_size)
             .add(1);
 
-        committee_consistency_test::<T>(validator_count as usize, Epoch::new(0), cached_epoch);
+        committee_consistency_test::<T>(validator_count as usize, Epoch::new(0), cached_epoch)
+            .await;
 
         committee_consistency_test::<T>(
             validator_count as usize,
             T::genesis_epoch() + 4,
             cached_epoch,
-        );
+        )
+        .await;
 
         committee_consistency_test::<T>(
             validator_count as usize,
@@ -374,38 +378,39 @@
             .mul(T::slots_per_epoch())
             .mul(4),
             cached_epoch,
-        );
+        )
+        .await;
     }
 
-    #[test]
-    fn current_epoch_committee_consistency() {
-        committee_consistency_test_suite::<MinimalEthSpec>(RelativeEpoch::Current);
+    #[tokio::test]
+    async fn current_epoch_committee_consistency() {
+        committee_consistency_test_suite::<MinimalEthSpec>(RelativeEpoch::Current).await;
     }
 
-    #[test]
-    fn previous_epoch_committee_consistency() {
-        committee_consistency_test_suite::<MinimalEthSpec>(RelativeEpoch::Previous);
+    #[tokio::test]
+    async fn previous_epoch_committee_consistency() {
+        committee_consistency_test_suite::<MinimalEthSpec>(RelativeEpoch::Previous).await;
    }
 
-    #[test]
-    fn next_epoch_committee_consistency() {
-        committee_consistency_test_suite::<MinimalEthSpec>(RelativeEpoch::Next);
+    #[tokio::test]
+    async fn next_epoch_committee_consistency() {
+        committee_consistency_test_suite::<MinimalEthSpec>(RelativeEpoch::Next).await;
     }
 }
 
 mod get_outstanding_deposit_len {
     use super::*;
 
-    fn state() -> BeaconState<MinimalEthSpec> {
+    async fn state() -> BeaconState<MinimalEthSpec> {
         get_harness(16, Slot::new(0))
+            .await
             .chain
-            .head_beacon_state()
-            .unwrap()
+            .head_beacon_state_cloned()
     }
 
-    #[test]
-    fn returns_ok() {
-        let mut state = state();
+    #[tokio::test]
+    async fn returns_ok() {
+        let mut state = state().await;
         assert_eq!(state.get_outstanding_deposit_len(), Ok(0));
 
         state.eth1_data_mut().deposit_count = 17;
```
```
@@ -413,9 +418,9 @@ mod get_outstanding_deposit_len {
         assert_eq!(state.get_outstanding_deposit_len(), Ok(1));
     }
 
-    #[test]
-    fn returns_err_if_the_state_is_invalid() {
-        let mut state = state();
+    #[tokio::test]
+    async fn returns_err_if_the_state_is_invalid() {
+        let mut state = state().await;
         // The state is invalid, deposit count is lower than deposit index.
         state.eth1_data_mut().deposit_count = 16;
         *state.eth1_deposit_index_mut() = 17;
```
```
@@ -28,6 +28,8 @@ pub trait ExecPayload<T: EthSpec>:
     + Hash
     + TryFrom<ExecutionPayloadHeader<T>>
+    + From<ExecutionPayload<T>>
+    + Send
     + 'static
 {
     fn block_type() -> BlockType;
 
```
```
@@ -346,6 +346,14 @@ impl<E: EthSpec> From<SignedBeaconBlock<E>> for SignedBlindedBeaconBlock<E> {
     }
 }
 
+// We can blind borrowed blocks with payloads by converting the payload into a header (without
+// cloning the payload contents).
+impl<E: EthSpec> SignedBeaconBlock<E> {
+    pub fn clone_as_blinded(&self) -> SignedBlindedBeaconBlock<E> {
+        SignedBeaconBlock::from_block(self.message().into(), self.signature().clone())
+    }
+}
+
 #[cfg(test)]
 mod test {
     use super::*;
```