mirror of
https://github.com/sigp/lighthouse.git
synced 2026-03-15 19:02:42 +00:00
Optimize validator duties (#2243)
## Issue Addressed

Closes #2052

## Proposed Changes

- Refactor the attester/proposer duties endpoints in the BN
  - Performance improvements
  - Fixes some potential inconsistencies with the dependent root fields.
  - Removes `http_api::beacon_proposer_cache` and just uses the one on the `BeaconChain` instead.
  - Moves the code for the proposer/attester duties endpoints into separate files, for readability.
- Refactor the `DutiesService` in the VC
  - Required to reduce the delay on broadcasting new blocks.
  - Gets rid of the `ValidatorDuty` shim struct that came about when we adopted the standard API.
  - Separates block/attestation duty tasks so that they don't block each other when one is slow.
- In the VC, use `PublicKeyBytes` to represent validators instead of `PublicKey`. `PublicKey` is a legit crypto object whilst `PublicKeyBytes` is just a byte array; it's much faster to clone/hash `PublicKeyBytes`, and this change has had a significant impact on runtimes.
  - Unfortunately this has created lots of dust changes.
- In the BN, store `PublicKeyBytes` in the `beacon_proposer_cache` and allow access to them. The HTTP API always sends `PublicKeyBytes` over the wire and the conversion from `PublicKey` -> `PublicKeyBytes` is non-trivial, especially when queries have 100s/1000s of validators (like Pyrmont).
- Add the `state_processing::state_advance` mod, which dedups a lot of the "apply `n` skip slots to the state" code.
  - This also fixes a bug with some functions which were failing to include a state root, as per [this comment](072695284f/consensus/state_processing/src/state_advance.rs (L69-L74)). I couldn't find any instance of this bug that resulted in anything more severe than keying a shuffling cache by the wrong block root.
- Swap the VC block service to use `mpsc` from `tokio` instead of `futures`. This is consistent with the rest of the code base.

~~This PR *reduces* the size of the codebase 🎉~~ It *used* to reduce the size of the code base before I added more comments.

## Observations on Pyrmont

- Proposer duties times down from peaks of 450ms to a consistent <1ms.
- Current epoch attester duties times down from >1s peaks to a consistent 20-30ms.
- Block production down from +600ms to 100-200ms.

## Additional Info

- ~~Blocked on #2241~~
- ~~Blocked on #2234~~

## TODO

- [x] ~~Refactor this into some smaller PRs?~~ Leaving this as-is for now.
- [x] Address `per_slot_processing` roots.
- [x] Investigate slow next-epoch times. Not getting added to cache on block processing?
- [x] Consider [this](072695284f/beacon_node/store/src/hot_cold_store.rs (L811-L812)) in the scenario of replacing the state roots

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
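The `PublicKey` -> `PublicKeyBytes` switch described above works because a fixed-size byte array can be cloned with a memcpy and hashed without any deserialization, whereas a parsed crypto object carries decompressed curve points. The sketch below illustrates the idea with a hypothetical stand-in type; the real BLS types live in Lighthouse's `bls` crate and differ in detail.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the real `PublicKeyBytes`: a plain 48-byte
// array, so `Clone`/`Copy` is a memcpy and `Hash`/`Eq` compare raw bytes.
// The parsed `PublicKey` form cannot derive `Copy` like this.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct PublicKeyBytes([u8; 48]);

fn main() {
    // Using the byte form directly as a map key avoids re-serializing the
    // key on every lookup or insert.
    let mut duties: HashMap<PublicKeyBytes, u64> = HashMap::new();
    let key = PublicKeyBytes([7u8; 48]);
    duties.insert(key, 42);

    // `Copy` means this "clone" is just a 48-byte memcpy.
    let same_key = key;
    assert_eq!(duties.get(&same_key), Some(&42));
    println!("duty slot: {}", duties[&same_key]);
}
```

This is why the PR keeps validators as `PublicKeyBytes` end-to-end in the VC and only parses into `PublicKey` when a signature actually needs verifying.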
beacon_node/http_api/src/attester_duties.rs (new file, 201 lines)
@@ -0,0 +1,201 @@
//! Contains the handler for the `GET validator/duties/attester/{epoch}` endpoint.

use crate::state_id::StateId;
use beacon_chain::{BeaconChain, BeaconChainError, BeaconChainTypes};
use eth2::types::{self as api_types};
use state_processing::state_advance::partial_state_advance;
use types::{
    AttestationDuty, BeaconState, ChainSpec, CloneConfig, Epoch, EthSpec, Hash256, RelativeEpoch,
};

/// The struct that is returned to the requesting HTTP client.
type ApiDuties = api_types::DutiesResponse<Vec<api_types::AttesterData>>;

/// Handles a request from the HTTP API for attester duties.
pub fn attester_duties<T: BeaconChainTypes>(
    request_epoch: Epoch,
    request_indices: &[u64],
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    let current_epoch = chain
        .epoch()
        .map_err(warp_utils::reject::beacon_chain_error)?;
    let next_epoch = current_epoch + 1;

    if request_epoch > next_epoch {
        Err(warp_utils::reject::custom_bad_request(format!(
            "request epoch {} is more than one epoch past the current epoch {}",
            request_epoch, current_epoch
        )))
    } else if request_epoch == current_epoch || request_epoch == next_epoch {
        cached_attestation_duties(request_epoch, request_indices, chain)
    } else {
        compute_historic_attester_duties(request_epoch, request_indices, chain)
    }
}

fn cached_attestation_duties<T: BeaconChainTypes>(
    request_epoch: Epoch,
    request_indices: &[u64],
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    let head = chain
        .head_info()
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let (duties, dependent_root) = chain
        .validator_attestation_duties(&request_indices, request_epoch, head.block_root)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    convert_to_api_response(duties, request_indices, dependent_root, chain)
}

/// Compute some attester duties by reading a `BeaconState` from disk, completely ignoring the
/// shuffling cache.
fn compute_historic_attester_duties<T: BeaconChainTypes>(
    request_epoch: Epoch,
    request_indices: &[u64],
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    // If the head is quite old then it might still be relevant for a historical request.
    //
    // Use the `with_head` function to read & clone in a single call to avoid race conditions.
    let state_opt = chain
        .with_head(|head| {
            if head.beacon_state.current_epoch() <= request_epoch {
                Ok(Some((
                    head.beacon_state_root(),
                    head.beacon_state
                        .clone_with(CloneConfig::committee_caches_only()),
                )))
            } else {
                Ok(None)
            }
        })
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let mut state = if let Some((state_root, mut state)) = state_opt {
        // If we've loaded the head state it might be from a previous epoch, ensure it's in a
        // suitable epoch.
        ensure_state_knows_attester_duties_for_epoch(
            &mut state,
            state_root,
            request_epoch,
            &chain.spec,
        )?;
        state
    } else {
        StateId::slot(request_epoch.start_slot(T::EthSpec::slots_per_epoch())).state(&chain)?
    };

    // Sanity-check the state lookup.
    if !(state.current_epoch() == request_epoch || state.current_epoch() + 1 == request_epoch) {
        return Err(warp_utils::reject::custom_server_error(format!(
            "state epoch {} not suitable for request epoch {}",
            state.current_epoch(),
            request_epoch
        )));
    }

    let relative_epoch =
        RelativeEpoch::from_epoch(state.current_epoch(), request_epoch).map_err(|e| {
            warp_utils::reject::custom_server_error(format!("invalid epoch for state: {:?}", e))
        })?;

    state
        .build_committee_cache(relative_epoch, &chain.spec)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let dependent_root = state
        // The only block which decides its own shuffling is the genesis block.
        .attester_shuffling_decision_root(chain.genesis_block_root, relative_epoch)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let duties = request_indices
        .iter()
        .map(|&validator_index| {
            state
                .get_attestation_duties(validator_index as usize, relative_epoch)
                .map_err(BeaconChainError::from)
        })
        .collect::<Result<_, _>>()
        .map_err(warp_utils::reject::beacon_chain_error)?;

    convert_to_api_response(duties, request_indices, dependent_root, chain)
}

fn ensure_state_knows_attester_duties_for_epoch<E: EthSpec>(
    state: &mut BeaconState<E>,
    state_root: Hash256,
    target_epoch: Epoch,
    spec: &ChainSpec,
) -> Result<(), warp::reject::Rejection> {
    // Protect against an inconsistent slot clock.
    if state.current_epoch() > target_epoch {
        return Err(warp_utils::reject::custom_server_error(format!(
            "state epoch {} is later than target epoch {}",
            state.current_epoch(),
            target_epoch
        )));
    } else if state.current_epoch() + 1 < target_epoch {
        // Since there's a one-epoch look-ahead on attester duties, it suffices to only advance to
        // the prior epoch.
        let target_slot = target_epoch
            .saturating_sub(1_u64)
            .start_slot(E::slots_per_epoch());

        // A "partial" state advance is adequate since attester duties don't rely on state roots.
        partial_state_advance(state, Some(state_root), target_slot, spec)
            .map_err(BeaconChainError::from)
            .map_err(warp_utils::reject::beacon_chain_error)?;
    }

    Ok(())
}

/// Convert the internal representation of attester duties into the format returned to the HTTP
/// client.
fn convert_to_api_response<T: BeaconChainTypes>(
    duties: Vec<Option<AttestationDuty>>,
    indices: &[u64],
    dependent_root: Hash256,
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    // Protect against an inconsistent response.
    if duties.len() != indices.len() {
        return Err(warp_utils::reject::custom_server_error(format!(
            "duties length {} does not match indices length {}",
            duties.len(),
            indices.len()
        )));
    }

    let usize_indices = indices.iter().map(|i| *i as usize).collect::<Vec<_>>();
    let index_to_pubkey_map = chain
        .validator_pubkey_bytes_many(&usize_indices)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let data = duties
        .into_iter()
        .zip(indices)
        .filter_map(|(duty_opt, &validator_index)| {
            let duty = duty_opt?;
            Some(api_types::AttesterData {
                pubkey: *index_to_pubkey_map.get(&(validator_index as usize))?,
                validator_index,
                committees_at_slot: duty.committees_at_slot,
                committee_index: duty.index,
                committee_length: duty.committee_len as u64,
                validator_committee_index: duty.committee_position as u64,
                slot: duty.slot,
            })
        })
        .collect::<Vec<_>>();

    Ok(api_types::DutiesResponse {
        dependent_root,
        data,
    })
}
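The handler above dispatches on a three-way epoch window: requests for the current or next epoch hit the shuffling cache (attester duties have one epoch of look-ahead), anything further in the future is rejected, and older epochs fall back to the slow on-disk path. A minimal sketch of that dispatch, with `Epoch` simplified to a plain `u64` and labels standing in for the real handlers:

```rust
/// Simplified mirror of the epoch check in `attester_duties` above; the
/// three outcomes are labels rather than the real handler calls.
fn dispatch(request_epoch: u64, current_epoch: u64) -> &'static str {
    let next_epoch = current_epoch + 1;
    if request_epoch > next_epoch {
        // More than one epoch ahead: the shuffling isn't decided yet.
        "reject"
    } else if request_epoch == current_epoch || request_epoch == next_epoch {
        // Current or next epoch: served from the shuffling cache.
        "cached"
    } else {
        // Historic epoch: slow path that reads a state from disk.
        "historic"
    }
}

fn main() {
    assert_eq!(dispatch(12, 10), "reject");
    assert_eq!(dispatch(11, 10), "cached");
    assert_eq!(dispatch(10, 10), "cached");
    assert_eq!(dispatch(9, 10), "historic");
    println!("ok");
}
```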
@@ -1,186 +0,0 @@
use crate::metrics;
use beacon_chain::{BeaconChain, BeaconChainError, BeaconChainTypes};
use eth2::types::ProposerData;
use fork_choice::ProtoBlock;
use slot_clock::SlotClock;
use state_processing::per_slot_processing;
use types::{BeaconState, Epoch, EthSpec, Hash256, PublicKeyBytes};

/// This sets a maximum bound on the number of epochs to skip whilst instantiating the cache for
/// the first time.
const EPOCHS_TO_SKIP: u64 = 2;

/// Caches the beacon block proposers for a given `epoch` and `epoch_boundary_root`.
///
/// This cache is only able to contain a single set of proposers and is only
/// intended to cache the proposers for the current epoch according to the head
/// of the chain. A change in epoch or re-org to a different chain may cause a
/// cache miss and rebuild.
pub struct BeaconProposerCache {
    epoch: Epoch,
    decision_block_root: Hash256,
    proposers: Vec<ProposerData>,
}

impl BeaconProposerCache {
    /// Create a new cache for the current epoch of the `chain`.
    pub fn new<T: BeaconChainTypes>(chain: &BeaconChain<T>) -> Result<Self, BeaconChainError> {
        let head_root = chain.head_beacon_block_root()?;
        let head_block = chain
            .fork_choice
            .read()
            .get_block(&head_root)
            .ok_or(BeaconChainError::MissingBeaconBlock(head_root))?;

        // If the head epoch is more than `EPOCHS_TO_SKIP` in the future, just build the cache at
        // the epoch of the head. This prevents doing a massive amount of skip slots when starting
        // a new database from genesis.
        let epoch = {
            let epoch_now = chain
                .epoch()
                .unwrap_or_else(|_| chain.spec.genesis_slot.epoch(T::EthSpec::slots_per_epoch()));
            let head_epoch = head_block.slot.epoch(T::EthSpec::slots_per_epoch());
            if epoch_now > head_epoch + EPOCHS_TO_SKIP {
                head_epoch
            } else {
                epoch_now
            }
        };

        Self::for_head_block(chain, epoch, head_root, head_block)
    }

    /// Create a new cache that contains the shuffling for `current_epoch`,
    /// assuming that `head_root` and `head_block` represent the most recent
    /// canonical block.
    fn for_head_block<T: BeaconChainTypes>(
        chain: &BeaconChain<T>,
        current_epoch: Epoch,
        head_root: Hash256,
        head_block: ProtoBlock,
    ) -> Result<Self, BeaconChainError> {
        let _timer = metrics::start_timer(&metrics::HTTP_API_BEACON_PROPOSER_CACHE_TIMES);

        let mut head_state = chain
            .get_state(&head_block.state_root, Some(head_block.slot))?
            .ok_or(BeaconChainError::MissingBeaconState(head_block.state_root))?;

        let decision_block_root = Self::decision_block_root(current_epoch, head_root, &head_state)?;

        // We *must* skip forward to the current epoch to obtain valid proposer
        // duties. We cannot skip to the previous epoch, like we do with
        // attester duties.
        while head_state.current_epoch() < current_epoch {
            // Skip slots until the current epoch, providing `Hash256::zero()` as the state root
            // since we don't require it to be valid to identify producers.
            per_slot_processing(&mut head_state, Some(Hash256::zero()), &chain.spec)?;
        }

        let proposers = current_epoch
            .slot_iter(T::EthSpec::slots_per_epoch())
            .map(|slot| {
                head_state
                    .get_beacon_proposer_index(slot, &chain.spec)
                    .map_err(BeaconChainError::from)
                    .and_then(|i| {
                        let pubkey = chain
                            .validator_pubkey(i)?
                            .ok_or(BeaconChainError::ValidatorPubkeyCacheIncomplete(i))?;

                        Ok(ProposerData {
                            pubkey: PublicKeyBytes::from(pubkey),
                            validator_index: i as u64,
                            slot,
                        })
                    })
            })
            .collect::<Result<_, _>>()?;

        Ok(Self {
            epoch: current_epoch,
            decision_block_root,
            proposers,
        })
    }

    /// Returns a block root which can be used to key the shuffling obtained from the following
    /// parameters:
    ///
    /// - `shuffling_epoch`: the epoch for which the shuffling pertains.
    /// - `head_block_root`: the block root at the head of the chain.
    /// - `head_block_state`: the state of `head_block_root`.
    pub fn decision_block_root<E: EthSpec>(
        shuffling_epoch: Epoch,
        head_block_root: Hash256,
        head_block_state: &BeaconState<E>,
    ) -> Result<Hash256, BeaconChainError> {
        let decision_slot = shuffling_epoch
            .start_slot(E::slots_per_epoch())
            .saturating_sub(1_u64);

        // If decision slot is equal to or ahead of the head, the block root is the head block root.
        if decision_slot >= head_block_state.slot {
            Ok(head_block_root)
        } else {
            head_block_state
                .get_block_root(decision_slot)
                .map(|root| *root)
                .map_err(Into::into)
        }
    }

    /// Return the proposers for the given `Epoch`.
    ///
    /// The cache may be rebuilt if:
    ///
    /// - The epoch has changed since the last cache build.
    /// - There has been a re-org that crosses an epoch boundary.
    pub fn get_proposers<T: BeaconChainTypes>(
        &mut self,
        chain: &BeaconChain<T>,
        epoch: Epoch,
    ) -> Result<Vec<ProposerData>, warp::Rejection> {
        let current_epoch = chain
            .slot_clock
            .now_or_genesis()
            .ok_or_else(|| {
                warp_utils::reject::custom_server_error("unable to read slot clock".to_string())
            })?
            .epoch(T::EthSpec::slots_per_epoch());

        // Disallow requests that are outside the current epoch. This ensures the cache doesn't get
        // washed-out with old values.
        if current_epoch != epoch {
            return Err(warp_utils::reject::custom_bad_request(format!(
                "requested epoch is {} but only current epoch {} is allowed",
                epoch, current_epoch
            )));
        }

        let (head_block_root, head_decision_block_root) = chain
            .with_head(|head| {
                Self::decision_block_root(current_epoch, head.beacon_block_root, &head.beacon_state)
                    .map(|decision_root| (head.beacon_block_root, decision_root))
            })
            .map_err(warp_utils::reject::beacon_chain_error)?;

        let head_block = chain
            .fork_choice
            .read()
            .get_block(&head_block_root)
            .ok_or(BeaconChainError::MissingBeaconBlock(head_block_root))
            .map_err(warp_utils::reject::beacon_chain_error)?;

        // Rebuild the cache if this call causes a cache-miss.
        if self.epoch != current_epoch || self.decision_block_root != head_decision_block_root {
            metrics::inc_counter(&metrics::HTTP_API_BEACON_PROPOSER_CACHE_MISSES_TOTAL);

            *self = Self::for_head_block(chain, current_epoch, head_block_root, head_block)
                .map_err(warp_utils::reject::beacon_chain_error)?;
        } else {
            metrics::inc_counter(&metrics::HTTP_API_BEACON_PROPOSER_CACHE_HITS_TOTAL);
        }

        Ok(self.proposers.clone())
    }
}
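`decision_block_root` in the removed file above keys the cache by the root of the last block before the shuffling epoch began: one slot before the epoch's start slot, saturating at genesis. A tiny sketch of that slot arithmetic, with epochs and slots reduced to plain `u64` values:

```rust
/// The slot whose block root "decides" the shuffling for `shuffling_epoch`:
/// one slot before the epoch's start slot, saturating to 0 at genesis
/// (mirroring the `saturating_sub(1)` in `decision_block_root` above).
fn shuffling_decision_slot(shuffling_epoch: u64, slots_per_epoch: u64) -> u64 {
    (shuffling_epoch * slots_per_epoch).saturating_sub(1)
}

fn main() {
    // Genesis epoch: start slot is 0, and saturating_sub keeps us at slot 0.
    assert_eq!(shuffling_decision_slot(0, 32), 0);
    // Epoch 2 starts at slot 64, so its shuffling was decided at slot 63.
    assert_eq!(shuffling_decision_slot(2, 32), 63);
    println!("ok");
}
```

If the decision slot is at or ahead of the head, the head block root itself is the key, which is why a re-org across an epoch boundary forces a rebuild.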
@@ -5,9 +5,10 @@
//! There are also some additional, non-standard endpoints behind the `/lighthouse/` path which are
//! used for development.

-mod beacon_proposer_cache;
+mod attester_duties;
mod block_id;
mod metrics;
+mod proposer_duties;
mod state_id;
mod validator_inclusion;
@@ -17,19 +18,16 @@ use beacon_chain::{
    validator_monitor::{get_block_delay_ms, timestamp_now},
    AttestationError as AttnError, BeaconChain, BeaconChainError, BeaconChainTypes,
};
-use beacon_proposer_cache::BeaconProposerCache;
use block_id::BlockId;
use eth2::types::{self as api_types, ValidatorId};
use eth2_libp2p::{types::SyncState, EnrExt, NetworkGlobals, PeerId, PubsubMessage};
use lighthouse_version::version_with_platform;
use network::NetworkMessage;
-use parking_lot::Mutex;
use serde::{Deserialize, Serialize};
use slog::{crit, debug, error, info, warn, Logger};
use slot_clock::SlotClock;
use ssz::Encode;
use state_id::StateId;
-use state_processing::per_slot_processing;
use std::borrow::Cow;
use std::convert::TryInto;
use std::future::Future;
@@ -38,9 +36,8 @@ use std::sync::Arc;
use tokio::sync::mpsc::UnboundedSender;
use tokio_stream::{wrappers::BroadcastStream, StreamExt};
use types::{
-    Attestation, AttestationDuty, AttesterSlashing, CloneConfig, CommitteeCache, Epoch, EthSpec,
-    Hash256, ProposerSlashing, PublicKey, PublicKeyBytes, RelativeEpoch, SignedAggregateAndProof,
-    SignedBeaconBlock, SignedVoluntaryExit, Slot, YamlConfig,
+    Attestation, AttesterSlashing, CommitteeCache, Epoch, EthSpec, ProposerSlashing, RelativeEpoch,
+    SignedAggregateAndProof, SignedBeaconBlock, SignedVoluntaryExit, Slot, YamlConfig,
};
use warp::http::StatusCode;
use warp::sse::Event;
@@ -240,30 +237,6 @@ pub fn serve<T: BeaconChainTypes>(

    let eth1_v1 = warp::path(API_PREFIX).and(warp::path(API_VERSION));

-    // Instantiate the beacon proposer cache.
-    let beacon_proposer_cache = ctx
-        .chain
-        .as_ref()
-        .map(|chain| BeaconProposerCache::new(&chain))
-        .transpose()
-        .map_err(|e| format!("Unable to initialize beacon proposer cache: {:?}", e))?
-        .map(Mutex::new)
-        .map(Arc::new);
-
-    // Create a `warp` filter that provides access to the proposer cache.
-    let beacon_proposer_cache = || {
-        warp::any()
-            .map(move || beacon_proposer_cache.clone())
-            .and_then(|beacon_proposer_cache| async move {
-                match beacon_proposer_cache {
-                    Some(cache) => Ok(cache),
-                    None => Err(warp_utils::reject::custom_not_found(
-                        "Beacon proposer cache is not initialized.".to_string(),
-                    )),
-                }
-            })
-    };
-
    // Create a `warp` filter that provides access to the network globals.
    let inner_network_globals = ctx.network_globals.clone();
    let network_globals = warp::any()
@@ -1674,89 +1647,10 @@ pub fn serve<T: BeaconChainTypes>(
        .and(warp::path::end())
        .and(not_while_syncing_filter.clone())
        .and(chain_filter.clone())
-        .and(beacon_proposer_cache())
-        .and_then(
-            |epoch: Epoch,
-             chain: Arc<BeaconChain<T>>,
-             beacon_proposer_cache: Arc<Mutex<BeaconProposerCache>>| {
-                blocking_json_task(move || {
-                    let current_epoch = chain
-                        .epoch()
-                        .map_err(warp_utils::reject::beacon_chain_error)?;
-
-                    if epoch > current_epoch {
-                        return Err(warp_utils::reject::custom_bad_request(format!(
-                            "request epoch {} is ahead of the current epoch {}",
-                            epoch, current_epoch
-                        )));
-                    }
-
-                    if epoch == current_epoch {
-                        let dependent_root_slot = current_epoch
-                            .start_slot(T::EthSpec::slots_per_epoch()) - 1;
-                        let dependent_root = if dependent_root_slot > chain.best_slot().map_err(warp_utils::reject::beacon_chain_error)? {
-                            chain.head_beacon_block_root().map_err(warp_utils::reject::beacon_chain_error)?
-                        } else {
-                            chain
-                                .root_at_slot(dependent_root_slot)
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                                .unwrap_or(chain.genesis_block_root)
-                        };
-
-                        beacon_proposer_cache
-                            .lock()
-                            .get_proposers(&chain, epoch)
-                            .map(|duties| api_types::DutiesResponse { data: duties, dependent_root })
-                    } else {
-                        let state =
-                            StateId::slot(epoch.start_slot(T::EthSpec::slots_per_epoch()))
-                                .state(&chain)?;
-
-                        let dependent_root_slot = state.current_epoch()
-                            .start_slot(T::EthSpec::slots_per_epoch()) - 1;
-                        let dependent_root = if dependent_root_slot > chain.best_slot().map_err(warp_utils::reject::beacon_chain_error)? {
-                            chain.head_beacon_block_root().map_err(warp_utils::reject::beacon_chain_error)?
-                        } else {
-                            chain
-                                .root_at_slot(dependent_root_slot)
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                                .unwrap_or(chain.genesis_block_root)
-                        };
-
-                        epoch
-                            .slot_iter(T::EthSpec::slots_per_epoch())
-                            .map(|slot| {
-                                state
-                                    .get_beacon_proposer_index(slot, &chain.spec)
-                                    .map_err(warp_utils::reject::beacon_state_error)
-                                    .and_then(|i| {
-                                        let pubkey =
-                                            chain.validator_pubkey(i)
-                                                .map_err(warp_utils::reject::beacon_chain_error)?
-                                                .ok_or_else(||
-                                                    warp_utils::reject::beacon_chain_error(
-                                                        BeaconChainError::ValidatorPubkeyCacheIncomplete(i)
-                                                    )
-                                                )?;
-
-                                        Ok(api_types::ProposerData {
-                                            pubkey: PublicKeyBytes::from(pubkey),
-                                            validator_index: i as u64,
-                                            slot,
-                                        })
-                                    })
-                            })
-                            .collect::<Result<Vec<api_types::ProposerData>, _>>()
-                            .map(|duties| {
-                                api_types::DutiesResponse {
-                                    dependent_root,
-                                    data: duties,
-                                }
-                            })
-                    }
-                })
-            },
-        );
+        .and(log_filter.clone())
+        .and_then(|epoch: Epoch, chain: Arc<BeaconChain<T>>, log: Logger| {
+            blocking_json_task(move || proposer_duties::proposer_duties(epoch, &chain, &log))
+        });

    // GET validator/blocks/{slot}
    let get_validator_blocks = eth1_v1
@@ -1865,188 +1759,7 @@ pub fn serve<T: BeaconChainTypes>(
        .and_then(
            |epoch: Epoch, indices: api_types::ValidatorIndexData, chain: Arc<BeaconChain<T>>| {
                blocking_json_task(move || {
-                    let current_epoch = chain
-                        .epoch()
-                        .map_err(warp_utils::reject::beacon_chain_error)?;
-
-                    if epoch > current_epoch + 1 {
-                        return Err(warp_utils::reject::custom_bad_request(format!(
-                            "request epoch {} is more than one epoch past the current epoch {}",
-                            epoch, current_epoch
-                        )));
-                    }
-
-                    let validator_count = StateId::head()
-                        .map_state(&chain, |state| Ok(state.validators.len() as u64))?;
-
-                    let pubkeys = indices
-                        .0
-                        .iter()
-                        .filter(|i| **i < validator_count as u64)
-                        .map(|i| {
-                            let pubkey = chain
-                                .validator_pubkey(*i as usize)
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                                .ok_or_else(|| {
-                                    warp_utils::reject::custom_bad_request(format!(
-                                        "unknown validator index {}",
-                                        *i
-                                    ))
-                                })?;
-
-                            Ok((*i, pubkey))
-                        })
-                        .collect::<Result<Vec<_>, warp::Rejection>>()?;
-
-                    // Converts the internal Lighthouse `AttestationDuty` struct into an
-                    // API-conforming `AttesterData` struct.
-                    let convert = |validator_index: u64,
-                                   pubkey: PublicKey,
-                                   duty: AttestationDuty|
-                     -> api_types::AttesterData {
-                        api_types::AttesterData {
-                            pubkey: pubkey.into(),
-                            validator_index,
-                            committees_at_slot: duty.committees_at_slot,
-                            committee_index: duty.index,
-                            committee_length: duty.committee_len as u64,
-                            validator_committee_index: duty.committee_position as u64,
-                            slot: duty.slot,
-                        }
-                    };
-
-                    // Here we have two paths:
-                    //
-                    // ## Fast
-                    //
-                    // If the request epoch is the current epoch, use the cached beacon chain
-                    // method.
-                    //
-                    // ## Slow
-                    //
-                    // If the request epoch is prior to the current epoch, load a beacon state from
-                    // disk
-                    //
-                    // The idea is to stop historical requests from washing out the cache on the
-                    // beacon chain, whilst allowing a VC to request duties quickly.
-                    let (duties, dependent_root) = if epoch == current_epoch {
-                        // Fast path.
-                        let duties = pubkeys
-                            .into_iter()
-                            // Exclude indices which do not represent a known public key and a
-                            // validator duty.
-                            .filter_map(|(i, pubkey)| {
-                                Some(
-                                    chain
-                                        .validator_attestation_duty(i as usize, epoch)
-                                        .transpose()?
-                                        .map_err(warp_utils::reject::beacon_chain_error)
-                                        .map(|duty| convert(i, pubkey, duty)),
-                                )
-                            })
-                            .collect::<Result<Vec<_>, warp::Rejection>>()?;
-
-                        let dependent_root_slot =
-                            (epoch - 1).start_slot(T::EthSpec::slots_per_epoch()) - 1;
-                        let dependent_root = if dependent_root_slot
-                            > chain
-                                .best_slot()
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                        {
-                            chain
-                                .head_beacon_block_root()
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                        } else {
-                            chain
-                                .root_at_slot(dependent_root_slot)
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                                .unwrap_or(chain.genesis_block_root)
-                        };
-
-                        (duties, dependent_root)
-                    } else {
-                        // If the head state is equal to or earlier than the request epoch, use it.
-                        let mut state = chain
-                            .with_head(|head| {
-                                if head.beacon_state.current_epoch() <= epoch {
-                                    Ok(Some(
-                                        head.beacon_state
-                                            .clone_with(CloneConfig::committee_caches_only()),
-                                    ))
-                                } else {
-                                    Ok(None)
-                                }
-                            })
-                            .map_err(warp_utils::reject::beacon_chain_error)?
-                            .map(Result::Ok)
-                            .unwrap_or_else(|| {
-                                StateId::slot(epoch.start_slot(T::EthSpec::slots_per_epoch()))
-                                    .state(&chain)
-                            })?;
-
-                        // Only skip forward to the epoch prior to the request, since we have a
-                        // one-epoch look-ahead on shuffling.
-                        while state
-                            .next_epoch()
-                            .map_err(warp_utils::reject::beacon_state_error)?
-                            < epoch
-                        {
-                            // Don't calculate state roots since they aren't required for calculating
-                            // shuffling (achieved by providing Hash256::zero()).
-                            per_slot_processing(&mut state, Some(Hash256::zero()), &chain.spec)
-                                .map_err(warp_utils::reject::slot_processing_error)?;
-                        }
-
-                        let relative_epoch =
-                            RelativeEpoch::from_epoch(state.current_epoch(), epoch).map_err(
-                                |e| {
-                                    warp_utils::reject::custom_server_error(format!(
-                                        "unable to obtain suitable state: {:?}",
-                                        e
-                                    ))
-                                },
-                            )?;
-
-                        state
-                            .build_committee_cache(relative_epoch, &chain.spec)
-                            .map_err(warp_utils::reject::beacon_state_error)?;
-                        let duties = pubkeys
-                            .into_iter()
-                            .filter_map(|(i, pubkey)| {
-                                Some(
-                                    state
-                                        .get_attestation_duties(i as usize, relative_epoch)
-                                        .transpose()?
-                                        .map_err(warp_utils::reject::beacon_state_error)
-                                        .map(|duty| convert(i, pubkey, duty)),
-                                )
-                            })
-                            .collect::<Result<Vec<_>, warp::Rejection>>()?;
-
-                        let dependent_root_slot =
-                            (epoch - 1).start_slot(T::EthSpec::slots_per_epoch()) - 1;
-                        let dependent_root = if dependent_root_slot
-                            > chain
-                                .best_slot()
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                        {
-                            chain
-                                .head_beacon_block_root()
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                        } else {
-                            chain
-                                .root_at_slot(dependent_root_slot)
-                                .map_err(warp_utils::reject::beacon_chain_error)?
-                                .unwrap_or(chain.genesis_block_root)
-                        };
-
-                        (duties, dependent_root)
-                    };
-
-                    Ok(api_types::DutiesResponse {
-                        dependent_root,
-                        data: duties,
-                    })
+                    attester_duties::attester_duties(epoch, &indices.0, &chain)
                })
            },
        );
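The removed code above computes the attester `dependent_root` from the slot just before `epoch - 1` begins, because attester shuffling has one epoch of look-ahead and was therefore decided an epoch earlier than proposer shuffling. A sketch of that arithmetic with plain `u64` values:

```rust
/// Slot of the attester-shuffling dependent root for `epoch`: the slot just
/// before `epoch - 1` begins. Like the removed code, this does not guard
/// against underflow at the genesis epoch.
fn attester_dependent_slot(epoch: u64, slots_per_epoch: u64) -> u64 {
    (epoch - 1) * slots_per_epoch - 1
}

fn main() {
    // For epoch 3 with 32-slot epochs: epoch 2 starts at slot 64, so the
    // attester shuffling for epoch 3 was decided by the block at slot 63.
    assert_eq!(attester_dependent_slot(3, 32), 63);
    println!("ok");
}
```

If that slot is ahead of the chain's best slot, the head block root is used instead, falling back to the genesis block root when no block exists at the slot.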
beacon_node/http_api/src/proposer_duties.rs (new file, 280 lines)
@@ -0,0 +1,280 @@
|
||||
//! Contains the handler for the `GET validator/duties/proposer/{epoch}` endpoint.
|
||||
|
||||
use crate::state_id::StateId;
|
||||
use beacon_chain::{BeaconChain, BeaconChainError, BeaconChainTypes};
|
||||
use eth2::types::{self as api_types};
|
||||
use slog::{debug, Logger};
|
||||
use state_processing::state_advance::partial_state_advance;
|
||||
use std::cmp::Ordering;
|
||||
use types::{BeaconState, ChainSpec, CloneConfig, Epoch, EthSpec, Hash256, Slot};
|
||||
|
||||
/// The struct that is returned to the requesting HTTP client.
|
||||
type ApiDuties = api_types::DutiesResponse<Vec<api_types::ProposerData>>;
|
||||
|
||||
/// Handles a request from the HTTP API for proposer duties.
|
||||
pub fn proposer_duties<T: BeaconChainTypes>(
|
||||
request_epoch: Epoch,
|
||||
chain: &BeaconChain<T>,
|
||||
log: &Logger,
|
||||
) -> Result<ApiDuties, warp::reject::Rejection> {
|
||||
let current_epoch = chain
|
||||
.epoch()
|
||||
.map_err(warp_utils::reject::beacon_chain_error)?;
|
||||
|
||||

    match request_epoch.cmp(&current_epoch) {
        // request_epoch > current_epoch
        //
        // Reject queries about the future, as they're very expensive and there's no look-ahead
        // for proposer duties.
        Ordering::Greater => Err(warp_utils::reject::custom_bad_request(format!(
            "request epoch {} is ahead of the current epoch {}",
            request_epoch, current_epoch
        ))),
        // request_epoch == current_epoch
        //
        // Queries about the current epoch should attempt to find the value in the cache. If it
        // can't be found, it should be computed and then stored in the cache for future gains.
        Ordering::Equal => {
            if let Some(duties) = try_proposer_duties_from_cache(request_epoch, chain)? {
                Ok(duties)
            } else {
                debug!(
                    log,
                    "Proposer cache miss";
                    "request_epoch" => request_epoch,
                );
                compute_and_cache_proposer_duties(request_epoch, chain)
            }
        }
        // request_epoch < current_epoch
        //
        // Queries about the past are handled with a slow path.
        Ordering::Less => compute_historic_proposer_duties(request_epoch, chain),
    }
}
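The three-way dispatch above can be sketched in isolation. This is a minimal, self-contained sketch assuming plain `u64` epochs; the `DutiesSource` enum is hypothetical and only names the three paths taken by `proposer_duties`, it is not a type from the Lighthouse codebase.

```rust
use std::cmp::Ordering;

// Hypothetical enum naming the three outcomes of the dispatch.
#[derive(Debug, PartialEq)]
enum DutiesSource {
    RejectedFutureEpoch,
    CacheOrCompute,
    HistoricSlowPath,
}

// Mirrors the match in `proposer_duties`: future epochs are rejected
// (no proposer look-ahead exists), the current epoch tries the cache and
// computes on a miss, and past epochs take the slow path.
fn dispatch(request_epoch: u64, current_epoch: u64) -> DutiesSource {
    match request_epoch.cmp(&current_epoch) {
        Ordering::Greater => DutiesSource::RejectedFutureEpoch,
        Ordering::Equal => DutiesSource::CacheOrCompute,
        Ordering::Less => DutiesSource::HistoricSlowPath,
    }
}

fn main() {
    assert_eq!(dispatch(11, 10), DutiesSource::RejectedFutureEpoch);
    assert_eq!(dispatch(10, 10), DutiesSource::CacheOrCompute);
    assert_eq!(dispatch(9, 10), DutiesSource::HistoricSlowPath);
}
```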

/// Attempt to load the proposer duties from the `chain.beacon_proposer_cache`, returning `Ok(None)`
/// if there is a cache miss.
///
/// ## Notes
///
/// The `current_epoch` value should equal the current epoch on the slot clock, otherwise we risk
/// washing out the proposer cache at the expense of block processing.
fn try_proposer_duties_from_cache<T: BeaconChainTypes>(
    current_epoch: Epoch,
    chain: &BeaconChain<T>,
) -> Result<Option<ApiDuties>, warp::reject::Rejection> {
    let head = chain
        .head_info()
        .map_err(warp_utils::reject::beacon_chain_error)?;
    let head_epoch = head.slot.epoch(T::EthSpec::slots_per_epoch());

    let dependent_root = match head_epoch.cmp(&current_epoch) {
        // head_epoch == current_epoch
        Ordering::Equal => head.proposer_shuffling_decision_root,
        // head_epoch < current_epoch
        Ordering::Less => head.block_root,
        // head_epoch > current_epoch
        Ordering::Greater => {
            return Err(warp_utils::reject::custom_server_error(format!(
                "head epoch {} is later than current epoch {}",
                head_epoch, current_epoch
            )))
        }
    };

    chain
        .beacon_proposer_cache
        .lock()
        .get_epoch::<T::EthSpec>(dependent_root, current_epoch)
        .cloned()
        .map(|indices| {
            convert_to_api_response(chain, current_epoch, dependent_root, indices.to_vec())
        })
        .transpose()
}

/// Compute the proposer duties using the head state, add the duties to the proposer cache and
/// return the proposers.
///
/// This method does *not* attempt to read the values from the cache before computing them. See
/// `try_proposer_duties_from_cache` to read values.
///
/// ## Notes
///
/// The `current_epoch` value should equal the current epoch on the slot clock, otherwise we risk
/// washing out the proposer cache at the expense of block processing.
fn compute_and_cache_proposer_duties<T: BeaconChainTypes>(
    current_epoch: Epoch,
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    // Take a copy of the head of the chain.
    let head = chain
        .head()
        .map_err(warp_utils::reject::beacon_chain_error)?;
    let mut state = head.beacon_state;
    let head_state_root = head.beacon_block.state_root();

    // Advance the state into the requested epoch.
    ensure_state_is_in_epoch(&mut state, head_state_root, current_epoch, &chain.spec)?;

    let indices = state
        .get_beacon_proposer_indices(&chain.spec)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let dependent_root = state
        // The only block which decides its own shuffling is the genesis block.
        .proposer_shuffling_decision_root(chain.genesis_block_root)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    // Prime the proposer shuffling cache with the newly-learned value.
    chain
        .beacon_proposer_cache
        .lock()
        .insert(
            state.current_epoch(),
            dependent_root,
            indices.clone(),
            state.fork,
        )
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    convert_to_api_response(chain, current_epoch, dependent_root, indices)
}

/// Compute some proposer duties by reading a `BeaconState` from disk, completely ignoring the
/// `beacon_proposer_cache`.
fn compute_historic_proposer_duties<T: BeaconChainTypes>(
    epoch: Epoch,
    chain: &BeaconChain<T>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    // If the head is quite old then it might still be relevant for a historical request.
    //
    // Use the `with_head` function to read & clone in a single call to avoid race conditions.
    let state_opt = chain
        .with_head(|head| {
            if head.beacon_state.current_epoch() <= epoch {
                Ok(Some((
                    head.beacon_state_root(),
                    head.beacon_state
                        .clone_with(CloneConfig::committee_caches_only()),
                )))
            } else {
                Ok(None)
            }
        })
        .map_err(warp_utils::reject::beacon_chain_error)?;

    let state = if let Some((state_root, mut state)) = state_opt {
        // If we've loaded the head state it might be from a previous epoch, ensure it's in a
        // suitable epoch.
        ensure_state_is_in_epoch(&mut state, state_root, epoch, &chain.spec)?;
        state
    } else {
        StateId::slot(epoch.start_slot(T::EthSpec::slots_per_epoch())).state(&chain)?
    };

    // Ensure the state lookup was correct.
    if state.current_epoch() != epoch {
        return Err(warp_utils::reject::custom_server_error(format!(
            "state epoch {} not equal to request epoch {}",
            state.current_epoch(),
            epoch
        )));
    }

    let indices = state
        .get_beacon_proposer_indices(&chain.spec)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    // We can supply the genesis block root as the block root since we know that the only block
    // that decides its own root is the genesis block.
    let dependent_root = state
        .proposer_shuffling_decision_root(chain.genesis_block_root)
        .map_err(BeaconChainError::from)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    convert_to_api_response(chain, epoch, dependent_root, indices)
}

/// If required, advance `state` to `target_epoch`.
///
/// ## Details
///
/// - Returns an error if `state.current_epoch() > target_epoch`.
/// - No-op if `state.current_epoch() == target_epoch`.
/// - It must be the case that `state.canonical_root() == state_root`, but this function will not
///   check that.
fn ensure_state_is_in_epoch<E: EthSpec>(
    state: &mut BeaconState<E>,
    state_root: Hash256,
    target_epoch: Epoch,
    spec: &ChainSpec,
) -> Result<(), warp::reject::Rejection> {
    match state.current_epoch().cmp(&target_epoch) {
        // Protects against an inconsistent slot clock.
        Ordering::Greater => Err(warp_utils::reject::custom_server_error(format!(
            "state epoch {} is later than target epoch {}",
            state.current_epoch(),
            target_epoch
        ))),
        // The state needs to be advanced.
        Ordering::Less => {
            let target_slot = target_epoch.start_slot(E::slots_per_epoch());
            partial_state_advance(state, Some(state_root), target_slot, spec)
                .map_err(BeaconChainError::from)
                .map_err(warp_utils::reject::beacon_chain_error)
        }
        // The state is suitable, nothing to do.
        Ordering::Equal => Ok(()),
    }
}
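The guard logic in `ensure_state_is_in_epoch` reduces to simple slot arithmetic: a state already past the target epoch is an error (never roll back), a matching epoch is a no-op, and an older state is advanced to the first slot of the target epoch. A minimal sketch of just that arithmetic, assuming the mainnet value of 32 slots per epoch and plain `u64` epochs/slots (the real code uses the `Epoch`/`Slot` wrapper types):

```rust
use std::cmp::Ordering;

// Assumed mainnet constant; the real code reads this from `EthSpec`.
const SLOTS_PER_EPOCH: u64 = 32;

// First slot of an epoch, as in `Epoch::start_slot`.
fn start_slot(epoch: u64) -> u64 {
    epoch * SLOTS_PER_EPOCH
}

// Returns `Ok(Some(slot))` when the state must be advanced to `slot`,
// `Ok(None)` when it is already in the target epoch, and an error when
// the state is ahead of the target (an inconsistent slot clock).
fn advance_target(state_epoch: u64, target_epoch: u64) -> Result<Option<u64>, String> {
    match state_epoch.cmp(&target_epoch) {
        Ordering::Greater => Err(format!(
            "state epoch {} is later than target epoch {}",
            state_epoch, target_epoch
        )),
        Ordering::Less => Ok(Some(start_slot(target_epoch))),
        Ordering::Equal => Ok(None),
    }
}

fn main() {
    assert_eq!(advance_target(4, 6), Ok(Some(192))); // 6 * 32
    assert_eq!(advance_target(6, 6), Ok(None));
    assert!(advance_target(7, 6).is_err());
}
```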

/// Converts the internal representation of proposer duties into one that is compatible with the
/// standard API.
fn convert_to_api_response<T: BeaconChainTypes>(
    chain: &BeaconChain<T>,
    epoch: Epoch,
    dependent_root: Hash256,
    indices: Vec<usize>,
) -> Result<ApiDuties, warp::reject::Rejection> {
    let index_to_pubkey_map = chain
        .validator_pubkey_bytes_many(&indices)
        .map_err(warp_utils::reject::beacon_chain_error)?;

    // Map our internal data structure into the API structure.
    let proposer_data = indices
        .iter()
        .enumerate()
        .filter_map(|(i, &validator_index)| {
            // Offset the index in `indices` to determine the slot for which these
            // duties apply.
            let slot = epoch.start_slot(T::EthSpec::slots_per_epoch()) + Slot::from(i);

            Some(api_types::ProposerData {
                pubkey: *index_to_pubkey_map.get(&validator_index)?,
                validator_index: validator_index as u64,
                slot,
            })
        })
        .collect::<Vec<_>>();

    // Consistency check.
    let slots_per_epoch = T::EthSpec::slots_per_epoch() as usize;
    if proposer_data.len() != slots_per_epoch {
        Err(warp_utils::reject::custom_server_error(format!(
            "{} proposers is not enough for {} slots",
            proposer_data.len(),
            slots_per_epoch,
        )))
    } else {
        Ok(api_types::DutiesResponse {
            dependent_root,
            data: proposer_data,
        })
    }
}
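The index-to-slot mapping in `convert_to_api_response` relies on `get_beacon_proposer_indices` returning one proposer per slot, in slot order: the i-th entry serves the i-th slot of the epoch. A self-contained sketch of that mapping, assuming 32 slots per epoch and a trimmed-down stand-in for `ProposerData` (no pubkey lookup):

```rust
// Assumed mainnet constant; the real code reads this from `EthSpec`.
const SLOTS_PER_EPOCH: u64 = 32;

// Hypothetical trimmed-down `ProposerData` without the pubkey field.
#[derive(Debug, PartialEq)]
struct ProposerData {
    validator_index: u64,
    slot: u64,
}

// Mirrors the enumerate-and-offset mapping: the position of a validator
// index within `indices` determines which slot of the epoch it proposes in.
fn to_proposer_data(epoch: u64, indices: &[usize]) -> Vec<ProposerData> {
    let start_slot = epoch * SLOTS_PER_EPOCH;
    indices
        .iter()
        .enumerate()
        .map(|(i, &validator_index)| ProposerData {
            validator_index: validator_index as u64,
            slot: start_slot + i as u64,
        })
        .collect()
}

fn main() {
    // Epoch 2 starts at slot 64; entries land on consecutive slots.
    let data = to_proposer_data(2, &[7, 3, 9]);
    assert_eq!(data[0].slot, 64);
    assert_eq!(data[2], ProposerData { validator_index: 9, slot: 66 });
}
```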