mirror of
https://github.com/sigp/lighthouse.git
synced 2026-03-14 02:12:33 +00:00
Stable futures (#879)
* Port eth1 lib to use stable futures
* Port eth1_test_rig to stable futures
* Port eth1 tests to stable futures
* Port genesis service to stable futures
* Port genesis tests to stable futures
* Port beacon_chain to stable futures
* Port lcli to stable futures
* Fix eth1_test_rig (#1014)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOS
* Update hashmap hashset to stable futures
* Adds panic test to hashset delay
* Port remote_beacon_node to stable futures
* Fix lcli merge conflicts
* Non rpc stuff compiles
* protocol.rs compiles
* Port websockets, timer and notifier to stable futures (#1035)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOS
* Port remote_beacon_node to stable futures
* Partial eth2-libp2p stable future upgrade
* Finished first round of fighting RPC types
* Further progress towards porting eth2-libp2p, adds caching to discovery
* Update behaviour
* RPC handler to stable futures
* Update RPC to master libp2p
* Network service additions
* Fix the fallback transport construction (#1102)
* Correct warning
* Remove hashmap delay
* Compiling version of eth2-libp2p
* Update all crates versions
* Fix conversion function and add tests (#1113)
* Port validator_client to stable futures (#1114)
* Add PH & MS slot clock changes
* Account for genesis time
* Add progress on duties refactor
* Add simple is_aggregator bool to val subscription
* Start work on attestation_verification.rs
* Add progress on ObservedAttestations
* Progress with ObservedAttestations
* Fix tests
* Add observed attestations to the beacon chain
* Add attestation observation to processing code
* Add progress on attestation verification
* Add first draft of ObservedAttesters
* Add more tests
* Add observed attesters to beacon chain
* Add observers to attestation processing
* Add more attestation verification
* Create ObservedAggregators map
* Remove commented-out code
* Add observed aggregators into chain
* Add progress
* Finish adding features to attestation verification
* Ensure beacon chain compiles
* Link attn verification into chain
* Integrate new attn verification in chain
* Remove old attestation processing code
* Start trying to fix beacon_chain tests
* Split adding into pools into two functions
* Add aggregation to harness
* Get test harness working again
* Adjust the number of aggregators for test harness
* Fix edge-case in harness
* Integrate new attn processing in network
* Fix compile bug in validator_client
* Update validator API endpoints
* Fix aggregation in test harness
* Fix enum thing
* Fix attestation observation bug
* Patch failing API tests
* Start adding comments to attestation verification
* Remove unused attestation field
* Unify "is block known" logic
* Update comments
* Suppress fork choice errors for network processing
* Add todos
* Tidy
* Add gossip attn tests
* Disallow test harness to produce old attns
* Comment out in-progress tests
* Partially address pruning tests
* Fix failing store test
* Add aggregate tests
* Add comments about which spec conditions we check
* Don't re-aggregate
* Split apart test harness attn production
* Fix compile error in network
* Make progress on commented-out test
* Fix skipping attestation test
* Add fork choice verification tests
* Tidy attn tests, remove dead code
* Remove some accidentally added code
* Fix clippy lint
* Rename test file
* Add block tests, add cheap block proposer check
* Rename block testing file
* Add observed_block_producers
* Tidy
* Switch around block signature verification
* Finish block testing
* Remove gossip from signature tests
* First pass of self review
* Fix deviation in spec
* Update test spec tags
* Start moving over to hashset
* Finish moving observed attesters to hashmap
* Move aggregation pool over to hashmap
* Make fc attn borrow again
* Fix rest_api compile error
* Fix missing comments
* Fix monster test
* Uncomment increasing slots test
* Address remaining comments
* Remove unsafe, use cfg test
* Remove cfg test flag
* Fix dodgy comment
* Revert "Update hashmap hashset to stable futures". This reverts commit d432378a3c.
* Revert "Adds panic test to hashset delay". This reverts commit 281502396f.
* Ported attestation_service
* Ported duties_service
* Ported fork_service
* More ports
* Port block_service
* Minor fixes
* VC compiles
* Update TODOS
* Borrow self where possible
* Ignore aggregates that are already known.
* Unify aggregator modulo logic
* Fix typo in logs
* Refactor validator subscription logic
* Avoid reproducing selection proof
* Skip HTTP call if no subscriptions
* Rename DutyAndState -> DutyAndProof
* Tidy logs
* Print root as dbg
* Fix compile errors in tests
* Fix compile error in test
* Re-Fix attestation and duties service
* Minor fixes

  Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Network crate update to stable futures
* Port account_manager to stable futures (#1121)
* Port account_manager to stable futures
* Run async fns in tokio environment
* Port rest_api crate to stable futures (#1118)
* Port rest_api lib to stable futures
* Reduce tokio features
* Update notifier to stable futures
* Builder update
* Further updates
* Convert self referential async functions
* stable futures fixes (#1124)
* Fix eth1 update functions
* Fix genesis and client
* Fix beacon node lib
* Return appropriate runtimes from environment
* Fix test rig
* Refactor eth1 service update
* Upgrade simulator to stable futures
* Lighthouse compiles on stable futures
* Remove println debugging statement
* Update libp2p service, start rpc test upgrade
* Update network crate for new libp2p
* Update tokio::codec to futures_codec (#1128)
* Further work towards RPC corrections
* Correct http timeout and network service select
* Use tokio runtime for libp2p
* Revert "Update tokio::codec to futures_codec (#1128)". This reverts commit e57aea924a.
* Upgrade RPC libp2p tests
* Upgrade secio fallback test
* Upgrade gossipsub examples
* Clean up RPC protocol
* Test fixes (#1133)
* Correct websocket timeout and run on os thread
* Fix network test
* Clean up PR
* Correct tokio tcp move attestation service tests
* Upgrade attestation service tests
* Correct network test
* Correct genesis test
* Test corrections
* Log info when block is received
* Modify logs and update attester service events
* Stable futures: fixes to vc, eth1 and account manager (#1142)
* Add local testnet scripts
* Remove whiteblock script
* Rename local testnet script
* Move spawns onto handle
* Fix VC panic
* Initial fix to block production issue
* Tidy block producer fix
* Tidy further
* Add local testnet clean script
* Run cargo fmt
* Tidy duties service
* Tidy fork service
* Tidy ForkService
* Tidy AttestationService
* Tidy notifier
* Ensure await is not suppressed in eth1
* Ensure await is not suppressed in account_manager
* Use .ok() instead of .unwrap_or(())
* RPC decoding test for proto
* Update discv5 and eth2-libp2p deps
* Fix lcli double runtime issue (#1144)
* Handle stream termination and dialing peer errors
* Correct peer_info variant types
* Remove unnecessary warnings
* Handle subnet unsubscription removal and improve logging
* Add logs around ping
* Upgrade discv5 and improve logging
* Handle peer connection status for multiple connections
* Improve network service logging
* Improve logging around peer manager
* Upgrade swarm poll, centralise peer management
* Identify clients on error
* Fix `remove_peer` in sync (#1150)
* remove_peer removes from all chains
* Remove logs
* Fix early return from loop
* Improved logging, fix panic
* Partially correct tests
* Stable futures: Vc sync (#1149)
* Improve syncing heuristic
* Add comments
* Use safer method for tolerance
* Fix tests
* Stable futures: Fix VC bug, update agg pool, add more metrics (#1151)
* Expose epoch processing summary
* Expose participation metrics to prometheus
* Switch to f64
* Reduce precision
* Change precision
* Expose observed attesters metrics
* Add metrics for agg/unagg attn counts
* Add metrics for gossip rx
* Add metrics for gossip tx
* Adds ignored attns to prom
* Add attestation timing
* Add timer for aggregation pool sig agg
* Add write lock timer for agg pool
* Add more metrics to agg pool
* Change map lock code
* Add extra metric to agg pool
* Change lock handling in agg pool
* Change .write() to .read()
* Add another agg pool timer
* Fix for is_aggregator
* Fix pruning bug

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
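The recurring theme above is replacing futures 0.1 combinator chains (`impl Future<Item = _, Error = _>`, `.and_then(...)`) with stable `std::future` and `async`/`await`. A minimal, self-contained sketch of that porting pattern, assuming hypothetical `fetch_slot`/`double_slot` functions and a hand-rolled `block_on` (the real node drives its futures on a tokio runtime instead):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Old style (futures 0.1): `impl Future<Item = u64, Error = String>` built
// from `.and_then(...)` chains.
// New style (stable futures): `async fn` returning `Result`, using `.await?`.
async fn fetch_slot() -> Result<u64, String> {
    Ok(42) // stands in for an actual asynchronous lookup
}

async fn double_slot() -> Result<u64, String> {
    let slot = fetch_slot().await?; // replaces `.and_then(|slot| ...)`
    Ok(slot * 2)
}

// Minimal single-future executor so the sketch runs without tokio.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is shadowed and never moved again after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}
```

With this, `block_on(double_slot())` evaluates the whole chain synchronously, mirroring how the ported services now `await` results instead of threading them through combinators.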
@@ -847,7 +847,7 @@ where
             // The state roots are not useful for the shuffling, so there's no need to
             // compute them.
            per_slot_processing(&mut state, Some(Hash256::zero()), &chain.spec)
-                .map_err(|e| BeaconChainError::from(e))?
+                .map_err(|e| BeaconChainError::from(e))?;
        }

        metrics::stop_timer(state_skip_timer);

@@ -553,7 +553,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
            // Note: supplying some `state_root` when it is known would be a cheap and easy
            // optimization.
            match per_slot_processing(&mut state, skip_state_root, &self.spec) {
-                Ok(()) => (),
+                Ok(_) => (),
                Err(e) => {
                    warn!(
                        self.log,

@@ -863,7 +863,14 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        &self,
        attestation: Attestation<T::EthSpec>,
    ) -> Result<VerifiedUnaggregatedAttestation<T>, AttestationError> {
-        VerifiedUnaggregatedAttestation::verify(attestation, self)
+        metrics::inc_counter(&metrics::UNAGGREGATED_ATTESTATION_PROCESSING_REQUESTS);
+        let _timer =
+            metrics::start_timer(&metrics::UNAGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES);
+
+        VerifiedUnaggregatedAttestation::verify(attestation, self).map(|v| {
+            metrics::inc_counter(&metrics::UNAGGREGATED_ATTESTATION_PROCESSING_SUCCESSES);
+            v
+        })
    }

    /// Accepts some `SignedAggregateAndProof` from the network and attempts to verify it,

@@ -872,7 +879,14 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        &self,
        signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
    ) -> Result<VerifiedAggregatedAttestation<T>, AttestationError> {
-        VerifiedAggregatedAttestation::verify(signed_aggregate, self)
+        metrics::inc_counter(&metrics::AGGREGATED_ATTESTATION_PROCESSING_REQUESTS);
+        let _timer =
+            metrics::start_timer(&metrics::AGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES);
+
+        VerifiedAggregatedAttestation::verify(signed_aggregate, self).map(|v| {
+            metrics::inc_counter(&metrics::AGGREGATED_ATTESTATION_PROCESSING_SUCCESSES);
+            v
+        })
    }

    /// Accepts some attestation-type object and attempts to verify it in the context of fork

@@ -887,6 +901,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        &self,
        unverified_attestation: &'a impl IntoForkChoiceVerifiedAttestation<'a, T>,
    ) -> Result<ForkChoiceVerifiedAttestation<'a, T>, AttestationError> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_APPLY_TO_FORK_CHOICE);
+
        let verified = unverified_attestation.into_fork_choice_verified_attestation(self)?;
        let indexed_attestation = verified.indexed_attestation();
        self.fork_choice

@@ -907,6 +923,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        &self,
        unaggregated_attestation: VerifiedUnaggregatedAttestation<T>,
    ) -> Result<VerifiedUnaggregatedAttestation<T>, AttestationError> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_APPLY_TO_AGG_POOL);
+
        let attestation = unaggregated_attestation.attestation();

        match self.naive_aggregation_pool.insert(attestation) {

@@ -950,6 +968,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        &self,
        signed_aggregate: VerifiedAggregatedAttestation<T>,
    ) -> Result<VerifiedAggregatedAttestation<T>, AttestationError> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_APPLY_TO_OP_POOL);
+
        // If there's no eth1 chain then it's impossible to produce blocks and therefore
        // useless to put things in the op pool.
        if self.eth1_chain.is_some() {
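The hunks above all follow one pattern: count the request, time the verification, and count the success only on the `Ok` path by threading the value through `.map`. A standalone sketch of that pattern, using an `AtomicUsize` and a hypothetical `verify` in place of the prometheus counters and `VerifiedUnaggregatedAttestation::verify`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the prometheus success counter used in the diff.
static SUCCESSES: AtomicUsize = AtomicUsize::new(0);

// Stand-in for `VerifiedUnaggregatedAttestation::verify`: even inputs pass.
fn verify(x: u64) -> Result<u64, String> {
    if x % 2 == 0 {
        Ok(x)
    } else {
        Err("invalid".to_string())
    }
}

// The diff's pattern: bump the success counter only on `Ok`, passing the
// verified value through unchanged so the caller still receives it.
fn verify_with_metrics(x: u64) -> Result<u64, String> {
    verify(x).map(|v| {
        SUCCESSES.fetch_add(1, Ordering::Relaxed);
        v
    })
}
```

A failed verification leaves the success counter untouched, so the requests/successes pair exposes the gossip rejection rate directly.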
@@ -54,10 +54,12 @@ use slot_clock::SlotClock;
 use ssz::Encode;
 use state_processing::{
     block_signature_verifier::{BlockSignatureVerifier, Error as BlockSignatureVerifierError},
-    per_block_processing, per_slot_processing, BlockProcessingError, BlockSignatureStrategy,
-    SlotProcessingError,
+    per_block_processing,
+    per_epoch_processing::EpochProcessingSummary,
+    per_slot_processing, BlockProcessingError, BlockSignatureStrategy, SlotProcessingError,
 };
 use std::borrow::Cow;
+use std::convert::TryFrom;
 use std::fs;
 use std::io::Write;
 use store::{Error as DBError, StateBatch};

@@ -238,7 +240,7 @@ pub fn signature_verify_chain_segment<T: BeaconChainTypes>(
 /// the p2p network.
 pub struct GossipVerifiedBlock<T: BeaconChainTypes> {
     pub block: SignedBeaconBlock<T::EthSpec>,
-    block_root: Hash256,
+    pub block_root: Hash256,
     parent: BeaconSnapshot<T::EthSpec>,
 }

@@ -556,6 +558,8 @@ impl<T: BeaconChainTypes> FullyVerifiedBlock<T> {
             });
         }

+        let mut summaries = vec![];
+
         // Transition the parent state to the block slot.
         let mut state = parent.beacon_state;
         let distance = block.slot().as_u64().saturating_sub(state.slot.as_u64());

@@ -571,9 +575,12 @@ impl<T: BeaconChainTypes> FullyVerifiedBlock<T> {
                 state_root
             };

-            per_slot_processing(&mut state, Some(state_root), &chain.spec)?;
+            per_slot_processing(&mut state, Some(state_root), &chain.spec)?
+                .map(|summary| summaries.push(summary));
         }

+        expose_participation_metrics(&summaries);
+
         metrics::stop_timer(catchup_timer);

         /*

@@ -891,6 +898,45 @@ fn get_signature_verifier<'a, E: EthSpec>(
     )
 }

+fn expose_participation_metrics(summaries: &[EpochProcessingSummary]) {
+    if !cfg!(feature = "participation_metrics") {
+        return;
+    }
+
+    for summary in summaries {
+        let b = &summary.total_balances;
+
+        metrics::maybe_set_float_gauge(
+            &metrics::PARTICIPATION_PREV_EPOCH_ATTESTER,
+            participation_ratio(b.previous_epoch_attesters(), b.previous_epoch()),
+        );
+
+        metrics::maybe_set_float_gauge(
+            &metrics::PARTICIPATION_PREV_EPOCH_TARGET_ATTESTER,
+            participation_ratio(b.previous_epoch_target_attesters(), b.previous_epoch()),
+        );
+
+        metrics::maybe_set_float_gauge(
+            &metrics::PARTICIPATION_PREV_EPOCH_HEAD_ATTESTER,
+            participation_ratio(b.previous_epoch_head_attesters(), b.previous_epoch()),
+        );
+    }
+}
+
+fn participation_ratio(section: u64, total: u64) -> Option<f64> {
+    // Reduce the precision to help ensure we fit inside a u32.
+    const PRECISION: u64 = 100_000_000;
+
+    let section: f64 = u32::try_from(section / PRECISION).ok()?.into();
+    let total: f64 = u32::try_from(total / PRECISION).ok()?.into();
+
+    if total > 0_f64 {
+        Some(section / total)
+    } else {
+        None
+    }
+}
+
 fn write_state<T: EthSpec>(prefix: &str, state: &BeaconState<T>, log: &Logger) {
     if WRITE_BLOCK_PROCESSING_SSZ {
         let root = state.tree_hash_root();
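The `participation_ratio` added above avoids `u64 → f64` overflow concerns by first dividing both balances by a fixed precision so they fit in a `u32`, then doing the division in `f64`. Restating it standalone so the arithmetic can be checked:

```rust
use std::convert::TryFrom;

// The diff's `participation_ratio`, restated so it runs on its own.
// Balances are divided by a fixed precision first so they fit in a `u32`
// before the lossless `u32 -> f64` conversion.
fn participation_ratio(section: u64, total: u64) -> Option<f64> {
    // Reduce the precision to help ensure we fit inside a u32.
    const PRECISION: u64 = 100_000_000;

    let section: f64 = u32::try_from(section / PRECISION).ok()?.into();
    let total: f64 = u32::try_from(total / PRECISION).ok()?.into();

    if total > 0_f64 {
        Some(section / total)
    } else {
        None
    }
}
```

For example, an attesting balance of 200_000_000 out of a total of 400_000_000 yields `Some(0.5)`, while a zero (or sub-precision) total yields `None` rather than a division by zero.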
@@ -1,7 +1,6 @@
 use crate::metrics;
 use eth1::{Config as Eth1Config, Eth1Block, Service as HttpService};
 use eth2_hashing::hash;
-use futures::Future;
 use slog::{debug, error, trace, Logger};
 use ssz::{Decode, Encode};
 use ssz_derive::{Decode, Encode};

@@ -286,11 +285,10 @@ impl<T: EthSpec, S: Store<T>> CachingEth1Backend<T, S> {
     }

     /// Starts the routine which connects to the external eth1 node and updates the caches.
-    pub fn start(
-        &self,
-        exit: tokio::sync::oneshot::Receiver<()>,
-    ) -> impl Future<Item = (), Error = ()> {
-        self.core.auto_update(exit)
+    pub fn start(&self, exit: tokio::sync::oneshot::Receiver<()>) {
+        // don't need to spawn as a task is being spawned in auto_update
+        // TODO: check if this is correct
+        HttpService::auto_update(self.core.clone(), exit);
     }

     /// Instantiates `self` from an existing service.
@@ -1,6 +1,7 @@
 use crate::{BeaconChain, BeaconChainTypes};
 pub use lighthouse_metrics::*;
-use types::{BeaconState, Epoch, Hash256, Slot};
+use slot_clock::SlotClock;
+use types::{BeaconState, Epoch, EthSpec, Hash256, Slot};

 lazy_static! {
     /*

@@ -79,25 +80,81 @@ lazy_static! {
         "Number of attestations in a block"
     );

+    /*
+     * Unaggregated Attestation Verification
+     */
+    pub static ref UNAGGREGATED_ATTESTATION_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
+        "beacon_unaggregated_attestation_processing_requests_total",
+        "Count of all unaggregated attestations submitted for processing"
+    );
+    pub static ref UNAGGREGATED_ATTESTATION_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
+        "beacon_unaggregated_attestation_processing_successes_total",
+        "Number of unaggregated attestations verified for gossip"
+    );
+    pub static ref UNAGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES: Result<Histogram> = try_create_histogram(
+        "beacon_unaggregated_attestation_gossip_verification_seconds",
+        "Full runtime of unaggregated attestation gossip verification"
+    );
+
+    /*
+     * Aggregated Attestation Verification
+     */
+    pub static ref AGGREGATED_ATTESTATION_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
+        "beacon_aggregated_attestation_processing_requests_total",
+        "Count of all aggregated attestations submitted for processing"
+    );
+    pub static ref AGGREGATED_ATTESTATION_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
+        "beacon_aggregated_attestation_processing_successes_total",
+        "Number of aggregated attestations verified for gossip"
+    );
+    pub static ref AGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES: Result<Histogram> = try_create_histogram(
+        "beacon_aggregated_attestation_gossip_verification_seconds",
+        "Full runtime of aggregated attestation gossip verification"
+    );
+
+    /*
+     * General Attestation Processing
+     */
+    pub static ref ATTESTATION_PROCESSING_APPLY_TO_FORK_CHOICE: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_apply_to_fork_choice",
+        "Time spent applying an attestation to fork choice"
+    );
+    pub static ref ATTESTATION_PROCESSING_APPLY_TO_AGG_POOL: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_apply_to_agg_pool",
+        "Time spent applying an attestation to the naive aggregation pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_MAPS_WRITE_LOCK: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_maps_write_lock",
+        "Time spent waiting for the maps write lock when adding to the agg pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_PRUNE: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_prune",
+        "Time spent for the agg pool to prune"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_INSERT: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_insert",
+        "Time spent for the outer pool.insert() function of agg pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_CORE_INSERT: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_core_insert",
+        "Time spent for the core map.insert() function of agg pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_AGGREGATION: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_aggregation",
+        "Time spent doing signature aggregation when adding to the agg pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_CREATE_MAP: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_create_map",
+        "Time spent for creating a map for a new slot"
+    );
+    pub static ref ATTESTATION_PROCESSING_APPLY_TO_OP_POOL: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_apply_to_op_pool",
+        "Time spent applying an attestation to the block inclusion pool"
+    );
+
     /*
      * Attestation Processing
      */
     pub static ref ATTESTATION_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
         "beacon_attestation_processing_requests_total",
         "Count of all attestations submitted for processing"
     );
     pub static ref ATTESTATION_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
         "beacon_attestation_processing_successes_total",
         "total_attestation_processing_successes"
     );
     pub static ref ATTESTATION_PROCESSING_TIMES: Result<Histogram> = try_create_histogram(
         "beacon_attestation_processing_seconds",
         "Full runtime of attestation processing"
     );
     pub static ref ATTESTATION_PROCESSING_INITIAL_VALIDATION_TIMES: Result<Histogram> = try_create_histogram(
         "beacon_attestation_processing_initial_validation_seconds",
         "Time spent on the initial_validation of attestation processing"
     );
     pub static ref ATTESTATION_PROCESSING_SHUFFLING_CACHE_WAIT_TIMES: Result<Histogram> = try_create_histogram(
         "beacon_attestation_processing_shuffling_cache_wait_seconds",
         "Time spent on waiting for the shuffling cache lock during attestation processing"

@@ -251,6 +308,34 @@ lazy_static! {
         try_create_int_gauge("beacon_op_pool_proposer_slashings_total", "Count of proposer slashings in the op pool");
     pub static ref OP_POOL_NUM_VOLUNTARY_EXITS: Result<IntGauge> =
         try_create_int_gauge("beacon_op_pool_voluntary_exits_total", "Count of voluntary exits in the op pool");
+
+    /*
+     * Participation Metrics
+     */
+    pub static ref PARTICIPATION_PREV_EPOCH_ATTESTER: Result<Gauge> = try_create_float_gauge(
+        "beacon_participation_prev_epoch_attester",
+        "Ratio of attesting balances to total balances"
+    );
+    pub static ref PARTICIPATION_PREV_EPOCH_TARGET_ATTESTER: Result<Gauge> = try_create_float_gauge(
+        "beacon_participation_prev_epoch_target_attester",
+        "Ratio of target-attesting balances to total balances"
+    );
+    pub static ref PARTICIPATION_PREV_EPOCH_HEAD_ATTESTER: Result<Gauge> = try_create_float_gauge(
+        "beacon_participation_prev_epoch_head_attester",
+        "Ratio of head-attesting balances to total balances"
+    );
+
+    /*
+     * Attestation Observation Metrics
+     */
+    pub static ref ATTN_OBSERVATION_PREV_EPOCH_ATTESTERS: Result<IntGauge> = try_create_int_gauge(
+        "beacon_attn_observation_epoch_attesters",
+        "Count of attesters that have been seen by the beacon chain in the previous epoch"
+    );
+    pub static ref ATTN_OBSERVATION_PREV_EPOCH_AGGREGATORS: Result<IntGauge> = try_create_int_gauge(
+        "beacon_attn_observation_epoch_aggregators",
+        "Count of aggregators that have been seen by the beacon chain in the previous epoch"
+    );
 }

 /// Scrape the `beacon_chain` for metrics that are not constantly updated (e.g., the present slot,

@@ -260,6 +345,10 @@ pub fn scrape_for_metrics<T: BeaconChainTypes>(beacon_chain: &BeaconChain<T>) {
         scrape_head_state::<T>(&head.beacon_state, head.beacon_state_root)
     }

+    if let Some(slot) = beacon_chain.slot_clock.now() {
+        scrape_attestation_observation(slot, beacon_chain);
+    }
+
     set_gauge_by_usize(
         &OP_POOL_NUM_ATTESTATIONS,
         beacon_chain.op_pool.num_attestations(),

@@ -332,6 +421,24 @@ fn scrape_head_state<T: BeaconChainTypes>(state: &BeaconState<T::EthSpec>, state
     set_gauge_by_u64(&HEAD_STATE_ETH1_DEPOSIT_INDEX, state.eth1_deposit_index);
 }

+fn scrape_attestation_observation<T: BeaconChainTypes>(slot_now: Slot, chain: &BeaconChain<T>) {
+    let prev_epoch = slot_now.epoch(T::EthSpec::slots_per_epoch()) - 1;
+
+    if let Some(count) = chain
+        .observed_attesters
+        .observed_validator_count(prev_epoch)
+    {
+        set_gauge_by_usize(&ATTN_OBSERVATION_PREV_EPOCH_ATTESTERS, count);
+    }
+
+    if let Some(count) = chain
+        .observed_aggregators
+        .observed_validator_count(prev_epoch)
+    {
+        set_gauge_by_usize(&ATTN_OBSERVATION_PREV_EPOCH_AGGREGATORS, count);
+    }
+}
+
 fn set_gauge_by_slot(gauge: &Result<IntGauge>, value: Slot) {
     set_gauge(gauge, value.as_u64() as i64);
 }
@@ -1,3 +1,4 @@
|
||||
use crate::metrics;
|
||||
use parking_lot::RwLock;
|
||||
use std::collections::HashMap;
|
||||
use types::{Attestation, AttestationData, EthSpec, Slot};
|
||||
@@ -68,6 +69,8 @@ impl<E: EthSpec> AggregatedAttestationMap<E> {
|
||||
///
|
||||
/// The given attestation (`a`) must only have one signature.
|
||||
pub fn insert(&mut self, a: &Attestation<E>) -> Result<InsertOutcome, Error> {
|
||||
let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_CORE_INSERT);
|
||||
|
||||
let set_bits = a
|
||||
.aggregation_bits
|
||||
.iter()
|
||||
@@ -93,6 +96,8 @@ impl<E: EthSpec> AggregatedAttestationMap<E> {
|
||||
{
|
||||
Ok(InsertOutcome::SignatureAlreadyKnown { committee_index })
|
||||
} else {
|
||||
let _timer =
|
||||
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_AGGREGATION);
|
||||
existing_attestation.aggregate(a);
|
||||
Ok(InsertOutcome::SignatureAggregated { committee_index })
|
||||
}
|
||||
@@ -164,8 +169,9 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
|
||||
/// The pool may be pruned if the given `attestation.data` has a slot higher than any
|
||||
/// previously seen.
|
||||
pub fn insert(&self, attestation: &Attestation<E>) -> Result<InsertOutcome, Error> {
|
||||
let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_INSERT);
|
||||
let slot = attestation.data.slot;
|
||||
let lowest_permissible_slot = *self.lowest_permissible_slot.read();
|
||||
let lowest_permissible_slot: Slot = *self.lowest_permissible_slot.read();
|
||||
|
||||
// Reject any attestations that are too old.
|
||||
if slot < lowest_permissible_slot {
|
||||
@@ -175,11 +181,15 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
|
||||
});
|
||||
}
|
||||
|
||||
let lock_timer =
|
||||
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_MAPS_WRITE_LOCK);
|
||||
let mut maps = self.maps.write();
|
||||
drop(lock_timer);
|
||||
|
||||
let outcome = if let Some(map) = maps.get_mut(&slot) {
|
||||
map.insert(attestation)
|
||||
} else {
|
||||
let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_CREATE_MAP);
|
||||
// To avoid re-allocations, try and determine a rough initial capacity for the new item
|
||||
// by obtaining the mean size of all items in earlier epoch.
|
||||
let (count, sum) = maps
|
||||
@@ -219,8 +229,19 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
|
||||
/// Removes any attestations with a slot lower than `current_slot` and bars any future
|
||||
/// attestations with a slot lower than `current_slot - SLOTS_RETAINED`.
|
||||
pub fn prune(&self, current_slot: Slot) {
|
||||
let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_PRUNE);
|
||||
|
||||
// Taking advantage of saturating subtraction on `Slot`.
|
||||
let lowest_permissible_slot = current_slot - Slot::from(SLOTS_RETAINED);
|
||||
|
||||
// No need to prune if the lowest permissible slot has not changed and the queue length is
|
||||
// less than the maximum
|
||||
if *self.lowest_permissible_slot.read() == lowest_permissible_slot
|
||||
&& self.maps.read().len() <= SLOTS_RETAINED
|
||||
{
|
||||
return;
|
||||
}
|
||||
|
||||
*self.lowest_permissible_slot.write() = lowest_permissible_slot;
|
||||
let mut maps = self.maps.write();
|
||||
|
||||
|
||||
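The pruning fix above leans on `Slot` subtraction saturating at zero, so computing `current_slot - SLOTS_RETAINED` near genesis cannot underflow. A sketch of that window calculation, assuming a simple `u64` newtype for `Slot` and a hypothetical retention constant (the real `SLOTS_RETAINED` lives in the pool module):

```rust
// `Slot` modelled as a `u64` newtype whose subtraction saturates at zero,
// as the "Taking advantage of saturating subtraction on `Slot`" comment implies.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
struct Slot(u64);

impl std::ops::Sub<Slot> for Slot {
    type Output = Slot;
    fn sub(self, other: Slot) -> Slot {
        // Saturate instead of panicking on underflow in early slots.
        Slot(self.0.saturating_sub(other.0))
    }
}

// Hypothetical retention window for illustration only.
const SLOTS_RETAINED: u64 = 32;

fn lowest_permissible(current_slot: Slot) -> Slot {
    current_slot - Slot(SLOTS_RETAINED)
}
```

At slot 100 the window floor is slot 68; at slot 5 it saturates to slot 0 rather than wrapping around to a huge value that would prune everything.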
@@ -36,9 +36,12 @@ pub trait Item {
     /// The default capacity for self. Used when we can't guess a reasonable size.
     fn default_capacity() -> usize;

-    /// Returns the number of validator indices stored in `self`.
+    /// Returns the allocated size of `self`, measured by validator indices.
     fn len(&self) -> usize;

+    /// Returns the number of validators that have been observed by `self`.
+    fn validator_count(&self) -> usize;
+
     /// Store `validator_index` in `self`.
     fn insert(&mut self, validator_index: usize) -> bool;

@@ -67,6 +70,10 @@ impl Item for EpochBitfield {
         self.bitfield.len()
     }

+    fn validator_count(&self) -> usize {
+        self.bitfield.iter().filter(|bit| **bit).count()
+    }
+
     fn insert(&mut self, validator_index: usize) -> bool {
         self.bitfield
             .get_mut(validator_index)

@@ -116,6 +123,10 @@ impl Item for EpochHashSet {
         self.set.len()
     }

+    fn validator_count(&self) -> usize {
+        self.set.len()
+    }
+
     /// Inserts the `validator_index` in the set. Returns `true` if the `validator_index` was
     /// already in the set.
     fn insert(&mut self, validator_index: usize) -> bool {

@@ -219,6 +230,15 @@ impl<T: Item, E: EthSpec> AutoPruningContainer<T, E> {
         Ok(exists)
     }

+    /// Returns the number of validators that have been observed at the given `epoch`. Returns
+    /// `None` if `self` does not have a cache for that epoch.
+    pub fn observed_validator_count(&self, epoch: Epoch) -> Option<usize> {
+        self.items
+            .read()
+            .get(&epoch)
+            .map(|item| item.validator_count())
+    }
+
     fn sanitize_request(&self, a: &Attestation<E>, validator_index: usize) -> Result<(), Error> {
         if validator_index > E::ValidatorRegistryLimit::to_usize() {
             return Err(Error::ValidatorIndexTooHigh(validator_index));
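The trait split above distinguishes `len` (allocated capacity) from the new `validator_count` (validators actually observed), which is what the new prometheus gauges report. A minimal sketch of the bitfield-backed variant; the names mirror the diff, but the grow-on-insert behaviour is an assumption for illustration:

```rust
// Sketch of the `EpochBitfield` backing: `len` measures allocation while
// `validator_count` counts set bits (observations).
struct EpochBitfield {
    bitfield: Vec<bool>,
}

impl EpochBitfield {
    fn new() -> Self {
        EpochBitfield { bitfield: vec![] }
    }

    /// Stores `validator_index`, returning `true` if it was already present.
    fn insert(&mut self, validator_index: usize) -> bool {
        if validator_index >= self.bitfield.len() {
            self.bitfield.resize(validator_index + 1, false);
        }
        let was_set = self.bitfield[validator_index];
        self.bitfield[validator_index] = true;
        was_set
    }

    /// Allocated size, measured in validator indices.
    fn len(&self) -> usize {
        self.bitfield.len()
    }

    /// Number of validators actually observed.
    fn validator_count(&self) -> usize {
        self.bitfield.iter().filter(|bit| **bit).count()
    }
}
```

Inserting validator 5 into an empty field allocates six slots but records only one observation, which is why the metrics scrape uses `validator_count` rather than `len`.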