lighthouse/validator_client/src/duties_service.rs
Paul Hauner c571afb8d8 Wallet-based, encrypted key management (#1138)
* Update hashmap hashset to stable futures

* Adds panic test to hashset delay

* Port remote_beacon_node to stable futures

* Fix lcli merge conflicts

* Non rpc stuff compiles

* Remove padding

* Add error enum, zeroize more things

* Fix comment

* protocol.rs compiles

* Port websockets, timer and notifier to stable futures (#1035)

* Fix lcli

* Port timer to stable futures

* Fix timer

* Port websocket_server to stable futures

* Port notifier to stable futures

* Add TODOS

* Port remote_beacon_node to stable futures

* Partial eth2-libp2p stable future upgrade

* Finished first round of fighting RPC types

* Further progress towards porting eth2-libp2p; adds caching to discovery

* Update behaviour

* Add keystore builder

* Remove keystore stuff from val client

* Add more tests, comments

* RPC handler to stable futures

* Update RPC to master libp2p

* Add more comments, test vectors

* Network service additions

* Progress on improving JSON validation

* More JSON verification

* Start moving JSON into own mod

* Remove old code

* Add more tests, reader/writers

* Tidy

* Move keystore into own file

* Move more logic into keystore file

* Tidy

* Tidy

* Fix the fallback transport construction (#1102)

* Allow for odd-character hex

* Correct warning

* Remove hashmap delay

* Compiling version of eth2-libp2p

* Update all crates versions

* Fix conversion function and add tests (#1113)

* Add more json missing field checks

* Use scrypt by default

* Tidy, address comments

* Test path and uuid in vectors

* Fix comment

* Add checks for kdf params

* Enforce empty kdf message

* Port validator_client to stable futures (#1114)

* Add PH & MS slot clock changes

* Account for genesis time

* Add progress on duties refactor

* Add simple is_aggregator bool to val subscription

* Start work on attestation_verification.rs

* Add progress on ObservedAttestations

* Progress with ObservedAttestations

* Fix tests

* Add observed attestations to the beacon chain

* Add attestation observation to processing code

* Add progress on attestation verification

* Add first draft of ObservedAttesters

* Add more tests

* Add observed attesters to beacon chain

* Add observers to attestation processing

* Add more attestation verification

* Create ObservedAggregators map

* Remove commented-out code

* Add observed aggregators into chain

* Add progress

* Finish adding features to attestation verification

* Ensure beacon chain compiles

* Link attn verification into chain

* Integrate new attn verification in chain

* Remove old attestation processing code

* Start trying to fix beacon_chain tests

* Split adding into pools into two functions

* Add aggregation to harness

* Get test harness working again

* Adjust the number of aggregators for test harness

* Fix edge-case in harness

* Integrate new attn processing in network

* Fix compile bug in validator_client

* Update validator API endpoints

* Fix aggregation in test harness

* Fix enum thing

* Fix attestation observation bug

* Patch failing API tests

* Start adding comments to attestation verification

* Remove unused attestation field

* Unify "is block known" logic

* Update comments

* Suppress fork choice errors for network processing

* Add todos

* Tidy

* Add gossip attn tests

* Disallow test harness from producing old attns

* Comment out in-progress tests

* Partially address pruning tests

* Fix failing store test

* Add aggregate tests

* Add comments about which spec conditions we check

* Don't re-aggregate

* Split apart test harness attn production

* Fix compile error in network

* Make progress on commented-out test

* Fix skipping attestation test

* Add fork choice verification tests

* Tidy attn tests, remove dead code

* Remove some accidentally added code

* Fix clippy lint

* Rename test file

* Add block tests, add cheap block proposer check

* Rename block testing file

* Add observed_block_producers

* Tidy

* Switch around block signature verification

* Finish block testing

* Remove gossip from signature tests

* First pass of self review

* Fix deviation in spec

* Update test spec tags

* Start moving over to hashset

* Finish moving observed attesters to hashmap

* Move aggregation pool over to hashmap

* Make fc attn borrow again

* Fix rest_api compile error

* Fix missing comments

* Fix monster test

* Uncomment increasing slots test

* Address remaining comments

* Remove unsafe, use cfg test

* Remove cfg test flag

* Fix dodgy comment

* Revert "Update hashmap hashset to stable futures"

This reverts commit d432378a3c.

* Revert "Adds panic test to hashset delay"

This reverts commit 281502396f.

* Ported attestation_service

* Ported duties_service

* Ported fork_service

* More ports

* Port block_service

* Minor fixes

* VC compiles

* Update TODOS

* Borrow self where possible

* Ignore aggregates that are already known.

* Unify aggregator modulo logic

* Fix typo in logs

* Refactor validator subscription logic

* Avoid reproducing selection proof

* Skip HTTP call if no subscriptions

* Rename DutyAndState -> DutyAndProof

* Tidy logs

* Print root as dbg

* Fix compile errors in tests

* Fix compile error in test

* Re-Fix attestation and duties service

* Minor fixes

Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Expose json_keystore mod

* First commits on path derivation

* Progress with implementation

* More progress

* Passing intermediate test vectors

* Tidy, add comments

* Add DerivedKey structs

* Move key derivation into own crate

* Add zeroize structs

* Return error for empty seed

* Add tests

* Tidy

* Start defining JSON wallet

* Add progress

* Split out encrypt/decrypt

* Replace some password usage with slice

* Expose PlainText struct

* Add builder

* Expose consts, remove Password

* Minor progress

* Expose SALT_SIZE

* First compiling version

* Add test vectors

* Network crate update to stable futures

* Move dbg assert statement

* Port account_manager to stable futures (#1121)

* Port account_manager to stable futures

* Run async fns in tokio environment

* Port rest_api crate to stable futures (#1118)

* Port rest_api lib to stable futures

* Reduce tokio features

* Update notifier to stable futures

* Builder update

* Further updates

* Add mnemonic, tidy

* Convert self referential async functions

* Tidy

* Add testing

* Add first attempt at validator_dir

* Present pubkey field

* stable futures fixes (#1124)

* Fix eth1 update functions

* Fix genesis and client

* Fix beacon node lib

* Return appropriate runtimes from environment

* Fix test rig

* Refactor eth1 service update

* Upgrade simulator to stable futures

* Lighthouse compiles on stable futures

* Add first pass of wallet manager

* Progress with CLI

* Remove println debugging statement

* Tidy output

* Tidy 600 perms

* Update libp2p service, start rpc test upgrade

* Add validator creation flow

* Update network crate for new libp2p

* Start tidying, adding comments

* Update tokio::codec to futures_codec (#1128)

* Further work towards RPC corrections

* Correct http timeout and network service select

* Add wallet mgr testing

* Shift LockedWallet into own file

* Add comments to fs

* Start integration into VC

* Use tokio runtime for libp2p

* Revert "Update tokio::codec to futures_codec (#1128)"

This reverts commit e57aea924a.

* Upgrade RPC libp2p tests

* Upgrade secio fallback test

* Add lcli keypair upgrade command

* Upgrade gossipsub examples

* Clean up RPC protocol

* Test fixes (#1133)

* Correct websocket timeout and run on os thread

* Fix network test

* Add --secrets-dir to VC

* Remove --legacy-keys from VC

* Clean up PR

* Correct tokio tcp, move attestation service tests

* Upgrade attestation service tests

* Fix sim

* Correct network test

* Correct genesis test

* Start docs

* Add progress for validator generation

* Tidy error messages

* Test corrections

* Log info when block is received

* Modify logs and update attester service events

* Stable futures: fixes to vc, eth1 and account manager (#1142)

* Add local testnet scripts

* Remove whiteblock script

* Rename local testnet script

* Move spawns onto handle

* Fix VC panic

* Initial fix to block production issue

* Tidy block producer fix

* Tidy further

* Add local testnet clean script

* Run cargo fmt

* Tidy duties service

* Tidy fork service

* Tidy ForkService

* Tidy AttestationService

* Tidy notifier

* Ensure await is not suppressed in eth1

* Ensure await is not suppressed in account_manager

* Use .ok() instead of .unwrap_or(())

* RPC decoding test for proto

* Update discv5 and eth2-libp2p deps

* Run cargo fmt

* Pre-build keystores for sim

* Fix lcli double runtime issue (#1144)

* Handle stream termination and dialing peer errors

* Correct peer_info variant types

* Add progress on new deposit flow

* Remove unnecessary warnings

* Handle subnet unsubscription removal and improve logging

* Add logs around ping

* Upgrade discv5 and improve logging

* Handle peer connection status for multiple connections

* Improve network service logging

* Add more incomplete progress

* Improve logging around peer manager

* Upgrade swarm poll, centralise peer management

* Identify clients on error

* Fix `remove_peer` in sync (#1150)

* remove_peer removes from all chains

* Remove logs

* Fix early return from loop

* Improved logging, fix panic

* Partially correct tests

* Add deposit command

* Remove old validator directory

* Start adding AM tests

* Stable futures: Vc sync (#1149)

* Improve syncing heuristic

* Add comments

* Use safer method for tolerance

* Fix tests

* Binary testing progress

* Progress with CLI tests

* Use constants for flags

* More account manager testing

* Improve CLI tests

* Move upgrade-legacy-keypairs into account man

* Use rayon for VC key generation

* Add comments to `validator_dir`

* Add testing to validator_dir

* Add fix to eth1-sim

* Check errors in eth1-sim

* Fix mutability issue

* Ensure password file ends in .pass

* Add more tests to wallet manager

* Tidy deposit

* Tidy account manager

* Tidy account manager

* Remove panic

* Generate keypairs earlier in sim

* Tidy eth1-sim

* Try to fix eth1 sim

* Address review comments

* Fix typo in CLI command

* Update docs

* Disable eth1 sim

* Remove eth1 sim completely

Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: pawanjay176 <pawandhananjay@gmail.com>
2020-05-18 19:01:45 +10:00

use crate::{is_synced::is_synced, validator_store::ValidatorStore};
use environment::RuntimeContext;
use exit_future::Signal;
use futures::{FutureExt, StreamExt};
use parking_lot::RwLock;
use remote_beacon_node::{PublishStatus, RemoteBeaconNode};
use rest_types::{ValidatorDuty, ValidatorDutyBytes, ValidatorSubscription};
use slog::{debug, error, info, trace, warn};
use slot_clock::SlotClock;
use std::collections::HashMap;
use std::convert::TryInto;
use std::ops::Deref;
use std::sync::Arc;
use tokio::time::{interval_at, Duration, Instant};
use types::{ChainSpec, CommitteeIndex, Epoch, EthSpec, PublicKey, SelectionProof, Slot};
/// Delay this period of time after the slot starts. This allows the node to process the new slot.
const TIME_DELAY_FROM_SLOT: Duration = Duration::from_millis(100);
/// Remove any duties where the `duties_epoch < current_epoch - PRUNE_DEPTH`.
const PRUNE_DEPTH: u64 = 4;
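/// Maps each validator's public key to a per-epoch record of its duties.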
type BaseHashMap = HashMap<PublicKey, HashMap<Epoch, DutyAndProof>>;
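/// A validator duty paired with the selection proof (if any) that elects the validator as an
/// aggregator for its attestation slot.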
#[derive(Debug, Clone)]
pub struct DutyAndProof {
/// The validator duty.
pub duty: ValidatorDuty,
/// Stores the selection proof if the duty elects the validator to be an aggregator.
pub selection_proof: Option<SelectionProof>,
}
impl DutyAndProof {
/// Computes the selection proof for `self.validator_pubkey` and `self.duty.attestation_slot`,
/// storing it in `self.selection_proof` _if_ the validator is an aggregator. If the validator
/// is not an aggregator, `self.selection_proof` is set to `None`.
///
/// ## Errors
///
/// - `self.validator_pubkey` is not known in `validator_store`.
/// - There's an arith error during computation.
pub fn compute_selection_proof<T: SlotClock + 'static, E: EthSpec>(
&mut self,
validator_store: &ValidatorStore<T, E>,
) -> Result<(), String> {
let (modulo, slot) = if let (Some(modulo), Some(slot)) =
(self.duty.aggregator_modulo, self.duty.attestation_slot)
{
(modulo, slot)
} else {
            // If the duty has no aggregator modulo or no attestation slot, we assume the
            // validator is not activated and therefore not an aggregator.
self.selection_proof = None;
return Ok(());
};
let selection_proof = validator_store
.produce_selection_proof(&self.duty.validator_pubkey, slot)
.ok_or_else(|| "Failed to produce selection proof".to_string())?;
self.selection_proof = selection_proof
.is_aggregator_from_modulo(modulo)
.map_err(|e| format!("Invalid modulo: {:?}", e))
.map(|is_aggregator| {
if is_aggregator {
Some(selection_proof)
} else {
None
}
})?;
Ok(())
}
/// Returns `true` if the two `Self` instances would result in the same beacon subscription.
pub fn subscription_eq(&self, other: &Self) -> bool {
self.selection_proof_eq(other)
&& self.duty.validator_index == other.duty.validator_index
&& self.duty.attestation_committee_index == other.duty.attestation_committee_index
&& self.duty.attestation_slot == other.duty.attestation_slot
}
/// Returns `true` if the selection proof between `self` and `other` _should_ be equal.
///
    /// Note that this does not compare `self.selection_proof`; rather, it checks whether the
    /// inputs used to compute the selection proof are equal.
fn selection_proof_eq(&self, other: &Self) -> bool {
self.duty.aggregator_modulo == other.duty.aggregator_modulo
&& self.duty.attestation_slot == other.duty.attestation_slot
}
    /// Returns the information required for an attesting validator, if they are scheduled to
    /// attest: `(attestation_slot, committee_index, committee_position, validator_index)`.
pub fn attestation_duties(&self) -> Option<(Slot, CommitteeIndex, usize, u64)> {
Some((
self.duty.attestation_slot?,
self.duty.attestation_committee_index?,
self.duty.attestation_committee_position?,
self.duty.validator_index?,
))
}
pub fn validator_pubkey(&self) -> &PublicKey {
&self.duty.validator_pubkey
}
}
impl TryInto<DutyAndProof> for ValidatorDutyBytes {
type Error = String;
fn try_into(self) -> Result<DutyAndProof, Self::Error> {
let duty = ValidatorDuty {
validator_pubkey: (&self.validator_pubkey)
.try_into()
.map_err(|e| format!("Invalid pubkey bytes from server: {:?}", e))?,
validator_index: self.validator_index,
attestation_slot: self.attestation_slot,
attestation_committee_index: self.attestation_committee_index,
attestation_committee_position: self.attestation_committee_position,
block_proposal_slots: self.block_proposal_slots,
aggregator_modulo: self.aggregator_modulo,
};
Ok(DutyAndProof {
duty,
selection_proof: None,
})
}
}
/// The outcome of inserting some `ValidatorDuty` into the `DutiesStore`.
#[derive(PartialEq, Debug, Clone)]
enum InsertOutcome {
/// These are the first duties received for this validator.
NewValidator,
/// The duties for this given epoch were previously unknown and have been stored.
NewEpoch,
/// The duties were identical to some already in the store.
Identical,
/// There were duties for this validator and epoch in the store that were different to the ones
/// provided. The existing duties were replaced.
Replaced { should_resubscribe: bool },
/// The given duties were invalid.
Invalid,
}
impl InsertOutcome {
/// Returns `true` if the outcome indicates that the validator _might_ require a subscription.
pub fn is_subscription_candidate(self) -> bool {
match self {
InsertOutcome::Replaced { should_resubscribe } => should_resubscribe,
InsertOutcome::NewValidator => true,
InsertOutcome::NewEpoch => true,
InsertOutcome::Identical => false,
InsertOutcome::Invalid => false,
}
}
}
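/// A thread-safe, in-memory store of the duties for all managed validators, keyed by validator
/// public key and then by epoch.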
#[derive(Default)]
pub struct DutiesStore {
store: RwLock<BaseHashMap>,
}
impl DutiesStore {
/// Returns the total number of validators that should propose in the given epoch.
fn proposer_count(&self, epoch: Epoch) -> usize {
self.store
.read()
.iter()
.filter(|(_validator_pubkey, validator_map)| {
validator_map
.get(&epoch)
.map(|duties| !duties.duty.block_proposal_slots.is_empty())
.unwrap_or_else(|| false)
})
.count()
}
/// Returns the total number of validators that should attest in the given epoch.
fn attester_count(&self, epoch: Epoch) -> usize {
self.store
.read()
.iter()
.filter(|(_validator_pubkey, validator_map)| {
validator_map
.get(&epoch)
.map(|duties| duties.duty.attestation_slot.is_some())
.unwrap_or_else(|| false)
})
.count()
}
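    /// Returns the public keys of all validators that should propose a block in the given `slot`.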
fn block_producers(&self, slot: Slot, slots_per_epoch: u64) -> Vec<PublicKey> {
self.store
.read()
.iter()
// As long as a `HashMap` iterator does not return duplicate keys, neither will this
// function.
.filter_map(|(_validator_pubkey, validator_map)| {
let epoch = slot.epoch(slots_per_epoch);
validator_map.get(&epoch).and_then(|duties| {
if duties.duty.block_proposal_slots.contains(&slot) {
Some(duties.duty.validator_pubkey.clone())
} else {
None
}
})
})
.collect()
}
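    /// Returns the duties (with selection proofs) of all validators that should attest in the
    /// given `slot`.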
fn attesters(&self, slot: Slot, slots_per_epoch: u64) -> Vec<DutyAndProof> {
self.store
.read()
.iter()
// As long as a `HashMap` iterator does not return duplicate keys, neither will this
// function.
.filter_map(|(_validator_pubkey, validator_map)| {
let epoch = slot.epoch(slots_per_epoch);
validator_map.get(&epoch).and_then(|duties| {
if duties.duty.attestation_slot == Some(slot) {
Some(duties)
} else {
None
}
})
})
.cloned()
.collect()
}
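    /// Returns whether the validator is an aggregator for the given `epoch`, or `None` if the
    /// validator or epoch is unknown to the store.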
fn is_aggregator(&self, validator_pubkey: &PublicKey, epoch: &Epoch) -> Option<bool> {
Some(
self.store
.read()
.get(validator_pubkey)?
.get(epoch)?
.selection_proof
.is_some(),
)
}
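    /// Attempts to insert the duties into the store, computing the selection proof if the duties
    /// are new or have changed.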
fn insert<T: SlotClock + 'static, E: EthSpec>(
&self,
epoch: Epoch,
mut duties: DutyAndProof,
slots_per_epoch: u64,
validator_store: &ValidatorStore<T, E>,
) -> Result<InsertOutcome, String> {
let mut store = self.store.write();
if !duties_match_epoch(&duties.duty, epoch, slots_per_epoch) {
return Ok(InsertOutcome::Invalid);
}
// TODO: refactor with Entry.
if let Some(validator_map) = store.get_mut(&duties.duty.validator_pubkey) {
if let Some(known_duties) = validator_map.get_mut(&epoch) {
if known_duties.duty == duties.duty {
Ok(InsertOutcome::Identical)
} else {
// Compute the selection proof.
duties.compute_selection_proof(validator_store)?;
                // Determine if a re-subscription is required, i.e., if the fields relevant to
                // the subscription have changed.
                let should_resubscribe = !duties.subscription_eq(known_duties);
// Replace the existing duties.
*known_duties = duties;
Ok(InsertOutcome::Replaced { should_resubscribe })
}
} else {
// Compute the selection proof.
duties.compute_selection_proof(validator_store)?;
validator_map.insert(epoch, duties);
Ok(InsertOutcome::NewEpoch)
}
} else {
// Compute the selection proof.
duties.compute_selection_proof(validator_store)?;
let validator_pubkey = duties.duty.validator_pubkey.clone();
let mut validator_map = HashMap::new();
validator_map.insert(epoch, duties);
store.insert(validator_pubkey, validator_map);
Ok(InsertOutcome::NewValidator)
}
}
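    /// Removes all duties for epochs prior to `prior_to`.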
fn prune(&self, prior_to: Epoch) {
self.store
.write()
.retain(|_validator_pubkey, validator_map| {
validator_map.retain(|epoch, _duties| *epoch >= prior_to);
!validator_map.is_empty()
});
}
}
pub struct DutiesServiceBuilder<T, E: EthSpec> {
validator_store: Option<ValidatorStore<T, E>>,
slot_clock: Option<T>,
beacon_node: Option<RemoteBeaconNode<E>>,
context: Option<RuntimeContext<E>>,
allow_unsynced_beacon_node: bool,
}
impl<T: SlotClock + 'static, E: EthSpec> DutiesServiceBuilder<T, E> {
pub fn new() -> Self {
Self {
validator_store: None,
slot_clock: None,
beacon_node: None,
context: None,
allow_unsynced_beacon_node: false,
}
}
pub fn validator_store(mut self, store: ValidatorStore<T, E>) -> Self {
self.validator_store = Some(store);
self
}
pub fn slot_clock(mut self, slot_clock: T) -> Self {
self.slot_clock = Some(slot_clock);
self
}
pub fn beacon_node(mut self, beacon_node: RemoteBeaconNode<E>) -> Self {
self.beacon_node = Some(beacon_node);
self
}
pub fn runtime_context(mut self, context: RuntimeContext<E>) -> Self {
self.context = Some(context);
self
}
/// Set to `true` to allow polling for duties when the beacon node is not synced.
pub fn allow_unsynced_beacon_node(mut self, allow_unsynced_beacon_node: bool) -> Self {
self.allow_unsynced_beacon_node = allow_unsynced_beacon_node;
self
}
pub fn build(self) -> Result<DutiesService<T, E>, String> {
Ok(DutiesService {
inner: Arc::new(Inner {
store: Arc::new(DutiesStore::default()),
validator_store: self
.validator_store
.ok_or_else(|| "Cannot build DutiesService without validator_store")?,
slot_clock: self
.slot_clock
.ok_or_else(|| "Cannot build DutiesService without slot_clock")?,
beacon_node: self
.beacon_node
.ok_or_else(|| "Cannot build DutiesService without beacon_node")?,
context: self
.context
.ok_or_else(|| "Cannot build DutiesService without runtime_context")?,
allow_unsynced_beacon_node: self.allow_unsynced_beacon_node,
}),
})
}
}
/// Helper to minimise `Arc` usage.
pub struct Inner<T, E: EthSpec> {
store: Arc<DutiesStore>,
validator_store: ValidatorStore<T, E>,
pub(crate) slot_clock: T,
pub(crate) beacon_node: RemoteBeaconNode<E>,
context: RuntimeContext<E>,
/// If true, the duties service will poll for duties from the beacon node even if it is not
/// synced.
allow_unsynced_beacon_node: bool,
}
/// Maintains a store of the duties for all voting validators in the `validator_store`.
///
/// Polls the beacon node at the start of each slot, collecting duties for the current and next
/// epoch.
pub struct DutiesService<T, E: EthSpec> {
inner: Arc<Inner<T, E>>,
}
impl<T, E: EthSpec> Clone for DutiesService<T, E> {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
}
}
}
impl<T, E: EthSpec> Deref for DutiesService<T, E> {
type Target = Inner<T, E>;
fn deref(&self) -> &Self::Target {
self.inner.deref()
}
}
impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
/// Returns the total number of validators known to the duties service.
pub fn total_validator_count(&self) -> usize {
self.validator_store.num_voting_validators()
}
/// Returns the total number of validators that should propose in the given epoch.
pub fn proposer_count(&self, epoch: Epoch) -> usize {
self.store.proposer_count(epoch)
}
/// Returns the total number of validators that should attest in the given epoch.
pub fn attester_count(&self, epoch: Epoch) -> usize {
self.store.attester_count(epoch)
}
/// Returns the pubkeys of the validators which are assigned to propose in the given slot.
///
    /// It is possible that multiple validators have an identical proposal slot; however, that is
    /// likely the result of heavy forking or an inconsistent beacon node connection.
pub fn block_producers(&self, slot: Slot) -> Vec<PublicKey> {
self.store.block_producers(slot, E::slots_per_epoch())
}
    /// Returns the `DutyAndProof` for each validator scheduled to attest in the given `slot`.
pub fn attesters(&self, slot: Slot) -> Vec<DutyAndProof> {
self.store.attesters(slot, E::slots_per_epoch())
}
/// Start the service that periodically polls the beacon node for validator duties.
pub fn start_update_service(self, spec: &ChainSpec) -> Result<Signal, String> {
let log = self.context.log.clone();
let duration_to_next_slot = self
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| "Unable to determine duration to next slot".to_string())?;
let mut interval = {
let slot_duration = Duration::from_millis(spec.milliseconds_per_slot);
// Note: `interval_at` panics if `slot_duration` is 0
interval_at(
Instant::now() + duration_to_next_slot + TIME_DELAY_FROM_SLOT,
slot_duration,
)
};
let (exit_signal, exit_fut) = exit_future::signal();
// Run an immediate update before starting the updater service.
self.inner
.context
.runtime_handle
.spawn(self.clone().do_update());
let runtime_handle = self.inner.context.runtime_handle.clone();
let interval_fut = async move {
while interval.next().await.is_some() {
self.clone().do_update().await.ok();
}
};
let future = futures::future::select(
Box::pin(interval_fut),
exit_fut.map(move |_| info!(log, "Shutdown complete")),
);
runtime_handle.spawn(future);
Ok(exit_signal)
}
/// Attempt to download the duties of all managed validators for this epoch and the next.
async fn do_update(self) -> Result<(), ()> {
let log = &self.context.log;
if !is_synced(&self.beacon_node, &self.slot_clock, None).await
&& !self.allow_unsynced_beacon_node
{
return Ok(());
}
let current_epoch = self
.slot_clock
.now()
.ok_or_else(|| {
error!(log, "Duties manager failed to read slot clock");
})
.map(|slot| {
let epoch = slot.epoch(E::slots_per_epoch());
if slot % E::slots_per_epoch() == 0 {
let prune_below = epoch - PRUNE_DEPTH;
trace!(
log,
"Pruning duties cache";
"pruning_below" => prune_below.as_u64(),
"current_epoch" => epoch.as_u64(),
);
self.store.prune(prune_below);
}
epoch
})?;
let result = self.clone().update_epoch(current_epoch).await;
if let Err(e) = result {
error!(
log,
"Failed to get current epoch duties";
"http_error" => format!("{:?}", e)
);
}
self.clone()
.update_epoch(current_epoch + 1)
.await
.map_err(move |e| {
error!(
log,
"Failed to get next epoch duties";
"http_error" => format!("{:?}", e)
);
})?;
Ok(())
}
/// Attempt to download the duties of all managed validators for the given `epoch`.
async fn update_epoch(self, epoch: Epoch) -> Result<(), String> {
let pubkeys = self.validator_store.voting_pubkeys();
let all_duties = self
.beacon_node
.http
.validator()
.get_duties(epoch, pubkeys.as_slice())
.await
.map_err(move |e| format!("Failed to get duties for epoch {}: {:?}", epoch, e))?;
let log = self.context.log.clone();
let mut new_validator = 0;
let mut new_epoch = 0;
let mut identical = 0;
let mut replaced = 0;
let mut invalid = 0;
        // For each of the duties, attempt to insert them into our local store and build a
        // list of new or changed selection proofs for any aggregating validators.
let validator_subscriptions = all_duties
.into_iter()
.filter_map(|remote_duties| {
// Convert the remote duties into our local representation.
let duties: DutyAndProof = remote_duties
.clone()
.try_into()
.map_err(|e| {
error!(
log,
"Unable to convert remote duties";
"error" => e
)
})
.ok()?;
let validator_pubkey = duties.duty.validator_pubkey.clone();
// Attempt to update our local store.
let outcome = self
.store
.insert(epoch, duties, E::slots_per_epoch(), &self.validator_store)
.map_err(|e| {
error!(
log,
"Unable to store duties";
"error" => e
)
})
.ok()?;
match &outcome {
InsertOutcome::NewValidator => {
debug!(
log,
"First duty assignment for validator";
"proposal_slots" => format!("{:?}", &remote_duties.block_proposal_slots),
"attestation_slot" => format!("{:?}", &remote_duties.attestation_slot),
"validator" => format!("{:?}", &remote_duties.validator_pubkey)
);
new_validator += 1;
}
InsertOutcome::NewEpoch => new_epoch += 1,
InsertOutcome::Identical => identical += 1,
InsertOutcome::Replaced { .. } => replaced += 1,
InsertOutcome::Invalid => invalid += 1,
};
// The selection proof is computed on `store.insert`, so it's necessary to check
// with the store that the validator is an aggregator.
let is_aggregator = self.store.is_aggregator(&validator_pubkey, &epoch)?;
if outcome.is_subscription_candidate() {
Some(ValidatorSubscription {
validator_index: remote_duties.validator_index?,
attestation_committee_index: remote_duties.attestation_committee_index?,
slot: remote_duties.attestation_slot?,
is_aggregator,
})
} else {
None
}
})
.collect::<Vec<_>>();
if invalid > 0 {
error!(
log,
"Received invalid duties from beacon node";
"bad_duty_count" => invalid,
"info" => "Duties are from wrong epoch."
)
}
trace!(
log,
"Performed duties update";
"identical" => identical,
"new_epoch" => new_epoch,
"new_validator" => new_validator,
"replaced" => replaced,
"epoch" => format!("{}", epoch)
);
if replaced > 0 {
warn!(
log,
"Duties changed during routine update";
"info" => "Chain re-org likely occurred"
)
}
let log = self.context.log.clone();
let count = validator_subscriptions.len();
if count == 0 {
debug!(log, "No new subscriptions required");
Ok(())
} else {
self.beacon_node
.http
.validator()
.subscribe(validator_subscriptions)
.await
.map_err(|e| format!("Failed to subscribe validators: {:?}", e))
.map(move |status| {
match status {
PublishStatus::Valid => debug!(
log,
"Successfully subscribed validators";
"count" => count
),
PublishStatus::Unknown => error!(
log,
"Unknown response from subscription";
),
PublishStatus::Invalid(e) => error!(
log,
"Failed to subscribe validator";
"error" => e
),
};
})
}
}
}
/// Returns `true` if the slots in the `duties` are from the given `epoch`.
fn duties_match_epoch(duties: &ValidatorDuty, epoch: Epoch, slots_per_epoch: u64) -> bool {
duties
.attestation_slot
.map_or(true, |slot| slot.epoch(slots_per_epoch) == epoch)
&& duties
.block_proposal_slots
.iter()
.all(|slot| slot.epoch(slots_per_epoch) == epoch)
}
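
/// A minimal sketch of how a caller might wire this service together. The construction of the
/// dependencies (`ValidatorStore`, slot clock, `RemoteBeaconNode`, `RuntimeContext`) lives in
/// other crates, so here they are assumed to be in hand and taken as arguments.
#[allow(dead_code)]
fn example_start_duties_service<T: SlotClock + 'static, E: EthSpec>(
    validator_store: ValidatorStore<T, E>,
    slot_clock: T,
    beacon_node: RemoteBeaconNode<E>,
    context: RuntimeContext<E>,
    spec: &ChainSpec,
) -> Result<Signal, String> {
    let duties_service = DutiesServiceBuilder::new()
        .validator_store(validator_store)
        .slot_clock(slot_clock)
        .beacon_node(beacon_node)
        .runtime_context(context)
        // Refuse to poll for duties while the beacon node is still syncing.
        .allow_unsynced_beacon_node(false)
        .build()?;
    // Begin polling each slot; the returned exit signal can be used to shut the service down.
    duties_service.start_update_service(spec)
}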