Mirror of https://github.com/sigp/lighthouse.git (synced 2026-03-10 12:11:59 +00:00)
Initial work towards v0.2.0 (#924)
* Remove ping protocol
* Initial renaming of network services
* Correct rebasing relative to latest master
* Start updating types
* Adds HashMapDelay struct to utils
* Initial network restructure
* Network restructure. Adds new types for v0.2.0
* Removes build artefacts
* Shift validation to beacon chain
* Temporarily remove gossip validation
This is to be updated to match current optimisation efforts.
* Adds AggregateAndProof
* Begin rebuilding pubsub encoding/decoding
* Signature hacking
* Shift gossipsub decoding into eth2_libp2p
* Existing EF tests passing with fake_crypto
* Shifts block encoding/decoding into RPC
* Delete outdated API spec
* All release tests passing bar genesis state parsing
* Update and test YamlConfig
* Update to spec v0.10 compatible BLS
* Updates to BLS EF tests
* Add EF test for AggregateVerify
And delete unused hash2curve tests for uncompressed points
* Update EF tests to v0.10.1
* Use optional block root correctly in block proc
* Use genesis fork in deposit domain. All tests pass
* Fast aggregate verify test
* Update REST API docs
* Fix unused import
* Bump spec tags to v0.10.1
* Add `seconds_per_eth1_block` to chainspec
* Update to timestamp based eth1 voting scheme
* Return None from `get_votes_to_consider` if block cache is empty
* Handle overflows in `is_candidate_block`
* Revert to failing tests
* Fix eth1 data sets test
* Choose default vote according to spec
* Fix collect_valid_votes tests
* Fix `get_votes_to_consider` to choose all eligible blocks
* Uncomment winning_vote tests
* Add comments; remove unused code
* Reduce seconds_per_eth1_block for simulation
* Addressed review comments
* Add test for default vote case
* Fix logs
* Remove unused functions
* Meter default eth1 votes
* Fix comments
* Progress on attestation service
* Address review comments; remove unused dependency
* Initial work on removing libp2p lock
* Add LRU caches to store (rollup)
* Update attestation validation for DB changes (WIP)
* Initial version of should_forward_block
* Scaffold
* Progress on attestation validation
Also, consolidate prod+testing slot clocks so that they share much
of the same implementation and can both handle sub-slot time changes.
* Removes lock from libp2p service
* Completed network lock removal
* Finish(?) attestation processing
* Correct network termination future
* Add slot check to block check
* Correct fmt issues
* Remove Drop implementation for network service
* Add first attempt at attestation proc. re-write
* Add version 2 of attestation processing
* Minor fixes
* Add validator pubkey cache
* Make get_indexed_attestation take a committee
* Link signature processing into new attn verification
* First working version
* Ensure pubkey cache is updated
* Add more metrics, slight optimizations
* Clone committee cache during attestation processing
* Update shuffling cache during block processing
* Remove old commented-out code
* Fix shuffling cache insert bug
* Used indexed attestation in fork choice
* Restructure attn processing, add metrics
* Add more detailed metrics
* Tidy, fix failing tests
* Fix failing tests, tidy
* Address reviewers suggestions
* Disable/delete two outdated tests
* Modification of validator for subscriptions
* Add slot signing to validator client
* Further progress on validation subscription
* Adds necessary validator subscription functionality
* Add new Pubkeys struct to signature_sets
* Refactor with functional approach
* Update beacon chain
* Clean up validator <-> beacon node http types
* Add aggregator status to ValidatorDuty
* Impl Clone for manual slot clock
* Fix minor errors
* Further progress validator client subscription
* Initial subscription and aggregation handling
* Remove decompressed member from pubkey bytes
* Progress to modifying val client for attestation aggregation
* First draft of validator client upgrade for aggregate attestations
* Add hashmap for indices lookup
* Add state cache, remove store cache
* Only build the head committee cache
* Removes lock on a network channel
* Partially implement beacon node subscription http api
* Correct compilation issues
* Change `get_attesting_indices` to use Vec
* Fix failing test
* Partial implementation of timer
* Adds timer, removes exit_future, http api to op pool
* Partial multiple aggregate attestation handling
* Permits bulk messages across gossipsub network channel
* Correct compile issues
* Improve gossipsub messaging and correct rest api helpers
* Added global gossipsub subscriptions
* Update validator subscriptions data structs
* Tidy
* Re-structure validator subscriptions
* Initial handling of subscriptions
* Re-structure network service
* Add pubkey cache persistence file
* Add more comments
* Integrate persistence file into builder
* Add pubkey cache tests
* Add HashSetDelay and introduce into attestation service
* Handles validator subscriptions
* Add data_dir to beacon chain builder
* Remove Option in pubkey cache persistence file
* Ensure consistency between datadir/data_dir
* Fix failing network test
* Peer subnet discovery gets queued for future subscriptions
* Reorganise attestation service functions
* Initial wiring of attestation service
* First draft of attestation service timing logic
* Correct minor typos
* Tidy
* Fix todos
* Improve tests
* Add PeerInfo to connected peers mapping
* Fix compile error
* Fix compile error from merge
* Split up block processing metrics
* Tidy
* Refactor get_pubkey_from_state
* Remove commented-out code
* Rename state_cache -> checkpoint_cache
* Rename Checkpoint -> Snapshot
* Tidy, add comments
* Tidy up find_head function
* Change some checkpoint -> snapshot
* Add tests
* Expose max_len
* Remove dead code
* Tidy
* Fix bug
* Add sync-speed metric
* Add first attempt at VerifiableBlock
* Start integrating into beacon chain
* Integrate VerifiableBlock
* Rename VerifiableBlock -> PartialBlockVerification
* Add start of typed methods
* Add progress
* Add further progress
* Rename structs
* Add full block verification to block_processing.rs
* Further beacon chain integration
* Update checks for gossip
* Add todo
* Start adding segment verification
* Add passing chain segment test
* Initial integration with batch sync
* Minor changes
* Tidy, add more error checking
* Start adding chain_segment tests
* Finish invalid signature tests
* Include single and gossip verified blocks in tests
* Add gossip verification tests
* Start adding docs
* Finish adding comments to block_processing.rs
* Rename block_processing.rs -> block_verification
* Start removing old block processing code
* Fixes beacon_chain compilation
* Fix project-wide compile errors
* Remove old code
* Correct code to pass all tests
* Fix bug with beacon proposer index
* Fix shim for BlockProcessingError
* Only process one epoch at a time
* Fix loop in chain segment processing
* Correct tests from master merge
* Add caching for state.eth1_data_votes
* Add BeaconChain::validator_pubkey
* Revert "Add caching for state.eth1_data_votes"
This reverts commit cd73dcd643.
Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
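One bullet above adds a `HashMapDelay` struct to the utils. The real Lighthouse utility is async and stream-based; purely to illustrate the idea (map entries that expire after a timeout), here is a minimal synchronous sketch — every name and signature below is hypothetical, not the actual `hashmap_delay` API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Illustrative, synchronous `HashMapDelay`-style map: each entry carries an
/// expiry instant and is ignored or pruned once that instant has passed.
struct HashMapDelay<K, V> {
    entries: HashMap<K, (V, Instant)>,
    default_timeout: Duration,
}

impl<K: std::hash::Hash + Eq + Clone, V> HashMapDelay<K, V> {
    fn new(default_timeout: Duration) -> Self {
        Self {
            entries: HashMap::new(),
            default_timeout,
        }
    }

    /// Insert a value that expires after the default timeout.
    fn insert(&mut self, key: K, value: V) {
        self.entries
            .insert(key, (value, Instant::now() + self.default_timeout));
    }

    /// Return the value only if it has not yet expired.
    fn get(&self, key: &K) -> Option<&V> {
        self.entries
            .get(key)
            .filter(|entry| Instant::now() < entry.1)
            .map(|entry| &entry.0)
    }

    /// Drop all expired entries, returning the expired keys.
    fn prune(&mut self) -> Vec<K> {
        let now = Instant::now();
        let expired: Vec<K> = self
            .entries
            .iter()
            .filter(|(_, v)| now >= v.1)
            .map(|(k, _)| (*k).clone())
            .collect();
        for k in &expired {
            self.entries.remove(k);
        }
        expired
    }
}

fn main() {
    let mut map = HashMapDelay::new(Duration::from_secs(60));
    map.insert("peer_1", 42u64);
    assert_eq!(map.get(&"peer_1"), Some(&42)); // still fresh
    assert!(map.prune().is_empty()); // nothing has expired yet
    println!("ok");
}
```

A structure like this suits tasks such as the attestation service's "subscribe to a subnet, then forget it after N slots" bookkeeping, where entries must lapse automatically rather than be removed by hand.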
@@ -1,6 +1,6 @@
 [package]
 name = "validator_client"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>", "Luke Anderson <luke@lukeanderson.com.au>"]
 edition = "2018"

@@ -16,6 +16,7 @@ clap = "2.33.0"
 lighthouse_bootstrap = { path = "../eth2/utils/lighthouse_bootstrap" }
 eth2_interop_keypairs = { path = "../eth2/utils/eth2_interop_keypairs" }
 slot_clock = { path = "../eth2/utils/slot_clock" }
+rest_types = { path = "../eth2/utils/rest_types" }
 types = { path = "../eth2/types" }
 serde = "1.0.102"
 serde_derive = "1.0.102"
@@ -1,19 +1,20 @@
 use crate::{
-    duties_service::{DutiesService, ValidatorDuty},
+    duties_service::{DutiesService, DutyAndState},
     validator_store::ValidatorStore,
 };
 use environment::RuntimeContext;
 use exit_future::Signal;
 use futures::{Future, Stream};
 use remote_beacon_node::{PublishStatus, RemoteBeaconNode};
+use rest_types::{ValidatorDuty, ValidatorSubscription};
 use slog::{crit, info, trace};
 use slot_clock::SlotClock;
 use std::collections::HashMap;
 use std::ops::Deref;
 use std::sync::Arc;
 use std::time::{Duration, Instant};
-use tokio::timer::Interval;
-use types::{ChainSpec, CommitteeIndex, EthSpec, Slot};
+use tokio::timer::{Delay, Interval};
+use types::{AggregateAndProof, ChainSpec, CommitteeIndex, EthSpec, Slot};

 /// Builds an `AttestationService`.
 pub struct AttestationServiceBuilder<T, E: EthSpec> {

@@ -123,13 +124,13 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
         let context = &self.context;
         let log = context.log.clone();

+        let slot_duration = Duration::from_millis(spec.milliseconds_per_slot);
         let duration_to_next_slot = self
             .slot_clock
             .duration_to_next_slot()
             .ok_or_else(|| "Unable to determine duration to next slot".to_string())?;

         let interval = {
-            let slot_duration = Duration::from_millis(spec.milliseconds_per_slot);
             Interval::new(
                 Instant::now() + duration_to_next_slot + slot_duration / 3,
                 slot_duration,

@@ -154,7 +155,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
                     }
                 })
                 .for_each(move |_| {
-                    if let Err(e) = service.spawn_attestation_tasks() {
+                    if let Err(e) = service.spawn_attestation_tasks(slot_duration) {
                         crit!(
                             log_2,
                             "Failed to spawn attestation tasks";

@@ -178,30 +179,73 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {

     /// For each required attestation, spawn a new task that downloads, signs and uploads the
     /// attestation to the beacon node.
-    fn spawn_attestation_tasks(&self) -> Result<(), String> {
+    fn spawn_attestation_tasks(&self, slot_duration: Duration) -> Result<(), String> {
         let service = self.clone();

         let slot = service
             .slot_clock
             .now()
             .ok_or_else(|| "Failed to read slot clock".to_string())?;
         let duration_to_next_slot = service
             .slot_clock
             .duration_to_next_slot()
             .ok_or_else(|| "Unable to determine duration to next slot".to_string())?;

         // If a validator needs to publish an aggregate attestation, they must do so at 2/3
         // through the slot. This delay triggers at that time.
         let aggregator_delay_instant = {
             if duration_to_next_slot <= slot_duration / 3 {
                 Instant::now()
             } else {
                 Instant::now() + duration_to_next_slot - (slot_duration / 3)
             }
         };

         let epoch = slot.epoch(E::slots_per_epoch());
         // Check if any attestation subscriptions are required. If there are new attestation
         // duties for this epoch or the next, send them to the beacon node.
         let mut duties_to_subscribe = service.duties_service.unsubscribed_epoch_duties(&epoch);
         duties_to_subscribe.append(
             &mut service
                 .duties_service
                 .unsubscribed_epoch_duties(&(epoch + 1)),
         );

         // Spawn a task to subscribe all the duties.
         service
             .context
             .executor
             .spawn(self.clone().send_subscriptions(duties_to_subscribe));

         // Builds a map of committee indices and spawns individual tasks to process raw
         // attestations and aggregated attestations.
         let mut committee_indices: HashMap<CommitteeIndex, Vec<ValidatorDuty>> = HashMap::new();
         let mut aggregator_committee_indices: HashMap<CommitteeIndex, Vec<DutyAndState>> =
             HashMap::new();

         service
             .duties_service
             .attesters(slot)
             .into_iter()
-            .for_each(|duty| {
-                if let Some(committee_index) = duty.attestation_committee_index {
+            .for_each(|duty_and_state| {
+                if let Some(committee_index) = duty_and_state.duty.attestation_committee_index {
                     let validator_duties = committee_indices
                         .entry(committee_index)
                         .or_insert_with(|| vec![]);
+                    validator_duties.push(duty_and_state.duty.clone());

-                    validator_duties.push(duty);
+                    // If this duty entails the validator aggregating attestations, perform
+                    // aggregation tasks.
+                    if duty_and_state.is_aggregator() {
+                        let validator_duties = aggregator_committee_indices
+                            .entry(committee_index)
+                            .or_insert_with(|| vec![]);
+                        validator_duties.push(duty_and_state);
+                    }
                 }
             });

         // Spawns tasks for all required raw attestation production.
         committee_indices
             .into_iter()
             .for_each(|(committee_index, validator_duties)| {

@@ -213,11 +257,112 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
                 ));
             });

         // Spawns tasks for all aggregate attestation production.
         aggregator_committee_indices
             .into_iter()
             .for_each(|(committee_index, validator_duties)| {
                 // Spawn a separate task for each aggregate attestation.
                 service
                     .context
                     .executor
                     .spawn(self.clone().do_aggregate_attestation(
                         slot,
                         committee_index,
                         validator_duties,
                         Delay::new(aggregator_delay_instant.clone()),
                     ));
             });
         Ok(())
     }

-    /// For a given `committee_index`, download the attestation, have it signed by all validators
-    /// in `validator_duties` then upload it.
+    /// Subscribes any required validators to the beacon node for a particular slot.
+    ///
+    /// This informs the beacon node that the validator has a duty on a particular
+    /// slot, allowing the beacon node to connect to the required subnet and determine
+    /// if attestations need to be aggregated.
     fn send_subscriptions(&self, duties: Vec<ValidatorDuty>) -> impl Future<Item = (), Error = ()> {
         let mut validator_subscriptions = Vec::new();
         let mut successful_duties = Vec::new();

         let service_1 = self.clone();
         let duties_no = duties.len();

         let log_1 = self.context.log.clone();
         let log_2 = self.context.log.clone();

         // Builds a list of subscriptions.
         for duty in duties {
             if let Some((slot, attestation_committee_index, _, validator_index)) =
                 attestation_duties(&duty)
             {
                 if let Some(slot_signature) =
                     self.validator_store.sign_slot(&duty.validator_pubkey, slot)
                 {
                     let is_aggregator_proof = if duty.is_aggregator(&slot_signature) {
                         Some(slot_signature.clone())
                     } else {
                         None
                     };

                     let subscription = ValidatorSubscription::new(
                         validator_index,
                         attestation_committee_index,
                         slot,
                         slot_signature,
                     );
                     validator_subscriptions.push(subscription);

                     // Add successful duties to the list, along with whether they are
                     // aggregation duties or not.
                     successful_duties.push((duty, is_aggregator_proof));
                 }
             } else {
                 crit!(log_2, "Validator duty doesn't have required fields");
             }
         }

         let failed_duties = duties_no - successful_duties.len();

         self.beacon_node
             .http
             .validator()
             .subscribe(validator_subscriptions)
             .map_err(|e| format!("Failed to subscribe validators: {:?}", e))
             .map(move |publish_status| match publish_status {
                 PublishStatus::Valid => info!(
                     log_1,
                     "Successfully subscribed validators";
                     "validators" => duties_no,
                     "failed_validators" => failed_duties,
                 ),
                 PublishStatus::Invalid(msg) => crit!(
                     log_1,
                     "Validator Subscription was invalid";
                     "message" => msg,
                 ),
                 PublishStatus::Unknown => {
                     crit!(log_1, "Unknown condition when publishing attestation")
                 }
             })
             .and_then(move |_| {
                 for (duty, is_aggregator_proof) in successful_duties {
                     service_1
                         .duties_service
                         .subscribe_duty(&duty, is_aggregator_proof);
                 }
                 Ok(())
             })
             .map_err(move |e| {
                 crit!(
                     log_2,
                     "Error during attestation production";
                     "error" => e
                 )
             })
     }

     /// For a given `committee_index`, download the attestation, have each validator in
     /// `validator_duties` sign it and send the collection back to the beacon node.
     fn do_attestation(
         &self,
         slot: Slot,

@@ -235,28 +380,32 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
             .produce_attestation(slot, committee_index)
             .map_err(|e| format!("Failed to produce attestation: {:?}", e))
             .map(move |attestation| {
-                validator_duties
-                    .iter()
-                    .fold(attestation, |mut attestation, duty| {
+                validator_duties.iter().fold(
+                    (Vec::new(), attestation),
+                    |(mut attestation_list, attestation), duty| {
                         let log = service_1.context.log.clone();

                         if let Some((
                             duty_slot,
                             duty_committee_index,
                             validator_committee_position,
                             _,
                         )) = attestation_duties(duty)
                         {
+                            let mut raw_attestation = attestation.clone();
                             if duty_slot == slot && duty_committee_index == committee_index {
                                 if service_1
                                     .validator_store
                                     .sign_attestation(
                                         &duty.validator_pubkey,
                                         validator_committee_position,
-                                        &mut attestation,
+                                        &mut raw_attestation,
                                     )
                                     .is_none()
                                 {
                                     crit!(log, "Failed to sign attestation");
+                                } else {
+                                    attestation_list.push(raw_attestation);
+                                }
                             } else {
                                 crit!(log, "Inconsistent validator duties during signing");

@@ -265,22 +414,134 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
                             crit!(log, "Missing validator duties when signing");
                         }

-                        attestation
-                    })
+                        (attestation_list, attestation)
+                    },
+                )
             })
-            .and_then(move |attestation| {
+            .and_then(move |(attestation_list, attestation)| {
                 service_2
                     .beacon_node
                     .http
                     .validator()
-                    .publish_attestation(attestation.clone())
+                    .publish_attestations(attestation_list.clone())
+                    .map(|publish_status| (attestation_list, attestation, publish_status))
                     .map_err(|e| format!("Failed to publish attestations: {:?}", e))
             })
             .map(
                 move |(attestation_list, attestation, publish_status)| match publish_status {
                     PublishStatus::Valid => info!(
                         log_1,
                         "Successfully published attestation";
                         "signatures" => attestation_list.len(),
                         "head_block" => format!("{}", attestation.data.beacon_block_root),
                         "committee_index" => attestation.data.index,
                         "slot" => attestation.data.slot.as_u64(),
                     ),
                     PublishStatus::Invalid(msg) => crit!(
                         log_1,
                         "Published attestation was invalid";
                         "message" => msg,
                         "committee_index" => attestation.data.index,
                         "slot" => attestation.data.slot.as_u64(),
                     ),
                     PublishStatus::Unknown => {
                         crit!(log_1, "Unknown condition when publishing attestation")
                     }
                 },
             )
             .map_err(move |e| {
                 crit!(
                     log_2,
                     "Error during attestation production";
                     "error" => e
                 )
             })
     }

     /// For a given `committee_index`, download the aggregate attestation, have it signed by
     /// all validators in `validator_duties`, then upload it.
     fn do_aggregate_attestation(
         &self,
         slot: Slot,
         committee_index: CommitteeIndex,
         validator_duties: Vec<DutyAndState>,
         aggregator_delay: Delay,
     ) -> impl Future<Item = (), Error = ()> {
         let service_1 = self.clone();
         let service_2 = self.clone();
         let log_1 = self.context.log.clone();
         let log_2 = self.context.log.clone();

         self.beacon_node
             .http
             .validator()
             .produce_aggregate_attestation(slot, committee_index)
             .map_err(|e| format!("Failed to produce an aggregate attestation: {:?}", e))
             .map(move |attestation| {
                 validator_duties.iter().fold(
                     (Vec::new(), attestation),
                     |(mut aggregate_and_proof_list, attestation), duty_and_state| {
                         let log = service_1.context.log.clone();

                         match (
                             duty_and_state.selection_proof(),
                             attestation_duties(&duty_and_state.duty),
                         ) {
                             (
                                 Some(selection_proof),
                                 Some((duty_slot, duty_committee_index, _, aggregator_index)),
                             ) => {
                                 let raw_attestation = attestation.clone();
                                 if duty_slot == slot && duty_committee_index == committee_index {
                                     // Build the `AggregateAndProof` struct for each validator.
                                     let aggregate_and_proof = AggregateAndProof {
                                         aggregator_index,
                                         aggregate: raw_attestation,
                                         selection_proof,
                                     };

                                     if let Some(signed_aggregate_and_proof) =
                                         service_1.validator_store.sign_aggregate_and_proof(
                                             &duty_and_state.duty.validator_pubkey,
                                             aggregate_and_proof,
                                         )
                                     {
                                         aggregate_and_proof_list.push(signed_aggregate_and_proof);
                                     } else {
                                         crit!(log, "Failed to sign attestation");
                                     }
                                 } else {
                                     crit!(log, "Inconsistent validator duties during signing");
                                 }
                             }
                             _ => crit!(
                                 log,
                                 "Missing validator duties or not aggregate duty when signing"
                             ),
                         }

                         (aggregate_and_proof_list, attestation)
                     },
                 )
             })
             .and_then(move |(aggregate_and_proof_list, attestation)| {
                 aggregator_delay
                     .map(move |_| (aggregate_and_proof_list, attestation))
                     .map_err(move |e| format!("Error during aggregator delay: {:?}", e))
             })
             .and_then(move |(aggregate_and_proof_list, attestation)| {
                 service_2
                     .beacon_node
                     .http
                     .validator()
                     .publish_aggregate_and_proof(aggregate_and_proof_list)
                     .map(|publish_status| (attestation, publish_status))
-                    .map_err(|e| format!("Failed to publish attestation: {:?}", e))
+                    .map_err(|e| format!("Failed to publish aggregate and proofs: {:?}", e))
             })
             .map(move |(attestation, publish_status)| match publish_status {
                 PublishStatus::Valid => info!(
                     log_1,
-                    "Successfully published attestation";
+                    "Successfully published aggregate attestations";
                     "signatures" => attestation.aggregation_bits.num_set_bits(),
                     "head_block" => format!("{}", attestation.data.beacon_block_root),
                     "committee_index" => attestation.data.index,

@@ -307,10 +568,11 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
     }
 }

-fn attestation_duties(duty: &ValidatorDuty) -> Option<(Slot, CommitteeIndex, usize)> {
+fn attestation_duties(duty: &ValidatorDuty) -> Option<(Slot, CommitteeIndex, usize, u64)> {
     Some((
         duty.attestation_slot?,
         duty.attestation_committee_index?,
         duty.attestation_committee_position?,
+        duty.validator_index?,
     ))
 }
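The `aggregator_delay_instant` computed in the attestation service diff above encodes a timing rule: aggregate attestations are broadcast two-thirds of the way through the slot, i.e. `slot_duration / 3` before the next slot boundary, or immediately if that point has already passed. The same arithmetic, extracted into a standalone (hypothetical) free function:

```rust
use std::time::{Duration, Instant};

/// Compute the instant at which an aggregator should publish, mirroring the
/// logic in `spawn_attestation_tasks`: fire `slot_duration / 3` before the
/// next slot starts (i.e. at the 2/3 mark of the current slot), or now if
/// less than a third of a slot remains.
fn aggregator_delay_instant(duration_to_next_slot: Duration, slot_duration: Duration) -> Instant {
    if duration_to_next_slot <= slot_duration / 3 {
        Instant::now()
    } else {
        Instant::now() + duration_to_next_slot - (slot_duration / 3)
    }
}

fn main() {
    let slot_duration = Duration::from_secs(12);

    // Early in the slot (9s remain): the delay lands in the future,
    // 5s from now (9s - 12s/3).
    let future_instant = aggregator_delay_instant(Duration::from_secs(9), slot_duration);
    assert!(future_instant > Instant::now());

    // Less than a third of a slot remains (2s): fire immediately.
    let immediate = aggregator_delay_instant(Duration::from_secs(2), slot_duration);
    assert!(immediate <= Instant::now());

    println!("ok");
}
```

Driving this through `tokio::timer::Delay`, as the diff does, turns the instant into a future that the aggregate-publishing task awaits before uploading.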
@@ -1,9 +1,11 @@
|
||||
use crate::validator_store::ValidatorStore;
|
||||
use bls::Signature;
|
||||
use environment::RuntimeContext;
|
||||
use exit_future::Signal;
|
||||
use futures::{future, Future, IntoFuture, Stream};
|
||||
use parking_lot::RwLock;
|
||||
use remote_beacon_node::RemoteBeaconNode;
|
||||
use rest_types::{ValidatorDuty, ValidatorDutyBytes};
|
||||
use slog::{crit, debug, error, info, trace, warn};
|
||||
use slot_clock::SlotClock;
|
||||
use std::collections::HashMap;
|
||||
@@ -12,7 +14,7 @@ use std::ops::Deref;
|
||||
use std::sync::Arc;
|
||||
use std::time::{Duration, Instant};
|
||||
use tokio::timer::Interval;
|
||||
use types::{ChainSpec, CommitteeIndex, Epoch, EthSpec, PublicKey, Slot};
|
||||
use types::{ChainSpec, Epoch, EthSpec, PublicKey, Slot};
|
||||
|
||||
/// Delay this period of time after the slot starts. This allows the node to process the new slot.
|
||||
const TIME_DELAY_FROM_SLOT: Duration = Duration::from_millis(100);
|
||||
@@ -20,35 +22,74 @@ const TIME_DELAY_FROM_SLOT: Duration = Duration::from_millis(100);
|
||||
/// Remove any duties where the `duties_epoch < current_epoch - PRUNE_DEPTH`.
|
||||
const PRUNE_DEPTH: u64 = 4;
|
||||
|
||||
type BaseHashMap = HashMap<PublicKey, HashMap<Epoch, ValidatorDuty>>;
|
||||
type BaseHashMap = HashMap<PublicKey, HashMap<Epoch, DutyAndState>>;
|
||||
|
||||
/// Stores the duties for some validator for an epoch.
|
||||
#[derive(PartialEq, Debug, Clone)]
|
||||
pub struct ValidatorDuty {
|
||||
/// The validator's BLS public key, uniquely identifying them. _48-bytes, hex encoded with 0x prefix, case insensitive._
|
||||
pub validator_pubkey: PublicKey,
|
||||
/// The slot at which the validator must attest.
|
||||
pub attestation_slot: Option<Slot>,
|
||||
/// The index of the committee within `slot` of which the validator is a member.
|
||||
pub attestation_committee_index: Option<CommitteeIndex>,
|
||||
/// The position of the validator in the committee.
|
||||
pub attestation_committee_position: Option<usize>,
|
||||
/// The slots in which a validator must propose a block (can be empty).
|
||||
pub block_proposal_slots: Vec<Slot>,
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct DutyAndState {
|
||||
/// The validator duty.
|
||||
pub duty: ValidatorDuty,
|
||||
/// The current state of the validator duty.
|
||||
state: DutyState,
|
||||
}
|
||||
|
||||
impl TryInto<ValidatorDuty> for remote_beacon_node::ValidatorDuty {
|
||||
#[derive(Debug, Clone)]
|
||||
pub enum DutyState {
|
||||
/// This duty has not been subscribed to the beacon node.
|
||||
NotSubscribed,
|
||||
/// The duty has been subscribed to the beacon node.
|
||||
Subscribed,
|
||||
/// The duty has been subscribed and the validator is an aggregator for this duty. The
|
||||
/// selection proof is provided to construct the `AggregateAndProof` struct.
|
||||
SubscribedAggregator(Signature),
|
||||
}
|
||||
|
||||
impl DutyAndState {
|
||||
/// Returns true if the duty is an aggregation duty (the validator must aggregate all
|
||||
/// attestations.
|
||||
pub fn is_aggregator(&self) -> bool {
|
||||
match self.state {
|
||||
DutyState::NotSubscribed => false,
|
||||
DutyState::Subscribed => false,
|
||||
DutyState::SubscribedAggregator(_) => true,
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns the selection proof if the duty is an aggregation duty.
|
||||
pub fn selection_proof(&self) -> Option<Signature> {
|
||||
match &self.state {
|
||||
DutyState::SubscribedAggregator(proof) => Some(proof.clone()),
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns true if the this duty has been subscribed with the beacon node.
|
||||
pub fn is_subscribed(&self) -> bool {
|
||||
match self.state {
|
||||
DutyState::NotSubscribed => false,
|
||||
DutyState::Subscribed => true,
|
||||
DutyState::SubscribedAggregator(_) => true,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl TryInto<DutyAndState> for ValidatorDutyBytes {
|
||||
type Error = String;
|
||||
|
||||
fn try_into(self) -> Result<ValidatorDuty, Self::Error> {
|
||||
Ok(ValidatorDuty {
|
||||
fn try_into(self) -> Result<DutyAndState, Self::Error> {
|
||||
let duty = ValidatorDuty {
|
||||
validator_pubkey: (&self.validator_pubkey)
|
||||
.try_into()
|
||||
.map_err(|e| format!("Invalid pubkey bytes from server: {:?}", e))?,
|
||||
validator_index: self.validator_index,
|
||||
attestation_slot: self.attestation_slot,
|
||||
attestation_committee_index: self.attestation_committee_index,
|
||||
attestation_committee_position: self.attestation_committee_position,
|
||||
block_proposal_slots: self.block_proposal_slots,
|
||||
aggregator_modulo: self.aggregator_modulo,
|
||||
};
|
||||
Ok(DutyAndState {
|
||||
duty,
|
||||
state: DutyState::NotSubscribed,
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -82,7 +123,7 @@ impl DutiesStore {
|
||||
.filter(|(_validator_pubkey, validator_map)| {
|
||||
validator_map
|
||||
.get(&epoch)
|
||||
.map(|duties| !duties.block_proposal_slots.is_empty())
|
||||
.map(|duties| !duties.duty.block_proposal_slots.is_empty())
|
||||
.unwrap_or_else(|| false)
|
||||
})
|
||||
.count()
|
||||
@@ -96,7 +137,7 @@ impl DutiesStore {
|
||||
.filter(|(_validator_pubkey, validator_map)| {
|
||||
validator_map
|
||||
.get(&epoch)
|
||||
.map(|duties| duties.attestation_slot.is_some())
|
||||
.map(|duties| duties.duty.attestation_slot.is_some())
|
||||
.unwrap_or_else(|| false)
|
||||
})
|
||||
.count()
|
||||
@@ -112,8 +153,8 @@ impl DutiesStore {
|
||||
let epoch = slot.epoch(slots_per_epoch);
|
||||
|
||||
validator_map.get(&epoch).and_then(|duties| {
|
||||
if duties.block_proposal_slots.contains(&slot) {
|
||||
Some(duties.validator_pubkey.clone())
|
||||
if duties.duty.block_proposal_slots.contains(&slot) {
|
||||
Some(duties.duty.validator_pubkey.clone())
|
||||
} else {
|
||||
None
|
||||
}
|
||||
@@ -122,7 +163,49 @@ impl DutiesStore {
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn attesters(&self, slot: Slot, slots_per_epoch: u64) -> Vec<ValidatorDuty> {
|
||||
/// Gets a list of validator duties for an epoch that have not yet been subscribed
|
||||
/// to the beacon node.
|
||||
// Note: Potentially we should modify the data structure to store the unsubscribed epoch duties for validator clients with a large number of validators. This currently adds an O(N) search each slot.
|
||||
fn unsubscribed_epoch_duties(&self, epoch: &Epoch) -> Vec<ValidatorDuty> {
|
||||
self.store
|
||||
.read()
|
||||
.iter()
|
||||
.filter_map(|(_validator_pubkey, validator_map)| {
|
||||
validator_map.get(epoch).and_then(|duty_and_state| {
|
||||
if !duty_and_state.is_subscribed() {
|
||||
Some(duty_and_state)
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
})
|
||||
.map(|duties| duties.duty.clone())
|
||||
.collect()
|
||||
}
|
||||
|
||||
/// Marks a duty as being subscribed to the beacon node. This is called by the attestation
|
||||
/// service once it has been sent.
|
||||
fn set_duty_state(
|
||||
&self,
|
||||
validator: &PublicKey,
|
||||
slot: Slot,
|
||||
state: DutyState,
|
||||
slots_per_epoch: u64,
|
||||
) {
|
||||
let epoch = slot.epoch(slots_per_epoch);
|
||||
|
||||
let mut store = self.store.write();
|
||||
if let Some(map) = store.get_mut(validator) {
|
||||
if let Some(duty) = map.get_mut(&epoch) {
|
||||
if duty.duty.attestation_slot == Some(slot) {
|
||||
// set the duty state
|
||||
duty.state = state;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn attesters(&self, slot: Slot, slots_per_epoch: u64) -> Vec<DutyAndState> {
|
||||
self.store
|
||||
.read()
|
||||
.iter()
|
||||
@@ -132,7 +215,7 @@ impl DutiesStore {
|
||||
let epoch = slot.epoch(slots_per_epoch);
|
||||
|
||||
validator_map.get(&epoch).and_then(|duties| {
|
||||
if duties.attestation_slot == Some(slot) {
|
||||
if duties.duty.attestation_slot == Some(slot) {
|
||||
Some(duties)
|
||||
} else {
|
||||
None
|
||||
@@ -143,16 +226,16 @@ impl DutiesStore {
|
||||
.collect()
|
||||
}

-   fn insert(&self, epoch: Epoch, duties: ValidatorDuty, slots_per_epoch: u64) -> InsertOutcome {
+   fn insert(&self, epoch: Epoch, duties: DutyAndState, slots_per_epoch: u64) -> InsertOutcome {
        let mut store = self.store.write();

-       if !duties_match_epoch(&duties, epoch, slots_per_epoch) {
+       if !duties_match_epoch(&duties.duty, epoch, slots_per_epoch) {
            return InsertOutcome::Invalid;
        }

-       if let Some(validator_map) = store.get_mut(&duties.validator_pubkey) {
+       if let Some(validator_map) = store.get_mut(&duties.duty.validator_pubkey) {
            if let Some(known_duties) = validator_map.get_mut(&epoch) {
-               if *known_duties == duties {
+               if known_duties.duty == duties.duty {
                    InsertOutcome::Identical
                } else {
                    *known_duties = duties;
@@ -164,7 +247,7 @@ impl DutiesStore {
                InsertOutcome::NewEpoch
            }
        } else {
-           let validator_pubkey = duties.validator_pubkey.clone();
+           let validator_pubkey = duties.duty.validator_pubkey.clone();

            let mut validator_map = HashMap::new();
            validator_map.insert(epoch, duties);
@@ -315,10 +398,29 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
    }
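`insert` classifies each incoming duty against what is already stored. The diff shows the `Invalid`, `Identical`, and `NewEpoch` arms; the sketch below fills the two branches elided by the hunk boundaries with assumed `Replaced` and `NewValidator` outcomes, uses `String` stand-ins for the real types, and replaces the `duties_match_epoch` check with a plain `valid` flag:

```rust
use std::collections::HashMap;

// Outcome names follow the diff; `Replaced` and `NewValidator` are assumed
// for the branches hidden by the hunk boundaries.
#[derive(Debug, PartialEq)]
enum InsertOutcome {
    Invalid,
    Identical,
    Replaced,
    NewEpoch,
    NewValidator,
}

// Validator -> (epoch -> duty), with a String standing in for the duty.
type Store = HashMap<String, HashMap<u64, String>>;

fn insert(store: &mut Store, validator: &str, epoch: u64, duty: String, valid: bool) -> InsertOutcome {
    if !valid {
        // Stand-in for the `duties_match_epoch` consistency check.
        return InsertOutcome::Invalid;
    }
    if let Some(validator_map) = store.get_mut(validator) {
        if let Some(known) = validator_map.get_mut(&epoch) {
            if *known == duty {
                InsertOutcome::Identical
            } else {
                *known = duty;
                InsertOutcome::Replaced
            }
        } else {
            validator_map.insert(epoch, duty);
            InsertOutcome::NewEpoch
        }
    } else {
        let mut validator_map = HashMap::new();
        validator_map.insert(epoch, duty);
        store.insert(validator.to_string(), validator_map);
        InsertOutcome::NewValidator
    }
}

fn main() {
    let mut store = Store::new();
    println!("{:?}", insert(&mut store, "alice", 1, "attest@slot3".into(), true));
    println!("{:?}", insert(&mut store, "alice", 1, "attest@slot3".into(), true));
}
```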

    /// Returns all `ValidatorDuty` for the given `slot`.
-   pub fn attesters(&self, slot: Slot) -> Vec<ValidatorDuty> {
+   pub fn attesters(&self, slot: Slot) -> Vec<DutyAndState> {
        self.store.attesters(slot, E::slots_per_epoch())
    }

    /// Returns all `ValidatorDuty` that have not been registered with the beacon node.
    pub fn unsubscribed_epoch_duties(&self, epoch: &Epoch) -> Vec<ValidatorDuty> {
        self.store.unsubscribed_epoch_duties(epoch)
    }

    /// Marks the duty as being subscribed to the beacon node.
    ///
    /// If the duty is to be marked as an aggregator duty, a selection proof is also provided.
    pub fn subscribe_duty(&self, duty: &ValidatorDuty, aggregator_proof: Option<Signature>) {
        let state = match aggregator_proof {
            Some(proof) => DutyState::SubscribedAggregator(proof),
            None => DutyState::Subscribed,
        };
        if let Some(slot) = duty.attestation_slot {
            self.store
                .set_duty_state(&duty.validator_pubkey, slot, state, E::slots_per_epoch())
        }
    }
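`subscribe_duty` folds the optional selection proof into the stored state, so an aggregator's proof travels with its duty rather than being tracked separately. The mapping in isolation, with a `String` standing in for the BLS `Signature`:

```rust
// Stand-in for the duty state; the real enum carries a BLS `Signature`
// as the selection proof in the aggregator variant.
#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum DutyState {
    NotSubscribed,
    Subscribed,
    SubscribedAggregator(String),
}

/// A present proof marks the duty as an aggregator duty; absence means
/// plain subscription.
fn state_for(aggregator_proof: Option<String>) -> DutyState {
    match aggregator_proof {
        Some(proof) => DutyState::SubscribedAggregator(proof),
        None => DutyState::Subscribed,
    }
}

fn main() {
    println!("{:?}", state_for(None));
    println!("{:?}", state_for(Some("0xproof".into())));
}
```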

    /// Start the service that periodically polls the beacon node for validator duties.
    pub fn start_update_service(&self, spec: &ChainSpec) -> Result<Signal, String> {
        let log = self.context.log.clone();
@@ -477,7 +579,7 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
        let mut invalid = 0;

        all_duties.into_iter().try_for_each::<_, Result<_, String>>(|remote_duties| {
-           let duties: ValidatorDuty = remote_duties.try_into()?;
+           let duties: DutyAndState = remote_duties.try_into()?;

            match service_2
                .store
@@ -487,9 +589,9 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
                debug!(
                    log,
                    "First duty assignment for validator";
-                   "proposal_slots" => format!("{:?}", &duties.block_proposal_slots),
-                   "attestation_slot" => format!("{:?}", &duties.attestation_slot),
-                   "validator" => format!("{:?}", &duties.validator_pubkey)
+                   "proposal_slots" => format!("{:?}", &duties.duty.block_proposal_slots),
+                   "attestation_slot" => format!("{:?}", &duties.duty.attestation_slot),
+                   "validator" => format!("{:?}", &duties.duty.validator_pubkey)
                );
                new_validator += 1
            }

@@ -12,8 +12,8 @@ use std::path::PathBuf;
use std::sync::Arc;
use tempdir::TempDir;
use types::{
-   Attestation, BeaconBlock, ChainSpec, Domain, Epoch, EthSpec, Fork, PublicKey, Signature,
-   SignedBeaconBlock, SignedRoot,
+   AggregateAndProof, Attestation, BeaconBlock, ChainSpec, Domain, Epoch, EthSpec, Fork,
+   PublicKey, Signature, SignedAggregateAndProof, SignedBeaconBlock, SignedRoot, Slot,
};

#[derive(Clone)]
@@ -198,4 +198,38 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
            Some(())
        })
    }

    /// Signs a slot for a given validator.
    ///
    /// This is used to subscribe a validator to a beacon node and to determine whether the
    /// validator is to aggregate attestations for this slot.
    pub fn sign_slot(&self, validator_pubkey: &PublicKey, slot: Slot) -> Option<Signature> {
        let validators = self.validators.read();
        let voting_keypair = validators.get(validator_pubkey)?.voting_keypair.as_ref()?;

        let domain = self.spec.get_domain(
            slot.epoch(E::slots_per_epoch()),
            Domain::SelectionProof,
            &self.fork()?,
        );

        let message = slot.signing_root(domain);

        Some(Signature::new(message.as_bytes(), &voting_keypair.sk))
    }
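The selection proof returned by `sign_slot` is what decides aggregation duty: in the v0.10 spec the proof is hashed and the first bytes are reduced modulo a committee-size-derived value (with a target of 16 aggregators per committee). The sketch below substitutes the standard library's `DefaultHasher` for SHA-256 and a byte string for the real BLS signature, so only the shape of the check is meaningful, not its output distribution:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Spec v0.10 constant; committees are thinned to roughly this many aggregators.
const TARGET_AGGREGATORS_PER_COMMITTEE: u64 = 16;

/// Spec-style aggregator check: hash the selection proof and reduce it modulo
/// a committee-derived value. `DefaultHasher` stands in for SHA-256 here, and
/// `selection_proof` for the BLS signature bytes.
fn is_aggregator(selection_proof: &[u8], committee_len: u64) -> bool {
    let modulo = std::cmp::max(1, committee_len / TARGET_AGGREGATORS_PER_COMMITTEE);
    let mut hasher = DefaultHasher::new();
    selection_proof.hash(&mut hasher);
    hasher.finish() % modulo == 0
}

fn main() {
    // With a committee smaller than the target, modulo is 1, so every
    // validator in the committee aggregates.
    println!("{}", is_aggregator(b"fake-proof", 8));
}
```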

    /// Signs an `AggregateAndProof` for a given validator.
    ///
    /// The resulting `SignedAggregateAndProof` is sent on the aggregation channel and cannot be
    /// modified by actors other than the signing validator.
    pub fn sign_aggregate_and_proof(
        &self,
        validator_pubkey: &PublicKey,
        aggregate_and_proof: AggregateAndProof<E>,
    ) -> Option<SignedAggregateAndProof<E>> {
        let validators = self.validators.read();
        let voting_keypair = validators.get(validator_pubkey)?.voting_keypair.as_ref()?;

        Some(aggregate_and_proof.into_signed(&voting_keypair.sk, &self.fork()?))
    }
}