Initial work towards v0.2.0 (#924)

* Remove ping protocol

* Initial renaming of network services

* Correct rebasing relative to latest master

* Start updating types

* Adds HashMapDelay struct to utils

* Initial network restructure

* Network restructure. Adds new types for v0.2.0

* Removes build artefacts

* Shift validation to beacon chain

* Temporarily remove gossip validation

This is to be updated to match current optimisation efforts.

* Adds AggregateAndProof

* Begin rebuilding pubsub encoding/decoding

* Signature hacking

* Shift gossipsub decoding into eth2_libp2p

* Existing EF tests passing with fake_crypto

* Shifts block encoding/decoding into RPC

* Delete outdated API spec

* All release tests passing bar genesis state parsing

* Update and test YamlConfig

* Update to spec v0.10 compatible BLS

* Updates to BLS EF tests

* Add EF test for AggregateVerify

And delete unused hash2curve tests for uncompressed points

* Update EF tests to v0.10.1

* Use optional block root correctly in block proc

* Use genesis fork in deposit domain. All tests pass

* Fast aggregate verify test

* Update REST API docs

* Fix unused import

* Bump spec tags to v0.10.1

* Add `seconds_per_eth1_block` to chainspec

* Update to timestamp based eth1 voting scheme

* Return None from `get_votes_to_consider` if block cache is empty

* Handle overflows in `is_candidate_block`

* Revert to failing tests

* Fix eth1 data sets test

* Choose default vote according to spec

* Fix collect_valid_votes tests

* Fix `get_votes_to_consider` to choose all eligible blocks

* Uncomment winning_vote tests

* Add comments; remove unused code

* Reduce seconds_per_eth1_block for simulation

* Addressed review comments

* Add test for default vote case

* Fix logs

* Remove unused functions

* Meter default eth1 votes

* Fix comments

* Progress on attestation service

* Address review comments; remove unused dependency

* Initial work on removing libp2p lock

* Add LRU caches to store (rollup)

* Update attestation validation for DB changes (WIP)

* Initial version of should_forward_block

* Scaffold

* Progress on attestation validation

Also, consolidate prod+testing slot clocks so that they share much
of the same implementation and can both handle sub-slot time changes.

* Removes lock from libp2p service

* Completed network lock removal

* Finish(?) attestation processing

* Correct network termination future

* Add slot check to block check

* Correct fmt issues

* Remove Drop implementation for network service

* Add first attempt at attestation proc. re-write

* Add version 2 of attestation processing

* Minor fixes

* Add validator pubkey cache

* Make get_indexed_attestation take a committee

* Link signature processing into new attn verification

* First working version

* Ensure pubkey cache is updated

* Add more metrics, slight optimizations

* Clone committee cache during attestation processing

* Update shuffling cache during block processing

* Remove old commented-out code

* Fix shuffling cache insert bug

* Use indexed attestation in fork choice

* Restructure attn processing, add metrics

* Add more detailed metrics

* Tidy, fix failing tests

* Fix failing tests, tidy

* Address reviewers' suggestions

* Disable/delete two outdated tests

* Modification of validator for subscriptions

* Add slot signing to validator client

* Further progress on validation subscription

* Adds necessary validator subscription functionality

* Add new Pubkeys struct to signature_sets

* Refactor with functional approach

* Update beacon chain

* Clean up validator <-> beacon node http types

* Add aggregator status to ValidatorDuty

* Impl Clone for manual slot clock

* Fix minor errors

* Further progress validator client subscription

* Initial subscription and aggregation handling

* Remove decompressed member from pubkey bytes

* Progress to modifying val client for attestation aggregation

* First draft of validator client upgrade for aggregate attestations

* Add hashmap for indices lookup

* Add state cache, remove store cache

* Only build the head committee cache

* Removes lock on a network channel

* Partially implement beacon node subscription http api

* Correct compilation issues

* Change `get_attesting_indices` to use Vec

* Fix failing test

* Partial implementation of timer

* Adds timer, removes exit_future, http api to op pool

* Partial multiple aggregate attestation handling

* Permits bulk messages across the gossipsub network channel

* Correct compile issues

* Improve gossipsub messaging and correct REST API helpers

* Added global gossipsub subscriptions

* Update validator subscriptions data structs

* Tidy

* Re-structure validator subscriptions

* Initial handling of subscriptions

* Re-structure network service

* Add pubkey cache persistence file

* Add more comments

* Integrate persistence file into builder

* Add pubkey cache tests

* Add HashSetDelay and introduce into attestation service

* Handles validator subscriptions

* Add data_dir to beacon chain builder

* Remove Option in pubkey cache persistence file

* Ensure consistency between datadir/data_dir

* Fix failing network test

* Peer subnet discovery gets queued for future subscriptions

* Reorganise attestation service functions

* Initial wiring of attestation service

* First draft of attestation service timing logic

* Correct minor typos

* Tidy

* Fix todos

* Improve tests

* Add PeerInfo to connected peers mapping

* Fix compile error

* Fix compile error from merge

* Split up block processing metrics

* Tidy

* Refactor get_pubkey_from_state

* Remove commented-out code

* Rename state_cache -> checkpoint_cache

* Rename Checkpoint -> Snapshot

* Tidy, add comments

* Tidy up find_head function

* Change some checkpoint -> snapshot

* Add tests

* Expose max_len

* Remove dead code

* Tidy

* Fix bug

* Add sync-speed metric

* Add first attempt at VerifiableBlock

* Start integrating into beacon chain

* Integrate VerifiableBlock

* Rename VerifiableBlock -> PartialBlockVerification

* Add start of typed methods

* Add progress

* Add further progress

* Rename structs

* Add full block verification to block_processing.rs

* Further beacon chain integration

* Update checks for gossip

* Add todo

* Start adding segment verification

* Add passing chain segment test

* Initial integration with batch sync

* Minor changes

* Tidy, add more error checking

* Start adding chain_segment tests

* Finish invalid signature tests

* Include single and gossip verified blocks in tests

* Add gossip verification tests

* Start adding docs

* Finish adding comments to block_processing.rs

* Rename block_processing.rs -> block_verification

* Start removing old block processing code

* Fixes beacon_chain compilation

* Fix project-wide compile errors

* Remove old code

* Correct code to pass all tests

* Fix bug with beacon proposer index

* Fix shim for BlockProcessingError

* Only process one epoch at a time

* Fix loop in chain segment processing

* Correct tests from master merge

* Add caching for state.eth1_data_votes

* Add BeaconChain::validator_pubkey

* Revert "Add caching for state.eth1_data_votes"

This reverts commit cd73dcd643.

Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Author: Age Manning
Date: 2020-03-17 17:24:44 +11:00 (committed by GitHub)
Parent: c198bddf9e
Commit: 95c8e476bc
161 changed files with 9771 additions and 5266 deletions


@@ -0,0 +1,575 @@
//! This service keeps track of which shard subnet the beacon node should be subscribed to at any
//! given time. It schedules subscriptions to shard subnets, requests peer discoveries and
//! determines whether attestations should be aggregated and/or passed to the beacon node.
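//!
//! The service is driven as a `Stream` (see the implementation at the bottom of this file):
//! internal timeouts fire, the handlers queue `AttServiceMessage` events, and the driving
//! network service drains them by polling.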
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::GossipKind, NetworkGlobals};
use futures::prelude::*;
use hashmap_delay::HashSetDelay;
use rand::seq::SliceRandom;
use rest_types::ValidatorSubscription;
use slog::{crit, debug, error, o, warn};
use slot_clock::SlotClock;
use std::boxed::Box;
use std::collections::VecDeque;
use std::sync::Arc;
use std::time::{Duration, Instant};
use types::{Attestation, SubnetId};
use types::{EthSpec, Slot};
/// The minimum number of slots ahead that we attempt to discover peers for a subscription. If a
/// subscription's slot is fewer than this many slots ahead, skip the peer discovery process.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 1;
/// The number of slots ahead that we attempt to discover peers for a subscription. If the slot to
/// attest to is greater than this, we queue a discovery request for this many slots prior to
/// subscribing.
const TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 6;
/// The time (in seconds) before a last-seen validator is considered absent and we unsubscribe
/// from the random gossip topics that we subscribed to on its behalf.
const LAST_SEEN_VALIDATOR_TIMEOUT: u64 = 1800; // 30 mins
/// The number of seconds in advance that we subscribe to a subnet before the required slot.
const ADVANCE_SUBSCRIBE_SECS: u64 = 3;
#[derive(Debug, PartialEq)]
pub enum AttServiceMessage {
/// Subscribe to the specified subnet id.
Subscribe(SubnetId),
/// Unsubscribe from the specified subnet id.
Unsubscribe(SubnetId),
/// Add the `SubnetId` to the ENR bitfield.
EnrAdd(SubnetId),
/// Remove the `SubnetId` from the ENR bitfield.
EnrRemove(SubnetId),
/// Discover peers for a particular subnet.
DiscoverPeers(SubnetId),
}
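// Illustrative flow: a validator subscription for subnet 3 would typically yield
// `DiscoverPeers(3)`, then `Subscribe(3)` shortly before the target slot, and
// `Unsubscribe(3)` once the slot has passed. `EnrAdd`/`EnrRemove` are only emitted for
// long-lived random subnet subscriptions (see the handlers below).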
pub struct AttestationService<T: BeaconChainTypes> {
/// Queued events to return to the driving service.
events: VecDeque<AttServiceMessage>,
/// A collection of public network variables.
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
/// A reference to the beacon chain to process received attestations.
beacon_chain: Arc<BeaconChain<T>>,
/// The collection of currently subscribed random subnets mapped to their expiry deadline.
random_subnets: HashSetDelay<SubnetId>,
/// A collection of timeouts for when to start searching for peers for a particular shard.
discover_peers: HashSetDelay<(SubnetId, Slot)>,
/// A collection of timeouts for when to subscribe to a shard subnet.
subscriptions: HashSetDelay<(SubnetId, Slot)>,
/// A collection of timeouts for when to unsubscribe from a shard subnet.
unsubscriptions: HashSetDelay<(SubnetId, Slot)>,
/// A collection of seen validators. These dictate how many random subnets we should be
/// subscribed to. As these time out, we unsubscribe for the required random subnets and update
/// our ENR.
/// This is a set of validator indices.
known_validators: HashSetDelay<u64>,
/// The logger for the attestation service.
log: slog::Logger,
}
impl<T: BeaconChainTypes> AttestationService<T> {
/* Public functions */
pub fn new(
beacon_chain: Arc<BeaconChain<T>>,
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
log: &slog::Logger,
) -> Self {
let log = log.new(o!("service" => "attestation_service"));
// calculate the random subnet duration from the spec constants
let spec = &beacon_chain.spec;
let random_subnet_duration_millis = spec
.epochs_per_random_subnet_subscription
.saturating_mul(T::EthSpec::slots_per_epoch())
.saturating_mul(spec.milliseconds_per_slot);
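// For example, with mainnet-like values (epochs_per_random_subnet_subscription = 256,
// 32 slots per epoch, 12_000 ms per slot) this is 256 * 32 * 12_000 ms, roughly 27.3
// hours. (Illustrative figures only; the real values come from the chain spec.)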
AttestationService {
events: VecDeque::with_capacity(10),
network_globals,
beacon_chain,
random_subnets: HashSetDelay::new(Duration::from_millis(random_subnet_duration_millis)),
discover_peers: HashSetDelay::default(),
subscriptions: HashSetDelay::default(),
unsubscriptions: HashSetDelay::default(),
known_validators: HashSetDelay::new(Duration::from_secs(LAST_SEEN_VALIDATOR_TIMEOUT)),
log,
}
}
/// Processes a list of validator subscriptions.
///
/// This will:
/// - Register new validators as being known.
/// - Subscribe to the required number of random subnets.
/// - Update the local ENR for new random subnets due to seeing new validators.
/// - Search for peers for required subnets.
/// - Request subscriptions for subnets on specific slots when required.
/// - Build the timeouts for each of these events.
///
/// This returns a result simply for the ergonomics of using ?. The result can be
/// safely dropped.
pub fn validator_subscriptions(
&mut self,
subscriptions: Vec<ValidatorSubscription>,
) -> Result<(), ()> {
for subscription in subscriptions {
// NOTE: We assume all subscriptions have been verified before reaching this service.
// Registers the validator with the attestation service.
// This will subscribe to long-lived random subnets if required.
self.add_known_validator(subscription.validator_index);
let subnet_id = SubnetId::new(
subscription.attestation_committee_index
% self.beacon_chain.spec.attestation_subnet_count,
);
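// e.g., with the spec-default attestation_subnet_count of 64, committee index 67 maps
// to subnet 3 (67 % 64).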
// determine if we should run a discovery lookup request and request it if required
let _ = self.discover_peers_request(subnet_id, subscription.slot);
// set the subscription timer to subscribe to the next subnet if required
let _ = self.subscribe_to_subnet(subnet_id, subscription.slot);
}
Ok(())
}
/// Handles an attestation received on a subnet.
///
/// Note: the body is currently empty; attestation handling is yet to be wired in.
pub fn handle_attestation(
&mut self,
subnet: SubnetId,
attestation: Box<Attestation<T::EthSpec>>,
) {
}
/* Internal private functions */
/// Checks if there are currently queued discovery requests and the time required to make the
/// request.
///
/// If there is sufficient time and no other request exists, queues a peer discovery request
/// for the required subnet.
fn discover_peers_request(
&mut self,
subnet_id: SubnetId,
subscription_slot: Slot,
) -> Result<(), ()> {
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
// if there is enough time to perform a discovery lookup
if subscription_slot >= current_slot.saturating_add(MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD) {
// check if a discovery request already exists
if self
.discover_peers
.contains(&(subnet_id, subscription_slot))
{
// already a request queued, end
return Ok(());
}
// check current event log to see if there is a discovery event queued
if self
.events
.iter()
.any(|event| event == &AttServiceMessage::DiscoverPeers(subnet_id))
{
// already queued a discovery event
return Ok(());
}
// if the subscription slot is within the target look-ahead, start the lookup immediately
if subscription_slot
< current_slot.saturating_add(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD)
{
// then instantly add a discovery request
self.events
.push_back(AttServiceMessage::DiscoverPeers(subnet_id));
} else {
// Queue the discovery event to be executed for
// TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD
let duration_to_discover = {
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_discover = subscription_slot
.saturating_sub(current_slot)
.saturating_sub(1u64)
.saturating_sub(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD);
duration_to_next_slot + slot_duration * (slots_until_discover.as_u64() as u32)
};
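// Worked example (illustrative): with current_slot = 100, subscription_slot = 110 and
// the look-ahead of 6, slots_until_discover = 110 - 100 - 1 - 6 = 3, so discovery fires
// duration_to_next_slot + 3 slots from now, i.e. 6 slots before the attestation slot.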
self.discover_peers
.insert_at((subnet_id, subscription_slot), duration_to_discover);
}
}
Ok(())
}
/// Checks the current random subnets and subscriptions to determine if a new subscription for this
/// subnet is required for the given slot.
///
/// If required, adds a subscription event and an associated unsubscription event.
fn subscribe_to_subnet(
&mut self,
subnet_id: SubnetId,
subscription_slot: Slot,
) -> Result<(), ()> {
// initialise timing variables
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
let advance_subscription_duration = Duration::from_secs(ADVANCE_SUBSCRIBE_SECS);
// calculate the time to subscribe to the subnet
let duration_to_subscribe = {
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_subscribe = subscription_slot
.saturating_sub(current_slot)
.saturating_sub(1u64);
// Saturate at zero to avoid a `Duration` underflow panic when the subscription is due
// in less than `ADVANCE_SUBSCRIBE_SECS`.
(duration_to_next_slot + slot_duration * (slots_until_subscribe.as_u64() as u32))
.checked_sub(advance_subscription_duration)
.unwrap_or_else(|| Duration::from_secs(0))
};
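// Worked example (illustrative): with current_slot = 100 and subscription_slot = 102,
// slots_until_subscribe = 1, so with 12 s slots the subscription fires at
// duration_to_next_slot + 12 s - 3 s, i.e. ADVANCE_SUBSCRIBE_SECS before slot 102 begins.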
// the duration until we no longer need this subscription. We assume a single slot is
// sufficient.
let expected_end_subscription_duration =
duration_to_subscribe + slot_duration + advance_subscription_duration;
// Checks on current subscriptions
// Note: We may be connected to a long-lived random subnet. In this case we still add the
// subscription timeout and check this case when the timeout fires. This is because a
// long-lived random subnet can be unsubscribed at any time when a validator becomes
// inactive. This case is checked on the subscription event (see `handle_subscriptions`).
// Return if we already have a subscription for this subnet_id and slot
if self.subscriptions.contains(&(subnet_id, subscription_slot)) {
return Ok(());
}
// We are not currently subscribed and have no waiting subscription, create one
self.subscriptions
.insert_at((subnet_id, subscription_slot), duration_to_subscribe);
// if there is an unsubscription event for the slot prior, we remove it to prevent
// unsubscriptions immediately after the subscription. We also want to minimize
// subscription churn and maintain consecutive subnet subscriptions.
self.unsubscriptions
.remove(&(subnet_id, subscription_slot.saturating_sub(1u64)));
// add an unsubscription event to remove ourselves from the subnet once completed
self.unsubscriptions.insert_at(
(subnet_id, subscription_slot),
expected_end_subscription_duration,
);
Ok(())
}
/// Updates the `known_validators` mapping and subscribes to a set of random subnets if required.
///
/// This also updates the ENR to indicate our long-lived subscription to the subnet.
fn add_known_validator(&mut self, validator_index: u64) {
if self.known_validators.get(&validator_index).is_none() {
// New validator has subscribed
// Subscribe to random topics and update the ENR if needed.
let spec = &self.beacon_chain.spec;
if self.random_subnets.len() < spec.attestation_subnet_count as usize {
// Still room for subscriptions
self.subscribe_to_random_subnets(
self.beacon_chain.spec.random_subnets_per_validator as usize,
);
}
}
// add the new validator or update the current timeout for a known validator
self.known_validators.insert(validator_index);
}
/// Subscribe to long-lived random subnets and update the local ENR bitfield.
fn subscribe_to_random_subnets(&mut self, no_subnets_to_subscribe: usize) {
let subnet_count = self.beacon_chain.spec.attestation_subnet_count;
// Build a list of random subnets that we are not currently subscribed to.
let available_subnets = (0..subnet_count)
.map(SubnetId::new)
.filter(|subnet_id| self.random_subnets.get(subnet_id).is_none())
.collect::<Vec<_>>();
let to_subscribe_subnets = {
if available_subnets.len() < no_subnets_to_subscribe {
debug!(self.log, "Reached maximum random subnet subscriptions");
available_subnets
} else {
// select a random sample of available subnets
available_subnets
.choose_multiple(&mut rand::thread_rng(), no_subnets_to_subscribe)
.cloned()
.collect::<Vec<_>>()
}
};
for subnet_id in to_subscribe_subnets {
// remove this subnet from any immediate subscription/un-subscription events
self.subscriptions
.retain(|(map_subnet_id, _)| map_subnet_id != &subnet_id);
self.unsubscriptions
.retain(|(map_subnet_id, _)| map_subnet_id != &subnet_id);
// insert a new random subnet
self.random_subnets.insert(subnet_id);
// if we are not already subscribed, then subscribe
let topic_kind = &GossipKind::CommitteeIndex(subnet_id);
if !self
.network_globals
.gossipsub_subscriptions
.read()
.iter()
.any(|topic| topic.kind() == topic_kind)
{
// not already subscribed to the topic
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
// add the subnet to the ENR bitfield
self.events.push_back(AttServiceMessage::EnrAdd(subnet_id));
}
}
/* A collection of functions that handle the various timeouts */
/// Request a discovery query to find peers for a particular subnet.
fn handle_discover_peers(&mut self, subnet_id: SubnetId, target_slot: Slot) {
debug!(self.log, "Searching for peers for subnet"; "subnet" => *subnet_id, "target_slot" => target_slot);
self.events
.push_back(AttServiceMessage::DiscoverPeers(subnet_id));
}
/// A queued subscription is ready.
///
/// We add subscription events even if we are already subscribed to a random subnet (as these
/// can be unsubscribed at any time when validators become inactive). If we are
/// still subscribed at the time the event fires, we don't re-subscribe.
fn handle_subscriptions(&mut self, subnet_id: SubnetId, target_slot: Slot) {
// Check if the subnet currently exists as a long-lasting random subnet
if let Some(expiry) = self.random_subnets.get(&subnet_id) {
// We are subscribed via a random subnet. If it is due to expire while we still need
// the subscription, just extend the expiry.
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
let advance_subscription_duration = Duration::from_secs(ADVANCE_SUBSCRIBE_SECS);
// we require the subnet subscription for at least a slot on top of the initial
// subscription time
let expected_end_subscription_duration = slot_duration + advance_subscription_duration;
if expiry < &(Instant::now() + expected_end_subscription_duration) {
self.random_subnets
.update_timeout(&subnet_id, expected_end_subscription_duration);
}
} else {
// We don't unsubscribe from a subnet if the next slot requires us to remain
// subscribed, so we may already be subscribed to the required subnet. In that
// case, don't issue another subscription request.
let topic_kind = &GossipKind::CommitteeIndex(subnet_id);
if !self
.network_globals
.gossipsub_subscriptions
.read()
.iter()
.any(|topic| topic.kind() == topic_kind)
{
// we are not already subscribed
debug!(self.log, "Subscribing to subnet"; "subnet" => *subnet_id, "target_slot" => target_slot.as_u64());
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
}
}
/// A queued unsubscription is ready.
///
/// Unsubscription events are added, even if we are subscribed to long-lived random subnets. If
/// a random subnet is present, we do not unsubscribe from it.
fn handle_unsubscriptions(&mut self, subnet_id: SubnetId, target_slot: Slot) {
// Check if the subnet currently exists as a long-lasting random subnet
if self.random_subnets.contains(&subnet_id) {
return;
}
debug!(self.log, "Unsubscribing from subnet"; "subnet" => *subnet_id, "processed_slot" => target_slot.as_u64());
// Sanity check: unsubscribing while a subscription for this (subnet, slot) is still
// queued indicates a logic error.
if self.subscriptions.contains(&(subnet_id, target_slot)) {
crit!(self.log, "Unsubscribing from a subnet in subscriptions");
}
self.events
.push_back(AttServiceMessage::Unsubscribe(subnet_id));
}
/// A random subnet has expired.
///
/// This function selects a new subnet to join, or extends the expiry if there are no more
/// available subnets to choose from.
fn handle_random_subnet_expiry(&mut self, subnet_id: SubnetId) {
let subnet_count = self.beacon_chain.spec.attestation_subnet_count;
if self.random_subnets.len() == (subnet_count - 1) as usize {
// We are at capacity, simply increase the timeout of the current subnet
self.random_subnets.insert(subnet_id);
return;
}
// we are not at capacity, unsubscribe from the current subnet, remove the ENR bitfield bit and choose a new random one
// from the available subnets
// Note: This should not occur while a subnet is required, as subscriptions extend the
// timeout for as long as the subnet is needed.
debug!(self.log, "Unsubscribing from random subnet"; "subnet_id" => *subnet_id);
self.events
.push_back(AttServiceMessage::Unsubscribe(subnet_id));
self.events
.push_back(AttServiceMessage::EnrRemove(subnet_id));
self.subscribe_to_random_subnets(1);
}
/// A known validator has not sent a subscription in a while. They are considered offline and the
/// beacon node no longer needs to be subscribed to the allocated random subnets.
///
/// We don't track a specific validator-to-random-subnet mapping, rather the ratio of active
/// validators to random subnets. So when a validator goes offline, we can simply remove the
/// allocated number of random subnets.
fn handle_known_validator_expiry(&mut self) -> Result<(), ()> {
let spec = &self.beacon_chain.spec;
let subnet_count = spec.attestation_subnet_count;
let random_subnets_per_validator = spec.random_subnets_per_validator;
if self.known_validators.len() as u64 * random_subnets_per_validator >= subnet_count {
// the remaining validators still require all of our random subnets; do nothing
return Ok(());
}
let subscribed_subnets = self.random_subnets.keys_vec();
let to_remove_subnets = subscribed_subnets.choose_multiple(
&mut rand::thread_rng(),
random_subnets_per_validator as usize,
);
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
for subnet_id in to_remove_subnets {
// If a subscription is queued for two slots in the future, its associated unsubscription
// will unsubscribe from the expired subnet.
// If there is no subscription for this (subnet, slot), it is safe to add one without
// unsubscribing early from a required subnet.
if self
.subscriptions
.get(&(**subnet_id, current_slot + 2))
.is_none()
{
// set an unsubscribe event
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
let slot_duration =
Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
// Set the unsubscription timeout
let unsubscription_duration = duration_to_next_slot + slot_duration * 2;
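// i.e. this fires at the start of slot `current_slot + 3`, once slot `current_slot + 2`
// (the key used below) has completed.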
self.unsubscriptions
.insert_at((**subnet_id, current_slot + 2), unsubscription_duration);
}
// as the long lasting subnet subscription is being removed, remove the subnet_id from
// the ENR bitfield
self.events
.push_back(AttServiceMessage::EnrRemove(**subnet_id));
}
Ok(())
}
}
impl<T: BeaconChainTypes> Stream for AttestationService<T> {
type Item = AttServiceMessage;
type Error = ();
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
// process any peer discovery events
while let Async::Ready(Some((subnet_id, target_slot))) =
self.discover_peers.poll().map_err(|e| {
error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
})?
{
self.handle_discover_peers(subnet_id, target_slot);
}
// process any subscription events
while let Async::Ready(Some((subnet_id, target_slot))) = self.subscriptions.poll().map_err(|e| {
error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
})?
{
self.handle_subscriptions(subnet_id, target_slot);
}
// process any un-subscription events
while let Async::Ready(Some((subnet_id, target_slot))) = self.unsubscriptions.poll().map_err(|e| {
error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
})?
{
self.handle_unsubscriptions(subnet_id, target_slot);
}
// process any random subnet expiries
while let Async::Ready(Some(subnet)) = self.random_subnets.poll().map_err(|e| {
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
})?
{
self.handle_random_subnet_expiry(subnet);
}
// process any known validator expiries
while let Async::Ready(Some(_validator_index)) = self.known_validators.poll().map_err(|e| {
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
})?
{
let _ = self.handle_known_validator_expiry();
}
// process any generated events
if let Some(event) = self.events.pop_front() {
return Ok(Async::Ready(Some(event)));
}
Ok(Async::NotReady)
}
}
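// Illustrative usage sketch (not part of this file): assuming a futures-0.1/tokio-0.1
// executor and some channel `network_send` for forwarding messages (both assumptions),
// the driving network service could drain events roughly like so:
//
//     let mut tx = network_send; // e.g. an mpsc::Sender<AttServiceMessage> (hypothetical)
//     let fut = attestation_service.for_each(move |msg| {
//         // forward Subscribe/Unsubscribe/DiscoverPeers/Enr* messages to the network
//         tx.try_send(msg).map_err(|_| ())
//     });
//     tokio::spawn(fut);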