Initial work towards v0.2.0 (#924)

* Remove ping protocol

* Initial renaming of network services

* Correct rebasing relative to latest master

* Start updating types

* Adds HashMapDelay struct to utils

* Initial network restructure

* Network restructure. Adds new types for v0.2.0

* Removes build artefacts

* Shift validation to beacon chain

* Temporarily remove gossip validation

This is to be updated to match current optimisation efforts.

* Adds AggregateAndProof

* Begin rebuilding pubsub encoding/decoding

* Signature hacking

* Shift gossipsub decoding into eth2_libp2p

* Existing EF tests passing with fake_crypto

* Shifts block encoding/decoding into RPC

* Delete outdated API spec

* All release tests passing bar genesis state parsing

* Update and test YamlConfig

* Update to spec v0.10 compatible BLS

* Updates to BLS EF tests

* Add EF test for AggregateVerify

And delete unused hash2curve tests for uncompressed points

* Update EF tests to v0.10.1

* Use optional block root correctly in block proc

* Use genesis fork in deposit domain. All tests pass

* Fast aggregate verify test

* Update REST API docs

* Fix unused import

* Bump spec tags to v0.10.1

* Add `seconds_per_eth1_block` to chainspec

* Update to timestamp-based eth1 voting scheme

* Return None from `get_votes_to_consider` if block cache is empty

* Handle overflows in `is_candidate_block`
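
For context, the v0.10 spec defines the candidate check as a timestamp window around the voting period start. A minimal sketch of that check (editorial, not the PR's exact code) using saturating arithmetic so the additions cannot overflow `u64`; the `ChainSpec` field names are assumptions based on this PR and the spec constants:

fn is_candidate_block(block_timestamp: u64, period_start: u64, spec: &ChainSpec) -> bool {
    // SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE, saturating to avoid overflow.
    let follow_distance_seconds = spec
        .seconds_per_eth1_block
        .saturating_mul(spec.eth1_follow_distance);
    // Candidate iff the block is between one and two follow distances old.
    block_timestamp.saturating_add(follow_distance_seconds) <= period_start
        && block_timestamp.saturating_add(follow_distance_seconds.saturating_mul(2))
            >= period_start
}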

* Revert to failing tests

* Fix eth1 data sets test

* Choose default vote according to spec

* Fix collect_valid_votes tests

* Fix `get_votes_to_consider` to choose all eligible blocks

* Uncomment winning_vote tests

* Add comments; remove unused code

* Reduce seconds_per_eth1_block for simulation

* Addressed review comments

* Add test for default vote case

* Fix logs

* Remove unused functions

* Meter default eth1 votes

* Fix comments

* Progress on attestation service

* Address review comments; remove unused dependency

* Initial work on removing libp2p lock

* Add LRU caches to store (rollup)

* Update attestation validation for DB changes (WIP)

* Initial version of should_forward_block

* Scaffold

* Progress on attestation validation

Also, consolidate prod+testing slot clocks so that they share much
of the same implementation and can both handle sub-slot time changes.

* Removes lock from libp2p service

* Completed network lock removal

* Finish(?) attestation processing

* Correct network termination future

* Add slot check to block check

* Correct fmt issues

* Remove Drop implementation for network service

* Add first attempt at attestation proc. re-write

* Add version 2 of attestation processing

* Minor fixes

* Add validator pubkey cache

* Make get_indexed_attestation take a committee

* Link signature processing into new attn verification

* First working version

* Ensure pubkey cache is updated

* Add more metrics, slight optimizations

* Clone committee cache during attestation processing

* Update shuffling cache during block processing

* Remove old commented-out code

* Fix shuffling cache insert bug

* Use indexed attestation in fork choice

* Restructure attn processing, add metrics

* Add more detailed metrics

* Tidy, fix failing tests

* Fix failing tests, tidy

* Address reviewers' suggestions

* Disable/delete two outdated tests

* Modification of validator for subscriptions

* Add slot signing to validator client

* Further progress on validation subscription

* Adds necessary validator subscription functionality

* Add new Pubkeys struct to signature_sets

* Refactor with functional approach

* Update beacon chain

* Clean up validator <-> beacon node http types

* Add aggregator status to ValidatorDuty

* Impl Clone for manual slot clock

* Fix minor errors

* Further progress validator client subscription

* Initial subscription and aggregation handling

* Remove decompressed member from pubkey bytes

* Progress to modifying val client for attestation aggregation

* First draft of validator client upgrade for aggregate attestations

* Add hashmap for indices lookup

* Add state cache, remove store cache

* Only build the head committee cache

* Removes lock on a network channel

* Partially implement beacon node subscription http api

* Correct compilation issues

* Change `get_attesting_indices` to use Vec

* Fix failing test

* Partial implementation of timer

* Adds timer, removes exit_future, http api to op pool

* Partial multiple aggregate attestation handling

* Permits bulk messages across gossipsub network channel

* Correct compile issues

* Improve gossipsub messaging and correct REST API helpers

* Added global gossipsub subscriptions

* Update validator subscriptions data structs

* Tidy

* Re-structure validator subscriptions

* Initial handling of subscriptions

* Re-structure network service

* Add pubkey cache persistence file

* Add more comments

* Integrate persistence file into builder

* Add pubkey cache tests

* Add HashSetDelay and introduce into attestation service
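
The `HashSetDelay` introduced here behaves like a set whose entries expire after a per-entry timeout and are then yielded via a `Stream` implementation. A rough usage sketch, with the API shape inferred from its call sites in this commit (see the `hashmap_delay` crate for the real type):

use hashmap_delay::HashSetDelay;
use std::time::Duration;
use types::{Slot, SubnetId};

fn hash_set_delay_demo() {
    let subnet_id = SubnetId::new(1);
    let slot = Slot::new(100);
    let mut timeouts: HashSetDelay<(SubnetId, Slot)> = HashSetDelay::default();
    // Expire this (subnet, slot) pair six seconds from now.
    timeouts.insert_at((subnet_id, slot), Duration::from_secs(6));
    assert!(timeouts.contains(&(subnet_id, slot)));
    // When polled as a `Stream`, expired entries are yielded and removed from the set.
}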

* Handles validator subscriptions

* Add data_dir to beacon chain builder

* Remove Option in pubkey cache persistence file

* Ensure consistency between datadir/data_dir

* Fix failing network test

* Peer subnet discovery gets queued for future subscriptions

* Reorganise attestation service functions

* Initial wiring of attestation service

* First draft of attestation service timing logic

* Correct minor typos

* Tidy

* Fix todos

* Improve tests

* Add PeerInfo to connected peers mapping

* Fix compile error

* Fix compile error from merge

* Split up block processing metrics

* Tidy

* Refactor get_pubkey_from_state

* Remove commented-out code

* Rename state_cache -> checkpoint_cache

* Rename Checkpoint -> Snapshot

* Tidy, add comments

* Tidy up find_head function

* Change some checkpoint -> snapshot

* Add tests

* Expose max_len

* Remove dead code

* Tidy

* Fix bug

* Add sync-speed metric

* Add first attempt at VerifiableBlock

* Start integrating into beacon chain

* Integrate VerifiableBlock

* Rename VerifiableBlock -> PartialBlockVerification

* Add start of typed methods

* Add progress

* Add further progress

* Rename structs

* Add full block verification to block_processing.rs

* Further beacon chain integration

* Update checks for gossip

* Add todo

* Start adding segment verification

* Add passing chain segment test

* Initial integration with batch sync

* Minor changes

* Tidy, add more error checking

* Start adding chain_segment tests

* Finish invalid signature tests

* Include single and gossip verified blocks in tests

* Add gossip verification tests

* Start adding docs

* Finish adding comments to block_processing.rs

* Rename block_processing.rs -> block_verification

* Start removing old block processing code

* Fixes beacon_chain compilation

* Fix project-wide compile errors

* Remove old code

* Correct code to pass all tests

* Fix bug with beacon proposer index

* Fix shim for BlockProcessingError

* Only process one epoch at a time

* Fix loop in chain segment processing

* Correct tests from master merge

* Add caching for state.eth1_data_votes

* Add BeaconChain::validator_pubkey

* Revert "Add caching for state.eth1_data_votes"

This reverts commit cd73dcd643.

Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Author: Age Manning
Date: 2020-03-17 17:24:44 +11:00
Committed by: GitHub
Parent: c198bddf9e
Commit: 95c8e476bc
161 changed files with 9771 additions and 5266 deletions

View File

@@ -0,0 +1,575 @@
//! This service keeps track of which shard subnet the beacon node should be subscribed to at any
//! given time. It schedules subscriptions to shard subnets, requests peer discoveries and
//! determines whether attestations should be aggregated and/or passed to the beacon node.
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::GossipKind, NetworkGlobals};
use futures::prelude::*;
use hashmap_delay::HashSetDelay;
use rand::seq::SliceRandom;
use rest_types::ValidatorSubscription;
use slog::{crit, debug, error, o, warn};
use slot_clock::SlotClock;
use std::boxed::Box;
use std::collections::VecDeque;
use std::sync::Arc;
use std::time::{Duration, Instant};
use types::{Attestation, SubnetId};
use types::{EthSpec, Slot};
/// The minimum number of slots ahead that we attempt to discover peers for a subscription. If the
/// slot is less than this number, skip the peer discovery process.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 1;
/// The number of slots ahead that we attempt to discover peers for a subscription. If the slot to
/// attest to is greater than this, we queue a discovery request for this many slots prior to
/// subscribing.
const TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 6;
/// The time (in seconds) before a last seen validator is considered absent and we unsubscribe from the random
/// gossip topics that we subscribed to due to the validator connection.
const LAST_SEEN_VALIDATOR_TIMEOUT: u64 = 1800; // 30 mins
/// The number of seconds in advance that we subscribe to a subnet before the required slot.
const ADVANCE_SUBSCRIBE_SECS: u64 = 3;
#[derive(Debug, PartialEq)]
pub enum AttServiceMessage {
/// Subscribe to the specified subnet id.
Subscribe(SubnetId),
/// Unsubscribe from the specified subnet id.
Unsubscribe(SubnetId),
/// Add the `SubnetId` to the ENR bitfield.
EnrAdd(SubnetId),
/// Remove the `SubnetId` from the ENR bitfield.
EnrRemove(SubnetId),
/// Discover peers for a particular subnet.
DiscoverPeers(SubnetId),
}
pub struct AttestationService<T: BeaconChainTypes> {
/// Queued events to return to the driving service.
events: VecDeque<AttServiceMessage>,
/// A collection of public network variables.
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
/// A reference to the beacon chain to process received attestations.
beacon_chain: Arc<BeaconChain<T>>,
/// The collection of currently subscribed random subnets mapped to their expiry deadline.
random_subnets: HashSetDelay<SubnetId>,
/// A collection of timeouts for when to start searching for peers for a particular shard.
discover_peers: HashSetDelay<(SubnetId, Slot)>,
/// A collection of timeouts for when to subscribe to a shard subnet.
subscriptions: HashSetDelay<(SubnetId, Slot)>,
/// A collection of timeouts for when to unsubscribe from a shard subnet.
unsubscriptions: HashSetDelay<(SubnetId, Slot)>,
/// A collection of seen validators. These dictate how many random subnets we should be
/// subscribed to. As these time out, we unsubscribe from the corresponding random subnets
/// and update our ENR.
/// This is a set of validator indices.
known_validators: HashSetDelay<u64>,
/// The logger for the attestation service.
log: slog::Logger,
}
impl<T: BeaconChainTypes> AttestationService<T> {
/* Public functions */
pub fn new(
beacon_chain: Arc<BeaconChain<T>>,
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
log: &slog::Logger,
) -> Self {
let log = log.new(o!("service" => "attestation_service"));
// calculate the random subnet duration from the spec constants
let spec = &beacon_chain.spec;
let random_subnet_duration_millis = spec
.epochs_per_random_subnet_subscription
.saturating_mul(T::EthSpec::slots_per_epoch())
.saturating_mul(spec.milliseconds_per_slot);
AttestationService {
events: VecDeque::with_capacity(10),
network_globals,
beacon_chain,
random_subnets: HashSetDelay::new(Duration::from_millis(random_subnet_duration_millis)),
discover_peers: HashSetDelay::default(),
subscriptions: HashSetDelay::default(),
unsubscriptions: HashSetDelay::default(),
known_validators: HashSetDelay::new(Duration::from_secs(LAST_SEEN_VALIDATOR_TIMEOUT)),
log,
}
}
/// Processes a list of validator subscriptions.
///
/// This will:
/// - Register new validators as being known.
/// - Subscribe to the required number of random subnets.
/// - Update the local ENR for new random subnets due to seeing new validators.
/// - Search for peers for required subnets.
/// - Request subscriptions for subnets on specific slots when required.
/// - Build the timeouts for each of these events.
///
/// This returns a result simply for the ergonomics of using ?. The result can be
/// safely dropped.
pub fn validator_subscriptions(
&mut self,
subscriptions: Vec<ValidatorSubscription>,
) -> Result<(), ()> {
for subscription in subscriptions {
//NOTE: We assume all subscriptions have been verified before reaching this service
// Registers the validator with the attestation service.
// This will subscribe to long-lived random subnets if required.
self.add_known_validator(subscription.validator_index);
let subnet_id = SubnetId::new(
subscription.attestation_committee_index
% self.beacon_chain.spec.attestation_subnet_count,
);
// determine if we should run a discovery lookup request and request it if required
let _ = self.discover_peers_request(subnet_id, subscription.slot);
// set the subscription timer to subscribe to the next subnet if required
let _ = self.subscribe_to_subnet(subnet_id, subscription.slot);
}
Ok(())
}
pub fn handle_attestation(
&mut self,
subnet: SubnetId,
attestation: Box<Attestation<T::EthSpec>>,
) {
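// Stub: attestation handling is not yet implemented in this service.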
}
/* Internal private functions */
/// Checks whether a discovery request is already queued for this subnet and whether there is
/// enough time remaining to make the request.
///
/// If there is sufficient time and no other request exists, queues a peer discovery request
/// for the required subnet.
fn discover_peers_request(
&mut self,
subnet_id: SubnetId,
subscription_slot: Slot,
) -> Result<(), ()> {
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
// if there is enough time to perform a discovery lookup
if subscription_slot >= current_slot.saturating_add(MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD) {
// check if a discovery request already exists
if self
.discover_peers
.get(&(subnet_id, subscription_slot))
.is_some()
{
// already a request queued, end
return Ok(());
}
// check current event log to see if there is a discovery event queued
if self
.events
.iter()
.find(|event| event == &&AttServiceMessage::DiscoverPeers(subnet_id))
.is_some()
{
// already queued a discovery event
return Ok(());
}
// if the subscription slot is within the target look-ahead, start looking for peers now
if subscription_slot
< current_slot.saturating_add(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD)
{
// then instantly add a discovery request
self.events
.push_back(AttServiceMessage::DiscoverPeers(subnet_id));
} else {
// Queue the discovery event to fire TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD slots
// before the subscription slot
let duration_to_discover = {
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_discover = subscription_slot
.saturating_sub(current_slot)
.saturating_sub(1u64)
.saturating_sub(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD);
duration_to_next_slot + slot_duration * (slots_until_discover.as_u64() as u32)
};
self.discover_peers
.insert_at((subnet_id, subscription_slot), duration_to_discover);
}
}
Ok(())
}
/// Checks the current random subnets and subscriptions to determine if a new subscription for this
/// subnet is required for the given slot.
///
/// If required, adds a subscription event and an associated unsubscription event.
fn subscribe_to_subnet(
&mut self,
subnet_id: SubnetId,
subscription_slot: Slot,
) -> Result<(), ()> {
// initialise timing variables
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
let advance_subscription_duration = Duration::from_secs(ADVANCE_SUBSCRIBE_SECS);
// calculate the time to subscribe to the subnet
let duration_to_subscribe = {
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_subscribe = subscription_slot
.saturating_sub(current_slot)
.saturating_sub(1u64);
duration_to_next_slot + slot_duration * (slots_until_subscribe.as_u64() as u32)
- advance_subscription_duration
};
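// Worked example (editorial, assuming 12 s slots): with current_slot = 10 and
// subscription_slot = 13, slots_until_subscribe = 13 - 10 - 1 = 2, so we subscribe
// at duration_to_next_slot + 2 * 12 s - 3 s from now, i.e. 3 s before slot 13 begins.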
// the duration until we no longer need this subscription. We assume a single slot is
// sufficient.
let expected_end_subscription_duration =
duration_to_subscribe + slot_duration + advance_subscription_duration;
// Checks on current subscriptions
// Note: We may be connected to a long-lived random subnet. In this case we still add the
// subscription timeout and check this case when the timeout fires. This is because a
// long-lived random subnet can be unsubscribed at any time when a validator becomes
// inactive. This case is checked on the subscription event (see `handle_subscriptions`).
// Return if we already have a subscription for this subnet_id and slot
if self.subscriptions.contains(&(subnet_id, subscription_slot)) {
return Ok(());
}
// We are not currently subscribed and have no waiting subscription, create one
self.subscriptions
.insert_at((subnet_id, subscription_slot), duration_to_subscribe);
// if there is an unsubscription event for the prior slot, we remove it to prevent an
// unsubscription immediately after the subscription. We also want to minimize
// subscription churn and maintain consecutive subnet subscriptions.
self.unsubscriptions
.remove(&(subnet_id, subscription_slot.saturating_sub(1u64)));
// add an unsubscription event to remove ourselves from the subnet once completed
self.unsubscriptions.insert_at(
(subnet_id, subscription_slot),
expected_end_subscription_duration,
);
Ok(())
}
/// Updates the `known_validators` mapping and subscribes to a set of random subnets if required.
///
/// This also updates the ENR to indicate our long-lived subscription to the subnet
fn add_known_validator(&mut self, validator_index: u64) {
if self.known_validators.get(&validator_index).is_none() {
// New validator has subscribed
// Subscribe to random topics and update the ENR if needed.
let spec = &self.beacon_chain.spec;
if self.random_subnets.len() < spec.attestation_subnet_count as usize {
// Still room for subscriptions
self.subscribe_to_random_subnets(
self.beacon_chain.spec.random_subnets_per_validator as usize,
);
}
}
// add the new validator or update the current timeout for a known validator
self.known_validators.insert(validator_index);
}
/// Subscribe to long-lived random subnets and update the local ENR bitfield.
fn subscribe_to_random_subnets(&mut self, no_subnets_to_subscribe: usize) {
let subnet_count = self.beacon_chain.spec.attestation_subnet_count;
// Build a list of random subnets that we are not currently subscribed to.
let available_subnets = (0..subnet_count)
.map(SubnetId::new)
.filter(|subnet_id| self.random_subnets.get(subnet_id).is_none())
.collect::<Vec<_>>();
let to_subscribe_subnets = {
if available_subnets.len() < no_subnets_to_subscribe {
debug!(self.log, "Reached maximum random subnet subscriptions");
available_subnets
} else {
// select a random sample of available subnets
available_subnets
.choose_multiple(&mut rand::thread_rng(), no_subnets_to_subscribe)
.cloned()
.collect::<Vec<_>>()
}
};
for subnet_id in to_subscribe_subnets {
// remove this subnet from any immediate subscription/un-subscription events
self.subscriptions
.retain(|(map_subnet_id, _)| map_subnet_id != &subnet_id);
self.unsubscriptions
.retain(|(map_subnet_id, _)| map_subnet_id != &subnet_id);
// insert a new random subnet
self.random_subnets.insert(subnet_id);
// if we are not already subscribed, then subscribe
let topic_kind = &GossipKind::CommitteeIndex(subnet_id);
if self
.network_globals
.gossipsub_subscriptions
.read()
.iter()
.find(|topic| topic.kind() == topic_kind)
.is_none()
{
// not already subscribed to the topic
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
// add the subnet to the ENR bitfield
self.events.push_back(AttServiceMessage::EnrAdd(subnet_id));
}
}
/* A collection of functions that handle the various timeouts */
/// Request a discovery query to find peers for a particular subnet.
fn handle_discover_peers(&mut self, subnet_id: SubnetId, target_slot: Slot) {
debug!(self.log, "Searching for peers for subnet"; "subnet" => *subnet_id, "target_slot" => target_slot);
self.events
.push_back(AttServiceMessage::DiscoverPeers(subnet_id));
}
/// A queued subscription is ready.
///
/// We add subscription events even if we are already subscribed via a random subnet (since
/// random subnets can be unsubscribed at any time when validators become inactive). If we are
/// still subscribed at the time the event fires, we don't re-subscribe.
fn handle_subscriptions(&mut self, subnet_id: SubnetId, target_slot: Slot) {
// Check if the subnet currently exists as a long-lasting random subnet
if let Some(expiry) = self.random_subnets.get(&subnet_id) {
// we are subscribed via a random subnet; if it is due to expire while we still need
// the subscription, just extend the expiry
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
let advance_subscription_duration = Duration::from_secs(ADVANCE_SUBSCRIBE_SECS);
// we require the subnet subscription for at least a slot on top of the initial
// subscription time
let expected_end_subscription_duration = slot_duration + advance_subscription_duration;
if expiry < &(Instant::now() + expected_end_subscription_duration) {
self.random_subnets
.update_timeout(&subnet_id, expected_end_subscription_duration);
}
} else {
// We also don't unsubscribe from a subnet if the next slot requires us to be
// subscribed, so we may still be subscribed to the required subnet. In that
// case we do not issue another subscription request.
let topic_kind = &GossipKind::CommitteeIndex(subnet_id);
if self
.network_globals
.gossipsub_subscriptions
.read()
.iter()
.find(|topic| topic.kind() == topic_kind)
.is_none()
{
// we are not already subscribed
debug!(self.log, "Subscribing to subnet"; "subnet" => *subnet_id, "target_slot" => target_slot.as_u64());
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
}
}
/// A queued unsubscription is ready.
///
/// Unsubscription events are added, even if we are subscribed to long-lived random subnets. If
/// a random subnet is present, we do not unsubscribe from it.
fn handle_unsubscriptions(&mut self, subnet_id: SubnetId, target_slot: Slot) {
// Check if the subnet currently exists as a long-lasting random subnet
if self.random_subnets.contains(&subnet_id) {
return;
}
debug!(self.log, "Unsubscribing from subnet"; "subnet" => *subnet_id, "processed_slot" => target_slot.as_u64());
// various logic checks
if self.subscriptions.contains(&(subnet_id, target_slot)) {
crit!(self.log, "Unsubscribing from a subnet in subscriptions");
}
self.events
.push_back(AttServiceMessage::Unsubscribe(subnet_id));
}
/// A random subnet has expired.
///
/// This function selects a new subnet to join, or extends the expiry if there are no more
/// available subnets to choose from.
fn handle_random_subnet_expiry(&mut self, subnet_id: SubnetId) {
let subnet_count = self.beacon_chain.spec.attestation_subnet_count;
if self.random_subnets.len() == (subnet_count - 1) as usize {
// We are at capacity, simply increase the timeout of the current subnet
self.random_subnets.insert(subnet_id);
return;
}
// we are not at capacity: unsubscribe from the current subnet, remove the ENR bitfield bit
// and choose a new random subnet from those available
// Note: This should not occur while a subnet is still required, as subscriptions update the
// timeout to last as long as they are needed.
debug!(self.log, "Unsubscribing from random subnet"; "subnet_id" => *subnet_id);
self.events
.push_back(AttServiceMessage::Unsubscribe(subnet_id));
self.events
.push_back(AttServiceMessage::EnrRemove(subnet_id));
self.subscribe_to_random_subnets(1);
}
/// A known validator has not sent a subscription in a while. They are considered offline and the
/// beacon node no longer needs to be subscribed to the allocated random subnets.
///
/// We don't track a mapping from specific validators to random subnets; rather, we track the
/// ratio of active validators to random subnets. So when a validator goes offline, we can
/// simply remove its allocation of random subnets.
fn handle_known_validator_expiry(&mut self) -> Result<(), ()> {
let spec = &self.beacon_chain.spec;
let subnet_count = spec.attestation_subnet_count;
let random_subnets_per_validator = spec.random_subnets_per_validator;
if self.known_validators.len() as u64 * random_subnets_per_validator >= subnet_count {
// have too many validators, ignore
return Ok(());
}
let subscribed_subnets = self.random_subnets.keys_vec();
let to_remove_subnets = subscribed_subnets.choose_multiple(
&mut rand::thread_rng(),
random_subnets_per_validator as usize,
);
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
for subnet_id in to_remove_subnets {
// If a subscription is queued for two slots in the future, its associated unsubscription
// will unsubscribe from the expired subnet.
// If there is no subscription for this (subnet, slot), it is safe to add one without
// unsubscribing early from a required subnet.
if self
.subscriptions
.get(&(**subnet_id, current_slot + 2))
.is_none()
{
// set an unsubscribe event
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
let slot_duration =
Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
// Set the unsubscription timeout
let unsubscription_duration = duration_to_next_slot + slot_duration * 2;
self.unsubscriptions
.insert_at((**subnet_id, current_slot + 2), unsubscription_duration);
}
// as the long lasting subnet subscription is being removed, remove the subnet_id from
// the ENR bitfield
self.events
.push_back(AttServiceMessage::EnrRemove(**subnet_id));
}
Ok(())
}
}
impl<T: BeaconChainTypes> Stream for AttestationService<T> {
type Item = AttServiceMessage;
type Error = ();
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
// process any peer discovery events
while let Async::Ready(Some((subnet_id, target_slot))) =
self.discover_peers.poll().map_err(|e| {
error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
})?
{
self.handle_discover_peers(subnet_id, target_slot);
}
// process any subscription events
while let Async::Ready(Some((subnet_id, target_slot))) = self.subscriptions.poll().map_err(|e| {
error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
})?
{
self.handle_subscriptions(subnet_id, target_slot);
}
// process any un-subscription events
while let Async::Ready(Some((subnet_id, target_slot))) = self.unsubscriptions.poll().map_err(|e| {
error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
})?
{
self.handle_unsubscriptions(subnet_id, target_slot);
}
// process any random subnet expiries
while let Async::Ready(Some(subnet)) = self.random_subnets.poll().map_err(|e| {
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
})?
{
self.handle_random_subnet_expiry(subnet);
}
// process any known validator expiries
while let Async::Ready(Some(_validator_index)) = self.known_validators.poll().map_err(|e| {
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
})?
{
let _ = self.handle_known_validator_expiry();
}
// process any generated events
if let Some(event) = self.events.pop_front() {
return Ok(Async::Ready(Some(event)));
}
Ok(Async::NotReady)
}
}
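
For orientation, a minimal sketch (editorial, not part of this commit) of how a driving task could consume the `AttestationService` stream and act on its messages; the `tokio::runtime::TaskExecutor` wiring is an assumption based on how other services in this commit are spawned, and assumes the service is `Send + 'static`:

use futures::Stream;

fn spawn_attestation_driver<T: BeaconChainTypes>(
    service: AttestationService<T>,
    executor: &tokio::runtime::TaskExecutor,
) {
    executor.spawn(service.for_each(|message| {
        // The real network service maps these onto gossipsub subscriptions,
        // ENR updates and discovery queries.
        match message {
            AttServiceMessage::Subscribe(_subnet_id) => { /* subscribe to the subnet topic */ }
            AttServiceMessage::Unsubscribe(_subnet_id) => { /* unsubscribe from the topic */ }
            AttServiceMessage::EnrAdd(_subnet_id) => { /* set the ENR bitfield bit */ }
            AttServiceMessage::EnrRemove(_subnet_id) => { /* clear the ENR bitfield bit */ }
            AttServiceMessage::DiscoverPeers(_subnet_id) => { /* start a discovery query */ }
        }
        Ok(())
    }));
}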

View File

@@ -1,6 +1,4 @@
// generates error types
use eth2_libp2p;
use error_chain::error_chain;
error_chain! {

View File

@@ -1,12 +1,11 @@
/// This crate provides the network server for Lighthouse.
pub mod error;
pub mod message_handler;
pub mod message_processor;
pub mod persisted_dht;
pub mod service;
pub mod sync;
mod attestation_service;
mod persisted_dht;
mod router;
mod sync;
pub use eth2_libp2p::NetworkConfig;
pub use message_processor::MessageProcessor;
pub use service::NetworkMessage;
pub use service::Service;
pub use service::{NetworkMessage, NetworkService};

View File

@@ -1,367 +0,0 @@
#![allow(clippy::unit_arg)]
use crate::error;
use crate::service::NetworkMessage;
use crate::MessageProcessor;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{
behaviour::PubsubMessage,
rpc::{RPCError, RPCErrorResponse, RPCRequest, RPCResponse, RequestId, ResponseTermination},
MessageId, PeerId, RPCEvent,
};
use futures::future::Future;
use futures::stream::Stream;
use slog::{debug, o, trace, warn};
use ssz::{Decode, DecodeError};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::{Attestation, AttesterSlashing, ProposerSlashing, SignedBeaconBlock, VoluntaryExit};
/// Handles messages received from the network and the client, and organises syncing. The
/// primary functionality of this struct is to validate and decode messages from the network
/// before passing them to the internal message processor. The message processor spawns a
/// syncing thread which manages which blocks need to be requested and processed.
pub struct MessageHandler<T: BeaconChainTypes> {
/// A channel to the network service to allow for gossip propagation.
network_send: mpsc::UnboundedSender<NetworkMessage>,
/// Processes validated and decoded messages from the network. Has direct access to the
/// sync manager.
message_processor: MessageProcessor<T>,
/// The `MessageHandler` logger.
log: slog::Logger,
}
/// Types of messages the handler can receive.
#[derive(Debug)]
pub enum HandlerMessage {
/// We have initiated a connection to a new peer.
PeerDialed(PeerId),
/// Peer has disconnected.
PeerDisconnected(PeerId),
/// An RPC response/request has been received.
RPC(PeerId, RPCEvent),
/// A gossip message has been received. The fields are: message id, the peer that sent us this
/// message and the message itself.
PubsubMessage(MessageId, PeerId, PubsubMessage),
}
impl<T: BeaconChainTypes> MessageHandler<T> {
/// Initializes and runs the MessageHandler.
pub fn spawn(
beacon_chain: Arc<BeaconChain<T>>,
network_send: mpsc::UnboundedSender<NetworkMessage>,
executor: &tokio::runtime::TaskExecutor,
log: slog::Logger,
) -> error::Result<mpsc::UnboundedSender<HandlerMessage>> {
let message_handler_log = log.new(o!("service"=> "msg_handler"));
trace!(message_handler_log, "Service starting");
let (handler_send, handler_recv) = mpsc::unbounded_channel();
// Initialise the message processor, which itself spawns the syncing thread.
let message_processor =
MessageProcessor::new(executor, beacon_chain, network_send.clone(), &log);
// generate the Message handler
let mut handler = MessageHandler {
network_send,
message_processor,
log: message_handler_log,
};
// spawn handler task and move the message handler instance into the spawned thread
executor.spawn(
handler_recv
.for_each(move |msg| Ok(handler.handle_message(msg)))
.map_err(move |_| {
debug!(log, "Network message handler terminated.");
}),
);
Ok(handler_send)
}
/// Handle all messages incoming from the network service.
fn handle_message(&mut self, message: HandlerMessage) {
match message {
// we have initiated a connection to a peer
HandlerMessage::PeerDialed(peer_id) => {
self.message_processor.on_connect(peer_id);
}
// A peer has disconnected
HandlerMessage::PeerDisconnected(peer_id) => {
self.message_processor.on_disconnect(peer_id);
}
// An RPC message request/response has been received
HandlerMessage::RPC(peer_id, rpc_event) => {
self.handle_rpc_message(peer_id, rpc_event);
}
// A gossip message has been received
HandlerMessage::PubsubMessage(id, peer_id, gossip) => {
self.handle_gossip(id, peer_id, gossip);
}
}
}
/* RPC - Related functionality */
/// Handle RPC messages
fn handle_rpc_message(&mut self, peer_id: PeerId, rpc_message: RPCEvent) {
match rpc_message {
RPCEvent::Request(id, req) => self.handle_rpc_request(peer_id, id, req),
RPCEvent::Response(id, resp) => self.handle_rpc_response(peer_id, id, resp),
RPCEvent::Error(id, error) => self.handle_rpc_error(peer_id, id, error),
}
}
/// A new RPC request has been received from the network.
fn handle_rpc_request(&mut self, peer_id: PeerId, request_id: RequestId, request: RPCRequest) {
match request {
RPCRequest::Status(status_message) => {
self.message_processor
.on_status_request(peer_id, request_id, status_message)
}
RPCRequest::Goodbye(goodbye_reason) => {
debug!(
self.log, "PeerGoodbye";
"peer" => format!("{:?}", peer_id),
"reason" => format!("{:?}", goodbye_reason),
);
self.message_processor.on_disconnect(peer_id);
}
RPCRequest::BlocksByRange(request) => self
.message_processor
.on_blocks_by_range_request(peer_id, request_id, request),
RPCRequest::BlocksByRoot(request) => self
.message_processor
.on_blocks_by_root_request(peer_id, request_id, request),
}
}
/// An RPC response has been received from the network.
// we match on id and ignore responses past the timeout.
fn handle_rpc_response(
&mut self,
peer_id: PeerId,
request_id: RequestId,
error_response: RPCErrorResponse,
) {
// an error could have occurred.
match error_response {
RPCErrorResponse::InvalidRequest(error) => {
warn!(self.log, "Peer indicated invalid request";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::ServerError(error) => {
warn!(self.log, "Peer internal server error";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Unknown(error) => {
warn!(self.log, "Unknown peer error";"peer" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Success(response) => {
match response {
RPCResponse::Status(status_message) => {
self.message_processor
.on_status_response(peer_id, status_message);
}
RPCResponse::BlocksByRange(response) => {
match self.decode_beacon_block(response) {
Ok(beacon_block) => {
self.message_processor.on_blocks_by_range_response(
peer_id,
request_id,
Some(beacon_block),
);
}
Err(e) => {
// TODO: Down-vote Peer
warn!(self.log, "Peer sent invalid BEACON_BLOCKS response";"peer" => format!("{:?}", peer_id), "error" => format!("{:?}", e));
}
}
}
RPCResponse::BlocksByRoot(response) => {
match self.decode_beacon_block(response) {
Ok(beacon_block) => {
self.message_processor.on_blocks_by_root_response(
peer_id,
request_id,
Some(beacon_block),
);
}
Err(e) => {
// TODO: Down-vote Peer
warn!(self.log, "Peer sent invalid BEACON_BLOCKS response";"peer" => format!("{:?}", peer_id), "error" => format!("{:?}", e));
}
}
}
}
}
RPCErrorResponse::StreamTermination(response_type) => {
// have received a stream termination, notify the processing functions
match response_type {
ResponseTermination::BlocksByRange => {
self.message_processor
.on_blocks_by_range_response(peer_id, request_id, None);
}
ResponseTermination::BlocksByRoot => {
self.message_processor
.on_blocks_by_root_response(peer_id, request_id, None);
}
}
}
}
}
/// Handle various RPC errors
fn handle_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId, error: RPCError) {
warn!(self.log, "RPC Error"; "Peer" => format!("{:?}", peer_id), "request_id" => format!("{}", request_id), "Error" => format!("{:?}", error));
self.message_processor.on_rpc_error(peer_id, request_id);
}
/// Handle gossip messages
fn handle_gossip(&mut self, id: MessageId, peer_id: PeerId, gossip_message: PubsubMessage) {
match gossip_message {
PubsubMessage::Block(message) => match self.decode_gossip_block(message) {
Ok(block) => {
let should_forward_on = self
.message_processor
.on_block_gossip(peer_id.clone(), block);
// TODO: Apply more sophisticated validation and decoding logic
if should_forward_on {
self.propagate_message(id, peer_id);
}
}
Err(e) => {
debug!(self.log, "Invalid gossiped beacon block"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
},
PubsubMessage::Attestation(message) => match self.decode_gossip_attestation(message) {
Ok(attestation) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
self.message_processor
.on_attestation_gossip(peer_id, attestation);
}
Err(e) => {
debug!(self.log, "Invalid gossiped attestation"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
},
PubsubMessage::VoluntaryExit(message) => match self.decode_gossip_exit(message) {
Ok(_exit) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
// TODO: Handle exits
debug!(self.log, "Received a voluntary exit"; "peer_id" => format!("{}", peer_id) );
}
Err(e) => {
debug!(self.log, "Invalid gossiped exit"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
},
PubsubMessage::ProposerSlashing(message) => {
match self.decode_gossip_proposer_slashing(message) {
Ok(_slashing) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
// TODO: Handle proposer slashings
debug!(self.log, "Received a proposer slashing"; "peer_id" => format!("{}", peer_id) );
}
Err(e) => {
debug!(self.log, "Invalid gossiped proposer slashing"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
}
}
PubsubMessage::AttesterSlashing(message) => {
match self.decode_gossip_attestation_slashing(message) {
Ok(_slashing) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
// TODO: Handle attester slashings
debug!(self.log, "Received an attester slashing"; "peer_id" => format!("{}", peer_id) );
}
Err(e) => {
debug!(self.log, "Invalid gossiped attester slashing"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
}
}
PubsubMessage::Unknown(message) => {
// Received a message from an unknown topic. Ignore for now
debug!(self.log, "Unknown Gossip Message"; "peer_id" => format!("{}", peer_id), "Message" => format!("{:?}", message));
}
}
}
/// Informs the network service that the message should be forwarded to other peers.
fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
self.network_send
.try_send(NetworkMessage::Propagate {
propagation_source,
message_id,
})
.unwrap_or_else(|_| {
warn!(
self.log,
"Could not send propagation request to the network service"
)
});
}
/* Decoding of gossipsub objects from the network.
*
* The decoding is done in the message handler as it has access to a `BeaconChain` and can
* therefore apply more efficient logic in decoding and verification.
*
* TODO: Apply efficient decoding/verification of these objects
*/
/* Gossipsub Domain Decoding */
// Note: These are not generics as type-specific verification will need to be applied.
fn decode_gossip_block(
&self,
beacon_block: Vec<u8>,
) -> Result<SignedBeaconBlock<T::EthSpec>, DecodeError> {
//TODO: Apply verification before decoding.
SignedBeaconBlock::from_ssz_bytes(&beacon_block)
}
fn decode_gossip_attestation(
&self,
beacon_block: Vec<u8>,
) -> Result<Attestation<T::EthSpec>, DecodeError> {
//TODO: Apply verification before decoding.
Attestation::from_ssz_bytes(&beacon_block)
}
fn decode_gossip_exit(&self, voluntary_exit: Vec<u8>) -> Result<VoluntaryExit, DecodeError> {
//TODO: Apply verification before decoding.
VoluntaryExit::from_ssz_bytes(&voluntary_exit)
}
fn decode_gossip_proposer_slashing(
&self,
proposer_slashing: Vec<u8>,
) -> Result<ProposerSlashing, DecodeError> {
//TODO: Apply verification before decoding.
ProposerSlashing::from_ssz_bytes(&proposer_slashing)
}
fn decode_gossip_attestation_slashing(
&self,
attester_slashing: Vec<u8>,
) -> Result<AttesterSlashing<T::EthSpec>, DecodeError> {
//TODO: Apply verification before decoding.
AttesterSlashing::from_ssz_bytes(&attester_slashing)
}
/* Req/Resp Domain Decoding */
/// Verifies and decodes an ssz-encoded `SignedBeaconBlock`.
fn decode_beacon_block(
&self,
beacon_block: Vec<u8>,
) -> Result<SignedBeaconBlock<T::EthSpec>, DecodeError> {
//TODO: Implement faster block verification before decoding entirely
SignedBeaconBlock::from_ssz_bytes(&beacon_block)
}
}

View File

@@ -1,13 +1,15 @@
use beacon_chain::BeaconChainTypes;
use eth2_libp2p::Enr;
use rlp;
use std::sync::Arc;
use store::{DBColumn, Error as StoreError, SimpleStoreItem, Store};
use types::{EthSpec, Hash256};
use store::Store;
use store::{DBColumn, Error as StoreError, SimpleStoreItem};
use types::Hash256;
/// 32-byte key for accessing the `DhtEnrs`.
pub const DHT_DB_KEY: &str = "PERSISTEDDHTPERSISTEDDHTPERSISTE";
pub fn load_dht<T: Store<E>, E: EthSpec>(store: Arc<T>) -> Vec<Enr> {
pub fn load_dht<T: BeaconChainTypes>(store: Arc<T::Store>) -> Vec<Enr> {
// Load DHT from store
let key = Hash256::from_slice(&DHT_DB_KEY.as_bytes());
match store.get(&key) {
@@ -20,8 +22,8 @@ pub fn load_dht<T: Store<E>, E: EthSpec>(store: Arc<T>) -> Vec<Enr> {
}
/// Attempt to persist the ENR's in the DHT to `self.store`.
pub fn persist_dht<T: Store<E>, E: EthSpec>(
store: Arc<T>,
pub fn persist_dht<T: BeaconChainTypes>(
store: Arc<T::Store>,
enrs: Vec<Enr>,
) -> Result<(), store::Error> {
let key = Hash256::from_slice(&DHT_DB_KEY.as_bytes());
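
A usage sketch (editorial) of the updated signatures, which now take the store via `BeaconChainTypes` rather than a bare `Store` bound; it assumes `store: Arc<T::Store>` and `enrs: Vec<Enr>` are in scope at the call site:

// Persist the current ENRs on shutdown, then recover them on the next startup.
persist_dht::<T>(store.clone(), enrs)?;
let recovered: Vec<Enr> = load_dht::<T>(store.clone());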

View File

@@ -0,0 +1,275 @@
//! This module handles incoming network messages.
//!
//! It routes the messages to the appropriate services, such as Sync,
//! and processes those that are relevant to the local node.
#![allow(clippy::unit_arg)]
pub mod processor;
use crate::error;
use crate::service::NetworkMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{
rpc::{RPCError, RPCErrorResponse, RPCRequest, RPCResponse, RequestId, ResponseTermination},
MessageId, PeerId, PubsubData, PubsubMessage, RPCEvent,
};
use futures::future::Future;
use futures::stream::Stream;
use processor::Processor;
use slog::{debug, o, trace, warn};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::EthSpec;
/// Handles messages received from the network and the client, and organises syncing. The
/// primary functionality of this struct is to validate and decode messages from the network
/// before passing them to the internal message processor. The message processor spawns a
/// syncing thread which manages which blocks need to be requested and processed.
pub struct Router<T: BeaconChainTypes> {
/// A channel to the network service to allow for gossip propagation.
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
/// Processes validated and decoded messages from the network. Has direct access to the
/// sync manager.
processor: Processor<T>,
/// The `Router` logger.
log: slog::Logger,
}
/// Types of messages the handler can receive.
#[derive(Debug)]
pub enum RouterMessage<T: EthSpec> {
/// We have initiated a connection to a new peer.
PeerDialed(PeerId),
/// Peer has disconnected.
PeerDisconnected(PeerId),
/// An RPC response/request has been received.
RPC(PeerId, RPCEvent<T>),
/// A gossip message has been received. The fields are: message id, the peer that sent us this
/// message and the message itself.
PubsubMessage(MessageId, PeerId, PubsubMessage<T>),
}
impl<T: BeaconChainTypes> Router<T> {
/// Initializes and runs the Router.
pub fn spawn(
beacon_chain: Arc<BeaconChain<T>>,
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
executor: &tokio::runtime::TaskExecutor,
log: slog::Logger,
) -> error::Result<mpsc::UnboundedSender<RouterMessage<T::EthSpec>>> {
let message_handler_log = log.new(o!("service"=> "msg_handler"));
trace!(message_handler_log, "Service starting");
let (handler_send, handler_recv) = mpsc::unbounded_channel();
// Initialise the processor, which itself spawns the syncing thread.
let processor = Processor::new(executor, beacon_chain, network_send.clone(), &log);
// generate the Message handler
let mut handler = Router {
network_send,
processor,
log: message_handler_log,
};
// spawn handler task and move the message handler instance into the spawned thread
executor.spawn(
handler_recv
.for_each(move |msg| Ok(handler.handle_message(msg)))
.map_err(move |_| {
debug!(log, "Network message handler terminated.");
}),
);
Ok(handler_send)
}
/// Handle all messages incoming from the network service.
fn handle_message(&mut self, message: RouterMessage<T::EthSpec>) {
match message {
// we have initiated a connection to a peer
RouterMessage::PeerDialed(peer_id) => {
self.processor.on_connect(peer_id);
}
// A peer has disconnected
RouterMessage::PeerDisconnected(peer_id) => {
self.processor.on_disconnect(peer_id);
}
// An RPC message request/response has been received
RouterMessage::RPC(peer_id, rpc_event) => {
self.handle_rpc_message(peer_id, rpc_event);
}
// A gossip message has been received
RouterMessage::PubsubMessage(id, peer_id, gossip) => {
self.handle_gossip(id, peer_id, gossip);
}
}
}
/* RPC - Related functionality */
/// Handle RPC messages
fn handle_rpc_message(&mut self, peer_id: PeerId, rpc_message: RPCEvent<T::EthSpec>) {
match rpc_message {
RPCEvent::Request(id, req) => self.handle_rpc_request(peer_id, id, req),
RPCEvent::Response(id, resp) => self.handle_rpc_response(peer_id, id, resp),
RPCEvent::Error(id, error) => self.handle_rpc_error(peer_id, id, error),
}
}
/// A new RPC request has been received from the network.
fn handle_rpc_request(
&mut self,
peer_id: PeerId,
request_id: RequestId,
request: RPCRequest<T::EthSpec>,
) {
match request {
RPCRequest::Status(status_message) => {
self.processor
.on_status_request(peer_id, request_id, status_message)
}
RPCRequest::Goodbye(goodbye_reason) => {
debug!(
self.log, "PeerGoodbye";
"peer" => format!("{:?}", peer_id),
"reason" => format!("{:?}", goodbye_reason),
);
self.processor.on_disconnect(peer_id);
}
RPCRequest::BlocksByRange(request) => self
.processor
.on_blocks_by_range_request(peer_id, request_id, request),
RPCRequest::BlocksByRoot(request) => self
.processor
.on_blocks_by_root_request(peer_id, request_id, request),
RPCRequest::Phantom(_) => unreachable!("Phantom never initialised"),
}
}
/// An RPC response has been received from the network.
// we match on id and ignore responses past the timeout.
fn handle_rpc_response(
&mut self,
peer_id: PeerId,
request_id: RequestId,
error_response: RPCErrorResponse<T::EthSpec>,
) {
// an error could have occurred.
match error_response {
RPCErrorResponse::InvalidRequest(error) => {
warn!(self.log, "Peer indicated invalid request";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::ServerError(error) => {
warn!(self.log, "Peer internal server error";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Unknown(error) => {
warn!(self.log, "Unknown peer error";"peer" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Success(response) => match response {
RPCResponse::Status(status_message) => {
self.processor.on_status_response(peer_id, status_message);
}
RPCResponse::BlocksByRange(beacon_block) => {
self.processor.on_blocks_by_range_response(
peer_id,
request_id,
Some(beacon_block),
);
}
RPCResponse::BlocksByRoot(beacon_block) => {
self.processor.on_blocks_by_root_response(
peer_id,
request_id,
Some(beacon_block),
);
}
},
RPCErrorResponse::StreamTermination(response_type) => {
// have received a stream termination, notify the processing functions
match response_type {
ResponseTermination::BlocksByRange => {
self.processor
.on_blocks_by_range_response(peer_id, request_id, None);
}
ResponseTermination::BlocksByRoot => {
self.processor
.on_blocks_by_root_response(peer_id, request_id, None);
}
}
}
}
}
/// Handle various RPC errors
fn handle_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId, error: RPCError) {
warn!(self.log, "RPC Error"; "Peer" => format!("{:?}", peer_id), "request_id" => format!("{}", request_id), "Error" => format!("{:?}", error));
self.processor.on_rpc_error(peer_id, request_id);
}
/// Handle gossip messages
fn handle_gossip(
&mut self,
id: MessageId,
peer_id: PeerId,
gossip_message: PubsubMessage<T::EthSpec>,
) {
match gossip_message.data {
PubsubData::BeaconBlock(block) => {
if self.processor.should_forward_block(&block) {
self.propagate_message(id, peer_id.clone());
}
self.processor.on_block_gossip(peer_id, block);
}
PubsubData::AggregateAndProofAttestation(_agg_attestation) => {
// TODO: Handle propagation conditions
self.propagate_message(id, peer_id);
// TODO: Handle aggregate attestation
// self.processor
// .on_attestation_gossip(peer_id.clone(), &agg_attestation);
}
PubsubData::Attestation(boxed_shard_attestation) => {
// TODO: Handle propagation conditions
self.propagate_message(id, peer_id.clone());
self.processor
.on_attestation_gossip(peer_id, boxed_shard_attestation.1);
}
PubsubData::VoluntaryExit(_exit) => {
// TODO: Apply more sophisticated validation
self.propagate_message(id, peer_id.clone());
// TODO: Handle exits
debug!(self.log, "Received a voluntary exit"; "peer_id" => format!("{}", peer_id) );
}
PubsubData::ProposerSlashing(_proposer_slashing) => {
// TODO: Apply more sophisticated validation
self.propagate_message(id, peer_id.clone());
// TODO: Handle proposer slashings
debug!(self.log, "Received a proposer slashing"; "peer_id" => format!("{}", peer_id) );
}
PubsubData::AttesterSlashing(_attester_slashing) => {
// TODO: Apply more sophisticated validation
self.propagate_message(id, peer_id.clone());
// TODO: Handle attester slashings
debug!(self.log, "Received an attester slashing"; "peer_id" => format!("{}", peer_id) );
}
}
}
/// Informs the network service that the message should be forwarded to other peers.
fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
self.network_send
.try_send(NetworkMessage::Propagate {
propagation_source,
message_id,
})
.unwrap_or_else(|_| {
warn!(
self.log,
"Could not send propagation request to the network service"
)
});
}
}
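
A wiring sketch (editorial, assumed names): the network service keeps the channel returned by `spawn` and forwards libp2p events into the router; `beacon_chain`, `network_send`, `executor`, `log` and `peer_id` are assumed to be in scope:

let mut router_send =
    Router::spawn(beacon_chain, network_send.clone(), &executor, log.clone())?;
// Forward a dialed-peer event from libp2p into the router.
router_send
    .try_send(RouterMessage::PeerDialed(peer_id))
    .unwrap_or_else(|_| warn!(log, "Failed to send to router"));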

View File

@@ -19,9 +19,6 @@ use types::{Attestation, Epoch, EthSpec, Hash256, SignedBeaconBlock, Slot};
/// Otherwise we queue it.
pub(crate) const FUTURE_SLOT_TOLERANCE: u64 = 1;
const SHOULD_FORWARD_GOSSIP_BLOCK: bool = true;
const SHOULD_NOT_FORWARD_GOSSIP_BLOCK: bool = false;
/// Keeps track of syncing information for known connected peers.
#[derive(Clone, Copy, Debug)]
pub struct PeerSyncInfo {
@@ -52,7 +49,7 @@ impl PeerSyncInfo {
/// Processes validated messages from the network. It relays necessary data to the syncing thread
/// and processes blocks from the pubsub network.
pub struct MessageProcessor<T: BeaconChainTypes> {
pub struct Processor<T: BeaconChainTypes> {
/// A reference to the underlying beacon chain.
chain: Arc<BeaconChain<T>>,
/// A channel to the syncing thread.
@@ -60,17 +57,17 @@ pub struct MessageProcessor<T: BeaconChainTypes> {
/// A oneshot channel for destroying the sync thread.
_sync_exit: oneshot::Sender<()>,
/// A network context to return and handle RPC requests.
network: HandlerNetworkContext,
network: HandlerNetworkContext<T::EthSpec>,
/// The `RPCHandler` logger.
log: slog::Logger,
}
impl<T: BeaconChainTypes> MessageProcessor<T> {
/// Instantiate a `MessageProcessor` instance
impl<T: BeaconChainTypes> Processor<T> {
/// Instantiate a `Processor` instance
pub fn new(
executor: &tokio::runtime::TaskExecutor,
beacon_chain: Arc<BeaconChain<T>>,
network_send: mpsc::UnboundedSender<NetworkMessage>,
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
log: &slog::Logger,
) -> Self {
let sync_logger = log.new(o!("service"=> "sync"));
@@ -83,7 +80,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
sync_logger,
);
MessageProcessor {
Processor {
chain: beacon_chain,
sync_send,
_sync_exit,
@@ -303,7 +300,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
self.network.send_rpc_response(
peer_id.clone(),
request_id,
RPCResponse::BlocksByRoot(block.as_ssz_bytes()),
RPCResponse::BlocksByRoot(Box::new(block)),
);
send_block_count += 1;
} else {
@@ -389,7 +386,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
self.network.send_rpc_response(
peer_id.clone(),
request_id,
RPCResponse::BlocksByRange(block.as_ssz_bytes()),
RPCResponse::BlocksByRange(Box::new(block)),
);
}
} else {
@@ -436,9 +433,8 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
&mut self,
peer_id: PeerId,
request_id: RequestId,
beacon_block: Option<SignedBeaconBlock<T::EthSpec>>,
beacon_block: Option<Box<SignedBeaconBlock<T::EthSpec>>>,
) {
let beacon_block = beacon_block.map(Box::new);
trace!(
self.log,
"Received BlocksByRange Response";
@@ -457,9 +453,8 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
&mut self,
peer_id: PeerId,
request_id: RequestId,
beacon_block: Option<SignedBeaconBlock<T::EthSpec>>,
beacon_block: Option<Box<SignedBeaconBlock<T::EthSpec>>>,
) {
let beacon_block = beacon_block.map(Box::new);
trace!(
self.log,
"Received BlocksByRoot Response";
@@ -473,6 +468,22 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
});
}
/// Template function to be called on a block to determine if the block should be propagated
/// across the network.
pub fn should_forward_block(&mut self, _block: &Box<SignedBeaconBlock<T::EthSpec>>) -> bool {
// TODO: Propagate error once complete
// self.chain.should_forward_block(block).is_ok()
true
}
/// Template function to be called on an attestation to determine if the attestation should be propagated
/// across the network.
pub fn _should_forward_attestation(&mut self, _attestation: &Attestation<T::EthSpec>) -> bool {
// TODO: Propagate error once complete
//self.chain.should_forward_attestation(attestation).is_ok()
true
}
/// Process a gossip message declaring a new block.
///
/// Attempts to apply to block to the beacon chain. May queue the block for later processing.
@@ -481,9 +492,9 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
pub fn on_block_gossip(
&mut self,
peer_id: PeerId,
block: SignedBeaconBlock<T::EthSpec>,
block: Box<SignedBeaconBlock<T::EthSpec>>,
) -> bool {
match self.chain.process_block(block.clone()) {
match BlockProcessingOutcome::shim(self.chain.process_block(*block.clone())) {
Ok(outcome) => match outcome {
BlockProcessingOutcome::Processed { .. } => {
trace!(self.log, "Gossipsub block processed";
@@ -508,24 +519,13 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
"location" => "block gossip"
),
}
SHOULD_FORWARD_GOSSIP_BLOCK
}
BlockProcessingOutcome::ParentUnknown { .. } => {
// Inform the sync manager to find parents for this block
trace!(self.log, "Block with unknown parent received";
"peer_id" => format!("{:?}",peer_id));
self.send_to_sync(SyncMessage::UnknownBlock(peer_id, Box::new(block)));
SHOULD_FORWARD_GOSSIP_BLOCK
self.send_to_sync(SyncMessage::UnknownBlock(peer_id, block));
}
BlockProcessingOutcome::FutureSlot {
present_slot,
block_slot,
} if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot => {
//TODO: Decide the logic here
SHOULD_FORWARD_GOSSIP_BLOCK
}
BlockProcessingOutcome::BlockIsAlreadyKnown => SHOULD_FORWARD_GOSSIP_BLOCK,
other => {
warn!(
self.log,
@@ -539,7 +539,6 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
"Invalid gossip beacon block ssz";
"ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
);
SHOULD_NOT_FORWARD_GOSSIP_BLOCK //TODO: Decide if we want to forward these
}
},
Err(_) => {
@@ -549,15 +548,18 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
"Erroneous gossip beacon block ssz";
"ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
);
SHOULD_NOT_FORWARD_GOSSIP_BLOCK
}
}
// TODO: Update with correct block gossip checking
true
}
/// Process a gossip message declaring a new attestation.
///
/// Not currently implemented.
pub fn on_attestation_gossip(&mut self, peer_id: PeerId, msg: Attestation<T::EthSpec>) {
pub fn on_attestation_gossip(&mut self, _peer_id: PeerId, _msg: Attestation<T::EthSpec>) {
// TODO: Handle subnet gossip
/*
match self.chain.process_attestation(msg.clone()) {
Ok(outcome) => match outcome {
AttestationProcessingOutcome::Processed => {
@@ -603,7 +605,8 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
"ssz" => format!("0x{}", hex::encode(msg.as_ssz_bytes())),
);
}
}
};
*/
}
}
@@ -625,15 +628,15 @@ pub(crate) fn status_message<T: BeaconChainTypes>(
/// Wraps a Network Channel to employ various RPC-related network functionality for the message
/// handler. The handler doesn't manage its own request IDs and can therefore only send
/// responses or requests with 0 request IDs.
pub struct HandlerNetworkContext {
pub struct HandlerNetworkContext<T: EthSpec> {
/// The network channel to relay messages to the Network service.
network_send: mpsc::UnboundedSender<NetworkMessage>,
network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
/// Logger for the `HandlerNetworkContext`.
log: slog::Logger,
}
impl HandlerNetworkContext {
pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage>, log: slog::Logger) -> Self {
impl<T: EthSpec> HandlerNetworkContext<T> {
pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage<T>>, log: slog::Logger) -> Self {
Self { network_send, log }
}
@@ -655,7 +658,7 @@ impl HandlerNetworkContext {
});
}
pub fn send_rpc_request(&mut self, peer_id: PeerId, rpc_request: RPCRequest) {
pub fn send_rpc_request(&mut self, peer_id: PeerId, rpc_request: RPCRequest<T>) {
// the message handler cannot send requests with IDs. IDs are managed by the sync
// manager.
let request_id = 0;
@@ -667,7 +670,7 @@ impl HandlerNetworkContext {
&mut self,
peer_id: PeerId,
request_id: RequestId,
rpc_response: RPCResponse,
rpc_response: RPCResponse<T>,
) {
self.send_rpc_event(
peer_id,
@@ -680,12 +683,12 @@ impl HandlerNetworkContext {
&mut self,
peer_id: PeerId,
request_id: RequestId,
rpc_error_response: RPCErrorResponse,
rpc_error_response: RPCErrorResponse<T>,
) {
self.send_rpc_event(peer_id, RPCEvent::Response(request_id, rpc_error_response));
}
fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent<T>) {
self.network_send
.try_send(NetworkMessage::RPC(peer_id, rpc_event))
.unwrap_or_else(|_| {


@@ -1,23 +1,24 @@
use crate::error;
use crate::message_handler::{HandlerMessage, MessageHandler};
use crate::persisted_dht::{load_dht, persist_dht};
use crate::NetworkConfig;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use core::marker::PhantomData;
use eth2_libp2p::Service as LibP2PService;
use eth2_libp2p::{
rpc::RPCRequest, Enr, Libp2pEvent, MessageId, Multiaddr, NetworkGlobals, PeerId, Swarm, Topic,
use crate::router::{Router, RouterMessage};
use crate::{
attestation_service::{AttServiceMessage, AttestationService},
NetworkConfig,
};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::Service as LibP2PService;
use eth2_libp2p::{rpc::RPCRequest, Enr, Libp2pEvent, MessageId, NetworkGlobals, PeerId, Swarm};
use eth2_libp2p::{PubsubMessage, RPCEvent};
use futures::prelude::*;
use futures::Stream;
use rest_types::ValidatorSubscription;
use slog::{debug, error, info, trace};
use std::collections::HashSet;
use std::sync::{atomic::Ordering, Arc};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::runtime::TaskExecutor;
use tokio::sync::{mpsc, oneshot};
use tokio::timer::Delay;
use types::EthSpec;
mod tests;
@@ -25,27 +26,46 @@ mod tests;
const BAN_PEER_TIMEOUT: u64 = 30;
/// Service that handles communication between internal services and the `eth2_libp2p` network service.
pub struct Service<T: BeaconChainTypes> {
libp2p_port: u16,
network_globals: Arc<NetworkGlobals>,
_libp2p_exit: oneshot::Sender<()>,
_network_send: mpsc::UnboundedSender<NetworkMessage>,
_phantom: PhantomData<T>,
pub struct NetworkService<T: BeaconChainTypes> {
/// The underlying libp2p service that drives all the network interactions.
libp2p: LibP2PService<T::EthSpec>,
/// An attestation and subnet manager service.
attestation_service: AttestationService<T>,
/// The receiver channel for lighthouse to communicate with the network service.
network_recv: mpsc::UnboundedReceiver<NetworkMessage<T::EthSpec>>,
/// The sending channel for the network service to send messages to be routed throughout
/// lighthouse.
router_send: mpsc::UnboundedSender<RouterMessage<T::EthSpec>>,
/// A reference to lighthouse's database to persist the DHT.
store: Arc<T::Store>,
/// A collection of global variables, accessible outside of the network service.
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
/// An initial delay to update variables after the libp2p service has started.
initial_delay: Delay,
/// The logger for the network service.
log: slog::Logger,
/// The percentage chance that a received gossip message is propagated (testing only).
propagation_percentage: Option<u8>,
}
impl<T: BeaconChainTypes> Service<T> {
pub fn new(
impl<T: BeaconChainTypes> NetworkService<T> {
pub fn start(
beacon_chain: Arc<BeaconChain<T>>,
config: &NetworkConfig,
executor: &TaskExecutor,
network_log: slog::Logger,
) -> error::Result<(Arc<Self>, mpsc::UnboundedSender<NetworkMessage>)> {
) -> error::Result<(
Arc<NetworkGlobals<T::EthSpec>>,
mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
oneshot::Sender<()>,
)> {
// build the network channel
let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage>();
// launch message handler thread
let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage<T::EthSpec>>();
// Get a reference to the beacon chain store
let store = beacon_chain.store.clone();
let message_handler_send = MessageHandler::spawn(
beacon_chain,
// launch the router task
let router_send = Router::spawn(
beacon_chain.clone(),
network_send.clone(),
executor,
network_log.clone(),
@@ -53,82 +73,42 @@ impl<T: BeaconChainTypes> Service<T> {
let propagation_percentage = config.propagation_percentage;
// launch libp2p service
let (network_globals, mut libp2p_service) =
LibP2PService::new(config, network_log.clone())?;
let (network_globals, mut libp2p) = LibP2PService::new(config, network_log.clone())?;
for enr in load_dht::<T::Store, T::EthSpec>(store.clone()) {
libp2p_service.swarm.add_enr(enr);
for enr in load_dht::<T>(store.clone()) {
libp2p.swarm.add_enr(enr);
}
// A delay used to initialise code after the network has started
// This is currently used to obtain the listening addresses from the libp2p service.
let initial_delay = Delay::new(Instant::now() + Duration::from_secs(1));
let libp2p_exit = spawn_service::<T>(
libp2p_service,
network_recv,
message_handler_send,
executor,
store,
network_globals.clone(),
initial_delay,
network_log.clone(),
propagation_percentage,
)?;
// create the attestation service
let attestation_service =
AttestationService::new(beacon_chain, network_globals.clone(), &network_log);
let network_service = Service {
libp2p_port: config.libp2p_port,
network_globals,
_libp2p_exit: libp2p_exit,
_network_send: network_send.clone(),
_phantom: PhantomData,
// create the network service and spawn the task
let network_service = NetworkService {
libp2p,
attestation_service,
network_recv,
router_send,
store,
network_globals: network_globals.clone(),
initial_delay,
log: network_log,
propagation_percentage,
};
Ok((Arc::new(network_service), network_send))
}
let network_exit = spawn_service(network_service, &executor)?;
/// Returns the local ENR from the underlying Discv5 behaviour that external peers may connect
/// to.
pub fn local_enr(&self) -> Option<Enr> {
self.network_globals.local_enr.read().clone()
}
/// Returns the local libp2p PeerID.
pub fn local_peer_id(&self) -> PeerId {
self.network_globals.peer_id.read().clone()
}
/// Returns the list of `Multiaddr` that the underlying libp2p instance is listening on.
pub fn listen_multiaddrs(&self) -> Vec<Multiaddr> {
self.network_globals.listen_multiaddrs.read().clone()
}
/// Returns the libp2p port that this node has been configured to listen using.
pub fn listen_port(&self) -> u16 {
self.libp2p_port
}
/// Returns the number of libp2p connected peers.
pub fn connected_peers(&self) -> usize {
self.network_globals.connected_peers.load(Ordering::Relaxed)
}
/// Returns the set of `PeerId` that are connected via libp2p.
pub fn connected_peer_set(&self) -> HashSet<PeerId> {
self.network_globals.connected_peer_set.read().clone()
Ok((network_globals, network_send, network_exit))
}
}
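To make the new ownership model concrete: callers no longer receive an `Arc<Service>` with accessor methods; they keep only the globals, the message channel and the exit signal that `start` returns, while the service itself owns libp2p and is moved into the spawned task. A rough usage sketch (the bindings and surrounding setup are illustrative, not code from this diff):

    // Hypothetical call site, e.g. in the client builder.
    let (network_globals, network_send, network_exit) =
        NetworkService::start(beacon_chain, &config, &executor, log)?;

    // Peer and listener info is now read from the shared globals rather
    // than via accessor methods on the service.
    let _peer_id = network_globals.peer_id.read().clone();

    // Other services communicate purely over the channel...
    let _ = network_send.try_send(NetworkMessage::Subscribe {
        subscriptions: Vec::new(),
    });

    // ...and shutdown is an explicit signal on the oneshot sender.
    let _ = network_exit.send(());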
fn spawn_service<T: BeaconChainTypes>(
mut libp2p_service: LibP2PService,
mut network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
mut message_handler_send: mpsc::UnboundedSender<HandlerMessage>,
mut service: NetworkService<T>,
executor: &TaskExecutor,
store: Arc<T::Store>,
network_globals: Arc<NetworkGlobals>,
mut initial_delay: Delay,
log: slog::Logger,
propagation_percentage: Option<u8>,
) -> error::Result<tokio::sync::oneshot::Sender<()>> {
let (network_exit, mut exit_rx) = tokio::sync::oneshot::channel();
@@ -136,25 +116,26 @@ fn spawn_service<T: BeaconChainTypes>(
executor.spawn(
futures::future::poll_fn(move || -> Result<_, ()> {
let log = &service.log;
if !initial_delay.is_elapsed() {
if let Ok(Async::Ready(_)) = initial_delay.poll() {
let multi_addrs = Swarm::listeners(&libp2p_service.swarm).cloned().collect();
*network_globals.listen_multiaddrs.write() = multi_addrs;
if !service.initial_delay.is_elapsed() {
if let Ok(Async::Ready(_)) = service.initial_delay.poll() {
let multi_addrs = Swarm::listeners(&service.libp2p.swarm).cloned().collect();
*service.network_globals.listen_multiaddrs.write() = multi_addrs;
}
}
// perform termination tasks when the network is being shut down
if let Ok(Async::Ready(_)) | Err(_) = exit_rx.poll() {
// network thread is terminating
let enrs: Vec<Enr> = libp2p_service.swarm.enr_entries().cloned().collect();
let enrs: Vec<Enr> = service.libp2p.swarm.enr_entries().cloned().collect();
debug!(
log,
"Persisting DHT to store";
"Number of peers" => format!("{}", enrs.len()),
);
match persist_dht::<T::Store, T::EthSpec>(store.clone(), enrs) {
match persist_dht::<T>(service.store.clone(), enrs) {
Err(e) => error!(
log,
"Failed to persist DHT on drop";
@@ -173,11 +154,11 @@ fn spawn_service<T: BeaconChainTypes>(
// processes the network channel before processing the libp2p swarm
loop {
// poll the network channel
match network_recv.poll() {
match service.network_recv.poll() {
Ok(Async::Ready(Some(message))) => match message {
NetworkMessage::RPC(peer_id, rpc_event) => {
trace!(log, "Sending RPC"; "rpc" => format!("{}", rpc_event));
libp2p_service.swarm.send_rpc(peer_id, rpc_event);
service.libp2p.swarm.send_rpc(peer_id, rpc_event);
}
NetworkMessage::Propagate {
propagation_source,
@@ -186,7 +167,7 @@ fn spawn_service<T: BeaconChainTypes>(
// TODO: Remove this for mainnet
// randomly prevents propagation
let mut should_send = true;
if let Some(percentage) = propagation_percentage {
if let Some(percentage) = service.propagation_percentage {
// not exact percentage but close enough
let rand = rand::random::<u8>() % 100;
if rand > percentage {
@@ -201,16 +182,16 @@ fn spawn_service<T: BeaconChainTypes>(
"propagation_peer" => format!("{:?}", propagation_source),
"message_id" => message_id.to_string(),
);
libp2p_service
service.libp2p
.swarm
.propagate_message(&propagation_source, message_id);
}
}
NetworkMessage::Publish { topics, message } => {
NetworkMessage::Publish { messages } => {
// TODO: Remove this for mainnet
// randomly prevents propagation
let mut should_send = true;
if let Some(percentage) = propagation_percentage {
if let Some(percentage) = service.propagation_percentage {
// not exact percentage but close enough
let rand = rand::random::<u8>() % 100;
if rand > percentage {
@@ -219,18 +200,31 @@ fn spawn_service<T: BeaconChainTypes>(
}
}
if !should_send {
info!(log, "Random filter did not publish message");
info!(log, "Random filter did not publish messages");
} else {
debug!(log, "Sending pubsub message"; "topics" => format!("{:?}",topics));
libp2p_service.swarm.publish(&topics, message);
let mut unique_topics = Vec::new();
for message in &messages {
for topic in message.topics() {
if !unique_topics.contains(&topic) {
unique_topics.push(topic);
}
}
}
debug!(log, "Sending pubsub messages"; "count" => messages.len(), "topics" => format!("{:?}", unique_topics));
service.libp2p.swarm.publish(messages);
}
}
NetworkMessage::Disconnect { peer_id } => {
libp2p_service.disconnect_and_ban_peer(
service.libp2p.disconnect_and_ban_peer(
peer_id,
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
);
}
NetworkMessage::Subscribe { subscriptions } =>
{
// the result is dropped as it is used solely for ergonomics
let _ = service.attestation_service.validator_subscriptions(subscriptions);
}
},
Ok(Async::NotReady) => break,
Ok(Async::Ready(None)) => {
@@ -244,10 +238,24 @@ fn spawn_service<T: BeaconChainTypes>(
}
}
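Both the `Propagate` and `Publish` arms above share the same testing-only random gate: `rand::random::<u8>() % 100` yields a value in 0..=99, and the message is kept only when that value does not exceed `percentage`, i.e. it is sent with roughly `(percentage + 1)`-in-100 odds (the modulo also introduces a slight bias, since 256 is not a multiple of 100). Factored out as a standalone function, the gate is simply:

    /// Keep a message with a probability of roughly `percentage` percent.
    /// Assumes the `rand` crate, as the code above does.
    fn passes_propagation_filter(percentage: u8) -> bool {
        // not an exact percentage, but close enough for testing
        rand::random::<u8>() % 100 <= percentage
    }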
// process any attestation service events
// NOTE: This must come after the network message processing as that may trigger events in
// the attestation service.
while let Ok(Async::Ready(Some(attestation_service_message))) = service.attestation_service.poll() {
match attestation_service_message {
// TODO: Implement
AttServiceMessage::Subscribe(_subnet) => { },
AttServiceMessage::Unsubscribe(_subnet) => { },
AttServiceMessage::EnrAdd(_subnet) => { },
AttServiceMessage::EnrRemove(_subnet) => { },
AttServiceMessage::DiscoverPeers(_subnet) => { },
}
}
let mut peers_to_ban = Vec::new();
// poll the swarm
loop {
match libp2p_service.poll() {
match service.libp2p.poll() {
Ok(Async::Ready(Some(event))) => match event {
Libp2pEvent::RPC(peer_id, rpc_event) => {
// trace!(log, "Received RPC"; "rpc" => format!("{}", rpc_event));
@@ -256,21 +264,21 @@ fn spawn_service<T: BeaconChainTypes>(
if let RPCEvent::Request(_, RPCRequest::Goodbye(_)) = rpc_event {
peers_to_ban.push(peer_id.clone());
};
message_handler_send
.try_send(HandlerMessage::RPC(peer_id, rpc_event))
.map_err(|_| { debug!(log, "Failed to send RPC to handler");} )?;
service.router_send
.try_send(RouterMessage::RPC(peer_id, rpc_event))
.map_err(|_| { debug!(log, "Failed to send RPC to router");} )?;
}
Libp2pEvent::PeerDialed(peer_id) => {
debug!(log, "Peer Dialed"; "peer_id" => format!("{:?}", peer_id));
message_handler_send
.try_send(HandlerMessage::PeerDialed(peer_id))
.map_err(|_| { debug!(log, "Failed to send peer dialed to handler");})?;
service.router_send
.try_send(RouterMessage::PeerDialed(peer_id))
.map_err(|_| { debug!(log, "Failed to send peer dialed to router");})?;
}
Libp2pEvent::PeerDisconnected(peer_id) => {
debug!(log, "Peer Disconnected"; "peer_id" => format!("{:?}", peer_id));
message_handler_send
.try_send(HandlerMessage::PeerDisconnected(peer_id))
.map_err(|_| { debug!(log, "Failed to send peer disconnect to handler");})?;
service.router_send
.try_send(RouterMessage::PeerDisconnected(peer_id))
.map_err(|_| { debug!(log, "Failed to send peer disconnect to router");})?;
}
Libp2pEvent::PubsubMessage {
id,
@@ -278,9 +286,9 @@ fn spawn_service<T: BeaconChainTypes>(
message,
..
} => {
message_handler_send
.try_send(HandlerMessage::PubsubMessage(id, source, message))
.map_err(|_| { debug!(log, "Failed to send pubsub message to handler");})?;
service.router_send
.try_send(RouterMessage::PubsubMessage(id, source, message))
.map_err(|_| { debug!(log, "Failed to send pubsub message to router");})?;
}
Libp2pEvent::PeerSubscribed(_, _) => {}
},
@@ -292,7 +300,7 @@ fn spawn_service<T: BeaconChainTypes>(
// ban and disconnect any peers that sent Goodbye requests
while let Some(peer_id) = peers_to_ban.pop() {
libp2p_service.disconnect_and_ban_peer(
service.libp2p.disconnect_and_ban_peer(
peer_id.clone(),
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
);
@@ -308,14 +316,15 @@ fn spawn_service<T: BeaconChainTypes>(
/// Types of messages that the network service can receive.
#[derive(Debug)]
pub enum NetworkMessage {
/// Send an RPC message to the libp2p service.
RPC(PeerId, RPCEvent),
/// Publish a message to gossipsub.
Publish {
topics: Vec<Topic>,
message: PubsubMessage,
pub enum NetworkMessage<T: EthSpec> {
/// Subscribes a list of validators to specific slots for attestation duties.
Subscribe {
subscriptions: Vec<ValidatorSubscription>,
},
/// Send an RPC message to the libp2p service.
RPC(PeerId, RPCEvent<T>),
/// Publish a list of messages to the gossipsub protocol.
Publish { messages: Vec<PubsubMessage<T>> },
/// Propagate a received gossipsub message.
Propagate {
propagation_source: PeerId,


@@ -35,7 +35,7 @@
use super::network_context::SyncNetworkContext;
use super::range_sync::{Batch, BatchProcessResult, RangeSync};
use crate::message_processor::PeerSyncInfo;
use crate::router::processor::PeerSyncInfo;
use crate::service::NetworkMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
use eth2_libp2p::rpc::methods::*;
@@ -153,7 +153,7 @@ pub struct SyncManager<T: BeaconChainTypes> {
input_channel: mpsc::UnboundedReceiver<SyncMessage<T::EthSpec>>,
/// A network context to contact the network service.
network: SyncNetworkContext,
network: SyncNetworkContext<T::EthSpec>,
/// The object handling long-range batch load-balanced syncing.
range_sync: RangeSync<T>,
@@ -180,7 +180,7 @@ pub struct SyncManager<T: BeaconChainTypes> {
pub fn spawn<T: BeaconChainTypes>(
executor: &tokio::runtime::TaskExecutor,
beacon_chain: Weak<BeaconChain<T>>,
network_send: mpsc::UnboundedSender<NetworkMessage>,
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
log: slog::Logger,
) -> (
mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
@@ -391,7 +391,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
// we have the correct block, try and process it
if let Some(chain) = self.chain.upgrade() {
match chain.process_block(block.clone()) {
match BlockProcessingOutcome::shim(chain.process_block(block.clone())) {
Ok(outcome) => {
match outcome {
BlockProcessingOutcome::Processed { block_root } => {
@@ -597,7 +597,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
.downloaded_blocks
.pop()
.expect("There is always at least one block in the queue");
match chain.process_block(newest_block.clone()) {
match BlockProcessingOutcome::shim(chain.process_block(newest_block.clone())) {
Ok(BlockProcessingOutcome::ParentUnknown { .. }) => {
// need to keep looking for parents
// add the block back to the queue and continue the search
@@ -642,7 +642,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
while let Some(block) = parent_request.downloaded_blocks.pop() {
// check if the chain exists
if let Some(chain) = self.chain.upgrade() {
match chain.process_block(block) {
match BlockProcessingOutcome::shim(chain.process_block(block)) {
Ok(BlockProcessingOutcome::Processed { .. })
| Ok(BlockProcessingOutcome::BlockIsAlreadyKnown { .. }) => {} // continue to the next block


@@ -5,9 +5,4 @@ pub mod manager;
mod network_context;
mod range_sync;
/// Currently implemented sync methods.
pub enum SyncMethod {
SimpleSync,
}
pub use manager::SyncMessage;


@@ -1,7 +1,7 @@
//! Provides network functionality for the Syncing thread. This fundamentally wraps a network
//! channel and stores a global RPC ID to perform requests.
use crate::message_processor::status_message;
use crate::router::processor::status_message;
use crate::service::NetworkMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::rpc::methods::*;
@@ -10,20 +10,21 @@ use eth2_libp2p::PeerId;
use slog::{debug, trace, warn};
use std::sync::Weak;
use tokio::sync::mpsc;
use types::EthSpec;
/// Wraps a Network channel to employ various RPC-related network functionality for the Sync manager. This includes management of a global RPC request ID.
pub struct SyncNetworkContext {
pub struct SyncNetworkContext<T: EthSpec> {
/// The network channel to relay messages to the Network service.
network_send: mpsc::UnboundedSender<NetworkMessage>,
network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
request_id: RequestId,
/// Logger for the `SyncNetworkContext`.
log: slog::Logger,
}
impl SyncNetworkContext {
pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage>, log: slog::Logger) -> Self {
impl<T: EthSpec> SyncNetworkContext<T> {
pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage<T>>, log: slog::Logger) -> Self {
Self {
network_send,
request_id: 0,
@@ -31,9 +32,9 @@ impl SyncNetworkContext {
}
}
pub fn status_peer<T: BeaconChainTypes>(
pub fn status_peer<U: BeaconChainTypes>(
&mut self,
chain: Weak<BeaconChain<T>>,
chain: Weak<BeaconChain<U>>,
peer_id: PeerId,
) {
if let Some(chain) = chain.upgrade() {
@@ -117,7 +118,7 @@ impl SyncNetworkContext {
pub fn send_rpc_request(
&mut self,
peer_id: PeerId,
rpc_request: RPCRequest,
rpc_request: RPCRequest<T>,
) -> Result<RequestId, &'static str> {
let request_id = self.request_id;
self.request_id += 1;
@@ -125,7 +126,11 @@ impl SyncNetworkContext {
Ok(request_id)
}
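The incrementing `request_id` above is the entire pairing mechanism between range sync and the RPC layer: sync records which batch each outgoing id belongs to, then looks the state back up when the response (or an error) arrives. A minimal, self-contained model of that bookkeeping, with stand-in types (the real code keeps this state inside the syncing chains):

    use std::collections::HashMap;

    type RequestId = usize;

    /// Pairs outgoing request ids with the state awaiting their responses.
    struct PendingRequests<B> {
        next_id: RequestId,
        pending: HashMap<RequestId, B>,
    }

    impl<B> PendingRequests<B> {
        fn new() -> Self {
            Self { next_id: 0, pending: HashMap::new() }
        }

        /// Mirrors `send_rpc_request`: allocate an id, remember the batch.
        fn register(&mut self, batch: B) -> RequestId {
            let id = self.next_id;
            self.next_id += 1;
            self.pending.insert(id, batch);
            id
        }

        /// Called when the response (or an error) for `id` arrives.
        fn resolve(&mut self, id: RequestId) -> Option<B> {
            self.pending.remove(&id)
        }
    }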
fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent) -> Result<(), &'static str> {
fn send_rpc_event(
&mut self,
peer_id: PeerId,
rpc_event: RPCEvent<T>,
) -> Result<(), &'static str> {
self.network_send
.try_send(NetworkMessage::RPC(peer_id, rpc_event))
.map_err(|_| {


@@ -1,7 +1,7 @@
use super::batch::Batch;
use crate::message_processor::FUTURE_SLOT_TOLERANCE;
use crate::router::processor::FUTURE_SLOT_TOLERANCE;
use crate::sync::manager::SyncMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockError};
use slog::{debug, error, trace, warn};
use std::sync::{Arc, Weak};
use tokio::sync::mpsc;
@@ -54,116 +54,78 @@ fn process_batch<T: BeaconChainTypes>(
batch: &Batch<T::EthSpec>,
log: &slog::Logger,
) -> Result<(), String> {
let mut successful_block_import = false;
for block in &batch.downloaded_blocks {
if let Some(chain) = chain.upgrade() {
let processing_result = chain.process_block(block.clone());
if let Ok(outcome) = processing_result {
match outcome {
BlockProcessingOutcome::Processed { block_root } => {
// The block was valid and we processed it successfully.
trace!(
log, "Imported block from network";
"slot" => block.slot(),
"block_root" => format!("{}", block_root),
);
successful_block_import = true;
}
BlockProcessingOutcome::ParentUnknown { parent, .. } => {
// blocks should be sequential and all parents should exist
warn!(
log, "Parent block is unknown";
"parent_root" => format!("{}", parent),
"baby_block_slot" => block.slot(),
);
if successful_block_import {
run_fork_choice(chain, log);
}
return Err(format!(
"Block at slot {} has an unknown parent.",
block.slot()
));
}
BlockProcessingOutcome::BlockIsAlreadyKnown => {
// this block is already known to us, move to the next
debug!(
log, "Imported a block that is already known";
"block_slot" => block.slot(),
);
}
BlockProcessingOutcome::FutureSlot {
present_slot,
block_slot,
} => {
if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot {
// The block is too far in the future, drop it.
warn!(
log, "Block is ahead of our slot clock";
"msg" => "block for future slot rejected, check your time",
"present_slot" => present_slot,
"block_slot" => block_slot,
"FUTURE_SLOT_TOLERANCE" => FUTURE_SLOT_TOLERANCE,
);
if successful_block_import {
run_fork_choice(chain, log);
}
return Err(format!(
"Block at slot {} is too far in the future",
block.slot()
));
} else {
// The block is in the future, but not too far.
debug!(
log, "Block is slightly ahead of our slot clock, ignoring.";
"present_slot" => present_slot,
"block_slot" => block_slot,
"FUTURE_SLOT_TOLERANCE" => FUTURE_SLOT_TOLERANCE,
);
}
}
BlockProcessingOutcome::WouldRevertFinalizedSlot { .. } => {
debug!(
log, "Finalized or earlier block processed";
"outcome" => format!("{:?}", outcome),
);
// block reached our finalized slot or was earlier, move to the next block
}
BlockProcessingOutcome::GenesisBlock => {
debug!(
log, "Genesis block was processed";
"outcome" => format!("{:?}", outcome),
);
}
_ => {
warn!(
log, "Invalid block received";
"msg" => "peer sent invalid block",
"outcome" => format!("{:?}", outcome),
);
if successful_block_import {
run_fork_choice(chain, log);
}
return Err(format!("Invalid block at slot {}", block.slot()));
}
if let Some(chain) = chain.upgrade() {
match chain.process_chain_segment(batch.downloaded_blocks.clone()) {
Ok(roots) => {
trace!(
log, "Imported blocks from network";
"count" => roots.len(),
);
}
Err(BlockError::ParentUnknown(parent)) => {
// blocks should be sequential and all parents should exist
warn!(
log, "Parent block is unknown";
"parent_root" => format!("{}", parent),
);
}
Err(BlockError::BlockIsAlreadyKnown) => {
// this block is already known to us, move to the next
debug!(
log, "Imported a block that is already known";
);
}
Err(BlockError::FutureSlot {
present_slot,
block_slot,
}) => {
if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot {
// The block is too far in the future, drop it.
warn!(
log, "Block is ahead of our slot clock";
"msg" => "block for future slot rejected, check your time",
"present_slot" => present_slot,
"block_slot" => block_slot,
"FUTURE_SLOT_TOLERANCE" => FUTURE_SLOT_TOLERANCE,
);
} else {
// The block is in the future, but not too far.
debug!(
log, "Block is slightly ahead of our slot clock, ignoring.";
"present_slot" => present_slot,
"block_slot" => block_slot,
"FUTURE_SLOT_TOLERANCE" => FUTURE_SLOT_TOLERANCE,
);
}
} else {
}
Err(BlockError::WouldRevertFinalizedSlot { .. }) => {
debug!(
log, "Finalized or earlier block processed";
);
// block reached our finalized slot or was earlier, move to the next block
}
Err(BlockError::GenesisBlock) => {
debug!(
log, "Genesis block was processed";
);
}
Err(BlockError::BeaconChainError(e)) => {
warn!(
log, "BlockProcessingFailure";
"msg" => "unexpected condition in processing block.",
"outcome" => format!("{:?}", processing_result)
"outcome" => format!("{:?}", e)
);
}
other => {
warn!(
log, "Invalid block received";
"msg" => "peer sent invalid block",
"outcome" => format!("{:?}", other),
);
if successful_block_import {
run_fork_choice(chain, log);
}
return Err(format!(
"Unexpected block processing error: {:?}",
processing_result
));
}
} else {
return Ok(()); // terminate early due to dropped beacon chain
}
} else {
return Ok(()); // terminate early due to dropped beacon chain
}
// Batch completed successfully, run fork choice.
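The net effect of the rewrite above: the old per-block loop needed the `successful_block_import` flag so it could run fork choice before each of its several early returns, whereas handing the whole segment to `process_chain_segment` leaves one fallible call and a single match. A toy model of the resulting control flow, with stand-in types only (the diff view truncates which error variants actually fail the batch, so the final arm is illustrative):

    #[derive(Debug)]
    enum SegmentError {
        BlockIsAlreadyKnown,
        Other(String),
    }

    /// Stand-in for `chain.process_chain_segment`: import a whole batch and
    /// return the roots of the imported blocks, or the first error hit.
    fn process_chain_segment(blocks: &[u64]) -> Result<Vec<u64>, SegmentError> {
        Ok(blocks.to_vec())
    }

    fn process_batch(blocks: &[u64]) -> Result<(), String> {
        match process_chain_segment(blocks) {
            Ok(roots) => println!("imported {} blocks", roots.len()),
            // Benign outcomes are logged and the batch still completes.
            Err(SegmentError::BlockIsAlreadyKnown) => println!("already known"),
            // Everything else is surfaced, per the warn! arms above.
            Err(e) => return Err(format!("batch failed: {:?}", e)),
        }
        // Batch completed: run fork choice, as the trailing comment notes.
        Ok(())
    }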


@@ -17,7 +17,7 @@ use types::{Hash256, SignedBeaconBlock, Slot};
/// downvote peers with poor bandwidth. This can be set arbitrarily high, in which case the
/// responder will fill the response up to the max request size, assuming they have the bandwidth
/// to do so.
pub const BLOCKS_PER_BATCH: u64 = 50;
pub const BLOCKS_PER_BATCH: u64 = 64;
/// The number of times to retry a batch before the chain is considered failed and removed.
const MAX_BATCH_RETRIES: u8 = 5;
@@ -141,7 +141,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// batch.
pub fn on_block_response(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
request_id: RequestId,
beacon_block: &Option<SignedBeaconBlock<T::EthSpec>>,
) -> Option<()> {
@@ -161,7 +161,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// failed indicating that further batches are required.
fn handle_completed_batch(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
batch: Batch<T::EthSpec>,
) {
// An entire batch of blocks has been received. This function checks to see if it can be processed,
@@ -255,7 +255,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// of the batch processor.
pub fn on_batch_process_result(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
processing_id: u64,
batch: &mut Option<Batch<T::EthSpec>>,
result: &BatchProcessResult,
@@ -385,7 +385,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// TODO: Batches could have been partially downloaded due to RPC size-limit restrictions. We
// need to add logic for partial batch downloads. Potentially, if another peer returns the same
// batch, we try a partial download.
fn handle_invalid_batch(&mut self, network: &mut SyncNetworkContext, batch: Batch<T::EthSpec>) {
fn handle_invalid_batch(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
batch: Batch<T::EthSpec>,
) {
// The current batch could not be processed, indicating that either the current or previous
// batches are invalid.
@@ -415,7 +419,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
///
/// If the re-downloaded batch is different to the original and can be processed, the original
/// peer will be downvoted.
fn reprocess_batch(&mut self, network: &mut SyncNetworkContext, mut batch: Batch<T::EthSpec>) {
fn reprocess_batch(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
mut batch: Batch<T::EthSpec>,
) {
// marks the batch as attempting to be reprocessed by hashing the downloaded blocks
batch.original_hash = Some(batch.hash());
@@ -455,7 +463,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// This chain has been requested to start syncing.
///
/// This could be a new chain, or an old chain that is being resumed.
pub fn start_syncing(&mut self, network: &mut SyncNetworkContext, local_finalized_slot: Slot) {
pub fn start_syncing(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
local_finalized_slot: Slot,
) {
// A local finalized slot is provided as other chains may have made
// progress whilst this chain was Stopped or paused. If so, update the `processed_batch_id` to
// accommodate potentially downloaded batches from other chains. Also prune any old batches
@@ -490,7 +502,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// Add a peer to the chain.
///
/// If the chain is active, this starts requesting batches from this peer.
pub fn add_peer(&mut self, network: &mut SyncNetworkContext, peer_id: PeerId) {
pub fn add_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: PeerId) {
self.peer_pool.insert(peer_id.clone());
// do not request blocks if the chain is not syncing
if let ChainSyncingState::Stopped = self.state {
@@ -503,7 +515,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
/// Sends a STATUS message to all peers in the peer pool.
pub fn status_peers(&self, network: &mut SyncNetworkContext) {
pub fn status_peers(&self, network: &mut SyncNetworkContext<T::EthSpec>) {
for peer_id in self.peer_pool.iter() {
network.status_peer(self.chain.clone(), peer_id.clone());
}
@@ -517,7 +529,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// this chain.
pub fn inject_error(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
peer_id: &PeerId,
request_id: RequestId,
) -> Option<ProcessingResult> {
@@ -541,7 +553,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// `MAX_BATCH_RETRIES`.
pub fn failed_batch(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
mut batch: Batch<T::EthSpec>,
) -> ProcessingResult {
batch.retries += 1;
@@ -575,7 +587,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// Attempts to request the next required batches from the peer pool if the chain is syncing. It will exhaust the peer
/// pool and any left-over batches until the batch buffer is full or all peers are exhausted.
fn request_batches(&mut self, network: &mut SyncNetworkContext) {
fn request_batches(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) {
if let ChainSyncingState::Syncing = self.state {
while self.send_range_request(network) {}
}
@@ -583,7 +595,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// Requests the next required batch from a peer. Returns true if there was a peer available
/// to send a request and there were batches to request; false otherwise.
fn send_range_request(&mut self, network: &mut SyncNetworkContext) -> bool {
fn send_range_request(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) -> bool {
// find the next pending batch and request it from the peer
if let Some(peer_id) = self.get_next_peer() {
if let Some(batch) = self.get_next_batch(peer_id) {
@@ -669,7 +681,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
/// Requests the provided batch from the provided peer.
fn send_batch(&mut self, network: &mut SyncNetworkContext, batch: Batch<T::EthSpec>) {
fn send_batch(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
batch: Batch<T::EthSpec>,
) {
let request = batch.to_blocks_by_range_request();
if let Ok(request_id) = network.blocks_by_range_request(batch.current_peer.clone(), request)
{


@@ -4,7 +4,7 @@
//! with this struct to simplify the logic of the other layers of sync.
use super::chain::{ChainSyncingState, SyncingChain};
use crate::message_processor::PeerSyncInfo;
use crate::router::processor::PeerSyncInfo;
use crate::sync::manager::SyncMessage;
use crate::sync::network_context::SyncNetworkContext;
use beacon_chain::{BeaconChain, BeaconChainTypes};
@@ -103,7 +103,11 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
///
/// This removes any out-dated chains, swaps to any higher priority finalized chains and
/// updates the state of the collection.
pub fn update_finalized(&mut self, network: &mut SyncNetworkContext, log: &slog::Logger) {
pub fn update_finalized(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
log: &slog::Logger,
) {
let local_slot = match self.beacon_chain.upgrade() {
Some(chain) => {
let local = match PeerSyncInfo::from_chain(&chain) {
@@ -197,7 +201,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
#[allow(clippy::too_many_arguments)]
pub fn new_head_chain(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
remote_finalized_slot: Slot,
target_head: Hash256,
target_slot: Slot,
@@ -277,7 +281,11 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
///
/// This removes chains with no peers, or chains whose start block slot is less than our current
/// finalized block slot.
pub fn purge_outdated_chains(&mut self, network: &mut SyncNetworkContext, log: &slog::Logger) {
pub fn purge_outdated_chains(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
log: &slog::Logger,
) {
// Remove any chains that have no peers
self.finalized_chains
.retain(|chain| !chain.peer_pool.is_empty());
@@ -349,7 +357,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
/// This will re-status the chain's peers on removal. The index must exist.
pub fn remove_chain(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
index: usize,
log: &slog::Logger,
) {


@@ -42,7 +42,7 @@
use super::chain::ProcessingResult;
use super::chain_collection::{ChainCollection, SyncState};
use super::{Batch, BatchProcessResult};
use crate::message_processor::PeerSyncInfo;
use crate::router::processor::PeerSyncInfo;
use crate::sync::manager::SyncMessage;
use crate::sync::network_context::SyncNetworkContext;
use beacon_chain::{BeaconChain, BeaconChainTypes};
@@ -108,7 +108,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
/// prioritised by peer-pool size.
pub fn add_peer(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
peer_id: PeerId,
remote: PeerSyncInfo,
) {
@@ -228,7 +228,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
/// This request could complete a chain or simply add to its progress.
pub fn blocks_by_range_response(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
peer_id: PeerId,
request_id: RequestId,
beacon_block: Option<SignedBeaconBlock<T::EthSpec>>,
@@ -255,7 +255,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
pub fn handle_block_process_result(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
processing_id: u64,
batch: Batch<T::EthSpec>,
result: BatchProcessResult,
@@ -326,7 +326,11 @@ impl<T: BeaconChainTypes> RangeSync<T> {
/// A peer has disconnected. This removes the peer from any ongoing chains and mappings. A
/// disconnected peer may cause a chain to be removed.
pub fn peer_disconnect(&mut self, network: &mut SyncNetworkContext, peer_id: &PeerId) {
pub fn peer_disconnect(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
peer_id: &PeerId,
) {
// if the peer is in the awaiting head mapping, remove it
self.awaiting_head_peers.remove(&peer_id);
@@ -340,7 +344,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
/// When a peer gets removed, both the head and finalized chains need to be searched to check which pool the peer is in. The chain may also have a batch or batches awaiting
/// this peer. If so, we mark the batch as failed. The batch may then hit its maximum
/// retries. In this case, we need to remove the chain and re-status all the peers.
fn remove_peer(&mut self, network: &mut SyncNetworkContext, peer_id: &PeerId) {
fn remove_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: &PeerId) {
if let Some((index, ProcessingResult::RemoveChain)) =
self.chains.head_finalized_request(|chain| {
if chain.peer_pool.remove(peer_id) {
@@ -370,7 +374,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
/// been too many failed attempts for the batch, remove the chain.
pub fn inject_error(
&mut self,
network: &mut SyncNetworkContext,
network: &mut SyncNetworkContext<T::EthSpec>,
peer_id: PeerId,
request_id: RequestId,
) {