Mirror of https://github.com/sigp/lighthouse.git (synced 2026-03-12 02:14:10 +00:00)
Stable futures (#879)
* Port eth1 lib to use stable futures * Port eth1_test_rig to stable futures * Port eth1 tests to stable futures * Port genesis service to stable futures * Port genesis tests to stable futures * Port beacon_chain to stable futures * Port lcli to stable futures * Fix eth1_test_rig (#1014) * Fix lcli * Port timer to stable futures * Fix timer * Port websocket_server to stable futures * Port notifier to stable futures * Add TODOS * Update hashmap hashset to stable futures * Adds panic test to hashset delay * Port remote_beacon_node to stable futures * Fix lcli merge conflicts * Non rpc stuff compiles * protocol.rs compiles * Port websockets, timer and notifier to stable futures (#1035) * Fix lcli * Port timer to stable futures * Fix timer * Port websocket_server to stable futures * Port notifier to stable futures * Add TODOS * Port remote_beacon_node to stable futures * Partial eth2-libp2p stable future upgrade * Finished first round of fighting RPC types * Further progress towards porting eth2-libp2p adds caching to discovery * Update behaviour * RPC handler to stable futures * Update RPC to master libp2p * Network service additions * Fix the fallback transport construction (#1102) * Correct warning * Remove hashmap delay * Compiling version of eth2-libp2p * Update all crates versions * Fix conversion function and add tests (#1113) * Port validator_client to stable futures (#1114) * Add PH & MS slot clock changes * Account for genesis time * Add progress on duties refactor * Add simple is_aggregator bool to val subscription * Start work on attestation_verification.rs * Add progress on ObservedAttestations * Progress with ObservedAttestations * Fix tests * Add observed attestations to the beacon chain * Add attestation observation to processing code * Add progress on attestation verification * Add first draft of ObservedAttesters * Add more tests * Add observed attesters to beacon chain * Add observers to attestation processing * Add more attestation verification * Create ObservedAggregators map * Remove commented-out code * Add observed aggregators into chain * Add progress * Finish adding features to attestation verification * Ensure beacon chain compiles * Link attn verification into chain * Integrate new attn verification in chain * Remove old attestation processing code * Start trying to fix beacon_chain tests * Split adding into pools into two functions * Add aggregation to harness * Get test harness working again * Adjust the number of aggregators for test harness * Fix edge-case in harness * Integrate new attn processing in network * Fix compile bug in validator_client * Update validator API endpoints * Fix aggreagation in test harness * Fix enum thing * Fix attestation observation bug: * Patch failing API tests * Start adding comments to attestation verification * Remove unused attestation field * Unify "is block known" logic * Update comments * Supress fork choice errors for network processing * Add todos * Tidy * Add gossip attn tests * Disallow test harness to produce old attns * Comment out in-progress tests * Partially address pruning tests * Fix failing store test * Add aggregate tests * Add comments about which spec conditions we check * Dont re-aggregate * Split apart test harness attn production * Fix compile error in network * Make progress on commented-out test * Fix skipping attestation test * Add fork choice verification tests * Tidy attn tests, remove dead code * Remove some accidentally added code * Fix clippy lint * Rename test file * Add block tests, add cheap block 
proposer check * Rename block testing file * Add observed_block_producers * Tidy * Switch around block signature verification * Finish block testing * Remove gossip from signature tests * First pass of self review * Fix deviation in spec * Update test spec tags * Start moving over to hashset * Finish moving observed attesters to hashmap * Move aggregation pool over to hashmap * Make fc attn borrow again * Fix rest_api compile error * Fix missing comments * Fix monster test * Uncomment increasing slots test * Address remaining comments * Remove unsafe, use cfg test * Remove cfg test flag * Fix dodgy comment * Revert "Update hashmap hashset to stable futures" This reverts commitd432378a3c. * Revert "Adds panic test to hashset delay" This reverts commit281502396f. * Ported attestation_service * Ported duties_service * Ported fork_service * More ports * Port block_service * Minor fixes * VC compiles * Update TODOS * Borrow self where possible * Ignore aggregates that are already known. * Unify aggregator modulo logic * Fix typo in logs * Refactor validator subscription logic * Avoid reproducing selection proof * Skip HTTP call if no subscriptions * Rename DutyAndState -> DutyAndProof * Tidy logs * Print root as dbg * Fix compile errors in tests * Fix compile error in test * Re-Fix attestation and duties service * Minor fixes Co-authored-by: Paul Hauner <paul@paulhauner.com> * Network crate update to stable futures * Port account_manager to stable futures (#1121) * Port account_manager to stable futures * Run async fns in tokio environment * Port rest_api crate to stable futures (#1118) * Port rest_api lib to stable futures * Reduce tokio features * Update notifier to stable futures * Builder update * Further updates * Convert self referential async functions * stable futures fixes (#1124) * Fix eth1 update functions * Fix genesis and client * Fix beacon node lib * Return appropriate runtimes from environment * Fix test rig * Refactor eth1 service update * Upgrade simulator to stable futures * Lighthouse compiles on stable futures * Remove println debugging statement * Update libp2p service, start rpc test upgrade * Update network crate for new libp2p * Update tokio::codec to futures_codec (#1128) * Further work towards RPC corrections * Correct http timeout and network service select * Use tokio runtime for libp2p * Revert "Update tokio::codec to futures_codec (#1128)" This reverts commite57aea924a. 
* Upgrade RPC libp2p tests * Upgrade secio fallback test * Upgrade gossipsub examples * Clean up RPC protocol * Test fixes (#1133) * Correct websocket timeout and run on os thread * Fix network test * Clean up PR * Correct tokio tcp move attestation service tests * Upgrade attestation service tests * Correct network test * Correct genesis test * Test corrections * Log info when block is received * Modify logs and update attester service events * Stable futures: fixes to vc, eth1 and account manager (#1142) * Add local testnet scripts * Remove whiteblock script * Rename local testnet script * Move spawns onto handle * Fix VC panic * Initial fix to block production issue * Tidy block producer fix * Tidy further * Add local testnet clean script * Run cargo fmt * Tidy duties service * Tidy fork service * Tidy ForkService * Tidy AttestationService * Tidy notifier * Ensure await is not suppressed in eth1 * Ensure await is not suppressed in account_manager * Use .ok() instead of .unwrap_or(()) * RPC decoding test for proto * Update discv5 and eth2-libp2p deps * Fix lcli double runtime issue (#1144) * Handle stream termination and dialing peer errors * Correct peer_info variant types * Remove unnecessary warnings * Handle subnet unsubscription removal and improve logigng * Add logs around ping * Upgrade discv5 and improve logging * Handle peer connection status for multiple connections * Improve network service logging * Improve logging around peer manager * Upgrade swarm poll centralise peer management * Identify clients on error * Fix `remove_peer` in sync (#1150) * remove_peer removes from all chains * Remove logs * Fix early return from loop * Improved logging, fix panic * Partially correct tests * Stable futures: Vc sync (#1149) * Improve syncing heuristic * Add comments * Use safer method for tolerance * Fix tests * Stable futures: Fix VC bug, update agg pool, add more metrics (#1151) * Expose epoch processing summary * Expose participation metrics to prometheus * Switch to f64 * Reduce precision * Change precision * Expose observed attesters metrics * Add metrics for agg/unagg attn counts * Add metrics for gossip rx * Add metrics for gossip tx * Adds ignored attns to prom * Add attestation timing * Add timer for aggregation pool sig agg * Add write lock timer for agg pool * Add more metrics to agg pool * Change map lock code * Add extra metric to agg pool * Change lock handling in agg pool * Change .write() to .read() * Add another agg pool timer * Fix for is_aggregator * Fix pruning bug Co-authored-by: pawan <pawandhananjay@gmail.com> Co-authored-by: Paul Hauner <paul@paulhauner.com>
@@ -5,32 +5,32 @@ authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"

[dev-dependencies]
sloggers = "0.3.4"
sloggers = "1.0.0"
genesis = { path = "../genesis" }
tempdir = "0.3"
lazy_static = "1.4.0"
matches = "0.1.8"
tempfile = "3.1.0"

[dependencies]
beacon_chain = { path = "../beacon_chain" }
store = { path = "../store" }
eth2-libp2p = { path = "../eth2-libp2p" }
hashmap_delay = { path = "../../eth2/utils/hashmap_delay" }
hashset_delay = { path = "../../eth2/utils/hashset_delay" }
rest_types = { path = "../../eth2/utils/rest_types" }
types = { path = "../../eth2/types" }
slot_clock = { path = "../../eth2/utils/slot_clock" }
slog = { version = "2.5.2", features = ["max_level_trace"] }
hex = "0.3"
hex = "0.4.2"
eth2_ssz = "0.1.2"
tree_hash = "0.1.0"
futures = "0.1.29"
error-chain = "0.12.1"
tokio = "0.1.22"
parking_lot = "0.9.0"
smallvec = "1.0.0"
futures = "0.3.5"
error-chain = "0.12.2"
tokio = { version = "0.2.20", features = ["full"] }
parking_lot = "0.10.2"
smallvec = "1.4.0"
# TODO: Remove rand crate for mainnet
rand = "0.7.2"
rand = "0.7.3"
fnv = "1.0.6"
rlp = "0.4.3"
tokio-timer = "0.2.12"
matches = "0.1.8"
tempfile = "3.1.0"
rlp = "0.4.5"
lazy_static = "1.4.0"
lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }
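The heart of this Cargo.toml change is the move from futures 0.1 / tokio 0.1 to futures 0.3 / tokio 0.2 with the "full" feature set (plus the hashmap_delay → hashset_delay swap and assorted version bumps). As a rough sketch of what the runtime bump means at a call site, with an illustrative async fn standing in for any service in this crate:

// tokio 0.1 / futures 0.1 (before): the executor takes a Future<Item = (), Error = ()>,
// so everything is written as combinator chains, e.g.
//     tokio::run(my_service().map_err(|e| eprintln!("service failed: {:?}", e)));

// tokio 0.2 / futures 0.3 (after): async/await, runtime entered via a macro.
#[tokio::main]
async fn main() {
    if let Err(e) = my_service().await {
        eprintln!("service failed: {:?}", e);
    }
}

// Illustrative stand-in for a real service in this crate.
async fn my_service() -> Result<(), String> {
    Ok(())
}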
@@ -5,16 +5,20 @@
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::GossipKind, MessageId, NetworkGlobals, PeerId};
use futures::prelude::*;
use hashmap_delay::HashSetDelay;
use hashset_delay::HashSetDelay;
use rand::seq::SliceRandom;
use rest_types::ValidatorSubscription;
use slog::{crit, debug, error, o, warn};
use slot_clock::SlotClock;
use std::collections::VecDeque;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use std::time::{Duration, Instant};
use types::{Attestation, EthSpec, Slot, SubnetId};

mod tests;

/// The minimum number of slots ahead that we attempt to discover peers for a subscription. If the
/// slot is less than this number, skip the peer discovery process.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 1;
@@ -564,7 +568,7 @@ impl<T: BeaconChainTypes> AttestationService<T> {
return Ok(());
}

let subscribed_subnets = self.random_subnets.keys_vec();
let subscribed_subnets = self.random_subnets.keys().cloned().collect::<Vec<_>>();
let to_remove_subnets = subscribed_subnets.choose_multiple(
&mut rand::thread_rng(),
random_subnets_per_validator as usize,
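The keys_vec() → keys().cloned().collect::<Vec<_>>() change above exists to feed rand's SliceRandom::choose_multiple, which samples the subscribed subnets without replacement. A minimal, self-contained sketch of that pattern (the subnet values are illustrative):

use rand::seq::SliceRandom;

fn main() {
    let subscribed_subnets: Vec<u64> = vec![1, 5, 9, 23, 42];
    // Pick two distinct subnets at random, without replacement.
    let to_remove_subnets: Vec<&u64> = subscribed_subnets
        .choose_multiple(&mut rand::thread_rng(), 2)
        .collect();
    println!("removing {:?}", to_remove_subnets);
}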
@@ -576,10 +580,10 @@ impl<T: BeaconChainTypes> AttestationService<T> {
for subnet_id in to_remove_subnets {
// If a subscription is queued for two slots in the future, its associated unsubscription
// will unsubscribe from the expired subnet.
// If there is no subscription for this (subnet, slot) it is safe to add one, without
// If there is no unsubscription for this (subnet, slot) it is safe to add one, without
// unsubscribing early from a required subnet
let subnet = ExactSubnet {
subnet_id: **subnet_id,
subnet_id: *subnet_id,
slot: current_slot + 2,
};
if self.subscriptions.get(&subnet).is_none() {
@@ -597,11 +601,11 @@ impl<T: BeaconChainTypes> AttestationService<T> {
self.unsubscriptions
.insert_at(subnet, unsubscription_duration);
}

// as the long lasting subnet subscription is being removed, remove the subnet_id from
// the ENR bitfield
self.events
.push_back(AttServiceMessage::EnrRemove(**subnet_id));
.push_back(AttServiceMessage::EnrRemove(*subnet_id));
self.random_subnets.remove(subnet_id);
}
Ok(())
}
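insert_at above queues the subnet on the internal HashSetDelay, whose expiries are then drained in poll_next below. HashSetDelay lives in eth2/utils/hashset_delay and its full API is not shown in this diff; purely as an illustration of what a deferred unsubscription amounts to under tokio 0.2, the same effect could be approximated with a plain delayed task (the channel, message type and function name below are stand-ins, not the crate's API):

use std::time::Duration;
use tokio::sync::mpsc::UnboundedSender;

// Sketch only: wait for the computed duration, then emit an "unsubscribe" event.
async fn schedule_unsubscription(events: UnboundedSender<u64>, subnet_id: u64, after: Duration) {
    tokio::time::delay_for(after).await;
    let _ = events.send(subnet_id);
}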
@@ -609,648 +613,64 @@ impl<T: BeaconChainTypes> AttestationService<T> {
|
||||
|
||||
impl<T: BeaconChainTypes> Stream for AttestationService<T> {
|
||||
type Item = AttServiceMessage;
|
||||
type Error = ();
|
||||
|
||||
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
|
||||
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
|
||||
// process any peer discovery events
|
||||
while let Async::Ready(Some(exact_subnet)) =
|
||||
self.discover_peers.poll().map_err(|e| {
|
||||
error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
|
||||
})?
|
||||
{
|
||||
self.handle_discover_peers(exact_subnet);
|
||||
}
|
||||
match self.discover_peers.poll_next_unpin(cx) {
|
||||
Poll::Ready(Some(Ok(exact_subnet))) => self.handle_discover_peers(exact_subnet),
|
||||
Poll::Ready(Some(Err(e))) => {
|
||||
error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
|
||||
}
|
||||
Poll::Ready(None) | Poll::Pending => {}
|
||||
}
|
||||
|
||||
// process any subscription events
|
||||
while let Async::Ready(Some(exact_subnet)) = self.subscriptions.poll().map_err(|e| {
|
||||
error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
|
||||
})?
|
||||
{
|
||||
self.handle_subscriptions(exact_subnet);
|
||||
}
|
||||
match self.subscriptions.poll_next_unpin(cx) {
|
||||
Poll::Ready(Some(Ok(exact_subnet))) => self.handle_subscriptions(exact_subnet),
|
||||
Poll::Ready(Some(Err(e))) => {
|
||||
error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
|
||||
}
|
||||
Poll::Ready(None) | Poll::Pending => {}
|
||||
}
|
||||
|
||||
// process any un-subscription events
|
||||
while let Async::Ready(Some(exact_subnet)) = self.unsubscriptions.poll().map_err(|e| {
|
||||
error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
|
||||
})?
|
||||
{
|
||||
self.handle_unsubscriptions(exact_subnet);
|
||||
}
|
||||
match self.unsubscriptions.poll_next_unpin(cx) {
|
||||
Poll::Ready(Some(Ok(exact_subnet))) => self.handle_unsubscriptions(exact_subnet),
|
||||
Poll::Ready(Some(Err(e))) => {
|
||||
error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
|
||||
}
|
||||
Poll::Ready(None) | Poll::Pending => {}
|
||||
}
|
||||
|
||||
// process any random subnet expiries
|
||||
while let Async::Ready(Some(subnet)) = self.random_subnets.poll().map_err(|e| {
|
||||
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
|
||||
})?
|
||||
{
|
||||
self.handle_random_subnet_expiry(subnet);
|
||||
}
|
||||
match self.random_subnets.poll_next_unpin(cx) {
|
||||
Poll::Ready(Some(Ok(subnet))) => self.handle_random_subnet_expiry(subnet),
|
||||
Poll::Ready(Some(Err(e))) => {
|
||||
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
|
||||
}
|
||||
Poll::Ready(None) | Poll::Pending => {}
|
||||
}
|
||||
|
||||
// process any known validator expiries
|
||||
while let Async::Ready(Some(_validator_index)) = self.known_validators.poll().map_err(|e| {
|
||||
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
|
||||
})?
|
||||
{
|
||||
let _ = self.handle_known_validator_expiry();
|
||||
}
|
||||
match self.known_validators.poll_next_unpin(cx) {
|
||||
Poll::Ready(Some(Ok(_validator_index))) => {
|
||||
let _ = self.handle_known_validator_expiry();
|
||||
}
|
||||
Poll::Ready(Some(Err(e))) => {
|
||||
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
|
||||
}
|
||||
Poll::Ready(None) | Poll::Pending => {}
|
||||
}
|
||||
// poll to remove entries on expiration, no need to act on expiration events
|
||||
let _ = self.aggregate_validators_on_subnet.poll().map_err(|e| { error!(self.log, "Failed to check for aggregate validator on subnet expirations"; "error"=> format!("{}", e)); });
|
||||
if let Poll::Ready(Some(Err(e))) = self.aggregate_validators_on_subnet.poll_next_unpin(cx) {
|
||||
error!(self.log, "Failed to check for aggregate validator on subnet expirations"; "error"=> format!("{}", e));
|
||||
}
|
||||
|
||||
// process any generated events
|
||||
if let Some(event) = self.events.pop_front() {
|
||||
return Ok(Async::Ready(Some(event)));
|
||||
return Poll::Ready(Some(event));
|
||||
}
|
||||
|
||||
Ok(Async::NotReady)
|
||||
}
|
||||
}
|
||||
|
||||
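The hunk above is the core of the port: the futures 0.1 Stream (poll, Async, a separate Error type) becomes the futures 0.3 Stream (poll_next on a pinned receiver, a Context for wakeups, no error type), and each inner delay-set is now driven with poll_next_unpin(cx). A stripped-down version of the same translation, with a toy event queue standing in for AttestationService:

use futures::stream::Stream;
use std::collections::VecDeque;
use std::pin::Pin;
use std::task::{Context, Poll};

struct EventQueue {
    events: VecDeque<u64>,
}

// futures 0.1 (before):
//     impl Stream for EventQueue {
//         type Item = u64;
//         type Error = ();
//         fn poll(&mut self) -> Poll<Option<u64>, ()> {
//             Ok(Async::Ready(self.events.pop_front()))
//         }
//     }

// futures 0.3 (after): pinned receiver, a Context for wakeups, and no error type.
impl Stream for EventQueue {
    type Item = u64;

    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        match self.events.pop_front() {
            Some(event) => Poll::Ready(Some(event)),
            // A real implementation must register a waker before returning Pending;
            // AttestationService relies on its inner delay streams to do that.
            None => Poll::Pending,
        }
    }
}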
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use beacon_chain::builder::{BeaconChainBuilder, Witness};
|
||||
use beacon_chain::eth1_chain::CachingEth1Backend;
|
||||
use beacon_chain::events::NullEventHandler;
|
||||
use beacon_chain::migrate::NullMigrator;
|
||||
use eth2_libp2p::discovery::{build_enr, Keypair};
|
||||
use eth2_libp2p::{discovery::CombinedKey, NetworkConfig, NetworkGlobals};
|
||||
use futures::Stream;
|
||||
use genesis::{generate_deterministic_keypairs, interop_genesis_state};
|
||||
use lazy_static::lazy_static;
|
||||
use matches::assert_matches;
|
||||
use slog::Logger;
|
||||
use sloggers::{null::NullLoggerBuilder, Build};
|
||||
use slot_clock::{SlotClock, SystemTimeSlotClock};
|
||||
use std::convert::TryInto;
|
||||
use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
|
||||
use std::time::SystemTime;
|
||||
use store::MemoryStore;
|
||||
use tempfile::tempdir;
|
||||
use tokio::prelude::*;
|
||||
use types::{CommitteeIndex, EnrForkId, EthSpec, MinimalEthSpec};
|
||||
|
||||
const SLOT_DURATION_MILLIS: u64 = 200;
|
||||
|
||||
type TestBeaconChainType = Witness<
|
||||
MemoryStore<MinimalEthSpec>,
|
||||
NullMigrator,
|
||||
SystemTimeSlotClock,
|
||||
CachingEth1Backend<MinimalEthSpec, MemoryStore<MinimalEthSpec>>,
|
||||
MinimalEthSpec,
|
||||
NullEventHandler<MinimalEthSpec>,
|
||||
>;
|
||||
|
||||
pub struct TestBeaconChain {
|
||||
chain: Arc<BeaconChain<TestBeaconChainType>>,
|
||||
}
|
||||
|
||||
impl TestBeaconChain {
|
||||
pub fn new_with_system_clock() -> Self {
|
||||
let data_dir = tempdir().expect("should create temporary data_dir");
|
||||
let spec = MinimalEthSpec::default_spec();
|
||||
|
||||
let keypairs = generate_deterministic_keypairs(1);
|
||||
|
||||
let log = get_logger();
|
||||
let chain = Arc::new(
|
||||
BeaconChainBuilder::new(MinimalEthSpec)
|
||||
.logger(log.clone())
|
||||
.custom_spec(spec.clone())
|
||||
.store(Arc::new(MemoryStore::open()))
|
||||
.store_migrator(NullMigrator)
|
||||
.data_dir(data_dir.path().to_path_buf())
|
||||
.genesis_state(
|
||||
interop_genesis_state::<MinimalEthSpec>(&keypairs, 0, &spec)
|
||||
.expect("should generate interop state"),
|
||||
)
|
||||
.expect("should build state using recent genesis")
|
||||
.dummy_eth1_backend()
|
||||
.expect("should build dummy backend")
|
||||
.null_event_handler()
|
||||
.slot_clock(SystemTimeSlotClock::new(
|
||||
Slot::new(0),
|
||||
Duration::from_secs(recent_genesis_time()),
|
||||
Duration::from_millis(SLOT_DURATION_MILLIS),
|
||||
))
|
||||
.reduced_tree_fork_choice()
|
||||
.expect("should add fork choice to builder")
|
||||
.build()
|
||||
.expect("should build"),
|
||||
);
|
||||
Self { chain }
|
||||
}
|
||||
}
|
||||
|
||||
pub fn recent_genesis_time() -> u64 {
|
||||
SystemTime::now()
|
||||
.duration_since(SystemTime::UNIX_EPOCH)
|
||||
.unwrap()
|
||||
.as_secs()
|
||||
}
|
||||
|
||||
fn get_logger() -> Logger {
|
||||
NullLoggerBuilder.build().expect("logger should build")
|
||||
}
|
||||
|
||||
lazy_static! {
|
||||
static ref CHAIN: TestBeaconChain = { TestBeaconChain::new_with_system_clock() };
|
||||
}
|
||||
|
||||
fn get_attestation_service() -> AttestationService<TestBeaconChainType> {
|
||||
let log = get_logger();
|
||||
|
||||
let beacon_chain = CHAIN.chain.clone();
|
||||
|
||||
let config = NetworkConfig::default();
|
||||
let enr_key: CombinedKey = Keypair::generate_secp256k1().try_into().unwrap();
|
||||
let enr = build_enr::<MinimalEthSpec>(&enr_key, &config, EnrForkId::default()).unwrap();
|
||||
|
||||
let network_globals: NetworkGlobals<MinimalEthSpec> = NetworkGlobals::new(enr, 0, 0, &log);
|
||||
AttestationService::new(beacon_chain, Arc::new(network_globals), &log)
|
||||
}
|
||||
|
||||
fn get_subscription(
|
||||
validator_index: u64,
|
||||
attestation_committee_index: CommitteeIndex,
|
||||
slot: Slot,
|
||||
) -> ValidatorSubscription {
|
||||
let is_aggregator = true;
|
||||
ValidatorSubscription {
|
||||
validator_index,
|
||||
attestation_committee_index,
|
||||
slot,
|
||||
is_aggregator,
|
||||
}
|
||||
}
|
||||
|
||||
fn _get_subscriptions(validator_count: u64, slot: Slot) -> Vec<ValidatorSubscription> {
|
||||
let mut subscriptions: Vec<ValidatorSubscription> = Vec::new();
|
||||
for validator_index in 0..validator_count {
|
||||
let is_aggregator = true;
|
||||
subscriptions.push(ValidatorSubscription {
|
||||
validator_index,
|
||||
attestation_committee_index: validator_index,
|
||||
slot,
|
||||
is_aggregator,
|
||||
});
|
||||
}
|
||||
subscriptions
|
||||
}
|
||||
|
||||
// gets a number of events from the subscription service, or returns none if it times out after a number
|
||||
// of slots
|
||||
fn get_events<S: Stream<Item = AttServiceMessage, Error = ()>>(
|
||||
stream: S,
|
||||
no_events: u64,
|
||||
no_slots_before_timeout: u32,
|
||||
) -> impl Future<Item = Vec<AttServiceMessage>, Error = ()> {
|
||||
stream
|
||||
.take(no_events)
|
||||
.collect()
|
||||
.timeout(Duration::from_millis(SLOT_DURATION_MILLIS) * no_slots_before_timeout)
|
||||
.map_err(|_| ())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_current_slot() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 0;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// not enough time for peer discovery, just subscribe
|
||||
let expected = vec![AttServiceMessage::Subscribe(SubnetId::new(validator_index))];
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 4, 1)
|
||||
.map(move |events| {
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any2),
|
||||
AttServiceMessage::Subscribe(_any1),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_current_slot_wait_for_unsubscribe() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 0;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// not enough time for peer discovery, just subscribe, unsubscribe
|
||||
let expected = vec![
|
||||
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
|
||||
AttServiceMessage::Unsubscribe(SubnetId::new(validator_index)),
|
||||
];
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 5, 2)
|
||||
.map(move |events| {
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any2),
|
||||
AttServiceMessage::Subscribe(_any1),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_five_slots_ahead() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 5;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// just discover peers, don't subscribe yet
|
||||
let expected = vec![AttServiceMessage::DiscoverPeers(SubnetId::new(
|
||||
validator_index,
|
||||
))];
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 4, 1)
|
||||
.map(move |events| {
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_five_slots_ahead_wait_five_slots() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 5;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// we should discover peers, wait, then subscribe
|
||||
let expected = vec![
|
||||
AttServiceMessage::DiscoverPeers(SubnetId::new(validator_index)),
|
||||
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
|
||||
];
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 5, 5)
|
||||
.map(move |events| {
|
||||
//dbg!(&events);
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_ten_slots_ahead() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 10;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// ten slots ahead is before our target peer discover time, so expect no messages
|
||||
let expected: Vec<AttServiceMessage> = vec![];
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 3, 1)
|
||||
.map(move |events| {
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_ten_slots_ahead_wait_five_slots() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 10;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// expect discover peers because we will enter TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD range
|
||||
let expected: Vec<AttServiceMessage> = vec![AttServiceMessage::DiscoverPeers(
|
||||
SubnetId::new(validator_index),
|
||||
)];
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 4, 5)
|
||||
.map(move |events| {
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_all_random_subnets() {
|
||||
// subscribe 10 slots ahead so we do not produce any exact subnet messages
|
||||
let subscription_slot = 10;
|
||||
let subscription_count = 64;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions =
|
||||
_get_subscriptions(subscription_count, current_slot + subscription_slot);
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 192, 3)
|
||||
.map(move |events| {
|
||||
let mut discover_peer_count = 0;
|
||||
let mut subscribe_count = 0;
|
||||
let mut enr_add_count = 0;
|
||||
let mut unexpected_msg_count = 0;
|
||||
|
||||
for event in events {
|
||||
match event {
|
||||
AttServiceMessage::DiscoverPeers(_any_subnet) => {
|
||||
discover_peer_count = discover_peer_count + 1
|
||||
}
|
||||
AttServiceMessage::Subscribe(_any_subnet) => {
|
||||
subscribe_count = subscribe_count + 1
|
||||
}
|
||||
AttServiceMessage::EnrAdd(_any_subnet) => {
|
||||
enr_add_count = enr_add_count + 1
|
||||
}
|
||||
_ => unexpected_msg_count = unexpected_msg_count + 1,
|
||||
}
|
||||
}
|
||||
|
||||
assert_eq!(discover_peer_count, 64);
|
||||
assert_eq!(subscribe_count, 64);
|
||||
assert_eq!(enr_add_count, 64);
|
||||
assert_eq!(unexpected_msg_count, 0);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn subscribe_all_random_subnets_plus_one() {
|
||||
// subscribe 10 slots ahead so we do not produce any exact subnet messages
|
||||
let subscription_slot = 10;
|
||||
// the 65th subscription should result in no more messages than the previous scenario
|
||||
let subscription_count = 65;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions =
|
||||
_get_subscriptions(subscription_count, current_slot + subscription_slot);
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
let test_result = Arc::new(AtomicBool::new(false));
|
||||
let thread_result = test_result.clone();
|
||||
tokio::run(
|
||||
get_events(attestation_service, 192, 3)
|
||||
.map(move |events| {
|
||||
let mut discover_peer_count = 0;
|
||||
let mut subscribe_count = 0;
|
||||
let mut enr_add_count = 0;
|
||||
let mut unexpected_msg_count = 0;
|
||||
|
||||
for event in events {
|
||||
match event {
|
||||
AttServiceMessage::DiscoverPeers(_any_subnet) => {
|
||||
discover_peer_count = discover_peer_count + 1
|
||||
}
|
||||
AttServiceMessage::Subscribe(_any_subnet) => {
|
||||
subscribe_count = subscribe_count + 1
|
||||
}
|
||||
AttServiceMessage::EnrAdd(_any_subnet) => {
|
||||
enr_add_count = enr_add_count + 1
|
||||
}
|
||||
_ => unexpected_msg_count = unexpected_msg_count + 1,
|
||||
}
|
||||
}
|
||||
|
||||
assert_eq!(discover_peer_count, 64);
|
||||
assert_eq!(subscribe_count, 64);
|
||||
assert_eq!(enr_add_count, 64);
|
||||
assert_eq!(unexpected_msg_count, 0);
|
||||
// test completed successfully
|
||||
thread_result.store(true, Relaxed);
|
||||
})
|
||||
// this doesn't need to be here, but helps with debugging
|
||||
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
|
||||
);
|
||||
assert!(test_result.load(Relaxed))
|
||||
Poll::Pending
|
||||
}
|
||||
}
|
||||
|
||||
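The old tests above drive the service with tokio 0.1's tokio::run, which only accepts a Future<Item = (), Error = ()>, so every assertion result has to be smuggled out through an Arc<AtomicBool>. The new tests.rs below replaces that dance with #[tokio::test], where the test body is an ordinary async fn and assertion panics propagate normally. A minimal contrast (produce_events is illustrative):

// tokio 0.1 (before): the result is captured via an AtomicBool because the closure
// runs inside the runtime and cannot return a value to the test thread.
//
//     let test_result = Arc::new(AtomicBool::new(false));
//     let thread_result = test_result.clone();
//     tokio::run(produce_events().map(move |events| {
//         assert_eq!(events.len(), 4);
//         thread_result.store(true, Relaxed);
//     }));
//     assert!(test_result.load(Relaxed));

// tokio 0.2 (after): the test itself is async and assertions panic in place.
#[tokio::test]
async fn produces_four_events() {
    let events = produce_events().await;
    assert_eq!(events.len(), 4);
}

async fn produce_events() -> Vec<u64> {
    vec![1, 2, 3, 4]
}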
beacon_node/network/src/attestation_service/tests.rs (new file, 508 lines)
@@ -0,0 +1,508 @@
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::super::*;
|
||||
use beacon_chain::{
|
||||
builder::{BeaconChainBuilder, Witness},
|
||||
eth1_chain::CachingEth1Backend,
|
||||
events::NullEventHandler,
|
||||
migrate::NullMigrator,
|
||||
};
|
||||
use eth2_libp2p::discovery::{build_enr, Keypair};
|
||||
use eth2_libp2p::{discovery::CombinedKey, CombinedKeyExt, NetworkConfig, NetworkGlobals};
|
||||
use futures::Stream;
|
||||
use genesis::{generate_deterministic_keypairs, interop_genesis_state};
|
||||
use lazy_static::lazy_static;
|
||||
use matches::assert_matches;
|
||||
use slog::Logger;
|
||||
use sloggers::{null::NullLoggerBuilder, Build};
|
||||
use slot_clock::{SlotClock, SystemTimeSlotClock};
|
||||
use std::time::SystemTime;
|
||||
use store::MemoryStore;
|
||||
use tempfile::tempdir;
|
||||
use tokio::time::Duration;
|
||||
use types::{CommitteeIndex, EnrForkId, EthSpec, MinimalEthSpec};
|
||||
|
||||
const SLOT_DURATION_MILLIS: u64 = 2000;
|
||||
|
||||
type TestBeaconChainType = Witness<
|
||||
MemoryStore<MinimalEthSpec>,
|
||||
NullMigrator,
|
||||
SystemTimeSlotClock,
|
||||
CachingEth1Backend<MinimalEthSpec, MemoryStore<MinimalEthSpec>>,
|
||||
MinimalEthSpec,
|
||||
NullEventHandler<MinimalEthSpec>,
|
||||
>;
|
||||
|
||||
pub struct TestBeaconChain {
|
||||
chain: Arc<BeaconChain<TestBeaconChainType>>,
|
||||
}
|
||||
|
||||
impl TestBeaconChain {
|
||||
pub fn new_with_system_clock() -> Self {
|
||||
let data_dir = tempdir().expect("should create temporary data_dir");
|
||||
let spec = MinimalEthSpec::default_spec();
|
||||
|
||||
let keypairs = generate_deterministic_keypairs(1);
|
||||
|
||||
let log = get_logger();
|
||||
let chain = Arc::new(
|
||||
BeaconChainBuilder::new(MinimalEthSpec)
|
||||
.logger(log.clone())
|
||||
.custom_spec(spec.clone())
|
||||
.store(Arc::new(MemoryStore::open()))
|
||||
.store_migrator(NullMigrator)
|
||||
.data_dir(data_dir.path().to_path_buf())
|
||||
.genesis_state(
|
||||
interop_genesis_state::<MinimalEthSpec>(&keypairs, 0, &spec)
|
||||
.expect("should generate interop state"),
|
||||
)
|
||||
.expect("should build state using recent genesis")
|
||||
.dummy_eth1_backend()
|
||||
.expect("should build dummy backend")
|
||||
.null_event_handler()
|
||||
.slot_clock(SystemTimeSlotClock::new(
|
||||
Slot::new(0),
|
||||
Duration::from_secs(recent_genesis_time()),
|
||||
Duration::from_millis(SLOT_DURATION_MILLIS),
|
||||
))
|
||||
.reduced_tree_fork_choice()
|
||||
.expect("should add fork choice to builder")
|
||||
.build()
|
||||
.expect("should build"),
|
||||
);
|
||||
Self { chain }
|
||||
}
|
||||
}
|
||||
|
||||
pub fn recent_genesis_time() -> u64 {
|
||||
SystemTime::now()
|
||||
.duration_since(SystemTime::UNIX_EPOCH)
|
||||
.unwrap()
|
||||
.as_secs()
|
||||
}
|
||||
|
||||
fn get_logger() -> Logger {
|
||||
NullLoggerBuilder.build().expect("logger should build")
|
||||
}
|
||||
|
||||
lazy_static! {
|
||||
static ref CHAIN: TestBeaconChain = { TestBeaconChain::new_with_system_clock() };
|
||||
}
|
||||
|
||||
fn get_attestation_service() -> AttestationService<TestBeaconChainType> {
|
||||
let log = get_logger();
|
||||
|
||||
let beacon_chain = CHAIN.chain.clone();
|
||||
|
||||
let config = NetworkConfig::default();
|
||||
let enr_key = CombinedKey::from_libp2p(&Keypair::generate_secp256k1()).unwrap();
|
||||
let enr = build_enr::<MinimalEthSpec>(&enr_key, &config, EnrForkId::default()).unwrap();
|
||||
|
||||
let network_globals: NetworkGlobals<MinimalEthSpec> = NetworkGlobals::new(enr, 0, 0, &log);
|
||||
AttestationService::new(beacon_chain, Arc::new(network_globals), &log)
|
||||
}
|
||||
|
||||
fn get_subscription(
|
||||
validator_index: u64,
|
||||
attestation_committee_index: CommitteeIndex,
|
||||
slot: Slot,
|
||||
) -> ValidatorSubscription {
|
||||
let is_aggregator = true;
|
||||
ValidatorSubscription {
|
||||
validator_index,
|
||||
attestation_committee_index,
|
||||
slot,
|
||||
is_aggregator,
|
||||
}
|
||||
}
|
||||
|
||||
fn _get_subscriptions(validator_count: u64, slot: Slot) -> Vec<ValidatorSubscription> {
|
||||
let mut subscriptions: Vec<ValidatorSubscription> = Vec::new();
|
||||
for validator_index in 0..validator_count {
|
||||
let is_aggregator = true;
|
||||
subscriptions.push(ValidatorSubscription {
|
||||
validator_index,
|
||||
attestation_committee_index: validator_index,
|
||||
slot,
|
||||
is_aggregator,
|
||||
});
|
||||
}
|
||||
subscriptions
|
||||
}
|
||||
|
||||
// gets a number of events from the subscription service, or returns none if it times out after a number
|
||||
// of slots
|
||||
async fn get_events<S: Stream<Item = AttServiceMessage> + Unpin>(
|
||||
mut stream: S,
|
||||
no_events: usize,
|
||||
no_slots_before_timeout: u32,
|
||||
) -> Vec<AttServiceMessage> {
|
||||
let mut events = Vec::new();
|
||||
|
||||
let collect_stream_fut = async {
|
||||
loop {
|
||||
if let Some(result) = stream.next().await {
|
||||
events.push(result);
|
||||
if events.len() == no_events {
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
tokio::select! {
|
||||
_ = collect_stream_fut => {return events}
|
||||
_ = tokio::time::delay_for(
|
||||
Duration::from_millis(SLOT_DURATION_MILLIS) * no_slots_before_timeout,
|
||||
) => { return events; }
|
||||
}
|
||||
}
|
||||
|
||||
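get_events above bounds its collection loop by racing it against tokio::time::delay_for inside tokio::select!, so a timeout still returns whatever was collected. Where the partial result is not needed, the same bound can be written with tokio::time::timeout wrapping the whole collection future; a small sketch (the stream contents are illustrative):

use futures::stream::{self, StreamExt};
use std::time::Duration;

async fn collect_or_timeout(wait: Duration) -> Option<Vec<u64>> {
    let events = stream::iter(vec![1u64, 2, 3]);
    // `timeout` resolves to Err if the wrapped future does not finish within `wait`.
    tokio::time::timeout(wait, events.collect::<Vec<_>>())
        .await
        .ok()
}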
#[tokio::test]
|
||||
async fn subscribe_current_slot() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 0;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// not enough time for peer discovery, just subscribe
|
||||
let expected = vec![AttServiceMessage::Subscribe(SubnetId::new(validator_index))];
|
||||
|
||||
let events = get_events(attestation_service, 4, 1).await;
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any2),
|
||||
AttServiceMessage::Subscribe(_any1),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_current_slot_wait_for_unsubscribe() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 0;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// not enough time for peer discovery, just subscribe, unsubscribe
|
||||
let expected = vec![
|
||||
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
|
||||
AttServiceMessage::Unsubscribe(SubnetId::new(validator_index)),
|
||||
];
|
||||
|
||||
let events = get_events(attestation_service, 5, 2).await;
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any2),
|
||||
AttServiceMessage::Subscribe(_any1),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_five_slots_ahead() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 5;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// just discover peers, don't subscribe yet
|
||||
let expected = vec![AttServiceMessage::DiscoverPeers(SubnetId::new(
|
||||
validator_index,
|
||||
))];
|
||||
|
||||
let events = get_events(attestation_service, 4, 1).await;
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_five_slots_ahead_wait_five_slots() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 5;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// we should discover peers, wait, then subscribe
|
||||
let expected = vec![
|
||||
AttServiceMessage::DiscoverPeers(SubnetId::new(validator_index)),
|
||||
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
|
||||
];
|
||||
|
||||
let events = get_events(attestation_service, 5, 5).await;
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_7_slots_ahead() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 7;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// seven slots ahead is before our target peer discovery time, so expect no messages
|
||||
let expected: Vec<AttServiceMessage> = vec![];
|
||||
|
||||
let events = get_events(attestation_service, 3, 1).await;
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_ten_slots_ahead_wait_five_slots() {
|
||||
// subscription config
|
||||
let validator_index = 1;
|
||||
let committee_index = 1;
|
||||
let subscription_slot = 10;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions = vec![get_subscription(
|
||||
validator_index,
|
||||
committee_index,
|
||||
current_slot + Slot::new(subscription_slot),
|
||||
)];
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
// expect discover peers because we will enter TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD range
|
||||
let expected: Vec<AttServiceMessage> = vec![AttServiceMessage::DiscoverPeers(
|
||||
SubnetId::new(validator_index),
|
||||
)];
|
||||
|
||||
let events = get_events(attestation_service, 4, 5).await;
|
||||
assert_matches!(
|
||||
events[..3],
|
||||
[
|
||||
AttServiceMessage::DiscoverPeers(_any1),
|
||||
AttServiceMessage::Subscribe(_any2),
|
||||
AttServiceMessage::EnrAdd(_any3)
|
||||
]
|
||||
);
|
||||
assert_eq!(expected[..], events[3..]);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_all_random_subnets() {
|
||||
// subscribe 10 slots ahead so we do not produce any exact subnet messages
|
||||
let subscription_slot = 10;
|
||||
let subscription_count = 64;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions =
|
||||
_get_subscriptions(subscription_count, current_slot + subscription_slot);
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
let events = get_events(attestation_service, 192, 3).await;
|
||||
let mut discover_peer_count = 0;
|
||||
let mut subscribe_count = 0;
|
||||
let mut enr_add_count = 0;
|
||||
let mut unexpected_msg_count = 0;
|
||||
|
||||
for event in events {
|
||||
match event {
|
||||
AttServiceMessage::DiscoverPeers(_any_subnet) => {
|
||||
discover_peer_count = discover_peer_count + 1
|
||||
}
|
||||
AttServiceMessage::Subscribe(_any_subnet) => subscribe_count = subscribe_count + 1,
|
||||
AttServiceMessage::EnrAdd(_any_subnet) => enr_add_count = enr_add_count + 1,
|
||||
_ => unexpected_msg_count = unexpected_msg_count + 1,
|
||||
}
|
||||
}
|
||||
|
||||
assert_eq!(discover_peer_count, 64);
|
||||
assert_eq!(subscribe_count, 64);
|
||||
assert_eq!(enr_add_count, 64);
|
||||
assert_eq!(unexpected_msg_count, 0);
|
||||
// test completed successfully
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn subscribe_all_random_subnets_plus_one() {
|
||||
// subscribe 10 slots ahead so we do not produce any exact subnet messages
|
||||
let subscription_slot = 10;
|
||||
// the 65th subscription should result in no more messages than the previous scenario
|
||||
let subscription_count = 65;
|
||||
|
||||
// create the attestation service and subscriptions
|
||||
let mut attestation_service = get_attestation_service();
|
||||
let current_slot = attestation_service
|
||||
.beacon_chain
|
||||
.slot_clock
|
||||
.now()
|
||||
.expect("Could not get current slot");
|
||||
|
||||
let subscriptions =
|
||||
_get_subscriptions(subscription_count, current_slot + subscription_slot);
|
||||
|
||||
// submit the subscriptions
|
||||
attestation_service
|
||||
.validator_subscriptions(subscriptions)
|
||||
.unwrap();
|
||||
|
||||
let events = get_events(attestation_service, 192, 3).await;
|
||||
let mut discover_peer_count = 0;
|
||||
let mut subscribe_count = 0;
|
||||
let mut enr_add_count = 0;
|
||||
let mut unexpected_msg_count = 0;
|
||||
|
||||
for event in events {
|
||||
match event {
|
||||
AttServiceMessage::DiscoverPeers(_any_subnet) => {
|
||||
discover_peer_count = discover_peer_count + 1
|
||||
}
|
||||
AttServiceMessage::Subscribe(_any_subnet) => subscribe_count = subscribe_count + 1,
|
||||
AttServiceMessage::EnrAdd(_any_subnet) => enr_add_count = enr_add_count + 1,
|
||||
_ => unexpected_msg_count = unexpected_msg_count + 1,
|
||||
}
|
||||
}
|
||||
|
||||
assert_eq!(discover_peer_count, 64);
|
||||
assert_eq!(subscribe_count, 64);
|
||||
assert_eq!(enr_add_count, 64);
|
||||
assert_eq!(unexpected_msg_count, 0);
|
||||
}
|
||||
}
|
||||
@@ -1,8 +1,12 @@
|
||||
#[macro_use]
|
||||
extern crate lazy_static;
|
||||
|
||||
/// This crate provides the network server for Lighthouse.
|
||||
pub mod error;
|
||||
pub mod service;
|
||||
|
||||
mod attestation_service;
|
||||
mod metrics;
|
||||
mod persisted_dht;
|
||||
mod router;
|
||||
mod sync;
|
||||
|
||||
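The lib.rs hunk above adds #[macro_use] extern crate lazy_static; at the crate root so the new metrics module can use the macro, while the test module imports it per-module with a normal 2018-edition use. Both spellings appear in this commit; a minimal illustration:

// Crate-root style (what lib.rs does): the macro is visible everywhere in the crate.
//     #[macro_use]
//     extern crate lazy_static;

// Edition-2018 style (what tests.rs does): import the macro where it is used.
use lazy_static::lazy_static;

lazy_static! {
    static ref GREETING: String = "hello".to_string();
}

fn main() {
    println!("{}", *GREETING);
}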
beacon_node/network/src/metrics.rs (new file, 39 lines)
@@ -0,0 +1,39 @@
|
||||
pub use lighthouse_metrics::*;
|
||||
|
||||
lazy_static! {
|
||||
/*
|
||||
* Gossip Rx
|
||||
*/
|
||||
pub static ref GOSSIP_BLOCKS_RX: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_blocks_rx_total",
|
||||
"Count of gossip blocks received"
|
||||
);
|
||||
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_RX: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_unaggregated_attestations_rx_total",
|
||||
"Count of gossip unaggregated attestations received"
|
||||
);
|
||||
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_IGNORED: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_unaggregated_attestations_ignored_total",
|
||||
"Count of gossip unaggregated attestations ignored by attestation service"
|
||||
);
|
||||
pub static ref GOSSIP_AGGREGATED_ATTESTATIONS_RX: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_aggregated_attestations_rx_total",
|
||||
"Count of gossip aggregated attestations received"
|
||||
);
|
||||
|
||||
/*
|
||||
* Gossip Tx
|
||||
*/
|
||||
pub static ref GOSSIP_BLOCKS_TX: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_blocks_tx_total",
|
||||
"Count of gossip blocks transmitted"
|
||||
);
|
||||
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_TX: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_unaggregated_attestations_tx_total",
|
||||
"Count of gossip unaggregated attestations transmitted"
|
||||
);
|
||||
pub static ref GOSSIP_AGGREGATED_ATTESTATIONS_TX: Result<IntCounter> = try_create_int_counter(
|
||||
"network_gossip_aggregated_attestations_tx_total",
|
||||
"Count of gossip aggregated attestations transmitted"
|
||||
);
|
||||
}
|
||||
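Every metric above is a lazy_static Result<IntCounter> produced by try_create_int_counter, which registers the counter with the global Prometheus registry; registration can fail (e.g. a duplicate name), hence the Result. Incrementing is just IntCounter::inc once the Result is unwrapped. A sketch of a helper as it might be used at a gossip-receive site (the helper name and call site are assumptions, not shown in this diff):

// Assumes the `pub use lighthouse_metrics::*;` re-exports above (Result, IntCounter).
fn inc_counter(counter: &Result<IntCounter>) {
    if let Ok(counter) = counter {
        counter.inc();
    }
}

// Hypothetical call site when a gossip block arrives:
//     inc_counter(&GOSSIP_BLOCKS_RX);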
@@ -16,10 +16,9 @@ use eth2_libp2p::{
|
||||
},
|
||||
MessageId, NetworkGlobals, PeerId, PubsubMessage, RPCEvent,
|
||||
};
|
||||
use futures::future::Future;
|
||||
use futures::stream::Stream;
|
||||
use futures::prelude::*;
|
||||
use processor::Processor;
|
||||
use slog::{debug, o, trace, warn};
|
||||
use slog::{debug, info, o, trace, warn};
|
||||
use std::sync::Arc;
|
||||
use tokio::sync::mpsc;
|
||||
use types::EthSpec;
|
||||
@@ -60,7 +59,7 @@ impl<T: BeaconChainTypes> Router<T> {
beacon_chain: Arc<BeaconChain<T>>,
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
executor: &tokio::runtime::TaskExecutor,
runtime_handle: &tokio::runtime::Handle,
log: slog::Logger,
) -> error::Result<mpsc::UnboundedSender<RouterMessage<T::EthSpec>>> {
let message_handler_log = log.new(o!("service"=> "router"));
@@ -70,7 +69,7 @@ impl<T: BeaconChainTypes> Router<T> {

// Initialise a message instance, which itself spawns the syncing thread.
let processor = Processor::new(
executor,
runtime_handle,
beacon_chain,
network_globals,
network_send.clone(),
@@ -85,13 +84,12 @@ impl<T: BeaconChainTypes> Router<T> {
};

// spawn handler task and move the message handler instance into the spawned thread
executor.spawn(
runtime_handle.spawn(async move {
handler_recv
.for_each(move |msg| Ok(handler.handle_message(msg)))
.map_err(move |_| {
debug!(log, "Network message handler terminated.");
}),
);
.for_each(move |msg| future::ready(handler.handle_message(msg)))
.await;
debug!(log, "Network message handler terminated.");
});

Ok(handler_send)
}
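The hunk above swaps the futures-0.1 combinator chain (`for_each` returning `Ok(...)`, `map_err` for the shutdown log) for an async block driven to completion with `.await`. A self-contained sketch of that pattern, assuming a tokio 0.2 `Handle` and an unbounded channel; names such as `rx` and the `String` payload are illustrative, not the real router types:

use futures::{future, stream::StreamExt};

fn spawn_consumer(
    runtime_handle: &tokio::runtime::Handle,
    rx: tokio::sync::mpsc::UnboundedReceiver<String>,
) {
    runtime_handle.spawn(async move {
        // With stable futures the stream is simply awaited; no hand-rolled
        // `poll` or error type is needed for an infallible handler.
        rx.for_each(move |msg| future::ready(println!("handling {}", msg)))
            .await;
        // Runs once the sending side is dropped and the stream ends.
        println!("consumer terminated");
    });
}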
@@ -172,7 +170,7 @@ impl<T: BeaconChainTypes> Router<T> {
|
||||
// an error could have occurred.
|
||||
match error_response {
|
||||
RPCCodedResponse::InvalidRequest(error) => {
|
||||
warn!(self.log, "Peer indicated invalid request"; "peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
|
||||
warn!(self.log, "RPC Invalid Request"; "peer_id" => peer_id.to_string(), "request_id" => request_id, "error" => error.to_string());
|
||||
self.handle_rpc_error(
|
||||
peer_id,
|
||||
request_id,
|
||||
@@ -180,7 +178,7 @@ impl<T: BeaconChainTypes> Router<T> {
|
||||
);
|
||||
}
|
||||
RPCCodedResponse::ServerError(error) => {
|
||||
warn!(self.log, "Peer internal server error"; "peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
|
||||
warn!(self.log, "RPC Server Error"; "peer_id" => peer_id.to_string(), "request_id" => request_id, "error" => error.to_string());
|
||||
self.handle_rpc_error(
|
||||
peer_id,
|
||||
request_id,
|
||||
@@ -188,7 +186,7 @@ impl<T: BeaconChainTypes> Router<T> {
|
||||
);
|
||||
}
|
||||
RPCCodedResponse::Unknown(error) => {
|
||||
warn!(self.log, "Unknown peer error"; "peer" => format!("{:?}", peer_id), "error" => error.as_string());
|
||||
warn!(self.log, "RPC Unknown Error"; "peer_id" => peer_id.to_string(), "request_id" => request_id, "error" => error.to_string());
|
||||
self.handle_rpc_error(
|
||||
peer_id,
|
||||
request_id,
|
||||
@@ -278,6 +276,7 @@ impl<T: BeaconChainTypes> Router<T> {
|
||||
PubsubMessage::BeaconBlock(block) => {
|
||||
match self.processor.should_forward_block(&peer_id, block) {
|
||||
Ok(verified_block) => {
|
||||
info!(self.log, "New block received"; "slot" => verified_block.block.slot(), "hash" => verified_block.block_root.to_string());
|
||||
self.propagate_message(id, peer_id.clone());
|
||||
self.processor.on_block_gossip(peer_id, verified_block);
|
||||
}
|
||||
@@ -313,7 +312,7 @@ impl<T: BeaconChainTypes> Router<T> {
|
||||
/// Informs the network service that the message should be forwarded to other peers.
|
||||
fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
|
||||
self.network_send
|
||||
.try_send(NetworkMessage::Propagate {
|
||||
.send(NetworkMessage::Propagate {
|
||||
propagation_source,
|
||||
message_id,
|
||||
})
|
||||
|
||||
@@ -44,7 +44,7 @@ pub struct Processor<T: BeaconChainTypes> {
|
||||
impl<T: BeaconChainTypes> Processor<T> {
|
||||
/// Instantiate a `Processor` instance
|
||||
pub fn new(
|
||||
executor: &tokio::runtime::TaskExecutor,
|
||||
runtime_handle: &tokio::runtime::Handle,
|
||||
beacon_chain: Arc<BeaconChain<T>>,
|
||||
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
|
||||
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
|
||||
@@ -54,7 +54,7 @@ impl<T: BeaconChainTypes> Processor<T> {
|
||||
|
||||
// spawn the sync thread
|
||||
let (sync_send, _sync_exit) = crate::sync::manager::spawn(
|
||||
executor,
|
||||
runtime_handle,
|
||||
beacon_chain.clone(),
|
||||
network_globals,
|
||||
network_send.clone(),
|
||||
@@ -71,7 +71,7 @@ impl<T: BeaconChainTypes> Processor<T> {
|
||||
}
|
||||
|
||||
fn send_to_sync(&mut self, message: SyncMessage<T::EthSpec>) {
|
||||
self.sync_send.try_send(message).unwrap_or_else(|_| {
|
||||
self.sync_send.send(message).unwrap_or_else(|_| {
|
||||
warn!(
|
||||
self.log,
|
||||
"Could not send message to the sync service";
|
||||
@@ -485,10 +485,9 @@ impl<T: BeaconChainTypes> Processor<T> {
|
||||
) -> Result<GossipVerifiedBlock<T>, BlockError> {
|
||||
let result = self.chain.verify_block_for_gossip(*block.clone());
|
||||
|
||||
if let Err(BlockError::ParentUnknown(block_hash)) = result {
|
||||
if let Err(BlockError::ParentUnknown(_)) = result {
|
||||
// if we don't know the parent, start a parent lookup
|
||||
// TODO: Modify the return to avoid the block clone.
|
||||
debug!(self.log, "Unknown block received. Starting a parent lookup"; "block_slot" => block.message.slot, "block_hash" => format!("{}", block_hash));
|
||||
self.send_to_sync(SyncMessage::UnknownBlock(peer_id.clone(), block));
|
||||
}
|
||||
result
|
||||
@@ -929,7 +928,7 @@ impl<T: EthSpec> HandlerNetworkContext<T> {
|
||||
);
|
||||
self.send_rpc_request(peer_id.clone(), RPCRequest::Goodbye(reason));
|
||||
self.network_send
|
||||
.try_send(NetworkMessage::Disconnect { peer_id })
|
||||
.send(NetworkMessage::Disconnect { peer_id })
|
||||
.unwrap_or_else(|_| {
|
||||
warn!(
|
||||
self.log,
|
||||
@@ -970,7 +969,7 @@ impl<T: EthSpec> HandlerNetworkContext<T> {
|
||||
|
||||
fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent<T>) {
|
||||
self.network_send
|
||||
.try_send(NetworkMessage::RPC(peer_id, rpc_event))
|
||||
.send(NetworkMessage::RPC(peer_id, rpc_event))
|
||||
.unwrap_or_else(|_| {
|
||||
warn!(
|
||||
self.log,
|
||||
|
||||
@@ -1,23 +1,22 @@
use crate::error;
use crate::persisted_dht::{load_dht, persist_dht};
use crate::router::{Router, RouterMessage};
use crate::{
attestation_service::{AttServiceMessage, AttestationService},
NetworkConfig,
};
use crate::{error, metrics};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::Service as LibP2PService;
use eth2_libp2p::{rpc::RPCRequest, BehaviourEvent, Enr, MessageId, NetworkGlobals, PeerId, Swarm};
use eth2_libp2p::{PubsubMessage, RPCEvent};
use eth2_libp2p::{rpc::RPCRequest, BehaviourEvent, Enr, MessageId, NetworkGlobals, PeerId};
use eth2_libp2p::{Libp2pEvent, PubsubMessage, RPCEvent};
use futures::prelude::*;
use futures::Stream;
use rest_types::ValidatorSubscription;
use slog::{debug, error, info, trace};
use slog::{debug, error, info, o, trace};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::runtime::TaskExecutor;
use std::time::Duration;
use tokio::runtime::Handle;
use tokio::sync::{mpsc, oneshot};
use tokio::timer::Delay;
use tokio::time::Delay;
use types::EthSpec;

mod tests;
@@ -42,8 +41,6 @@ pub struct NetworkService<T: BeaconChainTypes> {
|
||||
store: Arc<T::Store>,
|
||||
/// A collection of global variables, accessible outside of the network service.
|
||||
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
|
||||
/// An initial delay to update variables after the libp2p service has started.
|
||||
initial_delay: Delay,
|
||||
/// A delay that expires when a new fork takes place.
|
||||
next_fork_update: Option<Delay>,
|
||||
/// The logger for the network service.
|
||||
@@ -56,7 +53,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
|
||||
pub fn start(
|
||||
beacon_chain: Arc<BeaconChain<T>>,
|
||||
config: &NetworkConfig,
|
||||
executor: &TaskExecutor,
|
||||
runtime_handle: &Handle,
|
||||
network_log: slog::Logger,
|
||||
) -> error::Result<(
|
||||
Arc<NetworkGlobals<T::EthSpec>>,
|
||||
@@ -78,16 +75,12 @@ impl<T: BeaconChainTypes> NetworkService<T> {
|
||||
|
||||
// launch libp2p service
|
||||
let (network_globals, mut libp2p) =
|
||||
LibP2PService::new(config, enr_fork_id, network_log.clone())?;
|
||||
runtime_handle.enter(|| LibP2PService::new(config, enr_fork_id, &network_log))?;
|
||||
|
||||
for enr in load_dht::<T::Store, T::EthSpec>(store.clone()) {
|
||||
libp2p.swarm.add_enr(enr);
|
||||
}
|
||||
|
||||
// A delay used to initialise code after the network has started
|
||||
// This is currently used to obtain the listening addresses from the libp2p service.
|
||||
let initial_delay = Delay::new(Instant::now() + Duration::from_secs(1));
|
||||
|
||||
// launch derived network services
|
||||
|
||||
// router task
|
||||
@@ -95,7 +88,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
|
||||
beacon_chain.clone(),
|
||||
network_globals.clone(),
|
||||
network_send.clone(),
|
||||
executor,
|
||||
runtime_handle,
|
||||
network_log.clone(),
|
||||
)?;
|
||||
|
||||
@@ -104,6 +97,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
|
||||
AttestationService::new(beacon_chain.clone(), network_globals.clone(), &network_log);
|
||||
|
||||
// create the network service and spawn the task
|
||||
let network_log = network_log.new(o!("service"=> "network"));
|
||||
let network_service = NetworkService {
|
||||
beacon_chain,
|
||||
libp2p,
|
||||
@@ -112,13 +106,12 @@ impl<T: BeaconChainTypes> NetworkService<T> {
|
||||
router_send,
|
||||
store,
|
||||
network_globals: network_globals.clone(),
|
||||
initial_delay,
|
||||
next_fork_update,
|
||||
log: network_log,
|
||||
propagation_percentage,
|
||||
};
|
||||
|
||||
let network_exit = spawn_service(network_service, &executor)?;
|
||||
let network_exit = runtime_handle.enter(|| spawn_service(network_service))?;
|
||||
|
||||
Ok((network_globals, network_send, network_exit))
|
||||
}
|
||||
@@ -126,248 +119,249 @@ impl<T: BeaconChainTypes> NetworkService<T> {
|
||||
|
||||
fn spawn_service<T: BeaconChainTypes>(
|
||||
mut service: NetworkService<T>,
|
||||
executor: &TaskExecutor,
|
||||
) -> error::Result<tokio::sync::oneshot::Sender<()>> {
|
||||
let (network_exit, mut exit_rx) = tokio::sync::oneshot::channel();
|
||||
|
||||
// spawn on the current executor
|
||||
executor.spawn(
|
||||
futures::future::poll_fn(move || -> Result<_, ()> {
|
||||
|
||||
let log = &service.log;
|
||||
|
||||
// handles any logic which requires an initial delay
|
||||
if !service.initial_delay.is_elapsed() {
|
||||
if let Ok(Async::Ready(_)) = service.initial_delay.poll() {
|
||||
let multi_addrs = Swarm::listeners(&service.libp2p.swarm).cloned().collect();
|
||||
*service.network_globals.listen_multiaddrs.write() = multi_addrs;
|
||||
}
|
||||
}
|
||||
|
||||
// perform termination tasks when the network is being shutdown
|
||||
if let Ok(Async::Ready(_)) | Err(_) = exit_rx.poll() {
|
||||
tokio::spawn(async move {
|
||||
loop {
|
||||
// build the futures to check simultaneously
|
||||
tokio::select! {
|
||||
// handle network shutdown
|
||||
_ = (&mut exit_rx) => {
|
||||
// network thread is terminating
|
||||
let enrs: Vec<Enr> = service.libp2p.swarm.enr_entries().cloned().collect();
|
||||
debug!(
|
||||
log,
|
||||
service.log,
|
||||
"Persisting DHT to store";
|
||||
"Number of peers" => format!("{}", enrs.len()),
|
||||
);
|
||||
|
||||
match persist_dht::<T::Store, T::EthSpec>(service.store.clone(), enrs) {
|
||||
Err(e) => error!(
|
||||
log,
|
||||
service.log,
|
||||
"Failed to persist DHT on drop";
|
||||
"error" => format!("{:?}", e)
|
||||
),
|
||||
Ok(_) => info!(
|
||||
log,
|
||||
service.log,
|
||||
"Saved DHT state";
|
||||
),
|
||||
}
|
||||
|
||||
info!(log.clone(), "Network service shutdown");
|
||||
return Ok(Async::Ready(()));
|
||||
}
|
||||
|
||||
// processes the network channel before processing the libp2p swarm
|
||||
loop {
|
||||
// poll the network channel
|
||||
match service.network_recv.poll() {
|
||||
Ok(Async::Ready(Some(message))) => match message {
|
||||
NetworkMessage::RPC(peer_id, rpc_event) => {
|
||||
trace!(log, "Sending RPC"; "rpc" => format!("{}", rpc_event));
|
||||
service.libp2p.swarm.send_rpc(peer_id, rpc_event);
|
||||
}
|
||||
NetworkMessage::Propagate {
|
||||
propagation_source,
|
||||
message_id,
|
||||
} => {
|
||||
// TODO: Remove this for mainnet
|
||||
// randomly prevents propagation
|
||||
let mut should_send = true;
|
||||
if let Some(percentage) = service.propagation_percentage {
|
||||
// not exact percentage but close enough
|
||||
let rand = rand::random::<u8>() % 100;
|
||||
if rand > percentage {
|
||||
// don't propagate
|
||||
should_send = false;
|
||||
info!(service.log, "Network service shutdown");
|
||||
return;
|
||||
}
|
||||
// handle a message sent to the network
|
||||
Some(message) = service.network_recv.recv() => {
|
||||
match message {
|
||||
NetworkMessage::RPC(peer_id, rpc_event) => {
|
||||
trace!(service.log, "Sending RPC"; "rpc" => format!("{}", rpc_event));
|
||||
service.libp2p.swarm.send_rpc(peer_id, rpc_event);
|
||||
}
|
||||
NetworkMessage::Propagate {
|
||||
propagation_source,
|
||||
message_id,
|
||||
} => {
|
||||
// TODO: Remove this for mainnet
|
||||
// randomly prevents propagation
|
||||
let mut should_send = true;
|
||||
if let Some(percentage) = service.propagation_percentage {
|
||||
// not exact percentage but close enough
|
||||
let rand = rand::random::<u8>() % 100;
|
||||
if rand > percentage {
|
||||
// don't propagate
|
||||
should_send = false;
|
||||
}
|
||||
}
|
||||
if !should_send {
|
||||
info!(service.log, "Random filter did not propagate message");
|
||||
} else {
|
||||
trace!(service.log, "Propagating gossipsub message";
|
||||
"propagation_peer" => format!("{:?}", propagation_source),
|
||||
"message_id" => message_id.to_string(),
|
||||
);
|
||||
service
|
||||
.libp2p
|
||||
.swarm
|
||||
.propagate_message(&propagation_source, message_id);
|
||||
}
|
||||
}
|
||||
if !should_send {
|
||||
info!(log, "Random filter did not propagate message");
|
||||
} else {
|
||||
trace!(log, "Propagating gossipsub message";
|
||||
"propagation_peer" => format!("{:?}", propagation_source),
|
||||
"message_id" => message_id.to_string(),
|
||||
);
|
||||
service.libp2p
|
||||
.swarm
|
||||
.propagate_message(&propagation_source, message_id);
|
||||
}
|
||||
}
|
||||
NetworkMessage::Publish { messages } => {
|
||||
// TODO: Remove this for mainnet
|
||||
// randomly prevents propagation
|
||||
let mut should_send = true;
|
||||
if let Some(percentage) = service.propagation_percentage {
|
||||
// not exact percentage but close enough
|
||||
let rand = rand::random::<u8>() % 100;
|
||||
if rand > percentage {
|
||||
// don't propagate
|
||||
should_send = false;
|
||||
NetworkMessage::Publish { messages } => {
|
||||
// TODO: Remove this for mainnet
|
||||
// randomly prevents propagation
|
||||
let mut should_send = true;
|
||||
if let Some(percentage) = service.propagation_percentage {
|
||||
// not exact percentage but close enough
|
||||
let rand = rand::random::<u8>() % 100;
|
||||
if rand > percentage {
|
||||
// don't propagate
|
||||
should_send = false;
|
||||
}
|
||||
}
|
||||
}
|
||||
if !should_send {
|
||||
info!(log, "Random filter did not publish messages");
|
||||
} else {
|
||||
let mut topic_kinds = Vec::new();
|
||||
for message in &messages {
|
||||
if !should_send {
|
||||
info!(service.log, "Random filter did not publish messages");
|
||||
} else {
|
||||
let mut topic_kinds = Vec::new();
|
||||
for message in &messages {
|
||||
if !topic_kinds.contains(&message.kind()) {
|
||||
topic_kinds.push(message.kind());
|
||||
}
|
||||
}
|
||||
debug!(log, "Sending pubsub messages"; "count" => messages.len(), "topics" => format!("{:?}", topic_kinds));
|
||||
service.libp2p.swarm.publish(messages);
|
||||
}
|
||||
}
|
||||
NetworkMessage::Disconnect { peer_id } => {
|
||||
service.libp2p.disconnect_and_ban_peer(
|
||||
peer_id,
|
||||
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
|
||||
);
|
||||
}
|
||||
NetworkMessage::Subscribe { subscriptions } =>
|
||||
{
|
||||
// the result is dropped as it used solely for ergonomics
|
||||
let _ = service.attestation_service.validator_subscriptions(subscriptions);
|
||||
}
|
||||
},
|
||||
Ok(Async::NotReady) => break,
|
||||
Ok(Async::Ready(None)) => {
|
||||
debug!(log, "Network channel closed");
|
||||
return Err(());
|
||||
}
|
||||
Err(e) => {
|
||||
debug!(log, "Network channel error"; "error" => format!("{}", e));
|
||||
return Err(());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// process any attestation service events
|
||||
// NOTE: This must come after the network message processing as that may trigger events in
|
||||
// the attestation service.
|
||||
while let Ok(Async::Ready(Some(attestation_service_message))) = service.attestation_service.poll() {
|
||||
match attestation_service_message {
|
||||
// TODO: Implement
|
||||
AttServiceMessage::Subscribe(subnet_id) => {
|
||||
service.libp2p.swarm.subscribe_to_subnet(subnet_id);
|
||||
},
|
||||
AttServiceMessage::Unsubscribe(subnet_id) => {
|
||||
service.libp2p.swarm.subscribe_to_subnet(subnet_id);
|
||||
},
|
||||
AttServiceMessage::EnrAdd(subnet_id) => {
|
||||
service.libp2p.swarm.update_enr_subnet(subnet_id, true);
|
||||
},
|
||||
AttServiceMessage::EnrRemove(subnet_id) => {
|
||||
service.libp2p.swarm.update_enr_subnet(subnet_id, false);
|
||||
},
|
||||
AttServiceMessage::DiscoverPeers(subnet_id) => {
|
||||
service.libp2p.swarm.peers_request(subnet_id);
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
let mut peers_to_ban = Vec::new();
|
||||
// poll the swarm
|
||||
loop {
|
||||
match service.libp2p.poll() {
|
||||
Ok(Async::Ready(Some(event))) => match event {
|
||||
BehaviourEvent::RPC(peer_id, rpc_event) => {
|
||||
// if we received a Goodbye message, drop and ban the peer
|
||||
if let RPCEvent::Request(_, RPCRequest::Goodbye(_)) = rpc_event {
|
||||
peers_to_ban.push(peer_id.clone());
|
||||
};
|
||||
service.router_send
|
||||
.try_send(RouterMessage::RPC(peer_id, rpc_event))
|
||||
.map_err(|_| { debug!(log, "Failed to send RPC to router");} )?;
|
||||
}
|
||||
BehaviourEvent::PeerDialed(peer_id) => {
|
||||
debug!(log, "Peer Dialed"; "peer_id" => format!("{}", peer_id));
|
||||
service.router_send
|
||||
.try_send(RouterMessage::PeerDialed(peer_id))
|
||||
.map_err(|_| { debug!(log, "Failed to send peer dialed to router");})?;
|
||||
}
|
||||
BehaviourEvent::PeerDisconnected(peer_id) => {
|
||||
debug!(log, "Peer Disconnected"; "peer_id" => format!("{}", peer_id));
|
||||
service.router_send
|
||||
.try_send(RouterMessage::PeerDisconnected(peer_id))
|
||||
.map_err(|_| { debug!(log, "Failed to send peer disconnect to router");})?;
|
||||
}
|
||||
BehaviourEvent::StatusPeer(peer_id) => {
|
||||
service.router_send
|
||||
.try_send(RouterMessage::StatusPeer(peer_id))
|
||||
.map_err(|_| { debug!(log, "Failed to send re-status peer to router");})?;
|
||||
}
|
||||
BehaviourEvent::PubsubMessage {
|
||||
id,
|
||||
source,
|
||||
message,
|
||||
..
|
||||
} => {
|
||||
|
||||
match message {
|
||||
// attestation information gets processed in the attestation service
|
||||
PubsubMessage::Attestation(ref subnet_and_attestation) => {
|
||||
let subnet = &subnet_and_attestation.0;
|
||||
let attestation = &subnet_and_attestation.1;
|
||||
// checks if we have an aggregator for the slot. If so, we process
|
||||
// the attestation
|
||||
if service.attestation_service.should_process_attestation(&id, &source, subnet, attestation) {
|
||||
service.router_send
|
||||
.try_send(RouterMessage::PubsubMessage(id, source, message))
|
||||
.map_err(|_| { debug!(log, "Failed to send pubsub message to router");})?;
|
||||
}
|
||||
}
|
||||
_ => {
|
||||
// all else is sent to the router
|
||||
service.router_send
|
||||
.try_send(RouterMessage::PubsubMessage(id, source, message))
|
||||
.map_err(|_| { debug!(log, "Failed to send pubsub message to router");})?;
|
||||
debug!(
|
||||
service.log,
|
||||
"Sending pubsub messages";
|
||||
"count" => messages.len(),
|
||||
"topics" => format!("{:?}", topic_kinds)
|
||||
);
|
||||
expose_publish_metrics(&messages);
|
||||
service.libp2p.swarm.publish(messages);
|
||||
}
|
||||
}
|
||||
}
|
||||
BehaviourEvent::PeerSubscribed(_, _) => {}
|
||||
},
|
||||
Ok(Async::Ready(None)) => unreachable!("Stream never ends"),
|
||||
Ok(Async::NotReady) => break,
|
||||
Err(_) => break,
|
||||
NetworkMessage::Disconnect { peer_id } => {
|
||||
service.libp2p.disconnect_and_ban_peer(
|
||||
peer_id,
|
||||
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
|
||||
);
|
||||
}
|
||||
NetworkMessage::Subscribe { subscriptions } => {
|
||||
// the result is dropped as it used solely for ergonomics
|
||||
let _ = service
|
||||
.attestation_service
|
||||
.validator_subscriptions(subscriptions);
|
||||
}
|
||||
}
|
||||
}
|
||||
// process any attestation service events
|
||||
Some(attestation_service_message) = service.attestation_service.next() => {
|
||||
match attestation_service_message {
|
||||
// TODO: Implement
|
||||
AttServiceMessage::Subscribe(subnet_id) => {
|
||||
service.libp2p.swarm.subscribe_to_subnet(subnet_id);
|
||||
}
|
||||
AttServiceMessage::Unsubscribe(subnet_id) => {
|
||||
service.libp2p.swarm.subscribe_to_subnet(subnet_id);
|
||||
}
|
||||
AttServiceMessage::EnrAdd(subnet_id) => {
|
||||
service.libp2p.swarm.update_enr_subnet(subnet_id, true);
|
||||
}
|
||||
AttServiceMessage::EnrRemove(subnet_id) => {
|
||||
service.libp2p.swarm.update_enr_subnet(subnet_id, false);
|
||||
}
|
||||
AttServiceMessage::DiscoverPeers(subnet_id) => {
|
||||
service.libp2p.swarm.peers_request(subnet_id);
|
||||
}
|
||||
}
|
||||
}
|
||||
libp2p_event = service.libp2p.next_event() => {
|
||||
// poll the swarm
|
||||
match libp2p_event {
|
||||
Libp2pEvent::Behaviour(event) => match event {
|
||||
BehaviourEvent::RPC(peer_id, rpc_event) => {
|
||||
// if we received a Goodbye message, drop and ban the peer
|
||||
if let RPCEvent::Request(_, RPCRequest::Goodbye(_)) = rpc_event {
|
||||
//peers_to_ban.push(peer_id.clone());
|
||||
service.libp2p.disconnect_and_ban_peer(
|
||||
peer_id.clone(),
|
||||
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
|
||||
);
|
||||
};
|
||||
let _ = service
|
||||
.router_send
|
||||
.send(RouterMessage::RPC(peer_id, rpc_event))
|
||||
.map_err(|_| {
|
||||
debug!(service.log, "Failed to send RPC to router");
|
||||
});
|
||||
}
|
||||
BehaviourEvent::StatusPeer(peer_id) => {
|
||||
let _ = service
|
||||
.router_send
|
||||
.send(RouterMessage::StatusPeer(peer_id))
|
||||
.map_err(|_| {
|
||||
debug!(service.log, "Failed to send re-status peer to router");
|
||||
});
|
||||
}
|
||||
BehaviourEvent::PubsubMessage {
|
||||
id,
|
||||
source,
|
||||
message,
|
||||
..
|
||||
} => {
|
||||
// Update prometheus metrics.
|
||||
expose_receive_metrics(&message);
|
||||
match message {
|
||||
// attestation information gets processed in the attestation service
|
||||
PubsubMessage::Attestation(ref subnet_and_attestation) => {
|
||||
let subnet = &subnet_and_attestation.0;
|
||||
let attestation = &subnet_and_attestation.1;
|
||||
// checks if we have an aggregator for the slot. If so, we process
|
||||
// the attestation
|
||||
if service.attestation_service.should_process_attestation(
|
||||
&id,
|
||||
&source,
|
||||
subnet,
|
||||
attestation,
|
||||
) {
|
||||
let _ = service
|
||||
.router_send
|
||||
.send(RouterMessage::PubsubMessage(id, source, message))
|
||||
.map_err(|_| {
|
||||
debug!(service.log, "Failed to send pubsub message to router");
|
||||
});
|
||||
} else {
|
||||
metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_IGNORED)
|
||||
}
|
||||
}
|
||||
_ => {
|
||||
// all else is sent to the router
|
||||
let _ = service
|
||||
.router_send
|
||||
.send(RouterMessage::PubsubMessage(id, source, message))
|
||||
.map_err(|_| {
|
||||
debug!(service.log, "Failed to send pubsub message to router");
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
BehaviourEvent::PeerSubscribed(_, _) => {},
|
||||
}
|
||||
Libp2pEvent::NewListenAddr(multiaddr) => {
|
||||
service.network_globals.listen_multiaddrs.write().push(multiaddr);
|
||||
}
|
||||
Libp2pEvent::PeerConnected{ peer_id, endpoint,} => {
|
||||
debug!(service.log, "Peer Connected"; "peer_id" => peer_id.to_string(), "endpoint" => format!("{:?}", endpoint));
|
||||
if let eth2_libp2p::ConnectedPoint::Dialer { .. } = endpoint {
|
||||
let _ = service
|
||||
.router_send
|
||||
.send(RouterMessage::PeerDialed(peer_id))
|
||||
.map_err(|_| {
|
||||
debug!(service.log, "Failed to send peer dialed to router"); });
|
||||
}
|
||||
}
|
||||
Libp2pEvent::PeerDisconnected{ peer_id, endpoint,} => {
|
||||
debug!(service.log, "Peer Disconnected"; "peer_id" => peer_id.to_string(), "endpoint" => format!("{:?}", endpoint));
|
||||
let _ = service
|
||||
.router_send
|
||||
.send(RouterMessage::PeerDisconnected(peer_id))
|
||||
.map_err(|_| {
|
||||
debug!(service.log, "Failed to send peer disconnect to router");
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ban and disconnect any peers that sent Goodbye requests
|
||||
while let Some(peer_id) = peers_to_ban.pop() {
|
||||
service.libp2p.disconnect_and_ban_peer(
|
||||
peer_id.clone(),
|
||||
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
|
||||
);
|
||||
}
|
||||
|
||||
// if we have just forked, update inform the libp2p layer
|
||||
if let Some(mut update_fork_delay) = service.next_fork_update.take() {
|
||||
if !update_fork_delay.is_elapsed() {
|
||||
if let Ok(Async::Ready(_)) = update_fork_delay.poll() {
|
||||
service.libp2p.swarm.update_fork_version(service.beacon_chain.enr_fork_id());
|
||||
service.next_fork_update = next_fork_delay(&service.beacon_chain);
|
||||
if let Some(delay) = &service.next_fork_update {
|
||||
if delay.is_elapsed() {
|
||||
service
|
||||
.libp2p
|
||||
.swarm
|
||||
.update_fork_version(service.beacon_chain.enr_fork_id());
|
||||
service.next_fork_update = next_fork_delay(&service.beacon_chain);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(Async::NotReady)
|
||||
})
|
||||
|
||||
);
|
||||
});
|
||||
|
||||
Ok(network_exit)
|
||||
}
|
||||
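The rewritten `spawn_service` above replaces the futures-0.1 `poll_fn` loop (returning `Async::NotReady`) with `tokio::spawn(async move { loop { tokio::select! { ... } } })`. A compact sketch of that loop shape, assuming tokio 0.2's `select!` macro; the real loop also drives the attestation service and the libp2p swarm, which are omitted here:

async fn run(
    mut exit_rx: tokio::sync::oneshot::Receiver<()>,
    mut network_recv: tokio::sync::mpsc::UnboundedReceiver<String>,
) {
    loop {
        tokio::select! {
            // shutdown wins whenever the exit sender fires or is dropped
            _ = &mut exit_rx => {
                return;
            }
            // a message sent to the network service
            Some(msg) = network_recv.recv() => {
                println!("network message: {}", msg);
            }
        }
    }
}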
@@ -376,11 +370,11 @@ fn spawn_service<T: BeaconChainTypes>(
/// If there is no scheduled fork, `None` is returned.
fn next_fork_delay<T: BeaconChainTypes>(
beacon_chain: &BeaconChain<T>,
) -> Option<tokio::timer::Delay> {
) -> Option<tokio::time::Delay> {
beacon_chain.duration_to_next_fork().map(|until_fork| {
// Add a short time-out to start within the new fork period.
let delay = Duration::from_millis(200);
tokio::timer::Delay::new(Instant::now() + until_fork + delay)
tokio::time::delay_until(tokio::time::Instant::now() + until_fork + delay)
})
}

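The timer change above also changes the instant type: `tokio::time::delay_until` expects a `tokio::time::Instant`, whereas the old `tokio::timer::Delay::new` took a `std::time::Instant`. A minimal sketch, assuming tokio 0.2's time API (`wait_for_fork_boundary` is a hypothetical helper):

use std::time::Duration;

// Hypothetical helper illustrating the new timer call used above.
async fn wait_for_fork_boundary(until_fork: Duration) {
    // Small buffer so we wake up just inside the new fork period.
    let delay = Duration::from_millis(200);
    let deadline = tokio::time::Instant::now() + until_fork + delay;
    tokio::time::delay_until(deadline).await;
}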
@@ -403,3 +397,33 @@ pub enum NetworkMessage<T: EthSpec> {
/// Disconnect and bans a peer id.
Disconnect { peer_id: PeerId },
}

/// Inspects the `messages` that were being sent to the network and updates Prometheus metrics.
fn expose_publish_metrics<T: EthSpec>(messages: &[PubsubMessage<T>]) {
for message in messages {
match message {
PubsubMessage::BeaconBlock(_) => metrics::inc_counter(&metrics::GOSSIP_BLOCKS_TX),
PubsubMessage::Attestation(_) => {
metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_TX)
}
PubsubMessage::AggregateAndProofAttestation(_) => {
metrics::inc_counter(&metrics::GOSSIP_AGGREGATED_ATTESTATIONS_TX)
}
_ => {}
}
}
}

/// Inspects a `message` received from the network and updates Prometheus metrics.
fn expose_receive_metrics<T: EthSpec>(message: &PubsubMessage<T>) {
match message {
PubsubMessage::BeaconBlock(_) => metrics::inc_counter(&metrics::GOSSIP_BLOCKS_RX),
PubsubMessage::Attestation(_) => {
metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_RX)
}
PubsubMessage::AggregateAndProofAttestation(_) => {
metrics::inc_counter(&metrics::GOSSIP_AGGREGATED_ATTESTATIONS_RX)
}
_ => {}
}
}

@@ -5,7 +5,6 @@ mod tests {
use crate::{NetworkConfig, NetworkService};
use beacon_chain::test_utils::BeaconChainHarness;
use eth2_libp2p::Enr;
use futures::{Future, IntoFuture};
use slog::Logger;
use sloggers::{null::NullLoggerBuilder, Build};
use std::str::FromStr;
@@ -33,21 +32,20 @@ mod tests {
let enrs = vec![enr1, enr2];

let runtime = Runtime::new().unwrap();
let executor = runtime.executor();
let handle = runtime.handle().clone();

let mut config = NetworkConfig::default();
config.libp2p_port = 21212;
config.discovery_port = 21212;
config.boot_nodes = enrs.clone();
runtime
.block_on_all(
// Create a new network service which implicitly gets dropped at the
// end of the block.
NetworkService::start(beacon_chain.clone(), &config, &executor, log.clone())
.into_future()
.and_then(move |(_globals, _service, _exit)| Ok(())),
)
.unwrap();
runtime.spawn(async move {
// Create a new network service which implicitly gets dropped at the
// end of the block.

let _ =
NetworkService::start(beacon_chain.clone(), &config, &handle, log.clone()).unwrap();
});
runtime.shutdown_timeout(tokio::time::Duration::from_millis(300));

// Load the persisted dht from the store
let persisted_enrs = load_dht(store);

@@ -34,26 +34,38 @@ pub fn spawn_block_processor<T: BeaconChainTypes>(
|
||||
chain: Weak<BeaconChain<T>>,
|
||||
process_id: ProcessId,
|
||||
downloaded_blocks: Vec<SignedBeaconBlock<T::EthSpec>>,
|
||||
mut sync_send: mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
|
||||
sync_send: mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
|
||||
log: slog::Logger,
|
||||
) {
|
||||
std::thread::spawn(move || {
|
||||
match process_id {
|
||||
// this a request from the range sync
|
||||
ProcessId::RangeBatchId(chain_id, batch_id) => {
|
||||
debug!(log, "Processing batch"; "id" => *batch_id, "blocks" => downloaded_blocks.len());
|
||||
let len = downloaded_blocks.len();
|
||||
let start_slot = if len > 0 {
|
||||
downloaded_blocks[0].message.slot.as_u64()
|
||||
} else {
|
||||
0
|
||||
};
|
||||
let end_slot = if len > 0 {
|
||||
downloaded_blocks[len - 1].message.slot.as_u64()
|
||||
} else {
|
||||
0
|
||||
};
|
||||
|
||||
debug!(log, "Processing batch"; "id" => *batch_id, "blocks" => downloaded_blocks.len(), "start_slot" => start_slot, "end_slot" => end_slot);
|
||||
let result = match process_blocks(chain, downloaded_blocks.iter(), &log) {
|
||||
(_, Ok(_)) => {
|
||||
debug!(log, "Batch processed"; "id" => *batch_id );
|
||||
debug!(log, "Batch processed"; "id" => *batch_id , "start_slot" => start_slot, "end_slot" => end_slot);
|
||||
BatchProcessResult::Success
|
||||
}
|
||||
(imported_blocks, Err(e)) if imported_blocks > 0 => {
|
||||
debug!(log, "Batch processing failed but imported some blocks";
|
||||
warn!(log, "Batch processing failed but imported some blocks";
|
||||
"id" => *batch_id, "error" => e, "imported_blocks"=> imported_blocks);
|
||||
BatchProcessResult::Partial
|
||||
}
|
||||
(_, Err(e)) => {
|
||||
debug!(log, "Batch processing failed"; "id" => *batch_id, "error" => e);
|
||||
warn!(log, "Batch processing failed"; "id" => *batch_id, "error" => e);
|
||||
BatchProcessResult::Failed
|
||||
}
|
||||
};
|
||||
@@ -64,7 +76,7 @@ pub fn spawn_block_processor<T: BeaconChainTypes>(
|
||||
downloaded_blocks,
|
||||
result,
|
||||
};
|
||||
sync_send.try_send(msg).unwrap_or_else(|_| {
|
||||
sync_send.send(msg).unwrap_or_else(|_| {
|
||||
debug!(
|
||||
log,
|
||||
"Block processor could not inform range sync result. Likely shutting down."
|
||||
@@ -84,7 +96,7 @@ pub fn spawn_block_processor<T: BeaconChainTypes>(
|
||||
(_, Err(e)) => {
|
||||
warn!(log, "Parent lookup failed"; "last_peer_id" => format!("{}", peer_id), "error" => e);
|
||||
sync_send
|
||||
.try_send(SyncMessage::ParentLookupFailed(peer_id))
|
||||
.send(SyncMessage::ParentLookupFailed(peer_id))
|
||||
.unwrap_or_else(|_| {
|
||||
// on failure, inform to downvote the peer
|
||||
debug!(
|
||||
|
||||
@@ -43,7 +43,6 @@ use eth2_libp2p::rpc::{methods::*, RequestId};
use eth2_libp2p::types::NetworkGlobals;
use eth2_libp2p::PeerId;
use fnv::FnvHashMap;
use futures::prelude::*;
use slog::{crit, debug, error, info, trace, warn, Logger};
use smallvec::SmallVec;
use std::boxed::Box;
@@ -182,7 +181,7 @@ impl SingleBlockRequest {
/// chain. This allows the chain to be
/// dropped during the syncing process which will gracefully end the `SyncManager`.
pub fn spawn<T: BeaconChainTypes>(
executor: &tokio::runtime::TaskExecutor,
runtime_handle: &tokio::runtime::Handle,
beacon_chain: Arc<BeaconChain<T>>,
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
@@ -197,14 +196,14 @@ pub fn spawn<T: BeaconChainTypes>(
let (sync_send, sync_recv) = mpsc::unbounded_channel::<SyncMessage<T::EthSpec>>();

// create an instance of the SyncManager
let sync_manager = SyncManager {
let mut sync_manager = SyncManager {
range_sync: RangeSync::new(
beacon_chain.clone(),
network_globals.clone(),
sync_send.clone(),
log.clone(),
),
network: SyncNetworkContext::new(network_send, log.clone()),
network: SyncNetworkContext::new(network_send, network_globals.clone(), log.clone()),
chain: beacon_chain,
network_globals,
input_channel: sync_recv,
@@ -216,14 +215,10 @@ pub fn spawn<T: BeaconChainTypes>(

// spawn the sync manager thread
debug!(log, "Sync Manager started");
executor.spawn(
sync_manager
.select(exit_rx.then(|_| Ok(())))
.then(move |_| {
info!(log.clone(), "Sync Manager shutdown");
Ok(())
}),
);
runtime_handle.spawn(async move {
futures::future::select(Box::pin(sync_manager.main()), exit_rx).await;
info!(log.clone(), "Sync Manager shutdown");
});
(sync_send, sync_exit)
}

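As above, the combinator-style `executor.spawn(sync_manager.select(exit_rx...))` becomes a spawned async block that races the manager against the exit signal. A self-contained sketch of that shutdown pattern, assuming futures 0.3 and a tokio 0.2 handle; `long_running` stands in for `sync_manager.main()` and is not a real Lighthouse function:

use futures::future;

// Placeholder for a long-lived task such as the sync manager's main loop.
async fn long_running() {
    loop {
        tokio::time::delay_for(std::time::Duration::from_secs(1)).await;
    }
}

fn spawn_with_exit(handle: &tokio::runtime::Handle) -> tokio::sync::oneshot::Sender<()> {
    let (exit_tx, exit_rx) = tokio::sync::oneshot::channel::<()>();
    handle.spawn(async move {
        // `select` finishes when either future completes; `Box::pin` is needed
        // because the `async fn` future is not `Unpin`.
        future::select(Box::pin(long_running()), exit_rx).await;
        println!("task shut down");
    });
    exit_tx
}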
@@ -470,6 +465,8 @@ impl<T: BeaconChainTypes> SyncManager<T> {
|
||||
}
|
||||
}
|
||||
|
||||
debug!(self.log, "Unknown block received. Starting a parent lookup"; "block_slot" => block.message.slot, "block_hash" => format!("{}", block.canonical_root()));
|
||||
|
||||
let parent_request = ParentRequests {
|
||||
downloaded_blocks: vec![block],
|
||||
failed_attempts: 0,
|
||||
@@ -730,17 +727,13 @@ impl<T: BeaconChainTypes> SyncManager<T> {
|
||||
self.parent_queue.push(parent_request);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: BeaconChainTypes> Future for SyncManager<T> {
|
||||
type Item = ();
|
||||
type Error = String;
|
||||
|
||||
fn poll(&mut self) -> Result<Async<Self::Item>, Self::Error> {
|
||||
/// The main driving future for the sync manager.
|
||||
async fn main(&mut self) {
|
||||
// process any inbound messages
|
||||
loop {
|
||||
match self.input_channel.poll() {
|
||||
Ok(Async::Ready(Some(message))) => match message {
|
||||
if let Some(sync_message) = self.input_channel.recv().await {
|
||||
match sync_message {
|
||||
SyncMessage::AddPeer(peer_id, info) => {
|
||||
self.add_peer(peer_id, info);
|
||||
}
|
||||
@@ -792,17 +785,8 @@ impl<T: BeaconChainTypes> Future for SyncManager<T> {
|
||||
SyncMessage::ParentLookupFailed(peer_id) => {
|
||||
self.network.downvote_peer(peer_id);
|
||||
}
|
||||
},
|
||||
Ok(Async::NotReady) => break,
|
||||
Ok(Async::Ready(None)) => {
|
||||
return Err("Sync manager channel closed".into());
|
||||
}
|
||||
Err(e) => {
|
||||
return Err(format!("Sync Manager channel error: {:?}", e));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(Async::NotReady)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -6,7 +6,7 @@ use crate::service::NetworkMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::{RPCEvent, RPCRequest, RequestId};
use eth2_libp2p::PeerId;
use eth2_libp2p::{Client, NetworkGlobals, PeerId};
use slog::{debug, trace, warn};
use std::sync::Arc;
use tokio::sync::mpsc;
@@ -18,20 +18,39 @@ pub struct SyncNetworkContext<T: EthSpec> {
/// The network channel to relay messages to the Network service.
network_send: mpsc::UnboundedSender<NetworkMessage<T>>,

/// Access to the network global vars.
network_globals: Arc<NetworkGlobals<T>>,

/// A sequential ID for all RPC requests.
request_id: RequestId,
/// Logger for the `SyncNetworkContext`.
log: slog::Logger,
}

impl<T: EthSpec> SyncNetworkContext<T> {
pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage<T>>, log: slog::Logger) -> Self {
pub fn new(
network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
network_globals: Arc<NetworkGlobals<T>>,
log: slog::Logger,
) -> Self {
Self {
network_send,
network_globals,
request_id: 1,
log,
}
}

/// Returns the Client type of the peer if known
pub fn client_type(&self, peer_id: &PeerId) -> Client {
self.network_globals
.peers
.read()
.peer_info(peer_id)
.map(|info| info.client.clone())
.unwrap_or_default()
}

pub fn status_peer<U: BeaconChainTypes>(
&mut self,
chain: Arc<BeaconChain<U>>,
@@ -104,7 +123,7 @@ impl<T: EthSpec> SyncNetworkContext<T> {
|
||||
// ignore the error if the channel send fails
|
||||
let _ = self.send_rpc_request(peer_id.clone(), RPCRequest::Goodbye(reason));
|
||||
self.network_send
|
||||
.try_send(NetworkMessage::Disconnect { peer_id })
|
||||
.send(NetworkMessage::Disconnect { peer_id })
|
||||
.unwrap_or_else(|_| {
|
||||
warn!(
|
||||
self.log,
|
||||
@@ -130,7 +149,7 @@ impl<T: EthSpec> SyncNetworkContext<T> {
|
||||
rpc_event: RPCEvent<T>,
|
||||
) -> Result<(), &'static str> {
|
||||
self.network_send
|
||||
.try_send(NetworkMessage::RPC(peer_id, rpc_event))
|
||||
.send(NetworkMessage::RPC(peer_id, rpc_event))
|
||||
.map_err(|_| {
|
||||
debug!(
|
||||
self.log,
|
||||
|
||||
@@ -31,6 +31,7 @@ const BATCH_BUFFER_SIZE: u8 = 5;
|
||||
/// be downvoted.
|
||||
const INVALID_BATCH_LOOKUP_ATTEMPTS: u8 = 3;
|
||||
|
||||
#[derive(PartialEq)]
|
||||
/// A return type for functions that act on a `Chain` which informs the caller whether the chain
|
||||
/// has been completed and should be removed or to be kept if further processing is
|
||||
/// required.
|
||||
@@ -380,8 +381,8 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
|
||||
}
|
||||
}
|
||||
BatchProcessResult::Failed => {
|
||||
warn!(self.log, "Batch processing failed";
|
||||
"chain_id" => self.id,"id" => *batch.id, "peer" => format!("{}", batch.current_peer));
|
||||
debug!(self.log, "Batch processing failed";
|
||||
"chain_id" => self.id,"id" => *batch.id, "peer" => batch.current_peer.to_string(), "client" => network.client_type(&batch.current_peer).to_string());
|
||||
// The batch processing failed
|
||||
// This could be because this batch is invalid, or a previous invalidated batch
|
||||
// is invalid. We need to find out which and downvote the peer that has sent us
|
||||
|
||||
@@ -369,7 +369,19 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
|
||||
.find_map(|(index, chain)| Some((index, func(chain)?)))
|
||||
}
|
||||
|
||||
/// Runs a function on all finalized chains.
|
||||
/// Given a chain iterator, runs a given function on each chain and return all `Some` results.
|
||||
fn request_function_all<'a, F, I, U>(chain: I, mut func: F) -> Vec<(usize, U)>
|
||||
where
|
||||
I: Iterator<Item = &'a mut SyncingChain<T>>,
|
||||
F: FnMut(&'a mut SyncingChain<T>) -> Option<U>,
|
||||
{
|
||||
chain
|
||||
.enumerate()
|
||||
.filter_map(|(index, chain)| Some((index, func(chain)?)))
|
||||
.collect()
|
||||
}
|
||||
|
||||
/// Runs a function on finalized chains until we get the first `Some` result from `F`.
|
||||
pub fn finalized_request<F, U>(&mut self, func: F) -> Option<(usize, U)>
|
||||
where
|
||||
F: FnMut(&mut SyncingChain<T>) -> Option<U>,
|
||||
@@ -377,7 +389,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
|
||||
ChainCollection::request_function(self.finalized_chains.iter_mut(), func)
|
||||
}
|
||||
|
||||
/// Runs a function on all head chains.
|
||||
/// Runs a function on head chains until we get the first `Some` result from `F`.
|
||||
pub fn head_request<F, U>(&mut self, func: F) -> Option<(usize, U)>
|
||||
where
|
||||
F: FnMut(&mut SyncingChain<T>) -> Option<U>,
|
||||
@@ -385,7 +397,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
|
||||
ChainCollection::request_function(self.head_chains.iter_mut(), func)
|
||||
}
|
||||
|
||||
/// Runs a function on all finalized and head chains.
|
||||
/// Runs a function on finalized and head chains until we get the first `Some` result from `F`.
|
||||
pub fn head_finalized_request<F, U>(&mut self, func: F) -> Option<(usize, U)>
|
||||
where
|
||||
F: FnMut(&mut SyncingChain<T>) -> Option<U>,
|
||||
@@ -398,6 +410,19 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
|
||||
)
|
||||
}
|
||||
|
||||
/// Runs a function on all finalized and head chains and collects all `Some` results from `F`.
|
||||
pub fn head_finalized_request_all<F, U>(&mut self, func: F) -> Vec<(usize, U)>
|
||||
where
|
||||
F: FnMut(&mut SyncingChain<T>) -> Option<U>,
|
||||
{
|
||||
ChainCollection::request_function_all(
|
||||
self.finalized_chains
|
||||
.iter_mut()
|
||||
.chain(self.head_chains.iter_mut()),
|
||||
func,
|
||||
)
|
||||
}
|
||||
|
||||
/// Removes any outdated finalized or head chains.
|
||||
///
|
||||
/// This removes chains with no peers, or chains whose start block slot is less than our current
|
||||
|
||||
@@ -355,7 +355,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
|
||||
peer_id: &PeerId,
|
||||
) {
|
||||
// if the peer is in the awaiting head mapping, remove it
|
||||
self.awaiting_head_peers.remove(&peer_id);
|
||||
self.awaiting_head_peers.remove(peer_id);
|
||||
|
||||
// remove the peer from any peer pool
|
||||
self.remove_peer(network, peer_id);
|
||||
@@ -370,26 +370,26 @@ impl<T: BeaconChainTypes> RangeSync<T> {
|
||||
/// for this peer. If so we mark the batch as failed. The batch may then hit it's maximum
|
||||
/// retries. In this case, we need to remove the chain and re-status all the peers.
|
||||
fn remove_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: &PeerId) {
|
||||
if let Some((index, ProcessingResult::RemoveChain)) =
|
||||
self.chains.head_finalized_request(|chain| {
|
||||
if chain.peer_pool.remove(peer_id) {
|
||||
// this chain contained the peer
|
||||
while let Some(batch) = chain.pending_batches.remove_batch_by_peer(peer_id) {
|
||||
if let ProcessingResult::RemoveChain = chain.failed_batch(network, batch) {
|
||||
// a single batch failed, remove the chain
|
||||
return Some(ProcessingResult::RemoveChain);
|
||||
}
|
||||
for (index, result) in self.chains.head_finalized_request_all(|chain| {
|
||||
if chain.peer_pool.remove(peer_id) {
|
||||
// this chain contained the peer
|
||||
while let Some(batch) = chain.pending_batches.remove_batch_by_peer(peer_id) {
|
||||
if let ProcessingResult::RemoveChain = chain.failed_batch(network, batch) {
|
||||
// a single batch failed, remove the chain
|
||||
return Some(ProcessingResult::RemoveChain);
|
||||
}
|
||||
// peer removed from chain, no batch failed
|
||||
Some(ProcessingResult::KeepChain)
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
{
|
||||
// the chain needed to be removed
|
||||
debug!(self.log, "Chain being removed due to failed batch");
|
||||
self.chains.remove_chain(network, index);
|
||||
// peer removed from chain, no batch failed
|
||||
Some(ProcessingResult::KeepChain)
|
||||
} else {
|
||||
None
|
||||
}
|
||||
}) {
|
||||
if result == ProcessingResult::RemoveChain {
|
||||
// the chain needed to be removed
|
||||
debug!(self.log, "Chain being removed due to failed batch");
|
||||
self.chains.remove_chain(network, index);
|
||||
}
|
||||
}
|
||||
}