Stable futures (#879)

* Port eth1 lib to use stable futures

* Port eth1_test_rig to stable futures

* Port eth1 tests to stable futures

* Port genesis service to stable futures

* Port genesis tests to stable futures

* Port beacon_chain to stable futures

* Port lcli to stable futures

* Fix eth1_test_rig (#1014)

* Fix lcli

* Port timer to stable futures

* Fix timer

* Port websocket_server to stable futures

* Port notifier to stable futures

* Add TODOS

* Update hashmap hashset to stable futures

* Adds panic test to hashset delay

* Port remote_beacon_node to stable futures

* Fix lcli merge conflicts

* Non rpc stuff compiles

* protocol.rs compiles

* Port websockets, timer and notifier to stable futures (#1035)

* Fix lcli

* Port timer to stable futures

* Fix timer

* Port websocket_server to stable futures

* Port notifier to stable futures

* Add TODOS

* Port remote_beacon_node to stable futures

* Partial eth2-libp2p stable future upgrade

* Finished first round of fighting RPC types

* Further progress towards porting eth2-libp2p adds caching to discovery

* Update behaviour

* RPC handler to stable futures

* Update RPC to master libp2p

* Network service additions

* Fix the fallback transport construction (#1102)

* Correct warning

* Remove hashmap delay

* Compiling version of eth2-libp2p

* Update all crates versions

* Fix conversion function and add tests (#1113)

* Port validator_client to stable futures (#1114)

* Add PH & MS slot clock changes

* Account for genesis time

* Add progress on duties refactor

* Add simple is_aggregator bool to val subscription

* Start work on attestation_verification.rs

* Add progress on ObservedAttestations

* Progress with ObservedAttestations

* Fix tests

* Add observed attestations to the beacon chain

* Add attestation observation to processing code

* Add progress on attestation verification

* Add first draft of ObservedAttesters

* Add more tests

* Add observed attesters to beacon chain

* Add observers to attestation processing

* Add more attestation verification

* Create ObservedAggregators map

* Remove commented-out code

* Add observed aggregators into chain

* Add progress

* Finish adding features to attestation verification

* Ensure beacon chain compiles

* Link attn verification into chain

* Integrate new attn verification in chain

* Remove old attestation processing code

* Start trying to fix beacon_chain tests

* Split adding into pools into two functions

* Add aggregation to harness

* Get test harness working again

* Adjust the number of aggregators for test harness

* Fix edge-case in harness

* Integrate new attn processing in network

* Fix compile bug in validator_client

* Update validator API endpoints

* Fix aggregation in test harness

* Fix enum thing

* Fix attestation observation bug:

* Patch failing API tests

* Start adding comments to attestation verification

* Remove unused attestation field

* Unify "is block known" logic

* Update comments

* Suppress fork choice errors for network processing

* Add todos

* Tidy

* Add gossip attn tests

* Disallow test harness to produce old attns

* Comment out in-progress tests

* Partially address pruning tests

* Fix failing store test

* Add aggregate tests

* Add comments about which spec conditions we check

* Don't re-aggregate

* Split apart test harness attn production

* Fix compile error in network

* Make progress on commented-out test

* Fix skipping attestation test

* Add fork choice verification tests

* Tidy attn tests, remove dead code

* Remove some accidentally added code

* Fix clippy lint

* Rename test file

* Add block tests, add cheap block proposer check

* Rename block testing file

* Add observed_block_producers

* Tidy

* Switch around block signature verification

* Finish block testing

* Remove gossip from signature tests

* First pass of self review

* Fix deviation in spec

* Update test spec tags

* Start moving over to hashset

* Finish moving observed attesters to hashmap

* Move aggregation pool over to hashmap

* Make fc attn borrow again

* Fix rest_api compile error

* Fix missing comments

* Fix monster test

* Uncomment increasing slots test

* Address remaining comments

* Remove unsafe, use cfg test

* Remove cfg test flag

* Fix dodgy comment

* Revert "Update hashmap hashset to stable futures"

This reverts commit d432378a3c.

* Revert "Adds panic test to hashset delay"

This reverts commit 281502396f.

* Ported attestation_service

* Ported duties_service

* Ported fork_service

* More ports

* Port block_service

* Minor fixes

* VC compiles

* Update TODOS

* Borrow self where possible

* Ignore aggregates that are already known.

* Unify aggregator modulo logic

* Fix typo in logs

* Refactor validator subscription logic

* Avoid reproducing selection proof

* Skip HTTP call if no subscriptions

* Rename DutyAndState -> DutyAndProof

* Tidy logs

* Print root as dbg

* Fix compile errors in tests

* Fix compile error in test

* Re-Fix attestation and duties service

* Minor fixes

Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Network crate update to stable futures

* Port account_manager to stable futures (#1121)

* Port account_manager to stable futures

* Run async fns in tokio environment

* Port rest_api crate to stable futures (#1118)

* Port rest_api lib to stable futures

* Reduce tokio features

* Update notifier to stable futures

* Builder update

* Further updates

* Convert self referential async functions

* stable futures fixes (#1124)

* Fix eth1 update functions

* Fix genesis and client

* Fix beacon node lib

* Return appropriate runtimes from environment

* Fix test rig

* Refactor eth1 service update

* Upgrade simulator to stable futures

* Lighthouse compiles on stable futures

* Remove println debugging statement

* Update libp2p service, start rpc test upgrade

* Update network crate for new libp2p

* Update tokio::codec to futures_codec (#1128)

* Further work towards RPC corrections

* Correct http timeout and network service select

* Use tokio runtime for libp2p

* Revert "Update tokio::codec to futures_codec (#1128)"

This reverts commit e57aea924a.

* Upgrade RPC libp2p tests

* Upgrade secio fallback test

* Upgrade gossipsub examples

* Clean up RPC protocol

* Test fixes (#1133)

* Correct websocket timeout and run on os thread

* Fix network test

* Clean up PR

* Correct tokio tcp, move attestation service tests

* Upgrade attestation service tests

* Correct network test

* Correct genesis test

* Test corrections

* Log info when block is received

* Modify logs and update attester service events

* Stable futures: fixes to vc, eth1 and account manager (#1142)

* Add local testnet scripts

* Remove whiteblock script

* Rename local testnet script

* Move spawns onto handle

* Fix VC panic

* Initial fix to block production issue

* Tidy block producer fix

* Tidy further

* Add local testnet clean script

* Run cargo fmt

* Tidy duties service

* Tidy fork service

* Tidy ForkService

* Tidy AttestationService

* Tidy notifier

* Ensure await is not suppressed in eth1

* Ensure await is not suppressed in account_manager

* Use .ok() instead of .unwrap_or(())

* RPC decoding test for proto

* Update discv5 and eth2-libp2p deps

* Fix lcli double runtime issue (#1144)

* Handle stream termination and dialing peer errors

* Correct peer_info variant types

* Remove unnecessary warnings

* Handle subnet unsubscription removal and improve logging

* Add logs around ping

* Upgrade discv5 and improve logging

* Handle peer connection status for multiple connections

* Improve network service logging

* Improve logging around peer manager

* Upgrade swarm poll, centralise peer management

* Identify clients on error

* Fix `remove_peer` in sync (#1150)

* remove_peer removes from all chains

* Remove logs

* Fix early return from loop

* Improved logging, fix panic

* Partially correct tests

* Stable futures: Vc sync (#1149)

* Improve syncing heuristic

* Add comments

* Use safer method for tolerance

* Fix tests

* Stable futures: Fix VC bug, update agg pool, add more metrics (#1151)

* Expose epoch processing summary

* Expose participation metrics to prometheus

* Switch to f64

* Reduce precision

* Change precision

* Expose observed attesters metrics

* Add metrics for agg/unagg attn counts

* Add metrics for gossip rx

* Add metrics for gossip tx

* Adds ignored attns to prom

* Add attestation timing

* Add timer for aggregation pool sig agg

* Add write lock timer for agg pool

* Add more metrics to agg pool

* Change map lock code

* Add extra metric to agg pool

* Change lock handling in agg pool

* Change .write() to .read()

* Add another agg pool timer

* Fix for is_aggregator

* Fix pruning bug

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Author: Age Manning
Date: 2020-05-17 21:16:48 +10:00
Committed by: GitHub
Parent: 21901b1615
Commit: b6408805a2
165 changed files with 7924 additions and 7733 deletions

View File

@@ -12,19 +12,19 @@ fake_crypto = ["bls/fake_crypto"]
[dependencies]
bls = { path = "../../eth2/utils/bls" }
compare_fields = { path = "../../eth2/utils/compare_fields" }
ethereum-types = "0.9"
hex = "0.3"
rayon = "1.2.0"
serde = "1.0.102"
serde_derive = "1.0.102"
ethereum-types = "0.9.1"
hex = "0.4.2"
rayon = "1.3.0"
serde = "1.0.110"
serde_derive = "1.0.110"
serde_repr = "0.1.5"
serde_yaml = "0.8.11"
eth2_ssz = "0.1.2"
eth2_ssz_derive = "0.1.0"
tree_hash = "0.1.0"
tree_hash_derive = "0.2"
tree_hash_derive = "0.2.0"
cached_tree_hash = { path = "../../eth2/utils/cached_tree_hash" }
state_processing = { path = "../../eth2/state_processing" }
swap_or_not_shuffle = { path = "../../eth2/utils/swap_or_not_shuffle" }
types = { path = "../../eth2/types" }
walkdir = "2.2.9"
walkdir = "2.3.1"

View File

@@ -66,7 +66,7 @@ impl<E: EthSpec> Case for SanitySlots<E> {
state.build_all_caches(spec).unwrap();
let mut result = (0..self.slots)
.try_for_each(|_| per_slot_processing(&mut state, None, spec))
.try_for_each(|_| per_slot_processing(&mut state, None, spec).map(|_| ()))
.map(|_| state);
compare_beacon_state_results_without_caches(&mut result, &mut expected)

View File

@@ -6,8 +6,8 @@ edition = "2018"
[dependencies]
web3 = "0.10.0"
tokio = "0.1.22"
futures = "0.1.25"
tokio = { version = "0.2.20", features = ["time"] }
futures = { version = "0.3.5", features = ["compat"] }
types = { path = "../../eth2/types"}
serde_json = "1.0"
serde_json = "1.0.52"
deposit_contract = { path = "../../eth2/utils/deposit_contract"}

View File

@@ -1,4 +1,4 @@
use futures::Future;
use futures::compat::Future01CompatExt;
use serde_json::json;
use std::io::prelude::*;
use std::io::BufReader;
@@ -98,28 +98,34 @@ impl GanacheInstance {
}
/// Increase the timestamp on future blocks by `increase_by` seconds.
pub fn increase_time(&self, increase_by: u64) -> impl Future<Item = (), Error = String> {
pub async fn increase_time(&self, increase_by: u64) -> Result<(), String> {
self.web3
.transport()
.execute("evm_increaseTime", vec![json!(increase_by)])
.compat()
.await
.map(|_json_value| ())
.map_err(|e| format!("Failed to increase time on EVM (is this ganache?): {:?}", e))
}
/// Returns the current block number, as u64
pub fn block_number(&self) -> impl Future<Item = u64, Error = String> {
pub async fn block_number(&self) -> Result<u64, String> {
self.web3
.eth()
.block_number()
.compat()
.await
.map(|v| v.as_u64())
.map_err(|e| format!("Failed to get block number: {:?}", e))
}
/// Mines a single block.
pub fn evm_mine(&self) -> impl Future<Item = (), Error = String> {
pub async fn evm_mine(&self) -> Result<(), String> {
self.web3
.transport()
.execute("evm_mine", vec![])
.compat()
.await
.map(|_| ())
.map_err(|_| {
"utils should mine new block with evm_mine (only works with ganache-cli!)"

View File

@@ -10,10 +10,10 @@ mod ganache;
use deposit_contract::{
encode_eth1_tx_data, testnet, ABI, BYTECODE, CONTRACT_DEPLOY_GAS, DEPOSIT_GAS,
};
use futures::{future, stream, Future, IntoFuture, Stream};
use futures::compat::Future01CompatExt;
use ganache::GanacheInstance;
use std::time::{Duration, Instant};
use tokio::{runtime::Runtime, timer::Delay};
use std::time::Duration;
use tokio::time::delay_for;
use types::DepositData;
use types::{test_utils::generate_deterministic_keypair, EthSpec, Hash256, Keypair, Signature};
use web3::contract::{Contract, Options};
@@ -31,13 +31,14 @@ pub struct GanacheEth1Instance {
}
impl GanacheEth1Instance {
pub fn new() -> impl Future<Item = Self, Error = String> {
GanacheInstance::new().into_future().and_then(|ganache| {
DepositContract::deploy(ganache.web3.clone(), 0, None).map(|deposit_contract| Self {
pub async fn new() -> Result<Self, String> {
let ganache = GanacheInstance::new()?;
DepositContract::deploy(ganache.web3.clone(), 0, None)
.await
.map(|deposit_contract| Self {
ganache,
deposit_contract,
})
})
}
pub fn endpoint(&self) -> String {
@@ -57,19 +58,19 @@ pub struct DepositContract {
}
impl DepositContract {
pub fn deploy(
pub async fn deploy(
web3: Web3<Http>,
confirmations: usize,
password: Option<String>,
) -> impl Future<Item = Self, Error = String> {
Self::deploy_bytecode(web3, confirmations, BYTECODE, ABI, password)
) -> Result<Self, String> {
Self::deploy_bytecode(web3, confirmations, BYTECODE, ABI, password).await
}
pub fn deploy_testnet(
pub async fn deploy_testnet(
web3: Web3<Http>,
confirmations: usize,
password: Option<String>,
) -> impl Future<Item = Self, Error = String> {
) -> Result<Self, String> {
Self::deploy_bytecode(
web3,
confirmations,
@@ -77,35 +78,33 @@ impl DepositContract {
testnet::ABI,
password,
)
.await
}
fn deploy_bytecode(
async fn deploy_bytecode(
web3: Web3<Http>,
confirmations: usize,
bytecode: &[u8],
abi: &[u8],
password: Option<String>,
) -> impl Future<Item = Self, Error = String> {
let web3_1 = web3.clone();
deploy_deposit_contract(
) -> Result<Self, String> {
let address = deploy_deposit_contract(
web3.clone(),
confirmations,
bytecode.to_vec(),
abi.to_vec(),
password,
)
.await
.map_err(|e| {
format!(
"Failed to deploy contract: {}. Is scripts/ganache_tests_node.sh running?.",
e
)
})
.and_then(move |address| {
Contract::from_json(web3_1.eth(), address, ABI)
.map_err(|e| format!("Failed to init contract: {:?}", e))
})
.map(|contract| Self { contract, web3 })
})?;
Contract::from_json(web3.clone().eth(), address, ABI)
.map_err(|e| format!("Failed to init contract: {:?}", e))
.map(move |contract| Self { contract, web3 })
}
/// The deposit contract's address in `0x00ab...` format.
@@ -136,7 +135,7 @@ impl DepositContract {
/// Creates a random, valid deposit and submits it to the deposit contract.
///
/// The keypairs are created randomly and destroyed.
pub fn deposit_random<E: EthSpec>(&self, runtime: &mut Runtime) -> Result<(), String> {
pub async fn deposit_random<E: EthSpec>(&self) -> Result<(), String> {
let keypair = Keypair::random();
let mut deposit = DepositData {
@@ -148,21 +147,21 @@ impl DepositContract {
deposit.signature = deposit.create_signature(&keypair.sk, &E::default_spec());
self.deposit(runtime, deposit)
self.deposit(deposit).await
}
/// Perfoms a blocking deposit.
pub fn deposit(&self, runtime: &mut Runtime, deposit_data: DepositData) -> Result<(), String> {
runtime
.block_on(self.deposit_async(deposit_data))
pub async fn deposit(&self, deposit_data: DepositData) -> Result<(), String> {
self.deposit_async(deposit_data)
.await
.map_err(|e| format!("Deposit failed: {:?}", e))
}
pub fn deposit_deterministic_async<E: EthSpec>(
pub async fn deposit_deterministic_async<E: EthSpec>(
&self,
keypair_index: usize,
amount: u64,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
let keypair = generate_deterministic_keypair(keypair_index);
let mut deposit = DepositData {
@@ -174,73 +173,57 @@ impl DepositContract {
deposit.signature = deposit.create_signature(&keypair.sk, &E::default_spec());
self.deposit_async(deposit)
self.deposit_async(deposit).await
}
/// Performs a non-blocking deposit.
pub fn deposit_async(
&self,
deposit_data: DepositData,
) -> impl Future<Item = (), Error = String> {
let contract = self.contract.clone();
let web3_1 = self.web3.clone();
self.web3
pub async fn deposit_async(&self, deposit_data: DepositData) -> Result<(), String> {
let from = self
.web3
.eth()
.accounts()
.compat()
.await
.map_err(|e| format!("Failed to get accounts: {:?}", e))
.and_then(|accounts| {
accounts
.get(DEPOSIT_ACCOUNTS_INDEX)
.cloned()
.ok_or_else(|| "Insufficient accounts for deposit".to_string())
})
.and_then(move |from| {
let tx_request = TransactionRequest {
from,
to: Some(contract.address()),
gas: Some(U256::from(DEPOSIT_GAS)),
gas_price: None,
value: Some(from_gwei(deposit_data.amount)),
// Note: the reason we use this `TransactionRequest` instead of just using the
// function in `self.contract` is so that the `encode_eth1_tx_data` function gets used
// during testing.
//
// It's important that `encode_eth1_tx_data` stays correct and does not suffer from
// code-rot.
data: encode_eth1_tx_data(&deposit_data).map(Into::into).ok(),
nonce: None,
condition: None,
};
})?;
let tx_request = TransactionRequest {
from,
to: Some(self.contract.address()),
gas: Some(U256::from(DEPOSIT_GAS)),
gas_price: None,
value: Some(from_gwei(deposit_data.amount)),
// Note: the reason we use this `TransactionRequest` instead of just using the
// function in `self.contract` is so that the `eth1_tx_data` function gets used
// during testing.
//
// It's important that `eth1_tx_data` stays correct and does not suffer from
// code-rot.
data: encode_eth1_tx_data(&deposit_data).map(Into::into).ok(),
nonce: None,
condition: None,
};
web3_1
.eth()
.send_transaction(tx_request)
.map_err(|e| format!("Failed to call deposit fn: {:?}", e))
})
.map(|_| ())
self.web3
.eth()
.send_transaction(tx_request)
.compat()
.await
.map_err(|e| format!("Failed to call deposit fn: {:?}", e))?;
Ok(())
}
/// Peforms many deposits, each preceded by a delay.
pub fn deposit_multiple(
&self,
deposits: Vec<DelayThenDeposit>,
) -> impl Future<Item = (), Error = String> {
let s = self.clone();
stream::unfold(deposits.into_iter(), move |mut deposit_iter| {
let s = s.clone();
match deposit_iter.next() {
Some(deposit) => Some(
Delay::new(Instant::now() + deposit.delay)
.map_err(|e| format!("Failed to execute delay: {:?}", e))
.and_then(move |_| s.deposit_async(deposit.deposit))
.map(move |yielded| (yielded, deposit_iter)),
),
None => None,
}
})
.collect()
.map(|_| ())
pub async fn deposit_multiple(&self, deposits: Vec<DelayThenDeposit>) -> Result<(), String> {
for deposit in deposits.into_iter() {
delay_for(deposit.delay).await;
self.deposit_async(deposit.deposit).await?;
}
Ok(())
}
}
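
The `deposit_multiple` rewrite above shows another recurring pattern: a futures 0.1 `stream::unfold` plus `Delay` chain collapses into a plain `for` loop that awaits `tokio::time::delay_for` and then the fallible call. A minimal sketch of that shape, with hypothetical names (`DelayThenItem`, `submit`) rather than the PR's `DelayThenDeposit`/`deposit_async`, assuming tokio 0.2 with the "time" and "macros" features:

// Sketch only, not the PR's code.
use std::time::Duration;
use tokio::time::delay_for;

struct DelayThenItem {
    delay: Duration,
    item: u64,
}

async fn submit(item: u64) -> Result<(), String> {
    println!("submitting {}", item);
    Ok(())
}

async fn submit_multiple(items: Vec<DelayThenItem>) -> Result<(), String> {
    // Each iteration awaits the delay, then the fallible call; `?` propagates
    // the first error, mirroring how the old stream chain stopped on an Err.
    for entry in items {
        delay_for(entry.delay).await;
        submit(entry.item).await?;
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), String> {
    submit_multiple(vec![
        DelayThenItem { delay: Duration::from_millis(10), item: 1 },
        DelayThenItem { delay: Duration::from_millis(10), item: 2 },
    ])
    .await
}
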
@@ -260,61 +243,56 @@ fn from_gwei(gwei: u64) -> U256 {
/// Deploys the deposit contract to the given web3 instance using the account with index
/// `DEPLOYER_ACCOUNTS_INDEX`.
fn deploy_deposit_contract(
async fn deploy_deposit_contract(
web3: Web3<Http>,
confirmations: usize,
bytecode: Vec<u8>,
abi: Vec<u8>,
password_opt: Option<String>,
) -> impl Future<Item = Address, Error = String> {
) -> Result<Address, String> {
let bytecode = String::from_utf8(bytecode).expect("bytecode must be valid utf8");
let web3_1 = web3.clone();
web3.eth()
let from_address = web3
.eth()
.accounts()
.compat()
.await
.map_err(|e| format!("Failed to get accounts: {:?}", e))
.and_then(|accounts| {
accounts
.get(DEPLOYER_ACCOUNTS_INDEX)
.cloned()
.ok_or_else(|| "Insufficient accounts for deployer".to_string())
})
.and_then(move |from_address| {
let future: Box<dyn Future<Item = Address, Error = String> + Send> =
if let Some(password) = password_opt {
// Unlock for only a single transaction.
let duration = None;
})?;
let future = web3_1
.personal()
.unlock_account(from_address, &password, duration)
.then(move |result| match result {
Ok(true) => Ok(from_address),
Ok(false) => Err("Eth1 node refused to unlock account".to_string()),
Err(e) => Err(format!("Eth1 unlock request failed: {:?}", e)),
});
let deploy_address = if let Some(password) = password_opt {
let result = web3
.personal()
.unlock_account(from_address, &password, None)
.compat()
.await;
match result {
Ok(true) => return Ok(from_address),
Ok(false) => return Err("Eth1 node refused to unlock account".to_string()),
Err(e) => return Err(format!("Eth1 unlock request failed: {:?}", e)),
};
} else {
from_address
};
Box::new(future)
} else {
Box::new(future::ok(from_address))
};
let pending_contract = Contract::deploy(web3.eth(), &abi)
.map_err(|e| format!("Unable to build contract deployer: {:?}", e))?
.confirmations(confirmations)
.options(Options {
gas: Some(U256::from(CONTRACT_DEPLOY_GAS)),
..Options::default()
})
.execute(bytecode, (), deploy_address)
.map_err(|e| format!("Failed to execute deployment: {:?}", e))?;
future
})
.and_then(move |deploy_address| {
Contract::deploy(web3.eth(), &abi)
.map_err(|e| format!("Unable to build contract deployer: {:?}", e))?
.confirmations(confirmations)
.options(Options {
gas: Some(U256::from(CONTRACT_DEPLOY_GAS)),
..Options::default()
})
.execute(bytecode, (), deploy_address)
.map_err(|e| format!("Failed to execute deployment: {:?}", e))
})
.and_then(|pending_contract| {
pending_contract
.map(|contract| contract.address())
.map_err(|e| format!("Unable to resolve pending contract: {:?}", e))
})
pending_contract
.compat()
.await
.map(|contract| contract.address())
.map_err(|e| format!("Unable to resolve pending contract: {:?}", e))
}
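
The `deploy_deposit_contract` rewrite above illustrates why async/await removes the boxed futures: under futures 0.1 the optional unlock step produced two differently typed futures, so both branches had to be boxed as `Box<dyn Future<Item = Address, Error = String> + Send>`; in an `async fn` the branch simply awaits (or skips) the call and errors early-return with `?`. A minimal sketch under hypothetical names (`unlock`, `deploy`), not the PR's real API:

// Sketch only; assumes tokio 0.2 with "macros"/"rt-core" for the entry point.
async fn unlock(_account: u64, _password: &str) -> Result<bool, String> {
    Ok(true) // pretend the eth1 node agreed to unlock the account
}

async fn deploy(from: u64) -> Result<u64, String> {
    Ok(from + 100) // pretend this returns a deployed contract address
}

async fn deploy_with_optional_password(
    from: u64,
    password_opt: Option<String>,
) -> Result<u64, String> {
    if let Some(password) = password_opt {
        // Early returns replace the nested .and_then / .then combinators,
        // and no boxing is needed even though only one branch awaits.
        if !unlock(from, &password).await? {
            return Err("Eth1 node refused to unlock account".to_string());
        }
    }
    deploy(from).await
}

#[tokio::main]
async fn main() {
    println!("{:?}", deploy_with_optional_password(1, Some("pw".into())).await);
}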

View File

@@ -9,11 +9,11 @@ environment = { path = "../../lighthouse/environment" }
beacon_node = { path = "../../beacon_node" }
types = { path = "../../eth2/types" }
eth2_config = { path = "../../eth2/utils/eth2_config" }
tempdir = "0.3"
reqwest = "0.9"
url = "1.2"
serde = "1.0"
futures = "0.1.25"
tempdir = "0.3.7"
reqwest = "0.10.4"
url = "2.1.1"
serde = "1.0.110"
futures = "0.3.5"
genesis = { path = "../../beacon_node/genesis" }
remote_beacon_node = { path = "../../eth2/utils/remote_beacon_node" }
validator_client = { path = "../../validator_client" }

View File

@@ -4,7 +4,6 @@
use beacon_node::ProductionBeaconNode;
use environment::RuntimeContext;
use futures::Future;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
use tempdir::TempDir;
@@ -29,10 +28,10 @@ impl<E: EthSpec> LocalBeaconNode<E> {
/// Starts a new, production beacon node on the tokio runtime in the given `context`.
///
/// The node created is using the same types as the node we use in production.
pub fn production(
pub async fn production(
context: RuntimeContext<E>,
mut client_config: ClientConfig,
) -> impl Future<Item = Self, Error = String> {
) -> Result<Self, String> {
// Creates a temporary directory that will be deleted once this `TempDir` is dropped.
let datadir = TempDir::new("lighthouse_node_test_rig")
.expect("should create temp directory for client datadir");
@@ -40,10 +39,12 @@ impl<E: EthSpec> LocalBeaconNode<E> {
client_config.data_dir = datadir.path().into();
client_config.network.network_dir = PathBuf::from(datadir.path()).join("network");
ProductionBeaconNode::new(context, client_config).map(move |client| Self {
client: client.into_inner(),
datadir,
})
ProductionBeaconNode::new(context, client_config)
.await
.map(move |client| Self {
client: client.into_inner(),
datadir,
})
}
}
@@ -103,47 +104,49 @@ impl<E: EthSpec> LocalValidatorClient<E> {
/// are created in a temp dir then removed when the process exits.
///
/// The validator created is using the same types as the node we use in production.
pub fn production_with_insecure_keypairs(
pub async fn production_with_insecure_keypairs(
context: RuntimeContext<E>,
mut config: ValidatorConfig,
keypair_indices: &[usize],
) -> impl Future<Item = Self, Error = String> {
) -> Result<Self, String> {
// Creates a temporary directory that will be deleted once this `TempDir` is dropped.
let datadir = TempDir::new("lighthouse-beacon-node")
.expect("should create temp directory for client datadir");
config.key_source = KeySource::InsecureKeypairs(keypair_indices.to_vec());
Self::new(context, config, datadir)
Self::new(context, config, datadir).await
}
/// Creates a validator client that attempts to read keys from the default data dir.
///
/// - The validator created is using the same types as the node we use in production.
/// - It is recommended to use `production_with_insecure_keypairs` for testing.
pub fn production(
pub async fn production(
context: RuntimeContext<E>,
config: ValidatorConfig,
) -> impl Future<Item = Self, Error = String> {
) -> Result<Self, String> {
// Creates a temporary directory that will be deleted once this `TempDir` is dropped.
let datadir = TempDir::new("lighthouse-validator")
.expect("should create temp directory for client datadir");
Self::new(context, config, datadir)
Self::new(context, config, datadir).await
}
fn new(
async fn new(
context: RuntimeContext<E>,
mut config: ValidatorConfig,
datadir: TempDir,
) -> impl Future<Item = Self, Error = String> {
) -> Result<Self, String> {
config.data_dir = datadir.path().into();
ProductionValidatorClient::new(context, config).map(move |mut client| {
client
.start_service()
.expect("should start validator services");
Self { client, datadir }
})
ProductionValidatorClient::new(context, config)
.await
.map(move |mut client| {
client
.start_service()
.expect("should start validator services");
Self { client, datadir }
})
}
}

View File

@@ -10,9 +10,9 @@ edition = "2018"
node_test_rig = { path = "../node_test_rig" }
types = { path = "../../eth2/types" }
validator_client = { path = "../../validator_client" }
parking_lot = "0.9.0"
futures = "0.1.29"
tokio = "0.1.22"
parking_lot = "0.10.2"
futures = "0.3.5"
tokio = "0.2.20"
eth1_test_rig = { path = "../eth1_test_rig" }
env_logger = "0.7.1"
clap = "2.33.0"

View File

@@ -1,136 +1,128 @@
use crate::local_network::LocalNetwork;
use futures::{stream, Future, IntoFuture, Stream};
use std::time::{Duration, Instant};
use tokio::timer::Delay;
use std::time::Duration;
use types::{Epoch, EthSpec, Slot, Unsigned};
/// Checks that all of the validators have on-boarded by the start of the second eth1 voting
/// period.
pub fn verify_initial_validator_count<E: EthSpec>(
pub async fn verify_initial_validator_count<E: EthSpec>(
network: LocalNetwork<E>,
slot_duration: Duration,
initial_validator_count: usize,
) -> impl Future<Item = (), Error = String> {
slot_delay(Slot::new(1), slot_duration)
.and_then(move |()| verify_validator_count(network, initial_validator_count))
) -> Result<(), String> {
slot_delay(Slot::new(1), slot_duration).await;
verify_validator_count(network, initial_validator_count).await?;
Ok(())
}
/// Checks that all of the validators have on-boarded by the start of the second eth1 voting
/// period.
pub fn verify_validator_onboarding<E: EthSpec>(
pub async fn verify_validator_onboarding<E: EthSpec>(
network: LocalNetwork<E>,
slot_duration: Duration,
expected_validator_count: usize,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
slot_delay(
Slot::new(E::SlotsPerEth1VotingPeriod::to_u64()),
slot_duration,
)
.and_then(move |()| verify_validator_count(network, expected_validator_count))
.await;
verify_validator_count(network, expected_validator_count).await?;
Ok(())
}
/// Checks that the chain has made the first possible finalization.
///
/// Intended to be run as soon as chain starts.
pub fn verify_first_finalization<E: EthSpec>(
pub async fn verify_first_finalization<E: EthSpec>(
network: LocalNetwork<E>,
slot_duration: Duration,
) -> impl Future<Item = (), Error = String> {
epoch_delay(Epoch::new(4), slot_duration, E::slots_per_epoch())
.and_then(|()| verify_all_finalized_at(network, Epoch::new(2)))
) -> Result<(), String> {
epoch_delay(Epoch::new(4), slot_duration, E::slots_per_epoch()).await;
verify_all_finalized_at(network, Epoch::new(2)).await?;
Ok(())
}
/// Delays for `epochs`, plus half a slot extra.
pub fn epoch_delay(
epochs: Epoch,
slot_duration: Duration,
slots_per_epoch: u64,
) -> impl Future<Item = (), Error = String> {
pub async fn epoch_delay(epochs: Epoch, slot_duration: Duration, slots_per_epoch: u64) {
let duration = slot_duration * (epochs.as_u64() * slots_per_epoch) as u32 + slot_duration / 2;
Delay::new(Instant::now() + duration).map_err(|e| format!("Epoch delay failed: {:?}", e))
tokio::time::delay_for(duration).await
}
/// Delays for `slots`, plus half a slot extra.
fn slot_delay(slots: Slot, slot_duration: Duration) -> impl Future<Item = (), Error = String> {
async fn slot_delay(slots: Slot, slot_duration: Duration) {
let duration = slot_duration * slots.as_u64() as u32 + slot_duration / 2;
Delay::new(Instant::now() + duration).map_err(|e| format!("Epoch delay failed: {:?}", e))
tokio::time::delay_for(duration).await;
}
/// Verifies that all beacon nodes in the given network have a head state that has a finalized
/// epoch of `epoch`.
pub fn verify_all_finalized_at<E: EthSpec>(
pub async fn verify_all_finalized_at<E: EthSpec>(
network: LocalNetwork<E>,
epoch: Epoch,
) -> impl Future<Item = (), Error = String> {
network
.remote_nodes()
.into_future()
.and_then(|remote_nodes| {
stream::unfold(remote_nodes.into_iter(), |mut iter| {
iter.next().map(|remote_node| {
remote_node
.http
.beacon()
.get_head()
.map(|head| head.finalized_slot.epoch(E::slots_per_epoch()))
.map(|epoch| (epoch, iter))
.map_err(|e| format!("Get head via http failed: {:?}", e))
})
})
.collect()
})
.and_then(move |epochs| {
if epochs.iter().any(|node_epoch| *node_epoch != epoch) {
Err(format!(
"Nodes are not finalized at epoch {}. Finalized epochs: {:?}",
epoch, epochs
))
} else {
Ok(())
}
})
) -> Result<(), String> {
let epochs = {
let mut epochs = Vec::new();
for remote_node in network.remote_nodes()? {
epochs.push(
remote_node
.http
.beacon()
.get_head()
.await
.map(|head| head.finalized_slot.epoch(E::slots_per_epoch()))
.map_err(|e| format!("Get head via http failed: {:?}", e))?,
);
}
epochs
};
if epochs.iter().any(|node_epoch| *node_epoch != epoch) {
Err(format!(
"Nodes are not finalized at epoch {}. Finalized epochs: {:?}",
epoch, epochs
))
} else {
Ok(())
}
}
/// Verifies that all beacon nodes in the given `network` have a head state that contains
/// `expected_count` validators.
fn verify_validator_count<E: EthSpec>(
async fn verify_validator_count<E: EthSpec>(
network: LocalNetwork<E>,
expected_count: usize,
) -> impl Future<Item = (), Error = String> {
network
.remote_nodes()
.into_future()
.and_then(|remote_nodes| {
stream::unfold(remote_nodes.into_iter(), |mut iter| {
iter.next().map(|remote_node| {
let beacon = remote_node.http.beacon();
beacon
.get_head()
.map_err(|e| format!("Get head via http failed: {:?}", e))
.and_then(move |head| {
beacon
.get_state_by_root(head.state_root)
.map(|(state, _root)| state)
.map_err(|e| format!("Get state root via http failed: {:?}", e))
})
.map(|state| (state.validators.len(), iter))
})
})
.collect()
})
.and_then(move |validator_counts| {
if validator_counts
.iter()
.any(|count| *count != expected_count)
{
Err(format!(
"Nodes do not all have {} validators in their state. Validator counts: {:?}",
expected_count, validator_counts
))
} else {
Ok(())
}
})
) -> Result<(), String> {
let validator_counts = {
let mut validator_counts = Vec::new();
for remote_node in network.remote_nodes()? {
let beacon = remote_node.http.beacon();
let head = beacon
.get_head()
.await
.map_err(|e| format!("Get head via http failed: {:?}", e))?;
let vc = beacon
.get_state_by_root(head.state_root)
.await
.map(|(state, _root)| state)
.map_err(|e| format!("Get state root via http failed: {:?}", e))?
.validators
.len();
validator_counts.push(vc);
}
validator_counts
};
if validator_counts
.iter()
.any(|count| *count != expected_count)
{
Err(format!(
"Nodes do not all have {} validators in their state. Validator counts: {:?}",
expected_count, validator_counts
))
} else {
Ok(())
}
}
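
The checks above all follow one shape: the futures 0.1 `stream::unfold(remote_nodes.into_iter(), ...).collect()` idiom becomes a `for` loop that awaits each HTTP call and pushes the results into a `Vec`, still issuing requests one at a time. A minimal sketch, with `head_epoch` as a hypothetical stand-in for `remote_node.http.beacon().get_head()`:

// Sketch only; assumes tokio 0.2 with "macros"/"rt-core".
async fn head_epoch(_node_id: usize) -> Result<u64, String> {
    Ok(2) // pretend every node reports a finalized epoch of 2
}

async fn verify_all_at_epoch(node_ids: Vec<usize>, epoch: u64) -> Result<(), String> {
    // A plain for loop with sequential awaits replaces the stream::unfold chain.
    let mut epochs = Vec::new();
    for id in node_ids {
        epochs.push(
            head_epoch(id)
                .await
                .map_err(|e| format!("Get head failed: {:?}", e))?,
        );
    }
    if epochs.iter().any(|node_epoch| *node_epoch != epoch) {
        Err(format!(
            "Nodes are not finalized at epoch {}: {:?}",
            epoch, epochs
        ))
    } else {
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    println!("{:?}", verify_all_at_epoch(vec![0, 1, 2], 2).await);
}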

View File

@@ -1,13 +1,12 @@
use crate::{checks, LocalNetwork, E};
use clap::ArgMatches;
use eth1_test_rig::GanacheEth1Instance;
use futures::{future, stream, Future, Stream};
use futures::prelude::*;
use node_test_rig::{
environment::EnvironmentBuilder, testing_client_config, ClientGenesis, ValidatorConfig,
};
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, Instant};
use tokio::timer::Interval;
use std::time::Duration;
pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
let node_count = value_t!(matches, "nodes", usize).expect("missing nodes default");
@@ -50,159 +49,123 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
let deposit_amount = env.eth2_config.spec.max_effective_balance;
let context = env.core_context();
let executor = context.executor.clone();
let future = GanacheEth1Instance::new()
let main_future = async {
/*
* Deploy the deposit contract, spawn tasks to keep creating new blocks and deposit
* validators.
*/
.map(move |ganache_eth1_instance| {
let deposit_contract = ganache_eth1_instance.deposit_contract;
let ganache = ganache_eth1_instance.ganache;
let eth1_endpoint = ganache.endpoint();
let deposit_contract_address = deposit_contract.address();
let ganache_eth1_instance = GanacheEth1Instance::new().await?;
let deposit_contract = ganache_eth1_instance.deposit_contract;
let ganache = ganache_eth1_instance.ganache;
let eth1_endpoint = ganache.endpoint();
let deposit_contract_address = deposit_contract.address();
// Start a timer that produces eth1 blocks on an interval.
executor.spawn(
Interval::new(Instant::now(), eth1_block_time)
.map_err(|_| eprintln!("Eth1 block timer failed"))
.for_each(move |_| ganache.evm_mine().map_err(|_| ()))
.map_err(|_| eprintln!("Eth1 evm_mine failed"))
.map(|_| ()),
);
// Start a timer that produces eth1 blocks on an interval.
tokio::spawn(async move {
let mut interval = tokio::time::interval(eth1_block_time);
while let Some(_) = interval.next().await {
let _ = ganache.evm_mine().await;
}
});
// Submit deposits to the deposit contract.
executor.spawn(
stream::unfold(0..total_validator_count, move |mut iter| {
iter.next().map(|i| {
println!("Submitting deposit for validator {}...", i);
deposit_contract
.deposit_deterministic_async::<E>(i, deposit_amount)
.map(|_| ((), iter))
})
})
.collect()
.map(|_| ())
.map_err(|e| eprintln!("Error submitting deposit: {}", e)),
);
// Submit deposits to the deposit contract.
tokio::spawn(async move {
for i in 0..total_validator_count {
println!("Submitting deposit for validator {}...", i);
let _ = deposit_contract
.deposit_deterministic_async::<E>(i, deposit_amount)
.await;
}
});
let mut beacon_config = testing_client_config();
let mut beacon_config = testing_client_config();
beacon_config.genesis = ClientGenesis::DepositContract;
beacon_config.eth1.endpoint = eth1_endpoint;
beacon_config.eth1.deposit_contract_address = deposit_contract_address;
beacon_config.eth1.deposit_contract_deploy_block = 0;
beacon_config.eth1.lowest_cached_block_number = 0;
beacon_config.eth1.follow_distance = 1;
beacon_config.dummy_eth1_backend = false;
beacon_config.sync_eth1_chain = true;
beacon_config.genesis = ClientGenesis::DepositContract;
beacon_config.eth1.endpoint = eth1_endpoint;
beacon_config.eth1.deposit_contract_address = deposit_contract_address;
beacon_config.eth1.deposit_contract_deploy_block = 0;
beacon_config.eth1.lowest_cached_block_number = 0;
beacon_config.eth1.follow_distance = 1;
beacon_config.dummy_eth1_backend = false;
beacon_config.sync_eth1_chain = true;
beacon_config.network.enr_address = Some(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
beacon_config.network.enr_address = Some(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
beacon_config
})
/*
* Create a new `LocalNetwork` with one beacon node.
*/
.and_then(move |beacon_config| {
LocalNetwork::new(context, beacon_config.clone())
.map(|network| (network, beacon_config))
})
let network = LocalNetwork::new(context, beacon_config.clone()).await?;
/*
* One by one, add beacon nodes to the network.
*/
.and_then(move |(network, beacon_config)| {
let network_1 = network.clone();
stream::unfold(0..node_count - 1, move |mut iter| {
iter.next().map(|_| {
network_1
.add_beacon_node(beacon_config.clone())
.map(|()| ((), iter))
})
})
.collect()
.map(|_| network)
})
for _ in 0..node_count - 1 {
network.add_beacon_node(beacon_config.clone()).await?;
}
/*
* One by one, add validator clients to the network. Each validator client is attached to
* a single corresponding beacon node.
*/
.and_then(move |network| {
let network_1 = network.clone();
// Note: presently the validator client future will only resolve once genesis time
// occurs. This is great for this scenario, but likely to change in the future.
//
// If the validator client future behaviour changes, we would need to add a new future
// that delays until genesis. Otherwise, all of the checks that start in the next
// future will start too early.
// Note: presently the validator client future will only resolve once genesis time
// occurs. This is great for this scenario, but likely to change in the future.
//
// If the validator client future behaviour changes, we would need to add a new future
// that delays until genesis. Otherwise, all of the checks that start in the next
// future will start too early.
stream::unfold(0..node_count, move |mut iter| {
iter.next().map(|i| {
let indices = (i * validators_per_node..(i + 1) * validators_per_node)
.collect::<Vec<_>>();
for i in 0..node_count {
let indices =
(i * validators_per_node..(i + 1) * validators_per_node).collect::<Vec<_>>();
network
.add_validator_client(ValidatorConfig::default(), i, indices)
.await?;
}
network_1
.add_validator_client(ValidatorConfig::default(), i, indices)
.map(|()| ((), iter))
})
})
.collect()
.map(|_| network)
})
/*
* Start the processes that will run checks on the network as it runs.
*/
.and_then(move |network| {
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.
let final_future: Box<dyn Future<Item = (), Error = String> + Send> =
if end_after_checks {
Box::new(future::ok(()).map_err(|()| "".to_string()))
} else {
Box::new(future::empty().map_err(|()| "".to_string()))
};
future::ok(())
// Check that the chain finalizes at the first given opportunity.
.join(checks::verify_first_finalization(
network.clone(),
slot_duration,
))
// Check that the chain starts with the expected validator count.
.join(checks::verify_initial_validator_count(
network.clone(),
slot_duration,
initial_validator_count,
))
// Check that validators greater than `spec.min_genesis_active_validator_count` are
// onboarded at the first possible opportunity.
.join(checks::verify_validator_onboarding(
network.clone(),
slot_duration,
total_validator_count,
))
// End now or run forever, depending on the `end_after_checks` flag.
.join(final_future)
.map(|_| network)
})
let _err = futures::join!(
// Check that the chain finalizes at the first given opportunity.
checks::verify_first_finalization(network.clone(), slot_duration),
// Check that the chain starts with the expected validator count.
checks::verify_initial_validator_count(
network.clone(),
slot_duration,
initial_validator_count,
),
// Check that validators greater than `spec.min_genesis_active_validator_count` are
// onboarded at the first possible opportunity.
checks::verify_validator_onboarding(
network.clone(),
slot_duration,
total_validator_count,
)
);
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.
if !end_after_checks {
future::pending::<()>().await;
}
/*
* End the simulation by dropping the network. This will kill all running beacon nodes and
* validator clients.
*/
.map(|network| {
println!(
"Simulation complete. Finished with {} beacon nodes and {} validator clients",
network.beacon_node_count(),
network.validator_client_count()
);
println!(
"Simulation complete. Finished with {} beacon nodes and {} validator clients",
network.beacon_node_count(),
network.validator_client_count()
);
// Be explicit about dropping the network, as this kills all the nodes. This ensures
// all the checks have adequate time to pass.
drop(network)
});
// Be explicit about dropping the network, as this kills all the nodes. This ensures
// all the checks have adequate time to pass.
drop(network);
Ok::<(), String>(())
};
env.runtime().block_on(future)
Ok(env.runtime().block_on(main_future).unwrap())
}
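
Two concurrency patterns from the simulator rewrite above are worth calling out: background work moves into `tokio::spawn(async move { ... })` driven by `tokio::time::interval`, and the old `.join(...)` chains become `futures::join!`. A minimal sketch of both (illustrative only; `check_a`/`check_b` are placeholders), assuming tokio 0.2 with "time"/"macros"/"rt-core" and futures 0.3:

// Sketch only, not the PR's code.
use std::time::Duration;

async fn check_a() -> Result<(), String> { Ok(()) }
async fn check_b() -> Result<(), String> { Ok(()) }

#[tokio::main]
async fn main() {
    // Background "block producer": tick forever on an interval,
    // like the evm_mine task spawned in the simulator above.
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_millis(50));
        loop {
            interval.tick().await;
            // e.g. ganache.evm_mine().await in the simulator
        }
    });

    // Run independent checks concurrently and inspect both results,
    // replacing the old future::ok(()).join(...).join(...) chain.
    let results = futures::join!(check_a(), check_b());
    println!("{:?}", results);
}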

View File

@@ -1,4 +1,3 @@
use futures::{Future, IntoFuture};
use node_test_rig::{
environment::RuntimeContext, ClientConfig, LocalBeaconNode, LocalValidatorClient,
RemoteBeaconNode, ValidatorConfig,
@@ -42,23 +41,24 @@ impl<E: EthSpec> Deref for LocalNetwork<E> {
impl<E: EthSpec> LocalNetwork<E> {
/// Creates a new network with a single `BeaconNode`.
pub fn new(
pub async fn new(
context: RuntimeContext<E>,
mut beacon_config: ClientConfig,
) -> impl Future<Item = Self, Error = String> {
) -> Result<Self, String> {
beacon_config.network.discovery_port = BOOTNODE_PORT;
beacon_config.network.libp2p_port = BOOTNODE_PORT;
beacon_config.network.enr_udp_port = Some(BOOTNODE_PORT);
beacon_config.network.enr_tcp_port = Some(BOOTNODE_PORT);
LocalBeaconNode::production(context.service_context("boot_node".into()), beacon_config).map(
|beacon_node| Self {
inner: Arc::new(Inner {
context,
beacon_nodes: RwLock::new(vec![beacon_node]),
validator_clients: RwLock::new(vec![]),
}),
},
)
let beacon_node =
LocalBeaconNode::production(context.service_context("boot_node".into()), beacon_config)
.await?;
Ok(Self {
inner: Arc::new(Inner {
context,
beacon_nodes: RwLock::new(vec![beacon_node]),
validator_clients: RwLock::new(vec![]),
}),
})
}
/// Returns the number of beacon nodes in the network.
@@ -78,72 +78,65 @@ impl<E: EthSpec> LocalNetwork<E> {
}
/// Adds a beacon node to the network, connecting to the 0'th beacon node via ENR.
pub fn add_beacon_node(
&self,
mut beacon_config: ClientConfig,
) -> impl Future<Item = (), Error = String> {
pub async fn add_beacon_node(&self, mut beacon_config: ClientConfig) -> Result<(), String> {
let self_1 = self.clone();
println!("Adding beacon node..");
self.beacon_nodes
.read()
.first()
.map(|boot_node| {
beacon_config.network.boot_nodes.push(
boot_node
.client
.enr()
.expect("bootnode must have a network"),
);
})
.expect("should have at least one node");
{
let read_lock = self.beacon_nodes.read();
let boot_node = read_lock.first().expect("should have at least one node");
beacon_config.network.boot_nodes.push(
boot_node
.client
.enr()
.expect("bootnode must have a network"),
);
}
let index = self.beacon_nodes.read().len();
LocalBeaconNode::production(
let beacon_node = LocalBeaconNode::production(
self.context.service_context(format!("node_{}", index)),
beacon_config,
)
.map(move |beacon_node| {
self_1.beacon_nodes.write().push(beacon_node);
})
.await?;
self_1.beacon_nodes.write().push(beacon_node);
Ok(())
}
/// Adds a validator client to the network, connecting it to the beacon node with index
/// `beacon_node`.
pub fn add_validator_client(
pub async fn add_validator_client(
&self,
mut validator_config: ValidatorConfig,
beacon_node: usize,
keypair_indices: Vec<usize>,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
let index = self.validator_clients.read().len();
let context = self.context.service_context(format!("validator_{}", index));
let self_1 = self.clone();
let socket_addr = {
let read_lock = self.beacon_nodes.read();
let beacon_node = read_lock
.get(beacon_node)
.ok_or_else(|| format!("No beacon node for index {}", beacon_node))?;
beacon_node
.client
.http_listen_addr()
.expect("Must have http started")
};
self.beacon_nodes
.read()
.get(beacon_node)
.map(move |beacon_node| {
let socket_addr = beacon_node
.client
.http_listen_addr()
.expect("Must have http started");
validator_config.http_server =
format!("http://{}:{}", socket_addr.ip(), socket_addr.port());
validator_config
})
.ok_or_else(|| format!("No beacon node for index {}", beacon_node))
.into_future()
.and_then(move |validator_config| {
LocalValidatorClient::production_with_insecure_keypairs(
context,
validator_config,
&keypair_indices,
)
})
.map(move |validator_client| self_1.validator_clients.write().push(validator_client))
validator_config.http_server =
format!("http://{}:{}", socket_addr.ip(), socket_addr.port());
let validator_client = LocalValidatorClient::production_with_insecure_keypairs(
context,
validator_config,
&keypair_indices,
)
.await?;
self_1.validator_clients.write().push(validator_client);
Ok(())
}
/// For all beacon nodes in `Self`, return a HTTP client to access each nodes HTTP API.
@@ -157,13 +150,14 @@ impl<E: EthSpec> LocalNetwork<E> {
}
/// Return current epoch of bootnode.
pub fn bootnode_epoch(&self) -> impl Future<Item = Epoch, Error = String> {
pub async fn bootnode_epoch(&self) -> Result<Epoch, String> {
let nodes = self.remote_nodes().expect("Failed to get remote nodes");
let bootnode = nodes.first().expect("Should contain bootnode");
bootnode
.http
.beacon()
.get_head()
.await
.map_err(|e| format!("Cannot get head: {:?}", e))
.map(|head| head.finalized_slot.epoch(E::slots_per_epoch()))
}

View File

@@ -1,6 +1,6 @@
use crate::{checks, LocalNetwork};
use clap::ArgMatches;
use futures::{future, stream, Future, Stream};
use futures::prelude::*;
use node_test_rig::{
environment::EnvironmentBuilder, testing_client_config, ClientGenesis, ValidatorConfig,
};
@@ -63,88 +63,61 @@ pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
beacon_config.network.enr_address = Some(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
let future = LocalNetwork::new(context, beacon_config.clone())
let main_future = async {
let network = LocalNetwork::new(context, beacon_config.clone()).await?;
/*
* One by one, add beacon nodes to the network.
*/
.and_then(move |network| {
let network_1 = network.clone();
stream::unfold(0..node_count - 1, move |mut iter| {
iter.next().map(|_| {
network_1
.add_beacon_node(beacon_config.clone())
.map(|()| ((), iter))
})
})
.collect()
.map(|_| network)
})
for _ in 0..node_count - 1 {
network.add_beacon_node(beacon_config.clone()).await?;
}
/*
* One by one, add validator clients to the network. Each validator client is attached to
* a single corresponding beacon node.
*/
.and_then(move |network| {
let network_1 = network.clone();
// Note: presently the validator client future will only resolve once genesis time
// occurs. This is great for this scenario, but likely to change in the future.
//
// If the validator client future behaviour changes, we would need to add a new future
// that delays until genesis. Otherwise, all of the checks that start in the next
// future will start too early.
// Note: presently the validator client future will only resolve once genesis time
// occurs. This is great for this scenario, but likely to change in the future.
//
// If the validator client future behaviour changes, we would need to add a new future
// that delays until genesis. Otherwise, all of the checks that start in the next
// future will start too early.
stream::unfold(0..node_count, move |mut iter| {
iter.next().map(|i| {
let indices = (i * validators_per_node..(i + 1) * validators_per_node)
.collect::<Vec<_>>();
network_1
.add_validator_client(ValidatorConfig::default(), i, indices)
.map(|()| ((), iter))
})
})
.collect()
.map(|_| network)
})
for i in 0..node_count {
let indices =
(i * validators_per_node..(i + 1) * validators_per_node).collect::<Vec<_>>();
network
.add_validator_client(ValidatorConfig::default(), i, indices)
.await?;
}
/*
* Start the processes that will run checks on the network as it runs.
*/
.and_then(move |network| {
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.
let final_future: Box<dyn Future<Item = (), Error = String> + Send> =
if end_after_checks {
Box::new(future::ok(()).map_err(|()| "".to_string()))
} else {
Box::new(future::empty().map_err(|()| "".to_string()))
};
// Check that the chain finalizes at the first given opportunity.
checks::verify_first_finalization(network.clone(), slot_duration).await?;
future::ok(())
// Check that the chain finalizes at the first given opportunity.
.join(checks::verify_first_finalization(
network.clone(),
slot_duration,
))
// End now or run forever, depending on the `end_after_checks` flag.
.join(final_future)
.map(|_| network)
})
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.
if !end_after_checks {
future::pending::<()>().await;
}
/*
* End the simulation by dropping the network. This will kill all running beacon nodes and
* validator clients.
*/
.map(|network| {
println!(
"Simulation complete. Finished with {} beacon nodes and {} validator clients",
network.beacon_node_count(),
network.validator_client_count()
);
println!(
"Simulation complete. Finished with {} beacon nodes and {} validator clients",
network.beacon_node_count(),
network.validator_client_count()
);
// Be explicit about dropping the network, as this kills all the nodes. This ensures
// all the checks have adequate time to pass.
drop(network)
});
// Be explicit about dropping the network, as this kills all the nodes. This ensures
// all the checks have adequate time to pass.
drop(network);
Ok::<(), String>(())
};
env.runtime().block_on(future)
Ok(env.runtime().block_on(main_future).unwrap())
}

View File

@@ -1,14 +1,13 @@
use crate::checks::{epoch_delay, verify_all_finalized_at};
use crate::local_network::LocalNetwork;
use clap::ArgMatches;
use futures::{future, stream, Future, IntoFuture, Stream};
use futures::prelude::*;
use node_test_rig::ClientConfig;
use node_test_rig::{
environment::EnvironmentBuilder, testing_client_config, ClientGenesis, ValidatorConfig,
};
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use tokio::timer::Interval;
use types::{Epoch, EthSpec};
pub fn run_syncing_sim(matches: &ArgMatches) -> Result<(), String> {
@@ -78,110 +77,118 @@ fn syncing_sim(
beacon_config.network.enr_address = Some(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
let future = LocalNetwork::new(context, beacon_config.clone())
let main_future = async {
/*
* Create a new `LocalNetwork` with one beacon node.
*/
let network = LocalNetwork::new(context, beacon_config.clone()).await?;
/*
* Add a validator client which handles all validators from the genesis state.
*/
.and_then(move |network| {
network
.add_validator_client(ValidatorConfig::default(), 0, (0..num_validators).collect())
.map(|_| network)
})
/*
* Start the processes that will run checks on the network as it runs.
*/
.and_then(move |network| {
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.
let final_future: Box<dyn Future<Item = (), Error = String> + Send> =
if end_after_checks {
Box::new(future::ok(()).map_err(|()| "".to_string()))
} else {
Box::new(future::empty().map_err(|()| "".to_string()))
};
network
.add_validator_client(ValidatorConfig::default(), 0, (0..num_validators).collect())
.await?;
// Check all syncing strategies one after other.
pick_strategy(
&strategy,
network.clone(),
beacon_config.clone(),
slot_duration,
initial_delay,
sync_timeout,
)
.await?;
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.
if !end_after_checks {
future::pending::<()>().await;
}
future::ok(())
// Check all syncing strategies one after other.
.join(pick_strategy(
&strategy,
network.clone(),
beacon_config.clone(),
slot_duration,
initial_delay,
sync_timeout,
))
.join(final_future)
.map(|_| network)
})
/*
* End the simulation by dropping the network. This will kill all running beacon nodes and
* validator clients.
*/
.map(|network| {
println!(
"Simulation complete. Finished with {} beacon nodes and {} validator clients",
network.beacon_node_count(),
network.validator_client_count()
);
println!(
"Simulation complete. Finished with {} beacon nodes and {} validator clients",
network.beacon_node_count(),
network.validator_client_count()
);
// Be explicit about dropping the network, as this kills all the nodes. This ensures
// all the checks have adequate time to pass.
drop(network)
});
// Be explicit about dropping the network, as this kills all the nodes. This ensures
// all the checks have adequate time to pass.
drop(network);
Ok::<(), String>(())
};
env.runtime().block_on(future)
env.runtime().block_on(main_future)
}
pub fn pick_strategy<E: EthSpec>(
pub async fn pick_strategy<E: EthSpec>(
strategy: &str,
network: LocalNetwork<E>,
beacon_config: ClientConfig,
slot_duration: Duration,
initial_delay: u64,
sync_timeout: u64,
) -> Box<dyn Future<Item = (), Error = String> + Send + 'static> {
) -> Result<(), String> {
match strategy {
"one-node" => Box::new(verify_one_node_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)),
"two-nodes" => Box::new(verify_two_nodes_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)),
"mixed" => Box::new(verify_in_between_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)),
"all" => Box::new(verify_syncing(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)),
_ => Box::new(Err("Invalid strategy".into()).into_future()),
"one-node" => {
verify_one_node_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)
.await
}
"two-nodes" => {
verify_two_nodes_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)
.await
}
"mixed" => {
verify_in_between_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)
.await
}
"all" => {
verify_syncing(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)
.await
}
_ => Err("Invalid strategy".into()),
}
}
/// Verify one node added after `initial_delay` epochs is in sync
/// after `sync_timeout` epochs.
pub fn verify_one_node_sync<E: EthSpec>(
pub async fn verify_one_node_sync<E: EthSpec>(
network: LocalNetwork<E>,
beacon_config: ClientConfig,
slot_duration: Duration,
initial_delay: u64,
sync_timeout: u64,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
let epoch_duration = slot_duration * (E::slots_per_epoch() as u32);
let network_c = network.clone();
// Delay for `initial_delay` epochs before adding another node to start syncing
@@ -190,35 +197,34 @@ pub fn verify_one_node_sync<E: EthSpec>(
slot_duration,
E::slots_per_epoch(),
)
.and_then(move |_| {
// Add a beacon node
network.add_beacon_node(beacon_config).map(|_| network)
})
.and_then(move |network| {
// Check every `epoch_duration` if nodes are synced
// limited to at most `sync_timeout` epochs
Interval::new_interval(epoch_duration)
.take(sync_timeout)
.map_err(|_| "Failed to create interval".to_string())
.take_while(move |_| check_still_syncing(&network_c))
.for_each(|_| Ok(())) // consume the stream
.map(|_| network)
})
.and_then(move |network| network.bootnode_epoch().map(|e| (e, network)))
.and_then(move |(epoch, network)| {
verify_all_finalized_at(network, epoch).map_err(|e| format!("One node sync error: {}", e))
})
.await;
// Add a beacon node
network.add_beacon_node(beacon_config).await?;
// Check every `epoch_duration` if nodes are synced
// limited to at most `sync_timeout` epochs
let mut interval = tokio::time::interval(epoch_duration);
let mut count = 0;
while let Some(_) = interval.next().await {
if count >= sync_timeout || !check_still_syncing(&network_c).await? {
break;
}
count += 1;
}
let epoch = network.bootnode_epoch().await?;
verify_all_finalized_at(network, epoch)
.map_err(|e| format!("One node sync error: {}", e))
.await
}
/// Verify two nodes added after `initial_delay` epochs are in sync
/// after `sync_timeout` epochs.
pub fn verify_two_nodes_sync<E: EthSpec>(
pub async fn verify_two_nodes_sync<E: EthSpec>(
network: LocalNetwork<E>,
beacon_config: ClientConfig,
slot_duration: Duration,
initial_delay: u64,
sync_timeout: u64,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
let epoch_duration = slot_duration * (E::slots_per_epoch() as u32);
let network_c = network.clone();
// Delay for `initial_delay` epochs before adding another node to start syncing
@@ -227,41 +233,36 @@ pub fn verify_two_nodes_sync<E: EthSpec>(
slot_duration,
E::slots_per_epoch(),
)
.and_then(move |_| {
// Add beacon nodes
network
.add_beacon_node(beacon_config.clone())
.map(|_| (network, beacon_config))
.and_then(|(network, beacon_config)| {
network.add_beacon_node(beacon_config).map(|_| network)
})
})
.and_then(move |network| {
// Check every `epoch_duration` if nodes are synced
// limited to at most `sync_timeout` epochs
Interval::new_interval(epoch_duration)
.take(sync_timeout)
.map_err(|_| "Failed to create interval".to_string())
.take_while(move |_| check_still_syncing(&network_c))
.for_each(|_| Ok(())) // consume the stream
.map(|_| network)
})
.and_then(move |network| network.bootnode_epoch().map(|e| (e, network)))
.and_then(move |(epoch, network)| {
verify_all_finalized_at(network, epoch).map_err(|e| format!("Two node sync error: {}", e))
})
.await;
// Add beacon nodes
network.add_beacon_node(beacon_config.clone()).await?;
network.add_beacon_node(beacon_config).await?;
// Check every `epoch_duration` if nodes are synced
// limited to at most `sync_timeout` epochs
let mut interval = tokio::time::interval(epoch_duration);
let mut count = 0;
while let Some(_) = interval.next().await {
if count >= sync_timeout || !check_still_syncing(&network_c).await? {
break;
}
count += 1;
}
let epoch = network.bootnode_epoch().await?;
verify_all_finalized_at(network, epoch)
.map_err(|e| format!("One node sync error: {}", e))
.await
}
/// Add 2 syncing nodes after `initial_delay` epochs,
/// Add another node after `sync_timeout - 5` epochs and verify all are
/// in sync after `sync_timeout + 5` epochs.
pub fn verify_in_between_sync<E: EthSpec>(
pub async fn verify_in_between_sync<E: EthSpec>(
network: LocalNetwork<E>,
beacon_config: ClientConfig,
slot_duration: Duration,
initial_delay: u64,
sync_timeout: u64,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
let epoch_duration = slot_duration * (E::slots_per_epoch() as u32);
let network_c = network.clone();
// Delay for `initial_delay` epochs before adding another node to start syncing
@@ -271,52 +272,43 @@ pub fn verify_in_between_sync<E: EthSpec>(
slot_duration,
E::slots_per_epoch(),
)
.and_then(move |_| {
// Add a beacon node
network
.add_beacon_node(beacon_config.clone())
.map(|_| (network, beacon_config))
.and_then(|(network, beacon_config)| {
network.add_beacon_node(beacon_config).map(|_| network)
})
})
.and_then(move |network| {
// Delay before adding additional syncing nodes.
epoch_delay(
Epoch::new(sync_timeout - 5),
slot_duration,
E::slots_per_epoch(),
)
.map(|_| network)
})
.and_then(move |network| {
// Add a beacon node
network.add_beacon_node(config1.clone()).map(|_| network)
})
.and_then(move |network| {
// Check every `epoch_duration` if nodes are synced
// limited to at most `sync_timeout` epochs
Interval::new_interval(epoch_duration)
.take(sync_timeout + 5)
.map_err(|_| "Failed to create interval".to_string())
.take_while(move |_| check_still_syncing(&network_c))
.for_each(|_| Ok(())) // consume the stream
.map(|_| network)
})
.and_then(move |network| network.bootnode_epoch().map(|e| (e, network)))
.and_then(move |(epoch, network)| {
verify_all_finalized_at(network, epoch).map_err(|e| format!("In between sync error: {}", e))
})
.await;
// Add two beacon nodes
network.add_beacon_node(beacon_config.clone()).await?;
network.add_beacon_node(beacon_config).await?;
// Delay before adding additional syncing nodes.
epoch_delay(
Epoch::new(sync_timeout - 5),
slot_duration,
E::slots_per_epoch(),
)
.await;
// Add a beacon node
network.add_beacon_node(config1.clone()).await?;
// Check every `epoch_duration` if nodes are synced
// limited to at most `sync_timeout` epochs
let mut interval = tokio::time::interval(epoch_duration);
let mut count = 0;
while let Some(_) = interval.next().await {
if count >= sync_timeout || !check_still_syncing(&network_c).await? {
break;
}
count += 1;
}
let epoch = network.bootnode_epoch().await?;
verify_all_finalized_at(network, epoch)
.map_err(|e| format!("One node sync error: {}", e))
.await
}
/// Run syncing strategies one after other.
pub fn verify_syncing<E: EthSpec>(
pub async fn verify_syncing<E: EthSpec>(
network: LocalNetwork<E>,
beacon_config: ClientConfig,
slot_duration: Duration,
initial_delay: u64,
sync_timeout: u64,
) -> impl Future<Item = (), Error = String> {
) -> Result<(), String> {
verify_one_node_sync(
network.clone(),
beacon_config.clone(),
@@ -324,53 +316,42 @@ pub fn verify_syncing<E: EthSpec>(
initial_delay,
sync_timeout,
)
.map(|_| println!("Completed one node sync"))
.and_then(move |_| {
verify_two_nodes_sync(
network.clone(),
beacon_config.clone(),
slot_duration,
initial_delay,
sync_timeout,
)
.map(|_| {
println!("Completed two node sync");
(network, beacon_config)
})
})
.and_then(move |(network, beacon_config)| {
verify_in_between_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)
.map(|_| println!("Completed in between sync"))
})
.await?;
println!("Completed one node sync");
verify_two_nodes_sync(
network.clone(),
beacon_config.clone(),
slot_duration,
initial_delay,
sync_timeout,
)
.await?;
println!("Completed two node sync");
verify_in_between_sync(
network,
beacon_config,
slot_duration,
initial_delay,
sync_timeout,
)
.await?;
println!("Completed in between sync");
Ok(())
}
pub fn check_still_syncing<E: EthSpec>(
network: &LocalNetwork<E>,
) -> impl Future<Item = bool, Error = String> {
network
.remote_nodes()
.into_future()
// get syncing status of nodes
.and_then(|remote_nodes| {
stream::unfold(remote_nodes.into_iter(), |mut iter| {
iter.next().map(|remote_node| {
remote_node
.http
.node()
.syncing_status()
.map(|status| status.is_syncing)
.map(|status| (status, iter))
.map_err(|e| format!("Get syncing status via http failed: {:?}", e))
})
})
.collect()
})
.and_then(move |status| Ok(status.iter().any(|is_syncing| *is_syncing)))
.map_err(|e| format!("Failed syncing check: {:?}", e))
pub async fn check_still_syncing<E: EthSpec>(network: &LocalNetwork<E>) -> Result<bool, String> {
// get syncing status of nodes
let mut status = Vec::new();
for remote_node in network.remote_nodes()? {
status.push(
remote_node
.http
.node()
.syncing_status()
.await
.map(|status| status.is_syncing)
.map_err(|e| format!("Get syncing status via http failed: {:?}", e))?,
)
}
Ok(status.iter().any(|is_syncing| *is_syncing))
}
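
The syncing checks above replace `Interval::new_interval(epoch_duration).take(sync_timeout).take_while(...)` with a bounded polling loop: tick on an interval, stop once the condition clears or the tick budget is spent. A minimal sketch with a hypothetical `still_syncing` stand-in for the per-node HTTP check, assuming tokio 0.2 with "time"/"macros"/"rt-core":

// Sketch only, not the PR's code.
use std::time::Duration;

async fn still_syncing(ticks_so_far: u64) -> Result<bool, String> {
    Ok(ticks_so_far < 3) // pretend syncing finishes after three ticks
}

async fn wait_until_synced(tick: Duration, max_ticks: u64) -> Result<(), String> {
    let mut interval = tokio::time::interval(tick);
    let mut count = 0;
    loop {
        interval.tick().await;
        // Stop after `max_ticks` ticks, or as soon as syncing has finished;
        // `?` surfaces any HTTP error, as in the rewritten checks above.
        if count >= max_ticks || !still_syncing(count).await? {
            break;
        }
        count += 1;
    }
    Ok(())
}

#[tokio::main]
async fn main() {
    println!("{:?}", wait_until_synced(Duration::from_millis(20), 10).await);
}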