mirror of
https://github.com/sigp/lighthouse.git
synced 2026-03-22 14:24:44 +00:00
* Initial commit. web3 api working.
* Tidied up lib. Add function for fetching logs.
* Refactor with `Eth1DataFetcher` trait
* Add parsing for deposit contract logs and get_eth1_data function
* Add `get_eth1_votes` function
* Refactor to cache Eth1Data
* Fix merge conflicts and minor refactorings
* Rename Eth1Cache to Eth1DataCache
* Refactor events subscription
* Add deposits module to interface with BeaconChain deposits
* Remove utils
* Rename to types.rs and add trait constraints to Eth1DataFetcher
* Conform to trait constraints. Make Web3DataFetcher cloneable
* Make fetcher object member of deposit and eth1_data cache and other fixes
* Fix update_cache function
* Move fetch_eth1_data to impl block
* Fix deposit tests
* Create Eth1 object for interfacing with Beacon chain
* Add `run` function for running update_cache and subscribe_deposit_logs tasks
* Add logging
* Run `cargo fmt` and make tests pass
* Convert sync functions to async
* Add timeouts to web3 functions
* Return futures from cache functions
* Add failed chaining of futures
* Working cache update
* Clean up tests and `update_cache` function
* Refactor `get_eth1_data` functions to work with future returning functions
* Refactor eth1 `run` function to work with modified `update_cache` api
* Minor changes
* Add distance parameter to `update_cache`
* Fix tests and other minor fixes
* Working integration with cache and deposits
* Add merkle_tree construction, proof generation and verification code
* Add function to construct and fetch Deposits for BeaconNode
* Add error handling
* Import ssz
* Add error handling to eth1 cache and fix minor errors
* Run rustfmt
* Fix minor bug
* Rename Eth1Error and change to Result<T>
* Change deposit fetching mechanism from notification based to poll based
* Add deposits from eth1 chain in a given range every `x` blocks
* Modify `run` function to accommodate changes
* Minor fixes
* Fix formatting
* Fix merge issue
* Refactor with `Config` struct. Remove `ContractConfig`
* Rename eth1_chain crate to eth1
* Rename files and read abi file using `fs::read`
* Move eth1 to lib
* Remove unnecessary mutability constraint
* Add `Web3Backend` for returning actual eth1 data
* Refactor `get_eth1_votes` to return a Result
* Delete `eth1_chain` crate
* Return `Result` from `get_deposits`
* Fix range of deposits to return to beacon chain
* Add `get_block_height_by_hash` trait function
* Add naive method for getting `previous_eth1_distance`
* Add eth1 config params to main config
* Add instructions for setting up eth1 testing environment
* Add build script to fetch deposit contract abi
* Contract ABI is part of compiled binary
* Fix minor bugs
* Move docs to lib
* Add timeout to config
* Remove print statements
* Change warn to error
* Fix typos
* Removed prints in test and get timeout value from config
* Fixed error types
* Added logging to web3_fetcher
* Refactor for modified web3 api
* Fix minor stuff
* Add build script
* Tidy, hide eth1 integration tests behind flag
* Add http crate
* Add first stages of eth1_test_rig
* Fix deposits on test rig
* Fix bug with deposit count method
* Add block hash getter to http eth1
* Clean eth1 http crate and tests
* Add script to start ganache
* Adds deposit tree to eth1-http
* Extend deposit tree tests
* Tidy tests in eth1-http
* Add more detail to get block request
* Add block cache to eth1-http
* Rename deposit tree to deposit cache
* Add initial updating to eth1-http
* Tidy updater
* Fix compile bugs in tests
* Adds an Eth1DataCache builder
* Reorg eth1-http files
* Add (failing) tests for eth1 updater
* Rename files, fix bug in eth1-http
* Ensure that ganache timestamps are increasing
* Fix bugs with getting eth1data ancestors
* Improve eth1 testing, fix bugs
* Add truncate method to block cache
* Add pruning to block cache update process
* Add tests for block pruning
* Allow for dropping an expired cache
* Add more comments
* Add first compiling version of deposit updater
* Add common fn for getting range of required blocks
* Add passing deposit update test
* Improve tests
* Fix block pruning bug
* Add tests for running two updates at once
* Add updater services to eth1
* Add deposit collection to beacon chain
* Add incomplete builder experiments
* Add first working version of beacon chain builder
* Update test harness to new beacon chain type
* Rename builder file, tidy
* Add first working client builder
* Progress further on client builder
* Update beacon node binary to use client builder
* Ensure release tests compile
* Remove old eth1 crate
* Add first pass of new lighthouse binary
* Fix websocket server startup
* Remove old binary code from beacon_node crate
* Add first working beacon node tests
* Add genesis crate, new eth1 cache_2
* Add Service to Eth1Cache
* Refactor with general eth1 improvements
* Add passing genesis test
* Tidy, add comments
* Add more comments to eth1 service
* Add further eth1 progress
* Fix some bugs with genesis
* Fix eth1 bugs, make eth1 linking more efficient
* Shift logic in genesis service
* Add more comments to genesis service
* Add gzip, max request values, timeouts to http
* Update testnet parameters to suit goerli testnet
* Add ability to vary Fork, fix custom spec
* Be more explicit about deposit fork version
* Start adding beacon chain eth1 option
* Add more flexibility to prod client
* Further runtime refactoring
* Allow for starting from store
* Add bootstrapping to client config
* Add remote_beacon_node crate
* Update eth1 service for more configurability
* Update eth1 tests to use less runtimes
* Patch issues with tests using too many files
* Move dummy eth1 backend flag
* Ensure all tests pass
* Add ganache-cli to Dockerfile
* Use a special docker hub image for testing
* Appease clippy
* Move validator client into lighthouse binary
* Allow starting with dummy eth1 backend
* Improve logging
* Fix dummy eth1 backend from cli
* Add extra testnet command
* Ensure consistent spec in beacon node
* Update eth1 rig to work on goerli
* Tidy lcli, start adding support for yaml config
* Add incomplete YamlConfig struct
* Remove efforts at YamlConfig
* Add incomplete eth1 voting. Blocked on spec issues
* Add (untested) first pass at eth1 vote algo
* Add tests for winning vote
* Add more tests for eth1 chain
* Add more eth1 voting tests
* Added more eth1 voting testing
* Change test name
* Add more tests to eth1 chain
* Tidy eth1 generics, add more tests
* Improve comments
* Tidy beacon_node tests
* Tidy, rename JsonRpc.. to Caching..
* Tidy voting logic
* Tidy builder docs
* Add comments, tidy eth1
* Add more comments to eth1
* Fix bug with winning_vote
* Add doc comments to the `ClientBuilder`
* Remove commented-out code
* Improve `ClientBuilder` docs
* Add comments to client config
* Add decoding test for `ClientConfig`
* Remove unused `DepositSet` struct
* Tidy `block_cache`
* Remove commented out lines
* Remove unused code in `eth1` crate
* Remove old validator binary `main.rs`
* Tidy, fix tests compile error
* Add initial tests for get_deposits
* Remove dead code in eth1_test_rig
* Update TestingDepositBuilder
* Add testing for getting eth1 deposits
* Fix duplicate rand dep
* Remove dead code
* Remove accidentally-added files
* Fix comment in eth1_genesis_service
* Add .gitignore for eth1_test_rig
* Fix bug in eth1_genesis_service
* Remove dead code from eth2_config
* Fix tabs/spaces in root Cargo.toml
* Tidy eth1 crate
* Allow for re-use of eth1 service after genesis
* Update docs for new CLI
* Change README gif
* Tidy eth1 http module
* Tidy eth1 service
* Tidy environment crate
* Remove unused file
* Tidy, add comments
* Remove commented-out code
* Address majority of Michael's comments
* Address other PR comments
* Add link to issue alongside TODO
380 lines
15 KiB
Rust
pub use crate::{common::genesis_deposits, interop::interop_genesis_state};
pub use eth1::Config as Eth1Config;

use eth1::{DepositLog, Eth1Block, Service};
use futures::{
    future,
    future::{loop_fn, Loop},
    Future,
};
use parking_lot::Mutex;
use slog::{debug, error, info, Logger};
use state_processing::{
    initialize_beacon_state_from_eth1, is_valid_genesis_state,
    per_block_processing::process_deposit, process_activations,
};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::timer::Delay;
use types::{BeaconState, ChainSpec, Deposit, Eth1Data, EthSpec, Hash256};

/// Provides a service that connects to some Eth1 HTTP JSON-RPC endpoint and maintains a cache of
/// eth1 blocks and deposits, listening for the eth1 block that triggers eth2 genesis and returning
/// the genesis `BeaconState`.
///
/// Is a wrapper around the `Service` struct of the `eth1` crate.
#[derive(Clone)]
pub struct Eth1GenesisService {
    /// The underlying service. Access to this object is only required for testing and diagnosis.
    pub core: Service,
    /// The highest block number we've processed and determined does not trigger genesis.
    highest_processed_block: Arc<Mutex<Option<u64>>>,
    /// Enabled when the genesis service should start downloading blocks.
    ///
    /// It is disabled until there are enough deposit logs to start syncing.
    sync_blocks: Arc<Mutex<bool>>,
}

impl Eth1GenesisService {
    /// Creates a new service. Does not attempt to connect to the Eth1 node.
    pub fn new(config: Eth1Config, log: Logger) -> Self {
        Self {
            core: Service::new(config, log),
            highest_processed_block: Arc::new(Mutex::new(None)),
            sync_blocks: Arc::new(Mutex::new(false)),
        }
    }

    fn first_viable_eth1_block(&self, min_genesis_active_validator_count: usize) -> Option<u64> {
        if self.core.deposit_cache_len() < min_genesis_active_validator_count {
            None
        } else {
            self.core
                .deposits()
                .read()
                .cache
                .get(min_genesis_active_validator_count.saturating_sub(1))
                .map(|log| log.block_number)
        }
    }
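The indexing trick in `first_viable_eth1_block` (deposits are ordered by block, so the log at index `min_genesis_active_validator_count - 1` marks the earliest block that could possibly satisfy the count condition) can be sketched in isolation. This is a simplified, self-contained illustration: `DepositLog` below is a hypothetical two-field stand-in, not the real `eth1::DepositLog`.

```rust
// Hypothetical stand-in for eth1::DepositLog: only the field this logic needs.
struct DepositLog {
    block_number: u64,
}

/// Returns the block number containing the `min_count`-th deposit, if that many
/// deposits exist. Deposits are ordered by block number, so the (min_count - 1)-th
/// log marks the earliest block at which genesis could possibly be triggered.
fn first_viable_block(logs: &[DepositLog], min_count: usize) -> Option<u64> {
    if logs.len() < min_count {
        None
    } else {
        logs.get(min_count.saturating_sub(1)).map(|log| log.block_number)
    }
}

fn main() {
    let logs: Vec<DepositLog> = [10, 10, 12, 15]
        .iter()
        .map(|&n| DepositLog { block_number: n })
        .collect();

    // Three deposits exist by block 12, so that is the first viable block.
    assert_eq!(first_viable_block(&logs, 3), Some(12));
    // Not enough deposits yet.
    assert_eq!(first_viable_block(&logs, 5), None);
    println!("ok");
}
```

Once this block number is known, the service sets it as the lowest cached block, avoiding downloads of earlier blocks that cannot matter.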

    /// Returns a future that will keep updating the cache and resolve once it has discovered the
    /// first Eth1 block that triggers an Eth2 genesis.
    ///
    /// ## Returns
    ///
    /// - `Ok(state)` once the canonical eth2 genesis state has been discovered.
    /// - `Err(e)` if there is some internal error during updates.
    pub fn wait_for_genesis_state<E: EthSpec>(
        &self,
        update_interval: Duration,
        spec: ChainSpec,
    ) -> impl Future<Item = BeaconState<E>, Error = String> {
        let service = self.clone();

        loop_fn::<(ChainSpec, Option<BeaconState<E>>), _, _, _>(
            (spec, None),
            move |(spec, state)| {
                let service_1 = service.clone();
                let service_2 = service.clone();
                let service_3 = service.clone();
                let service_4 = service.clone();
                let log = service.core.log.clone();
                let min_genesis_active_validator_count = spec.min_genesis_active_validator_count;

                Delay::new(Instant::now() + update_interval)
                    .map_err(|e| format!("Delay between genesis deposit checks failed: {:?}", e))
                    .and_then(move |()| {
                        service_1
                            .core
                            .update_deposit_cache()
                            .map_err(|e| format!("{:?}", e))
                    })
                    .then(move |update_result| {
                        if let Err(e) = update_result {
                            error!(
                                log,
                                "Failed to update eth1 deposit cache";
                                "error" => e
                            )
                        }

                        // Do not exit the loop if there is an error whilst updating.
                        Ok(())
                    })
                    // Only enable the `sync_blocks` flag if there are enough deposits to feasibly
                    // trigger genesis.
                    //
                    // Note: genesis is triggered by the _active_ validator count, not just the
                    // deposit count, so it's possible that block downloads are started too early.
                    // This is just wasteful, not erroneous.
                    .and_then(move |()| {
                        let mut sync_blocks = service_2.sync_blocks.lock();

                        if !(*sync_blocks) {
                            if let Some(viable_eth1_block) = service_2
                                .first_viable_eth1_block(min_genesis_active_validator_count as usize)
                            {
                                info!(
                                    service_2.core.log,
                                    "Minimum genesis deposit count met";
                                    "deposit_count" => min_genesis_active_validator_count,
                                    "block_number" => viable_eth1_block,
                                );
                                service_2.core.set_lowest_cached_block(viable_eth1_block);
                                *sync_blocks = true
                            }
                        }

                        Ok(*sync_blocks)
                    })
                    .and_then(move |should_update_block_cache| {
                        let maybe_update_future: Box<dyn Future<Item = _, Error = _> + Send> =
                            if should_update_block_cache {
                                Box::new(service_3.core.update_block_cache().then(
                                    move |update_result| {
                                        if let Err(e) = update_result {
                                            error!(
                                                service_3.core.log,
                                                "Failed to update eth1 block cache";
                                                "error" => format!("{:?}", e)
                                            );
                                        }

                                        // Do not exit the loop if there is an error whilst
                                        // updating.
                                        Ok(())
                                    },
                                ))
                            } else {
                                Box::new(future::ok(()))
                            };

                        maybe_update_future
                    })
                    .and_then(move |()| {
                        if let Some(genesis_state) = service_4
                            .scan_new_blocks::<E>(&spec)
                            .map_err(|e| format!("Failed to scan for new blocks: {}", e))?
                        {
                            Ok(Loop::Break((spec, genesis_state)))
                        } else {
                            debug!(
                                service_4.core.log,
                                "No eth1 genesis block found";
                                "cached_blocks" => service_4.core.block_cache_len(),
                                "cached_deposits" => service_4.core.deposit_cache_len(),
                                "cache_head" => service_4.highest_known_block(),
                            );

                            Ok(Loop::Continue((spec, state)))
                        }
                    })
            },
        )
        .map(|(_spec, state)| state)
    }

    /// Processes any new blocks that have appeared since this function was last run.
    ///
    /// A `highest_processed_block` value is stored in `self`. This function will find any blocks
    /// in its caches that have a higher block number than `highest_processed_block` and check to
    /// see if they would trigger an Eth2 genesis.
    ///
    /// Blocks are always tested in increasing order, starting with the lowest unknown block
    /// number in the cache.
    ///
    /// ## Returns
    ///
    /// - `Ok(Some(eth1_block))` if a previously-unprocessed block would trigger Eth2 genesis.
    /// - `Ok(None)` if none of the new blocks would trigger genesis, or there were no new blocks.
    /// - `Err(_)` if there was some internal error.
    fn scan_new_blocks<E: EthSpec>(
        &self,
        spec: &ChainSpec,
    ) -> Result<Option<BeaconState<E>>, String> {
        let genesis_trigger_eth1_block = self
            .core
            .blocks()
            .read()
            .iter()
            // It's only worth scanning blocks that have timestamps _after_ genesis time. It's
            // impossible for any other block to trigger genesis.
            .filter(|block| block.timestamp >= spec.min_genesis_time)
            // The block cache might be more recently updated than the deposit cache. Restrict any
            // block numbers that are not known by all caches.
            .filter(|block| {
                self.highest_known_block()
                    .map(|n| block.number <= n)
                    .unwrap_or(false)
            })
            .find(|block| {
                let mut highest_processed_block = self.highest_processed_block.lock();

                let next_new_block_number = highest_processed_block.map_or(0, |n| n + 1);

                if block.number < next_new_block_number {
                    return false;
                }

                self.is_valid_genesis_eth1_block::<E>(block, spec)
                    .map(|val| {
                        *highest_processed_block = Some(block.number);
                        val
                    })
                    .unwrap_or_else(|_| {
                        error!(
                            self.core.log,
                            "Failed to detect if eth1 block triggers genesis";
                            "eth1_block_number" => block.number,
                            "eth1_block_hash" => format!("{}", block.hash),
                        );
                        false
                    })
            })
            .cloned();

        if let Some(eth1_block) = genesis_trigger_eth1_block {
            debug!(
                self.core.log,
                "All genesis conditions met";
                "eth1_block_height" => eth1_block.number,
            );

            let genesis_state = self
                .genesis_from_eth1_block(eth1_block.clone(), spec)
                .map_err(|e| format!("Failed to generate valid genesis state: {}", e))?;

            info!(
                self.core.log,
                "Deposit contract genesis complete";
                "eth1_block_height" => eth1_block.number,
                "validator_count" => genesis_state.validators.len(),
            );

            Ok(Some(genesis_state))
        } else {
            Ok(None)
        }
    }
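The watermark logic in `scan_new_blocks` (only blocks strictly above `highest_processed_block` are tested, and the watermark advances even when a tested block does not trigger genesis) can be sketched without the surrounding caches. This is an illustrative sketch: `triggers` is a hypothetical predicate standing in for `is_valid_genesis_eth1_block`, and plain block numbers stand in for `Eth1Block`s.

```rust
/// Scans `block_numbers` (ascending) for the first block above the watermark that
/// satisfies `triggers`, advancing the watermark past every block it tests.
/// Already-processed blocks are skipped, so each block is tested at most once
/// across repeated calls.
fn scan<F>(block_numbers: &[u64], watermark: &mut Option<u64>, triggers: F) -> Option<u64>
where
    F: Fn(u64) -> bool,
{
    for &n in block_numbers {
        let next_new = watermark.map_or(0, |w| w + 1);
        if n < next_new {
            continue; // Already processed in a previous call.
        }
        *watermark = Some(n);
        if triggers(n) {
            return Some(n);
        }
    }
    None
}

fn main() {
    let mut watermark = None;

    // Nothing triggers yet, but the watermark still advances to 5.
    assert_eq!(scan(&[3, 4, 5], &mut watermark, |_| false), None);
    assert_eq!(watermark, Some(5));

    // A later call only tests the new blocks; block 7 triggers.
    assert_eq!(scan(&[3, 4, 5, 6, 7], &mut watermark, |n| n >= 7), Some(7));
    println!("ok");
}
```

Advancing the watermark on failure as well as success is what makes the repeated polling cheap: each candidate block pays the (potentially expensive) genesis check only once.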

    /// Produces an eth2 genesis `BeaconState` from the given `eth1_block`.
    ///
    /// ## Returns
    ///
    /// - `Ok(genesis_state)` if all went well.
    /// - `Err(e)` if the given `eth1_block` was not a viable block to trigger genesis or there was
    ///   an internal error.
    fn genesis_from_eth1_block<E: EthSpec>(
        &self,
        eth1_block: Eth1Block,
        spec: &ChainSpec,
    ) -> Result<BeaconState<E>, String> {
        let deposit_logs = self
            .core
            .deposits()
            .read()
            .cache
            .iter()
            .take_while(|log| log.block_number <= eth1_block.number)
            .map(|log| log.deposit_data.clone())
            .collect::<Vec<_>>();

        let genesis_state = initialize_beacon_state_from_eth1(
            eth1_block.hash,
            eth1_block.timestamp,
            genesis_deposits(deposit_logs, spec)?,
            spec,
        )
        .map_err(|e| format!("Unable to initialize genesis state: {:?}", e))?;

        if is_valid_genesis_state(&genesis_state, spec) {
            Ok(genesis_state)
        } else {
            Err("Generated state was not valid.".to_string())
        }
    }

    /// A cheap (compared to using `initialize_beacon_state_from_eth1`) method for determining if
    /// some `target_block` will trigger genesis.
    fn is_valid_genesis_eth1_block<E: EthSpec>(
        &self,
        target_block: &Eth1Block,
        spec: &ChainSpec,
    ) -> Result<bool, String> {
        if target_block.timestamp < spec.min_genesis_time {
            Ok(false)
        } else {
            let mut local_state: BeaconState<E> = BeaconState::new(
                0,
                Eth1Data {
                    block_hash: Hash256::zero(),
                    deposit_root: Hash256::zero(),
                    deposit_count: 0,
                },
                spec,
            );

            local_state.genesis_time = target_block.timestamp;

            self.deposit_logs_at_block(target_block.number)
                .iter()
                // TODO: add the signature field back.
                //.filter(|deposit_log| deposit_log.signature_is_valid)
                .map(|deposit_log| Deposit {
                    proof: vec![Hash256::zero(); spec.deposit_contract_tree_depth as usize].into(),
                    data: deposit_log.deposit_data.clone(),
                })
                .try_for_each(|deposit| {
                    // No need to verify proofs in order to test if some block will trigger
                    // genesis.
                    const PROOF_VERIFICATION: bool = false;

                    // Note: presently all the signatures are verified each time this function is
                    // run.
                    //
                    // It would be more efficient to pre-verify signatures, filter out the invalid
                    // ones and disable verification for `process_deposit`.
                    //
                    // This is only more efficient in scenarios where `min_genesis_time` occurs
                    // _before_ `min_validator_count` is met. We're unlikely to see this scenario
                    // in testnets (`min_genesis_time` is usually `0`) and I'm not certain it will
                    // happen for the real, production deposit contract.
                    process_deposit(&mut local_state, &deposit, spec, PROOF_VERIFICATION)
                        .map_err(|e| format!("Error whilst processing deposit: {:?}", e))
                })?;

            process_activations(&mut local_state, spec);

            Ok(is_valid_genesis_state(&local_state, spec))
        }
    }

    /// Returns the `block_number` of the highest (by block number) block in the cache.
    ///
    /// Takes the lower block number of the deposit and block caches to ensure this number is safe.
    fn highest_known_block(&self) -> Option<u64> {
        let block_cache = self.core.blocks().read().highest_block_number()?;
        let deposit_cache = self.core.deposits().read().last_processed_block?;

        Some(std::cmp::min(block_cache, deposit_cache))
    }

    /// Returns all deposit logs included in `block_number` and all prior blocks.
    fn deposit_logs_at_block(&self, block_number: u64) -> Vec<DepositLog> {
        self.core
            .deposits()
            .read()
            .cache
            .iter()
            .take_while(|log| log.block_number <= block_number)
            .cloned()
            .collect()
    }

    /// Returns the `Service` contained in `self`.
    pub fn into_core_service(self) -> Service {
        self.core
    }
}
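`wait_for_genesis_state` is built on `loop_fn` from futures 0.1; stripped of the async machinery, its control flow is a timed poll loop: wait for the interval, run the updates and scan, and either break with a result or go around again. A minimal synchronous sketch of that shape, with a hypothetical `tick` closure standing in for the cache updates and block scan (this is an illustration of the loop's structure, not the actual futures-based implementation):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Polls `tick` every `interval` until it yields a value. In the real service,
/// errors inside a tick are logged and swallowed so the loop keeps running;
/// here the closure simply returns `None` until a result is available.
fn poll_until<T, F>(interval: Duration, mut tick: F) -> T
where
    F: FnMut() -> Option<T>,
{
    loop {
        sleep(interval);
        if let Some(value) = tick() {
            return value;
        }
    }
}

fn main() {
    // Hypothetical stand-in: "genesis" is discovered on the third tick.
    let mut calls = 0;
    let result = poll_until(Duration::from_millis(1), || {
        calls += 1;
        if calls == 3 { Some("genesis") } else { None }
    });
    assert_eq!(result, "genesis");
    assert_eq!(calls, 3);
    println!("ok");
}
```

The important design choice mirrored here is that a failed update does not break the loop: the service tolerates transient eth1 node errors and simply tries again on the next interval.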