Mirror of https://github.com/sigp/lighthouse.git, synced 2026-03-18 12:22:51 +00:00
Eth1 Integration (#542)
* Refactor to cache Eth1Data
* Fix merge conflicts and minor refactorings
* Rename Eth1Cache to Eth1DataCache
* Refactor events subscription
* Add deposits module to interface with BeaconChain deposits
* Remove utils
* Rename to types.rs and add trait constraints to Eth1DataFetcher
* Conform to trait constraints. Make Web3DataFetcher cloneable
* Make fetcher object member of deposit and eth1_data cache and other fixes
* Fix update_cache function
* Move fetch_eth1_data to impl block
* Fix deposit tests
* Create Eth1 object for interfacing with Beacon chain
* Add `run` function for running update_cache and subscribe_deposit_logs tasks
* Add logging
* Run `cargo fmt` and make tests pass
* Convert sync functions to async
* Add timeouts to web3 functions
* Return futures from cache functions
* Add failed chaining of futures
* Working cache update
* Clean up tests and `update_cache` function
* Refactor `get_eth1_data` functions to work with future-returning functions
* Refactor eth1 `run` function to work with modified `update_cache` API
* Minor changes
* Add distance parameter to `update_cache`
* Fix tests and other minor fixes
* Working integration with cache and deposits
* Add merkle_tree construction, proof generation and verification code
* Add function to construct and fetch Deposits for BeaconNode
* Add error handling
* Import ssz
* Add error handling to eth1 cache and fix minor errors
* Run rustfmt
* Fix minor bug
* Rename Eth1Error and change to Result<T>
* Change deposit fetching mechanism from notification-based to poll-based
* Add deposits from eth1 chain in a given range every `x` blocks
* Modify `run` function to accommodate changes
* Minor fixes
* Fix formatting
* Initial commit. web3 api working.
* Tidied up lib. Add function for fetching logs.
* Refactor with `Eth1DataFetcher` trait
* Add parsing for deposit contract logs and get_eth1_data function
* Add `get_eth1_votes` function
* Fix merge issue
* Refactor with `Config` struct. Remove `ContractConfig`
* Rename eth1_chain crate to eth1
* Rename files and read abi file using `fs::read`
* Move eth1 to lib
* Remove unnecessary mutability constraint
* Add `Web3Backend` for returning actual eth1 data
* Refactor `get_eth1_votes` to return a Result
* Delete `eth1_chain` crate
* Return `Result` from `get_deposits`
* Fix range of deposits to return to beacon chain
* Add `get_block_height_by_hash` trait function
* Add naive method for getting `previous_eth1_distance`
* Add eth1 config params to main config
* Add instructions for setting up eth1 testing environment
* Add build script to fetch deposit contract abi
* Contract ABI is part of compiled binary
* Fix minor bugs
* Move docs to lib
* Add timeout to config
* Remove print statements
* Change warn to error
* Fix typos
* Removed prints in test and get timeout value from config
* Fixed error types
* Added logging to web3_fetcher
* Refactor for modified web3 api
* Fix minor stuff
* Add build script
* Tidy, hide eth1 integration tests behind flag
* Add http crate
* Add first stages of eth1_test_rig
* Fix deposits on test rig
* Fix bug with deposit count method
* Add block hash getter to http eth1
* Clean eth1 http crate and tests
* Add script to start ganache
* Adds deposit tree to eth1-http
* Extend deposit tree tests
* Tidy tests in eth1-http
* Add more detail to get block request
* Add block cache to eth1-http
* Rename deposit tree to deposit cache
* Add initial updating to eth1-http
* Tidy updater
* Fix compile bugs in tests
* Adds an Eth1DataCache builder
* Reorg eth1-http files
* Add (failing) tests for eth1 updater
* Rename files, fix bug in eth1-http
* Ensure that ganache timestamps are increasing
* Fix bugs with getting eth1data ancestors
* Improve eth1 testing, fix bugs
* Add truncate method to block cache
* Add pruning to block cache update process
* Add tests for block pruning
* Allow for dropping an expired cache
* Add more comments
* Add first compiling version of deposit updater
* Add common fn for getting range of required blocks
* Add passing deposit update test
* Improve tests
* Fix block pruning bug
* Add tests for running two updates at once
* Add updater services to eth1
* Add deposit collection to beacon chain
* Add incomplete builder experiments
* Add first working version of beacon chain builder
* Update test harness to new beacon chain type
* Rename builder file, tidy
* Add first working client builder
* Progress further on client builder
* Update beacon node binary to use client builder
* Ensure release tests compile
* Remove old eth1 crate
* Add first pass of new lighthouse binary
* Fix websocket server startup
* Remove old binary code from beacon_node crate
* Add first working beacon node tests
* Add genesis crate, new eth1 cache_2
* Add Service to Eth1Cache
* Refactor with general eth1 improvements
* Add passing genesis test
* Tidy, add comments
* Add more comments to eth1 service
* Add further eth1 progress
* Fix some bugs with genesis
* Fix eth1 bugs, make eth1 linking more efficient
* Shift logic in genesis service
* Add more comments to genesis service
* Add gzip, max request values, timeouts to http
* Update testnet parameters to suit goerli testnet
* Add ability to vary Fork, fix custom spec
* Be more explicit about deposit fork version
* Start adding beacon chain eth1 option
* Add more flexibility to prod client
* Further runtime refactoring
* Allow for starting from store
* Add bootstrapping to client config
* Add remote_beacon_node crate
* Update eth1 service for more configurability
* Update eth1 tests to use fewer runtimes
* Patch issues with tests using too many files
* Move dummy eth1 backend flag
* Ensure all tests pass
* Add ganache-cli to Dockerfile
* Use a special docker hub image for testing
* Appease clippy
* Move validator client into lighthouse binary
* Allow starting with dummy eth1 backend
* Improve logging
* Fix dummy eth1 backend from cli
* Add extra testnet command
* Ensure consistent spec in beacon node
* Update eth1 rig to work on goerli
* Tidy lcli, start adding support for yaml config
* Add incomplete YamlConfig struct
* Remove efforts at YamlConfig
* Add incomplete eth1 voting. Blocked on spec issues
* Add (untested) first pass at eth1 vote algo
* Add tests for winning vote
* Add more tests for eth1 chain
* Add more eth1 voting tests
* Added more eth1 voting testing
* Change test name
* Add more tests to eth1 chain
* Tidy eth1 generics, add more tests
* Improve comments
* Tidy beacon_node tests
* Tidy, rename JsonRpc.. to Caching..
* Tidy voting logic
* Tidy builder docs
* Add comments, tidy eth1
* Add more comments to eth1
* Fix bug with winning_vote
* Add doc comments to the `ClientBuilder`
* Remove commented-out code
* Improve `ClientBuilder` docs
* Add comments to client config
* Add decoding test for `ClientConfig`
* Remove unused `DepositSet` struct
* Tidy `block_cache`
* Remove commented-out lines
* Remove unused code in `eth1` crate
* Remove old validator binary `main.rs`
* Tidy, fix tests compile error
* Add initial tests for get_deposits
* Remove dead code in eth1_test_rig
* Update TestingDepositBuilder
* Add testing for getting eth1 deposits
* Fix duplicate rand dep
* Remove dead code
* Remove accidentally-added files
* Fix comment in eth1_genesis_service
* Add .gitignore for eth1_test_rig
* Fix bug in eth1_genesis_service
* Remove dead code from eth2_config
* Fix tabs/spaces in root Cargo.toml
* Tidy eth1 crate
* Allow for re-use of eth1 service after genesis
* Update docs for new CLI
* Change README gif
* Tidy eth1 http module
* Tidy eth1 service
* Tidy environment crate
* Remove unused file
* Tidy, add comments
* Remove commented-out code
* Address majority of Michael's comments
* Address other PR comments
* Add link to issue alongside TODO
beacon_node/eth1/src/block_cache.rs | 271 (new file)
@@ -0,0 +1,271 @@
use std::ops::RangeInclusive;
use types::{Eth1Data, Hash256};

#[derive(Debug, PartialEq, Clone)]
pub enum Error {
    /// The timestamp of each block must be equal to or later than the block prior to it.
    InconsistentTimestamp { parent: u64, child: u64 },
    /// Some `Eth1Block` was provided with the same block number but different data. The source
    /// of eth1 data is inconsistent.
    Conflicting(u64),
    /// The given block was not one block number higher than the highest known block number.
    NonConsecutive { given: u64, expected: u64 },
    /// Some invariant was violated, there is a likely bug in the code.
    Internal(String),
}

/// A block of the eth1 chain.
///
/// Contains all information required to add a `BlockCache` entry.
#[derive(Debug, PartialEq, Clone, Eq, Hash)]
pub struct Eth1Block {
    pub hash: Hash256,
    pub timestamp: u64,
    pub number: u64,
    pub deposit_root: Option<Hash256>,
    pub deposit_count: Option<u64>,
}

impl Eth1Block {
    /// Returns the `Eth1Data` for this block, if both the deposit root and count are known.
    pub fn eth1_data(self) -> Option<Eth1Data> {
        Some(Eth1Data {
            deposit_root: self.deposit_root?,
            deposit_count: self.deposit_count?,
            block_hash: self.hash,
        })
    }
}

/// Stores block and deposit contract information and provides queries based upon the block
/// timestamp.
#[derive(Debug, PartialEq, Clone, Default)]
pub struct BlockCache {
    blocks: Vec<Eth1Block>,
}

impl BlockCache {
    /// Returns the number of blocks stored in `self`.
    pub fn len(&self) -> usize {
        self.blocks.len()
    }

    /// True if the cache does not store any blocks.
    pub fn is_empty(&self) -> bool {
        self.blocks.is_empty()
    }

    /// Returns the highest block number stored.
    pub fn highest_block_number(&self) -> Option<u64> {
        self.blocks.last().map(|block| block.number)
    }

    /// Returns an iterator over all blocks.
    ///
    /// Blocks are guaranteed to be returned with:
    ///
    /// - Monotonically increasing block numbers.
    /// - Non-uniformly increasing block timestamps.
    pub fn iter(&self) -> impl DoubleEndedIterator<Item = &Eth1Block> + Clone {
        self.blocks.iter()
    }

    /// Shortens the cache, keeping the latest (by block number) `len` blocks while dropping the
    /// rest.
    ///
    /// If `len` is greater than the vector's current length, this has no effect.
    pub fn truncate(&mut self, len: usize) {
        if len < self.blocks.len() {
            self.blocks = self.blocks.split_off(self.blocks.len() - len);
        }
    }

    /// Returns the range of block numbers stored in the block cache. All blocks in this range can
    /// be accessed.
    fn available_block_numbers(&self) -> Option<RangeInclusive<u64>> {
        Some(self.blocks.first()?.number..=self.blocks.last()?.number)
    }

    /// Returns a block with the corresponding number, if any.
    pub fn block_by_number(&self, block_number: u64) -> Option<&Eth1Block> {
        self.blocks.get(
            self.blocks
                .as_slice()
                .binary_search_by(|block| block.number.cmp(&block_number))
                .ok()?,
        )
    }

    /// Insert an `Eth1Block` into `self`, allowing future queries.
    ///
    /// Allows inserting either:
    ///
    /// - The root block (i.e., any block if there are no existing blocks), or,
    /// - An immediate child of the most recent (highest block number) block.
    ///
    /// ## Errors
    ///
    /// - If the cache is not empty and `block.number - 1` is not already in `self`.
    /// - If `block.number` is in `self`, but the stored block is not identical to the supplied
    /// `Eth1Block`.
    /// - If `block.timestamp` is prior to the parent's.
    pub fn insert_root_or_child(&mut self, block: Eth1Block) -> Result<(), Error> {
        let expected_block_number = self
            .highest_block_number()
            .map(|n| n + 1)
            .unwrap_or_else(|| block.number);

        // If there are already some cached blocks, check to see if the new block number is one of
        // them.
        //
        // If the block is already known, check to see the given block is identical to it. If not,
        // raise an inconsistency error. This is most likely caused by some fork on the eth1
        // chain.
        if let Some(local) = self.available_block_numbers() {
            if local.contains(&block.number) {
                let known_block = self.block_by_number(block.number).ok_or_else(|| {
                    Error::Internal("An expected block was not present".to_string())
                })?;

                if known_block == &block {
                    return Ok(());
                } else {
                    return Err(Error::Conflicting(block.number));
                };
            }
        }

        // Only permit blocks when the block is either:
        //
        // - The first block inserted.
        // - Exactly one block number higher than the highest known block number.
        if block.number != expected_block_number {
            return Err(Error::NonConsecutive {
                given: block.number,
                expected: expected_block_number,
            });
        }

        // If the block is not the first block inserted, ensure that its timestamp is not lower
        // than its parent's.
        if let Some(previous_block) = self.blocks.last() {
            if previous_block.timestamp > block.timestamp {
                return Err(Error::InconsistentTimestamp {
                    parent: previous_block.timestamp,
                    child: block.timestamp,
                });
            }
        }

        self.blocks.push(block);

        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    fn get_block(i: u64, interval_secs: u64) -> Eth1Block {
        Eth1Block {
            hash: Hash256::from_low_u64_be(i),
            timestamp: i * interval_secs,
            number: i,
            deposit_root: Some(Hash256::from_low_u64_be(i << 32)),
            deposit_count: Some(i),
        }
    }

    fn get_blocks(n: usize, interval_secs: u64) -> Vec<Eth1Block> {
        (0..n as u64)
            .map(|i| get_block(i, interval_secs))
            .collect()
    }

    fn insert(cache: &mut BlockCache, s: Eth1Block) -> Result<(), Error> {
        cache.insert_root_or_child(s)
    }

    #[test]
    fn truncate() {
        let n = 16;
        let blocks = get_blocks(n, 10);

        let mut cache = BlockCache::default();

        for block in blocks {
            insert(&mut cache, block.clone()).expect("should add consecutive blocks");
        }

        for len in vec![0, 1, 2, 3, 4, 8, 15, 16] {
            let mut cache = cache.clone();

            cache.truncate(len);

            assert_eq!(
                cache.blocks.len(),
                len,
                "should truncate to length: {}",
                len
            );
        }

        let mut cache_2 = cache.clone();
        cache_2.truncate(17);
        assert_eq!(
            cache_2.blocks.len(),
            n,
            "truncate to larger than n should be a no-op"
        );
    }

    #[test]
    fn inserts() {
        let n = 16;
        let blocks = get_blocks(n, 10);

        let mut cache = BlockCache::default();

        for block in blocks {
            insert(&mut cache, block.clone()).expect("should add consecutive blocks");
        }

        // No error for re-adding a block identical to one that exists.
        assert!(insert(&mut cache, get_block(n as u64 - 1, 10)).is_ok());

        // Error for re-adding a block that is different to the one that exists.
        assert!(insert(&mut cache, get_block(n as u64 - 1, 11)).is_err());

        // Error for adding non-consecutive blocks.
        assert!(insert(&mut cache, get_block(n as u64 + 1, 10)).is_err());
        assert!(insert(&mut cache, get_block(n as u64 + 2, 10)).is_err());

        // Error for adding timestamp prior to previous.
        assert!(insert(&mut cache, get_block(n as u64, 1)).is_err());
        // Double check to make sure previous test was only affected by timestamp.
        assert!(insert(&mut cache, get_block(n as u64, 10)).is_ok());
    }

    #[test]
    fn duplicate_timestamp() {
        let mut blocks = get_blocks(7, 10);

        blocks[0].timestamp = 0;
        blocks[1].timestamp = 10;
        blocks[2].timestamp = 10;
        blocks[3].timestamp = 20;
        blocks[4].timestamp = 30;
        blocks[5].timestamp = 40;
        blocks[6].timestamp = 40;

        let mut cache = BlockCache::default();

        for block in &blocks {
            insert(&mut cache, block.clone())
                .expect("should add consecutive blocks with duplicate timestamps");
        }

        assert_eq!(cache.blocks, blocks, "should have added all blocks");
    }
}
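A minimal usage sketch of the insertion and truncation rules above (editor's illustration, not part of the diff; it assumes the `BlockCache` API exactly as defined in this file and the re-exports from the crate root):

use eth1::{BlockCache, Eth1Block};
use types::Hash256;

fn demo() {
    let mut cache = BlockCache::default();

    // Blocks must be inserted consecutively, with non-decreasing timestamps.
    for number in 0..4u64 {
        cache
            .insert_root_or_child(Eth1Block {
                hash: Hash256::from_low_u64_be(number),
                timestamp: number * 14,
                number,
                deposit_root: None,
                deposit_count: None,
            })
            .expect("consecutive blocks should insert");
    }

    // Truncation keeps the *latest* blocks by number.
    cache.truncate(2);
    assert_eq!(cache.len(), 2);
    assert_eq!(cache.highest_block_number(), Some(3));
}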
beacon_node/eth1/src/deposit_cache.rs | 371 (new file)
@@ -0,0 +1,371 @@
use crate::DepositLog;
use eth2_hashing::hash;
use std::ops::Range;
use tree_hash::TreeHash;
use types::{Deposit, Hash256};

#[derive(Debug, PartialEq, Clone)]
pub enum Error {
    /// A deposit log was added when a prior deposit was not already in the cache.
    ///
    /// Logs have to be added with monotonically-increasing block numbers.
    NonConsecutive { log_index: u64, expected: usize },
    /// The eth1 event log data was unable to be parsed.
    LogParseError(String),
    /// There are insufficient deposits in the cache to fulfil the request.
    InsufficientDeposits {
        known_deposits: usize,
        requested: u64,
    },
    /// A log with the given index is already present in the cache and it does not match the one
    /// provided.
    DuplicateDistinctLog(u64),
    /// The deposit count must always be large enough to account for the requested deposit range.
    ///
    /// E.g., you cannot request deposit 10 when the deposit count is 9.
    DepositCountInvalid { deposit_count: u64, range_end: u64 },
    /// An unexpected condition was encountered.
    InternalError(String),
}

/// Emulates the eth1 deposit contract merkle tree.
pub struct DepositDataTree {
    tree: merkle_proof::MerkleTree,
    mix_in_length: usize,
    depth: usize,
}

impl DepositDataTree {
    /// Create a new Merkle tree from a list of leaves (`DepositData::tree_hash_root`) and a fixed depth.
    pub fn create(leaves: &[Hash256], mix_in_length: usize, depth: usize) -> Self {
        Self {
            tree: merkle_proof::MerkleTree::create(leaves, depth),
            mix_in_length,
            depth,
        }
    }

    /// Returns 32 bytes representing the "mix in length" for the merkle root of this tree.
    fn length_bytes(&self) -> Vec<u8> {
        int_to_bytes32(self.mix_in_length)
    }

    /// Retrieve the root hash of this Merkle tree with the length mixed in.
    pub fn root(&self) -> Hash256 {
        let mut preimage = [0; 64];
        preimage[0..32].copy_from_slice(&self.tree.hash()[..]);
        preimage[32..64].copy_from_slice(&self.length_bytes());
        Hash256::from_slice(&hash(&preimage))
    }

    /// Return the leaf at `index` and a Merkle proof of its inclusion.
    ///
    /// The Merkle proof is in "bottom-up" order, starting with a leaf node
    /// and moving up the tree. Its length will be exactly equal to `depth + 1`.
    pub fn generate_proof(&self, index: usize) -> (Hash256, Vec<Hash256>) {
        let (root, mut proof) = self.tree.generate_proof(index, self.depth);
        proof.push(Hash256::from_slice(&self.length_bytes()));
        (root, proof)
    }
}

/// Mirrors the merkle tree of deposits in the eth1 deposit contract.
///
/// Provides `Deposit` objects with merkle proofs included.
#[derive(Default)]
pub struct DepositCache {
    logs: Vec<DepositLog>,
    roots: Vec<Hash256>,
}

impl DepositCache {
    /// Returns the number of deposits available in the cache.
    pub fn len(&self) -> usize {
        self.logs.len()
    }

    /// True if the cache does not store any logs.
    pub fn is_empty(&self) -> bool {
        self.logs.is_empty()
    }

    /// Returns the block number for the most recent deposit in the cache.
    pub fn latest_block_number(&self) -> Option<u64> {
        self.logs.last().map(|log| log.block_number)
    }

    /// Returns an iterator over all the logs in `self`.
    pub fn iter(&self) -> impl Iterator<Item = &DepositLog> {
        self.logs.iter()
    }

    /// Returns the i'th deposit log.
    pub fn get(&self, i: usize) -> Option<&DepositLog> {
        self.logs.get(i)
    }

    /// Adds `log` to self.
    ///
    /// This function enforces that `logs` are imported one-by-one with no gaps between
    /// `log.index`, starting at `log.index == 0`.
    ///
    /// ## Errors
    ///
    /// - If a log with index `log.index - 1` is not already present in `self` (ignored when empty).
    /// - If a log with `log.index` is already known, but the given `log` is distinct from it.
    pub fn insert_log(&mut self, log: DepositLog) -> Result<(), Error> {
        if log.index == self.logs.len() as u64 {
            self.roots
                .push(Hash256::from_slice(&log.deposit_data.tree_hash_root()));
            self.logs.push(log);

            Ok(())
        } else if log.index < self.logs.len() as u64 {
            if self.logs[log.index as usize] == log {
                Ok(())
            } else {
                Err(Error::DuplicateDistinctLog(log.index))
            }
        } else {
            Err(Error::NonConsecutive {
                log_index: log.index,
                expected: self.logs.len(),
            })
        }
    }

    /// Returns a list of `Deposit` objects, within the given deposit index `range`.
    ///
    /// The `deposit_count` is used to generate the proofs for the `Deposits`. For example, if we
    /// have 100 proofs, but the eth2 chain only acknowledges 50 of them, we must produce our
    /// proofs with respect to a tree size of 50.
    ///
    /// ## Errors
    ///
    /// - If `deposit_count` is less than `range.end`.
    /// - If there are insufficient deposits in the cache to generate the proofs.
    pub fn get_deposits(
        &self,
        range: Range<u64>,
        deposit_count: u64,
        tree_depth: usize,
    ) -> Result<(Hash256, Vec<Deposit>), Error> {
        if deposit_count < range.end {
            // It's invalid to ask for more deposits than should exist.
            Err(Error::DepositCountInvalid {
                deposit_count,
                range_end: range.end,
            })
        } else if range.end > self.logs.len() as u64 {
            // The range of requested deposits exceeds the deposits stored locally.
            Err(Error::InsufficientDeposits {
                requested: range.end,
                known_deposits: self.logs.len(),
            })
        } else if deposit_count > self.roots.len() as u64 {
            // There are not `deposit_count` known deposit roots, so we can't build the merkle tree
            // to prove into.
            Err(Error::InsufficientDeposits {
                requested: deposit_count,
                known_deposits: self.logs.len(),
            })
        } else {
            let roots = self
                .roots
                .get(0..deposit_count as usize)
                .ok_or_else(|| Error::InternalError("Unable to get known root".into()))?;

            // Note: there is likely a more optimal solution than recreating the `DepositDataTree`
            // each time this function is called.
            //
            // Perhaps a base merkle tree could be maintained that contains all deposits up to the
            // last finalized eth1 deposit count. Then, that tree could be cloned and extended for
            // each of these calls.
            let tree = DepositDataTree::create(roots, deposit_count as usize, tree_depth);

            let deposits = self
                .logs
                .get(range.start as usize..range.end as usize)
                .ok_or_else(|| Error::InternalError("Unable to get known log".into()))?
                .iter()
                .map(|deposit_log| {
                    let (_leaf, proof) = tree.generate_proof(deposit_log.index as usize);

                    Deposit {
                        proof: proof.into(),
                        data: deposit_log.deposit_data.clone(),
                    }
                })
                .collect();

            Ok((tree.root(), deposits))
        }
    }
}

/// Returns `int` as little-endian bytes with a length of 32.
fn int_to_bytes32(int: usize) -> Vec<u8> {
    let mut vec = int.to_le_bytes().to_vec();
    vec.resize(32, 0);
    vec
}

#[cfg(test)]
pub mod tests {
    use super::*;
    use crate::deposit_log::tests::EXAMPLE_LOG;
    use crate::http::Log;

    pub const TREE_DEPTH: usize = 32;

    fn example_log() -> DepositLog {
        let log = Log {
            block_number: 42,
            data: EXAMPLE_LOG.to_vec(),
        };
        DepositLog::from_log(&log).expect("should decode log")
    }

    #[test]
    fn insert_log_valid() {
        let mut tree = DepositCache::default();

        for i in 0..16 {
            let mut log = example_log();
            log.index = i;
            tree.insert_log(log).expect("should add consecutive logs")
        }
    }

    #[test]
    fn insert_log_invalid() {
        let mut tree = DepositCache::default();

        for i in 0..4 {
            let mut log = example_log();
            log.index = i;
            tree.insert_log(log).expect("should add consecutive logs")
        }

        // Add duplicate, when given is the same as the one known.
        let mut log = example_log();
        log.index = 3;
        assert!(tree.insert_log(log).is_ok());

        // Add duplicate, when given is different to the one known.
        let mut log = example_log();
        log.index = 3;
        log.block_number = 99;
        assert!(tree.insert_log(log).is_err());

        // Skip inserting a log.
        let mut log = example_log();
        log.index = 5;
        assert!(tree.insert_log(log).is_err());
    }

    #[test]
    fn get_deposit_valid() {
        let n = 1_024;
        let mut tree = DepositCache::default();

        for i in 0..n {
            let mut log = example_log();
            log.index = i;
            log.block_number = i;
            log.deposit_data.withdrawal_credentials = Hash256::from_low_u64_be(i);
            tree.insert_log(log).expect("should add consecutive logs")
        }

        // Get 0 deposits, with max deposit count.
        let (_, deposits) = tree
            .get_deposits(0..0, n, TREE_DEPTH)
            .expect("should get the full tree");
        assert_eq!(deposits.len(), 0, "should return no deposits");

        // Get 0 deposits, with 0 deposit count.
        let (_, deposits) = tree
            .get_deposits(0..0, 0, TREE_DEPTH)
            .expect("should get the full tree");
        assert_eq!(deposits.len(), 0, "should return no deposits");

        // Get 0 deposits, with 0 deposit count, tree depth 0.
        let (_, deposits) = tree
            .get_deposits(0..0, 0, 0)
            .expect("should get the full tree");
        assert_eq!(deposits.len(), 0, "should return no deposits");

        // Get all deposits, with max deposit count.
        let (full_root, deposits) = tree
            .get_deposits(0..n, n, TREE_DEPTH)
            .expect("should get the full tree");
        assert_eq!(deposits.len(), n as usize, "should return all deposits");

        // Get 4 deposits, with max deposit count.
        let (root, deposits) = tree
            .get_deposits(0..4, n, TREE_DEPTH)
            .expect("should get the four from the full tree");
        assert_eq!(deposits.len(), 4, "should get 4 deposits from full tree");
        assert_eq!(
            root, full_root,
            "should still return full root when getting deposit subset"
        );

        // Get half of the deposits, with half deposit count.
        let (half_root, deposits) = tree
            .get_deposits(0..n / 2, n / 2, TREE_DEPTH)
            .expect("should get the half tree");
        assert_eq!(
            deposits.len(),
            n as usize / 2,
            "should return half deposits"
        );

        // Get 4 deposits, with half deposit count.
        let (root, deposits) = tree
            .get_deposits(0..4, n / 2, TREE_DEPTH)
            .expect("should get the half tree");
        assert_eq!(deposits.len(), 4, "should get 4 deposits from half tree");
        assert_eq!(
            root, half_root,
            "should still return half root when getting deposit subset"
        );
        assert_ne!(
            full_root, half_root,
            "should get different root when pinning deposit count"
        );
    }

    #[test]
    fn get_deposit_invalid() {
        let n = 16;
        let mut tree = DepositCache::default();

        for i in 0..n {
            let mut log = example_log();
            log.index = i;
            log.block_number = i;
            log.deposit_data.withdrawal_credentials = Hash256::from_low_u64_be(i);
            tree.insert_log(log).expect("should add consecutive logs")
        }

        // Range too high.
        assert!(tree.get_deposits(0..n + 1, n, TREE_DEPTH).is_err());

        // Count too high.
        assert!(tree.get_deposits(0..n, n + 1, TREE_DEPTH).is_err());

        // Range higher than count.
        assert!(tree.get_deposits(0..4, 2, TREE_DEPTH).is_err());
    }
}
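The `root` computed above follows SSZ's length mix-in: the final root is `sha256(unmixed_merkle_root || int_to_bytes32(length))`. A hypothetical test sketching that relationship (editor's illustration, not part of the diff; it assumes it sits inside the `tests` module above so it can reach the crate-private `int_to_bytes32`):

#[test]
fn root_mixes_in_length() {
    let leaves: Vec<Hash256> = (0..4u64).map(Hash256::from_low_u64_be).collect();
    let tree = DepositDataTree::create(&leaves, leaves.len(), TREE_DEPTH);

    // Root of the plain merkle tree, before the length is mixed in.
    let unmixed = merkle_proof::MerkleTree::create(&leaves, TREE_DEPTH).hash();

    // Manually rebuild the 64-byte preimage used by `DepositDataTree::root`.
    let mut preimage = [0u8; 64];
    preimage[0..32].copy_from_slice(&unmixed[..]);
    preimage[32..64].copy_from_slice(&int_to_bytes32(leaves.len()));

    assert_eq!(tree.root(), Hash256::from_slice(&hash(&preimage)));
}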
beacon_node/eth1/src/deposit_log.rs | 107 (new file)
@@ -0,0 +1,107 @@
use super::http::Log;
use ssz::Decode;
use types::{DepositData, Hash256, PublicKeyBytes, SignatureBytes};

/// The following constants define the layout of bytes in the deposit contract `DepositEvent`. The
/// event bytes are formatted according to the Ethereum ABI.
const PUBKEY_START: usize = 192;
const PUBKEY_LEN: usize = 48;
const CREDS_START: usize = PUBKEY_START + 64 + 32;
const CREDS_LEN: usize = 32;
const AMOUNT_START: usize = CREDS_START + 32 + 32;
const AMOUNT_LEN: usize = 8;
const SIG_START: usize = AMOUNT_START + 32 + 32;
const SIG_LEN: usize = 96;
const INDEX_START: usize = SIG_START + 96 + 32;
const INDEX_LEN: usize = 8;

/// A fully parsed eth1 deposit contract log.
#[derive(Debug, PartialEq, Clone)]
pub struct DepositLog {
    pub deposit_data: DepositData,
    /// The block number of the log that included this `DepositData`.
    pub block_number: u64,
    /// The index included with the deposit log.
    pub index: u64,
}

impl DepositLog {
    /// Attempts to parse a raw `Log` from the deposit contract into a `DepositLog`.
    pub fn from_log(log: &Log) -> Result<Self, String> {
        let bytes = &log.data;

        let pubkey = bytes
            .get(PUBKEY_START..PUBKEY_START + PUBKEY_LEN)
            .ok_or_else(|| "Insufficient bytes for pubkey".to_string())?;
        let withdrawal_credentials = bytes
            .get(CREDS_START..CREDS_START + CREDS_LEN)
            .ok_or_else(|| "Insufficient bytes for withdrawal credential".to_string())?;
        let amount = bytes
            .get(AMOUNT_START..AMOUNT_START + AMOUNT_LEN)
            .ok_or_else(|| "Insufficient bytes for amount".to_string())?;
        let signature = bytes
            .get(SIG_START..SIG_START + SIG_LEN)
            .ok_or_else(|| "Insufficient bytes for signature".to_string())?;
        let index = bytes
            .get(INDEX_START..INDEX_START + INDEX_LEN)
            .ok_or_else(|| "Insufficient bytes for index".to_string())?;

        let deposit_data = DepositData {
            pubkey: PublicKeyBytes::from_ssz_bytes(pubkey)
                .map_err(|e| format!("Invalid pubkey ssz: {:?}", e))?,
            withdrawal_credentials: Hash256::from_ssz_bytes(withdrawal_credentials)
                .map_err(|e| format!("Invalid withdrawal_credentials ssz: {:?}", e))?,
            amount: u64::from_ssz_bytes(amount)
                .map_err(|e| format!("Invalid amount ssz: {:?}", e))?,
            signature: SignatureBytes::from_ssz_bytes(signature)
                .map_err(|e| format!("Invalid signature ssz: {:?}", e))?,
        };

        Ok(DepositLog {
            deposit_data,
            block_number: log.block_number,
            index: u64::from_ssz_bytes(index).map_err(|e| format!("Invalid index ssz: {:?}", e))?,
        })
    }
}

#[cfg(test)]
pub mod tests {
    use super::*;
    use crate::http::Log;

    /// The data from a deposit event, using the v0.8.3 version of the deposit contract.
    pub const EXAMPLE_LOG: &[u8] = &[
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 167, 108, 6, 69, 88, 17, 3, 51, 6, 4, 158, 232, 82,
        248, 218, 2, 71, 219, 55, 102, 86, 125, 136, 203, 36, 77, 64, 213, 43, 52, 175, 154, 239,
        50, 142, 52, 201, 77, 54, 239, 0, 229, 22, 46, 139, 120, 62, 240, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 64, 89, 115, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 96, 140, 74, 175, 158, 209, 20, 206,
        30, 63, 215, 238, 113, 60, 132, 216, 211, 100, 186, 202, 71, 34, 200, 160, 225, 212, 213,
        119, 88, 51, 80, 101, 74, 2, 45, 78, 153, 12, 192, 44, 51, 77, 40, 10, 72, 246, 34, 193,
        187, 22, 95, 4, 211, 245, 224, 13, 162, 21, 163, 54, 225, 22, 124, 3, 56, 14, 81, 122, 189,
        149, 250, 251, 159, 22, 77, 94, 157, 197, 196, 253, 110, 201, 88, 193, 246, 136, 226, 221,
        18, 113, 232, 105, 100, 114, 103, 237, 189, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    ];

    #[test]
    fn can_parse_example_log() {
        let log = Log {
            block_number: 42,
            data: EXAMPLE_LOG.to_vec(),
        };
        DepositLog::from_log(&log).expect("should decode log");
    }
}
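To see where the offset constants above come from: `DepositEvent(bytes,bytes,bytes,bytes,bytes)` has five dynamic arguments, so the ABI-encoded data begins with five 32-byte offset words, and each field is a 32-byte length word followed by its value padded to a multiple of 32 bytes. A hypothetical test restating that arithmetic (editor's sketch; assumed to live in the `tests` module above):

#[test]
fn abi_offsets_add_up() {
    // Five 32-byte offset words for the five dynamic `bytes` arguments.
    let head = 5 * 32;
    // Pubkey value follows its length word: 160 + 32 = 192.
    assert_eq!(PUBKEY_START, head + 32);
    // Pubkey is 48 bytes, padded to 64; then the creds length word.
    assert_eq!(CREDS_START, PUBKEY_START + 64 + 32);
    // Creds occupy one 32-byte word; then the amount length word.
    assert_eq!(AMOUNT_START, CREDS_START + 32 + 32);
    // Amount is 8 bytes, padded to 32; then the signature length word.
    assert_eq!(SIG_START, AMOUNT_START + 32 + 32);
    // Signature is 96 bytes; then the index length word.
    assert_eq!(INDEX_START, SIG_START + 96 + 32);
}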
beacon_node/eth1/src/http.rs | 405 (new file)
@@ -0,0 +1,405 @@
//! Provides a very minimal set of functions for interfacing with the eth2 deposit contract via an
//! eth1 HTTP JSON-RPC endpoint.
//!
//! All remote functions return a future (i.e., are async).
//!
//! Does not use a web3 library, instead it uses `reqwest` (`hyper`) to call the remote endpoint
//! and `serde` to decode the response.
//!
//! ## Note
//!
//! There is no ABI parsing here, all function signatures and topics are hard-coded as constants.

use futures::{Future, Stream};
use libflate::gzip::Decoder;
use reqwest::{header::CONTENT_TYPE, r#async::ClientBuilder, StatusCode};
use serde_json::{json, Value};
use std::io::prelude::*;
use std::ops::Range;
use std::time::Duration;
use types::Hash256;

/// `keccak("DepositEvent(bytes,bytes,bytes,bytes,bytes)")`
pub const DEPOSIT_EVENT_TOPIC: &str =
    "0x649bbc62d0e31342afea4e5cd82d4049e7e1ee912fc0889aa790803be39038c5";
/// `keccak("get_deposit_root()")[0..4]`
pub const DEPOSIT_ROOT_FN_SIGNATURE: &str = "0x863a311b";
/// `keccak("get_deposit_count()")[0..4]`
pub const DEPOSIT_COUNT_FN_SIGNATURE: &str = "0x621fd130";

/// Number of bytes in deposit contract deposit count response.
pub const DEPOSIT_COUNT_RESPONSE_BYTES: usize = 96;
/// Number of bytes in deposit contract deposit root (value only).
pub const DEPOSIT_ROOT_BYTES: usize = 32;

#[derive(Debug, PartialEq, Clone)]
pub struct Block {
    pub hash: Hash256,
    pub timestamp: u64,
    pub number: u64,
}

/// Returns the current block number.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_block_number(
    endpoint: &str,
    timeout: Duration,
) -> impl Future<Item = u64, Error = String> {
    send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout)
        .and_then(|response_body| {
            hex_to_u64_be(
                response_result(&response_body)?
                    .ok_or_else(|| "No result field was returned for block number".to_string())?
                    .as_str()
                    .ok_or_else(|| "Data was not string")?,
            )
        })
        .map_err(|e| format!("Failed to get block number: {}", e))
}

/// Gets a block by block number.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_block(
    endpoint: &str,
    block_number: u64,
    timeout: Duration,
) -> impl Future<Item = Block, Error = String> {
    let params = json!([
        format!("0x{:x}", block_number),
        false // do not return full tx objects.
    ]);

    send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout)
        .and_then(|response_body| {
            let hash = hex_to_bytes(
                response_result(&response_body)?
                    .ok_or_else(|| "No result field was returned for block".to_string())?
                    .get("hash")
                    .ok_or_else(|| "No hash for block")?
                    .as_str()
                    .ok_or_else(|| "Block hash was not string")?,
            )?;
            let hash = if hash.len() == 32 {
                Ok(Hash256::from_slice(&hash))
            } else {
                Err(format!("Block hash was not 32 bytes: {:?}", hash))
            }?;

            let timestamp = hex_to_u64_be(
                response_result(&response_body)?
                    .ok_or_else(|| "No result field was returned for timestamp".to_string())?
                    .get("timestamp")
                    .ok_or_else(|| "No timestamp for block")?
                    .as_str()
                    .ok_or_else(|| "Block timestamp was not string")?,
            )?;

            let number = hex_to_u64_be(
                response_result(&response_body)?
                    .ok_or_else(|| "No result field was returned for number".to_string())?
                    .get("number")
                    .ok_or_else(|| "No number for block")?
                    .as_str()
                    .ok_or_else(|| "Block number was not string")?,
            )?;

            if number <= usize::max_value() as u64 {
                Ok(Block {
                    hash,
                    timestamp,
                    number,
                })
            } else {
                Err(format!("Block number {} is larger than a usize", number))
            }
        })
        .map_err(|e| format!("Failed to get block: {}", e))
}

/// Returns the value of the `get_deposit_count()` call at the given `address` for the given
/// `block_number`.
///
/// Assumes that the `address` has the same ABI as the eth2 deposit contract.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_deposit_count(
    endpoint: &str,
    address: &str,
    block_number: u64,
    timeout: Duration,
) -> impl Future<Item = Option<u64>, Error = String> {
    call(
        endpoint,
        address,
        DEPOSIT_COUNT_FN_SIGNATURE,
        block_number,
        timeout,
    )
    .and_then(|result| result.ok_or_else(|| "No response to deposit count".to_string()))
    .and_then(|bytes| {
        if bytes.is_empty() {
            Ok(None)
        } else if bytes.len() == DEPOSIT_COUNT_RESPONSE_BYTES {
            let mut array = [0; 8];
            array.copy_from_slice(&bytes[32 + 32..32 + 32 + 8]);
            Ok(Some(u64::from_le_bytes(array)))
        } else {
            Err(format!(
                "Deposit count response was not {} bytes: {:?}",
                DEPOSIT_COUNT_RESPONSE_BYTES, bytes
            ))
        }
    })
}

/// Returns the value of the `get_deposit_root()` call at the given `address` for the given
/// `block_number`.
///
/// Assumes that the `address` has the same ABI as the eth2 deposit contract.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_deposit_root(
    endpoint: &str,
    address: &str,
    block_number: u64,
    timeout: Duration,
) -> impl Future<Item = Option<Hash256>, Error = String> {
    call(
        endpoint,
        address,
        DEPOSIT_ROOT_FN_SIGNATURE,
        block_number,
        timeout,
    )
    .and_then(|result| result.ok_or_else(|| "No response to deposit root".to_string()))
    .and_then(|bytes| {
        if bytes.is_empty() {
            Ok(None)
        } else if bytes.len() == DEPOSIT_ROOT_BYTES {
            Ok(Some(Hash256::from_slice(&bytes)))
        } else {
            Err(format!(
                "Deposit root response was not {} bytes: {:?}",
                DEPOSIT_ROOT_BYTES, bytes
            ))
        }
    })
}

/// Performs an instant, no-transaction call to the contract `address` with the given `0x`-prefixed
/// `hex_data`.
///
/// Returns bytes, if any.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
fn call(
    endpoint: &str,
    address: &str,
    hex_data: &str,
    block_number: u64,
    timeout: Duration,
) -> impl Future<Item = Option<Vec<u8>>, Error = String> {
    let params = json! ([
        {
            "to": address,
            "data": hex_data,
        },
        format!("0x{:x}", block_number)
    ]);

    send_rpc_request(endpoint, "eth_call", params, timeout).and_then(|response_body| {
        match response_result(&response_body)? {
            None => Ok(None),
            Some(result) => {
                let hex = result
                    .as_str()
                    .map(|s| s.to_string())
                    .ok_or_else(|| "'result' value was not a string".to_string())?;

                Ok(Some(hex_to_bytes(&hex)?))
            }
        }
    })
}

/// A reduced set of fields from an Eth1 contract log.
#[derive(Debug, PartialEq, Clone)]
pub struct Log {
    pub(crate) block_number: u64,
    pub(crate) data: Vec<u8>,
}

/// Returns logs for the `DEPOSIT_EVENT_TOPIC`, for the given `address` in the given
/// `block_height_range`.
///
/// It's not clear from the Ethereum JSON-RPC docs if this range is inclusive or not.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_deposit_logs_in_range(
    endpoint: &str,
    address: &str,
    block_height_range: Range<u64>,
    timeout: Duration,
) -> impl Future<Item = Vec<Log>, Error = String> {
    let params = json! ([{
        "address": address,
        "topics": [DEPOSIT_EVENT_TOPIC],
        "fromBlock": format!("0x{:x}", block_height_range.start),
        "toBlock": format!("0x{:x}", block_height_range.end),
    }]);

    send_rpc_request(endpoint, "eth_getLogs", params, timeout)
        .and_then(|response_body| {
            response_result(&response_body)?
                .ok_or_else(|| "No result field was returned for deposit logs".to_string())?
                .as_array()
                .cloned()
                .ok_or_else(|| "'result' value was not an array".to_string())?
                .into_iter()
                .map(|value| {
                    let block_number = value
                        .get("blockNumber")
                        .ok_or_else(|| "No block number field in log")?
                        .as_str()
                        .ok_or_else(|| "Block number was not string")?;

                    let data = value
                        .get("data")
                        .ok_or_else(|| "No data field in log")?
                        .as_str()
                        .ok_or_else(|| "Data was not string")?;

                    Ok(Log {
                        block_number: hex_to_u64_be(&block_number)?,
                        data: hex_to_bytes(data)?,
                    })
                })
                .collect::<Result<Vec<Log>, String>>()
        })
        .map_err(|e| format!("Failed to get logs in range: {}", e))
}

/// Sends an RPC request to `endpoint`, using a POST with the given `body`.
///
/// Tries to receive the response and parse the body as a `String`.
pub fn send_rpc_request(
    endpoint: &str,
    method: &str,
    params: Value,
    timeout: Duration,
) -> impl Future<Item = String, Error = String> {
    let body = json! ({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": 1
    })
    .to_string();

    // Note: it is not ideal to create a new client for each request.
    //
    // A better solution would be to create some struct that contains a built client and pass it
    // around (similar to the `web3` crate's `Transport` structs).
    ClientBuilder::new()
        .timeout(timeout)
        .build()
        .expect("The builder should always build a client")
        .post(endpoint)
        .header(CONTENT_TYPE, "application/json")
        .body(body)
        .send()
        .map_err(|e| format!("Request failed: {:?}", e))
        .and_then(|response| {
            if response.status() != StatusCode::OK {
                Err(format!(
                    "Response HTTP status was not 200 OK: {}.",
                    response.status()
                ))
            } else {
                Ok(response)
            }
        })
        .and_then(|response| {
            response
                .headers()
                .get(CONTENT_TYPE)
                .ok_or_else(|| "No content-type header in response".to_string())
                .and_then(|encoding| {
                    encoding
                        .to_str()
                        .map(|s| s.to_string())
                        .map_err(|e| format!("Failed to parse content-type header: {}", e))
                })
                .map(|encoding| (response, encoding))
        })
        .and_then(|(response, encoding)| {
            response
                .into_body()
                .concat2()
                .map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
                .map_err(|e| format!("Failed to receive body: {:?}", e))
                .and_then(move |bytes| match encoding.as_str() {
                    "application/json" => Ok(bytes),
                    "application/json; charset=utf-8" => Ok(bytes),
                    // Note: gzip is not presently working because we always seem to get an empty
                    // response from the server.
                    //
                    // I expect this is some simple-to-solve issue for someone who is familiar with
                    // the eth1 JSON RPC.
                    //
                    // Some public-facing web3 servers use gzip to compress their traffic, it would
                    // be good to support this.
                    "application/x-gzip" => {
                        let mut decoder = Decoder::new(&bytes[..])
                            .map_err(|e| format!("Failed to create gzip decoder: {}", e))?;
                        let mut decompressed = vec![];
                        decoder
                            .read_to_end(&mut decompressed)
                            .map_err(|e| format!("Failed to decompress gzip data: {}", e))?;

                        Ok(decompressed)
                    }
                    other => Err(format!("Unsupported encoding: {}", other)),
                })
                .map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
                .map_err(|e| format!("Failed to receive body: {:?}", e))
        })
}

/// Accepts an entire HTTP body (as a string) and returns the `result` field, as a serde `Value`.
fn response_result(response: &str) -> Result<Option<Value>, String> {
    Ok(serde_json::from_str::<Value>(response)
        .map_err(|e| format!("Failed to parse response: {:?}", e))?
        .get("result")
        .cloned())
}

/// Parses a `0x`-prefixed, **big-endian** hex string as a u64.
///
/// Note: the JSON-RPC encodes integers as big-endian. The deposit contract uses little-endian.
/// Therefore, this function is only useful for numbers encoded by the JSON RPC.
///
/// E.g., `0x01 == 1`
fn hex_to_u64_be(hex: &str) -> Result<u64, String> {
    u64::from_str_radix(strip_prefix(hex)?, 16)
        .map_err(|e| format!("Failed to parse hex as u64: {:?}", e))
}

/// Parses a `0x`-prefixed, big-endian hex string as bytes.
///
/// E.g., `0x0102 == vec![1, 2]`
fn hex_to_bytes(hex: &str) -> Result<Vec<u8>, String> {
    hex::decode(strip_prefix(hex)?).map_err(|e| format!("Failed to parse hex as bytes: {:?}", e))
}

/// Removes the `0x` prefix from some bytes. Returns an error if the prefix is not present.
fn strip_prefix(hex: &str) -> Result<&str, String> {
    if hex.starts_with("0x") {
        Ok(&hex[2..])
    } else {
        Err("Hex string did not start with `0x`".to_string())
    }
}
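These helpers compose with the futures 0.1 stack used elsewhere in the crate. A hypothetical smoke test driving them on a tokio 0.1 runtime (editor's sketch, not part of the diff; assumes a reachable eth1 node on localhost and the tokio 0.1 runtime implied by the `tokio::timer::Delay` import in service.rs):

use std::time::Duration;

fn main() {
    // tokio 0.1 runtime, matching the futures 0.1 combinators above.
    let mut runtime = tokio::runtime::Runtime::new().expect("should build runtime");

    let endpoint = "http://localhost:8545";
    let timeout = Duration::from_secs(15);

    // Resolve the remote head, then fetch that block; this is the same
    // number-then-block flow the updater uses.
    let head = runtime
        .block_on(eth1::http::get_block_number(endpoint, timeout))
        .expect("should fetch remote head");
    let block = runtime
        .block_on(eth1::http::get_block(endpoint, head, timeout))
        .expect("should fetch block");

    println!(
        "head {} hash {:?} timestamp {}",
        block.number, block.hash, block.timestamp
    );
}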
beacon_node/eth1/src/inner.rs | 27 (new file)
@@ -0,0 +1,27 @@
use crate::Config;
use crate::{block_cache::BlockCache, deposit_cache::DepositCache};
use parking_lot::RwLock;

#[derive(Default)]
pub struct DepositUpdater {
    pub cache: DepositCache,
    pub last_processed_block: Option<u64>,
}

#[derive(Default)]
pub struct Inner {
    pub block_cache: RwLock<BlockCache>,
    pub deposit_cache: RwLock<DepositUpdater>,
    pub config: RwLock<Config>,
}

impl Inner {
    /// Prunes the block cache to the length specified by `config.block_cache_truncation`.
    ///
    /// Is a no-op if `config.block_cache_truncation` is `None`.
    pub fn prune_blocks(&self) {
        if let Some(block_cache_truncation) = self.config.read().block_cache_truncation {
            self.block_cache.write().truncate(block_cache_truncation);
        }
    }
}
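A small sketch of the pruning rule (editor's illustration; `Inner` is crate-private, so this would only compile inside the `eth1` crate):

fn prune_demo() {
    let inner = Inner::default();

    // With a truncation target set, pruning caps the block cache length...
    inner.config.write().block_cache_truncation = Some(4);
    inner.prune_blocks();
    assert!(inner.block_cache.read().len() <= 4);

    // ...and with no target it is a no-op.
    inner.config.write().block_cache_truncation = None;
    inner.prune_blocks();
}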
beacon_node/eth1/src/lib.rs | 11 (new file)
@@ -0,0 +1,11 @@
mod block_cache;
mod deposit_cache;
mod deposit_log;
pub mod http;
mod inner;
mod service;

pub use block_cache::{BlockCache, Eth1Block};
pub use deposit_cache::DepositCache;
pub use deposit_log::DepositLog;
pub use service::{BlockCacheUpdateOutcome, Config, DepositCacheUpdateOutcome, Error, Service};
beacon_node/eth1/src/service.rs | 643 (new file, truncated below)
@@ -0,0 +1,643 @@
|
||||
use crate::{
|
||||
block_cache::{BlockCache, Error as BlockCacheError, Eth1Block},
|
||||
deposit_cache::Error as DepositCacheError,
|
||||
http::{
|
||||
get_block, get_block_number, get_deposit_count, get_deposit_logs_in_range, get_deposit_root,
|
||||
},
|
||||
inner::{DepositUpdater, Inner},
|
||||
DepositLog,
|
||||
};
|
||||
use exit_future::Exit;
use futures::{
    future::{loop_fn, Loop},
    stream, Future, Stream,
};
use parking_lot::{RwLock, RwLockReadGuard};
use serde::{Deserialize, Serialize};
use slog::{debug, error, trace, Logger};
use std::ops::{Range, RangeInclusive};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::timer::Delay;

const STANDARD_TIMEOUT_MILLIS: u64 = 15_000;

/// Timeout when doing an `eth_blockNumber` call.
const BLOCK_NUMBER_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an `eth_getBlockByNumber` call.
const GET_BLOCK_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an `eth_call` to read the deposit contract root.
const GET_DEPOSIT_ROOT_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an `eth_call` to read the deposit contract deposit count.
const GET_DEPOSIT_COUNT_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an `eth_getLogs` to read the deposit contract logs.
const GET_DEPOSIT_LOG_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;

#[derive(Debug, PartialEq, Clone)]
pub enum Error {
    /// The remote node is less synced than we expect; it is not useful until it has done more
    /// syncing.
    RemoteNotSynced {
        next_required_block: u64,
        remote_highest_block: u64,
        follow_distance: u64,
    },
    /// Failed to download a block from the eth1 node.
    BlockDownloadFailed(String),
    /// Failed to get the current block number from the eth1 node.
    GetBlockNumberFailed(String),
    /// Failed to read the deposit contract root from the eth1 node.
    GetDepositRootFailed(String),
    /// Failed to read the deposit contract deposit count from the eth1 node.
    GetDepositCountFailed(String),
    /// Failed to read the deposit contract logs from the eth1 node.
    GetDepositLogsFailed(String),
    /// There was an inconsistency when adding a block to the cache.
    FailedToInsertEth1Block(BlockCacheError),
    /// There was an inconsistency when adding a deposit to the cache.
    FailedToInsertDeposit(DepositCacheError),
    /// A log downloaded from the eth1 contract was not well formed.
    FailedToParseDepositLog {
        block_range: Range<u64>,
        error: String,
    },
    /// There was an unexpected internal error.
    Internal(String),
}

/// The success message for an Eth1Data cache update.
#[derive(Debug, PartialEq, Clone)]
pub enum BlockCacheUpdateOutcome {
    /// The cache was successfully updated.
    Success {
        blocks_imported: usize,
        head_block_number: Option<u64>,
    },
}

/// The success message for an Eth1 deposit cache update.
#[derive(Debug, PartialEq, Clone)]
pub enum DepositCacheUpdateOutcome {
    /// The cache was successfully updated.
    Success { logs_imported: usize },
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
    /// An Eth1 node (e.g., Geth) running an HTTP JSON-RPC endpoint.
    pub endpoint: String,
    /// The address the `BlockCache` and `DepositCache` should assume is the canonical deposit contract.
    pub deposit_contract_address: String,
    /// Defines the first block that the `DepositCache` will start searching for deposit logs.
    ///
    /// Setting this too high can result in missed logs. Setting it too low will result in
    /// unnecessary calls to the Eth1 node's HTTP JSON-RPC.
    pub deposit_contract_deploy_block: u64,
    /// Defines the lowest block number that should be downloaded and added to the `BlockCache`.
    pub lowest_cached_block_number: u64,
    /// Defines how far behind the Eth1 node's head we should follow.
    ///
    /// Note: this should be less than or equal to the specification's `ETH1_FOLLOW_DISTANCE`.
    pub follow_distance: u64,
    /// Defines the number of blocks that should be retained each time the `BlockCache` calls
    /// truncate on itself.
    pub block_cache_truncation: Option<usize>,
    /// The interval between updates when using the `auto_update` function.
    pub auto_update_interval_millis: u64,
    /// The span of blocks we should query for logs, per request.
    pub blocks_per_log_query: usize,
    /// The maximum number of log requests per update.
    pub max_log_requests_per_update: Option<usize>,
    /// The maximum number of blocks that will be downloaded per update.
    pub max_blocks_per_update: Option<usize>,
}

impl Default for Config {
    fn default() -> Self {
        Self {
            endpoint: "http://localhost:8545".into(),
            deposit_contract_address: "0x0000000000000000000000000000000000000000".into(),
            deposit_contract_deploy_block: 0,
            lowest_cached_block_number: 0,
            follow_distance: 128,
            block_cache_truncation: Some(4_096),
            auto_update_interval_millis: 500,
            blocks_per_log_query: 1_000,
            max_log_requests_per_update: None,
            max_blocks_per_update: None,
        }
    }
}
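
// Illustrative sketch (not part of the original source): a `Config` for a
// local development chain can be built with struct-update syntax, overriding
// only the fields of interest. The values below are arbitrary examples.
//
// let config = Config {
//     endpoint: "http://localhost:8545".into(),
//     follow_distance: 16,
//     ..Config::default()
// };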

/// Provides a set of Eth1 caches and async functions to update them.
///
/// Stores the following caches:
///
/// - Deposit cache: stores all deposit logs from the deposit contract.
/// - Block cache: stores some number of eth1 blocks.
#[derive(Clone)]
pub struct Service {
    inner: Arc<Inner>,
    pub log: Logger,
}

impl Service {
    /// Creates a new service. Does not attempt to connect to the eth1 node.
    pub fn new(config: Config, log: Logger) -> Self {
        Self {
            inner: Arc::new(Inner {
                config: RwLock::new(config),
                ..Inner::default()
            }),
            log,
        }
    }

    /// Provides access to the block cache.
    pub fn blocks(&self) -> &RwLock<BlockCache> {
        &self.inner.block_cache
    }

    /// Provides access to the deposit cache.
    pub fn deposits(&self) -> &RwLock<DepositUpdater> {
        &self.inner.deposit_cache
    }

    /// Returns the number of currently cached blocks.
    pub fn block_cache_len(&self) -> usize {
        self.blocks().read().len()
    }

    /// Returns the number of deposits available in the deposit cache.
    pub fn deposit_cache_len(&self) -> usize {
        self.deposits().read().cache.len()
    }

    /// Read the service's configuration.
    pub fn config(&self) -> RwLockReadGuard<Config> {
        self.inner.config.read()
    }

    /// Updates the configuration in `self` to be `new_config`.
    ///
    /// Will truncate the block cache if the new config specifies truncation.
    pub fn update_config(&self, new_config: Config) -> Result<(), String> {
        let mut old_config = self.inner.config.write();

        if new_config.deposit_contract_deploy_block != old_config.deposit_contract_deploy_block {
            // This may be possible, I just haven't looked into the details to ensure it's safe.
            Err("Updating deposit_contract_deploy_block is not supported".to_string())
        } else {
            *old_config = new_config;

            // Drop the write lock before calling `prune_blocks`, preventing a deadlock.
            drop(old_config);

            self.inner.prune_blocks();

            Ok(())
        }
    }

    /// Set the lowest block that the block cache will store.
    ///
    /// Note: this block may not always be present if truncating is enabled.
    pub fn set_lowest_cached_block(&self, block_number: u64) {
        self.inner.config.write().lowest_cached_block_number = block_number;
    }

    /// Update the deposit and block caches, returning an error if either fails.
    ///
    /// ## Returns
    ///
    /// - Ok(_) if the update was successful (the cache may or may not have been modified).
    /// - Err(_) if there is an error.
    ///
    /// Emits logs for debugging and errors.
    pub fn update(
        &self,
    ) -> impl Future<Item = (DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), Error = String>
    {
        let log_a = self.log.clone();
        let log_b = self.log.clone();

        let deposit_future = self
            .update_deposit_cache()
            .map_err(|e| format!("Failed to update eth1 deposit cache: {:?}", e))
            .then(move |result| {
                match &result {
                    Ok(DepositCacheUpdateOutcome::Success { logs_imported }) => trace!(
                        log_a,
                        "Updated eth1 deposit cache";
                        "logs_imported" => logs_imported,
                    ),
                    Err(e) => error!(
                        log_a,
                        "Failed to update eth1 deposit cache";
                        "error" => e
                    ),
                };

                result
            });

        let block_future = self
            .update_block_cache()
            .map_err(|e| format!("Failed to update eth1 block cache: {:?}", e))
            .then(move |result| {
                match &result {
                    Ok(BlockCacheUpdateOutcome::Success {
                        blocks_imported,
                        head_block_number,
                    }) => trace!(
                        log_b,
                        "Updated eth1 block cache";
                        "blocks_imported" => blocks_imported,
                        "head_block" => head_block_number,
                    ),
                    Err(e) => error!(
                        log_b,
                        "Failed to update eth1 block cache";
                        "error" => e
                    ),
                };

                result
            });

        deposit_future.join(block_future)
    }
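
    // Illustrative sketch (an assumption, not part of the original source):
    // driving a single `update` to completion on a tokio 0.1 runtime, where
    // `service` is a hypothetical `Service` built via `Service::new`.
    //
    // let mut runtime = tokio::runtime::Runtime::new().expect("should start runtime");
    // let (deposit_outcome, block_outcome) = runtime
    //     .block_on(service.update())
    //     .expect("should update both caches");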

    /// A looping future that updates the caches, then waits
    /// `config.auto_update_interval_millis` before updating them again.
    ///
    /// ## Returns
    ///
    /// A future that resolves with `Ok(())` once the `exit` future resolves. Update and timer
    /// failures are logged but do not break the loop.
    ///
    /// Emits logs for debugging and errors.
    pub fn auto_update(&self, exit: Exit) -> impl Future<Item = (), Error = ()> {
        let service = self.clone();
        let log = self.log.clone();
        let update_interval = Duration::from_millis(self.config().auto_update_interval_millis);

        loop_fn((), move |()| {
            let exit = exit.clone();
            let service = service.clone();
            let log_a = log.clone();
            let log_b = log.clone();

            service
                .update()
                .then(move |update_result| {
                    match update_result {
                        Err(e) => error!(
                            log_a,
                            "Failed to update eth1 caches";
                            "retry_millis" => update_interval.as_millis(),
                            "error" => e,
                        ),
                        Ok((deposit, block)) => debug!(
                            log_a,
                            "Updated eth1 caches";
                            "retry_millis" => update_interval.as_millis(),
                            "blocks" => format!("{:?}", block),
                            "deposits" => format!("{:?}", deposit),
                        ),
                    };

                    // Do not break the loop if there is an update failure.
                    Ok(())
                })
                .and_then(move |_| Delay::new(Instant::now() + update_interval))
                .then(move |timer_result| {
                    if let Err(e) = timer_result {
                        error!(
                            log_b,
                            "Failed to trigger eth1 cache update delay";
                            "error" => format!("{:?}", e),
                        );
                    }
                    // Do not break the loop if there is a timer failure.
                    Ok(())
                })
                .map(move |_| {
                    if exit.is_live() {
                        Loop::Continue(())
                    } else {
                        Loop::Break(())
                    }
                })
        })
    }
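
    // Illustrative sketch (an assumption): running `auto_update` as a
    // background task. `exit_future::signal()` yields a `(Signal, Exit)` pair;
    // firing the signal causes the loop to break on its next iteration.
    //
    // let (signal, exit) = exit_future::signal();
    // runtime.spawn(service.auto_update(exit));
    // // ... later, stop the updater:
    // signal.fire();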

    /// Contacts the remote eth1 node and attempts to import deposit logs up to the configured
    /// follow-distance block.
    ///
    /// Will process no more than `blocks_per_log_query * max_log_requests_per_update` blocks in
    /// a single update.
    ///
    /// ## Resolves with
    ///
    /// - Ok(_) if the update was successful (the cache may or may not have been modified).
    /// - Err(_) if there is an error.
    ///
    /// Emits logs for debugging and errors.
    pub fn update_deposit_cache(
        &self,
    ) -> impl Future<Item = DepositCacheUpdateOutcome, Error = Error> {
        let service_1 = self.clone();
        let service_2 = self.clone();
        let blocks_per_log_query = self.config().blocks_per_log_query;
        let max_log_requests_per_update = self
            .config()
            .max_log_requests_per_update
            .unwrap_or_else(usize::max_value);

        let next_required_block = self
            .deposits()
            .read()
            .last_processed_block
            .map(|n| n + 1)
            .unwrap_or_else(|| self.config().deposit_contract_deploy_block);

        get_new_block_numbers(
            &self.config().endpoint,
            next_required_block,
            self.config().follow_distance,
        )
        .map(move |range| {
            range
                .map(|range| {
                    range
                        .collect::<Vec<u64>>()
                        .chunks(blocks_per_log_query)
                        .take(max_log_requests_per_update)
                        .map(|vec| {
                            let first = vec.first().cloned().unwrap_or(0);
                            let last = vec.last().map(|n| n + 1).unwrap_or(0);
                            first..last
                        })
                        .collect::<Vec<Range<u64>>>()
                })
                .unwrap_or_else(|| vec![])
        })
        .and_then(move |block_number_chunks| {
            stream::unfold(
                block_number_chunks.into_iter(),
                move |mut chunks| match chunks.next() {
                    Some(chunk) => {
                        let chunk_1 = chunk.clone();
                        Some(
                            get_deposit_logs_in_range(
                                &service_1.config().endpoint,
                                &service_1.config().deposit_contract_address,
                                chunk,
                                Duration::from_millis(GET_DEPOSIT_LOG_TIMEOUT_MILLIS),
                            )
                            .map_err(Error::GetDepositLogsFailed)
                            .map(|logs| (chunk_1, logs))
                            .map(|logs| (logs, chunks)),
                        )
                    }
                    None => None,
                },
            )
            .fold(0, move |mut sum, (block_range, log_chunk)| {
                let mut cache = service_2.deposits().write();

                log_chunk
                    .into_iter()
                    .map(|raw_log| {
                        DepositLog::from_log(&raw_log).map_err(|error| {
                            Error::FailedToParseDepositLog {
                                block_range: block_range.clone(),
                                error,
                            }
                        })
                    })
                    // Return early if any of the logs cannot be parsed.
                    //
                    // This costs an additional `collect`, however it enforces that no logs are
                    // imported if any one of them cannot be parsed.
                    .collect::<Result<Vec<_>, _>>()?
                    .into_iter()
                    .map(|deposit_log| {
                        cache
                            .cache
                            .insert_log(deposit_log)
                            .map_err(Error::FailedToInsertDeposit)?;

                        sum += 1;

                        Ok(())
                    })
                    // Return early if a deposit is unable to be added to the cache.
                    //
                    // If this error occurs, the cache will no longer be guaranteed to hold either
                    // none or all of the logs for each block (i.e., there may exist _some_ logs
                    // for a block, but not _all_ logs for that block). This scenario can cause
                    // the node to choose an invalid genesis state or propose an invalid block.
                    .collect::<Result<_, _>>()?;

                cache.last_processed_block = Some(block_range.end.saturating_sub(1));

                Ok(sum)
            })
            .map(|logs_imported| DepositCacheUpdateOutcome::Success { logs_imported })
        })
    }
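
    // Worked example of the chunking above: with `blocks_per_log_query = 1_000`
    // and new blocks `0..=2_499`, the block numbers are split into the
    // half-open ranges `0..1000`, `1000..2000` and `2000..2500`, issuing one
    // `eth_getLogs` request per range (capped at `max_log_requests_per_update`
    // requests in total).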

    /// Contacts the remote eth1 node and attempts to import all blocks up to the configured
    /// follow-distance block.
    ///
    /// If configured, prunes the block cache after importing new blocks.
    ///
    /// ## Resolves with
    ///
    /// - Ok(_) if the update was successful (the cache may or may not have been modified).
    /// - Err(_) if there is an error.
    ///
    /// Emits logs for debugging and errors.
    pub fn update_block_cache(&self) -> impl Future<Item = BlockCacheUpdateOutcome, Error = Error> {
        let cache_1 = self.inner.clone();
        let cache_2 = self.inner.clone();
        let cache_3 = self.inner.clone();
        let cache_4 = self.inner.clone();
        let cache_5 = self.inner.clone();

        let block_cache_truncation = self.config().block_cache_truncation;
        let max_blocks_per_update = self
            .config()
            .max_blocks_per_update
            .unwrap_or_else(usize::max_value);

        let next_required_block = cache_1
            .block_cache
            .read()
            .highest_block_number()
            .map(|n| n + 1)
            .unwrap_or_else(|| self.config().lowest_cached_block_number);

        get_new_block_numbers(
            &self.config().endpoint,
            next_required_block,
            self.config().follow_distance,
        )
        // Map the range of required blocks into a Vec.
        //
        // If the required range is larger than the size of the cache, drop the existing cache
        // because it has expired and just download enough blocks to fill the cache.
        .and_then(move |range| {
            range
                .map(|range| {
                    if range.start() > range.end() {
                        // Note: this check is not strictly necessary, however it remains to
                        // safeguard against any regression which may cause an underflow in a
                        // following subtraction operation.
                        Err(Error::Internal("Range was not increasing".into()))
                    } else {
                        let range_size = range.end() - range.start();
                        let max_size = block_cache_truncation
                            .map(|n| n as u64)
                            .unwrap_or_else(u64::max_value);

                        if range_size > max_size {
                            // If the range of required blocks is larger than `max_size`, drop all
                            // existing blocks and download `max_size` count of blocks.
                            let first_block = range.end() - max_size;
                            (*cache_5.block_cache.write()) = BlockCache::default();
                            Ok((first_block..=*range.end()).collect::<Vec<u64>>())
                        } else {
                            Ok(range.collect::<Vec<u64>>())
                        }
                    }
                })
                .unwrap_or_else(|| Ok(vec![]))
        })
        // Download the range of blocks and sequentially import them into the cache.
        .and_then(move |required_block_numbers| {
            let required_block_numbers = required_block_numbers
                .into_iter()
                .take(max_blocks_per_update);

            // Produce a stream from the list of required block numbers and return a future that
            // consumes it.
            stream::unfold(
                required_block_numbers,
                move |mut block_numbers| match block_numbers.next() {
                    Some(block_number) => Some(
                        download_eth1_block(cache_2.clone(), block_number)
                            .map(|v| (v, block_numbers)),
                    ),
                    None => None,
                },
            )
            .fold(0, move |sum, eth1_block| {
                cache_3
                    .block_cache
                    .write()
                    .insert_root_or_child(eth1_block)
                    .map_err(Error::FailedToInsertEth1Block)?;

                Ok(sum + 1)
            })
        })
        .and_then(move |blocks_imported| {
            // Prune the block cache, preventing it from growing too large.
            cache_4.prune_blocks();

            Ok(BlockCacheUpdateOutcome::Success {
                blocks_imported,
                head_block_number: cache_4.block_cache.read().highest_block_number(),
            })
        })
    }
}
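
// Worked example of the cache-expiry logic in `update_block_cache`: with
// `block_cache_truncation = Some(4_096)` and a required range of `0..=10_000`,
// the range size (10_000) exceeds the maximum (4_096), so the existing cache
// is dropped and only blocks `5_904..=10_000` are downloaded.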

/// Determines the range of blocks that need to be downloaded, given the remote's best block and
/// the locally stored best block.
fn get_new_block_numbers<'a>(
    endpoint: &str,
    next_required_block: u64,
    follow_distance: u64,
) -> impl Future<Item = Option<RangeInclusive<u64>>, Error = Error> + 'a {
    get_block_number(endpoint, Duration::from_millis(BLOCK_NUMBER_TIMEOUT_MILLIS))
        .map_err(Error::GetBlockNumberFailed)
        .and_then(move |remote_highest_block| {
            let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);

            if next_required_block <= remote_follow_block {
                Ok(Some(next_required_block..=remote_follow_block))
            } else if next_required_block > remote_highest_block + 1 {
                // If this is the case, the node must have gone "backwards" in terms of its sync
                // (i.e., its head block is lower than it was before).
                //
                // We assume that the `follow_distance` should be sufficient to ensure this never
                // happens, otherwise it is an error.
                Err(Error::RemoteNotSynced {
                    next_required_block,
                    remote_highest_block,
                    follow_distance,
                })
            } else {
                // Return an empty range.
                Ok(None)
            }
        })
}
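
// Worked example: with `remote_highest_block = 1_000` and `follow_distance = 128`,
// the remote follow block is `872`. A `next_required_block` of `800` yields
// `Some(800..=872)`; anything in `873..=1_001` yields `None` (nothing new to
// download yet); anything above `1_001` is a `RemoteNotSynced` error.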

/// Downloads the `(block, deposit_root, deposit_count)` tuple from an eth1 node for the given
/// `block_number`.
///
/// Performs three async calls to an Eth1 HTTP JSON-RPC endpoint.
fn download_eth1_block<'a>(
    cache: Arc<Inner>,
    block_number: u64,
) -> impl Future<Item = Eth1Block, Error = Error> + 'a {
    // Performs an `eth_getBlockByNumber` call to an eth1 node.
    get_block(
        &cache.config.read().endpoint,
        block_number,
        Duration::from_millis(GET_BLOCK_TIMEOUT_MILLIS),
    )
    .map_err(Error::BlockDownloadFailed)
    .join3(
        // Perform 2x `eth_call` via an eth1 node to read the deposit contract root and count.
        get_deposit_root(
            &cache.config.read().endpoint,
            &cache.config.read().deposit_contract_address,
            block_number,
            Duration::from_millis(GET_DEPOSIT_ROOT_TIMEOUT_MILLIS),
        )
        .map_err(Error::GetDepositRootFailed),
        get_deposit_count(
            &cache.config.read().endpoint,
            &cache.config.read().deposit_contract_address,
            block_number,
            Duration::from_millis(GET_DEPOSIT_COUNT_TIMEOUT_MILLIS),
        )
        .map_err(Error::GetDepositCountFailed),
    )
    .map(|(http_block, deposit_root, deposit_count)| Eth1Block {
        hash: http_block.hash,
        number: http_block.number,
        timestamp: http_block.timestamp,
        deposit_root,
        deposit_count,
    })
}

#[cfg(test)]
mod tests {
    use super::*;
    use toml;

    #[test]
    fn serde_serialize() {
        let serialized =
            toml::to_string(&Config::default()).expect("Should serde encode default config");
        toml::from_str::<Config>(&serialized).expect("Should serde decode default config");
    }
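
    // A minimal additional test (added for illustration; not in the original
    // source): `update_config` is expected to reject any change to
    // `deposit_contract_deploy_block`.
    #[test]
    fn update_config_rejects_deploy_block_change() {
        use slog::{o, Discard};

        let log = Logger::root(Discard, o!());
        let service = Service::new(Config::default(), log);

        let mut new_config = Config::default();
        new_config.deposit_contract_deploy_block += 1;

        assert!(
            service.update_config(new_config).is_err(),
            "changing the deploy block should be rejected"
        );
    }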
}