Stable futures (#879)
* Port eth1 lib to use stable futures
* Port eth1_test_rig to stable futures
* Port eth1 tests to stable futures
* Port genesis service to stable futures
* Port genesis tests to stable futures
* Port beacon_chain to stable futures
* Port lcli to stable futures
* Fix eth1_test_rig (#1014)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOs
* Update hashmap hashset to stable futures
* Add panic test to hashset delay
* Port remote_beacon_node to stable futures
* Fix lcli merge conflicts
* Non-RPC stuff compiles
* protocol.rs compiles
* Port websockets, timer and notifier to stable futures (#1035)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOs
* Port remote_beacon_node to stable futures
* Partial eth2-libp2p stable futures upgrade
* Finish first round of fighting RPC types
* Further progress towards porting eth2-libp2p; adds caching to discovery
* Update behaviour
* RPC handler to stable futures
* Update RPC to master libp2p
* Network service additions
* Fix the fallback transport construction (#1102)
* Correct warning
* Remove hashmap delay
* Compiling version of eth2-libp2p
* Update all crates versions
* Fix conversion function and add tests (#1113)
* Port validator_client to stable futures (#1114)
* Add PH & MS slot clock changes
* Account for genesis time
* Add progress on duties refactor
* Add simple is_aggregator bool to val subscription
* Start work on attestation_verification.rs
* Add progress on ObservedAttestations
* Progress with ObservedAttestations
* Fix tests
* Add observed attestations to the beacon chain
* Add attestation observation to processing code
* Add progress on attestation verification
* Add first draft of ObservedAttesters
* Add more tests
* Add observed attesters to beacon chain
* Add observers to attestation processing
* Add more attestation verification
* Create ObservedAggregators map
* Remove commented-out code
* Add observed aggregators into chain
* Add progress
* Finish adding features to attestation verification
* Ensure beacon chain compiles
* Link attn verification into chain
* Integrate new attn verification in chain
* Remove old attestation processing code
* Start trying to fix beacon_chain tests
* Split adding into pools into two functions
* Add aggregation to harness
* Get test harness working again
* Adjust the number of aggregators for test harness
* Fix edge-case in harness
* Integrate new attn processing in network
* Fix compile bug in validator_client
* Update validator API endpoints
* Fix aggregation in test harness
* Fix enum thing
* Fix attestation observation bug
* Patch failing API tests
* Start adding comments to attestation verification
* Remove unused attestation field
* Unify "is block known" logic
* Update comments
* Suppress fork choice errors for network processing
* Add todos
* Tidy
* Add gossip attn tests
* Disallow test harness to produce old attns
* Comment out in-progress tests
* Partially address pruning tests
* Fix failing store test
* Add aggregate tests
* Add comments about which spec conditions we check
* Don't re-aggregate
* Split apart test harness attn production
* Fix compile error in network
* Make progress on commented-out test
* Fix skipping attestation test
* Add fork choice verification tests
* Tidy attn tests, remove dead code
* Remove some accidentally added code
* Fix clippy lint
* Rename test file
* Add block tests, add cheap block proposer check
* Rename block testing file
* Add observed_block_producers
* Tidy
* Switch around block signature verification
* Finish block testing
* Remove gossip from signature tests
* First pass of self review
* Fix deviation in spec
* Update test spec tags
* Start moving over to hashset
* Finish moving observed attesters to hashmap
* Move aggregation pool over to hashmap
* Make fc attn borrow again
* Fix rest_api compile error
* Fix missing comments
* Fix monster test
* Uncomment increasing slots test
* Address remaining comments
* Remove unsafe, use cfg test
* Remove cfg test flag
* Fix dodgy comment
* Revert "Update hashmap hashset to stable futures". This reverts commit d432378a3c.
* Revert "Adds panic test to hashset delay". This reverts commit 281502396f.
* Port attestation_service
* Port duties_service
* Port fork_service
* More ports
* Port block_service
* Minor fixes
* VC compiles
* Update TODOs
* Borrow self where possible
* Ignore aggregates that are already known
* Unify aggregator modulo logic
* Fix typo in logs
* Refactor validator subscription logic
* Avoid reproducing selection proof
* Skip HTTP call if no subscriptions
* Rename DutyAndState -> DutyAndProof
* Tidy logs
* Print root as dbg
* Fix compile errors in tests
* Fix compile error in test
* Re-fix attestation and duties service
* Minor fixes

Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Network crate update to stable futures
* Port account_manager to stable futures (#1121)
* Port account_manager to stable futures
* Run async fns in tokio environment
* Port rest_api crate to stable futures (#1118)
* Port rest_api lib to stable futures
* Reduce tokio features
* Update notifier to stable futures
* Builder update
* Further updates
* Convert self-referential async functions
* Stable futures fixes (#1124)
* Fix eth1 update functions
* Fix genesis and client
* Fix beacon node lib
* Return appropriate runtimes from environment
* Fix test rig
* Refactor eth1 service update
* Upgrade simulator to stable futures
* Lighthouse compiles on stable futures
* Remove println debugging statement
* Update libp2p service, start RPC test upgrade
* Update network crate for new libp2p
* Update tokio::codec to futures_codec (#1128)
* Further work towards RPC corrections
* Correct HTTP timeout and network service select
* Use tokio runtime for libp2p
* Revert "Update tokio::codec to futures_codec (#1128)". This reverts commit e57aea924a.
* Upgrade RPC libp2p tests
* Upgrade secio fallback test
* Upgrade gossipsub examples
* Clean up RPC protocol
* Test fixes (#1133)
* Correct websocket timeout and run on OS thread
* Fix network test
* Clean up PR
* Correct tokio TCP; move attestation service tests
* Upgrade attestation service tests
* Correct network test
* Correct genesis test
* Test corrections
* Log info when block is received
* Modify logs and update attester service events
* Stable futures: fixes to VC, eth1 and account manager (#1142)
* Add local testnet scripts
* Remove whiteblock script
* Rename local testnet script
* Move spawns onto handle
* Fix VC panic
* Initial fix to block production issue
* Tidy block producer fix
* Tidy further
* Add local testnet clean script
* Run cargo fmt
* Tidy duties service
* Tidy fork service
* Tidy ForkService
* Tidy AttestationService
* Tidy notifier
* Ensure await is not suppressed in eth1
* Ensure await is not suppressed in account_manager
* Use .ok() instead of .unwrap_or(())
* RPC decoding test for proto
* Update discv5 and eth2-libp2p deps
* Fix lcli double runtime issue (#1144)
* Handle stream termination and dialing peer errors
* Correct peer_info variant types
* Remove unnecessary warnings
* Handle subnet unsubscription removal and improve logging
* Add logs around ping
* Upgrade discv5 and improve logging
* Handle peer connection status for multiple connections
* Improve network service logging
* Improve logging around peer manager
* Upgrade swarm poll, centralise peer management
* Identify clients on error
* Fix `remove_peer` in sync (#1150)
* remove_peer removes from all chains
* Remove logs
* Fix early return from loop
* Improve logging, fix panic
* Partially correct tests
* Stable futures: VC sync (#1149)
* Improve syncing heuristic
* Add comments
* Use safer method for tolerance
* Fix tests
* Stable futures: fix VC bug, update agg pool, add more metrics (#1151)
* Expose epoch processing summary
* Expose participation metrics to prometheus
* Switch to f64
* Reduce precision
* Change precision
* Expose observed attesters metrics
* Add metrics for agg/unagg attn counts
* Add metrics for gossip rx
* Add metrics for gossip tx
* Add ignored attns to prom
* Add attestation timing
* Add timer for aggregation pool sig agg
* Add write lock timer for agg pool
* Add more metrics to agg pool
* Change map lock code
* Add extra metric to agg pool
* Change lock handling in agg pool
* Change .write() to .read()
* Add another agg pool timer
* Fix for is_aggregator
* Fix pruning bug

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
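The mechanical core of the port, repeated across every crate listed above, is the move from futures 0.1 combinator chains (`impl Future<Item = _, Error = _>`, `.and_then`, `.map_err`) to `async fn`s returning `Result`. A minimal sketch of the before/after shape; the helper names (`fetch_value`, `parse_value`) are hypothetical stand-ins, not Lighthouse code:

```rust
// Old style (futures 0.1): a combinator chain returning `impl Future<Item, Error>`.
//
// fn get_value(endpoint: &str) -> impl Future<Item = u64, Error = String> {
//     fetch_value(endpoint)
//         .and_then(|body| parse_value(&body))
//         .map_err(|e| format!("Failed to get value: {}", e))
// }

// New style (stable futures): an `async fn` returning `Result`, using `?` and `.await`.
async fn get_value(endpoint: &str) -> Result<u64, String> {
    let body = fetch_value(endpoint).await?;
    parse_value(&body).map_err(|e| format!("Failed to get value: {}", e))
}

// Hypothetical helpers so the sketch is self-contained.
async fn fetch_value(_endpoint: &str) -> Result<String, String> {
    Ok("42".to_string())
}

fn parse_value(body: &str) -> Result<u64, String> {
    body.trim().parse::<u64>().map_err(|e| e.to_string())
}

#[tokio::main]
async fn main() {
    assert_eq!(get_value("http://localhost:8545").await, Ok(42));
}
```

The same shape recurs throughout the eth1 diff below: each `send_rpc_request(...).and_then(...)` chain becomes a `let body = send_rpc_request(...).await?;` line.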
@@ -10,8 +10,8 @@
 //!
 //! There is no ABI parsing here, all function signatures and topics are hard-coded as constants.
 
-use futures::{Future, Stream};
-use reqwest::{header::CONTENT_TYPE, r#async::ClientBuilder, StatusCode};
+use futures::future::TryFutureExt;
+use reqwest::{header::CONTENT_TYPE, ClientBuilder, StatusCode};
 use serde_json::{json, Value};
 use std::ops::Range;
 use std::time::Duration;
@@ -40,80 +40,73 @@ pub struct Block {
 /// Returns the current block number.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_block_number(
-    endpoint: &str,
-    timeout: Duration,
-) -> impl Future<Item = u64, Error = String> {
-    send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout)
-        .and_then(|response_body| {
-            hex_to_u64_be(
-                response_result(&response_body)?
-                    .ok_or_else(|| "No result field was returned for block number".to_string())?
-                    .as_str()
-                    .ok_or_else(|| "Data was not string")?,
-            )
-        })
-        .map_err(|e| format!("Failed to get block number: {}", e))
+pub async fn get_block_number(endpoint: &str, timeout: Duration) -> Result<u64, String> {
+    let response_body = send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout).await?;
+    hex_to_u64_be(
+        response_result(&response_body)?
+            .ok_or_else(|| "No result field was returned for block number".to_string())?
+            .as_str()
+            .ok_or_else(|| "Data was not string")?,
+    )
+    .map_err(|e| format!("Failed to get block number: {}", e))
 }
 
 /// Gets a block hash by block number.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_block(
+pub async fn get_block(
     endpoint: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Block, Error = String> {
+) -> Result<Block, String> {
     let params = json!([
         format!("0x{:x}", block_number),
         false // do not return full tx objects.
     ]);
 
-    send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout)
-        .and_then(|response_body| {
-            let hash = hex_to_bytes(
-                response_result(&response_body)?
-                    .ok_or_else(|| "No result field was returned for block".to_string())?
-                    .get("hash")
-                    .ok_or_else(|| "No hash for block")?
-                    .as_str()
-                    .ok_or_else(|| "Block hash was not string")?,
-            )?;
-            let hash = if hash.len() == 32 {
-                Ok(Hash256::from_slice(&hash))
-            } else {
-                Err(format!("Block hash was not 32 bytes: {:?}", hash))
-            }?;
+    let response_body = send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout).await?;
+    let hash = hex_to_bytes(
+        response_result(&response_body)?
+            .ok_or_else(|| "No result field was returned for block".to_string())?
+            .get("hash")
+            .ok_or_else(|| "No hash for block")?
+            .as_str()
+            .ok_or_else(|| "Block hash was not string")?,
+    )?;
+    let hash = if hash.len() == 32 {
+        Ok(Hash256::from_slice(&hash))
+    } else {
+        Err(format!("Block hash was not 32 bytes: {:?}", hash))
+    }?;
 
-            let timestamp = hex_to_u64_be(
-                response_result(&response_body)?
-                    .ok_or_else(|| "No result field was returned for timestamp".to_string())?
-                    .get("timestamp")
-                    .ok_or_else(|| "No timestamp for block")?
-                    .as_str()
-                    .ok_or_else(|| "Block timestamp was not string")?,
-            )?;
+    let timestamp = hex_to_u64_be(
+        response_result(&response_body)?
+            .ok_or_else(|| "No result field was returned for timestamp".to_string())?
+            .get("timestamp")
+            .ok_or_else(|| "No timestamp for block")?
+            .as_str()
+            .ok_or_else(|| "Block timestamp was not string")?,
+    )?;
 
-            let number = hex_to_u64_be(
-                response_result(&response_body)?
-                    .ok_or_else(|| "No result field was returned for number".to_string())?
-                    .get("number")
-                    .ok_or_else(|| "No number for block")?
-                    .as_str()
-                    .ok_or_else(|| "Block number was not string")?,
-            )?;
+    let number = hex_to_u64_be(
+        response_result(&response_body)?
+            .ok_or_else(|| "No result field was returned for number".to_string())?
+            .get("number")
+            .ok_or_else(|| "No number for block")?
+            .as_str()
+            .ok_or_else(|| "Block number was not string")?,
+    )?;
 
-            if number <= usize::max_value() as u64 {
-                Ok(Block {
-                    hash,
-                    timestamp,
-                    number,
-                })
-            } else {
-                Err(format!("Block number {} is larger than a usize", number))
-            }
-        })
-        .map_err(|e| format!("Failed to get block number: {}", e))
+    if number <= usize::max_value() as u64 {
+        Ok(Block {
+            hash,
+            timestamp,
+            number,
+        })
+    } else {
+        Err(format!("Block number {} is larger than a usize", number))
+    }
+    .map_err(|e| format!("Failed to get block number: {}", e))
 }
 
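For callers the change is just as mechanical: the ported helpers are plain awaitable functions rather than futures that must be chained and spawned. A hedged sketch of calling such a function from a tokio runtime, with a stub standing in for the real `get_block_number`:

```rust
use std::time::Duration;

// Stand-in with the same shape as the ported `get_block_number`; a real
// caller would use the function from the eth1 crate's `http` module instead.
async fn get_block_number(_endpoint: &str, _timeout: Duration) -> Result<u64, String> {
    Ok(12_345)
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // Under futures 0.1 this required chaining `.and_then` and spawning onto a
    // runtime; with stable futures the call reads sequentially.
    let n = get_block_number("http://localhost:8545", Duration::from_secs(15)).await?;
    println!("remote head: {}", n);
    Ok(())
}
```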
 /// Returns the value of the `get_deposit_count()` call at the given `address` for the given
@@ -122,20 +115,21 @@ pub fn get_block(
 /// Assumes that the `address` has the same ABI as the eth2 deposit contract.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_deposit_count(
+pub async fn get_deposit_count(
     endpoint: &str,
     address: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Option<u64>, Error = String> {
-    call(
+) -> Result<Option<u64>, String> {
+    let result = call(
         endpoint,
         address,
         DEPOSIT_COUNT_FN_SIGNATURE,
         block_number,
         timeout,
     )
-    .and_then(|result| match result {
+    .await?;
+    match result {
         None => Err("Deposit root response was none".to_string()),
         Some(bytes) => {
             if bytes.is_empty() {
@@ -151,7 +145,7 @@ pub fn get_deposit_count(
                 ))
             }
         }
-    })
+    }
 }
 
 /// Returns the value of the `get_hash_tree_root()` call at the given `block_number`.
@@ -159,20 +153,21 @@ pub fn get_deposit_count(
 /// Assumes that the `address` has the same ABI as the eth2 deposit contract.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_deposit_root(
+pub async fn get_deposit_root(
     endpoint: &str,
     address: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Option<Hash256>, Error = String> {
-    call(
+) -> Result<Option<Hash256>, String> {
+    let result = call(
         endpoint,
         address,
         DEPOSIT_ROOT_FN_SIGNATURE,
         block_number,
         timeout,
     )
-    .and_then(|result| match result {
+    .await?;
+    match result {
         None => Err("Deposit root response was none".to_string()),
         Some(bytes) => {
             if bytes.is_empty() {
@@ -186,7 +181,7 @@ pub fn get_deposit_root(
                 ))
             }
         }
-    })
+    }
 }
 
 /// Performs an instant, no-transaction call to the contract `address` with the given `0x`-prefixed
@@ -195,13 +190,13 @@ pub fn get_deposit_root(
 /// Returns bytes, if any.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-fn call(
+async fn call(
     endpoint: &str,
     address: &str,
     hex_data: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Option<Vec<u8>>, Error = String> {
+) -> Result<Option<Vec<u8>>, String> {
     let params = json!([
         {
             "to": address,
@@ -210,19 +205,18 @@ fn call(
         format!("0x{:x}", block_number)
     ]);
 
-    send_rpc_request(endpoint, "eth_call", params, timeout).and_then(|response_body| {
-        match response_result(&response_body)? {
-            None => Ok(None),
-            Some(result) => {
-                let hex = result
-                    .as_str()
-                    .map(|s| s.to_string())
-                    .ok_or_else(|| "'result' value was not a string".to_string())?;
+    let response_body = send_rpc_request(endpoint, "eth_call", params, timeout).await?;
+    match response_result(&response_body)? {
+        None => Ok(None),
+        Some(result) => {
+            let hex = result
+                .as_str()
+                .map(|s| s.to_string())
+                .ok_or_else(|| "'result' value was not a string".to_string())?;
 
-                Ok(Some(hex_to_bytes(&hex)?))
-            }
+            Ok(Some(hex_to_bytes(&hex)?))
         }
-    })
+    }
 }
 
 /// A reduced set of fields from an Eth1 contract log.
@@ -238,12 +232,12 @@ pub struct Log {
 /// It's not clear from the Ethereum JSON-RPC docs if this range is inclusive or not.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_deposit_logs_in_range(
+pub async fn get_deposit_logs_in_range(
     endpoint: &str,
     address: &str,
     block_height_range: Range<u64>,
     timeout: Duration,
-) -> impl Future<Item = Vec<Log>, Error = String> {
+) -> Result<Vec<Log>, String> {
     let params = json!([{
         "address": address,
         "topics": [DEPOSIT_EVENT_TOPIC],
@@ -251,46 +245,44 @@ pub fn get_deposit_logs_in_range(
         "toBlock": format!("0x{:x}", block_height_range.end),
     }]);
 
-    send_rpc_request(endpoint, "eth_getLogs", params, timeout)
-        .and_then(|response_body| {
-            response_result(&response_body)?
-                .ok_or_else(|| "No result field was returned for deposit logs".to_string())?
-                .as_array()
-                .cloned()
-                .ok_or_else(|| "'result' value was not an array".to_string())?
-                .into_iter()
-                .map(|value| {
-                    let block_number = value
-                        .get("blockNumber")
-                        .ok_or_else(|| "No block number field in log")?
-                        .as_str()
-                        .ok_or_else(|| "Block number was not string")?;
+    let response_body = send_rpc_request(endpoint, "eth_getLogs", params, timeout).await?;
+    response_result(&response_body)?
+        .ok_or_else(|| "No result field was returned for deposit logs".to_string())?
+        .as_array()
+        .cloned()
+        .ok_or_else(|| "'result' value was not an array".to_string())?
+        .into_iter()
+        .map(|value| {
+            let block_number = value
+                .get("blockNumber")
+                .ok_or_else(|| "No block number field in log")?
+                .as_str()
+                .ok_or_else(|| "Block number was not string")?;
 
-                    let data = value
-                        .get("data")
-                        .ok_or_else(|| "No data field in log")?
-                        .as_str()
-                        .ok_or_else(|| "Data was not string")?;
+            let data = value
+                .get("data")
+                .ok_or_else(|| "No data field in log")?
+                .as_str()
+                .ok_or_else(|| "Data was not string")?;
 
-                    Ok(Log {
-                        block_number: hex_to_u64_be(&block_number)?,
-                        data: hex_to_bytes(data)?,
-                    })
-                })
-                .collect::<Result<Vec<Log>, String>>()
-        })
+            Ok(Log {
+                block_number: hex_to_u64_be(&block_number)?,
+                data: hex_to_bytes(data)?,
+            })
+        })
+        .collect::<Result<Vec<Log>, String>>()
         .map_err(|e| format!("Failed to get logs in range: {}", e))
 }
 
 /// Sends an RPC request to `endpoint`, using a POST with the given `body`.
 ///
 /// Tries to receive the response and parse the body as a `String`.
-pub fn send_rpc_request(
+pub async fn send_rpc_request(
     endpoint: &str,
     method: &str,
     params: Value,
     timeout: Duration,
-) -> impl Future<Item = String, Error = String> {
+) -> Result<String, String> {
     let body = json!({
         "jsonrpc": "2.0",
         "method": method,
@@ -303,7 +295,7 @@ pub fn send_rpc_request(
     //
     // A better solution would be to create some struct that contains a built client and pass it
    // around (similar to the `web3` crate's `Transport` structs).
-    ClientBuilder::new()
+    let response = ClientBuilder::new()
         .timeout(timeout)
         .build()
         .expect("The builder should always build a client")
@@ -312,43 +304,32 @@ pub fn send_rpc_request(
         .body(body)
         .send()
         .map_err(|e| format!("Request failed: {:?}", e))
-        .and_then(|response| {
-            if response.status() != StatusCode::OK {
-                Err(format!(
-                    "Response HTTP status was not 200 OK: {}.",
-                    response.status()
-                ))
-            } else {
-                Ok(response)
-            }
-        })
-        .and_then(|response| {
-            response
-                .headers()
-                .get(CONTENT_TYPE)
-                .ok_or_else(|| "No content-type header in response".to_string())
-                .and_then(|encoding| {
-                    encoding
-                        .to_str()
-                        .map(|s| s.to_string())
-                        .map_err(|e| format!("Failed to parse content-type header: {}", e))
-                })
-                .map(|encoding| (response, encoding))
-        })
-        .and_then(|(response, encoding)| {
-            response
-                .into_body()
-                .concat2()
-                .map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
-                .map_err(|e| format!("Failed to receive body: {:?}", e))
-                .and_then(move |bytes| match encoding.as_str() {
-                    "application/json" => Ok(bytes),
-                    "application/json; charset=utf-8" => Ok(bytes),
-                    other => Err(format!("Unsupported encoding: {}", other)),
-                })
-                .map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
-                .map_err(|e| format!("Failed to receive body: {:?}", e))
-        })
+        .await?;
+    if response.status() != StatusCode::OK {
+        return Err(format!(
+            "Response HTTP status was not 200 OK: {}.",
+            response.status()
+        ));
+    };
+    let encoding = response
+        .headers()
+        .get(CONTENT_TYPE)
+        .ok_or_else(|| "No content-type header in response".to_string())?
+        .to_str()
+        .map(|s| s.to_string())
+        .map_err(|e| format!("Failed to parse content-type header: {}", e))?;
+
+    response
+        .bytes()
+        .map_err(|e| format!("Failed to receive body: {:?}", e))
+        .await
+        .and_then(move |bytes| match encoding.as_str() {
+            "application/json" => Ok(bytes),
+            "application/json; charset=utf-8" => Ok(bytes),
+            other => Err(format!("Unsupported encoding: {}", other)),
        })
+        .map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
 }
 
 /// Accepts an entire HTTP body (as a string) and returns the `result` field, as a serde `Value`.
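The reworked `send_rpc_request` above swaps `r#async::ClientBuilder` plus a `concat2()` body stream for reqwest's async client, where the status check and body read are straight-line awaits. A minimal sketch of that flow, assuming reqwest 0.10+ on tokio (`post_json` is a hypothetical name, not the crate's API):

```rust
use reqwest::{header::CONTENT_TYPE, ClientBuilder, StatusCode};
use std::time::Duration;

// POST a JSON-RPC body and return the response text, checking the status
// first. Each step that used to be an `.and_then` stage is now a plain await.
async fn post_json(endpoint: &str, body: String, timeout: Duration) -> Result<String, String> {
    let response = ClientBuilder::new()
        .timeout(timeout)
        .build()
        .map_err(|e| format!("Failed to build client: {:?}", e))?
        .post(endpoint)
        .header(CONTENT_TYPE, "application/json")
        .body(body)
        .send()
        .await
        .map_err(|e| format!("Request failed: {:?}", e))?;

    if response.status() != StatusCode::OK {
        return Err(format!("Status was not 200 OK: {}", response.status()));
    }

    let bytes = response
        .bytes()
        .await
        .map_err(|e| format!("Failed to receive body: {:?}", e))?;

    Ok(String::from_utf8_lossy(&bytes).into_owned())
}
```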
@@ -2,21 +2,18 @@ use crate::metrics;
 use crate::{
     block_cache::{BlockCache, Error as BlockCacheError, Eth1Block},
     deposit_cache::Error as DepositCacheError,
-    http::{get_block, get_block_number, get_deposit_logs_in_range},
+    http::{get_block, get_block_number, get_deposit_logs_in_range, Log},
     inner::{DepositUpdater, Inner},
     DepositLog,
 };
-use futures::{
-    future::{loop_fn, Loop},
-    stream, Future, Stream,
-};
+use futures::{future::TryFutureExt, stream, stream::TryStreamExt, StreamExt};
 use parking_lot::{RwLock, RwLockReadGuard};
 use serde::{Deserialize, Serialize};
 use slog::{debug, error, info, trace, Logger};
 use std::ops::{Range, RangeInclusive};
 use std::sync::Arc;
-use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
-use tokio::timer::Delay;
+use std::time::{SystemTime, UNIX_EPOCH};
+use tokio::time::{interval_at, Duration, Instant};
 
 const STANDARD_TIMEOUT_MILLIS: u64 = 15_000;
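The import swap from `tokio::timer::Delay` to `tokio::time::{interval_at, Duration, Instant}` above is what drives the new update loop: rather than constructing a fresh `Delay` at the end of each iteration, `interval_at` yields ticks on a fixed cadence. The ported code consumes the interval as a `Stream` via `next()`; the sketch below uses the equivalent inherent `tick()` method and assumes tokio 0.2:

```rust
use tokio::time::{interval_at, Duration, Instant};

#[tokio::main]
async fn main() {
    // Tick every 500ms, starting immediately.
    let mut interval = interval_at(Instant::now(), Duration::from_millis(500));
    for _ in 0..3 {
        interval.tick().await; // resolves once per period
        println!("tick");
    }
}
```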
@@ -241,63 +238,40 @@ impl Service {
     /// - Err(_) if there is an error.
     ///
     /// Emits logs for debugging and errors.
-    pub fn update(
-        &self,
-    ) -> impl Future<Item = (DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), Error = String>
-    {
-        let log_a = self.log.clone();
-        let log_b = self.log.clone();
-        let inner_1 = self.inner.clone();
-        let inner_2 = self.inner.clone();
+    pub async fn update(
+        service: Self,
+    ) -> Result<(DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), String> {
+        let update_deposit_cache = async {
+            let outcome = Service::update_deposit_cache(service.clone())
+                .await
+                .map_err(|e| format!("Failed to update eth1 cache: {:?}", e))?;
 
-        let deposit_future = self
-            .update_deposit_cache()
-            .map_err(|e| format!("Failed to update eth1 cache: {:?}", e))
-            .then(move |result| {
-                match &result {
-                    Ok(DepositCacheUpdateOutcome { logs_imported }) => trace!(
-                        log_a,
-                        "Updated eth1 deposit cache";
-                        "cached_deposits" => inner_1.deposit_cache.read().cache.len(),
-                        "logs_imported" => logs_imported,
-                        "last_processed_eth1_block" => inner_1.deposit_cache.read().last_processed_block,
-                    ),
-                    Err(e) => error!(
-                        log_a,
-                        "Failed to update eth1 deposit cache";
-                        "error" => e
-                    ),
-                };
+            trace!(
+                service.log,
+                "Updated eth1 deposit cache";
+                "cached_deposits" => service.inner.deposit_cache.read().cache.len(),
+                "logs_imported" => outcome.logs_imported,
+                "last_processed_eth1_block" => service.inner.deposit_cache.read().last_processed_block,
+            );
+            Ok(outcome)
+        };
 
-                result
-            });
+        let update_block_cache = async {
+            let outcome = Service::update_block_cache(service.clone())
+                .await
+                .map_err(|e| format!("Failed to update eth1 cache: {:?}", e))?;
 
-        let block_future = self
-            .update_block_cache()
-            .map_err(|e| format!("Failed to update eth1 cache: {:?}", e))
-            .then(move |result| {
-                match &result {
-                    Ok(BlockCacheUpdateOutcome {
-                        blocks_imported,
-                        head_block_number,
-                    }) => trace!(
-                        log_b,
-                        "Updated eth1 block cache";
-                        "cached_blocks" => inner_2.block_cache.read().len(),
-                        "blocks_imported" => blocks_imported,
-                        "head_block" => head_block_number,
-                    ),
-                    Err(e) => error!(
-                        log_b,
-                        "Failed to update eth1 block cache";
-                        "error" => e
-                    ),
-                };
+            trace!(
+                service.log,
+                "Updated eth1 block cache";
+                "cached_blocks" => service.inner.block_cache.read().len(),
+                "blocks_imported" => outcome.blocks_imported,
+                "head_block" => outcome.head_block_number,
+            );
+            Ok(outcome)
+        };
 
-                result
-            });
-
-        deposit_future.join(block_future)
+        futures::try_join!(update_deposit_cache, update_block_cache)
     }
 
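`futures::try_join!`, which replaces `deposit_future.join(block_future)` above, polls both async blocks concurrently and short-circuits on the first `Err`; otherwise it yields a tuple of the `Ok` values. A minimal self-contained sketch:

```rust
use futures::try_join;

// Stand-ins for the two cache updates run side by side.
async fn deposits() -> Result<u64, String> {
    Ok(3)
}

async fn blocks() -> Result<u64, String> {
    Ok(7)
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // Both futures make progress concurrently; the first error aborts both.
    let (d, b) = try_join!(deposits(), blocks())?;
    assert_eq!((d, b), (3, 7));
    Ok(())
}
```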
     /// A looping future that updates the cache, then waits `config.auto_update_interval` before
@@ -309,56 +283,42 @@ impl Service {
     /// - Err(_) if there is an error.
     ///
     /// Emits logs for debugging and errors.
-    pub fn auto_update(
-        &self,
-        exit: tokio::sync::oneshot::Receiver<()>,
-    ) -> impl Future<Item = (), Error = ()> {
-        let service = self.clone();
-        let log = self.log.clone();
-        let update_interval = Duration::from_millis(self.config().auto_update_interval_millis);
+    pub fn auto_update(service: Self, exit: tokio::sync::oneshot::Receiver<()>) {
+        let update_interval = Duration::from_millis(service.config().auto_update_interval_millis);
 
-        let loop_future = loop_fn((), move |()| {
-            let service = service.clone();
-            let log_a = log.clone();
-            let log_b = log.clone();
+        let mut interval = interval_at(Instant::now(), update_interval);
 
-            service
-                .update()
-                .then(move |update_result| {
-                    match update_result {
-                        Err(e) => error!(
-                            log_a,
-                            "Failed to update eth1 cache";
-                            "retry_millis" => update_interval.as_millis(),
-                            "error" => e,
-                        ),
-                        Ok((deposit, block)) => debug!(
-                            log_a,
-                            "Updated eth1 cache";
-                            "retry_millis" => update_interval.as_millis(),
-                            "blocks" => format!("{:?}", block),
-                            "deposits" => format!("{:?}", deposit),
-                        ),
-                    };
+        let update_future = async move {
+            while interval.next().await.is_some() {
+                Service::do_update(service.clone(), update_interval)
+                    .await
+                    .ok();
+            }
+        };
 
-                    // Do not break the loop if there is an update failure.
-                    Ok(())
-                })
-                .and_then(move |_| Delay::new(Instant::now() + update_interval))
-                .then(move |timer_result| {
-                    if let Err(e) = timer_result {
-                        error!(
-                            log_b,
-                            "Failed to trigger eth1 cache update delay";
-                            "error" => format!("{:?}", e),
-                        );
-                    }
-                    // Do not break the loop if there is a timer failure.
-                    Ok(Loop::Continue(()))
-                })
-        });
+        let future = futures::future::select(Box::pin(update_future), exit);
 
-        loop_future.select(exit).map(|_| ()).map_err(|_| ())
+        tokio::task::spawn(future);
     }
 
+    async fn do_update(service: Self, update_interval: Duration) -> Result<(), ()> {
+        let update_result = Service::update(service.clone()).await;
+        match update_result {
+            Err(e) => error!(
+                service.log,
+                "Failed to update eth1 cache";
+                "retry_millis" => update_interval.as_millis(),
+                "error" => e,
+            ),
+            Ok((deposit, block)) => debug!(
+                service.log,
+                "Updated eth1 cache";
+                "retry_millis" => update_interval.as_millis(),
+                "blocks" => format!("{:?}", block),
+                "deposits" => format!("{:?}", deposit),
+            ),
+        };
+        Ok(())
+    }
 
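The shutdown wiring in `auto_update` above races the update loop against a oneshot `exit` receiver via `futures::future::select`, then detaches the combined future with `tokio::task::spawn`. A hedged sketch of the same pattern, using tokio 0.2 names (`delay_for` became `sleep` in tokio 1.x):

```rust
use futures::future;
use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (exit_tx, exit_rx) = oneshot::channel::<()>();

    // Stand-in for the eth1 update loop: runs forever until selected away.
    let update_loop = async {
        loop {
            tokio::time::delay_for(std::time::Duration::from_millis(50)).await;
            println!("update");
        }
    };

    let handle = tokio::task::spawn(async move {
        // Whichever future completes first wins; the loser is dropped,
        // which is what tears the loop down on shutdown.
        future::select(Box::pin(update_loop), exit_rx).await;
    });

    // Later, signal shutdown and wait for the task to wind down.
    let _ = exit_tx.send(());
    let _ = handle.await;
}
```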
     /// Contacts the remote eth1 node and attempts to import deposit logs up to the configured
@@ -373,135 +333,126 @@ impl Service {
     /// - Err(_) if there is an error.
     ///
     /// Emits logs for debugging and errors.
-    pub fn update_deposit_cache(
-        &self,
-    ) -> impl Future<Item = DepositCacheUpdateOutcome, Error = Error> {
-        let service_1 = self.clone();
-        let service_2 = self.clone();
-        let service_3 = self.clone();
-        let blocks_per_log_query = self.config().blocks_per_log_query;
-        let max_log_requests_per_update = self
+    pub async fn update_deposit_cache(service: Self) -> Result<DepositCacheUpdateOutcome, Error> {
+        let endpoint = service.config().endpoint.clone();
+        let follow_distance = service.config().follow_distance;
+        let deposit_contract_address = service.config().deposit_contract_address.clone();
+
+        let blocks_per_log_query = service.config().blocks_per_log_query;
+        let max_log_requests_per_update = service
             .config()
             .max_log_requests_per_update
             .unwrap_or_else(usize::max_value);
 
-        let next_required_block = self
+        let next_required_block = service
             .deposits()
             .read()
             .last_processed_block
             .map(|n| n + 1)
-            .unwrap_or_else(|| self.config().deposit_contract_deploy_block);
+            .unwrap_or_else(|| service.config().deposit_contract_deploy_block);
 
-        get_new_block_numbers(
-            &self.config().endpoint,
-            next_required_block,
-            self.config().follow_distance,
-        )
-        .map(move |range| {
-            range
-                .map(|range| {
-                    range
-                        .collect::<Vec<u64>>()
-                        .chunks(blocks_per_log_query)
-                        .take(max_log_requests_per_update)
-                        .map(|vec| {
-                            let first = vec.first().cloned().unwrap_or_else(|| 0);
-                            let last = vec.last().map(|n| n + 1).unwrap_or_else(|| 0);
-                            first..last
-                        })
-                        .collect::<Vec<Range<u64>>>()
-                })
-                .unwrap_or_else(|| vec![])
-        })
-        .and_then(move |block_number_chunks| {
-            stream::unfold(
-                block_number_chunks.into_iter(),
-                move |mut chunks| match chunks.next() {
-                    Some(chunk) => {
-                        let chunk_1 = chunk.clone();
-                        Some(
-                            get_deposit_logs_in_range(
-                                &service_1.config().endpoint,
-                                &service_1.config().deposit_contract_address,
-                                chunk,
-                                Duration::from_millis(GET_DEPOSIT_LOG_TIMEOUT_MILLIS),
-                            )
-                            .map_err(Error::GetDepositLogsFailed)
-                            .map(|logs| (chunk_1, logs))
-                            .map(|logs| (logs, chunks)),
-                        )
-                    }
-                    None => None,
-                },
-            )
-            .fold(0, move |mut sum, (block_range, log_chunk)| {
-                let mut cache = service_2.deposits().write();
-
-                log_chunk
-                    .into_iter()
-                    .map(|raw_log| {
-                        DepositLog::from_log(&raw_log).map_err(|error| {
-                            Error::FailedToParseDepositLog {
-                                block_range: block_range.clone(),
-                                error,
-                            }
-                        })
-                    })
-                    // Return early if any of the logs cannot be parsed.
-                    //
-                    // This costs an additional `collect`, however it enforces that no logs are
-                    // imported if any one of them cannot be parsed.
-                    .collect::<Result<Vec<_>, _>>()?
-                    .into_iter()
-                    .map(|deposit_log| {
-                        cache
-                            .cache
-                            .insert_log(deposit_log)
-                            .map_err(Error::FailedToInsertDeposit)?;
-
-                        sum += 1;
-
-                        Ok(())
-                    })
-                    // Returns if a deposit is unable to be added to the cache.
-                    //
-                    // If this error occurs, the cache will no longer be guaranteed to hold either
-                    // none or all of the logs for each block (i.e., there may exist _some_ logs for
-                    // a block, but not _all_ logs for that block). This scenario can cause the
-                    // node to choose an invalid genesis state or propose an invalid block.
-                    .collect::<Result<_, _>>()?;
-
-                cache.last_processed_block = Some(block_range.end.saturating_sub(1));
-
-                metrics::set_gauge(&metrics::DEPOSIT_CACHE_LEN, cache.cache.len() as i64);
-                metrics::set_gauge(
-                    &metrics::HIGHEST_PROCESSED_DEPOSIT_BLOCK,
-                    cache.last_processed_block.unwrap_or_else(|| 0) as i64,
-                );
-
-                Ok(sum)
-            })
-            .map(move |logs_imported| {
-                if logs_imported > 0 {
-                    info!(
-                        service_3.log,
-                        "Imported deposit log(s)";
-                        "latest_block" => service_3.inner.deposit_cache.read().cache.latest_block_number(),
-                        "total" => service_3.deposit_cache_len(),
-                        "new" => logs_imported
-                    );
-                } else {
-                    debug!(
-                        service_3.log,
-                        "No new deposits found";
-                        "latest_block" => service_3.inner.deposit_cache.read().cache.latest_block_number(),
-                        "total_deposits" => service_3.deposit_cache_len(),
-                    );
-                }
-
-                DepositCacheUpdateOutcome { logs_imported }
-            })
-        })
+        let range = get_new_block_numbers(&endpoint, next_required_block, follow_distance).await?;
+
+        let block_number_chunks = if let Some(range) = range {
+            range
+                .collect::<Vec<u64>>()
+                .chunks(blocks_per_log_query)
+                .take(max_log_requests_per_update)
+                .map(|vec| {
+                    let first = vec.first().cloned().unwrap_or_else(|| 0);
+                    let last = vec.last().map(|n| n + 1).unwrap_or_else(|| 0);
+                    first..last
+                })
+                .collect::<Vec<Range<u64>>>()
+        } else {
+            Vec::new()
+        };
+
+        let logs: Vec<(Range<u64>, Vec<Log>)> =
+            stream::try_unfold(block_number_chunks.into_iter(), |mut chunks| async {
+                match chunks.next() {
+                    Some(chunk) => {
+                        let chunk_1 = chunk.clone();
+                        match get_deposit_logs_in_range(
+                            &endpoint,
+                            &deposit_contract_address,
+                            chunk,
+                            Duration::from_millis(GET_DEPOSIT_LOG_TIMEOUT_MILLIS),
+                        )
+                        .await
+                        {
+                            Ok(logs) => Ok(Some(((chunk_1, logs), chunks))),
+                            Err(e) => Err(Error::GetDepositLogsFailed(e)),
+                        }
+                    }
+                    None => Ok(None),
+                }
+            })
+            .try_collect()
+            .await?;
+
+        let mut logs_imported = 0;
+        for (block_range, log_chunk) in logs.iter() {
+            let mut cache = service.deposits().write();
+            log_chunk
+                .into_iter()
+                .map(|raw_log| {
+                    DepositLog::from_log(&raw_log).map_err(|error| Error::FailedToParseDepositLog {
+                        block_range: block_range.clone(),
+                        error,
+                    })
+                })
+                // Return early if any of the logs cannot be parsed.
+                //
+                // This costs an additional `collect`, however it enforces that no logs are
+                // imported if any one of them cannot be parsed.
+                .collect::<Result<Vec<_>, _>>()?
+                .into_iter()
+                .map(|deposit_log| {
+                    cache
+                        .cache
+                        .insert_log(deposit_log)
+                        .map_err(Error::FailedToInsertDeposit)?;
+
+                    logs_imported += 1;
+
+                    Ok(())
+                })
+                // Returns if a deposit is unable to be added to the cache.
+                //
+                // If this error occurs, the cache will no longer be guaranteed to hold either
+                // none or all of the logs for each block (i.e., there may exist _some_ logs for
+                // a block, but not _all_ logs for that block). This scenario can cause the
+                // node to choose an invalid genesis state or propose an invalid block.
+                .collect::<Result<_, _>>()?;
+
+            cache.last_processed_block = Some(block_range.end.saturating_sub(1));
+
+            metrics::set_gauge(&metrics::DEPOSIT_CACHE_LEN, cache.cache.len() as i64);
+            metrics::set_gauge(
+                &metrics::HIGHEST_PROCESSED_DEPOSIT_BLOCK,
+                cache.last_processed_block.unwrap_or_else(|| 0) as i64,
+            );
+        }
+
+        if logs_imported > 0 {
+            info!(
+                service.log,
+                "Imported deposit log(s)";
+                "latest_block" => service.inner.deposit_cache.read().cache.latest_block_number(),
+                "total" => service.deposit_cache_len(),
+                "new" => logs_imported
+            );
+        } else {
+            debug!(
+                service.log,
+                "No new deposits found";
+                "latest_block" => service.inner.deposit_cache.read().cache.latest_block_number(),
+                "total_deposits" => service.deposit_cache_len(),
+            );
+        }
+
+        Ok(DepositCacheUpdateOutcome { logs_imported })
     }
 
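The `stream::try_unfold(...).try_collect().await?` shape above replaces the old `stream::unfold` + `fold`: the closure returns `Ok(Some((item, state)))` to continue, `Ok(None)` to finish, and `Err(_)` to abort the whole collection with that error. A minimal sketch with a fake per-chunk fetch standing in for `get_deposit_logs_in_range`:

```rust
use futures::stream::{self, TryStreamExt};
use std::ops::Range;

#[tokio::main]
async fn main() -> Result<(), String> {
    let chunks = vec![0..16u64, 16..32, 32..48];

    let fetched: Vec<(Range<u64>, usize)> =
        stream::try_unfold(chunks.into_iter(), |mut iter| async move {
            match iter.next() {
                Some(range) => {
                    if range.start >= range.end {
                        // Any failure here aborts the whole `try_collect` below.
                        return Err("empty chunk".to_string());
                    }
                    // A real implementation would await an RPC here; we fake a
                    // per-chunk result (the number of blocks in the range).
                    let log_count = (range.end - range.start) as usize;
                    Ok(Some(((range, log_count), iter)))
                }
                // `Ok(None)` terminates the stream cleanly.
                None => Ok(None),
            }
        })
        .try_collect()
        .await?;

    assert_eq!(fetched.len(), 3);
    Ok(())
}
```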
     /// Contacts the remote eth1 node and attempts to import all blocks up to the configured
@@ -515,218 +466,249 @@ impl Service {
     /// - Err(_) if there is an error.
     ///
     /// Emits logs for debugging and errors.
-    pub fn update_block_cache(&self) -> impl Future<Item = BlockCacheUpdateOutcome, Error = Error> {
-        let cache_1 = self.inner.clone();
-        let cache_2 = self.inner.clone();
-        let cache_3 = self.inner.clone();
-        let cache_4 = self.inner.clone();
-        let cache_5 = self.inner.clone();
-        let cache_6 = self.inner.clone();
-
-        let service_1 = self.clone();
-
-        let block_cache_truncation = self.config().block_cache_truncation;
-        let max_blocks_per_update = self
+    pub async fn update_block_cache(service: Self) -> Result<BlockCacheUpdateOutcome, Error> {
+        let block_cache_truncation = service.config().block_cache_truncation;
+        let max_blocks_per_update = service
             .config()
             .max_blocks_per_update
             .unwrap_or_else(usize::max_value);
 
-        let next_required_block = cache_1
+        let next_required_block = service
+            .inner
             .block_cache
             .read()
             .highest_block_number()
             .map(|n| n + 1)
-            .unwrap_or_else(|| self.config().lowest_cached_block_number);
+            .unwrap_or_else(|| service.config().lowest_cached_block_number);
 
-        get_new_block_numbers(
-            &self.config().endpoint,
-            next_required_block,
-            self.config().follow_distance,
-        )
+        let endpoint = service.config().endpoint.clone();
+        let follow_distance = service.config().follow_distance;
+
+        let range = get_new_block_numbers(&endpoint, next_required_block, follow_distance).await?;
+
         // Map the range of required blocks into a Vec.
         //
         // If the required range is larger than the size of the cache, drop the existing cache
         // because it's expired and just download enough blocks to fill the cache.
-        .and_then(move |range| {
-            range
-                .map(|range| {
-                    if range.start() > range.end() {
-                        // Note: this check is not strictly necessary, however it remains to
-                        // safeguard against any regression which may cause an underflow in a
-                        // following subtraction operation.
-                        Err(Error::Internal("Range was not increasing".into()))
-                    } else {
-                        let range_size = range.end() - range.start();
-                        let max_size = block_cache_truncation
-                            .map(|n| n as u64)
-                            .unwrap_or_else(u64::max_value);
-                        if range_size > max_size {
-                            // If the range of required blocks is larger than `max_size`, drop all
-                            // existing blocks and download `max_size` count of blocks.
-                            let first_block = range.end() - max_size;
-                            (*cache_5.block_cache.write()) = BlockCache::default();
-                            Ok((first_block..=*range.end()).collect::<Vec<u64>>())
-                        } else {
-                            Ok(range.collect::<Vec<u64>>())
-                        }
-                    }
-                })
-                .unwrap_or_else(|| Ok(vec![]))
-        })
-        // Download the range of blocks and sequentially import them into the cache.
-        .and_then(move |required_block_numbers| {
-            // Last processed block in deposit cache
-            let latest_in_cache = cache_6
-                .deposit_cache
-                .read()
-                .last_processed_block
-                .unwrap_or(0);
-
-            let required_block_numbers = required_block_numbers
-                .into_iter()
-                .filter(|x| *x <= latest_in_cache)
-                .take(max_blocks_per_update)
-                .collect::<Vec<_>>();
-
-            // Produce a stream from the list of required block numbers and return a future that
-            // consumes it.
-            stream::unfold(
-                required_block_numbers.into_iter(),
-                move |mut block_numbers| match block_numbers.next() {
-                    Some(block_number) => Some(
-                        download_eth1_block(cache_2.clone(), block_number)
-                            .map(|v| (v, block_numbers)),
-                    ),
-                    None => None,
-                },
-            )
-            .fold(0, move |sum, eth1_block| {
-                cache_3
-                    .block_cache
-                    .write()
-                    .insert_root_or_child(eth1_block)
-                    .map_err(Error::FailedToInsertEth1Block)?;
-
-                metrics::set_gauge(
-                    &metrics::BLOCK_CACHE_LEN,
-                    cache_3.block_cache.read().len() as i64,
-                );
-                metrics::set_gauge(
-                    &metrics::LATEST_CACHED_BLOCK_TIMESTAMP,
-                    cache_3
-                        .block_cache
-                        .read()
-                        .latest_block_timestamp()
-                        .unwrap_or_else(|| 0) as i64,
-                );
-
-                Ok(sum + 1)
-            })
-        })
-        .and_then(move |blocks_imported| {
-            // Prune the block cache, preventing it from growing too large.
-            cache_4.prune_blocks();
-
-            metrics::set_gauge(
-                &metrics::BLOCK_CACHE_LEN,
-                cache_4.block_cache.read().len() as i64,
-            );
-
-            let block_cache = service_1.inner.block_cache.read();
-            let latest_block_mins = block_cache
-                .latest_block_timestamp()
-                .and_then(|timestamp| {
-                    SystemTime::now()
-                        .duration_since(UNIX_EPOCH)
-                        .ok()
-                        .and_then(|now| now.checked_sub(Duration::from_secs(timestamp)))
-                })
-                .map(|duration| format!("{} mins", duration.as_secs() / 60))
-                .unwrap_or_else(|| "n/a".into());
-
-            if blocks_imported > 0 {
-                debug!(
-                    service_1.log,
-                    "Imported eth1 block(s)";
-                    "latest_block_age" => latest_block_mins,
-                    "latest_block" => block_cache.highest_block_number(),
-                    "total_cached_blocks" => block_cache.len(),
-                    "new" => blocks_imported
-                );
-            } else {
-                debug!(
-                    service_1.log,
-                    "No new eth1 blocks imported";
-                    "latest_block" => block_cache.highest_block_number(),
-                    "cached_blocks" => block_cache.len(),
-                );
-            }
-
-            Ok(BlockCacheUpdateOutcome {
-                blocks_imported,
-                head_block_number: cache_4.block_cache.read().highest_block_number(),
-            })
-        })
-    }
+        let required_block_numbers = if let Some(range) = range {
+            if range.start() > range.end() {
+                // Note: this check is not strictly necessary, however it remains to
+                // safeguard against any regression which may cause an underflow in a
+                // following subtraction operation.
+                return Err(Error::Internal("Range was not increasing".into()));
+            } else {
+                let range_size = range.end() - range.start();
+                let max_size = block_cache_truncation
+                    .map(|n| n as u64)
+                    .unwrap_or_else(u64::max_value);
+                if range_size > max_size {
+                    // If the range of required blocks is larger than `max_size`, drop all
+                    // existing blocks and download `max_size` count of blocks.
+                    let first_block = range.end() - max_size;
+                    (*service.inner.block_cache.write()) = BlockCache::default();
+                    (first_block..=*range.end()).collect::<Vec<u64>>()
+                } else {
+                    range.collect::<Vec<u64>>()
+                }
+            }
+        } else {
+            Vec::new()
+        };
+
+        // Download the range of blocks and sequentially import them into the cache.
+        // Last processed block in deposit cache.
+        let latest_in_cache = service
+            .inner
+            .deposit_cache
+            .read()
+            .last_processed_block
+            .unwrap_or(0);
+
+        let required_block_numbers = required_block_numbers
+            .into_iter()
+            .filter(|x| *x <= latest_in_cache)
+            .take(max_blocks_per_update)
+            .collect::<Vec<_>>();
+
+        // Produce a stream from the list of required block numbers and consume it.
+        let eth1_blocks: Vec<Eth1Block> = stream::try_unfold(
+            required_block_numbers.into_iter(),
+            |mut block_numbers| async {
+                match block_numbers.next() {
+                    Some(block_number) => {
+                        match download_eth1_block(service.inner.clone(), block_number).await {
+                            Ok(eth1_block) => Ok(Some((eth1_block, block_numbers))),
+                            Err(e) => Err(e),
+                        }
+                    }
+                    None => Ok(None),
+                }
+            },
+        )
+        .try_collect()
+        .await?;
+
+        let mut blocks_imported = 0;
+        for eth1_block in eth1_blocks {
+            service
+                .inner
+                .block_cache
+                .write()
+                .insert_root_or_child(eth1_block)
+                .map_err(Error::FailedToInsertEth1Block)?;
+
+            metrics::set_gauge(
+                &metrics::BLOCK_CACHE_LEN,
+                service.inner.block_cache.read().len() as i64,
+            );
+            metrics::set_gauge(
+                &metrics::LATEST_CACHED_BLOCK_TIMESTAMP,
+                service
+                    .inner
+                    .block_cache
+                    .read()
+                    .latest_block_timestamp()
+                    .unwrap_or_else(|| 0) as i64,
+            );
+
+            blocks_imported += 1;
+        }
+
+        // Prune the block cache, preventing it from growing too large.
+        service.inner.prune_blocks();
+
+        metrics::set_gauge(
+            &metrics::BLOCK_CACHE_LEN,
+            service.inner.block_cache.read().len() as i64,
+        );
+
+        let block_cache = service.inner.block_cache.read();
+        let latest_block_mins = block_cache
+            .latest_block_timestamp()
+            .and_then(|timestamp| {
+                SystemTime::now()
+                    .duration_since(UNIX_EPOCH)
+                    .ok()
+                    .and_then(|now| now.checked_sub(Duration::from_secs(timestamp)))
+            })
+            .map(|duration| format!("{} mins", duration.as_secs() / 60))
+            .unwrap_or_else(|| "n/a".into());
+
+        if blocks_imported > 0 {
+            info!(
+                service.log,
+                "Imported eth1 block(s)";
+                "latest_block_age" => latest_block_mins,
+                "latest_block" => block_cache.highest_block_number(),
+                "total_cached_blocks" => block_cache.len(),
+                "new" => blocks_imported
+            );
+        } else {
+            debug!(
+                service.log,
+                "No new eth1 blocks imported";
+                "latest_block" => block_cache.highest_block_number(),
+                "cached_blocks" => block_cache.len(),
+            );
+        }
+
+        Ok(BlockCacheUpdateOutcome {
+            blocks_imported,
+            head_block_number: service.inner.block_cache.read().highest_block_number(),
+        })
+    }
 }
 
 /// Determine the range of blocks that need to be downloaded, given the remote's best block and
 /// the locally stored best block.
-fn get_new_block_numbers<'a>(
+async fn get_new_block_numbers<'a>(
     endpoint: &str,
     next_required_block: u64,
     follow_distance: u64,
-) -> impl Future<Item = Option<RangeInclusive<u64>>, Error = Error> + 'a {
-    get_block_number(endpoint, Duration::from_millis(BLOCK_NUMBER_TIMEOUT_MILLIS))
-        .map_err(Error::GetBlockNumberFailed)
-        .and_then(move |remote_highest_block| {
-            let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
+) -> Result<Option<RangeInclusive<u64>>, Error> {
+    let remote_highest_block =
+        get_block_number(endpoint, Duration::from_millis(BLOCK_NUMBER_TIMEOUT_MILLIS))
+            .map_err(Error::GetBlockNumberFailed)
+            .await?;
+    let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
 
-            if next_required_block <= remote_follow_block {
-                Ok(Some(next_required_block..=remote_follow_block))
-            } else if next_required_block > remote_highest_block + 1 {
-                // If this is the case, the node must have gone "backwards" in terms of its sync
-                // (i.e., its head block is lower than it was before).
-                //
-                // We assume that the `follow_distance` should be sufficient to ensure this never
-                // happens, otherwise it is an error.
-                Err(Error::RemoteNotSynced {
-                    next_required_block,
-                    remote_highest_block,
-                    follow_distance,
-                })
-            } else {
-                // Return an empty range.
-                Ok(None)
-            }
-        })
+    if next_required_block <= remote_follow_block {
+        Ok(Some(next_required_block..=remote_follow_block))
+    } else if next_required_block > remote_highest_block + 1 {
+        // If this is the case, the node must have gone "backwards" in terms of its sync
+        // (i.e., its head block is lower than it was before).
+        //
+        // We assume that the `follow_distance` should be sufficient to ensure this never
+        // happens, otherwise it is an error.
+        Err(Error::RemoteNotSynced {
+            next_required_block,
+            remote_highest_block,
+            follow_distance,
+        })
+    } else {
+        // Return an empty range.
+        Ok(None)
+    }
 }
 
 /// Downloads the `(block, deposit_root, deposit_count)` tuple from an eth1 node for the given
 /// `block_number`.
 ///
 /// Performs three async calls to an Eth1 HTTP JSON RPC endpoint.
-fn download_eth1_block<'a>(
-    cache: Arc<Inner>,
-    block_number: u64,
-) -> impl Future<Item = Eth1Block, Error = Error> + 'a {
+async fn download_eth1_block(cache: Arc<Inner>, block_number: u64) -> Result<Eth1Block, Error> {
+    let endpoint = cache.config.read().endpoint.clone();
+
     let deposit_root = cache
         .deposit_cache
         .read()
         .cache
         .get_deposit_root_from_cache(block_number);
 
     let deposit_count = cache
         .deposit_cache
         .read()
         .cache
         .get_deposit_count_from_cache(block_number);
 
     // Performs a `get_blockByNumber` call to an eth1 node.
-    get_block(
-        &cache.config.read().endpoint,
+    let http_block = get_block(
+        &endpoint,
         block_number,
         Duration::from_millis(GET_BLOCK_TIMEOUT_MILLIS),
     )
     .map_err(Error::BlockDownloadFailed)
-    .map(move |http_block| Eth1Block {
+    .await?;
+
+    Ok(Eth1Block {
         hash: http_block.hash,
         number: http_block.number,
         timestamp: http_block.timestamp,
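The decision at the heart of `get_new_block_numbers` above is pure arithmetic once the remote head is known, which makes it easy to check in isolation. A hedged, dependency-free mirror of that logic (the error string stands in for `Error::RemoteNotSynced`):

```rust
use std::ops::RangeInclusive;

// Mirrors the three outcomes: a range to download, nothing new yet, or a
// remote that has gone backwards beyond the follow distance.
fn new_block_numbers(
    next_required_block: u64,
    remote_highest_block: u64,
    follow_distance: u64,
) -> Result<Option<RangeInclusive<u64>>, String> {
    let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
    if next_required_block <= remote_follow_block {
        // There are new, sufficiently deep blocks to download.
        Ok(Some(next_required_block..=remote_follow_block))
    } else if next_required_block > remote_highest_block + 1 {
        // The remote node went "backwards"; treat as an error.
        Err("remote not synced".to_string())
    } else {
        // Nothing new beyond the follow distance yet.
        Ok(None)
    }
}

fn main() {
    // A remote head of 100 with follow distance 10 exposes blocks up to 90.
    assert_eq!(new_block_numbers(80, 100, 10), Ok(Some(80..=90)));
    assert_eq!(new_block_numbers(95, 100, 10), Ok(None));
    assert!(new_block_numbers(150, 100, 10).is_err());
}
```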