Compare commits

..

11 Commits

Author SHA1 Message Date
Paul Hauner
0cde4e285c Bump version to v0.3.3 (#1850)
## Issue Addressed

NA

## Proposed Changes

- Update versions
- Run `cargo update`

## Additional Info

- Blocked on #1846
2020-11-02 23:55:15 +00:00
Michael Sproul
2ff5828310 Downgrade ADX check to a warning (#1846)
## Issue Addressed

Closes #1842

## Proposed Changes

Due to the lies told to us by VPS providers about what CPU features they support, we are forced to check for the availability of CPU features like ADX by just _running code and seeing if it crashes_. The prominent warning should hopefully help users who have truly incompatible CPUs work out what is going on, while not burdening users of cheap VPSs.
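As a rough illustration (not Lighthouse's actual code; the function names and message text are assumptions), a feature probe that warns instead of aborting might look like:

```rust
// Hedged sketch: downgrade a CPU-feature check from a hard error to a warning.
// `adx_supported` and `warn_if_no_adx` are illustrative names only.
fn adx_supported() -> bool {
    #[cfg(target_arch = "x86_64")]
    {
        // Runtime probe via CPUID; note the PR's point is that some VPSs
        // misreport features, so even this can be a lie.
        if std::is_x86_feature_detected!("adx") {
            return true;
        }
    }
    false
}

fn warn_if_no_adx() {
    if !adx_supported() {
        // Warn rather than exit: the binary may still work on this CPU.
        eprintln!("WARNING: CPU may not support ADX; optimized BLS code could crash.");
    }
}
```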
2020-11-02 22:35:37 +00:00
Pawan Dhananjay
863ee7c9f2 Update to discv5 bootnodes (#1849)
## Issue Addressed

We seem to have rolled back to the old discv5 bootnodes in #1799, because of which fresh nodes with no cached peers cannot find any peers.

## Proposed Changes

Updates `boot_enr.yaml` to discv5.1 bootnodes.
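For context, `boot_enr.yaml` is a YAML sequence of base64-encoded ENR strings; a hypothetical entry has this shape (the value below is a truncated placeholder, not a real bootnode):

```yaml
# Structure sketch only — not real discv5.1 bootnode records.
- "enr:-Iu4QB..."
```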
2020-11-02 21:29:43 +00:00
Paul Hauner
7afbaa807e Return eth1-related data via the API (#1797)
## Issue Addressed

- Related to #1691

## Proposed Changes

Adds the following API endpoints:

- `GET lighthouse/eth1/syncing`: status about how synced we are with Eth1.
- `GET lighthouse/eth1/block_cache`: all locally cached eth1 blocks.
- `GET lighthouse/eth1/deposit_cache`: all locally cached eth1 deposits.

Additionally:

- Moves some types from the `beacon_node/eth1` crate to the `common/eth2` crate, so they can be used in the API without duplication.
- Allow `update_deposit_cache` and `update_block_cache` to take an optional head block number to avoid duplicate requests.
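The optional head-block-number plumbing can be sketched as follows (names and signatures are assumptions, not the actual Lighthouse API):

```rust
// Sketch: if the caller already fetched the remote head block number, reuse it
// instead of issuing a second request. All names here are illustrative.
fn fetch_remote_head() -> u64 {
    // Stand-in for an HTTP call to the eth1 node.
    1_234
}

fn update_deposit_cache(known_head: Option<u64>) -> u64 {
    // Only hit the network when the caller didn't supply the head.
    known_head.unwrap_or_else(fetch_remote_head)
}
```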

## Additional Info

TBC
2020-11-02 00:37:30 +00:00
divma
6c0c050fbb Tweak head syncing (#1845)
## Issue Addressed

Fixes head syncing

## Proposed Changes

- Go back to status-ing peers after removing chain segments, and make the peer manager handle status according to the sync state, preventing an old, known deadlock
- Also fix a bug where a chain would get removed if an optimistic batch succeeded while being empty

## Additional Info

Tested on Medalla and looking good
2020-11-01 23:37:39 +00:00
realbigsean
304793a6ab add quoted serialization util for FixedVector and VariableList (#1794)
## Issue Addressed

This comment: https://github.com/sigp/lighthouse/issues/1776#issuecomment-712349841

## Proposed Changes

- Add quoted serde utils for `FixedVector` and `VariableList`
- Had to remove the dependency that `ssz_types` has on `serde_utils` to avoid a circular dependency.
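The motivation for "quoted" serialization: JSON numbers above 2^53 lose precision in many parsers, so u64 values are encoded as decimal strings. A minimal stdlib-only sketch of the idea (the real utils build on serde):

```rust
// Encode a u64 as a quoted decimal string, and decode it back. This survives
// JSON round-trips even for values that don't fit in an f64 mantissa.
fn to_quoted(v: u64) -> String {
    format!("\"{}\"", v)
}

fn from_quoted(s: &str) -> Option<u64> {
    s.trim_matches('"').parse().ok()
}
```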

## Additional Info


Co-authored-by: realbigsean <seananderson33@gmail.com>
2020-10-29 23:25:21 +00:00
Pawan Dhananjay
56f9394141 Add cli option for voluntary exits (#1781)
## Issue Addressed

Resolve #1652 

## Proposed Changes

Adds a CLI option for voluntary exits. The flow is similar to Prysm's: after entering the password for the validator keystore (or loading it from `secrets` if present), the user is given multiple warnings that the operation is irreversible, directed to the docs webpage (not added yet) which explains what a voluntary exit is and the consequences of exiting, and then prompted to enter a phrase from that webpage as a final confirmation.

Example usage
```
$ lighthouse --testnet zinken account validator exit --validator <validator-pubkey> --beacon-node http://localhost:5052

Running account manager for zinken testnet                                                                                                          
validator-dir path: "..."

Enter the keystore password:  for validator in ...

Password is correct

Publishing a voluntary exit for validator: ...              
WARNING: This is an irreversible operation                                                                                                                    
WARNING: Withdrawing staked eth will not be possible until the Eth1/Eth2 merge.
Please visit [website] to make sure you understand the implications of a voluntary exit.
                                                                                                                                             
Enter the phrase from the above URL to confirm the voluntary exit:
Exit my validator
Published voluntary exit for validator ...
```

## Additional Info

Not sure if we should have a batch-exit option (`--validator all`) for exiting all the validators in the `validators` directory. I'm slightly leaning towards having only single exits, but I don't have a strong preference.
2020-10-29 23:25:19 +00:00
Paul Hauner
f64f8246db Only run http_api tests in release (#1827)
## Issue Addressed

NA

## Proposed Changes

As raised by @hermanjunge in a DM, the `http_api` tests have been observed taking 100+ minutes in debug builds. This PR:

- Moves the `http_api` tests to only run in release.
- Groups some `http_api` tests to reduce test-setup overhead.
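One way to restrict tests to release builds — a pattern this codebase uses elsewhere — is gating on `debug_assertions`, which is on in debug builds and off in release:

```rust
// This module only compiles (and its tests only run) when debug assertions
// are disabled, i.e. in release builds.
#[cfg(all(test, not(debug_assertions)))]
mod release_only_tests {
    #[test]
    fn fast_enough_in_release() {
        assert!(1 + 1 == 2);
    }
}

// Helper showing the same flag queried at runtime via `cfg!`.
fn build_profile() -> &'static str {
    if cfg!(debug_assertions) { "debug" } else { "release" }
}
```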

## Additional Info

NA
2020-10-29 22:25:20 +00:00
realbigsean
ae0f025375 Beacon state validator id filter (#1803)
## Issue Addressed

Michael's comment here: https://github.com/sigp/lighthouse/issues/1434#issuecomment-708834079
Resolves #1808

## Proposed Changes

- Add query param `id` and `status` to the `validators` endpoint
- Add string serialization and deserialization for `ValidatorStatus`
- Drop `Epoch` from `ValidatorStatus` variants
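The string round-tripping added for `ValidatorStatus` can be sketched like this (variant and string names are assumptions; the real enum has more states):

```rust
#[derive(Debug, PartialEq)]
enum ValidatorStatus {
    Active,
    Exited,
}

impl ValidatorStatus {
    // Serialize a variant to its lowercase string form.
    fn as_str(&self) -> &'static str {
        match self {
            ValidatorStatus::Active => "active",
            ValidatorStatus::Exited => "exited",
        }
    }

    // Deserialize from the string form, rejecting unknown values.
    fn from_str(s: &str) -> Option<Self> {
        match s {
            "active" => Some(ValidatorStatus::Active),
            "exited" => Some(ValidatorStatus::Exited),
            _ => None,
        }
    }
}
```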

## Additional Info

2020-10-29 05:13:04 +00:00
divma
9f45ac2f5e More sync edge cases + prettify range (#1834)
## Issue Addressed
Sync edge case: when we get an empty optimistic batch that passes validation and is inside the download buffer, the chain would eventually reach the batch and treat it as a bad state.

## Proposed Changes
- Handle the edge case by advancing the chain's target + code clarification
- Some large changes for readability + ergonomics, since Rust has try ops
- Better handling of bad batch and chain states
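The "try ops" referred to are Rust's `?` operator, which propagates errors early and flattens what would otherwise be nested matches:

```rust
// Each fallible call becomes a single expression; a parse failure returns
// the error immediately instead of needing an explicit match arm.
fn parse_pair(a: &str, b: &str) -> Result<(u64, u64), std::num::ParseIntError> {
    Ok((a.parse()?, b.parse()?))
}
```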
2020-10-29 02:29:24 +00:00
blacktemplar
2bd5b9182f fix unbanning of peers (#1838)
## Issue Addressed

NA

## Proposed Changes

Currently a banned peer will remain banned indefinitely as long as `update` is called regularly on the score struct. This fixes that bug: score decay now starts `BANNED_BEFORE_DECAY` seconds after banning.
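A sketch of the fixed behavior (the constant's value and the decay rate are assumptions, not Lighthouse's actual parameters):

```rust
// Score stays frozen for BANNED_BEFORE_DECAY seconds after the ban, then
// decays toward zero. The 0.99-per-second rate is illustrative only.
const BANNED_BEFORE_DECAY: u64 = 1800;

fn decayed_score(banned_score: f64, secs_since_ban: u64) -> f64 {
    if secs_since_ban <= BANNED_BEFORE_DECAY {
        // Within the hold-off window: no decay, the peer stays banned.
        banned_score
    } else {
        let decay_secs = (secs_since_ban - BANNED_BEFORE_DECAY) as i32;
        banned_score * 0.99f64.powi(decay_secs)
    }
}
```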
2020-10-29 01:25:02 +00:00
57 changed files with 2168 additions and 650 deletions

Cargo.lock generated

@@ -2,7 +2,7 @@
# It is not intended for manual editing.
[[package]]
name = "account_manager"
version = "0.3.2"
version = "0.3.3"
dependencies = [
"account_utils",
"bls",
@@ -12,6 +12,7 @@ dependencies = [
"directory",
"dirs 3.0.1",
"environment",
"eth2",
"eth2_keystore",
"eth2_ssz",
"eth2_ssz_derive",
@@ -23,10 +24,13 @@ dependencies = [
"libc",
"rand 0.7.3",
"rayon",
"safe_arith",
"slashing_protection",
"slog",
"slog-async",
"slog-term",
"slot_clock",
"tempfile",
"tokio 0.2.22",
"types",
"validator_dir",
@@ -53,9 +57,9 @@ dependencies = [
[[package]]
name = "addr2line"
version = "0.13.0"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b6a2d3371669ab3ca9797670853d61402b03d0b4b9ebf33d677dfa720203072"
checksum = "7c0929d69e78dd9bf5408269919fcbcaeb2e35e5d43e5815517cdc6a8e11a423"
dependencies = [
"gimli",
]
@@ -149,12 +153,6 @@ dependencies = [
"stream-cipher",
]
[[package]]
name = "ahash"
version = "0.3.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8fd72866655d1904d6b0997d0b07ba561047d070fbe29de039031c641b61217"
[[package]]
name = "ahash"
version = "0.4.6"
@@ -309,9 +307,9 @@ checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
[[package]]
name = "backtrace"
version = "0.3.53"
version = "0.3.54"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "707b586e0e2f247cbde68cdd2c3ce69ea7b7be43e1c5b426e37c9319c4b9838e"
checksum = "2baad346b2d4e94a24347adeee9c7a93f412ee94b9cc26e5b59dea23848e9f28"
dependencies = [
"addr2line",
"cfg-if 1.0.0",
@@ -359,6 +357,7 @@ dependencies = [
"derivative",
"environment",
"eth1",
"eth2",
"eth2_config",
"eth2_hashing",
"eth2_ssz",
@@ -405,7 +404,7 @@ dependencies = [
[[package]]
name = "beacon_node"
version = "0.3.2"
version = "0.3.3"
dependencies = [
"beacon_chain",
"clap",
@@ -506,9 +505,9 @@ dependencies = [
[[package]]
name = "blake2b_simd"
version = "0.5.10"
version = "0.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d8fb2d74254a3a0b5cac33ac9f8ed0e44aa50378d9dbb2e5d83bd21ed1dc2c8a"
checksum = "afa748e348ad3be8263be728124b24a24f268266f6f5d58af9d75f6a40b5c587"
dependencies = [
"arrayref",
"arrayvec",
@@ -517,9 +516,9 @@ dependencies = [
[[package]]
name = "blake2s_simd"
version = "0.5.10"
version = "0.5.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab9e07352b829279624ceb7c64adb4f585dacdb81d35cafae81139ccd617cf44"
checksum = "9e461a7034e85b211a4acb57ee2e6730b32912b06c08cc242243c39fc21ae6a2"
dependencies = [
"arrayref",
"arrayvec",
@@ -603,7 +602,7 @@ dependencies = [
[[package]]
name = "boot_node"
version = "0.3.2"
version = "0.3.3"
dependencies = [
"beacon_node",
"clap",
@@ -951,9 +950,9 @@ checksum = "a2d9162b7289a46e86208d6af2c686ca5bfde445878c41a458a9fac706252d0b"
[[package]]
name = "const_fn"
version = "0.4.2"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce90df4c658c62f12d78f7508cf92f9173e5184a539c10bfe54a3107b3ffd0f2"
checksum = "c478836e029dcef17fb47c89023448c64f781a046e0300e257ad8225ae59afab"
[[package]]
name = "constant_time_eq"
@@ -1245,9 +1244,9 @@ dependencies = [
[[package]]
name = "data-encoding"
version = "2.3.0"
version = "2.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4d0e2d24e5ee3b23a01de38eefdcd978907890701f08ffffd4cb457ca4ee8d6"
checksum = "993a608597367c6377b258c25d7120740f00ed23a2252b729b1932dd7866f908"
[[package]]
name = "db-key"
@@ -1519,11 +1518,11 @@ dependencies = [
[[package]]
name = "encoding_rs"
version = "0.8.24"
version = "0.8.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a51b8cf747471cb9499b6d59e59b0444f4c90eba8968c4e44874e92b5b64ace2"
checksum = "801bbab217d7f79c0062f4f7205b5d4427c6d1a7bd7aafdd1475f7c59d62b283"
dependencies = [
"cfg-if 0.1.10",
"cfg-if 1.0.0",
]
[[package]]
@@ -1595,6 +1594,7 @@ version = "0.2.0"
dependencies = [
"environment",
"eth1_test_rig",
"eth2",
"eth2_hashing",
"eth2_ssz",
"eth2_ssz_derive",
@@ -1640,6 +1640,7 @@ dependencies = [
"eth2_keystore",
"eth2_libp2p",
"eth2_ssz",
"eth2_ssz_derive",
"hex",
"libsecp256k1",
"procinfo",
@@ -1793,6 +1794,7 @@ dependencies = [
"eth2_ssz",
"serde",
"serde_derive",
"serde_json",
"serde_utils",
"tree_hash",
"tree_hash_derive",
@@ -2273,9 +2275,9 @@ dependencies = [
[[package]]
name = "gimli"
version = "0.22.0"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aaf91faf136cb47367fa430cd46e37a788775e7fa104f8b4bcb3861dc389b724"
checksum = "f6503fe142514ca4799d4c26297c4248239fe8838d827db6bd6065c6ed29a6ce"
[[package]]
name = "git-version"
@@ -2360,23 +2362,13 @@ version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d36fab90f82edc3c747f9d438e06cf0a491055896f2a279638bb5beed6c40177"
[[package]]
name = "hashbrown"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e91b62f79061a0bc2e046024cb7ba44b08419ed238ecbd9adbd787434b9e8c25"
dependencies = [
"ahash 0.3.8",
"autocfg 1.0.1",
]
[[package]]
name = "hashbrown"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7afe4a420e3fe79967a00898cc1f4db7c8a49a9333a29f8a4bd76a253d5cd04"
dependencies = [
"ahash 0.4.6",
"ahash",
]
[[package]]
@@ -2385,7 +2377,7 @@ version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d99cf782f0dc4372d26846bec3de7804ceb5df083c2d4462c0b8d2330e894fa8"
dependencies = [
"hashbrown 0.9.1",
"hashbrown",
]
[[package]]
@@ -2789,7 +2781,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55e2e4c765aa53a0424761bf9f41aa7a6ac1efa87238f59560640e27fca028f2"
dependencies = [
"autocfg 1.0.1",
"hashbrown 0.9.1",
"hashbrown",
]
[[package]]
@@ -2940,7 +2932,7 @@ checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55"
[[package]]
name = "lcli"
version = "0.3.2"
version = "0.3.3"
dependencies = [
"bls",
"clap",
@@ -3301,7 +3293,7 @@ dependencies = [
[[package]]
name = "lighthouse"
version = "0.3.2"
version = "0.3.3"
dependencies = [
"account_manager",
"account_utils",
@@ -3412,11 +3404,11 @@ dependencies = [
[[package]]
name = "lru"
version = "0.6.0"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "111b945ac72ec09eb7bc62a0fbdc3cc6e80555a7245f52a69d3921a75b53b153"
checksum = "be716eb6878ca2263eb5d00a781aa13264a794f519fe6af4fbb2668b2d5441c0"
dependencies = [
"hashbrown 0.8.2",
"hashbrown",
]
[[package]]
@@ -3843,9 +3835,9 @@ dependencies = [
[[package]]
name = "num-integer"
version = "0.1.43"
version = "0.1.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d59457e662d541ba17869cf51cf177c0b5f0cbf476c66bdc90bf1edac4f875b"
checksum = "d2cc698a63b549a70bc047073d2949cce27cd1c7b0a4a862d08a8031bc2801db"
dependencies = [
"autocfg 1.0.1",
"num-traits",
@@ -3853,9 +3845,9 @@ dependencies = [
[[package]]
name = "num-iter"
version = "0.1.41"
version = "0.1.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a6e6b7c748f995c4c29c5f5ae0248536e04a5739927c74ec0fa564805094b9f"
checksum = "b2021c8337a54d21aca0d59a92577a029af9431cb59b909b03252b9c164fad59"
dependencies = [
"autocfg 1.0.1",
"num-integer",
@@ -3864,9 +3856,9 @@ dependencies = [
[[package]]
name = "num-traits"
version = "0.2.12"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac267bcc07f48ee5f8935ab0d24f316fb722d7a1292e2913f0cc196b29ffd611"
checksum = "9a64b1ec5cda2586e284722486d802acf1f7dbdc623e2bfc57e65ca1cd099290"
dependencies = [
"autocfg 1.0.1",
]
@@ -3883,9 +3875,9 @@ dependencies = [
[[package]]
name = "object"
version = "0.21.1"
version = "0.22.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37fd5004feb2ce328a52b0b3d01dbf4ffff72583493900ed15f22d4111c51693"
checksum = "8d3b63360ec3cb337817c2dbd47ab4a0f170d285d8e5a2064600f3def1402397"
[[package]]
name = "once_cell"
@@ -4243,9 +4235,9 @@ dependencies = [
[[package]]
name = "ppv-lite86"
version = "0.2.9"
version = "0.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c36fa947111f5c62a733b652544dd0016a43ce89619538a8ef92724a6f501a20"
checksum = "ac74c624d6b2d21f425f752262f42188365d7b8ff1aff74c82e45136510a4857"
[[package]]
name = "primitive-types"
@@ -4719,9 +4711,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.4.1"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8963b85b8ce3074fecffde43b4b0dded83ce2f367dc8d363afc56679f3ee820b"
checksum = "38cf2c13ed4745de91a5eb834e11c00bcc3709e773173b2ce4c56c9fbde04b9c"
dependencies = [
"aho-corasick",
"memchr",
@@ -4741,9 +4733,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
version = "0.6.20"
version = "0.6.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8cab7a364d15cde1e505267766a2d3c4e22a843e1a601f0fa7564c0f82ced11c"
checksum = "3b181ba2dcf07aaccad5448e8ead58db5b742cf85dfe035e2227f137a539a189"
[[package]]
name = "remove_dir_all"
@@ -5134,9 +5126,9 @@ dependencies = [
[[package]]
name = "serde_yaml"
version = "0.8.13"
version = "0.8.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae3e2dd40a7cdc18ca80db804b7f461a39bb721160a85c9a1fa30134bf3c02a5"
checksum = "f7baae0a99f1a324984bcdc5f0718384c1f69775f1c7eec8b859b71b443e3fd7"
dependencies = [
"dtoa",
"linked-hash-map",
@@ -5224,11 +5216,10 @@ dependencies = [
[[package]]
name = "signal-hook-registry"
version = "1.2.1"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3e12110bc539e657a646068aaf5eb5b63af9d0c1f7b29c97113fad80e15f035"
checksum = "ce32ea0c6c56d5eacaeb814fbed9960547021d3edd010ded1425f180536b20ab"
dependencies = [
"arc-swap",
"libc",
]
@@ -6641,7 +6632,7 @@ dependencies = [
[[package]]
name = "validator_client"
version = "0.3.2"
version = "0.3.3"
dependencies = [
"account_utils",
"bincode",


@@ -1,6 +1,6 @@
[package]
name = "account_manager"
version = "0.3.2"
version = "0.3.3"
authors = ["Paul Hauner <paul@paulhauner.com>", "Luke Anderson <luke@sigmaprime.io>"]
edition = "2018"
@@ -31,3 +31,9 @@ tokio = { version = "0.2.22", features = ["full"] }
eth2_keystore = { path = "../crypto/eth2_keystore" }
account_utils = { path = "../common/account_utils" }
slashing_protection = { path = "../validator_client/slashing_protection" }
eth2 = {path = "../common/eth2"}
safe_arith = {path = "../consensus/safe_arith"}
slot_clock = { path = "../common/slot_clock" }
[dev-dependencies]
tempfile = "3.1.0"


@@ -140,6 +140,7 @@ pub fn cli_run<T: EthSpec>(
ensure_dir_exists(&validator_dir)?;
ensure_dir_exists(&secrets_dir)?;
eprintln!("validator-dir path: {:?}", validator_dir);
eprintln!("secrets-dir path {:?}", secrets_dir);
eprintln!("wallets-dir path {:?}", wallet_base_dir);


@@ -0,0 +1,348 @@
use crate::wallet::create::STDIN_INPUTS_FLAG;
use bls::{Keypair, PublicKey};
use clap::{App, Arg, ArgMatches};
use environment::Environment;
use eth2::{
types::{GenesisData, StateId, ValidatorId, ValidatorStatus},
BeaconNodeHttpClient, Url,
};
use eth2_keystore::Keystore;
use eth2_testnet_config::Eth2TestnetConfig;
use safe_arith::SafeArith;
use slot_clock::{SlotClock, SystemTimeSlotClock};
use std::path::PathBuf;
use std::time::Duration;
use types::{ChainSpec, Epoch, EthSpec, Fork, VoluntaryExit};
pub const CMD: &str = "exit";
pub const KEYSTORE_FLAG: &str = "keystore";
pub const PASSWORD_FILE_FLAG: &str = "password-file";
pub const BEACON_SERVER_FLAG: &str = "beacon-node";
pub const PASSWORD_PROMPT: &str = "Enter the keystore password";
pub const DEFAULT_BEACON_NODE: &str = "http://localhost:5052/";
pub const CONFIRMATION_PHRASE: &str = "Exit my validator";
pub const WEBSITE_URL: &str = "https://lighthouse-book.sigmaprime.io/voluntary-exit.html";
pub const PROMPT: &str = "WARNING: WITHDRAWING STAKED ETH IS NOT CURRENTLY POSSIBLE";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("exit")
.about("Submits a VoluntaryExit to the beacon chain for a given validator keystore.")
.arg(
Arg::with_name(KEYSTORE_FLAG)
.long(KEYSTORE_FLAG)
.value_name("KEYSTORE_PATH")
.help("The path to the EIP-2335 voting keystore for the validator")
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(PASSWORD_FILE_FLAG)
.long(PASSWORD_FILE_FLAG)
.value_name("PASSWORD_FILE_PATH")
.help("The path to the password file which unlocks the validator voting keystore")
.takes_value(true),
)
.arg(
Arg::with_name(BEACON_SERVER_FLAG)
.long(BEACON_SERVER_FLAG)
.value_name("NETWORK_ADDRESS")
.help("Address to a beacon node HTTP API")
.default_value(&DEFAULT_BEACON_NODE)
.takes_value(true),
)
.arg(
Arg::with_name(STDIN_INPUTS_FLAG)
.long(STDIN_INPUTS_FLAG)
.help("If present, read all user inputs from stdin instead of tty."),
)
}
pub fn cli_run<E: EthSpec>(matches: &ArgMatches, mut env: Environment<E>) -> Result<(), String> {
let keystore_path: PathBuf = clap_utils::parse_required(matches, KEYSTORE_FLAG)?;
let password_file_path: Option<PathBuf> =
clap_utils::parse_optional(matches, PASSWORD_FILE_FLAG)?;
let stdin_inputs = matches.is_present(STDIN_INPUTS_FLAG);
let spec = env.eth2_config().spec.clone();
let server_url: String = clap_utils::parse_required(matches, BEACON_SERVER_FLAG)?;
let client = BeaconNodeHttpClient::new(
Url::parse(&server_url)
.map_err(|e| format!("Failed to parse beacon http server: {:?}", e))?,
);
let testnet_config = env
.testnet
.clone()
.expect("network should have a valid config");
env.runtime().block_on(publish_voluntary_exit::<E>(
&keystore_path,
password_file_path.as_ref(),
&client,
&spec,
stdin_inputs,
&testnet_config,
))?;
Ok(())
}
/// Gets the keypair and validator_index for every validator and calls `publish_voluntary_exit` on it.
async fn publish_voluntary_exit<E: EthSpec>(
keystore_path: &PathBuf,
password_file_path: Option<&PathBuf>,
client: &BeaconNodeHttpClient,
spec: &ChainSpec,
stdin_inputs: bool,
testnet_config: &Eth2TestnetConfig,
) -> Result<(), String> {
let genesis_data = get_geneisis_data(client).await?;
let testnet_genesis_root = testnet_config
.beacon_state::<E>()
.as_ref()
.expect("network should have valid genesis state")
.genesis_validators_root;
// Verify that the beacon node and validator being exited are on the same network.
if genesis_data.genesis_validators_root != testnet_genesis_root {
return Err(
"Invalid genesis state. Please ensure that your beacon node is on the same network \
as the validator you are publishing an exit for"
.to_string(),
);
}
// Return immediately if beacon node is not synced
if is_syncing(client).await? {
return Err("Beacon node is still syncing".to_string());
}
let keypair = load_voting_keypair(keystore_path, password_file_path, stdin_inputs)?;
let epoch = get_current_epoch::<E>(genesis_data.genesis_time, spec)
.ok_or_else(|| "Failed to get current epoch. Please check your system time".to_string())?;
let validator_index = get_validator_index_for_exit(client, &keypair.pk, epoch, spec).await?;
let fork = get_beacon_state_fork(client).await?;
let voluntary_exit = VoluntaryExit {
epoch,
validator_index,
};
eprintln!(
"Publishing a voluntary exit for validator: {} \n",
keypair.pk
);
eprintln!("WARNING: THIS IS AN IRREVERSIBLE OPERATION\n");
eprintln!("{}\n", PROMPT);
eprintln!(
"PLEASE VISIT {} TO MAKE SURE YOU UNDERSTAND THE IMPLICATIONS OF A VOLUNTARY EXIT.",
WEBSITE_URL
);
eprintln!("Enter the exit phrase from the above URL to confirm the voluntary exit: ");
let confirmation = account_utils::read_input_from_user(stdin_inputs)?;
if confirmation == CONFIRMATION_PHRASE {
// Sign and publish the voluntary exit to network
let signed_voluntary_exit = voluntary_exit.sign(
&keypair.sk,
&fork,
genesis_data.genesis_validators_root,
spec,
);
client
.post_beacon_pool_voluntary_exits(&signed_voluntary_exit)
.await
.map_err(|e| format!("Failed to publish voluntary exit: {}", e))?;
tokio::time::delay_for(std::time::Duration::from_secs(1)).await; // Provides nicer UX.
eprintln!(
"Successfully validated and published voluntary exit for validator {}",
keypair.pk
);
} else {
eprintln!(
"Did not publish voluntary exit for validator {}. Please check that you entered the correct exit phrase.",
keypair.pk
);
}
Ok(())
}
/// Get the validator index for a given validator public key by querying the beacon node endpoint.
///
/// Returns an error if the beacon endpoint returns an error or the given validator is not eligible for an exit.
async fn get_validator_index_for_exit(
client: &BeaconNodeHttpClient,
validator_pubkey: &PublicKey,
epoch: Epoch,
spec: &ChainSpec,
) -> Result<u64, String> {
let validator_data = client
.get_beacon_states_validator_id(
StateId::Head,
&ValidatorId::PublicKey(validator_pubkey.into()),
)
.await
.map_err(|e| format!("Failed to get validator details: {:?}", e))?
.ok_or_else(|| {
format!(
"Validator {} is not present in the beacon state. \
Please ensure that your beacon node is synced and the validator has been deposited.",
validator_pubkey
)
})?
.data;
match validator_data.status {
ValidatorStatus::Active => {
let eligible_epoch = validator_data
.validator
.activation_epoch
.safe_add(spec.shard_committee_period)
.map_err(|e| format!("Failed to calculate eligible epoch, validator activation epoch too high: {:?}", e))?;
if epoch >= eligible_epoch {
Ok(validator_data.index)
} else {
Err(format!(
"Validator {:?} is not eligible for exit. It will become eligible on epoch {}",
validator_pubkey, eligible_epoch
))
}
}
status => Err(format!(
"Validator {:?} is not eligible for voluntary exit. Validator status: {:?}",
validator_pubkey, status
)),
}
}
/// Get genesis data by querying the beacon node client.
async fn get_geneisis_data(client: &BeaconNodeHttpClient) -> Result<GenesisData, String> {
Ok(client
.get_beacon_genesis()
.await
.map_err(|e| format!("Failed to get beacon genesis: {}", e))?
.data)
}
/// Gets syncing status from beacon node client and returns true if syncing and false otherwise.
async fn is_syncing(client: &BeaconNodeHttpClient) -> Result<bool, String> {
Ok(client
.get_node_syncing()
.await
.map_err(|e| format!("Failed to get sync status: {:?}", e))?
.data
.is_syncing)
}
/// Get fork object for the current state by querying the beacon node client.
async fn get_beacon_state_fork(client: &BeaconNodeHttpClient) -> Result<Fork, String> {
Ok(client
.get_beacon_states_fork(StateId::Head)
.await
.map_err(|e| format!("Failed to get fork: {:?}", e))?
.ok_or_else(|| "Failed to get fork, state not found".to_string())?
.data)
}
/// Calculates the current epoch from the genesis time and current time.
fn get_current_epoch<E: EthSpec>(genesis_time: u64, spec: &ChainSpec) -> Option<Epoch> {
let slot_clock = SystemTimeSlotClock::new(
spec.genesis_slot,
Duration::from_secs(genesis_time),
Duration::from_millis(spec.milliseconds_per_slot),
);
slot_clock.now().map(|s| s.epoch(E::slots_per_epoch()))
}
/// Load the voting keypair by loading and decrypting the keystore.
///
/// If the `password_file_path` is Some, unlock keystore using password in given file
/// otherwise, prompts user for a password to unlock the keystore.
fn load_voting_keypair(
voting_keystore_path: &PathBuf,
password_file_path: Option<&PathBuf>,
stdin_inputs: bool,
) -> Result<Keypair, String> {
let keystore = Keystore::from_json_file(&voting_keystore_path).map_err(|e| {
format!(
"Unable to read keystore JSON {:?}: {:?}",
voting_keystore_path, e
)
})?;
// Get password from password file.
if let Some(password_file) = password_file_path {
validator_dir::unlock_keypair_from_password_path(voting_keystore_path, password_file)
.map_err(|e| format!("Error while decrypting keypair: {:?}", e))
} else {
// Prompt password from user.
eprintln!("");
eprintln!(
"{} for validator in {:?}: ",
PASSWORD_PROMPT, voting_keystore_path
);
let password = account_utils::read_password_from_user(stdin_inputs)?;
match keystore.decrypt_keypair(password.as_ref()) {
Ok(keypair) => {
eprintln!("Password is correct.");
eprintln!("");
std::thread::sleep(std::time::Duration::from_secs(1)); // Provides nicer UX.
Ok(keypair)
}
Err(eth2_keystore::Error::InvalidPassword) => Err("Invalid password".to_string()),
Err(e) => Err(format!("Error while decrypting keypair: {:?}", e)),
}
}
}
#[cfg(test)]
#[cfg(not(debug_assertions))]
mod tests {
use super::*;
use eth2_keystore::KeystoreBuilder;
use std::fs::File;
use std::io::Write;
use tempfile::{tempdir, TempDir};
const PASSWORD: &str = "cats";
const KEYSTORE_NAME: &str = "keystore-m_12381_3600_0_0_0-1595406747.json";
const PASSWORD_FILE: &str = "password.pass";
fn create_and_save_keystore(dir: &TempDir, save_password: bool) -> PublicKey {
let keypair = Keypair::random();
let keystore = KeystoreBuilder::new(&keypair, PASSWORD.as_bytes(), "".into())
.unwrap()
.build()
.unwrap();
// Create a keystore.
File::create(dir.path().join(KEYSTORE_NAME))
.map(|mut file| keystore.to_json_writer(&mut file).unwrap())
.unwrap();
if save_password {
File::create(dir.path().join(PASSWORD_FILE))
.map(|mut file| file.write_all(PASSWORD.as_bytes()).unwrap())
.unwrap();
}
keystore.public_key().unwrap()
}
#[test]
fn test_load_keypair_password_file() {
let dir = tempdir().unwrap();
let expected_pk = create_and_save_keystore(&dir, true);
let kp = load_voting_keypair(
&dir.path().join(KEYSTORE_NAME),
Some(&dir.path().join(PASSWORD_FILE)),
false,
)
.unwrap();
assert_eq!(expected_pk, kp.pk.into());
}
}


@@ -86,6 +86,7 @@ pub fn cli_run(matches: &ArgMatches, validator_dir: PathBuf) -> Result<(), Strin
)
})?;
eprintln!("validator-dir path: {:?}", validator_dir);
// Collect the paths for the keystores that should be imported.
let keystore_paths = match (keystore, keystores_dir) {
(Some(keystore), None) => vec![keystore],


@@ -10,6 +10,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
}
pub fn cli_run(validator_dir: PathBuf) -> Result<(), String> {
eprintln!("validator-dir path: {:?}", validator_dir);
let mgr = ValidatorManager::open(&validator_dir)
.map_err(|e| format!("Unable to read --{}: {:?}", VALIDATOR_DIR_FLAG, e))?;


@@ -1,4 +1,5 @@
pub mod create;
pub mod exit;
pub mod import;
pub mod list;
pub mod recover;
@@ -32,6 +33,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.subcommand(list::cli_app())
.subcommand(recover::cli_app())
.subcommand(slashing_protection::cli_app())
.subcommand(exit::cli_app())
}
pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<(), String> {
@@ -51,6 +53,7 @@ pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<
(slashing_protection::CMD, Some(matches)) => {
slashing_protection::cli_run(matches, env, validator_base_dir)
}
(exit::CMD, Some(matches)) => exit::cli_run(matches, env),
(unknown, _) => Err(format!(
"{} does not have a {} command. See --help",
CMD, unknown


@@ -88,6 +88,9 @@ pub fn cli_run(matches: &ArgMatches, validator_dir: PathBuf) -> Result<(), Strin
let mnemonic_path: Option<PathBuf> = clap_utils::parse_optional(matches, MNEMONIC_FLAG)?;
let stdin_inputs = matches.is_present(STDIN_INPUTS_FLAG);
eprintln!("validator-dir path: {:?}", validator_dir);
eprintln!("secrets-dir path: {:?}", secrets_dir);
ensure_dir_exists(&validator_dir)?;
ensure_dir_exists(&secrets_dir)?;


@@ -44,6 +44,7 @@ pub fn cli_run<T: EthSpec>(
env: Environment<T>,
validator_base_dir: PathBuf,
) -> Result<(), String> {
eprintln!("validator-dir path: {:?}", validator_base_dir);
let slashing_protection_db_path = validator_base_dir.join(SLASHING_PROTECTION_FILENAME);
let testnet_config = env


@@ -1,6 +1,6 @@
[package]
name = "beacon_node"
version = "0.3.2"
version = "0.3.3"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com"]
edition = "2018"


@@ -61,3 +61,4 @@ derivative = "2.1.1"
itertools = "0.9.0"
regex = "1.3.9"
exit-future = "0.2.0"
eth2 = { path = "../../common/eth2" }


@@ -1,5 +1,6 @@
use crate::metrics;
use eth1::{Config as Eth1Config, Eth1Block, Service as HttpService};
use eth2::lighthouse::Eth1SyncStatusData;
use eth2_hashing::hash;
use slog::{debug, error, trace, Logger};
use ssz::{Decode, Encode};
@@ -9,6 +10,7 @@ use std::cmp::Ordering;
use std::collections::HashMap;
use std::iter::DoubleEndedIterator;
use std::marker::PhantomData;
use std::time::{SystemTime, UNIX_EPOCH};
use store::{DBColumn, Error as StoreError, StoreItem};
use task_executor::TaskExecutor;
use types::{
@@ -19,6 +21,11 @@ use types::{
type BlockNumber = u64;
type Eth1DataVoteCount = HashMap<(Eth1Data, BlockNumber), u64>;
/// We will declare ourself synced with the Eth1 chain, even if we are this many blocks behind.
///
/// This number (8) was chosen somewhat arbitrarily.
const ETH1_SYNC_TOLERANCE: u64 = 8;
#[derive(Debug)]
pub enum Error {
/// Unable to return an Eth1Data for the given epoch.
@@ -53,6 +60,84 @@ impl From<safe_arith::ArithError> for Error {
}
}
/// Returns an `Eth1SyncStatusData` given some parameters:
///
/// - `latest_cached_block`: The latest eth1 block in our cache, if any.
/// - `head_block`: The block at the very head of our eth1 node (ignoring follow distance, etc).
/// - `genesis_time`: beacon chain genesis time.
/// - `current_slot`: current beacon chain slot.
/// - `spec`: current beacon chain specification.
fn get_sync_status<T: EthSpec>(
latest_cached_block: Option<&Eth1Block>,
head_block: Option<&Eth1Block>,
genesis_time: u64,
current_slot: Slot,
spec: &ChainSpec,
) -> Option<Eth1SyncStatusData> {
let period = T::SlotsPerEth1VotingPeriod::to_u64();
// Since `period` is a "constant", we assume it is set sensibly.
let voting_period_start_slot = (current_slot / period) * period;
let voting_period_start_timestamp = {
let period_start = slot_start_seconds::<T>(
genesis_time,
spec.milliseconds_per_slot,
voting_period_start_slot,
);
let eth1_follow_distance_seconds = spec
.seconds_per_eth1_block
.saturating_mul(spec.eth1_follow_distance);
period_start.saturating_sub(eth1_follow_distance_seconds)
};
let latest_cached_block_number = latest_cached_block.map(|b| b.number);
let latest_cached_block_timestamp = latest_cached_block.map(|b| b.timestamp);
let head_block_number = head_block.map(|b| b.number);
let head_block_timestamp = head_block.map(|b| b.timestamp);
let eth1_node_sync_status_percentage = if let Some(head_block) = head_block {
let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
let head_age = now.saturating_sub(head_block.timestamp);
if head_age < ETH1_SYNC_TOLERANCE * spec.seconds_per_eth1_block {
// Always indicate we are fully synced if it's within the sync threshold.
100.0
} else {
let blocks_behind = head_age
.checked_div(spec.seconds_per_eth1_block)
.unwrap_or(0);
let part = head_block.number as f64;
let whole = head_block.number.saturating_add(blocks_behind) as f64;
if whole > 0.0 {
(part / whole) * 100.0
} else {
// Avoids a divide-by-zero.
0.0
}
}
} else {
// Always return 0% synced if the head block of the eth1 chain is unknown.
0.0
};
// Lighthouse is "cached and ready" when it has cached enough blocks to cover the start of the
// current voting period.
let lighthouse_is_cached_and_ready =
latest_cached_block_timestamp.map_or(false, |t| t >= voting_period_start_timestamp);
Some(Eth1SyncStatusData {
head_block_number,
head_block_timestamp,
latest_cached_block_number,
latest_cached_block_timestamp,
voting_period_start_timestamp,
eth1_node_sync_status_percentage,
lighthouse_is_cached_and_ready,
})
}
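The percentage calculation in `get_sync_status` can be sketched in isolation. This is an illustrative standalone function, not the crate's API: the constant mirrors `ETH1_SYNC_TOLERANCE` from the diff, and the head block number, timestamp, and clock are passed in explicitly.

```rust
/// Within this many blocks of the head, we report 100% synced.
const ETH1_SYNC_TOLERANCE: u64 = 8;

/// Approximates eth1 sync progress from the head block's age. When the head
/// is older than the tolerance, progress is estimated as
/// head_number / (head_number + blocks_behind).
fn sync_percentage(
    head_number: u64,
    head_timestamp: u64,
    now: u64,
    seconds_per_eth1_block: u64,
) -> f64 {
    let head_age = now.saturating_sub(head_timestamp);
    if head_age < ETH1_SYNC_TOLERANCE * seconds_per_eth1_block {
        // Always indicate full sync within the tolerance window.
        100.0
    } else {
        let blocks_behind = head_age.checked_div(seconds_per_eth1_block).unwrap_or(0);
        let part = head_number as f64;
        let whole = head_number.saturating_add(blocks_behind) as f64;
        if whole > 0.0 {
            (part / whole) * 100.0
        } else {
            // Avoids a divide-by-zero.
            0.0
        }
    }
}
```

For example, a head block 100 blocks old (at 14s per block) with local head number 100 reports 50%: the node estimates the true chain tip at block 200.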
#[derive(Encode, Decode, Clone)]
pub struct SszEth1 {
use_dummy_backend: bool,
@@ -143,6 +228,22 @@ where
}
}
/// Returns a status indicating how synced our caches are with the eth1 chain.
pub fn sync_status(
&self,
genesis_time: u64,
current_slot: Slot,
spec: &ChainSpec,
) -> Option<Eth1SyncStatusData> {
get_sync_status::<E>(
self.backend.latest_cached_block().as_ref(),
self.backend.head_block().as_ref(),
genesis_time,
current_slot,
spec,
)
}
/// Instantiate `Eth1Chain` from a persisted `SszEth1`.
///
/// The `Eth1Chain` will have the same caches as the persisted `SszEth1`.
@@ -195,6 +296,14 @@ pub trait Eth1ChainBackend<T: EthSpec>: Sized + Send + Sync {
spec: &ChainSpec,
) -> Result<Vec<Deposit>, Error>;
/// Returns the latest block stored in the cache. Used to obtain an idea of how up-to-date the
/// beacon node eth1 cache is.
fn latest_cached_block(&self) -> Option<Eth1Block>;
/// Returns the block at the head of the chain (ignoring follow distance, etc). Used to obtain
/// an idea of how up-to-date the remote eth1 node is.
fn head_block(&self) -> Option<Eth1Block>;
/// Encode the `Eth1ChainBackend` instance to bytes.
fn as_bytes(&self) -> Vec<u8>;
@@ -241,6 +350,14 @@ impl<T: EthSpec> Eth1ChainBackend<T> for DummyEth1ChainBackend<T> {
Ok(vec![])
}
fn latest_cached_block(&self) -> Option<Eth1Block> {
None
}
fn head_block(&self) -> Option<Eth1Block> {
None
}
/// Return empty Vec<u8> for dummy backend.
fn as_bytes(&self) -> Vec<u8> {
Vec::new()
@@ -400,6 +517,14 @@ impl<T: EthSpec> Eth1ChainBackend<T> for CachingEth1Backend<T> {
}
}
fn latest_cached_block(&self) -> Option<Eth1Block> {
self.core.latest_cached_block()
}
fn head_block(&self) -> Option<Eth1Block> {
self.core.head_block()
}
/// Return encoded byte representation of the block and deposit caches.
fn as_bytes(&self) -> Vec<u8> {
self.core.as_bytes()


@@ -334,6 +334,7 @@ where
chain: self.beacon_chain.clone(),
network_tx: self.network_send.clone(),
network_globals: self.network_globals.clone(),
eth1_service: self.eth1_service.clone(),
log: log.clone(),
});
@@ -590,7 +591,7 @@ where
})?
};
self.eth1_service = None;
self.eth1_service = Some(backend.core.clone());
// Starts the service that connects to an eth1 node and periodically updates caches.
backend.start(context.executor);


@@ -31,3 +31,4 @@ libflate = "1.0.2"
lighthouse_metrics = { path = "../../common/lighthouse_metrics"}
lazy_static = "1.4.0"
task_executor = { path = "../../common/task_executor" }
eth2 = { path = "../../common/eth2" }


@@ -1,6 +1,7 @@
use ssz_derive::{Decode, Encode};
use std::ops::RangeInclusive;
use types::{Eth1Data, Hash256};
pub use eth2::lighthouse::Eth1Block;
#[derive(Debug, PartialEq, Clone)]
pub enum Error {
@@ -15,28 +16,6 @@ pub enum Error {
Internal(String),
}
/// A block of the eth1 chain.
///
/// Contains all information required to add a `BlockCache` entry.
#[derive(Debug, PartialEq, Clone, Eq, Hash, Encode, Decode)]
pub struct Eth1Block {
pub hash: Hash256,
pub timestamp: u64,
pub number: u64,
pub deposit_root: Option<Hash256>,
pub deposit_count: Option<u64>,
}
impl Eth1Block {
pub fn eth1_data(self) -> Option<Eth1Data> {
Some(Eth1Data {
deposit_root: self.deposit_root?,
deposit_count: self.deposit_count?,
block_hash: self.hash,
})
}
}
/// Stores block and deposit contract information and provides queries based upon the block
/// timestamp.
#[derive(Debug, PartialEq, Clone, Default, Encode, Decode)]
@@ -55,6 +34,16 @@ impl BlockCache {
self.blocks.is_empty()
}
/// Returns the earliest (lowest timestamp) block, if any.
pub fn earliest_block(&self) -> Option<&Eth1Block> {
self.blocks.first()
}
/// Returns the latest (highest timestamp) block, if any.
pub fn latest_block(&self) -> Option<&Eth1Block> {
self.blocks.last()
}
/// Returns the timestamp of the earliest block in the cache (if any).
pub fn earliest_block_timestamp(&self) -> Option<u64> {
self.blocks.first().map(|block| block.timestamp)
@@ -181,6 +170,7 @@ impl BlockCache {
#[cfg(test)]
mod tests {
use super::*;
use types::Hash256;
fn get_block(i: u64, interval_secs: u64) -> Eth1Block {
Eth1Block {


@@ -304,7 +304,7 @@ pub mod tests {
block_number: 42,
data: EXAMPLE_LOG.to_vec(),
};
DepositLog::from_log(&log, &spec).expect("should decode log")
log.to_deposit_log(&spec).expect("should decode log")
}
#[test]


@@ -1,11 +1,12 @@
use super::http::Log;
use ssz::Decode;
use ssz_derive::{Decode, Encode};
use state_processing::per_block_processing::signature_sets::{
deposit_pubkey_signature_message, deposit_signature_set,
};
use types::{ChainSpec, DepositData, Hash256, PublicKeyBytes, SignatureBytes};
pub use eth2::lighthouse::DepositLog;
/// The following constants define the layout of bytes in the deposit contract `DepositEvent`. The
/// event bytes are formatted according to the Ethereum ABI.
const PUBKEY_START: usize = 192;
@@ -19,22 +20,10 @@ const SIG_LEN: usize = 96;
const INDEX_START: usize = SIG_START + 96 + 32;
const INDEX_LEN: usize = 8;
/// A fully parsed eth1 deposit contract log.
#[derive(Debug, PartialEq, Clone, Encode, Decode)]
pub struct DepositLog {
pub deposit_data: DepositData,
/// The block number of the log that included this `DepositData`.
pub block_number: u64,
/// The index included with the deposit log.
pub index: u64,
/// True if the signature is valid.
pub signature_is_valid: bool,
}
impl DepositLog {
impl Log {
/// Attempts to parse a raw `Log` from the deposit contract into a `DepositLog`.
pub fn from_log(log: &Log, spec: &ChainSpec) -> Result<Self, String> {
let bytes = &log.data;
pub fn to_deposit_log(&self, spec: &ChainSpec) -> Result<DepositLog, String> {
let bytes = &self.data;
let pubkey = bytes
.get(PUBKEY_START..PUBKEY_START + PUBKEY_LEN)
@@ -68,7 +57,7 @@ impl DepositLog {
Ok(DepositLog {
deposit_data,
block_number: log.block_number,
block_number: self.block_number,
index: u64::from_ssz_bytes(index).map_err(|e| format!("Invalid index ssz: {:?}", e))?,
signature_is_valid,
})
@@ -77,7 +66,6 @@ impl DepositLog {
#[cfg(test)]
pub mod tests {
use super::*;
use crate::http::Log;
use types::{EthSpec, MainnetEthSpec};
@@ -113,6 +101,7 @@ pub mod tests {
block_number: 42,
data: EXAMPLE_LOG.to_vec(),
};
DepositLog::from_log(&log, &MainnetEthSpec::default_spec()).expect("should decode log");
log.to_deposit_log(&MainnetEthSpec::default_spec())
.expect("should decode log");
}
}


@@ -39,6 +39,13 @@ pub enum Eth1NetworkId {
Custom(u64),
}
/// Used to identify a block when querying the Eth1 node.
#[derive(Clone, Copy)]
pub enum BlockQuery {
Number(u64),
Latest,
}
impl Into<u64> for Eth1NetworkId {
fn into(self) -> u64 {
match self {
@@ -107,11 +114,15 @@ pub async fn get_block_number(endpoint: &str, timeout: Duration) -> Result<u64,
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub async fn get_block(
endpoint: &str,
block_number: u64,
query: BlockQuery,
timeout: Duration,
) -> Result<Block, String> {
let query_param = match query {
BlockQuery::Number(block_number) => format!("0x{:x}", block_number),
BlockQuery::Latest => "latest".to_string(),
};
let params = json!([
format!("0x{:x}", block_number),
query_param,
false // do not return full tx objects.
]);
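The `BlockQuery` change boils down to how the block parameter of `eth_getBlockByNumber` is encoded: either a hex-encoded block number or the literal string `"latest"`. A minimal sketch of that mapping (extracted from the diff, not the crate's exact code):

```rust
/// Used to identify a block when querying the Eth1 node.
#[derive(Clone, Copy)]
enum BlockQuery {
    Number(u64),
    Latest,
}

/// Produces the first JSON-RPC parameter for `eth_getBlockByNumber`.
fn query_param(query: BlockQuery) -> String {
    match query {
        // Block numbers are hex-encoded with a `0x` prefix.
        BlockQuery::Number(block_number) => format!("0x{:x}", block_number),
        BlockQuery::Latest => "latest".to_string(),
    }
}
```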


@@ -1,6 +1,6 @@
use crate::Config;
use crate::{
block_cache::BlockCache,
block_cache::{BlockCache, Eth1Block},
deposit_cache::{DepositCache, SszDepositCache},
};
use parking_lot::RwLock;
@@ -29,6 +29,7 @@ pub struct Inner {
pub block_cache: RwLock<BlockCache>,
pub deposit_cache: RwLock<DepositUpdater>,
pub config: RwLock<Config>,
pub remote_head_block: RwLock<Option<Eth1Block>>,
pub spec: ChainSpec,
}
@@ -86,6 +87,9 @@ impl SszEth1Cache {
cache: self.deposit_cache.to_deposit_cache()?,
last_processed_block: self.last_processed_block,
}),
// Set the remote head_block to `None` when creating a new instance. We only care about
// present and future eth1 nodes.
remote_head_block: RwLock::new(None),
config: RwLock::new(config),
spec,
})


@@ -3,10 +3,10 @@ use crate::{
block_cache::{BlockCache, Error as BlockCacheError, Eth1Block},
deposit_cache::Error as DepositCacheError,
http::{
get_block, get_block_number, get_deposit_logs_in_range, get_network_id, Eth1NetworkId, Log,
get_block, get_block_number, get_deposit_logs_in_range, get_network_id, BlockQuery,
Eth1NetworkId, Log,
},
inner::{DepositUpdater, Inner},
DepositLog,
};
use futures::{future::TryFutureExt, stream, stream::TryStreamExt, StreamExt};
use parking_lot::{RwLock, RwLockReadGuard};
@@ -148,6 +148,7 @@ impl Service {
deposit_cache: RwLock::new(DepositUpdater::new(
config.deposit_contract_deploy_block,
)),
remote_head_block: RwLock::new(None),
config: RwLock::new(config),
spec,
}),
@@ -206,6 +207,21 @@ impl Service {
self.inner.block_cache.read().latest_block_timestamp()
}
/// Returns the latest head block returned from an Eth1 node.
///
/// ## Note
///
/// This is simply the head of the Eth1 chain, with no regard to follow distance or the
/// voting period start.
pub fn head_block(&self) -> Option<Eth1Block> {
self.inner.remote_head_block.read().as_ref().cloned()
}
/// Returns the latest cached block.
pub fn latest_cached_block(&self) -> Option<Eth1Block> {
self.inner.block_cache.read().latest_block().cloned()
}
/// Returns the lowest block number stored.
pub fn lowest_block_number(&self) -> Option<u64> {
self.inner.block_cache.read().lowest_block_number()
@@ -301,9 +317,16 @@ impl Service {
pub async fn update(
&self,
) -> Result<(DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), String> {
let remote_head_block = download_eth1_block(self.inner.clone(), None)
.map_err(|e| format!("Failed to update Eth1 service: {:?}", e))
.await?;
let remote_head_block_number = Some(remote_head_block.number);
*self.inner.remote_head_block.write() = Some(remote_head_block);
let update_deposit_cache = async {
let outcome = self
.update_deposit_cache()
.update_deposit_cache(remote_head_block_number)
.await
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e))?;
@@ -314,12 +337,12 @@ impl Service {
"logs_imported" => outcome.logs_imported,
"last_processed_eth1_block" => self.inner.deposit_cache.read().last_processed_block,
);
Ok(outcome)
Ok::<_, String>(outcome)
};
let update_block_cache = async {
let outcome = self
.update_block_cache()
.update_block_cache(remote_head_block_number)
.await
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e))?;
@@ -330,10 +353,13 @@ impl Service {
"blocks_imported" => outcome.blocks_imported,
"head_block" => outcome.head_block_number,
);
Ok(outcome)
Ok::<_, String>(outcome)
};
futures::try_join!(update_deposit_cache, update_block_cache)
let (deposit_outcome, block_outcome) =
futures::try_join!(update_deposit_cache, update_block_cache)?;
Ok((deposit_outcome, block_outcome))
}
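The switch from `Ok(outcome)` to `Ok::<_, String>(outcome)` pins the error type of each update block so the results can be joined into one `Result`; without the turbofish, the compiler cannot infer the error type from the block alone. An illustration of the same point, using closures in place of the diff's async blocks (names here are illustrative):

```rust
fn demo() -> Result<(u32, u32), String> {
    // Each closure's `Result` error type is ambiguous on its own, so it is
    // pinned explicitly with `Ok::<_, String>(..)`, mirroring the diff.
    let update_deposit_cache = || Ok::<_, String>(1u32);
    let update_block_cache = || Ok::<_, String>(2u32);
    // Analogous to `futures::try_join!`: both must succeed to produce a tuple.
    Ok((update_deposit_cache()?, update_block_cache()?))
}
```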
/// A looping future that updates the cache, then waits `config.auto_update_interval` before
@@ -413,13 +439,19 @@ impl Service {
/// Will process no more than `BLOCKS_PER_LOG_QUERY * MAX_LOG_REQUESTS_PER_UPDATE` blocks in a
/// single update.
///
/// If `remote_highest_block_opt` is `Some`, use that value instead of querying `self.endpoint`
/// for the head of the eth1 chain.
///
/// ## Resolves with
///
/// - Ok(_) if the update was successful (the cache may or may not have been modified).
/// - Err(_) if there is an error.
///
/// Emits logs for debugging and errors.
pub async fn update_deposit_cache(&self) -> Result<DepositCacheUpdateOutcome, Error> {
pub async fn update_deposit_cache(
&self,
remote_highest_block_opt: Option<u64>,
) -> Result<DepositCacheUpdateOutcome, Error> {
let endpoint = self.config().endpoint.clone();
let follow_distance = self.config().follow_distance;
let deposit_contract_address = self.config().deposit_contract_address.clone();
@@ -437,7 +469,13 @@ impl Service {
.map(|n| n + 1)
.unwrap_or_else(|| self.config().deposit_contract_deploy_block);
let range = get_new_block_numbers(&endpoint, next_required_block, follow_distance).await?;
let range = get_new_block_numbers(
&endpoint,
remote_highest_block_opt,
next_required_block,
follow_distance,
)
.await?;
let block_number_chunks = if let Some(range) = range {
range
@@ -483,7 +521,7 @@ impl Service {
log_chunk
.iter()
.map(|raw_log| {
DepositLog::from_log(&raw_log, self.inner.spec()).map_err(|error| {
raw_log.to_deposit_log(self.inner.spec()).map_err(|error| {
Error::FailedToParseDepositLog {
block_range: block_range.clone(),
error,
@@ -548,13 +586,19 @@ impl Service {
///
/// If configured, prunes the block cache after importing new blocks.
///
/// If `remote_highest_block_opt` is `Some`, use that value instead of querying `self.endpoint`
/// for the head of the eth1 chain.
///
/// ## Resolves with
///
/// - Ok(_) if the update was successful (the cache may or may not have been modified).
/// - Err(_) if there is an error.
///
/// Emits logs for debugging and errors.
pub async fn update_block_cache(&self) -> Result<BlockCacheUpdateOutcome, Error> {
pub async fn update_block_cache(
&self,
remote_highest_block_opt: Option<u64>,
) -> Result<BlockCacheUpdateOutcome, Error> {
let block_cache_truncation = self.config().block_cache_truncation;
let max_blocks_per_update = self
.config()
@@ -572,7 +616,13 @@ impl Service {
let endpoint = self.config().endpoint.clone();
let follow_distance = self.config().follow_distance;
let range = get_new_block_numbers(&endpoint, next_required_block, follow_distance).await?;
let range = get_new_block_numbers(
&endpoint,
remote_highest_block_opt,
next_required_block,
follow_distance,
)
.await?;
// Map the range of required blocks into a Vec.
//
// If the required range is larger than the size of the cache, drop the existing cache
@@ -623,7 +673,7 @@ impl Service {
|mut block_numbers| async {
match block_numbers.next() {
Some(block_number) => {
match download_eth1_block(self.inner.clone(), block_number).await {
match download_eth1_block(self.inner.clone(), Some(block_number)).await {
Ok(eth1_block) => Ok(Some((eth1_block, block_numbers))),
Err(e) => Err(e),
}
@@ -708,13 +758,17 @@ impl Service {
/// the locally stored best block.
async fn get_new_block_numbers<'a>(
endpoint: &str,
remote_highest_block_opt: Option<u64>,
next_required_block: u64,
follow_distance: u64,
) -> Result<Option<RangeInclusive<u64>>, Error> {
let remote_highest_block =
let remote_highest_block = if let Some(block_number) = remote_highest_block_opt {
block_number
} else {
get_block_number(endpoint, Duration::from_millis(BLOCK_NUMBER_TIMEOUT_MILLIS))
.map_err(Error::GetBlockNumberFailed)
.await?;
.await?
};
let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
if next_required_block <= remote_follow_block {
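The range logic in `get_new_block_numbers` can be sketched on its own. This is an illustrative reconstruction from the hunk above: the service only imports blocks at least `follow_distance` behind the remote head, and yields `None` when nothing new is required.

```rust
use std::ops::RangeInclusive;

/// Returns the inclusive range of block numbers that still need importing,
/// or `None` if the cache is already caught up to the follow block.
fn new_block_range(
    remote_highest_block: u64,
    next_required_block: u64,
    follow_distance: u64,
) -> Option<RangeInclusive<u64>> {
    // Stay `follow_distance` blocks behind the remote head to avoid re-orgs.
    let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
    if next_required_block <= remote_follow_block {
        Some(next_required_block..=remote_follow_block)
    } else {
        None
    }
}
```

With a remote head at 100 and a follow distance of 10, a node needing block 50 next would import `50..=90`.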
@@ -739,26 +793,37 @@ async fn get_new_block_numbers<'a>(
/// Downloads the `(block, deposit_root, deposit_count)` tuple from an eth1 node for the given
/// `block_number`.
///
/// Set `block_number_opt = None` to get the "latest" eth1 block (i.e., the head).
///
/// Performs three async calls to an Eth1 HTTP JSON RPC endpoint.
async fn download_eth1_block(cache: Arc<Inner>, block_number: u64) -> Result<Eth1Block, Error> {
async fn download_eth1_block(
cache: Arc<Inner>,
block_number_opt: Option<u64>,
) -> Result<Eth1Block, Error> {
let endpoint = cache.config.read().endpoint.clone();
let deposit_root = cache
.deposit_cache
.read()
.cache
.get_deposit_root_from_cache(block_number);
let deposit_root = block_number_opt.and_then(|block_number| {
cache
.deposit_cache
.read()
.cache
.get_deposit_root_from_cache(block_number)
});
let deposit_count = cache
.deposit_cache
.read()
.cache
.get_deposit_count_from_cache(block_number);
let deposit_count = block_number_opt.and_then(|block_number| {
cache
.deposit_cache
.read()
.cache
.get_deposit_count_from_cache(block_number)
});
// Performs an `eth_getBlockByNumber` call to an eth1 node.
let http_block = get_block(
&endpoint,
block_number,
block_number_opt
.map(BlockQuery::Number)
.unwrap_or_else(|| BlockQuery::Latest),
Duration::from_millis(GET_BLOCK_TIMEOUT_MILLIS),
)
.map_err(Error::BlockDownloadFailed)


@@ -1,8 +1,8 @@
#![cfg(test)]
use environment::{Environment, EnvironmentBuilder};
use eth1::http::{get_deposit_count, get_deposit_logs_in_range, get_deposit_root, Block, Log};
use eth1::DepositCache;
use eth1::{Config, Service};
use eth1::{DepositCache, DepositLog};
use eth1_test_rig::GanacheEth1Instance;
use futures::compat::Future01CompatExt;
use merkle_proof::verify_merkle_proof;
@@ -146,16 +146,16 @@ mod eth1_cache {
}
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should update deposit cache");
service
.update_block_cache()
.update_block_cache(None)
.await
.expect("should update block cache");
service
.update_block_cache()
.update_block_cache(None)
.await
.expect("should update cache when nothing has changed");
@@ -209,11 +209,11 @@ mod eth1_cache {
}
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should update deposit cache");
service
.update_block_cache()
.update_block_cache(None)
.await
.expect("should update block cache");
@@ -256,11 +256,11 @@ mod eth1_cache {
eth1.ganache.evm_mine().await.expect("should mine block")
}
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should update deposit cache");
service
.update_block_cache()
.update_block_cache(None)
.await
.expect("should update block cache");
}
@@ -300,12 +300,15 @@ mod eth1_cache {
eth1.ganache.evm_mine().await.expect("should mine block")
}
futures::try_join!(
service.update_deposit_cache(),
service.update_deposit_cache()
service.update_deposit_cache(None),
service.update_deposit_cache(None)
)
.expect("should perform two simultaneous updates of deposit cache");
futures::try_join!(service.update_block_cache(), service.update_block_cache())
.expect("should perform two simultaneous updates of block cache");
futures::try_join!(
service.update_block_cache(None),
service.update_block_cache(None)
)
.expect("should perform two simultaneous updates of block cache");
assert!(service.block_cache_len() >= n, "should grow the cache");
}
@@ -351,12 +354,12 @@ mod deposit_tree {
}
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should perform update");
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should perform update when nothing has changed");
@@ -426,8 +429,8 @@ mod deposit_tree {
}
futures::try_join!(
service.update_deposit_cache(),
service.update_deposit_cache()
service.update_deposit_cache(None),
service.update_deposit_cache(None)
)
.expect("should perform two updates concurrently");
@@ -477,7 +480,7 @@ mod deposit_tree {
let logs: Vec<_> = blocking_deposit_logs(&eth1, 0..block_number)
.await
.iter()
.map(|raw| DepositLog::from_log(raw, spec).expect("should parse deposit log"))
.map(|raw| raw.to_deposit_log(spec).expect("should parse deposit log"))
.inspect(|log| {
tree.insert_log(log.clone())
.expect("should add consecutive logs")
@@ -535,11 +538,16 @@ mod deposit_tree {
/// Tests for the base HTTP requests and response handlers.
mod http {
use super::*;
use eth1::http::BlockQuery;
async fn get_block(eth1: &GanacheEth1Instance, block_number: u64) -> Block {
eth1::http::get_block(&eth1.endpoint(), block_number, timeout())
.await
.expect("should get block number")
eth1::http::get_block(
&eth1.endpoint(),
BlockQuery::Number(block_number),
timeout(),
)
.await
.expect("should get block number")
}
#[tokio::test]
@@ -668,7 +676,7 @@ mod fast {
}
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should perform update");
@@ -736,7 +744,7 @@ mod persist {
}
service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.expect("should perform update");
@@ -748,7 +756,7 @@ mod persist {
let deposit_count = service.deposit_cache_len();
service
.update_block_cache()
.update_block_cache(None)
.await
.expect("should perform update");


@@ -3,6 +3,7 @@
pub use self::peerdb::*;
use crate::discovery::{subnet_predicate, Discovery, DiscoveryEvent, TARGET_SUBNET_PEERS};
use crate::rpc::{GoodbyeReason, MetaData, Protocol, RPCError, RPCResponseErrorCode};
use crate::types::SyncState;
use crate::{error, metrics};
use crate::{EnrExt, NetworkConfig, NetworkGlobals, PeerId, SubnetDiscovery};
use futures::prelude::*;
@@ -844,16 +845,19 @@ impl<TSpec: EthSpec> Stream for PeerManager<TSpec> {
}
}
loop {
match self.status_peers.poll_next_unpin(cx) {
Poll::Ready(Some(Ok(peer_id))) => {
self.status_peers.insert(peer_id.clone());
self.events.push(PeerManagerEvent::Status(peer_id))
if !matches!(self.network_globals.sync_state(), SyncState::SyncingFinalized{..}|SyncState::SyncingHead{..})
{
loop {
match self.status_peers.poll_next_unpin(cx) {
Poll::Ready(Some(Ok(peer_id))) => {
self.status_peers.insert(peer_id.clone());
self.events.push(PeerManagerEvent::Status(peer_id))
}
Poll::Ready(Some(Err(e))) => {
error!(self.log, "Failed to check for peers to ping"; "error" => e.to_string())
}
Poll::Ready(None) | Poll::Pending => break,
}
Poll::Ready(Some(Err(e))) => {
error!(self.log, "Failed to check for peers to ping"; "error" => e.to_string())
}
Poll::Ready(None) | Poll::Pending => break,
}
}


@@ -7,6 +7,7 @@
//! The scoring algorithms are currently experimental.
use serde::Serialize;
use std::time::Instant;
use tokio::time::Duration;
lazy_static! {
static ref HALFLIFE_DECAY: f64 = -(2.0f64.ln()) / SCORE_HALFLIFE;
@@ -25,7 +26,7 @@ const MIN_SCORE: f64 = -100.0;
/// The halflife of a peer's score. I.e the number of seconds it takes for the score to decay to half its value.
const SCORE_HALFLIFE: f64 = 600.0;
/// The number of seconds we ban a peer for before their score begins to decay.
const BANNED_BEFORE_DECAY: u64 = 1800;
const BANNED_BEFORE_DECAY: Duration = Duration::from_secs(1800);
/// A collection of actions a peer can perform which will adjust its score.
/// Each variant has an associated score change.
@@ -187,6 +188,11 @@ impl Score {
new_score = MIN_SCORE;
}
if self.score > MIN_SCORE_BEFORE_BAN && new_score <= MIN_SCORE_BEFORE_BAN {
// Ban this peer for at least BANNED_BEFORE_DECAY before its score may decay.
self.last_updated += BANNED_BEFORE_DECAY;
}
self.score = new_score;
}
@@ -213,26 +219,18 @@ impl Score {
/// Applies time-based logic such as decay rates to the score.
/// This function should be called periodically.
pub fn update(&mut self) {
// Apply decay logic
//
// There are two distinct decay processes: one for banned peers and one for all others. If
// the score is below the banning threshold and the duration since it was last updated is
// shorter than the banning threshold, we do nothing.
let now = Instant::now();
if self.score <= MIN_SCORE_BEFORE_BAN
&& now
.checked_duration_since(self.last_updated)
.map(|d| d.as_secs())
<= Some(BANNED_BEFORE_DECAY)
{
// The peer is banned and still within the ban timeout. Do not update its score.
// Update last_updated so that the decay begins correctly when ready.
self.last_updated = now;
return;
}
self.update_at(Instant::now())
}
/// Applies time-based logic such as decay rates to the score with the given now value.
/// This private sub function is mainly used for testing.
fn update_at(&mut self, now: Instant) {
// Decay the current score
// Using exponential decay based on a constant half life.
// It is important that we use `checked_duration_since` here instead of `elapsed`, since
// we set last_updated to the future when banning peers. Therefore `checked_duration_since`
// will return None in this case and the score does not get decayed.
if let Some(secs_since_update) = now
.checked_duration_since(self.last_updated)
.map(|d| d.as_secs())
@@ -277,4 +275,23 @@ mod tests {
score.add(change);
assert_eq!(score.score(), DEFAULT_SCORE + change);
}
#[test]
fn test_ban_time() {
let mut score = Score::default();
let now = Instant::now();
let change = MIN_SCORE_BEFORE_BAN;
score.add(change);
assert_eq!(score.score(), MIN_SCORE_BEFORE_BAN);
assert_eq!(score.state(), ScoreState::Banned);
score.update_at(now + BANNED_BEFORE_DECAY);
assert_eq!(score.score(), MIN_SCORE_BEFORE_BAN);
assert_eq!(score.state(), ScoreState::Banned);
score.update_at(now + BANNED_BEFORE_DECAY + Duration::from_secs(1));
assert!(score.score() > MIN_SCORE_BEFORE_BAN);
assert_eq!(score.state(), ScoreState::Disconnected);
}
}
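The score changes above combine two mechanisms: exponential half-life decay, and the ban trick of pushing `last_updated` into the future so that `checked_duration_since` returns `None` (and no decay occurs) until the ban period elapses. A minimal sketch under those assumptions, with illustrative names rather than the crate's types:

```rust
use std::time::Instant;

/// The halflife of a peer's score, in seconds.
const SCORE_HALFLIFE: f64 = 600.0;

/// Applies half-life decay to `score` at time `now`.
fn decayed(score: f64, last_updated: Instant, now: Instant) -> f64 {
    match now.checked_duration_since(last_updated) {
        // `last_updated` is in the future: the peer is banned, so the
        // score is left untouched.
        None => score,
        Some(elapsed) => {
            // Exponential decay with a constant half life:
            // score * 2^(-elapsed / halflife).
            let decay = (-(2.0f64.ln()) / SCORE_HALFLIFE * elapsed.as_secs_f64()).exp();
            score * decay
        }
    }
}
```

After exactly one half-life (600s), a score of -60.0 decays to -30.0; while `last_updated` sits in the future, the score is returned unchanged.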


@@ -10,6 +10,9 @@ pub enum SyncState {
/// The node is performing a long-range (batch) sync over one or many head chains.
/// In this state parent lookups are disabled.
SyncingHead { start_slot: Slot, target_slot: Slot },
/// The node has identified the need for sync operations and is transitioning to a syncing
/// state.
SyncTransition,
/// The node is up to date with all known peers and is connected to at least one
/// fully synced peer. In this state, parent lookups are enabled.
Synced,
@@ -25,6 +28,7 @@ impl PartialEq for SyncState {
(SyncState::SyncingHead { .. }, SyncState::SyncingHead { .. }) => true,
(SyncState::Synced, SyncState::Synced) => true,
(SyncState::Stalled, SyncState::Stalled) => true,
(SyncState::SyncTransition, SyncState::SyncTransition) => true,
_ => false,
}
}
@@ -36,6 +40,7 @@ impl SyncState {
match self {
SyncState::SyncingFinalized { .. } => true,
SyncState::SyncingHead { .. } => true,
SyncState::SyncTransition => true,
SyncState::Synced => false,
SyncState::Stalled => false,
}
@@ -54,6 +59,7 @@ impl std::fmt::Display for SyncState {
SyncState::SyncingHead { .. } => write!(f, "Syncing Head Chain"),
SyncState::Synced { .. } => write!(f, "Synced"),
SyncState::Stalled { .. } => write!(f, "Stalled"),
SyncState::SyncTransition => write!(f, "Searching syncing peers"),
}
}
}


@@ -114,7 +114,7 @@ impl Eth1GenesisService {
loop {
let update_result = eth1_service
.update_deposit_cache()
.update_deposit_cache(None)
.await
.map_err(|e| format!("{:?}", e));
@@ -156,7 +156,7 @@ impl Eth1GenesisService {
}
// Download new eth1 blocks into the cache.
let blocks_imported = match eth1_service.update_block_cache().await {
let blocks_imported = match eth1_service.update_block_cache(None).await {
Ok(outcome) => {
debug!(
log,


@@ -63,6 +63,7 @@ pub struct Context<T: BeaconChainTypes> {
pub chain: Option<Arc<BeaconChain<T>>>,
pub network_tx: Option<UnboundedSender<NetworkMessage<T::EthSpec>>>,
pub network_globals: Option<Arc<NetworkGlobals<T::EthSpec>>>,
pub eth1_service: Option<eth1::Service>,
pub log: Logger,
}
@@ -300,6 +301,19 @@ pub fn serve<T: BeaconChainTypes>(
}
});
// Create a `warp` filter that provides access to the Eth1 service.
let inner_ctx = ctx.clone();
let eth1_service_filter = warp::any()
.map(move || inner_ctx.eth1_service.clone())
.and_then(|eth1_service| async move {
match eth1_service {
Some(eth1_service) => Ok(eth1_service),
None => Err(warp_utils::reject::custom_not_found(
"The Eth1 service is not started. Use --eth1 on the CLI.".to_string(),
)),
}
});
// Create a `warp` filter that rejects requests whilst the node is syncing.
let not_while_syncing_filter = warp::any()
.and(network_globals.clone())
@@ -330,7 +344,7 @@ pub fn serve<T: BeaconChainTypes>(
)))
}
}
SyncState::SyncingHead { .. } => Ok(()),
SyncState::SyncingHead { .. } | SyncState::SyncTransition => Ok(()),
SyncState::Synced => Ok(()),
SyncState::Stalled => Err(warp_utils::reject::not_synced(
"sync is stalled".to_string(),
@@ -421,40 +435,69 @@ pub fn serve<T: BeaconChainTypes>(
})
});
// GET beacon/states/{state_id}/validators
// GET beacon/states/{state_id}/validators?id,status
let get_beacon_state_validators = beacon_states_path
.clone()
.and(warp::path("validators"))
.and(warp::query::<api_types::ValidatorsQuery>())
.and(warp::path::end())
.and_then(|state_id: StateId, chain: Arc<BeaconChain<T>>| {
blocking_json_task(move || {
state_id
.map_state(&chain, |state| {
let epoch = state.current_epoch();
let finalized_epoch = state.finalized_checkpoint.epoch;
let far_future_epoch = chain.spec.far_future_epoch;
.and_then(
|state_id: StateId, chain: Arc<BeaconChain<T>>, query: api_types::ValidatorsQuery| {
blocking_json_task(move || {
state_id
.map_state(&chain, |state| {
let epoch = state.current_epoch();
let finalized_epoch = state.finalized_checkpoint.epoch;
let far_future_epoch = chain.spec.far_future_epoch;
Ok(state
.validators
.iter()
.zip(state.balances.iter())
.enumerate()
.map(|(index, (validator, balance))| api_types::ValidatorData {
index: index as u64,
balance: *balance,
status: api_types::ValidatorStatus::from_validator(
Some(validator),
epoch,
finalized_epoch,
far_future_epoch,
),
validator: validator.clone(),
})
.collect::<Vec<_>>())
})
.map(api_types::GenericResponse::from)
})
});
Ok(state
.validators
.iter()
.zip(state.balances.iter())
.enumerate()
// filter by validator id(s) if provided
.filter(|(index, (validator, _))| {
query.id.as_ref().map_or(true, |ids| {
ids.0.iter().any(|id| match id {
ValidatorId::PublicKey(pubkey) => {
&validator.pubkey == pubkey
}
ValidatorId::Index(param_index) => {
*param_index == *index as u64
}
})
})
})
// filter by status(es) if provided and map the result
.filter_map(|(index, (validator, balance))| {
let status = api_types::ValidatorStatus::from_validator(
Some(validator),
epoch,
finalized_epoch,
far_future_epoch,
);
if query
.status
.as_ref()
.map_or(true, |statuses| statuses.0.contains(&status))
{
Some(api_types::ValidatorData {
index: index as u64,
balance: *balance,
status,
validator: validator.clone(),
})
} else {
None
}
})
.collect::<Vec<_>>())
})
.map(api_types::GenericResponse::from)
})
},
);
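The filtering pattern in the new `?id,status` query is worth isolating: an absent filter matches everything via `Option::map_or(true, ..)`, and both filters must pass. A simplified standalone sketch with stand-in types (the real handler uses `api_types::ValidatorsQuery` and pubkey-or-index ids):

```rust
/// Simplified stand-in for the API's validator status.
#[derive(PartialEq)]
enum Status {
    Active,
    Exited,
}

/// Simplified stand-in for the `?id,status` query parameters.
struct Query {
    ids: Option<Vec<u64>>,
    statuses: Option<Vec<Status>>,
}

/// Returns the indices of validators matching the query. A `None` filter
/// matches every validator (`map_or(true, ..)`).
fn filter_validators(validators: &[(u64, Status)], query: &Query) -> Vec<u64> {
    validators
        .iter()
        // Filter by validator id(s) if provided.
        .filter(|(index, _)| query.ids.as_ref().map_or(true, |ids| ids.contains(index)))
        // Filter by status(es) if provided.
        .filter(|(_, status)| {
            query
                .statuses
                .as_ref()
                .map_or(true, |statuses| statuses.contains(status))
        })
        .map(|(index, _)| *index)
        .collect()
}
```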
// GET beacon/states/{state_id}/validators/{validator_id}
let get_beacon_state_validators_id = beacon_states_path
@@ -537,8 +580,8 @@ pub fn serve<T: BeaconChainTypes>(
} else {
CommitteeCache::initialized(state, epoch, &chain.spec).map(Cow::Owned)
}
.map_err(BeaconChainError::BeaconStateError)
.map_err(warp_utils::reject::beacon_chain_error)?;
.map_err(BeaconChainError::BeaconStateError)
.map_err(warp_utils::reject::beacon_chain_error)?;
// Use either the supplied slot or all slots in the epoch.
let slots = query.slot.map(|slot| vec![slot]).unwrap_or_else(|| {
@@ -566,11 +609,11 @@ pub fn serve<T: BeaconChainTypes>(
let committee = committee_cache
.get_beacon_committee(slot, index)
.ok_or_else(|| {
warp_utils::reject::custom_bad_request(format!(
"committee index {} does not exist in epoch {}",
index, epoch
))
})?;
warp_utils::reject::custom_bad_request(format!(
"committee index {} does not exist in epoch {}",
index, epoch
))
})?;
response.push(api_types::CommitteeData {
index,
@@ -1202,12 +1245,12 @@ pub fn serve<T: BeaconChainTypes>(
.and(network_globals.clone())
.and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
blocking_task(move || match *network_globals.sync_state.read() {
SyncState::SyncingFinalized { .. } | SyncState::SyncingHead { .. } => {
Ok(warp::reply::with_status(
warp::reply(),
warp::http::StatusCode::PARTIAL_CONTENT,
))
}
SyncState::SyncingFinalized { .. }
| SyncState::SyncingHead { .. }
| SyncState::SyncTransition => Ok(warp::reply::with_status(
warp::reply(),
warp::http::StatusCode::PARTIAL_CONTENT,
)),
SyncState::Synced => Ok(warp::reply::with_status(
warp::reply(),
warp::http::StatusCode::OK,
@@ -1605,7 +1648,7 @@ pub fn serve<T: BeaconChainTypes>(
return Err(warp_utils::reject::object_invalid(format!(
"gossip verification failed: {:?}",
e
)))
)));
}
};
@@ -1777,6 +1820,80 @@ pub fn serve<T: BeaconChainTypes>(
})
});
// GET lighthouse/eth1/syncing
let get_lighthouse_eth1_syncing = warp::path("lighthouse")
.and(warp::path("eth1"))
.and(warp::path("syncing"))
.and(warp::path::end())
.and(chain_filter.clone())
.and_then(|chain: Arc<BeaconChain<T>>| {
blocking_json_task(move || {
let head_info = chain
.head_info()
.map_err(warp_utils::reject::beacon_chain_error)?;
let current_slot = chain
.slot()
.map_err(warp_utils::reject::beacon_chain_error)?;
chain
.eth1_chain
.as_ref()
.ok_or_else(|| {
warp_utils::reject::custom_not_found(
"Eth1 sync is disabled. See the --eth1 CLI flag.".to_string(),
)
})
.and_then(|eth1| {
eth1.sync_status(head_info.genesis_time, current_slot, &chain.spec)
.ok_or_else(|| {
warp_utils::reject::custom_server_error(
"Unable to determine Eth1 sync status".to_string(),
)
})
})
.map(api_types::GenericResponse::from)
})
});
// GET lighthouse/eth1/block_cache
let get_lighthouse_eth1_block_cache = warp::path("lighthouse")
.and(warp::path("eth1"))
.and(warp::path("block_cache"))
.and(warp::path::end())
.and(eth1_service_filter.clone())
.and_then(|eth1_service: eth1::Service| {
blocking_json_task(move || {
Ok(api_types::GenericResponse::from(
eth1_service
.blocks()
.read()
.iter()
.cloned()
.collect::<Vec<_>>(),
))
})
});
// GET lighthouse/eth1/deposit_cache
let get_lighthouse_eth1_deposit_cache = warp::path("lighthouse")
.and(warp::path("eth1"))
.and(warp::path("deposit_cache"))
.and(warp::path::end())
.and(eth1_service_filter)
.and_then(|eth1_service: eth1::Service| {
blocking_json_task(move || {
Ok(api_types::GenericResponse::from(
eth1_service
.deposits()
.read()
.cache
.iter()
.cloned()
.collect::<Vec<_>>(),
))
})
});
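Both cache endpoints above use the same snapshot pattern: take a read lock, clone the cached items into a `Vec`, and release the lock before the response is serialized. A minimal sketch using std's `RwLock` (names here are illustrative; Lighthouse's eth1 service has its own cache types and uses a lock whose `read()` does not return a `Result`):

```rust
use std::sync::{Arc, RwLock};

// Illustrative stand-in for a cached eth1 block.
#[derive(Clone, Debug, PartialEq)]
struct CachedBlock {
    number: u64,
}

// Illustrative stand-in for the eth1 service's shared block cache.
#[derive(Default)]
struct Eth1Service {
    blocks: Arc<RwLock<Vec<CachedBlock>>>,
}

impl Eth1Service {
    // Snapshot the cache: the read guard is dropped as soon as the
    // clone completes, so serialization never holds the lock.
    fn block_cache_snapshot(&self) -> Vec<CachedBlock> {
        self.blocks
            .read()
            .unwrap() // std's RwLock returns a Result; lighthouse's lock does not
            .iter()
            .cloned()
            .collect()
    }
}

fn main() {
    let service = Eth1Service::default();
    assert!(service.block_cache_snapshot().is_empty());
    service.blocks.write().unwrap().push(CachedBlock { number: 1 });
    assert_eq!(service.block_cache_snapshot(), vec![CachedBlock { number: 1 }]);
    println!("ok");
}
```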
// GET lighthouse/beacon/states/{state_id}/ssz
let get_lighthouse_beacon_states_ssz = warp::path("lighthouse")
.and(warp::path("beacon"))
@@ -1843,6 +1960,9 @@ pub fn serve<T: BeaconChainTypes>(
.or(get_lighthouse_proto_array.boxed())
.or(get_lighthouse_validator_inclusion_global.boxed())
.or(get_lighthouse_validator_inclusion.boxed())
.or(get_lighthouse_eth1_syncing.boxed())
.or(get_lighthouse_eth1_block_cache.boxed())
.or(get_lighthouse_eth1_deposit_cache.boxed())
.or(get_lighthouse_beacon_states_ssz.boxed())
.boxed(),
)


@@ -1,3 +1,5 @@
#![cfg(not(debug_assertions))] // Tests are too slow in debug.
use beacon_chain::{
test_utils::{AttestationStrategy, BeaconChainHarness, BlockStrategy, EphemeralHarnessType},
BeaconChain, StateSkipConfig,
@@ -167,6 +169,9 @@ impl ApiTester {
*network_globals.sync_state.write() = SyncState::Synced;
let eth1_service =
eth1::Service::new(eth1::Config::default(), log.clone(), chain.spec.clone());
let context = Arc::new(Context {
config: Config {
enabled: true,
@@ -177,6 +182,7 @@ impl ApiTester {
chain: Some(chain.clone()),
network_tx: Some(network_tx),
network_globals: Some(Arc::new(network_globals)),
eth1_service: Some(eth1_service),
log,
});
let ctx = context.clone();
@@ -411,40 +417,87 @@ impl ApiTester {
pub async fn test_beacon_states_validators(self) -> Self {
for state_id in self.interesting_state_ids() {
let result = self
.client
.get_beacon_states_validators(state_id)
.await
.unwrap()
.map(|res| res.data);
for statuses in self.interesting_validator_statuses() {
for validator_indices in self.interesting_validator_indices() {
let state_opt = self.get_state(state_id);
let validators: Vec<Validator> = match state_opt.as_ref() {
Some(state) => state.validators.clone().into(),
None => vec![],
};
let validator_index_ids = validator_indices
.iter()
.cloned()
.map(|i| ValidatorId::Index(i))
.collect::<Vec<ValidatorId>>();
let validator_pubkey_ids = validator_indices
.iter()
.cloned()
.map(|i| {
ValidatorId::PublicKey(
validators
.get(i as usize)
.map_or(PublicKeyBytes::empty(), |val| val.pubkey.clone()),
)
})
.collect::<Vec<ValidatorId>>();
let expected = self.get_state(state_id).map(|state| {
let epoch = state.current_epoch();
let finalized_epoch = state.finalized_checkpoint.epoch;
let far_future_epoch = self.chain.spec.far_future_epoch;
let result_index_ids = self
.client
.get_beacon_states_validators(
state_id,
Some(validator_index_ids.as_slice()),
None,
)
.await
.unwrap()
.map(|res| res.data);
let mut validators = Vec::with_capacity(state.validators.len());
let result_pubkey_ids = self
.client
.get_beacon_states_validators(
state_id,
Some(validator_pubkey_ids.as_slice()),
None,
)
.await
.unwrap()
.map(|res| res.data);
for i in 0..state.validators.len() {
let validator = state.validators[i].clone();
let expected = state_opt.map(|state| {
let epoch = state.current_epoch();
let finalized_epoch = state.finalized_checkpoint.epoch;
let far_future_epoch = self.chain.spec.far_future_epoch;
validators.push(ValidatorData {
index: i as u64,
balance: state.balances[i],
status: ValidatorStatus::from_validator(
Some(&validator),
epoch,
finalized_epoch,
far_future_epoch,
),
validator,
})
let mut validators = Vec::with_capacity(validator_indices.len());
for i in validator_indices {
if i >= state.validators.len() as u64 {
continue;
}
let validator = state.validators[i as usize].clone();
let status = ValidatorStatus::from_validator(
Some(&validator),
epoch,
finalized_epoch,
far_future_epoch,
);
if statuses.contains(&status) || statuses.is_empty() {
validators.push(ValidatorData {
index: i as u64,
balance: state.balances[i as usize],
status,
validator,
});
}
}
validators
});
assert_eq!(result_index_ids, expected, "{:?}", state_id);
assert_eq!(result_pubkey_ids, expected, "{:?}", state_id);
}
validators
});
assert_eq!(result, expected, "{:?}", state_id);
}
}
self
@@ -1149,6 +1202,28 @@ impl ApiTester {
interesting
}
fn interesting_validator_statuses(&self) -> Vec<Vec<ValidatorStatus>> {
let interesting = vec![
vec![],
vec![ValidatorStatus::Active],
vec![
ValidatorStatus::Unknown,
ValidatorStatus::WaitingForEligibility,
ValidatorStatus::WaitingForFinality,
ValidatorStatus::WaitingInQueue,
ValidatorStatus::StandbyForActive,
ValidatorStatus::Active,
ValidatorStatus::ActiveAwaitingVoluntaryExit,
ValidatorStatus::ActiveAwaitingSlashedExit,
ValidatorStatus::ExitedVoluntarily,
ValidatorStatus::ExitedSlashed,
ValidatorStatus::Withdrawable,
ValidatorStatus::Withdrawn,
],
];
interesting
}
pub async fn test_get_validator_duties_attester(self) -> Self {
let current_epoch = self.chain.epoch().unwrap().as_u64();
@@ -1572,6 +1647,32 @@ impl ApiTester {
self
}
pub async fn test_get_lighthouse_eth1_syncing(self) -> Self {
self.client.get_lighthouse_eth1_syncing().await.unwrap();
self
}
pub async fn test_get_lighthouse_eth1_block_cache(self) -> Self {
let blocks = self.client.get_lighthouse_eth1_block_cache().await.unwrap();
assert!(blocks.data.is_empty());
self
}
pub async fn test_get_lighthouse_eth1_deposit_cache(self) -> Self {
let deposits = self
.client
.get_lighthouse_eth1_deposit_cache()
.await
.unwrap();
assert!(deposits.data.is_empty());
self
}
pub async fn test_get_lighthouse_beacon_states_ssz(self) -> Self {
for state_id in self.interesting_state_ids() {
let result = self
@@ -1591,61 +1692,44 @@ impl ApiTester {
}
#[tokio::test(core_threads = 2)]
async fn beacon_genesis() {
ApiTester::new().test_beacon_genesis().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_states_root() {
ApiTester::new().test_beacon_states_root().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_states_fork() {
ApiTester::new().test_beacon_states_fork().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_states_finality_checkpoints() {
async fn beacon_get() {
ApiTester::new()
.test_beacon_genesis()
.await
.test_beacon_states_root()
.await
.test_beacon_states_fork()
.await
.test_beacon_states_finality_checkpoints()
.await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_states_validators() {
ApiTester::new().test_beacon_states_validators().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_states_committees() {
ApiTester::new().test_beacon_states_committees().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_states_validator_id() {
ApiTester::new().test_beacon_states_validator_id().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_headers() {
ApiTester::new()
.await
.test_beacon_states_validators()
.await
.test_beacon_states_committees()
.await
.test_beacon_states_validator_id()
.await
.test_beacon_headers_all_slots()
.await
.test_beacon_headers_all_parents()
.await
.test_beacon_headers_block_id()
.await
.test_beacon_blocks()
.await
.test_beacon_blocks_attestations()
.await
.test_beacon_blocks_root()
.await
.test_get_beacon_pool_attestations()
.await
.test_get_beacon_pool_attester_slashings()
.await
.test_get_beacon_pool_proposer_slashings()
.await
.test_get_beacon_pool_voluntary_exits()
.await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_headers_block_id() {
ApiTester::new().test_beacon_headers_block_id().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_blocks() {
ApiTester::new().test_beacon_blocks().await;
}
#[tokio::test(core_threads = 2)]
async fn post_beacon_blocks_valid() {
ApiTester::new().test_post_beacon_blocks_valid().await;
@@ -1656,29 +1740,6 @@ async fn post_beacon_blocks_invalid() {
ApiTester::new().test_post_beacon_blocks_invalid().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_blocks_root() {
ApiTester::new().test_beacon_blocks_root().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_blocks_attestations() {
ApiTester::new().test_beacon_blocks_attestations().await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_pools_get() {
ApiTester::new()
.test_get_beacon_pool_attestations()
.await
.test_get_beacon_pool_attester_slashings()
.await
.test_get_beacon_pool_proposer_slashings()
.await
.test_get_beacon_pool_voluntary_exits()
.await;
}
#[tokio::test(core_threads = 2)]
async fn beacon_pools_post_attestations_valid() {
ApiTester::new()
@@ -1889,6 +1950,12 @@ async fn lighthouse_endpoints() {
.await
.test_get_lighthouse_validator_inclusion_global()
.await
.test_get_lighthouse_eth1_syncing()
.await
.test_get_lighthouse_eth1_block_cache()
.await
.test_get_lighthouse_eth1_deposit_cache()
.await
.test_get_lighthouse_beacon_states_ssz()
.await;
}


@@ -691,10 +691,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
{
SyncState::Synced
} else if peers.advanced_peers().next().is_some() {
SyncState::SyncingHead {
start_slot: head,
target_slot: current_slot,
}
SyncState::SyncTransition
} else if peers.synced_peers().next().is_none() {
SyncState::Stalled
} else {

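The hunk above changes the sync-state derivation so that having advanced peers yields `SyncState::SyncTransition` instead of `SyncingHead`, letting the peer manager re-status peers and avoiding the old head-sync deadlock. A dependency-free sketch of the branch order (the final else-branch is outside this hunk, so `SyncingHead` stands in as a placeholder there; the boolean parameters are simplifications of the real peer-pool queries):

```rust
#[derive(Debug, PartialEq)]
enum SyncState {
    Synced,
    SyncTransition,
    Stalled,
    SyncingHead, // placeholder for the branch outside this hunk
}

// `is_synced`: our head is within tolerance of the network head.
// `has_advanced_peer` / `has_synced_peer`: simplified peer-pool checks.
fn update_sync_state(
    is_synced: bool,
    has_advanced_peer: bool,
    has_synced_peer: bool,
) -> SyncState {
    if is_synced {
        SyncState::Synced
    } else if has_advanced_peer {
        // previously: SyncingHead { start_slot, target_slot }
        SyncState::SyncTransition
    } else if !has_synced_peer {
        SyncState::Stalled
    } else {
        // original else-branch is not shown in the hunk above
        SyncState::SyncingHead
    }
}

fn main() {
    assert_eq!(update_sync_state(false, true, true), SyncState::SyncTransition);
    assert_eq!(update_sync_state(true, false, true), SyncState::Synced);
    assert_eq!(update_sync_state(false, false, false), SyncState::Stalled);
    println!("ok");
}
```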

@@ -1,7 +1,6 @@
use crate::sync::RequestId;
use eth2_libp2p::rpc::methods::BlocksByRangeRequest;
use eth2_libp2p::PeerId;
use slog::{crit, warn, Logger};
use ssz::Encode;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
@@ -15,6 +14,13 @@ const MAX_BATCH_DOWNLOAD_ATTEMPTS: u8 = 5;
/// after `MAX_BATCH_PROCESSING_ATTEMPTS` times, it is considered faulty.
const MAX_BATCH_PROCESSING_ATTEMPTS: u8 = 3;
/// Error type of a batch in a wrong state.
// Such errors should never be encountered.
pub struct WrongState(pub(super) String);
/// Auxiliary type alias for readability.
type IsFailed = bool;
/// A segment of a chain.
pub struct BatchInfo<T: EthSpec> {
/// Start slot of the batch.
@@ -57,6 +63,14 @@ impl<T: EthSpec> BatchState<T> {
pub fn poison(&mut self) -> BatchState<T> {
std::mem::replace(self, BatchState::Poisoned)
}
pub fn is_failed(&self) -> IsFailed {
match self {
BatchState::Failed => true,
BatchState::Poisoned => unreachable!("Poisoned batch"),
_ => false,
}
}
}
impl<T: EthSpec> BatchInfo<T> {
@@ -134,16 +148,20 @@ impl<T: EthSpec> BatchInfo<T> {
}
/// Adds a block to a downloading batch.
pub fn add_block(&mut self, block: SignedBeaconBlock<T>, logger: &Logger) {
pub fn add_block(&mut self, block: SignedBeaconBlock<T>) -> Result<(), WrongState> {
match self.state.poison() {
BatchState::Downloading(peer, mut blocks, req_id) => {
blocks.push(block);
self.state = BatchState::Downloading(peer, blocks, req_id)
self.state = BatchState::Downloading(peer, blocks, req_id);
Ok(())
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Add block for batch in wrong state"; "state" => ?other);
self.state = other
self.state = other;
Err(WrongState(format!(
"Add block for batch in wrong state {:?}",
self.state
)))
}
}
}
@@ -153,8 +171,7 @@ impl<T: EthSpec> BatchInfo<T> {
#[must_use = "Batch may have failed"]
pub fn download_completed(
&mut self,
logger: &Logger,
) -> Result<usize /* Received blocks */, &BatchState<T>> {
) -> Result<usize /* Received blocks */, Result<(Slot, Slot, IsFailed), WrongState>> {
match self.state.poison() {
BatchState::Downloading(peer, blocks, _request_id) => {
// verify that blocks are in range
@@ -182,9 +199,8 @@ impl<T: EthSpec> BatchInfo<T> {
// drop the blocks
BatchState::AwaitingDownload
};
warn!(logger, "Batch received out of range blocks";
&self, "expected" => expected, "received" => received);
return Err(&self.state);
return Err(Ok((expected, received, self.state.is_failed())));
}
}
@@ -194,15 +210,17 @@ impl<T: EthSpec> BatchInfo<T> {
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Download completed for batch in wrong state"; "state" => ?other);
self.state = other;
Err(&self.state)
Err(Err(WrongState(format!(
"Download completed for batch in wrong state {:?}",
self.state
))))
}
}
}
#[must_use = "Batch may have failed"]
pub fn download_failed(&mut self, logger: &Logger) -> &BatchState<T> {
pub fn download_failed(&mut self) -> Result<IsFailed, WrongState> {
match self.state.poison() {
BatchState::Downloading(peer, _, _request_id) => {
// register the attempt and check if the batch can be tried again
@@ -215,13 +233,15 @@ impl<T: EthSpec> BatchInfo<T> {
// drop the blocks
BatchState::AwaitingDownload
};
&self.state
Ok(self.state.is_failed())
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Download failed for batch in wrong state"; "state" => ?other);
self.state = other;
&self.state
Err(WrongState(format!(
"Download failed for batch in wrong state {:?}",
self.state
)))
}
}
}
@@ -230,37 +250,42 @@ impl<T: EthSpec> BatchInfo<T> {
&mut self,
peer: PeerId,
request_id: RequestId,
logger: &Logger,
) {
) -> Result<(), WrongState> {
match self.state.poison() {
BatchState::AwaitingDownload => {
self.state = BatchState::Downloading(peer, Vec::new(), request_id);
Ok(())
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Starting download for batch in wrong state"; "state" => ?other);
self.state = other
self.state = other;
Err(WrongState(format!(
"Starting download for batch in wrong state {:?}",
self.state
)))
}
}
}
pub fn start_processing(&mut self, logger: &Logger) -> Vec<SignedBeaconBlock<T>> {
pub fn start_processing(&mut self) -> Result<Vec<SignedBeaconBlock<T>>, WrongState> {
match self.state.poison() {
BatchState::AwaitingProcessing(peer, blocks) => {
self.state = BatchState::Processing(Attempt::new(peer, &blocks));
blocks
Ok(blocks)
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Starting processing batch in wrong state"; "state" => ?other);
self.state = other;
vec![]
Err(WrongState(format!(
"Starting processing batch in wrong state {:?}",
self.state
)))
}
}
}
#[must_use = "Batch may have failed"]
pub fn processing_completed(&mut self, was_sucessful: bool, logger: &Logger) -> &BatchState<T> {
pub fn processing_completed(&mut self, was_sucessful: bool) -> Result<IsFailed, WrongState> {
match self.state.poison() {
BatchState::Processing(attempt) => {
self.state = if !was_sucessful {
@@ -278,19 +303,21 @@ impl<T: EthSpec> BatchInfo<T> {
} else {
BatchState::AwaitingValidation(attempt)
};
&self.state
Ok(self.state.is_failed())
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Processing completed for batch in wrong state"; "state" => ?other);
self.state = other;
&self.state
Err(WrongState(format!(
"Processing completed for batch in wrong state: {:?}",
self.state
)))
}
}
}
#[must_use = "Batch may have failed"]
pub fn validation_failed(&mut self, logger: &Logger) -> &BatchState<T> {
pub fn validation_failed(&mut self) -> Result<IsFailed, WrongState> {
match self.state.poison() {
BatchState::AwaitingValidation(attempt) => {
self.failed_processing_attempts.push(attempt);
@@ -303,13 +330,15 @@ impl<T: EthSpec> BatchInfo<T> {
} else {
BatchState::AwaitingDownload
};
&self.state
Ok(self.state.is_failed())
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
crit!(logger, "Validation failed for batch in wrong state"; "state" => ?other);
self.state = other;
&self.state
Err(WrongState(format!(
"Validation failed for batch in wrong state: {:?}",
self.state
)))
}
}
}
@@ -370,8 +399,14 @@ impl<T: EthSpec> slog::KV for BatchInfo<T> {
impl<T: EthSpec> std::fmt::Debug for BatchState<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
BatchState::Processing(_) => f.write_str("Processing"),
BatchState::AwaitingValidation(_) => f.write_str("AwaitingValidation"),
BatchState::Processing(Attempt {
ref peer_id,
hash: _,
}) => write!(f, "Processing({})", peer_id),
BatchState::AwaitingValidation(Attempt {
ref peer_id,
hash: _,
}) => write!(f, "AwaitingValidation({})", peer_id),
BatchState::AwaitingDownload => f.write_str("AwaitingDownload"),
BatchState::Failed => f.write_str("Failed"),
BatchState::AwaitingProcessing(ref peer, ref blocks) => {


@@ -26,14 +26,22 @@ const BATCH_BUFFER_SIZE: u8 = 5;
/// A return type for functions that act on a `Chain` which informs the caller whether the chain
/// has been completed and should be removed or to be kept if further processing is
/// required.
#[derive(PartialEq)]
#[must_use = "Should be checked, since a failed chain must be removed. A chain that requested
being removed and continued is now in an inconsistent state"]
pub enum ProcessingResult {
KeepChain,
RemoveChain,
pub type ProcessingResult = Result<KeepChain, RemoveChain>;
/// Reasons for removing a chain
pub enum RemoveChain {
EmptyPeerPool,
ChainCompleted,
ChainFailed(BatchId),
WrongBatchState(String),
WrongChainState(String),
}
#[derive(Debug)]
pub struct KeepChain;
/// A chain identifier
pub type ChainId = u64;
pub type BatchId = Epoch;
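The `ProcessingResult` change above turns the old two-variant enum into `Result<KeepChain, RemoveChain>`, so callers can propagate a removal reason with `?` instead of writing `if let ProcessingResult::RemoveChain = ...` at every call site. A simplified sketch (variants trimmed, and `peer_count` is an illustrative stand-in for the chain's peer pool):

```rust
#[derive(Debug, PartialEq)]
pub enum RemoveChain {
    EmptyPeerPool,
    ChainCompleted,
    WrongChainState(String),
}

#[derive(Debug, PartialEq)]
pub struct KeepChain;

pub type ProcessingResult = Result<KeepChain, RemoveChain>;

// Illustrative: a step that may decide the chain must be removed.
fn request_batches(peer_count: usize) -> ProcessingResult {
    if peer_count == 0 {
        Err(RemoveChain::EmptyPeerPool)
    } else {
        Ok(KeepChain)
    }
}

// `?` early-returns the removal reason, replacing the old pattern:
//   if let ProcessingResult::RemoveChain = self.request_batches(network) {
//       return ProcessingResult::RemoveChain;
//   }
fn on_batch_result(peer_count: usize) -> ProcessingResult {
    request_batches(peer_count)?;
    Ok(KeepChain)
}

fn main() {
    assert!(on_batch_result(3).is_ok());
    assert_eq!(on_batch_result(0), Err(RemoveChain::EmptyPeerPool));
    println!("ok");
}
```

A side benefit visible throughout the diff: the `Err` variant carries *why* the chain is being removed (`ChainFailed`, `WrongChainState`, ...), information the old unit-like `RemoveChain` variant discarded.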
@@ -83,6 +91,9 @@ pub struct SyncingChain<T: BeaconChainTypes> {
/// The current processing batch, if any.
current_processing_batch: Option<BatchId>,
/// Batches validated by this chain.
validated_batches: u8,
/// A multi-threaded, non-blocking processor for applying messages to the beacon chain.
beacon_processor_send: Sender<BeaconWorkEvent<T::EthSpec>>,
@@ -132,6 +143,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
attempted_optimistic_starts: HashSet::default(),
state: ChainSyncingState::Stopped,
current_processing_batch: None,
validated_batches: 0,
beacon_processor_send,
log: log.new(o!("chain" => id)),
}
@@ -147,6 +159,16 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
self.id
}
/// Peers currently syncing this chain.
pub fn peers<'a>(&'a self) -> impl Iterator<Item = PeerId> + 'a {
self.peers.keys().cloned()
}
/// Progress in epochs made by the chain
pub fn validated_epochs(&self) -> u64 {
self.validated_batches as u64 * EPOCHS_PER_BATCH
}
/// Removes a peer from the chain.
/// If the peer has active batches, those are considered failed and re-requested.
pub fn remove_peer(
@@ -158,24 +180,21 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// fail the batches
for id in batch_ids {
if let Some(batch) = self.batches.get_mut(&id) {
if let BatchState::Failed = batch.download_failed(&self.log) {
return ProcessingResult::RemoveChain;
}
if let ProcessingResult::RemoveChain = self.retry_batch_download(network, id) {
// drop the chain early
return ProcessingResult::RemoveChain;
if batch.download_failed()? {
return Err(RemoveChain::ChainFailed(id));
}
self.retry_batch_download(network, id)?;
} else {
debug!(self.log, "Batch not found while removing peer";
"peer" => %peer_id, "batch" => "id")
"peer" => %peer_id, "batch" => id)
}
}
}
if self.peers.is_empty() {
ProcessingResult::RemoveChain
Err(RemoveChain::EmptyPeerPool)
} else {
ProcessingResult::KeepChain
Ok(KeepChain)
}
}
@@ -202,7 +221,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
None => {
debug!(self.log, "Received a block for unknown batch"; "epoch" => batch_id);
// A batch might get removed when the chain advances, so this is non fatal.
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
Some(batch) => {
// A batch could be retried without the peer failing the request (disconnecting/
@@ -210,7 +229,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// reasons. Check that this block belongs to the expected peer, and that the
// request_id matches
if !batch.is_expecting_block(peer_id, &request_id) {
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
batch
}
@@ -218,17 +237,16 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
if let Some(block) = beacon_block {
// This is not a stream termination, simply add the block to the request
batch.add_block(block, &self.log);
ProcessingResult::KeepChain
batch.add_block(block)?;
Ok(KeepChain)
} else {
// A stream termination has been sent. This batch has ended. Process a completed batch.
// Remove the request from the peer's active batches
self.peers
.get_mut(peer_id)
.unwrap_or_else(|| panic!("Batch is registered for the peer"))
.remove(&batch_id);
.map(|active_requests| active_requests.remove(&batch_id));
match batch.download_completed(&self.log) {
match batch.download_completed() {
Ok(received) => {
let awaiting_batches = batch_id.saturating_sub(
self.optimistic_start
@@ -237,14 +255,16 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
debug!(self.log, "Completed batch received"; "epoch" => batch_id, "blocks" => received, "awaiting_batches" => awaiting_batches);
// pre-emptively request more blocks from peers whilst we process current blocks,
if let ProcessingResult::RemoveChain = self.request_batches(network) {
return ProcessingResult::RemoveChain;
}
self.request_batches(network)?;
self.process_completed_batches(network)
}
Err(state) => {
if let BatchState::Failed = state {
return ProcessingResult::RemoveChain;
Err(result) => {
let (expected_boundary, received_boundary, is_failed) = result?;
warn!(self.log, "Batch received out of range blocks"; "expected_boundary" => expected_boundary, "received_boundary" => received_boundary,
"peer_id" => %peer_id, batch);
if is_failed {
return Err(RemoveChain::ChainFailed(batch_id));
}
// this batch can't be used, so we need to request it again.
self.retry_batch_download(network, batch_id)
@@ -262,14 +282,16 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
) -> ProcessingResult {
// Only process batches if this chain is Syncing, and only one at a time
if self.state != ChainSyncingState::Syncing || self.current_processing_batch.is_some() {
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
let batch = match self.batches.get_mut(&batch_id) {
Some(batch) => batch,
None => {
debug!(self.log, "Processing unknown batch"; "batch" => %batch_id);
return ProcessingResult::RemoveChain;
return Err(RemoveChain::WrongChainState(format!(
"Trying to process a batch that does not exist: {}",
batch_id
)));
}
};
@@ -277,7 +299,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// result callback. This is done, because an empty batch could end a chain and the logic
// for removing chains and checking completion is in the callback.
let blocks = batch.start_processing(&self.log);
let blocks = batch.start_processing()?;
let process_id = ProcessId::RangeBatchId(self.id, batch_id);
self.current_processing_batch = Some(batch_id);
@@ -293,7 +315,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// re-downloaded.
self.on_batch_process_result(network, batch_id, &BatchProcessResult::Failed(false))
} else {
ProcessingResult::KeepChain
Ok(KeepChain)
}
}
@@ -304,101 +326,96 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
) -> ProcessingResult {
// Only process batches if this chain is Syncing and only process one batch at a time
if self.state != ChainSyncingState::Syncing || self.current_processing_batch.is_some() {
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
// Find the id of the batch we are going to process.
//
// First try our optimistic start, if any. If this batch is ready, we process it. If the
// batch has not already been completed, check the current chain target.
let optimistic_id = if let Some(epoch) = self.optimistic_start {
if let Some(epoch) = self.optimistic_start {
if let Some(batch) = self.batches.get(&epoch) {
let state = batch.state();
match state {
BatchState::AwaitingProcessing(..) => {
// this batch is ready
debug!(self.log, "Processing optimistic start"; "epoch" => epoch);
Some(epoch)
return self.process_batch(network, epoch);
}
BatchState::Downloading(..) => {
// The optimistic batch is being downloaded. We wait for this before
// attempting to process other batches.
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
BatchState::Processing(_)
| BatchState::AwaitingDownload
| BatchState::Failed
| BatchState::Poisoned => {
| BatchState::Failed => {
// these are all inconsistent states:
// - Processing -> `self.current_processing_batch` is None
// - Failed -> non-recoverable batch. For an optimistic batch, it should
// have been removed
// - Poisoned -> this is an intermediate state that should never be reached
// - AwaitingDownload -> A recoverable failed batch should have been
// re-requested.
crit!(self.log, "Optimistic batch indicates inconsistent chain state"; "state" => ?state);
return ProcessingResult::RemoveChain;
return Err(RemoveChain::WrongChainState(format!(
"Optimistic batch indicates inconsistent chain state: {:?}",
state
)));
}
BatchState::AwaitingValidation(_) => {
// This is possible due to race conditions, and though it would be considered
// an inconsistent state, the chain can continue. If an optimistic batch
// is successfully processed it is no longer considered an optimistic
// candidate. If the batch was empty the chain rejects it; if it was non
// empty the chain is advanced to this point (so that the old optimistic
// batch is now the processing target)
debug!(self.log, "Optimistic batch should never be Awaiting Validation"; "batch" => epoch);
None
// If an optimistic start is given to the chain after the corresponding
// batch has been requested and processed we can land here. We drop the
// optimistic candidate since we can't conclude whether the batch included
// blocks or not at this point
debug!(self.log, "Dropping optimistic candidate"; "batch" => epoch);
self.optimistic_start = None;
}
}
} else {
None
}
} else {
None
};
}
// if the optimistic target can't be processed, check the processing target
let id = optimistic_id.or_else(|| {
if let Some(batch) = self.batches.get(&self.processing_target) {
let state = batch.state();
match state {
BatchState::AwaitingProcessing(..) => Some(self.processing_target),
BatchState::Downloading(..) => {
// Batch is not ready, nothing to process
None
}
BatchState::Failed
| BatchState::AwaitingDownload
| BatchState::AwaitingValidation(_)
| BatchState::Processing(_)
| BatchState::Poisoned => {
// these are all inconsistent states:
// - Failed -> non-recoverable batch. Chain should have been removed
// - AwaitingDownload -> A recoverable failed batch should have been
// re-requested.
// - AwaitingValidation -> self.processing_target should have been moved
// forward
// - Processing -> `self.current_processing_batch` is None
// - Poisoned -> Intermediate state that should never be reached
unreachable!(
"Robust target batch indicates inconsistent chain state: {:?}",
state
)
if let Some(batch) = self.batches.get(&self.processing_target) {
let state = batch.state();
match state {
BatchState::AwaitingProcessing(..) => {
return self.process_batch(network, self.processing_target);
}
BatchState::Downloading(..) => {
// Batch is not ready, nothing to process
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
BatchState::Failed | BatchState::AwaitingDownload | BatchState::Processing(_) => {
// these are all inconsistent states:
// - Failed -> non-recoverable batch. Chain should have been removed
// - AwaitingDownload -> A recoverable failed batch should have been
// re-requested.
// - Processing -> `self.current_processing_batch` is None
return Err(RemoveChain::WrongChainState(format!(
"Robust target batch indicates inconsistent chain state: {:?}",
state
)));
}
BatchState::AwaitingValidation(_) => {
// we can land here if an empty optimistic batch succeeds processing and is
// inside the download buffer (between `self.processing_target` and
// `self.to_be_downloaded`). In this case, eventually the chain advances to the
// batch (`self.processing_target` reaches this point).
debug!(self.log, "Chain encountered a robust batch awaiting validation"; "batch" => self.processing_target);
self.processing_target += EPOCHS_PER_BATCH;
if self.to_be_downloaded <= self.processing_target {
self.to_be_downloaded = self.processing_target + EPOCHS_PER_BATCH;
}
}
} else {
crit!(self.log, "Batch not found for current processing target";
"epoch" => self.processing_target);
None
}
});
// we found a batch to process
if let Some(id) = id {
self.process_batch(network, id)
} else {
ProcessingResult::KeepChain
return Err(RemoveChain::WrongChainState(format!(
"Batch not found for current processing target {}",
self.processing_target
)));
}
Ok(KeepChain)
}
/// The block processor has completed processing a batch. This function handles the result
@@ -415,12 +432,12 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
Some(processing_id) if *processing_id != batch_id => {
debug!(self.log, "Unexpected batch result";
"batch_epoch" => batch_id, "expected_batch_epoch" => processing_id);
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
None => {
debug!(self.log, "Chain was not expecting a batch result";
"batch_epoch" => batch_id);
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
_ => {
// batch_id matches, continue
@@ -430,68 +447,63 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
match result {
BatchProcessResult::Success(was_non_empty) => {
let batch = match self.batches.get_mut(&batch_id) {
Some(batch) => batch,
None => {
debug!(self.log, "Current processing batch not found"; "batch" => batch_id);
return ProcessingResult::RemoveChain;
}
};
let _ = batch.processing_completed(true, &self.log);
let batch = self.batches.get_mut(&batch_id).ok_or_else(|| {
RemoveChain::WrongChainState(format!(
"Current processing batch not found: {}",
batch_id
))
})?;
batch.processing_completed(true)?;
// If the processed batch was not empty, we can validate previous unvalidated
// blocks.
if *was_non_empty {
self.advance_chain(network, batch_id);
// we register so that on chain switching we don't try it again
self.attempted_optimistic_starts.insert(batch_id);
self.processing_target += EPOCHS_PER_BATCH;
} else if let Some(epoch) = self.optimistic_start {
// check if this batch corresponds to an optimistic batch. In this case, we
// reject it as an optimistic candidate since the batch was empty
if epoch == batch_id {
if let ProcessingResult::RemoveChain = self.reject_optimistic_batch(
self.reject_optimistic_batch(
network,
false, /* do not re-request */
"batch was empty",
) {
return ProcessingResult::RemoveChain;
};
)?;
} else {
self.processing_target += EPOCHS_PER_BATCH;
}
}
self.processing_target += EPOCHS_PER_BATCH;
// check if the chain has completed syncing
if self.current_processed_slot() >= self.target_head_slot {
// chain is completed
debug!(self.log, "Chain is complete");
ProcessingResult::RemoveChain
Err(RemoveChain::ChainCompleted)
} else {
// chain is not completed
// attempt to request more batches
if let ProcessingResult::RemoveChain = self.request_batches(network) {
return ProcessingResult::RemoveChain;
}
self.request_batches(network)?;
// attempt to process more batches
self.process_completed_batches(network)
}
}
BatchProcessResult::Failed(imported_blocks) => {
let (batch, peer) = match self.batches.get_mut(&batch_id) {
Some(batch) => match batch.current_peer().cloned() {
Some(peer) => (batch, peer),
None => {
debug!(self.log, "Current processing has no peer"; "batch" => batch_id);
return ProcessingResult::RemoveChain;
}
},
None => {
debug!(self.log, "Current processing batch not found"; "batch" => batch_id);
return ProcessingResult::RemoveChain;
}
};
let batch = self.batches.get_mut(&batch_id).ok_or_else(|| {
RemoveChain::WrongChainState(format!(
"Batch not found for current processing target {}",
batch_id
))
})?;
let peer = batch.current_peer().cloned().ok_or_else(|| {
RemoveChain::WrongBatchState(format!(
"Processing target is in wrong state: {:?}",
batch.state(),
))
})?;
debug!(self.log, "Batch processing failed"; "imported_blocks" => imported_blocks,
"batch_epoch" => batch_id, "peer" => %peer, "client" => %network.client_type(&peer));
if let BatchState::Failed = batch.processing_completed(false, &self.log) {
if batch.processing_completed(false)? {
// check that we have not exceeded the re-process retry counter
// If a batch has exceeded the invalid batch lookup attempts limit, it means
// that it is likely all peers in this chain are sending invalid batches
@@ -506,7 +518,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
for (peer, _) in self.peers.drain() {
network.report_peer(peer, action);
}
ProcessingResult::RemoveChain
Err(RemoveChain::ChainFailed(batch_id))
} else {
// chain can continue. Check if it can be moved forward
if *imported_blocks {
@@ -545,7 +557,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
}
ProcessingResult::KeepChain
Ok(KeepChain)
}
/// Removes any batches previous to the given `validating_epoch` and updates the current
@@ -577,6 +589,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let removed_batches = std::mem::replace(&mut self.batches, remaining_batches);
for (id, batch) in removed_batches.into_iter() {
self.validated_batches = self.validated_batches.saturating_add(1);
// only for batches awaiting validation can we be sure the last attempt is
// right, and thus, that any different attempt is wrong
match batch.state() {
@@ -619,13 +632,12 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
),
BatchState::AwaitingProcessing(..) => {}
BatchState::Processing(_) => {
debug_assert_eq!(
id,
self.current_processing_batch.expect(
"A batch in a processing state means the chain is processing it"
)
);
self.current_processing_batch = None;
debug!(self.log, "Advancing chain while processing a batch"; "batch" => id, batch);
if let Some(processing_id) = self.current_processing_batch {
if id <= processing_id {
self.current_processing_batch = None;
}
}
}
}
}
@@ -673,15 +685,9 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// If this batch is an optimistic batch, we reject this epoch as an optimistic
// candidate and try to re download it
if epoch == batch_id {
if let ProcessingResult::RemoveChain =
self.reject_optimistic_batch(network, true, "batch was invalid")
{
return ProcessingResult::RemoveChain;
} else {
// since this is the optimistic batch, we can't consider previous batches as
// invalid.
return ProcessingResult::KeepChain;
}
return self.reject_optimistic_batch(network, true, "batch was invalid");
// since this is the optimistic batch, we can't consider previous batches as
// invalid.
}
}
// this is our robust `processing_target`. All previous batches must be awaiting
@@ -689,9 +695,9 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let mut redownload_queue = Vec::new();
for (id, batch) in self.batches.range_mut(..batch_id) {
if let BatchState::Failed = batch.validation_failed(&self.log) {
if batch.validation_failed()? {
// remove the chain early
return ProcessingResult::RemoveChain;
return Err(RemoveChain::ChainFailed(batch_id));
}
redownload_queue.push(*id);
}
@@ -701,9 +707,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
self.processing_target = self.start_epoch;
for id in redownload_queue {
if let ProcessingResult::RemoveChain = self.retry_batch_download(network, id) {
return ProcessingResult::RemoveChain;
}
self.retry_batch_download(network, id)?;
}
// finally, re-request the failed batch.
self.retry_batch_download(network, batch_id)
@@ -746,9 +750,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
self.state = ChainSyncingState::Syncing;
// begin requesting blocks from the peer pool, until all peers are exhausted.
if let ProcessingResult::RemoveChain = self.request_batches(network) {
return ProcessingResult::RemoveChain;
}
self.request_batches(network)?;
// start processing batches if needed
self.process_completed_batches(network)
@@ -770,7 +772,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// Either new or not, this peer is idle, try to request more batches
self.request_batches(network)
} else {
ProcessingResult::KeepChain
Ok(KeepChain)
}
}
@@ -789,19 +791,19 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// sending an error /timeout) if the peer is removed from the chain for other
// reasons. Check that this block belongs to the expected peer
if !batch.is_expecting_block(peer_id, &request_id) {
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
debug!(self.log, "Batch failed. RPC Error"; "batch_epoch" => batch_id);
if let Some(active_requests) = self.peers.get_mut(peer_id) {
active_requests.remove(&batch_id);
}
if let BatchState::Failed = batch.download_failed(&self.log) {
return ProcessingResult::RemoveChain;
if batch.download_failed()? {
return Err(RemoveChain::ChainFailed(batch_id));
}
self.retry_batch_download(network, batch_id)
} else {
// this could be an error for an old batch, removed when the chain advances
ProcessingResult::KeepChain
Ok(KeepChain)
}
}
@@ -813,7 +815,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
) -> ProcessingResult {
let batch = match self.batches.get_mut(&batch_id) {
Some(batch) => batch,
None => return ProcessingResult::KeepChain,
None => return Ok(KeepChain),
};
// Find a peer to request the batch
@@ -834,7 +836,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
self.send_batch(network, batch_id, peer)
} else {
// If we are here the chain has no more peers
ProcessingResult::RemoveChain
Err(RemoveChain::EmptyPeerPool)
}
}
@@ -850,7 +852,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
match network.blocks_by_range_request(peer.clone(), request, self.id, batch_id) {
Ok(request_id) => {
// inform the batch about the new request
batch.start_downloading_from_peer(peer.clone(), request_id, &self.log);
batch.start_downloading_from_peer(peer.clone(), request_id)?;
if self
.optimistic_start
.map(|epoch| epoch == batch_id)
@@ -866,21 +868,26 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
.get_mut(&peer)
.map(|requests| {
requests.insert(batch_id);
ProcessingResult::KeepChain
Ok(KeepChain)
})
.unwrap_or(ProcessingResult::RemoveChain);
.unwrap_or_else(|| {
Err(RemoveChain::WrongChainState(format!(
"Sending batch to a peer that is not in the chain: {}",
peer
)))
});
}
Err(e) => {
// NOTE: under normal conditions this shouldn't happen but we handle it anyway
warn!(self.log, "Could not send batch request";
"batch_id" => batch_id, "error" => e, &batch);
// register the failed download and check if the batch can be retried
batch.start_downloading_from_peer(peer.clone(), 1, &self.log); // fake request_id is not relevant
batch.start_downloading_from_peer(peer.clone(), 1)?; // fake request_id is not relevant
self.peers
.get_mut(&peer)
.map(|request| request.remove(&batch_id));
if let BatchState::Failed = batch.download_failed(&self.log) {
return ProcessingResult::RemoveChain;
if batch.download_failed()? {
return Err(RemoveChain::ChainFailed(batch_id));
} else {
return self.retry_batch_download(network, batch_id);
}
@@ -888,7 +895,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
}
ProcessingResult::KeepChain
Ok(KeepChain)
}
/// Returns true if this chain is currently syncing.
@@ -906,7 +913,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
network: &mut SyncNetworkContext<T::EthSpec>,
) -> ProcessingResult {
if !matches!(self.state, ChainSyncingState::Syncing) {
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
// find the next pending batch and request it from the peer
@@ -933,27 +940,23 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
if let Some(peer) = idle_peers.pop() {
let optimistic_batch = BatchInfo::new(&epoch, EPOCHS_PER_BATCH);
self.batches.insert(epoch, optimistic_batch);
if let ProcessingResult::RemoveChain = self.send_batch(network, epoch, peer) {
return ProcessingResult::RemoveChain;
}
self.send_batch(network, epoch, peer)?;
}
}
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
while let Some(peer) = idle_peers.pop() {
if let Some(batch_id) = self.include_next_batch() {
// send the batch
if let ProcessingResult::RemoveChain = self.send_batch(network, batch_id, peer) {
return ProcessingResult::RemoveChain;
}
self.send_batch(network, batch_id, peer)?;
} else {
// No more batches, simply stop
return ProcessingResult::KeepChain;
return Ok(KeepChain);
}
}
ProcessingResult::KeepChain
Ok(KeepChain)
}
/// Creates the next required batch from the chain. If there are no more batches required,
@@ -1037,6 +1040,36 @@ impl<T: BeaconChainTypes> slog::KV for SyncingChain<T> {
)?;
serializer.emit_usize("batches", self.batches.len())?;
serializer.emit_usize("peers", self.peers.len())?;
serializer.emit_u8("validated_batches", self.validated_batches)?;
slog::Result::Ok(())
}
}
use super::batch::WrongState as WrongBatchState;
impl From<WrongBatchState> for RemoveChain {
fn from(err: WrongBatchState) -> Self {
RemoveChain::WrongBatchState(err.0)
}
}
impl std::fmt::Debug for RemoveChain {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// needed to avoid Debugging Strings
match self {
RemoveChain::ChainCompleted => f.write_str("ChainCompleted"),
RemoveChain::EmptyPeerPool => f.write_str("EmptyPeerPool"),
RemoveChain::ChainFailed(batch) => write!(f, "ChainFailed(batch: {} )", batch),
RemoveChain::WrongBatchState(reason) => write!(f, "WrongBatchState: {}", reason),
RemoveChain::WrongChainState(reason) => write!(f, "WrongChainState: {}", reason),
}
}
}
impl RemoveChain {
pub fn is_critical(&self) -> bool {
matches!(
self,
RemoveChain::WrongBatchState(..) | RemoveChain::WrongChainState(..)
)
}
}
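The refactor shown throughout this diff replaces the old `ProcessingResult` enum with a `Result`, so `?` can carry the removal reason up the call stack instead of the repeated `if let ProcessingResult::RemoveChain = ...` checks. A minimal sketch of the pattern (the names mirror the diff, but the function bodies here are illustrative only):

```rust
// Illustrative reduction of the diff's error-propagation refactor.
// `KeepChain` is the success marker; `RemoveChain` carries the reason.
#[derive(Debug)]
pub enum RemoveChain {
    ChainCompleted,
    EmptyPeerPool,
    ChainFailed(u64),
    WrongBatchState(String),
    WrongChainState(String),
}

pub struct KeepChain;
pub type ProcessingResult = Result<KeepChain, RemoveChain>;

fn request_batches(peer_count: usize) -> ProcessingResult {
    if peer_count == 0 {
        // Mirrors `Err(RemoveChain::EmptyPeerPool)` in the diff.
        return Err(RemoveChain::EmptyPeerPool);
    }
    Ok(KeepChain)
}

// Before: `if let ProcessingResult::RemoveChain = self.request_batches(network) { ... }`.
// After: the `?` operator forwards any `RemoveChain` to the caller unchanged.
fn start_syncing(peer_count: usize) -> ProcessingResult {
    request_batches(peer_count)?;
    Ok(KeepChain)
}
```

The callers in `range.rs` then inspect the surfaced reason (`is_critical`) to pick a log level, which is exactly what the `on_chain_removed` helper below does.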


@@ -3,7 +3,7 @@
//! Each chain type is stored in its own map. A variety of helper functions are given along with
//! this struct to simplify the logic of the other layers of sync.
use super::chain::{ChainId, ProcessingResult, SyncingChain};
use super::chain::{ChainId, ProcessingResult, RemoveChain, SyncingChain};
use super::sync_type::RangeSyncType;
use crate::beacon_processor::WorkEvent as BeaconWorkEvent;
use crate::sync::network_context::SyncNetworkContext;
@@ -11,7 +11,7 @@ use crate::sync::PeerSyncInfo;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::PeerId;
use fnv::FnvHashMap;
use slog::{debug, error};
use slog::{crit, debug, error};
use smallvec::SmallVec;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
@@ -108,33 +108,33 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
/// Calls `func` on every chain of the collection. If the result is
/// `ProcessingResult::RemoveChain`, the chain is removed and returned.
/// NOTE: `func` must not change the syncing state of a chain.
pub fn call_all<F>(&mut self, mut func: F) -> Vec<(SyncingChain<T>, RangeSyncType)>
pub fn call_all<F>(&mut self, mut func: F) -> Vec<(SyncingChain<T>, RangeSyncType, RemoveChain)>
where
F: FnMut(&mut SyncingChain<T>) -> ProcessingResult,
{
let mut to_remove = Vec::new();
for (id, chain) in self.finalized_chains.iter_mut() {
if let ProcessingResult::RemoveChain = func(chain) {
to_remove.push((*id, RangeSyncType::Finalized));
if let Err(remove_reason) = func(chain) {
to_remove.push((*id, RangeSyncType::Finalized, remove_reason));
}
}
for (id, chain) in self.head_chains.iter_mut() {
if let ProcessingResult::RemoveChain = func(chain) {
to_remove.push((*id, RangeSyncType::Head));
if let Err(remove_reason) = func(chain) {
to_remove.push((*id, RangeSyncType::Head, remove_reason));
}
}
let mut results = Vec::with_capacity(to_remove.len());
for (id, sync_type) in to_remove.into_iter() {
for (id, sync_type, reason) in to_remove.into_iter() {
let chain = match sync_type {
RangeSyncType::Finalized => self.finalized_chains.remove(&id),
RangeSyncType::Head => self.head_chains.remove(&id),
};
let chain = chain.expect("Chain exists");
self.on_chain_removed(&id, chain.is_syncing());
results.push((chain, sync_type));
results.push((chain, sync_type, reason));
}
results
}
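The `call_all` change above can be viewed as a generic drain-on-error over a map. This simplified sketch (the generic names are ours, not from the codebase) shows the same two-phase collect-then-remove shape, which avoids removing entries while the mutable iterator is still live:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Run `func` on every entry; entries whose closure returns `Err` are removed
// from the map and returned together with the error, analogous to the diff's
// `(chain, sync_type, remove_reason)` triples.
fn call_all<K, V, E, F>(map: &mut HashMap<K, V>, mut func: F) -> Vec<(V, E)>
where
    K: Hash + Eq + Clone,
    F: FnMut(&mut V) -> Result<(), E>,
{
    // Phase 1: record the ids to remove (cannot remove while holding `iter_mut`).
    let mut to_remove = Vec::new();
    for (id, value) in map.iter_mut() {
        if let Err(reason) = func(value) {
            to_remove.push((id.clone(), reason));
        }
    }
    // Phase 2: actually remove the entries and hand back ownership plus the reason.
    to_remove
        .into_iter()
        .map(|(id, reason)| (map.remove(&id).expect("entry exists"), reason))
        .collect()
}
```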
@@ -144,29 +144,30 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
/// If the function returns `ProcessingResult::RemoveChain`, the chain is removed and returned.
/// If the chain is found, its syncing type is returned, or an error otherwise.
/// NOTE: `func` should not change the sync state of a chain.
#[allow(clippy::type_complexity)]
pub fn call_by_id<F>(
&mut self,
id: ChainId,
func: F,
) -> Result<(Option<SyncingChain<T>>, RangeSyncType), ()>
) -> Result<(Option<(SyncingChain<T>, RemoveChain)>, RangeSyncType), ()>
where
F: FnOnce(&mut SyncingChain<T>) -> ProcessingResult,
{
if let Entry::Occupied(mut entry) = self.finalized_chains.entry(id) {
// Search in our finalized chains first
if let ProcessingResult::RemoveChain = func(entry.get_mut()) {
if let Err(remove_reason) = func(entry.get_mut()) {
let chain = entry.remove();
self.on_chain_removed(&id, chain.is_syncing());
Ok((Some(chain), RangeSyncType::Finalized))
Ok((Some((chain, remove_reason)), RangeSyncType::Finalized))
} else {
Ok((None, RangeSyncType::Finalized))
}
} else if let Entry::Occupied(mut entry) = self.head_chains.entry(id) {
// Search in our head chains next
if let ProcessingResult::RemoveChain = func(entry.get_mut()) {
if let Err(remove_reason) = func(entry.get_mut()) {
let chain = entry.remove();
self.on_chain_removed(&id, chain.is_syncing());
Ok((Some(chain), RangeSyncType::Head))
Ok((Some((chain, remove_reason)), RangeSyncType::Head))
} else {
Ok((None, RangeSyncType::Head))
}
@@ -299,7 +300,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
match old_id {
Some(Some(old_id)) => debug!(self.log, "Switching finalized chains";
"old_id" => old_id, &chain),
None => debug!(self.log, "Syncing new chain"; &chain),
None => debug!(self.log, "Syncing new finalized chain"; &chain),
Some(None) => {
// this is the same chain. We try to advance it.
}
@@ -308,11 +309,14 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
// update the state to a new finalized state
self.state = RangeSyncState::Finalized(new_id);
if let ProcessingResult::RemoveChain =
chain.start_syncing(network, local_epoch, local_head_epoch)
if let Err(remove_reason) = chain.start_syncing(network, local_epoch, local_head_epoch)
{
// this happens only if sending a batch over the `network` fails a lot
error!(self.log, "Chain removed while switching chains");
if remove_reason.is_critical() {
crit!(self.log, "Chain removed while switching chains"; "chain" => new_id, "reason" => ?remove_reason);
} else {
// this happens only if sending a batch over the `network` fails a lot
error!(self.log, "Chain removed while switching chains"; "chain" => new_id, "reason" => ?remove_reason);
}
self.finalized_chains.remove(&new_id);
self.on_chain_removed(&new_id, true);
}
@@ -330,6 +334,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
) {
// Include the awaiting head peers
for (peer_id, peer_sync_info) in awaiting_head_peers.drain() {
debug!(self.log, "including head peer");
self.add_peer_or_create_chain(
local_epoch,
peer_sync_info.head_root,
@@ -364,11 +369,15 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
if !chain.is_syncing() {
debug!(self.log, "New head chain started syncing"; &chain);
}
if let ProcessingResult::RemoveChain =
if let Err(remove_reason) =
chain.start_syncing(network, local_epoch, local_head_epoch)
{
self.head_chains.remove(&id);
error!(self.log, "Chain removed while switching head chains"; "id" => id);
if remove_reason.is_critical() {
crit!(self.log, "Chain removed while switching head chains"; "chain" => id, "reason" => ?remove_reason);
} else {
error!(self.log, "Chain removed while switching head chains"; "chain" => id, "reason" => ?remove_reason);
}
} else {
syncing_chains.push(id);
}
@@ -481,8 +490,12 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
debug!(self.log, "Adding peer to known chain"; "peer_id" => %peer, "sync_type" => ?sync_type, &chain);
debug_assert_eq!(chain.target_head_root, target_head_root);
debug_assert_eq!(chain.target_head_slot, target_head_slot);
if let ProcessingResult::RemoveChain = chain.add_peer(network, peer) {
debug!(self.log, "Chain removed after adding peer"; "chain" => id);
if let Err(remove_reason) = chain.add_peer(network, peer) {
if remove_reason.is_critical() {
crit!(self.log, "Chain removed after adding peer"; "chain" => id, "reason" => ?remove_reason);
} else {
error!(self.log, "Chain removed after adding peer"; "chain" => id, "reason" => ?remove_reason);
}
let chain = entry.remove();
self.on_chain_removed(&id, chain.is_syncing());
}


@@ -19,7 +19,7 @@
//! need to be downloaded.
//!
//! A few interesting notes about finalized chain syncing:
//! - Only one finalized chain can sync at a time.
//! - Only one finalized chain can sync at a time
//! - The finalized chain with the largest peer pool takes priority.
//! - As one finalized chain completes, others are checked to see if they can be continued,
//! otherwise they are removed.
@@ -39,7 +39,7 @@
//! Each chain is downloaded in batches of blocks. The batched blocks are processed sequentially
//! and further batches are requested as current blocks are being processed.
use super::chain::ChainId;
use super::chain::{ChainId, RemoveChain, SyncingChain};
use super::chain_collection::ChainCollection;
use super::sync_type::RangeSyncType;
use crate::beacon_processor::WorkEvent as BeaconWorkEvent;
@@ -49,7 +49,7 @@ use crate::sync::PeerSyncInfo;
use crate::sync::RequestId;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::PeerId;
use slog::{debug, error, trace};
use slog::{crit, debug, error, trace};
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::mpsc;
@@ -121,7 +121,8 @@ impl<T: BeaconChainTypes> RangeSync<T> {
.finalized_epoch
.start_slot(T::EthSpec::slots_per_epoch());
// NOTE: A peer that has been re-status'd may now exist in multiple finalized chains.
// NOTE: A peer that has been re-status'd may now exist in multiple finalized chains. This
// is OK since we sync only one finalized chain at a time.
// determine which kind of sync to perform and set up the chains
match RangeSyncType::new(&self.beacon_chain, &local_info, &remote_info) {
@@ -208,22 +209,22 @@ impl<T: BeaconChainTypes> RangeSync<T> {
chain.on_block_response(network, batch_id, &peer_id, request_id, beacon_block)
}) {
Ok((removed_chain, sync_type)) => {
if let Some(_removed_chain) = removed_chain {
debug!(self.log, "Chain removed after block response"; "sync_type" => ?sync_type, "chain_id" => chain_id);
// update the state of the collection
self.chains.update(
if let Some((removed_chain, remove_reason)) = removed_chain {
self.on_chain_removed(
removed_chain,
sync_type,
remove_reason,
network,
&mut self.awaiting_head_peers,
&self.beacon_processor_send,
"block response",
);
}
}
Err(_) => {
debug!(self.log, "BlocksByRange response for removed chain"; "chain" => chain_id)
trace!(self.log, "BlocksByRange response for removed chain"; "chain" => chain_id)
}
}
} else {
debug!(self.log, "Response/Error for non registered request"; "request_id" => request_id)
trace!(self.log, "Response/Error for non registered request"; "request_id" => request_id)
}
}
@@ -241,17 +242,18 @@ impl<T: BeaconChainTypes> RangeSync<T> {
Ok((None, _sync_type)) => {
// Chain was found and not removed
}
Ok((Some(_removed_chain), sync_type)) => {
debug!(self.log, "Chain removed after processing result"; "chain" => chain_id, "sync_type" => ?sync_type);
self.chains.update(
Ok((Some((removed_chain, remove_reason)), sync_type)) => {
self.on_chain_removed(
removed_chain,
sync_type,
remove_reason,
network,
&mut self.awaiting_head_peers,
&self.beacon_processor_send,
"batch processing result",
);
}
Err(_) => {
debug!(self.log, "BlocksByRange response for removed chain"; "chain" => chain_id)
trace!(self.log, "BlocksByRange response for removed chain"; "chain" => chain_id)
}
}
}
@@ -275,18 +277,20 @@ impl<T: BeaconChainTypes> RangeSync<T> {
/// for this peer. If so we mark the batch as failed. The batch may then hit its maximum
/// retries. In this case, we need to remove the chain.
fn remove_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: &PeerId) {
for (removed_chain, sync_type) in self
for (removed_chain, sync_type, remove_reason) in self
.chains
.call_all(|chain| chain.remove_peer(peer_id, network))
{
debug!(self.log, "Chain removed after removing peer"; "sync_type" => ?sync_type, "chain" => removed_chain.get_id());
self.on_chain_removed(
removed_chain,
sync_type,
remove_reason,
network,
"peer removed",
);
// update the state of the collection
}
self.chains.update(
network,
&mut self.awaiting_head_peers,
&self.beacon_processor_send,
);
}
/// An RPC error has occurred.
@@ -306,22 +310,46 @@ impl<T: BeaconChainTypes> RangeSync<T> {
chain.inject_error(network, batch_id, &peer_id, request_id)
}) {
Ok((removed_chain, sync_type)) => {
if let Some(removed_chain) = removed_chain {
debug!(self.log, "Chain removed on rpc error"; "sync_type" => ?sync_type, "chain" => removed_chain.get_id());
// update the state of the collection
self.chains.update(
if let Some((removed_chain, remove_reason)) = removed_chain {
self.on_chain_removed(
removed_chain,
sync_type,
remove_reason,
network,
&mut self.awaiting_head_peers,
&self.beacon_processor_send,
"RPC error",
);
}
}
Err(_) => {
debug!(self.log, "BlocksByRange response for removed chain"; "chain" => chain_id)
trace!(self.log, "BlocksByRange response for removed chain"; "chain" => chain_id)
}
}
} else {
debug!(self.log, "Response/Error for non registered request"; "request_id" => request_id)
trace!(self.log, "Response/Error for non registered request"; "request_id" => request_id)
}
}
fn on_chain_removed(
&mut self,
chain: SyncingChain<T>,
sync_type: RangeSyncType,
remove_reason: RemoveChain,
network: &mut SyncNetworkContext<T::EthSpec>,
op: &'static str,
) {
if remove_reason.is_critical() {
crit!(self.log, "Chain removed"; "sync_type" => ?sync_type, &chain, "reason" => ?remove_reason, "op" => op);
} else {
debug!(self.log, "Chain removed"; "sync_type" => ?sync_type, &chain, "reason" => ?remove_reason, "op" => op);
}
network.status_peers(self.beacon_chain.clone(), chain.peers());
// update the state of the collection
self.chains.update(
network,
&mut self.awaiting_head_peers,
&self.beacon_processor_send,
);
}
}


@@ -15,6 +15,7 @@
* [Validator Management](./validator-management.md)
* [Importing from the Eth2 Launchpad](./validator-import-launchpad.md)
* [Slashing Protection](./slashing-protection.md)
* [Voluntary Exits](./voluntary-exit.md)
* [APIs](./api.md)
* [Beacon Node API](./api-bn.md)
* [/lighthouse](./api-lighthouse.md)


@@ -178,6 +178,127 @@ See [Validator Inclusion APIs](./validator-inclusion.md).
See [Validator Inclusion APIs](./validator-inclusion.md).
### `/lighthouse/eth1/syncing`
Returns information regarding the Eth1 network, as it is required for use in
Eth2.
#### Fields
- `head_block_number`, `head_block_timestamp`: the block number and timestamp
from the very head of the Eth1 chain. Useful for understanding the immediate
health of the Eth1 node that the beacon node is connected to.
- `latest_cached_block_number` & `latest_cached_block_timestamp`: the block
number and timestamp of the latest block we have in our block cache.
- For correct Eth1 voting this timestamp should be later than the
`voting_period_start_timestamp`.
- `voting_period_start_timestamp`: the start of the period where block
producers must include votes for blocks in the Eth1 chain. Provided for
reference.
- `eth1_node_sync_status_percentage` (float): An estimate of how far the head of the
Eth1 node is from the head of the Eth1 chain.
- `100.0` indicates a fully synced Eth1 node.
- `0.0` indicates an Eth1 node that has not verified any blocks past the
genesis block.
- `lighthouse_is_cached_and_ready`: Is set to `true` if the caches in the
beacon node are ready for block production.
- This value might be set to
`false` whilst `eth1_node_sync_status_percentage == 100.0` if the beacon
node is still building its internal cache.
- This value might be set to `true` whilst
`eth1_node_sync_status_percentage < 100.0` since the cache only cares
about blocks a certain distance behind the head.
#### Example
```bash
curl -X GET "http://localhost:5052/lighthouse/eth1/syncing" -H "accept: application/json" | jq
```
```json
{
"data": {
"head_block_number": 3611806,
"head_block_timestamp": 1603249317,
"latest_cached_block_number": 3610758,
"latest_cached_block_timestamp": 1603233597,
"voting_period_start_timestamp": 1603228632,
"eth1_node_sync_status_percentage": 100,
"lighthouse_is_cached_and_ready": true
}
}
```
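As a rough sketch of how a consumer might interpret these fields (the struct and readiness rule here are illustrative only, not Lighthouse's actual implementation): the cache is usable for Eth1 voting once the latest cached block's timestamp reaches past `voting_period_start_timestamp`.

```rust
// Illustrative only: field names follow the JSON response above, but this
// readiness rule is a simplification of what the beacon node really checks.
struct Eth1SyncStatus {
    latest_cached_block_timestamp: Option<u64>,
    voting_period_start_timestamp: u64,
}

// True when the block cache extends past the start of the voting period,
// i.e. the cache can serve blocks for Eth1 voting.
fn cache_usable_for_voting(s: &Eth1SyncStatus) -> bool {
    s.latest_cached_block_timestamp
        .map(|t| t >= s.voting_period_start_timestamp)
        .unwrap_or(false)
}
```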
### `/lighthouse/eth1/block_cache`
Returns a list of all the Eth1 blocks in the Eth1 voting cache.
#### Example
```bash
curl -X GET "http://localhost:5052/lighthouse/eth1/block_cache" -H "accept: application/json" | jq
```
```json
{
"data": [
{
"hash": "0x3a17f4b7ae4ee57ef793c49ebc9c06ff85207a5e15a1d0bd37b68c5ef5710d7f",
"timestamp": 1603173338,
"number": 3606741,
"deposit_root": "0xd24920d936e8fb9b67e93fd126ce1d9e14058b6d82dcf7d35aea46879fae6dee",
"deposit_count": 88911
},
{
"hash": "0x78852954ea4904e5f81038f175b2adefbede74fbb2338212964405443431c1e7",
"timestamp": 1603173353,
"number": 3606742,
"deposit_root": "0xd24920d936e8fb9b67e93fd126ce1d9e14058b6d82dcf7d35aea46879fae6dee",
"deposit_count": 88911
}
]
}
```
### `/lighthouse/eth1/deposit_cache`
Returns a list of all cached logs from the deposit contract.
#### Example
```bash
curl -X GET "http://localhost:5052/lighthouse/eth1/deposit_cache" -H "accept: application/json" | jq
```
```json
{
"data": [
{
"deposit_data": {
"pubkey": "0xae9e6a550ac71490cdf134533b1688fcbdb16f113d7190eacf4f2e9ca6e013d5bd08c37cb2bde9bbdec8ffb8edbd495b",
"withdrawal_credentials": "0x0062a90ebe71c4c01c4e057d7d13b944d9705f524ebfa24290c22477ab0517e4",
"amount": "32000000000",
"signature": "0xa87a4874d276982c471e981a113f8af74a31ffa7d18898a02df2419de2a7f02084065784aa2f743d9ddf80952986ea0b012190cd866f1f2d9c633a7a33c2725d0b181906d413c82e2c18323154a2f7c7ae6f72686782ed9e423070daa00db05b"
},
"block_number": 3086571,
"index": 0,
"signature_is_valid": false
},
{
"deposit_data": {
"pubkey": "0xb1d0ec8f907e023ea7b8cb1236be8a74d02ba3f13aba162da4a68e9ffa2e395134658d150ef884bcfaeecdf35c286496",
"withdrawal_credentials": "0x00a6aa2a632a6c4847cf87ef96d789058eb65bfaa4cc4e0ebc39237421c22e54",
"amount": "32000000000",
"signature": "0x8d0f8ec11935010202d6dde9ab437f8d835b9cfd5052c001be5af9304f650ada90c5363022e1f9ef2392dd222cfe55b40dfd52578468d2b2092588d4ad3745775ea4d8199216f3f90e57c9435c501946c030f7bfc8dbd715a55effa6674fd5a4"
},
"block_number": 3086579,
"index": 1,
"signature_is_valid": false
}
]
}
```
### `/lighthouse/beacon/states/{state_id}/ssz`
Obtains a `BeaconState` in SSZ bytes. Useful for obtaining a genesis state.


@@ -36,3 +36,17 @@ Each binary is contained in a `.tar.gz` archive. For this example, let's use the
1. Test the binary with `./lighthouse --version` (it should print the version).
1. (Optional) Move the `lighthouse` binary to a location in your `PATH`, so the `lighthouse` command can be called from anywhere.
- E.g., `cp lighthouse /usr/bin`
## Troubleshooting
If you get a SIGILL (exit code 132), then your CPU is incompatible with the optimized build
of Lighthouse and you should switch to the `-portable` build. In this case, you will see a
warning like this on start-up:
```
WARN CPU seems incompatible with optimized Lighthouse build, advice: If you get a SIGILL, please try Lighthouse portable build
```
On some VPS providers, the virtualization can make it appear as if CPU features are not available,
even when they are. In this case you might see the warning above, but so long as the client
continues to function it's nothing to worry about.
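For reference, the exit code decodes as follows: shells report death-by-signal as `128 + signal number`, and SIGILL is signal 4 on Linux, hence 132. A tiny sketch of the decoding:

```rust
// Shells encode "killed by signal N" as exit code 128 + N.
// SIGILL is signal 4 on Linux, so 128 + 4 = 132.
fn signal_from_exit_code(code: i32) -> Option<i32> {
    if code > 128 && code < 160 {
        Some(code - 128)
    } else {
        None
    }
}
```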


@@ -0,0 +1,68 @@
# Voluntary exits
A validator may choose to voluntarily stop performing duties (proposing blocks and attesting to blocks) by submitting
a voluntary exit transaction to the beacon chain.
A validator can initiate a voluntary exit provided that the validator is currently active, has not been slashed and has been active for at least 256 epochs (~27 hours) since it has been activated.
> Note: After initiating a voluntary exit, the validator will have to keep performing duties until it has successfully exited to avoid penalties.
It takes a minimum of 5 epochs (32 minutes) for a validator to exit after initiating a voluntary exit.
This number can be much higher depending on how many other validators are queued to exit.
## Withdrawal of exited funds
Even though users can perform a voluntary exit in phase 0, they **cannot withdraw their exited funds at this point in time**.
This implies that the staked funds are effectively **frozen** until withdrawals are enabled in future phases.
To understand the phased rollout strategy for Eth2, please visit <https://ethereum.org/en/eth2/#roadmap>.
## Initiating a voluntary exit
In order to initiate an exit, users can use the `lighthouse account validator exit` command.
- The `--keystore` flag is used to specify the path to the EIP-2335 voting keystore for the validator.
- The `--beacon-node` flag is used to specify a beacon chain HTTP endpoint that conforms to the [Eth2.0 Standard API](https://ethereum.github.io/eth2.0-APIs/) specifications. That beacon node will be used to validate and propagate the voluntary exit. The default value for this flag is `http://localhost:5052`.
- The `--testnet` flag is used to specify a particular testnet (default is `medalla`).
- The `--password-file` flag is used to specify the path to the file containing the password for the voting keystore. If this flag is not provided, the user will be prompted to enter the password.
After validating the password, the user will be prompted to enter a special exit phrase as a final confirmation after which the voluntary exit will be published to the beacon chain.
The exit phrase is the following:
> Exit my validator
Below is an example for initiating a voluntary exit on the zinken testnet.
```
$ lighthouse --testnet zinken account validator exit --keystore /path/to/keystore --beacon-node http://localhost:5052
Running account manager for zinken testnet
validator-dir path: ~/.lighthouse/zinken/validators
Enter the keystore password for validator in 0xabcd
Password is correct
Publishing a voluntary exit for validator 0xabcd
WARNING: WARNING: THIS IS AN IRREVERSIBLE OPERATION
WARNING: WITHDRAWING STAKED ETH WILL NOT BE POSSIBLE UNTIL ETH1/ETH2 MERGE.
PLEASE VISIT https://lighthouse-book.sigmaprime.io/voluntary-exit.html
TO MAKE SURE YOU UNDERSTAND THE IMPLICATIONS OF A VOLUNTARY EXIT.
Enter the exit phrase from the above URL to confirm the voluntary exit:
Exit my validator
Successfully published voluntary exit for validator 0xabcd
```
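Once the exit has been published, its progress can be monitored by querying the validator's status via the standard API. A sketch, assuming the default beacon node address and the example validator key above; the reported status should move through `active_awaiting_voluntary_exit` and eventually to `exited_voluntarily`:

```
$ curl "http://localhost:5052/eth/v1/beacon/states/head/validators/0xabcd"
```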

View File

@@ -1,6 +1,6 @@
[package]
name = "boot_node"
version = "0.3.2"
version = "0.3.3"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"

View File

@@ -290,7 +290,7 @@ pub fn recursively_find_voting_keystores<P: AsRef<Path>>(
}
/// Returns `true` if we should consider the `file_name` to represent a voting keystore.
fn is_voting_keystore(file_name: &str) -> bool {
pub fn is_voting_keystore(file_name: &str) -> bool {
// All formats end with `.json`.
if !file_name.ends_with(".json") {
return false;

View File

@@ -21,7 +21,8 @@ libsecp256k1 = "0.3.5"
ring = "0.16.12"
bytes = "0.5.6"
account_utils = { path = "../../common/account_utils" }
eth2_ssz = { path = "../../consensus/ssz" }
eth2_ssz = "0.1.2"
eth2_ssz_derive = "0.1.0"
[target.'cfg(target_os = "linux")'.dependencies]
psutil = { version = "3.2.0", optional = true }

View File

@@ -210,12 +210,14 @@ impl BeaconNodeHttpClient {
self.get_opt(path).await
}
/// `GET beacon/states/{state_id}/validators`
/// `GET beacon/states/{state_id}/validators?id,status`
///
/// Returns `Ok(None)` on a 404 error.
pub async fn get_beacon_states_validators(
&self,
state_id: StateId,
ids: Option<&[ValidatorId]>,
statuses: Option<&[ValidatorStatus]>,
) -> Result<Option<GenericResponse<Vec<ValidatorData>>>, Error> {
let mut path = self.eth_path()?;
@@ -226,6 +228,24 @@ impl BeaconNodeHttpClient {
.push(&state_id.to_string())
.push("validators");
if let Some(ids) = ids {
let id_string = ids
.iter()
.map(|i| i.to_string())
.collect::<Vec<_>>()
.join(",");
path.query_pairs_mut().append_pair("id", &id_string);
}
if let Some(statuses) = statuses {
let status_string = statuses
.iter()
.map(|i| i.to_string())
.collect::<Vec<_>>()
.join(",");
path.query_pairs_mut().append_pair("status", &status_string);
}
self.get_opt(path).await
}

View File

@@ -3,12 +3,13 @@
use crate::{
ok_or_error,
types::{BeaconState, Epoch, EthSpec, GenericResponse, ValidatorId},
BeaconNodeHttpClient, Error, StateId, StatusCode,
BeaconNodeHttpClient, DepositData, Error, Eth1Data, Hash256, StateId, StatusCode,
};
use proto_array::core::ProtoArray;
use reqwest::IntoUrl;
use serde::{Deserialize, Serialize};
use ssz::Decode;
use ssz_derive::{Decode, Encode};
pub use eth2_libp2p::{types::SyncState, PeerInfo};
@@ -145,6 +146,50 @@ impl Health {
}
}
/// Indicates how up-to-date the Eth1 caches are.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct Eth1SyncStatusData {
pub head_block_number: Option<u64>,
pub head_block_timestamp: Option<u64>,
pub latest_cached_block_number: Option<u64>,
pub latest_cached_block_timestamp: Option<u64>,
pub voting_period_start_timestamp: u64,
pub eth1_node_sync_status_percentage: f64,
pub lighthouse_is_cached_and_ready: bool,
}
/// A fully parsed eth1 deposit contract log.
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize, Encode, Decode)]
pub struct DepositLog {
pub deposit_data: DepositData,
/// The block number of the log that included this `DepositData`.
pub block_number: u64,
/// The index included with the deposit log.
pub index: u64,
/// True if the signature is valid.
pub signature_is_valid: bool,
}
/// A block of the eth1 chain.
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize, Encode, Decode)]
pub struct Eth1Block {
pub hash: Hash256,
pub timestamp: u64,
pub number: u64,
pub deposit_root: Option<Hash256>,
pub deposit_count: Option<u64>,
}
impl Eth1Block {
pub fn eth1_data(self) -> Option<Eth1Data> {
Some(Eth1Data {
deposit_root: self.deposit_root?,
deposit_count: self.deposit_count?,
block_hash: self.hash,
})
}
}
impl BeaconNodeHttpClient {
/// Perform a HTTP GET request, returning `None` on a 404 error.
async fn get_bytes_opt<U: IntoUrl>(&self, url: U) -> Result<Option<Vec<u8>>, Error> {
@@ -246,6 +291,51 @@ impl BeaconNodeHttpClient {
self.get(path).await
}
/// `GET lighthouse/eth1/syncing`
pub async fn get_lighthouse_eth1_syncing(
&self,
) -> Result<GenericResponse<Eth1SyncStatusData>, Error> {
let mut path = self.server.clone();
path.path_segments_mut()
.map_err(|()| Error::InvalidUrl(self.server.clone()))?
.push("lighthouse")
.push("eth1")
.push("syncing");
self.get(path).await
}
/// `GET lighthouse/eth1/block_cache`
pub async fn get_lighthouse_eth1_block_cache(
&self,
) -> Result<GenericResponse<Vec<Eth1Block>>, Error> {
let mut path = self.server.clone();
path.path_segments_mut()
.map_err(|()| Error::InvalidUrl(self.server.clone()))?
.push("lighthouse")
.push("eth1")
.push("block_cache");
self.get(path).await
}
/// `GET lighthouse/eth1/deposit_cache`
pub async fn get_lighthouse_eth1_deposit_cache(
&self,
) -> Result<GenericResponse<Vec<DepositLog>>, Error> {
let mut path = self.server.clone();
path.path_segments_mut()
.map_err(|()| Error::InvalidUrl(self.server.clone()))?
.push("lighthouse")
.push("eth1")
.push("deposit_cache");
self.get(path).await
}
/// `GET lighthouse/beacon/states/{state_id}/ssz`
pub async fn get_lighthouse_beacon_states_ssz<E: EthSpec>(
&self,

View File

@@ -165,7 +165,7 @@ pub struct FinalityCheckpointsData {
pub finalized: Checkpoint,
}
#[derive(Debug, Clone, PartialEq)]
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub enum ValidatorId {
PublicKey(PublicKeyBytes),
Index(u64),
@@ -211,17 +211,18 @@ pub struct ValidatorData {
//
// https://hackmd.io/bQxMDRt1RbS1TLno8K4NPg?view
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum ValidatorStatus {
Unknown,
WaitingForEligibility,
WaitingForFinality,
WaitingInQueue,
StandbyForActive(Epoch),
StandbyForActive,
Active,
ActiveAwaitingVoluntaryExit(Epoch),
ActiveAwaitingSlashedExit(Epoch),
ExitedVoluntarily(Epoch),
ExitedSlashed(Epoch),
ActiveAwaitingVoluntaryExit,
ActiveAwaitingSlashedExit,
ExitedVoluntarily,
ExitedSlashed,
Withdrawable,
Withdrawn,
}
@@ -238,22 +239,22 @@ impl ValidatorStatus {
ValidatorStatus::Withdrawable
} else if validator.is_exited_at(epoch) {
if validator.slashed {
ValidatorStatus::ExitedSlashed(validator.withdrawable_epoch)
ValidatorStatus::ExitedSlashed
} else {
ValidatorStatus::ExitedVoluntarily(validator.withdrawable_epoch)
ValidatorStatus::ExitedVoluntarily
}
} else if validator.is_active_at(epoch) {
if validator.exit_epoch < far_future_epoch {
if validator.slashed {
ValidatorStatus::ActiveAwaitingSlashedExit(validator.exit_epoch)
ValidatorStatus::ActiveAwaitingSlashedExit
} else {
ValidatorStatus::ActiveAwaitingVoluntaryExit(validator.exit_epoch)
ValidatorStatus::ActiveAwaitingVoluntaryExit
}
} else {
ValidatorStatus::Active
}
} else if validator.activation_epoch < far_future_epoch {
ValidatorStatus::StandbyForActive(validator.activation_epoch)
ValidatorStatus::StandbyForActive
} else if validator.activation_eligibility_epoch < far_future_epoch {
if finalized_epoch < validator.activation_eligibility_epoch {
ValidatorStatus::WaitingForFinality
@@ -269,12 +270,61 @@ impl ValidatorStatus {
}
}
impl FromStr for ValidatorStatus {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"unknown" => Ok(ValidatorStatus::Unknown),
"waiting_for_eligibility" => Ok(ValidatorStatus::WaitingForEligibility),
"waiting_for_finality" => Ok(ValidatorStatus::WaitingForFinality),
"waiting_in_queue" => Ok(ValidatorStatus::WaitingInQueue),
"standby_for_active" => Ok(ValidatorStatus::StandbyForActive),
"active" => Ok(ValidatorStatus::Active),
"active_awaiting_voluntary_exit" => Ok(ValidatorStatus::ActiveAwaitingVoluntaryExit),
"active_awaiting_slashed_exit" => Ok(ValidatorStatus::ActiveAwaitingSlashedExit),
"exited_voluntarily" => Ok(ValidatorStatus::ExitedVoluntarily),
"exited_slashed" => Ok(ValidatorStatus::ExitedSlashed),
"withdrawable" => Ok(ValidatorStatus::Withdrawable),
"withdrawn" => Ok(ValidatorStatus::Withdrawn),
_ => Err(format!("{} cannot be parsed as a validator status.", s)),
}
}
}
impl fmt::Display for ValidatorStatus {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ValidatorStatus::Unknown => write!(f, "unknown"),
ValidatorStatus::WaitingForEligibility => write!(f, "waiting_for_eligibility"),
ValidatorStatus::WaitingForFinality => write!(f, "waiting_for_finality"),
ValidatorStatus::WaitingInQueue => write!(f, "waiting_in_queue"),
ValidatorStatus::StandbyForActive => write!(f, "standby_for_active"),
ValidatorStatus::Active => write!(f, "active"),
ValidatorStatus::ActiveAwaitingVoluntaryExit => {
write!(f, "active_awaiting_voluntary_exit")
}
ValidatorStatus::ActiveAwaitingSlashedExit => write!(f, "active_awaiting_slashed_exit"),
ValidatorStatus::ExitedVoluntarily => write!(f, "exited_voluntarily"),
ValidatorStatus::ExitedSlashed => write!(f, "exited_slashed"),
ValidatorStatus::Withdrawable => write!(f, "withdrawable"),
ValidatorStatus::Withdrawn => write!(f, "withdrawn"),
}
}
}
#[derive(Serialize, Deserialize)]
pub struct CommitteesQuery {
pub slot: Option<Slot>,
pub index: Option<u64>,
}
#[derive(Deserialize)]
pub struct ValidatorsQuery {
pub id: Option<QueryVec<ValidatorId>>,
pub status: Option<QueryVec<ValidatorStatus>>,
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct CommitteeData {
#[serde(with = "serde_utils::quoted_u64")]

View File

@@ -1,10 +1,10 @@
- enr:-LK4QKWk9yZo258PQouLshTOEEGWVHH7GhKwpYmB5tmKE4eHeSfman0PZvM2Rpp54RWgoOagAsOfKoXgZSbiCYzERWABh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAAAAAAAAAAAAAAAAAAAAAAgmlkgnY0gmlwhDQlA5CJc2VjcDI1NmsxoQOYiWqrQtQksTEtS3qY6idxJE5wkm0t9wKqpzv2gCR21oN0Y3CCIyiDdWRwgiMo
- enr:-LK4QEnIS-PIxxLCadJdnp83VXuJqgKvC9ZTIWaJpWqdKlUFCiup2sHxWihF9EYGlMrQLs0mq_2IyarhNq38eoaOHUoBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAAAAAAAAAAAAAAAAAAAAAAgmlkgnY0gmlwhA37LMaJc2VjcDI1NmsxoQJ7k0mKtTd_kdEq251flOjD1HKpqgMmIETDoD-Msy_O-4N0Y3CCIyiDdWRwgiMo
- enr:-KG4QIOJRu0BBlcXJcn3lI34Ub1aBLYipbnDaxBnr2uf2q6nE1TWnKY5OAajg3eG6mHheQSfRhXLuy-a8V5rqXKSoUEChGV0aDKQGK5MywAAAAH__________4JpZIJ2NIJpcIQKAAFhiXNlY3AyNTZrMaEDESplmV9c2k73v0DjxVXJ6__2bWyP-tK28_80lf7dUhqDdGNwgiMog3VkcIIjKA
- enr:-Ku4QLglCMIYAgHd51uFUqejD9DWGovHOseHQy7Od1SeZnHnQ3fSpE4_nbfVs8lsy8uF07ae7IgrOOUFU0NFvZp5D4wBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAYrkzLAAAAAf__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQJxCnE6v_x2ekgY_uoE1rtwzvGy40mq9eD66XfHPBWgIIN1ZHCCD6A
- enr:-Ku4QOzU2MY51tYFcoByfULugCu2mepfqAbB0DajbRzg8xlILLfi5Iv_Wx-ARn8SiFoZZb3yp2x05cnUDYSoDYZupjIBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAYrkzLAAAAAf__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQLEq16KLm1vPjUKYGkHq296D60i7y209NYPUpwZPXDVgYN1ZHCCD6A
- enr:-Ku4QOYFmi2BW_YPDew_CKdfMvsrcRY1ARA-ImtcqFl-lgoxOFbxte4PU44-1M3uRNSRM-6rVa8USGohmWwtgwalEt8Bh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAYrkzLAAAAAf__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQKH3lxnglLqrA7L6sl5r7XFnckr3XCnlZMaBTYSdE8SHIN1ZHCCD6A
- enr:-LK4QC3FCb7-JTNRiWAezECk_QUJc9c2IkJA1-EAmqAA5wmdbPWsAeRpnMXKRJqOYG0TE99ycB1nOb9y26mjb_UoHS4Bh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDnp11aAAAAAf__________gmlkgnY0gmlwhDMPYfCJc2VjcDI1NmsxoQOmDQryZJApMwIT-dQAbxjvxLbPzyKn9GFk5dqam4MDTYN0Y3CCIyiDdWRwgiMo
- enr:-LK4QLvxLzt346gAPkTxohygiJvjd97lGcFeE5yXgZKtsMfEOveLE_FO2slJoHNzNF7vhwfwjt4X2vqzwGiR9gcrmDMBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDnp11aAAAAAf__________gmlkgnY0gmlwhDMPRgeJc2VjcDI1NmsxoQPjXTGx3HkaCG2neFxJmaTn5eCgbra3LY1twCeXPHChL4N0Y3CCIyiDdWRwgiMo
- enr:-Ku4QFVactU18ogiqPPasKs3jhUm5ISszUrUMK2c6SUPbGtANXVJ2wFapsKwVEVnVKxZ7Gsr9yEc4PYF-a14ahPa1q0Bh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAYrkzLAAAAAf__________gmlkgnY0gmlwhGQbAHyJc2VjcDI1NmsxoQILF-Ya2i5yowVkQtlnZLjG0kqC4qtwmSk8ha7tKLuME4N1ZHCCIyg
# lighthouse Node
- enr:-LK4QCGFeQXjpQkgOfLHsbTjD65IOtSqV7Qo-Qdqv6SrL8lqFY7INPMMGP5uGKkVDcJkeXimSeNeypaZV3MHkcJgr9QCh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDnp11aAAAAAf__________gmlkgnY0gmlwhA37LMaJc2VjcDI1NmsxoQJ7k0mKtTd_kdEq251flOjD1HKpqgMmIETDoD-Msy_O-4N0Y3CCIyiDdWRwgiMo
# Lighthouse node
- enr:-LK4QCpyWmMLYwC2umMJ_g0c9VY7YOFwZyaR80_tuQNTWOzJbaR82DDhVQYqmE_0gvN6Du5jwnxzIaaNRZQlVXzfIK0Dh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDnp11aAAAAAf__________gmlkgnY0gmlwhCLR2xuJc2VjcDI1NmsxoQOYiWqrQtQksTEtS3qY6idxJE5wkm0t9wKqpzv2gCR21oN0Y3CCIyiDdWRwgiMo
# Prysm
- enr:-Ku4QOnVSyvzS3VbF87J8MubaRuTyfPi6B67XQg6-5eAV_uILAhn9geTTQmfqDIOcIeAxWHUUajQp6lYniAXPWncp6UBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAYrkzLAAAAAf__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQKekYKqUtwbaJKKCct_srE5-g7tBUm68mj_jpeSb7CCqYN1ZHCCC7g
# Prysm
- enr:-Ku4QHWezvidY_m0dWEwERrNrqjEQWrlIx7b8K4EIxGgTrLmUxHCZPW5-t8PsS8nFxAJ8k8YacKP5zPRk5gbsTSsRTQBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpAYrkzLAAAAAf__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQMypP_ODwTuBq2v0oIdjPGCEyu9Hb_jHDbuIX_iNvBRGoN1ZHCCGWQ
# Cat-dog
- enr:-Ku4QJmPsyq4lmDdFebMKXk7vdt8WsLWkArYT2K8eN057oFudm2tITrZJD9sq1x92-bRmXTyAJgb2FD4ior-KHIU3KcDh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDaNQiCAAAAA___________gmlkgnY0gmlwhBK4vdCJc2VjcDI1NmsxoQMWAsR84_ETgq4-14FV2x00ptmI-YU3tdkZV9CUgYPEnIN1ZHCCI1s

View File

@@ -10,7 +10,7 @@ use target_info::Target;
/// `Lighthouse/v0.2.0-1419501f2+`
pub const VERSION: &str = git_version!(
args = ["--always", "--dirty=+"],
prefix = "Lighthouse/v0.3.2-",
prefix = "Lighthouse/v0.3.3-",
fallback = "unknown"
);

View File

@@ -12,7 +12,10 @@ pub mod insecure_keys;
mod manager;
mod validator_dir;
pub use crate::validator_dir::{Error, Eth1DepositData, ValidatorDir, ETH1_DEPOSIT_TX_HASH_FILE};
pub use crate::validator_dir::{
unlock_keypair_from_password_path, Error, Eth1DepositData, ValidatorDir,
ETH1_DEPOSIT_TX_HASH_FILE,
};
pub use builder::{
Builder, Error as BuilderError, ETH1_DEPOSIT_DATA_FILE, VOTING_KEYSTORE_FILE,
WITHDRAWAL_KEYSTORE_FILE,

View File

@@ -143,7 +143,7 @@ impl ValidatorDir {
///
/// If there is a filesystem error, a password is missing or the password is incorrect.
pub fn voting_keypair<P: AsRef<Path>>(&self, password_dir: P) -> Result<Keypair, Error> {
unlock_keypair(&self.dir.clone(), VOTING_KEYSTORE_FILE, password_dir)
unlock_keypair(&self.dir.join(VOTING_KEYSTORE_FILE), password_dir)
}
/// Attempts to read the keystore in `self.dir` and decrypt the keypair using a password file
@@ -155,7 +155,7 @@ impl ValidatorDir {
///
/// If there is a file-system error, a password is missing or the password is incorrect.
pub fn withdrawal_keypair<P: AsRef<Path>>(&self, password_dir: P) -> Result<Keypair, Error> {
unlock_keypair(&self.dir.clone(), WITHDRAWAL_KEYSTORE_FILE, password_dir)
unlock_keypair(&self.dir.join(WITHDRAWAL_KEYSTORE_FILE), password_dir)
}
/// Indicates if there is a file containing an eth1 deposit transaction. This can be used to
@@ -250,17 +250,16 @@ impl Drop for ValidatorDir {
}
}
/// Attempts to load and decrypt a keystore.
fn unlock_keypair<P: AsRef<Path>>(
keystore_dir: &PathBuf,
filename: &str,
/// Attempts to load and decrypt a Keypair given path to the keystore.
pub fn unlock_keypair<P: AsRef<Path>>(
keystore_path: &PathBuf,
password_dir: P,
) -> Result<Keypair, Error> {
let keystore = Keystore::from_json_reader(
&mut OpenOptions::new()
.read(true)
.create(false)
.open(keystore_dir.clone().join(filename))
.open(keystore_path)
.map_err(Error::UnableToOpenKeystore)?,
)
.map_err(Error::UnableToReadKeystore)?;
@@ -271,7 +270,28 @@ fn unlock_keypair<P: AsRef<Path>>(
let password: PlainText = read(&password_path)
.map_err(|_| Error::UnableToReadPassword(password_path))?
.into();
keystore
.decrypt_keypair(password.as_bytes())
.map_err(Error::UnableToDecryptKeypair)
}
/// Attempts to load and decrypt a Keypair given path to the keystore and the password file.
pub fn unlock_keypair_from_password_path(
keystore_path: &PathBuf,
password_path: &PathBuf,
) -> Result<Keypair, Error> {
let keystore = Keystore::from_json_reader(
&mut OpenOptions::new()
.read(true)
.create(false)
.open(keystore_path)
.map_err(Error::UnableToOpenKeystore)?,
)
.map_err(Error::UnableToReadKeystore)?;
let password: PlainText = read(password_path)
.map_err(|_| Error::UnableToReadPassword(password_path.clone()))?
.into();
keystore
.decrypt_keypair(password.as_bytes())
.map_err(Error::UnableToDecryptKeypair)

View File

@@ -17,4 +17,5 @@ typenum = "1.12.0"
arbitrary = { version = "0.4.6", features = ["derive"], optional = true }
[dev-dependencies]
serde_json = "1.0.58"
tree_hash_derive = "0.2.0"

View File

@@ -40,6 +40,7 @@
#[macro_use]
mod bitfield;
mod fixed_vector;
pub mod serde_utils;
mod tree_hash;
mod variable_list;

View File

@@ -0,0 +1,2 @@
pub mod quoted_u64_fixed_vec;
pub mod quoted_u64_var_list;

View File

@@ -0,0 +1,113 @@
//! Formats `FixedVector<u64,N>` using quotes.
//!
//! E.g., `FixedVector::from(vec![0, 1, 2])` serializes as `["0", "1", "2"]`.
//!
//! Quotes can be optional during decoding. If `N` does not equal the length deserialization will fail.
use crate::serde_utils::quoted_u64_var_list::deserialize_max;
use crate::FixedVector;
use serde::ser::SerializeSeq;
use serde::{Deserializer, Serializer};
use serde_utils::quoted_u64_vec::QuotedIntWrapper;
use std::marker::PhantomData;
use typenum::Unsigned;
pub struct QuotedIntFixedVecVisitor<N> {
_phantom: PhantomData<N>,
}
impl<'a, N> serde::de::Visitor<'a> for QuotedIntFixedVecVisitor<N>
where
N: Unsigned,
{
type Value = FixedVector<u64, N>;
fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(formatter, "a list of quoted or unquoted integers")
}
fn visit_seq<A>(self, seq: A) -> Result<Self::Value, A::Error>
where
A: serde::de::SeqAccess<'a>,
{
let vec = deserialize_max(seq, N::to_usize())?;
let fix: FixedVector<u64, N> = FixedVector::new(vec)
.map_err(|e| serde::de::Error::custom(format!("FixedVector: {:?}", e)))?;
Ok(fix)
}
}
pub fn serialize<S>(value: &[u64], serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut seq = serializer.serialize_seq(Some(value.len()))?;
for &int in value {
seq.serialize_element(&QuotedIntWrapper { int })?;
}
seq.end()
}
pub fn deserialize<'de, D, N>(deserializer: D) -> Result<FixedVector<u64, N>, D::Error>
where
D: Deserializer<'de>,
N: Unsigned,
{
deserializer.deserialize_any(QuotedIntFixedVecVisitor {
_phantom: PhantomData,
})
}
#[cfg(test)]
mod test {
use super::*;
use serde_derive::{Deserialize, Serialize};
use typenum::U4;
#[derive(Debug, Serialize, Deserialize)]
struct Obj {
#[serde(with = "crate::serde_utils::quoted_u64_fixed_vec")]
values: FixedVector<u64, U4>,
}
#[test]
fn quoted_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": ["1", "2", "3", "4"] }"#).unwrap();
let expected: FixedVector<u64, U4> = FixedVector::from(vec![1, 2, 3, 4]);
assert_eq!(obj.values, expected);
}
#[test]
fn unquoted_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": [1, 2, 3, 4] }"#).unwrap();
let expected: FixedVector<u64, U4> = FixedVector::from(vec![1, 2, 3, 4]);
assert_eq!(obj.values, expected);
}
#[test]
fn mixed_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": ["1", 2, "3", "4"] }"#).unwrap();
let expected: FixedVector<u64, U4> = FixedVector::from(vec![1, 2, 3, 4]);
assert_eq!(obj.values, expected);
}
#[test]
fn empty_list_err() {
serde_json::from_str::<Obj>(r#"{ "values": [] }"#).unwrap_err();
}
#[test]
fn short_list_err() {
serde_json::from_str::<Obj>(r#"{ "values": [1, 2] }"#).unwrap_err();
}
#[test]
fn long_list_err() {
serde_json::from_str::<Obj>(r#"{ "values": [1, 2, 3, 4, 5] }"#).unwrap_err();
}
#[test]
fn whole_list_quoted_err() {
serde_json::from_str::<Obj>(r#"{ "values": "[1, 2, 3, 4]" }"#).unwrap_err();
}
}

View File

@@ -0,0 +1,139 @@
//! Formats `VariableList<u64,N>` using quotes.
//!
//! E.g., `VariableList::from(vec![0, 1, 2])` serializes as `["0", "1", "2"]`.
//!
//! Quotes can be optional during decoding. If the length of the `Vec` is greater than `N`, deserialization fails.
use crate::VariableList;
use serde::ser::SerializeSeq;
use serde::{Deserializer, Serializer};
use serde_utils::quoted_u64_vec::QuotedIntWrapper;
use std::marker::PhantomData;
use typenum::Unsigned;
pub struct QuotedIntVarListVisitor<N> {
_phantom: PhantomData<N>,
}
impl<'a, N> serde::de::Visitor<'a> for QuotedIntVarListVisitor<N>
where
N: Unsigned,
{
type Value = VariableList<u64, N>;
fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(formatter, "a list of quoted or unquoted integers")
}
fn visit_seq<A>(self, seq: A) -> Result<Self::Value, A::Error>
where
A: serde::de::SeqAccess<'a>,
{
let vec = deserialize_max(seq, N::to_usize())?;
let list: VariableList<u64, N> = VariableList::new(vec)
.map_err(|e| serde::de::Error::custom(format!("VariableList: {:?}", e)))?;
Ok(list)
}
}
pub fn serialize<S>(value: &[u64], serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut seq = serializer.serialize_seq(Some(value.len()))?;
for &int in value {
seq.serialize_element(&QuotedIntWrapper { int })?;
}
seq.end()
}
pub fn deserialize<'de, D, N>(deserializer: D) -> Result<VariableList<u64, N>, D::Error>
where
D: Deserializer<'de>,
N: Unsigned,
{
deserializer.deserialize_any(QuotedIntVarListVisitor {
_phantom: PhantomData,
})
}
/// Returns a `Vec` of no more than `max_items` length.
pub(crate) fn deserialize_max<'a, A>(mut seq: A, max_items: usize) -> Result<Vec<u64>, A::Error>
where
A: serde::de::SeqAccess<'a>,
{
let mut vec = vec![];
let mut counter = 0;
while let Some(val) = seq.next_element()? {
let val: QuotedIntWrapper = val;
counter += 1;
if counter > max_items {
return Err(serde::de::Error::custom(format!(
"Deserialization failed. Length cannot be greater than {}.",
max_items
)));
}
vec.push(val.int);
}
Ok(vec)
}
#[cfg(test)]
mod test {
use super::*;
use serde_derive::{Deserialize, Serialize};
use typenum::U4;
#[derive(Debug, Serialize, Deserialize)]
struct Obj {
#[serde(with = "crate::serde_utils::quoted_u64_var_list")]
values: VariableList<u64, U4>,
}
#[test]
fn quoted_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": ["1", "2", "3", "4"] }"#).unwrap();
let expected: VariableList<u64, U4> = VariableList::from(vec![1, 2, 3, 4]);
assert_eq!(obj.values, expected);
}
#[test]
fn unquoted_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": [1, 2, 3, 4] }"#).unwrap();
let expected: VariableList<u64, U4> = VariableList::from(vec![1, 2, 3, 4]);
assert_eq!(obj.values, expected);
}
#[test]
fn mixed_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": ["1", 2, "3", "4"] }"#).unwrap();
let expected: VariableList<u64, U4> = VariableList::from(vec![1, 2, 3, 4]);
assert_eq!(obj.values, expected);
}
#[test]
fn empty_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": [] }"#).unwrap();
assert!(obj.values.is_empty());
}
#[test]
fn short_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": [1, 2] }"#).unwrap();
let expected: VariableList<u64, U4> = VariableList::from(vec![1, 2]);
assert_eq!(obj.values, expected);
}
#[test]
fn long_list_err() {
serde_json::from_str::<Obj>(r#"{ "values": [1, 2, 3, 4, 5] }"#).unwrap_err();
}
#[test]
fn whole_list_quoted_err() {
serde_json::from_str::<Obj>(r#"{ "values": "[1, 2, 3, 4]" }"#).unwrap_err();
}
}

View File

@@ -320,7 +320,7 @@ mod test {
let vec = vec![];
let fixed: VariableList<u64, U4> = VariableList::from(vec);
assert_eq!(&fixed[..], &vec![][..]);
assert_eq!(&fixed[..], &[] as &[u64]);
}
#[test]

View File

@@ -181,12 +181,14 @@ where
#[compare_fields(as_slice)]
pub validators: VariableList<Validator, T::ValidatorRegistryLimit>,
#[compare_fields(as_slice)]
#[serde(with = "ssz_types::serde_utils::quoted_u64_var_list")]
pub balances: VariableList<u64, T::ValidatorRegistryLimit>,
// Randomness
pub randao_mixes: FixedVector<Hash256, T::EpochsPerHistoricalVector>,
// Slashings
#[serde(with = "ssz_types::serde_utils::quoted_u64_fixed_vec")]
pub slashings: FixedVector<u64, T::EpochsPerSlashingsVector>,
// Attestations

View File

@@ -1,7 +1,7 @@
[package]
name = "lcli"
description = "Lighthouse CLI (modeled after zcli)"
version = "0.3.2"
version = "0.3.3"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"

View File

@@ -1,6 +1,6 @@
[package]
name = "lighthouse"
version = "0.3.2"
version = "0.3.3"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"

View File

@@ -196,13 +196,6 @@ fn run<E: EthSpec>(
));
}
#[cfg(all(feature = "modern", target_arch = "x86_64"))]
if !std::is_x86_feature_detected!("adx") {
return Err(format!(
"CPU incompatible with optimized binary, please try Lighthouse portable build"
));
}
let debug_level = matches
.value_of("debug-level")
.ok_or_else(|| "Expected --debug-level flag".to_string())?;
@@ -232,6 +225,15 @@ fn run<E: EthSpec>(
);
}
#[cfg(all(feature = "modern", target_arch = "x86_64"))]
if !std::is_x86_feature_detected!("adx") {
warn!(
log,
"CPU seems incompatible with optimized Lighthouse build";
"advice" => "If you get a SIGILL, please try Lighthouse portable build"
);
}
// Note: the current code technically allows for starting a beacon node _and_ a validator
// client at the same time.
//

View File

@@ -1,6 +1,6 @@
[package]
name = "validator_client"
version = "0.3.2"
version = "0.3.3"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>", "Luke Anderson <luke@lukeanderson.com.au>"]
edition = "2018"