Compare commits


20 Commits

Author SHA1 Message Date
Paul Hauner
dfd02d6179 Bump to v0.2.7 (#1561)
## Issue Addressed

NA

## Proposed Changes

- Update to v0.2.7
- Add script to make update easy.

## Additional Info

NA
2020-08-24 08:25:34 +00:00
Paul Hauner
3569506acd Remove rayon from rest_api (#1562)
## Issue Addressed

NA

## Proposed Changes

Addresses a deadlock condition described here: https://hackmd.io/ijQlqOdqSGaWmIo6zMVV-A?view

## Additional Info

NA
2020-08-24 07:28:54 +00:00
Paul Hauner
c895dc8971 Shift HTTP server heavy-lifting to blocking executor (#1518)
## Issue Addressed

NA

## Proposed Changes

Shift practically all HTTP endpoint handlers to the blocking executor (some very light tasks are left on the core executor).

## Additional Info

This PR covers the `rest_api`, which will soon be refactored to suit the standard API. As such, I've cut a few corners and left some existing issues open in this patch. What I have done here leaves the API in a state that is not necessarily *exactly* the same as before, but good enough for us to run validators with. Specifically, the number of blocking workers that can be spawned is unbounded and I have not implemented a queue; this will need to be fixed when we implement the standard API.
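The pattern above can be sketched with the standard library alone (Lighthouse itself uses the tokio runtime's blocking executor, not this helper): heavy handler work is moved off the core event loop onto a worker thread, and the caller waits on a channel for the result. As in the PR's caveat, the number of workers spawned here is unbounded and there is no queue.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for offloading a heavy HTTP handler to a blocking
// worker; names and signatures are assumptions, not Lighthouse's API.
fn spawn_blocking<F, T>(f: F) -> mpsc::Receiver<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore the send error: the caller may have dropped the receiver.
        let _ = tx.send(f());
    });
    rx
}

fn main() {
    // A stand-in for a heavy endpoint handler (e.g. state serialization).
    let rx = spawn_blocking(|| (0u64..1_000).sum::<u64>());
    assert_eq!(rx.recv().unwrap(), 499_500);
}
```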
2020-08-24 03:06:10 +00:00
blacktemplar
2bc9115a94 reuse beacon_node methods for initializing network configs in boot_node (#1520)
## Issue Addressed

#1378

## Proposed Changes

Boot node reuses code from beacon_node to initialize network config. This also enables using the network directory to store/load the enr and the private key.

## Additional Info

Note that before this PR the port CLI arguments were off (the argument was named `enr-port` but used as `boot-node-enr-port`); as a result, the CLI port argument was always used for both the ENR and the listening port. Now the `enr-port` argument can be used to override the listening port as the public port that others should connect to.

Last but not least, note that this restructuring reuses `eth2_libp2p::NetworkConfig`, which has many more options than the ones used in the boot node. For example, the network config has its own `discv5_config` field that never gets used in the boot node; instead, another `Discv5Config` gets created later in the boot-node process.

Co-authored-by: Age Manning <Age@AgeManning.com>
2020-08-21 12:00:01 +00:00
Nat
3cfd70d7fd Docs: Fix reference to incorrect password file. (#1556)
Leftover "mywallet.pass" -> "wally.pass"

Thanks @pecurliarly (from Discord)!
2020-08-21 03:50:37 +00:00
blacktemplar
3f0a113c7f ban IP addresses if too many banned peers for this IP address (#1543)
## Issue Addressed

#1283 

## Proposed Changes

An IP will be considered banned as long as more than 5 (a constant) peers with that IP have a score below the ban threshold. As soon as some of those peers get unbanned (through score decay) and fewer than 5 peers with that IP have a score below the threshold, the IP will no longer be considered banned.
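The rule above can be sketched as a count of individually banned peers per IP; the constants, names, and score scale here are assumptions for illustration, not Lighthouse's actual peer-manager API.

```rust
use std::collections::HashMap;
use std::net::IpAddr;

const BANNED_PEERS_PER_IP_THRESHOLD: usize = 5;
const BAN_SCORE_THRESHOLD: f64 = -50.0;

struct PeerDb {
    // (peer id, ip address, score)
    peers: Vec<(String, IpAddr, f64)>,
}

impl PeerDb {
    // Count, per IP, how many peers are individually banned (score below
    // the ban threshold).
    fn banned_peers_per_ip(&self) -> HashMap<IpAddr, usize> {
        let mut counts = HashMap::new();
        for (_, ip, score) in &self.peers {
            if *score < BAN_SCORE_THRESHOLD {
                *counts.entry(*ip).or_insert(0usize) += 1;
            }
        }
        counts
    }

    // An IP is banned while strictly more than the threshold of its peers
    // are banned; once score decay lifts enough peers back over the
    // threshold, the IP is no longer considered banned.
    fn ip_is_banned(&self, ip: IpAddr) -> bool {
        self.banned_peers_per_ip().get(&ip).copied().unwrap_or(0)
            > BANNED_PEERS_PER_IP_THRESHOLD
    }
}

fn main() {
    let ip: IpAddr = "198.51.100.7".parse().unwrap();
    let peers: Vec<_> = (0..6).map(|i| (format!("peer{}", i), ip, -60.0)).collect();
    let db = PeerDb { peers };
    // 6 banned peers behind one IP exceeds the threshold of 5.
    assert!(db.ip_is_banned(ip));
}
```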
2020-08-21 01:41:12 +00:00
Paul Hauner
ebb25b5569 Bump version to v0.2.6 (#1549)
## Issue Addressed

NA

## Proposed Changes

See title.

## Additional Info

NA
2020-08-19 09:31:01 +00:00
Pawan Dhananjay
bbed42f30c Refactor attestation service (#1415)
## Issue Addressed

N/A

## Proposed Changes

Refactor attestation service to send out requests to find peers for subnets as soon as we get attestation duties. 
Earlier, we had much more involved logic to send the discovery requests to the discovery service only 6 slots before the attestation slot. Now that discovery is much smarter with grouped queries, the complexity in attestation service can be reduced considerably.



Co-authored-by: Age Manning <Age@AgeManning.com>
2020-08-19 08:46:25 +00:00
divma
fdc6e2aa8e Shutdown like a Sir (#1545)
## Issue Addressed
#1494 

## Proposed Changes
- Give the TaskExecutor the sender side of a channel that a task can clone to request shutting down
- The receiver side of this channel is in environment and now we block until ctrl+c or an internal shutdown signal is received
- The swarm now informs when it has reached 0 listeners
- The network receives this message and requests the shutdown
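The shutdown flow in the bullets above can be sketched with a std-only channel (Lighthouse's `TaskExecutor` is tokio-based; this is illustrative): tasks clone a sender, and the environment blocks on the receiver until ctrl+c or an internal task requests shutdown.

```rust
use std::sync::mpsc;
use std::thread;

#[derive(Clone)]
struct TaskExecutor {
    shutdown_tx: mpsc::Sender<&'static str>,
}

impl TaskExecutor {
    fn request_shutdown(&self, reason: &'static str) {
        // A failed send just means the environment has already shut down.
        let _ = self.shutdown_tx.send(reason);
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let executor = TaskExecutor { shutdown_tx: tx };

    // Simulate the network service noticing the swarm reached 0 listeners.
    let net_executor = executor.clone();
    thread::spawn(move || net_executor.request_shutdown("no listeners remaining"));

    // The environment blocks here until a shutdown reason arrives.
    let reason = rx.recv().unwrap();
    assert_eq!(reason, "no listeners remaining");
}
```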
2020-08-19 05:51:14 +00:00
Paul Hauner
8e7dd7b2b1 Add remaining network ops to queuing system (#1546)
## Issue Addressed

NA

## Proposed Changes

- Refactors the `BeaconProcessor` to remove some excessive nesting and file bloat
  - Sorry about the noise from this, it's all contained in 4d3f8c5 though.
- Adds exits, proposer slashings, attester slashings to the `BeaconProcessor` so we don't get overwhelmed with large amounts of slashings (which happened a few hours ago).

## Additional Info

NA
2020-08-19 05:09:53 +00:00
Age Manning
33b2a3d0e0 Version bump to v0.2.5 (#1540)
## Description

Version bumps lighthouse to v0.2.5
2020-08-18 11:23:08 +00:00
Paul Hauner
93b7c3b7ff Set default max skips to 700 (#1542)
## Issue Addressed

NA

## Proposed Changes

Sets the default max skips to 700 so that it can cover the 693 slot skip from `80894 - 80201`.
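A sketch of the guard this default feeds into (the real check lives in block verification; the function name here is illustrative): a block is rejected when it sits more than `max_skip_slots` slots beyond its parent, so with a default of 700 the 693-slot skip from slot 80201 to 80894 is still accepted.

```rust
const DEFAULT_IMPORT_BLOCK_MAX_SKIP_SLOTS: u64 = 700;

// Returns true when the distance between a block and its parent exceeds the
// configured skip-slot limit.
fn exceeds_skip_limit(parent_slot: u64, block_slot: u64, max_skip_slots: u64) -> bool {
    block_slot > parent_slot + max_skip_slots
}

fn main() {
    // The Medalla skip: 80894 - 80201 = 693 <= 700, so the block is accepted.
    assert!(!exceeds_skip_limit(80201, 80894, DEFAULT_IMPORT_BLOCK_MAX_SKIP_SLOTS));
    // A 701-slot skip would be rejected.
    assert!(exceeds_skip_limit(80201, 80902, DEFAULT_IMPORT_BLOCK_MAX_SKIP_SLOTS));
}
```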

## Additional Info

NA
2020-08-18 09:27:04 +00:00
Age Manning
2d0b214b57 Clean up logs (#1541)
## Description

This PR improves some logging for the end-user. 

It downgrades some warning logs and removes the slots per second sync speed if we are syncing and the speed is 0. This is likely because we are syncing from a finalised checkpoint and the head doesn't change.
2020-08-18 08:11:39 +00:00
Paul Hauner
d4f763bbae Fix mistake with attestation skip slots (#1539)
## Issue Addressed

NA

## Proposed Changes

- Fixes a mistake I made in #1530 which resulted in us *not* rejecting attestations that we intended to reject.
- Adds skip-slot checks for blocks earlier in import process, so it rejects gossip and RPC blocks.

## Additional Info

NA
2020-08-18 06:28:26 +00:00
Age Manning
e1e5002d3c Fingerprint Lodestar (#1536)
Fingerprints the Lodestar client
2020-08-18 06:28:24 +00:00
Paul Hauner
46dd530476 Allow import of Prysm keystores (#1535)
## Issue Addressed

- Resolves #1361

## Proposed Changes

Loosens the constraints imposed by EIP-2335 so we can import keys from Prysm.

## Additional Info

NA
2020-08-18 06:28:20 +00:00
Age Manning
8311074d68 Purge out-dated head chains on chain completion (#1538)
## Description

There can be many head chains queued up to complete. Currently we try to process all of these to completion before we consider the node synced.

In a chaotic network there can be many of these, and processing them to completion can be very expensive and slow. This PR removes any non-syncing head chains from the queue and re-statuses the peers. If, after we have synced to head on one chain, there is still a valid head chain to download, it will be re-established once the status has been returned.

This should assist with getting nodes to sync on medalla faster.
2020-08-18 05:22:34 +00:00
Age Manning
3bb30754d9 Keep track of failed head chains and prevent re-lookups (#1534)
## Overview

There are forked chains which get referenced by blocks and attestations on a network. Typically, if these chains are very long, we stop looking up the chain and downvote the peer. In extreme circumstances, when many peers are on many chains, the chains can be very deep and lookups become time-consuming.

This PR adds a cache to known failed chain lookups. This prevents us from starting a parent-lookup (or stopping one half way through) if we have attempted the chain lookup in the past.
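The cache described above can be sketched as a small bounded set of known-failed chain roots, checked before starting a parent lookup. This mirrors the idea behind the new `lru_cache` crate in this diff, but the implementation here is illustrative, not Lighthouse's.

```rust
use std::collections::{HashSet, VecDeque};

// A bounded insertion-order cache: once full, the oldest failed-chain root
// is evicted to make room for the newest.
struct FailedChainCache {
    capacity: usize,
    order: VecDeque<[u8; 32]>,
    set: HashSet<[u8; 32]>,
}

impl FailedChainCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, order: VecDeque::new(), set: HashSet::new() }
    }

    // Record a chain (by a block root) as failed, evicting the oldest entry
    // once the cache is full.
    fn insert(&mut self, root: [u8; 32]) {
        if self.set.insert(root) {
            self.order.push_back(root);
            if self.order.len() > self.capacity {
                if let Some(old) = self.order.pop_front() {
                    self.set.remove(&old);
                }
            }
        }
    }

    // Checked before a parent lookup: skip chains we already failed on.
    fn contains(&self, root: &[u8; 32]) -> bool {
        self.set.contains(root)
    }
}

fn main() {
    let mut cache = FailedChainCache::new(2);
    let (a, b, c) = ([1u8; 32], [2u8; 32], [3u8; 32]);
    cache.insert(a);
    cache.insert(b);
    cache.insert(c); // capacity reached: `a` is evicted
    assert!(!cache.contains(&a));
    assert!(cache.contains(&b) && cache.contains(&c));
}
```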
2020-08-18 03:54:09 +00:00
Age Manning
cc44a64d15 Limit parallelism of head chain sync (#1527)
## Description

Currently Lighthouse load-balances a single finalized chain across peers, selecting the chain with the most peers. Once synced to the latest finalized epoch, Lighthouse groups its peers by their current head block into head chains and syncs them all in parallel.

This is typically fast and relatively efficient under normal operation. However, if the chain has not finalized in a long time, the head chains can grow quite long. Peers' head chains update every slot as new blocks are added to the head. Syncing all head chains in parallel is a bottleneck and highly inefficient: the block duplication leads to RPC timeouts when attempting to handle all new head chains at once.

This PR limits the parallelism of head chain syncing to 2. We now sync at most two head chains at a time. This allows for the possibility of sync progressing even when a slow peer is holding up one chain via RPC timeouts.
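The cap described above amounts to taking at most a fixed-size prefix of the queued head chains; the constant matches the PR, while the data model here is a toy sketch.

```rust
const PARALLEL_HEAD_CHAINS: usize = 2;

// Of all queued candidate head chains, only the first PARALLEL_HEAD_CHAINS
// are actively syncing; the rest wait until a slot frees up.
fn syncing_subset(queued_chain_ids: &[u64]) -> &[u64] {
    &queued_chain_ids[..queued_chain_ids.len().min(PARALLEL_HEAD_CHAINS)]
}

fn main() {
    let queued = [11u64, 22, 33, 44];
    assert_eq!(syncing_subset(&queued), &[11u64, 22][..]);
    // Fewer chains than the cap all sync at once.
    assert_eq!(syncing_subset(&[7u64]), &[7u64][..]);
}
```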
2020-08-18 02:49:24 +00:00
divma
46dbf027af Do not reset batch ids & redownload out of range batches (#1528)
The changes are somewhat simple but should solve two issues:
- When quickly changing between chains once and then back again, batch ids would collide and cause havoc.
- If we got an out-of-range response from a peer, sync would remain in the syncing state without advancing.

Changes:
- Remove the batch id. Identify each batch (inside a chain) by its starting epoch. Target epochs for downloading and processing now advance by `EPOCHS_PER_BATCH`.
- For the same reason, change `to_be_downloaded_id` to be an epoch.
- Remove a sneaky line that dropped an out-of-range batch without downloading it.
- Bonus: put the `chain_id` in the logger given to the chain. This is why explicit logging of the `chain_id` is removed.
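The id scheme in the first bullet can be sketched as follows: a batch is identified by its starting epoch, and the download target advances by `EPOCHS_PER_BATCH`, so restarting a chain reproduces the same ids deterministically instead of colliding counters. The constant's value and struct shape here are illustrative.

```rust
const EPOCHS_PER_BATCH: u64 = 2;

struct SyncingChain {
    // Starting epoch of the next batch to request (replaces the old
    // `to_be_downloaded_id` counter).
    to_be_downloaded: u64,
}

impl SyncingChain {
    // Hand out the next batch, identified purely by its starting epoch.
    fn next_batch(&mut self) -> u64 {
        let start_epoch = self.to_be_downloaded;
        self.to_be_downloaded += EPOCHS_PER_BATCH;
        start_epoch
    }
}

fn main() {
    let mut chain = SyncingChain { to_be_downloaded: 10 };
    assert_eq!(chain.next_batch(), 10);
    assert_eq!(chain.next_batch(), 12);
    // Re-creating the chain from the same start yields identical batch ids.
    let mut restarted = SyncingChain { to_be_downloaded: 10 };
    assert_eq!(restarted.next_batch(), 10);
}
```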
2020-08-18 01:29:51 +00:00
87 changed files with 3684 additions and 2634 deletions

Cargo.lock generated

@@ -2,7 +2,7 @@
# It is not intended for manual editing.
[[package]]
name = "account_manager"
version = "0.2.3"
version = "0.2.6"
dependencies = [
"account_utils",
"bls",
@@ -77,7 +77,7 @@ version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fc95d1bdb8e6666b2b217308eeeb09f2d6728d104be3e31916cc74d15420331"
dependencies = [
"generic-array 0.14.3",
"generic-array 0.14.4",
]
[[package]]
@@ -275,9 +275,9 @@ checksum = "1d49d90015b3c36167a20fe2810c5cd875ad504b39cff3d4eae7977e6b7c1cb2"
[[package]]
name = "autocfg"
version = "1.0.0"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8aac770f1885fd7e387acedd76065302551364496e46b3dd00860b2f8359b9d"
checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
[[package]]
name = "backtrace"
@@ -361,7 +361,7 @@ dependencies = [
"slog-term",
"sloggers",
"slot_clock",
"smallvec 1.4.1",
"smallvec 1.4.2",
"state_processing",
"store",
"tempfile",
@@ -373,7 +373,7 @@ dependencies = [
[[package]]
name = "beacon_node"
version = "0.2.3"
version = "0.2.6"
dependencies = [
"beacon_chain",
"clap",
@@ -478,7 +478,7 @@ version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4152116fd6e9dadb291ae18fc1ec3575ed6d84c29642d97890f4b4a3417297e4"
dependencies = [
"generic-array 0.14.3",
"generic-array 0.14.4",
]
[[package]]
@@ -487,7 +487,7 @@ version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa136449e765dc7faa244561ccae839c394048667929af599b5d931ebe7b7f10"
dependencies = [
"generic-array 0.14.3",
"generic-array 0.14.4",
]
[[package]]
@@ -530,8 +530,9 @@ dependencies = [
[[package]]
name = "boot_node"
version = "0.2.3"
version = "0.2.6"
dependencies = [
"beacon_node",
"clap",
"discv5",
"eth2_libp2p",
@@ -635,7 +636,7 @@ dependencies = [
"ethereum-types",
"quickcheck",
"quickcheck_macros",
"smallvec 1.4.1",
"smallvec 1.4.2",
"tree_hash",
]
@@ -685,9 +686,9 @@ dependencies = [
[[package]]
name = "chrono"
version = "0.4.13"
version = "0.4.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c74d84029116787153e02106bf53e66828452a4b325cc8652b788b5967c0a0b6"
checksum = "942f72db697d8767c22d46a598e01f2d3b475501ea43d0db4f16d90259182d0b"
dependencies = [
"num-integer",
"num-traits",
@@ -696,9 +697,9 @@ dependencies = [
[[package]]
name = "clap"
version = "2.33.2"
version = "2.33.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "10040cdf04294b565d9e0319955430099ec3813a64c952b86a41200ad714ae48"
checksum = "37e58ac78573c40708d45522f0d80fa2f01cc4f9b4e2bf749807255454312002"
dependencies = [
"ansi_term",
"atty",
@@ -958,7 +959,7 @@ version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "058ed274caafc1f60c4997b5fc07bf7dc7cca454af7c6e81edffe5f33f70dace"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"cfg-if",
"crossbeam-utils",
"lazy_static",
@@ -984,7 +985,7 @@ version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3c7c73a2d1e9fc0886a08b93e98eb643461230d5f1925e4036204d5f2e261a8"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"cfg-if",
"lazy_static",
]
@@ -1011,7 +1012,7 @@ version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b584a330336237c1eecd3e94266efb216c56ed91225d634cb2991c5f3fd1aeab"
dependencies = [
"generic-array 0.14.3",
"generic-array 0.14.4",
"subtle 2.2.3",
]
@@ -1069,6 +1070,19 @@ dependencies = [
"zeroize",
]
[[package]]
name = "curve25519-dalek"
version = "3.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8492de420e9e60bc9a1d66e2dbb91825390b738a388606600663fc529b4b307"
dependencies = [
"byteorder",
"digest 0.9.0",
"rand_core 0.5.1",
"subtle 2.2.3",
"zeroize",
]
[[package]]
name = "darwin-libproc"
version = "0.1.2"
@@ -1091,9 +1105,9 @@ dependencies = [
[[package]]
name = "data-encoding"
version = "2.2.1"
version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72aa14c04dfae8dd7d8a2b1cb7ca2152618cd01336dbfe704b8dcbf8d41dbd69"
checksum = "d4d0e2d24e5ee3b23a01de38eefdcd978907890701f08ffffd4cb457ca4ee8d6"
[[package]]
name = "db-key"
@@ -1163,7 +1177,7 @@ version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3dd60d1080a57a05ab032377049e0591415d2b31afd7028356dbf3cc6dcb066"
dependencies = [
"generic-array 0.14.3",
"generic-array 0.14.4",
]
[[package]]
@@ -1218,7 +1232,7 @@ dependencies = [
"rand 0.7.3",
"rlp",
"sha2 0.8.2",
"smallvec 1.4.1",
"smallvec 1.4.2",
"tokio 0.2.22",
"uint",
"zeroize",
@@ -1247,15 +1261,15 @@ dependencies = [
[[package]]
name = "ed25519-dalek"
version = "1.0.0-pre.4"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21a8a37f4e8b35af971e6db5e3897e7a6344caa3f92f6544f88125a1f5f0035a"
checksum = "53d2e93f837d749c16d118e7ddf7a4dfd0ac8f452cf51e46e9348824e5ef6851"
dependencies = [
"curve25519-dalek",
"curve25519-dalek 3.0.0",
"ed25519",
"rand 0.7.3",
"serde",
"sha2 0.8.2",
"sha2 0.9.1",
"zeroize",
]
@@ -1284,9 +1298,9 @@ dependencies = [
[[package]]
name = "either"
version = "1.5.3"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bb1f6b1ce1c140482ea30ddd3335fc0024ac7ee112895426e0a629a6c20adfe3"
checksum = "cd56b59865bce947ac5958779cfa508f6c3b9497cc762b7e24a12d11ccde2c4f"
[[package]]
name = "encoding_rs"
@@ -1299,9 +1313,9 @@ dependencies = [
[[package]]
name = "enr"
version = "0.1.2"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c78d64a14865c080072c05ffb2c11aab15963d5e763ca4dbc02865dc1b615ee"
checksum = "a3137b4854534673ea350751670c6fe53920394a328ba9ce4d9acabd4f60a586"
dependencies = [
"base64 0.12.3",
"bs58",
@@ -1504,7 +1518,7 @@ dependencies = [
"slog-async",
"slog-stdlog",
"slog-term",
"smallvec 1.4.1",
"smallvec 1.4.2",
"snap",
"tempdir",
"tiny-keccak 2.0.2",
@@ -1522,7 +1536,7 @@ version = "0.1.2"
dependencies = [
"eth2_ssz_derive",
"ethereum-types",
"smallvec 1.4.1",
"smallvec 1.4.2",
]
[[package]]
@@ -1699,9 +1713,9 @@ checksum = "37ab347416e802de484e4d03c7316c48f1ecb56574dfd4a46a80f173ce1de04d"
[[package]]
name = "flate2"
version = "1.0.16"
version = "1.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68c90b0fc46cf89d227cc78b40e494ff81287a92dd07631e5af0d06fe3cf885e"
checksum = "766d0e77a2c1502169d4a93ff3b8c15a71fd946cd0126309752104e5f3c46d94"
dependencies = [
"cfg-if",
"crc32fast",
@@ -1916,9 +1930,9 @@ dependencies = [
[[package]]
name = "generic-array"
version = "0.14.3"
version = "0.14.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "60fb4bb6bba52f78a471264d9a3b7d026cc0af47b22cd2cffbc0b787ca003e63"
checksum = "501466ecc8a30d1d3b7fc9229b122b2ce8ed6e9d9223f1138d4babb253e51817"
dependencies = [
"typenum",
"version_check 0.9.2",
@@ -2069,14 +2083,14 @@ checksum = "d36fab90f82edc3c747f9d438e06cf0a491055896f2a279638bb5beed6c40177"
[[package]]
name = "handlebars"
version = "3.3.0"
version = "3.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86dbc8a0746b08f363d2e00da48e6c9ceb75c198ac692d2715fcbb5bee74c87d"
checksum = "5deefd4816fb852b1ff3cb48f6c41da67be2d0e1d20b26a7a3b076da11f064b1"
dependencies = [
"log 0.4.11",
"pest",
"pest_derive",
"quick-error",
"quick-error 2.0.0",
"serde",
"serde_json",
]
@@ -2093,11 +2107,11 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.8.1"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34f595585f103464d8d2f6e9864682d74c1601fed5e07d62b1c9058dba8246fb"
checksum = "e91b62f79061a0bc2e046024cb7ba44b08419ed238ecbd9adbd787434b9e8c25"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
]
[[package]]
@@ -2241,7 +2255,7 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df004cfca50ef23c36850aaaa59ad52cc70d0e90243c3c7737a4dd32dc7a3c4f"
dependencies = [
"quick-error",
"quick-error 1.2.3",
]
[[package]]
@@ -2398,8 +2412,8 @@ version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b45e59b16c76b11bf9738fd5d38879d3bd28ad292d7b313608becb17ae2df9"
dependencies = [
"autocfg 1.0.0",
"hashbrown 0.8.1",
"autocfg 1.0.1",
"hashbrown 0.8.2",
]
[[package]]
@@ -2517,13 +2531,13 @@ dependencies = [
[[package]]
name = "lazycell"
version = "1.2.1"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b294d6fa9ee409a054354afc4352b0b9ef7ca222c69b8812cbea9e7d2bf3783f"
checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55"
[[package]]
name = "lcli"
version = "0.2.2"
version = "0.2.6"
dependencies = [
"bls",
"clap",
@@ -2576,9 +2590,9 @@ dependencies = [
[[package]]
name = "libc"
version = "0.2.74"
version = "0.2.76"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2f02823cf78b754822df5f7f268fb59822e7296276d3e069d8e8cb26a14bd10"
checksum = "755456fae044e6fa1ebbbd1b3e902ae19e73097ed4ed87bb79934a867c007bc3"
[[package]]
name = "libflate"
@@ -2627,7 +2641,7 @@ dependencies = [
"parity-multiaddr 0.9.1 (git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803)",
"parking_lot 0.10.2",
"pin-project",
"smallvec 1.4.1",
"smallvec 1.4.2",
"wasm-timer",
]
@@ -2658,7 +2672,7 @@ dependencies = [
"ring",
"rw-stream-sink",
"sha2 0.8.2",
"smallvec 1.4.1",
"smallvec 1.4.2",
"thiserror",
"unsigned-varint 0.4.0",
"void",
@@ -2691,7 +2705,7 @@ dependencies = [
"ring",
"rw-stream-sink",
"sha2 0.8.2",
"smallvec 1.4.1",
"smallvec 1.4.2",
"thiserror",
"unsigned-varint 0.4.0",
"void",
@@ -2736,7 +2750,7 @@ dependencies = [
"prost-build",
"rand 0.7.3",
"sha2 0.8.2",
"smallvec 1.4.1",
"smallvec 1.4.2",
"unsigned-varint 0.4.0",
"wasm-timer",
]
@@ -2752,7 +2766,7 @@ dependencies = [
"log 0.4.11",
"prost",
"prost-build",
"smallvec 1.4.1",
"smallvec 1.4.2",
"wasm-timer",
]
@@ -2777,7 +2791,7 @@ version = "0.23.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
dependencies = [
"bytes 0.5.6",
"curve25519-dalek",
"curve25519-dalek 2.1.0",
"futures 0.3.5",
"lazy_static",
"libp2p-core 0.21.0",
@@ -2801,7 +2815,7 @@ dependencies = [
"libp2p-core 0.21.0",
"log 0.4.11",
"rand 0.7.3",
"smallvec 1.4.1",
"smallvec 1.4.2",
"void",
"wasm-timer",
]
@@ -2869,19 +2883,18 @@ dependencies = [
[[package]]
name = "libz-sys"
version = "1.0.25"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2eb5e43362e38e2bca2fd5f5134c4d4564a23a5c28e9b95411652021a8675ebe"
checksum = "af67924b8dd885cccea261866c8ce5b74d239d272e154053ff927dae839f5ae9"
dependencies = [
"cc",
"libc",
"pkg-config",
"vcpkg",
]
[[package]]
name = "lighthouse"
version = "0.2.3"
version = "0.2.6"
dependencies = [
"account_manager",
"account_utils",
@@ -2993,6 +3006,13 @@ dependencies = [
"linked-hash-map",
]
[[package]]
name = "lru_cache"
version = "0.1.0"
dependencies = [
"fnv",
]
[[package]]
name = "lru_time_cache"
version = "0.10.0"
@@ -3038,7 +3058,7 @@ version = "0.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c198b026e1bbf08a937e94c6c60f9ec4a2267f5b0d2eec9c1b21b061ce2be55f"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
]
[[package]]
@@ -3205,7 +3225,7 @@ dependencies = [
"futures 0.3.5",
"log 0.4.11",
"pin-project",
"smallvec 1.4.1",
"smallvec 1.4.2",
"unsigned-varint 0.4.0",
]
@@ -3219,7 +3239,7 @@ dependencies = [
"futures 0.3.5",
"log 0.4.11",
"pin-project",
"smallvec 1.4.1",
"smallvec 1.4.2",
"unsigned-varint 0.4.0",
]
@@ -3271,6 +3291,7 @@ dependencies = [
"itertools 0.9.0",
"lazy_static",
"lighthouse_metrics",
"lru_cache",
"matches",
"num_cpus",
"parking_lot 0.11.0",
@@ -3280,7 +3301,7 @@ dependencies = [
"slog",
"sloggers",
"slot_clock",
"smallvec 1.4.1",
"smallvec 1.4.2",
"state_processing",
"store",
"tempfile",
@@ -3333,7 +3354,7 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b7f3fc75e3697059fb1bc465e3d8cca6cf92f56854f201158b3f9c77d5a3cfa0"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"num-integer",
"num-traits",
]
@@ -3353,7 +3374,7 @@ dependencies = [
"num-traits",
"rand 0.7.3",
"serde",
"smallvec 1.4.1",
"smallvec 1.4.2",
"zeroize",
]
@@ -3363,7 +3384,7 @@ version = "0.1.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d59457e662d541ba17869cf51cf177c0b5f0cbf476c66bdc90bf1edac4f875b"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"num-traits",
]
@@ -3373,7 +3394,7 @@ version = "0.1.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a6e6b7c748f995c4c29c5f5ae0248536e04a5739927c74ec0fa564805094b9f"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"num-integer",
"num-traits",
]
@@ -3384,7 +3405,7 @@ version = "0.2.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac267bcc07f48ee5f8935ab0d24f316fb722d7a1292e2913f0cc196b29ffd611"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
]
[[package]]
@@ -3405,11 +3426,11 @@ checksum = "1ab52be62400ca80aa00285d25253d7f7c437b7375c4de678f5405d3afe82ca5"
[[package]]
name = "once_cell"
version = "1.4.0"
version = "1.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b631f7e854af39a1739f401cf34a8a013dfe09eac4fa4dba91e9768bd28168d"
checksum = "260e51e7efe62b592207e9e13a68e43692a7a279171d6ba57abd208bf23645ad"
dependencies = [
"parking_lot 0.10.2",
"parking_lot 0.11.0",
]
[[package]]
@@ -3465,7 +3486,7 @@ version = "0.9.58"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a842db4709b604f0fe5d1170ae3565899be2ad3d9cbc72dedc789ac0511f78de"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"cc",
"libc",
"openssl-src",
@@ -3593,7 +3614,7 @@ dependencies = [
"cloudabi 0.0.3",
"libc",
"redox_syscall",
"smallvec 1.4.1",
"smallvec 1.4.2",
"winapi 0.3.9",
]
@@ -3608,7 +3629,7 @@ dependencies = [
"instant",
"libc",
"redox_syscall",
"smallvec 1.4.1",
"smallvec 1.4.2",
"winapi 0.3.9",
]
@@ -3773,9 +3794,9 @@ dependencies = [
[[package]]
name = "ppv-lite86"
version = "0.2.8"
version = "0.2.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "237a5ed80e274dbc66f86bd59c1e25edc039660be53194b5fe0a482e0f2612ea"
checksum = "c36fa947111f5c62a733b652544dd0016a43ce89619538a8ef92724a6f501a20"
[[package]]
name = "primitive-types"
@@ -3902,9 +3923,9 @@ dependencies = [
[[package]]
name = "protobuf"
version = "2.16.2"
version = "2.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d883f78645c21b7281d21305181aa1f4dd9e9363e7cf2566c93121552cff003e"
checksum = "cb14183cc7f213ee2410067e1ceeadba2a7478a59432ff0747a335202798b1e2"
[[package]]
name = "psutil"
@@ -3931,6 +3952,12 @@ version = "1.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1d01941d82fa2ab50be1e79e6714289dd7cde78eba4c074bc5a4374f650dfe0"
[[package]]
name = "quick-error"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3ac73b1112776fc109b2e61909bc46c7e1bf0d7f690ffb1676553acce16d5cda"
[[package]]
name = "quickcheck"
version = "0.9.2"
@@ -4098,7 +4125,7 @@ version = "1.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62f02856753d04e03e26929f820d0a0a337ebe71f849801eea335d464b349080"
dependencies = [
"autocfg 1.0.0",
"autocfg 1.0.1",
"crossbeam-deque",
"either",
"rayon-core",
@@ -4258,7 +4285,6 @@ dependencies = [
"node_test_rig",
"operation_pool",
"parking_lot 0.11.0",
"rayon",
"remote_beacon_node",
"rest_types",
"serde",
@@ -4281,15 +4307,22 @@ dependencies = [
name = "rest_types"
version = "0.2.0"
dependencies = [
"beacon_chain",
"bls",
"environment",
"eth2_hashing",
"eth2_ssz",
"eth2_ssz_derive",
"hyper 0.13.7",
"procinfo",
"psutil",
"rayon",
"serde",
"serde_json",
"serde_yaml",
"state_processing",
"store",
"tokio 0.2.22",
"tree_hash",
"types",
]
@@ -4346,7 +4379,7 @@ dependencies = [
"libsqlite3-sys",
"lru-cache",
"memchr",
"smallvec 1.4.1",
"smallvec 1.4.2",
"time 0.1.43",
]
@@ -4391,9 +4424,9 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.18.0"
version = "0.18.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cac94b333ee2aac3284c5b8a1b7fb4dd11cba88c244e3fe33cdbd047af0eb693"
checksum = "5d1126dcf58e93cee7d098dbda643b5f92ed724f1f6a63007c1116eed6700c81"
dependencies = [
"base64 0.12.3",
"log 0.4.11",
@@ -4560,9 +4593,9 @@ checksum = "a0eddf2e8f50ced781f288c19f18621fa72a3779e3cb58dbf23b07469b0abeb4"
[[package]]
name = "serde"
version = "1.0.114"
version = "1.0.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5317f7588f0a5078ee60ef675ef96735a1442132dc645eb1d12c018620ed8cd3"
checksum = "e54c9a88f2da7238af84b5101443f0c0d0a3bbdc455e34a5c9497b1903ed55d5"
dependencies = [
"serde_derive",
]
@@ -4579,9 +4612,9 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.114"
version = "1.0.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a0be94b04690fbaed37cddffc5c134bf537c8e3329d53e982fe04c374978f8e"
checksum = "609feed1d0a73cc36a0182a840a9b37b4a82f0b1150369f0536a9e3f2a31dc48"
dependencies = [
"proc-macro2",
"quote",
@@ -4885,9 +4918,9 @@ dependencies = [
[[package]]
name = "smallvec"
version = "1.4.1"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3757cb9d89161a2f24e1cf78efa0c1fcff485d18e3f55e0aa3480824ddaa0f3f"
checksum = "fbee7696b84bbf3d89a1c2eccff0850e3047ed46bfcd2e92c29a2d074d57e252"
[[package]]
name = "snafu"
@@ -5100,7 +5133,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09f8ed9974042b8c3672ff3030a69fcc03b74c47c3d1ecb7755e8a3626011e88"
dependencies = [
"block-cipher",
"generic-array 0.14.3",
"generic-array 0.14.4",
]
[[package]]
@@ -5380,9 +5413,9 @@ dependencies = [
[[package]]
name = "tinyvec"
version = "0.3.3"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53953d2d3a5ad81d9f844a32f14ebb121f50b650cd59d0ee2a07cf13c617efed"
checksum = "238ce071d267c5710f9d31451efec16c5ee22de34df17cc05e56cbc92e967117"
[[package]]
name = "tokio"
@@ -5721,9 +5754,9 @@ checksum = "e987b6bf443f4b5b3b6f38704195592cca41c5bb7aedd3c3693c7081f8289860"
[[package]]
name = "tracing"
version = "0.1.18"
version = "0.1.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0aae59226cf195d8e74d4b34beae1859257efb4e5fed3f147d2dc2c7d372178"
checksum = "6d79ca061b032d6ce30c660fded31189ca0b9922bf483cd70759f13a2d86786c"
dependencies = [
"cfg-if",
"log 0.4.11",
@@ -5732,9 +5765,9 @@ dependencies = [
[[package]]
name = "tracing-core"
version = "0.1.13"
version = "0.1.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d593f98af59ebc017c0648f0117525db358745a8894a8d684e185ba3f45954f9"
checksum = "db63662723c316b43ca36d833707cc93dff82a02ba3d7e354f342682cc8b3545"
dependencies = [
"lazy_static",
]
@@ -5773,7 +5806,7 @@ dependencies = [
"ethereum-types",
"lazy_static",
"rand 0.7.3",
"smallvec 1.4.1",
"smallvec 1.4.2",
"tree_hash_derive",
"types",
]
@@ -5857,9 +5890,9 @@ checksum = "c6ff93345ba2206230b1bb1aa3ece1a63dd9443b7531024575d16a0680a59444"
[[package]]
name = "uint"
version = "0.8.4"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "429ffcad8c8c15f874578c7337d156a3727eb4a1c2374c0ae937ad9a9b748c80"
checksum = "9db035e67dfaf7edd9aebfe8676afcd63eed53c8a4044fed514c8cccf1835177"
dependencies = [
"arbitrary",
"byteorder",
@@ -5934,7 +5967,7 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8326b2c654932e3e4f9196e69d08fdf7cfd718e1dc6f66b347e6024a0c961402"
dependencies = [
"generic-array 0.14.3",
"generic-array 0.14.4",
"subtle 2.2.3",
]
@@ -6003,7 +6036,7 @@ dependencies = [
[[package]]
name = "validator_client"
version = "0.2.3"
version = "0.2.6"
dependencies = [
"account_utils",
"bls",
@@ -6437,7 +6470,7 @@ version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "637ff90c9540fa3073bb577e65033069e4bae7c79d49d74aa3ffdf5342a53217"
dependencies = [
"curve25519-dalek",
"curve25519-dalek 2.1.0",
"rand_core 0.5.1",
"zeroize",
]


@@ -28,6 +28,7 @@ members = [
"common/lighthouse_metrics",
"common/lighthouse_version",
"common/logging",
"common/lru_cache",
"common/remote_beacon_node",
"common/rest_types",
"common/slot_clock",


@@ -1,6 +1,6 @@
[package]
name = "account_manager"
version = "0.2.4"
version = "0.2.7"
authors = ["Paul Hauner <paul@paulhauner.com>", "Luke Anderson <luke@sigmaprime.io>"]
edition = "2018"


@@ -1,6 +1,6 @@
[package]
name = "beacon_node"
version = "0.2.4"
version = "0.2.7"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com"]
edition = "2018"


@@ -549,7 +549,7 @@ fn verify_head_block_is_known<T: BeaconChainTypes>(
{
// Reject any block that exceeds our limit on skipped slots.
if let Some(max_skip_slots) = max_skip_slots {
if block.slot > attestation.data.slot + max_skip_slots {
if attestation.data.slot > block.slot + max_skip_slots {
return Err(Error::TooManySkippedSlots {
head_block_slot: block.slot,
attestation_slot: attestation.data.slot,


@@ -431,6 +431,9 @@ impl<T: BeaconChainTypes> GossipVerifiedBlock<T> {
let (mut parent, block) = load_parent(block, chain)?;
// Reject any block that exceeds our limit on skipped slots.
check_block_skip_slots(chain, &parent.beacon_block.message, &block.message)?;
let state = cheap_state_advance_to_obtain_committees(
&mut parent.beacon_state,
block.slot(),
@@ -517,6 +520,10 @@ impl<T: BeaconChainTypes> SignatureVerifiedBlock<T> {
chain: &BeaconChain<T>,
) -> Result<Self, BlockError<T::EthSpec>> {
let (mut parent, block) = load_parent(block, chain)?;
// Reject any block that exceeds our limit on skipped slots.
check_block_skip_slots(chain, &parent.beacon_block.message, &block.message)?;
let block_root = get_block_root(&block);
let state = cheap_state_advance_to_obtain_committees(
@@ -648,14 +655,7 @@ impl<'a, T: BeaconChainTypes> FullyVerifiedBlock<'a, T> {
}
// Reject any block that exceeds our limit on skipped slots.
if let Some(max_skip_slots) = chain.config.import_max_skip_slots {
if block.slot() > parent.beacon_block.slot() + max_skip_slots {
return Err(BlockError::TooManySkippedSlots {
parent_slot: parent.beacon_block.slot(),
block_slot: block.slot(),
});
}
}
check_block_skip_slots(chain, &parent.beacon_block.message, &block.message)?;
/*
* Perform cursory checks to see if the block is even worth processing.
@@ -799,6 +799,30 @@ impl<'a, T: BeaconChainTypes> FullyVerifiedBlock<'a, T> {
}
}
/// Check that the count of skip slots between the block and its parent does not exceed our maximum
/// value.
///
/// Whilst this is not part of the specification, we include this check to help protect us from
/// DoS attacks. In times of dire network circumstances, the user can configure the
/// `import_max_skip_slots` value.
fn check_block_skip_slots<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
parent: &BeaconBlock<T::EthSpec>,
block: &BeaconBlock<T::EthSpec>,
) -> Result<(), BlockError<T::EthSpec>> {
// Reject any block that exceeds our limit on skipped slots.
if let Some(max_skip_slots) = chain.config.import_max_skip_slots {
if block.slot > parent.slot + max_skip_slots {
return Err(BlockError::TooManySkippedSlots {
parent_slot: parent.slot,
block_slot: block.slot,
});
}
}
Ok(())
}
/// Returns `Ok(())` if the block is later than the finalized slot on `chain`.
///
/// Returns an error if the block is earlier or equal to the finalized slot, or there was an error

View File

@@ -1,6 +1,7 @@
use serde_derive::{Deserialize, Serialize};
pub const DEFAULT_IMPORT_BLOCK_MAX_SKIP_SLOTS: u64 = 10 * 32;
/// There is a 693-block skip in the current canonical Medalla chain; we use 700 to be safe.
pub const DEFAULT_IMPORT_BLOCK_MAX_SKIP_SLOTS: u64 = 700;
#[derive(Debug, PartialEq, Eq, Clone, Deserialize, Serialize)]
pub struct ChainConfig {

View File

@@ -122,14 +122,28 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
head_distance.as_u64(),
slot_distance_pretty(head_distance, slot_duration)
);
info!(
log,
"Syncing";
"peers" => peer_count_pretty(connected_peer_count),
"distance" => distance,
"speed" => sync_speed_pretty(speedo.slots_per_second()),
"est_time" => estimated_time_pretty(speedo.estimated_time_till_slot(current_slot)),
);
let speed = speedo.slots_per_second();
let display_speed = speed.map_or(false, |speed| speed != 0.0);
if display_speed {
info!(
log,
"Syncing";
"peers" => peer_count_pretty(connected_peer_count),
"distance" => distance,
"speed" => sync_speed_pretty(speed),
"est_time" => estimated_time_pretty(speedo.estimated_time_till_slot(current_slot)),
);
} else {
info!(
log,
"Syncing";
"peers" => peer_count_pretty(connected_peer_count),
"distance" => distance,
"est_time" => estimated_time_pretty(speedo.estimated_time_till_slot(current_slot)),
);
}
} else if sync_state.is_synced() {
let block_info = if current_slot > head_slot {
" … empty".to_string()
@@ -220,14 +234,37 @@ fn seconds_pretty(secs: f64) -> String {
let hours = d.whole_hours();
let minutes = d.whole_minutes();
let week_string = if weeks == 1 { "week" } else { "weeks" };
let day_string = if days == 1 { "day" } else { "days" };
let hour_string = if hours == 1 { "hr" } else { "hrs" };
let min_string = if minutes == 1 { "min" } else { "mins" };
if weeks > 0 {
format!("{:.0} weeks {:.0} days", weeks, days % DAYS_PER_WEEK)
format!(
"{:.0} {} {:.0} {}",
weeks,
week_string,
days % DAYS_PER_WEEK,
day_string
)
} else if days > 0 {
format!("{:.0} days {:.0} hrs", days, hours % HOURS_PER_DAY)
format!(
"{:.0} {} {:.0} {}",
days,
day_string,
hours % HOURS_PER_DAY,
hour_string
)
} else if hours > 0 {
format!("{:.0} hrs {:.0} mins", hours, minutes % MINUTES_PER_HOUR)
format!(
"{:.0} {} {:.0} {}",
hours,
hour_string,
minutes % MINUTES_PER_HOUR,
min_string
)
} else {
format!("{:.0} mins", minutes)
format!("{:.0} {}", minutes, min_string)
}
}
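The pluralization fix above boils down to selecting a singular or plural label per unit. A minimal stand-alone sketch in the same largest-two-units style, taking whole minutes as input instead of the `time` crate's `Duration` (the `minutes_pretty` name is illustrative, not from the source):

```rust
/// Picks the singular or plural form of a unit label.
fn label(n: u64, singular: &'static str, plural: &'static str) -> &'static str {
    if n == 1 { singular } else { plural }
}

/// Formats a duration given in whole minutes, showing the two largest units.
fn minutes_pretty(total_minutes: u64) -> String {
    const MINUTES_PER_HOUR: u64 = 60;
    const HOURS_PER_DAY: u64 = 24;
    const DAYS_PER_WEEK: u64 = 7;

    let hours = total_minutes / MINUTES_PER_HOUR;
    let days = hours / HOURS_PER_DAY;
    let weeks = days / DAYS_PER_WEEK;

    if weeks > 0 {
        let d = days % DAYS_PER_WEEK;
        format!("{} {} {} {}", weeks, label(weeks, "week", "weeks"), d, label(d, "day", "days"))
    } else if days > 0 {
        let h = hours % HOURS_PER_DAY;
        format!("{} {} {} {}", days, label(days, "day", "days"), h, label(h, "hr", "hrs"))
    } else if hours > 0 {
        let m = total_minutes % MINUTES_PER_HOUR;
        format!("{} {} {} {}", hours, label(hours, "hr", "hrs"), m, label(m, "min", "mins"))
    } else {
        format!("{} {}", total_minutes, label(total_minutes, "min", "mins"))
    }
}

fn main() {
    assert_eq!(minutes_pretty(1), "1 min");
    assert_eq!(minutes_pretty(61), "1 hr 1 min");
    assert_eq!(minutes_pretty(25 * 60), "1 day 1 hr");
}
```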

View File

@@ -1,6 +1,6 @@
use crate::peer_manager::{score::PeerAction, PeerManager, PeerManagerEvent};
use crate::rpc::*;
use crate::types::{EnrBitfield, GossipEncoding, GossipKind, GossipTopic};
use crate::types::{EnrBitfield, GossipEncoding, GossipKind, GossipTopic, SubnetDiscovery};
use crate::Eth2Enr;
use crate::{error, metrics, Enr, NetworkConfig, NetworkGlobals, PubsubMessage, TopicHash};
use futures::prelude::*;
@@ -29,7 +29,6 @@ use std::{
marker::PhantomData,
sync::Arc,
task::{Context, Poll},
time::Instant,
};
use types::{EnrForkId, EthSpec, SignedBeaconBlock, SubnetId};
@@ -301,8 +300,9 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
/// Attempts to discover new peers for a given subnet. The `min_ttl` gives the time until
/// which we would like to retain the peers.
pub fn discover_subnet_peers(&mut self, subnet_id: SubnetId, min_ttl: Option<Instant>) {
self.peer_manager.discover_subnet_peers(subnet_id, min_ttl)
pub fn discover_subnet_peers(&mut self, subnet_subscriptions: Vec<SubnetDiscovery>) {
self.peer_manager
.discover_subnet_peers(subnet_subscriptions)
}
/// Updates the local ENR's "eth2" field with the latest EnrForkId.

View File

@@ -6,6 +6,7 @@ use super::enr_ext::CombinedKeyExt;
use super::ENR_FILENAME;
use crate::types::{Enr, EnrBitfield};
use crate::NetworkConfig;
use discv5::enr::EnrKey;
use libp2p::core::identity::Keypair;
use slog::{debug, warn};
use ssz::{Decode, Encode};
@@ -48,6 +49,56 @@ impl Eth2Enr for Enr {
}
}
/// Uses the given ENR, or loads an ENR from file if one exists with the same NodeId.
/// If an ENR with the same NodeId exists on disk, this function checks whether the loaded ENR
/// is suitable to use; otherwise it increments the given ENR's sequence number.
pub fn use_or_load_enr(
enr_key: &CombinedKey,
local_enr: &mut Enr,
config: &NetworkConfig,
log: &slog::Logger,
) -> Result<(), String> {
let enr_f = config.network_dir.join(ENR_FILENAME);
if let Ok(mut enr_file) = File::open(enr_f.clone()) {
let mut enr_string = String::new();
match enr_file.read_to_string(&mut enr_string) {
Err(_) => debug!(log, "Could not read ENR from file"),
Ok(_) => {
match Enr::from_str(&enr_string) {
Ok(disk_enr) => {
// if the same node id, then we may need to update our sequence number
if local_enr.node_id() == disk_enr.node_id() {
if compare_enr(&local_enr, &disk_enr) {
debug!(log, "ENR loaded from disk"; "file" => format!("{:?}", enr_f));
// the stored ENR has the same configuration, use it
*local_enr = disk_enr;
return Ok(());
}
// same node id, different configuration - update the sequence number
// Note: local_enr is generated with a default (0) attnets value,
// so a non-default value in the persisted ENR will also bump the sequence number.
let new_seq_no = disk_enr.seq().checked_add(1).ok_or_else(|| "ENR sequence number on file is too large. Remove it to generate a new NodeId")?;
local_enr.set_seq(new_seq_no, enr_key).map_err(|e| {
format!("Could not update ENR sequence number: {:?}", e)
})?;
debug!(log, "ENR sequence number increased"; "seq" => new_seq_no);
}
}
Err(e) => {
warn!(log, "ENR from file could not be decoded"; "error" => format!("{:?}", e));
}
}
}
}
}
save_enr_to_disk(&config.network_dir, &local_enr, log);
Ok(())
}
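The sequence-number bump above guards against overflow with `checked_add` rather than wrapping silently. A minimal sketch of just that step, with a bare `u64` standing in for the ENR's sequence number (the `next_enr_seq` name is illustrative):

```rust
/// Computes the next ENR sequence number, failing rather than wrapping on overflow.
fn next_enr_seq(disk_seq: u64) -> Result<u64, String> {
    disk_seq.checked_add(1).ok_or_else(|| {
        "ENR sequence number on file is too large. Remove it to generate a new NodeId".to_string()
    })
}

fn main() {
    // Normal case: the on-disk sequence number is simply incremented.
    assert_eq!(next_enr_seq(41), Ok(42));
    // Pathological case: an overflowing sequence number surfaces as an error.
    assert!(next_enr_seq(u64::MAX).is_err());
}
```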
/// Loads an ENR from file if it exists and matches the current NodeId and sequence number. If none
/// exists, generates a new one.
///
@@ -65,51 +116,11 @@ pub fn build_or_load_enr<T: EthSpec>(
let enr_key = CombinedKey::from_libp2p(&local_key)?;
let mut local_enr = build_enr::<T>(&enr_key, config, enr_fork_id)?;
let enr_f = config.network_dir.join(ENR_FILENAME);
if let Ok(mut enr_file) = File::open(enr_f.clone()) {
let mut enr_string = String::new();
match enr_file.read_to_string(&mut enr_string) {
Err(_) => debug!(log, "Could not read ENR from file"),
Ok(_) => {
match Enr::from_str(&enr_string) {
Ok(disk_enr) => {
// if the same node id, then we may need to update our sequence number
if local_enr.node_id() == disk_enr.node_id() {
if compare_enr(&local_enr, &disk_enr) {
debug!(log, "ENR loaded from disk"; "file" => format!("{:?}", enr_f));
// the stored ENR has the same configuration, use it
return Ok(disk_enr);
}
// same node id, different configuration - update the sequence number
// Note: local_enr is generated with default(0) attnets value,
// so a non default value in persisted enr will also update sequence number.
let new_seq_no = disk_enr.seq().checked_add(1).ok_or_else(|| "ENR sequence number on file is too large. Remove it to generate a new NodeId")?;
local_enr.set_seq(new_seq_no, &enr_key).map_err(|e| {
format!("Could not update ENR sequence number: {:?}", e)
})?;
debug!(log, "ENR sequence number increased"; "seq" => new_seq_no);
}
}
Err(e) => {
warn!(log, "ENR from file could not be decoded"; "error" => format!("{:?}", e));
}
}
}
}
}
save_enr_to_disk(&config.network_dir, &local_enr, log);
use_or_load_enr(&enr_key, &mut local_enr, config, log)?;
Ok(local_enr)
}
/// Builds a lighthouse ENR given a `NetworkConfig`.
pub fn build_enr<T: EthSpec>(
enr_key: &CombinedKey,
config: &NetworkConfig,
enr_fork_id: EnrForkId,
) -> Result<Enr, String> {
pub fn create_enr_builder_from_config<T: EnrKey>(config: &NetworkConfig) -> EnrBuilder<T> {
let mut builder = EnrBuilder::new("v4");
if let Some(enr_address) = config.enr_address {
builder.ip(enr_address);
@@ -120,7 +131,17 @@ pub fn build_enr<T: EthSpec>(
// we always give it our listening tcp port
// TODO: Add uPnP support to map udp and tcp ports
let tcp_port = config.enr_tcp_port.unwrap_or_else(|| config.libp2p_port);
builder.tcp(tcp_port);
builder.tcp(tcp_port).tcp(config.libp2p_port);
builder
}
/// Builds a lighthouse ENR given a `NetworkConfig`.
pub fn build_enr<T: EthSpec>(
enr_key: &CombinedKey,
config: &NetworkConfig,
enr_fork_id: EnrForkId,
) -> Result<Enr, String> {
let mut builder = create_enr_builder_from_config(config);
// set the `eth2` field on our ENR
builder.add_value(ETH2_ENR_KEY.into(), enr_fork_id.as_ssz_bytes());
@@ -131,7 +152,6 @@ pub fn build_enr<T: EthSpec>(
builder.add_value(BITFIELD_ENR_KEY.into(), bitfield.as_ssz_bytes());
builder
.tcp(config.libp2p_port)
.build(enr_key)
.map_err(|e| format!("Could not build Local ENR: {:?}", e))
}

View File

@@ -3,12 +3,12 @@ pub(crate) mod enr;
pub mod enr_ext;
// Allow external use of the lighthouse ENR builder
pub use enr::{build_enr, CombinedKey, Eth2Enr};
pub use enr::{build_enr, create_enr_builder_from_config, use_or_load_enr, CombinedKey, Eth2Enr};
pub use enr_ext::{CombinedKeyExt, EnrExt};
pub use libp2p::core::identity::Keypair;
use crate::metrics;
use crate::{error, Enr, NetworkConfig, NetworkGlobals};
use crate::{error, Enr, NetworkConfig, NetworkGlobals, SubnetDiscovery};
use discv5::{enr::NodeId, Discv5, Discv5Event};
use enr::{BITFIELD_ENR_KEY, ETH2_ENR_KEY};
use futures::prelude::*;
@@ -305,12 +305,19 @@ impl<TSpec: EthSpec> Discovery<TSpec> {
}
/// Processes a request to search for more peers on a subnet.
pub fn discover_subnet_peers(&mut self, subnet_id: SubnetId, min_ttl: Option<Instant>) {
pub fn discover_subnet_peers(&mut self, subnets_to_discover: Vec<SubnetDiscovery>) {
// If the discv5 service isn't running, ignore queries
if !self.started {
return;
}
self.add_subnet_query(subnet_id, min_ttl, 0);
debug!(
self.log,
"Making discovery query for subnets";
"subnets" => format!("{:?}", subnets_to_discover.iter().map(|s| s.subnet_id).collect::<Vec<_>>())
);
for subnet in subnets_to_discover {
self.add_subnet_query(subnet.subnet_id, subnet.min_ttl, 0);
}
}
/// Add an ENR to the routing table of the discovery mechanism.
@@ -514,6 +521,11 @@ impl<TSpec: EthSpec> Discovery<TSpec> {
// This query is for searching for peers of a particular subnet
// Drain subnet_queries so we can re-use it as we continue to process the queue
let grouped_queries: Vec<SubnetQuery> = subnet_queries.drain(..).collect();
debug!(
self.log,
"Starting grouped subnet query";
"subnets" => format!("{:?}", grouped_queries.iter().map(|q| q.subnet_id).collect::<Vec<_>>()),
);
self.start_subnet_query(grouped_queries);
}
}

View File

@@ -1,5 +1,6 @@
///! The subnet predicate used for searching for a particular subnet.
use super::*;
use slog::{debug, trace};
use std::ops::Deref;
/// Returns the predicate for a given subnet.
@@ -30,7 +31,7 @@ where
.collect();
if matches.is_empty() {
debug!(
trace!(
log_clone,
"Peer found but not on any of the desired subnets";
"peer_id" => format!("{}", enr.peer_id())

View File

@@ -14,7 +14,7 @@ pub mod rpc;
mod service;
pub mod types;
pub use crate::types::{error, Enr, GossipTopic, NetworkGlobals, PubsubMessage};
pub use crate::types::{error, Enr, GossipTopic, NetworkGlobals, PubsubMessage, SubnetDiscovery};
pub use behaviour::{BehaviourEvent, PeerRequestId, Request, Response};
pub use config::Config as NetworkConfig;
pub use discovery::{CombinedKeyExt, EnrExt, Eth2Enr};
@@ -26,4 +26,4 @@ pub use metrics::scrape_discovery_metrics;
pub use peer_manager::{
client::Client, score::PeerAction, PeerDB, PeerInfo, PeerSyncStatus, SyncInfo,
};
pub use service::{Libp2pEvent, Service, NETWORK_KEY_FILENAME};
pub use service::{load_private_key, Libp2pEvent, Service, NETWORK_KEY_FILENAME};

View File

@@ -30,6 +30,8 @@ pub enum ClientKind {
Teku,
/// A Prysm node.
Prysm,
/// A Lodestar node.
Lodestar,
/// An unknown client.
Unknown,
}
@@ -84,6 +86,7 @@ impl std::fmt::Display for Client {
"Prysm: version: {}, os_version: {}",
self.version, self.os_version
),
ClientKind::Lodestar => write!(f, "Lodestar: version: {}", self.version),
ClientKind::Unknown => {
if let Some(agent_string) = &self.agent_string {
write!(f, "Unknown: {}", agent_string)
@@ -157,6 +160,18 @@ fn client_from_agent_version(agent_version: &str) -> (ClientKind, String, String
}
(kind, version, os_version)
}
Some("js-libp2p") => {
let kind = ClientKind::Lodestar;
let mut version = String::from("unknown");
let mut os_version = version.clone();
if let Some(agent_version) = agent_split.next() {
version = agent_version.into();
if let Some(agent_os_version) = agent_split.next() {
os_version = agent_os_version.into();
}
}
(kind, version, os_version)
}
_ => {
let unknown = String::from("unknown");
(ClientKind::Unknown, unknown.clone(), unknown)
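Lodestar advertises a `js-libp2p`-prefixed agent string from which version and OS fields are split out, defaulting to "unknown" when absent. A minimal sketch of that parse, assuming `/`-separated fields (the separator and the `parse_lodestar_agent` name are assumptions, not from the source):

```rust
/// Parses a `js-libp2p/<version>/<os_version>` agent string into
/// (client, version, os_version), defaulting missing fields to "unknown".
fn parse_lodestar_agent(agent: &str) -> (&'static str, String, String) {
    let mut parts = agent.split('/');
    match parts.next() {
        Some("js-libp2p") => {
            let version = parts.next().unwrap_or("unknown").to_string();
            let os_version = parts.next().unwrap_or("unknown").to_string();
            ("Lodestar", version, os_version)
        }
        _ => ("Unknown", "unknown".to_string(), "unknown".to_string()),
    }
}

fn main() {
    assert_eq!(parse_lodestar_agent("js-libp2p/0.1.0/linux").0, "Lodestar");
    assert_eq!(parse_lodestar_agent("js-libp2p/0.1.0/linux").1, "0.1.0");
    // A missing version field falls back to "unknown".
    assert_eq!(parse_lodestar_agent("js-libp2p").1, "unknown");
}
```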

View File

@@ -4,7 +4,7 @@ pub use self::peerdb::*;
use crate::discovery::{Discovery, DiscoveryEvent};
use crate::rpc::{GoodbyeReason, MetaData, Protocol, RPCError, RPCResponseErrorCode};
use crate::{error, metrics};
use crate::{EnrExt, NetworkConfig, NetworkGlobals, PeerId};
use crate::{EnrExt, NetworkConfig, NetworkGlobals, PeerId, SubnetDiscovery};
use futures::prelude::*;
use futures::Stream;
use hashset_delay::HashSetDelay;
@@ -19,7 +19,7 @@ use std::{
task::{Context, Poll},
time::{Duration, Instant},
};
use types::{EthSpec, SubnetId};
use types::EthSpec;
pub use libp2p::core::{identity::Keypair, Multiaddr};
@@ -213,17 +213,19 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
}
/// A request to find peers on a given subnet.
pub fn discover_subnet_peers(&mut self, subnet_id: SubnetId, min_ttl: Option<Instant>) {
pub fn discover_subnet_peers(&mut self, subnets_to_discover: Vec<SubnetDiscovery>) {
// Extend the time to maintain peers if required.
if let Some(min_ttl) = min_ttl {
self.network_globals
.peers
.write()
.extend_peers_on_subnet(subnet_id, min_ttl);
for s in subnets_to_discover.iter() {
if let Some(min_ttl) = s.min_ttl {
self.network_globals
.peers
.write()
.extend_peers_on_subnet(s.subnet_id, min_ttl);
}
}
// request the subnet query from discovery
self.discovery.discover_subnet_peers(subnet_id, min_ttl);
self.discovery.discover_subnet_peers(subnets_to_discover);
}
/// A STATUS message has been received from a peer. This resets the status timer.

View File

@@ -7,6 +7,7 @@ use serde::{
ser::{SerializeStructVariant, Serializer},
Serialize,
};
use std::net::IpAddr;
use std::time::Instant;
use types::{EthSpec, SubnetId};
use PeerConnectionStatus::*;
@@ -104,6 +105,8 @@ pub enum PeerConnectionStatus {
Banned {
/// moment when the peer was banned.
since: Instant,
/// IP addresses this peer had at the moment of the ban
ip_addresses: Vec<IpAddr>,
},
/// We are currently dialing this peer.
Dialing {
@@ -129,7 +132,7 @@ impl Serialize for PeerConnectionStatus {
s.serialize_field("since", &since.elapsed().as_secs())?;
s.end()
}
Banned { since } => {
Banned { since, .. } => {
let mut s = serializer.serialize_struct_variant("", 2, "Banned", 1)?;
s.serialize_field("since", &since.elapsed().as_secs())?;
s.end()
@@ -218,15 +221,16 @@ impl PeerConnectionStatus {
}
/// Modifies the status to Banned
pub fn ban(&mut self) {
pub fn ban(&mut self, ip_addresses: Vec<IpAddr>) {
*self = Banned {
since: Instant::now(),
ip_addresses,
};
}
/// The score system has unbanned the peer. Update the connection status
pub fn unban(&mut self) {
if let PeerConnectionStatus::Banned { since } = self {
if let PeerConnectionStatus::Banned { since, .. } = self {
*self = PeerConnectionStatus::Disconnected { since: *since }
}
}

View File

@@ -1,11 +1,13 @@
use super::peer_info::{PeerConnectionStatus, PeerInfo};
use super::peer_sync_status::PeerSyncStatus;
use super::score::{Score, ScoreState};
use crate::multiaddr::Protocol;
use crate::rpc::methods::MetaData;
use crate::PeerId;
use rand::seq::SliceRandom;
use slog::{crit, debug, trace, warn};
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::Instant;
use types::{EthSpec, SubnetId};
@@ -13,6 +15,9 @@ use types::{EthSpec, SubnetId};
const MAX_DC_PEERS: usize = 500;
/// The maximum number of banned nodes to remember.
const MAX_BANNED_PEERS: usize = 1000;
/// If there are more than `BANNED_PEERS_PER_IP_THRESHOLD` many banned peers with the same IP we ban
/// the IP.
const BANNED_PEERS_PER_IP_THRESHOLD: usize = 5;
/// Storage of known peers, their reputation and information
pub struct PeerDB<TSpec: EthSpec> {
@@ -20,18 +25,72 @@ pub struct PeerDB<TSpec: EthSpec> {
peers: HashMap<PeerId, PeerInfo<TSpec>>,
/// The number of disconnected nodes in the database.
disconnected_peers: usize,
/// The number of banned peers in the database.
banned_peers: usize,
/// Counts banned peers in total and per ip
banned_peers_count: BannedPeersCount,
/// PeerDB's logger
log: slog::Logger,
}
pub struct BannedPeersCount {
/// The number of banned peers in the database.
banned_peers: usize,
/// Maps IPs to the number of banned peers with that IP.
banned_peers_per_ip: HashMap<IpAddr, usize>,
}
impl BannedPeersCount {
/// Removes the peer from the counts if it is banned. Returns true if the peer was banned and
/// false otherwise.
pub fn remove_banned_peer(&mut self, connection_status: &PeerConnectionStatus) -> bool {
match connection_status {
PeerConnectionStatus::Banned { ip_addresses, .. } => {
self.banned_peers = self.banned_peers.saturating_sub(1);
for address in ip_addresses {
if let Some(count) = self.banned_peers_per_ip.get_mut(address) {
*count = count.saturating_sub(1);
}
}
true
}
_ => false, //if not banned do nothing
}
}
pub fn add_banned_peer(&mut self, connection_status: &PeerConnectionStatus) {
if let PeerConnectionStatus::Banned { ip_addresses, .. } = connection_status {
self.banned_peers += 1;
for address in ip_addresses {
*self.banned_peers_per_ip.entry(*address).or_insert(0) += 1;
}
}
}
pub fn banned_peers(&self) -> usize {
self.banned_peers
}
/// An IP is considered banned if more than BANNED_PEERS_PER_IP_THRESHOLD banned peers
/// exist with this IP
pub fn ip_is_banned(&self, ip: &IpAddr) -> bool {
self.banned_peers_per_ip
.get(ip)
.map_or(false, |count| *count > BANNED_PEERS_PER_IP_THRESHOLD)
}
pub fn new() -> Self {
BannedPeersCount {
banned_peers: 0,
banned_peers_per_ip: HashMap::new(),
}
}
}
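Note that `ip_is_banned` uses a strictly-greater-than comparison, so an IP only becomes banned once `BANNED_PEERS_PER_IP_THRESHOLD + 1` banned peers share it. A minimal stand-alone replica of just the counting logic (bare `IpAddr`s instead of `PeerConnectionStatus`; the `IpBanCounter` name is illustrative):

```rust
use std::collections::HashMap;
use std::net::{IpAddr, Ipv4Addr};

const BANNED_PEERS_PER_IP_THRESHOLD: usize = 5;

#[derive(Default)]
struct IpBanCounter {
    banned_peers_per_ip: HashMap<IpAddr, usize>,
}

impl IpBanCounter {
    /// Records the IPs of a newly banned peer.
    fn add(&mut self, ips: &[IpAddr]) {
        for ip in ips {
            *self.banned_peers_per_ip.entry(*ip).or_insert(0) += 1;
        }
    }

    /// An IP is banned once strictly more than the threshold of banned peers share it.
    fn ip_is_banned(&self, ip: &IpAddr) -> bool {
        self.banned_peers_per_ip
            .get(ip)
            .map_or(false, |count| *count > BANNED_PEERS_PER_IP_THRESHOLD)
    }
}

fn main() {
    let ip: IpAddr = Ipv4Addr::new(1, 2, 3, 4).into();
    let mut counter = IpBanCounter::default();
    for _ in 0..BANNED_PEERS_PER_IP_THRESHOLD {
        counter.add(&[ip]);
    }
    // Exactly at the threshold: not yet banned (the check is strictly greater-than).
    assert!(!counter.ip_is_banned(&ip));
    counter.add(&[ip]);
    // One past the threshold: banned.
    assert!(counter.ip_is_banned(&ip));
}
```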
impl<TSpec: EthSpec> PeerDB<TSpec> {
pub fn new(log: &slog::Logger) -> Self {
Self {
log: log.clone(),
disconnected_peers: 0,
banned_peers: 0,
banned_peers_count: BannedPeersCount::new(),
peers: HashMap::new(),
}
}
@@ -99,17 +158,35 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
/// Returns true if the Peer is banned.
pub fn is_banned(&self, peer_id: &PeerId) -> bool {
match self.peers.get(peer_id).map(|info| info.score.state()) {
Some(ScoreState::Banned) => true,
_ => false,
if let Some(peer) = self.peers.get(peer_id) {
match peer.score.state() {
ScoreState::Banned => true,
_ => self.ip_is_banned(peer),
}
} else {
false
}
}
fn ip_is_banned(&self, peer: &PeerInfo<TSpec>) -> bool {
peer.listening_addresses.iter().any(|addr| {
addr.iter().any(|p| match p {
Protocol::Ip4(ip) => self.banned_peers_count.ip_is_banned(&ip.into()),
Protocol::Ip6(ip) => self.banned_peers_count.ip_is_banned(&ip.into()),
_ => false,
})
})
}
/// Returns true if the Peer is either banned or in the disconnected state.
pub fn is_banned_or_disconnected(&self, peer_id: &PeerId) -> bool {
match self.peers.get(peer_id).map(|info| info.score.state()) {
Some(ScoreState::Banned) | Some(ScoreState::Disconnected) => true,
_ => false,
if let Some(peer) = self.peers.get(peer_id) {
match peer.score.state() {
ScoreState::Banned | ScoreState::Disconnected => true,
_ => self.ip_is_banned(peer),
}
} else {
false
}
}
@@ -233,9 +310,10 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
if info.connection_status.is_disconnected() {
self.disconnected_peers = self.disconnected_peers.saturating_sub(1);
}
if info.connection_status.is_banned() {
self.banned_peers = self.banned_peers.saturating_sub(1);
}
self.banned_peers_count
.remove_banned_peer(&info.connection_status);
info.connection_status = PeerConnectionStatus::Dialing {
since: Instant::now(),
};
@@ -284,9 +362,8 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
if info.connection_status.is_disconnected() {
self.disconnected_peers = self.disconnected_peers.saturating_sub(1);
}
if info.connection_status.is_banned() {
self.banned_peers = self.banned_peers.saturating_sub(1);
}
self.banned_peers_count
.remove_banned_peer(&info.connection_status);
info.connection_status.connect_ingoing();
}
@@ -297,9 +374,8 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
if info.connection_status.is_disconnected() {
self.disconnected_peers = self.disconnected_peers.saturating_sub(1);
}
if info.connection_status.is_banned() {
self.banned_peers = self.banned_peers.saturating_sub(1);
}
self.banned_peers_count
.remove_banned_peer(&info.connection_status);
info.connection_status.connect_outgoing();
}
@@ -329,8 +405,23 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
self.disconnected_peers = self.disconnected_peers.saturating_sub(1);
}
if !info.connection_status.is_banned() {
info.connection_status.ban();
self.banned_peers += 1;
info.connection_status
.ban(
info.listening_addresses
.iter()
.fold(Vec::new(), |mut v, a| {
for p in a {
match p {
Protocol::Ip4(ip) => v.push(ip.into()),
Protocol::Ip6(ip) => v.push(ip.into()),
_ => (),
}
}
v
}),
);
self.banned_peers_count
.add_banned_peer(&info.connection_status);
}
self.shrink_to_fit();
}
@@ -345,8 +436,9 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
});
if info.connection_status.is_banned() {
self.banned_peers_count
.remove_banned_peer(&info.connection_status);
info.connection_status.unban();
self.banned_peers = self.banned_peers.saturating_sub(1);
}
self.shrink_to_fit();
}
@@ -355,9 +447,9 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
/// Drops the peers with the lowest reputation so that the number of
/// disconnected peers is less than MAX_DC_PEERS
pub fn shrink_to_fit(&mut self) {
// Remove excess baned peers
while self.banned_peers > MAX_BANNED_PEERS {
if let Some(to_drop) = self
// Remove excess banned peers
while self.banned_peers_count.banned_peers() > MAX_BANNED_PEERS {
if let Some(to_drop) = if let Some((id, info)) = self
.peers
.iter()
.filter(|(_, info)| info.connection_status.is_banned())
@@ -366,15 +458,23 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
.score
.partial_cmp(&info_b.score)
.unwrap_or(std::cmp::Ordering::Equal)
})
.map(|(id, _)| id.clone())
{
}) {
self.banned_peers_count
.remove_banned_peer(&info.connection_status);
Some(id.clone())
} else {
// If there is no minimum, this is a coding error.
crit!(
self.log,
"banned_peers > MAX_BANNED_PEERS despite no banned peers in db!"
);
// reset banned_peers; this will also exit the loop
self.banned_peers_count = BannedPeersCount::new();
None
} {
debug!(self.log, "Removing old banned peer"; "peer_id" => to_drop.to_string());
self.peers.remove(&to_drop);
}
// If there is no minimum, this is a coding error. For safety we decrease
// the count to avoid a potential infinite loop.
self.banned_peers = self.banned_peers.saturating_sub(1);
}
// Remove excess disconnected peers
@@ -422,8 +522,11 @@ impl<TSpec: EthSpec> PeerDB<TSpec> {
#[cfg(test)]
mod tests {
use super::*;
use libp2p::core::Multiaddr;
use slog::{o, Drain};
use std::net::{Ipv4Addr, Ipv6Addr};
use types::MinimalEthSpec;
type M = MinimalEthSpec;
pub fn build_log(level: slog::Level, enabled: bool) -> slog::Logger {
@@ -503,13 +606,13 @@ mod tests {
let p = PeerId::random();
pdb.connect_ingoing(&p);
}
assert_eq!(pdb.banned_peers, 0);
assert_eq!(pdb.banned_peers_count.banned_peers(), 0);
for p in pdb.connected_peer_ids().cloned().collect::<Vec<_>>() {
pdb.ban(&p);
}
assert_eq!(pdb.banned_peers, MAX_BANNED_PEERS);
assert_eq!(pdb.banned_peers_count.banned_peers(), MAX_BANNED_PEERS);
}
#[test]
@@ -608,24 +711,39 @@ mod tests {
pdb.connect_ingoing(&random_peer2);
pdb.connect_ingoing(&random_peer3);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.connect_ingoing(&random_peer);
pdb.disconnect(&random_peer1);
pdb.ban(&random_peer2);
pdb.connect_ingoing(&random_peer3);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.ban(&random_peer1);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.connect_outgoing(&random_peer2);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.ban(&random_peer3);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.ban(&random_peer3);
pdb.connect_ingoing(&random_peer1);
@@ -633,15 +751,191 @@ mod tests {
pdb.ban(&random_peer3);
pdb.connect_ingoing(&random_peer);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.disconnect(&random_peer);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.disconnect(&random_peer);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
assert_eq!(pdb.banned_peers, pdb.banned_peers().count());
assert_eq!(
pdb.banned_peers_count.banned_peers(),
pdb.banned_peers().count()
);
pdb.ban(&random_peer);
assert_eq!(pdb.disconnected_peers, pdb.disconnected_peers().count());
}
fn connect_peer_with_ips(pdb: &mut PeerDB<M>, ips: Vec<Vec<IpAddr>>) -> PeerId {
let p = PeerId::random();
pdb.connect_ingoing(&p);
pdb.peers.get_mut(&p).unwrap().listening_addresses = ips
.into_iter()
.map(|ip_addresses| {
let mut addr = Multiaddr::empty();
for ip_address in ip_addresses {
addr.push(Protocol::from(ip_address));
}
addr
})
.collect();
p
}
#[test]
fn test_ban_address() {
let mut pdb = get_db();
let ip1: IpAddr = Ipv4Addr::new(1, 2, 3, 4).into();
let ip2: IpAddr = Ipv6Addr::new(1, 2, 3, 4, 5, 6, 7, 8).into();
let ip3: IpAddr = Ipv4Addr::new(1, 2, 3, 5).into();
let ip4: IpAddr = Ipv6Addr::new(1, 2, 3, 4, 5, 6, 7, 9).into();
let ip5: IpAddr = Ipv4Addr::new(2, 2, 3, 4).into();
let mut peers = Vec::new();
for i in 0..BANNED_PEERS_PER_IP_THRESHOLD + 2 {
peers.push(connect_peer_with_ips(
&mut pdb,
if i == 0 {
vec![vec![ip1], vec![ip2]]
} else {
vec![vec![ip1, ip2], vec![ip3, ip4]]
},
));
}
let p1 = connect_peer_with_ips(&mut pdb, vec![vec![ip1]]);
let p2 = connect_peer_with_ips(&mut pdb, vec![vec![ip2, ip5]]);
let p3 = connect_peer_with_ips(&mut pdb, vec![vec![ip3], vec![ip5]]);
let p4 = connect_peer_with_ips(&mut pdb, vec![vec![ip5, ip4]]);
let p5 = connect_peer_with_ips(&mut pdb, vec![vec![ip5]]);
for p in &peers[..BANNED_PEERS_PER_IP_THRESHOLD + 1] {
pdb.ban(p);
}
//check that ip1 and ip2 are banned but ip3-5 not
assert!(pdb.is_banned(&p1));
assert!(pdb.is_banned(&p2));
assert!(!pdb.is_banned(&p3));
assert!(!pdb.is_banned(&p4));
assert!(!pdb.is_banned(&p5));
//ban also the last peer in peers
pdb.ban(&peers[BANNED_PEERS_PER_IP_THRESHOLD + 1]);
//check that ip1-ip4 are banned but ip5 not
assert!(pdb.is_banned(&p1));
assert!(pdb.is_banned(&p2));
assert!(pdb.is_banned(&p3));
assert!(pdb.is_banned(&p4));
assert!(!pdb.is_banned(&p5));
//peers[0] gets unbanned
pdb.unban(&peers[0]);
//nothing changed
assert!(pdb.is_banned(&p1));
assert!(pdb.is_banned(&p2));
assert!(pdb.is_banned(&p3));
assert!(pdb.is_banned(&p4));
assert!(!pdb.is_banned(&p5));
//peers[1] gets unbanned
pdb.unban(&peers[1]);
//all ips are unbanned
assert!(!pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
assert!(!pdb.is_banned(&p3));
assert!(!pdb.is_banned(&p4));
assert!(!pdb.is_banned(&p5));
}
#[test]
fn test_banned_ip_consistent_after_changing_ips() {
let mut pdb = get_db();
let ip1: IpAddr = Ipv4Addr::new(1, 2, 3, 4).into();
let ip2: IpAddr = Ipv6Addr::new(1, 2, 3, 4, 5, 6, 7, 8).into();
let mut peers = Vec::new();
for _ in 0..BANNED_PEERS_PER_IP_THRESHOLD + 1 {
peers.push(connect_peer_with_ips(&mut pdb, vec![vec![ip1]]));
}
let p1 = connect_peer_with_ips(&mut pdb, vec![vec![ip1]]);
let p2 = connect_peer_with_ips(&mut pdb, vec![vec![ip2]]);
//ban all peers
for p in &peers {
pdb.ban(p);
}
//check ip is banned
assert!(pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
//change addresses of banned peers
for p in &peers {
pdb.peers.get_mut(p).unwrap().listening_addresses =
vec![Multiaddr::empty().with(Protocol::from(ip2))];
}
//check still the same ip is banned
assert!(pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
//unban a peer
pdb.unban(&peers[0]);
//check not banned anymore
assert!(!pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
//check still not banned after new ban
pdb.ban(&peers[0]);
assert!(!pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
//unban and reban all peers
for p in &peers {
pdb.unban(p);
pdb.ban(p);
}
//ip2 is now banned
assert!(!pdb.is_banned(&p1));
assert!(pdb.is_banned(&p2));
//change ips back again
for p in &peers {
pdb.peers.get_mut(p).unwrap().listening_addresses =
vec![Multiaddr::empty().with(Protocol::from(ip1))];
}
//reban every peer except one
for p in &peers[1..] {
pdb.unban(p);
pdb.ban(p);
}
//nothing is banned
assert!(!pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
//reban last peer
pdb.unban(&peers[0]);
pdb.ban(&peers[0]);
//ip1 is banned
assert!(pdb.is_banned(&p1));
assert!(!pdb.is_banned(&p2));
}
}

View File

@@ -36,6 +36,8 @@ pub enum Libp2pEvent<TSpec: EthSpec> {
Behaviour(BehaviourEvent<TSpec>),
/// A new listening address has been established.
NewListenAddr(Multiaddr),
/// We reached zero listening addresses.
ZeroListeners,
}
/// The configuration and state of the libp2p components for the beacon node.
@@ -283,10 +285,17 @@ impl<TSpec: EthSpec> Service<TSpec> {
debug!(self.log, "Listen address expired"; "multiaddr" => multiaddr.to_string())
}
SwarmEvent::ListenerClosed { addresses, reason } => {
crit!(self.log, "Listener closed"; "addresses" => format!("{:?}", addresses), "reason" => format!("{:?}", reason))
crit!(self.log, "Listener closed"; "addresses" => format!("{:?}", addresses), "reason" => format!("{:?}", reason));
if Swarm::listeners(&self.swarm).count() == 0 {
return Libp2pEvent::ZeroListeners;
}
}
SwarmEvent::ListenerError { error } => {
warn!(self.log, "Listener error"; "error" => format!("{:?}", error.to_string()))
// This is non-fatal, but we still check.
warn!(self.log, "Listener error"; "error" => format!("{:?}", error.to_string()));
if Swarm::listeners(&self.swarm).count() == 0 {
return Libp2pEvent::ZeroListeners;
}
}
SwarmEvent::Dialing(peer_id) => {
debug!(self.log, "Dialing peer"; "peer_id" => peer_id.to_string());
@@ -348,7 +357,7 @@ fn keypair_from_bytes(mut bytes: Vec<u8>) -> error::Result<Keypair> {
/// generated and is then saved to disk.
///
/// Currently only secp256k1 keys are allowed, as these are the only keys supported by discv5.
fn load_private_key(config: &NetworkConfig, log: &slog::Logger) -> Keypair {
pub fn load_private_key(config: &NetworkConfig, log: &slog::Logger) -> Keypair {
// check for key from disk
let network_key_f = config.network_dir.join(NETWORK_KEY_FILENAME);
if let Ok(mut network_key_file) = File::open(network_key_f.clone()) {

View File

@@ -1,6 +1,7 @@
pub mod error;
mod globals;
mod pubsub;
mod subnet;
mod sync_state;
mod topics;
@@ -13,5 +14,6 @@ pub type Enr = discv5::enr::Enr<discv5::enr::CombinedKey>;
pub use globals::NetworkGlobals;
pub use pubsub::PubsubMessage;
pub use subnet::SubnetDiscovery;
pub use sync_state::SyncState;
pub use topics::{GossipEncoding, GossipKind, GossipTopic};

View File

@@ -0,0 +1,28 @@
use std::time::{Duration, Instant};
use types::SubnetId;
const DURATION_DIFFERENCE: Duration = Duration::from_millis(1);
/// A subnet to discover peers on, along with the instant after which it's no longer useful.
#[derive(Debug, Clone)]
pub struct SubnetDiscovery {
pub subnet_id: SubnetId,
pub min_ttl: Option<Instant>,
}
impl PartialEq for SubnetDiscovery {
fn eq(&self, other: &SubnetDiscovery) -> bool {
self.subnet_id == other.subnet_id
&& match (self.min_ttl, other.min_ttl) {
(Some(min_ttl_instant), Some(other_min_ttl_instant)) => {
min_ttl_instant.saturating_duration_since(other_min_ttl_instant)
< DURATION_DIFFERENCE
&& other_min_ttl_instant.saturating_duration_since(min_ttl_instant)
< DURATION_DIFFERENCE
}
(None, None) => true,
(None, Some(_)) => true,
(Some(_), None) => true,
}
}
}
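The `PartialEq` impl above treats two `min_ttl` instants as equal when they differ by less than `DURATION_DIFFERENCE`, and treats any `None` as matching anything. A minimal standalone sketch of that tolerance check (the real struct carries a `types::SubnetId`; here only the `min_ttl` comparison is reproduced):

```rust
use std::time::{Duration, Instant};

// Same tolerance as the SubnetDiscovery equality check above.
const DURATION_DIFFERENCE: Duration = Duration::from_millis(1);

// Mirrors the match on (self.min_ttl, other.min_ttl): both-Some compares
// with a 1ms tolerance in either direction; any None is treated as equal.
fn min_ttl_eq(a: Option<Instant>, b: Option<Instant>) -> bool {
    match (a, b) {
        (Some(x), Some(y)) => {
            x.saturating_duration_since(y) < DURATION_DIFFERENCE
                && y.saturating_duration_since(x) < DURATION_DIFFERENCE
        }
        // (None, None), (None, Some(_)) and (Some(_), None) all compare equal.
        _ => true,
    }
}

fn main() {
    let now = Instant::now();
    // Instants within 1ms of each other compare equal.
    assert!(min_ttl_eq(Some(now), Some(now + Duration::from_micros(500))));
    // Instants 2ms apart do not.
    assert!(!min_ttl_eq(Some(now), Some(now + Duration::from_millis(2))));
    // A missing TTL matches anything.
    assert!(min_ttl_eq(None, Some(now)));
}
```

`saturating_duration_since` clamps to zero when the argument is later, so the two-sided check covers both orderings without risking a panic.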

View File

@@ -94,8 +94,13 @@ pub async fn build_libp2p_instance(boot_nodes: Vec<Enr>, log: slog::Logger) -> L
// launch libp2p service
let (signal, exit) = exit_future::signal();
let executor =
environment::TaskExecutor::new(tokio::runtime::Handle::current(), exit, log.clone());
let (shutdown_tx, _) = futures::channel::mpsc::channel(1);
let executor = environment::TaskExecutor::new(
tokio::runtime::Handle::current(),
exit,
log.clone(),
shutdown_tx,
);
Libp2pInstance(
LibP2PService::new(executor, &config, EnrForkId::default(), &log)
.await

View File

@@ -40,3 +40,4 @@ lighthouse_metrics = { path = "../../common/lighthouse_metrics" }
environment = { path = "../../lighthouse/environment" }
itertools = "0.9.0"
num_cpus = "1.13.0"
lru_cache = { path = "../../common/lru_cache" }

View File

@@ -2,7 +2,7 @@
//! given time. It schedules subscriptions to shard subnets, requests peer discoveries and
//! determines whether attestations should be aggregated and/or passed to the beacon node.
use std::collections::VecDeque;
use std::collections::{HashMap, VecDeque};
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
@@ -13,7 +13,7 @@ use rand::seq::SliceRandom;
use slog::{crit, debug, error, o, trace, warn};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::GossipKind, NetworkGlobals};
use eth2_libp2p::{types::GossipKind, NetworkGlobals, SubnetDiscovery};
use hashset_delay::HashSetDelay;
use rest_types::ValidatorSubscription;
use slot_clock::SlotClock;
@@ -25,11 +25,8 @@ mod tests;
/// The minimum number of slots ahead that we attempt to discover peers for a subscription. If the
/// slot is less than this number, skip the peer discovery process.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 1;
/// The number of slots ahead that we attempt to discover peers for a subscription. If the slot to
/// attest to is greater than this, we queue a discovery request for this many slots prior to
/// subscribing.
const TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 6;
/// A subnet discovery query takes at most 30 secs; 2 slots take 24s.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 2;
/// The time (in slots) before a last seen validator is considered absent and we unsubscribe from the random
/// gossip topics that we subscribed to due to the validator connection.
const LAST_SEEN_VALIDATOR_TIMEOUT: u32 = 150;
@@ -39,12 +36,10 @@ const LAST_SEEN_VALIDATOR_TIMEOUT: u32 = 150;
/// Note: The time is calculated as `time = milliseconds_per_slot / ADVANCE_SUBSCRIPTION_TIME`.
const ADVANCE_SUBSCRIBE_TIME: u32 = 3;
/// The default number of slots before items in hash delay sets used by this class should expire.
/// 36s at 12s slot time
const DEFAULT_EXPIRATION_TIMEOUT: u32 = 3;
// 36s at 12s slot time
/// The duration difference between two instances for them to be considered equal.
const DURATION_DIFFERENCE: Duration = Duration::from_millis(1);
#[derive(Debug, Eq, Clone)]
#[derive(Debug, PartialEq, Clone)]
pub enum AttServiceMessage {
/// Subscribe to the specified subnet id.
Subscribe(SubnetId),
@@ -54,44 +49,8 @@ pub enum AttServiceMessage {
EnrAdd(SubnetId),
/// Remove the `SubnetId` from the ENR bitfield.
EnrRemove(SubnetId),
/// Discover peers for a particular subnet.
/// This includes the `Instant` until which we need the discovered peer.
DiscoverPeers {
subnet_id: SubnetId,
min_ttl: Option<Instant>,
},
}
impl PartialEq for AttServiceMessage {
fn eq(&self, other: &AttServiceMessage) -> bool {
match (self, other) {
(&AttServiceMessage::Subscribe(a), &AttServiceMessage::Subscribe(b)) => a == b,
(&AttServiceMessage::Unsubscribe(a), &AttServiceMessage::Unsubscribe(b)) => a == b,
(&AttServiceMessage::EnrAdd(a), &AttServiceMessage::EnrAdd(b)) => a == b,
(&AttServiceMessage::EnrRemove(a), &AttServiceMessage::EnrRemove(b)) => a == b,
(
&AttServiceMessage::DiscoverPeers { subnet_id, min_ttl },
&AttServiceMessage::DiscoverPeers {
subnet_id: other_subnet_id,
min_ttl: other_min_ttl,
},
) => {
subnet_id == other_subnet_id
&& match (min_ttl, other_min_ttl) {
(Some(min_ttl_instant), Some(other_min_ttl_instant)) => {
min_ttl_instant.saturating_duration_since(other_min_ttl_instant)
< DURATION_DIFFERENCE
&& other_min_ttl_instant.saturating_duration_since(min_ttl_instant)
< DURATION_DIFFERENCE
}
(None, None) => true,
(None, Some(_)) => true,
(Some(_), None) => true,
}
}
_ => false,
}
}
/// Discover peers for a list of `SubnetDiscovery`.
DiscoverPeers(Vec<SubnetDiscovery>),
}
/// A particular subnet at a given slot.
@@ -116,9 +75,6 @@ pub struct AttestationService<T: BeaconChainTypes> {
/// The collection of currently subscribed random subnets mapped to their expiry deadline.
random_subnets: HashSetDelay<SubnetId>,
/// A collection of timeouts for when to start searching for peers for a particular shard.
discover_peers: HashSetDelay<ExactSubnet>,
/// A collection of timeouts for when to subscribe to a shard subnet.
subscriptions: HashSetDelay<ExactSubnet>,
@@ -172,7 +128,6 @@ impl<T: BeaconChainTypes> AttestationService<T> {
network_globals,
beacon_chain,
random_subnets: HashSetDelay::new(Duration::from_millis(random_subnet_duration_millis)),
discover_peers: HashSetDelay::new(default_timeout),
subscriptions: HashSetDelay::new(default_timeout),
unsubscriptions: HashSetDelay::new(default_timeout),
aggregate_validators_on_subnet: HashSetDelay::new(default_timeout),
@@ -198,6 +153,8 @@ impl<T: BeaconChainTypes> AttestationService<T> {
&mut self,
subscriptions: Vec<ValidatorSubscription>,
) -> Result<(), String> {
// Maps each subnet_id subscription to its highest slot
let mut subnets_to_discover: HashMap<SubnetId, Slot> = HashMap::new();
for subscription in subscriptions {
metrics::inc_counter(&metrics::SUBNET_SUBSCRIPTION_REQUESTS);
//NOTE: We assume all subscriptions have been verified before reaching this service
@@ -226,15 +183,20 @@ impl<T: BeaconChainTypes> AttestationService<T> {
continue;
}
};
// Ensure each subnet_id inserted into the map has the highest slot as its value.
// Higher slot corresponds to higher min_ttl in the `SubnetDiscovery` entry.
if let Some(slot) = subnets_to_discover.get(&subnet_id) {
if subscription.slot > *slot {
subnets_to_discover.insert(subnet_id, subscription.slot);
}
} else {
subnets_to_discover.insert(subnet_id, subscription.slot);
}
let exact_subnet = ExactSubnet {
subnet_id,
slot: subscription.slot,
};
// determine if we should run a discovery lookup request and request it if required
if let Err(e) = self.discover_peers_request(exact_subnet.clone()) {
warn!(self.log, "Discovery lookup request error"; "error" => e);
}
// determine if the validator is an aggregator. If so, we subscribe to the subnet and
// if successful add the validator to a mapping of known aggregators for that exact
@@ -264,6 +226,14 @@ impl<T: BeaconChainTypes> AttestationService<T> {
}
}
if let Err(e) = self.discover_peers_request(
subnets_to_discover
.into_iter()
.map(|(subnet_id, slot)| ExactSubnet { subnet_id, slot }),
) {
warn!(self.log, "Discovery lookup request error"; "error" => e);
};
// pre-emptively wake the thread to check for new events
if let Some(waker) = &self.waker {
waker.wake_by_ref();
@@ -290,114 +260,55 @@ impl<T: BeaconChainTypes> AttestationService<T> {
/// Checks if there are currently queued discovery requests and the time required to make the
/// request.
///
/// If there is sufficient time and no other request exists, queues a peer discovery request
/// for the required subnet.
fn discover_peers_request(&mut self, exact_subnet: ExactSubnet) -> Result<(), &'static str> {
/// If there is sufficient time, queues a peer discovery request for all the required subnets.
fn discover_peers_request(
&mut self,
exact_subnets: impl Iterator<Item = ExactSubnet>,
) -> Result<(), &'static str> {
let current_slot = self
.beacon_chain
.slot_clock
.now()
.ok_or_else(|| "Could not get the current slot")?;
let slot_duration = self.beacon_chain.slot_clock.slot_duration();
// if there is enough time to perform a discovery lookup
if exact_subnet.slot >= current_slot.saturating_add(MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD) {
// check if a discovery request already exists
if self.discover_peers.get(&exact_subnet).is_some() {
// already a request queued, end
return Ok(());
}
// if the slot is more than epoch away, add an event to start looking for peers
if exact_subnet.slot
< current_slot.saturating_add(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD)
{
// add one slot to ensure we keep the peer for the subscription slot
let min_ttl = self
.beacon_chain
.slot_clock
.duration_to_slot(exact_subnet.slot + 1)
.map(|duration| std::time::Instant::now() + duration);
self.send_or_update_discovery_event(exact_subnet.subnet_id, min_ttl);
} else {
// Queue the discovery event to be executed for
// TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD
let duration_to_discover = {
let duration_to_next_slot = self
let discovery_subnets: Vec<SubnetDiscovery> = exact_subnets
.filter_map(|exact_subnet| {
// check if there is enough time to perform a discovery lookup
if exact_subnet.slot
>= current_slot.saturating_add(MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD)
{
// add one slot to ensure we keep the peer for the subscription slot
let min_ttl = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| "Unable to determine duration to next slot")?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_discover = exact_subnet
.slot
.saturating_sub(current_slot)
.saturating_sub(1u64)
.saturating_sub(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD);
.duration_to_slot(exact_subnet.slot + 1)
.map(|duration| std::time::Instant::now() + duration);
Some(SubnetDiscovery {
subnet_id: exact_subnet.subnet_id,
min_ttl,
})
} else {
// TODO: Send the time frame needed to have a peer connected, so that we can
// maintain peers for at least this duration.
// We may want to check the global PeerInfo to see estimated timeouts for each
// peer before they can be removed.
warn!(self.log,
"Not enough time for a discovery search";
"subnet_id" => format!("{:?}", exact_subnet)
);
None
}
})
.collect();
duration_to_next_slot + slot_duration * (slots_until_discover.as_u64() as u32)
};
self.discover_peers
.insert_at(exact_subnet, duration_to_discover);
}
} else {
// TODO: Send the time frame needed to have a peer connected, so that we can
// maintain peers for at least this duration.
// We may want to check the global PeerInfo to see estimated timeouts for each
// peer before they can be removed.
return Err("Not enough time for a discovery search");
if !discovery_subnets.is_empty() {
self.events
.push_back(AttServiceMessage::DiscoverPeers(discovery_subnets));
}
Ok(())
}
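The `min_ttl` in `discover_peers_request` is derived via `duration_to_slot(exact_subnet.slot + 1)`, i.e. the peer should be retained until the *end* of the subscription slot. A hypothetical sketch of that arithmetic, assuming a simple 12s slot clock anchored at a known genesis instant (the real code goes through the beacon chain's `slot_clock`):

```rust
use std::time::{Duration, Instant};

// Hypothetical fixed-rate slot clock; stands in for slot_clock.duration_to_slot.
fn min_ttl(genesis: Instant, subscription_slot: u64) -> Instant {
    const SLOT_DURATION: Duration = Duration::from_secs(12);
    // slot + 1: the TTL must cover the whole target slot, so aim for the
    // start of the *next* slot rather than the start of the target slot.
    genesis + SLOT_DURATION * (subscription_slot as u32 + 1)
}

fn main() {
    let genesis = Instant::now();
    // Slot 10 ends (10 + 1) * 12s = 132s after genesis.
    assert_eq!(
        min_ttl(genesis, 10).duration_since(genesis),
        Duration::from_secs(132)
    );
}
```

This is why the diff keeps the "add one slot" comment: without the `+ 1`, a discovered peer could be pruned at the start of the very slot the validator attests in.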
/// Checks if we have a discover peers event already and sends a new event if necessary
///
/// If a message exists for the same subnet, compare the `min_ttl` of the current and
/// existing messages and extend the existing message as necessary.
fn send_or_update_discovery_event(&mut self, subnet_id: SubnetId, min_ttl: Option<Instant>) {
// track whether this message already exists in the event queue
let mut is_duplicate = false;
self.events.iter_mut().for_each(|event| {
if let AttServiceMessage::DiscoverPeers {
subnet_id: other_subnet_id,
min_ttl: other_min_ttl,
} = event
{
if subnet_id == *other_subnet_id {
let other_min_ttl_clone = *other_min_ttl;
match (min_ttl, other_min_ttl_clone) {
(Some(min_ttl_instant), Some(other_min_ttl_instant)) =>
// only update the min_ttl if it exceeds the existing min_ttl by more than DURATION_DIFFERENCE
{
if min_ttl_instant.saturating_duration_since(other_min_ttl_instant)
> DURATION_DIFFERENCE
{
*other_min_ttl = min_ttl;
}
}
(None, Some(_)) => {} // Keep the current one as it has an actual min_ttl
(Some(min_ttl), None) => {
// Update the request to include a min_ttl.
*other_min_ttl = Some(min_ttl);
}
(None, None) => {} // Duplicate message, do nothing.
}
is_duplicate = true;
return;
}
};
});
if !is_duplicate {
self.events
.push_back(AttServiceMessage::DiscoverPeers { subnet_id, min_ttl });
}
}
/// Checks the current random subnets and subscriptions to determine if a new subscription for this
/// subnet is required for the given slot.
///
@@ -547,7 +458,11 @@ impl<T: BeaconChainTypes> AttestationService<T> {
if !already_subscribed {
// send a discovery request and a subscription
self.send_or_update_discovery_event(subnet_id, None);
self.events
.push_back(AttServiceMessage::DiscoverPeers(vec![SubnetDiscovery {
subnet_id,
min_ttl: None,
}]));
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
@@ -558,20 +473,6 @@ impl<T: BeaconChainTypes> AttestationService<T> {
/* A collection of functions that handle the various timeouts */
/// Request a discovery query to find peers for a particular subnet.
fn handle_discover_peers(&mut self, exact_subnet: ExactSubnet) {
debug!(self.log, "Searching for peers for subnet"; "subnet" => *exact_subnet.subnet_id, "target_slot" => exact_subnet.slot);
// add one slot to ensure we keep the peer for the subscription slot
let min_ttl = self
.beacon_chain
.slot_clock
.duration_to_slot(exact_subnet.slot + 1)
.map(|duration| std::time::Instant::now() + duration);
self.send_or_update_discovery_event(exact_subnet.subnet_id, min_ttl)
}
/// A queued subscription is ready.
///
/// We add subscriptions events even if we are already subscribed to a random subnet (as these
@@ -731,15 +632,6 @@ impl<T: BeaconChainTypes> Stream for AttestationService<T> {
self.waker = Some(cx.waker().clone());
}
// process any peer discovery events
match self.discover_peers.poll_next_unpin(cx) {
Poll::Ready(Some(Ok(exact_subnet))) => self.handle_discover_peers(exact_subnet),
Poll::Ready(Some(Err(e))) => {
error!(self.log, "Failed to check for peer discovery requests"; "error"=> e);
}
Poll::Ready(None) | Poll::Pending => {}
}
// process any subscription events
match self.subscriptions.poll_next_unpin(cx) {
Poll::Ready(Some(Ok(exact_subnet))) => self.handle_subscriptions(exact_subnet),

View File

@@ -8,7 +8,9 @@ mod tests {
migrate::NullMigrator,
};
use eth2_libp2p::discovery::{build_enr, Keypair};
use eth2_libp2p::{discovery::CombinedKey, CombinedKeyExt, NetworkConfig, NetworkGlobals};
use eth2_libp2p::{
discovery::CombinedKey, CombinedKeyExt, NetworkConfig, NetworkGlobals, SubnetDiscovery,
};
use futures::Stream;
use genesis::{generate_deterministic_keypairs, interop_genesis_state};
use lazy_static::lazy_static;
@@ -120,23 +122,21 @@ mod tests {
}
}
fn _get_subscriptions(
fn get_subscriptions(
validator_count: u64,
slot: Slot,
committee_count_at_slot: u64,
) -> Vec<ValidatorSubscription> {
let mut subscriptions: Vec<ValidatorSubscription> = Vec::new();
for validator_index in 0..validator_count {
let is_aggregator = true;
subscriptions.push(ValidatorSubscription {
validator_index,
attestation_committee_index: validator_index,
slot,
committee_count_at_slot,
is_aggregator,
});
}
subscriptions
(0..validator_count)
.map(|validator_index| {
get_subscription(
validator_index,
validator_index,
slot,
committee_count_at_slot,
)
})
.collect()
}
// gets a number of events from the subscription service, or returns none if it times out after a number
@@ -210,14 +210,7 @@ mod tests {
let events = get_events(attestation_service, no_events_expected, 1).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant
},
AttServiceMessage::Subscribe(_any1),
AttServiceMessage::EnrAdd(_any3)
]
[AttServiceMessage::DiscoverPeers(_), AttServiceMessage::Subscribe(_any1), AttServiceMessage::EnrAdd(_any3)]
);
// if there are fewer events than expected, there's been a collision
if events.len() == no_events_expected {
@@ -270,14 +263,7 @@ mod tests {
let events = get_events(attestation_service, no_events_expected, 2).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant
},
AttServiceMessage::Subscribe(_any1),
AttServiceMessage::EnrAdd(_any3)
]
[AttServiceMessage::DiscoverPeers(_), AttServiceMessage::Subscribe(_any1), AttServiceMessage::EnrAdd(_any3)]
);
// if there are fewer events than expected, there's been a collision
if events.len() == no_events_expected {
@@ -330,19 +316,15 @@ mod tests {
&attestation_service.beacon_chain.spec,
)
.unwrap();
let expected = vec![AttServiceMessage::DiscoverPeers { subnet_id, min_ttl }];
let expected = vec![AttServiceMessage::DiscoverPeers(vec![SubnetDiscovery {
subnet_id,
min_ttl,
}])];
let events = get_events(attestation_service, no_events_expected, 1).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant
},
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
[AttServiceMessage::DiscoverPeers(_), AttServiceMessage::Subscribe(_any2), AttServiceMessage::EnrAdd(_any3)]
);
// if there are fewer events than expected, there's been a collision
if events.len() == no_events_expected {
@@ -396,21 +378,14 @@ mod tests {
)
.unwrap();
let expected = vec![
AttServiceMessage::DiscoverPeers { subnet_id, min_ttl },
AttServiceMessage::DiscoverPeers(vec![SubnetDiscovery { subnet_id, min_ttl }]),
AttServiceMessage::Subscribe(subnet_id),
];
let events = get_events(attestation_service, no_events_expected, 5).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant
},
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
[AttServiceMessage::DiscoverPeers(_), AttServiceMessage::Subscribe(_any2), AttServiceMessage::EnrAdd(_any3)]
);
// if there are fewer events than expected, there's been a collision
if events.len() == no_events_expected {
@@ -454,14 +429,7 @@ mod tests {
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant
},
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
[AttServiceMessage::DiscoverPeers(_), AttServiceMessage::Subscribe(_any2), AttServiceMessage::EnrAdd(_any3)]
);
// if there are fewer events than expected, there's been a collision
if events.len() == no_events_expected {
@@ -517,20 +485,16 @@ mod tests {
// expect discover peers because we will enter TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD range
let expected: Vec<AttServiceMessage> =
vec![AttServiceMessage::DiscoverPeers { subnet_id, min_ttl }];
vec![AttServiceMessage::DiscoverPeers(vec![SubnetDiscovery {
subnet_id,
min_ttl,
}])];
let events = get_events(attestation_service, no_events_expected, 5).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant
},
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
[AttServiceMessage::DiscoverPeers(_), AttServiceMessage::Subscribe(_any2), AttServiceMessage::EnrAdd(_any3)]
);
// if there are fewer events than expected, there's been a collision
if events.len() == no_events_expected {
@@ -553,7 +517,7 @@ mod tests {
.now()
.expect("Could not get current slot");
let subscriptions = _get_subscriptions(
let subscriptions = get_subscriptions(
subscription_count,
current_slot + subscription_slot,
committee_count,
@@ -572,10 +536,9 @@ mod tests {
for event in events {
match event {
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant,
} => discover_peer_count = discover_peer_count + 1,
AttServiceMessage::DiscoverPeers(_) => {
discover_peer_count = discover_peer_count + 1
}
AttServiceMessage::Subscribe(_any_subnet) => subscribe_count = subscribe_count + 1,
AttServiceMessage::EnrAdd(_any_subnet) => enr_add_count = enr_add_count + 1,
_ => unexpected_msg_count = unexpected_msg_count + 1,
@@ -605,7 +568,7 @@ mod tests {
.now()
.expect("Could not get current slot");
let subscriptions = _get_subscriptions(
let subscriptions = get_subscriptions(
subscription_count,
current_slot + subscription_slot,
committee_count,
@@ -624,10 +587,9 @@ mod tests {
for event in events {
match event {
AttServiceMessage::DiscoverPeers {
subnet_id: _any_subnet,
min_ttl: _any_instant,
} => discover_peer_count = discover_peer_count + 1,
AttServiceMessage::DiscoverPeers(_) => {
discover_peer_count = discover_peer_count + 1
}
AttServiceMessage::Subscribe(_any_subnet) => subscribe_count = subscribe_count + 1,
AttServiceMessage::EnrAdd(_any_subnet) => enr_add_count = enr_add_count + 1,
_ => unexpected_msg_count = unexpected_msg_count + 1,
@@ -639,4 +601,40 @@ mod tests {
assert_eq!(enr_add_count, 64);
assert_eq!(unexpected_msg_count, 0);
}
#[tokio::test]
async fn test_discovery_peers_count() {
let subscription_slot = 10;
let validator_count = 32;
let committee_count = 1;
let expected_events = 97;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = get_subscriptions(
validator_count,
current_slot + subscription_slot,
committee_count,
);
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
let events = get_events(attestation_service, expected_events, 3).await;
let event = events.get(96);
if let Some(AttServiceMessage::DiscoverPeers(d)) = event {
assert_eq!(d.len(), validator_count as usize);
} else {
panic!("Unexpected event {:?}", event);
}
}
}

View File

@@ -1,21 +1,21 @@
use crate::metrics;
use crate::router::processor::FUTURE_SLOT_TOLERANCE;
use crate::sync::manager::SyncMessage;
use crate::sync::{BatchId, BatchProcessResult, ChainId};
use crate::sync::{BatchProcessResult, ChainId};
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockError, ChainSegmentResult};
use eth2_libp2p::PeerId;
use slog::{debug, error, trace, warn};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::{EthSpec, SignedBeaconBlock};
use types::{Epoch, EthSpec, Hash256, SignedBeaconBlock};
/// Id associated to a block processing request, either a batch or a single block.
#[derive(Clone, Debug, PartialEq)]
pub enum ProcessId {
/// Processing Id of a range syncing batch.
RangeBatchId(ChainId, BatchId),
/// Processing Id of the parent lookup of a block
ParentLookup(PeerId),
RangeBatchId(ChainId, Epoch),
/// Processing Id of the parent lookup of a block.
ParentLookup(PeerId, Hash256),
}
pub fn handle_chain_segment<T: BeaconChainTypes>(
@@ -27,7 +27,7 @@ pub fn handle_chain_segment<T: BeaconChainTypes>(
) {
match process_id {
// this a request from the range sync
ProcessId::RangeBatchId(chain_id, batch_id) => {
ProcessId::RangeBatchId(chain_id, epoch) => {
let len = downloaded_blocks.len();
let start_slot = if len > 0 {
downloaded_blocks[0].message.slot.as_u64()
@@ -40,26 +40,26 @@ pub fn handle_chain_segment<T: BeaconChainTypes>(
0
};
debug!(log, "Processing batch"; "id" => *batch_id, "blocks" => downloaded_blocks.len(), "start_slot" => start_slot, "end_slot" => end_slot);
debug!(log, "Processing batch"; "batch_epoch" => epoch, "blocks" => downloaded_blocks.len(), "first_block_slot" => start_slot, "last_block_slot" => end_slot, "service" => "sync");
let result = match process_blocks(chain, downloaded_blocks.iter(), &log) {
(_, Ok(_)) => {
debug!(log, "Batch processed"; "id" => *batch_id , "start_slot" => start_slot, "end_slot" => end_slot);
debug!(log, "Batch processed"; "batch_epoch" => epoch , "first_block_slot" => start_slot, "last_block_slot" => end_slot, "service"=> "sync");
BatchProcessResult::Success
}
(imported_blocks, Err(e)) if imported_blocks > 0 => {
debug!(log, "Batch processing failed but imported some blocks";
"id" => *batch_id, "error" => e, "imported_blocks"=> imported_blocks);
"batch_epoch" => epoch, "error" => e, "imported_blocks"=> imported_blocks, "service" => "sync");
BatchProcessResult::Partial
}
(_, Err(e)) => {
debug!(log, "Batch processing failed"; "id" => *batch_id, "error" => e);
debug!(log, "Batch processing failed"; "batch_epoch" => epoch, "error" => e, "service" => "sync");
BatchProcessResult::Failed
}
};
let msg = SyncMessage::BatchProcessed {
chain_id,
batch_id,
epoch,
downloaded_blocks,
result,
};
@@ -71,7 +71,7 @@ pub fn handle_chain_segment<T: BeaconChainTypes>(
});
}
// this a parent lookup request from the sync manager
ProcessId::ParentLookup(peer_id) => {
ProcessId::ParentLookup(peer_id, chain_head) => {
debug!(
log, "Processing parent lookup";
"last_peer_id" => format!("{}", peer_id),
@@ -81,9 +81,9 @@ pub fn handle_chain_segment<T: BeaconChainTypes>(
// reverse
match process_blocks(chain, downloaded_blocks.iter().rev(), &log) {
(_, Err(e)) => {
warn!(log, "Parent lookup failed"; "last_peer_id" => format!("{}", peer_id), "error" => e);
debug!(log, "Parent lookup failed"; "last_peer_id" => format!("{}", peer_id), "error" => e);
sync_send
.send(SyncMessage::ParentLookupFailed(peer_id))
.send(SyncMessage::ParentLookupFailed{peer_id, chain_head})
.unwrap_or_else(|_| {
// on failure, inform to downvote the peer
debug!(

View File

@@ -36,22 +36,22 @@
//! task.
use crate::{metrics, service::NetworkMessage, sync::SyncMessage};
use beacon_chain::{
attestation_verification::Error as AttnError, BeaconChain, BeaconChainError, BeaconChainTypes,
BlockError, ForkChoiceError,
};
use chain_segment::handle_chain_segment;
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockError};
use environment::TaskExecutor;
use eth2_libp2p::{MessageId, NetworkGlobals, PeerId};
use slog::{crit, debug, error, info, trace, warn, Logger};
use ssz::Encode;
use slog::{crit, debug, error, trace, warn, Logger};
use std::collections::VecDeque;
use std::sync::{Arc, Weak};
use std::time::{Duration, Instant};
use tokio::sync::{mpsc, oneshot};
use types::{Attestation, EthSpec, Hash256, SignedAggregateAndProof, SignedBeaconBlock, SubnetId};
use types::{
Attestation, AttesterSlashing, EthSpec, Hash256, ProposerSlashing, SignedAggregateAndProof,
SignedBeaconBlock, SignedVoluntaryExit, SubnetId,
};
use worker::Worker;
mod chain_segment;
mod worker;
pub use chain_segment::ProcessId;
@@ -78,6 +78,18 @@ const MAX_AGGREGATED_ATTESTATION_QUEUE_LEN: usize = 1_024;
/// before we start dropping them.
const MAX_GOSSIP_BLOCK_QUEUE_LEN: usize = 1_024;
/// The maximum number of queued `SignedVoluntaryExit` objects received on gossip that will be stored
/// before we start dropping them.
const MAX_GOSSIP_EXIT_QUEUE_LEN: usize = 4_096;
/// The maximum number of queued `ProposerSlashing` objects received on gossip that will be stored
/// before we start dropping them.
const MAX_GOSSIP_PROPOSER_SLASHING_QUEUE_LEN: usize = 4_096;
/// The maximum number of queued `AttesterSlashing` objects received on gossip that will be stored
/// before we start dropping them.
const MAX_GOSSIP_ATTESTER_SLASHING_QUEUE_LEN: usize = 4_096;
/// The maximum number of queued `SignedBeaconBlock` objects received from the network RPC that
/// will be stored before we start dropping them.
const MAX_RPC_BLOCK_QUEUE_LEN: usize = 1_024;
@@ -242,6 +254,54 @@ impl<E: EthSpec> WorkEvent<E> {
}
}
/// Create a new `Work` event for some exit.
pub fn gossip_voluntary_exit(
message_id: MessageId,
peer_id: PeerId,
voluntary_exit: Box<SignedVoluntaryExit>,
) -> Self {
Self {
drop_during_sync: false,
work: Work::GossipVoluntaryExit {
message_id,
peer_id,
voluntary_exit,
},
}
}
/// Create a new `Work` event for some proposer slashing.
pub fn gossip_proposer_slashing(
message_id: MessageId,
peer_id: PeerId,
proposer_slashing: Box<ProposerSlashing>,
) -> Self {
Self {
drop_during_sync: false,
work: Work::GossipProposerSlashing {
message_id,
peer_id,
proposer_slashing,
},
}
}
/// Create a new `Work` event for some attester slashing.
pub fn gossip_attester_slashing(
message_id: MessageId,
peer_id: PeerId,
attester_slashing: Box<AttesterSlashing<E>>,
) -> Self {
Self {
drop_during_sync: false,
work: Work::GossipAttesterSlashing {
message_id,
peer_id,
attester_slashing,
},
}
}
/// Create a new `Work` event for some block, where the result from computation (if any) is
/// sent to the other side of `result_tx`.
pub fn rpc_beacon_block(block: Box<SignedBeaconBlock<E>>) -> (Self, BlockResultReceiver<E>) {
@@ -282,6 +342,21 @@ pub enum Work<E: EthSpec> {
peer_id: PeerId,
block: Box<SignedBeaconBlock<E>>,
},
GossipVoluntaryExit {
message_id: MessageId,
peer_id: PeerId,
voluntary_exit: Box<SignedVoluntaryExit>,
},
GossipProposerSlashing {
message_id: MessageId,
peer_id: PeerId,
proposer_slashing: Box<ProposerSlashing>,
},
GossipAttesterSlashing {
message_id: MessageId,
peer_id: PeerId,
attester_slashing: Box<AttesterSlashing<E>>,
},
RpcBlock {
block: Box<SignedBeaconBlock<E>>,
result_tx: BlockResultSender<E>,
@@ -299,6 +374,9 @@ impl<E: EthSpec> Work<E> {
Work::GossipAttestation { .. } => "gossip_attestation",
Work::GossipAggregate { .. } => "gossip_aggregate",
Work::GossipBlock { .. } => "gossip_block",
Work::GossipVoluntaryExit { .. } => "gossip_voluntary_exit",
Work::GossipProposerSlashing { .. } => "gossip_proposer_slashing",
Work::GossipAttesterSlashing { .. } => "gossip_attester_slashing",
Work::RpcBlock { .. } => "rpc_block",
Work::ChainSegment { .. } => "chain_segment",
}
@@ -351,21 +429,33 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
pub fn spawn_manager(mut self, mut event_rx: mpsc::Receiver<WorkEvent<T::EthSpec>>) {
let (idle_tx, mut idle_rx) = mpsc::channel::<()>(MAX_IDLE_QUEUE_LEN);
// Using LIFO queues for attestations since validator profits rely upon getting fresh
// attestations into blocks. Additionally, later attestations contain more information than
// earlier ones, so we consider them more valuable.
let mut aggregate_queue = LifoQueue::new(MAX_AGGREGATED_ATTESTATION_QUEUE_LEN);
let mut aggregate_debounce = TimeLatch::default();
let mut attestation_queue = LifoQueue::new(MAX_UNAGGREGATED_ATTESTATION_QUEUE_LEN);
let mut attestation_debounce = TimeLatch::default();
let mut gossip_block_queue = FifoQueue::new(MAX_GOSSIP_BLOCK_QUEUE_LEN);
// Using a FIFO queue for voluntary exits since it prevents exit censoring. I don't have
// a strong feeling about queue type for exits.
let mut gossip_voluntary_exit_queue = FifoQueue::new(MAX_GOSSIP_EXIT_QUEUE_LEN);
// Using FIFO queues for slashings so that an attacker cannot flush legitimate slashings
// out of the queues with a stream of junk messages.
let mut gossip_proposer_slashing_queue =
FifoQueue::new(MAX_GOSSIP_PROPOSER_SLASHING_QUEUE_LEN);
let mut gossip_attester_slashing_queue =
FifoQueue::new(MAX_GOSSIP_ATTESTER_SLASHING_QUEUE_LEN);
// Using a FIFO queue since blocks need to be imported sequentially.
let mut rpc_block_queue = FifoQueue::new(MAX_RPC_BLOCK_QUEUE_LEN);
let mut chain_segment_queue = FifoQueue::new(MAX_CHAIN_SEGMENT_QUEUE_LEN);
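The queue disciplines described in the comments above can be sketched with bounded wrappers around `VecDeque`. This is a minimal sketch only; the real `LifoQueue`/`FifoQueue` types are defined elsewhere in this crate and may differ (e.g. in how they report or handle a full queue):

```rust
use std::collections::VecDeque;

/// Bounded LIFO: the most recently pushed item is popped first, so the
/// freshest attestations are processed first; the oldest item is evicted
/// when full (hypothetical eviction policy for this sketch).
struct LifoQueue<T> { deque: VecDeque<T>, cap: usize }

impl<T> LifoQueue<T> {
    fn new(cap: usize) -> Self { Self { deque: VecDeque::new(), cap } }
    fn push(&mut self, item: T) {
        if self.deque.len() == self.cap { self.deque.pop_back(); }
        self.deque.push_front(item);
    }
    fn pop(&mut self) -> Option<T> { self.deque.pop_front() }
}

/// Bounded FIFO: first in, first out, so early exits/slashings cannot be
/// flushed out by later junk messages; new items are dropped when full.
struct FifoQueue<T> { deque: VecDeque<T>, cap: usize }

impl<T> FifoQueue<T> {
    fn new(cap: usize) -> Self { Self { deque: VecDeque::new(), cap } }
    fn push(&mut self, item: T) {
        if self.deque.len() < self.cap { self.deque.push_back(item); }
    }
    fn pop(&mut self) -> Option<T> { self.deque.pop_front() }
}

fn main() {
    let mut lifo = LifoQueue::new(2);
    lifo.push(1); lifo.push(2); lifo.push(3); // 1 is evicted
    assert_eq!(lifo.pop(), Some(3)); // freshest first

    let mut fifo = FifoQueue::new(2);
    fifo.push(1); fifo.push(2); fifo.push(3); // 3 is dropped
    assert_eq!(fifo.pop(), Some(1)); // oldest first
}
```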
let executor = self.executor.clone();
// The manager future will run on the core executor and delegate tasks to worker
// threads on the blocking executor.
let manager_future = async move {
loop {
@@ -452,6 +542,18 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
self.spawn_worker(idle_tx.clone(), item);
} else if let Some(item) = attestation_queue.pop() {
self.spawn_worker(idle_tx.clone(), item);
// Check slashings after all other consensus messages so we prioritize
// following the head.
//
// Check attester slashings before proposer slashings since they have the
// potential to slash multiple validators at once.
} else if let Some(item) = gossip_attester_slashing_queue.pop() {
self.spawn_worker(idle_tx.clone(), item);
} else if let Some(item) = gossip_proposer_slashing_queue.pop() {
self.spawn_worker(idle_tx.clone(), item);
// Check exits last since our validators don't get rewards from them.
} else if let Some(item) = gossip_voluntary_exit_queue.pop() {
self.spawn_worker(idle_tx.clone(), item);
}
}
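The drain order above can be sketched as a chained lookup, assuming each queue exposes a `pop` returning `Option` (queue names and string payloads here are hypothetical stand-ins for the real work items):

```rust
use std::collections::VecDeque;

// Miniature of the priority chain: attestations before slashings, attester
// slashings before proposer slashings, voluntary exits last.
fn next_work(
    attestations: &mut VecDeque<&'static str>,
    attester_slashings: &mut VecDeque<&'static str>,
    proposer_slashings: &mut VecDeque<&'static str>,
    voluntary_exits: &mut VecDeque<&'static str>,
) -> Option<&'static str> {
    attestations
        .pop_front()
        .or_else(|| attester_slashings.pop_front())
        .or_else(|| proposer_slashings.pop_front())
        .or_else(|| voluntary_exits.pop_front())
}

fn main() {
    let mut att = VecDeque::from(vec!["attestation"]);
    let mut asl = VecDeque::from(vec!["attester_slashing"]);
    let mut psl = VecDeque::from(vec!["proposer_slashing"]);
    let mut ext = VecDeque::from(vec!["voluntary_exit"]);
    // Each call drains the highest-priority non-empty queue.
    assert_eq!(next_work(&mut att, &mut asl, &mut psl, &mut ext), Some("attestation"));
    assert_eq!(next_work(&mut att, &mut asl, &mut psl, &mut ext), Some("attester_slashing"));
    assert_eq!(next_work(&mut att, &mut asl, &mut psl, &mut ext), Some("proposer_slashing"));
    assert_eq!(next_work(&mut att, &mut asl, &mut psl, &mut ext), Some("voluntary_exit"));
}
```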
// There is no new work event and we are unable to spawn a new worker.
@@ -491,6 +593,15 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
Work::GossipBlock { .. } => {
gossip_block_queue.push(work, work_id, &self.log)
}
Work::GossipVoluntaryExit { .. } => {
gossip_voluntary_exit_queue.push(work, work_id, &self.log)
}
Work::GossipProposerSlashing { .. } => {
gossip_proposer_slashing_queue.push(work, work_id, &self.log)
}
Work::GossipAttesterSlashing { .. } => {
gossip_attester_slashing_queue.push(work, work_id, &self.log)
}
Work::RpcBlock { .. } => rpc_block_queue.push(work, work_id, &self.log),
Work::ChainSegment { .. } => {
chain_segment_queue.push(work, work_id, &self.log)
@@ -523,6 +634,18 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
&metrics::BEACON_PROCESSOR_CHAIN_SEGMENT_QUEUE_TOTAL,
chain_segment_queue.len() as i64,
);
metrics::set_gauge(
&metrics::BEACON_PROCESSOR_EXIT_QUEUE_TOTAL,
gossip_voluntary_exit_queue.len() as i64,
);
metrics::set_gauge(
&metrics::BEACON_PROCESSOR_PROPOSER_SLASHING_QUEUE_TOTAL,
gossip_proposer_slashing_queue.len() as i64,
);
metrics::set_gauge(
&metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_QUEUE_TOTAL,
gossip_attester_slashing_queue.len() as i64,
);
if aggregate_queue.is_full() && aggregate_debounce.elapsed() {
error!(
@@ -544,7 +667,7 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
}
};
// Spawn on the core executor.
executor.spawn(manager_future, MANAGER_TASK_NAME);
}
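The manager/worker handshake via `idle_tx` can be sketched with plain std threads and channels (a simplified stand-in for the tokio executors used here; the capacity and work items are hypothetical):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Workers send `()` on `idle_tx` when they finish, letting the manager
    // know a worker slot has freed up before it dequeues more work.
    let (idle_tx, idle_rx) = mpsc::channel::<()>();
    let max_workers = 2;
    let mut current_workers = 0;
    let mut queue: Vec<i32> = vec![1, 2, 3, 4];
    let mut processed = 0;

    while processed < 4 {
        // Spawn workers while there is both spare capacity and queued work.
        while current_workers < max_workers {
            match queue.pop() {
                Some(item) => {
                    let tx = idle_tx.clone();
                    current_workers += 1;
                    thread::spawn(move || {
                        let _ = item * 2; // stand-in for block/attestation processing
                        tx.send(()).expect("manager holds the receiver");
                    });
                }
                None => break,
            }
        }
        // Block until some worker signals that it has gone idle.
        idle_rx.recv().expect("a worker sender is alive");
        current_workers -= 1;
        processed += 1;
    }
    assert_eq!(processed, 4);
}
```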
@@ -574,11 +697,16 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
return;
};
let network_tx = self.network_tx.clone();
let sync_tx = self.sync_tx.clone();
let log = self.log.clone();
let executor = self.executor.clone();
let worker = Worker {
chain,
network_tx: self.network_tx.clone(),
sync_tx: self.sync_tx.clone(),
log: self.log.clone(),
};
trace!(
self.log,
"Spawning beacon processor worker";
@@ -589,291 +717,85 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
executor.spawn_blocking(
move || {
let _worker_timer = worker_timer;
let inner_log = log.clone();
// We use this closure pattern so that an early `return` inside the handler
// cannot prevent the `idle_tx` message from being sent.
let handler = || {
let log = inner_log.clone();
match work {
/*
* Unaggregated attestation verification.
*/
Work::GossipAttestation {
message_id,
peer_id,
attestation,
subnet_id,
should_import,
} => {
let beacon_block_root = attestation.data.beacon_block_root;
let attestation = match chain
.verify_unaggregated_attestation_for_gossip(*attestation, subnet_id)
{
Ok(attestation) => attestation,
Err(e) => {
handle_attestation_verification_failure(
&log,
sync_tx,
peer_id,
beacon_block_root,
"unaggregated",
e,
);
return;
}
};
// Indicate to the `Network` service that this message is valid and can be
// propagated on the gossip network.
propagate_gossip_message(network_tx, message_id, peer_id.clone(), &log);
if !should_import {
return;
}
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_UNAGGREGATED_ATTESTATION_VERIFIED_TOTAL,
);
if let Err(e) = chain.apply_attestation_to_fork_choice(&attestation) {
match e {
BeaconChainError::ForkChoiceError(
ForkChoiceError::InvalidAttestation(e),
) => debug!(
log,
"Attestation invalid for fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
),
e => error!(
log,
"Error applying attestation to fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
),
}
}
if let Err(e) = chain.add_to_naive_aggregation_pool(attestation) {
debug!(
log,
"Attestation invalid for agg pool";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_UNAGGREGATED_ATTESTATION_IMPORTED_TOTAL,
);
}
/*
* Aggregated attestation verification.
*/
Work::GossipAggregate {
message_id,
peer_id,
aggregate,
} => {
let beacon_block_root =
aggregate.message.aggregate.data.beacon_block_root;
let aggregate =
match chain.verify_aggregated_attestation_for_gossip(*aggregate) {
Ok(aggregate) => aggregate,
Err(e) => {
handle_attestation_verification_failure(
&log,
sync_tx,
peer_id,
beacon_block_root,
"aggregated",
e,
);
return;
}
};
// Indicate to the `Network` service that this message is valid and can be
// propagated on the gossip network.
propagate_gossip_message(network_tx, message_id, peer_id.clone(), &log);
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_AGGREGATED_ATTESTATION_VERIFIED_TOTAL,
);
if let Err(e) = chain.apply_attestation_to_fork_choice(&aggregate) {
match e {
BeaconChainError::ForkChoiceError(
ForkChoiceError::InvalidAttestation(e),
) => debug!(
log,
"Aggregate invalid for fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
),
e => error!(
log,
"Error applying aggregate to fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
),
}
}
if let Err(e) = chain.add_to_block_inclusion_pool(aggregate) {
debug!(
log,
"Attestation invalid for op pool";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_AGGREGATED_ATTESTATION_IMPORTED_TOTAL,
);
}
/*
* Verification for beacon blocks received on gossip.
*/
Work::GossipBlock {
message_id,
peer_id,
block,
} => {
let verified_block = match chain.verify_block_for_gossip(*block) {
Ok(verified_block) => {
info!(
log,
"New block received";
"slot" => verified_block.block.slot(),
"hash" => verified_block.block_root.to_string()
);
propagate_gossip_message(
network_tx,
message_id,
peer_id.clone(),
&log,
);
verified_block
}
Err(BlockError::ParentUnknown(block)) => {
send_sync_message(
sync_tx,
SyncMessage::UnknownBlock(peer_id, block),
&log,
);
return;
}
Err(e) => {
warn!(
log,
"Could not verify block for gossip";
"error" => format!("{:?}", e)
);
return;
}
};
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_GOSSIP_BLOCK_VERIFIED_TOTAL,
);
let block = Box::new(verified_block.block.clone());
match chain.process_block(verified_block) {
Ok(_block_root) => {
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_GOSSIP_BLOCK_IMPORTED_TOTAL,
);
trace!(
log,
"Gossipsub block processed";
"peer_id" => peer_id.to_string()
);
// TODO: It would be better if we could run this _after_ we publish the block to
// reduce block propagation latency.
//
// The `MessageHandler` would be the place to put this, however it doesn't seem
// to have a reference to the `BeaconChain`. I will leave this for future
// work.
match chain.fork_choice() {
Ok(()) => trace!(
log,
"Fork choice success";
"location" => "block gossip"
),
Err(e) => error!(
log,
"Fork choice failed";
"error" => format!("{:?}", e),
"location" => "block gossip"
),
}
}
Err(BlockError::ParentUnknown { .. }) => {
// Inform the sync manager to find parents for this block
// This should not occur. It should be checked by `should_forward_block`
error!(
log,
"Block with unknown parent attempted to be processed";
"peer_id" => peer_id.to_string()
);
send_sync_message(
sync_tx,
SyncMessage::UnknownBlock(peer_id, block),
&log,
);
}
other => {
debug!(
log,
"Invalid gossip beacon block";
"outcome" => format!("{:?}", other),
"block root" => format!("{}", block.canonical_root()),
"block slot" => block.slot()
);
trace!(
log,
"Invalid gossip beacon block ssz";
"ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
);
}
};
}
/*
* Verification for beacon blocks received during syncing via RPC.
*/
Work::RpcBlock { block, result_tx } => {
let block_result = chain.process_block(*block);
metrics::inc_counter(
&metrics::BEACON_PROCESSOR_RPC_BLOCK_IMPORTED_TOTAL,
);
if result_tx.send(block_result).is_err() {
crit!(log, "Failed to return sync block result");
}
}
/*
* Verification for a chain segment (multiple blocks).
*/
Work::ChainSegment { process_id, blocks } => {
handle_chain_segment(chain, process_id, blocks, sync_tx, log)
}
};
match work {
/*
* Unaggregated attestation verification.
*/
Work::GossipAttestation {
message_id,
peer_id,
attestation,
subnet_id,
should_import,
} => worker.process_gossip_attestation(
message_id,
peer_id,
*attestation,
subnet_id,
should_import,
),
/*
* Aggregated attestation verification.
*/
Work::GossipAggregate {
message_id,
peer_id,
aggregate,
} => worker.process_gossip_aggregate(message_id, peer_id, *aggregate),
/*
* Verification for beacon blocks received on gossip.
*/
Work::GossipBlock {
message_id,
peer_id,
block,
} => worker.process_gossip_block(message_id, peer_id, *block),
/*
* Voluntary exits received on gossip.
*/
Work::GossipVoluntaryExit {
message_id,
peer_id,
voluntary_exit,
} => worker.process_gossip_voluntary_exit(message_id, peer_id, *voluntary_exit),
/*
* Proposer slashings received on gossip.
*/
Work::GossipProposerSlashing {
message_id,
peer_id,
proposer_slashing,
} => worker.process_gossip_proposer_slashing(
message_id,
peer_id,
*proposer_slashing,
),
/*
* Attester slashings received on gossip.
*/
Work::GossipAttesterSlashing {
message_id,
peer_id,
attester_slashing,
} => worker.process_gossip_attester_slashing(
message_id,
peer_id,
*attester_slashing,
),
/*
* Verification for beacon blocks received during syncing via RPC.
*/
Work::RpcBlock { block, result_tx } => {
worker.process_rpc_block(*block, result_tx)
}
/*
* Verification for a chain segment (multiple blocks).
*/
Work::ChainSegment { process_id, blocks } => {
worker.process_chain_segment(process_id, blocks)
}
};
handler();
trace!(
log,
@@ -895,300 +817,3 @@ impl<T: BeaconChainTypes> BeaconProcessor<T> {
);
}
}
/// Send a message on `network_tx` indicating that the `message_id` sent by `peer_id` should be
/// propagated on the gossip network.
///
/// Creates a log if there is an internal error.
fn propagate_gossip_message<E: EthSpec>(
network_tx: mpsc::UnboundedSender<NetworkMessage<E>>,
message_id: MessageId,
peer_id: PeerId,
log: &Logger,
) {
network_tx
.send(NetworkMessage::Validate {
propagation_source: peer_id,
message_id,
})
.unwrap_or_else(|_| {
warn!(
log,
"Could not send propagation request to the network service"
)
});
}
/// Send a message to `sync_tx`.
///
/// Creates a log if there is an internal error.
fn send_sync_message<E: EthSpec>(
sync_tx: mpsc::UnboundedSender<SyncMessage<E>>,
message: SyncMessage<E>,
log: &Logger,
) {
sync_tx
.send(message)
.unwrap_or_else(|_| error!(log, "Could not send message to the sync service"));
}
/// Handle an error whilst verifying an `Attestation` or `SignedAggregateAndProof` from the
/// network.
pub fn handle_attestation_verification_failure<E: EthSpec>(
log: &Logger,
sync_tx: mpsc::UnboundedSender<SyncMessage<E>>,
peer_id: PeerId,
beacon_block_root: Hash256,
attestation_type: &str,
error: AttnError,
) {
metrics::register_attestation_error(&error);
match &error {
AttnError::FutureEpoch { .. }
| AttnError::PastEpoch { .. }
| AttnError::FutureSlot { .. }
| AttnError::PastSlot { .. } => {
/*
* These errors can be triggered by a clock mismatch between us and the peer.
*
* The peer has published an invalid consensus message _only_ if we trust our own clock.
*/
}
AttnError::InvalidSelectionProof { .. } | AttnError::InvalidSignature => {
/*
* These errors are caused by invalid signatures.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::EmptyAggregationBitfield => {
/*
* The aggregate had no signatures and is therefore worthless.
*
* Whilst we don't gossip this attestation, this act is **not** a clear
* violation of the spec nor indication of fault.
*
* This may change soon. Reference:
*
* https://github.com/ethereum/eth2.0-specs/pull/1732
*/
}
AttnError::AggregatorPubkeyUnknown(_) => {
/*
* The aggregator index was higher than any known validator index. This is
* possible in two cases:
*
* 1. The attestation is malformed
* 2. The attestation attests to a beacon_block_root that we do not know.
*
* It should be impossible to reach (2) without triggering
* `AttnError::UnknownHeadBlock`, so we can safely assume the peer is
* faulty.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AggregatorNotInCommittee { .. } => {
/*
* The aggregator is not a member of the committee for the attestation it
* has aggregated.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AttestationAlreadyKnown { .. } => {
/*
* The aggregate attestation has already been observed on the network or in
* a block.
*
* The peer is not necessarily faulty.
*/
trace!(
log,
"Attestation already known";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
return;
}
AttnError::AggregatorAlreadyKnown(_) => {
/*
* There has already been an aggregate attestation seen from this
* aggregator index.
*
* The peer is not necessarily faulty.
*/
trace!(
log,
"Aggregator already known";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
return;
}
AttnError::PriorAttestationKnown { .. } => {
/*
* We have already seen an attestation from this validator for this epoch.
*
* The peer is not necessarily faulty.
*/
trace!(
log,
"Prior attestation known";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
return;
}
AttnError::ValidatorIndexTooHigh(_) => {
/*
* The aggregator index (or similar field) was higher than the maximum
* possible number of validators.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::UnknownHeadBlock { beacon_block_root } => {
// Note: it's a little unclear whether this block is unknown or just old. See:
//
// https://github.com/sigp/lighthouse/issues/1039
// TODO: Maintain this attestation and re-process once sync completes
debug!(
log,
"Attestation for unknown block";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root)
);
// we don't know the block, get the sync manager to handle the block lookup
sync_tx
.send(SyncMessage::UnknownBlockHash(peer_id, *beacon_block_root))
.unwrap_or_else(|_| {
warn!(
log,
"Failed to send to sync service";
"msg" => "UnknownBlockHash"
)
});
return;
}
AttnError::UnknownTargetRoot(_) => {
/*
* The block indicated by the target root is not known to us.
*
* We should always get `AttnError::UnknownHeadBlock` before we get this
* error, so this means we can get this error if:
*
* 1. The target root does not represent a valid block.
* 2. We do not have the target root in our DB.
*
* For (2), we should only be processing attestations once we have all the
* available information. Note: if we do a weak-subjectivity sync it's
* possible that this situation could occur, but I think it's unlikely.
* For now, we will declare this to be an invalid message.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::BadTargetEpoch => {
/*
* The attestation's target epoch did not match the epoch of the
* attestation's slot.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::NoCommitteeForSlotAndIndex { .. } => {
/*
* It is not possible to attest to the given committee in the given slot.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::NotExactlyOneAggregationBitSet(_) => {
/*
* The unaggregated attestation does not have exactly one aggregation bit set.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AttestsToFutureBlock { .. } => {
/*
* The beacon_block_root is from a higher slot than the attestation.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::InvalidSubnetId { received, expected } => {
/*
* The attestation was received on an incorrect subnet id.
*/
debug!(
log,
"Received attestation on incorrect subnet";
"expected" => format!("{:?}", expected),
"received" => format!("{:?}", received),
)
}
AttnError::Invalid(_) => {
/*
* The attestation failed the state_processing verification.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::TooManySkippedSlots {
head_block_slot,
attestation_slot,
} => {
/*
* The attestation references a head block that is too far behind the attestation slot.
*
* The message is not necessarily invalid, but we choose to ignore it.
*/
debug!(
log,
"Rejected long skip slot attestation";
"head_block_slot" => head_block_slot,
"attestation_slot" => attestation_slot,
)
}
AttnError::BeaconChainError(e) => {
/*
* Lighthouse hit an unexpected error whilst processing the attestation. It
* should be impossible to trigger a `BeaconChainError` from the network,
* so we have a bug.
*
* It's not clear if the message is invalid/malicious.
*/
error!(
log,
"Unable to validate aggregate";
"peer_id" => peer_id.to_string(),
"error" => format!("{:?}", e),
);
}
}
debug!(
log,
"Invalid attestation from network";
"reason" => format!("{:?}", error),
"block" => format!("{}", beacon_block_root),
"peer_id" => peer_id.to_string(),
"type" => format!("{:?}", attestation_type),
);
}


@@ -0,0 +1,726 @@
use super::{
chain_segment::{handle_chain_segment, ProcessId},
BlockResultSender,
};
use crate::{metrics, service::NetworkMessage, sync::SyncMessage};
use beacon_chain::{
attestation_verification::Error as AttnError, observed_operations::ObservationOutcome,
BeaconChain, BeaconChainError, BeaconChainTypes, BlockError, ForkChoiceError,
};
use eth2_libp2p::{MessageId, PeerId};
use slog::{crit, debug, error, info, trace, warn, Logger};
use ssz::Encode;
use std::sync::Arc;
use tokio::sync::mpsc;
use types::{
Attestation, AttesterSlashing, Hash256, ProposerSlashing, SignedAggregateAndProof,
SignedBeaconBlock, SignedVoluntaryExit, SubnetId,
};
/// Contains the context necessary to import blocks, attestations, etc to the beacon chain.
pub struct Worker<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
pub network_tx: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
pub sync_tx: mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
pub log: Logger,
}
impl<T: BeaconChainTypes> Worker<T> {
/// Process the unaggregated attestation received from the gossip network and:
///
/// - If it passes gossip propagation criteria, tell the network thread to forward it.
/// - Attempt to apply it to fork choice.
/// - Attempt to add it to the naive aggregation pool.
///
/// Raises a log if there are errors.
pub fn process_gossip_attestation(
self,
message_id: MessageId,
peer_id: PeerId,
attestation: Attestation<T::EthSpec>,
subnet_id: SubnetId,
should_import: bool,
) {
let beacon_block_root = attestation.data.beacon_block_root;
let attestation = match self
.chain
.verify_unaggregated_attestation_for_gossip(attestation, subnet_id)
{
Ok(attestation) => attestation,
Err(e) => {
self.handle_attestation_verification_failure(
peer_id,
beacon_block_root,
"unaggregated",
e,
);
return;
}
};
// Indicate to the `Network` service that this message is valid and can be
// propagated on the gossip network.
self.propagate_gossip_message(message_id, peer_id.clone());
if !should_import {
return;
}
metrics::inc_counter(&metrics::BEACON_PROCESSOR_UNAGGREGATED_ATTESTATION_VERIFIED_TOTAL);
if let Err(e) = self.chain.apply_attestation_to_fork_choice(&attestation) {
match e {
BeaconChainError::ForkChoiceError(ForkChoiceError::InvalidAttestation(e)) => {
debug!(
self.log,
"Attestation invalid for fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
e => error!(
self.log,
"Error applying attestation to fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
),
}
}
if let Err(e) = self.chain.add_to_naive_aggregation_pool(attestation) {
debug!(
self.log,
"Attestation invalid for agg pool";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
metrics::inc_counter(&metrics::BEACON_PROCESSOR_UNAGGREGATED_ATTESTATION_IMPORTED_TOTAL);
}
/// Process the aggregated attestation received from the gossip network and:
///
/// - If it passes gossip propagation criteria, tell the network thread to forward it.
/// - Attempt to apply it to fork choice.
/// - Attempt to add it to the block inclusion pool.
///
/// Raises a log if there are errors.
pub fn process_gossip_aggregate(
self,
message_id: MessageId,
peer_id: PeerId,
aggregate: SignedAggregateAndProof<T::EthSpec>,
) {
let beacon_block_root = aggregate.message.aggregate.data.beacon_block_root;
let aggregate = match self
.chain
.verify_aggregated_attestation_for_gossip(aggregate)
{
Ok(aggregate) => aggregate,
Err(e) => {
self.handle_attestation_verification_failure(
peer_id,
beacon_block_root,
"aggregated",
e,
);
return;
}
};
// Indicate to the `Network` service that this message is valid and can be
// propagated on the gossip network.
self.propagate_gossip_message(message_id, peer_id.clone());
metrics::inc_counter(&metrics::BEACON_PROCESSOR_AGGREGATED_ATTESTATION_VERIFIED_TOTAL);
if let Err(e) = self.chain.apply_attestation_to_fork_choice(&aggregate) {
match e {
BeaconChainError::ForkChoiceError(ForkChoiceError::InvalidAttestation(e)) => {
debug!(
self.log,
"Aggregate invalid for fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
e => error!(
self.log,
"Error applying aggregate to fork choice";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
),
}
}
if let Err(e) = self.chain.add_to_block_inclusion_pool(aggregate) {
debug!(
self.log,
"Attestation invalid for op pool";
"reason" => format!("{:?}", e),
"peer" => peer_id.to_string(),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
metrics::inc_counter(&metrics::BEACON_PROCESSOR_AGGREGATED_ATTESTATION_IMPORTED_TOTAL);
}
/// Process the beacon block received from the gossip network and:
///
/// - If it passes gossip propagation criteria, tell the network thread to forward it.
/// - Attempt to add it to the beacon chain, informing the sync thread if more blocks need to
/// be downloaded.
///
/// Raises a log if there are errors.
pub fn process_gossip_block(
self,
message_id: MessageId,
peer_id: PeerId,
block: SignedBeaconBlock<T::EthSpec>,
) {
let verified_block = match self.chain.verify_block_for_gossip(block) {
Ok(verified_block) => {
info!(
self.log,
"New block received";
"slot" => verified_block.block.slot(),
"hash" => verified_block.block_root.to_string()
);
self.propagate_gossip_message(message_id, peer_id.clone());
verified_block
}
Err(BlockError::ParentUnknown(block)) => {
self.send_sync_message(SyncMessage::UnknownBlock(peer_id, block));
return;
}
Err(BlockError::BlockIsAlreadyKnown) => {
debug!(
self.log,
"Gossip block is already known";
);
return;
}
Err(e) => {
warn!(
self.log,
"Could not verify block for gossip";
"error" => format!("{:?}", e)
);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_GOSSIP_BLOCK_VERIFIED_TOTAL);
let block = Box::new(verified_block.block.clone());
match self.chain.process_block(verified_block) {
Ok(_block_root) => {
metrics::inc_counter(&metrics::BEACON_PROCESSOR_GOSSIP_BLOCK_IMPORTED_TOTAL);
trace!(
self.log,
"Gossipsub block processed";
"peer_id" => peer_id.to_string()
);
// TODO: It would be better if we could run this _after_ we publish the block to
// reduce block propagation latency.
//
// The `MessageHandler` would be the place to put this, however it doesn't seem
// to have a reference to the `BeaconChain`. I will leave this for future
// work.
match self.chain.fork_choice() {
Ok(()) => trace!(
self.log,
"Fork choice success";
"location" => "block gossip"
),
Err(e) => error!(
self.log,
"Fork choice failed";
"error" => format!("{:?}", e),
"location" => "block gossip"
),
}
}
Err(BlockError::ParentUnknown { .. }) => {
// Inform the sync manager to find parents for this block
// This should not occur. It should be checked by `should_forward_block`
error!(
self.log,
"Block with unknown parent attempted to be processed";
"peer_id" => peer_id.to_string()
);
self.send_sync_message(SyncMessage::UnknownBlock(peer_id, block));
}
other => {
debug!(
self.log,
"Invalid gossip beacon block";
"outcome" => format!("{:?}", other),
"block root" => format!("{}", block.canonical_root()),
"block slot" => block.slot()
);
trace!(
self.log,
"Invalid gossip beacon block ssz";
"ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
);
}
};
}
pub fn process_gossip_voluntary_exit(
self,
message_id: MessageId,
peer_id: PeerId,
voluntary_exit: SignedVoluntaryExit,
) {
let validator_index = voluntary_exit.message.validator_index;
let exit = match self.chain.verify_voluntary_exit_for_gossip(voluntary_exit) {
Ok(ObservationOutcome::New(exit)) => exit,
Ok(ObservationOutcome::AlreadyKnown) => {
debug!(
self.log,
"Dropping exit for already exiting validator";
"validator_index" => validator_index,
"peer" => peer_id.to_string()
);
return;
}
Err(e) => {
debug!(
self.log,
"Dropping invalid exit";
"validator_index" => validator_index,
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_EXIT_VERIFIED_TOTAL);
self.propagate_gossip_message(message_id, peer_id);
self.chain.import_voluntary_exit(exit);
debug!(self.log, "Successfully imported voluntary exit");
metrics::inc_counter(&metrics::BEACON_PROCESSOR_EXIT_IMPORTED_TOTAL);
}
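The `ObservationOutcome` match used by the exit and slashing handlers above can be sketched with simplified, hypothetical types; the real verification also checks signatures and beacon state, whereas this sketch only tracks which validator indices have been seen:

```rust
use std::collections::HashSet;

// Simplified stand-in: verification either yields a newly observed
// operation or reports that an equivalent one is already known.
enum ObservationOutcome<T> {
    New(T),
    AlreadyKnown,
}

struct ObservedOperations {
    seen_validator_indices: HashSet<u64>,
}

impl ObservedOperations {
    fn observe(&mut self, validator_index: u64) -> ObservationOutcome<u64> {
        // `insert` returns false if the index was already present.
        if self.seen_validator_indices.insert(validator_index) {
            ObservationOutcome::New(validator_index)
        } else {
            ObservationOutcome::AlreadyKnown
        }
    }
}

fn main() {
    let mut pool = ObservedOperations { seen_validator_indices: HashSet::new() };
    // First exit/slashing for validator 7 is new: propagate and import it.
    assert!(matches!(pool.observe(7), ObservationOutcome::New(7)));
    // A repeat for the same validator is dropped as already known.
    assert!(matches!(pool.observe(7), ObservationOutcome::AlreadyKnown));
}
```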
pub fn process_gossip_proposer_slashing(
self,
message_id: MessageId,
peer_id: PeerId,
proposer_slashing: ProposerSlashing,
) {
let validator_index = proposer_slashing.signed_header_1.message.proposer_index;
let slashing = match self
.chain
.verify_proposer_slashing_for_gossip(proposer_slashing)
{
Ok(ObservationOutcome::New(slashing)) => slashing,
Ok(ObservationOutcome::AlreadyKnown) => {
debug!(
self.log,
"Dropping proposer slashing";
"reason" => "Already seen a proposer slashing for that validator",
"validator_index" => validator_index,
"peer" => peer_id.to_string()
);
return;
}
Err(e) => {
debug!(
self.log,
"Dropping invalid proposer slashing";
"validator_index" => validator_index,
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_PROPOSER_SLASHING_VERIFIED_TOTAL);
self.propagate_gossip_message(message_id, peer_id);
self.chain.import_proposer_slashing(slashing);
debug!(self.log, "Successfully imported proposer slashing");
metrics::inc_counter(&metrics::BEACON_PROCESSOR_PROPOSER_SLASHING_IMPORTED_TOTAL);
}
pub fn process_gossip_attester_slashing(
self,
message_id: MessageId,
peer_id: PeerId,
attester_slashing: AttesterSlashing<T::EthSpec>,
) {
let slashing = match self
.chain
.verify_attester_slashing_for_gossip(attester_slashing)
{
Ok(ObservationOutcome::New(slashing)) => slashing,
Ok(ObservationOutcome::AlreadyKnown) => {
debug!(
self.log,
"Dropping attester slashing";
"reason" => "Slashings already known for all slashed validators",
"peer" => peer_id.to_string()
);
return;
}
Err(e) => {
debug!(
self.log,
"Dropping invalid attester slashing";
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_VERIFIED_TOTAL);
self.propagate_gossip_message(message_id, peer_id);
if let Err(e) = self.chain.import_attester_slashing(slashing) {
debug!(self.log, "Error importing attester slashing"; "error" => format!("{:?}", e));
metrics::inc_counter(&metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_ERROR_TOTAL);
} else {
debug!(self.log, "Successfully imported attester slashing");
metrics::inc_counter(&metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_IMPORTED_TOTAL);
}
}
/// Attempt to process a block received from a direct RPC request, returning the processing
/// result on the `result_tx` channel.
///
/// Raises a log if there are errors publishing the result to the channel.
pub fn process_rpc_block(
self,
block: SignedBeaconBlock<T::EthSpec>,
result_tx: BlockResultSender<T::EthSpec>,
) {
let block_result = self.chain.process_block(block);
metrics::inc_counter(&metrics::BEACON_PROCESSOR_RPC_BLOCK_IMPORTED_TOTAL);
if result_tx.send(block_result).is_err() {
crit!(self.log, "Failed to return sync block result");
}
}
/// Attempt to import the chain segment (`blocks`) to the beacon chain, informing the sync
/// thread if more blocks are needed to process it.
pub fn process_chain_segment(
self,
process_id: ProcessId,
blocks: Vec<SignedBeaconBlock<T::EthSpec>>,
) {
handle_chain_segment(self.chain, process_id, blocks, self.sync_tx, self.log)
}
/// Send a message on `network_tx` indicating that the `message_id` sent by `peer_id` should be
/// propagated on the gossip network.
///
/// Creates a log if there is an internal error.
fn propagate_gossip_message(&self, message_id: MessageId, peer_id: PeerId) {
self.network_tx
.send(NetworkMessage::Validate {
propagation_source: peer_id,
message_id,
})
.unwrap_or_else(|_| {
warn!(
self.log,
"Could not send propagation request to the network service"
)
});
}
/// Send a message to `sync_tx`.
///
/// Creates a log if there is an internal error.
fn send_sync_message(&self, message: SyncMessage<T::EthSpec>) {
self.sync_tx
.send(message)
.unwrap_or_else(|_| error!(self.log, "Could not send message to the sync service"));
}
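The send-and-log idiom used by these helpers can be sketched with a std channel; tokio's unbounded sender fails the same way once the receiver side has been dropped:

```rust
use std::sync::mpsc;

fn main() {
    let (sync_tx, sync_rx) = mpsc::channel::<&str>();

    // Normal case: the receiver is alive, so the send succeeds silently.
    sync_tx
        .send("UnknownBlockHash")
        .unwrap_or_else(|_| eprintln!("Could not send message to the sync service"));
    assert_eq!(sync_rx.recv(), Ok("UnknownBlockHash"));

    // Failure case: the receiver is gone. Instead of panicking, we log the
    // internal error and carry on, mirroring `send_sync_message` above.
    drop(sync_rx);
    sync_tx
        .send("UnknownBlockHash")
        .unwrap_or_else(|_| eprintln!("Could not send message to the sync service"));
}
```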
/// Handle an error whilst verifying an `Attestation` or `SignedAggregateAndProof` from the
/// network.
pub fn handle_attestation_verification_failure(
&self,
peer_id: PeerId,
beacon_block_root: Hash256,
attestation_type: &str,
error: AttnError,
) {
metrics::register_attestation_error(&error);
match &error {
AttnError::FutureEpoch { .. }
| AttnError::PastEpoch { .. }
| AttnError::FutureSlot { .. }
| AttnError::PastSlot { .. } => {
/*
* These errors can be triggered by a clock mismatch between us and the peer.
*
* The peer has published an invalid consensus message _only_ if we trust our own clock.
*/
}
AttnError::InvalidSelectionProof { .. } | AttnError::InvalidSignature => {
/*
* These errors are caused by invalid signatures.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::EmptyAggregationBitfield => {
/*
* The aggregate had no signatures and is therefore worthless.
*
* Whilst we don't gossip this attestation, this act is **not** a clear
* violation of the spec nor indication of fault.
*
* This may change soon. Reference:
*
* https://github.com/ethereum/eth2.0-specs/pull/1732
*/
}
AttnError::AggregatorPubkeyUnknown(_) => {
/*
* The aggregator index was higher than any known validator index. This is
* possible in two cases:
*
* 1. The attestation is malformed
* 2. The attestation attests to a beacon_block_root that we do not know.
*
* It should be impossible to reach (2) without triggering
* `AttnError::UnknownHeadBlock`, so we can safely assume the peer is
* faulty.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AggregatorNotInCommittee { .. } => {
/*
* The aggregator index was not in the committee for the attestation's
* slot and index, so it cannot have produced a valid selection proof.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AttestationAlreadyKnown { .. } => {
/*
* The aggregate attestation has already been observed on the network or in
* a block.
*
* The peer is not necessarily faulty.
*/
trace!(
self.log,
"Attestation already known";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
return;
}
AttnError::AggregatorAlreadyKnown(_) => {
/*
* There has already been an aggregate attestation seen from this
* aggregator index.
*
* The peer is not necessarily faulty.
*/
trace!(
self.log,
"Aggregator already known";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
return;
}
AttnError::PriorAttestationKnown { .. } => {
/*
* We have already seen an attestation from this validator for this epoch.
*
* The peer is not necessarily faulty.
*/
trace!(
self.log,
"Prior attestation known";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
return;
}
AttnError::ValidatorIndexTooHigh(_) => {
/*
* The aggregator index (or similar field) was higher than the maximum
* possible number of validators.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::UnknownHeadBlock { beacon_block_root } => {
// Note: it's a little unclear whether this block is unknown or
// just old. See:
//
// https://github.com/sigp/lighthouse/issues/1039
// TODO: Maintain this attestation and re-process once sync completes
debug!(
self.log,
"Attestation for unknown block";
"peer_id" => peer_id.to_string(),
"block" => format!("{}", beacon_block_root)
);
// We don't know the block; ask the sync manager to handle the block lookup.
self.sync_tx
.send(SyncMessage::UnknownBlockHash(peer_id, *beacon_block_root))
.unwrap_or_else(|_| {
warn!(
self.log,
"Failed to send to sync service";
"msg" => "UnknownBlockHash"
)
});
return;
}
AttnError::UnknownTargetRoot(_) => {
/*
* The block indicated by the target root is not known to us.
*
* We should always get `AttnError::UnknownHeadBlock` before we get this
* error, so we can only hit this error if:
*
* 1. The target root does not represent a valid block.
* 2. We do not have the target root in our DB.
*
* For (2), we should only be processing attestations when we have
* all the available information. Note: if we do a weak-subjectivity sync
* it's possible that this situation could occur, but I think it's
* unlikely. For now, we will declare this to be an invalid message.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::BadTargetEpoch => {
/*
* The target epoch does not match the epoch of the attestation's slot.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::NoCommitteeForSlotAndIndex { .. } => {
/*
* It is not possible to attest to the given committee in the given slot.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::NotExactlyOneAggregationBitSet(_) => {
/*
* The unaggregated attestation does not have exactly one aggregation bit set.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AttestsToFutureBlock { .. } => {
/*
* The beacon_block_root is from a higher slot than the attestation.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::InvalidSubnetId { received, expected } => {
/*
* The attestation was received on an incorrect subnet id.
*/
debug!(
self.log,
"Received attestation on incorrect subnet";
"expected" => format!("{:?}", expected),
"received" => format!("{:?}", received),
)
}
AttnError::Invalid(_) => {
/*
* The attestation failed the state_processing verification.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::TooManySkippedSlots {
head_block_slot,
attestation_slot,
} => {
/*
* The attestation references a head block that is too far behind the attestation slot.
*
* The message is not necessarily invalid, but we choose to ignore it.
*/
debug!(
self.log,
"Rejected long skip slot attestation";
"head_block_slot" => head_block_slot,
"attestation_slot" => attestation_slot,
)
}
AttnError::BeaconChainError(e) => {
/*
* Lighthouse hit an unexpected error whilst processing the attestation. It
* should be impossible to trigger a `BeaconChainError` from the network,
* so we have a bug.
*
* It's not clear if the message is invalid/malicious.
*/
error!(
self.log,
"Unable to validate aggregate";
"peer_id" => peer_id.to_string(),
"error" => format!("{:?}", e),
);
}
}
debug!(
self.log,
"Invalid attestation from network";
"reason" => format!("{:?}", error),
"block" => format!("{}", beacon_block_root),
"peer_id" => peer_id.to_string(),
"type" => format!("{:?}", attestation_type),
);
}
}
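The long `match` above boils down to a three-way triage: penalise the peer (provably invalid consensus message), silently ignore (already-known messages and clock-skew cases), or hand off to the sync service (unknown head block). A minimal std-only sketch of that triage, using a hypothetical simplified error enum rather than Lighthouse's actual `AttnError`:

```rust
/// Hypothetical, simplified stand-in for the attestation error enum.
pub enum AttnErrorKind {
    InvalidSignature,   // peer is provably faulty
    AlreadyKnown,       // harmless duplicate
    UnknownHeadBlock,   // we may just be behind; ask sync to look it up
    ClockDisparity,     // only a fault if we trust our own clock
}

/// What the handler should do with the message and the peer.
#[derive(Debug, PartialEq)]
pub enum Triage {
    PenalisePeer,
    Ignore,
    TriggerBlockLookup,
}

pub fn triage(error: &AttnErrorKind) -> Triage {
    match error {
        AttnErrorKind::InvalidSignature => Triage::PenalisePeer,
        AttnErrorKind::AlreadyKnown | AttnErrorKind::ClockDisparity => Triage::Ignore,
        AttnErrorKind::UnknownHeadBlock => Triage::TriggerBlockLookup,
    }
}
```

The real handler additionally logs every outcome and, for unknown head blocks, sends `SyncMessage::UnknownBlockHash` as shown above.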


@@ -98,6 +98,49 @@ lazy_static! {
"beacon_processor_gossip_block_imported_total",
"Total number of gossip blocks imported to fork choice, etc."
);
// Gossip Exits.
pub static ref BEACON_PROCESSOR_EXIT_QUEUE_TOTAL: Result<IntGauge> = try_create_int_gauge(
"beacon_processor_exit_queue_total",
"Count of exits from gossip waiting to be verified."
);
pub static ref BEACON_PROCESSOR_EXIT_VERIFIED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_exit_verified_total",
"Total number of voluntary exits verified for propagation."
);
pub static ref BEACON_PROCESSOR_EXIT_IMPORTED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_exit_imported_total",
"Total number of voluntary exits imported to the op pool."
);
// Gossip proposer slashings.
pub static ref BEACON_PROCESSOR_PROPOSER_SLASHING_QUEUE_TOTAL: Result<IntGauge> = try_create_int_gauge(
"beacon_processor_proposer_slashing_queue_total",
"Count of proposer slashings from gossip waiting to be verified."
);
pub static ref BEACON_PROCESSOR_PROPOSER_SLASHING_VERIFIED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_proposer_slashing_verified_total",
"Total number of proposer slashings verified for propagation."
);
pub static ref BEACON_PROCESSOR_PROPOSER_SLASHING_IMPORTED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_proposer_slashing_imported_total",
"Total number of proposer slashings imported to the op pool."
);
// Gossip attester slashings.
pub static ref BEACON_PROCESSOR_ATTESTER_SLASHING_QUEUE_TOTAL: Result<IntGauge> = try_create_int_gauge(
"beacon_processor_attester_slashing_queue_total",
"Count of attester slashings from gossip waiting to be verified."
);
pub static ref BEACON_PROCESSOR_ATTESTER_SLASHING_VERIFIED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_attester_slashing_verified_total",
"Total number of attester slashings verified for propagation."
);
pub static ref BEACON_PROCESSOR_ATTESTER_SLASHING_IMPORTED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_attester_slashing_imported_total",
"Total number of attester slashings imported to the op pool."
);
pub static ref BEACON_PROCESSOR_ATTESTER_SLASHING_ERROR_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_processor_attester_slashing_error_total",
"Total number of attester slashings that raised an error during processing."
);
// Rpc blocks.
pub static ref BEACON_PROCESSOR_RPC_BLOCK_QUEUE_TOTAL: Result<IntGauge> = try_create_int_gauge(
"beacon_processor_rpc_block_queue_total",


@@ -16,7 +16,7 @@ use eth2_libp2p::{
};
use futures::prelude::*;
use processor::Processor;
use slog::{debug, o, trace, warn};
use slog::{debug, o, trace};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::EthSpec;
@@ -26,8 +26,6 @@ use types::EthSpec;
/// passing them to the internal message processor. The message processor spawns a syncing thread
/// which manages which blocks need to be requested and processed.
pub struct Router<T: BeaconChainTypes> {
/// A channel to the network service to allow for gossip propagation.
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
/// Access to the peer db for logging.
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
/// Processes validated and decoded messages from the network. Has direct access to the
@@ -89,13 +87,12 @@ impl<T: BeaconChainTypes> Router<T> {
executor.clone(),
beacon_chain,
network_globals.clone(),
network_send.clone(),
network_send,
&log,
);
// generate the Message handler
let mut handler = Router {
network_send,
network_globals,
processor,
log: message_handler_log,
@@ -232,13 +229,7 @@ impl<T: BeaconChainTypes> Router<T> {
}
PubsubMessage::VoluntaryExit(exit) => {
debug!(self.log, "Received a voluntary exit"; "peer_id" => format!("{}", peer_id));
if let Some(verified_exit) = self
.processor
.verify_voluntary_exit_for_gossip(&peer_id, *exit)
{
self.propagate_message(id, peer_id);
self.processor.import_verified_voluntary_exit(verified_exit);
}
self.processor.on_voluntary_exit_gossip(id, peer_id, exit);
}
PubsubMessage::ProposerSlashing(proposer_slashing) => {
debug!(
@@ -246,14 +237,8 @@ impl<T: BeaconChainTypes> Router<T> {
"Received a proposer slashing";
"peer_id" => format!("{}", peer_id)
);
if let Some(verified_proposer_slashing) = self
.processor
.verify_proposer_slashing_for_gossip(&peer_id, *proposer_slashing)
{
self.propagate_message(id, peer_id);
self.processor
.import_verified_proposer_slashing(verified_proposer_slashing);
}
self.processor
.on_proposer_slashing_gossip(id, peer_id, proposer_slashing);
}
PubsubMessage::AttesterSlashing(attester_slashing) => {
debug!(
@@ -261,30 +246,9 @@ impl<T: BeaconChainTypes> Router<T> {
"Received an attester slashing";
"peer_id" => format!("{}", peer_id)
);
if let Some(verified_attester_slashing) = self
.processor
.verify_attester_slashing_for_gossip(&peer_id, *attester_slashing)
{
self.propagate_message(id, peer_id);
self.processor
.import_verified_attester_slashing(verified_attester_slashing);
}
self.processor
.on_attester_slashing_gossip(id, peer_id, attester_slashing);
}
}
}
/// Informs the network service that the message should be forwarded to other peers (is valid).
fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
self.network_send
.send(NetworkMessage::Validate {
propagation_source,
message_id,
})
.unwrap_or_else(|_| {
warn!(
self.log,
"Could not send propagation request to the network service"
)
});
}
}


@@ -3,14 +3,13 @@ use crate::beacon_processor::{
};
use crate::service::NetworkMessage;
use crate::sync::{PeerSyncInfo, SyncMessage};
use beacon_chain::{observed_operations::ObservationOutcome, BeaconChain, BeaconChainTypes};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::rpc::*;
use eth2_libp2p::{
MessageId, NetworkGlobals, PeerAction, PeerId, PeerRequestId, Request, Response,
};
use itertools::process_results;
use slog::{debug, error, o, trace, warn};
use state_processing::SigVerifiedOp;
use std::cmp;
use std::sync::Arc;
use tokio::sync::mpsc;
@@ -585,140 +584,70 @@ impl<T: BeaconChainTypes> Processor<T> {
})
}
/// Verify a voluntary exit before gossiping or processing it.
///
/// Errors are logged at debug level.
pub fn verify_voluntary_exit_for_gossip(
&self,
peer_id: &PeerId,
voluntary_exit: SignedVoluntaryExit,
) -> Option<SigVerifiedOp<SignedVoluntaryExit>> {
let validator_index = voluntary_exit.message.validator_index;
match self.chain.verify_voluntary_exit_for_gossip(voluntary_exit) {
Ok(ObservationOutcome::New(sig_verified_exit)) => Some(sig_verified_exit),
Ok(ObservationOutcome::AlreadyKnown) => {
debug!(
self.log,
"Dropping exit for already exiting validator";
"validator_index" => validator_index,
"peer" => peer_id.to_string()
);
None
}
Err(e) => {
debug!(
self.log,
"Dropping invalid exit";
"validator_index" => validator_index,
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
None
}
}
}
/// Import a verified exit into the op pool.
pub fn import_verified_voluntary_exit(
&self,
verified_voluntary_exit: SigVerifiedOp<SignedVoluntaryExit>,
pub fn on_voluntary_exit_gossip(
&mut self,
message_id: MessageId,
peer_id: PeerId,
voluntary_exit: Box<SignedVoluntaryExit>,
) {
self.chain.import_voluntary_exit(verified_voluntary_exit);
debug!(self.log, "Successfully imported voluntary exit");
self.beacon_processor_send
.try_send(BeaconWorkEvent::gossip_voluntary_exit(
message_id,
peer_id,
voluntary_exit,
))
.unwrap_or_else(|e| {
error!(
&self.log,
"Unable to send to gossip processor";
"type" => "voluntary exit gossip",
"error" => e.to_string(),
)
})
}
/// Verify a proposer slashing before gossiping or processing it.
///
/// Errors are logged at debug level.
pub fn verify_proposer_slashing_for_gossip(
&self,
peer_id: &PeerId,
proposer_slashing: ProposerSlashing,
) -> Option<SigVerifiedOp<ProposerSlashing>> {
let validator_index = proposer_slashing.signed_header_1.message.proposer_index;
match self
.chain
.verify_proposer_slashing_for_gossip(proposer_slashing)
{
Ok(ObservationOutcome::New(verified_slashing)) => Some(verified_slashing),
Ok(ObservationOutcome::AlreadyKnown) => {
debug!(
self.log,
"Dropping proposer slashing";
"reason" => "Already seen a proposer slashing for that validator",
"validator_index" => validator_index,
"peer" => peer_id.to_string()
);
None
}
Err(e) => {
debug!(
self.log,
"Dropping invalid proposer slashing";
"validator_index" => validator_index,
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
None
}
}
}
/// Import a verified proposer slashing into the op pool.
pub fn import_verified_proposer_slashing(
&self,
proposer_slashing: SigVerifiedOp<ProposerSlashing>,
pub fn on_proposer_slashing_gossip(
&mut self,
message_id: MessageId,
peer_id: PeerId,
proposer_slashing: Box<ProposerSlashing>,
) {
self.chain.import_proposer_slashing(proposer_slashing);
debug!(self.log, "Successfully imported proposer slashing");
self.beacon_processor_send
.try_send(BeaconWorkEvent::gossip_proposer_slashing(
message_id,
peer_id,
proposer_slashing,
))
.unwrap_or_else(|e| {
error!(
&self.log,
"Unable to send to gossip processor";
"type" => "proposer slashing gossip",
"error" => e.to_string(),
)
})
}
/// Verify an attester slashing before gossiping or processing it.
///
/// Errors are logged at debug level.
pub fn verify_attester_slashing_for_gossip(
&self,
peer_id: &PeerId,
attester_slashing: AttesterSlashing<T::EthSpec>,
) -> Option<SigVerifiedOp<AttesterSlashing<T::EthSpec>>> {
match self
.chain
.verify_attester_slashing_for_gossip(attester_slashing)
{
Ok(ObservationOutcome::New(verified_slashing)) => Some(verified_slashing),
Ok(ObservationOutcome::AlreadyKnown) => {
debug!(
self.log,
"Dropping attester slashing";
"reason" => "Slashings already known for all slashed validators",
"peer" => peer_id.to_string()
);
None
}
Err(e) => {
debug!(
self.log,
"Dropping invalid attester slashing";
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
None
}
}
}
/// Import a verified attester slashing into the op pool.
pub fn import_verified_attester_slashing(
&self,
attester_slashing: SigVerifiedOp<AttesterSlashing<T::EthSpec>>,
pub fn on_attester_slashing_gossip(
&mut self,
message_id: MessageId,
peer_id: PeerId,
attester_slashing: Box<AttesterSlashing<T::EthSpec>>,
) {
if let Err(e) = self.chain.import_attester_slashing(attester_slashing) {
debug!(self.log, "Error importing attester slashing"; "error" => format!("{:?}", e));
} else {
debug!(self.log, "Successfully imported attester slashing");
}
self.beacon_processor_send
.try_send(BeaconWorkEvent::gossip_attester_slashing(
message_id,
peer_id,
attester_slashing,
))
.unwrap_or_else(|e| {
error!(
&self.log,
"Unable to send to gossip processor";
"type" => "attester slashing gossip",
"error" => e.to_string(),
)
})
}
}
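Each `on_*_gossip` method above forwards work with `try_send`, so a saturated beacon processor exerts back-pressure (and gets a log line) instead of blocking the router. The same non-blocking pattern can be sketched with std's bounded channel; the real code uses a tokio mpsc channel, and `forward_event` is an illustrative name, not Lighthouse's API:

```rust
use std::sync::mpsc::{SyncSender, TrySendError};

/// Try to hand an event to a worker without blocking. Returns a short
/// description of the failure, mirroring how the router logs
/// "Unable to send to gossip processor" with the error string.
pub fn forward_event(tx: &SyncSender<String>, event: String) -> Result<(), String> {
    tx.try_send(event).map_err(|e| match e {
        TrySendError::Full(_) => "queue full".to_string(),
        TrySendError::Disconnected(_) => "processor gone".to_string(),
    })
}
```

Dropping events when the queue is full is a deliberate choice here: for gossip, losing a message under load is preferable to stalling the network thread.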


@@ -169,6 +169,7 @@ fn spawn_service<T: BeaconChainTypes>(
mut service: NetworkService<T>,
) -> error::Result<()> {
let mut exit_rx = executor.exit();
let mut shutdown_sender = executor.shutdown_sender();
// spawn on the current executor
executor.spawn_without_exit(async move {
@@ -271,8 +272,8 @@ fn spawn_service<T: BeaconChainTypes>(
AttServiceMessage::EnrRemove(subnet_id) => {
service.libp2p.swarm.update_enr_subnet(subnet_id, false);
}
AttServiceMessage::DiscoverPeers{subnet_id, min_ttl} => {
service.libp2p.swarm.discover_subnet_peers(subnet_id, min_ttl);
AttServiceMessage::DiscoverPeers(subnets_to_discover) => {
service.libp2p.swarm.discover_subnet_peers(subnets_to_discover);
}
}
}
@@ -376,6 +377,12 @@ fn spawn_service<T: BeaconChainTypes>(
Libp2pEvent::NewListenAddr(multiaddr) => {
service.network_globals.listen_multiaddrs.write().push(multiaddr);
}
Libp2pEvent::ZeroListeners => {
let _ = shutdown_sender.send("All listeners are closed. Unable to listen").await.map_err(|e| {
warn!(service.log, "failed to send a shutdown signal"; "error" => e.to_string()
)
});
}
}
}
}


@@ -40,7 +40,13 @@ mod tests {
let runtime = Runtime::new().unwrap();
let (signal, exit) = exit_future::signal();
let executor = environment::TaskExecutor::new(runtime.handle().clone(), exit, log.clone());
let (shutdown_tx, _) = futures::channel::mpsc::channel(1);
let executor = environment::TaskExecutor::new(
runtime.handle().clone(),
exit,
log.clone(),
shutdown_tx,
);
let mut config = NetworkConfig::default();
config.libp2p_port = 21212;


@@ -35,7 +35,7 @@
use super::network_context::SyncNetworkContext;
use super::peer_sync_info::{PeerSyncInfo, PeerSyncType};
use super::range_sync::{BatchId, ChainId, RangeSync, EPOCHS_PER_BATCH};
use super::range_sync::{ChainId, RangeSync, EPOCHS_PER_BATCH};
use super::RequestId;
use crate::beacon_processor::{ProcessId, WorkEvent as BeaconWorkEvent};
use crate::service::NetworkMessage;
@@ -44,6 +44,7 @@ use eth2_libp2p::rpc::{methods::MAX_REQUEST_BLOCKS, BlocksByRootRequest, Goodbye
use eth2_libp2p::types::NetworkGlobals;
use eth2_libp2p::{PeerAction, PeerId};
use fnv::FnvHashMap;
use lru_cache::LRUCache;
use slog::{crit, debug, error, info, trace, warn, Logger};
use smallvec::SmallVec;
use ssz_types::VariableList;
@@ -51,7 +52,7 @@ use std::boxed::Box;
use std::ops::Sub;
use std::sync::Arc;
use tokio::sync::mpsc;
use types::{EthSpec, Hash256, SignedBeaconBlock, Slot};
use types::{Epoch, EthSpec, Hash256, SignedBeaconBlock, Slot};
/// The number of slots ahead of us that is allowed before requesting a long-range (batch) Sync
/// from a peer. If a peer is within this tolerance (forwards or backwards), it is treated as a
@@ -100,13 +101,18 @@ pub enum SyncMessage<T: EthSpec> {
/// A batch has been processed by the block processor thread.
BatchProcessed {
chain_id: ChainId,
batch_id: BatchId,
epoch: Epoch,
downloaded_blocks: Vec<SignedBeaconBlock<T>>,
result: BatchProcessResult,
},
/// A parent lookup has failed for a block given by this `peer_id`.
ParentLookupFailed(PeerId),
/// A parent lookup has failed.
ParentLookupFailed {
/// The head of the chain of blocks that failed to process.
chain_head: Hash256,
/// The peer that instigated the chain lookup.
peer_id: PeerId,
},
}
/// The result of processing multiple blocks (a chain segment).
@@ -161,6 +167,9 @@ pub struct SyncManager<T: BeaconChainTypes> {
/// A collection of parent block lookups.
parent_queue: SmallVec<[ParentRequests<T::EthSpec>; 3]>,
/// A cache of failed chain lookups to prevent duplicate searches.
failed_chains: LRUCache<Hash256>,
/// A collection of block hashes being searched for and a flag indicating if a result has been
/// received or not.
///
@@ -222,6 +231,7 @@ pub fn spawn<T: BeaconChainTypes>(
network_globals,
input_channel: sync_recv,
parent_queue: SmallVec::new(),
failed_chains: LRUCache::new(500),
single_block_lookups: FnvHashMap::default(),
log: log.clone(),
beacon_processor_send,
@@ -351,6 +361,22 @@ impl<T: BeaconChainTypes> SyncManager<T> {
return;
}
};
// check if the parent of this block isn't in our failed cache. If it is, this
// chain should be dropped and the peer downscored.
if self.failed_chains.contains(&block.message.parent_root) {
debug!(self.log, "Parent chain ignored due to past failure"; "block" => format!("{:?}", block.message.parent_root), "slot" => block.message.slot);
if !parent_request.downloaded_blocks.is_empty() {
// Add the root block to failed chains
self.failed_chains
.insert(parent_request.downloaded_blocks[0].canonical_root());
} else {
crit!(self.log, "Parent chain has no blocks");
}
self.network
.report_peer(peer_id, PeerAction::MidToleranceError);
return;
}
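The `failed_chains` cache consulted above is an `LRUCache` from a Lighthouse-internal crate; the behaviour the sync manager relies on (bounded size, O(1) membership test, eviction of the oldest entry) can be sketched with std types alone. The type and method names below are illustrative, not that crate's API, and `u64` stands in for `Hash256` block roots:

```rust
use std::collections::{HashSet, VecDeque};

/// Minimal bounded insertion-order cache: `contains` is O(1) via the set,
/// and inserting beyond capacity evicts the oldest entry.
pub struct FailedChainCache {
    capacity: usize,
    order: VecDeque<u64>, // insertion order, oldest at the front
    set: HashSet<u64>,    // fast membership checks
}

impl FailedChainCache {
    pub fn new(capacity: usize) -> Self {
        Self { capacity, order: VecDeque::new(), set: HashSet::new() }
    }

    pub fn contains(&self, root: &u64) -> bool {
        self.set.contains(root)
    }

    pub fn insert(&mut self, root: u64) {
        // Only track new entries; duplicates keep their original position.
        if self.set.insert(root) {
            self.order.push_back(root);
            if self.order.len() > self.capacity {
                if let Some(evicted) = self.order.pop_front() {
                    self.set.remove(&evicted);
                }
            }
        }
    }
}
```

Bounding the cache (the real code uses 500 entries) matters because entries are keyed by attacker-suppliable block roots; an unbounded set would be a memory-exhaustion vector.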
// add the block to response
parent_request.downloaded_blocks.push(block);
// queue for processing
@@ -510,6 +536,15 @@ impl<T: BeaconChainTypes> SyncManager<T> {
}
}
let block_root = block.canonical_root();
// If this block or its parent is part of a known failed chain, ignore it.
if self.failed_chains.contains(&block.message.parent_root)
|| self.failed_chains.contains(&block_root)
{
debug!(self.log, "Block is from a past failed chain. Dropping"; "block_root" => format!("{:?}", block_root), "block_slot" => block.message.slot);
return;
}
// Make sure this block is not already being searched for
// NOTE: Potentially store a hashset of blocks for O(1) lookups
for parent_req in self.parent_queue.iter() {
@@ -697,6 +732,8 @@ impl<T: BeaconChainTypes> SyncManager<T> {
// If the last block in the queue has an unknown parent, we continue the parent
// lookup-search.
let chain_block_hash = parent_request.downloaded_blocks[0].canonical_root();
let newest_block = parent_request
.downloaded_blocks
.pop()
@@ -715,8 +752,10 @@ impl<T: BeaconChainTypes> SyncManager<T> {
self.request_parent(parent_request);
}
Ok(_) | Err(BlockError::BlockIsAlreadyKnown { .. }) => {
let process_id =
ProcessId::ParentLookup(parent_request.last_submitted_peer.clone());
let process_id = ProcessId::ParentLookup(
parent_request.last_submitted_peer.clone(),
chain_block_hash,
);
let blocks = parent_request.downloaded_blocks;
match self
@@ -742,6 +781,10 @@ impl<T: BeaconChainTypes> SyncManager<T> {
"outcome" => format!("{:?}", outcome),
"last_peer" => parent_request.last_submitted_peer.to_string(),
);
// Add this chain to cache of failed chains
self.failed_chains.insert(chain_block_hash);
// This currently can be a host of errors. We permit this due to the partial
// ambiguity.
// TODO: Refine the error types and score the peer appropriately.
@@ -764,8 +807,17 @@ impl<T: BeaconChainTypes> SyncManager<T> {
|| parent_request.downloaded_blocks.len() >= PARENT_DEPTH_TOLERANCE
{
let error = if parent_request.failed_attempts >= PARENT_FAIL_TOLERANCE {
// This is a peer-specific error and the chain could be continued with another
// peer. We don't consider this chain a failure and don't prevent retries with
// another peer.
"too many failed attempts"
} else {
if !parent_request.downloaded_blocks.is_empty() {
self.failed_chains
.insert(parent_request.downloaded_blocks[0].canonical_root());
} else {
crit!(self.log, "Parent lookup has no blocks");
}
"reached maximum lookup-depth"
};
@@ -774,6 +826,11 @@ impl<T: BeaconChainTypes> SyncManager<T> {
"ancestors_found" => parent_request.downloaded_blocks.len(),
"reason" => error
);
// Downscore the peer.
self.network.report_peer(
parent_request.last_submitted_peer,
PeerAction::LowToleranceError,
);
return; // drop the request
}
@@ -842,24 +899,25 @@ impl<T: BeaconChainTypes> SyncManager<T> {
}
SyncMessage::BatchProcessed {
chain_id,
batch_id,
epoch,
downloaded_blocks,
result,
} => {
self.range_sync.handle_block_process_result(
&mut self.network,
chain_id,
batch_id,
epoch,
downloaded_blocks,
result,
);
}
SyncMessage::ParentLookupFailed(peer_id) => {
SyncMessage::ParentLookupFailed {
chain_head,
peer_id,
} => {
// A peer sent an object (block or attestation) that referenced a parent.
// On request for this parent the peer indicated it did not have this
// block.
// This is not fatal. Peers could prune old blocks so we moderately
// tolerate this behaviour.
// The processing of this chain failed.
self.failed_chains.insert(chain_head);
self.network
.report_peer(peer_id, PeerAction::MidToleranceError);
}


@@ -8,7 +8,7 @@ mod range_sync;
pub use manager::{BatchProcessResult, SyncMessage};
pub use peer_sync_info::PeerSyncInfo;
pub use range_sync::{BatchId, ChainId};
pub use range_sync::ChainId;
/// Type of id of rpc requests sent by sync
pub type RequestId = usize;


@@ -110,7 +110,7 @@ impl<T: EthSpec> SyncNetworkContext<T> {
}
pub fn report_peer(&mut self, peer_id: PeerId, action: PeerAction) {
debug!(self.log, "Sync reporting peer"; "peer_id" => peer_id.to_string(), "action"=> action.to_string());
debug!(self.log, "Sync reporting peer"; "peer_id" => peer_id.to_string(), "action" => action.to_string());
self.network_send
.send(NetworkMessage::ReportPeer { peer_id, action })
.unwrap_or_else(|_| {


@@ -9,37 +9,14 @@ use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::hash::{Hash, Hasher};
use std::ops::Sub;
use types::{EthSpec, SignedBeaconBlock, Slot};
#[derive(Copy, Clone, Debug, PartialEq)]
pub struct BatchId(pub u64);
impl std::ops::Deref for BatchId {
type Target = u64;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl std::ops::DerefMut for BatchId {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl std::convert::From<u64> for BatchId {
fn from(id: u64) -> Self {
BatchId(id)
}
}
use types::{Epoch, EthSpec, SignedBeaconBlock, Slot};
/// A collection of sequential blocks that are requested from peers in a single RPC request.
#[derive(PartialEq, Debug)]
pub struct Batch<T: EthSpec> {
/// The ID of the batch, these are sequential.
pub id: BatchId,
/// The requested start slot of the batch, inclusive.
pub start_slot: Slot,
/// The requested end slot of batch, exlcusive.
/// The requested start epoch of the batch.
pub start_epoch: Epoch,
/// The requested end slot of batch, exclusive.
pub end_slot: Slot,
/// The `Attempts` that have been made to send us this batch.
pub attempts: Vec<Attempt>,
@@ -69,10 +46,9 @@ pub struct Attempt {
impl<T: EthSpec> Eq for Batch<T> {}
impl<T: EthSpec> Batch<T> {
pub fn new(id: BatchId, start_slot: Slot, end_slot: Slot, peer_id: PeerId) -> Self {
pub fn new(start_epoch: Epoch, end_slot: Slot, peer_id: PeerId) -> Self {
Batch {
id,
start_slot,
start_epoch,
end_slot,
attempts: Vec::new(),
current_peer: peer_id,
@@ -82,12 +58,21 @@ impl<T: EthSpec> Batch<T> {
}
}
pub fn start_slot(&self) -> Slot {
// batches are shifted by 1
self.start_epoch.start_slot(T::slots_per_epoch()) + 1
}
pub fn end_slot(&self) -> Slot {
self.end_slot
}
pub fn to_blocks_by_range_request(&self) -> BlocksByRangeRequest {
let start_slot = self.start_slot();
BlocksByRangeRequest {
start_slot: self.start_slot.into(),
start_slot: start_slot.into(),
count: min(
T::slots_per_epoch() * EPOCHS_PER_BATCH,
self.end_slot.sub(self.start_slot).into(),
self.end_slot.sub(start_slot).into(),
),
step: 1,
}
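With `BatchId` removed, a batch is now identified by its `start_epoch` and the slot range is derived from it. The arithmetic above (the one-slot shift past the epoch boundary, and the capped `BlocksByRange` count) works out as in this sketch, assuming mainnet's 32 slots per epoch; the `EPOCHS_PER_BATCH` value here is illustrative, the real constant lives in `range_sync`:

```rust
const SLOTS_PER_EPOCH: u64 = 32;
const EPOCHS_PER_BATCH: u64 = 2; // illustrative value

/// First requested slot: batches are shifted by one slot past the
/// epoch boundary, matching `Batch::start_slot` above.
pub fn batch_start_slot(start_epoch: u64) -> u64 {
    start_epoch * SLOTS_PER_EPOCH + 1
}

/// Number of slots requested, capped at a full batch's worth,
/// matching the `count` field of the BlocksByRange request.
pub fn batch_count(start_epoch: u64, end_slot: u64) -> u64 {
    let start_slot = batch_start_slot(start_epoch);
    (SLOTS_PER_EPOCH * EPOCHS_PER_BATCH).min(end_slot - start_slot)
}
```

Keying batches by epoch rather than a sequential id also explains the simplified `Ord` impl below: batches order naturally by `start_epoch`.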
@@ -105,7 +90,7 @@ impl<T: EthSpec> Batch<T> {
impl<T: EthSpec> Ord for Batch<T> {
fn cmp(&self, other: &Self) -> Ordering {
self.id.0.cmp(&other.id.0)
self.start_epoch.cmp(&other.start_epoch)
}
}


@@ -1,4 +1,4 @@
use super::batch::{Batch, BatchId, PendingBatches};
use super::batch::{Batch, PendingBatches};
use crate::beacon_processor::ProcessId;
use crate::beacon_processor::WorkEvent as BeaconWorkEvent;
use crate::sync::RequestId;
@@ -73,11 +73,12 @@ pub struct SyncingChain<T: BeaconChainTypes> {
/// and thus available to download this chain from.
pub peer_pool: HashSet<PeerId>,
/// The next batch_id that needs to be downloaded.
to_be_downloaded_id: BatchId,
/// Starting epoch of the next batch that needs to be downloaded.
to_be_downloaded: Epoch,
/// The next batch id that needs to be processed.
to_be_processed_id: BatchId,
/// Starting epoch of the batch that needs to be processed next.
/// This is incremented as the chain advances.
processing_target: Epoch,
/// The current state of the chain.
pub state: ChainSyncingState,
@@ -91,7 +92,7 @@ pub struct SyncingChain<T: BeaconChainTypes> {
/// A reference to the underlying beacon chain.
chain: Arc<BeaconChain<T>>,
/// A reference to the sync logger.
/// The chain's log.
log: slog::Logger,
}
@@ -127,8 +128,8 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
completed_batches: Vec::new(),
processed_batches: Vec::new(),
peer_pool,
to_be_downloaded_id: BatchId(1),
to_be_processed_id: BatchId(1),
to_be_downloaded: start_epoch,
processing_target: start_epoch,
state: ChainSyncingState::Stopped,
current_processing_batch: None,
beacon_processor_send,
@@ -139,13 +140,10 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// Returns the latest slot number that has been processed.
fn current_processed_slot(&self) -> Slot {
self.start_epoch
// the last slot we processed was included in the previous batch, and corresponds to the
// first slot of the current target epoch
self.processing_target
.start_slot(T::EthSpec::slots_per_epoch())
.saturating_add(
self.to_be_processed_id.saturating_sub(1u64)
* T::EthSpec::slots_per_epoch()
* EPOCHS_PER_BATCH,
)
}
/// A batch of blocks has been received. This function gets run on all chains and should
@@ -182,21 +180,19 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// An entire batch of blocks has been received. This function checks whether it
// can be processed, removes any batches waiting to be verified and, if this
// chain is syncing, requests new blocks for the peer.
debug!(self.log, "Completed batch received"; "id"=> *batch.id, "blocks" => &batch.downloaded_blocks.len(), "awaiting_batches" => self.completed_batches.len());
debug!(self.log, "Completed batch received"; "epoch" => batch.start_epoch, "blocks" => &batch.downloaded_blocks.len(), "awaiting_batches" => self.completed_batches.len());
// verify the range of received blocks
// Note that the order of blocks is verified in block processing
if let Some(last_slot) = batch.downloaded_blocks.last().map(|b| b.slot()) {
// the batch is non-empty
let first_slot = batch.downloaded_blocks[0].slot();
if batch.start_slot > first_slot || batch.end_slot < last_slot {
if batch.start_slot() > first_slot || batch.end_slot() < last_slot {
warn!(self.log, "BlocksByRange response returned out of range blocks";
"response_initial_slot" => first_slot,
"requested_initial_slot" => batch.start_slot);
// This is a pretty bad error. We don't consider this fatal, but we don't tolerate
// this much either.
network.report_peer(batch.current_peer, PeerAction::LowToleranceError);
self.to_be_processed_id = batch.id; // reset the id back to here, when incrementing, it will check against completed batches
"response_initial_slot" => first_slot,
"requested_initial_slot" => batch.start_slot());
// this batch can't be used, so we need to request it again.
self.failed_batch(network, batch);
return;
}
}
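The bounds check above only verifies that the first and last returned blocks fall inside the batch's requested slot range; ordering within the response is verified later, during block processing. A hedged std-only sketch of the same check over a slice of slots (`response_in_range` is an illustrative name):

```rust
/// Returns true when a non-empty response's first and last slots both fall
/// within [start_slot, end_slot]; an empty response is trivially in range.
pub fn response_in_range(slots: &[u64], start_slot: u64, end_slot: u64) -> bool {
    match (slots.first(), slots.last()) {
        (Some(&first), Some(&last)) => start_slot <= first && last <= end_slot,
        // An empty response has nothing out of range.
        _ => true,
    }
}
```

On failure the real code now calls `failed_batch` so the same range is re-requested (from a downscored peer), rather than resetting a processing id as the old code did.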
@@ -242,7 +238,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// Check if there is a batch ready to be processed
if !self.completed_batches.is_empty()
&& self.completed_batches[0].id == self.to_be_processed_id
&& self.completed_batches[0].start_epoch == self.processing_target
{
let batch = self.completed_batches.remove(0);
@@ -258,7 +254,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
/// Sends a batch to the beacon processor for async processing in a queue.
fn process_batch(&mut self, mut batch: Batch<T::EthSpec>) {
let blocks = std::mem::replace(&mut batch.downloaded_blocks, Vec::new());
let process_id = ProcessId::RangeBatchId(self.id, batch.id);
let process_id = ProcessId::RangeBatchId(self.id, batch.start_epoch);
self.current_processing_batch = Some(batch);
if let Err(e) = self
@@ -280,7 +276,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
chain_id: ChainId,
batch_id: BatchId,
batch_start_epoch: Epoch,
downloaded_blocks: &mut Option<Vec<SignedBeaconBlock<T::EthSpec>>>,
result: &BatchProcessResult,
) -> Option<ProcessingResult> {
@@ -289,14 +285,14 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
return None;
}
match &self.current_processing_batch {
Some(current_batch) if current_batch.id != batch_id => {
Some(current_batch) if current_batch.start_epoch != batch_start_epoch => {
debug!(self.log, "Unexpected batch result";
"chain_id" => self.id, "batch_id" => *batch_id, "expected_batch_id" => *current_batch.id);
"batch_epoch" => batch_start_epoch, "expected_batch_epoch" => current_batch.start_epoch);
return None;
}
None => {
debug!(self.log, "Chain was not expecting a batch result";
"chain_id" => self.id, "batch_id" => *batch_id);
"batch_epoch" => batch_start_epoch);
return None;
}
_ => {
@@ -308,7 +304,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let downloaded_blocks = downloaded_blocks.take().or_else(|| {
// if taken by another chain, we are no longer waiting on a result.
self.current_processing_batch = None;
crit!(self.log, "Processed batch taken by another chain"; "chain_id" => self.id);
crit!(self.log, "Processed batch taken by another chain");
None
})?;
@@ -318,16 +314,15 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
batch.downloaded_blocks = downloaded_blocks;
// double check batches are processed in order TODO: Remove for prod
if batch.id != self.to_be_processed_id {
if batch.start_epoch != self.processing_target {
crit!(self.log, "Batch processed out of order";
"chain_id" => self.id,
"processed_batch_id" => *batch.id,
"expected_id" => *self.to_be_processed_id);
"processed_starting_epoch" => batch.start_epoch,
"expected_epoch" => self.processing_target);
}
let res = match result {
BatchProcessResult::Success => {
*self.to_be_processed_id += 1;
self.processing_target += EPOCHS_PER_BATCH;
// If the processed batch was not empty, we can validate previous invalidated
// blocks including the current batch.
@@ -357,7 +352,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
BatchProcessResult::Partial => {
warn!(self.log, "Batch processing failed but at least one block was imported";
"chain_id" => self.id, "id" => *batch.id, "peer" => format!("{}", batch.current_peer)
"batch_epoch" => batch.start_epoch, "peer" => batch.current_peer.to_string()
);
// At least one block was successfully verified and imported, so we can be sure all
// previous batches are valid and we only need to download the current failed
@@ -375,7 +370,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let action = PeerAction::LowToleranceError;
warn!(self.log, "Batch failed to download. Dropping chain scoring peers";
"score_adjustment" => action.to_string(),
"chain_id" => self.id, "id"=> *batch.id);
"batch_epoch"=> batch.start_epoch);
for peer_id in self.peer_pool.drain() {
network.report_peer(peer_id, action);
}
@@ -388,7 +383,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
BatchProcessResult::Failed => {
debug!(self.log, "Batch processing failed";
"chain_id" => self.id,"id" => *batch.id, "peer" => batch.current_peer.to_string(), "client" => network.client_type(&batch.current_peer).to_string());
"batch_epoch" => batch.start_epoch, "peer" => batch.current_peer.to_string(), "client" => network.client_type(&batch.current_peer).to_string());
// The batch processing failed
// This could be because this batch is invalid, or a previous invalidated batch
// is invalid. We need to find out which and downvote the peer that has sent us
@@ -403,7 +398,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let action = PeerAction::LowToleranceError;
warn!(self.log, "Batch failed to download. Dropping chain scoring peers";
"score_adjustment" => action.to_string(),
"chain_id" => self.id, "id"=> *batch.id);
"batch_epoch" => batch.start_epoch);
for peer_id in self.peer_pool.drain() {
network.report_peer(peer_id, action);
}
@@ -433,11 +428,10 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
) {
while !self.processed_batches.is_empty() {
let mut processed_batch = self.processed_batches.remove(0);
if *processed_batch.id >= *last_batch.id {
if processed_batch.start_epoch >= last_batch.start_epoch {
crit!(self.log, "A processed batch had a greater start epoch than the current batch";
"chain_id" => self.id,
"processed_id" => *processed_batch.id,
"current_id" => *last_batch.id);
"processed_start_epoch" => processed_batch.start_epoch,
"current_start_epoch" => last_batch.start_epoch);
}
// Go through passed attempts and downscore peers that returned invalid batches
@@ -452,11 +446,10 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let action = PeerAction::LowToleranceError;
debug!(
self.log, "Re-processed batch validated. Scoring original peer";
"chain_id" => self.id,
"batch_id" => *processed_batch.id,
"score_adjustment" => action.to_string(),
"original_peer" => format!("{}",attempt.peer_id),
"new_peer" => format!("{}", processed_batch.current_peer)
"batch_epoch" => processed_batch.start_epoch,
"score_adjustment" => action.to_string(),
"original_peer" => format!("{}",attempt.peer_id),
"new_peer" => format!("{}", processed_batch.current_peer)
);
network.report_peer(attempt.peer_id, action);
} else {
@@ -465,11 +458,10 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
let action = PeerAction::MidToleranceError;
debug!(
self.log, "Re-processed batch validated by the same peer.";
"chain_id" => self.id,
"batch_id" => *processed_batch.id,
"score_adjustment" => action.to_string(),
"original_peer" => format!("{}",attempt.peer_id),
"new_peer" => format!("{}", processed_batch.current_peer)
"batch_epoch" => processed_batch.start_epoch,
"score_adjustment" => action.to_string(),
"original_peer" => format!("{}",attempt.peer_id),
"new_peer" => format!("{}", processed_batch.current_peer)
);
network.report_peer(attempt.peer_id, action);
}
@@ -508,7 +500,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// Find any pre-processed batches awaiting validation
while !self.processed_batches.is_empty() {
let past_batch = self.processed_batches.remove(0);
*self.to_be_processed_id = std::cmp::min(*self.to_be_processed_id, *past_batch.id);
self.processing_target = std::cmp::min(self.processing_target, past_batch.start_epoch);
self.reprocess_batch(network, past_batch);
}
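The rewind of `processing_target` above can be shown with a tiny sketch (epochs as plain `u64`s; the helper name is hypothetical):

```rust
// When past batches are queued for re-processing, the chain's
// processing target drops back to the earliest of their start epochs.
fn rewind_target(processing_target: u64, past_batch_epochs: &[u64]) -> u64 {
    past_batch_epochs.iter().fold(processing_target, |t, &e| t.min(e))
}

fn main() {
    // Target rewinds to the oldest re-queued batch...
    assert_eq!(rewind_target(10, &[7, 9]), 7);
    // ...but never moves forward.
    assert_eq!(rewind_target(5, &[7, 9]), 5);
    println!("rewind ok");
}
```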
@@ -552,11 +544,10 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
batch.current_peer = new_peer.clone();
debug!(self.log, "Re-requesting batch";
"chain_id" => self.id,
"start_slot" => batch.start_slot,
"start_slot" => batch.start_slot(),
"end_slot" => batch.end_slot -1, // The -1 shows inclusive blocks
"id" => *batch.id,
"peer" => format!("{}", batch.current_peer),
"batch_epoch" => batch.start_epoch,
"peer" => batch.current_peer.to_string(),
"retries" => batch.retries,
"re-processes" => batch.reprocess_retries);
self.send_batch(network, batch);
@@ -592,12 +583,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
self.start_epoch = local_finalized_epoch;
debug!(self.log, "Updating chain's progress";
"chain_id" => self.id,
"prev_completed_slot" => current_processed_slot,
"new_completed_slot" => self.current_processed_slot());
// Re-index batches
*self.to_be_downloaded_id = 1;
*self.to_be_processed_id = 1;
self.to_be_downloaded = local_finalized_epoch;
self.processing_target = local_finalized_epoch;
// remove any completed or processed batches
self.completed_batches.clear();
@@ -621,7 +611,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// do not request blocks if the chain is not syncing
if let ChainSyncingState::Stopped = self.state {
debug!(self.log, "Peer added to a non-syncing chain";
"chain_id" => self.id, "peer_id" => format!("{}", peer_id));
"peer_id" => format!("{}", peer_id));
return;
}
@@ -650,8 +640,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
) -> Option<ProcessingResult> {
if let Some(batch) = self.pending_batches.remove(request_id) {
debug!(self.log, "Batch failed. RPC Error";
"chain_id" => self.id,
"id" => *batch.id,
"batch_epoch" => batch.start_epoch,
"retries" => batch.retries,
"peer" => format!("{:?}", peer_id));
@@ -688,17 +677,23 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
batch.current_peer = new_peer.clone();
debug!(self.log, "Re-Requesting batch";
"chain_id" => self.id,
"start_slot" => batch.start_slot,
"start_slot" => batch.start_slot(),
"end_slot" => batch.end_slot -1, // The -1 shows inclusive blocks
"id" => *batch.id,
"peer" => format!("{:?}", batch.current_peer));
"batch_epoch" => batch.start_epoch,
"peer" => batch.current_peer.to_string());
self.send_batch(network, batch);
ProcessingResult::KeepChain
}
}
/// Returns true if this chain is currently syncing.
pub fn is_syncing(&self) -> bool {
match self.state {
ChainSyncingState::Syncing => true,
ChainSyncingState::Stopped => false,
}
}
/// Attempts to request the next required batches from the peer pool if the chain is syncing. It will exhaust the peer
/// pool and left over batches until the batch buffer is reached or all peers are exhausted.
fn request_batches(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) {
@@ -714,10 +709,9 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
if let Some(peer_id) = self.get_next_peer() {
if let Some(batch) = self.get_next_batch(peer_id) {
debug!(self.log, "Requesting batch";
"chain_id" => self.id,
"start_slot" => batch.start_slot,
"end_slot" => batch.end_slot -1, // The -1 shows inclusive blocks
"id" => *batch.id,
"start_slot" => batch.start_slot(),
"end_slot" => batch.end_slot -1, // The -1 shows inclusive blocks
"batch_epoch" => batch.start_epoch,
"peer" => format!("{}", batch.current_peer));
// send the batch
self.send_batch(network, batch);
@@ -770,22 +764,15 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
return None;
}
// One is added to the start slot to begin one slot after the epoch boundary
let batch_start_slot = self
.start_epoch
.start_slot(slots_per_epoch)
.saturating_add(1u64)
+ self.to_be_downloaded_id.saturating_sub(1) * blocks_per_batch;
// don't request batches beyond the target head slot
if batch_start_slot > self.target_head_slot {
if self.to_be_downloaded.start_slot(slots_per_epoch) > self.target_head_slot {
return None;
}
// truncate the batch to the epoch containing the target head of the chain
let batch_end_slot = std::cmp::min(
// request either a batch containing the max number of blocks per batch
batch_start_slot + blocks_per_batch,
self.to_be_downloaded.start_slot(slots_per_epoch) + blocks_per_batch + 1,
// or a batch of one epoch of blocks, which contains the `target_head_slot`
self.target_head_slot
.saturating_add(slots_per_epoch)
@@ -793,28 +780,9 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
.start_slot(slots_per_epoch),
);
let batch_id = self.to_be_downloaded_id;
// Find the next batch id. The largest of the next sequential id, or the next uncompleted
// id
let max_completed_id = self
.completed_batches
.iter()
.last()
.map(|x| x.id.0)
.unwrap_or_else(|| 0);
// TODO: Check if this is necessary
self.to_be_downloaded_id = BatchId(std::cmp::max(
self.to_be_downloaded_id.0 + 1,
max_completed_id + 1,
));
Some(Batch::new(
batch_id,
batch_start_slot,
batch_end_slot,
peer_id,
))
let batch = Some(Batch::new(self.to_be_downloaded, batch_end_slot, peer_id));
self.to_be_downloaded += EPOCHS_PER_BATCH;
batch
}
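The epoch-indexed batch window computed in `get_next_batch` can be sketched with assumed constants (the real values come from the chain spec and the `EPOCHS_PER_BATCH` constant in `chain.rs`; slots and epochs are plain `u64`s here):

```rust
const SLOTS_PER_EPOCH: u64 = 32;
const EPOCHS_PER_BATCH: u64 = 2; // assumption for illustration

/// Returns `(start_slot, end_slot)` for the batch starting at `start_epoch`:
/// one slot past the epoch boundary, capped at the epoch containing
/// `target_head_slot`.
fn batch_slot_window(start_epoch: u64, target_head_slot: u64) -> (u64, u64) {
    let blocks_per_batch = SLOTS_PER_EPOCH * EPOCHS_PER_BATCH;
    let epoch_start = start_epoch * SLOTS_PER_EPOCH;
    // Start slot of the epoch following the one containing the target head.
    let end_cap = ((target_head_slot + SLOTS_PER_EPOCH) / SLOTS_PER_EPOCH) * SLOTS_PER_EPOCH;
    (epoch_start + 1, (epoch_start + blocks_per_batch + 1).min(end_cap))
}

fn main() {
    // Far-away head: a full two-epoch batch.
    assert_eq!(batch_slot_window(0, 10_000), (1, 65));
    // Head inside the batch: truncated to the epoch containing the head.
    assert_eq!(batch_slot_window(0, 40), (1, 64));
    println!("windows ok");
}
```

This illustrates why the refactor can simply advance `to_be_downloaded += EPOCHS_PER_BATCH` after each request: the epoch itself is now the batch's identity, replacing the old monotonically increasing `BatchId`.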
/// Requests the provided batch from the provided peer.
@@ -832,14 +800,13 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
}
Err(e) => {
warn!(self.log, "Batch request failed";
"chain_id" => self.id,
"start_slot" => batch.start_slot,
"end_slot" => batch.end_slot -1, // The -1 shows inclusive blocks
"id" => *batch.id,
"peer" => format!("{}", batch.current_peer),
"retries" => batch.retries,
"error" => e,
"re-processes" => batch.reprocess_retries);
"start_slot" => batch.start_slot(),
"end_slot" => batch.end_slot -1, // The -1 shows inclusive blocks
"start_epoch" => batch.start_epoch,
"peer" => batch.current_peer.to_string(),
"retries" => batch.retries,
"error" => e,
"re-processes" => batch.reprocess_retries);
self.failed_batch(network, batch);
}
}


@@ -9,12 +9,15 @@ use crate::sync::network_context::SyncNetworkContext;
use crate::sync::PeerSyncInfo;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::SyncState, NetworkGlobals, PeerId};
use slog::{debug, error, info};
use slog::{debug, error, info, o};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::EthSpec;
use types::{Epoch, Hash256, Slot};
/// The number of head syncing chains to sync at a time.
const PARALLEL_HEAD_CHAINS: usize = 2;
/// The state of the long range/batch sync.
#[derive(Clone)]
pub enum RangeSyncState {
@@ -205,8 +208,9 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
/// Updates the state of the chain collection.
///
/// This removes any out-dated chains, swaps to any higher priority finalized chains and
/// updates the state of the collection.
pub fn update_finalized(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) {
/// updates the state of the collection. This starts head chains syncing if any are required to
/// do so.
pub fn update(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) {
let local_epoch = {
let local = match PeerSyncInfo::from_chain(&self.beacon_chain) {
Some(local) => local,
@@ -222,9 +226,25 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
local.finalized_epoch
};
// Remove any outdated finalized chains
// Remove any outdated finalized/head chains
self.purge_outdated_chains(network);
// Choose the best finalized chain if one needs to be selected.
self.update_finalized_chains(network, local_epoch);
if self.finalized_syncing_index().is_none() {
// Handle head syncing chains if there are no finalized chains left.
self.update_head_chains(network, local_epoch);
}
}
/// This looks at all current finalized chains and decides if a new chain should be prioritised
/// or not.
fn update_finalized_chains(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
local_epoch: Epoch,
) {
// Check if any chains become the new syncing chain
if let Some(index) = self.finalized_syncing_index() {
// There is a current finalized chain syncing
@@ -269,32 +289,92 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
head_root: chain.target_head_root,
};
self.state = state;
} else {
// There are no finalized chains, update the state.
if self.head_chains.is_empty() {
self.state = RangeSyncState::Idle;
} else {
// for the syncing API, we find the minimal start_slot and the maximum
// target_slot of all head chains to report back.
let (min_epoch, max_slot) = self.head_chains.iter().fold(
(Epoch::from(0u64), Slot::from(0u64)),
|(min, max), chain| {
(
std::cmp::min(min, chain.start_epoch),
std::cmp::max(max, chain.target_head_slot),
)
},
);
let head_state = RangeSyncState::Head {
start_slot: min_epoch.start_slot(T::EthSpec::slots_per_epoch()),
head_slot: max_slot,
};
self.state = head_state;
}
}
}
/// Start syncing any head chains if required.
fn update_head_chains(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
local_epoch: Epoch,
) {
// There are no finalized chains, update the state.
if self.head_chains.is_empty() {
self.state = RangeSyncState::Idle;
return;
}
let mut currently_syncing = self
.head_chains
.iter()
.filter(|chain| chain.is_syncing())
.count();
let mut not_syncing = self.head_chains.len() - currently_syncing;
// Find all head chains that are not currently syncing ordered by peer count.
while currently_syncing < PARALLEL_HEAD_CHAINS && not_syncing > 0 {
// Find the chain with the most peers and start syncing
if let Some((_index, chain)) = self
.head_chains
.iter_mut()
.filter(|chain| !chain.is_syncing())
.enumerate()
.max_by_key(|(_index, chain)| chain.peer_pool.len())
{
// start syncing this chain
debug!(self.log, "New head chain started syncing"; "new_target_root" => format!("{}", chain.target_head_root), "new_end_slot" => chain.target_head_slot, "new_start_epoch"=> chain.start_epoch);
chain.start_syncing(network, local_epoch);
}
// update variables
currently_syncing = self
.head_chains
.iter()
.filter(|chain| chain.is_syncing())
.count();
not_syncing = self.head_chains.len() - currently_syncing;
}
// for the syncing API, we find the minimal start_slot and the maximum
// target_slot of all head chains to report back.
let (min_epoch, max_slot) = self
.head_chains
.iter()
.filter(|chain| chain.is_syncing())
.fold(
(Epoch::from(0u64), Slot::from(0u64)),
|(min, max), chain| {
(
std::cmp::min(min, chain.start_epoch),
std::cmp::max(max, chain.target_head_slot),
)
},
);
let head_state = RangeSyncState::Head {
start_slot: min_epoch.start_slot(T::EthSpec::slots_per_epoch()),
head_slot: max_slot,
};
self.state = head_state;
}
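The head-chain selection loop above can be condensed into a small sketch (hypothetical `HeadChain` struct; the real type carries a peer pool and network state). It repeatedly starts the idle chain with the most peers until the parallelism cap is reached:

```rust
const PARALLEL_HEAD_CHAINS: usize = 2;

struct HeadChain { peers: usize, syncing: bool }

fn start_head_chains(chains: &mut [HeadChain]) {
    loop {
        let syncing = chains.iter().filter(|c| c.syncing).count();
        if syncing >= PARALLEL_HEAD_CHAINS {
            break; // cap reached
        }
        // Pick the idle chain with the most peers and start it.
        match chains.iter_mut().filter(|c| !c.syncing).max_by_key(|c| c.peers) {
            Some(best) => best.syncing = true,
            None => break, // nothing left to start
        }
    }
}

fn main() {
    let mut chains = vec![
        HeadChain { peers: 1, syncing: false },
        HeadChain { peers: 5, syncing: false },
        HeadChain { peers: 3, syncing: false },
    ];
    start_head_chains(&mut chains);
    let started: Vec<usize> = chains.iter().filter(|c| c.syncing).map(|c| c.peers).collect();
    assert_eq!(started, vec![5, 3]);
    println!("selection ok");
}
```

Selecting by peer count means the chain most of the network agrees on is synced first, which matches the `max_by_key(|(_index, chain)| chain.peer_pool.len())` call in the diff.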
/// This is called once a head chain has completed syncing. It removes all non-syncing head
/// chains and re-status their peers.
pub fn clear_head_chains(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) {
let log_ref = &self.log;
self.head_chains.retain(|chain| {
if !chain.is_syncing() {
debug!(log_ref, "Removing old head chain"; "start_epoch" => chain.start_epoch, "end_slot" => chain.target_head_slot);
chain.status_peers(network);
false
} else {
true
}
});
}
/// Add a new finalized chain to the collection.
pub fn new_finalized_chain(
&mut self,
@@ -313,7 +393,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
peer_id,
beacon_processor_send,
self.beacon_chain.clone(),
self.log.clone(),
self.log.new(o!("chain" => chain_id)),
));
}
@@ -321,7 +401,6 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
#[allow(clippy::too_many_arguments)]
pub fn new_head_chain(
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
remote_finalized_epoch: Epoch,
target_head: Hash256,
target_slot: Slot,
@@ -336,7 +415,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
self.head_chains.retain(|chain| !chain.peer_pool.is_empty());
let chain_id = rand::random();
let mut new_head_chain = SyncingChain::new(
let new_head_chain = SyncingChain::new(
chain_id,
remote_finalized_epoch,
target_slot,
@@ -346,8 +425,6 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
self.beacon_chain.clone(),
self.log.clone(),
);
// All head chains can sync simultaneously
new_head_chain.start_syncing(network, remote_finalized_epoch);
self.head_chains.push(new_head_chain);
}
@@ -511,7 +588,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
debug!(self.log, "Chain was removed"; "start_epoch" => chain.start_epoch, "end_slot" => chain.target_head_slot);
// update the state
self.update_finalized(network);
self.update(network);
}
/// Returns the index of finalized chain that is currently syncing. Returns `None` if no

View File

@@ -8,6 +8,5 @@ mod range;
mod sync_type;
pub use batch::Batch;
pub use batch::BatchId;
pub use chain::{ChainId, EPOCHS_PER_BATCH};
pub use range::RangeSync;


@@ -42,7 +42,6 @@
use super::chain::{ChainId, ProcessingResult};
use super::chain_collection::{ChainCollection, RangeSyncState};
use super::sync_type::RangeSyncType;
use super::BatchId;
use crate::beacon_processor::WorkEvent as BeaconWorkEvent;
use crate::sync::network_context::SyncNetworkContext;
use crate::sync::BatchProcessResult;
@@ -54,7 +53,7 @@ use slog::{debug, error, trace};
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::mpsc;
use types::{EthSpec, SignedBeaconBlock};
use types::{Epoch, EthSpec, SignedBeaconBlock};
/// The primary object dealing with long range/batch syncing. This contains all the active and
/// non-active chains that need to be processed before the syncing is considered complete. This
@@ -161,13 +160,13 @@ impl<T: BeaconChainTypes> RangeSync<T> {
.chains
.get_finalized_mut(remote_info.finalized_root, remote_finalized_slot)
{
debug!(self.log, "Finalized chain exists, adding peer"; "peer_id" => format!("{:?}", peer_id), "target_root" => format!("{}", chain.target_head_root), "end_slot" => chain.target_head_slot, "start_epoch"=> chain.start_epoch);
debug!(self.log, "Finalized chain exists, adding peer"; "peer_id" => peer_id.to_string(), "target_root" => chain.target_head_root.to_string(), "target_slot" => chain.target_head_slot);
// add the peer to the chain's peer pool
chain.add_peer(network, peer_id);
// check if the new peer's addition will favour a new syncing chain.
self.chains.update_finalized(network);
self.chains.update(network);
// update the global sync state if necessary
self.chains.update_sync_state();
} else {
@@ -182,7 +181,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
peer_id,
self.beacon_processor_send.clone(),
);
self.chains.update_finalized(network);
self.chains.update(network);
// update the global sync state
self.chains.update_sync_state();
}
@@ -222,7 +221,6 @@ impl<T: BeaconChainTypes> RangeSync<T> {
debug!(self.log, "Creating a new syncing head chain"; "head_root" => format!("{}",remote_info.head_root), "start_epoch" => start_epoch, "head_slot" => remote_info.head_slot, "peer_id" => format!("{:?}", peer_id));
self.chains.new_head_chain(
network,
start_epoch,
remote_info.head_root,
remote_info.head_slot,
@@ -230,7 +228,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
self.beacon_processor_send.clone(),
);
}
self.chains.update_finalized(network);
self.chains.update(network);
self.chains.update_sync_state();
}
}
@@ -271,7 +269,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
&mut self,
network: &mut SyncNetworkContext<T::EthSpec>,
chain_id: ChainId,
batch_id: BatchId,
epoch: Epoch,
downloaded_blocks: Vec<SignedBeaconBlock<T::EthSpec>>,
result: BatchProcessResult,
) {
@@ -279,19 +277,13 @@ impl<T: BeaconChainTypes> RangeSync<T> {
let mut downloaded_blocks = Some(downloaded_blocks);
match self.chains.finalized_request(|chain| {
chain.on_batch_process_result(
network,
chain_id,
batch_id,
&mut downloaded_blocks,
&result,
)
chain.on_batch_process_result(network, chain_id, epoch, &mut downloaded_blocks, &result)
}) {
Some((index, ProcessingResult::RemoveChain)) => {
let chain = self.chains.remove_finalized_chain(index);
debug!(self.log, "Finalized chain removed"; "start_epoch" => chain.start_epoch, "end_slot" => chain.target_head_slot);
// update the state of the collection
self.chains.update_finalized(network);
self.chains.update(network);
// the chain is complete, re-status its peers
chain.status_peers(network);
@@ -319,7 +311,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
chain.on_batch_process_result(
network,
chain_id,
batch_id,
epoch,
&mut downloaded_blocks,
&result,
)
@@ -330,8 +322,12 @@ impl<T: BeaconChainTypes> RangeSync<T> {
// the chain is complete, re-status its peers and remove it
chain.status_peers(network);
// Remove non-syncing head chains and re-status the peers
// This removes a build-up of potentially duplicate head chains. Any
// legitimate head chains will be re-established
self.chains.clear_head_chains(network);
// update the state of the collection
self.chains.update_finalized(network);
self.chains.update(network);
// update the global state and log any change
self.chains.update_sync_state();
}
@@ -339,7 +335,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
None => {
// This can happen if a chain gets purged due to being out of date whilst a
// batch process is in progress.
debug!(self.log, "No chains match the block processing id"; "id" => *batch_id);
debug!(self.log, "No chains match the block processing id"; "batch_epoch" => epoch, "chain_id" => chain_id);
}
}
}
@@ -360,7 +356,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
self.remove_peer(network, peer_id);
// update the state of the collection
self.chains.update_finalized(network);
self.chains.update(network);
// update the global state and inform the user
self.chains.update_sync_state();
}


@@ -34,7 +34,6 @@ hex = "0.4.2"
parking_lot = "0.11.0"
futures = "0.3.5"
operation_pool = { path = "../operation_pool" }
rayon = "1.3.0"
environment = { path = "../../lighthouse/environment" }
uhttp_sse = "0.5.1"
bus = "2.2.3"


@@ -1,34 +0,0 @@
use crate::response_builder::ResponseBuilder;
use crate::ApiResult;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use hyper::{Body, Request};
use operation_pool::PersistedOperationPool;
use std::sync::Arc;
/// Returns the `proto_array` fork choice struct, encoded as JSON.
///
/// Useful for debugging or advanced inspection of the chain.
pub fn get_fork_choice<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(
&*beacon_chain
.fork_choice
.read()
.proto_array()
.core_proto_array(),
)
}
/// Returns the `PersistedOperationPool` struct.
///
/// Useful for debugging or advanced inspection of the stored operations.
pub fn get_operation_pool<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body(&PersistedOperationPool::from_operation_pool(
&beacon_chain.op_pool,
))
}


@@ -1,14 +1,13 @@
use crate::helpers::*;
use crate::response_builder::ResponseBuilder;
use crate::validator::get_state_for_epoch;
use crate::{ApiError, ApiResult, UrlQuery};
use crate::Context;
use crate::{ApiError, UrlQuery};
use beacon_chain::{
observed_operations::ObservationOutcome, BeaconChain, BeaconChainTypes, StateSkipConfig,
};
use bus::BusReader;
use futures::executor::block_on;
use hyper::body::Bytes;
use hyper::{Body, Request, Response};
use hyper::{Body, Request};
use rest_types::{
BlockResponse, CanonicalHeadResponse, Committee, HeadBeaconBlock, StateResponse,
ValidatorRequest, ValidatorResponse,
@@ -16,20 +15,20 @@ use rest_types::{
use std::io::Write;
use std::sync::Arc;
use slog::{error, Logger};
use slog::error;
use types::{
AttesterSlashing, BeaconState, EthSpec, Hash256, ProposerSlashing, PublicKeyBytes,
RelativeEpoch, SignedBeaconBlockHash, Slot,
};
/// HTTP handler to return a `BeaconBlock` at a given `root` or `slot`.
/// Returns a summary of the head of the beacon chain.
pub fn get_head<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ctx: Arc<Context<T>>,
) -> Result<CanonicalHeadResponse, ApiError> {
let beacon_chain = &ctx.beacon_chain;
let chain_head = beacon_chain.head()?;
let head = CanonicalHeadResponse {
Ok(CanonicalHeadResponse {
slot: chain_head.beacon_state.slot,
block_root: chain_head.beacon_block_root,
state_root: chain_head.beacon_state_root,
@@ -51,33 +50,27 @@ pub fn get_head<T: BeaconChainTypes>(
.epoch
.start_slot(T::EthSpec::slots_per_epoch()),
previous_justified_block_root: chain_head.beacon_state.previous_justified_checkpoint.root,
};
ResponseBuilder::new(&req)?.body(&head)
})
}
/// HTTP handler to return a list of head BeaconBlocks.
pub fn get_heads<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let heads = beacon_chain
/// Return the list of heads of the beacon chain.
pub fn get_heads<T: BeaconChainTypes>(ctx: Arc<Context<T>>) -> Vec<HeadBeaconBlock> {
ctx.beacon_chain
.heads()
.into_iter()
.map(|(beacon_block_root, beacon_block_slot)| HeadBeaconBlock {
beacon_block_root,
beacon_block_slot,
})
.collect::<Vec<_>>();
ResponseBuilder::new(&req)?.body(&heads)
.collect()
}
/// HTTP handler to return a `BeaconBlock` at a given `root` or `slot`.
pub fn get_block<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<BlockResponse<T::EthSpec>, ApiError> {
let beacon_chain = &ctx.beacon_chain;
let query_params = ["root", "slot"];
let (key, value) = UrlQuery::from_request(&req)?.first_of(&query_params)?;
@@ -85,7 +78,7 @@ pub fn get_block<T: BeaconChainTypes>(
("slot", value) => {
let target = parse_slot(&value)?;
block_root_at_slot(&beacon_chain, target)?.ok_or_else(|| {
block_root_at_slot(beacon_chain, target)?.ok_or_else(|| {
ApiError::NotFound(format!(
"Unable to find SignedBeaconBlock for slot {:?}",
target
@@ -103,30 +96,26 @@ pub fn get_block<T: BeaconChainTypes>(
))
})?;
let response = BlockResponse {
Ok(BlockResponse {
root: block_root,
beacon_block: block,
};
ResponseBuilder::new(&req)?.body(&response)
})
}
/// HTTP handler to return a `SignedBeaconBlock` root at a given `slot`.
pub fn get_block_root<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Hash256, ApiError> {
let slot_string = UrlQuery::from_request(&req)?.only_one("slot")?;
let target = parse_slot(&slot_string)?;
let root = block_root_at_slot(&beacon_chain, target)?.ok_or_else(|| {
block_root_at_slot(&ctx.beacon_chain, target)?.ok_or_else(|| {
ApiError::NotFound(format!(
"Unable to find SignedBeaconBlock for slot {:?}",
target
))
})?;
ResponseBuilder::new(&req)?.body(&root)
})
}
fn make_sse_response_chunk(new_head_hash: SignedBeaconBlockHash) -> std::io::Result<Bytes> {
@@ -140,45 +129,27 @@ fn make_sse_response_chunk(new_head_hash: SignedBeaconBlockHash) -> std::io::Res
Ok(bytes)
}
pub fn stream_forks<T: BeaconChainTypes>(
log: Logger,
mut events: BusReader<SignedBeaconBlockHash>,
) -> ApiResult {
pub fn stream_forks<T: BeaconChainTypes>(ctx: Arc<Context<T>>) -> Result<Body, ApiError> {
let mut events = ctx.events.lock().add_rx();
let (mut sender, body) = Body::channel();
std::thread::spawn(move || {
while let Ok(new_head_hash) = events.recv() {
let chunk = match make_sse_response_chunk(new_head_hash) {
Ok(chunk) => chunk,
Err(e) => {
error!(log, "Failed to make SSE chunk"; "error" => e.to_string());
error!(ctx.log, "Failed to make SSE chunk"; "error" => e.to_string());
sender.abort();
break;
}
};
match block_on(sender.send_data(chunk)) {
Err(e) if e.is_closed() => break,
Err(e) => error!(log, "Couldn't stream piece {:?}", e),
Err(e) => error!(ctx.log, "Couldn't stream piece {:?}", e),
Ok(_) => (),
}
}
});
let response = Response::builder()
.status(200)
.header("Content-Type", "text/event-stream")
.header("Connection", "Keep-Alive")
.header("Cache-Control", "no-cache")
.header("Access-Control-Allow-Origin", "*")
.body(body)
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e)))?;
Ok(response)
}
/// HTTP handler to return the `Fork` of the current head.
pub fn get_fork<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body(&beacon_chain.head()?.beacon_state.fork)
Ok(body)
}
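The refactored `stream_forks` keeps the server-sent-events body but moves header construction out of the handler. The framing that `make_sse_response_chunk` produces can be sketched as follows; the exact field layout of the real chunk is an assumption here (SSE only requires `data:` lines terminated by a blank line):

```rust
// Hypothetical sketch of SSE framing for a new head hash: one `data:`
// event per hash, terminated by a blank line per the SSE wire format.
fn sse_chunk(new_head_hash: &str) -> String {
    format!("data: {}\n\n", new_head_hash)
}

fn main() {
    let chunk = sse_chunk("0xdeadbeef");
    assert!(chunk.starts_with("data: "));
    assert!(chunk.ends_with("\n\n"));
    println!("sse framing ok");
}
```

Because the handler now returns only the `Body`, the `Content-Type: text/event-stream` and cache-control headers shown in the removed lines must be attached by the caller that builds the response.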
/// HTTP handler to which accepts a query string of a list of validator pubkeys and maps it to a
@@ -187,9 +158,9 @@ pub fn get_fork<T: BeaconChainTypes>(
/// This method is limited to as many `pubkeys` that can fit in a URL. See `post_validators` for
/// doing bulk requests.
pub fn get_validators<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorResponse>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let validator_pubkeys = query
@@ -204,17 +175,14 @@ pub fn get_validators<T: BeaconChainTypes>(
None
};
let validators =
validator_responses_by_pubkey(beacon_chain, state_root_opt, validator_pubkeys)?;
ResponseBuilder::new(&req)?.body(&validators)
validator_responses_by_pubkey(&ctx.beacon_chain, state_root_opt, validator_pubkeys)
}
/// HTTP handler to return all validators, each as a `ValidatorResponse`.
pub fn get_all_validators<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorResponse>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let state_root_opt = if let Some((_key, value)) = query.first_of_opt(&["state_root"]) {
@@ -223,23 +191,21 @@ pub fn get_all_validators<T: BeaconChainTypes>(
None
};
let mut state = get_state_from_root_opt(&beacon_chain, state_root_opt)?;
let mut state = get_state_from_root_opt(&ctx.beacon_chain, state_root_opt)?;
state.update_pubkey_cache()?;
let validators = state
state
.validators
.iter()
.map(|validator| validator_response_by_pubkey(&state, validator.pubkey.clone()))
.collect::<Result<Vec<_>, _>>()?;
ResponseBuilder::new(&req)?.body(&validators)
.collect::<Result<Vec<_>, _>>()
}
/// HTTP handler to return all active validators, each as a `ValidatorResponse`.
pub fn get_active_validators<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorResponse>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let state_root_opt = if let Some((_key, value)) = query.first_of_opt(&["state_root"]) {
@@ -248,17 +214,15 @@ pub fn get_active_validators<T: BeaconChainTypes>(
None
};
let mut state = get_state_from_root_opt(&beacon_chain, state_root_opt)?;
let mut state = get_state_from_root_opt(&ctx.beacon_chain, state_root_opt)?;
state.update_pubkey_cache()?;
let validators = state
state
.validators
.iter()
.filter(|validator| validator.is_active_at(state.current_epoch()))
.map(|validator| validator_response_by_pubkey(&state, validator.pubkey.clone()))
.collect::<Result<Vec<_>, _>>()?;
ResponseBuilder::new(&req)?.body(&validators)
.collect::<Result<Vec<_>, _>>()
}
/// HTTP handler which accepts a `ValidatorRequest` and returns a `ValidatorResponse` for
@@ -266,17 +230,11 @@ pub fn get_active_validators<T: BeaconChainTypes>(
///
/// This method allows for a basically unbounded list of `pubkeys`, whereas the `get_validators`
/// request is limited by the max number of pubkeys you can fit in a URL.
pub async fn post_validators<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let response_builder = ResponseBuilder::new(&req);
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice::<ValidatorRequest>(&chunks)
pub fn post_validators<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorResponse>, ApiError> {
serde_json::from_slice::<ValidatorRequest>(&req.into_body())
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to parse JSON into ValidatorRequest: {:?}",
@@ -285,12 +243,11 @@ pub async fn post_validators<T: BeaconChainTypes>(
})
.and_then(|bulk_request| {
validator_responses_by_pubkey(
beacon_chain,
&ctx.beacon_chain,
bulk_request.state_root,
bulk_request.pubkeys,
)
})
.and_then(|validators| response_builder?.body(&validators))
}
/// Returns either the state given by `state_root_opt`, or the canonical head state if it is
@@ -317,11 +274,11 @@ fn get_state_from_root_opt<T: BeaconChainTypes>(
/// Maps a vec of `validator_pubkey` to a vec of `ValidatorResponse`, using the state at the given
/// `state_root`. If `state_root.is_none()`, uses the canonical head state.
fn validator_responses_by_pubkey<T: BeaconChainTypes>(
beacon_chain: Arc<BeaconChain<T>>,
beacon_chain: &BeaconChain<T>,
state_root_opt: Option<Hash256>,
validator_pubkeys: Vec<PublicKeyBytes>,
) -> Result<Vec<ValidatorResponse>, ApiError> {
let mut state = get_state_from_root_opt(&beacon_chain, state_root_opt)?;
let mut state = get_state_from_root_opt(beacon_chain, state_root_opt)?;
state.update_pubkey_cache()?;
validator_pubkeys
@@ -372,24 +329,25 @@ fn validator_response_by_pubkey<E: EthSpec>(
/// HTTP handler
pub fn get_committees<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<Committee>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let epoch = query.epoch()?;
let mut state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let mut state =
get_state_for_epoch(&ctx.beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let relative_epoch = RelativeEpoch::from_epoch(state.current_epoch(), epoch).map_err(|e| {
ApiError::ServerError(format!("Failed to get state suitable for epoch: {:?}", e))
})?;
state
.build_committee_cache(relative_epoch, &beacon_chain.spec)
.build_committee_cache(relative_epoch, &ctx.beacon_chain.spec)
.map_err(|e| ApiError::ServerError(format!("Unable to build committee cache: {:?}", e)))?;
let committees = state
Ok(state
.get_beacon_committees_at_epoch(relative_epoch)
.map_err(|e| ApiError::ServerError(format!("Unable to get all committees: {:?}", e)))?
.into_iter()
@@ -398,9 +356,7 @@ pub fn get_committees<T: BeaconChainTypes>(
index: c.index,
committee: c.committee.to_vec(),
})
.collect::<Vec<_>>();
ResponseBuilder::new(&req)?.body(&committees)
.collect::<Vec<_>>())
}
/// HTTP handler to return a `BeaconState` at a given `root` or `slot`.
@@ -408,10 +364,10 @@ pub fn get_committees<T: BeaconChainTypes>(
/// Will not return a state if the request slot is in the future. Will return states higher than
/// the current head by skipping slots.
pub fn get_state<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let head_state = beacon_chain.head()?.beacon_state;
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<StateResponse<T::EthSpec>, ApiError> {
let head_state = ctx.beacon_chain.head()?.beacon_state;
let (key, value) = match UrlQuery::from_request(&req) {
Ok(query) => {
@@ -429,11 +385,12 @@ pub fn get_state<T: BeaconChainTypes>(
};
let (root, state): (Hash256, BeaconState<T::EthSpec>) = match (key.as_ref(), value) {
("slot", value) => state_at_slot(&beacon_chain, parse_slot(&value)?)?,
("slot", value) => state_at_slot(&ctx.beacon_chain, parse_slot(&value)?)?,
("root", value) => {
let root = &parse_root(&value)?;
let state = beacon_chain
let state = ctx
.beacon_chain
.store
.get_state(root, None)?
.ok_or_else(|| ApiError::NotFound(format!("No state for root: {:?}", root)))?;
@@ -443,12 +400,10 @@ pub fn get_state<T: BeaconChainTypes>(
_ => return Err(ApiError::ServerError("Unexpected query parameter".into())),
};
let response = StateResponse {
Ok(StateResponse {
root,
beacon_state: state,
};
ResponseBuilder::new(&req)?.body(&response)
})
}
/// HTTP handler to return a `BeaconState` root at a given `slot`.
@@ -456,15 +411,13 @@ pub fn get_state<T: BeaconChainTypes>(
/// Will not return a state if the request slot is in the future. Will return states higher than
/// the current head by skipping slots.
pub fn get_state_root<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Hash256, ApiError> {
let slot_string = UrlQuery::from_request(&req)?.only_one("slot")?;
let slot = parse_slot(&slot_string)?;
let root = state_root_at_slot(&beacon_chain, slot, StateSkipConfig::WithStateRoots)?;
ResponseBuilder::new(&req)?.body(&root)
state_root_at_slot(&ctx.beacon_chain, slot, StateSkipConfig::WithStateRoots)
}
/// HTTP handler to return a `BeaconState` at the genesis block.
@@ -472,50 +425,28 @@ pub fn get_state_root<T: BeaconChainTypes>(
/// This is an undocumented convenience method used during testing. For production, simply do a
/// state request at slot 0.
pub fn get_genesis_state<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let (_root, state) = state_at_slot(&beacon_chain, Slot::new(0))?;
ResponseBuilder::new(&req)?.body(&state)
ctx: Arc<Context<T>>,
) -> Result<BeaconState<T::EthSpec>, ApiError> {
state_at_slot(&ctx.beacon_chain, Slot::new(0)).map(|(_root, state)| state)
}
/// Read the genesis time from the current beacon chain state.
pub fn get_genesis_time<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body(&beacon_chain.head_info()?.genesis_time)
}
/// Read the `genesis_validators_root` from the current beacon chain state.
pub fn get_genesis_validators_root<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body(&beacon_chain.head_info()?.genesis_validators_root)
}
pub async fn proposer_slashing<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let response_builder = ResponseBuilder::new(&req);
pub fn proposer_slashing<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<bool, ApiError> {
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice::<ProposerSlashing>(&chunks)
serde_json::from_slice::<ProposerSlashing>(&body)
.map_err(|e| format!("Unable to parse JSON into ProposerSlashing: {:?}", e))
.and_then(move |proposer_slashing| {
if beacon_chain.eth1_chain.is_some() {
let obs_outcome = beacon_chain
if ctx.beacon_chain.eth1_chain.is_some() {
let obs_outcome = ctx
.beacon_chain
.verify_proposer_slashing_for_gossip(proposer_slashing)
.map_err(|e| format!("Error while verifying proposer slashing: {:?}", e))?;
if let ObservationOutcome::New(verified_proposer_slashing) = obs_outcome {
beacon_chain.import_proposer_slashing(verified_proposer_slashing);
ctx.beacon_chain
.import_proposer_slashing(verified_proposer_slashing);
Ok(())
} else {
Err("Proposer slashing for that validator index already known".into())
@@ -524,22 +455,17 @@ pub async fn proposer_slashing<T: BeaconChainTypes>(
Err("Cannot insert proposer slashing on node without Eth1 connection.".to_string())
}
})
.map_err(ApiError::BadRequest)
.and_then(|_| response_builder?.body(&true))
.map_err(ApiError::BadRequest)?;
Ok(true)
}
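The handler above only imports a slashing the first time it is observed for a given validator index. A hedged sketch of that gate, with a hypothetical `ObservationOutcome` and a `Vec` standing in for the op pool:

```rust
// Hypothetical stand-ins for the verified-slashing flow sketched above.
enum ObservationOutcome<T> {
    New(T),
    AlreadyKnown,
}

// Import only newly-observed items; duplicates become a BadRequest-style error.
fn import_if_new<T>(outcome: ObservationOutcome<T>, pool: &mut Vec<T>) -> Result<(), String> {
    match outcome {
        ObservationOutcome::New(verified) => {
            pool.push(verified);
            Ok(())
        }
        ObservationOutcome::AlreadyKnown => {
            Err("Proposer slashing for that validator index already known".into())
        }
    }
}
```

In the real code the verification step (`verify_proposer_slashing_for_gossip`) produces the outcome, and the error is mapped into `ApiError::BadRequest`.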
pub async fn attester_slashing<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let response_builder = ResponseBuilder::new(&req);
pub fn attester_slashing<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<bool, ApiError> {
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice::<AttesterSlashing<T::EthSpec>>(&chunks)
serde_json::from_slice::<AttesterSlashing<T::EthSpec>>(&body)
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to parse JSON into AttesterSlashing: {:?}",
@@ -547,13 +473,13 @@ pub async fn attester_slashing<T: BeaconChainTypes>(
))
})
.and_then(move |attester_slashing| {
if beacon_chain.eth1_chain.is_some() {
beacon_chain
if ctx.beacon_chain.eth1_chain.is_some() {
ctx.beacon_chain
.verify_attester_slashing_for_gossip(attester_slashing)
.map_err(|e| format!("Error while verifying attester slashing: {:?}", e))
.and_then(|outcome| {
if let ObservationOutcome::New(verified_attester_slashing) = outcome {
beacon_chain
ctx.beacon_chain
.import_attester_slashing(verified_attester_slashing)
.map_err(|e| {
format!("Error while importing attester slashing: {:?}", e)
@@ -568,6 +494,7 @@ pub async fn attester_slashing<T: BeaconChainTypes>(
"Cannot insert attester slashing on node without Eth1 connection.".to_string(),
))
}
})
.and_then(|_| response_builder?.body(&true))
})?;
Ok(true)
}


@@ -1,8 +1,7 @@
use crate::helpers::*;
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult, UrlQuery};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use hyper::{Body, Request};
use crate::{ApiError, Context, UrlQuery};
use beacon_chain::BeaconChainTypes;
use hyper::Request;
use rest_types::{IndividualVotesRequest, IndividualVotesResponse};
use serde::{Deserialize, Serialize};
use ssz_derive::{Decode, Encode};
@@ -50,38 +49,31 @@ impl Into<VoteCount> for TotalBalances {
/// HTTP handler to return a `VoteCount` for a given `Epoch`.
pub fn get_vote_count<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<VoteCount, ApiError> {
let query = UrlQuery::from_request(&req)?;
let epoch = query.epoch()?;
// This is the last slot of the given epoch (one prior to the first slot of the next epoch).
let target_slot = (epoch + 1).start_slot(T::EthSpec::slots_per_epoch()) - 1;
let (_root, state) = state_at_slot(&beacon_chain, target_slot)?;
let spec = &beacon_chain.spec;
let (_root, state) = state_at_slot(&ctx.beacon_chain, target_slot)?;
let spec = &ctx.beacon_chain.spec;
let mut validator_statuses = ValidatorStatuses::new(&state, spec)?;
validator_statuses.process_attestations(&state, spec)?;
let report: VoteCount = validator_statuses.total_balances.into();
ResponseBuilder::new(&req)?.body(&report)
Ok(validator_statuses.total_balances.into())
}
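The `target_slot` arithmetic above picks the last slot of the requested epoch, i.e. one slot before the first slot of the next epoch. A small worked form (32 slots per epoch is the mainnet value, used here only as an example):

```rust
// Equivalent of `(epoch + 1).start_slot(slots_per_epoch) - 1` in the handler above.
fn last_slot_of_epoch(epoch: u64, slots_per_epoch: u64) -> u64 {
    (epoch + 1) * slots_per_epoch - 1
}
```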
pub async fn post_individual_votes<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let response_builder = ResponseBuilder::new(&req);
pub fn post_individual_votes<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<IndividualVotesResponse>, ApiError> {
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice::<IndividualVotesRequest>(&chunks)
serde_json::from_slice::<IndividualVotesRequest>(&body)
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to parse JSON into ValidatorDutiesRequest: {:?}",
@@ -94,8 +86,8 @@ pub async fn post_individual_votes<T: BeaconChainTypes>(
// This is the last slot of the given epoch (one prior to the first slot of the next epoch).
let target_slot = (epoch + 1).start_slot(T::EthSpec::slots_per_epoch()) - 1;
let (_root, mut state) = state_at_slot(&beacon_chain, target_slot)?;
let spec = &beacon_chain.spec;
let (_root, mut state) = state_at_slot(&ctx.beacon_chain, target_slot)?;
let spec = &ctx.beacon_chain.spec;
let mut validator_statuses = ValidatorStatuses::new(&state, spec)?;
validator_statuses.process_attestations(&state, spec)?;
@@ -135,5 +127,4 @@ pub async fn post_individual_votes<T: BeaconChainTypes>(
})
.collect::<Result<Vec<_>, _>>()
})
.and_then(|votes| response_builder?.body_no_ssz(&votes))
}


@@ -1,9 +1,7 @@
use crate::{ApiError, ApiResult, NetworkChannel};
use crate::{ApiError, NetworkChannel};
use beacon_chain::{BeaconChain, BeaconChainTypes, StateSkipConfig};
use bls::PublicKeyBytes;
use eth2_libp2p::PubsubMessage;
use http::header;
use hyper::{Body, Request};
use itertools::process_results;
use network::NetworkMessage;
use ssz::Decode;
@@ -41,21 +39,6 @@ pub fn parse_committee_index(string: &str) -> Result<CommitteeIndex, ApiError> {
.map_err(|e| ApiError::BadRequest(format!("Unable to parse committee index: {:?}", e)))
}
/// Checks the provided request to validate its `content-type` header.
///
/// The content-type header should either be omitted, in which case JSON is assumed, or it should
/// explicitly specify `application/json`. If anything else is provided, an error is returned.
pub fn check_content_type_for_json(req: &Request<Body>) -> Result<(), ApiError> {
match req.headers().get(header::CONTENT_TYPE) {
Some(h) if h == "application/json" => Ok(()),
Some(h) => Err(ApiError::BadRequest(format!(
"The provided content-type {:?} is not available, this endpoint only supports json.",
h
))),
_ => Ok(()),
}
}
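The acceptance rule in `check_content_type_for_json` (removed in this diff) reduces to: an omitted header means JSON, `application/json` is accepted, and anything else is rejected. A std-only sketch of that rule over a plain optional header value (a simplified stand-in for the hyper header lookup):

```rust
fn content_type_ok(header: Option<&str>) -> Result<(), String> {
    match header {
        Some("application/json") => Ok(()),
        Some(other) => Err(format!(
            "The provided content-type {:?} is not available, this endpoint only supports json.",
            other
        )),
        // An omitted header defaults to JSON.
        None => Ok(()),
    }
}
```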
/// Parse an SSZ object from some hex-encoded bytes.
///
/// E.g., A signature is `"0x0000000000000000000000000000000000000000000000000000000000000000"`
@@ -228,14 +211,8 @@ pub fn state_root_at_slot<T: BeaconChainTypes>(
}
}
pub fn implementation_pending_response(_req: Request<Body>) -> ApiResult {
Err(ApiError::NotImplemented(
"API endpoint has not yet been implemented, but is planned to be soon.".to_owned(),
))
}
pub fn publish_beacon_block_to_network<T: BeaconChainTypes + 'static>(
chan: NetworkChannel<T::EthSpec>,
chan: &NetworkChannel<T::EthSpec>,
block: SignedBeaconBlock<T::EthSpec>,
) -> Result<(), ApiError> {
// send the block via SSZ encoding


@@ -1,22 +1,15 @@
#[macro_use]
mod macros;
#[macro_use]
extern crate lazy_static;
mod router;
extern crate network as client_network;
mod advanced;
mod beacon;
pub mod config;
mod consensus;
mod error;
mod helpers;
mod lighthouse;
mod metrics;
mod network;
mod node;
mod response_builder;
mod router;
mod spec;
mod url_query;
mod validator;
@@ -24,7 +17,6 @@ use beacon_chain::{BeaconChain, BeaconChainTypes};
use bus::Bus;
use client_network::NetworkMessage;
pub use config::ApiEncodingFormat;
use error::{ApiError, ApiResult};
use eth2_config::Eth2Config;
use eth2_libp2p::NetworkGlobals;
use futures::future::TryFutureExt;
@@ -32,6 +24,7 @@ use hyper::server::conn::AddrStream;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Server};
use parking_lot::Mutex;
use rest_types::ApiError;
use slog::{info, warn};
use std::net::SocketAddr;
use std::path::PathBuf;
@@ -42,6 +35,7 @@ use url_query::UrlQuery;
pub use crate::helpers::parse_pubkey_bytes;
pub use config::Config;
pub use router::Context;
pub type NetworkChannel<T> = mpsc::UnboundedSender<NetworkMessage<T>>;
@@ -63,36 +57,28 @@ pub fn start_server<T: BeaconChainTypes>(
events: Arc<Mutex<Bus<SignedBeaconBlockHash>>>,
) -> Result<SocketAddr, hyper::Error> {
let log = executor.log();
let inner_log = log.clone();
let rest_api_config = Arc::new(config.clone());
let eth2_config = Arc::new(eth2_config);
let context = Arc::new(Context {
executor: executor.clone(),
config: config.clone(),
beacon_chain,
network_globals: network_info.network_globals.clone(),
network_chan: network_info.network_chan,
eth2_config,
log: log.clone(),
db_path,
freezer_db_path,
events,
});
// Define the function that will build the request handler.
let make_service = make_service_fn(move |_socket: &AddrStream| {
let beacon_chain = beacon_chain.clone();
let log = inner_log.clone();
let rest_api_config = rest_api_config.clone();
let eth2_config = eth2_config.clone();
let network_globals = network_info.network_globals.clone();
let network_channel = network_info.network_chan.clone();
let db_path = db_path.clone();
let freezer_db_path = freezer_db_path.clone();
let events = events.clone();
let ctx = context.clone();
async move {
Ok::<_, hyper::Error>(service_fn(move |req: Request<Body>| {
router::route(
req,
beacon_chain.clone(),
network_globals.clone(),
network_channel.clone(),
rest_api_config.clone(),
eth2_config.clone(),
log.clone(),
db_path.clone(),
freezer_db_path.clone(),
events.clone(),
)
router::on_http_request(req, ctx.clone())
}))
}
});


@@ -1,24 +1,16 @@
//! This contains a collection of lighthouse specific HTTP endpoints.
use crate::response_builder::ResponseBuilder;
use crate::ApiResult;
use eth2_libp2p::{NetworkGlobals, PeerInfo};
use hyper::{Body, Request};
use crate::{ApiError, Context};
use beacon_chain::BeaconChainTypes;
use eth2_libp2p::PeerInfo;
use serde::Serialize;
use std::sync::Arc;
use types::EthSpec;
/// The syncing state of the beacon node.
pub fn syncing<T: EthSpec>(
req: Request<Body>,
network_globals: Arc<NetworkGlobals<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(&network_globals.sync_state())
}
/// Returns all known peers and corresponding information
pub fn peers<T: EthSpec>(req: Request<Body>, network_globals: Arc<NetworkGlobals<T>>) -> ApiResult {
let peers: Vec<Peer<T>> = network_globals
pub fn peers<T: BeaconChainTypes>(ctx: Arc<Context<T>>) -> Result<Vec<Peer<T::EthSpec>>, ApiError> {
Ok(ctx
.network_globals
.peers
.read()
.peers()
@@ -26,16 +18,15 @@ pub fn peers<T: EthSpec>(req: Request<Body>, network_globals: Arc<NetworkGlobals
peer_id: peer_id.to_string(),
peer_info: peer_info.clone(),
})
.collect();
ResponseBuilder::new(&req)?.body_no_ssz(&peers)
.collect())
}
/// Returns all known connected peers and their corresponding information
pub fn connected_peers<T: EthSpec>(
req: Request<Body>,
network_globals: Arc<NetworkGlobals<T>>,
) -> ApiResult {
let peers: Vec<Peer<T>> = network_globals
pub fn connected_peers<T: BeaconChainTypes>(
ctx: Arc<Context<T>>,
) -> Result<Vec<Peer<T::EthSpec>>, ApiError> {
Ok(ctx
.network_globals
.peers
.read()
.connected_peers()
@@ -43,14 +34,13 @@ pub fn connected_peers<T: EthSpec>(
peer_id: peer_id.to_string(),
peer_info: peer_info.clone(),
})
.collect();
ResponseBuilder::new(&req)?.body_no_ssz(&peers)
.collect())
}
/// Information returned by `peers` and `connected_peers`.
#[derive(Clone, Debug, Serialize)]
#[serde(bound = "T: EthSpec")]
struct Peer<T: EthSpec> {
pub struct Peer<T: EthSpec> {
/// The Peer's ID
peer_id: String,
/// The PeerInfo associated with the peer.


@@ -1,11 +0,0 @@
macro_rules! try_future {
($expr:expr) => {
match $expr {
core::result::Result::Ok(val) => val,
core::result::Result::Err(err) => return Err(std::convert::From::from(err)),
}
};
($expr:expr,) => {
$crate::try_future!($expr)
};
}


@@ -1,42 +1,38 @@
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use hyper::{Body, Request};
use crate::{ApiError, Context};
use beacon_chain::BeaconChainTypes;
use lighthouse_metrics::{Encoder, TextEncoder};
use rest_types::Health;
use std::path::PathBuf;
use std::sync::Arc;
pub use lighthouse_metrics::*;
lazy_static! {
pub static ref BEACON_HTTP_API_REQUESTS_TOTAL: Result<IntCounterVec> =
try_create_int_counter_vec(
"beacon_http_api_requests_total",
"Count of HTTP requests received",
&["endpoint"]
);
pub static ref BEACON_HTTP_API_SUCCESS_TOTAL: Result<IntCounterVec> =
try_create_int_counter_vec(
"beacon_http_api_success_total",
"Count of HTTP requests that returned 200 OK",
&["endpoint"]
);
pub static ref BEACON_HTTP_API_ERROR_TOTAL: Result<IntCounterVec> = try_create_int_counter_vec(
"beacon_http_api_error_total",
"Count of HTTP that did not return 200 OK",
&["endpoint"]
);
pub static ref BEACON_HTTP_API_TIMES_TOTAL: Result<HistogramVec> = try_create_histogram_vec(
"beacon_http_api_times_total",
"Duration to process HTTP requests",
&["endpoint"]
);
pub static ref REQUEST_RESPONSE_TIME: Result<Histogram> = try_create_histogram(
"http_server_request_duration_seconds",
"Time taken to build a response to a HTTP request"
);
pub static ref REQUEST_COUNT: Result<IntCounter> = try_create_int_counter(
"http_server_request_total",
"Total count of HTTP requests received"
);
pub static ref SUCCESS_COUNT: Result<IntCounter> = try_create_int_counter(
"http_server_success_total",
"Total count of HTTP 200 responses sent"
);
pub static ref VALIDATOR_GET_BLOCK_REQUEST_RESPONSE_TIME: Result<Histogram> =
try_create_histogram(
"http_server_validator_block_get_request_duration_seconds",
"Time taken to respond to GET /validator/block"
);
pub static ref VALIDATOR_GET_ATTESTATION_REQUEST_RESPONSE_TIME: Result<Histogram> =
try_create_histogram(
"http_server_validator_attestation_get_request_duration_seconds",
"Time taken to respond to GET /validator/attestation"
);
pub static ref VALIDATOR_GET_DUTIES_REQUEST_RESPONSE_TIME: Result<Histogram> =
try_create_histogram(
"http_server_validator_duties_get_request_duration_seconds",
"Time taken to respond to GET /validator/duties"
);
pub static ref PROCESS_NUM_THREADS: Result<IntGauge> = try_create_int_gauge(
"process_num_threads",
"Number of threads used by the current process"
@@ -77,11 +73,8 @@ lazy_static! {
///
/// This is a HTTP handler method.
pub fn get_prometheus<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
db_path: PathBuf,
freezer_db_path: PathBuf,
) -> ApiResult {
ctx: Arc<Context<T>>,
) -> std::result::Result<String, ApiError> {
let mut buffer = vec![];
let encoder = TextEncoder::new();
@@ -101,9 +94,9 @@ pub fn get_prometheus<T: BeaconChainTypes>(
// using `lighthouse_metrics::gather(..)` to collect the global `DEFAULT_REGISTRY` metrics into
// a string that can be returned via HTTP.
slot_clock::scrape_for_metrics::<T::EthSpec, T::SlotClock>(&beacon_chain.slot_clock);
store::scrape_for_metrics(&db_path, &freezer_db_path);
beacon_chain::scrape_for_metrics(&beacon_chain);
slot_clock::scrape_for_metrics::<T::EthSpec, T::SlotClock>(&ctx.beacon_chain.slot_clock);
store::scrape_for_metrics(&ctx.db_path, &ctx.freezer_db_path);
beacon_chain::scrape_for_metrics(&ctx.beacon_chain);
eth2_libp2p::scrape_discovery_metrics();
// This will silently fail if we are unable to observe the health. This is desired behaviour
@@ -133,6 +126,5 @@ pub fn get_prometheus<T: BeaconChainTypes>(
.unwrap();
String::from_utf8(buffer)
.map(|string| ResponseBuilder::new(&req)?.body_text(string))
.map_err(|e| ApiError::ServerError(format!("Failed to encode prometheus info: {:?}", e)))?
.map_err(|e| ApiError::ServerError(format!("Failed to encode prometheus info: {:?}", e)))
}


@@ -1,72 +0,0 @@
use crate::error::ApiResult;
use crate::response_builder::ResponseBuilder;
use crate::NetworkGlobals;
use beacon_chain::BeaconChainTypes;
use eth2_libp2p::{Multiaddr, PeerId};
use hyper::{Body, Request};
use std::sync::Arc;
/// HTTP handler to return the list of libp2p multiaddr the client is listening on.
///
/// Returns a list of `Multiaddr`, serialized according to their `serde` impl.
pub fn get_listen_addresses<T: BeaconChainTypes>(
req: Request<Body>,
network: Arc<NetworkGlobals<T::EthSpec>>,
) -> ApiResult {
let multiaddresses: Vec<Multiaddr> = network.listen_multiaddrs();
ResponseBuilder::new(&req)?.body_no_ssz(&multiaddresses)
}
/// HTTP handler to return the network port the client is listening on.
///
/// Returns the TCP port number in its plain form (which is also valid JSON serialization)
pub fn get_listen_port<T: BeaconChainTypes>(
req: Request<Body>,
network: Arc<NetworkGlobals<T::EthSpec>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body(&network.listen_port_tcp())
}
/// HTTP handler to return the Discv5 ENR from the client's libp2p service.
///
/// ENR is encoded as base64 string.
pub fn get_enr<T: BeaconChainTypes>(
req: Request<Body>,
network: Arc<NetworkGlobals<T::EthSpec>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(&network.local_enr().to_base64())
}
/// HTTP handler to return the `PeerId` from the client's libp2p service.
///
/// PeerId is encoded as base58 string.
pub fn get_peer_id<T: BeaconChainTypes>(
req: Request<Body>,
network: Arc<NetworkGlobals<T::EthSpec>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(&network.local_peer_id().to_base58())
}
/// HTTP handler to return the number of peers connected in the client's libp2p service.
pub fn get_peer_count<T: BeaconChainTypes>(
req: Request<Body>,
network: Arc<NetworkGlobals<T::EthSpec>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body(&network.connected_peers())
}
/// HTTP handler to return the list of peers connected to the client's libp2p service.
///
/// Peers are presented as a list of `PeerId::to_string()`.
pub fn get_peer_list<T: BeaconChainTypes>(
req: Request<Body>,
network: Arc<NetworkGlobals<T::EthSpec>>,
) -> ApiResult {
let connected_peers: Vec<String> = network
.peers
.read()
.connected_peer_ids()
.map(PeerId::to_string)
.collect();
ResponseBuilder::new(&req)?.body_no_ssz(&connected_peers)
}


@@ -1,23 +1,19 @@
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult};
use eth2_libp2p::{types::SyncState, NetworkGlobals};
use hyper::{Body, Request};
use lighthouse_version::version_with_platform;
use rest_types::{Health, SyncingResponse, SyncingStatus};
use crate::{ApiError, Context};
use beacon_chain::BeaconChainTypes;
use eth2_libp2p::types::SyncState;
use rest_types::{SyncingResponse, SyncingStatus};
use std::sync::Arc;
use types::{EthSpec, Slot};
use types::Slot;
/// Read the version string from the current Lighthouse build.
pub fn get_version(req: Request<Body>) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(&version_with_platform())
}
/// Returns a syncing status.
pub fn syncing<T: BeaconChainTypes>(ctx: Arc<Context<T>>) -> Result<SyncingResponse, ApiError> {
let current_slot = ctx
.beacon_chain
.head_info()
.map_err(|e| ApiError::ServerError(format!("Unable to read head slot: {:?}", e)))?
.slot;
pub fn syncing<T: EthSpec>(
req: Request<Body>,
network: Arc<NetworkGlobals<T>>,
current_slot: Slot,
) -> ApiResult {
let (starting_slot, highest_slot) = match network.sync_state() {
let (starting_slot, highest_slot) = match ctx.network_globals.sync_state() {
SyncState::SyncingFinalized {
start_slot,
head_slot,
@@ -36,14 +32,8 @@ pub fn syncing<T: EthSpec>(
highest_slot,
};
ResponseBuilder::new(&req)?.body(&SyncingResponse {
is_syncing: network.is_syncing(),
Ok(SyncingResponse {
is_syncing: ctx.network_globals.is_syncing(),
sync_status,
})
}
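The match above flattens the network's sync state into a `(starting_slot, highest_slot)` pair. A hedged sketch with a simplified `SyncState` (variant names assumed from the code shown; the real `eth2_libp2p` type carries more variants and fields):

```rust
enum SyncState {
    SyncingFinalized { start_slot: u64, head_slot: u64 },
    SyncingHead { start_slot: u64, head_slot: u64 },
    Synced,
}

// Syncing states report their slot range; a synced node reports (0, 0).
fn slot_range(state: &SyncState) -> (u64, u64) {
    match state {
        SyncState::SyncingFinalized { start_slot, head_slot }
        | SyncState::SyncingHead { start_slot, head_slot } => (*start_slot, *head_slot),
        SyncState::Synced => (0, 0),
    }
}
```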
pub fn get_health(req: Request<Body>) -> ApiResult {
let health = Health::observe().map_err(ApiError::ServerError)?;
ResponseBuilder::new(&req)?.body_no_ssz(&health)
}


@@ -1,83 +0,0 @@
use super::{ApiError, ApiResult};
use crate::config::ApiEncodingFormat;
use hyper::header;
use hyper::{Body, Request, Response, StatusCode};
use serde::Serialize;
use ssz::Encode;
pub struct ResponseBuilder {
encoding: ApiEncodingFormat,
}
impl ResponseBuilder {
pub fn new(req: &Request<Body>) -> Result<Self, ApiError> {
let accept_header: String = req
.headers()
.get(header::ACCEPT)
.map_or(Ok(""), |h| h.to_str())
.map_err(|e| {
ApiError::BadRequest(format!(
"The Accept header contains invalid characters: {:?}",
e
))
})
.map(String::from)?;
// JSON is our default encoding, unless something else is requested.
let encoding = ApiEncodingFormat::from(accept_header.as_str());
Ok(Self { encoding })
}
pub fn body<T: Serialize + Encode>(self, item: &T) -> ApiResult {
match self.encoding {
ApiEncodingFormat::SSZ => Response::builder()
.status(StatusCode::OK)
.header("content-type", "application/ssz")
.body(Body::from(item.as_ssz_bytes()))
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e))),
_ => self.body_no_ssz(item),
}
}
pub fn body_no_ssz<T: Serialize>(self, item: &T) -> ApiResult {
let (body, content_type) = match self.encoding {
ApiEncodingFormat::JSON => (
Body::from(serde_json::to_string(&item).map_err(|e| {
ApiError::ServerError(format!(
"Unable to serialize response body as JSON: {:?}",
e
))
})?),
"application/json",
),
ApiEncodingFormat::SSZ => {
return Err(ApiError::UnsupportedType(
"Response cannot be encoded as SSZ.".into(),
));
}
ApiEncodingFormat::YAML => (
Body::from(serde_yaml::to_string(&item).map_err(|e| {
ApiError::ServerError(format!(
"Unable to serialize response body as YAML: {:?}",
e
))
})?),
"application/yaml",
),
};
Response::builder()
.status(StatusCode::OK)
.header("content-type", content_type)
.body(body)
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e)))
}
pub fn body_text(self, text: String) -> ApiResult {
Response::builder()
.status(StatusCode::OK)
.header("content-type", "text/plain; charset=utf-8")
.body(Body::from(text))
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e)))
}
}
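`ResponseBuilder`'s content negotiation hinges on mapping the `Accept` header string to an encoding, falling back to JSON for anything unrecognised (including an absent header). A minimal sketch of that mapping; the exact header strings are an assumption inferred from the content types emitted above:

```rust
#[derive(Debug, PartialEq)]
enum ApiEncodingFormat {
    Json,
    Yaml,
    Ssz,
}

impl From<&str> for ApiEncodingFormat {
    fn from(accept: &str) -> Self {
        match accept {
            "application/yaml" => ApiEncodingFormat::Yaml,
            "application/ssz" => ApiEncodingFormat::Ssz,
            // JSON is the default, including for an empty or unknown header.
            _ => ApiEncodingFormat::Json,
        }
    }
}
```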


@@ -1,241 +1,322 @@
use crate::{
advanced, beacon, config::Config, consensus, error::ApiError, helpers, lighthouse, metrics,
network, node, spec, validator, NetworkChannel,
beacon, config::Config, consensus, lighthouse, metrics, node, validator, NetworkChannel,
};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use bus::Bus;
use environment::TaskExecutor;
use eth2_config::Eth2Config;
use eth2_libp2p::NetworkGlobals;
use eth2_libp2p::{NetworkGlobals, PeerId};
use hyper::header::HeaderValue;
use hyper::{Body, Method, Request, Response};
use lighthouse_version::version_with_platform;
use operation_pool::PersistedOperationPool;
use parking_lot::Mutex;
use rest_types::{ApiError, Handler, Health};
use slog::debug;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Instant;
use types::{SignedBeaconBlockHash, Slot};
use types::{EthSpec, SignedBeaconBlockHash};
// Allowing more than 7 arguments.
#[allow(clippy::too_many_arguments)]
pub async fn route<T: BeaconChainTypes>(
pub struct Context<T: BeaconChainTypes> {
pub executor: TaskExecutor,
pub config: Config,
pub beacon_chain: Arc<BeaconChain<T>>,
pub network_globals: Arc<NetworkGlobals<T::EthSpec>>,
pub network_chan: NetworkChannel<T::EthSpec>,
pub eth2_config: Arc<Eth2Config>,
pub log: slog::Logger,
pub db_path: PathBuf,
pub freezer_db_path: PathBuf,
pub events: Arc<Mutex<Bus<SignedBeaconBlockHash>>>,
}
pub async fn on_http_request<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
network_channel: NetworkChannel<T::EthSpec>,
rest_api_config: Arc<Config>,
eth2_config: Arc<Eth2Config>,
local_log: slog::Logger,
db_path: PathBuf,
freezer_db_path: PathBuf,
events: Arc<Mutex<Bus<SignedBeaconBlockHash>>>,
ctx: Arc<Context<T>>,
) -> Result<Response<Body>, ApiError> {
metrics::inc_counter(&metrics::REQUEST_COUNT);
let received_instant = Instant::now();
let path = req.uri().path().to_string();
let log = local_log.clone();
let result = {
let _timer = metrics::start_timer(&metrics::REQUEST_RESPONSE_TIME);
let _timer = metrics::start_timer_vec(&metrics::BEACON_HTTP_API_TIMES_TOTAL, &[&path]);
metrics::inc_counter_vec(&metrics::BEACON_HTTP_API_REQUESTS_TOTAL, &[&path]);
match (req.method(), path.as_ref()) {
// Methods for Client
(&Method::GET, "/node/health") => node::get_health(req),
(&Method::GET, "/node/version") => node::get_version(req),
(&Method::GET, "/node/syncing") => {
// report the current slot, or default to 0
let current_slot = beacon_chain
.head_info()
.map(|info| info.slot)
.unwrap_or_else(|_| Slot::from(0u64));
let received_instant = Instant::now();
let log = ctx.log.clone();
let allow_origin = ctx.config.allow_origin.clone();
node::syncing::<T::EthSpec>(req, network_globals, current_slot)
}
// Methods for Network
(&Method::GET, "/network/enr") => network::get_enr::<T>(req, network_globals),
(&Method::GET, "/network/peer_count") => {
network::get_peer_count::<T>(req, network_globals)
}
(&Method::GET, "/network/peer_id") => network::get_peer_id::<T>(req, network_globals),
(&Method::GET, "/network/peers") => network::get_peer_list::<T>(req, network_globals),
(&Method::GET, "/network/listen_port") => {
network::get_listen_port::<T>(req, network_globals)
}
(&Method::GET, "/network/listen_addresses") => {
network::get_listen_addresses::<T>(req, network_globals)
}
// Methods for Beacon Node
(&Method::GET, "/beacon/head") => beacon::get_head::<T>(req, beacon_chain),
(&Method::GET, "/beacon/heads") => beacon::get_heads::<T>(req, beacon_chain),
(&Method::GET, "/beacon/block") => beacon::get_block::<T>(req, beacon_chain),
(&Method::GET, "/beacon/block_root") => beacon::get_block_root::<T>(req, beacon_chain),
(&Method::GET, "/beacon/fork") => beacon::get_fork::<T>(req, beacon_chain),
(&Method::GET, "/beacon/fork/stream") => {
let reader = events.lock().add_rx();
beacon::stream_forks::<T>(log, reader)
}
(&Method::GET, "/beacon/genesis_time") => {
beacon::get_genesis_time::<T>(req, beacon_chain)
}
(&Method::GET, "/beacon/genesis_validators_root") => {
beacon::get_genesis_validators_root::<T>(req, beacon_chain)
}
(&Method::GET, "/beacon/validators") => beacon::get_validators::<T>(req, beacon_chain),
(&Method::POST, "/beacon/validators") => {
beacon::post_validators::<T>(req, beacon_chain).await
}
(&Method::GET, "/beacon/validators/all") => {
beacon::get_all_validators::<T>(req, beacon_chain)
}
(&Method::GET, "/beacon/validators/active") => {
beacon::get_active_validators::<T>(req, beacon_chain)
}
(&Method::GET, "/beacon/state") => beacon::get_state::<T>(req, beacon_chain),
(&Method::GET, "/beacon/state_root") => beacon::get_state_root::<T>(req, beacon_chain),
(&Method::GET, "/beacon/state/genesis") => {
beacon::get_genesis_state::<T>(req, beacon_chain)
}
(&Method::GET, "/beacon/committees") => beacon::get_committees::<T>(req, beacon_chain),
(&Method::POST, "/beacon/proposer_slashing") => {
beacon::proposer_slashing::<T>(req, beacon_chain).await
}
(&Method::POST, "/beacon/attester_slashing") => {
beacon::attester_slashing::<T>(req, beacon_chain).await
}
// Methods for Validator
(&Method::POST, "/validator/duties") => {
let timer =
metrics::start_timer(&metrics::VALIDATOR_GET_DUTIES_REQUEST_RESPONSE_TIME);
let response = validator::post_validator_duties::<T>(req, beacon_chain);
drop(timer);
response.await
}
(&Method::POST, "/validator/subscribe") => {
validator::post_validator_subscriptions::<T>(req, network_channel).await
}
(&Method::GET, "/validator/duties/all") => {
validator::get_all_validator_duties::<T>(req, beacon_chain)
}
(&Method::GET, "/validator/duties/active") => {
validator::get_active_validator_duties::<T>(req, beacon_chain)
}
(&Method::GET, "/validator/block") => {
let timer =
metrics::start_timer(&metrics::VALIDATOR_GET_BLOCK_REQUEST_RESPONSE_TIME);
let response = validator::get_new_beacon_block::<T>(req, beacon_chain, log);
drop(timer);
response
}
(&Method::POST, "/validator/block") => {
validator::publish_beacon_block::<T>(req, beacon_chain, network_channel, log).await
}
(&Method::GET, "/validator/attestation") => {
let timer =
metrics::start_timer(&metrics::VALIDATOR_GET_ATTESTATION_REQUEST_RESPONSE_TIME);
let response = validator::get_new_attestation::<T>(req, beacon_chain);
drop(timer);
response
}
(&Method::GET, "/validator/aggregate_attestation") => {
validator::get_aggregate_attestation::<T>(req, beacon_chain)
}
(&Method::POST, "/validator/attestations") => {
validator::publish_attestations::<T>(req, beacon_chain, network_channel, log).await
}
(&Method::POST, "/validator/aggregate_and_proofs") => {
validator::publish_aggregate_and_proofs::<T>(
req,
beacon_chain,
network_channel,
log,
)
.await
}
// Methods for consensus
(&Method::GET, "/consensus/global_votes") => {
consensus::get_vote_count::<T>(req, beacon_chain)
}
(&Method::POST, "/consensus/individual_votes") => {
consensus::post_individual_votes::<T>(req, beacon_chain).await
}
// Methods for bootstrap and checking configuration
(&Method::GET, "/spec") => spec::get_spec::<T>(req, beacon_chain),
(&Method::GET, "/spec/slots_per_epoch") => spec::get_slots_per_epoch::<T>(req),
(&Method::GET, "/spec/deposit_contract") => {
helpers::implementation_pending_response(req)
}
(&Method::GET, "/spec/eth2_config") => spec::get_eth2_config::<T>(req, eth2_config),
// Methods for advanced parameters
(&Method::GET, "/advanced/fork_choice") => {
advanced::get_fork_choice::<T>(req, beacon_chain)
}
(&Method::GET, "/advanced/operation_pool") => {
advanced::get_operation_pool::<T>(req, beacon_chain)
}
(&Method::GET, "/metrics") => {
metrics::get_prometheus::<T>(req, beacon_chain, db_path, freezer_db_path)
}
// Lighthouse specific
(&Method::GET, "/lighthouse/syncing") => {
lighthouse::syncing::<T::EthSpec>(req, network_globals)
}
(&Method::GET, "/lighthouse/peers") => {
lighthouse::peers::<T::EthSpec>(req, network_globals)
}
(&Method::GET, "/lighthouse/connected_peers") => {
lighthouse::connected_peers::<T::EthSpec>(req, network_globals)
}
_ => Err(ApiError::NotFound(
"Request path and/or method not found.".to_owned(),
)),
}
};
let request_processing_duration = Instant::now().duration_since(received_instant);
// Map the Rust-friendly `Result` into an HTTP-friendly response. In effect, this ensures that
// any `Err` returned from our response handlers becomes a valid HTTP response to the client
// (e.g., a response with a 404 or 500 status).
match result {
match route(req, ctx).await {
Ok(mut response) => {
if rest_api_config.allow_origin != "" {
metrics::inc_counter_vec(&metrics::BEACON_HTTP_API_SUCCESS_TOTAL, &[&path]);
if allow_origin != "" {
let headers = response.headers_mut();
headers.insert(
hyper::header::ACCESS_CONTROL_ALLOW_ORIGIN,
HeaderValue::from_str(&rest_api_config.allow_origin)?,
HeaderValue::from_str(&allow_origin)?,
);
headers.insert(hyper::header::VARY, HeaderValue::from_static("Origin"));
}
debug!(
local_log,
log,
"HTTP API request successful";
"path" => path,
"duration_ms" => request_processing_duration.as_millis()
"duration_ms" => Instant::now().duration_since(received_instant).as_millis()
);
metrics::inc_counter(&metrics::SUCCESS_COUNT);
Ok(response)
}
Err(error) => {
metrics::inc_counter_vec(&metrics::BEACON_HTTP_API_ERROR_TOTAL, &[&path]);
debug!(
local_log,
log,
"HTTP API request failure";
"path" => path,
"duration_ms" => request_processing_duration.as_millis()
"duration_ms" => Instant::now().duration_since(received_instant).as_millis()
);
Ok(error.into())
}
}
}
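The CORS branch in `on_http_request` only attaches headers when `allow_origin` is non-empty. A std-only sketch of that logic, using a plain `HashMap` in place of hyper's `HeaderMap` (the map type and function name are illustrative):

```rust
use std::collections::HashMap;

// Sketch of the CORS logic above: a non-empty `allow_origin` adds the
// Access-Control-Allow-Origin header plus `Vary: Origin` so caches keep
// per-origin copies; an empty setting leaves the response untouched.
fn apply_cors(headers: &mut HashMap<String, String>, allow_origin: &str) {
    if !allow_origin.is_empty() {
        headers.insert(
            "access-control-allow-origin".into(),
            allow_origin.into(),
        );
        headers.insert("vary".into(), "Origin".into());
    }
}

fn main() {
    let mut headers = HashMap::new();
    apply_cors(&mut headers, "");
    assert!(headers.is_empty()); // default config: no CORS headers

    apply_cors(&mut headers, "*");
    assert_eq!(headers.get("vary").map(String::as_str), Some("Origin"));
}
```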
async fn route<T: BeaconChainTypes>(
req: Request<Body>,
ctx: Arc<Context<T>>,
) -> Result<Response<Body>, ApiError> {
let path = req.uri().path().to_string();
let ctx = ctx.clone();
let method = req.method().clone();
let executor = ctx.executor.clone();
let handler = Handler::new(req, ctx, executor)?;
match (method, path.as_ref()) {
(Method::GET, "/node/version") => handler
.static_value(version_with_platform())
.await?
.serde_encodings(),
(Method::GET, "/node/health") => handler
.static_value(Health::observe().map_err(ApiError::ServerError)?)
.await?
.serde_encodings(),
(Method::GET, "/node/syncing") => handler
.allow_body()
.in_blocking_task(|_, ctx| node::syncing(ctx))
.await?
.serde_encodings(),
(Method::GET, "/network/enr") => handler
.in_core_task(|_, ctx| Ok(ctx.network_globals.local_enr().to_base64()))
.await?
.serde_encodings(),
(Method::GET, "/network/peer_count") => handler
.in_core_task(|_, ctx| Ok(ctx.network_globals.connected_peers()))
.await?
.serde_encodings(),
(Method::GET, "/network/peer_id") => handler
.in_core_task(|_, ctx| Ok(ctx.network_globals.local_peer_id().to_base58()))
.await?
.serde_encodings(),
(Method::GET, "/network/peers") => handler
.in_blocking_task(|_, ctx| {
Ok(ctx
.network_globals
.peers
.read()
.connected_peer_ids()
.map(PeerId::to_string)
.collect::<Vec<_>>())
})
.await?
.serde_encodings(),
(Method::GET, "/network/listen_port") => handler
.in_core_task(|_, ctx| Ok(ctx.network_globals.listen_port_tcp()))
.await?
.serde_encodings(),
(Method::GET, "/network/listen_addresses") => handler
.in_blocking_task(|_, ctx| Ok(ctx.network_globals.listen_multiaddrs()))
.await?
.serde_encodings(),
(Method::GET, "/beacon/head") => handler
.in_blocking_task(|_, ctx| beacon::get_head(ctx))
.await?
.all_encodings(),
(Method::GET, "/beacon/heads") => handler
.in_blocking_task(|_, ctx| Ok(beacon::get_heads(ctx)))
.await?
.all_encodings(),
(Method::GET, "/beacon/block") => handler
.in_blocking_task(beacon::get_block)
.await?
.all_encodings(),
(Method::GET, "/beacon/block_root") => handler
.in_blocking_task(beacon::get_block_root)
.await?
.all_encodings(),
(Method::GET, "/beacon/fork") => handler
.in_blocking_task(|_, ctx| Ok(ctx.beacon_chain.head_info()?.fork))
.await?
.all_encodings(),
(Method::GET, "/beacon/fork/stream") => {
handler.sse_stream(|_, ctx| beacon::stream_forks(ctx)).await
}
(Method::GET, "/beacon/genesis_time") => handler
.in_blocking_task(|_, ctx| Ok(ctx.beacon_chain.head_info()?.genesis_time))
.await?
.all_encodings(),
(Method::GET, "/beacon/genesis_validators_root") => handler
.in_blocking_task(|_, ctx| Ok(ctx.beacon_chain.head_info()?.genesis_validators_root))
.await?
.all_encodings(),
(Method::GET, "/beacon/validators") => handler
.in_blocking_task(beacon::get_validators)
.await?
.all_encodings(),
(Method::POST, "/beacon/validators") => handler
.allow_body()
.in_blocking_task(beacon::post_validators)
.await?
.all_encodings(),
(Method::GET, "/beacon/validators/all") => handler
.in_blocking_task(beacon::get_all_validators)
.await?
.all_encodings(),
(Method::GET, "/beacon/validators/active") => handler
.in_blocking_task(beacon::get_active_validators)
.await?
.all_encodings(),
(Method::GET, "/beacon/state") => handler
.in_blocking_task(beacon::get_state)
.await?
.all_encodings(),
(Method::GET, "/beacon/state_root") => handler
.in_blocking_task(beacon::get_state_root)
.await?
.all_encodings(),
(Method::GET, "/beacon/state/genesis") => handler
.in_blocking_task(|_, ctx| beacon::get_genesis_state(ctx))
.await?
.all_encodings(),
(Method::GET, "/beacon/committees") => handler
.in_blocking_task(beacon::get_committees)
.await?
.all_encodings(),
(Method::POST, "/beacon/proposer_slashing") => handler
.allow_body()
.in_blocking_task(beacon::proposer_slashing)
.await?
.serde_encodings(),
(Method::POST, "/beacon/attester_slashing") => handler
.allow_body()
.in_blocking_task(beacon::attester_slashing)
.await?
.serde_encodings(),
(Method::POST, "/validator/duties") => handler
.allow_body()
.in_blocking_task(validator::post_validator_duties)
.await?
.serde_encodings(),
(Method::POST, "/validator/subscribe") => handler
.allow_body()
.in_blocking_task(validator::post_validator_subscriptions)
.await?
.serde_encodings(),
(Method::GET, "/validator/duties/all") => handler
.in_blocking_task(validator::get_all_validator_duties)
.await?
.serde_encodings(),
(Method::GET, "/validator/duties/active") => handler
.in_blocking_task(validator::get_active_validator_duties)
.await?
.serde_encodings(),
(Method::GET, "/validator/block") => handler
.in_blocking_task(validator::get_new_beacon_block)
.await?
.serde_encodings(),
(Method::POST, "/validator/block") => handler
.allow_body()
.in_blocking_task(validator::publish_beacon_block)
.await?
.serde_encodings(),
(Method::GET, "/validator/attestation") => handler
.in_blocking_task(validator::get_new_attestation)
.await?
.serde_encodings(),
(Method::GET, "/validator/aggregate_attestation") => handler
.in_blocking_task(validator::get_aggregate_attestation)
.await?
.serde_encodings(),
(Method::POST, "/validator/attestations") => handler
.allow_body()
.in_blocking_task(validator::publish_attestations)
.await?
.serde_encodings(),
(Method::POST, "/validator/aggregate_and_proofs") => handler
.allow_body()
.in_blocking_task(validator::publish_aggregate_and_proofs)
.await?
.serde_encodings(),
(Method::GET, "/consensus/global_votes") => handler
.allow_body()
.in_blocking_task(consensus::get_vote_count)
.await?
.serde_encodings(),
(Method::POST, "/consensus/individual_votes") => handler
.allow_body()
.in_blocking_task(consensus::post_individual_votes)
.await?
.serde_encodings(),
(Method::GET, "/spec") => handler
// TODO: this clone is not ideal.
.in_blocking_task(|_, ctx| Ok(ctx.beacon_chain.spec.clone()))
.await?
.serde_encodings(),
(Method::GET, "/spec/slots_per_epoch") => handler
.static_value(T::EthSpec::slots_per_epoch())
.await?
.serde_encodings(),
(Method::GET, "/spec/eth2_config") => handler
// TODO: this clone is not ideal.
.in_blocking_task(|_, ctx| Ok(ctx.eth2_config.as_ref().clone()))
.await?
.serde_encodings(),
(Method::GET, "/advanced/fork_choice") => handler
.in_blocking_task(|_, ctx| {
Ok(ctx
.beacon_chain
.fork_choice
.read()
.proto_array()
.core_proto_array()
.clone())
})
.await?
.serde_encodings(),
(Method::GET, "/advanced/operation_pool") => handler
.in_blocking_task(|_, ctx| {
Ok(PersistedOperationPool::from_operation_pool(
&ctx.beacon_chain.op_pool,
))
})
.await?
.serde_encodings(),
(Method::GET, "/metrics") => handler
.in_blocking_task(|_, ctx| metrics::get_prometheus(ctx))
.await?
.text_encoding(),
(Method::GET, "/lighthouse/syncing") => handler
.in_blocking_task(|_, ctx| Ok(ctx.network_globals.sync_state()))
.await?
.serde_encodings(),
(Method::GET, "/lighthouse/peers") => handler
.in_blocking_task(|_, ctx| lighthouse::peers(ctx))
.await?
.serde_encodings(),
(Method::GET, "/lighthouse/connected_peers") => handler
.in_blocking_task(|_, ctx| lighthouse::connected_peers(ctx))
.await?
.serde_encodings(),
_ => Err(ApiError::NotFound(
"Request path and/or method not found.".to_owned(),
)),
}
}
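The router above moves heavy handlers off the async core via `in_blocking_task` so they cannot starve the executor (tokio's `spawn_blocking` plays this role in the real code). A minimal std-only analogue of that offload pattern, with `std::thread` standing in for the blocking pool (this is a sketch of the shape, not the actual `Handler` implementation):

```rust
use std::thread;

// Run a CPU-heavy or blocking closure on a dedicated thread and wait for its
// result, keeping the (notional) async core free. Note the real rest_api, per
// the PR description, leaves the number of such workers unbounded for now.
fn in_blocking_task<F, T>(f: F) -> T
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    thread::spawn(f).join().expect("blocking task panicked")
}

fn main() {
    // A stand-in for an expensive handler body (e.g. state reads).
    let sum = in_blocking_task(|| (0u64..1_000).sum::<u64>());
    assert_eq!(sum, 499_500);
    println!("ok");
}
```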


@@ -1,28 +0,0 @@
use super::ApiResult;
use crate::response_builder::ResponseBuilder;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_config::Eth2Config;
use hyper::{Body, Request};
use std::sync::Arc;
use types::EthSpec;
/// HTTP handler to return the full spec object.
pub fn get_spec<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(&beacon_chain.spec)
}
/// HTTP handler to return the full Eth2Config object.
pub fn get_eth2_config<T: BeaconChainTypes>(
req: Request<Body>,
eth2_config: Arc<Eth2Config>,
) -> ApiResult {
ResponseBuilder::new(&req)?.body_no_ssz(eth2_config.as_ref())
}
/// HTTP handler to return the spec's slots per epoch.
pub fn get_slots_per_epoch<T: BeaconChainTypes>(req: Request<Body>) -> ApiResult {
ResponseBuilder::new(&req)?.body(&T::EthSpec::slots_per_epoch())
}


@@ -1,41 +1,32 @@
use crate::helpers::{
check_content_type_for_json, parse_hex_ssz_bytes, publish_beacon_block_to_network,
};
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult, NetworkChannel, UrlQuery};
use crate::helpers::{parse_hex_ssz_bytes, publish_beacon_block_to_network};
use crate::{ApiError, Context, NetworkChannel, UrlQuery};
use beacon_chain::{
attestation_verification::Error as AttnError, BeaconChain, BeaconChainError, BeaconChainTypes,
BlockError, ForkChoiceError, StateSkipConfig,
};
use bls::PublicKeyBytes;
use eth2_libp2p::PubsubMessage;
use hyper::{Body, Request};
use hyper::Request;
use network::NetworkMessage;
use rayon::prelude::*;
use rest_types::{ValidatorDutiesRequest, ValidatorDutyBytes, ValidatorSubscription};
use slog::{error, info, trace, warn, Logger};
use std::sync::Arc;
use types::beacon_state::EthSpec;
use types::{
Attestation, AttestationData, BeaconState, Epoch, RelativeEpoch, SelectionProof,
Attestation, AttestationData, BeaconBlock, BeaconState, Epoch, RelativeEpoch, SelectionProof,
SignedAggregateAndProof, SignedBeaconBlock, SubnetId,
};
/// HTTP Handler to retrieve the duties for a set of validators during a particular epoch. This
/// method allows for collecting bulk sets of validator duties without risking exceeding the max
/// URL length with query pairs.
pub async fn post_validator_duties<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
let response_builder = ResponseBuilder::new(&req);
pub fn post_validator_duties<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorDutyBytes>, ApiError> {
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice::<ValidatorDutiesRequest>(&chunks)
serde_json::from_slice::<ValidatorDutiesRequest>(&body)
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to parse JSON into ValidatorDutiesRequest: {:?}",
@@ -44,29 +35,22 @@ pub async fn post_validator_duties<T: BeaconChainTypes>(
})
.and_then(|bulk_request| {
return_validator_duties(
beacon_chain,
&ctx.beacon_chain.clone(),
bulk_request.epoch,
bulk_request.pubkeys.into_iter().map(Into::into).collect(),
)
})
.and_then(|duties| response_builder?.body_no_ssz(&duties))
}
/// HTTP Handler to retrieve subscriptions for a set of validators. This allows the node to
/// organise peer discovery and topic subscription for known validators.
pub async fn post_validator_subscriptions<T: BeaconChainTypes>(
req: Request<Body>,
network_chan: NetworkChannel<T::EthSpec>,
) -> ApiResult {
try_future!(check_content_type_for_json(&req));
let response_builder = ResponseBuilder::new(&req);
pub fn post_validator_subscriptions<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<(), ApiError> {
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice(&chunks)
serde_json::from_slice(&body)
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to parse JSON into ValidatorSubscriptions: {:?}",
@@ -74,7 +58,7 @@ pub async fn post_validator_subscriptions<T: BeaconChainTypes>(
))
})
.and_then(move |subscriptions: Vec<ValidatorSubscription>| {
network_chan
ctx.network_chan
.send(NetworkMessage::Subscribe { subscriptions })
.map_err(|e| {
ApiError::ServerError(format!(
@@ -84,19 +68,18 @@ pub async fn post_validator_subscriptions<T: BeaconChainTypes>(
})?;
Ok(())
})
.and_then(|_| response_builder?.body_no_ssz(&()))
}
/// HTTP Handler to retrieve all validator duties for the given epoch.
pub fn get_all_validator_duties<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorDutyBytes>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let epoch = query.epoch()?;
let state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let state = get_state_for_epoch(&ctx.beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let validator_pubkeys = state
.validators
@@ -104,21 +87,19 @@ pub fn get_all_validator_duties<T: BeaconChainTypes>(
.map(|validator| validator.pubkey.clone())
.collect();
let duties = return_validator_duties(beacon_chain, epoch, validator_pubkeys)?;
ResponseBuilder::new(&req)?.body_no_ssz(&duties)
return_validator_duties(&ctx.beacon_chain, epoch, validator_pubkeys)
}
/// HTTP Handler to retrieve all active validator duties for the given epoch.
pub fn get_active_validator_duties<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Vec<ValidatorDutyBytes>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let epoch = query.epoch()?;
let state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let state = get_state_for_epoch(&ctx.beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let validator_pubkeys = state
.validators
@@ -127,9 +108,7 @@ pub fn get_active_validator_duties<T: BeaconChainTypes>(
.map(|validator| validator.pubkey.clone())
.collect();
let duties = return_validator_duties(beacon_chain, epoch, validator_pubkeys)?;
ResponseBuilder::new(&req)?.body_no_ssz(&duties)
return_validator_duties(&ctx.beacon_chain, epoch, validator_pubkeys)
}
/// Helper function to return the state that can be used to determine the duties for some `epoch`.
@@ -165,7 +144,7 @@ pub fn get_state_for_epoch<T: BeaconChainTypes>(
/// Helper function to get the duties for some `validator_pubkeys` in some `epoch`.
fn return_validator_duties<T: BeaconChainTypes>(
beacon_chain: Arc<BeaconChain<T>>,
beacon_chain: &BeaconChain<T>,
epoch: Epoch,
validator_pubkeys: Vec<PublicKeyBytes>,
) -> Result<Vec<ValidatorDutyBytes>, ApiError> {
@@ -281,10 +260,9 @@ fn return_validator_duties<T: BeaconChainTypes>(
/// HTTP Handler to produce a new BeaconBlock from the current state, ready to be signed by a validator.
pub fn get_new_beacon_block<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
log: Logger,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<BeaconBlock<T::EthSpec>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let slot = query.slot()?;
@@ -296,11 +274,12 @@ pub fn get_new_beacon_block<T: BeaconChainTypes>(
None
};
let (new_block, _state) = beacon_chain
let (new_block, _state) = ctx
.beacon_chain
.produce_block(randao_reveal, slot, validator_graffiti)
.map_err(|e| {
error!(
log,
ctx.log,
"Error whilst producing block";
"error" => format!("{:?}", e)
);
@@ -311,48 +290,40 @@ pub fn get_new_beacon_block<T: BeaconChainTypes>(
))
})?;
ResponseBuilder::new(&req)?.body(&new_block)
Ok(new_block)
}
/// HTTP Handler to publish a SignedBeaconBlock, which has been signed by a validator.
pub async fn publish_beacon_block<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel<T::EthSpec>,
log: Logger,
) -> ApiResult {
try_future!(check_content_type_for_json(&req));
let response_builder = ResponseBuilder::new(&req);
pub fn publish_beacon_block<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<(), ApiError> {
let body = req.into_body();
let chunks = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
serde_json::from_slice(&chunks).map_err(|e| {
serde_json::from_slice(&body).map_err(|e| {
ApiError::BadRequest(format!("Unable to parse JSON into SignedBeaconBlock: {:?}", e))
})
.and_then(move |block: SignedBeaconBlock<T::EthSpec>| {
let slot = block.slot();
match beacon_chain.process_block(block.clone()) {
match ctx.beacon_chain.process_block(block.clone()) {
Ok(block_root) => {
// Block was processed, publish via gossipsub
info!(
log,
ctx.log,
"Block from local validator";
"block_root" => format!("{}", block_root),
"block_slot" => slot,
);
publish_beacon_block_to_network::<T>(network_chan, block)?;
publish_beacon_block_to_network::<T>(&ctx.network_chan, block)?;
// Run the fork choice algorithm and enshrine a new canonical head, if
// found.
//
// The new head may or may not be the block we just received.
if let Err(e) = beacon_chain.fork_choice() {
if let Err(e) = ctx.beacon_chain.fork_choice() {
error!(
log,
ctx.log,
"Failed to find beacon chain head";
"error" => format!("{:?}", e)
);
@@ -366,9 +337,9 @@ pub async fn publish_beacon_block<T: BeaconChainTypes>(
// - Excessive time between block produce and publish.
// - A validator is using another beacon node to produce blocks and
// submitting them here.
if beacon_chain.head()?.beacon_block_root != block_root {
if ctx.beacon_chain.head()?.beacon_block_root != block_root {
warn!(
log,
ctx.log,
"Block from validator is not head";
"desc" => "potential re-org",
);
@@ -380,7 +351,7 @@ pub async fn publish_beacon_block<T: BeaconChainTypes>(
}
Err(BlockError::BeaconChainError(e)) => {
error!(
log,
ctx.log,
"Error whilst processing block";
"error" => format!("{:?}", e)
);
@@ -392,7 +363,7 @@ pub async fn publish_beacon_block<T: BeaconChainTypes>(
}
Err(other) => {
warn!(
log,
ctx.log,
"Invalid block from local validator";
"outcome" => format!("{:?}", other)
);
@@ -404,41 +375,41 @@ pub async fn publish_beacon_block<T: BeaconChainTypes>(
}
}
})
.and_then(|_| response_builder?.body_no_ssz(&()))
}
/// HTTP Handler to produce a new Attestation from the current state, ready to be signed by a validator.
pub fn get_new_attestation<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Attestation<T::EthSpec>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let slot = query.slot()?;
let index = query.committee_index()?;
let attestation = beacon_chain
ctx.beacon_chain
.produce_unaggregated_attestation(slot, index)
.map_err(|e| ApiError::BadRequest(format!("Unable to produce attestation: {:?}", e)))?;
ResponseBuilder::new(&req)?.body(&attestation)
.map_err(|e| ApiError::BadRequest(format!("Unable to produce attestation: {:?}", e)))
}
/// HTTP Handler to retrieve the aggregate attestation for a slot
pub fn get_aggregate_attestation<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<Attestation<T::EthSpec>, ApiError> {
let query = UrlQuery::from_request(&req)?;
let attestation_data = query.attestation_data()?;
match beacon_chain.get_aggregated_attestation(&attestation_data) {
Ok(Some(attestation)) => ResponseBuilder::new(&req)?.body(&attestation),
match ctx
.beacon_chain
.get_aggregated_attestation(&attestation_data)
{
Ok(Some(attestation)) => Ok(attestation),
Ok(None) => Err(ApiError::NotFound(format!(
"No matching aggregate attestation for slot {:?} is known in slot {:?}",
attestation_data.slot,
beacon_chain.slot()
ctx.beacon_chain.slot()
))),
Err(e) => Err(ApiError::ServerError(format!(
"Unable to obtain attestation: {:?}",
@@ -448,22 +419,13 @@ pub fn get_aggregate_attestation<T: BeaconChainTypes>(
}
/// HTTP Handler to publish a list of Attestations, which have been signed by a number of validators.
pub async fn publish_attestations<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel<T::EthSpec>,
log: Logger,
) -> ApiResult {
try_future!(check_content_type_for_json(&req));
let response_builder = ResponseBuilder::new(&req);
pub fn publish_attestations<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<(), ApiError> {
let bytes = req.into_body();
let body = req.into_body();
let chunk = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
let chunks = chunk.iter().cloned().collect::<Vec<u8>>();
serde_json::from_slice(&chunks.as_slice())
serde_json::from_slice(&bytes)
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to deserialize JSON into a list of attestations: {:?}",
@@ -474,16 +436,16 @@ pub async fn publish_attestations<T: BeaconChainTypes>(
.map(
move |attestations: Vec<(Attestation<T::EthSpec>, SubnetId)>| {
attestations
.into_par_iter()
.into_iter()
.enumerate()
.map(|(i, (attestation, subnet_id))| {
process_unaggregated_attestation(
&beacon_chain,
network_chan.clone(),
&ctx.beacon_chain,
ctx.network_chan.clone(),
attestation,
subnet_id,
i,
&log,
&ctx.log,
)
})
.collect::<Vec<Result<_, _>>>()
@@ -493,7 +455,7 @@ pub async fn publish_attestations<T: BeaconChainTypes>(
//
// Note: this will only provide info about the _first_ failure, not all failures.
.and_then(|processing_results| processing_results.into_iter().try_for_each(|result| result))
.and_then(|_| response_builder?.body_no_ssz(&()))
.map(|_| ())
}
/// Processes an unaggregated attestation that was included in a list of attestations with the
@@ -566,21 +528,13 @@ fn process_unaggregated_attestation<T: BeaconChainTypes>(
}
/// HTTP Handler to publish a list of SignedAggregateAndProofs, which have been signed by validators.
#[allow(clippy::redundant_clone)] // false positives in this function.
pub async fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel<T::EthSpec>,
log: Logger,
) -> ApiResult {
try_future!(check_content_type_for_json(&req));
let response_builder = ResponseBuilder::new(&req);
pub fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
req: Request<Vec<u8>>,
ctx: Arc<Context<T>>,
) -> Result<(), ApiError> {
let body = req.into_body();
let chunk = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
let chunks = chunk.iter().cloned().collect::<Vec<u8>>();
serde_json::from_slice(&chunks.as_slice())
serde_json::from_slice(&body)
.map_err(|e| {
ApiError::BadRequest(format!(
"Unable to deserialize JSON into a list of SignedAggregateAndProof: {:?}",
@@ -591,15 +545,15 @@ pub async fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
.map(
move |signed_aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>| {
signed_aggregates
.into_par_iter()
.into_iter()
.enumerate()
.map(|(i, signed_aggregate)| {
process_aggregated_attestation(
&beacon_chain,
network_chan.clone(),
&ctx.beacon_chain,
ctx.network_chan.clone(),
signed_aggregate,
i,
&log,
&ctx.log,
)
})
.collect::<Vec<Result<_, _>>>()
@@ -609,7 +563,6 @@ pub async fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
//
// Note: this will only provide info about the _first_ failure, not all failures.
.and_then(|processing_results| processing_results.into_iter().try_for_each(|result| result))
.and_then(|_| response_builder?.body_no_ssz(&()))
}
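Both publish handlers above swap rayon's `into_par_iter` for a plain `into_iter` (part of removing rayon from `rest_api` to address the deadlock): items are now processed sequentially on the blocking executor, all results are collected, and only the first failure is surfaced. A std-only sketch of that collect-then-`try_for_each` shape, with a toy validity check in place of attestation processing:

```rust
// Sequentially process every item, then report only the first error, mirroring
// the `.collect::<Vec<Result<_, _>>>()` + `try_for_each` pattern in the diff.
// The negative-number check is a stand-in for real attestation validation.
fn process_all(items: Vec<i32>) -> Result<(), String> {
    items
        .into_iter()
        .enumerate()
        .map(|(i, item)| {
            if item < 0 {
                Err(format!("item {} invalid: {}", i, item))
            } else {
                Ok(())
            }
        })
        // Every item runs even if an early one fails...
        .collect::<Vec<Result<(), String>>>()
        .into_iter()
        // ...but only the first failure is returned to the caller.
        .try_for_each(|result| result)
}

fn main() {
    assert!(process_all(vec![1, 2, 3]).is_ok());
    assert_eq!(
        process_all(vec![1, -2, -3]).unwrap_err(),
        "item 1 invalid: -2"
    );
}
```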
/// Processes an aggregated attestation that was included in a list of attestations with the index


@@ -804,7 +804,7 @@ fn get_version() {
let version = env
.runtime()
.block_on(remote_node.http.node().get_version())
.expect("should fetch eth2 config from http api");
.expect("should fetch version from http api");
assert_eq!(
lighthouse_version::version_with_platform(),


@@ -260,6 +260,6 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
)
.value_name("NUM_SLOTS")
.takes_value(true)
.default_value("320")
.default_value("700")
)
}


@@ -2,7 +2,7 @@ use beacon_chain::builder::PUBKEY_CACHE_FILENAME;
use clap::ArgMatches;
use clap_utils::BAD_TESTNET_DIR_MESSAGE;
use client::{config::DEFAULT_DATADIR, ClientConfig, ClientGenesis};
use eth2_libp2p::{multiaddr::Protocol, Enr, Multiaddr};
use eth2_libp2p::{multiaddr::Protocol, Enr, Multiaddr, NetworkConfig};
use eth2_testnet_config::Eth2TestnetConfig;
use slog::{crit, info, Logger};
use ssz::Encode;
@@ -75,148 +75,13 @@ pub fn get_config<E: EthSpec>(
/*
* Networking
*/
// If a network dir has been specified, override the `datadir` definition.
if let Some(dir) = cli_args.value_of("network-dir") {
client_config.network.network_dir = PathBuf::from(dir);
} else {
client_config.network.network_dir = client_config.data_dir.join(NETWORK_DIR);
};
if let Some(listen_address_str) = cli_args.value_of("listen-address") {
let listen_address = listen_address_str
.parse()
.map_err(|_| format!("Invalid listen address: {:?}", listen_address_str))?;
client_config.network.listen_address = listen_address;
}
if let Some(target_peers_str) = cli_args.value_of("target-peers") {
client_config.network.target_peers = target_peers_str
.parse::<usize>()
.map_err(|_| format!("Invalid number of target peers: {}", target_peers_str))?;
}
if let Some(port_str) = cli_args.value_of("port") {
let port = port_str
.parse::<u16>()
.map_err(|_| format!("Invalid port: {}", port_str))?;
client_config.network.libp2p_port = port;
client_config.network.discovery_port = port;
}
if let Some(port_str) = cli_args.value_of("discovery-port") {
let port = port_str
.parse::<u16>()
.map_err(|_| format!("Invalid port: {}", port_str))?;
client_config.network.discovery_port = port;
}
if let Some(boot_enr_str) = cli_args.value_of("boot-nodes") {
let mut enrs: Vec<Enr> = vec![];
let mut multiaddrs: Vec<Multiaddr> = vec![];
for addr in boot_enr_str.split(',') {
match addr.parse() {
Ok(enr) => enrs.push(enr),
Err(_) => {
// parsing as ENR failed, try as Multiaddr
let multi: Multiaddr = addr
.parse()
.map_err(|_| format!("Not valid as ENR nor Multiaddr: {}", addr))?;
if !multi.iter().any(|proto| matches!(proto, Protocol::Udp(_))) {
slog::error!(log, "Missing UDP in Multiaddr {}", multi.to_string());
}
if !multi.iter().any(|proto| matches!(proto, Protocol::P2p(_))) {
slog::error!(log, "Missing P2P in Multiaddr {}", multi.to_string());
}
multiaddrs.push(multi);
}
}
}
client_config.network.boot_nodes_enr = enrs;
client_config.network.boot_nodes_multiaddr = multiaddrs;
}
if let Some(libp2p_addresses_str) = cli_args.value_of("libp2p-addresses") {
client_config.network.libp2p_nodes = libp2p_addresses_str
.split(',')
.map(|multiaddr| {
multiaddr
.parse()
.map_err(|_| format!("Invalid Multiaddr: {}", multiaddr))
})
.collect::<Result<Vec<Multiaddr>, _>>()?;
}
if let Some(enr_udp_port_str) = cli_args.value_of("enr-udp-port") {
client_config.network.enr_udp_port = Some(
enr_udp_port_str
.parse::<u16>()
.map_err(|_| format!("Invalid discovery port: {}", enr_udp_port_str))?,
);
}
if let Some(enr_tcp_port_str) = cli_args.value_of("enr-tcp-port") {
client_config.network.enr_tcp_port = Some(
enr_tcp_port_str
.parse::<u16>()
.map_err(|_| format!("Invalid ENR TCP port: {}", enr_tcp_port_str))?,
);
}
if cli_args.is_present("enr-match") {
// set the enr address to localhost if the address is 0.0.0.0
if client_config.network.listen_address
== "0.0.0.0".parse::<IpAddr>().expect("valid ip addr")
{
client_config.network.enr_address =
Some("127.0.0.1".parse::<IpAddr>().expect("valid ip addr"));
} else {
client_config.network.enr_address = Some(client_config.network.listen_address);
}
client_config.network.enr_udp_port = Some(client_config.network.discovery_port);
}
if let Some(enr_address) = cli_args.value_of("enr-address") {
let resolved_addr = match enr_address.parse::<IpAddr>() {
Ok(addr) => addr, // Input is an IpAddr
Err(_) => {
let mut addr = enr_address.to_string();
// Appending enr-port to the dns hostname to appease `to_socket_addrs()` parsing.
// Since enr-update is disabled with a dns address, not setting the enr-udp-port
// will make the node undiscoverable.
if let Some(enr_udp_port) = client_config.network.enr_udp_port {
addr.push_str(&format!(":{}", enr_udp_port.to_string()));
} else {
return Err(
"enr-udp-port must be set for node to be discoverable with dns address"
.into(),
);
}
// `to_socket_addrs()` does the dns resolution
// Note: `to_socket_addrs()` is a blocking call
let resolved_addr = if let Ok(mut resolved_addrs) = addr.to_socket_addrs() {
// Pick the first ip from the list of resolved addresses
resolved_addrs
.next()
.map(|a| a.ip())
.ok_or_else(|| "Resolved dns addr contains no entries".to_string())?
} else {
return Err(format!("Failed to parse enr-address: {}", enr_address));
};
client_config.network.discv5_config.enr_update = false;
resolved_addr
}
};
client_config.network.enr_address = Some(resolved_addr);
}
if cli_args.is_present("disable_enr_auto_update") {
client_config.network.discv5_config.enr_update = false;
}
if cli_args.is_present("disable-discovery") {
client_config.network.disable_discovery = true;
slog::warn!(log, "Discovery is disabled. New peers will not be found");
}
set_network_config(
&mut client_config.network,
cli_args,
&client_config.data_dir,
&log,
false,
)?;
/*
* Http server
@@ -399,6 +264,163 @@ pub fn get_config<E: EthSpec>(
Ok(client_config)
}
/// Sets the network config from the command line arguments
pub fn set_network_config(
config: &mut NetworkConfig,
cli_args: &ArgMatches,
data_dir: &PathBuf,
log: &Logger,
use_listening_port_as_enr_port_by_default: bool,
) -> Result<(), String> {
// If a network dir has been specified, override the `datadir` definition.
if let Some(dir) = cli_args.value_of("network-dir") {
config.network_dir = PathBuf::from(dir);
} else {
config.network_dir = data_dir.join(NETWORK_DIR);
};
if let Some(listen_address_str) = cli_args.value_of("listen-address") {
let listen_address = listen_address_str
.parse()
.map_err(|_| format!("Invalid listen address: {:?}", listen_address_str))?;
config.listen_address = listen_address;
}
if let Some(target_peers_str) = cli_args.value_of("target-peers") {
config.target_peers = target_peers_str
.parse::<usize>()
.map_err(|_| format!("Invalid number of target peers: {}", target_peers_str))?;
}
if let Some(port_str) = cli_args.value_of("port") {
let port = port_str
.parse::<u16>()
.map_err(|_| format!("Invalid port: {}", port_str))?;
config.libp2p_port = port;
config.discovery_port = port;
}
if let Some(port_str) = cli_args.value_of("discovery-port") {
let port = port_str
.parse::<u16>()
.map_err(|_| format!("Invalid port: {}", port_str))?;
config.discovery_port = port;
}
if let Some(boot_enr_str) = cli_args.value_of("boot-nodes") {
let mut enrs: Vec<Enr> = vec![];
let mut multiaddrs: Vec<Multiaddr> = vec![];
for addr in boot_enr_str.split(',') {
match addr.parse() {
Ok(enr) => enrs.push(enr),
Err(_) => {
// parsing as ENR failed, try as Multiaddr
let multi: Multiaddr = addr
.parse()
.map_err(|_| format!("Not valid as ENR nor Multiaddr: {}", addr))?;
if !multi.iter().any(|proto| matches!(proto, Protocol::Udp(_))) {
slog::error!(log, "Missing UDP in Multiaddr {}", multi.to_string());
}
if !multi.iter().any(|proto| matches!(proto, Protocol::P2p(_))) {
slog::error!(log, "Missing P2P in Multiaddr {}", multi.to_string());
}
multiaddrs.push(multi);
}
}
}
config.boot_nodes_enr = enrs;
config.boot_nodes_multiaddr = multiaddrs;
}
if let Some(libp2p_addresses_str) = cli_args.value_of("libp2p-addresses") {
config.libp2p_nodes = libp2p_addresses_str
.split(',')
.map(|multiaddr| {
multiaddr
.parse()
.map_err(|_| format!("Invalid Multiaddr: {}", multiaddr))
})
.collect::<Result<Vec<Multiaddr>, _>>()?;
}
if let Some(enr_udp_port_str) = cli_args.value_of("enr-udp-port") {
config.enr_udp_port = Some(
enr_udp_port_str
.parse::<u16>()
.map_err(|_| format!("Invalid discovery port: {}", enr_udp_port_str))?,
);
}
if let Some(enr_tcp_port_str) = cli_args.value_of("enr-tcp-port") {
config.enr_tcp_port = Some(
enr_tcp_port_str
.parse::<u16>()
.map_err(|_| format!("Invalid ENR TCP port: {}", enr_tcp_port_str))?,
);
}
if cli_args.is_present("enr-match") {
// set the enr address to localhost if the address is 0.0.0.0
if config.listen_address == "0.0.0.0".parse::<IpAddr>().expect("valid ip addr") {
config.enr_address = Some("127.0.0.1".parse::<IpAddr>().expect("valid ip addr"));
} else {
config.enr_address = Some(config.listen_address);
}
config.enr_udp_port = Some(config.discovery_port);
}
if let Some(enr_address) = cli_args.value_of("enr-address") {
let resolved_addr = match enr_address.parse::<IpAddr>() {
Ok(addr) => addr, // Input is an IpAddr
Err(_) => {
let mut addr = enr_address.to_string();
// Appending enr-port to the dns hostname to appease `to_socket_addrs()` parsing.
// Since enr-update is disabled with a dns address, not setting the enr-udp-port
// will make the node undiscoverable.
if let Some(enr_udp_port) = config.enr_udp_port.or_else(|| {
if use_listening_port_as_enr_port_by_default {
Some(config.discovery_port)
} else {
None
}
}) {
addr.push_str(&format!(":{}", enr_udp_port.to_string()));
} else {
return Err(
"enr-udp-port must be set for node to be discoverable with dns address"
.into(),
);
}
// `to_socket_addrs()` does the dns resolution
// Note: `to_socket_addrs()` is a blocking call
let resolved_addr = if let Ok(mut resolved_addrs) = addr.to_socket_addrs() {
// Pick the first ip from the list of resolved addresses
resolved_addrs
.next()
.map(|a| a.ip())
.ok_or_else(|| "Resolved dns addr contains no entries".to_string())?
} else {
return Err(format!("Failed to parse enr-address: {}", enr_address));
};
config.discv5_config.enr_update = false;
resolved_addr
}
};
config.enr_address = Some(resolved_addr);
}
if cli_args.is_present("disable_enr_auto_update") {
config.discv5_config.enr_update = false;
}
if cli_args.is_present("disable-discovery") {
config.disable_discovery = true;
slog::warn!(log, "Discovery is disabled. New peers will not be found");
}
Ok(())
}
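The `enr-address` branch above resolves a DNS name by appending the UDP port and calling `to_socket_addrs`, which performs a blocking DNS lookup. A condensed, std-only sketch of that resolution step (a hypothetical helper mirroring, but not identical to, the code above):

```rust
use std::net::{IpAddr, ToSocketAddrs};

// Resolve an IP or DNS string (plus a UDP port) to an `IpAddr`: try to parse
// as an IP first, otherwise append ":port" and let `to_socket_addrs` perform
// the (blocking) DNS lookup, taking the first resolved address.
fn resolve_enr_address(address: &str, port: u16) -> Result<IpAddr, String> {
    match address.parse::<IpAddr>() {
        Ok(addr) => Ok(addr),
        Err(_) => format!("{}:{}", address, port)
            .to_socket_addrs()
            .map_err(|_| format!("Failed to parse enr-address: {}", address))?
            .next()
            .map(|a| a.ip())
            .ok_or_else(|| "Resolved dns addr contains no entries".to_string()),
    }
}
```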
/// Gets the datadir which should be used.
pub fn get_data_dir(cli_args: &ArgMatches) -> PathBuf {
// Read the `--datadir` flag.


@@ -7,7 +7,7 @@ mod config;
pub use beacon_chain;
pub use cli::cli_app;
pub use client::{Client, ClientBuilder, ClientConfig, ClientGenesis};
pub use config::{get_data_dir, get_eth2_testnet_config};
pub use config::{get_data_dir, get_eth2_testnet_config, set_network_config};
pub use eth2_config::Eth2Config;
use beacon_chain::events::TeeEventHandler;


@@ -44,7 +44,7 @@ In step (1), we created a wallet in `~/.lighthouse/wallets` with the name
`wally`. We encrypted this using a pre-defined password in the
`wally.pass` file. Then, in step (2), we created one new validator in the
`~/.lighthouse/validators` directory using `wally` (unlocking it with
`mywallet.pass`) and storing the passwords to the validators voting key in
`wally.pass`) and storing the passwords to the validators voting key in
`~/.lighthouse/secrets`.
Thanks to the hierarchical key derivation scheme, we can delete all of the


@@ -1,10 +1,11 @@
[package]
name = "boot_node"
version = "0.2.4"
version = "0.2.7"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"
[dependencies]
beacon_node = { path = "../beacon_node" }
clap = "2.33.0"
eth2_libp2p = { path = "../beacon_node/eth2_libp2p" }
slog = "2.5.2"


@@ -12,7 +12,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
surface compared to a full beacon node.")
.settings(&[clap::AppSettings::ColoredHelp])
.arg(
Arg::with_name("boot-node-enr-address")
Arg::with_name("enr-address")
.value_name("IP-ADDRESS")
.help("The external IP address/ DNS address to broadcast to other peers on how to reach this node. \
If a DNS address is provided, the enr-address is set to the IP address it resolves to and \
@@ -44,7 +44,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.takes_value(true),
)
.arg(
Arg::with_name("enr-port")
Arg::with_name("enr-udp-port")
.long("enr-port")
.value_name("PORT")
.help("The UDP port of the boot node's ENR. This is the port that external peers will dial to reach this boot node. Set this only if the external port differs from the listening port.")


@@ -1,7 +1,12 @@
use beacon_node::{get_data_dir, set_network_config};
use clap::ArgMatches;
use discv5::{enr::CombinedKey, Enr};
use eth2_libp2p::{
discovery::{create_enr_builder_from_config, use_or_load_enr},
load_private_key, CombinedKeyExt, NetworkConfig,
};
use std::convert::TryFrom;
use std::net::{IpAddr, SocketAddr, ToSocketAddrs};
use std::net::SocketAddr;
/// A set of configuration parameters for the bootnode, established from CLI arguments.
pub struct BootNodeConfig {
@@ -17,17 +22,22 @@ impl TryFrom<&ArgMatches<'_>> for BootNodeConfig {
type Error = String;
fn try_from(matches: &ArgMatches<'_>) -> Result<Self, Self::Error> {
let listen_address = matches
.value_of("listen-address")
.expect("required parameter")
.parse::<IpAddr>()
.map_err(|_| "Invalid listening address".to_string())?;
let data_dir = get_data_dir(matches);
let listen_port = matches
.value_of("port")
.expect("required parameter")
.parse::<u16>()
.map_err(|_| "Invalid listening port".to_string())?;
let mut network_config = NetworkConfig::default();
let logger = slog_scope::logger();
set_network_config(&mut network_config, matches, &data_dir, &logger, true)?;
let private_key = load_private_key(&network_config, &logger);
let local_key = CombinedKey::from_libp2p(&private_key)?;
let mut local_enr = create_enr_builder_from_config(&network_config)
.build(&local_key)
.map_err(|e| format!("Failed to build ENR: {:?}", e))?;
use_or_load_enr(&local_key, &mut local_enr, &network_config, &logger)?;
let boot_nodes = {
if let Some(boot_nodes) = matches.value_of("boot-nodes") {
@@ -40,34 +50,11 @@ impl TryFrom<&ArgMatches<'_>> for BootNodeConfig {
}
};
let enr_port = {
if let Some(port) = matches.value_of("boot-node-enr-port") {
port.parse::<u16>()
.map_err(|_| "Invalid ENR port".to_string())?
} else {
listen_port
}
};
let enr_address = {
let address_string = matches
.value_of("boot-node-enr-address")
.expect("required parameter");
resolve_address(address_string.into(), enr_port)?
};
let auto_update = matches.is_present("enable-enr_auto_update");
// the address to listen on
let listen_socket = SocketAddr::new(listen_address, enr_port);
// Generate a new key and build a new ENR
let local_key = CombinedKey::generate_secp256k1();
let local_enr = discv5::enr::EnrBuilder::new("v4")
.ip(enr_address)
.udp(enr_port)
.build(&local_key)
.map_err(|e| format!("Failed to build ENR: {:?}", e))?;
let listen_socket =
SocketAddr::new(network_config.listen_address, network_config.discovery_port);
Ok(BootNodeConfig {
listen_socket,
@@ -78,25 +65,3 @@ impl TryFrom<&ArgMatches<'_>> for BootNodeConfig {
})
}
}
/// Resolves an IP/DNS string to an IpAddr.
fn resolve_address(address_string: String, port: u16) -> Result<IpAddr, String> {
match address_string.parse::<IpAddr>() {
Ok(addr) => Ok(addr), // valid IpAddr
Err(_) => {
let mut addr = address_string.clone();
// Appending enr-port to the dns hostname to appease `to_socket_addrs()` parsing.
addr.push_str(&format!(":{}", port.to_string()));
// `to_socket_addrs()` does the dns resolution
// Note: `to_socket_addrs()` is a blocking call
addr.to_socket_addrs()
.map(|mut resolved_addrs|
// Pick the first ip from the list of resolved addresses
resolved_addrs
.next()
.map(|a| a.ip())
.ok_or_else(|| "Resolved dns addr contains no entries".to_string()))
.map_err(|_| format!("Failed to parse enr-address: {}", address_string))?
}
}
}


@@ -310,6 +310,15 @@ fn is_voting_keystore(file_name: &str) -> bool {
return true;
}
// The format exported by Prysm. I don't have a reference for this, but it was shared via
// Discord to Paul H.
if Regex::new("keystore-[0-9]+.json")
.expect("regex is valid")
.is_match(file_name)
{
return true;
}
false
}
@@ -318,8 +327,12 @@ mod tests {
use super::*;
#[test]
fn voting_keystore_filename() {
fn voting_keystore_filename_lighthouse() {
assert!(is_voting_keystore(VOTING_KEYSTORE_FILE));
}
#[test]
fn voting_keystore_filename_launchpad() {
assert!(!is_voting_keystore("cats"));
assert!(!is_voting_keystore(&format!("a{}", VOTING_KEYSTORE_FILE)));
assert!(!is_voting_keystore(&format!("{}b", VOTING_KEYSTORE_FILE)));
@@ -337,4 +350,14 @@ mod tests {
"keystore-m_12381_3600_1_0-1593476250.json"
));
}
#[test]
fn voting_keystore_filename_prysm() {
assert!(is_voting_keystore("keystore-0.json"));
assert!(is_voting_keystore("keystore-1.json"));
assert!(is_voting_keystore("keystore-101238259.json"));
assert!(!is_voting_keystore("keystore-.json"));
assert!(!is_voting_keystore("keystore-0a.json"));
assert!(!is_voting_keystore("keystore-cats.json"));
}
}
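The Prysm check above accepts filenames of the form `keystore-<digits>.json`. A regex-free sketch of the intended predicate using only the standard library (a hypothetical helper; the crate's actual check uses the `regex` crate):

```rust
// Returns true for filenames of the form `keystore-<one or more digits>.json`,
// the pattern exported by Prysm.
fn is_prysm_keystore(file_name: &str) -> bool {
    file_name
        .strip_prefix("keystore-")
        .and_then(|rest| rest.strip_suffix(".json"))
        .map_or(false, |digits| {
            !digits.is_empty() && digits.chars().all(|c| c.is_ascii_digit())
        })
}
```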


@@ -10,7 +10,7 @@ use target_info::Target;
/// `Lighthouse/v0.2.0-1419501f2+`
pub const VERSION: &str = git_version!(
args = ["--always", "--dirty=+"],
prefix = "Lighthouse/v0.2.4-",
prefix = "Lighthouse/v0.2.7-",
fallback = "unknown"
);


@@ -0,0 +1,8 @@
[package]
name = "lru_cache"
version = "0.1.0"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"
[dependencies]
fnv = "1.0.7"


@@ -0,0 +1,7 @@
//! A library to provide fast and efficient LRU caches without updating.
mod space;
mod time;
pub use space::LRUCache;
pub use time::LRUTimeCache;


@@ -0,0 +1,93 @@
///! This implements a size-based LRU cache for fast checking of duplicates
use fnv::FnvHashSet;
use std::collections::VecDeque;
/// Cache that stores keys until the size is used up. Does not update elements for efficiency.
pub struct LRUCache<Key>
where
Key: Eq + std::hash::Hash + Clone,
{
/// The duplicate cache.
map: FnvHashSet<Key>,
/// An ordered list of keys by order.
list: VecDeque<Key>,
// The max size of the cache,
size: usize,
}
impl<Key> LRUCache<Key>
where
Key: Eq + std::hash::Hash + Clone,
{
pub fn new(size: usize) -> Self {
LRUCache {
map: FnvHashSet::default(),
list: VecDeque::new(),
size,
}
}
/// Determines if the key is in the cache.
pub fn contains(&self, key: &Key) -> bool {
self.map.contains(key)
}
// Inserts a new element and evicts the oldest elements if the size limit is exceeded.
//
// If the key was not present this returns `true`. If the value was already present this
// returns `false`.
pub fn insert(&mut self, key: Key) -> bool {
// check the cache before removing elements
let result = self.map.insert(key.clone());
// add the new key to the list, if it doesn't already exist.
if result {
self.list.push_back(key);
}
// remove any overflow keys
self.update();
result
}
/// Removes the oldest elements until the cache is within its size limit.
fn update(&mut self) {
// remove any overflowing keys
for _ in 0..self.map.len().saturating_sub(self.size) {
if let Some(key) = self.list.pop_front() {
self.map.remove(&key);
}
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn cache_added_entries_exist() {
let mut cache = LRUCache::new(5);
cache.insert("t");
cache.insert("e");
// Should report that 't' and 'e' already exist
assert!(!cache.insert("t"));
assert!(!cache.insert("e"));
}
#[test]
fn cache_entries_get_removed() {
let mut cache = LRUCache::new(2);
cache.insert("t");
assert!(!cache.insert("t"));
cache.insert("e");
assert!(!cache.insert("e"));
// add another element to clear the first key
cache.insert("s");
assert!(!cache.insert("s"));
// should be removed from the cache
assert!(cache.insert("t"));
}
}
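The size-bounded cache evicts in insertion order once its limit is exceeded, and never refreshes a key's position on re-insert. A condensed, std-only sketch of the same behaviour (using the standard `HashSet` in place of `FnvHashSet`):

```rust
use std::collections::{HashSet, VecDeque};

// Minimal size-bounded duplicate cache: evicts the oldest key once `size`
// is exceeded, and never refreshes a key's position on re-insert.
struct BoundedCache<K: Eq + std::hash::Hash + Clone> {
    map: HashSet<K>,
    list: VecDeque<K>,
    size: usize,
}

impl<K: Eq + std::hash::Hash + Clone> BoundedCache<K> {
    fn new(size: usize) -> Self {
        Self { map: HashSet::new(), list: VecDeque::new(), size }
    }

    /// Returns `true` if the key was newly inserted, `false` if already present.
    fn insert(&mut self, key: K) -> bool {
        let newly_inserted = self.map.insert(key.clone());
        if newly_inserted {
            self.list.push_back(key);
        }
        // Drop the oldest keys until we are back within the size limit.
        while self.map.len() > self.size {
            if let Some(old) = self.list.pop_front() {
                self.map.remove(&old);
            }
        }
        newly_inserted
    }

    fn contains(&self, key: &K) -> bool {
        self.map.contains(key)
    }
}
```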


@@ -0,0 +1,126 @@
///! This implements a time-based LRU cache for fast checking of duplicates
use fnv::FnvHashSet;
use std::collections::VecDeque;
use std::time::{Duration, Instant};
struct Element<Key> {
/// The key being inserted.
key: Key,
/// The instant the key was inserted.
inserted: Instant,
}
pub struct LRUTimeCache<Key> {
/// The duplicate cache.
map: FnvHashSet<Key>,
/// An ordered list of keys by insert time.
list: VecDeque<Element<Key>>,
/// The time elements remain in the cache.
ttl: Duration,
}
impl<Key> LRUTimeCache<Key>
where
Key: Eq + std::hash::Hash + Clone,
{
pub fn new(ttl: Duration) -> Self {
LRUTimeCache {
map: FnvHashSet::default(),
list: VecDeque::new(),
ttl,
}
}
// Inserts a new element and removes any expired elements.
//
// If the key was not present this returns `true`. If the value was already present this
// returns `false`.
pub fn insert_update(&mut self, key: Key) -> bool {
// check the cache before removing elements
let result = self.map.insert(key.clone());
let now = Instant::now();
// remove any expired results
while let Some(element) = self.list.pop_front() {
if element.inserted + self.ttl > now {
self.list.push_front(element);
break;
}
self.map.remove(&element.key);
}
// add the new key to the list, if it doesn't already exist.
if result {
self.list.push_back(Element { key, inserted: now });
}
result
}
// Inserts a new element without expiring old elements.
//
// If the key was not present this returns `true`. If the value was already present this
// returns `false`.
pub fn insert(&mut self, key: Key) -> bool {
// check the cache before removing elements
let result = self.map.insert(key.clone());
// add the new key to the list, if it doesn't already exist.
if result {
self.list.push_back(Element {
key,
inserted: Instant::now(),
});
}
result
}
/// Removes any expired elements from the cache.
pub fn update(&mut self) {
let now = Instant::now();
// remove any expired results
while let Some(element) = self.list.pop_front() {
if element.inserted + self.ttl > now {
self.list.push_front(element);
break;
}
self.map.remove(&element.key);
}
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn cache_added_entries_exist() {
let mut cache = LRUTimeCache::new(Duration::from_secs(10));
cache.insert("t");
cache.insert("e");
// Should report that 't' and 'e' already exist
assert!(!cache.insert("t"));
assert!(!cache.insert("e"));
}
#[test]
fn cache_entries_expire() {
let mut cache = LRUTimeCache::new(Duration::from_millis(100));
cache.insert_update("t");
assert!(!cache.insert_update("t"));
cache.insert_update("e");
assert!(!cache.insert_update("t"));
assert!(!cache.insert_update("e"));
// sleep until cache expiry
std::thread::sleep(Duration::from_millis(101));
// add another element to clear previous cache
cache.insert_update("s");
// should be removed from the cache
assert!(cache.insert_update("t"));
}
}


@@ -14,6 +14,13 @@ state_processing = { path = "../../consensus/state_processing" }
bls = { path = "../../crypto/bls" }
serde = { version = "1.0.110", features = ["derive"] }
rayon = "1.3.0"
hyper = "0.13.5"
tokio = { version = "0.2.21", features = ["sync"] }
environment = { path = "../../lighthouse/environment" }
store = { path = "../../beacon_node/store" }
beacon_chain = { path = "../../beacon_node/beacon_chain" }
serde_json = "1.0.52"
serde_yaml = "0.8.11"
[target.'cfg(target_os = "linux")'.dependencies]
psutil = "3.1.0"


@@ -0,0 +1,247 @@
use crate::{ApiError, ApiResult};
use environment::TaskExecutor;
use hyper::header;
use hyper::{Body, Request, Response, StatusCode};
use serde::Deserialize;
use serde::Serialize;
use ssz::Encode;
/// Defines the encoding for the API.
#[derive(Clone, Serialize, Deserialize, Copy)]
pub enum ApiEncodingFormat {
JSON,
YAML,
SSZ,
}
impl ApiEncodingFormat {
pub fn get_content_type(&self) -> &str {
match self {
ApiEncodingFormat::JSON => "application/json",
ApiEncodingFormat::YAML => "application/yaml",
ApiEncodingFormat::SSZ => "application/ssz",
}
}
}
impl From<&str> for ApiEncodingFormat {
fn from(f: &str) -> ApiEncodingFormat {
match f {
"application/yaml" => ApiEncodingFormat::YAML,
"application/ssz" => ApiEncodingFormat::SSZ,
_ => ApiEncodingFormat::JSON,
}
}
}
/// Provides a HTTP request handler with Lighthouse-specific functionality.
pub struct Handler<T> {
executor: TaskExecutor,
req: Request<()>,
body: Body,
ctx: T,
encoding: ApiEncodingFormat,
allow_body: bool,
}
impl<T: Clone + Send + Sync + 'static> Handler<T> {
/// Start handling a new request.
pub fn new(req: Request<Body>, ctx: T, executor: TaskExecutor) -> Result<Self, ApiError> {
let (req_parts, body) = req.into_parts();
let req = Request::from_parts(req_parts, ());
let accept_header: String = req
.headers()
.get(header::ACCEPT)
.map_or(Ok(""), |h| h.to_str())
.map_err(|e| {
ApiError::BadRequest(format!(
"The Accept header contains invalid characters: {:?}",
e
))
})
.map(String::from)?;
Ok(Self {
executor,
req,
body,
ctx,
allow_body: false,
encoding: ApiEncodingFormat::from(accept_header.as_str()),
})
}
/// The default behaviour is to return an error if any body is supplied in the request. Calling
/// this function disables that error.
pub fn allow_body(mut self) -> Self {
self.allow_body = true;
self
}
/// Return a simple static value.
///
/// Does not use the blocking executor.
pub async fn static_value<V>(self, value: V) -> Result<HandledRequest<V>, ApiError> {
// Always check and disallow a body for a static value.
let _ = Self::get_body(self.body, false).await?;
Ok(HandledRequest {
value,
encoding: self.encoding,
})
}
/// Calls `func` in-line, on the core executor.
///
/// This should only be used for very fast tasks.
pub async fn in_core_task<F, V>(self, func: F) -> Result<HandledRequest<V>, ApiError>
where
V: Send + Sync + 'static,
F: Fn(Request<Vec<u8>>, T) -> Result<V, ApiError> + Send + Sync + 'static,
{
let body = Self::get_body(self.body, self.allow_body).await?;
let (req_parts, _) = self.req.into_parts();
let req = Request::from_parts(req_parts, body);
let value = func(req, self.ctx)?;
Ok(HandledRequest {
value,
encoding: self.encoding,
})
}
/// Spawns `func` on the blocking executor.
///
/// This method is suitable for handling long-running or intensive tasks.
pub async fn in_blocking_task<F, V>(self, func: F) -> Result<HandledRequest<V>, ApiError>
where
V: Send + Sync + 'static,
F: Fn(Request<Vec<u8>>, T) -> Result<V, ApiError> + Send + Sync + 'static,
{
let ctx = self.ctx;
let body = Self::get_body(self.body, self.allow_body).await?;
let (req_parts, _) = self.req.into_parts();
let req = Request::from_parts(req_parts, body);
let value = self
.executor
.clone()
.handle
.spawn_blocking(move || func(req, ctx))
.await
.map_err(|e| {
ApiError::ServerError(format!(
"Failed to get blocking join handle: {}",
e.to_string()
))
})??;
Ok(HandledRequest {
value,
encoding: self.encoding,
})
}
/// Call `func`, then return a response that is suitable for an SSE stream.
pub async fn sse_stream<F>(self, func: F) -> ApiResult
where
F: Fn(Request<()>, T) -> Result<Body, ApiError>,
{
let body = func(self.req, self.ctx)?;
Response::builder()
.status(200)
.header("Content-Type", "text/event-stream")
.header("Connection", "Keep-Alive")
.header("Cache-Control", "no-cache")
.header("Access-Control-Allow-Origin", "*")
.body(body)
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e)))
}
/// Downloads the bytes for `body`.
async fn get_body(body: Body, allow_body: bool) -> Result<Vec<u8>, ApiError> {
let bytes = hyper::body::to_bytes(body)
.await
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
if !allow_body && !bytes[..].is_empty() {
Err(ApiError::BadRequest(
"The request body must be empty".to_string(),
))
} else {
Ok(bytes.into_iter().collect())
}
}
}
/// A request that has been "handled" and now a result (`value`) needs to be serialized and
/// returned.
pub struct HandledRequest<V> {
encoding: ApiEncodingFormat,
value: V,
}
impl HandledRequest<String> {
/// Simply encodes the string as UTF-8.
pub fn text_encoding(self) -> ApiResult {
Response::builder()
.status(StatusCode::OK)
.header("content-type", "text/plain; charset=utf-8")
.body(Body::from(self.value))
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e)))
}
}
impl<V: Serialize + Encode> HandledRequest<V> {
/// Suitable for all items which implement `serde` and `ssz`.
pub fn all_encodings(self) -> ApiResult {
match self.encoding {
ApiEncodingFormat::SSZ => Response::builder()
.status(StatusCode::OK)
.header("content-type", "application/ssz")
.body(Body::from(self.value.as_ssz_bytes()))
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e))),
_ => self.serde_encodings(),
}
}
}
impl<V: Serialize> HandledRequest<V> {
/// Suitable for items which only implement `serde`.
pub fn serde_encodings(self) -> ApiResult {
let (body, content_type) = match self.encoding {
ApiEncodingFormat::JSON => (
Body::from(serde_json::to_string(&self.value).map_err(|e| {
ApiError::ServerError(format!(
"Unable to serialize response body as JSON: {:?}",
e
))
})?),
"application/json",
),
ApiEncodingFormat::SSZ => {
return Err(ApiError::UnsupportedType(
"Response cannot be encoded as SSZ.".into(),
));
}
ApiEncodingFormat::YAML => (
Body::from(serde_yaml::to_string(&self.value).map_err(|e| {
ApiError::ServerError(format!(
"Unable to serialize response body as YAML: {:?}",
e
))
})?),
"application/yaml",
),
};
Response::builder()
.status(StatusCode::OK)
.header("content-type", content_type)
.body(body)
.map_err(|e| ApiError::ServerError(format!("Failed to build response: {:?}", e)))
}
}
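Content negotiation in the handler hinges on the `Accept` header mapping in `ApiEncodingFormat::from`, with JSON as the fallback for anything unrecognised (including an absent header). A minimal sketch of that mapping (hypothetical names, std only):

```rust
// Content types negotiated by the API, mirroring `ApiEncodingFormat` above.
#[derive(Debug, PartialEq)]
enum Encoding {
    Json,
    Yaml,
    Ssz,
}

// Map an HTTP `Accept` header value to an encoding, defaulting to JSON.
fn encoding_from_accept(accept: &str) -> Encoding {
    match accept {
        "application/yaml" => Encoding::Yaml,
        "application/ssz" => Encoding::Ssz,
        _ => Encoding::Json,
    }
}
```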


@@ -2,20 +2,21 @@
//!
//! This is primarily used by the validator client and the beacon node rest API.
mod api_error;
mod beacon;
mod consensus;
mod handler;
mod node;
mod validator;
pub use api_error::{ApiError, ApiResult};
pub use beacon::{
BlockResponse, CanonicalHeadResponse, Committee, HeadBeaconBlock, StateResponse,
ValidatorRequest, ValidatorResponse,
};
pub use consensus::{IndividualVote, IndividualVotesRequest, IndividualVotesResponse};
pub use handler::{ApiEncodingFormat, Handler};
pub use node::{Health, SyncingResponse, SyncingStatus};
pub use validator::{
ValidatorDutiesRequest, ValidatorDuty, ValidatorDutyBytes, ValidatorSubscription,
};
pub use consensus::{IndividualVote, IndividualVotesRequest, IndividualVotesResponse};
pub use node::{Health, SyncingResponse, SyncingStatus};


@@ -27,7 +27,7 @@ pub struct ProtoNode {
best_descendant: Option<usize>,
}
#[derive(PartialEq, Debug, Serialize, Deserialize)]
#[derive(PartialEq, Debug, Serialize, Deserialize, Clone)]
pub struct ProtoArray {
/// Do not attempt to prune the tree unless it has at least this many nodes. Small prunes
/// simply waste time.


@@ -24,10 +24,14 @@ use serde_repr::*;
pub struct JsonKeystore {
pub crypto: Crypto,
pub uuid: Uuid,
pub path: String,
/// EIP-2335 does not declare this field as optional, but Prysm is omitting it so we must
/// support it.
pub path: Option<String>,
pub pubkey: String,
pub version: Version,
pub description: Option<String>,
/// Not part of EIP-2335, but `ethdo` and Prysm have adopted it anyway so we must support it.
pub name: Option<String>,
}
/// Version for `JsonKeystore`.


@@ -172,10 +172,11 @@ impl Keystore {
},
},
uuid,
path,
path: Some(path),
pubkey: keypair.pk.to_hex_string()[2..].to_string(),
version: Version::four(),
description: None,
name: None,
},
})
}
@@ -218,8 +219,8 @@ impl Keystore {
/// Returns the path for the keystore.
///
/// Note: the path is not validated, it is simply whatever string the keystore provided.
pub fn path(&self) -> &str {
&self.json.path
pub fn path(&self) -> Option<String> {
self.json.path.clone()
}
/// Returns the pubkey for the keystore.


@@ -64,7 +64,7 @@ fn eip2335_test_vector_scrypt() {
Uuid::parse_str("1d85ae20-35c5-4611-98e8-aa14a633906f").unwrap(),
"uuid"
);
assert_eq!(keystore.path(), "", "path");
assert_eq!(keystore.path().unwrap(), "", "path");
}
#[test]
@@ -108,5 +108,5 @@ fn eip2335_test_vector_pbkdf() {
Uuid::parse_str("64625def-3331-4eea-ab6f-782f3ed16a83").unwrap(),
"uuid"
);
assert_eq!(keystore.path(), "m/12381/60/0/0", "path");
assert_eq!(keystore.path().unwrap(), "m/12381/60/0/0", "path");
}


@@ -841,10 +841,7 @@ fn missing_path() {
}
"#;
match Keystore::from_json_str(&vector) {
Err(Error::InvalidJson(_)) => {}
_ => panic!("expected invalid json error"),
}
assert!(Keystore::from_json_str(&vector).is_ok());
}
#[test]
@@ -1010,3 +1007,43 @@ fn pbkdf2_missing_parameter() {
_ => panic!("expected invalid json error"),
}
}
#[test]
fn name_field() {
let vector = r#"
{
"crypto": {
"kdf": {
"function": "scrypt",
"params": {
"dklen": 32,
"n": 262144,
"p": 1,
"r": 8,
"salt": "d4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3"
},
"message": ""
},
"checksum": {
"function": "sha256",
"params": {},
"message": "149aafa27b041f3523c53d7acba1905fa6b1c90f9fef137568101f44b531a3cb"
},
"cipher": {
"function": "aes-128-ctr",
"params": {
"iv": "264daa3f303d7259501c93d997d84fe6"
},
"message": "54ecc8863c0550351eee5720f3be6a5d4a016025aa91cd6436cfec938d6a8d30"
}
},
"pubkey": "9612d7a727c9d0a22e185a1c768478dfe919cada9266988cb32359c11f2b7b27f4ae4040902382ae2910c15e2b420d07",
"uuid": "1d85ae20-35c5-4611-98e8-aa14a633906f",
"path": "",
"version": 4,
"name": "cats"
}
"#;
assert!(Keystore::from_json_str(&vector).is_ok());
}


@@ -225,13 +225,13 @@ fn key_derivation_from_seed() {
.expect("should generate keystores");
assert_eq!(
keystores.voting.path(),
keystores.voting.path().unwrap(),
format!("m/12381/3600/{}/0/0", i),
"voting path should match"
);
assert_eq!(
keystores.withdrawal.path(),
keystores.withdrawal.path().unwrap(),
format!("m/12381/3600/{}/0", i),
"withdrawal path should match"
);


@@ -1,7 +1,7 @@
[package]
name = "lcli"
description = "Lighthouse CLI (modeled after zcli)"
version = "0.2.2"
version = "0.2.7"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"


@@ -1,6 +1,6 @@
[package]
name = "lighthouse"
version = "0.2.4"
version = "0.2.7"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"


@@ -1,4 +1,5 @@
use crate::metrics;
use futures::channel::mpsc::Sender;
use futures::prelude::*;
use slog::{debug, trace};
use tokio::runtime::Handle;
@@ -7,9 +8,15 @@ use tokio::runtime::Handle;
#[derive(Clone)]
pub struct TaskExecutor {
/// The handle to the runtime on which tasks are spawned
pub(crate) handle: Handle,
pub handle: Handle,
/// The receiver exit future which on receiving shuts down the task
pub(crate) exit: exit_future::Exit,
/// Sender given to tasks, so that if they encounter a state in which execution cannot
/// continue they can request that everything shuts down.
///
/// The task must provide a reason for shutting down.
pub(crate) signal_tx: Sender<&'static str>,
pub(crate) log: slog::Logger,
}
@@ -18,8 +25,18 @@ impl TaskExecutor {
///
/// Note: this function is mainly useful in tests. A `TaskExecutor` should normally be obtained
/// from a [`RuntimeContext`](struct.RuntimeContext.html)
pub fn new(handle: Handle, exit: exit_future::Exit, log: slog::Logger) -> Self {
Self { handle, exit, log }
pub fn new(
handle: Handle,
exit: exit_future::Exit,
log: slog::Logger,
signal_tx: Sender<&'static str>,
) -> Self {
Self {
handle,
exit,
signal_tx,
log,
}
}
/// Spawn a future on the tokio runtime wrapped in an `exit_future::Exit`. The task is canceled
@@ -51,7 +68,7 @@ impl TaskExecutor {
/// Spawn a future on the tokio runtime. This function does not wrap the task in an `exit_future::Exit`
/// like [spawn](#method.spawn).
/// The caller of this function is responsible for wrapping up the task with an `exit_future::Exit` to
/// ensure that the task gets canceled appropriately.
/// This function generates prometheus metrics on number of tasks and task duration.
///
@@ -121,6 +138,11 @@ impl TaskExecutor {
self.exit.clone()
}
/// Get a channel to request shutting down.
pub fn shutdown_sender(&self) -> Sender<&'static str> {
self.signal_tx.clone()
}
/// Returns a reference to the logger.
pub fn log(&self) -> &slog::Logger {
&self.log


@@ -9,7 +9,11 @@
use eth2_config::Eth2Config;
use eth2_testnet_config::Eth2TestnetConfig;
use futures::channel::oneshot;
use futures::channel::{
mpsc::{channel, Receiver, Sender},
oneshot,
};
use futures::{future, StreamExt};
pub use executor::TaskExecutor;
use slog::{info, o, Drain, Level, Logger};
@@ -260,10 +264,13 @@ impl<E: EthSpec> EnvironmentBuilder<E> {
/// Consumes the builder, returning an `Environment`.
pub fn build(self) -> Result<Environment<E>, String> {
let (signal, exit) = exit_future::signal();
let (signal_tx, signal_rx) = channel(1);
Ok(Environment {
runtime: self
.runtime
.ok_or_else(|| "Cannot build environment without runtime".to_string())?,
signal_tx,
signal_rx: Some(signal_rx),
signal: Some(signal),
exit,
log: self
@@ -295,6 +302,7 @@ impl<E: EthSpec> RuntimeContext<E> {
Self {
executor: TaskExecutor {
handle: self.executor.handle.clone(),
signal_tx: self.executor.signal_tx.clone(),
exit: self.executor.exit.clone(),
log: self.executor.log.new(o!("service" => service_name)),
},
@@ -318,6 +326,10 @@ impl<E: EthSpec> RuntimeContext<E> {
/// validator client, or to run tests that involve logging and async task execution.
pub struct Environment<E: EthSpec> {
runtime: Runtime,
/// Receiver side of an internal shutdown signal.
signal_rx: Option<Receiver<&'static str>>,
/// Sender to request shutting down.
signal_tx: Sender<&'static str>,
signal: Option<exit_future::Signal>,
exit: exit_future::Exit,
log: Logger,
@@ -340,6 +352,7 @@ impl<E: EthSpec> Environment<E> {
RuntimeContext {
executor: TaskExecutor {
exit: self.exit.clone(),
signal_tx: self.signal_tx.clone(),
handle: self.runtime().handle().clone(),
log: self.log.clone(),
},
@@ -353,6 +366,7 @@ impl<E: EthSpec> Environment<E> {
RuntimeContext {
executor: TaskExecutor {
exit: self.exit.clone(),
signal_tx: self.signal_tx.clone(),
handle: self.runtime().handle().clone(),
log: self.log.new(o!("service" => service_name)),
},
@@ -361,8 +375,20 @@ impl<E: EthSpec> Environment<E> {
}
}
/// Block the current thread until Ctrl+C is received.
pub fn block_until_ctrl_c(&mut self) -> Result<(), String> {
/// Block the current thread until a shutdown signal is received.
///
/// This can be either the user pressing Ctrl-C or a task requesting to shut down.
pub fn block_until_shutdown_requested(&mut self) -> Result<(), String> {
// future of a task requesting to shutdown
let mut rx = self
.signal_rx
.take()
.ok_or("Inner shutdown already received")?;
let inner_shutdown =
async move { rx.next().await.ok_or("Internal shutdown channel exhausted") };
futures::pin_mut!(inner_shutdown);
// setup for handling a Ctrl-C
let (ctrlc_send, ctrlc_oneshot) = oneshot::channel();
let ctrlc_send_c = RefCell::new(Some(ctrlc_send));
ctrlc::set_handler(move || {
@@ -372,10 +398,18 @@ impl<E: EthSpec> Environment<E> {
})
.map_err(|e| format!("Could not set ctrlc handler: {:?}", e))?;
// Block this thread until Crtl+C is pressed.
self.runtime()
.block_on(ctrlc_oneshot)
.map_err(|e| format!("Ctrlc oneshot failed: {:?}", e))
// Block this thread until a shutdown signal is received.
match self
.runtime()
.block_on(future::select(inner_shutdown, ctrlc_oneshot))
{
future::Either::Left((Ok(reason), _)) => {
info!(self.log, "Internal shutdown received"; "reason" => reason);
Ok(())
}
future::Either::Left((Err(e), _)) => Err(e.into()),
future::Either::Right((x, _)) => x.map_err(|e| format!("Ctrlc oneshot failed: {}", e)),
}
}
/// Shutdown the `tokio` runtime when all tasks are idle.


@@ -300,8 +300,8 @@ fn run<E: EthSpec>(
return Err("No subcommand supplied.".into());
}
// Block this thread until Crtl+C is pressed.
environment.block_until_ctrl_c()?;
// Block this thread until we get a ctrl-c or a task sends a shutdown signal.
environment.block_until_shutdown_requested()?;
info!(log, "Shutting down..");
environment.fire_signal();

scripts/change_version.sh Executable file

@@ -0,0 +1,32 @@
# Change the version across multiple files, prior to a release. Use `sed` to
# find/replace the existing version with the new one.
#
# Takes two arguments:
#
# 1. Current version (e.g., `0.2.6`)
# 2. New version (e.g., `0.2.7`)
#
# ## Example:
#
# `./change_version.sh 0.2.6 0.2.7`
FROM=$1
TO=$2
VERSION_CRATE="../common/lighthouse_version/src/lib.rs"
update_cargo_toml () {
echo $1
sed -i -e "s/version = \"$FROM\"/version = \"$TO\"/g" $1
}
echo "Changing version from $FROM to $TO"
update_cargo_toml ../account_manager/Cargo.toml
update_cargo_toml ../beacon_node/Cargo.toml
update_cargo_toml ../boot_node/Cargo.toml
update_cargo_toml ../lcli/Cargo.toml
update_cargo_toml ../lighthouse/Cargo.toml
update_cargo_toml ../validator_client/Cargo.toml
echo $VERSION_CRATE
sed -i -e "s/$FROM/$TO/g" $VERSION_CRATE
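The `sed` find/replace the script relies on can be exercised against a scratch file rather than the real `Cargo.toml` paths. The versions below are illustrative; note that the `-i` flag used here is GNU `sed` syntax (BSD/macOS `sed` needs `-i ''`).

```shell
# Illustrative only: run the same substitution on a temporary file.
TMP="$(mktemp)"
printf 'version = "0.2.6"\n' > "$TMP"
FROM=0.2.6
TO=0.2.7
# Same pattern as update_cargo_toml: anchor on the `version = "..."` key
# so other occurrences of the bare version string are left untouched.
sed -i -e "s/version = \"$FROM\"/version = \"$TO\"/g" "$TMP"
cat "$TMP"
rm -f "$TMP"
```

Anchoring on `version = "..."` is what lets the script touch each crate's `Cargo.toml` safely, while the looser `s/$FROM/$TO/g` is reserved for the dedicated `lighthouse_version` crate.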


@@ -16,5 +16,6 @@ exec lighthouse \
--testnet-dir $TESTNET_DIR \
--dummy-eth1 \
--http \
--port 9902 \
--http-port 6052 \
--boot-nodes $(cat $BEACON_DIR/beacon/network/enr.dat)


@@ -1,6 +1,6 @@
[package]
name = "validator_client"
version = "0.2.4"
version = "0.2.7"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>", "Luke Anderson <luke@lukeanderson.com.au>"]
edition = "2018"