Compare commits

..

20 Commits

Author SHA1 Message Date
Paul Hauner
0821e6b39f Bump version to v0.2.9 (#1598)
## Issue Addressed

NA

## Proposed Changes

- Bump version tags
- Run `cargo update`

## Additional Info

NA
2020-09-09 02:28:35 +00:00
realbigsean
9cf8f45192 Mnemonic key recovery (#1579)
## Issue Addressed

N/A

## Proposed Changes

Add a `lighthouse am wallet recover` command that recreates a wallet from a mnemonic but no validator keys. Add a `lighthouse am validator recover` command that creates keys directly from a mnemonic for a given index and count.

## Additional Info


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2020-09-08 12:17:51 +00:00
Pawan Dhananjay
00cdc4bb35 Update state before producing attestation (#1596)
## Issue Addressed

Partly addresses #1547 

## Proposed Changes

This fix addresses the missing attestations at slot 0 of an epoch (also sometimes slot 1 when slot 0 was skipped).
There are 2 cases:
1. BN receives the block for the attestation slot after 4 seconds (1/3rd of the slot).
2. No block is proposed for this slot.

In both cases, when we produce the attestation, we pass the head state to the 
`produce_unaggregated_attestation_for_block` function here
9833eca024/beacon_node/beacon_chain/src/beacon_chain.rs (L845-L850)

Since we don't advance the state in this function, we set `attestation.data.source = state.current_justified_checkpoint`, which is at least 2 epochs lower than the current epoch (wall-clock epoch).
This attestation is invalid and cannot be included in a block because of this assert from the spec:
```python
if data.target.epoch == get_current_epoch(state):
        assert data.source == state.current_justified_checkpoint
        state.current_epoch_attestations.append(pending_attestation)
```
https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/beacon-chain.md#attestations

This PR changes the `produce_unaggregated_attestation_for_block` function to ensure that it advances the state before producing the attestation at the new epoch.
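The shape of the fix can be sketched in Python (a hypothetical, simplified model — the real change is in the Rust `produce_unaggregated_attestation_for_block` function): advance the state through any skipped slots before reading the justified checkpoint.

```python
# Simplified, hypothetical model of the fix: advance the state to the
# attestation slot before reading current_justified_checkpoint, so an
# attestation at slot 0 of a new epoch doesn't carry a stale source.
SLOTS_PER_EPOCH = 32

class State:
    def __init__(self, slot, current_justified_checkpoint):
        self.slot = slot
        self.current_justified_checkpoint = current_justified_checkpoint

def process_slot(state):
    # Stand-in for per-slot processing; on an epoch boundary the
    # justification machinery may bump the justified checkpoint.
    state.slot += 1
    if state.slot % SLOTS_PER_EPOCH == 0:
        state.current_justified_checkpoint = state.slot // SLOTS_PER_EPOCH - 1

def produce_attestation(state, attestation_slot):
    # The fix: don't read checkpoints from a stale head state.
    while state.slot < attestation_slot:
        process_slot(state)
    return {"slot": attestation_slot,
            "source": state.current_justified_checkpoint}
```

Without the `while` loop, an attestation produced at the first slot of an epoch from a head state still in the previous epoch would read the old checkpoint and fail the spec `assert` quoted above.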

Running this on my node, I have missed 0 attestations across all 8 of my validators over a 100 epoch period 🎉
For comparison, I was missing ~14 attestations across the same 8 validators over a 100 epoch period before the fix.

Will report missed attestations if any after running for another 100 epochs tomorrow.
2020-09-08 11:25:43 +00:00
Michael Sproul
19be7abfd2 Don't quote slot and epoch, for now (#1597)
Fixes a breaking change to our API that was unnecessary and can wait until #1569 is merged
2020-09-08 02:12:36 +00:00
Age Manning
9833eca024 Use simple logger builder pattern (#1594)
## Issue Addressed

`simple_logger` deprecated the functions we were using, causing our CI to fail. This updates our usage to the builder pattern.
2020-09-07 07:44:17 +00:00
Daniel Schonfeld
2a9a815f29 conforming to the p2p specs, requiring error_messages to be bound (#1593)
## Issue Addressed

#1421 

## Proposed Changes

Bounding the error_message that can be returned for RPC domain errors
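The idea can be sketched in Python (hypothetical constant and function names — the real change is in the Rust RPC decoding code): cap the `error_message` length on receipt so a peer cannot force unbounded allocation.

```python
MAX_ERROR_MSG_LEN = 256  # hypothetical bound; the p2p spec defines the real limit

def bound_error_message(raw: bytes) -> bytes:
    # Truncate oversized payloads instead of allocating them in full.
    return raw[:MAX_ERROR_MSG_LEN]
```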


Co-authored-by: Age Manning <Age@AgeManning.com>
2020-09-07 06:47:05 +00:00
Age Manning
a6376b4585 Update discv5 to v10 (#1592)
## Issue Addressed

Code improvements, dependency improvements and better async handling.
2020-09-07 05:53:20 +00:00
Michael Sproul
74fa87aa98 Add serde_utils module with quoted u64 support (#1588)
## Proposed Changes

This is an extraction of the quoted int code from #1569, that I've come to rely on for #1544.

It allows us to parse integers from serde strings in YAML, JSON, etc. The main differences from the code in Paul's original PR are:

* Added a submodule that makes quoting mandatory (`require_quotes`).
* Decoding is generic over the type `T` being decoded. You can use `#[serde(with = "serde_utils::quoted_u64::require_quotes")]` on `Epoch` and `Slot` fields (this is what I do in my slashing protection PR).

I've turned on quoting for `Epoch` and `Slot` in this PR, but will leave the other `types` changes to you Paul.

I opted to put everything in the `consensus/serde_utils` module so that BLS can use it without a circular dependency. In future, when we want to publish `types`, I think we could publish `serde_utils` as `lighthouse_serde_utils` or something. Open to other ideas on this front too.
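The behaviour can be illustrated in Python (the actual implementation is Rust serde code; the names here are illustrative): the quoted decoder accepts `"42"`, tolerates a bare `42` by default, and a `require_quotes` mode rejects unquoted integers.

```python
import json

def encode_quoted(n: int) -> str:
    # Emit the integer as a JSON string so JavaScript-ish parsers don't
    # mangle values above 2**53.
    return json.dumps(str(n))

def decode_quoted(payload: str, require_quotes: bool = False) -> int:
    # Accept "42" always; accept bare 42 only when quotes aren't required.
    value = json.loads(payload)
    if isinstance(value, str):
        return int(value)
    if require_quotes:
        raise ValueError("integer must be quoted")
    return int(value)
```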
2020-09-07 01:03:53 +00:00
Michael Sproul
211109bbc0 Revert "add a github action for build multi-arch docker images (#1574)" (#1591)
This reverts commit 2627463366.

## Issue Addressed

This is a temporary fix for #1589, by reverting #1574. The Docker image needs to be built with `--build-arg PORTABLE=true`, and we could probably integrate that into the multi-arch build, but in the interests of expediting a fix, this PR opts for a revert.
2020-09-06 04:46:25 +00:00
Sean
638daa87fe Avoid Printing Binary String to Logs (#1576)
Converts the graffiti binary data to string before printing to logs.

## Issue Addressed

#1566 

## Proposed Changes
Rather than converting the graffiti to a vector, the binary data (less the last character) is passed to `String::from_utf8_lossy()`. This then allows us to call `to_string()` directly to obtain the string.
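In Python terms (an illustrative sketch, not the Rust code), the change amounts to a lossy UTF-8 decode of the raw bytes:

```python
def graffiti_to_string(graffiti: bytes) -> str:
    # Mirror Rust's String::from_utf8_lossy: invalid sequences become the
    # U+FFFD replacement character instead of raw binary hitting the logs.
    # Trailing NUL padding is stripped here for readability (the PR slices
    # the buffer slightly differently).
    return graffiti.rstrip(b"\x00").decode("utf-8", errors="replace")
```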

## Additional Info

My Rust skills are fairly weak.
2020-09-05 05:46:25 +00:00
realbigsean
2627463366 add a github action for build multi-arch docker images (#1574)
## Issue Addressed

#1512

## Proposed Changes

Use Github Actions to automate the Docker image build, so that we can make a multi-arch image.  

## Additional Info

This change will require adding the DOCKER_USERNAME and DOCKER_PASSWORD secrets in Github. It will also require disabling the Docker Hub automated build.
2020-09-04 02:43:32 +00:00
Antoine Detante
9c9176c1d1 Allow to use the same password when importing multiple keystores (#1479) (#1510)
## Issue Addressed

#1479 

## Proposed Changes

* Add an optional `reuse-password` flag to the account_manager `import` command, allowing the same password to be used for all imported keystores.
2020-09-04 01:49:21 +00:00
Pawan Dhananjay
87181204d0 Minor documentation fixes (#1297)
## Issue Addressed

N/A

## Proposed Changes

- Fix a wrong command in the validator generation example.
- Replace occurrences of 'passphrase' with 'password'. There was a lot of mixing of the two terms in the documentation and the actual commands, which is confusing. I picked 'password' everywhere because it felt more appropriate, but I don't mind changing it to 'passphrase' as long as it's consistent everywhere.
2020-09-02 04:59:22 +00:00
Age Manning
fb9d828e5e Extended Gossipsub metrics (#1577)
## Issue Addressed

N/A

## Proposed Changes

Adds extended metrics to get a better idea of what is happening at the gossipsub layer of Lighthouse. This provides information about per-topic mesh statistics, subscriptions and peer scores.

## Additional Info
2020-09-01 06:59:14 +00:00
Age Manning
8301a984eb Revert 1502 - Switching docker user to lighthouse (#1578)
## Issue Addressed

The default Docker user has recently changed from root to `lighthouse`.

This requires users to change ownership of their currently mounted Docker volumes, and the upgrade path is non-trivial.
This reverts #1502; we will include it in a major release in the future.

## Proposed Changes

N/A

## Additional Info

N/A
2020-09-01 01:32:02 +00:00
Maximilian Ehlers
7d71d98dc1 Creates a new lighthouse user and makes it the default user to be use… (#1502)
…d in the Docker image

## Issue Addressed
https://github.com/sigp/lighthouse/issues/1459

## Proposed Changes

- Create new `lighthouse` user and group in Docker container
- Set user as the default user
2020-08-31 07:52:26 +00:00
realbigsean
c34e8efb12 Increase logging channel capacity (#1570)
## Issue Addressed

#1464

## Proposed Changes

Increase the slog-async log channel size from the default of 128 to 2048 to reduce the number of dropped logs. 

## Additional Info
2020-08-31 02:36:19 +00:00
Pawan Dhananjay
adea7992f8 Eth1 network exit on wrong network id (#1563)
## Issue Addressed

Fixes #1509 

## Proposed Changes

Exit the beacon node if the eth1 endpoint points to an invalid eth1 network. Check the network id before every eth1 cache update and display an error log if the network id has changed to an invalid one.
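A minimal sketch of the check (hypothetical Python names — the real code is Rust; the expected id depends on the configured testnet):

```python
EXPECTED_NETWORK_ID = 5  # e.g. Goerli, which backs the Medalla testnet

def check_eth1_network(remote_network_id: int,
                       expected: int = EXPECTED_NETWORK_ID) -> None:
    # Run before every eth1 cache update; a mismatch is fatal because all
    # deposit data read from the wrong chain would be invalid.
    if remote_network_id != expected:
        raise RuntimeError(
            f"eth1 endpoint is on network {remote_network_id}, "
            f"expected {expected}"
        )
```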
2020-08-31 02:36:17 +00:00
blacktemplar
c18d37c202 Use Gossipsub 1.1 (#1516)
## Issue Addressed

#1172

## Proposed Changes

* updates the libp2p dependency
* small adaptions based on changes in libp2p
* report not just valid messages but also invalid and distinguish between `IGNORE`d messages and `REJECT`ed messages


Co-authored-by: Age Manning <Age@AgeManning.com>
2020-08-30 13:06:50 +00:00
tobisako
b6340ec495 fix change flag name end_after_checks to continue_after_checks (#1573)
## Issue Addressed

Resolve #1387 

## Proposed Changes

Rename the flag **end_after_checks** to **continue_after_checks**.
Simplify the condition (remove the **!**; no change in logic).

## Additional Info

Operation check
- [x] subcommand `eth1-sim` with ganache-cli
  - [x] `./simulator eth1-sim` -> test completes
  - [x] `./simulator eth1-sim --continue_after_checks` -> test never completes
  - [x] `./simulator eth1-sim -c` -> test never completes
  - [x] `./simulator eth1-sim -c true` -> error from clap (flag takes no value)
  - [x] `./simulator eth1-sim -c false` -> error from clap (flag takes no value)
- [x] subcommand `no-eth1-sim`
  - [x] `./simulator no-eth1-sim` -> test completes
  - [x] `./simulator no-eth1-sim --continue_after_checks` -> test never completes
  - [x] `./simulator no-eth1-sim -c` -> test never completes
  - [x] `./simulator no-eth1-sim -c true` -> error from clap (flag takes no value)
  - [x] `./simulator no-eth1-sim -c false` -> error from clap (flag takes no value)
2020-08-27 23:21:21 +00:00
71 changed files with 1729 additions and 410 deletions

Cargo.lock generated

@@ -2,7 +2,7 @@
# It is not intended for manual editing.
[[package]]
name = "account_manager"
version = "0.2.8"
version = "0.2.9"
dependencies = [
"account_utils",
"bls",
@@ -113,7 +113,7 @@ dependencies = [
"aes",
"block-cipher",
"ghash",
"subtle 2.2.3",
"subtle 2.3.0",
]
[[package]]
@@ -309,12 +309,6 @@ dependencies = [
"safemem",
]
[[package]]
name = "base64"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b41b7ea54a0c9d92199de89e20e58d49f02f8e699814ef3fdf266f6f748d15c7"
[[package]]
name = "base64"
version = "0.12.3"
@@ -354,6 +348,7 @@ dependencies = [
"rand 0.7.3",
"rand_core 0.5.1",
"rayon",
"regex",
"safe_arith",
"serde",
"serde_derive",
@@ -375,7 +370,7 @@ dependencies = [
[[package]]
name = "beacon_node"
version = "0.2.8"
version = "0.2.9"
dependencies = [
"beacon_chain",
"clap",
@@ -462,21 +457,6 @@ dependencies = [
"constant_time_eq",
]
[[package]]
name = "blake3"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce4f9586c9a3151c4b49b19e82ba163dd073614dd057e53c969e1a4db5b52720"
dependencies = [
"arrayref",
"arrayvec",
"cc",
"cfg-if",
"constant_time_eq",
"crypto-mac 0.8.0",
"digest 0.9.0",
]
[[package]]
name = "block-buffer"
version = "0.7.3"
@@ -554,7 +534,7 @@ dependencies = [
[[package]]
name = "boot_node"
version = "0.2.8"
version = "0.2.9"
dependencies = [
"beacon_node",
"clap",
@@ -776,7 +756,7 @@ dependencies = [
"sloggers",
"slot_clock",
"store",
"time 0.2.16",
"time 0.2.18",
"timer",
"tokio 0.2.22",
"toml",
@@ -869,6 +849,12 @@ dependencies = [
"proc-macro-hack",
]
[[package]]
name = "const_fn"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce90df4c658c62f12d78f7508cf92f9173e5184a539c10bfe54a3107b3ffd0f2"
[[package]]
name = "constant_time_eq"
version = "0.1.5"
@@ -958,12 +944,12 @@ dependencies = [
[[package]]
name = "crossbeam-channel"
version = "0.4.3"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ee0cc8804d5393478d743b035099520087a5186f3b93fa58cec08fa62407b6"
checksum = "b153fe7cbef478c567df0f972e02e6d736db11affe43dfc9c56a9374d1adfb87"
dependencies = [
"cfg-if",
"crossbeam-utils",
"maybe-uninit",
]
[[package]]
@@ -1037,7 +1023,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b584a330336237c1eecd3e94266efb216c56ed91225d634cb2991c5f3fd1aeab"
dependencies = [
"generic-array 0.14.4",
"subtle 2.2.3",
"subtle 2.3.0",
]
[[package]]
@@ -1090,7 +1076,7 @@ dependencies = [
"byteorder",
"digest 0.8.1",
"rand_core 0.5.1",
"subtle 2.2.3",
"subtle 2.3.0",
"zeroize",
]
@@ -1103,7 +1089,7 @@ dependencies = [
"byteorder",
"digest 0.9.0",
"rand_core 0.5.1",
"subtle 2.2.3",
"subtle 2.3.0",
"zeroize",
]
@@ -1233,10 +1219,11 @@ checksum = "212d0f5754cb6769937f4501cc0e67f4f4483c8d2c3e1e922ee9edbe4ab4c7c0"
[[package]]
name = "discv5"
version = "0.1.0-alpha.8"
version = "0.1.0-alpha.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "90782d49541b01f9b7e34e6af5d80d01396bf7b1a81505a0035da224134b8d73"
checksum = "b4cba1b485c16864edc11ccbf3abf5fbf1c26ce759ab36c32ee8e12638d50b0d"
dependencies = [
"aes-gcm",
"arrayvec",
"digest 0.8.1",
"enr",
@@ -1251,7 +1238,6 @@ dependencies = [
"lru_time_cache",
"multihash",
"net2",
"openssl",
"parking_lot 0.11.0",
"rand 0.7.3",
"rlp",
@@ -2131,12 +2117,9 @@ dependencies = [
[[package]]
name = "hashbrown"
version = "0.8.2"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e91b62f79061a0bc2e046024cb7ba44b08419ed238ecbd9adbd787434b9e8c25"
dependencies = [
"autocfg 1.0.1",
]
checksum = "00d63df3d41950fb462ed38308eea019113ad1508da725bbedcd0fa5a85ef5f7"
[[package]]
name = "hashset_delay"
@@ -2432,12 +2415,12 @@ dependencies = [
[[package]]
name = "indexmap"
version = "1.5.1"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b45e59b16c76b11bf9738fd5d38879d3bd28ad292d7b313608becb17ae2df9"
checksum = "55e2e4c765aa53a0424761bf9f41aa7a6ac1efa87238f59560640e27fca028f2"
dependencies = [
"autocfg 1.0.1",
"hashbrown 0.8.2",
"hashbrown 0.9.0",
]
[[package]]
@@ -2457,9 +2440,12 @@ dependencies = [
[[package]]
name = "integer-sqrt"
version = "0.1.3"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f65877bf7d44897a473350b1046277941cee20b263397e90869c50b6e766088b"
checksum = "1d3038ef3da477fd70e6161006ef37f919c24bfe7244062f67a69defe72c02aa"
dependencies = [
"num-traits",
]
[[package]]
name = "iovec"
@@ -2502,9 +2488,9 @@ checksum = "dc6f3ad7b9d11a0c00842ff8de1b60ee58661048eb8049ed33c73594f359d7e6"
[[package]]
name = "js-sys"
version = "0.3.44"
version = "0.3.45"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85a7e2c92a4804dd459b86c339278d0fe87cf93757fae222c3fa3ae75458bc73"
checksum = "ca059e81d9486668f12d455a4ea6daa600bd408134cd17e3d3fb5a32d1f016f8"
dependencies = [
"wasm-bindgen",
]
@@ -2561,7 +2547,7 @@ checksum = "830d08ce1d1d941e6b30645f1a0eb5643013d835ce3779a5fc208261dbe10f55"
[[package]]
name = "lcli"
version = "0.2.8"
version = "0.2.9"
dependencies = [
"bls",
"clap",
@@ -2644,8 +2630,8 @@ checksum = "c7d73b3f436185384286bd8098d17ec07c9a7d2388a6599f824d8502b529702a"
[[package]]
name = "libp2p"
version = "0.23.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
version = "0.25.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"atomic",
"bytes 0.5.6",
@@ -2662,7 +2648,7 @@ dependencies = [
"libp2p-tcp",
"libp2p-websocket",
"multihash",
"parity-multiaddr 0.9.1 (git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803)",
"parity-multiaddr 0.9.1",
"parking_lot 0.10.2",
"pin-project",
"smallvec 1.4.2",
@@ -2687,7 +2673,7 @@ dependencies = [
"log 0.4.11",
"multihash",
"multistream-select 0.8.2 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-multiaddr 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)",
"parity-multiaddr 0.9.2",
"parking_lot 0.10.2",
"pin-project",
"prost",
@@ -2706,7 +2692,7 @@ dependencies = [
[[package]]
name = "libp2p-core"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"asn1_der",
"bs58",
@@ -2719,8 +2705,8 @@ dependencies = [
"libsecp256k1",
"log 0.4.11",
"multihash",
"multistream-select 0.8.2 (git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803)",
"parity-multiaddr 0.9.1 (git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803)",
"multistream-select 0.8.2 (git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f)",
"parity-multiaddr 0.9.1",
"parking_lot 0.10.2",
"pin-project",
"prost",
@@ -2739,7 +2725,7 @@ dependencies = [
[[package]]
name = "libp2p-core-derive"
version = "0.20.2"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"quote",
"syn",
@@ -2748,7 +2734,7 @@ dependencies = [
[[package]]
name = "libp2p-dns"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"futures 0.3.5",
"libp2p-core 0.21.0",
@@ -2757,10 +2743,10 @@ dependencies = [
[[package]]
name = "libp2p-gossipsub"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
version = "0.22.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"base64 0.11.0",
"base64 0.12.3",
"byteorder",
"bytes 0.5.6",
"fnv",
@@ -2773,16 +2759,16 @@ dependencies = [
"prost",
"prost-build",
"rand 0.7.3",
"sha2 0.8.2",
"sha2 0.9.1",
"smallvec 1.4.2",
"unsigned-varint 0.4.0",
"unsigned-varint 0.5.1",
"wasm-timer",
]
[[package]]
name = "libp2p-identify"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
version = "0.22.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"futures 0.3.5",
"libp2p-core 0.21.0",
@@ -2797,7 +2783,7 @@ dependencies = [
[[package]]
name = "libp2p-mplex"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"bytes 0.5.6",
"fnv",
@@ -2812,7 +2798,7 @@ dependencies = [
[[package]]
name = "libp2p-noise"
version = "0.23.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"bytes 0.5.6",
"curve25519-dalek 2.1.0",
@@ -2832,9 +2818,10 @@ dependencies = [
[[package]]
name = "libp2p-swarm"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
version = "0.22.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"either",
"futures 0.3.5",
"libp2p-core 0.21.0",
"log 0.4.11",
@@ -2847,7 +2834,7 @@ dependencies = [
[[package]]
name = "libp2p-tcp"
version = "0.21.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"futures 0.3.5",
"futures-timer",
@@ -2862,7 +2849,7 @@ dependencies = [
[[package]]
name = "libp2p-websocket"
version = "0.22.0"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"async-tls",
"either",
@@ -2890,7 +2877,7 @@ dependencies = [
"hmac-drbg",
"rand 0.7.3",
"sha2 0.8.2",
"subtle 2.2.3",
"subtle 2.3.0",
"typenum",
]
@@ -2907,9 +2894,9 @@ dependencies = [
[[package]]
name = "libz-sys"
version = "1.1.0"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "af67924b8dd885cccea261866c8ce5b74d239d272e154053ff927dae839f5ae9"
checksum = "602113192b08db8f38796c4e85c39e960c145965140e918018bcde1952429655"
dependencies = [
"cc",
"pkg-config",
@@ -2918,7 +2905,7 @@ dependencies = [
[[package]]
name = "lighthouse"
version = "0.2.8"
version = "0.2.9"
dependencies = [
"account_manager",
"account_utils",
@@ -3221,18 +3208,17 @@ dependencies = [
[[package]]
name = "multihash"
version = "0.11.3"
version = "0.11.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51cc1552a982658478dbc22eefb72bb1d4fd1161eb9818f7bbf4347443f07569"
checksum = "567122ab6492f49b59def14ecc36e13e64dca4188196dd0cd41f9f3f979f3df6"
dependencies = [
"blake2b_simd",
"blake2s_simd",
"blake3",
"digest 0.9.0",
"sha-1 0.9.1",
"sha2 0.9.1",
"sha3",
"unsigned-varint 0.5.0",
"unsigned-varint 0.5.1",
]
[[package]]
@@ -3244,7 +3230,7 @@ checksum = "1255076139a83bb467426e7f8d0134968a8118844faa755985e077cf31850333"
[[package]]
name = "multistream-select"
version = "0.8.2"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"bytes 0.5.6",
"futures 0.3.5",
@@ -3288,9 +3274,9 @@ dependencies = [
[[package]]
name = "net2"
version = "0.2.34"
version = "0.2.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2ba7c918ac76704fb42afcbbb43891e72731f3dcca3bef2a19786297baf14af7"
checksum = "3ebc3ec692ed7c9a255596c67808dee269f64655d8baf7b4f0638e51ba1d6853"
dependencies = [
"cfg-if",
"libc",
@@ -3538,7 +3524,7 @@ dependencies = [
[[package]]
name = "parity-multiaddr"
version = "0.9.1"
source = "git+https://github.com/sigp/rust-libp2p?rev=bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803#bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
source = "git+https://github.com/sigp/rust-libp2p?rev=03f998022ce2f566a6c6e6c4206bc0ce4d45109f#03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
dependencies = [
"arrayref",
"bs58",
@@ -3554,9 +3540,9 @@ dependencies = [
[[package]]
name = "parity-multiaddr"
version = "0.9.1"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc20af3143a62c16e7c9e92ea5c6ae49f7d271d97d4d8fe73afc28f0514a3d0f"
checksum = "2165a93382a93de55868dcbfa11e4a8f99676a9164eee6a2b4a9479ad319c257"
dependencies = [
"arrayref",
"bs58",
@@ -3572,9 +3558,9 @@ dependencies = [
[[package]]
name = "parity-scale-codec"
version = "1.3.4"
version = "1.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34d38aeaffc032ec69faa476b3caaca8d4dd7f3f798137ff30359e5c7869ceb6"
checksum = "7c740e5fbcb6847058b40ac7e5574766c6388f585e184d769910fe0d3a2ca861"
dependencies = [
"arrayvec",
"bitvec",
@@ -3850,9 +3836,9 @@ checksum = "eba180dafb9038b050a4c280019bbedf9f2467b61e5d892dcad585bb57aadc5a"
[[package]]
name = "proc-macro2"
version = "1.0.19"
version = "1.0.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04f5f085b5d71e2188cb8271e5da0161ad52c3f227a661a3c135fdf28e258b12"
checksum = "175c513d55719db99da20232b06cda8bab6b83ec2d04e3283edf0213c37c1a29"
dependencies = [
"unicode-xid",
]
@@ -4610,12 +4596,6 @@ version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
[[package]]
name = "send_wrapper"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a0eddf2e8f50ced781f288c19f18621fa72a3779e3cb58dbf23b07469b0abeb4"
[[package]]
name = "serde"
version = "1.0.115"
@@ -4688,6 +4668,15 @@ dependencies = [
"url 2.1.1",
]
[[package]]
name = "serde_utils"
version = "0.1.0"
dependencies = [
"serde",
"serde_derive",
"serde_json",
]
[[package]]
name = "serde_yaml"
version = "0.8.13"
@@ -4786,9 +4775,9 @@ checksum = "29f060a7d147e33490ec10da418795238fd7545bba241504d6b31a409f2e6210"
[[package]]
name = "simple_logger"
version = "1.6.0"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fea0c4611f32f4c2bac73754f22dca1f57e6c1945e0590dae4e5f2a077b92367"
checksum = "13a53ed2efd04911c8280f2da7bf9abd350c931b86bc7f9f2386fbafbf525ff9"
dependencies = [
"atty",
"chrono",
@@ -4803,6 +4792,7 @@ version = "0.2.0"
dependencies = [
"clap",
"env_logger",
"eth1",
"eth1_test_rig",
"futures 0.3.5",
"node_test_rig",
@@ -5000,7 +4990,7 @@ dependencies = [
"ring",
"rustc_version",
"sha2 0.9.1",
"subtle 2.2.3",
"subtle 2.3.0",
"x25519-dalek",
]
@@ -5018,9 +5008,9 @@ dependencies = [
[[package]]
name = "soketto"
version = "0.4.1"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "85457366ae0c6ce56bf05a958aef14cd38513c236568618edbcd9a8c52cb80b0"
checksum = "b5c71ed3d54db0a699f4948e1bb3e45b450fa31fe602621dee6680361d569c88"
dependencies = [
"base64 0.12.3",
"bytes 0.5.6",
@@ -5029,7 +5019,7 @@ dependencies = [
"httparse",
"log 0.4.11",
"rand 0.7.3",
"sha-1 0.8.2",
"sha-1 0.9.1",
]
[[package]]
@@ -5196,9 +5186,9 @@ checksum = "2d67a5a62ba6e01cb2192ff309324cb4875d0c451d55fe2319433abe7a05a8ee"
[[package]]
name = "subtle"
version = "2.2.3"
version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "502d53007c02d7605a05df1c1a73ee436952781653da5d0bf57ad608f66932c1"
checksum = "343f3f510c2915908f155e94f17220b19ccfacf2a64a2a5d8004f2c3e311e7fd"
[[package]]
name = "swap_or_not_shuffle"
@@ -5211,9 +5201,9 @@ dependencies = [
[[package]]
name = "syn"
version = "1.0.39"
version = "1.0.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "891d8d6567fe7c7f8835a3a98af4208f3846fba258c1bc3c31d6e506239f11f9"
checksum = "963f7d3cc59b59b9325165add223142bbf1df27655d07789f109896d353d8350"
dependencies = [
"proc-macro2",
"quote",
@@ -5355,11 +5345,11 @@ dependencies = [
[[package]]
name = "time"
version = "0.2.16"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a51cadc5b1eec673a685ff7c33192ff7b7603d0b75446fb354939ee615acb15"
checksum = "12785163ae8a1cbb52a5db39af4a5baabd3fe49f07f76f952f89d7e89e5ce531"
dependencies = [
"cfg-if",
"const_fn",
"libc",
"standback",
"stdweb",
@@ -5803,9 +5793,9 @@ dependencies = [
[[package]]
name = "tracing-core"
version = "0.1.15"
version = "0.1.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4f0e00789804e99b20f12bc7003ca416309d28a6f495d6af58d1e2c2842461b5"
checksum = "5bcf46c1f1f06aeea2d6b81f3c863d0930a596c86ad1920d4e5bad6dd1d7119a"
dependencies = [
"lazy_static",
]
@@ -6006,7 +5996,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8326b2c654932e3e4f9196e69d08fdf7cfd718e1dc6f66b347e6024a0c961402"
dependencies = [
"generic-array 0.14.4",
"subtle 2.2.3",
"subtle 2.3.0",
]
[[package]]
@@ -6030,9 +6020,13 @@ dependencies = [
[[package]]
name = "unsigned-varint"
version = "0.5.0"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a98e44fc6af1e18c3a06666d829b4fd8d2714fb2dbffe8ab99d5dc7ea6baa628"
checksum = "f7fdeedbf205afadfe39ae559b75c3240f24e257d0ca27e85f85cb82aa19ac35"
dependencies = [
"bytes 0.5.6",
"futures_codec",
]
[[package]]
name = "untrusted"
@@ -6074,7 +6068,7 @@ dependencies = [
[[package]]
name = "validator_client"
version = "0.2.8"
version = "0.2.9"
dependencies = [
"account_utils",
"bls",
@@ -6205,9 +6199,9 @@ checksum = "1a143597ca7c7793eff794def352d41792a93c481eb1042423ff7ff72ba2c31f"
[[package]]
name = "wasm-bindgen"
version = "0.2.67"
version = "0.2.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0563a9a4b071746dd5aedbc3a28c6fe9be4586fb3fbadb67c400d4f53c6b16c"
checksum = "1ac64ead5ea5f05873d7c12b545865ca2b8d28adfc50a49b84770a3a97265d42"
dependencies = [
"cfg-if",
"serde",
@@ -6217,9 +6211,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.67"
version = "0.2.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc71e4c5efa60fb9e74160e89b93353bc24059999c0ae0fb03affc39770310b0"
checksum = "f22b422e2a757c35a73774860af8e112bff612ce6cb604224e8e47641a9e4f68"
dependencies = [
"bumpalo",
"lazy_static",
@@ -6232,9 +6226,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-futures"
version = "0.4.17"
version = "0.4.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95f8d235a77f880bcef268d379810ea6c0af2eacfa90b1ad5af731776e0c4699"
checksum = "b7866cab0aa01de1edf8b5d7936938a7e397ee50ce24119aef3e1eaa3b6171da"
dependencies = [
"cfg-if",
"js-sys",
@@ -6244,9 +6238,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.67"
version = "0.2.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97c57cefa5fa80e2ba15641578b44d36e7a64279bc5ed43c6dbaf329457a2ed2"
checksum = "6b13312a745c08c469f0b292dd2fcd6411dba5f7160f593da6ef69b64e407038"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -6254,9 +6248,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.67"
version = "0.2.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841a6d1c35c6f596ccea1f82504a192a60378f64b3bb0261904ad8f2f5657556"
checksum = "f249f06ef7ee334cc3b8ff031bfc11ec99d00f34d86da7498396dc1e3b1498fe"
dependencies = [
"proc-macro2",
"quote",
@@ -6267,15 +6261,15 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.67"
version = "0.2.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93b162580e34310e5931c4b792560108b10fd14d64915d7fff8ff00180e70092"
checksum = "1d649a3145108d7d3fbcde896a468d1bd636791823c9921135218ad89be08307"
[[package]]
name = "wasm-bindgen-test"
version = "0.3.17"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d92df9d5715606f9e48f85df3b78cb77ae44a2ea9a5f2a785a97bd0066b9300"
checksum = "34d1cdc8b98a557f24733d50a1199c4b0635e465eecba9c45b214544da197f64"
dependencies = [
"console_error_panic_hook",
"js-sys",
@@ -6287,9 +6281,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-test-macro"
version = "0.3.17"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51611ce8e84cba89379d91fc5074bacc5530f69da1c09a2853d906129d12b3b8"
checksum = "e8fb9c67be7439ee8ab1b7db502a49c05e51e2835b66796c705134d9b8e1a585"
dependencies = [
"proc-macro2",
"quote",
@@ -6297,15 +6291,14 @@ dependencies = [
[[package]]
name = "wasm-timer"
version = "0.2.4"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "324c5e65a08699c9c4334ba136597ab22b85dccd4b65dd1e36ccf8f723a95b54"
checksum = "be0ecb0db480561e9a7642b5d3e4187c128914e58aa84330b9493e3eb68c5e7f"
dependencies = [
"futures 0.3.5",
"js-sys",
"parking_lot 0.9.0",
"parking_lot 0.11.0",
"pin-utils",
"send_wrapper",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
@@ -6313,9 +6306,9 @@ dependencies = [
[[package]]
name = "web-sys"
version = "0.3.44"
version = "0.3.45"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dda38f4e5ca63eda02c059d243aa25b5f35ab98451e518c51612cd0f1bd19a47"
checksum = "4bf6ef87ad7ae8008e15a355ce696bed26012b7caa21605188cfd8214ab51e2d"
dependencies = [
"js-sys",
"wasm-bindgen",

View File

@@ -44,6 +44,7 @@ members = [
"consensus/ssz_derive",
"consensus/ssz_types",
"consensus/serde_hex",
"consensus/serde_utils",
"consensus/state_processing",
"consensus/swap_or_not_shuffle",
"consensus/tree_hash",

View File

@@ -1,6 +1,6 @@
[package]
name = "account_manager"
version = "0.2.8"
version = "0.2.9"
authors = ["Paul Hauner <paul@paulhauner.com>", "Luke Anderson <luke@sigmaprime.io>"]
edition = "2018"

View File

@@ -1,6 +1,15 @@
use account_utils::PlainText;
use account_utils::{read_mnemonic_from_user, strip_off_newlines};
use clap::ArgMatches;
use eth2_wallet::bip39::{Language, Mnemonic};
use std::fs;
use std::fs::create_dir_all;
use std::path::{Path, PathBuf};
use std::str::from_utf8;
use std::thread::sleep;
use std::time::Duration;
pub const MNEMONIC_PROMPT: &str = "Enter the mnemonic phrase:";
pub fn ensure_dir_exists<P: AsRef<Path>>(path: P) -> Result<(), String> {
let path = path.as_ref();
@@ -19,3 +28,43 @@ pub fn base_wallet_dir(matches: &ArgMatches, arg: &'static str) -> Result<PathBu
PathBuf::new().join(".lighthouse").join("wallets"),
)
}
pub fn read_mnemonic_from_cli(
mnemonic_path: Option<PathBuf>,
stdin_password: bool,
) -> Result<Mnemonic, String> {
let mnemonic = match mnemonic_path {
Some(path) => fs::read(&path)
.map_err(|e| format!("Unable to read {:?}: {:?}", path, e))
.and_then(|bytes| {
let bytes_no_newlines: PlainText = strip_off_newlines(bytes).into();
let phrase = from_utf8(&bytes_no_newlines.as_ref())
.map_err(|e| format!("Unable to derive mnemonic: {:?}", e))?;
Mnemonic::from_phrase(phrase, Language::English).map_err(|e| {
format!(
"Unable to derive mnemonic from string {:?}: {:?}",
phrase, e
)
})
})?,
None => loop {
eprintln!("");
eprintln!("{}", MNEMONIC_PROMPT);
let mnemonic = read_mnemonic_from_user(stdin_password)?;
match Mnemonic::from_phrase(mnemonic.as_str(), Language::English) {
Ok(mnemonic_m) => {
eprintln!("Valid mnemonic provided.");
eprintln!("");
sleep(Duration::from_secs(1));
break mnemonic_m;
}
Err(_) => {
eprintln!("Invalid mnemonic");
}
}
},
};
Ok(mnemonic)
}
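The mnemonic read above relies on `strip_off_newlines` from `account_utils`. A minimal std-only stand-in covering the common case (the name mirrors the helper, but this is an illustrative sketch, not the `account_utils` implementation):

```rust
// Sketch: drop trailing newline/carriage-return bytes from a mnemonic file,
// so `fs::read` output can be parsed as a phrase.
fn strip_off_newlines(mut bytes: Vec<u8>) -> Vec<u8> {
    while matches!(bytes.last(), Some(&b'\n') | Some(&b'\r')) {
        bytes.pop();
    }
    bytes
}

fn main() {
    assert_eq!(strip_off_newlines(b"abandon ability\r\n".to_vec()), b"abandon ability".to_vec());
    assert_eq!(strip_off_newlines(b"abandon ability".to_vec()), b"abandon ability".to_vec());
}
```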

View File

@@ -13,7 +13,7 @@ use validator_dir::Builder as ValidatorDirBuilder;
pub const CMD: &str = "create";
pub const BASE_DIR_FLAG: &str = "base-dir";
pub const WALLET_NAME_FLAG: &str = "wallet-name";
pub const WALLET_PASSPHRASE_FLAG: &str = "wallet-passphrase";
pub const WALLET_PASSWORD_FLAG: &str = "wallet-password";
pub const DEPOSIT_GWEI_FLAG: &str = "deposit-gwei";
pub const STORE_WITHDRAW_FLAG: &str = "store-withdrawal-keystore";
pub const COUNT_FLAG: &str = "count";
@@ -34,8 +34,8 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.required(true),
)
.arg(
Arg::with_name(WALLET_PASSPHRASE_FLAG)
.long(WALLET_PASSPHRASE_FLAG)
Arg::with_name(WALLET_PASSWORD_FLAG)
.long(WALLET_PASSWORD_FLAG)
.value_name("WALLET_PASSWORD_PATH")
.help("A path to a file containing the password which will unlock the wallet.")
.takes_value(true)
@@ -109,8 +109,7 @@ pub fn cli_run<T: EthSpec>(
let spec = env.core_context().eth2_config.spec;
let name: String = clap_utils::parse_required(matches, WALLET_NAME_FLAG)?;
let wallet_password_path: PathBuf =
clap_utils::parse_required(matches, WALLET_PASSPHRASE_FLAG)?;
let wallet_password_path: PathBuf = clap_utils::parse_required(matches, WALLET_PASSWORD_FLAG)?;
let validator_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
VALIDATOR_DIR_FLAG,

View File

@@ -6,6 +6,7 @@ use account_utils::{
recursively_find_voting_keystores, ValidatorDefinition, ValidatorDefinitions,
CONFIG_FILENAME,
},
ZeroizeString,
};
use clap::{App, Arg, ArgMatches};
use std::fs;
@@ -17,6 +18,7 @@ pub const CMD: &str = "import";
pub const KEYSTORE_FLAG: &str = "keystore";
pub const DIR_FLAG: &str = "directory";
pub const STDIN_PASSWORD_FLAG: &str = "stdin-passwords";
pub const REUSE_PASSWORD_FLAG: &str = "reuse-password";
pub const PASSWORD_PROMPT: &str = "Enter the keystore password, or press enter to omit it:";
pub const KEYSTORE_REUSE_WARNING: &str = "DO NOT USE THE ORIGINAL KEYSTORES TO VALIDATE WITH \
@@ -68,6 +70,11 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.long(STDIN_PASSWORD_FLAG)
.help("If present, read passwords from stdin instead of tty."),
)
.arg(
Arg::with_name(REUSE_PASSWORD_FLAG)
.long(REUSE_PASSWORD_FLAG)
.help("If present, the same password will be used for all imported keystores."),
)
}
pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
@@ -79,6 +86,7 @@ pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
PathBuf::new().join(".lighthouse").join("validators"),
)?;
let stdin_password = matches.is_present(STDIN_PASSWORD_FLAG);
let reuse_password = matches.is_present(REUSE_PASSWORD_FLAG);
ensure_dir_exists(&validator_dir)?;
@@ -118,7 +126,9 @@ pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
// - Add the keystore to the validator definitions file.
//
// Skip keystores that already exist, but exit early if any operation fails.
// Reuses the same password for all keystores if the `REUSE_PASSWORD_FLAG` flag is set.
let mut num_imported_keystores = 0;
let mut previous_password: Option<ZeroizeString> = None;
for src_keystore in &keystore_paths {
let keystore = Keystore::from_json_file(src_keystore)
.map_err(|e| format!("Unable to read keystore JSON {:?}: {:?}", src_keystore, e))?;
@@ -136,6 +146,10 @@ pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
);
let password_opt = loop {
if let Some(password) = previous_password.clone() {
eprintln!("Reusing previous password.");
break Some(password);
}
eprintln!("");
eprintln!("{}", PASSWORD_PROMPT);
@@ -152,6 +166,9 @@ pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
eprintln!("Password is correct.");
eprintln!("");
sleep(Duration::from_secs(1)); // Provides nicer UX.
if reuse_password {
previous_password = Some(password.clone());
}
break Some(password);
}
Err(eth2_keystore::Error::InvalidPassword) => {

View File

@@ -2,6 +2,7 @@ pub mod create;
pub mod deposit;
pub mod import;
pub mod list;
pub mod recover;
use crate::common::base_wallet_dir;
use clap::{App, Arg, ArgMatches};
@@ -24,6 +25,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.subcommand(deposit::cli_app())
.subcommand(import::cli_app())
.subcommand(list::cli_app())
.subcommand(recover::cli_app())
}
pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<(), String> {
@@ -34,6 +36,7 @@ pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<
(deposit::CMD, Some(matches)) => deposit::cli_run::<T>(matches, env),
(import::CMD, Some(matches)) => import::cli_run(matches),
(list::CMD, Some(matches)) => list::cli_run(matches),
(recover::CMD, Some(matches)) => recover::cli_run(matches),
(unknown, _) => Err(format!(
"{} does not have a {} command. See --help",
CMD, unknown

View File

@@ -0,0 +1,156 @@
use super::create::STORE_WITHDRAW_FLAG;
use super::import::STDIN_PASSWORD_FLAG;
use crate::common::{ensure_dir_exists, read_mnemonic_from_cli};
use crate::validator::create::COUNT_FLAG;
use crate::{SECRETS_DIR_FLAG, VALIDATOR_DIR_FLAG};
use account_utils::eth2_keystore::{keypair_from_secret, Keystore, KeystoreBuilder};
use account_utils::random_password;
use clap::{App, Arg, ArgMatches};
use eth2_wallet::bip39::Seed;
use eth2_wallet::{recover_validator_secret_from_mnemonic, KeyType, ValidatorKeystores};
use std::path::PathBuf;
use validator_dir::Builder as ValidatorDirBuilder;
pub const CMD: &str = "recover";
pub const FIRST_INDEX_FLAG: &str = "first-index";
pub const MNEMONIC_FLAG: &str = "mnemonic-path";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about(
"Recovers validator private keys given a BIP-39 mnemonic phrase. \
If you do not specify `--first-index` or `--count`, by default this will \
only recover the keys associated with the validator at index 0 for an HD wallet \
in accordance with the EIP-2333 spec.")
.arg(
Arg::with_name(FIRST_INDEX_FLAG)
.long(FIRST_INDEX_FLAG)
.value_name("FIRST_INDEX")
.help("The first of consecutive key indexes you wish to recover.")
.takes_value(true)
.required(false)
.default_value("0"),
)
.arg(
Arg::with_name(COUNT_FLAG)
.long(COUNT_FLAG)
.value_name("COUNT")
.help("The number of validator keys you wish to recover. Counted consecutively from the provided `--first-index`.")
.takes_value(true)
.required(false)
.default_value("1"),
)
.arg(
Arg::with_name(MNEMONIC_FLAG)
.long(MNEMONIC_FLAG)
.value_name("MNEMONIC_PATH")
.help(
"If present, the mnemonic will be read in from this file.",
)
.takes_value(true)
)
.arg(
Arg::with_name(VALIDATOR_DIR_FLAG)
.long(VALIDATOR_DIR_FLAG)
.value_name("VALIDATOR_DIRECTORY")
.help(
"The path where the validator directories will be created. \
Defaults to ~/.lighthouse/validators",
)
.takes_value(true),
)
.arg(
Arg::with_name(SECRETS_DIR_FLAG)
.long(SECRETS_DIR_FLAG)
.value_name("SECRETS_DIR")
.help(
"The path where the validator keystore passwords will be stored. \
Defaults to ~/.lighthouse/secrets",
)
.takes_value(true),
)
.arg(
Arg::with_name(STORE_WITHDRAW_FLAG)
.long(STORE_WITHDRAW_FLAG)
.help(
"If present, the withdrawal keystore will be stored alongside the voting \
keypair. It is generally recommended to *not* store the withdrawal key and \
instead generate them from the wallet seed when required.",
),
)
.arg(
Arg::with_name(STDIN_PASSWORD_FLAG)
.long(STDIN_PASSWORD_FLAG)
.help("If present, read passwords from stdin instead of tty."),
)
}
pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
let validator_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
VALIDATOR_DIR_FLAG,
PathBuf::new().join(".lighthouse").join("validators"),
)?;
let secrets_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
SECRETS_DIR_FLAG,
PathBuf::new().join(".lighthouse").join("secrets"),
)?;
let first_index: u32 = clap_utils::parse_required(matches, FIRST_INDEX_FLAG)?;
let count: u32 = clap_utils::parse_required(matches, COUNT_FLAG)?;
let mnemonic_path: Option<PathBuf> = clap_utils::parse_optional(matches, MNEMONIC_FLAG)?;
let stdin_password = matches.is_present(STDIN_PASSWORD_FLAG);
ensure_dir_exists(&validator_dir)?;
ensure_dir_exists(&secrets_dir)?;
eprintln!("");
eprintln!("WARNING: KEY RECOVERY CAN LEAD TO DUPLICATING VALIDATOR KEYS, WHICH CAN LEAD TO SLASHING.");
eprintln!("");
let mnemonic = read_mnemonic_from_cli(mnemonic_path, stdin_password)?;
let seed = Seed::new(&mnemonic, "");
for index in first_index..first_index + count {
let voting_password = random_password();
let withdrawal_password = random_password();
let derive = |key_type: KeyType, password: &[u8]| -> Result<Keystore, String> {
let (secret, path) =
recover_validator_secret_from_mnemonic(seed.as_bytes(), index, key_type)
.map_err(|e| format!("Unable to recover validator keys: {:?}", e))?;
let keypair = keypair_from_secret(secret.as_bytes())
.map_err(|e| format!("Unable to build keypair: {:?}", e))?;
KeystoreBuilder::new(&keypair, password, format!("{}", path))
.map_err(|e| format!("Unable to build keystore: {:?}", e))?
.build()
.map_err(|e| format!("Unable to build keystore: {:?}", e))
};
let keystores = ValidatorKeystores {
voting: derive(KeyType::Voting, voting_password.as_bytes())?,
withdrawal: derive(KeyType::Withdrawal, withdrawal_password.as_bytes())?,
};
let voting_pubkey = keystores.voting.pubkey().to_string();
ValidatorDirBuilder::new(validator_dir.clone(), secrets_dir.clone())
.voting_keystore(keystores.voting, voting_password.as_bytes())
.withdrawal_keystore(keystores.withdrawal, withdrawal_password.as_bytes())
.store_withdrawal_keystore(matches.is_present(STORE_WITHDRAW_FLAG))
.build()
.map_err(|e| format!("Unable to build validator directory: {:?}", e))?;
println!(
"{}/{}\tIndex: {}\t0x{}",
index - first_index + 1,
count,
index,
voting_pubkey
);
}
Ok(())
}
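The recovery loop above derives one voting and one withdrawal keystore per index. As a hedged illustration (not Lighthouse's API), the EIP-2334 derivation paths implied by `first_index..first_index + count` can be enumerated like so, assuming voting keys live at `m/12381/3600/<i>/0/0` and withdrawal keys at `m/12381/3600/<i>/0`; `paths_for_range` is a hypothetical helper:

```rust
// Hypothetical sketch: enumerate the (voting, withdrawal) EIP-2334 paths
// for a consecutive range of validator indexes, mirroring the recover loop.
fn paths_for_range(first_index: u32, count: u32) -> Vec<(String, String)> {
    (first_index..first_index + count)
        .map(|i| {
            (
                format!("m/12381/3600/{}/0/0", i), // voting key path
                format!("m/12381/3600/{}/0", i),   // withdrawal key path
            )
        })
        .collect()
}

fn main() {
    let paths = paths_for_range(0, 2);
    assert_eq!(paths[0].0, "m/12381/3600/0/0/0");
    assert_eq!(paths[1].1, "m/12381/3600/1/0");
}
```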

View File

@@ -5,7 +5,7 @@ use eth2_wallet::{
bip39::{Language, Mnemonic, MnemonicType},
PlainText,
};
use eth2_wallet_manager::{WalletManager, WalletType};
use eth2_wallet_manager::{LockedWallet, WalletManager, WalletType};
use std::ffi::OsStr;
use std::fs::{self, File};
use std::io::prelude::*;
@@ -15,7 +15,7 @@ use std::path::{Path, PathBuf};
pub const CMD: &str = "create";
pub const HD_TYPE: &str = "hd";
pub const NAME_FLAG: &str = "name";
pub const PASSPHRASE_FLAG: &str = "passphrase-file";
pub const PASSWORD_FLAG: &str = "password-file";
pub const TYPE_FLAG: &str = "type";
pub const MNEMONIC_FLAG: &str = "mnemonic-output-path";
@@ -34,8 +34,8 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.required(true),
)
.arg(
Arg::with_name(PASSPHRASE_FLAG)
.long(PASSPHRASE_FLAG)
Arg::with_name(PASSWORD_FLAG)
.long(PASSWORD_FLAG)
.value_name("WALLET_PASSWORD_PATH")
.help(
"A path to a file containing the password which will unlock the wallet. \
@@ -70,46 +70,14 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
}
pub fn cli_run(matches: &ArgMatches, base_dir: PathBuf) -> Result<(), String> {
let name: String = clap_utils::parse_required(matches, NAME_FLAG)?;
let wallet_password_path: PathBuf = clap_utils::parse_required(matches, PASSPHRASE_FLAG)?;
let mnemonic_output_path: Option<PathBuf> = clap_utils::parse_optional(matches, MNEMONIC_FLAG)?;
let type_field: String = clap_utils::parse_required(matches, TYPE_FLAG)?;
let wallet_type = match type_field.as_ref() {
HD_TYPE => WalletType::Hd,
unknown => return Err(format!("--{} {} is not supported", TYPE_FLAG, unknown)),
};
let mgr = WalletManager::open(&base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", BASE_DIR_FLAG, e))?;
// Create a new random mnemonic.
//
// The `tiny-bip39` crate uses `thread_rng()` for this entropy.
let mnemonic = Mnemonic::new(MnemonicType::Words12, Language::English);
// Create a random password if the file does not exist.
if !wallet_password_path.exists() {
// To prevent users from accidentally supplying their password to the PASSPHRASE_FLAG and
// creating a file with that name, we require that the password file has a .pass suffix.
if wallet_password_path.extension() != Some(&OsStr::new("pass")) {
return Err(format!(
"Only creates a password file if that file ends in .pass: {:?}",
wallet_password_path
));
}
create_with_600_perms(&wallet_password_path, random_password().as_bytes())
.map_err(|e| format!("Unable to write to {:?}: {:?}", wallet_password_path, e))?;
}
let wallet_password = fs::read(&wallet_password_path)
.map_err(|e| format!("Unable to read {:?}: {:?}", wallet_password_path, e))
.map(|bytes| PlainText::from(strip_off_newlines(bytes)))?;
let wallet = mgr
.create_wallet(name, wallet_type, &mnemonic, wallet_password.as_bytes())
.map_err(|e| format!("Unable to create wallet: {:?}", e))?;
let wallet = create_wallet_from_mnemonic(matches, &base_dir.as_path(), &mnemonic)?;
if let Some(path) = mnemonic_output_path {
create_with_600_perms(&path, mnemonic.phrase().as_bytes())
@@ -140,6 +108,48 @@ pub fn cli_run(matches: &ArgMatches, base_dir: PathBuf) -> Result<(), String> {
Ok(())
}
pub fn create_wallet_from_mnemonic(
matches: &ArgMatches,
base_dir: &Path,
mnemonic: &Mnemonic,
) -> Result<LockedWallet, String> {
let name: String = clap_utils::parse_required(matches, NAME_FLAG)?;
let wallet_password_path: PathBuf = clap_utils::parse_required(matches, PASSWORD_FLAG)?;
let type_field: String = clap_utils::parse_required(matches, TYPE_FLAG)?;
let wallet_type = match type_field.as_ref() {
HD_TYPE => WalletType::Hd,
unknown => return Err(format!("--{} {} is not supported", TYPE_FLAG, unknown)),
};
let mgr = WalletManager::open(&base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", BASE_DIR_FLAG, e))?;
// Create a random password if the file does not exist.
if !wallet_password_path.exists() {
// To prevent users from accidentally supplying their password to the PASSWORD_FLAG and
// creating a file with that name, we require that the password file has a .pass suffix.
if wallet_password_path.extension() != Some(&OsStr::new("pass")) {
return Err(format!(
"Only creates a password file if that file ends in .pass: {:?}",
wallet_password_path
));
}
create_with_600_perms(&wallet_password_path, random_password().as_bytes())
.map_err(|e| format!("Unable to write to {:?}: {:?}", wallet_password_path, e))?;
}
let wallet_password = fs::read(&wallet_password_path)
.map_err(|e| format!("Unable to read {:?}: {:?}", wallet_password_path, e))
.map(|bytes| PlainText::from(strip_off_newlines(bytes)))?;
let wallet = mgr
.create_wallet(name, wallet_type, &mnemonic, wallet_password.as_bytes())
.map_err(|e| format!("Unable to create wallet: {:?}", e))?;
Ok(wallet)
}
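The `.pass` suffix guard above can be isolated into a tiny predicate; `may_create_password_file` is a hypothetical name used here for illustration only:

```rust
use std::ffi::OsStr;
use std::path::Path;

// Sketch of the `.pass` guard: only auto-create a password file when the
// chosen path ends in .pass, catching a password pasted in place of a path.
fn may_create_password_file(path: &Path) -> bool {
    path.extension() == Some(OsStr::new("pass"))
}

fn main() {
    assert!(may_create_password_file(Path::new("wallet.pass")));
    assert!(!may_create_password_file(Path::new("correct-horse-battery")));
}
```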
/// Creates a file with `600 (-rw-------)` permissions.
pub fn create_with_600_perms<P: AsRef<Path>>(path: P, bytes: &[u8]) -> Result<(), String> {
let path = path.as_ref();

View File

@@ -1,5 +1,6 @@
pub mod create;
pub mod list;
pub mod recover;
use crate::{
common::{base_wallet_dir, ensure_dir_exists},
@@ -21,6 +22,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
)
.subcommand(create::cli_app())
.subcommand(list::cli_app())
.subcommand(recover::cli_app())
}
pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
@@ -30,6 +32,7 @@ pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
match matches.subcommand() {
(create::CMD, Some(matches)) => create::cli_run(matches, base_dir),
(list::CMD, Some(_)) => list::cli_run(base_dir),
(recover::CMD, Some(matches)) => recover::cli_run(matches, base_dir),
(unknown, _) => Err(format!(
"{} does not have a {} command. See --help",
CMD, unknown

View File

@@ -0,0 +1,87 @@
use crate::common::read_mnemonic_from_cli;
use crate::wallet::create::create_wallet_from_mnemonic;
use crate::wallet::create::{HD_TYPE, NAME_FLAG, PASSWORD_FLAG, TYPE_FLAG};
use clap::{App, Arg, ArgMatches};
use std::path::PathBuf;
pub const CMD: &str = "recover";
pub const MNEMONIC_FLAG: &str = "mnemonic-path";
pub const STDIN_PASSWORD_FLAG: &str = "stdin-passwords";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about("Recovers an EIP-2386 wallet from a given BIP-39 mnemonic phrase.")
.arg(
Arg::with_name(NAME_FLAG)
.long(NAME_FLAG)
.value_name("WALLET_NAME")
.help(
"The wallet will be created with this name. It is not allowed to \
create two wallets with the same name for the same --base-dir.",
)
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(PASSWORD_FLAG)
.long(PASSWORD_FLAG)
.value_name("PASSWORD_FILE_PATH")
.help(
"This will be the new password for your recovered wallet. \
A path to a file containing the password which will unlock the wallet. \
If the file does not exist, a random password will be generated and \
saved at that path. To avoid confusion, if the file does not already \
exist it must include a '.pass' suffix.",
)
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(MNEMONIC_FLAG)
.long(MNEMONIC_FLAG)
.value_name("MNEMONIC_PATH")
.help("If present, the mnemonic will be read in from this file.")
.takes_value(true),
)
.arg(
Arg::with_name(TYPE_FLAG)
.long(TYPE_FLAG)
.value_name("WALLET_TYPE")
.help(
"The type of wallet to create. Only HD (hierarchical-deterministic) \
wallets are presently supported.",
)
.takes_value(true)
.possible_values(&[HD_TYPE])
.default_value(HD_TYPE),
)
.arg(
Arg::with_name(STDIN_PASSWORD_FLAG)
.long(STDIN_PASSWORD_FLAG)
.help("If present, read passwords from stdin instead of tty."),
)
}
pub fn cli_run(matches: &ArgMatches, wallet_base_dir: PathBuf) -> Result<(), String> {
let mnemonic_path: Option<PathBuf> = clap_utils::parse_optional(matches, MNEMONIC_FLAG)?;
let stdin_password = matches.is_present(STDIN_PASSWORD_FLAG);
eprintln!("");
eprintln!("WARNING: KEY RECOVERY CAN LEAD TO DUPLICATING VALIDATOR KEYS, WHICH CAN LEAD TO SLASHING.");
eprintln!("");
let mnemonic = read_mnemonic_from_cli(mnemonic_path, stdin_password)?;
let wallet = create_wallet_from_mnemonic(matches, &wallet_base_dir.as_path(), &mnemonic)
.map_err(|e| format!("Unable to create wallet: {:?}", e))?;
println!("Your wallet has been successfully recovered.");
println!();
println!("Your wallet's UUID is:");
println!();
println!("\t{}", wallet.wallet().uuid());
println!();
println!("You do not need to backup your UUID or keep it secret.");
Ok(())
}

View File

@@ -1,6 +1,6 @@
[package]
name = "beacon_node"
version = "0.2.8"
version = "0.2.9"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>"]
edition = "2018"

View File

@@ -58,3 +58,4 @@ environment = { path = "../../lighthouse/environment" }
bus = "2.2.3"
derivative = "2.1.1"
itertools = "0.9.0"
regex = "1.3.9"

View File

@@ -31,6 +31,7 @@ use fork_choice::ForkChoice;
use itertools::process_results;
use operation_pool::{OperationPool, PersistedOperationPool};
use parking_lot::RwLock;
use regex::bytes::Regex;
use slog::{crit, debug, error, info, trace, warn, Logger};
use slot_clock::SlotClock;
use state_processing::{
@@ -852,16 +853,16 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
if state.slot > slot {
return Err(Error::CannotAttestToFutureState);
} else if state.current_epoch() + 1 < epoch {
} else if state.current_epoch() < epoch {
let mut_state = state.to_mut();
while mut_state.current_epoch() + 1 < epoch {
while mut_state.current_epoch() < epoch {
// Note: here we provide `Hash256::zero()` as the root of the current state. This
// has the effect of setting the values of all historic state roots to the zero
// hash. This is an optimization: we don't need the state roots, so why calculate
// them?
per_slot_processing(mut_state, Some(Hash256::zero()), &self.spec)?;
}
mut_state.build_committee_cache(RelativeEpoch::Next, &self.spec)?;
mut_state.build_committee_cache(RelativeEpoch::Current, &self.spec)?;
}
let committee_len = state.get_beacon_committee(slot, index)?.committee.len();
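The epoch-advance fix above can be illustrated with a toy model: advance a slot counter until its epoch reaches the attestation's target epoch, whereas the old `current_epoch() + 1 < epoch` condition stopped one epoch short. `advance_to_epoch` and the constant are simplifications, not the beacon chain API:

```rust
// Toy model of the fix: 32 slots per epoch (mainnet value), and each loop
// iteration stands in for one `per_slot_processing` call.
const SLOTS_PER_EPOCH: u64 = 32;

fn advance_to_epoch(mut slot: u64, target_epoch: u64) -> u64 {
    while slot / SLOTS_PER_EPOCH < target_epoch {
        slot += 1;
    }
    slot
}

fn main() {
    // Slot 31 is still epoch 0; one more slot reaches epoch 1.
    assert_eq!(advance_to_epoch(31, 1), 32);
    // A state already at the target epoch is left untouched.
    assert_eq!(advance_to_epoch(64, 2), 64);
}
```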
@@ -1319,8 +1320,11 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
block: SignedBeaconBlock<T::EthSpec>,
) -> Result<GossipVerifiedBlock<T>, BlockError<T::EthSpec>> {
let slot = block.message.slot;
let graffiti_string = String::from_utf8(block.message.body.graffiti[..].to_vec())
.unwrap_or_else(|_| format!("{:?}", &block.message.body.graffiti[..]));
#[allow(clippy::invalid_regex)]
let re = Regex::new("\\p{C}").expect("regex is valid");
let graffiti_string =
String::from_utf8_lossy(&re.replace_all(&block.message.body.graffiti[..], &b""[..]))
.to_string();
match GossipVerifiedBlock::new(block, self) {
Ok(verified) => {

View File
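The graffiti sanitization above strips the `\p{C}` regex class before a lossy UTF-8 decode. A std-only approximation (note: `char::is_control` covers only the `Cc` subset of `\p{C}`, so this is a sketch, not an exact equivalent; `sanitize_graffiti` is a hypothetical name):

```rust
// Sketch: lossy-decode the fixed-size graffiti bytes, then drop control
// characters (e.g. the trailing zero-byte padding).
fn sanitize_graffiti(raw: &[u8]) -> String {
    String::from_utf8_lossy(raw)
        .chars()
        .filter(|c| !c.is_control())
        .collect()
}

fn main() {
    assert_eq!(sanitize_graffiti(b"lighthouse\x00\x00"), "lighthouse");
    assert_eq!(sanitize_graffiti(b"hi"), "hi");
}
```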

@@ -584,7 +584,7 @@ where
/// Specifies that the `BeaconChain` should cache eth1 blocks/logs from a remote eth1 node
/// (e.g., Parity/Geth) and refer to that cache when collecting deposits or eth1 votes during
/// block production.
pub fn caching_eth1_backend(mut self, config: Eth1Config) -> Result<Self, String> {
pub async fn caching_eth1_backend(mut self, config: Eth1Config) -> Result<Self, String> {
let context = self
.runtime_context
.as_ref()
@@ -598,6 +598,17 @@ where
.clone()
.ok_or_else(|| "caching_eth1_backend requires a chain spec".to_string())?;
// Check that the eth1 endpoint we connect to is on the correct network.
let network_id =
eth1::http::get_network_id(&config.endpoint, Duration::from_millis(15_000)).await?;
if network_id != config.network_id {
return Err(format!(
"Invalid eth1 network id. Expected {:?}, got {:?}",
config.network_id, network_id
));
}
let backend = if let Some(eth1_service_from_genesis) = self.eth1_service {
eth1_service_from_genesis.update_config(config)?;

View File

@@ -12,8 +12,10 @@
use futures::future::TryFutureExt;
use reqwest::{header::CONTENT_TYPE, ClientBuilder, StatusCode};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use std::ops::Range;
use std::str::FromStr;
use std::time::Duration;
use types::Hash256;
@@ -30,6 +32,40 @@ pub const DEPOSIT_COUNT_RESPONSE_BYTES: usize = 96;
/// Number of bytes in deposit contract deposit root (value only).
pub const DEPOSIT_ROOT_BYTES: usize = 32;
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
pub enum Eth1NetworkId {
Goerli,
Mainnet,
Custom(u64),
}
impl FromStr for Eth1NetworkId {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"1" => Ok(Eth1NetworkId::Mainnet),
"5" => Ok(Eth1NetworkId::Goerli),
custom => {
let network_id = u64::from_str_radix(custom, 10)
.map_err(|e| format!("Failed to parse eth1 network id: {}", e))?;
Ok(Eth1NetworkId::Custom(network_id))
}
}
}
}
/// Get the eth1 network id of the given endpoint.
pub async fn get_network_id(endpoint: &str, timeout: Duration) -> Result<Eth1NetworkId, String> {
let response_body = send_rpc_request(endpoint, "net_version", json!([]), timeout).await?;
Eth1NetworkId::from_str(
response_result(&response_body)?
.ok_or_else(|| "No result was returned for network id".to_string())?
.as_str()
.ok_or_else(|| "Data was not a string".to_string())?,
)
}
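For reference, the `FromStr` impl above can be exercised standalone; the copy below is self-contained for demonstration and mirrors the diff:

```rust
use std::str::FromStr;

// Standalone sketch of the network-id parsing: net_version returns a decimal
// string, which maps to a known network or a Custom id.
#[derive(Debug, PartialEq)]
enum Eth1NetworkId {
    Goerli,
    Mainnet,
    Custom(u64),
}

impl FromStr for Eth1NetworkId {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "1" => Ok(Eth1NetworkId::Mainnet),
            "5" => Ok(Eth1NetworkId::Goerli),
            custom => custom
                .parse::<u64>()
                .map(Eth1NetworkId::Custom)
                .map_err(|e| format!("Failed to parse eth1 network id: {}", e)),
        }
    }
}

fn main() {
    assert_eq!("1".parse::<Eth1NetworkId>(), Ok(Eth1NetworkId::Mainnet));
    assert_eq!("42".parse::<Eth1NetworkId>(), Ok(Eth1NetworkId::Custom(42)));
    assert!("not-a-number".parse::<Eth1NetworkId>().is_err());
}
```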
#[derive(Debug, PartialEq, Clone)]
pub struct Block {
pub hash: Hash256,

View File

@@ -2,7 +2,9 @@ use crate::metrics;
use crate::{
block_cache::{BlockCache, Error as BlockCacheError, Eth1Block},
deposit_cache::Error as DepositCacheError,
http::{get_block, get_block_number, get_deposit_logs_in_range, Log},
http::{
get_block, get_block_number, get_deposit_logs_in_range, get_network_id, Eth1NetworkId, Log,
},
inner::{DepositUpdater, Inner},
DepositLog,
};
@@ -16,6 +18,9 @@ use std::time::{SystemTime, UNIX_EPOCH};
use tokio::time::{interval_at, Duration, Instant};
use types::ChainSpec;
/// Indicates the default eth1 network we use for the deposit contract.
pub const DEFAULT_NETWORK_ID: Eth1NetworkId = Eth1NetworkId::Goerli;
const STANDARD_TIMEOUT_MILLIS: u64 = 15_000;
/// Timeout when doing an eth_blockNumber call.
@@ -76,6 +81,8 @@ pub struct Config {
pub endpoint: String,
/// The address the `BlockCache` and `DepositCache` should assume is the canonical deposit contract.
pub deposit_contract_address: String,
/// The eth1 network id where the deposit contract is deployed (Goerli/Mainnet).
pub network_id: Eth1NetworkId,
/// Defines the first block that the `DepositCache` will start searching for deposit logs.
///
/// Setting too high can result in missed logs. Setting too low will result in unnecessary
@@ -105,6 +112,7 @@ impl Default for Config {
Self {
endpoint: "http://localhost:8545".into(),
deposit_contract_address: "0x0000000000000000000000000000000000000000".into(),
network_id: DEFAULT_NETWORK_ID,
deposit_contract_deploy_block: 1,
lowest_cached_block_number: 1,
follow_distance: 128,
@@ -350,6 +358,29 @@ impl Service {
}
async fn do_update(&self, update_interval: Duration) -> Result<(), ()> {
let endpoint = self.config().endpoint.clone();
let config_network = self.config().network_id.clone();
let result =
get_network_id(&endpoint, Duration::from_millis(STANDARD_TIMEOUT_MILLIS)).await;
match result {
Ok(network_id) => {
if network_id != config_network {
error!(
self.log,
"Failed to update eth1 cache";
"reason" => "Invalid eth1 network id",
"expected" => format!("{:?}", config_network),
"got" => format!("{:?}", network_id),
);
return Ok(());
}
}
Err(e) => {
error!(self.log, "Failed to get eth1 network id"; "error" => e);
return Ok(());
}
}
let update_result = self.update().await;
match update_result {
Err(e) => error!(

View File

@@ -32,7 +32,7 @@ snap = "1.0.0"
void = "1.0.2"
tokio-io-timeout = "0.4.0"
tokio-util = { version = "0.3.1", features = ["codec", "compat"] }
discv5 = { version = "0.1.0-alpha.8", features = ["libp2p", "openssl-vendored"] }
discv5 = { version = "0.1.0-alpha.10", features = ["libp2p"] }
tiny-keccak = "2.0.2"
environment = { path = "../../lighthouse/environment" }
# TODO: Remove rand crate for mainnet
@@ -41,7 +41,7 @@ rand = "0.7.3"
[dependencies.libp2p]
#version = "0.23.0"
git = "https://github.com/sigp/rust-libp2p"
rev = "bbf0cfbaff2f733b3ae7bfed3caba8b7ee542803"
rev = "03f998022ce2f566a6c6e6c4206bc0ce4d45109f"
default-features = false
features = ["websocket", "identify", "mplex", "noise", "gossipsub", "dns", "tcp-tokio"]

View File

@@ -131,8 +131,9 @@ impl<TSpec: EthSpec> ProtocolsHandler for DelegatingHandler<TSpec> {
type InboundProtocol = DelegateInProto<TSpec>;
type OutboundProtocol = DelegateOutProto<TSpec>;
type OutboundOpenInfo = DelegateOutInfo<TSpec>;
type InboundOpenInfo = ();
fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol> {
fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol, ()> {
let gossip_proto = self.gossip_handler.listen_protocol();
let rpc_proto = self.rpc_handler.listen_protocol();
let identify_proto = self.identify_handler.listen_protocol();
@@ -147,24 +148,27 @@ impl<TSpec: EthSpec> ProtocolsHandler for DelegatingHandler<TSpec> {
SelectUpgrade::new(rpc_proto.into_upgrade().1, identify_proto.into_upgrade().1),
);
SubstreamProtocol::new(select).with_timeout(timeout)
SubstreamProtocol::new(select, ()).with_timeout(timeout)
}
fn inject_fully_negotiated_inbound(
&mut self,
out: <Self::InboundProtocol as InboundUpgrade<NegotiatedSubstream>>::Output,
_info: Self::InboundOpenInfo,
) {
match out {
// Gossipsub
EitherOutput::First(out) => self.gossip_handler.inject_fully_negotiated_inbound(out),
EitherOutput::First(out) => {
self.gossip_handler.inject_fully_negotiated_inbound(out, ())
}
// RPC
EitherOutput::Second(EitherOutput::First(out)) => {
self.rpc_handler.inject_fully_negotiated_inbound(out)
self.rpc_handler.inject_fully_negotiated_inbound(out, ())
}
// Identify
EitherOutput::Second(EitherOutput::Second(out)) => {
self.identify_handler.inject_fully_negotiated_inbound(out)
}
EitherOutput::Second(EitherOutput::Second(out)) => self
.identify_handler
.inject_fully_negotiated_inbound(out, ()),
}
}
@@ -317,10 +321,11 @@ impl<TSpec: EthSpec> ProtocolsHandler for DelegatingHandler<TSpec> {
event,
)));
}
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol, info }) => {
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol }) => {
return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
protocol: protocol.map_upgrade(EitherUpgrade::A),
info: EitherOutput::First(info),
protocol: protocol
.map_upgrade(EitherUpgrade::A)
.map_info(EitherOutput::First),
});
}
Poll::Pending => (),
@@ -333,10 +338,11 @@ impl<TSpec: EthSpec> ProtocolsHandler for DelegatingHandler<TSpec> {
Poll::Ready(ProtocolsHandlerEvent::Close(event)) => {
return Poll::Ready(ProtocolsHandlerEvent::Close(DelegateError::RPC(event)));
}
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol, info }) => {
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol }) => {
return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
protocol: protocol.map_upgrade(|u| EitherUpgrade::B(EitherUpgrade::A(u))),
info: EitherOutput::Second(EitherOutput::First(info)),
protocol: protocol
.map_upgrade(|u| EitherUpgrade::B(EitherUpgrade::A(u)))
.map_info(|info| EitherOutput::Second(EitherOutput::First(info))),
});
}
Poll::Pending => (),
@@ -351,10 +357,11 @@ impl<TSpec: EthSpec> ProtocolsHandler for DelegatingHandler<TSpec> {
Poll::Ready(ProtocolsHandlerEvent::Close(event)) => {
return Poll::Ready(ProtocolsHandlerEvent::Close(DelegateError::Identify(event)));
}
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol, info: () }) => {
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol }) => {
return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
protocol: protocol.map_upgrade(|u| EitherUpgrade::B(EitherUpgrade::B(u))),
info: EitherOutput::Second(EitherOutput::Second(())),
protocol: protocol
.map_upgrade(|u| EitherUpgrade::B(EitherUpgrade::B(u)))
.map_info(|_| EitherOutput::Second(EitherOutput::Second(()))),
});
}
Poll::Pending => (),

View File

@@ -54,16 +54,18 @@ impl<TSpec: EthSpec> ProtocolsHandler for BehaviourHandler<TSpec> {
type InboundProtocol = DelegateInProto<TSpec>;
type OutboundProtocol = DelegateOutProto<TSpec>;
type OutboundOpenInfo = DelegateOutInfo<TSpec>;
type InboundOpenInfo = ();
fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol> {
fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol, ()> {
self.delegate.listen_protocol()
}
fn inject_fully_negotiated_inbound(
&mut self,
out: <Self::InboundProtocol as InboundUpgrade<NegotiatedSubstream>>::Output,
_info: Self::InboundOpenInfo,
) {
self.delegate.inject_fully_negotiated_inbound(out)
self.delegate.inject_fully_negotiated_inbound(out, ())
}
fn inject_fully_negotiated_outbound(
@@ -127,11 +129,8 @@ impl<TSpec: EthSpec> ProtocolsHandler for BehaviourHandler<TSpec> {
Poll::Ready(ProtocolsHandlerEvent::Close(err)) => {
return Poll::Ready(ProtocolsHandlerEvent::Close(err))
}
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol, info }) => {
return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
protocol,
info,
});
Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol }) => {
return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest { protocol });
}
Poll::Pending => (),
}
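In the newer libp2p, `OutboundSubstreamRequest` no longer carries a separate `info` field: the open info travels inside `SubstreamProtocol` and is re-tagged with `map_info`, which is what the hunks above do with `EitherOutput`. A minimal sketch of that pattern, using simplified stand-in types rather than the real libp2p API:

```rust
// Simplified stand-in for libp2p's SubstreamProtocol<TUpgrade, TInfo>:
// the outbound open info now lives inside the protocol value.
struct SubstreamProtocol<U, I> {
    upgrade: U,
    info: I,
}

impl<U, I> SubstreamProtocol<U, I> {
    fn new(upgrade: U, info: I) -> Self {
        SubstreamProtocol { upgrade, info }
    }

    // Transform only the info, leaving the upgrade untouched (mirrors `map_info`).
    fn map_info<J>(self, f: impl FnOnce(I) -> J) -> SubstreamProtocol<U, J> {
        SubstreamProtocol {
            upgrade: self.upgrade,
            info: f(self.info),
        }
    }
}

// Stand-in for EitherOutput, used to tag which delegate the info belongs to.
#[derive(Debug, PartialEq)]
enum EitherOutput<A, B> {
    First(A),
    Second(B),
}

fn main() {
    // Before: `{ protocol, info }` as two fields; after: one value, re-tagged via map_info.
    let proto = SubstreamProtocol::new("rpc-upgrade", 7u32)
        .map_info(EitherOutput::<u32, ()>::First);
    assert_eq!(proto.info, EitherOutput::First(7));
    println!("tagged info: {:?}", proto.info);
}
```

The `map_info` closure is the natural place to wrap the info in `EitherOutput` so the delegating handler can later route the negotiated substream back to the correct sub-handler.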

@@ -11,7 +11,10 @@ use libp2p::{
identity::Keypair,
Multiaddr,
},
gossipsub::{Gossipsub, GossipsubEvent, MessageAuthenticity, MessageId},
gossipsub::{
Gossipsub, GossipsubEvent, IdentTopic as Topic, MessageAcceptance, MessageAuthenticity,
MessageId,
},
identify::{Identify, IdentifyEvent},
swarm::{
NetworkBehaviour, NetworkBehaviourAction as NBAction, NotifyHandler, PollParameters,
@@ -94,15 +97,19 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
let meta_data = load_or_build_metadata(&net_conf.network_dir, &log);
// TODO: Until other clients support no author, we will use a 0 peer_id as our author.
let message_author = PeerId::from_bytes(vec![0, 1, 0]).expect("Valid peer id");
let gossipsub = Gossipsub::new(MessageAuthenticity::Anonymous, net_conf.gs_config.clone())
.map_err(|e| format!("Could not construct gossipsub: {:?}", e))?;
// Temporarily disable scoring until parameters are tested.
/*
gossipsub
.with_peer_score(PeerScoreParams::default(), PeerScoreThresholds::default())
.expect("Valid score params and thresholds");
*/
Ok(Behaviour {
eth2_rpc: RPC::new(log.clone()),
gossipsub: Gossipsub::new(
MessageAuthenticity::Author(message_author),
net_conf.gs_config.clone(),
),
gossipsub,
identify,
peer_manager: PeerManager::new(local_key, net_conf, network_globals.clone(), log)
.await?,
@@ -147,6 +154,10 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
GossipEncoding::default(),
self.enr_fork_id.fork_digest,
);
// TODO: Implement scoring
// let topic: Topic = gossip_topic.into();
// self.gossipsub.set_topic_params(t.hash(), TopicScoreParams::default());
self.subscribe(gossip_topic)
}
@@ -168,6 +179,12 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
GossipEncoding::default(),
self.enr_fork_id.fork_digest,
);
// TODO: Implement scoring
/*
let t: Topic = topic.clone().into();
self.gossipsub
.set_topic_params(t.hash(), TopicScoreParams::default());
*/
self.subscribe(topic)
}
@@ -189,9 +206,18 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
.write()
.insert(topic.clone());
let topic_str: String = topic.clone().into();
debug!(self.log, "Subscribed to topic"; "topic" => topic_str);
self.gossipsub.subscribe(topic.into())
let topic: Topic = topic.into();
match self.gossipsub.subscribe(&topic) {
Err(_) => {
warn!(self.log, "Failed to subscribe to topic"; "topic" => topic.to_string());
false
}
Ok(v) => {
debug!(self.log, "Subscribed to topic"; "topic" => topic.to_string());
v
}
}
}
/// Unsubscribe from a gossipsub topic.
@@ -201,8 +227,20 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
.gossipsub_subscriptions
.write()
.remove(&topic);
// unsubscribe from the topic
self.gossipsub.unsubscribe(topic.into())
let topic: Topic = topic.into();
match self.gossipsub.unsubscribe(&topic) {
Err(_) => {
warn!(self.log, "Failed to unsubscribe from topic"; "topic" => topic.to_string());
false
}
Ok(v) => {
debug!(self.log, "Unsubscribed from topic"; "topic" => topic.to_string());
v
}
}
}
/// Publishes a list of messages on the pubsub (gossipsub) behaviour, choosing the encoding.
@@ -211,8 +249,28 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
for topic in message.topics(GossipEncoding::default(), self.enr_fork_id.fork_digest) {
match message.encode(GossipEncoding::default()) {
Ok(message_data) => {
if let Err(e) = self.gossipsub.publish(&topic.into(), message_data) {
if let Err(e) = self.gossipsub.publish(topic.clone().into(), message_data) {
slog::warn!(self.log, "Could not publish message"; "error" => format!("{:?}", e));
// add to metrics
match topic.kind() {
GossipKind::Attestation(subnet_id) => {
if let Some(v) = metrics::get_int_gauge(
&metrics::FAILED_ATTESTATION_PUBLISHES_PER_SUBNET,
&[&subnet_id.to_string()],
) {
v.inc()
};
}
kind => {
if let Some(v) = metrics::get_int_gauge(
&metrics::FAILED_PUBLISHES_PER_MAIN_TOPIC,
&[&format!("{:?}", kind)],
) {
v.inc()
};
}
}
}
}
Err(e) => crit!(self.log, "Could not publish message"; "error" => e),
@@ -221,11 +279,21 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
}
}
/// Forwards a message that is waiting in gossipsub's mcache. Messages are only propagated
/// once validated by the beacon chain.
pub fn validate_message(&mut self, propagation_source: &PeerId, message_id: MessageId) {
self.gossipsub
.validate_message(&message_id, propagation_source);
/// Informs the gossipsub about the result of a message validation.
/// If the message is valid it will get propagated by gossipsub.
pub fn report_message_validation_result(
&mut self,
propagation_source: &PeerId,
message_id: MessageId,
validation_result: MessageAcceptance,
) {
if let Err(e) = self.gossipsub.report_message_validation_result(
&message_id,
propagation_source,
validation_result,
) {
warn!(self.log, "Failed to report message validation"; "message_id" => message_id.to_string(), "peer_id" => propagation_source.to_string(), "error" => format!("{:?}", e));
}
}
/* Eth2 RPC behaviour functions */
@@ -392,11 +460,25 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
fn on_gossip_event(&mut self, event: GossipsubEvent) {
match event {
GossipsubEvent::Message(propagation_source, id, gs_msg) => {
GossipsubEvent::Message {
propagation_source,
message_id: id,
message: gs_msg,
} => {
// Note: We are keeping track here of the peer that sent us the message, not the
// peer that originally published the message.
match PubsubMessage::decode(&gs_msg.topics, &gs_msg.data) {
Err(e) => debug!(self.log, "Could not decode gossipsub message"; "error" => e),
Err(e) => {
debug!(self.log, "Could not decode gossipsub message"; "error" => e);
//reject the message
if let Err(e) = self.gossipsub.report_message_validation_result(
&id,
&propagation_source,
MessageAcceptance::Reject,
) {
warn!(self.log, "Failed to report message validation"; "message_id" => id.to_string(), "peer_id" => propagation_source.to_string(), "error" => format!("{:?}", e));
}
}
Ok(msg) => {
// Notify the network
self.add_event(BehaviourEvent::PubsubMessage {
@@ -409,23 +491,9 @@ impl<TSpec: EthSpec> Behaviour<TSpec> {
}
}
GossipsubEvent::Subscribed { peer_id, topic } => {
if let Some(topic_metric) = metrics::get_int_gauge(
&metrics::GOSSIPSUB_SUBSCRIBED_PEERS_COUNT,
&[topic.as_str()],
) {
topic_metric.inc()
}
self.add_event(BehaviourEvent::PeerSubscribed(peer_id, topic));
}
GossipsubEvent::Unsubscribed { peer_id: _, topic } => {
if let Some(topic_metric) = metrics::get_int_gauge(
&metrics::GOSSIPSUB_SUBSCRIBED_PEERS_COUNT,
&[topic.as_str()],
) {
topic_metric.dec()
}
}
GossipsubEvent::Unsubscribed { .. } => {}
}
}
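The gossipsub upgrade also turned `GossipsubEvent::Message` from a tuple variant into a struct variant with named fields, which is why the match arm above binds `message_id: id` and why `Unsubscribed` can now ignore its fields with `..`. A small self-contained sketch of matching that shape (hypothetical simplified event type, not the real libp2p one):

```rust
// Hypothetical stand-in for the new struct-variant event shape.
#[derive(Debug)]
enum GossipsubEvent {
    Message {
        propagation_source: String,
        message_id: u64,
        data: Vec<u8>,
    },
    Subscribed { peer_id: String, topic: String },
    Unsubscribed { peer_id: String, topic: String },
}

fn describe(event: &GossipsubEvent) -> String {
    match event {
        // Named fields can be bound under new names, as the diff does with `message_id: id`.
        GossipsubEvent::Message { propagation_source, message_id: id, data } => {
            format!("msg {} ({} bytes) via {}", id, data.len(), propagation_source)
        }
        GossipsubEvent::Subscribed { peer_id, topic } => {
            format!("{} joined {}", peer_id, topic)
        }
        // `..` ignores fields we no longer track, like the removed per-topic metrics.
        GossipsubEvent::Unsubscribed { .. } => "unsubscribed".to_string(),
    }
}

fn main() {
    let e = GossipsubEvent::Message {
        propagation_source: "peer-a".to_string(),
        message_id: 42,
        data: vec![0, 1, 2],
    };
    println!("{}", describe(&e));
}
```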

@@ -99,8 +99,8 @@ impl Default for Config {
let gs_config = GossipsubConfigBuilder::new()
.max_transmit_size(GOSSIP_MAX_SIZE)
.heartbeat_interval(Duration::from_millis(700))
.mesh_n(6)
.mesh_n_low(5)
.mesh_n(8)
.mesh_n_low(6)
.mesh_n_high(12)
.gossip_lazy(6)
.fanout_ttl(Duration::from_secs(60))
@@ -111,7 +111,8 @@ impl Default for Config {
// prevent duplicates for 550 heartbeats (700 ms * 550) = 385 secs
.duplicate_cache_time(Duration::from_secs(385))
.message_id_fn(gossip_message_id)
.build();
.build()
.expect("valid gossipsub configuration");
// discv5 configuration
let discv5_config = Discv5ConfigBuilder::new()
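`GossipsubConfigBuilder::build` now validates the parameters and returns a `Result`, hence the added `.expect("valid gossipsub configuration")`. A rough sketch of a builder that checks an invariant such as `mesh_n_low <= mesh_n <= mesh_n_high` at build time (hypothetical types and invariant, not the real libp2p builder):

```rust
// Hypothetical fallible builder: invalid parameter combinations fail at
// construction instead of causing subtle mesh misbehaviour later.
#[derive(Debug)]
struct GossipConfig {
    mesh_n: usize,
    mesh_n_low: usize,
    mesh_n_high: usize,
}

#[derive(Default)]
struct GossipConfigBuilder {
    mesh_n: usize,
    mesh_n_low: usize,
    mesh_n_high: usize,
}

impl GossipConfigBuilder {
    fn mesh_n(mut self, n: usize) -> Self { self.mesh_n = n; self }
    fn mesh_n_low(mut self, n: usize) -> Self { self.mesh_n_low = n; self }
    fn mesh_n_high(mut self, n: usize) -> Self { self.mesh_n_high = n; self }

    // Enforce mesh_n_low <= mesh_n <= mesh_n_high when building.
    fn build(self) -> Result<GossipConfig, String> {
        if self.mesh_n_low <= self.mesh_n && self.mesh_n <= self.mesh_n_high {
            Ok(GossipConfig {
                mesh_n: self.mesh_n,
                mesh_n_low: self.mesh_n_low,
                mesh_n_high: self.mesh_n_high,
            })
        } else {
            Err("mesh parameters out of order".to_string())
        }
    }
}

fn main() {
    // Mirrors the diff's new parameters: mesh_n 8, mesh_n_low 6, mesh_n_high 12.
    let cfg = GossipConfigBuilder::default()
        .mesh_n(8)
        .mesh_n_low(6)
        .mesh_n_high(12)
        .build()
        .expect("valid gossipsub configuration");
    println!("{:?}", cfg);
}
```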

@@ -19,7 +19,7 @@ pub use behaviour::{BehaviourEvent, PeerRequestId, Request, Response};
pub use config::Config as NetworkConfig;
pub use discovery::{CombinedKeyExt, EnrExt, Eth2Enr};
pub use discv5;
pub use libp2p::gossipsub::{MessageId, Topic, TopicHash};
pub use libp2p::gossipsub::{Gossipsub, MessageAcceptance, MessageId, Topic, TopicHash};
pub use libp2p::{core::ConnectedPoint, PeerId, Swarm};
pub use libp2p::{multiaddr, Multiaddr};
pub use metrics::scrape_discovery_metrics;

@@ -34,9 +34,20 @@ lazy_static! {
"Unsolicited discovery requests per ip per second",
&["Addresses"]
);
pub static ref GOSSIPSUB_SUBSCRIBED_PEERS_COUNT: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_peers_per_topic_count",
"Peers subscribed per topic",
pub static ref PEERS_PER_CLIENT: Result<IntGaugeVec> = try_create_int_gauge_vec(
"libp2p_peers_per_client",
"The connected peers via client implementation",
&["Client"]
);
pub static ref FAILED_ATTESTATION_PUBLISHES_PER_SUBNET: Result<IntGaugeVec> =
try_create_int_gauge_vec(
"gossipsub_failed_attestation_publishes_per_subnet",
"Failed attestation publishes per subnet",
&["subnet"]
);
pub static ref FAILED_PUBLISHES_PER_MAIN_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_failed_publishes_per_main_topic",
"Failed gossip publishes",
&["topic_hash"]
);
}

@@ -20,7 +20,7 @@ pub struct Client {
pub agent_string: Option<String>,
}
#[derive(Clone, Debug, Serialize)]
#[derive(Clone, Debug, Serialize, PartialEq)]
pub enum ClientKind {
/// A lighthouse node (the best kind).
Lighthouse,
@@ -98,6 +98,12 @@ impl std::fmt::Display for Client {
}
}
impl std::fmt::Display for ClientKind {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?}", self)
}
}
// helper function to identify clients from their agent_version. Returns the client
// kind and its associated version and the OS kind.
fn client_from_agent_version(agent_version: &str) -> (ClientKind, String, String) {

@@ -239,6 +239,27 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
///
/// This is also called when dialing a peer fails.
pub fn notify_disconnect(&mut self, peer_id: &PeerId) {
// Decrement the PEERS_PER_CLIENT metric
if let Some(kind) = self
.network_globals
.peers
.read()
.peer_info(peer_id)
.and_then(|peer_info| {
if let Connected { .. } = peer_info.connection_status {
Some(peer_info.client.kind.clone())
} else {
None
}
})
{
if let Some(v) =
metrics::get_int_gauge(&metrics::PEERS_PER_CLIENT, &[&kind.to_string()])
{
v.dec()
};
}
self.network_globals.peers.write().disconnect(peer_id);
// remove the ping and status timer for the peer
@@ -296,8 +317,25 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
/// Updates `PeerInfo` with `identify` information.
pub fn identify(&mut self, peer_id: &PeerId, info: &IdentifyInfo) {
if let Some(peer_info) = self.network_globals.peers.write().peer_info_mut(peer_id) {
let previous_kind = peer_info.client.kind.clone();
peer_info.client = client::Client::from_identify_info(info);
peer_info.listening_addresses = info.listen_addrs.clone();
if previous_kind != peer_info.client.kind {
// update the peer client kind metric
if let Some(v) = metrics::get_int_gauge(
&metrics::PEERS_PER_CLIENT,
&[&peer_info.client.kind.to_string()],
) {
v.inc()
};
if let Some(v) = metrics::get_int_gauge(
&metrics::PEERS_PER_CLIENT,
&[&previous_kind.to_string()],
) {
v.dec()
};
}
} else {
crit!(self.log, "Received an Identify response from an unknown peer"; "peer_id" => peer_id.to_string());
}
@@ -551,7 +589,10 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
}
match connection {
ConnectingType::Dialing => peerdb.dialing_peer(peer_id),
ConnectingType::Dialing => {
peerdb.dialing_peer(peer_id);
return true;
}
ConnectingType::IngoingConnected => peerdb.connect_outgoing(peer_id),
ConnectingType::OutgoingConnected => peerdb.connect_ingoing(peer_id),
}
@@ -568,6 +609,21 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
self.network_globals.connected_peers() as i64,
);
// Increment the PEERS_PER_CLIENT metric
if let Some(kind) = self
.network_globals
.peers
.read()
.peer_info(peer_id)
.map(|peer_info| peer_info.client.kind.clone())
{
if let Some(v) =
metrics::get_int_gauge(&metrics::PEERS_PER_CLIENT, &[&kind.to_string()])
{
v.inc()
};
}
true
}
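The `PEERS_PER_CLIENT` bookkeeping above boils down to a labeled gauge: increment a label on connect, decrement it on disconnect, and move a peer between labels when `identify` reveals a different client kind. A stdlib-only sketch of that pattern (the real code uses a Prometheus `IntGaugeVec`; `GaugeVec` here is a stand-in):

```rust
use std::collections::HashMap;

// Minimal stand-in for a labeled integer gauge like PEERS_PER_CLIENT:
// one counter per label string.
#[derive(Default)]
struct GaugeVec {
    values: HashMap<String, i64>,
}

impl GaugeVec {
    fn inc(&mut self, label: &str) {
        *self.values.entry(label.to_string()).or_insert(0) += 1;
    }
    fn dec(&mut self, label: &str) {
        *self.values.entry(label.to_string()).or_insert(0) -= 1;
    }
    fn get(&self, label: &str) -> i64 {
        self.values.get(label).copied().unwrap_or(0)
    }
}

fn main() {
    let mut peers_per_client = GaugeVec::default();
    // Two peers connect and are initially counted as Lighthouse; identify then
    // reveals one is a different client, so the diff decrements the old kind
    // and increments the new one.
    peers_per_client.inc("Lighthouse");
    peers_per_client.inc("Lighthouse");
    peers_per_client.dec("Lighthouse");
    peers_per_client.inc("Prysm");
    assert_eq!(peers_per_client.get("Lighthouse"), 1);
    assert_eq!(peers_per_client.get("Prysm"), 1);
    println!("gauges consistent");
}
```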

@@ -4,7 +4,7 @@ use super::PeerSyncStatus;
use crate::rpc::MetaData;
use crate::Multiaddr;
use serde::{
ser::{SerializeStructVariant, Serializer},
ser::{SerializeStruct, Serializer},
Serialize,
};
use std::net::IpAddr;
@@ -120,29 +120,51 @@ pub enum PeerConnectionStatus {
/// Serialization for http requests.
impl Serialize for PeerConnectionStatus {
fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
let mut s = serializer.serialize_struct("connection_status", 5)?;
match self {
Connected { n_in, n_out } => {
let mut s = serializer.serialize_struct_variant("", 0, "Connected", 2)?;
s.serialize_field("in", n_in)?;
s.serialize_field("out", n_out)?;
s.serialize_field("status", "connected")?;
s.serialize_field("connections_in", n_in)?;
s.serialize_field("connections_out", n_out)?;
s.serialize_field("last_seen", &0)?;
s.serialize_field("banned_ips", &Vec::<IpAddr>::new())?;
s.end()
}
Disconnected { since } => {
let mut s = serializer.serialize_struct_variant("", 1, "Disconnected", 1)?;
s.serialize_field("since", &since.elapsed().as_secs())?;
s.serialize_field("status", "disconnected")?;
s.serialize_field("connections_in", &0)?;
s.serialize_field("connections_out", &0)?;
s.serialize_field("last_seen", &since.elapsed().as_secs())?;
s.serialize_field("banned_ips", &Vec::<IpAddr>::new())?;
s.end()
}
Banned { since, .. } => {
let mut s = serializer.serialize_struct_variant("", 2, "Banned", 1)?;
s.serialize_field("since", &since.elapsed().as_secs())?;
Banned {
since,
ip_addresses,
} => {
s.serialize_field("status", "banned")?;
s.serialize_field("connections_in", &0)?;
s.serialize_field("connections_out", &0)?;
s.serialize_field("last_seen", &since.elapsed().as_secs())?;
s.serialize_field("banned_ips", &ip_addresses)?;
s.end()
}
Dialing { since } => {
let mut s = serializer.serialize_struct_variant("", 3, "Dialing", 1)?;
s.serialize_field("since", &since.elapsed().as_secs())?;
s.serialize_field("status", "dialing")?;
s.serialize_field("connections_in", &0)?;
s.serialize_field("connections_out", &0)?;
s.serialize_field("last_seen", &since.elapsed().as_secs())?;
s.serialize_field("banned_ips", &Vec::<IpAddr>::new())?;
s.end()
}
Unknown => {
s.serialize_field("status", "unknown")?;
s.serialize_field("connections_in", &0)?;
s.serialize_field("connections_out", &0)?;
s.serialize_field("last_seen", &0)?;
s.serialize_field("banned_ips", &Vec::<IpAddr>::new())?;
s.end()
}
Unknown => serializer.serialize_unit_variant("", 4, "Unknown"),
}
}
}
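The serializer rewrite replaces serde's struct-variant output with one flat struct whose five fields (`status`, `connections_in`, `connections_out`, `last_seen`, `banned_ips`) appear for every variant, giving HTTP clients a stable schema. A simplified sketch of that constant-schema idea (stand-in types and plain key/value pairs instead of serde, with only a few variants shown):

```rust
// Simplified connection status; the real enum also has Banned and Dialing.
enum ConnectionStatus {
    Connected { n_in: u64, n_out: u64 },
    Disconnected { since: u64 },
    Unknown,
}

// Every variant produces the same five keys, only the values differ.
fn to_fields(status: &ConnectionStatus) -> Vec<(&'static str, String)> {
    let (status_str, conns_in, conns_out, last_seen) = match status {
        ConnectionStatus::Connected { n_in, n_out } => ("connected", *n_in, *n_out, 0),
        ConnectionStatus::Disconnected { since } => ("disconnected", 0, 0, *since),
        ConnectionStatus::Unknown => ("unknown", 0, 0, 0),
    };
    vec![
        ("status", status_str.to_string()),
        ("connections_in", conns_in.to_string()),
        ("connections_out", conns_out.to_string()),
        ("last_seen", last_seen.to_string()),
        ("banned_ips", String::from("[]")),
    ]
}

fn main() {
    let fields = to_fields(&ConnectionStatus::Connected { n_in: 1, n_out: 2 });
    assert_eq!(fields[0], ("status", "connected".to_string()));
    println!("{:?}", fields);
}
```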

@@ -1,5 +1,6 @@
//! This handles the various supported encoding mechanism for the Eth 2.0 RPC.
use crate::rpc::methods::ErrorType;
use crate::rpc::{RPCCodedResponse, RPCRequest, RPCResponse};
use libp2p::bytes::BufMut;
use libp2p::bytes::BytesMut;
@@ -8,12 +9,12 @@ use tokio_util::codec::{Decoder, Encoder};
use types::EthSpec;
pub trait OutboundCodec<TItem>: Encoder<TItem> + Decoder {
type ErrorType;
type CodecErrorType;
fn decode_error(
&mut self,
src: &mut BytesMut,
) -> Result<Option<Self::ErrorType>, <Self as Decoder>::Error>;
) -> Result<Option<Self::CodecErrorType>, <Self as Decoder>::Error>;
}
/* Global Inbound Codec */
@@ -130,8 +131,8 @@ where
impl<TCodec, TSpec> Decoder for BaseOutboundCodec<TCodec, TSpec>
where
TSpec: EthSpec,
TCodec:
OutboundCodec<RPCRequest<TSpec>, ErrorType = String> + Decoder<Item = RPCResponse<TSpec>>,
TCodec: OutboundCodec<RPCRequest<TSpec>, CodecErrorType = ErrorType>
+ Decoder<Item = RPCResponse<TSpec>>,
{
type Item = RPCCodedResponse<TSpec>;
type Error = <TCodec as Decoder>::Error;

@@ -374,9 +374,12 @@ impl<TSpec: EthSpec> Decoder for SSZSnappyOutboundCodec<TSpec> {
}
impl<TSpec: EthSpec> OutboundCodec<RPCRequest<TSpec>> for SSZSnappyOutboundCodec<TSpec> {
type ErrorType = String;
type CodecErrorType = ErrorType;
fn decode_error(&mut self, src: &mut BytesMut) -> Result<Option<Self::ErrorType>, RPCError> {
fn decode_error(
&mut self,
src: &mut BytesMut,
) -> Result<Option<Self::CodecErrorType>, RPCError> {
if self.len.is_none() {
// Decode the length of the uncompressed bytes from an unsigned varint
match self.inner.decode(src).map_err(RPCError::from)? {
@@ -401,9 +404,9 @@ impl<TSpec: EthSpec> OutboundCodec<RPCRequest<TSpec>> for SSZSnappyOutboundCodec
let n = reader.get_ref().position();
self.len = None;
let _read_bytes = src.split_to(n as usize);
Ok(Some(
String::from_utf8_lossy(&<Vec<u8>>::from_ssz_bytes(&decoded_buffer)?).into(),
))
Ok(Some(ErrorType(VariableList::from_ssz_bytes(
&decoded_buffer,
)?)))
}
Err(e) => match e.kind() {
// Haven't received enough bytes to decode yet

@@ -81,7 +81,7 @@ where
TSpec: EthSpec,
{
/// The upgrade for inbound substreams.
listen_protocol: SubstreamProtocol<RPCProtocol<TSpec>>,
listen_protocol: SubstreamProtocol<RPCProtocol<TSpec>, ()>,
/// Errors occurring on outbound and inbound connections queued for reporting back.
pending_errors: Vec<HandlerErr>,
@@ -225,7 +225,10 @@ impl<TSpec> RPCHandler<TSpec>
where
TSpec: EthSpec,
{
pub fn new(listen_protocol: SubstreamProtocol<RPCProtocol<TSpec>>, log: &slog::Logger) -> Self {
pub fn new(
listen_protocol: SubstreamProtocol<RPCProtocol<TSpec>, ()>,
log: &slog::Logger,
) -> Self {
RPCHandler {
listen_protocol,
pending_errors: Vec::new(),
@@ -249,7 +252,7 @@ where
///
/// > **Note**: If you modify the protocol, modifications will only apply to future inbound
/// > substreams, not the ones already being negotiated.
pub fn listen_protocol_ref(&self) -> &SubstreamProtocol<RPCProtocol<TSpec>> {
pub fn listen_protocol_ref(&self) -> &SubstreamProtocol<RPCProtocol<TSpec>, ()> {
&self.listen_protocol
}
@@ -257,7 +260,7 @@ where
///
/// > **Note**: If you modify the protocol, modifications will only apply to future inbound
/// > substreams, not the ones already being negotiated.
pub fn listen_protocol_mut(&mut self) -> &mut SubstreamProtocol<RPCProtocol<TSpec>> {
pub fn listen_protocol_mut(&mut self) -> &mut SubstreamProtocol<RPCProtocol<TSpec>, ()> {
&mut self.listen_protocol
}
@@ -344,14 +347,16 @@ where
type InboundProtocol = RPCProtocol<TSpec>;
type OutboundProtocol = RPCRequest<TSpec>;
type OutboundOpenInfo = (RequestId, RPCRequest<TSpec>); // Keep track of the id and the request
type InboundOpenInfo = ();
fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol> {
fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol, ()> {
self.listen_protocol.clone()
}
fn inject_fully_negotiated_inbound(
&mut self,
substream: <Self::InboundProtocol as InboundUpgrade<NegotiatedSubstream>>::Output,
_info: Self::InboundOpenInfo,
) {
// only accept new peer requests when active
if !matches!(self.state, HandlerState::Active) {
@@ -863,8 +868,7 @@ where
let (id, req) = self.dial_queue.remove(0);
self.dial_queue.shrink_to_fit();
return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
protocol: SubstreamProtocol::new(req.clone()),
info: (id, req),
protocol: SubstreamProtocol::new(req.clone(), ()).map_info(|()| (id, req)),
});
}
Poll::Pending

@@ -19,7 +19,7 @@ type MaxErrorLen = U256;
/// Wrapper over SSZ List to represent error message in rpc responses.
#[derive(Debug, Clone)]
pub struct ErrorType(VariableList<u8, MaxErrorLen>);
pub struct ErrorType(pub VariableList<u8, MaxErrorLen>);
impl From<String> for ErrorType {
fn from(s: String) -> Self {
@@ -283,13 +283,13 @@ impl<T: EthSpec> RPCCodedResponse<T> {
}
/// Builds an RPCCodedResponse from a response code and an ErrorMessage
pub fn from_error(response_code: u8, err: String) -> Self {
pub fn from_error(response_code: u8, err: ErrorType) -> Self {
let code = match response_code {
1 => RPCResponseErrorCode::InvalidRequest,
2 => RPCResponseErrorCode::ServerError,
_ => RPCResponseErrorCode::Unknown,
};
RPCCodedResponse::Error(code, err.into())
RPCCodedResponse::Error(code, err)
}
/// Specifies which response allows for multiple chunks for the stream handler.
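`from_error` maps the raw response-code byte onto a typed error code before wrapping the SSZ-decoded error message. The mapping itself can be sketched like this (simplified names, not the full `RPCCodedResponse` type):

```rust
// Simplified error-code mapping mirroring `RPCCodedResponse::from_error`.
#[derive(Debug, PartialEq)]
enum ErrorCode {
    InvalidRequest,
    ServerError,
    Unknown,
}

fn code_from_byte(response_code: u8) -> ErrorCode {
    match response_code {
        1 => ErrorCode::InvalidRequest,
        2 => ErrorCode::ServerError,
        // Any unrecognized code degrades gracefully to Unknown.
        _ => ErrorCode::Unknown,
    }
}

fn main() {
    assert_eq!(code_from_byte(1), ErrorCode::InvalidRequest);
    assert_eq!(code_from_byte(2), ErrorCode::ServerError);
    assert_eq!(code_from_byte(99), ErrorCode::Unknown);
    println!("ok");
}
```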

@@ -169,9 +169,12 @@ where
fn new_handler(&mut self) -> Self::ProtocolsHandler {
RPCHandler::new(
SubstreamProtocol::new(RPCProtocol {
phantom: PhantomData,
}),
SubstreamProtocol::new(
RPCProtocol {
phantom: PhantomData,
},
(),
),
&self.log,
)
}

@@ -1,4 +1,4 @@
use libp2p::gossipsub::Topic;
use libp2p::gossipsub::IdentTopic as Topic;
use serde_derive::{Deserialize, Serialize};
use types::SubnetId;
@@ -139,7 +139,7 @@ impl GossipTopic {
impl Into<Topic> for GossipTopic {
fn into(self) -> Topic {
Topic::new(self.into())
Topic::new(self)
}
}

@@ -10,6 +10,7 @@ use std::time::Duration;
use types::{EnrForkId, MinimalEthSpec};
type E = MinimalEthSpec;
use libp2p::gossipsub::GossipsubConfigBuilder;
use tempdir::TempDir;
pub struct Libp2pInstance(LibP2PService<E>, exit_future::Signal);
@@ -83,8 +84,11 @@ pub fn build_config(port: u16, mut boot_nodes: Vec<Enr>) -> NetworkConfig {
config.boot_nodes_enr.append(&mut boot_nodes);
config.network_dir = path.into_path();
// Reduce gossipsub heartbeat parameters
config.gs_config.heartbeat_initial_delay = Duration::from_millis(500);
config.gs_config.heartbeat_interval = Duration::from_millis(500);
config.gs_config = GossipsubConfigBuilder::from(config.gs_config)
.heartbeat_initial_delay(Duration::from_millis(500))
.heartbeat_interval(Duration::from_millis(500))
.build()
.unwrap();
config
}

@@ -7,7 +7,7 @@ use beacon_chain::{
attestation_verification::Error as AttnError, observed_operations::ObservationOutcome,
BeaconChain, BeaconChainError, BeaconChainTypes, BlockError, ForkChoiceError,
};
use eth2_libp2p::{MessageId, PeerId};
use eth2_libp2p::{MessageAcceptance, MessageId, PeerId};
use slog::{crit, debug, error, info, trace, warn, Logger};
use ssz::Encode;
use std::sync::Arc;
@@ -51,6 +51,7 @@ impl<T: BeaconChainTypes> Worker<T> {
Err(e) => {
self.handle_attestation_verification_failure(
peer_id,
message_id,
beacon_block_root,
"unaggregated",
e,
@@ -61,7 +62,7 @@ impl<T: BeaconChainTypes> Worker<T> {
// Indicate to the `Network` service that this message is valid and can be
// propagated on the gossip network.
self.propagate_gossip_message(message_id, peer_id.clone());
self.propagate_validation_result(message_id, peer_id.clone(), MessageAcceptance::Accept);
if !should_import {
return;
@@ -124,8 +125,10 @@ impl<T: BeaconChainTypes> Worker<T> {
{
Ok(aggregate) => aggregate,
Err(e) => {
// Report the failure to gossipsub
self.handle_attestation_verification_failure(
peer_id,
message_id,
beacon_block_root,
"aggregated",
e,
@@ -136,7 +139,7 @@ impl<T: BeaconChainTypes> Worker<T> {
// Indicate to the `Network` service that this message is valid and can be
// propagated on the gossip network.
self.propagate_gossip_message(message_id, peer_id.clone());
self.propagate_validation_result(message_id, peer_id.clone(), MessageAcceptance::Accept);
metrics::inc_counter(&metrics::BEACON_PROCESSOR_AGGREGATED_ATTESTATION_VERIFIED_TOTAL);
@@ -195,26 +198,43 @@ impl<T: BeaconChainTypes> Worker<T> {
"slot" => verified_block.block.slot(),
"hash" => verified_block.block_root.to_string()
);
self.propagate_gossip_message(message_id, peer_id.clone());
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Accept,
);
verified_block
}
Err(BlockError::ParentUnknown(block)) => {
self.send_sync_message(SyncMessage::UnknownBlock(peer_id, block));
return;
}
Err(BlockError::BlockIsAlreadyKnown) => {
debug!(
self.log,
"Gossip block is already known";
);
Err(e @ BlockError::FutureSlot { .. })
| Err(e @ BlockError::WouldRevertFinalizedSlot { .. })
| Err(e @ BlockError::BlockIsAlreadyKnown)
| Err(e @ BlockError::RepeatProposal { .. })
| Err(e @ BlockError::NotFinalizedDescendant { .. })
| Err(e @ BlockError::BeaconChainError(_)) => {
warn!(self.log, "Could not verify block for gossip, ignoring the block";
"error" => e.to_string());
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
Err(e) => {
warn!(
self.log,
"Could not verify block for gossip";
"error" => format!("{:?}", e)
);
Err(e @ BlockError::StateRootMismatch { .. })
| Err(e @ BlockError::IncorrectBlockProposer { .. })
| Err(e @ BlockError::BlockSlotLimitReached)
| Err(e @ BlockError::ProposalSignatureInvalid)
| Err(e @ BlockError::NonLinearSlots)
| Err(e @ BlockError::UnknownValidator(_))
| Err(e @ BlockError::PerBlockProcessingError(_))
| Err(e @ BlockError::NonLinearParentRoots)
| Err(e @ BlockError::BlockIsNotLaterThanParent { .. })
| Err(e @ BlockError::InvalidSignature)
| Err(e @ BlockError::TooManySkippedSlots { .. })
| Err(e @ BlockError::GenesisBlock) => {
warn!(self.log, "Could not verify block for gossip, rejecting the block";
"error" => e.to_string());
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Reject);
return;
}
};
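The long match above triages `BlockError` variants into two gossip outcomes: errors that may be our own node's fault (clock skew, duplicates, internal beacon-chain errors) are ignored without penalizing the peer, while errors proving the block invalid cause an outright rejection. A compact sketch of that triage with a simplified error enum (a handful of representative variants, not the full set):

```rust
// The three gossipsub validation outcomes.
#[derive(Debug, PartialEq)]
enum Acceptance {
    Accept,
    Ignore,
    Reject,
}

// A few representative variants of the real BlockError enum.
#[derive(Debug)]
enum BlockError {
    FutureSlot,
    BlockIsAlreadyKnown,
    BeaconChainError,
    ProposalSignatureInvalid,
    StateRootMismatch,
}

fn triage(err: &BlockError) -> Acceptance {
    match err {
        // Possibly our own clock or state is at fault: drop without penalty.
        BlockError::FutureSlot
        | BlockError::BlockIsAlreadyKnown
        | BlockError::BeaconChainError => Acceptance::Ignore,
        // The block is provably invalid: reject and penalize the propagator.
        BlockError::ProposalSignatureInvalid
        | BlockError::StateRootMismatch => Acceptance::Reject,
    }
}

fn main() {
    assert_eq!(triage(&BlockError::FutureSlot), Acceptance::Ignore);
    assert_eq!(triage(&BlockError::ProposalSignatureInvalid), Acceptance::Reject);
    println!("triage ok");
}
```

Grouping the variants this way keeps the peer-scoring consequences explicit at the call site: `Ignore` is neutral, `Reject` feeds gossipsub's peer scoring.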
@@ -290,6 +310,11 @@ impl<T: BeaconChainTypes> Worker<T> {
let exit = match self.chain.verify_voluntary_exit_for_gossip(voluntary_exit) {
Ok(ObservationOutcome::New(exit)) => exit,
Ok(ObservationOutcome::AlreadyKnown) => {
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Ignore,
);
debug!(
self.log,
"Dropping exit for already exiting validator";
@@ -306,13 +331,16 @@ impl<T: BeaconChainTypes> Worker<T> {
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
// These errors occur due to a fault in the beacon chain. It is not necessarily
// the fault of the peer.
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_EXIT_VERIFIED_TOTAL);
self.propagate_gossip_message(message_id, peer_id);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Accept);
self.chain.import_voluntary_exit(exit);
debug!(self.log, "Successfully imported voluntary exit");
@@ -341,9 +369,12 @@ impl<T: BeaconChainTypes> Worker<T> {
"validator_index" => validator_index,
"peer" => peer_id.to_string()
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
Err(e) => {
// This is likely a fault with the beacon chain and not necessarily a
// malicious message from the peer.
debug!(
self.log,
"Dropping invalid proposer slashing";
@@ -351,13 +382,14 @@ impl<T: BeaconChainTypes> Worker<T> {
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_PROPOSER_SLASHING_VERIFIED_TOTAL);
self.propagate_gossip_message(message_id, peer_id);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Accept);
self.chain.import_proposer_slashing(slashing);
debug!(self.log, "Successfully imported proposer slashing");
@@ -383,6 +415,7 @@ impl<T: BeaconChainTypes> Worker<T> {
"reason" => "Slashings already known for all slashed validators",
"peer" => peer_id.to_string()
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
Err(e) => {
@@ -392,13 +425,14 @@ impl<T: BeaconChainTypes> Worker<T> {
"peer" => peer_id.to_string(),
"error" => format!("{:?}", e)
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
};
metrics::inc_counter(&metrics::BEACON_PROCESSOR_ATTESTER_SLASHING_VERIFIED_TOTAL);
self.propagate_gossip_message(message_id, peer_id);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Accept);
if let Err(e) = self.chain.import_attester_slashing(slashing) {
debug!(self.log, "Error importing attester slashing"; "error" => format!("{:?}", e));
@@ -441,11 +475,19 @@ impl<T: BeaconChainTypes> Worker<T> {
/// the gossip network.
///
/// Creates a log if there is an internal error.
fn propagate_gossip_message(&self, message_id: MessageId, peer_id: PeerId) {
/// Propagates the result of the validation for the given message to the network. If the result
/// is valid the message gets forwarded to other peers.
fn propagate_validation_result(
&self,
message_id: MessageId,
propagation_source: PeerId,
validation_result: MessageAcceptance,
) {
self.network_tx
.send(NetworkMessage::Validate {
propagation_source: peer_id,
.send(NetworkMessage::ValidationResult {
propagation_source,
message_id,
validation_result,
})
.unwrap_or_else(|_| {
warn!(
@@ -469,6 +511,7 @@ impl<T: BeaconChainTypes> Worker<T> {
pub fn handle_attestation_verification_failure(
&self,
peer_id: PeerId,
message_id: MessageId,
beacon_block_root: Hash256,
attestation_type: &str,
error: AttnError,
@@ -485,6 +528,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message, _only_ if we trust our own clock.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::InvalidSelectionProof { .. } | AttnError::InvalidSignature => {
/*
@@ -492,6 +540,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::EmptyAggregationBitfield => {
/*
@@ -500,10 +553,12 @@ impl<T: BeaconChainTypes> Worker<T> {
* Whilst we don't gossip this attestation, this act is **not** a clear
* violation of the spec nor indication of fault.
*
* This may change soon. Reference:
*
* https://github.com/ethereum/eth2.0-specs/pull/1732
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::AggregatorPubkeyUnknown(_) => {
/*
@@ -519,6 +574,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::AggregatorNotInCommittee { .. } => {
/*
@@ -534,6 +594,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::AttestationAlreadyKnown { .. } => {
/*
@@ -549,6 +614,7 @@ impl<T: BeaconChainTypes> Worker<T> {
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
AttnError::AggregatorAlreadyKnown(_) => {
@@ -565,6 +631,7 @@ impl<T: BeaconChainTypes> Worker<T> {
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
AttnError::PriorAttestationKnown { .. } => {
@@ -580,6 +647,7 @@ impl<T: BeaconChainTypes> Worker<T> {
"block" => format!("{}", beacon_block_root),
"type" => format!("{:?}", attestation_type),
);
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
AttnError::ValidatorIndexTooHigh(_) => {
@@ -589,6 +657,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::UnknownHeadBlock { beacon_block_root } => {
// Note: it's a little bit unclear as to whether or not this block is unknown or
@@ -605,7 +678,10 @@ impl<T: BeaconChainTypes> Worker<T> {
);
// we don't know the block, get the sync manager to handle the block lookup
self.sync_tx
.send(SyncMessage::UnknownBlockHash(peer_id, *beacon_block_root))
.send(SyncMessage::UnknownBlockHash(
peer_id.clone(),
*beacon_block_root,
))
.unwrap_or_else(|_| {
warn!(
self.log,
@@ -613,6 +689,7 @@ impl<T: BeaconChainTypes> Worker<T> {
"msg" => "UnknownBlockHash"
)
});
self.propagate_validation_result(message_id, peer_id, MessageAcceptance::Ignore);
return;
}
AttnError::UnknownTargetRoot(_) => {
@@ -632,6 +709,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::BadTargetEpoch => {
/*
@@ -640,6 +722,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::NoCommitteeForSlotAndIndex { .. } => {
/*
@@ -647,6 +734,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::NotExactlyOneAggregationBitSet(_) => {
/*
@@ -654,6 +746,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::AttestsToFutureBlock { .. } => {
/*
@@ -661,6 +758,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::InvalidSubnetId { received, expected } => {
@@ -672,7 +774,12 @@ impl<T: BeaconChainTypes> Worker<T> {
"Received attestation on incorrect subnet";
"expected" => format!("{:?}", expected),
"received" => format!("{:?}", received),
)
);
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::Invalid(_) => {
/*
@@ -680,6 +787,11 @@ impl<T: BeaconChainTypes> Worker<T> {
*
* The peer has published an invalid consensus message.
*/
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::TooManySkippedSlots {
head_block_slot,
@@ -695,7 +807,14 @@ impl<T: BeaconChainTypes> Worker<T> {
"Rejected long skip slot attestation";
"head_block_slot" => head_block_slot,
"attestation_slot" => attestation_slot,
)
);
// In this case we wish to penalize gossipsub peers that do this to avoid future
// attestations that have too many skip slots.
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Reject,
);
}
AttnError::BeaconChainError(e) => {
/*
@@ -711,6 +830,11 @@ impl<T: BeaconChainTypes> Worker<T> {
"peer_id" => peer_id.to_string(),
"error" => format!("{:?}", e),
);
self.propagate_validation_result(
message_id,
peer_id.clone(),
MessageAcceptance::Ignore,
);
}
}

View File

@@ -1,36 +1,91 @@
use beacon_chain::attestation_verification::Error as AttnError;
pub use lighthouse_metrics::*;
lazy_static! {
/*
* Gossip subnets and scoring
*/
pub static ref PEERS_PER_PROTOCOL: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_peers_per_protocol",
"Peers via supported protocol",
&["protocol"]
);
pub static ref GOSSIPSUB_SUBSCRIBED_SUBNET_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_subscribed_subnets",
"Subnets currently subscribed to",
&["subnet"]
);
pub static ref GOSSIPSUB_SUBSCRIBED_PEERS_SUBNET_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_peers_per_subnet_topic_count",
"Peers subscribed per subnet topic",
&["subnet"]
);
pub static ref MESH_PEERS_PER_MAIN_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_mesh_peers_per_main_topic",
"Mesh peers per main topic",
&["topic_hash"]
);
pub static ref MESH_PEERS_PER_SUBNET_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_mesh_peers_per_subnet_topic",
"Mesh peers per subnet topic",
&["subnet"]
);
pub static ref AVG_GOSSIPSUB_PEER_SCORE_PER_MAIN_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_avg_peer_score_per_topic",
"Average peer's score per topic",
&["topic_hash"]
);
pub static ref AVG_GOSSIPSUB_PEER_SCORE_PER_SUBNET_TOPIC: Result<IntGaugeVec> = try_create_int_gauge_vec(
"gossipsub_avg_peer_score_per_subnet_topic",
"Average peer's score per subnet topic",
&["subnet"]
);
pub static ref ATTESTATIONS_PUBLISHED_PER_SUBNET_PER_SLOT: Result<IntCounterVec> = try_create_int_counter_vec(
"gossipsub_attestations_published_per_subnet_per_slot",
"Attestations published per subnet per slot",
&["subnet"]
);
}
lazy_static! {
/*
* Gossip Rx
*/
pub static ref GOSSIP_BLOCKS_RX: Result<IntCounter> = try_create_int_counter(
"network_gossip_blocks_rx_total",
"gossipsub_blocks_rx_total",
"Count of gossip blocks received"
);
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_RX: Result<IntCounter> = try_create_int_counter(
"network_gossip_unaggregated_attestations_rx_total",
"gossipsub_unaggregated_attestations_rx_total",
"Count of gossip unaggregated attestations received"
);
pub static ref GOSSIP_AGGREGATED_ATTESTATIONS_RX: Result<IntCounter> = try_create_int_counter(
"network_gossip_aggregated_attestations_rx_total",
"gossipsub_aggregated_attestations_rx_total",
"Count of gossip aggregated attestations received"
);
/*
* Gossip Tx
*/
pub static ref GOSSIP_BLOCKS_TX: Result<IntCounter> = try_create_int_counter(
"network_gossip_blocks_tx_total",
"gossipsub_blocks_tx_total",
"Count of gossip blocks transmitted"
);
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_TX: Result<IntCounter> = try_create_int_counter(
"network_gossip_unaggregated_attestations_tx_total",
"gossipsub_unaggregated_attestations_tx_total",
"Count of gossip unaggregated attestations transmitted"
);
pub static ref GOSSIP_AGGREGATED_ATTESTATIONS_TX: Result<IntCounter> = try_create_int_counter(
"network_gossip_aggregated_attestations_tx_total",
"gossipsub_aggregated_attestations_tx_total",
"Count of gossip aggregated attestations transmitted"
);
@@ -38,11 +93,11 @@ lazy_static! {
* Attestation subnet subscriptions
*/
pub static ref SUBNET_SUBSCRIPTION_REQUESTS: Result<IntCounter> = try_create_int_counter(
"network_subnet_subscriptions_total",
"gossipsub_subnet_subscriptions_total",
"Count of validator subscription requests."
);
pub static ref SUBNET_SUBSCRIPTION_AGGREGATOR_REQUESTS: Result<IntCounter> = try_create_int_counter(
"network_subnet_subscriptions_aggregator_total",
"gossipsub_subnet_subscriptions_aggregator_total",
"Count of validator subscription requests where the subscriber is an aggregator."
);
@@ -194,95 +249,95 @@ lazy_static! {
* Attestation Errors
*/
pub static ref GOSSIP_ATTESTATION_ERROR_FUTURE_EPOCH: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_future_epoch",
"gossipsub_attestation_error_future_epoch",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_PAST_EPOCH: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_past_epoch",
"gossipsub_attestation_error_past_epoch",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_FUTURE_SLOT: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_future_slot",
"gossipsub_attestation_error_future_slot",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_PAST_SLOT: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_past_slot",
"gossipsub_attestation_error_past_slot",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_INVALID_SELECTION_PROOF: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_invalid_selection_proof",
"gossipsub_attestation_error_invalid_selection_proof",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_INVALID_SIGNATURE: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_invalid_signature",
"gossipsub_attestation_error_invalid_signature",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_EMPTY_AGGREGATION_BITFIELD: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_empty_aggregation_bitfield",
"gossipsub_attestation_error_empty_aggregation_bitfield",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_AGGREGATOR_PUBKEY_UNKNOWN: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_aggregator_pubkey_unknown",
"gossipsub_attestation_error_aggregator_pubkey_unknown",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_AGGREGATOR_NOT_IN_COMMITTEE: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_aggregator_not_in_committee",
"gossipsub_attestation_error_aggregator_not_in_committee",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_ATTESTATION_ALREADY_KNOWN: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_attestation_already_known",
"gossipsub_attestation_error_attestation_already_known",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_AGGREGATOR_ALREADY_KNOWN: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_aggregator_already_known",
"gossipsub_attestation_error_aggregator_already_known",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_PRIOR_ATTESTATION_KNOWN: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_prior_attestation_known",
"gossipsub_attestation_error_prior_attestation_known",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_VALIDATOR_INDEX_TOO_HIGH: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_validator_index_too_high",
"gossipsub_attestation_error_validator_index_too_high",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_UNKNOWN_HEAD_BLOCK: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_unknown_head_block",
"gossipsub_attestation_error_unknown_head_block",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_UNKNOWN_TARGET_ROOT: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_unknown_target_root",
"gossipsub_attestation_error_unknown_target_root",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_BAD_TARGET_EPOCH: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_bad_target_epoch",
"gossipsub_attestation_error_bad_target_epoch",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_NO_COMMITTEE_FOR_SLOT_AND_INDEX: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_no_committee_for_slot_and_index",
"gossipsub_attestation_error_no_committee_for_slot_and_index",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_NOT_EXACTLY_ONE_AGGREGATION_BIT_SET: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_not_exactly_one_aggregation_bit_set",
"gossipsub_attestation_error_not_exactly_one_aggregation_bit_set",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_ATTESTS_TO_FUTURE_BLOCK: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_attests_to_future_block",
"gossipsub_attestation_error_attests_to_future_block",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_INVALID_SUBNET_ID: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_invalid_subnet_id",
"gossipsub_attestation_error_invalid_subnet_id",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_INVALID_STATE_PROCESSING: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_invalid_state_processing",
"gossipsub_attestation_error_invalid_state_processing",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_INVALID_TOO_MANY_SKIPPED_SLOTS: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_invalid_too_many_skipped_slots",
"gossipsub_attestation_error_invalid_too_many_skipped_slots",
"Count of a specific error type (see metric name)"
);
pub static ref GOSSIP_ATTESTATION_ERROR_BEACON_CHAIN_ERROR: Result<IntCounter> = try_create_int_counter(
"gossip_attestation_error_beacon_chain_error",
"gossipsub_attestation_error_beacon_chain_error",
"Count of a specific error type (see metric name)"
);
}

View File

@@ -18,8 +18,6 @@ use types::{
SignedAggregateAndProof, SignedBeaconBlock, SignedVoluntaryExit, Slot, SubnetId,
};
//TODO: Rate limit requests
/// If a block is more than `FUTURE_SLOT_TOLERANCE` slots ahead of our slot clock, we drop it.
/// Otherwise we queue it.
pub(crate) const FUTURE_SLOT_TOLERANCE: u64 = 1;
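The drop-or-queue rule in the doc comment above can be sketched as follows (the constant mirrors the real one; the helper function is hypothetical, for illustration only):

```rust
/// If a block is more than this many slots ahead of our slot clock, we drop it.
const FUTURE_SLOT_TOLERANCE: u64 = 1;

/// Hypothetical helper illustrating the rule: a block at `block_slot` is
/// dropped when it exceeds `current_slot` by more than the tolerance;
/// otherwise it is queued for later processing.
fn should_drop_future_block(block_slot: u64, current_slot: u64) -> bool {
    block_slot > current_slot + FUTURE_SLOT_TOLERANCE
}

fn main() {
    assert!(!should_drop_future_block(10, 10)); // current slot: process
    assert!(!should_drop_future_block(11, 10)); // one slot ahead: queue
    assert!(should_drop_future_block(12, 10)); // too far ahead: drop
}
```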

View File

@@ -6,17 +6,18 @@ use crate::{
};
use crate::{error, metrics};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::Service as LibP2PService;
use eth2_libp2p::{
rpc::{GoodbyeReason, RPCResponseErrorCode, RequestId},
Libp2pEvent, PeerAction, PeerRequestId, PubsubMessage, Request, Response,
};
use eth2_libp2p::{BehaviourEvent, MessageId, NetworkGlobals, PeerId};
use eth2_libp2p::{
types::GossipKind, BehaviourEvent, GossipTopic, MessageId, NetworkGlobals, PeerId, TopicHash,
};
use eth2_libp2p::{MessageAcceptance, Service as LibP2PService};
use futures::prelude::*;
use rest_types::ValidatorSubscription;
use slog::{debug, error, info, o, trace, warn};
use std::sync::Arc;
use std::time::Duration;
use std::{collections::HashMap, sync::Arc, time::Duration};
use store::HotColdDB;
use tokio::sync::mpsc;
use tokio::time::Delay;
@@ -24,6 +25,9 @@ use types::EthSpec;
mod tests;
/// The interval (in seconds) that various network metrics will update.
const METRIC_UPDATE_INTERVAL: u64 = 1;
/// Types of messages that the network service can receive.
#[derive(Debug)]
pub enum NetworkMessage<T: EthSpec> {
@@ -55,11 +59,13 @@ pub enum NetworkMessage<T: EthSpec> {
/// Publish a list of messages to the gossipsub protocol.
Publish { messages: Vec<PubsubMessage<T>> },
/// Validates a received gossipsub message. This will propagate the message on the network.
Validate {
ValidationResult {
/// The peer that sent us the message. We don't send back to this peer.
propagation_source: PeerId,
/// The id of the message we are validating and propagating.
message_id: MessageId,
/// The result of the validation
validation_result: MessageAcceptance,
},
/// Reports a peer to the peer manager for performing an action.
ReportPeer { peer_id: PeerId, action: PeerAction },
@@ -89,6 +95,8 @@ pub struct NetworkService<T: BeaconChainTypes> {
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
/// A delay that expires when a new fork takes place.
next_fork_update: Option<Delay>,
/// A timer for updating various network metrics.
metrics_update: tokio::time::Interval,
/// The logger for the network service.
log: slog::Logger,
}
@@ -144,6 +152,9 @@ impl<T: BeaconChainTypes> NetworkService<T> {
let attestation_service =
AttestationService::new(beacon_chain.clone(), network_globals.clone(), &network_log);
// create a timer for updating network metrics
let metrics_update = tokio::time::interval(Duration::from_secs(METRIC_UPDATE_INTERVAL));
// create the network service and spawn the task
let network_log = network_log.new(o!("service" => "network"));
let network_service = NetworkService {
@@ -155,6 +166,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
store,
network_globals: network_globals.clone(),
next_fork_update,
metrics_update,
log: network_log,
};
@@ -173,9 +185,8 @@ fn spawn_service<T: BeaconChainTypes>(
// spawn on the current executor
executor.spawn_without_exit(async move {
// TODO: there is something with this code that prevents cargo fmt from doing anything at
// all. Ok, it is worse, the compiler doesn't show errors over this code beyond ast
// checking
let mut metric_update_counter = 0;
loop {
// build the futures to check simultaneously
tokio::select! {
@@ -204,6 +215,17 @@ fn spawn_service<T: BeaconChainTypes>(
info!(service.log, "Network service shutdown");
return;
}
_ = service.metrics_update.next() => {
// update various network metrics
metric_update_counter += 1;
if metric_update_counter * 1000 % T::EthSpec::default_spec().milliseconds_per_slot == 0 {
// if a slot has occurred, reset the metrics
let _ = metrics::ATTESTATIONS_PUBLISHED_PER_SUBNET_PER_SLOT
.as_ref()
.map(|gauge| gauge.reset());
}
update_gossip_metrics::<T::EthSpec>(&service.libp2p.swarm.gs());
}
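The reset condition above fires on slot boundaries: with a 1-second metrics tick, `tick * 1000` is the elapsed milliseconds, so the modulus reaches zero once per slot. A standalone sketch (the function name is illustrative, not from the codebase, and the 12 000 ms slot duration is an assumed mainnet-style value):

```rust
/// Illustrative helper: true when `tick` one-second metric updates land
/// exactly on a boundary of `ms_per_slot` milliseconds.
fn is_slot_boundary(tick: u64, ms_per_slot: u64) -> bool {
    tick * 1000 % ms_per_slot == 0
}

fn main() {
    // Assuming 12_000 ms slots, the per-slot reset fires every 12th tick.
    let boundaries: Vec<u64> = (1..=24).filter(|t| is_slot_boundary(*t, 12_000)).collect();
    assert_eq!(boundaries, vec![12, 24]);
}
```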
// handle a message sent to the network
Some(message) = service.network_recv.recv() => {
match message {
@@ -216,9 +238,10 @@ fn spawn_service<T: BeaconChainTypes>(
NetworkMessage::SendError{ peer_id, error, id, reason } => {
service.libp2p.respond_with_error(peer_id, id, error, reason);
}
NetworkMessage::Validate {
NetworkMessage::ValidationResult {
propagation_source,
message_id,
validation_result,
} => {
trace!(service.log, "Propagating gossipsub message";
"propagation_peer" => format!("{:?}", propagation_source),
@@ -227,7 +250,9 @@ fn spawn_service<T: BeaconChainTypes>(
service
.libp2p
.swarm
.validate_message(&propagation_source, message_id);
.report_message_validation_result(
&propagation_source, message_id, validation_result
);
}
NetworkMessage::Publish { messages } => {
let mut topic_kinds = Vec::new();
@@ -419,7 +444,11 @@ fn expose_publish_metrics<T: EthSpec>(messages: &[PubsubMessage<T>]) {
for message in messages {
match message {
PubsubMessage::BeaconBlock(_) => metrics::inc_counter(&metrics::GOSSIP_BLOCKS_TX),
PubsubMessage::Attestation(_) => {
PubsubMessage::Attestation(subnet_id) => {
metrics::inc_counter_vec(
&metrics::ATTESTATIONS_PUBLISHED_PER_SUBNET_PER_SLOT,
&[&subnet_id.0.to_string()],
);
metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_TX)
}
PubsubMessage::AggregateAndProofAttestation(_) => {
@@ -443,3 +472,163 @@ fn expose_receive_metrics<T: EthSpec>(message: &PubsubMessage<T>) {
_ => {}
}
}
fn update_gossip_metrics<T: EthSpec>(gossipsub: &eth2_libp2p::Gossipsub) {
// Clear the metrics
let _ = metrics::PEERS_PER_PROTOCOL
.as_ref()
.map(|gauge| gauge.reset());
let _ = metrics::MESH_PEERS_PER_MAIN_TOPIC
.as_ref()
.map(|gauge| gauge.reset());
let _ = metrics::AVG_GOSSIPSUB_PEER_SCORE_PER_MAIN_TOPIC
.as_ref()
.map(|gauge| gauge.reset());
let _ = metrics::AVG_GOSSIPSUB_PEER_SCORE_PER_SUBNET_TOPIC
.as_ref()
.map(|gauge| gauge.reset());
// reset the mesh peers, showing all subnets
for subnet_id in 0..T::default_spec().attestation_subnet_count {
let _ = metrics::get_int_gauge(
&metrics::MESH_PEERS_PER_SUBNET_TOPIC,
&[&subnet_id.to_string()],
)
.map(|v| v.set(0));
let _ = metrics::get_int_gauge(
&metrics::GOSSIPSUB_SUBSCRIBED_SUBNET_TOPIC,
&[&subnet_id.to_string()],
)
.map(|v| v.set(0));
let _ = metrics::get_int_gauge(
&metrics::GOSSIPSUB_SUBSCRIBED_PEERS_SUBNET_TOPIC,
&[&subnet_id.to_string()],
)
.map(|v| v.set(0));
}
// Subnet topics subscribed to
for topic_hash in gossipsub.topics() {
if let Ok(topic) = GossipTopic::decode(topic_hash.as_str()) {
if let GossipKind::Attestation(subnet_id) = topic.kind() {
let _ = metrics::get_int_gauge(
&metrics::GOSSIPSUB_SUBSCRIBED_SUBNET_TOPIC,
&[&subnet_id.to_string()],
)
.map(|v| v.set(1));
}
}
}
// Peers per subscribed subnet
let mut peers_per_topic: HashMap<TopicHash, usize> = HashMap::new();
for (peer_id, topics) in gossipsub.all_peers() {
for topic_hash in topics {
*peers_per_topic.entry(topic_hash.clone()).or_default() += 1;
if let Ok(topic) = GossipTopic::decode(topic_hash.as_str()) {
match topic.kind() {
GossipKind::Attestation(subnet_id) => {
if let Some(v) = metrics::get_int_gauge(
&metrics::GOSSIPSUB_SUBSCRIBED_PEERS_SUBNET_TOPIC,
&[&subnet_id.to_string()],
) {
v.inc()
};
// average peer scores
if let Some(score) = gossipsub.peer_score(peer_id) {
if let Some(v) = metrics::get_int_gauge(
&metrics::AVG_GOSSIPSUB_PEER_SCORE_PER_SUBNET_TOPIC,
&[&subnet_id.to_string()],
) {
v.add(score as i64)
};
}
}
kind => {
// main topics
if let Some(score) = gossipsub.peer_score(peer_id) {
if let Some(v) = metrics::get_int_gauge(
&metrics::AVG_GOSSIPSUB_PEER_SCORE_PER_MAIN_TOPIC,
&[&format!("{:?}", kind)],
) {
v.add(score as i64)
};
}
}
}
}
}
}
// adjust to average scores by dividing by number of peers
for (topic_hash, peers) in peers_per_topic.iter() {
if let Ok(topic) = GossipTopic::decode(topic_hash.as_str()) {
match topic.kind() {
GossipKind::Attestation(subnet_id) => {
// average peer scores
if let Some(v) = metrics::get_int_gauge(
&metrics::AVG_GOSSIPSUB_PEER_SCORE_PER_SUBNET_TOPIC,
&[&subnet_id.to_string()],
) {
v.set(v.get() / (*peers as i64))
};
}
kind => {
// main topics
if let Some(v) = metrics::get_int_gauge(
&metrics::AVG_GOSSIPSUB_PEER_SCORE_PER_MAIN_TOPIC,
&[&format!("{:?}", kind)],
) {
v.set(v.get() / (*peers as i64))
};
}
}
}
}
// mesh peers
for topic_hash in gossipsub.topics() {
let peers = gossipsub.mesh_peers(&topic_hash).count();
if let Ok(topic) = GossipTopic::decode(topic_hash.as_str()) {
match topic.kind() {
GossipKind::Attestation(subnet_id) => {
if let Some(v) = metrics::get_int_gauge(
&metrics::MESH_PEERS_PER_SUBNET_TOPIC,
&[&subnet_id.to_string()],
) {
v.set(peers as i64)
};
}
kind => {
// main topics
if let Some(v) = metrics::get_int_gauge(
&metrics::MESH_PEERS_PER_MAIN_TOPIC,
&[&format!("{:?}", kind)],
) {
v.set(peers as i64)
};
}
}
}
}
// protocol peers
let mut peers_per_protocol: HashMap<String, i64> = HashMap::new();
for (_peer, protocol) in gossipsub.peer_protocol() {
*peers_per_protocol.entry(protocol.to_string()).or_default() += 1;
}
for (protocol, peers) in peers_per_protocol.iter() {
if let Some(v) =
metrics::get_int_gauge(&metrics::PEERS_PER_PROTOCOL, &[&protocol.to_string()])
{
v.set(*peers)
};
}
}
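One non-obvious step in `update_gossip_metrics` is the averaging: per-topic scores are summed into the gauge while iterating peers, then divided by the peer count for that topic. A self-contained sketch of that running-sum-then-divide reduction (helper name and plain `HashMap` accumulator are illustrative, not the metrics types):

```rust
use std::collections::HashMap;

/// Illustrative reduction: accumulate a (sum, count) pair per topic, then
/// divide to obtain the integer average score for each topic.
fn average_scores(observations: &[(&str, i64)]) -> HashMap<String, i64> {
    let mut acc: HashMap<String, (i64, i64)> = HashMap::new(); // (sum, count)
    for (topic, score) in observations {
        let entry = acc.entry((*topic).to_string()).or_insert((0, 0));
        entry.0 += score;
        entry.1 += 1;
    }
    acc.into_iter().map(|(topic, (sum, n))| (topic, sum / n)).collect()
}

fn main() {
    let avgs = average_scores(&[("beacon_block", 4), ("beacon_block", 6), ("subnet_3", 3)]);
    assert_eq!(avgs["beacon_block"], 5);
    assert_eq!(avgs["subnet_3"], 3);
}
```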

View File

@@ -172,7 +172,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
} else {
// there is no finalized chain that matches this peer's last finalized target
// create a new finalized chain
debug!(self.log, "New finalized chain added to sync"; "peer_id" => format!("{:?}", peer_id), "start_epoch" => local_finalized_slot, "end_slot" => remote_finalized_slot, "finalized_root" => format!("{}", remote_info.finalized_root));
debug!(self.log, "New finalized chain added to sync"; "peer_id" => format!("{:?}", peer_id), "start_slot" => local_finalized_slot, "end_slot" => remote_finalized_slot, "finalized_root" => format!("{}", remote_info.finalized_root));
self.chains.new_finalized_chain(
local_info.finalized_epoch,

View File

@@ -100,7 +100,9 @@ impl<E: EthSpec> ProductionBeaconNode<E> {
"endpoint" => &client_config.eth1.endpoint,
"method" => "json rpc via http"
);
builder.caching_eth1_backend(client_config.eth1.clone())?
builder
.caching_eth1_backend(client_config.eth1.clone())
.await?
} else if client_config.dummy_eth1_backend {
warn!(
log,

View File

@@ -11,6 +11,7 @@
* [Key Management](./key-management.md)
* [Create a wallet](./wallet-create.md)
* [Create a validator](./validator-create.md)
* [Key recovery](./key-recovery.md)
* [Validator Management](./validator-management.md)
* [Importing from the Eth2 Launchpad](./validator-import-launchpad.md)
* [Local Testnets](./local-testnets.md)

View File

@@ -88,7 +88,7 @@ validator](./validator-create.md). A two-step example follows:
Create a wallet with:
```bash
lighthouse --testnet medalla account wallet create --name my-validators --passphrase-file my-validators.pass
lighthouse --testnet medalla account wallet create --name my-validators --password-file my-validators.pass
```
The output will look like this:
@@ -124,7 +124,7 @@ used to restore your validator if there is a data loss.
Create a validator from the wallet with:
```bash
lighthouse --testnet medalla account validator create --wallet-name my-validators --wallet-passphrase my-validators.pass --count 1
lighthouse --testnet medalla account validator create --wallet-name my-validators --wallet-password my-validators.pass --count 1
```
The output will look like this:

View File

@@ -35,9 +35,9 @@ items, starting at one easy-to-backup mnemonic and ending with multiple
keypairs. Creating a single validator looks like this:
1. Create a **wallet** and record the **mnemonic**:
- `lighthouse account wallet create --name wally --passphrase-file wally.pass`
- `lighthouse account wallet create --name wally --password-file wally.pass`
1. Create the voting and withdrawal **keystores** for one validator:
- `lighthouse account validator create --wallet-name wally --wallet-passphrase wally.pass --count 1`
- `lighthouse account validator create --wallet-name wally --wallet-password wally.pass --count 1`
In step (1), we created a wallet in `~/.lighthouse/wallets` with the name

65
book/src/key-recovery.md Normal file
View File

@@ -0,0 +1,65 @@
# Key recovery
Generally, validator keystore files are generated alongside a *mnemonic*. If
the keystore and/or the keystore password is lost, this mnemonic can be used to
regenerate a new, equivalent keystore with a new password.
There are two ways to recover keys using the `lighthouse` CLI:
- `lighthouse account validator recover`: recover one or more EIP-2335 keystores from a mnemonic.
These keys can be used directly in a validator client.
- `lighthouse account wallet recover`: recover an EIP-2386 wallet from a
mnemonic.
## ⚠️ Warning
**Recovering validator keys from a mnemonic should only be used as a last
resort.** Key recovery entails significant risks:
- Exposing your mnemonic to a computer at any time puts it at risk of being
compromised. Your mnemonic is **not encrypted** and is a target for theft.
- It's entirely possible to regenerate a validator keypair that is already active
on some other validator client. Running the same keypair on two different
validator clients is very likely to result in slashing.
## Recover EIP-2335 validator keystores
A single mnemonic can generate a practically unlimited number of validator
keystores using an *index*. Generally, the first time you generate a keystore
you'll use index 0, the next time you'll use index 1, and so on. Using the same
index on the same mnemonic always results in the same validator keypair being
generated (see [EIP-2334](https://eips.ethereum.org/EIPS/eip-2334) for more
detail).
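As a sketch of how an index maps to a key: EIP-2334 derives the voting keypair along a fixed path in which only the validator index varies, which is why the same mnemonic plus the same index always yields the same keypair. The path layout below follows EIP-2334; the helper function itself is hypothetical:

```rust
/// Hypothetical helper: the EIP-2334 signing (voting) key derivation path
/// for a given validator index. Only the third path component changes
/// between validators derived from the same mnemonic.
fn voting_key_path(index: u32) -> String {
    format!("m/12381/3600/{}/0/0", index)
}

fn main() {
    assert_eq!(voting_key_path(0), "m/12381/3600/0/0/0");
    assert_eq!(voting_key_path(1), "m/12381/3600/1/0/0");
}
```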
Using the `lighthouse account validator recover` command you can generate the
keystores that correspond to one or more indices in the mnemonic:
- `lighthouse account validator recover`: recover only index `0`.
- `lighthouse account validator recover --count 2`: recover indices `0, 1`.
- `lighthouse account validator recover --first-index 1`: recover only index `1`.
- `lighthouse account validator recover --first-index 1 --count 2`: recover indices `1, 2`.
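The flag combinations above reduce to a simple half-open range of indices; a sketch (the helper name is illustrative, and the defaults of `--first-index 0` and `--count 1` are taken from the command list above):

```rust
/// Illustrative helper: the validator indices recovered for a given
/// --first-index (default 0) and --count (default 1).
fn recovered_indices(first_index: u32, count: u32) -> Vec<u32> {
    (first_index..first_index + count).collect()
}

fn main() {
    assert_eq!(recovered_indices(0, 1), vec![0]); // defaults
    assert_eq!(recovered_indices(0, 2), vec![0, 1]); // --count 2
    assert_eq!(recovered_indices(1, 2), vec![1, 2]); // --first-index 1 --count 2
}
```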
For each of the indices recovered in the above commands, a directory will be
created in the `--validator-dir` location (default `~/.lighthouse/validators`)
which contains all the information necessary to run a validator using the
`lighthouse vc` command. The password to this new keystore will be placed in
the `--secrets-dir` (default `~/.lighthouse/secrets`).
## Recover an EIP-2386 wallet
Instead of creating EIP-2335 keystores directly, an EIP-2386 wallet can be
generated from the mnemonic. This wallet can then be used to generate validator
keystores, if desired. For example, the following command will create an
encrypted wallet named `wally-recovered` from a mnemonic:
```
lighthouse account wallet recover --name wally-recovered
```
**⚠️ Warning:** the wallet will be created with a `nextaccount` value of `0`.
This means that if you have already generated `n` validators, then the next `n`
validators generated by this wallet will be duplicates. As mentioned
previously, running duplicate validators is likely to result in slashing.
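A toy model of why the `nextaccount` reset causes duplicates (this is not the EIP-2386 wallet API, just an illustration of the counter semantics described above):

```rust
/// Toy model of an HD wallet's account counter: each new validator consumes
/// the current index and bumps `nextaccount`.
struct Wallet {
    nextaccount: u32,
}

impl Wallet {
    fn next_validator_index(&mut self) -> u32 {
        let index = self.nextaccount;
        self.nextaccount += 1;
        index
    }
}

fn main() {
    // The original wallet has already produced validators at indices 0 and 1.
    let mut original = Wallet { nextaccount: 0 };
    let used: Vec<u32> = (0..2).map(|_| original.next_validator_index()).collect();

    // A recovered wallet restarts at nextaccount = 0, so its first two
    // validators collide with the ones already running: a slashing risk.
    let mut recovered = Wallet { nextaccount: 0 };
    let duplicates: Vec<u32> = (0..2).map(|_| recovered.next_validator_index()).collect();
    assert_eq!(used, duplicates);
}
```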

View File

@@ -17,7 +17,7 @@ lighthouse account validator create --help
Creates new validators from an existing EIP-2386 wallet using the EIP-2333 HD key derivation scheme.
USAGE:
lighthouse account_manager validator create [FLAGS] [OPTIONS] --wallet-name <WALLET_NAME> --wallet-passphrase <WALLET_PASSWORD_PATH>
lighthouse account_manager validator create [FLAGS] [OPTIONS] --wallet-name <WALLET_NAME> --wallet-password <WALLET_PASSWORD_PATH>
FLAGS:
-h, --help Prints help information
@@ -56,7 +56,7 @@ OPTIONS:
The path where the validator directories will be created. Defaults to ~/.lighthouse/validators
--wallet-name <WALLET_NAME> Use the wallet identified by this name
--wallet-passphrase <WALLET_PASSWORD_PATH>
--wallet-password <WALLET_PASSWORD_PATH>
A path to a file containing the password which will unlock the wallet.
```
@@ -66,12 +66,12 @@ The example assumes that the `wally` wallet was generated from the
[wallet](./wallet-create.md) example.
```bash
lighthouse --testnet medalla account validator create --name wally --wallet-passphrase wally.pass --count 1
lighthouse --testnet medalla account validator create --name wally --wallet-password wally.pass --count 1
```
This command will:
- Derive a new BLS keypair from `wally`, updating it so that it generates a
- Derive a single new BLS keypair from `wally`, updating it so that it generates a
new key next time.
- Create a new directory in `~/.lighthouse/validators` containing:
- An encrypted keystore containing the validator's voting keypair.

View File

@@ -25,7 +25,7 @@ lighthouse account wallet create --help
Creates a new HD (hierarchical-deterministic) EIP-2386 wallet.
USAGE:
lighthouse account_manager wallet create [OPTIONS] --name <WALLET_NAME> --passphrase-file <WALLET_PASSWORD_PATH>
lighthouse account_manager wallet create [OPTIONS] --name <WALLET_NAME> --password-file <WALLET_PASSWORD_PATH>
FLAGS:
-h, --help Prints help information
@@ -39,7 +39,7 @@ OPTIONS:
--name <WALLET_NAME>
The wallet will be created with this name. It is not allowed to create two wallets with the same name for
the same --base-dir.
--passphrase-file <WALLET_PASSWORD_PATH>
--password-file <WALLET_PASSWORD_PATH>
A path to a file containing the password which will unlock the wallet. If the file does not exist, a random
password will be generated and saved at that path. To avoid confusion, if the file does not already exist it
must include a '.pass' suffix.
@@ -61,7 +61,7 @@ Creates a new wallet named `wally` with a randomly generated password saved
to `./wallet.pass`:
```bash
lighthouse account wallet create --name wally --passphrase-file wally.pass
lighthouse account wallet create --name wally --password-file wally.pass
```
> Notes:

View File

@@ -1,6 +1,6 @@
[package]
name = "boot_node"
version = "0.2.8"
version = "0.2.9"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"

View File

@@ -9,6 +9,8 @@ mod server;
pub use cli::cli_app;
use config::BootNodeConfig;
const LOG_CHANNEL_SIZE: usize = 2048;
/// Run the bootnode given the CLI configuration.
pub fn run(matches: &ArgMatches<'_>, debug_level: String) {
let debug_level = match debug_level.as_str() {
@@ -26,7 +28,9 @@ pub fn run(matches: &ArgMatches<'_>, debug_level: String) {
let decorator = slog_term::TermDecorator::new().build();
let decorator = logging::AlignedTermDecorator::new(decorator, logging::MAX_MESSAGE_WIDTH);
let drain = slog_term::FullFormat::new(decorator).build().fuse();
slog_async::Async::new(drain).build()
slog_async::Async::new(drain)
.chan_size(LOG_CHANNEL_SIZE)
.build()
};
let drain = match debug_level {

View File

@@ -107,6 +107,23 @@ pub fn read_password_from_user(use_stdin: bool) -> Result<ZeroizeString, String>
result.map(ZeroizeString::from)
}
/// Reads a mnemonic phrase from stdin if `use_stdin == true`, otherwise from the TTY.
pub fn read_mnemonic_from_user(use_stdin: bool) -> Result<String, String> {
let mut input = String::new();
if use_stdin {
io::stdin()
.read_line(&mut input)
.map_err(|e| format!("Error reading from stdin: {}", e))?;
} else {
let tty = File::open("/dev/tty").map_err(|e| format!("Error opening tty: {}", e))?;
let mut buf_reader = io::BufReader::new(tty);
buf_reader
.read_line(&mut input)
.map_err(|e| format!("Error reading from tty: {}", e))?;
}
Ok(input)
}
/// Provides a new-type wrapper around `String` that is zeroized on `Drop`.
///
/// Useful for ensuring that password memory is zeroed-out on drop.

View File

@@ -163,6 +163,22 @@ pub fn get_int_gauge(int_gauge_vec: &Result<IntGaugeVec>, name: &[&str]) -> Opti
}
}
/// If `int_gauge_vec.is_ok()`, sets the gauge with the given `name` to the given `value`
/// and returns `true`; otherwise returns `false`.
pub fn set_int_gauge(int_gauge_vec: &Result<IntGaugeVec>, name: &[&str], value: i64) -> bool {
if let Ok(int_gauge_vec) = int_gauge_vec {
int_gauge_vec
.get_metric_with_label_values(name)
.map(|v| {
v.set(value);
true
})
.unwrap_or_else(|_| false)
} else {
false
}
}
/// If `int_counter_vec.is_ok()`, returns a counter with the given `name`.
pub fn get_int_counter(
int_counter_vec: &Result<IntCounterVec>,

View File
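The new `set_int_gauge` helper follows the crate's convention of storing metric vecs as `Result`, so a metric that failed to register degrades to a no-op rather than a panic. A minimal sketch of that pattern, using a hypothetical `set_gauge_like` with `Cell<i64>` standing in for a Prometheus gauge:

```rust
use std::cell::Cell;

/// Hypothetical stand-in for `set_int_gauge`: set the value if the metric
/// was created successfully, otherwise report `false` instead of panicking.
fn set_gauge_like(gauge: &Result<Cell<i64>, String>, value: i64) -> bool {
    match gauge {
        Ok(cell) => {
            cell.set(value);
            true
        }
        Err(_) => false,
    }
}

fn main() {
    let ok: Result<Cell<i64>, String> = Ok(Cell::new(0));
    let failed: Result<Cell<i64>, String> = Err("registration failed".into());

    // A healthy metric is updated and the call reports success.
    assert!(set_gauge_like(&ok, 42));
    assert_eq!(ok.unwrap().get(), 42);

    // A failed registration is silently skipped.
    assert!(!set_gauge_like(&failed, 42));
}
```

The real helper additionally looks the gauge up by label values via `get_metric_with_label_values`; only the `Result`-as-no-op shape is shown here.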

@@ -10,7 +10,7 @@ use target_info::Target;
/// `Lighthouse/v0.2.0-1419501f2+`
pub const VERSION: &str = git_version!(
args = ["--always", "--dirty=+"],
-prefix = "Lighthouse/v0.2.8-",
+prefix = "Lighthouse/v0.2.9-",
fallback = "unknown"
);

View File

@@ -0,0 +1,12 @@
[package]
name = "serde_utils"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>", "Michael Sproul <michael@sigmaprime.io>"]
edition = "2018"
[dependencies]
serde = { version = "1.0.110", features = ["derive"] }
serde_derive = "1.0.110"
[dev-dependencies]
serde_json = "1.0.52"

View File

@@ -0,0 +1,2 @@
pub mod quoted_u64;
pub mod quoted_u64_vec;

View File

@@ -0,0 +1,115 @@
use serde::{Deserializer, Serializer};
use serde_derive::{Deserialize, Serialize};
use std::marker::PhantomData;
/// Serde support for deserializing quoted integers.
///
/// Configurable so that quotes are either required or optional.
pub struct QuotedIntVisitor<T> {
require_quotes: bool,
_phantom: PhantomData<T>,
}
impl<'a, T> serde::de::Visitor<'a> for QuotedIntVisitor<T>
where
T: From<u64> + Into<u64> + Copy,
{
type Value = T;
fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
if self.require_quotes {
write!(formatter, "a quoted integer")
} else {
write!(formatter, "a quoted or unquoted integer")
}
}
fn visit_str<E>(self, s: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
s.parse::<u64>()
.map(T::from)
.map_err(serde::de::Error::custom)
}
fn visit_u64<E>(self, v: u64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
if self.require_quotes {
Err(serde::de::Error::custom(
"received unquoted integer when quotes are required",
))
} else {
Ok(T::from(v))
}
}
}
/// Wrapper type for requiring quotes on a `u64`-like type.
///
/// Unlike using `serde(with = "quoted_u64::require_quotes")` this is composable, and can be nested
/// inside types like `Option`, `Result` and `Vec`.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Deserialize, Serialize)]
#[serde(transparent)]
pub struct Quoted<T>
where
T: From<u64> + Into<u64> + Copy,
{
#[serde(with = "require_quotes")]
pub value: T,
}
/// Serialize with quotes.
pub fn serialize<S, T>(value: &T, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
T: From<u64> + Into<u64> + Copy,
{
let v: u64 = (*value).into();
serializer.serialize_str(&format!("{}", v))
}
/// Deserialize with or without quotes.
pub fn deserialize<'de, D, T>(deserializer: D) -> Result<T, D::Error>
where
D: Deserializer<'de>,
T: From<u64> + Into<u64> + Copy,
{
deserializer.deserialize_any(QuotedIntVisitor {
require_quotes: false,
_phantom: PhantomData,
})
}
/// Requires quotes when deserializing.
///
/// Usage: `#[serde(with = "quoted_u64::require_quotes")]`.
pub mod require_quotes {
pub use super::serialize;
use super::*;
pub fn deserialize<'de, D, T>(deserializer: D) -> Result<T, D::Error>
where
D: Deserializer<'de>,
T: From<u64> + Into<u64> + Copy,
{
deserializer.deserialize_any(QuotedIntVisitor {
require_quotes: true,
_phantom: PhantomData,
})
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn require_quotes() {
let x = serde_json::from_str::<Quoted<u64>>("\"8\"").unwrap();
assert_eq!(x.value, 8);
serde_json::from_str::<Quoted<u64>>("8").unwrap_err();
}
}

View File
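The `QuotedIntVisitor` above accepts a bare integer via `visit_u64` and a quoted one via `visit_str` (with `require_quotes` rejecting the bare form); quoting `u64` values is the usual workaround for JSON consumers that lose precision above 2^53. A std-only sketch of the lenient rule, with a hypothetical `parse_maybe_quoted` helper standing in for the serde machinery:

```rust
/// Hypothetical helper mirroring the visitor's lenient rule: accept a bare
/// integer token or one wrapped in double quotes.
fn parse_maybe_quoted(token: &str) -> Result<u64, String> {
    let t = token.trim();
    // Strip a *matched* pair of quotes; an unbalanced quote is left in
    // place so the numeric parse below rejects it.
    let inner = t
        .strip_prefix('"')
        .and_then(|rest| rest.strip_suffix('"'))
        .unwrap_or(t);
    inner.parse::<u64>().map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(parse_maybe_quoted("8"), Ok(8));
    assert_eq!(parse_maybe_quoted("\"8\""), Ok(8));
    assert!(parse_maybe_quoted("\"8").is_err()); // unbalanced quote
    assert!(parse_maybe_quoted("-1").is_err()); // not a u64
}
```

This is illustration only; the real module lets serde drive the choice between `visit_str` and `visit_u64` rather than inspecting raw tokens.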

@@ -0,0 +1,91 @@
use serde::ser::SerializeSeq;
use serde::{Deserializer, Serializer};
use serde_derive::{Deserialize, Serialize};
#[derive(Serialize, Deserialize)]
#[serde(transparent)]
pub struct QuotedIntWrapper {
#[serde(with = "crate::quoted_u64")]
int: u64,
}
pub struct QuotedIntVecVisitor;
impl<'a> serde::de::Visitor<'a> for QuotedIntVecVisitor {
type Value = Vec<u64>;
fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(formatter, "a list of quoted or unquoted integers")
}
fn visit_seq<A>(self, mut seq: A) -> Result<Self::Value, A::Error>
where
A: serde::de::SeqAccess<'a>,
{
let mut vec = vec![];
while let Some(val) = seq.next_element()? {
let val: QuotedIntWrapper = val;
vec.push(val.int);
}
Ok(vec)
}
}
pub fn serialize<S>(value: &[u64], serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut seq = serializer.serialize_seq(Some(value.len()))?;
for &int in value {
seq.serialize_element(&QuotedIntWrapper { int })?;
}
seq.end()
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Vec<u64>, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_any(QuotedIntVecVisitor)
}
#[cfg(test)]
mod test {
use super::*;
#[derive(Debug, Serialize, Deserialize)]
struct Obj {
#[serde(with = "crate::quoted_u64_vec")]
values: Vec<u64>,
}
#[test]
fn quoted_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": ["1", "2", "3", "4"] }"#).unwrap();
assert_eq!(obj.values, vec![1, 2, 3, 4]);
}
#[test]
fn unquoted_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": [1, 2, 3, 4] }"#).unwrap();
assert_eq!(obj.values, vec![1, 2, 3, 4]);
}
#[test]
fn mixed_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": ["1", 2, "3", "4"] }"#).unwrap();
assert_eq!(obj.values, vec![1, 2, 3, 4]);
}
#[test]
fn empty_list_success() {
let obj: Obj = serde_json::from_str(r#"{ "values": [] }"#).unwrap();
assert!(obj.values.is_empty());
}
#[test]
fn whole_list_quoted_err() {
serde_json::from_str::<Obj>(r#"{ "values": "[1, 2, 3, 4]" }"#).unwrap_err();
}
}

View File

@@ -29,6 +29,7 @@ pub struct Slot(u64);
#[cfg_attr(feature = "arbitrary-fuzz", derive(arbitrary::Arbitrary))]
#[derive(Eq, Clone, Copy, Default, Serialize, Deserialize)]
#[serde(transparent)]
pub struct Epoch(u64);
impl_common!(Slot);

View File

@@ -1,3 +1,3 @@
mod serde_utils;
-pub use serde_utils::*;
+pub use self::serde_utils::*;

View File

@@ -6,6 +6,6 @@ pub mod json_wallet;
pub use bip39;
pub use validator_path::{KeyType, ValidatorPath, COIN_TYPE, PURPOSE};
pub use wallet::{
-recover_validator_secret, DerivedKey, Error, KeystoreError, PlainText, Uuid,
-ValidatorKeystores, Wallet, WalletBuilder,
+recover_validator_secret, recover_validator_secret_from_mnemonic, DerivedKey, Error,
+KeystoreError, PlainText, Uuid, ValidatorKeystores, Wallet, WalletBuilder,
};

View File

@@ -285,3 +285,19 @@ pub fn recover_validator_secret(
Ok((destination.secret().to_vec().into(), path))
}
/// Returns `(secret, path)` for the `key_type` for the validator at `index`.
///
/// This function should only be used for key recovery since it can easily lead to key duplication.
pub fn recover_validator_secret_from_mnemonic(
secret: &[u8],
index: u32,
key_type: KeyType,
) -> Result<(PlainText, ValidatorPath), Error> {
let path = ValidatorPath::new(index, key_type);
let master = DerivedKey::from_seed(secret).map_err(|()| Error::EmptyPassword)?;
let destination = path.iter_nodes().fold(master, |dk, i| dk.child(*i));
Ok((destination.secret().to_vec().into(), path))
}

View File
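`recover_validator_secret_from_mnemonic` derives the key by folding child derivation over each node of the EIP-2334 `ValidatorPath`, exactly as `path.iter_nodes().fold(master, |dk, i| dk.child(*i))` reads. A hedged, std-only sketch of that fold shape, with a toy `ToyKey::child` mixing step (emphatically not the real BLS key derivation):

```rust
/// Toy stand-in for `DerivedKey`: each child index mixes into the state.
#[derive(Clone, Copy, PartialEq, Debug)]
struct ToyKey(u64);

impl ToyKey {
    fn child(self, index: u32) -> ToyKey {
        // Arbitrary multiply-add mixing, for illustration only.
        ToyKey(
            self.0
                .wrapping_mul(6364136223846793005)
                .wrapping_add(index as u64 + 1),
        )
    }
}

/// Same fold shape as the wallet code: walk the path, deriving one child
/// per node, starting from the master key.
fn derive(master: ToyKey, path: &[u32]) -> ToyKey {
    path.iter().fold(master, |dk, &i| dk.child(i))
}

fn main() {
    let master = ToyKey(42);
    // Different validator indices at the same depth yield different keys...
    assert_ne!(
        derive(master, &[12381, 3600, 0, 0]),
        derive(master, &[12381, 3600, 1, 0])
    );
    // ...while the same path is fully deterministic, which is what makes
    // recovery from a mnemonic possible (and key duplication easy).
    assert_eq!(
        derive(master, &[12381, 3600, 0, 0]),
        derive(master, &[12381, 3600, 0, 0])
    );
}
```

Determinism is the whole point here — and also why the doc comment warns that this function "can easily lead to key duplication" if used outside recovery.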

@@ -1,7 +1,7 @@
[package]
name = "lcli"
description = "Lighthouse CLI (modeled after zcli)"
-version = "0.2.8"
+version = "0.2.9"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"

View File

@@ -15,7 +15,7 @@ mod transition_blocks;
use clap::{App, Arg, ArgMatches, SubCommand};
use environment::EnvironmentBuilder;
-use log::Level;
+use log::LevelFilter;
use parse_hex::run_parse_hex;
use std::fs::File;
use std::path::PathBuf;
@@ -25,7 +25,10 @@ use transition_blocks::run_transition_blocks;
use types::{test_utils::TestingBeaconStateBuilder, EthSpec, MainnetEthSpec, MinimalEthSpec};
fn main() {
-simple_logger::init_with_level(Level::Info).expect("logger should initialize");
+simple_logger::SimpleLogger::new()
+.with_level(LevelFilter::Info)
+.init()
+.expect("Logger should be initialised");
let matches = App::new("Lighthouse CLI Tool")
.version(lighthouse_version::VERSION)

View File

@@ -1,6 +1,6 @@
[package]
name = "lighthouse"
-version = "0.2.8"
+version = "0.2.9"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"

View File

@@ -21,4 +21,4 @@ slog-json = "2.3.0"
exit-future = "0.2.0"
lazy_static = "1.4.0"
lighthouse_metrics = { path = "../../common/lighthouse_metrics" }
-discv5 = { version = "0.1.0-alpha.8", features = ["libp2p", "openssl-vendored"] }
+discv5 = { version = "0.1.0-alpha.10", features = ["libp2p"] }

View File

@@ -29,6 +29,7 @@ mod executor;
mod metrics;
pub const ETH2_CONFIG_FILENAME: &str = "eth2-spec.toml";
const LOG_CHANNEL_SIZE: usize = 2048;
/// Builds an `Environment`.
pub struct EnvironmentBuilder<E: EthSpec> {
@@ -129,7 +130,9 @@ impl<E: EthSpec> EnvironmentBuilder<E> {
match format.to_uppercase().as_str() {
"JSON" => {
let drain = slog_json::Json::default(std::io::stdout()).fuse();
-slog_async::Async::new(drain).build()
+slog_async::Async::new(drain)
+.chan_size(LOG_CHANNEL_SIZE)
+.build()
}
_ => return Err("Logging format provided is not supported".to_string()),
}
@@ -138,7 +141,9 @@ impl<E: EthSpec> EnvironmentBuilder<E> {
let decorator =
logging::AlignedTermDecorator::new(decorator, logging::MAX_MESSAGE_WIDTH);
let drain = slog_term::FullFormat::new(decorator).build().fuse();
-slog_async::Async::new(drain).build()
+slog_async::Async::new(drain)
+.chan_size(LOG_CHANNEL_SIZE)
+.build()
};
let drain = match debug_level {
@@ -192,7 +197,9 @@ impl<E: EthSpec> EnvironmentBuilder<E> {
match format.to_uppercase().as_str() {
"JSON" => {
let drain = slog_json::Json::default(file).fuse();
-slog_async::Async::new(drain).build()
+slog_async::Async::new(drain)
+.chan_size(LOG_CHANNEL_SIZE)
+.build()
}
_ => return Err("Logging format provided is not supported".to_string()),
}
@@ -201,7 +208,9 @@ impl<E: EthSpec> EnvironmentBuilder<E> {
let decorator =
logging::AlignedTermDecorator::new(decorator, logging::MAX_MESSAGE_WIDTH);
let drain = slog_term::FullFormat::new(decorator).build().fuse();
-slog_async::Async::new(drain).build()
+slog_async::Async::new(drain)
+.chan_size(LOG_CHANNEL_SIZE)
+.build()
};
let drain = match debug_level {

View File

@@ -102,7 +102,7 @@ fn create_wallet<P: AsRef<Path>>(
.arg(CREATE_CMD)
.arg(format!("--{}", NAME_FLAG))
.arg(&name)
-.arg(format!("--{}", PASSPHRASE_FLAG))
+.arg(format!("--{}", PASSWORD_FLAG))
.arg(password.as_ref().as_os_str())
.arg(format!("--{}", MNEMONIC_FLAG))
.arg(mnemonic.as_ref().as_os_str()),
@@ -238,7 +238,7 @@ impl TestValidator {
.arg(CREATE_CMD)
.arg(format!("--{}", WALLET_NAME_FLAG))
.arg(&self.wallet.name)
-.arg(format!("--{}", WALLET_PASSPHRASE_FLAG))
+.arg(format!("--{}", WALLET_PASSWORD_FLAG))
.arg(self.wallet.password_path().into_os_string())
.arg(format!("--{}", VALIDATOR_DIR_FLAG))
.arg(self.validator_dir.clone().into_os_string())

View File

@@ -14,6 +14,8 @@ use web3::{
/// How long we will wait for ganache to indicate that it is ready.
const GANACHE_STARTUP_TIMEOUT_MILLIS: u64 = 10_000;
const NETWORK_ID: u64 = 42;
/// Provides a dedicated `ganache-cli` instance with a connected `Web3` instance.
///
/// Requires that `ganache-cli` is installed and available on `PATH`.
@@ -42,6 +44,8 @@ impl GanacheInstance {
.arg(format!("{}", port))
.arg("--mnemonic")
.arg("\"vast thought differ pull jewel broom cook wrist tribe word before omit\"")
.arg("--networkId")
.arg(format!("{}", NETWORK_ID))
.spawn()
.map_err(|e| {
format!(
@@ -97,6 +101,11 @@ impl GanacheInstance {
endpoint(self.port)
}
/// Returns the network id of the ganache instance
pub fn network_id(&self) -> u64 {
NETWORK_ID
}
/// Increase the timestamp on future blocks by `increase_by` seconds.
pub async fn increase_time(&self, increase_by: u64) -> Result<(), String> {
self.web3

View File

@@ -8,6 +8,7 @@ edition = "2018"
[dependencies]
node_test_rig = { path = "../node_test_rig" }
eth1 = {path = "../../beacon_node/eth1"}
types = { path = "../../consensus/types" }
validator_client = { path = "../../validator_client" }
parking_lot = "0.11.0"

View File

@@ -36,11 +36,11 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.takes_value(true)
.default_value("4")
.help("Speed up factor"))
-.arg(Arg::with_name("end_after_checks")
-.short("e")
-.long("end_after_checks")
+.arg(Arg::with_name("continue_after_checks")
+.short("c")
+.long("continue_after_checks")
 .takes_value(false)
-.help("End after checks (default true)"))
+.help("Continue after checks (default false)"))
)
.subcommand(
SubCommand::with_name("no-eth1-sim")
@@ -64,11 +64,11 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.takes_value(true)
.default_value("4")
.help("Speed up factor"))
-.arg(Arg::with_name("end_after_checks")
-.short("e")
-.long("end_after_checks")
+.arg(Arg::with_name("continue_after_checks")
+.short("c")
+.long("continue_after_checks")
 .takes_value(false)
-.help("End after checks (default true)"))
+.help("Continue after checks (default false)"))
)
.subcommand(
SubCommand::with_name("syncing-sim")

View File

@@ -1,5 +1,6 @@
use crate::{checks, LocalNetwork, E};
use clap::ArgMatches;
use eth1::http::Eth1NetworkId;
use eth1_test_rig::GanacheEth1Instance;
use futures::prelude::*;
use node_test_rig::{
@@ -17,12 +18,12 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
.expect("missing validators_per_node default");
let speed_up_factor =
value_t!(matches, "speed_up_factor", u64).expect("missing speed_up_factor default");
-let end_after_checks = !matches.is_present("end_after_checks");
+let continue_after_checks = matches.is_present("continue_after_checks");
println!("Beacon Chain Simulator:");
println!(" nodes:{}", node_count);
println!(" validators_per_node:{}", validators_per_node);
-println!(" end_after_checks:{}", end_after_checks);
+println!(" continue_after_checks:{}", continue_after_checks);
// Generate the directories and keystores required for the validator clients.
let validator_files = (0..node_count)
@@ -73,6 +74,7 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
*/
let ganache_eth1_instance = GanacheEth1Instance::new().await?;
let deposit_contract = ganache_eth1_instance.deposit_contract;
let network_id = ganache_eth1_instance.ganache.network_id();
let ganache = ganache_eth1_instance.ganache;
let eth1_endpoint = ganache.endpoint();
let deposit_contract_address = deposit_contract.address();
@@ -105,6 +107,7 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
beacon_config.eth1.follow_distance = 1;
beacon_config.dummy_eth1_backend = false;
beacon_config.sync_eth1_chain = true;
beacon_config.eth1.network_id = Eth1NetworkId::Custom(network_id);
beacon_config.network.enr_address = Some(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
@@ -174,9 +177,9 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
onboarding?;
// The `final_future` either completes immediately or never completes, depending on the value
-// of `end_after_checks`.
+// of `continue_after_checks`.
-if !end_after_checks {
+if continue_after_checks {
future::pending::<()>().await;
}
/*

View File

@@ -17,12 +17,12 @@ pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
.expect("missing validators_per_node default");
let speed_up_factor =
value_t!(matches, "speed_up_factor", u64).expect("missing speed_up_factor default");
-let end_after_checks = !matches.is_present("end_after_checks");
+let continue_after_checks = matches.is_present("continue_after_checks");
println!("Beacon Chain Simulator:");
println!(" nodes:{}", node_count);
println!(" validators_per_node:{}", validators_per_node);
-println!(" end_after_checks:{}", end_after_checks);
+println!(" continue_after_checks:{}", continue_after_checks);
// Generate the directories and keystores required for the validator clients.
let validator_files = (0..node_count)
@@ -141,9 +141,9 @@ pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
start_checks?;
// The `final_future` either completes immediately or never completes, depending on the value
-// of `end_after_checks`.
+// of `continue_after_checks`.
-if !end_after_checks {
+if continue_after_checks {
future::pending::<()>().await;
}
/*

View File

@@ -1,6 +1,6 @@
[package]
name = "validator_client"
-version = "0.2.8"
+version = "0.2.9"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>", "Luke Anderson <luke@lukeanderson.com.au>"]
edition = "2018"