Add `console-subscriber` feature for debugging tokio async tasks.
Supersedes #7420 to work with `unstable`.
Usage:
- Build Lighthouse with `RUSTFLAGS=--cfg tokio_unstable` and `--features console-subscriber`, e.g.:
```
RUSTFLAGS=-"-cfg=tokio_unstable --remap-path-prefix=$(pwd)=." FEATURES=console-subscriber make
```
- Run the Lighthouse binary.
- Install `tokio-console` and run it in a terminal.
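For reference, here's a minimal sketch of how a binary might wire the subscriber in when the feature is enabled (using the upstream `console_subscriber` crate's `init`; not necessarily exactly how Lighthouse does it):
```rust
fn main() {
    // Start the tokio-console instrumentation layer as early as possible.
    // It only emits task data when the binary was compiled with
    // RUSTFLAGS="--cfg tokio_unstable".
    #[cfg(feature = "console-subscriber")]
    console_subscriber::init();

    // ... start the rest of the node ...
}
```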
Having merged the drop-headtracker PR, we now have a DB schema change in `unstable` compared to `release-v7.0.0`:
- https://github.com/sigp/lighthouse/pull/6744
There is a DB downgrade available; however, it needs to be applied manually and is usually a bit of a hassle.
This PR bumps the version on `unstable` to `v7.1.0-beta.0` _without_ actually cutting a `v7.1.0-beta.0` release, so that we can tell at a glance which schema version a node is using.
Update cargo dependencies while keeping `rust_eth_kzg` pinned to `0.5.1` due to the regression described in:
- https://github.com/sigp/lighthouse/pull/6608
The changes from that PR were not sufficient to actually pin the dependencies of `rust_eth_kzg`, because the dependencies from the workspace `Cargo.toml` file were not being used anywhere. To fix this, I've added them as explicit dependencies in `crypto/kzg/Cargo.toml`. With this change, `cargo update` no longer tries to update them.
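One common way to express this kind of pin in a crate's `Cargo.toml` is with exact (`=`) version requirements; the second entry below is a placeholder name for illustration, not a real crate:
```toml
[dependencies]
# Exact requirement: `cargo update` cannot move past the known-good version.
rust_eth_kzg = "=0.5.1"
# Each transitive dependency is listed explicitly and pinned the same way.
rust-eth-kzg-internal-dep = "=0.5.1"
```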
* Remove ZeroizeString in favour of Zeroizing<String>
* cargo fmt
* remove unrelated line that slipped in
* Update beacon_node/store/Cargo.toml
thanks michael!
Co-authored-by: Michael Sproul <micsproul@gmail.com>
* Merge branch 'unstable' into remove-zeroizedstring
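For context, a minimal sketch of the replacement type from the `zeroize` crate; `Zeroizing<String>` derefs to `String`, so call sites stay much the same while the memory is wiped on drop:
```rust
use zeroize::Zeroizing;

fn main() {
    // Wrap the secret; no bespoke `ZeroizeString` newtype required.
    let password = Zeroizing::new(String::from("correct horse battery staple"));
    // Deref gives ordinary `&String` / `&str` access.
    assert_eq!(password.len(), 28);
} // `password`'s buffer is zeroed here when it drops.
```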
* initial redb impl
* redb impl
* remove phantom data
* fixed table definition
* fighting the borrow checker
* a rough draft that doesn't cause lifetime issues
* refactoring
* refactor
* refactor
* passing unit tests
* refactor
* refactor
* refactor
* commit
* move everything to one database
* remove panics, ready for a review
* merge
* a working redb impl
* passing a ref of txn to cursor
* this tries to create a second write transaction when initializing cursor. breaks everything
* Use 2 lifetimes and subtyping
Also fixes a bug in `last_key` caused by `rev` and `next_back` cancelling out (see the sketch after this commit list)
* Move table into cursor
* Merge remote-tracking branch 'origin/unstable' into redb-slasher-backend-impl
* changes based on feedback
* update lmdb
* fix lifetime issues
* moving everything from Cursor to Transaction
* update
* upgrade to redb 2.0
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into redb-slasher-backend-impl
* bring back cursor
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into redb-slasher-backend-impl
* fix delete while
* linting
* linting
* switch to lmdb
* update redb to v2.1
* build fixes, remove unwrap or default
* another build error
* hopefully this is the last build error
* fmt
* cargo.toml
* fix mdbx
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into redb-slasher-backend-impl
* Remove a collect
* Merge remote-tracking branch 'origin/unstable' into redb-slasher-backend-impl
* Merge branch 'redb-slasher-backend-impl' of https://github.com/eserilev/lighthouse into redb-slasher-backend-impl
* re-enable test
* fix failing slasher test
* Merge remote-tracking branch 'origin/unstable' into redb-slasher-backend-impl
* Rename DB file to `slasher.redb`
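An aside on the `last_key` bug mentioned in the "Use 2 lifetimes and subtyping" commit above: on a double-ended iterator, `rev()` swaps the two ends, so `rev().next_back()` yields the first element rather than the last; the two reversals cancel out:
```rust
fn main() {
    let keys = [1u32, 2, 3];
    // `rev()` then `next_back()` cancel out: this is the FIRST key.
    assert_eq!(keys.iter().rev().next_back(), Some(&1));
    // Taking `next_back()` directly gives the actual last key.
    assert_eq!(keys.iter().next_back(), Some(&3));
}
```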
* Enable jemalloc by default on non-Windows targets.
* Update the `allocator_name` function to check `target_os` instead, as we've deprecated the `jemalloc` feature.
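A rough sketch of the shape of this change (crate choice and exact code are assumptions based on the description above):
```rust
// Use jemalloc as the global allocator everywhere except Windows, selected
// by compile target rather than by a cargo feature.
#[cfg(not(target_os = "windows"))]
#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

pub fn allocator_name() -> &'static str {
    if cfg!(target_os = "windows") {
        "system"
    } else {
        "jemalloc"
    }
}
```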
* Return an error if peer has disconnected
* Report errors for rate limited requests
* Code improvement
* Bump rust version to 1.78
* Downgrade to 1.77
* Update beacon_node/lighthouse_network/src/service/mod.rs
Co-authored-by: João Oliveira <hello@jxs.pt>
* fix fmt
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into rpc-peer-disconnect-error
* update lockfile
* fix lib.rs and tests.rs
* update decode.rs
* auto-delete in Cargo.lock
* delete milagro in cargo.toml
* remove milagro from makefile
* remove milagro from the name
* delete milagro in comment
* delete milagro in cargo.toml
* delete in /testing/ef_tests/cargo.toml
* delete milagro in the logical OR
* delete milagro in /lighthouse/src/main.rs
* delete milagro in /crypto/bls/tests/tests.rs
* delete milagro in comment
* delete milagro in /testing/ef_tests/src/cases/bls_eth_aggregate_pubkeys.rs
* delete milagro
* delete more in lib.rs
* delete more in lib.rs
* delete more in lib.rs
* delete milagro in /crypto/bls/src/lib.rs
* delete milagro in crypto/bls/src/mod.rs
* delete milagro.rs
* add metrics layer
* add metrics
* simplify getting the target
* make clippy happy
* fix typos
* unify deps under workspace
* make import statement shorter, fix typos
* enable warn by default, mark flag as deprecated
* do not exit on error when initializing logging fails
* revert exit on error
* adjust bootnode logging
* use target as is by default
* make libp2p events register correctly
* adjust replicated cli help
* turn on debug logs by default, remove deprecation warning
* suppress output (#5)
---------
Co-authored-by: Age Manning <Age@AgeManning.com>
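For orientation, the "metrics layer" here is in the spirit of a `tracing_subscriber` layer that observes every log event; a bare-bones sketch, not the actual Lighthouse implementation:
```rust
use tracing::{Event, Subscriber};
use tracing_subscriber::layer::{Context, Layer};

/// Observes log events; a real layer would increment a Prometheus counter
/// labelled by the event's level and target.
pub struct MetricsLayer;

impl<S: Subscriber> Layer<S> for MetricsLayer {
    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
        let _target = event.metadata().target();
        let _level = event.metadata().level();
        // e.g. increment a counter keyed by (_target, _level) here.
    }
}
```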
* update libp2p and address compiler errors
* remove bandwidth logging from transport
* use libp2p registry
* make clippy happy
* use rust 1.73
* correct rpc keep alive
* remove comments and obsolete code
* remove libp2p prefix
* make clippy happy
* use quic under facade
* remove fast msg id
* bubble up close statements
* fix wrong comment
## Issue Addressed
Synchronize dependencies and edition on the workspace `Cargo.toml`
## Proposed Changes
With https://github.com/rust-lang/cargo/issues/8415 merged, it's now possible to synchronize details like metadata and dependencies in the workspace `Cargo.toml`.
Aligning the dependencies that are shared between multiple crates in the workspace `Cargo.toml` makes it easier to avoid duplicate versions of the same dependency, which in turn eases compile times.
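For illustration, the mechanism looks like this (crate names are just examples):
```toml
# Root workspace Cargo.toml: versions are declared once.
[workspace.dependencies]
serde = "1.0"
tokio = { version = "1", features = ["full"] }

# Member crate Cargo.toml: inherit from the workspace, so versions can't drift.
[dependencies]
serde = { workspace = true }
tokio = { workspace = true }
```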
## Additional Info
This PR also removes the no-longer-required direct dependency on the `serde_derive` crate.
It should be reviewed after https://github.com/sigp/lighthouse/pull/4639 gets merged.
Closes https://github.com/sigp/lighthouse/issues/4651
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
#4738
## Proposed Changes
See the above issue for details. I went with option 2: use the async reqwest client in `Eth2NetworkConfig` and propagate the async-ness.
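As a sketch of the resulting shape (illustrative names, not the exact Lighthouse signatures):
```rust
// The blocking client is replaced with the async one, and `async` is
// propagated up through the callers.
async fn fetch_genesis_state(url: &str) -> Result<Vec<u8>, reqwest::Error> {
    let bytes = reqwest::Client::new()
        .get(url)
        .send()
        .await?
        .error_for_status()?
        .bytes()
        .await?;
    Ok(bytes.to_vec())
}
```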
## Proposed Changes
New release to replace the cancelled v4.4.0 release.
This release includes the bugfix #4687 which avoids a deadlock that was present in v4.4.0.
## Additional Info
Pending testing over the weekend, this will be merged on Monday, September 4th.
## Issue Addressed
NA
## Proposed Changes
Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:
1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
2. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
3. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).
Presently there are two levels of priorities:
- `Priority::P0`
- The beacon processor will prioritise these above everything other than importing new blocks.
- Roughly all validator-sensitive endpoints.
- `Priority::P1`
- The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill things.
- Everything that's not `Priority::P0`
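To make the queueing idea concrete, here is a minimal sketch (types and names are assumptions, not Lighthouse's actual API): HTTP handlers hand work to a bounded channel consumed by beacon processor workers instead of spawning unbounded tasks:
```rust
use tokio::sync::mpsc;

type ApiTask = Box<dyn FnOnce() + Send + 'static>;

fn handle_http_request(work_tx: &mpsc::Sender<ApiTask>) -> Result<(), String> {
    let task: ApiTask = Box::new(|| {
        // Compute the HTTP response here, on a beacon processor worker.
    });
    // A full queue means the node is overloaded: drop the request rather
    // than accepting unbounded work (the bound described in point 2 above).
    work_tx
        .try_send(task)
        .map_err(|_| "beacon processor queue full".to_string())
}
```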
The `--http-enable-beacon-processor false` flag can be supplied to revert to the old behaviour of spawning a new `tokio` task for each request:
```
--http-enable-beacon-processor <BOOLEAN>
The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
"true", HTTP API requests will queued and scheduled alongside other tasks. When set to "false", HTTP API
responses will be executed immediately. [default: true]
```
## New CLI Flags
I added some other new CLI flags:
```
--beacon-processor-aggregate-batch-size <INTEGER>
Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
network. [default: 64]
--beacon-processor-attestation-batch-size <INTEGER>
Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
[default: 64]
--beacon-processor-max-workers <INTEGER>
Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
consumption. Reducing the value may result in decreased resource usage and diminished performance. The
default value is the number of logical CPU cores on the host.
--beacon-processor-reprocess-queue-len <INTEGER>
Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
messages from being dropped while lower values may help protect the node from becoming overwhelmed.
[default: 12288]
```
I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the GitHub runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.
## Additional Info
I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses; I guess this is what we signed up for 🤷
The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for a builder response.
I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](efbabe3252). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
## Issue Addressed
Addresses #2557
## Proposed Changes
Adds the `lighthouse validator-manager` command, which provides:
- `lighthouse validator-manager create`
- Creates a `validators.json` file and a `deposits.json` file (the same format as https://github.com/ethereum/staking-deposit-cli).
- `lighthouse validator-manager import`
- Imports validators from a `validators.json` file to the VC via the HTTP API.
- `lighthouse validator-manager move`
- Moves validators from one VC to another, using only the VC API.
## Additional Info
In 98bcb947c I've reduced some VC `ERRO` and `CRIT` warnings to `WARN` or `DEBG` for the case where a pubkey is missing from the validator store. These were being triggered when we removed a validator but still had it in caches. It seems to me that `UnknownPubkey` will only happen in the case where we've removed a validator, so downgrading the logs is prudent. All the logs are `DEBG` apart from attestations and blocks which are `WARN`. I thought having *some* logging about this condition might help us down the track.
In 856cd7e37d I've made the VC delete the corresponding password file when it's deleting a keystore. This seemed like nice hygiene. Notably, it'll only delete that password file after it scans the validator definitions and finds that no other validator is also using that password file.
## Issue Addressed
Closes #4354, closes #3987
Replaces #4305, #4283
## Proposed Changes
This switches the default slasher backend _back_ to LMDB.
If an MDBX database exists and the MDBX backend is enabled then MDBX will continue to be used. Our release binaries and Docker images will continue to include MDBX for as long as it is practical, so users of these should not notice any difference.
The main benefit is to users compiling from source and devs running tests. These users no longer have to struggle to compile MDBX and deal with the compatibility issues that arise. Similarly, devs don't need to worry about toggling feature flags in tests or risk forgetting to run the slasher tests due to backend issues.