Compare commits

...

1895 Commits

Author SHA1 Message Date
Michael Sproul
b5f27610ca Resolve rebase_on issues 2024-04-08 10:08:57 +10:00
Michael Sproul
8705ac507b Fix tree-states tests after latest update (#5528)
* Fix clippy

* Fix beacon chain tests

* Comment out dodgy iterator test

* Fix imports for beta compiler
2024-04-08 09:40:33 +10:00
Michael Sproul
0d9fb5a755 Merge remote-tracking branch 'origin/unstable' into tree-states 2024-04-05 15:14:04 +11:00
Eitan Seri-Levi
b65daac907 Add missing header to eth/v1/builder/blinded_blocks (#5407)
* add missing header

* read header in mock builder

* Merge branch 'unstable' into builder-blinded-blocks-missing-header
2024-04-04 19:38:09 +00:00
Eitan Seri-Levi
ee69e14db9 Add is_parent_strong proposer re-org check (#5417)
* initial fork choice additions

* add helper fns

* add is_parent_strong

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into add_is_parent_strong_check

* disabling proposer reorg should set parent_threshold to u64 max

* add new flag, is_parent_strong check in override fcu params

* cherry-pick changes

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into add_is_parent_strong_check

* cleanup

* fmt

* Minor review tweaks
2024-04-04 19:38:06 +00:00
Lion - dapplion
053525e281 Remove DataAvailabilityView trait from ChildComponents (#5421)
* Remove DataAvailabilityView trait from ChildComponents

* PR reviews

* Update beacon_node/network/src/sync/block_lookups/common.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into child_components_independent
2024-04-04 18:49:09 +00:00
Michael Sproul
feb531f85b Single-pass epoch processing and optimised block processing (#5279)
* Single-pass epoch processing (#4483, #4573)

Co-authored-by: Michael Sproul <michael@sigmaprime.io>

* Delete unused epoch processing code (#5170)

* Delete unused epoch processing code

* Compare total deltas

* Remove unnecessary apply_pending

* cargo fmt

* Remove newline

* Use epoch cache in block packing (#5223)

* Remove progressive balances mode (#5224)

* inline inactivity_penalty_quotient_for_state

* drop previous_epoch_total_active_balance

* fc lint

* spec compliant process_sync_aggregate (#15)

* spec compliant process_sync_aggregate

* Update consensus/state_processing/src/per_block_processing/altair/sync_committee.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Delete the participation cache (#16)

* update help

* Fix op_pool tests

* Fix fork choice tests

* Merge remote-tracking branch 'sigp/unstable' into epoch-single-pass

* Simplify exit cache (#5280)

* Fix clippy on exit cache

* Clean up single-pass a bit (#5282)

* Address Mark's review of single-pass (#5386)

* Merge remote-tracking branch 'origin/unstable' into epoch-single-pass

* Address Sean's review comments (#5414)

* Address most of Sean's review comments

* Simplify total balance cache building

* Clean up unused junk

* Merge remote-tracking branch 'origin/unstable' into epoch-single-pass

* More self-review

* Merge remote-tracking branch 'origin/unstable' into epoch-single-pass

* Merge branch 'unstable' into epoch-single-pass

* Fix imports for beta compiler

* Fix tests, probably
2024-04-04 13:14:36 +00:00
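
The headline technique in the entry above deserves a sketch: single-pass epoch processing replaces one validator-set iteration per sub-step (justification, rewards, exits, ...) with a single combined loop. This is conceptual only; the names and the delta representation are illustrative, not Lighthouse's actual code.

```rust
/// Conceptual sketch of single-pass epoch processing: one iteration over the
/// validator set applies every per-validator update, instead of a separate
/// loop per processing stage.
fn process_epoch_single_pass(balances: &mut [u64], combined_deltas: &[i64]) {
    assert_eq!(balances.len(), combined_deltas.len());
    for (balance, delta) in balances.iter_mut().zip(combined_deltas) {
        // Apply the pre-combined reward/penalty delta without underflowing.
        *balance = balance.saturating_add_signed(*delta);
    }
}
```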
Eitan Seri-Levi
f4cdcea7b1 Return not synced errors for endpoints that require syncing (#5136)
* add not synced filter into then blocks

* refactor
2024-04-04 01:36:23 +00:00
Michael Sproul
7825af4a6e Bump h2 for RUSTSEC-2024-0332 (#5514)
* Bump `h2` for RUSTSEC-2024-0332
2024-04-04 01:36:20 +00:00
Mac L
969d12dc6f Use E for EthSpec globally (#5264)
* Use `E` for `EthSpec` globally

* Fix tests

* Merge branch 'unstable' into e-ethspec

* Merge branch 'unstable' into e-ethspec

# Conflicts:
#	beacon_node/execution_layer/src/engine_api.rs
#	beacon_node/execution_layer/src/engine_api/http.rs
#	beacon_node/execution_layer/src/engine_api/json_structures.rs
#	beacon_node/execution_layer/src/test_utils/handle_rpc.rs
#	beacon_node/store/src/partial_beacon_state.rs
#	consensus/types/src/beacon_block.rs
#	consensus/types/src/beacon_block_body.rs
#	consensus/types/src/beacon_state.rs
#	consensus/types/src/config_and_preset.rs
#	consensus/types/src/execution_payload.rs
#	consensus/types/src/execution_payload_header.rs
#	consensus/types/src/light_client_optimistic_update.rs
#	consensus/types/src/payload.rs
#	lcli/src/parse_ssz.rs
2024-04-02 15:12:25 +00:00
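
For readers unfamiliar with the convention being enforced above: the rename makes every type parameter bounded by `EthSpec` spell itself `E`, instead of a mix of `T` and `E`. A minimal illustration (the trait and struct here are stand-ins, not Lighthouse types):

```rust
use std::marker::PhantomData;

pub trait EthSpec: 'static {}

// Previously written as `pub struct ExamplePayload<T: EthSpec>`; after this
// change the parameter is uniformly `E`, matching the trait it abbreviates.
pub struct ExamplePayload<E: EthSpec> {
    pub slot: u64,
    _phantom: PhantomData<E>,
}
```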
Mac L
f8fdb71f50 Add Electra fork boilerplate (#5122)
* Add Electra fork boilerplate

* Remove electra from spec tests

* Fix tests

* Remove sneaky log file

* Fix more tests

* Fix even more tests and add suggestions

* Remove unrelated lcli addition

* Update more tests

* Merge branch 'unstable' into electra

* Add comment for test-suite lcli override

* Merge branch 'unstable' into electra

* Cleanup

* Merge branch 'unstable' into electra

* Apply suggestions

* Merge branch 'unstable' into electra

* Merge sigp/unstable into electra

* Merge branch 'unstable' into electra
2024-04-02 12:35:02 +00:00
realbigsean
3058b96f25 Release v5.1.3 (#5497)
* Release v5.1.3
2024-03-28 05:22:30 +00:00
Pawan Dhananjay
c909941ca1 Bump duplicate cache time (#5493)
* Bump seen_ttl for gossipsub duplicate cache
2024-03-27 17:31:51 +00:00
realbigsean
9d24844f50 Lookup log improvements (#5491)
* log improvements
2024-03-27 13:24:04 +00:00
realbigsean
334aa2eabd Single lookup improvements (#5488)
* Fix unexpected `UnrequestedBlobId` and `ExtraBlocksReturned` errors due to race conditions.

* Continue chain segment processing and skip any blocks that are already known, rather than returning an error.

* more de-dup checking

* ensure we don't reset `requested_ids` during rpc download

* better fix

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into more-dup-lookup-fixes

* remove chain hash check

* Merge branch 'fix-block-lookup-race' of https://github.com/jimmygchen/lighthouse into sean-test-lookups

* remove block check

* add back tests

* Log and CI fixes

* undo extra check

* Merge branch 'sean-test-lookups' of https://github.com/realbigsean/lighthouse into sean-test-lookups

* log improvements

* Improve logging
2024-03-27 10:01:27 +00:00
Michael Sproul
250a5bdc27 Run fork choice after RPC blob import (#5475)
* Run fork choice after RPC blob import
2024-03-26 21:36:08 +00:00
João Oliveira
59ef564b1d Move gossipsub into a separate crate (#5401)
* move gossipsub into a separate crate

* Merge branch 'unstable' of github.com:sigp/lighthouse into separate-gossipsub

* address review 2

* clippy beta

* update logging to log gossipsub logs
2024-03-26 03:10:59 +00:00
Akihito Nakano
8cec8a6793 Fix double counted metrics (#5476)
* Fix double counted metrics
2024-03-26 03:10:26 +00:00
Eitan Seri-Levi
e4d4e439cb Add Capella & Deneb light client support (#4946)
* rebase and add comment

* conditional test

* test

* optimistic should be working now

* finality should be working now

* try again

* try again

* clippy fix

* add lc bootstrap beacon api

* add lc optimistic/finality update to events

* fmt

* That error isn't occurring on my computer but I think this should fix it

* Merge branch 'unstable' into light_client_beacon_api_1

# Conflicts:
#	beacon_node/beacon_chain/src/events.rs
#	beacon_node/http_api/src/lib.rs
#	beacon_node/http_api/src/test_utils.rs
#	beacon_node/http_api/tests/main.rs
#	beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
#	beacon_node/lighthouse_network/src/rpc/methods.rs
#	beacon_node/lighthouse_network/src/service/api_types.rs
#	beacon_node/network/src/beacon_processor/worker/rpc_methods.rs
#	beacon_node/tests/test.rs
#	common/eth2/src/types.rs
#	lighthouse/src/main.rs

* Add missing test file

* Update light client types to comply with Altair light client spec.

* Fix test compilation

* Merge branch 'unstable' into light_client_beacon_api_1

* Support deserializing light client structures for the Bellatrix fork

* Move `get_light_client_bootstrap` logic to `BeaconChain`. `LightClientBootstrap` API to return `ForkVersionedResponse`.

* Misc fixes.
- log cleanup
- move http_api config mutation to `config::get_config` for consistency
- fix light client API responses

* Add light client bootstrap API test and fix existing ones.

* Merge branch 'unstable' into light_client_beacon_api_1

* Fix test for `light-client-server` http api config.

* Appease clippy

* Add Altair light client SSZ tests

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into light_client_beacon_api_1

* updates to light client header

* light client header from signed beacon block

* using options

* implement helper functions

* placeholder conversion from vec hash256 to exec branch

* add deneb

* using fixed vector

* remove unwraps

* by epoch

* compute merkle proof

* merkle proof

* update comments

* resolve merge conflicts

* linting

* Merge branch 'unstable' into light-client-ssz-tests

# Conflicts:
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	consensus/types/src/light_client_bootstrap.rs
#	consensus/types/src/light_client_header.rs

* superstruct attempt

* superstruct changes

* lint

* altair

* update

* update

* changes to light_client_optimistic_ and finality

* merge unstable

* refactor

* resolved merge conflicts

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into capella_deneb_light_client_types

* block_to_light_client_header fork aware

* fmt

* comment fix

* comment fix

* include merge fork, update deserialize_by_fork, refactor

* fmt

* pass by ref to prevent clone

* rename merkle proof fn

* add FIXME

* LightClientHeader TestRandom

* fix comments

* fork version deserialize

* merge unstable

* move fn arguments, fork name calc

* use task executor

* remove unneeded fns

* remove dead code

* add manual ssz decoding/encoding and add ssz_tests_by_fork macro

* merge deneb types with tests

* merge ssz tests, revert code deletion, cleanup

* move chainspec

* update ssz tests

* fmt

* light client ssz tests

* change to superstruct

* changes from feedback

* linting

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into capella_deneb_light_client_types

* test fix

* cleanup

* Remove unused `derive`.

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into capella_deneb_light_client_types

* beta compiler fix

* merge
2024-03-25 11:01:56 +00:00
chonghe
19b0db2cdf Delete PRE_CAPELLA_ENGINE_CAPABILITIES (#5406)
* Adjust width

* Commit changes

* Delete PRE_CAPELLA_ENGINE_CAPABILITIES

* Revert "Adjust width"

This reverts commit 6fea81b897.

* Revert "Commit changes"

This reverts commit d00859a63e.

* Simplify

* Merge branch 'delete-pre-capella' of https://github.com/chong-he/lighthouse into delete-pre-capella
2024-03-23 21:30:36 +00:00
chonghe
a0e64d0652 Built-in documentation text width in Lighthouse book (#5394)
* Adjust width

* Commit changes

* Trigger Build
2024-03-23 20:52:09 +00:00
dknopik
0a6e4a11d7 Verify whether validators really are unknown during sync committee duty API request (#5174)
* Verify whether validators really are unknown during sync committee duty API request

* Merge branch 'unstable' into fix-4717

* Merge branch 'unstable' into fix-4717

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into fix-4717
2024-03-23 00:01:11 +00:00
Daniel Ramírez-Chiquillo
035c378c61 Make sure all geth processes are killed when stopping a local testnet (#5383)
* Fix geth processes not being killed when stopping a local testnet

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into fix_stop_testnet
2024-03-23 00:01:08 +00:00
Lion - dapplion
306d3eb0eb Delete unused incomplete_processing_components (#5418)
* Delete unused incomplete_processing_components

* lint
2024-03-23 00:00:35 +00:00
Michael Sproul
332e1bb9e6 Fix one and hide all beacon-processor flags (#5397)
* Fix `beacon-processor-work-queue-len`

* Hide beacon-processor flags
2024-03-22 23:24:17 +00:00
zhiqiangxu
5121d655f7 chore: reduce scope of commitment (#5426)
* reduce scope of commitment

* avoid clone for last reference

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into mod_merge_single_blob
2024-03-22 18:16:52 +00:00
zhiqiangxu
5bfe6a8ae3 chore: remove stale comment (#5440)
* rm stale comment

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into rm_irelevant_comment
2024-03-22 18:16:49 +00:00
Afanti
003bb0ae3c fix: tail command typo (#5456)
* fix: tail command typo

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into patch-1
2024-03-22 18:16:46 +00:00
realbigsean
21cdc64bfe Improve parent lookup logging (#5451)
* upgrade parent lookup result processing logs to debug, use display instead of debug for BlockError in case a blob parent unknown error is hit, add block root to BlockIsAlreadyKnown

* fix compile

* fix compile

* fix compile
2024-03-22 18:16:13 +00:00
realbigsean
5ce16192c7 Release v5.1.2 (#5453)
* Release v5.1.2
2024-03-21 18:04:00 +00:00
João Oliveira
6edf031490 disable libp2p upnp (#5449)
* disable libp2p upnp

when passing --disable-upnp cli flag
2024-03-21 01:13:03 +00:00
Michael Sproul
65a6118c53 Fix gossip verification of duplicate attester slashings (#5385)
* Fix gossip verification of duplicate attester slashings
2024-03-20 20:47:38 +00:00
João Oliveira
f33ce8cc34 fix NAT nat_open metrics report (#5427)
* fix nat reporting
2024-03-20 06:45:18 +00:00
Akihito Nakano
7117772fb3 Fix peer count metrics (#5404)
* Set the peers_per_client metrics directly, rather than using increment/decrement

* Move PEERS_CONNECTED related update to the same place

* Move PEERS_CONNECTED_MULTI related update to the same place

* Rename

* Remove unused variables
2024-03-20 06:45:16 +00:00
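
The pattern behind this metrics fix, sketched with the `prometheus` crate (metric and label names are illustrative): recompute the value from the peer DB and `set()` the gauge, rather than pairing `inc()`/`dec()` calls that drift apart when an error path skips one side.

```rust
use prometheus::{opts, IntGaugeVec};

fn main() {
    let peers_per_client = IntGaugeVec::new(
        opts!("peers_per_client", "Connected peers by client implementation"),
        &["client"],
    )
    .unwrap();

    // Derived from the source of truth each time, so the metric cannot drift.
    for (client, count) in [("lighthouse", 5), ("prysm", 3)] {
        peers_per_client.with_label_values(&[client]).set(count);
    }
}
```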
joao
aa592853df Improvements and Fixes in Documentation, Including Corrected Command Usage (#4998)
* Fix typo: change 'periodical' to 'periodic' in progress updates description

* Fix wrong command in Usage section

* fix typo in Development Environment section

* Fix typo: change 'Explictly' to 'Explicitly'

* Fix typos in Lighthouse UI and Contributing sections

* Fix typo: replace 'confirms' with 'conforms' in Beacon Node API description

* fix minor typographical error: change 'advice' to 'advise' in SIGILL warning message

* Fix spelling error in Detailed Guide section

* Revert "Fix typo: change 'Explictly' to 'Explicitly'"

This reverts commit 6b0781682b4f9d129d9af48d428353f596b8b852.

* Revert "fix minor typographical error: change 'advice' to 'advise' in SIGILL warning message"

This reverts commit a4904a0afd2d531caf4585e1f0726a6dd65081ea.

* compiled

* Revert "compiled"

This reverts commit 425a553bd93af93340858050cc45041ae86b3733.

* Revert "Revert "compiled""

This reverts commit b1f871cb1bd41822f4c7bd696f77ba38cdd32f9d.

* Empty commit to trigger CI.
2024-03-20 06:44:44 +00:00
Eitan Seri-Levi
01ec42e75a Fix Rust beta compiler errors 1.78.0-beta.1 (#5439)
* remove redundant imports

* fix test

* contains key

* fmt

* Merge branch 'unstable' into fix-beta-compiler
2024-03-20 05:17:02 +00:00
realbigsean
4449627c7c Dedup parent blob requests (#5432)
* de-dup parent blob requests

* add new line
2024-03-20 00:19:26 +00:00
chonghe
eab3672c6d Add attestation simulator, blobs info and some updates to Lighthouse Book (#5364)
* Apply suggestions from code review

* Revise attestation simulator doc

* Revise blobs.md

* Summary

* Add blobs

* Simulator docs

* Revise attestation simulator

* minor formatting

* Revise vm node

* Update faq

* Update faq

* Add link to v4.6.0

* Remove minification in the docs

* Update Goerli to Holesky

* Add a note on moved vm validator monitor

* Update Rpi 4 note

* Revise attestation simulator doc

* Add docs for attestation simulator

* update database table

* Update faq on resources used

* Fix and update table
2024-03-14 06:12:25 +00:00
Michael Sproul
2a3c709f8c Release v5.1.1 (#5396)
* Release v5.1.1
2024-03-12 03:42:15 +00:00
Michael Sproul
1d7223fadf Fix macOS build by updating cc (#5393)
* Fix macOS build by bumping `cc`
2024-03-12 02:47:54 +00:00
Michael Sproul
2cd0e609f5 Tree states release v5.1.222-exp 2024-03-12 11:56:16 +11:00
Michael Sproul
a9f445707e Fix macOS build by bumping cc (#5392) 2024-03-12 11:49:45 +11:00
Michael Sproul
03e2a39246 Tree states release v5.1.111-exp 2024-03-12 10:36:37 +11:00
Michael Sproul
a249dbfc1a Merge remote-tracking branch 'origin/unstable' into tree-states-update 2024-03-11 15:29:03 +11:00
Pawan Dhananjay
10a38a8aae Release v5.1.0 (#5372)
* Bump versions
2024-03-10 23:44:29 +00:00
Pawan Dhananjay
54b1c229e1 Downgrade rate limited log (#5381)
* Address review comments

* Downgrade log for by_root requests
2024-03-10 23:05:24 +00:00
Michael Sproul
f93844e63b Optimise concurrent block production (#5368)
* Optimise concurrent block production
2024-03-08 05:15:28 +00:00
Jimmy Chen
762dab23b8 Fix AddrInUse error in cli tests (#5266)
* Fix `AddrInUse` error in cli tests.
2024-03-08 05:15:25 +00:00
Jonas Bostoen
641f6be3f0 Explicit peers (#5333)
* Merge branch 'unstable' into feature/explicit-peers

* Merge latest unstable

* refactor: remove explicit-peers flag, mark trusted peers as explicit instead

* feat(beacon_node): add explicit peers to GossipSub, mark as trusted

* feat(beacon_node): add explicit peers cli + config
2024-03-07 22:22:39 +00:00
Age Manning
de91c77cb2 Improve peer performance for NAT'd nodes (#5345)
* Merge latest unstable

* Reduce diff

* Reduce logic, discover up to max peers

* Peer discovery for Natd peers
2024-03-07 12:32:30 +00:00
Pawan Dhananjay
84a902a589 Reduce load on validator subscription channels (#5311)
* Fix tests

* Merge branch 'unstable' into unclog-channels

* Avoid reallocations

* Reduce subscription load on beacon node
2024-03-07 12:32:27 +00:00
antondlr
8cd2b1ca87 Update CI actions to alleviate deprecation warnings (#5321)
* Update and pin all actions to a modern release
2024-03-07 12:32:24 +00:00
Age Manning
85c3204d70 Correct the metrics for topic subscriptions (#5344)
* Handle fork boundaries

* Merge latest unstable

* Topic subscription fix
2024-03-07 12:32:21 +00:00
Age Manning
fc8f1a4ca7 Attempt to publish to at least mesh_n peers (#5357)
* Code improvements

* Fix gossipsub tests

* Merge latest unstable

* Differentiate errors and better scoring

* Attempt to publish to mesh_n peers
2024-03-07 09:48:51 +00:00
Krishang Shah
b9614571a3 Fix #5288: Don't POST if attestations are empty. (#5318)
* changed to is_empty() and removed WARN

* added log argument

* fix: issue 5288
2024-03-07 08:49:18 +00:00
Michael Sproul
bf118a17d4 Fix block v3 header decoding (#5366)
* Fix block v3 header decoding
2024-03-07 03:31:06 +00:00
chonghe
258eeb5f09 Delete milagro library (#5298)
* fix lib.rs and tests.rs

* update decode.rs

* auto-delete in Cargo.lock

* delete milagro in cargo.toml

* remove milagro from makefile

* remove milagro from the name

* delete milagro in comment

* delete milagro in cargo.toml

* delete in /testing/ef_tests/cargo.toml

* delete milagro in the logical OR

* delete milagro in /lighthouse/src/main.rs

* delete milagro in /crypto/bls/tests/tests.rs

* delete milagro in comment

* delete milagro in /testing//ef_test/src//cases/bls_eth_aggregate_pubkeys.rs

* delete milagro

* delete more in lib.rs

* delete more in lib.rs

* delete more in lib.rs

* delete milagro in /crypto/bls/src/lib.rs

* delete milagro in crypto/bls/src/mod.rs

* delete milagro.rs
2024-03-06 23:17:42 +00:00
Jimmy Chen
6aebb49718 Update dependency whoami (#5351)
* Update dependency
2024-03-06 17:29:05 +00:00
Michael Sproul
cff6258bb1 Fix duties override bug in VC (#5305)
* Fix duties override bug in VC

* Use initial request efficiently

* Prevent expired subscriptions by construction

* Clean up selection proof logic

* Add test
2024-03-04 23:15:05 +00:00
Age Manning
f919f82e4f Improve logging around peer scoring (#5325)
* Improve logging around score states

* Improve logs also
2024-03-04 19:39:23 +00:00
zhiqiangxu
ee6f667702 bump ethereum_serde_utils (#5341)
* bump ethereum_serde_utils

* cargo update
2024-03-02 19:14:03 +00:00
Lion - dapplion
7b65d385b3 Drop address_change_broadcast (#5287)
* Drop address_change_broadcast
2024-02-29 01:51:11 +00:00
Michael Sproul
88b37a10df Optimise no-op PATCH ops in VC HTTP API (#5064)
* Optimise no-op changes in VC API

* Handle another no-op case

* Merge remote-tracking branch 'origin/unstable' into opt-vc-patch-api
2024-02-29 01:51:07 +00:00
Age Manning
64e563f5e9 Recognize the Caplin consensus client (#5304)
* Caplin has joined the party

* Fix typo
2024-02-28 05:38:24 +00:00
João Oliveira
a89ff100af improve libp2p connected peer metrics (#5314)
* patch rust-yamux dep

* improve libp2p connected peer metrics
2024-02-28 03:52:55 +00:00
João Oliveira
65c4ff0775 remove exit-future (#5183)
* remove exit-future usage,

as it is unmaintained, and replace it with async-channel, which is already in the repo.

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into remove-exit-future

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into remove-exit-future
2024-02-27 22:12:44 +00:00
João Oliveira
abd99652b4 remove nat module and use libp2p upnp (#4840)
* remove nat module and use libp2p upnp

* update Cargo.lock

* remove no longer used dependencies

* restore nat module refactored

* log successful mapping

* only activate upnp if config enabled

reduce logs to debug!

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into libp2p-nat

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into libp2p-nat

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into libp2p-nat

* address review

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into libp2p-nat

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into libp2p-nat

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into libp2p-nat

* address review
2024-02-27 07:29:18 +00:00
Age Manning
d36241b4a1 Track multiaddr in connection status (#5308)
* Record the multiaddr for connected peers
2024-02-27 06:32:48 +00:00
Lion - dapplion
928915c718 as_store_bytes not fallible (#5285) 2024-02-27 12:23:05 +11:00
Lion - dapplion
5dfc5c1f88 Remove milhouse export aliases (#5286)
Closes #5141
2024-02-27 12:19:22 +11:00
Michael Sproul
744c598b1c Fix typos and make block hash calculation public (#5275)
* Fix typo in `verify_payload_block_hash`

* Make calculate_execution_block_hash public again
2024-02-26 07:45:17 +00:00
Pawan Dhananjay
3ab9d3a84e Add a cli option for the snapshot cache size (#5270)
* Add a cli option for snapshot cache size

* Remove junk

* Make snapshot_cache module public

* lint

* Update docs
2024-02-26 05:19:39 +00:00
Michael Sproul
de6ede163c Delete ancient, unused HTTP docs (#5281)
* Delete ancient, unused HTTP docs
2024-02-26 05:19:35 +00:00
Jimmy Chen
f08e8f5633 Run apt update before install. (#5295)
* Run `apt update` before install.
2024-02-26 03:10:59 +00:00
chonghe
13956a0741 Add build instructions for Fedora/RHEL/CentOS (#5225)
* Add dependencies
2024-02-23 15:31:55 +00:00
dapplion
a5d3408c59 Merge remote-tracking branch 'sigp/epoch-single-pass' into tree-states 2024-02-23 11:49:06 +08:00
Michael Sproul
20f53e7769 Tree states release v5.0.111-exp (#5276)
* Fix CLI help text

* Tree states release v5.0.111-exp
2024-02-23 12:34:13 +11:00
Michael Sproul
f9c9c40a67 Fix tree-states tests (#5277)
* Fix beta compiler lints

* Fix fork choice tests

* Fix op_pool tests

---------

Co-authored-by: dapplion <35266934+dapplion@users.noreply.github.com>
2024-02-23 09:30:31 +11:00
Michael Sproul
26117a558f Merge remote-tracking branch 'origin/unstable' into tree-states 2024-02-22 15:43:04 +11:00
Paul Hauner
b5bae6e7a2 Release v5.0.0 (#5254)
* Bump versions
2024-02-20 22:12:52 +00:00
Jimmy Chen
50c423ad88 Revert libp2p metrics (#4870) (#5265)
* Revert "improve libp2p connected peer metrics (#4870)"

This reverts commit 0c3fef59b3.
2024-02-20 04:19:17 +00:00
João Oliveira
a229b52723 Deactivate RPC Connection Handler after goodbye message is sent (#5250)
* Deactivate RPC Connection Handler

after goodbye message is sent

* nit: use to_string instead of format

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into rpc-shutdown-improvement

* clippy

* Fix cargo.lock

* Merge latest unstable
2024-02-19 07:16:01 +00:00
Eitan Seri-Levi
4d625951b8 Deprecate env_log flag in tracing layer (#5228)
* deprecate terminal logs file in tracing layer

* sink writer
2024-02-19 05:17:58 +00:00
Michael Sproul
c9702cb0a1 Download checkpoint blobs during checkpoint sync (#5252)
* MVP implementation (untested)

* update store checkpoint sync test

* update cli help

* Merge pull request #5253 from realbigsean/checkpoint-blobs-sean

Checkpoint blobs sean

* Warn only if blobs are missing from server

* Merge remote-tracking branch 'origin/unstable' into checkpoint-blobs

* Verify checkpoint blobs

* Move blob verification earlier
2024-02-19 02:22:23 +00:00
Paul Hauner
e22c9eed8f Add Deneb fork epoch for Gnosis (#5242)
* Add Deneb fork epoch for Gnosis

* Add deneb constants

* Update common/eth2_network_config/built_in_network_configs/gnosis/config.yaml

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Adjust `min_epochs_for_block_requests`

* Change `MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT`

* Fix chain spec `max_per_epoch_activation_churn_limit`

* Fix chain spec values again
2024-02-19 02:22:19 +00:00
realbigsean
f21472991d check the da cache and the attester cache in responding to RPC requests (#5138)
* check the da cache and the attester cache in responding to RPC requests

* use the processing cache instead

* update comment

* add da cache metrics

* rename early attester cache method

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into check-da-cache-in-rpc-response

* make rustup update run on the runners

* Revert "make rustup update run on the runners"

This reverts commit d097e9bfa8.
2024-02-19 02:22:15 +00:00
ethDreamer
a264afd19f Verify Versioned Hashes During Optimistic Sync (#4832)
* Convert NewPayloadRequest to use Reference

* Refactor for Clarity

* Verify Versioned Hashes

* Added Tests for Version Hash Verification

* Added Moar Tests

* Fix Problems Caused By  Merge

* Update to use Alloy Instead of Reth Crates (#14)

* Update beacon_node/execution_layer/src/engine_api/new_payload_request.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Faster Versioned Hash Extraction

* Update to rust 1.75 & Pin alloy-consensus
2024-02-18 12:40:45 +00:00
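
The quantity being verified above is the EIP-4844 versioned hash: the first byte is a version tag, the rest is the tail of SHA-256 of the KZG commitment. A self-contained sketch with the `sha2` crate (Lighthouse's own helper may differ in shape):

```rust
use sha2::{Digest, Sha256};

const VERSIONED_HASH_VERSION_KZG: u8 = 0x01;

/// EIP-4844 `kzg_to_versioned_hash`: optimistic sync can recompute these
/// from the payload's commitments and compare against the block's hashes.
fn kzg_to_versioned_hash(commitment: &[u8; 48]) -> [u8; 32] {
    let mut hash: [u8; 32] = Sha256::digest(commitment).into();
    hash[0] = VERSIONED_HASH_VERSION_KZG;
    hash
}
```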
realbigsean
1711b80779 enable doppelganger tests for deneb (#5137)
* enable doppelganger tests for deneb

* comment out lcli install skip

* Add sanity check

* Merge remote-tracking branch 'origin/unstable' into deneb-doppelganger
2024-02-15 12:25:02 +00:00
Michael Sproul
f17fb291b7 Handle unknown head during attestation publishing (#5010)
* Handle unknown head during attestation publishing

* Merge remote-tracking branch 'origin/unstable' into queue-http-attestations

* Simplify task spawner

* Improve logging

* Add a test

* Improve error logging

* Merge remote-tracking branch 'origin/unstable' into queue-http-attestations

* Fix beta compiler warnings
2024-02-15 12:24:47 +00:00
Age Manning
49536ff103 Add distributed flag to VC to enable support for DVT (#4867)
* Initial flag building

* Update validator_client/src/cli.rs

Co-authored-by: Abhishek Kumar <43061995+xenowits@users.noreply.github.com>

* Merge latest unstable

* Per slot aggregates

* One slot lookahead for sync committee aggregates

* Update validator_client/src/duties_service.rs

Co-authored-by: Abhishek Kumar <43061995+xenowits@users.noreply.github.com>

* Rename selection_look_ahead

* Merge branch 'unstable' into vc-distributed

* Merge remote-tracking branch 'origin/unstable' into vc-distributed

* Update CLI text
2024-02-15 12:23:58 +00:00
Michael Sproul
0e819fa785 Schedule Deneb on mainnet (#5233)
* Schedule Deneb on mainnet

* Fix trusted setup roundtrip test

* Fix BN CLI tests for insecure genesis sync
2024-02-15 12:23:51 +00:00
Michael Sproul
7c23625193 Quieten gossip republish logs (#5235)
* Quieten gossip republish logs
2024-02-15 04:18:23 +00:00
João Oliveira
256d9042d3 Drop gossipsub stale messages when polling ConnectionHandler. (#5175)
* drop gossipsub stale messages

* convert async-channel::Receiver to Peekable,

to be able to Peek next message without dropping it
2024-02-15 00:41:52 +00:00
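
A small sketch of the `Peekable` conversion described above (channel and message types are placeholders): `async-channel`'s `Receiver` implements `Stream`, so wrapping it with `futures`' `peekable()` lets the handler inspect the next message without taking it out of the queue.

```rust
use futures::stream::StreamExt;
use std::pin::Pin;

async fn poll_sketch() {
    let (tx, rx) = async_channel::unbounded::<u32>();
    tx.send(1).await.unwrap();

    let mut rx = rx.peekable();
    let mut rx = Pin::new(&mut rx);

    // Look at the next message without removing it; a message that can't be
    // handled yet stays queued instead of being dropped.
    if let Some(next) = rx.as_mut().peek().await {
        println!("next message: {next}");
    }
    assert_eq!(rx.next().await, Some(1)); // still in the queue
}
```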
Michael Sproul
b1f7f0aaf9 Remove progressive balances mode (#5224) 2024-02-14 11:55:27 +11:00
Michael Sproul
9eb1685506 Use epoch cache in block packing (#5223) 2024-02-14 11:55:11 +11:00
Eitan Seri-Levi
e7ef2a3a54 validator liveness endpoint should accept string encoded indices (#5184)
* deserialize string indices as u64

* client should send quoted indices
2024-02-09 04:59:39 +00:00
realbigsean
4172d9f75c Update to consensus spec v1.4.0-beta.6 (#5094)
* get latest ef tests passing

* fix tests

* Fix invalid payload recovery tests

* Merge branch 'unstable' into update-to-spec-v1.4.0-beta.6

* Revert "fix tests"

This reverts commit 0c875b02e0.

* Fix fork choice def. tests

* Update beacon_node/beacon_chain/tests/payload_invalidation.rs
2024-02-08 18:08:21 +00:00
Akihito Nakano
e353358484 Update to warp v0.3.6 (#5172)
* Update to warp 0.3.6

* Update Cargo.lock
2024-02-08 16:57:26 +00:00
Jimmy Chen
4b19eac8ce Remove curve25519-dalek patch (#5214)
* Remove patch dependencies
2024-02-08 16:57:19 +00:00
Michael Sproul
6f442f2bb8 Improve database compaction and prune-states (#5142)
* Fix no-op state prune check

* Compact freezer DB after pruning

* Refine DB compaction

* Add blobs-db options to inspect/compact

* Better key size

* Fix compaction end key
2024-02-08 10:05:08 +00:00
zilayo
e470596715 chore(docs): amend port guidance to enable QUIC support (#5029)
* chore(docs): amend port guidance to enable QUIC support
2024-02-08 02:40:58 +00:00
João Oliveira
0c3fef59b3 improve libp2p connected peer metrics (#4870)
* improve libp2p connected peer metrics

* separate discv5 port from libp2p for NAT open

* use metric family for DISCOVERY_BYTES

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into improve-metrics
2024-02-08 02:40:54 +00:00
Pawan Dhananjay
0b59d10ab6 Fix backfill stalling (#5192)
* Prevent early short circuit in `peer_disconnected`

* lint
2024-02-08 02:40:51 +00:00
Age Manning
4db84de563 Improve network parameters (#5177)
* Modify network parameters for current mainnet conditions
2024-02-08 02:40:47 +00:00
João Oliveira
675a231b45 increment peer per topic count on graft messages (#5212)
* increment peer per topic count on graft messages
2024-02-08 02:00:49 +00:00
Michael Sproul
8e2f7c5fb9 Delete unused epoch processing code (#5170)
* Delete unused epoch processing code

* Compare total deltas

* Remove unnecessary apply_pending

* cargo fmt

* Remove newline
2024-02-08 09:20:28 +11:00
Age Manning
853042746b Downgrade gossipsub duplicate logs (#5163)
* Downgrade duplicate publish logs

* Maintain backwards compatibility, deprecate flag

* The tests had to go, because there's no config to test against

* Update help_bn.md
2024-02-06 07:24:01 +00:00
Jimmy Chen
5cc29e47c5 Fix failing cargo-udeps (#5203)
* Attempt to fix nightly build.

* Update Cargo.toml to pin a commit

* Update Cargo.lock
2024-02-06 07:23:57 +00:00
Jimmy Chen
4b62a024d7 Update external LLVM version in preparation for Rust 1.76. (#5179)
* Update external LLVM version in preparation for Rust 1.76.
2024-02-06 07:23:53 +00:00
Age Manning
ebe77bbad2 Small typo in testing config (#5178)
* Small typo
2024-02-06 07:23:49 +00:00
Michael Sproul
7bec3f9b59 Optional slashing protection for remote keys (#4981)
* Optional slashing protection for remote keys

* Merge remote-tracking branch 'origin/unstable' into disable-slashing-protection-web3signer

* Start writing tests

* Merge remote-tracking branch 'origin/unstable' into disable-slashing-protection-web3signer

* Merge remote-tracking branch 'michael/disable-slashing-protection-web3signer' into disable-slashing-protection-web3signer

* Make half-written tests compile

* Make tests work

* Update help text

* Update book CLI text

* Merge remote-tracking branch 'origin/unstable' into disable-slashing-protection-web3signer

* More logging & CLI tests

* CLI tweaks
2024-02-06 01:30:31 +00:00
Jimmy Chen
795c5778e1 Remove unused js file in Lighthouse book (#5164)
* Remove unused js file in Lighthouse book.
2024-02-06 01:30:26 +00:00
Jimmy Chen
8fa11aa792 Fix incorrect value set for blobs_by_root_request rpc limit. (#5181)
* Fix incorrect value set for `blobs_by_root_request` rpc limit.
2024-02-05 18:37:28 +00:00
Jimmy Chen
39e9f7dc6b Fix Rust beta compiler errors (1.77) (#5180)
* Lint fixes

* More fixes for beta compiler.

* Format fixes

* Move `#[allow(dead_code)]` to field level.

* Remove old comment.

* Update beacon_node/execution_layer/src/test_utils/mod.rs

Co-authored-by: João Oliveira <hello@jxs.pt>

* remove duplicate line
2024-02-05 17:54:11 +00:00
Michael Sproul
ae6620e59d Delete lighthouse db diff (#5171)
* Delete `lighthouse db diff`

* Fix help text
2024-02-02 17:47:14 +11:00
Michael Sproul
8fb6989801 Config for web3signer keep-alive (#5007)
* Allow tweaking connection pool settings

* Build docker image

* Fix imports

* Merge tag 'v4.6.0' into web3signer-keep-alive

v4.6.0

* Delete temp docker build stuff

* Fix tests

* Merge remote-tracking branch 'origin/unstable' into web3signer-keep-alive

* Update CLI text
2024-02-01 08:35:14 +00:00
Michael Sproul
0b6416c444 Re-disable block verification tests in debug (#5155)
* Re-disable block verification tests in debug
2024-02-01 08:35:10 +00:00
4rgon4ut
1d4ee5d150 Update gnosis bootnodes (#4986)
* chore(eth2_network_config): update gnosis bootnodes

* Merge remote-tracking branch 'origin/unstable' into update_gnosis_bootnodes
2024-02-01 08:35:06 +00:00
4rgon4ut
b7ba5a087c feat(config): add chiado bootnodes ENRs (#4727)
* feat(config): add chiado bootnodes ENRs

* Merge remote-tracking branch 'origin/unstable' into feat/add-chiado-bootnodes
2024-02-01 08:35:00 +00:00
João Oliveira
dada5750ee Properly log panics with slog (#5075)
* log panics with slog

* update set_hook location

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into slog-panics
2024-01-31 19:20:09 +00:00
chonghe
ab6a6e0741 Some updates in Lighthouse Book (#5000)
* Add jq in api-bn

* Update beaconstate size

* Add fields to web3signer API

* Link web3signer API

* Update /lighthouse/logs in table

* plural

* update slasher doc

* update FAQ

* Add link in validator section

* Add more info on state pruning

* Update database size

* Merge branch 'unstable' into book-update

* Revise Siren for vc to connect bn

* Merge branch 'book-update' of https://github.com/chong-he/lighthouse into book-update

* Corrections to siren faq

* Fix typos

* Update release date for 4.6.0

* Merge branch 'unstable' into book-update
2024-01-31 18:11:54 +00:00
Age Manning
f02189c86a Prevent QUIC logs when quic is disabled (#5071)
* Prevent logs and dialing quic multiaddrs when not supported

* Merge latest unstable
2024-01-31 18:11:49 +00:00
Age Manning
7582da7855 Test backfill (#5109)
* Test backfill

* Revert cargo.toml

* Update beacon_node/beacon_chain/src/builder.rs

Co-authored-by: João Oliveira <hello@jxs.pt>

* Remove redundant code
2024-01-31 18:11:45 +00:00
chonghe
a257a12110 Add a note about validator monitoring in Lighthouse book (#5143)
* Add validator monitor
2024-01-31 17:32:38 +00:00
Age Manning
4273004bd9 Add gossipsub as a Lighthouse behaviour (#5066)
* Move gossipsub as a lighthouse behaviour

* Update dependencies, pin to corrected libp2p version

* Merge latest unstable

* Fix test

* Remove unused dep

* Fix cargo.lock

* Re-order behaviour, pin upstream libp2p

* Pin discv5 to latest version
2024-01-31 17:32:31 +00:00
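
The structural change above, as an illustrative sketch (the field set is not Lighthouse's actual behaviour struct): gossipsub becomes one field of the derived `NetworkBehaviour`, so its events arrive through the same swarm event stream as every other protocol.

```rust
// Requires libp2p with the `macros`, `gossipsub`, and `identify` features.
use libp2p::{gossipsub, identify, swarm::NetworkBehaviour};

// The derive generates a combined event enum, so gossipsub events are
// handled alongside the other behaviours instead of via a separate wrapper.
#[derive(NetworkBehaviour)]
struct Behaviour {
    gossipsub: gossipsub::Behaviour,
    identify: identify::Behaviour,
}
```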
Akihito Nakano
b9c519d565 Move from igd to igd-next (#5068)
* Move from igd to igd-next

* Fix clippy error

warning: useless conversion to the same type: `std::net::IpAddr`
2024-01-31 15:48:01 +00:00
Divma
8353ec9785 Update deterministic subnets upgrade to allow prefix computation (#4959)
* add new spec field to spec

* add spec changes

* fix bug and update expected subnets with validation from pyspec

* fix values and make test prettier (?)

* fix style

---------

Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>
2024-01-31 09:30:26 +00:00
Michael Sproul
d2aef1b35c Fix bug in --builder-proposals (#5151)
* Fix bug in `--builder-proposals`

* Add tests

* More sensible test order

* Fix duplicate builder-boost test case

* Cargo fmt and rename
2024-01-31 05:25:55 +00:00
Lion - dapplion
b035638f9b Compute recent lightclient updates (#4969)
* Compute recent lightclient updates

* Review PR

* Merge remote-tracking branch 'upstream/unstable' into lc-prod-recent-updates

* Review PR

* consistent naming

* add metrics

* revert dropping reprocessing queue

* Update light client optimistic update re-processing logic. (#7)

* Add light client server simulator tests. Co-authored by @dapplion.

* Merge branch 'unstable' into fork/dapplion/lc-prod-recent-updates

* Fix lint

* Enable light client server in simulator test.

* Fix test for light client optimistic updates and finality updates.
2024-01-31 05:25:51 +00:00
Eitan Seri-Levi
1d87edb03d Assume Content-Type is json for endpoints that require json (#4575)
* added default content type filter

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into unstable

* create custom warp json filter that ignores content type header

* cargo fmt and linting

* updated test

* updated test

* merge unstable

* merge conflicts

* workspace=true

* use Bytes instead of Buf

* resolve merge conflict

* resolve merge conflicts

* add extra error message context

* merge conflicts

* lint
2024-01-31 01:11:54 +00:00
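
A sketch of such a filter (the helper name is made up): read the raw body bytes and deserialize them explicitly, so a missing or non-JSON `Content-Type` header no longer triggers the rejection that `warp::body::json()` would produce.

```rust
use bytes::Bytes;
use serde::de::DeserializeOwned;
use warp::Filter;

// Hypothetical helper: accept JSON bodies regardless of Content-Type.
fn json_any_content_type<T: DeserializeOwned + Send>(
) -> impl Filter<Extract = (T,), Error = warp::Rejection> + Clone {
    warp::body::bytes().and_then(|body: Bytes| async move {
        serde_json::from_slice::<T>(&body).map_err(|_| warp::reject::reject())
    })
}
```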
Michael Sproul
b8db3e4f08 Delete temporary mitigation for equivocation via RPC (#5037)
* Delete unnecessary RPC equivocation check
2024-01-31 01:11:49 +00:00
Michael Sproul
25bcd2a2fd Tree states v4.6.222-exp (#5147) 2024-01-31 10:13:18 +11:00
Michael Sproul
68a9a2ee20 Merge remote-tracking branch 'origin/unstable' into tree-states 2024-01-30 17:13:57 +11:00
Michael Sproul
8e68926162 fsync during backfill to prevent DB corruption (#5144) 2024-01-30 17:08:43 +11:00
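
The general durability pattern behind a fix like this, sketched with std (the real change sits inside Lighthouse's database layer): make the bytes durable before the import is recorded as complete.

```rust
use std::fs::File;
use std::io::Write;

// Flush user-space buffers, then fsync(2), so the data is on disk before the
// batch is marked done. Skipping the sync risks a torn write, and thus a
// corrupt DB, if the process dies at the wrong moment.
fn write_durably(file: &mut File, bytes: &[u8]) -> std::io::Result<()> {
    file.write_all(bytes)?;
    file.sync_all()
}
```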
Michael Sproul
7862c71bb5 Fix tree-states sub-epoch diffs (#5097) 2024-01-30 15:56:48 +11:00
Sylvain Bossut
0f345c7e0a Make lcli docker image portable (#5069)
* Set lcli docker build to use portable feature (#4370)

* Merge remote-tracking branch 'origin/unstable' into unstable
2024-01-30 03:03:48 +00:00
chonghe
020702f8eb Update to the docs (#5106)
* Perform transaction

* Remove lighthouse/beacon/states/{state_id}/ssz

* Update database api example

* Update checkpoint sync

* minor update

* single beacon node

* add info in mev

* Merge remote-tracking branch 'origin/unstable' into testnet

* add builder_boost_factor example

* change to 50 for consistency
2024-01-30 03:03:37 +00:00
Peter Straus
47f05ac60d fix: update outdated links to external resources (#5018)
* fix: update outdated links to external resources

* Update style guide link
2024-01-30 00:33:10 +00:00
Jack McPherson
abeb358f0b Remove custom SSZ beacon states route (#5065)
* Remove SSZ state root route

* Remove SSZ states route from client impl

* Patch tests

* Merge branch 'unstable' into 5063-delete-ssz-state-route

* Further remove dead code
2024-01-30 00:33:05 +00:00
Michael Sproul
6f3af67362 Fix off-by-one in backfill sig verification (#5120)
* Fix off-by-one in backfill sig verification

* Add self-referential PR link
2024-01-30 00:33:01 +00:00
realbigsean
a4fcf60bcc Increase attestation cache sizes (#5135)
* increase observed attesters and aggregates cache sizes

* fix comment
2024-01-30 00:32:57 +00:00
Sergey Kisel
64efdaf39a #5102 Fix load_state_for_block_production metric mapping (#5103)
* #5102 Fix load_state_for_block_production metric mapping
2024-01-30 00:32:52 +00:00
Michael Sproul
11461d8f51 Fix new CLI tests for tree-states (#5132) 2024-01-30 09:59:25 +11:00
Jimmy Chen
c7e5dd1098 Update Mergify commit message template (#5126) 2024-01-27 00:43:44 +00:00
Michael Sproul
262e5f22bc Merge remote-tracking branch 'origin/unstable' into tree-states 2024-01-25 15:10:19 +11:00
Mac L
0b6c898213 Replace backticks with single quotes (#5121) 2024-01-25 03:57:48 +00:00
Paul Hauner
1be5253610 Bump versions (#5123) 2024-01-25 10:02:00 +11:00
Eitan Seri-Levi
f9e36c94ed Expose additional builder booster related flags in the vc (#5086)
* expose builder booster flags in vc, enable options in validator endpoints, update tests

* resolve failing test

* fix issues related to CreateConfig and MoveConfig

* remove unneeded val, change the boost factor flag logic in the vc, add some additional documentation

* fix typos

* fix typos

* assume builder-proposals flag if one of the other two vc builder flags is present

* fmt

* typo

* typo

* Fix CLI help text

* Prioritise per validator builder boost configurations over CLI flags.

* Add http test for builder boost factor with process defaults.

* Fix issue with PATCH request

* Add prefer builder proposals

* Add more builder boost factor tests.

---------

Co-authored-by: Mac L <mjladson@pm.me>
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2024-01-25 09:09:47 +11:00
Eitan Seri-Levi
612eaf2d41 Prevent rolling file appender panic (#5117)
* rolling file appender panic removal and max log file count

* max log file
2024-01-24 10:12:48 +11:00
Age Manning
a36a12a8d2 Correct multiple dial bug (#5113)
* Dialing the same peer-id error fix

* Improve dialing logging

* Update beacon_node/lighthouse_network/src/peer_manager/mod.rs

Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>

---------

Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>
2024-01-24 10:10:05 +11:00
Pawan Dhananjay
b55b58b3c6 Fix indices filter in blobs_sidecar http endpoint (#5118)
* add get blobs unit test

* Use a multi_key_query for blob_sidecar indices

* Fix test

* Remove env_logger

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2024-01-24 09:35:24 +11:00
realbigsean
1cebf41452 Backfill blob storage fix (#5119)
* store blobs in the correct db in backfill

* add database migration

* add migration file

* remove log info suggesting deneb isn't scheduled

* add batching in blob migration
2024-01-24 09:35:02 +11:00
Age Manning
5851027bfd Correct discovery logic (#5111) 2024-01-23 16:58:24 +11:00
Age Manning
b7c9b27872 Gossipsub fanout correction (#5110)
* Correct fanout in gossipsub

* Upgrade discv5 to pin new libp2p version

* Update cargo.lock
2024-01-23 16:38:00 +11:00
Michael Sproul
a403138ed0 Reduce size of futures in HTTP API to prevent stack overflows (#5104)
* Box::pin a few big futures

* Arc the blocks early in publication

* Fix more tests
2024-01-23 15:32:07 +11:00
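
The technique above, as a minimal sketch: an `async fn`'s state machine lives wherever the future value lives, so boxing a large handler future moves it to the heap and leaves only a pointer-sized handle on the polling task's stack.

```rust
use std::future::Future;
use std::pin::Pin;

async fn deeply_nested_handler() { /* large generated state machine */ }

// Box::pin heap-allocates the future; the task that polls it no longer
// carries the whole state machine inline on its stack.
fn boxed_handler() -> Pin<Box<dyn Future<Output = ()> + Send>> {
    Box::pin(deeply_nested_handler())
}
```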
ethDreamer
02d1f36090 Small Readability Improvement in Networking Code (#5098) 2024-01-23 12:08:57 +08:00
ethDreamer
712f5aba73 Upgrade shlex to 1.3.0 (#5108) 2024-01-23 14:16:36 +11:00
ethDreamer
9a630e4dbb Stop Penalizing Peers in Parent SingleBlobLookup (#5096)
* Stop Penalizing Peers in Parent SingleBlobLookup

* Add test for parent lookup bug (#13)

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2024-01-22 05:35:06 +01:00
realbigsean
70b0528f36 set deneb fork on testnets (#5089)
* set deneb fork on testnets

* re-order fields in holesky
2024-01-22 15:14:26 +11:00
Lion - dapplion
585124fb2f Hold HeadTracker lock until persisting to disk (#5084)
* Fix head tracker drop order on un-ordered shutdown

* lint

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2024-01-22 15:14:11 +11:00
Michael Sproul
185646acb2 Fix PublishBlockRequest SSZ decoding (#5078)
* Fix PublishBlockRequest SSZ decoding

* Send header for block v1 ssz client

* Delete unused function
2024-01-19 12:56:30 +11:00
realbigsean
2a161cef73 Bump h2 (#5085)
* bump h2

* Empty commit for CI

* bump the other h2 version
2024-01-19 12:41:28 +11:00
Mac L
47b28c4935 Remove blobs_db when purge-db (#5081) 2024-01-18 15:59:08 -05:00
Mac L
a0b407c15d Add Deneb readiness logging (#5074) 2024-01-18 15:21:38 -05:00
Eitan Seri-Levi
f22e5b0f85 Critical dependency logging (#4988)
* add metrics layer

* add metrics

* simplify getting the target

* make clippy happy

* fix typos

* unify deps under workspace

* make import statement shorter, fix typos

* enable warn by default, mark flag as deprecated

* do not exit on error when initializing logging fails

* revert exit on error

* adjust bootnode logging

* add logging layer

* non blocking file writer

* non blocking file writer

* add tracing visitor

* use target as is by default

* make libp2p events register correctly

* adjust replicated cli help

* refactor tracing layer

* linting

* filesize

* log gossipsub, don't filter by log level

* turn on debug logs by default, remove deprecation warning

* truncate file, add timestamp, add unit test

* suppress output (#5)

* use tracing appender

* cleanup

* Add a task to remove old log files and upgrade to warn level

* Add the time feature for tokio

* Udeps and fmt

---------

Co-authored-by: Diva M <divma@protonmail.com>
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
Co-authored-by: Age Manning <Age@AgeManning.com>
2024-01-16 19:44:26 +11:00
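
The layering mechanism these commits build on, as a minimal `tracing-subscriber` sketch (the actual metrics and non-blocking file-writer layers are custom and not shown here):

```rust
use tracing_subscriber::{
    filter::LevelFilter, layer::SubscriberExt, util::SubscriberInitExt,
};

// Each concern (formatting, filtering, metrics, ...) is a composable
// `Layer` stacked onto a single registry.
fn init_dependency_logging() {
    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer().with_target(true))
        .with(LevelFilter::WARN) // warn-by-default for dependency logs
        .init();
}
```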
Divma
a68b701807 add tracing metrics layer for dependency logging (#4979)
* add metrics layer

* add metrics

* simplify getting the target

* make clippy happy

* fix typos

* unify deps under workspace

* make import statement shorter, fix typos

* enable warn by default, mark flag as deprecated

* do not exit on error when initializing logging fails

* revert exit on error

* adjust bootnode logging

* use target as is by default

* make libp2p events register correctly

* adjust replicated cli help

* turn on debug logs by default, remove deprecation warning

* suppress output (#5)

---------

Co-authored-by: Age Manning <Age@AgeManning.com>
2024-01-16 12:22:09 +11:00
Akihito Nakano
e10e4b7811 Fix zero port error (#5021)
* Fix zero port error and add tests

* Tweak indentation

* assign 0 to quic_port if tcp_port == 0

* Remove unnecessary deps
2024-01-16 12:19:49 +11:00
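
The bullet "assign 0 to quic_port if tcp_port == 0" translates to logic along these lines (the non-zero branch's offset is illustrative, not Lighthouse's actual default):

```rust
/// When the TCP port is 0 ("let the OS pick"), the QUIC port must also be 0
/// rather than an offset from it; otherwise we would advertise a fixed QUIC
/// port that was never actually bound.
fn quic_port_from_tcp(tcp_port: u16) -> u16 {
    if tcp_port == 0 {
        0
    } else {
        tcp_port + 1 // illustrative offset only
    }
}
```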
vuittont60
0613eb7a21 docs: fix typos (#5059)
* docs: fix typos

* docs: fix typo

* docs: update by 'make cli'
2024-01-16 12:18:54 +11:00
Enrico Del Fante
31b797ed83 Update teku's bootnodes (#5052) 2024-01-16 12:18:06 +11:00
Eitan Seri-Levi
72bcf47dd0 add content-type octet stream helper fn (#5062) 2024-01-15 16:23:39 +11:00
Michael Sproul
00af017582 Fix asset paths in release CI upload job (#5056) 2024-01-15 11:19:47 +11:00
Michael Sproul
6262be7219 Don't error on inactive indices in att. rewards 2024-01-14 09:41:42 +11:00
Michael Sproul
9cd924366c Tree states release v4.6.111-exp 2024-01-12 10:50:34 +11:00
Michael Sproul
664a7784f8 Add cache for parallel HTTP requests (#4879) 2024-01-11 17:13:43 +11:00
Michael Sproul
8db17dac1d Merge remote-tracking branch 'origin/unstable' into tree-states 2024-01-11 13:15:06 +11:00
Paul Hauner
2e8e160679 v4.6.0-rc.0 (#5042) 2024-01-11 10:46:29 +11:00
João Oliveira
38df87c3c5 Switch libp2p sigp gossipsub fork (#4999)
* switch libp2p source to sigp fork

* Shift the connection closing inside RPC behaviour

* Tag specific commits

* Add slow peer scoring

* Fix test

* Use default yamux config

* Pin discv5 to our libp2p fork and cargo update

* Upgrade libp2p to enable yamux gains

* Add a comment specifying the branch being used

* cleanup build output from within container
(prevents CI warnings related to fs permissions)

* Remove revision tags add branches for testing, will revert back once we're happy

* Update to latest rust-libp2p version

* Pin forks

* Update cargo.lock

* Re-pin to panic-free rust

---------

Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-authored-by: antondlr <anton@delaruelle.net>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2024-01-10 16:26:52 +11:00
Paul Hauner
be79f74c6d Fix IncorrectAttestationSource error in attn. simulator (#5048)
* Don't produce attestations when syncing

* Handle `IncorrectAttestationSource`
2024-01-10 10:47:00 +11:00
Michael Sproul
7e948eec9d Fix block v3 reward encodings (#5049)
* Fix block v3 reward encodings

* Use crates.io version
2024-01-10 10:44:07 +11:00
zhiqiangxu
4a65d28b3b remove needless Into (#5026) 2024-01-10 08:38:16 +11:00
Paul Hauner
12d3d237cd Disallow genesis sync outside blob pruning window (#5038)
* Disallow Syncing From Genesis By Default

* Fix CLI Tests

* Perform checks in the `ClientBuilder`

* Tidy, fix tests

* Return an error based on the Deneb fork

* Fix typos

* Fix failing test

* Add missing CLI flag

* Fix CLI flags

* Add suggestion from Sean

* Fix conflict with blob sidecars epochs

---------

Co-authored-by: Mark Mackey <mark@sigmaprime.io>
2024-01-09 14:17:01 -05:00
realbigsean
b47e3f252e Runtime rpc request sizes (#4841)
* add runtime variable list type

* add configs to ChainSpec

* get rid of max request blocks type

* fix tests and lints

* remove todos

* get rid of old const usage

* fix decode impl

* add new config to `Config` api struct

* add docs, fix compile

* move methods for per-fork-spec to chainspec

* get values off chain spec

* fix compile

* remove min by root size

* add tests for runtime var list

---------

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2024-01-09 10:23:47 +11:00
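
A simplified stand-in for the runtime-variable list idea above (the real type also handles SSZ encode/decode): the maximum length travels with the value, so it can be read from the `ChainSpec` at runtime instead of being a type-level constant.

```rust
/// Runtime-bounded list: the cap is a field, not a typenum parameter.
pub struct RuntimeVariableList<T> {
    items: Vec<T>,
    max_len: usize,
}

impl<T> RuntimeVariableList<T> {
    pub fn new(items: Vec<T>, max_len: usize) -> Result<Self, String> {
        if items.len() > max_len {
            return Err(format!("length {} exceeds max {}", items.len(), max_len));
        }
        Ok(Self { items, max_len })
    }

    pub fn push(&mut self, item: T) -> Result<(), String> {
        if self.items.len() >= self.max_len {
            return Err("list is full".into());
        }
        self.items.push(item);
        Ok(())
    }
}
```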
Eitan Seri-Levi
5c8c8da8b1 Use blocks v3 endpoint in the VC (#4813)
* block v3 endpoint init

* block v3 flow

* block v3 flow

* continue refactor

* the full flow...

* add api logic

* add api logic

* add new endpoint version

* added v3 endpoint

* some debugging

* merge v2 flow with v3

* debugging

* tests passing

* tests passing

* revert cargo lock

* initial v3 test

* blinded payload test case passing

* fix clippy issues

* cleanup

* cleanup

* remove dead code

* fixed logs

* add block value

* block value fix

* linting

* merge unstable

* refactor

* add consensus block value

* lint

* update header name to consensus block value

* prevent setting the participation flag

* clone get_epoch_participation result

* fmt

* clone epoch participation outside of the loop

* add block v3 to vc

* add v3 logic into vc

* add produce-block-v3

* refactor based on feedback

* update

* remove comments

* refactor

* header bugfix

* fmt

* resolve merge conflicts

* fix merge

* fix merge

* refactor

* refactor

* cleanup

* lint

* changes based on feedback

* revert

* remove block v3 fallback to v2

* publish_block_v3 should return irrecoverable errors

* comments

* comments

* fixed issues from merge

* merge conflicts

* Don't activate at fork; support builder_proposals

* Update CLI flags & book

* Remove duplicate `current_slot` parameter in `publish_block` function, and remove unnecessary clone.

* Revert changes on making block errors irrecoverable.

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2024-01-08 16:12:39 -05:00
realbigsean
f70c32ec70 create unified slashing cache (#5033)
* create unified slashing cache

* add observed slashable file

* fix broadcast validation tests

* revert block seen cache changes

* clean up slashable cache test

* check header signatures for RPC blobs

* don't throw error on invalid RPC signature
2024-01-08 10:30:57 -05:00
Paul Hauner
f62cfc6475 Add hint for solving CLI tests failure (#5041)
* Add hint for solving CLI test failure.

* Add `make cli-local`
2024-01-08 10:30:37 -05:00
realbigsean
db05d37675 update goerli config (#5036) 2024-01-08 11:24:12 +11:00
Eitan Seri-Levi
9c1505d082 Block v3 builder boost factor (#5035)
* builder boost factor

* default boost factor

* revert

* deprecate always_prefer_builder_payload, builder-profit-threshold, ignore_builder_override_suggestion_threshold and builder_comparison_factor flags

* revert

* set deprecated flags to no op, revert should_override_builder

* fix test, calc boosted relay value correctly, don't calculate if none

* Add deprecation warnings and restore CLI docs
2024-01-08 11:10:32 +11:00
Jimmy Chen
0c97762032 Track shared memory in metrics (#5023)
* Track shared memory so we can exclude it from resident memory.

* Bump psutil version to 3.3.0 to get shared memory metrics.
2024-01-05 14:48:11 -05:00
Akihito Nakano
01994c438e Bump zerocopy (#5024) 2023-12-22 09:57:39 -05:00
Michael Sproul
af11e78ae1 Clean up blockv3 metadata and client (#5015)
* Improve block production v3 client

* Delete wayward line

* Overhaul JSON endpoint as well

* Rename timeout param

* Update tests

* I broke everything

* Ah this is an insane fix

* Remove unnecessary optionals

* Doc fix
2023-12-22 09:39:17 -05:00
Age Manning
a7e5926a1f Information for network testing (#4961)
* Documentation for network testing

* Update doc
2023-12-22 09:02:58 +11:00
realbigsean
c55608be10 suppress error on duplicate blobs (#4995) 2023-12-18 12:15:12 -05:00
Jimmy Chen
dfc3b3714a Fix incorrect blob queue metrics (#5014)
* Fix blob queue metrics.

* Update blob metric description
2023-12-18 12:04:57 -05:00
Jimmy Chen
b0c374c1ca Update dependencies to get rid of the yanked deps (#4994)
* Initial attempt to upgrade hashbrown to latest to get rid of the crate warnings.

* Replace `.expect()` usage with use of `const`.

* Update ahash 0.7 as well

* Remove unsafe code.

* Update `lru` to 0.12 and fix release test errors.

* Set non-blocking socket

* Bump testcontainers to 0.15.

* Fix lint

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-12-15 18:31:59 +11:00
Michael Sproul
f1113540d8 Fix off-by-one bug in missed block detection (#5012)
* Regression test

* Fix the bug
2023-12-15 14:23:54 +11:00
Jimmy Chen
366f0d7ac2 Add flag to disable warning logs for duplicate gossip messages (#5009)
* Add flag to disable warning logs for duplicate gossip messages.

* Update Lighthouse book.
2023-12-15 09:26:51 +11:00
Pawan Dhananjay
ae4a296089 Convert a FullPayload to a BlindedPayload correctly (#5005) 2023-12-14 08:44:14 -05:00
Joel Rousseau
189430a45c Add attestation simulator (#4880)
* basic scaffold

* remove unnecessary ?

* check if committee cache is init

* typed ValidatorMonitor with ethspecs + store attestations within

* nits

* process unaggregated attestation

* typo

* extract in func

* add tests

* better naming

* better naming 2

* less verbose

* use same naming as validator monitor

* use attestation_simulator

* add metrics

* remove cache

* refacto flag_indices process

* add lag

* remove copying state

* clean and lint

* extract metrics

* nits

* compare prom metrics in tests

* implement lag

* nits

* nits

* add attestation simulator service

* fmt

* return beacon_chain as arc

* nit: debug

* sed s/unaggregated/unagg.//

* fmt

* fmt

* nit: remove unused comments

* increase max unaggregated attestation hashmap to 64

* nit: sed s/clone/copied//

* improve perf: remove unnecessary hashmap copy

* fix flag indices comp

* start service in client builder

* remove //

* cargo fmt

* lint

* cloned keys

* fmt

* use Slot value instead of pointer

* Update beacon_node/beacon_chain/src/attestation_simulator.rs

Co-authored-by: Paul Hauner <paul@paulhauner.com>

---------

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-12-14 11:44:56 +11:00
Michael Sproul
c8dc082ba8 Merge remote-tracking branch 'origin/unstable' into tree-states 2023-12-14 09:59:43 +11:00
Pawan Dhananjay
a3a370302a Use the block header to compute the canonical_root (#5003) 2023-12-14 09:24:36 +11:00
Gua00va
a3fb27c99b add fork_choice_read_lock as parameter (#4978) 2023-12-13 16:06:00 +11:00
Jimmy Chen
7f64738085 Update Rust version in lcli Dockerfile. (#5002) 2023-12-12 10:20:49 -05:00
Jimmy Chen
153aaa1679 Remove bors.toml 🫡 (#5001) 2023-12-12 16:54:22 +11:00
Jimmy Chen
4cf4819497 Add mergify merge queue configuration file (#4917)
* Add mergify.yml.

* Abstract out CI jobs to a "success" job so that a change in the CI jobs don't require modification to mergify configuration on stable branch.

* Add new jobs to the `needs` list of  `test-suite-success`.

* Set `batch_max_wait_time` to 60 s.
2023-12-12 16:04:29 +11:00
Age Manning
69f1b7afec Disable flood publishing (#4383)
* Disable flood publish

* Change default configuration
2023-12-12 09:45:54 +11:00
ethDreamer
78ffa378b4 Batch Verify RPC Blobs (#4934) 2023-12-08 16:48:03 -05:00
realbigsean
46184e5ce4 Remove delayed lookups (#4992)
* initial rip out

* fix unused imports

* delete tests and fix lint

* fix peers scoring for blobs
2023-12-08 15:42:55 -05:00
Michael Sproul
b882519d2f Implement POST validators/validator_balances APIs (#4872)
* Add POST for fetching validators from state

* Implement POST for balances

* Tests
2023-12-08 12:09:36 +11:00
Michael Sproul
e02adbf7bf Update docs for v4.6.0 (#4982)
* Update DB migration docs

* Document VC broadcast modes

* Update downgrade example (#6)

* update downgrade example

* Add period

* Add v4.1.0

---------

Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
2023-12-08 10:45:05 +11:00
Divma
6c0c41c7ac upgrade libp2p to v0.53.* (#4935)
* update libp2p and address compiler errors

* remove bandwidth logging from transport

* use libp2p registry

* make clippy happy

* use rust 1.73

* correct rpc keep alive

* remove comments and obsolete code

* remove libp2p prefix

* make clippy happy

* use quic under facade

* remove fast msg id

* bubble up close statements

* fix wrong comment
2023-12-07 20:39:59 +11:00
Jimmy Chen
67e0569d9b Fix corrupted DB on networks where the first slot is skipped (Holesky) (#4985)
* Fix zero block roots on skip slots.

* Remove temporary comment, println code and unused imports.

* Remove `println!` in test.
2023-12-07 15:12:06 +11:00
Eitan Seri-Levi
8ba39cbf2c Implement graffiti management API (#4951)
* implement get graffiti

* add set graffiti

* add set graffiti

* delete graffiti

* set graffiti

* set graffiti

* fmt

* added tests

* add graffiti file check

* update

* fixed delete req

* remove unused code

* changes based on feedback

* changes based on feedback

* invalid auth test plus lint

* fmt

* remove unneeded async
2023-12-07 12:02:46 +11:00
chonghe
d9d84242a7 CLI in Lighthouse Book (#4571)
* Add cli.sh file

* update bash script

* update Makefile

* update

* modified test-suite

* fix path

* Fix cli.sh permissions

* update cmd

* cli_manual

* Revise to update

* Update directory in Github

* Correct cli.txt directory

* test old cli_manual

* change exit 1

* Update cli and makefile

* Move cli.sh

* remove files

* fix permission

* Indentation and revision

* Fixed permission

* Create new cli folder

* remove dummy

* put a dummy file

* Revise cli.sh

* comment

* function

* remove vm.md

* test make cli

* test

* testing

* testing

* update

* update

* test

* test

* add vm

* change back non-debug mode

* add exist and update for future debug

* revise

* remove troubleshooting part

* update

* add summary.md

* test

* test

* Update Makefile

Co-authored-by: Mac L <mjladson@pm.me>

* Update Makefile

Co-authored-by: Mac L <mjladson@pm.me>

* Remove help-cli.md and rearrange

* Remove help-cli.md

* Update scripts/cli.sh

Co-authored-by: Mac L <mjladson@pm.me>

* Update scripts/cli.sh

Co-authored-by: Mac L <mjladson@pm.me>

* remove maxperf

* move then to same line as if

* Fix indent and echo file not found

* To be explicit in replacing the old file

* Add logging when there are changes

* Add local variables

* spacing

* remove cargo fmt

* update .md files

* Edit exit message to avoid confusion

* Remove am and add vm subcommands

* Add cargo-fmt

* Revise test-suite.yml

* Update SUMMARY.md

* Add set -e

* Add vm

* Fix

* Add vm

* set -e

* Remove return 1 and add :

* Small revision

* Fix typo

* Update scripts/cli.sh

Co-authored-by: Mac L <mjladson@pm.me>

* Indent

* Update scripts/cli.sh

Co-authored-by: Mac L <mjladson@pm.me>

* Remove .exe in Windows

* Fix period with \.

* test

* check diff

* linux commit

* Revert "Merge branch 'book-cli' of https://github.com/chong-he/lighthouse into book-cli"

This reverts commit 314005d3f8, reversing
changes made to a007f61378.

* update

* update

* Remove echo diff

* Dockerize

* Remove `-ti`

* take ownership inside container

* fix mistake

* proper escaping, restore ownership afterwards

* try without taking ownership of repo

* update

* add diff for troubleshooting

* binary

* update using linux

* binary

* make file

* remove diff

* add diff

* update progressive balance help text

* Remove diff

---------

Co-authored-by: Mac L <mjladson@pm.me>
Co-authored-by: antondlr <anton@delaruelle.net>
2023-12-07 10:39:22 +11:00
ethDreamer
52117f43ba Small Improvements (#4980)
* initial changes

* use arc<blobsidecar> in vector

* Utilize new pattern for KzgVerifiedBlob

* fmt

* Update beacon_node/beacon_chain/src/blob_verification.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* forgot to save..

* lint

* fmt.. again

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-12-06 15:51:40 -05:00
Pawan Dhananjay
31044402ee Sidecar inclusion proof (#4900)
* Refactor BlobSidecar to new type

* Fix some compile errors

* Gossip verification compiles

* Fix http api types take 1

* Fix another round of compile errors

* Beacon node crate compiles

* EF tests compile

* Remove all blob signing from VC

* fmt

* Tests compile

* Fix some tests

* Fix more http tests

* get compiling

* Fix gossip conditions and tests

* Add basic proof generation and verification

* remove unnecessary ssz decode

* add back build_sidecar

* remove default at fork for blobs

* fix beacon chain tests

* get release tests compiling

* fix lints

* fix existing spec tests

* add new ef tests

* fix gossip duplicate rule

* lints

* add back sidecar signature check in gossip

* add finalized descendant check to blob sidecar gossip

* fix error conversion

* fix release tests

* sidecar inclusion self review cleanup

* Add proof verification and computation metrics

* Remove accidentally committed file

* Unify some block and blob errors; add slashing conditions for sidecars

* Address review comment

* Clean up re-org tests (#4957)

* Address more review comments

* Add Comments & Eliminate Unnecessary Clones

* update names

* Update beacon_node/beacon_chain/src/metrics.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* Update beacon_node/network/src/network_beacon_processor/tests.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* pr feedback

* fix test compile

* Sidecar Inclusion proof small refactor and updates (#4967)

* Update some comments, variables and small cosmetic fixes.

* Couple blobs and proofs into a tuple in `PayloadAndBlobs` for simplicity and safety.

* Update function comment.

* Update testing/ef_tests/src/cases/merkle_proof_validity.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* Rename the block and blob wrapper types used in the beacon API interfaces.

* make sure gossip invalid blobs are passed to the slasher (#4970)

* Add blob headers to slasher before adding to DA checker

* Replace Vec with HashSet in BlockQueue

* fmt

* Rename gindex -> index

* Simplify gossip condition

---------

Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Mark Mackey <mark@sigmaprime.io>
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-12-05 11:19:59 -05:00
Manu NALEPA
ec8edfb89a EIP-3076 tests - complete database (#4958)
* EIP-3076 interchange tests: Add `should_succeed_complete` boolean.

This newly added `should_succeed_complete` boolean means the test should succeed using the
complete anti-slashing DB, while the untouched `should_succeed` boolean is
still implicitly reserved for the minimal anti-slashing DB.

Note:
This commit only adds the new `should_succeed_complete` boolean and copies the value from
`should_succeed`, without implying anything about the value this boolean should
take.
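
For illustration, a minimal sketch of the extended test-vector shape, assuming `serde`/`serde_json` and a hypothetical `TestCase` struct (not the actual interchange-test code):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct TestCase {
    /// Expected result when running against the minimal anti-slashing DB.
    should_succeed: bool,
    /// Expected result when running against the complete anti-slashing DB.
    should_succeed_complete: bool,
}

fn main() {
    let case: TestCase = serde_json::from_str(
        r#"{"should_succeed": true, "should_succeed_complete": false}"#,
    )
    .unwrap();
    // A case may pass under the minimal DB while failing under the complete one.
    assert!(case.should_succeed && !case.should_succeed_complete);
}
```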

* `TestEIP3076SpecTests`: Modify two tests

Those two tests are modified so that they comply with both the minimal AND the
complete anti-slashing DB.

* Disallow false positives and differentiate more minimal vs complete cases

* Fix my own typos

* Update to v5.3.0 tag

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-12-05 10:49:50 +11:00
Jimmy Chen
4250385ae1 Fix stuck linkcheck on CI (#4977)
* Fix linkcheck CI job timeouts. Use linkcheck 3.0.0 without Docker.

* Add sleep to wait for the mdbook server to start serving
2023-12-05 10:32:10 +11:00
Michael Sproul
4741bf1e70 Remove stray println 2023-12-05 09:23:24 +11:00
Michael Sproul
cefe9fdf70 Restore crash safety for database pruning (#4975)
* Add some DB sanity checks

* Restore crash safety for database pruning
2023-12-04 17:15:25 +11:00
Michael Sproul
c8b2324880 Enable progressive balances fast mode by default (#4971)
* Enable progressive balances fast mode by default

* Fix default in chain_config
2023-12-04 14:42:49 +11:00
Michael Sproul
66d30bc0bc Merge remote-tracking branch 'origin/unstable' into tree-states 2023-12-01 12:02:21 +11:00
Michael Sproul
e880d9db50 Fix cache initialization in block rewards API (#4960) 2023-12-01 11:06:27 +11:00
Gua00va
44aaf13ff0 Standard Liveness Endpoint (#4853)
* Changes to use required Endpoint

* Format

* fixed doppelganger service

* minor fix

* efficiency changes

* fixed tests

* remove commented line

---------

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-11-30 17:41:22 +11:00
Michael Sproul
547ed1de63 Clone state ahead of block production (#4925)
* Clone state ahead of block production

* Add pruning and fix logging

* Don't hold 2 states in mem
2023-11-30 13:49:35 +11:00
ethDreamer
43d98153d6 Refactor & Fix Bugs in Payload Selection Logic (#4950)
* Refactor & Fix Bugs in Payload Selection Logic

* Fix lint

* Update beacon_node/execution_layer/src/lib.rs

* Finish renaming function
2023-11-29 15:20:12 +11:00
Paul Hauner
86163d94f2 Add lcli mock-el (#4587)
* Track time to early attester cache

* Add debug log for early attester cache

* Add mock-el command to lcli

* Revert beacon chain changes

* Tidy, fix compilation errors

* Add required flag

* Default to SYNCING response

* Hide flag
2023-11-29 10:04:29 +11:00
GeemoCandama
8a599ec7dc API for LightClientBootstrap, LightClientFinalityUpdate, LightClientOptimisticUpdate and light client events (#3954)
* rebase and add comment

* conditional test

* test

* optimistic should be working now

* finality should be working now

* try again

* try again

* clippy fix

* add lc bootstrap beacon api

* add lc optimistic/finality update to events

* fmt

* That error isn't occurring on my computer but I think this should fix it

* Add missing test file

* Update light client types to comply with Altair light client spec.

* Fix test compilation

* Support deserializing light client structures for the Bellatrix fork

* Move `get_light_client_bootstrap` logic to `BeaconChain`. `LightClientBootstrap` API to return `ForkVersionedResponse`.

* Misc fixes.
- log cleanup
- move http_api config mutation to `config::get_config` for consistency
- fix light client API responses

* Add light client bootstrap API test and fix existing ones.

* Fix test for `light-client-server` http api config.

* Appease clippy

* Efficiency improvement when retrieving beacon state.

---------

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-11-28 17:14:29 +11:00
Michael Sproul
44c1817c2b Remove block-delay-ms (#4956) 2023-11-28 16:00:28 +11:00
Jimmy Chen
c88cb371a0 Remove MAX_BLOBS_PER_BLOCK from Holesky config. (#4955) 2023-11-27 23:07:40 +11:00
Alexander Uvizhev
b4556a3d62 Broadcast various requests as per #4684 (#4920)
* multiple broadcast flags

* rewrite with single --broadcast option

* satisfy cargo fmt

* shorten sync-committee-messages

* fix a doc comment and a test

* use strum

* Add broadcast test to simulator

* bring --disable-run-on-all flag back with deprecation notice
2023-11-27 15:39:37 +11:00
realbigsean
e856a904ef Merge pull request #4939 from eserilev/block-v3-general-tests
Add more block v3 tests
2023-11-21 14:52:52 -05:00
Eitan Seri-Levi
98f159cc18 fmt 2023-11-20 21:37:58 -08:00
Eitan Seri-Levi
24a0a7ffd0 remove cache check 2023-11-20 21:37:04 -08:00
Eitan Seri-Levi
228180bb35 add some more block v3 tests 2023-11-20 20:46:13 -08:00
Jimmy Chen
6b63d18420 Fix Rust beta compiler warnings (rustc 1.75.0-beta.1 (782883f60 2023-11-12)) (#4932) 2023-11-18 03:55:11 +11:00
Mac L
68e076d60a Remove legacy database migrations (#4919)
* Remove legacy db migrations

* Fix tests

* Remove migrations to/from v15

* Remove v16 migrations
2023-11-16 22:09:05 +11:00
Michael Sproul
d04e361129 Fix missed block logs (#4922) 2023-11-16 22:07:58 +11:00
Zackary Scott
e181741d38 Fix for issue 4860 - Added in process_justification_and_finalization (#4877)
* Added in process_justification_and_finalization

Added process_justification_and_finalization to compute_attestation_rewards_altair to take justified attestations into account when coming out of an inactivity leak. Also added a test to check for this edge case.

* Added in justification and finalization for compute_attestation_rewards_base

* Added in test for altair rewards without inactivity leak
2023-11-16 22:07:48 +11:00
Michael Sproul
9cdc4b989a Merge remote-tracking branch 'origin/unstable' into tree-states 2023-11-12 22:19:07 +03:00
Michael Sproul
051c3e842f Always use a separate database for blobs (#4892)
* Always use a separate blobs DB

* Add + update tests
2023-11-09 16:51:36 +11:00
chonghe
1b8c0ed987 Update local testnet doc and parameters (#4749)
* update

* revise link

* update parameters

* update doc

* Update doc

* add quic port

* remove debug log

* Fix el_bootnode not being killed

* Fix time

* Fix doc in manual creation of testnet

* Update file

* update api doc

* Revert "update api doc"

This reverts commit ed695743de.

* add git clone

* Fix path

* Fix path

* Update scripts/local_testnet/setup_time.sh

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* Update scripts/local_testnet/README.md

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* Fix SLOT_PER_EPOCH that changes with mainnet or minimal

* Embedded setup_time.sh in start_local_testnet.sh

* fix slot per epoch constant

* Add comment

* Add CANCUN_TIME

* Fix CANCUN_TIME constant 32 slots

* Correct typo

* chmod +x ./setup_time.sh

---------

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-11-09 15:06:10 +11:00
chonghe
7fd9389a8c Delete deprecated cli flags (#4906)
* Delete BN spec flag and VC beacon-node flag

* Remove warn

* slog

* add warn

* delete eth1-endpoint

* delete server from vc cli.rs

* delete server flag in config.rs

* delete delete-lockfiles in vc

* delete allow-unsynced flag in VC

* delete strict-fee-recipient in VC and warn log

* delete merge flag in bn (hidden)

* delete count-unrealized and count-unrealized-full in bn (hidden)

* delete http-disable-legacy-spec in bn (hidden)

* delete eth1-endpoint in lcli

* delete warn message lcli

* delete eth1-endpoints

* delete minify in slashing protection

* delete minify related

* Remove mut

* add back warn! log

* Indentation

* Delete count-unrealized

* Delete eth1-endpoints

* Delete eth1-endpoint test

* delete eth1-endpoints test

* delete allow-unsynced test

* Add back lcli eth1-endpoint

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-11-09 15:05:55 +11:00
ethDreamer
7818100777 Verify KZG in Bulk During Block Sync (#4903) 2023-11-09 15:05:44 +11:00
Eitan Seri-Levi
a380f6ef1f Add block-v3 SSZ tests (#4902)
* create block-v3 ssz tests

* fmt
2023-11-09 15:05:35 +11:00
Joel Rousseau
ac8811afac Add missed blocks to monitored validators (#4731)
* add missed_block metric

* init missed_block in constructor

* declare beaconproposercache in ValidatorMonitor

* refactor proposer_shuffling_decision_root to use epoch instead of current.epoch

* imple new proposer_shuffling_decision_root in callers

* push missed_blocks

* prune missed_blocks

* only add to hashmap if it's a monitored validator

* remove current_epoch dup + typos

* extract in func

* add prom metrics

* checkpoint is not only epoch but slot as well

* add safeguard if we start a new chain at slot 0

* clean

* remove unnecessary negative value for a slot

* typo in comment

* remove unused current_epoch

* share beacon_proposer_cache between validator_monitor and beacon_chain

* pass Hash256::zero()

* debug objects

* fix loop: lag is at the head

* sed s/get_slot/get_epoch

* fewer calls to cache.get_epoch

* fix typos

* remove cache first call

* export TYPICAL_SLOTS_PER_EPOCH and use it in validator_monitor

* switch to gauge & loop over missed_blocks hashset

* fix subnet_service tests

* remove unused var

* clean + fix nits

* add beacon_proposer_cache + validator_monitor in builder

* fix store_tests

* fix builder tests

* add tests

* add validator monitor set of tests

* clean tests

* nits

* optimise imports

* lint

* typo

* added self.aggregatable

* duplicate proposer_shuffling_decision_root

* remove duplication in passing beacon_proposer_cache

* remove duplication in passing beacon_proposer_cache

* using indices

* fmt

* implement missed blocks total

* nits

* avoid heap allocation

* remove recursion limit

* fix lint

* Fix validator monitor builder pattern

Unify validator monitor config struct

* renaming metrics

* renaming metrics in validator monitor

* add log if there's a missing validator index

* consistent log

* fix loop

* better loop

* move gauge to counter

* fmt

* add error message

* lint

* fix prom metrics

* set gauge to 0 when non-finalized epochs

* better wording

* remove hash256::zero in favour of block_root

* fix gauge total label

* fix last missed block validator

* Add `MissedBlock` struct

* Fix comment

* Refactor non-finalized block loop

* Fix off-by-one

* Avoid string allocation

* Fix compile error

* Remove non-finalized blocks metric

* fix func closure

* remove unused variable

* remove unused DEFAULT_INDIVIDUAL_TRACKING_THRESHOLD

* remove unused DEFAULT_INDIVIDUAL_TRACKING_THRESHOLD in builder

* add validator index depending on the fork name

* typos

---------

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-11-09 15:05:14 +11:00
realbigsean
bcca88a150 more logging improvements (#4885)
## Issue Addressed

- downgrades `Missing components over rpc` to debug because this isn't unusual and just results in a re-try
- removes the result from `Block component processed for lookup` because this prints the full block on an unknown parent error

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-11-03 13:31:27 +00:00
Justin Traglia
f2aabe915b Fix comment for blob sidecar observation pruning (#4893)
## Issue Addressed

The comment implies that observations for the given slot would be retained but they are not.

## Proposed Changes

I'm pretty sure the functionality is correct and the comment is slightly incorrect, so just update the comment. The comment needs to say something along the lines of "less than or equal to" rather than just "less than."

## Additional Info

It doesn't make sense to keep finalized observations since those are no longer accepted.
2023-11-03 12:38:13 +00:00
Jimmy Chen
36d8849813 Add command for pruning states (#4835)
## Issue Addressed

Closes #4481. 

(Continuation of #4648)

## Proposed Changes

- [x] Add `lighthouse db prune-states`
- [x] Make it work
- [x] Ensure block roots are handled correctly (to be addressed in 4735)
- [x] Check perf on mainnet/Goerli/Gnosis (takes a few seconds max)
- [x] Run block root healing logic (#4875 ) at the beginning
- [x] Add some tests
- [x] Update docs
- [x] Add `--freezer` flag and other improvements to `lighthouse db inspect`

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-11-03 00:12:19 +00:00
Eitan Seri-Levi
07f53b18fc Block v3 endpoint (#4629)
## Issue Addressed

#4582

## Proposed Changes

Add a new v3 block fetching flow that can decide to return a Full OR Blinded payload

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-11-03 00:12:18 +00:00
realbigsean
42da392edc fix deneb sync bug (#4869)
## Issue Addressed

I observed our forward sync on devnet 9 would stall when we would hit this log:
```
250425:Oct 19 00:54:17.133 WARN Blocks and blobs request for range received invalid data, error: KzgCommitmentMismatch, batch_id: 4338, peer_id: 16Uiu2HAmHbmkEQFDrJfNuy1aYyAfHkNUwSD9FN7EVAqGJ8YTF9Mh, service: sync, module: network::sync::manager:1036
```

## Proposed Changes

`range_sync_block_and_blob_response` [here](1cb02a13a5/beacon_node/network/src/sync/manager.rs (L1013)) removes the request from the sync manager. Later, however, if there's an error, `inject_error` [here](1cb02a13a5/beacon_node/network/src/sync/manager.rs (L1055)) expects the request to exist so we can handle retry logic. So this PR just re-inserts the request (without any accumulated blobs or blocks) when we hit an error here.

The issue is unique to block+blob sync because the error here is only possible from mismatches between blocks + blobs after we've downloaded both; there's no equivalent error in block sync.
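
For illustration, a minimal sketch of the re-insertion idea using hypothetical types (the real sync manager is far more involved):

```rust
use std::collections::HashMap;

type Id = u64;

#[derive(Default)]
struct BlocksAndBlobsRequest {
    // Accumulated blocks and blobs elided for brevity.
}

struct SyncManager {
    range_requests: HashMap<Id, BlocksAndBlobsRequest>,
}

impl SyncManager {
    /// On a coupling error (e.g. KzgCommitmentMismatch), re-insert a fresh
    /// request (without any accumulated blobs or blocks) so that the later
    /// error-handling path still finds an entry and can drive retry logic.
    fn on_coupling_error(&mut self, id: Id) {
        self.range_requests.insert(id, BlocksAndBlobsRequest::default());
    }
}

fn main() {
    let mut manager = SyncManager { range_requests: HashMap::new() };
    manager.on_coupling_error(4338);
    assert!(manager.range_requests.contains_key(&4338));
}
```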



Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-10-31 21:04:18 +00:00
Eitan Seri-Levi
4ce01ddd11 Activate clippy::manual_let_else lint (#4889)
## Issue Addressed

#4888

## Proposed Changes

Enabled `clippy::manual_let_else` lint and resolved the warning messages.
2023-10-31 10:31:02 +00:00
realbigsean
a9f9dc241d restore cargo vendor in test suite (#4886)
## Issue Addressed

resolves https://github.com/sigp/lighthouse/issues/4440

## Proposed Changes

restore our `cargo vendor` test in CI

changes to `c-kzg` here mean we no longer have to compile it twice and get duplicate source errors: https://github.com/sigp/lighthouse/pull/4862

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-10-27 10:21:47 +00:00
Michael Sproul
c574f8136e Fix block backfill with genesis skip slots (#4820)
## Issue Addressed

Closes #4817.

## Proposed Changes

- Fill in the linear block roots array between 0 and the slot of the first block (e.g. slots 0 and 1 on Holesky).
- Backport the `--freezer`, `--skip` and `--limit` options for `lighthouse db inspect` from tree-states. This allows us to easily view the database corruption of 4817 using `lighthouse db inspect --network holesky --freezer --column bbr --output values --limit 2`.
- Backport the `iter_column_from` change and `MemoryStore` overhaul from tree-states. These are required to enable `lighthouse db inspect`.
- Rework `freezer_upper_limit` to allow state lookups for slots below the `state_lower_limit`. Currently state lookups will fail until state reconstruction completes entirely.

There is a new regression test for the main bug, but no test for the `freezer_upper_limit` fix because we don't currently support running state reconstruction partially (see #3026). This will be fixed once we merge `tree-states`! In lieu of an automated test, I've tested manually on a Holesky node while it was reconstructing.

## Additional Info

Users who backfilled Holesky to slot 0 (e.g. using `--reconstruct-historic-states`) need to either:

- Re-sync from genesis.
- Re-sync using checkpoint sync and the changes from this PR.

Due to the recency of the Holesky genesis, writing a custom pass to fix up broken databases (which would require its own thorough testing) was deemed unnecessary. This is the primary reason for this PR being marked `backwards-incompat`.

This will create few conflicts with Deneb, which I've already resolved on `tree-states-deneb` and will be happy to backport to Deneb once this PR is merged to unstable.
2023-10-27 05:08:49 +00:00
Michael Sproul
d36ebba1ea Handle out-of-order forks in epoch cache (#4881) 2023-10-26 18:09:28 +11:00
zhiqiangxu
b82d1a993c fix docs about --builder (#4754)
Align the document with the [implementation](2841f60686/beacon_node/execution_layer/src/lib.rs (L740)) regarding the `--builder` flag.


The `tokio::join!` macro takes a list of async expressions and evaluates them concurrently on the same task.
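
For reference, a minimal sketch of that behaviour, assuming the `tokio` crate:

```rust
#[tokio::main]
async fn main() {
    // Both futures are polled concurrently on the same task; `join!` only
    // returns once both of them have completed.
    let (sum, msg) = tokio::join!(
        async { 1 + 1 },
        async { "done" },
    );
    assert_eq!(sum, 2);
    assert_eq!(msg, "done");
}
```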
2023-10-26 05:23:50 +00:00
realbigsean
ba891e1fed deneb related logging improvements (#4859)
1. Add commitments to logs and update the `Display` implementation of `KzgCommitment` to become truncated similarly to block root.

I've been finding it difficult to debug scenarios involving multiple blobs for the same `(index, block_root)`. Logging the commitment will help with this: we can match it to what exists in the block.

Example output:

```
Oct 20 21:13:36.700 DEBG Successfully verified gossip blob       commitment: 0xa3c1…1cd8, index: 0, root: 0xf31e…f9de, slot: 154568
Oct 20 21:13:36.785 DEBG Successfully verified gossip block      commitments: [0xa3c1…1cd8, 0x8655…02ff, 0x8d6a…955a, 0x84ac…3a1b, 0x9752…629b, 0xb9fc…20fb], root: 0xf31eeb732702e429e89057b15e1c0c631e8452e09e03cb1924353f536ef4f9de, slot: 154568, graffiti: teku/besu, service: beacon
```

Example output in a block with no blobs (this will show up pre-deneb):
```
426734:Oct 20 21:15:24.113 DEBG Successfully verified gossip block, commitments: [], root: 0x619db1360ba0e8d44ae2a0f2450ebca47e167191feecffcfac0e8d7b6c39623c, slot: 154577, graffiti: teku/nethermind, service: beacon, module: beacon_chain::beacon_chain:2765
```
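
As an aside, a minimal sketch of how truncated hex `Display` output like the above can be produced, using a hypothetical `Commitment` type (not the actual `KzgCommitment` implementation):

```rust
use std::fmt;

struct Commitment([u8; 48]);

impl fmt::Display for Commitment {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let hex: String = self.0.iter().map(|b| format!("{b:02x}")).collect();
        // Keep the first and last two bytes, eliding the middle.
        write!(f, "0x{}…{}", &hex[..4], &hex[hex.len() - 4..])
    }
}

fn main() {
    println!("{}", Commitment([0xa3; 48])); // prints 0xa3a3…a3a3
}
```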

2. Remove `strum::IntoStaticStr` from `AvailabilityCheckError`. This is because `IntoStaticStr` ends up dropping information inside the enum. So kzg commitments in this error are dropped, making it more difficult to debug
```
AvailabilityCheckError::KzgCommitmentMismatch {
        blob_commitment: KzgCommitment,
        block_commitment: KzgCommitment,
    },
```
which is output as just `AvailabilityCheckError`

3. Some additional misc sync logs I found useful in debugging https://github.com/sigp/lighthouse/pull/4869

4. This downgrades "Block returned for single block lookup not present" to debug because I don't think we can fix the scenario that causes this unless we can cancel in-flight rpc requests.

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-10-25 15:30:17 +00:00
Jimmy Chen
d4f26ee123 Add block roots heal logic in v18 schema migration. (#4875)
## Issue Addressed

Fixes #4697. 

This also unblocks the state pruning PR (#4835), because self-healing breaks if state pruning is applied to a database with missing block roots.

## Proposed Changes

- Fill in the missing block roots between last restore point slot and split slot when upgrading to latest database version.
2023-10-25 03:42:24 +00:00
antondlr
a228e61773 don't make lcli on self-hosted runners (#4874)
## Issue Addressed

Our self-hosted runners now have a modern (Deneb-ready) version of `lcli` preinstalled so we no longer need to compile it.
2023-10-25 03:42:23 +00:00
Pawan Dhananjay
6315a81260 Upgrade to v1.4.0-beta.3 (#4862)
## Issue Addressed

Makes lighthouse compliant with new kzg changes in https://github.com/ethereum/consensus-specs/releases/tag/v1.4.0-beta.3

## Proposed Changes

1. Adds new official trusted setup
2. Refactors kzg to match upstream changes in https://github.com/ethereum/c-kzg-4844/pull/377
3. Updates pre-generated `BlobBundle` to work with official trusted setup. ~~Using json here instead of ssz to account for different value of `MaxBlobCommitmentsPerBlock` in minimal and mainnet. By using json, we can just use one pre generated bundle for both minimal and mainnet. Size of 2 separate ssz bundles is approximately equal to one json bundle cc @jimmygchen~~ 
Dunno what I was doing, ssz works without any issues  
4. Stores trusted_setup as just bytes in eth2_network_config so that we don't have a kzg dependency in that lib or in lcli.


Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-10-21 13:49:27 +00:00
Pawan Dhananjay
074c4951fc Reduce calls to network channel (#4863)
## Issue Addressed

N/A

## Proposed Changes

Sends blocks and blobs from http_api to the network channel for publishing in a single network channel send. This is to avoid overhead of multiple calls.
Also adds a metric for rpc blob retrieval duration.
2023-10-20 19:42:47 +00:00
Jimmy Chen
e8fba8d3a7 Enable BLS portable feature on all CI tests (#4868)
## Issue Addressed

Addresses the recent CI failures caused by caching `blst` for the wrong CPU type. 

## Proposed Changes

- Use `FEATURES: jemalloc,portable` when building Lighthouse & `lcli` in tests
- Add a new `TEST_FEATURES` and set to `portable` for all CI test jobs.
- Updated Makefiles to read the `TEST_FEATURES` environment variable, and default to none.
2023-10-20 07:30:27 +00:00
Jimmy Chen
8880675eda Add make lint to development environment section in Book (#4866)
## Issue Addressed

We run `clippy` as part of our CI, so it would help new devs if we add the `make lint` command to the dev setup section in the Lighthouse book.
2023-10-20 06:23:29 +00:00
Zackary Scott
b11988223f #4512 inactivity calculation for Altair (#4807)
## Issue Addressed
#4512 

## Proposed Changes
Add inactivity calculation for Altair


Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-10-20 06:23:28 +00:00
Michael Sproul
1aca48400c Tree states release v4.5.444-exp
- Update xdelta3 to remove dodgy build steps
- Fix asset paths for draft release
2023-10-20 14:13:58 +11:00
GoodDaisy
90f78d141f fix typos (#4838) 2023-10-19 22:05:15 +00:00
João Oliveira
c6583bb5fa update libp2p (#4864)
## Issue Addressed

updates libp2p to the latest version and uses the new `SwarmBuilder`. Supersedes https://github.com/sigp/lighthouse/pull/4695/
CC @mxinden I don't think we can use both `bandwidth_loggers` with the new syntax right?
2023-10-19 21:22:55 +00:00
Michael Sproul
8c28d175b8 Fix: write post state in lcli skip-slots (#4843)
## Issue Addressed

Fix a bug in `lcli skip-slots` that resulted in it always writing the pre-state to the output file.

## Proposed Changes

Correctly keep track of the post-state, and write it.
2023-10-19 05:19:25 +00:00
Dustin Brickwood
1c6356f8f3 chore: replace deprecated hub with gh for releases (#4839)
## Issue Addressed

- The `hub` tool used to create draft releases has been removed as of October 2. See: https://github.com/actions/runner-images/issues/8362
- This change replaces `hub` usage in favour of [`gh`](https://cli.github.com/manual/gh_release_create) 

## Proposed Changes

- This change replaces `hub` usage in favour of [`gh`](https://cli.github.com/manual/gh_release_create) 



Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-10-19 05:19:24 +00:00
Joe Clapis
1de02f731b Add CARGO_USE_GIT_CLI to the Dockerfile to work around an OOM bug during cross-compiling (#4828)
## Issue Addressed
#4827 

## Proposed Changes

This PR introduces a new build-arg to the Lighthouse Dockerfile: `CARGO_USE_GIT_CLI`. This arg will be passed into the `CARGO_NET_GIT_FETCH_WITH_CLI` [environment variable](https://doc.rust-lang.org/cargo/reference/config.html#netgit-fetch-with-cli), which instructs `cargo` to use the git CLI during `fetch` operations instead of the git library. Doing so works around [a bug](https://github.com/rust-lang/cargo/issues/10583) with the git library that causes it to go OOM during `fetch` operations on `arm64` platforms.

The default value is `false` so this doesn't affect Lighthouse builds or the CI pipeline. Running a build with `--build-arg CARGO_USE_GIT_CLI=true` will activate it, which is necessary to cross-compile the `arm64` binary when not using `cross` (i.e., when building via the Dockerfile instead of natively if you don't have a rust environment ready to go).

Special thanks to @michaelsproul for helping me repro the initial problem.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-10-19 05:19:23 +00:00
João Oliveira
f06391717c collect bandwidth metrics per transport (#4805)
## Issue Addressed

Following the conversation on https://github.com/libp2p/rust-libp2p/pull/3666, the changes introduced in this PR will give us more insight into whether the bandwidth limitations happen at the transport level, namely whether quic helps vs yamux and its [window size limitation](https://github.com/libp2p/rust-yamux/issues/162), or whether the bottleneck is at the gossipsub level.
## Proposed Changes

introduce new quic and tcp bandwidth metric gauges.

cc @mxinden (turned out to be easier, Thomas gave me a hint)
2023-10-19 05:19:22 +00:00
Michael Sproul
6b4f154dcc Tree states release v4.5.333-exp 2023-10-19 15:57:23 +11:00
Michael Sproul
0cb8fdf370 Various small tree-states fixes (#4861)
* Fix block backfill with genesis skip slots

* Fix freezer upper limit

* Fix: write post state in lcli skip-slots (#4843)

* Added CARGO_USE_GIT_CLI to the Dockerfile (#4828)

* chore: replace deprecated hub with gh for releases (#4839)

* Put schema version back to 24 (ignore Deneb)

* Minimise diff

---------

Co-authored-by: Joe Clapis <jclapis@outlook.com>
Co-authored-by: Dustin Brickwood <dustinbrickwood204@gmail.com>
2023-10-19 14:59:29 +11:00
Michael Sproul
72d8c3852c Merge remote-tracking branch 'origin/unstable' into tree-states 2023-10-19 12:07:35 +11:00
Jimmy Chen
98cac2bc6b Deneb review .github (CI cleanup) (#4696)
## Issue Addressed

Related to https://github.com/sigp/lighthouse/issues/4676.

Deneb-specific CI code to be removed before merging to `unstable`. Do not merge until we're ready to merge into `unstable`, as we may need to release deneb docker images before merging.

Keep in mind that most of the changes in the below PR (to `unstable`) have already 
been merged to `deneb-free-blobs`, so merging `deneb-free-blobs` into `unstable` would include those changes - it would be ok if the release runners are ready, otherwise we may want to exclude them before merging.
- https://github.com/sigp/lighthouse/pull/4592
2023-10-18 15:23:31 +00:00
Michael Sproul
5bbeedb5b7 Reduce nextest threads to 8 (#4846)
## Issue Addressed

Fix OOMs caused by too many concurrent tests. The runner machine is currently liable to run `32 * 5 = 160` tests in parallel. If each test uses say 300MB max, this is 48GB of RAM!

## Proposed Changes

Reduce the number of threads per runner job to 8. This should cap the memory at 4x lower than the current limit, i.e. around 12GB. If we continue to run out of RAM, we should consider more sophisticated limits.
2023-10-18 14:19:41 +00:00
Michael Sproul
192d442718 Fix Rayon deadlock in test utils (#4837)
## Issue Addressed

Fix a deadlock in the tests that was causing tests on tree-states to run for hours without finishing: https://github.com/sigp/lighthouse/actions/runs/6491194654/job/17628138360.

## Proposed Changes

Avoid using a Mutex under the Rayon `par_iter`. Instead, use an `AtomicUsize`. I've run the new version several times in a loop and it hasn't deadlocked (it was deadlocking consistently on tree-states).
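
For illustration, a minimal sketch of the lock-free pattern, assuming the `rayon` crate (not the actual test-utils code):

```rust
use rayon::prelude::*;
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(0);
    (0..1024usize).into_par_iter().for_each(|_| {
        // No lock is taken here, so Rayon worker threads can never block
        // (or deadlock) on each other while updating the shared counter.
        counter.fetch_add(1, Ordering::Relaxed);
    });
    assert_eq!(counter.load(Ordering::Relaxed), 1024);
}
```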

## Additional Info

The same bug exists in unstable and tree-states, but I'm not sure why it was triggering so consistently on the tree-states branch.
2023-10-18 13:36:42 +00:00
Michael Sproul
463e62e833 Generalise compare_fields to work with iterators (#4823)
## Proposed Changes

Add `compare_fields(as_iter)` as a field attribute to `compare_fields_derive`. This allows any iterable type to be compared in the same way as a slice (by index).

This is forwards-compatible with tree-states types like `List` and `Vector` which cannot be cast to slices.
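
For illustration, a minimal sketch of index-based comparison via iterators rather than slices, with a hypothetical `compare_as_iter` helper (roughly the kind of comparison the derive could generate for an `as_iter` field):

```rust
fn compare_as_iter<T: PartialEq>(
    a: impl IntoIterator<Item = T>,
    b: impl IntoIterator<Item = T>,
) -> Vec<(usize, bool)> {
    // Compare element-by-element via iteration; this works for any iterable,
    // including types that cannot be cast to slices. (Unequal lengths are
    // truncated here for brevity; a real comparison would also check length.)
    a.into_iter()
        .zip(b)
        .enumerate()
        .map(|(i, (x, y))| (i, x == y))
        .collect()
}

fn main() {
    let diffs = compare_as_iter(vec![1, 2, 3], vec![1, 9, 3]);
    assert_eq!(diffs, vec![(0, true), (1, false), (2, true)]);
}
```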
2023-10-18 12:59:53 +00:00
Jimmy Chen
1b4545cd9d Remove blob clones in KZG verification (#4852)
## Issue Addressed

This PR removes two instances of blob clones during blob verification that may not be necessary.
2023-10-18 06:52:54 +00:00
Mac L
a7c46bf7ed Fix Homebrew link (#4822)
## Issue Addressed

N/A

## Proposed Changes

I saw a false positive on the link-check CI run and while investigating I noticed that this link technically 404's but is not "dead" in the strict sense. I have updated it to the correct path.
2023-10-18 06:52:53 +00:00
João Oliveira
f10d3d07c3 remove crit! logging from ListenerClosed event on Ok() (#4821)
## Issue Addressed

Since adding Quic support in https://github.com/sigp/lighthouse/pull/4577, and due to the nature of `quinn`'s API, LH now triggers the [`ListenerClosed`](https://docs.rs/libp2p/0.52.3/libp2p/swarm/struct.ListenerClosed.html) event. @michaelsproul noticed we are logging this event as `crit!` independently of the reason. This PR matches on the reason, logging with `debug!` and `error!` (instead of `crit!`) according to its `Result`.
## Additional Info
LH will still log `crit!` until https://github.com/libp2p/rust-libp2p/pull/4621 has been merged
2023-10-18 06:52:52 +00:00
Jimmy Chen
18f3edff0a Add vendor directory to .gitignore (#4819)
## Issue Addressed

The vendor directory gets populated after running `cargo vendor`. This directory should be ignored by VCS.
2023-10-18 06:52:51 +00:00
Michael Sproul
5cc0f1097b Fix metric for total block production time (#4794)
## Proposed Changes

Fix the misplacement of the total block production time metric, which occurred during a previous refactor.

Total block production times are no longer skewed low (data from Holesky + blockdreamer):

```
# HELP beacon_block_production_seconds Full runtime of block production
# TYPE beacon_block_production_seconds histogram
beacon_block_production_seconds_bucket{le="0.005"} 0
beacon_block_production_seconds_bucket{le="0.01"} 0
beacon_block_production_seconds_bucket{le="0.025"} 0
beacon_block_production_seconds_bucket{le="0.05"} 0
beacon_block_production_seconds_bucket{le="0.1"} 0
beacon_block_production_seconds_bucket{le="0.25"} 0
beacon_block_production_seconds_bucket{le="0.5"} 37
beacon_block_production_seconds_bucket{le="1"} 65
beacon_block_production_seconds_bucket{le="2.5"} 66
beacon_block_production_seconds_bucket{le="5"} 66
beacon_block_production_seconds_bucket{le="10"} 66
beacon_block_production_seconds_bucket{le="+Inf"} 66
beacon_block_production_seconds_sum 34.225780452
beacon_block_production_seconds_count 66
```
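
For illustration, a minimal sketch of the timer placement, assuming the `prometheus` crate (not the actual Lighthouse metrics code); the timer has to span the whole block-production path, otherwise the histogram is skewed low:

```rust
use prometheus::{register_histogram, Histogram};

fn main() {
    let hist: Histogram = register_histogram!(
        "beacon_block_production_seconds",
        "Full runtime of block production"
    )
    .unwrap();

    // The timer records the elapsed time into the histogram when stopped, so
    // it must be started at the very beginning of block production and only
    // stopped at the very end.
    let timer = hist.start_timer();
    // ... the entire block production path would run here ...
    timer.observe_duration();
}
```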

## Additional Info

Cheers to @jimmygchen for helping spot this.
2023-10-18 06:52:50 +00:00
Jimmy Chen
64c156c0c1 Pre-generate test blobs bundle to improve test time. (#4829)
## Issue Addressed

Addresses #4778, and potentially fixes the flaky deneb builder test `builder_works_post_deneb`.

The [deneb builder test](c5c84f1213/beacon_node/http_api/tests/tests.rs (L5371)) has been quite flaky on our CI (`release-tests`) since it was introduced. I'm guessing that it might be timing out on the builder `get_header` call (1 second), and therefore the local payload is used, while the test expects builder payload to be used. 

On my machine the [`get_header` ](c5c84f1213/beacon_node/execution_layer/src/test_utils/mock_builder.rs (L367)) call takes about 550ms, which could easily go over 1s on slower environments (our windows CI runner is much slower than the ubuntu one).

I did a profile on the test and it showed that `blob_to_kzg_commitment` and `compute_kzg_proof` were taking a large chunk of time, so perhaps pre-generating the blobs could help stabilise this test.

## Proposed Changes

Pre-generate blobs bundle for Mainnet and Minimal presets.

Before the change `get_header` took about **550ms**; after the change it's reduced to **50-55ms**. If timeout was indeed the cause of the flaky test, this fix should stabilise it. This also brings the flaky `builder_works_post_deneb` test time from 50s to 10s (8s if we only use a single blob).
2023-10-18 04:40:29 +00:00
Mac L
369b624b19 Fix broken Nethermind integration tests (#4836)
## Issue Addressed

CI is currently blocked by persistently failing integration tests.

## Proposed Changes

Use latest Nethermind release and apply the appropriate fixes as there have been breaking changes.
Also increase the timeout since I had some local timeouts.


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: antondlr <anton@delaruelle.net>
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-10-18 04:08:55 +00:00
Michael Sproul
8b0545da12 Merge Deneb (#4054) 2023-10-17 10:58:11 +11:00
realbigsean
283ec8cf24 Deneb pr updates 2 (#4851)
* use workspace deps in kzg crate

* delete unused blobs dp path field

* full match on fork name in engine api get payload v3

* only accept v3 payloads on get payload v3 endpoint in mock el

* remove FIXMEs related to merge transition tests

* move static tx to test utils

* default max_per_epoch_activation_churn_limit to mainnet value

* remove unnecessary async

* remove comment

* use task executor in `blob_sidecars` endpoint
2023-10-17 09:53:46 +11:00
Michael Sproul
ba0567d3ef Merge remote-tracking branch 'origin/unstable' into deneb-free-blobs 2023-10-16 16:33:37 +11:00
Age Manning
cf544b3996 Very minor own nitpicks (#4845) 2023-10-16 16:30:14 +11:00
Michael Sproul
2d662f78ae Use Deneb fork in generate_genesis_header
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-10-16 16:24:27 +11:00
Michael Sproul
bb6675e42d Clean up progressive balance slashings further (#4834)
* Clean up progressive balance slashings further

* Fix Rayon deadlock in test utils

* Fix cargo-fmt
2023-10-13 16:45:56 +11:00
Michael Sproul
b121e69fcf Fix cache logic for epoch boundary skips (#4833) 2023-10-13 16:45:35 +11:00
Jimmy Chen
38e7172508 Add blob_sidecar event to SSE (#4790)
* Add `blob_sidecar` event to SSE.

* Return 202 if a block is published but failed blob validation when validation level is `Gossip`.

* Move `BlobSidecar` event to `process_gossip_blob` and add test.

* Emit `BlobSidecar` event when blobs are received over rpc.

* Improve test assertions on `SseBlobSidecar`s.

* Add quotes to blob index serialization in `SseBlobSidecar`

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-10-12 10:13:08 -04:00
realbigsean
4555e33048 Remove serde derive references (#4830)
* remove remaining uses of serde_derive

* fix lockfile

---------

Co-authored-by: João Oliveira <hello@jxs.pt>
2023-10-11 13:01:30 -04:00
Michael Sproul
d9acee5a72 Delete unused ssz_types file (#4824) 2023-10-11 12:49:08 -04:00
ethDreamer
8660043024 Prevent Overflow LRU Cache from Exploding (#4801)
* Initial Commit of State LRU Cache

* Build State Caches After Reconstruction

* Cleanup Duplicated Code in OverflowLRUCache Tests

* Added Test for State LRU Cache

* Prune Cache of Old States During Maintenance

* Address Michael's Comments

* Few More Comments

* Removed Unused impl

* Last touch up

* Fix Clippy
2023-10-10 23:51:00 -05:00
Michael Sproul
b77de6958a Re-enable ARM builds 2023-10-11 14:37:46 +11:00
Michael Sproul
dfa3b4325a Fix Clippy for 1.73 2023-10-11 14:03:11 +11:00
Michael Sproul
6ae4c22d56 Fix merge snafu in tests 2023-10-11 11:57:39 +11:00
Michael Sproul
e63d02e357 Merge remote-tracking branch 'origin/deneb-free-blobs' into tree-states-deneb 2023-10-11 11:52:39 +11:00
Michael Sproul
d4f87ef1a1 Fix three consensus bugs! 2023-10-11 11:18:22 +11:00
Michael Sproul
ca1abfeb2d Support iterables in compare_fields 2023-10-11 10:43:49 +11:00
Jimmy Chen
4ad7e15732 Address Clippy 1.73 lints on Deneb branch (#4810)
* Address Clippy 1.73 lints (#4809)

## Proposed Changes

Fix Clippy lints enabled by default in Rust 1.73.0, released today.

* Address Clippy 1.73 lints.

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-10-06 12:23:57 +05:30
Michael Sproul
c3321dddb7 Reduce attestation subscription spam from VC (#4806)
## Proposed Changes

Instead of sending every attestation subscription every slot to every BN:

- Send subscriptions 32, 16, 8, 7, 6, 5, 4, 3 slots before they occur.
- Track whether each subscription is sent successfully and retry it in subsequent slots if necessary.
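
For illustration, a minimal sketch of the schedule with a hypothetical `should_send` helper (not the actual `SubscriptionSlots` implementation):

```rust
const SUBSCRIPTION_OFFSETS: [u64; 8] = [32, 16, 8, 7, 6, 5, 4, 3];

/// Return true if a subscription should be (re)sent at `current_slot` for a
/// duty at `duty_slot`, given the offsets at which it was already sent.
fn should_send(duty_slot: u64, current_slot: u64, already_sent: &[u64]) -> bool {
    match duty_slot.checked_sub(current_slot) {
        Some(distance) => {
            SUBSCRIPTION_OFFSETS.contains(&distance) && !already_sent.contains(&distance)
        }
        None => false, // the duty slot has already passed
    }
}

fn main() {
    assert!(should_send(100, 68, &[]));    // 32 slots ahead: send
    assert!(!should_send(100, 68, &[32])); // already sent at this offset
    assert!(!should_send(100, 90, &[]));   // 10 slots ahead: not in schedule
}
```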

## Additional Info

- [x] Add unit tests for `SubscriptionSlots`.
- [x] Test on Holesky.
- [x] Based on #4774 for testing.
2023-10-06 06:26:18 +00:00
chonghe
accb56e4fb Revise doc API section (#4798)
## Issue Addressed

Partially #4788 

## Proposed Changes

Remove documentation on the `/lighthouse/database/reconstruct` API to avoid confusion, as calling the API during historical block download will show an error in the beacon log.

Add Events API documentation about `payload_attributes`.



Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-10-06 04:34:47 +00:00
Michael Sproul
9769a247b2 Address Clippy 1.73 lints (#4809)
## Proposed Changes

Fix Clippy lints enabled by default in Rust 1.73.0, released today.
2023-10-06 03:05:47 +00:00
Michael Sproul
e2a60a6fb1 Merge remote-tracking branch 'origin/deneb-free-blobs' into tree-states-deneb 2023-10-06 11:11:36 +11:00
realbigsean
203ac65041 Merge pull request #4808 from jimmygchen/merge-unstable-to-deneb-20231005
Merge `unstable` branch to deneb 20231005
2023-10-05 11:17:48 -04:00
Jimmy Chen
a96963fd5f Re-commit corrupted key files 2023-10-06 00:24:09 +11:00
Jimmy Chen
3692622339 Merge branch 'unstable' into merge-unstable-to-deneb-20231005 2023-10-06 00:20:54 +11:00
Michael Sproul
b82f7843ff Use peeking_take_while in BlockReplayer (#4803)
## Issue Addressed

While reviewing #4801 I noticed that our use of `take_while` in the block replayer means that if a state root iterator _with gaps_ is provided, some additional state roots will be dropped unnecessarily. In practice the impact is small, because once there's _one_ state root miss, the whole tree hash cache needs to be built anyway, and subsequent misses are less costly. However this was still a little inefficient, so I figured it's better to fix it.

## Proposed Changes

Use [`peeking_take_while`](https://docs.rs/itertools/latest/itertools/trait.Itertools.html#method.peeking_take_while) to avoid consuming the next element when checking whether it satisfies the slot predicate.

## Additional Info

There's a gist here that shows the basic dynamics in isolation: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=40b623cc0febf9ed51705d476ab140c5. Changing the `peeking_take_while` to a `take_while` causes the assert to fail. Similarly I've added a new test `block_replayer_peeking_state_roots` which fails if the same change is applied inside `get_state_root`.
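
For reference, a minimal sketch of the difference, assuming the `itertools` crate:

```rust
use itertools::Itertools;

fn main() {
    let mut iter = vec![1, 2, 5, 6].into_iter().peekable();

    // `peeking_take_while` checks the predicate by peeking, so the first
    // failing element (5) is NOT consumed...
    let head: Vec<_> = iter.peeking_take_while(|&x| x < 3).collect();
    assert_eq!(head, vec![1, 2]);

    // ...and remains available here. With a plain `take_while` it would
    // have been silently dropped.
    assert_eq!(iter.next(), Some(5));
}
```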
2023-10-05 06:03:24 +00:00
Jimmy Chen
72563ffb41 Fix CI tests 2023-10-05 16:38:06 +11:00
Jimmy Chen
c5c84f1213 Merge branch 'unstable' into merge-unstable-to-deneb-20231005
# Conflicts:
#	.github/workflows/test-suite.yml
#	Cargo.lock
#	beacon_node/execution_layer/Cargo.toml
#	beacon_node/execution_layer/src/test_utils/mock_builder.rs
#	beacon_node/execution_layer/src/test_utils/mod.rs
#	beacon_node/network/src/service/tests.rs
#	consensus/types/src/builder_bid.rs
2023-10-05 15:54:44 +11:00
Nico Flaig
4b619c63d7 Exit aggregation step early if no validator is aggregator (#4774)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/4712

## Proposed Changes

Exit aggregation step early if no validator is aggregator. This avoids an unnecessary request to the beacon node and more importantly fixes noisy errors if Lighthouse VC is used with other clients such as Lodestar and Prysm.

## Additional Info

Related issue https://github.com/ChainSafe/lodestar/issues/5553
2023-10-05 02:14:55 +00:00
duguorong009
7d537214b7 fix(validator_client): return http 404 rather than 405 in http api (#4758)
## Issue Addressed
- Close #4596 

## Proposed Changes
- Add `Filter::recover` to handle rejections specifically as 404 NOT FOUND
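
For illustration, a minimal sketch of the idea, assuming the `warp` and `tokio` crates and a hypothetical route (not the actual VC HTTP API code):

```rust
use std::convert::Infallible;
use warp::{http::StatusCode, Filter, Rejection, Reply};

// Collapse all rejections (including method-not-allowed, which would
// otherwise surface as 405) into a plain 404 NOT FOUND response.
async fn handle_rejection(_err: Rejection) -> Result<impl Reply, Infallible> {
    Ok(warp::reply::with_status("NOT FOUND", StatusCode::NOT_FOUND))
}

#[tokio::main]
async fn main() {
    let routes = warp::path("health")
        .and(warp::get())
        .map(|| "ok")
        .recover(handle_rejection);
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
```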


## Additional Info

Similar to PR #3836
2023-10-04 00:43:29 +00:00
Akihito Nakano
ba8bcf4bd3 Remove deficit gossipsub scoring during topic transition (#4486)
## Issue Addressed

This PR closes https://github.com/sigp/lighthouse/issues/3237

## Proposed Changes

Remove topic weight of old topics when the fork happens.

## Additional Info

- Divided `NetworkService::start()` into `NetworkService::build()` and `NetworkService::start()` for ease of testing.
2023-10-04 00:43:28 +00:00
Michael Sproul
6ec649a4e2 Optimise head block root API (#4799)
## Issue Addressed

We've had a report of sync committee performance suffering with the beacon processor HTTP API prioritisations.

## Proposed Changes

Increase the priority of `/eth/v1/beacon/blocks/head/root` requests, which are used by the validator client to form sync committee messages, here:

441fc1691b/validator_client/src/sync_committee_service.rs (L181-L188)

Additionally, avoid loading the blinded block in all but the `block_id=block_root` case. I'm not sure why we were doing this previously; I suspect it was just an oversight during the implementation of the `finalized` status on API requests.

## Additional Info

I think this change should have minimal negative impact as:

- The block root endpoint is quick to compute (a few ms max).
- Only the priority of `head` requests is increased. Analytical processes that are making lots of block root requests for past slots are unable to DoS the beacon processor, as their requests will still be processed after attestations.
2023-10-03 23:59:37 +00:00
Pawan Dhananjay
5bab9b866e Don't downscore peers on duplicate blocks (#4791)
## Issue Addressed

N/A

## Proposed Changes

We currently downscore a peer for sending us a block that we already have in fork choice. This is unnecessary, as we get duplicates in lighthouse only when:
1. We published the block, so the block is already in fork choice
2. We imported the same block over rpc

In both scenarios, the peer who sent us the block over gossip is not at fault.

This isn't exploitable, as valid duplicates will get dropped by the gossipsub duplicate filter.
2023-10-03 23:59:35 +00:00
Lucas Saldanha
f7daf82430 Removed old Teku mainnet bootnode ENRs (#4786)
## Issue Addressed

N/A

## Proposed Changes

Removing the two Teku mainnet bootnodes that are being sunset.

## Additional Info

We are leaving only these two bootnodes: https://github.com/eth-clients/eth2-networks/blob/master/shared/mainnet/bootstrap_nodes.txt#L10-L11
2023-10-03 23:59:34 +00:00
Divma
f11884ccdb enforce non zero enr ports (#4776)
## Issue Addressed

Right now lighthouse accepts zero as an ENR port. Since ENR ports should be reachable, zero ports should be rejected.

## Proposed Changes

- update the config to use `NonZeroU16` as the ENR port type for all enr-related fields.
- the ENR builder from config now sets the ENR port to the listening port only if the ENR port is not already set (previous behaviour) and the listening port is non-zero (new behaviour)
- reject zero listening ports when used with `enr-match`.
- the boot node now rejects a zero listening port, since those ports are advertised.
- generate-bootnode-enr also rejects zero listening ports for the same reason.
- update local network scripts
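
For illustration, a minimal sketch of the type-level idea with a hypothetical config struct (not the actual Lighthouse config):

```rust
use std::num::NonZeroU16;

struct EnrConfig {
    /// Zero is unrepresentable here, so an unreachable ENR port can no
    /// longer be configured by accident.
    enr_tcp_port: Option<NonZeroU16>,
}

fn parse_enr_port(raw: u16) -> Result<NonZeroU16, String> {
    NonZeroU16::new(raw).ok_or_else(|| "ENR ports must be non-zero".to_string())
}

fn main() {
    assert!(parse_enr_port(9000).is_ok());
    assert!(parse_enr_port(0).is_err());
    let _config = EnrConfig {
        enr_tcp_port: NonZeroU16::new(9000),
    };
}
```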

## Additional Info

Unrelated, but why do we overwrite `enr-x-port` values with listening ports if `enr-match` is present? We probably should only do this for ENR values that are not already set.
2023-10-03 23:59:34 +00:00
João Oliveira
0dc95a1d37 PeerManager: move the check for banned peers from connection_established (#4569)
## Issue Addressed
https://github.com/sigp/lighthouse/issues/4543
## Proposed Changes
- Removes `NotBanned` from `BanResult`, implements `Display` and `std::error::Error` for `BanResult` and changes the `ban_result` return type to `Option<BanResult>`, which helps return `BanResult` from `handle_established_inbound_connection`
- moves the check for banned peers from `on_connection_established` to `handle_established_inbound_connection` to start addressing #4543.
- Removes `allow_block_list` as it's now redundant? Not sure about this one, but if `PeerManager` keeps track of the banned peers, no need to send a `Swarm` event for `allow_block_list` to also keep that list, right?
 
## Questions

-  #4543 refers:
>  More specifically, implement the connection limit behaviour inside the peer manager.

@AgeManning do you mean copying `libp2p::connection_limits::Behaviour`'s code into `PeerManager`/ having it as an inner `NetworkBehaviour` of `PeerManager`/other? If it's the first two, I think it probably makes more sense to have it as it is as it's less code to maintain.

> Also implement the banning of peers inside the behaviour, rather than passing messages back up to the swarm.

I tried to achieve this, but we still need to pass the `PeerManagerEvent::Banned` swarm event, as `DiscV5` handles its node and IP management internally and I did not find a method to query if a peer is banned. Is there anything else we can do from here?

3397612160/beacon_node/lighthouse_network/src/discovery/mod.rs (L931-L940)

Same as the question above, I did not find a way to check if `DiscV5` has the peer banned, so that we could check here and avoid sending `Swarm` events

3397612160/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs (L168-L178)

Is there a chance we try to dial a peer that has been banned previously? 

Thanks!
2023-10-03 23:59:32 +00:00
realbigsean
7605494791 Use only lighthouse types in the mock builder (#4793)
## Proposed Changes

- only use LH types to avoid build issues
- use warp instead of axum for the server to avoid importing the dep

## Additional Info

- wondering if we can move the `execution_layer/test_utils` to its own crate and import it as a dev dependency
- this would be made easier by separating out our engine API types into their own crate so we can use them in the test crate
- or maybe we can look into using reth types for the engine api if they are in their own crate


Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-10-03 17:59:28 +00:00
realbigsean
c7ddf1f0b1 add processing and processed caching to the DA checker (#4732)
* add processing and processed caching to the DA checker

* move processing cache out of critical cache

* get it compiling

* fix lints

* add docs to `AvailabilityView`

* some self review

* fix lints

* fix beacon chain tests

* cargo fmt

* make availability view easier to implement, start on testing

* move child component cache and finish test

* cargo fix

* cargo fix

* cargo fix

* fmt and lint

* make blob commitments not optional, rename some caches, add missing blobs struct

* Update beacon_node/beacon_chain/src/data_availability_checker/processing_cache.rs

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

* marks review feedback and other general cleanup

* cargo fix

* improve availability view docs

* some renames

* some renames and docs

* fix should delay lookup logic

* get rid of some wrapper methods

* fix up single lookup changes

* add a couple docs

* add single blob merge method and improve process_... docs

* update some names

* lints

* fix merge

* remove blob indices from lookup creation log

* remove blob indices from lookup creation log

* delayed lookup logging improvement

* check fork choice before doing any blob processing

* remove unused dep

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/network/src/sync/block_lookups/delayed_lookup.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* remove duplicate deps

* use gen range in random blobs generator

* rename processing cache fields

* require block root in rpc block construction and check block root consistency

* send peers as vec in single message

* spawn delayed lookup service from network beacon processor

* fix tests

---------

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-10-03 09:59:33 -04:00
Age Manning
8a1b77bf89 Ultra Fast Super Slick CI (#4755)
Attempting to improve our CI speeds, as it's recently been a pain point.

Major changes:

 - Use a GitHub action to pull stable/nightly Rust rather than building it each run
 - Shift the test suite to `nextest` (https://github.com/nextest-rs/nextest) for CI
 
 UPDATE:

So I've iterated on some changes, and although I think it's still not optimal, this is a good base to start from. Some extra things in this PR:
- Shifted where we pull Rust from. We're now using this thing: https://github.com/moonrepo/setup-rust . It's got some interesting caches built in, but I was not seeing the gains that Jimmy managed to get. In either case though, it can pull Rust, rustfmt, clippy and cargo-nextest all in < 5s, so I think it's worthwhile.
- I've grouped a few of the check-like tests into a single test called `code-test`. Although we were using GitHub runners in parallel, which may be faster, it just seems wasteful. There were 4-5 tests where we would pull lighthouse, compile it, then run a single check, like clippy, cargo-audit or fmt. I've grouped these into a single action, so we only compile lighthouse once and then run each check as a step. This avoids compiling lighthouse ~5 times.
- I've made the doppelganger tests run on our local machines to avoid pulling foundry, building it, and making lcli, which are all now baked into the images.
- We have sccache and do not incrementally compile lighthouse.

Misc bonus things:
- Cargo update
- Fix web3 signer openssl keys which is required after a cargo update
- Use `mock_instant` in an LRU cache test to avoid a non-deterministic test (see the sketch after this list)
- Remove race condition in building web3signer tests
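
For the LRU cache fix, here is a sketch of how `mock_instant` makes time-based assertions deterministic (assuming the crate's `MockClock`/`Instant` pair; not the exact test):

```rust
use mock_instant::{Instant, MockClock};
use std::time::Duration;

fn main() {
    // `Instant` here reads from a mocked clock instead of the OS clock,
    // so elapsed time is exactly what the test advances it by.
    let inserted_at = Instant::now();
    MockClock::advance(Duration::from_secs(5));
    assert_eq!(inserted_at.elapsed(), Duration::from_secs(5));
}
```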

There are still some things we could improve on, such as downloading the EF tests and the web3signer binary every run, but I've left these out of scope for this PR. I think the above are meaningful improvements.



Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: antondlr <anton@delaruelle.net>
2023-10-03 06:33:15 +00:00
Michael Sproul
9446fc88ed Fix semantic Deneb <> tree-states conflicts 2023-10-03 16:07:25 +11:00
Jack McPherson
1c98806b6f Allow libp2p to determine listening addresses (#4700)
## Issue Addressed

#4675 

## Proposed Changes

 - Update local ENR (**only port numbers**) with local addresses received from libp2p (via `SwarmEvent::NewListenAddr`); see the sketch after this list
 - Only use the zero port for CLI tests
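
A sketch of the port-only update, using a hypothetical helper around the addresses reported by `SwarmEvent::NewListenAddr` (the real wiring lives in the network service):

```rust
use libp2p::multiaddr::{Multiaddr, Protocol};

/// Extract the TCP port from a listen address reported by
/// `SwarmEvent::NewListenAddr`, so that only the ENR's port fields are
/// updated while the advertised IP address is left alone.
fn tcp_port(addr: &Multiaddr) -> Option<u16> {
    addr.iter().find_map(|proto| match proto {
        Protocol::Tcp(port) => Some(port),
        _ => None,
    })
}

fn main() {
    let addr: Multiaddr = "/ip4/127.0.0.1/tcp/9000".parse().unwrap();
    assert_eq!(tcp_port(&addr), Some(9000));
}
```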

## Additional Info

### See Also ###

 - #4705 
 - #4402 
 - #4745
2023-10-03 04:57:20 +00:00
realbigsean
a935daebd5 Clean bors.toml (#4795)
unblock https://github.com/sigp/lighthouse/pull/4755

Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-10-03 03:37:12 +00:00
realbigsean
67aeb6bf6b insert cached child at the front of a chain of parent lookups (#4780)
* insert cached child at the front of a chain of parent lookups

* use vecdeque in parent lookup chain of blocks
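
A tiny sketch of the data-structure change (illustrative values only): a `VecDeque` makes prepending the cached child to the pending chain cheap:

```rust
use std::collections::VecDeque;

fn main() {
    // Chain of blocks awaiting their parents, oldest ancestor at the back.
    let mut chain: VecDeque<&str> = VecDeque::from(["parent", "grandparent"]);
    // The cached child belongs at the front of the chain.
    chain.push_front("cached_child");
    assert_eq!(chain.front(), Some(&"cached_child"));
}
```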
2023-09-29 12:55:12 -04:00
Michael Sproul
109c4a5d17 Merge remote-tracking branch 'origin/deneb-free-blobs' into tree-states 2023-09-29 16:34:29 +10:00
Jimmy Chen
57edc0f3ce Add serde(default) to max_per_epoch_activation_churn_limit in spec config so that VC is compatible to older BN versions. (#4783) 2023-09-26 08:48:32 -04:00
realbigsean
9f37d6df77 reduce blob prune logging in forward sync (#4779) 2023-09-26 08:22:26 -04:00
realbigsean
a642bd7de7 Merge pull request #4781 from jimmygchen/merge-unstable-to-deneb-20230926
Merge `unstable` into `deneb-free-blobs`
2023-09-26 08:19:37 -04:00
Jimmy Chen
8f07a96b88 Fix failing tests. 2023-09-26 12:39:58 +10:00
Michael Sproul
f1f76f2f44 Tree states release v4.5.222-exp (#4782) 2023-09-26 12:23:28 +10:00
Jimmy Chen
1458394cd9 Fix compilation issues after merging unstable. 2023-09-26 11:46:20 +10:00
Michael Sproul
cae73a4d82 Merge tag 'v4.5.0' into tree-states
v4.5.0
2023-09-26 11:21:44 +10:00
Jimmy Chen
7a3cb135d4 Fix tests and add BlockContents decoding. Remove unused builder_threshold field in ApiTesterConfig. 2023-09-26 10:57:21 +10:00
Jimmy Chen
c0b6b92f27 Merge unstable 20230925 into deneb-free-blobs. 2023-09-26 10:32:18 +10:00
Michael Sproul
9244f7f7bc Improvements to Deneb store upon review (#4693)
* Start testing blob pruning

* Get rid of unnecessary orphaned blob column

* Make random blob tests deterministic

* Test for pruning being blocked by finality

* Fix bugs and test fork boundary

* A few more tweaks to pruning conditions

* Tweak oldest_blob_slot semantics

* Test margin pruning

* Clean up some terminology and lints

* Schema migrations for v18

* Remove FIXME

* Prune blobs on finalization not every slot

* Fix more bugs + tests

* Address review comments
2023-09-25 14:21:54 -04:00
Paul Hauner
441fc1691b Release v4.5.0 (#4768)
## Issue Addressed

NA

## Proposed Changes

Bump versions from v4.4.1 to v4.5.0.

## Additional Info

NA
2023-09-25 05:14:01 +00:00
Lion - dapplion
5c5afafc0d Update Deneb to 1.4.0-beta.2 (devnet-9) (#4735)
* Add MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT

* Update tests to 1.4.0-beta.2

* Implement equivocation check for proposer boost

* Use hotfix tests and fix minimal config

* Start updating fork choice tests for Deneb

* Finish implementing fork choice blob handling

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-09-25 15:05:31 +10:00
João Oliveira
0f05499e30 Fix cli options (#4772)
## Issue Addressed

Fixes a breaking change introduced in https://github.com/sigp/lighthouse/pull/4674/ that didn't allow multiple `http_enabled` `ArgGroup` flags.
2023-09-22 12:00:51 +00:00
Paul Hauner
fbb6997309 Fix release CI for self-hosted runners (#4770)
## Issue Addressed

NA

## Proposed Changes

Disables some commands for self-hosted runners to prevent failures.

## Additional Info

NA
2023-09-22 11:04:47 +00:00
Michael Sproul
364074da26 Tree states release v4.5.111-exp (#4769) 2023-09-22 15:52:23 +10:00
Michael Sproul
d24875ff74 Merge remote-tracking branch 'origin/unstable' into tree-states 2023-09-22 15:11:42 +10:00
Michael Sproul
cd23c89adb Improve state cache eviction and reduce mem usage (#4762)
* Improve state cache eviction and reduce mem usage

* Fix epochs_per_state_diff tests
2023-09-22 14:49:15 +10:00
João Oliveira
dcd69dfc62 Move dependencies to workspace (#4650)
## Issue Addressed

Synchronize dependencies and edition on the workspace `Cargo.toml`

## Proposed Changes

With https://github.com/rust-lang/cargo/issues/8415 merged, it's now possible to synchronize details on the workspace `Cargo.toml`, like the metadata and dependencies.
By aligning the dependencies that are shared between multiple crates on the workspace `Cargo.toml`, it's easier to avoid duplicate versions of the same dependency, which eases compile times.

## Additional Info
this PR also removes the no longer required direct dependency of the `serde_derive` crate.

should be reviewed after https://github.com/sigp/lighthouse/pull/4639 gets merged.
closes https://github.com/sigp/lighthouse/issues/4651


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-09-22 04:30:56 +00:00
antondlr
69c39ad1e5 Use release workflow runners (#4765)
## Issue Addressed

Build releases on self-hosted hardware to speed the process up
2023-09-22 02:33:13 +00:00
Age Manning
6b02e8525a Add new teku bootnodes (#4724)
Adds new Teku bootnodes

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-09-22 02:33:12 +00:00
Jimmy Chen
c4e907de9f Update the voluntary exit endpoint to comply with the key manager specification (#4679)
## Issue Addressed

#4635 

## Proposed Changes

Wrap the `SignedVoluntaryExit` object in a `GenericResponse` container, adding an additional `data` layer, to ensure compliance with the key manager API specification.

The new response would look like this:

```json
{"data":{"message":{"epoch":"196868","validator_index":"505597"},"signature":"0xhexsig"}}
```

This is a backward incompatible change and will affect Siren as well.
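
On the Rust side, the wrapper can be sketched as a serde container along the lines of Lighthouse's `GenericResponse` (a sketch, not the exact definition):

```rust
use serde::Serialize;

/// Adds the `data` layer required by the key manager API specification.
#[derive(Serialize)]
struct GenericResponse<T: Serialize> {
    data: T,
}

fn main() {
    // Usage sketch: any serialisable payload gains the `data` wrapper.
    let wrapped = GenericResponse { data: "0xhexsig" };
    println!("{}", serde_json::to_string(&wrapped).unwrap());
}
```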
2023-09-22 02:33:11 +00:00
João Oliveira
c5588eb66e require http and metrics for respective flags (#4674)
## Issue Addressed

following discussion on https://github.com/sigp/lighthouse/pull/4639#discussion_r1305183750 this PR makes the `http` and `metrics` sub-flags to require those main flags enabled
2023-09-22 02:33:10 +00:00
Paul Hauner
2441a247ab Bump quinn-proto to address rustsec vuln (#4767)
## Issue Addressed

NA

## Proposed Changes

Bumps `quinn-proto` to address a QUIC-related vulnerability: https://rustsec.org/advisories/RUSTSEC-2023-0063

Fixes a `cargo audit` failure.

## Additional Info

NA
2023-09-21 22:37:00 +00:00
antondlr
d0b1abc6fa Update Holesky boot ENR (#4763)
## Issue Addressed

update boot ENR for Holesky relaunch
2023-09-21 22:36:59 +00:00
Michael Sproul
0074a3b5f5 Fix block & state queries prior to genesis (#4761)
## Issue Addressed

Closes #4751

## Proposed Changes

Prevent `state_root_at_slot` and `block_root_at_slot` from erroring out due to a call to `self.slot()?` that fails before genesis (see the sketch after this list). This fixes pre-genesis queries for:

- block at slot 0
- block by genesis block root
- state at slot 0
- state by genesis state root
- state at `finalized` tag
- state at `justified` tag
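
A minimal sketch of the fix's shape (illustrative, not the actual Lighthouse code): treat a pre-genesis clock reading as slot 0 rather than propagating an error:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Slot(u64);

/// Before genesis the slot clock yields `None`; clamp to the genesis slot
/// so that queries for slot 0 data still succeed.
fn effective_slot(clock_reading: Option<Slot>) -> Slot {
    clock_reading.unwrap_or(Slot(0))
}

fn main() {
    assert_eq!(effective_slot(None), Slot(0));
    assert_eq!(effective_slot(Some(Slot(7))), Slot(7));
}
```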
2023-09-21 06:38:33 +00:00
Jimmy Chen
d3fe3ad337 Update holesky config for relaunch (#4760)
## Issue Addressed

#4759 

Note: Sigma Prime ENR hasn't been updated, tracking it in #4759
2023-09-21 06:38:32 +00:00
Eitan Seri-Levi
992b476eac Add SSZ support to validator block production endpoints (#4534)
## Issue Addressed

#4531 

## Proposed Changes

add SSZ support to the following block production endpoints:

GET /eth/v2/validator/blocks/{slot}
GET /eth/v1/validator/blinded_blocks/{slot}

## Additional Info

i updated a few existing tests to use ssz instead of writing completely new tests
2023-09-21 06:38:31 +00:00
Jimmy Chen
a0478da990 Fix genesis state download panic when running in debug mode (#4753)
## Issue Addressed

#4738 

## Proposed Changes

See the above issue for details. I went with option #2, using the async reqwest client in `Eth2NetworkConfig` and propagating the async-ness.
2023-09-21 04:17:25 +00:00
realbigsean
082bb2d638 Self hosted docker builds (#4592)
## Issue Addressed

We're OOM'ing on Docker builds on the Deneb branch https://github.com/sigp/lighthouse/issues/3929

Are we ok to self host automated docker builds?


Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: antondlr <anton@delaruelle.net>
2023-09-21 04:17:24 +00:00
Jimmy Chen
fe3bd03234 Fix local testnet to generate keys in the correct folders (#4752)
Fix local testnet to generate keys in the correct folders when `BN_COUNT` and `VC_COUNT` don't match.

The current script places the generated validator keys in validator folders based on the `BN_COUNT` config, e.g. `node_1/validators`, `node_2/validators`, etc. We should be using `VC_COUNT` here instead; otherwise the number of validator clients may not match the number of directories generated, which would result in either:
1. a VC not having any keys (when `BN_COUNT` < `VC_COUNT`)
2. a validator key directory not being used (when `BN_COUNT` > `VC_COUNT`).
2023-09-21 00:26:56 +00:00
chonghe
f9a3c00518 Update local testnet script (#4733)
There is an issue with the file `scripts/local_testnet/start_local_testnet.sh` - when we use a non-default `$SPEC-PRESET` in `vars.env` it runs into an error: 
```
executing: ./setup.sh >> /home/ck/.lighthouse/local-testnet/testnet/setup.log
parse error: Invalid numeric literal at line 1, column 7
```
@jimmygchen found the issue and the updated script includes the flag `--spec $SPEC-PRESET`
2023-09-21 00:26:55 +00:00
Michael Sproul
5a35278aea Add more checks and logging before genesis (#4730)
## Proposed Changes

This PR adds more logging prior to genesis, particularly on networks that start with execution enabled.

There are new checks using `eth_getBlockByHash/Number` to verify that the genesis state's `latest_execution_payload_header` matches the execution node's genesis block.
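
A sketch of the check's core comparison (hypothetical types; the real check fetches the EL block over the engine/JSON-RPC API):

```rust
type BlockHash = [u8; 32];

/// The block hash in the genesis state's `latest_execution_payload_header`
/// must equal the hash of the EL's genesis block (fetched via
/// `eth_getBlockByHash`/`eth_getBlockByNumber`), otherwise the two clients
/// disagree about genesis.
fn check_genesis_payload(header_hash: BlockHash, el_genesis_hash: BlockHash) -> Result<(), String> {
    if header_hash == el_genesis_hash {
        Ok(())
    } else {
        Err("genesis state payload header does not match the EL genesis block".into())
    }
}
```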

The first commit also runs the merge-readiness/Capella-readiness checks prior to genesis. This has two effects:

- Give more information on the execution node's status and its readiness for genesis.
- Prevent the `el_offline` status from being set on `/eth/v1/node/syncing`, which previously caused the VC to complain loudly.

I would like to include this for the Holesky reboot. It would have caught the misconfig that doomed the first Holesky.

## Additional Info

- Geth doesn't serve payload bodies prior to genesis, which is why we use the legacy methods. I haven't checked with other ELs yet.
- Currently this is logging errors with _Capella_ genesis states generated by `ethereum-genesis-generator` because the `withdrawals_root` is not set correctly (it is 0x0). This is not a blocker for Holesky, as it starts from Bellatrix (Pari is investigating).
2023-09-21 00:26:53 +00:00
Jimmy Chen
1e9925435e Reuse fork choice read lock instead of re-acquiring it immediately (#4688)
## Issue Addressed

I went through the code base and looked for places where we acquire fork choice locks (after the deadlock bug was found and fixed in #4687), and discovered an instance where we re-acquire a lock immediately after dropping it. This shouldn't cause a deadlock like the other issue, but it is slightly less efficient.
2023-09-21 00:26:52 +00:00
Michael Sproul
4b6cb3db2c Prevent port re-use in HTTP API tests (#4745)
## Issue Addressed

CI is plagued by `AddrAlreadyInUse` failures, which are caused by race conditions in allocating free ports.

This PR removes all usages of the `unused_port` crate for Lighthouse's HTTP API, in favour of passing `:0` as the listen address. As a result, the listen address isn't known ahead of time and must be read from the listening socket after it binds. This requires tying some self-referential knots, which is a little disruptive, but hopefully doesn't clash too much with Deneb 🤞
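
The `:0` technique in miniature (a sketch with `std::net`, not the Warp-specific code):

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Ask the OS for any free port instead of pre-allocating one, which
    // eliminates the race between "find a free port" and "bind it".
    let listener = TcpListener::bind("127.0.0.1:0")?;
    // The actual port is only known after binding.
    println!("listening on {}", listener.local_addr()?);
    Ok(())
}
```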

There are still a few usages of `unused_tcp4_port` left in cases where we start external processes, like the `watch` Postgres DB, Anvil, Geth, Nethermind, etc. Removing these usages is non-trivial because it's hard to read the port back from an external process after starting it with `--port 0`. We might be able to do something on Linux where we read from `/proc/`, but I'll leave that for future work.
2023-09-20 01:19:03 +00:00
Jimmy Chen
665334e936 Implement SSZ decoding for SignedBlockContents (#4744)
* Implement `SignedBlockContent` decoding and fixed bug in `SignedBlockContent::new`

* Update Cargo.lock file

* Use `make_genesis_spec` to simplify test setup.

* Fix syntax errors.
2023-09-19 18:18:05 -04:00
João Oliveira
d386a07b0c validator client: start http api before genesis (#4714)
## Issue Addressed

On a new network a user might require importing validators before waiting until genesis has occurred.

## Proposed Changes

Starts the validator client http api before waiting for genesis 

## Additional Info

cc @antondlr
2023-09-15 10:08:30 +00:00
Jimmy Chen
b88e57c989 Update Java runtime requirement to 17 for Web3Signer tests (#4681)
Web3Signer now requires Java runtime v17, see [v23.8.0 release](https://github.com/Consensys/web3signer/releases/tag/23.8.0).

We have some Web3Signer tests that require a compatible Java runtime to be installed on dev machines. This PR updates the `setup` documentation in the Lighthouse book, and also fixes a small typo.
2023-09-15 08:49:14 +00:00
realbigsean
5f98a7b8ad Deneb PR feedback updates (#4716)
* move length update outside of if let in LRU cache

* add comment and use hex for G1_POINT_AT_INFINITY

* remove some misleading comments from `ssz_snappy`

* make sure we can't overflow on blobs by range requests with large counts

* downgrade gossip verification internal availability check error

* change blob rpc responses from BlockingFnWithManualSendOnIdle to BlockingFn

* remove unnecessary collect in blobs by range response

* add a comment to blobs by range response start slot logic

* typo persist_data_availabilty_checker -> persist_data_availability_checker

* unify cheap_state_advance_to_obtain_committees
2023-09-15 15:05:48 +10:00
Age Manning
e4ed317b76 Add Experimental QUIC support (#4577)
## Issue Addressed

#4402 

## Proposed Changes

This PR adds QUIC support to Lighthouse. As this is not officially spec'd this will only work between lighthouse <-> lighthouse connections. We attempt a QUIC connection (if the node advertises it) and if it fails we fallback to TCP. 

This should be a backwards compatible modification. We want to test this functionality on live networks to observe any improvements in bandwidth/latency.

NOTE: This also removes the websockets transport as I believe no one is really using it. It should be mentioned in our release however.


Co-authored-by: João Oliveira <hello@jxs.pt>
2023-09-15 03:07:24 +00:00
Pawan Dhananjay
2cfcb51207 Blob references in ckzg (#4723)
* Move to using references in ckzg functions

* cleanup TrustedSetup a bit

* Remove BYTES_PER_FIELD_ELEMENT from KzgPreset
2023-09-14 12:14:30 +10:00
Michael Sproul
1b4bc8818b Release v4.4.111-exp (#4729) 2023-09-14 10:07:26 +10:00
Stefan
4a31e369bf increase the max topic subscriptions #4581 (#4588)
* increase the max topic subscriptions #4581

* make the max_subscription limitation based off constants / configuration

* format

* wording & add deneb topic array

* reduce max_subscriptions_per_request to 2x

* format

* update comment
2023-09-13 17:06:33 +10:00
Michael Sproul
5cb2ed3696 Restore custom image for Cross 2023-09-13 14:43:02 +10:00
Michael Sproul
f7c6b7d64b Bump schema version to v24 2023-09-13 14:00:28 +10:00
Michael Sproul
68f80cc862 Change default epochs-per-state-diff to 16
This should make replaying diffs during non-finality a bit quicker.
2023-09-13 13:56:53 +10:00
Michael Sproul
838e104b25 Attempt to fix flaky test 2023-09-13 13:54:03 +10:00
Michael Sproul
d961d2c9ed Disable ARM docker builds 2023-09-13 12:51:20 +10:00
Michael Sproul
b8e04ce5a4 Merge remote-tracking branch 'origin/unstable' into tree-states 2023-09-13 11:25:18 +10:00
realbigsean
8e7b57a794 Merge pull request #4719 from jimmygchen/deneb-merge-from-unstable-20230911
Deneb merge from unstable 20230911
2023-09-12 17:41:42 -04:00
realbigsean
58cd50681c fix the missing components response code for block publish (#4721) 2023-09-11 09:54:14 -04:00
Jack McPherson
35f47f454f Await listening address from libp2p in RPC tests setup (#4705)
## Issue Addressed

#4704 

## Proposed Changes

 - Receive multiaddr from libp2p by awaiting listener setup

## Additional Info

See also: #4675
2023-09-11 06:14:56 +00:00
Jimmy Chen
d4aedab21f Fix some compilation errors in tests 2023-09-11 14:07:08 +10:00
Jimmy Chen
6771954c5f Merge unstable 20230911 into deneb-free-blobs. 2023-09-11 12:09:58 +10:00
Jimmy Chen
1e4ee7aa5e Tree states to support per-slot state diffs (#4652)
* Support per slot state diffs

* Store HierarchyConfig on disk. Support storing hdiffs at per slot level.

* Revert HierachyConfig change for testing.

* Add validity check for the hierarchy config when opening the DB.

* Update HDiff tests.

* Fix `get_cold_state` panic when the diff for the slot isn't stored.

* Use slots instead of epochs for storing snapshots in freezer DB.

* Add snapshot buffer to `diff_buffer_cache` instead of loading it from db every time.

* Add `hierarchy-exponents` cli flag to beacon node.

* Add test for `StorageStrategy::ReplayFrom` and ignore a flaky test.

* Drop hierarchy_config in tests for more frequent snapshot and fix an issue where hdiff wasn't stored unless it's an epoch boundary slot.
2023-09-11 10:19:40 +10:00
Jimmy Chen
1db739490e Update BlindedBlobsBundle SSZ list max length and update builder tests (#4710)
* Update mev-rs and ethereum-consensus

* Fix mock builder open bid to return fork versioned response

* Update `mev-rs` and `ethereum-consensus`

* Remove BuilderKzgCommitments and use BlockBodyKzgCommitments everywhere.

* Update testnet scripts to support builder testing and update README.md.

* Add comment on `mev-rs` version.

* Add `BN_ARGS` config to `./scripts/tests/vars.env`

* Update builder testing command in README.md

* Reject zero block hash payloads after Bellatrix.

* Update scripts/local_testnet/README.md

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-09-09 16:10:15 +10:00
Jimmy Chen
50bf40b4bc Update GET config/spec endpoint to support Deneb config (#4708)
* Update `GET config/spec` endpoint to support Deneb config.

* Update config spec `http_api` test
2023-09-09 16:09:50 +10:00
Jimmy Chen
87ec7599fe Fix duplicate geth startup command in doppelganger_protection.sh (#4713) 2023-09-08 12:25:57 +10:00
Michael Sproul
8db44decb7 Fix parent_beacon_block_root during proposer prep (#4703)
* Fix `parent_beacon_block_root` during prep/reorg

* Fix another bug and add tests

* Remove overzealous payload attributes check
2023-09-07 15:43:23 +10:00
Jimmy Chen
f9bea3c174 Deneb review common/eth2 (#4698)
* Update comments and small cleanup.

* Deserialize into `SsePayloadAttributesV3` for Deneb fork. Update `SignedBlockContents::blobs_cloned` to return blobs for `BlindedBlockAndBlobSidecars`.

* Improve code readability and error handling when converting blinded block into full block.
2023-09-06 14:39:25 +10:00
Jimmy Chen
1ff4033830 Remove Node.js from release-tests CI job since we no longer use ganache (#4691)
## Issue Addressed

I noticed a node.js version warning on our CI, and thought about updating the version to get rid of this warning, but then realized we may not need node.js anymore now that we're using `anvil` instead of `ganache`.

> The following actions uses node12 which is deprecated and will be forced to run on node16: actions/setup-node@v2. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/
2023-09-06 04:37:05 +00:00
Ricki Moore
0caf2af771 Feat: siren faq update (#4685)
## Issue Addressed

Siren FAQ requires more information regarding network connections to lighthouse BN/VC

## Proposed Changes

Added more info regarding port, BN/VC flags, ssh tunneling and VPNs access
2023-09-06 04:37:04 +00:00
Daichuan Wu
48e2b205e8 Fix some typos in "Advanced Networking" documentation (#4672)
## Issue Addressed
N/A

## Proposed Changes
The current Advanced Networking page references the ["--listen-addresses"](14924dbc95/book/src/advanced_networking.md (L124C8-L124C8)) argument, which does not exist in the beacon node. This PR changes such instances of "--listen-addresses" to "--listen-address".

Additionally, the page mentions using sockets that [both listen to IPv6](14924dbc95/book/src/advanced_networking.md (L151)) in a dual-stack setup, which appears to be a mistake. Hence, this PR also changes said line to "using one socket for IPv4 and another socket for IPv6".

## Additional Info
None.
2023-09-06 04:37:03 +00:00
chonghe
291ff640f1 Minor revision to Lighthouse Book on validator-manager (#4638)
Correct the formatting and remove `http-port 5062` to make the command simpler

Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
2023-09-06 04:37:02 +00:00
Michael Sproul
13606533b5 Simplifications for ./crypto (#4677) 2023-09-05 17:22:43 -04:00
Paul Hauner
2550170337 Deneb review suggestions (3) (#4694)
* Fix typos

* Avoid consuming/cloning blob

* Tidy comments
2023-09-05 15:50:57 -04:00
Paul Hauner
0bfc933c50 Deneb review suggestions (2) (#4680)
* Serde deny unknown fields

* Require 0x prefix
2023-09-05 15:49:57 -04:00
Paul Hauner
e783a40e01 Deneb review suggestions (#4678)
* Increase resolution for single lookup delay

* Remove code duplication

* Remove trusted setup code duplication
2023-09-05 15:49:11 -04:00
Jimmy Chen
5ea38d90cc Pin foundry toolchain version to fix stuck CI jobs (#4682) 2023-09-05 09:41:11 -04:00
Michael Sproul
2841f60686 Release v4.4.1 (#4690)
## Proposed Changes

New release to replace the cancelled v4.4.0 release.

This release includes the bugfix #4687 which avoids a deadlock that was present in v4.4.0.

## Additional Info

Awaiting testing over the weekend this will be merged Monday September 4th.
2023-09-04 02:56:52 +00:00
Michael Sproul
74eb267643 Remove double-locking deadlock from HTTP API (#4687)
## Issue Addressed

Fix a deadlock introduced in #4236 which was caught during the v4.4.0 release testing cycle (with thanks to @paulhauner and `gdb`).

## Proposed Changes

Avoid re-locking the fork choice read lock when querying a state by root in the HTTP API. This avoids a deadlock due to the lock already being held.

## Additional Info

The [RwLock docs](https://docs.rs/lock_api/latest/lock_api/struct.RwLock.html#method.read) explicitly advise against re-locking:

> Note that attempts to recursively acquire a read lock on a RwLock when the current thread already holds one may result in a deadlock.
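
A sketch of the hazard (with `std::sync::RwLock` for illustration): if a writer queues between the two reads, the second read can block forever:

```rust
use std::sync::RwLock;

fn double_read(lock: &RwLock<u64>) -> u64 {
    let first = lock.read().unwrap();
    // BAD: a second read on the same thread while `first` is still held.
    // A queued writer can block this acquisition while it is itself blocked
    // waiting for `first` to be released: a deadlock.
    let second = lock.read().unwrap();
    *first + *second
}
```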
2023-08-31 11:18:00 +00:00
Paul Hauner
e99ba3a14e Release v4.4.0 (#4673)
## Issue Addressed

NA

## Proposed Changes

Bump versions from `v4.3.0` to `v4.4.0`.

## Additional Info

NA
2023-08-31 02:12:35 +00:00
Jimmy Chen
41ac9b6a08 Pin foundry toolchain version to fix stuck CI jobs 2023-08-30 21:41:35 +10:00
Philippe Schommers
0c23c86849 feat: add chiado (#4530)
## Issue Addressed

N/A

## Proposed Changes

Adds the Chiado (Gnosis testnet) network to the builtin one.

## Additional Info

It's a fairly trivial change all things considered as the preset already exists, so shouldn't be hard to maintain.

It compiles and seems to work, but I'm sure I missed something?

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-08-29 05:56:30 +00:00
Michael Sproul
f284e0e264 Fix bug in block root storage (#4663)
## Issue Addressed

Fix a bug in the storage of the linear block roots array in the freezer DB. Previously this array was always written as part of state storage (or block backfill). With state pruning enabled by #4610, these states were no longer being written and as a result neither were the block roots.

The impact is quite low, we would just log an error when trying to forwards-iterate the block roots, which for validating nodes only happens when they try to look up blocks for peers:

> Aug 25 03:42:36.980 ERRO Missing chunk in forwards iterator      chunk index: 49726, service: freezer_db

Any node checkpoint synced off `unstable` is affected and has a corrupt database. If you see the log above, you need to re-sync with the fix. Nodes that haven't checkpoint synced recently should _not_ be corrupted, even if they ran the buggy version.

## Proposed Changes

- Use a `ChunkWriter` to write the block roots when states are not being stored.
- Tweak the usage of `get_latest_restore_point` so that it doesn't return a nonsense value when state pruning is enabled.
- Tweak the guarantee on the block roots array so that block roots are assumed available up to the split slot (exclusive). This is a bit nicer than relying on anything to do with the latest restore point, which is a nonsensical concept when there aren't any restore points.

## Additional Info

I'm looking forward to deleting the chunked vector code for good when we merge tree-states 😁
2023-08-28 05:34:28 +00:00
Paul Hauner
d61f507184 Add Holesky (#4653)
## Issue Addressed

NA

## Proposed Changes

Add the Holesky network config as per 36e4ff2d51/custom_config_data.

Since the genesis state is ~190MB, I've opted to *not* include it in the binary and instead download it at runtime (see #4564 for context). To download this file we have:

- A hard-coded URL for a SigP-hosted S3 bucket with the Holesky genesis state. Assuming this download works correctly, users will be none the wiser that the state wasn't included in the binary (apart from some additional logs)
- If the user provides a `--checkpoint-sync-url` flag, then LH will download the genesis state from that server rather than our S3 bucket.
- If the user provides a `--genesis-state-url` flag, then LH will download the genesis state from that server regardless of the S3 bucket or `--checkpoint-sync-url` flag.
- Whenever a genesis state is downloaded it is checked against a checksum baked into the binary.
- A genesis state will never be downloaded if it's already included in the binary.
- There is a `--genesis-state-url-timeout` flag to tweak the timeout for downloading the genesis state file.

## Log Output

Example of log output when a state is downloaded:

```bash
Aug 23 05:40:13.424 INFO Logging to file                         path: "/Users/paul/.lighthouse/holesky/beacon/logs/beacon.log"
Aug 23 05:40:13.425 INFO Lighthouse started                      version: Lighthouse/v4.3.0-bd9931f+
Aug 23 05:40:13.425 INFO Configured for network                  name: holesky
Aug 23 05:40:13.426 INFO Data directory initialised              datadir: /Users/paul/.lighthouse/holesky
Aug 23 05:40:13.427 INFO Deposit contract                        address: 0x4242424242424242424242424242424242424242, deploy_block: 0
Aug 23 05:40:13.427 INFO Downloading genesis state               info: this may take some time on testnets with large validator counts, timeout: 60s, server: https://sigp-public-genesis-states.s3.ap-southeast-2.amazonaws.com/
Aug 23 05:40:29.895 INFO Starting from known genesis state       service: beacon
```

Example of log output when there are no URLs specified:

```
Aug 23 06:29:51.645 INFO Logging to file                         path: "/Users/paul/.lighthouse/goerli/beacon/logs/beacon.log"
Aug 23 06:29:51.646 INFO Lighthouse started                      version: Lighthouse/v4.3.0-666a39c+
Aug 23 06:29:51.646 INFO Configured for network                  name: goerli
Aug 23 06:29:51.647 INFO Data directory initialised              datadir: /Users/paul/.lighthouse/goerli
Aug 23 06:29:51.647 INFO Deposit contract                        address: 0xff50ed3d0ec03ac01d4c79aad74928bff48a7b2b, deploy_block: 4367322
The genesis state is not present in the binary and there are no known download URLs. Please use --checkpoint-sync-url or --genesis-state-url.
```

## Additional Info

I tested the `--genesis-state-url` flag with all 9 Goerli checkpoint sync servers on https://eth-clients.github.io/checkpoint-sync-endpoints/ and they all worked 🎉 

My IDE eagerly formatted some `Cargo.toml`. I've disabled it but I don't see the value in spending time reverting the changes that are already there.

I also added the `GenesisStateBytes` enum to avoid an unnecessary clone on the genesis state bytes baked into the binary. This is not a huge deal on Mainnet, but will become more relevant when testing with big genesis states.
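
A sketch of the enum's purpose (illustrative definition): borrow the bytes baked into the binary, own the downloaded ones, and avoid cloning either:

```rust
/// Genesis state bytes, either borrowed from the binary or downloaded.
enum GenesisStateBytes {
    /// Baked into the binary at compile time; no clone needed.
    Baked(&'static [u8]),
    /// Downloaded at runtime and owned.
    Downloaded(Vec<u8>),
}

impl AsRef<[u8]> for GenesisStateBytes {
    fn as_ref(&self) -> &[u8] {
        match self {
            GenesisStateBytes::Baked(bytes) => bytes,
            GenesisStateBytes::Downloaded(bytes) => bytes,
        }
    }
}
```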

When we do a fresh checkpoint sync we're downloading the genesis state to check the `genesis_validators_root` against the finalised state we receive. This is not *entirely* pointless, since we verify the checksum when we download the genesis state so we are actually guaranteeing that the finalised state is on the same network. There might be a smarter/less-download-y way to go about this, but I've run out of cycles to figure that out. Perhaps we can grab it in the next release?
2023-08-28 05:34:27 +00:00
Mac L
e056c279aa Increase web3signer_tests timeouts (#4662)
## Issue Addressed

`web3signer_tests` can sometimes timeout.

## Proposed Changes

Increase the `web3signer_tests` timeout from 20s to 30s

## Additional Info
Previously I believed the consistent CI failures were due to this, but it ended up being something different. See below:

---

The timing of this makes it very likely it is related to the [latest release of `web3-signer`](https://github.com/Consensys/web3signer/releases/tag/23.8.1).

I now believe this is due to an out of date Java runtime on our runners. A newer version of Java became a requirement with the new `web3-signer` release.

However, I was getting timeouts locally, which implies that the margin before timeout is quite small at 20s so bumping it up to 30s could be a good idea regardless.
2023-08-28 00:55:34 +00:00
Mac L
55e02e7c3f Show --gui flag in help text (#4660)
## Issue Addressed

N/A

## Proposed Changes

Remove the `hidden(true)` modifier on the `--gui` flag so it shows up when running `lighthouse bn --help`

## Additional Info

We need to include this now that Siren has had its first stable release.
2023-08-28 00:55:33 +00:00
Jimmy Chen
9c24cd4ad4 Do not log slot clock error prior to genesis (#4657)
## Issue Addressed

#4654 

## Proposed Changes

Only log an error if we're unable to read the slot clock after genesis.

I thought about simply downgrading the `error` to a `warn`, but it still feels like unnecessary noise before genesis, and it would be good to retain the error log once we're past genesis. But I'd be OK with just downgrading the log level, too.
2023-08-28 00:55:32 +00:00
Michael Sproul
8e95b69a1a Send success code for duplicate blocks on HTTP (#4655)
## Issue Addressed

Closes #4473 (take 3)

## Proposed Changes

- Send a 202 status code by default for duplicate blocks, instead of 400. This conveys to the caller that the block was published, but makes no guarantees about its validity. Block relays can count this as a success or a failure as they wish.
- For users wanting finer-grained control over which status is returned for duplicates, a flag `--http-duplicate-block-status` can be used to adjust the behaviour. A 400 status can be supplied to restore the old (spec-compliant) behaviour, or a 200 status can be used to silence VCs that warn loudly for non-200 codes (e.g. Lighthouse prior to v4.4.0).
- Update the Lighthouse VC to gracefully handle success codes other than 200. The info message isn't the nicest thing to read, but it covers all bases and isn't a nasty `ERRO`/`CRIT` that will wake anyone up.

## Additional Info

I'm planning to raise a PR to `beacon-APIs` to specify that clients may return 202 for duplicate blocks. Really it would be nice to use some 2xx code that _isn't_ the same as the code for "published but invalid". I think unfortunately there aren't any suitable codes, and maybe the best fit is `409 CONFLICT`. Given that we need to fix this promptly for our release, I think using the 202 code temporarily with configuration strikes a nice compromise.
2023-08-28 00:55:31 +00:00
João Oliveira
c258270d6a update dependencies (#4639)
## Issue Addressed

updates underlying dependencies and removes the ignored `RUSTSEC` advisories for `cargo audit`.

Also switches `procinfo` to `procfs` on `eth2` to remove the `nom` warning, `procinfo` is unmaintained see [here](https://github.com/danburkert/procinfo-rs/issues/46).
2023-08-28 00:55:28 +00:00
realbigsean
ce824e00a3 Merge pull request #4658 from realbigsean/merge-unstable-deneb-aug-24
Merge unstable deneb aug 24
2023-08-24 17:33:49 -04:00
realbigsean
42b34dbbe4 cargo fmt 2023-08-24 14:35:06 -04:00
realbigsean
f90b190d9a Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-aug-24 2023-08-24 14:34:32 -04:00
realbigsean
14924dbc95 rust 1.72 lints (#4659) 2023-08-24 14:33:24 -04:00
realbigsean
8ed77d4550 Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-aug-24 2023-08-24 10:54:43 -04:00
Michael Sproul
fa6003bb5c Use lockfile with cross and fix audit fail (#4656)
## Issue Addressed

Temporary ignore for #4651. We are unaffected, and upstream will be patched in a few days.

## Proposed Changes

- Ignore cargo audit failures (unblocks CI)
- Use `--locked` when building with `cross`. We use `--locked` for regular builds, and I think excluding it from `cross` was just an oversight.

I think for consistent builds it makes sense to use `--locked` while building. This is particularly relevant for release binaries, which otherwise will just use a random selection of dependencies that exist on build day (near impossible to recreate if we had to).
2023-08-24 05:54:38 +00:00
Pawan Dhananjay
ea43b6a53c Revive mplex (#4619)
## Issue Addressed

N/A

## Proposed Changes

In #4431 , we seem to have removed support for mplex as it is being deprecated in libp2p. See https://github.com/libp2p/specs/issues/553 . Related rust-libp2p PR https://github.com/libp2p/rust-libp2p/pull/3920
However, since this isn't part of the official [consensus specs](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#multiplexing), we still need to support mplex. 

> Clients MUST support [mplex](https://github.com/libp2p/specs/tree/master/mplex) and MAY support [yamux](https://github.com/hashicorp/yamux/blob/master/spec.md).

This PR adds back mplex support as before.
2023-08-24 05:54:37 +00:00
Eitan Seri-Levi
661779f08e Implement expected withdrawals endpoint (#4390)
## Issue Addressed

[#4029](https://github.com/sigp/lighthouse/issues/4029)

## Proposed Changes

implement expected_withdrawals HTTP API per the spec 

https://github.com/ethereum/beacon-APIs/pull/304

## Additional Info
2023-08-24 05:54:36 +00:00
realbigsean
255394419c Merge pull request #4649 from jimmygchen/merge-unstable-to-deneb-20230822
Merge `unstable` to deneb 20230822
2023-08-23 14:41:31 -04:00
Jimmy Chen
4bd527546b Fix failing beacon chain tests and remove unnecessary blob clone 2023-08-23 11:23:12 +10:00
Jimmy Chen
609c2c2250 Update ckzg to latest version with blst 0.3.11 2023-08-23 06:59:23 +10:00
Jimmy Chen
03c610ed77 Fix release test compilation error 2023-08-22 23:20:58 +10:00
Jimmy Chen
7138763299 Empty commit 2023-08-22 22:59:49 +10:00
Jimmy Chen
8a6f171b2a Merge branch 'unstable' into merge-unstable-to-deneb-20230822
# Conflicts:
#	beacon_node/beacon_chain/src/builder.rs
#	beacon_node/beacon_chain/tests/store_tests.rs
#	beacon_node/client/src/builder.rs
#	beacon_node/src/config.rs
#	beacon_node/store/src/hot_cold_store.rs
#	lighthouse/tests/beacon_node.rs
2023-08-22 21:20:47 +10:00
Jimmy Chen
49d7fdfed1 Fix proposer cache miss in blob verification (#4646) 2023-08-21 18:35:49 -04:00
realbigsean
63b876b87e fix beacon chain tests (#4645) 2023-08-21 11:42:50 -04:00
Jimmy Chen
f92b856cd1 Remove Antithesis docker build workflow (#4642)
## Issue Addressed

This PR remove Antithesis docker build CI workflow. Confirmed with Antithesis team that this is no longer used.
2023-08-21 05:02:36 +00:00
Aoi Kurokawa
91f3bc274b Fix the link of anvil in lighthouse book (#4641)
## Issue Addressed

Wrong link for anvil

## Proposed Changes

Fix the link to correct one.
2023-08-21 05:02:35 +00:00
Michael Sproul
524d9af288 Fix beacon-processor-max-workers (#4636)
## Issue Addressed

Fixes a bug in the handling of `--beacon-process-max-workers` which caused it to have no effect.

## Proposed Changes

For this PR I channeled @ethDreamer and saw deep into the faulty CLI config -- this bug is almost identical to the one Mark found and fixed in #4622.
2023-08-21 05:02:34 +00:00
zhiqiangxu
b3e1c297c8 ForkChoice: remove an impossible case for parent_checkpoints (#4626)
`parent_finalized.epoch + 1 > block_epoch` will never be `true` since, as the comment says:

```
 A block in epoch `N` cannot contain attestations which would finalize an epoch higher than `N - 1`.
```
2023-08-21 05:02:33 +00:00
Michael Sproul
20067b9465 Remove checkpoint alignment requirements and enable historic state pruning (#4610)
## Issue Addressed

Closes #3210
Closes #3211

## Proposed Changes

- Checkpoint sync from the latest finalized state regardless of its alignment.
- Add the `block_root` to the database's split point. This is _only_ added to the in-memory split in order to avoid a schema migration. See `load_split`.
- Add a new method to the DB called `get_advanced_state`, which looks up a state _by block root_, with a `state_root` as fallback. Using this method prevents accidental accesses of the split's unadvanced state, which does not exist in the hot DB and is not guaranteed to exist in the freezer DB at all. Previously Lighthouse would look up this state _from the freezer DB_, even if it was required for block/attestation processing, which was suboptimal. (See the sketch after this list.)
- Replace several state look-ups in block and attestation processing with `get_advanced_state` so that they can't hit the split block's unadvanced state.
- Do not store any states in the freezer database by default. All states will be deleted upon being evicted from the hot database unless `--reconstruct-historic-states` is set. The anchor info which was previously used for checkpoint sync is used to implement this, including when syncing from genesis.
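
A sketch of `get_advanced_state`'s lookup order, with toy types standing in for the real store and state:

```rust
use std::collections::HashMap;

type Root = [u8; 32];
type State = String; // stand-in for a BeaconState

struct Store {
    by_block_root: HashMap<Root, State>,
    by_state_root: HashMap<Root, State>,
}

impl Store {
    /// Look the state up by block root first, falling back to the state
    /// root, so the split block's unadvanced state is never touched.
    fn get_advanced_state(&self, block_root: &Root, state_root: &Root) -> Option<&State> {
        self.by_block_root
            .get(block_root)
            .or_else(|| self.by_state_root.get(state_root))
    }
}
```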

## Additional Info

Needs further testing. I want to stress-test the pruned database under Hydra.

The `get_advanced_state` method is intended to become more relevant over time: `tree-states` includes an identically named method that returns advanced states from its in-memory cache.

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-08-21 05:02:32 +00:00
realbigsean
7d468cb487 More deneb cleanup (#4640)
* remove protoc and token from network tests github action

* delete unused beacon chain methods

* downgrade writing blobs to store log

* reduce diff in block import logic

* remove some todo's and deneb built in network

* remove unnecessary error, actually use some added metrics

* remove some metrics, fix missing components on publish funcitonality

* fix status tests

* rename sidecar by root to blobs by root

* clean up some metrics

* remove unnecessary feature gate from attestation subnet tests, clean up blobs by range response code

* pawan's suggestion in `protocol_info`, peer score in matching up batch sync block and blobs

* fix range tests for deneb

* pub block and blob db cache behind the same mutex

* remove unused errs and an empty file

* move sidecar trait to new file

* move types from payload to eth2 crate

* update comment and add flag value name

* make function private again, remove allow unused

* use reth rlp for tx decoding

* fix compile after merge

* rename kzg commitments

* cargo fmt

* remove unused dep

* Update beacon_node/execution_layer/src/lib.rs

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>

* Update beacon_node/beacon_processor/src/lib.rs

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>

* pawan's suggestion for vec capacity

* cargo fmt

* Revert "use reth rlp for tx decoding"

This reverts commit 5181837d81.

* remove reth rlp

---------

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
2023-08-20 21:17:17 -04:00
Jimmy Chen
4898430330 Add Deneb builder test & update mock builder (#4607)
* Update mock builder, mev-rs dependencies, eth2 lib to support deneb builder flow

* Replace `sharingForkTime` with `cancunTime`

* Patch `ethereum-consensus` to include some deneb-devnet-8 changes

* Add deneb builder test and fix block contents deserialization

* Fix builder bid encoding issue and passing deneb builder test \o/

* Fix test compilation

* Revert `cancunTime` change in genesis to pass doppelganger tests
2023-08-18 20:12:09 -04:00
ethDreamer
687c58fde0 Fix Prefer Builder Flag (#4622) 2023-08-18 03:22:27 +00:00
Jimmy Chen
f031a570ce Merge pull request #4632 from jimmygchen/revive-mplex-deneb
Revive mplex (Deneb)
2023-08-17 16:37:53 +10:00
Pawan Dhananjay
c280b4849c Add back mplex 2023-08-17 16:09:03 +10:00
Michael Sproul
c7e978b727 Update blst to 0.3.11 (#4624)
## Proposed Changes

This PR updates `blst` to 0.3.11, which gives us _runtime detection of CPU features_ 🎉

Although [performance benchmarks](https://gist.github.com/michaelsproul/f759fa28dfa4003962507db34b439d6c) don't show a substantial detriment to running the `portable` build vs `modern`, in order to take things slowly I propose the following roll-out strategy:

- Keep both `modern` and `portable` builds for releases/Docker images.
- Run the `portable` build on half of SigP's infrastructure to monitor for performance deficits.
- Listen out for user issues with the `portable` builds (e.g. SIGILLs from misdetected hardware).
- Make the `portable` build the default and remove the `modern` build from our release binaries & Docker images.
2023-08-17 02:37:32 +00:00
zhiqiangxu
b33bfe25b7 add metrics::VALIDATOR_DUTIES_SYNC_HTTP_POST for post_validator_duties_sync (#4617)
It seems `post_validator_duties_sync` is the only API which doesn't have its own metric in `duties_service`, so this PR adds `metrics::VALIDATOR_DUTIES_SYNC_HTTP_POST` for completeness.
2023-08-17 02:37:31 +00:00
zhiqiangxu
609819bb4d attester_duties: remove unnecessary case (#4614)
Since `tolerant_current_epoch` is expected to be either `current_epoch` or `current_epoch+1`, we can eliminate a case here. 

And added a comment about `compute_historic_attester_duties`, since `RelativeEpoch::from_epoch` will only allow `request_epoch == current_epoch-1` when `request_epoch < current_epoch`.
2023-08-17 02:37:30 +00:00
Michael Sproul
7251a93c5e Don't kill SSE stream if channel fills up (#4500)
## Issue Addressed

Closes #4245

## Proposed Changes

- If an SSE channel fills up, send a comment instead of terminating the stream (see the sketch after this list).
- Add a CLI flag for scaling up the SSE buffer: `--http-sse-capacity-multiplier N`.
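
A sketch of the overflow handling with a tokio broadcast channel (illustrative; the real code feeds Warp's SSE machinery):

```rust
use tokio::sync::broadcast;

async fn forward_events(mut rx: broadcast::Receiver<String>) {
    loop {
        match rx.recv().await {
            Ok(event) => println!("data: {event}\n"),
            // The channel filled up and we missed `n` events: emit an SSE
            // comment (a line starting with ':') to keep the stream open
            // instead of terminating it.
            Err(broadcast::error::RecvError::Lagged(n)) => {
                println!(": lagged, missed {n} events\n")
            }
            Err(broadcast::error::RecvError::Closed) => break,
        }
    }
}
```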

## Additional Info

~~Blocked on #4462. I haven't rebased on that PR yet for initial testing, because it still needs some more work to handle long-running HTTP threads.~~

- [x] Add CLI flag tests.
2023-08-17 02:37:29 +00:00
ethDreamer
0e04f36a20 Add Test for deneb Block Hash Calculation (#4621) 2023-08-16 11:14:51 -04:00
Jimmy Chen
ba6662344b Update docs to remove lighthouse/database/historical_blocks (removed in #4307) (#4627) 2023-08-16 11:13:44 -04:00
realbigsean
e1991e7e81 Merge pull request #4628 from jimmygchen/merge-unstable-to-deneb-20230816
Merge unstable to deneb 20230816
2023-08-16 11:12:27 -04:00
Jimmy Chen
ff792d950c Merge branch 'unstable' into merge-unstable-to-deneb-20230816
# Conflicts:
#	beacon_node/http_api/src/lib.rs
2023-08-16 14:31:59 +10:00
Jimmy Chen
59c24bcd2d Fix disable backfill flag not working correctly (#4615)
## Issue Addressed

The feature flag used to control this feature is `disable_backfill` instead of `disable-backfill`.

kudos to @michaelsproul for discovering this bug!
2023-08-14 06:08:34 +00:00
Michael Sproul
249f85f1d9 Improve HTTP API error messages + tweaks (#4595)
## Issue Addressed

Closes #3404 (mostly)

## Proposed Changes

- Remove all uses of Warp's `and_then` (which backtracks) in favour of `then` (which doesn't); see the sketch after this list.
- Bump the priority of the `POST` method for `v2/blocks` to `P0`. Publishing a block needs to happen quickly.
- Run the new SSZ POST endpoints on the beacon processor. I think this was missed in between merging #4462 and #4504/#4479.
- Fix a minor issue in the validator registrations endpoint whereby an error from spawning the task on the beacon processor would be dropped.
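
A sketch of the distinction (assuming warp 0.3's `Filter` combinators): an `and_then` handler returns a `Result`, and a rejection makes warp backtrack and try sibling routes, while a `then` handler is infallible and never backtracks:

```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // `then`: the handler is infallible, so warp never backtracks out of
    // this route the way an `and_then` rejection would allow.
    let health = warp::path!("health").then(|| async { "ok" });

    warp::serve(health).run(([127, 0, 0, 1], 3030)).await;
}
```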

## Additional Info

I've tested this manually and can confirm that we no longer get the dreaded `Unsupported endpoint version` errors for queries like:

```
$ curl -X POST -H "Content-Type: application/json" --data @block.json "http://localhost:5052/eth/v2/beacon/blocks" | jq
{
  "code": 400,
  "message": "BAD_REQUEST: WeakSubjectivityConflict",
  "stacktraces": []
}
```

```
$ curl -X POST -H "Content-Type: application/octet-stream" --data @block.json "http://localhost:5052/eth/v2/beacon/blocks" | jq
{
  "code": 400,
  "message": "BAD_REQUEST: invalid SSZ: OffsetOutOfBounds(572530811)",
  "stacktraces": []
}
```

```
$ curl "http://localhost:5052/eth/v2/validator/blocks/7067595"
{"code":400,"message":"BAD_REQUEST: invalid query: Invalid query string","stacktraces":[]}
```

However, I can still trigger it by leaving off the `Content-Type`. We can re-test this aspect with #4575.
2023-08-14 04:06:37 +00:00
zhiqiangxu
912f869829 ForkChoice: remove head_block_root field (#4590)
This field is redundant with `ForkchoiceUpdateParameters.head_root`, the same `head_root` is assigned to both fields in [`get_head`](dfcb3363c7/consensus/fork_choice/src/fork_choice.rs (L508-L523)).
2023-08-14 04:06:34 +00:00
zhiqiangxu
dfab24bf92 opt maybe_update_best_child_and_descendant: remove an impossible case (#4583)
Here `child.weight == best_child.weight` is impossible since it's already checked [above](dfcb3363c7/consensus/proto_array/src/proto_array.rs (L878)).
2023-08-14 03:16:04 +00:00
Jimmy Chen
ca050053bf Use the native concurrency property to cancel workflows (#4572)
I noticed that some of our workflows aren't getting cancelled when a new one has been triggered, so we ended up having a long queue in our CI when multiple changes are triggered in a short period.

Looking at the comment here, I noticed the list of workflow IDs is outdated (some no longer exist, and some new ones are missing):
dfcb3363c7/.github/workflows/cancel-previous-runs.yml (L12-L13)

I attempted to update these, and came across this comment on the [`cancel-workflow-action`](https://github.com/styfle/cancel-workflow-action) repo:
> You probably don't need to install this custom action.
>
> Instead, use the native [concurrency](https://github.blog/changelog/2021-04-19-github-actions-limit-workflow-run-or-job-concurrency/) property to cancel workflows, for example:

So I thought that instead of updating the workflow and maintaining the workflow IDs, perhaps we can try experimenting with the [native `concurrency` property](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#concurrency).
2023-08-14 03:16:03 +00:00
zhiqiangxu
842b42297b Fix bug of init_from_beacon_node (#4613) 2023-08-14 00:29:47 +00:00
zhiqiangxu
e92359b756 use account_manager::CMD instead of magic string (#4612)
Make the code style a bit more consistent with the following lines.
2023-08-14 00:29:47 +00:00
zhiqiangxu
fa93b58257 remove optional_eth2_network_config (#4611)
It seems the passed [`optional_config`](dfcb3363c7/lighthouse/src/main.rs (L515)) is always `Some` instead of `None`.
2023-08-14 00:29:46 +00:00
zhiqiangxu
501ce62d7c minor optimize process_active_validator: avoid a call to state.get_validator (#4608) 2023-08-14 00:29:45 +00:00
João Oliveira
9d8b2764ef align editorconfig with rustfmt (#4600)
## Issue Addressed
There seems to be a conflict between `editorconfig` and `rustfmt`. 
`editorconfig` is configured with [`insert_final_newline=false`](https://github.com/sigp/lighthouse/blob/stable/.editorconfig#L9C1-L9C21) which [removes the newline](https://github.com/editorconfig/editorconfig/wiki/EditorConfig-Properties#insert_final_newline), whereas `rustfmt` [adds a newline](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#newline_style). 

## Proposed Changes

Align `.editorconfig` with `rustfmt`
2023-08-14 00:29:44 +00:00
zhiqiangxu
f1ac12f23a Fix some typos (#4565) 2023-08-14 00:29:43 +00:00
realbigsean
7f7ad799b3 Fix manifest lists (#4605)
* self hosted docker builds attempted fix

* use imagetools instead of docker manifest
2023-08-10 15:51:44 -04:00
realbigsean
ab37f02ddc Fix env var references (#4604)
* self hosted docker builds attempted fix

* fix env var references in docker builds
2023-08-10 13:09:18 -04:00
realbigsean
11027e3487 self hosted docker builds attempted fix (#4603) 2023-08-10 12:42:56 -04:00
realbigsean
87b165c304 bump clang in cross.toml (#4602) 2023-08-10 10:58:57 -04:00
realbigsean
1440dc0cd2 Merge pull request #4599 from realbigsean/self-hosted-docker-deneb
Self hosted docker deneb
2023-08-10 09:44:04 -04:00
realbigsean
754ce5ec61 remove self hosted runner check where it might not be needed 2023-08-10 09:42:50 -04:00
realbigsean
8d81f1bee7 self hosted docker builds 2023-08-10 09:42:36 -04:00
Jimmy Chen
0b7a426946 Builder flow for Deneb & Blobs (#4428)
* Add Deneb builder flow types with generics

* Update validator client `get_blinded_blocks` call to support Deneb

* `produceBlindedBlock` endpoint updates:
- Handle new Deneb BuilderBid response from builder endpoint (new BlindedBlobsBundle type)
- Build BlockContents response (containing kzg_commitments, proof and blinded_blob_sidecars)

* Appease Clippy lint

* Partial implementation of submit blinded block & blobs. Refactor existing `BlobSidecar` related types to support blinded blobs.

* Add associated types for BlockProposal

* Rename `AbstractSidecar` to `Sidecar`

* Remove blob cache as it's no longer necessary

* Remove unnecessary enum variant

* Clean up

* Handle unblinded blobs and publish full block contents

* Fix tests

* Add local EL blobs caching in blinded flow

* Remove BlockProposal and move associated Sidecar trait to AbstractExecPayload to simplify changes

* add blob roots associated type

* move raw blobs associated type to sidecar trait

* Fix todos and improve error handling

* Consolidate BlobsBundle from `execution_layer` into `consensus/types`

* Rename RawBlobs, Blobs, and BlobRoots

* Use `BlobRoots` type alias

* Update error message.

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* update builder bid type

# Conflicts:
#	consensus/types/src/builder_bid.rs

* Fix lint

* remove generic from builder bid

---------

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-08-10 09:32:49 -04:00
Eitan Seri-Levi
1fcada8a32 Improve transport connection errors (#4540)
## Issue Addressed

#4538 

## Proposed Changes

add newtype wrapper around DialError that extracts error messages and logs them in a more readable format

## Additional Info

I was able to test Transport Dial Errors in the situation where a libp2p instance attempts to ping a nonexistent peer. That error message should look something like

`A transport level error has ocurred: Connection refused (os error 61)`

AgeManning mentioned we should try fetching only the innermost error (in situations where there's a nested error). I took a stab at implementing that.

For non-transport DialErrors, I wrote out the error messages explicitly (as per the docs). Could potentially clean things up here if that's not necessary.


Co-authored-by: Age Manning <Age@AgeManning.com>
2023-08-10 00:10:09 +00:00
realbigsean
fddd4e4c87 Merge pull request #4591 from realbigsean/merge-unstable-deneb-aug-9
Merge unstable deneb aug 9
2023-08-09 16:24:18 -04:00
ethDreamer
2b5385fb46 Changes for devnet-8 (#4518)
* Addressed #4487

Add override threshold flag
Added tests for Override Threshold Flag
Override default shown in decimal

* Addressed #4445

Addressed Jimmy's Comments
No need for matches
Fix Mock Execution Engine Tests
Fix clippy
fix fcuv3 bug

* Fix Block Root Calculation post-Deneb

* Addressed #4444

Attestation Verification Post-Deneb
Fix Gossip Attestation Verification Test

* Addressed #4443

Fix Exit Signing for EIP-7044
Fix cross exit test
Move 7044 Logic to signing_context()

* Update EF Tests

* Addressed #4560

* Added Comments around EIP7045

* Combine Altair Deneb to Eliminate Duplicated Code
2023-08-09 15:44:47 -04:00
realbigsean
9e8a289d21 fix blobs by range test 2023-08-09 15:36:39 -04:00
realbigsean
4da6ca73d7 fix imports 2023-08-09 14:12:45 -04:00
realbigsean
c3ced28095 cargo fmt 2023-08-09 10:45:21 -04:00
realbigsean
12b5e9ad3d Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-aug-9 2023-08-09 10:42:51 -04:00
Michael Sproul
e373e9a107 Fix genesis state storage for genesis sync (#4589) 2023-08-09 19:42:14 +10:00
Paul Hauner
b60304b19f Use BeaconProcessor for API requests (#4462)
## Issue Addressed

NA

## Proposed Changes

Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:

1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
1. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
1. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).

Presently there are two levels of priorities:

- `Priority::P0`
    - The beacon processor will prioritise these above everything other than importing new blocks.
    - Roughly all validator-sensitive endpoints.
- `Priority::P1`
    - The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill things.
    - Everything that's not `Priority::P0`
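Modelled roughly, the two levels could look like this (a sketch; only the `P0`/`P1` names come from this PR):

```rust
/// Sketch of the two-level priority described above. Deriving `Ord`
/// orders variants by declaration, so `P0 < P1` and a min-oriented
/// scheduler would serve `P0` work first.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    /// Roughly all validator-sensitive endpoints.
    P0,
    /// Everything else, still scheduled above historical backfill work.
    P1,
}

fn main() {
    assert!(Priority::P0 < Priority::P1);
}
```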
    
The `--http-enable-beacon-processor false` flag can be supplied to revert back to the old behaviour of spawning new `tokio` tasks for each request:

```
        --http-enable-beacon-processor <BOOLEAN>
            The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
            "true", HTTP API requests will queued and scheduled alongside other tasks. When set to "false", HTTP API
            responses will be executed immediately. [default: true]
```
    
## New CLI Flags

I added some other new CLI flags:

```
        --beacon-processor-aggregate-batch-size <INTEGER>
            Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
            reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
            network. [default: 64]
        --beacon-processor-attestation-batch-size <INTEGER>
            Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
            usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
            [default: 64]
        --beacon-processor-max-workers <INTEGER>
            Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
            consumption. Reducing the value may result in decreased resource usage and diminished performance. The
            default value is the number of logical CPU cores on the host.
        --beacon-processor-reprocess-queue-len <INTEGER>
            Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
            messages from being dropped while lower values may help protect the node from becoming overwhelmed.
            [default: 12288]
```


I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the Github runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.

## Additional Info

I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses, I guess this is what we signed up for 🤷 

The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for a builder response.

I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](efbabe3252). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
2023-08-08 23:30:15 +00:00
realbigsean
02c7a2eaf5 Improve single block/blob logging (#4579)
* remove closure from `check_availability_mayb_import`

* improve logging, add wrapper struct to requested ids

* improve logging

* only log if we're in deneb. Only delay lookup if we're in deneb

* fix bug in missing components check
2023-08-08 18:45:11 -04:00
realbigsean
efbf906094 Merge pull request #4578 from jimmygchen/merge-unstable-to-deneb-20230808
Merge unstable to deneb 20230808
2023-08-08 10:42:14 -04:00
Jimmy Chen
3ba9047437 Fix release tests 2023-08-08 17:40:40 +10:00
Jimmy Chen
d401633100 Add same error handling for blob signing when pubkey is missing 2023-08-08 17:25:29 +10:00
Jimmy Chen
ec416df061 Merge branch 'unstable' into merge-unstable-to-deneb-20230808
# Conflicts:
#	Cargo.lock
#	beacon_node/beacon_chain/src/lib.rs
#	beacon_node/execution_layer/src/engine_api.rs
#	beacon_node/execution_layer/src/engine_api/http.rs
#	beacon_node/execution_layer/src/test_utils/mod.rs
#	beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
#	beacon_node/lighthouse_network/src/rpc/handler.rs
#	beacon_node/lighthouse_network/src/rpc/protocol.rs
#	beacon_node/lighthouse_network/src/service/utils.rs
#	beacon_node/lighthouse_network/tests/rpc_tests.rs
#	beacon_node/network/Cargo.toml
#	beacon_node/network/src/network_beacon_processor/tests.rs
#	lcli/src/parse_ssz.rs
#	scripts/cross/Dockerfile
#	validator_client/src/block_service.rs
#	validator_client/src/validator_store.rs
2023-08-08 17:02:51 +10:00
Michael Sproul
bba152656c Fix deadlock in finalization migration (#4576) 2023-08-08 13:57:05 +10:00
Michael Sproul
18e64e60ad Optimise mutations in single-pass epoch processing (#4573)
* Optimise mutations in single-pass epoch processing

* Use safer Cow::make_mut

* Update to upstream milhouse
2023-08-08 11:25:26 +10:00
Paul Hauner
1373dcf076 Add validator-manager (#3502)
## Issue Addressed

Addresses #2557

## Proposed Changes

Adds the `lighthouse validator-manager` command, which provides:

- `lighthouse validator-manager create`
    - Creates a `validators.json` file and a `deposits.json` (same format as https://github.com/ethereum/staking-deposit-cli)
- `lighthouse validator-manager import`
    - Imports validators from a `validators.json` file to the VC via the HTTP API.
- `lighthouse validator-manager move`
    - Moves validators from one VC to the other, utilizing only the VC API.

## Additional Info

In 98bcb947c I've reduced some VC `ERRO` and `CRIT` warnings to `WARN` or `DEBG` for the case where a pubkey is missing from the validator store. These were being triggered when we removed a validator but still had it in caches. It seems to me that `UnknownPubkey` will only happen in the case where we've removed a validator, so downgrading the logs is prudent. All the logs are `DEBG` apart from attestations and blocks which are `WARN`. I thought having *some* logging about this condition might help us down the track.

In 856cd7e37d I've made the VC delete the corresponding password file when it's deleting a keystore. This seemed like nice hygiene. Notably, it'll only delete that password file after it scans the validator definitions and finds that no other validator is also using that password file.
2023-08-08 00:03:22 +00:00
Paul Hauner
5ea75052a8 Increase slashing protection test SQL timeout to 500ms (#4574)
## Issue Addressed

NA

## Proposed Changes

We've been seeing a lot of [CI failures](https://github.com/sigp/lighthouse/actions/runs/5781296217/job/15666209142) with errors like this:

```
---- extra_interchange_tests::export_same_key_twice stdout ----
thread 'extra_interchange_tests::export_same_key_twice' panicked at 'called `Result::unwrap()` on an `Err` value: SQLError("Unable to open database: Error(None)")', validator_client/slashing_protection/src/extra_interchange_tests.rs:48:67
```

I'm assuming they're timeouts. I noticed that tests have a 0.1s timeout. Perhaps this just doesn't cut it when our new runners are overloaded.

## Additional Info

NA
2023-08-07 22:53:05 +00:00
Eitan Seri-Levi
521432129d Support SSZ request body for POST /beacon/blinded_blocks endpoints (v1 & v2) (#4504)
## Issue Addressed

#4262 

## Proposed Changes

add SSZ support in request body for POST /beacon/blinded_blocks endpoints (v1 & v2)

## Additional Info
2023-08-07 22:53:04 +00:00
realbigsean
731b7e7af5 Refactor deneb networking (#4561)
* Revert "fix merge"

This reverts commit 405e95b0ce.

* refactor deneb block processing

* cargo fmt

* make block and blob single lookups generic

* get tests compiling

* clean up everything add child component, fix peer scoring and retry logic

* smol cleanup and a bugfix

* remove ParentLookupReqId

* Update beacon_node/network/src/sync/manager.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* Update beacon_node/network/src/sync/manager.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* update unreachables to crits

* Revert "update unreachables to crits"

This reverts commit 064bf64dff.

* update make request/build request to make more sense

* pr feedback

* Update beacon_node/network/src/sync/block_lookups/mod.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* Update beacon_node/network/src/sync/block_lookups/mod.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

* more pr feedback, fix availability check error handling

* improve block component processed log

---------

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-08-07 14:16:21 -04:00
ethDreamer
c8ea3e1c86 Fix small bug in test utils (#4570) 2023-08-07 11:49:52 -04:00
Nico Flaig
31daf3a87c Update doppelganger note about sync committee contributions (#4425)
**Motivation**

As clarified [on discord](https://discord.com/channels/605577013327167508/605577013331361793/1121246688183603240), sync committee contributions are not delayed if DP is enabled.

**Description**

This PR updates doppelganger note about sync committee contributions. Based on the current docs, a user might assume that DP is not working as expected.
2023-08-07 00:46:29 +00:00
Pawan Dhananjay
a36e34eec4 Fix todos in deneb code (#4547)
* Low hanging fruits

* Remove unnecessary todo

I think it's fine to not handle this since the calling functions handle the error.
No specific reason imo to handle it in the function as well.

* Rename BlobError to GossipBlobError

I feel this better signifies what the error is for. The BlobError was only for failures when gossip-verifying a blob. We cannot get this error when doing rpc validation.

* Remove the BlockError::BlobValidation variant

This error was only there to appease gossip verification before publish.
It's unclear how to peer score this error since this cannot actually occur during any
block verification flows.
This commit introduces an additional error type BlockContentsError to better represent the error type.

* Add docs for peer scoring (or lack thereof) of AvailabilityCheck errors

* I do not see a non-convoluted way of doing this. Okay to have some redundant code here

* Removing this to catch the failure red-handed

* Fix compilation

* Cannot be deleted because some tests assume the trait impl

Also useful to have around for testing in the future imo

* Add some metrics and logs

* Only process `Imported` variant in sync_methods

The only additional thing for other variants that might be useful is logging. We can do that
later if required

* Convert to TryFrom

Not really sure where this would be used, but just did what the comment says.
Could consider just returning the Block variant for a deneb block in the From version

* Unlikely to change now

* This is fine as this is max_rpc_size per rpc chunk (for blobs, it would be 128kb max)

* Log count instead of individual blobs, can delete log later if it becomes too annoying.

* Add block production blob verification timer

* Extend block_streamer test to deneb

* Remove dbg statement

* Fix tests
2023-08-03 20:27:03 -04:00
Armağan Yıldırak
3397612160 Shift networking configuration (#4426)
## Issue Addressed
Addresses [#4401](https://github.com/sigp/lighthouse/issues/4401)

## Proposed Changes
Shift some constants into ```ChainSpec``` and remove the constant values from code space.

## Additional Info

I mostly used ```MainnetEthSpec::default_spec()``` for getting ```ChainSpec```. I wonder whether I made a mistake there.


Co-authored-by: armaganyildirak <armaganyildirak@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: Diva M <divma@protonmail.com>
2023-08-03 01:51:47 +00:00
zhiqiangxu
7399a54ca3 CommitteeCache.get_all_beacon_committees: set correct capacity to avoid realloc (#4557) 2023-08-02 23:50:43 +00:00
zhiqiangxu
b5e25aeb2f CommitteeCache.initialized: fail early if possible (#4556) 2023-08-02 23:50:42 +00:00
zhiqiangxu
fcf51d691e fix typo (#4555) 2023-08-02 23:50:41 +00:00
Divma
ff9b09d964 upgrade to libp2p 0.52 (#4431)
## Issue Addressed

Upgrade libp2p to v0.52

## Proposed Changes
- **Workflows**: remove installation of `protoc`
- **Book**: remove installation of `protoc`
- **`Dockerfile`s and `cross`**: remove custom base `Dockerfile` for cross since it's no longer needed. Remove `protoc` from remaining `Dockerfiles`s
- **Upgrade `discv5` to `v0.3.1`:** we have some cool stuff in there: no longer needs `protoc` and faster ip updates on cold start
- **Upgrade `prometheus` to `0.21.0`**, now it no longer needs encoding checks
- **things that look like refactors:** a bunch of API types were renamed and need to be accessed in a different (clearer) way
- **Lighthouse network**
	- connection limits is now a behaviour
	- banned peers no longer exist on the swarm level, but at the behaviour level
	- `connection_event_buffer_size` now is handled per connection with a buffer size of 4
	- `mplex` is deprecated and was removed
	- rpc handler now logs the peer to which it belongs

## Additional Info

Tried to keep as much behaviour unchanged as possible. However, there is a great deal of improvements we can do _after_ this upgrade:
- Smart connection limits: Connection limits have so far been checked only based on counts; we can now use information about the incoming peer to decide if we want it
- More powerful peer management: Dial attempts from other behaviours can be rejected early
- Incoming connections can be rejected early
- Banning can be returned exclusively to the peer management: by making use of this, we should no longer get connections to banned peers
- TCP Nat updates: We might be able to take advantage of confirmed external addresses to check out tcp ports/ips


Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: Akihito Nakano <sora.akatsuki@gmail.com>
2023-08-02 00:59:34 +00:00
Gua00va
73764d0dd2 Deprecate exchangeTransitionConfiguration functionality (#4517)
## Issue Addressed

Solves #4442 
## Proposed Changes

EL clients log errors if we don't query this endpoint, but they are making releases that remove this error logging. After those are out we can stop calling it, after which point EL teams will remove the endpoint entirely. 
Refer https://hackmd.io/@n0ble/deprecate-exchgTC
2023-07-31 23:51:39 +00:00
chonghe
cb275e746d Update Lighthouse book FAQ (#4510)
Some updates in the FAQ based on issues seen on Discord. Additionally, corrected the disk usage on the default SPRP as the previously provided value was incorrect.

Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
2023-07-31 23:51:38 +00:00
Eitan Seri-Levi
e8c411c288 add ssz support in request body for /beacon/blocks endpoints (v1 & v2) (#4479)
## Issue Addressed

[#4457](https://github.com/sigp/lighthouse/issues/4457)

## Proposed Changes

add ssz support in request body for  /beacon/blocks endpoints (v1 & v2)


## Additional Info
2023-07-31 23:51:37 +00:00
Age Manning
8654f20028 Development feature flag - Disable backfill (#4537)
Often when testing I have to create a hack which is annoying to maintain. 

I think it might be handy to add a custom compile-time flag that developers can use if they want to test things locally without having to backfill a bunch of blocks.

There is probably an argument to have a feature called "backfill" which is enabled by default and can be disabled. I didn't go this route because I think it's counter-intuitive to have a feature that enables a core and necessary behaviour.
2023-07-31 01:53:08 +00:00
Gua00va
117802cef1 Add Eth Version Header (#4528)
## Issue Addressed
Closes #4525 

## Proposed Changes
The `GET /eth/v1/validator/blinded_blocks` and `GET /eth/v1/validator/blocks` endpoints now send the `Eth-Version` header.

Co-authored-by: Gua00va <105484243+Gua00va@users.noreply.github.com>
2023-07-31 01:53:07 +00:00
Jimmy Chen
b5337c0ea5 Fix incorrect ideal rewards calculation (#4520)
## Issue Addressed

The PR fixes a bug where the ideal rewards for source and head were incorrectly set.

Output from testing a validator that performed optimally in a Phase 0 epoch; note that the `source` and `head` under ideal rewards are incorrect (compared to the actual `total_rewards` below):

```json
{ 
   "ideal_rewards": [
    ...
      {
        "effective_balance": "32000000000",
        "head": "18771",
        "target": "18770",
        "source": "18729",
        "inclusion_delay": "17083",
        "inactivity": "0"
      }
    ],
    "total_rewards": [
      {
        "validator_index": "0",
        "head": "18729",
        "target": "18770",
        "source": "18771",
        "inclusion_delay": "17083",
        "inactivity": "0"
      }
    ]
}
```
2023-07-31 01:53:06 +00:00
Michael Sproul
b96cfcaaa4 Fix bug in lcli transition-blocks and improve pretty-ssz (#4513)
## Proposed Changes

- Fix bad `state_root` reuse in `lcli transition-blocks` that resulted in invalid results at skipped slots.
- Modernise `lcli pretty-ssz` to include fork-generic decoders for `SignedBeaconBlock` and `BeaconState` which respect the `--network`/`--testnet-dir` flag.

## Additional Info

Breaking change: the underscore names like `signed_block_merge` are removed in favour of the fork-generic name `SignedBeaconBlock`, and fork-specific names which match the superstruct variants, e.g. `SignedBeaconBlockMerge`.
2023-07-31 01:53:05 +00:00
Michael Sproul
eafe08780c Restore upstream arbitrary (#4372)
## Proposed Changes

Remove patch for `arbitrary` in favour of upstream, now that the `arithmetic_side_effects` lint no longer triggers in derive macro code.

## Additional Info

~~Blocked on Rust 1.71.0, to be released 13 July 23~~
2023-07-31 01:53:04 +00:00
Aoi Kurokawa
85a3340d0e Implement liveness BeaconAPI (#4343)
## Issue Addressed

#4243

## Proposed Changes

- create a new endpoint for liveness/{endpoint}

## Additional Info
This is my first PR.
2023-07-31 01:53:03 +00:00
Jimmy Chen
9c75d8088d Merge pull request #4535 from qu0b/blob/empty
API - Handle blocks without blobs gracefully
2023-07-28 20:00:26 +10:00
qu0b
e021575d8a formatting 2023-07-28 10:27:38 +02:00
qu0b
c991516419 fix CI errors 2023-07-27 14:32:58 +02:00
qu0b
28de041527 cargo fmt & lint-fix 2023-07-26 11:22:06 +02:00
qu0b
1be4d54035 remove option & remove block check 2023-07-26 10:12:58 +02:00
qu0b
97bffd03d0 handle empty blocks gracefully 2023-07-26 10:11:38 +02:00
realbigsean
8c341bb9cc cargo fmt (#4541) 2023-07-25 14:40:04 -04:00
realbigsean
33dd13c798 Refactor deneb block processing (#4511)
* Revert "fix merge"

This reverts commit 405e95b0ce.

* refactor deneb block processing

* cargo fmt

* fix ci
2023-07-25 10:51:10 -04:00
realbigsean
3735450749 Merge pull request #4533 from jimmygchen/deneb-free-blobs
Merge `unstable` into `deneb-free-blobs`
2023-07-25 10:45:22 -04:00
Jimmy Chen
fe94a05dd1 Fix lint 2023-07-24 23:02:29 +10:00
Jimmy Chen
54c6e1dd3d Fix compilation 2023-07-24 21:09:07 +10:00
Jimmy Chen
4ca101e085 Merge branch 'unstable' into deneb-free-blobs 2023-07-24 20:55:27 +10:00
Gua00va
f1f04bc68a Add Changes to BlobSidecars Endpoint (#4455)
* changed name

* Fix sidecars

* Added query type and parameter

* added query struct and function

* added method

* improved filtering method

* added blob_sidecar_list_indexed to block_id

* minor blobqueryindex fix

* function and formatting fix

* minor function and naming fix

* minor changes
2023-07-21 10:56:57 -04:00
Paul Hauner
071dd4cd9c Add self-hosted runners v2 (#4506)
## Issue Addressed

NA

## Proposed Changes

Carries on from #4115, with the following modifications:

1. Self-hosted runners are only enabled if `github.repository == sigp/lighthouse`.
    - This allows forks to still have Github-hosted CI.
    - This gives us a method to switch back to Github-runners if we have extended downtime on self-hosted. 
1. Does not remove any existing dependency builds for Github-hosted runners (e.g., installing the latest Rust).
1. Adds the `WATCH_HOST` environment variable which defines where we expect to find the postgres db in the `watch` tests. This should be set to `host.docker.internal` for the tests to pass on self-hosted runners.

## Additional Info

NA


Co-authored-by: antondlr <anton@delaruelle.net>
2023-07-21 04:46:52 +00:00
Michael Sproul
8423e9f554 Merge remote-tracking branch 'origin/unstable' into tree-states 2023-07-19 11:23:52 +10:00
Michael Sproul
5d2063d262 Single-pass epoch processing (#4483) 2023-07-18 16:59:55 +10:00
Jimmy Chen
fc7f1ba6b9 Phase 0 attestation rewards via Beacon API (#4474)
## Issue Addressed

Addresses #4026.

Beacon-API spec [here](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Beacon/getAttestationsRewards).

Endpoint: `POST /eth/v1/beacon/rewards/attestations/{epoch}`

This endpoint already supports post-Altair epochs. This PR adds support for phase 0 rewards calculation.

## Proposed Changes

- [x] Attestation rewards API to support phase 0 rewards calculation, re-using logic from `state_processing`. Refactored `get_attestation_deltas` slightly to support computing deltas for a subset of validators.
- [x] Add `inclusion_delay` to `ideal_rewards` (`beacon-API` spec update to follow)
- [x] Add `inactivity` penalties to both `ideal_rewards` and `total_rewards` (`beacon-API` spec update to follow)
- [x] Add tests to compute attestation rewards and compare results with beacon states 

## Additional Notes

- The extra penalty for missing attestations or being slashed during an inactivity leak is currently not included in the API response (for both phase 0 and Altair) in the spec. 
- I went with adding `inactivity` as a separate component rather than combining it with the 4 rewards, because this is how it was grouped in [the phase 0 spec](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#get_attestation_deltas). During an inactivity leak, all rewards include the optimal reward, and inactivity penalties are calculated separately (see the code snippet from the spec below), so it would be quite confusing if we merged them. This would also work better with Altair, because there's no "cancelling" of rewards and inactivity penalties are more separate.
- Altair calculation logic (to include inactivity penalties) to be updated in a follow-up PR.

```python
def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
    """
    Return attestation reward/penalty deltas for each validator.
    """
    source_rewards, source_penalties = get_source_deltas(state)
    target_rewards, target_penalties = get_target_deltas(state)
    head_rewards, head_penalties = get_head_deltas(state)
    inclusion_delay_rewards, _ = get_inclusion_delay_deltas(state)
    _, inactivity_penalties = get_inactivity_penalty_deltas(state)

    rewards = [
        source_rewards[i] + target_rewards[i] + head_rewards[i] + inclusion_delay_rewards[i]
        for i in range(len(state.validators))
    ]

    penalties = [
        source_penalties[i] + target_penalties[i] + head_penalties[i] + inactivity_penalties[i]
        for i in range(len(state.validators))
    ]

    return rewards, penalties
```

## Example API Response

<details>
  <summary>Click me</summary>
  
```json
{
  "ideal_rewards": [
    {
      "effective_balance": "1000000000",
      "head": "6638",
      "target": "6638",
      "source": "6638",
      "inclusion_delay": "9783",
      "inactivity": "0"
    },
    {
      "effective_balance": "2000000000",
      "head": "13276",
      "target": "13276",
      "source": "13276",
      "inclusion_delay": "19565",
      "inactivity": "0"
    },
    {
      "effective_balance": "3000000000",
      "head": "19914",
      "target": "19914",
      "source": "19914",
      "inclusion_delay": "29349",
      "inactivity": "0"
    },
    {
      "effective_balance": "4000000000",
      "head": "26553",
      "target": "26553",
      "source": "26553",
      "inclusion_delay": "39131",
      "inactivity": "0"
    },
    {
      "effective_balance": "5000000000",
      "head": "33191",
      "target": "33191",
      "source": "33191",
      "inclusion_delay": "48914",
      "inactivity": "0"
    },
    {
      "effective_balance": "6000000000",
      "head": "39829",
      "target": "39829",
      "source": "39829",
      "inclusion_delay": "58697",
      "inactivity": "0"
    },
    {
      "effective_balance": "7000000000",
      "head": "46468",
      "target": "46468",
      "source": "46468",
      "inclusion_delay": "68480",
      "inactivity": "0"
    },
    {
      "effective_balance": "8000000000",
      "head": "53106",
      "target": "53106",
      "source": "53106",
      "inclusion_delay": "78262",
      "inactivity": "0"
    },
    {
      "effective_balance": "9000000000",
      "head": "59744",
      "target": "59744",
      "source": "59744",
      "inclusion_delay": "88046",
      "inactivity": "0"
    },
    {
      "effective_balance": "10000000000",
      "head": "66383",
      "target": "66383",
      "source": "66383",
      "inclusion_delay": "97828",
      "inactivity": "0"
    },
    {
      "effective_balance": "11000000000",
      "head": "73021",
      "target": "73021",
      "source": "73021",
      "inclusion_delay": "107611",
      "inactivity": "0"
    },
    {
      "effective_balance": "12000000000",
      "head": "79659",
      "target": "79659",
      "source": "79659",
      "inclusion_delay": "117394",
      "inactivity": "0"
    },
    {
      "effective_balance": "13000000000",
      "head": "86298",
      "target": "86298",
      "source": "86298",
      "inclusion_delay": "127176",
      "inactivity": "0"
    },
    {
      "effective_balance": "14000000000",
      "head": "92936",
      "target": "92936",
      "source": "92936",
      "inclusion_delay": "136959",
      "inactivity": "0"
    },
    {
      "effective_balance": "15000000000",
      "head": "99574",
      "target": "99574",
      "source": "99574",
      "inclusion_delay": "146742",
      "inactivity": "0"
    },
    {
      "effective_balance": "16000000000",
      "head": "106212",
      "target": "106212",
      "source": "106212",
      "inclusion_delay": "156525",
      "inactivity": "0"
    },
    {
      "effective_balance": "17000000000",
      "head": "112851",
      "target": "112851",
      "source": "112851",
      "inclusion_delay": "166307",
      "inactivity": "0"
    },
    {
      "effective_balance": "18000000000",
      "head": "119489",
      "target": "119489",
      "source": "119489",
      "inclusion_delay": "176091",
      "inactivity": "0"
    },
    {
      "effective_balance": "19000000000",
      "head": "126127",
      "target": "126127",
      "source": "126127",
      "inclusion_delay": "185873",
      "inactivity": "0"
    },
    {
      "effective_balance": "20000000000",
      "head": "132766",
      "target": "132766",
      "source": "132766",
      "inclusion_delay": "195656",
      "inactivity": "0"
    },
    {
      "effective_balance": "21000000000",
      "head": "139404",
      "target": "139404",
      "source": "139404",
      "inclusion_delay": "205439",
      "inactivity": "0"
    },
    {
      "effective_balance": "22000000000",
      "head": "146042",
      "target": "146042",
      "source": "146042",
      "inclusion_delay": "215222",
      "inactivity": "0"
    },
    {
      "effective_balance": "23000000000",
      "head": "152681",
      "target": "152681",
      "source": "152681",
      "inclusion_delay": "225004",
      "inactivity": "0"
    },
    {
      "effective_balance": "24000000000",
      "head": "159319",
      "target": "159319",
      "source": "159319",
      "inclusion_delay": "234787",
      "inactivity": "0"
    },
    {
      "effective_balance": "25000000000",
      "head": "165957",
      "target": "165957",
      "source": "165957",
      "inclusion_delay": "244570",
      "inactivity": "0"
    },
    {
      "effective_balance": "26000000000",
      "head": "172596",
      "target": "172596",
      "source": "172596",
      "inclusion_delay": "254352",
      "inactivity": "0"
    },
    {
      "effective_balance": "27000000000",
      "head": "179234",
      "target": "179234",
      "source": "179234",
      "inclusion_delay": "264136",
      "inactivity": "0"
    },
    {
      "effective_balance": "28000000000",
      "head": "185872",
      "target": "185872",
      "source": "185872",
      "inclusion_delay": "273918",
      "inactivity": "0"
    },
    {
      "effective_balance": "29000000000",
      "head": "192510",
      "target": "192510",
      "source": "192510",
      "inclusion_delay": "283701",
      "inactivity": "0"
    },
    {
      "effective_balance": "30000000000",
      "head": "199149",
      "target": "199149",
      "source": "199149",
      "inclusion_delay": "293484",
      "inactivity": "0"
    },
    {
      "effective_balance": "31000000000",
      "head": "205787",
      "target": "205787",
      "source": "205787",
      "inclusion_delay": "303267",
      "inactivity": "0"
    },
    {
      "effective_balance": "32000000000",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    }
  ],
  "total_rewards": [
    {
      "validator_index": "0",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "32",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "63",
      "head": "-357771",
      "target": "-357771",
      "source": "-357771",
      "inclusion_delay": "0",
      "inactivity": "0"
    }
  ]
}
```
</details>
2023-07-18 01:48:40 +00:00
realbigsean
f98671f5ab Merge pull request #4477 from realbigsean/merge-unstable-deneb-june-6th
Merge unstable deneb june 6th
2023-07-17 16:32:26 -04:00
realbigsean
cffa562384 cargo fmt 2023-07-17 16:32:09 -04:00
Pawan Dhananjay
e1d0724abf Fix more beta compiler warnings 2023-07-17 13:29:12 -07:00
Pawan Dhananjay
382b5abbee Merge branch 'deneb-free-blobs' into merge-unstable-deneb-june-6th 2023-07-17 11:47:46 -07:00
realbigsean
27a99015f5 Merge pull request #38 from realbigsean/merge-unstable-deneb-jul-14
Merge unstable deneb jul 14
2023-07-17 12:41:49 -04:00
realbigsean
597389cea5 Merge branch 'merge-unstable-deneb-june-6th' of https://github.com/realbigsean/lighthouse into merge-unstable-deneb-jul-14 2023-07-17 12:41:30 -04:00
realbigsean
aeee5beac2 smol fixes 2023-07-17 10:46:54 -04:00
realbigsean
a618830f8f fix smol lint 2023-07-17 10:28:37 -04:00
realbigsean
0f514cbb36 fixes after merge 2023-07-17 09:50:32 -04:00
realbigsean
b96db45090 Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-jul-14 2023-07-17 09:33:37 -04:00
Divma
4435a22221 Cleanup unreachable code in lcli::generate_bootnode_enr and some tests (#4485)
## Issue Addressed
n/a Noticed this while working on something else

## Proposed Changes
- leverage the appropriate types to avoid a bunch of `unwrap` calls and errors

## Additional Info
n/a
2023-07-17 05:31:53 +00:00
Jimmy Chen
a25ec16a67 Speed up CI by installing foundry with Github action (#4505)
## Issue Addressed

Speed up CI by installing foundry with Github action instead of building Anvil from source.

Building Anvil from source on GitHub-hosted runners currently takes about 10 mins. Using the `foundry-toolchain` action to install it only takes about 2 seconds.
2023-07-17 00:14:20 +00:00
Pawan Dhananjay
f2223feb21 Rust 1.71 lints (#4503)
## Issue Addressed

N/A

## Proposed Changes

Add lints for rust 1.71

[3789134](3789134ae2) is probably the one that needs the most attention as it changes beacon state code. I changed the `is_in_inactivity_leak` function to return an `ArithError` as not all consumers of that function work well with a `BeaconState::Error`.
2023-07-17 00:14:19 +00:00
Jimmy Chen
d4a61756ca CI fix: add retries to eth1 sim tests (#4501)
## Issue Addressed

This PR attempts to work around the recent frequent eth1 simulator failures caused by missing eth logs from Anvil. 

> FailedToInsertDeposit(NonConsecutive { log_index: 1, expected: 0 })

This usually occurs at the beginning of the tests, and it guarantees a timeout after a few hours if this log shows up, and this is currently causing our CIs to fail quite frequently. 

Example failure here: https://github.com/sigp/lighthouse/actions/runs/5525760195/jobs/10079736914

## Proposed Changes

The quick fix applied here adds a timeout to node startup and restarts the node again.

- Add a 60-second timeout to beacon node startup in eth1 simulator tests. It takes ~10 seconds on my machine, but could take longer on CI runners.
- Wrap the startup code in a retry function that allows for 3 retries before returning an error.

## Additional Info

We should probably raise an issue under the Anvil GitHub repo there so this can be further investigated.
2023-07-17 00:14:18 +00:00
Jimmy Chen
68d5a6cf99 Clean up local testnet files without prompting (#4498)
## Issue Addressed

Addresses an issue where CI could fail due to a nonexistent file error:

```
Run ./clean.sh
rm: cannot remove '/home/runner/.lighthouse/local-testnet/geth_datadir4/geth/fastcache.tmp.1549331618': No such file or directory
Error: Process completed with exit code 1.
```

This seems to happen quite frequently now. I'm not sure exactly why, but perhaps it's worth trying to suppress the prompt?

https://github.com/sigp/lighthouse/actions/runs/5455027574/jobs/9925916159?pr=4463
2023-07-17 00:14:17 +00:00
Michael Sproul
5569d97a07 Remove wget dependency (#4497)
## Proposed Changes

Replace `wget` in the EF-tests makefile with `curl`.

On macOS `curl` is pre-installed, and I found myself making this change to avoid installing `wget`.

The `-L` flag is used to follow redirects which is useful if a repo gets renamed, and more similar to `wget`'s default behaviour.
2023-07-17 00:14:16 +00:00
Michael Sproul
03674c7199 Update mev-rs and remove patches (#4496)
## Issue Addressed

Fixes occasional compilation errors with mev-rs (see #4456).

## Proposed Changes

- Update `mev-rs` to the latest version, which allows us to remove hacky `[patch]` sections
- Update the `axum` version used in `watch` so LH only uses a single version
2023-07-17 00:14:15 +00:00
Jack McPherson
1a5de8b0f0 Remove instances of required arguments with default values (#4489)
## Issue Addressed

#4488 

## Proposed Changes

 - Remove all instances of the `required` modifier where we have a default value specified for a subcommand

## Additional Info

N/A
2023-07-17 00:14:14 +00:00
Jimmy Chen
5cd738c882 Use unique arg names for eth1-sim (#4463)
## Issue Addressed

When trying to run `eth1-sim` locally, the simulator doesn't start for me; it panics due to duplicate arg names for `proposer-nodes` (which uses the same arg names as `nodes`). Not sure why this isn't failing on CI but is failing on mine 🤔 

```
thread 'main' panicked at 'Argument short must be unique
thread 'main' panicked at 'Argument long must be unique
```
2023-07-17 00:14:13 +00:00
Michael Sproul
6c375205fb Fix HTTP state API bug and add --epochs-per-migration (#4236)
## Issue Addressed

Fix an issue observed by `@zlan` on Discord where Lighthouse would sometimes return this error when looking up states via the API:

> {"code":500,"message":"UNHANDLED_ERROR: ForkChoiceError(MissingProtoArrayBlock(0xc9cf1495421b6ef3215d82253b388d77321176a1dcef0db0e71a0cd0ffc8cdb7))","stacktraces":[]}

## Proposed Changes

The error stems from a faulty assumption in the HTTP API logic: that any state in the hot database must have its block in fork choice. This isn't true because the state's hot database may update much less frequently than the fork choice store, e.g. if reconstructing states (where freezer migration pauses), or if the freezer migration runs slowly. There could also be a race between loading the hot state and checking fork choice, e.g. even if the finalization migration of DB+fork choice were atomic, the update could happen between the 1st and 2nd calls.

To address this I've changed the HTTP API logic to use the finalized block's execution status as a fallback where it is safe to do so. In the case where a block is non-canonical and prior to finalization (permanently orphaned) we default `execution_optimistic` to `true`.
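As a rough sketch of that decision (all type and variant names here are invented for illustration, not the actual Lighthouse types):

```rust
/// Where a state's block stands relative to fork choice.
enum BlockStatus {
    /// Normal case: the block is in fork choice with a known status.
    InForkChoice { optimistic: bool },
    /// Canonical but pruned from fork choice (e.g. migration lagging).
    CanonicalButPruned,
    /// Non-canonical and prior to finalization: permanently orphaned.
    Orphaned,
}

fn execution_optimistic(status: BlockStatus, finalized_block_optimistic: bool) -> bool {
    match status {
        BlockStatus::InForkChoice { optimistic } => optimistic,
        // Safe fallback: inherit the finalized block's execution status.
        BlockStatus::CanonicalButPruned => finalized_block_optimistic,
        // Orphaned blocks default pessimistically to `true`.
        BlockStatus::Orphaned => true,
    }
}

fn main() {
    assert!(!execution_optimistic(BlockStatus::InForkChoice { optimistic: false }, true));
    assert!(!execution_optimistic(BlockStatus::CanonicalButPruned, false));
    assert!(execution_optimistic(BlockStatus::Orphaned, false));
}
```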

## Additional Info

I've also added a new CLI flag to reduce the frequency of the finalization migration as this is useful for several purposes:

- Spacing out database writes (less frequent, larger batches)
- Keeping a limited chain history with high availability, e.g. the last month in the hot database.

This new flag made it _substantially_ easier to test this change. It was extracted from `tree-states` (where it's called `--db-migration-period`), which is why this PR also carries the `tree-states` label.
2023-07-17 00:14:12 +00:00
Pawan Dhananjay
18760822fe Fix beta compiler warnings 2023-07-14 13:16:48 -07:00
realbigsean
405e95b0ce fix merge 2023-07-14 16:15:28 -04:00
realbigsean
42f54ee561 fix merge conflict issues 2023-07-14 16:01:57 -04:00
realbigsean
2b93c0eb0d remove spec minimal feature gating in tests (#4468)
* remove spec minimal feature gating in tests

* do merge transition in overflow cache test
2023-07-14 12:59:15 -07:00
Tyler
0c7eed5e58 bump proc-macro2 (#4464)
## Issue Addressed

Addresses issue:  #4459

## Proposed Changes

`cargo update -p proc-macro2`

proc-macro2 v1.0.58 -> v1.0.63

## Additional Info

See: #4459 / https://github.com/rust-lang/rust/issues/113152
2023-07-12 22:02:26 +00:00
realbigsean
a6f48f5ecb Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-june-6th 2023-07-12 13:05:30 -04:00
realbigsean
c016f5d787 gossip validate blobs prior to publish 2023-07-12 12:32:14 -04:00
realbigsean
1599487933 fix tests 2023-07-12 10:13:25 -04:00
Jack McPherson
62c9170755 Remove hidden re-exports to appease Rust 1.73 (#4495)
## Issue Addressed

#4494 

## Proposed Changes

 - Remove explicit re-exports of various types to appease the new compiler lint

## Additional Info

It seems `warn(hidden_glob_reexports)` is the main culprit.
2023-07-12 07:06:00 +00:00
Michael Sproul
420e9c490e Add state-root command and network support to lcli (#4492)
## Proposed Changes

* Add `lcli state-root` command for computing the hash tree root of a `BeaconState`.
* Add a `--network` flag which can be used instead of `--testnet-dir` to set the network, e.g. Mainnet, Goerli, Gnosis.
* Use the new network flag in `transition-blocks`, `skip-slots`, and `block-root`, which previously only supported mainnet.
* **BREAKING CHANGE** Remove the default value of `~/.lighthouse/testnet` from `--testnet-dir`. This may have made sense in previous versions where `lcli` was more testnet focussed, but IMO it is an unnecessary complication and foot-gun today.
2023-07-12 07:05:58 +00:00
realbigsean
57bb1d931e fix progressive balance slashing tests 2023-07-11 17:07:52 -04:00
realbigsean
782a53ad9d fix compile 2023-07-11 16:05:05 -04:00
realbigsean
6fd2ef49e4 Revert "remove into gossip verified block"
This reverts commit 246d52d209.
2023-07-10 10:22:28 -04:00
Paul Hauner
c25825a539 Move the BeaconProcessor into a new crate (#4435)
*Replaces #4434. It is identical, but this PR has a smaller diff due to a curated commit history.*

## Issue Addressed

NA

## Proposed Changes

This PR moves the scheduling logic for the `BeaconProcessor` into a new crate in `beacon_node/beacon_processor`. Previously it existed in the `beacon_node/network` crate.

This addresses a circular-dependency problem where it's not possible to use the `BeaconProcessor` from the `beacon_chain` crate. The `network` crate depends on the `beacon_chain` crate (`network -> beacon_chain`), but importing the `BeaconProcessor` into the `beacon_chain` crate would create a circular dependency of `beacon_chain -> network`.

The `BeaconProcessor` was designed to provide queuing and prioritized scheduling for messages from the network. It has proven to be quite valuable and I believe we'd make Lighthouse more stable and effective by using it elsewhere. In particular, I think we should use the `BeaconProcessor` for:

1. HTTP API requests.
1. Scheduled tasks in the `BeaconChain` (e.g., state advance).

Using the `BeaconProcessor` for these tasks would help prevent the BN from becoming overwhelmed and would also help it to prioritize operations (e.g., choosing to process blocks from gossip before responding to low-priority HTTP API requests).

## Additional Info

This PR is intended to have zero impact on runtime behaviour. It aims to simply separate the *scheduling* code (i.e., the `BeaconProcessor`) from the *business logic* in the `network` crate (i.e., the `Worker` impls). Future PRs (see #4462) can build upon these works to actually use the `BeaconProcessor` for more operations.

I've gone to some effort to use `git mv` to make the diff look more like "file was moved and modified" rather than "file was deleted and a new one added". This should reduce review burden and help maintain commit attribution.
2023-07-10 07:45:54 +00:00
Michael Sproul
ea2420d193 Bump default checkpoint sync timeout to 3 minutes (#4466)
## Issue Addressed

[Users on Twitter](https://twitter.com/ashekhirin/status/1676334843192397824) are getting checkpoint sync URL timeouts with the default of 60s, so this PR increases the default timeout to 3 minutes.

I've also added a short section to the book about adjusting the timeout with `--checkpoint-sync-url-timeout`.
2023-07-08 13:16:06 +00:00
realbigsean
c4da1ba450 resolve merge issues 2023-07-07 10:17:04 -04:00
realbigsean
cfe2452533 Merge branch 'remove-into-gossip-verified-block' of https://github.com/realbigsean/lighthouse into merge-unstable-deneb-june-6th 2023-07-06 16:51:35 -04:00
realbigsean
246d52d209 remove into gossip verified block 2023-07-06 11:40:58 -04:00
Jack McPherson
a6d5c7d7e0 Correct checks for backfill completeness (#4465)
## Issue Addressed

#4331 

## Proposed Changes

 - Use comparison rather than strict equality between the earliest epoch we know about and the backfill target (which will be the most recent WSP by default or genesis)
 - Add helper function `BackFillSync<T>::would_complete` to achieve this in one location
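In sketch form (only the `would_complete` name comes from this PR; the parameters are assumptions):

```rust
/// Backfill is complete once the earliest epoch we know about is at or
/// before the target: a comparison, not strict equality.
fn would_complete(earliest_known_epoch: u64, backfill_target_epoch: u64) -> bool {
    earliest_known_epoch <= backfill_target_epoch
}

fn main() {
    assert!(would_complete(100, 100)); // the old equality case still completes
    assert!(would_complete(99, 100)); // overshooting the target now completes too
}
```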

## Additional Info

 - There's an ad hoc test for this in #4461


Co-authored-by: Age Manning <Age@AgeManning.com>
2023-07-06 07:35:31 +00:00
realbigsean
c3ef84bda2 remove uninlined arg lint suppression (#4469) 2023-07-05 16:54:08 -04:00
realbigsean
ba65812972 remove patched dependencies (#4470) 2023-07-05 15:53:35 -04:00
realbigsean
af4a66846e remove clang from Dockerfile (#4471) 2023-07-05 15:53:22 -04:00
realbigsean
d41193c318 fix failing network tests (#4472) 2023-07-05 15:52:59 -04:00
realbigsean
d9254b7ded update get blobs endpoint name from blobs to blob_sidecars (#4467)
* changed name

* Fix sidecars

* update get blobs endpoint name from blobs to blob_sidecars

---------

Co-authored-by: Rahul Dogra <rahulcooldogra@gmail.com>
2023-07-05 12:04:12 -04:00
Paul Hauner
dfcb3363c7 Release v4.3.0 (#4452)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2023-07-04 13:29:55 +00:00
Michael Sproul
079cd67df2 Merge remote-tracking branch 'origin/tree-states' into tree-states 2023-07-03 15:03:54 +10:00
Michael Sproul
0291998a51 Merge remote-tracking branch 'origin/unstable' into tree-states 2023-07-03 15:01:21 +10:00
Age Manning
8e65419455 Ipv6 bootnodes update (#4394)
We now officially have ipv6 support. The mainnet bootnodes have been updated to support ipv6. This PR updates lighthouse's internal bootnodes for mainnet to avoid fetching them on initial load.
2023-07-03 03:20:21 +00:00
Michael Sproul
b414c323f8 Implement activation queue cache 2023-07-03 12:03:14 +10:00
Michael Sproul
835fa70ea4 Fix EpochCache handling in ef-tests (#4454) 2023-07-01 09:53:06 +10:00
Michael Sproul
f631b515f6 Fix EpochCache handling in ef-tests 2023-06-30 22:57:36 +10:00
realbigsean
4a79328055 update clang in cross docker (#4451) 2023-06-29 22:40:53 -04:00
Jimmy Chen
2df714e2cd Tree states optimization using EpochCache (#4429)
* Relocate epoch cache to BeaconState

* Optimize per block processing by pulling previous epoch & current epoch calculation up.

* Revert `get_cow` change (no performance improvement)

* Initialize `EpochCache` in epoch processing and load it from state when getting base rewards.

* Initialize `EpochCache` at start of block processing if required.

* Initialize `EpochCache` in `transition_blocks` if `exclude_cache_builds` is enabled

* Fix epoch cache initialization logic

* Remove FIXME comment.

* Cache previous & current epochs in `consensus_context.rs`.

* Move `get_base_rewards` from `ConsensusContext` to `BeaconState`.

* Update Milhouse version
2023-06-30 11:25:51 +10:00
Jimmy Chen
46be05f728 Cache target attester balances for unrealized FFG progression calculation (#4362)
## Issue Addressed

#4118 

## Proposed Changes

This PR introduces a "progressive balances" cache on the `BeaconState`, which keeps track of the accumulated target attestation balance for the current & previous epochs. The cached values are utilised by fork choice to calculate unrealized justification and finalization (instead of converting epoch participation arrays to balances for each block we receive). A rough sketch of the cache follows the list below.

This optimization will be rolled out gradually to allow for more testing. A new `--progressive-balances disabled|checked|strict|fast` flag is introduced to support this:
- `checked`: enabled with checks against participation cache, and falls back to the existing epoch processing calculation if there is a total target attester balance mismatch. There is no performance gain from this as the participation cache still needs to be computed. **This is the default mode for now.**
- `strict`: enabled with checks against participation cache, returns error if there is a mismatch. **Used for testing only**.
- `fast`: enabled with no comparative checks and without computing the participation cache. This mode gives us the performance gains from the optimization. This is still experimental and not currently recommended for production usage, but will become the default mode in a future release.
- `disabled`: disable the usage of progressive cache, and use the existing method for FFG progression calculation. This mode may be useful if we find a bug and want to stop the frequent error logs.
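As flagged above, here's a rough sketch of what such a cache tracks (field and method names are assumptions for illustration, not the actual Lighthouse types):

```rust
/// Accumulated target-attesting balances for the previous & current epochs,
/// maintained incrementally instead of recomputed per block.
#[derive(Default)]
struct ProgressiveBalancesCache {
    previous_epoch_target_attesting_balance: u64,
    current_epoch_target_attesting_balance: u64,
}

impl ProgressiveBalancesCache {
    /// Called when a validator newly attains the target participation flag.
    fn on_new_target_attestation(&mut self, is_current_epoch: bool, effective_balance: u64) {
        if is_current_epoch {
            self.current_epoch_target_attesting_balance += effective_balance;
        } else {
            self.previous_epoch_target_attesting_balance += effective_balance;
        }
    }

    /// At the epoch boundary the current epoch's total becomes the previous one's.
    fn rotate_at_epoch_boundary(&mut self) {
        self.previous_epoch_target_attesting_balance =
            self.current_epoch_target_attesting_balance;
        self.current_epoch_target_attesting_balance = 0;
    }
}

fn main() {
    let mut cache = ProgressiveBalancesCache::default();
    cache.on_new_target_attestation(true, 32_000_000_000);
    cache.rotate_at_epoch_boundary();
    assert_eq!(cache.previous_epoch_target_attesting_balance, 32_000_000_000);
}
```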

### Tasks

- [x] Initial cache implementation in `BeaconState`
- [x] Perform checks in fork choice to compare the progressive balances cache against results from `ParticipationCache`
- [x] Add CLI flag, and disable the optimization by default
- [x] Testing on Goerli & Benchmarking
- [x]  Move caching logic from state processing to the `ProgressiveBalancesCache` (see [this comment](https://github.com/sigp/lighthouse/pull/4362#discussion_r1230877001))
- [x] Add attesting balance metrics



Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-06-30 01:13:06 +00:00
Eitan Seri-Levi
826e090f50 Update node health endpoint (#4310)
## Issue Addressed

[#4292](https://github.com/sigp/lighthouse/issues/4292)

## Proposed Changes

Updated the node health endpoint

will return a 200 status code if  `!syncing && !el_offline && !optimistic`

will return a 206 if `(syncing || optimistic) && !el_offline`

will return a 503 if `el_offline`
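The mapping is easy to express directly (a sketch of just the status-code logic; the surrounding handler is omitted):

```rust
/// HTTP status for the node health endpoint per the rules above.
fn health_status_code(syncing: bool, el_offline: bool, optimistic: bool) -> u16 {
    if el_offline {
        503 // execution layer offline
    } else if syncing || optimistic {
        206 // node is up but not fully synced / still optimistic
    } else {
        200 // synced, EL online, not optimistic
    }
}

fn main() {
    assert_eq!(health_status_code(false, false, false), 200);
    assert_eq!(health_status_code(true, false, false), 206);
    assert_eq!(health_status_code(false, true, true), 503);
}
```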



## Additional Info
2023-06-30 01:13:04 +00:00
Eitan Seri-Levi
edd093293a added debounce to log (#4269)
## Issue Addressed

[#4259](https://github.com/sigp/lighthouse/issues/4259)

## Proposed Changes

debounce spammy `Unable to send message to the beacon processor` log messages

## Additional Info

We could potentially debounce other logs that have the potential to be "spammy". 

After some feedback we decided to additionally add the following change:

create a newtype wrapper around `mpsc::Sender<BeaconWorkEvent<T>>`. When the wrapper's `try_send` method returns an error, we increment a counter metric with one label per work type.
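In sketch form, using std's bounded channel in place of the real `mpsc::Sender<BeaconWorkEvent<T>>` (the wrapper and field names are assumptions):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};
use std::sync::Arc;

/// Hypothetical newtype: counts `try_send` failures before propagating them.
struct CountingSender<T> {
    inner: SyncSender<T>,
    send_failures: Arc<AtomicUsize>, // in practice, one metric label per work type
}

impl<T> CountingSender<T> {
    fn try_send(&self, msg: T) -> Result<(), TrySendError<T>> {
        self.inner.try_send(msg).map_err(|e| {
            // Bump the failure metric, then hand the error back to the caller.
            self.send_failures.fetch_add(1, Ordering::Relaxed);
            e
        })
    }
}

fn main() {
    let failures = Arc::new(AtomicUsize::new(0));
    let (tx, _rx) = sync_channel::<&str>(1);
    let sender = CountingSender { inner: tx, send_failures: failures.clone() };
    sender.try_send("block").unwrap(); // fits in the queue
    let _ = sender.try_send("attestation"); // queue full: recorded as a failure
    assert_eq!(failures.load(Ordering::Relaxed), 1);
}
```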
2023-06-30 01:13:03 +00:00
Michael Sproul
160bbde8a2 Fix db-migration-period default (#4441)
* Fix db-migration-period default

* Fix version regex
2023-06-30 10:29:34 +10:00
realbigsean
adbb62f7f3 Devnet6 (#4404)
* some blob reprocessing work

* remove ForceBlockLookup

* reorder enum match arms in sync manager

* a lot more reprocessing work

* impl logic for triggering blob lookups along with block lookups

* deal with rpc blobs in groups per block in the da checker. don't cache missing blob ids in the da checker.

* make single block lookup generic

* more work

* add delayed processing logic and combine some requests

* start fixing some compile errors

* fix compilation in main block lookup mod

* much work

* get things compiling

* parent blob lookups

* fix compile

* revert red/stevie changes

* fix up sync manager delay message logic

* add peer usefulness enum

* should remove lookup refactor

* consolidate retry error handling

* improve peer scoring during certain failures in parent lookups

* improve retry code

* drop parent lookup if either req has a peer disconnect during download

* refactor single block processed method

* processing peer refactor

* smol bugfix

* fix some todos

* fix lints

* fix lints

* fix compile in lookup tests

* fix lints

* fix lints

* fix existing block lookup tests

* renamings

* fix after merge

* cargo fmt

* compilation fix in beacon chain tests

* fix

* refactor lookup tests to work with multiple forks and response types

* make tests into macros

* wrap availability check error

* fix compile after merge

* add random blobs

* start fixing up lookup verify error handling

* some bug fixes and the start of deneb only tests

* make tests work for all forks

* track information about peer source

* error refactoring

* improve peer scoring

* fix test compilation

* make sure blobs are sent for processing after stream termination, delete copied tests

* add some tests and fix a bug

* smol bugfixes and moar tests

* add tests and fix some things

* compile after merge

* lots of refactoring

* retry on invalid block/blob

* merge unknown parent messages before current slot lookup

* get tests compiling

* penalize blob peer on invalid blobs

* Check disk on in-memory cache miss

* Update beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs

* Update beacon_node/network/src/sync/network_context.rs

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>

* fix bug in matching blocks and blobs in range sync

* pr feedback

* fix conflicts

* upgrade logs from warn to crit when we receive incorrect response in range

* synced_and_connected_within_tolerance -> should_search_for_block

* remove todo

* add data gas used and update excess data gas to u64

* Fix Broken Overflow Tests

* payload verification with commitments

* fix merge conflicts

* restore payload file

* Restore payload file

* remove todo

* add max blob commitments per block

* c-kzg lib update

* Fix ef tests

* Abstract over minimal/mainnet spec in kzg crate

* Start integrating new KZG

* checkpoint sync without alignment

* checkpoint sync without alignment

* add import

* add import

* query for checkpoint state by slot rather than state root (teku doesn't serve by state root)

* query for checkpoint state by slot rather than state root (teku doesn't serve by state root)

* loosen check

* get state first and query by most recent block root

* Revert "loosen check"

This reverts commit 069d13dd63.

* get state first and query by most recent block root

* merge max blobs change

* simplify delay logic

* rename unknown parent sync message variants

* rename parameter, block_slot -> slot

* add some docs to the lookup module

* use interval instead of sleep

* drop request if blocks and blobs requests both return `None` for `Id`

* clean up `find_single_lookup` logic

* add lookup source enum

* clean up `find_single_lookup` logic

* add docs to find_single_lookup_request

* move LookupSource our of param where unnecessary

* remove unnecessary todo

* query for block by `state.latest_block_header.slot`

* fix lint

* fix merge transition ef tests

* fix test

* fix test

* fix observed blob sidecars test

* Add some metrics (#33)

* fix protocol limits for blobs by root

* Update Engine API for 1:1 Structure Method

* make beacon chain tests to fix devnet 6 changes

* get ckzg working and fix some tests

* fix remaining tests

* fix lints

* Fix KZG linking issues

* remove unused dep

* lockfile

* test fixes

* remove dbgs

* remove unwrap

* cleanup tx generator

* small fixes

* fixing fixes

* more self review

* more self review

* refactor genesis header initialization

* refactor mock el instantiations

* fix compile

* fix network test, make sure they run for each fork

* pr feedback

* fix last test (hopefully)

---------

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-authored-by: Mark Mackey <mark@sigmaprime.io>
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-06-29 15:35:43 -04:00
Jack McPherson
1aff082eea Add broadcast validation routes to Beacon Node HTTP API (#4316)
## Issue Addressed

 - #4293 
 - #4264 

## Proposed Changes

*Changes largely follow those suggested in the main issue*.

 - Add new routes to HTTP API
   - `post_beacon_blocks_v2`
   - `post_blinded_beacon_blocks_v2`
 - Add new routes to `BeaconNodeHttpClient`
   - `post_beacon_blocks_v2`
   - `post_blinded_beacon_blocks_v2`
 - Define new Eth2 common types
   - `BroadcastValidation`, enum representing the level of validation to apply to blocks prior to broadcast
   - `BroadcastValidationQuery`, the corresponding HTTP query string type for the above type
 - ~~Define `_checked` variants of both `publish_block` and `publish_blinded_block` that enforce a validation level at a type level~~
 - Add interactive tests to the `bn_http_api_tests` test target covering each validation level (to their own test module, `broadcast_validation_tests`)
   - `beacon/blocks`
       - `broadcast_validation=gossip`
         - Invalid (400)
         - Full Pass (200)
         - Partial Pass (202)
        - `broadcast_validation=consensus`
          - Invalid (400)
          - Only gossip (400)
          - Only consensus pass (i.e., equivocates) (200)
          - Full pass (200)
        - `broadcast_validation=consensus_and_equivocation`
          - Invalid (400)
          - Invalid due to early equivocation (400)
          - Only gossip (400)
          - Only consensus (400)
          - Pass (200)
   - `beacon/blinded_blocks`
       - `broadcast_validation=gossip`
         - Invalid (400)
         - Full Pass (200)
         - Partial Pass (202)
        - `broadcast_validation=consensus`
          - Invalid (400)
          - Only gossip (400)
          - ~~Only consensus pass (i.e., equivocates) (200)~~
          - Full pass (200)
        - `broadcast_validation=consensus_and_equivocation`
          - Invalid (400)
          - Invalid due to early equivocation (400)
          - Only gossip (400)
          - Only consensus (400)
          - Pass (200)
 - Add a new trait, `IntoGossipVerifiedBlock`, which allows type-level guarantees to be made as to gossip validity
 - Modify the structure of the `ObservedBlockProducers` cache from a `(slot, validator_index)` mapping to a `((slot, validator_index), block_root)` mapping (see the sketch after this list)
 - Modify `ObservedBlockProducers::proposer_has_been_observed` to return a `SeenBlock` rather than a boolean on success
 - Punish gossip peer (low) for submitting equivocating blocks
 - Rename `BlockError::SlashablePublish` to `BlockError::SlashableProposal`
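A rough sketch of the cache restructure and `SeenBlock` return described above (type names other than `SeenBlock` are invented for illustration):

```rust
use std::collections::HashMap;

type Slot = u64;
type ValidatorIndex = u64;
type BlockRoot = [u8; 32];

/// Tracking block roots per (slot, proposer) distinguishes a re-seen
/// block from a slashable equivocation.
#[derive(Default)]
struct ObservedProposals {
    seen: HashMap<(Slot, ValidatorIndex), Vec<BlockRoot>>,
}

enum SeenBlock {
    UniqueNonSlashable,
    Duplicate,
    Slashable,
}

impl ObservedProposals {
    fn observe(&mut self, slot: Slot, proposer: ValidatorIndex, root: BlockRoot) -> SeenBlock {
        let roots = self.seen.entry((slot, proposer)).or_default();
        if roots.contains(&root) {
            SeenBlock::Duplicate // same block seen again: not slashable
        } else if roots.is_empty() {
            roots.push(root);
            SeenBlock::UniqueNonSlashable // first proposal for this (slot, proposer)
        } else {
            roots.push(root);
            SeenBlock::Slashable // different root for the same (slot, proposer)
        }
    }
}

fn main() {
    let mut cache = ObservedProposals::default();
    let (a, b) = ([1u8; 32], [2u8; 32]);
    assert!(matches!(cache.observe(1, 7, a), SeenBlock::UniqueNonSlashable));
    assert!(matches!(cache.observe(1, 7, a), SeenBlock::Duplicate));
    assert!(matches!(cache.observe(1, 7, b), SeenBlock::Slashable));
}
```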

## Additional Info

This PR contains changes that directly modify how blocks are verified within the client. For more context, consult [comments in-thread](https://github.com/sigp/lighthouse/pull/4316#discussion_r1234724202).


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-06-29 12:02:38 +00:00
int88
23b06aa51e avoid relocking head during builder health check (#4323)
## Issue Addressed

#4314 

## Proposed Changes

avoid relocking head during builder health check

## Additional Info

NA
2023-06-29 09:39:15 +00:00
realbigsean
4c9fcf1e83 Merge pull request #4432 from jimmygchen/deneb-merge-from-unstable-20230627
Deneb merge from unstable 20230627
2023-06-28 10:20:59 -04:00
Jimmy Chen
5b85aeca5f Add BlobSidecar encode & decode test and fix RpcLimit for BlobsByRoot 2023-06-28 22:42:30 +10:00
Jimmy Chen
03a17a84da Update handle_rpc_response blobs match arms to be consistent with block v2 protocols. 2023-06-28 16:56:52 +10:00
Jimmy Chen
68140fa036 Update max block request limit to MAX_REQUEST_BLOCKS_DENEB to ensure this doesn't cause incompatibilities with other clients. 2023-06-28 16:46:20 +10:00
Jimmy Chen
5c4485e45e Merge branch 'deneb-merge-from-unstable-20230627' of github.com:jimmygchen/lighthouse into deneb-merge-from-unstable-20230627 2023-06-28 16:26:38 +10:00
Jimmy Chen
d1146ec8b5 Sync finalized sync to 2 epochs + 1 slot past our peer's finalized slot in order to finalize the chain locally 2023-06-28 16:15:37 +10:00
Jimmy Chen
dfbe4b1add Add missing Cargo.lock changes (ssz_types patch) 2023-06-28 16:11:38 +10:00
Jimmy Chen
56caccbac0 Added a few fixes from merge 2023-06-27 17:48:50 +10:00
Michael Sproul
6954de6afd Tree states alpha release v4.2.990-exp 2023-06-27 17:34:41 +10:00
Michael Sproul
8dc374ec3d Temporarily disable ARM builds 2023-06-27 17:33:25 +10:00
Michael Sproul
af5fb20941 Tree states alpha release v4.2.99-exp 2023-06-27 16:52:46 +10:00
Michael Sproul
56c7a529fb Install Clang 5 in Cross builder image 2023-06-27 16:52:46 +10:00
Jimmy Chen
cc03ba430c Merge branch 'unstable' into deneb-merge-from-unstable-20230627
# Conflicts:
#	Cargo.lock
#	common/eth2_network_config/built_in_network_configs/gnosis/config.yaml
2023-06-27 15:30:44 +10:00
Jimmy Chen
d062f61125 Fix failing tests after merge 2023-06-27 15:27:42 +10:00
Michael Sproul
7c2eb96bc5 Set epochs per migration to 1
Workaround for #4236
2023-06-27 15:06:43 +10:00
Michael Sproul
88e30b6dcf Fix failing tests (#4423)
* Get tests passing

* Get benchmarks compiling

* Fix EF withdrawals test

* Remove unused deps

* Fix tree_hash panic in tests

* Fix slasher compilation

* Fix ssz_generic test

* Get more tests passing

* Fix EF tests for real

* Fix local testnet scripts
2023-06-27 15:04:39 +10:00
Michael Sproul
ead4e60a76 Schedule Capella for Gnosis chain (#4433)
## Issue Addressed

Closes #4422
Implements https://github.com/gnosischain/configs/pull/12

## Proposed Changes

Schedule the Capella fork for Gnosis chain at epoch 648704, August 1st 2023 11:34:20 UTC.
2023-06-27 01:06:51 +00:00
Paul Hauner
9072acbfa6 Tidy formatting of Reqwest errors (#4336)
## Issue Addressed

NA

## Proposed Changes

Implements `PrettyReqwestError` to wrap a `reqwest::Error` and give nicer `Debug` formatting. It also wraps the `Url` component in a `SensitiveUrl` to avoid leaking sensitive info in logs.

### Before

```
Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("localhost")), port: Some(9999), path: "/eth/v1/node/version", query: None, fragment: None }, source: hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 61, kind: ConnectionRefused, message: "Connection refused" })) })
```

### After

```
HttpClient(url: http://localhost:9999/, kind: request, detail: error trying to connect: tcp connect error: Connection refused (os error 61))
```

## Additional Info

I've also renamed the `Reqwest` error enum variants to `HttpClient`, to give people a better chance at knowing what's going on. `Reqwest` is a pretty odd name and looks like a typo.

I've implemented it in the `eth2` and `execution_layer` crates. This should affect most logs in the VC and EE-related ones in the BN.

I think the last crate that could benefit from this is the `beacon_node/eth1` crate. I haven't updated it in this PR since its error type is not so amenable to it (everything goes into a `String`). I don't have a whole lot of time to jig around with that at the moment and I feel that this PR as it stands is a significant enough improvement to merge on its own. Leaving it as-is is fine for the time being and we can always come back for it later (or implement in-protocol deposits!).
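
As a sketch of the wrapping technique (field types are simplified strings here; the real type wraps a `reqwest::Error` and a `SensitiveUrl` directly), the nicer output comes from a hand-written `Debug` impl:

```rust
use std::fmt;

// Simplified stand-in for the wrapper: render a compact, sanitised view
// instead of `reqwest::Error`'s sprawling derived Debug output.
pub struct PrettyHttpError {
    url: String,    // assumed to already be sanitised, like SensitiveUrl
    kind: String,   // e.g. "request"
    detail: String, // flattened source-error chain
}

impl fmt::Debug for PrettyHttpError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "HttpClient(url: {}, kind: {}, detail: {})",
            self.url, self.kind, self.detail
        )
    }
}
```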
2023-06-27 01:06:50 +00:00
Pawan Dhananjay
448d3ec9b3 Aggregate subsets (#3493)
## Issue Addressed

Resolves #3238 

2023-06-27 01:06:49 +00:00
Jimmy Chen
97c4660761 Merge branch 'unstable' into deneb-merge-from-unstable-20230627
# Conflicts:
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	beacon_node/beacon_chain/src/block_verification.rs
#	beacon_node/beacon_chain/src/lib.rs
#	beacon_node/beacon_chain/src/test_utils.rs
#	beacon_node/beacon_chain/tests/block_verification.rs
#	beacon_node/beacon_chain/tests/store_tests.rs
#	beacon_node/beacon_chain/tests/tests.rs
#	beacon_node/http_api/src/publish_blocks.rs
#	beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
#	beacon_node/lighthouse_network/src/rpc/methods.rs
#	beacon_node/lighthouse_network/src/rpc/outbound.rs
#	beacon_node/lighthouse_network/src/rpc/protocol.rs
#	beacon_node/lighthouse_network/src/service/api_types.rs
#	beacon_node/network/src/beacon_processor/worker/gossip_methods.rs
#	beacon_node/network/src/beacon_processor/worker/rpc_methods.rs
#	beacon_node/network/src/beacon_processor/worker/sync_methods.rs
#	beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
#	beacon_node/network/src/sync/network_context.rs
#	beacon_node/network/src/sync/range_sync/batch.rs
#	beacon_node/network/src/sync/range_sync/chain.rs
#	common/eth2/src/types.rs
#	consensus/fork_choice/src/fork_choice.rs
2023-06-27 08:40:24 +10:00
Mac L
cc780aae3e Bump openssl deps (#4421)
## Proposed Changes

Bump the `openssl` deps to resolve the `cargo-audit` failure caused by https://rustsec.org/advisories/RUSTSEC-2023-0044.html
2023-06-22 02:14:59 +00:00
Jimmy Chen
6d585b5885 Add lint-fix task to automatically fix some Clippy warnings. (#4419)
## Issue Addressed

This PR adds a new `lint-fix` task to automatically fix simple Clippy warnings using `cargo clippy --fix`.

Usage:

```
make lint-fix
```
2023-06-22 02:14:58 +00:00
Jimmy Chen
33c942ff03 Add support for updating validator graffiti (#4417)
## Issue Addressed

#4386 

## Proposed Changes

The original proposal described in the issue adds a new endpoint to support updating validator graffiti, but I realized we already have an endpoint that we use for updating various validator fields in memory and in the validator definitions file, so I think that would be the best place to add this to.

### API endpoint

`PATCH lighthouse/validators/{validator_pubkey}` 

This endpoint updates the graffiti in both the [ validator definition file](https://lighthouse-book.sigmaprime.io/graffiti.html#2-setting-the-graffiti-in-the-validator_definitionsyml) and the in memory `InitializedValidators`. In the next block proposal, the new graffiti will be used.

Note that the [`--graffiti-file`](https://lighthouse-book.sigmaprime.io/graffiti.html#1-using-the---graffiti-file-flag-on-the-validator-client) flag takes priority over the validator definitions file, so if the caller attempts to update the graffiti while the `--graffiti-file` flag is present, the endpoint will return an error (400 Bad Request).

## Tasks

- [x] Add graffiti update support to `PATCH lighthouse/validators/{validator_pubkey}` 
- [x] Return an error if the user tries to update graffiti while the `--graffiti-file` flag is set
- [x] Update Lighthouse book
2023-06-22 02:14:57 +00:00
Paul Hauner
3cac6d9ed5 Configure the validator/register_validator batch size via the CLI (#4399)
## Issue Addressed

NA

## Proposed Changes

Adds the `--validator-registration-batch-size` flag to the VC to allow runtime configuration of the number of validators POSTed to the [`validator/register_validator`](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Validator/registerValidator) endpoint.

There are builders (Agnostic and Eden) that are timing out on `registerValidator` requests with ~400 validators, even with a 9 second timeout. Exposing the batch size will help tune batch sizes to (hopefully) avoid this.

This PR should not change the behavior of Lighthouse when the new flag is not provided (i.e., the same default value is used).
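
A minimal sketch of the batching this flag controls (the helper name is hypothetical; the actual request type and POST call are elided):

```rust
// Split validator registrations into batches of the configured size; each
// batch is then POSTed to `validator/register_validator` in turn.
fn registration_batches<T: Clone>(registrations: &[T], batch_size: usize) -> Vec<Vec<T>> {
    registrations
        .chunks(batch_size.max(1)) // guard against a zero batch size
        .map(|chunk| chunk.to_vec())
        .collect()
}
```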

## Additional Info

NA
2023-06-22 02:14:56 +00:00
Michael Sproul
ca412ab3f0 Use rebasing to minimise BeaconState mem usage (#4416)
* Use "rebasing" to minimise BeaconState mem usage

* Update metastruct

* Use upstream milhouse, update cargo lock

* Rebase caches for extra memory savings
2023-06-21 11:05:09 +10:00
Michael Sproul
6eb151396d Configurable diff buffer cache size (#4420) 2023-06-20 19:10:05 +10:00
chonghe
bd6a015fe7 Update Lighthouse book on Doppelganger Protection (#4418)
Revise the page by removing the info on sync committee delay. Also added an FAQ on changing the port.
2023-06-20 05:20:37 +00:00
Mac L
c76afc6630 Remove legacy max-skip-slots checks (#4403)
## Proposed Changes

Remove `max-skip-slots` checks when processing blocks.
This was legacy code which was previously used in the Medalla testnet to sync to the correct fork.
With the addition of checkpoint sync which allows us to sync to any arbitrary fork, this is no longer a necessary feature, so it has been removed for simplicity.

## Additional Notes
The CLI flag and checks for attestation processing have been retained as it still may have uses in DoS protection.
2023-06-20 05:20:36 +00:00
Paul Hauner
d56cec83fc Address clippy lints in tree-states (#4414)
* Address some clippy lints

* Box errors to fix error size lint

* Add Default impl for Validator

* Address more clippy lints

* Re-implement `check_state_diff`

* Fix misc test compile errors
2023-06-20 11:47:52 +10:00
Age Manning
6621e1d0c5 Improve ENR logic for ipv6 (#4395)
Currently, the ENR of the node may not be correctly updated when specifying ipv6 fields through the CLI if an ENR exists on disk. 

This remedies a bug where we were not checking the ipv6 fields when deciding whether to use an on-disk ENR or to update it based on CLI configuration parameters.
2023-06-19 23:53:25 +00:00
Michael Sproul
23db089a7a Implement tree states & hierarchical state DB 2023-06-19 10:14:47 +10:00
ethDreamer
e1af24b470 Add more comments to overflow LRU cache (#4406) 2023-06-16 09:27:36 -04:00
chonghe
2bb62b7f7d Correct table formatting in Lighthouse book (#4407)
This is a minor correction to a table that was not rendering properly in the Lighthouse book. The change fixes the formatting issue. Another change fixes the link on how to do port-forwarding.
2023-06-16 06:44:32 +00:00
Michael Sproul
affea585f4 Remove CountUnrealized (#4357)
## Issue Addressed

Closes #4332

## Proposed Changes

Remove the `CountUnrealized` type, defaulting unrealized justification to _on_. This fixes the #4332 issue by ensuring that importing the same block to fork choice always results in the same outcome.

Finalized sync speed may be slightly impacted by this change, but that is deemed an acceptable trade-off until the optimisation from #4118 is implemented.

TODO:

- [x] Also check that the block isn't a duplicate before importing
2023-06-16 06:44:31 +00:00
realbigsean
a62e52f319 Single blob lookups (#4152)
* some blob reprocessing work

* remove ForceBlockLookup

* reorder enum match arms in sync manager

* a lot more reprocessing work

* impl logic for triggerng blob lookups along with block lookups

* deal with rpc blobs in groups per block in the da checker. don't cache missing blob ids in the da checker.

* make single block lookup generic

* more work

* add delayed processing logic and combine some requests

* start fixing some compile errors

* fix compilation in main block lookup mod

* much work

* get things compiling

* parent blob lookups

* fix compile

* revert red/stevie changes

* fix up sync manager delay message logic

* add peer usefulness enum

* should remove lookup refactor

* consolidate retry error handling

* improve peer scoring during certain failures in parent lookups

* improve retry code

* drop parent lookup if either req has a peer disconnect during download

* refactor single block processed method

* processing peer refactor

* smol bugfix

* fix some todos

* fix lints

* fix lints

* fix compile in lookup tests

* fix lints

* fix lints

* fix existing block lookup tests

* renamings

* fix after merge

* cargo fmt

* compilation fix in beacon chain tests

* fix

* refactor lookup tests to work with multiple forks and response types

* make tests into macros

* wrap availability check error

* fix compile after merge

* add random blobs

* start fixing up lookup verify error handling

* some bug fixes and the start of deneb only tests

* make tests work for all forks

* track information about peer source

* error refactoring

* improve peer scoring

* fix test compilation

* make sure blobs are sent for processing after stream termination, delete copied tests

* add some tests and fix a bug

* smol bugfixes and moar tests

* add tests and fix some things

* compile after merge

* lots of refactoring

* retry on invalid block/blob

* merge unknown parent messages before current slot lookup

* get tests compiling

* penalize blob peer on invalid blobs

* Check disk on in-memory cache miss

* Update beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs

* Update beacon_node/network/src/sync/network_context.rs

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>

* fix bug in matching blocks and blobs in range sync

* pr feedback

* fix conflicts

* upgrade logs from warn to crit when we receive incorrect response in range

* synced_and_connected_within_tolerance -> should_search_for_block

* remove todo

* Fix Broken Overflow Tests

* fix merge conflicts

* checkpoint sync without alignment

* add import

* query for checkpoint state by slot rather than state root (teku doesn't serve by state root)

* get state first and query by most recent block root

* simplify delay logic

* rename unknown parent sync message variants

* rename parameter, block_slot -> slot

* add some docs to the lookup module

* use interval instead of sleep

* drop request if blocks and blobs requests both return `None` for `Id`

* clean up `find_single_lookup` logic

* add lookup source enum

* clean up `find_single_lookup` logic

* add docs to find_single_lookup_request

* move LookupSource our of param where unnecessary

* remove unnecessary todo

* query for block by `state.latest_block_header.slot`

* fix lint

* fix test

* fix test

* fix observed  blob sidecars test

* PR updates

* use optional params instead of a closure

* create lookup and trigger request in separate method calls

* remove `LookupSource`

* make sure duplicate lookups are not dropped

---------

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-authored-by: Mark Mackey <mark@sigmaprime.io>
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
2023-06-15 12:59:10 -04:00
ethDreamer
77fc511170 Use JSON by default for Deposit Snapshot Sync (#4397)
Checkpointz now supports deposit snapshots, but [they only support returning them in JSON](https://github.com/ethpandaops/checkpointz/issues/74), so I've modified Lighthouse to request them in JSON by default.

There are also the `get_opt` & `get_opt_with_timeout` methods, which seem to expect responses in JSON but were not adding `Accept: application/json` to the request headers, so I fixed that as well.

Also, the beacon API puts quantities in quotes, so I fixed that in the snapshot JSON serialization.
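
A sketch of the header fix using `reqwest` (the function name is hypothetical):

```rust
use reqwest::header::ACCEPT;

// Explicitly ask for JSON rather than relying on the server's default
// content type, as described above for `get_opt`/`get_opt_with_timeout`.
async fn get_json(client: &reqwest::Client, url: &str) -> Result<String, reqwest::Error> {
    client
        .get(url)
        .header(ACCEPT, "application/json")
        .send()
        .await?
        .text()
        .await
}
```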
2023-06-15 08:42:20 +00:00
Pawan Dhananjay
0ecca1dcb0 Rework internal rpc protocol handling (#4290)
## Issue Addressed

Resolves #3980. Builds on work by @GeemoCandama in #4084 

## Proposed Changes

Extends the `SupportedProtocol` abstraction added in Geemo's PR and attempts to fix internal versioning of requests that are mentioned in this comment https://github.com/sigp/lighthouse/pull/4084#issuecomment-1496380033 

Co-authored-by: geemo <geemo@tutanota.com>
2023-06-14 05:08:50 +00:00
Michael Sproul
0caaad4c03 Re-enable maxperf for Windows releases (#4371)
## Issue Addressed

Closes #3964

## Proposed Changes

Use the `maxperf` profile to build Windows binaries, now that Rust 1.70.0 fixed the underlying issue.
2023-06-14 02:29:51 +00:00
realbigsean
5428e68943 Merge pull request #4392 from jimmygchen/deneb-merge-from-unstable-20230613
Deneb merge from unstable 20230613
2023-06-13 12:23:04 -04:00
chonghe
2548be3e66 Minor revision in Lighthouse book (#4385)
## Proposed Changes

Correct some typos in the book, also update information about withdrawals since the Mainnet will be having 700K validators in about a month
2023-06-13 13:12:56 +00:00
AMIT SINGH
a227ee7478 Use MediaType accept header parser (#4216)
## Issue Addressed

#3510 

## Proposed Changes

- Replace mime with MediaTypeList
- Remove parse_accept fn as MediaTypeList does it built-in
- Get the supported media type of the highest q-factor in a single list iteration without sorting

## Additional Info

I have addressed the suggested changes in previous [PR](https://github.com/sigp/lighthouse/pull/3520#discussion_r959048633)
2023-06-13 10:25:27 +00:00
Divma
2639e67e90 Update discv5 to expand ipv6 support (#4319)
Done in different PRs so that they can be reviewed independently, as it's likely this won't be merged before I leave

Includes resolution for #4080 
- [ ] #4299
- [ ] #4318
- [ ] #4320 

Co-authored-by: Diva M <divma@protonmail.com>
Co-authored-by: Age Manning <Age@AgeManning.com>
2023-06-13 01:25:05 +00:00
Jimmy Chen
801bf5951c Merge branch 'unstable' into deneb-merge-from-unstable-20230613 2023-06-13 09:44:18 +10:00
Pawan Dhananjay
0a2a00a527 Update max blobs per block (#4391)
* Change max blobs to 6 in code

* Rename

* fmt
2023-06-12 16:33:48 -04:00
Gua00va
62a2413ade Enable slasher broadcast by default (#4368)
## Issue Addressed

This PR addresses issue https://github.com/sigp/lighthouse/issues/4350

## Proposed Changes

This change will enable slasher broadcast in the following cases:

- no flag is passed,
- `--slasher-broadcast` is passed, and
- `--slasher-broadcast=true` is passed.

The slasher does not broadcast only when an explicit false value is passed (`--slasher-broadcast=false`).
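
A sketch of the resulting semantics (flag parsing itself elided; the helper name is hypothetical):

```rust
// The slasher broadcasts unless the flag is explicitly set to "false":
//   absent                      -> true (the new default)
//   --slasher-broadcast         -> true
//   --slasher-broadcast=true    -> true
//   --slasher-broadcast=false   -> false
fn slasher_broadcast_enabled(flag_value: Option<&str>) -> bool {
    !matches!(flag_value, Some("false"))
}
```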

## Additional Info

TODO

- [x] Modify CLI parsing logic
- [x] Write test

Refer to #4353 


Co-authored-by: Rahul Dogra <rahulcooldogra@gmail.com>
Co-authored-by: Gua00va <105484243+Gua00va@users.noreply.github.com>
2023-06-08 13:47:56 +00:00
Michael Sproul
f167951835 Fix Anvil compilation on Windows (#4381)
## Issue Addressed

Workaround for https://github.com/foundry-rs/foundry/issues/5115.

## Proposed Changes

Allow Anvil to be installed on Windows without errors by enabling the IPC features (which we don't use, but Anvil expects to exist).
2023-06-07 01:50:36 +00:00
Ricki Moore
186d0af873 feat: added new info about logs and config features (#4378)
## Proposed Changes

Add additional information about Siren's new configuration, dashboard and logs view features.
2023-06-07 01:50:35 +00:00
Paul Hauner
5e3fb13cfe Downgrade a CRIT in the VC for builder timeouts (#4366)
## Issue Addressed

NA

## Proposed Changes

Downgrade a `CRIT` to an `ERRO` when there's an `Irrecoverable` error whilst publishing a blinded block.

It's quite common for builders to successfully broadcast a block to the network whilst failing to respond to the BN when it publishes a signed, blinded block. The VC is currently raising a `CRIT` when this happens and I think that's excessive.

These changes have the same intent as  #4073. In that PR I only managed to remove the `CRIT`s in the BN but missed this one in the VC.

I've also tidied the log messages to:

- Give them all the same title (*"Error whilst producing block"*) to help with grepping.
- Include the `block_slot` so it's easy to look up the slot in an explorer and see if it was actually skipped.

## Additional Info

This PR should not change any logic beyond logging.
2023-06-07 01:50:34 +00:00
Michael Sproul
299cfe1fe6 Switch default slasher backend to LMDB (#4360)
## Issue Addressed

Closes #4354
Closes #3987

Replaces #4305, #4283

## Proposed Changes

This switches the default slasher backend _back_ to LMDB.

If an MDBX database exists and the MDBX backend is enabled then MDBX will continue to be used. Our release binaries and Docker images will continue to include MDBX for as long as it is practical, so users of these should not notice any difference.

The main benefit is to users compiling from source and devs running tests. These users no longer have to struggle to compile MDBX and deal with the compatibility issues that arise. Similarly, devs don't need to worry about toggling feature flags in tests or risk forgetting to run the slasher tests due to backend issues.
2023-06-07 01:50:33 +00:00
Peter
b14d1493cc Always log the value of relay and local blocks for comparison (#4352)
## Issue Addressed

N/A

## Proposed Changes

This change will log the value of the relay block and the local block when the relay block is more profitable.

## Additional Info

This change will help validators understand the block selection (as it looks like the execution reward is sometimes higher than the MEV reward).

The rationale for this change is to help operators better understand why a relay block was chosen over a local block.
Looking at produced blocks (at beaconcha.in, for example) it sometimes looks like the builder is making a profit just from the execution reward vs the MEV reward, which creates the nagging question: "Could I have built this block and made that extra profit?". The answer is probably "No, not without the extra transactions included by the relay", but by logging the value of the local block candidate, this will no longer be an issue.


### Example (Mainnet)
https://beaconcha.in/block/17370329
MEV Block Reward: 0.17122 Ether to 0xE35bBaFa0266089f95d745d348b468622805D82B
Execution Reward: 0.17528 Ether to 0x1f9090aaE28b8a3dCeaDf281B0F12828e676c326
Difference: 0.00406 Ether

### Examples (Goerli)

https://goerli.beaconcha.in/block/9040065
MEV Block Reward: 0.56423 Ether to 0xF5794543CF6055Ae710E9c8E99E31343Cea004a8
Execution Reward: 0.56488 Ether to 0xfC0157aA4F5DB7177830ACddB3D5a9BB5BE9cc5e
Difference: 0.00065 Ether

https://goerli.beaconcha.in/block/9019921
MEV Block Reward: 1.39440 Ether to 0xF5794543CF6055Ae710E9c8E99E31343Cea004a8
Execution Reward: 1.39469 Ether to 0xfC0157aA4F5DB7177830ACddB3D5a9BB5BE9cc5e
Difference: 0.00029 Ether

https://goerli.beaconcha.in/block/9015583
MEV Block Reward: 1.04356 Ether to 0xF5794543CF6055Ae710E9c8E99E31343Cea004a8
Execution Reward: 1.04896 Ether to 0xfC0157aA4F5DB7177830ACddB3D5a9BB5BE9cc5e
Difference: 0.0054 Ether
2023-06-07 01:50:31 +00:00
realbigsean
ec1b36474b update automatic docker builds for the correct branch (#4375) 2023-06-05 10:41:11 -04:00
realbigsean
72d6cda11b Revert "Temporarily allow Rust check warnings on 4844 branch. (#4088)" (#4373)
This reverts commit fa9baab0f7.
2023-06-05 09:54:19 -04:00
ethDreamer
aef232cb20 Remove Unnecessary Option in Blob Pruning (#4363) 2023-06-05 09:11:18 -04:00
ethDreamer
ceaa740841 Validate & Store Blobs During Backfill (#4307)
* Verify and Store Blobs During Backfill

* Improve logs

* Eliminated Clone

* Fix Inital Vector Capacity

* Addressed Sean's Comments
2023-06-05 09:09:42 -04:00
realbigsean
7a4be59884 Merge pull request #4367 from realbigsean/merge-unstable-to-deneb
Merge unstable to deneb
2023-06-02 12:12:36 -04:00
realbigsean
6adb68c17a fix compile after merge 2023-06-02 12:10:01 -04:00
realbigsean
a227959298 Merge branch 'unstable' of https://github.com/sigp/lighthouse into deneb-free-blobs 2023-06-02 11:57:15 -04:00
Paul Hauner
d07c78bccf Appease clippy in Rust 1.70 (#4365)
## Issue Addressed

NA

## Proposed Changes

Fixes some new clippy lints raised after updating to Rust 1.70.

## Additional Info

NA
2023-06-02 03:17:40 +00:00
Michael Sproul
4af4e98c82 Update Nethermind (#4361)
## Issue Addressed

Nethermind integration tests are failing with a compilation error: https://github.com/sigp/lighthouse/actions/runs/5138586473/jobs/9248079945

This PR updates Nethermind to the latest release to hopefully fix that
2023-06-02 03:17:39 +00:00
Pawan Dhananjay
d399961e6e Add an option to disable inbound rate limiter (#4327)
## Issue Addressed

On deneb devnetv5, lighthouse keeps rate limiting peers which makes it harder to bootstrap new nodes as there are very few peers in the network. This PR adds an option to disable the inbound rate limiter for testnets.

Added an option to configure inbound rate limits as well.

Co-authored-by: Diva M <divma@protonmail.com>
2023-06-02 03:17:38 +00:00
Michael Sproul
04386cfabb Expose execution block hash calculation (#4326)
## Proposed Changes

This is a light refactor of the execution layer's block hash calculation logic making it easier to use externally. e.g. in `eleel` (https://github.com/sigp/eleel/pull/18).

A static method is preferable to a method because the calculation doesn't actually need any data from `self`, and callers may want to compute block hashes without constructing an `ExecutionLayer` (`eleel` only constructs a simpler `Engine` struct).
2023-06-02 03:17:37 +00:00
chonghe
6c769ed86c Update Lighthouse Book API and Advanced Usage section (#4300)
## Issue Addressed

Update Information in Lighthouse Book

## Proposed Changes

- move Validator Graffiti from Advanced Usage to Validator Management
- update API response and command
- some items I wasn't too sure about are left as comments, which can be seen in raw/review format but not in the live version




Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
2023-06-02 03:17:36 +00:00
chonghe
749a242b0f Addition to Lighthouse Book faq.md (#4273)
## Issue Addressed

Added some frequently asked questions in the Lighthouse Book

## Proposed Changes

Created another file, faqV2.md, which categorises the FAQs into different sections for better organisation. For review purposes, a review of faq.md will suffice. Then, if faqV2.md looks better, faq.md can be deleted; otherwise, if the changes in faqV2.md are too extensive, faq.md can be kept and faqV2.md deleted.



Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
2023-06-02 03:17:35 +00:00
realbigsean
e8f1d533fb Merge pull request #4349 from jimmygchen/deneb-merge-from-unstable-20230530
Deneb merge from unstable 2023/05/30
2023-05-31 09:49:29 -04:00
Jimmy Chen
00ffd186c2 Empty commit. 2023-05-31 21:17:46 +10:00
Jimmy Chen
7b1a0d4bba Fix libpq typo in beacon.watch README (#4356)
## Issue Addressed

Fix README typo (`libpg` -> `libpq`).


Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-31 07:16:20 +00:00
Jimmy Chen
65a2ae38fe Fix failing tests (workaround) 2023-05-31 12:32:31 +10:00
Jimmy Chen
c4063056ac Update testnet ETH1_BLOCK_HASH value 2023-05-31 11:54:56 +10:00
Jimmy Chen
18347231b1 Fix failing tests 2023-05-30 23:31:09 +10:00
Jimmy Chen
81c9af5aaf Use patched versions of common libraries 2023-05-30 22:46:22 +10:00
Jimmy Chen
70c4ae35ab Merge branch 'unstable' into deneb-free-blobs
# Conflicts:
#	.github/workflows/docker.yml
#	.github/workflows/local-testnet.yml
#	.github/workflows/test-suite.yml
#	Cargo.lock
#	Cargo.toml
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	beacon_node/beacon_chain/src/builder.rs
#	beacon_node/beacon_chain/src/test_utils.rs
#	beacon_node/execution_layer/src/engine_api/json_structures.rs
#	beacon_node/network/src/beacon_processor/mod.rs
#	beacon_node/network/src/beacon_processor/worker/gossip_methods.rs
#	beacon_node/network/src/sync/backfill_sync/mod.rs
#	beacon_node/store/src/config.rs
#	beacon_node/store/src/hot_cold_store.rs
#	common/eth2_network_config/Cargo.toml
#	consensus/ssz/src/decode/impls.rs
#	consensus/ssz_derive/src/lib.rs
#	consensus/ssz_derive/tests/tests.rs
#	consensus/ssz_types/src/serde_utils/mod.rs
#	consensus/tree_hash/src/impls.rs
#	consensus/tree_hash/src/lib.rs
#	consensus/types/Cargo.toml
#	consensus/types/src/beacon_state.rs
#	consensus/types/src/chain_spec.rs
#	consensus/types/src/eth_spec.rs
#	consensus/types/src/fork_name.rs
#	lcli/Cargo.toml
#	lcli/src/main.rs
#	lcli/src/new_testnet.rs
#	scripts/local_testnet/el_bootnode.sh
#	scripts/local_testnet/genesis.json
#	scripts/local_testnet/geth.sh
#	scripts/local_testnet/setup.sh
#	scripts/local_testnet/start_local_testnet.sh
#	scripts/local_testnet/vars.env
#	scripts/tests/doppelganger_protection.sh
#	scripts/tests/genesis.json
#	scripts/tests/vars.env
#	testing/ef_tests/Cargo.toml
#	validator_client/src/block_service.rs
2023-05-30 22:44:05 +10:00
Jimmy Chen
e6deaad91e Remove unused crate publishing Github action and script (#4347)
## Issue Addressed

`publish-crate` action and its script no longer used after https://github.com/sigp/lighthouse/pull/3890.
2023-05-30 06:15:58 +00:00
chonghe
f536932935 Add SSH tunneling in Lighthouse UI Siren (#4328)
## Issue Addressed

-

## Proposed Changes

Add information on using SSH to connect to Siren from beyond the local computer



Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-05-30 06:15:57 +00:00
Age Manning
fdea8f2b27 Shift subnet backbone structure (attnets revamp) (#4304)
This PR address the following spec change: https://github.com/ethereum/consensus-specs/pull/3312

Instead of subscribing to a long-lived subnet for every validator attached to a beacon node, all beacon nodes will subscribe to `SUBNETS_PER_NODE` long-lived subnets. This is currently set to 2 for mainnet.

This PR does not include any scoring or advanced discovery mechanisms. A future PR will improve discovery and we can implement scoring after the next hard fork when we expect all client teams and all implementations to respect this spec change.

This will be a significant change in the subnet network structure for consensus clients and we will likely have to monitor and tweak our peer management logic.
2023-05-30 06:15:56 +00:00
Daniel Ramirez Chiquillo
d150ccbee5 Add libpq-dev and docker to the to the list of additional requirements for developers in the Book (#4282)
## Issue Addressed

Realized this was missing while discussing #4280 

## Proposed Changes

Add an item to the list of additional requirements for developers.
2023-05-30 06:15:54 +00:00
Michael Sproul
baad729fa7 Fix Rust 1.71.0 warnings (#4348)
## Issue Addressed

The Rust 1.70 release is imminent, so CI is using 1.71 for the Beta compiler, which is failing with a warning.
2023-05-30 01:38:51 +00:00
Pawan Dhananjay
a7da331f6a Fix geth scripts (#4342)
## Issue Addressed

N/A

## Proposed Changes

Geth's latest release breaks our CI with the following message
```
Fatal: Failed to register the Ethereum service: ethash is only supported as a historical component of already merged networks
Shutting down
```
The latest geth version has removed support for PoW networks. Hence, we need to add an extra `terminalTotalDifficultyPassed` parameter in the genesis config to start from a merged network.
2023-05-30 01:38:50 +00:00
Eitan Seri-Levi
2a7e54d8bd swap unnecessary write lock to read lock in block_verification (#4340)
## Issue Addressed

[#4334](https://github.com/sigp/lighthouse/issues/4334)

## Proposed Changes

swap unnecessary write lock to read lock

## Additional Info

N/A


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-05-30 01:38:49 +00:00
Jimmy Chen
88abaaae05 Add db inspect --output values option to support dumping raw db values (#4324)
## Issue Addressed

Add a new `--output values` option to `db inspect` for dumping raw database values to SSZ files.

This could be useful for inspecting the database when we're unable to start the beacon node.

Example usage:

```
# Output the `ForkChoice` column to an SSZ file
lighthouse db inspect --column frk --output values
```

By default, it stores the output files in the current directory; this can be overridden with `--output-dir`.

List of columns can be found here:
c547a11b0d/beacon_node/store/src/lib.rs (L169-L216)
2023-05-30 01:38:48 +00:00
Jimmy Chen
10318fa34b Update blog link in README (#4322)
## Issue Addressed

Update blog link in README.md

Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-30 01:38:47 +00:00
Philippe Schommers
02ef7ae016 chore: Bellatrix occurred for Gnosis (#4301)
## Issue Addressed

None

## Proposed Changes

Tiny change to the documentation: Bellatrix happened for Gnosis, so it also needs a merge-ready config.

## Additional Info

None
2023-05-30 01:38:46 +00:00
Eitan Seri-Levi
744b1950e5 Keep payload cache idempotent (#4256)
## Issue Addressed

[#4239](https://github.com/sigp/lighthouse/issues/4239)

## Proposed Changes

keep the payload cache entry intact after fetching it

## Additional Info
2023-05-30 01:38:45 +00:00
Paul Hauner
c547a11b0d v4.2.0 (#4309)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2023-05-23 00:17:10 +00:00
Age Manning
aa1ed787e9 Logging via the HTTP API (#4074)
This PR adds the ability to read the Lighthouse logs from the HTTP API for both the BN and the VC. 

This is done in such a way to as minimize any kind of performance hit by adding this feature.

The current design creates a tokio broadcast channel and mixes it into a form of slog drain that combines with our main global logger drain, only if the HTTP API is enabled.

The drain gets the logs, checks the log level and drops them if they are below INFO. If they are INFO or higher, it sends them via a broadcast channel only if there are users subscribed to the HTTP API channel. If not, it drops the logs. 

If there is more than one subscriber, the channel clones the log records and converts them to JSON in their independent HTTP API tasks.
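
A simplified sketch of that flow (stand-in types; the real drain operates on slog records):

```rust
use tokio::sync::broadcast;

// Stand-in for the slog levels, ordered so INFO and above are forwarded.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Level {
    Debug,
    Info,
    Warn,
    Error,
}

// Drop sub-INFO records; send the rest into the broadcast channel only if
// the HTTP API currently has subscribers.
struct HttpApiLogDrain {
    sender: broadcast::Sender<String>,
}

impl HttpApiLogDrain {
    fn log(&self, level: Level, record: String) {
        if level < Level::Info {
            return; // below INFO: dropped immediately
        }
        if self.sender.receiver_count() > 0 {
            // Each subscriber receives a clone; JSON conversion happens in
            // the subscribers' own HTTP API tasks, not here.
            let _ = self.sender.send(record);
        }
    }
}
```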

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-05-22 05:57:08 +00:00
Michael Sproul
c27f2bf9c6 Avoid excessive logging of BN online status (#4315)
## Issue Addressed

https://github.com/sigp/lighthouse/pull/4309#issuecomment-1556052261

## Proposed Changes

Log the `Connected to beacon node` message only if the node was previously offline. This avoids a regression in logging after #4295, whereby the `Connected to beacon node` message would be logged every slot.

The new reduced logging is _slightly different_ from what we had prior to my changes in #4295. The main difference is that we used to log the `Connected` message whenever a node was online and subject to a health check (for being unhealthy in some other way). I think the new behaviour is reasonable, as the `Connected` message isn't particularly helpful if the BN is unhealthy, and the specific reason for unhealthiness will be logged by the warnings for `is_compatible`/`is_synced`.
2023-05-22 02:36:43 +00:00
Lion - dapplion
981c72e863 Update gnosis capella preset (#4302)
## Issue Addressed

N/A

## Proposed Changes

Gnosis preset values have been updated from previous placeholder values. These changes are required for Chiado since it inherits the preset from Gnosis mainnet.

- preset values update PR ref: https://github.com/gnosischain/configs/pull/11

## Additional Info

N/A
2023-05-19 10:07:12 +00:00
int88
283c327f85 add more sepolia bootnodes (#4297)
## Issue Addressed

#4288

## Proposed Changes

add more sepolia bootnodes

## Additional Info

Before adding this fix:
```bash
May 16 08:13:59.161 INFO ENR Initialised                         tcp4: None, udp4: None, ip4: None, id: 0xf5b4..a912, seq: 1, enr: enr:-K24QADMSAVTKfbfTeiHkxDNMoW8OzVTsROE_FkYbo1Bny9kIEj74Q8Rqkz5c1q_PIWG9_3GvtSMfsjPI6h5S6z_wLsBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpBH63KzkAAAcv__________gmlkgnY0iXNlY3AyNTZrMaECD_KuAkUG3mbySM7ifutb6k6iRuG_q6-aUUHevTw7mayIc3luY25ldHMAg3RjcIIjKA, service: libp2p
May 16 08:13:59.162 DEBG Adding node to routing table            tcp: None, udp: Some(9001), ip: Some(178.128.150.254), peer_id: 16Uiu2HAm7xtvD82P5sCTV3N8acdrNpYKFsoY5Da3PShxaJ4zkquH, node_id: 0x2335..612e, service: libp2p
May 16 08:13:59.162 DEBG Discovery service started               service: libp2p
May 16 08:13:59.162 DEBG Starting a peer discovery request       target_peers: 16, service: libp2p
```

After adding this fix:
```bash
May 16 08:52:04.263 INFO ENR Initialised                         udp6: None, tcp6: None, tcp4: Some(9000), udp4: None, ip4: None, id: 0x5065..83f0, seq: 1, enr: enr:-K24QIBmjZ2bYWd30jnYRoBGUvdGV7O0zd90FejaQWNQGIPbCZzY1VfOrdfvVv-CQdwZlCa3izxURu9BbCvZ2o_anJQBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpBH63KzkAAAcv__________gmlkgnY0iXNlY3AyNTZrMaECo-FsxPuF0qVXaGKkNvaWCMTtpscPwFzJyfSb9F7LNtiIc3luY25ldHMAg3RjcIIjKA, service: libp2p
May 16 08:52:04.263 DEBG Adding node to routing table            tcp: None, udp: Some(9001), ip: Some(178.128.150.254), peer_id: 16Uiu2HAm7xtvD82P5sCTV3N8acdrNpYKFsoY5Da3PShxaJ4zkquH, node_id: 0x2335..612e, service: libp2p
May 16 08:52:04.264 DEBG Adding node to routing table            tcp: Some(9000), udp: Some(9000), ip: Some(165.22.196.173), peer_id: 16Uiu2HAmCC5FvoVuvrTgaQyUzf8kf86D3XMe8PshNCLyiQAfzQGi, node_id: 0xfcac..7b09, service: libp2p
May 16 08:52:04.264 DEBG Adding node to routing table            tcp: Some(9000), udp: Some(9000), ip: Some(18.216.38.49), peer_id: 16Uiu2HAkxxexUkjjh9rzPTkoqvT2jvdegLpC8GXp1s6euFeiHpmW, node_id: 0x1977..3ea6, service: libp2p
May 16 08:52:04.264 DEBG Discovery service started               service: libp2p
May 16 08:52:04.264 DEBG Starting a peer discovery request       target_peers: 16, service: libp2p
```

And the nodes added to the routing table are the same as the results decoded from `boot_enr.yaml`, which means the fix is working!
2023-05-19 07:40:35 +00:00
Paul Hauner
01ae37ac37 Add more metrics for tracking sync messages (#4308)
## Issue Addressed

NA

## Proposed Changes

Adds metrics to track validators that are submitting equivocating (but not slashable) sync messages. This follows on from some research we've been doing in a separate fork of LH.

## Additional Info

@jimmygchen and @michaelsproul have already run their eyes over this so it should be easy to get into v4.2.0, IMO.
2023-05-19 05:13:07 +00:00
Jimmy Chen
75aea7054c Enshrine head state shuffling in the shuffling_cache (#4296)
## Issue Addressed

#4281 

## Proposed Changes

- Change the `ShufflingCache` implementation from using `LruCache` to a custom cache that removes the entry with the lowest epoch instead of the oldest insertion time (see the sketch after this list).
- Protect the "enshrined" head shufflings when inserting new committee cache entries. The shuffling ids matching the head's previous, current, and future epochs will never be ejected from the cache during `Self::insert_cache_item`.
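
A sketch of the eviction rule (the key type is simplified to an `(epoch, id)` tuple; the real cache keys on shuffling ids):

```rust
use std::collections::HashMap;

// Evict the entry with the lowest epoch, never touching the shuffling ids
// enshrined for the head's previous, current, and future epochs.
fn evict_lowest_epoch<V>(cache: &mut HashMap<(u64, u64), V>, protected: &[(u64, u64)]) {
    let candidate = cache
        .keys()
        .filter(|key| !protected.contains(*key))
        .min_by_key(|key| key.0)
        .copied();
    if let Some(key) = candidate {
        cache.remove(&key);
    }
}
```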

## Additional Info

There is a bonus point on shuffling preferences in the issue description that hasn't been implemented yet, as I haven't figured out a good way to do this:

> However I'm not convinced since there are some complexities around tie-breaking when two entries have the same epoch. Perhaps preferring entries in the canonical chain is best? 

We should be able to check if a block is on the canonical chain by:

```rust
canonical_head
        .fork_choice_read_lock()
        .contains_block(root)
```

However we need to interleave the shuffling and fork choice locks, which may cause deadlocks if we're not careful (mentioned by @paulhauner). Alternatively, we could use the `state.block_roots` field of the `chain.canonical_head.snapshot.beacon_state`, which avoids deadlock but requires more work.

I'd like to get some feedback on review & testing before I dig deeper into the preferences stuff, as having the canonical head preference may already be quite useful in preventing the issue raised.


Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-19 05:13:05 +00:00
Michael Sproul
3052db29fe Implement el_offline and use it in the VC (#4295)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/4291, part of #3613.

## Proposed Changes

- Implement the `el_offline` field on `/eth/v1/node/syncing`. We set `el_offline=true` if:
  - The EL's internal status is `Offline` or `AuthFailed`, _or_
  - The most recent call to `newPayload` resulted in an error (more on this in a moment; sketched after this list).

- Use the `el_offline` field in the VC to mark nodes with offline ELs as _unsynced_. These nodes will still be used, but only after synced nodes.
- Overhaul the usage of `RequireSynced` so that `::No` is used almost everywhere. The `--allow-unsynced` flag was broken and had the opposite effect to intended, so it has been deprecated.
- Add tests for the EL being offline on the upcheck call, and being offline due to the newPayload check.
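
A sketch of the `el_offline` condition (a simplified status enum; names hypothetical):

```rust
// Simplified stand-in for the EL's internal connection status.
enum EngineStatus {
    Online,
    Offline,
    AuthFailed,
}

// Offline/auth-failed status, or an error on the most recent `newPayload`
// call, both surface as `el_offline: true` on `/eth/v1/node/syncing`.
fn el_offline(status: &EngineStatus, last_new_payload_errored: bool) -> bool {
    matches!(status, EngineStatus::Offline | EngineStatus::AuthFailed)
        || last_new_payload_errored
}
```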


## Why track `newPayload` errors?

Tracking the EL's online/offline status is too coarse-grained to be useful in practice, because:

- If the EL is timing out to some calls, it's unlikely to timeout on the `upcheck` call, which is _just_ `eth_syncing`. Every failed call is followed by an upcheck [here](693886b941/beacon_node/execution_layer/src/engines.rs (L372-L380)), which would have the effect of masking the failure and keeping the status _online_.
- The `newPayload` call is the most likely to time out. It's the call in which ELs tend to do most of their work (often 1-2 seconds), with `forkchoiceUpdated` usually returning much faster (<50ms).
- If `newPayload` is failing consistently (e.g. timing out) then this is a good indication that either the node's EL is in trouble, or the network as a whole is. In the first case validator clients _should_ prefer other BNs if they have one available. In the second case, all of their BNs will likely report `el_offline` and they'll just have to proceed with trying to use them.

## Additional Changes

- Add utility method `ForkName::latest` which is quite convenient for test writing, but probably other things too.
- Delete some stale comments from when we used to support multiple execution nodes.
2023-05-17 05:51:56 +00:00
ethDreamer
aaa118ff0e Fix PERSIST_ETH1_CACHE / PERSIST_OP_POOL Metrics (#4278)
Do these metrics ever get read? As far as I'm aware, they're only ever updated when lighthouse is shutting down?
2023-05-17 05:51:55 +00:00
Pawan Dhananjay
91a7f51ab0 Post merge local testnets (#3807)
## Issue Addressed

N/A

## Proposed Changes

Modifies the local testnet scripts to start a network with genesis validators embedded into the genesis state. This allows us to start a local testnet without the need for deploying a deposit contract or depositing validators pre-genesis.

This also enables us to start a local test network at any fork we want without going through fork transitions. Also adds scripts to start multiple geth clients and peer them with each other and peer the geth clients with beacon nodes to start a post merge local testnet.

## Additional info

Adds a new lcli command `mnemonics-validators` that generates validator directories derived from a given mnemonic. 
Adds a new `derived-genesis-state` option to the `lcli new-testnet` command to generate a genesis state populated with validators derived from a mnemonic.
2023-05-17 05:51:54 +00:00
realbigsean
9b55d74c1c fix db startup (#4298) 2023-05-16 09:43:26 -04:00
Jack McPherson
b29bb2e037 Remove redundant gossipsub tests (#4294)
## Issue Addressed

#2335 

## Proposed Changes

 - Remove the `lighthouse-network::tests::gossipsub_tests` module
 - Remove dead code from the `lighthouse-network::tests::common` helper module (`build_full_mesh`)

## Additional Info

After discussion with both @divagant-martian and @AgeManning, these tests seem to have two main issues in that they are:

 - Redundant, in that they don't test anything meaningful (due to our handling of duplicate messages)
 - Out-of-place, in that it doesn't really test Lighthouse-specific functionality (rather libp2p functionality)

As such, this PR supersedes #4286.
2023-05-16 01:10:47 +00:00
realbigsean
c4b2f1c8ac fix count usage in blobs by range (#4289) 2023-05-15 12:37:51 -04:00
Paul Hauner
7c0b2755c2 Don't requeue already-known RPC blocks (#4214)
## Issue Addressed

NA

## Proposed Changes

Adds an additional check to a feature introduced in #4179 to prevent us from re-queuing already-known blocks that could be rejected immediately.

## Additional Info

Ideally this would have been included in v4.1.0, however we came across it too late to release it safely. We decided that the safest path forward is to release *without* this check and then patch it in the next version. The lack of this check should only result in a very minor performance impact (the impact is totally negligible in my assessment).
2023-05-15 07:22:04 +00:00
Paul Hauner
714ed53839 Add a flag for storing invalid blocks (#4194)
## Issue Addressed

NA

## Proposed Changes

Adds a flag to store invalid blocks on disk for debugging. Only *some* invalid blocks are stored, namely those which (see the sketch after this list):

- Were received via gossip (rather than RPC, for instance)
    - This keeps things simple to start with and should capture most blocks.
- Passed gossip verification
    - This reduces the ability for random people to fill up our disk. A proposer signature is required to write something to disk.
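
A sketch of the storage gate these two conditions amount to (names hypothetical):

```rust
// Only blocks that arrived via gossip *and* passed gossip verification
// (i.e. carry a valid proposer signature) are written to disk.
fn should_store_invalid_block(via_gossip: bool, passed_gossip_verification: bool) -> bool {
    via_gossip && passed_gossip_verification
}
```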

## Additional Info

It's possible that we'll store blocks that aren't necessarily invalid, but we had an internal error during verification. Those blocks seem like they might be useful sometimes.
2023-05-15 07:22:03 +00:00
Pawan Dhananjay
8a3eb4df9c Replace ganache-cli with anvil (#3555)
## Issue Addressed

N/A

## Proposed Changes

Replace ganache-cli with anvil https://github.com/foundry-rs/foundry/blob/master/anvil/README.md
We can lose all js dependencies in CI as a consequence.

## Additional info
Also changes the ethers-rs version used in the execution layer (for the transaction reconstruction) to a newer one. This was necessary to use the ethers utils for anvil. The fixed execution engine integration tests should catch any potential issues with the payload reconstruction after #3592.


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-05-15 07:22:02 +00:00
Mac L
3c029d48bf DB migration for fork choice cleanup (#4265)
## Issue Addressed

#4233

## Proposed Changes

Remove the `best_justified_checkpoint` from the `PersistedForkChoiceStore` type as it is now unused.
Additionally, remove the `Option`'s wrapping the `justified_checkpoint` and `finalized_checkpoint` fields on `ProtoNode` which were only present to facilitate a previous migration.

Include the necessary code to facilitate the migration to a new DB schema.
2023-05-15 02:10:42 +00:00
Jimmy Chen
40abaefffb Attestation verification uses head state fork (#4263)
## Issue Addressed

Addresses #4238 

## Proposed Changes

- [x] Add tests for the scenarios
- [x] Use the fork of the attestation slot for signature verification.
2023-05-15 02:10:41 +00:00
Jimmy Chen
cf239fed61 Reduce bandwidth over the VC<>BN API using dependant roots (#4170)
## Issue Addressed

#4157 

## Proposed Changes

See description in #4157.

In diagram form:

![reduce-attestation-bandwidth](https://user-images.githubusercontent.com/742762/230277084-f97301c1-0c5d-4fb3-92f9-91f99e4dc7d4.png)


Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-15 02:10:40 +00:00
ethDreamer
46db30416d Implement Overflow LRU Cache for Pending Blobs (#4203)
* All Necessary Objects Implement Encode/Decode

* Major Components for LRUOverflowCache Implemented

* Finish Database Code

* Add Maintenance Methods

* Added Maintenance Service

* Persist Blobs on Shutdown / Reload on Startup

* Address Clippy Complaints

* Add (enum_behaviour = "tag") to ssz_derive

* Convert Encode/Decode Implementations to "tag"

* Started Adding Tests

* Added a ton of tests

* 1 character fix

* Feature Guard Minimal Spec Tests

* Update beacon_node/beacon_chain/src/data_availability_checker.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Address Sean's Comments

* Add iter_raw_keys method

* Remove TODOs

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-05-12 10:08:24 -04:00
chonghe
b7b4549545 Update links in Lighthouse Book (#4279)
## Issue Addressed

Update some broken links in Lighthouse Book

## Proposed Changes

The updated links correctly link to the section of the book

2023-05-10 00:33:11 +00:00
Jimmy Chen
8d9c748025 Fix attestation withdrawals root mismatch (#4249)
## Issue Addressed

Addresses #4234 

## Proposed Changes

- Skip withdrawals processing in an inconsistent state replay. 
- Repurpose `StateRootStrategy`: rename to `StateProcessingStrategy` and always skip withdrawals if using `StateProcessingStrategy::Inconsistent`
- Add a test to reproduce the scenario


Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-09 10:48:15 +00:00
Nikita Kryuchkov
c7c51062ab Fix log on initializing external block builder (#4267)
## Issue Addressed

#4266 

## Proposed Changes

- Log `Using external block builder` instead of `Connected to external block builder` on its initialization to resolve the confusion (there's no actual connection there)

## Additional Info

The log is mentioned in builders docs, so it's changed there too.
2023-05-09 07:15:06 +00:00
ethDreamer
a22e4bf636 Implement KZG EF Tests (#4274) 2023-05-08 15:58:23 -04:00
Jack McPherson
d64be0dd78 Add branching instructions to the contribution guide (#4272)
## Issue Addressed

N/A

## Proposed Changes

 - Add an explicit base branch name (i.e., `unstable`) to the [contributing guide](https://github.com/sigp/lighthouse/blob/stable/CONTRIBUTING.md).

## Additional Info

N/A
2023-05-08 12:20:19 +00:00
Jack McPherson
6235e452e1 Do not attempt to resubscribe to core topics (#4271)
This commit adds a check to the networking service when handling core gossipsub topic subscription requests. If the BN is already subscribed to the core topics, we won't attempt to resubscribe.

## Issue Addressed

#4258 

## Proposed Changes

 - In the networking service, check if we're already subscribed to all of the core gossipsub topics and, if so, do nothing (see the sketch below)
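
A sketch of the guard (topics simplified to `String`s):

```rust
use std::collections::HashSet;

// A resubscription attempt is only needed if at least one core topic is
// missing from our current subscription set.
fn should_resubscribe(core_topics: &HashSet<String>, subscribed: &HashSet<String>) -> bool {
    !core_topics.is_subset(subscribed)
}
```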

## Additional Info

N/A
2023-05-08 07:15:26 +00:00
Age Manning
35ca086269 Backfill blocks only to the WSP by default (#4082)
## Limit Backfill Sync

This PR transitions Lighthouse from syncing all the way back to genesis to only syncing back to the weak subjectivity point (~ 5 months) when syncing via a checkpoint sync.

There are a number of important points to note with this PR:

- Firstly and most importantly, this PR fundamentally shifts the default security guarantees of checkpoint syncing in Lighthouse. Prior to this PR, Lighthouse could verify the checkpoint of any given chain by ensuring the chain eventually terminates at the corresponding genesis. This guarantee can still be obtained via the new CLI flag `--genesis-backfill`, which will prompt Lighthouse to revert to the old behaviour of downloading all blocks back to genesis. The new behaviour only checks the proposer signatures for the last 5 months of blocks, but cannot guarantee the chain matches the genesis chain.
- I have not modified any of the peer scoring or RPC responses. Clients syncing from genesis will downscore new Lighthouse peers that do not possess blocks prior to the WSP. This is by design, as Lighthouse nodes of this form need a mechanism to sort through peers in order to find useful ones and complete their genesis sync. We therefore do not discriminate between empty/error responses for blocks prior to or after the local WSP. If we request a block that a peer does not possess, then fundamentally that peer is less useful to us than other peers.
- This is a radical shift in that the majority of nodes will no longer store the full history of the chain. In the future we could also add a pruning mechanism to remove old blocks from the db.


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-05-05 03:49:23 +00:00
int88
60d13c3fea improve api-bn.md (#4244)
## Issue Addressed

NA

## Proposed Changes

improve the `api-bn.md` doc to make it convenient for users to copy and run the commands.

## Additional Info

NA
2023-05-05 00:51:58 +00:00
int88
6d8d212da8 use state cache to optimise historical state lookup (#4228)
## Issue Addressed

#3873

## Proposed Changes

add a cache to optimise historical state lookup.
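
As an illustrative sketch of the idea (not the actual cache type): a bounded slot-indexed map that evicts its lowest slot when full, so repeated historical queries near the same slots are served from memory instead of replaying states from disk.

```rust
use std::collections::BTreeMap;

// Bounded slot -> state cache; evicts the lowest (oldest) slot when full.
struct HistoricalStateCache<S> {
    max_len: usize,
    states: BTreeMap<u64, S>,
}

impl<S> HistoricalStateCache<S> {
    fn new(max_len: usize) -> Self {
        Self { max_len, states: BTreeMap::new() }
    }

    fn insert(&mut self, slot: u64, state: S) {
        self.states.insert(slot, state);
        if self.states.len() > self.max_len {
            let _ = self.states.pop_first();
        }
    }

    fn get(&self, slot: u64) -> Option<&S> {
        self.states.get(&slot)
    }
}
```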

## Additional Info

N/A


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-05-05 00:51:57 +00:00
chonghe
45835f6a6b Update Lighthouse book Sec 3-6 and FAQ (#4221)
## Issue Addressed

Update Lighthouse book to include latest information especially after Capella upgrade

## Proposed Changes

Notable changes:
- Combine Sec 4.1 & 6.1 into Sec 4, because Sec 6.1 is about importing validator keys, which is a required step when running a validator
- Combine Sec 5.1 & 5.2 with Sec 5, and move Sec 5 to under Sec 9
- Added partial withdrawals in Sec 6





Co-authored-by: chonghe <tanck2005@gmail.com>
2023-05-05 00:51:56 +00:00
Eitan Seri-Levi
b1416c8a43 simplify calculate_committee_fraction (#4213)
## Issue Addressed

[#4211](https://github.com/sigp/lighthouse/issues/4211)

## Proposed Changes

This PR conforms the helper function `calculate_committee_fraction` to the [v1.3.0 spec](https://github.com/ethereum/consensus-specs/blob/v1.3.0/specs/phase0/fork-choice.md#get_weight)

## Additional Info

The old definition of `calculate_committee_fraction` was almost identical, but the new definition is simpler.
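
A sketch mirroring the v1.3.0 spec definition (balances in Gwei; plain `u64`s stand in for the spec types):

```rust
// `get_weight`'s helper per the spec:
//   committee_weight = total_active_balance // SLOTS_PER_EPOCH
//   return committee_weight * committee_percent // 100
fn calculate_committee_fraction(
    total_active_balance: u64,
    slots_per_epoch: u64,
    committee_percent: u64,
) -> u64 {
    let committee_weight = total_active_balance / slots_per_epoch;
    committee_weight * committee_percent / 100
}
```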
2023-05-03 09:02:58 +00:00
Akihito Nakano
edbb47dd03 Update igd to v0.12.1 (#4257)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/4171

## Proposed Changes

Through [this PR](https://github.com/sbstp/rust-igd/pull/56) in rust-igd, `igd` v0.12.1 no longer panics if there is an issue while searching for a gateway. So updating igd makes lighthouse emit a helpful log instead of panicking.

## Additional Info

No CHANGELOG exists in rust-igd. 👀 Here is the commit history between v0.11.1 and v0.12.1. No breaking changes.

https://github.com/sbstp/rust-igd/compare/v0.11.1...v0.12.1
2023-05-03 04:12:14 +00:00
Jimmy Chen
2aef2db66f Un-deprecate test utils functions such as extend_chain (#4255)
## Issue Addressed

This PR un-deprecates some commonly used test util functions, e.g. `extend_chain`. Most of these were deprecated in 2020 but some of us still found them quite convenient and they're still being used a lot. If there's no issue with using them, I think we should remove the "Deprecated" comment to avoid confusion.
2023-05-03 04:12:12 +00:00
Michael Sproul
bb651a503d Enable build profiles for Docker source builds (#4237)
## Proposed Changes

- Allow Docker images to be built with different profiles via e.g. `--build-arg PROFILE=maxperf`.
- Include the build profile in `lighthouse --version`.

## Additional Info

This only affects Docker images built from source. Our published Docker images use `cross`-compiled binaries that get copied into place.
2023-05-03 04:12:11 +00:00
Age Manning
616bee6757 Maintain trusted peers (#4159)
## Issue Addressed
#4150 

## Proposed Changes

Maintain trusted peers in the pruning logic. ~~In principle the changes here are not necessary as a trusted peer has a max score (100) and all other peers can have at most 0 (because we don't implement positive scores). This means that we should never prune trusted peers unless we have more trusted peers than the target peer count.~~

This change shifts this logic to explicitly never prune trusted peers which I expect is the intuitive behaviour. 

~~I suspect the issue in #4150 arises when a trusted peer disconnects from us for one reason or another and then we remove that peer from our peerdb as it becomes stale. When it re-connects at some large time later, it is no longer a trusted peer.~~

Currently we do disconnect trusted peers, and this PR corrects this to maintain trusted peers in the pruning logic.

As suggested in #4150 we maintain trusted peers in the db and thus we remember them even if they disconnect from us.
2023-05-03 04:12:10 +00:00
realbigsean
9db6b39dc3 fix check on max request size (#4250) 2023-05-02 19:14:02 -04:00
Michael Sproul
826e748629 Prepare CI for merge queues (#4252)
* Prepare CI for merge queues

* Fix syntax SNAFUs
2023-05-02 01:59:51 +00:00
Eitan Seri-Levi
f1c64d119a remove withdrawal warning (#4207)
## Issue Addressed

[#4196](https://github.com/sigp/lighthouse/issues/4196).

## Proposed Changes

Remove the withdrawal warning since we are past the Shapella upgrade

## Additional Info
2023-05-01 08:22:40 +00:00
Eitan Seri-Levi
11103d6289 Logfile no restrict help text for windows (#4165)
## Issue Addressed

[#4162](https://github.com/sigp/lighthouse/issues/4162)

## Proposed Changes

Update the `--logfile-no-restricted-perms` flag help text to indicate that, for Windows users, the file permissions are inherited from the parent folder

## Additional Info

N/A
2023-05-01 08:22:39 +00:00
Ricki Moore
aaf1e4b1ab Feat: lighthouse book - ui authentication (#4232)
## Proposed Changes

Added a page explaining authentication to the Siren UI section of the book.

## Additional Info

2023-05-01 02:15:57 +00:00
Michael Sproul
c11638c36c Split common crates out into their own repos (#3890)
## Proposed Changes

Split out several crates which now exist in separate repos under `sigp`.

- [`ssz` and `ssz_derive`](https://github.com/sigp/ethereum_ssz)
- [`tree_hash` and `tree_hash_derive`](https://github.com/sigp/tree_hash)
- [`ethereum_hashing`](https://github.com/sigp/ethereum_hashing)
- [`ethereum_serde_utils`](https://github.com/sigp/ethereum_serde_utils)
- [`ssz_types`](https://github.com/sigp/ssz_types)

For the published crates see: https://crates.io/teams/github:sigp:crates-io?sort=recent-updates.

## Additional Info

- [x] Need to work out how to handle versioning. I was hoping to do 1.0 versions of several crates, but if they depend on `ethereum-types 0.x` that is not going to work. EDIT: decided to go with 0.5.x versions.
- [x] Need to port several changes from `tree-states`, `capella`, `eip4844` branches to the external repos.
2023-04-28 01:15:40 +00:00
ethDreamer
c1d47da02d Update engine_api to latest version (#4223)
* Update Engine API to Latest

* Get Mock EE Working

* Fix Mock EE

* Update Engine API Again

* Rip out get_blobs_bundle Stuff

* Fix Test Harness

* Fix Clippy Complaints

* Fix Beacon Chain Tests
2023-04-27 14:18:21 -04:00
Justin Traglia
aa34339298 Rename to MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS (#4206)
Co-authored-by: realbigsean <sean@sigmaprime.io>
2023-04-26 14:53:06 -04:00
Pawan Dhananjay
cbe4880490 Fix deneb doppelganger tests (#4124)
* Temp hack to compile

* Fix doppelganger tests

* Kill in groups instead of storing pid

* Install geth in CI

* Install geth first

* Fix eth1_block_hash

* Fix directory paths and block hash

* Fix workflow for local testnets; reset genesis.json after running script

* Disable capella and deneb forks for doppelganger tests

* oops not capella

* Spin up a spare bn for the doppelganger validator

* testing

* Revert "testing"

This reverts commit 14eb178bca.

* Modify beacon_node script to take trusted peers

* Set doppelganger bn as a trusted peer

* Update var

* update another

* Fix port

* Add a flag to disable peer scoring

* Disable peer scoring in local testnet bn script

* Revert trusted peers hack

* fmt

* Fix proposer boost score
2023-04-26 13:26:00 -04:00
Pawan Dhananjay
a632969695 Gossip verification cleanup (#4219)
* Add ObservedBlobSidecar tests

* Add logging for tricky verification cases

* Update beacon_node/beacon_chain/src/blob_verification.rs

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-04-26 10:44:58 -04:00
Age Manning
7456e1e8fa Separate BN for block proposals (#4182)
It is a well-known fact that IP addresses for beacon nodes used by specific validators can be de-anonymized. There is an assumed risk that a malicious user may attempt to DOS validators when producing blocks to prevent chain growth/liveness.

Although there are a number of ideas put forward to address this, there a few simple approaches we can take to mitigate this risk.

Currently, a Lighthouse user is able to set a number of beacon-nodes that their validator client can connect to. If one beacon node is taken offline, it can fall back to another. Different beacon nodes can use VPNs or rotate IPs in order to mask their IPs.

This PR provides an additional setup option which further mitigates attacks of this kind.

This PR introduces a CLI flag --proposer-only to the beacon node. Setting this flag will configure the beacon node to run with minimal peers and crucially will not subscribe to subnets or sync committees. Therefore nodes of this kind should not be identified as nodes connected to validators of any kind.

It also introduces a CLI flag --proposer-nodes to the validator client. Users can then provide a number of beacon nodes (which may or may not run the --proposer-only flag) that the validator client will use for block production and propagation only. If these nodes fail, the validator client will fall back to the default list of beacon nodes.

Users are then able to set up a number of beacon nodes dedicated to block proposals (which are unlikely to be identified as validator nodes) and point their validator clients to produce blocks on these nodes and attest on other beacon nodes. An attack attempting to prevent liveness on the eth2 network would then need to preemptively find and attack the proposer nodes which is significantly more difficult than the default setup.

This is a follow on from: #3328 

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-04-26 01:12:36 +00:00
int88
90d562b3d4 add attestation inclusion distance in http api (#4148)
## Issue Addressed

#4097

## Proposed Changes

Add attestation inclusion distance in http api, extend `/lighthouse/ui/validator_metrics` to include it.

## Usage
```
curl -X POST "http://localhost:8001/lighthouse/ui/validator_metrics" -d '{"indices": [1]}' -H "Content-Type: application/json" | jq
```

```
{
  "data": {
    "validators": {
      "1": {
        "attestation_hits": 3,
        "attestation_misses": 1,
        "attestation_hit_percentage": 75,
        "attestation_head_hits": 3,
        "attestation_head_misses": 0,
        "attestation_head_hit_percentage": 100,
        "attestation_target_hits": 3,
        "attestation_target_misses": 0,
        "attestation_target_hit_percentage": 100,
        "attestation_inclusion_distance": 1
      }
    }
  }
}
```

## Additional Info

NA
2023-04-26 01:12:35 +00:00
realbigsean
cbe2e47931 update blobs by range protocol name (#4229) 2023-04-24 09:03:23 -04:00
Pawan Dhananjay
7a36d004e4 Subscribe blob topics (#4224) 2023-04-22 09:21:09 -04:00
Pawan Dhananjay
b6c0e91c05 Merge branch 'eip4844' into deneb-free-blobs 2023-04-21 14:34:50 -07:00
Pawan Dhananjay
b2ccc822d8 Fix compiler issues 2023-04-21 14:14:57 -07:00
Pawan Dhananjay
689c0f76d3 Merge branch 'unstable' into eip4844 2023-04-21 14:13:25 -07:00
Pawan Dhananjay
a78285db5e Fix Rust 1.69 lints (#4222)
## Issue Addressed

N/A

## Proposed Changes

Fixes lints, mostly `extra_unused_type_parameters`: https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_type_parameters
2023-04-21 18:29:28 +00:00
Jimmy Chen
ed7824869c Update LLVM version to 15.0 in CI workflows (#4220)
## Issue Addressed

The latest stable version (1.69.0) of Rust was released on 20 April and contains this change:
- [Update the minimum external LLVM to 14.](https://github.com/rust-lang/rust/pull/107573/)

This impacts some of our CI workflows (build and release-test-windows) that use LLVM 13.0. This PR updates the workflows to install LLVM 15.0.

**UPDATE**: Also updated `h2` to address [this issue](https://github.com/advisories/GHSA-f8vr-r385-rh5r)
2023-04-21 18:29:27 +00:00
Pawan Dhananjay
895bbd6c03 Gossip conditions deneb (#4164)
* Add all gossip conditions

* Handle some gossip errors

* Update beacon_node/beacon_chain/src/blob_verification.rs

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>

* Add an ObservedBlobSidecars cache

---------

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
2023-04-20 18:26:20 -04:00
realbigsean
e6b033aefd update blob transaction (#4218)
* update blob transaction

* update blob transaction

* rename in JSON deserializing
2023-04-20 18:23:59 -04:00
realbigsean
a6335eb27e bump ef tests version (#4217) 2023-04-20 14:00:25 -04:00
Jimmy Chen
2d083436c8 Switch blob tx type to 0x03 (#4186) 2023-04-20 13:58:49 -04:00
Jimmy Chen
9dee718153 Remove unused blob endpoint and types (#4209) 2023-04-20 13:40:10 -04:00
Paul Hauner
693886b941 Release v4.1.0 (#4191)
## Issue Addressed

NA

## Proposed Changes

Bump versions.

## Additional Info

NA
2023-04-20 00:51:38 +00:00
Paul Hauner
48843ba198 Check lateness of block before requeuing it (#4208)
## Issue Addressed

NA

## Proposed Changes

Avoids reprocessing loops introduced in #4179. (Also somewhat related to #4192).

Breaks the re-queue loop by only re-queuing when an RPC block is received before the attestation creation deadline.

I've put `proposal_is_known` behind a closure to avoid interacting with the `observed_proposers` lock unnecessarily. 
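
A tiny, self-contained sketch of that closure trick (names are illustrative): the lock is only taken if the cheaper checks don't already decide the outcome.

```rust
use std::sync::Mutex;

fn main() {
    // Stand-in for the observed_proposers cache behind a lock.
    let observed_proposers = Mutex::new(vec![42u64]);

    let block_is_early = true; // stand-in for the attestation-deadline check
    let proposal_is_known = || observed_proposers.lock().unwrap().contains(&42);

    // `proposal_is_known` (and thus the lock) is only evaluated if
    // `block_is_early` is true, thanks to short-circuit evaluation.
    if block_is_early && proposal_is_known() {
        println!("re-queue the block");
    }
}
```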

## Additional Info

NA
2023-04-19 04:23:20 +00:00
Jimmy Chen
434386774e Bump Rust version (MSRV) (#4204)
## Issue Addressed

There was a [`VecDeque` bug](https://github.com/rust-lang/rust/issues/108453) in some recent versions of the Rust standard library (1.67.0 & 1.67.1) that could cause Lighthouse to panic (reported by `@Sea Monkey` on discord). See full logs below.

The issue was likely introduced in Rust 1.67.0 and [fixed](https://github.com/rust-lang/rust/pull/108475) in 1.68, and we were able to reproduce the panic ourselves using [@michaelsproul's fuzz tests](https://github.com/michaelsproul/lighthouse/blob/fuzz-lru-time-cache/beacon_node/lighthouse_network/src/peer_manager/fuzz.rs#L111) on both Rust 1.67.0 and 1.67.1. 

Users that use our Docker images or binaries are unlikely to be affected, as our Docker images were built with `1.66`, and the latest binaries were built with the latest stable (`1.68.2`). It likely impacts users that build from source using Rust versions 1.67.x.

## Proposed Changes

Bump Rust version (MSRV) to latest stable `1.68.2`. 

## Additional Info

From `@Sea Monkey` on Lighthouse Discord:

> Crash on goerli using `unstable` `dd124b2d6804d02e4e221f29387a56775acccd08`

```
thread 'tokio-runtime-worker' panicked at 'Key must exist', /mnt/goerli/goerli/lighthouse/common/lru_cache/src/time.rs:68:28
stack backtrace:
Apr 15 09:37:36.993 WARN Peer sent invalid block in single block lookup, peer_id: 16Uiu2HAm6ZuyJpVpR6y51X4Enbp8EhRBqGycQsDMPX7e5XfPYznG, error: WouldRevertFinalizedSlot { block_slot: Slot(5420212), finalized_slot: Slot(5420224) }, root: 0x10f6…3165, service: sync
   0: rust_begin_unwind
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:64:14
   2: core::panicking::panic_display
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:135:5
   3: core::panicking::panic_str
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:119:5
   4: core::option::expect_failed
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/option.rs:1879:5
   5: lru_cache::time::LRUTimeCache<Key>::raw_remove
   6: lighthouse_network::peer_manager::PeerManager<TSpec>::handle_ban_operation
   7: lighthouse_network::peer_manager::PeerManager<TSpec>::handle_score_action
   8: lighthouse_network::peer_manager::PeerManager<TSpec>::report_peer
   9: network::service::NetworkService<T>::spawn_service::{{closure}}
  10: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
  11: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
  12: <futures_util::future::future::flatten::Flatten<Fut,<Fut as core::future::future::Future>::Output> as core::future::future::Future>::poll
  13: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  14: tokio::runtime::task::core::Core<T,S>::poll
  15: tokio::runtime::task::harness::Harness<T,S>::poll
  16: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run
  18: tokio::macros::scoped_tls::ScopedKey<T>::set
  19: tokio::runtime::scheduler::multi_thread::worker::run
  20: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  21: tokio::runtime::task::core::Core<T,S>::poll
  22: tokio::runtime::task::harness::Harness<T,S>::poll
  23: tokio::runtime::blocking::pool::Inner::run
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Apr 15 09:37:37.069 INFO Saved DHT state                         service: network
Apr 15 09:37:37.070 INFO Network service shutdown                service: network
Apr 15 09:37:37.132 CRIT Task panic. This is a bug!              advice: Please check above for a backtrace and notify the developers, message: <none>, task_name: network
Apr 15 09:37:37.132 INFO Internal shutdown received              reason: Panic (fatal error)
Apr 15 09:37:37.133 INFO Shutting down..                         reason: Failure("Panic (fatal error)")
Apr 15 09:37:37.135 WARN Unable to free worker                   error: channel closed, msg: did not free worker, shutdown may be underway
Apr 15 09:37:39.350 INFO Saved beacon chain to disk              service: beacon
Panic (fatal error)
```
2023-04-18 02:47:37 +00:00
Michael Sproul
e9a7316f1d Set user agent on requests to builder (#4199)
## Issue Addressed

Closes #4185

## Proposed Changes

- Set user agent to `Lighthouse/vX.Y.Z-<commit hash>` by default
- Allow tweaking user agent via `--builder-user-agent "agent"`
2023-04-18 02:47:36 +00:00
Michael Sproul
1d92e3f77c Use efficient payload reconstruction for HTTP API (#4102)
## Proposed Changes

Builds on #4028 to use the new payload bodies methods in the HTTP API as well.

## Caveats

The payloads by range method only works for the finalized chain, so it can't be used in the execution engine integration tests because we try to reconstruct unfinalized payloads there.
2023-04-18 02:47:35 +00:00
Paul Hauner
dd124b2d68 Address observed proposers behaviour (#4192)
## Issue Addressed

NA

## Proposed Changes

Apply two changes to code introduced in #4179:

1. Remove the `ERRO` log for when we error on `proposer_has_been_observed()`. We were seeing a lot of this in our logs for finalized blocks and it's a bit noisy.
1. Use `false` rather than `true` for `proposal_already_known` when there is an error. If a block raises an error in `proposer_has_been_observed()` then the block must be invalid, so we should process (and reject) it now rather than queuing it (see the sketch after the example log below).

For reference, here is one of the offending `ERRO` logs:

```
ERRO Failed to check observed proposers block_root: 0x5845…878e, source: rpc, error: FinalizedBlock { slot: Slot(5410983), finalized_slot: Slot(5411232) }
```
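
A minimal sketch of change (2), with illustrative names: defaulting to `false` on error means the invalid block is processed, and rejected, immediately instead of being re-queued.

```rust
// On error we now assume the proposal is NOT already known, so the block is
// processed (and rejected) now rather than queued.
fn proposal_already_known(check: Result<bool, &'static str>) -> bool {
    check.unwrap_or(false) // previously this path effectively behaved like `true`
}

fn main() {
    // e.g. the FinalizedBlock error from the log above
    assert!(!proposal_already_known(Err("FinalizedBlock")));
    assert!(proposal_already_known(Ok(true)));
}
```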

## Additional Info

NA
2023-04-14 06:37:16 +00:00
Paul Hauner
2b3084f578 Use head state for exit verification (#4183)
## Issue Addressed

NA

## Proposed Changes

Similar to #4181 but without the version bump and a more nuanced fix.

Patches the high CPU usage seen after the Capella fork which was caused by processing exits when there are skip slots.

## Additional Info

~~This is an imperfect solution that will cause us to drop some exits at the fork boundary. This is tracked at #4184.~~
2023-04-14 01:11:46 +00:00
chonghe
56dba96319 Update Lighthouse book and some FAQs (#4178)
## Issue Addressed

Updated Lighthouse book on Section 2 and added some FAQs

## Proposed Changes

All changes are made in the book/src .md files.

## Additional Info



Co-authored-by: chonghe <tanck2005@gmail.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-04-14 01:11:45 +00:00
Michael Sproul
a3669abac5 Avoid processing redundant RPC blocks (#4179)
## Proposed Changes

We already make some attempts to avoid processing RPC blocks when a block from the same proposer is already being processed through gossip. This PR strengthens that guarantee by using the existing cache for `observed_block_producers` to inform whether an RPC block's processing should be delayed.
2023-04-13 07:05:02 +00:00
Michael Sproul
b90c0c3fb1 Make re-org strat more cautious and add more config (#4151)
## Proposed Changes

This change attempts to prevent failed re-orgs by:

1. Lowering the re-org cutoff from 2s to 1s. This is informed by a failed re-org attempted by @yorickdowne's node. The failed block was requested in the 1.5-2s window due to a Vouch failure, and failed to propagate to the majority of the network before the attestation deadline at 4s.
2. Allow users to adjust their re-org cutoff depending on observed network conditions and their risk profile. The static 2 second cutoff was too rigid.
3. Add a `--proposer-reorg-disallowed-offsets` flag which can be used to prohibit reorgs at certain slots. This is intended to help workaround an issue whereby reorging blocks at slot 1 are currently taking ~1.6s to propagate on gossip rather than ~500ms. This is suspected to be due to a cache miss in current versions of Prysm, which should be fixed in their next release.

## Additional Info

I'm of two minds about removing the `shuffling_stable` check which checks for blocks at slot 0 in the epoch. If we removed it users would be able to configure Lighthouse to try reorging at slot 0, which likely wouldn't work very well due to interactions with the proposer index cache. I think we could leave it for now and revisit it later.
2023-04-13 07:05:01 +00:00
ethDreamer
00cf5fc184 Remove Redundant Trait Bound (#4169)
I realized this is redundant while reasoning about how the `store` is implemented given the [definition of `ItemStore`](https://github.com/sigp/lighthouse/blob/v4.0.1/beacon_node/store/src/lib.rs#L107)
```rust
pub trait ItemStore<E: EthSpec>: KeyValueStore<E> + Sync + Send + Sized + 'static {
    ...
}
```
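
A self-contained illustration of why such a bound is redundant (stand-in traits, not the real `store` crate): a supertrait bound is implied wherever the subtrait bound appears.

```rust
trait KeyValueStore {
    fn get(&self, key: &str) -> Option<String>;
}

// Mirrors the shape of `ItemStore`: `KeyValueStore` is a supertrait.
trait ItemStore: KeyValueStore + Sync + Send + Sized + 'static {}

// `S: ItemStore` already implies `S: KeyValueStore`, so writing
// `S: ItemStore + KeyValueStore` would repeat what the compiler already knows.
fn fetch<S: ItemStore>(store: &S, key: &str) -> Option<String> {
    store.get(key)
}

struct MemStore;
impl KeyValueStore for MemStore {
    fn get(&self, _key: &str) -> Option<String> {
        None
    }
}
impl ItemStore for MemStore {}

fn main() {
    assert!(fetch(&MemStore, "anything").is_none());
}
```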
2023-04-12 01:48:22 +00:00
Mac L
0e2e23e088 Remove the unused ExecutionOptimisticForkVersionedResponse type (#4160)
## Issue Addressed

#4146 

## Proposed Changes

Removes the `ExecutionOptimisticForkVersionedResponse` type and the associated Beacon API endpoint which is now deprecated. Also removes the test associated with the endpoint.
2023-04-12 01:48:21 +00:00
Pawan Dhananjay
3b117f4bf6 Add a flag to disable peer scoring (#4135)
## Issue Addressed

N/A

## Proposed Changes

Adds a flag for disabling peer scoring. This is useful for local testing and testing small networks for new features.
2023-04-12 01:48:19 +00:00
Divma
fca8559acc Update kzg to get windows going, expose blst features (#4177)
* fmt

* update kzg

* use commit from main repo
2023-04-10 19:05:01 -05:00
Diva M
f6f63b18ed Merge branch 'eip4844' into deneb-free-blobs 2023-04-10 10:47:43 -05:00
Diva M
df1da104fd Merge branch 'unstable' into eip4844 2023-04-10 10:46:41 -05:00
Jimmy Chen
4d17fb3af6 CI fix: move download web3signer binary out of build script (#4163)
## Issue Addressed

Attempt to fix #3812 

## Proposed Changes

Move the web3signer binary download script out of the build script to avoid downloading unless necessary. If this works, it should also reduce the build time for all jobs that run compilation.
2023-04-06 06:36:21 +00:00
Diva M
911a63559b Merge branch 'eip4844' into deneb-free-blobs 2023-04-05 13:33:33 -05:00
Pawan Dhananjay
1b8225c76d Revert upgrade to tokio utils to reprocessing queue (#4167) 2023-04-05 11:43:39 -05:00
Diva M
32f9ba04d7 fix merge conflict 2023-04-04 12:10:51 -05:00
Diva M
cb818152f3 Merge branch 'unstable' into eip4844 2023-04-04 12:07:09 -05:00
Diva M
3c1a22ceaf Merge commit '1e029ce5384e911390a513e2d1885532f34a8b2b' into eip4844 2023-04-04 11:56:54 -05:00
Diva M
9558c18dc5 Merge commit 'c5383e393acee152e92641ce4699d05913953e70' into eip4844 2023-04-04 11:56:01 -05:00
Diva M
905322394b Merge commit '036b797b2c1831352f937356576b3c78c65220ad' into eip4844 2023-04-04 11:53:55 -05:00
ethDreamer
3a21317600 Unified Availability Cache into One (#4161)
* Unified Availability Cache into One

* Update beacon_node/beacon_chain/src/data_availability_checker.rs

Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-04-04 09:50:35 -04:00
Pawan Dhananjay
ffefd20137 Block processing cleanup (#4153)
* Implements Ord for BlobSidecar based on index

* Use BTreeMap for gossip cache to maintain blob order by index

* fmt

* Another panic fix
2023-04-03 15:07:11 +05:30
Mac L
8630ddfec4 Add beacon.watch (#3362)
> This is currently a WIP and all features are subject to alteration or removal at any time.

## Overview

The successor to #2873.

Contains the backbone of `beacon.watch` including syncing code, the initial API, and several core database tables.

See `watch/README.md` for more information, requirements and usage.
2023-04-03 05:35:11 +00:00
int88
1e029ce538 remove dup log (#4155)
## Issue Addressed

NA

## Proposed Changes

remove duplicate log message.

## Additional Info

NA
2023-04-03 03:02:58 +00:00
Age Manning
311e69db65 Ban peer race condition (#4140)
It is possible that when we go to ban a peer, there is already an unban message in the queue. This could lead to the case that we ban and immediately unban a peer, leaving us in a state where a should-be-banned peer is unbanned.

If this banned peer connects to us in this faulty state, we currently do not attempt to re-ban it. This PR also corrects this, so if we do see this error, it will now self-correct (although we shouldn't see the error in the first place).

I have also increased the severity of not supporting protocols, as I see peers ultimately get banned in a few steps and it seems to make sense to just ban them outright, rather than have them linger.
2023-04-03 03:02:57 +00:00
Jimmy Chen
e2c68c8893 Add new validator API for voluntary exit (#4119)
## Issue Addressed

Addresses #4117 

## Proposed Changes

See https://github.com/ethereum/keymanager-APIs/pull/58 for proposed API specification.

## TODO

- [x] ~~Add submission to BN~~ 
  - removed, see discussion in [keymanager API](https://github.com/ethereum/keymanager-APIs/pull/58)
- [x] ~~Add flag to allow voluntary exit via the API~~ 
  - no longer needed now the VC doesn't submit exit directly
- [x] ~~Additional verification / checks, e.g. if validator on same network as BN~~ 
  - to be done on client side
- [x] ~~Potentially wait for the message to propagate and return some exit information in the response~~ 
  - not required
- [x] Update http tests
- [x] ~~Update lighthouse book~~ 
  - not required if this endpoint makes it to the standard keymanager API

Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-04-03 03:02:56 +00:00
Jimmy Chen
2de3451011 Rate limiting backfill sync (#3936)
## Issue Addressed

#3212 

## Proposed Changes

- Introduce a new `rate_limiting_backfill_queue` - any new inbound backfill work events gets immediately sent to this FIFO queue **without any processing**
- Spawn a `backfill_scheduler` routine that pops a backfill event from the FIFO queue at specified intervals (currently halfway through a slot, or at 6s after slot start for 12s slots) and sends the event to `BeaconProcessor` via a `scheduled_backfill_work_tx` channel (a minimal sketch follows below)
- This channel gets polled last in the `InboundEvents`, and work event received is  wrapped in a `InboundEvent::ScheduledBackfillWork` enum variant, which gets processed immediately or queued by the `BeaconProcessor` (existing logic applies from here)

Diagram comparing backfill processing with / without rate-limiting: 
https://github.com/sigp/lighthouse/issues/3212#issuecomment-1386249922

See this comment for @paulhauner's  explanation and solution: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956
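
The sketch mentioned above, assuming a tokio runtime; `BackfillEvent`, the channel wiring, and the tick interval are stand-ins for the real beacon processor types:

```rust
use std::collections::VecDeque;
use std::time::Duration;

#[derive(Debug)]
struct BackfillEvent(u64); // hypothetical work item

#[tokio::main]
async fn main() {
    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<BackfillEvent>();

    // Inbound events go straight into a FIFO queue without any processing.
    for i in 0..3 {
        tx.send(BackfillEvent(i)).unwrap();
    }
    drop(tx);
    let mut queue: VecDeque<BackfillEvent> = VecDeque::new();
    while let Some(event) = rx.recv().await {
        queue.push_back(event);
    }

    // The scheduler pops one event per tick (e.g. halfway through a slot)
    // and forwards it for processing.
    let mut interval = tokio::time::interval(Duration::from_millis(100));
    while let Some(event) = queue.pop_front() {
        interval.tick().await;
        println!("processing {event:?}"); // stand-in for sending to BeaconProcessor
    }
}
```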

## Additional Info

I've compared this branch (with backfill processing rate-limited to 1 and 3 batches per slot) against the latest stable version. The CPU usage during backfill sync is reduced by ~5% - 20%, more details on this page:

https://hackmd.io/@jimmygchen/SJuVpJL3j

The above testing was done on Goerli (as I don't currently have hardware for Mainnet); I'm guessing the differences are likely to be bigger on mainnet due to block size.

### TODOs

- [x] Experiment with processing multiple batches per slot. (need to think about how to do this for different slot durations)
- [x] Add option to disable rate-limiting, enabled by default.
- [x] (No longer required now we're reusing the reprocessing queue) Complete the `backfill_scheduler` task when backfill sync is completed or not required
2023-04-03 03:02:55 +00:00
chonghe
c5383e393a Update database-migrations.md (#4149)
## Issue Addressed

Update the database-migrations page to include v4.0.1 for database version v16.


## Proposed Changes

Update the table by adding a row

## Additional Info

2023-03-31 05:00:50 +00:00
int88
5691123153 update README of local_testnet (#4114)
## Issue Addressed

NA

## Proposed Changes

Update the descriptions in the README in `scripts/local_testnet`.

## Additional Info

NA
2023-03-30 10:14:07 +00:00
Jimmy Chen
d351cc8d8d Test failing CI tests due to port conflicts (#4134)
## Issue Addressed

#4127. PR to test port conflicts in CI tests.

## Proposed Changes

See the issue for more details; a potential solution could be adding a time-bound cache to the `unused_port` function.
2023-03-30 06:08:38 +00:00
Daniel Ramirez Chiquillo
036b797b2c Add finalized to HTTP API responses (#3753)
## Issue Addressed

#3708 

## Proposed Changes
- Add `is_finalized_block` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `is_finalized_state` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `fork_and_execution_optimistic_and_finalized` in `beacon_node/http_api/src/state_id.rs`.
- Add `ExecutionOptimisticFinalizedForkVersionedResponse` type in `consensus/types/src/fork_versioned_response.rs`.
- Add `execution_optimistic_finalized_fork_versioned_response` function in `beacon_node/http_api/src/version.rs`.
- Add `ExecutionOptimisticFinalizedResponse` type in `common/eth2/src/types.rs`.
- Add `add_execution_optimistic_finalized` method in `common/eth2/src/types.rs`.
- Update API response methods to include finalized (a sketch of the resulting response shape follows below).
- Remove `execution_optimistic_fork_versioned_response`
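
For illustration, a hedged sketch of the resulting response shape (field names inferred from the type names above, not the exact definition):

```rust
// Illustrative only: a response wrapper carrying both flags plus the data.
struct ExecutionOptimisticFinalizedResponse<T> {
    execution_optimistic: Option<bool>,
    finalized: Option<bool>,
    data: T,
}

fn main() {
    let resp = ExecutionOptimisticFinalizedResponse {
        execution_optimistic: Some(false),
        finalized: Some(true),
        data: "block root",
    };
    println!(
        "execution_optimistic: {:?}, finalized: {:?}, data: {}",
        resp.execution_optimistic, resp.finalized, resp.data
    );
}
```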

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-30 06:08:37 +00:00
Age Manning
12205a8811 Correct log for ENR (#4133)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/4080

Fixes a log when displaying the initial ENR.
2023-03-29 23:55:55 +00:00
Maksim Shcherbo
788a4b718f Optimise update_validators by decrypting key cache only when necessary (#4126)
## Issue Addressed

Resolves [#3968: Slow performance of validator client PATCH API with hundreds of keys](https://github.com/sigp/lighthouse/issues/3968)

## Proposed Changes

1. Add a check to determine if there is at least one local definition before decrypting the key cache (a minimal sketch follows after this list).
2. Assign an empty `KeyCache` when all definitions are of the `Web3Signer` type.
3. Perform cache-related operations (e.g., saving the modified key cache) only if there are local definitions.
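
The sketch referenced in point 1 (enum and function names are stand-ins for the validator client's actual types):

```rust
enum Definition {
    Local(&'static str),      // keystore-backed validator
    Web3Signer(&'static str), // remote signer URL
}

struct KeyCache(Vec<String>);

fn expensive_decrypt() -> KeyCache {
    println!("running scrypt... (expensive)");
    KeyCache(vec!["decrypted key".to_string()])
}

fn update_validators(defs: &[Definition]) -> KeyCache {
    // Only decrypt the key cache if at least one definition is local.
    let has_local = defs.iter().any(|d| matches!(d, Definition::Local(_)));
    if has_local {
        expensive_decrypt()
    } else {
        KeyCache(Vec::new()) // skip scrypt entirely for pure Web3Signer setups
    }
}

fn main() {
    let defs = [Definition::Web3Signer("https://remote-signer.example")];
    let cache = update_validators(&defs);
    println!("cache entries: {}", cache.0.len()); // 0, nothing was decrypted
}
```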

## Additional Info

This PR addresses the excessive CPU usage and slow performance experienced when using the `PATCH lighthouse/validators/{pubkey}` request with a large number of keys. The issue was caused by the key cache using cryptography to decrypt and re-encrypt the cache entries every time the request was made. This operation called `scrypt`, which was very slow and required a lot of memory when there were many concurrent requests.

These changes have no impact on the overall functionality but can lead to significant performance improvements when working with remote signers. Importantly, the key cache is never used when there are only `Web3Signer` definitions, avoiding the expensive operation of decrypting the key cache in such cases.

Co-authored-by: Maksim Shcherbo <max.shcherbo@consensys.net>
2023-03-29 02:56:39 +00:00
Christopher Chong
6bb28bc806 Add debug fork choice api (#4003)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3669

## Proposed Changes

- A new API to fetch fork choice data, as specified [here](https://github.com/ethereum/beacon-APIs/pull/232)
- A new integration test to test the new API

## Additional Info


- The `extra_data` field specified in the beacon-API spec is not implemented; please let me know if I should implement it.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-03-29 02:56:37 +00:00
int88
d3c20ffa9d improve error message (#4141)
## Issue Addressed

NA

## Proposed Changes

Don't use a magic number directly in the error message.

## Additional Info

NA
2023-03-28 22:07:05 +00:00
Jimmy Chen
65d0f63639 Update Rust version in lcli Dockerfile (#4121)
## Issue Addressed

The minimum supported Rust version has been set to 1.66 as of Lighthouse v4.0.0. This PR updates Rust to 1.66 in the lcli Dockerfile.

Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-03-28 22:07:03 +00:00
realbigsean
deec9c51ba clean up blob by root response (#4136) 2023-03-28 12:49:32 -04:00
Michael Sproul
f4d13f9149 Update arbitrary (#4139)
## Proposed Changes

To prevent breakages from `cargo update`, this updates the `arbitrary` crate to a new commit from my fork. Unfortunately we still need to use my fork (even though my `bound` change was merged) because of this issue: https://github.com/rust-lang/rust-clippy/issues/10185.

In a couple of Rust versions it should be resolved upstream.
2023-03-28 16:49:21 +00:00
realbigsean
d24e5cc22a clean up blobs by range response (#4137) 2023-03-28 12:49:19 -04:00
realbigsean
da7fab5188 Ef test version update (#4142)
* Revert "revert change to ef_tests"

This reverts commit 1093ba1a27.

* Revert "Revert "Use consensus-spec-tests `v1.3.0-rc.3` (#4021)""

This reverts commit 20be7024e1.

* update tests
2023-03-28 11:47:30 -04:00
realbigsean
f580863337 Add simple pruning to data availability checker (#4132)
* use block wrapper in sync pairing

* add pruning to the data availability checker

* remove unused function, rename function
2023-03-27 10:09:53 -04:00
realbigsean
af974dc0b8 use block wrapper in sync pairing (#4131) 2023-03-26 19:18:54 -04:00
Paul Hauner
a53830fd60 Release v4.0.1 (#4125)
## Issue Addressed

NA

## Proposed Changes

- Bump versions.
- Bump openssl version to resolve various `cargo audit` notices.

## Additional Info

- Requires further testing
2023-03-26 22:39:28 +00:00
realbigsean
a5addf661c Rename eip4844 to deneb (#4129)
* rename 4844 to deneb

* rename 4844 to deneb

* move excess data gas field

* get EF tests working

* fix ef tests lint

* fix the blob identifier ef test

* fix accessed files ef test script

* get beacon chain tests passing
2023-03-26 11:49:16 -04:00
ethDreamer
d84117c0d0 Removed TODO (#4128) 2023-03-24 17:33:49 -04:00
Pawan Dhananjay
b276af98b7 Rework block processing (#4092)
* introduce availability pending block

* add intoavailableblock trait

* small fixes

* add 'gossip blob cache' and start to clean up processing and transition types

* shard memory blob cache

* Initial commit

* Fix after rebase

* Add gossip verification conditions

* cache cleanup

* general chaos

* extended chaos

* cargo fmt

* more progress

* more progress

* tons of changes, just tryna compile

* everything, everywhere, all at once

* Reprocess an ExecutedBlock on unavailable blobs

* Add sus gossip verification for blobs

* Merge stuff

* Remove reprocessing cache stuff

* lint

* Add a wrapper to allow construction of only valid `AvailableBlock`s

* rename blob arc list to blob list

* merge cleanuo

* Revert "merge cleanuo"

This reverts commit 5e98326878.

* Revert "Revert "merge cleanuo""

This reverts commit 3a4009443a.

* fix rpc methods

* move beacon block and blob to eth2/types

* rename gossip blob cache to data availability checker

* lots of changes

* fix some compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* cargo fmt

* use a common data structure for block import types

* fix availability check on proposal import

* refactor the blob cache and split the block wrapper into two types

* add type conversion for signed block and block wrapper

* fix beacon chain tests and do some renaming, add some comments

* Partial processing (#4)

* move beacon block and blob to eth2/types

* rename gossip blob cache to data availability checker

* lots of changes

* fix some compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* cargo fmt

* use a common data structure for block import types

* fix availability check on proposal import

* refactor the blob cache and split the block wrapper into two types

* add type conversion for signed block and block wrapper

* fix beacon chain tests and do some renaming, add some comments

* cargo update (#6)

---------

Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-03-24 17:30:41 -04:00
Diva M
25a2d8f078 Merge branch 'eip4844' into deneb-free-blobs 2023-03-24 14:38:29 -05:00
Diva M
1093ba1a27 revert change to ef_tests 2023-03-24 14:32:58 -05:00
Diva M
1b9cfcc11b Merge branch 'unstable' into eip4844 2023-03-24 13:32:50 -05:00
Diva M
7fad926b65 Merge commit '65a5eb829264cb279ed66814c961991ae3a0a04b' into eip4844 2023-03-24 13:24:21 -05:00
Paul Hauner
b2525d6ebd Release Candidate v4.0.1-rc.0 (#4123) 2023-03-23 21:16:14 +11:00
Paul Hauner
23ea1481e0 Fix fork choice error message (#4122)
## Issue Addressed

NA

## Proposed Changes

Ensures that we log the values of the *head* block rather than the *justified* block.

## Additional Info

NA
2023-03-23 07:16:49 +00:00
Paul Hauner
0616e01202 Release v4.0.0 (#4112)
## Issue Addressed

NA

## Proposed Changes

Bump versions to `v4.0.0`

## Additional Info

NA
2023-03-22 04:06:02 +00:00
ethDreamer
d1e653cfdb Update Blob Storage Structure (#4104)
* Initial Changes to Blob Storage

* Add Arc to SignedBlobSidecar Definition
2023-03-21 15:33:06 -04:00
Jimmy Chen
b40dceaae9 Update get blobs endpoint to return a list of BlobSidecars (#4109)
* Update get blobs endpoint to return BlobSidecarList

* Update code comment

* Update blob retrieval to return BlobSidecarList without Arc

* Remove usage of BlobSidecarList type alias to avoid code conflicts

* Add clippy allow exception
2023-03-21 09:56:32 -05:00
Paul Hauner
1f8c17b530 Fork choice modifications and cleanup (#3962)
## Issue Addressed

NA

## Proposed Changes

- Implements https://github.com/ethereum/consensus-specs/pull/3290/
- Bumps `ef-tests` to [v1.3.0-rc.4](https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.3.0-rc.4).

The `CountRealizedFull` concept has been removed and the `--count-unrealized-full` and `--count-unrealized` BN flags now do nothing but log a `WARN` when used.

## Database Migration Debt

This PR removes the `best_justified_checkpoint` from fork choice. This field is persisted on-disk and the correct way to go about this would be to make a DB migration to remove the field. However, in this PR I've simply stubbed out the value with a junk value. I've taken this approach because if we're going to do a DB migration I'd love to remove the `Option`s around the justified and finalized checkpoints on `ProtoNode` whilst we're at it. Those options were added in #2822 which was included in Lighthouse v2.1.0. The options were only put there to handle the migration and they've been set to `Some` ever since v2.1.0. There's no reason to keep them as options anymore.

I started adding the DB migration to this branch but I started to feel like I was bloating this rather critical PR with nice-to-haves. I've kept the partially-complete migration [over in my repo](https://github.com/paulhauner/lighthouse/tree/fc-pr-18-migration) so we can pick it up after this PR is merged.
2023-03-21 07:34:41 +00:00
Paul Hauner
3ac5583cf9 Set Capella fork epoch for Mainnet (#4111)
## Issue Addressed

NA

## Proposed Changes

Sets the mainnet Capella fork epoch as per https://github.com/ethereum/consensus-specs/pull/3300

## Additional Info

I expect the `ef_tests` to fail until we get a compatible consensus spec tests release.
2023-03-21 05:15:01 +00:00
Paul Hauner
59e45fe349 Reduce verbosity of reprocess queue logs (#4101)
## Issue Addressed

NA

## Proposed Changes

Replaces #4058 to attempt to reduce `ERRO Failed to send scheduled attestation` spam and provide more information for diagnosis. With this PR we achieve:

- When dequeuing attestations after a block is received, send only one log which reports `n` failures (rather than `n` logs reporting `n` failures); a minimal sketch follows after this list.
- Make a distinction in logs between two separate attestation dequeuing events.
- Add more information to both log events to help assist with troubleshooting.
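
For the first point, a tiny sketch of the aggregation (log format illustrative):

```rust
fn main() {
    // Stand-in for the results of dequeuing attestations after a block arrives.
    let results: Vec<Result<(), &str>> = vec![Ok(()), Err("late"), Err("late")];

    // One summary log reporting `n` failures, rather than `n` separate logs.
    let failed_count = results.iter().filter(|r| r.is_err()).count();
    if failed_count > 0 {
        eprintln!("Failed to send scheduled attestations, failed_count: {failed_count}");
    }
}
```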

## Additional Info

NA
2023-03-21 05:15:00 +00:00
Age Manning
785a9171e6 Customisable shuffling cache size (#4081)
This PR enables the user to adjust the shuffling cache size.

This is useful for some HTTP API requests which require re-computing old shufflings. This PR currently optimizes the beacon/states/{state_id}/committees HTTP API by first checking the cache before re-building the shuffling.

If the shuffling cache size is set to a non-default value, then the HTTP API request will also fill the cache as it constructs new shufflings.

If the CLI flag is not present or the value is set to the default of 16, the default behaviour is observed.
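
A minimal sketch of the cache-first lookup (a plain `HashMap` stands in for the real bounded shuffling cache):

```rust
use std::collections::HashMap;

// Returns the committee shuffling for an epoch, checking the cache first.
fn committees(epoch: u64, cache: &mut HashMap<u64, Vec<u64>>) -> Vec<u64> {
    if let Some(shuffling) = cache.get(&epoch) {
        return shuffling.clone(); // cache hit: no recomputation
    }
    let shuffling = vec![epoch; 4]; // stand-in for the expensive shuffle computation
    cache.insert(epoch, shuffling.clone()); // fill the cache for later requests
    shuffling
}

fn main() {
    let mut cache = HashMap::new();
    let first = committees(100, &mut cache);
    let second = committees(100, &mut cache); // served from the cache
    assert_eq!(first, second);
}
```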


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-21 05:14:59 +00:00
Age Manning
76a2007b64 Improve Lighthouse Connectivity Via ENR TCP Update (#4057)
Currently Lighthouse will remain uncontactable if users port forward a port that is not the same as the one they are listening on. 

For example, if Lighthouse runs with port 9000 TCP/UDP locally but a router is configured to pass 9010 externally to the lighthouse node on 9000, other nodes on the network will not be able to reach the lighthouse node. 

This occurs because Lighthouse does not update its ENR TCP port on external socket discovery. The intention was always that users should use `--enr-tcp-port` to customise this, but this is non-intuitive. 

The difficulty arises because we have no discovery mechanism to find our external TCP port. If we discover a new external UDP port, we must guess what our external TCP port might be. This PR assumes the external TCP port is the same as the external UDP port (which may not be the case) and thus updates the TCP port along with the UDP port if the `--enr-tcp-port` flag is not set.

Along with this PR, will be added documentation to the Lighthouse book so users can correctly understand and configure their ENR to maximize Lighthouse's connectivity. 

This relies on https://github.com/sigp/discv5/pull/166 and we should wait for a new release of discv5 before merging this PR.
2023-03-21 05:14:57 +00:00
Age Manning
17d56b06f6 Ignore self as a bootnode (#4110)
If a node is also a bootnode, it can try to add itself to its own local routing table, which will emit an error.

The error is entirely harmless but we would prefer to avoid emitting the error. 

This PR does not attempt to add a boot node ENR if that ENR corresponds to our local peer-id/node-id.
2023-03-20 21:50:37 +00:00
ethDreamer
65a5eb8292 Reconstruct Payloads using Payload Bodies Methods (#4028)
## Issue Addressed

* #3895 

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-19 23:15:59 +00:00
Diva M
78414333a2 Merge branch 'eip4844' into deneb-free-blobs 2023-03-17 16:39:17 -05:00
Diva M
607242c127 Merge branch 'unstable' into eip4844 2023-03-17 16:26:51 -05:00
Jimmy Chen
1301c62436 Validator blob signing for the unblinded flow (#4096)
* Implement validator blob signing (full block and full blob)

* Fix compilation error and remove redundant slot check

* Fix clippy error
2023-03-17 09:29:25 -04:00
Paul Hauner
020fb483fe Clarify "Ready for Capella" (#4095)
## Issue Addressed

Resolves #4061

## Proposed Changes

Adds a message to tell users to check their EE.

## Additional Info

I really struggled to come up with something succinct and complete, so I'm totally open to feedback.
2023-03-17 00:44:04 +00:00
Paul Hauner
c3e5053612 Reduce false positive logging for late builder blocks (#4073)
## Issue Addressed

NA

## Proposed Changes

When producing a block from a builder, there are two points where we could consider the block "broadcast":

1. When the blinded block is published to the builder.
2. When the un-blinded block is published to the P2P network (this is always *after* the previous step).

Our logging for late block broadcasts was using (2) for builder-blocks, which was creating a lot of false-positive logs. This is because the builder publishes the block on the P2P network themselves before returning it to us and we perform (2). For clarity, the logs were false-positives because we claim that the block was published late by us when it was actually published earlier by the builder.

This PR changes our logging behavior so we do our logging at (1) instead. It also updates our metrics for block broadcast to distinguish between local and builder blocks. I believe the metrics change will be natively compatible with existing Grafana dashboards.

## Additional Info

One could argue that the builder *should* return the block to us faster, however that's not the case. I think it's more important that we don't desensitize users with false-positives.
2023-03-17 00:44:03 +00:00
Michael Sproul
4c2d4af6cd Make more noise when the EL is broken (#3986)
## Issue Addressed

Closes #3814, replaces #3818.

## Proposed Changes

* Add a WARN log for the case where we are attempting to sync chain segments but can't process them because they're building on an invalid parent. The most common case where we see this is when the execution node database is corrupt, causing sync to stall mysteriously (because we're currently logging the failure only at debug level).
* Additionally I've bumped up the logging for invalid execution payloads to `WARN`. This may result in some duplicate logs as we log errors from the `beacon_chain` and then again from the beacon processor. Invalid payloads and corrupt DBs _should_ be rare enough that this doesn't produce overwhelming log volume.
2023-03-17 00:44:02 +00:00
Divma
3c18e1a3a4 thread blocks and blobs to sync (#4100)
* thread blocks and blobs to sync

* satisfy dead code analysis
2023-03-16 19:20:39 -05:00
Ricki Moore
974b7e9f58 Siren Ui Lighthouse Version Requirments (#4093)
## Issue Addressed

Added a note in the Lighthouse book instructing users of the minimum Lighthouse version required to run the Siren UI.

2023-03-16 08:03:43 +00:00
Age Manning
3d99ce25f8 Correct a race condition when dialing peers (#4056)
There is a race condition which occurs when multiple discovery queries return at almost the exact same time and they independently contain a useful peer we would like to connect to.

The condition can occur where we add the same peer to the dial queue before we get a chance to process the queue.
This ends up displaying an error to the user: 
```
ERRO Dialing an already dialing peer
```
Although this error is harmless it's not ideal. 

There are two solutions to resolving this:
1. As we decide to dial the peer, we change the state in the peer-db to dialing (before we add it to the queue) which would prevent other requests from adding to the queue. 
2. We prevent duplicates in the dial queue

This PR has opted for 2. because 1. will complicate the code in that we are changing states in non-intuitive places. Although this technically adds a very slight performance cost, it's probably a cleaner solution as we can keep the state-changing logic in one place.
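
A self-contained sketch of option 2 (types are illustrative): a dial queue that refuses duplicates, so two discovery queries returning the same peer only enqueue one dial.

```rust
use std::collections::{HashSet, VecDeque};

struct DialQueue {
    queue: VecDeque<u64>, // stand-in for PeerId
    queued: HashSet<u64>, // membership set for O(1) duplicate checks
}

impl DialQueue {
    fn new() -> Self {
        Self { queue: VecDeque::new(), queued: HashSet::new() }
    }

    /// Returns false if the peer was already queued for dialing.
    fn push(&mut self, peer: u64) -> bool {
        if self.queued.insert(peer) {
            self.queue.push_back(peer);
            true
        } else {
            false
        }
    }

    fn pop(&mut self) -> Option<u64> {
        let peer = self.queue.pop_front()?;
        self.queued.remove(&peer);
        Some(peer)
    }
}

fn main() {
    let mut q = DialQueue::new();
    assert!(q.push(1));
    assert!(!q.push(1)); // duplicate from a second discovery query is ignored
    assert_eq!(q.pop(), Some(1));
    assert_eq!(q.pop(), None);
}
```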
2023-03-16 05:44:54 +00:00
realbigsean
1a87222641 Merge pull request #4094 from realbigsean/free-blob-lints
Free blob lints
2023-03-15 16:56:53 -04:00
realbigsean
cf4285e1d4 compile tests 2023-03-15 16:34:00 -04:00
realbigsean
fb7d729d92 migrate types to API crate 2023-03-15 16:03:36 -04:00
realbigsean
b303d2fb7e lints 2023-03-15 15:32:22 -04:00
Diva M
34cea6d1c3 fmt 2023-03-15 12:37:53 -05:00
realbigsean
91a06ba484 Merge pull request #4083 from jimmygchen/post-block-and-blobs
Implement POST beacon block for EIP-4844
2023-03-15 13:35:37 -04:00
Diva M
4a39e43f96 Merge branch 'eip4844' into deneb-free-blobs 2023-03-15 12:26:30 -05:00
realbigsean
2e075c0a80 Merge pull request #4091 from realbigsean/get-validator-block-and-blobs
Get validator block and blobs
2023-03-15 12:22:03 -04:00
realbigsean
3d99e1f14d move block contents to api crate, rename blob sidecars list 2023-03-15 12:15:08 -04:00
Divma
2c9477de43 Fix block and blob coupling in the network context (#4086)
* update docs

* introduce a temp enum to model an adjusted `BlockWrapper` and fix blob coupling

* fix compilation issue

* fix blob coupling in the network context

* review comments
2023-03-15 11:04:45 -05:00
Jimmy Chen
2ef3ebbef3 Update SignedBlobSidecar container (#4078) 2023-03-15 11:03:56 -05:00
Jimmy Chen
fa9baab0f7 Temporarily allow Rust check warnings on 4844 branch. (#4088) 2023-03-15 09:43:31 -05:00
Jimmy Chen
775ca89801 Add blobs publishing 2023-03-15 15:57:30 +11:00
Jimmy Chen
5887c8fe92 Update commented code to use todo! 2023-03-15 15:29:16 +11:00
Jimmy Chen
02a88f0704 Add KZG proof and blob validation 2023-03-15 15:15:46 +11:00
Jimmy Chen
62627d984c Comment out code that fails to compile 2023-03-15 12:30:59 +11:00
Daniel Ramirez Chiquillo
1ec3041673 Remove Router/Processor Code (#4002)
## Issue Addressed

#3938 

## Proposed Changes

- `network::Processor` is deleted and all its logic is moved to `network::Router`.
- The `network::Router` module is moved to a single file.
- The following functions are deleted: `on_disconnect` `send_status` `on_status_response` `on_blocks_by_root_request` `on_lightclient_bootstrap` `on_blocks_by_range_request` `on_block_gossip` `on_unaggregated_attestation_gossip` `on_aggregated_attestation_gossip` `on_voluntary_exit_gossip` `on_proposer_slashing_gossip` `on_attester_slashing_gossip` `on_sync_committee_signature_gossip` `on_sync_committee_contribution_gossip` `on_light_client_finality_update_gossip` `on_light_client_optimistic_update_gossip`. These deletions are possible because the updated `Router` allows the underlying methods to be called directly.
2023-03-15 01:27:47 +00:00
realbigsean
ac0eb39ced Complete match for has_context_bytes (#3972)
## Issue Addressed

- Add a complete match for `Protocol` here. 
- The incomplete match was causing us not to append context bytes to the light client protocols
- This is the relevant part of the spec and it looks like context bytes are defined https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/p2p-interface.md#getlightclientbootstrap

Disclaimer: I have no idea if people are using it, but it shouldn't have been working, so I'm not sure why it wasn't caught.

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-03-15 01:27:46 +00:00
Jimmy Chen
e4608d44ea Merge branch 'deneb-free-blobs' into get-validator-block-and-blobs 2023-03-15 10:55:00 +11:00
Diva M
9974dfe87b fix merge mistake 2023-03-14 14:45:17 -05:00
Diva M
7f2e9b80bb Merge branch 'unstable' into eip4844 2023-03-14 12:00:32 -05:00
Pawan Dhananjay
76f49bdb44 Update kzg interface (#4077)
* Update kzg interface

* Update utils

* Update dependency

* Address review comments
2023-03-14 12:13:15 +05:30
Michael Sproul
36e163c042 Add parent_block_number to payload SSE (#4053)
## Issue Addressed

In #4027 I forgot to add the `parent_block_number` to the payload attributes SSE.

## Proposed Changes

Compute the parent block number while computing the pre-payload attributes. Pass it on to the SSE stream.

## Additional Info

Not essential for v3.5.1 as I suspect most builders don't need the `parent_block_root`. I would like to use it for my dummy no-op builder however.
2023-03-14 06:26:37 +00:00
Jimmy Chen
a8978a5f69 Implement publish block and blobs endpoint (WIP) 2023-03-14 17:03:45 +11:00
Divma
e190ebb8a0 Support for Ipv6 (#4046)
## Issue Addressed
Add support for ipv6 and dual stack in lighthouse. 

## Proposed Changes
From a user perspective, setting an ipv6 address and optionally configuring the ports should now feel exactly the same as using an ipv4 address. If listening over both ipv4 and ipv6, the user needs to:
- use the `--listen-address` two times (ipv4 and ipv6 addresses)
- `--port6` becomes then required
- `--discovery-port6` can now be used to additionally configure the ipv6 udp port

### Rough list of code changes
- Discovery:
  - Table filter and ip mode set to match the listening config. 
  - Ipv6 address, tcp port and udp port set in the ENR builder
  - Reported addresses now check which tcp port to give to libp2p
- LH Network Service:
  - Can listen over Ipv6, Ipv4, or both. This uses two sockets. Using mapped addresses is disabled from libp2p and it's the most compatible option.
- NetworkGlobals:
  - No longer stores udp port since was not used at all. Instead, stores the Ipv4 and Ipv6 TCP ports.
- NetworkConfig:
  - Update names to make it clear that previous udp and tcp ports in ENR were Ipv4
  - Add fields to configure Ipv6 udp and tcp ports in the ENR
  - Include advertised enr Ipv6 address.
  - Add type to model Listening address that's either Ipv4, Ipv6 or both. A listening address includes the ip, udp port and tcp port.
- UPnP:
  - Kept only for ipv4
- Cli flags:
  - `--listen-addresses` now can take up to two values
  - `--port` will apply to ipv4 or ipv6 if only one listening address is given. If two listening addresses are given it will apply only to Ipv4.
  - `--port6` New flag required when listening over ipv4 and ipv6 that applies exclusively to Ipv6.
  - `--discovery-port` will now apply to ipv4 and ipv6 if only one listening address is given.
  - `--discovery-port6` New flag to configure the individual udp port of ipv6 if listening over both ipv4 and ipv6.
  - `--enr-udp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour.
  - `--enr-udp6-port` Added to configure the enr udp6 field.
  - `--enr-tcp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour.
  - `--enr-tcp6-port` Added to configure the enr tcp6 field.
  - `--enr-addresses` now can take two values.
  - `--enr-match` updated behaviour.
- Common:
  - rename `unused_port` functions to specify that they are over ipv4.
  - add functions to get unused ports over ipv6.
- Testing binaries
  - Updated code to reflect network config changes and unused_port changes.

## Additional Info

TODOs:
- use two sockets in discovery. I'll get back to this and it's on https://github.com/sigp/discv5/pull/160
- lcli allow listening over two sockets in generate_bootnodes_enr
- add at least one smoke flag for ipv6 (I have tested this and it works for me)
- update the book
2023-03-14 01:13:34 +00:00
Jimmy Chen
9ba390f7c5 Merge branch 'update-signed-blob-sidecar' into post-block-and-blobs 2023-03-14 11:52:25 +11:00
Jimmy Chen
e81a5029b8 Update SignedBlobSidecar container 2023-03-14 11:48:58 +11:00
Jimmy Chen
6ec0ce6070 Implement get validator block endpoint for EIP-4844 2023-03-13 16:50:08 +11:00
Michael Sproul
06af31a66a Correct /lighthouse/nat implementation (#4069)
## Proposed Changes

The current `/lighthouse/nat` implementation checks for _zero_ address updated messages, when it should check for a _non-zero_ number. This was spotted while debugging an issue on Discord where a user's ports weren't forwarded but `/lighthouse/nat` was still returning `true`.
2023-03-13 04:08:15 +00:00
Sebastian Richel
373beaf913 Added warning when new jwt is generated (#4000)
## Issue Addressed
#3435

## Proposed Changes
Fire a warning with the path of the JWT to be created when the path given by `--execution-jwt` is not found.
Currently, the same error is logged if the JWT is found but doesn't match the execution client's JWT, and if no JWT was found at the given path. This makes it very hard to tell if you accidentally typed the wrong path, as a new JWT is silently created that won't match the execution client's JWT. So instead, it will now fire a warning stating that a JWT is being generated at the given path.
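
A rough sketch of the intended behaviour (paths, messages, and the secret generation are stand-ins, not Lighthouse's actual code):

```rust
use std::path::Path;

fn load_or_generate_jwt(path: &Path) -> std::io::Result<String> {
    if !path.exists() {
        // Warn loudly instead of silently generating a new secret, so a
        // mistyped --execution-jwt path is easy to spot.
        eprintln!("WARN: JWT not found, generating a new one at {}", path.display());
        let secret = "<random 32-byte hex secret>"; // stand-in for real generation
        std::fs::write(path, secret)?;
        return Ok(secret.to_string());
    }
    std::fs::read_to_string(path)
}

fn main() -> std::io::Result<()> {
    let jwt = load_or_generate_jwt(Path::new("/tmp/jwt.hex"))?;
    println!("loaded JWT of length {}", jwt.len());
    Ok(())
}
```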

## Additional Info
In the future, it may be smarter to handle this case by adding an InvalidJWTPath member to the Error enum in lib.rs or auth.rs
that can be handled during upcheck()

This is my first PR and first project with Rust, so thanks to anyone who looks at this for their patience and help!

Co-authored-by: Sebastian Richel <47844429+sebastianrich18@users.noreply.github.com>
2023-03-13 04:08:14 +00:00
Michael Sproul
90cef1db86 Appease Clippy 1.68 and refactor http_api (#4068)
## Proposed Changes

Two tiny updates to satisfy Clippy 1.68

Plus refactoring of the `http_api` into less complex types so the compiler can chew and digest them more easily.

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-13 01:40:03 +00:00
Alex Wied
4a1c0c96be Fix order of arguments to log_count (#4060)
See: https://github.com/sigp/lighthouse/pull/4027

## Proposed Changes

The order of the arguments to `log_count` is swapped in `beacon_node/beacon_chain/src/events.rs`.
2023-03-13 01:40:01 +00:00
Diva M
0ae3078988 bump up recursion limits in the sim bin 2023-03-10 17:51:35 -05:00
Diva M
ea23d55b72 more lints 2023-03-10 15:05:08 -05:00
Diva M
9e30c3c169 new lints 2023-03-10 14:35:43 -05:00
Diva M
61b12c2eff bump up recursion limits by a lot 2023-03-10 14:26:15 -05:00
Diva M
ae3e5f73d6 fmt 2023-03-10 11:24:22 -05:00
Diva M
fe421f7987 Merge branch 'eip4844' into deneb-free-blobs 2023-03-10 11:21:23 -05:00
Diva M
203a9e5f5e Merge branch 'unstable' into eip4844 2023-03-10 11:19:56 -05:00
Divma
140bdd370d update code paths in the network crate (#4065)
* wip

* fix router

* arc the byroot responses we send

* add placeholder for blob verification

* respond to blobs by range and blobs by root request in the most horrible and gross way ever

* everything in sync is now unimplemented

* fix compiation issues

* http_pi change is very small, just add it

* remove ctrl-c ctrl-v's docs
2023-03-10 16:52:31 +05:30
Divma
3898cf7be8 Modify lighthouse_network gossip types to free the blobs (#4064)
* Modify blob topics

* add signedblob type

pubsub messages are signed

* improve subnet topic index

* improve display code

* fix parse code

---------

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
2023-03-10 16:23:36 +05:30
Divma
545532a883 fix rpc types to free the blobs (#4059)
* rename to follow name in spec

* use roots and indexes

* wip

* fix req/resp types

* move blob identifier to consensus types
2023-03-07 16:28:45 -05:00
Paul Hauner
319cc61afe Release v3.5.1 (#4049)
## Issue Addressed

NA

## Proposed Changes

Bumps versions to v3.5.1.

## Additional Info

- [x] Requires further testing
2023-03-07 07:57:39 +00:00
Daniel Ramirez Chiquillo
4c109115ca Add a flag to always use payloads from builders (#4052)
## Issue Addressed

#4040 

## Proposed Changes

- Add the `always_prefer_builder_payload`  field to `Config` in `beacon_node/client/src/config.rs`.
- Add that same field to `Inner` in `beacon_node/execution_layer/src/lib.rs`
- Modify the logic for picking the payload in `beacon_node/execution_layer/src/lib.rs`
- Add the `always-prefer-builder-payload` flag to the beacon node CLI
- Test the new flags in `lighthouse/tests/beacon_node.rs`

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-07 05:37:28 +00:00
Paul Hauner
5bb635d17f Set Capella fork epoch for Goerli (#4044)
## Issue Addressed

NA

## Proposed Changes

Sets the Capella fork epoch as per https://github.com/eth-clients/goerli/pull/160.

The fork will occur at:

- Epoch: 162304
- Slot: 5193728
- UTC: 14/03/2023, 10:25:36 pm

## Additional Info

- [x] Blocked on https://github.com/eth-clients/goerli/pull/160 being merged
2023-03-07 03:06:52 +00:00
Diva M
63011c5bca fmt 2023-03-06 18:09:31 -05:00
Diva M
75a0d52ee0 get back the Blobs pub use to fix compilation issues 2023-03-06 17:58:00 -05:00
Diva M
bf40acd9df adjust constant to spec values and names 2023-03-06 17:32:40 -05:00
Pawan Dhananjay
39d4f0a1f3 Add BlobSidecar type 2023-03-06 17:10:12 -05:00
Diva M
91a8d4575d add missing eip4844 core topics placeholder 2023-03-06 14:35:52 -05:00
Paul Hauner
3ad77fb52c Add VC metric for primary BN latency (#4051)
## Issue Addressed

NA

## Proposed Changes

In #4024 we added metrics to expose the latency measurements from a VC to each BN. Whilst playing with these new metrics on our infra I realised it would be great to have a single metric to make sure that the primary BN for each VC has a reasonable latency. With the current "metrics for all BNs" it's hard to tell which is the primary.

## Additional Info

NA
2023-03-06 04:08:49 +00:00
Paul Hauner
f775404c10 Log a WARN in the VC for a mismatched Capella fork epoch (#4050)
## Issue Addressed

NA

## Proposed Changes

- Adds a `WARN` statement for Capella, just like the previous forks.
- Adds a hint message to all those WARNs to suggest the user update the BN or VC.

## Additional Info

NA
2023-03-06 04:08:48 +00:00
Michael Sproul
4dc0c4c5b7 Update dependencies incl tempfile (#4048)
## Proposed Changes

Fix the cargo audit failure caused by [RUSTSEC-2023-0018](https://rustsec.org/advisories/RUSTSEC-2023-0018) which we were exposed to via `tempfile`.

## Additional Info

I've held back the libp2p crate for now because it seemed to introduce another duplicate dependency on libp2p-core, for a total of 3 copies. Maybe that's fine, but we can sort it out later.
2023-03-05 23:43:32 +00:00
Michael Sproul
57dfcfd83a Optimise attestation selection proof signing (#4033)
## Issue Addressed

Closes #3963 (hopefully)

## Proposed Changes

Compute attestation selection proofs gradually each slot rather than in a single `join_all` at the start of each epoch. On a machine with 5k validators this replaces 5k tasks signing 5k proofs with 1 task that signs 5k/32 ~= 160 proofs each slot.

Based on testing with Goerli validators this seems to reduce the average time to produce a signature by preventing Tokio and the OS from falling over each other trying to run hundreds of threads. My testing so far has been with local keystores, which run on a dynamic pool of up to 512 OS threads because they use [`spawn_blocking`](https://docs.rs/tokio/1.11.0/tokio/task/fn.spawn_blocking.html) (and we haven't changed the default).

An earlier version of this PR hyper-optimised the time-per-signature metric to the detriment of the entire system's performance (see the reverted commits). The current PR is conservative in that it avoids touching the attestation service at all. I think there's more optimising to do here, but we can come back for that in a future PR rather than expanding the scope of this one.

The new algorithm for attestation selection proofs is:

- We sign a small batch of selection proofs each slot, for slots up to 8 slots in the future. On average we'll sign one slot's worth of proofs per slot, with an 8 slot lookahead.
- The batch is signed halfway through the slot when there is unlikely to be contention for signature production (blocks are <4s, attestations are ~4-6 seconds, aggregates are 8s+).
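
A simplified sketch of this batching scheme, with `Duty` and the signing call as hypothetical stand-ins for the real validator-client types:

```rust
const LOOKAHEAD_SLOTS: u64 = 8;

struct Duty {
    slot: u64,
    proof_signed: bool,
}

// Placeholder for the real BLS selection proof signing call.
fn sign_selection_proof(duty: &mut Duty) {
    duty.proof_signed = true;
}

/// Each slot, sign only the unsigned duties within the lookahead window.
/// On average this is one slot's worth of proofs per call, instead of a
/// whole epoch's worth in a single `join_all` burst.
fn sign_upcoming_proofs(duties: &mut [Duty], current_slot: u64) {
    for duty in duties
        .iter_mut()
        .filter(|d| d.slot >= current_slot && d.slot < current_slot + LOOKAHEAD_SLOTS)
        .filter(|d| !d.proof_signed)
    {
        sign_selection_proof(duty);
    }
}
```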

## Performance Data

_See first comment for updated graphs_.

Graph of median signing times before this PR:

![signing_times_median](https://user-images.githubusercontent.com/4452260/221495627-3ab3c105-319f-406e-b99d-b5913e0ded9c.png)

Graph of update attesters metric (includes selection proof signing) before this PR:

![update_attesters_store](https://user-images.githubusercontent.com/4452260/221497057-01ba40e4-8148-45f6-9e21-36a9567a631a.png)

Median signing time after this PR (prototype from 12:00, updated version from 13:30):

![signing_times_median_updated](https://user-images.githubusercontent.com/4452260/221771578-47a040cc-b832-482f-9a1a-d1bd9854e00e.png)

99th percentile on signing times (bounded attestation signing from 16:55, now removed):

![signing_times_99pc](https://user-images.githubusercontent.com/4452260/221772055-e64081a8-2220-45ba-ba6d-9d7e344a5bde.png)

Attester map update timing after this PR:

![update_attesters_store_updated](https://user-images.githubusercontent.com/4452260/221771757-c8558a48-7f4e-4bb5-9929-dee177a66c1e.png)

Selection proof signings per second change:

![signing_attempts](https://user-images.githubusercontent.com/4452260/221771855-64f5da22-1655-478d-926b-810be8a3650c.png)

## Link to late blocks

I believe this is related to the slow block signings because logs from Stakely in #3963 show these two logs almost 5 seconds apart:

> Feb 23 18:56:23.978 INFO Received unsigned block, slot: 5862880, service: block, module: validator_client::block_service:393
> Feb 23 18:56:28.552 INFO Publishing signed block, slot: 5862880, service: block, module: validator_client::block_service:416

The only thing that happens between those two logs is the signing of the block:

0fb58a680d/validator_client/src/block_service.rs (L410-L414)

Helpfully, Stakely noticed this issue without any Lighthouse BNs in the mix, which pointed to a clear issue in the VC.

## TODO

- [x] Further testing on testnet infrastructure.
- [x] Make the attestation signing parallelism configurable.
2023-03-05 23:43:31 +00:00
Michael Sproul
01556f6f01 Optimise payload attributes calculation and add SSE (#4027)
## Issue Addressed

Closes #3896
Closes #3998
Closes #3700

## Proposed Changes

- Optimise the calculation of withdrawals for payload attributes by avoiding state clones, avoiding unnecessary state advances and reading from the snapshot cache if possible.
- Use the execution layer's payload attributes cache to avoid re-calculating payload attributes. I actually implemented a new LRU cache just for withdrawals but it had the exact same key and most of the same data as the existing payload attributes cache, so I deleted it.
- Add a new SSE event that fires when payloadAttributes are calculated. This is useful for block builders, a la https://github.com/ethereum/beacon-APIs/issues/244.
- Add a new CLI flag `--always-prepare-payload` which forces payload attributes to be sent with every fcU regardless of connected proposers. This is intended for use by builders/relays.

For maximum effect, the flags I've been using to run Lighthouse in "payload builder mode" are:

```
--always-prepare-payload \
--prepare-payload-lookahead 12000 \
--suggested-fee-recipient 0x0000000000000000000000000000000000000000
```

The fee recipient is required so Lighthouse has something to pack in the payload attributes (it can be ignored by the builder). The lookahead causes fcU to be sent at the start of every slot rather than at 8s. As usual, fcU will also be sent after each change of head block. I think this combination is sufficient for builders to build on all viable heads. Often there will be two fcU (and two payload attributes) sent for the same slot: one sent at the start of the slot with the head from `n - 1` as the parent, and one sent after the block arrives with `n` as the parent.

Example usage of the new event stream:

```bash
curl -N "http://localhost:5052/eth/v1/events?topics=payload_attributes"
```

## Additional Info

- [x] Tests added by updating the proposer re-org tests. This has the benefit of testing the proposer re-org code paths with withdrawals too, confirming that the new changes don't interact poorly.
- [ ] Benchmarking with `blockdreamer` on devnet-7 showed promising results but I'm yet to do a comparison to `unstable`.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-03-05 23:43:30 +00:00
Paul Hauner
6e15533b54 Add latency measurement service to VC (#4024)
## Issue Addressed

NA

## Proposed Changes

Adds a service which periodically polls (11s into each mainnet slot) the `node/version` endpoint on each BN and roughly measures the round-trip latency. The latency is exposed as a `DEBG` log and a Prometheus metric.

The `--latency-measurement-service` has been added to the VC, with the following options:

- `--latency-measurement-service true`: enable the service (default).
    - `--latency-measurement-service`: (without a value) has the same effect.
- `--latency-measurement-service false`: disable the service.
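
The measurement itself is conceptually simple; a minimal sketch, assuming a `reqwest` client (the real service also records the result as a Prometheus metric):

```rust
use std::time::{Duration, Instant};

async fn measure_bn_latency(
    client: &reqwest::Client,
    bn_url: &str,
) -> Option<Duration> {
    let start = Instant::now();
    // `node/version` is cheap to serve, so the round trip mostly reflects
    // network and HTTP overhead rather than BN processing time.
    let response = client
        .get(format!("{bn_url}/eth/v1/node/version"))
        .send()
        .await
        .ok()?;
    response.error_for_status().ok()?;
    Some(start.elapsed())
}
```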

## Additional Info

Whilst looking at our staking setup, I think the BN+VC latency is contributing to late blocks. Now that we have to wait for the builders to respond it's nice to try and do everything we can to reduce that latency. Having visibility is the first step.
2023-03-05 23:43:29 +00:00
Diva M
56841bcfcf fix gitignore 2023-03-03 14:31:33 -05:00
Diva M
f16e82ab2c Merge branch 'unstable' into eip4844 2023-03-03 14:14:18 -05:00
Diva M
6fb1681ee7 Merge branch 'unstable-up-to-clean-capella' into eip4844 2023-03-03 14:13:29 -05:00
Diva M
a3862e42b1 Revert "Clean capella (#4019)"
This reverts commit 047c7544e3.
2023-03-03 14:13:03 -05:00
Diva M
f405feb2cd define missing vars 2023-03-03 13:16:27 -05:00
Diva M
e602b94b8d define CAPELLA_FORK_EPOCH in scripts/tests/vars.env with same value as ALTAIR_FORK_EPOCH 2023-03-03 12:41:36 -05:00
Diva M
74542843a7 define BELLATRIX_FORK_EPOCH in scripts/tests/vars.env with same value as ALTAIR_FORK_EPOCH 2023-03-03 12:07:04 -05:00
Diva M
20be7024e1 Revert "Use consensus-spec-tests v1.3.0-rc.3 (#4021)"
This reverts commit caa6190d4a.
2023-03-03 11:07:26 -05:00
Paul Hauner
cac3a66be4 Permit a null LVH from an INVALID response to newPayload (#4037)
## Issue Addressed

NA

## Proposed Changes

As discovered in #4034, Lighthouse is not accepting `latest_valid_hash == None` in an `INVALID` response to `newPayload`. The `null`/`None` response *was* illegal at one point, however it was added in https://github.com/ethereum/execution-apis/pull/254.

This PR brings Lighthouse in line with the standard and should fix the root cause of what #4034 patched around.

## Additional Info

NA
2023-03-03 04:12:50 +00:00
Diva M
a9be1eae40 solve couple of merge conflicts 2023-03-02 15:46:14 -05:00
Diva M
d93753cc88 Merge branch 'unstable' into off-4844 2023-03-02 15:38:00 -05:00
Atanas Minkov
2b6348781a Log a debug message when a request fails for a beacon node candidate (#4036)
## Issue Addressed
#3985

## Proposed Changes

Log a debug message when a BN candidate returns an error.

`Mar 01 16:40:24.011 DEBG Request to beacon node failed           error: ServerMessage(ErrorMessage { code: 503, message: "SERVICE_UNAVAILABLE: beacon node is syncing: head slot is 8416, current slot is 5098402", stacktraces: [] }), node: http://localhost:5052/`
2023-03-02 05:26:14 +00:00
Pawan Dhananjay
5b18fd92cb Cleaner logic for gossip subscriptions for new forks (#4030)
## Issue Addressed

Cleaner resolution for #4006 

## Proposed Changes

We are currently subscribing to core topics of new forks way before the actual fork since we had just a single `CORE_TOPICS` array. This PR separates the core topics for every fork and subscribes to only required topics based on the current fork.
Also adds logic for subscribing to the core topics of a new fork only 2 slots before the fork happens.

2 slots is to give enough time for the gossip meshes to form. 

Currently doesn't add logic to remove topics from older forks in new forks. For example, in the coupled 4844 world, we had to remove the `BeaconBlock` topic in favour of `BeaconBlocksAndBlobsSidecar` at the 4844 fork. It should be easy enough to add though. I'm not adding it because I'm assuming that #4019 will get merged before this PR and we won't require any deletion logic. Happy to add it regardless though.
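
The timing rule itself is simple; a sketch with plain `u64` slots (the real implementation works with fork digests and per-fork topic sets):

```rust
const SUBSCRIBE_SLOTS_BEFORE_FORK: u64 = 2;

// Join the new fork's gossip topics slightly early so the meshes have time
// to form before the fork activates.
fn should_subscribe_new_fork_topics(current_slot: u64, fork_slot: u64) -> bool {
    current_slot < fork_slot && current_slot + SUBSCRIBE_SLOTS_BEFORE_FORK >= fork_slot
}
```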
2023-03-01 09:22:48 +00:00
Michael Sproul
ca1ce381a9 Delete Kiln and Ropsten configs (#4038)
## Proposed Changes

Remove built-in support for Ropsten and Kiln via the `--network` flag. Both testnets are long dead and deprecated.

This shaves about 30MiB off the binary size, from 135MiB to 103MiB (maxperf), or 165MiB to 135MiB (release).
2023-03-01 06:16:14 +00:00
Divma
047c7544e3 Clean capella (#4019)
## Issue Addressed

Cleans up all the remnants of 4844 in capella. This makes sure when 4844 is reviewed there is nothing we are missing because it got included here 

## Proposed Changes

drop a bomb on every 4844 thing 

## Additional Info

Merge process I did (locally) is as follows:
- squash merge to produce one commit
- in new branch off unstable with the squashed commit create a `git revert HEAD` commit
- merge that new branch onto 4844 with `--strategy ours`
- compare local 4844 to remote 4844 and make sure the diff is empty
- enjoy

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-01 03:19:02 +00:00
Paul Hauner
17d9a620cf Add more logs in the BN HTTP API during block production (#4025)
## Issue Addressed

NA

## Proposed Changes

Adds two new `DEBG` logs to the HTTP API:

1. As soon as we are requested to produce a block.
2. As soon as a signed block is received.

In #3858 we added some very helpful logs to the VC so we could see when things are happening with block proposals in the VC. After doing some more debugging, I found that I can tell when the VC is sending a block but I *can't* tell the time that the BN receives it (I can only get the time after the BN has started doing some work with the block). Knowing when the VC publishes and when the BN receives is useful for determining the delays introduced by network latency (and some other things like JSON decoding, etc).

## Additional Info

NA
2023-02-28 02:20:53 +00:00
Age Manning
0155455990 Docs for Siren (#4023)
This adds some documentation for the Siren app into the Lighthouse book.

Co-authored-by: Mavrik <mrricki.m.usmc@gmail.com>
2023-02-28 02:20:52 +00:00
Paul Hauner
caa6190d4a Use consensus-spec-tests v1.3.0-rc.3 (#4021)
## Issue Addressed

NA

## Proposed Changes

Updates our `ef_tests` to use: https://github.com/ethereum/consensus-specs/releases/tag/v1.3.0-rc.3

This required:

- Skipping a `merkle_proof_validity` test (see #4022)
- Account for the `eip4844` tests changing name to `deneb`
    - My IDE did some Python linting during this change. It seemed simple and nice so I left it there.

## Additional Info

NA
2023-02-28 02:20:51 +00:00
Alan Höng
cc4fc422b2 Add content-type header to metrics server response (#3970)
This fixes issues with certain metrics scrapers, which might error if the content-type is not correctly set.

## Issue Addressed

Fixes https://github.com/sigp/lighthouse/issues/3437

## Proposed Changes

Simply set header: `Content-Type: text/plain` on metrics server response. Seems like the errored branch does this correctly already.

## Additional Info

This is also needed to enable InfluxDB metric scraping, which works very nicely with Geth.
2023-02-28 02:20:50 +00:00
Michael Sproul
47b22d5256 Allow compilation with no slasher backend (#3888)
## Proposed Changes

Allow compiling without MDBX by running:

```bash
CARGO_INSTALL_EXTRA_FLAGS="--no-default-features" make
```

The reasons to do this are several:

- Save compilation time if the slasher won't be used
- Work around compilation errors in slasher backend dependencies (our pinned version of MDBX is currently not compiling on FreeBSD with certain compiler versions).

## Additional Info

When I opened this PR we were using resolver v1 which [doesn't disable default features in dependencies](https://doc.rust-lang.org/cargo/reference/features.html#resolver-version-2-command-line-flags), and `mdbx` is default for the `slasher` crate. Even after the resolver got changed to v2 in #3697 compiling with `--no-default-features` _still_ wasn't turning off the slasher crate's default features, so I added `default-features = false` in all the places we depend on it.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-28 02:20:49 +00:00
Age Manning
be29394a9d Execution Integration Tests Correction (#4034)
The execution integration tests are currently failing.

This is a quick modification to pin the execution client version to correct the tests.
2023-02-27 21:26:16 +00:00
Paul Hauner
0fb58a680d v3.5.0 (#3996)
## Issue Addressed

NA

## Proposed Changes

- Bump versions

## Sepolia Capella Upgrade

This release will enable the Capella fork on Sepolia. We are planning to publish this release on the 23rd of Feb 2023.

Users who can build from source and wish to do pre-release testing can use this branch.

## Additional Info

- [ ] Requires further testing
2023-02-22 06:00:49 +00:00
Age Manning
5c63d8758e Register disconnected peers when temporarily banned (#4001)
This is a correction to #3757. 

The correction registers a peer that is being disconnected in the local peer manager db to ensure we are tracking the correct state.
2023-02-21 23:45:44 +00:00
Pawan Dhananjay
3721f3a83c Fix doppelganger script (#3988)
## Issue Addressed

N/A

## Proposed Changes

The doppelganger tests were failing silently since the `PROPOSER_BOOST` config was not set. This sets the config, and the script now returns an error if any subprocess fails.
2023-02-21 23:45:43 +00:00
Michael Sproul
fa8b920dd8 Merge branch 'capella' into unstable 2023-02-22 10:25:45 +11:00
Paul Hauner
9c81be8ac4 Fix metric (#4020) 2023-02-22 09:46:45 +11:00
Pawan Dhananjay
bb5285ac6d Remove BeaconBlockAndBlobsSidecar from core topics (#4016) 2023-02-22 09:45:38 +11:00
Michael Sproul
b7d7addd4a Disable debug info on CI (#4018)
## Issue Addressed

Closes #4005

Alternative to #4017

## Proposed Changes

Disable debug info on CI to save RAM and disk space.
2023-02-21 20:54:57 +00:00
Mac L
3642efe76a Cache validator balances and allow them to be served over the HTTP API (#3863)
## Issue Addressed

#3804

## Proposed Changes

- Add `total_balance` to the validator monitor and adjust the number of historical epochs which are cached. 
- Allow certain values in the cache to be served out via the HTTP API without requiring a state read.

## Usage
```
curl -X POST "http://localhost:5052/lighthouse/ui/validator_info" -d '{"indices": [0]}' -H "Content-Type: application/json" | jq
```

```
{
  "data": {
    "validators": {
      "0": {
        "info": [
          {
            "epoch": 172981,
            "total_balance": 36566388519
          },
         ...
          {
            "epoch": 172990,
            "total_balance": 36566496513
          }
        ]
      },
      "1": {
        "info": [
          {
            "epoch": 172981,
            "total_balance": 36355797968
          },
          ...
          {
            "epoch": 172990,
            "total_balance": 36355905962
          }
        ]
      }
    }
  }
}
```

## Additional Info

This requires no historical states to operate, which means it will still function on a freshly checkpoint-synced node; however, because of this, the values will populate each epoch (up to a maximum of 10 entries).

Another benefit of this method is that we can easily cache any other values which would normally require a state read and serve them via the same endpoint. However, we would need to be cautious about not overly increasing block processing time by caching values from complex computations.

This also caches some of the validator metrics directly, rather than pulling them from the Prometheus metrics when the API is called. This means when the validator count exceeds the individual monitor threshold, the cached values will still be available.

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-02-21 20:54:55 +00:00
Paul Hauner
40669da486 Modify some Capella comments (#4015)
* Modify comment to only include 4844

Capella only modifies per epoch processing by adding
`process_historical_summaries_update`, which does not change the realization of
justification or finality.

Whilst 4844 does not currently modify realization, the spec is not yet final
enough to say that it never will.

* Clarify address change verification comment

The verification of the address change doesn't really have anything to do with
the current epoch. I think this was just a copy-paste from a function like
`verify_exit`.
2023-02-21 18:03:42 +11:00
Paul Hauner
1bce7a02c8 Fix post-Bellatrix checkpoint sync (#4014)
* Recognise execution in post-merge blocks

* Remove `.body()`

* Fix typo

* Use `is_default_with_empty_roots`.
2023-02-21 18:03:24 +11:00
Paul Hauner
eed7d65ce7 Allow for withdrawals in max block size (#4011)
* Allow for withdrawals in max block size

* Ensure payload size is counted
2023-02-21 18:03:10 +11:00
Paul Hauner
729c178020 Revert Sepolia genesis change (#4013) 2023-02-21 17:18:28 +11:00
Paul Hauner
b72f273e47 Capella consensus review (#4012)
* Add extra encoding/decoding tests

* Remove TODO

The method LGTM

* Remove `FreeAttestation`

This is an ancient relic, I'm surprised it still existed!

* Add paranoid check for eip4844 code

This is not technically necessary, but I think it's nice to be explicit about
EIP4844 consensus code for the time being.

* Reduce big-O complexity of address change pruning

I'm not sure this is *actually* useful, but it might come in handy if we see a
ton of address changes at the fork boundary. I know the devops team have been
testing with ~100k changes, so maybe this will help in that case.

* Revert "Reduce big-O complexity of address change pruning"

This reverts commit e7d93e6cc7.
2023-02-21 15:33:27 +11:00
Paul Hauner
d53d43844c Suggestions for Capella beacon_chain (#3999)
* Remove CapellaReadiness::NotSynced

Some EEs have a habit of flipping between synced/not-synced, which causes some
spurious "Not read for the merge" messages back before the merge. For the
merge, if the EE wasn't synced the CE simple wouldn't go through the transition
(due to optimistic sync stuff). However, we don't have that hard requirement
for Capella; the CE will go through the fork and just wait for the EE to catch
up. I think that removing `NotSynced` here will avoid false-positives on the
"Not ready logs..". We'll be creating other WARN/ERRO logs if the EE isn't
synced, anyway.

* Change some Capella readiness logging

There's two changes here:

1. Shorten the log messages, for readability.
2. Change the hints.

Connecting a Capella-ready LH to a non-Capella-ready EE gives this log:

```
WARN Not ready for Capella                   info: The execution endpoint does not appear to support the required engine api methods for Capella: Required Methods Unsupported: engine_getPayloadV2 engine_forkchoiceUpdatedV2 engine_newPayloadV2, service: slot_notifier
```

This variant of error doesn't get a "try updating" style hint, when it's the
one that needs it. This is because we detect the method-not-found response from
the EE and return default capabilities, rather than indicating that the request
fails. I think it's fair to say that an EE upgrade is required whenever it
doesn't provide the required methods.

I changed the `ExchangeCapabilitiesFailed` message since that can only happen
when the EE fails to respond with anything other than success or not-found.
2023-02-21 11:05:36 +11:00
Paul Hauner
c3c181aa03 Remove "eip4844" network (#4008) 2023-02-21 11:01:22 +11:00
Michael Sproul
0b6850221e Fix Capella schema downgrades (#4004) 2023-02-20 17:50:42 +11:00
Paul Hauner
9a41f65b89 Add capella fork epoch (#3997) 2023-02-17 16:25:20 +11:00
Michael Sproul
1f419f4653 Merge pull request #3981 from michaelsproul/capella-update
Update `capella` to `unstable`
2023-02-17 12:47:34 +11:00
Michael Sproul
066c27750a Merge remote-tracking branch 'origin/staging' into capella-update 2023-02-17 12:05:36 +11:00
Paul Hauner
4aa8a2ab12 Suggestions for Capella execution_layer (#3983)
* Restrict Engine::request to FnOnce

* Use `Into::into`

* Impl IntoIterator for VariableList

* Use Instant rather than SystemTime
2023-02-17 11:58:33 +11:00
Michael Sproul
ebf2fec5d0 Fix exec integration tests for Geth v1.11.0 (#3982)
## Proposed Changes

* Bump Go from 1.17 to 1.20. The latest Geth release v1.11.0 requires 1.18 minimum.
* Prevent a cache miss during payload building by using the right fee recipient. This prevents Geth v1.11.0 from building a block with 0 transactions. The payload building mechanism is overhauled in the new Geth to improve the payload every 2s, and the tests were failing because we were falling back on a `getPayload` call with no lookahead due to a `get_payload_id` cache miss caused by the mismatched fee recipient. Alternatively we could hack the tests to send `proposer_preparation_data`, but I think the static fee recipient is simpler for now.
* Add support for optionally enabling Lighthouse logs in the integration tests. Enable using `cargo run --release --features logging/test_logger`. This was very useful for debugging.
2023-02-16 23:34:33 +00:00
Jimmy Chen
245e922c7b Improve testing slot clock to allow manipulation of time in tests (#3974)
## Issue Addressed

I discovered this issue while implementing [this test](https://github.com/jimmygchen/lighthouse/blob/test-example/beacon_node/network/src/beacon_processor/tests.rs#L895), where I tried to manipulate the slot clock with: 

`rig.chain.slot_clock.set_current_time(duration);`

however the change doesn't get reflected in the `slot_clock` in `ReprocessQueue`, and I realised `slot_clock` was cloned a few times in the code, and therefore changing the time in `rig.chain.slot_clock` doesn't have any effect in `ReprocessQueue`.

I've incorporated the suggestion from @paulhauner and @michaelsproul - wrapping the `ManualSlotClock.current_time` (`RwLock<Duration>`) in an `Arc`, and the above test now passes. 

Let's see if this breaks any existing tests :)
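
The shape of the fix, sketched with simplified types: sharing the time behind an `Arc` means every clone of the clock observes updates.

```rust
use std::sync::{Arc, RwLock};
use std::time::Duration;

#[derive(Clone)]
struct ManualSlotClock {
    // Arc<RwLock<..>> instead of RwLock<..>: clones share the same inner
    // time, so setting it on one handle is visible to all the others.
    current_time: Arc<RwLock<Duration>>,
}

impl ManualSlotClock {
    fn set_current_time(&self, time: Duration) {
        *self.current_time.write().unwrap() = time;
    }

    fn current_time(&self) -> Duration {
        *self.current_time.read().unwrap()
    }
}
```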
2023-02-16 23:34:32 +00:00
Divma
ffeb8b6e05 blacklist tests in windows (#3961)
## Issue Addressed
Windows tests for subscription and unsubscriptions fail in CI sporadically. We usually ignore these failures, so this PR aims to help reduce the failure noise. Associated issue is https://github.com/sigp/lighthouse/issues/3960
2023-02-16 23:34:30 +00:00
Michael Sproul
461bda6e85 Execution engine suggestions from code review
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-02-16 16:54:05 +11:00
realbigsean
55753f8bc8 bump recursion limit 2023-02-15 16:32:50 -05:00
realbigsean
ca8e341649 fix compilation after merge 2023-02-15 14:30:39 -05:00
realbigsean
8320b918ae merge self limiter 2023-02-15 14:26:18 -05:00
realbigsean
4d0b0f681d merge self limiter 2023-02-15 14:25:58 -05:00
realbigsean
b805fa6279 merge with upstream 2023-02-15 14:20:12 -05:00
realbigsean
87d1fbeb21 Merge pull request #3905 from emhane/beacon_chain_tests
Debug CI
2023-02-15 11:53:41 -05:00
Emilia Hane
aaf6404d4f Remove unused generic 2023-02-15 17:45:22 +01:00
Michael Sproul
2fcfdf1a01 Fix docker and deps (#3978)
## Proposed Changes

- Fix this cargo-audit failure for `sqlite3-sys`: https://github.com/sigp/lighthouse/actions/runs/4179008889/jobs/7238473962
- Prevent the Docker builds from running out of RAM on CI by removing `gnosis` and LMDB support from the `-dev` images (see: https://github.com/sigp/lighthouse/pull/3959#issuecomment-1430531155, successful run on my fork: https://github.com/michaelsproul/lighthouse/actions/runs/4179162480/jobs/7239537947).
2023-02-15 11:51:46 +00:00
Emilia Hane
2672cf40bb Better fix for debug tests 2023-02-15 11:47:56 +01:00
Emilia Hane
9fea440ae6 Fix lint 2023-02-15 09:54:36 +01:00
Michael Sproul
fd379ae2e2 Upgrade sqlite3 2023-02-15 09:44:37 +01:00
realbigsean
44dbccfeae add v3 to capabilities 2023-02-15 09:23:59 +01:00
Emilia Hane
13efd47238 fixup! Disable use of system time in tests 2023-02-15 09:20:30 +01:00
Michael Sproul
918b688f72 Simplify payload traits and reduce cloning (#3976)
* Simplify payload traits and reduce cloning

* Fix self limiter
2023-02-15 14:17:56 +11:00
Emilia Hane
9e4abc79fb Comment out tests that use system time 2023-02-14 14:12:50 +01:00
Emilia Hane
73c7ad73b8 Disable use of system time in tests 2023-02-14 13:33:38 +01:00
Emilia Hane
148385eb70 Remove unused error 2023-02-14 12:43:13 +01:00
Emilia Hane
810d875b02 Fix merge conflicts with eip4844 2023-02-14 12:42:41 +01:00
Emilia Hane
8200d37045 Merge pull request #2 from realbigsean/sean-debug-ci
Debug CI Sean's PR feedback
2023-02-14 12:24:15 +01:00
Michael Sproul
10d32ee04c Quote Capella BeaconState fields (#3967) 2023-02-14 14:41:28 +11:00
Michael Sproul
5fc798dd1b Merge pull request #3973 from michaelsproul/capella-merge
Update Capella to latest `unstable`
2023-02-14 14:41:14 +11:00
Age Manning
8dd9249177 Enforce a timeout on peer disconnect (#3757)
On heavily crowded networks, we are seeing many attempted connections to our node every second. 

Often these connections come from peers that have just been disconnected. This can be for a number of reasons including: 
- We have deemed them to be not as useful as other peers
- They have performed poorly
- They have dropped the connection with us
- The connection was spontaneously lost
- They were randomly removed because we have too many peers

In all of these cases, if we have reached or exceeded our target peer limit, there is no desire to accept new connections immediately after the disconnect from these peers. In fact, it often costs us resources to handle the established connections and defeats some of the logic of dropping them in the first place. 

This PR adds a timeout, that prevents recently disconnected peers from reconnecting to us.

Technically we implement a ban at the swarm layer to prevent immediate reconnections for at least 10 minutes. I decided to keep this light, and use a time-based LRUCache which only gets updated during the peer manager heartbeat to prevent the added stress of polling a delay map for what could be a large number of peers.

This cache is bounded in time. An extra space bound could be added should people consider this a risk.
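
A sketch of the idea, using a plain `HashMap` in place of the time-based LRUCache and a stand-in `PeerId` type:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

type PeerId = u64; // stand-in for the real libp2p PeerId

const RECONNECT_BAN: Duration = Duration::from_secs(600); // 10 minutes

#[derive(Default)]
struct RecentlyDisconnected {
    peers: HashMap<PeerId, Instant>,
}

impl RecentlyDisconnected {
    fn on_disconnect(&mut self, peer: PeerId) {
        self.peers.insert(peer, Instant::now());
    }

    /// Pruned from the peer manager heartbeat rather than a delay map, so a
    /// large number of entries adds no polling overhead.
    fn prune(&mut self) {
        self.peers.retain(|_, t| t.elapsed() < RECONNECT_BAN);
    }

    fn is_banned(&self, peer: &PeerId) -> bool {
        self.peers
            .get(peer)
            .map_or(false, |t| t.elapsed() < RECONNECT_BAN)
    }
}
```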

Co-authored-by: Diva M <divma@protonmail.com>
2023-02-14 03:25:42 +00:00
Michael Sproul
f7bd4bf06e Update block rewards API for Capella 2023-02-14 12:09:40 +11:00
Michael Sproul
d53ccf8fc7 Placeholder for BlobsByRange outbound rate limit 2023-02-14 12:08:14 +11:00
Michael Sproul
18c8cab4da Merge remote-tracking branch 'origin/unstable' into capella-merge 2023-02-14 12:07:27 +11:00
realbigsean
d2ecbd942e fix a couple new lints 2023-02-13 17:13:47 -05:00
realbigsean
cd8757de1c Revert "make batch size check compile time panic"
This reverts commit 68f2484efc.
2023-02-13 16:51:55 -05:00
realbigsean
68f2484efc make batch size check compile time panic 2023-02-13 16:51:46 -05:00
realbigsean
4c3561dcaf make batch size check compile time panic 2023-02-13 16:50:33 -05:00
realbigsean
8f9c5cfca9 remove unused structs 2023-02-13 16:47:36 -05:00
realbigsean
ad9af6d8b1 complete match for has_context_bytes 2023-02-13 16:44:54 -05:00
realbigsean
fc2d07b4e3 allow unused 2023-02-13 16:36:38 -05:00
realbigsean
28702c9d5d merge upstream, add back get_blobs logic 2023-02-13 16:29:21 -05:00
realbigsean
e58d7e85bf Merge pull request #3951 from realbigsean/fix-blob-tx-ssz
Encode blob transactions as signed blob transactions
2023-02-13 14:50:44 -05:00
realbigsean
e1cb4b8a11 Merge pull request #3869 from emhane/blobs_freezer
Blobs freezer
2023-02-13 14:40:02 -05:00
Nazar Hussain
fa1d4c7054 Invalid cross build feature flag (#3959)
## Issue Addressed

The documentation on building from source doesn't match what the git workflow uses. 

aa5b7ef783/book/src/installation-source.md (L118-L120)

## Proposed Changes

The GitHub workflow uses `cross` to build from source, and that build uses a different env variable, `CROSS_FEATURES`, which needs to be passed at compile time. 

## Additional Info

Verified that existing `-dev` builds do not contain the `minimal` spec enabled. 

```bash
> docker run --rm --name node-5-cl-lighthouse sigp/lighthouse:latest-amd64-unstable-dev lighthouse --version
Lighthouse v3.4.0-aa5b7ef
BLS library: blst-portable
SHA256 hardware acceleration: true
Allocator: jemalloc
Specs: mainnet (true), minimal (false), gnosis (true)
```
2023-02-13 03:32:03 +00:00
Michael Sproul
2f456ff9eb Fix regression in DB write atomicity (#3931)
## Issue Addressed

Fix a bug introduced by #3696. The bug is not expected to occur frequently, so releasing this PR is non-urgent.

## Proposed Changes

* Add a variant to `StoreOp` that allows a raw KV operation to be passed around.
* Return to using `self.store.do_atomically` rather than `self.store.hot_db.do_atomically`. This streamlines the write back into a single call and makes our auto-revert work again.
* Prevent `import_block_update_shuffling_cache` from failing block import. This is an outstanding bug from before v3.4.0 which may have contributed to some random unexplained database corruption.
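
A sketch of the first two changes, with illustrative op variants rather than the real `StoreOp` definition:

```rust
enum KeyValueStoreOp {
    Put(Vec<u8>, Vec<u8>),
    Delete(Vec<u8>),
}

enum StoreOp {
    DeleteBlock(Vec<u8>),
    // New variant: a raw KV operation passed around unchanged.
    KeyValueOp(KeyValueStoreOp),
}

fn convert(op: StoreOp) -> KeyValueStoreOp {
    match op {
        StoreOp::DeleteBlock(key) => KeyValueStoreOp::Delete(key),
        StoreOp::KeyValueOp(kv) => kv,
    }
}

// Convert every op first, then hand the whole batch to the backend in a
// single atomic write, so either all ops land on disk or none do.
fn convert_batch(ops: Vec<StoreOp>) -> Vec<KeyValueStoreOp> {
    ops.into_iter().map(convert).collect()
}
```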

## Additional Info

In #3696 I split the database write into two calls, one to convert the `StoreOp`s to `KeyValueStoreOp`s and one to write them. This had the unfortunate side-effect of damaging our atomicity guarantees in case of a write error. If the first call failed, we would be left with the block in fork choice but not on-disk (or the snapshot cache), which would prevent us from processing any descendant blocks. On `unstable` the first call is very unlikely to fail unless the disk is full, but on `tree-states` the conversion is more involved and a user reported database corruption after it failed in a way that should have been recoverable.

Additionally, as @emhane observed, #3696 also inadvertently removed the import of the new block into the block cache. Although this seems like it could have negatively impacted performance, there are several mitigating factors:

- For regular block processing we should almost always load the parent block (and state) from the snapshot cache.
- We often load blinded blocks, which bypass the block cache anyway.
- Metrics show no noticeable increase in the block cache miss rate with v3.4.0.

However, I expect the block cache _will_ be useful again in `tree-states`, so it is restored to use by this PR.
2023-02-13 03:32:01 +00:00
Paul Hauner
84843d67d7 Reduce some EE and builder related ERRO logs to WARN (#3966)
## Issue Addressed

NA

## Proposed Changes

Our `ERRO` stream has been rather noisy since the merge due to some unexpected behaviours of builders and EEs. Now that we've been running post-merge for a while, I think we can drop some of these `ERRO` to `WARN` so we're not "crying wolf".

The modified logs are:

#### `ERRO Execution engine call failed`

I'm seeing this quite frequently on Geth nodes. They seem to timeout when they're busy and it rarely indicates a serious issue. We also have logging across block import, fork choice updating and payload production that raise `ERRO` or `CRIT` when the EE times out, so I think we're not at risk of silencing actual issues.

#### `ERRO "Builder failed to reveal payload"`

In #3775 we reduced this log from `CRIT` to `ERRO` since it's common for builders to fail to reveal the block to the producer directly whilst still broadcasting it to the network. I think it's worth dropping this to `WARN` since it's rarely interesting.

I elected to stay with `WARN` since I really do wish builders would fulfill their API promises by returning the block to us. Perhaps I'm just being pedantic here, I could be convinced otherwise.

#### `ERRO "Relay error when registering validator(s)"`

It seems like builders and/or mev-boost struggle to handle heavy loads of validator registrations. I haven't observed issues with validators not actually being registered, but I see timeouts on these endpoints many times a day. It doesn't seem like this `ERRO` is worth it.

#### `ERRO Error fetching block for peer     ExecutionLayerErrorPayloadReconstruction`

This means we failed to respond to a peer on the P2P network with a block they requested because of an error in the `execution_layer`. It's very common to see timeouts or incomplete responses on this endpoint whilst the EE is busy and I don't think it's important enough for an `ERRO`. As long as the peer count stays high, I don't think the user needs to be actively concerned about how we're responding to peers.

## Additional Info

NA
2023-02-12 23:14:08 +00:00
Michael Sproul
3b4c677727 Use release profile for Windows binaries (#3965)
## Proposed Changes

Disable `maxperf` profile on Windows due to #3964. This is required for the v3.5.0 release CI to succeed without crashing.
2023-02-12 23:14:07 +00:00
Emilia Hane
c6dfa7a1ac fixup! Add tests for blob pruning flags 2023-02-10 21:15:57 +01:00
ethDreamer
e743d75c9b Update Mock Builder for Post-Capella Tests (#3958)
* Update Mock Builder for Post-Capella Tests

* Add _mut Suffix to BidStuff Functions

* Fix Setting Gas Limit
2023-02-10 13:30:14 -06:00
Emilia Hane
d3a09af7f7 Run cargo update 2023-02-10 16:29:36 +01:00
Emilia Hane
28e9f07746 Fix lint for prune blobs pr 2023-02-10 16:23:04 +01:00
Emilia Hane
d9eed481b7 fixup! Add tests for blob pruning flags 2023-02-10 16:18:39 +01:00
ethDreamer
39f8327f73 Properly Deserialize ForkVersionedResponses (#3944)
* Move ForkVersionedResponse to consensus/types

* Properly Deserialize ForkVersionedResponses

* Elide Types in from_value Calls

* Added Tests for ForkVersionedResponse Deserialize

* Address Sean's Comments & Make Less Restrictive

* Utilize `map_fork_name!`
2023-02-10 08:49:25 -06:00
Emilia Hane
a1768b16b9 fixup! Update dependencies (#3946) 2023-02-10 15:43:40 +01:00
Michael Sproul
d890f2bf6b Update dependencies (#3946)
Resolves the cargo-audit failure caused by https://rustsec.org/advisories/RUSTSEC-2023-0010.

I also removed the ignore for `RUSTSEC-2020-0159` as we are no longer using a vulnerable version of `chrono`. We still need the other ignore for `time 0.1` because we depend on it via `sloggers -> chrono -> time 0.1`.
2023-02-10 15:35:01 +01:00
Emilia Hane
3dd42e5723 Remove unused dependencies 2023-02-10 15:35:01 +01:00
Emilia Hane
0104d6143c fixup! Fix latest clippy lints 2023-02-10 15:35:01 +01:00
Emilia Hane
02cca3478b Fix conflicts rebasing eip4844 2023-02-10 15:35:01 +01:00
Michael Sproul
1f3eef2c5f Unpin fixed-hash (#3917)
## Proposed Changes
Remove the `[patch]` for `fixed-hash`.

We pinned it years ago in #2710 to fix `arbitrary` support. Nowadays the 0.7 version of `fixed-hash` is only used by the `web3` crate and doesn't need `arbitrary`.

~~Blocked on #3916 but could be merged in the same Bors batch.~~
2023-02-10 15:35:01 +01:00
Emilia Hane
615402abcf fixup! Fix conflicts rebasing eip4844 2023-02-10 15:35:00 +01:00
Emilia Hane
db36eb978b Fix latest clippy lints 2023-02-10 15:35:00 +01:00
Emilia Hane
2653f88b5f Fix conflicts rebasing eip4844 2023-02-10 15:35:00 +01:00
Emilia Hane
c7b49feb9e fixup! Change CI clippy 2023-02-10 15:35:00 +01:00
Emilia Hane
e0a9cd6b84 Change CI clippy 2023-02-10 15:35:00 +01:00
Emilia Hane
43bf908e7a Fix release tests 2023-02-10 15:34:59 +01:00
Emilia Hane
481718856c Fix clippy 2023-02-10 15:34:59 +01:00
Emilia Hane
4d3ff347a3 Fixes after rebasing eip4844 2023-02-10 15:34:58 +01:00
Emilia Hane
5437dcae9c Fix conflicts rebasing eip4844 2023-02-10 15:34:58 +01:00
Emilia Hane
7545ae9e9b fixup! Fix block lookup debug tests 2023-02-10 15:34:46 +01:00
realbigsean
a68e3eac2c pr feedback 2023-02-10 08:25:42 -05:00
Emilia Hane
6beca6defc Fix range sync tests 2023-02-10 09:41:24 +01:00
Emilia Hane
e9e198a2b6 Fix conflicts rebasing eip4844 2023-02-10 09:41:23 +01:00
Emilia Hane
d292a3a6a8 Fix conflicts rebasing eip4844 2023-02-10 09:41:23 +01:00
Emilia Hane
994990063a Fix weak_subjectivity_sync test 2023-02-10 09:41:23 +01:00
Emilia Hane
546d63f83c Fix rebase conflicts 2023-02-10 09:41:23 +01:00
Emilia Hane
50e01bef1f Add eip4844 fork to tests 2023-02-10 09:41:22 +01:00
Emilia Hane
09370e70d9 Fix rebase conflicts 2023-02-10 09:41:19 +01:00
Emilia Hane
69c30bb6eb Fix release test 2023-02-10 09:39:22 +01:00
Emilia Hane
8365d76277 fixup! Debug tests 2023-02-10 09:39:22 +01:00
Emilia Hane
16cb9cfca2 fixup! Debug tests 2023-02-10 09:39:22 +01:00
Emilia Hane
7220f35ff6 Debug tests 2023-02-10 09:39:21 +01:00
Emilia Hane
995b2715f2 Fix network block_lookups test 2023-02-10 09:39:21 +01:00
Emilia Hane
3676ce78b5 Fix rebase conflicts 2023-02-10 09:39:21 +01:00
Michael Sproul
c9354a9d25 Tweaks to reward APIs (#3957)
## Proposed Changes

* Return the effective balance in gwei to align with the spec ([ideal attestation rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards/getAttestationsRewards)).
* Use quoted `i64`s for attestation and sync committee rewards.
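
A generic sketch of the quoting, in the style the API uses (not the exact serde helper from the codebase):

```rust
use serde::{Serialize, Serializer};

// Serialize an i64 as a decimal string, e.g. -2000 -> "-2000".
fn quoted_i64<S: Serializer>(v: &i64, s: S) -> Result<S::Ok, S::Error> {
    s.serialize_str(&v.to_string())
}

#[derive(Serialize)]
struct TotalReward {
    validator_index: u64,
    #[serde(serialize_with = "quoted_i64")]
    head: i64,
}
```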
2023-02-10 06:19:42 +00:00
Paul Hauner
5276dd0cb0 Fix edge-case when finding the finalized descendant (#3924)
## Issue Addressed

NA

## Description

We were missing an edge case when checking to see if a block is a descendant of the finalized checkpoint. This edge case is described for one of the tests in this PR:

a119edc739/consensus/proto_array/src/proto_array_fork_choice.rs (L1018-L1047)

This bug presented itself in the following mainnet log:

```
Jan 26 15:12:42.841 ERRO Unable to validate attestation error: MissingBeaconState(0x7c30cb80ec3d4ec624133abfa70e4c6cfecfca456bfbbbff3393e14e5b20bf25), peer_id: 16Uiu2HAm8RPRciXJYtYc5c3qtCRdrZwkHn2BXN3XP1nSi1gxHYit, type: "unaggregated", slot: Slot(5660161), beacon_block_root: 0x4a45e59da7cb9487f4836c83bdd1b741b4f31c67010c7ae343fa6771b3330489
```

Here the BN is rejecting an attestation because of a "missing beacon state". Whilst it was correct to reject the attestation, it should have rejected it because it attests to a block that conflicts with finality rather than claiming that the database is inconsistent.

The block that this attestation points to (`0x4a45`) is block `C` in the above diagram. It is a non-canonical block in the first slot of an epoch that conflicts with the finalized checkpoint. Due to our lazy pruning of proto array, `0x4a45` was still present in proto-array. Our missed edge-case in [`ForkChoice::is_descendant_of_finalized`](38514c07f2/consensus/fork_choice/src/fork_choice.rs (L1375-L1379)) would have indicated to us that the block is a descendant of the finalized block. Therefore, we would have accepted the attestation thinking that it attests to a descendant of the finalized *checkpoint*.

Since we didn't have the shuffling for this erroneously processed block, we attempted to read its state from the database. This failed because we prune states from the database by keeping track of the tips of the chain and iterating back until we find a finalized block. This would have deleted `C` from the database, hence the `MissingBeaconState` error.
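
For intuition, a hypothetical sketch of the corrected check (simplified types, not Lighthouse's actual fork choice code):

```rust
type Hash256 = [u8; 32];
type Slot = u64;

// `ancestor_at_slot` walks back from `block_root` to the given slot.
fn is_descendant_of_finalized(
    block_root: Hash256,
    block_slot: Slot,
    finalized_root: Hash256,
    finalized_slot: Slot,
    ancestor_at_slot: impl Fn(Hash256, Slot) -> Option<Hash256>,
) -> bool {
    if block_slot < finalized_slot {
        return false;
    }
    // The missed edge case: a block *at* the finalized slot is a descendant
    // only if it *is* the finalized block itself; a conflicting sibling
    // (like `0x4a45` above) must be rejected.
    if block_slot == finalized_slot {
        return block_root == finalized_root;
    }
    ancestor_at_slot(block_root, finalized_slot) == Some(finalized_root)
}
```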
2023-02-09 23:51:18 +00:00
Pawan Dhananjay
2b735a9e8b Add attestation duty slot metric (#2704)
## Issue Addressed

Resolves #2521 

## Proposed Changes

Add a metric that indicates the next attestation duty slot for all managed validators in the validator client.
2023-02-09 23:51:17 +00:00
Emilia Hane
12720f9ac5 fixup! Help user choose blobs db 2023-02-09 10:37:53 +01:00
Emilia Hane
1300fb7ffa Fix conflicts from rebasing eip4844 2023-02-09 10:37:11 +01:00
Emilia Hane
290e1d2ff7 fixup! Complete making blocks and blobs db atomic 2023-02-09 07:50:57 +01:00
Emilia Hane
38fe2dce3f fixup! Complete making blocks and blobs db atomic 2023-02-09 07:50:55 +01:00
Emilia Hane
ca934b7cb5 Fix rebase conflicts 2023-02-09 07:50:30 +01:00
Emilia Hane
72cd68c0a4 Complete making blocks and blobs db atomic 2023-02-09 07:46:27 +01:00
Emilia Hane
89cccfc397 Fix rebase conflicts 2023-02-09 07:46:25 +01:00
Emilia Hane
ba882958ed Delete blobs along with block 2023-02-09 07:42:46 +01:00
Emilia Hane
04fafebfa6 fixup! Throw error when params don't match with previous run 2023-02-09 07:42:46 +01:00
Emilia Hane
00ce8d9572 Throw error when params don't match with previous run 2023-02-09 07:42:46 +01:00
Emilia Hane
d8e501d3ab Add todos 2023-02-09 07:42:43 +01:00
Emilia Hane
f971f3a3a2 Fix rebase conflicts 2023-02-09 07:41:38 +01:00
Emilia Hane
f8c3e7fc91 Lint fix 2023-02-09 07:36:11 +01:00
Emilia Hane
7f91dd803c Help user choose blobs db 2023-02-09 07:36:11 +01:00
Emilia Hane
22915c2d7e fixup! Store blobs in correct db for atomic ops 2023-02-09 07:36:10 +01:00
Emilia Hane
dcb5495745 Store blobs in correct db for atomic ops 2023-02-09 07:36:10 +01:00
Emilia Hane
625980e484 Fix rebase conflicts 2023-02-09 07:36:07 +01:00
Emilia Hane
04f635c0ac Remove IDE file 2023-02-09 07:35:47 +01:00
Emilia Hane
3679a0f1cb Improve syntax 2023-02-09 07:35:47 +01:00
Emilia Hane
3c0aa201e3 fixup! Help user configure blobs freezer correctly between start ups 2023-02-09 07:35:47 +01:00
Emilia Hane
0ba0775812 Help user configure blobs freezer correctly between start ups 2023-02-09 07:35:45 +01:00
Emilia Hane
05c51b37b1 fix rebase conflicts 2023-02-09 07:35:10 +01:00
Emilia Hane
e0b1a0841c fixup! Store blobs in separate freezer or historical state freezer 2023-02-09 07:35:10 +01:00
Emilia Hane
f9737628fc Store blobs in separate freezer or historical state freezer 2023-02-09 07:34:59 +01:00
Paul Hauner
aa5b7ef783 Remove participation rate from API docs (#3955)
## Issue Addressed

NA

## Proposed Changes

Removes the "Participation Rate" since it references an undefined variable: `previous_epoch_attesting_gwei`.

I didn't replace it with anything since I think "Justification/Finalization Rate" already expresses what it was trying to express.

## Additional Info

NA
2023-02-09 04:31:22 +00:00
Nazar Hussain
c33eb29ee3 Fix the whitespace in docker workflow (#3952)
## Issue Addressed

Fix a whitespace issue that was causing failure in the docker build. 


## Additional Info

https://github.com/sigp/lighthouse/pull/3948
2023-02-08 20:23:21 +00:00
realbigsean
41567194e9 Merge pull request #3852 from emhane/prune_blobs
Prune blobs
2023-02-08 13:39:26 -05:00
realbigsean
99da11e9f4 fix lints 2023-02-08 11:03:34 -05:00
realbigsean
902f64a946 remove clone of access lists 2023-02-08 10:59:48 -05:00
realbigsean
dd40adc5c0 check byte length when converting to uint256 and hash256 from bytes. Add comments 2023-02-08 10:38:45 -05:00
Emilia Hane
6a37e84399 fixup! Fix regression in DB write atomicity 2023-02-08 11:44:46 +01:00
Emilia Hane
bc468b4ce5 fixup! Improve use of whitespace 2023-02-08 11:44:45 +01:00
Michael Sproul
ac4b5b580c Fix regression in DB write atomicity 2023-02-08 11:44:45 +01:00
Emilia Hane
9d919917f5 Removed unused code 2023-02-08 11:44:45 +01:00
Emilia Hane
d7eb9441cf Reorder loading of db metadata from disk to allow for future changes to schema 2023-02-08 11:44:45 +01:00
Emilia Hane
d599e41f3d Remove debug comment
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:44 +01:00
Emilia Hane
577262ccbf Improve use of whitespace
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:44 +01:00
Emilia Hane
56c84178f2 Fix conflicts rebasing eip4844 2023-02-08 11:44:44 +01:00
Emilia Hane
b2abec5d35 Verify StoreConfig 2023-02-08 11:44:44 +01:00
Emilia Hane
00ca21e84c Make implementation of BlobInfo more coder friendly 2023-02-08 11:44:43 +01:00
Emilia Hane
8f137df02e fixup! Allow user to set an epoch margin for pruning 2023-02-08 11:44:43 +01:00
Emilia Hane
a2eda76291 Correct comment 2023-02-08 11:44:43 +01:00
Emilia Hane
1e59cb9dea Add tests for blob pruning flags 2023-02-08 11:44:43 +01:00
Emilia Hane
9ee9b6df76 Remove unused stuff 2023-02-08 11:44:42 +01:00
Emilia Hane
6dff69bde9 Atomically update blob info with pruned blobs 2023-02-08 11:44:42 +01:00
Emilia Hane
5d2480c762 Improve naming 2023-02-08 11:44:42 +01:00
Emilia Hane
9c2e623555 Reflect use of prune margin epochs at import 2023-02-08 11:44:42 +01:00
Emilia Hane
d4795601f2 fixup! Prune from highest data availability boundary 2023-02-08 11:44:41 +01:00
Emilia Hane
43c3c74a48 fixup! Fix blobs store bug 2023-02-08 11:44:41 +01:00
Emilia Hane
63ca3bfb29 Prune from highest data availability boundary 2023-02-08 11:44:41 +01:00
Emilia Hane
c50f83116e Fix wording
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:41 +01:00
Emilia Hane
f6346f89c1 Clarify comment
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:41 +01:00
Emilia Hane
e4b447395a Clarify wording
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:40 +01:00
Emilia Hane
756c881857 Keep uniform size small keys
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:40 +01:00
Emilia Hane
4de523fb75 fixup! Allow user to set an epoch margin for pruning 2023-02-08 11:44:40 +01:00
Emilia Hane
1812301c9c Allow user to set an epoch margin for pruning 2023-02-08 11:44:40 +01:00
Emilia Hane
d7fc24a9d5 Plug in running blob pruning in migrator, related bug fixes and add todos 2023-02-08 11:44:40 +01:00
Emilia Hane
d1b75e281f Fix typo 2023-02-08 11:44:39 +01:00
Emilia Hane
0bdc291490 Only store non-empty orphaned blobs 2023-02-08 11:44:39 +01:00
Emilia Hane
caa04db58a Run prune blobs on migrator thread 2023-02-08 11:44:39 +01:00
Emilia Hane
a875bec5f2 Fix blobs store bug 2023-02-08 11:44:39 +01:00
Emilia Hane
3bede06c9b Fix typo 2023-02-08 11:44:38 +01:00
Emilia Hane
54699f808c fixup! Clarify hybrid blob prune solution and fix error handling 2023-02-08 11:44:38 +01:00
Emilia Hane
83a9520761 Clarify hybrid blob prune solution and fix error handling 2023-02-08 11:44:38 +01:00
Emilia Hane
74172ed160 Ignore IDE file 2023-02-08 11:44:38 +01:00
Emilia Hane
3d93dad0e2 Fix type bug
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-02-08 11:44:37 +01:00
Emilia Hane
44ec331452 fixup! Simplify conceptual design 2023-02-08 11:44:37 +01:00
Emilia Hane
20567750c1 fixup! Simplify conceptual design 2023-02-08 11:44:37 +01:00
Emilia Hane
7103a257ce Simplify conceptual design 2023-02-08 11:44:37 +01:00
Emilia Hane
0d13932663 Fix epoch constructor misconception 2023-02-08 11:44:37 +01:00
Emilia Hane
b5abfe620a Convert epochs_per_blob_prune to Epoch once 2023-02-08 11:44:36 +01:00
Emilia Hane
fb2ce909f6 Avoid repeatedly updating blob info for multiple head candidates 2023-02-08 11:44:36 +01:00
Emilia Hane
d58a30b3de fixup! Store orphan block roots 2023-02-08 11:44:36 +01:00
Emilia Hane
6346c30158 Enable skipping blob pruning at each epoch 2023-02-08 11:44:35 +01:00
Emilia Hane
2f565d25b2 Prune blobs in bg after canonical head update 2023-02-08 11:44:35 +01:00
Emilia Hane
8752deeced Store orphan block roots 2023-02-08 11:44:35 +01:00
Emilia Hane
c7f53a9062 Delete blobs that conflict with finalization 2023-02-08 11:44:34 +01:00
Emilia Hane
94aa2cef67 Log info loaded from disk 2023-02-08 11:44:34 +01:00
Emilia Hane
a2b8c6ee69 Save fetching state for blobs pruning 2023-02-08 11:44:33 +01:00
Emilia Hane
6f5ca02ac9 Improve syntax
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-08 11:44:33 +01:00
Emilia Hane
667cca5cf2 Fix try_prune_blobs to use state root 2023-02-08 11:44:33 +01:00
Emilia Hane
d67468d737 Prune blobs on migration in addition to start-up 2023-02-08 11:44:32 +01:00
Emilia Hane
82ffec378a Fix typo
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-02-08 11:44:32 +01:00
Emilia Hane
ce2db355de Fix rebase conflict 2023-02-08 11:44:32 +01:00
Emilia Hane
a211e6afee Fix rebase conflict 2023-02-08 11:44:31 +01:00
Emilia Hane
28e1e635c3 Fix rebase conflict 2023-02-08 11:44:31 +01:00
Emilia Hane
d3b94d8617 fixup! Prune blobs before data availability breakpoint 2023-02-08 11:44:31 +01:00
Emilia Hane
934f3ab587 Remove inaccurate guess for db index 2023-02-08 11:44:31 +01:00
Emilia Hane
d21c66ddf4 fixup! Plug in pruning of blobs into app 2023-02-08 11:44:31 +01:00
Emilia Hane
b88d888145 fixup! Plug in pruning of blobs into app 2023-02-08 11:44:30 +01:00
Emilia Hane
2a41f25d68 fixup! Prune blobs before data availability breakpoint 2023-02-08 11:44:30 +01:00
Emilia Hane
fe0c911402 Plug in pruning of blobs into app 2023-02-08 11:44:30 +01:00
Emilia Hane
7bf88c2336 Prune blobs before data availability breakpoint 2023-02-08 11:44:30 +01:00
Emilia Hane
e2a6da4274 Boiler plate code for blobs pruning 2023-02-08 11:44:20 +01:00
realbigsean
8661477675 use hex decode instead of parse 2023-02-07 21:51:19 -05:00
Michael Sproul
1cae98856c Update dependencies (#3946)
## Issue Addressed

Resolves the cargo-audit failure caused by https://rustsec.org/advisories/RUSTSEC-2023-0010.

I also removed the ignore for `RUSTSEC-2020-0159` as we are no longer using a vulnerable version of `chrono`. We still need the other ignore for `time 0.1` because we depend on it via `sloggers -> chrono -> time 0.1`.
2023-02-08 02:18:54 +00:00
Divma
ceb986549d Self rate limiting dev flag (#3928)
## Issue Addressed
Adds self rate limiting options, mainly with the idea to comply with peers' rate limits in small testnets

## Proposed Changes
Add a hidden flag `self-limiter`; this can take no value, or custom values to configure quotas per protocol

## Additional Info
### How to use
`--self-limiter` will turn on the self rate limiter applying the same params we apply to inbound requests (requests from other peers)
`--self-limiter "beacon_blocks_by_range:64/1"` will turn on the self rate limiter for ALL protocols, but change the quota for bbrange to 64 requested blocks per 1 second.
`--self-limiter "beacon_blocks_by_range:64/1;ping:1/10"` same as previous one, changing the quota for ping as well.

### Caveats
- The rate limiter is either on or off for all protocols. I added the custom values to be able to change the quotas per protocol so that some protocols can be given extremely loose or tight quotas. I think this should satisfy every need even if we can't technically turn off rate limits per protocol.
- This reuses the rate limiter struct for the inbound requests, so there is an ugly part of the code in which we need to deal with the inbound-only protocols (light client stuff). If this becomes too ugly as we add LC protocols, we might want to split the rate limiters. I've checked this and it looks doable with const generics to avoid so much code duplication

### Knowing if this is on
```
Feb 06 21:12:05.493 DEBG Using self rate limiting params         config: OutboundRateLimiterConfig { ping: 2/10s, metadata: 1/15s, status: 5/15s, goodbye: 1/10s, blocks_by_range: 1024/10s, blocks_by_root: 128/10s }, service: libp2p_rpc, service: libp2p
```
2023-02-08 02:18:53 +00:00
Nazar Hussain
7934485aef Update the docker build to include features based images (#3875)
## Proposed Changes

There are some features that are enabled/disabled with the `FEATURES` env variable. This PR introduces a pattern for building docker images based on those features. This can be useful later on to have specific images for some experimental features.

## Additional Info

We at Lodesart need to have `minimal` spec support for some cross-client network testing. To make it efficient on CI, we tend to use the minimal preset.
2023-02-08 02:18:51 +00:00
Diva M
493784366f self rate limiting 2023-02-07 13:00:35 -05:00
realbigsean
a42d07592c fix compilation issues after merge 2023-02-07 12:33:29 -05:00
realbigsean
26a296246d Merge branch 'capella' of https://github.com/sigp/lighthouse into eip4844
# Conflicts:
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	beacon_node/beacon_chain/src/block_verification.rs
#	beacon_node/beacon_chain/src/test_utils.rs
#	beacon_node/execution_layer/src/engine_api.rs
#	beacon_node/execution_layer/src/engine_api/http.rs
#	beacon_node/execution_layer/src/lib.rs
#	beacon_node/execution_layer/src/test_utils/handle_rpc.rs
#	beacon_node/http_api/src/lib.rs
#	beacon_node/http_api/tests/fork_tests.rs
#	beacon_node/network/src/beacon_processor/mod.rs
#	beacon_node/network/src/beacon_processor/work_reprocessing_queue.rs
#	beacon_node/network/src/beacon_processor/worker/sync_methods.rs
#	beacon_node/operation_pool/src/bls_to_execution_changes.rs
#	beacon_node/operation_pool/src/lib.rs
#	beacon_node/operation_pool/src/persistence.rs
#	consensus/serde_utils/src/u256_hex_be_opt.rs
#	testing/antithesis/Dockerfile.libvoidstar
2023-02-07 12:12:56 -05:00
realbigsean
21e5b7fecd Merge pull request #3941 from realbigsean/blob-decoding
Blob decoding
2023-02-07 08:38:20 -05:00
realbigsean
3533ed418e pr feedback and bugfixes 2023-02-07 08:36:35 -05:00
naviechan
9547ac069c Implement block_rewards API (per-validator reward) (#3907)
## Issue Addressed

[#3661](https://github.com/sigp/lighthouse/issues/3661)

## Proposed Changes

`/eth/v1/beacon/rewards/blocks/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": {
    "proposer_index": "123",
    "total": "123",
    "attestations": "123",
    "sync_aggregate": "123",
    "proposer_slashings": "123",
    "attester_slashings": "123"
  }
}
```

The issue contains the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements the `block_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)
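As a quick usage sketch (illustrative; assumes a local node serving the HTTP API on the default port 5052):

```bash
# Fetch the proposer reward breakdown for the current head block.
curl -s "http://localhost:5052/eth/v1/beacon/rewards/blocks/head" \
  -H "accept: application/json"
```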

## Additional Info

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: kevinbogner <kevbogner@gmail.com>
Co-authored-by: navie <naviechan@gmail.com>
2023-02-07 08:33:23 +00:00
Paul Hauner
e062a7cf76 Broadcast address changes at Capella (#3919)
* Add first efforts at broadcast

* Tidy

* Move broadcast code to client

* Progress with broadcast impl

* Rename to address change

* Fix compile errors

* Use `while` loop

* Tidy

* Flip broadcast condition

* Switch to forgetting individual indices

* Always broadcast when the node starts

* Refactor into two functions

* Add testing

* Add another test

* Tidy, add more testing

* Tidy

* Add test, rename enum

* Rename enum again

* Tidy

* Break loop early

* Add V15 schema migration

* Bump schema version

* Progress with migration

* Update beacon_node/client/src/address_change_broadcast.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Fix typo in function name

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-07 17:13:49 +11:00
Michael Sproul
2073518f0f Remove unused u256_hex_be_opt (#3942) 2023-02-07 11:23:36 +11:00
kevinbogner
4d07e40501 Implement attestation_rewards API (per-validator reward) (#3822)
## Issue Addressed

#3661 

## Proposed Changes
`/eth/v1/beacon/rewards/attestations/{epoch}`

```json
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "ideal_rewards": [
        {
          "effective_balance": "1000000000",
          "head": "2500",
          "target": "5000",
          "source": "5000"
        }
      ],
      "total_rewards": [
        {
          "validator_index": "0",
          "head": "2000",
          "target": "2000",
          "source": "4000",
          "inclusion_delay": "2000"
        }
      ]
    }
  ]
}
```

The issue contains the implementation of three per-validator reward APIs:
- [`sync_committee_rewards`](https://github.com/sigp/lighthouse/pull/3790)
- `attestation_rewards`
- `block_rewards`.

This PR *only* implements the `attestation_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards
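As a usage sketch (illustrative; assumes a local node on the default port 5052, and assumes the spec's POST request body of validator indices, where an empty array selects all validators):

```bash
# Fetch attestation rewards for epoch 100, limited to validators 0 and 1.
curl -s -X POST "http://localhost:5052/eth/v1/beacon/rewards/attestations/100" \
  -H "Content-Type: application/json" \
  -d '["0", "1"]'
```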

## Additional Info
The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

---
- [x] `get_state`
- [x] Calculate *ideal rewards* with some logic from  `get_flag_index_deltas`
- [x] Calculate *actual rewards*  with some logic from `get_flag_index_deltas`
- [x] Code cleanup
- [x] Testing
2023-02-07 00:00:19 +00:00
Michael Sproul
c56706efae Unpin fixed-hash (#3917)
## Proposed Changes
Remove the `[patch]` for `fixed-hash`.

We pinned it years ago in #2710 to fix `arbitrary` support. Nowadays the 0.7 version of `fixed-hash` is only used by the `web3` crate and doesn't need `arbitrary`.

~~Blocked on #3916 but could be merged in the same Bors batch.~~
2023-02-06 04:18:03 +00:00
ethDreamer
5b398b1990 Don't Reject all Builder Bids After Capella (#3940)
* Fix bug in Builder API Post-Capella

* Fix Clippy Complaints
2023-02-06 13:09:13 +11:00
sean
e5896d9b71 re-order methods 2023-02-05 18:14:24 -05:00
sean
1315098c15 variable list from -> new 2023-02-05 17:56:03 -05:00
sean
38db8d7952 add back in 4844 tx consistency check during payload reconstruction 2023-02-05 17:31:20 -05:00
sean
f22aac1603 improve error handling 2023-02-05 17:26:36 -05:00
sean
90e25dc6cf from to new 2023-02-05 16:55:55 -05:00
sean
94a369b9df blob tx decoding 2023-02-05 16:39:15 -05:00
realbigsean
d9e83e6cec blob decoding 2023-02-03 17:06:33 -05:00
ethDreamer
90b6ae62e6 Use Local Payload if More Profitable than Builder (#3934)
* Use Local Payload if More Profitable than Builder

* Rename clone -> clone_from_ref

* Minimize Clones of GetPayloadResponse

* Cleanup & Fix Tests

* Added Tests for Payload Choice by Profit

* Fix Outdated Comments
2023-02-01 19:37:46 -06:00
Michael Sproul
4d0955cd0b Merge pull request #3933 from ethDreamer/capella
Merge `unstable` into `capella`
2023-02-01 09:25:49 +11:00
Mark Mackey
d842215a44 Merge branch 'upstream/unstable' into capella 2023-01-31 12:16:26 -06:00
ethDreamer
7b7595347d exchangeCapabilities & Capella Readiness Logging (#3918)
* Undo Passing Spec to Engine API

* Utilize engine_exchangeCapabilities

* Add Logging to Indicate Capella Readiness

* Add exchangeCapabilities to mock_execution_layer

* Send Nested Array for engine_exchangeCapabilities

* Use Mutex Instead of RwLock for EngineCapabilities

* Improve Locking to Avoid Deadlock

* Prettier logic for get_engine_capabilities

* Improve Comments

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/client/src/notifier.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/execution_layer/src/engine_api/http.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Addressed Michael's Comments

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-01-31 18:26:23 +01:00
realbigsean
6d2dff66a0 Merge pull request #3926 from divagant-martian/stream-timeout-blob-bug
send stream terminators
2023-01-27 19:05:04 +01:00
realbigsean
b7e20fb87a Update beacon_node/lighthouse_network/src/rpc/protocol.rs 2023-01-27 19:03:43 +01:00
Diva M
9976d3bbbc send stream terminators 2023-01-27 18:11:26 +01:00
realbigsean
c0bdc1d120 Merge pull request #3920 from realbigsean/fix-and-loosen-execution-block-decoding
Fix and loosen execution block decoding
2023-01-27 18:10:47 +01:00
realbigsean
37e7c1d5c7 keep verification of payloads pre 4844 2023-01-27 17:59:40 +01:00
realbigsean
2645249eb5 Merge branch 'eip4844' of https://github.com/sigp/lighthouse into fix-and-loosen-execution-block-decoding 2023-01-27 11:54:30 +01:00
realbigsean
ee25c21463 Merge pull request #3921 from realbigsean/cap-4844
Eip4844 capella rebase
2023-01-27 11:51:11 +01:00
realbigsean
2c12200a1a fix compile 2023-01-27 11:41:59 +01:00
realbigsean
dd512cd82a stub out tx root check, fix block hash calculation 2023-01-27 11:39:26 +01:00
Michael Sproul
0866b739d0 Clippy 1.67 (#3916)
## Proposed Changes

Clippy 1.67.0 put us on blast for the size of some of our errors, most of them written by me ( 👀 ). This PR shrinks the size of `BeaconChainError` by dropping some extraneous info and boxing an inner error which should only occur infrequently anyway.

For the `AttestationSlashInfo` and `BlockSlashInfo` I opted to ignore the lint as they are always used in a `Result<A, Info>` where `A` is a similar size. This means they don't bloat the size of the `Result`, so it's a bit annoying for Clippy to report this as an issue.

I also chose to ignore `clippy::uninlined-format-args` because I think the benefit-to-churn ratio is too low. E.g. sometimes we have long identifiers in `format!` args and IMO the non-inlined form is easier to read:

```rust
// I prefer this...
format!(
    "{} did {} to {}",
    REALLY_LONG_CONSTANT_NAME,
    ANOTHER_REALLY_LONG_CONSTANT_NAME,
    regular_long_identifier_name
);
  
// To this
format!("{REALLY_LONG_CONSTANT_NAME} did {ANOTHER_REALLY_LONG_CONSTANT_NAME} to {regular_long_identifier_name}");
```

I tried generating an automatic diff with `cargo clippy --fix` but it came out at:

```
250 files changed, 1209 insertions(+), 1469 deletions(-)
```

Which seems like a bad idea when we'd have to back-merge it to `capella` and `eip4844` 😱
2023-01-27 09:48:42 +00:00
realbigsean
7c8d97c06e remove unused import 2023-01-25 14:26:01 +01:00
Michael Sproul
17b6a6094f Update another test broken by the shuffling change 2023-01-25 14:23:35 +01:00
Michael Sproul
494a270279 Fix the new BLS to execution change test 2023-01-25 14:23:35 +01:00
Michael Sproul
550d63f3fe Update sync rewards API for abstract exec payload 2023-01-25 14:23:35 +01:00
GeemoCandama
f857811e5f light client optimistic update reprocessing (#3799)
Currently there is a race between receiving blocks and receiving light client optimistic updates (in unstable), which results in processing errors. This is a continuation of PR #3693 and seeks to progress on issue #3651

Add the `parent_root` to `ReprocessQueueMessage::BlockImported` so we can remove blocks from the queue when a block arrives that has the same parent root. We use the parent root as opposed to the block root because the `LightClientOptimisticUpdate` does not contain the `block_root`.

If `light_client_optimistic_update.attested_header.canonical_root() != head_block.message().parent_root()` then we queue the update; otherwise we process it immediately.
michaelsproul came up with this idea.
The code is heavily based on the attestation reprocessing.
I have not properly tested this to see if it works as intended.
2023-01-25 14:23:33 +01:00
naviechan
9b5c2eefd5 Implement sync_committee_rewards API (per-validator reward) (#3903)
[#3661](https://github.com/sigp/lighthouse/issues/3661)

`/eth/v1/beacon/rewards/sync_committee/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "validator_index": "0",
      "reward": "2000"
    }
  ]
}
```

The issue contains the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements the `sync_committee_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)
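As a usage sketch (illustrative; assumes a local node on the default port 5052, and assumes the spec's POST request body of validator indices):

```bash
# Fetch sync committee rewards for the head block, limited to validator 0.
curl -s -X POST "http://localhost:5052/eth/v1/beacon/rewards/sync_committee/head" \
  -H "Content-Type: application/json" \
  -d '["0"]'
```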

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: navie <naviechan@gmail.com>
Co-authored-by: kevinbogner <kevbogner@gmail.com>
2023-01-25 14:22:15 +01:00
ethDreamer
eb9da6c837 Use eth1_withdrawal_credentials in Test States (#3898)
* Use eth1_withdrawal_credential in Some Test States

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Increase validator sizes

* Pick next sync committee message

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-25 14:21:54 +01:00
Michael Sproul
a4cfe50ade Import BLS to execution changes before Capella (#3892)
* Import BLS to execution changes before Capella

* Test for BLS to execution change HTTP API

* Pack BLS to execution changes in LIFO order

* Remove unused var

* Clippy
2023-01-25 14:21:54 +01:00
antondlr
9f2baced0b fix multiarch docker builds (#3904)
## Issue Addressed

#3902 
Tested and confirmed working [here](https://github.com/antondlr/lighthouse/actions/runs/3970418322)

## Additional Info

buildx v0.10.0 added provenance attestations to images but they are packed in a way that's incompatible with `docker manifest`
https://github.com/docker/buildx/releases
2023-01-25 14:21:54 +01:00
Michael Sproul
c2f64f8216 Switch allocator to jemalloc (#3697)
## Proposed Changes

Another `tree-states` motivated PR, this adds `jemalloc` as the default allocator, with an option to use the system allocator by compiling with `FEATURES="" make`.

- [x] Metrics
- [x] Test on Windows
- [x] Test on macOS
- [x] Test with `musl`
- [x] Metrics dashboard on `lighthouse-metrics` (https://github.com/sigp/lighthouse-metrics/pull/37)


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-01-25 14:21:54 +01:00
Age Manning
528f7181bc Improve block delay metrics (#3894)
We recently ran a large-block experiment on the testnet and plan to do a further experiment on mainnet.

Although the metrics recovered from lighthouse nodes were quite useful, I think we could do with greater resolution in the block delay metrics and get some specific values for each block (currently these can be lost to large exponential histogram buckets). 

This PR increases the resolution of the block delay histogram buckets, but also introduces a new metric which records the last block delay. Depending on the polling resolution of the metric server we can lose some block delay information; however, the new metric will always give us a specific value, so we will not lose exact data to poor-resolution histogram buckets.
2023-01-25 14:21:53 +01:00
realbigsean
8e50d316de update antithesis dockerfile (#3883)
Resolves https://github.com/sigp/lighthouse/issues/3879

Co-authored-by: realbigsean <sean@sigmaprime.io>
2023-01-25 14:21:48 +01:00
aliask
63593ef711 Fix some dead links in markdown files (#3885)
## Issue Addressed

No issue has been raised for these broken links.

## Proposed Changes

Update links with the new URLs for the same document.

## Additional Info

~The link for the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb/) mailing list is also broken, but I can't find the correct link.~


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-25 14:21:16 +01:00
GeemoCandama
12bdde13fe add logging for starting request and receiving block (#3858)
## Issue Addressed

#3853 

## Proposed Changes

Added `INFO` level logs for requesting and receiving the unsigned block.

## Additional Info

Logging for successfully publishing the signed block is already there. And seemingly there is a log for when "We realize we are going to produce a block" in the `start_update_service`: `info!(log, "Block production service started");`. Is there anywhere else you'd like to see logging around this event?


Co-authored-by: GeemoCandama <104614073+GeemoCandama@users.noreply.github.com>
2023-01-25 14:21:16 +01:00
David Theodore
1b6d1a9ef9 add better err reporting UnableToOpenVotingKeystore (#3781)
## Issue Addressed

#3780 

## Proposed Changes

Add error reporting that notifies the node operator that the `voting_keystore_path` in their `validator_definitions.yml` file may be incorrect.

## Additional Info

There is more info in issue #3780 


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-25 14:21:16 +01:00
Mac L
47ade13ef7 Add CLI flag to specify the format of logs written to the logfile (#3839)
## Proposed Changes

Decouple the stdout and logfile formats by adding the `--logfile-format` CLI flag.
This behaves identically to the existing `--log-format` flag, but instead will only affect the logs written to the logfile.
The `--log-format` flag will no longer have any effect on the contents of the logfile.

## Additional Info
This avoids being a breaking change by causing `logfile-format` to default to the value of `--log-format` if it is not provided. 
This means that users who were previously relying on being able to use a JSON formatted logfile will be able to continue to use `--log-format JSON`. 

Users who want to use JSON on stdout and default logs in the logfile will need to pass the following flags: `--log-format JSON --logfile-format DEFAULT`
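For example (an illustrative invocation):

```bash
# JSON logs on stdout, default human-readable logs in the logfile.
lighthouse bn --log-format JSON --logfile-format DEFAULT
```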
2023-01-25 14:21:16 +01:00
Santiago Medina
bd7bd005ee Return HTTP 404 rather than 405 (#3836)
Issue #3112

Add `Filter::recover` to the GET chain to handle rejections specifically as 404 NOT FOUND

Making a request to `http://localhost:5052/not_real` now returns the following:

```
{
    "code": 404,
    "message": "NOT_FOUND",
    "stacktraces": []
}
```

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-25 14:21:07 +01:00
Madman600
6511d28f0b Update checkpoint-sync.md (#3831)
Remove infura checkpoint sync instructions.


Co-authored-by: Adam Patacchiola <aep600@gmail.com>
2023-01-25 14:19:48 +01:00
realbigsean
1dd9812f62 Merge pull request #3915 from realbigsean/devnetv4-builtin-config
built in devnetv4 config eip4844
2023-01-25 13:41:09 +01:00
realbigsean
86177312f8 update zip 2023-01-25 11:55:05 +01:00
realbigsean
32b0fb13d2 Merge pull request #3912 from realbigsean/sync-error-handling
Sync error handling
2023-01-25 11:46:59 +01:00
realbigsean
5e8d79891b merge conflict resolution 2023-01-25 11:10:44 +01:00
realbigsean
9fde813a7e Merge pull request #3913 from divagant-martian/reject-pre-fork-rpc-upgrades
only support 4844 rpc methods if on 4844
2023-01-25 10:56:56 +01:00
realbigsean
eabe5dc970 Merge pull request #3910 from realbigsean/block-wrapper-refactor
Block wrapper refactor
2023-01-25 10:56:30 +01:00
Paul Hauner
f615fb0885 Merge pull request #3914 from michaelsproul/capella-fix-tests
Fix the new BLS to execution change test
2023-01-25 10:20:12 +01:00
Michael Sproul
16bdb2771b Update another test broken by the shuffling change 2023-01-25 16:18:00 +11:00
Michael Sproul
e48487db01 Fix the new BLS to execution change test 2023-01-25 15:47:07 +11:00
Michael Sproul
79a20e8a5f Update sync rewards API for abstract exec payload 2023-01-25 15:46:47 +11:00
Michael Sproul
c76a1971cc Merge remote-tracking branch 'origin/unstable' into capella 2023-01-25 14:20:16 +11:00
Michael Sproul
e8d1dd4e7c Fix docs for oldest_block_slot (#3911)
## Proposed Changes

Update the docs to correct the description of `oldest_block_slot`. Credit to `laern` on Discord for noticing this.
2023-01-25 02:17:10 +00:00
GeemoCandama
a7351c00c0 light client optimistic update reprocessing (#3799)
## Issue Addressed
Currently there is a race between receiving blocks and receiving light client optimistic updates (in unstable), which results in processing errors. This is a continuation of PR #3693 and seeks to progress on issue #3651

## Proposed Changes

Add the `parent_root` to `ReprocessQueueMessage::BlockImported` so we can remove blocks from the queue when a block arrives that has the same parent root. We use the parent root as opposed to the block root because the `LightClientOptimisticUpdate` does not contain the `block_root`.

If `light_client_optimistic_update.attested_header.canonical_root() != head_block.message().parent_root()` then we queue the update; otherwise we process it immediately.

## Additional Info
michaelsproul came up with this idea.
The code is heavily based on the attestation reprocessing.
I have not properly tested this to see if it works as intended.
2023-01-24 22:17:50 +00:00
ethDreamer
3d4dd6af75 Use eth1_withdrawal_credentials in Test States (#3898)
* Use eth1_withdrawal_credential in Some Test States

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Increase validator sizes

* Pick next sync committee message

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-24 16:22:51 +01:00
realbigsean
d3240c1ffb fix common issue across blocks by range and blobs by range 2023-01-24 15:42:28 +01:00
realbigsean
18d4faf611 review updates 2023-01-24 15:30:29 +01:00
realbigsean
2225e6ac89 pass in data availability boundary to the get_blobs method 2023-01-24 14:35:07 +01:00
realbigsean
a4ea1761bb Update beacon_node/beacon_chain/src/beacon_chain.rs 2023-01-24 12:28:58 +01:00
realbigsean
5b4cd997d0 Update beacon_node/lighthouse_network/src/rpc/methods.rs 2023-01-24 12:20:40 +01:00
Diva M
2d2da92132 only support 4844 rpc methods if on 4844 2023-01-24 05:15:23 -05:00
realbigsean
b658cc7aaf simplify checking attester cache for block and blobs. use ResourceUnavailable according to the spec 2023-01-24 10:50:47 +01:00
naviechan
2802bc9a9c Implement sync_committee_rewards API (per-validator reward) (#3903)
## Issue Addressed

[#3661](https://github.com/sigp/lighthouse/issues/3661)

## Proposed Changes

`/eth/v1/beacon/rewards/sync_committee/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "validator_index": "0",
      "reward": "2000"
    }
  ]
}
```

The issue contains the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements the `sync_committee_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)

## Additional Info

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: navie <naviechan@gmail.com>
Co-authored-by: kevinbogner <kevbogner@gmail.com>
2023-01-24 02:06:42 +00:00
Emilia Hane
e14550425d Fix mismatched response bug 2023-01-23 13:23:04 +01:00
realbigsean
4a51f65ce2 move available block comment 2023-01-23 09:17:57 +01:00
realbigsean
75320ff8bc cleanup 2023-01-22 05:54:25 +01:00
Emilia Hane
81a754577d fixup! Improve error handling 2023-01-21 15:47:33 +01:00
Emilia Hane
f32f08eec0 Fix typo 2023-01-21 14:47:14 +01:00
Emilia Hane
5fc648217d fixup! Improve error handling 2023-01-21 14:46:24 +01:00
realbigsean
a83fd1afb4 remove unused imports 2023-01-21 04:52:36 -05:00
realbigsean
cbd09dc281 finish refactor 2023-01-21 04:48:25 -05:00
Michael Sproul
d8abf2fc41 Import BLS to execution changes before Capella (#3892)
* Import BLS to execution changes before Capella

* Test for BLS to execution change HTTP API

* Pack BLS to execution changes in LIFO order

* Remove unused var

* Clippy
2023-01-21 10:39:59 +11:00
Michael Sproul
bb0e99c097 Merge remote-tracking branch 'origin/unstable' into capella 2023-01-21 10:37:26 +11:00
realbigsean
eb9feed784 add new traits 2023-01-20 16:04:35 -05:00
antondlr
3e67fa3038 fix multiarch docker builds (#3904)
## Issue Addressed

#3902 
Tested and confirmed working [here](https://github.com/antondlr/lighthouse/actions/runs/3970418322)

## Additional Info

buildx v0.10.0 added provenance attestations to images but they are packed in a way that's incompatible with `docker manifest`
https://github.com/docker/buildx/releases
2023-01-20 20:26:32 +00:00
Emilia Hane
f7eb89ddd9 Improve error handling 2023-01-20 21:16:47 +01:00
realbigsean
8e57eef0ed return a BlobsUnavailable error when the block root is a pre-4844 block 2023-01-20 21:16:47 +01:00
realbigsean
c6479444c2 don't send errors when we *correctly* don't have blobs 2023-01-20 21:16:47 +01:00
realbigsean
e1ce4e5b78 make explicity BlobsUnavailable error and handle it directly 2023-01-20 21:16:47 +01:00
realbigsean
f7f64eb007 fix/consolidate some error handling 2023-01-20 21:16:47 +01:00
Emilia Hane
89cb58d17b Fix typo
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-01-20 21:16:47 +01:00
Emilia Hane
9cc25162e2 Send error message if eip4844 fork disabled
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-01-20 21:16:46 +01:00
Emilia Hane
654e59cbba Fix rename fn bug
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-01-20 21:16:46 +01:00
Emilia Hane
a00b355800 fixup! Less strict handling of faulty rpc req params and syntax improvement 2023-01-20 21:16:46 +01:00
Emilia Hane
b4ec4c1ccf Less strict handling of faulty rpc req params and syntax improvement 2023-01-20 21:16:46 +01:00
Emilia Hane
9445ac70d8 Check data availability boundary in rpc request 2023-01-20 21:16:46 +01:00
realbigsean
3cb8fb7973 block wrapper refactor initial commit 2023-01-20 11:50:16 -05:00
Michael Sproul
4deab888c9 Switch allocator to jemalloc (#3697)
## Proposed Changes

Another `tree-states` motivated PR, this adds `jemalloc` as the default allocator, with an option to use the system allocator by compiling with `FEATURES="" make`.

- [x] Metrics
- [x] Test on Windows
- [x] Test on macOS
- [x] Test with `musl`
- [x] Metrics dashboard on `lighthouse-metrics` (https://github.com/sigp/lighthouse-metrics/pull/37)


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-01-20 04:19:29 +00:00
Age Manning
f8a3b3b95a Improve block delay metrics (#3894)
We recently ran a large-block experiment on the testnet and plan to do a further experiment on mainnet.

Although the metrics recovered from lighthouse nodes were quite useful, I think we could do with greater resolution in the block delay metrics and get some specific values for each block (currently these can be lost to large exponential histogram buckets). 

This PR increases the resolution of the block delay histogram buckets, but also introduces a new metric which records the last block delay. Depending on the polling resolution of the metric server we can lose some block delay information; however, the new metric will always give us a specific value, so we will not lose exact data to poor-resolution histogram buckets.
2023-01-20 00:46:56 +00:00
realbigsean
208f531ae7 update antithesis dockerfile (#3883)
Resolves https://github.com/sigp/lighthouse/issues/3879


Co-authored-by: realbigsean <sean@sigmaprime.io>
2023-01-20 00:46:55 +00:00
realbigsean
e046657b4f Merge pull request #3897 from realbigsean/capella-back-merge
execution API related fixes after capella merge
2023-01-19 14:52:38 -05:00
realbigsean
8f8b7211d0 execution API related fixes 2023-01-19 09:32:08 -05:00
realbigsean
e179be1f0b Merge pull request #3876 from realbigsean/devnetv4
Devnet V4
2023-01-19 09:03:08 -05:00
realbigsean
acbbdd8b9e merge eip4844 2023-01-19 08:46:38 -05:00
Pawan Dhananjay
fd08a2cb0a Fix trusted setup in lcli::new_testnet 2023-01-19 18:16:04 +05:30
ethDreamer
26787412cd Update engine_api to Latest spec (#3893)
* Update engine_api to Latest spec

* Small Test Fix

* Fix Test Deserialization Issue
2023-01-19 22:42:17 +11:00
Pawan Dhananjay
f04486dc71 Update kzg library to use bytes only interface 2023-01-17 12:12:17 +05:30
aliask
480309fb96 Fix some dead links in markdown files (#3885)
## Issue Addressed

No issue has been raised for these broken links.

## Proposed Changes

Update links with the new URLs for the same document.

## Additional Info

~The link for the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb/) mailing list is also broken, but I can't find the correct link.~


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-17 05:13:49 +00:00
GeemoCandama
b4d9fc03ee add logging for starting request and receiving block (#3858)
## Issue Addressed

#3853 

## Proposed Changes

Added `INFO` level logs for requesting and receiving the unsigned block.

## Additional Info

Logging for successfully publishing the signed block is already there. And seemingly there is a log for when "We realize we are going to produce a block" in the `start_update_service`: `info!(log, "Block production service started");`. Is there anywhere else you'd like to see logging around this event?


Co-authored-by: GeemoCandama <104614073+GeemoCandama@users.noreply.github.com>
2023-01-17 05:13:48 +00:00
David Theodore
9a970ce3a2 add better err reporting UnableToOpenVotingKeystore (#3781)
## Issue Addressed

#3780 

## Proposed Changes

Add error reporting that notifies the node operator that the `voting_keystore_path` in their `validator_definitions.yml` file may be incorrect.

## Additional Info

There is more info in issue #3780 


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-17 05:13:47 +00:00
realbigsean
ddcd10b194 merge latest capella changes 2023-01-16 09:17:18 -05:00
Mac L
6ac1c5b439 Add CLI flag to specify the format of logs written to the logfile (#3839)
## Proposed Changes

Decouple the stdout and logfile formats by adding the `--logfile-format` CLI flag.
This behaves identically to the existing `--log-format` flag, but instead will only affect the logs written to the logfile.
The `--log-format` flag will no longer have any effect on the contents of the logfile.

## Additional Info
This avoids being a breaking change by causing `logfile-format` to default to the value of `--log-format` if it is not provided. 
This means that users who were previously relying on being able to use a JSON formatted logfile will be able to continue to use `--log-format JSON`. 

Users who want to use JSON on stdout and default logs in the logfile will need to pass the following flags: `--log-format JSON --logfile-format DEFAULT`
2023-01-16 03:42:10 +00:00
Santiago Medina
912ea2a5ca Return HTTP 404 rather than 405 (#3836)
## Issue Addressed

Issue #3112

## Proposed Changes

Add `Filter::recover` to the GET chain to handle rejections specifically as 404 NOT FOUND

## Additional Info

Making a request to `http://localhost:5052/not_real` now returns the following:

```
{
    "code": 404,
    "message": "NOT_FOUND",
    "stacktraces": []
}
```


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-16 03:42:09 +00:00
Madman600
4d9e137e6a Update checkpoint-sync.md (#3831)
Remove infura checkpoint sync instructions.


Co-authored-by: Adam Patacchiola <aep600@gmail.com>
2023-01-16 03:42:08 +00:00
ethDreamer
51088725fb CL-EL withdrawals harmonization using Gwei units (#3884) 2023-01-16 12:03:42 +11:00
realbigsean
5b77c5c0f2 Merge pull request #3881 from realbigsean/bump-ef-tests
bump ef-tests
2023-01-13 17:42:00 -05:00
realbigsean
302eaca344 intentionally skip LightClientHeader ssz static tests 2023-01-13 16:15:27 -05:00
realbigsean
48a8cbbc45 Merge pull request #3877 from michaelsproul/withdrawals-block-hash
Verify blockHash with withdrawals
2023-01-13 15:12:58 -05:00
realbigsean
35c9e2407b bump ef-tests 2023-01-13 15:11:46 -05:00
realbigsean
f47fc9ddf3 Merge pull request #3880 from ethDreamer/capella
Sign BlsToExecutionChange w/ GENESIS_FORK_VERSION
2023-01-13 15:11:11 -05:00
realbigsean
1319683736 Update gossip_methods.rs 2023-01-13 14:59:03 -05:00
Mark Mackey
05c1291d8a Don't Penalize Early bls_to_execution_change 2023-01-13 12:53:25 -06:00
Mark Mackey
d9dd9b43ee Sign BlsToExecutionChange w/ GENESIS_FORK_VERSION 2023-01-13 10:47:19 -06:00
realbigsean
e9affafb6b get rid of EL endpoint switching at forks 2023-01-13 10:51:45 -05:00
Michael Sproul
8e2931d73b Verify blockHash with withdrawals 2023-01-13 12:46:54 +11:00
realbigsean
d96d793bfb fix compilation issues 2023-01-12 14:17:14 -05:00
realbigsean
06f71e8cce merge capella 2023-01-12 12:51:09 -05:00
realbigsean
c705be202b run historical_summary ef tests for 4844 2023-01-12 11:15:33 -05:00
realbigsean
d0368b0459 Merge pull request #3874 from michaelsproul/beacon_chain_tests
Fix some beacon_chain tests
2023-01-12 10:59:04 -05:00
realbigsean
63da81de62 check for eip4844 unaccessed files 2023-01-12 08:50:02 -05:00
Michael Sproul
aa896decc1 Fix some beacon_chain tests 2023-01-12 19:13:01 +11:00
Michael Sproul
b2c2d31b69 Merge pull request #3871 from michaelsproul/capella-v3.4.0
Back-merge unstable to Capella
2023-01-12 16:44:00 +11:00
Michael Sproul
2af8110529 Merge remote-tracking branch 'origin/unstable' into capella
Fixing the conflicts involved patching up some of the `block_hash` verification,
the rest will be done as part of https://github.com/sigp/lighthouse/issues/3870
2023-01-12 16:22:00 +11:00
Michael Sproul
56e6b3557a Fix Arbitrary implementations (#3867)
* Fix Arbitrary implementations

* Remove remaining vestiges of arbitrary-fuzz

* Remove FIXME

* Clippy
2023-01-12 15:17:03 +11:00
ethDreamer
52c1055fdc Remove withdrawals-processing feature (#3864)
* Use spec to Determine Supported Engine APIs

* Remove `withdrawals-processing` feature

* Fixed Tests

* Missed Some Spots

* Fixed Another Test

* Stupid Clippy
2023-01-12 15:15:08 +11:00
realbigsean
f7f351784a get ef tests passing after capella rebase 2023-01-11 18:32:15 -05:00
realbigsean
438126f19a merge upstream, fix compile errors 2023-01-11 13:52:58 -05:00
Paul Hauner
38514c07f2 Release v3.4.0 (#3862)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

- [x] ~~Blocked on #3728, #3801~~
- [x] ~~Blocked on #3866~~
- [x] Requires additional testing
2023-01-11 03:27:08 +00:00
realbigsean
98b11bbd3f add historical summaries (#3865)
* add historical summaries

* fix tree hash caching, disable the sanity slots test with fake crypto

* add ssz static HistoricalSummary

* only store historical summaries after capella

* Teach `UpdatePattern` about Capella

* Tidy EF tests

* Clippy

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-01-11 12:40:21 +11:00
Michael Sproul
0c74cd4696 Update dependencies incl Tokio (#3866)
## Proposed Changes

Update all dependencies to new semver-compatible releases with `cargo update`. Importantly this patches a Tokio vuln: https://rustsec.org/advisories/RUSTSEC-2023-0001. I don't think we were affected by the vuln because it only applies to named pipes on Windows, but it's still good hygiene to patch.
2023-01-09 23:29:23 +00:00
realbigsean
680a5c7fc8 Merge pull request #3861 from paulhauner/ssz-transparent-4844
Allow leading skipped tuple fields
2023-01-09 09:50:00 -05:00
realbigsean
c85cd87c10 Web3 signer validator definitions reloading on any request (#3801)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3795


Co-authored-by: realbigsean <sean@sigmaprime.io>
2023-01-09 08:18:56 +00:00
Paul Hauner
830efdb5c2 Improve validator monitor experience for high validator counts (#3728)
## Issue Addressed

NA

## Proposed Changes

Myself and others (#3678) have observed  that when running with lots of validators (e.g., 1000s) the cardinality is too much for Prometheus. I've seen Prometheus instances just grind to a halt when we turn the validator monitor on for our testnet validators (we have 10,000s of Goerli validators). Additionally, the debug log volume can get very high with one log per validator, per attestation.

To address this, the `bn --validator-monitor-individual-tracking-threshold <INTEGER>` flag has been added to *disable* per-validator (i.e., non-aggregated) metrics/logging once the validator monitor exceeds the threshold of validators. The default value is `64`, which is a finger-to-the-wind value. I don't actually know the value at which Prometheus starts to become overwhelmed, but I've seen it work with ~64 validators and I've seen it *not* work with 1000s of validators. A default of `64` seems like it will result in a breaking change to users who are running millions of dollars worth of validators whilst resulting in a no-op for low-validator-count users. I'm open to changing this number, though.

Additionally, this PR starts collecting aggregated Prometheus metrics (e.g., total count of head hits across all validators), so that high-validator-count validators still have some interesting metrics. We already had logging for aggregated values, so nothing has been added there.

I've opted to make this a breaking change since it can be rather damaging to your Prometheus instance to accidentally enable the validator monitor with large numbers of validators. I've crashed a Prometheus instance myself and had a report from another user who's done the same thing.

## Additional Info

NA

## Breaking Changes Note

A new label has been added to the validator monitor Prometheus metrics: `total`. This label tracks the aggregated metrics of all validators in the validator monitor (as opposed to each validator being tracking individually using its pubkey as the label).

Additionally, a new flag has been added to the Beacon Node: `--validator-monitor-individual-tracking-threshold`. The default value is `64`, which means that when the validator monitor is tracking more than 64 validators then it will stop tracking per-validator metrics and only track the `all_validators` metric. It will also stop logging per-validator logs and only emit aggregated logs (the exception being that exit and slashing logs are always emitted).

These changes were introduced in #3728 to address issues with untenable Prometheus cardinality and log volume when using the validator monitor with high validator counts (e.g., 1000s of validators). Users with less than 65 validators will see no change in behavior (apart from the added `all_validators` metric). Users with more than 65 validators who wish to maintain the previous behavior can set something like `--validator-monitor-individual-tracking-threshold 999999`.
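For example (an illustrative invocation, using the opt-out value suggested above):

```bash
# Keep per-validator metrics/logs even with a very large validator count.
lighthouse bn --validator-monitor-individual-tracking-threshold 999999
```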
2023-01-09 08:18:55 +00:00
Pawan Dhananjay
ba410c3012 Embed trusted setup in network config (#3851)
* Load trusted setup in network config

* Fix trusted setup serialize and deserialize

* Load trusted setup from hardcoded preset instead of a file

* Truncate after deserialising trusted setup

* Fix beacon node script

* Remove hardcoded setup file

* Add length checks
2023-01-09 12:34:16 +05:30
Michael Sproul
168a7805c3 Add more Gnosis bootnodes (#3855)
## Proposed Changes

Add the latest long-running Gnosis chain bootnodes provided to us by the Gnosis team.
2023-01-09 05:12:31 +00:00
Paul Hauner
026b5afbb9 Allow leading skipped tuple fields 2023-01-09 15:56:17 +11:00
Michael Sproul
87c44697d0 Bump MSRV to 1.65 (#3860) 2023-01-09 14:40:09 +11:00
Michael Sproul
4bd2b777ec Verify execution block hashes during finalized sync (#3794)
## Issue Addressed

Recent discussions with other client devs about optimistic sync have revealed a conceptual issue with the optimisation implemented in #3738. In designing that feature I failed to consider that the execution node checks the `blockHash` of the execution payload before responding with `SYNCING`, and that omitting this check entirely results in a degradation of the full node's validation. A node omitting the `blockHash` checks could be tricked by a supermajority of validators into following an invalid chain, something which is ordinarily impossible.

## Proposed Changes

I've added verification of the `payload.block_hash` in Lighthouse. In case of failure we log a warning and fall back to verifying the payload with the execution client.

I've used our existing dependency on `ethers_core` for RLP support, and a new dependency on Parity's `triehash` crate for the Merkle patricia trie. Although the `triehash` crate is currently unmaintained it seems like our best option at the moment (it is also used by Reth, and requires vastly less boilerplate than Parity's generic `trie-root` library).

Block hash verification is pretty quick, about 500us per block on my machine (mainnet).

The optimistic finalized sync feature can be disabled using `--disable-optimistic-finalized-sync` which forces full verification with the EL.
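For example (an illustrative invocation):

```bash
# Opt out of optimistic finalized sync and force full payload
# verification with the execution layer.
lighthouse bn --disable-optimistic-finalized-sync
```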

## Additional Info

This PR also introduces a new dependency on our [`metastruct`](https://github.com/sigp/metastruct) library, which was perfectly suited to the RLP serialization method. There will likely be changes as `metastruct` grows, but I think this is a good way to start dogfooding it.

I took inspiration from some Parity and Reth code while writing this, and have preserved the relevant license headers on the files containing code that was copied and modified.
2023-01-09 03:11:59 +00:00
ethDreamer
11f4784ae6 Added bls_to_execution_changes to PersistedOpPool (#3857)
* Added bls_to_execution_changes to PersistedOpPool
2023-01-09 12:38:02 +11:00
ethDreamer
cb94f639b0 Isolate withdrawals-processing Feature (#3854) 2023-01-09 11:05:28 +11:00
ethDreamer
6b72f45cad Merge pull request #3845 from realbigsean/capella-cleanup
Capella cleanup
2023-01-06 13:26:41 -06:00
Age Manning
1d9a2022b4 Upgrade to libp2p v0.50.0 (#3764)
I've needed to do this work in order to do some episub testing. 

This version of libp2p has not yet been released, so this is left as a draft for when we wish to update.

Co-authored-by: Diva M <divma@protonmail.com>
2023-01-06 15:59:33 +00:00
realbigsean
33ff84743d Merge pull request #3848 from emhane/empty_blobs_db
Don't write empty blobs to db
2023-01-06 10:12:02 -05:00
Emilia Hane
c44738c77b Undo response modification in commit 597363d2f 2023-01-06 12:42:21 +01:00
Emilia Hane
74bca46fc2 Fix bug of early termination of batch send 2023-01-06 11:45:13 +01:00
Emilia Hane
caad492d48 Avoid loading unecessary data 2023-01-06 11:17:26 +01:00
Emilia Hane
88dde8b3bb Remove accidentally added file 2023-01-06 08:17:39 +01:00
ethDreamer
a7c79ab5a4 Merge pull request #3847 from ethDreamer/capella
Update Execution Layer Tests for Capella
2023-01-05 15:50:26 -06:00
Mark Mackey
2ac609b64e Fixing Moar Failing Tests 2023-01-05 13:00:44 -06:00
Age Manning
4e5e7ee1fc Restructure code for libp2p upgrade (#3850)
Our custom RPC implementation is lagging behind the libp2p v0.50 version.

We are going to need to change a bunch of function names, and it would be nice to have consistent ordering of function names inside the handlers.

This is a precursor to the libp2p upgrade to minimize merge conflicts in function ordering.
2023-01-05 17:18:24 +00:00
Emilia Hane
e9e9fc865b Cargo clean 2023-01-05 16:28:59 +01:00
Emilia Hane
597363d2f9 Don't send empty blobs sidecar for blobs by range request 2023-01-05 16:28:59 +01:00
Emilia Hane
01ac7ad23c Return empty blobs sidecar when no kzg commitments in block 2023-01-05 16:28:58 +01:00
Emilia Hane
92c4e99305 Don't write empty blobs to db 2023-01-05 16:28:58 +01:00
Mark Mackey
8711db2f3b Fix EF Tests 2023-01-04 15:14:43 -06:00
realbigsean
8a77b05c19 Merge pull request #3830 from jimmygchen/blobs-beacon-api
Add Beacon API endpoint (/lighthouse) to download blobs by block ID
2023-01-04 09:04:32 -05:00
Jimmy Chen
11e3902dc8 Add support for blobs_sidecar ssz parsing 2023-01-04 18:51:28 +11:00
Mark Mackey
933772dd06 Fixed Operation Pool Tests 2023-01-03 18:40:35 -06:00
Mark Mackey
be232c4587 Update Execution Layer Tests for Capella 2023-01-03 16:58:15 -06:00
realbigsean
e02fcb30ab dont verify the signature of the genesis block in backfill sync (#3846) 2023-01-03 17:23:01 -05:00
Jimmy Chen
7c2a0aeb58 Merge branch 'blobs-beacon-api' of github.com:jimmygchen/lighthouse into blobs-beacon-api 2023-01-04 01:58:08 +11:00
Jimmy Chen
355cab8704 Simplify blob_sidecar query and remove override for Head and Slot 2023-01-04 01:57:15 +11:00
realbigsean
786d9834f5 sym link clang in cross dockerfile 2023-01-03 09:25:02 -05:00
Jimmy Chen
11736b68d3 Return beacon_chain_error if blob query from BeaconStore returns an error 2023-01-04 01:23:31 +11:00
realbigsean
4353c49855 Update beacon_node/execution_layer/src/engine_api/json_structures.rs 2023-01-03 08:55:19 -05:00
realbigsean
d8f7277beb cleanup 2022-12-30 11:00:14 -05:00
realbigsean
c4a41e8f55 only use withdrawals-processing in docker build on capella 2022-12-29 11:07:22 -05:00
Pawan Dhananjay
2734f3f9db Fix devnet3 genesis.ssz.zip 2022-12-29 20:34:25 +05:30
realbigsean
c28c38b044 Merge pull request #3844 from pawanjay176/minor-fixes
Devnet 3 minor fixes
2022-12-29 09:05:00 -05:00
Pawan Dhananjay
e63cf80040 Update c-kzg version 2022-12-29 16:42:48 +05:30
Pawan Dhananjay
c47a0ade22 Fix config values for devnet 3; add more bootnodes 2022-12-29 16:42:26 +05:30
Michael Sproul
222a514506 Merge pull request #3842 from ethDreamer/capella
Merge with `unstable` and fix clippy
2022-12-29 09:35:33 +11:00
Mark Mackey
986ae4360a Fix clippy complaints 2022-12-28 14:47:16 -06:00
Mark Mackey
c188cde034 merge upstream/unstable 2022-12-28 14:43:25 -06:00
sean
31ba051879 add clang to dockerfile 2022-12-28 18:34:28 +00:00
sean
e94eb1d2d6 Merge branch 'capella' of https://github.com/sigp/lighthouse into eip4844 2022-12-28 18:29:45 +00:00
realbigsean
d34706b044 Merge pull request #3840 from realbigsean/blob-renamings-and-cleanup
Blob cleanup (renames, remove , wrap BlockWrapper enum)
2022-12-28 13:28:36 -05:00
sean
40c6daa34b add pawan's suggestsion 2022-12-28 18:27:21 +00:00
realbigsean
6e73965450 Merge pull request #3841 from realbigsean/docker-eip4844
create automatic docker build for eip4844
2022-12-28 11:12:37 -05:00
realbigsean
6d85930520 create automatic docker build for eip4844 2022-12-28 11:11:17 -05:00
realbigsean
019467840f Merge pull request #3833 from realbigsean/empty-blobs
Empty blobs validation
2022-12-28 10:32:06 -05:00
realbigsean
8a70d80a2f Revert "Revert "renames, remove , wrap BlockWrapper enum to make descontruction private""
This reverts commit 1931a442dc.
2022-12-28 10:31:18 -05:00
realbigsean
1931a442dc Revert "renames, remove , wrap BlockWrapper enum to make descontruction private"
This reverts commit 5b3b34a9d7.
2022-12-28 10:30:36 -05:00
realbigsean
5b3b34a9d7 renames, remove , wrap BlockWrapper enum to make descontruction private 2022-12-28 10:28:45 -05:00
realbigsean
502b5e5bf0 unused error lint 2022-12-28 09:32:29 -05:00
ethDreamer
890cd6d810 Merge pull request #3838 from ethDreamer/capella
Fixed Some Tests
2022-12-27 17:35:37 -06:00
Mark Mackey
c922566fbc Fixed Some Tests 2022-12-27 15:59:34 -06:00
Mark Mackey
96da8b9383 Feature Guard V2 Engine API Methods 2022-12-27 15:55:43 -06:00
Jimmy Chen
1b64cbadba Merge remote-tracking branch 'origin/eip4844' into blobs-beacon-api 2022-12-24 00:41:45 +00:00
Diva M
aeb243fe61 fmt 2022-12-23 17:44:50 -05:00
Diva M
6bf439befd Merge branch 'eip4844' into empty-blobs 2022-12-23 17:38:59 -05:00
Divma
240854750c cleanup: remove unused imports, unusued fields (#3834) 2022-12-23 17:16:10 -05:00
realbigsean
dcd5e40fa0 Merge pull request #3832 from divagant-martian/4844-backfill-fixes
Remove sync validation no longer valid + Fix bakcfill batch sizes
2022-12-23 13:43:44 -05:00
realbigsean
adf5f462d5 fix blob validation for empty blobs when using 2022-12-23 12:59:04 -05:00
realbigsean
1dc0759f57 impl hash correctly for the blob wrapper 2022-12-23 12:53:59 -05:00
realbigsean
d09523802b impl hash correctly for the blob wrapper 2022-12-23 12:52:26 -05:00
realbigsean
5e11edc612 fix blob validation for empty blobs 2022-12-23 12:47:38 -05:00
Diva M
24087f104d add the batch type to the Batch's KV 2022-12-23 10:49:46 -05:00
Diva M
901764b8f0 backfill batches need to be of just one epoch 2022-12-23 10:32:59 -05:00
realbigsean
5db0a88d4f fix compilation errors from merge 2022-12-23 10:27:01 -05:00
realbigsean
f45d117e73 merge with capella 2022-12-23 10:21:18 -05:00
realbigsean
4d50fa36bc Merge pull request #3829 from divagant-martian/handle-no-blob-range-response
Handle peers sending no blob when the blob is empty in range responses
2022-12-23 10:15:30 -05:00
Diva M
66f9aa922d clean up and improvements 2022-12-23 09:52:10 -05:00
Jimmy Chen
f6d5e8fea3 Rename variable only 2022-12-23 16:14:53 +11:00
Jimmy Chen
3b9041047a Fix typoe and add blobs endpoint to eth2 lib. 2022-12-23 15:28:08 +11:00
Jimmy Chen
bf7f709b51 Add missing route 2022-12-23 14:52:03 +11:00
Jimmy Chen
0155c94f86 Fix code formatting 2022-12-23 14:47:15 +11:00
Jimmy Chen
70d6e6705e Add Beacon API endpoint to download blobs by block ID 2022-12-23 12:42:00 +11:00
Diva M
3643f5cc19 spelling 2022-12-22 17:47:36 -05:00
Diva M
48ff56d9cb spelling 2022-12-22 17:38:55 -05:00
Diva M
e24f6c93d9 fix ctrl c'd comment 2022-12-22 17:38:16 -05:00
Diva M
fbc147e273 remove unused entry struct 2022-12-22 17:34:01 -05:00
Diva M
847f0de0ea change base 2022-12-22 17:32:33 -05:00
Diva M
cd6655dba9 handle no blobs from peers instead of empty blobs in range requests 2022-12-22 17:30:04 -05:00
realbigsean
61763790d5 Merge pull request #3825 from jimmygchen/small-fixes
Various small fixes to 4844 branch
2022-12-22 17:12:09 -05:00
realbigsean
332db29bf9 Merge pull request #3828 from realbigsean/update-eip4844-testnet-info
Update eip4844 testnet info
2022-12-22 14:20:24 -05:00
realbigsean
12402b309d fix geth binary path 2022-12-22 14:08:38 -05:00
realbigsean
d504d51dd9 merge with upstream add context bytes to error log 2022-12-22 14:06:28 -05:00
realbigsean
adbecb80eb config updates 2022-12-22 10:26:09 -05:00
realbigsean
cb28201f5b update built in 4844 testnet 2022-12-22 08:54:24 -05:00
realbigsean
33d01a7911 miscellaneous fixes on syncing, rpc and responding to peer's sync related requests (#3827)
- there was a bug in responding to blob range requests where we would incorrectly label the first slot of an epoch as non-skipped when it was actually skipped. this bug did not exist in the code for responding to block range requests because the logic error was mitigated by defensive coding elsewhere
- there was a bug where a block received during range sync without a corresponding blob (and vice versa) was incorrectly interpreted as a stream termination
- RPC size limit fixes.
- Our blob cache was dead locking so I removed use of it for now.
- Because of our change in finalized sync batch size from 2 to 1 and our transition to using exact epoch boundaries for batches (rather than one slot past the epoch boundary), we need finalized sync to reach 2 epochs + 1 slot past our peer's finalized slot in order to finalize the chain locally.
- use fork context bytes in rpc methods on both the server and client side
2022-12-21 15:50:51 -05:00
realbigsean
2f4c44fd88 remove my binary path for geth 2022-12-21 14:04:14 -05:00
realbigsean
ff772311fa add context bytes to blob messages, fix rpc limits, sync past finalized checkpoint during finalized sync so we can advance our own finalization, fix stream termination bug in blobs by range 2022-12-21 13:56:52 -05:00
ethDreamer
bfce794777 Merge pull request #3826 from ethDreamer/capella
Fixed spec serialization bug
2022-12-21 11:43:10 -06:00
Mark Mackey
3d253abadc Fixed spec serialization bug 2022-12-21 11:41:14 -06:00
Jimmy Chen
f7bb458c5e Fix incorrect logging 2022-12-22 02:01:11 +11:00
Jimmy Chen
ccfd092845 Fix blob request logging and incorrect enum type 2022-12-22 00:22:37 +11:00
Jimmy Chen
14aa87aff3 Fix code comment 2022-12-22 00:19:38 +11:00
Jimmy Chen
a6b771f265 Add more logging to Error::MaxDistanceExceeded 2022-12-22 00:19:22 +11:00
realbigsean
a67fa516c7 don't expect context bytes for blob messages 2022-12-20 19:32:54 -05:00
realbigsean
cc420caaa5 Merge pull request #3823 from jimmygchen/make-bootnode-binary-var
Use bootnode binary variable in testnet scripts
2022-12-20 19:22:46 -05:00
realbigsean
9c46a1cb21 fix rate limits, and a couple other bugs 2022-12-20 18:56:07 -05:00
Jimmy Chen
c76e371559 Add missing source 2022-12-21 00:29:15 +11:00
Jimmy Chen
db29cb08a6 Add bootnode binary variable in testnet scripts 2022-12-21 00:18:05 +11:00
ethDreamer
bf468692f7 Merge pull request #3821 from ethDreamer/capella
Removed `withdrawals` feature flag
2022-12-19 19:36:05 -06:00
ethDreamer
b224ed8151 Update consensus/state_processing/src/upgrade/eip4844.rs
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2022-12-19 19:35:17 -06:00
ethDreamer
0c22d69e15 Update consensus/state_processing/src/upgrade/eip4844.rs
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2022-12-19 19:35:08 -06:00
Michael Sproul
59a7a4703c Various CI fixes (#3813)
## Issue Addressed

Closes #3812
Closes #3750
Closes #3705
2022-12-20 01:34:52 +00:00
John Adler
53aad18da3 docs: remove mention of phases in voluntary exits (#3776)
The notion of "phases" doesn't exist anymore in the Ethereum roadmap. Also fix dead link to roadmap.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-12-20 01:34:51 +00:00
realbigsean
7d5db8015d correctly respond without skips on first range response 2022-12-19 19:07:21 -05:00
Mark Mackey
b75ca74222 Removed withdrawals feature flag 2022-12-19 15:38:46 -06:00
Divma
51e588bdf9 Merge pull request #3820 from realbigsean/sync-fixes
Handle ResourceUnavailable errors and other rpc/sync fixes
2022-12-19 12:38:33 -05:00
realbigsean
3ab0f46077 Update beacon_node/http_api/src/publish_blocks.rs
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
2022-12-19 12:27:31 -05:00
realbigsean
eddfb50c58 Revert "Revert "remove json snooper from local testnet scripts""
This reverts commit ba1cabc0c9.
2022-12-19 11:39:54 -05:00
realbigsean
5de4f5b8d0 handle parent blob request edge cases correctly. fix data availability boundary check 2022-12-19 11:39:09 -05:00
ethDreamer
ab11f8c71f Merge pull request #3817 from ethDreamer/capella
Make engine_getPayloadV2 accept local block value
2022-12-16 16:05:37 -06:00
Mark Mackey
3a08c7634e Make engine_getPayloadV2 accept local block value 2022-12-16 15:44:55 -06:00
realbigsean
22ed36bc6a fix is_empty check 2022-12-16 15:16:17 -05:00
realbigsean
ba1cabc0c9 Revert "remove json snooper from local testnet scripts"
This reverts commit 60d70ca501.
2022-12-16 14:52:37 -05:00
realbigsean
4570ccd233 Merge pull request #3816 from realbigsean/advertise-blobs-supported-protocols
Add blob rpc protocols to `protocol_info`
2022-12-16 14:38:06 -05:00
realbigsean
0349b104bf add blob rpc protocols to 2022-12-16 14:28:14 -05:00
Divma
ffbf70e2d9 Clippy lints for rust 1.66 (#3810)
## Issue Addressed
Fixes the new clippy lints for rust 1.66

## Proposed Changes

Most of the changes come from:
- [unnecessary_cast](https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_cast)
- [iter_kv_map](https://rust-lang.github.io/rust-clippy/master/index.html#iter_kv_map)
- [needless_borrow](https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow)

## Additional Info

na
2022-12-16 04:04:00 +00:00
Michael Sproul
558367ab8c Bounded withdrawals and spec v1.3.0-alpha.2 (#3802) 2022-12-16 09:20:45 +11:00
ethDreamer
37f735058a Merge pull request #3809 from ethDreamer/capella
Merge branch `unstable` into `capella`
2022-12-15 12:25:46 -06:00
Mark Mackey
3e90fb8cae Merge branch 'unstable' into capella 2022-12-15 12:20:03 -06:00
realbigsean
1644978cdb fix compilation 2022-12-15 10:26:10 -05:00
realbigsean
d893706e0e merge with capella 2022-12-15 09:33:18 -05:00
Michael Sproul
2c7ebc7278 Enable withdrawals features in Capella docker images (#3805) 2022-12-15 12:25:45 +11:00
Michael Sproul
d48460782b Publish capella images on push (#3803) 2022-12-15 11:42:35 +11:00
Divma
63c74b37f4 send error answering bbrange requests when an error occurrs (#3800)
## Issue Addressed

While testing withdrawals with @ethDreamer we noticed Lighthouse sends empty batches when an error occurs. As the LH peer receiving this, we would consider it a low-tolerance action, because the peer is claiming the batch is correct yet empty.

## Proposed Changes
If any kind of error occurs, send an error response instead.

## Additional Info
Right now we don't handle such a thing as a partial batch with an error: if an error is received, the whole batch is discarded. Because of this it makes little sense to send partial batches that end with an error, so the proposed solution is better than sending empty batches.
2022-12-15 00:16:38 +00:00
Pawan Dhananjay
52a06231c8 Add support for compile time FIELD_ELEMENTS_PER_BLOB 2022-12-14 21:25:47 +05:30
Michael Sproul
f3e8ca852e Fix Clippy 2022-12-14 14:04:13 +11:00
Michael Sproul
991e4094f8 Merge remote-tracking branch 'origin/unstable' into capella-update 2022-12-14 13:00:41 +11:00
Michael Sproul
023674ab3b Update Gnosis chain bootnodes (#3793)
## Proposed Changes

Update the Gnosis chain bootnodes. The current list of Gnosis bootnodes were abandoned at some point before the Gnosis merge and are now failing to bootstrap peers. There's a workaround list of bootnodes here: https://docs.gnosischain.com/updates/20221208-temporary-bootnodes

The list from this PR represents the long-term bootnodes run by the Gnosis team. We will also try to set up SigP bootnodes for Gnosis chain at some point.
2022-12-14 01:20:02 +00:00
Michael Sproul
63d3dd27fc Batch API for address changes (#3798) 2022-12-14 12:01:33 +11:00
Michael Sproul
75dd8780e0 Use JsonPayload for payload reconstruction (#3797) 2022-12-14 11:52:46 +11:00
ethDreamer
07d6ef749a Fixed Payload Reconstruction Bug (#3796) 2022-12-14 11:49:30 +11:00
ethDreamer
b1c33361ea Fixed Clippy Complaints & Some Failing Tests (#3791)
* Fixed Clippy Complaints & Some Failing Tests
* Update Dockerfile to Rust-1.65
* EF test file renamed
* Touch up comments based on feedback
2022-12-13 10:50:24 -06:00
Michael Sproul
775d222299 Enable proposer boost re-orging (#2860)
## Proposed Changes

With proposer boosting implemented (#2822) we have an opportunity to re-org out late blocks.

This PR adds three flags to the BN to control this behaviour:

* `--disable-proposer-reorgs`: turn aggressive re-orging off (it's on by default).
* `--proposer-reorg-threshold N`: attempt to orphan blocks with less than N% of the committee vote. If this parameter isn't set then N defaults to 20% when the feature is enabled.
* `--proposer-reorg-epochs-since-finalization N`: only attempt to re-org late blocks when the number of epochs since finalization is less than or equal to N. The default is 2 epochs, meaning re-orgs will only be attempted when the chain is finalizing optimally.

For safety Lighthouse will only attempt a re-org under very specific conditions:

1. The block being proposed is 1 slot after the canonical head, and the canonical head is 1 slot after its parent. I.e. at slot `n + 1`, rather than building on the block from slot `n`, we build on the block from slot `n - 1`.
2. The current canonical head received less than N% of the committee vote. N should be set depending on the proposer boost fraction itself, the fraction of the network that is believed to be applying it, and the size of the largest entity that could be hoarding votes.
3. The current canonical head arrived after the attestation deadline from our perspective. This condition was only added to support suppression of forkchoiceUpdated messages, but makes intuitive sense.
4. The block is being proposed in the first 2 seconds of the slot. This gives it time to propagate and receive the proposer boost.
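
These conditions combine into a single boolean check, roughly like this (all names and types here are illustrative, not Lighthouse's actual fork choice code):

```rust
struct ReorgParams {
    /// Re-org threshold N, as a percentage of the committee weight (default 20).
    threshold_pct: u64,
    /// Maximum epochs since finalization for re-orgs to be attempted (default 2).
    max_epochs_since_finalization: u64,
}

#[allow(clippy::too_many_arguments)]
fn should_attempt_reorg(
    p: &ReorgParams,
    proposal_slot: u64,
    head_slot: u64,
    parent_slot: u64,
    head_weight: u64,
    committee_weight: u64,
    epochs_since_finalization: u64,
    head_arrived_late: bool,
    seconds_into_slot: u64,
) -> bool {
    // 1. Single-slot re-org only: propose at `n + 1`, orphaning the block at `n`.
    proposal_slot == head_slot + 1
        && head_slot == parent_slot + 1
        // 2. The head is weak: it received less than N% of the committee vote.
        && head_weight * 100 < committee_weight * p.threshold_pct
        // 3. The head arrived after the attestation deadline.
        && head_arrived_late
        // The chain must be finalizing optimally.
        && epochs_since_finalization <= p.max_epochs_since_finalization
        // 4. Propose in the first 2 seconds so our block can earn proposer boost.
        && seconds_into_slot < 2
}
```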


## Additional Info

For the initial idea and background, see: https://github.com/ethereum/consensus-specs/pull/2353#issuecomment-950238004

There is also a specification for this feature here: https://github.com/ethereum/consensus-specs/pull/3034

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
2022-12-13 09:57:26 +00:00
Pawan Dhananjay
2e89a719b0 Fix auth port 2022-12-13 15:14:07 +05:30
Paul Hauner
6f79263a21 Make all validator monitor logs INFO (#3727)
## Issue Addressed

NA

## Proposed Changes

This is a *potentially* contentious change, but I find it annoying that the validator monitor logs `WARN` and `ERRO` for imperfect attestations. Perfect attestation performance is unachievable (don't believe those photo-shopped beauty magazines!) since missed and poorly-packed blocks by other validators will reduce your performance.

When the validator monitor is on with tens of validators or more, I find the logs are washed out with `ERRO`s that are not worth investigating. I suspect that users who really want to know if validators are missing attestations can do so by matching the content of the log, rather than the log level.

I'm open to feedback about this, especially from anyone who is relying on the current log levels.

## Additional Info

NA

## Breaking Changes Notes

The validator monitor will no longer emit `WARN` and `ERRO` logs for sub-optimal attestation performance. The logs will now be emitted at `INFO` level. This change was introduced to avoid cluttering the `WARN` and `ERRO` logs with alerts that are frequently triggered by the actions of other network participants (e.g., a missed block) and require no action from the user.
2022-12-13 06:24:52 +00:00
GeemoCandama
1b28ef8a8d Adding light_client gossip topics (#3693)
## Issue Addressed
Implementing the light_client gossip topics, but I'm not there yet.

Partially addresses #3651.

## Proposed Changes
Add light client gossip topics.
I'm going to implement the light_client_finality_update and light_client_optimistic_update gossip topics. Currently I've attempted the former and I'm seeking feedback.

## Additional Info
I've only implemented the light_client_finality_update topic because I wanted to make sure I was on the correct path. Also checking that the gossiped LightClientFinalityUpdate is the same as the locally constructed one is not implemented because caching the updates will make this much easier. Could someone give me some feedback on this please? 


Co-authored-by: GeemoCandama <104614073+GeemoCandama@users.noreply.github.com>
2022-12-13 06:24:51 +00:00
Michael Sproul
173a0abab4 Fix Withdrawal serialisation and check address change fork (#3789)
* Disallow address changes before Capella

* Quote u64s in Withdrawal serialisation
2022-12-13 17:03:21 +11:00
realbigsean
e8322c8a04 Merge pull request #3792 from jimmygchen/eip4844
Fix local testnet for MacOS
2022-12-12 17:47:06 -05:00
sean
2a7a1b31dc Merge branch 'capella' of https://github.com/sigp/lighthouse into eip4844 2022-12-12 22:41:54 +00:00
Jimmy Chen
704cf57de4 add gnu sed & gnu grep to testnet job 2022-12-13 00:14:32 +11:00
Justin Traglia
f7a54afde5 Fix some capella nits (#3782) 2022-12-12 11:40:44 +11:00
Paul Hauner
c973bfc90c Reduce log severity for late and unrevealed blocks (#3775)
## Issue Addressed

NA

## Proposed Changes

In #3725 I introduced a `CRIT` log for unrevealed payloads, against @michaelsproul's [advice](https://github.com/sigp/lighthouse/pull/3725#discussion_r1034142113). After being woken up in the middle of the night by a block that was not revealed to the BN but *was* revealed to the network, I have capitulated. This PR implements @michaelsproul's suggestion and reduces the severity to `ERRO`.

Additionally, I have dropped a `CRIT` to an `ERRO` for when a block is published late. The block in question was indeed published late on the network, however now that we have builders that can slow down block production I don't think the error is "actionable" enough to warrant a `CRIT` for the user.

## Additional Info

NA
2022-12-10 00:45:18 +00:00
realbigsean
9806f54f2d update trusted setup and eth1 balance allocations in testnet genesis 2022-12-09 15:21:10 -05:00
realbigsean
d4004ee050 update MaxBlobsPerBlock to 4 2022-12-09 13:50:10 -05:00
Mac L
979b73c9b6 Add API endpoint to get VC graffiti (#3779)
## Issue Addressed

#3766

## Proposed Changes

Adds an endpoint to get the graffiti that will be used for the next block proposal for each validator.

## Usage
```bash
curl -H "Authorization: Bearer api-token" http://localhost:9095/lighthouse/ui/graffiti | jq
```

```json
{
  "data": {
    "0x81283b7a20e1ca460ebd9bbd77005d557370cabb1f9a44f530c4c4c66230f675f8df8b4c2818851aa7d77a80ca5a4a5e": "mr f was here",
    "0xa3a32b0f8b4ddb83f1a0a853d81dd725dfe577d4f4c3db8ece52ce2b026eca84815c1a7e8e92a4de3d755733bf7e4a9b": "mr v was here",
    "0x872c61b4a7f8510ec809e5b023f5fdda2105d024c470ddbbeca4bc74e8280af0d178d749853e8f6a841083ac1b4db98f": null
  }
}
```

## Additional Info

This will only return graffiti that the validator client knows about.
That is from these 3 sources:
1. Graffiti File
2. validator_definitions.yml
3. The `--graffiti` flag on the VC

If the graffiti is set on the BN, it will not be returned. This may warrant an additional endpoint on the BN side which can be used in the event the endpoint returns `null`.
2022-12-09 09:20:13 +00:00
Mac L
80dd615fff Update book with missing Lighthouse endpoints (#3769)
## Proposed Changes


Adds docs for the following endpoints:
- `/lighthouse/analysis/attestation_performance`
- `/lighthouse/analysis/block_packing_efficiency`
2022-12-09 09:20:10 +00:00
Mac L
8cb9b5e126 Expose certain validator_monitor metrics to the HTTP API (#3760)
## Issue Addressed

#3724 

## Proposed Changes

Exposes certain `validator_monitor` metrics as an endpoint on the HTTP API. Will only return metrics for validators which are actively being monitored.

### Usage

```bash
curl -X GET "http://localhost:5052/lighthouse/ui/validator_metrics" -H "accept: application/json" | jq
```

```json
{
  "data": {
    "validators": {
      "12345": {
        "attestation_hits": 10,
        "attestation_misses": 0,
        "attestation_hit_percentage": 100,
        "attestation_head_hits": 10,
        "attestation_head_misses": 0,
        "attestation_head_hit_percentage": 100,
        "attestation_target_hits": 5,
        "attestation_target_misses": 5,
        "attestation_target_hit_percentage": 50 
      }
    }
  }
}
```

## Additional Info

Based on #3756 which should be merged first.
2022-12-09 06:39:19 +00:00
realbigsean
60d70ca501 remove json snooper from local testnet scripts 2022-12-08 13:15:04 -05:00
realbigsean
d59a1c63fb small cleanup 2022-12-08 09:18:56 -05:00
realbigsean
254cad369e Merge pull request #3751 from realbigsean/eip4844-devnet-v3
Eip4844 devnet v3
2022-12-08 09:04:42 -05:00
realbigsean
be8c6349dc Merge pull request #27 from realbigsean/sean-interop-4844-testing
P2P fixes, local post merge testnet, query different EL endpoints at fork boundaries
2022-12-08 08:50:28 -05:00
realbigsean
ef9579602f geth binary location update 2022-12-08 08:49:39 -05:00
realbigsean
715002a615 use is synced for notifier again 2022-12-08 08:42:27 -05:00
realbigsean
14fa1e527f remove unused log and fix EL config serde 2022-12-08 08:32:59 -05:00
realbigsean
8c95ab07a3 remove fCU v3 query 2022-12-08 08:21:39 -05:00
realbigsean
658e9d9bba saturating sub epoch for blob boundary 2022-12-07 15:40:51 -05:00
realbigsean
5a42f6b067 range block or block+blob requests 2022-12-07 15:35:46 -05:00
realbigsean
a0d4aecf30 requests block + blob always post eip4844 2022-12-07 15:30:08 -05:00
realbigsean
b616c0a056 reset genesis.json fork times 2022-12-07 14:04:46 -05:00
realbigsean
6d4fb41b84 fix blob slot validation 2022-12-07 13:49:24 -05:00
realbigsean
dbc57ba2d9 merge with upstream 2022-12-07 13:11:21 -05:00
realbigsean
6c8b1b323b merge upstream 2022-12-07 12:27:21 -05:00
realbigsean
33721d17b2 Revert "fix compilation errors (#25)"
This reverts commit ae054d2663.
2022-12-07 12:25:11 -05:00
realbigsean
e5f26516bd add trusted setup, query different version of EL endpoint at each fork 2022-12-07 12:01:21 -05:00
Jimmy Chen
ae054d2663 fix compilation errors (#25) 2022-12-07 10:07:55 -05:00
realbigsean
2704955b2e local testnet config updates 2022-12-06 08:54:46 -05:00
ethDreamer
b6486e809d Fixed moar tests (#3774) 2022-12-05 09:08:55 +11:00
ethDreamer
5282e200be Merge 'upstream/unstable' into capella (#3773)
* Add API endpoint to count statuses of all validators (#3756)
* Delete DB schema migrations for v11 and earlier (#3761)

Co-authored-by: Mac L <mjladson@pm.me>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2022-12-03 14:05:25 -06:00
ethDreamer
1a39976715 Fixed Compiler Warnings & Failing Tests (#3771) 2022-12-03 10:42:12 +11:00
Pawan Dhananjay
e72d9fb922 Working post bellatrix local testnet 2022-12-02 19:50:45 +05:30
Pawan Dhananjay
df3615664e Embed interop validators into genesis 2022-12-02 12:55:27 +05:30
Pawan Dhananjay
ff24773a5a Add EL scripts 2022-12-02 12:55:27 +05:30
Michael Sproul
84392d63fa Delete DB schema migrations for v11 and earlier (#3761)
## Proposed Changes

Now that the Gnosis merge is scheduled, all users should have upgraded beyond Lighthouse v3.0.0. Accordingly we can delete schema migrations for versions prior to v3.0.0.

## Additional Info

I also deleted the state cache stuff I added in #3714 as it turned out to be useless for the light client proofs due to the one-slot offset.
2022-12-02 00:07:43 +00:00
realbigsean
2cd971c7d3 attempt to fix serde for opt hex be u256 2022-12-01 15:17:57 -05:00
realbigsean
c96234f38c fix hex be opt 2022-12-01 14:20:50 -05:00
realbigsean
8102a01085 merge with upstream 2022-12-01 11:13:07 -05:00
Mac L
18c9be595d Add API endpoint to count statuses of all validators (#3756)
## Issue Addressed

#3724

## Proposed Changes

Adds an endpoint to quickly count the number of occurrences of each status in the validator set.

## Usage

```bash
curl -X GET "http://localhost:5052/lighthouse/ui/validator_count" -H "accept: application/json" | jq
```

```json
{
  "data": {
    "active_ongoing":479508,
    "active_exiting":0,
    "active_slashed":0,
    "pending_initialized":28,
    "pending_queued":0,
    "withdrawal_possible":933,
    "withdrawal_done":0,
    "exited_unslashed":0,
    "exited_slashed":3
  }
}
```
2022-12-01 06:03:53 +00:00
Michael Sproul
6c9de4a53a Merge pull request #3762 from ethDreamer/capella_eip4844_merge_unstable
merge `capella` with `unstable`
2022-12-01 11:15:51 +11:00
Mark Mackey
8a04c3428e Merged with unstable 2022-11-30 17:29:10 -06:00
Diva M
979a95d62f handle unknown parents for block-blob pairs
wip

handle unknown parents for block-blob pairs
2022-11-30 17:21:54 -05:00
realbigsean
2157d91b43 process single block and blob 2022-11-30 11:51:18 -05:00
realbigsean
fc9d0a512d handle blobs by range requests 2022-11-30 10:02:29 -05:00
realbigsean
422d145902 chain segment processing for blobs 2022-11-30 09:40:15 -05:00
Michael Sproul
22115049ee Prioritise important parts of block processing (#3696)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2327

## Proposed Changes

This is an extension of some ideas I implemented while working on `tree-states`:

- Cache the indexed attestations from blocks in the `ConsensusContext`. Previously we were re-computing them 3-4 times over.
- Clean up `import_block` by splitting each part into `import_block_XXX`.
- Move some stuff off hot paths, specifically:
  - Relocate non-essential tasks that were running between receiving the payload verification status and priming the early attester cache. These tasks are moved after the cache priming:
    - Attestation observation
    - Validator monitor updates
    - Slasher updates
    - Updating the shuffling cache
  - Fork choice attestation observation now happens at the end of block verification in parallel with payload verification (this seems to save 5-10ms).
  - Payload verification now happens _before_ advancing the pre-state and writing it to disk! States were previously being written eagerly and adding ~20-30ms in front of verifying the execution payload. State catchup also sometimes takes ~500ms if we get a cache miss and need to rebuild the tree hash cache.

The remaining task that's taking substantial time (~20ms) is importing the block to fork choice. I _think_ this is because of pull-tips, and we should be able to optimise it out with a clever total active balance cache in the state (which would be computed in parallel with payload verification). I've decided to leave that for future work though. For now it can be observed via the new `beacon_block_processing_post_exec_pre_attestable_seconds` metric.
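
As a rough illustration of the attestation-caching idea (simplified placeholder types rather than the real `ConsensusContext`):

```rust
use std::collections::HashMap;

struct Attestation;        // placeholder
struct IndexedAttestation; // placeholder
type AttestationKey = u64; // e.g. a hash of the attestation data

// Cache indexed attestations per block so the expensive committee lookup
// runs once per attestation instead of 3-4 times.
struct ConsensusContext {
    indexed_attestations: HashMap<AttestationKey, IndexedAttestation>,
}

impl ConsensusContext {
    fn get_indexed_attestation(
        &mut self,
        key: AttestationKey,
        attestation: &Attestation,
    ) -> &IndexedAttestation {
        self.indexed_attestations
            .entry(key)
            .or_insert_with(|| compute_indexed_attestation(attestation))
    }
}

fn compute_indexed_attestation(_a: &Attestation) -> IndexedAttestation {
    IndexedAttestation // the expensive committee lookup would happen here
}
```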


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-30 05:22:58 +00:00
Divma
b4f4c0d253 Ipv6 bootnodes (#3752)
## Issue Addressed
Our bootnodes currently support only IPv4; this change makes them support IPv6 as well.

## Proposed Changes
- Adds code necessary to update the bootnodes to run on dual stack nodes and therefore contact and store ipv6 nodes.
- Adds some metrics about connectivity type of stored peers. It might have been nice to see some metrics over the sessions but that feels out of scope right now.

## Additional Info
- Some code quality improvements sneaked in, since the changes seemed small.
- I think it depends on the OS, but enabling mapped addresses on an IPv6 node without dual-stack support enabled could fail silently, making these nodes effectively IPv6-only. In the future I'll probably change this to use two sockets, which should fail loudly.
2022-11-30 03:21:35 +00:00
Mark Mackey
f5e6a54f05 Refactored Execution Layer & Fixed Some Tests 2022-11-29 18:18:33 -06:00
Mark Mackey
36170ec428 Fixed some BeaconChain Tests 2022-11-29 18:18:18 -06:00
Mark Mackey
e0ea26c228 Remove withdrawals guard for PayloadAttributesV2 2022-11-29 18:03:29 -06:00
ethDreamer
342489a0c3 Fixed Payload Deserialization in DB (#3758) 2022-11-30 10:27:13 +11:00
Pawan Dhananjay
8b56446b64 Add more kzg validations 2022-11-29 16:04:18 +05:30
Pawan Dhananjay
0ed3f7474c Remove empty blob handling as it's handled in kzg library 2022-11-29 15:15:05 +05:30
GeemoCandama
3534c85e30 Optimize finalized chain sync by skipping newPayload messages (#3738)
## Issue Addressed

#3704 

## Proposed Changes
Adds an `is_syncing_finalized: bool` parameter to the block verification functions. Sets the `payload_verification_status` to `Optimistic` if `is_syncing_finalized` is true. Uses `SyncState` in `NetworkGlobals` in the `BeaconProcessor` to retrieve the syncing status.
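
A minimal sketch of that decision (function names invented for illustration):

```rust
enum PayloadVerificationStatus {
    Verified,
    Optimistic,
}

fn payload_verification_status(is_syncing_finalized: bool) -> PayloadVerificationStatus {
    if is_syncing_finalized {
        // The block is an ancestor of a finalized checkpoint: skip the
        // `newPayload` call and import it optimistically.
        PayloadVerificationStatus::Optimistic
    } else {
        // Otherwise verify the payload with the execution layer as usual.
        verify_with_execution_layer()
    }
}

fn verify_with_execution_layer() -> PayloadVerificationStatus {
    PayloadVerificationStatus::Verified // placeholder for the real EL round trip
}
```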

## Additional Info
I could implement `FinalizedSignatureVerifiedBlock` if you think it would be nicer.
2022-11-29 08:19:27 +00:00
Paul Hauner
a2969ba7de Improve debugging experience for builder proposals (#3725)
## Issue Addressed

NA

## Proposed Changes

This PR sets out to improve the logging/metrics experience when interacting with the builder. Namely, it:

- Adds/changes metrics (see "Metrics Changes" section).
- Adds new logs which show the duration of requests to the builder/local EL.
- Refactors existing logs for consistency and so that the `parent_hash` is included in all relevant logs (we can grep for this field when trying to trace the flow of block production).


Additionally, when I was implementing this PR I noticed that we skip some verification of the builder payload in the scenario where the builder returns `Ok` but the local EL returns `Err`. Namely, we were skipping the bid signature and other values like parent hash and prev randao. In this PR I've changed it so we *always* check these values and reject the bid if they're incorrect. With these changes, we'll sometimes choose to skip a proposal rather than propose something invalid -- that's the only side-effect of the changes that I can see.

## Metrics Changes

- Changed: `execution_layer_request_times`:
    - `method = "get_blinded_payload_local"`: time taken to get a payload from a local EE.
    - `method = "get_blinded_payload_builder"`: time taken to get a blinded payload from a builder.
    - `method = "post_blinded_payload_builder"`: time taken to get a builder to reveal a payload they've previously supplied us.
- `execution_layer_get_payload_outcome`
    - `outcome = "success"`: we successfully produced a payload from a builder or local EE.
    - `outcome = "failure"`: we were unable to get a payload from a builder or local EE.
- New: `execution_layer_builder_reveal_payload_outcome`
    - `outcome = "success"`: a builder revealed a payload from a signed, blinded block.
    - `outcome = "failure"`: the builder did not reveal the payload.
- New: `execution_layer_get_payload_source`
    - `type = "builder"`: we used a payload from a builder to produce a block.
    - `type = "local"`: we used a payload from a local EE to produce a block.
- New: `execution_layer_get_payload_builder_rejections` has a `reason` field to describe why we rejected a payload from a builder.
- New: `execution_layer_payload_bids` tracks the bid (in gwei) from the builder or local EE (local EE not yet supported, waiting on EEs to expose the value). Can only record values that fit inside an i64 (roughly 9 million ETH).
## Additional Info

NA
2022-11-29 05:51:42 +00:00
Diva M
e548073602 Merge branch 'blob-syncing' into eip4844-devnet-v3 2022-11-28 15:10:50 -05:00
Diva M
c532f4438c debug impl for wrapper type 2022-11-28 14:54:47 -05:00
Diva M
050acf67a3 Revert "TEMP HACK to get it compiling"
This reverts commit 3c79c33d86.
2022-11-28 14:36:07 -05:00
Diva M
3c79c33d86 TEMP HACK to get it compiling 2022-11-28 14:30:40 -05:00
Diva M
4760dbb078 add wrapper type 2022-11-28 14:22:19 -05:00
Diva M
805df307f6 wip 2022-11-28 14:13:12 -05:00
realbigsean
e962e80bb4 fix compile errors 2022-11-28 12:16:45 -05:00
realbigsean
6d7235f2c8 add blob info 2022-11-28 11:36:48 -05:00
realbigsean
0027cdcd3a Merge branch 'eip4844-devnet-v3' of https://github.com/realbigsean/lighthouse into eip4844-devnet-v3 2022-11-28 11:26:55 -05:00
realbigsean
92cae14409 add blob info 2022-11-28 11:26:46 -05:00
Diva M
f4babdedd5 more hacks 2022-11-28 11:06:12 -05:00
Diva M
25f29256ce hack the thing to get it to compile 2022-11-28 10:42:50 -05:00
Pawan Dhananjay
cb78f2f8df Add more kzg validations 2022-11-28 20:23:18 +05:30
Pawan Dhananjay
9640d420f7 Run kzg operations for block operations 2022-11-28 18:17:10 +05:30
Pawan Dhananjay
3075b82ea0 Add kzg trusted setup file cli param and load into beacon chain 2022-11-28 16:54:20 +05:30
kevinbogner
99ec9d9baf Add Run a Node guide (#3681)
## Issue Addressed

Related to #3672  

## Proposed Changes

- Added a guide to run a node, mainly copied and pasted from 'Merge Migration' and 'Checkpoint Sync'.
- Ranked it high in ToC:
  - Introduction
  - Installation
  - Run a Node
  - Become a Validator
	...
- Hid 'Merge Migration' in ToC.

## Additional Info

- Should I add/rephrase/delete something?
- Now there is some redundancy:
  - 'Run a node' and 'Checkpoint Sync' contain similar information.
  - Same for 'Run a node' and 'Become a Validator'.


Co-authored-by: kevinbogner <114221396+kevinbogner@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-28 10:05:43 +00:00
Age Manning
2779017076 Gossipsub fast message id change (#3755)
For improved consistency, this mixes the topic into our fast message ID for more consistent tracking of messages across topics.
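Conceptually (illustrative hashing only; the real gossipsub fast message ID function differs):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Mix the topic into the fast message ID so identical payloads on different
// topics no longer collapse to the same ID.
fn fast_message_id(topic: &str, message_data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    topic.hash(&mut hasher);
    message_data.hash(&mut hasher);
    hasher.finish()
}
```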
2022-11-28 07:36:52 +00:00
Mac L
c881b80367 Add CLI flag for gui requirements (#3731)
## Issue Addressed

#3723

## Proposed Changes

Adds a new CLI flag `--gui` which enables all the various flags required for the gui to function properly.
Currently enables the `--http` and `--validator-monitor-auto` flags.
2022-11-28 00:22:53 +00:00
realbigsean
3c9e1abcb7 merge upstream 2022-11-26 10:01:57 -05:00
sean
07a79c8266 add block and blobs sidecar to whitelist 2022-11-25 14:44:57 +00:00
sean
d6838d67dc add clang to dockerfile 2022-11-25 14:38:45 +00:00
sean
f88caa7afc set quota for blobs by root 2022-11-25 14:30:51 +00:00
Mac L
969ff240cd Add CLI flag to opt in to world-readable log files (#3747)
## Issue Addressed

#3732

## Proposed Changes

Add a CLI flag to allow users to opt out of the restrictive permissions of the log files.

## Additional Info

This is not recommended for most users. The log files can contain sensitive information such as validator indices, public keys and API tokens (see #2438). However some users using a multi-user setup may find this helpful if they understand the risks involved.
2022-11-25 07:57:11 +00:00
antondlr
e9bf7f7cc1 remove commas from comma-separated kv pairs (#3737)
## Issue Addressed

Logs are emitted as a comma-separated k/v list, but the values sometimes contain commas, which breaks parsing.
2022-11-25 07:57:10 +00:00
Giulio rebuffo
d5a2de759b Added LightClientBootstrap V1 (#3711)
## Issue Addressed

Partially addresses #3651

## Proposed Changes

Adds server-side support for light_client_bootstrap_v1 topic

## Additional Info

This PR creates each bootstrap on the fly without using a cache. I do not know how necessary a cache is in this case, as this topic is not supposed to be called frequently, and IMHO we can just prevent abuse by using the rate limiter. Let me know what you think, whether there is any caveat to this, or whether a cache is necessary only for the sake of good practice.


Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
2022-11-25 05:19:00 +00:00
Michael Sproul
788b337951 Op pool and gossip for BLS to execution changes (#3726) 2022-11-25 07:09:26 +11:00
realbigsean
a61f35272c fix compiling 2022-11-24 08:18:01 -05:00
realbigsean
1222404450 Merge branch 'blob-syncing' of https://github.com/realbigsean/lighthouse into blob-sync-kzg 2022-11-24 07:46:04 -05:00
Divma
bf5005244e Blob syncing (#24)
* add a rt is_blob_batch

* use the mixed type everywhere

* glue

* more glue

* minor fixes

* fix range tests

* filling in the gaps

* moore filling in the gaps
2022-11-24 07:45:38 -05:00
realbigsean
58b54f0a53 Rename excess blobs and update 4844 json RPC serialization/deserialization (#3745)
* rename excess blobs and fix json serialization/deserialization

* remove coments
2022-11-24 16:41:35 +11:00
Michael Sproul
e3ccd8fd4a Two Capella bugfixes (#3749)
* Two Capella bugfixes

* fix payload default check in fork choice

* Revert "fix payload default check in fork choice"

This reverts commit e56fefbd05.

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-11-24 15:14:06 +11:00
Paul Hauner
bf533c8e42 v3.3.0 (#3741)
## Issue Addressed

NA

## Proposed Changes

- Bump versions
- Pin the `nethermind` version since our method of getting the latest tags on `master` is giving us an old version (`1.14.1`).
- Increase timeout for execution engine startup.

## Additional Info

- [x] ~Awaiting further testing~
2022-11-23 23:38:32 +00:00
realbigsean
beddcfaac2 get spec tests working and fix json serialization 2022-11-23 18:30:45 -05:00
realbigsean
abc933faa8 Merge branch 'capella-bugfixes' of https://github.com/michaelsproul/lighthouse into blob-sync-kzg 2022-11-23 11:27:41 -05:00
realbigsean
7aa52a4141 ef-test fixes 2022-11-23 11:27:37 -05:00
realbigsean
62f8a5ee10 Merge branch 'blob-syncing' of https://github.com/realbigsean/lighthouse into blob-sync-kzg 2022-11-23 11:22:54 -05:00
realbigsean
ce097ac8d2 Merge branch 'json-rpc-blobs-updates' of https://github.com/realbigsean/lighthouse into blob-syncing 2022-11-23 11:22:15 -05:00
realbigsean
743347cf04 Revert "fix payload default check in fork choice"
This reverts commit e56fefbd05.
2022-11-23 11:18:47 -05:00
realbigsean
e56fefbd05 fix payload default check in fork choice 2022-11-23 09:44:00 -05:00
realbigsean
87ec448b00 Merge branch 'eip4844' of https://github.com/sigp/lighthouse into blob-sync-kzg 2022-11-23 09:19:30 -05:00
Michael Sproul
53a22c2fcb Two Capella bugfixes 2022-11-23 18:51:39 +11:00
ethDreamer
28c9603505 Stuuupid camelCase (#3748) 2022-11-23 14:42:58 +11:00
realbigsean
f601fb3b7e ef-test updates 2022-11-22 20:13:51 -05:00
Pawan Dhananjay
902055f295 ugly utils 2022-11-22 20:10:32 -05:00
Pawan Dhananjay
e8b5f311aa Add kzg crate functions 2022-11-22 20:10:17 -05:00
Pawan Dhananjay
3288404ec1 Skeleton 2022-11-22 20:09:21 -05:00
realbigsean
48b2efce9f merge with upstream 2022-11-22 18:38:30 -05:00
realbigsean
0228b2b42d - fix pre-merge block production (#3746)
- return `None` on pre-4844 blob requests
2022-11-22 17:10:40 -06:00
realbigsean
160b915695 remove coments 2022-11-22 17:30:35 -05:00
realbigsean
51b44290a3 rename excess blobs and fix json serialization/deserialization 2022-11-22 17:22:46 -05:00
ethDreamer
24e5252a55 Massive Update to Engine API (#3740)
* Massive Update to Engine API

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/execution_payload.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Update beacon_node/execution_layer/src/engine_api.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2022-11-22 13:27:48 -05:00
Michael Sproul
61b4bbf870 Fix BlocksByRoot response types (#3743) 2022-11-22 12:29:47 -05:00
realbigsean
a0d712ed8c Merge pull request #23 from divagant-martian/blob-syncing
make it compile to start from here
2022-11-21 14:59:12 -05:00
Diva M
7ed2d35424 get it to compile 2022-11-21 14:53:33 -05:00
realbigsean
e7ee79185b add blobs cache and fix some block production 2022-11-21 14:09:06 -05:00
Michael Sproul
b477c42748 Lower deposit finalization error to warning (#3739)
## Issue Addressed

Partially addresses #3707

## Proposed Changes

Drop `ERRO` log to `WARN` until we identify the exact conditions that lead to this case.

Add a message which hopefully reassures users who only see this log once 😅 

Add the block hash to the error message in case it will prove useful in debugging the root cause.
2022-11-21 06:29:03 +00:00
Lion - dapplion
e3729533a1 Schedule gnosis merge (#3729)
## Issue Addressed

N/A

## Proposed Changes

Schedule Gnosis merge
- Upstream config PR: https://github.com/gnosischain/configs/pull/3
- Nethermind PR: https://github.com/NethermindEth/nethermind/pull/4901
- Public announcement: https://twitter.com/gnosischain/status/1592589482641223682

## Additional Info

N/A

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2022-11-21 06:29:02 +00:00
Akihito Nakano
8a36acdb1a Super small improvement: Remove unnecessary mut (#3736)
## Issue Addressed

Removed some unnecessary `mut`. 🙂 

2022-11-21 03:15:54 +00:00
realbigsean
dc87156641 block and blob handling progress 2022-11-19 16:53:34 -05:00
realbigsean
45897ad4e1 remove blob wrapper 2022-11-19 15:18:42 -05:00
realbigsean
dfd0013eab Merge pull request #22 from divagant-martian/seans-block-blobs
toy skeleton of sync changes
2022-11-16 14:45:45 -05:00
Diva M
78c72158c8 toy skeleton of sync changes 2022-11-16 13:53:38 -05:00
realbigsean
7162e5e23b add a bunch of blob coupling boiler plate, add a blobs by root request 2022-11-15 16:43:56 -05:00
Pawan Dhananjay
857ef25d28 Add metrics for subnet queries (#3721)
## Issue Addressed

N/A

## Proposed Changes

Add metrics for peers discovered in subnet discv5 queries.
2022-11-15 13:25:38 +00:00
Michael Sproul
713b6a18d4 Simplify GossipTopic -> String conversion (#3722)
## Proposed Changes

With a few different changes to the gossip topics in flight (light clients, Capella, 4844, etc) I think this simplification makes sense. I noticed it while plumbing through a new Capella topic.
2022-11-15 05:21:48 +00:00
Daniel Ramirez Chiquillo
05178848e5 compile with beta compiler on CI (#3717)
## Issue Addressed

Closes #3709 

## Proposed Changes

Add the job `compile-with-beta-compiler` to `test-suite`. This job has the following steps:

1. Use `actions/checkout@v3`. (Needed to run make in a later step.)
2. Install the dependencies listed in [build from source guide](https://lighthouse-book.sigmaprime.io/installation-source.html).
3. Change the compiler to the current beta version with `rustup override`.
4. Run `make`.
2022-11-15 05:21:36 +00:00
Age Manning
230168deff Health Endpoints for UI (#3668)
This PR adds some health endpoints for the beacon node and the validator client.

Specifically it adds the endpoint:
`/lighthouse/ui/health`

These are not entirely stable yet, but they provide a base for modification for our UI.

These also may have issues with various platforms and may need modification.
2022-11-15 05:21:26 +00:00
Michael Sproul
0cdd049da9 Fixes to make EF Capella tests pass (#3719)
* Fixes to make EF Capella tests pass

* Clippy for state_processing
2022-11-14 13:14:31 -06:00
Mark Mackey
276e1845fd Added process_withdrawals 2022-11-13 18:20:27 -06:00
Michael Sproul
9bd6d9ce7a CI gardening maintenance (#3706)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3656

## Proposed Changes

* Replace `set-output` by `$GITHUB_OUTPUT` usage
* Avoid rate-limits when installing `protoc` by making authenticated requests (continuation of https://github.com/sigp/lighthouse/pull/3621)
* Upgrade all Ubuntu 18.04 usage to 22.04 (18.04 is end of life)
* Upgrade macOS-latest to explicit macOS-12 to silence warning
* Use `actions/checkout@v3` and `actions/cache@v3` to avoid deprecated NodeJS v12

## Additional Info

Can't silence the NodeJS warnings entirely due to https://github.com/sigp/lighthouse/issues/3705. Can fix that in future.
2022-11-13 22:40:44 +00:00
tim gretler
5dba89e43b Sync committee sign bn fallback (#3624)
## Issue Addressed

Closes #3612

## Proposed Changes

- Iterates through BNs until it finds a non-optimistic head.

A slight change in error behavior: 
- Previously: `spawn_contribution_tasks` did not return an error for a non-optimistic block head. It returned `Ok(())` and logged a warning.
- Now: `spawn_contribution_tasks` returns an error if it cannot find a non-optimistic block head. The caller of `spawn_contribution_tasks` then logs the error as a critical error.
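
A simplified sketch of the fallback loop (invented types; the real implementation is async and lives in the VC's beacon node fallback logic):

```rust
struct BeaconNode {
    endpoint: String,
}

impl BeaconNode {
    fn head_is_optimistic(&self) -> bool {
        false // placeholder for querying the node's sync/head status
    }
}

// Iterate through the configured beacon nodes and return the first whose
// head is non-optimistic; error out (logged as critical) if none qualifies.
fn first_non_optimistic(nodes: &[BeaconNode]) -> Result<&BeaconNode, String> {
    nodes
        .iter()
        .find(|bn| !bn.head_is_optimistic())
        .ok_or_else(|| "no beacon node has a non-optimistic head".to_string())
}
```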


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-13 22:40:43 +00:00
Michael Sproul
3be41006a6 Add --light-client-server flag and state cache utils (#3714)
## Issue Addressed

Part of https://github.com/sigp/lighthouse/issues/3651.

## Proposed Changes

Add a flag for enabling the light client server, which should be checked before gossip/RPC traffic is processed (e.g. https://github.com/sigp/lighthouse/pull/3693, https://github.com/sigp/lighthouse/pull/3711). The flag is available at runtime from `beacon_chain.config.enable_light_client_server`.

Additionally, a new method `BeaconChain::with_mutable_state_for_block` is added which I envisage being used for computing light client updates. Unfortunately its performance will be quite poor on average because it will only run quickly with access to the tree hash cache. Each slot the tree hash cache is only available for a brief window of time between the head block being processed and the state advance at 9s in the slot. When the state advance happens the cache is moved and mutated to get ready for the next slot, which makes it no longer useful for merkle proofs related to the head block. Rather than spend more time trying to optimise this I think we should continue prototyping with this code, and I'll make sure `tree-states` is ready to ship before we enable the light client server in prod (cf. https://github.com/sigp/lighthouse/pull/3206).

## Additional Info

I also fixed a bug in the implementation of `BeaconState::compute_merkle_proof` whereby the tree hash cache was moved with `.take()` but never put back with `.restore()`.
2022-11-11 11:03:18 +00:00
GeemoCandama
c591fcd201 add checkpoint-sync-url-timeout flag (#3710)
## Issue Addressed
#3702
## Proposed Changes
Added a `--checkpoint-sync-url-timeout` flag to the CLI. Added a timeout field to `ClientGenesis::CheckpointSyncUrl` to utilise the configured timeout.



Co-authored-by: GeemoCandama <104614073+GeemoCandama@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-11 00:38:28 +00:00
Michael Sproul
d99bfcf1a5 Blinded block and RANDAO APIs (#3571)
## Issue Addressed

https://github.com/ethereum/beacon-APIs/pull/241
https://github.com/ethereum/beacon-APIs/pull/242

## Proposed Changes

Implement two new endpoints for fetching blinded blocks and RANDAO mixes.


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-11-11 00:38:27 +00:00
Mark Mackey
81319dfcae Forgot one feature guard 2022-11-10 15:33:26 -06:00
realbigsean
fe04d945cc make signed block + sidecar consensus spec 2022-11-10 14:22:30 -05:00
Mark Mackey
756e48f5dc BeaconState field renamed 2022-11-10 11:49:55 -06:00
Mark Mackey
2191242341 Added stuff that NEEDS IMPLEMENTING 2022-11-09 19:35:01 -06:00
Mark Mackey
2d01ae6036 Fixed compiling with withdrawals enabled 2022-11-09 19:34:19 -06:00
Mark Mackey
ab13f95db5 Updated for queueless withdrawals spec 2022-11-09 18:09:34 -06:00
tim gretler
266d765285 Register blocks in validator monitor (#3635)
## Issue Addressed

Closes #3460

## Proposed Changes

`blocks` and `block_min_delay` are never updated in the epoch summary



Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-09 05:37:09 +00:00
Giulio rebuffo
9d6209725f Added Merkle Proof Generation for Beacon State (#3674)
## Issue Addressed

This PR addresses partially #3651

## Proposed Changes

This PR adds the following methods:

* a new method on the `TreeHash` trait, `hash_tree_leaves`, which returns all the Merkle leaves of the SSZ object.
* a new method on `BeaconState`, `compute_merkle_proof`, which generates a Merkle proof for a given depth and index, using `hash_tree_leaves` as the leaves function.
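
For intuition, here is a generic sketch of proof generation over a flat leaf list (assuming the leaf layer is padded to a power of two; this is not the actual `compute_merkle_proof` implementation):

```rust
use sha2::{Digest, Sha256};

fn hash_pair(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(a);
    hasher.update(b);
    hasher.finalize().into()
}

// Walk up the tree, recording the sibling of the target node at each layer.
fn merkle_proof(mut layer: Vec<[u8; 32]>, mut index: usize) -> Vec<[u8; 32]> {
    assert!(layer.len().is_power_of_two());
    let mut proof = Vec::new();
    while layer.len() > 1 {
        proof.push(layer[index ^ 1]); // sibling of the current node
        layer = layer
            .chunks(2)
            .map(|pair| hash_pair(&pair[0], &pair[1]))
            .collect();
        index /= 2;
    }
    proof
}
```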

## Additional Info

Now here is some rationale on why I decided to go down this route: adding a new function to a commonly used trait is a pain, but it was necessary to make sure we have all Merkle leaves for every object. That is why I only added `hash_tree_leaves` to the trait and not `compute_merkle_proof` as well: although that would make sense, it would give us code duplication and a harder review, and we only need it from one specific object in one specific use case, so it's not worth the effort yet, in my humble opinion.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-08 01:58:18 +00:00
realbigsean
bc0af72c74 fix topic name 2022-11-07 12:36:31 -05:00
Divma
84c7d8cc70 Blocklookup data inconsistencies (#3677)
## Issue Addressed
Closes #3649 

## Proposed Changes

Add a regression test for the data inconsistency, catching the problem in 31e88c5533 [here](https://github.com/sigp/lighthouse/actions/runs/3379894044/jobs/5612044797#step:6:2043).
When a chain is sent for processing, it is moved to a separate collection, and now the test passes, yay!

## Additional Info

na
2022-11-07 06:48:34 +00:00
Michael Sproul
253767ebc1 Update stale sections of the book (#3671)

## Proposed Changes

* Add v3.2 and v3.3 to database migrations table
* Remove docs on `--subscribe-all-subnets` and `--import-all-attestations` from redundancy docs
* Clarify that the merge has already occurred on the merge migration page
2022-11-07 06:48:32 +00:00
Paul Hauner
0655006e87 Clarify error log when registering validators (#3650)
## Issue Addressed

NA

## Proposed Changes

Adds clarification to an error log when there is an error submitting a validator registration.

There seems to be a few cases where relays return errors during validator registration, including spurious timeouts and when a validator has been very recently activated/made pending.

Changing this log helps indicate that it's "just another registration error" rather than something more serious. I didn't drop this to a `WARN` since I still have hope we can eliminate these errors completely by chatting with relays and adjusting timeouts.

## Additional Info

NA
2022-11-07 06:48:31 +00:00
Jimmy Chen
cb393f5b7d Fix compilation error (#3692) 2022-11-06 11:46:48 -05:00
realbigsean
fc0b06a039 Feature gate withdrawals (#3684)
* start feature gating

* feature gate withdrawals
2022-11-04 16:50:26 -04:00
realbigsean
1aec17b09c Merge branch 'unstable' of https://github.com/sigp/lighthouse into eip4844 2022-11-04 13:23:55 -04:00
Divma
8600645f65 Fix rust 1.65 lints (#3682)
## Issue Addressed

New lints for rust 1.65

## Proposed Changes

The notable change is the identification of parameters that are only used in recursion.

## Additional Info
na
2022-11-04 07:43:43 +00:00
realbigsean
c45b809b76 Cleanup payload types (#3675)
* Add transparent support

* Add `Config` struct

* Deprecate `enum_behaviour`

* Partially remove enum_behaviour from project

* Revert "Partially remove enum_behaviour from project"

This reverts commit 46ffb7fe77.

* Revert "Deprecate `enum_behaviour`"

This reverts commit 89b64a6f53.

* Add `struct_behaviour`

* Tidy

* Move tests into `ssz_derive`

* Bump ssz derive

* Fix comment

* newtype transaparent ssz

* use ssz transparent and create macros for  per fork implementations

* use superstruct map macros

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-11-02 10:30:41 -04:00
realbigsean
d8a49aad2b merge with unstable fixes 2022-11-01 13:26:56 -04:00
realbigsean
8656d23327 merge with unstable 2022-11-01 13:18:00 -04:00
realbigsean
5ad834280b Block processing eip4844 (#3673)
* add eip4844 block processing

* fix blob processing code

* consensus logic fixes and cleanup

* use safe arith
2022-11-01 13:15:11 -04:00
Pawan Dhananjay
29f2ec46d3 Couple blocks and blobs in gossip (#3670)
* Revert "Add more gossip verification conditions"

This reverts commit 1430b561c3.

* Revert "Add todos"

This reverts commit 91efb9d4c7.

* Revert "Reprocess blob sidecar messages"

This reverts commit 21bf3d37cd.

* Add the coupled topic

* Decode SignedBeaconBlockAndBlobsSidecar correctly

* Process Block and Blobs in beacon processor

* Remove extra blob publishing logic from vc

* Remove blob signing in vc

* Ugly hack to compile
2022-11-01 10:28:21 -04:00
ethDreamer
e8604757a2 Deposit Cache Finalization & Fast WS Sync (#2915)
## Summary

The deposit cache now has the ability to finalize deposits. This will cause it to drop unneeded deposit logs and hashes in the deposit Merkle tree that are no longer required to construct deposit proofs. The cache is finalized whenever the latest finalized checkpoint has a new `Eth1Data` with all deposits imported.

This has three benefits:

1. Improves the speed of constructing Merkle proofs for deposits as we can just replay deposits since the last finalized checkpoint instead of all historical deposits when re-constructing the Merkle tree.
2. Significantly faster weak subjectivity sync as the deposit cache can be transferred to the newly syncing node in compressed form. The Merkle tree that stores `N` finalized deposits requires a maximum of `log2(N)` hashes. The newly syncing node then only needs to download deposits since the last finalized checkpoint to have a full tree.
3. Future proofing in preparation for [EIP-4444](https://eips.ethereum.org/EIPS/eip-4444) as execution nodes will no longer be required to store logs permanently so we won't always have all historical logs available to us.
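
To illustrate the arithmetic in benefit 2 (a standalone sketch, not actual cache code):

```rust
// At most ceil(log2(n)) branch hashes summarise a tree of n finalized deposits.
fn max_snapshot_hashes(n: u64) -> u32 {
    if n <= 1 { 0 } else { 64 - (n - 1).leading_zeros() }
}

fn main() {
    // e.g. ~500k deposits can be summarised by at most 19 hashes, versus
    // shipping all 500k historical deposit logs to a newly syncing node.
    assert_eq!(max_snapshot_hashes(500_000), 19);
}
```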

## More Details

Image to illustrate how the deposit contract merkle tree evolves and finalizes along with the resulting `DepositTreeSnapshot`
![image](https://user-images.githubusercontent.com/37123614/151465302-5fc56284-8a69-4998-b20e-45db3934ac70.png)

## Other Considerations

I've changed the structure of the `SszDepositCache` so once you load & save your database from this version of lighthouse, you will no longer be able to load it from older versions.

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
2022-10-30 04:04:24 +00:00
realbigsean
9f155eec7a 48 byte array serde 2022-10-28 10:25:48 -04:00
realbigsean
82eef493f3 clean up types 2022-10-28 10:18:04 -04:00
Divma
46fbf5b98b Update discv5 (#3171)
## Issue Addressed

Updates discv5

Pending on
- [x] #3547 
- [x] Alex upgrades his deps

## Proposed Changes

Updates discv5 and the enr crate. The only relevant change is some clearer indication of IPv4 usage in Lighthouse.

## Additional Info

Functionally, this should be equivalent to the previous version.
Marked as draft pending a discv5 release.
2022-10-28 05:40:06 +00:00
Kausik Das ✪
5bd1501cb1 Book spelling and grammar corrections (#3659)
## Issue Addressed

There are a few spelling and grammar errors in the book.

## Proposed Changes

Corrected those spelling and grammar errors in the files below:
- book/src/advanced-release-candidates.md
- book/src/advanced_networking.md
- book/src/builders.md
- book/src/key-management.md
- book/src/merge-migration.md
- book/src/wallet-create.md


Co-authored-by: Kausik Das <kausik007007@gmail.com>
Co-authored-by: Kausik Das ✪ <kausik007007@gmail.com>
2022-10-28 03:23:50 +00:00
Giulio rebuffo
f2f920dec8 Added lightclient server side containers (#3655)
## Issue Addressed

This PR partially addresses #3651

## Proposed Changes
This PR adds the following container types from [the lightclient specs](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md): `LightClientUpdate`, `LightClientFinalityUpdate`, `LightClientOptimisticUpdate` and `LightClientBootstrap`. It also implements the creation of each update as outlined by this [document](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/full-node.md).

## Additional Info

Here is a brief description of what each of these container signify:

`LightClientUpdate`: This container is only provided by the server (full node) to light clients when catching up on new sync committees between periods; we want possibly one light client update ready for each post-Altair period the Lighthouse node goes over. It is needed in the resp/req method `light_client_update_by_range`.

`LightClientFinalityUpdate`/`LightClientOptimisticUpdate`: Lighthouse will need only the latest of each of these kinds of update, so there is no need to store them in the database; we can just keep the latest of each in memory and supply them via gossip or resp/req. Only the latest ones are served by a full node. Finality updates mark the transition to a new finalized header, while optimistic updates signify a new non-finalized header which is imported optimistically.

`LightClientBootstrap`: This object is retrieved by light clients during the bootstrap process after a finalized checkpoint is retrieved. Ideally we want to store a `LightClientBootstrap` for each finalized root and then serve each of them by finalized root under the resp/req protocol id `light_client_bootstrap`.

A little digression on how we implement the creation of each update: the creation of an optimistic/finality update is just a version of the light client update creation mechanism with fewer fields set. There is an underlying concept of inheritance: if you look at the specs it becomes very obvious that a light client update is just an extension of a finality update, and a finality update an extension of an optimistic update.
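
For reference, the rough shape of two of these containers (field layout paraphrased from the Altair light client spec, with simplified placeholder Rust types):

```rust
type Slot = u64;
type Hash256 = [u8; 32];
struct BeaconBlockHeader; // placeholder
struct SyncAggregate;     // placeholder

// The "smallest" update: just an attested header with its sync aggregate.
struct LightClientOptimisticUpdate {
    attested_header: BeaconBlockHeader,
    sync_aggregate: SyncAggregate,
    signature_slot: Slot,
}

// A finality update extends the optimistic update with the finalized header
// and its Merkle branch, matching the inheritance described above.
struct LightClientFinalityUpdate {
    attested_header: BeaconBlockHeader,
    finalized_header: BeaconBlockHeader,
    finality_branch: Vec<Hash256>, // fixed-length vector in the spec
    sync_aggregate: SyncAggregate,
    signature_slot: Slot,
}
```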

## Extra note

`LightClientStore` is not implemented as it is only useful as an internal storage design on the light client side.
2022-10-28 03:23:49 +00:00
ethDreamer
f1a3b3b01c Added Capella Epoch Processing Logic (#3666) 2022-10-27 17:41:39 -04:00
realbigsean
137f230344 Capella eip 4844 cleanup (#3652)
* add capella gossip boiler plate

* get everything compiling

Co-authored-by: realbigsean <sean@sigmaprime.io
Co-authored-by: Mark Mackey <mark@sigmaprime.io>

* small cleanup

* small cleanup

* cargo fix + some test cleanup

* improve block production

* add fixme for potential panic

Co-authored-by: Mark Mackey <mark@sigmaprime.io>
2022-10-26 15:15:26 -04:00
Michael Sproul
6d5a2b509f Release v3.2.1 (#3660)
## Proposed Changes

Patch release to include the performance regression fix https://github.com/sigp/lighthouse/pull/3658.

## Additional Info

~~Blocked on the merge of https://github.com/sigp/lighthouse/pull/3658.~~
2022-10-26 09:38:25 +00:00
Michael Sproul
77eabc5401 Revert "Optimise HTTP validator lookups" (#3658)
## Issue Addressed

This reverts commit ca9dc8e094 (PR #3559) with some modifications.

## Proposed Changes

Unfortunately that PR introduced a performance regression in fork choice. The optimisation _intended_ to build the exit and pubkey caches on the head state _only if_ they were not already built. However, due to the head state always being cloned without these caches, we ended up building them every time the head changed, leading to a ~70ms+ penalty on mainnet.

fcfd02aeec/beacon_node/beacon_chain/src/canonical_head.rs (L633-L636)

I believe this is a severe enough regression to justify immediately releasing v3.2.1 with this change.

## Additional Info

I didn't fully revert #3559, because there were some unrelated deletions of dead code in that PR which I figured we may as well keep.

An alternative would be to clone the extra caches, but this likely still imposes some cost, so in the interest of applying a conservative fix quickly, I think reversion is the best approach. The optimisation from #3559 was not even optimising a particularly significant path, it was mostly for VCs running larger numbers of inactive keys. We can re-do it in the `tree-states` world where cache clones are cheap.
2022-10-26 06:50:04 +00:00
Paul Hauner
fcfd02aeec Release v3.2.0 (#3647)
## Issue Addressed

NA

## Proposed Changes

Bump version to `v3.2.0`

## Additional Info

- ~~Blocked on #3597~~
- ~~Blocked on #3645~~
- ~~Blocked on #3653~~
- ~~Requires additional testing~~
2022-10-25 06:36:51 +00:00
Divma
3a5888e53d Ban and unban peers at the swarm level (#3653)
## Issue Addressed

I missed this in https://github.com/sigp/lighthouse/pull/3491: peers were being banned at the behaviour level only. The identify errors are explained by this as well.

## Proposed Changes

Add banning and unbanning at the swarm level.

## Additional Info

Before, having tests that catch this was hard because the swarm was outside the behaviour. We can now add tests that prevent something like this in the future.
2022-10-24 21:39:30 +00:00
Michael Sproul
dbb93cd0d2 bors: require slasher and syncing sim tests (#3645)
## Issue Addressed
I noticed that [this build](https://github.com/sigp/lighthouse/actions/runs/3269950873/jobs/5378036501) wasn't marked failed by Bors when the `syncing-simulator-ubuntu` job failed. This is because that job is absent from the `bors.toml` config.

## Proposed Changes

Add missing jobs to Bors config so that they are required:

- `syncing-simulator-ubuntu`
- `slasher-tests`
- `disallowed-from-async-lint`

The `disallowed-from-async-lint` was previously allowed to fail because it was considered beta, but I think it's stable enough now that we may as well require it.
2022-10-19 22:55:50 +00:00
pinkiebell
d0efb6b18a beacon_node: add --disable-deposit-contract-sync flag (#3597)
Overrides any previous option that enables the eth1 service.
Useful for operating a `light` beacon node.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-10-19 22:55:49 +00:00
GeemoCandama
c5cd0d9b3f add execution-timeout-multiplier flag to optionally increase timeouts (#3631)
## Issue Addressed
Add a flag to lengthen execution layer timeouts.

#3607

## Proposed Changes

Added an `--execution-timeout-multiplier` flag and a CLI test to ensure the execution layer config has the multiplier set correctly.

Adds `execution_timeout_multiplier` to the execution layer config as `Option<u32>` and passes the `u32` to `HttpJsonRpc`.
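
Conceptually, the multiplier scales each base timeout (a sketch with invented names, not the exact plumbing):

```rust
use std::time::Duration;

// Scale each base engine-API timeout by the optional multiplier (default 1).
fn scaled_timeout(base: Duration, execution_timeout_multiplier: Option<u32>) -> Duration {
    base * execution_timeout_multiplier.unwrap_or(1)
    // e.g. scaled_timeout(Duration::from_secs(8), Some(3)) == 24s
}
```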

## Additional Info
Not certain that this is the best way to implement it so I'd appreciate any feedback.

2022-10-18 04:02:07 +00:00
Michael Sproul
edf23bb40e Fix attestation shuffling filter (#3629)
## Issue Addressed

Fix a bug in block production that results in blocks with 0 attestations during the first slot of an epoch.

The bug is marked by debug logs of the form:

> DEBG Discarding attestation because of missing ancestor, block_root: 0x3cc00d9c9e0883b2d0db8606278f2b8423d4902f9a1ee619258b5b60590e64f8, pivot_slot: 4042591

It occurs when trying to look up the shuffling decision root for an attestation from a slot which is prior to fork choice's finalized block. This happens frequently when proposing in the first slot of the epoch where we have:

- `current_epoch == n`
- `attestation.data.target.epoch == n - 1`
- attestation shuffling epoch `== n - 3` (decision block being the last block of `n - 3`)
- `state.finalized_checkpoint.epoch == n - 2` (first block of `n - 2` is finalized)

Hence the shuffling decision slot is out of range of the fork choice backwards iterator _by a single slot_.

Unfortunately this bug was hidden when we weren't pruning fork choice, and then reintroduced in v2.5.1 when we fixed the pruning (https://github.com/sigp/lighthouse/releases/tag/v2.5.1). There's no way to turn that off or disable the filtering in our current release, so we need a new release to fix this issue.

Fortunately, it also does not occur on every epoch boundary because of the gradual pruning of fork choice every 256 blocks (~8 epochs):

01e84b71f5/consensus/proto_array/src/proto_array_fork_choice.rs (L16)

01e84b71f5/consensus/proto_array/src/proto_array.rs (L713-L716)

So the probability of proposing a 0-attestation block given a proposal assignment is approximately `1/32 * 1/8 = 0.39%`.

## Proposed Changes

- Load the block's shuffling ID from fork choice and verify it against the expected shuffling ID of the head state. This code was initially written before we had settled on a representation of shuffling IDs, so I think it's a nice simplification to make use of them here rather than more ad-hoc logic that fundamentally does the same thing.

## Additional Info

Thanks to @moshe-blox for noticing this issue and bringing it to our attention.
2022-10-18 04:02:06 +00:00
Michael Sproul
59ec6b71b8 Consensus context with proposer index caching (#3604)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2371

## Proposed Changes

Backport some changes from `tree-states` that remove duplicated calculations of the `proposer_index`.

With this change the proposer index should be calculated only once for each block, and then plumbed through to every place it is required.
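
The caching pattern, roughly (simplified placeholder types rather than the real `ConsensusContext` API):

```rust
struct BeaconState; // placeholder

struct ConsensusContext {
    proposer_index: Option<u64>,
}

impl ConsensusContext {
    // Compute the proposer index at most once per block, then reuse it.
    fn get_proposer_index(&mut self, state: &BeaconState, slot: u64) -> u64 {
        *self
            .proposer_index
            .get_or_insert_with(|| compute_proposer_index(state, slot))
    }
}

fn compute_proposer_index(_state: &BeaconState, _slot: u64) -> u64 {
    0 // the expensive shuffling computation would happen here
}
```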

## Additional Info

In future I hope to add more data to the consensus context that is cached on a per-epoch basis, like the effective balances of validators and the base rewards.

There are some other changes to remove indexing in tests that were also useful for `tree-states` (the `tree-states` types don't implement `Index`).
2022-10-15 22:25:54 +00:00
Michael Sproul
e4cbdc1c77 Optimistic sync spec tests (v1.2.0) (#3564)
## Issue Addressed

Implements new optimistic sync test format from https://github.com/ethereum/consensus-specs/pull/2982.

## Proposed Changes

- Add parsing and runner support for the new test format.
- Extend the mock EL with a set of canned responses keyed by block hash. Although this doubles up on some of the existing functionality I think it's really nice to use compared to the `preloaded_responses` or static responses. I think we could write novel new opt sync tests using these primitives much more easily than the previous ones. Forks are natively supported, and different responses to `forkchoiceUpdated` and `newPayload` are also straightforward.

## Additional Info

Blocked on merge of the spec PR and release of new test vectors.
2022-10-15 22:25:52 +00:00
Michael Sproul
ca9dc8e094 Optimise HTTP validator lookups (#3559)
## Issue Addressed

While digging around in some logs I noticed that queries for validators by pubkey were taking 10ms+, which seemed too long. This was due to a loop through the entire validator registry for each lookup.

## Proposed Changes

Rather than looping through the registry, this PR utilises the pubkey cache, which is usually initialised at the head*. In case the cache isn't built, we fall back to the previous loop logic. In the vast majority of cases I expect the cache will be built, as the validator client queries at the `head`, where all caches should be built.
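
The lookup strategy, sketched with simplified placeholder types (the real cache maps BLS pubkeys to validator indices on the head state):

```rust
use std::collections::HashMap;

type PublicKey = [u8; 48]; // stand-in for a BLS public key

fn validator_index(
    pubkey_cache: Option<&HashMap<PublicKey, usize>>,
    registry: &[PublicKey],
    pubkey: &PublicKey,
) -> Option<usize> {
    match pubkey_cache {
        // Common case: O(1) lookup via the cache built on the head state.
        Some(cache) => cache.get(pubkey).copied(),
        // Fallback: O(n) scan of the entire validator registry.
        None => registry.iter().position(|pk| pk == pubkey),
    }
}
```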

## Additional Info

*I had to modify the cache build that runs after fork choice to build the pubkey cache. I think it had been optimised out, perhaps accidentally. I think it's preferable to have the exit cache and the pubkey cache built on the head state, as they are required for verifying deposits and exits respectively, and we may as well build them off the hot path of block processing. Previously they'd get built the first time a deposit or exit needed to be verified.

I've deleted the unused `map_state` function which was obsoleted by `map_state_and_execution_optimistic`.
2022-10-15 22:25:51 +00:00
ethDreamer
221c433d62 Fixed a ton of state_processing stuff (#3642)
FIXME's:
 * consensus/fork_choice/src/fork_choice.rs
 * consensus/state_processing/src/per_epoch_processing/capella.rs
 * consensus/types/src/execution_payload_header.rs
 
TODO's:
 * consensus/state_processing/src/per_epoch_processing/capella/partial_withdrawals.rs
 * consensus/state_processing/src/per_epoch_processing/capella/full_withdrawals.rs
2022-10-14 17:35:10 -05:00
ethDreamer
c1c5dc0a64 Fixed some stuff in state processing (#3640) 2022-10-13 17:07:32 -05:00
ethDreamer
255fdf0724 Added Capella Data Structures to consensus/types (#3637)
* Ran Cargo fmt

* Added Capella Data Structures to consensus/types
2022-10-13 09:37:20 -05:00
will
9f242137b0 Add a new bls test (#3235)
## Issue Addressed

#2629

## Proposed Changes


1. CI now downloads the BLS test cases from https://github.com/ethereum/bls12-381-tests/
2. All the BLS test cases (except the eth ones) now use cases from the archive downloaded in step one.
3. The BLS test cases from https://github.com/ethereum/consensus-spec-tests stay where they are, unused. In the future these test cases may be removed, as suggested in https://github.com/ethereum/consensus-spec-tests/issues/25, so this does no harm and remains compatible with future cases.

## Additional Info



Question: 

I am not sure if I should implement tests about `deserialization_G1`, `deserialization_G2` and `hash_to_G2` for the issue.
2022-10-12 23:40:42 +00:00
Pawan Dhananjay
1430b561c3 Add more gossip verification conditions 2022-10-06 21:16:59 -05:00
realbigsean
44515b8cbe cargo fix 2022-10-05 17:20:54 -04:00
realbigsean
b5b4ce9509 blob production 2022-10-05 17:14:45 -04:00
Pawan Dhananjay
91efb9d4c7 Add todos 2022-10-05 02:56:57 -05:00
Pawan Dhananjay
21bf3d37cd Reprocess blob sidecar messages 2022-10-05 02:52:26 -05:00
Pawan Dhananjay
c55b28bf10 Minor fixes 2022-10-04 19:18:06 -05:00
Pawan Dhananjay
12fe514550 Add more gossip verification functions for blobs 2022-10-04 19:17:53 -05:00
Pawan Dhananjay
9d99c784ea Add gossip verification stub 2022-10-04 17:54:14 -05:00
Pawan Dhananjay
3d69484f76 Fix genesis.ssz.zip 2022-10-04 14:52:58 -05:00
realbigsean
cc59f93605 compressed eip4844 genesis 2022-10-04 15:42:05 -04:00
realbigsean
7527c2b455 fix RPC limit add blob signing domain 2022-10-04 14:57:29 -04:00
realbigsean
ba16a037a3 cleanup 2022-10-04 09:34:05 -04:00
mariuspod
242ae21e5d Pass EL JWT secret key via cli flag (#3568)
## Proposed Changes

In this change I've added a new beacon_node CLI flag `--execution-jwt-secret-key` for passing the JWT secret directly as a string.

Without this flag, it was non-trivial to supply a JWT secret without either committing a secrets file to some management repo or fiddling around with manual file mounts for cloud-based deployments.

When used in combination with environment variables, the secret can be injected into container-based systems like docker & friends quite easily.

It's now possible to either specify the file path to the JWT secret or pass the secret directly.

I've modified the docs and attached a test as well.

## Additional Info

The logic has been adapted a bit so that one of `--execution-jwt` or `--execution-jwt-secret-key` must be set when specifying `--execution-endpoint`, so that the semantics remain compatible with the behaviour before this change and at least one secret is provided.
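
For illustration, roughly how such mutually dependent flags can be wired up with `clap`. The flag names match the PR; everything else is a v4-style sketch, not Lighthouse's actual CLI wiring:

```rust
use clap::{Arg, ArgGroup, Command};

fn cli() -> Command {
    Command::new("beacon_node")
        .arg(
            Arg::new("execution-endpoint")
                .long("execution-endpoint")
                // An endpoint demands one of the two secret sources.
                .requires("jwt"),
        )
        .arg(Arg::new("execution-jwt").long("execution-jwt"))
        .arg(Arg::new("execution-jwt-secret-key").long("execution-jwt-secret-key"))
        // The group makes the two secret flags mutually exclusive.
        .group(ArgGroup::new("jwt").args(["execution-jwt", "execution-jwt-secret-key"]))
}

fn main() {
    // An endpoint without any secret source fails to parse.
    assert!(cli()
        .try_get_matches_from(["beacon_node", "--execution-endpoint", "http://localhost:8551"])
        .is_err());
}
```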
2022-10-04 12:41:03 +00:00
realbigsean
c0dc42ea07 cargo fmt 2022-10-04 08:21:46 -04:00
Divma
4926e3967f [DEV FEATURE] Deterministic long lived subnets (#3453)
## Issue Addressed

#2847 

## Proposed Changes
Add, under a feature flag, the changes required to subscribe to long-lived subnets in a deterministic way.

## Additional Info

There is one additional required change: actually searching for peers using the prefix. I think it's best to make that change in a future PR.
2022-10-04 10:37:48 +00:00
GeemoCandama
6a92bf70e4 CLI tests for logging flags (#3609)
## Issue Addressed
Adds CLI tests for the logging flags `log-color` and `disable-log-timestamp`.

#3588
## Proposed Changes
Add CLI tests for the logging flags as described in #3588.

Added `logger_config` to `client::Config` as suggested. Implemented `Default` for `LoggerConfig` based on what was done elsewhere in the repo. Created two tests for each flag.
2022-10-04 08:33:40 +00:00
Pawan Dhananjay
8728c40102 Remove fallback support from eth1 service (#3594)
## Issue Addressed

N/A

## Proposed Changes

With https://github.com/sigp/lighthouse/pull/3214 we made it such that you can either have 1 auth endpoint or multiple non-auth endpoints. Now that we are post-merge on all networks (testnets and mainnet), we cannot progress a chain without a dedicated authenticated execution layer connection, so there is no point in having a non-auth eth1 endpoint for syncing the deposit cache.

This code removes all fallback-related code in the eth1 service. We still keep the single non-auth endpoint since it's useful for testing.

## Additional Info

This removes all eth1 fallback-related metrics that were relevant for the monitoring service, so we might need to change the API upstream.
2022-10-04 08:33:39 +00:00
realbigsean
8d45e48775 cargo fix 2022-10-03 21:52:16 -04:00
realbigsean
e81dbbfea4 compile 2022-10-03 21:48:02 -04:00
Michael Sproul
58bd2f76d0 Ensure protoc is installed for release CI (#3621)
## Issue Addressed

The release CI is currently broken due to the addition of the `protoc` dependency. Here's a failure of the release flow running on my fork: https://github.com/michaelsproul/lighthouse/actions/runs/3155541478/jobs/5134317334

## Proposed Changes

- Install `protoc` on Windows and Mac so that it's available for `cargo install`.
- Install an x86_64 binary in the Cross image for the aarch64 platform: we need a binary that runs on the host, _not_ on the target.
- Fix `macos` local testnet CI by using the Github API key to dodge rate limiting (this issue: https://github.com/actions/runner-images/issues/602).
2022-10-03 23:09:25 +00:00
realbigsean
88006735c4 compile 2022-10-03 10:06:04 -04:00
realbigsean
7520651515 cargo fix and some test fixes 2022-09-29 12:43:35 -04:00
realbigsean
fe6fc55449 fix compilation errors, rename capella -> shanghai, cleanup some rebase issues 2022-09-29 12:43:13 -04:00
realbigsean
809b52715e some block building updates 2022-09-29 12:38:00 -04:00
realbigsean
acaa340b41 add new beacon state variant for shanghai 2022-09-29 12:37:14 -04:00
realbigsean
203418ffc9 add engine_getBlobV1 2022-09-29 12:35:55 -04:00
realbigsean
3f1e5cee78 Some gossip work 2022-09-29 12:35:53 -04:00
realbigsean
ebc0ccd02a some more sync boilerplate 2022-09-29 12:34:09 -04:00
realbigsean
4008da6c60 sync tx blobs 2022-09-29 12:32:55 -04:00
realbigsean
4cdf1b546d add shanghai fork version and epoch 2022-09-29 12:28:58 -04:00
realbigsean
7125f0e3c6 cargo fix 2022-09-29 12:26:08 -04:00
realbigsean
de44b300c0 add/update types 2022-09-29 12:25:56 -04:00
Marius Kjærstad
ff145b986f Changed http:// to https:// on mailing list link (#3610)
Changed http:// to https:// on mailing list link in README.md

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-09-29 06:13:35 +00:00
Michael Sproul
f77e3bc0ad Add maxperf build profile (#3608)
## Proposed Changes

Add a new Cargo compilation profile called `maxperf` which enables more aggressive compiler optimisations at the expense of compilation time.

Some rough initial benchmarks show that this can provide up to a 25% reduction to run time for CPU bound tasks like block processing: https://docs.google.com/spreadsheets/d/15jHuZe7lLHhZq9Nw8kc6EL0Qh_N_YAYqkW2NQ_Afmtk/edit

The numbers in that spreadsheet compare the `consensus-context` branch from #3604 to the same branch compiled with the `maxperf` profile using:

```
PROFILE=maxperf make install-lcli
```

## Additional Info

The downsides of the maxperf profile are:

- It increases compile times substantially, which will particularly impact low-spec hardware. Compiling `lcli` is about 3x slower. Compiling Lighthouse is about 5x slower on my 5950X: 17m 38s rather than 3m 28s.

As a result I think we should not enable this everywhere by default.

- **Option 1**: enable by default for our released binaries. This gives the majority of users the fastest version of `lighthouse` possible, at the expense of slowing down our release CI. Source builds will continue to use the default `release` profile unless users opt-in to `maxperf`.
- **Option 2**: enable by default for source builds. This gives users building from source an edge, but makes them pay for it with compilation time. 

I think I would prefer Option 1. I'll try doing some benchmarking to see how long a maxperf build of Lighthouse would take on GitHub actions.

Credit to Nicholas Nethercote for documenting these options in the Rust Performance Book: https://nnethercote.github.io/perf-book/build-configuration.html.
2022-09-29 06:13:33 +00:00
tim gretler
8d325e700b Use #!/usr/bin/env everywhere for local testnets (#3606)
Full local testnet support for people that don't have `/bin/bash`
2022-09-29 06:13:30 +00:00
Age Manning
27bb9ff07d Handle Lodestar's new agent string (#3620)
## Issue Addressed

#3561 

## Proposed Changes

Recognize Lodestar's new agent string and appropriately count these peers as Lodestar peers.
2022-09-29 01:50:13 +00:00
Age Manning
01b6bf7a2d Improve logging a little (#3619)
Some of the logs, in combination with others, could be improved.

Improving the wording slightly will save some time when debugging.
2022-09-29 01:50:12 +00:00
Divma
b1d2510d1b Libp2p v0.48.0 upgrade (#3547)
## Issue Addressed

Upgrades libp2p to v0.48.0. This is the compilation of
- [x] #3495 
- [x] #3497 
- [x] #3491 
- [x] #3546 
- [x] #3553 

Co-authored-by: Age Manning <Age@AgeManning.com>
2022-09-29 01:50:11 +00:00
Pawan Dhananjay
6779912fe4 Publish subscriptions to all beacon nodes (#3529)
## Issue Addressed

Resolves #3516 

## Proposed Changes

Adds a beacon fallback function for running a beacon node HTTP query on all available fallbacks instead of returning on the first successful result. Uses the new `run_on_all` method for attestation and sync committee subscriptions.
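
The shape of the new behaviour, as a hedged sketch (names and signatures are illustrative, not the actual fallback API):

```rust
struct BeaconNode; // stand-in for a configured BN connection

// Run the request on every node and collect the errors, rather than
// returning at the first success as `first_success` does.
async fn run_on_all<Fut, E>(
    nodes: &[BeaconNode],
    make_request: impl Fn(&BeaconNode) -> Fut,
) -> Result<(), Vec<E>>
where
    Fut: std::future::Future<Output = Result<(), E>>,
{
    let mut errors = Vec::new();
    for node in nodes {
        if let Err(e) = make_request(node).await {
            errors.push(e);
        }
    }
    // The subscription call succeeds if at least one node accepted it.
    if errors.len() < nodes.len() {
        Ok(())
    } else {
        Err(errors)
    }
}
```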

2022-09-28 19:53:35 +00:00
Michael Sproul
abcebf276f Add guide to MEV logs (#3611)
## Proposed Changes

Add some docs on checking the builder configuration, which is a frequently asked question on Discord.

## Additional Info

My text editor also insisted on stripping some trailing newlines, but can put 'em back if we want
2022-09-28 17:45:09 +00:00
Paul Hauner
01e84b71f5 v3.1.2 (#3603)
## Issue Addressed

NA

## Proposed Changes

Bump versions to v3.1.2

## Additional Info

- ~~Blocked on several PRs.~~
- ~~Requires further testing.~~
2022-09-26 01:17:36 +00:00
Divma
bd873e7162 New rust lints for rustc 1.64.0 (#3602)
## Issue Addressed
fixes lints from the last rust release

## Proposed Changes
Fix the lints. Most of the lints from `clippy::question-mark` are false positives of the form https://github.com/rust-lang/rust-clippy/issues/9518, so that lint is allowed for now.

2022-09-23 03:52:46 +00:00
Divma
9bd384a573 send attnet unsubscription event on random subnet expiry (#3600)
## Issue Addressed
🐞 in which we don't actually unsubscribe from a random long lived subnet when it expires

## Proposed Changes

Remove code addressing a specific case in which we are subscribed to all subnets, and handle the removal of the long-lived subnet. I don't think the special-case code is particularly important: if someone is running enough validators to be subscribed to all subnets, they should use `--subscribe-all-subnets` instead.

## Additional Info

I noticed bandwidth usage climbing periodically on some test nodes (around 27 hours, the time of subnet expiration). I'm running this code to check that it doesn't happen anymore, but I think it should be good now.
2022-09-23 03:52:45 +00:00
Paul Hauner
9246a92d76 Make garbage collection test less failure prone (#3599)
## Issue Addressed

NA

## Proposed Changes

This PR attempts to fix the following spurious CI failure:

```
---- store_tests::garbage_collect_temp_states_from_failed_block stdout ----
thread 'store_tests::garbage_collect_temp_states_from_failed_block' panicked at 'disk store should initialize: DBError { message: "Error { message: \"IO error: lock /tmp/.tmp6DcBQ9/cold_db/LOCK: already held by process\" }" }', beacon_node/beacon_chain/tests/store_tests.rs:59:10
```

I believe that some async task is taking a clone of the store and holding it in some other thread for a short time. This creates a race-condition when we try to open a new instance of the store.

## Additional Info

NA
2022-09-23 03:52:44 +00:00
Pawan Dhananjay
3a3dddc5fb Fix ee integration tests (#3592)
## Issue Addressed

Resolves #3573 

## Proposed Changes

Fix the bytecode for the deposit contract deployment transaction and the value for the deposit transaction in the execution engine integration tests. Also verify that all published transactions make it into the execution payload and have a valid status.
2022-09-23 03:52:43 +00:00
Paul Hauner
fa6ad1a11a Deduplicate block root computation (#3590)
## Issue Addressed

NA

## Proposed Changes

This PR removes duplicated block root computation.

Computing the `SignedBeaconBlock::canonical_root` has become more expensive since the merge as we need to compute the Merkle root of each transaction inside an `ExecutionPayload`.

Computing the root for [a mainnet block](https://beaconcha.in/slot/4704236) is taking ~10ms on my i7-8700K CPU @ 3.70GHz (no sha extensions). Given that our median seen-to-imported time for blocks is presently 300-400ms, removing a few duplicated block roots (~30ms) could represent an easy 10% improvement. When we consider that the seen-to-imported times include operations *after* the block has been placed in the early attester cache, we could expect the 30ms to be more significant WRT our seen-to-attestable times.

## Additional Info

NA
2022-09-23 03:52:42 +00:00
Ramana Kumar
76ba0a1aaf Add disable-log-timestamp flag (#3101) (#3586)
## Issues Addressed

Closes https://github.com/sigp/lighthouse/issues/3101

## Proposed Changes

Add global flag to suppress timestamps in the terminal logger.
2022-09-23 03:52:41 +00:00
Paul Hauner
3128b5b430 v3.1.1 (#3585)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

- ~~Requires additional testing~~
- ~~Blocked on:~~
    - ~~#3589~~
    - ~~#3540~~
    - ~~#3587~~
2022-09-22 06:08:52 +00:00
Paul Hauner
dadbd69eec Fix concurrency issue with oneshot_broadcast (#3596)
## Issue Addressed

NA

## Proposed Changes

Fixes an issue found during testing with #3595.

## Additional Info

NA
2022-09-21 10:52:14 +00:00
Paul Hauner
96692b8e43 Impl oneshot_broadcast for committee promises (#3595)
## Issue Addressed

NA

## Proposed Changes

Fixes an issue introduced in #3574 where I erroneously assumed that a `crossbeam_channel` multiple receiver queue was a *broadcast* queue. This is incorrect, each message will be received by *only one* receiver. The effect of this mistake is these logs:

```
Sep 20 06:56:17.001 INFO Synced                                  slot: 4736079, block: 0xaa8a…180d, epoch: 148002, finalized_epoch: 148000, finalized_root: 0x2775…47f2, exec_hash: 0x2ca5…ffde (verified), peers: 6, service: slot_notifier
Sep 20 06:56:23.237 ERRO Unable to validate attestation          error: CommitteeCacheWait(RecvError), peer_id: 16Uiu2HAm2Jnnj8868tb7hCta1rmkXUf5YjqUH1YPj35DCwNyeEzs, type: "aggregated", slot: Slot(4736047), beacon_block_root: 0x88d318534b1010e0ebd79aed60b6b6da1d70357d72b271c01adf55c2b46206c1
```
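
The mistake is easy to reproduce in isolation. This sketch shows that cloned `crossbeam_channel` receivers share one queue (work distribution), rather than each getting a copy (broadcast):

```rust
use crossbeam_channel::unbounded;

fn main() {
    let (tx, rx_a) = unbounded::<&str>();
    let rx_b = rx_a.clone();

    tx.send("committee").unwrap();

    // Exactly one of the receivers gets the message: cloned receivers
    // drain a single shared queue, they are not a broadcast channel.
    assert!(rx_a.try_recv().is_ok());
    assert!(rx_b.try_recv().is_err());
}
```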

## Additional Info

NA
2022-09-21 01:01:50 +00:00
Paul Hauner
a95bcba2ab Avoid holding write-lock whilst waiting on shuffling cache promise (#3589)
## Issue Addressed

NA

## Proposed Changes

Fixes a bug which hogged the write-lock for the `shuffling_cache`.

## Additional Info

NA
2022-09-19 07:58:50 +00:00
Michael Sproul
507bb9dad4 Refined payload pruning (#3587)
## Proposed Changes

Improve the payload pruning feature in several ways:

- Payload pruning is now entirely optional. It is enabled by default but can be disabled with `--prune-payloads false`. The previous `--prune-payloads-on-startup` flag from #3565 is removed.
- Initial payload pruning on startup now runs in a background thread. This thread will always load the split state, which is a small fraction of its total work (up to ~300ms) and then backtrack from that state. This pruning process ran in 2m5s on one Prater node with good I/O and 16m on a node with slower I/O.
- To work with the optional payload pruning the database function `try_load_full_block` will now attempt to load execution payloads for finalized slots _if_ pruning is currently disabled. This gives users an opt-out for the extensive traffic between the CL and EL for reconstructing payloads.

## Additional Info

If the `prune-payloads` flag is toggled on and off then the on-startup check may not see any payloads to delete and fail to clean them up. In this case the `lighthouse db prune_payloads` command should be used to force a manual sweep of the database.
2022-09-19 07:58:49 +00:00
Michael Sproul
f2ac0738d8 Implement skip_randao_verification and blinded block rewards API (#3540)
## Issue Addressed

https://github.com/ethereum/beacon-APIs/pull/222

## Proposed Changes

Update Lighthouse's randao verification API to match the `beacon-APIs` spec. We implemented the API before spec stabilisation, and it changed slightly in the course of review.

Rather than a flag `verify_randao` taking a boolean value, the new API uses a `skip_randao_verification` flag which takes no argument. The new spec also requires the randao reveal to be present and equal to the point-at-infinity when `skip_randao_verification` is set.

I've also updated the `POST /lighthouse/analysis/block_rewards` API to take blinded blocks as input, as the execution payload is irrelevant and we may want to assess blocks produced by builders.

## Additional Info

This is technically a breaking change, but seeing as I suspect I'm the only one using these parameters/APIs, I think we're OK to include this in a patch release.
2022-09-19 07:58:48 +00:00
Marius van der Wijden
6f7d21c542 enable 4844 at epoch 3 2022-09-18 12:13:03 +02:00
Marius van der Wijden
257087b010 correct fork version 2022-09-18 11:43:53 +02:00
Marius van der Wijden
285dbf43ed hacky hacks 2022-09-18 11:34:46 +02:00
Marius van der Wijden
14aa4957b9 correct fork version 2022-09-18 10:46:01 +02:00
Marius van der Wijden
aa000fd119 more enr 2022-09-18 10:23:53 +02:00
Marius van der Wijden
8b71b978e0 new round of hacks (config etc) 2022-09-17 23:42:49 +02:00
Daniel Knopik
750c594f5f forgor something 2022-09-17 21:38:57 +02:00
Daniel Knopik
eab1fce0e5 Merge branch 'eip4844' of github.com:dknopik/lighthouse into eip4844 2022-09-17 20:55:36 +02:00
Daniel Knopik
76572db9d5 add network config 2022-09-17 20:55:21 +02:00
Marius van der Wijden
f43532d3de implement handle blobs by range req 2022-09-17 20:05:51 +02:00
Marius van der Wijden
f9209e2d08 more network stuff 2022-09-17 16:39:40 +02:00
Marius van der Wijden
aeb52ff186 network stuff 2022-09-17 16:10:42 +02:00
Daniel Knopik
d4d40be870 storable blobs 2022-09-17 15:58:52 +02:00
Marius van der Wijden
36a0add0cd network stuff 2022-09-17 15:23:28 +02:00
Daniel Knopik
0518665949 Merge remote-tracking branch 'fork/eip4844' into eip4844 2022-09-17 14:58:33 +02:00
Daniel Knopik
292a16a6eb gossip boilerplate 2022-09-17 14:58:27 +02:00
Marius van der Wijden
acace8ab31 network: blobs by range message 2022-09-17 14:55:18 +02:00
Daniel Knopik
bcc738cb9d progress on gossip stuff 2022-09-17 14:31:57 +02:00
Marius van der Wijden
8473f08d10 beacon: consensus: implement engine api getBlobs 2022-09-17 14:10:15 +02:00
Daniel Knopik
dcfae6c5cf implement From<FullPayload> for Payload 2022-09-17 13:29:20 +02:00
Marius van der Wijden
fe6be28e6b beacon: consensus: implement engine api getBlobs 2022-09-17 13:20:18 +02:00
Daniel Knopik
ca1e17b386 it compiles! 2022-09-17 12:23:03 +02:00
Daniel Knopik
95203c51d4 fix some bugs, adjust structs 2022-09-17 11:26:18 +02:00
Daniel Knopik
8e4e499b51 start adding types 2022-09-17 09:49:12 +02:00
Michael Sproul
ca42ef2e5a Prune finalized execution payloads (#3565)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3556

## Proposed Changes

Delete finalized execution payloads from the database in two places:

1. When running the finalization migration in `migrate_database`. We delete the finalized payloads between the last split point and the new updated split point. _If_ payloads are already pruned prior to this then this is sufficient to prune _all_ payloads as non-canonical payloads are already deleted by the head pruner, and all canonical payloads prior to the previous split will already have been pruned.
2. To address the fact that users will update to this code _after_ the merge on mainnet (and testnets), we need a one-off scan to delete the finalized payloads from the canonical chain. This is implemented in `try_prune_execution_payloads` which runs on startup and scans the chain back to the Bellatrix fork or the anchor slot (if checkpoint synced after Bellatrix). In the case where payloads are already pruned this check only imposes a single state load for the split state, which shouldn't be _too slow_. Even so, a flag `--prune-payloads-on-startup=false` is provided to turn this off after it has run the first time, which provides faster start-up times.

There is also a new `lighthouse db prune_payloads` subcommand for users who prefer to run the pruning manually.

## Additional Info

The tests have been updated to not rely on finalized payloads in the database, instead using the `MockExecutionLayer` to reconstruct them. Additionally a check was added to `check_chain_dump` which asserts the non-existence or existence of payloads on disk depending on their slot.
2022-09-17 02:27:01 +00:00
Michael Sproul
5b2843c2cd Pre-allocate vectors in SSZ decoding (#3417)
## Issue Addressed

Fixes a potential regression in memory fragmentation identified by @paulhauner here: https://github.com/sigp/lighthouse/pull/3371#discussion_r931770045.

## Proposed Changes

Immediately allocate a vector with sufficient size to hold all decoded elements in SSZ decoding. The `size_hint` is derived from the range iterator here:

2983235650/consensus/ssz/src/decode/impls.rs (L489)
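
A sketch of the pattern, with the decoder details elided:

```rust
// Allocate the full capacity once instead of letting the Vec grow and
// reallocate (and fragment memory) repeatedly as elements are decoded.
fn decode_all<T, E>(
    offsets: &[usize],
    mut decode_one: impl FnMut(usize) -> Result<T, E>,
) -> Result<Vec<T>, E> {
    let mut values = Vec::with_capacity(offsets.len());
    for &offset in offsets {
        values.push(decode_one(offset)?);
    }
    Ok(values)
}
```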

## Additional Info

I'd like to test this out on some infra for a substantial duration to see if it affects total fragmentation.
2022-09-16 11:54:17 +00:00
Paul Hauner
b0b606dabe Use SmallVec for TreeHash packed encoding (#3581)
## Issue Addressed

NA

## Proposed Changes

I've noticed that our block hashing times increase significantly after the merge. I did some flamegraph-ing and noticed that we're allocating a `Vec` for each byte of each execution payload transaction. This seems like unnecessary work and a bit of a fragmentation risk.

This PR switches to `SmallVec<[u8; 32]>` for the packed encoding of `TreeHash`. I believe this is a nice simple optimisation with no downside.
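
The gist of the change, as a sketch using the `smallvec` crate:

```rust
use smallvec::SmallVec;

// Packed encodings of at most 32 bytes stay inline on the stack; only
// longer ones would spill to the heap, so the per-byte `Vec` allocations
// disappear for the common case.
fn packed_encoding(bytes: &[u8]) -> SmallVec<[u8; 32]> {
    let mut packed = SmallVec::new();
    packed.extend_from_slice(bytes);
    packed
}
```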

### Benchmarking

These numbers were computed using #3580 on my desktop (i7 hex-core). You can see a bit of noise in the numbers, that's probably just my computer doing other things. Generally I found this change takes the time from 10-11ms to 8-9ms. I can also see all the allocations disappear from flamegraph.

This is the block being benchmarked: https://beaconcha.in/slot/4704236

#### Before

```
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 980: 10.553003ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 981: 10.563737ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 982: 10.646352ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 983: 10.628532ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 984: 10.552112ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 985: 10.587778ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 986: 10.640526ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 987: 10.587243ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 988: 10.554748ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 989: 10.551111ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 990: 11.559031ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 991: 11.944827ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 992: 10.554308ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 993: 11.043397ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 994: 11.043315ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 995: 11.207711ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 996: 11.056246ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 997: 11.049706ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 998: 11.432449ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 999: 11.149617ms
```

#### After

```
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 980: 14.011653ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 981: 8.925314ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 982: 8.849563ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 983: 8.893689ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 984: 8.902964ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 985: 8.942067ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 986: 8.907088ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 987: 9.346101ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 988: 8.96142ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 989: 9.366437ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 990: 9.809334ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 991: 9.541561ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 992: 11.143518ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 993: 10.821181ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 994: 9.855973ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 995: 10.941006ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 996: 9.596155ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 997: 9.121739ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 998: 9.090019ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 999: 9.071885ms
```

2022-09-16 08:54:06 +00:00
Paul Hauner
bde3c168e2 Add lcli block-root tool (#3580)
## Issue Addressed

NA

## Proposed Changes

Adds a simple tool for computing the block root of some block from a beacon-API or a file. This is useful for benchmarking.

## Additional Info

NA
2022-09-16 08:54:04 +00:00
Paul Hauner
2cd3e3a768 Avoid duplicate committee cache loads (#3574)
## Issue Addressed

NA

## Proposed Changes

I have observed scenarios on Goerli where Lighthouse was receiving attestations which reference the same, un-cached shuffling on multiple threads at the same time. Lighthouse was then loading the same state from database and determining the shuffling on multiple threads at the same time. This is unnecessary load on the disk and RAM.

This PR modifies the shuffling cache so that each entry can be either:

- A committee
- A promise for a committee (i.e., a `crossbeam_channel::Receiver`)

Now, in the scenario where we have thread A and thread B simultaneously requesting the same un-cached shuffling, we will have the following:

1. Thread A will take the write-lock on the shuffling cache, find that there's no cached committee and then create a "promise" (a `crossbeam_channel::Sender`) for a committee before dropping the write-lock.
1. Thread B will then be allowed to take the write-lock for the shuffling cache and find the promise created by thread A. It will block the current thread waiting for thread A to fulfill that promise.
1. Thread A will load the state from disk, obtain the shuffling, send it down the channel, insert the entry into the cache and then continue to verify the attestation.
1. Thread B will then receive the shuffling from the receiver, be un-blocked and then continue to verify the attestation.

In the case where thread A fails to generate the shuffling and drops the sender, the next time that specific shuffling is requested we will detect that the channel is disconnected and return a `None` entry for that shuffling. This will cause the shuffling to be re-calculated.
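
A sketch of the two-variant cache entry, with stand-in types:

```rust
use crossbeam_channel::{bounded, Receiver, Sender};
use std::sync::Arc;

struct Committee; // stand-in for the real committee cache type

// Each cache slot is either a value or a promise for one.
enum CacheItem {
    Committee(Arc<Committee>),
    Promise(Receiver<Arc<Committee>>),
}

impl CacheItem {
    fn wait(self) -> Option<Arc<Committee>> {
        match self {
            CacheItem::Committee(committee) => Some(committee),
            // Blocks until the thread holding the `Sender` fulfils the
            // promise; a disconnected channel means it failed and the
            // caller should recompute the shuffling.
            CacheItem::Promise(receiver) => receiver.recv().ok(),
        }
    }
}

fn new_promise() -> (Sender<Arc<Committee>>, CacheItem) {
    let (sender, receiver) = bounded(1);
    (sender, CacheItem::Promise(receiver))
}
```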

## Additional Info

NA
2022-09-16 08:54:03 +00:00
Paul Hauner
7d3948c8fe Add metric for re-org distance (#3566)
## Issue Addressed

NA

## Proposed Changes

Add a metric to track the re-org distance.

## Additional Info

NA
2022-09-13 17:19:27 +00:00
Michael Sproul
cd31e54b99 Bump axum deps (#3570)
## Issue Addressed

Fix a `cargo-audit` failure. We don't use `axum` for anything besides tests, but `cargo-audit` is failing due to this vulnerability in `axum-core`: https://rustsec.org/advisories/RUSTSEC-2022-0055
2022-09-13 01:57:47 +00:00
realbigsean
614d74a6d4 Fix builder gas limit docs (#3569)
## Issue Addressed

Make sure gas limit examples in our docs represent sane values.

Thanks @dankrad for raising this in discord.

## Additional Info

We could also consider logging warnings about whether the gas limits configured are sane. Prysm has an open issue for this: https://github.com/prysmaticlabs/prysm/issues/10810


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-13 01:57:46 +00:00
Alessandro Tagliapietra
88a7e5a2ca Fix ganache test endpoint for ipv6 machines (#3563)
## Issue Addressed

#3562

## Proposed Changes

Change the fork endpoint from `localhost` to `127.0.0.1` to match the ganache default listening host.
This way it doesn't try (and fail) to connect to `::1` on IPv6 machines.

## Additional Info

First PR
2022-09-13 01:57:45 +00:00
tim gretler
98815516a1 Support histogram buckets (#3391)
## Issue Addressed

#3285

## Proposed Changes

Adds support for specifying histograms with custom buckets, and adds new buckets for the metrics mentioned in the issue.
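
For illustration, custom buckets with the underlying `prometheus` crate (the metric name here is made up, and Lighthouse goes through its own metrics helpers):

```rust
use prometheus::{Histogram, HistogramOpts};

fn block_processing_histogram() -> prometheus::Result<Histogram> {
    let opts = HistogramOpts::new(
        "beacon_block_processing_seconds", // illustrative metric name
        "Time taken to process a block",
    )
    // Buckets tailored to the expected range instead of the defaults.
    .buckets(vec![0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0]);
    Histogram::with_opts(opts)
}
```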

## Additional Info

I could use some help choosing the buckets.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-09-13 01:57:44 +00:00
Rémy Roy
cfa518ab41 Use generic domain for community checkpoint sync example (#3560)
## Proposed Changes

Use a generic domain for community checkpoint sync example to meet the concern raised in https://github.com/sigp/lighthouse/pull/3558#discussion_r966720171
2022-09-10 01:35:11 +00:00
Nils Effinghausen
f682df51a1 fix description for BALANCES_CACHE_MISSES metric (#3545)
## Issue Addressed

fixes metric description


Co-authored-by: Nils Effinghausen <nils.effinghausen@t-systems.com>
2022-09-10 01:35:10 +00:00
Rémy Roy
60e9777db8 Add community checkpoint sync endpoints to book (#3558)
## Proposed Changes

Add a section on the new community checkpoint sync endpoints in the book. This should help stakers sync faster even without having to create an account.
2022-09-09 02:52:35 +00:00
realbigsean
d1a8d6cf91 Pin mev rs deps (#3557)
## Issue Addressed

We were unable to update Lighthouse by running `cargo update` because some of the `mev-build-rs` deps weren't pinned. But `mev-build-rs` is now pinned here and includes its own pinned commits for `ssz-rs` and `ethereum-consensus`.



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-08 23:46:03 +00:00
realbigsean
a9f075c3c0 Remove strict fee recipient (#3552)
## Issue Addressed

Resolves: #3550

Remove the `--strict-fee-recipient` flag. It will cause missed proposals prior to the bellatrix transition.

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-08 23:46:02 +00:00
realbigsean
81d078bfc7 remove strict fee recipient docs (#3551)
## Issue Addressed

Related: #3550

Remove references to the `--strict-fee-recipient` flag in docs. The flag will cause missed proposals prior to the merge transition.



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-08 00:06:25 +00:00
Alexander Cyon
419c53bf24 Add flag 'log-color' preserving color of log redirected to file. (#3538)
Add flag 'log-color' which preserves colors of log when stdout is redirected to a file.

This is my first Lighthouse PR, please let me know if I'm not following contribution guidelines. I welcome meta-feedback (feedback on git commit messages, git branch naming, and the contents of the description of this PR).

## Issue Addressed

Solves https://github.com/sigp/lighthouse/issues/3527

## Proposed Changes

Adding a flag which enables log color preserving when stdout is redirected to a file.

### Usage
Below I demonstrate the current behaviour (without the new flag) and the new behaviour (with the new flag).

In the screenshots below, I have two panes: the top one runs `lighthouse`, redirecting output to the file `~/Desktop/test.log`, and the bottom one runs `tail -f ~/Desktop/test.log`.

#### Current behaviour
```sh
lighthouse --network prater vc |& tee -a ~/Desktop/test.log
```

**Result is no colors**

<img width="1624" alt="current" src="https://user-images.githubusercontent.com/864410/188258226-bfcf8271-4c9e-474c-848e-ac92a60df25c.png">


#### New behaviour
```sh
lighthouse --network prater vc --log-color |& tee -a ~/Desktop/test.log
```

**Result is colors** 🔴🟢🔵🟡

<img width="1624" alt="new" src="https://user-images.githubusercontent.com/864410/188258223-7d9ecf09-92c8-4cba-8f24-bd4d88fc0353.png">

## Additional Info

I chose the American spelling of "color" instead of the British "colour" since that aligns with `slog`'s `force_color()` method; let me know if you prefer "colour" instead. I also chose to make it a flag that takes no argument, just like `logfile-compress`, rather than requiring `--log-color true`.
2022-09-06 05:58:27 +00:00
ZZ
528e150e53 Update graffiti.md (#3537)
fix typo
2022-09-05 08:29:02 +00:00
Michael Sproul
9a7f7f1c1e Configurable monitoring endpoint frequency (#3530)
## Issue Addressed

Closes #3514

## Proposed Changes

- Change default monitoring endpoint frequency to 120 seconds to fit with 30k requests/month limit.
- Allow configuration of the monitoring endpoint frequency using `--monitoring-endpoint-frequency N` where `N` is a value in seconds.
2022-09-05 08:29:00 +00:00
realbigsean
177aef8f1e Builder profit threshold flag (#3534)
## Issue Addressed

Resolves https://github.com/sigp/lighthouse/issues/3517

## Proposed Changes

Adds a `--builder-profit-threshold <wei value>` flag to the BN. If an external payload's value field is less than this value, the local payload will be used. The value of the local payload will not be checked (it can't really be checked until the engine API is updated to support this).
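
The selection rule, as a hedged sketch (plain integers stand in for the real wei values, and the types and names here are illustrative):

```rust
enum Payload {
    Local,
    External,
}

fn choose_payload(external_value_wei: u128, threshold_wei: u128) -> Payload {
    if external_value_wei < threshold_wei {
        // The local payload's value can't be checked yet, so the
        // threshold only gates the externally built payload.
        Payload::Local
    } else {
        Payload::External
    }
}
```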


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-05 04:50:49 +00:00
omahs
95c56630a6 Fixing a few typos / documentation (#3531)
Fixing a few typos in the documentation
2022-09-05 04:50:48 +00:00
realbigsean
cae40731a2 Strict count unrealized (#3522)
## Issue Addressed

Add a flag that can increase `count_unrealized` strictness; it defaults to false.



Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: sean <seananderson33@gmail.com>
2022-09-05 04:50:47 +00:00
MaboroshiChan
f13dd04f42 Add timeout for --checkpoint-sync-url (#3521)
## Issue Addressed

[Have --checkpoint-sync-url timeout](https://github.com/sigp/lighthouse/issues/3478)

## Proposed Changes

I added a parameter to `get_bytes_opt_accept_header<U: IntoUrl>` which accepts a timeout duration, and modified the bodies of `get_beacon_blocks_ssz` and `get_debug_beacon_states_ssz` to pass the corresponding timeout durations.
2022-09-05 04:50:46 +00:00
Mac L
80359d8ddb Fix attestation performance API InvalidValidatorIndex error (#3503)
## Issue Addressed

When requesting an index which is not active during `start_epoch`, Lighthouse returns: 
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/999999999?start_epoch=100000&end_epoch=100000"
```
```json
{
  "code": 500,
  "message": "INTERNAL_SERVER_ERROR: ParticipationCache(InvalidValidatorIndex(999999999))",
  "stacktraces": []
}
```

This error occurs even when the index in question becomes active before `end_epoch` which is undesirable as it can prevent larger queries from completing.

## Proposed Changes

In the event the index is out-of-bounds (has not yet been activated), simply return all fields as `false`:

```
-> curl "http://localhost:5052/lighthouse/analysis/attestation_performance/999999999?start_epoch=100000&end_epoch=100000"
```
```json
[
  {
    "index": 999999999,
    "epochs": {
      "100000": {
        "active": false,
        "head": false,
        "target": false,
        "source": false
      }
    }
  }
]
```

By doing this, we cover the case where a validator becomes active sometime between `start_epoch` and `end_epoch`.

## Additional Info

Note that this error only occurs for epochs after the Altair hard fork.
2022-09-05 04:50:45 +00:00
Divma
473abc14ca Subscribe to subnets only when needed (#3419)
## Issue Addressed

We currently subscribe to attestation subnets as soon as the subscription arrives (one epoch in advance). This PR makes it so that subscriptions for future slots are scheduled rather than acted on immediately.

## Proposed Changes

- Schedule subscriptions to subnets for future slots (see the sketch below).
- Finish removing hashmap_delay, in favor of [delay_map](https://github.com/AgeManning/delay_map). This was the only remaining service to do this.
- Subscriptions for past slots are now rejected; previously we would subscribe for one slot.
- Add a new test for subscriptions that are not consecutive.
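
A sketch of the scheduling idea with plain `tokio` timers (the lead time and types are illustrative; the real implementation uses `delay_map`):

```rust
use std::time::{Duration, Instant};
use tokio::time::sleep_until;

// Rather than acting on a subscription immediately, sleep until shortly
// before the target slot starts and only then subscribe.
async fn schedule_subscription(subnet_id: u64, slot_start: Instant) {
    let advance = Duration::from_secs(6); // hypothetical lead time
    sleep_until(tokio::time::Instant::from_std(slot_start - advance)).await;
    subscribe(subnet_id);
}

fn subscribe(subnet_id: u64) {
    println!("subscribing to attestation subnet {subnet_id}");
}
```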

## Additional Info

This is also an effort to make the code easier to understand.
2022-09-05 00:22:48 +00:00
Paul Hauner
aa022f4685 v3.1.0 (#3525)
## Issue Addressed

NA

## Proposed Changes

- Bump versions

## Additional Info

- ~~Blocked on #3508~~
- ~~Blocked on #3526~~
- ~~Requires additional testing.~~
- Expected release date is 2022-09-01
2022-08-31 22:21:55 +00:00
Pawan Dhananjay
c5785887a9 Log fee recipients in VC (#3526)
## Issue Addressed

Resolves #3524 

## Proposed Changes

Log fee recipient in the `Validator exists in beacon chain` log. Logging in the BN already happens here 18c61a5e8b/beacon_node/beacon_chain/src/beacon_chain.rs (L3858-L3865)

I also think it's good practice to encourage users to set the fee recipient in the VC rather than the BN because of issues mentioned here https://github.com/sigp/lighthouse/issues/3432

Some example logs from prater:
```
Aug 30 03:47:09.922 INFO Validator exists in beacon chain        fee_recipient: 0xab97_ad88, validator_index: 213615, pubkey: 0xb542b69ba14ddbaf717ca1762ece63a4804c08d38a1aadf156ae718d1545942e86763a1604f5065d4faa550b7259d651, service: duties


Aug 30 03:48:05.505 INFO Validator exists in beacon chain        fee_recipient: Fee recipient for validator not set in validator_definitions.yml or provided with the `--suggested-fee-recipient flag`, validator_index: 210710, pubkey: 0xad5d67cc7f990590c7b3fa41d593c4cf12d9ead894be2311fbb3e5c733d8c1b909e9d47af60ea3480fb6b37946c35390, service: duties
```


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-08-30 05:47:32 +00:00
Paul Hauner
661307dce1 Separate committee subscriptions queue (#3508)
## Issue Addressed

NA

## Proposed Changes

As we've seen on Prater, there seems to be a correlation between these messages

```
WARN Not enough time for a discovery search  subnet_id: ExactSubnet { subnet_id: SubnetId(19), slot: Slot(3742336) }, service: attestation_service
```

... and nodes falling 20-30 slots behind the head for short periods. These nodes are running ~20k Prater validators.

After running some metrics, I can see that the `network_recv` channel is processing ~250k `AttestationSubscribe` messages per minute. It occurred to me that perhaps the `AttestationSubscribe` messages are "washing out" the `SendRequest` and `SendResponse` messages. In this PR I separate the `AttestationSubscribe` and `SyncCommitteeSubscribe` messages into their own queue so the `tokio::select!` in the `NetworkService` can still process the other messages in the `network_recv` channel without necessarily having to clear all the subscription messages first.

~~I've also added filter to the HTTP API to prevent duplicate subscriptions going to the network service.~~

## Additional Info

- Currently being tested on Prater
2022-08-30 05:47:31 +00:00
realbigsean
ebd0e0e2d9 Docker builds in GitHub actions (#3523)
## Issue Addressed

I think Antithesis is failing due to an OOM, which may be resolved by updating the Ubuntu image it runs on. The lcli build looks like it's failing because the image lacks the `libclang` dependency.



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-29 18:31:27 +00:00
Michael Sproul
7a50684741 Harden slot notifier against clock drift (#3519)
## Issue Addressed

Partly resolves #3518

## Proposed Changes

Change the slot notifier to use `duration_to_next_slot` rather than an interval timer. This makes it robust against underlying clock changes.
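
A sketch of why this is robust: the delay is recomputed from the wall clock on every tick, so any clock adjustment is absorbed at the next slot (`genesis` and `slot_duration` are illustrative parameters):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn duration_to_next_slot(genesis: Duration, slot_duration: Duration) -> Option<Duration> {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?;
    let since_genesis = now.checked_sub(genesis)?;
    // How far we are into the current slot, taken fresh from the clock.
    let into_slot = Duration::from_millis(
        (since_genesis.as_millis() % slot_duration.as_millis()) as u64,
    );
    Some(slot_duration - into_slot)
}
```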
2022-08-29 14:34:43 +00:00
Paul Hauner
1a833ecc17 Add more logging for invalid payloads (#3515)
## Issue Addressed

NA

## Proposed Changes

Adds more `debug` logging to help troubleshoot invalid execution payload blocks. I was doing some of this recently and found it to be challenging.

With this PR we should be able to grep `Invalid execution payload` and get one-liners that will show the block, slot and details about the proposer.

I also changed the log in `process_invalid_execution_payload` since it was a little misleading; the `block_root` wasn't necessarily the block which had an invalid payload.

## Additional Info

NA
2022-08-29 14:34:42 +00:00
Paul Hauner
8609cced0e Reset payload statuses when resuming fork choice (#3498)
## Issue Addressed

NA

## Proposed Changes

This PR is motivated by a recent consensus failure in Geth where it returned `INVALID` for a `VALID` block. Without this PR, the only way to recover is by re-syncing Lighthouse. Whilst ELs "shouldn't have consensus failures", in reality it's something that we can expect from time to time due to the complex nature of Ethereum. Being able to recover easily will help the network recover and EL devs to troubleshoot.

The risk introduced with this PR is that genuinely INVALID payloads get a "second chance" at being imported. I believe the DoS risk here is negligible since LH needs to be restarted in order to re-process the payload. Furthermore, there's no reason to think that a well-performing EL will accept a truly invalid payload the second-time-around.

## Additional Info

This implementation has the following intricacies:

1. Instead of just resetting *invalid* payloads to optimistic, we'll also reset *valid* payloads. This is an artifact of our existing implementation.
1. We will only reset payload statuses when we detect an invalid payload present in `proto_array`
    - This helps save us from forgetting that all our blocks are valid in the "best case scenario" where there are no invalid blocks.
1. If we fail to revert the payload statuses we'll log a `CRIT` and just continue with a `proto_array` that *does not* have reverted payload statuses.
    - The code to revert statuses needs to deal with balances and proposer-boost, so it's a failure point. This is a defensive measure to avoid introducing new show-stopping bugs to LH.
2022-08-29 14:34:41 +00:00
realbigsean
2ce86a0830 Validator registration request failures do not cause us to mark BNs offline (#3488)
## Issue Addressed

Relates to https://github.com/sigp/lighthouse/issues/3416

## Proposed Changes

- Add an `OfflineOnFailure` enum to the `first_success` method for querying beacon nodes, so that a validator registration request failure from the BN -> builder does not result in the BN being marked offline (see the sketch after the log excerpt below). This seems important because these failures could be coming directly from a connected relay and actually have no bearing on BN health.  Other messages that are sent to a relay have a local fallback so shouldn't result in errors

- Downgrade the following log to a `WARN`

```
ERRO Unable to publish validator registrations to the builder network, error: All endpoints failed https://BN_B => RequestFailed(ServerMessage(ErrorMessage { code: 500, message: "UNHANDLED_ERROR: BuilderMissing", stacktraces: [] })), https://XXXX/ => Unavailable(Offline), [omitted]
```
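
A sketch of the `OfflineOnFailure` idea from the first bullet, with illustrative types:

```rust
#[derive(Clone, Copy)]
enum OfflineOnFailure {
    Yes,
    No,
}

struct CandidateBeaconNode {
    online: bool,
}

impl CandidateBeaconNode {
    fn mark_offline(&mut self) {
        self.online = false;
    }
}

fn on_request_failed(offline_on_failure: OfflineOnFailure, node: &mut CandidateBeaconNode) {
    match offline_on_failure {
        // Ordinary requests: the BN itself failed, so mark it offline.
        OfflineOnFailure::Yes => node.mark_offline(),
        // Builder registrations: the failure may come from a relay,
        // so it says nothing about the BN's health.
        OfflineOnFailure::No => {}
    }
}
```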

## Additional Info

I think this change at least improves the UX of having a VC connected to some builder and some non-builder beacon nodes. I think we need to balance potentially alerting users that there is a BN <> VC misconfiguration and also allowing this type of fallback to work. 

If we want to fully support this type of configuration we may want to consider adding a flag `--builder-beacon-nodes` and track whether a VC should be making builder queries on a per-beacon node basis.  But I think the changes in this PR are independent of that type of extension.

PS: Sorry for the big diff here, it's mostly formatting changes after I added a new arg to a bunch of methods calls.




Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-29 11:35:59 +00:00
Michael Sproul
66eca1a882 Refactor op pool for speed and correctness (#3312)
## Proposed Changes

This PR has two aims: to speed up attestation packing in the op pool, and to fix bugs in the verification of attester slashings, proposer slashings and voluntary exits. The changes are bundled into a single database schema upgrade (v12).

Attestation packing is sped up by removing several inefficiencies: 

- No more recalculation of `attesting_indices` during packing.
- No (unnecessary) examination of the `ParticipationFlags`: a bitfield suffices. See `RewardCache`.
- No re-checking of attestation validity during packing: the `AttestationMap` provides attestations which are "correct by construction" (I have checked this using Hydra).
- No SSZ re-serialization for the clunky `AttestationId` type (it can be removed in a future release).

So far the speed-up seems to be roughly 2-10x, from 500ms down to 50-100ms.

Verification of attester slashings, proposer slashings and voluntary exits is fixed by:

- Tracking the `ForkVersion`s that were used to verify each message inside the `SigVerifiedOp`. This allows us to quickly re-verify that they match the head state's opinion of what the `ForkVersion` should be at the epoch(s) relevant to the message.
- Storing the `SigVerifiedOp` on disk rather than the raw operation. This allows us to continue to track the fork versions after a reboot.

This is mostly contained in this commit 52bb1840ae.
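
The idea in miniature, with stand-in types:

```rust
type ForkVersion = [u8; 4]; // stand-in for the real fork version type

// Remember which fork version(s) a signature was verified against, and
// only trust the cached verification while the head state agrees.
struct SigVerifiedOp<T> {
    op: T,
    verified_against: Vec<ForkVersion>,
}

impl<T> SigVerifiedOp<T> {
    fn still_valid(&self, head_fork_versions: &[ForkVersion]) -> bool {
        self.verified_against.len() == head_fork_versions.len()
            && self
                .verified_against
                .iter()
                .zip(head_fork_versions)
                .all(|(cached, head)| cached == head)
    }
}
```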

## Additional Info

The schema upgrade uses the justified state to re-verify attestations and compute `attesting_indices` for them. It will drop any attestations that fail to verify, by the logic that attestations are most valuable in the few slots after they're observed, and are probably stale and useless by the time a node restarts. Exits and proposer slashings are similarly re-verified to obtain `SigVerifiedOp`s.

This PR contains a runtime killswitch `--paranoid-block-proposal` which opts out of all the optimisations in favour of closely verifying every included message. Although I'm quite sure that the optimisations are correct this flag could be useful in the event of an unforeseen emergency.

Finally, you might notice that the `RewardCache` appears quite useless in its current form because it is only updated on the hot-path immediately before proposal. My hope is that in future we can shift calls to `RewardCache::update` into the background, e.g. while performing the state advance. It is also forward-looking to `tree-states` compatibility, where iterating and indexing `state.{previous,current}_epoch_participation` is expensive and needs to be minimised.
2022-08-29 09:10:26 +00:00
Michael Sproul
1c9ec42dcb More merge doc updates (#3509)
## Proposed Changes

Address a few shortcomings of the book noticed by users:

- Remove description of redundant execution nodes
- Use an Infura eth1 node rather than an eth2 node in the merge migration example
- Add an example of the fee recipient address format (we support addresses without the 0x prefix, but 0x prefixed feels more canonical).
- Clarify that Windows support is no longer beta
- Add a link to the MSRV to the build-from-source instructions
2022-08-26 21:47:50 +00:00
Paul Hauner
c64e17bb81 Return readonly: false for local keystores (#3490)
## Issue Addressed

NA

## Proposed Changes

Indicate that local keystores are `readonly: Some(false)` rather than `None` via the `/eth/v1/keystores` method on the VC API.

I'll mark this as backwards-incompat so we remember to mention it in the release notes. There aren't any type-level incompatibilities here, just a change in how Lighthouse responds to requests.

## Additional Info

- Blocked on #3464
2022-08-24 23:35:00 +00:00
Paul Hauner
ebd661783e Enable block_lookup_failed EF test (#3489)
## Issue Addressed

Resolves #3448

## Proposed Changes

Removes a known failure that wasn't actually a known failure. The tests declare this block invalid and we refuse to import it due to `ExecutionPayloadError(UnverifiedNonOptimisticCandidate)`.

This is correct since there is only one "eth1" block included in this test and two are required to trigger the merge (pre- and post-TTD blocks). It is slot 1 (tick = 12s) when this block is imported so the import must be prevented by `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY`.

I'm not sure where I got the idea in #3448 that this test needed retrospective checking, that seems like a false assumption in hindsight.

## Additional Info

- Blocked on #3464
2022-08-24 23:34:59 +00:00
realbigsean
cb132c622d don't register exited or slashed validators with the builder api (#3473)
## Issue Addressed

#3465

## Proposed Changes

Filter out any validator registrations for validators that are not `active` or `pending`. I'm adding this filtering in the beacon node because all the information is readily available there. In other parts of the VC we are usually sending per-validator requests based on duties from the BN, and duties will only be provided for active validators, so we don't have this type of filtering elsewhere in the VC.



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-24 23:34:58 +00:00
Divma
8c69d57c2c Pause sync when EE is offline (#3428)
## Issue Addressed

#3032

## Proposed Changes

Pause sync when ee is offline. Changes include three main parts:
- Online/offline notification system
- Pause sync
- Resume sync

#### Online/offline notification system
- The engine state is now guarded behind a new struct `State` that ensures every change is correctly notified. Notifications are only sent if the state changes. The new `State` is behind a `RwLock` (as before) as the synchronization mechanism.
- The actual notification channel is a [tokio::sync::watch](https://docs.rs/tokio/latest/tokio/sync/watch/index.html) which ensures only the last value is in the receiver channel. This way we don't need to worry about message order etc.
- Sync waits for state changes concurrently with normal messages.
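
A minimal sketch of the watch-channel pattern described above:

```rust
use tokio::sync::watch;

#[derive(Clone, Copy, PartialEq, Debug)]
enum EngineState {
    Online,
    Offline,
}

#[tokio::main]
async fn main() {
    // Only the most recent state is kept: a late receiver never sees a
    // stale backlog, which is exactly what sync needs here.
    let (tx, mut rx) = watch::channel(EngineState::Online);

    tokio::spawn(async move {
        let _ = tx.send(EngineState::Offline);
    });

    // Sync waits for state changes concurrently with its normal
    // messages (e.g. inside a `tokio::select!`).
    rx.changed().await.unwrap();
    assert_eq!(*rx.borrow(), EngineState::Offline);
}
```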

#### Pause Sync
Sync has four components, pausing is done differently in each:
- **Block lookups**: Disabled while in this state. We drop current requests and don't search for new blocks. Block lookups are infrequent and I don't think it's worth the extra logic of keeping these and delaying processing. If we later see that this is required, we can add it.
- **Parent lookups**: Disabled while in this state. We drop current requests and don't search for new parents. Parent lookups are even less frequent and I don't think it's worth the extra logic of keeping these and delaying processing. If we later see that this is required, we can add it.
- **Range**: Chains don't send batches for processing to the beacon processor. This is easily done by guarding the channel to the beacon processor and giving it access only if the ee is responsive. I find this the simplest and most powerful approach since we don't need to deal with new sync states and chain segments that are added while the ee is offline will follow the same logic without needing to synchronize a shared state among those. Another advantage of passive pause vs active pause is that we can still keep track of active advertised chain segments so that on resume we don't need to re-evaluate all our peers.
- **Backfill**: Not affected by ee states, we don't pause.

#### Resume Sync
- **Block lookups**: Enabled again.
- **Parent lookups**: Enabled again.
- **Range**: Active resume. Since the only real pause range does is not sending batches for processing, resume makes all chains that are holding read-for-processing batches send them.
- **Backfill**: Not affected by ee states, no need to resume.

## Additional Info

**QUESTION**: Originally I made this notify on changes to the synced state, but @pawanjay176, in talks with @paulhauner, concluded we only need to check online/offline states. The upcheck function mentions extra checks to keep a very up-to-date sync status to aid the networking stack. However, the only need the networking stack has is this one. I added a TODO to review whether the extra check can be removed.

Next gen of #3094

Will work best with #3439 

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
2022-08-24 23:34:56 +00:00
Michael Sproul
aab4a8d2f2 Update docs for mainnet merge release (#3494)
## Proposed Changes

Update the merge migration docs to encourage updating mainnet configs _now_!

The docs are also updated to recommend _against_ `--suggested-fee-recipient` on the beacon node (https://github.com/sigp/lighthouse/issues/3432).

Additionally the `--help` for the CLI is updated to match with a few small semantic changes:

- `--execution-jwt` is no longer allowed without `--execution-endpoint`. We've ended up without a default for `--execution-endpoint`, so I think that's fine.
- The flags related to the JWT are only allowed if `--execution-jwt` is provided.
2022-08-23 03:50:58 +00:00
Paul Hauner
18c61a5e8b v3.0.0 (#3464)
## Issue Addressed

NA

## Proposed Changes

Bump versions to v3.0.0

## Additional Info

- ~~Blocked on #3439~~
- ~~Blocked on #3459~~
- ~~Blocked on #3463~~
- ~~Blocked on #3462~~
- ~~Requires further testing~~


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2022-08-22 03:43:08 +00:00
Paul Hauner
931153885c Run per-slot fork choice at a further distance from the head (#3487)
## Issue Addressed

NA

## Proposed Changes

Run fork choice when the head is 256 slots from the wall-clock slot, rather than 4.

The reason we don't *always* run FC is so that it doesn't slow us down during sync. As the comments state, setting the value to 256 means that we'd only have one interrupting fork-choice call if we were syncing at 20 slots/sec.

## Additional Info

NA
2022-08-19 04:27:24 +00:00
Paul Hauner
df358b864d Add metrics for EE PayloadStatus returns (#3486)
## Issue Addressed

NA

## Proposed Changes

Adds some metrics so we can track payload status responses from the EE. I think this will be useful for troubleshooting and alerting.

I also bumped the `BeaconChain::per_slot_task` log to `debug` since it doesn't seem too noisy and would have helped us with some things we were debugging in the past.

## Additional Info

NA
2022-08-19 04:27:23 +00:00
Paul Hauner
043fa2153e Revise EE peer penalties (#3485)
## Issue Addressed

NA

## Proposed Changes

Don't penalize peers for errors that might be caused by an honest optimistic node.

## Additional Info

NA
2022-08-19 04:27:22 +00:00
Paul Hauner
a0605c4ee6 Bump EF tests to v1.2.0 rc.3 (#3483)
## Issue Addressed

NA

## Proposed Changes

Bumps test vectors and ignores another weird MacOS file.

## Additional Info

NA
2022-08-19 04:27:21 +00:00
Mac L
726d1b0d9b Unblock CI by updating git submodules directly in execution integration tests (#3479)
## Issue Addressed

Recent changes to the Nethermind codebase removed the `rocksdb` git submodule in favour of a `nuget` package.
This appears to have broken our ability to build the latest release of Nethermind inside our integration tests.

## Proposed Changes

~Temporarily pin the version used for the Nethermind integration tests to `master`. This ensures we use the packaged version of `rocksdb`. This is only necessary until a new release of Nethermind is available.~

Use `git submodule update --init --recursive` to ensure the required submodules are pulled before building.

Co-authored-by: Diva M <divma@protonmail.com>
2022-08-19 04:27:20 +00:00
Michael Sproul
c2604c47d6 Optimistic sync: remove justified block check (#3477)
## Issue Addressed

Implements spec change https://github.com/ethereum/consensus-specs/pull/2881

## Proposed Changes

Remove the justified block check from `is_optimistic_candidate_block`.
2022-08-17 02:36:41 +00:00
Paul Hauner
7664776fc4 Add test for exits spanning epochs (#3476)
## Issue Addressed

NA

## Proposed Changes

Adds a test that was written whilst doing some testing. This PR does not make changes to production code, it just adds a test for already existing functionality.

## Additional Info

NA
2022-08-17 02:36:40 +00:00
Michael Sproul
8255c8682e Align engine API timeouts with spec (#3470)
## Proposed Changes

Match the timeouts from the `execution-apis` spec. Our existing values were already quite close so I don't imagine this change to be very disruptive.

The spec sets the timeout for `engine_getPayloadV1` to only 1 second, but we were already using a longer value of 2 seconds. I've kept the 2 second timeout as I don't think there's any need to fail faster when producing a payload.

There's no timeout specified for `eth_syncing` so I've matched it to the shortest timeout from the spec (1 second). I think the previous value of 250ms was likely too low and could have been contributing to spurious timeouts, particularly for remote ELs.

## Additional Info

The timeouts are defined on each endpoint in this document: https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md
2022-08-17 02:36:39 +00:00
Paul Hauner
d9d1288156 Add mainnet merge values 🐼 (#3462)
## Issue Addressed

NA

## Proposed Changes

Adds **tentative** values for the merge TTD and Bellatrix as per https://github.com/ethereum/consensus-specs/pull/2969

## Additional Info

- ~~Blocked on https://github.com/ethereum/consensus-specs/pull/2969~~
2022-08-17 02:36:38 +00:00
Michael Sproul
e5fc9f26bc Log if no execution endpoint is configured (#3467)
## Issue Addressed

Fixes an issue whereby syncing a post-merge network without an execution endpoint would silently stall. Sync swallows the errors from block verification so previously there was no indication in the logs for why the node couldn't sync.

## Proposed Changes

Add an error log to the merge-readiness notifier for the case where the merge has already completed but no execution endpoint is configured.
2022-08-15 01:31:02 +00:00
Michael Sproul
25e3dc9300 Fix block verification and checkpoint sync caches (#3466)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2962

## Proposed Changes

Build all caches on the checkpoint state before storing it in the database.

Additionally, fix a bug in `signature_verify_chain_segment` which prevented block verification from succeeding unless the previous epoch cache was already built. The previous epoch cache is required to verify the signatures of attestations included from previous epochs, even when all the blocks in the segment are from the same epoch.

The comments around `signature_verify_chain_segment` have also been updated to reflect the fact that it should only be used on a chain of blocks from a single epoch. I believe this restriction had already been added at some point in the past and the current comments were just outdated (and I think this limitation is essential, because the proposer shuffling can change in the next epoch based on the blocks applied in the current epoch).
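
The shape of the checkpoint-state fix, as a hedged sketch with stand-in types (the real Lighthouse APIs differ):

```rust
// Stand-ins for illustration only.
struct BeaconState;
struct Database;

impl BeaconState {
    fn build_all_caches(&mut self) {
        // Build the committee caches (previous/current/next epoch) and the
        // other per-state caches so later readers never see them missing.
    }
}

impl Database {
    fn store_checkpoint_state(&self, _state: &BeaconState) {}
}

fn import_checkpoint_state(mut state: BeaconState, db: &Database) {
    // Building every cache *before* storing means block verification and
    // the HTTP API can rely on the caches being present.
    state.build_all_caches();
    db.store_checkpoint_state(&state);
}
```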
2022-08-15 01:31:00 +00:00
Paul Hauner
f03f9ba680 Increase merge-readiness lookahead (#3463)
## Issue Addressed

NA

## Proposed Changes

Start issuing merge-readiness logs 2 weeks before the Bellatrix fork epoch. Additionally, if the Bellatrix epoch is specified and the user has configured an EL, always log merge-readiness logs; this should benefit pro-active users.

### Lookahead Reasoning

- Bellatrix fork is:
    - epoch 144896
    - slot 4636672
    - Unix timestamp: `1606824023 + (4636672 * 12) = 1662464087`
    - GMT: Tue Sep 06 2022 11:34:47 GMT+0000
- Warning start time is:
    - Unix timestamp: `1662464087 - 604800 * 2 = 1661254487`
    - GMT: Tue Aug 23 2022 11:34:47 GMT+0000
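
As a runnable sanity check of the arithmetic above (all constants are the figures from this list):

```rust
fn main() {
    const GENESIS_TIME: u64 = 1_606_824_023; // mainnet genesis timestamp
    const SECONDS_PER_SLOT: u64 = 12;
    const BELLATRIX_SLOT: u64 = 4_636_672;
    const SECONDS_PER_WEEK: u64 = 604_800;

    let fork_time = GENESIS_TIME + BELLATRIX_SLOT * SECONDS_PER_SLOT;
    let warning_start = fork_time - 2 * SECONDS_PER_WEEK;

    assert_eq!(fork_time, 1_662_464_087); // Tue Sep 06 2022 11:34:47 GMT
    assert_eq!(warning_start, 1_661_254_487); // Tue Aug 23 2022 11:34:47 GMT
}
```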

The [current expectation](https://discord.com/channels/595666850260713488/745077610685661265/1007445305198911569) is that EL and CL clients will have releases out by Aug 22nd at the latest, then an EF announcement will go out on the 23rd. If all goes well, LH will start alerting users about merge-readiness just after the announcement.

## Additional Info

NA
2022-08-15 01:30:59 +00:00
realbigsean
dd93aa8701 Standard gas limit api (#3450)
## Issue Addressed

Resolves https://github.com/sigp/lighthouse/issues/3403

## Proposed Changes

Implements https://ethereum.github.io/keymanager-APIs/#/Gas%20Limit

## Additional Info

N/A

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-15 01:30:58 +00:00
Michael Sproul
92d597ad23 Modularise slasher backend (#3443)
## Proposed Changes

Enable multiple database backends for the slasher, either MDBX (default) or LMDB. The backend can be selected using `--slasher-backend={lmdb,mdbx}`.

## Additional Info

In order to abstract over the two libraries' different handling of database lifetimes I've used `Box::leak` to give the `Environment` type a `'static` lifetime. This was the only way I could think of using 100% safe code to construct a self-referential struct `SlasherDB`, where the `OpenDatabases` refers to the `Environment`. I think this is OK, as the `Environment` is expected to live for the life of the program, and both database engines leave the database in a consistent state after each write. The memory claimed for memory-mapping will be freed by the OS and appropriately flushed regardless of whether the `Environment` is actually dropped.

We are depending on two `sigp` forks of `libmdbx-rs` and `lmdb-rs`, to give us greater control over MDBX OS support and LMDB's version.
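
A minimal sketch of the `Box::leak` trick, with stand-in types (the real slasher types are richer):

```rust
struct Environment;
struct OpenDatabases<'env> {
    env: &'env Environment,
}

// No self-referential struct needed: both fields borrow the leaked
// 'static environment.
struct SlasherDb {
    env: &'static Environment,
    dbs: OpenDatabases<'static>,
}

fn open_slasher_db() -> SlasherDb {
    // Leaking grants a 'static lifetime; the environment lives for the
    // rest of the program and the OS reclaims mapped memory on exit.
    let env: &'static Environment = Box::leak(Box::new(Environment));
    SlasherDb {
        env,
        dbs: OpenDatabases { env },
    }
}
```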
2022-08-15 01:30:56 +00:00
Pawan Dhananjay
71fd0b42f2 Fix lints for Rust 1.63 (#3459)
## Issue Addressed

N/A

## Proposed Changes

Fix clippy lints for the latest Rust version, 1.63. I have allowed the [derive_partial_eq_without_eq](https://rust-lang.github.io/rust-clippy/master/index.html#derive_partial_eq_without_eq) lint, as satisfying it would result in more code that we might not want, and I feel it's not required.

Happy to fix this lint across lighthouse if required though.
2022-08-12 00:56:39 +00:00
Divma
f4ffa9e0b4 Handle processing results of non faulty batches (#3439)
## Issue Addressed
Solves #3390 

So after checking some logs @pawanjay176 got, we concluded that this happened because we blacklisted a chain after trying it "too much". Here, in all occurrences, "too much" means we got too many download failures. This happened very slowly, precisely because the batch is allowed to stay alive for a very long time when penalties aren't counted while the EE is offline. The error, then, was not that the batch failed because of offline-EE errors, but that we blacklisted a chain because of download errors, which we can't pin on the chain, only on the peer. This PR fixes that.

## Proposed Changes

Adds a missing piece of logic so that if a chain fails for errors that can't be attributed to objectively bad behavior from the peer, it is not blacklisted. The issue at hand occurred when new peers arrived claiming a head that had been wrongfully blacklisted, even though the original peers participating in the chain were not penalized.

Another notable change is that we need to consider a batch invalid if it processed correctly but its next non-empty batch fails processing. Since a batch can now fail processing in non-empty ways, there is no need to mark previous batches as invalid.

Improves some logging as well.

## Additional Info

We should do this regardless of pausing sync on ee offline/unsynced state. This is because I think it's almost impossible to ensure a processing result will reach in a predictable order with a synced notification from the ee. Doing this handles what I think are inevitable data races when we actually pause sync

This also fixes a return that reports which batch failed and caused us some confusion checking the logs
2022-08-12 00:56:38 +00:00
realbigsean
a476ae4907 Linkcheck fix (#3452)
## Issue Addressed

I think we're running into this in our linkcheck, so I'm going to first verify that linkcheck fails on the current version, and then try downgrading it to see if it passes: https://github.com/chronotope/chrono/issues/755

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-11 10:08:36 +00:00
Alex Wied
e0f86588e6 lighthouse_version: Fix version string regex (#3451)
## Issue Addressed

N/A

## Proposed Changes

If the build tree is not a git repository, the unit test will fail. This PR fixes the issue.

## Additional Info

N/A
2022-08-11 07:50:32 +00:00
Paul Hauner
4fc0cb121c Remove some "wontfix" TODOs for the merge (#3449)
## Issue Addressed

NA

## Proposed Changes

Removes three types of TODOs:

1. `execution_layer/src/lib.rs`: It was [determined](https://github.com/ethereum/consensus-specs/issues/2636#issuecomment-988688742) that there is no action required here.
2. `beacon_processor/worker/gossip_methods.rs`: Removed TODOs relating to peer scoring that have already been addressed via `epe.penalize_peer()`.
    - It seems `cargo fmt` wanted to adjust some things here as well 🤷 
3. `proto_array_fork_choice.rs`: it would be nice to remove that useless `bool` for cleanliness, but I don't think it's something we need to do and the TODO just makes things look messier IMO.


## Additional Info

There should be no functional changes to the code in this PR.

There are still some TODOs lingering, those ones require actual changes or more thought.
2022-08-10 13:06:46 +00:00
Michael Sproul
4e05f19fb5 Serve Bellatrix preset in BN API (#3425)
## Issue Addressed

Resolves #3388
Resolves #2638

## Proposed Changes

- Return the `BellatrixPreset` on `/eth/v1/config/spec` by default.
- Allow users to opt out of this by providing `--http-spec-fork=altair` (unless there's a Bellatrix fork epoch set).
- Add the Altair constants from #2638 and make serving the constants non-optional (the `http-disable-legacy-spec` flag is deprecated).
- Modify the VC to only read the `Config` and not to log extra fields. This prevents it from having to muck around parsing the `ConfigAndPreset` fields it doesn't need.

## Additional Info

This change is backwards-compatible for the VC and the BN, but is marked as a breaking change for the removal of `--http-disable-legacy-spec`.

I tried making `Config` a `superstruct` too, but getting the automatic decoding to work was a huge pain and was going to require a lot of hacks, so I gave up in favour of keeping the default-based approach we have now.
2022-08-10 07:52:59 +00:00
Pawan Dhananjay
c25934956b Remove INVALID_TERMINAL_BLOCK (#3385)
## Issue Addressed

Resolves #3379 

## Proposed Changes

Remove instances of `InvalidTerminalBlock` in lighthouse and use 
`Invalid {latest_valid_hash: "0x0000000000000000000000000000000000000000000000000000000000000000"}` 
to represent that status.
2022-08-10 07:52:58 +00:00
Paul Hauner
2de26b20f8 Don't return errors on HTTP API for already-known messages (#3341)
## Issue Addressed

- Resolves #3266

## Proposed Changes

Return 200 OK rather than an error when a block, attestation or sync message is already known.

Presently, we log and return an error, which causes a BN to go "offline" from the VC's perspective, which in turn causes the fallback mechanism to do work trying to avoid and upcheck offline nodes. This can be observed as instability in the `vc_beacon_nodes_available_count` metric.

The current behaviour also causes scary logs for the user. There's nothing to *actually* be concerned about when we see duplicate messages; this can happen on fallback systems (see code comments).
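
A minimal sketch of the idea (the error type here is a stand-in; Lighthouse's real error variants are named differently):

```rust
// Stand-in error type for illustration only.
enum PublishError {
    AlreadyKnown,
    Invalid(String),
}

fn http_status(result: Result<(), PublishError>) -> u16 {
    match result {
        // Duplicates are harmless (expected with fallback VC setups), so
        // report success rather than pushing the BN "offline" for the VC.
        Ok(()) | Err(PublishError::AlreadyKnown) => 200,
        Err(PublishError::Invalid(_)) => 400,
    }
}
```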

## Additional Info

NA
2022-08-10 07:52:57 +00:00
Brendan Timmons
052d5cf31f fix: incorrectly formatted MEV link in Lighthouse Book (#3434)
## Issue Addressed

N/A

## Proposed Changes

Simply fix the incorrect formatting of a markdown link.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-08-09 06:05:17 +00:00
realbigsean
6f13727fbe Don't use the builder network if the head is optimistic (#3412)
## Issue Addressed

Resolves https://github.com/sigp/lighthouse/issues/3394

Adds a check in `is_healthy` about whether the head is optimistic when choosing whether to use the builder network. 



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-09 06:05:16 +00:00
Paul Hauner
5bb4aada92 Update Prater ENRs (#3396)
## Issue Addressed

NA

## Proposed Changes

Update bootnodes for Prater. There are new IP addresses for the Sigma Prime nodes. Teku and Nimbus nodes were also added.

## Additional Info

Related: 24760cd4b4
2022-08-09 06:05:15 +00:00
Paul Hauner
a688621919 Add support for beaconAPI in lcli functions (#3252)
## Issue Addressed

NA

## Proposed Changes

Modifies `lcli skip-slots` and `lcli transition-blocks` to allow them to source blocks/states from a beaconAPI, and also gives them some more features to assist with benchmarking.

## Additional Info

Breaks the current `lcli skip-slots` and `lcli transition-blocks` APIs by changing some flag names. It should be simple enough to figure out the changes via `--help`.

Currently blocked on #3263.
2022-08-09 06:05:13 +00:00
kayla-henrie
68bd7cae21 [Contribution docs] Add GitPOAP Badge to Display Number of Minted GitPOAPs for Contributors (#3343)
## Issue Addressed - N/A

## Proposed Changes

Adding badge to contribution docs that shows the number of minted GitPOAPs

## Additional Info

Hey all, this PR adds a [GitPOAP Badge](https://docs.gitpoap.io/api#get-v1repoownernamebadge) to the contribution docs that displays the number of minted GitPOAPs for this repository by contributors to this repo.

You can see an example of this in [our Documentation repository](https://github.com/gitpoap/gitpoap-docs#gitpoap-docs).

This should help would-be contributors as well as existing contributors find out that they will/have received GitPOAPs for their contributions.

CC: @colfax23 @kayla-henrie

Replaces: https://github.com/sigp/lighthouse/pull/3330

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-08-09 02:27:04 +00:00
realbigsean
e26004461f Don't attempt to register validators that are pre-activation (#3441)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3440

## Proposed Changes

Don't consider pre-activation validators for validator registration. 



Co-authored-by: sean <seananderson33@gmail.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-08-08 23:57:00 +00:00
Kirill
aba5225147 crypto/bls: make blst dependency optional (#3387)
## Issue Addressed

#3386 

## Proposed Changes

* make `blst` crate `optional`
* include `blst` dependency into `supranational` feature
* hide `blst`-related code with `supranational` feature
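
Illustratively, the gating in the Rust source looks like this (with `Cargo.toml` declaring `blst` as an optional dependency pulled in by the `supranational` feature):

```rust
// Compiled only when the `supranational` feature (and thus `blst`) is
// enabled.
#[cfg(feature = "supranational")]
mod blst_backend {
    // blst-backed BLS implementations live here.
}

// Fallback path for builds without the `supranational` feature.
#[cfg(not(feature = "supranational"))]
mod other_backend {
    // An alternative BLS backend lives here.
}
```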

Co-authored-by: Kirill <kirill@aurora.dev>
2022-08-08 23:56:59 +00:00
Michael Sproul
6bc4a2cc91 Update invalid head tests (#3400)
## Proposed Changes

Update the invalid head tests so that they work with the current default fork choice configuration.

Thanks @realbigsean for fixing the persistence test and the EF tests.

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-05 23:41:09 +00:00
Michael Sproul
83666e04fd Expand merge migration docs (#3430)
## Issue Addressed

Resolves #3424

## Proposed Changes

This PR expands the merge migration docs to include (hopefully) clearer guidance on the steps required. It's inspired by @winksaville's work in #3426 but takes a more drastic approach to rewriting large sections.

* Add a prominent _When?_ section
* Add links to execution engine configuration guides
* Add links to community guides
* Fix the location of the _Strict fee recipient_ section
2022-08-05 06:46:59 +00:00
Mac L
5d317779bb Ensure validator/blinded_blocks/{slot} endpoint conforms to spec (#3429)
## Issue Addressed

#3418

## Proposed Changes

- Remove `eth/v2/validator/blinded_blocks/{slot}` as this endpoint does not exist in the spec.
- Return `version` in the `eth/v1/validator/blinded_blocks/{slot}` endpoint.

## Additional Info

Since this removes the `v2` endpoint, this is *technically* a breaking change, but as this endpoint does not exist in the spec, users may or may not be relying on it.

Depending on what we feel is appropriate, I'm happy to edit this so we keep the `v2` endpoint for now but simply bring the `v1` endpoint in line with `v2`.
2022-08-05 06:46:58 +00:00
Ramana Kumar
386ced1aed Include validator indices in attestation logs (#3393)
## Issue Addressed

Fixes #2967

## Proposed Changes

Collect validator indices alongside attestations when creating signed
attestations (and aggregates) for inclusion in the logs.

## Additional Info

This is my first time looking at Lighthouse source code and using Rust, so newbie feedback appreciated!
2022-08-05 01:51:39 +00:00
realbigsean
43ce0de73f Downgrade log for 204 from builder (#3411)
## Issue Addressed

A 204 from the connected builder just indicates there's no payload available from the builder, not that there's an issue, so I don't actually think this should be a warn. During the merge transition, when we are pre-finalization, a 204 will actually be expected. And maybe even longer, if the relay chooses to delay providing payloads for a longer period post-merge.

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-08-03 17:13:15 +00:00
Paul Hauner
fe6af05bf6 Use latest Geth release in EE integration tests (#3395)
## Issue Addressed

NA

## Proposed Changes

This PR reverts #3382 and adds the `--syncmode=full` as described here: https://github.com/sigp/lighthouse/pull/3382#issuecomment-1197680345

## Additional Info

- Blocked on #3392
2022-08-03 17:13:14 +00:00
Michael Sproul
df51a73272 Release v2.5.1 (#3406)
## Issue Addressed

Patch release to address fork choice issues in the presence of clock drift: https://github.com/sigp/lighthouse/pull/3402
2022-08-03 04:23:09 +00:00
Paul Hauner
553a794994 Ignore RUSTSEC-2022-0040 - owning_ref soundness (#3415)
## Issue Addressed

NA

## Proposed Changes

We are unaffected by this issue: https://github.com/sigp/lighthouse/pull/3410#issuecomment-1203244792

## Additional Info

NA
2022-08-02 23:20:52 +00:00
Mac L
e24552d61a Restore backwards compatibility when using older BNs (#3410)
## Issue Addressed

https://github.com/status-im/nimbus-eth2/issues/3930

## Proposed Changes

We can trivially support beacon nodes which do not provide the `is_optimistic` field by wrapping the field in an `Option`.
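
A sketch of the wrapping (field names illustrative; serde's derive leaves a missing `Option` field as `None` rather than erroring):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct SyncingData {
    is_syncing: bool,
    sync_distance: u64,
    // Older BNs omit this field entirely; deserialization then yields
    // `None` instead of failing.
    is_optimistic: Option<bool>,
}
```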
2022-08-02 23:20:51 +00:00
Paul Hauner
d0beecca20 Make fork choice prune again (#3408)
## Issue Addressed

NA

## Proposed Changes

There was a regression in #3244 (released in v2.4.0) which stopped pruning fork choice (see [here](https://github.com/sigp/lighthouse/pull/3244#discussion_r935187485)).

This would form a very slow memory leak, using ~100mb per month. The release has been out for ~11 days, so users should not be seeing a dangerous increase in memory, *yet*.

Credits to @michaelsproul for noticing this 🎉 

## Additional Info

NA
2022-08-02 07:58:42 +00:00
Paul Hauner
d23437f726 Ensure FC uses the current slot from the store (#3402)
## Issue Addressed

NA

## Proposed Changes

Ensure that we read the current slot from the `fc_store` rather than the slot clock. This is because the `fc_store` will never allow the slot to go backwards, even if the system clock does. The `ProtoArray::find_head` function assumes a non-decreasing slot.
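
A hedged sketch of the non-decreasing slot behaviour (stand-in types, not the real `fc_store` API):

```rust
struct ForkChoiceStore {
    current_slot: u64,
}

impl ForkChoiceStore {
    /// Update the store's time from the slot clock, never going backwards,
    /// even if the system clock does.
    fn update_time(&mut self, wall_clock_slot: u64) -> u64 {
        self.current_slot = self.current_slot.max(wall_clock_slot);
        self.current_slot
    }
}
```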

This issue can cause logs like this:

```
ERRO Error whist recomputing head, error: ForkChoiceError(ProtoArrayError("find_head failed: InvalidBestNode(InvalidBestNodeInfo { start_root: 0xb22655aa2ae23075a60bd40797b3ba220db33d6fb86fa7910f0ed48e34bda72f, justified_checkpoint: Checkpoint { epoch: Epoch(111569), root: 0xb22655aa2ae23075a60bd40797b3ba220db33d6fb86fa7910f0ed48e34bda72f }, finalized_checkpoint: Checkpoint { epoch: Epoch(111568), root: 0x6140797e40c587b0d3f159483bbc603accb7b3af69891979d63efac437f9896f }, head_root: 0xb22655aa2ae23075a60bd40797b3ba220db33d6fb86fa7910f0ed48e34bda72f, head_justified_checkpoint: Some(Checkpoint { epoch: Epoch(111568), root: 0x6140797e40c587b0d3f159483bbc603accb7b3af69891979d63efac437f9896f }), head_finalized_checkpoint: Some(Checkpoint { epoch: Epoch(111567), root: 0x59b913d37383a158a9ea5546a572acc79e2cdfbc904c744744789d2c3814c570 }) })")), service: beacon, module: beacon_chain::canonical_head:499
```

We expect nodes to automatically recover from this issue within seconds without any major impact. However, having *any* errors in the path of fork choice is undesirable and should be avoided.

## Additional Info

NA
2022-08-02 00:58:25 +00:00
Justin Traglia
807bc8b0b3 Fix a few typos in option help strings (#3401)
## Proposed Changes

Fixes a typo I noticed while looking at options.
2022-08-02 00:58:24 +00:00
Michael Sproul
3b056232d8 Add list of DB migrations to docs (#3399)
## Proposed Changes

Add a list of schema version changes to the book.

I hope this will be helpful for users upgrading to v2.5.0, to know that they can downgrade to schema v9 to run v2.3.0/v2.4.0 or to schema v8 to run v2.2.0/v2.1.0.
2022-08-02 00:58:23 +00:00
Michael Sproul
18383a63b2 Tidy eth1/deposit contract logging (#3397)
## Issue Addressed

Fixes an issue identified by @remyroy whereby we were logging a recommendation to use `--eth1-endpoints` on merge-ready setups (when the execution layer was out of sync).

## Proposed Changes

I took the opportunity to clean up the other eth1-related logs, replacing "eth1" by "deposit contract" or "execution" as appropriate.

I've downgraded the severity of the `CRIT` log to `ERRO` and removed most of the recommendation text. The reason being that users lacking an execution endpoint will be informed by the new `WARN Not merge ready` log pre-Bellatrix, or the regular errors from block verification post-Bellatrix.
2022-08-01 07:20:43 +00:00
Paul Hauner
2983235650 v2.5.0 (#3392)
## Issue Addressed

NA

## Proposed Changes

Bump versions.

## Additional Info

- ~~Blocked on #3383~~
- ~~Awaiting further testing.~~
2022-08-01 03:41:08 +00:00
Paul Hauner
bcfde6e7df Indicate that invalid blocks are optimistic (#3383)
## Issue Addressed

NA

## Proposed Changes

This PR will make Lighthouse return blocks with invalid payloads via the API with `execution_optimistic = true`. This seems a bit awkward, however I think it's better than returning a 404 or some other error.

Let's consider the case where the only possible head is invalid (#3370 deals with this). In such a scenario all of the duties endpoints will start failing because the head is invalid. I think it would be better if the duties endpoints continue to work, because it's likely that even though the head is invalid the duties are still based upon valid blocks and we want the VC to have them cached. There's no risk to the VC here because we won't actually produce an attestation pointing to an invalid head.

Ultimately, I don't think it's particularly important for us to distinguish between optimistic and invalid blocks on the API. Neither should be trusted and the only *real* reason that we track this is so we can try and fork around the invalid blocks.


## Additional Info

- ~~Blocked on #3370~~
2022-07-30 05:08:57 +00:00
Michael Sproul
fdfdb9b57c Enable count-unrealized by default (#3389)
## Issue Addressed

Enable https://github.com/sigp/lighthouse/pull/3322 by default on all networks.

The feature can be opted out of using `--count-unrealized=false` (the CLI flag is updated to take a parameter).
2022-07-30 00:22:41 +00:00
Pawan Dhananjay
b3ce8d0de9 Fix penalties in sync methods (#3384)
## Issue Addressed

N/A

## Proposed Changes

Uses the `penalize_peer` function added in #3350 in sync methods as well. The existing code in sync methods missed the `ExecutionPayloadError::UnverifiedNonOptimisticCandidate` case.
2022-07-30 00:22:39 +00:00
ethDreamer
034260bd99 Initial Commit of Retrospective OTB Verification (#3372)
## Issue Addressed

* #2983 

## Proposed Changes

Basically followed the [instructions laid out here](https://github.com/sigp/lighthouse/issues/2983#issuecomment-1062494947)


Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
2022-07-30 00:22:38 +00:00
realbigsean
6c2d8b2262 Builder Specs v0.2.0 (#3134)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3091

Extends https://github.com/sigp/lighthouse/pull/3062, adding pre-bellatrix block support on blinded endpoints and allowing the normal proposal flow (local payload construction) on blinded endpoints. This resulted in better fallback logic, because the VC will not have to switch endpoints on failure in the BN <> Builder API; the BN can just fall back immediately, without repeating block processing that it shouldn't need to. We can also keep VC fallback from the VC<>BN API's blinded endpoint to the full endpoint.

## Proposed Changes

- Pre-bellatrix blocks on blinded endpoints
- Add a new `PayloadCache` to the execution layer
- Better fallback-from-builder logic

## Todos

- [x] Remove VC transition logic
- [x] Add logic to only enable builder flow after Merge transition finalization
- [x] Tests
- [x] Fix metrics
- [x] Rustdocs


Co-authored-by: Mac L <mjladson@pm.me>
Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-30 00:22:37 +00:00
Paul Hauner
25f0e261cb Don't return errors when fork choice fails (#3370)
## Issue Addressed

NA

## Proposed Changes

There are scenarios where the only viable head will have an invalid execution payload; in this scenario the `get_head` function on `proto_array` will return an error. We must recover from this scenario by importing blocks from the network.

This PR stops `BeaconChain::recompute_head` from returning an error so that we can't accidentally start down-scoring peers or aborting block import just because the current head has an invalid payload.

## Reviewer Notes

The following changes are included:

1. Allow `fork_choice.get_head` to fail gracefully in `BeaconChain::process_block` when trying to update the `early_attester_cache`; simply don't add the block to the cache rather than aborting the entire process.
1. Don't return an error from `BeaconChain::recompute_head_at_current_slot` and `BeaconChain::recompute_head` to defensively prevent calling functions from aborting any process just because the fork choice function failed to run.
    - This should have practically no effect, since most callers were still continuing if recomputing the head failed.
    - The outlier is that the API will return 200 rather than a 500 when fork choice fails.
1. Add the `ProtoArrayForkChoice::set_all_blocks_to_optimistic` function to recover from the scenario where we've rebooted and the persisted fork choice has an invalid head.
2022-07-28 13:57:09 +00:00
Michael Sproul
d04fde3ba9 Remove equivocating validators from fork choice (#3371)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3241
Closes https://github.com/sigp/lighthouse/issues/3242

## Proposed Changes

* [x] Implement logic to remove equivocating validators from fork choice per https://github.com/ethereum/consensus-specs/pull/2845
* [x] Update tests to v1.2.0-rc.1. The new test which exercises `equivocating_indices` is passing.
* [x] Pull in some SSZ abstractions from the `tree-states` branch that make implementing Vec-compatible encoding for types like `BTreeSet` and `BTreeMap`.
* [x] Implement schema upgrades and downgrades for the database (new schema version is V11).
* [x] Apply attester slashings from blocks to fork choice

## Additional Info

* This PR doesn't need the `BTreeMap` impl, but `tree-states` does, and I don't think there's any harm in keeping it. But I could also be convinced to drop it.

Blocked on #3322.
2022-07-28 09:43:41 +00:00
Paul Hauner
efb360cc6d Downgrade Geth to v1.10.20 in EE integration tests (#3382)
## Issue Addressed

NA

## Proposed Changes

The execution integration tests have started failing since Geth updated to v1.10.21. More details here: https://github.com/ethereum/go-ethereum/issues/25427#issuecomment-1197552755

This PR pins our version at v1.10.20.

## Additional Info

NA
2022-07-28 07:40:05 +00:00
realbigsean
5bdba157e1 Fix antithesis docker builds (#3380)
## Issue Addressed

The antithesis Docker builds started failing once we made our MSRV later than 1.58. It seems this was because more recent versions of Rust use a new "LLVM pass manager" by default. Adding a flag that disables usage of the new pass manager allows builds to pass.

This adds a single flag to the antithesis `Dockerfile.libvoidstar`: `RUSTFLAGS="-Znew-llvm-pass-manager=no"`. But this flag requires us to use `nightly` so it also adds that, pinning to an arbitrary recent date. 

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-28 07:40:03 +00:00
Philip White
cf3bcca969 Allow setting web3signer version through environment (#3368)
## Issue Addressed

#3369 

## Proposed Changes

The goal is to make it possible to build Lighthouse without network access,
so builds can be reproducible.

This parallels the existing functionality in `common/deposit_contract/build.rs`,
which allows specifying a filename through the environment to avoid downloading
it. In this case, by specifying the version and making it available on the
filesystem, the existing logic will avoid a network download.
2022-07-27 03:20:01 +00:00
Pawan Dhananjay
f3439116da Return ResourceUnavailable if we are unable to reconstruct execution payloads (#3365)
## Issue Addressed

Resolves #3351 

## Proposed Changes

Returns a `ResourceUnavailable` RPC error if we are unable to serve full payloads for blocks-by-root and blocks-by-range requests because the execution layer is not synced.


## Additional Info

This PR also changes the penalties such that a `ResourceUnavailable` error is only penalized if it occurs on an outgoing request. If we are syncing and aren't getting full block responses, then we have no use for the peer. However, this might not be true in the incoming-request case, so there we let the peer decide whether we are still useful or should be banned.
cc @divagant-martian please let me know if i'm missing something here.
2022-07-27 03:20:00 +00:00
Michael Sproul
947ad9f14a Allow syncing or accepted in integration test (#3378)
## Issue Addressed

Unblock CI for this failure: https://github.com/sigp/lighthouse/runs/7529551988

The root cause is a disagreement between the test and Nethermind over whether the appropriate status for a payload with an unknown parent is SYNCING or ACCEPTED. According to the spec, SYNCING is correct so we should update the test to expect this correct behaviour. However Geth still returns `ACCEPTED`, so for now we allow either.
2022-07-27 00:51:08 +00:00
Justin Traglia
e29765e118 Reformat tables and add borders (#3377)
## Proposed Changes

This PR reformats Markdown tables and ensures all tables have borders.
2022-07-27 00:51:07 +00:00
Justin Traglia
0f62d900fe Fix some typos (#3376)
## Proposed Changes

This PR fixes various minor typos in the project.
2022-07-27 00:51:06 +00:00
Mac L
44fae52cd7 Refuse to sign sync committee messages when head is optimistic (#3191)
## Issue Addressed

Resolves #3151 

## Proposed Changes

When fetching duties for sync committee contributions, check the value of `execution_optimistic` of the head block from the BN and refuse to sign any sync committee messages `if execution_optimistic == true`.
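
The gate itself is simple; a sketch (helper name hypothetical, and treating a missing field as "not optimistic" is my reading of the backwards-compatibility note below):

```rust
// Hypothetical helper: older BNs don't report `execution_optimistic`, so a
// missing field is treated as not optimistic for backwards compatibility.
fn safe_to_sign_sync_messages(execution_optimistic: Option<bool>) -> bool {
    !execution_optimistic.unwrap_or(false)
}
```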

## Additional Info
- Is backwards compatible with older BNs
- Finding a way to add test coverage for this would be prudent. Open to suggestions.
2022-07-27 00:51:05 +00:00
Mac L
d316305411 Add is_optimistic to eth/v1/node/syncing response (#3374)
## Issue Addressed

As specified in the [Beacon Chain API specs](https://github.com/ethereum/beacon-APIs/blob/master/apis/node/syncing.yaml#L32-L35) we should return `is_optimistic` as part of the response to a query for the `eth/v1/node/syncing` endpoint.

## Proposed Changes

Compute the optimistic status of the head and add it to the `SyncingData` response.
2022-07-26 08:50:16 +00:00
realbigsean
904dd62524 Strict fee recipient (#3363)
## Issue Addressed

Resolves #3267
Resolves #3156 

## Proposed Changes

- Move the log for fee recipient checks from proposer cache insertion into block proposal so we are directly checking what we get from the EE
- Only log when there is a discrepancy with the local EE, not when using the builder API. In the `builder-api` branch there is an `info` log when there is a discrepancy. I think it is more likely there will be a difference in fee recipient with the builder API, because proposer payments might be made via a transaction in the block. Not really sure what patterns will become common.
- Upgrade the log from a `warn` to an `error`. Not actually sure which we want, but I think this is worth an error, because the local EE with default transaction ordering should pretty much always use the provided fee recipient.
- Add a `strict-fee-recipient` flag to the VC so we only sign blocks with matching fee recipients. Falls back from the builder API to the local API if there is a discrepancy.




Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-26 02:17:24 +00:00
Paul Hauner
b82e2dfc51 Add merge transition docs (#3361)
## Issue Addressed

NA

## Proposed Changes

Add some documentation about migrating pre-merge Lighthouse to post-merge Lighthouse.

## Additional Info

NA
2022-07-26 02:17:22 +00:00
ethDreamer
f7354abe0f Fix Block Cache Range Math for Faster Syncing (#3358)
## Issue Addressed

While messing with the deposit snapshot stuff, I had my proxy running and noticed the beacon node wasn't syncing the block cache continuously. There were long periods where it did nothing. I believe this was caused by a logical error introduced in #3234 that dealt with an issue that arose while syncing the block cache on Ropsten.

The problem is that when the block cache is initially syncing, it will trigger the logic that detects the cache is far behind the execution chain in time. This will trigger a batch syncing mechanism which is intended to sync further ahead than the chain would normally. But the batch syncing is actually slower than the range this function usually estimates (in this scenario).

## Proposed Changes

I believe I've fixed this function by taking the end of the range to be the maximum of (batch syncing range, usual range).
I've also renamed and restructured some things a bit. It's equivalent logic but I think it's more clear what's going on.
2022-07-26 02:17:21 +00:00
realbigsean
20ebf1f3c1 Realized unrealized experimentation (#3322)
## Issue Addressed

Add a flag that optionally enables unrealized vote tracking. We would like to test this out on testnets and benchmark the differences between methods of vote tracking. This PR includes a DB schema upgrade to enable the new vote tracking style.


Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: sean <seananderson33@gmail.com>
Co-authored-by: Mac L <mjladson@pm.me>
2022-07-25 23:53:26 +00:00
Mac L
bb5a6d2cca Add execution_optimistic flag to HTTP responses (#3070)
## Issue Addressed

#3031 

## Proposed Changes

Updates the following API endpoints to conform with https://github.com/ethereum/beacon-APIs/pull/190 and https://github.com/ethereum/beacon-APIs/pull/196
- [x] `beacon/states/{state_id}/root` 
- [x] `beacon/states/{state_id}/fork`
- [x] `beacon/states/{state_id}/finality_checkpoints`
- [x] `beacon/states/{state_id}/validators`
- [x] `beacon/states/{state_id}/validators/{validator_id}`
- [x] `beacon/states/{state_id}/validator_balances`
- [x] `beacon/states/{state_id}/committees`
- [x] `beacon/states/{state_id}/sync_committees`
- [x] `beacon/headers`
- [x] `beacon/headers/{block_id}`
- [x] `beacon/blocks/{block_id}`
- [x] `beacon/blocks/{block_id}/root`
- [x] `beacon/blocks/{block_id}/attestations`
- [x] `debug/beacon/states/{state_id}`
- [x] `debug/beacon/heads`
- [x] `validator/duties/attester/{epoch}`
- [x] `validator/duties/proposer/{epoch}`
- [x] `validator/duties/sync/{epoch}`

Updates the following Server-Sent Events:
- [x]  `events?topics=head`
- [x]  `events?topics=block`
- [x]  `events?topics=finalized_checkpoint`
- [x]  `events?topics=chain_reorg`

## Backwards Incompatible
There is a very minor breaking change with the way the API now handles requests to `beacon/blocks/{block_id}/root` and `beacon/states/{state_id}/root` when `block_id` or `state_id` is the `Root` variant of `BlockId` and `StateId` respectively.

Previously a request to a non-existent root would simply echo the root back to the requester:
```
curl "http://localhost:5052/eth/v1/beacon/states/0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/root"
{"data":{"root":"0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"}}
```
Now it will return a `404`:
```
curl "http://localhost:5052/eth/v1/beacon/blocks/0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/root"
{"code":404,"message":"NOT_FOUND: beacon block with root 0xaaaa…aaaa","stacktraces":[]}
```

In addition to this is the block root `0x0000000000000000000000000000000000000000000000000000000000000000` previously would return the genesis block. It will now return a `404`:
```
curl "http://localhost:5052/eth/v1/beacon/blocks/0x0000000000000000000000000000000000000000000000000000000000000000"
{"code":404,"message":"NOT_FOUND: beacon block with root 0x0000…0000","stacktraces":[]}
```

## Additional Info
- `execution_optimistic` is always set, and will return `false` pre-Bellatrix. I am also open to the idea of doing something like `#[serde(skip_serializing_if = "Option::is_none")]`.
- The value of `execution_optimistic` is set to `false` where possible. Any computation that is reliant on the `head` will simply use the `ExecutionStatus` of the head (unless the head block is pre-Bellatrix).

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-07-25 08:23:00 +00:00
Paul Hauner
21dec6f603 v2.4.0 (#3360)
## Issue Addressed

NA

## Proposed Changes

Bump versions to v2.4.0

## Additional Info

Blocked on:

- ~~#3349~~
- ~~#3347~~
2022-07-21 22:02:36 +00:00
Pawan Dhananjay
612cdb7092 Merge readiness endpoint (#3349)
## Issue Addressed

Resolves final task in https://github.com/sigp/lighthouse/issues/3260

## Proposed Changes

Adds a lighthouse http endpoint to indicate merge readiness.

Blocked on #3339
2022-07-21 05:45:39 +00:00
Michael Sproul
e32868458f Set safe block hash to justified (#3347)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3189.

## Proposed Changes

- Always supply the justified block hash as the `safe_block_hash` when calling `forkchoiceUpdated` on the execution engine.
- Refactor the `get_payload` routine to use the new `ForkchoiceUpdateParameters` struct rather than just the `finalized_block_hash`. I think this is a nice simplification and that the old way of computing the `finalized_block_hash` was unnecessary, but if anyone sees reason to keep that approach LMK.
2022-07-21 05:45:37 +00:00
Paul Hauner
6a0e9d4353 Add Goerli --network flag as duplicate of Prater: Option A (#3346)
## Issue Addressed

- Resolves #3338

## Proposed Changes

This PR adds a new `--network goerli` flag that reuses the [Prater network configs](https://github.com/sigp/lighthouse/tree/stable/common/eth2_network_config/built_in_network_configs/prater).

As you'll see in #3338, there are several approaches to the problem of the Goerli/Prater alias. This approach achieves:

1. No duplication of the genesis state between Goerli and Prater.
    - Upside: the genesis state for Prater is ~17mb, duplication would increase the size of the binary by that much.
2. When the user supplies `--network goerli`, they will get a datadir in `~/.lighthouse/goerli`.
    - Upside: our docs stay correct when they declare a datadir is located at `~/.lighthouse/{network}`
    - Downside: switching from `--network prater` to `--network goerli` will require some manual migration. 
3. When using `--network goerli`, the [`config/spec`](https://ethereum.github.io/beacon-APIs/#/Config/getSpec) endpoint will return a [`CONFIG_NAME`](02a2b71d64/configs/mainnet.yaml (L11)) of "prater".
    - Upside: VC running `--network prater` will still think it's on the same network as one using `--network goerli`.
    - Downside: potentially confusing.
    
#3348 achieves the same goal as this PR with a different approach and set of trade-offs.

## Additional Info

### Notes for reviewers:

In e4896c2682 you'll see that I remove the `$name_str` by just using `stringify!($name_ident)` instead. This is a simplification that should have been there in the first place.

Then, in 90b5e22fca I reclaim that second parameter with a new purpose; to specify the directory from which to load configs.
2022-07-20 23:16:56 +00:00
Pawan Dhananjay
5b5cf9cfaa Log ttd (#3339)
## Issue Addressed

Resolves #3249 

## Proposed Changes

Log merge related parameters and EE status in the beacon notifier before the merge.


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-07-20 23:16:54 +00:00
ethDreamer
7c3ff903ca Fix Gossip Penalties During Optimistic Sync Window (#3350)
## Issue Addressed
* #3344 

## Proposed Changes

There are a number of cases during block processing where we might get an `ExecutionPayloadError` but we shouldn't penalize peers. We were forgetting to enumerate all of the non-penalizing errors in every single match statement where we are making that decision. I created a function to make it explicit when we should and should not penalize peers and I used that function in all places where this logic is needed. This way we won't make the same mistake if we add another variant of `ExecutionPayloadError` in the future.
2022-07-20 20:59:38 +00:00
Paul Hauner
6d8dfc9eee Add TTD and Bellatrix epoch for Prater (#3345)
## Issue Addressed

NA

## Proposed Changes

Adds the TTD and Bellatrix values for Prater, as per https://github.com/eth-clients/eth2-networks/pull/77.

## Additional Info

- ~~Blocked on https://github.com/eth-clients/eth2-networks/pull/77~~
2022-07-20 20:59:36 +00:00
realbigsean
fabe50abe7 debug tests rust version (#3354)
## Issue Addressed

Which issue # does this PR address?

## Proposed Changes

Please list or describe the changes introduced by this PR.

## Additional Info

Please provide any additional information. For example, future considerations
or information useful for reviewers.


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-20 18:18:26 +00:00
realbigsean
822c30da66 docker rust version update (#3353)
## Issue Addressed

The lcli and antithesis docker builds are failing in unstable so bumping all the versions here

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-20 18:18:25 +00:00
Mac L
7dbc59efeb Share reqwest::Client between validators when using Web3Signer (#3335)
## Issue Addressed

#3302

## Proposed Changes

Move the `reqwest::Client` from being initialized per-validator, to being initialized per distinct Web3Signer. 
This is done by placing the `Client` into a `HashMap` keyed by the definition of the Web3Signer as specified by the `ValidatorDefinition`. This will allow multiple Web3Signers to be used with a single VC and also maintains backwards compatibility.

## Additional Info

This was done to reduce the memory used by the VC when connecting to a Web3Signer.

I set up a local testnet using [a custom script](https://github.com/macladson/lighthouse/tree/web3signer-local-test/scripts/local_testnet_web3signer) and ran a VC with 200 validator keys:


VC with Web3Signer:
- `unstable`: ~200MB
- With fix: ~50MB



VC with Local Signer:
- `unstable`: ~35MB
- With fix: ~35MB 


> I'm seeing some fragmentation with the VC using the Web3Signer, but not when using a local signer (this is most likely due to making lots of http requests and dealing with lots of JSON objects). I tested the above using `MALLOC_ARENA_MAX=1` to try to reduce the fragmentation. Without it, the values are around +50MB for both `unstable` and the fix.
2022-07-19 05:48:05 +00:00
Pawan Dhananjay
e5e4e62758 Don't create a execution payload with same timestamp as terminal block (#3331)
## Issue Addressed

Resolves #3316 

## Proposed Changes

This PR fixes an issue where lighthouse created a transition block with `block.execution_payload().timestamp == terminal_block.timestamp` if the terminal block was created at the slot boundary.
2022-07-18 23:15:41 +00:00
Pawan Dhananjay
f9b9658711 Add merge support to simulator (#3292)
## Issue Addressed

N/A

## Proposed Changes

Make simulator merge compatible. Adds a `--post_merge` flag to the eth1 simulator that enables a ttd and simulates the merge transition. Uses the `MockServer` in the execution layer test utils to simulate a dummy execution node.

Adds the merge transition simulation to CI.
2022-07-18 23:15:40 +00:00
Pawan Dhananjay
da7b7a0f60 Make transactions in execution layer integration tests (#3320)
## Issue Addressed

Resolves #3159 

## Proposed Changes

Sends transactions to the EE before requesting for a payload in the `execution_integration_tests`. Made some changes to the integration tests in order to be able to sign and publish transactions to the EE:

1. `genesis.json` for both geth and nethermind was modified to include pre-funded accounts that we know private keys for 
2. Using the unauthenticated port again in order to make `eth_sendTransaction` and calls from the `personal` namespace to import keys

Also added a `fcu` call with `PayloadAttributes` before calling `getPayload` in order to give EEs sufficient time to pack transactions into the payload.
2022-07-18 01:51:36 +00:00
Age Manning
2ed51c364d Improve block-lookup functionality (#3287)
Improves some of the functionality around single and parent block lookup. 

Gives extra information about whether failures for lookups are related to processing or downloading.

This is entirely untested.


Co-authored-by: Diva M <divma@protonmail.com>
2022-07-17 23:26:58 +00:00
Peter Davies
4f58c555a9 Add Merge support to web3signer validators (#3318)
## Issue Addressed

Web3signer validators can't produce post-Bellatrix blocks.

## Proposed Changes

Add support for Bellatrix to web3signer validators.

## Additional Info

I am running validators with this code on Ropsten, but it may be a while for them to get a proposal.
2022-07-15 14:16:00 +00:00
Mac L
2940783a9c Upstream local testnet improvements (#3336)
## Proposed Changes

Adds some improvements I found when playing around with local testnets in #3335:
- When trying to kill processes, do not exit on a failure. (If a node fails to start due to a bug, the PID associated with it no longer exists. When trying to tear down the testnets, an error will be raised when it tries that PID, and then it will not try any PIDs following it. This change means it will continue and tear down the rest of the network.)
- When starting the testnet, set `ulimit` to a high number. This allows the VCs to import 1000s of validators without running into limitations.
2022-07-15 07:31:22 +00:00
Pawan Dhananjay
5243cc6c30 Add a u256_hex_be module to encode/decode U256 types (#3321)
## Issue Addressed

Resolves #3314 

## Proposed Changes

Add a module to encode/decode u256 types according to the execution layer encoding/decoding standards
https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#structures

Updates `JsonExecutionPayloadV1.base_fee_per_gas`, `JsonExecutionPayloadHeaderV1.base_fee_per_gas`  and `TransitionConfigurationV1.terminal_total_difficulty` to encode/decode according to standards
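
A hedged sketch of what such a module can look like, assuming `ethereum_types::U256` and serde (function shapes follow the usual `#[serde(with = "...")]` convention; the real module likely handles more edge cases):

```rust
use ethereum_types::U256;
use serde::{de::Error as _, Deserialize, Deserializer, Serializer};

pub fn serialize<S: Serializer>(value: &U256, s: S) -> Result<S::Ok, S::Error> {
    // `{:#x}` yields a 0x-prefixed big-endian hex string with no leading
    // zeros, matching the engine API's QUANTITY encoding.
    s.serialize_str(&format!("{value:#x}"))
}

pub fn deserialize<'de, D: Deserializer<'de>>(d: D) -> Result<U256, D::Error> {
    let s = String::deserialize(d)?;
    let hex = s
        .strip_prefix("0x")
        .ok_or_else(|| D::Error::custom("missing 0x prefix"))?;
    U256::from_str_radix(hex, 16).map_err(D::Error::custom)
}
```

Fields then opt in with an attribute like `#[serde(with = "u256_hex_be")]`.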

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-07-15 07:31:21 +00:00
Pawan Dhananjay
28b0ff27ff Ignored sync jobs 2 (#3317)
## Issue Addressed

Duplicate of #3269. Making this since @divagant-martian opened the previous PR and she can't approve her own PR 😄 


Co-authored-by: Diva M <divma@protonmail.com>
2022-07-15 07:31:20 +00:00
Akihito Nakano
98a9626ef5 Bump the MSRV to 1.62 and using #[derive(Default)] on enums (#3304)
## Issue Addressed

N/A

## Proposed Changes

Since Rust 1.62, we can use `#[derive(Default)]` on enums.  

https://blog.rust-lang.org/2022/06/30/Rust-1.62.0.html#default-enum-variants

There are no changes to functionality in this PR, just replaced the `Default` trait implementation with `#[derive(Default)]`.
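
For example (names illustrative):

```rust
// Before Rust 1.62 this required a manual `impl Default for EngineState`;
// now the default variant is declared with an attribute.
#[derive(Default)]
enum EngineState {
    #[default]
    Offline,
    Online,
}
```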
2022-07-15 07:31:19 +00:00
Paul Hauner
1f54e10b7b Do not interpret "latest valid hash" as identifying a valid hash (#3327)
## Issue Addressed

NA

## Proposed Changes

After some discussion in Discord with @mkalinin it was raised that it was not the intention of the engine API to have CLs validate the `latest_valid_hash` (LVH) and all ancestors.

Whilst I believe the engine API is being updated such that the LVH *must* identify a valid hash or be set to some junk value, I'm not confident that we can rely upon the LVH as being valid (at least for now) due to the confusion surrounding it.

Being able to validate blocks via the LVH is a relatively minor optimisation; if the LVH value ends up becoming our head we'll send an fcU and get the VALID status there.

Falsely marking a block as valid has serious consequences and since it's a minor optimisation to use LVH I think that we don't take the risk.

For clarity, we will still *invalidate* the *descendants* of the LVH, we just wont *validate* the *ancestors*.
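
A hypothetical sketch of that asymmetry (stand-in types, not the real engine API bindings):

```rust
struct Hash256([u8; 32]);

// Stand-in for the payload status returned by the EE.
enum PayloadStatus {
    Valid,
    Syncing,
    Invalid { latest_valid_hash: Hash256 },
}

fn invalidate_descendants(_of: &Hash256) {
    // Placeholder: mark descendants of this block invalid in fork choice.
}

fn on_payload_status(status: &PayloadStatus) {
    if let PayloadStatus::Invalid { latest_valid_hash } = status {
        // Descendants of the LVH are known-bad, so invalidating them is safe.
        invalidate_descendants(latest_valid_hash);
        // Deliberately *not* marking the LVH or its ancestors VALID here; if
        // the LVH ends up as our head, an fcU response will validate it.
    }
}
```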

## Additional Info

NA
2022-07-13 23:07:49 +00:00
Paul Hauner
7a6e6928a3 Further remove EE redundancy (#3324)
## Issue Addressed

Resolves #3176

## Proposed Changes

Continues from PRs by @divagant-martian to gradually remove EL redundancy (see #3284, #3257).

This PR achieves:

- Removes the `broadcast` and `first_success` methods. The functional impact is that every request to the EE will always be tried immediately, regardless of the cached `EngineState` (this resolves #3176). Previously we would check the engine state before issuing requests; this doesn't make sense in a single-EE world, since there's only one EE and we might as well try it for every request.
- Runs the upcheck/watchdog routine once per slot rather than thrice. When we had multiple EEs, frequent polling was useful to try and detect when the primary EE had come back online so we could switch to it. That's not as relevant now.
- Always creates logs in the `Engines::upcheck` function. Previously we would mute some logs since they could get really noisy when one EE was down but others were functioning fine. Now that we only have one EE and are upchecking it less, it makes sense to always produce logs.

This PR purposefully does not achieve:

- Updating all occurrences of "engines" to "engine". I'm trying to keep the diff small and manageable. We can come back for this.

## Additional Info

NA
2022-07-13 20:31:39 +00:00
Paul Hauner
a390695e0f Add --release to disallowed-from-async lint (#3325)
## Issue Addressed

- #3251

## Proposed Changes

Adds the release tag to the `disallowed_from_async` lint.

## Additional Info

~~I haven't run this locally yet due to (minor) complexity of running the lint, I'm seeing if it will work via Github.~~
2022-07-12 15:54:17 +00:00
sragss
4212f22ddb add sync committee contribution timeout (#3291)
## Issue Addressed

Resolves #3276. 

## Proposed Changes

Add a timeout for the sync committee contributions at 1/4 the slot length such that we may be able to try backup beacon nodes in the case of contribution post failure.

## Additional Info

1/4 slot length seemed standard for the timeouts, but we may want to change this to 1/2.

I did not find any timeout related / sync committee related tests, so there are no tests. Happy to write some with a bit of guidance.
2022-07-11 01:44:42 +00:00
Divma
6d42a09ff8 Merge Engines and Engine struct in one in the execution_layer crate (#3284)
## Issue Addressed

Part of https://github.com/sigp/lighthouse/issues/3118, continuation of https://github.com/sigp/lighthouse/pull/3257 and https://github.com/sigp/lighthouse/pull/3283

## Proposed Changes
- Merge the [`Engines`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L161-L165)) struct and [`Engine` ](9c429d0764/beacon_node/execution_layer/src/engines.rs (L62-L67))
- Remove unnecessary generics 

## Additional Info
There is more cleanup to do that will come in subsequent PRs
2022-07-11 01:44:41 +00:00
Kirill
5dbfb37d74 eth2_hashing: make cpufeatures dep optional (#3309)
## Issue Addressed

#3308 

## Proposed Changes

* add a `cpufeatures` feature;
* make `cpufeatures` a default feature to preserve compatibility;
* hide all `cpufeatures`-related code behind the `cpufeatures` feature.

Co-authored-by: Kirill <kirill@aurora.dev>
2022-07-06 22:00:58 +00:00
ethDreamer
d5e2d98970 Implement feerecipient API for keymanager (#3213)
## Issue Addressed

* #3173 

## Proposed Changes

Moved all `fee_recipient_file` related logic inside the `ValidatorStore`, as it makes more sense to have this all together there. I tested this with the validators I have on `mainnet-shadow-fork-5` and everything appeared to work well. The only technicality is that I can't get the method to return `401` when the authorization header is not specified (it returns `400` instead). Fixing this is probably quite difficult given that none of `warp`'s rejections have code `401`. I don't really think this matters too much though, as long as it fails.
2022-07-06 03:51:08 +00:00
Divma
3dc323b035 Fix RUSTSEC-2022-0032 (#3311)
## Issue Addressed
Failure of cargo audit for [RUSTSEC-2022-0032](https://rustsec.org/advisories/RUSTSEC-2022-0032)

## Proposed Changes
Cargo update does the trick again

## Additional Info
na
2022-07-05 23:36:42 +00:00
Michael Sproul
aed764c4d8 Document min CMake version (#3310)
## Proposed Changes

Add a tip about the minimum CMake version to make it more obvious what it takes to compile on Ubuntu 18.04.
2022-07-05 23:36:36 +00:00
Michael Sproul
748475be1d Ensure caches are built for block_rewards POST API (#3305)
## Issue Addressed

Follow up to https://github.com/sigp/lighthouse/pull/3290 that fixes a caching bug

## Proposed Changes

Build the committee cache for the new `POST /lighthouse/analysis/block_rewards` API. Due to an unusual quirk of the total active balance cache, the API endpoint would sometimes fail after loading a state from disk which had a current epoch cache _but not_ a total active balance cache. This PR adds calls to build the caches immediately before they're required, and has been running smoothly with `blockdreamer` the last few days.
2022-07-04 02:56:15 +00:00
Akihito Nakano
1cc8a97d4e Remove unused method in HandlerNetworkContext (#3299)
## Issue Addressed

N/A

## Proposed Changes

Removed unused method in `HandlerNetworkContext`.
2022-07-04 02:56:14 +00:00
Divma
1219da9a45 Simplify error handling after engines fallback removal (#3283)
## Issue Addressed
Part of #3118, continuation of #3257

## Proposed Changes
- the [ `first_success_without_retry` ](9c429d0764/beacon_node/execution_layer/src/engines.rs (L348-L351)) function returns a single error.
- the [`first_success`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L324)) function returns a single error.
- [ `EngineErrors` ](9c429d0764/beacon_node/execution_layer/src/lib.rs (L69)) carries a single error.
- [`EngineError`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L173-L177)) now does not need to carry an Id
- [`process_multiple_payload_statuses`](9c429d0764/beacon_node/execution_layer/src/payload_status.rs (L46-L50)) now doesn't need to receive an iterator of statuses and weight in different errors

## Additional Info
This is built on top of #3294
2022-07-04 02:56:13 +00:00
Michael Sproul
61ed5f0ec6 Optimize historic committee calculation for the HTTP API (#3272)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3270

## Proposed Changes

Optimize the calculation of historic beacon committees in the HTTP API.

This is achieved by allowing committee caches to be constructed for historic epochs, and constructing these committee caches on the fly in the API. This is much faster than reconstructing the state at the requested epoch, which usually takes upwards of 20s, and sometimes minutes with SPRP=8192. The depth of the `randao_mixes` array allows us to look back 64K epochs/0.8 years from a single state, which is pretty awesome!

We always use the `state_id` provided by the caller, but will return a nice 400 error if the epoch requested is out of range for the state requested, e.g.

```bash
# Prater
curl "http://localhost:5052/eth/v1/beacon/states/3170304/committees?epoch=33538"
```

```json
{"code":400,"message":"BAD_REQUEST: epoch out of bounds, try state at slot 1081344","stacktraces":[]}
```

Queries will be fastest when aligned to `slot % SPRP == 0`, so the hint suggests a slot that is 0 mod 8192.
2022-07-04 02:56:11 +00:00
Divma
66bb5c716c Use latest tags for nethermind and geth in the execution engine integration test (#3303)
## Issue Addressed

Currently the execution-engine-integration test uses latest master for nethermind and geth, and right now the test fails using the latest unreleased commits.

## Proposed Changes
Fix the nethermind and geth revisions the test uses to the latest tag in each repo. This way we are not continuously testing unreleased code, which might even get reverted, and failures are limited to actual releases of each client.
Also improve error handling of the commands used to manage the git repos.
Also improve error handling of the commands used to manage the git repos.

## Additional Info
na

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-07-03 05:36:51 +00:00
Paul Hauner
be4e261e74 Use async code when interacting with EL (#3244)
## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than that of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least 2 times during each block processing, these clones are either removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - Avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
  cached_head: RwLock<Arc<Snapshot>>,
  fork_choice: RwLock<BeaconForkChoice>
} 
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
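
A minimal sketch of the pattern (hypothetical types, not the actual Lighthouse code): the lock is held only long enough to clone the `Arc`, so callers can use the snapshot freely without risking lock-ordering deadlocks against `fork_choice`:

```
use std::sync::{Arc, RwLock};

struct Snapshot {
    head_slot: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
}

impl CanonicalHead {
    /// The lock is held only for the cheap `Arc` clone, then dropped
    /// before the caller reads any values from the snapshot.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}
```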

## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might use in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.

## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple, it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy I've just used the `slot` that was saved in the `fork_choice_store`. That seems like it would be a faithful representation of the slot when we saved it.

I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue, if anything I think it'll be better since the genesis-alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the `network` expected the 0x00..00 alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In the `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test, not so efficient but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.

Co-authored-by: Mac L <mjladson@pm.me>
2022-07-03 05:36:50 +00:00
Paul Hauner
e5212f1320 Avoid growing Vec for sync committee indices (#3301)
## Issue Addressed

NA

## Proposed Changes

This is a fairly simple micro-optimization to avoid using `Vec::grow`. I don't believe this will have a substantial effect on block processing times, however it was showing up in flamegraphs. I think it's worth making this change for general memory-hygiene.
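
As an illustration of the general pattern (not the exact code in this PR), pre-sizing a `Vec` avoids repeated reallocation as elements are pushed:

```
fn collect_indices(sync_committee_size: usize) -> Vec<usize> {
    // Allocate once up front instead of growing the Vec repeatedly.
    let mut indices = Vec::with_capacity(sync_committee_size);
    for i in 0..sync_committee_size {
        indices.push(i);
    }
    indices
}
```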

## Additional Info

NA
2022-07-01 03:44:37 +00:00
realbigsean
a7da0677d5 Remove builder redundancy (#3294)
## Issue Addressed

This PR is a subset of the changes in #3134. Unstable will still not function correctly with the new builder spec once this is merged; #3134 should be used on testnets

## Proposed Changes

- Removes redundancy in "builders" (servers implementing the builder spec)
- Renames `payload-builder` flag to `builder`
- Moves from old builder RPC API to new HTTP API, but does not implement the validator registration API (implemented in https://github.com/sigp/lighthouse/pull/3194)



Co-authored-by: sean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-01 01:15:19 +00:00
Divma
d40c76e667 Fix clippy lints for rust 1.62 (#3300)
## Issue Addressed

Fixes some new clippy lints after the last rust release
### Lints fixed for the curious:
- [cast_abs_to_unsigned](https://rust-lang.github.io/rust-clippy/master/index.html#cast_abs_to_unsigned)
- [map_identity](https://rust-lang.github.io/rust-clippy/master/index.html#map_identity) 
- [let_unit_value](https://rust-lang.github.io/rust-clippy/master/index.html#let_unit_value)
- [crate_in_macro_def](https://rust-lang.github.io/rust-clippy/master/index.html#crate_in_macro_def) 
- [extra_unused_lifetimes](https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes)
- [format_push_string](https://rust-lang.github.io/rust-clippy/master/index.html#format_push_string)
2022-06-30 22:51:49 +00:00
realbigsean
f6ec44f0dd Register validator api (#3194)
## Issue Addressed

Lays the groundwork for builder API changes by implementing the beacon-API's new `register_validator` endpoint

## Proposed Changes

- Add a routine in the VC that runs on startup (re-try until success), once per epoch or whenever `suggested_fee_recipient` is updated, signing `ValidatorRegistrationData` and sending it to the BN.
  -  TODO: `gas_limit` config options https://github.com/ethereum/builder-specs/issues/17
-  BN only sends VC registration data to builders on demand, but VC registration data *does update* the BN's prepare-proposer cache and sends an updated fcU to a local EE. This is necessary for fee recipient consistency between the blinded and full block flow in the event of fallback. Having the BN only send registration data to builders on demand gives feedback directly to the VC about relay status. Also, since the BN has no ability to sign these messages anyway (so couldn't refresh them if it wanted to), and validator registration is independent of the BN head, I think this approach makes sense.
- Adds upcoming consensus spec changes for this PR https://github.com/ethereum/consensus-specs/pull/2884
  -  I initially applied the bit mask based on a configured application domain, but I ended up just hard-coding it here instead because that's how it's spec'd in the builder repo.
  -  Should the application mask appear in the API?



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-06-30 00:49:21 +00:00
Pawan Dhananjay
5de00b7ee8 Unify execution layer endpoints (#3214)
## Issue Addressed

Resolves #3069 

## Proposed Changes

Unify the `eth1-endpoints` and `execution-endpoints` flags in a backwards compatible way as described in https://github.com/sigp/lighthouse/issues/3069#issuecomment-1134219221

Users have 2 options:
1. Use multiple non auth execution endpoints for deposit processing pre-merge
2. Use a single jwt authenticated execution endpoint for both execution layer and deposit processing post merge
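
Roughly, the two configurations look like this (a sketch based on the flags named above; the `--execution-jwt` flag name is an assumption, check `lighthouse bn --help` for the exact names):

```bash
# Option 1: multiple unauthenticated endpoints, deposit processing only (pre-merge).
lighthouse bn --eth1-endpoints http://localhost:8545,http://fallback:8545

# Option 2: a single JWT-authenticated endpoint for both roles (post-merge).
lighthouse bn --execution-endpoints http://localhost:8551 --execution-jwt /path/to/jwt.hex
```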

Related https://github.com/sigp/lighthouse/issues/3118

To enable jwt authenticated deposit processing, this PR removes the calls to `net_version` as the `net` namespace is not exposed in the auth server in execution clients. 
Moving away from using `networkId` is a good step in my opinion as it doesn't provide us with any added guarantees over `chainId`. See https://github.com/ethereum/consensus-specs/issues/2163 and https://github.com/sigp/lighthouse/issues/2115


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-06-29 09:07:09 +00:00
Michael Sproul
53b2b500db Extend block reward APIs (#3290)
## Proposed Changes

Add a new HTTP endpoint `POST /lighthouse/analysis/block_rewards` which takes a vec of `BeaconBlock`s as input and outputs the `BlockReward`s for them.
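
For example, the new endpoint could be queried like this (illustrative; the request body is a JSON array of `BeaconBlock`s, e.g. extracted from block JSON responses):

```bash
curl -X POST "http://localhost:5052/lighthouse/analysis/block_rewards" \
  -H "Content-Type: application/json" \
  --data @blocks.json
```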

Augment the `BlockReward` struct with the attestation data for attestations in the block, which simplifies access to this information from blockprint. Using attestation data I've been able to make blockprint up to 95% accurate across Prysm/Lighthouse/Teku/Nimbus. I hope to go even higher using a bunch of synthetic blocks produced for Prysm/Nimbus/Lodestar, which are underrepresented in the current training data.
2022-06-29 04:50:37 +00:00
Michael Sproul
36453929d5 Update Cross config for v0.2.2 (#3286)
## Proposed Changes

Update `Cross.toml` for the recently released Cross v0.2.2. This allows us to remove the dependency on my fork of the Cross Docker image, which was a maintenance burden and prone to bit-rot. This PR puts us back in sync with upstream Cross.

## Additional Info

Due to some bindgen errors on the default Cross images we seemingly need a full `clang-3.9` install. The `libclang-3.9-dev` package was found to be insufficient due to `stdarg.h` being missing.

In order to continue building locally all Lighthouse devs should update their local cross version with `cargo install cross`.
2022-06-29 04:50:36 +00:00
Paul Hauner
45b2eb18bc v2.3.2-rc.0 (#3289)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2022-06-28 03:03:30 +00:00
Paul Hauner
f3a1b5da31 Update Sepolia TTD (#3288)
## Issue Addressed

NA

## Proposed Changes

Update Sepolia TTD as per https://github.com/eth-clients/merge-testnets/pull/21

## Additional Info

NA
2022-06-27 22:50:27 +00:00
Pawan Dhananjay
7acfbd89ee Recover from NonConsecutive eth1 errors (#3273)
## Issue Addressed

Fixes #1864 and a bunch of other closed but unresolved issues.

## Proposed Changes

Allows the deposit caching to recover from `NonConsecutive` deposit errors by resetting the last processed block to the last valid deposit's block number. Still not sure of the underlying cause of this error, but this should recover the cache so we don't need `--eth1-purge-cache` anymore 🎉 

A huge thanks to @one-three-three-seven for reproducing the error and providing the data that helped testing out the fix 🙌 

Still needs a few more tests.
2022-06-26 23:10:58 +00:00
Akihito Nakano
082ed35bdc Test the pruning of excess peers using randomly generated input (#3248)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3092


## Proposed Changes

Added property-based tests for the pruning implementation. A randomly generated input for the test contains connection direction, subnets, and scores.
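
A minimal sketch of this style of test, using `proptest` and toy types standing in for the real peer manager (the actual tests differ):

```
use proptest::prelude::*;

#[derive(Debug, Clone)]
struct Peer {
    outbound_only: bool,
    score: i32,
}

/// Toy pruning function: keep only the `max_peers` highest-scored peers.
fn prune_excess_peers(mut peers: Vec<Peer>, max_peers: usize) -> Vec<Peer> {
    peers.sort_by_key(|p| std::cmp::Reverse(p.score));
    peers.truncate(max_peers);
    peers
}

proptest! {
    // Invariant: pruning never leaves more than `max_peers` connected.
    #[test]
    fn prune_respects_limit(
        scores in proptest::collection::vec(any::<i32>(), 0..200),
        max_peers in 0usize..100,
    ) {
        let peers: Vec<Peer> = scores
            .into_iter()
            .map(|score| Peer { outbound_only: false, score })
            .collect();
        prop_assert!(prune_excess_peers(peers, max_peers).len() <= max_peers);
    }
}
```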


## Additional Info

I left some comments on this PR, what I have tried, and [a question](https://github.com/sigp/lighthouse/pull/3248#discussion_r891981969).

Co-authored-by: Diva M <divma@protonmail.com>
2022-06-25 22:22:34 +00:00
Michael Sproul
d21f083777 Add more paths to HTTP API metrics (#3282)
## Proposed Changes

Expand the set of paths tracked by the HTTP API metrics to include all paths hit by the validator client.

These paths were only partially updated for Altair, so we were missing some of the sync committee and v2 APIs.
2022-06-23 05:19:21 +00:00
Paul Hauner
748658e32c Add some debug logs for checkpoint sync (#3281)
## Issue Addressed

NA

## Proposed Changes

I used these logs when debugging a spurious failure with Infura and thought they might be nice to have around permanently.

There's no changes to functionality in this PR, just some additional `debug!` logs.

## Additional Info

NA
2022-06-23 05:19:20 +00:00
Divma
7af5742081 Deprecate step param in BlocksByRange RPC request (#3275)
## Issue Addressed

Deprecates the step parameter in the blocks by range request

## Proposed Changes

- Modifies the BlocksByRangeRequest type to remove the step parameter and everywhere we took it into account before
- Adds a new type to still handle coding and decoding of requests that use the parameter

## Additional Info
I went with a deprecation of the type itself so that requests received outside `lighthouse_network` don't even need to deal with this parameter. After the deprecation period, removing the old BlocksByRange request should be straightforward
2022-06-22 16:23:34 +00:00
Divma
2063c0fa0d Initial work to remove engines fallback from the execution_layer crate (#3257)
## Issue Addressed

Part of #3160 

## Proposed Changes
Use only the first URL given for the execution engine; if more than one is provided, log it.
This change only moves us from multiple engines to one. The amount of code cleanup that can and should be done going forward is not small and would interfere with ongoing PRs, so I'm keeping the changes intentionally very minimal.

## Additional Info

Future works:
- In [`EngineError`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L173-L177)) the id is not needed since it now has no meaning.
- The [`first_success_without_retry`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L348-L351)) function can return a single error.
- The [`first_success`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L324)) function can return a single error.
- After the redundancy is removed for the builders, we can probably make [`EngineErrors`](9c429d0764/beacon_node/execution_layer/src/lib.rs (L69)) carry a single error.
- Merge the [`Engines`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L161-L165)) struct and [`Engine`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L62-L67)).
- Fix the associated configurations and CLI params. Not sure if both are done in https://github.com/sigp/lighthouse/pull/3214.

In general I think those changes can be done incrementally and in individual pull requests.
2022-06-22 14:27:16 +00:00
Michael Sproul
8faaa35b58 Enable malloc metrics for the VC (#3279)
## Issue Addressed

Following up from https://github.com/sigp/lighthouse/pull/3223#issuecomment-1158718102, it has been observed that the validator client uses vastly more memory in some compilation configurations than others. Compiling with Cross and then putting the binary into an Ubuntu 22.04 image seems to use 3x more memory than compiling with Cargo directly on Debian bullseye.

## Proposed Changes

Enable malloc metrics for the validator client. This will hopefully allow us to see the difference between the two compilation configs and compare heap fragmentation. This PR doesn't enable malloc tuning for the VC because it was found to perform significantly worse. The `--disable-malloc-tuning` flag is repurposed to just disable the metrics.
2022-06-20 23:20:30 +00:00
Michael Sproul
efebf712dd Avoid cloning snapshots during sync (#3271)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2944

## Proposed Changes

Remove snapshots from the cache during sync rather than cloning them. This reduces unnecessary cloning and memory fragmentation during sync.

## Additional Info

This PR relies on the fact that the `block_delay` cache is not populated for blocks from sync. Relying on block delay may have the side effect that a change in `block_delay` calculation could lead to: a) more clones, if block delays are added for syncing blocks or b) fewer clones, if blocks near the head are erroneously provided without a `block_delay`. Case (a) would be a regression to the current status quo, and (b) is low-risk given we know that the snapshot cache is currently susceptible to misses (hence `tree-states`).
2022-06-20 23:20:29 +00:00
eklm
a9e158663b Fix validator_monitor_prev_epoch_ metrics (#2911)
## Issue Addressed

#2820

## Proposed Changes

The problem is that the validator_monitor_prev_epoch metrics are updated only if an `EpochSummary` is present in the summaries map for the previous epoch, which is not the case for offline validators. This PR ensures that an `EpochSummary` is inserted into the summaries map for offline validators as well.
2022-06-20 04:06:30 +00:00
Pawan Dhananjay
f428719761 Do not penalize peers on execution layer offline errors (#3258)
## Issue Addressed

Partly resolves https://github.com/sigp/lighthouse/issues/3032

## Proposed Changes

Extracts some of the functionality of #3094 into a separate PR as the original PR requires a bit more work.
Do not unnecessarily penalize peers when we fail to validate received execution payloads because our execution layer is offline.
2022-06-19 23:13:40 +00:00
Divma
21b3425a12 Update cargo lockfile to fix RUSTSEC-2022-0025, RUSTSEC-2022-0026 and RUSTSEC-2022-0027 (#3278)
## Issue Addressed

Fixes [RUSTSEC-2022-0025](https://rustsec.org/advisories/RUSTSEC-2022-0025), [RUSTSEC-2022-0026](https://rustsec.org/advisories/RUSTSEC-2022-0026) and [RUSTSEC-2022-0027](https://rustsec.org/advisories/RUSTSEC-2022-0027) raised in [this test run](https://github.com/sigp/lighthouse/runs/6943343852?check_suite_focus=true)

## Proposed Changes
a `cargo update` was enough

## Additional Info
2022-06-18 23:59:43 +00:00
Pawan Dhananjay
7aeb9f9ecd Add sepolia config (#3268)
## Issue Addressed

N/A

## Proposed Changes

Add network config for sepolia from https://github.com/eth-clients/merge-testnets/pull/14
2022-06-17 03:10:52 +00:00
Paul Hauner
564d7da656 v2.3.1 (#3262)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2022-06-14 05:25:38 +00:00
Divma
3dd50bda11 Improve substream management (#3261)
2022-06-10 06:58:50 +00:00
Paul Hauner
11d80a6a38 Optimise per_epoch_processing low-hanging-fruit (#3254)
## Issue Addressed

NA

## Proposed Changes

- Uses a `Vec` in `SingleEpochParticipationCache` rather than `HashMap` to speed up processing times at the cost of memory usage.
- Cache the result of `integer_sqrt` rather than recomputing for each validator.
- Cache `state.previous_epoch` rather than recomputing it for each validator.
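
These caching changes follow a simple pattern, sketched below with hypothetical names (the real reward formula differs): hoist values that are constant for the whole epoch out of the per-validator loop.

```
/// Stand-in for the `integer-sqrt` crate that Lighthouse uses
/// (Newton's method).
fn integer_sqrt(n: u64) -> u64 {
    if n < 2 {
        return n;
    }
    let mut x = n / 2;
    let mut y = (x + n / x) / 2;
    while y < x {
        x = y;
        y = (x + n / x) / 2;
    }
    x
}

fn total_base_rewards(total_active_balance: u64, effective_balances: &[u64]) -> u64 {
    // Computed once per epoch transition, not once per validator.
    let sqrt_total_active_balance = integer_sqrt(total_active_balance).max(1);
    effective_balances
        .iter()
        .map(|balance| balance / sqrt_total_active_balance)
        .sum()
}
```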

### Benchmarks

Benchmarks on a recent mainnet state using #3252 to get timing.

#### Without this PR

```
lcli skip-slots --state-path /tmp/state-0x3cdc.ssz --partial-state-advance --slots 32 --state-root 0x3cdc33cd02713d8d6cc33a6dbe2d3a5bf9af1d357de0d175a403496486ff845e --runs 10
[2022-06-09T08:21:02Z INFO  lcli::skip_slots] Using mainnet spec
[2022-06-09T08:21:02Z INFO  lcli::skip_slots] Advancing 32 slots
[2022-06-09T08:21:02Z INFO  lcli::skip_slots] Doing 10 runs
[2022-06-09T08:21:02Z INFO  lcli::skip_slots] State path: "/tmp/state-0x3cdc.ssz"
SSZ decoding /tmp/state-0x3cdc.ssz: 43ms
[2022-06-09T08:21:03Z INFO  lcli::skip_slots] Run 0: 245.718794ms
[2022-06-09T08:21:03Z INFO  lcli::skip_slots] Run 1: 245.364782ms
[2022-06-09T08:21:03Z INFO  lcli::skip_slots] Run 2: 255.866179ms
[2022-06-09T08:21:04Z INFO  lcli::skip_slots] Run 3: 243.838909ms
[2022-06-09T08:21:04Z INFO  lcli::skip_slots] Run 4: 250.431425ms
[2022-06-09T08:21:04Z INFO  lcli::skip_slots] Run 5: 248.68765ms
[2022-06-09T08:21:04Z INFO  lcli::skip_slots] Run 6: 262.051113ms
[2022-06-09T08:21:05Z INFO  lcli::skip_slots] Run 7: 264.293967ms
[2022-06-09T08:21:05Z INFO  lcli::skip_slots] Run 8: 293.202007ms
[2022-06-09T08:21:05Z INFO  lcli::skip_slots] Run 9: 264.552017ms
```

#### With this PR:

```
lcli skip-slots --state-path /tmp/state-0x3cdc.ssz --partial-state-advance --slots 32 --state-root 0x3cdc33cd02713d8d6cc33a6dbe2d3a5bf9af1d357de0d175a403496486ff845e --runs 10
[2022-06-09T08:57:59Z INFO  lcli::skip_slots] Run 0: 73.898678ms
[2022-06-09T08:57:59Z INFO  lcli::skip_slots] Run 1: 75.536978ms
[2022-06-09T08:57:59Z INFO  lcli::skip_slots] Run 2: 75.176104ms
[2022-06-09T08:57:59Z INFO  lcli::skip_slots] Run 3: 76.460828ms
[2022-06-09T08:57:59Z INFO  lcli::skip_slots] Run 4: 75.904195ms
[2022-06-09T08:58:00Z INFO  lcli::skip_slots] Run 5: 75.53077ms
[2022-06-09T08:58:00Z INFO  lcli::skip_slots] Run 6: 74.745572ms
[2022-06-09T08:58:00Z INFO  lcli::skip_slots] Run 7: 75.823489ms
[2022-06-09T08:58:00Z INFO  lcli::skip_slots] Run 8: 74.892055ms
[2022-06-09T08:58:00Z INFO  lcli::skip_slots] Run 9: 76.333569ms
```

## Additional Info

NA
2022-06-10 04:29:28 +00:00
Michael Sproul
1d016a83f2 Lint against panicky calls in async functions (#3250)
## Description

Add a new lint to CI that attempts to detect calls to functions like `block_on` from async execution contexts. This lint was written from scratch exactly for this purpose, on my fork of Clippy: https://github.com/michaelsproul/rust-clippy/tree/disallow-from-async

## Additional Info

- I've successfully detected the previous two issues we had with `block_on` by running the linter on the commits prior to each of these PRs: https://github.com/sigp/lighthouse/pull/3165, https://github.com/sigp/lighthouse/pull/3199.
- The lint runs on CI with `continue-on-error: true` so that if it fails spuriously it won't block CI.
- I think it would be good to merge this PR before https://github.com/sigp/lighthouse/pull/3244 so that we can lint the extensive executor-related changes in that PR.
- I aim to upstream the lint to Clippy, at which point building a custom version of Clippy from my fork will no longer be necessary. I imagine this will take several weeks or months though, because the code is currently a bit hacky and will need some renovations to pass review.
2022-06-10 04:29:27 +00:00
Michael Sproul
452b46a7af Pin MDBX at last version with Win/Mac support (#3246)
## Issue Addressed

Newer versions of MDBX have removed Windows and macOS support, so this PR pins MDBX at the last working version to prevent an accidental regression via `cargo update`.

## Additional Info

This is a short-term solution, if our pinned version of MDBX turns out to be buggy we will need to consider backporting patches from upstream to our own fork.
2022-06-10 04:29:26 +00:00
Divma
56b4cd88ca minor libp2p upgrade (#3259)
## Issue Addressed

Upgrades libp2p
2022-06-09 23:48:51 +00:00
Mac L
9c429d0764 Only use authenticated endpoints during EE integration testing (#3253)
## Issue Addressed

Failures in our CI integration tests for Geth.

## Proposed Changes

Only connect to the authenticated execution endpoints during execution tests.
This is necessary now that it is impossible to connect to the `engine` api on an unauthenticated endpoint.
See https://github.com/ethereum/go-ethereum/pull/24997

## Additional Info

As these tests break semi-regularly, I have kept logs enabled to ease future debugging.
I've also updated the Nethermind tests, although these weren't broken. This should future-proof us if Nethermind decides to follow suit with Geth.
2022-06-09 10:47:03 +00:00
Divma
cfd26d25e0 do not count sync batch attempts when peer is not at fault (#3245)
## Issue Addressed
Currently we count a failed attempt for a syncing chain even if the peer is not at fault. This makes us do more work if the chain fails and heavily penalizes peers, when we could simply retry. Inspired by a proposal I made to #3094.

## Proposed Changes
If a batch fails but the peer is not at fault, do not count the attempt
Also removes some annoying logs

## Additional Info
We still keep a counter of ignored attempts, just in case.
2022-06-07 02:35:56 +00:00
Divma
58e223e429 update libp2p (#3233)
## Issue Addressed
na

## Proposed Changes
Updates libp2p to https://github.com/libp2p/rust-libp2p/pull/2662

## Additional Info
From the comments on the relevant PRs listed, we should pay attention to peer management consistency, but I don't think anything weird will happen.
This is running on the Prater nodes tok and sin.
2022-06-07 02:35:55 +00:00
Michael Sproul
54cf94ea59 Fix per-slot timer in presence of clock changes (#3243)
## Issue Addressed

Fixes a timing issue that results in spurious fork choice notifier failures:

```
WARN Error signalling fork choice waiter     slot: 3962270, error: ForkChoiceSignalOutOfOrder { current: Slot(3962271), latest: Slot(3962270) }, service: beacon
```

There’s a fork choice run that is scheduled to run at the start of every slot by the `timer`, which creates a 12s interval timer when the beacon node starts up. The problem is that if there’s a bit of clock drift that gets corrected via NTP (or a leap second for that matter) then these 12s intervals will cease to line up with the start of the slot. This then creates the mismatch in slot number that we see above.

Lighthouse also runs fork choice 500ms before the slot begins, and these runs are what is conflicting with the start-of-slot runs. This means that the warning in current versions of Lighthouse is mostly cosmetic because fork choice is up to date with all but the most recent 500ms of attestations (which usually isn’t many).

## Proposed Changes

Fix the per-slot timer so that it continually re-calculates the duration to the start of the next slot and waits for that.

A side-effect of this change is that we may skip slots if the per-slot task takes >12s to run, but I think this is an unlikely scenario and an acceptable compromise.
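
A rough sketch of the fix (hypothetical constants; the real implementation uses Lighthouse's slot clock): the sleep duration is derived from the wall clock on every iteration, so clock corrections are absorbed rather than accumulated.

```
use std::time::{Duration, SystemTime, UNIX_EPOCH};

const GENESIS_UNIX_SECS: u64 = 1_606_824_023; // mainnet genesis, for illustration
const SECONDS_PER_SLOT: u64 = 12;

/// Duration until the next slot boundary, recomputed from the wall
/// clock on every call so NTP adjustments can't cause drift.
fn duration_to_next_slot() -> Duration {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_secs();
    let since_genesis = now.saturating_sub(GENESIS_UNIX_SECS);
    Duration::from_secs(SECONDS_PER_SLOT - (since_genesis % SECONDS_PER_SLOT))
}

#[tokio::main]
async fn main() {
    loop {
        // Re-calculate each iteration instead of using a fixed interval.
        tokio::time::sleep(duration_to_next_slot()).await;
        // ... run the per-slot task here ...
    }
}
```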
2022-06-06 23:52:32 +00:00
Divma
493c2c037c reduce reprocess queue/channel sizes (#3239)
## Issue Addressed

Reduces the effect of late blocks on overall node buildup

## Proposed Changes

Change the capacity of the channels used to send work for reprocessing in the beacon processor, and to send work back to the main processor task, to 75% of the capacity of the channel for receiving new events.

## Additional Info

The issues we've seen suggest we should still evaluate node performance under stress, with late blocks being a big factor. 
Other changes that could help: 
1. Right now we have a cap on attestations queued for reprocessing that applies to the sum of aggregated and unaggregated attestations. We could consider adding a separate cap that favors aggregated ones.
2. Solving #2848.
2022-06-06 23:52:31 +00:00
Akihito Nakano
a6d2ed6119 Fix: PeerManager doesn't remove "outbound only" peers which should be pruned (#3236)
## Issue Addressed

This is one step to address https://github.com/sigp/lighthouse/issues/3092 before introducing `quickcheck`.

I noticed an issue while reading the pruning implementation, `PeerManager::prune_excess_peers()`. If a peer meets both of the following conditions, **the `outbound_peers_pruned` counter increases but the peer is not pushed to `peers_to_prune`**:

- [outbound only](1e4ac8a4b9/beacon_node/lighthouse_network/src/peer_manager/mod.rs (L1018))
- [min_subnet_count <= MIN_SYNC_COMMITTEE_PEERS](1e4ac8a4b9/beacon_node/lighthouse_network/src/peer_manager/mod.rs (L1047))

As a result, PeerManager doesn't remove "outbound" peers which should be pruned.

Note: [`subnet_to_peer`](e0d673ea86/beacon_node/lighthouse_network/src/peer_manager/mod.rs (L999)) (a HashMap) doesn't guarantee a particular order of iteration, so whether the test fails depends on the order of iteration.
2022-06-06 05:51:10 +00:00
Paul Hauner
3d51f24717 Update Ropsten TTD (#3240)
## Issue Addressed

NA

## Proposed Changes

Updates the Ropsten TTD as per: https://blog.ethereum.org/2022/06/03/ropsten-merge-ttd/

## Additional Info

NA
2022-06-04 21:24:39 +00:00
Michael Sproul
47d57a290b Improve eth1 block cache sync (for Ropsten) (#3234)
## Issue Addressed

Fix for the eth1 cache sync issue observed on Ropsten.

## Proposed Changes

Ropsten blocks are so infrequent that they broke our algorithm for downloading eth1 blocks. We currently try to download forwards from the last block in our cache to the block with block number [`remote_highest_block - FOLLOW_DISTANCE + FOLLOW_DISTANCE / ETH1_BLOCK_TIME_TOLERANCE_FACTOR`](6f732986f1/beacon_node/eth1/src/service.rs (L489-L492)). With the tolerance set to 4 this is insufficient because we lag by 1536 blocks, which is more like ~14 hours on Ropsten. This results in us having an incomplete eth1 cache, because we should cache all blocks between -16h and -8h. Even if we were to set the tolerance to 2 for the largest allowance, we would only look back 1024 blocks which is still more than 8 hours.

For example consider this block https://ropsten.etherscan.io/block/12321390. The block from 1536 blocks earlier is 14 hours and 20 minutes before it: https://ropsten.etherscan.io/block/12319854. The block from 1024 blocks earlier is https://ropsten.etherscan.io/block/12320366, 8 hours and 48 minutes before.

- This PR introduces a new CLI flag called `--eth1-cache-follow-distance` which can be used to set the distance manually.
- A new dynamic catchup mechanism is added which detects when the cache is lagging the true eth1 chain and tries to download more blocks within the follow distance in order to catch up.
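
For example (the value is purely illustrative, not a recommendation):

```bash
# Manually widen the eth1 cache follow distance on a slow chain.
lighthouse bn --eth1-cache-follow-distance 1024
```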
2022-06-03 06:05:03 +00:00
Mac L
20071975c7 Switch Nethermind integration tests to use master branch (#3228)
## Issue Addressed

N/A

## Proposed Changes

Preemptively switch Nethermind integration tests to use the `master` branch along with the baked in `kiln` config.

## Additional Info

There have been some spurious timeouts across CI so this also increases the timeout to 20s.
2022-06-03 03:22:55 +00:00
Mac L
55ac423872 Emit log when fee recipient values are inconsistent (#3202)
## Issue Addressed

#3156

## Proposed Changes

Emit a `WARN` log whenever the value of `fee_recipient` as returned from the EE is different from the value of `suggested_fee_recipient` as set on the BN, for example by the `--suggested-fee-recipient` CLI flag.

## Additional Info

I have set the log level to `WARN` since it is legal behaviour (meaning it isn't really an error but is important to know when it is occurring).

If we feel like this behaviour is almost always undesired (caused by a misconfiguration or malicious EE) then an `ERRO` log would be more appropriate. Happy to change it in that case.
2022-06-03 03:22:54 +00:00
Pawan Dhananjay
8e1305a3d2 Use a stable tag for ubuntu in dockerfile (#3231)
## Issue Addressed

N/A

## Proposed Changes

Use a stable version of the Ubuntu base image in the Dockerfile instead of `latest`. This will help narrow down issues with Docker images.
2022-05-31 06:09:12 +00:00
Michael Sproul
cc4b778b1f Inline safe_arith methods (#3229)
## Proposed Changes

Speed up epoch processing by around 10% by inlining methods from the `safe_arith` crate.

The Rust standard library uses `#[inline]` for the `checked_` functions that we're wrapping, so it makes sense for us to inline them too.
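
A sketch of the pattern (simplified; the real `safe_arith` crate differs in detail):

```
#[derive(Debug, PartialEq)]
pub struct ArithError;

pub trait SafeArith: Sized {
    fn safe_add(self, other: Self) -> Result<Self, ArithError>;
}

impl SafeArith for u64 {
    // Without `#[inline]`, this thin wrapper can defeat the inlining
    // of the std `checked_add` it delegates to across crate boundaries.
    #[inline]
    fn safe_add(self, other: u64) -> Result<u64, ArithError> {
        self.checked_add(other).ok_or(ArithError)
    }
}
```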

## Additional Info

I conducted a brief statistical test on the block at slot [3858336](https://beaconcha.in/block/3858336) applied to the state at slot 3858335, which requires an epoch transition. The command used for testing was:

```
lcli transition-blocks --testnet-dir ./common/eth2_network_config/built_in_network_configs/mainnet --no-signature-verification state.ssz block.ssz output.ssz
``` 

The testing found that inlining reduced the epoch transition time from 398ms to 359ms, a reduction of 9.77%, which was found to be statistically significant with a two-tailed t-test (p < 0.01). Data and intermediate calculations can be found here: https://docs.google.com/spreadsheets/d/1tlf3eFjz3dcXeb9XVOn21953uYpc9RdQapPtcHGH1PY
2022-05-31 06:09:12 +00:00
Paul Hauner
16e49af8e1 Use genesis slot for node/syncing (#3226)
## Issue Addressed

NA

## Proposed Changes

Resolves this error log emitted from the VC prior to genesis:

```
WARN Unable connect to beacon node           error: ServerMessage(ErrorMessage { code: 500, message: "UNHANDLED_ERROR: UnableToReadSlot", stacktraces: [] })
```

## Additional Info

NA
2022-05-31 06:09:11 +00:00
Michael Sproul
98c8ac1a87 Fix typo in peer state transition log (#3224)
## Issue Addressed

We were logging `out_finalized_epoch` instead of `our_finalized_epoch`. I noticed this ages ago but only just got around to fixing it.

## Additional Info

I also reformatted the log line to respect the line length limit (`rustfmt` won't do it because it gets confused by the `;` in slog's log macros).
2022-05-31 06:09:10 +00:00
Michael Sproul
ee18f6a9f7 Add lcli indexed-attestations (#3221)
## Proposed Changes

It's reasonably often that we want to manually convert an attestation to indexed form. This PR adds an `lcli` command for doing this, using an SSZ state and a list of JSON attestations (as extracted from a JSON block) as input.
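
Usage might look something like this (flag names are assumptions; see `lcli indexed-attestations --help` for the real interface):

```bash
lcli indexed-attestations \
  --state /tmp/state.ssz \
  --attestations /tmp/attestations.json
```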
2022-05-31 06:09:08 +00:00
Pawan Dhananjay
af5da1244e Fix links in docs (#3219)
## Issue Addressed

N/A

## Proposed Changes

Fix the link for `advanced-release-candidates.md` in the lighthouse book and add it to the summary page.
2022-05-31 06:09:07 +00:00
Paul Hauner
6f732986f1 v2.3.0 (#3222)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

- Pending testing on our infra. **Please do not merge**
2022-05-30 01:35:10 +00:00
Paul Hauner
f675c865e2 Set Ropsten TTD to unrealistically high value (#3225)
## Issue Addressed

NA

## Proposed Changes

Updates Ropsten TTD as per https://github.com/eth-clients/merge-testnets/pull/11.

## Additional Info

NA
2022-05-27 04:29:46 +00:00
Divma
a7896a58cc move backfill sync jobs from highest priority to lowest (#3215)
## Issue Addressed
#3212 

## Proposed Changes
Move chain segments coming from back-fill syncing from highest priority to lowest

## Additional Info
If this does not solve the issue, next steps would be lowering the batch size for back-fill sync and, as a last resort, throttling the processing of these chain segments
2022-05-26 02:05:17 +00:00
Mac L
fd55373b88 Add new VC metrics for beacon node availability (#3193)
## Issue Addressed

#3154 

## Proposed Changes

Add three new metrics for the VC:
1. `vc_beacon_nodes_synced_count`
2. `vc_beacon_nodes_available_count`
3. `vc_beacon_nodes_total_count`

Their values mirror the values present in the following log line:
```
Apr 08 17:25:17.000 INFO Connected to beacon node(s) synced: 4, available: 4, total: 4, service: notifier
```
2022-05-26 02:05:16 +00:00
Paul Hauner
f4aa17ef85 v2.3.0-rc.0 (#3218)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2022-05-25 05:29:26 +00:00
Michael Sproul
229f883968 Avoid parallel fork choice runs during sync (#3217)
## Issue Addressed

Fixes an issue that @paulhauner found with the v2.3.0 release candidate whereby the fork choice runs introduced by #3168 tripped over each other during sync:

```
May 24 23:06:40.542 WARN Error signalling fork choice waiter     slot: 3884129, error: ForkChoiceSignalOutOfOrder { current: Slot(3884131), latest: Slot(3884129) }, service: beacon
```

This can occur because fork choice is called from the state advance _and_ the per-slot task. When one of these runs takes a long time it can end up finishing after a run from a later slot, tripping the error above. The problem is resolved by not running either of these fork choice calls during sync.

Additionally, these parallel fork choice runs were causing issues in the database:

```
May 24 07:49:05.098 WARN Found a chain that should already have been pruned, head_slot: 92925, head_block_root: 0xa76c7bf1b98e54ed4b0d8686efcfdf853484e6c2a4c67e91cbf19e5ad1f96b17, service: beacon
May 24 07:49:05.101 WARN Database migration failed               error: HotColdDBError(FreezeSlotError { current_split_slot: Slot(92608), proposed_split_slot: Slot(92576) }), service: beacon
```

In this case, two fork choice calls triggering the finalization processing were being processed out of order due to differences in their processing time, causing the background migrator to try to advance finalization _backwards_ 😳. Removing the parallel fork choice runs from sync effectively addresses the issue, because these runs are most likely to have different finalized checkpoints (because of the speed at which fork choice advances during sync). In theory it's still possible to process updates out of order if any other fork choice runs end up completing out of order, but this should be much less common. Fixing out of order fork choice runs in general is difficult as it requires architectural changes like serialising fork choice updates through a single thread, or locking fork choice along with the head when it is mutated (https://github.com/sigp/lighthouse/pull/3175).

## Proposed Changes

* Don't run per-slot fork choice during sync (if head is older than 4 slots)
* Don't run state-advance fork choice during sync (if head is older than 4 slots)
* Check for monotonic finalization updates in the background migrator. This is a good defensive check to have, and I'm not sure why we didn't have it before (we may have had it and wrongly removed it).
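
A minimal sketch of the monotonic check in the final bullet (hypothetical types, not the actual migrator code):

```
#[derive(Debug)]
enum MigrationError {
    FinalizationWentBackwards { current: u64, proposed: u64 },
}

/// Refuse to move the freezer split point backwards.
fn check_monotonic_split(current_split_slot: u64, proposed_split_slot: u64) -> Result<(), MigrationError> {
    if proposed_split_slot < current_split_slot {
        return Err(MigrationError::FinalizationWentBackwards {
            current: current_split_slot,
            proposed: proposed_split_slot,
        });
    }
    Ok(())
}
```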
2022-05-25 03:27:30 +00:00
Michael Sproul
60449849e2 Document database migrations (#3203)
## Proposed Changes

Add documentation for the `lighthouse db migrate` command, which users will be able to use to downgrade from Lighthouse v2.3.0 on non-merge networks (mainnet & Prater).

I think it's important to get this into the live instance of the book so we can link to it from the v2.3.0 release notes.
2022-05-23 03:52:32 +00:00
Michael Sproul
a72154eda0 Decrease proposer boost to 40% (#3201)
## Issue Addressed

https://github.com/ethereum/consensus-specs/pull/2895

## Proposed Changes

Lower the proposer boost to 40%, which is a trade-off against different types of attacks.

## Additional Info

This PR also enables proposer boost on Ropsten assuming that this PR will be merged: https://github.com/eth-clients/merge-testnets/pull/10
2022-05-23 03:52:31 +00:00
Paul Hauner
7a64994283 Call per_slot_task from a blocking thread (v2) (#3199)
*This PR was adapted from @pawanjay176's work in #3197.*

## Issue Addressed

Fixes a regression in https://github.com/sigp/lighthouse/pull/3168

## Proposed Changes

https://github.com/sigp/lighthouse/pull/3168 added calls to `fork_choice` in the `BeaconChain::per_slot_task` function. This leads to a panic because `per_slot_task` is called from an async context and fork choice ultimately calls `block_on`.

This PR changes the timer to call the `per_slot_task` function in a blocking thread.

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
2022-05-20 23:05:07 +00:00
Paul Hauner
2291a14871 Remove build status badge from README (#3195)
## Issue Addressed

Removes the build status badge from the main README.md. I don't think it actually serves a purpose and it also has the downside that a spurious failure gives us a red badge. For example, v2.2.1 failed with a [spurious failure](https://github.com/sigp/lighthouse/runs/5984392665?check_suite_focus=true) and I can't see a way to re-trigger that run. It will be red until our next release.

The same test suite runs when we merge into `unstable`, so those tests must have already passed in order for the commits to get onto `stable` (assuming our workflow is followed). Github will send notifications on failed CI, so we'll still be alerted to a failure without checking this badge.
2022-05-20 05:02:14 +00:00
Michael Sproul
6eaeaa542f Fix Rust 1.61 clippy lints (#3192)
## Issue Addressed

This fixes the low-hanging Clippy lints introduced in Rust 1.61 (due any hour now). It _ignores_ one lint, because fixing it requires a structural refactor of the validator client that needs to be done delicately. I've started on that refactor and will create another PR that can be reviewed in more depth in the coming days. I think we should merge this PR in the meantime to unblock CI.
2022-05-20 05:02:13 +00:00
Paul Hauner
aa3e67de4a Add Ropsten configuration (#3184)
## Issue Addressed

NA

## Proposed Changes

Adds the configuration for the upcoming merge of the Ropsten network, as per:

https://github.com/eth-clients/merge-testnets/pull/9

Use the Ropsten network with: `lighthouse --network ropsten`

## Additional Info

This is still a work-in-progress. We should wait for the eth-clients/merge-testnets PR to be approved before merging this into our `unstable`.
2022-05-20 05:02:12 +00:00
Michael Sproul
8fa032c8ae Run fork choice before block proposal (#3168)
## Issue Addressed

Upcoming spec change https://github.com/ethereum/consensus-specs/pull/2878

## Proposed Changes

1. Run fork choice at the start of every slot, and wait for this run to complete before proposing a block.
2. As an optimisation, also run fork choice 3/4 of the way through the slot (at 9s), _dequeueing attestations for the next slot_.
3. Remove the fork choice run from the state advance timer that occurred before advancing the state.

## Additional Info

### Block Proposal Accuracy

This change makes us more likely to propose on top of the correct head in the presence of re-orgs with proposer boost in play. The main scenario that this change is designed to address is described in the linked spec issue.

### Attestation Accuracy

This change _also_ makes us more likely to attest to the correct head. Currently in the case of a skipped slot at `slot` we only run fork choice 9s into `slot - 1`. This means the attestations from `slot - 1` aren't taken into consideration, and any boost applied to the block from `slot - 1` is not removed (it should be). In the language of the linked spec issue, this means we are liable to attest to C, even when the majority voting weight has already caused a re-org to B.

### Why remove the call before the state advance?

If we've run fork choice at the start of the slot then it has already dequeued all the attestations from the previous slot, which are the only ones eligible to influence the head in the current slot. Running fork choice again is unnecessary (unless we run it for the next slot and try to pre-empt a re-org, but I don't currently think this is a great idea).

### Performance

Based on Prater testing this adds about 5-25ms of runtime to block proposal times, which are 500-1000ms on average (and spike to 5s+ sometimes due to state handling issues 😢 ). I believe this is a small enough penalty to enable it by default, with the option to disable it via the new flag `--fork-choice-before-proposal-timeout 0`. Upcoming work on block packing and state representation will also reduce block production times in general, while removing the spikes.

### Implementation

Fork choice gets invoked at the start of the slot via the `per_slot_task` function called from the slot timer. It then uses a condition variable to signal to block production that fork choice has been updated. This is a bit funky, but it seems to work. One downside of the timer-based approach is that it doesn't happen automatically in most of the tests. The test added by this PR has to trigger the run manually.
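
A stripped-down sketch of that condition-variable signalling (hypothetical; the real code also handles the timeout configured by `--fork-choice-before-proposal-timeout`):

```
use std::sync::{Condvar, Mutex};

struct ForkChoiceSignal {
    latest_slot: Mutex<u64>,
    condvar: Condvar,
}

impl ForkChoiceSignal {
    /// Called by the per-slot task after fork choice completes.
    fn notify(&self, slot: u64) {
        *self.latest_slot.lock().unwrap() = slot;
        self.condvar.notify_all();
    }

    /// Called by block production to wait until fork choice has run
    /// for `slot`.
    fn wait_for(&self, slot: u64) {
        let mut latest = self.latest_slot.lock().unwrap();
        while *latest < slot {
            latest = self.condvar.wait(latest).unwrap();
        }
    }
}
```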
2022-05-20 05:02:11 +00:00
realbigsean
54b58fdc01 Log out response status when we hit PayloadIdUnavailable (#3190)
## Issue Addressed

@z3n-chada is currently getting a `PayloadIdUnavailable` error when connecting lighthouse to Erigon and it's difficult to discern why, so this just logs the response status from the EE when we hit a `PayloadIdUnavailable` error

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-05-19 06:00:48 +00:00
Akihito Nakano
695f415590 Tiny improvement: PeerManager and maximum discovery query (#3182)
## Issue Addressed

As [`Discovery` bounds the maximum discovery query](e88b18be09/beacon_node/lighthouse_network/src/discovery/mod.rs (L328)), `PeerManager` does not need to handle it.

e88b18be09/beacon_node/lighthouse_network/src/discovery/mod.rs (L328)
2022-05-19 06:00:46 +00:00
Peter Davies
807283538f Add client authentication to Web3Signer validators (#3170)
## Issue Addressed

Web3Signer validators do not support client authentication. This means the `--tls-known-clients-file` option on Web3Signer can't be used with Lighthouse.

## Proposed Changes

Add two new fields to Web3Signer validators, `client_identity_path` and `client_identity_password`, which specify the path and password for a PKCS12 file containing a certificate and private key. If `client_identity_path` is present, use the certificate for SSL client authentication.

## Additional Info

I am successfully validating on Prater using Web3Signer with client authentication.
2022-05-18 23:14:37 +00:00
tim gretler
053625f113 Avoid unnecessary slashing protection when publishing blocks (#3188)
## Issue Addressed

#3141 

## Proposed Changes

Changes the algorithm for proposing blocks from

```
For each BN (first success):
   - Produce a block
   - Sign the block and store its root in the slashing protection DB
   - Publish the block
```
to
```
For each BN (first success):
   - Produce a block
Sign the block and store its root in the slashing protection DB
For each BN (first success):
   - Publish the block
```

Separating production from publishing ensures that we only add a signed block to the slashing DB once.
2022-05-18 06:50:51 +00:00
will
0428018cc1 Fix http header accept parsing problem (#3185)
## Issue Addressed

#3114

## Proposed Changes

1. Introduce the `mime` crate.
2. Parse the `Accept` field of the header with `mime`.

2022-05-18 06:50:50 +00:00
Mac L
def9bc660e Remove DB migrations for legacy database schemas (#3181)
## Proposed Changes

Remove support for DB migrations that upgrade from schemas below version 5. This is mostly for cosmetic/code-quality reasons, as upgrading from versions of Lighthouse this old will almost always require a re-sync anyway.

## Additional Info

The minimum supported database schema is now version 5.
2022-05-17 04:54:39 +00:00
Paul Hauner
db8a6f81ea Prevent attestation to future blocks from early attester cache (#3183)
## Issue Addressed

N/A

## Proposed Changes

Prevents the early attester cache from producing attestations to future blocks. This bug could result in a missed head vote if the BN was requested to produce an attestation for an earlier slot than the head block during the (usually) short window of time between verifying a block and setting it as the head.

This bug was noticed in an [Antithesis](https://andreagrieser.com/) test and diagnosed by @realbigsean. 

## Additional Info

NA
2022-05-17 01:51:25 +00:00
Paul Hauner
38050fa460 Allow TaskExecutor to be used in async tests (#3178)
# Description

Since the `TaskExecutor` currently requires a `Weak<Runtime>`, it's impossible to use it in an async test where the `Runtime` is created outside our scope. Whilst we *could* create a new `Runtime` instance inside the async test, dropping that `Runtime` would cause a panic (you can't drop a `Runtime` in an async context).

To address this issue, this PR creates the `enum Handle`, which supports either:

- A `Weak<Runtime>` (for use in our production code)
- A `Handle` to a runtime (for use in testing)

In theory, there should be no change to the behaviour of our production code (beyond some slightly different descriptions in HTTP 500 errors), or even our tests. If there is no change, you might ask *"why bother?"*. There are two PRs (#3070 and #3175) that are waiting on these fixes to introduce some new tests. Since we've added the EL to the `BeaconChain` (for the merge), we are now doing more async stuff in tests.

I've also added a `RuntimeExecutor` to the `BeaconChainTestHarness`. Whilst that's not immediately useful, it will become useful in the near future with all the new async testing.
2022-05-16 08:35:59 +00:00
François Garillot
3f9e83e840 [refactor] Refactor Option/Result combinators (#3180)
Code simplifications using `Option`/`Result` combinators to make pattern-matches a tad simpler. 
Opinions on these loosely held, happy to adjust in review.

Tool-aided by [comby-rust](https://github.com/huitseeker/comby-rust).
2022-05-16 01:59:47 +00:00
Mac L
e81a428ffb Remove lcli block packing analysis (#3179)
## Proposed Changes

Remove the `lcli` code which performs block packing analysis.
The `lcli` code has been deprecated by a more performant version available in the HTTP API added in #2879.

## Additional Info

Any applications depending on the `lcli` code should migrate to the version in the HTTP API.
The only feature which is unavailable in the API version is an estimate of live/dead validators. This was originally used to determine a closer approximation of block packing efficiencies, since offline validators have a disproportionate impact on efficiencies. However, the implementation in `lcli` is a poor approximation which cannot account for a multitude of factors. It is recommended to simply calculate relative efficiencies instead, or to use a more advanced method of determining live/dead validators.
2022-05-16 01:59:46 +00:00
Michael Sproul
bcdd960ab1 Separate execution payloads in the DB (#3157)
## Proposed Changes

Reduce post-merge disk usage by not storing finalized execution payloads in Lighthouse's database.

⚠️ **This is achieved in a backwards-incompatible way for networks that have already merged** ⚠️. Kiln users and shadow fork enjoyers will be unable to downgrade after running the code from this PR. The upgrade migration may take several minutes to run, and can't be aborted after it begins.

The main changes are:

- New column in the database called `ExecPayload`, keyed by beacon block root.
- The `BeaconBlock` column now stores blinded blocks only.
- Lots of places that previously used full blocks now use blinded blocks, e.g. analytics APIs, block replay in the DB, etc.
- On finalization:
    - `prune_abandoned_forks` deletes non-canonical payloads whilst deleting non-canonical blocks.
    - `migrate_db` deletes finalized canonical payloads whilst deleting finalized states.
- Conversions between blinded and full blocks are implemented in a compositional way, duplicating some work from Sean's PR #3134.
- The execution layer has a new `get_payload_by_block_hash` method that reconstructs a payload using the EE's `eth_getBlockByHash` call.
   - I've tested manually that it works on Kiln, using Geth and Nethermind.
   - This isn't necessarily the most efficient method, and new engine APIs are being discussed to improve this: https://github.com/ethereum/execution-apis/pull/146.
   - We're depending on the `ethers` master branch, due to lots of recent changes. We're also using a workaround for https://github.com/gakonst/ethers-rs/issues/1134.
- Payload reconstruction is used in the HTTP API via `BeaconChain::get_block`, which is now `async`. Due to the `async` fn, the `blocking_json` wrapper has been removed.
- Payload reconstruction is used in network RPC to serve blocks-by-{root,range} responses. Here the `async` adjustment is messier, although I think I've managed to come up with a reasonable compromise: the handlers take the `SendOnDrop` by value so that they can drop it on _task completion_ (after the `fn` returns). Still, this is introducing disk reads onto core executor threads, which may have a negative performance impact (thoughts appreciated).

## Additional Info

- [x] For performance it would be great to remove the cloning of full blocks when converting them to blinded blocks to write to disk. I'm going to experiment with a `put_block` API that takes the block by value, breaks it into a blinded block and a payload, stores the blinded block, and then re-assembles the full block for the caller.
- [x] We should measure the latency of blocks-by-root and blocks-by-range responses.
- [x] We should add integration tests that stress the payload reconstruction (basic tests done, issue for more extensive tests: https://github.com/sigp/lighthouse/issues/3159)
- [x] We should (manually) test the schema v9 migration from several prior versions, particularly as blocks have changed on disk and some migrations rely on being able to load blocks.

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-05-12 00:42:17 +00:00
Michael Sproul
be59fd9af7 Exclude EE build dirs from Docker context (#3174)
## Proposed Changes

Remove the bulky part of the EE integration test directory from the Docker build context so that we don't have to send 10GB of junk to the Docker daemon when building an image.
2022-05-09 23:43:31 +00:00
tim gretler
2877c29ca3 Add remotekey API support (#3162)
## Issue Addressed

#3068

## Proposed Changes

Adds support for remote key API.

## Additional Info

Needed to add an `is_local_keystore` argument to `delete_definition_and_keystore` to know whether we want to delete a local or remote key. Previously this wasn't necessary because remotekeys (web3signers) couldn't be deleted.
2022-05-09 07:21:38 +00:00
Akihito Nakano
bb7e7d72e8 Fix: no version info in homebrew package (#3167)
## Issue Addressed

Resolves #3102 

## Proposed Changes

- https://github.com/sigp/lighthouse/issues/3102#issuecomment-1114835063
- This is not an ideal solution, since the commit hash is missing from the version number, but I think it is sufficient.

## Additional Info

I've tested the following:

- `fallback` is updated via `change_version.sh`.

```shell
$ cd scripts/
$ ./change_version.sh 2.2.1 2.2.2
$ git diff ../common/lighthouse_version/src/lib.rs
```
```diff
@ common/lighthouse_version/src/lib.rs:20 @ pub const VERSION: &str = git_version!(
        // NOTE: using --match instead of --exclude for compatibility with old Git
        "--match=thiswillnevermatchlol"
    ],
-   prefix = "Lighthouse/v2.2.1-",
-   fallback = "Lighthouse/v2.2.1"
+   prefix = "Lighthouse/v2.2.2-",
+   fallback = "Lighthouse/v2.2.2"
);
```

- A package built without git info prints the expected version number (v2.2.1).

```shell
$ git archive HEAD --output=/tmp/lighthouse.zip
$ cd /tmp
$ unzip lighthouse.zip
$ cd lighthouse
$ cargo build --release
$ target/release/lighthouse --version
Lighthouse v2.2.1
BLS library: blst
SHA256 hardware acceleration: false
Specs: mainnet (true), minimal (false), gnosis (false)
```
2022-05-04 23:30:36 +00:00
Michael Sproul
ae47a93c42 Don't panic in forkchoiceUpdated handler (#3165)
## Issue Addressed

Fix a panic due to misuse of the Tokio executor when processing a forkchoiceUpdated response. We were previously calling `process_invalid_execution_payload` from the async function `update_execution_engine_forkchoice_async`, which resulted in a panic because `process_invalid_execution_payload` contains a call to fork choice, which ultimately calls `block_on`.

An example backtrace can be found here: https://gist.github.com/michaelsproul/ac5da03e203d6ffac672423eaf52fb20

## Proposed Changes

Wrap the call to `process_invalid_execution_payload` in a `spawn_blocking` so that `block_on` is no longer called from an async context.
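
The shape of the fix, roughly (names simplified from the description above):

```
use tokio::task;

// Stand-in for the real function, which internally calls fork choice
// and hence `block_on`.
fn process_invalid_execution_payload() {
    // ...
}

async fn on_forkchoice_updated_response() {
    // Run on a dedicated blocking thread so `block_on` is never
    // invoked from an async executor thread.
    task::spawn_blocking(process_invalid_execution_payload)
        .await
        .expect("blocking task panicked");
}
```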

## Additional Info

- I've been thinking about how to catch bugs like this with static analysis (a new Clippy lint).
- The payload validation tests have been re-worked to support distinct responses from the mock EE for newPayload and forkchoiceUpdated. Three new tests have been added covering the `Invalid`, `InvalidBlockHash` and `InvalidTerminalBlock` cases.
- I think we need a bunch more tests of different legal and illegal variations
2022-05-04 23:30:34 +00:00
Mac L
10795f1c86 Fix Execution Engine integration tests (#3163)
## Proposed Changes

Recently, changes to Nethermind's Kiln branch have broken our integration tests. 
This PR updates the chainspec to ensure proper compatibility with Kiln.
2022-04-21 14:59:09 +00:00
Age Manning
64d52c02ce Change the url of the blog post (#3161)
Shifts the blog domain to lighthouse-blog.sigmaprime.io
2022-04-21 14:59:08 +00:00
Pawan Dhananjay
db0beb5178 Poll shutdown timeout in rpc handler (#3153)
## Issue Addressed

N/A

## Proposed Changes

Previously, we were using `Sleep::is_elapsed()` to check if the shutdown timeout had triggered without polling the sleep. This PR polls the sleep timer.
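
A minimal sketch of the difference (illustrative, not the actual rpc handler): `is_elapsed()` only inspects the deadline, so unless the `Sleep` future is polled the timer never registers with the runtime; polling it, e.g. inside `select!`, is what drives the timeout.

```rust
use std::time::Duration;
use tokio::sync::mpsc::Receiver;
use tokio::time::sleep;

async fn handle_with_shutdown_timeout(mut work: Receiver<u64>) {
    let shutdown = sleep(Duration::from_secs(5));
    // `Sleep` must be pinned to be polled by reference in the loop.
    tokio::pin!(shutdown);

    loop {
        tokio::select! {
            // The sleep is polled here, so this branch fires once the
            // deadline passes; checking `shutdown.is_elapsed()` alone
            // would never have woken this task.
            _ = &mut shutdown => return,
            Some(msg) = work.recv() => {
                println!("handling message {}", msg);
            }
        }
    }
}
```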
2022-04-13 03:54:44 +00:00
Divma
580d2f7873 log upgrades + prevent dialing of disconnecting peers (#3148)
## Issue Addressed
We still ping peers that are considered in a disconnecting state

## Proposed Changes

Do not ping peers once we decide they are disconnecting
Upgrade logs about ignored rpc messages

## Additional Info
--
2022-04-13 03:54:43 +00:00
Paul Hauner
b49b4291a3 Disallow attesting to optimistic head (#3140)
## Issue Addressed

NA

## Proposed Changes

Disallow the production of attestations and retrieval of unaggregated attestations when they reference an optimistic head. Add tests to this end.

I also moved `BeaconChain::produce_unaggregated_attestation_for_block` to the `BeaconChainHarness`. It was only being used during tests, so it's nice to stop pretending it's production code. I also needed something that could produce attestations to optimistic blocks in order to simulate scenarios where the justified checkpoint is determined invalid (if no one would attest to an optimistic block, we could never justify it and then flip it to invalid).

## Additional Info

- ~~Blocked on #3126~~
2022-04-13 03:54:42 +00:00
Divma
7366266bd1 keep failed finalized chains to avoid retries (#3142)
## Issue Addressed

On very rare occasions we've seen most if not all of our peers on a chain with which we don't agree. Purging these peers can take a very long time (proportional to the number of retries of the chain), and meanwhile sync is caught in a loop trying the chain again and again. This change fast-tracks the purging of those peers by registering the failed chain, preventing retries for some time (30 seconds). Longer times could be dangerous since a chain can fail if, for example, a batch fails to download. Even in that case I think it's still acceptable to fast-track purging peers, since they are not providing the required info anyway.

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
2022-04-13 01:10:55 +00:00
Michael Sproul
aa72088f8f v2.2.1 (#3149)
## Issue Addressed

Addresses sync stalls on v2.2.0 (i.e. https://github.com/sigp/lighthouse/issues/3147).

## Additional Info

I've avoided doing a full `cargo update` because I noticed there's a new patch version of libp2p and thought it could do with some more testing.



Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-04-12 02:52:12 +00:00
Paul Hauner
c8edeaff29 Don't log crits for missing EE before Bellatrix (#3150)
## Issue Addressed

NA

## Proposed Changes

Fixes an issue introduced in #3088 which was causing unnecessary `crit` logs on networks without Bellatrix enabled.

## Additional Info

NA
2022-04-11 23:14:47 +00:00
Pawan Dhananjay
fff4dd6311 Fix rpc limits version 2 (#3146)
## Issue Addressed

N/A

## Proposed Changes

https://github.com/sigp/lighthouse/pull/3133 changed the rpc type limits to be fork aware, i.e. if our current fork based on wall clock slot is Altair, then we apply only altair rpc type limits. This is a bug because phase0 blocks can still be sent over rpc, and the phase0 minimum block size is smaller than the altair minimum block size. So a phase0 block with `size < SIGNED_BEACON_BLOCK_ALTAIR_MIN` will return an `InvalidData` error as it doesn't pass the rpc types bound check.

This error can be seen when we try syncing pre-altair blocks with size smaller than `SIGNED_BEACON_BLOCK_ALTAIR_MIN`.

This PR fixes the issue by also accounting for forks earlier than the current fork in the rpc limits calculation in the `rpc_block_limits_by_fork` function. I decided to hardcode the limits in the function because that seemed simpler than calculating previous forks based on the current fork and taking a min across forks. Adding a new fork variant is simple, and the limits can be easily checked in review.

Adds unit tests and modifies the syncing simulator to check syncing across fork boundaries.
The syncing simulator's block 1 is always of phase 0 minimum size (404 bytes), which is smaller than the altair minimum block size (since block 1 contains no attestations).
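
A sketch of the shape of the fix (the function and `SIGNED_BEACON_BLOCK_ALTAIR_MIN` are named in this PR; the other constants and all values here are illustrative, not the real SSZ bounds):

```rust
const SIGNED_BEACON_BLOCK_BASE_MIN: usize = 404;
const SIGNED_BEACON_BLOCK_BASE_MAX: usize = 1_000_000;
const SIGNED_BEACON_BLOCK_ALTAIR_MIN: usize = 564;
const SIGNED_BEACON_BLOCK_ALTAIR_MAX: usize = 1_100_000;

enum ForkName {
    Base,
    Altair,
}

struct RpcLimits {
    min: usize,
    max: usize,
}

// Account for every fork up to and including the current one: take the
// smallest minimum and the largest maximum, so a phase0-sized block is
// still accepted after the Altair fork.
fn rpc_block_limits_by_fork(current_fork: ForkName) -> RpcLimits {
    match current_fork {
        ForkName::Base => RpcLimits {
            min: SIGNED_BEACON_BLOCK_BASE_MIN,
            max: SIGNED_BEACON_BLOCK_BASE_MAX,
        },
        ForkName::Altair => RpcLimits {
            min: SIGNED_BEACON_BLOCK_BASE_MIN, // not the altair minimum
            max: SIGNED_BEACON_BLOCK_ALTAIR_MAX,
        },
    }
}
```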
2022-04-07 23:45:38 +00:00
ethDreamer
22002a4e68 Transition Block Proposer Preparation (#3088)
## Issue Addressed

- #3058 

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-04-07 14:03:34 +00:00
Aren49
5ff4013263 Fix SPRP default value in cli (#3145)
Changed SPRP to the correct default value of 8192.
2022-04-07 04:04:11 +00:00
Paul Hauner
8a40763183 Ensure VALID response from fcU updates protoarray (#3126)
## Issue Addressed

NA

## Proposed Changes

Ensures that a `VALID` response from a `forkchoiceUpdate` call will update that block in `ProtoArray`.

I also had to modify the mock execution engine so it wouldn't return valid when all payloads were supposed to be some other static value.

## Additional Info

NA
2022-04-05 20:58:17 +00:00
Paul Hauner
42cdaf5840 Add tests for importing blocks on invalid parents (#3123)
## Issue Addressed

NA

## Proposed Changes

- Adds more checks to prevent importing blocks atop a parent with an invalid execution payload.
- Adds a test for these conditions.

## Additional Info

NA
2022-04-05 20:58:16 +00:00
Michael Sproul
bac7c3fa54 v2.2.0 (#3139)
## Proposed Changes

Cut release v2.2.0 including proposer boost.

## Additional Info

I also updated the clippy lints for the imminent release of Rust 1.60, although LH v2.2.0 will continue to compile using Rust 1.58 (our MSRV).
2022-04-05 02:53:09 +00:00
Michael Sproul
99bb55472c Update mdbook runner to Ubuntu 20.04 (#3138)
## Issue Addressed

This resolves errors related to the glibc version of the downloaded mdbook binaries.

Currently the mdbook job is failing on `unstable`: https://github.com/sigp/lighthouse/runs/5785245715?check_suite_focus=true
2022-04-04 06:08:26 +00:00
Michael Sproul
4d0122444b Update and consolidate dependencies (#3136)
## Proposed Changes

I did some gardening 🌳 in our dependency tree:

- Remove duplicate versions of `warp` (git vs patch)
- Remove duplicate versions of lots of small deps: `cpufeatures`, `ethabi`, `ethereum-types`, `bitvec`, `nix`, `libsecp256k1`.
- Update MDBX (should resolve #3028). I tested and Lighthouse compiles on Windows 11 now.
- Restore `psutil` back to upstream
- Make some progress updating everything to rand 0.8. There are a few crates stuck on 0.7.

Hopefully this puts us on a better footing for future `cargo audit` issues, and improves compile times slightly.

## Additional Info

Some crates are held back by issues with `zeroize`. libp2p-noise depends on [`chacha20poly1305`](https://crates.io/crates/chacha20poly1305) which depends on zeroize < v1.5, and we can only have one version of zeroize because it's post 1.0 (see https://github.com/rust-lang/cargo/issues/6584). The latest version of `zeroize` is v1.5.4, which is used by the new versions of many other crates (e.g. `num-bigint-dig`). Once a new version of chacha20poly1305 is released we can update libp2p-noise and upgrade everything to the latest `zeroize` version.

I've also opened a PR to `blst` related to zeroize: https://github.com/supranational/blst/pull/111
2022-04-04 00:26:16 +00:00
Pawan Dhananjay
ab434bc075 Fix merge rpc length limits (#3133)
## Issue Addressed

N/A

## Proposed Changes

Fix the upper bound for blocks by root responses to be equal to the max merge block size instead of altair.
Further make the rpc response limits fork aware.
2022-04-04 00:26:15 +00:00
Michael Sproul
375e2b49b3 Conserve disk space by raising default SPRP (#3137)
## Proposed Changes

Increase the default `--slots-per-restore-point` to 8192 for a 4x reduction in freezer DB disk usage.

Existing nodes that use the previous default of 2048 will be left unchanged. Newly synced nodes (with or without checkpoint sync) will use the new 8192 default. 

Long-term we could do away with the freezer DB entirely for validator-only nodes, but this change is much simpler and grants us some extra space in the short term. We can also roll it out gradually across our nodes by purging databases one by one, while keeping the Ansible config the same.

## Additional Info

We ignore a change from 2048 to 8192 if the user hasn't set the 8192 explicitly. We fire a debug log in the case where we do ignore:

```
DEBG Ignoring slots-per-restore-point config in favour of on-disk value, on_disk: 2048, config: 8192
```
2022-04-01 07:16:25 +00:00
Michael Sproul
414197b06d Enable proposer boost on mainnet and GBC (#3131)
## Proposed Changes

Mitigate the fork choice attacks described in [_Three Attacks on Proof-of-Stake Ethereum_](https://arxiv.org/abs/2110.10086) by enabling proposer boost @ 70% on mainnet.

Proposer boost has been running with stability on Prater for a few months now, and is safe to roll out gradually on mainnet. I'll argue that the financial impact of rolling out gradually is also minimal.

Consider how a proposer-boosted validator handles two types of re-orgs:

## Ex ante re-org (from the paper)

In the mitigated attack, a malicious proposer releases their block at slot `n + 1` late so that it re-orgs the block at the slot _after_  them (at slot `n + 2`). Non-boosting validators will follow this re-org and vote for block `n + 1` in slot `n + 2`. Boosted validators will vote for `n + 2`. If the boosting validators are outnumbered, there'll be a re-org to the malicious block from `n + 1` and validators applying the boost will have their slot `n + 2` attestations miss head (and target on an epoch boundary). Note that all the attesters from slot `n + 1` are doomed to lose their head vote rewards, but this is the same regardless of boosting.

Therefore, Lighthouse nodes stand to miss slightly more head votes than other nodes if they are in the minority while applying the proposer boost. Once the proposer boost nodes gain a majority, this trend reverses.

## Ex post re-org (using the boost)

The other type of re-org is an ex post re-org using the strategy described here: https://github.com/sigp/lighthouse/pull/2860. With this strategy, boosted nodes will follow the attempted re-org and again lose a head vote if the re-org is unsuccessful. Once boosting is widely adopted, the re-orgs will succeed and the non-boosting validators will lose out.

I don't think there are (m)any validators applying this strategy, because it is irrational to attempt it before boosting is widely adopted. Therefore I think we can safely ignore this possibility.

## Risk Assessment

From observing re-orgs on mainnet I don't think ex ante re-orgs are very common. I've observed around 1 per day for the last month on my node (see: https://gist.github.com/michaelsproul/3b2142fa8fe0ff767c16553f96959e8c), compared to 2.5 ex post re-orgs per day.

Given one extra slot per day where attesting will cause a missed head vote, each individual validator has a 1/32 chance of being assigned to that slot. So we have an increase of 1/32 missed head votes per validator per day in expectation. Given that we currently see ~7 head vote misses per validator per day due to late/missing blocks (and re-orgs), this represents only a (1/32)/7 = 0.45% increase in missed head votes in expectation. I believe this is so small that we shouldn't worry about it. Particularly as getting proposer boost deployed is good for network health and may enable us to drive down the number of late blocks over time (which will decrease head vote misses).

## TL;DR

Enable proposer boost now and release ASAP, as financial downside is a 0.45% increase in missed head votes until widespread adoption.
2022-04-01 04:58:42 +00:00
Pawan Dhananjay
9ec072ff3b Strip newline from jwt secrets (#3132)
## Issue Addressed

Resolves #3128 

## Proposed Changes

Strip trailing newlines from jwt secret files.
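
A minimal sketch of the fix (function name illustrative): trim trailing whitespace, including the newline most editors append, before the secret is parsed.

```rust
use std::fs;
use std::path::Path;

fn read_jwt_secret(path: &Path) -> Result<String, String> {
    let raw = fs::read_to_string(path)
        .map_err(|e| format!("failed to read jwt secret file: {}", e))?;
    // A file saved as "0x1234...\n" previously failed to parse; trimming
    // the trailing newline (and any other whitespace) fixes that.
    Ok(raw.trim_end().to_string())
}
```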
2022-04-01 00:59:00 +00:00
Michael Sproul
41e7a07c51 Add lighthouse db command (#3129)
## Proposed Changes

Add a `lighthouse db` command with three initial subcommands:

- `lighthouse db version`: print the database schema version.
- `lighthouse db migrate --to N`: manually upgrade (or downgrade!) the database to a different version.
- `lighthouse db inspect --column C`: log the key and size in bytes of every value in a given `DBColumn`.

This PR lays the groundwork for other changes, namely:

- Mark's fast-deposit sync (https://github.com/sigp/lighthouse/pull/2915), for which I think we should implement a database downgrade (from v9 to v8).
- My `tree-states` work, which already implements a downgrade (v10 to v8).
- Standalone purge commands like `lighthouse db purge-dht` per https://github.com/sigp/lighthouse/issues/2824.

## Additional Info

I updated the `strum` crate to 0.24.0, which necessitated some changes in the network code to remove calls to deprecated methods.

Thanks to @winksaville for the motivation, and implementation work that I used as a source of inspiration (https://github.com/sigp/lighthouse/pull/2685).
2022-04-01 00:58:59 +00:00
realbigsean
ea783360d3 Kiln mev boost (#3062)
## Issue Addressed

MEV boost compatibility

## Proposed Changes

See #2987

## Additional Info

This is blocked on the stabilization of a couple specs, [here](https://github.com/ethereum/beacon-APIs/pull/194) and [here](https://github.com/flashbots/mev-boost/pull/20).

Additional TODO's and outstanding questions

- [ ] MEV boost JWT Auth
- [ ] Will `builder_proposeBlindedBlock` return the revealed payload for the BN to propagate?
- [ ] Should we remove `private-tx-proposals` flag and communicate BN <> VC with blinded blocks by default once these endpoints enter the beacon-API's repo? This simplifies merge transition logic. 

Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-31 07:52:23 +00:00
realbigsean
83234ee4ce json rpc id to value (#3110)
## Issue Addressed

N/A

## Proposed Changes

- Update the JSON-RPC id field for both our request and response objects to be a `serde_json::Value` rather than a `u32`. This field could be a string or a number according to the JSON-RPC 2.0 spec. We only ever set it to a number, but if, for example, we get a response that wraps this number in quotes, we would fail to deserialize it. I think because we're not doing any validation around this id otherwise, we should be less strict with it in this regard. 
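
A sketch of the effect (struct is illustrative): with `id` typed as `serde_json::Value`, both numeric and quoted ids deserialize, per the JSON-RPC 2.0 spec.

```rust
use serde::Deserialize;
use serde_json::Value;

#[derive(Deserialize)]
struct JsonRpcResponse {
    jsonrpc: String,
    id: Value, // previously u32, which rejected `"id": "1"`
    result: Value,
}

fn main() {
    let numeric: JsonRpcResponse =
        serde_json::from_str(r#"{"jsonrpc":"2.0","id":1,"result":null}"#).unwrap();
    let quoted: JsonRpcResponse =
        serde_json::from_str(r#"{"jsonrpc":"2.0","id":"1","result":null}"#).unwrap();
    assert_eq!(numeric.jsonrpc, "2.0");
    assert!(numeric.result.is_null());
    assert!(numeric.id.is_number() && quoted.id.is_string());
}
```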

## Additional Info



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-29 22:59:55 +00:00
Paul Hauner
26e5281c68 Increase timeouts for EEs (#3125)
## Issue Addressed

NA

## Proposed Changes

In the first Goerli shadow-fork, Lighthouse was getting timeouts from Geth which prevented the LH+Geth pair from progressing.

There's not a whole lot of information I can use to set these timeouts. The most interesting pieces of information I have are quotes from Marius from Geth:

- *"Fcu also needs to construct the block which can take 2sec"* ([Discord](https://discord.com/channels/595666850260713488/910910348922589184/958006487052066836))
- *"2 sec should be ok for new payload, weird that it times out"* ([Discord](https://discord.com/channels/595666850260713488/910910348922589184/958006487052066836))

I don't think we should be too worried about getting these timeouts exactly right now. No one really knows how long the various EEs are going to take; it's a bit too early in development. With these changes I'm giving some headroom so that we don't fail just because EEs aren't quite optimized enough yet. I've set the value to 6s (half a mainnet slot), since I think anything beyond 6s is an interesting problem that we want to know about sooner rather than later.

## Additional Info

NA
2022-03-28 23:32:12 +00:00
Pawan Dhananjay
a42cb69f6e Update engine state in broadcast (#3071)
## Issue Addressed

N/A

## Proposed Changes

Set the engine state to `EngineState::Offline` if the engine api call fails during broadcast. Previously `EngineState` always returned `Synced`, which caused issues when pausing sync while the execution engine was offline.
2022-03-28 23:32:11 +00:00
Pawan Dhananjay
20e32f5812 Improve slashing import log (#3122)
## Issue Addressed

N/A

## Proposed Changes

The slashing db import log showed the latest proposed block in the db as `latest block`, which was potentially confusing.
2022-03-28 07:14:16 +00:00
Akihito Nakano
f8a1b428ef Fix typos in docs (#3121)
Fixed typos in the `Advanced Networking` page. ✏️
2022-03-28 07:14:15 +00:00
Paul Hauner
172320ff08 Target geth master in integration testing (#3120)
## Issue Addressed

NA

## Proposed Changes

Target the `master` branch of the canonical Geth repo, rather than @MariusVanDerWijden's fork.

In [this tweet](https://twitter.com/vdWijden/status/1506899633741705217?s=20&t=yraR-qFAtSDCYtl_gyoeiw), Marius states:

> We merged all important changes for [#TheMerge](https://twitter.com/hashtag/TheMerge?src=hashtag_click) into [@go_ethereum](https://twitter.com/go_ethereum)
's master branch. So no need to use my fork anymore! Changes will be applied (in old geth fashion) directly to master. My merge-kiln-v2 branch is already stale, so please switch, you can also use --kiln to join Kiln

## Additional Info

NA
2022-03-28 07:14:14 +00:00
Michael Sproul
6efd95496b Optionally skip RANDAO verification during block production (#3116)
## Proposed Changes

Allow Lighthouse to speculatively create blocks via the `/eth/v1/validators/blocks` endpoint by optionally skipping the RANDAO verification that we introduced in #2740. When `verify_randao=false` is passed as a query parameter the `randao_reveal` is not required to be present, and if present will only be lightly checked (must be a valid BLS sig). If `verify_randao` is omitted it defaults to true and Lighthouse behaves exactly as it did previously, hence this PR is backwards-compatible.

I'd like to get this change into `unstable` pretty soon as I've got 3 projects building on top of it:

- [`blockdreamer`](https://github.com/michaelsproul/blockdreamer), which mocks block production every slot in order to fingerprint clients
- analysis of Lighthouse's block packing _optimality_, which uses `blockdreamer` to extract interesting instances of the attestation packing problem
- analysis of Lighthouse's block packing _performance_ (as in speed) on the `tree-states` branch

## Additional Info

Having tested `blockdreamer` with Prysm, Nimbus and Teku I noticed that none of them verify the randao signature on `/eth/v1/validator/blocks`. I plan to open a PR to the `beacon-APIs` repo anyway so that this parameter can be standardised in case the other clients add RANDAO verification by default in future.
2022-03-28 07:14:13 +00:00
Pawan Dhananjay
986044370e Add merge objects to lcli parse-ssz subcommand (#3119)
## Issue Addressed

N/A

## Proposed Changes

Parse merge state and blocks in parse-ssz subcommand.
2022-03-25 14:32:33 +00:00
Lucas Manuel
adca0efc64 feat: Update ASCII art (#3113)
## Issue Addressed

No issue, just updating merge ASCII art.

## Proposed Changes

Updating ASCII art for merge.

2022-03-24 00:04:50 +00:00
Mac L
41b5af9b16 Support IPv6 in BN and VC HTTP APIs (#3104)
## Issue Addressed

#3103

## Proposed Changes

Parse `http-address` and `metrics-address` as `IpAddr` for both the beacon node and validator client to support IPv6 addresses.
Also adjusts parsing of CORS origins to allow for IPv6 addresses.
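
A minimal sketch of the parsing change (helper name illustrative; 5052 is Lighthouse's default HTTP port):

```rust
use std::net::{IpAddr, SocketAddr};

fn parse_http_address(s: &str) -> Result<IpAddr, String> {
    // `IpAddr` parses both IPv4 ("0.0.0.0") and IPv6 ("::") forms, where
    // IPv4-only parsing would reject the latter.
    s.parse::<IpAddr>()
        .map_err(|e| format!("invalid http-address: {}", e))
}

fn main() {
    let v6 = parse_http_address("::").unwrap();
    let v4 = parse_http_address("0.0.0.0").unwrap();
    assert!(v6.is_ipv6() && v4.is_ipv4());
    // The server then binds the parsed address as usual.
    let _listen = SocketAddr::new(v6, 5052);
}
```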

## Usage
You can now set  `http-address` and/or `metrics-address`  flags to IPv6 addresses.
For example, the following:
`lighthouse bn --http --http-address :: --metrics --metrics-address ::1`
will expose the beacon node HTTP server on `[::]` (equivalent of `0.0.0.0` in IPv4) and the metrics HTTP server on `localhost` (the equivalent of `127.0.0.1` in IPv4) 

The beacon node API can then be accessed by:
`curl "http://[server-ipv6-address]:5052/eth/v1/some_endpoint"`

And the metrics server API can be accessed by:
`curl "http://localhost:5054/metrics"` or by `curl "http://[::1]:5054/metrics"`

## Additional Info
On most Linux distributions the `v6only` flag is set to `false` by default (see the section for the `IPV6_V6ONLY` flag in https://www.man7.org/linux/man-pages/man7/ipv6.7.html), which means IPv4 connections will continue to function on an IPv6 address (provided it is appropriately mapped). This means that even if the Lighthouse API is running on `::`, it is also possible to accept IPv4 connections.

However on Windows, this is not the case. The `v6only` flag is set to `true` so binding to `::` will only allow IPv6 connections.
2022-03-24 00:04:49 +00:00
Mac L
3c675a9dfc Add Nethermind integration tests (#3100)
## Proposed Changes

Extend the current Geth merge integration tests to support Nethermind.
2022-03-24 00:04:48 +00:00
Divma
788b6af3c4 Remove sync await points (#3036)
## Issue Addressed

Removes the await points in sync waiting for a processor response for rpc block processing. Built on top of #3029 
This also handles a couple of bugs in the previous code and adds a relatively comprehensive test suite.
2022-03-23 01:09:39 +00:00
ethDreamer
af50130e21 Add Proposer Cache Pruning & POS Activated Banner (#3109)
## Issue Addressed

The proposer cache wasn't being pruned. We also didn't have a celebratory banner for the merge 😄

## Banner
![pos_log_panda](https://user-images.githubusercontent.com/37123614/159528545-3aa54cbd-9362-49b1-830c-f4402f6ac341.png)
2022-03-22 21:33:38 +00:00
realbigsean
116c5721a3 Fix ganache windows CI attempt 2 (#3107)
## Issue Addressed

Attempt to fix CI

## Proposed Changes

- ~~install `node-gyp-build` which should look for prebuilt binaries for `@truffle-suite/bigint_buffer`. This should make it so we don't have to build it directly. See: https://github.com/trufflesuite/ganache/pull/1414~~ this didn't work
- This also uses the `setup-node` action because it includes caching. Sort of a shot in the dark, but the ganache github repo uses it and the failures seem to be for missing files in a node cache 




Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-22 21:33:37 +00:00
realbigsean
ec08b0884b Fix ganache in windows CI (#3105)
## Issue Addressed

Hopefully makes windows ganache installation more reliable.

## Proposed Changes

- use `chocolatey` to install windows build tools. This seems to often be the prescribed solution for `node gyp` issues. `chocolatey` is used here because `npm install --global --production windows-build-tools` hangs in github actions

## Additional Info
I still haven't found out why the prior installation technique would only sometimes work; the `windows-2019` environments seem to be identical across successes and failures. I think this should be re-run a few times to see if it can consistently pass.


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-21 21:47:18 +00:00
Emilia Hane
2aabcaaaed Correct typos book (#3099)
## Issue Addressed

No issue

## Proposed Changes

Correct typos in book

## Additional Info

Nothing to add


Co-authored-by: Emilia Hane <58548332+emhane@users.noreply.github.com>
2022-03-20 22:48:15 +00:00
realbigsean
ae5b141dc4 Updates to tests and local testnet for Ganache 7 (#3056)
## Issue Addressed

#2961

## Proposed Changes

- update `--chainId` -> `--chain.chainId`
- remove `--keepAliveTimeout`
- fix log to listen for
- rename `ganache-cli` to `ganache` everywhere


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-20 22:48:14 +00:00
Michael Sproul
9bc9527998 v2.1.5 (#3096)
## Issue Addressed

New release to address openssl vuln fixed in #3095

Closes #3093
2022-03-17 23:13:46 +00:00
Michael Sproul
a1befd89aa Update openssl for CVE-2022-0778 (#3095)
## Issue Addressed

Fix the `cargo-audit` failure for the recent openssl bug involving parsing of untrusted certificates (CVE-2022-0778).

## Additional Info

Lighthouse loads remote certificates in the following cases:

* When connecting to an eth1 node (`--eth1-endpoints`).
* When connecting to a beacon node from the VC (`--beacon-nodes`).
* When connecting to a beacon node for checkpoint sync (`--checkpoint-sync-url`).

In all of these cases we are already placing a lot of trust in the server at the other end, however due to the scope for MITM attacks we are still potentially vulnerable. E.g. an ISP could inject an invalid certificate for the remote host which would cause Lighthouse to hang indefinitely.
2022-03-17 03:33:32 +00:00
kraemahz
139c24a0f8 Clarify proposers message is about current epoch (#3084)
## Issue Addressed

#3083

## Proposed Changes

Changes "proposers" to "proposers_this_epoch" in the validator log message.

Co-authored-by: kraemahz <58143782+kraemahz@users.noreply.github.com>
2022-03-17 03:33:30 +00:00
Michael Sproul
e715db8b99 Add minimum supported Rust version (#3082)
## Proposed Changes

Set a minimum supported Rust version (MSRV) in the `Cargo.toml` for the Lighthouse binary so that attempts to compile it with an outdated compiler fail immediately with a clear error.

To ensure that the codebase builds with the MSRV I've also added a Github actions job that runs `cargo check` using the MSRV extracted from `Cargo.toml`. This will force us to keep it up to date.

I opted to use `cargo check` rather than Clippy because Clippy frequently introduces new lints that we adopt, so our MSRV for Clippy is usually the most recent Rust version, while the MSRV for building Lighthouse is older.
2022-03-17 03:33:29 +00:00
Paul Hauner
98f74041a0 Use windows-2019 in release CI (#3090)
## Issue Addressed

NA

## Proposed Changes

Address a CI failure in the release suite.

Example: https://github.com/sigp/lighthouse/actions/runs/1984266187

## Additional Info

I believe we should merge this into `unstable` and `stable`. Then, move the `v2.1.4` commit to target the commit with the updated CI. It's sad that v2.1.4 has two commits, but they're functionally equivalent for users.
2022-03-15 03:21:11 +00:00
Paul Hauner
28aceaa213 v2.1.4 (#3076)
## Issue Addressed

NA

## Proposed Changes

- Bump version to `v2.1.4`
- Run `cargo update`

## Additional Info

I think this release should be published around the 15th of March.

Presently `blocked` for testing on our infrastructure.
2022-03-14 23:11:40 +00:00
Michael Sproul
c2e9354126 Gracefully handle missing sync committee duties (#3086)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3085
Closes https://github.com/sigp/lighthouse/issues/2953

## Proposed Changes

Downgrade some of the warnings logged by the VC which were useful during development of the sync committee service but are creating trouble now that we avoid populating the `sync_duties` map with 0 active validators.
2022-03-14 06:16:49 +00:00
realbigsean
f5d8fdbb4e Proposer preparation data quoted validator index in API (#3080)
## Issue Addressed

#3077

## Proposed Changes

Add quotes around the validator index in the `prepare_beacon_proposer` endpoint.


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-13 21:57:05 +00:00
realbigsean
925e9241d1 Quotes around SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY in the API (#3074)
## Issue Addressed

#3073 

## Proposed Changes

Add quotes around `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY` in the API

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-13 21:57:04 +00:00
realbigsean
15b8811580 Update ttd in kiln config (#3081)


Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-03-11 20:11:22 +00:00
Paul Hauner
e4fa7d906f Fix post-merge checkpoint sync (#3065)
## Issue Addressed

This addresses an issue which was preventing checkpoint sync.

When the node starts from checkpoint sync, the head block and the finalized block are the same value. We did not respect this when sending a `forkchoiceUpdated` (fcU) call to the EL: we expected fork choice to hold the *finalized ancestor of the head* and returned an error when it didn't.

This PR uses *only fork choice* for sending fcU updates. This is actually quite nice and avoids some atomicity issues between `chain.canonical_head` and `chain.fork_choice`. Now, whenever `chain.fork_choice.get_head` returns a value we also cache the values required for the next fcU call.

## TODO

- [x] ~~Blocked on #3043~~
- [x] Ensure there isn't a warn message at startup.
2022-03-10 06:05:24 +00:00
Paul Hauner
6d4af4c9ca Kiln (#3067)
## Issue Addressed

Adds the [Kiln](https://github.com/eth-clients/merge-testnets/tree/main/kiln) configs, so we can use `--network kiln`. 

## Additional Notes

- Also includes the fix from #3066.
2022-03-10 02:34:17 +00:00
Paul Hauner
c475499dfe Fix UnableToReadSlot at startup (#3066)
## Issue Addressed

Don't send an fcU message at startup if it's pre-genesis. The startup fcU message is not critical, not required by the spec, so it's fine to avoid it for networks that start post-Bellatrix fork.
2022-03-09 23:04:19 +00:00
Michael Sproul
65eaf01942 VC: avoid sending fee recipients until just before merge (#3064)
## Issue Addressed

Presently if the VC is configured with a fee recipient it will error out when sending fee-recipient preparations to a beacon node that doesn't yet support the API:

```
Mar 08 22:23:36.236 ERRO Unable to publish proposer preparation  error: All endpoints failed https://eth2-beacon-prater.infura.io/ => RequestFailed(StatusCode(404)), service: preparation
```

This doesn't affect other VC duties, but could be a source of anxiety for users trying to do the right thing and configure their fee recipients in advance.

## Proposed Changes

Change the preparation service to only send preparations if the current slot is later than 2 epochs before the Bellatrix hard fork epoch.
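
A sketch of the gating condition (names and epoch arithmetic illustrative, not the actual service code):

```rust
/// Only send proposer preparations once the current epoch is within two
/// epochs of the Bellatrix fork epoch (or any time after it).
fn should_send_preparations(current_epoch: u64, bellatrix_epoch: Option<u64>) -> bool {
    match bellatrix_epoch {
        // Bellatrix not scheduled: the BN won't support the API yet.
        None => false,
        Some(fork_epoch) => current_epoch + 2 >= fork_epoch,
    }
}
```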

## Additional Info

I've tagged this v2.1.4 as I think it's a small change that's worth having for the next release
2022-03-09 06:36:38 +00:00
Paul Hauner
267d8babc8 Prepare proposer (#3043)
## Issue Addressed

Resolves #2936

## Proposed Changes

Adds functionality for calling [`validator/prepare_beacon_proposer`](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Validator/prepareBeaconProposer) in advance.

There is a `BeaconChain::prepare_beacon_proposer` method which, when called, computes the proposer for the next slot. If that proposer has been registered via the `validator/prepare_beacon_proposer` API method, then the `beacon_chain.execution_layer` will be provided with the `PayloadAttributes` for use in all future forkchoiceUpdated calls. An artificial forkchoiceUpdated call will be created 4s before each slot, when the head updates, and when a validator updates their information.

Additionally, I added strict ordering for calls from the `BeaconChain` to the `ExecutionLayer`. I'm not certain the `ExecutionLayer` will always maintain this ordering, but it's a good start to have consistency from the `BeaconChain`. There are some deadlock opportunities introduced; they are documented in the code.

## Additional Info

- ~~Blocked on #2837~~

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2022-03-09 00:42:05 +00:00
Divma
527dfa4893 cargo audit updates (#3063)
## Issue Addressed
Closes #3008 and updates `regex` to solve https://rustsec.org/advisories/RUSTSEC-2022-0013
2022-03-08 19:48:12 +00:00
Pawan Dhananjay
381d0ece3c auth for engine api (#3046)
## Issue Addressed

Resolves #3015 

## Proposed Changes

Add JWT token based authentication to engine api requests. The jwt secret key is read from the provided file and is used to sign tokens that are used for authenticated communication with the EL node.

- [x] Interop with geth (synced `merge-devnet-4` with the `merge-kiln-v2` branch on geth)
- [x] Interop with other EL clients (nethermind on `merge-devnet-4`)
- [x] ~Implement `zeroize` for jwt secrets~
- [x] Add auth server tests with `mock_execution_layer`
- [x] Get auth working with the `execution_engine_integration` tests






Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-03-08 06:46:24 +00:00
Paul Hauner
3b4865c3ae Poll the engine_exchangeTransitionConfigurationV1 endpoint (#3047)
## Issue Addressed

There has been an [`engine_exchangetransitionconfigurationv1`](https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#engine_exchangetransitionconfigurationv1) method added to the execution API specs.

The `engine_exchangetransitionconfigurationv1` will be polled every 60s as per this PR: https://github.com/ethereum/execution-apis/pull/189. If that PR is merged as-is, then we will be matching the spec. If that PR *is not* merged, we are still fully compatible with the spec, just doing more than is required.

## Additional Info

- [x] ~~Blocked on #2837~~
- [x] Add method to EE integration tests
2022-03-08 04:40:42 +00:00
Akihito Nakano
4186d117af Replace OpenOptions::new with File::options to be readable (#3059)
## Issue Addressed

Closes #3049 

This PR touches many files, but the replacement is safe as `File::options()` is equivalent to `OpenOptions::new()`.
ref: https://doc.rust-lang.org/stable/src/std/fs.rs.html#378-380
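
For illustration, the two spellings are interchangeable (minimal sketch):

```rust
use std::fs::{File, OpenOptions};

fn main() -> std::io::Result<()> {
    // Identical behaviour; `File::options()` just returns `OpenOptions::new()`.
    let _a = OpenOptions::new().read(true).open("Cargo.toml")?;
    let _b = File::options().read(true).open("Cargo.toml")?;
    Ok(())
}
```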
2022-03-07 06:30:18 +00:00
tim gretler
cbda0a2f0a Add log debounce to work processor (#3045)
## Issue Addressed

#3010 

## Proposed Changes

- move the log debounce time latch to `./common/logging`
- add a time latch to limit logging for `attestations_delay_queue` and `queued_block_roots` (see the sketch below)
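
A minimal sketch of such a time latch (illustrative; the real type lives in `common/logging`):

```rust
use std::time::{Duration, Instant};

pub struct TimeLatch {
    last: Option<Instant>,
}

impl TimeLatch {
    pub fn new() -> Self {
        Self { last: None }
    }

    /// Returns `true` at most once per `interval`, so noisy log lines can
    /// be gated behind it.
    pub fn elapsed(&mut self, interval: Duration) -> bool {
        let now = Instant::now();
        match self.last {
            Some(prev) if now.duration_since(prev) < interval => false,
            _ => {
                self.last = Some(now);
                true
            }
        }
    }
}
```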

## Additional Info

- Is a separate crate for the time latch preferred?
- `elapsed()` could take `LOG_DEBOUNCE_INTERVAL` as an argument to allow for different granularity.
2022-03-07 06:30:17 +00:00
Michael Sproul
1829250ee4 Ignore attestations to finalized blocks (don't reject) (#3052)
## Issue Addressed

Addresses spec changes from v1.1.0:

- https://github.com/ethereum/consensus-specs/pull/2830
- https://github.com/ethereum/consensus-specs/pull/2846

## Proposed Changes

* Downgrade the REJECT for `HeadBlockFinalized` to an IGNORE. This applies to both unaggregated and aggregated attestations.
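
A sketch of the behavioural change (types illustrative): ignoring drops the attestation without penalising the peer, whereas rejecting would downscore it.

```rust
enum GossipOutcome {
    Accept,
    Ignore, // drop quietly, no peer penalty
    Reject, // drop and penalise the sending peer
}

fn classify_attestation(head_block_finalized: bool) -> GossipOutcome {
    if head_block_finalized {
        // Previously `GossipOutcome::Reject`; honest peers can plausibly
        // send attestations to blocks that were only just finalized.
        GossipOutcome::Ignore
    } else {
        GossipOutcome::Accept
    }
}
```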

## Additional Info

I thought about also changing the penalty for `UnknownTargetRoot` but I don't think it's reachable in practice.
2022-03-04 00:41:22 +00:00
Paul Hauner
09d2187198 Lower debug! logs to trace! (#3053)
## Issue Addressed

These logs were very loud during sync.
2022-03-03 22:37:42 +00:00
Paul Hauner
aea43b626b Rename random to prev_randao (#3040)
## Issue Addressed

As discussed on last-night's consensus call, the testnets next week will target the [Kiln Spec v2](https://hackmd.io/@n0ble/kiln-spec).

Presently, we support Kiln V1. V2 is backwards compatible, except for renaming `random` to `prev_randao` in:

- https://github.com/ethereum/execution-apis/pull/180
- https://github.com/ethereum/consensus-specs/pull/2835

With this PR we'll no longer be compatible with the existing Kintsugi and Kiln testnets, however we'll be ready for the testnets next week. I raised this breaking change in the call last night, we are all keen to move forward and break things.

We now target the [`merge-kiln-v2`](https://github.com/MariusVanDerWijden/go-ethereum/tree/merge-kiln-v2) branch for interop with Geth. This required adding the `--http.aauthport` to the tester to avoid a port conflict at startup.

### Changes to exec integration tests

There's some change in the `merge-kiln-v2` version of Geth that means it can't compile on a vanilla Github runner. Bumping the `go` version on the runner solved this issue.

Whilst addressing this, I refactored the `testing/execution_integration` crate to be a *binary* rather than a *library* with tests. This means that we don't need to run the `build.rs` and build Geth whenever someone runs `make lint` or `make test-release`. This is nice for everyday users, but it's also nice for CI so that we can have a specific runner for these tests and we don't need to ensure *all* runners support everything required to build all execution clients.

## More Info

- [x] ~~EF tests are failing since the rename has broken some tests that reference the old field name. I have been told there will be new tests released in the coming days (25/02/22 or 26/02/22).~~
2022-03-03 02:10:57 +00:00
Divma
4bf1af4e85 Custom RPC request management for sync (#3029)
## Proposed Changes
Make `lighthouse_network` generic over request ids, now usable by sync
2022-03-02 22:07:17 +00:00
Age Manning
e88b18be09 Update libp2p (#3039)
Update libp2p. 

This corrects some gossipsub metrics.
2022-03-02 05:09:52 +00:00
Age Manning
f3c1dde898 Filter non global ips from discovery (#3023)
## Issue Addressed

#3006 

## Proposed Changes

This PR changes the default behaviour of lighthouse to ignore discovered IPs that are not globally routable. It adds a CLI flag, `--enable-local-discovery`, to permit non-global IPs in discovery.

NOTE: We should take care in merging this as it will break current set-ups that rely on local IP discovery. I made this the non-default behaviour because we don't really want to waste resources attempting to connect to non-routable addresses, and we don't want to propagate these to others (on the off-chance they can connect to one of these local nodes); this improves discovery's efficiency.
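
A sketch of the kind of filter this implies (illustrative, not the actual discv5 integration):

```rust
use std::net::IpAddr;

/// Keep a discovered IP only if it looks globally routable, unless the
/// `--enable-local-discovery` flag was passed.
fn keep_discovered_ip(ip: IpAddr, enable_local_discovery: bool) -> bool {
    if enable_local_discovery {
        return true;
    }
    match ip {
        IpAddr::V4(v4) => {
            !(v4.is_loopback() || v4.is_private() || v4.is_link_local() || v4.is_unspecified())
        }
        // IPv6 "global" checks are unstable in std, so this only rejects
        // the obvious non-routable cases.
        IpAddr::V6(v6) => !(v6.is_loopback() || v6.is_unspecified()),
    }
}
```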
2022-03-02 03:14:27 +00:00
Akihito Nakano
668115a4b8 Rename Eth1/Eth2 in documents (#3021)
## Issue Addressed

Resolves https://github.com/sigp/lighthouse/issues/3019

## Proposed Changes

- Eth2 / Eth2.0 / Ethereum 2.0 -> Ethereum consensus
- Eth2 network -> consensus layer
- Ethereum 2.0 specification -> Ethereum proof-of-stake consensus specification
- Eth2 deposit contract -> Staking deposit contract
- Eth1 -> execution client

## Additional Info

The description needs to be updated by someone who has permission to do so. 📝 

<img width="487" alt="image" src="https://user-images.githubusercontent.com/1885716/153995211-816d9561-751e-4810-abb9-83d979379783.png">
2022-03-02 01:05:08 +00:00
Age Manning
e34524be75 Increase default target-peer count to 80 (#3005)
Increase the default peer count from 50 to 80
2022-03-02 01:05:07 +00:00
Paul Hauner
b6493d5e24 Enforce Optimistic Sync Conditions & CLI Tests (v2) (#3050)
## Description

This PR adds a single, trivial commit (f5d2b27d78) atop #2986 to resolve a test compilation error. The original author (@ethDreamer) is AFK so I'm getting this one merged ☺️ 

Please see #2986 for more information about the other, significant changes in this PR.


Co-authored-by: Mark Mackey <mark@sigmaprime.io>
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
2022-03-01 22:56:47 +00:00
Age Manning
a1b730c043 Cleanup small issues (#3027)
Downgrades some excessive networking logs and corrects some metrics.
2022-03-01 01:49:22 +00:00
Paul Hauner
27e83b888c Retrospective invalidation of exec. payloads for opt. sync (#2837)
## Issue Addressed

NA

## Proposed Changes

Adds the functionality to allow blocks to be validated/invalidated after their import as per the [optimistic sync spec](https://github.com/ethereum/consensus-specs/blob/dev/sync/optimistic.md#how-to-optimistically-import-blocks). This means:

- Updating `ProtoArray` to allow flipping the `execution_status` of ancestors/descendants based on payload validity updates.
- Creating separation between `execution_layer` and the `beacon_chain` by creating a `PayloadStatus` struct.
- Refactoring how the `execution_layer` selects a `PayloadStatus` from the multiple statuses returned from multiple EEs.
- Adding testing framework for optimistic imports.
- Add `ExecutionBlockHash(Hash256)` new-type struct to avoid confusion between *beacon block roots* and *execution payload hashes*.
- Add `merge` to [`FORKS`](c3a793fd73/Makefile (L17)) in the `Makefile` to ensure we test the beacon chain with merge settings.
    - Fix some tests here that were failing due to a missing execution layer.

## TODO

- [ ] Balance tests

Co-authored-by: Mark Mackey <mark@sigmaprime.io>
2022-02-28 22:07:48 +00:00
Michael Sproul
5e1f8a8480 Update to Rust 1.59 and 2021 edition (#3038)
## Proposed Changes

Lots of lint updates related to `flat_map`, `unwrap_or_else` and string patterns. I did a little more creative refactoring in the op pool, but otherwise followed Clippy's suggestions.

## Additional Info

We need this PR to unblock CI.
2022-02-25 00:10:17 +00:00
Mac L
c1df5d29cb Ensure logfile respects the validators-dir CLI flag (#3003)
## Issue Addressed

Closes #2990 

## Proposed Changes

Add a check to see if the `--validators-dir` CLI flag is set and if so store validator logs into it.
Ensure that if the log directory cannot be created, emit a `WARN` and disable file logging rather than panicking. 

## Additional Info

Panics associated with logfiles can still occur in these scenarios:
1. The `$datadir/validators/logs` directory already exists with the wrong permissions (or was changed after creation).
1. The logfile already exists with the wrong permissions (or was changed after creation).
> These panics are cosmetic only since only the logfile thread panics. Following the panics, LH will continue to function as normal. 

I believe this is due to the use of [`slog::Fuse`](https://docs.rs/slog/latest/slog/struct.Fuse.html) when initializing the logger.
I'm not sure if there is a better way of handling logfile errors?
I think ideally, rather than panicking, we would emit a `WARN` to the stdout logger with the panic reason, then exit the logfile thread gracefully.
2022-02-24 00:31:35 +00:00
Mac L
696de58141 Add aliases for validator-dir flags (#3034)
## Issue Addressed

#3020

## Proposed Changes

- Alias the `validators-dir` arg to `validator-dir` in the `validator_client` subcommand.
- Alias the `validator-dir` arg to `validators-dir` in the `account_manager validator` subcommand.
- Add test for the validator_client alias.
2022-02-22 03:09:02 +00:00
Paul Hauner
5a0b049049 Avoid hogging the fallback status lock in the VC (#3022)
## Issue Addressed

Addresses https://github.com/sigp/lighthouse/issues/2926

## Proposed Changes

Appropriated from https://github.com/sigp/lighthouse/issues/2926#issuecomment-1039676768:

When a node returns *any* error we call [`CandidateBeaconNode::set_offline`](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L424)) which sets its `status` to `CandidateError::Offline`. That node will then be ignored until the routine [`fallback_updater_service`](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L44)) manages to reconnect to it.

However, I believe there was an issue in the [`CandidateBeaconNode::refresh_status`](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L157-L178)) method, which is used by the updater service to see if the node has come good again. It was holding a [write lock on the `status` field](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L165)) whilst it polled the node status. This means a long timeout would hog the write lock and starve other processes.

When a VC is trying to access a beacon node for whatever purpose (getting duties, posting blocks, etc), it performs [three passes](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L432-L482)) through the lists of nodes, trying to run some generic `function` (closure, lambda, etc) on each node:

- 1st pass: only try running `function` on all nodes which are both synced and online.
- 2nd pass: try running `function` on all nodes that are online, but not necessarily synced.
- 3rd pass: for each offline node, try refreshing its status and then running `function` on it.

So, it turns out that if the `CandidateBeaconNode::refresh_status` function from the routine update service is hogging the write-lock, the 1st pass gets blocked whilst trying to read the status of the first node. Nodes that should be left until the 3rd pass are therefore blocking the 1st and 2nd passes, hence the behaviour described in #2926.
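
A minimal sketch of the shape of the fix (types and names illustrative): poll the node *outside* the lock, then take the write lock only for the brief store at the end.

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::RwLock;

#[derive(Clone, Copy)]
enum Status {
    Online,
    Offline,
}

// Stand-in for the (possibly slow or timed-out) HTTP health check.
async fn poll_node_status() -> Status {
    tokio::time::sleep(Duration::from_secs(1)).await;
    Status::Online
}

async fn refresh_status(status: Arc<RwLock<Status>>) {
    // No lock is held while the request is in flight, so readers in the
    // 1st and 2nd passes are never starved by a slow offline node.
    let new_status = poll_node_status().await;
    *status.write().await = new_status;
}
```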

## Additional Info

NA
2022-02-22 03:09:00 +00:00
Michael Sproul
b37d5db8df Increase Bors timeout, refine target-branch-check (#3035)
## Issue Addressed

Timeouts due to Windows builds running for 2h 20m.

## Proposed Changes

* Increase Bors timeout to 3h
* Refine the target branch check so that it will pass when we make PRs to feature branches. This is just an extra change I've been meaning to sneak in for a while.

## Additional Info

* I think it would also be cool to try caching for CI again, but that's a separate issue and we'll still need the long timeout on a cache miss.
2022-02-21 23:21:03 +00:00
Mac L
104e3104f9 Add API to compute block packing efficiency data (#2879)
## Issue Addressed
N/A

## Proposed Changes
Add an HTTP API which can be used to compute the block packing data for all blocks over a discrete range of epochs.

## Usage
### Request
```
curl "http:localhost:5052/lighthouse/analysis/block_packing_efficiency?start_epoch=57730&end_epoch=57732"
```
### Response
```
[
  {
    "slot": "1847360",
    "block_hash": "0xa7dc230659802df2f99ea3798faede2e75942bb5735d56e6bfdc2df335dcd61f",
    "proposer_info": {
      "validator_index": 1686,
      "graffiti": ""
    },
    "available_attestations": 7096,
    "included_attestations": 6459,
    "prior_skip_slots": 0
  },
  ...
]
```
## Additional Info

This is notably different to the existing lcli code:
- Uses `BlockReplayer` #2863 and as such runs significantly faster than the previous method.
- Corrects the off-by-one #2878
- Removes the `offline` validators component. This was only a "best guess" and simply was used as a way to determine an estimate of the "true" packing efficiency and was generally not helpful in terms of direct comparisons between different packing methods. As such it has been removed from the API and any future estimates of "offline" validators would be better suited in a separate/more targeted API or as part of 'beacon watch': #2873 
- Includes `prior_skip_slots`.
2022-02-21 23:21:02 +00:00
eklm
56b2ec6b29 Allow proposer duties request for the next epoch (#2963)
## Issue Addressed

Closes #2880 

## Proposed Changes

Support requests to the next epoch in proposer_duties api.

## Additional Info

Implemented by skipping the proposer cache for this case, because the cache for the future epoch would be missed every new slot (as the dependent_root changes) and we don't want to "wash it out" by saving additional values.
2022-02-18 05:32:00 +00:00
tim gretler
c8019caba6 Fix sync committee polling for 0 validators (#2999)
## Issue Addressed

#2953

## Proposed Changes

Adds a check for an empty set of local validators.

## Additional Info

Two other options: 
- add the check inside the `local_index` collection, instead of after collection.
- Move the `local_index` collection to the beginning of the `poll_sync_committee_duties` function and combine the sync committee check with the altair fork check.
2022-02-18 02:36:44 +00:00
Age Manning
3ebb8b0244 Improved peer management (#2993)
## Issue Addressed

I noticed in some logs some excess and unnecessary discovery queries. What was happening was that we were pruning our peers down to our outbound target and having some disconnect. When we are below this threshold we try to find more peers (even if we are at our peer limit). The request becomes futile because we have no more peer slots. 

This PR corrects this issue and advances the pruning mechanism to favour subnet peers. 

An overview the new logic added is:
- We prune peers down to a target outbound peer count which is higher than the minimum outbound peer count.
- We only search for more peers if there is room to do so and we are below the minimum outbound peer count, not the target. This gives us some buffer for peers to disconnect. The buffer is currently 10%.

The modified pruning logic is documented in the code but for reference it should do the following:
- Prune peers with bad scores first
- If we need to prune more peers, then prune peers that are subscribed to a long-lived subnet
- If we still need to prune peers, then prune peers that we have a higher density of on any given subnet, which should drive towards a uniform distribution of peers across all subnets.

This will need a bit of testing as it modifies some significant peer management behaviours in lighthouse.
2022-02-18 02:36:43 +00:00
Michael Sproul
da4ca024f1 Use SmallVec in Bitfield (#3025)
## Issue Addressed

Alternative to #2935

## Proposed Changes

Replace the `Vec<u8>` inside `Bitfield` with a `SmallVec<[u8; 32]>`. This eliminates heap allocations for attestation bitfields until we reach 500K validators, at which point we can consider increasing `SMALLVEC_LEN` to 40 or 48.

While running Lighthouse under `heaptrack` I found that SSZ encoding and decoding of bitfields corresponded to 22% of all allocations by count. I've confirmed that with this change applied those allocations disappear entirely.
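
A sketch of the layout change (field layout illustrative; `Bitfield` and `SMALLVEC_LEN` are named above):

```rust
use smallvec::{smallvec, SmallVec};

const SMALLVEC_LEN: usize = 32;

pub struct Bitfield {
    // Stored inline (no heap allocation) while the byte length is at most
    // `SMALLVEC_LEN`; larger bitfields spill to the heap automatically.
    bytes: SmallVec<[u8; SMALLVEC_LEN]>,
    len: usize,
}

impl Bitfield {
    pub fn with_capacity(bits: usize) -> Self {
        let byte_len = (bits + 7) / 8;
        Self {
            bytes: smallvec![0u8; byte_len],
            len: bits,
        }
    }
}
```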

## Additional Info

We can win another 8 bytes of space by using `smallvec`'s [`union` feature](https://docs.rs/smallvec/1.8.0/smallvec/#union), although I might leave that for a future PR because I don't know how experimental that feature is and whether it uses some spicy `unsafe` blocks.
2022-02-17 23:55:04 +00:00
Paul Hauner
0a6a8ea3b0 Engine API v1.0.0.alpha.6 + interop tests (#3024)
## Issue Addressed

NA

## Proposed Changes

This PR extends #3018 to address my review comments there and add automated integration tests with Geth (and other implementations, in the future).

I've also de-duplicated the "unused port" logic by creating an  `common/unused_port` crate.

## Additional Info

I'm not sure if we want to merge this PR, or update #3018 and merge that. I don't mind, I'm primarily opening this PR to make sure CI works.


Co-authored-by: Mark Mackey <mark@sigmaprime.io>
2022-02-17 21:47:06 +00:00
Paul Hauner
2f8531dc60 Update to consensus-specs v1.1.9 (#3016)
## Issue Addressed

Closes #3014

## Proposed Changes

- Rename `receipt_root` to `receipts_root`
- Rename `execute_payload` to `notify_new_payload`
   - This is slightly weird since we modify everything except the actual HTTP call to the engine API. That change is expected to be implemented in #2985 (cc @ethDreamer)
- Enable "random" tests for Bellatrix.

## Notes

This will *partially* break compatibility with Kintsugi testnets in order to gain compatibility with [Kiln](https://hackmd.io/@n0ble/kiln-spec) testnets. I think it will only break the BN APIs due to the `receipts_root` change, however it might have some other effects too.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-02-14 23:57:23 +00:00
Michael Sproul
886afd684a Update block reward API docs (#3013)
## Proposed Changes

Fix the URLs and source code link in the docs for the block rewards API.
2022-02-11 11:02:09 +00:00
1224 changed files with 178040 additions and 53122 deletions

.cargo/config.toml Normal file

@@ -0,0 +1,4 @@
[env]
# Set the number of arenas to 16 when using jemalloc.
JEMALLOC_SYS_WITH_MALLOC_CONF = "abort_conf:true,narenas:16"

.config/nextest.toml Normal file

@@ -0,0 +1,113 @@
# This is the default config used by nextest. It is embedded in the binary at
# build time. It may be used as a template for .config/nextest.toml.
[store]
# The directory under the workspace root at which nextest-related files are
# written. Profile-specific storage is currently written to dir/<profile-name>.
dir = "target/nextest"
# This section defines the default nextest profile. Custom profiles are layered
# on top of the default profile.
[profile.default]
# "retries" defines the number of times a test should be retried. If set to a
# non-zero value, tests that succeed on a subsequent attempt will be marked as
# non-flaky. Can be overridden through the `--retries` option.
# Examples
# * retries = 3
# * retries = { backoff = "fixed", count = 2, delay = "1s" }
# * retries = { backoff = "exponential", count = 10, delay = "1s", jitter = true, max-delay = "10s" }
retries = 0
# The number of threads to run tests with. Supported values are either an integer or
# the string "num-cpus". Can be overridden through the `--test-threads` option.
test-threads = 8
# The number of threads required for each test. This is generally used in overrides to
# mark certain tests as heavier than others. However, it can also be set as a global parameter.
threads-required = 1
# Show these test statuses in the output.
#
# The possible values this can take are:
# * none: no output
# * fail: show failed (including exec-failed) tests
# * retry: show flaky and retried tests
# * slow: show slow tests
# * pass: show passed tests
# * skip: show skipped tests (most useful for CI)
# * all: all of the above
#
# Each value includes all the values above it; for example, "slow" includes
# failed and retried tests.
#
# Can be overridden through the `--status-level` flag.
status-level = "pass"
# Similar to status-level, show these test statuses at the end of the run.
final-status-level = "flaky"
# "failure-output" defines when standard output and standard error for failing tests are produced.
# Accepted values are
# * "immediate": output failures as soon as they happen
# * "final": output failures at the end of the test run
# * "immediate-final": output failures as soon as they happen and at the end of
# the test run; combination of "immediate" and "final"
# * "never": don't output failures at all
#
# For large test suites and CI it is generally useful to use "immediate-final".
#
# Can be overridden through the `--failure-output` option.
failure-output = "immediate"
# "success-output" controls production of standard output and standard error on success. This should
# generally be set to "never".
success-output = "never"
# Cancel the test run on the first failure. For CI runs, consider setting this
# to false.
fail-fast = true
# Treat a test that takes longer than the configured 'period' as slow, and print a message.
# See <https://nexte.st/book/slow-tests> for more information.
#
# Optional: specify the parameter 'terminate-after' with a non-zero integer,
# which will cause slow tests to be terminated after the specified number of
# periods have passed.
# Example: slow-timeout = { period = "60s", terminate-after = 2 }
slow-timeout = { period = "120s" }
# Treat a test as leaky if after the process is shut down, standard output and standard error
# aren't closed within this duration.
#
# This usually happens in case of a test that creates a child process and lets it inherit those
# handles, but doesn't clean the child process up (especially when it fails).
#
# See <https://nexte.st/book/leaky-tests> for more information.
leak-timeout = "100ms"
[profile.default.junit]
# Output a JUnit report into the given file inside 'store.dir/<profile-name>'.
# If unspecified, JUnit is not written out.
# path = "junit.xml"
# The name of the top-level "report" element in JUnit report. If aggregating
# reports across different test runs, it may be useful to provide separate names
# for each report.
report-name = "lighthouse-run"
# Whether standard output and standard error for passing tests should be stored in the JUnit report.
# Output is stored in the <system-out> and <system-err> elements of the <testcase> element.
store-success-output = false
# Whether standard output and standard error for failing tests should be stored in the JUnit report.
# Output is stored in the <system-out> and <system-err> elements of the <testcase> element.
#
# Note that if a description can be extracted from the output, it is always stored in the
# <description> element.
store-failure-output = true
# This profile is activated if MIRI_SYSROOT is set.
[profile.default-miri]
# Miri tests take up a lot of memory, so only run 1 test at a time by default.
test-threads = 4

.dockerignore

@@ -1,4 +1,5 @@
testing/ef_tests/consensus-spec-tests
testing/execution_engine_integration/execution_clients
target/
*.data
*.tar.gz

.editorconfig

@@ -6,4 +6,4 @@ end_of_line=lf
charset=utf-8
trim_trailing_whitespace=true
max_line_length=100
insert_final_newline=false
insert_final_newline=true

.github/custom/clippy.toml vendored Normal file

@@ -0,0 +1,23 @@
disallowed-from-async-methods = [
"tokio::runtime::Handle::block_on",
"tokio::runtime::Runtime::block_on",
"tokio::task::LocalSet::block_on",
"tokio::sync::Mutex::blocking_lock",
"tokio::sync::RwLock::blocking_read",
"tokio::sync::mpsc::Receiver::blocking_recv",
"tokio::sync::mpsc::UnboundedReceiver::blocking_recv",
"tokio::sync::oneshot::Receiver::blocking_recv",
"tokio::sync::mpsc::Sender::blocking_send",
"tokio::sync::RwLock::blocking_write",
]
async-wrapper-methods = [
"tokio::runtime::Handle::spawn_blocking",
"task_executor::TaskExecutor::spawn_blocking",
"task_executor::TaskExecutor::spawn_blocking_handle",
"warp_utils::task::blocking_task",
"warp_utils::task::blocking_json_task",
"beacon_chain::beacon_chain::BeaconChain::spawn_blocking_handle",
"validator_client::http_api::blocking_signed_json_task",
"execution_layer::test_utils::MockServer::new",
"execution_layer::test_utils::MockServer::new_with_config",
]
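This config is consumed by copying it to the workspace root before running a pinned nightly clippy; the `nightly-lint` Makefile target later in this diff does exactly that:

    cp .github/custom/clippy.toml .
    cargo +nightly-2022-05-19 clippy --workspace --tests --release -- \
        -A clippy::all \
        -D clippy::disallowed_from_async
    rm clippy.toml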

19
.github/mergify.yml vendored Normal file
View File

@@ -0,0 +1,19 @@
queue_rules:
- name: default
batch_size: 8
batch_max_wait_time: 60 s
checks_timeout: 10800 s
merge_method: squash
commit_message_template: |
{{ title }} (#{{ number }})
{% for commit in commits %}
* {{ commit.commit_message }}
{% endfor %}
queue_conditions:
- "#approved-reviews-by >= 1"
- "check-success=license/cla"
- "check-success=target-branch-check"
merge_conditions:
- "check-success=test-suite-success"
- "check-success=local-testnet-success"

View File

@@ -5,11 +5,15 @@ on:
branches:
- unstable
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-and-upload-to-s3:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@master
- uses: actions/checkout@v4
- name: Setup mdBook
uses: peaceiris/actions-mdbook@v1

View File

@@ -1,14 +0,0 @@
name: cancel previous runs
on: [push]
jobs:
cancel:
name: 'Cancel Previous Runs'
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
# https://github.com/styfle/cancel-workflow-action/releases
- uses: styfle/cancel-workflow-action@514c783324374c6940d1b92bfb962d0763d22de3 # 0.7.0
with:
# https://api.github.com/repos/sigp/lighthouse/actions/workflows
workflow_id: 697364,2434944,4462424,308241,2883401,316
access_token: ${{ github.token }}

View File

@@ -1,31 +0,0 @@
name: docker antithesis
on:
push:
branches:
- unstable
env:
ANTITHESIS_PASSWORD: ${{ secrets.ANTITHESIS_PASSWORD }}
ANTITHESIS_USERNAME: ${{ secrets.ANTITHESIS_USERNAME }}
ANTITHESIS_SERVER: ${{ secrets.ANTITHESIS_SERVER }}
REPOSITORY: ${{ secrets.ANTITHESIS_REPOSITORY }}
IMAGE_NAME: lighthouse
TAG: libvoidstar
jobs:
build-docker:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Update Rust
run: rustup update stable
- name: Dockerhub login
run: |
echo "${ANTITHESIS_PASSWORD}" | docker login --username ${ANTITHESIS_USERNAME} https://${ANTITHESIS_SERVER} --password-stdin
- name: Build AMD64 dockerfile (with push)
run: |
docker build \
--tag ${ANTITHESIS_SERVER}/${REPOSITORY}/${IMAGE_NAME}:${TAG} \
--file ./testing/antithesis/Dockerfile.libvoidstar .
docker push ${ANTITHESIS_SERVER}/${REPOSITORY}/${IMAGE_NAME}:${TAG}

View File

@@ -8,11 +8,17 @@ on:
tags:
- v*
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
IMAGE_NAME: ${{ github.repository_owner}}/lighthouse
LCLI_IMAGE_NAME: ${{ github.repository_owner }}/lcli
# Enable self-hosted runners for the sigp repo only.
SELF_HOSTED_RUNNERS: ${{ github.repository == 'sigp/lighthouse' }}
jobs:
# Extract the VERSION which is either `latest` or `vX.Y.Z`, and the VERSION_SUFFIX
@@ -22,7 +28,7 @@ jobs:
# `unstable`, but for now we keep the two parts of the version separate for backwards
# compatibility.
extract-version:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- name: Extract version (if stable)
if: github.event.ref == 'refs/heads/stable'
@@ -43,23 +49,31 @@ jobs:
VERSION: ${{ env.VERSION }}
VERSION_SUFFIX: ${{ env.VERSION_SUFFIX }}
build-docker-single-arch:
name: build-docker-${{ matrix.binary }}
runs-on: ubuntu-18.04
name: build-docker-${{ matrix.binary }}${{ matrix.features.version_suffix }}
# Use self-hosted runners only on the sigp repo.
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "release"]') || 'ubuntu-22.04' }}
strategy:
matrix:
binary: [aarch64,
aarch64-portable,
x86_64,
x86_64-portable]
features: [
{version_suffix: "", env: "gnosis,slasher-lmdb,slasher-mdbx,jemalloc"},
{version_suffix: "-dev", env: "jemalloc,spec-minimal"}
]
include:
- profile: maxperf
needs: [extract-version]
env:
# We need to enable experimental docker features in order to use `docker buildx`
DOCKER_CLI_EXPERIMENTAL: enabled
VERSION: ${{ needs.extract-version.outputs.VERSION }}
VERSION_SUFFIX: ${{ needs.extract-version.outputs.VERSION_SUFFIX }}
FEATURE_SUFFIX: ${{ matrix.features.version_suffix }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Update Rust
if: env.SELF_HOSTED_RUNNERS == 'false'
run: rustup update stable
- name: Dockerhub login
run: |
@@ -67,17 +81,15 @@ jobs:
- name: Cross build Lighthouse binary
run: |
cargo install cross
make build-${{ matrix.binary }}
env CROSS_PROFILE=${{ matrix.profile }} CROSS_FEATURES=${{ matrix.features.env }} make build-${{ matrix.binary }}
- name: Make bin dir
run: mkdir ./bin
- name: Move cross-built binary into Docker scope (if ARM)
if: startsWith(matrix.binary, 'aarch64')
run: |
mkdir ./bin;
mv ./target/aarch64-unknown-linux-gnu/release/lighthouse ./bin;
run: mv ./target/aarch64-unknown-linux-gnu/${{ matrix.profile }}/lighthouse ./bin
- name: Move cross-built binary into Docker scope (if x86_64)
if: startsWith(matrix.binary, 'x86_64')
run: |
mkdir ./bin;
mv ./target/x86_64-unknown-linux-gnu/release/lighthouse ./bin;
run: mv ./target/x86_64-unknown-linux-gnu/${{ matrix.profile }}/lighthouse ./bin
- name: Map aarch64 to arm64 short arch
if: startsWith(matrix.binary, 'aarch64')
run: echo "SHORT_ARCH=arm64" >> $GITHUB_ENV
@@ -87,53 +99,65 @@ jobs:
- name: Set modernity suffix
if: endsWith(matrix.binary, '-portable') != true
run: echo "MODERNITY_SUFFIX=-modern" >> $GITHUB_ENV;
# Install dependencies for emulation. Have to create a new builder to pick up emulation support.
- name: Build Dockerfile and push
run: |
docker run --privileged --rm tonistiigi/binfmt --install ${SHORT_ARCH}
docker buildx create --use --name cross-builder
docker buildx build \
--platform=linux/${SHORT_ARCH} \
--file ./Dockerfile.cross . \
--tag ${IMAGE_NAME}:${VERSION}-${SHORT_ARCH}${VERSION_SUFFIX}${MODERNITY_SUFFIX} \
--push
- name: Install QEMU
if: env.SELF_HOSTED_RUNNERS == 'false'
run: sudo apt-get update && sudo apt-get install -y qemu-user-static
- name: Set up Docker Buildx
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: docker/setup-buildx-action@v3
- name: Build and push
uses: docker/build-push-action@v5
with:
file: ./Dockerfile.cross
context: .
platforms: linux/${{ env.SHORT_ARCH }}
push: true
tags: ${{ env.IMAGE_NAME }}:${{ env.VERSION }}-${{ env.SHORT_ARCH }}${{ env.VERSION_SUFFIX }}${{ env.MODERNITY_SUFFIX }}${{ env.FEATURE_SUFFIX }}
build-docker-multiarch:
name: build-docker-multiarch${{ matrix.modernity }}
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
needs: [build-docker-single-arch, extract-version]
strategy:
matrix:
modernity: ["", "-modern"]
env:
# We need to enable experimental docker features in order to use `docker manifest`
DOCKER_CLI_EXPERIMENTAL: enabled
VERSION: ${{ needs.extract-version.outputs.VERSION }}
VERSION_SUFFIX: ${{ needs.extract-version.outputs.VERSION_SUFFIX }}
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Dockerhub login
run: |
echo "${DOCKER_PASSWORD}" | docker login --username ${DOCKER_USERNAME} --password-stdin
- name: Create and push multiarch manifest
run: |
docker manifest create ${IMAGE_NAME}:${VERSION}${VERSION_SUFFIX}${{ matrix.modernity }} \
--amend ${IMAGE_NAME}:${VERSION}-arm64${VERSION_SUFFIX}${{ matrix.modernity }} \
--amend ${IMAGE_NAME}:${VERSION}-amd64${VERSION_SUFFIX}${{ matrix.modernity }};
docker manifest push ${IMAGE_NAME}:${VERSION}${VERSION_SUFFIX}${{ matrix.modernity }}
docker buildx imagetools create -t ${IMAGE_NAME}:${VERSION}${VERSION_SUFFIX}${{ matrix.modernity }} \
${IMAGE_NAME}:${VERSION}-arm64${VERSION_SUFFIX}${{ matrix.modernity }} \
${IMAGE_NAME}:${VERSION}-amd64${VERSION_SUFFIX}${{ matrix.modernity }};
build-docker-lcli:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
needs: [extract-version]
env:
VERSION: ${{ needs.extract-version.outputs.VERSION }}
VERSION_SUFFIX: ${{ needs.extract-version.outputs.VERSION_SUFFIX }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Dockerhub login
run: |
echo "${DOCKER_PASSWORD}" | docker login --username ${DOCKER_USERNAME} --password-stdin
- name: Build lcli dockerfile (with push)
run: |
docker build \
--build-arg PORTABLE=true \
--tag ${LCLI_IMAGE_NAME}:${VERSION}${VERSION_SUFFIX} \
--file ./lcli/Dockerfile .
docker push ${LCLI_IMAGE_NAME}:${VERSION}${VERSION_SUFFIX}
- name: Build lcli and push
uses: docker/build-push-action@v5
with:
build-args: |
FEATURES=portable
context: .
push: true
file: ./lcli/Dockerfile
tags: ${{ env.LCLI_IMAGE_NAME }}:${{ env.VERSION }}${{ env.VERSION_SUFFIX }}
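Putting the tag components together: for a hypothetical VERSION of `v5.1.0` with an empty VERSION_SUFFIX, the matrix above would publish single-arch tags such as `sigp/lighthouse:v5.1.0-arm64-modern` and `sigp/lighthouse:v5.1.0-amd64-modern-dev`, which the multiarch job then combines into `sigp/lighthouse:v5.1.0-modern`. The resulting manifest can be checked with:

    docker buildx imagetools inspect sigp/lighthouse:v5.1.0-modern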

View File

@@ -7,6 +7,11 @@ on:
pull_request:
paths:
- 'book/**'
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
linkcheck:
@@ -15,16 +20,17 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Create docker network
run: docker network create book
uses: actions/checkout@v4
- name: Run mdbook server
run: docker run -v ${{ github.workspace }}/book:/book --network book --name book -p 3000:3000 -d peaceiris/mdbook:latest serve --hostname 0.0.0.0
run: |
docker run -v ${{ github.workspace }}/book:/book --name book -p 3000:3000 -d peaceiris/mdbook:latest serve --hostname 0.0.0.0
sleep 5
- name: Print logs
run: docker logs book
- name: Run linkcheck
run: docker run --network book tennox/linkcheck:latest book:3000
run: |
curl -sL https://github.com/filiph/linkcheck/releases/download/3.0.0/linkcheck-3.0.0-linux-x64.tar.gz | tar xvzf - linkcheck/linkcheck --strip 1
./linkcheck localhost:3000 -d
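The same check can be run locally without the workflow by serving the book and pointing the linkcheck binary at it (a sketch; assumes mdbook is installed locally and serves on its default port 3000):

    mdbook serve ./book &
    curl -sL https://github.com/filiph/linkcheck/releases/download/3.0.0/linkcheck-3.0.0-linux-x64.tar.gz \
        | tar xvzf - linkcheck/linkcheck --strip 1
    ./linkcheck localhost:3000 -d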

View File

@@ -6,23 +6,47 @@ on:
branches:
- unstable
pull_request:
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
run-local-testnet:
strategy:
matrix:
os:
- ubuntu-18.04
- macos-latest
- ubuntu-22.04
- macos-12
runs-on: ${{ matrix.os }}
env:
# Enable portable to prevent issues with caching `blst` for the wrong CPU type
FEATURES: portable,jemalloc
steps:
- uses: actions/checkout@v1
- name: Install ganache
run: npm install ganache-cli@latest --global
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install geth (ubuntu)
if: matrix.os == 'ubuntu-22.04'
run: |
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install ethereum
- name: Install geth (mac)
if: matrix.os == 'macos-12'
run: |
brew tap ethereum/ethereum
brew install ethereum
- name: Install GNU sed & GNU grep
if: matrix.os == 'macos-12'
run: |
brew install gnu-sed grep
echo "$(brew --prefix)/opt/gnu-sed/libexec/gnubin" >> $GITHUB_PATH
echo "$(brew --prefix)/opt/grep/libexec/gnubin" >> $GITHUB_PATH
# https://github.com/actions/cache/blob/main/examples.md#rust---cargo
- uses: actions/cache@v2
- uses: actions/cache@v4
id: cache-cargo
with:
path: |
@@ -34,17 +58,43 @@ jobs:
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install lighthouse
if: steps.cache-cargo.outputs.cache-hit != 'true'
run: make && make install-lcli
- name: Start local testnet
run: ./start_local_testnet.sh
run: ./start_local_testnet.sh genesis.json && sleep 60
working-directory: scripts/local_testnet
- name: Print logs
run: ./print_logs.sh
run: ./dump_logs.sh
working-directory: scripts/local_testnet
- name: Stop local testnet
run: ./stop_local_testnet.sh
working-directory: scripts/local_testnet
- name: Clean-up testnet
run: ./clean.sh
working-directory: scripts/local_testnet
- name: Start local testnet with blinded block production
run: ./start_local_testnet.sh -p genesis.json && sleep 60
working-directory: scripts/local_testnet
- name: Print logs for blinded block testnet
run: ./dump_logs.sh
working-directory: scripts/local_testnet
- name: Stop local testnet with blinded block production
run: ./stop_local_testnet.sh
working-directory: scripts/local_testnet
# This job succeeds ONLY IF all others succeed. It is used by the merge queue to determine whether
# a PR is safe to merge. New jobs should be added here.
local-testnet-success:
name: local-testnet-success
runs-on: ubuntu-latest
needs: ["run-local-testnet"]
steps:
- uses: actions/checkout@v4
- name: Check that success job is dependent on all others
run: ./scripts/ci/check-success-job.sh ./.github/workflows/local-testnet.yml local-testnet-success
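The same testnet flow can be reproduced locally with the scripts invoked above (after `make && make install-lcli`):

    cd scripts/local_testnet
    ./start_local_testnet.sh genesis.json
    ./dump_logs.sh
    ./stop_local_testnet.sh
    ./clean.sh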

View File

@@ -1,66 +0,0 @@
name: Publish Crate
on:
push:
tags:
- tree-hash-v*
- tree-hash-derive-v*
- eth2-ssz-v*
- eth2-ssz-derive-v*
- eth2-ssz-types-v*
- eth2-serde-util-v*
- eth2-hashing-v*
env:
CARGO_API_TOKEN: ${{ secrets.CARGO_API_TOKEN }}
jobs:
extract-tag:
runs-on: ubuntu-latest
steps:
- name: Extract tag
run: echo "::set-output name=TAG::$(echo ${GITHUB_REF#refs/tags/})"
id: extract_tag
outputs:
TAG: ${{ steps.extract_tag.outputs.TAG }}
publish-crate:
runs-on: ubuntu-latest
needs: [extract-tag]
env:
TAG: ${{ needs.extract-tag.outputs.TAG }}
steps:
- uses: actions/checkout@v2
- name: Update Rust
run: rustup update stable
- name: Cargo login
run: |
echo "${CARGO_API_TOKEN}" | cargo login
- name: publish eth2 ssz derive
if: startsWith(env.TAG, 'eth2-ssz-derive-v')
run: |
./scripts/ci/publish.sh consensus/ssz_derive eth2_ssz_derive "$TAG"
- name: publish eth2 ssz
if: startsWith(env.TAG, 'eth2-ssz-v')
run: |
./scripts/ci/publish.sh consensus/ssz eth2_ssz "$TAG"
- name: publish eth2 hashing
if: startsWith(env.TAG, 'eth2-hashing-v')
run: |
./scripts/ci/publish.sh crypto/eth2_hashing eth2_hashing "$TAG"
- name: publish tree hash derive
if: startsWith(env.TAG, 'tree-hash-derive-v')
run: |
./scripts/ci/publish.sh consensus/tree_hash_derive tree_hash_derive "$TAG"
- name: publish tree hash
if: startsWith(env.TAG, 'tree-hash-v')
run: |
./scripts/ci/publish.sh consensus/tree_hash tree_hash "$TAG"
- name: publish ssz types
if: startsWith(env.TAG, 'eth2-ssz-types-v')
run: |
./scripts/ci/publish.sh consensus/ssz_types eth2_ssz_types "$TAG"
- name: publish serde util
if: startsWith(env.TAG, 'eth2-serde-util-v')
run: |
./scripts/ci/publish.sh consensus/serde_utils eth2_serde_utils "$TAG"

View File

@@ -5,18 +5,24 @@ on:
tags:
- v*
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
REPO_NAME: sigp/lighthouse
IMAGE_NAME: sigp/lighthouse
REPO_NAME: ${{ github.repository_owner }}/lighthouse
IMAGE_NAME: ${{ github.repository_owner }}/lighthouse
# Enable self-hosted runners for the sigp repo only.
SELF_HOSTED_RUNNERS: ${{ github.repository == 'sigp/lighthouse' }}
jobs:
extract-version:
runs-on: ubuntu-latest
steps:
- name: Extract version
run: echo "::set-output name=VERSION::$(echo ${GITHUB_REF#refs/tags/})"
run: echo "VERSION=$(echo ${GITHUB_REF#refs/tags/})" >> $GITHUB_OUTPUT
id: extract_version
outputs:
VERSION: ${{ steps.extract_version.outputs.VERSION }}
@@ -34,42 +40,47 @@ jobs:
x86_64-windows-portable]
include:
- arch: aarch64-unknown-linux-gnu
platform: ubuntu-latest
runner: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "release", "large"]') || 'ubuntu-latest' }}
profile: maxperf
- arch: aarch64-unknown-linux-gnu-portable
platform: ubuntu-latest
runner: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "release", "large"]') || 'ubuntu-latest' }}
profile: maxperf
- arch: x86_64-unknown-linux-gnu
platform: ubuntu-latest
runner: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "release", "large"]') || 'ubuntu-latest' }}
profile: maxperf
- arch: x86_64-unknown-linux-gnu-portable
platform: ubuntu-latest
runner: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "release", "large"]') || 'ubuntu-latest' }}
profile: maxperf
- arch: x86_64-apple-darwin
platform: macos-latest
runner: macos-latest
profile: maxperf
- arch: x86_64-apple-darwin-portable
platform: macos-latest
runner: macos-latest
profile: maxperf
- arch: x86_64-windows
platform: windows-latest
runner: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "windows", "release"]') || 'windows-2019' }}
profile: maxperf
- arch: x86_64-windows-portable
platform: windows-latest
runner: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "windows", "release"]') || 'windows-2019' }}
profile: maxperf
runs-on: ${{ matrix.platform }}
runs-on: ${{ matrix.runner }}
needs: extract-version
steps:
- name: Checkout sources
uses: actions/checkout@v2
- name: Build toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: stable
profile: minimal
override: true
uses: actions/checkout@v4
- name: Get latest version of stable Rust
if: env.SELF_HOSTED_RUNNERS == 'false'
run: rustup update stable
# ==============================
# Windows dependencies
# ==============================
- uses: KyleMayes/install-llvm-action@v1
if: startsWith(matrix.arch, 'x86_64-windows')
if: env.SELF_HOSTED_RUNNERS == 'false' && startsWith(matrix.arch, 'x86_64-windows')
with:
version: "13.0"
version: "16.0"
directory: ${{ runner.temp }}/llvm
- name: Set LIBCLANG_PATH
if: startsWith(matrix.arch, 'x86_64-windows')
@@ -83,49 +94,49 @@ jobs:
if: matrix.arch == 'aarch64-unknown-linux-gnu-portable'
run: |
cargo install cross
make build-aarch64-portable
env CROSS_PROFILE=${{ matrix.profile }} make build-aarch64-portable
- name: Build Lighthouse for aarch64-unknown-linux-gnu
if: matrix.arch == 'aarch64-unknown-linux-gnu'
run: |
cargo install cross
make build-aarch64
env CROSS_PROFILE=${{ matrix.profile }} make build-aarch64
- name: Build Lighthouse for x86_64-unknown-linux-gnu-portable
if: matrix.arch == 'x86_64-unknown-linux-gnu-portable'
run: |
cargo install cross
make build-x86_64-portable
env CROSS_PROFILE=${{ matrix.profile }} make build-x86_64-portable
- name: Build Lighthouse for x86_64-unknown-linux-gnu
if: matrix.arch == 'x86_64-unknown-linux-gnu'
run: |
cargo install cross
make build-x86_64
env CROSS_PROFILE=${{ matrix.profile }} make build-x86_64
- name: Move cross-compiled binary
if: startsWith(matrix.arch, 'aarch64')
run: mv target/aarch64-unknown-linux-gnu/release/lighthouse ~/.cargo/bin/lighthouse
run: mv target/aarch64-unknown-linux-gnu/${{ matrix.profile }}/lighthouse ~/.cargo/bin/lighthouse
- name: Move cross-compiled binary
if: startsWith(matrix.arch, 'x86_64-unknown-linux-gnu')
run: mv target/x86_64-unknown-linux-gnu/release/lighthouse ~/.cargo/bin/lighthouse
run: mv target/x86_64-unknown-linux-gnu/${{ matrix.profile }}/lighthouse ~/.cargo/bin/lighthouse
- name: Build Lighthouse for x86_64-apple-darwin portable
if: matrix.arch == 'x86_64-apple-darwin-portable'
run: cargo install --path lighthouse --force --locked --features portable,gnosis
run: cargo install --path lighthouse --force --locked --features portable,gnosis --profile ${{ matrix.profile }}
- name: Build Lighthouse for x86_64-apple-darwin modern
if: matrix.arch == 'x86_64-apple-darwin'
run: cargo install --path lighthouse --force --locked --features modern,gnosis
run: cargo install --path lighthouse --force --locked --features modern,gnosis --profile ${{ matrix.profile }}
- name: Build Lighthouse for Windows portable
if: matrix.arch == 'x86_64-windows-portable'
run: cargo install --path lighthouse --force --locked --features portable,gnosis
run: cargo install --path lighthouse --force --locked --features portable,gnosis --profile ${{ matrix.profile }}
- name: Build Lighthouse for Windows modern
if: matrix.arch == 'x86_64-windows'
run: cargo install --path lighthouse --force --locked --features modern,gnosis
run: cargo install --path lighthouse --force --locked --features modern,gnosis --profile ${{ matrix.profile }}
- name: Configure GPG and create artifacts
if: startsWith(matrix.arch, 'x86_64-windows') != true
@@ -161,17 +172,19 @@ jobs:
# This is required to share artifacts between different jobs
# =======================================================================
- name: Upload artifact
uses: actions/upload-artifact@v2
- name: Upload artifact
uses: actions/upload-artifact@v4
with:
name: lighthouse-${{ needs.extract-version.outputs.VERSION }}-${{ matrix.arch }}.tar.gz
path: lighthouse-${{ needs.extract-version.outputs.VERSION }}-${{ matrix.arch }}.tar.gz
compression-level: 0
- name: Upload signature
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
with:
name: lighthouse-${{ needs.extract-version.outputs.VERSION }}-${{ matrix.arch }}.tar.gz.asc
path: lighthouse-${{ needs.extract-version.outputs.VERSION }}-${{ matrix.arch }}.tar.gz.asc
compression-level: 0
draft-release:
name: Draft Release
@@ -182,7 +195,7 @@ jobs:
steps:
# This is necessary for generating the changelog. It has to come before "Download Artifacts" or else it deletes the artifacts.
- name: Checkout sources
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -191,7 +204,7 @@ jobs:
# ==============================
- name: Download artifacts
uses: actions/download-artifact@v2
uses: actions/download-artifact@v4
# ==============================
# Create release draft
@@ -199,11 +212,14 @@ jobs:
- name: Generate Full Changelog
id: changelog
run: echo "::set-output name=CHANGELOG::$(git log --pretty=format:"- %s" $(git describe --tags --abbrev=0 ${{ env.VERSION }}^)..${{ env.VERSION }})"
run: |
echo "CHANGELOG<<EOF" >> $GITHUB_OUTPUT
echo "$(git log --pretty=format:"- %s" $(git describe --tags --abbrev=0 ${{ env.VERSION }}^)..${{ env.VERSION }})" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Create Release Draft
env:
GITHUB_USER: sigp
GITHUB_USER: ${{ github.repository_owner }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# The formatting here is borrowed from OpenEthereum: https://github.com/openethereum/openethereum/blob/main/.github/workflows/build.yml
@@ -212,7 +228,7 @@ jobs:
<Rick and Morty character>
## Testing Checklist (DELETE ME)
- [ ] Run on synced Prater Sigma Prime nodes.
- [ ] Run on synced Canary (mainnet) Sigma Prime nodes.
- [ ] Resync a Prater node.
@@ -268,9 +284,6 @@ jobs:
| <img src="https://simpleicons.org/icons/docker.svg" style="width: 32px;"/> | Docker | [${{ env.VERSION }}](https://hub.docker.com/r/${{ env.IMAGE_NAME }}/tags?page=1&ordering=last_updated&name=${{ env.VERSION }}) | [${{ env.IMAGE_NAME }}](https://hub.docker.com/r/${{ env.IMAGE_NAME }}) |
ENDBODY
)
assets=()
for asset in ./lighthouse-*.tar.gz*; do
assets+=("-a" "$asset/$asset")
done
assets=(./lighthouse-*.tar.gz*/lighthouse-*.tar.gz*)
tag_name="${{ env.VERSION }}"
echo "$body" | hub release create --draft "${assets[@]}" -F "-" "$tag_name"
echo "$body" | gh release create --draft -F "-" "$tag_name" "${assets[@]}"

View File

@@ -8,120 +8,219 @@ on:
- trying
- 'pr/*'
pull_request:
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
# Deny warnings in CI
RUSTFLAGS: "-D warnings"
# The Nightly version used for cargo-udeps, might need updating from time to time.
PINNED_NIGHTLY: nightly-2021-12-01
# Disable debug info (see https://github.com/sigp/lighthouse/issues/4005)
RUSTFLAGS: "-D warnings -C debuginfo=0"
# Prevent Github API rate limiting.
LIGHTHOUSE_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Enable self-hosted runners for the sigp repo only.
SELF_HOSTED_RUNNERS: ${{ github.repository == 'sigp/lighthouse' }}
# Self-hosted runners need to reference a different host for `./watch` tests.
WATCH_HOST: ${{ github.repository == 'sigp/lighthouse' && 'host.docker.internal' || 'localhost' }}
# Disable incremental compilation
CARGO_INCREMENTAL: 0
# Enable portable to prevent issues with caching `blst` for the wrong CPU type
TEST_FEATURES: portable
jobs:
target-branch-check:
name: target-branch-check
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request' || github.event_name == 'merge_group'
steps:
- name: Check that pull request is targeting unstable branch
run: test ${{ github.base_ref }} = "unstable"
cargo-fmt:
name: cargo-fmt
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Get latest version of stable Rust
run: rustup update stable
- name: Check formatting with cargo fmt
run: make cargo-fmt
- name: Check that the pull request is not targeting the stable branch
run: test ${{ github.base_ref }} != "stable"
release-tests-ubuntu:
name: release-tests-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
# Use self-hosted runners only on the sigp repo.
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "CI", "large"]') || 'ubuntu-latest' }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
bins: cargo-nextest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Install Foundry (anvil)
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly-ca67d15f4abd46394b324c50e21e66f306a1162d
- name: Run tests in release
run: make test-release
run: make nextest-release
- name: Show cache stats
if: env.SELF_HOSTED_RUNNERS == 'true'
run: sccache --show-stats
release-tests-windows:
name: release-tests-windows
runs-on: windows-latest
needs: cargo-fmt
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "windows", "CI"]') || 'windows-2019' }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: npm install -g ganache-cli
- name: Install make
run: choco install -y make
- uses: KyleMayes/install-llvm-action@v1
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
version: "13.0"
directory: ${{ runner.temp }}/llvm
channel: stable
cache-target: release
bins: cargo-nextest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Install Foundry (anvil)
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly-ca67d15f4abd46394b324c50e21e66f306a1162d
- name: Install make
if: env.SELF_HOSTED_RUNNERS == 'false'
run: choco install -y make
# - uses: KyleMayes/install-llvm-action@v1
# if: env.SELF_HOSTED_RUNNERS == 'false'
# with:
# version: "16.0"
# directory: ${{ runner.temp }}/llvm
- name: Set LIBCLANG_PATH
run: echo "LIBCLANG_PATH=$((gcm clang).source -replace "clang.exe")" >> $env:GITHUB_ENV
- name: Run tests in release
run: make test-release
run: make nextest-release
- name: Show cache stats
if: env.SELF_HOSTED_RUNNERS == 'true'
run: sccache --show-stats
beacon-chain-tests:
name: beacon-chain-tests
runs-on: ubuntu-latest
needs: cargo-fmt
# Use self-hosted runners only on the sigp repo.
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "CI", "large"]') || 'ubuntu-latest' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
bins: cargo-nextest
- name: Run beacon_chain tests for all known forks
run: make test-beacon-chain
- name: Show cache stats
if: env.SELF_HOSTED_RUNNERS == 'true'
run: sccache --show-stats
op-pool-tests:
name: op-pool-tests
runs-on: ubuntu-latest
needs: cargo-fmt
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
bins: cargo-nextest
- name: Run operation_pool tests for all known forks
run: make test-op-pool
network-tests:
name: network-tests
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
bins: cargo-nextest
- name: Run network tests for all known forks
run: make test-network
slasher-tests:
name: slasher-tests
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
bins: cargo-nextest
- name: Run slasher tests for all supported backends
run: make test-slasher
debug-tests-ubuntu:
name: debug-tests-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
# Use self-hosted runners only on the sigp repo.
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "CI", "large"]') || 'ubuntu-latest' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
channel: stable
bins: cargo-nextest
- name: Install Foundry (anvil)
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly-ca67d15f4abd46394b324c50e21e66f306a1162d
- name: Run tests in debug
run: make test-debug
run: make nextest-debug
- name: Show cache stats
if: env.SELF_HOSTED_RUNNERS == 'true'
run: sccache --show-stats
state-transition-vectors-ubuntu:
name: state-transition-vectors-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Run state_transition_vectors in release.
run: make run-state-transition-tests
ef-tests-ubuntu:
name: ef-tests-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
# Use self-hosted runners only on the sigp repo.
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "CI", "small"]') || 'ubuntu-latest' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Run consensus-spec-tests with blst, milagro and fake_crypto
run: make test-ef
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
bins: cargo-nextest
- name: Run consensus-spec-tests with blst and fake_crypto
run: make nextest-ef
- name: Show cache stats
if: env.SELF_HOSTED_RUNNERS == 'true'
run: sccache --show-stats
dockerfile-ubuntu:
name: dockerfile-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Get latest version of stable Rust
run: rustup update stable
- uses: actions/checkout@v4
- name: Build the root Dockerfile
run: docker build --build-arg FEATURES=portable -t lighthouse:local .
- name: Test the built image
@@ -129,131 +228,171 @@ jobs:
eth1-simulator-ubuntu:
name: eth1-simulator-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Install Foundry (anvil)
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly-ca67d15f4abd46394b324c50e21e66f306a1162d
- name: Run the beacon chain sim that starts from an eth1 contract
run: cargo run --release --bin simulator eth1-sim
merge-transition-ubuntu:
name: merge-transition-ubuntu
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Install Foundry (anvil)
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly-ca67d15f4abd46394b324c50e21e66f306a1162d
- name: Run the beacon chain sim and go through the merge transition
run: cargo run --release --bin simulator eth1-sim --post-merge
no-eth1-simulator-ubuntu:
name: no-eth1-simulator-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Run the beacon chain sim without an eth1 connection
run: cargo run --release --bin simulator no-eth1-sim
syncing-simulator-ubuntu:
name: syncing-simulator-ubuntu
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
- name: Run the syncing simulator
run: cargo run --release --bin simulator syncing-sim
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Install Foundry (anvil)
uses: foundry-rs/foundry-toolchain@v1
with:
version: nightly-ca67d15f4abd46394b324c50e21e66f306a1162d
- name: Run the syncing simulator
run: cargo run --release --bin simulator syncing-sim
doppelganger-protection-test:
name: doppelganger-protection-test
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Get latest version of stable Rust
run: rustup update stable
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
- name: Install lighthouse and lcli
run: |
make
make install-lcli
- name: Run the doppelganger protection success test script
run: |
cd scripts/tests
./doppelganger_protection.sh success
- name: Run the doppelganger protection failure test script
run: |
cd scripts/tests
./doppelganger_protection.sh failure
check-benchmarks:
name: check-benchmarks
runs-on: ubuntu-latest
needs: cargo-fmt
name: doppelganger-protection-test
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "CI", "small"]') || 'ubuntu-latest' }}
env:
# Enable portable to prevent issues with caching `blst` for the wrong CPU type
FEATURES: jemalloc,portable
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Typecheck benchmark code without running it
run: make check-benches
check-consensus:
name: check-consensus
runs-on: ubuntu-latest
needs: cargo-fmt
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Install geth
if: env.SELF_HOSTED_RUNNERS == 'false'
run: |
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install ethereum
- name: Install lighthouse
run: |
make
- name: Install lcli
# TODO: uncomment after the version of lcli in https://github.com/sigp/lighthouse/pull/5137
# is installed on the runners
# if: env.SELF_HOSTED_RUNNERS == 'false'
run: make install-lcli
- name: Run the doppelganger protection failure test script
run: |
cd scripts/tests
./doppelganger_protection.sh failure genesis.json
- name: Run the doppelganger protection success test script
run: |
cd scripts/tests
./doppelganger_protection.sh success genesis.json
execution-engine-integration-ubuntu:
name: execution-engine-integration-ubuntu
runs-on: ${{ github.repository == 'sigp/lighthouse' && fromJson('["self-hosted", "linux", "CI", "small"]') || 'ubuntu-latest' }}
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
- name: Typecheck consensus code in strict mode
run: make check-consensus
clippy:
name: clippy
if: env.SELF_HOSTED_RUNNERS == 'false'
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
cache: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Add go compiler to $PATH
if: env.SELF_HOSTED_RUNNERS == 'true'
run: echo "/usr/local/go/bin" >> $GITHUB_PATH
- name: Run exec engine integration tests in release
run: make test-exec-engine
check-code:
name: check-code
runs-on: ubuntu-latest
needs: cargo-fmt
env:
CARGO_INCREMENTAL: 1
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
run: rustup update stable
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
components: rustfmt,clippy
bins: cargo-audit
- name: Check formatting with cargo fmt
run: make cargo-fmt
- name: Lint code for quality and style with Clippy
run: make lint
- name: Certify Cargo.lock freshness
run: git diff --exit-code Cargo.lock
arbitrary-check:
name: arbitrary-check
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Get latest version of stable Rust
run: rustup update stable
- name: Typecheck benchmark code without running it
run: make check-benches
- name: Validate state_processing feature arbitrary-fuzz
run: make arbitrary-fuzz
cargo-audit:
name: cargo-audit
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Get latest version of stable Rust
run: rustup update stable
- name: Run cargo audit to identify known security vulnerabilities reported to the RustSec Advisory Database
run: make audit
cargo-vendor:
name: cargo-vendor
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Run cargo audit
run: make audit-CI
- name: Run cargo vendor to make sure dependencies can be vendored for packaging, reproducibility and archival purposes
run: CARGO_HOME=$(readlink -f $HOME) make vendor
check-msrv:
name: check-msrv
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust at Minimum Supported Rust Version (MSRV)
run: |
metadata=$(cargo metadata --no-deps --format-version 1)
msrv=$(echo $metadata | jq -r '.packages | map(select(.name == "lighthouse")) | .[0].rust_version')
rustup override set $msrv
- name: Run cargo check
run: cargo check --workspace
cargo-udeps:
name: cargo-udeps
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Install Rust (${{ env.PINNED_NIGHTLY }})
run: rustup toolchain install $PINNED_NIGHTLY
- name: Install cargo-udeps
run: cargo install cargo-udeps --locked
- uses: actions/checkout@v4
- name: Get latest version of nightly Rust
uses: moonrepo/setup-rust@v1
with:
channel: nightly
bins: cargo-udeps
cache: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create Cargo config dir
run: mkdir -p .cargo
- name: Install custom Cargo config
@@ -263,3 +402,59 @@ jobs:
env:
# Allow warnings on Nightly
RUSTFLAGS: ""
compile-with-beta-compiler:
name: compile-with-beta-compiler
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install dependencies
run: sudo apt update && sudo apt install -y git gcc g++ make cmake pkg-config llvm-dev libclang-dev clang
- name: Use Rust beta
run: rustup override set beta
- name: Run make
run: make
cli-check:
name: cli-check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Get latest version of stable Rust
uses: moonrepo/setup-rust@v1
with:
channel: stable
cache-target: release
- name: Run Makefile to trigger the bash script
run: make cli
# This job succeeds ONLY IF all others succeed. It is used by the merge queue to determine whether
# a PR is safe to merge. New jobs should be added here.
test-suite-success:
name: test-suite-success
runs-on: ubuntu-latest
needs: [
'target-branch-check',
'release-tests-ubuntu',
'release-tests-windows',
'beacon-chain-tests',
'op-pool-tests',
'network-tests',
'slasher-tests',
'debug-tests-ubuntu',
'state-transition-vectors-ubuntu',
'ef-tests-ubuntu',
'dockerfile-ubuntu',
'eth1-simulator-ubuntu',
'merge-transition-ubuntu',
'no-eth1-simulator-ubuntu',
'syncing-simulator-ubuntu',
'doppelganger-protection-test',
'execution-engine-integration-ubuntu',
'check-code',
'check-msrv',
'cargo-udeps',
'compile-with-beta-compiler',
'cli-check',
]
steps:
- uses: actions/checkout@v4
- name: Check that success job is dependent on all others
run: ./scripts/ci/check-success-job.sh ./.github/workflows/test-suite.yml test-suite-success
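Most of these jobs can be reproduced locally via the Makefile targets introduced later in this diff (a sketch; requires cargo-nextest):

    cargo install cargo-nextest --locked
    env TEST_FEATURES=portable make nextest-release   # release-tests-ubuntu
    env TEST_FEATURES=portable make nextest-debug     # debug-tests-ubuntu
    make lint                                         # clippy, as run by check-code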

10
.gitignore vendored
View File

@@ -1,4 +1,5 @@
target/
vendor/
**/*.rs.bk
*.pk
*.sk
@@ -8,3 +9,12 @@ perf.data*
*.tar.gz
/bin
genesis.ssz
/clippy.toml
/.cargo
# IntelliJ
/*.iml
.idea
# VSCode
/.vscode

View File

@@ -1,4 +1,5 @@
# Contributors Guide
[![GitPOAP badge](https://public-api.gitpoap.io/v1/repo/sigp/lighthouse/badge)](https://www.gitpoap.io/gh/sigp/lighthouse)
Lighthouse is an open-source Ethereum 2.0 client. We're community-driven and
welcome all contributions. We aim to provide a constructive, respectful and fun
@@ -44,8 +45,8 @@ questions.
2. **Work in a feature branch** of your personal fork
(github.com/YOUR_NAME/lighthouse) of the main repository
(github.com/sigp/lighthouse).
3. Once you feel you have addressed the issue, **create a pull-request** to merge
your changes in to the main repository.
3. Once you feel you have addressed the issue, **create a pull-request** with
`unstable` as the base branch to merge your changes into the main repository.
4. Wait for the repository maintainers to **review your changes** to ensure the
issue is addressed satisfactorily. Optionally, mention your PR on
[discord](https://discord.gg/cyAszAh).
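A minimal sketch of that workflow on the command line (fork and branch names are illustrative):

    git clone https://github.com/YOUR_NAME/lighthouse
    cd lighthouse
    git remote add upstream https://github.com/sigp/lighthouse
    git fetch upstream
    git checkout -b my-feature upstream/unstable
    # ...commit your changes, then:
    git push -u origin my-feature
    # and open a pull request against sigp/lighthouse with `unstable` as the base branch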

7593
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -4,9 +4,12 @@ members = [
"beacon_node",
"beacon_node/beacon_chain",
"beacon_node/beacon_processor",
"beacon_node/builder_client",
"beacon_node/client",
"beacon_node/eth1",
"beacon_node/lighthouse_network",
"beacon_node/lighthouse_network/gossipsub",
"beacon_node/execution_layer",
"beacon_node/http_api",
"beacon_node/http_metrics",
@@ -14,7 +17,7 @@ members = [
"beacon_node/store",
"beacon_node/timer",
"boot_node",
"boot_node",
"common/account_utils",
"common/clap_utils",
@@ -27,39 +30,38 @@ members = [
"common/eth2_interop_keypairs",
"common/eth2_network_config",
"common/eth2_wallet_manager",
"common/hashset_delay",
"common/lighthouse_metrics",
"common/lighthouse_version",
"common/lockfile",
"common/logging",
"common/lru_cache",
"common/malloc_utils",
"common/oneshot_broadcast",
"common/pretty_reqwest_error",
"common/promise_cache",
"common/sensitive_url",
"common/slot_clock",
"common/system_health",
"common/task_executor",
"common/target_check",
"common/test_random_derive",
"common/unused_port",
"common/validator_dir",
"common/warp_utils",
"common/fallback",
"common/monitoring_api",
"database_manager",
"consensus/cached_tree_hash",
"consensus/int_to_bytes",
"consensus/fork_choice",
"consensus/proto_array",
"consensus/safe_arith",
"consensus/ssz",
"consensus/ssz_derive",
"consensus/ssz_types",
"consensus/serde_utils",
"consensus/state_processing",
"consensus/swap_or_not_shuffle",
"consensus/tree_hash",
"consensus/tree_hash_derive",
"crypto/bls",
"crypto/eth2_hashing",
"crypto/kzg",
"crypto/eth2_key_derivation",
"crypto/eth2_keystore",
"crypto/eth2_wallet",
@@ -74,6 +76,7 @@ members = [
"testing/ef_tests",
"testing/eth1_test_rig",
"testing/execution_engine_integration",
"testing/node_test_rig",
"testing/simulator",
"testing/test-test_logger",
@@ -82,13 +85,164 @@ members = [
"validator_client",
"validator_client/slashing_protection",
]
[patch]
"validator_manager",
"watch",
]
resolver = "2"
[workspace.package]
edition = "2021"
[workspace.dependencies]
anyhow = "1"
arbitrary = { version = "1", features = ["derive"] }
async-channel = "1.9.0"
bincode = "1"
bitvec = "1"
byteorder = "1"
bytes = "1"
clap = "2"
compare_fields_derive = { path = "common/compare_fields_derive" }
criterion = "0.3"
crossbeam-channel = "0.5.8"
delay_map = "0.3"
derivative = "2"
dirs = "3"
either = "1.9"
discv5 = { version = "0.4.1", features = ["libp2p"] }
env_logger = "0.9"
error-chain = "0.12"
ethereum-types = "0.14"
ethereum_hashing = "1.0.0-beta.2"
ethereum_serde_utils = "0.5.2"
ethereum_ssz = "0.5"
ethereum_ssz_derive = "0.5"
ethers-core = "1"
ethers-providers = { version = "1", default-features = false }
exit-future = "0.2"
fnv = "1"
fs2 = "0.4"
futures = "0.3"
hex = "0.4"
hyper = "1"
itertools = "0.10"
lazy_static = "1"
libsecp256k1 = "0.7"
log = "0.4"
lru = "0.12"
maplit = "1"
milhouse = { git = "https://github.com/sigp/milhouse", branch = "main" }
num_cpus = "1"
parking_lot = "0.12"
paste = "1"
quickcheck = "1"
quickcheck_macros = "1"
quote = "1"
r2d2 = "0.8"
rand = "0.8"
rayon = "1.7"
regex = "1"
reqwest = { version = "0.11", default-features = false, features = ["blocking", "json", "stream", "rustls-tls", "native-tls-vendored"] }
ring = "0.16"
rusqlite = { version = "0.28", features = ["bundled"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_repr = "0.1"
serde_yaml = "0.9"
sha2 = "0.9"
slog = { version = "2", features = ["max_level_trace", "release_max_level_trace", "nested-values"] }
slog-async = "2"
slog-term = "2"
sloggers = { version = "2", features = ["json"] }
smallvec = "1.11.2"
snap = "1"
ssz_types = "0.5"
strum = { version = "0.24", features = ["derive"] }
superstruct = "0.7"
syn = "1"
sysinfo = "0.26"
tempfile = "3"
tokio = { version = "1", features = ["rt-multi-thread", "sync", "signal"] }
tokio-stream = { version = "0.1", features = ["sync"] }
tokio-util = { version = "0.6", features = ["codec", "compat", "time"] }
tracing = "0.1.40"
tracing-appender = "0.2"
tracing-core = "0.1"
tracing-log = "0.2"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tree_hash = "0.5"
tree_hash_derive = "0.5"
url = "2"
uuid = { version = "0.8", features = ["serde", "v4"] }
warp = { version = "0.3.6", default-features = false, features = ["tls"] }
xdelta3 = { git = "http://github.com/michaelsproul/xdelta3-rs", rev="ae9a1d2585ef998f4656acdc35cf263ee88e6dfa" }
zeroize = { version = "1", features = ["zeroize_derive"] }
zip = "0.6"
zstd = "0.11.2"
# Local crates.
account_utils = { path = "common/account_utils" }
beacon_chain = { path = "beacon_node/beacon_chain" }
beacon_node = { path = "beacon_node" }
beacon_processor = { path = "beacon_node/beacon_processor" }
bls = { path = "crypto/bls" }
cached_tree_hash = { path = "consensus/cached_tree_hash" }
clap_utils = { path = "common/clap_utils" }
compare_fields = { path = "common/compare_fields" }
deposit_contract = { path = "common/deposit_contract" }
directory = { path = "common/directory" }
environment = { path = "lighthouse/environment" }
eth1 = { path = "beacon_node/eth1" }
eth1_test_rig = { path = "testing/eth1_test_rig" }
eth2 = { path = "common/eth2" }
eth2_config = { path = "common/eth2_config" }
eth2_key_derivation = { path = "crypto/eth2_key_derivation" }
eth2_keystore = { path = "crypto/eth2_keystore" }
eth2_network_config = { path = "common/eth2_network_config" }
eth2_wallet = { path = "crypto/eth2_wallet" }
execution_layer = { path = "beacon_node/execution_layer" }
filesystem = { path = "common/filesystem" }
fork_choice = { path = "consensus/fork_choice" }
genesis = { path = "beacon_node/genesis" }
gossipsub = { path = "beacon_node/lighthouse_network/gossipsub/" }
http_api = { path = "beacon_node/http_api" }
int_to_bytes = { path = "consensus/int_to_bytes" }
kzg = { path = "crypto/kzg" }
lighthouse_metrics = { path = "common/lighthouse_metrics" }
lighthouse_network = { path = "beacon_node/lighthouse_network" }
lighthouse_version = { path = "common/lighthouse_version" }
lockfile = { path = "common/lockfile" }
logging = { path = "common/logging" }
lru_cache = { path = "common/lru_cache" }
malloc_utils = { path = "common/malloc_utils" }
merkle_proof = { path = "consensus/merkle_proof" }
monitoring_api = { path = "common/monitoring_api" }
network = { path = "beacon_node/network" }
operation_pool = { path = "beacon_node/operation_pool" }
pretty_reqwest_error = { path = "common/pretty_reqwest_error" }
proto_array = { path = "consensus/proto_array" }
safe_arith = {path = "consensus/safe_arith"}
sensitive_url = { path = "common/sensitive_url" }
slasher = { path = "slasher" }
slashing_protection = { path = "validator_client/slashing_protection" }
slot_clock = { path = "common/slot_clock" }
state_processing = { path = "consensus/state_processing" }
store = { path = "beacon_node/store" }
swap_or_not_shuffle = { path = "consensus/swap_or_not_shuffle" }
task_executor = { path = "common/task_executor" }
types = { path = "consensus/types" }
unused_port = { path = "common/unused_port" }
validator_client = { path = "validator_client" }
validator_dir = { path = "common/validator_dir" }
warp_utils = { path = "common/warp_utils" }
[patch.crates-io]
fixed-hash = { git = "https://github.com/paritytech/parity-common", rev="df638ab0885293d21d656dc300d39236b69ce57d" }
warp = { git = "https://github.com/macladson/warp", rev ="7e75acc" }
eth2_ssz = { path = "consensus/ssz" }
eth2_ssz_types = { path = "consensus/ssz_types" }
tree_hash = { path = "consensus/tree_hash" }
eth2_serde_utils = { path = "consensus/serde_utils" }
yamux = { git = "https://github.com/sigp/rust-yamux.git" }
[profile.maxperf]
inherits = "release"
lto = "fat"
codegen-units = 1
incremental = false
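Member crates inherit the entries under `[workspace.dependencies]` with declarations such as `serde = { workspace = true }`. The `maxperf` profile defined above trades much longer compile times for fat-LTO, single-codegen-unit binaries; it can be selected through the Makefile's `PROFILE` variable introduced later in this diff:

    PROFILE=maxperf make install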

View File

@@ -1,15 +1,5 @@
[build.env]
passthrough = [
"RUSTFLAGS",
]
# These custom images are required to work around the lack of Clang in the default `cross` images.
# We need Clang to run `bindgen` for MDBX, and the `BINDGEN_EXTRA_CLANG_ARGS` flags must also be set
# while cross-compiling for ARM to prevent bindgen from attempting to include headers from the host.
#
# For more information see https://github.com/rust-embedded/cross/pull/608
[target.x86_64-unknown-linux-gnu]
image = "michaelsproul/cross-clang:x86_64-latest"
pre-build = ["apt-get install -y cmake clang-5.0"]
[target.aarch64-unknown-linux-gnu]
image = "michaelsproul/cross-clang:aarch64-latest"
pre-build = ["apt-get install -y cmake clang-5.0"]

View File

@@ -1,11 +1,15 @@
FROM rust:1.58.1-bullseye AS builder
FROM rust:1.75.0-bullseye AS builder
RUN apt-get update && apt-get -y upgrade && apt-get install -y cmake libclang-dev
COPY . lighthouse
ARG FEATURES
ARG PROFILE=release
ARG CARGO_USE_GIT_CLI=true
ENV FEATURES $FEATURES
ENV PROFILE $PROFILE
ENV CARGO_NET_GIT_FETCH_WITH_CLI=$CARGO_USE_GIT_CLI
RUN cd lighthouse && make
FROM ubuntu:latest
FROM ubuntu:22.04
RUN apt-get update && apt-get -y upgrade && apt-get install -y --no-install-recommends \
libssl-dev \
ca-certificates \

View File

@@ -1,7 +1,7 @@
# This image is meant to enable cross-architecture builds.
# It assumes the lighthouse binary has already been
# compiled for `$TARGETPLATFORM` and moved to `./bin`.
FROM --platform=$TARGETPLATFORM ubuntu:latest
FROM --platform=$TARGETPLATFORM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
libssl-dev \
ca-certificates \

158
Makefile
View File

@@ -2,6 +2,7 @@
EF_TESTS = "testing/ef_tests"
STATE_TRANSITION_VECTORS = "testing/state_transition_vectors"
EXECUTION_ENGINE_INTEGRATION = "testing/execution_engine_integration"
GIT_TAG := $(shell git describe --tags --candidates 1)
BIN_DIR = "bin"
@@ -11,20 +12,53 @@ AARCH64_TAG = "aarch64-unknown-linux-gnu"
BUILD_PATH_AARCH64 = "target/$(AARCH64_TAG)/release"
PINNED_NIGHTLY ?= nightly
CLIPPY_PINNED_NIGHTLY=nightly-2022-05-19
# List of features to use when building natively. Can be overridden via the environment.
# No jemalloc on Windows
ifeq ($(OS),Windows_NT)
FEATURES?=
else
FEATURES?=jemalloc
endif
# List of features to use when cross-compiling. Can be overridden via the environment.
CROSS_FEATURES ?= gnosis,slasher-lmdb,slasher-mdbx,jemalloc
# Cargo profile for Cross builds. Default is for local builds, CI uses an override.
CROSS_PROFILE ?= release
# List of features to use when running EF tests.
EF_TEST_FEATURES ?=
# List of features to use when running CI tests.
TEST_FEATURES ?=
# Cargo profile for regular builds.
PROFILE ?= release
# List of all hard forks. This list is used to set env variables for several tests so that
# they run for different forks.
FORKS=phase0 altair
FORKS=phase0 altair merge capella deneb electra
# Extra flags for Cargo
CARGO_INSTALL_EXTRA_FLAGS?=
# Builds the Lighthouse binary in release (optimized).
#
# Binaries will most likely be found in `./target/release`
install:
cargo install --path lighthouse --force --locked --features "$(FEATURES)"
cargo install --path lighthouse --force --locked \
--features "$(FEATURES)" \
--profile "$(PROFILE)" \
$(CARGO_INSTALL_EXTRA_FLAGS)
# Builds the lcli binary in release (optimized).
install-lcli:
cargo install --path lcli --force --locked --features "$(FEATURES)"
cargo install --path lcli --force --locked \
--features "$(FEATURES)" \
--profile "$(PROFILE)" \
$(CARGO_INSTALL_EXTRA_FLAGS)
# The following commands use `cross` to build a cross-compile.
#
@@ -40,13 +74,13 @@ install-lcli:
# optimized CPU functions that may not be available on some systems. This
# results in a more portable binary with ~20% slower BLS verification.
build-x86_64:
cross build --release --bin lighthouse --target x86_64-unknown-linux-gnu --features modern,gnosis
cross build --bin lighthouse --target x86_64-unknown-linux-gnu --features "modern,$(CROSS_FEATURES)" --profile "$(CROSS_PROFILE)" --locked
build-x86_64-portable:
cross build --release --bin lighthouse --target x86_64-unknown-linux-gnu --features portable,gnosis
cross build --bin lighthouse --target x86_64-unknown-linux-gnu --features "portable,$(CROSS_FEATURES)" --profile "$(CROSS_PROFILE)" --locked
build-aarch64:
cross build --release --bin lighthouse --target aarch64-unknown-linux-gnu --features gnosis
cross build --bin lighthouse --target aarch64-unknown-linux-gnu --features "$(CROSS_FEATURES)" --profile "$(CROSS_PROFILE)" --locked
build-aarch64-portable:
cross build --release --bin lighthouse --target aarch64-unknown-linux-gnu --features portable,gnosis
cross build --bin lighthouse --target aarch64-unknown-linux-gnu --features "portable,$(CROSS_FEATURES)" --profile "$(CROSS_PROFILE)" --locked
# Create a `.tar.gz` containing a binary for a specific target.
define tarball_release_binary
@@ -75,12 +109,26 @@ build-release-tarballs:
# Runs the full workspace tests in **release**, without downloading any additional
# test vectors.
test-release:
cargo test --workspace --release --exclude ef_tests --exclude beacon_chain
cargo test --workspace --release --features "$(TEST_FEATURES)" \
--exclude ef_tests --exclude beacon_chain --exclude slasher --exclude network
# Runs the full workspace tests in **release**, without downloading any additional
# test vectors, using nextest.
nextest-release:
cargo nextest run --workspace --release --features "$(TEST_FEATURES)" \
--exclude ef_tests --exclude beacon_chain --exclude slasher --exclude network
# Runs the full workspace tests in **debug**, without downloading any additional test
# vectors.
test-debug:
cargo test --workspace --exclude ef_tests --exclude beacon_chain
cargo test --workspace --features "$(TEST_FEATURES)" \
--exclude ef_tests --exclude beacon_chain --exclude network
# Runs the full workspace tests in **debug**, without downloading any additional test
# vectors, using nextest.
nextest-debug:
cargo nextest run --workspace --features "$(TEST_FEATURES)" \
--exclude ef_tests --exclude beacon_chain --exclude network
# Runs cargo-fmt (linter).
cargo-fmt:
@@ -88,34 +136,50 @@ cargo-fmt:
# Typechecks benchmark code
check-benches:
cargo check --workspace --benches
# Typechecks consensus code *without* allowing deprecated legacy arithmetic or metrics.
check-consensus:
cargo check -p state_processing --no-default-features
cargo check --workspace --benches --features "$(TEST_FEATURES)"
# Runs only the ef-test vectors.
run-ef-tests:
rm -rf $(EF_TESTS)/.accessed_file_log.txt
cargo test --release -p ef_tests --features "ef_tests"
cargo test --release -p ef_tests --features "ef_tests,fake_crypto"
cargo test --release -p ef_tests --features "ef_tests,milagro"
cargo test --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES)"
cargo test --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES),fake_crypto"
./$(EF_TESTS)/check_all_files_accessed.py $(EF_TESTS)/.accessed_file_log.txt $(EF_TESTS)/consensus-spec-tests
# Runs EF test vectors with nextest
nextest-run-ef-tests:
rm -rf $(EF_TESTS)/.accessed_file_log.txt
cargo nextest run --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES)"
cargo nextest run --release -p ef_tests --features "ef_tests,$(EF_TEST_FEATURES),fake_crypto"
./$(EF_TESTS)/check_all_files_accessed.py $(EF_TESTS)/.accessed_file_log.txt $(EF_TESTS)/consensus-spec-tests
# Run the tests in the `beacon_chain` crate for all known forks.
test-beacon-chain: $(patsubst %,test-beacon-chain-%,$(FORKS))
test-beacon-chain-%:
env FORK_NAME=$* cargo test --release --features fork_from_env -p beacon_chain
env FORK_NAME=$* cargo nextest run --release --features "fork_from_env,slasher/lmdb,$(TEST_FEATURES)" -p beacon_chain
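The `fork_from_env` feature causes the beacon chain test harness to pick the fork under test from the `FORK_NAME` environment variable exported by the pattern rule above. A minimal sketch of that lookup, assuming a hypothetical `fork_name_from_env` helper rather than the harness's actual parsing code:

fn fork_name_from_env() -> Option<String> {
    // `make test-beacon-chain-<fork>` runs `env FORK_NAME=$* cargo nextest ...`,
    // so the harness can read the fork under test from the environment.
    std::env::var("FORK_NAME").ok()
}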
# Run the tests in the `operation_pool` crate for all known forks.
test-op-pool: $(patsubst %,test-op-pool-%,$(FORKS))
test-op-pool-%:
env FORK_NAME=$* cargo test --release \
--features 'beacon_chain/fork_from_env'\
env FORK_NAME=$* cargo nextest run --release \
--features "beacon_chain/fork_from_env,$(TEST_FEATURES)"\
-p operation_pool
# Run the tests in the `network` crate for all known forks.
test-network: $(patsubst %,test-network-%,$(FORKS))
test-network-%:
env FORK_NAME=$* cargo nextest run --release \
--features "fork_from_env,$(TEST_FEATURES)" \
-p network
# Run the tests in the `slasher` crate for all supported database backends.
test-slasher:
cargo nextest run --release -p slasher --features "lmdb,$(TEST_FEATURES)"
cargo nextest run --release -p slasher --no-default-features --features "mdbx,$(TEST_FEATURES)"
cargo nextest run --release -p slasher --features "lmdb,mdbx,$(TEST_FEATURES)" # both backends enabled
# Runs only the tests/state_transition_vectors tests.
run-state-transition-tests:
make -C $(STATE_TRANSITION_VECTORS) test
@@ -123,22 +187,56 @@ run-state-transition-tests:
# Downloads and runs the EF test vectors.
test-ef: make-ef-tests run-ef-tests
# Downloads and runs the EF test vectors with nextest.
nextest-ef: make-ef-tests nextest-run-ef-tests
# Runs tests checking interop between Lighthouse and execution clients.
test-exec-engine:
make -C $(EXECUTION_ENGINE_INTEGRATION) test
# Runs the full workspace tests in release, without downloading any additional
# test vectors.
test: test-release
# Updates the CLI help text pages in the Lighthouse book, building with Docker.
cli:
docker run --rm --user=root \
-v ${PWD}:/home/runner/actions-runner/lighthouse sigmaprime/github-runner \
bash -c 'cd lighthouse && make && ./scripts/cli.sh'
# Updates the CLI help text pages in the Lighthouse book, building using local
# `cargo`.
cli-local:
make && ./scripts/cli.sh
# Runs the entire test suite, downloading test vectors if required.
test-full: cargo-fmt test-release test-debug test-ef
test-full: cargo-fmt test-release test-debug test-ef test-exec-engine
# Lints the code for bad style and potentially unsafe arithmetic using Clippy.
# Clippy lints are opt-in per-crate for now. By default, everything is allowed except for performance and correctness lints.
lint:
cargo clippy --workspace --tests -- \
cargo clippy --workspace --tests $(EXTRA_CLIPPY_OPTS) --features "$(TEST_FEATURES)" -- \
-D clippy::fn_to_numeric_cast_any \
-D clippy::manual_let_else \
-D warnings \
-A clippy::derive_partial_eq_without_eq \
-A clippy::from-over-into \
-A clippy::upper-case-acronyms \
-A clippy::vec-init-then-push
-A clippy::vec-init-then-push \
-A clippy::question-mark \
-A clippy::uninlined-format-args \
-A clippy::enum_variant_names
# Lints the code using Clippy and automatically fix some simple compiler warnings.
lint-fix:
EXTRA_CLIPPY_OPTS="--fix --allow-staged --allow-dirty" $(MAKE) lint
nightly-lint:
cp .github/custom/clippy.toml .
cargo +$(CLIPPY_PINNED_NIGHTLY) clippy --workspace --tests --release -- \
-A clippy::all \
-D clippy::disallowed_from_async
rm clippy.toml
# Runs the makefile in the `ef_tests` repo.
#
@@ -150,13 +248,17 @@ make-ef-tests:
# Verifies that crates compile with fuzzing features enabled
arbitrary-fuzz:
cargo check -p state_processing --features arbitrary-fuzz
cargo check -p slashing_protection --features arbitrary-fuzz
cargo check -p state_processing --features arbitrary-fuzz,$(TEST_FEATURES)
cargo check -p slashing_protection --features arbitrary-fuzz,$(TEST_FEATURES)
# Runs cargo audit (Audit Cargo.lock files for crates with security vulnerabilities reported to the RustSec Advisory Database)
audit:
audit: install-audit audit-CI
install-audit:
cargo install --force cargo-audit
cargo audit --ignore RUSTSEC-2020-0071 --ignore RUSTSEC-2020-0159 --ignore RUSTSEC-2022-0009
audit-CI:
cargo audit
# Runs `cargo vendor` to make sure dependencies can be vendored for packaging, reproducibility and archival purpose.
vendor:
@@ -164,7 +266,7 @@ vendor:
# Runs `cargo udeps` to check for unused dependencies
udeps:
cargo +$(PINNED_NIGHTLY) udeps --tests --all-targets --release
cargo +$(PINNED_NIGHTLY) udeps --tests --all-targets --release --features "$(TEST_FEATURES)"
# Performs a `cargo` clean and cleans the `ef_tests` directory.
clean:


@@ -1,18 +1,16 @@
# Lighthouse: Ethereum 2.0
# Lighthouse: Ethereum consensus client
An open-source Ethereum 2.0 client, written in Rust and maintained by Sigma Prime.
An open-source Ethereum consensus client, written in Rust and maintained by Sigma Prime.
[![Build Status]][Build Link] [![Book Status]][Book Link] [![Chat Badge]][Chat Link]
[![Book Status]][Book Link] [![Chat Badge]][Chat Link]
[Build Status]: https://github.com/sigp/lighthouse/workflows/test-suite/badge.svg?branch=stable
[Build Link]: https://github.com/sigp/lighthouse/actions
[Chat Badge]: https://img.shields.io/badge/chat-discord-%237289da
[Chat Link]: https://discord.gg/cyAszAh
[Book Status]:https://img.shields.io/badge/user--docs-unstable-informational
[Book Link]: https://lighthouse-book.sigmaprime.io
[stable]: https://github.com/sigp/lighthouse/tree/stable
[unstable]: https://github.com/sigp/lighthouse/tree/unstable
[blog]: https://lighthouse.sigmaprime.io
[blog]: https://lighthouse-blog.sigmaprime.io
[Documentation](https://lighthouse-book.sigmaprime.io)
@@ -22,7 +20,7 @@ An open-source Ethereum 2.0 client, written in Rust and maintained by Sigma Prim
Lighthouse is:
- Ready for use on Eth2 mainnet.
- Ready for use on Ethereum consensus mainnet.
- Fully open-source, licensed under Apache 2.0.
- Security-focused. Fuzzing techniques have been continuously applied and several external security reviews have been performed.
- Built in [Rust](https://www.rust-lang.org), a modern language providing unique safety guarantees and
@@ -30,20 +28,20 @@ Lighthouse is:
- Funded by various organisations, including Sigma Prime, the
Ethereum Foundation, ConsenSys, the Decentralization Foundation and private individuals.
- Actively involved in the specification and security analysis of the
Ethereum 2.0 specification.
Ethereum proof-of-stake consensus specification.
## Eth2 Deposit Contract
## Staking Deposit Contract
The Lighthouse team acknowledges
[`0x00000000219ab540356cBB839Cbe05303d7705Fa`](https://etherscan.io/address/0x00000000219ab540356cbb839cbe05303d7705fa)
as the canonical Eth2 deposit contract address.
as the canonical staking deposit contract address.
## Documentation
The [Lighthouse Book](https://lighthouse-book.sigmaprime.io) contains information for users and
developers.
The Lighthouse team maintains a blog at [lighthouse.sigmaprime.io][blog] which contains periodical
The Lighthouse team maintains a blog at [lighthouse-blog.sigmaprime.io][blog] which contains periodic
progress updates, roadmap insights and interesting findings.
## Branches
@@ -66,9 +64,9 @@ of the Lighthouse book.
## Contact
The best place for discussion is the [Lighthouse Discord
server](https://discord.gg/cyAszAh).
server](https://discord.gg/cyAszAh).
Sign up to the [Lighthouse Development Updates](http://eepurl.com/dh9Lvb) mailing list for email
Sign up to the [Lighthouse Development Updates](https://eepurl.com/dh9Lvb) mailing list for email
notifications about releases, network status and other important information.
Encrypt sensitive messages using our [PGP


@@ -1,29 +1,35 @@
[package]
name = "account_manager"
version = "0.3.5"
authors = ["Paul Hauner <paul@paulhauner.com>", "Luke Anderson <luke@sigmaprime.io>"]
edition = "2018"
authors = [
"Paul Hauner <paul@paulhauner.com>",
"Luke Anderson <luke@sigmaprime.io>",
]
edition = { workspace = true }
[dependencies]
bls = { path = "../crypto/bls" }
clap = "2.33.3"
types = { path = "../consensus/types" }
environment = { path = "../lighthouse/environment" }
eth2_network_config = { path = "../common/eth2_network_config" }
clap_utils = { path = "../common/clap_utils" }
directory = { path = "../common/directory" }
eth2_wallet = { path = "../crypto/eth2_wallet" }
bls = { workspace = true }
clap = { workspace = true }
types = { workspace = true }
environment = { workspace = true }
eth2_network_config = { workspace = true }
clap_utils = { workspace = true }
directory = { workspace = true }
eth2_wallet = { workspace = true }
eth2_wallet_manager = { path = "../common/eth2_wallet_manager" }
validator_dir = { path = "../common/validator_dir" }
tokio = { version = "1.14.0", features = ["full"] }
eth2_keystore = { path = "../crypto/eth2_keystore" }
account_utils = { path = "../common/account_utils" }
slashing_protection = { path = "../validator_client/slashing_protection" }
eth2 = {path = "../common/eth2"}
safe_arith = {path = "../consensus/safe_arith"}
slot_clock = { path = "../common/slot_clock" }
filesystem = { path = "../common/filesystem" }
sensitive_url = { path = "../common/sensitive_url" }
validator_dir = { workspace = true }
tokio = { workspace = true }
eth2_keystore = { workspace = true }
account_utils = { workspace = true }
slashing_protection = { workspace = true }
eth2 = { workspace = true }
safe_arith = { workspace = true }
slot_clock = { workspace = true }
filesystem = { workspace = true }
sensitive_url = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
slog = { workspace = true }
[dev-dependencies]
tempfile = "3.1.0"
tempfile = { workspace = true }


@@ -29,6 +29,6 @@ Simply run `./account_manager generate` to generate a new random private key,
which will be automatically saved to the correct directory.
If you prefer to use our "deterministic" keys for testing purposes, simply
run `./accounts_manager generate_deterministic -i <index>`, where `index` is
run `./account_manager generate_deterministic -i <index>`, where `index` is
the validator index for the key. This will reliably produce the same key each time
and save it to the directory.
and save it to the directory.


@@ -1,55 +1,7 @@
use account_utils::PlainText;
use account_utils::{read_input_from_user, strip_off_newlines};
use eth2_wallet::bip39::{Language, Mnemonic};
use std::fs;
use std::path::PathBuf;
use std::str::from_utf8;
use std::thread::sleep;
use std::time::Duration;
use account_utils::read_input_from_user;
pub const MNEMONIC_PROMPT: &str = "Enter the mnemonic phrase:";
pub const WALLET_NAME_PROMPT: &str = "Enter wallet name:";
pub fn read_mnemonic_from_cli(
mnemonic_path: Option<PathBuf>,
stdin_inputs: bool,
) -> Result<Mnemonic, String> {
let mnemonic = match mnemonic_path {
Some(path) => fs::read(&path)
.map_err(|e| format!("Unable to read {:?}: {:?}", path, e))
.and_then(|bytes| {
let bytes_no_newlines: PlainText = strip_off_newlines(bytes).into();
let phrase = from_utf8(bytes_no_newlines.as_ref())
.map_err(|e| format!("Unable to derive mnemonic: {:?}", e))?;
Mnemonic::from_phrase(phrase, Language::English).map_err(|e| {
format!(
"Unable to derive mnemonic from string {:?}: {:?}",
phrase, e
)
})
})?,
None => loop {
eprintln!();
eprintln!("{}", MNEMONIC_PROMPT);
let mnemonic = read_input_from_user(stdin_inputs)?;
match Mnemonic::from_phrase(mnemonic.as_str(), Language::English) {
Ok(mnemonic_m) => {
eprintln!("Valid mnemonic provided.");
eprintln!();
sleep(Duration::from_secs(1));
break mnemonic_m;
}
Err(_) => {
eprintln!("Invalid mnemonic");
}
}
},
};
Ok(mnemonic)
}
/// Reads in a wallet name from the user. If the `--wallet-name` flag is provided, use it. Otherwise
/// read from an interactive prompt using tty unless the `--stdin-inputs` flag is provided.
pub fn read_wallet_name_from_cli(


@@ -10,6 +10,7 @@ use types::EthSpec;
pub const CMD: &str = "account_manager";
pub const SECRETS_DIR_FLAG: &str = "secrets-dir";
pub const VALIDATOR_DIR_FLAG: &str = "validator-dir";
pub const VALIDATOR_DIR_FLAG_ALIAS: &str = "validators-dir";
pub const WALLETS_DIR_FLAG: &str = "wallets-dir";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
@@ -21,7 +22,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
}
/// Run the account manager, returning an error if the operation did not succeed.
pub fn run<T: EthSpec>(matches: &ArgMatches<'_>, env: Environment<T>) -> Result<(), String> {
pub fn run<E: EthSpec>(matches: &ArgMatches<'_>, env: Environment<E>) -> Result<(), String> {
match matches.subcommand() {
(wallet::CMD, Some(matches)) => wallet::cli_run(matches)?,
(validator::CMD, Some(matches)) => validator::cli_run(matches, env)?,


@@ -112,9 +112,9 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
)
}
pub fn cli_run<T: EthSpec>(
pub fn cli_run<E: EthSpec>(
matches: &ArgMatches,
mut env: Environment<T>,
env: Environment<E>,
validator_dir: PathBuf,
) -> Result<(), String> {
let spec = env.core_context().eth2_config.spec;


@@ -14,7 +14,7 @@ use slot_clock::{SlotClock, SystemTimeSlotClock};
use std::path::{Path, PathBuf};
use std::time::Duration;
use tokio::time::sleep;
use types::{ChainSpec, Epoch, EthSpec, Fork, VoluntaryExit};
use types::{ChainSpec, Epoch, EthSpec, VoluntaryExit};
pub const CMD: &str = "exit";
pub const KEYSTORE_FLAG: &str = "keystore";
@@ -27,7 +27,6 @@ pub const PASSWORD_PROMPT: &str = "Enter the keystore password";
pub const DEFAULT_BEACON_NODE: &str = "http://localhost:5052/";
pub const CONFIRMATION_PHRASE: &str = "Exit my validator";
pub const WEBSITE_URL: &str = "https://lighthouse-book.sigmaprime.io/voluntary-exit.html";
pub const PROMPT: &str = "WARNING: WITHDRAWING STAKED ETH IS NOT CURRENTLY POSSIBLE";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("exit")
@@ -124,10 +123,8 @@ async fn publish_voluntary_exit<E: EthSpec>(
) -> Result<(), String> {
let genesis_data = get_geneisis_data(client).await?;
let testnet_genesis_root = eth2_network_config
.beacon_state::<E>()
.as_ref()
.expect("network should have valid genesis state")
.genesis_validators_root();
.genesis_validators_root::<E>()?
.ok_or("Genesis state is unknown")?;
// Verify that the beacon node and validator being exited are on the same network.
if genesis_data.genesis_validators_root != testnet_genesis_root {
@@ -149,7 +146,6 @@ async fn publish_voluntary_exit<E: EthSpec>(
.ok_or("Failed to get current epoch. Please check your system time")?;
let validator_index = get_validator_index_for_exit(client, &keypair.pk, epoch, spec).await?;
let fork = get_beacon_state_fork(client).await?;
let voluntary_exit = VoluntaryExit {
epoch,
validator_index,
@@ -161,7 +157,6 @@ async fn publish_voluntary_exit<E: EthSpec>(
);
if !no_confirmation {
eprintln!("WARNING: THIS IS AN IRREVERSIBLE OPERATION\n");
eprintln!("{}\n", PROMPT);
eprintln!(
"PLEASE VISIT {} TO MAKE SURE YOU UNDERSTAND THE IMPLICATIONS OF A VOLUNTARY EXIT.",
WEBSITE_URL
@@ -177,12 +172,8 @@ async fn publish_voluntary_exit<E: EthSpec>(
if confirmation == CONFIRMATION_PHRASE {
// Sign and publish the voluntary exit to network
let signed_voluntary_exit = voluntary_exit.sign(
&keypair.sk,
&fork,
genesis_data.genesis_validators_root,
spec,
);
let signed_voluntary_exit =
voluntary_exit.sign(&keypair.sk, genesis_data.genesis_validators_root, spec);
client
.post_beacon_pool_voluntary_exits(&signed_voluntary_exit)
.await
@@ -212,8 +203,8 @@ async fn publish_voluntary_exit<E: EthSpec>(
let validator_data = get_validator_data(client, &keypair.pk).await?;
match validator_data.status {
ValidatorStatus::ActiveExiting => {
let exit_epoch = validator_data.validator.exit_epoch;
let withdrawal_epoch = validator_data.validator.withdrawable_epoch;
let exit_epoch = validator_data.validator.exit_epoch();
let withdrawal_epoch = validator_data.validator.withdrawable_epoch();
let current_epoch = get_current_epoch::<E>(genesis_data.genesis_time, spec)
.ok_or("Failed to get current epoch. Please check your system time")?;
eprintln!("Voluntary exit has been accepted into the beacon chain, but not yet finalized. \
@@ -233,7 +224,7 @@ async fn publish_voluntary_exit<E: EthSpec>(
ValidatorStatus::ExitedSlashed | ValidatorStatus::ExitedUnslashed => {
eprintln!(
"Validator has exited on epoch: {}",
validator_data.validator.exit_epoch
validator_data.validator.exit_epoch()
);
break;
}
@@ -259,7 +250,7 @@ async fn get_validator_index_for_exit(
ValidatorStatus::ActiveOngoing => {
let eligible_epoch = validator_data
.validator
.activation_epoch
.activation_epoch()
.safe_add(spec.shard_committee_period)
.map_err(|e| format!("Failed to calculate eligible epoch, validator activation epoch too high: {:?}", e))?;
@@ -320,16 +311,6 @@ async fn is_syncing(client: &BeaconNodeHttpClient) -> Result<bool, String> {
.is_syncing)
}
/// Get fork object for the current state by querying the beacon node client.
async fn get_beacon_state_fork(client: &BeaconNodeHttpClient) -> Result<Fork, String> {
Ok(client
.get_beacon_states_fork(StateId::Head)
.await
.map_err(|e| format!("Failed to get get fork: {:?}", e))?
.ok_or("Failed to get fork, state not found")?
.data)
}
/// Calculates the current epoch from the genesis time and current time.
fn get_current_epoch<E: EthSpec>(genesis_time: u64, spec: &ChainSpec) -> Option<Epoch> {
let slot_clock = SystemTimeSlotClock::new(
@@ -349,7 +330,7 @@ fn load_voting_keypair(
password_file_path: Option<&PathBuf>,
stdin_inputs: bool,
) -> Result<Keypair, String> {
let keystore = Keystore::from_json_file(&voting_keystore_path).map_err(|e| {
let keystore = Keystore::from_json_file(voting_keystore_path).map_err(|e| {
format!(
"Unable to read keystore JSON {:?}: {:?}",
voting_keystore_path, e


@@ -4,8 +4,8 @@ use account_utils::{
eth2_keystore::Keystore,
read_password_from_user,
validator_definitions::{
recursively_find_voting_keystores, ValidatorDefinition, ValidatorDefinitions,
CONFIG_FILENAME,
recursively_find_voting_keystores, PasswordStorage, ValidatorDefinition,
ValidatorDefinitions, CONFIG_FILENAME,
},
ZeroizeString,
};
@@ -176,7 +176,7 @@ pub fn cli_run(matches: &ArgMatches, validator_dir: PathBuf) -> Result<(), Strin
let password = match keystore_password_path.as_ref() {
Some(path) => {
let password_from_file: ZeroizeString = fs::read_to_string(&path)
let password_from_file: ZeroizeString = fs::read_to_string(path)
.map_err(|e| format!("Unable to read {:?}: {:?}", path, e))?
.into();
password_from_file.without_newlines()
@@ -256,7 +256,7 @@ pub fn cli_run(matches: &ArgMatches, validator_dir: PathBuf) -> Result<(), Strin
.ok_or_else(|| format!("Badly formatted file name: {:?}", src_keystore))?;
// Copy the keystore to the new location.
fs::copy(&src_keystore, &dest_keystore)
fs::copy(src_keystore, &dest_keystore)
.map_err(|e| format!("Unable to copy keystore: {:?}", e))?;
// Register with slashing protection.
@@ -277,9 +277,15 @@ pub fn cli_run(matches: &ArgMatches, validator_dir: PathBuf) -> Result<(), Strin
let suggested_fee_recipient = None;
let validator_def = ValidatorDefinition::new_keystore_with_password(
&dest_keystore,
password_opt,
password_opt
.map(PasswordStorage::ValidatorDefinitions)
.unwrap_or(PasswordStorage::None),
graffiti,
suggested_fee_recipient,
None,
None,
None,
None,
)
.map_err(|e| format!("Unable to create new validator definition: {:?}", e))?;


@@ -6,7 +6,7 @@ pub mod modify;
pub mod recover;
pub mod slashing_protection;
use crate::VALIDATOR_DIR_FLAG;
use crate::{VALIDATOR_DIR_FLAG, VALIDATOR_DIR_FLAG_ALIAS};
use clap::{App, Arg, ArgMatches};
use directory::{parse_path_or_default_with_flag, DEFAULT_VALIDATOR_DIR};
use environment::Environment;
@@ -21,6 +21,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.arg(
Arg::with_name(VALIDATOR_DIR_FLAG)
.long(VALIDATOR_DIR_FLAG)
.alias(VALIDATOR_DIR_FLAG_ALIAS)
.value_name("VALIDATOR_DIRECTORY")
.help(
"The path to search for validator directories. \
@@ -38,7 +39,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.subcommand(exit::cli_app())
}
pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<(), String> {
pub fn cli_run<E: EthSpec>(matches: &ArgMatches, env: Environment<E>) -> Result<(), String> {
let validator_base_dir = if matches.value_of("datadir").is_some() {
let path: PathBuf = clap_utils::parse_required(matches, "datadir")?;
path.join(DEFAULT_VALIDATOR_DIR)
@@ -48,7 +49,7 @@ pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<
eprintln!("validator-dir path: {:?}", validator_base_dir);
match matches.subcommand() {
(create::CMD, Some(matches)) => create::cli_run::<T>(matches, env, validator_base_dir),
(create::CMD, Some(matches)) => create::cli_run::<E>(matches, env, validator_base_dir),
(modify::CMD, Some(matches)) => modify::cli_run(matches, validator_base_dir),
(import::CMD, Some(matches)) => import::cli_run(matches, validator_base_dir),
(list::CMD, Some(_)) => list::cli_run(validator_base_dir),


@@ -1,10 +1,9 @@
use super::create::STORE_WITHDRAW_FLAG;
use crate::common::read_mnemonic_from_cli;
use crate::validator::create::COUNT_FLAG;
use crate::wallet::create::STDIN_INPUTS_FLAG;
use crate::SECRETS_DIR_FLAG;
use account_utils::eth2_keystore::{keypair_from_secret, Keystore, KeystoreBuilder};
use account_utils::random_password;
use account_utils::{random_password, read_mnemonic_from_cli};
use clap::{App, Arg, ArgMatches};
use directory::ensure_dir_exists;
use directory::{parse_path_or_default_with_flag, DEFAULT_SECRET_DIR};


@@ -7,7 +7,7 @@ use slashing_protection::{
use std::fs::File;
use std::path::PathBuf;
use std::str::FromStr;
use types::{BeaconState, Epoch, EthSpec, PublicKeyBytes, Slot};
use types::{Epoch, EthSpec, PublicKeyBytes, Slot};
pub const CMD: &str = "slashing-protection";
pub const IMPORT_CMD: &str = "import";
@@ -16,7 +16,6 @@ pub const EXPORT_CMD: &str = "export";
pub const IMPORT_FILE_ARG: &str = "IMPORT-FILE";
pub const EXPORT_FILE_ARG: &str = "EXPORT-FILE";
pub const MINIFY_FLAG: &str = "minify";
pub const PUBKEYS_FLAG: &str = "pubkeys";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
@@ -31,16 +30,6 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.value_name("FILE")
.help("The slashing protection interchange file to import (.json)"),
)
.arg(
Arg::with_name(MINIFY_FLAG)
.long(MINIFY_FLAG)
.takes_value(true)
.possible_values(&["false", "true"])
.help(
"Deprecated: Lighthouse no longer requires minification on import \
because it always minifies",
),
),
)
.subcommand(
App::new(EXPORT_CMD)
@@ -61,45 +50,26 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
comma-separated. All known keys will be exported if omitted",
),
)
.arg(
Arg::with_name(MINIFY_FLAG)
.long(MINIFY_FLAG)
.takes_value(true)
.default_value("false")
.possible_values(&["false", "true"])
.help(
"Minify the output file. This will make it smaller and faster to \
import, but not faster to generate.",
),
),
)
}
pub fn cli_run<T: EthSpec>(
pub fn cli_run<E: EthSpec>(
matches: &ArgMatches<'_>,
env: Environment<T>,
env: Environment<E>,
validator_base_dir: PathBuf,
) -> Result<(), String> {
let slashing_protection_db_path = validator_base_dir.join(SLASHING_PROTECTION_FILENAME);
let eth2_network_config = env
.eth2_network_config
.ok_or("Unable to get testnet configuration from the environment")?;
let genesis_validators_root = eth2_network_config
.beacon_state::<T>()
.map(|state: BeaconState<T>| state.genesis_validators_root())
.map_err(|e| {
format!(
"Unable to get genesis state, has genesis occurred? Detail: {:?}",
e
)
})?;
.genesis_validators_root::<E>()?
.ok_or_else(|| "Unable to get genesis state, has genesis occurred?".to_string())?;
match matches.subcommand() {
(IMPORT_CMD, Some(matches)) => {
let import_filename: PathBuf = clap_utils::parse_required(matches, IMPORT_FILE_ARG)?;
let minify: Option<bool> = clap_utils::parse_optional(matches, MINIFY_FLAG)?;
let import_file = File::open(&import_filename).map_err(|e| {
format!(
"Unable to open import file at {}: {:?}",
@@ -109,23 +79,10 @@ pub fn cli_run<T: EthSpec>(
})?;
eprint!("Loading JSON file into memory & deserializing");
let mut interchange = Interchange::from_json_reader(&import_file)
let interchange = Interchange::from_json_reader(&import_file)
.map_err(|e| format!("Error parsing file for import: {:?}", e))?;
eprintln!(" [done].");
if let Some(minify) = minify {
eprintln!(
"WARNING: --minify flag is deprecated and will be removed in a future release"
);
if minify {
eprint!("Minifying input file for faster loading");
interchange = interchange
.minify()
.map_err(|e| format!("Minification failed: {:?}", e))?;
eprintln!(" [done].");
}
}
let slashing_protection_database =
SlashingDatabase::open_or_create(&slashing_protection_db_path).map_err(|e| {
format!(
@@ -158,7 +115,7 @@ pub fn cli_run<T: EthSpec>(
InterchangeImportOutcome::Success { pubkey, summary } => {
eprintln!("- {:?}", pubkey);
eprintln!(
" - latest block: {}",
" - latest proposed block: {}",
display_slot(summary.max_block_slot)
);
eprintln!(
@@ -213,7 +170,6 @@ pub fn cli_run<T: EthSpec>(
}
(EXPORT_CMD, Some(matches)) => {
let export_filename: PathBuf = clap_utils::parse_required(matches, EXPORT_FILE_ARG)?;
let minify: bool = clap_utils::parse_required(matches, MINIFY_FLAG)?;
let selected_pubkeys = if let Some(pubkeys) =
clap_utils::parse_optional::<String>(matches, PUBKEYS_FLAG)?
@@ -244,17 +200,10 @@ pub fn cli_run<T: EthSpec>(
)
})?;
let mut interchange = slashing_protection_database
let interchange = slashing_protection_database
.export_interchange_info(genesis_validators_root, selected_pubkeys.as_deref())
.map_err(|e| format!("Error during export: {:?}", e))?;
if minify {
eprintln!("Minifying output file");
interchange = interchange
.minify()
.map_err(|e| format!("Unable to minify output: {:?}", e))?;
}
let output_file = File::create(export_filename)
.map_err(|e| format!("Error creating output file: {:?}", e))?;


@@ -159,7 +159,7 @@ pub fn create_wallet_from_mnemonic(
unknown => return Err(format!("--{} {} is not supported", TYPE_FLAG, unknown)),
};
let mgr = WalletManager::open(&wallet_base_dir)
let mgr = WalletManager::open(wallet_base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", WALLETS_DIR_FLAG, e))?;
let wallet_password: PlainText = match wallet_password_path {


@@ -10,7 +10,7 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
}
pub fn cli_run(wallet_base_dir: PathBuf) -> Result<(), String> {
let mgr = WalletManager::open(&wallet_base_dir)
let mgr = WalletManager::open(wallet_base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", WALLETS_DIR_FLAG, e))?;
for (name, _uuid) in mgr


@@ -1,6 +1,6 @@
use crate::common::read_mnemonic_from_cli;
use crate::wallet::create::{create_wallet_from_mnemonic, STDIN_INPUTS_FLAG};
use crate::wallet::create::{HD_TYPE, NAME_FLAG, PASSWORD_FLAG, TYPE_FLAG};
use account_utils::read_mnemonic_from_cli;
use clap::{App, Arg, ArgMatches};
use std::path::PathBuf;


@@ -1,8 +1,11 @@
[package]
name = "beacon_node"
version = "2.1.3"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com"]
edition = "2018"
version = "5.1.222-exp"
authors = [
"Paul Hauner <paul@paulhauner.com>",
"Age Manning <Age@AgeManning.com",
]
edition = { workspace = true }
[lib]
name = "beacon_node"
@@ -12,30 +15,36 @@ path = "src/lib.rs"
node_test_rig = { path = "../testing/node_test_rig" }
[features]
write_ssz_files = ["beacon_chain/write_ssz_files"] # Writes debugging .ssz files to /tmp during block processing.
write_ssz_files = [
"beacon_chain/write_ssz_files",
] # Writes debugging .ssz files to /tmp during block processing.
[dependencies]
eth2_config = { path = "../common/eth2_config" }
beacon_chain = { path = "beacon_chain" }
types = { path = "../consensus/types" }
store = { path = "./store" }
eth2_config = { workspace = true }
beacon_chain = { workspace = true }
types = { workspace = true }
store = { workspace = true }
client = { path = "client" }
clap = "2.33.3"
slog = { version = "2.5.2", features = ["max_level_trace", "release_max_level_trace"] }
dirs = "3.0.1"
directory = {path = "../common/directory"}
futures = "0.3.7"
environment = { path = "../lighthouse/environment" }
task_executor = { path = "../common/task_executor" }
genesis = { path = "genesis" }
eth2_network_config = { path = "../common/eth2_network_config" }
lighthouse_network = { path = "./lighthouse_network" }
serde = "1.0.116"
clap_utils = { path = "../common/clap_utils" }
hyper = "0.14.4"
lighthouse_version = { path = "../common/lighthouse_version" }
hex = "0.4.2"
slasher = { path = "../slasher" }
monitoring_api = { path = "../common/monitoring_api" }
sensitive_url = { path = "../common/sensitive_url" }
http_api = { path = "http_api" }
clap = { workspace = true }
slog = { workspace = true }
dirs = { workspace = true }
directory = { workspace = true }
futures = { workspace = true }
environment = { workspace = true }
task_executor = { workspace = true }
genesis = { workspace = true }
eth2_network_config = { workspace = true }
execution_layer = { workspace = true }
lighthouse_network = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
clap_utils = { workspace = true }
hyper = { workspace = true }
lighthouse_version = { workspace = true }
hex = { workspace = true }
slasher = { workspace = true }
monitoring_api = { workspace = true }
sensitive_url = { workspace = true }
http_api = { workspace = true }
unused_port = { workspace = true }
strum = { workspace = true }


@@ -2,7 +2,7 @@
name = "beacon_chain"
version = "0.2.0"
authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>"]
edition = "2018"
edition = { workspace = true }
autotests = false # using a single test binary compiles faster
[features]
@@ -10,55 +10,69 @@ default = ["participation_metrics"]
write_ssz_files = [] # Writes debugging .ssz files to /tmp during block processing.
participation_metrics = [] # Exposes validator participation metrics to Prometheus.
fork_from_env = [] # Initialise the harness chain spec from the FORK_NAME env variable
portable = ["bls/supranational-portable"]
test_backfill = []
[dev-dependencies]
maplit = "1.0.2"
environment = { path = "../../lighthouse/environment" }
maplit = { workspace = true }
environment = { workspace = true }
serde_json = { workspace = true }
[dependencies]
merkle_proof = { path = "../../consensus/merkle_proof" }
store = { path = "../store" }
parking_lot = "0.11.0"
lazy_static = "1.4.0"
smallvec = "1.6.1"
lighthouse_metrics = { path = "../../common/lighthouse_metrics" }
operation_pool = { path = "../operation_pool" }
rayon = "1.4.1"
serde = "1.0.116"
serde_derive = "1.0.116"
slog = { version = "2.5.2", features = ["max_level_trace"] }
sloggers = { version = "2.1.1", features = ["json"] }
slot_clock = { path = "../../common/slot_clock" }
eth2_hashing = "0.2.0"
eth2_ssz = "0.4.1"
eth2_ssz_types = "0.2.2"
eth2_ssz_derive = "0.3.0"
state_processing = { path = "../../consensus/state_processing" }
tree_hash = "0.4.1"
types = { path = "../../consensus/types" }
tokio = "1.14.0"
eth1 = { path = "../eth1" }
futures = "0.3.7"
genesis = { path = "../genesis" }
int_to_bytes = { path = "../../consensus/int_to_bytes" }
rand = "0.7.3"
proto_array = { path = "../../consensus/proto_array" }
lru = "0.7.1"
tempfile = "3.1.0"
bitvec = "0.19.3"
bls = { path = "../../crypto/bls" }
safe_arith = { path = "../../consensus/safe_arith" }
fork_choice = { path = "../../consensus/fork_choice" }
task_executor = { path = "../../common/task_executor" }
derivative = "2.1.1"
itertools = "0.10.0"
slasher = { path = "../../slasher" }
eth2 = { path = "../../common/eth2" }
strum = { version = "0.21.0", features = ["derive"] }
logging = { path = "../../common/logging" }
execution_layer = { path = "../execution_layer" }
sensitive_url = { path = "../../common/sensitive_url" }
superstruct = "0.4.0"
bitvec = { workspace = true }
bls = { workspace = true }
crossbeam-channel = { workspace = true }
derivative = { workspace = true }
eth1 = { workspace = true }
eth2 = { workspace = true }
eth2_network_config = { workspace = true }
ethereum_hashing = { workspace = true }
ethereum_serde_utils = { workspace = true }
ethereum_ssz = { workspace = true }
ethereum_ssz_derive = { workspace = true }
execution_layer = { workspace = true }
fork_choice = { workspace = true }
futures = { workspace = true }
genesis = { workspace = true }
hex = { workspace = true }
int_to_bytes = { workspace = true }
itertools = { workspace = true }
kzg = { workspace = true }
lazy_static = { workspace = true }
lighthouse_metrics = { workspace = true }
logging = { workspace = true }
lru = { workspace = true }
merkle_proof = { workspace = true }
oneshot_broadcast = { path = "../../common/oneshot_broadcast/" }
operation_pool = { workspace = true }
parking_lot = { workspace = true }
promise_cache = { path = "../../common/promise_cache" }
proto_array = { workspace = true }
rand = { workspace = true }
rayon = { workspace = true }
safe_arith = { workspace = true }
sensitive_url = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
slasher = { workspace = true }
slog = { workspace = true }
slog-async = { workspace = true }
slog-term = { workspace = true }
sloggers = { workspace = true }
slot_clock = { workspace = true }
smallvec = { workspace = true }
ssz_types = { workspace = true }
state_processing = { workspace = true }
store = { workspace = true }
strum = { workspace = true }
superstruct = { workspace = true }
task_executor = { workspace = true }
tempfile = { workspace = true }
tokio = { workspace = true }
tokio-stream = { workspace = true }
tree_hash = { workspace = true }
tree_hash_derive = { workspace = true }
types = { workspace = true }
[[test]]
name = "beacon_chain_tests"


@@ -0,0 +1,452 @@
use crate::{BeaconChain, BeaconChainError, BeaconChainTypes};
use eth2::lighthouse::attestation_rewards::{IdealAttestationRewards, TotalAttestationRewards};
use eth2::lighthouse::StandardAttestationRewards;
use eth2::types::ValidatorId;
use safe_arith::SafeArith;
use serde_utils::quoted_u64::Quoted;
use slog::debug;
use state_processing::common::base::{self, SqrtTotalActiveBalance};
use state_processing::per_epoch_processing::altair::{
process_inactivity_updates_slow, process_justification_and_finalization,
};
use state_processing::per_epoch_processing::base::rewards_and_penalties::{
get_attestation_component_delta, get_attestation_deltas_all, get_attestation_deltas_subset,
get_inactivity_penalty_delta, get_inclusion_delay_delta,
};
use state_processing::per_epoch_processing::base::validator_statuses::InclusionInfo;
use state_processing::per_epoch_processing::base::{
process_justification_and_finalization as process_justification_and_finalization_base,
TotalBalances, ValidatorStatus, ValidatorStatuses,
};
use state_processing::{
common::altair::BaseRewardPerIncrement,
common::update_progressive_balances_cache::initialize_progressive_balances_cache,
epoch_cache::initialize_epoch_cache,
per_epoch_processing::altair::rewards_and_penalties::get_flag_weight,
};
use std::collections::HashMap;
use store::consts::altair::{
PARTICIPATION_FLAG_WEIGHTS, TIMELY_HEAD_FLAG_INDEX, TIMELY_SOURCE_FLAG_INDEX,
TIMELY_TARGET_FLAG_INDEX,
};
use types::consts::altair::WEIGHT_DENOMINATOR;
use types::{BeaconState, Epoch, EthSpec, RelativeEpoch};
impl<T: BeaconChainTypes> BeaconChain<T> {
pub fn compute_attestation_rewards(
&self,
epoch: Epoch,
validators: Vec<ValidatorId>,
) -> Result<StandardAttestationRewards, BeaconChainError> {
debug!(self.log, "computing attestation rewards"; "epoch" => epoch, "validator_count" => validators.len());
// Get state
let state_slot = (epoch + 1).end_slot(T::EthSpec::slots_per_epoch());
let state_root = self
.state_root_at_slot(state_slot)?
.ok_or(BeaconChainError::NoStateForSlot(state_slot))?;
let state = self
.get_state(&state_root, Some(state_slot))?
.ok_or(BeaconChainError::MissingBeaconState(state_root))?;
match state {
BeaconState::Base(_) => self.compute_attestation_rewards_base(state, validators),
BeaconState::Altair(_)
| BeaconState::Merge(_)
| BeaconState::Capella(_)
| BeaconState::Deneb(_)
| BeaconState::Electra(_) => self.compute_attestation_rewards_altair(state, validators),
}
}
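// A minimal sketch (hypothetical helper, plain u64 instead of the real
// `Slot`/`Epoch` types) of the state-slot arithmetic above: rewards for
// `epoch` are read from the state at the last slot of `epoch + 1`.
fn sketch_rewards_state_slot(epoch: u64, slots_per_epoch: u64) -> u64 {
    // end_slot(e) == (e + 1) * slots_per_epoch - 1, applied here to `epoch + 1`.
    (epoch + 2) * slots_per_epoch - 1
}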
fn compute_attestation_rewards_base(
&self,
mut state: BeaconState<T::EthSpec>,
validators: Vec<ValidatorId>,
) -> Result<StandardAttestationRewards, BeaconChainError> {
let spec = &self.spec;
let mut validator_statuses = ValidatorStatuses::new(&state, spec)?;
validator_statuses.process_attestations(&state)?;
process_justification_and_finalization_base(
&state,
&validator_statuses.total_balances,
spec,
)?
.apply_changes_to_state(&mut state);
let ideal_rewards =
self.compute_ideal_rewards_base(&state, &validator_statuses.total_balances)?;
let indices_to_attestation_delta = if validators.is_empty() {
get_attestation_deltas_all(&state, &validator_statuses, spec)?
.into_iter()
.enumerate()
.collect()
} else {
let validator_indices = Self::validators_ids_to_indices(&mut state, validators)?;
get_attestation_deltas_subset(&state, &validator_statuses, &validator_indices, spec)?
};
let mut total_rewards = vec![];
for (index, delta) in indices_to_attestation_delta.into_iter() {
let head_delta = delta.head_delta;
let head = (head_delta.rewards as i64).safe_sub(head_delta.penalties as i64)?;
let target_delta = delta.target_delta;
let target = (target_delta.rewards as i64).safe_sub(target_delta.penalties as i64)?;
let source_delta = delta.source_delta;
let source = (source_delta.rewards as i64).safe_sub(source_delta.penalties as i64)?;
// No penalties associated with inclusion delay
let inclusion_delay = delta.inclusion_delay_delta.rewards;
let inactivity = delta.inactivity_penalty_delta.penalties.wrapping_neg() as i64;
let rewards = TotalAttestationRewards {
validator_index: index as u64,
head,
target,
source,
inclusion_delay: Some(Quoted {
value: inclusion_delay,
}),
inactivity,
};
total_rewards.push(rewards);
}
Ok(StandardAttestationRewards {
ideal_rewards,
total_rewards,
})
}
fn compute_attestation_rewards_altair(
&self,
mut state: BeaconState<T::EthSpec>,
validators: Vec<ValidatorId>,
) -> Result<StandardAttestationRewards, BeaconChainError> {
let spec = &self.spec;
// Build required caches.
initialize_epoch_cache(&mut state, spec)?;
initialize_progressive_balances_cache(&mut state, spec)?;
state.build_exit_cache(spec)?;
state.build_committee_cache(RelativeEpoch::Previous, spec)?;
state.build_committee_cache(RelativeEpoch::Current, spec)?;
// Calculate ideal_rewards
process_justification_and_finalization(&state)?.apply_changes_to_state(&mut state);
process_inactivity_updates_slow(&mut state, spec)?;
let previous_epoch = state.previous_epoch();
let mut ideal_rewards_hashmap = HashMap::new();
for flag_index in 0..PARTICIPATION_FLAG_WEIGHTS.len() {
let weight = get_flag_weight(flag_index)
.map_err(|_| BeaconChainError::AttestationRewardsError)?;
let unslashed_participating_balance = state
.progressive_balances_cache()
.previous_epoch_flag_attesting_balance(flag_index)?;
let unslashed_participating_increments =
unslashed_participating_balance.safe_div(spec.effective_balance_increment)?;
let total_active_balance = state.get_total_active_balance()?;
let active_increments =
total_active_balance.safe_div(spec.effective_balance_increment)?;
let base_reward_per_increment =
BaseRewardPerIncrement::new(total_active_balance, spec)?;
for effective_balance_eth in 1..=self.max_effective_balance_increment_steps()? {
let effective_balance =
effective_balance_eth.safe_mul(spec.effective_balance_increment)?;
let base_reward =
effective_balance_eth.safe_mul(base_reward_per_increment.as_u64())?;
let penalty = -(base_reward.safe_mul(weight)?.safe_div(WEIGHT_DENOMINATOR)? as i64);
let reward_numerator = base_reward
.safe_mul(weight)?
.safe_mul(unslashed_participating_increments)?;
let ideal_reward = reward_numerator
.safe_div(active_increments)?
.safe_div(WEIGHT_DENOMINATOR)?;
if !state.is_in_inactivity_leak(previous_epoch, spec)? {
ideal_rewards_hashmap
.insert((flag_index, effective_balance), (ideal_reward, penalty));
} else {
ideal_rewards_hashmap.insert((flag_index, effective_balance), (0, penalty));
}
}
}
// Calculate total_rewards
let mut total_rewards: Vec<TotalAttestationRewards> = Vec::new();
let validators = if validators.is_empty() {
Self::all_eligible_validator_indices(&state, previous_epoch)?
} else {
Self::validators_ids_to_indices(&mut state, validators)?
};
for &validator_index in &validators {
// Return 0s for unknown/inactive validator indices.
let Ok(validator) = state.get_validator(validator_index) else {
debug!(
self.log,
"No rewards for inactive/unknown validator";
"index" => validator_index,
"epoch" => previous_epoch
);
total_rewards.push(TotalAttestationRewards {
validator_index: validator_index as u64,
head: 0,
target: 0,
source: 0,
inclusion_delay: None,
inactivity: 0,
});
continue;
};
let previous_epoch_participation_flags = state
.previous_epoch_participation()?
.get(validator_index)
.ok_or(BeaconChainError::AttestationRewardsError)?;
let eligible = state.is_eligible_validator(previous_epoch, validator)?;
let mut head_reward = 0i64;
let mut target_reward = 0i64;
let mut source_reward = 0i64;
let mut inactivity_penalty = 0i64;
if eligible {
let effective_balance = validator.effective_balance();
for flag_index in 0..PARTICIPATION_FLAG_WEIGHTS.len() {
let (ideal_reward, penalty) = ideal_rewards_hashmap
.get(&(flag_index, effective_balance))
.ok_or(BeaconChainError::AttestationRewardsError)?;
let voted_correctly = !validator.slashed()
&& previous_epoch_participation_flags.has_flag(flag_index)?;
if voted_correctly {
if flag_index == TIMELY_HEAD_FLAG_INDEX {
head_reward += *ideal_reward as i64;
} else if flag_index == TIMELY_TARGET_FLAG_INDEX {
target_reward += *ideal_reward as i64;
} else if flag_index == TIMELY_SOURCE_FLAG_INDEX {
source_reward += *ideal_reward as i64;
}
} else if flag_index == TIMELY_HEAD_FLAG_INDEX {
head_reward = 0;
} else if flag_index == TIMELY_TARGET_FLAG_INDEX {
target_reward = *penalty;
let penalty_numerator = effective_balance
.safe_mul(state.get_inactivity_score(validator_index)?)?;
let penalty_denominator = spec.inactivity_score_bias.safe_mul(
spec.inactivity_penalty_quotient_for_fork(state.fork_name_unchecked()),
)?;
inactivity_penalty =
-(penalty_numerator.safe_div(penalty_denominator)? as i64);
} else if flag_index == TIMELY_SOURCE_FLAG_INDEX {
source_reward = *penalty;
}
}
}
total_rewards.push(TotalAttestationRewards {
validator_index: validator_index as u64,
head: head_reward,
target: target_reward,
source: source_reward,
inclusion_delay: None,
inactivity: inactivity_penalty,
});
}
// Convert hashmap to vector
let mut ideal_rewards: Vec<IdealAttestationRewards> = ideal_rewards_hashmap
.iter()
.map(
|((flag_index, effective_balance), (ideal_reward, _penalty))| {
(flag_index, effective_balance, ideal_reward)
},
)
.fold(
HashMap::new(),
|mut acc, (flag_index, &effective_balance, ideal_reward)| {
let entry = acc
.entry(effective_balance)
.or_insert(IdealAttestationRewards {
effective_balance,
head: 0,
target: 0,
source: 0,
inclusion_delay: None,
inactivity: 0,
});
match *flag_index {
TIMELY_SOURCE_FLAG_INDEX => entry.source += ideal_reward,
TIMELY_TARGET_FLAG_INDEX => entry.target += ideal_reward,
TIMELY_HEAD_FLAG_INDEX => entry.head += ideal_reward,
_ => {}
}
acc
},
)
.into_values()
.collect::<Vec<IdealAttestationRewards>>();
ideal_rewards.sort_by(|a, b| a.effective_balance.cmp(&b.effective_balance));
Ok(StandardAttestationRewards {
ideal_rewards,
total_rewards,
})
}
fn max_effective_balance_increment_steps(&self) -> Result<u64, BeaconChainError> {
let spec = &self.spec;
let max_steps = spec
.max_effective_balance
.safe_div(spec.effective_balance_increment)?;
Ok(max_steps)
}
fn all_eligible_validator_indices(
state: &BeaconState<T::EthSpec>,
previous_epoch: Epoch,
) -> Result<Vec<usize>, BeaconChainError> {
state
.validators()
.iter()
.enumerate()
.filter_map(|(i, validator)| {
state
.is_eligible_validator(previous_epoch, validator)
.map(|eligible| eligible.then_some(i))
.map_err(BeaconChainError::BeaconStateError)
.transpose()
})
.collect()
}
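// Note on the `filter_map` above: each `Result<bool, _>` from
// `is_eligible_validator` becomes an `Option<Result<usize, _>>` via
// `then_some` + `transpose`, so `Ok(true)` yields the index, `Ok(false)`
// is filtered out, and errors are preserved for `collect` to surface.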
fn validators_ids_to_indices(
state: &mut BeaconState<T::EthSpec>,
validators: Vec<ValidatorId>,
) -> Result<Vec<usize>, BeaconChainError> {
let indices = validators
.into_iter()
.map(|validator| match validator {
ValidatorId::Index(i) => Ok(i as usize),
ValidatorId::PublicKey(pubkey) => state
.get_validator_index(&pubkey)?
.ok_or(BeaconChainError::ValidatorPubkeyUnknown(pubkey)),
})
.collect::<Result<Vec<_>, _>>()?;
Ok(indices)
}
fn compute_ideal_rewards_base(
&self,
state: &BeaconState<T::EthSpec>,
total_balances: &TotalBalances,
) -> Result<Vec<IdealAttestationRewards>, BeaconChainError> {
let spec = &self.spec;
let previous_epoch = state.previous_epoch();
let finality_delay = previous_epoch
.safe_sub(state.finalized_checkpoint().epoch)?
.as_u64();
let ideal_validator_status = ValidatorStatus {
is_previous_epoch_attester: true,
is_slashed: false,
inclusion_info: Some(InclusionInfo {
delay: 1,
..Default::default()
}),
..Default::default()
};
let mut ideal_attestation_rewards_list = Vec::new();
let sqrt_total_active_balance = SqrtTotalActiveBalance::new(total_balances.current_epoch());
for effective_balance_step in 1..=self.max_effective_balance_increment_steps()? {
let effective_balance =
effective_balance_step.safe_mul(spec.effective_balance_increment)?;
let base_reward =
base::get_base_reward(effective_balance, sqrt_total_active_balance, spec)?;
// compute ideal head rewards
let head = get_attestation_component_delta(
true,
total_balances.previous_epoch_head_attesters(),
total_balances,
base_reward,
finality_delay,
spec,
)?
.rewards;
// compute ideal target rewards
let target = get_attestation_component_delta(
true,
total_balances.previous_epoch_target_attesters(),
total_balances,
base_reward,
finality_delay,
spec,
)?
.rewards;
// compute ideal source rewards
let source = get_attestation_component_delta(
true,
total_balances.previous_epoch_attesters(),
total_balances,
base_reward,
finality_delay,
spec,
)?
.rewards;
// compute ideal inclusion delay rewards
let inclusion_delay =
get_inclusion_delay_delta(&ideal_validator_status, base_reward, spec)?
.0
.rewards;
// compute inactivity penalty
let inactivity = get_inactivity_penalty_delta(
&ideal_validator_status,
base_reward,
finality_delay,
spec,
)?
.penalties
.wrapping_neg() as i64;
let ideal_attestation_rewards = IdealAttestationRewards {
effective_balance,
head,
target,
source,
inclusion_delay: Some(Quoted {
value: inclusion_delay,
}),
inactivity,
};
ideal_attestation_rewards_list.push(ideal_attestation_rewards);
}
Ok(ideal_attestation_rewards_list)
}
}
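For intuition, the per-flag ideal reward computed above reduces to a single integer expression. A minimal sketch under assumed plain-u64 inputs (the real code uses `safe_arith` wrappers and returns errors rather than risking overflow):

fn sketch_ideal_reward(
    base_reward: u64,
    weight: u64,
    participating_increments: u64,
    active_increments: u64,
    weight_denominator: u64,
) -> u64 {
    // reward_numerator = base_reward * weight * participating_increments;
    // ideal_reward = reward_numerator / active_increments / WEIGHT_DENOMINATOR.
    base_reward * weight * participating_increments / active_increments / weight_denominator
}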


@@ -0,0 +1,107 @@
use crate::{BeaconChain, BeaconChainTypes};
use slog::{debug, error};
use slot_clock::SlotClock;
use std::sync::Arc;
use task_executor::TaskExecutor;
use tokio::time::sleep;
use types::{EthSpec, Slot};
/// Don't run the attestation simulator if the head slot is this many epochs
/// behind the wall-clock slot.
const SYNCING_TOLERANCE_EPOCHS: u64 = 2;
/// Spawns a routine which produces an unaggregated attestation at every slot.
///
/// This routine will run once per slot.
pub fn start_attestation_simulator_service<T: BeaconChainTypes>(
executor: TaskExecutor,
chain: Arc<BeaconChain<T>>,
) {
executor.clone().spawn(
async move { attestation_simulator_service(executor, chain).await },
"attestation_simulator_service",
);
}
/// Loop indefinitely, calling `BeaconChain::produce_unaggregated_attestation` every 4s into each slot.
async fn attestation_simulator_service<T: BeaconChainTypes>(
executor: TaskExecutor,
chain: Arc<BeaconChain<T>>,
) {
let slot_duration = chain.slot_clock.slot_duration();
let additional_delay = slot_duration / 3;
loop {
match chain.slot_clock.duration_to_next_slot() {
Some(duration) => {
sleep(duration + additional_delay).await;
debug!(
chain.log,
"Simulating unagg. attestation production";
);
// Run the task in the executor
let inner_chain = chain.clone();
executor.spawn(
async move {
if let Ok(current_slot) = inner_chain.slot() {
produce_unaggregated_attestation(inner_chain, current_slot);
}
},
"attestation_simulator_service",
);
}
None => {
error!(chain.log, "Failed to read slot clock");
// If we can't read the slot clock, just wait another slot.
sleep(slot_duration).await;
}
};
}
}
pub fn produce_unaggregated_attestation<T: BeaconChainTypes>(
chain: Arc<BeaconChain<T>>,
current_slot: Slot,
) {
// Don't run the attestation simulator when the head slot is far behind the
// wall-clock slot.
//
// This helps prevent the simulator from becoming a burden by computing
// committees from old states.
let syncing_tolerance_slots = SYNCING_TOLERANCE_EPOCHS * T::EthSpec::slots_per_epoch();
if chain.best_slot() + syncing_tolerance_slots < current_slot {
return;
}
// Attestations for different committees are practically identical apart from the
// committee index field, and committee 0 is guaranteed to exist, so there's no need to load the committee.
let beacon_committee_index = 0;
// Store the unaggregated attestation in the validator monitor for later processing
match chain.produce_unaggregated_attestation(current_slot, beacon_committee_index) {
Ok(unaggregated_attestation) => {
let data = &unaggregated_attestation.data;
debug!(
chain.log,
"Produce unagg. attestation";
"attestation_source" => data.source.root.to_string(),
"attestation_target" => data.target.root.to_string(),
);
chain
.validator_monitor
.write()
.set_unaggregated_attestation(unaggregated_attestation);
}
Err(e) => {
debug!(
chain.log,
"Failed to simulate attestation";
"error" => ?e
);
}
}
}
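Note the timing above: the service sleeps until the next slot boundary plus `slot_duration / 3`, which with mainnet's 12-second slots is the "4s into each slot" the doc comment refers to. A minimal sketch of that offset:

use std::time::Duration;

fn attestation_simulator_delay(slot_duration: Duration) -> Duration {
    // One third of a slot; 4 seconds for a 12-second slot.
    slot_duration / 3
}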


@@ -27,13 +27,16 @@
//! ▼
//! impl VerifiedAttestation
//! ```
// Ignore this lint for `AttestationSlashInfo` which is of comparable size to the non-error types it
// is returned alongside.
#![allow(clippy::result_large_err)]
mod batch;
use crate::{
beacon_chain::{MAXIMUM_GOSSIP_CLOCK_DISPARITY, VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT},
metrics,
observed_aggregates::ObserveOutcome,
observed_attesters::Error as ObservedAttestersError,
beacon_chain::VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT, metrics,
observed_aggregates::ObserveOutcome, observed_attesters::Error as ObservedAttestersError,
BeaconChain, BeaconChainError, BeaconChainTypes,
};
use bls::verify_signature_sets;
@@ -52,8 +55,8 @@ use std::borrow::Cow;
use strum::AsRefStr;
use tree_hash::TreeHash;
use types::{
Attestation, BeaconCommittee, CommitteeIndex, Epoch, EthSpec, Hash256, IndexedAttestation,
SelectionProof, SignedAggregateAndProof, Slot, SubnetId,
Attestation, BeaconCommittee, ChainSpec, CommitteeIndex, Epoch, EthSpec, ForkName, Hash256,
IndexedAttestation, SelectionProof, SignedAggregateAndProof, Slot, SubnetId,
};
pub use batch::{batch_verify_aggregated_attestations, batch_verify_unaggregated_attestations};
@@ -112,14 +115,14 @@ pub enum Error {
///
/// The peer has sent an invalid message.
AggregatorPubkeyUnknown(u64),
/// The attestation has been seen before; either in a block, on the gossip network or from a
/// local validator.
/// The attestation or a superset of this attestation's aggregations bits for the same data
/// has been seen before; either in a block, on the gossip network or from a local validator.
///
/// ## Peer scoring
///
/// It's unclear if this attestation is valid, however we have already observed it and do not
/// need to observe it again.
AttestationAlreadyKnown(Hash256),
AttestationSupersetKnown(Hash256),
/// There has already been an aggregation observed for this validator, we refuse to process a
/// second.
///
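// For intuition on `AttestationSupersetKnown` above: an incoming aggregate is
// redundant when a previously observed aggregate for the same `AttestationData`
// covers all of its aggregation bits. A minimal sketch, with plain bool slices
// standing in for the real `BitList` type:
fn is_subset(candidate: &[bool], observed: &[bool]) -> bool {
    candidate.len() == observed.len()
        && candidate.iter().zip(observed).all(|(c, o)| !*c || *o)
}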
@@ -263,7 +266,7 @@ enum CheckAttestationSignature {
struct IndexedAggregatedAttestation<'a, T: BeaconChainTypes> {
signed_aggregate: &'a SignedAggregateAndProof<T::EthSpec>,
indexed_attestation: IndexedAttestation<T::EthSpec>,
attestation_root: Hash256,
attestation_data_root: Hash256,
}
/// Wraps a `Attestation` that has been verified up until the point that an `IndexedAttestation` can
@@ -318,10 +321,17 @@ impl<'a, T: BeaconChainTypes> Clone for IndexedUnaggregatedAttestation<'a, T> {
/// A helper trait implemented on wrapper types that can be progressed to a state where they can be
/// verified for application to fork choice.
pub trait VerifiedAttestation<T: BeaconChainTypes> {
pub trait VerifiedAttestation<T: BeaconChainTypes>: Sized {
fn attestation(&self) -> &Attestation<T::EthSpec>;
fn indexed_attestation(&self) -> &IndexedAttestation<T::EthSpec>;
// Inefficient default implementation. This is overridden for gossip verified attestations.
fn into_attestation_and_indices(self) -> (Attestation<T::EthSpec>, Vec<u64>) {
let attestation = self.attestation().clone();
let attesting_indices = self.indexed_attestation().attesting_indices.clone().into();
(attestation, attesting_indices)
}
}
impl<'a, T: BeaconChainTypes> VerifiedAttestation<T> for VerifiedAggregatedAttestation<'a, T> {
@@ -442,7 +452,7 @@ impl<'a, T: BeaconChainTypes> IndexedAggregatedAttestation<'a, T> {
// MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance).
//
// We do not queue future attestations for later processing.
verify_propagation_slot_range(&chain.slot_clock, attestation)?;
verify_propagation_slot_range(&chain.slot_clock, attestation, &chain.spec)?;
// Check the attestation's epoch matches its target.
if attestation.data.slot.epoch(T::EthSpec::slots_per_epoch())
@@ -455,14 +465,17 @@ impl<'a, T: BeaconChainTypes> IndexedAggregatedAttestation<'a, T> {
}
// Ensure the valid aggregated attestation has not already been seen locally.
let attestation_root = attestation.tree_hash_root();
let attestation_data = &attestation.data;
let attestation_data_root = attestation_data.tree_hash_root();
if chain
.observed_attestations
.write()
.is_known(attestation, attestation_root)
.is_known_subset(attestation, attestation_data_root)
.map_err(|e| Error::BeaconChainError(e.into()))?
{
return Err(Error::AttestationAlreadyKnown(attestation_root));
metrics::inc_counter(&metrics::AGGREGATED_ATTESTATION_SUBSETS);
return Err(Error::AttestationSupersetKnown(attestation_data_root));
}
let aggregator_index = signed_aggregate.message.aggregator_index;
@@ -508,7 +521,7 @@ impl<'a, T: BeaconChainTypes> IndexedAggregatedAttestation<'a, T> {
if attestation.aggregation_bits.is_zero() {
Err(Error::EmptyAggregationBitfield)
} else {
Ok(attestation_root)
Ok(attestation_data_root)
}
}
@@ -521,13 +534,13 @@ impl<'a, T: BeaconChainTypes> IndexedAggregatedAttestation<'a, T> {
let attestation = &signed_aggregate.message.aggregate;
let aggregator_index = signed_aggregate.message.aggregator_index;
let attestation_root = match Self::verify_early_checks(signed_aggregate, chain) {
let attestation_data_root = match Self::verify_early_checks(signed_aggregate, chain) {
Ok(root) => root,
Err(e) => return Err(SignatureNotChecked(&signed_aggregate.message.aggregate, e)),
};
let indexed_attestation =
match map_attestation_committee(chain, attestation, |(committee, _)| {
let get_indexed_attestation_with_committee =
|(committee, _): (BeaconCommittee, CommitteesPerSlot)| {
// Note: this clones the signature which is known to be a relatively slow operation.
//
// Future optimizations should remove this clone.
@@ -548,15 +561,21 @@ impl<'a, T: BeaconChainTypes> IndexedAggregatedAttestation<'a, T> {
get_indexed_attestation(committee.committee, attestation)
.map_err(|e| BeaconChainError::from(e).into())
}) {
Ok(indexed_attestation) => indexed_attestation,
Err(e) => return Err(SignatureNotChecked(&signed_aggregate.message.aggregate, e)),
};
let indexed_attestation = match map_attestation_committee(
chain,
attestation,
get_indexed_attestation_with_committee,
) {
Ok(indexed_attestation) => indexed_attestation,
Err(e) => return Err(SignatureNotChecked(&signed_aggregate.message.aggregate, e)),
};
Ok(IndexedAggregatedAttestation {
signed_aggregate,
indexed_attestation,
attestation_root,
attestation_data_root,
})
}
}
@@ -565,7 +584,7 @@ impl<'a, T: BeaconChainTypes> VerifiedAggregatedAttestation<'a, T> {
/// Run the checks that happen after the indexed attestation and signature have been checked.
fn verify_late_checks(
signed_aggregate: &SignedAggregateAndProof<T::EthSpec>,
attestation_root: Hash256,
attestation_data_root: Hash256,
chain: &BeaconChain<T>,
) -> Result<(), Error> {
let attestation = &signed_aggregate.message.aggregate;
@@ -575,13 +594,14 @@ impl<'a, T: BeaconChainTypes> VerifiedAggregatedAttestation<'a, T> {
//
// It's important to double check that the attestation is not already known, otherwise two
// attestations processed at the same time could be published.
if let ObserveOutcome::AlreadyKnown = chain
if let ObserveOutcome::Subset = chain
.observed_attestations
.write()
.observe_item(attestation, Some(attestation_root))
.observe_item(attestation, Some(attestation_data_root))
.map_err(|e| Error::BeaconChainError(e.into()))?
{
-return Err(Error::AttestationAlreadyKnown(attestation_root));
+metrics::inc_counter(&metrics::AGGREGATED_ATTESTATION_SUBSETS);
+return Err(Error::AttestationSupersetKnown(attestation_data_root));
}
// Observe the aggregator so we don't process another aggregate from them.
@@ -641,7 +661,7 @@ impl<'a, T: BeaconChainTypes> VerifiedAggregatedAttestation<'a, T> {
let IndexedAggregatedAttestation {
signed_aggregate,
indexed_attestation,
-attestation_root,
+attestation_data_root,
} = signed_aggregate;
match check_signature {
@@ -665,7 +685,7 @@ impl<'a, T: BeaconChainTypes> VerifiedAggregatedAttestation<'a, T> {
CheckAttestationSignature::No => (),
};
-if let Err(e) = Self::verify_late_checks(signed_aggregate, attestation_root, chain) {
+if let Err(e) = Self::verify_late_checks(signed_aggregate, attestation_data_root, chain) {
return Err(SignatureValid(indexed_attestation, e));
}
@@ -706,7 +726,7 @@ impl<'a, T: BeaconChainTypes> IndexedUnaggregatedAttestation<'a, T> {
// MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance).
//
// We do not queue future attestations for later processing.
-verify_propagation_slot_range(&chain.slot_clock, attestation)?;
+verify_propagation_slot_range(&chain.slot_clock, attestation, &chain.spec)?;
// Check to ensure that the attestation is "unaggregated". I.e., it has exactly one
// aggregation bit set.
@@ -961,7 +981,6 @@ impl<'a, T: BeaconChainTypes> VerifiedUnaggregatedAttestation<'a, T> {
}
/// Returns `Ok(())` if the `attestation.data.beacon_block_root` is known to this chain.
-/// You can use this `shuffling_id` to read from the shuffling cache.
///
/// The block root may not be known for two reasons:
///
@@ -977,8 +996,8 @@ fn verify_head_block_is_known<T: BeaconChainTypes>(
max_skip_slots: Option<u64>,
) -> Result<ProtoBlock, Error> {
let block_opt = chain
-.fork_choice
-.read()
+.canonical_head
+.fork_choice_read_lock()
.get_block(&attestation.data.beacon_block_root)
.or_else(|| {
chain
@@ -1022,11 +1041,11 @@ fn verify_head_block_is_known<T: BeaconChainTypes>(
pub fn verify_propagation_slot_range<S: SlotClock, E: EthSpec>(
slot_clock: &S,
attestation: &Attestation<E>,
+spec: &ChainSpec,
) -> Result<(), Error> {
let attestation_slot = attestation.data.slot;
let latest_permissible_slot = slot_clock
-.now_with_future_tolerance(MAXIMUM_GOSSIP_CLOCK_DISPARITY)
+.now_with_future_tolerance(spec.maximum_gossip_clock_disparity())
.ok_or(BeaconChainError::UnableToReadSlot)?;
if attestation_slot > latest_permissible_slot {
return Err(Error::FutureSlot {
@@ -1036,10 +1055,21 @@ pub fn verify_propagation_slot_range<S: SlotClock, E: EthSpec>(
}
// Taking advantage of saturating subtraction on `Slot`.
-let earliest_permissible_slot = slot_clock
-.now_with_past_tolerance(MAXIMUM_GOSSIP_CLOCK_DISPARITY)
+let one_epoch_prior = slot_clock
+.now_with_past_tolerance(spec.maximum_gossip_clock_disparity())
.ok_or(BeaconChainError::UnableToReadSlot)?
- E::slots_per_epoch();
+let current_fork =
+spec.fork_name_at_slot::<E>(slot_clock.now().ok_or(BeaconChainError::UnableToReadSlot)?);
+let earliest_permissible_slot = match current_fork {
+ForkName::Base | ForkName::Altair | ForkName::Merge | ForkName::Capella => one_epoch_prior,
+// EIP-7045
+ForkName::Deneb | ForkName::Electra => one_epoch_prior
+.epoch(E::slots_per_epoch())
+.start_slot(E::slots_per_epoch()),
+};
if attestation_slot < earliest_permissible_slot {
return Err(Error::PastSlot {
attestation_slot,
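Under EIP-7045 (Deneb onwards) attestations remain valid for gossip throughout the current and previous epoch, so the earliest permissible slot is rounded down from "one epoch ago" to the start of that epoch. A rough illustration of the two rules in plain slot arithmetic (assuming 32-slot epochs, not the `Slot`/`Epoch` types used above):

    const SLOTS_PER_EPOCH: u64 = 32;

    /// Pre-Deneb rule: exactly one epoch's worth of slots back.
    fn earliest_pre_deneb(current_slot: u64) -> u64 {
        current_slot.saturating_sub(SLOTS_PER_EPOCH)
    }

    /// Deneb / EIP-7045 rule: round "one epoch ago" down to its epoch start.
    fn earliest_deneb(current_slot: u64) -> u64 {
        let one_epoch_prior = current_slot.saturating_sub(SLOTS_PER_EPOCH);
        (one_epoch_prior / SLOTS_PER_EPOCH) * SLOTS_PER_EPOCH
    }

    fn main() {
        // At slot 100 (the fifth slot of epoch 3):
        assert_eq!(earliest_pre_deneb(100), 68);
        // EIP-7045 widens the window back to the start of epoch 2 (slot 64).
        assert_eq!(earliest_deneb(100), 64);
    }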
@@ -1091,13 +1121,13 @@ pub fn verify_attestation_signature<T: BeaconChainTypes>(
/// Verifies that the `attestation.data.target.root` is indeed the target root of the block at
/// `attestation.data.beacon_block_root`.
-pub fn verify_attestation_target_root<T: EthSpec>(
+pub fn verify_attestation_target_root<E: EthSpec>(
head_block: &ProtoBlock,
-attestation: &Attestation<T>,
+attestation: &Attestation<E>,
) -> Result<(), Error> {
// Check the attestation target root.
-let head_block_epoch = head_block.slot.epoch(T::slots_per_epoch());
-let attestation_epoch = attestation.data.slot.epoch(T::slots_per_epoch());
+let head_block_epoch = head_block.slot.epoch(E::slots_per_epoch());
+let attestation_epoch = attestation.data.slot.epoch(E::slots_per_epoch());
if head_block_epoch > attestation_epoch {
// The epoch references an invalid head block from a future epoch.
//
@@ -1246,7 +1276,10 @@ where
// processing an attestation that does not include our latest finalized block in its chain.
//
// We do not delay consideration for later, we simply drop the attestation.
-if !chain.fork_choice.read().contains_block(&target.root)
+if !chain
+.canonical_head
+.fork_choice_read_lock()
+.contains_block(&target.root)
&& !chain.early_attester_cache.contains_block(target.root)
{
return Err(Error::UnknownTargetRoot(target.root));


@@ -65,14 +65,15 @@ where
.try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or(BeaconChainError::ValidatorPubkeyCacheLockTimeout)?;
-let fork = chain.with_head(|head| Ok::<_, BeaconChainError>(head.beacon_state.fork()))?;
let mut signature_sets = Vec::with_capacity(num_indexed * 3);
// Iterate, flattening to get only the `Ok` values.
for indexed in indexing_results.iter().flatten() {
let signed_aggregate = &indexed.signed_aggregate;
let indexed_attestation = &indexed.indexed_attestation;
+let fork = chain
+.spec
+.fork_at_epoch(indexed_attestation.data.target.epoch);
signature_sets.push(
signed_aggregate_selection_proof_signature_set(
@@ -174,13 +175,14 @@ where
.try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or(BeaconChainError::ValidatorPubkeyCacheLockTimeout)?;
-let fork = chain.with_head(|head| Ok::<_, BeaconChainError>(head.beacon_state.fork()))?;
let mut signature_sets = Vec::with_capacity(num_partially_verified);
// Iterate, flattening to get only the `Ok` values.
for partially_verified in partial_results.iter().flatten() {
let indexed_attestation = &partially_verified.indexed_attestation;
+let fork = chain
+.spec
+.fork_at_epoch(indexed_attestation.data.target.epoch);
let signature_set = indexed_attestation_signature_set_from_pubkeys(
|validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
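Both hunks replace a single fork value read from the head state with a per-attestation lookup at the attestation's target epoch: around a fork transition, the head's fork and an attestation's fork can differ, and verifying with the wrong one selects the wrong signing domain. A simplified sketch of the distinction, using a hypothetical spec type rather than the real `ChainSpec` surface:

    /// A fork version activated at some epoch.
    #[derive(Clone, Copy, PartialEq, Debug)]
    struct Fork {
        epoch: u64,
        version: [u8; 4],
    }

    struct Spec {
        /// Ordered by activation epoch, genesis first.
        forks: Vec<Fork>,
    }

    impl Spec {
        /// The fork active at `epoch`: the last fork activated at or before it.
        fn fork_at_epoch(&self, epoch: u64) -> Fork {
            self.forks
                .iter()
                .copied()
                .filter(|f| f.epoch <= epoch)
                .last()
                .expect("genesis fork is always present")
        }
    }

    fn main() {
        let spec = Spec {
            forks: vec![
                Fork { epoch: 0, version: [0, 0, 0, 0] },
                Fork { epoch: 10, version: [1, 0, 0, 0] },
            ],
        };
        // The head may already be in epoch 10, but an attestation targeting
        // epoch 9 must still be verified against the older fork version.
        assert_eq!(spec.fork_at_epoch(9).version, [0, 0, 0, 0]);
        assert_eq!(spec.fork_at_epoch(10).version, [1, 0, 0, 0]);
    }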


@@ -84,7 +84,7 @@ pub struct CommitteeLengths {
impl CommitteeLengths {
/// Instantiate `Self` using `state.current_epoch()`.
-pub fn new<T: EthSpec>(state: &BeaconState<T>, spec: &ChainSpec) -> Result<Self, Error> {
+pub fn new<E: EthSpec>(state: &BeaconState<E>, spec: &ChainSpec) -> Result<Self, Error> {
let active_validator_indices_len = if let Ok(committee_cache) =
state.committee_cache(RelativeEpoch::Current)
{
@@ -102,21 +102,21 @@ impl CommitteeLengths {
}
/// Get the count of committees per each slot of `self.epoch`.
-pub fn get_committee_count_per_slot<T: EthSpec>(
+pub fn get_committee_count_per_slot<E: EthSpec>(
&self,
spec: &ChainSpec,
) -> Result<usize, Error> {
-T::get_committee_count_per_slot(self.active_validator_indices_len, spec).map_err(Into::into)
+E::get_committee_count_per_slot(self.active_validator_indices_len, spec).map_err(Into::into)
}
/// Get the length of the committee at the given `slot` and `committee_index`.
-pub fn get_committee_length<T: EthSpec>(
+pub fn get_committee_length<E: EthSpec>(
&self,
slot: Slot,
committee_index: CommitteeIndex,
spec: &ChainSpec,
) -> Result<CommitteeLength, Error> {
-let slots_per_epoch = T::slots_per_epoch();
+let slots_per_epoch = E::slots_per_epoch();
let request_epoch = slot.epoch(slots_per_epoch);
// Sanity check.
@@ -128,7 +128,7 @@ impl CommitteeLengths {
}
let slots_per_epoch = slots_per_epoch as usize;
-let committees_per_slot = self.get_committee_count_per_slot::<T>(spec)?;
+let committees_per_slot = self.get_committee_count_per_slot::<E>(spec)?;
let index_in_epoch = compute_committee_index_in_epoch(
slot,
slots_per_epoch,
@@ -162,7 +162,7 @@ pub struct AttesterCacheValue {
impl AttesterCacheValue {
/// Instantiate `Self` using `state.current_epoch()`.
-pub fn new<T: EthSpec>(state: &BeaconState<T>, spec: &ChainSpec) -> Result<Self, Error> {
+pub fn new<E: EthSpec>(state: &BeaconState<E>, spec: &ChainSpec) -> Result<Self, Error> {
let current_justified_checkpoint = state.current_justified_checkpoint();
let committee_lengths = CommitteeLengths::new(state, spec)?;
Ok(Self {
@@ -172,14 +172,14 @@ impl AttesterCacheValue {
}
/// Get the justified checkpoint and committee length for some `slot` and `committee_index`.
-fn get<T: EthSpec>(
+fn get<E: EthSpec>(
&self,
slot: Slot,
committee_index: CommitteeIndex,
spec: &ChainSpec,
) -> Result<(JustifiedCheckpoint, CommitteeLength), Error> {
self.committee_lengths
-.get_committee_length::<T>(slot, committee_index, spec)
+.get_committee_length::<E>(slot, committee_index, spec)
.map(|committee_length| (self.current_justified_checkpoint, committee_length))
}
}
@@ -216,12 +216,12 @@ impl AttesterCacheKey {
/// ## Errors
///
/// May error if `epoch` is out of the range of `state.block_roots`.
-pub fn new<T: EthSpec>(
+pub fn new<E: EthSpec>(
epoch: Epoch,
-state: &BeaconState<T>,
+state: &BeaconState<E>,
latest_block_root: Hash256,
) -> Result<Self, Error> {
-let slots_per_epoch = T::slots_per_epoch();
+let slots_per_epoch = E::slots_per_epoch();
let decision_slot = epoch.start_slot(slots_per_epoch).saturating_sub(1_u64);
let decision_root = if decision_slot.epoch(slots_per_epoch) == epoch {
@@ -255,7 +255,7 @@ pub struct AttesterCache {
impl AttesterCache {
/// Get the justified checkpoint and committee length for the `slot` and `committee_index` in
/// the state identified by the cache `key`.
-pub fn get<T: EthSpec>(
+pub fn get<E: EthSpec>(
&self,
key: &AttesterCacheKey,
slot: Slot,
@@ -265,14 +265,14 @@ impl AttesterCache {
self.cache
.read()
.get(key)
-.map(|cache_item| cache_item.get::<T>(slot, committee_index, spec))
+.map(|cache_item| cache_item.get::<E>(slot, committee_index, spec))
.transpose()
}
/// Cache the `state.current_epoch()` values if they are not already present in the state.
-pub fn maybe_cache_state<T: EthSpec>(
+pub fn maybe_cache_state<E: EthSpec>(
&self,
-state: &BeaconState<T>,
+state: &BeaconState<E>,
latest_block_root: Hash256,
spec: &ChainSpec,
) -> Result<(), Error> {
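`get_committee_length` works by flattening a (slot, committee_index) pair into a single index across the epoch, each slot contributing `committees_per_slot` committees. A small sketch of that flattening, with semantics assumed from the call site of `compute_committee_index_in_epoch` above rather than copied from Lighthouse:

    /// Flatten a (slot, committee) coordinate into an epoch-wide index.
    fn compute_committee_index_in_epoch(
        slot: u64,
        slots_per_epoch: u64,
        committees_per_slot: u64,
        committee_index: u64,
    ) -> u64 {
        let slot_in_epoch = slot % slots_per_epoch;
        slot_in_epoch * committees_per_slot + committee_index
    }

    fn main() {
        // With 32-slot epochs and 4 committees per slot, slot 65 is the
        // second slot of epoch 2, so its committee 3 has epoch-wide
        // index 1 * 4 + 3 = 7.
        assert_eq!(compute_committee_index_in_epoch(65, 32, 4, 3), 7);
    }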


@@ -0,0 +1,246 @@
use crate::{BeaconChain, BeaconChainError, BeaconChainTypes};
use eth2::lighthouse::StandardBlockReward;
use operation_pool::RewardCache;
use safe_arith::SafeArith;
use slog::error;
use state_processing::{
common::{get_attestation_participation_flag_indices, get_attesting_indices_from_state},
epoch_cache::initialize_epoch_cache,
per_block_processing::{
altair::sync_committee::compute_sync_aggregate_rewards, get_slashable_indices,
},
};
use store::{
consts::altair::{PARTICIPATION_FLAG_WEIGHTS, PROPOSER_WEIGHT, WEIGHT_DENOMINATOR},
RelativeEpoch,
};
use types::{AbstractExecPayload, BeaconBlockRef, BeaconState, BeaconStateError, Hash256};
type BeaconBlockSubRewardValue = u64;
impl<T: BeaconChainTypes> BeaconChain<T> {
pub fn compute_beacon_block_reward<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
block_root: Hash256,
state: &mut BeaconState<T::EthSpec>,
) -> Result<StandardBlockReward, BeaconChainError> {
if block.slot() != state.slot() {
return Err(BeaconChainError::BlockRewardSlotError);
}
state.build_committee_cache(RelativeEpoch::Previous, &self.spec)?;
state.build_committee_cache(RelativeEpoch::Current, &self.spec)?;
initialize_epoch_cache(state, &self.spec)?;
self.compute_beacon_block_reward_with_cache(block, block_root, state)
}
// This should only be called after a committee cache has been built
// for both the previous and current epoch
fn compute_beacon_block_reward_with_cache<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
block_root: Hash256,
state: &BeaconState<T::EthSpec>,
) -> Result<StandardBlockReward, BeaconChainError> {
let proposer_index = block.proposer_index();
let sync_aggregate_reward =
self.compute_beacon_block_sync_aggregate_reward(block, state)?;
let proposer_slashing_reward = self
.compute_beacon_block_proposer_slashing_reward(block, state)
.map_err(|e| {
error!(
self.log,
"Error calculating proposer slashing reward";
"error" => ?e
);
BeaconChainError::BlockRewardError
})?;
let attester_slashing_reward = self
.compute_beacon_block_attester_slashing_reward(block, state)
.map_err(|e| {
error!(
self.log,
"Error calculating attester slashing reward";
"error" => ?e
);
BeaconChainError::BlockRewardError
})?;
let block_attestation_reward = if let BeaconState::Base(_) = state {
self.compute_beacon_block_attestation_reward_base(block, block_root, state)
.map_err(|e| {
error!(
self.log,
"Error calculating base block attestation reward";
"error" => ?e
);
BeaconChainError::BlockRewardAttestationError
})?
} else {
self.compute_beacon_block_attestation_reward_altair_deneb(block, state)
.map_err(|e| {
error!(
self.log,
"Error calculating altair block attestation reward";
"error" => ?e
);
BeaconChainError::BlockRewardAttestationError
})?
};
let total_reward = sync_aggregate_reward
.safe_add(proposer_slashing_reward)?
.safe_add(attester_slashing_reward)?
.safe_add(block_attestation_reward)?;
Ok(StandardBlockReward {
proposer_index,
total: total_reward,
attestations: block_attestation_reward,
sync_aggregate: sync_aggregate_reward,
proposer_slashings: proposer_slashing_reward,
attester_slashings: attester_slashing_reward,
})
}
fn compute_beacon_block_sync_aggregate_reward<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
state: &BeaconState<T::EthSpec>,
) -> Result<BeaconBlockSubRewardValue, BeaconChainError> {
if let Ok(sync_aggregate) = block.body().sync_aggregate() {
let (_, proposer_reward_per_bit) = compute_sync_aggregate_rewards(state, &self.spec)
.map_err(|_| BeaconChainError::BlockRewardSyncError)?;
Ok(sync_aggregate.sync_committee_bits.num_set_bits() as u64 * proposer_reward_per_bit)
} else {
Ok(0)
}
}
fn compute_beacon_block_proposer_slashing_reward<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
state: &BeaconState<T::EthSpec>,
) -> Result<BeaconBlockSubRewardValue, BeaconChainError> {
let mut proposer_slashing_reward = 0;
let proposer_slashings = block.body().proposer_slashings();
for proposer_slashing in proposer_slashings {
proposer_slashing_reward.safe_add_assign(
state
.get_validator(proposer_slashing.proposer_index() as usize)?
.effective_balance()
.safe_div(self.spec.whistleblower_reward_quotient)?,
)?;
}
Ok(proposer_slashing_reward)
}
fn compute_beacon_block_attester_slashing_reward<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
state: &BeaconState<T::EthSpec>,
) -> Result<BeaconBlockSubRewardValue, BeaconChainError> {
let mut attester_slashing_reward = 0;
let attester_slashings = block.body().attester_slashings();
for attester_slashing in attester_slashings {
for attester_index in get_slashable_indices(state, attester_slashing)? {
attester_slashing_reward.safe_add_assign(
state
.get_validator(attester_index as usize)?
.effective_balance()
.safe_div(self.spec.whistleblower_reward_quotient)?,
)?;
}
}
Ok(attester_slashing_reward)
}
fn compute_beacon_block_attestation_reward_base<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
block_root: Hash256,
state: &BeaconState<T::EthSpec>,
) -> Result<BeaconBlockSubRewardValue, BeaconChainError> {
// Call compute_block_reward in the base case.
// Since the base fork has no sync aggregate, we only take the attestation portion of
// the returned value.
let mut reward_cache = RewardCache::default();
let block_attestation_reward = self
.compute_block_reward(block, block_root, state, &mut reward_cache, true)?
.attestation_rewards
.total;
Ok(block_attestation_reward)
}
fn compute_beacon_block_attestation_reward_altair_deneb<
Payload: AbstractExecPayload<T::EthSpec>,
>(
&self,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
state: &BeaconState<T::EthSpec>,
) -> Result<BeaconBlockSubRewardValue, BeaconChainError> {
let mut total_proposer_reward = 0;
let proposer_reward_denominator = WEIGHT_DENOMINATOR
.safe_sub(PROPOSER_WEIGHT)?
.safe_mul(WEIGHT_DENOMINATOR)?
.safe_div(PROPOSER_WEIGHT)?;
let mut current_epoch_participation = state.current_epoch_participation()?.clone();
let mut previous_epoch_participation = state.previous_epoch_participation()?.clone();
for attestation in block.body().attestations() {
let data = &attestation.data;
let inclusion_delay = state.slot().safe_sub(data.slot)?.as_u64();
// [Modified in Deneb:EIP7045]
let participation_flag_indices = get_attestation_participation_flag_indices(
state,
data,
inclusion_delay,
&self.spec,
)?;
let attesting_indices = get_attesting_indices_from_state(state, attestation)?;
let mut proposer_reward_numerator = 0;
for index in attesting_indices {
let index = index as usize;
for (flag_index, &weight) in PARTICIPATION_FLAG_WEIGHTS.iter().enumerate() {
let epoch_participation = if data.target.epoch == state.current_epoch() {
&mut current_epoch_participation
} else {
&mut previous_epoch_participation
};
let validator_participation = epoch_participation
.get_mut(index)
.ok_or(BeaconStateError::ParticipationOutOfBounds(index))?;
if participation_flag_indices.contains(&flag_index)
&& !validator_participation.has_flag(flag_index)?
{
validator_participation.add_flag(flag_index)?;
proposer_reward_numerator
.safe_add_assign(state.get_base_reward(index)?.safe_mul(weight)?)?;
}
}
}
total_proposer_reward.safe_add_assign(
proposer_reward_numerator.safe_div(proposer_reward_denominator)?,
)?;
}
Ok(total_proposer_reward)
}
}
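The Altair-style path accumulates a numerator of `base_reward * weight` for every participation flag an attestation newly sets, then divides once per attestation by `proposer_reward_denominator`, which encodes the proposer's share of the total weights. A worked example with the Altair weight constants (values assumed from the consensus spec, not taken from this file):

    fn main() {
        // Assumed Altair constants: WEIGHT_DENOMINATOR = 64, PROPOSER_WEIGHT = 8.
        let weight_denominator: u64 = 64;
        let proposer_weight: u64 = 8;

        // (64 - 8) * 64 / 8 = 448, the same shape as the denominator above.
        let denominator =
            (weight_denominator - proposer_weight) * weight_denominator / proposer_weight;
        assert_eq!(denominator, 448);

        // One attester with base reward 10_000 Gwei newly setting all three
        // flags (assumed timely source/target/head weights 14, 26, 14):
        let numerator: u64 = 10_000 * (14 + 26 + 14);
        assert_eq!(numerator / denominator, 1_205); // proposer's cut, in Gwei
    }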


@@ -0,0 +1,981 @@
use crate::{BeaconChain, BeaconChainError, BeaconChainTypes};
use execution_layer::{ExecutionLayer, ExecutionPayloadBodyV1};
use slog::{crit, debug, Logger};
use std::collections::HashMap;
use std::sync::Arc;
use store::{DatabaseBlock, ExecutionPayloadDeneb};
use task_executor::TaskExecutor;
use tokio::sync::{
mpsc::{self, UnboundedSender},
RwLock,
};
use tokio_stream::{wrappers::UnboundedReceiverStream, Stream};
use types::{
ChainSpec, EthSpec, ExecPayload, ExecutionBlockHash, ForkName, Hash256, SignedBeaconBlock,
SignedBlindedBeaconBlock, Slot,
};
use types::{
ExecutionPayload, ExecutionPayloadCapella, ExecutionPayloadElectra, ExecutionPayloadHeader,
ExecutionPayloadMerge,
};
#[derive(PartialEq)]
pub enum CheckCaches {
Yes,
No,
}
#[derive(Debug)]
pub enum Error {
PayloadReconstruction(String),
BlocksByRangeFailure(Box<execution_layer::Error>),
RequestNotFound,
BlockResultNotFound,
}
const BLOCKS_PER_RANGE_REQUEST: u64 = 32;
// This is the same as a DatabaseBlock but the Arc allows us to avoid an unnecessary clone.
enum LoadedBeaconBlock<E: EthSpec> {
Full(Arc<SignedBeaconBlock<E>>),
Blinded(Box<SignedBlindedBeaconBlock<E>>),
}
type LoadResult<E> = Result<Option<LoadedBeaconBlock<E>>, BeaconChainError>;
type BlockResult<E> = Result<Option<Arc<SignedBeaconBlock<E>>>, BeaconChainError>;
enum RequestState<E: EthSpec> {
UnSent(Vec<BlockParts<E>>),
Sent(HashMap<Hash256, Arc<BlockResult<E>>>),
}
struct BodiesByRange<E: EthSpec> {
start: u64,
count: u64,
state: RequestState<E>,
}
// Stores the components of a block, in compact form, for later reconstruction.
struct BlockParts<E: EthSpec> {
blinded_block: Box<SignedBlindedBeaconBlock<E>>,
header: Box<ExecutionPayloadHeader<E>>,
body: Option<Box<ExecutionPayloadBodyV1<E>>>,
}
impl<E: EthSpec> BlockParts<E> {
pub fn new(
blinded: Box<SignedBlindedBeaconBlock<E>>,
header: ExecutionPayloadHeader<E>,
) -> Self {
Self {
blinded_block: blinded,
header: Box::new(header),
body: None,
}
}
pub fn root(&self) -> Hash256 {
self.blinded_block.canonical_root()
}
pub fn slot(&self) -> Slot {
self.blinded_block.message().slot()
}
pub fn block_hash(&self) -> ExecutionBlockHash {
self.header.block_hash()
}
}
fn reconstruct_default_header_block<E: EthSpec>(
blinded_block: Box<SignedBlindedBeaconBlock<E>>,
header_from_block: ExecutionPayloadHeader<E>,
spec: &ChainSpec,
) -> BlockResult<E> {
let fork = blinded_block
.fork_name(spec)
.map_err(BeaconChainError::InconsistentFork)?;
let payload: ExecutionPayload<E> = match fork {
ForkName::Merge => ExecutionPayloadMerge::default().into(),
ForkName::Capella => ExecutionPayloadCapella::default().into(),
ForkName::Deneb => ExecutionPayloadDeneb::default().into(),
ForkName::Electra => ExecutionPayloadElectra::default().into(),
ForkName::Base | ForkName::Altair => {
return Err(Error::PayloadReconstruction(format!(
"Block with fork variant {} has execution payload",
fork
))
.into())
}
};
let header_from_payload = ExecutionPayloadHeader::from(payload.to_ref());
if header_from_payload == header_from_block {
blinded_block
.try_into_full_block(Some(payload))
.ok_or(BeaconChainError::AddPayloadLogicError)
.map(Arc::new)
.map(Some)
} else {
Err(BeaconChainError::InconsistentPayloadReconstructed {
slot: blinded_block.slot(),
exec_block_hash: header_from_block.block_hash(),
canonical_transactions_root: header_from_block.transactions_root(),
reconstructed_transactions_root: header_from_payload.transactions_root(),
})
}
}
fn reconstruct_blocks<E: EthSpec>(
block_map: &mut HashMap<Hash256, Arc<BlockResult<E>>>,
block_parts_with_bodies: HashMap<Hash256, BlockParts<E>>,
log: &Logger,
) {
for (root, block_parts) in block_parts_with_bodies {
if let Some(payload_body) = block_parts.body {
match payload_body.to_payload(block_parts.header.as_ref().clone()) {
Ok(payload) => {
let header_from_payload = ExecutionPayloadHeader::from(payload.to_ref());
if header_from_payload == *block_parts.header {
block_map.insert(
root,
Arc::new(
block_parts
.blinded_block
.try_into_full_block(Some(payload))
.ok_or(BeaconChainError::AddPayloadLogicError)
.map(Arc::new)
.map(Some),
),
);
} else {
let error = BeaconChainError::InconsistentPayloadReconstructed {
slot: block_parts.blinded_block.slot(),
exec_block_hash: block_parts.header.block_hash(),
canonical_transactions_root: block_parts.header.transactions_root(),
reconstructed_transactions_root: header_from_payload
.transactions_root(),
};
debug!(log, "Failed to reconstruct block"; "root" => ?root, "error" => ?error);
block_map.insert(root, Arc::new(Err(error)));
}
}
Err(string) => {
block_map.insert(
root,
Arc::new(Err(Error::PayloadReconstruction(string).into())),
);
}
}
} else {
block_map.insert(
root,
Arc::new(Err(BeaconChainError::BlockHashMissingFromExecutionLayer(
block_parts.block_hash(),
))),
);
}
}
}
impl<E: EthSpec> BodiesByRange<E> {
pub fn new(maybe_block_parts: Option<BlockParts<E>>) -> Self {
if let Some(block_parts) = maybe_block_parts {
Self {
start: block_parts.header.block_number(),
count: 1,
state: RequestState::UnSent(vec![block_parts]),
}
} else {
Self {
start: 0,
count: 0,
state: RequestState::UnSent(vec![]),
}
}
}
pub fn is_unsent(&self) -> bool {
matches!(self.state, RequestState::UnSent(_))
}
pub fn push_block_parts(&mut self, block_parts: BlockParts<E>) -> Result<(), BlockParts<E>> {
if self.count == BLOCKS_PER_RANGE_REQUEST {
return Err(block_parts);
}
match &mut self.state {
RequestState::Sent(_) => Err(block_parts),
RequestState::UnSent(blocks_parts_vec) => {
let block_number = block_parts.header.block_number();
if self.count == 0 {
self.start = block_number;
self.count = 1;
blocks_parts_vec.push(block_parts);
Ok(())
} else {
// need to figure out if this block fits in the request
if block_number < self.start
|| self.start + BLOCKS_PER_RANGE_REQUEST <= block_number
{
return Err(block_parts);
}
blocks_parts_vec.push(block_parts);
if self.start + self.count <= block_number {
self.count = block_number - self.start + 1;
}
Ok(())
}
}
}
}
async fn execute(&mut self, execution_layer: &ExecutionLayer<E>, log: &Logger) {
if let RequestState::UnSent(blocks_parts_ref) = &mut self.state {
let block_parts_vec = std::mem::take(blocks_parts_ref);
let mut block_map = HashMap::new();
match execution_layer
.get_payload_bodies_by_range(self.start, self.count)
.await
{
Ok(bodies) => {
let mut range_map = (self.start..(self.start + self.count))
.zip(bodies.into_iter().chain(std::iter::repeat(None)))
.collect::<HashMap<_, _>>();
let mut with_bodies = HashMap::new();
for mut block_parts in block_parts_vec {
with_bodies
// it's possible the same block is requested twice, using
// or_insert_with() skips duplicates
.entry(block_parts.root())
.or_insert_with(|| {
let block_number = block_parts.header.block_number();
block_parts.body =
range_map.remove(&block_number).flatten().map(Box::new);
block_parts
});
}
reconstruct_blocks(&mut block_map, with_bodies, log);
}
Err(e) => {
let block_result =
Arc::new(Err(Error::BlocksByRangeFailure(Box::new(e)).into()));
debug!(log, "Payload bodies by range failure"; "error" => ?block_result);
for block_parts in block_parts_vec {
block_map.insert(block_parts.root(), block_result.clone());
}
}
}
self.state = RequestState::Sent(block_map);
}
}
pub async fn get_block_result(
&mut self,
root: &Hash256,
execution_layer: &ExecutionLayer<E>,
log: &Logger,
) -> Option<Arc<BlockResult<E>>> {
self.execute(execution_layer, log).await;
if let RequestState::Sent(map) = &self.state {
return map.get(root).cloned();
}
// Shouldn't reach this point
None
}
}
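`push_block_parts` accepts a block only if its execution block number lands in a window of at most `BLOCKS_PER_RANGE_REQUEST` numbers anchored at `self.start` (growing `count` as needed) and the request is still unsent; anything else is handed back so the caller can open a fresh request. The acceptance predicate in isolation, as a sketch rather than the streamer's actual state handling:

    const BLOCKS_PER_RANGE_REQUEST: u64 = 32;

    /// Would a block with `block_number` fit a range request currently
    /// anchored at `start` with `count` blocks? (The real method also
    /// rejects once `count == BLOCKS_PER_RANGE_REQUEST` or after sending.)
    fn fits(start: u64, count: u64, block_number: u64) -> bool {
        if count == 0 {
            return true; // an empty request adopts the block's number as `start`
        }
        block_number >= start && block_number < start + BLOCKS_PER_RANGE_REQUEST
    }

    fn main() {
        // A request anchored at block 100 can absorb numbers 100..132 ...
        assert!(fits(100, 5, 131));
        // ... but not 132, and nothing before the anchor.
        assert!(!fits(100, 5, 132));
        assert!(!fits(100, 5, 99));
    }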
#[derive(Clone)]
enum EngineRequest<E: EthSpec> {
ByRange(Arc<RwLock<BodiesByRange<E>>>),
// When we already have the data or there's an error
NoRequest(Arc<RwLock<HashMap<Hash256, Arc<BlockResult<E>>>>>),
}
impl<E: EthSpec> EngineRequest<E> {
pub fn new_by_range() -> Self {
Self::ByRange(Arc::new(RwLock::new(BodiesByRange::new(None))))
}
pub fn new_no_request() -> Self {
Self::NoRequest(Arc::new(RwLock::new(HashMap::new())))
}
pub async fn is_unsent(&self) -> bool {
match self {
Self::ByRange(bodies_by_range) => bodies_by_range.read().await.is_unsent(),
Self::NoRequest(_) => false,
}
}
pub async fn push_block_parts(&mut self, block_parts: BlockParts<E>, log: &Logger) {
match self {
Self::ByRange(bodies_by_range) => {
let mut request = bodies_by_range.write().await;
if let Err(block_parts) = request.push_block_parts(block_parts) {
drop(request);
let new_by_range = BodiesByRange::new(Some(block_parts));
*self = Self::ByRange(Arc::new(RwLock::new(new_by_range)));
}
}
Self::NoRequest(_) => {
// this should _never_ happen
crit!(
log,
"Please notify the devs";
"beacon_block_streamer" => "push_block_parts called on NoRequest Variant",
);
}
}
}
pub async fn push_block_result(
&mut self,
root: Hash256,
block_result: BlockResult<E>,
log: &Logger,
) {
// this function will only fail if something is seriously wrong
match self {
Self::ByRange(_) => {
// this should _never_ happen
crit!(
log,
"Please notify the devs";
"beacon_block_streamer" => "push_block_result called on ByRange",
);
}
Self::NoRequest(results) => {
results.write().await.insert(root, Arc::new(block_result));
}
}
}
pub async fn get_block_result(
&self,
root: &Hash256,
execution_layer: &ExecutionLayer<E>,
log: &Logger,
) -> Arc<BlockResult<E>> {
match self {
Self::ByRange(by_range) => {
by_range
.write()
.await
.get_block_result(root, execution_layer, log)
.await
}
Self::NoRequest(map) => map.read().await.get(root).cloned(),
}
.unwrap_or_else(|| {
crit!(
log,
"Please notify the devs";
"beacon_block_streamer" => "block_result not found in request",
"root" => ?root,
);
Arc::new(Err(Error::BlockResultNotFound.into()))
})
}
}
pub struct BeaconBlockStreamer<T: BeaconChainTypes> {
execution_layer: ExecutionLayer<T::EthSpec>,
check_caches: CheckCaches,
beacon_chain: Arc<BeaconChain<T>>,
}
impl<T: BeaconChainTypes> BeaconBlockStreamer<T> {
pub fn new(
beacon_chain: &Arc<BeaconChain<T>>,
check_caches: CheckCaches,
) -> Result<Self, BeaconChainError> {
let execution_layer = beacon_chain
.execution_layer
.as_ref()
.ok_or(BeaconChainError::ExecutionLayerMissing)?
.clone();
Ok(Self {
execution_layer,
check_caches,
beacon_chain: beacon_chain.clone(),
})
}
fn check_caches(&self, root: Hash256) -> Option<Arc<SignedBeaconBlock<T::EthSpec>>> {
if self.check_caches == CheckCaches::Yes {
self.beacon_chain
.data_availability_checker
.get_block(&root)
.or(self.beacon_chain.early_attester_cache.get_block(root))
} else {
None
}
}
fn load_payloads(&self, block_roots: Vec<Hash256>) -> Vec<(Hash256, LoadResult<T::EthSpec>)> {
let mut db_blocks = Vec::new();
for root in block_roots {
if let Some(cached_block) = self.check_caches(root).map(LoadedBeaconBlock::Full) {
db_blocks.push((root, Ok(Some(cached_block))));
continue;
}
match self.beacon_chain.store.try_get_full_block(&root, None) {
Err(e) => db_blocks.push((root, Err(e.into()))),
Ok(opt_block) => db_blocks.push((
root,
Ok(opt_block.map(|db_block| match db_block {
DatabaseBlock::Full(block) => LoadedBeaconBlock::Full(Arc::new(block)),
DatabaseBlock::Blinded(block) => {
LoadedBeaconBlock::Blinded(Box::new(block))
}
})),
)),
}
}
db_blocks
}
/// Pre-process the loaded blocks into execution engine requests.
///
/// The purpose of this function is to separate the blocks into 2 categories:
/// 1) no_request - when we already have the full block or there's an error
/// 2) blocks_by_range - used for blinded blocks
///
/// The function returns a vector of block roots in the same order as requested
/// along with the engine request that each root corresponds to.
async fn get_requests(
&self,
payloads: Vec<(Hash256, LoadResult<T::EthSpec>)>,
) -> Vec<(Hash256, EngineRequest<T::EthSpec>)> {
let mut ordered_block_roots = Vec::new();
let mut requests = HashMap::new();
// We sort the by-range blocks by slot before adding them to the
// request, as this better optimizes the number of blocks that
// can fit in the same request.
let mut by_range_blocks: Vec<BlockParts<T::EthSpec>> = vec![];
let mut no_request = EngineRequest::new_no_request();
for (root, load_result) in payloads {
// preserve the order of the requested blocks
ordered_block_roots.push(root);
let block_result = match load_result {
Err(e) => Err(e),
Ok(None) => Ok(None),
Ok(Some(LoadedBeaconBlock::Full(full_block))) => Ok(Some(full_block)),
Ok(Some(LoadedBeaconBlock::Blinded(blinded_block))) => {
match blinded_block
.message()
.execution_payload()
.map(|payload| payload.to_execution_payload_header())
{
Ok(header) => {
if header.block_hash() == ExecutionBlockHash::zero() {
reconstruct_default_header_block(
blinded_block,
header,
&self.beacon_chain.spec,
)
} else {
// Add the block to the set requiring a by-range request.
let block_parts = BlockParts::new(blinded_block, header);
by_range_blocks.push(block_parts);
continue;
}
}
Err(e) => Err(BeaconChainError::BeaconStateError(e)),
}
}
};
no_request
.push_block_result(root, block_result, &self.beacon_chain.log)
.await;
requests.insert(root, no_request.clone());
}
// Now deal with the by_range requests. Sort them in order of increasing slot
let mut by_range = EngineRequest::<T::EthSpec>::new_by_range();
by_range_blocks.sort_by_key(|block_parts| block_parts.slot());
for block_parts in by_range_blocks {
let root = block_parts.root();
by_range
.push_block_parts(block_parts, &self.beacon_chain.log)
.await;
requests.insert(root, by_range.clone());
}
let mut result = vec![];
for root in ordered_block_roots {
if let Some(request) = requests.get(&root) {
result.push((root, request.clone()))
} else {
crit!(
self.beacon_chain.log,
"Please notify the devs";
"beacon_block_streamer" => "request not found",
"root" => ?root,
);
no_request
.push_block_result(
root,
Err(Error::RequestNotFound.into()),
&self.beacon_chain.log,
)
.await;
result.push((root, no_request.clone()));
}
}
result
}
// used when the execution engine doesn't support the payload bodies methods
async fn stream_blocks_fallback(
&self,
block_roots: Vec<Hash256>,
sender: UnboundedSender<(Hash256, Arc<BlockResult<T::EthSpec>>)>,
) {
debug!(
self.beacon_chain.log,
"Using slower fallback method of eth_getBlockByHash()"
);
for root in block_roots {
let cached_block = self.check_caches(root);
let block_result = if cached_block.is_some() {
Ok(cached_block)
} else {
self.beacon_chain
.get_block(&root)
.await
.map(|opt_block| opt_block.map(Arc::new))
};
if sender.send((root, Arc::new(block_result))).is_err() {
break;
}
}
}
async fn stream_blocks(
&self,
block_roots: Vec<Hash256>,
sender: UnboundedSender<(Hash256, Arc<BlockResult<T::EthSpec>>)>,
) {
let n_roots = block_roots.len();
let mut n_success = 0usize;
let mut n_sent = 0usize;
let mut engine_requests = 0usize;
let payloads = self.load_payloads(block_roots);
let requests = self.get_requests(payloads).await;
for (root, request) in requests {
if request.is_unsent().await {
engine_requests += 1;
}
let result = request
.get_block_result(&root, &self.execution_layer, &self.beacon_chain.log)
.await;
let successful = result
.as_ref()
.as_ref()
.map(|opt| opt.is_some())
.unwrap_or(false);
if sender.send((root, result)).is_err() {
break;
} else {
n_sent += 1;
if successful {
n_success += 1;
}
}
}
debug!(
self.beacon_chain.log,
"BeaconBlockStreamer finished";
"requested blocks" => n_roots,
"sent" => n_sent,
"succeeded" => n_success,
"failed" => (n_sent - n_success),
"engine requests" => engine_requests,
);
}
pub async fn stream(
self,
block_roots: Vec<Hash256>,
sender: UnboundedSender<(Hash256, Arc<BlockResult<T::EthSpec>>)>,
) {
match self
.execution_layer
.get_engine_capabilities(None)
.await
.map_err(Box::new)
.map_err(BeaconChainError::EngineGetCapabilititesFailed)
{
Ok(engine_capabilities) => {
if engine_capabilities.get_payload_bodies_by_range_v1 {
self.stream_blocks(block_roots, sender).await;
} else {
// use the fallback method
self.stream_blocks_fallback(block_roots, sender).await;
}
}
Err(e) => {
send_errors(block_roots, sender, e).await;
}
}
}
pub fn launch_stream(
self,
block_roots: Vec<Hash256>,
executor: &TaskExecutor,
) -> impl Stream<Item = (Hash256, Arc<BlockResult<T::EthSpec>>)> {
let (block_tx, block_rx) = mpsc::unbounded_channel();
debug!(
self.beacon_chain.log,
"Launching a BeaconBlockStreamer";
"blocks" => block_roots.len(),
);
executor.spawn(self.stream(block_roots, block_tx), "get_blocks_sender");
UnboundedReceiverStream::new(block_rx)
}
}
async fn send_errors<E: EthSpec>(
block_roots: Vec<Hash256>,
sender: UnboundedSender<(Hash256, Arc<BlockResult<E>>)>,
beacon_chain_error: BeaconChainError,
) {
let result = Arc::new(Err(beacon_chain_error));
for root in block_roots {
if sender.send((root, result.clone())).is_err() {
break;
}
}
}
impl From<Error> for BeaconChainError {
fn from(value: Error) -> Self {
BeaconChainError::BlockStreamerError(value)
}
}
#[cfg(test)]
mod tests {
use crate::beacon_block_streamer::{BeaconBlockStreamer, CheckCaches};
use crate::test_utils::{test_spec, BeaconChainHarness, EphemeralHarnessType};
use execution_layer::test_utils::{Block, DEFAULT_ENGINE_CAPABILITIES};
use execution_layer::EngineCapabilities;
use lazy_static::lazy_static;
use std::time::Duration;
use tokio::sync::mpsc;
use types::{ChainSpec, Epoch, EthSpec, Hash256, Keypair, MinimalEthSpec, Slot};
const VALIDATOR_COUNT: usize = 48;
lazy_static! {
/// A cached set of keys.
static ref KEYPAIRS: Vec<Keypair> = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT);
}
fn get_harness(
validator_count: usize,
spec: ChainSpec,
) -> BeaconChainHarness<EphemeralHarnessType<MinimalEthSpec>> {
let harness = BeaconChainHarness::builder(MinimalEthSpec)
.spec(spec)
.keypairs(KEYPAIRS[0..validator_count].to_vec())
.logger(logging::test_logger())
.fresh_ephemeral_store()
.mock_execution_layer()
.build();
harness.advance_slot();
harness
}
#[tokio::test]
async fn check_all_blocks_from_altair_to_electra() {
let slots_per_epoch = MinimalEthSpec::slots_per_epoch() as usize;
let num_epochs = 10;
let bellatrix_fork_epoch = 2usize;
let capella_fork_epoch = 4usize;
let deneb_fork_epoch = 6usize;
let electra_fork_epoch = 8usize;
let num_blocks_produced = num_epochs * slots_per_epoch;
let mut spec = test_spec::<MinimalEthSpec>();
spec.altair_fork_epoch = Some(Epoch::new(0));
spec.bellatrix_fork_epoch = Some(Epoch::new(bellatrix_fork_epoch as u64));
spec.capella_fork_epoch = Some(Epoch::new(capella_fork_epoch as u64));
spec.deneb_fork_epoch = Some(Epoch::new(deneb_fork_epoch as u64));
spec.electra_fork_epoch = Some(Epoch::new(electra_fork_epoch as u64));
let harness = get_harness(VALIDATOR_COUNT, spec.clone());
// go to bellatrix fork
harness
.extend_slots(bellatrix_fork_epoch * slots_per_epoch)
.await;
// extend half an epoch
harness.extend_slots(slots_per_epoch / 2).await;
// trigger merge
harness
.execution_block_generator()
.move_to_terminal_block()
.expect("should move to terminal block");
let timestamp = harness.get_timestamp_at_slot() + harness.spec.seconds_per_slot;
harness
.execution_block_generator()
.modify_last_block(|block| {
if let Block::PoW(terminal_block) = block {
terminal_block.timestamp = timestamp;
}
});
// finish out merge epoch
harness.extend_slots(slots_per_epoch / 2).await;
// finish rest of epochs
harness
.extend_slots((num_epochs - 1 - bellatrix_fork_epoch) * slots_per_epoch)
.await;
let head = harness.chain.head_snapshot();
let state = &head.beacon_state;
assert_eq!(
state.slot(),
Slot::new(num_blocks_produced as u64),
"head should be at the current slot"
);
assert_eq!(
state.current_epoch(),
num_blocks_produced as u64 / MinimalEthSpec::slots_per_epoch(),
"head should be at the expected epoch"
);
assert_eq!(
state.current_justified_checkpoint().epoch,
state.current_epoch() - 1,
"the head should be justified one behind the current epoch"
);
assert_eq!(
state.finalized_checkpoint().epoch,
state.current_epoch() - 2,
"the head should be finalized two behind the current epoch"
);
let block_roots: Vec<Hash256> = harness
.chain
.forwards_iter_block_roots(Slot::new(0))
.expect("should get iter")
.map(Result::unwrap)
.map(|(root, _)| root)
.collect();
let mut expected_blocks = vec![];
// get all blocks the old fashioned way
for root in &block_roots {
let block = harness
.chain
.get_block(root)
.await
.expect("should get block")
.expect("block should exist");
expected_blocks.push(block);
}
for epoch in 0..num_epochs {
let start = epoch * slots_per_epoch;
let mut epoch_roots = vec![Hash256::zero(); slots_per_epoch];
epoch_roots[..].clone_from_slice(&block_roots[start..(start + slots_per_epoch)]);
let streamer = BeaconBlockStreamer::new(&harness.chain, CheckCaches::No)
.expect("should create streamer");
let (block_tx, mut block_rx) = mpsc::unbounded_channel();
streamer.stream(epoch_roots.clone(), block_tx).await;
for (i, expected_root) in epoch_roots.into_iter().enumerate() {
let (found_root, found_block_result) =
block_rx.recv().await.expect("should get block");
assert_eq!(
found_root, expected_root,
"expected block root should match"
);
match found_block_result.as_ref() {
Ok(maybe_block) => {
let found_block = maybe_block.clone().expect("should have a block");
let expected_block = expected_blocks
.get(start + i)
.expect("should get expected block");
assert_eq!(
found_block.as_ref(),
expected_block,
"expected block should match found block"
);
}
Err(e) => panic!("Error retrieving block {}: {:?}", expected_root, e),
}
}
}
}
#[tokio::test]
async fn check_fallback_altair_to_electra() {
let slots_per_epoch = MinimalEthSpec::slots_per_epoch() as usize;
let num_epochs = 10;
let bellatrix_fork_epoch = 2usize;
let capella_fork_epoch = 4usize;
let deneb_fork_epoch = 6usize;
let electra_fork_epoch = 8usize;
let num_blocks_produced = num_epochs * slots_per_epoch;
let mut spec = test_spec::<MinimalEthSpec>();
spec.altair_fork_epoch = Some(Epoch::new(0));
spec.bellatrix_fork_epoch = Some(Epoch::new(bellatrix_fork_epoch as u64));
spec.capella_fork_epoch = Some(Epoch::new(capella_fork_epoch as u64));
spec.deneb_fork_epoch = Some(Epoch::new(deneb_fork_epoch as u64));
spec.electra_fork_epoch = Some(Epoch::new(electra_fork_epoch as u64));
let harness = get_harness(VALIDATOR_COUNT, spec);
// modify execution engine so it doesn't support engine_payloadBodiesBy* methods
let mock_execution_layer = harness.mock_execution_layer.as_ref().unwrap();
mock_execution_layer
.server
.set_engine_capabilities(EngineCapabilities {
get_payload_bodies_by_hash_v1: false,
get_payload_bodies_by_range_v1: false,
..DEFAULT_ENGINE_CAPABILITIES
});
// refresh capabilities cache
harness
.chain
.execution_layer
.as_ref()
.unwrap()
.get_engine_capabilities(Some(Duration::ZERO))
.await
.unwrap();
// go to bellatrix fork
harness
.extend_slots(bellatrix_fork_epoch * slots_per_epoch)
.await;
// extend half an epoch
harness.extend_slots(slots_per_epoch / 2).await;
// trigger merge
harness
.execution_block_generator()
.move_to_terminal_block()
.expect("should move to terminal block");
let timestamp = harness.get_timestamp_at_slot() + harness.spec.seconds_per_slot;
harness
.execution_block_generator()
.modify_last_block(|block| {
if let Block::PoW(terminal_block) = block {
terminal_block.timestamp = timestamp;
}
});
// finish out merge epoch
harness.extend_slots(slots_per_epoch / 2).await;
// finish rest of epochs
harness
.extend_slots((num_epochs - 1 - bellatrix_fork_epoch) * slots_per_epoch)
.await;
let head = harness.chain.head_snapshot();
let state = &head.beacon_state;
assert_eq!(
state.slot(),
Slot::new(num_blocks_produced as u64),
"head should be at the current slot"
);
assert_eq!(
state.current_epoch(),
num_blocks_produced as u64 / MinimalEthSpec::slots_per_epoch(),
"head should be at the expected epoch"
);
assert_eq!(
state.current_justified_checkpoint().epoch,
state.current_epoch() - 1,
"the head should be justified one behind the current epoch"
);
assert_eq!(
state.finalized_checkpoint().epoch,
state.current_epoch() - 2,
"the head should be finalized two behind the current epoch"
);
let block_roots: Vec<Hash256> = harness
.chain
.forwards_iter_block_roots(Slot::new(0))
.expect("should get iter")
.map(Result::unwrap)
.map(|(root, _)| root)
.collect();
let mut expected_blocks = vec![];
// get all blocks the old fashioned way
for root in &block_roots {
let block = harness
.chain
.get_block(root)
.await
.expect("should get block")
.expect("block should exist");
expected_blocks.push(block);
}
for epoch in 0..num_epochs {
let start = epoch * slots_per_epoch;
let mut epoch_roots = vec![Hash256::zero(); slots_per_epoch];
epoch_roots[..].clone_from_slice(&block_roots[start..(start + slots_per_epoch)]);
let streamer = BeaconBlockStreamer::new(&harness.chain, CheckCaches::No)
.expect("should create streamer");
let (block_tx, mut block_rx) = mpsc::unbounded_channel();
streamer.stream(epoch_roots.clone(), block_tx).await;
for (i, expected_root) in epoch_roots.into_iter().enumerate() {
let (found_root, found_block_result) =
block_rx.recv().await.expect("should get block");
assert_eq!(
found_root, expected_root,
"expected block root should match"
);
match found_block_result.as_ref() {
Ok(maybe_block) => {
let found_block = maybe_block.clone().expect("should have a block");
let expected_block = expected_blocks
.get(start + i)
.expect("should get expected block");
assert_eq!(
found_block.as_ref(),
expected_block,
"expected block should match found block"
);
}
Err(e) => panic!("Error retrieving block {}: {:?}", expected_root, e),
}
}
}
}
}

File diff suppressed because it is too large.


@@ -7,13 +7,25 @@
use crate::{metrics, BeaconSnapshot};
use derivative::Derivative;
use fork_choice::ForkChoiceStore;
+use proto_array::JustifiedBalances;
+use safe_arith::ArithError;
use ssz_derive::{Decode, Encode};
use std::collections::BTreeSet;
use std::marker::PhantomData;
use std::sync::Arc;
use store::{Error as StoreError, HotColdDB, ItemStore};
use superstruct::superstruct;
use types::{
-BeaconBlock, BeaconState, BeaconStateError, Checkpoint, Epoch, EthSpec, Hash256, Slot,
+AbstractExecPayload, BeaconBlockRef, BeaconState, BeaconStateError, Checkpoint, Epoch, EthSpec,
+Hash256, Slot,
};
+/// Ensure this justified checkpoint has an epoch of 0 so that it is never
+/// greater than the justified checkpoint and enshrined as the actual justified
+/// checkpoint.
+const JUNK_BEST_JUSTIFIED_CHECKPOINT: Checkpoint = Checkpoint {
+epoch: Epoch::new(0),
+root: Hash256::repeat_byte(0),
+};
#[derive(Debug)]
@@ -29,6 +41,7 @@ pub enum Error {
MissingState(Hash256),
InvalidPersistedBytes(ssz::DecodeError),
BeaconStateError(BeaconStateError),
+Arith(ArithError),
}
impl From<BeaconStateError> for Error {
@@ -37,29 +50,17 @@ impl From<BeaconStateError> for Error {
}
}
+impl From<ArithError> for Error {
+fn from(e: ArithError) -> Self {
+Error::Arith(e)
+}
+}
/// The number of validator balance sets that are cached within `BalancesCache`.
const MAX_BALANCE_CACHE_SIZE: usize = 4;
-/// Returns the effective balances for every validator in the given `state`.
-///
-/// Any validator who is not active in the epoch of the given `state` is assigned a balance of
-/// zero.
-pub fn get_effective_balances<T: EthSpec>(state: &BeaconState<T>) -> Vec<u64> {
-state
-.validators()
-.iter()
-.map(|validator| {
-if validator.is_active_at(state.current_epoch()) {
-validator.effective_balance
-} else {
-0
-}
-})
-.collect()
-}
#[superstruct(
-variants(V1, V8),
+variants(V8),
variant_attributes(derive(PartialEq, Clone, Debug, Encode, Decode)),
no_enum
)]
@@ -73,13 +74,11 @@ pub(crate) struct CacheItem {
pub(crate) type CacheItem = CacheItemV8;
#[superstruct(
-variants(V1, V8),
+variants(V8),
variant_attributes(derive(PartialEq, Clone, Default, Debug, Encode, Decode)),
no_enum
)]
pub struct BalancesCache {
-#[superstruct(only(V1))]
-pub(crate) items: Vec<CacheItemV1>,
#[superstruct(only(V8))]
pub(crate) items: Vec<CacheItemV8>,
}
@@ -113,7 +112,7 @@ impl BalancesCache {
let item = CacheItem {
block_root: epoch_boundary_root,
epoch,
-balances: get_effective_balances(state),
+balances: JustifiedBalances::from_justified_state(state)?.effective_balances,
};
if self.items.len() == MAX_BALANCE_CACHE_SIZE {
@@ -152,9 +151,11 @@ pub struct BeaconForkChoiceStore<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<
time: Slot,
finalized_checkpoint: Checkpoint,
justified_checkpoint: Checkpoint,
-justified_balances: Vec<u64>,
-best_justified_checkpoint: Checkpoint,
+justified_balances: JustifiedBalances,
unrealized_justified_checkpoint: Checkpoint,
unrealized_finalized_checkpoint: Checkpoint,
proposer_boost_root: Hash256,
equivocating_indices: BTreeSet<u64>,
_phantom: PhantomData<E>,
}
@@ -178,7 +179,7 @@ where
pub fn get_forkchoice_store(
store: Arc<HotColdDB<E, Hot, Cold>>,
anchor: &BeaconSnapshot<E>,
-) -> Self {
+) -> Result<Self, Error> {
let anchor_state = &anchor.beacon_state;
let mut anchor_block_header = anchor_state.latest_block_header().clone();
if anchor_block_header.state_root == Hash256::zero() {
@@ -191,18 +192,21 @@ where
root: anchor_root,
};
let finalized_checkpoint = justified_checkpoint;
+let justified_balances = JustifiedBalances::from_justified_state(anchor_state)?;
-Self {
+Ok(Self {
store,
balances_cache: <_>::default(),
time: anchor_state.slot(),
justified_checkpoint,
-justified_balances: anchor_state.balances().clone().into(),
+justified_balances,
finalized_checkpoint,
-best_justified_checkpoint: justified_checkpoint,
unrealized_justified_checkpoint: justified_checkpoint,
unrealized_finalized_checkpoint: finalized_checkpoint,
proposer_boost_root: Hash256::zero(),
equivocating_indices: BTreeSet::new(),
_phantom: PhantomData,
-}
+})
}
/// Save the current state of `Self` to a `PersistedForkChoiceStore` which can be stored to the
@@ -213,9 +217,11 @@ where
time: self.time,
finalized_checkpoint: self.finalized_checkpoint,
justified_checkpoint: self.justified_checkpoint,
-justified_balances: self.justified_balances.clone(),
-best_justified_checkpoint: self.best_justified_checkpoint,
+justified_balances: self.justified_balances.effective_balances.clone(),
unrealized_justified_checkpoint: self.unrealized_justified_checkpoint,
unrealized_finalized_checkpoint: self.unrealized_finalized_checkpoint,
proposer_boost_root: self.proposer_boost_root,
equivocating_indices: self.equivocating_indices.clone(),
}
}
@@ -224,15 +230,19 @@ where
persisted: PersistedForkChoiceStore,
store: Arc<HotColdDB<E, Hot, Cold>>,
) -> Result<Self, Error> {
+let justified_balances =
+JustifiedBalances::from_effective_balances(persisted.justified_balances)?;
Ok(Self {
store,
balances_cache: persisted.balances_cache,
time: persisted.time,
finalized_checkpoint: persisted.finalized_checkpoint,
justified_checkpoint: persisted.justified_checkpoint,
-justified_balances: persisted.justified_balances,
-best_justified_checkpoint: persisted.best_justified_checkpoint,
+justified_balances,
unrealized_justified_checkpoint: persisted.unrealized_justified_checkpoint,
unrealized_finalized_checkpoint: persisted.unrealized_finalized_checkpoint,
proposer_boost_root: persisted.proposer_boost_root,
equivocating_indices: persisted.equivocating_indices,
_phantom: PhantomData,
})
}
@@ -254,9 +264,9 @@ where
self.time = slot
}
-fn on_verified_block(
+fn on_verified_block<Payload: AbstractExecPayload<E>>(
&mut self,
-_block: &BeaconBlock<E>,
+_block: BeaconBlockRef<E, Payload>,
block_root: Hash256,
state: &BeaconState<E>,
) -> Result<(), Self::Error> {
@@ -267,18 +277,22 @@ where
&self.justified_checkpoint
}
-fn justified_balances(&self) -> &[u64] {
+fn justified_balances(&self) -> &JustifiedBalances {
&self.justified_balances
}
-fn best_justified_checkpoint(&self) -> &Checkpoint {
-&self.best_justified_checkpoint
-}
fn finalized_checkpoint(&self) -> &Checkpoint {
&self.finalized_checkpoint
}
fn unrealized_justified_checkpoint(&self) -> &Checkpoint {
&self.unrealized_justified_checkpoint
}
fn unrealized_finalized_checkpoint(&self) -> &Checkpoint {
&self.unrealized_finalized_checkpoint
}
fn proposer_boost_root(&self) -> Hash256 {
self.proposer_boost_root
}
@@ -294,57 +308,116 @@ where
self.justified_checkpoint.root,
self.justified_checkpoint.epoch,
) {
+// NOTE: could avoid this re-calculation by introducing a `PersistedCacheItem`.
metrics::inc_counter(&metrics::BALANCES_CACHE_HITS);
-self.justified_balances = balances;
+self.justified_balances = JustifiedBalances::from_effective_balances(balances)?;
} else {
metrics::inc_counter(&metrics::BALANCES_CACHE_MISSES);
let justified_block = self
.store
-.get_block(&self.justified_checkpoint.root)
+.get_blinded_block(&self.justified_checkpoint.root, None)
.map_err(Error::FailedToReadBlock)?
.ok_or(Error::MissingBlock(self.justified_checkpoint.root))?
.deconstruct()
.0;
-let state = self
+let max_slot = self
+.justified_checkpoint
+.epoch
+.start_slot(E::slots_per_epoch());
+let (_, state) = self
.store
-.get_state(&justified_block.state_root(), Some(justified_block.slot()))
+.get_advanced_hot_state(
+self.justified_checkpoint.root,
+max_slot,
+justified_block.state_root(),
+)
.map_err(Error::FailedToReadState)?
.ok_or_else(|| Error::MissingState(justified_block.state_root()))?;
-self.justified_balances = get_effective_balances(&state);
+self.justified_balances = JustifiedBalances::from_justified_state(&state)?;
}
Ok(())
}
-fn set_best_justified_checkpoint(&mut self, checkpoint: Checkpoint) {
-self.best_justified_checkpoint = checkpoint
+fn set_unrealized_justified_checkpoint(&mut self, checkpoint: Checkpoint) {
+self.unrealized_justified_checkpoint = checkpoint;
}
fn set_unrealized_finalized_checkpoint(&mut self, checkpoint: Checkpoint) {
self.unrealized_finalized_checkpoint = checkpoint;
}
fn set_proposer_boost_root(&mut self, proposer_boost_root: Hash256) {
self.proposer_boost_root = proposer_boost_root;
}
fn equivocating_indices(&self) -> &BTreeSet<u64> {
&self.equivocating_indices
}
fn extend_equivocating_indices(&mut self, indices: impl IntoIterator<Item = u64>) {
self.equivocating_indices.extend(indices);
}
}
+pub type PersistedForkChoiceStore = PersistedForkChoiceStoreV17;
/// A container which allows persisting the `BeaconForkChoiceStore` to the on-disk database.
#[superstruct(
-variants(V1, V7, V8),
+variants(V11, V17),
variant_attributes(derive(Encode, Decode)),
no_enum
)]
pub struct PersistedForkChoiceStore {
-#[superstruct(only(V1, V7))]
-pub balances_cache: BalancesCacheV1,
-#[superstruct(only(V8))]
+#[superstruct(only(V11, V17))]
pub balances_cache: BalancesCacheV8,
pub time: Slot,
pub finalized_checkpoint: Checkpoint,
pub justified_checkpoint: Checkpoint,
pub justified_balances: Vec<u64>,
+#[superstruct(only(V11))]
pub best_justified_checkpoint: Checkpoint,
-#[superstruct(only(V7, V8))]
+#[superstruct(only(V11, V17))]
pub unrealized_justified_checkpoint: Checkpoint,
+#[superstruct(only(V11, V17))]
pub unrealized_finalized_checkpoint: Checkpoint,
+#[superstruct(only(V11, V17))]
pub proposer_boost_root: Hash256,
+#[superstruct(only(V11, V17))]
pub equivocating_indices: BTreeSet<u64>,
}
-pub type PersistedForkChoiceStore = PersistedForkChoiceStoreV8;
+impl Into<PersistedForkChoiceStore> for PersistedForkChoiceStoreV11 {
+fn into(self) -> PersistedForkChoiceStore {
+PersistedForkChoiceStore {
+balances_cache: self.balances_cache,
+time: self.time,
+finalized_checkpoint: self.finalized_checkpoint,
+justified_checkpoint: self.justified_checkpoint,
+justified_balances: self.justified_balances,
+unrealized_justified_checkpoint: self.unrealized_justified_checkpoint,
+unrealized_finalized_checkpoint: self.unrealized_finalized_checkpoint,
+proposer_boost_root: self.proposer_boost_root,
+equivocating_indices: self.equivocating_indices,
+}
+}
+}
+impl Into<PersistedForkChoiceStoreV11> for PersistedForkChoiceStore {
+fn into(self) -> PersistedForkChoiceStoreV11 {
+PersistedForkChoiceStoreV11 {
+balances_cache: self.balances_cache,
+time: self.time,
+finalized_checkpoint: self.finalized_checkpoint,
+justified_checkpoint: self.justified_checkpoint,
+justified_balances: self.justified_balances,
+best_justified_checkpoint: JUNK_BEST_JUSTIFIED_CHECKPOINT,
+unrealized_justified_checkpoint: self.unrealized_justified_checkpoint,
+unrealized_finalized_checkpoint: self.unrealized_finalized_checkpoint,
+proposer_boost_root: self.proposer_boost_root,
+equivocating_indices: self.equivocating_indices,
+}
+}
+}
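The store keeps persisting raw effective balances (`Vec<u64>`) but now works in memory with a `JustifiedBalances` value that pre-computes the aggregates fork choice needs. A plausible reading of what `from_effective_balances` derives, with field names assumed from the `proto_array::JustifiedBalances` usage above:

    /// Sketch of the aggregates rebuilt from persisted effective balances.
    struct JustifiedBalances {
        effective_balances: Vec<u64>,
        total_effective_balance: u64,
        num_active_validators: u64,
    }

    impl JustifiedBalances {
        fn from_effective_balances(balances: Vec<u64>) -> Self {
            // Inactive validators are stored as 0, so they drop out of both
            // aggregates.
            let total_effective_balance: u64 = balances.iter().sum();
            let num_active_validators =
                balances.iter().filter(|b| **b > 0).count() as u64;
            Self {
                effective_balances: balances,
                total_effective_balance,
                num_active_validators,
            }
        }
    }

    fn main() {
        let jb = JustifiedBalances::from_effective_balances(vec![32, 0, 31, 32]);
        assert_eq!(jb.total_effective_balance, 95);
        assert_eq!(jb.num_active_validators, 3);
        assert_eq!(jb.effective_balances.len(), 4);
    }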


@@ -8,17 +8,25 @@
//! very simple to reason about, but it might store values that are useless due to finalization. The
//! values it stores are very small, so this should not be an issue.
+use crate::{BeaconChain, BeaconChainError, BeaconChainTypes};
+use fork_choice::ExecutionStatus;
use lru::LruCache;
use smallvec::SmallVec;
-use types::{BeaconStateError, Epoch, EthSpec, Fork, Hash256, Slot, Unsigned};
+use state_processing::state_advance::partial_state_advance;
+use std::cmp::Ordering;
+use std::num::NonZeroUsize;
+use types::non_zero_usize::new_non_zero_usize;
+use types::{
+BeaconState, BeaconStateError, ChainSpec, Epoch, EthSpec, Fork, Hash256, Slot, Unsigned,
+};
/// The number of sets of proposer indices that should be cached.
-const CACHE_SIZE: usize = 16;
+const CACHE_SIZE: NonZeroUsize = new_non_zero_usize(16);
/// This value is fairly unimportant, it's used to avoid heap allocations. The result of it being
/// incorrect is non-substantial from a consensus perspective (and probably also from a
/// performance perspective).
-const TYPICAL_SLOTS_PER_EPOCH: usize = 32;
+pub const TYPICAL_SLOTS_PER_EPOCH: usize = 32;
/// For some given slot, this contains the proposer index (`index`) and the `fork` that should be
/// used to verify their signature.
@@ -59,19 +67,19 @@ impl Default for BeaconProposerCache {
impl BeaconProposerCache {
/// If it is cached, returns the proposer for the block at `slot` where the block has the
/// ancestor block root of `shuffling_decision_block` at `end_slot(slot.epoch() - 1)`.
-pub fn get_slot<T: EthSpec>(
+pub fn get_slot<E: EthSpec>(
&mut self,
shuffling_decision_block: Hash256,
slot: Slot,
) -> Option<Proposer> {
-let epoch = slot.epoch(T::slots_per_epoch());
+let epoch = slot.epoch(E::slots_per_epoch());
let key = (epoch, shuffling_decision_block);
if let Some(cache) = self.cache.get(&key) {
// This `if` statement is likely unnecessary, but it feels like good practice.
if epoch == cache.epoch {
cache
.proposers
-.get(slot.as_usize() % T::SlotsPerEpoch::to_usize())
+.get(slot.as_usize() % E::SlotsPerEpoch::to_usize())
.map(|&index| Proposer {
index,
fork: cache.fork,
@@ -89,7 +97,7 @@ impl BeaconProposerCache {
/// The nth slot in the returned `SmallVec` will be equal to the nth slot in the given `epoch`.
/// E.g., if `epoch == 1` then `smallvec[0]` refers to slot 32 (assuming `SLOTS_PER_EPOCH ==
/// 32`).
-pub fn get_epoch<T: EthSpec>(
+pub fn get_epoch<E: EthSpec>(
&mut self,
shuffling_decision_block: Hash256,
epoch: Epoch,
@@ -125,3 +133,68 @@ impl BeaconProposerCache {
Ok(())
}
}
/// Compute the proposer duties using the head state without cache.
pub fn compute_proposer_duties_from_head<T: BeaconChainTypes>(
request_epoch: Epoch,
chain: &BeaconChain<T>,
) -> Result<(Vec<usize>, Hash256, ExecutionStatus, Fork), BeaconChainError> {
// Atomically collect information about the head whilst holding the canonical head `Arc` as
// short as possible.
let (mut state, head_state_root, head_block_root) = {
let head = chain.canonical_head.cached_head();
// Take a copy of the head state.
let head_state = head.snapshot.beacon_state.clone();
let head_state_root = head.head_state_root();
let head_block_root = head.head_block_root();
(head_state, head_state_root, head_block_root)
};
let execution_status = chain
.canonical_head
.fork_choice_read_lock()
.get_block_execution_status(&head_block_root)
.ok_or(BeaconChainError::HeadMissingFromForkChoice(head_block_root))?;
// Advance the state into the requested epoch.
ensure_state_is_in_epoch(&mut state, head_state_root, request_epoch, &chain.spec)?;
let indices = state
.get_beacon_proposer_indices(&chain.spec)
.map_err(BeaconChainError::from)?;
let dependent_root = state
// The only block which decides its own shuffling is the genesis block.
.proposer_shuffling_decision_root(chain.genesis_block_root)
.map_err(BeaconChainError::from)?;
Ok((indices, dependent_root, execution_status, state.fork()))
}
/// If required, advance `state` to `target_epoch`.
///
/// ## Details
///
/// - Returns an error if `state.current_epoch() > target_epoch`.
/// - No-op if `state.current_epoch() == target_epoch`.
/// - It must be the case that `state.canonical_root() == state_root`, but this function will not
/// check that.
pub fn ensure_state_is_in_epoch<E: EthSpec>(
state: &mut BeaconState<E>,
state_root: Hash256,
target_epoch: Epoch,
spec: &ChainSpec,
) -> Result<(), BeaconChainError> {
match state.current_epoch().cmp(&target_epoch) {
// Protects against an inconsistent slot clock.
Ordering::Greater => Err(BeaconStateError::SlotOutOfBounds.into()),
// The state needs to be advanced.
Ordering::Less => {
let target_slot = target_epoch.start_slot(E::slots_per_epoch());
partial_state_advance(state, Some(state_root), target_slot, spec)
.map_err(BeaconChainError::from)
}
// The state is suitable, nothing to do.
Ordering::Equal => Ok(()),
}
}
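
A minimal caller-side sketch (not part of this diff) of how `ensure_state_is_in_epoch` is meant to be used: advance a copy of the head state to the requested epoch before reading proposer indices. The helper function name is hypothetical; the types come from the imports above.

// Hypothetical helper, for illustration only.
fn proposers_for_epoch<E: EthSpec>(
    mut state: BeaconState<E>,
    state_root: Hash256,
    epoch: Epoch,
    spec: &ChainSpec,
) -> Result<Vec<usize>, BeaconChainError> {
    // No-op if `state` is already in `epoch`; errors if the state is ahead of it.
    ensure_state_is_in_epoch(&mut state, state_root, epoch, spec)?;
    state
        .get_beacon_proposer_indices(spec)
        .map_err(BeaconChainError::from)
}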


@@ -1,19 +1,36 @@
use serde_derive::Serialize;
use types::{beacon_state::CloneConfig, BeaconState, EthSpec, Hash256, SignedBeaconBlock};
use serde::Serialize;
use std::sync::Arc;
use types::{
AbstractExecPayload, BeaconState, EthSpec, FullPayload, Hash256, SignedBeaconBlock,
SignedBlindedBeaconBlock,
};
/// Represents some block and its associated state. Generally, this will be used for tracking the
/// head, justified head and finalized head.
#[derive(Clone, Serialize, PartialEq, Debug)]
pub struct BeaconSnapshot<E: EthSpec> {
pub beacon_block: SignedBeaconBlock<E>,
pub struct BeaconSnapshot<E: EthSpec, Payload: AbstractExecPayload<E> = FullPayload<E>> {
pub beacon_block: Arc<SignedBeaconBlock<E, Payload>>,
pub beacon_block_root: Hash256,
pub beacon_state: BeaconState<E>,
}
impl<E: EthSpec> BeaconSnapshot<E> {
/// This snapshot is to be used for verifying a child of `self.beacon_block`.
#[derive(Debug)]
pub struct PreProcessingSnapshot<T: EthSpec> {
/// This state is equivalent to the `self.beacon_block.state_root()` state that has been
/// advanced forward one slot using `per_slot_processing`. This state is "primed and ready" for
/// the application of another block.
pub pre_state: BeaconState<T>,
/// This value is only set to `Some` if the `pre_state` was *not* advanced forward.
pub beacon_state_root: Option<Hash256>,
pub beacon_block: SignedBlindedBeaconBlock<T>,
pub beacon_block_root: Hash256,
}
impl<E: EthSpec, Payload: AbstractExecPayload<E>> BeaconSnapshot<E, Payload> {
/// Create a new checkpoint.
pub fn new(
beacon_block: SignedBeaconBlock<E>,
beacon_block: Arc<SignedBeaconBlock<E, Payload>>,
beacon_block_root: Hash256,
beacon_state: BeaconState<E>,
) -> Self {
@@ -36,7 +53,7 @@ impl<E: EthSpec> BeaconSnapshot<E> {
/// Update all fields of the checkpoint.
pub fn update(
&mut self,
beacon_block: SignedBeaconBlock<E>,
beacon_block: Arc<SignedBeaconBlock<E, Payload>>,
beacon_block_root: Hash256,
beacon_state: BeaconState<E>,
) {
@@ -44,12 +61,4 @@ impl<E: EthSpec> BeaconSnapshot<E> {
self.beacon_block_root = beacon_block_root;
self.beacon_state = beacon_state;
}
pub fn clone_with(&self, clone_config: CloneConfig) -> Self {
Self {
beacon_block: self.beacon_block.clone(),
beacon_block_root: self.beacon_block_root,
beacon_state: self.beacon_state.clone_with(clone_config),
}
}
}
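
A hedged sketch (not in the diff) of the new construction pattern: after this change the block must be wrapped in an `Arc` before building a snapshot, which makes subsequent clones of the snapshot cheap. The function name is hypothetical.

// Hypothetical helper showing the `Arc`-wrapped constructor.
fn make_snapshot<E: EthSpec>(
    block: SignedBeaconBlock<E>,
    block_root: Hash256,
    state: BeaconState<E>,
) -> BeaconSnapshot<E> {
    // The default `Payload` type parameter is `FullPayload<E>`.
    BeaconSnapshot::new(Arc::new(block), block_root, state)
}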


@@ -0,0 +1,602 @@
use derivative::Derivative;
use slot_clock::SlotClock;
use std::sync::Arc;
use crate::beacon_chain::{BeaconChain, BeaconChainTypes};
use crate::block_verification::{
cheap_state_advance_to_obtain_committees, process_block_slash_info, BlockSlashInfo,
};
use crate::kzg_utils::{validate_blob, validate_blobs};
use crate::{metrics, BeaconChainError};
use kzg::{Error as KzgError, Kzg, KzgCommitment};
use merkle_proof::MerkleTreeError;
use slog::debug;
use ssz_derive::{Decode, Encode};
use ssz_types::VariableList;
use tree_hash::TreeHash;
use types::blob_sidecar::BlobIdentifier;
use types::{BeaconStateError, BlobSidecar, EthSpec, Hash256, SignedBeaconBlockHeader, Slot};
/// An error occurred while validating a gossip blob.
#[derive(Debug)]
pub enum GossipBlobError<E: EthSpec> {
/// The blob sidecar is from a slot that is later than the current slot (with respect to the
/// gossip clock disparity).
///
/// ## Peer scoring
///
/// Assuming the local clock is correct, the peer has sent an invalid message.
FutureSlot {
message_slot: Slot,
latest_permissible_slot: Slot,
},
/// There was an error whilst processing the blob. It is not known if it is
/// valid or invalid.
///
/// ## Peer scoring
///
/// We were unable to process this blob due to an internal error. It's
/// unclear if the blob is valid.
BeaconChainError(BeaconChainError),
/// The `BlobSidecar` was gossiped over an incorrect subnet.
///
/// ## Peer scoring
///
/// The blob is invalid or the peer is faulty.
InvalidSubnet { expected: u64, received: u64 },
/// The sidecar corresponds to a slot older than the finalized head slot.
///
/// ## Peer scoring
///
/// It's unclear if this blob is valid, but it is for a finalized slot and is
/// therefore useless to us.
PastFinalizedSlot {
blob_slot: Slot,
finalized_slot: Slot,
},
/// The proposer index specified in the sidecar does not match the locally computed
/// proposer index.
///
/// ## Peer scoring
///
/// The blob is invalid and the peer is faulty.
ProposerIndexMismatch { sidecar: usize, local: usize },
/// The proposal signature is invalid.
///
/// ## Peer scoring
///
/// The blob is invalid and the peer is faulty.
ProposalSignatureInvalid,
/// The `proposal_index` corresponding to `blob.beacon_block_root` is not known.
///
/// ## Peer scoring
///
/// The blob is invalid and the peer is faulty.
UnknownValidator(u64),
/// The provided blob is not from a later slot than its parent.
///
/// ## Peer scoring
///
/// The blob is invalid and the peer is faulty.
BlobIsNotLaterThanParent { blob_slot: Slot, parent_slot: Slot },
/// The provided blob's parent block is unknown.
///
/// ## Peer scoring
///
/// We cannot process the blob without validating its parent; the peer isn't necessarily faulty.
BlobParentUnknown(Arc<BlobSidecar<E>>),
/// Invalid KZG commitment inclusion proof.
///
/// ## Peer scoring
///
/// The blob sidecar is invalid and the peer is faulty.
InvalidInclusionProof,
/// A blob has already been seen for the given `(sidecar.block_root, sidecar.index)` tuple
/// over gossip or non-gossip sources.
///
/// ## Peer scoring
///
/// The peer isn't faulty, but we do not forward it over gossip.
RepeatBlob {
proposer: u64,
slot: Slot,
index: u64,
},
/// `Kzg` struct hasn't been initialized. This is an internal error.
///
/// ## Peer scoring
///
/// The peer isn't faulty; this is an internal error.
KzgNotInitialized,
/// The kzg verification failed.
///
/// ## Peer scoring
///
/// The blob sidecar is invalid and the peer is faulty.
KzgError(kzg::Error),
/// The kzg commitment inclusion proof failed.
///
/// ## Peer scoring
///
/// The blob sidecar is invalid.
InclusionProof(MerkleTreeError),
/// The pubkey cache timed out.
///
/// ## Peer scoring
///
/// The blob sidecar may be valid, this is an internal error.
PubkeyCacheTimeout,
/// The block conflicts with finalization, no need to propagate.
///
/// ## Peer scoring
///
/// It's unclear if this block is valid, but it conflicts with finality and shouldn't be
/// imported.
NotFinalizedDescendant { block_parent_root: Hash256 },
}
impl<E: EthSpec> std::fmt::Display for GossipBlobError<E> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
GossipBlobError::BlobParentUnknown(blob_sidecar) => {
write!(
f,
"BlobParentUnknown(parent_root:{})",
blob_sidecar.block_parent_root()
)
}
other => write!(f, "{:?}", other),
}
}
}
impl<E: EthSpec> From<BeaconChainError> for GossipBlobError<E> {
fn from(e: BeaconChainError) -> Self {
GossipBlobError::BeaconChainError(e)
}
}
impl<E: EthSpec> From<BeaconStateError> for GossipBlobError<E> {
fn from(e: BeaconStateError) -> Self {
GossipBlobError::BeaconChainError(BeaconChainError::BeaconStateError(e))
}
}
pub type GossipVerifiedBlobList<T> = VariableList<
GossipVerifiedBlob<T>,
<<T as BeaconChainTypes>::EthSpec as EthSpec>::MaxBlobsPerBlock,
>;
/// A wrapper around a `BlobSidecar` that indicates it has been approved for re-gossiping on
/// the p2p network.
#[derive(Debug)]
pub struct GossipVerifiedBlob<T: BeaconChainTypes> {
block_root: Hash256,
blob: KzgVerifiedBlob<T::EthSpec>,
}
impl<T: BeaconChainTypes> GossipVerifiedBlob<T> {
pub fn new(
blob: Arc<BlobSidecar<T::EthSpec>>,
subnet_id: u64,
chain: &BeaconChain<T>,
) -> Result<Self, GossipBlobError<T::EthSpec>> {
let header = blob.signed_block_header.clone();
// We only process slashing info if the gossip verification failed
// since we do not process the blob any further in that case.
validate_blob_sidecar_for_gossip(blob, subnet_id, chain).map_err(|e| {
process_block_slash_info::<_, GossipBlobError<T::EthSpec>>(
chain,
BlockSlashInfo::from_early_error_blob(header, e),
)
})
}
/// Construct a `GossipVerifiedBlob` that is assumed to be valid.
///
/// This should ONLY be used for testing.
pub fn __assumed_valid(blob: Arc<BlobSidecar<T::EthSpec>>) -> Self {
Self {
block_root: blob.block_root(),
blob: KzgVerifiedBlob { blob },
}
}
pub fn id(&self) -> BlobIdentifier {
BlobIdentifier {
block_root: self.block_root,
index: self.blob.blob_index(),
}
}
pub fn block_root(&self) -> Hash256 {
self.block_root
}
pub fn slot(&self) -> Slot {
self.blob.blob.slot()
}
pub fn index(&self) -> u64 {
self.blob.blob.index
}
pub fn kzg_commitment(&self) -> KzgCommitment {
self.blob.blob.kzg_commitment
}
pub fn signed_block_header(&self) -> SignedBeaconBlockHeader {
self.blob.blob.signed_block_header.clone()
}
pub fn block_proposer_index(&self) -> u64 {
self.blob.blob.block_proposer_index()
}
pub fn into_inner(self) -> KzgVerifiedBlob<T::EthSpec> {
self.blob
}
pub fn as_blob(&self) -> &BlobSidecar<T::EthSpec> {
self.blob.as_blob()
}
/// This is cheap, as we're cloning an `Arc`.
pub fn clone_blob(&self) -> Arc<BlobSidecar<T::EthSpec>> {
self.blob.clone_blob()
}
}
/// Wrapper over a `BlobSidecar` for which we have completed kzg verification.
/// i.e. `verify_blob_kzg_proof(blob, commitment, proof) == true`.
#[derive(Debug, Derivative, Clone, Encode, Decode)]
#[derivative(PartialEq, Eq)]
#[ssz(struct_behaviour = "transparent")]
pub struct KzgVerifiedBlob<E: EthSpec> {
blob: Arc<BlobSidecar<E>>,
}
impl<E: EthSpec> PartialOrd for KzgVerifiedBlob<E> {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl<E: EthSpec> Ord for KzgVerifiedBlob<E> {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.blob.cmp(&other.blob)
}
}
impl<E: EthSpec> KzgVerifiedBlob<E> {
pub fn new(blob: Arc<BlobSidecar<E>>, kzg: &Kzg) -> Result<Self, KzgError> {
verify_kzg_for_blob(blob, kzg)
}
pub fn to_blob(self) -> Arc<BlobSidecar<E>> {
self.blob
}
pub fn as_blob(&self) -> &BlobSidecar<E> {
&self.blob
}
/// This is cheap, as we're cloning an `Arc`.
pub fn clone_blob(&self) -> Arc<BlobSidecar<E>> {
self.blob.clone()
}
pub fn blob_index(&self) -> u64 {
self.blob.index
}
/// Construct a `KzgVerifiedBlob` that is assumed to be valid.
///
/// This should ONLY be used for testing.
#[cfg(test)]
pub fn __assumed_valid(blob: Arc<BlobSidecar<E>>) -> Self {
Self { blob }
}
}
/// Complete kzg verification for a `BlobSidecar`.
///
/// Returns an error if the kzg verification check fails.
pub fn verify_kzg_for_blob<E: EthSpec>(
blob: Arc<BlobSidecar<E>>,
kzg: &Kzg,
) -> Result<KzgVerifiedBlob<E>, KzgError> {
validate_blob::<E>(kzg, &blob.blob, blob.kzg_commitment, blob.kzg_proof)?;
Ok(KzgVerifiedBlob { blob })
}
pub struct KzgVerifiedBlobList<E: EthSpec> {
verified_blobs: Vec<KzgVerifiedBlob<E>>,
}
impl<E: EthSpec> KzgVerifiedBlobList<E> {
pub fn new<I: IntoIterator<Item = Arc<BlobSidecar<E>>>>(
blob_list: I,
kzg: &Kzg,
) -> Result<Self, KzgError> {
let blobs = blob_list.into_iter().collect::<Vec<_>>();
verify_kzg_for_blob_list(blobs.iter(), kzg)?;
Ok(Self {
verified_blobs: blobs
.into_iter()
.map(|blob| KzgVerifiedBlob { blob })
.collect(),
})
}
}
impl<E: EthSpec> IntoIterator for KzgVerifiedBlobList<E> {
type Item = KzgVerifiedBlob<E>;
type IntoIter = std::vec::IntoIter<Self::Item>;
fn into_iter(self) -> Self::IntoIter {
self.verified_blobs.into_iter()
}
}
/// Complete kzg verification for a list of `BlobSidecar`s.
/// Returns an error if any of the `BlobSidecar`s fails kzg verification.
///
/// Note: This function should be preferred over calling `verify_kzg_for_blob`
/// in a loop, since it KZG-verifies a list of blobs more efficiently.
pub fn verify_kzg_for_blob_list<'a, E: EthSpec, I>(
blob_iter: I,
kzg: &'a Kzg,
) -> Result<(), KzgError>
where
I: Iterator<Item = &'a Arc<BlobSidecar<E>>>,
{
let (blobs, (commitments, proofs)): (Vec<_>, (Vec<_>, Vec<_>)) = blob_iter
.map(|blob| (&blob.blob, (blob.kzg_commitment, blob.kzg_proof)))
.unzip();
validate_blobs::<E>(kzg, commitments.as_slice(), blobs, proofs.as_slice())
}
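
An illustrative sketch (not part of this diff) of the batching advice in the note above: verify a slice of sidecars with one call to `verify_kzg_for_blob_list` rather than looping over `verify_kzg_for_blob`. The helper name is hypothetical.

// Hypothetical helper: one batched KZG check instead of `blobs.len()` separate checks.
fn verify_all_blobs<E: EthSpec>(
    blobs: &[Arc<BlobSidecar<E>>],
    kzg: &Kzg,
) -> Result<(), KzgError> {
    verify_kzg_for_blob_list(blobs.iter(), kzg)
}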
pub fn validate_blob_sidecar_for_gossip<T: BeaconChainTypes>(
blob_sidecar: Arc<BlobSidecar<T::EthSpec>>,
subnet: u64,
chain: &BeaconChain<T>,
) -> Result<GossipVerifiedBlob<T>, GossipBlobError<T::EthSpec>> {
let blob_slot = blob_sidecar.slot();
let blob_index = blob_sidecar.index;
let block_parent_root = blob_sidecar.block_parent_root();
let blob_proposer_index = blob_sidecar.block_proposer_index();
let block_root = blob_sidecar.block_root();
let blob_epoch = blob_slot.epoch(T::EthSpec::slots_per_epoch());
let signed_block_header = &blob_sidecar.signed_block_header;
// This condition is not possible if we have received the blob from the network
// since we only subscribe to `MaxBlobsPerBlock` subnets over the gossip network.
// We include this check only for completeness.
// Getting this error would imply something very wrong with our network decoding logic.
if blob_index >= T::EthSpec::max_blobs_per_block() as u64 {
return Err(GossipBlobError::InvalidSubnet {
expected: subnet,
received: blob_index,
});
}
// Verify that the blob_sidecar was received on the correct subnet.
if blob_index != subnet {
return Err(GossipBlobError::InvalidSubnet {
expected: blob_index,
received: subnet,
});
}
// Verify that the sidecar is not from a future slot.
let latest_permissible_slot = chain
.slot_clock
.now_with_future_tolerance(chain.spec.maximum_gossip_clock_disparity())
.ok_or(BeaconChainError::UnableToReadSlot)?;
if blob_slot > latest_permissible_slot {
return Err(GossipBlobError::FutureSlot {
message_slot: blob_slot,
latest_permissible_slot,
});
}
// Verify that the sidecar slot is greater than the latest finalized slot
let latest_finalized_slot = chain
.head()
.finalized_checkpoint()
.epoch
.start_slot(T::EthSpec::slots_per_epoch());
if blob_slot <= latest_finalized_slot {
return Err(GossipBlobError::PastFinalizedSlot {
blob_slot,
finalized_slot: latest_finalized_slot,
});
}
// Verify that this is the first blob sidecar received for the tuple:
// (block_header.slot, block_header.proposer_index, blob_sidecar.index)
if chain
.observed_blob_sidecars
.read()
.proposer_is_known(&blob_sidecar)
.map_err(|e| GossipBlobError::BeaconChainError(e.into()))?
{
return Err(GossipBlobError::RepeatBlob {
proposer: blob_proposer_index,
slot: blob_slot,
index: blob_index,
});
}
// Verify the inclusion proof in the sidecar
let _timer = metrics::start_timer(&metrics::BLOB_SIDECAR_INCLUSION_PROOF_VERIFICATION);
if !blob_sidecar
.verify_blob_sidecar_inclusion_proof()
.map_err(GossipBlobError::InclusionProof)?
{
return Err(GossipBlobError::InvalidInclusionProof);
}
drop(_timer);
let fork_choice = chain.canonical_head.fork_choice_read_lock();
// We have already verified that the blob is past finalization, so we can
// just check fork choice for the block's parent.
let Some(parent_block) = fork_choice.get_block(&block_parent_root) else {
return Err(GossipBlobError::BlobParentUnknown(blob_sidecar));
};
// Do not process a blob that does not descend from the finalized root.
// We just loaded the parent_block, so we can be sure that it exists in fork choice.
if !fork_choice.is_finalized_checkpoint_or_descendant(block_parent_root) {
return Err(GossipBlobError::NotFinalizedDescendant { block_parent_root });
}
drop(fork_choice);
if parent_block.slot >= blob_slot {
return Err(GossipBlobError::BlobIsNotLaterThanParent {
blob_slot,
parent_slot: parent_block.slot,
});
}
let proposer_shuffling_root =
if parent_block.slot.epoch(T::EthSpec::slots_per_epoch()) == blob_epoch {
parent_block
.next_epoch_shuffling_id
.shuffling_decision_block
} else {
parent_block.root
};
let proposer_opt = chain
.beacon_proposer_cache
.lock()
.get_slot::<T::EthSpec>(proposer_shuffling_root, blob_slot);
let (proposer_index, fork) = if let Some(proposer) = proposer_opt {
(proposer.index, proposer.fork)
} else {
debug!(
chain.log,
"Proposer shuffling cache miss for blob verification";
"block_root" => %block_root,
"index" => %blob_index,
);
let (parent_state_root, mut parent_state) = chain
.store
.get_advanced_hot_state(block_parent_root, blob_slot, parent_block.state_root)
.map_err(|e| GossipBlobError::BeaconChainError(e.into()))?
.ok_or_else(|| {
BeaconChainError::DBInconsistent(format!(
"Missing state for parent block {block_parent_root:?}",
))
})?;
let state = cheap_state_advance_to_obtain_committees::<_, GossipBlobError<T::EthSpec>>(
&mut parent_state,
Some(parent_state_root),
blob_slot,
&chain.spec,
)?;
let proposers = state.get_beacon_proposer_indices(&chain.spec)?;
let proposer_index = *proposers
.get(blob_slot.as_usize() % T::EthSpec::slots_per_epoch() as usize)
.ok_or_else(|| BeaconChainError::NoProposerForSlot(blob_slot))?;
// Prime the proposer shuffling cache with the newly-learned value.
chain.beacon_proposer_cache.lock().insert(
blob_epoch,
proposer_shuffling_root,
proposers,
state.fork(),
)?;
(proposer_index, state.fork())
};
// Signature verify the signed block header.
let signature_is_valid = {
let pubkey_cache = chain.validator_pubkey_cache.read();
let pubkey = pubkey_cache
.get(proposer_index)
.ok_or_else(|| GossipBlobError::UnknownValidator(proposer_index as u64))?;
signed_block_header.verify_signature::<T::EthSpec>(
pubkey,
&fork,
chain.genesis_validators_root,
&chain.spec,
)
};
if !signature_is_valid {
return Err(GossipBlobError::ProposalSignatureInvalid);
}
if proposer_index != blob_proposer_index as usize {
return Err(GossipBlobError::ProposerIndexMismatch {
sidecar: blob_proposer_index as usize,
local: proposer_index,
});
}
chain
.observed_slashable
.write()
.observe_slashable(
blob_sidecar.slot(),
blob_sidecar.block_proposer_index(),
block_root,
)
.map_err(|e| GossipBlobError::BeaconChainError(e.into()))?;
// Now the signature is valid, store the proposal so we don't accept another blob sidecar
// with the same `BlobIdentifier`.
// It's important to double-check that the proposer still hasn't been observed so we don't
// have a race-condition when verifying two blocks simultaneously.
//
// Note: If this BlobSidecar goes on to fail full verification, we do not evict it from the
// seen_cache, as alternate blob sidecars for the same identifier can still be retrieved over
// RPC. Evicting them from this cache would allow faster propagation over gossip, so we allow
// retrieval of potentially valid blobs over RPC, but try to punish the proposer for signing
// invalid messages. See this issue for more background:
// https://github.com/ethereum/consensus-specs/issues/3261
if chain
.observed_blob_sidecars
.write()
.observe_sidecar(&blob_sidecar)
.map_err(|e| GossipBlobError::BeaconChainError(e.into()))?
{
return Err(GossipBlobError::RepeatBlob {
proposer: proposer_index as u64,
slot: blob_slot,
index: blob_index,
});
}
// Kzg verification for gossip blob sidecar
let kzg = chain
.kzg
.as_ref()
.ok_or(GossipBlobError::KzgNotInitialized)?;
let kzg_verified_blob =
KzgVerifiedBlob::new(blob_sidecar, kzg).map_err(GossipBlobError::KzgError)?;
Ok(GossipVerifiedBlob {
block_root,
blob: kzg_verified_blob,
})
}
/// Returns the canonical root of the given `blob`.
///
/// Use this function to ensure that we report the blob hashing time Prometheus metric.
pub fn get_blob_root<E: EthSpec>(blob: &BlobSidecar<E>) -> Hash256 {
let blob_root_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_BLOB_ROOT);
let blob_root = blob.tree_hash_root();
metrics::stop_timer(blob_root_timer);
blob_root
}
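
For orientation, a hedged sketch (not in the diff) of the gossip entry point defined above: a network handler passes the raw sidecar, the subnet it arrived on, and the chain, and receives a `GossipVerifiedBlob` that is safe to re-gossip. The function name is hypothetical.

// Hypothetical gossip handler shim.
fn on_gossip_blob<T: BeaconChainTypes>(
    blob: Arc<BlobSidecar<T::EthSpec>>,
    subnet_id: u64,
    chain: &BeaconChain<T>,
) -> Result<GossipVerifiedBlob<T>, GossipBlobError<T::EthSpec>> {
    // Runs the slot, subnet, proposer, signature, inclusion-proof and KZG checks above.
    GossipVerifiedBlob::new(blob, subnet_id, chain)
}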


@@ -1,29 +1,50 @@
use crate::{BeaconChain, BeaconChainError, BeaconChainTypes};
use eth2::lighthouse::{AttestationRewards, BlockReward, BlockRewardMeta};
use operation_pool::{AttMaxCover, MaxCover};
use state_processing::per_block_processing::altair::sync_committee::compute_sync_aggregate_rewards;
use types::{BeaconBlockRef, BeaconState, EthSpec, Hash256, RelativeEpoch};
use operation_pool::{AttMaxCover, MaxCover, RewardCache, SplitAttestation};
use state_processing::{
common::get_attesting_indices_from_state,
per_block_processing::altair::sync_committee::compute_sync_aggregate_rewards,
};
use types::{AbstractExecPayload, BeaconBlockRef, BeaconState, EthSpec, Hash256};
impl<T: BeaconChainTypes> BeaconChain<T> {
pub fn compute_block_reward(
pub fn compute_block_reward<Payload: AbstractExecPayload<T::EthSpec>>(
&self,
block: BeaconBlockRef<'_, T::EthSpec>,
block: BeaconBlockRef<'_, T::EthSpec, Payload>,
block_root: Hash256,
state: &BeaconState<T::EthSpec>,
reward_cache: &mut RewardCache,
include_attestations: bool,
) -> Result<BlockReward, BeaconChainError> {
if block.slot() != state.slot() {
return Err(BeaconChainError::BlockRewardSlotError);
}
let active_indices = state.get_cached_active_validator_indices(RelativeEpoch::Current)?;
let total_active_balance = state.get_total_balance(active_indices, &self.spec)?;
let mut per_attestation_rewards = block
reward_cache.update(state)?;
let total_active_balance = state.get_total_active_balance()?;
let split_attestations = block
.body()
.attestations()
.iter()
.map(|att| {
AttMaxCover::new(att, state, total_active_balance, &self.spec)
.ok_or(BeaconChainError::BlockRewardAttestationError)
let attesting_indices = get_attesting_indices_from_state(state, att)?;
Ok(SplitAttestation::new(att.clone(), attesting_indices))
})
.collect::<Result<Vec<_>, BeaconChainError>>()?;
let mut per_attestation_rewards = split_attestations
.iter()
.map(|att| {
AttMaxCover::new(
att.as_ref(),
state,
reward_cache,
total_active_balance,
&self.spec,
)
.ok_or(BeaconChainError::BlockRewardAttestationError)
})
.collect::<Result<Vec<_>, _>>()?;
@@ -34,7 +55,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
let latest_att = &updated[i];
for att in to_update {
att.update_covering_set(latest_att.object(), latest_att.covering_set());
att.update_covering_set(latest_att.intermediate(), latest_att.covering_set());
}
}
@@ -60,11 +81,24 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
.map(|cover| cover.fresh_validators_rewards)
.collect();
// Add the attestation data if desired.
let attestations = if include_attestations {
block
.body()
.attestations()
.iter()
.map(|a| a.data.clone())
.collect()
} else {
vec![]
};
let attestation_rewards = AttestationRewards {
total: attestation_total,
prev_epoch_total,
curr_epoch_total,
per_attestation_rewards,
attestations,
};
// Sync committee rewards.


@@ -18,14 +18,16 @@ type BlockRoot = Hash256;
#[derive(Clone, Default)]
pub struct Timestamps {
pub observed: Option<Duration>,
pub attestable: Option<Duration>,
pub imported: Option<Duration>,
pub set_as_head: Option<Duration>,
}
// Helps arrange delay data so it is more relevant to metrics.
#[derive(Default)]
#[derive(Debug, Default)]
pub struct BlockDelays {
pub observed: Option<Duration>,
pub attestable: Option<Duration>,
pub imported: Option<Duration>,
pub set_as_head: Option<Duration>,
}
@@ -35,6 +37,9 @@ impl BlockDelays {
let observed = times
.observed
.and_then(|observed_time| observed_time.checked_sub(slot_start_time));
let attestable = times
.attestable
.and_then(|attestable_time| attestable_time.checked_sub(slot_start_time));
let imported = times
.imported
.and_then(|imported_time| imported_time.checked_sub(times.observed?));
@@ -43,6 +48,7 @@ impl BlockDelays {
.and_then(|set_as_head_time| set_as_head_time.checked_sub(times.imported?));
BlockDelays {
observed,
attestable,
imported,
set_as_head,
}
@@ -51,7 +57,7 @@ impl BlockDelays {
// If the block was received via gossip, we can record the client type of the peer which sent us
// the block.
#[derive(Clone, Default)]
#[derive(Debug, Clone, Default, PartialEq)]
pub struct BlockPeerInfo {
pub id: Option<String>,
pub client: Option<String>,
@@ -80,6 +86,8 @@ pub struct BlockTimesCache {
/// Helper methods to read from and write to the cache.
impl BlockTimesCache {
/// Set the observation time for `block_root` to `timestamp` if `timestamp` is less than
/// any previous timestamp at which this block was observed.
pub fn set_time_observed(
&mut self,
block_root: BlockRoot,
@@ -92,11 +100,27 @@ impl BlockTimesCache {
.cache
.entry(block_root)
.or_insert_with(|| BlockTimesCacheValue::new(slot));
block_times.timestamps.observed = Some(timestamp);
block_times.peer_info = BlockPeerInfo {
id: peer_id,
client: peer_client,
};
match block_times.timestamps.observed {
Some(existing_observation_time) if existing_observation_time <= timestamp => {
// Existing timestamp is earlier, do nothing.
}
_ => {
// No existing timestamp, or new timestamp is earlier.
block_times.timestamps.observed = Some(timestamp);
block_times.peer_info = BlockPeerInfo {
id: peer_id,
client: peer_client,
};
}
}
}
pub fn set_time_attestable(&mut self, block_root: BlockRoot, slot: Slot, timestamp: Duration) {
let block_times = self
.cache
.entry(block_root)
.or_insert_with(|| BlockTimesCacheValue::new(slot));
block_times.timestamps.attestable = Some(timestamp);
}
pub fn set_time_imported(&mut self, block_root: BlockRoot, slot: Slot, timestamp: Duration) {
@@ -141,3 +165,71 @@ impl BlockTimesCache {
.retain(|_, cache| cache.slot > current_slot.saturating_sub(64_u64));
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn observed_time_uses_minimum() {
let mut cache = BlockTimesCache::default();
let block_root = Hash256::zero();
let slot = Slot::new(100);
let slot_start_time = Duration::from_secs(0);
let ts1 = Duration::from_secs(5);
let ts2 = Duration::from_secs(6);
let ts3 = Duration::from_secs(4);
let peer_info2 = BlockPeerInfo {
id: Some("peer2".to_string()),
client: Some("lighthouse".to_string()),
};
let peer_info3 = BlockPeerInfo {
id: Some("peer3".to_string()),
client: Some("prysm".to_string()),
};
cache.set_time_observed(block_root, slot, ts1, None, None);
assert_eq!(
cache.get_block_delays(block_root, slot_start_time).observed,
Some(ts1)
);
assert_eq!(cache.get_peer_info(block_root), BlockPeerInfo::default());
// Second observation with higher timestamp should not override anything, even though it has
// superior peer info.
cache.set_time_observed(
block_root,
slot,
ts2,
peer_info2.id.clone(),
peer_info2.client.clone(),
);
assert_eq!(
cache.get_block_delays(block_root, slot_start_time).observed,
Some(ts1)
);
assert_eq!(cache.get_peer_info(block_root), BlockPeerInfo::default());
// Third observation with lower timestamp should override everything.
cache.set_time_observed(
block_root,
slot,
ts3,
peer_info3.id.clone(),
peer_info3.client.clone(),
);
assert_eq!(
cache.get_block_delays(block_root, slot_start_time).observed,
Some(ts3)
);
assert_eq!(cache.get_peer_info(block_root), peer_info3);
}
}
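
A small sketch (not part of this diff) tying the new `attestable` timestamp to the existing observation flow; names other than the cache methods are hypothetical.

// Hypothetical recording sequence for a newly seen block.
fn record_block_timings(
    cache: &mut BlockTimesCache,
    block_root: Hash256,
    slot: Slot,
    seen_at: Duration,
    attestable_at: Duration,
) {
    // `set_time_observed` keeps the minimum timestamp across duplicate calls,
    // as exercised by the test above.
    cache.set_time_observed(block_root, slot, seen_at, None, None);
    cache.set_time_attestable(block_root, slot, attestable_at);
}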

File diff suppressed because it is too large


@@ -0,0 +1,547 @@
use crate::blob_verification::{GossipBlobError, GossipVerifiedBlobList};
use crate::block_verification::BlockError;
use crate::data_availability_checker::AvailabilityCheckError;
pub use crate::data_availability_checker::{AvailableBlock, MaybeAvailableBlock};
use crate::eth1_finalization_cache::Eth1FinalizationData;
use crate::{get_block_root, GossipVerifiedBlock, PayloadVerificationOutcome};
use derivative::Derivative;
use ssz_types::VariableList;
use state_processing::ConsensusContext;
use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use types::blob_sidecar::{BlobIdentifier, BlobSidecarError, FixedBlobSidecarList};
use types::{
BeaconBlockRef, BeaconState, BlindedPayload, BlobSidecarList, Epoch, EthSpec, Hash256,
SignedBeaconBlock, SignedBeaconBlockHeader, Slot,
};
/// A block that has been received over RPC. It has 2 internal variants:
///
/// 1. `BlockAndBlobs`: A fully available post-Deneb block with all the blobs available. This variant
/// is only constructed after making consistency checks between blocks and blobs.
/// Hence, it is fully self-contained w.r.t. verification, i.e. this block has all the required
/// data to get verified and imported into fork choice.
///
/// 2. `Block`: This can be a fully available pre-Deneb block **or** a post-Deneb block that may or may
/// not require blobs to be considered fully available.
///
/// Note: We make a distinction from blocks received over gossip because,
/// in a post-Deneb world, the blobs corresponding to a given block that are received
/// over RPC do not contain the proposer signature, for DoS resistance.
#[derive(Clone, Derivative)]
#[derivative(Hash(bound = "E: EthSpec"))]
pub struct RpcBlock<E: EthSpec> {
block_root: Hash256,
block: RpcBlockInner<E>,
}
impl<E: EthSpec> Debug for RpcBlock<E> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "RpcBlock({:?})", self.block_root)
}
}
impl<E: EthSpec> RpcBlock<E> {
pub fn block_root(&self) -> Hash256 {
self.block_root
}
pub fn as_block(&self) -> &SignedBeaconBlock<E> {
match &self.block {
RpcBlockInner::Block(block) => block,
RpcBlockInner::BlockAndBlobs(block, _) => block,
}
}
pub fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
match &self.block {
RpcBlockInner::Block(block) => block.clone(),
RpcBlockInner::BlockAndBlobs(block, _) => block.clone(),
}
}
pub fn blobs(&self) -> Option<&BlobSidecarList<E>> {
match &self.block {
RpcBlockInner::Block(_) => None,
RpcBlockInner::BlockAndBlobs(_, blobs) => Some(blobs),
}
}
}
/// Note: This enum is intentionally private because we want to safely construct the
/// internal variants after applying consistency checks to ensure that the block and blobs
/// are consistent with respect to each other.
#[derive(Debug, Clone, Derivative)]
#[derivative(Hash(bound = "E: EthSpec"))]
enum RpcBlockInner<E: EthSpec> {
/// Single block lookup response. This should potentially hit the data availability cache.
Block(Arc<SignedBeaconBlock<E>>),
/// This variant is used with parent lookups and by-range responses. It should have all blobs
/// ordered, all block roots matching, and the correct number of blobs for this block.
BlockAndBlobs(Arc<SignedBeaconBlock<E>>, BlobSidecarList<E>),
}
impl<E: EthSpec> RpcBlock<E> {
/// Constructs a `Block` variant.
pub fn new_without_blobs(
block_root: Option<Hash256>,
block: Arc<SignedBeaconBlock<E>>,
) -> Self {
let block_root = block_root.unwrap_or_else(|| get_block_root(&block));
Self {
block_root,
block: RpcBlockInner::Block(block),
}
}
/// Constructs a new `BlockAndBlobs` variant after making consistency
/// checks between the provided blocks and blobs.
pub fn new(
block_root: Option<Hash256>,
block: Arc<SignedBeaconBlock<E>>,
blobs: Option<BlobSidecarList<E>>,
) -> Result<Self, AvailabilityCheckError> {
let block_root = block_root.unwrap_or_else(|| get_block_root(&block));
if let (Some(blobs), Ok(block_commitments)) = (
blobs.as_ref(),
block.message().body().blob_kzg_commitments(),
) {
if blobs.len() != block_commitments.len() {
return Err(AvailabilityCheckError::MissingBlobs);
}
for (blob, &block_commitment) in blobs.iter().zip(block_commitments.iter()) {
let blob_commitment = blob.kzg_commitment;
if blob_commitment != block_commitment {
return Err(AvailabilityCheckError::KzgCommitmentMismatch {
block_commitment,
blob_commitment,
});
}
}
}
let inner = match blobs {
Some(blobs) => RpcBlockInner::BlockAndBlobs(block, blobs),
None => RpcBlockInner::Block(block),
};
Ok(Self {
block_root,
block: inner,
})
}
pub fn new_from_fixed(
block_root: Hash256,
block: Arc<SignedBeaconBlock<E>>,
blobs: FixedBlobSidecarList<E>,
) -> Result<Self, AvailabilityCheckError> {
let filtered = blobs
.into_iter()
.filter_map(|b| b.clone())
.collect::<Vec<_>>();
let blobs = if filtered.is_empty() {
None
} else {
Some(VariableList::from(filtered))
};
Self::new(Some(block_root), block, blobs)
}
pub fn deconstruct(
self,
) -> (
Hash256,
Arc<SignedBeaconBlock<E>>,
Option<BlobSidecarList<E>>,
) {
let block_root = self.block_root();
match self.block {
RpcBlockInner::Block(block) => (block_root, block, None),
RpcBlockInner::BlockAndBlobs(block, blobs) => (block_root, block, Some(blobs)),
}
}
pub fn n_blobs(&self) -> usize {
match &self.block {
RpcBlockInner::Block(_) => 0,
RpcBlockInner::BlockAndBlobs(_, blobs) => blobs.len(),
}
}
}
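
A hedged construction sketch (not in the diff): a by-range sync response may or may not carry blobs, and `RpcBlock::new` runs the commitment consistency checks either way. The helper name is hypothetical.

// Hypothetical conversion from a sync response into an `RpcBlock`.
fn rpc_block_from_response<E: EthSpec>(
    block: Arc<SignedBeaconBlock<E>>,
    blobs: Option<BlobSidecarList<E>>,
) -> Result<RpcBlock<E>, AvailabilityCheckError> {
    // The block root is recomputed when the caller does not supply one.
    RpcBlock::new(None, block, blobs)
}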
/// A block that has gone through all pre-Deneb block processing checks, including block processing
/// and execution by an EL client. This block hasn't necessarily completed data availability checks.
///
/// It contains 2 variants:
/// 1. `Available`: This block has been executed and also contains all data to consider it a
/// fully available block, i.e. for post-Deneb, this implies that it contains all the
/// required blobs.
/// 2. `AvailabilityPending`: This block hasn't received all required blobs to consider it a
/// fully available block.
pub enum ExecutedBlock<E: EthSpec> {
Available(AvailableExecutedBlock<E>),
AvailabilityPending(AvailabilityPendingExecutedBlock<E>),
}
impl<E: EthSpec> ExecutedBlock<E> {
pub fn new(
block: MaybeAvailableBlock<E>,
import_data: BlockImportData<E>,
payload_verification_outcome: PayloadVerificationOutcome,
) -> Self {
match block {
MaybeAvailableBlock::Available(available_block) => {
Self::Available(AvailableExecutedBlock::new(
available_block,
import_data,
payload_verification_outcome,
))
}
MaybeAvailableBlock::AvailabilityPending {
block_root: _,
block: pending_block,
} => Self::AvailabilityPending(AvailabilityPendingExecutedBlock::new(
pending_block,
import_data,
payload_verification_outcome,
)),
}
}
pub fn as_block(&self) -> &SignedBeaconBlock<E> {
match self {
Self::Available(available) => available.block.block(),
Self::AvailabilityPending(pending) => &pending.block,
}
}
pub fn block_root(&self) -> Hash256 {
match self {
ExecutedBlock::AvailabilityPending(pending) => pending.import_data.block_root,
ExecutedBlock::Available(available) => available.import_data.block_root,
}
}
}
/// A block that has completed all pre-Deneb block processing checks, including verification
/// by an EL client, **and** has all requisite blob data to be imported into fork choice.
#[derive(PartialEq)]
pub struct AvailableExecutedBlock<E: EthSpec> {
pub block: AvailableBlock<E>,
pub import_data: BlockImportData<E>,
pub payload_verification_outcome: PayloadVerificationOutcome,
}
impl<E: EthSpec> AvailableExecutedBlock<E> {
pub fn new(
block: AvailableBlock<E>,
import_data: BlockImportData<E>,
payload_verification_outcome: PayloadVerificationOutcome,
) -> Self {
Self {
block,
import_data,
payload_verification_outcome,
}
}
pub fn get_all_blob_ids(&self) -> Vec<BlobIdentifier> {
let num_blobs_expected = self
.block
.message()
.body()
.blob_kzg_commitments()
.map_or(0, |commitments| commitments.len());
let mut blob_ids = Vec::with_capacity(num_blobs_expected);
for i in 0..num_blobs_expected {
blob_ids.push(BlobIdentifier {
block_root: self.import_data.block_root,
index: i as u64,
});
}
blob_ids
}
}
/// A block that has completed all pre-Deneb block processing checks and verification
/// by an EL client, but does not have all requisite blob data to be imported into
/// fork choice.
pub struct AvailabilityPendingExecutedBlock<E: EthSpec> {
pub block: Arc<SignedBeaconBlock<E>>,
pub import_data: BlockImportData<E>,
pub payload_verification_outcome: PayloadVerificationOutcome,
}
impl<E: EthSpec> AvailabilityPendingExecutedBlock<E> {
pub fn new(
block: Arc<SignedBeaconBlock<E>>,
import_data: BlockImportData<E>,
payload_verification_outcome: PayloadVerificationOutcome,
) -> Self {
Self {
block,
import_data,
payload_verification_outcome,
}
}
pub fn as_block(&self) -> &SignedBeaconBlock<E> {
&self.block
}
pub fn num_blobs_expected(&self) -> usize {
self.block
.message()
.body()
.blob_kzg_commitments()
.map_or(0, |commitments| commitments.len())
}
}
#[derive(Debug, PartialEq)]
pub struct BlockImportData<E: EthSpec> {
pub block_root: Hash256,
pub state: BeaconState<E>,
pub parent_block: SignedBeaconBlock<E, BlindedPayload<E>>,
pub parent_eth1_finalization_data: Eth1FinalizationData,
pub confirmed_state_roots: Vec<Hash256>,
pub consensus_context: ConsensusContext<E>,
}
pub type GossipVerifiedBlockContents<E> =
(GossipVerifiedBlock<E>, Option<GossipVerifiedBlobList<E>>);
#[derive(Debug)]
pub enum BlockContentsError<E: EthSpec> {
BlockError(BlockError<E>),
BlobError(GossipBlobError<E>),
SidecarError(BlobSidecarError),
}
impl<E: EthSpec> From<BlockError<E>> for BlockContentsError<E> {
fn from(value: BlockError<E>) -> Self {
Self::BlockError(value)
}
}
impl<E: EthSpec> From<GossipBlobError<E>> for BlockContentsError<E> {
fn from(value: GossipBlobError<E>) -> Self {
Self::BlobError(value)
}
}
impl<E: EthSpec> std::fmt::Display for BlockContentsError<E> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
BlockContentsError::BlockError(err) => {
write!(f, "BlockError({})", err)
}
BlockContentsError::BlobError(err) => {
write!(f, "BlobError({})", err)
}
BlockContentsError::SidecarError(err) => {
write!(f, "SidecarError({:?})", err)
}
}
}
}
/// Trait for common block operations.
pub trait AsBlock<E: EthSpec> {
fn slot(&self) -> Slot;
fn epoch(&self) -> Epoch;
fn parent_root(&self) -> Hash256;
fn state_root(&self) -> Hash256;
fn signed_block_header(&self) -> SignedBeaconBlockHeader;
fn message(&self) -> BeaconBlockRef<E>;
fn as_block(&self) -> &SignedBeaconBlock<E>;
fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>>;
fn canonical_root(&self) -> Hash256;
fn into_rpc_block(self) -> RpcBlock<E>;
}
impl<E: EthSpec> AsBlock<E> for Arc<SignedBeaconBlock<E>> {
fn slot(&self) -> Slot {
SignedBeaconBlock::slot(self)
}
fn epoch(&self) -> Epoch {
SignedBeaconBlock::epoch(self)
}
fn parent_root(&self) -> Hash256 {
SignedBeaconBlock::parent_root(self)
}
fn state_root(&self) -> Hash256 {
SignedBeaconBlock::state_root(self)
}
fn signed_block_header(&self) -> SignedBeaconBlockHeader {
SignedBeaconBlock::signed_block_header(self)
}
fn message(&self) -> BeaconBlockRef<E> {
SignedBeaconBlock::message(self)
}
fn as_block(&self) -> &SignedBeaconBlock<E> {
self
}
fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
Arc::<SignedBeaconBlock<E>>::clone(self)
}
fn canonical_root(&self) -> Hash256 {
SignedBeaconBlock::canonical_root(self)
}
fn into_rpc_block(self) -> RpcBlock<E> {
RpcBlock::new_without_blobs(None, self)
}
}
impl<E: EthSpec> AsBlock<E> for MaybeAvailableBlock<E> {
fn slot(&self) -> Slot {
self.as_block().slot()
}
fn epoch(&self) -> Epoch {
self.as_block().epoch()
}
fn parent_root(&self) -> Hash256 {
self.as_block().parent_root()
}
fn state_root(&self) -> Hash256 {
self.as_block().state_root()
}
fn signed_block_header(&self) -> SignedBeaconBlockHeader {
self.as_block().signed_block_header()
}
fn message(&self) -> BeaconBlockRef<E> {
self.as_block().message()
}
fn as_block(&self) -> &SignedBeaconBlock<E> {
match &self {
MaybeAvailableBlock::Available(block) => block.as_block(),
MaybeAvailableBlock::AvailabilityPending {
block_root: _,
block,
} => block,
}
}
fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
match &self {
MaybeAvailableBlock::Available(block) => block.block_cloned(),
MaybeAvailableBlock::AvailabilityPending {
block_root: _,
block,
} => block.clone(),
}
}
fn canonical_root(&self) -> Hash256 {
self.as_block().canonical_root()
}
fn into_rpc_block(self) -> RpcBlock<E> {
match self {
MaybeAvailableBlock::Available(available_block) => available_block.into_rpc_block(),
MaybeAvailableBlock::AvailabilityPending { block_root, block } => {
RpcBlock::new_without_blobs(Some(block_root), block)
}
}
}
}
impl<E: EthSpec> AsBlock<E> for AvailableBlock<E> {
fn slot(&self) -> Slot {
self.block().slot()
}
fn epoch(&self) -> Epoch {
self.block().epoch()
}
fn parent_root(&self) -> Hash256 {
self.block().parent_root()
}
fn state_root(&self) -> Hash256 {
self.block().state_root()
}
fn signed_block_header(&self) -> SignedBeaconBlockHeader {
self.block().signed_block_header()
}
fn message(&self) -> BeaconBlockRef<E> {
self.block().message()
}
fn as_block(&self) -> &SignedBeaconBlock<E> {
self.block()
}
fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
AvailableBlock::block_cloned(self)
}
fn canonical_root(&self) -> Hash256 {
self.block().canonical_root()
}
fn into_rpc_block(self) -> RpcBlock<E> {
let (block_root, block, blobs_opt) = self.deconstruct();
// Circumvent the constructor here, because an Available block will have already had
// consistency checks performed.
let inner = match blobs_opt {
None => RpcBlockInner::Block(block),
Some(blobs) => RpcBlockInner::BlockAndBlobs(block, blobs),
};
RpcBlock {
block_root,
block: inner,
}
}
}
impl<E: EthSpec> AsBlock<E> for RpcBlock<E> {
fn slot(&self) -> Slot {
self.as_block().slot()
}
fn epoch(&self) -> Epoch {
self.as_block().epoch()
}
fn parent_root(&self) -> Hash256 {
self.as_block().parent_root()
}
fn state_root(&self) -> Hash256 {
self.as_block().state_root()
}
fn signed_block_header(&self) -> SignedBeaconBlockHeader {
self.as_block().signed_block_header()
}
fn message(&self) -> BeaconBlockRef<E> {
self.as_block().message()
}
fn as_block(&self) -> &SignedBeaconBlock<E> {
match &self.block {
RpcBlockInner::Block(block) => block,
RpcBlockInner::BlockAndBlobs(block, _) => block,
}
}
fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
match &self.block {
RpcBlockInner::Block(block) => block.clone(),
RpcBlockInner::BlockAndBlobs(block, _) => block.clone(),
}
}
fn canonical_root(&self) -> Hash256 {
self.as_block().canonical_root()
}
fn into_rpc_block(self) -> RpcBlock<E> {
self
}
}
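
A brief sketch (not in the diff) of what the `AsBlock` trait buys: callers can be generic over `Arc<SignedBeaconBlock>`, `RpcBlock`, and the availability wrappers. The function name is hypothetical.

// Hypothetical generic accessor over any block-like type.
fn block_summary<E: EthSpec>(block: &impl AsBlock<E>) -> (Slot, Epoch, Hash256) {
    (block.slot(), block.epoch(), block.parent_root())
}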

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,121 @@
//! Provides tools for checking if a node is ready for the Capella upgrade.
use crate::{BeaconChain, BeaconChainTypes};
use execution_layer::http::{
ENGINE_FORKCHOICE_UPDATED_V2, ENGINE_GET_PAYLOAD_V2, ENGINE_NEW_PAYLOAD_V2,
};
use serde::{Deserialize, Serialize};
use std::fmt;
use std::time::Duration;
use types::*;
/// The time before the Capella fork when we will start issuing warnings about preparation.
use super::merge_readiness::SECONDS_IN_A_WEEK;
pub const CAPELLA_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2;
pub const ENGINE_CAPABILITIES_REFRESH_INTERVAL: u64 = 300;
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
#[serde(tag = "type")]
pub enum CapellaReadiness {
/// The execution engine is capella-enabled (as far as we can tell)
Ready,
/// We are connected to an execution engine which doesn't support the V2 engine api methods
V2MethodsNotSupported { error: String },
/// The capabilities exchange with the EL failed; there might be a problem with
/// connectivity, authentication, or a difference in configuration.
ExchangeCapabilitiesFailed { error: String },
/// The user has not configured an execution endpoint
NoExecutionEndpoint,
}
impl fmt::Display for CapellaReadiness {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
CapellaReadiness::Ready => {
write!(f, "This node appears ready for Capella.")
}
CapellaReadiness::ExchangeCapabilitiesFailed { error } => write!(
f,
"Could not exchange capabilities with the \
execution endpoint: {}",
error
),
CapellaReadiness::NoExecutionEndpoint => write!(
f,
"The --execution-endpoint flag is not specified, this is a \
requirement post-merge"
),
CapellaReadiness::V2MethodsNotSupported { error } => write!(
f,
"Execution endpoint does not support Capella methods: {}",
error
),
}
}
}
impl<T: BeaconChainTypes> BeaconChain<T> {
/// Returns `true` if the Capella fork epoch is set and the Capella fork has occurred or will
/// occur within `CAPELLA_READINESS_PREPARATION_SECONDS`.
pub fn is_time_to_prepare_for_capella(&self, current_slot: Slot) -> bool {
if let Some(capella_epoch) = self.spec.capella_fork_epoch {
let capella_slot = capella_epoch.start_slot(T::EthSpec::slots_per_epoch());
let capella_readiness_preparation_slots =
CAPELLA_READINESS_PREPARATION_SECONDS / self.spec.seconds_per_slot;
// Return `true` if Capella has happened or is within the preparation time.
current_slot + capella_readiness_preparation_slots > capella_slot
} else {
// The Capella fork epoch has not been defined yet, no need to prepare.
false
}
}
/// Attempts to connect to the EL and confirm that it is ready for capella.
pub async fn check_capella_readiness(&self) -> CapellaReadiness {
if let Some(el) = self.execution_layer.as_ref() {
match el
.get_engine_capabilities(Some(Duration::from_secs(
ENGINE_CAPABILITIES_REFRESH_INTERVAL,
)))
.await
{
Err(e) => {
// The EL was either unreachable or responded with an error
CapellaReadiness::ExchangeCapabilitiesFailed {
error: format!("{:?}", e),
}
}
Ok(capabilities) => {
let mut missing_methods = String::from("Required Methods Unsupported:");
let mut all_good = true;
if !capabilities.get_payload_v2 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_GET_PAYLOAD_V2);
all_good = false;
}
if !capabilities.forkchoice_updated_v2 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_FORKCHOICE_UPDATED_V2);
all_good = false;
}
if !capabilities.new_payload_v2 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_NEW_PAYLOAD_V2);
all_good = false;
}
if all_good {
CapellaReadiness::Ready
} else {
CapellaReadiness::V2MethodsNotSupported {
error: missing_methods,
}
}
}
}
} else {
CapellaReadiness::NoExecutionEndpoint
}
}
}
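
A hedged usage sketch (not in the diff) combining the two methods above; the polling function is hypothetical, and a real caller would use the node's logger rather than `println!`.

// Hypothetical readiness poll.
async fn poll_capella_readiness<T: BeaconChainTypes>(chain: &BeaconChain<T>, current_slot: Slot) {
    if chain.is_time_to_prepare_for_capella(current_slot) {
        // `CapellaReadiness` implements `Display`, so it can be logged directly.
        let readiness = chain.check_capella_readiness().await;
        println!("{readiness}");
    }
}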


@@ -1,10 +1,27 @@
use serde_derive::{Deserialize, Serialize};
use types::Checkpoint;
pub use proto_array::{DisallowedReOrgOffsets, ReOrgThreshold};
use serde::{Deserialize, Serialize};
use std::time::Duration;
use types::{Checkpoint, Epoch};
pub const DEFAULT_RE_ORG_HEAD_THRESHOLD: ReOrgThreshold = ReOrgThreshold(20);
pub const DEFAULT_RE_ORG_PARENT_THRESHOLD: ReOrgThreshold = ReOrgThreshold(160);
pub const DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION: Epoch = Epoch::new(2);
/// Default to 1/12th of the slot, which is 1 second on mainnet.
pub const DEFAULT_RE_ORG_CUTOFF_DENOMINATOR: u32 = 12;
pub const DEFAULT_FORK_CHOICE_BEFORE_PROPOSAL_TIMEOUT: u64 = 250;
/// Default fraction of a slot lookahead for payload preparation (12/3 = 4 seconds on mainnet).
pub const DEFAULT_PREPARE_PAYLOAD_LOOKAHEAD_FACTOR: u32 = 3;
/// Fraction of a slot lookahead for fork choice in the state advance timer (500ms on mainnet).
pub const FORK_CHOICE_LOOKAHEAD_FACTOR: u32 = 24;
/// Cache only a small number of states in the parallel cache by default.
pub const DEFAULT_PARALLEL_STATE_CACHE_SIZE: usize = 2;
#[derive(Debug, PartialEq, Eq, Clone, Deserialize, Serialize)]
pub struct ChainConfig {
/// Maximum number of slots to skip when importing a consensus message (e.g., block,
/// attestation, etc).
/// Maximum number of slots to skip when importing an attestation.
///
/// If `None`, there is no limit.
pub import_max_skip_slots: Option<u64>,
@@ -18,6 +35,62 @@ pub struct ChainConfig {
pub enable_lock_timeouts: bool,
/// The max size of a message that can be sent over the network.
pub max_network_size: usize,
/// Maximum percentage of the head committee weight at which to attempt re-orging the canonical head.
pub re_org_head_threshold: Option<ReOrgThreshold>,
/// Minimum percentage of the parent committee weight at which to attempt re-orging the canonical head.
pub re_org_parent_threshold: Option<ReOrgThreshold>,
/// Maximum number of epochs since finalization for attempting a proposer re-org.
pub re_org_max_epochs_since_finalization: Epoch,
/// Maximum delay after the start of the slot at which to propose a reorging block.
pub re_org_cutoff_millis: Option<u64>,
/// Additional epoch offsets at which re-orging block proposals are not permitted.
///
/// By default this list is empty, but it can be useful for reacting to network conditions, e.g.
/// slow gossip of re-org blocks at slot 1 in the epoch.
pub re_org_disallowed_offsets: DisallowedReOrgOffsets,
/// Number of milliseconds to wait for fork choice before proposing a block.
///
/// If set to 0 then block proposal will not wait for fork choice at all.
pub fork_choice_before_proposal_timeout_ms: u64,
/// Number of skip slots in a row before the BN refuses to use connected builders during payload construction.
pub builder_fallback_skips: usize,
/// Number of skip slots in the past `SLOTS_PER_EPOCH` before the BN refuses to use connected
/// builders during payload construction.
pub builder_fallback_skips_per_epoch: usize,
/// Number of epochs since finalization before the BN refuses to use connected builders during
/// payload construction.
pub builder_fallback_epochs_since_finalization: usize,
/// Whether any chain health checks should be considered when deciding whether to use the builder API.
pub builder_fallback_disable_checks: bool,
/// When set to `true`, forget any valid/invalid/optimistic statuses in fork choice during start
/// up.
pub always_reset_payload_statuses: bool,
/// Whether to apply paranoid checks to blocks proposed by this beacon node.
pub paranoid_block_proposal: bool,
/// Optionally set timeout for calls to checkpoint sync endpoint.
pub checkpoint_sync_url_timeout: u64,
/// The offset before the start of a proposal slot at which payload attributes should be sent.
///
/// Low values are useful for execution engines which don't improve their payload after the
/// first call, and high values are useful for ensuring the EL is given ample notice.
pub prepare_payload_lookahead: Duration,
/// Use EL-free optimistic sync for the finalized part of the chain.
pub optimistic_finalized_sync: bool,
/// The size of the shuffling cache.
pub shuffling_cache_size: usize,
/// If using a weak-subjectivity sync, whether we should download blocks all the way back to
/// genesis.
pub genesis_backfill: bool,
/// Whether to send payload attributes every slot, regardless of connected proposers.
///
/// This is useful for block builders and testing.
pub always_prepare_payload: bool,
/// Number of epochs between each migration of data from the hot database to the freezer.
pub epochs_per_migration: u64,
/// Size of the promise cache for de-duplicating parallel state requests.
pub parallel_state_cache_size: usize,
/// When set to `true`, the light client server computes and caches state proofs for serving
/// updates.
}
impl Default for ChainConfig {
@@ -28,6 +101,40 @@ impl Default for ChainConfig {
reconstruct_historic_states: false,
enable_lock_timeouts: true,
max_network_size: 10 * 1_048_576, // 10M
re_org_head_threshold: Some(DEFAULT_RE_ORG_HEAD_THRESHOLD),
re_org_parent_threshold: Some(DEFAULT_RE_ORG_PARENT_THRESHOLD),
re_org_max_epochs_since_finalization: DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION,
re_org_cutoff_millis: None,
re_org_disallowed_offsets: DisallowedReOrgOffsets::default(),
fork_choice_before_proposal_timeout_ms: DEFAULT_FORK_CHOICE_BEFORE_PROPOSAL_TIMEOUT,
// Builder fallback configs that are set in `clap` will override these.
builder_fallback_skips: 3,
builder_fallback_skips_per_epoch: 8,
builder_fallback_epochs_since_finalization: 3,
builder_fallback_disable_checks: false,
always_reset_payload_statuses: false,
paranoid_block_proposal: false,
checkpoint_sync_url_timeout: 60,
prepare_payload_lookahead: Duration::from_secs(4),
// This value isn't actually read except in tests.
optimistic_finalized_sync: true,
shuffling_cache_size: crate::shuffling_cache::DEFAULT_CACHE_SIZE,
genesis_backfill: false,
always_prepare_payload: false,
epochs_per_migration: crate::migrate::DEFAULT_EPOCHS_PER_MIGRATION,
parallel_state_cache_size: DEFAULT_PARALLEL_STATE_CACHE_SIZE,
enable_light_client_server: false,
}
}
}
impl ChainConfig {
/// The latest delay from the start of the slot at which to attempt a 1-slot re-org.
pub fn re_org_cutoff(&self, seconds_per_slot: u64) -> Duration {
self.re_org_cutoff_millis
.map(Duration::from_millis)
.unwrap_or_else(|| {
Duration::from_secs(seconds_per_slot) / DEFAULT_RE_ORG_CUTOFF_DENOMINATOR
})
}
}
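
A worked check (not part of this diff) of the default cutoff arithmetic: with no `re_org_cutoff_millis` override and 12-second mainnet slots, the cutoff is 12 s / `DEFAULT_RE_ORG_CUTOFF_DENOMINATOR` = 1 second into the slot. The test name is hypothetical.

// Hypothetical sanity test for the default re-org cutoff.
#[test]
fn default_re_org_cutoff_is_one_second() {
    let config = ChainConfig::default();
    assert_eq!(config.re_org_cutoff(12), Duration::from_secs(1));
}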


@@ -0,0 +1,639 @@
use crate::blob_verification::{verify_kzg_for_blob_list, GossipVerifiedBlob, KzgVerifiedBlobList};
use crate::block_verification_types::{
AvailabilityPendingExecutedBlock, AvailableExecutedBlock, RpcBlock,
};
pub use crate::data_availability_checker::availability_view::{
AvailabilityView, GetCommitment, GetCommitments,
};
pub use crate::data_availability_checker::child_components::ChildComponents;
use crate::data_availability_checker::overflow_lru_cache::OverflowLRUCache;
use crate::data_availability_checker::processing_cache::ProcessingCache;
use crate::{BeaconChain, BeaconChainTypes, BeaconStore};
use kzg::Kzg;
use parking_lot::RwLock;
pub use processing_cache::ProcessingComponents;
use slasher::test_utils::E;
use slog::{debug, error, Logger};
use slot_clock::SlotClock;
use ssz_types::FixedVector;
use std::fmt;
use std::fmt::Debug;
use std::num::NonZeroUsize;
use std::sync::Arc;
use task_executor::TaskExecutor;
use types::beacon_block_body::KzgCommitmentOpts;
use types::blob_sidecar::{BlobIdentifier, BlobSidecar, FixedBlobSidecarList};
use types::{BlobSidecarList, ChainSpec, Epoch, EthSpec, Hash256, SignedBeaconBlock};
mod availability_view;
mod child_components;
mod error;
mod overflow_lru_cache;
mod processing_cache;
mod state_lru_cache;
pub use error::{Error as AvailabilityCheckError, ErrorCategory as AvailabilityCheckErrorCategory};
use types::non_zero_usize::new_non_zero_usize;
/// The LRU Cache stores `PendingComponents` which can store up to
/// `MAX_BLOBS_PER_BLOCK = 6` blobs each. A `BlobSidecar` is 0.131256 MB. So
/// the maximum size of a `PendingComponents` is ~ 0.787536 MB. Setting this
/// to 1024 means the maximum size of the cache is ~ 0.8 GB. But the cache
/// will target a size of less than 75% of capacity.
pub const OVERFLOW_LRU_CAPACITY: NonZeroUsize = new_non_zero_usize(1024);
/// Until tree-states is implemented, we can't store very many states in memory :(
pub const STATE_LRU_CAPACITY_NON_ZERO: NonZeroUsize = new_non_zero_usize(2);
pub const STATE_LRU_CAPACITY: usize = STATE_LRU_CAPACITY_NON_ZERO.get();
/// This includes a cache for any blocks or blobs that have been received over gossip or RPC
/// and are awaiting more components before they can be imported. Additionally, the
/// `DataAvailabilityChecker` is responsible for KZG verification of block components, as well as
/// checking whether an "availability check" is required at all.
pub struct DataAvailabilityChecker<T: BeaconChainTypes> {
processing_cache: RwLock<ProcessingCache<T::EthSpec>>,
availability_cache: Arc<OverflowLRUCache<T>>,
slot_clock: T::SlotClock,
kzg: Option<Arc<Kzg>>,
log: Logger,
spec: ChainSpec,
}
/// This type is returned after adding a block / blob to the `DataAvailabilityChecker`.
///
/// Indicates if the block is fully `Available` or if we need blobs or blocks
/// to "complete" the requirements for an `AvailableBlock`.
#[derive(PartialEq)]
pub enum Availability<E: EthSpec> {
MissingComponents(Hash256),
Available(Box<AvailableExecutedBlock<E>>),
}
impl<E: EthSpec> Debug for Availability<E> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Self::MissingComponents(block_root) => {
write!(f, "MissingComponents({})", block_root)
}
Self::Available(block) => write!(f, "Available({:?})", block.import_data.block_root),
}
}
}
impl<T: BeaconChainTypes> DataAvailabilityChecker<T> {
pub fn new(
slot_clock: T::SlotClock,
kzg: Option<Arc<Kzg>>,
store: BeaconStore<T>,
log: &Logger,
spec: ChainSpec,
) -> Result<Self, AvailabilityCheckError> {
let overflow_cache = OverflowLRUCache::new(OVERFLOW_LRU_CAPACITY, store, spec.clone())?;
Ok(Self {
processing_cache: <_>::default(),
availability_cache: Arc::new(overflow_cache),
slot_clock,
log: log.clone(),
kzg,
spec,
})
}
/// Checks if the given block root is cached.
pub fn has_block(&self, block_root: &Hash256) -> bool {
self.processing_cache.read().has_block(block_root)
}
/// Get the processing info for a block.
pub fn get_processing_components(
&self,
block_root: Hash256,
) -> Option<ProcessingComponents<T::EthSpec>> {
self.processing_cache.read().get(&block_root).cloned()
}
/// Returns the ids of blobs we still require for the given block root.
/// If a block is cached, only the ids of blobs missing from the given list (up to the block's
/// commitment count) are returned; if there's no block, all possible ids are returned.
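///
/// A hedged sketch of the possible outcomes (values are illustrative):
///
/// ```ignore
/// // Block cached with 3 commitments, only blob index 1 received:
/// //   -> MissingBlobs::KnownMissing(ids for indices 0 and 2)
/// // No block cached yet:
/// //   -> MissingBlobs::PossibleMissing(all ids up to MAX_BLOBS_PER_BLOCK)
/// // Data availability check not required for the current epoch:
/// //   -> MissingBlobs::BlobsNotRequired
/// ```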
pub fn get_missing_blob_ids<V>(
&self,
block_root: Hash256,
block: &Option<Arc<SignedBeaconBlock<T::EthSpec>>>,
blobs: &FixedVector<Option<V>, <T::EthSpec as EthSpec>::MaxBlobsPerBlock>,
) -> MissingBlobs {
let Some(current_slot) = self.slot_clock.now_or_genesis() else {
error!(
self.log,
"Failed to read slot clock when checking for missing blob ids"
);
return MissingBlobs::BlobsNotRequired;
};
let current_epoch = current_slot.epoch(T::EthSpec::slots_per_epoch());
if self.da_check_required_for_epoch(current_epoch) {
match block {
Some(cached_block) => {
let block_commitments = cached_block.get_commitments();
let blob_ids = blobs
.iter()
.take(block_commitments.len())
.enumerate()
.filter_map(|(index, blob_commitment_opt)| {
blob_commitment_opt.is_none().then_some(BlobIdentifier {
block_root,
index: index as u64,
})
})
.collect();
MissingBlobs::KnownMissing(blob_ids)
}
None => {
MissingBlobs::PossibleMissing(BlobIdentifier::get_all_blob_ids::<E>(block_root))
}
}
} else {
MissingBlobs::BlobsNotRequired
}
}
/// Get a blob from the availability cache.
pub fn get_blob(
&self,
blob_id: &BlobIdentifier,
) -> Result<Option<Arc<BlobSidecar<T::EthSpec>>>, AvailabilityCheckError> {
self.availability_cache.peek_blob(blob_id)
}
/// Get a block from the processing cache. This includes any blocks we are currently processing.
pub fn get_block(&self, block_root: &Hash256) -> Option<Arc<SignedBeaconBlock<T::EthSpec>>> {
self.processing_cache
.read()
.get(block_root)
.and_then(|cached| cached.block.clone())
}
/// Put a list of blobs received via RPC into the availability cache. This performs KZG
/// verification on the blobs in the list.
pub fn put_rpc_blobs(
&self,
block_root: Hash256,
blobs: FixedBlobSidecarList<T::EthSpec>,
) -> Result<Availability<T::EthSpec>, AvailabilityCheckError> {
let Some(kzg) = self.kzg.as_ref() else {
return Err(AvailabilityCheckError::KzgNotInitialized);
};
let verified_blobs = KzgVerifiedBlobList::new(Vec::from(blobs).into_iter().flatten(), kzg)
.map_err(AvailabilityCheckError::Kzg)?;
self.availability_cache
.put_kzg_verified_blobs(block_root, verified_blobs)
}
/// Check if we've cached other blobs for this block. If it completes a set and we also
/// have a block cached, return the `Availability` variant triggering block import.
/// Otherwise cache the blob sidecar.
///
/// This should only accept gossip verified blobs, so we should not have to worry about dupes.
pub fn put_gossip_blob(
&self,
gossip_blob: GossipVerifiedBlob<T>,
) -> Result<Availability<T::EthSpec>, AvailabilityCheckError> {
self.availability_cache
.put_kzg_verified_blobs(gossip_blob.block_root(), vec![gossip_blob.into_inner()])
}
/// Check if we have all the blobs for a block. Returns `Availability` which has information
/// about whether all components have been received or more are required.
pub fn put_pending_executed_block(
&self,
executed_block: AvailabilityPendingExecutedBlock<T::EthSpec>,
) -> Result<Availability<T::EthSpec>, AvailabilityCheckError> {
self.availability_cache
.put_pending_executed_block(executed_block)
}
/// Verifies kzg commitments for an RpcBlock, returns a `MaybeAvailableBlock` that may
/// include the fully available block.
///
/// WARNING: This function assumes all required blobs are already present, it does NOT
/// check if there are any missing blobs.
pub fn verify_kzg_for_rpc_block(
&self,
block: RpcBlock<T::EthSpec>,
) -> Result<MaybeAvailableBlock<T::EthSpec>, AvailabilityCheckError> {
let (block_root, block, blobs) = block.deconstruct();
match blobs {
None => {
if self.blobs_required_for_block(&block) {
Ok(MaybeAvailableBlock::AvailabilityPending { block_root, block })
} else {
Ok(MaybeAvailableBlock::Available(AvailableBlock {
block_root,
block,
blobs: None,
}))
}
}
Some(blob_list) => {
let verified_blobs = if self.blobs_required_for_block(&block) {
let kzg = self
.kzg
.as_ref()
.ok_or(AvailabilityCheckError::KzgNotInitialized)?;
verify_kzg_for_blob_list(blob_list.iter(), kzg)
.map_err(AvailabilityCheckError::Kzg)?;
Some(blob_list)
} else {
None
};
Ok(MaybeAvailableBlock::Available(AvailableBlock {
block_root,
block,
blobs: verified_blobs,
}))
}
}
}
/// Checks whether a vector of blocks is available. Returns a vector of `MaybeAvailableBlock`.
/// This is more efficient than calling `verify_kzg_for_rpc_block` in a loop, as it performs
/// all KZG verification at once.
///
/// WARNING: This function assumes all required blobs are already present, it does NOT
/// check if there are any missing blobs.
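///
/// A hedged usage sketch (the surrounding sync context is illustrative):
///
/// ```ignore
/// // Preferred for batches, e.g. blocks downloaded during range sync:
/// let results = checker.verify_kzg_for_rpc_blocks(blocks)?;
/// // rather than calling `verify_kzg_for_rpc_block` once per block.
/// ```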
pub fn verify_kzg_for_rpc_blocks(
&self,
blocks: Vec<RpcBlock<T::EthSpec>>,
) -> Result<Vec<MaybeAvailableBlock<T::EthSpec>>, AvailabilityCheckError> {
let mut results = Vec::with_capacity(blocks.len());
let all_blobs: BlobSidecarList<T::EthSpec> = blocks
.iter()
.filter(|block| self.blobs_required_for_block(block.as_block()))
// this clone is cheap as it's cloning an Arc
.filter_map(|block| block.blobs().cloned())
.flatten()
.collect::<Vec<_>>()
.into();
// verify kzg for all blobs at once
if !all_blobs.is_empty() {
let kzg = self
.kzg
.as_ref()
.ok_or(AvailabilityCheckError::KzgNotInitialized)?;
verify_kzg_for_blob_list(all_blobs.iter(), kzg)?;
}
for block in blocks {
let (block_root, block, blobs) = block.deconstruct();
match blobs {
None => {
if self.blobs_required_for_block(&block) {
results.push(MaybeAvailableBlock::AvailabilityPending { block_root, block })
} else {
results.push(MaybeAvailableBlock::Available(AvailableBlock {
block_root,
block,
blobs: None,
}))
}
}
Some(blob_list) => {
let verified_blobs = if self.blobs_required_for_block(&block) {
Some(blob_list)
} else {
None
};
// already verified kzg for all blobs
results.push(MaybeAvailableBlock::Available(AvailableBlock {
block_root,
block,
blobs: verified_blobs,
}))
}
}
}
Ok(results)
}
/// Determines the blob requirements for a block. If the block is pre-Deneb, no blobs are required.
/// If the block's epoch is prior to the data availability boundary, no blobs are required.
fn blobs_required_for_block(&self, block: &SignedBeaconBlock<T::EthSpec>) -> bool {
block.num_expected_blobs() > 0 && self.da_check_required_for_epoch(block.epoch())
}
/// Adds a block to the processing cache. The block's commitments are unverified, but caching
/// them here is useful to avoid duplicate downloads of blocks and to understand
/// our blob download requirements. We will also serve this block over RPC.
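///
/// A hedged sketch of the intended call order (the caller context is illustrative):
///
/// ```ignore
/// checker.notify_block(block_root, block.clone());
/// // ... block and blob processing proceeds ...
/// checker.remove_notified(&block_root); // once processing concludes
/// ```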
pub fn notify_block(&self, block_root: Hash256, block: Arc<SignedBeaconBlock<T::EthSpec>>) {
self.processing_cache
.write()
.entry(block_root)
.or_default()
.merge_block(block);
}
/// Adds a single blob commitment to the processing cache. The commitment is unverified, but
/// caching it here is useful to avoid duplicate downloads of blobs and to understand
/// our block and blob download requirements.
pub fn notify_gossip_blob(&self, block_root: Hash256, blob: &GossipVerifiedBlob<T>) {
let index = blob.index();
let commitment = blob.kzg_commitment();
self.processing_cache
.write()
.entry(block_root)
.or_default()
.merge_single_blob(index as usize, commitment);
}
/// Adds blob commitments to the processing cache. These commitments are unverified, but caching
/// them here is useful to avoid duplicate downloads of blobs and to understand
/// our block and blob download requirements.
pub fn notify_rpc_blobs(&self, block_root: Hash256, blobs: &FixedBlobSidecarList<T::EthSpec>) {
let mut commitments = KzgCommitmentOpts::<T::EthSpec>::default();
for blob in blobs.iter().flatten() {
if let Some(commitment) = commitments.get_mut(blob.index as usize) {
*commitment = Some(blob.kzg_commitment);
}
}
self.processing_cache
.write()
.entry(block_root)
.or_default()
.merge_blobs(commitments);
}
/// Clears the block and all blobs from the processing cache for a given root, if they exist.
pub fn remove_notified(&self, block_root: &Hash256) {
self.processing_cache.write().remove(block_root)
}
/// The epoch at which we require a data availability check in block processing.
/// `None` if the `Deneb` fork is disabled.
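///
/// A minimal worked sketch (parameter values are hypothetical, not mainnet's):
///
/// ```ignore
/// // With deneb_fork_epoch = 100 and min_epochs_for_blob_sidecars_requests = 4096:
/// // current_epoch = 2000 -> max(100, 2000.saturating_sub(4096)) = 100
/// // current_epoch = 5000 -> max(100, 5000 - 4096) = 904
/// ```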
pub fn data_availability_boundary(&self) -> Option<Epoch> {
self.spec.deneb_fork_epoch.and_then(|fork_epoch| {
self.slot_clock
.now()
.map(|slot| slot.epoch(T::EthSpec::slots_per_epoch()))
.map(|current_epoch| {
std::cmp::max(
fork_epoch,
current_epoch
.saturating_sub(self.spec.min_epochs_for_blob_sidecars_requests),
)
})
})
}
/// Returns `true` if the given epoch is at or after the data availability boundary, `false` otherwise.
pub fn da_check_required_for_epoch(&self, block_epoch: Epoch) -> bool {
self.data_availability_boundary()
.map_or(false, |da_epoch| block_epoch >= da_epoch)
}
/// Returns `true` if the current epoch is greater than or equal to the `Deneb` epoch.
pub fn is_deneb(&self) -> bool {
self.slot_clock.now().map_or(false, |slot| {
self.spec.deneb_fork_epoch.map_or(false, |deneb_epoch| {
let now_epoch = slot.epoch(T::EthSpec::slots_per_epoch());
now_epoch >= deneb_epoch
})
})
}
/// Persist all in-memory components to disk.
pub fn persist_all(&self) -> Result<(), AvailabilityCheckError> {
self.availability_cache.write_all_to_disk()
}
/// Collects metrics from the data availability checker.
pub fn metrics(&self) -> DataAvailabilityCheckerMetrics {
DataAvailabilityCheckerMetrics {
processing_cache_size: self.processing_cache.read().len(),
num_store_entries: self.availability_cache.num_store_entries(),
state_cache_size: self.availability_cache.state_cache_size(),
block_cache_size: self.availability_cache.block_cache_size(),
}
}
}
/// Helper struct to group data availability checker metrics.
pub struct DataAvailabilityCheckerMetrics {
pub processing_cache_size: usize,
pub num_store_entries: usize,
pub state_cache_size: usize,
pub block_cache_size: usize,
}
pub fn start_availability_cache_maintenance_service<T: BeaconChainTypes>(
executor: TaskExecutor,
chain: Arc<BeaconChain<T>>,
) {
// this cache only needs to be maintained if deneb is configured
if chain.spec.deneb_fork_epoch.is_some() {
let overflow_cache = chain.data_availability_checker.availability_cache.clone();
executor.spawn(
async move { availability_cache_maintenance_service(chain, overflow_cache).await },
"availability_cache_service",
);
} else {
debug!(
chain.log,
"Deneb fork not configured, not starting availability cache maintenance service"
);
}
}
async fn availability_cache_maintenance_service<T: BeaconChainTypes>(
chain: Arc<BeaconChain<T>>,
overflow_cache: Arc<OverflowLRUCache<T>>,
) {
let epoch_duration = chain.slot_clock.slot_duration() * T::EthSpec::slots_per_epoch() as u32;
loop {
match chain
.slot_clock
.duration_to_next_epoch(T::EthSpec::slots_per_epoch())
{
Some(duration) => {
// this service should run 3/4 of the way through the epoch
let additional_delay = (epoch_duration * 3) / 4;
tokio::time::sleep(duration + additional_delay).await;
let Some(deneb_fork_epoch) = chain.spec.deneb_fork_epoch else {
// shutdown service if deneb fork epoch not set
break;
};
debug!(
chain.log,
"Availability cache maintenance service firing";
);
let Some(current_epoch) = chain
.slot_clock
.now()
.map(|slot| slot.epoch(T::EthSpec::slots_per_epoch()))
else {
continue;
};
if current_epoch < deneb_fork_epoch {
// we are not in deneb yet
continue;
}
let finalized_epoch = chain
.canonical_head
.fork_choice_read_lock()
.finalized_checkpoint()
.epoch;
// any data belonging to an epoch before this should be pruned
let cutoff_epoch = std::cmp::max(
finalized_epoch + 1,
std::cmp::max(
current_epoch
.saturating_sub(chain.spec.min_epochs_for_blob_sidecars_requests),
deneb_fork_epoch,
),
);
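// Worked sketch with hypothetical numbers: finalized_epoch = 300,
// current_epoch = 320, min_epochs_for_blob_sidecars_requests = 4096 and
// deneb_fork_epoch = 100 give
// cutoff_epoch = max(301, max(320.saturating_sub(4096), 100)) = 301,
// so everything from epochs before 301 is pruned below.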
if let Err(e) = overflow_cache.do_maintenance(cutoff_epoch) {
error!(chain.log, "Failed to maintain availability cache"; "error" => ?e);
}
}
None => {
error!(chain.log, "Failed to read slot clock");
// If we can't read the slot clock, just wait another slot.
tokio::time::sleep(chain.slot_clock.slot_duration()).await;
}
};
}
}
/// A fully available block that is ready to be imported into fork choice.
#[derive(Clone, Debug, PartialEq)]
pub struct AvailableBlock<E: EthSpec> {
block_root: Hash256,
block: Arc<SignedBeaconBlock<E>>,
blobs: Option<BlobSidecarList<E>>,
}
impl<E: EthSpec> AvailableBlock<E> {
pub fn __new_for_testing(
block_root: Hash256,
block: Arc<SignedBeaconBlock<E>>,
blobs: Option<BlobSidecarList<E>>,
) -> Self {
Self {
block_root,
block,
blobs,
}
}
pub fn block(&self) -> &SignedBeaconBlock<E> {
&self.block
}
pub fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
self.block.clone()
}
pub fn blobs(&self) -> Option<&BlobSidecarList<E>> {
self.blobs.as_ref()
}
pub fn deconstruct(
self,
) -> (
Hash256,
Arc<SignedBeaconBlock<E>>,
Option<BlobSidecarList<E>>,
) {
let AvailableBlock {
block_root,
block,
blobs,
} = self;
(block_root, block, blobs)
}
}
#[derive(Debug, Clone)]
pub enum MaybeAvailableBlock<E: EthSpec> {
/// This variant is fully available.
/// i.e. for pre-Deneb blocks it contains the `SignedBeaconBlock` with `blobs: None`, and for
/// post-Deneb blocks it contains the `SignedBeaconBlock` together with its verified `BlobSidecarList`.
Available(AvailableBlock<E>),
/// This variant is not fully available and requires blobs to become fully available.
AvailabilityPending {
block_root: Hash256,
block: Arc<SignedBeaconBlock<E>>,
},
}
impl<E: EthSpec> MaybeAvailableBlock<E> {
pub fn block_cloned(&self) -> Arc<SignedBeaconBlock<E>> {
match self {
Self::Available(block) => block.block_cloned(),
Self::AvailabilityPending { block, .. } => block.clone(),
}
}
}
#[derive(Debug, Clone)]
pub enum MissingBlobs {
/// We know for certain these blobs are missing.
KnownMissing(Vec<BlobIdentifier>),
/// We think these blobs might be missing.
PossibleMissing(Vec<BlobIdentifier>),
/// Blobs are not required.
BlobsNotRequired,
}
impl MissingBlobs {
pub fn new_without_block(block_root: Hash256, is_deneb: bool) -> Self {
if is_deneb {
MissingBlobs::PossibleMissing(BlobIdentifier::get_all_blob_ids::<E>(block_root))
} else {
MissingBlobs::BlobsNotRequired
}
}
pub fn is_empty(&self) -> bool {
match self {
MissingBlobs::KnownMissing(v) => v.is_empty(),
MissingBlobs::PossibleMissing(v) => v.is_empty(),
MissingBlobs::BlobsNotRequired => true,
}
}
pub fn contains(&self, blob_id: &BlobIdentifier) -> bool {
match self {
MissingBlobs::KnownMissing(v) => v.contains(blob_id),
MissingBlobs::PossibleMissing(v) => v.contains(blob_id),
MissingBlobs::BlobsNotRequired => false,
}
}
pub fn remove(&mut self, blob_id: &BlobIdentifier) {
match self {
MissingBlobs::KnownMissing(v) => v.retain(|id| id != blob_id),
MissingBlobs::PossibleMissing(v) => v.retain(|id| id != blob_id),
MissingBlobs::BlobsNotRequired => {}
}
}
pub fn indices(&self) -> Vec<u64> {
match self {
MissingBlobs::KnownMissing(v) => v.iter().map(|id| id.index).collect(),
MissingBlobs::PossibleMissing(v) => v.iter().map(|id| id.index).collect(),
MissingBlobs::BlobsNotRequired => vec![],
}
}
}
impl From<MissingBlobs> for Vec<BlobIdentifier> {
fn from(value: MissingBlobs) -> Self {
match value {
MissingBlobs::KnownMissing(v) => v,
MissingBlobs::PossibleMissing(v) => v,
MissingBlobs::BlobsNotRequired => vec![],
}
}
}


@@ -0,0 +1,507 @@
use super::state_lru_cache::DietAvailabilityPendingExecutedBlock;
use crate::blob_verification::KzgVerifiedBlob;
use crate::block_verification_types::AsBlock;
use crate::data_availability_checker::overflow_lru_cache::PendingComponents;
use crate::data_availability_checker::ProcessingComponents;
use kzg::KzgCommitment;
use ssz_types::FixedVector;
use std::sync::Arc;
use types::beacon_block_body::KzgCommitments;
use types::{BlobSidecar, EthSpec, SignedBeaconBlock};
/// Defines an interface for managing data availability with two key invariants:
///
/// 1. If we haven't seen a block yet, we will insert the first blob for a given (block_root, index),
/// but we won't overwrite it with subsequent blobs for the same (block_root, index).
/// 2. On block insertion, any non-matching blob commitments are evicted.
///
/// Types implementing this trait can be used for validating and managing availability
/// of blocks and blobs in a cache-like data structure.
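///
/// A hedged sketch of the intended behaviour (`cache` is any implementor; values
/// are illustrative):
///
/// ```ignore
/// // No block seen yet: the first blob at index 0 is cached...
/// cache.merge_single_blob(0, blob_a);
/// // ...and a second blob at the same index is ignored.
/// cache.merge_single_blob(0, blob_b);
/// // Once the block arrives, cached blobs whose commitments do not match the
/// // block's commitments are evicted.
/// cache.merge_block(block);
/// ```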
pub trait AvailabilityView<E: EthSpec> {
/// The type representing a block in the implementation.
type BlockType: GetCommitments<E>;
/// The type representing a blob in the implementation. Must implement `Clone`.
type BlobType: Clone + GetCommitment<E>;
/// Returns an immutable reference to the cached block.
fn get_cached_block(&self) -> &Option<Self::BlockType>;
/// Returns an immutable reference to the fixed vector of cached blobs.
fn get_cached_blobs(&self) -> &FixedVector<Option<Self::BlobType>, E::MaxBlobsPerBlock>;
/// Returns a mutable reference to the cached block.
fn get_cached_block_mut(&mut self) -> &mut Option<Self::BlockType>;
/// Returns a mutable reference to the fixed vector of cached blobs.
fn get_cached_blobs_mut(
&mut self,
) -> &mut FixedVector<Option<Self::BlobType>, E::MaxBlobsPerBlock>;
/// Checks if a block exists in the cache.
///
/// Returns:
/// - `true` if a block exists.
/// - `false` otherwise.
fn block_exists(&self) -> bool {
self.get_cached_block().is_some()
}
/// Checks if a blob exists at the given index in the cache.
///
/// Returns:
/// - `true` if a blob exists at the given index.
/// - `false` otherwise.
fn blob_exists(&self, blob_index: usize) -> bool {
self.get_cached_blobs()
.get(blob_index)
.map(|b| b.is_some())
.unwrap_or(false)
}
/// Returns the number of blobs that are expected to be present. Returns `None` if we don't have a
/// block.
///
/// This corresponds to the number of commitments that are present in a block.
fn num_expected_blobs(&self) -> Option<usize> {
self.get_cached_block()
.as_ref()
.map(|b| b.get_commitments().len())
}
/// Returns the number of blobs that have been received and are stored in the cache.
fn num_received_blobs(&self) -> usize {
self.get_cached_blobs().iter().flatten().count()
}
/// Inserts a block into the cache.
fn insert_block(&mut self, block: Self::BlockType) {
*self.get_cached_block_mut() = Some(block)
}
/// Inserts a blob at a specific index in the cache.
///
/// Any existing blob at the index will be replaced.
fn insert_blob_at_index(&mut self, blob_index: usize, blob: Self::BlobType) {
if let Some(b) = self.get_cached_blobs_mut().get_mut(blob_index) {
*b = Some(blob);
}
}
/// Merges a given set of blobs into the cache.
///
/// Blobs are only inserted if:
/// 1. The blob entry at the index is empty and no block exists, or
/// 2. The block exists and its commitment matches the blob's commitment.
fn merge_blobs(&mut self, blobs: FixedVector<Option<Self::BlobType>, E::MaxBlobsPerBlock>) {
for (index, blob) in blobs.iter().cloned().enumerate() {
let Some(blob) = blob else { continue };
self.merge_single_blob(index, blob);
}
}
/// Merges a single blob into the cache.
///
/// Blobs are only inserted if:
/// 1. The blob entry at the index is empty and no block exists, or
/// 2. The block exists and its commitment matches the blob's commitment.
fn merge_single_blob(&mut self, index: usize, blob: Self::BlobType) {
if let Some(cached_block) = self.get_cached_block() {
let block_commitment_opt = cached_block.get_commitments().get(index).copied();
if let Some(block_commitment) = block_commitment_opt {
if block_commitment == *blob.get_commitment() {
self.insert_blob_at_index(index, blob)
}
}
} else if !self.blob_exists(index) {
self.insert_blob_at_index(index, blob)
}
}
/// Inserts a new block and revalidates the existing blobs against it.
///
/// Blobs that don't match the new block's commitments are evicted.
fn merge_block(&mut self, block: Self::BlockType) {
self.insert_block(block);
let reinsert = std::mem::take(self.get_cached_blobs_mut());
self.merge_blobs(reinsert);
}
/// Checks if the block and all of its expected blobs are available in the cache.
///
/// Returns `true` if both the block exists and the number of received blobs matches the number
/// of expected blobs.
fn is_available(&self) -> bool {
if let Some(num_expected_blobs) = self.num_expected_blobs() {
num_expected_blobs == self.num_received_blobs()
} else {
false
}
}
}
/// Implements the `AvailabilityView` trait for a given struct.
///
/// - `$struct_name`: The name of the struct for which to implement `AvailabilityView`.
/// - `$block_type`: The type to use for `BlockType` in the `AvailabilityView` trait.
/// - `$blob_type`: The type to use for `BlobType` in the `AvailabilityView` trait.
/// - `$block_field`: The field name in the struct that holds the cached block.
/// - `$blob_field`: The field name in the struct that holds the cached blobs.
#[macro_export]
macro_rules! impl_availability_view {
($struct_name:ident, $block_type:ty, $blob_type:ty, $block_field:ident, $blob_field:ident) => {
impl<E: EthSpec> AvailabilityView<E> for $struct_name<E> {
type BlockType = $block_type;
type BlobType = $blob_type;
fn get_cached_block(&self) -> &Option<Self::BlockType> {
&self.$block_field
}
fn get_cached_blobs(
&self,
) -> &FixedVector<Option<Self::BlobType>, E::MaxBlobsPerBlock> {
&self.$blob_field
}
fn get_cached_block_mut(&mut self) -> &mut Option<Self::BlockType> {
&mut self.$block_field
}
fn get_cached_blobs_mut(
&mut self,
) -> &mut FixedVector<Option<Self::BlobType>, E::MaxBlobsPerBlock> {
&mut self.$blob_field
}
}
};
}
impl_availability_view!(
ProcessingComponents,
Arc<SignedBeaconBlock<E>>,
KzgCommitment,
block,
blob_commitments
);
impl_availability_view!(
PendingComponents,
DietAvailabilityPendingExecutedBlock<E>,
KzgVerifiedBlob<E>,
executed_block,
verified_blobs
);
pub trait GetCommitments<E: EthSpec> {
fn get_commitments(&self) -> KzgCommitments<E>;
}
pub trait GetCommitment<E: EthSpec> {
fn get_commitment(&self) -> &KzgCommitment;
}
impl<E: EthSpec> GetCommitment<E> for KzgCommitment {
fn get_commitment(&self) -> &KzgCommitment {
self
}
}
// These implementations are required to implement `AvailabilityView` for `PendingComponents`.
impl<E: EthSpec> GetCommitments<E> for DietAvailabilityPendingExecutedBlock<E> {
fn get_commitments(&self) -> KzgCommitments<E> {
self.as_block()
.message()
.body()
.blob_kzg_commitments()
.cloned()
.unwrap_or_default()
}
}
impl<E: EthSpec> GetCommitment<E> for KzgVerifiedBlob<E> {
fn get_commitment(&self) -> &KzgCommitment {
&self.as_blob().kzg_commitment
}
}
// These implementations are required to implement `AvailabilityView` for `ChildComponents`.
impl<E: EthSpec> GetCommitments<E> for Arc<SignedBeaconBlock<E>> {
fn get_commitments(&self) -> KzgCommitments<E> {
self.message()
.body()
.blob_kzg_commitments()
.ok()
.cloned()
.unwrap_or_default()
}
}
impl<E: EthSpec> GetCommitment<E> for Arc<BlobSidecar<E>> {
fn get_commitment(&self) -> &KzgCommitment {
&self.kzg_commitment
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use crate::block_verification_types::BlockImportData;
use crate::eth1_finalization_cache::Eth1FinalizationData;
use crate::test_utils::{generate_rand_block_and_blobs, NumBlobs};
use crate::AvailabilityPendingExecutedBlock;
use crate::PayloadVerificationOutcome;
use fork_choice::PayloadVerificationStatus;
use rand::rngs::StdRng;
use rand::SeedableRng;
use state_processing::ConsensusContext;
use types::test_utils::TestRandom;
use types::{BeaconState, ChainSpec, ForkName, MainnetEthSpec, Slot};
type E = MainnetEthSpec;
type Setup<E> = (
SignedBeaconBlock<E>,
FixedVector<Option<Arc<BlobSidecar<E>>>, <E as EthSpec>::MaxBlobsPerBlock>,
FixedVector<Option<Arc<BlobSidecar<E>>>, <E as EthSpec>::MaxBlobsPerBlock>,
);
pub fn pre_setup() -> Setup<E> {
let mut rng = StdRng::seed_from_u64(0xDEADBEEF0BAD5EEDu64);
let (block, blobs_vec) =
generate_rand_block_and_blobs::<E>(ForkName::Deneb, NumBlobs::Random, &mut rng);
let mut blobs: FixedVector<_, <E as EthSpec>::MaxBlobsPerBlock> = FixedVector::default();
for blob in blobs_vec {
if let Some(b) = blobs.get_mut(blob.index as usize) {
*b = Some(Arc::new(blob));
}
}
let mut invalid_blobs: FixedVector<
Option<Arc<BlobSidecar<E>>>,
<E as EthSpec>::MaxBlobsPerBlock,
> = FixedVector::default();
for (index, blob) in blobs.iter().enumerate() {
if let Some(invalid_blob) = blob {
let mut blob_copy = invalid_blob.as_ref().clone();
blob_copy.kzg_commitment = KzgCommitment::random_for_test(&mut rng);
*invalid_blobs.get_mut(index).unwrap() = Some(Arc::new(blob_copy));
}
}
(block, blobs, invalid_blobs)
}
type ProcessingViewSetup<E> = (
Arc<SignedBeaconBlock<E>>,
FixedVector<Option<KzgCommitment>, <E as EthSpec>::MaxBlobsPerBlock>,
FixedVector<Option<KzgCommitment>, <E as EthSpec>::MaxBlobsPerBlock>,
);
pub fn setup_processing_components(
block: SignedBeaconBlock<E>,
valid_blobs: FixedVector<Option<Arc<BlobSidecar<E>>>, <E as EthSpec>::MaxBlobsPerBlock>,
invalid_blobs: FixedVector<Option<Arc<BlobSidecar<E>>>, <E as EthSpec>::MaxBlobsPerBlock>,
) -> ProcessingViewSetup<E> {
let blobs = FixedVector::from(
valid_blobs
.iter()
.map(|blob_opt| blob_opt.as_ref().map(|blob| blob.kzg_commitment))
.collect::<Vec<_>>(),
);
let invalid_blobs = FixedVector::from(
invalid_blobs
.iter()
.map(|blob_opt| blob_opt.as_ref().map(|blob| blob.kzg_commitment))
.collect::<Vec<_>>(),
);
(Arc::new(block), blobs, invalid_blobs)
}
type PendingComponentsSetup<E> = (
DietAvailabilityPendingExecutedBlock<E>,
FixedVector<Option<KzgVerifiedBlob<E>>, <E as EthSpec>::MaxBlobsPerBlock>,
FixedVector<Option<KzgVerifiedBlob<E>>, <E as EthSpec>::MaxBlobsPerBlock>,
);
pub fn setup_pending_components(
block: SignedBeaconBlock<E>,
valid_blobs: FixedVector<Option<Arc<BlobSidecar<E>>>, <E as EthSpec>::MaxBlobsPerBlock>,
invalid_blobs: FixedVector<Option<Arc<BlobSidecar<E>>>, <E as EthSpec>::MaxBlobsPerBlock>,
) -> PendingComponentsSetup<E> {
let blobs = FixedVector::from(
valid_blobs
.iter()
.map(|blob_opt| {
blob_opt
.as_ref()
.map(|blob| KzgVerifiedBlob::__assumed_valid(blob.clone()))
})
.collect::<Vec<_>>(),
);
let invalid_blobs = FixedVector::from(
invalid_blobs
.iter()
.map(|blob_opt| {
blob_opt
.as_ref()
.map(|blob| KzgVerifiedBlob::__assumed_valid(blob.clone()))
})
.collect::<Vec<_>>(),
);
let dummy_parent = block.clone_as_blinded();
let block = AvailabilityPendingExecutedBlock {
block: Arc::new(block),
import_data: BlockImportData {
block_root: Default::default(),
state: BeaconState::new(0, Default::default(), &ChainSpec::minimal()),
parent_block: dummy_parent,
parent_eth1_finalization_data: Eth1FinalizationData {
eth1_data: Default::default(),
eth1_deposit_index: 0,
},
confirmed_state_roots: vec![],
consensus_context: ConsensusContext::new(Slot::new(0)),
},
payload_verification_outcome: PayloadVerificationOutcome {
payload_verification_status: PayloadVerificationStatus::Verified,
is_valid_merge_transition_block: false,
},
};
(block.into(), blobs, invalid_blobs)
}
pub fn assert_cache_consistent<V: AvailabilityView<E>>(cache: V) {
if let Some(cached_block) = cache.get_cached_block() {
let cached_block_commitments = cached_block.get_commitments();
for index in 0..E::max_blobs_per_block() {
let block_commitment = cached_block_commitments.get(index).copied();
let blob_commitment_opt = cache.get_cached_blobs().get(index).unwrap();
let blob_commitment = blob_commitment_opt.as_ref().map(|b| *b.get_commitment());
assert_eq!(block_commitment, blob_commitment);
}
} else {
panic!("No cached block")
}
}
pub fn assert_empty_blob_cache<V: AvailabilityView<E>>(cache: V) {
for blob in cache.get_cached_blobs().iter() {
assert!(blob.is_none());
}
}
#[macro_export]
macro_rules! generate_tests {
($module_name:ident, $type_name:ty, $block_field:ident, $blob_field:ident, $setup_fn:ident) => {
mod $module_name {
use super::*;
use types::Hash256;
#[test]
fn valid_block_invalid_blobs_valid_blobs() {
let (block_commitments, blobs, random_blobs) = pre_setup();
let (block_commitments, blobs, random_blobs) =
$setup_fn(block_commitments, blobs, random_blobs);
let block_root = Hash256::zero();
let mut cache = <$type_name>::empty(block_root);
cache.merge_block(block_commitments);
cache.merge_blobs(random_blobs);
cache.merge_blobs(blobs);
assert_cache_consistent(cache);
}
#[test]
fn invalid_blobs_block_valid_blobs() {
let (block_commitments, blobs, random_blobs) = pre_setup();
let (block_commitments, blobs, random_blobs) =
$setup_fn(block_commitments, blobs, random_blobs);
let block_root = Hash256::zero();
let mut cache = <$type_name>::empty(block_root);
cache.merge_blobs(random_blobs);
cache.merge_block(block_commitments);
cache.merge_blobs(blobs);
assert_cache_consistent(cache);
}
#[test]
fn invalid_blobs_valid_blobs_block() {
let (block_commitments, blobs, random_blobs) = pre_setup();
let (block_commitments, blobs, random_blobs) =
$setup_fn(block_commitments, blobs, random_blobs);
let block_root = Hash256::zero();
let mut cache = <$type_name>::empty(block_root);
cache.merge_blobs(random_blobs);
cache.merge_blobs(blobs);
cache.merge_block(block_commitments);
assert_empty_blob_cache(cache);
}
#[test]
fn block_valid_blobs_invalid_blobs() {
let (block_commitments, blobs, random_blobs) = pre_setup();
let (block_commitments, blobs, random_blobs) =
$setup_fn(block_commitments, blobs, random_blobs);
let block_root = Hash256::zero();
let mut cache = <$type_name>::empty(block_root);
cache.merge_block(block_commitments);
cache.merge_blobs(blobs);
cache.merge_blobs(random_blobs);
assert_cache_consistent(cache);
}
#[test]
fn valid_blobs_block_invalid_blobs() {
let (block_commitments, blobs, random_blobs) = pre_setup();
let (block_commitments, blobs, random_blobs) =
$setup_fn(block_commitments, blobs, random_blobs);
let block_root = Hash256::zero();
let mut cache = <$type_name>::empty(block_root);
cache.merge_blobs(blobs);
cache.merge_block(block_commitments);
cache.merge_blobs(random_blobs);
assert_cache_consistent(cache);
}
#[test]
fn valid_blobs_invalid_blobs_block() {
let (block_commitments, blobs, random_blobs) = pre_setup();
let (block_commitments, blobs, random_blobs) =
$setup_fn(block_commitments, blobs, random_blobs);
let block_root = Hash256::zero();
let mut cache = <$type_name>::empty(block_root);
cache.merge_blobs(blobs);
cache.merge_blobs(random_blobs);
cache.merge_block(block_commitments);
assert_cache_consistent(cache);
}
}
};
}
generate_tests!(
processing_components_tests,
ProcessingComponents::<E>,
kzg_commitments,
processing_blobs,
setup_processing_components
);
generate_tests!(
pending_components_tests,
PendingComponents<E>,
executed_block,
verified_blobs,
setup_pending_components
);
}


@@ -0,0 +1,69 @@
use crate::block_verification_types::RpcBlock;
use bls::Hash256;
use std::sync::Arc;
use types::blob_sidecar::FixedBlobSidecarList;
use types::{BlobSidecar, EthSpec, SignedBeaconBlock};
/// For requests triggered by an `UnknownBlockParent` or `UnknownBlobParent`, this struct
/// is used to cache components as they are sent to the network service. We can't use the
/// data availability cache currently because any blocks or blobs without parents
/// won't pass validation and therefore won't make it into the cache.
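///
/// A hedged usage sketch (values are illustrative):
///
/// ```ignore
/// // Either convert a downloaded RPC block wholesale...
/// let components = ChildComponents::from(rpc_block);
/// // ...or start empty and merge pieces as they arrive.
/// let mut components = ChildComponents::<E>::empty(block_root);
/// components.merge_blob(blob);
/// ```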
pub struct ChildComponents<E: EthSpec> {
pub block_root: Hash256,
pub downloaded_block: Option<Arc<SignedBeaconBlock<E>>>,
pub downloaded_blobs: FixedBlobSidecarList<E>,
}
impl<E: EthSpec> From<RpcBlock<E>> for ChildComponents<E> {
fn from(value: RpcBlock<E>) -> Self {
let (block_root, block, blobs) = value.deconstruct();
let fixed_blobs = blobs.map(|blobs| {
FixedBlobSidecarList::from(blobs.into_iter().map(Some).collect::<Vec<_>>())
});
Self::new(block_root, Some(block), fixed_blobs)
}
}
impl<E: EthSpec> ChildComponents<E> {
pub fn empty(block_root: Hash256) -> Self {
Self {
block_root,
downloaded_block: None,
downloaded_blobs: <_>::default(),
}
}
pub fn new(
block_root: Hash256,
block: Option<Arc<SignedBeaconBlock<E>>>,
blobs: Option<FixedBlobSidecarList<E>>,
) -> Self {
let mut cache = Self::empty(block_root);
if let Some(block) = block {
cache.merge_block(block);
}
if let Some(blobs) = blobs {
cache.merge_blobs(blobs);
}
cache
}
pub fn merge_block(&mut self, block: Arc<SignedBeaconBlock<E>>) {
self.downloaded_block = Some(block);
}
pub fn merge_blob(&mut self, blob: Arc<BlobSidecar<E>>) {
if let Some(blob_ref) = self.downloaded_blobs.get_mut(blob.index as usize) {
*blob_ref = Some(blob);
}
}
pub fn merge_blobs(&mut self, blobs: FixedBlobSidecarList<E>) {
for blob in blobs.iter().flatten() {
self.merge_blob(blob.clone());
}
}
pub fn clear_blobs(&mut self) {
self.downloaded_blobs = FixedBlobSidecarList::default();
}
}


@@ -0,0 +1,79 @@
use kzg::{Error as KzgError, KzgCommitment};
use types::{BeaconStateError, Hash256};
#[derive(Debug)]
pub enum Error {
Kzg(KzgError),
KzgNotInitialized,
KzgVerificationFailed,
KzgCommitmentMismatch {
blob_commitment: KzgCommitment,
block_commitment: KzgCommitment,
},
Unexpected,
SszTypes(ssz_types::Error),
MissingBlobs,
BlobIndexInvalid(u64),
StoreError(store::Error),
DecodeError(ssz::DecodeError),
ParentStateMissing(Hash256),
BlockReplayError(state_processing::BlockReplayError),
RebuildingStateCaches(BeaconStateError),
}
pub enum ErrorCategory {
/// Internal Errors (not caused by peers)
Internal,
/// Errors caused by faulty / malicious peers
Malicious,
}
impl Error {
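/// Classifies the error for the caller. A hedged usage sketch (the peer
/// down-scoring call is hypothetical, not part of this module):
///
/// ```ignore
/// match err.category() {
///     ErrorCategory::Internal => { /* log and retry; not the peer's fault */ }
///     ErrorCategory::Malicious => { /* down-score the sending peer */ }
/// }
/// ```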
pub fn category(&self) -> ErrorCategory {
match self {
Error::KzgNotInitialized
| Error::SszTypes(_)
| Error::MissingBlobs
| Error::StoreError(_)
| Error::DecodeError(_)
| Error::Unexpected
| Error::ParentStateMissing(_)
| Error::BlockReplayError(_)
| Error::RebuildingStateCaches(_) => ErrorCategory::Internal,
Error::Kzg(_)
| Error::BlobIndexInvalid(_)
| Error::KzgCommitmentMismatch { .. }
| Error::KzgVerificationFailed => ErrorCategory::Malicious,
}
}
}
impl From<ssz_types::Error> for Error {
fn from(value: ssz_types::Error) -> Self {
Self::SszTypes(value)
}
}
impl From<store::Error> for Error {
fn from(value: store::Error) -> Self {
Self::StoreError(value)
}
}
impl From<ssz::DecodeError> for Error {
fn from(value: ssz::DecodeError) -> Self {
Self::DecodeError(value)
}
}
impl From<state_processing::BlockReplayError> for Error {
fn from(value: state_processing::BlockReplayError) -> Self {
Self::BlockReplayError(value)
}
}
impl From<KzgError> for Error {
fn from(value: KzgError) -> Self {
Self::Kzg(value)
}
}

File diff suppressed because it is too large


@@ -0,0 +1,63 @@
use crate::data_availability_checker::AvailabilityView;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::sync::Arc;
use types::beacon_block_body::KzgCommitmentOpts;
use types::{EthSpec, Hash256, SignedBeaconBlock};
/// This cache is used only for gossip blocks/blobs and single block/blob lookups, to give req/resp
/// a view of what we have and what we require. This cache serves a slightly different purpose than
/// gossip caches because it allows us to process duplicate blobs that are valid in gossip.
/// See `AvailabilityView`'s trait definition.
#[derive(Default)]
pub struct ProcessingCache<E: EthSpec> {
processing_cache: HashMap<Hash256, ProcessingComponents<E>>,
}
impl<E: EthSpec> ProcessingCache<E> {
pub fn get(&self, block_root: &Hash256) -> Option<&ProcessingComponents<E>> {
self.processing_cache.get(block_root)
}
pub fn entry(&mut self, block_root: Hash256) -> Entry<'_, Hash256, ProcessingComponents<E>> {
self.processing_cache.entry(block_root)
}
pub fn remove(&mut self, block_root: &Hash256) {
self.processing_cache.remove(block_root);
}
pub fn has_block(&self, block_root: &Hash256) -> bool {
self.processing_cache
.get(block_root)
.map_or(false, |b| b.block_exists())
}
pub fn len(&self) -> usize {
self.processing_cache.len()
}
}
#[derive(Default, Debug, Clone)]
pub struct ProcessingComponents<E: EthSpec> {
/// Blobs required for a block can only be known if we have seen the block. So `Some` here
/// means we've seen it, a `None` means we haven't. The block's KZG commitments help us figure
/// out whether incoming blobs actually match the block.
pub block: Option<Arc<SignedBeaconBlock<E>>>,
/// `KzgCommitments` for blobs are always known, even if we haven't seen the block. See
/// `AvailabilityView`'s trait definition for more details.
pub blob_commitments: KzgCommitmentOpts<E>,
}
impl<E: EthSpec> ProcessingComponents<E> {
pub fn new() -> Self {
Self::default()
}
}
// Not safe for use outside of tests, as this always requires a slot.
#[cfg(test)]
impl<E: EthSpec> ProcessingComponents<E> {
pub fn empty(_block_root: Hash256) -> Self {
Self {
block: None,
blob_commitments: KzgCommitmentOpts::<E>::default(),
}
}
}


@@ -0,0 +1,229 @@
use crate::block_verification_types::AsBlock;
use crate::{
block_verification_types::BlockImportData,
data_availability_checker::{AvailabilityCheckError, STATE_LRU_CAPACITY_NON_ZERO},
eth1_finalization_cache::Eth1FinalizationData,
AvailabilityPendingExecutedBlock, BeaconChainTypes, BeaconStore, PayloadVerificationOutcome,
};
use lru::LruCache;
use parking_lot::RwLock;
use ssz_derive::{Decode, Encode};
use state_processing::{BlockReplayer, ConsensusContext, StateProcessingStrategy};
use std::sync::Arc;
use types::{ssz_tagged_signed_beacon_block, ssz_tagged_signed_beacon_block_arc};
use types::{BeaconState, BlindedPayload, ChainSpec, Epoch, EthSpec, Hash256, SignedBeaconBlock};
/// This mirrors everything in the `AvailabilityPendingExecutedBlock`, except
/// that it is much smaller because it contains only a state root instead of
/// a full `BeaconState`.
#[derive(Encode, Decode, Clone)]
pub struct DietAvailabilityPendingExecutedBlock<E: EthSpec> {
#[ssz(with = "ssz_tagged_signed_beacon_block_arc")]
block: Arc<SignedBeaconBlock<E>>,
state_root: Hash256,
#[ssz(with = "ssz_tagged_signed_beacon_block")]
parent_block: SignedBeaconBlock<E, BlindedPayload<E>>,
parent_eth1_finalization_data: Eth1FinalizationData,
confirmed_state_roots: Vec<Hash256>,
consensus_context: ConsensusContext<E>,
payload_verification_outcome: PayloadVerificationOutcome,
}
/// Just implements the same methods as `AvailabilityPendingExecutedBlock`.
impl<E: EthSpec> DietAvailabilityPendingExecutedBlock<E> {
pub fn as_block(&self) -> &SignedBeaconBlock<E> {
&self.block
}
pub fn num_blobs_expected(&self) -> usize {
self.block
.message()
.body()
.blob_kzg_commitments()
.map_or(0, |commitments| commitments.len())
}
}
/// This LRU cache holds `BeaconState`s used for block import. If the cache overflows,
/// the least recently used state will be dropped. If the dropped state is needed
/// later on, it will be recovered by loading the parent state from disk and replaying the block.
///
/// WARNING: This cache assumes the parent block of any `AvailabilityPendingExecutedBlock`
/// has already been imported into ForkChoice. If this is not the case, the cache
/// will fail to recover the state when the cache overflows because it can't load
/// the parent state!
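///
/// A rough round-trip sketch (setup elided; method names as defined below):
///
/// ```ignore
/// let diet_block = state_lru_cache.register_pending_executed_block(executed_block);
/// // ... later, possibly after the state was evicted from the LRU ...
/// let executed_block = state_lru_cache.recover_pending_executed_block(diet_block)?;
/// ```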
pub struct StateLRUCache<T: BeaconChainTypes> {
states: RwLock<LruCache<Hash256, BeaconState<T::EthSpec>>>,
store: BeaconStore<T>,
spec: ChainSpec,
}
impl<T: BeaconChainTypes> StateLRUCache<T> {
pub fn new(store: BeaconStore<T>, spec: ChainSpec) -> Self {
Self {
states: RwLock::new(LruCache::new(STATE_LRU_CAPACITY_NON_ZERO)),
store,
spec,
}
}
/// This will store the state in the LRU cache and return a
/// `DietAvailabilityPendingExecutedBlock` which is much cheaper to
/// keep around in memory.
pub fn register_pending_executed_block(
&self,
executed_block: AvailabilityPendingExecutedBlock<T::EthSpec>,
) -> DietAvailabilityPendingExecutedBlock<T::EthSpec> {
let state = executed_block.import_data.state;
let state_root = executed_block.block.state_root();
self.states.write().put(state_root, state);
DietAvailabilityPendingExecutedBlock {
block: executed_block.block,
state_root,
parent_block: executed_block.import_data.parent_block,
parent_eth1_finalization_data: executed_block.import_data.parent_eth1_finalization_data,
confirmed_state_roots: executed_block.import_data.confirmed_state_roots,
consensus_context: executed_block.import_data.consensus_context,
payload_verification_outcome: executed_block.payload_verification_outcome,
}
}
/// Recover the `AvailabilityPendingExecutedBlock` from the diet version.
/// This method will first check the cache and if the state is not found
/// it will reconstruct the state by loading the parent state from disk and
/// replaying the block.
pub fn recover_pending_executed_block(
&self,
diet_executed_block: DietAvailabilityPendingExecutedBlock<T::EthSpec>,
) -> Result<AvailabilityPendingExecutedBlock<T::EthSpec>, AvailabilityCheckError> {
let maybe_state = self.states.write().pop(&diet_executed_block.state_root);
if let Some(state) = maybe_state {
let block_root = diet_executed_block.block.canonical_root();
Ok(AvailabilityPendingExecutedBlock {
block: diet_executed_block.block,
import_data: BlockImportData {
block_root,
state,
parent_block: diet_executed_block.parent_block,
parent_eth1_finalization_data: diet_executed_block
.parent_eth1_finalization_data,
confirmed_state_roots: diet_executed_block.confirmed_state_roots,
consensus_context: diet_executed_block.consensus_context,
},
payload_verification_outcome: diet_executed_block.payload_verification_outcome,
})
} else {
self.reconstruct_pending_executed_block(diet_executed_block)
}
}
/// Reconstruct the `AvailabilityPendingExecutedBlock` by loading the parent
/// state from disk and replaying the block. This function does NOT check the
/// LRU cache.
pub fn reconstruct_pending_executed_block(
&self,
diet_executed_block: DietAvailabilityPendingExecutedBlock<T::EthSpec>,
) -> Result<AvailabilityPendingExecutedBlock<T::EthSpec>, AvailabilityCheckError> {
let block_root = diet_executed_block.block.canonical_root();
let state = self.reconstruct_state(&diet_executed_block)?;
Ok(AvailabilityPendingExecutedBlock {
block: diet_executed_block.block,
import_data: BlockImportData {
block_root,
state,
parent_block: diet_executed_block.parent_block,
parent_eth1_finalization_data: diet_executed_block.parent_eth1_finalization_data,
confirmed_state_roots: diet_executed_block.confirmed_state_roots,
consensus_context: diet_executed_block.consensus_context,
},
payload_verification_outcome: diet_executed_block.payload_verification_outcome,
})
}
/// Reconstruct the state by loading the parent state from disk and replaying
/// the block.
fn reconstruct_state(
&self,
diet_executed_block: &DietAvailabilityPendingExecutedBlock<T::EthSpec>,
) -> Result<BeaconState<T::EthSpec>, AvailabilityCheckError> {
let parent_block_root = diet_executed_block.parent_block.canonical_root();
let parent_block_state_root = diet_executed_block.parent_block.state_root();
let (parent_state_root, parent_state) = self
.store
.get_advanced_hot_state(
parent_block_root,
diet_executed_block.parent_block.slot(),
parent_block_state_root,
)
.map_err(AvailabilityCheckError::StoreError)?
.ok_or(AvailabilityCheckError::ParentStateMissing(
parent_block_state_root,
))?;
let state_roots = vec![
Ok((parent_state_root, diet_executed_block.parent_block.slot())),
Ok((
diet_executed_block.state_root,
diet_executed_block.block.slot(),
)),
];
let block_replayer: BlockReplayer<'_, T::EthSpec, AvailabilityCheckError, _> =
BlockReplayer::new(parent_state, &self.spec)
.no_signature_verification()
.state_processing_strategy(StateProcessingStrategy::Accurate)
.state_root_iter(state_roots.into_iter())
.minimal_block_root_verification();
block_replayer
.apply_blocks(vec![diet_executed_block.block.clone_as_blinded()], None)
.map(|block_replayer| block_replayer.into_state())
.and_then(|mut state| {
state
.build_exit_cache(&self.spec)
.map_err(AvailabilityCheckError::RebuildingStateCaches)?;
state
.update_tree_hash_cache()
.map_err(AvailabilityCheckError::RebuildingStateCaches)?;
Ok(state)
})
}
/// Returns the state cache for inspection.
pub fn lru_cache(&self) -> &RwLock<LruCache<Hash256, BeaconState<T::EthSpec>>> {
&self.states
}
/// Removes any states from the cache belonging to epochs before the given cutoff epoch.
pub fn do_maintenance(&self, cutoff_epoch: Epoch) {
let mut write_lock = self.states.write();
while let Some((_, state)) = write_lock.peek_lru() {
if state.slot().epoch(T::EthSpec::slots_per_epoch()) < cutoff_epoch {
write_lock.pop_lru();
} else {
break;
}
}
}
}
/// This can only be used during testing. The intended way to
/// obtain a `DietAvailabilityPendingExecutedBlock` is to call
/// `register_pending_executed_block` on the `StateLRUCache`.
#[cfg(test)]
impl<E: EthSpec> From<AvailabilityPendingExecutedBlock<E>>
for DietAvailabilityPendingExecutedBlock<E>
{
fn from(value: AvailabilityPendingExecutedBlock<E>) -> Self {
Self {
block: value.block,
state_root: value.import_data.state.canonical_root(),
parent_block: value.import_data.parent_block,
parent_eth1_finalization_data: value.import_data.parent_eth1_finalization_data,
confirmed_state_roots: value.import_data.confirmed_state_roots,
consensus_context: value.import_data.consensus_context,
payload_verification_outcome: value.payload_verification_outcome,
}
}
}


@@ -0,0 +1,121 @@
//! Provides tools for checking if a node is ready for the Deneb upgrade.
use crate::{BeaconChain, BeaconChainTypes};
use execution_layer::http::{
ENGINE_FORKCHOICE_UPDATED_V3, ENGINE_GET_PAYLOAD_V3, ENGINE_NEW_PAYLOAD_V3,
};
use serde::{Deserialize, Serialize};
use std::fmt;
use std::time::Duration;
use types::*;
use super::merge_readiness::SECONDS_IN_A_WEEK;
/// The time before the Deneb fork when we will start issuing warnings about preparation.
pub const DENEB_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2;
pub const ENGINE_CAPABILITIES_REFRESH_INTERVAL: u64 = 300;
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
#[serde(tag = "type")]
pub enum DenebReadiness {
/// The execution engine is deneb-enabled (as far as we can tell)
Ready,
/// We are connected to an execution engine which doesn't support the V3 engine api methods
V3MethodsNotSupported { error: String },
/// The capabilities exchange with the EL failed; there might be a problem with
/// connectivity, authentication or a difference in configuration.
ExchangeCapabilitiesFailed { error: String },
/// The user has not configured an execution endpoint
NoExecutionEndpoint,
}
impl fmt::Display for DenebReadiness {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DenebReadiness::Ready => {
write!(f, "This node appears ready for Deneb.")
}
DenebReadiness::ExchangeCapabilitiesFailed { error } => write!(
f,
"Could not exchange capabilities with the \
execution endpoint: {}",
error
),
DenebReadiness::NoExecutionEndpoint => write!(
f,
"The --execution-endpoint flag is not specified, this is a \
requirement post-merge"
),
DenebReadiness::V3MethodsNotSupported { error } => write!(
f,
"Execution endpoint does not support Deneb methods: {}",
error
),
}
}
}
impl<T: BeaconChainTypes> BeaconChain<T> {
/// Returns `true` if deneb epoch is set and Deneb fork has occurred or will
/// occur within `DENEB_READINESS_PREPARATION_SECONDS`
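///
/// A worked sketch of the window size (assuming mainnet-style 12-second slots):
/// `DENEB_READINESS_PREPARATION_SECONDS = 2 * 604_800 = 1_209_600` seconds,
/// i.e. `1_209_600 / 12 = 100_800` slots before the fork epoch's start slot.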
pub fn is_time_to_prepare_for_deneb(&self, current_slot: Slot) -> bool {
if let Some(deneb_epoch) = self.spec.deneb_fork_epoch {
let deneb_slot = deneb_epoch.start_slot(T::EthSpec::slots_per_epoch());
let deneb_readiness_preparation_slots =
DENEB_READINESS_PREPARATION_SECONDS / self.spec.seconds_per_slot;
// Return `true` if Deneb has happened or is within the preparation time.
current_slot + deneb_readiness_preparation_slots > deneb_slot
} else {
// The Deneb fork epoch has not been defined yet, no need to prepare.
false
}
}
/// Attempts to connect to the EL and confirm that it is ready for Deneb.
pub async fn check_deneb_readiness(&self) -> DenebReadiness {
if let Some(el) = self.execution_layer.as_ref() {
match el
.get_engine_capabilities(Some(Duration::from_secs(
ENGINE_CAPABILITIES_REFRESH_INTERVAL,
)))
.await
{
Err(e) => {
// The EL was either unreachable or responded with an error
DenebReadiness::ExchangeCapabilitiesFailed {
error: format!("{:?}", e),
}
}
Ok(capabilities) => {
let mut missing_methods = String::from("Required Methods Unsupported:");
let mut all_good = true;
if !capabilities.get_payload_v3 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_GET_PAYLOAD_V3);
all_good = false;
}
if !capabilities.forkchoice_updated_v3 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_FORKCHOICE_UPDATED_V3);
all_good = false;
}
if !capabilities.new_payload_v3 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_NEW_PAYLOAD_V3);
all_good = false;
}
if all_good {
DenebReadiness::Ready
} else {
DenebReadiness::V3MethodsNotSupported {
error: missing_methods,
}
}
}
}
} else {
DenebReadiness::NoExecutionEndpoint
}
}
}


@@ -1,9 +1,11 @@
use crate::data_availability_checker::AvailableBlock;
use crate::{
attester_cache::{CommitteeLengths, Error},
metrics,
};
use parking_lot::RwLock;
use proto_array::Block as ProtoBlock;
use std::sync::Arc;
use types::*;
pub struct CacheItem<E: EthSpec> {
@@ -18,7 +20,8 @@ pub struct CacheItem<E: EthSpec> {
/*
* Values used to make the block available.
*/
block: SignedBeaconBlock<E>,
block: Arc<SignedBeaconBlock<E>>,
blobs: Option<BlobSidecarList<E>>,
proto_block: ProtoBlock,
}
@@ -48,7 +51,7 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
pub fn add_head_block(
&self,
beacon_block_root: Hash256,
block: SignedBeaconBlock<E>,
block: AvailableBlock<E>,
proto_block: ProtoBlock,
state: &BeaconState<E>,
spec: &ChainSpec,
@@ -66,6 +69,7 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
},
};
let (_, block, blobs) = block.deconstruct();
let item = CacheItem {
epoch,
committee_lengths,
@@ -73,6 +77,7 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
source,
target,
block,
blobs,
proto_block,
};
@@ -85,7 +90,7 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
///
/// - There is a cache `item` present.
/// - If `request_slot` is in the same epoch as `item.epoch`.
/// - If `request_index` does not exceed `item.comittee_count`.
/// - If `request_index` does not exceed `item.committee_count`.
pub fn try_attest(
&self,
request_slot: Slot,
@@ -93,9 +98,7 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
spec: &ChainSpec,
) -> Result<Option<Attestation<E>>, Error> {
let lock = self.item.read();
let item = if let Some(item) = lock.as_ref() {
item
} else {
let Some(item) = lock.as_ref() else {
return Ok(None);
};
@@ -104,6 +107,10 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
return Ok(None);
}
if request_slot < item.block.slot() {
return Ok(None);
}
let committee_count = item
.committee_lengths
.get_committee_count_per_slot::<E>(spec)?;
@@ -142,7 +149,7 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
}
/// Returns the block, if `block_root` matches the cached item.
pub fn get_block(&self, block_root: Hash256) -> Option<SignedBeaconBlock<E>> {
pub fn get_block(&self, block_root: Hash256) -> Option<Arc<SignedBeaconBlock<E>>> {
self.item
.read()
.as_ref()
@@ -150,6 +157,15 @@ impl<E: EthSpec> EarlyAttesterCache<E> {
.map(|item| item.block.clone())
}
/// Returns the blobs, if `block_root` matches the cached item.
pub fn get_blobs(&self, block_root: Hash256) -> Option<BlobSidecarList<E>> {
self.item
.read()
.as_ref()
.filter(|item| item.beacon_block_root == block_root)
.and_then(|item| item.blobs.clone())
}
/// Returns the proto-array block, if `block_root` matches the cached item.
pub fn get_proto_block(&self, block_root: Hash256) -> Option<ProtoBlock> {
self.item


@@ -0,0 +1,123 @@
//! Provides tools for checking if a node is ready for the Electra upgrade and following merge
//! transition.
use crate::{BeaconChain, BeaconChainTypes};
use execution_layer::http::{
ENGINE_FORKCHOICE_UPDATED_V3, ENGINE_GET_PAYLOAD_V3, ENGINE_NEW_PAYLOAD_V3,
};
use serde::{Deserialize, Serialize};
use std::fmt;
use std::time::Duration;
use types::*;
use super::merge_readiness::SECONDS_IN_A_WEEK;
/// The time before the Electra fork when we will start issuing warnings about preparation.
pub const ELECTRA_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2;
pub const ENGINE_CAPABILITIES_REFRESH_INTERVAL: u64 = 300;
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
#[serde(tag = "type")]
pub enum ElectraReadiness {
/// The execution engine is electra-enabled (as far as we can tell)
Ready,
/// We are connected to an execution engine which doesn't support the V3 engine api methods
V3MethodsNotSupported { error: String },
/// The capabilities exchange with the EL failed; there might be a problem with
/// connectivity, authentication or a difference in configuration.
ExchangeCapabilitiesFailed { error: String },
/// The user has not configured an execution endpoint
NoExecutionEndpoint,
}
impl fmt::Display for ElectraReadiness {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ElectraReadiness::Ready => {
write!(f, "This node appears ready for Electra.")
}
ElectraReadiness::ExchangeCapabilitiesFailed { error } => write!(
f,
"Could not exchange capabilities with the \
execution endpoint: {}",
error
),
ElectraReadiness::NoExecutionEndpoint => write!(
f,
"The --execution-endpoint flag is not specified, this is a \
requirement post-merge"
),
ElectraReadiness::V3MethodsNotSupported { error } => write!(
f,
"Execution endpoint does not support Electra methods: {}",
error
),
}
}
}
impl<T: BeaconChainTypes> BeaconChain<T> {
/// Returns `true` if electra epoch is set and Electra fork has occurred or will
/// occur within `ELECTRA_READINESS_PREPARATION_SECONDS`
pub fn is_time_to_prepare_for_electra(&self, current_slot: Slot) -> bool {
if let Some(electra_epoch) = self.spec.electra_fork_epoch {
let electra_slot = electra_epoch.start_slot(T::EthSpec::slots_per_epoch());
let electra_readiness_preparation_slots =
ELECTRA_READINESS_PREPARATION_SECONDS / self.spec.seconds_per_slot;
// Return `true` if Electra has happened or is within the preparation time.
current_slot + electra_readiness_preparation_slots > electra_slot
} else {
// The Electra fork epoch has not been defined yet, no need to prepare.
false
}
}
/// Attempts to connect to the EL and confirm that it is ready for electra.
pub async fn check_electra_readiness(&self) -> ElectraReadiness {
if let Some(el) = self.execution_layer.as_ref() {
match el
.get_engine_capabilities(Some(Duration::from_secs(
ENGINE_CAPABILITIES_REFRESH_INTERVAL,
)))
.await
{
Err(e) => {
// The EL was either unreachable or responded with an error
ElectraReadiness::ExchangeCapabilitiesFailed {
error: format!("{:?}", e),
}
}
Ok(capabilities) => {
// TODO(electra): Update in the event we get V4s.
let mut missing_methods = String::from("Required Methods Unsupported:");
let mut all_good = true;
if !capabilities.get_payload_v3 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_GET_PAYLOAD_V3);
all_good = false;
}
if !capabilities.forkchoice_updated_v3 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_FORKCHOICE_UPDATED_V3);
all_good = false;
}
if !capabilities.new_payload_v3 {
missing_methods.push(' ');
missing_methods.push_str(ENGINE_NEW_PAYLOAD_V3);
all_good = false;
}
if all_good {
ElectraReadiness::Ready
} else {
ElectraReadiness::V3MethodsNotSupported {
error: missing_methods,
}
}
}
}
} else {
ElectraReadiness::NoExecutionEndpoint
}
}
}


@@ -1,13 +1,18 @@
use crate::attester_cache::Error as AttesterCacheError;
use crate::beacon_block_streamer::Error as BlockStreamerError;
use crate::beacon_chain::ForkChoiceError;
use crate::beacon_fork_choice_store::Error as ForkChoiceStoreError;
use crate::data_availability_checker::AvailabilityCheckError;
use crate::eth1_chain::Error as Eth1ChainError;
use crate::historical_blocks::HistoricalBlockError;
use crate::migrate::PruningError;
use crate::naive_aggregation_pool::Error as NaiveAggregationError;
use crate::observed_aggregates::Error as ObservedAttestationsError;
use crate::observed_attesters::Error as ObservedAttestersError;
use crate::observed_blob_sidecars::Error as ObservedBlobSidecarsError;
use crate::observed_block_producers::Error as ObservedBlockProducersError;
use execution_layer::PayloadStatus;
use fork_choice::ExecutionStatus;
use futures::channel::mpsc::TrySendError;
use operation_pool::OpPoolError;
use safe_arith::ArithError;
@@ -15,15 +20,17 @@ use ssz_types::Error as SszTypesError;
use state_processing::{
block_signature_verifier::Error as BlockSignatureVerifierError,
per_block_processing::errors::{
AttestationValidationError, AttesterSlashingValidationError, ExitValidationError,
ProposerSlashingValidationError, SyncCommitteeMessageValidationError,
AttestationValidationError, AttesterSlashingValidationError,
BlsExecutionChangeValidationError, ExitValidationError, ProposerSlashingValidationError,
SyncCommitteeMessageValidationError,
},
signature_sets::Error as SignatureSetError,
state_advance::Error as StateAdvanceError,
BlockProcessingError, BlockReplayError, SlotProcessingError,
BlockProcessingError, BlockReplayError, EpochProcessingError, SlotProcessingError,
};
use std::time::Duration;
use task_executor::ShutdownReason;
use tokio::task::JoinError;
use types::*;
macro_rules! easy_from_to {
@@ -42,13 +49,13 @@ pub enum BeaconChainError {
UnableToReadSlot,
UnableToComputeTimeAtSlot,
RevertedFinalizedEpoch {
previous_epoch: Epoch,
new_epoch: Epoch,
old: Checkpoint,
new: Checkpoint,
},
SlotClockDidNotStart,
NoStateForSlot(Slot),
UnableToFindTargetRoot(Slot),
BeaconStateError(BeaconStateError),
EpochCacheError(EpochCacheError),
DBInconsistent(String),
DBError(store::Error),
ForkChoiceError(ForkChoiceError),
@@ -56,6 +63,7 @@ pub enum BeaconChainError {
MissingBeaconBlock(Hash256),
MissingBeaconState(Hash256),
SlotProcessingError(SlotProcessingError),
EpochProcessingError(EpochProcessingError),
StateAdvanceError(StateAdvanceError),
UnableToAdvanceState(String),
NoStateForAttestation {
@@ -67,6 +75,7 @@ pub enum BeaconChainError {
ExitValidationError(ExitValidationError),
ProposerSlashingValidationError(ProposerSlashingValidationError),
AttesterSlashingValidationError(AttesterSlashingValidationError),
BlsExecutionChangeValidationError(BlsExecutionChangeValidationError),
StateSkipTooLarge {
start_slot: Slot,
requested_slot: Slot,
@@ -82,13 +91,10 @@ pub enum BeaconChainError {
ValidatorPubkeyCacheLockTimeout,
SnapshotCacheLockTimeout,
IncorrectStateForAttestation(RelativeEpochError),
InvalidValidatorPubkeyBytes(bls::Error),
ValidatorPubkeyCacheIncomplete(usize),
SignatureSetError(SignatureSetError),
BlockSignatureVerifierError(state_processing::block_signature_verifier::Error),
BlockReplayError(BlockReplayError),
DuplicateValidatorPublicKey,
ValidatorPubkeyCacheFileError(String),
ValidatorIndexUnknown(usize),
ValidatorPubkeyUnknown(PublicKeyBytes),
OpPoolError(OpPoolError),
@@ -96,6 +102,7 @@ pub enum BeaconChainError {
ObservedAttestationsError(ObservedAttestationsError),
ObservedAttestersError(ObservedAttestersError),
ObservedBlockProducersError(ObservedBlockProducersError),
ObservedBlobSidecarsError(ObservedBlobSidecarsError),
AttesterCacheError(AttesterCacheError),
PruningError(PruningError),
ArithError(ArithError),
@@ -135,28 +142,101 @@ pub enum BeaconChainError {
new_slot: Slot,
},
AltairForkDisabled,
BuilderMissing,
ExecutionLayerMissing,
BlockVariantLacksExecutionPayload(Hash256),
ExecutionLayerErrorPayloadReconstruction(ExecutionBlockHash, Box<execution_layer::Error>),
EngineGetCapabilititesFailed(Box<execution_layer::Error>),
ExecutionLayerGetBlockByNumberFailed(Box<execution_layer::Error>),
ExecutionLayerGetBlockByHashFailed(Box<execution_layer::Error>),
BlockHashMissingFromExecutionLayer(ExecutionBlockHash),
InconsistentPayloadReconstructed {
slot: Slot,
exec_block_hash: ExecutionBlockHash,
canonical_transactions_root: Hash256,
reconstructed_transactions_root: Hash256,
},
BlockStreamerError(BlockStreamerError),
AddPayloadLogicError,
ExecutionForkChoiceUpdateFailed(execution_layer::Error),
PrepareProposerFailed(BlockProcessingError),
ExecutionForkChoiceUpdateInvalid {
status: PayloadStatus,
},
BlockRewardError,
BlockRewardSlotError,
BlockRewardAttestationError,
BlockRewardSyncError,
SyncCommitteeRewardsSyncError,
AttestationRewardsError,
HeadMissingFromForkChoice(Hash256),
FinalizedBlockMissingFromForkChoice(Hash256),
HeadBlockMissingFromForkChoice(Hash256),
InvalidFinalizedPayload {
finalized_root: Hash256,
execution_block_hash: ExecutionBlockHash,
},
InvalidFinalizedPayloadShutdownError(TrySendError<ShutdownReason>),
JustifiedPayloadInvalid {
justified_root: Hash256,
execution_block_hash: Option<ExecutionBlockHash>,
},
ForkchoiceUpdate(execution_layer::Error),
FinalizedCheckpointMismatch {
head_state: Checkpoint,
fork_choice: Hash256,
},
InvalidSlot(Slot),
HeadBlockNotFullyVerified {
beacon_block_root: Hash256,
execution_status: ExecutionStatus,
},
CannotAttestToFinalizedBlock {
beacon_block_root: Hash256,
},
SyncContributionDataReferencesFinalizedBlock {
beacon_block_root: Hash256,
},
RuntimeShutdown,
TokioJoin(tokio::task::JoinError),
ProcessInvalidExecutionPayload(JoinError),
ForkChoiceSignalOutOfOrder {
current: Slot,
latest: Slot,
},
ForkchoiceUpdateParamsMissing,
HeadHasInvalidPayload {
block_root: Hash256,
execution_status: ExecutionStatus,
},
AttestationHeadNotInForkChoice(Hash256),
MissingPersistedForkChoice,
ShufflingCacheError(promise_cache::PromiseCacheError),
BlsToExecutionPriorToCapella,
BlsToExecutionConflictsWithPool,
InconsistentFork(InconsistentFork),
ProposerHeadForkChoiceError(fork_choice::Error<proto_array::Error>),
UnableToPublish,
AvailabilityCheckError(AvailabilityCheckError),
LightClientError(LightClientError),
UnsupportedFork,
}
easy_from_to!(SlotProcessingError, BeaconChainError);
easy_from_to!(EpochProcessingError, BeaconChainError);
easy_from_to!(AttestationValidationError, BeaconChainError);
easy_from_to!(SyncCommitteeMessageValidationError, BeaconChainError);
easy_from_to!(ExitValidationError, BeaconChainError);
easy_from_to!(ProposerSlashingValidationError, BeaconChainError);
easy_from_to!(AttesterSlashingValidationError, BeaconChainError);
easy_from_to!(BlsExecutionChangeValidationError, BeaconChainError);
easy_from_to!(SszTypesError, BeaconChainError);
easy_from_to!(OpPoolError, BeaconChainError);
easy_from_to!(NaiveAggregationError, BeaconChainError);
easy_from_to!(ObservedAttestationsError, BeaconChainError);
easy_from_to!(ObservedAttestersError, BeaconChainError);
easy_from_to!(ObservedBlockProducersError, BeaconChainError);
easy_from_to!(ObservedBlobSidecarsError, BeaconChainError);
easy_from_to!(AttesterCacheError, BeaconChainError);
easy_from_to!(BlockSignatureVerifierError, BeaconChainError);
easy_from_to!(PruningError, BeaconChainError);
@@ -165,15 +245,20 @@ easy_from_to!(ForkChoiceStoreError, BeaconChainError);
easy_from_to!(HistoricalBlockError, BeaconChainError);
easy_from_to!(StateAdvanceError, BeaconChainError);
easy_from_to!(BlockReplayError, BeaconChainError);
easy_from_to!(InconsistentFork, BeaconChainError);
easy_from_to!(AvailabilityCheckError, BeaconChainError);
easy_from_to!(EpochCacheError, BeaconChainError);
easy_from_to!(LightClientError, BeaconChainError);
#[derive(Debug)]
pub enum BlockProductionError {
UnableToGetHeadInfo(BeaconChainError),
UnableToGetBlockRootFromState,
UnableToReadSlot,
UnableToProduceAtSlot(Slot),
SlotProcessingError(SlotProcessingError),
BlockProcessingError(BlockProcessingError),
EpochCacheError(EpochCacheError),
ForkChoiceError(ForkChoiceError),
Eth1ChainError(Eth1ChainError),
BeaconStateError(BeaconStateError),
StateAdvanceError(StateAdvanceError),
@@ -190,8 +275,21 @@ pub enum BlockProductionError {
TerminalPoWBlockLookupFailed(execution_layer::Error),
GetPayloadFailed(execution_layer::Error),
FailedToReadFinalizedBlock(store::Error),
FailedToLoadState(store::Error),
MissingFinalizedBlock(Hash256),
BlockTooLarge(usize),
ShuttingDown,
MissingBlobs,
MissingSyncAggregate,
MissingExecutionPayload,
MissingKzgCommitment(String),
TokioJoin(JoinError),
BeaconChain(BeaconChainError),
InvalidPayloadFork,
TrustedSetupNotInitialized,
InvalidBlockVariant(String),
KzgError(kzg::Error),
FailedToBuildBlobSidecars(String),
}
easy_from_to!(BlockProcessingError, BlockProductionError);
@@ -199,3 +297,5 @@ easy_from_to!(BeaconStateError, BlockProductionError);
easy_from_to!(SlotProcessingError, BlockProductionError);
easy_from_to!(Eth1ChainError, BlockProductionError);
easy_from_to!(StateAdvanceError, BlockProductionError);
easy_from_to!(ForkChoiceError, BlockProductionError);
easy_from_to!(EpochCacheError, BlockProductionError);


@@ -1,7 +1,7 @@
use crate::metrics;
use eth1::{Config as Eth1Config, Eth1Block, Service as HttpService};
use eth2::lighthouse::Eth1SyncStatusData;
use eth2_hashing::hash;
use ethereum_hashing::hash;
use int_to_bytes::int_to_bytes32;
use slog::{debug, error, trace, Logger};
use ssz::{Decode, Encode};
@@ -9,14 +9,12 @@ use ssz_derive::{Decode, Encode};
use state_processing::per_block_processing::get_new_eth1_data;
use std::cmp::Ordering;
use std::collections::HashMap;
use std::iter::DoubleEndedIterator;
use std::marker::PhantomData;
use std::time::{SystemTime, UNIX_EPOCH};
use store::{DBColumn, Error as StoreError, StoreItem};
use task_executor::TaskExecutor;
use types::{
BeaconState, BeaconStateError, ChainSpec, Deposit, Eth1Data, EthSpec, Hash256, Slot, Unsigned,
DEPOSIT_TREE_DEPTH,
};
type BlockNumber = u64;
@@ -68,7 +66,7 @@ impl From<safe_arith::ArithError> for Error {
/// - `genesis_time`: beacon chain genesis time.
/// - `current_slot`: current beacon chain slot.
/// - `spec`: current beacon chain specification.
fn get_sync_status<T: EthSpec>(
fn get_sync_status<E: EthSpec>(
latest_cached_block: Option<&Eth1Block>,
head_block: Option<&Eth1Block>,
genesis_time: u64,
@@ -86,10 +84,10 @@ fn get_sync_status<T: EthSpec>(
// that are *before* genesis, so that we can indicate to users that we're actually adequately
// cached for where they are in time.
let voting_target_timestamp = if let Some(current_slot) = current_slot {
let period = T::SlotsPerEth1VotingPeriod::to_u64();
let period = E::SlotsPerEth1VotingPeriod::to_u64();
let voting_period_start_slot = (current_slot / period) * period;
let period_start = slot_start_seconds::<T>(
let period_start = slot_start_seconds(
genesis_time,
spec.seconds_per_slot,
voting_period_start_slot,
@@ -99,7 +97,7 @@ fn get_sync_status<T: EthSpec>(
} else {
// The number of seconds in an eth1 voting period.
let voting_period_duration =
T::slots_per_eth1_voting_period() as u64 * spec.seconds_per_slot;
E::slots_per_eth1_voting_period() as u64 * spec.seconds_per_slot;
let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
@@ -170,8 +168,8 @@ fn get_sync_status<T: EthSpec>(
#[derive(Encode, Decode, Clone)]
pub struct SszEth1 {
use_dummy_backend: bool,
backend_bytes: Vec<u8>,
pub use_dummy_backend: bool,
pub backend_bytes: Vec<u8>,
}
impl StoreItem for SszEth1 {
@@ -305,16 +303,22 @@ where
}
}
/// Set in motion the finalization of `Eth1Data`. This method is called during block import
/// so it should be fast.
pub fn finalize_eth1_data(&self, eth1_data: Eth1Data) {
self.backend.finalize_eth1_data(eth1_data);
}
/// Consumes `self`, returning the backend.
pub fn into_backend(self) -> T {
self.backend
}
}
pub trait Eth1ChainBackend<T: EthSpec>: Sized + Send + Sync {
pub trait Eth1ChainBackend<E: EthSpec>: Sized + Send + Sync {
/// Returns the `Eth1Data` that should be included in a block being produced for the given
/// `state`.
fn eth1_data(&self, beacon_state: &BeaconState<T>, spec: &ChainSpec)
fn eth1_data(&self, beacon_state: &BeaconState<E>, spec: &ChainSpec)
-> Result<Eth1Data, Error>;
/// Returns all `Deposits` between `state.eth1_deposit_index` and
@@ -326,7 +330,7 @@ pub trait Eth1ChainBackend<T: EthSpec>: Sized + Send + Sync {
/// be more than `MAX_DEPOSIT_COUNT` or the churn may be too high.
fn queued_deposits(
&self,
beacon_state: &BeaconState<T>,
beacon_state: &BeaconState<E>,
eth1_data_vote: &Eth1Data,
spec: &ChainSpec,
) -> Result<Vec<Deposit>, Error>;
@@ -335,6 +339,10 @@ pub trait Eth1ChainBackend<T: EthSpec>: Sized + Send + Sync {
/// beacon node eth1 cache is.
fn latest_cached_block(&self) -> Option<Eth1Block>;
/// Set in motion the finalization of `Eth1Data`. This method is called during block import
/// so it should be fast.
fn finalize_eth1_data(&self, eth1_data: Eth1Data);
/// Returns the block at the head of the chain (ignoring follow distance, etc). Used to obtain
/// an idea of how up-to-date the remote eth1 node is.
fn head_block(&self) -> Option<Eth1Block>;
@@ -356,13 +364,13 @@ pub trait Eth1ChainBackend<T: EthSpec>: Sized + Send + Sync {
/// Never creates deposits, therefore the validator set is static.
///
/// This was used in the 2019 Canada interop workshops.
pub struct DummyEth1ChainBackend<T: EthSpec>(PhantomData<T>);
pub struct DummyEth1ChainBackend<E: EthSpec>(PhantomData<E>);
impl<T: EthSpec> Eth1ChainBackend<T> for DummyEth1ChainBackend<T> {
impl<E: EthSpec> Eth1ChainBackend<E> for DummyEth1ChainBackend<E> {
/// Produce some deterministic junk based upon the current epoch.
fn eth1_data(&self, state: &BeaconState<T>, _spec: &ChainSpec) -> Result<Eth1Data, Error> {
fn eth1_data(&self, state: &BeaconState<E>, _spec: &ChainSpec) -> Result<Eth1Data, Error> {
let current_epoch = state.current_epoch();
let slots_per_voting_period = T::slots_per_eth1_voting_period() as u64;
let slots_per_voting_period = E::slots_per_eth1_voting_period() as u64;
let current_voting_period: u64 = current_epoch.as_u64() / slots_per_voting_period;
let deposit_root = hash(&int_to_bytes32(current_voting_period));
@@ -378,7 +386,7 @@ impl<T: EthSpec> Eth1ChainBackend<T> for DummyEth1ChainBackend<T> {
/// The dummy back-end never produces deposits.
fn queued_deposits(
&self,
_: &BeaconState<T>,
_: &BeaconState<E>,
_: &Eth1Data,
_: &ChainSpec,
) -> Result<Vec<Deposit>, Error> {
@@ -389,6 +397,8 @@ impl<T: EthSpec> Eth1ChainBackend<T> for DummyEth1ChainBackend<T> {
None
}
fn finalize_eth1_data(&self, _eth1_data: Eth1Data) {}
fn head_block(&self) -> Option<Eth1Block> {
None
}
@@ -409,7 +419,7 @@ impl<T: EthSpec> Eth1ChainBackend<T> for DummyEth1ChainBackend<T> {
}
}
impl<T: EthSpec> Default for DummyEth1ChainBackend<T> {
impl<E: EthSpec> Default for DummyEth1ChainBackend<E> {
fn default() -> Self {
Self(PhantomData)
}
@@ -421,22 +431,23 @@ impl<T: EthSpec> Default for DummyEth1ChainBackend<T> {
/// The `core` connects to some external eth1 client (e.g., Parity/Geth) and polls it for
/// information.
#[derive(Clone)]
pub struct CachingEth1Backend<T: EthSpec> {
pub struct CachingEth1Backend<E: EthSpec> {
pub core: HttpService,
log: Logger,
_phantom: PhantomData<T>,
_phantom: PhantomData<E>,
}
impl<T: EthSpec> CachingEth1Backend<T> {
impl<E: EthSpec> CachingEth1Backend<E> {
/// Instantiates `self` with empty caches.
///
/// Does not connect to the eth1 node or start any tasks to keep the cache updated.
pub fn new(config: Eth1Config, log: Logger, spec: ChainSpec) -> Self {
Self {
core: HttpService::new(config, log.clone(), spec),
pub fn new(config: Eth1Config, log: Logger, spec: ChainSpec) -> Result<Self, String> {
Ok(Self {
core: HttpService::new(config, log.clone(), spec)
.map_err(|e| format!("Failed to create eth1 http service: {:?}", e))?,
log,
_phantom: PhantomData,
}
})
}
/// Starts the routine which connects to the external eth1 node and updates the caches.
@@ -454,11 +465,11 @@ impl<T: EthSpec> CachingEth1Backend<T> {
}
}
impl<T: EthSpec> Eth1ChainBackend<T> for CachingEth1Backend<T> {
fn eth1_data(&self, state: &BeaconState<T>, spec: &ChainSpec) -> Result<Eth1Data, Error> {
let period = T::SlotsPerEth1VotingPeriod::to_u64();
impl<E: EthSpec> Eth1ChainBackend<E> for CachingEth1Backend<E> {
fn eth1_data(&self, state: &BeaconState<E>, spec: &ChainSpec) -> Result<Eth1Data, Error> {
let period = E::SlotsPerEth1VotingPeriod::to_u64();
let voting_period_start_slot = (state.slot() / period) * period;
let voting_period_start_seconds = slot_start_seconds::<T>(
let voting_period_start_seconds = slot_start_seconds(
state.genesis_time(),
spec.seconds_per_slot,
voting_period_start_slot,
@@ -524,7 +535,7 @@ impl<T: EthSpec> Eth1ChainBackend<T> for CachingEth1Backend<T> {
fn queued_deposits(
&self,
state: &BeaconState<T>,
state: &BeaconState<E>,
eth1_data_vote: &Eth1Data,
_spec: &ChainSpec,
) -> Result<Vec<Deposit>, Error> {
@@ -540,13 +551,13 @@ impl<T: EthSpec> Eth1ChainBackend<T> for CachingEth1Backend<T> {
Ordering::Equal => Ok(vec![]),
Ordering::Less => {
let next = deposit_index;
let last = std::cmp::min(deposit_count, next + T::MaxDeposits::to_u64());
let last = std::cmp::min(deposit_count, next + E::MaxDeposits::to_u64());
self.core
.deposits()
.read()
.cache
.get_deposits(next, last, deposit_count, DEPOSIT_TREE_DEPTH)
.get_deposits(next, last, deposit_count)
.map_err(|e| Error::BackendError(format!("Failed to get deposits: {:?}", e)))
.map(|(_deposit_root, deposits)| deposits)
}
@@ -557,6 +568,12 @@ impl<T: EthSpec> Eth1ChainBackend<T> for CachingEth1Backend<T> {
self.core.latest_cached_block()
}
/// This only writes the eth1_data to a temporary cache so that the service
/// thread can later do the actual finalizing of the deposit tree.
fn finalize_eth1_data(&self, eth1_data: Eth1Data) {
self.core.set_to_finalize(Some(eth1_data));
}
fn head_block(&self) -> Option<Eth1Block> {
self.core.head_block()
}
@@ -609,8 +626,8 @@ where
/// Collect all valid votes that are cast during the current voting period.
/// Return hashmap with count of each vote cast.
fn collect_valid_votes<T: EthSpec>(
state: &BeaconState<T>,
fn collect_valid_votes<E: EthSpec>(
state: &BeaconState<E>,
votes_to_consider: &HashMap<Eth1Data, BlockNumber>,
) -> Eth1DataVoteCount {
let mut valid_votes = HashMap::new();
@@ -640,11 +657,7 @@ fn find_winning_vote(valid_votes: Eth1DataVoteCount) -> Option<Eth1Data> {
}
/// Returns the unix-epoch seconds at the start of the given `slot`.
fn slot_start_seconds<T: EthSpec>(
genesis_unix_seconds: u64,
seconds_per_slot: u64,
slot: Slot,
) -> u64 {
fn slot_start_seconds(genesis_unix_seconds: u64, seconds_per_slot: u64, slot: Slot) -> u64 {
genesis_unix_seconds + slot.as_u64() * seconds_per_slot
}
@@ -680,7 +693,7 @@ mod test {
fn get_voting_period_start_seconds(state: &BeaconState<E>, spec: &ChainSpec) -> u64 {
let period = <E as EthSpec>::SlotsPerEth1VotingPeriod::to_u64();
let voting_period_start_slot = (state.slot() / period) * period;
slot_start_seconds::<E>(
slot_start_seconds(
state.genesis_time(),
spec.seconds_per_slot,
voting_period_start_slot,
@@ -690,23 +703,23 @@ mod test {
#[test]
fn slot_start_time() {
let zero_sec = 0;
assert_eq!(slot_start_seconds::<E>(100, zero_sec, Slot::new(2)), 100);
assert_eq!(slot_start_seconds(100, zero_sec, Slot::new(2)), 100);
let one_sec = 1;
assert_eq!(slot_start_seconds::<E>(100, one_sec, Slot::new(0)), 100);
assert_eq!(slot_start_seconds::<E>(100, one_sec, Slot::new(1)), 101);
assert_eq!(slot_start_seconds::<E>(100, one_sec, Slot::new(2)), 102);
assert_eq!(slot_start_seconds(100, one_sec, Slot::new(0)), 100);
assert_eq!(slot_start_seconds(100, one_sec, Slot::new(1)), 101);
assert_eq!(slot_start_seconds(100, one_sec, Slot::new(2)), 102);
let three_sec = 3;
assert_eq!(slot_start_seconds::<E>(100, three_sec, Slot::new(0)), 100);
assert_eq!(slot_start_seconds::<E>(100, three_sec, Slot::new(1)), 103);
assert_eq!(slot_start_seconds::<E>(100, three_sec, Slot::new(2)), 106);
assert_eq!(slot_start_seconds(100, three_sec, Slot::new(0)), 100);
assert_eq!(slot_start_seconds(100, three_sec, Slot::new(1)), 103);
assert_eq!(slot_start_seconds(100, three_sec, Slot::new(2)), 106);
let five_sec = 5;
assert_eq!(slot_start_seconds::<E>(100, five_sec, Slot::new(0)), 100);
assert_eq!(slot_start_seconds::<E>(100, five_sec, Slot::new(1)), 105);
assert_eq!(slot_start_seconds::<E>(100, five_sec, Slot::new(2)), 110);
assert_eq!(slot_start_seconds::<E>(100, five_sec, Slot::new(3)), 115);
assert_eq!(slot_start_seconds(100, five_sec, Slot::new(0)), 100);
assert_eq!(slot_start_seconds(100, five_sec, Slot::new(1)), 105);
assert_eq!(slot_start_seconds(100, five_sec, Slot::new(2)), 110);
assert_eq!(slot_start_seconds(100, five_sec, Slot::new(3)), 115);
}
fn get_eth1_block(timestamp: u64, number: u64) -> Eth1Block {
@@ -722,7 +735,7 @@ mod test {
mod eth1_chain_json_backend {
use super::*;
use eth1::DepositLog;
use types::{test_utils::generate_deterministic_keypair, EthSpec, MainnetEthSpec};
use types::{test_utils::generate_deterministic_keypair, MainnetEthSpec};
fn get_eth1_chain() -> Eth1Chain<CachingEth1Backend<E>, E> {
let eth1_config = Eth1Config {
@@ -730,11 +743,9 @@ mod test {
};
let log = null_logger().unwrap();
Eth1Chain::new(CachingEth1Backend::new(
eth1_config,
log,
MainnetEthSpec::default_spec(),
))
Eth1Chain::new(
CachingEth1Backend::new(eth1_config, log, MainnetEthSpec::default_spec()).unwrap(),
)
}
fn get_deposit_log(i: u64, spec: &ChainSpec) -> DepositLog {
@@ -955,7 +966,7 @@ mod test {
let spec = &E::default_spec();
let state: BeaconState<E> = BeaconState::new(0, get_eth1_data(0), spec);
let blocks = vec![];
let blocks = [];
assert_eq!(
get_votes_to_consider(
@@ -1009,6 +1020,7 @@ mod test {
mod collect_valid_votes {
use super::*;
use types::List;
fn get_eth1_data_vec(n: u64, block_number_offset: u64) -> Vec<(Eth1Data, BlockNumber)> {
(0..n)
@@ -1056,12 +1068,14 @@ mod test {
let votes_to_consider = get_eth1_data_vec(slots, 0);
*state.eth1_data_votes_mut() = votes_to_consider[0..slots as usize / 4]
.iter()
.map(|(eth1_data, _)| eth1_data)
.cloned()
.collect::<Vec<_>>()
.into();
*state.eth1_data_votes_mut() = List::new(
votes_to_consider[0..slots as usize / 4]
.iter()
.map(|(eth1_data, _)| eth1_data)
.cloned()
.collect::<Vec<_>>(),
)
.unwrap();
let votes =
collect_valid_votes(&state, &votes_to_consider.clone().into_iter().collect());
@@ -1085,12 +1099,14 @@ mod test {
.expect("should have some eth1 data")
.clone();
*state.eth1_data_votes_mut() = vec![duplicate_eth1_data.clone(); 4]
.iter()
.map(|(eth1_data, _)| eth1_data)
.cloned()
.collect::<Vec<_>>()
.into();
*state.eth1_data_votes_mut() = List::new(
vec![duplicate_eth1_data.clone(); 4]
.iter()
.map(|(eth1_data, _)| eth1_data)
.cloned()
.collect::<Vec<_>>(),
)
.unwrap();
let votes = collect_valid_votes(&state, &votes_to_consider.into_iter().collect());
assert_votes!(


@@ -0,0 +1,499 @@
use slog::{debug, Logger};
use ssz_derive::{Decode, Encode};
use std::cmp;
use std::collections::BTreeMap;
use types::{Checkpoint, Epoch, Eth1Data, Hash256 as Root};
/// The default size of the cache.
/// The beacon chain only looks at the last 4 epochs for finalization.
/// One entry for the current epoch plus the 4 earlier epochs.
pub const DEFAULT_ETH1_CACHE_SIZE: usize = 5;
/// These fields are named the same as the corresponding fields in the `BeaconState`,
/// as this structure stores these values from the `BeaconState` at a `Checkpoint`.
#[derive(Clone, Debug, PartialEq, Encode, Decode)]
pub struct Eth1FinalizationData {
pub eth1_data: Eth1Data,
pub eth1_deposit_index: u64,
}
impl Eth1FinalizationData {
/// Ensures the deposit finalization conditions have been met. See:
/// https://eips.ethereum.org/EIPS/eip-4881#deposit-finalization-conditions
fn fully_imported(&self) -> bool {
self.eth1_deposit_index >= self.eth1_data.deposit_count
}
}
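// Illustrative check of the condition above, using assumed values: the deposit
// index must reach the vote's deposit count before the data may be finalized.
#[cfg(test)]
mod fully_imported_sketch {
    use super::*;

    #[test]
    fn deposit_index_must_reach_deposit_count() {
        let eth1_data = Eth1Data {
            deposit_root: Root::zero(),
            deposit_count: 64,
            block_hash: Root::zero(),
        };
        let done = Eth1FinalizationData {
            eth1_data: eth1_data.clone(),
            eth1_deposit_index: 64,
        };
        let pending = Eth1FinalizationData {
            eth1_data,
            eth1_deposit_index: 60,
        };
        assert!(done.fully_imported());
        assert!(!pending.fully_imported());
    }
}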
/// Implements a map from `Checkpoint` to `Eth1FinalizationData`.
pub struct CheckpointMap {
capacity: usize,
// There shouldn't be more than a couple of potential checkpoints at the same
// epoch. Searching through a vector for the matching root should be faster
// than using another map from `Root` to `Eth1FinalizationData`.
store: BTreeMap<Epoch, Vec<(Root, Eth1FinalizationData)>>,
}
impl Default for CheckpointMap {
fn default() -> Self {
Self::new()
}
}
/// Provides a map of `Eth1FinalizationData` referenced by `Checkpoint`.
///
/// ## Cache Queuing
///
/// The cache keeps at most `capacity` epochs. Because there may be
/// forks at the epoch boundary, it's possible that there exists more than one
/// `Checkpoint` for the same `Epoch`. This cache will store all checkpoints for
/// a given `Epoch`. When adding data for a new `Checkpoint` would cause the number
/// of `Epoch`s stored to exceed `capacity`, the data for the oldest `Epoch` is dropped.
impl CheckpointMap {
pub fn new() -> Self {
CheckpointMap {
capacity: DEFAULT_ETH1_CACHE_SIZE,
store: BTreeMap::new(),
}
}
pub fn with_capacity(capacity: usize) -> Self {
CheckpointMap {
capacity: cmp::max(1, capacity),
store: BTreeMap::new(),
}
}
pub fn insert(&mut self, checkpoint: Checkpoint, eth1_finalization_data: Eth1FinalizationData) {
self.store
.entry(checkpoint.epoch)
.or_default()
.push((checkpoint.root, eth1_finalization_data));
// It is faster to reduce the size after the fact than to pre-check whether
// the new entry would increase the size of the BTreeMap.
while self.store.len() > self.capacity {
let oldest_stored_epoch = self.store.keys().next().cloned().unwrap();
self.store.remove(&oldest_stored_epoch);
}
}
pub fn get(&self, checkpoint: &Checkpoint) -> Option<&Eth1FinalizationData> {
match self.store.get(&checkpoint.epoch) {
Some(vec) => {
for (root, data) in vec {
if *root == checkpoint.root {
return Some(data);
}
}
None
}
None => None,
}
}
#[cfg(test)]
pub fn len(&self) -> usize {
self.store.len()
}
}
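// A small usage sketch of the eviction rule above (illustrative values): with
// `capacity = 2`, a third epoch evicts everything stored at the oldest epoch,
// while two forked checkpoints at the same epoch are both retained.
#[cfg(test)]
mod checkpoint_map_sketch {
    use super::*;

    fn dummy_data() -> Eth1FinalizationData {
        Eth1FinalizationData {
            eth1_data: Eth1Data {
                deposit_root: Root::zero(),
                deposit_count: 0,
                block_hash: Root::zero(),
            },
            eth1_deposit_index: 0,
        }
    }

    fn checkpoint(epoch: u64) -> Checkpoint {
        Checkpoint {
            epoch: epoch.into(),
            root: Root::random(),
        }
    }

    #[test]
    fn oldest_epoch_is_evicted_but_forks_are_kept() {
        let mut map = CheckpointMap::with_capacity(2);
        let (a, b, b_fork, c) = (checkpoint(0), checkpoint(1), checkpoint(1), checkpoint(2));
        map.insert(a, dummy_data());
        map.insert(b, dummy_data());
        // Same epoch, different root: stored alongside `b`.
        map.insert(b_fork, dummy_data());
        // A third epoch exceeds `capacity`, so epoch 0 is dropped.
        map.insert(c, dummy_data());
        assert!(map.get(&a).is_none());
        assert!(map.get(&b).is_some());
        assert!(map.get(&b_fork).is_some());
        assert!(map.get(&c).is_some());
    }
}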
/// This cache stores `Eth1FinalizationData` that could potentially be finalized within 4
/// future epochs.
pub struct Eth1FinalizationCache {
by_checkpoint: CheckpointMap,
pending_eth1: BTreeMap<u64, Eth1Data>,
last_finalized: Option<Eth1Data>,
log: Logger,
}
/// Provides a cache of `Eth1FinalizationData` at epoch boundaries. This is used to
/// finalize deposits when a new epoch is finalized.
impl Eth1FinalizationCache {
pub fn new(log: Logger) -> Self {
Eth1FinalizationCache {
by_checkpoint: CheckpointMap::new(),
pending_eth1: BTreeMap::new(),
last_finalized: None,
log,
}
}
pub fn with_capacity(log: Logger, capacity: usize) -> Self {
Eth1FinalizationCache {
by_checkpoint: CheckpointMap::with_capacity(capacity),
pending_eth1: BTreeMap::new(),
last_finalized: None,
log,
}
}
pub fn insert(&mut self, checkpoint: Checkpoint, eth1_finalization_data: Eth1FinalizationData) {
if !eth1_finalization_data.fully_imported() {
self.pending_eth1.insert(
eth1_finalization_data.eth1_data.deposit_count,
eth1_finalization_data.eth1_data.clone(),
);
debug!(
self.log,
"Eth1Cache: inserted pending eth1";
"eth1_data.deposit_count" => eth1_finalization_data.eth1_data.deposit_count,
"eth1_deposit_index" => eth1_finalization_data.eth1_deposit_index,
);
}
self.by_checkpoint
.insert(checkpoint, eth1_finalization_data);
}
pub fn finalize(&mut self, checkpoint: &Checkpoint) -> Option<Eth1Data> {
if let Some(eth1_finalized_data) = self.by_checkpoint.get(checkpoint) {
let finalized_deposit_index = eth1_finalized_data.eth1_deposit_index;
let mut result = None;
while let Some(pending_count) = self.pending_eth1.keys().next().cloned() {
if finalized_deposit_index >= pending_count {
result = self.pending_eth1.remove(&pending_count);
debug!(
self.log,
"Eth1Cache: dropped pending eth1";
"pending_count" => pending_count,
"finalized_deposit_index" => finalized_deposit_index,
);
} else {
break;
}
}
if eth1_finalized_data.fully_imported() {
result = Some(eth1_finalized_data.eth1_data.clone())
}
if result.is_some() {
self.last_finalized = result;
}
self.last_finalized.clone()
} else {
debug!(
self.log,
"Eth1Cache: cache miss";
"epoch" => checkpoint.epoch,
);
None
}
}
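// Worked example (values assumed): if `pending_eth1` holds entries keyed by
// deposit_count {100, 200} and the finalized checkpoint has
// eth1_deposit_index = 150, the 100-entry is drained and becomes the new
// last-finalized `Eth1Data` while the 200-entry stays pending; if the
// checkpoint's own data is fully imported, its `Eth1Data` is returned instead.
// The result is remembered so later cache hits keep returning the most
// recently finalized `Eth1Data`.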
#[cfg(test)]
pub fn by_checkpoint(&self) -> &CheckpointMap {
&self.by_checkpoint
}
#[cfg(test)]
pub fn pending_eth1(&self) -> &BTreeMap<u64, Eth1Data> {
&self.pending_eth1
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use sloggers::null::NullLoggerBuilder;
use sloggers::Build;
use std::collections::HashMap;
const SLOTS_PER_EPOCH: u64 = 32;
const MAX_DEPOSITS: u64 = 16;
const EPOCHS_PER_ETH1_VOTING_PERIOD: u64 = 64;
fn eth1cache() -> Eth1FinalizationCache {
let log_builder = NullLoggerBuilder;
Eth1FinalizationCache::new(log_builder.build().expect("should build log"))
}
fn random_eth1_data(deposit_count: u64) -> Eth1Data {
Eth1Data {
deposit_root: Root::random(),
deposit_count,
block_hash: Root::random(),
}
}
fn random_checkpoint(epoch: u64) -> Checkpoint {
Checkpoint {
epoch: epoch.into(),
root: Root::random(),
}
}
fn random_checkpoints(n: usize) -> Vec<Checkpoint> {
let mut result = Vec::with_capacity(n);
for epoch in 0..n {
result.push(random_checkpoint(epoch as u64))
}
result
}
#[test]
fn fully_imported_deposits() {
let epochs = 16;
let deposits_imported = 128;
let eth1data = random_eth1_data(deposits_imported);
let checkpoints = random_checkpoints(epochs as usize);
let mut eth1cache = eth1cache();
for epoch in 4..epochs {
assert_eq!(
eth1cache.by_checkpoint().len(),
cmp::min((epoch - 4) as usize, DEFAULT_ETH1_CACHE_SIZE),
"Unexpected cache size"
);
let checkpoint = checkpoints
.get(epoch as usize)
.expect("should get checkpoint");
eth1cache.insert(
*checkpoint,
Eth1FinalizationData {
eth1_data: eth1data.clone(),
eth1_deposit_index: deposits_imported,
},
);
let finalized_checkpoint = checkpoints
.get((epoch - 4) as usize)
.expect("should get finalized checkpoint");
assert!(
eth1cache.pending_eth1().is_empty(),
"Deposits are fully imported so pending cache should be empty"
);
if epoch < 8 {
assert_eq!(
eth1cache.finalize(finalized_checkpoint),
None,
"Should have cache miss"
);
} else {
assert_eq!(
eth1cache.finalize(finalized_checkpoint),
Some(eth1data.clone()),
"Should have cache hit"
)
}
}
}
#[test]
fn partially_imported_deposits() {
let epochs = 16;
let initial_deposits_imported = 1024;
let deposits_imported_per_epoch = MAX_DEPOSITS * SLOTS_PER_EPOCH;
let full_import_epoch = 13;
let total_deposits =
initial_deposits_imported + deposits_imported_per_epoch * full_import_epoch;
let eth1data = random_eth1_data(total_deposits);
let checkpoints = random_checkpoints(epochs as usize);
let mut eth1cache = eth1cache();
for epoch in 0..epochs {
assert_eq!(
eth1cache.by_checkpoint().len(),
cmp::min(epoch as usize, DEFAULT_ETH1_CACHE_SIZE),
"Unexpected cache size"
);
let checkpoint = checkpoints
.get(epoch as usize)
.expect("should get checkpoint");
let deposits_imported = cmp::min(
total_deposits,
initial_deposits_imported + deposits_imported_per_epoch * epoch,
);
eth1cache.insert(
*checkpoint,
Eth1FinalizationData {
eth1_data: eth1data.clone(),
eth1_deposit_index: deposits_imported,
},
);
if epoch >= 4 {
let finalized_epoch = epoch - 4;
let finalized_checkpoint = checkpoints
.get(finalized_epoch as usize)
.expect("should get finalized checkpoint");
if finalized_epoch < full_import_epoch {
assert_eq!(
eth1cache.finalize(finalized_checkpoint),
None,
"Deposits not fully finalized so cache should return no Eth1Data",
);
assert_eq!(
eth1cache.pending_eth1().len(),
1,
"Deposits not fully finalized. Pending eth1 cache should have 1 entry"
);
} else {
assert_eq!(
eth1cache.finalize(finalized_checkpoint),
Some(eth1data.clone()),
"Deposits fully imported and finalized. Cache should return Eth1Data. finalized_deposits[{}]",
(initial_deposits_imported + deposits_imported_per_epoch * finalized_epoch),
);
assert!(
eth1cache.pending_eth1().is_empty(),
"Deposits fully imported and finalized. Pending cache should be empty"
);
}
}
}
}
#[test]
fn fork_at_epoch_boundary() {
let epochs = 12;
let deposits_imported = 128;
let eth1data = random_eth1_data(deposits_imported);
let checkpoints = random_checkpoints(epochs as usize);
let mut forks = HashMap::new();
let mut eth1cache = eth1cache();
for epoch in 0..epochs {
assert_eq!(
eth1cache.by_checkpoint().len(),
cmp::min(epoch as usize, DEFAULT_ETH1_CACHE_SIZE),
"Unexpected cache size"
);
let checkpoint = checkpoints
.get(epoch as usize)
.expect("should get checkpoint");
eth1cache.insert(
*checkpoint,
Eth1FinalizationData {
eth1_data: eth1data.clone(),
eth1_deposit_index: deposits_imported,
},
);
// let's put a fork at every third epoch
if epoch % 3 == 0 {
let fork = random_checkpoint(epoch);
eth1cache.insert(
fork,
Eth1FinalizationData {
eth1_data: eth1data.clone(),
eth1_deposit_index: deposits_imported,
},
);
forks.insert(epoch as usize, fork);
}
assert!(
eth1cache.pending_eth1().is_empty(),
"Deposits are fully imported so pending cache should be empty"
);
if epoch >= 4 {
let finalized_epoch = (epoch - 4) as usize;
let finalized_checkpoint = if finalized_epoch % 3 == 0 {
forks.get(&finalized_epoch).expect("should get fork")
} else {
checkpoints
.get(finalized_epoch)
.expect("should get checkpoint")
};
assert_eq!(
eth1cache.finalize(finalized_checkpoint),
Some(eth1data.clone()),
"Should have cache hit"
);
if finalized_epoch >= 3 {
let dropped_epoch = finalized_epoch - 3;
if let Some(dropped_checkpoint) = forks.get(&dropped_epoch) {
// this checkpoint is for an old fork that should no longer
// be in the cache because it is from too long ago
assert_eq!(
eth1cache.finalize(dropped_checkpoint),
None,
"Should have cache miss"
);
}
}
}
}
}
#[test]
fn massive_deposit_queue() {
// Simulating a situation where deposits don't get imported within an eth1 voting period
let eth1_voting_periods = 8;
let initial_deposits_imported = 1024;
let deposits_imported_per_epoch = MAX_DEPOSITS * SLOTS_PER_EPOCH;
let initial_deposit_queue =
deposits_imported_per_epoch * EPOCHS_PER_ETH1_VOTING_PERIOD * 2 + 32;
let new_deposits_per_voting_period =
EPOCHS_PER_ETH1_VOTING_PERIOD * deposits_imported_per_epoch / 2;
let mut epoch_data = BTreeMap::new();
let mut eth1s_by_count = BTreeMap::new();
let mut eth1cache = eth1cache();
let mut last_period_deposits = initial_deposits_imported;
for period in 0..eth1_voting_periods {
let period_deposits = initial_deposits_imported
+ initial_deposit_queue
+ period * new_deposits_per_voting_period;
let period_eth1_data = random_eth1_data(period_deposits);
eth1s_by_count.insert(period_eth1_data.deposit_count, period_eth1_data.clone());
for epoch_mod_period in 0..EPOCHS_PER_ETH1_VOTING_PERIOD {
let epoch = period * EPOCHS_PER_ETH1_VOTING_PERIOD + epoch_mod_period;
let checkpoint = random_checkpoint(epoch);
let deposits_imported = cmp::min(
period_deposits,
last_period_deposits + deposits_imported_per_epoch * epoch_mod_period,
);
eth1cache.insert(
checkpoint,
Eth1FinalizationData {
eth1_data: period_eth1_data.clone(),
eth1_deposit_index: deposits_imported,
},
);
epoch_data.insert(epoch, (checkpoint, deposits_imported));
if epoch >= 4 {
let finalized_epoch = epoch - 4;
let (finalized_checkpoint, finalized_deposits) = epoch_data
.get(&finalized_epoch)
.expect("should get epoch data");
let pending_eth1s = eth1s_by_count.range((finalized_deposits + 1)..).count();
let last_finalized_eth1 = eth1s_by_count
.range(0..(finalized_deposits + 1))
.map(|(_, eth1)| eth1)
.last()
.cloned();
assert_eq!(
eth1cache.finalize(finalized_checkpoint),
last_finalized_eth1,
"finalized checkpoint mismatch",
);
assert_eq!(
eth1cache.pending_eth1().len(),
pending_eth1s,
"pending eth1 mismatch"
);
}
}
// remove stale entries from old epochs
while epoch_data.len() > DEFAULT_ETH1_CACHE_SIZE {
let oldest_stored_epoch = epoch_data
.keys()
.next()
.cloned()
.expect("should get oldest epoch");
epoch_data.remove(&oldest_stored_epoch);
}
last_period_deposits = period_deposits;
}
}
}


@@ -6,111 +6,181 @@ use types::EthSpec;
const DEFAULT_CHANNEL_CAPACITY: usize = 16;
pub struct ServerSentEventHandler<T: EthSpec> {
attestation_tx: Sender<EventKind<T>>,
block_tx: Sender<EventKind<T>>,
finalized_tx: Sender<EventKind<T>>,
head_tx: Sender<EventKind<T>>,
exit_tx: Sender<EventKind<T>>,
chain_reorg_tx: Sender<EventKind<T>>,
contribution_tx: Sender<EventKind<T>>,
late_head: Sender<EventKind<T>>,
block_reward_tx: Sender<EventKind<T>>,
pub struct ServerSentEventHandler<E: EthSpec> {
attestation_tx: Sender<EventKind<E>>,
block_tx: Sender<EventKind<E>>,
blob_sidecar_tx: Sender<EventKind<E>>,
finalized_tx: Sender<EventKind<E>>,
head_tx: Sender<EventKind<E>>,
exit_tx: Sender<EventKind<E>>,
chain_reorg_tx: Sender<EventKind<E>>,
contribution_tx: Sender<EventKind<E>>,
payload_attributes_tx: Sender<EventKind<E>>,
late_head: Sender<EventKind<E>>,
light_client_finality_update_tx: Sender<EventKind<E>>,
light_client_optimistic_update_tx: Sender<EventKind<E>>,
block_reward_tx: Sender<EventKind<E>>,
log: Logger,
}
impl<T: EthSpec> ServerSentEventHandler<T> {
pub fn new(log: Logger) -> Self {
Self::new_with_capacity(log, DEFAULT_CHANNEL_CAPACITY)
impl<E: EthSpec> ServerSentEventHandler<E> {
pub fn new(log: Logger, capacity_multiplier: usize) -> Self {
Self::new_with_capacity(
log,
capacity_multiplier.saturating_mul(DEFAULT_CHANNEL_CAPACITY),
)
}
pub fn new_with_capacity(log: Logger, capacity: usize) -> Self {
let (attestation_tx, _) = broadcast::channel(capacity);
let (block_tx, _) = broadcast::channel(capacity);
let (blob_sidecar_tx, _) = broadcast::channel(capacity);
let (finalized_tx, _) = broadcast::channel(capacity);
let (head_tx, _) = broadcast::channel(capacity);
let (exit_tx, _) = broadcast::channel(capacity);
let (chain_reorg_tx, _) = broadcast::channel(capacity);
let (contribution_tx, _) = broadcast::channel(capacity);
let (payload_attributes_tx, _) = broadcast::channel(capacity);
let (late_head, _) = broadcast::channel(capacity);
let (light_client_finality_update_tx, _) = broadcast::channel(capacity);
let (light_client_optimistic_update_tx, _) = broadcast::channel(capacity);
let (block_reward_tx, _) = broadcast::channel(capacity);
Self {
attestation_tx,
block_tx,
blob_sidecar_tx,
finalized_tx,
head_tx,
exit_tx,
chain_reorg_tx,
contribution_tx,
payload_attributes_tx,
late_head,
light_client_finality_update_tx,
light_client_optimistic_update_tx,
block_reward_tx,
log,
}
}
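// For example (illustrative): `ServerSentEventHandler::new(log, 4)` gives each
// broadcast channel a capacity of 4 * DEFAULT_CHANNEL_CAPACITY = 64 events,
// and `saturating_mul` keeps an extreme multiplier from overflowing.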
pub fn register(&self, kind: EventKind<T>) {
let result = match kind {
EventKind::Attestation(attestation) => self
pub fn register(&self, kind: EventKind<E>) {
let log_count = |name, count| {
trace!(
self.log,
"Registering server-sent event";
"kind" => name,
"receiver_count" => count
);
};
let result = match &kind {
EventKind::Attestation(_) => self
.attestation_tx
.send(EventKind::Attestation(attestation))
.map(|count| trace!(self.log, "Registering server-sent attestation event"; "receiver_count" => count)),
EventKind::Block(block) => self.block_tx.send(EventKind::Block(block))
.map(|count| trace!(self.log, "Registering server-sent block event"; "receiver_count" => count)),
EventKind::FinalizedCheckpoint(checkpoint) => self.finalized_tx
.send(EventKind::FinalizedCheckpoint(checkpoint))
.map(|count| trace!(self.log, "Registering server-sent finalized checkpoint event"; "receiver_count" => count)),
EventKind::Head(head) => self.head_tx.send(EventKind::Head(head))
.map(|count| trace!(self.log, "Registering server-sent head event"; "receiver_count" => count)),
EventKind::VoluntaryExit(exit) => self.exit_tx.send(EventKind::VoluntaryExit(exit))
.map(|count| trace!(self.log, "Registering server-sent voluntary exit event"; "receiver_count" => count)),
EventKind::ChainReorg(reorg) => self.chain_reorg_tx.send(EventKind::ChainReorg(reorg))
.map(|count| trace!(self.log, "Registering server-sent chain reorg event"; "receiver_count" => count)),
EventKind::ContributionAndProof(contribution_and_proof) => self.contribution_tx.send(EventKind::ContributionAndProof(contribution_and_proof))
.map(|count| trace!(self.log, "Registering server-sent contribution and proof event"; "receiver_count" => count)),
EventKind::LateHead(late_head) => self.late_head.send(EventKind::LateHead(late_head))
.map(|count| trace!(self.log, "Registering server-sent late head event"; "receiver_count" => count)),
EventKind::BlockReward(block_reward) => self.block_reward_tx.send(EventKind::BlockReward(block_reward))
.map(|count| trace!(self.log, "Registering server-sent contribution and proof event"; "receiver_count" => count)),
.send(kind)
.map(|count| log_count("attestation", count)),
EventKind::Block(_) => self
.block_tx
.send(kind)
.map(|count| log_count("block", count)),
EventKind::BlobSidecar(_) => self
.blob_sidecar_tx
.send(kind)
.map(|count| log_count("blob sidecar", count)),
EventKind::FinalizedCheckpoint(_) => self
.finalized_tx
.send(kind)
.map(|count| log_count("finalized checkpoint", count)),
EventKind::Head(_) => self
.head_tx
.send(kind)
.map(|count| log_count("head", count)),
EventKind::VoluntaryExit(_) => self
.exit_tx
.send(kind)
.map(|count| log_count("exit", count)),
EventKind::ChainReorg(_) => self
.chain_reorg_tx
.send(kind)
.map(|count| log_count("chain reorg", count)),
EventKind::ContributionAndProof(_) => self
.contribution_tx
.send(kind)
.map(|count| log_count("contribution and proof", count)),
EventKind::PayloadAttributes(_) => self
.payload_attributes_tx
.send(kind)
.map(|count| log_count("payload attributes", count)),
EventKind::LateHead(_) => self
.late_head
.send(kind)
.map(|count| log_count("late head", count)),
EventKind::LightClientFinalityUpdate(_) => self
.light_client_finality_update_tx
.send(kind)
.map(|count| log_count("light client finality update", count)),
EventKind::LightClientOptimisticUpdate(_) => self
.light_client_optimistic_update_tx
.send(kind)
.map(|count| log_count("light client optimistic update", count)),
EventKind::BlockReward(_) => self
.block_reward_tx
.send(kind)
.map(|count| log_count("block reward", count)),
};
if let Err(SendError(event)) = result {
trace!(self.log, "No receivers registered to listen for event"; "event" => ?event);
}
}
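// Caller-side sketch (hypothetical): each subscriber drains its own broadcast
// channel, so a slow consumer only lags its own event kind.
//
// let mut rx = handler.subscribe_block();
// while let Ok(event) = rx.recv().await {
//     // e.g. serialize `event` onto an SSE response stream
// }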
pub fn subscribe_attestation(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_attestation(&self) -> Receiver<EventKind<E>> {
self.attestation_tx.subscribe()
}
pub fn subscribe_block(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_block(&self) -> Receiver<EventKind<E>> {
self.block_tx.subscribe()
}
pub fn subscribe_finalized(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_blob_sidecar(&self) -> Receiver<EventKind<E>> {
self.blob_sidecar_tx.subscribe()
}
pub fn subscribe_finalized(&self) -> Receiver<EventKind<E>> {
self.finalized_tx.subscribe()
}
pub fn subscribe_head(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_head(&self) -> Receiver<EventKind<E>> {
self.head_tx.subscribe()
}
pub fn subscribe_exit(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_exit(&self) -> Receiver<EventKind<E>> {
self.exit_tx.subscribe()
}
pub fn subscribe_reorgs(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_reorgs(&self) -> Receiver<EventKind<E>> {
self.chain_reorg_tx.subscribe()
}
pub fn subscribe_contributions(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_contributions(&self) -> Receiver<EventKind<E>> {
self.contribution_tx.subscribe()
}
pub fn subscribe_late_head(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_payload_attributes(&self) -> Receiver<EventKind<E>> {
self.payload_attributes_tx.subscribe()
}
pub fn subscribe_late_head(&self) -> Receiver<EventKind<E>> {
self.late_head.subscribe()
}
pub fn subscribe_block_reward(&self) -> Receiver<EventKind<T>> {
pub fn subscribe_light_client_finality_update(&self) -> Receiver<EventKind<E>> {
self.light_client_finality_update_tx.subscribe()
}
pub fn subscribe_light_client_optimistic_update(&self) -> Receiver<EventKind<E>> {
self.light_client_optimistic_update_tx.subscribe()
}
pub fn subscribe_block_reward(&self) -> Receiver<EventKind<E>> {
self.block_reward_tx.subscribe()
}
@@ -122,6 +192,10 @@ impl<T: EthSpec> ServerSentEventHandler<T> {
self.block_tx.receiver_count() > 0
}
pub fn has_blob_sidecar_subscribers(&self) -> bool {
self.blob_sidecar_tx.receiver_count() > 0
}
pub fn has_finalized_subscribers(&self) -> bool {
self.finalized_tx.receiver_count() > 0
}
@@ -142,6 +216,10 @@ impl<T: EthSpec> ServerSentEventHandler<T> {
self.contribution_tx.receiver_count() > 0
}
pub fn has_payload_attributes_subscribers(&self) -> bool {
self.payload_attributes_tx.receiver_count() > 0
}
pub fn has_late_head_subscribers(&self) -> bool {
self.late_head.receiver_count() > 0
}


@@ -7,65 +7,217 @@
//! So, this module contains functions that one might expect to find in other crates, but they live
//! here for good reason.
use crate::otb_verification_service::OptimisticTransitionBlock;
use crate::{
BeaconChain, BeaconChainError, BeaconChainTypes, BlockError, BlockProductionError,
ExecutionPayloadError,
};
use execution_layer::ExecutePayloadResponseStatus;
use fork_choice::PayloadVerificationStatus;
use execution_layer::{
BlockProposalContents, BlockProposalContentsType, BuilderParams, NewPayloadRequest,
PayloadAttributes, PayloadStatus,
};
use fork_choice::{InvalidationOperation, PayloadVerificationStatus};
use proto_array::{Block as ProtoBlock, ExecutionStatus};
use slog::debug;
use slog::{debug, warn};
use slot_clock::SlotClock;
use state_processing::per_block_processing::{
compute_timestamp_at_slot, is_execution_enabled, is_merge_transition_complete,
partially_verify_execution_payload,
compute_timestamp_at_slot, get_expected_withdrawals, is_execution_enabled,
is_merge_transition_complete, partially_verify_execution_payload,
};
use std::sync::Arc;
use tokio::task::JoinHandle;
use tree_hash::TreeHash;
use types::payload::BlockProductionVersion;
use types::*;
pub type PreparePayloadResult<E> = Result<BlockProposalContentsType<E>, BlockProductionError>;
pub type PreparePayloadHandle<E> = JoinHandle<Option<PreparePayloadResult<E>>>;
#[derive(PartialEq)]
pub enum AllowOptimisticImport {
Yes,
No,
}
/// Signal whether the execution payloads of new blocks should be
/// immediately verified with the EL or imported optimistically without
/// any EL communication.
#[derive(Default, Clone, Copy)]
pub enum NotifyExecutionLayer {
#[default]
Yes,
No,
}
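// For instance (hypothetical wiring): a finalized/backfill sync path may pass
// `NotifyExecutionLayer::No` so that block import skips the EL round-trip,
// while blocks arriving from gossip use the default `NotifyExecutionLayer::Yes`.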
/// Used to await the result of executing a payload with a remote EE.
pub struct PayloadNotifier<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
pub block: Arc<SignedBeaconBlock<T::EthSpec>>,
payload_verification_status: Option<PayloadVerificationStatus>,
}
impl<T: BeaconChainTypes> PayloadNotifier<T> {
pub fn new(
chain: Arc<BeaconChain<T>>,
block: Arc<SignedBeaconBlock<T::EthSpec>>,
state: &BeaconState<T::EthSpec>,
notify_execution_layer: NotifyExecutionLayer,
) -> Result<Self, BlockError<T::EthSpec>> {
let payload_verification_status = if is_execution_enabled(state, block.message().body()) {
// Perform the initial stages of payload verification.
//
// We will duplicate these checks again during `per_block_processing`, however these
// checks are cheap and doing them here ensures we have verified them before marking
// the block as optimistically imported. This is particularly relevant in the case
// where we do not send the block to the EL at all.
let block_message = block.message();
partially_verify_execution_payload::<_, FullPayload<_>>(
state,
block.slot(),
block_message.body(),
&chain.spec,
)
.map_err(BlockError::PerBlockProcessingError)?;
match notify_execution_layer {
NotifyExecutionLayer::No if chain.config.optimistic_finalized_sync => {
// Create a NewPayloadRequest (no clones required) and check optimistic sync verifications
let new_payload_request: NewPayloadRequest<T::EthSpec> =
block_message.try_into()?;
if let Err(e) = new_payload_request.perform_optimistic_sync_verifications() {
warn!(
chain.log,
"Falling back to slow block hash verification";
"block_number" => ?block_message.execution_payload().map(|payload| payload.block_number()),
"info" => "you can silence this warning with --disable-optimistic-finalized-sync",
"error" => ?e,
);
None
} else {
Some(PayloadVerificationStatus::Optimistic)
}
}
_ => None,
}
} else {
Some(PayloadVerificationStatus::Irrelevant)
};
Ok(Self {
chain,
block,
payload_verification_status,
})
}
pub async fn notify_new_payload(
self,
) -> Result<PayloadVerificationStatus, BlockError<T::EthSpec>> {
if let Some(precomputed_status) = self.payload_verification_status {
Ok(precomputed_status)
} else {
notify_new_payload(&self.chain, self.block.message()).await
}
}
}
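// Hypothetical call-site sketch: the notifier is built while the state is in
// scope, and the (possibly precomputed) status is awaited later without it.
//
// let notifier = PayloadNotifier::new(chain.clone(), block.clone(), &state, notify_execution_layer)?;
// let status: PayloadVerificationStatus = notifier.notify_new_payload().await?;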
/// Verify that `execution_payload` contained by `block` is considered valid by an execution
/// engine.
///
/// ## Specification
///
/// Equivalent to the `execute_payload` function in the merge Beacon Chain Changes, although it
/// Equivalent to the `notify_new_payload` function in the merge Beacon Chain Changes, although it
/// contains a few extra checks by running `partially_verify_execution_payload` first:
///
/// https://github.com/ethereum/consensus-specs/blob/v1.1.5/specs/merge/beacon-chain.md#execute_payload
pub fn execute_payload<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
state: &BeaconState<T::EthSpec>,
block: BeaconBlockRef<T::EthSpec>,
/// https://github.com/ethereum/consensus-specs/blob/v1.1.9/specs/bellatrix/beacon-chain.md#notify_new_payload
async fn notify_new_payload<'a, T: BeaconChainTypes>(
chain: &Arc<BeaconChain<T>>,
block: BeaconBlockRef<'a, T::EthSpec>,
) -> Result<PayloadVerificationStatus, BlockError<T::EthSpec>> {
if !is_execution_enabled(state, block.body()) {
return Ok(PayloadVerificationStatus::Irrelevant);
}
let execution_payload = block.execution_payload()?;
// Perform the initial stages of payload verification.
//
// We will duplicate these checks again during `per_block_processing`, however these checks
// are cheap and doing them here ensures we protect the execution payload from junk.
partially_verify_execution_payload(state, execution_payload, &chain.spec)
.map_err(BlockError::PerBlockProcessingError)?;
let execution_layer = chain
.execution_layer
.as_ref()
.ok_or(ExecutionPayloadError::NoExecutionConnection)?;
let execute_payload_response = execution_layer
.block_on(|execution_layer| execution_layer.execute_payload(execution_payload));
match execute_payload_response {
Ok((status, _latest_valid_hash)) => match status {
ExecutePayloadResponseStatus::Valid => Ok(PayloadVerificationStatus::Verified),
// TODO(merge): invalidate any invalid ancestors of this block in fork choice.
ExecutePayloadResponseStatus::Invalid => {
Err(ExecutionPayloadError::RejectedByExecutionEngine.into())
let execution_block_hash = block.execution_payload()?.block_hash();
let new_payload_response = execution_layer.notify_new_payload(block.try_into()?).await;
match new_payload_response {
Ok(status) => match status {
PayloadStatus::Valid => Ok(PayloadVerificationStatus::Verified),
PayloadStatus::Syncing | PayloadStatus::Accepted => {
Ok(PayloadVerificationStatus::Optimistic)
}
PayloadStatus::Invalid {
latest_valid_hash,
ref validation_error,
} => {
warn!(
chain.log,
"Invalid execution payload";
"validation_error" => ?validation_error,
"latest_valid_hash" => ?latest_valid_hash,
"execution_block_hash" => ?execution_block_hash,
"root" => ?block.tree_hash_root(),
"graffiti" => block.body().graffiti().as_utf8_lossy(),
"proposer_index" => block.proposer_index(),
"slot" => block.slot(),
"method" => "new_payload",
);
// Only trigger payload invalidation in fork choice if the
// `latest_valid_hash` is `Some` and non-zero.
//
// A `None` latest valid hash indicates that the EE was unable
// to determine the most recent valid ancestor. Since `block`
// has not yet been applied to fork choice, there's nothing to
// invalidate.
//
// An all-zeros payload indicates that an EIP-3675 check has
// failed regarding the validity of the terminal block. Rather
// than iterating back in the chain to find the terminal block
// and invalidating that, we simply reject this block without
// invalidating anything else.
if let Some(latest_valid_hash) =
latest_valid_hash.filter(|hash| *hash != ExecutionBlockHash::zero())
{
// This block has not yet been applied to fork choice, so the latest block that was
// imported to fork choice was the parent.
let latest_root = block.parent_root();
chain
.process_invalid_execution_payload(&InvalidationOperation::InvalidateMany {
head_block_root: latest_root,
always_invalidate_head: false,
latest_valid_ancestor: latest_valid_hash,
})
.await?;
}
Err(ExecutionPayloadError::RejectedByExecutionEngine { status }.into())
}
PayloadStatus::InvalidBlockHash {
ref validation_error,
} => {
warn!(
chain.log,
"Invalid execution payload block hash";
"validation_error" => ?validation_error,
"execution_block_hash" => ?execution_block_hash,
"root" => ?block.tree_hash_root(),
"graffiti" => block.body().graffiti().as_utf8_lossy(),
"proposer_index" => block.proposer_index(),
"slot" => block.slot(),
"method" => "new_payload",
);
// Returning an error here should be sufficient to invalidate the block. We have no
// information to indicate its parent is invalid, so no need to run
// `BeaconChain::process_invalid_execution_payload`.
Err(ExecutionPayloadError::RejectedByExecutionEngine { status }.into())
}
ExecutePayloadResponseStatus::Syncing => Ok(PayloadVerificationStatus::NotVerified),
},
Err(_) => Err(ExecutionPayloadError::RejectedByExecutionEngine.into()),
Err(e) => Err(ExecutionPayloadError::RequestFailed(e).into()),
}
}
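// The `filter` above folds two cases into one expression; spelled out as a
// `match` (sketch), the invalidation decision reads:
//
// match latest_valid_hash {
//     // A real ancestor hash: invalidate its invalid descendants in fork choice.
//     Some(hash) if hash != ExecutionBlockHash::zero() => { /* invalidate */ }
//     // All-zero hash: EIP-3675 terminal-block failure, reject only this block.
//     Some(_zero) => {}
//     // The EE could not determine a valid ancestor: nothing to invalidate.
//     None => {}
// }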
@@ -81,15 +233,16 @@ pub fn execute_payload<T: BeaconChainTypes>(
/// Equivalent to the `validate_merge_block` function in the merge Fork Choice Changes:
///
/// https://github.com/ethereum/consensus-specs/blob/v1.1.5/specs/merge/fork-choice.md#validate_merge_block
pub fn validate_merge_block<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
block: BeaconBlockRef<T::EthSpec>,
pub async fn validate_merge_block<'a, T: BeaconChainTypes>(
chain: &Arc<BeaconChain<T>>,
block: BeaconBlockRef<'a, T::EthSpec>,
allow_optimistic_import: AllowOptimisticImport,
) -> Result<(), BlockError<T::EthSpec>> {
let spec = &chain.spec;
let block_epoch = block.slot().epoch(T::EthSpec::slots_per_epoch());
let execution_payload = block.execution_payload()?;
if spec.terminal_block_hash != Hash256::zero() {
if spec.terminal_block_hash != ExecutionBlockHash::zero() {
if block_epoch < spec.terminal_block_hash_activation_epoch {
return Err(ExecutionPayloadError::InvalidActivationEpoch {
activation_epoch: spec.terminal_block_hash_activation_epoch,
@@ -98,10 +251,10 @@ pub fn validate_merge_block<T: BeaconChainTypes>(
.into());
}
if execution_payload.parent_hash != spec.terminal_block_hash {
if execution_payload.parent_hash() != spec.terminal_block_hash {
return Err(ExecutionPayloadError::InvalidTerminalBlockHash {
terminal_block_hash: spec.terminal_block_hash,
payload_parent_hash: execution_payload.parent_hash,
payload_parent_hash: execution_payload.parent_hash(),
}
.into());
}
@@ -115,29 +268,67 @@ pub fn validate_merge_block<T: BeaconChainTypes>(
.ok_or(ExecutionPayloadError::NoExecutionConnection)?;
let is_valid_terminal_pow_block = execution_layer
.block_on(|execution_layer| {
execution_layer.is_valid_terminal_pow_block_hash(execution_payload.parent_hash, spec)
})
.is_valid_terminal_pow_block_hash(execution_payload.parent_hash(), spec)
.await
.map_err(ExecutionPayloadError::from)?;
match is_valid_terminal_pow_block {
Some(true) => Ok(()),
Some(false) => Err(ExecutionPayloadError::InvalidTerminalPoWBlock {
parent_hash: execution_payload.parent_hash,
parent_hash: execution_payload.parent_hash(),
}
.into()),
None => {
debug!(
chain.log,
"Optimistically accepting terminal block";
"block_hash" => ?execution_payload.parent_hash,
"msg" => "the terminal block/parent was unavailable"
);
Ok(())
if allow_optimistic_import == AllowOptimisticImport::Yes
&& is_optimistic_candidate_block(chain, block.slot(), block.parent_root()).await?
{
debug!(
chain.log,
"Optimistically importing merge transition block";
"block_hash" => ?execution_payload.parent_hash(),
"msg" => "the terminal block/parent was unavailable"
);
// Store the optimistic transition block in the database for later verification.
OptimisticTransitionBlock::from_block(block)
.persist_in_store::<T, _>(&chain.store)?;
Ok(())
} else {
Err(ExecutionPayloadError::UnverifiedNonOptimisticCandidate.into())
}
}
}
}
/// Check to see if a block with the given parameters is valid to be imported optimistically.
pub async fn is_optimistic_candidate_block<T: BeaconChainTypes>(
chain: &Arc<BeaconChain<T>>,
block_slot: Slot,
block_parent_root: Hash256,
) -> Result<bool, BeaconChainError> {
let current_slot = chain.slot()?;
let inner_chain = chain.clone();
// Use a blocking task to check if the block is an optimistic candidate. Interacting
// with the `fork_choice` lock in an async task can block the core executor.
chain
.spawn_blocking_handle(
move || {
inner_chain
.canonical_head
.fork_choice_read_lock()
.is_optimistic_candidate_block(
current_slot,
block_slot,
&block_parent_root,
&inner_chain.spec,
)
},
"validate_merge_block_optimistic_candidate",
)
.await?
.map_err(BeaconChainError::from)
}
/// Validate the gossip block's execution_payload according to the checks described here:
/// https://github.com/ethereum/consensus-specs/blob/dev/specs/merge/p2p-interface.md#beacon_block
pub fn validate_execution_payload_for_gossip<T: BeaconChainTypes>(
@@ -152,7 +343,7 @@ pub fn validate_execution_payload_for_gossip<T: BeaconChainTypes>(
let is_merge_transition_complete = match parent_block.execution_status {
// Optimistically declare that an "unknown" status block has completed the merge.
ExecutionStatus::Valid(_) | ExecutionStatus::Unknown(_) => true,
ExecutionStatus::Valid(_) | ExecutionStatus::Optimistic(_) => true,
// It's impossible for an irrelevant block to have completed the merge. It is pre-merge
// by definition.
ExecutionStatus::Irrelevant(_) => false,
@@ -165,7 +356,7 @@ pub fn validate_execution_payload_for_gossip<T: BeaconChainTypes>(
}
};
if is_merge_transition_complete || execution_payload != &<_>::default() {
if is_merge_transition_complete || !execution_payload.is_default_with_empty_roots() {
let expected_timestamp = chain
.slot_clock
.start_of(block.slot())
@@ -175,11 +366,11 @@ pub fn validate_execution_payload_for_gossip<T: BeaconChainTypes>(
))?;
// The block's execution payload timestamp is correct with respect to the slot
if execution_payload.timestamp != expected_timestamp {
if execution_payload.timestamp() != expected_timestamp {
return Err(BlockError::ExecutionPayloadError(
ExecutionPayloadError::InvalidPayloadTimestamp {
expected: expected_timestamp,
found: execution_payload.timestamp,
found: execution_payload.timestamp(),
},
));
}
@@ -202,29 +393,66 @@ pub fn validate_execution_payload_for_gossip<T: BeaconChainTypes>(
///
/// https://github.com/ethereum/consensus-specs/blob/v1.1.5/specs/merge/validator.md#block-proposal
pub fn get_execution_payload<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
chain: Arc<BeaconChain<T>>,
state: &BeaconState<T::EthSpec>,
parent_block_root: Hash256,
proposer_index: u64,
) -> Result<ExecutionPayload<T::EthSpec>, BlockProductionError> {
Ok(prepare_execution_payload_blocking(chain, state, proposer_index)?.unwrap_or_default())
}
builder_params: BuilderParams,
builder_boost_factor: Option<u64>,
block_production_version: BlockProductionVersion,
) -> Result<PreparePayloadHandle<T::EthSpec>, BlockProductionError> {
// Compute all required values from the `state` now to avoid needing to pass it into a spawned
// task.
let spec = &chain.spec;
let current_epoch = state.current_epoch();
let is_merge_transition_complete = is_merge_transition_complete(state);
let timestamp =
compute_timestamp_at_slot(state, state.slot(), spec).map_err(BeaconStateError::from)?;
let random = *state.get_randao_mix(current_epoch)?;
let latest_execution_payload_header_block_hash =
state.latest_execution_payload_header()?.block_hash();
let withdrawals = match state {
&BeaconState::Capella(_) | &BeaconState::Deneb(_) | &BeaconState::Electra(_) => {
Some(get_expected_withdrawals(state, spec)?.into())
}
&BeaconState::Merge(_) => None,
// These shouldn't happen but they're here to make the pattern irrefutable
&BeaconState::Base(_) | &BeaconState::Altair(_) => None,
};
let parent_beacon_block_root = match state {
BeaconState::Deneb(_) | BeaconState::Electra(_) => Some(parent_block_root),
BeaconState::Merge(_) | BeaconState::Capella(_) => None,
// These shouldn't happen but they're here to make the pattern irrefutable
BeaconState::Base(_) | BeaconState::Altair(_) => None,
};
/// Wraps the async `prepare_execution_payload` function as a blocking task.
pub fn prepare_execution_payload_blocking<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
state: &BeaconState<T::EthSpec>,
proposer_index: u64,
) -> Result<Option<ExecutionPayload<T::EthSpec>>, BlockProductionError> {
let execution_layer = chain
.execution_layer
.as_ref()
.ok_or(BlockProductionError::ExecutionLayerMissing)?;
// Spawn a task to obtain the execution payload from the EL via a series of async calls. The
// `join_handle` can be used to await the result of the function.
let join_handle = chain
.task_executor
.clone()
.spawn_handle(
async move {
prepare_execution_payload::<T>(
&chain,
is_merge_transition_complete,
timestamp,
random,
proposer_index,
latest_execution_payload_header_block_hash,
builder_params,
withdrawals,
parent_beacon_block_root,
builder_boost_factor,
block_production_version,
)
.await
},
"get_execution_payload",
)
.ok_or(BlockProductionError::ShuttingDown)?;
execution_layer
.block_on_generic(|_| async {
prepare_execution_payload(chain, state, proposer_index).await
})
.map_err(BlockProductionError::BlockingFailed)?
Ok(join_handle)
}
/// Prepares an execution payload for inclusion in a block.
@@ -241,74 +469,110 @@ pub fn prepare_execution_payload_blocking<T: BeaconChainTypes>(
/// Equivalent to the `prepare_execution_payload` function in the Validator Guide:
///
/// https://github.com/ethereum/consensus-specs/blob/v1.1.5/specs/merge/validator.md#block-proposal
pub async fn prepare_execution_payload<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
state: &BeaconState<T::EthSpec>,
#[allow(clippy::too_many_arguments)]
pub async fn prepare_execution_payload<T>(
chain: &Arc<BeaconChain<T>>,
is_merge_transition_complete: bool,
timestamp: u64,
random: Hash256,
proposer_index: u64,
) -> Result<Option<ExecutionPayload<T::EthSpec>>, BlockProductionError> {
latest_execution_payload_header_block_hash: ExecutionBlockHash,
builder_params: BuilderParams,
withdrawals: Option<Vec<Withdrawal>>,
parent_beacon_block_root: Option<Hash256>,
builder_boost_factor: Option<u64>,
block_production_version: BlockProductionVersion,
) -> Result<BlockProposalContentsType<T::EthSpec>, BlockProductionError>
where
T: BeaconChainTypes,
{
let current_epoch = builder_params.slot.epoch(T::EthSpec::slots_per_epoch());
let spec = &chain.spec;
let fork = spec.fork_name_at_slot::<T::EthSpec>(builder_params.slot);
let execution_layer = chain
.execution_layer
.as_ref()
.ok_or(BlockProductionError::ExecutionLayerMissing)?;
let parent_hash = if !is_merge_transition_complete(state) {
let is_terminal_block_hash_set = spec.terminal_block_hash != Hash256::zero();
let parent_hash = if !is_merge_transition_complete {
let is_terminal_block_hash_set = spec.terminal_block_hash != ExecutionBlockHash::zero();
let is_activation_epoch_reached =
state.current_epoch() >= spec.terminal_block_hash_activation_epoch;
current_epoch >= spec.terminal_block_hash_activation_epoch;
if is_terminal_block_hash_set && !is_activation_epoch_reached {
return Ok(None);
// Use the "empty" payload if there's a terminal block hash, but we haven't reached the
// terminal block epoch yet.
return Ok(BlockProposalContentsType::Full(
BlockProposalContents::Payload {
payload: FullPayload::default_at_fork(fork)?,
block_value: Uint256::zero(),
},
));
}
let terminal_pow_block_hash = execution_layer
.get_terminal_pow_block_hash(spec)
.get_terminal_pow_block_hash(spec, timestamp)
.await
.map_err(BlockProductionError::TerminalPoWBlockLookupFailed)?;
if let Some(terminal_pow_block_hash) = terminal_pow_block_hash {
terminal_pow_block_hash
} else {
return Ok(None);
// If the merge transition hasn't occurred yet and the EL hasn't found the terminal
// block, return an "empty" payload.
return Ok(BlockProposalContentsType::Full(
BlockProposalContents::Payload {
payload: FullPayload::default_at_fork(fork)?,
block_value: Uint256::zero(),
},
));
}
} else {
state.latest_execution_payload_header()?.block_hash
latest_execution_payload_header_block_hash
};
let timestamp = compute_timestamp_at_slot(state, spec).map_err(BeaconStateError::from)?;
let random = *state.get_randao_mix(state.current_epoch())?;
let finalized_root = state.finalized_checkpoint().root;
// Try to obtain the fork choice update parameters from the cached head.
//
// Use a blocking task to interact with the `canonical_head` lock otherwise we risk blocking the
// core `tokio` executor.
let inner_chain = chain.clone();
let forkchoice_update_params = chain
.spawn_blocking_handle(
move || {
inner_chain
.canonical_head
.cached_head()
.forkchoice_update_parameters()
},
"prepare_execution_payload_forkchoice_update_params",
)
.await
.map_err(BlockProductionError::BeaconChain)?;
// The finalized block hash is not included in the specification, however we provide this
// parameter so that the execution layer can produce a payload id if one is not already known
// (e.g., due to a recent reorg).
let finalized_block_hash =
if let Some(block) = chain.fork_choice.read().get_block(&finalized_root) {
block.execution_status.block_hash()
} else {
chain
.store
.get_block(&finalized_root)
.map_err(BlockProductionError::FailedToReadFinalizedBlock)?
.ok_or(BlockProductionError::MissingFinalizedBlock(finalized_root))?
.message()
.body()
.execution_payload()
.ok()
.map(|ep| ep.block_hash)
};
let suggested_fee_recipient = execution_layer
.get_suggested_fee_recipient(proposer_index)
.await;
let payload_attributes = PayloadAttributes::new(
timestamp,
random,
suggested_fee_recipient,
withdrawals,
parent_beacon_block_root,
);
// Note: the suggested_fee_recipient is stored in the `execution_layer`, it will add this parameter.
let execution_payload = execution_layer
let block_contents = execution_layer
.get_payload(
parent_hash,
timestamp,
random,
finalized_block_hash.unwrap_or_else(Hash256::zero),
proposer_index,
&payload_attributes,
forkchoice_update_params,
builder_params,
fork,
&chain.spec,
builder_boost_factor,
block_production_version,
)
.await
.map_err(BlockProductionError::GetPayloadFailed)?;
Ok(Some(execution_payload))
Ok(block_contents)
}
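// A minimal sketch distilling the pre-/post-merge parent-hash decision above into a
// standalone function; the `u64` stand-ins for `ExecutionBlockHash` and the function
// name are illustrative assumptions, not part of the real API. `None` means
// "produce an empty payload".
fn choose_parent_hash_sketch(
    is_merge_transition_complete: bool,
    is_terminal_block_hash_set: bool,
    is_activation_epoch_reached: bool,
    terminal_pow_block_hash: Option<u64>,
    latest_header_block_hash: u64,
) -> Option<u64> {
    if is_merge_transition_complete {
        // Post-merge: always build on the head payload's block hash.
        Some(latest_header_block_hash)
    } else if is_terminal_block_hash_set && !is_activation_epoch_reached {
        // A terminal block hash override is configured but not yet active: empty payload.
        None
    } else {
        // Pre-merge: build on the terminal PoW block iff the EL has found it.
        terminal_pow_block_hash
    }
}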

View File

@@ -0,0 +1,97 @@
//! Concurrency helpers for synchronising block proposal with fork choice.
//!
//! The transmitter provides a way for a thread running fork choice on a schedule to signal
//! to the receiver that fork choice has been updated for a given slot.
use crate::BeaconChainError;
use parking_lot::{Condvar, Mutex};
use std::sync::Arc;
use std::time::Duration;
use types::Slot;
/// Sender, for use by the per-slot task timer.
pub struct ForkChoiceSignalTx {
pair: Arc<(Mutex<Slot>, Condvar)>,
}
/// Receiver, for use by the beacon chain waiting on fork choice to complete.
pub struct ForkChoiceSignalRx {
pair: Arc<(Mutex<Slot>, Condvar)>,
}
pub enum ForkChoiceWaitResult {
/// Successfully reached a slot greater than or equal to the awaited slot.
Success(Slot),
/// Fork choice was updated to a lower slot, indicative of lag or processing delays.
Behind(Slot),
/// Timed out waiting for the fork choice update from the sender.
TimeOut,
}
impl ForkChoiceSignalTx {
pub fn new() -> Self {
let pair = Arc::new((Mutex::new(Slot::new(0)), Condvar::new()));
Self { pair }
}
pub fn get_receiver(&self) -> ForkChoiceSignalRx {
ForkChoiceSignalRx {
pair: self.pair.clone(),
}
}
/// Signal to the receiver that fork choice has been updated to `slot`.
///
/// Return an error if the provided `slot` is strictly less than any previously provided slot.
pub fn notify_fork_choice_complete(&self, slot: Slot) -> Result<(), BeaconChainError> {
let (lock, condvar) = &*self.pair;
let mut current_slot = lock.lock();
if slot < *current_slot {
return Err(BeaconChainError::ForkChoiceSignalOutOfOrder {
current: *current_slot,
latest: slot,
});
} else {
*current_slot = slot;
}
// We use `notify_all` because there may be multiple block proposals waiting simultaneously.
// Usually there'll be 0-1.
condvar.notify_all();
Ok(())
}
}
impl Default for ForkChoiceSignalTx {
fn default() -> Self {
Self::new()
}
}
impl ForkChoiceSignalRx {
pub fn wait_for_fork_choice(&self, slot: Slot, timeout: Duration) -> ForkChoiceWaitResult {
let (lock, condvar) = &*self.pair;
let mut current_slot = lock.lock();
// Wait for `current_slot >= slot`.
//
// Do not loop and wait: if we receive an update for the wrong slot then something is
// quite out of whack and we shouldn't waste more time waiting.
if *current_slot < slot {
let timeout_result = condvar.wait_for(&mut current_slot, timeout);
if timeout_result.timed_out() {
return ForkChoiceWaitResult::TimeOut;
}
}
if *current_slot >= slot {
ForkChoiceWaitResult::Success(*current_slot)
} else {
ForkChoiceWaitResult::Behind(*current_slot)
}
}
}
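// A minimal usage sketch of the signal pair above, exercised from two threads; the
// slot number, timeout and thread structure are illustrative assumptions.
fn fork_choice_signal_sketch() {
    let tx = ForkChoiceSignalTx::new();
    let rx = tx.get_receiver();
    // A hypothetical proposer thread blocks for up to 500ms waiting on slot 1.
    let waiter = std::thread::spawn(move || {
        rx.wait_for_fork_choice(Slot::new(1), Duration::from_millis(500))
    });
    // The per-slot timer signals that fork choice has completed for slot 1.
    tx.notify_fork_choice_complete(Slot::new(1)).unwrap();
    match waiter.join().unwrap() {
        ForkChoiceWaitResult::Success(slot) => println!("fork choice at slot {slot}"),
        ForkChoiceWaitResult::Behind(slot) => println!("fork choice behind at slot {slot}"),
        ForkChoiceWaitResult::TimeOut => println!("timed out waiting for fork choice"),
    }
}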

View File

@@ -4,7 +4,8 @@ use itertools::process_results;
use slog::{info, warn, Logger};
use state_processing::state_advance::complete_state_advance;
use state_processing::{
per_block_processing, per_block_processing::BlockSignatureStrategy, VerifyBlockRoot,
per_block_processing, per_block_processing::BlockSignatureStrategy, ConsensusContext,
StateProcessingStrategy, VerifyBlockRoot,
};
use std::sync::Arc;
use std::time::Duration;
@@ -48,7 +49,7 @@ pub fn revert_to_fork_boundary<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>
);
let block_iter = ParentRootBlockIterator::fork_tolerant(&store, head_block_root);
process_results(block_iter, |mut iter| {
let (block_root, blinded_block) = process_results(block_iter, |mut iter| {
iter.find_map(|(block_root, block)| {
if block.slot() < fork_epoch.start_slot(E::slots_per_epoch()) {
Some((block_root, block))
@@ -69,7 +70,13 @@ pub fn revert_to_fork_boundary<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>
e, CORRUPT_DB_MESSAGE
)
})?
.ok_or_else(|| format!("No pre-fork blocks found. {}", CORRUPT_DB_MESSAGE))
.ok_or_else(|| format!("No pre-fork blocks found. {}", CORRUPT_DB_MESSAGE))?;
let block = store
.make_full_block(&block_root, blinded_block)
.map_err(|e| format!("Unable to add payload to new head block: {:?}", e))?;
Ok((block_root, block))
}
/// Reset fork choice to the finalized checkpoint of the supplied head state.
@@ -91,13 +98,14 @@ pub fn reset_fork_choice_to_finalization<E: EthSpec, Hot: ItemStore<E>, Cold: It
head_block_root: Hash256,
head_state: &BeaconState<E>,
store: Arc<HotColdDB<E, Hot, Cold>>,
current_slot: Option<Slot>,
spec: &ChainSpec,
) -> Result<ForkChoice<BeaconForkChoiceStore<E, Hot, Cold>, E>, String> {
// Fetch finalized block.
let finalized_checkpoint = head_state.finalized_checkpoint();
let finalized_block_root = finalized_checkpoint.root;
let finalized_block = store
.get_block(&finalized_block_root)
.get_full_block(&finalized_block_root, None)
.map_err(|e| format!("Error loading finalized block: {:?}", e))?
.ok_or_else(|| {
format!(
@@ -132,17 +140,20 @@ pub fn reset_fork_choice_to_finalization<E: EthSpec, Hot: ItemStore<E>, Cold: It
})?;
let finalized_snapshot = BeaconSnapshot {
beacon_block_root: finalized_block_root,
beacon_block: finalized_block,
beacon_block: Arc::new(finalized_block),
beacon_state: finalized_state,
};
let fc_store = BeaconForkChoiceStore::get_forkchoice_store(store.clone(), &finalized_snapshot);
let fc_store = BeaconForkChoiceStore::get_forkchoice_store(store.clone(), &finalized_snapshot)
.map_err(|e| format!("Unable to reset fork choice store for revert: {e:?}"))?;
let mut fork_choice = ForkChoice::from_anchor(
fc_store,
finalized_block_root,
&finalized_snapshot.beacon_block,
&finalized_snapshot.beacon_state,
current_slot,
spec,
)
.map_err(|e| format!("Unable to reset fork choice for revert: {:?}", e))?;
@@ -158,12 +169,15 @@ pub fn reset_fork_choice_to_finalization<E: EthSpec, Hot: ItemStore<E>, Cold: It
complete_state_advance(&mut state, None, block.slot(), spec)
.map_err(|e| format!("State advance failed: {:?}", e))?;
let mut ctxt = ConsensusContext::new(block.slot())
.set_proposer_index(block.message().proposer_index());
per_block_processing(
&mut state,
&block,
None,
BlockSignatureStrategy::NoVerification,
StateProcessingStrategy::Accurate,
VerifyBlockRoot::True,
&mut ctxt,
spec,
)
.map_err(|e| format!("Error replaying block: {:?}", e))?;
@@ -172,13 +186,12 @@ pub fn reset_fork_choice_to_finalization<E: EthSpec, Hot: ItemStore<E>, Cold: It
// retro-actively determine if they were valid or not.
//
// This scenario is so rare that it seems OK to double-verify some blocks.
let payload_verification_status = PayloadVerificationStatus::NotVerified;
let payload_verification_status = PayloadVerificationStatus::Optimistic;
let (block, _) = block.deconstruct();
fork_choice
.on_block(
block.slot(),
&block,
block.message(),
block.canonical_root(),
// Reward proposer boost. We are reinforcing the canonical chain.
Duration::from_secs(0),

View File

@@ -1,4 +1,4 @@
use parking_lot::RwLock;
use parking_lot::{RwLock, RwLockReadGuard};
use ssz_derive::{Decode, Encode};
use std::collections::HashMap;
use types::{Hash256, Slot};
@@ -16,6 +16,8 @@ pub enum Error {
#[derive(Default, Debug)]
pub struct HeadTracker(pub RwLock<HashMap<Hash256, Slot>>);
pub type HeadTrackerReader<'a> = RwLockReadGuard<'a, HashMap<Hash256, Slot>>;
impl HeadTracker {
/// Register a block with `Self`, so it may or may not be included in a `Self::heads` call.
///
@@ -44,8 +46,13 @@ impl HeadTracker {
/// Returns a `SszHeadTracker`, which contains all necessary information to restore the state
/// of `Self` at some later point.
///
/// Should ONLY be used for tests, due to the potential for database races.
///
/// See <https://github.com/sigp/lighthouse/issues/4773>
#[cfg(test)]
pub fn to_ssz_container(&self) -> SszHeadTracker {
SszHeadTracker::from_map(&*self.0.read())
SszHeadTracker::from_map(&self.0.read())
}
/// Creates a new `Self` from the given `SszHeadTracker`, restoring `Self` to the same state of
@@ -83,8 +90,8 @@ impl PartialEq<HeadTracker> for HeadTracker {
/// This is used when persisting the state of the `BeaconChain` to disk.
#[derive(Encode, Decode, Clone)]
pub struct SszHeadTracker {
roots: Vec<Hash256>,
slots: Vec<Slot>,
pub roots: Vec<Hash256>,
pub slots: Vec<Slot>,
}
impl SszHeadTracker {

View File

@@ -1,3 +1,4 @@
use crate::data_availability_checker::AvailableBlock;
use crate::{errors::BeaconChainError as Error, metrics, BeaconChain, BeaconChainTypes};
use itertools::Itertools;
use slog::debug;
@@ -8,8 +9,8 @@ use state_processing::{
use std::borrow::Cow;
use std::iter;
use std::time::Duration;
use store::{chunked_vector::BlockRoots, AnchorInfo, ChunkWriter, KeyValueStore};
use types::{Hash256, SignedBeaconBlock, Slot};
use store::{get_key_for_col, AnchorInfo, BlobInfo, DBColumn, KeyValueStore, KeyValueStoreOp};
use types::{Hash256, Slot};
/// Use a longer timeout on the pubkey cache.
///
@@ -58,27 +59,30 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
/// Return the number of blocks successfully imported.
pub fn import_historical_block_batch(
&self,
blocks: &[SignedBeaconBlock<T::EthSpec>],
mut blocks: Vec<AvailableBlock<T::EthSpec>>,
) -> Result<usize, Error> {
let anchor_info = self
.store
.get_anchor_info()
.ok_or(HistoricalBlockError::NoAnchorInfo)?;
let blob_info = self.store.get_blob_info();
// Take all blocks with slots less than the oldest block slot.
let num_relevant =
blocks.partition_point(|block| block.slot() < anchor_info.oldest_block_slot);
let blocks_to_import = &blocks
.get(..num_relevant)
.ok_or(HistoricalBlockError::IndexOutOfBounds)?;
let num_relevant = blocks.partition_point(|available_block| {
available_block.block().slot() < anchor_info.oldest_block_slot
});
if blocks_to_import.len() != blocks.len() {
let total_blocks = blocks.len();
blocks.truncate(num_relevant);
let blocks_to_import = blocks;
if blocks_to_import.len() != total_blocks {
debug!(
self.log,
"Ignoring some historic blocks";
"oldest_block_slot" => anchor_info.oldest_block_slot,
"total_blocks" => blocks.len(),
"ignored" => blocks.len().saturating_sub(blocks_to_import.len()),
"total_blocks" => total_blocks,
"ignored" => total_blocks.saturating_sub(blocks_to_import.len()),
);
}
@@ -86,17 +90,22 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
return Ok(0);
}
let n_blobs_lists_to_import = blocks_to_import
.iter()
.filter(|available_block| available_block.blobs().is_some())
.count();
let mut expected_block_root = anchor_info.oldest_block_parent;
let mut prev_block_slot = anchor_info.oldest_block_slot;
let mut chunk_writer =
ChunkWriter::<BlockRoots, _, _>::new(&self.store.cold_db, prev_block_slot.as_usize())?;
let mut cold_batch = Vec::with_capacity(blocks.len());
let mut hot_batch = Vec::with_capacity(blocks.len());
let mut new_oldest_blob_slot = blob_info.oldest_blob_slot;
for block in blocks_to_import.iter().rev() {
// Check chain integrity.
let block_root = block.canonical_root();
let mut blob_batch = Vec::with_capacity(n_blobs_lists_to_import);
let mut cold_batch = Vec::with_capacity(blocks_to_import.len());
let mut signed_blocks = Vec::with_capacity(blocks_to_import.len());
for available_block in blocks_to_import.into_iter().rev() {
let (block_root, block, maybe_blobs) = available_block.deconstruct();
if block_root != expected_block_root {
return Err(HistoricalBlockError::MismatchedBlockRoot {
@@ -106,32 +115,52 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
.into());
}
// Store block in the hot database.
hot_batch.push(self.store.block_as_kv_store_op(&block_root, block));
let blinded_block = block.clone_as_blinded();
// Store block in the freezer database without payload.
self.store.blinded_block_as_cold_kv_store_ops(
&block_root,
&blinded_block,
&mut cold_batch,
)?;
// Store the blobs too
if let Some(blobs) = maybe_blobs {
new_oldest_blob_slot = Some(block.slot());
self.store
.blobs_as_kv_store_ops(&block_root, blobs, &mut blob_batch);
}
// Store block roots, including at all skip slots in the freezer DB.
for slot in (block.slot().as_usize()..prev_block_slot.as_usize()).rev() {
chunk_writer.set(slot, block_root, &mut cold_batch)?;
// The block root mapping for `block.slot()` itself was already written when the block
// was stored, above.
for slot in (block.slot().as_usize() + 1..prev_block_slot.as_usize()).rev() {
cold_batch.push(KeyValueStoreOp::PutKeyValue(
get_key_for_col(DBColumn::BeaconBlockRoots.into(), &slot.to_be_bytes()),
block_root.as_bytes().to_vec(),
));
}
prev_block_slot = block.slot();
expected_block_root = block.message().parent_root();
signed_blocks.push(block);
// If we've reached genesis, add the genesis block root to the batch and set the
// anchor slot to 0 to indicate completion.
// If we've reached genesis, add the genesis block root to the batch for all slots
// between 0 and the first block slot, and set the anchor slot to 0 to indicate
// completion.
if expected_block_root == self.genesis_block_root {
let genesis_slot = self.spec.genesis_slot;
chunk_writer.set(
genesis_slot.as_usize(),
self.genesis_block_root,
&mut cold_batch,
)?;
for slot in genesis_slot.as_u64()..prev_block_slot.as_u64() {
cold_batch.push(KeyValueStoreOp::PutKeyValue(
get_key_for_col(DBColumn::BeaconBlockRoots.into(), &slot.to_be_bytes()),
self.genesis_block_root.as_bytes().to_vec(),
));
}
prev_block_slot = genesis_slot;
expected_block_root = Hash256::zero();
break;
}
}
chunk_writer.write(&mut cold_batch)?;
// These were pushed in reverse order, so reverse them again.
signed_blocks.reverse();
// Verify signatures in one batch, holding the pubkey cache lock for the shortest duration
// possible. For each block fetch the parent root from its successor. Slicing from index 1
@@ -142,15 +171,16 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
.validator_pubkey_cache
.try_read_for(PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or(HistoricalBlockError::ValidatorPubkeyCacheTimeout)?;
let block_roots = blocks_to_import
let block_roots = signed_blocks
.get(1..)
.ok_or(HistoricalBlockError::IndexOutOfBounds)?
.iter()
.map(|block| block.parent_root())
.chain(iter::once(anchor_info.oldest_block_parent));
let signature_set = blocks_to_import
let signature_set = signed_blocks
.iter()
.zip_eq(block_roots)
.filter(|&(_block, block_root)| (block_root != self.genesis_block_root))
.map(|(block, block_root)| {
block_proposal_signature_set_from_parts(
block,
@@ -175,11 +205,31 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
drop(verify_timer);
drop(sig_timer);
// Write the I/O batches to disk, writing the blocks themselves first, as it's better
// for the hot DB to contain extra blocks than for the cold DB to point to blocks that
// do not exist.
self.store.hot_db.do_atomically(hot_batch)?;
// Write the I/O batches to disk.
// We fsync after each write because we need the writes to the blob and freezer DB to
// be persisted if the writes to the hot DB are persisted. Without fsync we could end up
// in a situation where the hot DB's anchor is updated but the actual blocks are forgotten
// from disk.
self.store.blobs_db.do_atomically(blob_batch)?;
self.store.blobs_db.sync()?;
self.store.cold_db.do_atomically(cold_batch)?;
self.store.cold_db.sync()?;
let mut anchor_and_blob_batch = Vec::with_capacity(2);
// Update the blob info.
if new_oldest_blob_slot != blob_info.oldest_blob_slot {
if let Some(oldest_blob_slot) = new_oldest_blob_slot {
let new_blob_info = BlobInfo {
oldest_blob_slot: Some(oldest_blob_slot),
..blob_info.clone()
};
anchor_and_blob_batch.push(
self.store
.compare_and_set_blob_info(blob_info, new_blob_info)?,
);
}
}
// Update the anchor.
let new_anchor = AnchorInfo {
@@ -187,16 +237,24 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
oldest_block_parent: expected_block_root,
..anchor_info
};
let backfill_complete = new_anchor.block_backfill_complete();
self.store
.compare_and_set_anchor_info_with_write(Some(anchor_info), Some(new_anchor))?;
let backfill_complete = new_anchor.block_backfill_complete(self.genesis_backfill_slot);
anchor_and_blob_batch.push(
self.store
.compare_and_set_anchor_info(Some(anchor_info), Some(new_anchor))?,
);
self.store.hot_db.do_atomically(anchor_and_blob_batch)?;
self.store.hot_db.sync()?;
// If backfill has completed and the chain is configured to reconstruct historic states,
// send a message to the background migrator instructing it to begin reconstruction.
if backfill_complete && self.config.reconstruct_historic_states {
// This can only happen if we have backfilled all the way to genesis.
if backfill_complete
&& self.genesis_backfill_slot == Slot::new(0)
&& self.config.reconstruct_historic_states
{
self.store_migrator.process_reconstruction();
}
Ok(blocks_to_import.len())
Ok(num_relevant)
}
}
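// A small self-contained sketch of the `partition_point` prefix selection used above,
// with plain `u64` slots standing in for blocks; the values are illustrative.
fn partition_point_sketch() {
    // Blocks arrive sorted by ascending slot; the anchor's oldest block slot is 5.
    let slots: Vec<u64> = vec![1, 2, 3, 5, 6, 8];
    let oldest_block_slot = 5;
    // Count the leading blocks strictly older than the anchor...
    let num_relevant = slots.partition_point(|&slot| slot < oldest_block_slot);
    assert_eq!(num_relevant, 3);
    // ...everything at or beyond the anchor is ignored by the importer.
    assert_eq!(&slots[..num_relevant], &[1, 2, 3]);
}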

View File

@@ -0,0 +1,78 @@
use kzg::{Blob as KzgBlob, Error as KzgError, Kzg};
use types::{Blob, EthSpec, Hash256, KzgCommitment, KzgProof};
/// Converts an SSZ blob (a `List`) into the array type used by the KZG
/// crypto library.
fn ssz_blob_to_crypto_blob<E: EthSpec>(blob: &Blob<E>) -> Result<KzgBlob, KzgError> {
KzgBlob::from_bytes(blob.as_ref()).map_err(Into::into)
}
/// Validate a single blob-commitment-proof triplet from a `BlobSidecar`.
pub fn validate_blob<E: EthSpec>(
kzg: &Kzg,
blob: &Blob<E>,
kzg_commitment: KzgCommitment,
kzg_proof: KzgProof,
) -> Result<(), KzgError> {
let _timer = crate::metrics::start_timer(&crate::metrics::KZG_VERIFICATION_SINGLE_TIMES);
let kzg_blob = ssz_blob_to_crypto_blob::<E>(blob)?;
kzg.verify_blob_kzg_proof(&kzg_blob, kzg_commitment, kzg_proof)
}
/// Validate a batch of blob-commitment-proof triplets from multiple `BlobSidecars`.
pub fn validate_blobs<E: EthSpec>(
kzg: &Kzg,
expected_kzg_commitments: &[KzgCommitment],
blobs: Vec<&Blob<E>>,
kzg_proofs: &[KzgProof],
) -> Result<(), KzgError> {
let _timer = crate::metrics::start_timer(&crate::metrics::KZG_VERIFICATION_BATCH_TIMES);
let blobs = blobs
.into_iter()
.map(|blob| ssz_blob_to_crypto_blob::<E>(blob))
.collect::<Result<Vec<_>, KzgError>>()?;
kzg.verify_blob_kzg_proof_batch(&blobs, expected_kzg_commitments, kzg_proofs)
}
/// Compute the kzg proof given an ssz blob and its kzg commitment.
pub fn compute_blob_kzg_proof<E: EthSpec>(
kzg: &Kzg,
blob: &Blob<E>,
kzg_commitment: KzgCommitment,
) -> Result<KzgProof, KzgError> {
let kzg_blob = ssz_blob_to_crypto_blob::<E>(blob)?;
kzg.compute_blob_kzg_proof(&kzg_blob, kzg_commitment)
}
/// Compute the kzg commitment for a given blob.
pub fn blob_to_kzg_commitment<E: EthSpec>(
kzg: &Kzg,
blob: &Blob<E>,
) -> Result<KzgCommitment, KzgError> {
let kzg_blob = ssz_blob_to_crypto_blob::<E>(blob)?;
kzg.blob_to_kzg_commitment(&kzg_blob)
}
/// Compute the kzg proof for a given blob and an evaluation point z.
pub fn compute_kzg_proof<E: EthSpec>(
kzg: &Kzg,
blob: &Blob<E>,
z: Hash256,
) -> Result<(KzgProof, Hash256), KzgError> {
let z = z.0.into();
let kzg_blob = ssz_blob_to_crypto_blob::<E>(blob)?;
kzg.compute_kzg_proof(&kzg_blob, &z)
.map(|(proof, z)| (proof, Hash256::from_slice(&z.to_vec())))
}
/// Verify a `kzg_proof` for a `kzg_commitment`, attesting that the committed polynomial evaluates to `y` at `z`.
pub fn verify_kzg_proof<E: EthSpec>(
kzg: &Kzg,
kzg_commitment: KzgCommitment,
kzg_proof: KzgProof,
z: Hash256,
y: Hash256,
) -> Result<bool, KzgError> {
kzg.verify_kzg_proof(kzg_commitment, &z.0.into(), &y.0.into(), kzg_proof)
}
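// A hedged round-trip sketch through the helpers above. Obtaining `kzg` (which
// requires loading a trusted setup) and a populated `blob` are assumed and not shown.
fn kzg_round_trip_sketch<E: EthSpec>(kzg: &Kzg, blob: &Blob<E>) -> Result<(), KzgError> {
    // Commit to the blob, prove the commitment, then verify the full triplet.
    let commitment = blob_to_kzg_commitment::<E>(kzg, blob)?;
    let proof = compute_blob_kzg_proof::<E>(kzg, blob, commitment)?;
    validate_blob::<E>(kzg, blob, commitment, proof)
}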

View File

@@ -1,47 +1,70 @@
#![recursion_limit = "128"] // For lazy-static
pub mod attestation_rewards;
pub mod attestation_simulator;
pub mod attestation_verification;
mod attester_cache;
pub mod beacon_block_reward;
mod beacon_block_streamer;
mod beacon_chain;
mod beacon_fork_choice_store;
mod beacon_proposer_cache;
pub mod beacon_proposer_cache;
mod beacon_snapshot;
pub mod blob_verification;
pub mod block_reward;
mod block_times_cache;
mod block_verification;
pub mod block_verification_types;
pub mod builder;
pub mod canonical_head;
pub mod capella_readiness;
pub mod chain_config;
pub mod data_availability_checker;
pub mod deneb_readiness;
mod early_attester_cache;
pub mod electra_readiness;
mod errors;
pub mod eth1_chain;
mod eth1_finalization_cache;
pub mod events;
mod execution_payload;
pub mod execution_payload;
pub mod fork_choice_signal;
pub mod fork_revert;
mod head_tracker;
pub mod historical_blocks;
mod metrics;
pub mod kzg_utils;
pub mod light_client_finality_update_verification;
pub mod light_client_optimistic_update_verification;
mod light_client_server_cache;
pub mod merge_readiness;
pub mod metrics;
pub mod migrate;
mod naive_aggregation_pool;
mod observed_aggregates;
mod observed_attesters;
mod observed_block_producers;
mod observed_blob_sidecars;
pub mod observed_block_producers;
pub mod observed_operations;
mod observed_slashable;
pub mod otb_verification_service;
mod parallel_state_cache;
mod persisted_beacon_chain;
mod persisted_fork_choice;
mod pre_finalization_cache;
pub mod proposer_prep_service;
pub mod schema_change;
mod shuffling_cache;
mod snapshot_cache;
pub mod shuffling_cache;
pub mod state_advance_timer;
pub mod sync_committee_rewards;
pub mod sync_committee_verification;
pub mod test_utils;
mod timeout_rw_lock;
pub mod validator_monitor;
mod validator_pubkey_cache;
pub use self::beacon_chain::{
AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BeaconStore, ChainSegmentResult,
ForkChoiceError, HeadInfo, HeadSafetyStatus, StateSkipConfig, WhenSlotSkipped,
MAXIMUM_GOSSIP_CLOCK_DISPARITY,
AttestationProcessingOutcome, AvailabilityProcessingStatus, BeaconBlockResponse,
BeaconBlockResponseWrapper, BeaconChain, BeaconChainTypes, BeaconStore, ChainSegmentResult,
ForkChoiceError, LightClientProducerEvent, OverrideForkchoiceUpdate, ProduceBlockVerification,
StateSkipConfig, WhenSlotSkipped, INVALID_FINALIZED_MERGE_TRANSITION_BLOCK_SHUTDOWN_REASON,
INVALID_JUSTIFIED_PAYLOAD_SHUTDOWN_REASON,
};
pub use self::beacon_snapshot::BeaconSnapshot;
pub use self::chain_config::ChainConfig;
@@ -49,10 +72,22 @@ pub use self::errors::{BeaconChainError, BlockProductionError};
pub use self::historical_blocks::HistoricalBlockError;
pub use attestation_verification::Error as AttestationError;
pub use beacon_fork_choice_store::{BeaconForkChoiceStore, Error as ForkChoiceStoreError};
pub use block_verification::{BlockError, ExecutionPayloadError, GossipVerifiedBlock};
pub use block_verification::{
get_block_root, BlockError, ExecutionPayloadError, ExecutionPendingBlock, GossipVerifiedBlock,
IntoExecutionPendingBlock, IntoGossipVerifiedBlockContents, PayloadVerificationOutcome,
PayloadVerificationStatus,
};
pub use block_verification_types::AvailabilityPendingExecutedBlock;
pub use block_verification_types::ExecutedBlock;
pub use canonical_head::{CachedHead, CanonicalHead, CanonicalHeadRwLock};
pub use eth1_chain::{Eth1Chain, Eth1ChainBackend};
pub use events::ServerSentEventHandler;
pub use execution_layer::EngineState;
pub use execution_payload::NotifyExecutionLayer;
pub use fork_choice::{ExecutionStatus, ForkchoiceUpdateParameters};
pub use kzg::TrustedSetup;
pub use metrics::scrape_for_metrics;
pub use migrate::MigratorConfig;
pub use parking_lot;
pub use slot_clock;
pub use state_processing::per_block_processing::errors::{
@@ -62,3 +97,13 @@ pub use state_processing::per_block_processing::errors::{
pub use store;
pub use timeout_rw_lock::TimeoutRwLock;
pub use types;
pub mod validator_pubkey_cache {
use crate::BeaconChainTypes;
pub type ValidatorPubkeyCache<T> = store::ValidatorPubkeyCache<
<T as BeaconChainTypes>::EthSpec,
<T as BeaconChainTypes>::HotStore,
<T as BeaconChainTypes>::ColdStore,
>;
}

View File

@@ -0,0 +1,75 @@
use crate::{BeaconChain, BeaconChainTypes};
use derivative::Derivative;
use slot_clock::SlotClock;
use std::time::Duration;
use strum::AsRefStr;
use types::LightClientFinalityUpdate;
/// Returned when a light client finality update was not successfully verified. It might not have
/// been verified for one of two reasons:
///
/// - The light client finality message is malformed or inappropriate for the context (indicated by
///   all variants other than `BeaconChainError`).
/// - The application encountered an internal error whilst attempting to determine validity
///   (the `BeaconChainError` variant).
#[derive(Debug, AsRefStr)]
pub enum Error {
/// The light client finality update was received before one-third of the slot duration had
/// elapsed (with respect to the gossip clock disparity and the slot clock duration).
///
/// ## Peer scoring
///
/// Assuming the local clock is correct, the peer has sent an invalid message.
TooEarly,
/// Light client finality update message does not match the locally constructed one.
InvalidLightClientFinalityUpdate,
/// Signature slot start time is none.
SigSlotStartIsNone,
/// Failed to construct a LightClientFinalityUpdate from state.
FailedConstructingUpdate,
}
/// Wraps a `LightClientFinalityUpdate` that has been verified for propagation on the gossip network.
#[derive(Derivative)]
#[derivative(Clone(bound = "T: BeaconChainTypes"))]
pub struct VerifiedLightClientFinalityUpdate<T: BeaconChainTypes> {
light_client_finality_update: LightClientFinalityUpdate<T::EthSpec>,
seen_timestamp: Duration,
}
impl<T: BeaconChainTypes> VerifiedLightClientFinalityUpdate<T> {
/// Returns `Ok(Self)` if the `light_client_finality_update` is valid to be (re)published on the gossip
/// network.
pub fn verify(
rcv_finality_update: LightClientFinalityUpdate<T::EthSpec>,
chain: &BeaconChain<T>,
seen_timestamp: Duration,
) -> Result<Self, Error> {
// verify that enough time has passed for the block to have been propagated
let start_time = chain
.slot_clock
.start_of(*rcv_finality_update.signature_slot())
.ok_or(Error::SigSlotStartIsNone)?;
let one_third_slot_duration = Duration::new(chain.spec.seconds_per_slot / 3, 0);
if seen_timestamp + chain.spec.maximum_gossip_clock_disparity()
< start_time + one_third_slot_duration
{
return Err(Error::TooEarly);
}
let latest_finality_update = chain
.light_client_server_cache
.get_latest_finality_update()
.ok_or(Error::FailedConstructingUpdate)?;
// verify that the gossiped finality update is the same as the locally constructed one.
if latest_finality_update != rcv_finality_update {
return Err(Error::InvalidLightClientFinalityUpdate);
}
Ok(Self {
light_client_finality_update: rcv_finality_update,
seen_timestamp,
})
}
}
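// A numeric sketch of the `TooEarly` inequality above, using illustrative values:
// 12-second slots and a 500ms maximum gossip clock disparity.
fn too_early_sketch() {
    let start_time = Duration::from_secs(120); // hypothetical slot start
    let one_third_slot_duration = Duration::new(12 / 3, 0);
    let maximum_gossip_clock_disparity = Duration::from_millis(500);
    // An update seen 3s into the slot is too early; one seen 4s in is acceptable.
    for offset in [3u64, 4] {
        let seen_timestamp = start_time + Duration::from_secs(offset);
        let too_early = seen_timestamp + maximum_gossip_clock_disparity
            < start_time + one_third_slot_duration;
        println!("offset {offset}s => too_early = {too_early}");
    }
}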

View File

@@ -0,0 +1,91 @@
use crate::{BeaconChain, BeaconChainTypes};
use derivative::Derivative;
use eth2::types::Hash256;
use slot_clock::SlotClock;
use std::time::Duration;
use strum::AsRefStr;
use types::LightClientOptimisticUpdate;
/// Returned when a light client optimistic update was not successfully verified. It might not have
/// been verified for one of two reasons:
///
/// - The light client optimistic message is malformed or inappropriate for the context (indicated
///   by all variants other than `BeaconChainError`).
/// - The application encountered an internal error whilst attempting to determine validity
///   (the `BeaconChainError` variant).
#[derive(Debug, AsRefStr)]
pub enum Error {
/// The light client optimistic update was received before one-third of the slot duration had
/// elapsed (with respect to the gossip clock disparity and the slot clock duration).
///
/// ## Peer scoring
///
/// Assuming the local clock is correct, the peer has sent an invalid message.
TooEarly,
/// Light client optimistic update message does not match the locally constructed one.
InvalidLightClientOptimisticUpdate,
/// Signature slot start time is none.
SigSlotStartIsNone,
/// Failed to construct a LightClientOptimisticUpdate from state.
FailedConstructingUpdate,
/// Unknown block with parent root.
UnknownBlockParentRoot(Hash256),
}
/// Wraps a `LightClientOptimisticUpdate` that has been verified for propagation on the gossip network.
#[derive(Derivative)]
#[derivative(Clone(bound = "T: BeaconChainTypes"))]
pub struct VerifiedLightClientOptimisticUpdate<T: BeaconChainTypes> {
light_client_optimistic_update: LightClientOptimisticUpdate<T::EthSpec>,
pub parent_root: Hash256,
seen_timestamp: Duration,
}
impl<T: BeaconChainTypes> VerifiedLightClientOptimisticUpdate<T> {
/// Returns `Ok(Self)` if the `light_client_optimistic_update` is valid to be (re)published on the gossip
/// network.
pub fn verify(
rcv_optimistic_update: LightClientOptimisticUpdate<T::EthSpec>,
chain: &BeaconChain<T>,
seen_timestamp: Duration,
) -> Result<Self, Error> {
// verify that enough time has passed for the block to have been propagated
let start_time = chain
.slot_clock
.start_of(*rcv_optimistic_update.signature_slot())
.ok_or(Error::SigSlotStartIsNone)?;
let one_third_slot_duration = Duration::new(chain.spec.seconds_per_slot / 3, 0);
if seen_timestamp + chain.spec.maximum_gossip_clock_disparity()
< start_time + one_third_slot_duration
{
return Err(Error::TooEarly);
}
let head = chain.canonical_head.cached_head();
let head_block = &head.snapshot.beacon_block;
// Check if we can process the optimistic update immediately, otherwise queue it.
let canonical_root = rcv_optimistic_update.get_canonical_root();
if canonical_root != head_block.message().parent_root() {
return Err(Error::UnknownBlockParentRoot(canonical_root));
}
let latest_optimistic_update = chain
.light_client_server_cache
.get_latest_optimistic_update()
.ok_or(Error::FailedConstructingUpdate)?;
// verify that the gossiped optimistic update is the same as the locally constructed one.
if latest_optimistic_update != rcv_optimistic_update {
return Err(Error::InvalidLightClientOptimisticUpdate);
}
let parent_root = rcv_optimistic_update.get_parent_root();
Ok(Self {
light_client_optimistic_update: rcv_optimistic_update,
parent_root,
seen_timestamp,
})
}
}

View File

@@ -0,0 +1,250 @@
use crate::errors::BeaconChainError;
use crate::{metrics, BeaconChainTypes, BeaconStore};
use parking_lot::{Mutex, RwLock};
use slog::{debug, Logger};
use ssz_types::FixedVector;
use std::num::NonZeroUsize;
use types::light_client_update::{FinalizedRootProofLen, FINALIZED_ROOT_INDEX};
use types::non_zero_usize::new_non_zero_usize;
use types::{
BeaconBlockRef, BeaconState, ChainSpec, EthSpec, ForkName, Hash256, LightClientFinalityUpdate,
LightClientOptimisticUpdate, Slot, SyncAggregate,
};
/// A prev block cache miss requires re-generating the state of the post-parent block. Each item
/// in the prev block cache is very small: 32 * (6 + 1) = 224 bytes (seven 32-byte hashes). The
/// cache size of 32 is an arbitrary number that covers unlikely re-org depths while keeping the
/// cache very small.
const PREV_BLOCK_CACHE_SIZE: NonZeroUsize = new_non_zero_usize(32);
/// This cache computes light client messages ahead of time, required to satisfy p2p and API
/// requests. These messages include proofs on historical states, so on-demand computation is
/// expensive.
///
pub struct LightClientServerCache<T: BeaconChainTypes> {
/// Tracks a single global latest finality update out of all imported blocks.
///
/// TODO: Active discussion with @etan-status if this cache should be fork aware to return
/// latest canonical (update with highest signature slot, where its attested header is part of
/// the head chain) instead of global latest (update with highest signature slot, out of all
/// branches).
latest_finality_update: RwLock<Option<LightClientFinalityUpdate<T::EthSpec>>>,
/// Tracks a single global latest optimistic update out of all imported blocks.
latest_optimistic_update: RwLock<Option<LightClientOptimisticUpdate<T::EthSpec>>>,
/// Caches state proofs by block root
prev_block_cache: Mutex<lru::LruCache<Hash256, LightClientCachedData>>,
}
impl<T: BeaconChainTypes> LightClientServerCache<T> {
pub fn new() -> Self {
Self {
latest_finality_update: None.into(),
latest_optimistic_update: None.into(),
prev_block_cache: lru::LruCache::new(PREV_BLOCK_CACHE_SIZE).into(),
}
}
/// Compute and cache state proofs for later production of light client messages. Does not
/// trigger block replay.
pub fn cache_state_data(
&self,
spec: &ChainSpec,
block: BeaconBlockRef<T::EthSpec>,
block_root: Hash256,
block_post_state: &mut BeaconState<T::EthSpec>,
) -> Result<(), BeaconChainError> {
let _timer = metrics::start_timer(&metrics::LIGHT_CLIENT_SERVER_CACHE_STATE_DATA_TIMES);
// Only post-altair
if spec.fork_name_at_slot::<T::EthSpec>(block.slot()) == ForkName::Base {
return Ok(());
}
// Persist in memory cache for a descendent block
let cached_data = LightClientCachedData::from_state(block_post_state)?;
self.prev_block_cache.lock().put(block_root, cached_data);
Ok(())
}
/// Given a block with a SyncAggregate, computes newer or better light client updates. The
/// results are cached in memory or on disk to be served via the p2p and REST APIs.
pub fn recompute_and_cache_updates(
&self,
store: BeaconStore<T>,
block_parent_root: &Hash256,
block_slot: Slot,
sync_aggregate: &SyncAggregate<T::EthSpec>,
log: &Logger,
chain_spec: &ChainSpec,
) -> Result<(), BeaconChainError> {
let _timer =
metrics::start_timer(&metrics::LIGHT_CLIENT_SERVER_CACHE_RECOMPUTE_UPDATES_TIMES);
let signature_slot = block_slot;
let attested_block_root = block_parent_root;
let attested_block = store.get_full_block(attested_block_root, None)?.ok_or(
BeaconChainError::DBInconsistent(format!(
"Block not available {:?}",
attested_block_root
)),
)?;
let cached_parts = self.get_or_compute_prev_block_cache(
store.clone(),
attested_block_root,
&attested_block.state_root(),
attested_block.slot(),
)?;
let attested_slot = attested_block.slot();
// Spec: Full nodes SHOULD provide the LightClientOptimisticUpdate with the highest
// attested_header.beacon.slot (if multiple, highest signature_slot) as selected by fork choice
let is_latest_optimistic = match &self.latest_optimistic_update.read().clone() {
Some(latest_optimistic_update) => {
is_latest_optimistic_update(latest_optimistic_update, attested_slot, signature_slot)
}
None => true,
};
if is_latest_optimistic {
// We can create a more recent optimistic update.
*self.latest_optimistic_update.write() = Some(LightClientOptimisticUpdate::new(
&attested_block,
sync_aggregate.clone(),
signature_slot,
chain_spec,
)?);
};
// Spec: Full nodes SHOULD provide the LightClientFinalityUpdate with the highest
// attested_header.beacon.slot (if multiple, highest signature_slot) as selected by fork choice
let is_latest_finality = match &self.latest_finality_update.read().clone() {
Some(latest_finality_update) => {
is_latest_finality_update(latest_finality_update, attested_slot, signature_slot)
}
None => true,
};
if is_latest_finality && !cached_parts.finalized_block_root.is_zero() {
// Immediately after checkpoint sync the finalized block may not be available yet.
if let Some(finalized_block) =
store.get_full_block(&cached_parts.finalized_block_root, None)?
{
*self.latest_finality_update.write() = Some(LightClientFinalityUpdate::new(
&attested_block,
&finalized_block,
cached_parts.finality_branch.clone(),
sync_aggregate.clone(),
signature_slot,
chain_spec,
)?);
} else {
debug!(
log,
"Finalized block not available in store for light_client server";
"finalized_block_root" => format!("{}", cached_parts.finalized_block_root),
);
}
}
Ok(())
}
/// Retrieves cached data for the previous block from the cache. If not present, re-computes it by
/// retrieving the parent state and inserts an entry into the cache.
///
/// This is a separate function because the `FnOnce` passed to `get_or_insert` cannot be fallible.
fn get_or_compute_prev_block_cache(
&self,
store: BeaconStore<T>,
block_root: &Hash256,
block_state_root: &Hash256,
block_slot: Slot,
) -> Result<LightClientCachedData, BeaconChainError> {
// Attempt to get the value from the cache first.
if let Some(cached_parts) = self.prev_block_cache.lock().get(block_root) {
return Ok(cached_parts.clone());
}
metrics::inc_counter(&metrics::LIGHT_CLIENT_SERVER_CACHE_PREV_BLOCK_CACHE_MISS);
// Compute the value, handling potential errors.
let mut state = store
.get_state(block_state_root, Some(block_slot))?
.ok_or_else(|| {
BeaconChainError::DBInconsistent(format!("Missing state {:?}", block_state_root))
})?;
let new_value = LightClientCachedData::from_state(&mut state)?;
// Insert value and return owned
self.prev_block_cache
.lock()
.put(*block_root, new_value.clone());
Ok(new_value)
}
pub fn get_latest_finality_update(&self) -> Option<LightClientFinalityUpdate<T::EthSpec>> {
self.latest_finality_update.read().clone()
}
pub fn get_latest_optimistic_update(&self) -> Option<LightClientOptimisticUpdate<T::EthSpec>> {
self.latest_optimistic_update.read().clone()
}
}
impl<T: BeaconChainTypes> Default for LightClientServerCache<T> {
fn default() -> Self {
Self::new()
}
}
type FinalityBranch = FixedVector<Hash256, FinalizedRootProofLen>;
#[derive(Clone)]
struct LightClientCachedData {
finality_branch: FinalityBranch,
finalized_block_root: Hash256,
}
impl LightClientCachedData {
fn from_state<E: EthSpec>(state: &mut BeaconState<E>) -> Result<Self, BeaconChainError> {
Ok(Self {
finality_branch: state.compute_merkle_proof(FINALIZED_ROOT_INDEX)?.into(),
finalized_block_root: state.finalized_checkpoint().root,
})
}
}
// Implements spec prioritization rules:
// > Full nodes SHOULD provide the LightClientFinalityUpdate with the highest attested_header.beacon.slot (if multiple, highest signature_slot)
//
// ref: https://github.com/ethereum/consensus-specs/blob/113c58f9bf9c08867f6f5f633c4d98e0364d612a/specs/altair/light-client/full-node.md#create_light_client_finality_update
fn is_latest_finality_update<E: EthSpec>(
prev: &LightClientFinalityUpdate<E>,
attested_slot: Slot,
signature_slot: Slot,
) -> bool {
let prev_slot = prev.get_attested_header_slot();
if attested_slot > prev_slot {
true
} else {
attested_slot == prev_slot && signature_slot > *prev.signature_slot()
}
}
// Implements spec prioritization rules:
// > Full nodes SHOULD provide the LightClientOptimisticUpdate with the highest attested_header.beacon.slot (if multiple, highest signature_slot)
//
// ref: https://github.com/ethereum/consensus-specs/blob/113c58f9bf9c08867f6f5f633c4d98e0364d612a/specs/altair/light-client/full-node.md#create_light_client_optimistic_update
fn is_latest_optimistic_update<E: EthSpec>(
prev: &LightClientOptimisticUpdate<E>,
attested_slot: Slot,
signature_slot: Slot,
) -> bool {
let prev_slot = prev.get_slot();
if attested_slot > prev_slot {
true
} else {
attested_slot == prev_slot && signature_slot > *prev.signature_slot()
}
}
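// A hedged distillation of the (attested_slot, signature_slot) ordering shared by the
// two helpers above; the tuple encoding and function names are illustrative assumptions.
fn update_supersedes_sketch(prev: (Slot, Slot), new: (Slot, Slot)) -> bool {
    // A higher attested slot wins outright; on a tie, the higher signature slot wins.
    new.0 > prev.0 || (new.0 == prev.0 && new.1 > prev.1)
}

fn prioritization_sketch() {
    let prev = (Slot::new(10), Slot::new(11));
    assert!(update_supersedes_sketch(prev, (Slot::new(11), Slot::new(12)))); // newer attested slot
    assert!(update_supersedes_sketch(prev, (Slot::new(10), Slot::new(12)))); // tie, newer signature slot
    assert!(!update_supersedes_sketch(prev, (Slot::new(10), Slot::new(11)))); // identical: keep prev
}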

View File

@@ -0,0 +1,281 @@
//! Provides tools for checking if a node is ready for the Bellatrix upgrade and the merge
//! transition that follows it.
use crate::{BeaconChain, BeaconChainError as Error, BeaconChainTypes};
use execution_layer::BlockByNumberQuery;
use serde::{Deserialize, Serialize, Serializer};
use slog::debug;
use std::fmt;
use std::fmt::Write;
use types::*;
/// The time before the Bellatrix fork when we will start issuing warnings about preparation.
pub const SECONDS_IN_A_WEEK: u64 = 604800;
pub const MERGE_READINESS_PREPARATION_SECONDS: u64 = SECONDS_IN_A_WEEK * 2;
#[derive(Default, Debug, Serialize, Deserialize)]
pub struct MergeConfig {
#[serde(serialize_with = "serialize_uint256")]
pub terminal_total_difficulty: Option<Uint256>,
#[serde(skip_serializing_if = "Option::is_none")]
pub terminal_block_hash: Option<ExecutionBlockHash>,
#[serde(skip_serializing_if = "Option::is_none")]
pub terminal_block_hash_epoch: Option<Epoch>,
}
impl fmt::Display for MergeConfig {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if self.terminal_block_hash.is_none()
&& self.terminal_block_hash_epoch.is_none()
&& self.terminal_total_difficulty.is_none()
{
return write!(
f,
"Merge terminal difficulty parameters not configured, check your config"
);
}
let mut display_string = String::new();
if let Some(terminal_total_difficulty) = self.terminal_total_difficulty {
write!(
display_string,
"terminal_total_difficulty: {},",
terminal_total_difficulty
)?;
}
if let Some(terminal_block_hash) = self.terminal_block_hash {
write!(
display_string,
"terminal_block_hash: {},",
terminal_block_hash
)?;
}
if let Some(terminal_block_hash_epoch) = self.terminal_block_hash_epoch {
write!(
display_string,
"terminal_block_hash_epoch: {},",
terminal_block_hash_epoch
)?;
}
write!(f, "{}", display_string.trim_end_matches(','))?;
Ok(())
}
}
impl MergeConfig {
/// Instantiate `self` from the values in a `ChainSpec`.
pub fn from_chainspec(spec: &ChainSpec) -> Self {
let mut params = MergeConfig::default();
if spec.terminal_total_difficulty != Uint256::max_value() {
params.terminal_total_difficulty = Some(spec.terminal_total_difficulty);
}
if spec.terminal_block_hash != ExecutionBlockHash::zero() {
params.terminal_block_hash = Some(spec.terminal_block_hash);
}
if spec.terminal_block_hash_activation_epoch != Epoch::max_value() {
params.terminal_block_hash_epoch = Some(spec.terminal_block_hash_activation_epoch);
}
params
}
}
/// Indicates if a node is ready for the Bellatrix upgrade and subsequent merge transition.
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
#[serde(tag = "type")]
pub enum MergeReadiness {
/// The node is ready, as far as we can tell.
Ready {
config: MergeConfig,
#[serde(serialize_with = "serialize_uint256")]
current_difficulty: Option<Uint256>,
},
/// The EL can be reached and has the correct configuration; however, it is not yet synced.
NotSynced,
/// The user has not configured this node to use an execution endpoint.
NoExecutionEndpoint,
}
impl fmt::Display for MergeReadiness {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
MergeReadiness::Ready {
config: params,
current_difficulty,
} => {
write!(
f,
"This node appears ready for the merge. \
Params: {}, current_difficulty: {:?}",
params, current_difficulty
)
}
MergeReadiness::NotSynced => write!(
f,
"The execution endpoint is connected and configured, \
however it is not yet synced"
),
MergeReadiness::NoExecutionEndpoint => write!(
f,
"The --execution-endpoint flag is not specified, this is a \
requirement for the merge"
),
}
}
}
pub enum GenesisExecutionPayloadStatus {
Correct(ExecutionBlockHash),
BlockHashMismatch {
got: ExecutionBlockHash,
expected: ExecutionBlockHash,
},
TransactionsRootMismatch {
got: Hash256,
expected: Hash256,
},
WithdrawalsRootMismatch {
got: Hash256,
expected: Hash256,
},
OtherMismatch,
Irrelevant,
AlreadyHappened,
}
impl<T: BeaconChainTypes> BeaconChain<T> {
/// Returns `true` if the user has an EL configured, or if the Bellatrix fork has occurred or will
/// occur within `MERGE_READINESS_PREPARATION_SECONDS`.
pub fn is_time_to_prepare_for_bellatrix(&self, current_slot: Slot) -> bool {
if let Some(bellatrix_epoch) = self.spec.bellatrix_fork_epoch {
let bellatrix_slot = bellatrix_epoch.start_slot(T::EthSpec::slots_per_epoch());
let merge_readiness_preparation_slots =
MERGE_READINESS_PREPARATION_SECONDS / self.spec.seconds_per_slot;
if self.execution_layer.is_some() {
// The user has already configured an execution layer, start checking for readiness
// right away.
true
} else {
// Return `true` if Bellatrix has happened or is within the preparation time.
current_slot + merge_readiness_preparation_slots > bellatrix_slot
}
} else {
// The Bellatrix fork epoch has not been defined yet, no need to prepare.
false
}
}
/// Attempts to connect to the EL and confirm that it is ready for the merge.
pub async fn check_merge_readiness(&self, current_slot: Slot) -> MergeReadiness {
if let Some(el) = self.execution_layer.as_ref() {
if !el.is_synced_for_notifier(current_slot).await {
// The EL is not synced.
return MergeReadiness::NotSynced;
}
let params = MergeConfig::from_chainspec(&self.spec);
let current_difficulty = el.get_current_difficulty().await.ok();
MergeReadiness::Ready {
config: params,
current_difficulty,
}
} else {
// There is no EL configured.
MergeReadiness::NoExecutionEndpoint
}
}
/// Check that the execution payload embedded in the genesis state matches the EL's genesis
/// block.
pub async fn check_genesis_execution_payload_is_correct(
&self,
) -> Result<GenesisExecutionPayloadStatus, Error> {
let head_snapshot = self.head_snapshot();
let genesis_state = &head_snapshot.beacon_state;
if genesis_state.slot() != 0 {
return Ok(GenesisExecutionPayloadStatus::AlreadyHappened);
}
let Ok(latest_execution_payload_header) = genesis_state.latest_execution_payload_header()
else {
return Ok(GenesisExecutionPayloadStatus::Irrelevant);
};
let fork = self.spec.fork_name_at_epoch(Epoch::new(0));
let execution_layer = self
.execution_layer
.as_ref()
.ok_or(Error::ExecutionLayerMissing)?;
let exec_block_hash = latest_execution_payload_header.block_hash();
// Use getBlockByNumber(0) to check that the block hash matches.
// At present, Geth does not respond to engine_getPayloadBodiesByRange before genesis.
let execution_block = execution_layer
.get_block_by_number(BlockByNumberQuery::Tag("0x0"))
.await
.map_err(|e| Error::ExecutionLayerGetBlockByNumberFailed(Box::new(e)))?
.ok_or(Error::BlockHashMissingFromExecutionLayer(exec_block_hash))?;
if execution_block.block_hash != exec_block_hash {
return Ok(GenesisExecutionPayloadStatus::BlockHashMismatch {
got: execution_block.block_hash,
expected: exec_block_hash,
});
}
// Double-check the block by reconstructing it.
let execution_payload = execution_layer
.get_payload_by_hash_legacy(exec_block_hash, fork)
.await
.map_err(|e| Error::ExecutionLayerGetBlockByHashFailed(Box::new(e)))?
.ok_or(Error::BlockHashMissingFromExecutionLayer(exec_block_hash))?;
// Verify payload integrity.
let header_from_payload = ExecutionPayloadHeader::from(execution_payload.to_ref());
let got_transactions_root = header_from_payload.transactions_root();
let expected_transactions_root = latest_execution_payload_header.transactions_root();
let got_withdrawals_root = header_from_payload.withdrawals_root().ok();
let expected_withdrawals_root = latest_execution_payload_header.withdrawals_root().ok();
if got_transactions_root != expected_transactions_root {
return Ok(GenesisExecutionPayloadStatus::TransactionsRootMismatch {
got: got_transactions_root,
expected: expected_transactions_root,
});
}
if let Some(&expected) = expected_withdrawals_root {
if let Some(&got) = got_withdrawals_root {
if got != expected {
return Ok(GenesisExecutionPayloadStatus::WithdrawalsRootMismatch {
got,
expected,
});
}
}
}
if header_from_payload.to_ref() != latest_execution_payload_header {
debug!(
self.log,
"Genesis execution payload reconstruction failure";
"consensus_node_header" => ?latest_execution_payload_header,
"execution_node_header" => ?header_from_payload
);
return Ok(GenesisExecutionPayloadStatus::OtherMismatch);
}
Ok(GenesisExecutionPayloadStatus::Correct(exec_block_hash))
}
}
/// Utility function to serialize a Uint256 as a decimal string.
fn serialize_uint256<S>(val: &Option<Uint256>, s: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match val {
Some(v) => v.to_string().serialize(s),
None => s.serialize_none(),
}
}
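// A hedged check of the serializer above: assuming `serde_json` is available, a
// configured terminal total difficulty renders as a decimal string (the value shown
// is mainnet's TTD, used here purely as an example).
fn serialize_uint256_sketch() {
    let config = MergeConfig {
        terminal_total_difficulty: Some(Uint256::from(58750000000000000000000u128)),
        terminal_block_hash: None,
        terminal_block_hash_epoch: None,
    };
    let json = serde_json::to_string(&config).unwrap();
    assert!(json.contains("\"58750000000000000000000\""));
}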

View File

@@ -4,11 +4,21 @@ use crate::{BeaconChain, BeaconChainError, BeaconChainTypes};
use lazy_static::lazy_static;
pub use lighthouse_metrics::*;
use slot_clock::SlotClock;
use std::time::Duration;
use types::{BeaconState, Epoch, EthSpec, Hash256, Slot};
/// The maximum time to wait for the snapshot cache lock during a metrics scrape.
const SNAPSHOT_CACHE_TIMEOUT: Duration = Duration::from_millis(100);
// Attestation simulator metrics
pub const VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_HEAD_ATTESTER_HIT_TOTAL: &str =
"validator_monitor_attestation_simulator_head_attester_hit_total";
pub const VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_HEAD_ATTESTER_MISS_TOTAL: &str =
"validator_monitor_attestation_simulator_head_attester_miss_total";
pub const VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_TARGET_ATTESTER_HIT_TOTAL: &str =
"validator_monitor_attestation_simulator_target_attester_hit_total";
pub const VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_TARGET_ATTESTER_MISS_TOTAL: &str =
"validator_monitor_attestation_simulator_target_attester_miss_total";
pub const VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_SOURCE_ATTESTER_HIT_TOTAL: &str =
"validator_monitor_attestation_simulator_source_attester_hit_total";
pub const VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_SOURCE_ATTESTER_MISS_TOTAL: &str =
"validator_monitor_attestation_simulator_source_attester_miss_total";
lazy_static! {
/*
@@ -40,6 +50,14 @@ lazy_static! {
"beacon_block_processing_block_root_seconds",
"Time spent calculating the block root when processing a block."
);
pub static ref BLOCK_HEADER_PROCESSING_BLOCK_ROOT: Result<Histogram> = try_create_histogram(
"beacon_block_header_processing_block_root_seconds",
"Time spent calculating the block root for a beacon block header."
);
pub static ref BLOCK_PROCESSING_BLOB_ROOT: Result<Histogram> = try_create_histogram(
"beacon_block_processing_blob_root_seconds",
"Time spent calculating the blob root when processing a block."
);
pub static ref BLOCK_PROCESSING_DB_READ: Result<Histogram> = try_create_histogram(
"beacon_block_processing_db_read_seconds",
"Time spent loading block and state from DB for block processing"
@@ -64,6 +82,11 @@ lazy_static! {
"beacon_block_processing_state_root_seconds",
"Time spent calculating the state root when processing a block."
);
pub static ref BLOCK_PROCESSING_POST_EXEC_PROCESSING: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_processing_post_exec_pre_attestable_seconds",
"Time between finishing execution processing and the block becoming attestable",
linear_buckets(5e-3, 5e-3, 10)
);
pub static ref BLOCK_PROCESSING_DB_WRITE: Result<Histogram> = try_create_histogram(
"beacon_block_processing_db_write_seconds",
"Time spent writing a newly processed block and state to DB"
@@ -72,6 +95,11 @@ lazy_static! {
"beacon_block_processing_attestation_observation_seconds",
"Time spent hashing and remembering all the attestations in the block"
);
pub static ref BLOCK_PROCESSING_FORK_CHOICE: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_processing_fork_choice_seconds",
"Time spent running fork choice's `get_head` during block import",
exponential_buckets(1e-3, 2.0, 8)
);
pub static ref BLOCK_SYNC_AGGREGATE_SET_BITS: Result<IntGauge> = try_create_int_gauge(
"block_sync_aggregate_set_bits",
"The number of true bits in the last sync aggregate in a block"
@@ -90,6 +118,15 @@ lazy_static! {
);
pub static ref BLOCK_PRODUCTION_TIMES: Result<Histogram> =
try_create_histogram("beacon_block_production_seconds", "Full runtime of block production");
pub static ref BLOCK_PRODUCTION_FORK_CHOICE_TIMES: Result<Histogram> = try_create_histogram(
"beacon_block_production_fork_choice_seconds",
"Time taken to run fork choice before block production"
);
pub static ref BLOCK_PRODUCTION_GET_PROPOSER_HEAD_TIMES: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_production_get_proposer_head_times",
"Time taken for fork choice to compute the proposer head before block production",
exponential_buckets(1e-3, 2.0, 8)
);
pub static ref BLOCK_PRODUCTION_STATE_LOAD_TIMES: Result<Histogram> = try_create_histogram(
"beacon_block_production_state_load_seconds",
"Time taken to load the base state for block production"
@@ -118,14 +155,17 @@ lazy_static! {
/*
* Block Statistics
*/
pub static ref OPERATIONS_PER_BLOCK_ATTESTATION: Result<Histogram> = try_create_histogram(
pub static ref OPERATIONS_PER_BLOCK_ATTESTATION: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_operations_per_block_attestation_total",
"Number of attestations in a block"
"Number of attestations in a block",
// Full block is 128.
Ok(vec![0_f64, 1_f64, 3_f64, 15_f64, 31_f64, 63_f64, 127_f64, 255_f64])
);
pub static ref BLOCK_SIZE: Result<Histogram> = try_create_histogram(
pub static ref BLOCK_SIZE: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_total_size",
"Size of a signed beacon block"
"Size of a signed beacon block",
linear_buckets(5120_f64, 5120_f64, 10)
);
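// Illustrative expansions of the bucket helpers used throughout this file
// (assuming the usual prometheus-style semantics, consistent with the inline
// bucket comments below; these lists are not generated from this code):
//   linear_buckets(5120.0, 5120.0, 10) -> [5120, 10240, 15360, ..., 51200]
//   exponential_buckets(1e-3, 2.0, 8)  -> [0.001, 0.002, 0.004, ..., 0.128]
//   decimal_buckets(-1, 2)             -> [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50]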
/*
@@ -247,6 +287,10 @@ lazy_static! {
try_create_int_counter("beacon_shuffling_cache_hits_total", "Count of times shuffling cache fulfils request");
pub static ref SHUFFLING_CACHE_MISSES: Result<IntCounter> =
try_create_int_counter("beacon_shuffling_cache_misses_total", "Count of times shuffling cache fulfils request");
pub static ref SHUFFLING_CACHE_PROMISE_HITS: Result<IntCounter> =
try_create_int_counter("beacon_shuffling_cache_promise_hits_total", "Count of times shuffling cache returns a promise to future shuffling");
pub static ref SHUFFLING_CACHE_PROMISE_FAILS: Result<IntCounter> =
try_create_int_counter("beacon_shuffling_cache_promise_fails_total", "Count of times shuffling cache detects a failed promise");
/*
* Early attester cache
@@ -256,6 +300,11 @@ lazy_static! {
"Count of times the early attester cache returns an attestation"
);
}
// Second lazy-static block is used to account for macro recursion limit.
lazy_static! {
/*
* Attestation Production
*/
@@ -275,10 +324,7 @@ lazy_static! {
"attestation_production_cache_prime_seconds",
"Time spent loading a new state from the disk due to a cache miss"
);
}
// Second lazy-static block is used to account for macro recursion limit.
lazy_static! {
/*
* Fork Choice
*/
@@ -298,14 +344,34 @@ lazy_static! {
"beacon_fork_choice_reorg_total",
"Count of occasions fork choice has switched to a different chain"
);
pub static ref FORK_CHOICE_REORG_DISTANCE: Result<IntGauge> = try_create_int_gauge(
"beacon_fork_choice_reorg_distance",
"The distance of each re-org of the fork choice algorithm"
);
pub static ref FORK_CHOICE_REORG_COUNT_INTEROP: Result<IntCounter> = try_create_int_counter(
"beacon_reorgs_total",
"Count of occasions fork choice has switched to a different chain"
);
pub static ref FORK_CHOICE_TIMES: Result<Histogram> =
try_create_histogram("beacon_fork_choice_seconds", "Full runtime of fork choice");
pub static ref FORK_CHOICE_FIND_HEAD_TIMES: Result<Histogram> =
try_create_histogram("beacon_fork_choice_find_head_seconds", "Full runtime of fork choice find_head function");
pub static ref FORK_CHOICE_TIMES: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_fork_choice_seconds",
"Full runtime of fork choice",
linear_buckets(10e-3, 20e-3, 10)
);
pub static ref FORK_CHOICE_OVERRIDE_FCU_TIMES: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_fork_choice_override_fcu_seconds",
"Time taken to compute the optional forkchoiceUpdated override",
exponential_buckets(1e-3, 2.0, 8)
);
pub static ref FORK_CHOICE_AFTER_NEW_HEAD_TIMES: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_fork_choice_after_new_head_seconds",
"Time taken to run `after_new_head`",
exponential_buckets(1e-3, 2.0, 10)
);
pub static ref FORK_CHOICE_AFTER_FINALIZATION_TIMES: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_fork_choice_after_finalization_seconds",
"Time taken to run `after_finalization`",
exponential_buckets(1e-3, 2.0, 10)
);
pub static ref FORK_CHOICE_PROCESS_BLOCK_TIMES: Result<Histogram> = try_create_histogram(
"beacon_fork_choice_process_block_seconds",
"Time taken to add a block and all attestations to fork choice"
@@ -321,7 +387,7 @@ lazy_static! {
pub static ref BALANCES_CACHE_HITS: Result<IntCounter> =
try_create_int_counter("beacon_balances_cache_hits_total", "Count of times balances cache fulfils request");
pub static ref BALANCES_CACHE_MISSES: Result<IntCounter> =
try_create_int_counter("beacon_balances_cache_misses_total", "Count of times balances cache fulfils request");
try_create_int_counter("beacon_balances_cache_misses_total", "Count of times balances cache misses request");
/*
* Persisting BeaconChain components to disk
@@ -334,6 +400,8 @@ lazy_static! {
try_create_histogram("beacon_persist_eth1_cache", "Time taken to persist the eth1 caches");
pub static ref PERSIST_FORK_CHOICE: Result<Histogram> =
try_create_histogram("beacon_persist_fork_choice", "Time taken to persist the fork choice struct");
pub static ref PERSIST_DATA_AVAILABILITY_CHECKER: Result<Histogram> =
try_create_histogram("beacon_persist_data_availability_checker", "Time taken to persist the data availability checker");
/*
* Eth1
@@ -767,31 +835,62 @@ lazy_static! {
"Number of attester slashings seen",
&["src", "validator"]
);
}
// Fourth lazy-static block is used to account for macro recursion limit.
lazy_static! {
/*
* Block Delay Metrics
*/
pub static ref BEACON_BLOCK_OBSERVED_SLOT_START_DELAY_TIME: Result<Histogram> = try_create_histogram(
pub static ref BEACON_BLOCK_OBSERVED_SLOT_START_DELAY_TIME: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_observed_slot_start_delay_time",
"Duration between the start of the block's slot and the time the block was observed.",
// [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50]
decimal_buckets(-1, 2)
);
pub static ref BEACON_BLOCK_IMPORTED_OBSERVED_DELAY_TIME: Result<Histogram> = try_create_histogram(
pub static ref BEACON_BLOCK_IMPORTED_OBSERVED_DELAY_TIME: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_imported_observed_delay_time",
"Duration between the time the block was observed and the time when it was imported.",
// [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5]
decimal_buckets(-2, 0)
);
pub static ref BEACON_BLOCK_HEAD_IMPORTED_DELAY_TIME: Result<Histogram> = try_create_histogram(
pub static ref BEACON_BLOCK_HEAD_IMPORTED_DELAY_TIME: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_head_imported_delay_time",
"Duration between the time the block was imported and the time when it was set as head.",
// [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5]
decimal_buckets(-2, -1)
);
pub static ref BEACON_BLOCK_HEAD_SLOT_START_DELAY_TIME: Result<Histogram> = try_create_histogram(
pub static ref BEACON_BLOCK_HEAD_ATTESTABLE_DELAY_TIME: Result<Histogram> = try_create_histogram(
"beacon_block_head_attestable_delay_time",
"Duration between the start of the slot and the time at which the block could be attested to.",
);
pub static ref BEACON_BLOCK_HEAD_SLOT_START_DELAY_TIME: Result<Histogram> = try_create_histogram_with_buckets(
"beacon_block_head_slot_start_delay_time",
"Duration between the start of the block's slot and the time when it was set as head.",
// [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50]
decimal_buckets(-1, 2)
);
pub static ref BEACON_BLOCK_HEAD_SLOT_START_DELAY_EXCEEDED_TOTAL: Result<IntCounter> = try_create_int_counter(
"beacon_block_head_slot_start_delay_exceeded_total",
"Triggered when the duration between the start of the block's slot and the current time \
will result in failed attestations.",
);
pub static ref BEACON_BLOCK_HEAD_MISSED_ATT_DEADLINE_LATE: Result<IntCounter> = try_create_int_counter(
"beacon_block_head_missed_att_deadline_late",
"Total number of delayed head blocks that arrived late"
);
pub static ref BEACON_BLOCK_HEAD_MISSED_ATT_DEADLINE_BORDERLINE: Result<IntCounter> = try_create_int_counter(
"beacon_block_head_missed_att_deadline_borderline",
"Total number of delayed head blocks that arrived very close to the deadline"
);
pub static ref BEACON_BLOCK_HEAD_MISSED_ATT_DEADLINE_SLOW: Result<IntCounter> = try_create_int_counter(
"beacon_block_head_missed_att_deadline_slow",
"Total number of delayed head blocks that arrived on time but not processed in time"
);
pub static ref BEACON_BLOCK_HEAD_MISSED_ATT_DEADLINE_OTHER: Result<IntCounter> = try_create_int_counter(
"beacon_block_head_missed_att_deadline_other",
"Total number of delayed head blocks that were not late and not slow to process"
);
/*
* General block metrics
@@ -801,10 +900,7 @@ lazy_static! {
"gossip_beacon_block_skipped_slots",
"For each gossip blocks, the number of skip slots between it and its parent"
);
}
// Fourth lazy-static block is used to account for macro recursion limit.
lazy_static! {
/*
* Sync Committee Message Verification
*/
@@ -820,6 +916,14 @@ lazy_static! {
"beacon_sync_committee_message_gossip_verification_seconds",
"Full runtime of sync contribution gossip verification"
);
pub static ref SYNC_MESSAGE_EQUIVOCATIONS: Result<IntCounter> = try_create_int_counter(
"sync_message_equivocations_total",
"Number of sync messages with the same validator index for different blocks"
);
pub static ref SYNC_MESSAGE_EQUIVOCATIONS_TO_HEAD: Result<IntCounter> = try_create_int_counter(
"sync_message_equivocations_to_head_total",
"Number of sync message which conflict with a previous message but elect the head"
);
/*
* Sync Committee Contribution Verification
@@ -918,6 +1022,169 @@ lazy_static! {
"beacon_pre_finalization_block_lookup_count",
"Number of block roots subject to single block lookups"
);
/*
* Blob sidecar Verification
*/
pub static ref BLOBS_SIDECAR_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
"beacon_blobs_sidecar_processing_requests_total",
"Count of all blob sidecars submitted for processing"
);
pub static ref BLOBS_SIDECAR_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
"beacon_blobs_sidecar_processing_successes_total",
"Number of blob sidecars verified for gossip"
);
pub static ref BLOBS_SIDECAR_GOSSIP_VERIFICATION_TIMES: Result<Histogram> = try_create_histogram(
"beacon_blobs_sidecar_gossip_verification_seconds",
"Full runtime of blob sidecars gossip verification"
);
pub static ref BLOB_SIDECAR_INCLUSION_PROOF_VERIFICATION: Result<Histogram> = try_create_histogram(
"blob_sidecar_inclusion_proof_verification_seconds",
"Time taken to verify blob sidecar inclusion proof"
);
pub static ref BLOB_SIDECAR_INCLUSION_PROOF_COMPUTATION: Result<Histogram> = try_create_histogram(
"blob_sidecar_inclusion_proof_computation_seconds",
"Time taken to compute blob sidecar inclusion proof"
);
}
// Fifth lazy-static block is used to account for macro recursion limit.
lazy_static! {
/*
* Light server message verification
*/
pub static ref FINALITY_UPDATE_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
"light_client_finality_update_verification_success_total",
"Number of light client finality updates verified for gossip"
);
/*
* Light server message verification
*/
pub static ref OPTIMISTIC_UPDATE_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
"light_client_optimistic_update_verification_success_total",
"Number of light client optimistic updates verified for gossip"
);
/*
* Aggregate subset metrics
*/
pub static ref SYNC_CONTRIBUTION_SUBSETS: Result<IntCounter> = try_create_int_counter(
"beacon_sync_contribution_subsets_total",
"Count of new sync contributions that are subsets of already known aggregates"
);
pub static ref AGGREGATED_ATTESTATION_SUBSETS: Result<IntCounter> = try_create_int_counter(
"beacon_aggregated_attestation_subsets_total",
"Count of new aggregated attestations that are subsets of already known aggregates"
);
/*
* Attestation simulator metrics
*/
pub static ref VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_HEAD_ATTESTER_HIT: Result<IntCounter> =
try_create_int_counter(
VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_HEAD_ATTESTER_HIT_TOTAL,
"Incremented if a validator is flagged as a previous slot head attester \
during per slot processing",
);
pub static ref VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_HEAD_ATTESTER_MISS: Result<IntCounter> =
try_create_int_counter(
VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_HEAD_ATTESTER_MISS_TOTAL,
"Incremented if a validator is not flagged as a previous slot head attester \
during per slot processing",
);
pub static ref VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_TARGET_ATTESTER_HIT: Result<IntCounter> =
try_create_int_counter(
VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_TARGET_ATTESTER_HIT_TOTAL,
"Incremented if a validator is flagged as a previous slot target attester \
during per slot processing",
);
pub static ref VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_TARGET_ATTESTER_MISS: Result<IntCounter> =
try_create_int_counter(
VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_TARGET_ATTESTER_MISS_TOTAL,
"Incremented if a validator is not flagged as a previous slot target attester \
during per slot processing",
);
pub static ref VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_SOURCE_ATTESTER_HIT: Result<IntCounter> =
try_create_int_counter(
VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_SOURCE_ATTESTER_HIT_TOTAL,
"Incremented if a validator is flagged as a previous slot source attester \
during per slot processing",
);
pub static ref VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_SOURCE_ATTESTER_MISS: Result<IntCounter> =
try_create_int_counter(
VALIDATOR_MONITOR_ATTESTATION_SIMULATOR_SOURCE_ATTESTER_MISS_TOTAL,
"Incremented if a validator is not flagged as a previous slot source attester \
during per slot processing",
);
/*
* Missed block metrics
*/
pub static ref VALIDATOR_MONITOR_MISSED_BLOCKS_TOTAL: Result<IntCounterVec> = try_create_int_counter_vec(
"validator_monitor_missed_blocks_total",
"Number of non-finalized blocks missed",
&["validator"]
);
/*
* Kzg related metrics
*/
pub static ref KZG_VERIFICATION_SINGLE_TIMES: Result<Histogram> =
try_create_histogram("kzg_verification_single_seconds", "Runtime of single kzg verification");
pub static ref KZG_VERIFICATION_BATCH_TIMES: Result<Histogram> =
try_create_histogram("kzg_verification_batch_seconds", "Runtime of batched kzg verification");
pub static ref BLOCK_PRODUCTION_BLOBS_VERIFICATION_TIMES: Result<Histogram> = try_create_histogram(
"beacon_block_production_blobs_verification_seconds",
"Time taken to verify blobs against commitments and creating BlobSidecar objects in block production"
);
/*
* Availability related metrics
*/
pub static ref BLOCK_AVAILABILITY_DELAY: Result<Histogram> = try_create_histogram_with_buckets(
"block_availability_delay",
"Duration between start of the slot and the time at which all components of the block are available.",
// Create a custom bucket list for greater granularity in block delay
Ok(vec![0.1, 0.2, 0.3, 0.4, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 15.0, 20.0])
);
/*
* Data Availability cache metrics
*/
pub static ref DATA_AVAILABILITY_PROCESSING_CACHE_SIZE: Result<IntGauge> =
try_create_int_gauge(
"data_availability_processing_cache_size",
"Number of entries in the data availability processing cache."
);
pub static ref DATA_AVAILABILITY_OVERFLOW_MEMORY_BLOCK_CACHE_SIZE: Result<IntGauge> =
try_create_int_gauge(
"data_availability_overflow_memory_block_cache_size",
"Number of entries in the data availability overflow block memory cache."
);
pub static ref DATA_AVAILABILITY_OVERFLOW_MEMORY_STATE_CACHE_SIZE: Result<IntGauge> =
try_create_int_gauge(
"data_availability_overflow_memory_state_cache_size",
"Number of entries in the data availability overflow state memory cache."
);
pub static ref DATA_AVAILABILITY_OVERFLOW_STORE_CACHE_SIZE: Result<IntGauge> =
try_create_int_gauge(
"data_availability_overflow_store_cache_size",
"Number of entries in the data availability overflow store cache."
);
/*
* light_client server metrics
*/
pub static ref LIGHT_CLIENT_SERVER_CACHE_STATE_DATA_TIMES: Result<Histogram> = try_create_histogram(
"beacon_light_client_server_cache_state_data_seconds",
"Time taken to produce and cache state data",
);
pub static ref LIGHT_CLIENT_SERVER_CACHE_RECOMPUTE_UPDATES_TIMES: Result<Histogram> = try_create_histogram(
"beacon_light_client_server_cache_recompute_updates_seconds",
"Time taken to recompute and cache updates",
);
pub static ref LIGHT_CLIENT_SERVER_CACHE_PREV_BLOCK_CACHE_MISS: Result<IntCounter> = try_create_int_counter(
"beacon_light_client_server_cache_prev_block_cache_miss",
"Count of prev block cache misses",
);
}
/// Scrape the `beacon_chain` for metrics that are not constantly updated (e.g., the present slot,
@@ -935,15 +1202,28 @@ pub fn scrape_for_metrics<T: BeaconChainTypes>(beacon_chain: &BeaconChain<T>) {
let attestation_stats = beacon_chain.op_pool.attestation_stats();
if let Some(snapshot_cache) = beacon_chain
.snapshot_cache
.try_write_for(SNAPSHOT_CACHE_TIMEOUT)
{
set_gauge(
&BLOCK_PROCESSING_SNAPSHOT_CACHE_SIZE,
snapshot_cache.len() as i64,
)
}
set_gauge_by_usize(
&BLOCK_PROCESSING_SNAPSHOT_CACHE_SIZE,
beacon_chain.store.state_cache_len(),
);
let da_checker_metrics = beacon_chain.data_availability_checker.metrics();
set_gauge_by_usize(
&DATA_AVAILABILITY_PROCESSING_CACHE_SIZE,
da_checker_metrics.processing_cache_size,
);
set_gauge_by_usize(
&DATA_AVAILABILITY_OVERFLOW_MEMORY_BLOCK_CACHE_SIZE,
da_checker_metrics.block_cache_size,
);
set_gauge_by_usize(
&DATA_AVAILABILITY_OVERFLOW_MEMORY_STATE_CACHE_SIZE,
da_checker_metrics.state_cache_size,
);
set_gauge_by_usize(
&DATA_AVAILABILITY_OVERFLOW_STORE_CACHE_SIZE,
da_checker_metrics.num_store_entries,
);
if let Some((size, num_lookups)) = beacon_chain.pre_finalization_block_cache.metrics() {
set_gauge_by_usize(&PRE_FINALIZATION_BLOCK_CACHE_SIZE, size);
@@ -986,7 +1266,7 @@ pub fn scrape_for_metrics<T: BeaconChainTypes>(beacon_chain: &BeaconChain<T>) {
}
/// Scrape the given `state` assuming it's the head state, updating the `DEFAULT_REGISTRY`.
fn scrape_head_state<T: EthSpec>(state: &BeaconState<T>, state_root: Hash256) {
fn scrape_head_state<E: EthSpec>(state: &BeaconState<E>, state_root: Hash256) {
set_gauge_by_slot(&HEAD_STATE_SLOT, state.slot());
set_gauge_by_slot(&HEAD_STATE_SLOT_INTEROP, state.slot());
set_gauge_by_hash(&HEAD_STATE_ROOT, state_root);
@@ -1055,7 +1335,7 @@ fn scrape_head_state<T: EthSpec>(state: &BeaconState<T>, state_root: Hash256) {
num_active += 1;
}
if v.slashed {
if v.slashed() {
num_slashed += 1;
}


@@ -6,7 +6,7 @@ use parking_lot::Mutex;
use slog::{debug, error, info, warn, Logger};
use std::collections::{HashMap, HashSet};
use std::mem;
use std::sync::{mpsc, Arc};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use store::hot_cold_store::{migrate_database, HotColdDBError};
@@ -24,21 +24,44 @@ const MAX_COMPACTION_PERIOD_SECONDS: u64 = 604800;
const MIN_COMPACTION_PERIOD_SECONDS: u64 = 7200;
/// Compact after a large finality gap, if we respect `MIN_COMPACTION_PERIOD_SECONDS`.
const COMPACTION_FINALITY_DISTANCE: u64 = 1024;
const BLOCKS_PER_RECONSTRUCTION: usize = 8192 * 4;
/// Default number of epochs to wait between finalization migrations.
pub const DEFAULT_EPOCHS_PER_MIGRATION: u64 = 1;
/// The background migrator runs a thread to perform pruning and migrate state from the hot
/// to the cold database.
pub struct BackgroundMigrator<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> {
db: Arc<HotColdDB<E, Hot, Cold>>,
#[allow(clippy::type_complexity)]
tx_thread: Option<Mutex<(mpsc::Sender<Notification>, thread::JoinHandle<()>)>>,
/// Record of when the last migration ran, for enforcing `epochs_per_migration`.
prev_migration: Arc<Mutex<PrevMigration>>,
tx_thread: Option<
Mutex<(
crossbeam_channel::Sender<Notification>,
thread::JoinHandle<()>,
)>,
>,
/// Genesis block root, for persisting the `PersistedBeaconChain`.
genesis_block_root: Hash256,
log: Logger,
}
#[derive(Debug, Default, Clone, PartialEq, Eq)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct MigratorConfig {
pub blocking: bool,
/// Run migrations at most once per `epochs_per_migration`.
///
/// If set to 0 or 1, then run every finalization.
pub epochs_per_migration: u64,
}
impl Default for MigratorConfig {
fn default() -> Self {
Self {
blocking: false,
epochs_per_migration: DEFAULT_EPOCHS_PER_MIGRATION,
}
}
}
impl MigratorConfig {
@@ -46,6 +69,20 @@ impl MigratorConfig {
self.blocking = true;
self
}
pub fn epochs_per_migration(mut self, epochs_per_migration: u64) -> Self {
self.epochs_per_migration = epochs_per_migration;
self
}
}
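For illustration, a hypothetical configuration built with these methods (assuming the partially-shown builder above is the usual `blocking()`): a migrator that runs inline and consolidates at most once every four finalized epochs.

let config = MigratorConfig::default()
    .blocking()
    .epochs_per_migration(4);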
/// Record of when the last migration ran.
#[derive(Debug)]
pub struct PrevMigration {
/// The epoch at which the last finalization migration ran.
epoch: Epoch,
/// The number of epochs to wait between runs.
epochs_per_migration: u64,
}
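A minimal sketch of the rate-limiting rule these fields enforce, distilled from `run_migration` below (`migration_due` itself is hypothetical and does not exist in the code):

fn migration_due(prev: &PrevMigration, new_finalized_epoch: Epoch) -> bool {
    // With the default `epochs_per_migration` of 1 (or with 0), every new
    // finalized epoch is due; larger values skip intermediate finalizations.
    new_finalized_epoch >= prev.epoch + prev.epochs_per_migration
}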
/// Pruning can be successful, or in rare cases deferred to a later point.
@@ -55,7 +92,13 @@ pub enum PruningOutcome {
Successful {
old_finalized_checkpoint: Checkpoint,
},
DeferredConcurrentMutation,
/// The run was aborted because the new finalized checkpoint is older than the previous one.
OutOfOrderFinalization {
old_finalized_checkpoint: Checkpoint,
new_finalized_checkpoint: Checkpoint,
},
/// The run was aborted due to a concurrent mutation of the head tracker.
DeferredConcurrentHeadTrackerMutation,
}
/// Logic errors that can occur during pruning, none of these should ever happen.
@@ -68,23 +111,46 @@ pub enum PruningError {
MissingInfoForCanonicalChain {
slot: Slot,
},
FinalizedStateOutOfOrder {
old_finalized_checkpoint: Checkpoint,
new_finalized_checkpoint: Checkpoint,
},
UnexpectedEqualStateRoots,
UnexpectedUnequalStateRoots,
}
/// Message sent to the migration thread containing the information it needs to run.
#[derive(Debug)]
pub enum Notification {
Finalization(FinalizationNotification),
Reconstruction,
PruneBlobs(Epoch),
}
#[derive(Clone, Debug)]
pub struct FinalizationNotification {
finalized_state_root: BeaconStateHash,
finalized_checkpoint: Checkpoint,
head_tracker: Arc<HeadTracker>,
prev_migration: Arc<Mutex<PrevMigration>>,
genesis_block_root: Hash256,
}
/*
impl Notification {
pub fn epoch(&self) -> Option<Epoch> {
match self {
Notification::Finalization(FinalizationNotification {
finalized_checkpoint,
..
}) => Some(finalized_checkpoint.epoch),
Notification::Reconstruction => None,
Notification::PruneBlobs(_) => None,
}
}
}
*/
impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Hot, Cold> {
/// Create a new `BackgroundMigrator` and spawn its thread if necessary.
pub fn new(
@@ -93,6 +159,11 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
genesis_block_root: Hash256,
log: Logger,
) -> Self {
// Estimate last migration run from DB split slot.
let prev_migration = Arc::new(Mutex::new(PrevMigration {
epoch: db.get_split_slot().epoch(E::slots_per_epoch()),
epochs_per_migration: config.epochs_per_migration,
}));
let tx_thread = if config.blocking {
None
} else {
@@ -101,6 +172,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
Self {
db,
tx_thread,
prev_migration,
genesis_block_root,
log,
}
@@ -121,6 +193,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
finalized_state_root,
finalized_checkpoint,
head_tracker,
prev_migration: self.prev_migration.clone(),
genesis_block_root: self.genesis_block_root,
};
@@ -142,8 +215,16 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
}
}
pub fn process_prune_blobs(&self, data_availability_boundary: Epoch) {
if let Some(Notification::PruneBlobs(data_availability_boundary)) =
self.send_background_notification(Notification::PruneBlobs(data_availability_boundary))
{
Self::run_prune_blobs(self.db.clone(), data_availability_boundary, &self.log);
}
}
pub fn run_reconstruction(db: Arc<HotColdDB<E, Hot, Cold>>, log: &Logger) {
if let Err(e) = db.reconstruct_historic_states() {
if let Err(e) = db.reconstruct_historic_states(None) {
error!(
log,
"State reconstruction failed";
@@ -152,6 +233,20 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
}
}
pub fn run_prune_blobs(
db: Arc<HotColdDB<E, Hot, Cold>>,
data_availability_boundary: Epoch,
log: &Logger,
) {
if let Err(e) = db.try_prune_blobs(false, data_availability_boundary) {
error!(
log,
"Blob pruning failed";
"error" => ?e,
);
}
}
/// If configured to run in the background, send `notif` to the background thread.
///
/// Return `None` if the message was sent to the background thread, `Some(notif)` otherwise.
@@ -194,9 +289,31 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
notif: FinalizationNotification,
log: &Logger,
) {
// Do not run too frequently.
let epoch = notif.finalized_checkpoint.epoch;
let mut prev_migration = notif.prev_migration.lock();
if epoch < prev_migration.epoch + prev_migration.epochs_per_migration {
debug!(
log,
"Database consolidation deferred";
"last_finalized_epoch" => prev_migration.epoch,
"new_finalized_epoch" => epoch,
"epochs_per_migration" => prev_migration.epochs_per_migration,
);
return;
}
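// Illustrative timeline (assumption, not derived from this code): with
// `epochs_per_migration = 4` and a previous run at epoch 100, notifications
// for epochs 101-103 are deferred here and the next run happens at epoch 104.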
// Update the previous migration epoch immediately to avoid holding the lock. If this
// migration doesn't succeed then it will be retried at the next scheduled run.
prev_migration.epoch = epoch;
drop(prev_migration);
debug!(log, "Database consolidation started");
let timer = std::time::Instant::now();
let finalized_state_root = notif.finalized_state_root;
let finalized_block_root = notif.finalized_checkpoint.root;
let finalized_state = match db.get_state(&finalized_state_root.into(), None) {
Ok(Some(state)) => state,
@@ -223,7 +340,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
Ok(PruningOutcome::Successful {
old_finalized_checkpoint,
}) => old_finalized_checkpoint,
Ok(PruningOutcome::DeferredConcurrentMutation) => {
Ok(PruningOutcome::DeferredConcurrentHeadTrackerMutation) => {
warn!(
log,
"Pruning deferred because of a concurrent mutation";
@@ -231,18 +348,36 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
);
return;
}
Ok(PruningOutcome::OutOfOrderFinalization {
old_finalized_checkpoint,
new_finalized_checkpoint,
}) => {
warn!(
log,
"Ignoring out of order finalization request";
"old_finalized_epoch" => old_finalized_checkpoint.epoch,
"new_finalized_epoch" => new_finalized_checkpoint.epoch,
"message" => "this is expected occasionally due to a (harmless) race condition"
);
return;
}
Err(e) => {
warn!(log, "Block pruning failed"; "error" => format!("{:?}", e));
warn!(log, "Block pruning failed"; "error" => ?e);
return;
}
};
match migrate_database(db.clone(), finalized_state_root.into(), &finalized_state) {
match migrate_database(
db.clone(),
finalized_state_root.into(),
finalized_block_root,
&finalized_state,
) {
Ok(()) => {}
Err(Error::HotColdDBError(HotColdDBError::FreezeSlotUnaligned(slot))) => {
debug!(
log,
"Database migration postponed, unaligned finalized block";
"Database migration postponed due to unaligned finalized block";
"slot" => slot.as_u64()
);
}
@@ -250,7 +385,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
warn!(
log,
"Database migration failed";
"error" => format!("{:?}", e)
"error" => ?e
);
return;
}
@@ -263,10 +398,14 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
notif.finalized_checkpoint.epoch,
log,
) {
warn!(log, "Database compaction failed"; "error" => format!("{:?}", e));
warn!(log, "Database compaction failed"; "error" => ?e);
}
debug!(log, "Database consolidation complete");
debug!(
log,
"Database consolidation complete";
"running_time_ms" => timer.elapsed().as_millis()
);
}
/// Spawn a new child thread to run the migration process.
@@ -275,33 +414,85 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
fn spawn_thread(
db: Arc<HotColdDB<E, Hot, Cold>>,
log: Logger,
) -> (mpsc::Sender<Notification>, thread::JoinHandle<()>) {
let (tx, rx) = mpsc::channel();
) -> (
crossbeam_channel::Sender<Notification>,
thread::JoinHandle<()>,
) {
let (tx, rx) = crossbeam_channel::unbounded();
let tx_thread = tx.clone();
let thread = thread::spawn(move || {
while let Ok(notif) = rx.recv() {
// Read the rest of the messages in the channel, preferring any reconstruction
// notification, or the finalization notification with the greatest finalized epoch.
let notif =
rx.try_iter()
.fold(notif, |best, other: Notification| match (&best, &other) {
(Notification::Reconstruction, _)
| (_, Notification::Reconstruction) => Notification::Reconstruction,
(
Notification::Finalization(fin1),
Notification::Finalization(fin2),
) => {
if fin2.finalized_checkpoint.epoch > fin1.finalized_checkpoint.epoch
{
other
} else {
best
}
}
});
let mut sel = crossbeam_channel::Select::new();
sel.recv(&rx);
match notif {
Notification::Reconstruction => Self::run_reconstruction(db.clone(), &log),
Notification::Finalization(fin) => Self::run_migration(db.clone(), fin, &log),
loop {
// Block until something is in the queue.
let _queue_size = sel.ready();
let queue: Vec<Notification> = rx.try_iter().collect();
debug!(
log,
"New worker thread poll";
"queue" => ?queue
);
// Find a reconstruction notification and best finalization notification.
let reconstruction_notif = queue
.iter()
.find(|n| matches!(n, Notification::Reconstruction));
let migrate_notif = queue
.iter()
.filter_map(|n| match n {
Notification::Finalization(f) => Some(f),
_ => None,
})
.max_by_key(|f| f.finalized_checkpoint.epoch)
.cloned();
let prune_blobs_notif = queue
.iter()
.filter_map(|n| match n {
Notification::PruneBlobs(dab) => Some(*dab),
_ => None,
})
.max();
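// Illustrative collapse of a backlog (assumption): a queue of
// [Finalization(epoch 5), PruneBlobs(7), Finalization(epoch 6)] results in
// one migration at epoch 6 followed by one blob prune at boundary epoch 7.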
// Do a bit of state reconstruction first if required.
if reconstruction_notif.is_some() {
let timer = std::time::Instant::now();
match db.reconstruct_historic_states(Some(BLOCKS_PER_RECONSTRUCTION)) {
Err(Error::StateReconstructionDidNotComplete) => {
info!(
log,
"Finished reconstruction batch";
"batch_time_ms" => timer.elapsed().as_millis()
);
// Ignore the send error, which can only occur if the channel is disconnected.
let _ = tx_thread.send(Notification::Reconstruction);
}
Err(e) => {
error!(
log,
"State reconstruction failed";
"error" => ?e,
);
}
Ok(()) => {
info!(
log,
"Finished state reconstruction";
"batch_time_ms" => timer.elapsed().as_millis()
);
}
}
}
// Do the finalization migration.
if let Some(notif) = migrate_notif {
Self::run_migration(db.clone(), notif, &log);
}
// Prune blobs.
if let Some(dab) = prune_blobs_notif {
Self::run_prune_blobs(db.clone(), dab, &log);
}
}
});
@@ -321,13 +512,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
genesis_block_root: Hash256,
log: &Logger,
) -> Result<PruningOutcome, BeaconChainError> {
let old_finalized_checkpoint =
store
.load_pruning_checkpoint()?
.unwrap_or_else(|| Checkpoint {
epoch: Epoch::new(0),
root: Hash256::zero(),
});
let old_finalized_checkpoint = store.get_pruning_checkpoint();
let old_finalized_slot = old_finalized_checkpoint
.epoch
@@ -347,6 +532,16 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
.into());
}
// The new finalized state must be newer than the previous finalized state.
// I think this can happen sometimes currently due to `fork_choice` running in parallel
// with itself and sending us notifications out of order.
if old_finalized_slot > new_finalized_slot {
return Ok(PruningOutcome::OutOfOrderFinalization {
old_finalized_checkpoint,
new_finalized_checkpoint,
});
}
debug!(
log,
"Starting database pruning";
@@ -371,18 +566,32 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
})
.collect::<Result<_, _>>()?;
// Quick sanity check. If the canonical block & state roots are incorrect then we could
// incorrectly delete canonical states, which would corrupt the database.
let expected_canonical_block_roots = new_finalized_slot
.saturating_sub(old_finalized_slot)
.as_usize()
.saturating_add(1);
if newly_finalized_chain.len() != expected_canonical_block_roots {
return Err(BeaconChainError::DBInconsistent(format!(
"canonical chain iterator is corrupt; \
expected {} but got {} block roots",
expected_canonical_block_roots,
newly_finalized_chain.len()
)));
}
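// Worked example (illustrative): with old_finalized_slot = 64 and
// new_finalized_slot = 96, the canonical chain iterator must yield
// 96 - 64 + 1 = 33 block roots, one per slot including both endpoints.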
// We don't know which blocks are shared among abandoned chains, so we buffer and delete
// everything in one fell swoop.
let mut abandoned_blocks: HashSet<SignedBeaconBlockHash> = HashSet::new();
let mut abandoned_states: HashSet<(Slot, BeaconStateHash)> = HashSet::new();
let mut abandoned_heads: HashSet<Hash256> = HashSet::new();
let heads = head_tracker.heads();
debug!(
log,
"Extra pruning information";
"old_finalized_root" => format!("{:?}", old_finalized_checkpoint.root),
"new_finalized_root" => format!("{:?}", new_finalized_checkpoint.root),
"old_finalized_root" => ?old_finalized_checkpoint.root,
"new_finalized_root" => ?new_finalized_checkpoint.root,
"head_count" => heads.len(),
);
@@ -391,7 +600,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
// so delete it from the head tracker but leave it and its states in the database
// This is suboptimal as it wastes disk space, but it's difficult to fix. A re-sync
// can be used to reclaim the space.
let head_state_root = match store.get_block(&head_hash) {
let head_state_root = match store.get_blinded_block(&head_hash, Some(head_slot)) {
Ok(Some(block)) => block.state_root(),
Ok(None) => {
return Err(BeaconStateError::MissingBeaconBlock(head_hash.into()).into())
@@ -410,14 +619,15 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
};
let mut potentially_abandoned_head = Some(head_hash);
let mut potentially_abandoned_blocks = vec![];
let mut potentially_abandoned_blocks = HashSet::new();
// Iterate backwards from this head, staging blocks and states for deletion.
// Iterate backwards from this head, staging blocks for deletion.
let iter = std::iter::once(Ok((head_hash, head_state_root, head_slot)))
.chain(RootsIterator::from_block(&store, head_hash)?);
for maybe_tuple in iter {
let (block_root, state_root, slot) = maybe_tuple?;
let block_root = SignedBeaconBlockHash::from(block_root);
let state_root = BeaconStateHash::from(state_root);
@@ -427,11 +637,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
// the fork's block and state for possible deletion.
None => {
if slot > new_finalized_slot {
potentially_abandoned_blocks.push((
slot,
Some(block_root),
Some(state_root),
));
potentially_abandoned_blocks.insert(block_root);
} else if slot >= old_finalized_slot {
return Err(PruningError::MissingInfoForCanonicalChain { slot }.into());
} else {
@@ -441,7 +647,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
warn!(
log,
"Found a chain that should already have been pruned";
"head_block_root" => format!("{:?}", head_hash),
"head_block_root" => ?head_hash,
"head_slot" => head_slot,
);
potentially_abandoned_head.take();
@@ -469,26 +675,14 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
// we will have just staged the common block for deletion.
// Unstage it.
else {
for (_, block_root, _) in
potentially_abandoned_blocks.iter_mut().rev()
{
if block_root.as_ref() == Some(finalized_block_root) {
*block_root = None;
} else {
break;
}
}
potentially_abandoned_blocks.remove(finalized_block_root);
}
break;
} else {
if state_root == *finalized_state_root {
return Err(PruningError::UnexpectedEqualStateRoots.into());
}
potentially_abandoned_blocks.push((
slot,
Some(block_root),
Some(state_root),
));
potentially_abandoned_blocks.insert(block_root);
}
}
}
@@ -498,18 +692,11 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
debug!(
log,
"Pruning head";
"head_block_root" => format!("{:?}", abandoned_head),
"head_block_root" => ?abandoned_head,
"head_slot" => head_slot,
);
abandoned_heads.insert(abandoned_head);
abandoned_blocks.extend(
potentially_abandoned_blocks
.iter()
.filter_map(|(_, maybe_block_hash, _)| *maybe_block_hash),
);
abandoned_states.extend(potentially_abandoned_blocks.iter().filter_map(
|(slot, _, maybe_state_hash)| maybe_state_hash.map(|sr| (*slot, sr)),
));
abandoned_blocks.extend(potentially_abandoned_blocks);
}
}
@@ -523,7 +710,7 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
// later.
for head_hash in &abandoned_heads {
if !head_tracker_lock.contains_key(head_hash) {
return Ok(PruningOutcome::DeferredConcurrentMutation);
return Ok(PruningOutcome::DeferredConcurrentHeadTrackerMutation);
}
}
@@ -532,34 +719,76 @@ impl<E: EthSpec, Hot: ItemStore<E>, Cold: ItemStore<E>> BackgroundMigrator<E, Ho
head_tracker_lock.remove(&head_hash);
}
let batch: Vec<StoreOp<E>> = abandoned_blocks
let num_deleted_blocks = abandoned_blocks.len();
let mut batch: Vec<StoreOp<E>> = abandoned_blocks
.into_iter()
.map(Into::into)
.map(StoreOp::DeleteBlock)
.chain(
abandoned_states
.into_iter()
.map(|(slot, state_hash)| StoreOp::DeleteState(state_hash.into(), Some(slot))),
)
.flat_map(|block_root: Hash256| {
[
StoreOp::DeleteBlock(block_root),
StoreOp::DeleteExecutionPayload(block_root),
StoreOp::DeleteBlobs(block_root),
]
})
.collect();
let mut kv_batch = store.convert_to_kv_batch(&batch)?;
// Persist the head in case the process is killed or crashes here. This prevents
// the head tracker reverting after our mutation above.
let persisted_head = PersistedBeaconChain {
_canonical_head_block_root: DUMMY_CANONICAL_HEAD_BLOCK_ROOT,
genesis_block_root,
ssz_head_tracker: SszHeadTracker::from_map(&*head_tracker_lock),
ssz_head_tracker: SszHeadTracker::from_map(&head_tracker_lock),
};
drop(head_tracker_lock);
kv_batch.push(persisted_head.as_kv_store_op(BEACON_CHAIN_DB_KEY));
batch.push(StoreOp::KeyValueOp(
persisted_head.as_kv_store_op(BEACON_CHAIN_DB_KEY),
));
// Persist the new finalized checkpoint as the pruning checkpoint.
kv_batch.push(store.pruning_checkpoint_store_op(new_finalized_checkpoint));
store.do_atomically_with_block_and_blobs_cache(batch)?;
debug!(
log,
"Database block pruning complete";
"num_deleted_blocks" => num_deleted_blocks,
);
store.hot_db.do_atomically(kv_batch)?;
debug!(log, "Database pruning complete");
// Do a separate pass to clean up irrelevant states.
let mut state_delete_batch = vec![];
for res in store.iter_hot_state_summaries() {
let (state_root, summary) = res?;
if summary.slot <= new_finalized_slot {
// If state root doesn't match state root from canonical chain, then delete.
// We may also find older states here that should have been deleted by `migrate_database`
// but weren't due to wonky I/O atomicity.
if newly_finalized_chain
.get(&summary.slot)
.map_or(true, |(_, canonical_state_root)| {
state_root != Hash256::from(*canonical_state_root)
})
{
let reason = if summary.slot < old_finalized_slot {
"old dangling state"
} else {
"non-canonical"
};
debug!(
log,
"Deleting state";
"state_root" => ?state_root,
"slot" => summary.slot,
"reason" => reason,
);
state_delete_batch.push(StoreOp::DeleteState(state_root, Some(summary.slot)));
}
}
}
let num_deleted_states = state_delete_batch.len();
store.do_atomically_with_block_and_blobs_cache(state_delete_batch)?;
debug!(
log,
"Database state pruning complete";
"num_deleted_states" => num_deleted_states,
);
Ok(PruningOutcome::Successful {
old_finalized_checkpoint,


@@ -402,7 +402,7 @@ impl<T: AggregateMap> NaiveAggregationPool<T> {
/// Returns the total number of items stored in `self`.
pub fn num_items(&self) -> usize {
self.maps.iter().map(|(_, map)| map.len()).sum()
self.maps.values().map(T::len).sum()
}
/// Returns an aggregated `T::Value` with the given `T::Data`, if any.
@@ -421,10 +421,7 @@ impl<T: AggregateMap> NaiveAggregationPool<T> {
/// Iterate all items in all slots of `self`.
pub fn iter(&self) -> impl Iterator<Item = &T::Value> {
self.maps
.iter()
.map(|(_slot, map)| map.get_map().iter().map(|(_key, value)| value))
.flatten()
self.maps.values().flat_map(|map| map.get_map().values())
}
/// Removes any items with a slot lower than `current_slot` and bars any future
@@ -451,11 +448,7 @@ impl<T: AggregateMap> NaiveAggregationPool<T> {
// If we have too many maps, remove the lowest amount to ensure we only have
// `SLOTS_RETAINED` left.
if self.maps.len() > SLOTS_RETAINED {
let mut slots = self
.maps
.iter()
.map(|(slot, _map)| *slot)
.collect::<Vec<_>>();
let mut slots = self.maps.keys().copied().collect::<Vec<_>>();
// Sort is generally pretty slow, however `SLOTS_RETAINED` is quite low so it should be
// negligible.
slots.sort_unstable();


@@ -1,7 +1,9 @@
//! Provides an `ObservedAggregates` struct which allows us to reject aggregated attestations or
//! sync committee contributions if we've already seen them.
use std::collections::HashSet;
use crate::sync_committee_verification::SyncCommitteeData;
use ssz_types::{BitList, BitVector};
use std::collections::HashMap;
use std::marker::PhantomData;
use tree_hash::TreeHash;
use types::consts::altair::{
@@ -10,8 +12,16 @@ use types::consts::altair::{
use types::slot_data::SlotData;
use types::{Attestation, EthSpec, Hash256, Slot, SyncCommitteeContribution};
pub type ObservedSyncContributions<E> = ObservedAggregates<SyncCommitteeContribution<E>, E>;
pub type ObservedAggregateAttestations<E> = ObservedAggregates<Attestation<E>, E>;
pub type ObservedSyncContributions<E> = ObservedAggregates<
SyncCommitteeContribution<E>,
E,
BitVector<<E as types::EthSpec>::SyncSubcommitteeSize>,
>;
pub type ObservedAggregateAttestations<E> = ObservedAggregates<
Attestation<E>,
E,
BitList<<E as types::EthSpec>::MaxValidatorsPerCommittee>,
>;
/// A trait use to associate capacity constants with the type being stored in `ObservedAggregates`.
pub trait Consts {
@@ -25,7 +35,7 @@ pub trait Consts {
fn max_per_slot_capacity() -> usize;
}
impl<T: EthSpec> Consts for Attestation<T> {
impl<E: EthSpec> Consts for Attestation<E> {
/// Use 128 as it's the target committee size for the mainnet spec. This is perhaps a little
/// wasteful for the minimal spec, but considering it's approx. 128 * 32 bytes we're not wasting
/// much.
@@ -33,7 +43,7 @@ impl<T: EthSpec> Consts for Attestation<T> {
/// We need to keep attestations for each slot of the current epoch.
fn max_slot_capacity() -> usize {
T::slots_per_epoch() as usize
2 * E::slots_per_epoch() as usize
}
/// As a DoS protection measure, the maximum number of distinct `Attestations` or
@@ -52,7 +62,7 @@ impl<T: EthSpec> Consts for Attestation<T> {
}
}
impl<T: EthSpec> Consts for SyncCommitteeContribution<T> {
impl<E: EthSpec> Consts for SyncCommitteeContribution<E> {
/// Set to `TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE * SYNC_COMMITTEE_SUBNET_COUNT`. This is the
/// expected number of aggregators per slot across all subcommittees.
const DEFAULT_PER_SLOT_CAPACITY: usize =
@@ -65,14 +75,85 @@ impl<T: EthSpec> Consts for SyncCommitteeContribution<T> {
/// We should never receive more aggregates than there are sync committee participants.
fn max_per_slot_capacity() -> usize {
T::sync_committee_size()
E::sync_committee_size()
}
}
/// A trait for types that implement a behaviour where one object of that type
/// can be a subset/superset of another.
/// This trait allows us to be generic over the aggregate item that we store in the cache that
/// we want to prevent duplicates/subsets for.
pub trait SubsetItem {
/// The item that is stored for later comparison with new incoming aggregate items.
type Item;
/// Returns `true` if `self` is a non-strict subset of `other` and `false` otherwise.
fn is_subset(&self, other: &Self::Item) -> bool;
/// Returns `true` if `self` is a non-strict superset of `other` and `false` otherwise.
fn is_superset(&self, other: &Self::Item) -> bool;
/// Returns the item that gets stored in `ObservedAggregates` for later subset
/// comparison with incoming aggregates.
fn get_item(&self) -> Self::Item;
/// Returns a unique value that keys the object to the item that is being stored
/// in `ObservedAggregates`.
fn root(&self) -> Hash256;
}
impl<E: EthSpec> SubsetItem for Attestation<E> {
type Item = BitList<E::MaxValidatorsPerCommittee>;
fn is_subset(&self, other: &Self::Item) -> bool {
self.aggregation_bits.is_subset(other)
}
fn is_superset(&self, other: &Self::Item) -> bool {
other.is_subset(&self.aggregation_bits)
}
/// Returns the attestation aggregation bits.
fn get_item(&self) -> Self::Item {
self.aggregation_bits.clone()
}
/// Returns the hash tree root of the attestation data.
fn root(&self) -> Hash256 {
self.data.tree_hash_root()
}
}
impl<E: EthSpec> SubsetItem for SyncCommitteeContribution<E> {
type Item = BitVector<E::SyncSubcommitteeSize>;
fn is_subset(&self, other: &Self::Item) -> bool {
self.aggregation_bits.is_subset(other)
}
fn is_superset(&self, other: &Self::Item) -> bool {
other.is_subset(&self.aggregation_bits)
}
/// Returns the sync contribution aggregation bits.
fn get_item(&self) -> Self::Item {
self.aggregation_bits.clone()
}
/// Returns the hash tree root of the root, slot and subcommittee index
/// of the sync contribution.
fn root(&self) -> Hash256 {
SyncCommitteeData {
root: self.beacon_block_root,
slot: self.slot,
subcommittee_index: self.subcommittee_index,
}
.tree_hash_root()
}
}
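As a standalone illustration of the subset semantics these impls rely on (a sketch assuming `ssz_types`' `is_subset` means every bit set in `self` is also set in `other`, as the calls above imply):

use ssz_types::{typenum::U8, BitList};

fn subset_demo() {
    let mut sub = BitList::<U8>::with_capacity(8).unwrap();
    let mut sup = BitList::<U8>::with_capacity(8).unwrap();
    sub.set(1, true).unwrap();
    sup.set(1, true).unwrap();
    sup.set(3, true).unwrap();
    // `sub`'s participation is covered by `sup`, so a cache already holding
    // `sup` would classify `sub` as ObserveOutcome::Subset.
    assert!(sub.is_subset(&sup));
    assert!(!sup.is_subset(&sub));
}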
#[derive(Debug, PartialEq)]
pub enum ObserveOutcome {
/// This item was already known.
AlreadyKnown,
/// This item is a non-strict subset of an already known item.
Subset,
/// This was the first time this item was observed.
New,
}
@@ -94,26 +175,28 @@ pub enum Error {
},
}
/// A `HashSet` that contains entries related to some `Slot`.
struct SlotHashSet {
set: HashSet<Hash256>,
/// A `HashMap` that contains entries related to some `Slot`.
struct SlotHashSet<I> {
/// Contains a vector of maximally-sized aggregation bitfields/bitvectors
/// such that no bitfield/bitvector is a subset of any other in the list.
map: HashMap<Hash256, Vec<I>>,
slot: Slot,
max_capacity: usize,
}
impl SlotHashSet {
impl<I> SlotHashSet<I> {
pub fn new(slot: Slot, initial_capacity: usize, max_capacity: usize) -> Self {
Self {
slot,
set: HashSet::with_capacity(initial_capacity),
map: HashMap::with_capacity(initial_capacity),
max_capacity,
}
}
/// Store the item in `self` so future observations recognise its existence.
pub fn observe_item<T: SlotData>(
pub fn observe_item<S: SlotData + SubsetItem<Item = I>>(
&mut self,
item: &T,
item: &S,
root: Hash256,
) -> Result<ObserveOutcome, Error> {
if item.get_slot() != self.slot {
@@ -123,29 +206,45 @@ impl SlotHashSet {
});
}
if self.set.contains(&root) {
Ok(ObserveOutcome::AlreadyKnown)
} else {
// Here we check to see if this slot has reached the maximum observation count.
//
// The resulting behaviour is that we are no longer able to successfully observe new
// items, however we will continue to return `is_known` values. We could also
// disable `is_known`, however then we would stop forwarding items across the
// gossip network and I think that this is a worse case than sending some invalid ones.
// The underlying libp2p network is responsible for removing duplicate messages, so
// this doesn't risk a broadcast loop.
if self.set.len() >= self.max_capacity {
return Err(Error::ReachedMaxObservationsPerSlot(self.max_capacity));
if let Some(aggregates) = self.map.get_mut(&root) {
for existing in aggregates {
// Check if `item` is a subset of any of the observed aggregates
if item.is_subset(existing) {
return Ok(ObserveOutcome::Subset);
// Check if `item` is a superset of any of the observed aggregates
// If true, we replace the existing subset with the new superset. This allows
// us to hold fewer items in the list.
} else if item.is_superset(existing) {
*existing = item.get_item();
return Ok(ObserveOutcome::New);
}
}
self.set.insert(root);
Ok(ObserveOutcome::New)
}
// Here we check to see if this slot has reached the maximum observation count.
//
// The resulting behaviour is that we are no longer able to successfully observe new
// items, however we will continue to return `is_known_subset` values. We could also
// disable `is_known_subset`, however then we would stop forwarding items across the
// gossip network and I think that this is a worse case than sending some invalid ones.
// The underlying libp2p network is responsible for removing duplicate messages, so
// this doesn't risk a broadcast loop.
if self.map.len() >= self.max_capacity {
return Err(Error::ReachedMaxObservationsPerSlot(self.max_capacity));
}
let item = item.get_item();
self.map.entry(root).or_default().push(item);
Ok(ObserveOutcome::New)
}
/// Indicates if `item` has been observed before.
pub fn is_known<T: SlotData>(&self, item: &T, root: Hash256) -> Result<bool, Error> {
/// Check if `item` is a non-strict subset of any of the already observed aggregates for
/// the given root and slot.
pub fn is_known_subset<S: SlotData + SubsetItem<Item = I>>(
&self,
item: &S,
root: Hash256,
) -> Result<bool, Error> {
if item.get_slot() != self.slot {
return Err(Error::IncorrectSlot {
expected: self.slot,
@@ -153,25 +252,28 @@ impl SlotHashSet {
});
}
Ok(self.set.contains(&root))
Ok(self
.map
.get(&root)
.map_or(false, |agg| agg.iter().any(|val| item.is_subset(val))))
}
/// The number of observed items in `self`.
pub fn len(&self) -> usize {
self.set.len()
self.map.len()
}
}
/// Stores the roots of objects for some number of `Slots`, so we can determine if
/// these have previously been seen on the network.
pub struct ObservedAggregates<T: TreeHash + SlotData + Consts, E: EthSpec> {
pub struct ObservedAggregates<T: SlotData + Consts, E: EthSpec, I> {
lowest_permissible_slot: Slot,
sets: Vec<SlotHashSet>,
sets: Vec<SlotHashSet<I>>,
_phantom_spec: PhantomData<E>,
_phantom_tree_hash: PhantomData<T>,
}
impl<T: TreeHash + SlotData + Consts, E: EthSpec> Default for ObservedAggregates<T, E> {
impl<T: SlotData + Consts, E: EthSpec, I> Default for ObservedAggregates<T, E, I> {
fn default() -> Self {
Self {
lowest_permissible_slot: Slot::new(0),
@@ -182,17 +284,17 @@ impl<T: TreeHash + SlotData + Consts, E: EthSpec> Default for ObservedAggregates
}
}
impl<T: TreeHash + SlotData + Consts, E: EthSpec> ObservedAggregates<T, E> {
/// Store the root of `item` in `self`.
impl<T: SlotData + Consts + SubsetItem<Item = I>, E: EthSpec, I> ObservedAggregates<T, E, I> {
/// Store `item` in `self` keyed at `root`.
///
/// `root` must equal `item.tree_hash_root()`.
/// `root` must equal `item.root::<SubsetItem>()`.
pub fn observe_item(
&mut self,
item: &T,
root_opt: Option<Hash256>,
) -> Result<ObserveOutcome, Error> {
let index = self.get_set_index(item.get_slot())?;
let root = root_opt.unwrap_or_else(|| item.tree_hash_root());
let root = root_opt.unwrap_or_else(|| item.root());
self.sets
.get_mut(index)
@@ -200,16 +302,18 @@ impl<T: TreeHash + SlotData + Consts, E: EthSpec> ObservedAggregates<T, E> {
.and_then(|set| set.observe_item(item, root))
}
/// Check to see if the `root` of `item` is in self.
/// Check if `item` is a non-strict subset of any of the already observed aggregates for
/// the given root and slot.
///
/// `root` must equal `a.tree_hash_root()`.
pub fn is_known(&mut self, item: &T, root: Hash256) -> Result<bool, Error> {
/// `root` must equal `item.root::<SubsetItem>()`.
#[allow(clippy::wrong_self_convention)]
pub fn is_known_subset(&mut self, item: &T, root: Hash256) -> Result<bool, Error> {
let index = self.get_set_index(item.get_slot())?;
self.sets
.get(index)
.ok_or(Error::InvalidSetIndex(index))
.and_then(|set| set.is_known(item, root))
.and_then(|set| set.is_known_subset(item, root))
}
/// The maximum number of slots that items are stored for.
@@ -295,7 +399,6 @@ impl<T: TreeHash + SlotData + Consts, E: EthSpec> ObservedAggregates<T, E> {
#[cfg(not(debug_assertions))]
mod tests {
use super::*;
use tree_hash::TreeHash;
use types::{test_utils::test_random_instance, Hash256};
type E = types::MainnetEthSpec;
@@ -329,7 +432,7 @@ mod tests {
for a in &items {
assert_eq!(
store.is_known(a, a.tree_hash_root()),
store.is_known_subset(a, a.root()),
Ok(false),
"should indicate an unknown attestation is unknown"
);
@@ -342,13 +445,13 @@ mod tests {
for a in &items {
assert_eq!(
store.is_known(a, a.tree_hash_root()),
store.is_known_subset(a, a.root()),
Ok(true),
"should indicate a known attestation is known"
);
assert_eq!(
store.observe_item(a, Some(a.tree_hash_root())),
Ok(ObserveOutcome::AlreadyKnown),
store.observe_item(a, Some(a.root())),
Ok(ObserveOutcome::Subset),
"should acknowledge an existing attestation"
);
}


@@ -20,29 +20,27 @@ use std::collections::{HashMap, HashSet};
use std::hash::Hash;
use std::marker::PhantomData;
use types::slot_data::SlotData;
use types::{Epoch, EthSpec, Slot, Unsigned};
use types::{Epoch, EthSpec, Hash256, Slot, Unsigned};
/// The maximum capacity of the `AutoPruningEpochContainer`.
///
/// Fits the next, current and previous epochs. We require the next epoch due to the
/// `MAXIMUM_GOSSIP_CLOCK_DISPARITY`. We require the previous epoch since the specification
/// declares:
/// If the current epoch is N, this fits epoch N + 1, N, N - 1, and N - 2. We require the next epoch due
/// to the `MAXIMUM_GOSSIP_CLOCK_DISPARITY`. We require the N - 2 epoch since the specification declares:
///
/// ```ignore
/// aggregate.data.slot + ATTESTATION_PROPAGATION_SLOT_RANGE
/// >= current_slot >= aggregate.data.slot
/// the epoch of `aggregate.data.slot` is either the current or previous epoch
/// ```
///
/// This means that during the current epoch we will always accept an attestation
/// from at least one slot in the previous epoch.
pub const MAX_CACHED_EPOCHS: u64 = 3;
/// This means that during the current epoch we will always accept an attestation from
/// at least one slot in the epoch prior to the previous epoch.
pub const MAX_CACHED_EPOCHS: u64 = 4;
pub type ObservedAttesters<E> = AutoPruningEpochContainer<EpochBitfield, E>;
pub type ObservedSyncContributors<E> =
AutoPruningSlotContainer<SlotSubcommitteeIndex, SyncContributorSlotHashSet<E>, E>;
AutoPruningSlotContainer<SlotSubcommitteeIndex, Hash256, SyncContributorSlotHashSet<E>, E>;
pub type ObservedAggregators<E> = AutoPruningEpochContainer<EpochHashSet, E>;
pub type ObservedSyncAggregators<E> =
AutoPruningSlotContainer<SlotSubcommitteeIndex, SyncAggregatorSlotHashSet, E>;
AutoPruningSlotContainer<SlotSubcommitteeIndex, (), SyncAggregatorSlotHashSet, E>;
#[derive(Debug, PartialEq)]
pub enum Error {
@@ -62,7 +60,7 @@ pub enum Error {
}
/// Implemented on an item in an `AutoPruningContainer`.
pub trait Item {
pub trait Item<T> {
/// Instantiate `Self` with the given `capacity`.
fn with_capacity(capacity: usize) -> Self;
@@ -75,11 +73,11 @@ pub trait Item {
/// Returns the number of validators that have been observed by `self`.
fn validator_count(&self) -> usize;
/// Store `validator_index` in `self`.
fn insert(&mut self, validator_index: usize) -> bool;
/// Store `validator_index` and `value` in `self`.
fn insert(&mut self, validator_index: usize, value: T) -> bool;
/// Returns `true` if `validator_index` has been stored in `self`.
fn contains(&self, validator_index: usize) -> bool;
/// Returns `Some(T)` if there is an entry for `validator_index`.
fn get(&self, validator_index: usize) -> Option<T>;
}
/// Stores a `BitVec` that represents which validator indices have attested or sent sync committee
@@ -88,7 +86,7 @@ pub struct EpochBitfield {
bitfield: BitVec,
}
impl Item for EpochBitfield {
impl Item<()> for EpochBitfield {
fn with_capacity(capacity: usize) -> Self {
Self {
bitfield: BitVec::with_capacity(capacity),
@@ -108,7 +106,7 @@ impl Item for EpochBitfield {
self.bitfield.iter().filter(|bit| **bit).count()
}
fn insert(&mut self, validator_index: usize) -> bool {
fn insert(&mut self, validator_index: usize, _value: ()) -> bool {
self.bitfield
.get_mut(validator_index)
.map(|mut bit| {
@@ -129,8 +127,11 @@ impl Item for EpochBitfield {
})
}
fn contains(&self, validator_index: usize) -> bool {
self.bitfield.get(validator_index).map_or(false, |bit| *bit)
fn get(&self, validator_index: usize) -> Option<()> {
self.bitfield
.get(validator_index)
.map_or(false, |bit| *bit)
.then_some(())
}
}
@@ -140,7 +141,7 @@ pub struct EpochHashSet {
set: HashSet<usize>,
}
impl Item for EpochHashSet {
impl Item<()> for EpochHashSet {
fn with_capacity(capacity: usize) -> Self {
Self {
set: HashSet::with_capacity(capacity),
@@ -163,27 +164,27 @@ impl Item for EpochHashSet {
/// Inserts the `validator_index` in the set. Returns `true` if the `validator_index` was
/// already in the set.
fn insert(&mut self, validator_index: usize) -> bool {
fn insert(&mut self, validator_index: usize, _value: ()) -> bool {
!self.set.insert(validator_index)
}
/// Returns `true` if the `validator_index` is in the set.
fn contains(&self, validator_index: usize) -> bool {
self.set.contains(&validator_index)
fn get(&self, validator_index: usize) -> Option<()> {
self.set.contains(&validator_index).then_some(())
}
}
/// Stores a `HashSet` of which validator indices have created a sync aggregate during a
/// slot.
pub struct SyncContributorSlotHashSet<E> {
set: HashSet<usize>,
map: HashMap<usize, Hash256>,
phantom: PhantomData<E>,
}
impl<E: EthSpec> Item for SyncContributorSlotHashSet<E> {
impl<E: EthSpec> Item<Hash256> for SyncContributorSlotHashSet<E> {
fn with_capacity(capacity: usize) -> Self {
Self {
set: HashSet::with_capacity(capacity),
map: HashMap::with_capacity(capacity),
phantom: PhantomData,
}
}
@@ -194,22 +195,24 @@ impl<E: EthSpec> Item for SyncContributorSlotHashSet<E> {
}
fn len(&self) -> usize {
self.set.len()
self.map.len()
}
fn validator_count(&self) -> usize {
self.set.len()
self.map.len()
}
/// Inserts the `validator_index` in the set. Returns `true` if the `validator_index` was
/// already in the set.
fn insert(&mut self, validator_index: usize) -> bool {
!self.set.insert(validator_index)
fn insert(&mut self, validator_index: usize, beacon_block_root: Hash256) -> bool {
self.map
.insert(validator_index, beacon_block_root)
.is_some()
}
/// Returns `true` if the `validator_index` is in the set.
fn contains(&self, validator_index: usize) -> bool {
self.set.contains(&validator_index)
fn get(&self, validator_index: usize) -> Option<Hash256> {
self.map.get(&validator_index).copied()
}
}
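A minimal sketch of the new per-validator value semantics (illustrative only; it assumes the `Item` trait above is in scope and uses `MainnetEthSpec` as a concrete spec):

```rust
use types::{Hash256, MainnetEthSpec};

fn main() {
    let mut set = SyncContributorSlotHashSet::<MainnetEthSpec>::with_capacity(8);
    // First insertion: validator 0 had not been observed, so `insert` is false.
    assert!(!set.insert(0, Hash256::zero()));
    // Re-inserting returns true and replaces the stored block root.
    assert!(set.insert(0, Hash256::repeat_byte(1)));
    assert_eq!(set.get(0), Some(Hash256::repeat_byte(1)));
}
```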
@@ -219,7 +222,7 @@ pub struct SyncAggregatorSlotHashSet {
set: HashSet<usize>,
}
impl Item for SyncAggregatorSlotHashSet {
impl Item<()> for SyncAggregatorSlotHashSet {
fn with_capacity(capacity: usize) -> Self {
Self {
set: HashSet::with_capacity(capacity),
@@ -241,13 +244,13 @@ impl Item for SyncAggregatorSlotHashSet {
/// Inserts the `validator_index` in the set. Returns `true` if the `validator_index` was
/// already in the set.
fn insert(&mut self, validator_index: usize) -> bool {
fn insert(&mut self, validator_index: usize, _value: ()) -> bool {
!self.set.insert(validator_index)
}
/// Returns `true` if the `validator_index` is in the set.
fn contains(&self, validator_index: usize) -> bool {
self.set.contains(&validator_index)
fn get(&self, validator_index: usize) -> Option<()> {
self.set.contains(&validator_index).then_some(())
}
}
@@ -275,7 +278,7 @@ impl<T, E: EthSpec> Default for AutoPruningEpochContainer<T, E> {
}
}
impl<T: Item, E: EthSpec> AutoPruningEpochContainer<T, E> {
impl<T: Item<()>, E: EthSpec> AutoPruningEpochContainer<T, E> {
/// Observe that `validator_index` has produced attestation `a`. Returns `Ok(true)` if `a` has
/// previously been observed for `validator_index`.
///
@@ -293,7 +296,7 @@ impl<T: Item, E: EthSpec> AutoPruningEpochContainer<T, E> {
self.prune(epoch);
if let Some(item) = self.items.get_mut(&epoch) {
Ok(item.insert(validator_index))
Ok(item.insert(validator_index, ()))
} else {
// To avoid re-allocations, try and determine a rough initial capacity for the new item
// by obtaining the mean size of all items in earlier epoch.
@@ -309,7 +312,7 @@ impl<T: Item, E: EthSpec> AutoPruningEpochContainer<T, E> {
let initial_capacity = sum.checked_div(count).unwrap_or_else(T::default_capacity);
let mut item = T::with_capacity(initial_capacity);
item.insert(validator_index);
item.insert(validator_index, ());
self.items.insert(epoch, item);
Ok(false)
@@ -333,7 +336,7 @@ impl<T: Item, E: EthSpec> AutoPruningEpochContainer<T, E> {
let exists = self
.items
.get(&epoch)
.map_or(false, |item| item.contains(validator_index));
.map_or(false, |item| item.get(validator_index).is_some());
Ok(exists)
}
@@ -392,7 +395,7 @@ impl<T: Item, E: EthSpec> AutoPruningEpochContainer<T, E> {
pub fn index_seen_at_epoch(&self, index: usize, epoch: Epoch) -> bool {
self.items
.get(&epoch)
.map(|item| item.contains(index))
.map(|item| item.get(index).is_some())
.unwrap_or(false)
}
}
@@ -405,23 +408,63 @@ impl<T: Item, E: EthSpec> AutoPruningEpochContainer<T, E> {
/// sync contributions with a slot prior to `data.slot - 3` will be cleared from the cache.
///
/// `V` should be set to a `SyncAggregatorSlotHashSet` or a `SyncContributorSlotHashSet`.
pub struct AutoPruningSlotContainer<K: SlotData + Eq + Hash, V, E: EthSpec> {
pub struct AutoPruningSlotContainer<K: SlotData + Eq + Hash, S, V, E: EthSpec> {
lowest_permissible_slot: Slot,
items: HashMap<K, V>,
_phantom: PhantomData<E>,
_phantom_e: PhantomData<E>,
_phantom_s: PhantomData<S>,
}
impl<K: SlotData + Eq + Hash, V, E: EthSpec> Default for AutoPruningSlotContainer<K, V, E> {
impl<K: SlotData + Eq + Hash, S, V, E: EthSpec> Default for AutoPruningSlotContainer<K, S, V, E> {
fn default() -> Self {
Self {
lowest_permissible_slot: Slot::new(0),
items: HashMap::new(),
_phantom: PhantomData,
_phantom_e: PhantomData,
_phantom_s: PhantomData,
}
}
}
impl<K: SlotData + Eq + Hash, V: Item, E: EthSpec> AutoPruningSlotContainer<K, V, E> {
impl<K: SlotData + Eq + Hash + Copy, S, V: Item<S>, E: EthSpec>
AutoPruningSlotContainer<K, S, V, E>
{
/// Observes the given `value` for the given `validator_index`.
///
/// The `override_observation` function is supplied `previous_observation`
/// and `value`. If it returns `true`, then any existing observation will be
/// overridden.
///
/// This function returns `Some` if:
/// - An observation already existed for the validator, AND,
/// - The `override_observation` function returned `false`.
///
/// Alternatively, it returns `None` if:
/// - An observation did not already exist for the given validator, OR,
/// - The `override_observation` function returned `true`.
pub fn observe_validator_with_override<F>(
&mut self,
key: K,
validator_index: usize,
value: S,
override_observation: F,
) -> Result<Option<S>, Error>
where
F: Fn(&S, &S) -> bool,
{
if let Some(prev_observation) = self.observation_for_validator(key, validator_index)? {
if override_observation(&prev_observation, &value) {
self.observe_validator(key, validator_index, value)?;
Ok(None)
} else {
Ok(Some(prev_observation))
}
} else {
self.observe_validator(key, validator_index, value)?;
Ok(None)
}
}
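A hedged usage sketch of the override hook; the replace-on-conflict policy below is hypothetical and not taken from this change:

```rust
use types::{EthSpec, Hash256, Slot};

// Returns `Some(prev_root)` when an earlier conflicting observation was kept.
fn record_contribution<E: EthSpec>(
    contributors: &mut AutoPruningSlotContainer<Slot, Hash256, SyncContributorSlotHashSet<E>, E>,
    slot: Slot,
    validator_index: usize,
    root: Hash256,
) -> Result<Option<Hash256>, Error> {
    contributors.observe_validator_with_override(slot, validator_index, root, |prev, new| {
        prev != new // hypothetical policy: override whenever the roots differ
    })
}
```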
/// Observe that `validator_index` has produced a sync committee message. Returns `Ok(true)` if
/// the sync committee message has previously been observed for `validator_index`.
///
@@ -429,14 +472,19 @@ impl<K: SlotData + Eq + Hash, V: Item, E: EthSpec> AutoPruningSlotContainer<K, V
///
/// - `validator_index` is higher than `VALIDATOR_REGISTRY_LIMIT`.
/// - `key.slot` is earlier than `self.lowest_permissible_slot`.
pub fn observe_validator(&mut self, key: K, validator_index: usize) -> Result<bool, Error> {
pub fn observe_validator(
&mut self,
key: K,
validator_index: usize,
value: S,
) -> Result<bool, Error> {
let slot = key.get_slot();
self.sanitize_request(slot, validator_index)?;
self.prune(slot);
if let Some(item) = self.items.get_mut(&key) {
Ok(item.insert(validator_index))
Ok(item.insert(validator_index, value))
} else {
// To avoid re-allocations, try and determine a rough initial capacity for the new item
// by obtaining the mean size of all items in earlier slot.
@@ -452,32 +500,45 @@ impl<K: SlotData + Eq + Hash, V: Item, E: EthSpec> AutoPruningSlotContainer<K, V
let initial_capacity = sum.checked_div(count).unwrap_or_else(V::default_capacity);
let mut item = V::with_capacity(initial_capacity);
item.insert(validator_index);
item.insert(validator_index, value);
self.items.insert(key, item);
Ok(false)
}
}
/// Returns `Ok(true)` if the `validator_index` has already produced a conflicting sync committee message.
///
/// ## Errors
///
/// - `validator_index` is higher than `VALIDATOR_REGISTRY_LIMIT`.
/// - `key.slot` is earlier than `self.lowest_permissible_slot`.
// Identical to `Self::observation_for_validator` but discards the
// observation, simply returning `true` if the validator has been observed
// at all.
pub fn validator_has_been_observed(
&self,
key: K,
validator_index: usize,
) -> Result<bool, Error> {
self.observation_for_validator(key, validator_index)
.map(|observation| observation.is_some())
}
/// Returns `Ok(Some)` if the `validator_index` has already produced a
/// conflicting sync committee message.
///
/// ## Errors
///
/// - `validator_index` is higher than `VALIDATOR_REGISTRY_LIMIT`.
/// - `key.slot` is earlier than `self.lowest_permissible_slot`.
pub fn observation_for_validator(
&self,
key: K,
validator_index: usize,
) -> Result<Option<S>, Error> {
self.sanitize_request(key.get_slot(), validator_index)?;
let exists = self
let observation = self
.items
.get(&key)
.map_or(false, |item| item.contains(validator_index));
.and_then(|item| item.get(validator_index));
Ok(exists)
Ok(observation)
}
/// Returns the number of validators that have been observed at the given `slot`. Returns
@@ -561,6 +622,116 @@ mod tests {
type E = types::MainnetEthSpec;
#[test]
fn value_storage() {
type Container = AutoPruningSlotContainer<Slot, Hash256, SyncContributorSlotHashSet<E>, E>;
let mut store: Container = <_>::default();
let key = Slot::new(0);
let validator_index = 0;
let value = Hash256::zero();
// Assert there is no entry.
assert!(store
.observation_for_validator(key, validator_index)
.unwrap()
.is_none());
assert!(!store
.validator_has_been_observed(key, validator_index)
.unwrap());
// Add an entry.
assert!(!store
.observe_validator(key, validator_index, value)
.unwrap());
// Assert there is a correct entry.
assert_eq!(
store
.observation_for_validator(key, validator_index)
.unwrap(),
Some(value)
);
assert!(store
.validator_has_been_observed(key, validator_index)
.unwrap());
let alternate_value = Hash256::from_low_u64_be(1);
// Assert that override false does not override.
assert_eq!(
store
.observe_validator_with_override(key, validator_index, alternate_value, |_, _| {
false
})
.unwrap(),
Some(value)
);
// Assert that override true overrides and acts as if there was never an
// entry there.
assert_eq!(
store
.observe_validator_with_override(key, validator_index, alternate_value, |_, _| {
true
})
.unwrap(),
None
);
assert_eq!(
store
.observation_for_validator(key, validator_index)
.unwrap(),
Some(alternate_value)
);
// Reset the store.
let mut store: Container = <_>::default();
// Assert that a new entry with override = false is inserted
assert_eq!(
store
.observation_for_validator(key, validator_index)
.unwrap(),
None
);
assert_eq!(
store
.observe_validator_with_override(key, validator_index, value, |_, _| { false })
.unwrap(),
None,
);
assert_eq!(
store
.observation_for_validator(key, validator_index)
.unwrap(),
Some(value)
);
// Reset the store.
let mut store: Container = <_>::default();
// Assert that a new entry with override = true is inserted
assert_eq!(
store
.observation_for_validator(key, validator_index)
.unwrap(),
None
);
assert_eq!(
store
.observe_validator_with_override(key, validator_index, value, |_, _| { true })
.unwrap(),
None,
);
assert_eq!(
store
.observation_for_validator(key, validator_index)
.unwrap(),
Some(value)
);
}
macro_rules! test_suite_epoch {
($mod_name: ident, $type: ident) => {
#[cfg(test)]
@@ -668,7 +839,7 @@ mod tests {
let mut store = $type::default();
let max_cap = store.max_capacity();
let to_skip = vec![1_u64, 3, 4, 5];
let to_skip = [1_u64, 3, 4, 5];
let periods = (0..max_cap * 3)
.into_iter()
.filter(|i| !to_skip.contains(i))
@@ -722,7 +893,7 @@ mod tests {
test_suite_epoch!(observed_aggregators, ObservedAggregators);
macro_rules! test_suite_slot {
($mod_name: ident, $type: ident) => {
($mod_name: ident, $type: ident, $value: expr) => {
#[cfg(test)]
mod $mod_name {
use super::*;
@@ -737,7 +908,7 @@ mod tests {
"should indicate an unknown item is unknown"
);
assert_eq!(
store.observe_validator(key, i),
store.observe_validator(key, i, $value),
Ok(false),
"should observe new item"
);
@@ -750,7 +921,7 @@ mod tests {
"should indicate a known item is known"
);
assert_eq!(
store.observe_validator(key, i),
store.observe_validator(key, i, $value),
Ok(true),
"should acknowledge an existing item"
);
@@ -839,7 +1010,7 @@ mod tests {
let mut store = $type::default();
let max_cap = store.max_capacity();
let to_skip = vec![1_u64, 3, 4, 5];
let to_skip = [1_u64, 3, 4, 5];
let periods = (0..max_cap * 3)
.into_iter()
.filter(|i| !to_skip.contains(i))
@@ -948,7 +1119,7 @@ mod tests {
let mut store = $type::default();
let max_cap = store.max_capacity();
let to_skip = vec![1_u64, 3, 4, 5];
let to_skip = [1_u64, 3, 4, 5];
let periods = (0..max_cap * 3)
.into_iter()
.filter(|i| !to_skip.contains(i))
@@ -997,6 +1168,10 @@ mod tests {
}
};
}
test_suite_slot!(observed_sync_contributors, ObservedSyncContributors);
test_suite_slot!(observed_sync_aggregators, ObservedSyncAggregators);
test_suite_slot!(
observed_sync_contributors,
ObservedSyncContributors,
Hash256::zero()
);
test_suite_slot!(observed_sync_aggregators, ObservedSyncAggregators, ());
}

View File

@@ -0,0 +1,430 @@
//! Provides the `ObservedBlobSidecars` struct which allows for rejecting `BlobSidecar`s
//! that we have already seen over the gossip network.
//! Only `BlobSidecar`s that have completed proposer signature verification can be added
//! to this cache to reduce DoS risks.
use crate::observed_block_producers::ProposalKey;
use std::collections::{HashMap, HashSet};
use std::marker::PhantomData;
use types::{BlobSidecar, EthSpec, Slot};
#[derive(Debug, PartialEq)]
pub enum Error {
/// The slot of the provided `BlobSidecar` is prior to finalization and should not have been provided
/// to this function. This is an internal error.
FinalizedBlob { slot: Slot, finalized_slot: Slot },
/// The blob sidecar contains an invalid blob index, the blob sidecar is invalid.
/// Note: The invalid blob should have been caught and flagged as an error much before reaching
/// here.
InvalidBlobIndex(u64),
}
/// Maintains a cache of seen `BlobSidecar`s that are received over gossip
/// and have been gossip verified.
///
/// The cache supports pruning based upon the finalized epoch. It does not automatically prune, you
/// must call `Self::prune` manually.
///
/// Note: To prevent DoS attacks, this cache must only include items that have already passed a
/// DoS-resistant check, such as proposer signature verification.
pub struct ObservedBlobSidecars<E: EthSpec> {
finalized_slot: Slot,
/// Stores all received blob indices for a given `(ValidatorIndex, Slot)` tuple.
items: HashMap<ProposalKey, HashSet<u64>>,
_phantom: PhantomData<E>,
}
impl<E: EthSpec> Default for ObservedBlobSidecars<E> {
/// Instantiates `Self` with `finalized_slot == 0`.
fn default() -> Self {
Self {
finalized_slot: Slot::new(0),
items: HashMap::new(),
_phantom: PhantomData,
}
}
}
impl<E: EthSpec> ObservedBlobSidecars<E> {
/// Observe the `blob_sidecar` at (`blob_sidecar.block_proposer_index, blob_sidecar.slot`).
/// This will update `self` so future calls to it indicate that this `blob_sidecar` is known.
///
/// The supplied `blob_sidecar` **MUST** have completed proposer signature verification.
pub fn observe_sidecar(&mut self, blob_sidecar: &BlobSidecar<E>) -> Result<bool, Error> {
self.sanitize_blob_sidecar(blob_sidecar)?;
let blob_indices = self
.items
.entry(ProposalKey {
slot: blob_sidecar.slot(),
proposer: blob_sidecar.block_proposer_index(),
})
.or_insert_with(|| HashSet::with_capacity(E::max_blobs_per_block()));
let did_not_exist = blob_indices.insert(blob_sidecar.index);
Ok(!did_not_exist)
}
/// Returns `true` if the `blob_sidecar` has already been observed in the cache within the prune window.
pub fn proposer_is_known(&self, blob_sidecar: &BlobSidecar<E>) -> Result<bool, Error> {
self.sanitize_blob_sidecar(blob_sidecar)?;
let is_known = self
.items
.get(&ProposalKey {
slot: blob_sidecar.slot(),
proposer: blob_sidecar.block_proposer_index(),
})
.map_or(false, |blob_indices| {
blob_indices.contains(&blob_sidecar.index)
});
Ok(is_known)
}
fn sanitize_blob_sidecar(&self, blob_sidecar: &BlobSidecar<E>) -> Result<(), Error> {
if blob_sidecar.index >= E::max_blobs_per_block() as u64 {
return Err(Error::InvalidBlobIndex(blob_sidecar.index));
}
let finalized_slot = self.finalized_slot;
if finalized_slot > 0 && blob_sidecar.slot() <= finalized_slot {
return Err(Error::FinalizedBlob {
slot: blob_sidecar.slot(),
finalized_slot,
});
}
Ok(())
}
/// Prune `blob_sidecar` observations for slots less than or equal to the given slot.
pub fn prune(&mut self, finalized_slot: Slot) {
if finalized_slot == 0 {
return;
}
self.finalized_slot = finalized_slot;
self.items.retain(|k, _| k.slot > finalized_slot);
}
}
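A minimal usage sketch (the sidecar is assumed to have already passed gossip verification, as the struct docs require; `check_and_record` is an illustrative wrapper):

```rust
use types::{BlobSidecar, MainnetEthSpec};

fn check_and_record(
    cache: &mut ObservedBlobSidecars<MainnetEthSpec>,
    sidecar: &BlobSidecar<MainnetEthSpec>,
) -> Result<bool, Error> {
    // `true` means this (proposer, slot, blob index) was already seen, so the
    // caller can drop the message as a duplicate.
    let duplicate = cache.observe_sidecar(sidecar)?;
    debug_assert!(cache.proposer_is_known(sidecar)?);
    Ok(duplicate)
}
```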
#[cfg(test)]
mod tests {
use super::*;
use bls::Hash256;
use std::sync::Arc;
use types::MainnetEthSpec;
type E = MainnetEthSpec;
fn get_blob_sidecar(slot: u64, proposer_index: u64, index: u64) -> Arc<BlobSidecar<E>> {
let mut blob_sidecar = BlobSidecar::empty();
blob_sidecar.signed_block_header.message.slot = slot.into();
blob_sidecar.signed_block_header.message.proposer_index = proposer_index;
blob_sidecar.index = index;
Arc::new(blob_sidecar)
}
#[test]
fn pruning() {
let mut cache = ObservedBlobSidecars::default();
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 0, "no slots should be present");
// Slot 0, index 0
let proposer_index_a = 420;
let sidecar_a = get_blob_sidecar(0, proposer_index_a, 0);
assert_eq!(
cache.observe_sidecar(&sidecar_a),
Ok(false),
"can observe proposer, indicates proposer unobserved"
);
/*
* Preconditions.
*/
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(
cache.items.len(),
1,
"only one (validator_index, slot) tuple should be present"
);
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_a, Slot::new(0)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune at the genesis slot does nothing.
*/
cache.prune(Slot::new(0));
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 1, "only one slot should be present");
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_a, Slot::new(0)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune empties the cache
*/
cache.prune(E::slots_per_epoch().into());
assert_eq!(
cache.finalized_slot,
Slot::from(E::slots_per_epoch()),
"finalized slot is updated"
);
assert_eq!(cache.items.len(), 0, "no items left");
/*
* Check that we can't insert a finalized sidecar
*/
// First slot of finalized epoch
let block_b = get_blob_sidecar(E::slots_per_epoch(), 419, 0);
assert_eq!(
cache.observe_sidecar(&block_b),
Err(Error::FinalizedBlob {
slot: E::slots_per_epoch().into(),
finalized_slot: E::slots_per_epoch().into(),
}),
"cant insert finalized sidecar"
);
assert_eq!(cache.items.len(), 0, "sidecar was not added");
/*
* Check that we _can_ insert a non-finalized block
*/
let three_epochs = E::slots_per_epoch() * 3;
// First slot of the third epoch
let proposer_index_b = 421;
let block_b = get_blob_sidecar(three_epochs, proposer_index_b, 0);
assert_eq!(
cache.observe_sidecar(&block_b),
Ok(false),
"can insert non-finalized block"
);
assert_eq!(cache.items.len(), 1, "only one slot should be present");
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_b, Slot::new(three_epochs)))
.expect("the three epochs slot should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune doesn't wipe later blocks
*/
let two_epochs = E::slots_per_epoch() * 2;
cache.prune(two_epochs.into());
assert_eq!(
cache.finalized_slot,
Slot::from(two_epochs),
"finalized slot is updated"
);
assert_eq!(cache.items.len(), 1, "only one slot should be present");
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_b, Slot::new(three_epochs)))
.expect("the three epochs slot should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present"
);
}
#[test]
fn simple_observations() {
let mut cache = ObservedBlobSidecars::default();
// Slot 0, index 0
let proposer_index_a = 420;
let sidecar_a = get_blob_sidecar(0, proposer_index_a, 0);
assert_eq!(
cache.proposer_is_known(&sidecar_a),
Ok(false),
"no observation in empty cache"
);
assert_eq!(
cache.observe_sidecar(&sidecar_a),
Ok(false),
"can observe proposer, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_is_known(&sidecar_a),
Ok(true),
"observed block is indicated as true"
);
assert_eq!(
cache.observe_sidecar(&sidecar_a),
Ok(true),
"observing again indicates true"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 1, "only one slot should be present");
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_a, Slot::new(0)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present"
);
// Slot 1, index 0
let proposer_index_b = 421;
let sidecar_b = get_blob_sidecar(1, proposer_index_b, 0);
assert_eq!(
cache.proposer_is_known(&sidecar_b),
Ok(false),
"no observation for new slot"
);
assert_eq!(
cache.observe_sidecar(&sidecar_b),
Ok(false),
"can observe proposer for new slot, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_is_known(&sidecar_b),
Ok(true),
"observed block in slot 1 is indicated as true"
);
assert_eq!(
cache.observe_sidecar(&sidecar_b),
Ok(true),
"observing slot 1 again indicates true"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 2, "two slots should be present");
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_a, Slot::new(0)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present in slot 0"
);
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_b, Slot::new(1)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
1,
"only one proposer should be present in slot 1"
);
// Slot 0, index 1
let sidecar_c = get_blob_sidecar(0, proposer_index_a, 1);
assert_eq!(
cache.proposer_is_known(&sidecar_c),
Ok(false),
"no observation for new index"
);
assert_eq!(
cache.observe_sidecar(&sidecar_c),
Ok(false),
"can observe new index, indicates sidecar unobserved for new index"
);
assert_eq!(
cache.proposer_is_known(&sidecar_c),
Ok(true),
"observed new sidecar is indicated as true"
);
assert_eq!(
cache.observe_sidecar(&sidecar_c),
Ok(true),
"observing new sidecar again indicates true"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 2, "two slots should be present");
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_a, Slot::new(0)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
2,
"two blob indices should be present in slot 0"
);
// Create a sidecar sharing slot and proposer but with a different block root.
let mut sidecar_d: BlobSidecar<E> = BlobSidecar {
index: sidecar_c.index,
blob: sidecar_c.blob.clone(),
kzg_commitment: sidecar_c.kzg_commitment,
kzg_proof: sidecar_c.kzg_proof,
signed_block_header: sidecar_c.signed_block_header.clone(),
kzg_commitment_inclusion_proof: sidecar_c.kzg_commitment_inclusion_proof.clone(),
};
sidecar_d.signed_block_header.message.body_root = Hash256::repeat_byte(7);
assert_eq!(
cache.proposer_is_known(&sidecar_d),
Ok(true),
"there has been an observation for this proposer index"
);
assert_eq!(
cache.observe_sidecar(&sidecar_d),
Ok(true),
"indicates sidecar proposer was observed"
);
let cached_blob_indices = cache
.items
.get(&ProposalKey::new(proposer_index_a, Slot::new(0)))
.expect("slot zero should be present");
assert_eq!(
cached_blob_indices.len(),
2,
"two blob indices should be present in slot 0"
);
// Try adding an out of bounds index
let invalid_index = E::max_blobs_per_block() as u64;
let sidecar_d = get_blob_sidecar(0, proposer_index_a, invalid_index);
assert_eq!(
cache.observe_sidecar(&sidecar_d),
Err(Error::InvalidBlobIndex(invalid_index)),
"cannot add an index > MaxBlobsPerBlock"
);
}
}

View File

@@ -1,9 +1,10 @@
//! Provides the `ObservedBlockProducers` struct which allows for rejecting gossip blocks from
//! validators that have already produced a block.
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::marker::PhantomData;
use types::{BeaconBlockRef, Epoch, EthSpec, Slot, Unsigned};
use types::{BeaconBlockRef, Epoch, EthSpec, Hash256, Slot, Unsigned};
#[derive(Debug, PartialEq)]
pub enum Error {
@@ -14,6 +15,18 @@ pub enum Error {
ValidatorIndexTooHigh(u64),
}
#[derive(Eq, Hash, PartialEq, Debug, Default)]
pub struct ProposalKey {
pub slot: Slot,
pub proposer: u64,
}
impl ProposalKey {
pub fn new(proposer: u64, slot: Slot) -> Self {
Self { slot, proposer }
}
}
/// Maintains a cache of observed `(block.slot, block.proposer)`.
///
/// The cache supports pruning based upon the finalized epoch. It does not automatically prune, you
@@ -27,7 +40,7 @@ pub enum Error {
/// known_distinct_shufflings` which is much smaller.
pub struct ObservedBlockProducers<E: EthSpec> {
finalized_slot: Slot,
items: HashMap<Slot, HashSet<u64>>,
items: HashMap<ProposalKey, HashSet<Hash256>>,
_phantom: PhantomData<E>,
}
@@ -42,6 +55,24 @@ impl<E: EthSpec> Default for ObservedBlockProducers<E> {
}
}
pub enum SeenBlock {
Duplicate,
Slashable,
UniqueNonSlashable,
}
impl SeenBlock {
pub fn proposer_previously_observed(self) -> bool {
match self {
Self::Duplicate | Self::Slashable => true,
Self::UniqueNonSlashable => false,
}
}
pub fn is_slashable(&self) -> bool {
matches!(self, Self::Slashable)
}
}
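A hedged sketch of how a caller might branch on the new return type (the handler itself is illustrative, not from this change):

```rust
use types::{BeaconBlockRef, EthSpec, Hash256};

fn handle_proposal<E: EthSpec>(
    cache: &mut ObservedBlockProducers<E>,
    block_root: Hash256,
    block: BeaconBlockRef<'_, E>,
) -> Result<(), Error> {
    match cache.observe_proposal(block_root, block)? {
        // A second, distinct root for this (slot, proposer): equivocation.
        SeenBlock::Slashable => { /* reject and record slashable evidence */ }
        // The exact same block again: a harmless gossip duplicate.
        SeenBlock::Duplicate => { /* ignore */ }
        // First proposal seen for this (slot, proposer).
        SeenBlock::UniqueNonSlashable => { /* continue processing */ }
    }
    Ok(())
}
```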
impl<E: EthSpec> ObservedBlockProducers<E> {
/// Observe that the `block` was produced by `block.proposer_index` at `block.slot`. This will
/// update `self` so future calls to it indicate that this block is known.
@@ -52,16 +83,44 @@ impl<E: EthSpec> ObservedBlockProducers<E> {
///
/// - `block.proposer_index` is greater than `VALIDATOR_REGISTRY_LIMIT`.
/// - `block.slot` is equal to or less than the latest pruned `finalized_slot`.
pub fn observe_proposer(&mut self, block: BeaconBlockRef<'_, E>) -> Result<bool, Error> {
pub fn observe_proposal(
&mut self,
block_root: Hash256,
block: BeaconBlockRef<'_, E>,
) -> Result<SeenBlock, Error> {
self.sanitize_block(block)?;
let did_not_exist = self
.items
.entry(block.slot())
.or_insert_with(|| HashSet::with_capacity(E::SlotsPerEpoch::to_usize()))
.insert(block.proposer_index());
let key = ProposalKey {
slot: block.slot(),
proposer: block.proposer_index(),
};
Ok(!did_not_exist)
let entry = self.items.entry(key);
let slashable_proposal = match entry {
Entry::Occupied(mut occupied_entry) => {
let block_roots = occupied_entry.get_mut();
let newly_inserted = block_roots.insert(block_root);
let is_equivocation = block_roots.len() > 1;
if is_equivocation {
SeenBlock::Slashable
} else if !newly_inserted {
SeenBlock::Duplicate
} else {
SeenBlock::UniqueNonSlashable
}
}
Entry::Vacant(vacant_entry) => {
let block_roots = HashSet::from([block_root]);
vacant_entry.insert(block_roots);
SeenBlock::UniqueNonSlashable
}
};
Ok(slashable_proposal)
}
/// Returns `Ok(true)` if the `block` has been observed before, `Ok(false)` if not. Does not
@@ -72,15 +131,33 @@ impl<E: EthSpec> ObservedBlockProducers<E> {
///
/// - `block.proposer_index` is greater than `VALIDATOR_REGISTRY_LIMIT`.
/// - `block.slot` is equal to or less than the latest pruned `finalized_slot`.
pub fn proposer_has_been_observed(&self, block: BeaconBlockRef<'_, E>) -> Result<bool, Error> {
pub fn proposer_has_been_observed(
&self,
block: BeaconBlockRef<'_, E>,
block_root: Hash256,
) -> Result<SeenBlock, Error> {
self.sanitize_block(block)?;
let exists = self
.items
.get(&block.slot())
.map_or(false, |set| set.contains(&block.proposer_index()));
let key = ProposalKey {
slot: block.slot(),
proposer: block.proposer_index(),
};
Ok(exists)
if let Some(block_roots) = self.items.get(&key) {
let block_already_known = block_roots.contains(&block_root);
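// `difference` yields the stored roots that are not equal to `block_root`, so a
// count of zero means every previously observed proposal shares this root.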
let no_prev_known_blocks =
block_roots.difference(&HashSet::from([block_root])).count() == 0;
if !no_prev_known_blocks {
Ok(SeenBlock::Slashable)
} else if block_already_known {
Ok(SeenBlock::Duplicate)
} else {
Ok(SeenBlock::UniqueNonSlashable)
}
} else {
Ok(SeenBlock::UniqueNonSlashable)
}
}
/// Returns `Ok(())` if the given `block` is sane.
@@ -112,15 +189,15 @@ impl<E: EthSpec> ObservedBlockProducers<E> {
}
self.finalized_slot = finalized_slot;
self.items.retain(|slot, _set| *slot > finalized_slot);
self.items.retain(|key, _| key.slot > finalized_slot);
}
/// Returns `true` if the given `validator_index` has been stored in `self` at `epoch`.
///
/// This is useful for doppelganger detection.
pub fn index_seen_at_epoch(&self, validator_index: u64, epoch: Epoch) -> bool {
self.items.iter().any(|(slot, producers)| {
slot.epoch(E::slots_per_epoch()) == epoch && producers.contains(&validator_index)
self.items.iter().any(|(key, _)| {
key.slot.epoch(E::slots_per_epoch()) == epoch && key.proposer == validator_index
})
}
}
@@ -148,9 +225,12 @@ mod tests {
// Slot 0, proposer 0
let block_a = get_block(0, 0);
let block_root = block_a.canonical_root();
assert_eq!(
cache.observe_proposer(block_a.to_ref()),
cache
.observe_proposal(block_root, block_a.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(false),
"can observe proposer, indicates proposer unobserved"
);
@@ -164,7 +244,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(0))
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
@@ -182,7 +265,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(0))
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
@@ -207,9 +293,12 @@ mod tests {
// First slot of finalized epoch, proposer 0
let block_b = get_block(E::slots_per_epoch(), 0);
let block_root_b = block_b.canonical_root();
assert_eq!(
cache.observe_proposer(block_b.to_ref()),
cache
.observe_proposal(block_root_b, block_b.to_ref())
.map(SeenBlock::proposer_previously_observed),
Err(Error::FinalizedBlock {
slot: E::slots_per_epoch().into(),
finalized_slot: E::slots_per_epoch().into(),
@@ -229,7 +318,9 @@ mod tests {
let block_b = get_block(three_epochs, 0);
assert_eq!(
cache.observe_proposer(block_b.to_ref()),
cache
.observe_proposal(block_root_b, block_b.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(false),
"can insert non-finalized block"
);
@@ -238,7 +329,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(three_epochs))
.get(&ProposalKey {
slot: Slot::new(three_epochs),
proposer: 0
})
.expect("the three epochs slot should be present")
.len(),
1,
@@ -262,7 +356,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(three_epochs))
.get(&ProposalKey {
slot: Slot::new(three_epochs),
proposer: 0
})
.expect("the three epochs slot should be present")
.len(),
1,
@@ -276,24 +373,33 @@ mod tests {
// Slot 0, proposer 0
let block_a = get_block(0, 0);
let block_root_a = block_a.canonical_root();
assert_eq!(
cache.proposer_has_been_observed(block_a.to_ref()),
cache
.proposer_has_been_observed(block_a.to_ref(), block_a.canonical_root())
.map(|x| x.proposer_previously_observed()),
Ok(false),
"no observation in empty cache"
);
assert_eq!(
cache.observe_proposer(block_a.to_ref()),
cache
.observe_proposal(block_root_a, block_a.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(false),
"can observe proposer, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_has_been_observed(block_a.to_ref()),
cache
.proposer_has_been_observed(block_a.to_ref(), block_a.canonical_root())
.map(|x| x.proposer_previously_observed()),
Ok(true),
"observed block is indicated as true"
);
assert_eq!(
cache.observe_proposer(block_a.to_ref()),
cache
.observe_proposal(block_root_a, block_a.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(true),
"observing again indicates true"
);
@@ -303,7 +409,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(0))
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
@@ -312,24 +421,33 @@ mod tests {
// Slot 1, proposer 0
let block_b = get_block(1, 0);
let block_root_b = block_b.canonical_root();
assert_eq!(
cache.proposer_has_been_observed(block_b.to_ref()),
cache
.proposer_has_been_observed(block_b.to_ref(), block_b.canonical_root())
.map(|x| x.proposer_previously_observed()),
Ok(false),
"no observation for new slot"
);
assert_eq!(
cache.observe_proposer(block_b.to_ref()),
cache
.observe_proposal(block_root_b, block_b.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(false),
"can observe proposer for new slot, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_has_been_observed(block_b.to_ref()),
cache
.proposer_has_been_observed(block_b.to_ref(), block_b.canonical_root())
.map(|x| x.proposer_previously_observed()),
Ok(true),
"observed block in slot 1 is indicated as true"
);
assert_eq!(
cache.observe_proposer(block_b.to_ref()),
cache
.observe_proposal(block_root_b, block_b.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(true),
"observing slot 1 again indicates true"
);
@@ -339,7 +457,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(0))
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
@@ -348,7 +469,10 @@ mod tests {
assert_eq!(
cache
.items
.get(&Slot::new(1))
.get(&ProposalKey {
slot: Slot::new(1),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
@@ -357,45 +481,54 @@ mod tests {
// Slot 0, proposer 1
let block_c = get_block(0, 1);
let block_root_c = block_c.canonical_root();
assert_eq!(
cache.proposer_has_been_observed(block_c.to_ref()),
cache
.proposer_has_been_observed(block_c.to_ref(), block_c.canonical_root())
.map(|x| x.proposer_previously_observed()),
Ok(false),
"no observation for new proposer"
);
assert_eq!(
cache.observe_proposer(block_c.to_ref()),
cache
.observe_proposal(block_root_c, block_c.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(false),
"can observe new proposer, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_has_been_observed(block_c.to_ref()),
cache
.proposer_has_been_observed(block_c.to_ref(), block_c.canonical_root())
.map(|x| x.proposer_previously_observed()),
Ok(true),
"observed new proposer block is indicated as true"
);
assert_eq!(
cache.observe_proposer(block_c.to_ref()),
cache
.observe_proposal(block_root_c, block_c.to_ref())
.map(SeenBlock::proposer_previously_observed),
Ok(true),
"observing new proposer again indicates true"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 2, "two slots should be present");
assert_eq!(cache.items.len(), 3, "three slots should be present");
assert_eq!(
cache
.items
.get(&Slot::new(0))
.expect("slot zero should be present")
.len(),
.iter()
.filter(|(k, _)| k.slot == cache.finalized_slot)
.count(),
2,
"two proposers should be present in slot 0"
);
assert_eq!(
cache
.items
.get(&Slot::new(1))
.expect("slot zero should be present")
.len(),
.iter()
.filter(|(k, _)| k.slot == Slot::new(1))
.count(),
1,
"only one proposer should be present in slot 1"
);

View File

@@ -1,10 +1,12 @@
use derivative::Derivative;
use smallvec::SmallVec;
use state_processing::{SigVerifiedOp, VerifyOperation};
use smallvec::{smallvec, SmallVec};
use ssz::{Decode, Encode};
use state_processing::{SigVerifiedOp, VerifyOperation, VerifyOperationAt};
use std::collections::HashSet;
use std::marker::PhantomData;
use types::{
AttesterSlashing, BeaconState, ChainSpec, EthSpec, ProposerSlashing, SignedVoluntaryExit,
AttesterSlashing, BeaconState, ChainSpec, Epoch, EthSpec, ForkName, ProposerSlashing,
SignedBlsToExecutionChange, SignedVoluntaryExit, Slot,
};
/// Number of validator indices to store on the stack in `observed_validators`.
@@ -24,17 +26,20 @@ pub struct ObservedOperations<T: ObservableOperation<E>, E: EthSpec> {
/// previously seen attester slashings, i.e. those validators in the intersection of
/// `attestation_1.attester_indices` and `attestation_2.attester_indices`.
observed_validator_indices: HashSet<u64>,
/// The name of the current fork. The default will be overwritten on first use.
#[derivative(Default(value = "ForkName::Base"))]
current_fork: ForkName,
_phantom: PhantomData<(T, E)>,
}
/// Was the observed operation new and valid for further processing, or a useless duplicate?
#[derive(Debug, PartialEq, Eq, Clone)]
pub enum ObservationOutcome<T> {
New(SigVerifiedOp<T>),
pub enum ObservationOutcome<T: Encode + Decode, E: EthSpec> {
New(SigVerifiedOp<T, E>),
AlreadyKnown,
}
/// Trait for exits and slashings which can be observed using `ObservedOperations`.
/// Trait for operations which can be observed using `ObservedOperations`.
pub trait ObservableOperation<E: EthSpec>: VerifyOperation<E> + Sized {
/// The set of validator indices involved in this operation.
///
@@ -44,13 +49,13 @@ pub trait ObservableOperation<E: EthSpec>: VerifyOperation<E> + Sized {
impl<E: EthSpec> ObservableOperation<E> for SignedVoluntaryExit {
fn observed_validators(&self) -> SmallVec<[u64; SMALL_VEC_SIZE]> {
std::iter::once(self.message.validator_index).collect()
smallvec![self.message.validator_index]
}
}
impl<E: EthSpec> ObservableOperation<E> for ProposerSlashing {
fn observed_validators(&self) -> SmallVec<[u64; SMALL_VEC_SIZE]> {
std::iter::once(self.signed_header_1.message.proposer_index).collect()
smallvec![self.signed_header_1.message.proposer_index]
}
}
@@ -75,13 +80,25 @@ impl<E: EthSpec> ObservableOperation<E> for AttesterSlashing<E> {
}
}
impl<E: EthSpec> ObservableOperation<E> for SignedBlsToExecutionChange {
fn observed_validators(&self) -> SmallVec<[u64; SMALL_VEC_SIZE]> {
smallvec![self.message.validator_index]
}
}
impl<T: ObservableOperation<E>, E: EthSpec> ObservedOperations<T, E> {
pub fn verify_and_observe(
pub fn verify_and_observe_parametric<F>(
&mut self,
op: T,
validate: F,
head_state: &BeaconState<E>,
spec: &ChainSpec,
) -> Result<ObservationOutcome<T>, T::Error> {
) -> Result<ObservationOutcome<T, E>, T::Error>
where
F: Fn(T) -> Result<SigVerifiedOp<T, E>, T::Error>,
{
self.reset_at_fork_boundary(head_state.slot(), spec);
let observed_validator_indices = &mut self.observed_validator_indices;
let new_validator_indices = op.observed_validators();
@@ -99,7 +116,7 @@ impl<T: ObservableOperation<E>, E: EthSpec> ObservedOperations<T, E> {
}
// Validate the op using operation-specific logic (`verify_attester_slashing`, etc).
let verified_op = op.validate(head_state, spec)?;
let verified_op = validate(op)?;
// Add the relevant indices to the set of known indices to prevent processing of duplicates
// in the future.
@@ -107,4 +124,51 @@ impl<T: ObservableOperation<E>, E: EthSpec> ObservedOperations<T, E> {
Ok(ObservationOutcome::New(verified_op))
}
pub fn verify_and_observe(
&mut self,
op: T,
head_state: &BeaconState<E>,
spec: &ChainSpec,
) -> Result<ObservationOutcome<T, E>, T::Error> {
let validate = |op: T| op.validate(head_state, spec);
self.verify_and_observe_parametric(op, validate, head_state, spec)
}
/// Reset the cache when crossing a fork boundary.
///
/// This prevents an attacker from crafting a self-slashing which is only valid before the fork
/// (e.g. using the Altair fork domain at a Bellatrix epoch), in order to prevent propagation of
/// all other slashings due to the duplicate check.
///
/// It doesn't matter if this cache gets reset too often, as we reset it on restart anyway and a
/// false negative just results in propagation of messages which should have been ignored.
///
/// In future we could check slashing relevance against the op pool itself, but that would
/// require indexing the attester slashings in the op pool by validator index.
fn reset_at_fork_boundary(&mut self, head_slot: Slot, spec: &ChainSpec) {
let head_fork = spec.fork_name_at_slot::<E>(head_slot);
if head_fork != self.current_fork {
self.observed_validator_indices.clear();
self.current_fork = head_fork;
}
}
/// Reset the cache. MUST ONLY BE USED IN TESTS.
pub fn __reset_for_testing_only(&mut self) {
self.observed_validator_indices.clear();
}
}
impl<T: ObservableOperation<E> + VerifyOperationAt<E>, E: EthSpec> ObservedOperations<T, E> {
pub fn verify_and_observe_at(
&mut self,
op: T,
verify_at_epoch: Epoch,
head_state: &BeaconState<E>,
spec: &ChainSpec,
) -> Result<ObservationOutcome<T, E>, T::Error> {
let validate = |op: T| op.validate_at(head_state, verify_at_epoch, spec);
self.verify_and_observe_parametric(op, validate, head_state, spec)
}
}
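A hedged sketch of the epoch-parameterised path (it assumes the module's imports are in scope and that `SignedVoluntaryExit` implements `VerifyOperationAt`, which this change's imports suggest but do not show):

```rust
fn process_exit<E: EthSpec>(
    observed_exits: &mut ObservedOperations<SignedVoluntaryExit, E>,
    exit: SignedVoluntaryExit,
    verify_at_epoch: Epoch,
    head_state: &BeaconState<E>,
    spec: &ChainSpec,
) -> Result<(), <SignedVoluntaryExit as VerifyOperation<E>>::Error> {
    match observed_exits.verify_and_observe_at(exit, verify_at_epoch, head_state, spec)? {
        ObservationOutcome::New(_verified_exit) => { /* forward to the op pool */ }
        ObservationOutcome::AlreadyKnown => { /* drop the duplicate */ }
    }
    Ok(())
}
```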

View File

@@ -0,0 +1,486 @@
//! Provides the `ObservedSlashable` struct which tracks slashable messages seen in
//! gossip or via RPC. Useful in supporting `broadcast_validation` in the Beacon API.
use crate::observed_block_producers::Error;
use std::collections::hash_map::Entry;
use std::collections::{HashMap, HashSet};
use std::marker::PhantomData;
use types::{EthSpec, Hash256, Slot, Unsigned};
#[derive(Eq, Hash, PartialEq, Debug, Default)]
pub struct ProposalKey {
pub slot: Slot,
pub proposer: u64,
}
/// Maintains a cache of observed `(block.slot, block.proposer)`.
///
/// The cache supports pruning based upon the finalized epoch. It does not automatically prune, you
/// must call `Self::prune` manually.
///
/// The maximum size of the cache is determined by `slots_since_finality *
/// VALIDATOR_REGISTRY_LIMIT`. This is quite a large size, so it's important that upstream
/// functions only use this cache for blocks with a valid signature. Only allowing valid signed
/// blocks reduces the theoretical maximum size of this cache to `slots_since_finality *
/// active_validator_count`, however in reality that is more like `slots_since_finality *
/// known_distinct_shufflings` which is much smaller.
pub struct ObservedSlashable<E: EthSpec> {
finalized_slot: Slot,
items: HashMap<ProposalKey, HashSet<Hash256>>,
_phantom: PhantomData<E>,
}
impl<E: EthSpec> Default for ObservedSlashable<E> {
/// Instantiates `Self` with `finalized_slot == 0`.
fn default() -> Self {
Self {
finalized_slot: Slot::new(0),
items: HashMap::new(),
_phantom: PhantomData,
}
}
}
impl<E: EthSpec> ObservedSlashable<E> {
/// Observe that the `header` was produced by `header.proposer_index` at `header.slot`. This will
/// update `self` so future calls to it indicate that this block is known.
///
/// The supplied `block` **MUST** be signature verified (see struct-level documentation).
///
/// ## Errors
///
/// - `header.proposer_index` is greater than `VALIDATOR_REGISTRY_LIMIT`.
/// - `header.slot` is equal to or less than the latest pruned `finalized_slot`.
pub fn observe_slashable(
&mut self,
slot: Slot,
proposer_index: u64,
block_root: Hash256,
) -> Result<(), Error> {
self.sanitize_header(slot, proposer_index)?;
let key = ProposalKey {
slot,
proposer: proposer_index,
};
let entry = self.items.entry(key);
match entry {
Entry::Occupied(mut occupied_entry) => {
let block_roots = occupied_entry.get_mut();
block_roots.insert(block_root);
}
Entry::Vacant(vacant_entry) => {
let block_roots = HashSet::from([block_root]);
vacant_entry.insert(block_roots);
}
}
Ok(())
}
/// Returns `Ok(true)` if the `block_root` is slashable, `Ok(false)` if not. Does not
/// update the cache, so calling this function multiple times will continue to return
/// `Ok(false)`, until `Self::observe_slashable` is called.
///
/// ## Errors
///
/// - `proposer_index` is greater than `VALIDATOR_REGISTRY_LIMIT`.
/// - `slot` is equal to or less than the latest pruned `finalized_slot`.
pub fn is_slashable(
&self,
slot: Slot,
proposer_index: u64,
block_root: Hash256,
) -> Result<bool, Error> {
self.sanitize_header(slot, proposer_index)?;
let key = ProposalKey {
slot,
proposer: proposer_index,
};
if let Some(block_roots) = self.items.get(&key) {
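// If removing `block_root` from the stored set leaves nothing behind, every
// known proposal for this (slot, proposer) shares this root and no slashable
// equivocation has been observed.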
let no_prev_known_blocks =
block_roots.difference(&HashSet::from([block_root])).count() == 0;
Ok(!no_prev_known_blocks)
} else {
Ok(false)
}
}
/// Returns `Ok(())` if the given `header` is sane.
fn sanitize_header(&self, slot: Slot, proposer_index: u64) -> Result<(), Error> {
if proposer_index >= E::ValidatorRegistryLimit::to_u64() {
return Err(Error::ValidatorIndexTooHigh(proposer_index));
}
let finalized_slot = self.finalized_slot;
if finalized_slot > 0 && slot <= finalized_slot {
return Err(Error::FinalizedBlock {
slot,
finalized_slot,
});
}
Ok(())
}
/// Removes all observations of blocks equal to or earlier than `finalized_slot`.
///
/// Stores `finalized_slot` in `self`, so that `self` will reject any block that has a slot
/// equal to or less than `finalized_slot`.
///
/// No-op if `finalized_slot == 0`.
pub fn prune(&mut self, finalized_slot: Slot) {
if finalized_slot == 0 {
return;
}
self.finalized_slot = finalized_slot;
self.items.retain(|key, _| key.slot > finalized_slot);
}
}
#[cfg(test)]
mod tests {
use super::*;
use types::{BeaconBlock, Graffiti, MainnetEthSpec};
type E = MainnetEthSpec;
fn get_block(slot: u64, proposer: u64) -> BeaconBlock<E> {
let mut block = BeaconBlock::empty(&E::default_spec());
*block.slot_mut() = slot.into();
*block.proposer_index_mut() = proposer;
block
}
#[test]
fn pruning() {
let mut cache = ObservedSlashable::<E>::default();
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 0, "no slots should be present");
// Slot 0, proposer 0
let block_a = get_block(0, 0);
let block_root = block_a.canonical_root();
assert_eq!(
cache.observe_slashable(block_a.slot(), block_a.proposer_index(), block_root),
Ok(()),
"can observe proposer"
);
/*
* Preconditions.
*/
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 1, "only one slot should be present");
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune at the genesis slot does nothing.
*/
cache.prune(Slot::new(0));
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 1, "only one slot should be present");
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
"only one block root should be present"
);
/*
* Check that a prune empties the cache
*/
cache.prune(E::slots_per_epoch().into());
assert_eq!(
cache.finalized_slot,
Slot::from(E::slots_per_epoch()),
"finalized slot is updated"
);
assert_eq!(cache.items.len(), 0, "no items left");
/*
* Check that we can't insert a finalized block
*/
// First slot of finalized epoch, proposer 0
let block_b = get_block(E::slots_per_epoch(), 0);
let block_root_b = block_b.canonical_root();
assert_eq!(
cache.observe_slashable(block_b.slot(), block_b.proposer_index(), block_root_b),
Err(Error::FinalizedBlock {
slot: E::slots_per_epoch().into(),
finalized_slot: E::slots_per_epoch().into(),
}),
"cant insert finalized block"
);
assert_eq!(cache.items.len(), 0, "block was not added");
/*
* Check that we _can_ insert a non-finalized block
*/
let three_epochs = E::slots_per_epoch() * 3;
// First slot of the third epoch, proposer 0
let block_b = get_block(three_epochs, 0);
assert_eq!(
cache.observe_slashable(block_b.slot(), block_b.proposer_index(), block_root_b),
Ok(()),
"can insert non-finalized block"
);
assert_eq!(cache.items.len(), 1, "only one slot should be present");
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(three_epochs),
proposer: 0
})
.expect("the three epochs slot should be present")
.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune doesn't wipe later blocks
*/
let two_epochs = E::slots_per_epoch() * 2;
cache.prune(two_epochs.into());
assert_eq!(
cache.finalized_slot,
Slot::from(two_epochs),
"finalized slot is updated"
);
assert_eq!(cache.items.len(), 1, "only one slot should be present");
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(three_epochs),
proposer: 0
})
.expect("the three epochs slot should be present")
.len(),
1,
"only one block root should be present"
);
}
#[test]
fn simple_observations() {
let mut cache = ObservedSlashable::<E>::default();
// Slot 0, proposer 0
let block_a = get_block(0, 0);
let block_root_a = block_a.canonical_root();
assert_eq!(
cache.is_slashable(
block_a.slot(),
block_a.proposer_index(),
block_a.canonical_root()
),
Ok(false),
"no observation in empty cache"
);
assert_eq!(
cache.observe_slashable(block_a.slot(), block_a.proposer_index(), block_root_a),
Ok(()),
"can observe proposer"
);
assert_eq!(
cache.is_slashable(
block_a.slot(),
block_a.proposer_index(),
block_a.canonical_root()
),
Ok(false),
"observed but unslashed block"
);
assert_eq!(
cache.observe_slashable(block_a.slot(), block_a.proposer_index(), block_root_a),
Ok(()),
"observing again"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 1, "only one slot should be present");
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
"only one block root should be present"
);
// Slot 1, proposer 0
let block_b = get_block(1, 0);
let block_root_b = block_b.canonical_root();
assert_eq!(
cache.is_slashable(
block_b.slot(),
block_b.proposer_index(),
block_b.canonical_root()
),
Ok(false),
"not slashable for new slot"
);
assert_eq!(
cache.observe_slashable(block_b.slot(), block_b.proposer_index(), block_root_b),
Ok(()),
"can observe proposer for new slot"
);
assert_eq!(
cache.is_slashable(
block_b.slot(),
block_b.proposer_index(),
block_b.canonical_root()
),
Ok(false),
"observed but not slashable block in slot 1"
);
assert_eq!(
cache.observe_slashable(block_b.slot(), block_b.proposer_index(), block_root_b),
Ok(()),
"observing slot 1 again"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 2, "two slots should be present");
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(0),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
"only one block root should be present in slot 0"
);
assert_eq!(
cache
.items
.get(&ProposalKey {
slot: Slot::new(1),
proposer: 0
})
.expect("slot zero should be present")
.len(),
1,
"only one block root should be present in slot 1"
);
// Slot 0, proposer 1
let block_c = get_block(0, 1);
let block_root_c = block_c.canonical_root();
assert_eq!(
cache.is_slashable(
block_c.slot(),
block_c.proposer_index(),
block_c.canonical_root()
),
Ok(false),
"not slashable due to new proposer"
);
assert_eq!(
cache.observe_slashable(block_c.slot(), block_c.proposer_index(), block_root_c),
Ok(()),
"can observe new proposer, indicates proposer unobserved"
);
assert_eq!(
cache.is_slashable(
block_c.slot(),
block_c.proposer_index(),
block_c.canonical_root()
),
Ok(false),
"not slashable due to new proposer"
);
assert_eq!(
cache.observe_slashable(block_c.slot(), block_c.proposer_index(), block_root_c),
Ok(()),
"observing new proposer again"
);
assert_eq!(cache.finalized_slot, 0, "finalized slot is zero");
assert_eq!(cache.items.len(), 3, "three slots should be present");
assert_eq!(
cache
.items
.iter()
.filter(|(k, _)| k.slot == cache.finalized_slot)
.count(),
2,
"two proposers should be present in slot 0"
);
assert_eq!(
cache
.items
.iter()
.filter(|(k, _)| k.slot == Slot::new(1))
.count(),
1,
"only one proposer should be present in slot 1"
);
// Slot 0, proposer 1 (again)
let mut block_d = get_block(0, 1);
*block_d.body_mut().graffiti_mut() = Graffiti::from(*b"this is slashable ");
let block_root_d = block_d.canonical_root();
assert_eq!(
cache.is_slashable(
block_d.slot(),
block_d.proposer_index(),
block_d.canonical_root()
),
Ok(true),
"slashable due to new proposer"
);
assert_eq!(
cache.observe_slashable(block_d.slot(), block_d.proposer_index(), block_root_d),
Ok(()),
"can observe new proposer, indicates proposer unobserved"
);
}
}

View File

@@ -0,0 +1,381 @@
use crate::execution_payload::{validate_merge_block, AllowOptimisticImport};
use crate::{
BeaconChain, BeaconChainError, BeaconChainTypes, BlockError, ExecutionPayloadError,
INVALID_FINALIZED_MERGE_TRANSITION_BLOCK_SHUTDOWN_REASON,
};
use itertools::process_results;
use proto_array::InvalidationOperation;
use slog::{crit, debug, error, info, warn};
use slot_clock::SlotClock;
use ssz::{Decode, Encode};
use ssz_derive::{Decode, Encode};
use state_processing::per_block_processing::is_merge_transition_complete;
use std::sync::Arc;
use store::{DBColumn, Error as StoreError, HotColdDB, KeyValueStore, StoreItem};
use task_executor::{ShutdownReason, TaskExecutor};
use tokio::time::sleep;
use tree_hash::TreeHash;
use types::{BeaconBlockRef, EthSpec, Hash256, Slot};
use DBColumn::OptimisticTransitionBlock as OTBColumn;
#[derive(Clone, Debug, Decode, Encode, PartialEq)]
pub struct OptimisticTransitionBlock {
root: Hash256,
slot: Slot,
}
impl OptimisticTransitionBlock {
// types::BeaconBlockRef<'_, <T as BeaconChainTypes>::EthSpec>
pub fn from_block<E: EthSpec>(block: BeaconBlockRef<E>) -> Self {
Self {
root: block.tree_hash_root(),
slot: block.slot(),
}
}
pub fn root(&self) -> &Hash256 {
&self.root
}
pub fn slot(&self) -> &Slot {
&self.slot
}
pub fn persist_in_store<T, A>(&self, store: A) -> Result<(), StoreError>
where
T: BeaconChainTypes,
A: AsRef<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
{
if store
.as_ref()
.item_exists::<OptimisticTransitionBlock>(&self.root)?
{
Ok(())
} else {
store.as_ref().put_item(&self.root, self)
}
}
pub fn remove_from_store<T, A>(&self, store: A) -> Result<(), StoreError>
where
T: BeaconChainTypes,
A: AsRef<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
{
store
.as_ref()
.hot_db
.key_delete(OTBColumn.into(), self.root.as_bytes())
}
fn is_canonical<T: BeaconChainTypes>(
&self,
chain: &BeaconChain<T>,
) -> Result<bool, BeaconChainError> {
Ok(chain
.forwards_iter_block_roots_until(self.slot, self.slot)?
.next()
.transpose()?
.map(|(root, _)| root)
== Some(self.root))
}
}
impl StoreItem for OptimisticTransitionBlock {
fn db_column() -> DBColumn {
OTBColumn
}
fn as_store_bytes(&self) -> Vec<u8> {
self.as_ssz_bytes()
}
fn from_store_bytes(bytes: &[u8]) -> Result<Self, StoreError> {
Ok(Self::from_ssz_bytes(bytes)?)
}
}
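A hedged sketch of recording an optimistically imported transition block for later re-verification (`record_otb` is an illustrative wrapper, not part of this change):

```rust
fn record_otb<T: BeaconChainTypes>(
    chain: &BeaconChain<T>,
    block: BeaconBlockRef<'_, T::EthSpec>,
) -> Result<(), StoreError> {
    let otb = OptimisticTransitionBlock::from_block(block);
    // Idempotent: `persist_in_store` is a no-op if this root is already stored.
    otb.persist_in_store::<T, _>(&chain.store)
}
```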
/// The routine is expected to run once per epoch, 1/4th through the epoch.
pub const EPOCH_DELAY_FACTOR: u32 = 4;
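Worked timing example (the slot and epoch constants below are mainnet assumptions, not values from this change):

```rust
const SECONDS_PER_SLOT: u32 = 12; // mainnet assumption
const SLOTS_PER_EPOCH: u32 = 32; // mainnet assumption

fn main() {
    let epoch_duration = SECONDS_PER_SLOT * SLOTS_PER_EPOCH; // 384 s
    // With EPOCH_DELAY_FACTOR = 4, the check fires 96 s into each epoch.
    assert_eq!(epoch_duration / 4, 96);
}
```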
/// Spawns a routine which checks the validity of any optimistically imported transition blocks
///
/// This routine will run once per epoch, at `epoch_duration / EPOCH_DELAY_FACTOR` after
/// the start of each epoch.
///
/// The service will not be started if there is no `execution_layer` on the `chain`.
pub fn start_otb_verification_service<T: BeaconChainTypes>(
executor: TaskExecutor,
chain: Arc<BeaconChain<T>>,
) {
// Avoid spawning the service if there's no EL, it'll just error anyway.
if chain.execution_layer.is_some() {
executor.spawn(
async move { otb_verification_service(chain).await },
"otb_verification_service",
);
}
}
pub fn load_optimistic_transition_blocks<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
) -> Result<Vec<OptimisticTransitionBlock>, StoreError> {
process_results(
chain.store.hot_db.iter_column::<Hash256>(OTBColumn),
|iter| {
iter.map(|(_, bytes)| OptimisticTransitionBlock::from_store_bytes(&bytes))
.collect()
},
)?
}
#[derive(Debug)]
pub enum Error {
ForkChoice(String),
BeaconChain(BeaconChainError),
StoreError(StoreError),
NoBlockFound(OptimisticTransitionBlock),
}
pub async fn validate_optimistic_transition_blocks<T: BeaconChainTypes>(
chain: &Arc<BeaconChain<T>>,
otbs: Vec<OptimisticTransitionBlock>,
) -> Result<(), Error> {
let finalized_slot = chain
.canonical_head
.fork_choice_read_lock()
.get_finalized_block()
.map_err(|e| Error::ForkChoice(format!("{:?}", e)))?
.slot;
// separate otbs into
// non-canonical
// finalized canonical
// unfinalized canonical
let mut non_canonical_otbs = vec![];
let (finalized_canonical_otbs, unfinalized_canonical_otbs) = process_results(
otbs.into_iter().map(|otb| {
otb.is_canonical(chain)
.map(|is_canonical| (otb, is_canonical))
}),
|pair_iter| {
pair_iter
.filter_map(|(otb, is_canonical)| {
if is_canonical {
Some(otb)
} else {
non_canonical_otbs.push(otb);
None
}
})
.partition::<Vec<_>, _>(|otb| *otb.slot() <= finalized_slot)
},
)
.map_err(Error::BeaconChain)?;
// remove non-canonical blocks that conflict with finalized checkpoint from the database
for otb in non_canonical_otbs {
if *otb.slot() <= finalized_slot {
otb.remove_from_store::<T, _>(&chain.store)
.map_err(Error::StoreError)?;
}
}
// ensure finalized canonical otb are valid, otherwise kill client
for otb in finalized_canonical_otbs {
match chain.get_block(otb.root()).await {
Ok(Some(block)) => {
match validate_merge_block(chain, block.message(), AllowOptimisticImport::No).await
{
Ok(()) => {
// The merge transition block is valid; remove it from the OTB store.
otb.remove_from_store::<T, _>(&chain.store)
.map_err(Error::StoreError)?;
info!(
chain.log,
"Validated merge transition block";
"block_root" => ?otb.root(),
"type" => "finalized"
);
}
// The EL was unable to verify the block. Leave the OTB in the
// database, since the EL is likely still syncing and may verify the
// block later.
Err(BlockError::ExecutionPayloadError(
ExecutionPayloadError::UnverifiedNonOptimisticCandidate,
)) => (),
Err(BlockError::ExecutionPayloadError(
ExecutionPayloadError::InvalidTerminalPoWBlock { .. },
)) => {
// Finalized Merge Transition Block is Invalid! Kill the Client!
crit!(
chain.log,
"Finalized merge transition block is invalid!";
"msg" => "You must use the `--purge-db` flag to clear the database and restart sync. \
You may be on a hostile network.",
"block_hash" => ?block.canonical_root()
);
let mut shutdown_sender = chain.shutdown_sender();
if let Err(e) = shutdown_sender.try_send(ShutdownReason::Failure(
INVALID_FINALIZED_MERGE_TRANSITION_BLOCK_SHUTDOWN_REASON,
)) {
crit!(
chain.log,
"Failed to shut down client";
"error" => ?e,
"shutdown_reason" => INVALID_FINALIZED_MERGE_TRANSITION_BLOCK_SHUTDOWN_REASON
);
}
}
_ => {}
}
}
Ok(None) => return Err(Error::NoBlockFound(otb)),
// Our database has pruned the payload and the payload was unavailable on the EL,
// either because the EL is still syncing or because the payload is non-canonical.
Err(BeaconChainError::BlockHashMissingFromExecutionLayer(_)) => (),
Err(e) => return Err(Error::BeaconChain(e)),
}
}
// Attempt to validate any unfinalized canonical OTBs.
for otb in unfinalized_canonical_otbs {
match chain.get_block(otb.root()).await {
Ok(Some(block)) => {
match validate_merge_block(chain, block.message(), AllowOptimisticImport::No).await
{
Ok(()) => {
// The merge transition block is valid; remove it from the OTB store.
otb.remove_from_store::<T, _>(&chain.store)
.map_err(Error::StoreError)?;
info!(
chain.log,
"Validated merge transition block";
"block_root" => ?otb.root(),
"type" => "not finalized"
);
}
// The EL was unable to verify the block. Leave the OTB in the
// database, since the EL is likely still syncing and may verify the
// block later.
Err(BlockError::ExecutionPayloadError(
ExecutionPayloadError::UnverifiedNonOptimisticCandidate,
)) => (),
Err(BlockError::ExecutionPayloadError(
ExecutionPayloadError::InvalidTerminalPoWBlock { .. },
)) => {
// Unfinalized Merge Transition Block is Invalid -> Run process_invalid_execution_payload
warn!(
chain.log,
"Merge transition block invalid";
"block_root" => ?otb.root()
);
chain
.process_invalid_execution_payload(
&InvalidationOperation::InvalidateOne {
block_root: *otb.root(),
},
)
.await
.map_err(|e| {
warn!(
chain.log,
"Error checking merge transition block";
"error" => ?e,
"location" => "process_invalid_execution_payload"
);
Error::BeaconChain(e)
})?;
}
_ => {}
}
}
Ok(None) => return Err(Error::NoBlockFound(otb)),
// Our database has pruned the payload and the payload was unavailable on the EL,
// either because the EL is still syncing or because the payload is non-canonical.
Err(e) => return Err(Error::BeaconChain(e)),
}
}
Ok(())
}
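The classification above does three jobs in one pass: it tags each OTB with its canonical status, diverts non-canonical ones into a side Vec via `filter_map`, and partitions the canonical remainder on the finalized slot. A self-contained toy of the same shape over plain integers (all names here are illustrative):

// `slots` stand in for OTBs, `is_canonical` for the fork-choice check
// and `finalized` for the finalized slot. Non-canonical items are
// collected out-of-band while the iterator partitions the rest.
let finalized = 10u64;
let is_canonical = |slot: u64| slot % 2 == 0; // stand-in predicate
let mut non_canonical = vec![];
let (finalized_canonical, unfinalized_canonical): (Vec<_>, Vec<_>) = (0u64..16)
    .filter_map(|slot| {
        if is_canonical(slot) {
            Some(slot)
        } else {
            non_canonical.push(slot);
            None
        }
    })
    .partition(|slot| *slot <= finalized);
assert_eq!(finalized_canonical, vec![0, 2, 4, 6, 8, 10]);
assert_eq!(unfinalized_canonical, vec![12, 14]);
assert_eq!(non_canonical.len(), 8);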
/// Loop until any optimistically imported merge transition blocks have been verified and
/// the merge has been finalized.
async fn otb_verification_service<T: BeaconChainTypes>(chain: Arc<BeaconChain<T>>) {
let epoch_duration = chain.slot_clock.slot_duration() * T::EthSpec::slots_per_epoch() as u32;
loop {
match chain
.slot_clock
.duration_to_next_epoch(T::EthSpec::slots_per_epoch())
{
Some(duration) => {
let additional_delay = epoch_duration / EPOCH_DELAY_FACTOR;
sleep(duration + additional_delay).await;
debug!(
chain.log,
"OTB verification service firing";
);
if !is_merge_transition_complete(
&chain.canonical_head.cached_head().snapshot.beacon_state,
) {
// We are pre-merge. Nothing to do yet.
continue;
}
// load all optimistically imported transition blocks from the database
match load_optimistic_transition_blocks(chain.as_ref()) {
Ok(otbs) => {
if otbs.is_empty() {
if chain
.canonical_head
.fork_choice_read_lock()
.get_finalized_block()
.map_or(false, |block| {
block.execution_status.is_execution_enabled()
})
{
// There are no optimistic transition blocks in the database, so we can
// exit the service: the merge transition is finalized and we'll never
// see another transition block.
break;
} else {
debug!(
chain.log,
"No optimistic transition blocks";
"info" => "waiting for the merge transition to finalize"
)
}
}
if let Err(e) = validate_optimistic_transition_blocks(&chain, otbs).await {
warn!(
chain.log,
"Error while validating optimistic transition blocks";
"error" => ?e
);
}
}
Err(e) => {
error!(
chain.log,
"Error loading optimistic transition blocks";
"error" => ?e
);
}
};
}
None => {
error!(chain.log, "Failed to read slot clock");
// If we can't read the slot clock, just wait another slot.
sleep(chain.slot_clock.slot_duration()).await;
}
};
}
debug!(
chain.log,
"No optimistic transition blocks in database";
"msg" => "shutting down OTB verification service"
);
}


@@ -0,0 +1,22 @@
use promise_cache::{PromiseCache, Protect};
use types::{BeaconState, Hash256};
#[derive(Debug, Default)]
pub struct ParallelStateProtector;
impl Protect<Hash256> for ParallelStateProtector {
type SortKey = usize;
/// Evict in arbitrary (hashmap) order by using the same key for every value.
fn sort_key(&self, _: &Hash256) -> Self::SortKey {
0
}
/// We don't care too much about preventing evictions of particular states here. All the states
/// in this cache should be different from the head state.
fn protect_from_eviction(&self, _: &Hash256) -> bool {
false
}
}
pub type ParallelStateCache<E> = PromiseCache<Hash256, BeaconState<E>, ParallelStateProtector>;
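For context, here is a hedged sketch of how an eviction pass might consult a `Protect` implementation (the `evict_candidates` helper is hypothetical; the real `PromiseCache` internals are not shown in this diff). With `ParallelStateProtector`, nothing is protected and every key sorts equally, so eviction order is arbitrary:

// Hypothetical eviction pass, generic over any `Protect` implementation:
// drop protected keys, then order the rest by their sort key.
fn evict_candidates<K: Clone, P: Protect<K>>(keys: &[K], protector: &P) -> Vec<K>
where
    P::SortKey: Ord,
{
    let mut candidates: Vec<K> = keys
        .iter()
        .filter(|&k| !protector.protect_from_eviction(k))
        .cloned()
        .collect();
    candidates.sort_by_key(|k| protector.sort_key(k));
    candidates
}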


@@ -1,27 +1,41 @@
use crate::beacon_fork_choice_store::{
PersistedForkChoiceStoreV1, PersistedForkChoiceStoreV7, PersistedForkChoiceStoreV8,
};
use crate::beacon_fork_choice_store::{PersistedForkChoiceStoreV11, PersistedForkChoiceStoreV17};
use ssz::{Decode, Encode};
use ssz_derive::{Decode, Encode};
use store::{DBColumn, Error, StoreItem};
use superstruct::superstruct;
// If adding a new version, update this type alias and fix the breakages.
pub type PersistedForkChoice = PersistedForkChoiceV8;
pub type PersistedForkChoice = PersistedForkChoiceV17;
#[superstruct(
variants(V1, V7, V8),
variants(V11, V17),
variant_attributes(derive(Encode, Decode)),
no_enum
)]
pub struct PersistedForkChoice {
pub fork_choice: fork_choice::PersistedForkChoice,
#[superstruct(only(V1))]
pub fork_choice_store: PersistedForkChoiceStoreV1,
#[superstruct(only(V7))]
pub fork_choice_store: PersistedForkChoiceStoreV7,
#[superstruct(only(V8))]
pub fork_choice_store: PersistedForkChoiceStoreV8,
#[superstruct(only(V11))]
pub fork_choice_store: PersistedForkChoiceStoreV11,
#[superstruct(only(V17))]
pub fork_choice_store: PersistedForkChoiceStoreV17,
}
impl Into<PersistedForkChoice> for PersistedForkChoiceV11 {
fn into(self) -> PersistedForkChoice {
PersistedForkChoice {
fork_choice: self.fork_choice,
fork_choice_store: self.fork_choice_store.into(),
}
}
}
impl Into<PersistedForkChoiceV11> for PersistedForkChoice {
fn into(self) -> PersistedForkChoiceV11 {
PersistedForkChoiceV11 {
fork_choice: self.fork_choice,
fork_choice_store: self.fork_choice_store.into(),
}
}
}
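These reciprocal `Into` impls let migration code move between the persisted V11 layout and the current alias without naming concrete versions everywhere. A minimal usage sketch (assumes a `v11: PersistedForkChoiceV11` read from the database):

// Upgrade: the inner `fork_choice_store.into()` converts the store,
// while `fork_choice` is carried over unchanged.
let current: PersistedForkChoice = v11.into();

// Downgrade: the reverse impl restores the V11 layout, e.g. when
// rolling the schema back.
let v11_again: PersistedForkChoiceV11 = current.into();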
macro_rules! impl_store_item {
@@ -35,13 +49,12 @@ macro_rules! impl_store_item {
self.as_ssz_bytes()
}
fn from_store_bytes(bytes: &[u8]) -> std::result::Result<Self, Error> {
fn from_store_bytes(bytes: &[u8]) -> Result<Self, Error> {
Self::from_ssz_bytes(bytes).map_err(Into::into)
}
}
};
}
impl_store_item!(PersistedForkChoiceV1);
impl_store_item!(PersistedForkChoiceV7);
impl_store_item!(PersistedForkChoiceV8);
impl_store_item!(PersistedForkChoiceV11);
impl_store_item!(PersistedForkChoiceV17);


@@ -3,11 +3,13 @@ use itertools::process_results;
use lru::LruCache;
use parking_lot::Mutex;
use slog::debug;
use std::num::NonZeroUsize;
use std::time::Duration;
use types::non_zero_usize::new_non_zero_usize;
use types::Hash256;
const BLOCK_ROOT_CACHE_LIMIT: usize = 512;
const LOOKUP_LIMIT: usize = 8;
const BLOCK_ROOT_CACHE_LIMIT: NonZeroUsize = new_non_zero_usize(512);
const LOOKUP_LIMIT: NonZeroUsize = new_non_zero_usize(8);
const METRICS_TIMEOUT: Duration = Duration::from_millis(100);
/// Cache for rejecting attestations to blocks from before finalization.
@@ -71,14 +73,14 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
}
// 2. Check on disk.
if self.store.get_block(&block_root)?.is_some() {
if self.store.get_blinded_block(&block_root, None)?.is_some() {
cache.block_roots.put(block_root, ());
return Ok(true);
}
// 3. Check the network with a single block lookup.
cache.in_progress_lookups.put(block_root, ());
if cache.in_progress_lookups.len() == LOOKUP_LIMIT {
if cache.in_progress_lookups.len() == LOOKUP_LIMIT.get() {
// NOTE: we expect this to occur sometimes if a lot of blocks that we look up fail to be
// imported for reasons other than being pre-finalization. The cache will eventually
// self-repair in this case by replacing old entries with new ones until all the failed


@@ -0,0 +1,72 @@
use crate::{BeaconChain, BeaconChainTypes};
use slog::{debug, error};
use slot_clock::SlotClock;
use std::sync::Arc;
use task_executor::TaskExecutor;
use tokio::time::sleep;
/// Spawns a routine which ensures the EL is provided advance notice of any block producers.
///
/// This routine will run once per slot, at `chain.prepare_payload_lookahead()`
/// before the start of each slot.
///
/// The service will not be started if there is no `execution_layer` on the `chain`.
pub fn start_proposer_prep_service<T: BeaconChainTypes>(
executor: TaskExecutor,
chain: Arc<BeaconChain<T>>,
) {
// Avoid spawning the service if there's no EL; it would just error anyway.
if chain.execution_layer.is_some() {
executor.clone().spawn(
async move { proposer_prep_service(executor, chain).await },
"proposer_prep_service",
);
}
}
/// Loop indefinitely, calling `BeaconChain::prepare_beacon_proposer_async` at an interval.
async fn proposer_prep_service<T: BeaconChainTypes>(
executor: TaskExecutor,
chain: Arc<BeaconChain<T>>,
) {
let slot_duration = chain.slot_clock.slot_duration();
loop {
match chain.slot_clock.duration_to_next_slot() {
Some(duration) => {
let additional_delay =
slot_duration.saturating_sub(chain.config.prepare_payload_lookahead);
sleep(duration + additional_delay).await;
debug!(
chain.log,
"Proposer prepare routine firing";
);
let inner_chain = chain.clone();
executor.spawn(
async move {
if let Ok(current_slot) = inner_chain.slot() {
if let Err(e) = inner_chain.prepare_beacon_proposer(current_slot).await
{
error!(
inner_chain.log,
"Proposer prepare routine failed";
"error" => ?e
);
}
} else {
debug!(inner_chain.log, "No slot for proposer prepare routine");
}
},
"proposer_prep_update",
);
}
None => {
error!(chain.log, "Failed to read slot clock");
// If we can't read the slot clock, just wait another slot.
sleep(slot_duration).await;
}
};
}
}
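Concretely, sleeping for `duration_to_next_slot + (slot_duration - prepare_payload_lookahead)` wakes the routine exactly `prepare_payload_lookahead` before a slot boundary. A worked sketch with assumed numbers (12-second slots, 4-second lookahead; values are illustrative only):

use std::time::Duration;

// Assumed values for illustration only.
let slot_duration = Duration::from_secs(12);
let prepare_payload_lookahead = Duration::from_secs(4);
let duration_to_next_slot = Duration::from_secs(5); // arbitrary point in a slot

let additional_delay = slot_duration.saturating_sub(prepare_payload_lookahead); // 8s
let wake_after = duration_to_next_slot + additional_delay; // 13s

// 5s to the next boundary plus 8s into that slot leaves 4s before the
// following boundary, i.e. exactly the configured lookahead.
assert_eq!(wake_after, Duration::from_secs(13));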


@@ -1,185 +1,91 @@
//! Utilities for managing database schema changes.
mod migration_schema_v6;
mod migration_schema_v7;
mod migration_schema_v8;
mod types;
mod migration_schema_v17;
mod migration_schema_v18;
mod migration_schema_v19;
mod migration_schema_v20;
use crate::beacon_chain::{BeaconChainTypes, FORK_CHOICE_DB_KEY, OP_POOL_DB_KEY};
use crate::persisted_fork_choice::{PersistedForkChoiceV1, PersistedForkChoiceV7};
use crate::validator_pubkey_cache::ValidatorPubkeyCache;
use operation_pool::{PersistedOperationPool, PersistedOperationPoolBase};
use slog::{warn, Logger};
use ssz::{Decode, Encode};
use ssz_derive::{Decode, Encode};
use std::fs;
use std::path::Path;
use crate::beacon_chain::BeaconChainTypes;
use crate::types::ChainSpec;
use slog::Logger;
use std::sync::Arc;
use store::config::OnDiskStoreConfig;
use store::hot_cold_store::{HotColdDB, HotColdDBError};
use store::metadata::{SchemaVersion, CONFIG_KEY, CURRENT_SCHEMA_VERSION};
use store::{DBColumn, Error as StoreError, ItemStore, StoreItem};
const PUBKEY_CACHE_FILENAME: &str = "pubkey_cache.ssz";
use store::metadata::{SchemaVersion, CURRENT_SCHEMA_VERSION};
use store::Error as StoreError;
/// Migrate the database from one schema version to another, applying all requisite mutations.
#[allow(clippy::only_used_in_recursion)] // `spec` is not used yet but is likely to be used in future
pub fn migrate_schema<T: BeaconChainTypes>(
db: Arc<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
datadir: &Path,
deposit_contract_deploy_block: u64,
from: SchemaVersion,
to: SchemaVersion,
log: Logger,
spec: &ChainSpec,
) -> Result<(), StoreError> {
match (from, to) {
// Migrating from the current schema version to iself is always OK, a no-op.
// Migrating from the current schema version to itself is always OK, a no-op.
(_, _) if from == to && to == CURRENT_SCHEMA_VERSION => Ok(()),
// Migrate across multiple versions by recursively migrating one step at a time.
// Upgrade for tree-states database changes.
(SchemaVersion(12), SchemaVersion(20)) => {
migration_schema_v20::upgrade_to_v20::<T>(db, log)
}
// Downgrade for tree-states database changes.
(SchemaVersion(20), SchemaVersion(12)) => {
migration_schema_v20::downgrade_from_v20::<T>(db, log)
}
// Upgrade across multiple versions by recursively migrating one step at a time.
(_, _) if from.as_u64() + 1 < to.as_u64() => {
let next = SchemaVersion(from.as_u64() + 1);
migrate_schema::<T>(db.clone(), datadir, from, next, log.clone())?;
migrate_schema::<T>(db, datadir, next, to, log)
migrate_schema::<T>(
db.clone(),
deposit_contract_deploy_block,
from,
next,
log.clone(),
spec,
)?;
migrate_schema::<T>(db, deposit_contract_deploy_block, next, to, log, spec)
}
// Migration from v0.3.0 to v0.3.x, adding the temporary states column.
// Nothing actually needs to be done, but once a DB uses v2 it shouldn't go back.
(SchemaVersion(1), SchemaVersion(2)) => {
db.store_schema_version(to)?;
Ok(())
// Downgrade across multiple versions by recursively migrating one step at a time.
(_, _) if to.as_u64() + 1 < from.as_u64() => {
let next = SchemaVersion(from.as_u64() - 1);
migrate_schema::<T>(
db.clone(),
deposit_contract_deploy_block,
from,
next,
log.clone(),
spec,
)?;
migrate_schema::<T>(db, deposit_contract_deploy_block, next, to, log, spec)
}
// Migration for removing the pubkey cache.
(SchemaVersion(2), SchemaVersion(3)) => {
let pk_cache_path = datadir.join(PUBKEY_CACHE_FILENAME);
// Load from file, store to DB.
ValidatorPubkeyCache::<T>::load_from_file(&pk_cache_path)
.and_then(|cache| ValidatorPubkeyCache::convert(cache, db.clone()))
.map_err(|e| StoreError::SchemaMigrationError(format!("{:?}", e)))?;
db.store_schema_version(to)?;
// Delete cache file now that keys are stored in the DB.
fs::remove_file(&pk_cache_path).map_err(|e| {
StoreError::SchemaMigrationError(format!(
"unable to delete {}: {:?}",
pk_cache_path.display(),
e
))
})?;
Ok(())
}
// Migration for adding sync committee contributions to the persisted op pool.
(SchemaVersion(3), SchemaVersion(4)) => {
// Deserialize from what exists in the database using the `PersistedOperationPoolBase`
// variant and convert it to the Altair variant.
let pool_opt = db
.get_item::<PersistedOperationPoolBase<T::EthSpec>>(&OP_POOL_DB_KEY)?
.map(PersistedOperationPool::Base)
.map(PersistedOperationPool::base_to_altair);
if let Some(pool) = pool_opt {
// Store the converted pool under the same key.
db.put_item::<PersistedOperationPool<T::EthSpec>>(&OP_POOL_DB_KEY, &pool)?;
}
db.store_schema_version(to)?;
Ok(())
}
// Migration for weak subjectivity sync support and clean up of `OnDiskStoreConfig` (#1784).
(SchemaVersion(4), SchemaVersion(5)) => {
if let Some(OnDiskStoreConfigV4 {
slots_per_restore_point,
..
}) = db.hot_db.get(&CONFIG_KEY)?
{
let new_config = OnDiskStoreConfig {
slots_per_restore_point,
};
db.hot_db.put(&CONFIG_KEY, &new_config)?;
}
db.store_schema_version(to)?;
Ok(())
}
// Migration for adding `execution_status` field to the fork choice store.
(SchemaVersion(5), SchemaVersion(6)) => {
// Database operations to be done atomically
let mut ops = vec![];
// The top-level `PersistedForkChoice` struct is still V1 but will have its internal
// bytes for the fork choice updated to V6.
let fork_choice_opt = db.get_item::<PersistedForkChoiceV1>(&FORK_CHOICE_DB_KEY)?;
if let Some(mut persisted_fork_choice) = fork_choice_opt {
migration_schema_v6::update_execution_statuses::<T>(&mut persisted_fork_choice)
.map_err(StoreError::SchemaMigrationError)?;
// Store the converted fork choice store under the same key.
ops.push(persisted_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}
db.store_schema_version_atomically(to, ops)?;
Ok(())
}
// 1. Add `proposer_boost_root`.
// 2. Update `justified_epoch` to `justified_checkpoint` and `finalized_epoch` to
// `finalized_checkpoint`.
// 3. This migration also includes a potential update to the justified
// checkpoint in case the fork choice store's justified checkpoint and finalized checkpoint
// combination does not actually exist for any blocks in fork choice. This was possible in
// the consensus spec prior to v1.1.6.
//
// Relevant issues:
// Migrations from before SchemaVersion(16) are deprecated.
//
// https://github.com/sigp/lighthouse/issues/2741
// https://github.com/ethereum/consensus-specs/pull/2727
// https://github.com/ethereum/consensus-specs/pull/2730
(SchemaVersion(6), SchemaVersion(7)) => {
// Database operations to be done atomically
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV1>(&FORK_CHOICE_DB_KEY)?;
if let Some(persisted_fork_choice_v1) = fork_choice_opt {
// This migrates the `PersistedForkChoiceStore`, adding the `proposer_boost_root` field.
let mut persisted_fork_choice_v7 = persisted_fork_choice_v1.into();
let result = migration_schema_v7::update_fork_choice::<T>(
&mut persisted_fork_choice_v7,
db.clone(),
);
// Fall back to re-initializing fork choice from an anchor state if necessary.
if let Err(e) = result {
warn!(log, "Unable to migrate to database schema 7, re-initializing fork choice"; "error" => ?e);
migration_schema_v7::update_with_reinitialized_fork_choice::<T>(
&mut persisted_fork_choice_v7,
db.clone(),
)
.map_err(StoreError::SchemaMigrationError)?;
}
// Store the converted fork choice store under the same key.
ops.push(persisted_fork_choice_v7.as_kv_store_op(FORK_CHOICE_DB_KEY));
}
db.store_schema_version_atomically(to, ops)?;
Ok(())
(SchemaVersion(16), SchemaVersion(17)) => {
let ops = migration_schema_v17::upgrade_to_v17::<T>(db.clone(), log)?;
db.store_schema_version_atomically(to, ops)
}
// Migration to add an `epoch` key to the fork choice's balances cache.
(SchemaVersion(7), SchemaVersion(8)) => {
let mut ops = vec![];
let fork_choice_opt = db.get_item::<PersistedForkChoiceV7>(&FORK_CHOICE_DB_KEY)?;
if let Some(fork_choice) = fork_choice_opt {
let updated_fork_choice =
migration_schema_v8::update_fork_choice::<T>(fork_choice, db.clone())?;
ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
}
db.store_schema_version_atomically(to, ops)?;
Ok(())
(SchemaVersion(17), SchemaVersion(16)) => {
let ops = migration_schema_v17::downgrade_from_v17::<T>(db.clone(), log)?;
db.store_schema_version_atomically(to, ops)
}
(SchemaVersion(17), SchemaVersion(18)) => {
let ops = migration_schema_v18::upgrade_to_v18::<T>(db.clone(), log)?;
db.store_schema_version_atomically(to, ops)
}
(SchemaVersion(18), SchemaVersion(17)) => {
let ops = migration_schema_v18::downgrade_from_v18::<T>(db.clone(), log)?;
db.store_schema_version_atomically(to, ops)
}
(SchemaVersion(18), SchemaVersion(19)) => {
let ops = migration_schema_v19::upgrade_to_v19::<T>(db.clone(), log)?;
db.store_schema_version_atomically(to, ops)
}
(SchemaVersion(19), SchemaVersion(18)) => {
let ops = migration_schema_v19::downgrade_from_v19::<T>(db.clone(), log)?;
db.store_schema_version_atomically(to, ops)
}
// Anything else is an error.
(_, _) => Err(HotColdDBError::UnsupportedSchemaVersion {
@@ -189,24 +95,3 @@ pub fn migrate_schema<T: BeaconChainTypes>(
.into()),
}
}
// Store config used in v4 schema and earlier.
#[derive(Debug, Clone, PartialEq, Eq, Encode, Decode)]
pub struct OnDiskStoreConfigV4 {
pub slots_per_restore_point: u64,
pub _block_cache_size: usize,
}
impl StoreItem for OnDiskStoreConfigV4 {
fn db_column() -> DBColumn {
DBColumn::BeaconMeta
}
fn as_store_bytes(&self) -> Vec<u8> {
self.as_ssz_bytes()
}
fn from_store_bytes(bytes: &[u8]) -> Result<Self, StoreError> {
Ok(Self::from_ssz_bytes(bytes)?)
}
}
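The multi-step arms above reduce any `(from, to)` pair to a chain of single-step migrations. A toy sketch of that recursion over bare version numbers (illustrative only; no database involved):

// Any jump of more than one version splits into `from -> next` plus a
// recursive `next -> to`, where `next` moves one step towards `to`.
fn migrate(from: u64, to: u64, applied: &mut Vec<(u64, u64)>) {
    if from == to {
        return;
    }
    let next = if to > from { from + 1 } else { from - 1 };
    applied.push((from, next)); // a single-step migration would run here
    migrate(next, to, applied);
}

let mut steps = vec![];
migrate(16, 19, &mut steps);
assert_eq!(steps, vec![(16, 17), (17, 18), (18, 19)]);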


@@ -0,0 +1,88 @@
use crate::beacon_chain::{BeaconChainTypes, FORK_CHOICE_DB_KEY};
use crate::persisted_fork_choice::{PersistedForkChoiceV11, PersistedForkChoiceV17};
use proto_array::core::{SszContainerV16, SszContainerV17};
use slog::{debug, Logger};
use ssz::{Decode, Encode};
use std::sync::Arc;
use store::{Error, HotColdDB, KeyValueStoreOp, StoreItem};
pub fn upgrade_fork_choice(
mut fork_choice: PersistedForkChoiceV11,
) -> Result<PersistedForkChoiceV17, Error> {
let ssz_container_v16 = SszContainerV16::from_ssz_bytes(
&fork_choice.fork_choice.proto_array_bytes,
)
.map_err(|e| {
Error::SchemaMigrationError(format!(
"Failed to decode ProtoArrayForkChoice during schema migration: {:?}",
e
))
})?;
let ssz_container_v17: SszContainerV17 = ssz_container_v16.try_into().map_err(|e| {
Error::SchemaMigrationError(format!(
"Missing checkpoint during schema migration: {:?}",
e
))
})?;
fork_choice.fork_choice.proto_array_bytes = ssz_container_v17.as_ssz_bytes();
Ok(fork_choice.into())
}
pub fn downgrade_fork_choice(
mut fork_choice: PersistedForkChoiceV17,
) -> Result<PersistedForkChoiceV11, Error> {
let ssz_container_v17 = SszContainerV17::from_ssz_bytes(
&fork_choice.fork_choice.proto_array_bytes,
)
.map_err(|e| {
Error::SchemaMigrationError(format!(
"Failed to decode ProtoArrayForkChoice during schema migration: {:?}",
e
))
})?;
let ssz_container_v16: SszContainerV16 = ssz_container_v17.into();
fork_choice.fork_choice.proto_array_bytes = ssz_container_v16.as_ssz_bytes();
Ok(fork_choice.into())
}
pub fn upgrade_to_v17<T: BeaconChainTypes>(
db: Arc<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
log: Logger,
) -> Result<Vec<KeyValueStoreOp>, Error> {
// Get persisted_fork_choice.
let v11 = db
.get_item::<PersistedForkChoiceV11>(&FORK_CHOICE_DB_KEY)?
.ok_or_else(|| Error::SchemaMigrationError("fork choice missing from database".into()))?;
let v17 = upgrade_fork_choice(v11)?;
debug!(
log,
"Removing unused best_justified_checkpoint from fork choice store."
);
Ok(vec![v17.as_kv_store_op(FORK_CHOICE_DB_KEY)])
}
pub fn downgrade_from_v17<T: BeaconChainTypes>(
db: Arc<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
log: Logger,
) -> Result<Vec<KeyValueStoreOp>, Error> {
// Get persisted_fork_choice.
let v17 = db
.get_item::<PersistedForkChoiceV17>(&FORK_CHOICE_DB_KEY)?
.ok_or_else(|| Error::SchemaMigrationError("fork choice missing from database".into()))?;
let v11 = downgrade_fork_choice(v17)?;
debug!(
log,
"Adding junk best_justified_checkpoint to fork choice store."
);
Ok(vec![v11.as_kv_store_op(FORK_CHOICE_DB_KEY)])
}
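Because the V16 to V17 container change only drops the unused `best_justified_checkpoint` (and the downgrade writes a junk one back), a round trip through both helpers should preserve everything the new schema actually uses. A hedged sketch (assumes a `v11: PersistedForkChoiceV11` read from disk):

// Upgrade then downgrade: apart from the junk best_justified_checkpoint
// re-added on the way down, the proto-array bytes describe the same
// fork choice state.
let v17 = upgrade_fork_choice(v11)?;
let v11_again = downgrade_fork_choice(v17)?;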
