Compare commits

...

236 Commits

Author SHA1 Message Date
realbigsean
6e54763c79 Merge branch 'unstable' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-07-15 14:52:55 -07:00
realbigsean
f290c68c93 Beacon api + validator electra (#5744)
* Attestation superstruct changes for EIP 7549 (#5644)

* update

* experiment

* superstruct changes

* revert

* superstruct changes

* fix tests

* indexed attestation

* indexed attestation superstruct

* updated TODOs

* `superstruct` the `AttesterSlashing` (#5636)

* `superstruct` Attester Fork Variants

* Push a little further

* Deal with Encode / Decode of AttesterSlashing

* not so sure about this..

* Stop Encode/Decode Bounds from Propagating Out

* Tons of Changes..

* More Conversions to AttestationRef

* Add AsReference trait (#15)

* Add AsReference trait

* Fix some snafus

* Got it Compiling! :D

* Got Tests Building

* Get beacon chain tests compiling

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Merge remote-tracking branch 'upstream/unstable' into electra_attestation_changes

* Make EF Tests Fork-Agnostic (#5713)

* Finish EF Test Fork Agnostic (#5714)

* Superstruct `AggregateAndProof` (#5715)

* Upgrade `superstruct` to `0.8.0`

* superstruct `AggregateAndProof`
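The `superstruct` refactors referenced in these commits give each consensus container one variant per fork plus accessors that work across variants. A hand-rolled sketch of the pattern the macro expands to (hypothetical field names, not Lighthouse's actual types):

```rust
// Desugared sketch of the `superstruct` fork-variant pattern:
// one struct per fork, a top-level enum, and accessors for
// shared vs. fork-specific fields.
#[derive(Debug, Clone, PartialEq)]
pub enum Attestation {
    Base(AttestationBase),
    Electra(AttestationElectra),
}

#[derive(Debug, Clone, PartialEq)]
pub struct AttestationBase {
    pub aggregation_bits: Vec<bool>,
    pub committee_index: u64,
}

#[derive(Debug, Clone, PartialEq)]
pub struct AttestationElectra {
    pub aggregation_bits: Vec<bool>,
    // EIP-7549 moves the committee index out of the attestation data
    // and into a committee bitfield on the attestation itself.
    pub committee_bits: Vec<bool>,
}

impl Attestation {
    /// Field shared by all fork variants.
    pub fn aggregation_bits(&self) -> &[bool] {
        match self {
            Attestation::Base(a) => &a.aggregation_bits,
            Attestation::Electra(a) => &a.aggregation_bits,
        }
    }

    /// Fork-specific field: only present pre-Electra, hence the `Option`
    /// (matching the "make committee_index return option" commit above).
    pub fn committee_index(&self) -> Option<u64> {
        match self {
            Attestation::Base(a) => Some(a.committee_index),
            Attestation::Electra(_) => None,
        }
    }
}
```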

* Merge remote-tracking branch 'sigp/unstable' into electra_attestation_changes

* cargo fmt

* Merge pull request #5726 from realbigsean/electra_attestation_changes

Merge unstable into Electra attestation changes

* process withdrawals updates

* cleanup withdrawals processing

* update `process_operations` deposit length check

* add apply_deposit changes

* add execution layer withdrawal request processing

* process deposit receipts

* add consolidation processing

* update process operations function

* exit updates

* clean up

* update slash_validator

* EIP7549 `get_attestation_indices` (#5657)

* get attesting indices electra impl

* fmt

* get tests to pass

* fmt

* fix some beacon chain tests

* fmt

* fix slasher test

* fmt got me again

* fix more tests

* fix tests
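Under EIP-7549 a single Electra attestation can span several committees: `committee_bits` selects the participating committees, and `aggregation_bits` concatenates each selected committee's participation flags in committee-index order. A simplified sketch of the `get_attesting_indices` flow (illustrative types and signature, not the Lighthouse implementation):

```rust
// EIP-7549 attesting-indices sketch. The aggregation bitlist covers only
// the committees set in `committee_bits`, so the running offset advances
// only for selected committees.
fn get_attesting_indices(
    committees: &[Vec<u64>],   // validator indices per committee for the slot
    committee_bits: &[bool],   // which committees participate
    aggregation_bits: &[bool], // concatenated participation flags
) -> Result<Vec<u64>, &'static str> {
    let mut indices = Vec::new();
    let mut offset = 0; // running offset into aggregation_bits
    for (committee_index, committee) in committees.iter().enumerate() {
        if !committee_bits.get(committee_index).copied().unwrap_or(false) {
            continue;
        }
        let bits = aggregation_bits
            .get(offset..offset + committee.len())
            .ok_or("aggregation bitlist too short for selected committees")?;
        for (validator, &bit) in committee.iter().zip(bits) {
            if bit {
                indices.push(*validator);
            }
        }
        offset += committee.len();
    }
    indices.sort_unstable();
    Ok(indices)
}
```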

* Some small changes (#5739)

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* cargo fmt (#5740)

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* fix attestation verification

* Add new engine api methods

* Fix the versioning of v4 requests

* Handle new engine api methods in mock EL

* Note todo

* Fix todos

* Add support for electra fields in getPayloadBodies

* Add comments for potential versioning confusion

* updates for aggregate attestation endpoint

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Sketch op pool changes

* fix get attesting indices (#5742)

* fix get attesting indices

* better errors

* fix compile

* only get committee index once

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Ef test fixes (#5753)

* attestation related ef test fixes

* delete commented out stuff

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Fix Aggregation Pool for Electra (#5754)

* Fix Aggregation Pool for Electra

* Remove Outdated Interface

* fix ssz (#5755)

* Get `electra_op_pool` up to date (#5756)

* fix get attesting indices (#5742)

* fix get attesting indices

* better errors

* fix compile

* only get committee index once

* Ef test fixes (#5753)

* attestation related ef test fixes

* delete commented out stuff

* Fix Aggregation Pool for Electra (#5754)

* Fix Aggregation Pool for Electra

* Remove Outdated Interface

* fix ssz (#5755)

---------

Co-authored-by: realbigsean <sean@sigmaprime.io>

* Revert "Get `electra_op_pool` up to date (#5756)" (#5757)

This reverts commit ab9e58aa3d.

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into electra_op_pool

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Compute on chain aggregate impl (#5752)

* add compute_on_chain_agg impl to op pool changes

* fmt

* get op pool tests to pass

* update beacon api aggregate attestation endpoint

* update the naive agg pool interface (#5760)

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* updates after merge

* Fix bugs in cross-committee aggregation

* Add comment to max cover optimisation
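The op pool's aggregate selection is a greedy max-coverage heuristic: repeatedly pick the attestation that adds the most not-yet-covered attesters. A generic greedy max-cover sketch (illustrative only; the real op pool logic weights and re-scores candidates more elaborately):

```rust
use std::collections::HashSet;

// Greedy max-coverage: choose up to `k` sets, each time taking the one
// that covers the most still-uncovered elements (here, attester indices).
fn greedy_max_cover(mut candidates: Vec<HashSet<u64>>, k: usize) -> HashSet<u64> {
    let mut covered = HashSet::new();
    for _ in 0..k {
        // Index of the candidate adding the most uncovered elements.
        let best = candidates
            .iter()
            .enumerate()
            .max_by_key(|(_, s)| s.difference(&covered).count())
            .map(|(i, _)| i);
        match best {
            Some(i) if candidates[i].difference(&covered).count() > 0 => {
                covered.extend(candidates.swap_remove(i));
            }
            _ => break, // nothing left adds coverage
        }
    }
    covered
}
```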

* Fix assert

* Electra epoch processing

* add deposit limit for old deposit queue

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Merge pull request #5749 from sigp/electra_op_pool

Optimise Electra op pool aggregation

* don't fail on empty consolidations

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* update committee offset

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* update committee offset

* update committee offset

* update committee offset

* only increment the state deposit index on old deposit flow

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* use correct max eb in epoch cache initialization

* drop initiate validator ordering optimization

* fix initiate exit for single pass

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* accept new payload v4 in mock el

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Fix Electra Fork Choice Tests (#5764)

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Fix Consolidation Sigs & Withdrawals

* Merge pull request #5766 from ethDreamer/two_fixes

Fix Consolidation Sigs & Withdrawals

* Merge branches 'block-processing-electra' and 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Send unagg attestation based on fork

* Fix ser/de

* Merge branch 'electra-engine-api' into beacon-api-electra

* Subscribe to the correct subnets for electra attestations (#5782)

* subscribe to the correct att subnets for electra

* subscribe to the correct att subnets for electra

* cargo fmt

* update electra readiness with new endpoints

* fix slashing handling

* Fix Bug In Block Processing with 0x02 Credentials

* Merge remote-tracking branch 'upstream/unstable'

* Send unagg attestation based on fork

* Publish all aggregates

* just one more check bro plz..

* Merge pull request #5832 from ethDreamer/electra_attestation_changes_merge_unstable

Merge `unstable` into `electra_attestation_changes`

* Merge pull request #5835 from realbigsean/fix-validator-logic

Fix validator logic

* Merge pull request #5816 from realbigsean/electra-attestation-slashing-handling

Electra slashing handling

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* fix: serde rename camel case for execution payload body (#5846)

* Merge branch 'electra-engine-api' into beacon-api-electra

* Electra attestation changes rm decode impl (#5856)

* Remove Crappy Decode impl for Attestation

* Remove Inefficient Attestation Decode impl

* Implement Schema Upgrade / Downgrade

* Update beacon_node/beacon_chain/src/schema_change/migration_schema_v20.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Fix failing attestation tests and misc electra attestation cleanup (#5810)

* - get attestation related beacon chain tests to pass
- observed attestations are now keyed off of data + committee index
- rename op pool attestationref to compactattestationref
- remove unwraps in agg pool and use options instead
- cherry pick some changes from ef-tests-electra

* cargo fmt

* fix failing test

* Revert dockerfile changes

* make committee_index return option

* function args shouldnt be a ref to attestation ref

* fmt

* fix dup imports

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* fix some todos (#5817)

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes

* add consolidations to merkle calc for inclusion proof

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Remove Duplicate KZG Commitment Merkle Proof Code (#5874)

* Remove Duplicate KZG Commitment Merkle Proof Code

* s/tree_lists/fields/

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* fix compile

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Fix slasher tests (#5906)

* Fix electra tests

* Add electra attestations to double vote tests

* Update superstruct to 0.8

* Merge remote-tracking branch 'origin/unstable' into electra_attestation_changes

* Small cleanup in slasher tests

* Clean up Electra observed aggregates (#5929)

* Use consistent key in observed_attestations

* Remove unwraps from observed aggregates

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes

* De-dup attestation constructor logic

* Remove unwraps in Attestation construction

* Dedup match_attestation_data

* Remove outdated TODO

* Use ForkName Ord in fork-choice tests

* Use ForkName Ord in BeaconBlockBody

* Make to_electra not fallible

* Remove TestRandom impl for IndexedAttestation

* Remove IndexedAttestation faulty Decode impl

* Drop TestRandom impl

* Add PendingAttestationInElectra

* Indexed att on disk (#35)

* indexed att on disk

* fix lints

* Update slasher/src/migrate.rs

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

---------

Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

* add electra fork enabled fn to ForkName impl (#36)

* add electra fork enabled fn to ForkName impl

* remove inadvertent file

* Update common/eth2/src/types.rs

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

* Dedup attestation constructor logic in attester cache

* Use if let Ok for committee_bits

* Dedup Attestation constructor code

* Diff reduction in tests

* Fix beacon_chain tests

* Diff reduction

* Use Ord for ForkName in pubsub

* Resolve into_attestation_and_indices todo

* Remove stale TODO

* Fix beacon_chain tests

* Test spec invariant

* Use electra_enabled in pubsub

* Remove get_indexed_attestation_from_signed_aggregate

* Use ok_or instead of if let else

* committees are sorted

* remove dup method `get_indexed_attestation_from_committees`

* Merge pull request #5940 from dapplion/electra_attestation_changes_lionreview

Electra attestations #5712 review

* update default persisted op pool deserialization

* ensure aggregate and proof uses serde untagged on ref

* Fork aware ssz static attestation tests

* Electra attestation changes from Lions review (#5971)

* dedup/cleanup and remove unneeded hashset use

* remove irrelevant TODOs

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes

* Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* Fix Compilation Break

* Merge pull request #5973 from ethDreamer/beacon-api-electra

Fix Compilation Break

* Electra attestation changes sean review (#5972)

* instantiate empty bitlist in unreachable code

* clean up error conversion

* fork enabled bool cleanup

* remove a couple todos

* return bools instead of options in `aggregate` and use the result

* delete commented out code

* use map macros in simple transformations

* remove signers_disjoint_from

* get ef tests compiling

* get ef tests compiling

* update intentionally excluded files

* Avoid changing slasher schema for Electra

* Delete slasher schema v4

* Fix clippy

* Fix compilation of beacon_chain tests

* Update database.rs

* Update per_block_processing.rs

* Add electra lightclient types

* Update slasher/src/database.rs

* fix imports

* Merge pull request #5980 from dapplion/electra-lightclient

Add electra lightclient types

* Merge pull request #5975 from michaelsproul/electra-slasher-no-migration

Avoid changing slasher schema for Electra

* Update beacon_node/beacon_chain/src/attestation_verification.rs

* Update beacon_node/beacon_chain/src/attestation_verification.rs

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes

* Merge branch 'electra_attestation_changes' of https://github.com/realbigsean/lighthouse into block-processing-electra

* Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc

* Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* The great renaming receipt -> request

* Address some more review comments

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra-engine-api

* Update beacon_node/beacon_chain/src/electra_readiness.rs

* Update consensus/types/src/chain_spec.rs

* update GET requests

* update POST requests

* add client updates and test updates

* Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra-engine-api

* Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra

* compile after merge

* unwrap -> unwrap_err

* self review

* fix tests

* convert op pool messages to electra in electra

* remove methods to post without content header

* filter instead of convert
2024-07-15 19:49:08 +00:00
Jimmy Chen
7b283c5ddb Enable the outbound rate limiter by default, and update blobs method quotas (#6093)
* Enable the outbound rate limiter by default, and update blobs method quotas.

* Lint and book updates.
2024-07-15 18:52:02 +00:00
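Per-method quotas like the blobs quotas mentioned above are commonly enforced with a token bucket: a quota of N tokens replenished over a fixed window, where each request consumes a token. A minimal sketch (illustrative only, not Lighthouse's rate limiter):

```rust
use std::time::Duration;

// Token-bucket quota: `max_tokens` requests per `replenish_all_every`
// window, refilled at a constant rate and capped at the bucket size.
struct Quota {
    max_tokens: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl Quota {
    fn new(max_tokens: u64, replenish_all_every: Duration) -> Self {
        let refill_per_sec = max_tokens as f64 / replenish_all_every.as_secs_f64();
        Quota {
            max_tokens: max_tokens as f64,
            tokens: max_tokens as f64, // start full
            refill_per_sec,
        }
    }

    /// Advance time by `elapsed`, refill, then try to consume one token.
    fn allow_after(&mut self, elapsed: Duration) -> bool {
        self.tokens = (self.tokens + elapsed.as_secs_f64() * self.refill_per_sec)
            .min(self.max_tokens);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```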
Lion - dapplion
0e5993943e Add range sync metrics to track efficiency (#6095)
* Add more range sync metrics to track efficiency

* Add ignored blocks metrics
2024-07-15 18:51:59 +00:00
realbigsean
4065ef66ab filter instead of convert 2024-07-15 10:57:13 -07:00
dapplion
9f40d91d51 Revert "Add BeaconBlocksByRange v3"
This reverts commit e3ce7fc5ea.
2024-07-15 18:01:06 +02:00
dapplion
e3ce7fc5ea Add BeaconBlocksByRange v3 2024-07-15 17:36:05 +02:00
realbigsean
71a2eadc46 remove methods to post without content header 2024-07-11 12:17:29 -07:00
realbigsean
386aacda2a convert op pool messages to electra in electra 2024-07-11 11:38:10 -07:00
realbigsean
c4cb8ad833 fix tests 2024-07-09 08:54:58 -07:00
realbigsean
d394746248 self review 2024-07-08 18:50:14 -07:00
realbigsean
d1357e459a unwrap -> unwrap_err 2024-07-08 18:37:38 -07:00
realbigsean
80266a8109 compile after merge 2024-07-08 18:28:02 -07:00
realbigsean
dabb3d12dc Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-07-08 18:25:56 -07:00
realbigsean
0c2ee92f90 Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra-engine-api 2024-07-08 18:25:13 -07:00
realbigsean
f405601d6f add client updates and test updates 2024-07-08 18:19:10 -07:00
realbigsean
39d41ada93 update POST requests 2024-07-08 14:14:20 -07:00
realbigsean
6766f329e3 update GET requests 2024-07-08 14:04:36 -07:00
realbigsean
69ac34209c Update consensus/types/src/chain_spec.rs 2024-07-01 08:21:31 -07:00
realbigsean
c9fe10b366 Update beacon_node/beacon_chain/src/electra_readiness.rs 2024-07-01 08:21:17 -07:00
realbigsean
257bcc37fc Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra-engine-api 2024-07-01 07:47:19 -07:00
Pawan Dhananjay
033457ce89 Address some more review comments 2024-06-28 15:10:27 +05:30
Pawan Dhananjay
806a5ebe1f The great renaming receipt -> request 2024-06-28 14:53:10 +05:30
realbigsean
897f06a29c Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-06-25 13:13:16 -07:00
realbigsean
4a858b3f6b Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-06-25 13:12:30 -07:00
realbigsean
51a8c80069 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-06-25 13:12:07 -07:00
realbigsean
a8d84d69c5 Merge branch 'electra_attestation_changes' of https://github.com/realbigsean/lighthouse into block-processing-electra 2024-06-25 13:05:27 -07:00
realbigsean
87fde510b8 Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes 2024-06-25 13:04:57 -07:00
realbigsean
d137881614 Update beacon_node/beacon_chain/src/attestation_verification.rs 2024-06-21 13:13:07 -04:00
realbigsean
68fd7a7881 Update beacon_node/beacon_chain/src/attestation_verification.rs 2024-06-21 13:09:56 -04:00
realbigsean
cf030d0a8a Merge pull request #5975 from michaelsproul/electra-slasher-no-migration
Avoid changing slasher schema for Electra
2024-06-21 11:25:01 -04:00
realbigsean
5517c78102 Merge pull request #5980 from dapplion/electra-lightclient
Add electra lightclient types
2024-06-21 11:24:33 -04:00
realbigsean
8fc533368c fix imports 2024-06-21 10:57:35 -04:00
realbigsean
09141ec51a Update slasher/src/database.rs 2024-06-21 10:48:43 -04:00
dapplion
8715589e40 Add electra lightclient types 2024-06-21 14:50:03 +02:00
Lion - dapplion
7509cf6d3b Update per_block_processing.rs 2024-06-21 09:55:23 +02:00
Lion - dapplion
70a80d5da0 Update database.rs 2024-06-21 09:48:35 +02:00
Michael Sproul
339d1b8229 Fix compilation of beacon_chain tests 2024-06-21 17:36:45 +10:00
Michael Sproul
13b1b05960 Fix clippy 2024-06-21 17:03:06 +10:00
Michael Sproul
ebbb17b6bc Delete slasher schema v4 2024-06-21 14:21:36 +10:00
Michael Sproul
b6913ae542 Avoid changing slasher schema for Electra 2024-06-21 14:21:36 +10:00
realbigsean
27ed90e4dc Electra attestation changes sean review (#5972)
* instantiate empty bitlist in unreachable code

* clean up error conversion

* fork enabled bool cleanup

* remove a couple todos

* return bools instead of options in `aggregate` and use the result

* delete commented out code

* use map macros in simple transformations

* remove signers_disjoint_from

* get ef tests compiling

* get ef tests compiling

* update intentionally excluded files
2024-06-21 14:20:10 +10:00
realbigsean
68035eb5e6 Merge pull request #5973 from ethDreamer/beacon-api-electra
Fix Compilation Break
2024-06-20 13:42:45 -04:00
Mark Mackey
09f48c5527 Fix Compilation Break 2024-06-20 12:38:12 -05:00
realbigsean
af98e98c25 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-06-20 10:46:51 -04:00
realbigsean
c276af6061 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-06-20 10:45:31 -04:00
realbigsean
536c9f83b6 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-06-20 10:43:35 -04:00
realbigsean
dd0aa8e2ec Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-06-20 10:41:17 -04:00
realbigsean
efb8a01e91 Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes 2024-06-20 09:36:56 -04:00
Eitan Seri-Levi
f85a124362 Electra attestation changes from Lions review (#5971)
* dedup/cleanup and remove unneeded hashset use

* remove irrelevant TODOs
2024-06-20 09:36:43 -04:00
dapplion
0e2add2daa Fork aware ssz static attestation tests 2024-06-20 09:58:53 +02:00
realbigsean
381bbaba94 ensure aggregate and proof uses serde untagged on ref 2024-06-19 17:04:47 -04:00
realbigsean
afb9122cc1 update default persisted op pool deserialization 2024-06-19 15:00:33 -04:00
realbigsean
6e4483288a Merge pull request #5940 from dapplion/electra_attestation_changes_lionreview
Electra attestations #5712 review
2024-06-19 13:52:19 -04:00
realbigsean
3977b92c49 remove dup method get_indexed_attestation_from_committees 2024-06-19 13:45:47 -04:00
dapplion
d67270f899 committees are sorted 2024-06-19 12:59:27 +02:00
dapplion
a8d8989c05 Use ok_or instead of if let else 2024-06-19 12:50:41 +02:00
dapplion
9e6e76fb89 Remove get_indexed_attestation_from_signed_aggregate 2024-06-19 12:47:38 +02:00
dapplion
70a2d4de10 Use electra_enabled in pubsub 2024-06-19 11:43:41 +02:00
dapplion
cbb7c5d8f4 Test spec invariant 2024-06-19 11:39:45 +02:00
dapplion
370d511223 Fix beacon_chain tests 2024-06-19 11:31:51 +02:00
dapplion
4d4c268e1e Remove stale TODO 2024-06-19 11:31:51 +02:00
dapplion
7fce143300 Resolve into_attestation_and_indices todo 2024-06-19 11:31:50 +02:00
dapplion
4d3edfeaed Use Ord for ForkName in pubsub 2024-06-19 11:31:50 +02:00
dapplion
7521f97ca5 Diff reduction 2024-06-19 11:31:50 +02:00
dapplion
d26473621a Fix beacon_chain tests 2024-06-19 11:31:50 +02:00
dapplion
444cd625ef Diff reduction in tests 2024-06-19 11:31:50 +02:00
dapplion
6f0b78426a Dedup Attestation constructor code 2024-06-19 11:31:50 +02:00
dapplion
6a4d842376 Use if let Ok for committee_bits 2024-06-19 11:31:50 +02:00
dapplion
dec7cff9c7 Dedup attestation constructor logic in attester cache 2024-06-19 11:31:50 +02:00
Lion - dapplion
2634a1f1a6 Update common/eth2/src/types.rs
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
2024-06-19 11:31:50 +02:00
Eitan Seri-Levi
7af3f2eb35 add electra fork enabled fn to ForkName impl (#36)
* add electra fork enabled fn to ForkName impl

* remove inadvertent file
2024-06-19 11:31:50 +02:00
realbigsean
9e84779522 Indexed att on disk (#35)
* indexed att on disk

* fix lints

* Update slasher/src/migrate.rs

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

---------

Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>
Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
2024-06-19 11:31:50 +02:00
dapplion
45d007a71f Add PendingAttestationInElectra 2024-06-19 11:31:50 +02:00
dapplion
5070ab254d Drop TestRandom impl 2024-06-19 11:31:50 +02:00
dapplion
f0492852f3 Remove IndexedAttestation faulty Decode impl 2024-06-19 11:31:50 +02:00
dapplion
4f08f6e0da Remove TestRandom impl for IndexedAttestation 2024-06-19 11:31:50 +02:00
dapplion
5acc0523df Make to_electra not fallible 2024-06-19 11:31:50 +02:00
dapplion
1d0e3f4d30 Use ForkName Ord in BeaconBlockBody 2024-06-19 11:31:50 +02:00
dapplion
960f8c5c48 Use ForkName Ord in fork-choice tests 2024-06-19 11:31:50 +02:00
dapplion
795eff9bf4 Remove outdated TODO 2024-06-19 11:31:50 +02:00
dapplion
3ec21a2435 Dedup match_attestation_data 2024-06-19 11:31:50 +02:00
dapplion
dd0d5e2d93 Remove unwraps in Attestation construction 2024-06-19 11:31:50 +02:00
dapplion
d87541c045 De-dup attestation constructor logic 2024-06-19 11:31:50 +02:00
realbigsean
9a01b6b363 Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes 2024-06-17 15:08:36 -04:00
Michael Sproul
3ac3ddb2b7 Clean up Electra observed aggregates (#5929)
* Use consistent key in observed_attestations

* Remove unwraps from observed aggregates
2024-06-17 10:23:02 -04:00
Michael Sproul
c4f2284dbe Small cleanup in slasher tests 2024-06-14 12:50:18 +10:00
Michael Sproul
d5aa2d8dfe Merge remote-tracking branch 'origin/unstable' into electra_attestation_changes 2024-06-14 12:32:47 +10:00
Michael Sproul
d7f3c9583e Update superstruct to 0.8 2024-06-14 12:32:20 +10:00
Pawan Dhananjay
35e07eb0a9 Fix slasher tests (#5906)
* Fix electra tests

* Add electra attestations to double vote tests
2024-06-14 12:27:36 +10:00
realbigsean
a5ee0ed91f Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-06-13 16:43:32 -04:00
realbigsean
f57fa8788d Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-06-13 16:43:04 -04:00
realbigsean
c43d1c2884 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-06-13 16:42:45 -04:00
realbigsean
8dc9f38a60 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-06-13 16:42:02 -04:00
realbigsean
b21b1086f1 fix compile 2024-06-13 16:40:52 -04:00
realbigsean
c2c2bafa9a Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-06-13 14:48:09 -04:00
realbigsean
49db91b27e Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-06-13 14:47:37 -04:00
realbigsean
772ab53811 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-06-13 14:47:16 -04:00
realbigsean
f25531d4cc Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-06-13 14:46:41 -04:00
realbigsean
77c630bc2e Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes 2024-06-13 14:45:49 -04:00
ethDreamer
f9d354539a Remove Duplicate KZG Commitment Merkle Proof Code (#5874)
* Remove Duplicate KZG Commitment Merkle Proof Code

* s/tree_lists/fields/
2024-06-01 12:51:00 -04:00
realbigsean
7a408b7724 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-31 08:53:12 -04:00
realbigsean
7d3a5dfab4 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-31 08:52:37 -04:00
realbigsean
40139440c9 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-05-31 08:51:55 -04:00
realbigsean
a647a3635f Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-31 08:51:36 -04:00
realbigsean
29ed1c5c26 add consolidations to merkle calc for inclusion proof 2024-05-31 08:49:35 -04:00
realbigsean
49de63f792 Merge branch 'unstable' of https://github.com/sigp/lighthouse into electra_attestation_changes 2024-05-31 08:49:04 -04:00
realbigsean
b61d244c0c fix some todos (#5817) 2024-05-30 11:52:38 -04:00
Eitan Seri-Levi
e340998241 Fix failing attestation tests and misc electra attestation cleanup (#5810)
* - get attestation related beacon chain tests to pass
- observed attestations are now keyed off of data + committee index
- rename op pool attestationref to compactattestationref
- remove unwraps in agg pool and use options instead
- cherry pick some changes from ef-tests-electra

* cargo fmt

* fix failing test

* Revert dockerfile changes

* make committee_index return option

* function args shouldnt be a ref to attestation ref

* fmt

* fix dup imports

---------

Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2024-05-30 11:51:34 -04:00
ethDreamer
75432e1135 Electra attestation changes rm decode impl (#5856)
* Remove Crappy Decode impl for Attestation

* Remove Inefficient Attestation Decode impl

* Implement Schema Upgrade / Downgrade

* Update beacon_node/beacon_chain/src/schema_change/migration_schema_v20.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2024-05-30 11:34:14 -04:00
Pawan Dhananjay
72abfa4fec Merge branch 'electra-engine-api' into beacon-api-electra 2024-05-28 17:38:44 +05:30
Matthias Seitz
aed25c49e3 fix: serde rename camel case for execution payload body (#5846) 2024-05-28 17:36:33 +05:30
realbigsean
36a7b1280f Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-24 11:00:14 -04:00
realbigsean
1aa410cd8a Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-24 10:58:28 -04:00
realbigsean
57b6a9ab91 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-05-24 10:58:05 -04:00
realbigsean
9440c36202 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-24 10:57:47 -04:00
realbigsean
3e10e68c1d Merge pull request #5816 from realbigsean/electra-attestation-slashing-handling
Electra slashing handling
2024-05-24 10:53:26 -04:00
realbigsean
3f169ef17a Merge pull request #5835 from realbigsean/fix-validator-logic
Fix validator logic
2024-05-24 10:52:32 -04:00
realbigsean
469296b665 Merge pull request #5832 from ethDreamer/electra_attestation_changes_merge_unstable
Merge `unstable` into `electra_attestation_changes`
2024-05-24 10:52:12 -04:00
Mark Mackey
bb734afa1d just one more check bro plz.. 2024-05-24 10:50:11 -04:00
Mark Mackey
154b7a7b8a Publish all aggregates 2024-05-24 10:50:04 -04:00
Pawan Dhananjay
82858bc04e Send unagg attestation based on fork 2024-05-24 10:49:53 -04:00
Mark Mackey
987abe07f9 Merge remote-tracking branch 'upstream/unstable' 2024-05-24 13:24:37 +02:00
Mark Mackey
f9c50bca07 Fix Bug In Block Processing with 0x02 Credentials 2024-05-20 14:55:12 -04:00
realbigsean
bafb5f0cc0 fix slashing handling 2024-05-20 14:14:41 -04:00
realbigsean
8e537d139e update electra readiness with new endpoints 2024-05-15 04:35:09 -04:00
realbigsean
a8088f1bfa cargo fmt 2024-05-15 03:00:12 -04:00
Eitan Seri-Levi
79a5f2556f Subscribe to the correct subnets for electra attestations (#5782)
* subscribe to the correct att subnets for electra

* subscribe to the correct att subnets for electra
2024-05-15 03:00:09 -04:00
realbigsean
210ad2ff77 cargo fmt 2024-05-15 02:59:44 -04:00
Eitan Seri-Levi
0c29896438 Subscribe to the correct subnets for electra attestations (#5782)
* subscribe to the correct att subnets for electra

* subscribe to the correct att subnets for electra
2024-05-15 02:59:42 -04:00
realbigsean
d8941d70b6 cargo fmt 2024-05-15 02:59:24 -04:00
Eitan Seri-Levi
fc15736fcb Subscribe to the correct subnets for electra attestations (#5782)
* subscribe to the correct att subnets for electra

* subscribe to the correct att subnets for electra
2024-05-15 02:59:20 -04:00
realbigsean
227aa4bc4f cargo fmt 2024-05-15 02:58:47 -04:00
Eitan Seri-Levi
4f0ecf2a5c Subscribe to the correct subnets for electra attestations (#5782)
* subscribe to the correct att subnets for electra

* subscribe to the correct att subnets for electra
2024-05-15 02:58:34 -04:00
realbigsean
ec055f4717 cargo fmt 2024-05-15 02:57:35 -04:00
Eitan Seri-Levi
8506fb056f Subscribe to the correct subnets for electra attestations (#5782)
* subscribe to the correct att subnets for electra

* subscribe to the correct att subnets for electra
2024-05-15 02:57:06 -04:00
Pawan Dhananjay
84689379af Merge branch 'electra-engine-api' into beacon-api-electra 2024-05-14 15:05:42 +03:00
Pawan Dhananjay
7f5490675c Fix ser/de 2024-05-14 15:03:52 +03:00
Pawan Dhananjay
c680164742 Send unagg attestation based on fork 2024-05-13 10:18:17 +03:00
realbigsean
812b3d77d0 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-12 15:57:17 -04:00
realbigsean
5f73d315b5 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-12 15:55:12 -04:00
realbigsean
c53d4ac459 Merge branches 'block-processing-electra' and 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-05-12 15:54:33 -04:00
realbigsean
793764f066 Merge pull request #5766 from ethDreamer/two_fixes
Fix Consolidation Sigs & Withdrawals
2024-05-12 15:53:22 -04:00
Mark Mackey
217fa9f805 Fix Consolidation Sigs & Withdrawals 2024-05-12 20:31:20 +03:00
realbigsean
179324b9fa Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-12 06:56:48 -04:00
realbigsean
67ba04e9ec Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-12 06:56:18 -04:00
realbigsean
97e88dd23d Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-05-12 06:55:52 -04:00
realbigsean
c900a88461 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-12 06:55:06 -04:00
ethDreamer
aaf8e503c6 Fix Electra Fork Choice Tests (#5764) 2024-05-12 12:43:59 +03:00
ethDreamer
af7ba6ff70 Fix Electra Fork Choice Tests (#5764) 2024-05-12 12:41:29 +03:00
ethDreamer
3b1fb0ad81 Fix Electra Fork Choice Tests (#5764) 2024-05-12 12:24:19 +03:00
realbigsean
b8dc6288f1 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-11 10:25:44 -04:00
realbigsean
fc2c942de4 accept new payload v4 in mock el 2024-05-11 10:25:01 -04:00
realbigsean
28cf796072 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-11 10:24:44 -04:00
realbigsean
75f22ee844 fix initiate exit for single pass 2024-05-11 10:23:59 -04:00
realbigsean
f4907ef971 drop initiate validator ordering optimization 2024-05-11 10:23:50 -04:00
realbigsean
a75257fb6e use correct max eb in epoch cache initialization 2024-05-11 10:23:41 -04:00
realbigsean
1ab786a9a9 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-11 05:09:39 -04:00
realbigsean
518a91a7a6 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-11 05:09:21 -04:00
realbigsean
261551e3c6 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-05-11 05:08:28 -04:00
realbigsean
a97e86c1a6 only increment the state deposit index on old deposit flow 2024-05-11 05:07:59 -04:00
realbigsean
9a22eb8698 update committee offset 2024-05-10 21:18:34 -04:00
realbigsean
5364ba53fa update committee offset 2024-05-10 21:18:21 -04:00
realbigsean
40c4c00097 update committee offset 2024-05-10 21:18:00 -04:00
realbigsean
b819d2d0a6 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-10 21:16:59 -04:00
realbigsean
e1dcfb6960 update committee offset 2024-05-10 21:15:21 -04:00
realbigsean
4b28872671 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-10 12:11:18 -04:00
realbigsean
9bd430bea2 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-10 12:10:58 -04:00
realbigsean
677a94d507 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-epoch-proc 2024-05-10 12:10:34 -04:00
realbigsean
f60eac6abc Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-10 12:09:51 -04:00
realbigsean
89e4de90d5 don't fail on empty consolidations 2024-05-10 12:09:39 -04:00
realbigsean
7926afeb18 Merge pull request #5749 from sigp/electra_op_pool
Optimise Electra op pool aggregation
2024-05-10 12:08:34 -04:00
realbigsean
be9c4bb587 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-10 10:01:11 -04:00
realbigsean
6d2c396ef2 Merge branch 'electra-epoch-proc' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-10 10:00:42 -04:00
realbigsean
ba02ffc162 add deposit limit for old deposit queue 2024-05-10 09:52:51 -04:00
Michael Sproul
08e045875f Electra epoch processing 2024-05-10 17:11:46 +10:00
Michael Sproul
72548cb54e Fix assert 2024-05-10 12:49:15 +10:00
Michael Sproul
16265ef455 Add comment to max cover optimisation 2024-05-10 12:44:18 +10:00
Michael Sproul
437e8516cd Fix bugs in cross-committee aggregation 2024-05-10 12:29:57 +10:00
realbigsean
d505c04507 updates after merge 2024-05-09 21:33:48 -04:00
realbigsean
e494b411e7 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-09 21:31:32 -04:00
realbigsean
9b5ea9d867 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-09 21:30:55 -04:00
realbigsean
aa83e8b889 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-09 21:29:53 -04:00
realbigsean
e4485570f2 update the naive agg pool interface (#5760) 2024-05-09 21:29:31 -04:00
realbigsean
6477eecc65 update beacon api aggregate attestationendpoint 2024-05-09 21:19:59 -04:00
Eitan Seri-Levi
411fcee2ac Compute on chain aggregate impl (#5752)
* add compute_on_chain_agg impl to op pool changes

* fmt

* get op pool tests to pass
2024-05-10 10:56:20 +10:00
realbigsean
f9d4a28168 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-09 19:04:40 -04:00
realbigsean
fae4a2bccc Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-09 19:04:03 -04:00
realbigsean
19f8333a8b Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-09 19:03:47 -04:00
realbigsean
b807d39bad Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into electra_op_pool 2024-05-09 18:18:01 -04:00
ethDreamer
ca0967119b Revert "Get electra_op_pool up to date (#5756)" (#5757)
This reverts commit ab9e58aa3d.
2024-05-09 17:10:04 -05:00
ethDreamer
ab9e58aa3d Get electra_op_pool up to date (#5756)
* fix get attesting indices (#5742)

* fix get attesting indices

* better errors

* fix compile

* only get committee index once

* Ef test fixes (#5753)

* attestation related ef test fixes

* delete commented out stuff

* Fix Aggregation Pool for Electra (#5754)

* Fix Aggregation Pool for Electra

* Remove Outdated Interface

* fix ssz (#5755)

---------

Co-authored-by: realbigsean <sean@sigmaprime.io>
2024-05-09 16:59:39 -05:00
realbigsean
c30f70906b fix ssz (#5755) 2024-05-09 17:49:12 -04:00
ethDreamer
cb8c8f59cf Fix Aggregation Pool for Electra (#5754)
* Fix Aggregation Pool for Electra

* Remove Outdated Interface
2024-05-09 15:50:11 -04:00
realbigsean
c575cd61b7 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-09 13:47:12 -04:00
realbigsean
6fe919a8e7 Merge branch 'block-processing-electra' of https://github.com/sigp/lighthouse into electra-engine-api 2024-05-09 13:46:19 -04:00
realbigsean
36a559e11a Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-09 13:45:46 -04:00
realbigsean
07229b76ed Ef test fixes (#5753)
* attestation related ef test fixes

* delete commented out stuff
2024-05-09 13:40:52 -04:00
realbigsean
3ea3d226e1 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-09 09:44:58 -04:00
realbigsean
e32dfcdcad fix get attesting indices (#5742)
* fix get attesting indices

* better errors

* fix compile

* only get committee index once
2024-05-09 09:34:56 -04:00
Michael Sproul
7cb7653d36 Sketch op pool changes 2024-05-09 17:45:52 +10:00
realbigsean
c20fc48eb4 Merge branch 'electra-engine-api' of https://github.com/sigp/lighthouse into beacon-api-electra 2024-05-08 22:18:47 -04:00
realbigsean
c8fca4f1d0 udpates for aggregate attestation endpoint 2024-05-08 22:18:07 -04:00
Pawan Dhananjay
5e1d5ff641 Add comments for potential versioning confusion 2024-05-08 16:46:23 -07:00
Pawan Dhananjay
dd5c9a8c81 Add support for electra fields in getPayloadBodies 2024-05-08 16:22:01 -07:00
Pawan Dhananjay
683de56f6e Fix todos 2024-05-08 15:08:56 -07:00
Pawan Dhananjay
3ef7c9078e Note todo 2024-05-08 15:08:56 -07:00
Pawan Dhananjay
1ddd078d32 Handle new engine api methods in mock EL 2024-05-08 15:08:56 -07:00
Pawan Dhananjay
ca2a946175 Fix the versioning of v4 requests 2024-05-08 15:08:55 -07:00
Pawan Dhananjay
42a499373f Add new engine api methods 2024-05-08 15:08:52 -07:00
realbigsean
7abb7621d5 fix attestation verification 2024-05-08 14:11:22 -04:00
realbigsean
721e73fd82 Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-08 12:53:14 -04:00
ethDreamer
43c3f63e30 cargo fmt (#5740) 2024-05-08 11:53:08 -05:00
realbigsean
7c0a8f840e Merge branch 'electra_attestation_changes' of https://github.com/sigp/lighthouse into block-processing-electra 2024-05-08 12:48:09 -04:00
ethDreamer
f30246b9d4 Some small changes (#5739) 2024-05-08 11:40:08 -05:00
Eitan Seri-Levi
90179d4a88 EIP7549 get_attestation_indices (#5657)
* get attesting indices electra impl

* fmt

* get tests to pass

* fmt

* fix some beacon chain tests

* fmt

* fix slasher test

* fmt got me again

* fix more tests

* fix tests
2024-05-08 11:32:44 -05:00
realbigsean
8517236aed update slash_validator 2024-05-07 16:24:14 -04:00
realbigsean
5728f78032 clean up 2024-05-07 16:02:14 -04:00
realbigsean
75ab913a3a exit updates 2024-05-07 15:32:07 -04:00
realbigsean
31955c2e7f update process operations function 2024-05-07 14:52:16 -04:00
realbigsean
c40bec9319 add consolidation processing 2024-05-07 14:01:44 -04:00
realbigsean
32357d8f0a process deposit receipts 2024-05-07 11:07:12 -04:00
realbigsean
1d5f75582f add execution layer withdrawal request processing 2024-05-07 10:25:53 -04:00
realbigsean
3c688410cc add apply_deposit changes 2024-05-07 08:48:01 -04:00
realbigsean
f1f9f92dec update process_operations deposit length check 2024-05-06 21:56:50 -04:00
realbigsean
e0abede1d1 cleanup withdrawals processing 2024-05-06 21:25:47 -04:00
realbigsean
e2e82ff1b9 process withdrawals updates 2024-05-06 18:56:16 -04:00
realbigsean
2c2e44c4ed Merge pull request #5726 from realbigsean/electra_attestation_changes
Merge unstable into Electra attestation changes
2024-05-06 18:04:59 -04:00
realbigsean
38382a3ca1 cargo fmt 2024-05-06 17:32:25 -04:00
realbigsean
9f6de8e5d7 Merge remote-tracking branch 'sigp/unstable' into electra_attestation_changes 2024-05-06 17:26:43 -04:00
ethDreamer
19a9479234 Superstruct AggregateAndProof (#5715)
* Upgrade `superstruct` to `0.8.0`

* superstruct `AggregateAndProof`
2024-05-06 10:09:22 -05:00
ethDreamer
7c6526d978 Finish EF Test Fork Agnostic (#5714) 2024-05-03 14:09:49 -05:00
ethDreamer
9b98f4e297 Make EF Tests Fork-Agnostic (#5713) 2024-05-03 13:57:01 -05:00
Mark Mackey
3a41e137d1 Merge remote-tracking branch 'upstream/unstable' into electra_attestation_changes 2024-05-02 18:23:32 -05:00
ethDreamer
e6c7f145dd superstruct the AttesterSlashing (#5636)
* `superstruct` Attester Fork Variants

* Push a little further

* Deal with Encode / Decode of AttesterSlashing

* not so sure about this..

* Stop Encode/Decode Bounds from Propagating Out

* Tons of Changes..

* More Conversions to AttestationRef

* Add AsReference trait (#15)

* Add AsReference trait

* Fix some snafus

* Got it Compiling! :D

* Got Tests Building

* Get beacon chain tests compiling

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2024-05-02 18:00:21 -05:00
Eitan Seri-Levi
3b7132bc0d Attestation superstruct changes for EIP 7549 (#5644)
* update

* experiment

* superstruct changes

* revert

* superstruct changes

* fix tests

* indexed attestation

* indexed attestation superstruct

* updated TODOs
2024-04-30 11:49:08 -05:00
22 changed files with 999 additions and 228 deletions

View File

@@ -546,12 +546,20 @@ impl<E: EthSpec> Eth1ChainBackend<E> for CachingEth1Backend<E> {
             state.eth1_data().deposit_count
         };
 
-        match deposit_index.cmp(&deposit_count) {
+        // [New in Electra:EIP6110]
+        let deposit_index_limit =
+            if let Ok(deposit_receipts_start_index) = state.deposit_requests_start_index() {
+                std::cmp::min(deposit_count, deposit_receipts_start_index)
+            } else {
+                deposit_count
+            };
+
+        match deposit_index.cmp(&deposit_index_limit) {
             Ordering::Greater => Err(Error::DepositIndexTooHigh),
             Ordering::Equal => Ok(vec![]),
             Ordering::Less => {
                 let next = deposit_index;
-                let last = std::cmp::min(deposit_count, next + E::MaxDeposits::to_u64());
+                let last = std::cmp::min(deposit_index_limit, next + E::MaxDeposits::to_u64());
                 self.core
                     .deposits()
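The hunk above caps the Eth1 deposit range at `deposit_requests_start_index` once Electra activates, since EIP-6110 moves deposit processing on-chain from that index onward. A minimal standalone sketch of the clamping logic (the function and its plain-integer signature are hypothetical simplifications; pre-Electra states, which lack the field, are modeled as `None`):

```rust
use std::cmp::Ordering;

/// Hypothetical standalone version of the deposit-range logic above.
/// Returns the half-open range `(next, last)` of deposit indices to include.
fn deposit_range(
    deposit_index: u64,
    deposit_count: u64,
    deposit_requests_start_index: Option<u64>, // None pre-Electra
    max_deposits: u64,
) -> Result<(u64, u64), &'static str> {
    // [New in Electra:EIP6110] never serve deposits at or beyond the index
    // where on-chain deposit requests take over.
    let deposit_index_limit = match deposit_requests_start_index {
        Some(start) => std::cmp::min(deposit_count, start),
        None => deposit_count,
    };
    match deposit_index.cmp(&deposit_index_limit) {
        Ordering::Greater => Err("deposit index too high"),
        Ordering::Equal => Ok((deposit_index, deposit_index)), // empty range
        Ordering::Less => {
            let next = deposit_index;
            let last = std::cmp::min(deposit_index_limit, next + max_deposits);
            Ok((next, last))
        }
    }
}
```

Note the limit clamps both the error check and the batch end, so a node never re-serves deposits that Electra blocks already carry as deposit requests.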

View File

@@ -31,6 +31,7 @@ mod validators;
 mod version;
 
 use crate::produce_block::{produce_blinded_block_v2, produce_block_v2, produce_block_v3};
+use crate::version::fork_versioned_response;
 use beacon_chain::{
     attestation_verification::VerifiedAttestation, observed_operations::ObservationOutcome,
     validator_monitor::timestamp_now, AttestationError as AttnError, BeaconChain, BeaconChainError,
@@ -256,12 +257,15 @@ pub fn prometheus_metrics() -> warp::filters::log::Log<impl Fn(warp::filters::lo
         .or_else(|| starts_with("v1/validator/duties/sync"))
         .or_else(|| starts_with("v1/validator/attestation_data"))
         .or_else(|| starts_with("v1/validator/aggregate_attestation"))
+        .or_else(|| starts_with("v2/validator/aggregate_attestation"))
         .or_else(|| starts_with("v1/validator/aggregate_and_proofs"))
+        .or_else(|| starts_with("v2/validator/aggregate_and_proofs"))
         .or_else(|| starts_with("v1/validator/sync_committee_contribution"))
         .or_else(|| starts_with("v1/validator/contribution_and_proofs"))
         .or_else(|| starts_with("v1/validator/beacon_committee_subscriptions"))
         .or_else(|| starts_with("v1/validator/sync_committee_subscriptions"))
         .or_else(|| starts_with("v1/beacon/pool/attestations"))
+        .or_else(|| starts_with("v2/beacon/pool/attestations"))
         .or_else(|| starts_with("v1/beacon/pool/sync_committees"))
         .or_else(|| starts_with("v1/beacon/blocks/head/root"))
         .or_else(|| starts_with("v1/validator/prepare_beacon_proposer"))
@@ -1623,26 +1627,38 @@ pub fn serve<T: BeaconChainTypes>(
         );
 
     // GET beacon/blocks/{block_id}/attestations
-    let get_beacon_block_attestations = beacon_blocks_path_v1
+    let get_beacon_block_attestations = beacon_blocks_path_any
         .clone()
         .and(warp::path("attestations"))
         .and(warp::path::end())
         .then(
-            |block_id: BlockId,
+            |endpoint_version: EndpointVersion,
+             block_id: BlockId,
              task_spawner: TaskSpawner<T::EthSpec>,
              chain: Arc<BeaconChain<T>>| {
-                task_spawner.blocking_json_task(Priority::P1, move || {
+                task_spawner.blocking_response_task(Priority::P1, move || {
                     let (block, execution_optimistic, finalized) =
                         block_id.blinded_block(&chain)?;
-                    Ok(api_types::GenericResponse::from(
-                        block
-                            .message()
-                            .body()
-                            .attestations()
-                            .map(|att| att.clone_as_attestation())
-                            .collect::<Vec<_>>(),
-                    )
-                    .add_execution_optimistic_finalized(execution_optimistic, finalized))
+                    let fork_name = block
+                        .fork_name(&chain.spec)
+                        .map_err(inconsistent_fork_rejection)?;
+                    let atts = block
+                        .message()
+                        .body()
+                        .attestations()
+                        .map(|att| att.clone_as_attestation())
+                        .collect::<Vec<_>>();
+                    let res = execution_optimistic_finalized_fork_versioned_response(
+                        endpoint_version,
+                        fork_name,
+                        execution_optimistic,
+                        finalized,
+                        &atts,
+                    )?;
+                    Ok(add_consensus_version_header(
+                        warp::reply::json(&res).into_response(),
+                        fork_name,
+                    ))
                 })
             },
         );
@@ -1750,8 +1766,14 @@ pub fn serve<T: BeaconChainTypes>(
         .and(task_spawner_filter.clone())
         .and(chain_filter.clone());
 
+    let beacon_pool_path_any = any_version
+        .and(warp::path("beacon"))
+        .and(warp::path("pool"))
+        .and(task_spawner_filter.clone())
+        .and(chain_filter.clone());
+
     // POST beacon/pool/attestations
-    let post_beacon_pool_attestations = beacon_pool_path
+    let post_beacon_pool_attestations = beacon_pool_path_any
         .clone()
         .and(warp::path("attestations"))
         .and(warp::path::end())
@@ -1760,7 +1782,11 @@ pub fn serve<T: BeaconChainTypes>(
         .and(reprocess_send_filter)
         .and(log_filter.clone())
         .then(
-            |task_spawner: TaskSpawner<T::EthSpec>,
+            // V1 and V2 are identical except V2 has a consensus version header in the request.
+            // We only require this header for SSZ deserialization, which isn't supported for
+            // this endpoint presently.
+            |_endpoint_version: EndpointVersion,
+             task_spawner: TaskSpawner<T::EthSpec>,
              chain: Arc<BeaconChain<T>>,
              attestations: Vec<Attestation<T::EthSpec>>,
              network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
@@ -1781,16 +1807,17 @@ pub fn serve<T: BeaconChainTypes>(
         );
 
     // GET beacon/pool/attestations?committee_index,slot
-    let get_beacon_pool_attestations = beacon_pool_path
+    let get_beacon_pool_attestations = beacon_pool_path_any
         .clone()
         .and(warp::path("attestations"))
         .and(warp::path::end())
         .and(warp::query::<api_types::AttestationPoolQuery>())
         .then(
-            |task_spawner: TaskSpawner<T::EthSpec>,
+            |endpoint_version: EndpointVersion,
+             task_spawner: TaskSpawner<T::EthSpec>,
              chain: Arc<BeaconChain<T>>,
              query: api_types::AttestationPoolQuery| {
-                task_spawner.blocking_json_task(Priority::P1, move || {
+                task_spawner.blocking_response_task(Priority::P1, move || {
                     let query_filter = |data: &AttestationData| {
                         query.slot.map_or(true, |slot| slot == data.slot)
                             && query
@@ -1807,20 +1834,48 @@ pub fn serve<T: BeaconChainTypes>(
                             .filter(|&att| query_filter(att.data()))
                             .cloned(),
                     );
-                    Ok(api_types::GenericResponse::from(attestations))
+                    // Use the current slot to find the fork version, and convert all messages to the
+                    // current fork's format. This is to ensure consistent message types matching
+                    // `Eth-Consensus-Version`.
+                    let current_slot =
+                        chain
+                            .slot_clock
+                            .now()
+                            .ok_or(warp_utils::reject::custom_server_error(
+                                "unable to read slot clock".to_string(),
+                            ))?;
+                    let fork_name = chain.spec.fork_name_at_slot::<T::EthSpec>(current_slot);
+                    let attestations = attestations
+                        .into_iter()
+                        .filter(|att| {
+                            (fork_name.electra_enabled() && matches!(att, Attestation::Electra(_)))
+                                || (!fork_name.electra_enabled()
+                                    && matches!(att, Attestation::Base(_)))
+                        })
+                        .collect::<Vec<_>>();
+                    let res = fork_versioned_response(endpoint_version, fork_name, &attestations)?;
+                    Ok(add_consensus_version_header(
+                        warp::reply::json(&res).into_response(),
+                        fork_name,
+                    ))
                 })
             },
         );
 
     // POST beacon/pool/attester_slashings
-    let post_beacon_pool_attester_slashings = beacon_pool_path
+    let post_beacon_pool_attester_slashings = beacon_pool_path_any
         .clone()
         .and(warp::path("attester_slashings"))
         .and(warp::path::end())
         .and(warp_utils::json::json())
         .and(network_tx_filter.clone())
         .then(
-            |task_spawner: TaskSpawner<T::EthSpec>,
+            // V1 and V2 are identical except V2 has a consensus version header in the request.
+            // We only require this header for SSZ deserialization, which isn't supported for
+            // this endpoint presently.
+            |_endpoint_version: EndpointVersion,
+             task_spawner: TaskSpawner<T::EthSpec>,
              chain: Arc<BeaconChain<T>>,
              slashing: AttesterSlashing<T::EthSpec>,
              network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>| {
@@ -1857,18 +1912,45 @@ pub fn serve<T: BeaconChainTypes>(
         );
 
     // GET beacon/pool/attester_slashings
-    let get_beacon_pool_attester_slashings = beacon_pool_path
-        .clone()
-        .and(warp::path("attester_slashings"))
-        .and(warp::path::end())
-        .then(
-            |task_spawner: TaskSpawner<T::EthSpec>, chain: Arc<BeaconChain<T>>| {
-                task_spawner.blocking_json_task(Priority::P1, move || {
-                    let attestations = chain.op_pool.get_all_attester_slashings();
-                    Ok(api_types::GenericResponse::from(attestations))
-                })
-            },
-        );
+    let get_beacon_pool_attester_slashings =
+        beacon_pool_path_any
+            .clone()
+            .and(warp::path("attester_slashings"))
+            .and(warp::path::end())
+            .then(
+                |endpoint_version: EndpointVersion,
+                 task_spawner: TaskSpawner<T::EthSpec>,
+                 chain: Arc<BeaconChain<T>>| {
+                    task_spawner.blocking_response_task(Priority::P1, move || {
+                        let slashings = chain.op_pool.get_all_attester_slashings();
+
+                        // Use the current slot to find the fork version, and convert all messages to the
+                        // current fork's format. This is to ensure consistent message types matching
+                        // `Eth-Consensus-Version`.
+                        let current_slot = chain.slot_clock.now().ok_or(
+                            warp_utils::reject::custom_server_error(
+                                "unable to read slot clock".to_string(),
+                            ),
+                        )?;
+                        let fork_name = chain.spec.fork_name_at_slot::<T::EthSpec>(current_slot);
+                        let slashings = slashings
+                            .into_iter()
+                            .filter(|slashing| {
+                                (fork_name.electra_enabled()
+                                    && matches!(slashing, AttesterSlashing::Electra(_)))
+                                    || (!fork_name.electra_enabled()
+                                        && matches!(slashing, AttesterSlashing::Base(_)))
+                            })
+                            .collect::<Vec<_>>();
+                        let res = fork_versioned_response(endpoint_version, fork_name, &slashings)?;
+                        Ok(add_consensus_version_header(
+                            warp::reply::json(&res).into_response(),
+                            fork_name,
+                        ))
+                    })
+                },
+            );
 
     // POST beacon/pool/proposer_slashings
     let post_beacon_pool_proposer_slashings = beacon_pool_path
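Both pool GET endpoints above apply the same rule: look up the current fork from the slot clock, then return only the message variant matching that fork, so the payload stays consistent with the `Eth-Consensus-Version` response header. A minimal sketch of that filter (the enum and its integer payload are hypothetical stand-ins for the superstruct variants):

```rust
/// Hypothetical stand-in for the superstruct `Attestation` variants.
#[derive(Debug, PartialEq, Clone)]
enum Attestation {
    Base(u64),    // payload elided
    Electra(u64), // payload elided
}

/// Keep only the variant matching the advertised fork, mirroring the
/// filters in the pool endpoints above.
fn filter_by_fork(atts: Vec<Attestation>, electra_enabled: bool) -> Vec<Attestation> {
    atts.into_iter()
        .filter(|att| {
            (electra_enabled && matches!(att, Attestation::Electra(_)))
                || (!electra_enabled && matches!(att, Attestation::Base(_)))
        })
        .collect()
}
```

The same shape applies to `AttesterSlashing::Base` vs `AttesterSlashing::Electra`: messages of the "wrong" fork are silently dropped from the response rather than converted.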
@@ -3175,7 +3257,7 @@ pub fn serve<T: BeaconChainTypes>(
         );
 
     // GET validator/aggregate_attestation?attestation_data_root,slot
-    let get_validator_aggregate_attestation = eth_v1
+    let get_validator_aggregate_attestation = any_version
         .and(warp::path("validator"))
         .and(warp::path("aggregate_attestation"))
         .and(warp::path::end())
@@ -3184,29 +3266,45 @@ pub fn serve<T: BeaconChainTypes>(
         .and(task_spawner_filter.clone())
         .and(chain_filter.clone())
         .then(
-            |query: api_types::ValidatorAggregateAttestationQuery,
+            |endpoint_version: EndpointVersion,
+             query: api_types::ValidatorAggregateAttestationQuery,
              not_synced_filter: Result<(), Rejection>,
              task_spawner: TaskSpawner<T::EthSpec>,
              chain: Arc<BeaconChain<T>>| {
                 task_spawner.blocking_json_task(Priority::P0, move || {
                     not_synced_filter?;
-                    chain
-                        .get_pre_electra_aggregated_attestation_by_slot_and_root(
-                            query.slot,
-                            &query.attestation_data_root,
-                        )
-                        .map_err(|e| {
-                            warp_utils::reject::custom_bad_request(format!(
-                                "unable to fetch aggregate: {:?}",
-                                e
-                            ))
-                        })?
-                        .map(api_types::GenericResponse::from)
-                        .ok_or_else(|| {
-                            warp_utils::reject::custom_not_found(
-                                "no matching aggregate found".to_string(),
-                            )
-                        })
+                    let res = if endpoint_version == V2 {
+                        let Some(committee_index) = query.committee_index else {
+                            return Err(warp_utils::reject::custom_bad_request(
+                                "missing committee index".to_string(),
+                            ));
+                        };
+                        chain.get_aggregated_attestation_electra(
+                            query.slot,
+                            &query.attestation_data_root,
+                            committee_index,
+                        )
+                    } else if endpoint_version == V1 {
+                        chain.get_pre_electra_aggregated_attestation_by_slot_and_root(
+                            query.slot,
+                            &query.attestation_data_root,
+                        )
+                    } else {
+                        return Err(unsupported_version_rejection(endpoint_version));
+                    };
+                    res.map_err(|e| {
+                        warp_utils::reject::custom_bad_request(format!(
+                            "unable to fetch aggregate: {:?}",
+                            e
+                        ))
+                    })?
+                    .map(api_types::GenericResponse::from)
+                    .ok_or_else(|| {
+                        warp_utils::reject::custom_not_found(
+                            "no matching aggregate found".to_string(),
+                        )
+                    })
                 })
             },
         );
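The hunk above versions the `aggregate_attestation` endpoint: V2 requires an explicit `committee_index` query parameter (under EIP-7549 the committee index is no longer part of `AttestationData`, so the data root alone no longer identifies one committee's aggregate), while V1 keeps the pre-Electra lookup. A simplified, self-contained sketch of that dispatch (the enum and function here are hypothetical stand-ins, not Lighthouse's real types):

```rust
/// Hypothetical stand-in for the HTTP API's endpoint version marker.
#[derive(Clone, Copy, PartialEq, Debug)]
enum EndpointVersion {
    V1,
    V2,
}

/// Mirror of the V1/V2 branch above: V2 must name a committee, V1 must not need one.
fn select_aggregate_query(
    version: EndpointVersion,
    committee_index: Option<u64>,
) -> Result<String, String> {
    match version {
        EndpointVersion::V2 => {
            // Reject V2 requests that omit the committee index, as the handler does.
            let index = committee_index.ok_or("missing committee index".to_string())?;
            Ok(format!("electra aggregate for committee {index}"))
        }
        // V1: pre-Electra lookup keyed only by slot + attestation data root.
        EndpointVersion::V1 => Ok("pre-electra aggregate".to_string()),
    }
}
```

The real handler additionally rejects any version other than V1 or V2 with `unsupported_version_rejection`, which this sketch omits.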
@@ -3302,7 +3400,7 @@ pub fn serve<T: BeaconChainTypes>(
         );
 
     // POST validator/aggregate_and_proofs
-    let post_validator_aggregate_and_proofs = eth_v1
+    let post_validator_aggregate_and_proofs = any_version
         .and(warp::path("validator"))
         .and(warp::path("aggregate_and_proofs"))
         .and(warp::path::end())
@@ -3313,7 +3411,11 @@ pub fn serve<T: BeaconChainTypes>(
         .and(network_tx_filter.clone())
         .and(log_filter.clone())
         .then(
-            |not_synced_filter: Result<(), Rejection>,
+            // V1 and V2 are identical except V2 has a consensus version header in the request.
+            // We only require this header for SSZ deserialization, which isn't supported for
+            // this endpoint presently.
+            |_endpoint_version: EndpointVersion,
+             not_synced_filter: Result<(), Rejection>,
              task_spawner: TaskSpawner<T::EthSpec>,
              chain: Arc<BeaconChain<T>>,
              aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>,

View File

@@ -150,8 +150,13 @@ async fn attestations_across_fork_with_skip_slots() {
         .collect::<Vec<_>>();
 
     assert!(!unaggregated_attestations.is_empty());
 
+    let fork_name = harness.spec.fork_name_at_slot::<E>(fork_slot);
     client
-        .post_beacon_pool_attestations(&unaggregated_attestations)
+        .post_beacon_pool_attestations_v1(&unaggregated_attestations)
+        .await
+        .unwrap();
+    client
+        .post_beacon_pool_attestations_v2(&unaggregated_attestations, fork_name)
         .await
         .unwrap();
@@ -162,7 +167,11 @@ async fn attestations_across_fork_with_skip_slots() {
     assert!(!signed_aggregates.is_empty());
 
     client
-        .post_validator_aggregate_and_proof(&signed_aggregates)
+        .post_validator_aggregate_and_proof_v1(&signed_aggregates)
+        .await
+        .unwrap();
+    client
+        .post_validator_aggregate_and_proof_v2(&signed_aggregates, fork_name)
         .await
         .unwrap();
 }

View File

@@ -893,9 +893,10 @@ async fn queue_attestations_from_http() {
         .flat_map(|attestations| attestations.into_iter().map(|(att, _subnet)| att))
         .collect::<Vec<_>>();
 
+    let fork_name = tester.harness.spec.fork_name_at_slot::<E>(attestation_slot);
     let attestation_future = tokio::spawn(async move {
         client
-            .post_beacon_pool_attestations(&attestations)
+            .post_beacon_pool_attestations_v2(&attestations, fork_name)
             .await
             .expect("attestations should be processed successfully")
     });


@@ -1668,7 +1668,7 @@ impl ApiTester {
for block_id in self.interesting_block_ids() {
let result = self
.client
.get_beacon_blocks_attestations_v2(block_id.0)
.await
.unwrap()
.map(|res| res.data);
@@ -1699,9 +1699,9 @@ impl ApiTester {
self
}
pub async fn test_post_beacon_pool_attestations_valid_v1(mut self) -> Self {
self.client
.post_beacon_pool_attestations_v1(self.attestations.as_slice())
.await
.unwrap();
@@ -1713,7 +1713,25 @@ impl ApiTester {
self
}
pub async fn test_post_beacon_pool_attestations_valid_v2(mut self) -> Self {
let fork_name = self
.attestations
.first()
.map(|att| self.chain.spec.fork_name_at_slot::<E>(att.data().slot))
.unwrap();
self.client
.post_beacon_pool_attestations_v2(self.attestations.as_slice(), fork_name)
.await
.unwrap();
assert!(
self.network_rx.network_recv.recv().await.is_some(),
"valid attestation should be sent to network"
);
self
}
pub async fn test_post_beacon_pool_attestations_invalid_v1(mut self) -> Self {
let mut attestations = Vec::new();
for attestation in &self.attestations {
let mut invalid_attestation = attestation.clone();
@@ -1726,7 +1744,7 @@ impl ApiTester {
let err = self
.client
.post_beacon_pool_attestations_v1(attestations.as_slice())
.await
.unwrap_err();
@@ -1749,6 +1767,48 @@ impl ApiTester {
self
}
pub async fn test_post_beacon_pool_attestations_invalid_v2(mut self) -> Self {
let mut attestations = Vec::new();
for attestation in &self.attestations {
let mut invalid_attestation = attestation.clone();
invalid_attestation.data_mut().slot += 1;
// add both to ensure we only fail on invalid attestations
attestations.push(attestation.clone());
attestations.push(invalid_attestation);
}
let fork_name = self
.attestations
.first()
.map(|att| self.chain.spec.fork_name_at_slot::<E>(att.data().slot))
.unwrap();
let err_v2 = self
.client
.post_beacon_pool_attestations_v2(attestations.as_slice(), fork_name)
.await
.unwrap_err();
match err_v2 {
Error::ServerIndexedMessage(IndexedErrorMessage {
code,
message: _,
failures,
}) => {
assert_eq!(code, 400);
assert_eq!(failures.len(), self.attestations.len());
}
_ => panic!("query did not fail correctly"),
}
assert!(
self.network_rx.network_recv.recv().await.is_some(),
"if some attestations are valid, we should send them to the network"
);
self
}
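All of the v2 pool helpers above derive a `fork_name` from a slot before calling the client. A dependency-free sketch of that slot → epoch → fork lookup, with made-up fork epochs (the real `ChainSpec::fork_name_at_slot` consults the full fork schedule; `DENEB_EPOCH`/`ELECTRA_EPOCH` below are illustrative, not the mainnet values):

```rust
// Hypothetical, simplified stand-in for `ChainSpec::fork_name_at_slot`.
const SLOTS_PER_EPOCH: u64 = 32;
const DENEB_EPOCH: u64 = 10; // assumed for illustration
const ELECTRA_EPOCH: u64 = 20; // assumed for illustration

#[derive(Debug, PartialEq)]
enum ForkName {
    Earlier,
    Deneb,
    Electra,
}

fn fork_name_at_slot(slot: u64) -> ForkName {
    // A slot belongs to the fork whose activation epoch it has reached.
    let epoch = slot / SLOTS_PER_EPOCH;
    if epoch >= ELECTRA_EPOCH {
        ForkName::Electra
    } else if epoch >= DENEB_EPOCH {
        ForkName::Deneb
    } else {
        ForkName::Earlier
    }
}

fn main() {
    assert_eq!(fork_name_at_slot(ELECTRA_EPOCH * SLOTS_PER_EPOCH), ForkName::Electra);
    assert_eq!(fork_name_at_slot(DENEB_EPOCH * SLOTS_PER_EPOCH), ForkName::Deneb);
    assert_eq!(fork_name_at_slot(0), ForkName::Earlier);
}
```

The tests use this value to set the consensus-version header expected by the v2 endpoints.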
pub async fn test_get_beacon_light_client_bootstrap(self) -> Self {
let block_id = BlockId(CoreBlockId::Finalized);
@@ -1812,7 +1872,7 @@ impl ApiTester {
pub async fn test_get_beacon_pool_attestations(self) -> Self {
let result = self
.client
.get_beacon_pool_attestations_v1(None, None)
.await
.unwrap()
.data;
@@ -1822,12 +1882,20 @@ impl ApiTester {
assert_eq!(result, expected);
let result = self
.client
.get_beacon_pool_attestations_v2(None, None)
.await
.unwrap()
.data;
assert_eq!(result, expected);
self
}
pub async fn test_post_beacon_pool_attester_slashings_valid_v1(mut self) -> Self {
self.client
.post_beacon_pool_attester_slashings_v1(&self.attester_slashing)
.await
.unwrap();
@@ -1839,7 +1907,25 @@ impl ApiTester {
self
}
pub async fn test_post_beacon_pool_attester_slashings_valid_v2(mut self) -> Self {
let fork_name = self
.chain
.spec
.fork_name_at_slot::<E>(self.attester_slashing.attestation_1().data().slot);
self.client
.post_beacon_pool_attester_slashings_v2(&self.attester_slashing, fork_name)
.await
.unwrap();
assert!(
self.network_rx.network_recv.recv().await.is_some(),
"valid attester slashing should be sent to network"
);
self
}
pub async fn test_post_beacon_pool_attester_slashings_invalid_v1(mut self) -> Self {
let mut slashing = self.attester_slashing.clone();
match &mut slashing {
AttesterSlashing::Base(ref mut slashing) => {
@@ -1851,7 +1937,35 @@ impl ApiTester {
}
self.client
.post_beacon_pool_attester_slashings_v1(&slashing)
.await
.unwrap_err();
assert!(
self.network_rx.network_recv.recv().now_or_never().is_none(),
"invalid attester slashing should not be sent to network"
);
self
}
pub async fn test_post_beacon_pool_attester_slashings_invalid_v2(mut self) -> Self {
let mut slashing = self.attester_slashing.clone();
match &mut slashing {
AttesterSlashing::Base(ref mut slashing) => {
slashing.attestation_1.data.slot += 1;
}
AttesterSlashing::Electra(ref mut slashing) => {
slashing.attestation_1.data.slot += 1;
}
}
let fork_name = self
.chain
.spec
.fork_name_at_slot::<E>(self.attester_slashing.attestation_1().data().slot);
self.client
.post_beacon_pool_attester_slashings_v2(&slashing, fork_name)
.await
.unwrap_err();
@@ -1866,7 +1980,7 @@ impl ApiTester {
pub async fn test_get_beacon_pool_attester_slashings(self) -> Self {
let result = self
.client
.get_beacon_pool_attester_slashings_v1()
.await
.unwrap()
.data;
@@ -1875,6 +1989,14 @@ impl ApiTester {
assert_eq!(result, expected);
let result = self
.client
.get_beacon_pool_attester_slashings_v2()
.await
.unwrap()
.data;
assert_eq!(result, expected);
self
}
@@ -3233,30 +3355,52 @@ impl ApiTester {
}
pub async fn test_get_validator_aggregate_attestation(self) -> Self {
if self
.chain
.spec
.fork_name_at_slot::<E>(self.chain.slot().unwrap())
.electra_enabled()
{
for attestation in self.chain.naive_aggregation_pool.read().iter() {
let result = self
.client
.get_validator_aggregate_attestation_v2(
attestation.data().slot,
attestation.data().tree_hash_root(),
attestation.committee_index().unwrap(),
)
.await
.unwrap()
.unwrap()
.data;
let expected = attestation;
assert_eq!(&result, expected);
}
} else {
let attestation = self
.chain
.head_beacon_block()
.message()
.body()
.attestations()
.next()
.unwrap()
.clone_as_attestation();
let result = self
.client
.get_validator_aggregate_attestation_v1(
attestation.data().slot,
attestation.data().tree_hash_root(),
)
.await
.unwrap()
.unwrap()
.data;
let expected = attestation;
assert_eq!(result, expected);
}
self
}
@@ -3355,11 +3499,11 @@ impl ApiTester {
)
}
pub async fn test_get_validator_aggregate_and_proofs_valid_v1(mut self) -> Self {
let aggregate = self.get_aggregate().await;
self.client
.post_validator_aggregate_and_proof_v1::<E>(&[aggregate])
.await
.unwrap();
@@ -3368,7 +3512,7 @@ impl ApiTester {
self
}
pub async fn test_get_validator_aggregate_and_proofs_invalid_v1(mut self) -> Self {
let mut aggregate = self.get_aggregate().await;
match &mut aggregate {
SignedAggregateAndProof::Base(ref mut aggregate) => {
@@ -3380,7 +3524,7 @@ impl ApiTester {
}
self.client
.post_validator_aggregate_and_proof_v1::<E>(&[aggregate.clone()])
.await
.unwrap_err();
@@ -3389,6 +3533,46 @@ impl ApiTester {
self
}
pub async fn test_get_validator_aggregate_and_proofs_valid_v2(mut self) -> Self {
let aggregate = self.get_aggregate().await;
let fork_name = self
.chain
.spec
.fork_name_at_slot::<E>(aggregate.message().aggregate().data().slot);
self.client
.post_validator_aggregate_and_proof_v2::<E>(&[aggregate], fork_name)
.await
.unwrap();
assert!(self.network_rx.network_recv.recv().await.is_some());
self
}
pub async fn test_get_validator_aggregate_and_proofs_invalid_v2(mut self) -> Self {
let mut aggregate = self.get_aggregate().await;
match &mut aggregate {
SignedAggregateAndProof::Base(ref mut aggregate) => {
aggregate.message.aggregate.data.slot += 1;
}
SignedAggregateAndProof::Electra(ref mut aggregate) => {
aggregate.message.aggregate.data.slot += 1;
}
}
let fork_name = self
.chain
.spec
.fork_name_at_slot::<E>(aggregate.message().aggregate().data().slot);
self.client
.post_validator_aggregate_and_proof_v2::<E>(&[aggregate], fork_name)
.await
.unwrap_err();
assert!(self.network_rx.network_recv.recv().now_or_never().is_none());
self
}
pub async fn test_get_validator_beacon_committee_subscriptions(mut self) -> Self {
let subscription = BeaconCommitteeSubscription {
validator_index: 0,
@@ -3484,7 +3668,7 @@ impl ApiTester {
pub async fn test_post_validator_register_validator_slashed(self) -> Self {
// slash a validator
self.client
.post_beacon_pool_attester_slashings_v1(&self.attester_slashing)
.await
.unwrap();
@@ -3597,7 +3781,7 @@ impl ApiTester {
// Attest to the current slot
self.client
.post_beacon_pool_attestations_v1(self.attestations.as_slice())
.await
.unwrap();
@@ -5237,7 +5421,7 @@ impl ApiTester {
// Attest to the current slot
self.client
.post_beacon_pool_attestations_v1(self.attestations.as_slice())
.await
.unwrap();
@@ -5292,7 +5476,7 @@ impl ApiTester {
let expected_attestation_len = self.attestations.len();
self.client
.post_beacon_pool_attestations_v1(self.attestations.as_slice())
.await
.unwrap();
@@ -5801,34 +5985,66 @@ async fn post_beacon_blocks_duplicate() {
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attestations_valid_v1() {
ApiTester::new()
.await
.test_post_beacon_pool_attestations_valid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attestations_invalid_v1() {
ApiTester::new()
.await
.test_post_beacon_pool_attestations_invalid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attestations_valid_v2() {
ApiTester::new()
.await
.test_post_beacon_pool_attestations_valid_v2()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attestations_invalid_v2() {
ApiTester::new()
.await
.test_post_beacon_pool_attestations_invalid_v2()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attester_slashings_valid_v1() {
ApiTester::new()
.await
.test_post_beacon_pool_attester_slashings_valid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attester_slashings_invalid_v1() {
ApiTester::new()
.await
.test_post_beacon_pool_attester_slashings_invalid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attester_slashings_valid_v2() {
ApiTester::new()
.await
.test_post_beacon_pool_attester_slashings_valid_v2()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn beacon_pools_post_attester_slashings_invalid_v2() {
ApiTester::new()
.await
.test_post_beacon_pool_attester_slashings_invalid_v2()
.await;
}
@@ -6156,36 +6372,70 @@ async fn get_validator_aggregate_attestation_with_skip_slots() {
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_valid_v1() {
ApiTester::new()
.await
.test_get_validator_aggregate_and_proofs_valid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_valid_with_skip_slots_v1() {
ApiTester::new()
.await
.skip_slots(E::slots_per_epoch() * 2)
.test_get_validator_aggregate_and_proofs_valid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_valid_v2() {
ApiTester::new()
.await
.test_get_validator_aggregate_and_proofs_valid_v2()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_valid_with_skip_slots_v2() {
ApiTester::new()
.await
.skip_slots(E::slots_per_epoch() * 2)
.test_get_validator_aggregate_and_proofs_valid_v2()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_invalid_v1() {
ApiTester::new()
.await
.test_get_validator_aggregate_and_proofs_invalid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_invalid_with_skip_slots_v1() {
ApiTester::new()
.await
.skip_slots(E::slots_per_epoch() * 2)
.test_get_validator_aggregate_and_proofs_invalid_v1()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_invalid_v2() {
ApiTester::new()
.await
.test_get_validator_aggregate_and_proofs_invalid_v2()
.await;
}
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn get_validator_aggregate_and_proofs_invalid_with_skip_slots_v2() {
ApiTester::new()
.await
.skip_slots(E::slots_per_epoch() * 2)
.test_get_validator_aggregate_and_proofs_invalid_v2()
.await;
}


@@ -103,8 +103,13 @@ impl RateLimiterConfig {
pub const DEFAULT_GOODBYE_QUOTA: Quota = Quota::one_every(10);
pub const DEFAULT_BLOCKS_BY_RANGE_QUOTA: Quota = Quota::n_every(1024, 10);
pub const DEFAULT_BLOCKS_BY_ROOT_QUOTA: Quota = Quota::n_every(128, 10);
// `BlocksByRange` and `BlobsByRange` are sent together during range sync.
// It makes sense for blocks and blobs quotas to be equivalent in terms of the number of blocks:
// 1024 blocks * 6 max blobs per block.
// This doesn't necessarily mean that we are sending this many blobs, because the quotas are
// measured against the maximum request size.
pub const DEFAULT_BLOBS_BY_RANGE_QUOTA: Quota = Quota::n_every(6144, 10);
pub const DEFAULT_BLOBS_BY_ROOT_QUOTA: Quota = Quota::n_every(768, 10);
pub const DEFAULT_LIGHT_CLIENT_BOOTSTRAP_QUOTA: Quota = Quota::one_every(10);
pub const DEFAULT_LIGHT_CLIENT_OPTIMISTIC_UPDATE_QUOTA: Quota = Quota::one_every(10);
pub const DEFAULT_LIGHT_CLIENT_FINALITY_UPDATE_QUOTA: Quota = Quota::one_every(10);
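As a sanity check on the arithmetic in the quota comment above (1024 blocks times 6 max blobs per block per 10-second window), a small standalone sketch; the constant names here mirror the diff's values but are local to the example:

```rust
// Assumed constants, taken from the quota comment above.
const BLOCKS_BY_RANGE_QUOTA: u64 = 1024; // blocks per 10s window
const MAX_BLOBS_PER_BLOCK: u64 = 6; // post-Electra maximum assumed here

// The blobs-by-range quota is sized so that a full blocks-by-range window's
// worth of blocks can also ship all of their blobs in the same window.
const BLOBS_BY_RANGE_QUOTA: u64 = BLOCKS_BY_RANGE_QUOTA * MAX_BLOBS_PER_BLOCK;

fn main() {
    assert_eq!(BLOBS_BY_RANGE_QUOTA, 6144);
    println!("blobs-by-range quota: {} per 10s", BLOBS_BY_RANGE_QUOTA);
}
```

Because the rate limiter charges against the maximum request size, the actual number of blobs served is usually lower than this ceiling.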


@@ -237,6 +237,36 @@ lazy_static! {
"Number of Syncing chains in range, per range type",
&["range_type"]
);
pub static ref SYNCING_CHAINS_REMOVED: Result<IntCounterVec> = try_create_int_counter_vec(
"sync_range_removed_chains_total",
"Total count of range syncing chains removed per range type",
&["range_type"]
);
pub static ref SYNCING_CHAINS_ADDED: Result<IntCounterVec> = try_create_int_counter_vec(
"sync_range_added_chains_total",
"Total count of range syncing chains added per range type",
&["range_type"]
);
pub static ref SYNCING_CHAINS_DROPPED_BLOCKS: Result<IntCounterVec> = try_create_int_counter_vec(
"sync_range_chains_dropped_blocks_total",
"Total count of dropped blocks when removing a syncing chain per range type",
&["range_type"]
);
pub static ref SYNCING_CHAINS_IGNORED_BLOCKS: Result<IntCounterVec> = try_create_int_counter_vec(
"sync_range_chains_ignored_blocks_total",
"Total count of ignored blocks when processing a syncing chain batch per chain type",
&["chain_type"]
);
pub static ref SYNCING_CHAINS_PROCESSED_BATCHES: Result<IntCounterVec> = try_create_int_counter_vec(
"sync_range_chains_processed_batches_total",
"Total count of processed batches in a syncing chain batch per chain type",
&["chain_type"]
);
pub static ref SYNCING_CHAIN_BATCH_AWAITING_PROCESSING: Result<Histogram> = try_create_histogram_with_buckets(
"sync_range_chain_batch_awaiting_processing_seconds",
"Time range sync batches spend in AwaitingProcessing state",
Ok(vec![0.01,0.02,0.05,0.1,0.2,0.5,1.0,2.0,5.0,10.0,20.0])
);
pub static ref SYNC_SINGLE_BLOCK_LOOKUPS: Result<IntGauge> = try_create_int_gauge(
"sync_single_block_lookups",
"Number of single block lookups underway"


@@ -326,7 +326,7 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
.process_blocks(downloaded_blocks.iter(), notify_execution_layer)
.await
{
(imported_blocks, Ok(_)) => {
debug!(self.log, "Batch processed";
"batch_epoch" => epoch,
"first_block_slot" => start_slot,
@@ -335,7 +335,8 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
"processed_blocks" => sent_blocks,
"service"=> "sync");
BatchProcessResult::Success {
sent_blocks,
imported_blocks,
}
}
(imported_blocks, Err(e)) => {
@@ -349,7 +350,7 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
"service" => "sync");
match e.peer_action {
Some(penalty) => BatchProcessResult::FaultyFailure {
imported_blocks,
penalty,
},
None => BatchProcessResult::NonFaultyFailure,
@@ -368,7 +369,7 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
.sum::<usize>();
match self.process_backfill_blocks(downloaded_blocks) {
(imported_blocks, Ok(_)) => {
debug!(self.log, "Backfill batch processed";
"batch_epoch" => epoch,
"first_block_slot" => start_slot,
@@ -377,7 +378,8 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
"processed_blobs" => n_blobs,
"service"=> "sync");
BatchProcessResult::Success {
sent_blocks,
imported_blocks,
}
}
(_, Err(e)) => {
@@ -390,7 +392,7 @@ impl<T: BeaconChainTypes> NetworkBeaconProcessor<T> {
"service" => "sync");
match e.peer_action {
Some(penalty) => BatchProcessResult::FaultyFailure {
imported_blocks: 0,
penalty,
},
None => BatchProcessResult::NonFaultyFailure,


@@ -528,7 +528,7 @@ impl<T: BeaconChainTypes> BackFillSync<T> {
// result callback. This is done, because an empty batch could end a chain and the logic
// for removing chains and checking completion is in the callback.
let (blocks, _) = match batch.start_processing() {
Err(e) => {
return self
.fail_sync(BackFillError::BatchInvalidState(batch_id, e.0))
@@ -615,13 +615,15 @@ impl<T: BeaconChainTypes> BackFillSync<T> {
"batch_epoch" => batch_id, "peer" => %peer, "client" => %network.client_type(&peer));
match result {
BatchProcessResult::Success {
imported_blocks, ..
} => {
if let Err(e) = batch.processing_completed(BatchProcessingResult::Success) {
self.fail_sync(BackFillError::BatchInvalidState(batch_id, e.0))?;
}
// If the processed batch was not empty, we can validate previous unvalidated
// blocks.
if *imported_blocks > 0 {
self.advance_chain(network, batch_id);
}
@@ -677,7 +679,7 @@ impl<T: BeaconChainTypes> BackFillSync<T> {
Ok(BatchOperationOutcome::Continue) => {
// chain can continue. Check if it can be progressed
if *imported_blocks > 0 {
// At least one block was successfully verified and imported, then we can be sure all
// previous batches are valid and we only need to download the current failed
// batch.


@@ -156,11 +156,12 @@ pub enum BlockProcessingResult<E: EthSpec> {
pub enum BatchProcessResult {
/// The batch was completed successfully. It carries how many blocks were sent and imported.
Success {
sent_blocks: usize,
imported_blocks: usize,
},
/// The batch processing failed. It carries how many blocks were imported before the failure.
FaultyFailure {
imported_blocks: usize,
penalty: PeerAction,
},
NonFaultyFailure,
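The switch from `was_non_empty: bool` to block counts means callers compare against zero instead of branching on a flag, and the counts can feed metrics. A minimal sketch of the reworked shape, with the peer-penalty type stubbed out (not the real `PeerAction`):

```rust
// Sketch of the reworked result type; `penalty` is stubbed as `()` here.
#[derive(Debug)]
pub enum BatchProcessResult {
    Success { sent_blocks: usize, imported_blocks: usize },
    FaultyFailure { imported_blocks: usize, penalty: () },
    NonFaultyFailure,
}

/// Mirrors the backfill callback: only advance the chain when at least one
/// block was actually imported (previously `*was_non_empty`).
fn should_advance(result: &BatchProcessResult) -> bool {
    match result {
        BatchProcessResult::Success { imported_blocks, .. } => *imported_blocks > 0,
        _ => false,
    }
}

fn main() {
    let ok = BatchProcessResult::Success { sent_blocks: 8, imported_blocks: 8 };
    let empty = BatchProcessResult::Success { sent_blocks: 0, imported_blocks: 0 };
    assert!(should_advance(&ok));
    assert!(!should_advance(&empty));
}
```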


@@ -5,6 +5,7 @@ use lighthouse_network::PeerId;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};
use std::ops::Sub;
use std::time::{Duration, Instant};
use strum::Display;
use types::{Epoch, EthSpec, Slot};
@@ -118,7 +119,7 @@ pub enum BatchState<E: EthSpec> {
/// The batch is being downloaded.
Downloading(PeerId, Id),
/// The batch has been completely downloaded and is ready for processing.
AwaitingProcessing(PeerId, Vec<RpcBlock<E>>, Instant),
/// The batch is being processed.
Processing(Attempt),
/// The batch was successfully processed and is waiting to be validated.
@@ -210,13 +211,26 @@ impl<E: EthSpec, B: BatchConfig> BatchInfo<E, B> {
match &self.state {
BatchState::AwaitingDownload | BatchState::Failed => None,
BatchState::Downloading(peer_id, _)
| BatchState::AwaitingProcessing(peer_id, _, _)
| BatchState::Processing(Attempt { peer_id, .. })
| BatchState::AwaitingValidation(Attempt { peer_id, .. }) => Some(peer_id),
BatchState::Poisoned => unreachable!("Poisoned batch"),
}
}
/// Returns the count of stored pending blocks if in awaiting processing state
pub fn pending_blocks(&self) -> usize {
match &self.state {
BatchState::AwaitingProcessing(_, blocks, _) => blocks.len(),
BatchState::AwaitingDownload
| BatchState::Downloading { .. }
| BatchState::Processing { .. }
| BatchState::AwaitingValidation { .. }
| BatchState::Poisoned
| BatchState::Failed => 0,
}
}
/// Returns a BlocksByRange request associated with the batch.
pub fn to_blocks_by_range_request(&self) -> (BlocksByRangeRequest, ByRangeRequestType) {
(
@@ -293,7 +307,7 @@ impl<E: EthSpec, B: BatchConfig> BatchInfo<E, B> {
}
let received = blocks.len();
self.state = BatchState::AwaitingProcessing(peer, blocks, Instant::now());
Ok(received)
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
@@ -365,11 +379,11 @@ impl<E: EthSpec, B: BatchConfig> BatchInfo<E, B> {
}
}
pub fn start_processing(&mut self) -> Result<(Vec<RpcBlock<E>>, Duration), WrongState> {
match self.state.poison() {
BatchState::AwaitingProcessing(peer, blocks, start_instant) => {
self.state = BatchState::Processing(Attempt::new::<B, E>(peer, &blocks));
Ok((blocks, start_instant.elapsed()))
}
BatchState::Poisoned => unreachable!("Poisoned batch"),
other => {
@@ -515,7 +529,7 @@ impl<E: EthSpec> std::fmt::Debug for BatchState<E> {
}) => write!(f, "AwaitingValidation({})", peer_id),
BatchState::AwaitingDownload => f.write_str("AwaitingDownload"),
BatchState::Failed => f.write_str("Failed"),
BatchState::AwaitingProcessing(ref peer, ref blocks, _) => {
write!(f, "AwaitingProcessing({}, {} blocks)", peer, blocks.len())
}
BatchState::Downloading(peer, request_id) => {
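Stamping `AwaitingProcessing` with an `Instant` and returning `elapsed()` from `start_processing` is a standard queue-wait measurement: the duration can then be observed into the new `sync_range_chain_batch_awaiting_processing_seconds` histogram. A reduced, self-contained sketch of just that transition (`Block` stands in for `RpcBlock<E>`; the state machine is simplified):

```rust
use std::time::{Duration, Instant};

// Reduced batch state machine: only the transition the diff instruments.
type Block = u64;

enum BatchState {
    AwaitingProcessing(Vec<Block>, Instant),
    Processing,
}

impl BatchState {
    /// Move to Processing, returning the blocks plus the time the batch spent
    /// waiting, which the caller can feed into a histogram metric.
    fn start_processing(&mut self) -> Option<(Vec<Block>, Duration)> {
        match std::mem::replace(self, BatchState::Processing) {
            BatchState::AwaitingProcessing(blocks, since) => Some((blocks, since.elapsed())),
            other => {
                // Not awaiting processing: restore the state and report failure.
                *self = other;
                None
            }
        }
    }
}

fn main() {
    let mut state = BatchState::AwaitingProcessing(vec![1, 2, 3], Instant::now());
    let (blocks, waited) = state.start_processing().unwrap();
    assert_eq!(blocks.len(), 3);
    assert!(waited >= Duration::ZERO);
    // A second call finds the batch already Processing.
    assert!(state.start_processing().is_none());
}
```

Recording the instant at the moment blocks are downloaded, rather than when processing starts, is what lets the histogram capture pure queueing delay.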


@@ -1,4 +1,6 @@
use super::batch::{BatchInfo, BatchProcessingResult, BatchState};
use super::RangeSyncType;
use crate::metrics;
use crate::network_beacon_processor::ChainSegmentProcessId;
use crate::sync::network_context::RangeRequestId;
use crate::sync::{network_context::SyncNetworkContext, BatchOperationOutcome, BatchProcessResult};
@@ -11,6 +13,7 @@ use rand::{seq::SliceRandom, Rng};
use slog::{crit, debug, o, warn};
use std::collections::{btree_map::Entry, BTreeMap, HashSet};
use std::hash::{Hash, Hasher};
use strum::IntoStaticStr;
use types::{Epoch, EthSpec, Hash256, Slot};
/// Blocks are downloaded in batches from peers. This constant specifies how many epochs worth of
@@ -53,6 +56,13 @@ pub struct KeepChain;
pub type ChainId = u64; pub type ChainId = u64;
pub type BatchId = Epoch; pub type BatchId = Epoch;
#[derive(Debug, Copy, Clone, IntoStaticStr)]
pub enum SyncingChainType {
Head,
Finalized,
Backfill,
}
/// A chain of blocks that need to be downloaded. Peers who claim to contain the target head /// A chain of blocks that need to be downloaded. Peers who claim to contain the target head
/// root are grouped into the peer pool and queried for batches when downloading the /// root are grouped into the peer pool and queried for batches when downloading the
/// chain. /// chain.
@@ -60,6 +70,9 @@ pub struct SyncingChain<T: BeaconChainTypes> {
/// A random id used to identify this chain. /// A random id used to identify this chain.
id: ChainId, id: ChainId,
/// SyncingChain type
pub chain_type: SyncingChainType,
/// The start of the chain segment. Any epoch previous to this one has been validated. /// The start of the chain segment. Any epoch previous to this one has been validated.
pub start_epoch: Epoch, pub start_epoch: Epoch,
@@ -126,6 +139,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
target_head_slot: Slot, target_head_slot: Slot,
target_head_root: Hash256, target_head_root: Hash256,
peer_id: PeerId, peer_id: PeerId,
chain_type: SyncingChainType,
log: &slog::Logger, log: &slog::Logger,
) -> Self { ) -> Self {
let mut peers = FnvHashMap::default(); let mut peers = FnvHashMap::default();
@@ -135,6 +149,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
SyncingChain { SyncingChain {
id, id,
chain_type,
start_epoch, start_epoch,
target_head_slot, target_head_slot,
target_head_root, target_head_root,
@@ -171,6 +186,14 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
self.validated_batches * EPOCHS_PER_BATCH self.validated_batches * EPOCHS_PER_BATCH
} }
/// Returns the total count of pending blocks in all the batches of this chain
pub fn pending_blocks(&self) -> usize {
self.batches
.values()
.map(|batch| batch.pending_blocks())
.sum()
}
/// Removes a peer from the chain. /// Removes a peer from the chain.
/// If the peer has active batches, those are considered failed and re-requested. /// If the peer has active batches, those are considered failed and re-requested.
pub fn remove_peer( pub fn remove_peer(
@@ -305,7 +328,12 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// result callback. This is done, because an empty batch could end a chain and the logic // result callback. This is done, because an empty batch could end a chain and the logic
// for removing chains and checking completion is in the callback. // for removing chains and checking completion is in the callback.
let blocks = batch.start_processing()?; let (blocks, duration_in_awaiting_processing) = batch.start_processing()?;
metrics::observe_duration(
&metrics::SYNCING_CHAIN_BATCH_AWAITING_PROCESSING,
duration_in_awaiting_processing,
);
let process_id = ChainSegmentProcessId::RangeBatchId(self.id, batch_id); let process_id = ChainSegmentProcessId::RangeBatchId(self.id, batch_id);
self.current_processing_batch = Some(batch_id); self.current_processing_batch = Some(batch_id);
@@ -469,10 +497,27 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
// We consider three cases. Batch was successfully processed, Batch failed processing due // We consider three cases. Batch was successfully processed, Batch failed processing due
// to a faulty peer, or batch failed processing but the peer can't be deemed faulty. // to a faulty peer, or batch failed processing but the peer can't be deemed faulty.
match result { match result {
BatchProcessResult::Success { was_non_empty } => { BatchProcessResult::Success {
sent_blocks,
imported_blocks,
} => {
if sent_blocks > imported_blocks {
let ignored_blocks = sent_blocks - imported_blocks;
metrics::inc_counter_vec_by(
&metrics::SYNCING_CHAINS_IGNORED_BLOCKS,
&[self.chain_type.into()],
ignored_blocks as u64,
);
}
metrics::inc_counter_vec(
&metrics::SYNCING_CHAINS_PROCESSED_BATCHES,
&[self.chain_type.into()],
);
batch.processing_completed(BatchProcessingResult::Success)?; batch.processing_completed(BatchProcessingResult::Success)?;
if *was_non_empty { // was not empty = sent_blocks > 0
if *sent_blocks > 0 {
// If the processed batch was not empty, we can validate previous unvalidated // If the processed batch was not empty, we can validate previous unvalidated
// blocks. // blocks.
self.advance_chain(network, batch_id); self.advance_chain(network, batch_id);
@@ -515,7 +560,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
match batch.processing_completed(BatchProcessingResult::FaultyFailure)? { match batch.processing_completed(BatchProcessingResult::FaultyFailure)? {
BatchOperationOutcome::Continue => { BatchOperationOutcome::Continue => {
// Chain can continue. Check if it can be moved forward. // Chain can continue. Check if it can be moved forward.
if *imported_blocks { if *imported_blocks > 0 {
// At least one block was successfully verified and imported, so we can be sure all // At least one block was successfully verified and imported, so we can be sure all
// previous batches are valid and we only need to download the current failed // previous batches are valid and we only need to download the current failed
// batch. // batch.
@@ -1142,3 +1187,12 @@ impl RemoveChain {
) )
} }
} }
impl From<RangeSyncType> for SyncingChainType {
fn from(value: RangeSyncType) -> Self {
match value {
RangeSyncType::Head => Self::Head,
RangeSyncType::Finalized => Self::Finalized,
}
}
}
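The `SyncingChainType` enum above derives `IntoStaticStr` so each variant can serve as a metric label. A stdlib-only sketch of that idea, with a hand-rolled label method standing in for the `strum` derive and a `HashMap` standing in for a Prometheus counter vec (names here are illustrative, not the project's):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the `IntoStaticStr` derive: each chain type maps
// to a static label string.
#[derive(Debug, Clone, Copy)]
enum SyncingChainType {
    Head,
    Finalized,
    Backfill,
}

impl SyncingChainType {
    fn as_static_str(self) -> &'static str {
        match self {
            SyncingChainType::Head => "Head",
            SyncingChainType::Finalized => "Finalized",
            SyncingChainType::Backfill => "Backfill",
        }
    }
}

// Minimal labelled counter, mimicking `inc_counter_vec_by`.
#[derive(Default)]
struct CounterVec {
    counts: HashMap<&'static str, u64>,
}

impl CounterVec {
    fn inc_by(&mut self, label: &'static str, n: u64) {
        *self.counts.entry(label).or_insert(0) += n;
    }
}

fn main() {
    let mut ignored_blocks = CounterVec::default();
    // A batch sent 8 blocks but only 5 were imported: count 3 as ignored,
    // labelled by the chain type.
    let (sent, imported) = (8u64, 5u64);
    if sent > imported {
        ignored_blocks.inc_by(SyncingChainType::Head.as_static_str(), sent - imported);
    }
    assert_eq!(ignored_blocks.counts["Head"], 3);
}
```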


@@ -64,8 +64,8 @@ impl<T: BeaconChainTypes, C: BlockStorage> ChainCollection<T, C> {
     /// Updates the Syncing state of the collection after a chain is removed.
     fn on_chain_removed(&mut self, id: &ChainId, was_syncing: bool, sync_type: RangeSyncType) {
-        let _ = metrics::get_int_gauge(&metrics::SYNCING_CHAINS_COUNT, &[sync_type.as_str()])
-            .map(|m| m.dec());
+        metrics::inc_counter_vec(&metrics::SYNCING_CHAINS_REMOVED, &[sync_type.as_str()]);
+        self.update_metrics();

         match self.state {
             RangeSyncState::Finalized(ref syncing_id) => {
@@ -493,15 +493,28 @@ impl<T: BeaconChainTypes, C: BlockStorage> ChainCollection<T, C> {
                 target_head_slot,
                 target_head_root,
                 peer,
+                sync_type.into(),
                 &self.log,
             );
             debug_assert_eq!(new_chain.get_id(), id);
             debug!(self.log, "New chain added to sync"; "peer_id" => peer_rpr, "sync_type" => ?sync_type, &new_chain);
             entry.insert(new_chain);
-            metrics::get_int_gauge(&metrics::SYNCING_CHAINS_COUNT, &[sync_type.as_str()])
-                .map(|m| m.inc());
+            let _ = metrics::inc_counter_vec(&metrics::SYNCING_CHAINS_ADDED, &[sync_type.as_str()]);
+            self.update_metrics();
         }
     }

+    fn update_metrics(&self) {
+        metrics::set_gauge_vec(
+            &metrics::SYNCING_CHAINS_COUNT,
+            &[RangeSyncType::Finalized.as_str()],
+            self.finalized_chains.len() as i64,
+        );
+        metrics::set_gauge_vec(
+            &metrics::SYNCING_CHAINS_COUNT,
+            &[RangeSyncType::Head.as_str()],
+            self.head_chains.len() as i64,
+        );
+    }
 }
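The `update_metrics` change above swaps paired `inc()`/`dec()` gauge calls, which can drift if one side is missed, for recomputing each gauge from the authoritative collection sizes. A stdlib-only sketch of that design choice (a `HashMap` stands in for the gauge vec; field names are illustrative):

```rust
use std::collections::HashMap;

// Sketch: instead of incrementing/decrementing a gauge at every add/remove
// site, recompute it from the source of truth so it can never drift.
#[derive(Default)]
struct ChainCollection {
    finalized_chains: Vec<u64>,
    head_chains: Vec<u64>,
    gauges: HashMap<&'static str, i64>,
}

impl ChainCollection {
    fn update_metrics(&mut self) {
        // Set, not inc/dec: the gauge always equals the collection length.
        self.gauges.insert("finalized", self.finalized_chains.len() as i64);
        self.gauges.insert("head", self.head_chains.len() as i64);
    }
}

fn main() {
    let mut collection = ChainCollection::default();
    collection.finalized_chains.push(1);
    collection.head_chains.extend([2, 3]);
    collection.update_metrics();
    assert_eq!(collection.gauges["finalized"], 1);
    assert_eq!(collection.gauges["head"], 2);
}
```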


@@ -43,6 +43,7 @@ use super::block_storage::BlockStorage;
 use super::chain::{BatchId, ChainId, RemoveChain, SyncingChain};
 use super::chain_collection::ChainCollection;
 use super::sync_type::RangeSyncType;
+use crate::metrics;
 use crate::status::ToStatusMessage;
 use crate::sync::network_context::SyncNetworkContext;
 use crate::sync::BatchProcessResult;
@@ -346,6 +347,12 @@ where
             }
         }

+        metrics::inc_counter_vec_by(
+            &metrics::SYNCING_CHAINS_DROPPED_BLOCKS,
+            &[sync_type.as_str()],
+            chain.pending_blocks() as u64,
+        );
+
         network.status_peers(self.beacon_chain.as_ref(), chain.peers());

         let status = self.beacon_chain.status_message();


@@ -372,17 +372,22 @@ pub fn cli_app() -> Command {
         .arg(
             Arg::new("self-limiter")
                 .long("self-limiter")
-                .help(
-                    "Enables the outbound rate limiter (requests made by this node). \
-                    Use the self-limiter-protocol flag to set per protocol configurations. \
-                    If the self rate limiter is enabled and a protocol is not \
-                    present in the configuration, the quotas used for the inbound rate limiter will be \
-                    used."
-                )
+                .help("This flag is deprecated and has no effect.")
+                .hide(true)
                 .action(ArgAction::SetTrue)
                 .help_heading(FLAG_HEADER)
                 .display_order(0)
         )
+        .arg(
+            Arg::new("disable-self-limiter")
+                .long("disable-self-limiter")
+                .help(
+                    "Disables the outbound rate limiter (requests sent by this node)."
+                )
+                .action(ArgAction::SetTrue)
+                .help_heading(FLAG_HEADER)
+                .display_order(0)
+        )
         .arg(
             Arg::new("self-limiter-protocols")
                 .long("self-limiter-protocols")
@@ -397,7 +402,7 @@ pub fn cli_app() -> Command {
             )
                 .action(ArgAction::Append)
                 .value_delimiter(';')
-                .requires("self-limiter")
+                .conflicts_with("disable-self-limiter")
                 .display_order(0)
         )
         .arg(


@@ -1416,16 +1416,15 @@ pub fn set_network_config(
     // Light client server config.
     config.enable_light_client_server = parse_flag(cli_args, "light-client-server");

-    // The self limiter is disabled by default. If the `self-limiter` flag is provided
-    // without the `self-limiter-protocols` flag, the default params will be used.
-    if parse_flag(cli_args, "self-limiter") {
-        config.outbound_rate_limiter_config =
-            if let Some(protocols) = cli_args.get_one::<String>("self-limiter-protocols") {
+    // The self limiter is enabled by default. If the `self-limiter-protocols` flag is not provided,
+    // the default params will be used.
+    config.outbound_rate_limiter_config = if parse_flag(cli_args, "disable-self-limiter") {
+        None
+    } else if let Some(protocols) = cli_args.get_one::<String>("self-limiter-protocols") {
         Some(protocols.parse()?)
     } else {
         Some(Default::default())
     };
-    }

     // Proposer-only mode overrides a number of previous configuration parameters.
     // Specifically, we avoid subscribing to long-lived subnets and wish to maintain a minimal set
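The config change above flips the self rate limiter from opt-in to opt-out: `None` only when explicitly disabled, otherwise a parsed or default config. A stdlib-only sketch of that resolution logic, with a hypothetical `RateLimiterConfig` and plain function arguments standing in for the CLI lookup:

```rust
// Hypothetical stand-in for the outbound rate limiter config.
#[derive(Debug, Default, PartialEq)]
struct RateLimiterConfig {
    protocols: Vec<String>,
}

// Default-on resolution: disabled only by an explicit flag; a protocols
// string overrides the defaults when present.
fn resolve_limiter(disable: bool, protocols: Option<&str>) -> Option<RateLimiterConfig> {
    if disable {
        None
    } else if let Some(p) = protocols {
        Some(RateLimiterConfig {
            protocols: p.split(';').map(String::from).collect(),
        })
    } else {
        Some(RateLimiterConfig::default())
    }
}

fn main() {
    // Explicit disable wins over everything else.
    assert_eq!(resolve_limiter(true, None), None);
    // No flags at all: limiter is on with default params.
    assert_eq!(resolve_limiter(false, None), Some(RateLimiterConfig::default()));
    // A protocols string customizes the limiter.
    let cfg = resolve_limiter(false, Some("ping;status")).unwrap();
    assert_eq!(cfg.protocols, vec!["ping", "status"]);
}
```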


@@ -505,6 +505,8 @@ Flags:
       --disable-quic
           Disables the quic transport. The node will rely solely on the TCP
           transport for libp2p connections.
+      --disable-self-limiter
+          Disables the outbound rate limiter (requests sent by this node).
       --disable-upnp
           Disables UPnP support. Setting this will prevent Lighthouse from
           attempting to automatically establish external port mappings.
@@ -575,12 +577,6 @@ Flags:
           When present, Lighthouse will forget the payload statuses of any
           already-imported blocks. This can assist in the recovery from a
           consensus failure caused by the execution layer.
-      --self-limiter
-          Enables the outbound rate limiter (requests made by this node). Use
-          the self-limiter-protocol flag to set per protocol configurations. If
-          the self rate limiter is enabled and a protocol is not present in the
-          configuration, the quotas used for the inbound rate limiter will be
-          used.
       --shutdown-after-sync
           Shutdown beacon node as soon as sync is completed. Backfill sync will
           not be performed before shutdown.


@@ -346,6 +346,19 @@ impl BeaconNodeHttpClient {
         Ok(())
     }

+    /// Perform a HTTP POST request with a custom timeout and consensus header.
+    async fn post_with_timeout_and_consensus_header<T: Serialize, U: IntoUrl>(
+        &self,
+        url: U,
+        body: &T,
+        timeout: Duration,
+        fork_name: ForkName,
+    ) -> Result<(), Error> {
+        self.post_generic_with_consensus_version(url, body, Some(timeout), fork_name)
+            .await?;
+        Ok(())
+    }
+
     /// Perform a HTTP POST request with a custom timeout, returning a JSON response.
     async fn post_with_timeout_and_response<T: DeserializeOwned, U: IntoUrl, V: Serialize>(
         &self,
@@ -376,25 +389,6 @@ impl BeaconNodeHttpClient {
         ok_or_error(response).await
     }

-    /// Generic POST function supporting arbitrary responses and timeouts.
-    /// Does not include Content-Type application/json in the request header.
-    async fn post_generic_json_without_content_type_header<T: Serialize, U: IntoUrl>(
-        &self,
-        url: U,
-        body: &T,
-        timeout: Option<Duration>,
-    ) -> Result<Response, Error> {
-        let mut builder = self.client.post(url);
-        if let Some(timeout) = timeout {
-            builder = builder.timeout(timeout);
-        }
-        let serialized_body = serde_json::to_vec(body).map_err(Error::InvalidJson)?;
-        let response = builder.body(serialized_body).send().await?;
-        ok_or_error(response).await
-    }
-
     /// Generic POST function supporting arbitrary responses and timeouts.
     async fn post_generic_with_consensus_version<T: Serialize, U: IntoUrl>(
         &self,
@@ -1228,10 +1222,10 @@ impl BeaconNodeHttpClient {
         self.get_opt(path).await
     }

-    /// `GET beacon/blocks/{block_id}/attestations`
+    /// `GET v1/beacon/blocks/{block_id}/attestations`
     ///
     /// Returns `Ok(None)` on a 404 error.
-    pub async fn get_beacon_blocks_attestations<E: EthSpec>(
+    pub async fn get_beacon_blocks_attestations_v1<E: EthSpec>(
         &self,
         block_id: BlockId,
     ) -> Result<Option<ExecutionOptimisticFinalizedResponse<Vec<Attestation<E>>>>, Error> {
@@ -1247,8 +1241,28 @@ impl BeaconNodeHttpClient {
         self.get_opt(path).await
     }

-    /// `POST beacon/pool/attestations`
-    pub async fn post_beacon_pool_attestations<E: EthSpec>(
+    /// `GET v2/beacon/blocks/{block_id}/attestations`
+    ///
+    /// Returns `Ok(None)` on a 404 error.
+    pub async fn get_beacon_blocks_attestations_v2<E: EthSpec>(
+        &self,
+        block_id: BlockId,
+    ) -> Result<Option<ExecutionOptimisticFinalizedForkVersionedResponse<Vec<Attestation<E>>>>, Error>
+    {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("beacon")
+            .push("blocks")
+            .push(&block_id.to_string())
+            .push("attestations");
+        self.get_opt(path).await
+    }
+
+    /// `POST v1/beacon/pool/attestations`
+    pub async fn post_beacon_pool_attestations_v1<E: EthSpec>(
         &self,
         attestations: &[Attestation<E>],
     ) -> Result<(), Error> {
@@ -1266,8 +1280,33 @@ impl BeaconNodeHttpClient {
         Ok(())
     }

-    /// `GET beacon/pool/attestations?slot,committee_index`
-    pub async fn get_beacon_pool_attestations<E: EthSpec>(
+    /// `POST v2/beacon/pool/attestations`
+    pub async fn post_beacon_pool_attestations_v2<E: EthSpec>(
+        &self,
+        attestations: &[Attestation<E>],
+        fork_name: ForkName,
+    ) -> Result<(), Error> {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("beacon")
+            .push("pool")
+            .push("attestations");
+        self.post_with_timeout_and_consensus_header(
+            path,
+            &attestations,
+            self.timeouts.attestation,
+            fork_name,
+        )
+        .await?;
+        Ok(())
+    }
+
+    /// `GET v1/beacon/pool/attestations?slot,committee_index`
+    pub async fn get_beacon_pool_attestations_v1<E: EthSpec>(
         &self,
         slot: Option<Slot>,
         committee_index: Option<u64>,
@@ -1293,8 +1332,35 @@ impl BeaconNodeHttpClient {
         self.get(path).await
     }

-    /// `POST beacon/pool/attester_slashings`
-    pub async fn post_beacon_pool_attester_slashings<E: EthSpec>(
+    /// `GET v2/beacon/pool/attestations?slot,committee_index`
+    pub async fn get_beacon_pool_attestations_v2<E: EthSpec>(
+        &self,
+        slot: Option<Slot>,
+        committee_index: Option<u64>,
+    ) -> Result<ForkVersionedResponse<Vec<Attestation<E>>>, Error> {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("beacon")
+            .push("pool")
+            .push("attestations");
+        if let Some(slot) = slot {
+            path.query_pairs_mut()
+                .append_pair("slot", &slot.to_string());
+        }
+        if let Some(index) = committee_index {
+            path.query_pairs_mut()
+                .append_pair("committee_index", &index.to_string());
+        }
+        self.get(path).await
+    }
+
+    /// `POST v1/beacon/pool/attester_slashings`
+    pub async fn post_beacon_pool_attester_slashings_v1<E: EthSpec>(
         &self,
         slashing: &AttesterSlashing<E>,
     ) -> Result<(), Error> {
@@ -1306,14 +1372,33 @@ impl BeaconNodeHttpClient {
             .push("pool")
             .push("attester_slashings");

-        self.post_generic_json_without_content_type_header(path, slashing, None)
+        self.post_generic(path, slashing, None).await?;
+        Ok(())
+    }
+
+    /// `POST v2/beacon/pool/attester_slashings`
+    pub async fn post_beacon_pool_attester_slashings_v2<E: EthSpec>(
+        &self,
+        slashing: &AttesterSlashing<E>,
+        fork_name: ForkName,
+    ) -> Result<(), Error> {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("beacon")
+            .push("pool")
+            .push("attester_slashings");
+        self.post_generic_with_consensus_version(path, slashing, None, fork_name)
             .await?;
         Ok(())
     }

-    /// `GET beacon/pool/attester_slashings`
-    pub async fn get_beacon_pool_attester_slashings<E: EthSpec>(
+    /// `GET v1/beacon/pool/attester_slashings`
+    pub async fn get_beacon_pool_attester_slashings_v1<E: EthSpec>(
         &self,
     ) -> Result<GenericResponse<Vec<AttesterSlashing<E>>>, Error> {
         let mut path = self.eth_path(V1)?;
@@ -1327,6 +1412,21 @@ impl BeaconNodeHttpClient {
         self.get(path).await
     }

+    /// `GET v2/beacon/pool/attester_slashings`
+    pub async fn get_beacon_pool_attester_slashings_v2<E: EthSpec>(
+        &self,
+    ) -> Result<ForkVersionedResponse<Vec<AttesterSlashing<E>>>, Error> {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("beacon")
+            .push("pool")
+            .push("attester_slashings");
+        self.get(path).await
+    }
+
     /// `POST beacon/pool/proposer_slashings`
     pub async fn post_beacon_pool_proposer_slashings(
         &self,
@@ -2216,8 +2316,8 @@ impl BeaconNodeHttpClient {
         self.get_with_timeout(path, self.timeouts.attestation).await
     }

-    /// `GET validator/aggregate_attestation?slot,attestation_data_root`
-    pub async fn get_validator_aggregate_attestation<E: EthSpec>(
+    /// `GET v1/validator/aggregate_attestation?slot,attestation_data_root`
+    pub async fn get_validator_aggregate_attestation_v1<E: EthSpec>(
         &self,
         slot: Slot,
         attestation_data_root: Hash256,
@@ -2240,6 +2340,32 @@ impl BeaconNodeHttpClient {
             .await
     }

+    /// `GET v2/validator/aggregate_attestation?slot,attestation_data_root,committee_index`
+    pub async fn get_validator_aggregate_attestation_v2<E: EthSpec>(
+        &self,
+        slot: Slot,
+        attestation_data_root: Hash256,
+        committee_index: CommitteeIndex,
+    ) -> Result<Option<ForkVersionedResponse<Attestation<E>>>, Error> {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("validator")
+            .push("aggregate_attestation");
+        path.query_pairs_mut()
+            .append_pair("slot", &slot.to_string())
+            .append_pair(
+                "attestation_data_root",
+                &format!("{:?}", attestation_data_root),
+            )
+            .append_pair("committee_index", &committee_index.to_string());
+        self.get_opt_with_timeout(path, self.timeouts.attestation)
+            .await
+    }
+
     /// `GET validator/sync_committee_contribution`
     pub async fn get_validator_sync_committee_contribution<E: EthSpec>(
         &self,
@@ -2335,8 +2461,8 @@ impl BeaconNodeHttpClient {
             .await
     }

-    /// `POST validator/aggregate_and_proofs`
-    pub async fn post_validator_aggregate_and_proof<E: EthSpec>(
+    /// `POST v1/validator/aggregate_and_proofs`
+    pub async fn post_validator_aggregate_and_proof_v1<E: EthSpec>(
         &self,
         aggregates: &[SignedAggregateAndProof<E>],
     ) -> Result<(), Error> {
@@ -2353,6 +2479,30 @@ impl BeaconNodeHttpClient {
         Ok(())
     }

+    /// `POST v2/validator/aggregate_and_proofs`
+    pub async fn post_validator_aggregate_and_proof_v2<E: EthSpec>(
+        &self,
+        aggregates: &[SignedAggregateAndProof<E>],
+        fork_name: ForkName,
+    ) -> Result<(), Error> {
+        let mut path = self.eth_path(V2)?;
+        path.path_segments_mut()
+            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
+            .push("validator")
+            .push("aggregate_and_proofs");
+        self.post_with_timeout_and_consensus_header(
+            path,
+            &aggregates,
+            self.timeouts.attestation,
+            fork_name,
+        )
+        .await?;
+        Ok(())
+    }
+
     /// `POST validator/beacon_committee_subscriptions`
     pub async fn post_validator_beacon_committee_subscriptions(
         &self,
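The v2 client methods above differ from their v1 counterparts mainly in the route version and in attaching the fork as a consensus-version header, so the server knows which attestation variant the JSON body encodes. A stdlib-only sketch of those two ingredients; the header name follows the beacon API convention, but treat the exact strings here as illustrative:

```rust
// Headers a fork-versioned POST would carry: the JSON content type plus an
// Eth-Consensus-Version header naming the fork.
fn consensus_version_headers(fork_name: &str) -> Vec<(String, String)> {
    vec![
        ("Content-Type".to_string(), "application/json".to_string()),
        ("Eth-Consensus-Version".to_string(), fork_name.to_string()),
    ]
}

// Versioned route construction: v1 and v2 share the same suffix.
fn versioned_path(version: u8, route: &str) -> String {
    format!("/eth/v{version}/{route}")
}

fn main() {
    let headers = consensus_version_headers("electra");
    assert!(headers
        .iter()
        .any(|(k, v)| k == "Eth-Consensus-Version" && v == "electra"));
    assert_eq!(
        versioned_path(2, "beacon/pool/attestations"),
        "/eth/v2/beacon/pool/attestations"
    );
}
```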


@@ -780,6 +780,8 @@ pub struct ValidatorAttestationDataQuery {
 pub struct ValidatorAggregateAttestationQuery {
     pub attestation_data_root: Hash256,
     pub slot: Slot,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub committee_index: Option<CommitteeIndex>,
 }

 #[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Hash)]


@@ -1,6 +1,6 @@
 use crate::slot_data::SlotData;
-use crate::Checkpoint;
 use crate::{test_utils::TestRandom, Hash256, Slot};
+use crate::{Checkpoint, ForkVersionDeserialize};
 use derivative::Derivative;
 use safe_arith::ArithError;
 use serde::{Deserialize, Serialize};
@@ -26,6 +26,12 @@ pub enum Error {
     InvalidCommitteeIndex,
 }

+impl From<ssz_types::Error> for Error {
+    fn from(e: ssz_types::Error) -> Self {
+        Error::SszTypesError(e)
+    }
+}
+
 #[superstruct(
     variants(Base, Electra),
     variant_attributes(
@@ -487,6 +493,46 @@ impl<'a, E: EthSpec> From<AttestationRefOnDisk<'a, E>> for AttestationRef<'a, E>
     }
 }

+impl<E: EthSpec> ForkVersionDeserialize for Attestation<E> {
+    fn deserialize_by_fork<'de, D: serde::Deserializer<'de>>(
+        value: serde_json::Value,
+        fork_name: crate::ForkName,
+    ) -> Result<Self, D::Error> {
+        if fork_name.electra_enabled() {
+            let attestation: AttestationElectra<E> =
+                serde_json::from_value(value).map_err(serde::de::Error::custom)?;
+            Ok(Attestation::Electra(attestation))
+        } else {
+            let attestation: AttestationBase<E> =
+                serde_json::from_value(value).map_err(serde::de::Error::custom)?;
+            Ok(Attestation::Base(attestation))
+        }
+    }
+}
+
+impl<E: EthSpec> ForkVersionDeserialize for Vec<Attestation<E>> {
+    fn deserialize_by_fork<'de, D: serde::Deserializer<'de>>(
+        value: serde_json::Value,
+        fork_name: crate::ForkName,
+    ) -> Result<Self, D::Error> {
+        if fork_name.electra_enabled() {
+            let attestations: Vec<AttestationElectra<E>> =
+                serde_json::from_value(value).map_err(serde::de::Error::custom)?;
+            Ok(attestations
+                .into_iter()
+                .map(Attestation::Electra)
+                .collect::<Vec<_>>())
+        } else {
+            let attestations: Vec<AttestationBase<E>> =
+                serde_json::from_value(value).map_err(serde::de::Error::custom)?;
+            Ok(attestations
+                .into_iter()
+                .map(Attestation::Base)
+                .collect::<Vec<_>>())
+        }
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
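The `ForkVersionDeserialize` impls above encode a simple rule: the payload alone is ambiguous, so the fork name decides which superstruct variant to construct. A stdlib-only model of that dispatch, with a `u64` payload standing in for the real serde value and hypothetical simplified types:

```rust
// Minimal model of fork-gated decoding: the same raw payload is interpreted
// as a different variant depending on the active fork.
#[derive(Debug, PartialEq)]
enum ForkName {
    Base,
    Electra,
}

impl ForkName {
    fn electra_enabled(&self) -> bool {
        matches!(self, ForkName::Electra)
    }
}

#[derive(Debug, PartialEq)]
enum Attestation {
    Base(u64),
    Electra(u64),
}

fn deserialize_by_fork(raw: u64, fork: &ForkName) -> Attestation {
    // The raw bytes/JSON carry no variant tag; the fork context picks it.
    if fork.electra_enabled() {
        Attestation::Electra(raw)
    } else {
        Attestation::Base(raw)
    }
}

fn main() {
    assert_eq!(deserialize_by_fork(7, &ForkName::Base), Attestation::Base(7));
    assert_eq!(deserialize_by_fork(7, &ForkName::Electra), Attestation::Electra(7));
}
```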


@@ -171,6 +171,29 @@ impl<E: EthSpec> TestRandom for AttesterSlashing<E> {
     }
 }

+impl<E: EthSpec> crate::ForkVersionDeserialize for Vec<AttesterSlashing<E>> {
+    fn deserialize_by_fork<'de, D: serde::Deserializer<'de>>(
+        value: serde_json::Value,
+        fork_name: crate::ForkName,
+    ) -> Result<Self, D::Error> {
+        if fork_name.electra_enabled() {
+            let slashings: Vec<AttesterSlashingElectra<E>> =
+                serde_json::from_value(value).map_err(serde::de::Error::custom)?;
+            Ok(slashings
+                .into_iter()
+                .map(AttesterSlashing::Electra)
+                .collect::<Vec<_>>())
+        } else {
+            let slashings: Vec<AttesterSlashingBase<E>> =
+                serde_json::from_value(value).map_err(serde::de::Error::custom)?;
+            Ok(slashings
+                .into_iter()
+                .map(AttesterSlashing::Base)
+                .collect::<Vec<_>>())
+        }
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;


@@ -287,17 +287,21 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
             // Then download, sign and publish a `SignedAggregateAndProof` for each
             // validator that is elected to aggregate for this `slot` and
             // `committee_index`.
-            self.produce_and_publish_aggregates(&attestation_data, &validator_duties)
-                .await
-                .map_err(move |e| {
-                    crit!(
-                        log,
-                        "Error during attestation routine";
-                        "error" => format!("{:?}", e),
-                        "committee_index" => committee_index,
-                        "slot" => slot.as_u64(),
-                    )
-                })?;
+            self.produce_and_publish_aggregates(
+                &attestation_data,
+                committee_index,
+                &validator_duties,
+            )
+            .await
+            .map_err(move |e| {
+                crit!(
+                    log,
+                    "Error during attestation routine";
+                    "error" => format!("{:?}", e),
+                    "committee_index" => committee_index,
+                    "slot" => slot.as_u64(),
+                )
+            })?;
         }

         Ok(())
@@ -445,6 +449,11 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
             warn!(log, "No attestations were published");
             return Ok(None);
         }
+        let fork_name = self
+            .context
+            .eth2_config
+            .spec
+            .fork_name_at_slot::<E>(attestation_data.slot);

         // Post the attestations to the BN.
         match self
@@ -458,9 +467,15 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
                         &metrics::ATTESTATION_SERVICE_TIMES,
                         &[metrics::ATTESTATIONS_HTTP_POST],
                     );
-                    beacon_node
-                        .post_beacon_pool_attestations(attestations)
-                        .await
+                    if fork_name.electra_enabled() {
+                        beacon_node
+                            .post_beacon_pool_attestations_v2(attestations, fork_name)
+                            .await
+                    } else {
+                        beacon_node
+                            .post_beacon_pool_attestations_v1(attestations)
+                            .await
+                    }
                 },
             )
             .await
@@ -504,6 +519,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
     async fn produce_and_publish_aggregates(
         &self,
         attestation_data: &AttestationData,
+        committee_index: CommitteeIndex,
         validator_duties: &[DutyAndProof],
     ) -> Result<(), String> {
         let log = self.context.log();
@@ -516,6 +532,12 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
             return Ok(());
         }

+        let fork_name = self
+            .context
+            .eth2_config
+            .spec
+            .fork_name_at_slot::<E>(attestation_data.slot);
+
         let aggregated_attestation = &self
             .beacon_nodes
             .first_success(
@@ -526,17 +548,36 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
                         &metrics::ATTESTATION_SERVICE_TIMES,
                         &[metrics::AGGREGATES_HTTP_GET],
                     );
-                    beacon_node
-                        .get_validator_aggregate_attestation(
-                            attestation_data.slot,
-                            attestation_data.tree_hash_root(),
-                        )
-                        .await
-                        .map_err(|e| {
-                            format!("Failed to produce an aggregate attestation: {:?}", e)
-                        })?
-                        .ok_or_else(|| format!("No aggregate available for {:?}", attestation_data))
-                        .map(|result| result.data)
+                    if fork_name.electra_enabled() {
+                        beacon_node
+                            .get_validator_aggregate_attestation_v2(
+                                attestation_data.slot,
+                                attestation_data.tree_hash_root(),
+                                committee_index,
+                            )
+                            .await
+                            .map_err(|e| {
+                                format!("Failed to produce an aggregate attestation: {:?}", e)
+                            })?
+                            .ok_or_else(|| {
+                                format!("No aggregate available for {:?}", attestation_data)
+                            })
+                            .map(|result| result.data)
+                    } else {
+                        beacon_node
+                            .get_validator_aggregate_attestation_v1(
+                                attestation_data.slot,
+                                attestation_data.tree_hash_root(),
+                            )
+                            .await
+                            .map_err(|e| {
+                                format!("Failed to produce an aggregate attestation: {:?}", e)
+                            })?
+                            .ok_or_else(|| {
+                                format!("No aggregate available for {:?}", attestation_data)
+                            })
+                            .map(|result| result.data)
+                    }
                 },
             )
             .await
@@ -604,9 +645,20 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
                         &metrics::ATTESTATION_SERVICE_TIMES,
                         &[metrics::AGGREGATES_HTTP_POST],
                     );
-                    beacon_node
-                        .post_validator_aggregate_and_proof(signed_aggregate_and_proofs_slice)
-                        .await
+                    if fork_name.electra_enabled() {
+                        beacon_node
+                            .post_validator_aggregate_and_proof_v2(
+                                signed_aggregate_and_proofs_slice,
+                                fork_name,
+                            )
+                            .await
+                    } else {
+                        beacon_node
+                            .post_validator_aggregate_and_proof_v1(
+                                signed_aggregate_and_proofs_slice,
+                            )
+                            .await
+                    }
                 },
            )
            .await
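The validator-client changes above route aggregate requests through the v2 endpoint when Electra is active, which additionally requires the `committee_index` query parameter (per EIP-7549, the index moves out of `AttestationData`). A stdlib-only sketch of the resulting query-string shape; the path and parameter names mirror the endpoint above, but the string building is illustrative:

```rust
// Build the aggregate_attestation query: v2 appends committee_index, v1
// (modeled here as `None`) omits it.
fn aggregate_attestation_path(slot: u64, root: &str, committee_index: Option<u64>) -> String {
    let mut path = format!(
        "/eth/v2/validator/aggregate_attestation?slot={slot}&attestation_data_root={root}"
    );
    if let Some(index) = committee_index {
        path.push_str(&format!("&committee_index={index}"));
    }
    path
}

fn main() {
    let v2 = aggregate_attestation_path(123, "0xabc", Some(7));
    assert!(v2.ends_with("&committee_index=7"));
    assert!(v2.contains("slot=123"));

    let without_index = aggregate_attestation_path(123, "0xabc", None);
    assert!(!without_index.contains("committee_index"));
}
```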