Which issue # does this PR address?
None
As discussed in private with @jimmygchen, Lighthouse's `earliest_available_slot` is guaranteed to always align with epoch boundaries, but as a safety measure we should compare against `start_slot` in case other clients differ in their implementations.
We agreed it would be safer at least for `synced_peers_for_epoch`; I also made the change in `has_good_custody_range_sync_peer`, but please review that part.
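A minimal sketch of the safer check, using plain integers in place of Lighthouse's `Slot`/`Epoch` types (the helper names here are illustrative):

```rust
const SLOTS_PER_EPOCH: u64 = 32;

/// First slot of an epoch (mirrors `Epoch::start_slot`).
fn epoch_start_slot(epoch: u64) -> u64 {
    epoch * SLOTS_PER_EPOCH
}

/// A peer can serve an epoch only if its `earliest_available_slot` covers the
/// epoch's start slot. This stays correct even if another client reports a
/// mid-epoch `earliest_available_slot`.
fn peer_can_serve_epoch(earliest_available_slot: u64, epoch: u64) -> bool {
    earliest_available_slot <= epoch_start_slot(epoch)
}

fn main() {
    // A peer reporting slot 33 is missing slot 32 of epoch 1 (slots 32..64).
    assert!(!peer_can_serve_epoch(33, 1));
    assert!(peer_can_serve_epoch(32, 1));
}
```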
Co-Authored-By: Antoine James <antoine@ethereum.org>
Co-Authored-By: Jimmy Chen <jimmy@sigmaprime.io>
N/A
1. In the batch retry logic, we were failing to set the batch state to `AwaitingDownload` before attempting a retry. This PR sets it to `AwaitingDownload` before the retry and sets it back to `Downloading` if the retry succeeded in sending out a request
2. Remove all peer scoring logic from retrying and rely on just de-prioritizing the failed peer. I finally concede the point to @dapplion 😄
3. Changes `block_components_by_range_request` to accept `block_peers` and `column_peers`. This is to ensure that we use the full synced peerset for requesting columns in order to avoid splitting the column peers among multiple head chains. During forward sync, we want the block peers to be the peers from the syncing chain and column peers to be all synced peers from the peerdb.
Also fixes a typo and calls `attempt_send_awaiting_download_batches` from more places. A sketch of the new retry flow from point 1 is below.
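A hedged sketch of the retry flow; `BatchState`, `Batch` and `try_send_request` are simplified stand-ins for the real sync types:

```rust
#[derive(Debug, PartialEq)]
enum BatchState {
    AwaitingDownload,
    Downloading,
}

struct Batch {
    state: BatchState,
}

impl Batch {
    /// Reset to `AwaitingDownload` *before* the retry, and only move back to
    /// `Downloading` once a request was actually sent out.
    fn retry(&mut self) {
        self.state = BatchState::AwaitingDownload;
        if try_send_request() {
            self.state = BatchState::Downloading;
        }
        // Otherwise the batch stays in `AwaitingDownload` and can be picked
        // up later by `attempt_send_awaiting_download_batches`.
    }
}

/// Placeholder: the real code attempts an RPC request to a peer.
fn try_send_request() -> bool {
    true
}
```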
Co-Authored-By: Pawan Dhananjay <pawandhananjay@gmail.com>
Anchor currently depends on `lighthouse_network` for a few types and utilities that live within it. As we use our own libp2p behaviours, we do not actually use the core logic in that crate. This makes us transitively depend on a bunch of unneeded crates (even a whole separate libp2p if the versions mismatch!)
Move the things we require into their own lightweight crate.
Co-Authored-By: Daniel Knopik <daniel@dknopik.de>
Closes:
- #7865
- #7855
Changes extracted from earlier PR #7876
This PR fixes two main things with a few other improvements mentioned below:
- Prevent Lighthouse from repeatedly sending `DataColumnByRoot` requests to an unsynced peer, causing lookup sync to get stuck
- Allows Lighthouse to send discovery requests if there aren't enough **synced** peers in the required sampling subnets - this fixes the stuck sync scenario where there aren't enough usable peers in a sampling subnet but no discovery is attempted.
- Trigger peer discovery queries if the custody subnet peer count drops below the minimum threshold
- Update peer pruning logic to prioritise uniform distribution across all data column subnets and avoid pruning sampling peers if the count is below the target threshold (2)
- Check sync status when making discovery requests, to make sure we don't ignore requests if there aren't enough synced peers in the required sampling subnets (sketched below)
- Optimise some of the `PeerDB` functions checking custody peers
- Only send lookup requests to peers that are synced or advanced
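A minimal sketch of the sync-status-aware discovery check described above; `SyncStatus`, `Peer` and `needs_discovery` are hypothetical stand-ins, not the real API:

```rust
enum SyncStatus {
    Synced,
    Advanced,
    Behind,
}

struct Peer {
    sync_status: SyncStatus,
    subnets: Vec<u64>,
}

/// A discovery query for `subnet` is warranted unless we already have enough
/// *synced* (or advanced) peers on it; counting unsynced peers here is what
/// previously left sync stuck with no discovery attempted.
fn needs_discovery(peers: &[Peer], subnet: u64, min_synced: usize) -> bool {
    peers
        .iter()
        .filter(|p| matches!(p.sync_status, SyncStatus::Synced | SyncStatus::Advanced))
        .filter(|p| p.subnets.contains(&subnet))
        .count()
        < min_synced
}

fn main() {
    let peers = vec![
        Peer { sync_status: SyncStatus::Behind, subnets: vec![3] },
        Peer { sync_status: SyncStatus::Synced, subnets: vec![5] },
    ];
    assert!(needs_discovery(&peers, 3, 1)); // only an unsynced peer on subnet 3
    assert!(!needs_discovery(&peers, 5, 1));
}
```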
Which issue # does this PR address?
Closes #7604
Improvements to range sync including:
1. Restrict column requests to peers that are part of the `SyncingChain`
2. Attribute the fault to the correct peer and downscore them if they don't return the data columns for the request
3. Improve sync performance by retrying only the failed columns from other peers instead of failing the entire batch
4. Uses the `earliest_available_slot` to make requests only to peers that claim to have the epoch. Note: if no `earliest_available_slot` info is available, fall back to the previous logic, i.e. assume the peer has everything backfilled up to the WS checkpoint/DA boundary (sketched below).
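A rough sketch of point 4's fallback; the names are illustrative rather than the actual Lighthouse API:

```rust
/// Decide whether a peer can serve a batch starting at `batch_start_slot`.
/// `earliest_available_slot` comes from the peer; when it is absent we fall
/// back to the old assumption that the peer has all data back to
/// `da_boundary_slot` (WS checkpoint / DA boundary).
fn peer_has_batch(
    earliest_available_slot: Option<u64>,
    batch_start_slot: u64,
    da_boundary_slot: u64,
) -> bool {
    match earliest_available_slot {
        Some(earliest) => earliest <= batch_start_slot,
        None => batch_start_slot >= da_boundary_slot,
    }
}
```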
Tested this on fusaka-devnet-2 with a full node and a supernode, and the recovery logic seems to work well.
Also tested this a little on mainnet.
Need to do more testing and possibly add some unit tests.
- Re-opens https://github.com/sigp/lighthouse/pull/6864 targeting unstable
Range sync and backfill sync still assume that each batch request is served by a single peer. This assumption breaks with PeerDAS, where we request custody columns from N peers.
Issues with current unstable:
- Peer prioritization counts batch requests per peer. This accounting is now broken: data columns by range requests are not counted
- Peer selection for data columns by range ignores the set of peers on a syncing chain and instead draws from the global pool of peers
- The implementation is very strict when we have no peers to request from. After PeerDAS this case is very common, and we want to be flexible and handle it better than just hard-failing everything.
- [x] Upstream peer prioritization to the network context, which knows exactly how many active requests a peer has (including columns by range)
- [x] Upstream peer selection to the network context; now `block_components_by_range_request` gets a set of peers to choose from instead of a single peer. If it can't find a peer, it returns the error `RpcRequestSendError::NoPeer` (sketched after this list)
- [ ] Range sync and backfill sync handle `RpcRequestSendError::NoPeer` explicitly
- [ ] Range sync: leaves the batch in `AwaitingDownload` state and does nothing. **TODO**: we should have some mechanism to fail the chain if it's stale for too long - **EDIT**: Not done in this PR
- [x] Backfill sync: pauses the sync until another peer joins - **EDIT**: Same logic as unstable
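A hedged sketch of the upstreamed peer selection: pick the candidate with the fewest active requests, or surface `NoPeer` for the sync algorithm to handle. Only `RpcRequestSendError::NoPeer` mirrors a name used above; the rest is illustrative:

```rust
use std::collections::HashMap;

#[derive(Debug)]
enum RpcRequestSendError {
    NoPeer,
}

type PeerId = u64;

/// Choose the least-loaded peer from the given candidates.
fn select_peer(
    candidates: &[PeerId],
    active_requests: &HashMap<PeerId, usize>,
) -> Result<PeerId, RpcRequestSendError> {
    candidates
        .iter()
        .min_by_key(|p| active_requests.get(*p).copied().unwrap_or(0))
        .copied()
        .ok_or(RpcRequestSendError::NoPeer)
}

fn main() {
    let active = HashMap::from([(1, 3), (2, 0)]);
    assert_eq!(select_peer(&[1, 2], &active).unwrap(), 2);
    assert!(select_peer(&[], &active).is_err()); // -> NoPeer
}
```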
### TODOs
- [ ] Add tests :)
- [x] Manually test backfill sync
Note: this touches the mainnet path!
N/A
Adds endpoints to add and remove trusted peers via the HTTP API. The added peers are trusted peers, so they won't be disconnected for bad scores. If a trusted peer disconnects from us, we try to maintain the connection by re-dialing it every heartbeat.
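A minimal sketch of the heartbeat re-dial, assuming hypothetical `is_connected` and `dial` helpers rather than the real peer-manager internals:

```rust
type PeerId = u64;

struct PeerManager {
    trusted_peers: Vec<PeerId>,
}

impl PeerManager {
    /// Called on every heartbeat: re-dial any trusted peer that is not
    /// currently connected, so the connection is maintained.
    fn heartbeat(&self) {
        for peer in &self.trusted_peers {
            if !is_connected(*peer) {
                dial(*peer);
            }
        }
    }
}

fn is_connected(_peer: PeerId) -> bool {
    false // placeholder lookup into the PeerDB
}

fn dial(peer: PeerId) {
    println!("dialing trusted peer {peer}");
}
```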
Currently we have very poor coverage of range sync with unit tests. With the event driven test framework we could cover much more ground and be confident when modifying the code.
Add two basic cases:
- Happy path, complete a finalized sync for 2 epochs
- Post-PeerDAS case where we start without enough custody peers and later we find enough
⚠️ If you have ideas for more test cases, please let me know! I'll write them
* drop score Ord, PartialOrd, Eq and PartialEq impls
and impl total_cmp instead
* Revert "Fix test failure on Rust v1.81 (#6407)"
This reverts commit 8a085fc828.
* reverse in the compare function
* lint mdfiles
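The score ordering change above, sketched with a simplified `Score` wrapper: the derived comparison impls are dropped in favour of an explicit `total_cmp`, reversed so that higher scores sort first:

```rust
use std::cmp::Ordering;

struct Score(f64);

impl Score {
    /// Total ordering over the inner f64, reversed so better scores come
    /// first (the "reverse in the compare function" commit).
    fn total_cmp(&self, other: &Self) -> Ordering {
        other.0.total_cmp(&self.0)
    }
}

fn main() {
    let mut scores = vec![Score(1.0), Score(-5.0), Score(3.0)];
    scores.sort_by(|a, b| a.total_cmp(b));
    assert_eq!(scores[0].0, 3.0); // best score first
}
```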
* Improve `get_custody_columns` validation, caching and error handling.
* Merge branch 'unstable' into get-custody-columns-error-handing
* Fix failing test and add more test.
* Fix failing test and add more test.
* Merge branch 'unstable' into get-custody-columns-error-handing
# Conflicts:
# beacon_node/lighthouse_network/src/discovery/subnet_predicate.rs
# beacon_node/lighthouse_network/src/peer_manager/peerdb.rs
# beacon_node/lighthouse_network/src/peer_manager/peerdb/peer_info.rs
# beacon_node/lighthouse_network/src/types/globals.rs
# beacon_node/network/src/service.rs
# consensus/types/src/data_column_subnet_id.rs
* Add unit test to make sure the default specs won't panic on the `compute_custody_requirement_subnets` function.
* Add condition when calling `compute_custody_subnets_from_metadata` and update logs.
* Validate `csc` when returning from enr. Remove `csc` computation on connection since we get them on metadata anyway.
* Add `peers_per_custody_subnet_count` to track peer csc and supernodes.
* Disconnect peers with invalid metadata and find other peers instead.
* Fix sampling tests.
* Merge branch 'unstable' into get-custody-columns-error-handing
* Merge branch 'unstable' into get-custody-columns-error-handing
* 1D PeerDAS prototype: Data format and Distribution (#5050)
* Build and publish column sidecars. Add stubs for gossip.
* Add blob column subnets
* Add `BlobColumnSubnetId` and initial compute subnet logic.
* Subscribe to blob column subnets.
* Introduce `BLOB_COLUMN_SUBNET_COUNT` based on DAS configuration parameter changes.
* Fix column sidecar type to use `VariableList` for data.
* Fix lint errors.
* Update types and naming to latest consensus-spec #3574.
* Fix test and some cleanups.
* Merge branch 'unstable' into das
* Merge branch 'unstable' into das
* Merge branch 'unstable' into das
# Conflicts:
# consensus/types/src/chain_spec.rs
* Add `DataColumnSidecarsByRoot` req/resp protocol (#5196)
* Add stub for `DataColumnsByRoot`
* Add basic implementation of serving RPC data column from DA checker.
* Store data columns in early attester cache and blobs db.
* Apply suggestions from code review
Co-authored-by: Eitan Seri-Levi <eserilev@gmail.com>
Co-authored-by: Jacob Kaufmann <jacobkaufmann18@gmail.com>
* Fix build.
* Store `DataColumnInfo` in database and various cleanups.
* Update `DataColumnSidecar` ssz max size and remove panic code.
---------
Co-authored-by: Eitan Seri-Levi <eserilev@gmail.com>
Co-authored-by: Jacob Kaufmann <jacobkaufmann18@gmail.com>
* feat: add DAS KZG in data col construction (#5210)
* feat: add DAS KZG in data col construction
* refactor data col sidecar construction
* refactor: add data cols to GossipVerifiedBlockContents
* Disable windows tests for `das` branch. (c-kzg doesn't build on windows)
* Formatting and lint changes only.
* refactor: remove iters in construction of data cols
* Update vec capacity and error handling.
* Add `data_column_sidecar_computation_seconds` metric.
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Merge branch 'unstable' into das
# Conflicts:
# .github/workflows/test-suite.yml
# beacon_node/lighthouse_network/src/types/topics.rs
* fix: update data col subnet count from 64 to 32 (#5413)
* feat: add peerdas custody field to ENR (#5409)
* feat: add peerdas custody field to ENR
* add hash prefix step in subnet computation
* refactor test and fix possible u64 overflow
* default to min custody value if not present in ENR
* Merge branch 'unstable' into das
* Merge branch 'unstable' into das-unstable-merge-0415
# Conflicts:
# Cargo.lock
# beacon_node/beacon_chain/src/data_availability_checker.rs
# beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
# beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
# beacon_node/beacon_chain/src/data_availability_checker/processing_cache.rs
# beacon_node/lighthouse_network/src/rpc/methods.rs
# beacon_node/network/src/network_beacon_processor/mod.rs
# beacon_node/network/src/sync/block_lookups/tests.rs
# crypto/kzg/Cargo.toml
* Merge remote-tracking branch 'sigp/unstable' into das
* Merge remote-tracking branch 'sigp/unstable' into das
* Fix merge conflicts.
* Send custody data column to `DataAvailabilityChecker` for determining block importability (#5570)
* Only import custody data columns after publishing a block.
* Add `subscribe-all-data-column-subnets` and pass custody column count to `availability_cache`.
* Add custody requirement checks to `availability_cache`.
* Fix config not being passed to DAChecker and add more logging.
* Introduce `peer_das_epoch` and make blobs and columns mutually exclusive.
* Add DA filter for PeerDAS.
* Fix data availability check and use test_logger in tests.
* Fix subscribe to all data column subnets not working correctly.
* Fix tests.
* Only publish column sidecars if PeerDAS is activated. Add `PEER_DAS_EPOCH` chain spec serialization.
* Remove unused data column index in `OverflowKey`.
* Fix column sidecars incorrectly produced when there are no blobs.
* Re-instate index to `OverflowKey::DataColumn` and downgrade noisy debug log to `trace`.
* DAS sampling on sync (#5616)
* Data availability sampling on sync
* Address @jimmygchen review
* Trigger sampling
* Address some review comments and only send `SamplingBlock` sync message after PEER_DAS_EPOCH.
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Merge branch 'unstable' into das
# Conflicts:
# Cargo.lock
# Cargo.toml
# beacon_node/beacon_chain/src/block_verification.rs
# beacon_node/http_api/src/publish_blocks.rs
# beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
# beacon_node/lighthouse_network/src/rpc/protocol.rs
# beacon_node/lighthouse_network/src/types/pubsub.rs
# beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
# beacon_node/store/src/hot_cold_store.rs
# consensus/types/src/beacon_state.rs
# consensus/types/src/chain_spec.rs
# consensus/types/src/eth_spec.rs
* Merge branch 'unstable' into das
* Re-process early sampling requests (#5569)
* Re-process early sampling requests
# Conflicts:
# beacon_node/beacon_processor/src/work_reprocessing_queue.rs
# beacon_node/lighthouse_network/src/rpc/methods.rs
# beacon_node/network/src/network_beacon_processor/rpc_methods.rs
* Update beacon_node/beacon_processor/src/work_reprocessing_queue.rs
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Add missing var
* Beta compiler fixes and small typo fixes.
* Remove duplicate method.
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Merge remote-tracking branch 'sigp/unstable' into das
* Fix merge conflict.
* Add data columns by root to currently supported protocol list (#5678)
* Add data columns by root to currently supported protocol list.
* Add missing data column by roots handling.
* Merge branch 'unstable' into das
# Conflicts:
# Cargo.lock
# Cargo.toml
# beacon_node/network/src/sync/block_lookups/tests.rs
# beacon_node/network/src/sync/manager.rs
* Fix simulator tests on `das` branch (#5731)
* Bump genesis delay in sim tests as KZG setup takes longer for DAS.
* Fix incorrect YAML spacing.
* DataColumnByRange boilerplate (#5353)
* add boilerplate
* fmt
* PeerDAS custody lookup sync (#5684)
* Implement custody sync
* Lint
* Fix tests
* Fix rebase issue
* Add data column kzg verification and update `c-kzg`. (#5701)
* Add data column kzg verification and update `c-kzg`.
* Fix incorrect `Cell` size.
* Add kzg verification on rpc blocks.
* Add kzg verification on rpc data columns.
* Rename `PEER_DAS_EPOCH` to `EIP7594_FORK_EPOCH` for client interop. (#5750)
* Fetch custody columns in range sync (#5747)
* Fetch custody columns in range sync
* Clean up todos
* Remove `BlobSidecar` construction and publish after PeerDAS activated (#5759)
* Avoid building and publishing blob sidecars after PeerDAS.
* Ignore gossip blobs with a slot greater than peer das activation epoch.
* Only attempt to verify blob count and import blobs before PeerDAS.
* #5684 review comments (#5748)
* #5684 review comments.
* Doc and message update only.
* Fix incorrect condition when constructing `RpcBlock` with `DataColumn`s
* Make sampling tests deterministic (#5775)
* PeerDAS spec tests (#5772)
* Add get_custody_columns spec tests.
* Add kzg merkle proof spec tests.
* Add SSZ spec tests.
* Add remaining KZG tests
* Load KZG only once per process, exclude electra tests and add missing SSZ tests.
* Fix lint and missing changes.
* Ignore macOS generated file.
* Merge remote branch 'sigp/unstable' into das
* Merge remote tracking branch 'origin/unstable' into das
* Implement unconditional reconstruction for supernodes (#5781)
* Implement unconditional reconstruction for supernodes
* Move code into KzgVerifiedCustodyDataColumn
* Remove expect
* Add test
* Thanks justin
* Add withhold attack mode for interop (#5788)
* Add withhold attack mode
* Update readme
* Drop added readmes
* Undo styling changes
* Add column gossip verification and handle unknown parent block (#5783)
* Add column gossip verification and handle missing parent for columns.
* Review PR
* Fix rebase issue
* more lint issues :)
---------
Co-authored-by: dapplion <35266934+dapplion@users.noreply.github.com>
* Trigger sampling on sync events (#5776)
* Trigger sampling on sync events
* Update beacon_chain.rs
* Fix tests
* Fix tests
* PeerDAS parameter changes for devnet-0 (#5779)
* Update PeerDAS parameters to latest values.
* Lint fix
* Fix lint.
* Update hardcoded subnet count to 64 (#5791)
* Fix incorrect columns per subnet and config cleanup (#5792)
* Tidy up PeerDAS preset and config values.
* Fix broken config
* Fix DAS branch CI (#5793)
* Fix invalid syntax.
* Update cli doc. Ignore get_custody_columns test temporarily.
* Fix failing test and add verify inclusion test.
* Undo accidentally removed code.
* Only attempt reconstruct columns once. (#5794)
* Re-enable precompute table for peerdas kzg (#5795)
* Merge branch 'unstable' into das
* Update subscription filter. (#5797)
* Remove penalty for duplicate columns (expected due to reconstruction) (#5798)
* Revert DAS config for interop testing. Optimise get_custody_columns function. (#5799)
* Don't perform reconstruction for proposer node as it already has all the columns. (#5806)
* Multithread compute_cells_and_proofs (#5805)
* Multi-thread reconstruct data columns
* Multi-thread path for block production
* Merge branch 'unstable' into das
# Conflicts:
# .github/workflows/test-suite.yml
# beacon_node/network/src/sync/block_lookups/mod.rs
# beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
# beacon_node/network/src/sync/network_context.rs
* Fix CI errors.
* Move PeerDAS type-level config to configurable `ChainSpec` (#5828)
* Move PeerDAS type level config to `ChainSpec`.
* Fix tests
* Misc custody lookup improvements (#5821)
* Improve custody requests
* Type DataColumnsByRootRequestId
* Prioritize peers and load balance
* Update tests
* Address PR review
* Merge branch 'unstable' into das
* Rename deploy_block in network config (`das` branch) (#5852)
* Rename deploy_block.txt to deposit_contract_block.txt
* fmt
---------
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
* Merge branch 'unstable' into das
* Fix CI and merge issues.
* Merge branch 'unstable' into das
# Conflicts:
# beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
# lcli/src/main.rs
* Store data columns individually in store and caches (#5890)
* Store data columns individually in store and caches
* Implement data column pruning
* Merge branch 'unstable' into das
# Conflicts:
# Cargo.lock
* Update reconstruction benches to newer criterion version. (#5949)
* Merge branch 'unstable' into das
# Conflicts:
# .github/workflows/test-suite.yml
* chore: add `recover_cells_and_compute_proofs` method (#5938)
* chore: add recover_cells_and_compute_proofs method
* Introduce type alias `CellsAndKzgProofs` to address type complexity.
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Update `csc` format in ENR and spec tests for devnet-1 (#5966)
* Update `csc` format in ENR.
* Add spec tests for `recover_cells_and_kzg_proofs`.
* Add tests for ENR.
* Fix failing tests.
* Add protection against invalid csc value in ENR.
* Fix lint
* Fix csc encoding and decoding (#5997)
* Fix data column rpc request not being sent due to incorrect limits set. (#6000)
* Fix incorrect inbound request count causing rate limiting. (#6025)
* Merge branch 'stable' into das
# Conflicts:
# beacon_node/network/src/sync/block_lookups/tests.rs
# beacon_node/network/src/sync/block_sidecar_coupling.rs
# beacon_node/network/src/sync/manager.rs
# beacon_node/network/src/sync/network_context.rs
# beacon_node/network/src/sync/network_context/requests.rs
* Merge remote-tracking branch 'unstable' into das
* Add kurtosis config for DAS testing (#5968)
* Add kurtosis config for DAS testing.
* Fix invalid yaml file
* Update network parameter files.
* chore: add rust PeerdasKZG crypto library for peerdas functionality and rollback c-kzg dependency to 4844 version (#5941)
* chore: add recover_cells_and_compute_proofs method
* chore: add rust peerdas crypto library
* chore: integrate peerdaskzg rust library into kzg crate
* chore(multi):
- update `ssz_cell_to_crypto_cell`
- update conversion from the crypto cell type to a Vec<u8>. Since the Rust library defines them as references to an array, the conversion is simply `to_vec`
* chore(multi):
- update rest of code to handle the new crypto `Cell` type
- update test case code to no longer use the Box type
* chore: cleanup of superfluous conversions
* chore: revert c-kzg dependency back to v1
* chore: move dependency into correct order
* chore: update rust dependency
- This version includes a new method `PeerDasContext::with_num_threads`
* chore: remove Default initialization of PeerDasContext and explicitly set the parameters in `new_from_trusted_setup`
* chore: cleanup exports
* chore: commit updated cargo.lock
* Update Cargo.toml
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* chore: rename dependency
* chore: update peerdas lib
- sets the blst version to 0.3 so that it matches whatever lighthouse is using. Although 0.3.12 is latest, lighthouse is pinned to 0.3.3
* chore: fix clippy lifetime
- Rust doesn't allow you to elide the lifetime on type aliases
* chore: cargo clippy fix
* chore: cargo fmt
* chore: update lib to add redundant checks (these will be removed in consensus-specs PR 3819)
* chore: update dependency to ignore proofs
* chore: update peerdas lib to latest
* update lib
* chore: remove empty proof parameter
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Update PeerDAS interop testnet config (#6069)
* Update interop testnet config.
* Fix typo and remove target peers
* Avoid retrying same sampling peer that previously failed. (#6084)
* Various fixes to custody range sync (#6004)
* Only start requesting batches when there are good peers across all custody columns to avoid spamming block requests.
* Add custody peer check before mutating `BatchInfo` to avoid inconsistent state.
* Add check to cover a case where batch is not processed while waiting for custody peers to become available.
* Fix lint and logic error
* Fix `good_peers_on_subnet` always returning false for `DataColumnSubnet`.
* Add test for `get_custody_peers_for_column`
* Revert epoch parameter refactor.
* Fall back to default custody requirement if peer ENR is not present.
* Add metrics and update code comment.
* Add more debug logs.
* Use subscribed peers on subnet before MetaDataV3 is implemented. Remove peer_id matching when injecting error because multiple peers are used for range requests. Use randomized custodial peer to avoid repeatedly sending requests to failing peers. Batch by range request where possible.
* Remove unused code and update docs.
* Add comment
* chore: update peerdas-kzg library (#6118)
* chore: update peerDAS lib
* chore: update library
* chore: update library to version that include "init context" benchmarks and optional validation checks
* chore: (can remove) -- Add benchmarks for init context
* Prevent continuous searchers for low-peer networks (#6162)
* Merge branch 'unstable' into das
* Fix merge conflicts
* Add cli flag to enable sampling and disable by default. (#6209)
* chore: Use reference to an array representing a blob instead of an owned KzgBlob (#6179)
* add KzgBlobRef type
* modify code to use KzgBlobRef
* clippy
* Remove Deneb blob related changes to maintain compatibility with `c-kzg-4844`.
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* Store computed custody subnets in PeerDB and fix custody lookup test (#6218)
* Fix failing custody lookup tests.
* Store custody subnets in PeerDB, fix custody lookup test and refactor some methods.
* Merge branch 'unstable' into das
# Conflicts:
# beacon_node/beacon_chain/src/beacon_chain.rs
# beacon_node/beacon_chain/src/block_verification_types.rs
# beacon_node/beacon_chain/src/builder.rs
# beacon_node/beacon_chain/src/data_availability_checker.rs
# beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
# beacon_node/beacon_chain/src/data_column_verification.rs
# beacon_node/beacon_chain/src/early_attester_cache.rs
# beacon_node/beacon_chain/src/historical_blocks.rs
# beacon_node/beacon_chain/tests/store_tests.rs
# beacon_node/lighthouse_network/src/discovery/enr.rs
# beacon_node/network/src/service.rs
# beacon_node/src/cli.rs
# beacon_node/store/src/hot_cold_store.rs
# beacon_node/store/src/lib.rs
# lcli/src/generate_bootnode_enr.rs
* Fix CI failures after merge.
* Batch sampling requests by peer (#6256)
* Batch sampling requests by peer
* Fix clippy errors
* Fix tests
* Add column_index to error message for ease of tracing
* Remove outdated comment
* Fix range sync never evaluating request as finished, causing it to get stuck. (#6276)
* Merge branch 'unstable' into das-0821-merge
# Conflicts:
# Cargo.lock
# Cargo.toml
# beacon_node/beacon_chain/src/beacon_chain.rs
# beacon_node/beacon_chain/src/data_availability_checker.rs
# beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
# beacon_node/beacon_chain/src/data_column_verification.rs
# beacon_node/beacon_chain/src/kzg_utils.rs
# beacon_node/beacon_chain/src/metrics.rs
# beacon_node/beacon_processor/src/lib.rs
# beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
# beacon_node/lighthouse_network/src/rpc/config.rs
# beacon_node/lighthouse_network/src/rpc/methods.rs
# beacon_node/lighthouse_network/src/rpc/outbound.rs
# beacon_node/lighthouse_network/src/rpc/rate_limiter.rs
# beacon_node/lighthouse_network/src/service/api_types.rs
# beacon_node/lighthouse_network/src/types/globals.rs
# beacon_node/network/src/network_beacon_processor/mod.rs
# beacon_node/network/src/network_beacon_processor/rpc_methods.rs
# beacon_node/network/src/network_beacon_processor/sync_methods.rs
# beacon_node/network/src/sync/block_lookups/common.rs
# beacon_node/network/src/sync/block_lookups/mod.rs
# beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
# beacon_node/network/src/sync/block_lookups/tests.rs
# beacon_node/network/src/sync/manager.rs
# beacon_node/network/src/sync/network_context.rs
# consensus/types/src/data_column_sidecar.rs
# crypto/kzg/Cargo.toml
# crypto/kzg/benches/benchmark.rs
# crypto/kzg/src/lib.rs
* Fix custody tests and load PeerDAS KZG instead.
* Fix ef tests and bench compilation.
* Fix failing sampling test.
* Merge pull request #6287 from jimmygchen/das-0821-merge
Merge `unstable` into `das` 20240821
* Remove get_block_import_status
* Merge branch 'unstable' into das
* Re-enable Windows release tests.
* Address some review comments.
* Address more review comments and cleanups.
* Comment out peer DAS KZG EF tests for now
* Address more review comments and fix build.
* Merge branch 'das' of github.com:sigp/lighthouse into das
* Unignore Electra tests
* Fix metric name
* Address some of Pawan's review comments
* Merge remote-tracking branch 'origin/unstable' into das
* Update PeerDAS network parameters for peerdas-devnet-2 (#6290)
* update subnet count & custody req
* das network params
* update ef tests
---------
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
* WIP
* Initial working version of new sync tests.
* Remove sync traits and fix lints.
* Reduce internal method visibility and make test method instead. Remove extra beacon chain harness instance created in tests.
* Improve `SyncTester` api.
* Fix lint.
* Test example
* Lookup tests using rig
* Tests should interface with events only
* lint
* Skip deneb test pre-deneb
* Add more assertions
* Remove logging changes
* Address @jimmygchen comments
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into bn-p2p-tests
* remove unused assertions
* fix lint
## Issue Addressed
https://github.com/sigp/lighthouse/issues/4543
## Proposed Changes
- Removes `NotBanned` from `BanResult`, implements `Display` and `std::error::Error` for `BanResult`, and changes the `ban_result` return type to `Option<BanResult>`, which allows returning a `BanResult` from `handle_established_inbound_connection`
- moves the check for banned peers from `on_connection_established` to `handle_established_inbound_connection` to start addressing #4543.
- Removes `allow_block_list` as it's now redundant? Not sure about this one, but if `PeerManager` keeps track of the banned peers, there's no need to send a `Swarm` event for `allow_block_list` to also keep that list, right?
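A hedged sketch of the `Option<BanResult>` shape described above; the variants and handler body are illustrative:

```rust
use std::fmt;

#[derive(Debug)]
enum BanResult {
    BadScore,
    BannedIp,
}

impl fmt::Display for BanResult {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            BanResult::BadScore => write!(f, "peer has a bad score"),
            BanResult::BannedIp => write!(f, "peer is banned via its IP"),
        }
    }
}

impl std::error::Error for BanResult {}

type PeerId = u64;

/// With `NotBanned` removed, "not banned" is simply `None`.
fn ban_result(_peer: &PeerId) -> Option<BanResult> {
    None // placeholder lookup into the PeerDB
}

fn handle_established_inbound_connection(peer: PeerId) -> Result<(), BanResult> {
    if let Some(reason) = ban_result(&peer) {
        return Err(reason); // deny the inbound connection
    }
    Ok(())
}
```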
## Questions
- #4543 refers:
> More specifically, implement the connection limit behaviour inside the peer manager.
@AgeManning do you mean copying `libp2p::connection_limits::Behaviour`'s code into `PeerManager`, having it as an inner `NetworkBehaviour` of `PeerManager`, or something else? If it's one of the first two, I think it probably makes more sense to keep it as it is, as it's less code to maintain.
> Also implement the banning of peers inside the behaviour, rather than passing messages back up to the swarm.
I tried to achieve this, but we still need to pass the `PeerManagerEvent::Banned` swarm event, as `DiscV5` handles its node and IP management internally and I did not find a method to query if a peer is banned. Is there anything else we can do from here?
3397612160/beacon_node/lighthouse_network/src/discovery/mod.rs (L931-L940)
Same as the question above, I did not find a way to check if `DiscV5` has the peer banned, so that we could check here and avoid sending `Swarm` events
3397612160/beacon_node/lighthouse_network/src/peer_manager/network_behaviour.rs (L168-L178)
Is there a chance we try to dial a peer that has been banned previously?
Thanks!
## Issue Addressed
#4402
## Proposed Changes
This PR adds QUIC support to Lighthouse. As this is not officially spec'd this will only work between lighthouse <-> lighthouse connections. We attempt a QUIC connection (if the node advertises it) and if it fails we fallback to TCP.
This should be a backwards compatible modification. We want to test this functionality on live networks to observe any improvements in bandwidth/latency.
NOTE: This also removes the websockets transport as I believe no one is really using it. It should be mentioned in our release notes, however.
Co-authored-by: João Oliveira <hello@jxs.pt>
## Issue Addressed
#4150
## Proposed Changes
Maintain trusted peers in the pruning logic. ~~In principle the changes here are not necessary as a trusted peer has a max score (100) and all other peers can have at most 0 (because we don't implement positive scores). This means that we should never prune trusted peers unless we have more trusted peers than the target peer count.~~
This change shifts this logic to explicitly never prune trusted peers which I expect is the intuitive behaviour.
~~I suspect the issue in #4150 arises when a trusted peer disconnects from us for one reason or another and then we remove that peer from our peerdb as it becomes stale. When it re-connects at some large time later, it is no longer a trusted peer.~~
Currently we do disconnect trusted peers, and this PR corrects this to maintain trusted peers in the pruning logic.
As suggested in #4150 we maintain trusted peers in the db and thus we remember them even if they disconnect from us.
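A short sketch of the explicit rule, with an illustrative `PeerInfo` stand-in:

```rust
struct PeerInfo {
    trusted: bool,
    score: f64,
}

/// Collect pruning candidates: trusted peers are never considered,
/// regardless of their score.
fn pruning_candidates(peers: &[PeerInfo]) -> Vec<&PeerInfo> {
    let mut candidates: Vec<&PeerInfo> =
        peers.iter().filter(|p| !p.trusted).collect();
    // Worst-scored peers come first in the pruning order.
    candidates.sort_by(|a, b| a.score.total_cmp(&b.score));
    candidates
}
```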
## Issue Addressed
N/A
## Proposed Changes
Adds a flag for disabling peer scoring. This is useful for local testing and testing small networks for new features.
This is a correction to #3757.
The correction registers a peer that is being disconnected in the local peer manager db to ensure we are tracking the correct state.
On heavily crowded networks, we are seeing many attempted connections to our node every second.
Often these connections come from peers that have just been disconnected. This can be for a number of reasons including:
- We have deemed them to be not as useful as other peers
- They have performed poorly
- They have dropped the connection with us
- The connection was spontaneously lost
- They were randomly removed because we have too many peers
In all of these cases, if we have reached or exceeded our target peer limit, there is no desire to accept new connections immediately after the disconnect from these peers. In fact, it often costs us resources to handle the established connections and defeats some of the logic of dropping them in the first place.
This PR adds a timeout, that prevents recently disconnected peers from reconnecting to us.
Technically we implement a ban at the swarm layer to prevent immediate re-connections for at least 10 minutes. I decided to keep this light and use a time-based LRUCache, which only gets updated during the peer manager heartbeat, to avoid the added stress of polling a delay map for what could be a large number of peers.
This cache is bounded in time. An extra space bound could be added should people consider this a risk.
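A minimal sketch of the idea, assuming the 10-minute window above and a plain `HashMap` in place of the real time-based LRUCache:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

const RECONNECT_BAN: Duration = Duration::from_secs(10 * 60);

#[derive(Default)]
struct RecentlyDisconnected {
    peers: HashMap<u64, Instant>,
}

impl RecentlyDisconnected {
    fn on_disconnect(&mut self, peer: u64) {
        self.peers.insert(peer, Instant::now());
    }

    /// Reject an incoming connection if the peer disconnected within the
    /// ban window.
    fn should_reject(&self, peer: u64) -> bool {
        self.peers
            .get(&peer)
            .is_some_and(|t| t.elapsed() < RECONNECT_BAN)
    }

    /// Pruned during the heartbeat rather than by polling a delay map.
    fn heartbeat_prune(&mut self) {
        self.peers.retain(|_, t| t.elapsed() < RECONNECT_BAN);
    }
}
```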
Co-authored-by: Diva M <divma@protonmail.com>
## Issue Addressed
I noticed in some logs some excess and unnecessary discovery queries. What was happening was that we were pruning our peers down to our outbound target and having some disconnect. When we are below this threshold we try to find more peers (even if we are at our peer limit). The request becomes futile because we have no more peer slots.
This PR corrects this issue and advances the pruning mechanism to favour subnet peers.
An overview of the new logic added is:
- We prune peers down to a target outbound peer count which is higher than the minimum outbound peer count.
- We only search for more peers if there is room to do so, and we are below the minimum outbound peer count, not the target. This gives us some buffer for peers to disconnect. The buffer is currently 10%.
The modified pruning logic is documented in the code but for reference it should do the following:
- Prune peers with bad scores first
- If we need to prune more peers, then prune peers that are not subscribed to a long-lived subnet
- If we still need to prune peers, then prune peers of which we have a higher density on any given subnet, which should drive toward a uniform distribution of peers across all subnets (see the sketch below)
This will need a bit of testing as it modifies some significant peer management behaviours in lighthouse.
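A sketch compressing the staged pruning order above into a single comparator; the fields are illustrative:

```rust
struct Peer {
    score: f64,
    long_lived_subnets: usize,
    /// Peer count on this peer's most crowded subnet; higher density makes
    /// it a better pruning candidate.
    max_subnet_density: usize,
}

/// Sort pruning candidates: bad scores first, then peers on no long-lived
/// subnets, then peers from over-represented subnets (driving uniformity).
fn prune_order(peers: &mut [Peer]) {
    peers.sort_by(|a, b| {
        a.score
            .total_cmp(&b.score)
            .then(a.long_lived_subnets.cmp(&b.long_lived_subnets))
            .then(b.max_subnet_density.cmp(&a.max_subnet_density))
    });
}
```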
## Issue Addressed
We emit a warning to verify that all peer connection state information is consistent. A warning is given in one edge case:
We try to dial a peer with peer-id X and multiaddr Y. The peer responds on multiaddr Y with a different peer-id, Z. Dialing the peer fails, but libp2p injects the failed attempt as peer-id Z.
In this instance, our PeerDB tries to add a new peer in the disconnected state under a previously unknown peer-id. This is harmless and so this PR permits this behaviour without logging a warning.
## Issue Addressed
The PeerDB was getting out of sync with the actual number of disconnected peers. As this value determines how many we store in our cache, over time the cache was depleting and we were removing peers immediately, resulting in errors that manifest as unknown peers for some operations.
The error occurs when dialing a peer fails: we were not correctly updating the PeerDB counter because the increment was placed in the wrong order and therefore never ran.
This PR corrects this.
## Issue Addressed
Users are experiencing `Status'd peer not found` errors
## Proposed Changes
Although I cannot reproduce this error, there is one connection state change that is not addressed in the peer manager (that I could see). The error occurs because the number of disconnected peers in the PeerDB becomes out of sync with the actual number of disconnected peers. From what I can tell, almost all possible connection state changes are handled, except for the case when a disconnected peer changes to be disconnecting. This can potentially happen at the peer connection limit, where a previously connected peer switches to disconnecting.
This PR decrements the disconnected counter when this event occurs and from what I can tell, covers all possible disconnection state changes in the peer manager.
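An illustrative sketch of the missed transition, using simplified stand-in types: a peer already counted as disconnected moves to disconnecting, so the counter must be decremented to stay in sync:

```rust
#[derive(PartialEq)]
enum ConnectionState {
    Connected,
    Disconnecting,
    Disconnected,
}

struct PeerDb {
    disconnected_count: usize,
}

impl PeerDb {
    fn set_state(&mut self, current: &mut ConnectionState, new: ConnectionState) {
        // The previously unhandled case: Disconnected -> Disconnecting.
        if *current == ConnectionState::Disconnected
            && new == ConnectionState::Disconnecting
        {
            self.disconnected_count = self.disconnected_count.saturating_sub(1);
        }
        *current = new;
    }
}
```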
## Issue Addressed
Part of a bigger effort to make the network globals read only. This moves all writes to the `PeerDB` to the `eth2_libp2p` crate. Limiting writes to the peer manager is a slightly more complicated issue for a next PR, to keep things reviewable.
## Proposed Changes
- Make the peers field in the globals a private field.
- Allow mutable access to the peers field to `eth2_libp2p` for now.
- Add a new network message to update the sync state.
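A sketch of the encapsulation, with an illustrative `PeerDb` stand-in; mutable access is scoped to the networking crate only:

```rust
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

pub struct PeerDb; // stand-in for the real peer database

pub struct NetworkGlobals {
    peers: RwLock<PeerDb>, // no longer `pub`
}

impl NetworkGlobals {
    /// Read access stays public.
    pub fn peers(&self) -> RwLockReadGuard<'_, PeerDb> {
        self.peers.read().unwrap()
    }

    /// Mutable access is restricted to this crate (`eth2_libp2p`) for now.
    pub(crate) fn peers_mut(&self) -> RwLockWriteGuard<'_, PeerDb> {
        self.peers.write().unwrap()
    }
}
```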
Co-authored-by: Age Manning <Age@AgeManning.com>
## Issue Addressed
I've done this change in a couple of WIPs already, so I might as well submit it on its own. This changes no functionality but reduces coupling by 0.0001%. It also helps new people who need to work in the peer manager to better understand what it actually needs from the outside.
## Proposed Changes
Add a config to the peer manager
## Description
The `eth2_libp2p` crate was originally named and designed to incorporate a simple libp2p integration into lighthouse. Since its origins, the crate's purpose has expanded dramatically. It now houses a lot more sophistication that is specific to lighthouse and is no longer just a libp2p integration.
As of this writing it currently houses the following high-level lighthouse-specific logic:
- Lighthouse's implementation of the eth2 RPC protocol and specific encodings/decodings
- Integration and handling of ENRs with respect to libp2p and eth2
- Lighthouse's discovery logic, its integration with discv5 and logic about searching and handling peers.
- Lighthouse's peer manager - This is a large module handling various aspects of Lighthouse's network, such as peer scoring, handling pings and metadata, connection maintenance and recording, etc.
- Lighthouse's peer database - This is a collection of information stored for each individual peer which is specific to lighthouse. We store connection state, sync state, last seen ips and scores etc. The data stored for each peer is designed for various elements of the lighthouse code base such as syncing and the http api.
- Gossipsub scoring - This stores a collection of gossipsub 1.1 scoring mechanisms that are continuously analysed and updated based on the ethereum 2 networks and how Lighthouse performs on these networks.
- Lighthouse specific types for managing gossipsub topics, sync status and ENR fields
- Lighthouse's network HTTP API metrics - A collection of metrics for lighthouse network monitoring
- Lighthouse's custom configuration of all networking protocols, RPC, gossipsub, discovery, identify and libp2p.
Therefore it makes sense to rename the crate to better reflect its current purpose: it manages the majority of Lighthouse's network stack. This PR renames this crate to `lighthouse_network`.
Co-authored-by: Paul Hauner <paul@paulhauner.com>