Make range sync peer loadbalancing PeerDAS-friendly (#6922)

- Re-opens https://github.com/sigp/lighthouse/pull/6864 targeting unstable

Range sync and backfill sync still assume that each batch request is served by a single peer. This assumption breaks with PeerDAS, where we request custody columns from N peers.
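To illustrate why the single-peer assumption breaks, here is a minimal sketch (hypothetical types and function names, not Lighthouse's actual API) of fanning one by-range batch out across custody peers: each column can only be requested from a peer that custodies it, so a single batch may need several peers.

```rust
use std::collections::HashMap;

// Simplified stand-ins for illustration only.
type PeerId = &'static str;
type ColumnIndex = u64;

/// Assign each custody column of a batch to some peer that custodies it,
/// preferring the least-loaded peer. Returns None if any column has no
/// custodian among the given peers.
fn split_batch_by_custody(
    columns: &[ColumnIndex],
    custody: &HashMap<PeerId, Vec<ColumnIndex>>,
) -> Option<HashMap<PeerId, Vec<ColumnIndex>>> {
    let mut assignment: HashMap<PeerId, Vec<ColumnIndex>> = HashMap::new();
    for &column in columns {
        // Among peers that custody this column, pick the one with the
        // fewest columns assigned so far.
        let peer = custody
            .iter()
            .filter(|(_, cols)| cols.contains(&column))
            .map(|(peer, _)| *peer)
            .min_by_key(|peer| assignment.get(peer).map_or(0, |v| v.len()))?;
        assignment.entry(peer).or_default().push(column);
    }
    Some(assignment)
}

fn main() {
    let custody = HashMap::from([("peer_a", vec![0, 1]), ("peer_b", vec![1, 2])]);
    let plan = split_batch_by_custody(&[0, 1, 2], &custody).unwrap();
    // Column 2 is only custodied by peer_b: no single peer can serve the batch.
    assert!(plan["peer_b"].contains(&2));
    println!("{plan:?}");
}
```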

Issues with current unstable:

- Peer prioritization counts batch requests per peer. This accounting is now broken: data columns by range requests are not counted
- Peer selection for data columns by range ignores the set of peers on a syncing chain and instead draws from the global pool of peers
- The implementation is very strict when we have no peers to request from. With PeerDAS this case is common, and we want to handle it gracefully rather than hard-failing everything
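A sketch of the first fix, with hypothetical simplified types: the network context counts every in-flight request per peer, data columns by range included, and peer selection prefers the idlest peer from the chain's own peer set.

```rust
use std::collections::HashMap;

type PeerId = &'static str;

// Hypothetical request kinds; the point is that columns-by-range must be
// counted alongside blocks/blobs-by-range when ranking peers.
#[derive(Clone, Copy)]
enum ActiveRequest {
    BlocksByRange(PeerId),
    BlobsByRange(PeerId),
    DataColumnsByRange(PeerId),
}

/// Count every in-flight request per peer, including columns by range
/// (the accounting this PR moves into the network context).
fn active_request_count_by_peer(requests: &[ActiveRequest]) -> HashMap<PeerId, usize> {
    let mut counts = HashMap::new();
    for req in requests {
        let peer = match req {
            ActiveRequest::BlocksByRange(p)
            | ActiveRequest::BlobsByRange(p)
            | ActiveRequest::DataColumnsByRange(p) => *p,
        };
        *counts.entry(peer).or_insert(0) += 1;
    }
    counts
}

/// Prefer the idlest peer, drawn from the syncing chain's peer set
/// rather than the global pool.
fn pick_idlest_peer(peers: &[PeerId], counts: &HashMap<PeerId, usize>) -> Option<PeerId> {
    peers.iter().copied().min_by_key(|p| counts.get(p).copied().unwrap_or(0))
}

fn main() {
    let requests = [
        ActiveRequest::BlocksByRange("peer_a"),
        ActiveRequest::DataColumnsByRange("peer_a"),
        ActiveRequest::DataColumnsByRange("peer_b"),
    ];
    let counts = active_request_count_by_peer(&requests);
    assert_eq!(counts["peer_a"], 2); // column requests are counted too
    assert_eq!(pick_idlest_peer(&["peer_a", "peer_b"], &counts), Some("peer_b"));
}
```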

### Changes

- [x] Upstream peer prioritization to the network context, which knows exactly how many active requests each peer has (including columns by range)
- [x] Upstream peer selection to the network context: `block_components_by_range_request` now gets a set of peers to choose from instead of a single peer. If it can't find a peer, it returns the error `RpcRequestSendError::NoPeer`
- [ ] Range sync and backfill sync handle `RpcRequestSendError::NoPeer` explicitly
  - [ ] Range sync: leaves the batch in `AwaitingDownload` state and does nothing. **TODO**: we should have some mechanism to fail the chain if it's stale for too long - **EDIT**: not done in this PR
  - [x] Backfill sync: pauses the sync until another peer joins - **EDIT**: same logic as unstable
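The `NoPeer` handling described above can be sketched as follows (a minimal model with hypothetical enum names, not the actual Lighthouse state machine): rather than failing the chain, a send error just leaves the batch awaiting download until a suitable peer appears.

```rust
// Hypothetical simplified types modelling the behaviour described above.
#[derive(Debug, PartialEq)]
enum RpcRequestSendError {
    NoPeer,
}

#[derive(Debug, PartialEq)]
enum BatchState {
    AwaitingDownload,
    Downloading,
}

/// Range sync sketch: on `NoPeer` the batch stays in `AwaitingDownload`
/// and nothing else happens; on success it transitions to `Downloading`.
fn on_send_result(result: Result<(), RpcRequestSendError>, batch: &mut BatchState) {
    match result {
        Ok(()) => *batch = BatchState::Downloading,
        // No suitable peer right now: do nothing and retry later,
        // instead of hard-failing the whole chain.
        Err(RpcRequestSendError::NoPeer) => *batch = BatchState::AwaitingDownload,
    }
}

fn main() {
    let mut batch = BatchState::AwaitingDownload;
    on_send_result(Err(RpcRequestSendError::NoPeer), &mut batch);
    assert_eq!(batch, BatchState::AwaitingDownload); // still waiting, not failed
    on_send_result(Ok(()), &mut batch);
    assert_eq!(batch, BatchState::Downloading);
}
```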

### TODOs

- [ ] Add tests :)
- [x] Manually test backfill sync

Note: this touches the mainnet path!
Commit beb0ce68bd (parent 43c38a6fa0) by Lion - dapplion, committed via GitHub, 2025-05-06 23:03:07 -03:00
12 changed files with 541 additions and 472 deletions


```diff
@@ -1,6 +1,8 @@
 use crate::discovery::enr::PEERDAS_CUSTODY_GROUP_COUNT_ENR_KEY;
 use crate::discovery::{peer_id_to_node_id, CombinedKey};
-use crate::{metrics, multiaddr::Multiaddr, types::Subnet, Enr, EnrExt, Gossipsub, PeerId};
+use crate::{
+    metrics, multiaddr::Multiaddr, types::Subnet, Enr, EnrExt, Gossipsub, PeerId, SyncInfo,
+};
 use itertools::Itertools;
 use logging::crit;
 use peer_info::{ConnectionDirection, PeerConnectionStatus, PeerInfo};
@@ -15,7 +17,7 @@ use std::{
 use sync_status::SyncStatus;
 use tracing::{debug, error, trace, warn};
 use types::data_column_custody_group::compute_subnets_for_node;
-use types::{ChainSpec, DataColumnSubnetId, EthSpec};
+use types::{ChainSpec, DataColumnSubnetId, Epoch, EthSpec, Hash256, Slot};

 pub mod client;
 pub mod peer_info;
@@ -735,6 +737,19 @@ impl<E: EthSpec> PeerDB<E> {
             },
         );
+        self.update_sync_status(
+            &peer_id,
+            SyncStatus::Synced {
+                // Fill in mock SyncInfo, only for the peer to return `is_synced() == true`.
+                info: SyncInfo {
+                    head_slot: Slot::new(0),
+                    head_root: Hash256::ZERO,
+                    finalized_epoch: Epoch::new(0),
+                    finalized_root: Hash256::ZERO,
+                },
+            },
+        );
         if supernode {
             let peer_info = self.peers.get_mut(&peer_id).expect("peer exists");
             let all_subnets = (0..spec.data_column_sidecar_subnet_count)
```


```diff
@@ -206,6 +206,20 @@ impl<E: EthSpec> NetworkGlobals<E> {
             .collect::<Vec<_>>()
     }

+    /// Returns true if the peer is known and is a custodian of `column_index`
+    pub fn is_custody_peer_of(&self, column_index: ColumnIndex, peer_id: &PeerId) -> bool {
+        self.peers
+            .read()
+            .peer_info(peer_id)
+            .map(|info| {
+                info.is_assigned_to_custody_subnet(&DataColumnSubnetId::from_column_index(
+                    column_index,
+                    &self.spec,
+                ))
+            })
+            .unwrap_or(false)
+    }
+
     /// Returns the TopicConfig to compute the set of Gossip topics for a given fork
     pub fn as_topic_config(&self) -> TopicConfig {
         TopicConfig {
```