Add --semi-supernode support (#8254)

Addresses #8218

A simplified version of #8241 for the initial release.

I've tried to minimise the logic change in this PR; introducing the `NodeCustodyType` enum still results in quite a bit of diff, but the actual logic change in `CustodyContext` is quite small.

The main changes are in the `CustodyContext` struct:
* ~~combining the `validator_custody_count` and `current_is_supernode` fields into a single `custody_group_count_at_head` field. We persist the cgc from the initial CLI values into the `custody_group_count_at_head` field and only allow it to increase (same behaviour as before).~~
* I noticed the above approach caused a backward compatibility issue, so I've [made a fix](15569bc085) and changed the approach slightly (which was actually what I had originally in mind):
* when initialising, only override the `validator_custody_count` value if either the `--supernode` or `--semi-supernode` flag is used; otherwise leave it at the existing default of `0`. Most other logic remains unchanged.
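The initialisation rule above can be sketched as a small standalone function. This is a hedged illustration, not the actual Lighthouse code: the variant names beyond `Fullnode` (which appears in the diff), the helper name, and the assumption that a semi-supernode custodies half of all custody groups are all mine.

```rust
// Hypothetical sketch of the flag-driven custody override described above.
// Only `Fullnode` is confirmed by the diff; the rest is an assumption.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NodeCustodyType {
    Fullnode,
    SemiSupernode,
    Supernode,
}

/// Only the explicit `--supernode` / `--semi-supernode` flags override the
/// existing default of 0; a plain fullnode keeps the previous behaviour.
fn initial_validator_custody_count(
    node_type: NodeCustodyType,
    number_of_custody_groups: u64,
) -> u64 {
    match node_type {
        NodeCustodyType::Fullnode => 0,
        // Assumed: a semi-supernode custodies half of all custody groups.
        NodeCustodyType::SemiSupernode => number_of_custody_groups / 2,
        NodeCustodyType::Supernode => number_of_custody_groups,
    }
}

fn main() {
    assert_eq!(initial_validator_custody_count(NodeCustodyType::Fullnode, 128), 0);
    assert_eq!(initial_validator_custody_count(NodeCustodyType::SemiSupernode, 128), 64);
    assert_eq!(initial_validator_custody_count(NodeCustodyType::Supernode, 128), 128);
}
```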

All existing validator custody unit tests still pass, and I've added tests to cover semi-supernode and restoring `CustodyContext` from disk.

Note: I've added a `WARN` if the user attempts to switch to `--semi-supernode` or `--supernode`. This currently has no effect, but once @eserilev's column backfill is merged we should be able to support this quite easily.
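A minimal sketch of what that restore-time warning check might look like. The function name, signature, and threshold logic are my assumptions for illustration, not the actual implementation in `CustodyContext`:

```rust
// Hypothetical sketch of the warning described above; names and signature
// are assumptions, not the actual Lighthouse code.
fn custody_increase_warning(persisted_cgc: u64, requested_cgc: u64) -> Option<String> {
    if requested_cgc > persisted_cgc {
        // Switching to --semi-supernode / --supernode on an existing database
        // currently has no effect: previously skipped columns would need to be
        // backfilled before the node could serve the larger custody set.
        Some(format!(
            "WARN: requested custody group count ({requested_cgc}) exceeds \
             persisted value ({persisted_cgc}); this has no effect until \
             column backfill is supported"
        ))
    } else {
        None
    }
}

fn main() {
    assert!(custody_increase_warning(8, 64).is_some());
    assert!(custody_increase_warning(64, 64).is_none());
}
```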

Things to test
- [x] cgc in metadata / enr
- [x] cgc in metrics
- [x] subscribed subnets
- [x] getBlobs endpoint

Co-Authored-By: Jimmy Chen <jchen.tc@gmail.com>
Commit 43c5e924d7 (parent 33e21634cb)
Jimmy Chen, committed by GitHub, 2025-10-22 16:23:17 +11:00
21 changed files with 420 additions and 114 deletions


```diff
@@ -4,6 +4,7 @@ use crate::beacon_chain::{
     BEACON_CHAIN_DB_KEY, CanonicalHead, LightClientProducerEvent, OP_POOL_DB_KEY,
 };
 use crate::beacon_proposer_cache::BeaconProposerCache;
+use crate::custody_context::NodeCustodyType;
 use crate::data_availability_checker::DataAvailabilityChecker;
 use crate::fork_choice_signal::ForkChoiceSignalTx;
 use crate::fork_revert::{reset_fork_choice_to_finalization, revert_to_fork_boundary};
@@ -100,7 +101,7 @@ pub struct BeaconChainBuilder<T: BeaconChainTypes> {
     kzg: Arc<Kzg>,
     task_executor: Option<TaskExecutor>,
     validator_monitor_config: Option<ValidatorMonitorConfig>,
-    import_all_data_columns: bool,
+    node_custody_type: NodeCustodyType,
     rng: Option<Box<dyn RngCore + Send>>,
 }
@@ -139,7 +140,7 @@ where
             kzg,
             task_executor: None,
             validator_monitor_config: None,
-            import_all_data_columns: false,
+            node_custody_type: NodeCustodyType::Fullnode,
             rng: None,
         }
     }
@@ -640,9 +641,9 @@ where
         self
     }

-    /// Sets whether to require and import all data columns when importing block.
-    pub fn import_all_data_columns(mut self, import_all_data_columns: bool) -> Self {
-        self.import_all_data_columns = import_all_data_columns;
+    /// Sets the node custody type for data column import.
+    pub fn node_custody_type(mut self, node_custody_type: NodeCustodyType) -> Self {
+        self.node_custody_type = node_custody_type;
         self
     }
@@ -935,10 +936,11 @@ where
     {
         Arc::new(CustodyContext::new_from_persisted_custody_context(
             custody,
-            self.import_all_data_columns,
+            self.node_custody_type,
             &self.spec,
         ))
     } else {
-        Arc::new(CustodyContext::new(self.import_all_data_columns))
+        Arc::new(CustodyContext::new(self.node_custody_type, &self.spec))
     };
     debug!(?custody_context, "Loading persisted custody context");
```
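The builder change above swaps a boolean for the enum while keeping the same builder-pattern shape. A self-contained sketch of just that pattern (this `BuilderSketch` is a stand-in with only the field this PR touches, not the real `BeaconChainBuilder`):

```rust
// Minimal stand-in for the builder change; only the field touched by this PR.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NodeCustodyType {
    Fullnode,
    SemiSupernode,
    Supernode,
}

struct BuilderSketch {
    node_custody_type: NodeCustodyType,
}

impl BuilderSketch {
    fn new() -> Self {
        // Mirrors the diff: the default is now `Fullnode`,
        // replacing `import_all_data_columns: false`.
        Self {
            node_custody_type: NodeCustodyType::Fullnode,
        }
    }

    /// Sets the node custody type for data column import.
    fn node_custody_type(mut self, node_custody_type: NodeCustodyType) -> Self {
        self.node_custody_type = node_custody_type;
        self
    }
}

fn main() {
    let builder = BuilderSketch::new().node_custody_type(NodeCustodyType::SemiSupernode);
    assert_eq!(builder.node_custody_type, NodeCustodyType::SemiSupernode);
}
```

Consuming `self` and returning it keeps the method chainable, matching the style of the other `BeaconChainBuilder` setters visible in the diff.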