Mirror of https://github.com/sigp/lighthouse.git
Rate limiting backfill sync (#3936)
## Issue Addressed

#3212

## Proposed Changes

- Introduce a new `rate_limiting_backfill_queue` - any new inbound backfill work events get sent to this FIFO queue immediately, **without any processing**
- Spawn a `backfill_scheduler` routine that pops a backfill event from the FIFO queue at specified intervals (currently halfway through a slot, i.e. 6s after slot start for 12s slots) and sends the event to the `BeaconProcessor` via a `scheduled_backfill_work_tx` channel (a simplified sketch of this flow appears below, after this description)
- This channel gets polled last in `InboundEvents`, and the received work event is wrapped in an `InboundEvent::ScheduledBackfillWork` enum variant, which gets processed immediately or queued by the `BeaconProcessor` (existing logic applies from here)

Diagram comparing backfill processing with / without rate-limiting:
https://github.com/sigp/lighthouse/issues/3212#issuecomment-1386249922

See this comment for @paulhauner's explanation and solution:
https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956

## Additional Info

I've compared this branch (with backfill processing rate-limited to 1 and 3 batches per slot) against the latest stable version. CPU usage during backfill sync is reduced by roughly 5-20%; more details on this page:
https://hackmd.io/@jimmygchen/SJuVpJL3j

The above testing was done on Goerli (as I don't currently have hardware for Mainnet); I'd guess the differences are likely to be bigger on mainnet due to block size.

### TODOs

- [x] Experiment with processing multiple batches per slot. (Need to think about how to do this for different slot durations.)
- [x] Add option to disable rate-limiting, enabled by default.
- [x] (No longer required now that we're reusing the reprocessing queue.) Complete the `backfill_scheduler` task when backfill sync is completed or not required.
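Below is a minimal sketch of the queue-and-scheduler shape described above, not the actual Lighthouse implementation (which, per the TODOs, ended up reusing the reprocessing queue). The `BackfillWorkEvent` type, the channel layout, and the free-running `interval` timer are illustrative assumptions; the real scheduler aligns releases to the slot clock rather than to an arbitrary timer.

```rust
use std::{collections::VecDeque, time::Duration};
use tokio::sync::mpsc;

/// Illustrative stand-in for the work event carried by the beacon processor.
#[derive(Debug)]
struct BackfillWorkEvent(u64);

/// Buffers inbound backfill batches and releases at most one per tick, so
/// batch processing never bursts and competes with time-critical work.
async fn backfill_scheduler(
    mut inbound_rx: mpsc::UnboundedReceiver<BackfillWorkEvent>,
    scheduled_tx: mpsc::UnboundedSender<BackfillWorkEvent>,
    slot_duration: Duration,
) {
    // One batch per slot; a slot-clock-aware version would instead fire
    // halfway through each slot (6s after slot start for 12s slots).
    let mut ticker = tokio::time::interval(slot_duration);
    let mut queue = VecDeque::new();
    loop {
        tokio::select! {
            maybe_event = inbound_rx.recv() => match maybe_event {
                // New backfill work is queued immediately, with no processing.
                Some(event) => queue.push_back(event),
                // Sender dropped: backfill sync is done (queued events are
                // dropped here for brevity).
                None => return,
            },
            _ = ticker.tick() => {
                // Pop the oldest event (FIFO) and hand it to the processor.
                if let Some(event) = queue.pop_front() {
                    if scheduled_tx.send(event).is_err() {
                        return; // beacon processor has shut down
                    }
                }
            }
        }
    }
}
```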
```diff
@@ -2893,7 +2893,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
             metrics::start_timer(&metrics::FORK_CHOICE_PROCESS_BLOCK_TIMES);
         let block_delay = self
             .slot_clock
-            .seconds_from_current_slot_start(self.spec.seconds_per_slot)
+            .seconds_from_current_slot_start()
             .ok_or(Error::UnableToComputeTimeAtSlot)?;

         fork_choice
```
```diff
@@ -3746,7 +3746,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

         let slot_delay = self
             .slot_clock
-            .seconds_from_current_slot_start(self.spec.seconds_per_slot)
+            .seconds_from_current_slot_start()
             .or_else(|| {
                 warn!(
                     self.log,
```
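Both hunks above drop the `seconds_per_slot` argument from `seconds_from_current_slot_start`. This works because the slot clock is already configured with the slot duration, so it can derive the offset itself. The stand-in below (`SimpleSlotClock` is hypothetical, not Lighthouse's `SlotClock` trait) sketches the shape of the zero-argument call:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Hypothetical, simplified stand-in for a slot clock that knows its own
/// slot duration.
struct SimpleSlotClock {
    genesis: Duration,       // genesis time as a Unix offset
    slot_duration: Duration, // e.g. 12s on mainnet
}

impl SimpleSlotClock {
    /// Seconds elapsed since the start of the current slot, or `None` if
    /// the current time precedes genesis. No `seconds_per_slot` parameter:
    /// the clock owns that value.
    fn seconds_from_current_slot_start(&self) -> Option<Duration> {
        let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?;
        let since_genesis = now.checked_sub(self.genesis)?;
        Some(Duration::from_secs(
            since_genesis.as_secs() % self.slot_duration.as_secs(),
        ))
    }
}
```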
```diff
@@ -68,6 +68,8 @@ pub struct ChainConfig {
     ///
     /// This is useful for block builders and testing.
     pub always_prepare_payload: bool,
+    /// Whether backfill sync processing should be rate-limited.
+    pub enable_backfill_rate_limiting: bool,
 }

 impl Default for ChainConfig {
@@ -94,6 +96,7 @@ impl Default for ChainConfig {
             optimistic_finalized_sync: true,
             shuffling_cache_size: crate::shuffling_cache::DEFAULT_CACHE_SIZE,
             always_prepare_payload: false,
+            enable_backfill_rate_limiting: true,
         }
     }
 }
```
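The hunks above add the `enable_backfill_rate_limiting` flag to `ChainConfig`, defaulting to `true`. Here is a minimal sketch of how such a flag might gate the detour through the rate-limiting queue; only the field name and its default come from the diff, while `Route` and `route_backfill_event` are hypothetical illustrations of the dispatch decision:

```rust
/// Hypothetical routing decision for an inbound backfill batch.
#[derive(Debug, PartialEq)]
enum Route {
    /// Detour through the FIFO scheduler before processing.
    RateLimitedQueue,
    /// Legacy behaviour: process as soon as a worker is free.
    Immediate,
}

/// Illustrative subset of `ChainConfig`; only this field is shown.
struct ChainConfig {
    enable_backfill_rate_limiting: bool,
}

impl Default for ChainConfig {
    fn default() -> Self {
        // Rate-limiting is on by default; a CLI option can disable it.
        Self {
            enable_backfill_rate_limiting: true,
        }
    }
}

fn route_backfill_event(config: &ChainConfig) -> Route {
    if config.enable_backfill_rate_limiting {
        Route::RateLimitedQueue
    } else {
        Route::Immediate
    }
}

fn main() {
    let config = ChainConfig::default();
    assert_eq!(route_backfill_event(&config), Route::RateLimitedQueue);
}
```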