mirror of
https://github.com/sigp/lighthouse.git
synced 2026-04-16 20:39:10 +00:00
Rate limiting backfill sync (#3936)
## Issue Addressed

#3212

## Proposed Changes

- Introduce a new `rate_limiting_backfill_queue`: any new inbound backfill work event is sent straight to this FIFO queue **without any processing**.
- Spawn a `backfill_scheduler` routine that pops a backfill event from the FIFO queue at specified intervals (currently halfway through a slot, i.e. 6s after slot start for 12s slots) and sends the event to the `BeaconProcessor` via a `scheduled_backfill_work_tx` channel.
- This channel is polled last in `InboundEvents`, and each work event received is wrapped in an `InboundEvent::ScheduledBackfillWork` enum variant, which is processed immediately or queued by the `BeaconProcessor` (existing logic applies from here).

Diagram comparing backfill processing with / without rate-limiting: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1386249922

See this comment for @paulhauner's explanation and proposed solution: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956

## Additional Info

I've compared this branch (with backfill processing rate-limited to 1 and 3 batches per slot) against the latest stable version. CPU usage during backfill sync is reduced by roughly 5% to 20%; more details on this page: https://hackmd.io/@jimmygchen/SJuVpJL3j

The above testing was done on Goerli (as I don't currently have hardware for mainnet); I'd guess the differences are larger on mainnet due to block size.

### TODOs

- [x] Experiment with processing multiple batches per slot. (Need to think about how to do this for different slot durations.)
- [x] Add an option to disable rate-limiting, enabled by default.
- [x] (No longer required now we're reusing the reprocessing queue.) Complete the `backfill_scheduler` task when backfill sync is completed or not required.
```diff
@@ -104,12 +104,23 @@ pub trait SlotClock: Send + Sync + Sized + Clone {
         self.slot_duration() * 2 / INTERVALS_PER_SLOT as u32
     }
 
-    /// Returns the `Duration` since the start of the current `Slot`. Useful in determining whether to apply proposer boosts.
-    fn seconds_from_current_slot_start(&self, seconds_per_slot: u64) -> Option<Duration> {
+    /// Returns the `Duration` since the start of the current `Slot` at seconds precision. Useful in determining whether to apply proposer boosts.
+    fn seconds_from_current_slot_start(&self) -> Option<Duration> {
         self.now_duration()
             .and_then(|now| now.checked_sub(self.genesis_duration()))
             .map(|duration_into_slot| {
-                Duration::from_secs(duration_into_slot.as_secs() % seconds_per_slot)
+                Duration::from_secs(duration_into_slot.as_secs() % self.slot_duration().as_secs())
             })
     }
 
+    /// Returns the `Duration` since the start of the current `Slot` at milliseconds precision.
+    fn millis_from_current_slot_start(&self) -> Option<Duration> {
+        self.now_duration()
+            .and_then(|now| now.checked_sub(self.genesis_duration()))
+            .map(|duration_into_slot| {
+                Duration::from_millis(
+                    (duration_into_slot.as_millis() % self.slot_duration().as_millis()) as u64,
+                )
+            })
+    }
```