Fix per-slot timer in presence of clock changes (#3243)

## Issue Addressed

Fixes a timing issue that results in spurious fork choice notifier failures:

```
WARN Error signalling fork choice waiter     slot: 3962270, error: ForkChoiceSignalOutOfOrder { current: Slot(3962271), latest: Slot(3962270) }, service: beacon
```

A fork choice run is scheduled at the start of every slot by the `timer` service, which creates a fixed 12s interval timer when the beacon node starts up. The problem is that if a bit of clock drift gets corrected via NTP (or a leap second occurs, for that matter), these 12s intervals cease to line up with the start of the slot. This creates the mismatch in slot number seen above.
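The drift can be illustrated with a toy model (not Lighthouse code; `slot_at` and the numbers are illustrative): a fixed-interval timer counts ticks by its own clock, so once NTP steps the wall clock, the tick count and the wall-clock slot disagree.

```rust
const SECONDS_PER_SLOT: u64 = 12;

/// Slot number for a given wall-clock time, in seconds since genesis.
fn slot_at(seconds_since_genesis: u64) -> u64 {
    seconds_since_genesis / SECONDS_PER_SLOT
}

fn main() {
    // The interval timer fires its 10th tick, believing it is the start of
    // slot 10 (120s after genesis, by the timer's own count).
    let tick: u64 = 10;
    let expected_slot = tick;

    // Meanwhile NTP stepped the system clock back by 1s, so the wall clock
    // actually reads 119s after genesis when the tick fires.
    let wall_clock = tick * SECONDS_PER_SLOT - 1;
    let actual_slot = slot_at(wall_clock);

    // The tick no longer matches the wall-clock slot.
    println!("timer thinks slot {expected_slot}, clock says slot {actual_slot}");
}
```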

Lighthouse also runs fork choice 500ms before each slot begins, and these runs are what conflict with the start-of-slot runs. This means the warning in current versions of Lighthouse is mostly cosmetic: fork choice is up to date with all but the most recent 500ms of attestations (which usually isn’t many).

## Proposed Changes

Fix the per-slot timer so that it continually recalculates the duration to the start of the next slot and waits for that duration.

A side-effect of this change is that we may skip slots if the per-slot task takes >12s to run, but I think this is an unlikely scenario and an acceptable compromise.
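The proposed fix can be sketched as follows (a minimal std-only sketch, not Lighthouse's actual implementation; `duration_to_next_slot` and the genesis value are illustrative). Because the wait is recomputed from the wall clock on every iteration, any NTP step is absorbed at the next boundary, and a task that overruns 12s simply skips to the following boundary:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

const SECONDS_PER_SLOT: u64 = 12;

/// Duration from `now` (seconds since the Unix epoch, as a Duration) to the
/// next slot boundary after `genesis`. Returns None before genesis.
fn duration_to_next_slot(genesis: u64, now: Duration) -> Option<Duration> {
    let since_genesis = now.checked_sub(Duration::from_secs(genesis))?;
    let next_slot = since_genesis.as_secs() / SECONDS_PER_SLOT + 1;
    Some(Duration::from_secs(next_slot * SECONDS_PER_SLOT) - since_genesis)
}

fn main() {
    // Hypothetical genesis time, for illustration only. The real timer loop
    // would do this computation, sleep, run fork choice, and repeat.
    let genesis = 0u64;
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
    if let Some(wait) = duration_to_next_slot(genesis, now) {
        println!("sleep {:?} until the next slot boundary", wait);
    }
}
```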
Author: Michael Sproul
Date: 2022-06-06 23:52:32 +00:00
Commit: 54cf94ea59 (parent 493c2c037c)
2 changed files with 13 additions and 18 deletions

```diff
@@ -485,13 +485,8 @@ where
             .beacon_chain
             .clone()
             .ok_or("node timer requires a beacon chain")?;
-        let seconds_per_slot = self
-            .chain_spec
-            .as_ref()
-            .ok_or("node timer requires a chain spec")?
-            .seconds_per_slot;
-        spawn_timer(context.executor, beacon_chain, seconds_per_slot)
+        spawn_timer(context.executor, beacon_chain)
             .map_err(|e| format!("Unable to start node timer: {}", e))?;
         Ok(self)
```