Only mark block lookups as pending if block is importing from gossip (#8112)

- PR https://github.com/sigp/lighthouse/pull/8045 introduced a regression in how lookup sync interacts with the da_checker.

On unstable, block import from the HTTP API now also inserts the block into the da_checker while the block is being execution verified. If lookup sync finds the block in the da_checker in the `NotValidated` state, it expects a `GossipBlockProcessResult` message sometime later. However, that message is only sent after block import via gossip.
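
For reference, here is a rough sketch of the two types this interaction hinges on, with shapes inferred from the diff below (the exact definitions in the codebase may differ slightly): the da_checker reports an in-flight block via `BlockProcessStatus`, and after this PR the `NotValidated` variant also carries the `BlockImportSource`, so lookup sync can tell whether a `GossipBlockProcessResult` will eventually arrive.

```rust
// Sketch only: shapes inferred from the diff in this PR, not copied verbatim
// from the lighthouse source.
use std::sync::Arc;

pub enum BlockImportSource {
    Gossip,
    Lookup,
    RangeSync,
    HttpApi,
}

pub enum BlockProcessStatus<Block> {
    /// The da_checker has never seen this block root.
    Unknown,
    /// Block is in the processing cache but not yet execution verified.
    /// After this PR the variant also records where the import originated.
    NotValidated(Arc<Block>, BlockImportSource),
    /// Block passed execution verification; it may still be waiting on
    /// missing components before import completes.
    ExecutionValidated(Arc<Block>),
}
```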

I confirmed in our node's logs that 4/4 cases of stuck lookups were caused by this sequence of events:
- Receive the block through the HTTP API and insert it into the da_checker via `put_pre_execution_block` in `fn process_block`
- Create a lookup and leave it in the `AwaitingDownload` state with reason "block in processing cache"
- The block from the HTTP API finishes importing, so no `GossipBlockProcessResult` message is sent
- The lookup is left stuck

Closes https://github.com/sigp/lighthouse/issues/8104


  - https://github.com/sigp/lighthouse/pull/8110 was my initial attempt at a solution, but we can't send the `GossipBlockProcessResult` event from the `http_api` crate without adding new channels, which seems messy.

For a given node, it's rare for a lookup to be created at the same time that a block is being published. This PR solves https://github.com/sigp/lighthouse/issues/8104 by allowing lookup sync to import the block a second time in that case.
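
In code terms (reusing the enum shapes sketched above), the change in `SyncNetworkContext` shown in the diff below boils down to a single question; the helper name here is hypothetical, not part of the actual codebase:

```rust
// Illustrative condensation of the new logic; `should_wait_for_gossip_result`
// is a hypothetical helper, not a function in the lighthouse codebase.
fn should_wait_for_gossip_result(source: &BlockImportSource) -> bool {
    match source {
        // Only gossip import emits `GossipBlockProcessResult`, so only then is
        // it safe to park the lookup as Pending("block in processing cache").
        BlockImportSource::Gossip => true,
        // Lookup, RangeSync and HttpApi imports never emit that event; the
        // lookup keeps going and may import the block a second time, which
        // only wastes some resources.
        BlockImportSource::Lookup | BlockImportSource::RangeSync | BlockImportSource::HttpApi => {
            false
        }
    }
}
```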


Co-Authored-By: dapplion <35266934+dapplion@users.noreply.github.com>
Authored by Lion - dapplion on 2025-09-25 05:52:27 +02:00, committed by GitHub
parent 79b33214ea
commit ffa7b2b2b9
8 changed files with 63 additions and 33 deletions


@@ -219,7 +219,7 @@ impl<T: BeaconChainTypes> SingleBlockLookup<T> {
             // can assert that this is the correct value of `blob_kzg_commitments_count`.
             match cx.chain.get_block_process_status(&self.block_root) {
                 BlockProcessStatus::Unknown => None,
-                BlockProcessStatus::NotValidated(block)
+                BlockProcessStatus::NotValidated(block, _)
                 | BlockProcessStatus::ExecutionValidated(block) => Some(block.clone()),
             }
         }) {


@@ -49,8 +49,8 @@ use tokio::sync::mpsc;
 use tracing::{Span, debug, debug_span, error, warn};
 use types::blob_sidecar::FixedBlobSidecarList;
 use types::{
-    BlobSidecar, ColumnIndex, DataColumnSidecar, DataColumnSidecarList, EthSpec, ForkContext,
-    Hash256, SignedBeaconBlock, Slot,
+    BlobSidecar, BlockImportSource, ColumnIndex, DataColumnSidecar, DataColumnSidecarList, EthSpec,
+    ForkContext, Hash256, SignedBeaconBlock, Slot,
 };
 pub mod custody;
@@ -835,14 +835,26 @@ impl<T: BeaconChainTypes> SyncNetworkContext<T> {
         match self.chain.get_block_process_status(&block_root) {
             // Unknown block, continue request to download
             BlockProcessStatus::Unknown => {}
-            // Block is known are currently processing, expect a future event with the result of
-            // processing.
-            BlockProcessStatus::NotValidated { .. } => {
-                // Lookup sync event safety: If the block is currently in the processing cache, we
-                // are guaranteed to receive a `SyncMessage::GossipBlockProcessResult` that will
-                // make progress on this lookup
-                return Ok(LookupRequestResult::Pending("block in processing cache"));
-            }
+            // Block is known and currently processing. Imports from gossip and HTTP API insert the
+            // block in the da_cache. However, HTTP API is unable to notify sync when it completes
+            // block import. Returning `Pending` here will result in stuck lookups if the block is
+            // importing from sync.
+            BlockProcessStatus::NotValidated(_, source) => match source {
+                BlockImportSource::Gossip => {
+                    // Lookup sync event safety: If the block is currently in the processing cache, we
+                    // are guaranteed to receive a `SyncMessage::GossipBlockProcessResult` that will
+                    // make progress on this lookup
+                    return Ok(LookupRequestResult::Pending("block in processing cache"));
+                }
+                BlockImportSource::Lookup
+                | BlockImportSource::RangeSync
+                | BlockImportSource::HttpApi => {
+                    // Lookup, RangeSync or HttpApi block import don't emit the GossipBlockProcessResult
+                    // event. If a lookup happens to be created during block import from one of
+                    // those sources just import the block twice. Otherwise the lookup will get
+                    // stuck. Double imports are fine, they just waste resources.
+                }
+            },
             // Block is fully validated. If it's not yet imported it's waiting for missing block
             // components. Consider this request completed and do nothing.
             BlockProcessStatus::ExecutionValidated { .. } => {


@@ -41,8 +41,8 @@ use slot_clock::{SlotClock, TestingSlotClock};
 use tokio::sync::mpsc;
 use tracing::info;
 use types::{
-    BeaconState, BeaconStateBase, BlobSidecar, DataColumnSidecar, EthSpec, ForkContext, ForkName,
-    Hash256, MinimalEthSpec as E, SignedBeaconBlock, Slot,
+    BeaconState, BeaconStateBase, BlobSidecar, BlockImportSource, DataColumnSidecar, EthSpec,
+    ForkContext, ForkName, Hash256, MinimalEthSpec as E, SignedBeaconBlock, Slot,
     data_column_sidecar::ColumnIndex,
     test_utils::{SeedableRng, TestRandom, XorShiftRng},
 };
@@ -1113,7 +1113,7 @@ impl TestRig {
         self.harness
             .chain
             .data_availability_checker
-            .put_pre_execution_block(block.canonical_root(), block)
+            .put_pre_execution_block(block.canonical_root(), block, BlockImportSource::Gossip)
             .unwrap();
     }