Commit Graph

3340 Commits

Author SHA1 Message Date
Daniel Knopik
574b204bdb decouple eth2 from store and lighthouse_network (#6680)
- #6452 (partially)


Remove dependencies on `store` and `lighthouse_network` from `eth2`. This was achieved as follows:

- depend on `enr` and `multiaddr` directly instead of using `lighthouse_network`'s reexports.
- make `lighthouse_network` responsible for converting between API and internal types.
- in two cases, remove complex internal types and use the generic `serde_json::Value` instead - this is not ideal, but should be fine for now, as this affects two internal non-spec endpoints which are meant for debugging, unstable, and subject to change without notice anyway. Inspired by #6679. The alternative is to move all relevant types to `eth2` or `types` instead - what do you think?
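
For illustration, a minimal sketch of the `serde_json::Value` approach for one of those debug endpoints (the endpoint path and client shape here are assumptions, not the exact `eth2` API):

```rust
use serde_json::Value;

// Hypothetical sketch: fetch a non-spec debug endpoint as untyped JSON
// instead of deserializing into an internal `lighthouse_network` type.
pub async fn get_debug_endpoint(
    client: &reqwest::Client,
    base_url: &str,
) -> Result<Value, reqwest::Error> {
    client
        .get(format!("{base_url}/lighthouse/peers"))
        .send()
        .await?
        .json::<Value>()
        .await
}
```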
2025-03-14 16:44:48 +00:00
Lion - dapplion
a6bdc474db Log range sync download errors (#6991)
Currently, range sync download error logs just say `error: rpc_error`, which isn't helpful. The actual error is suppressed unless it is logged somewhere else.


Log the actual error that caused the batch download to fail as part of the message that reports the failure.
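
A minimal sketch of the intended change, assuming `tracing`-style structured logging (names are illustrative):

```rust
use tracing::warn;

// Attach the underlying RPC error to the failure log instead of a bare
// `error: rpc_error`.
fn log_batch_failure(batch_id: u64, rpc_error: &dyn std::error::Error) {
    warn!(batch_id, error = %rpc_error, "Batch download failed");
}
```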
2025-03-13 13:26:43 +00:00
ThreeHrSleep
d60c24ef1c Integrate tracing (#6339)
Tracing Integration
- [reference](5bbf1859e9/projects/project-ideas.md (L297))


- [x] replace slog & log with tracing throughout the codebase
- [x] implement custom crit log
- [x] make relevant changes in the formatter
- [x] replace sloggers
- [x] re-write SSE logging components

cc: @macladson @eserilev
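
Since `tracing` has no level above ERROR, a custom crit log can be layered on top of it. A hedged sketch, not the exact Lighthouse implementation:

```rust
use tracing::error;

// Emit CRIT-style logs as ERROR events tagged with a marker field that a
// custom formatter can render as `CRIT`.
macro_rules! crit {
    ($($arg:tt)*) => {
        error!(crit = true, $($arg)*)
    };
}

fn example() {
    crit!(reason = "db corrupted", "Beacon chain cannot continue");
}
```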
2025-03-12 22:31:05 +00:00
João Oliveira
f23f984f85 switch to upstream gossipsub (#7057)
We forked `gossipsub` into the Lighthouse repo some time ago so that we could iterate more quickly on implementing back pressure and IDONTWANT.
Meanwhile, we have pushed all our changes upstream, and we are now the main maintainers of `rust-libp2p`; this allows us to use upstream `gossipsub` again.
Nonetheless, we still keep our forked repo to give us the freedom to experiment with features before submitting them upstream.
2025-03-11 12:59:16 +00:00
Michael Sproul
1a08e6f0a0 Remove duplicate sync_tolerance_epochs config (#7109)
Delete the duplicate sync tolerance epoch config in the HTTP API, which is unused.

We introduced the `sync-tolerance-epochs` flag in this PR:

- https://github.com/sigp/lighthouse/pull/7030

Then refined it in this PR:

- https://github.com/sigp/lighthouse/pull/7044

Somewhere in the merge of `release-v7.0.0` into `unstable`, the config from the original PR, which had been deleted, came back. I think I resolved those merge conflicts, so my bad.
2025-03-11 05:35:13 +00:00
Paul Hauner
8d1abce26e Bump SSZ version for larger bitfield SmallVec (#6915)
NA


Bumps the `ethereum_ssz` version, along with other crates that share the dep.

Primarily, this gives us bitfields which can store 128 bytes on the stack before allocating, rather than 32 bytes (https://github.com/sigp/ethereum_ssz/pull/38). The validator count has increased massively since we set it at 32 bytes, so aggregation bitfields (et al.) now require a heap allocation. The new value of 128 should get us to ~2M active validators.
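
The arithmetic behind that claim, with the inline size expressed as a `SmallVec` (the alias below is illustrative, not the actual `ethereum_ssz` type):

```rust
use smallvec::SmallVec;

// 128 inline bytes = 1024 bits on the stack before any heap allocation
// (previously 32 bytes = 256 bits).
type BitfieldStorage = SmallVec<[u8; 128]>;

// Rough mainnet sizing: 2,000,000 validators / 32 slots / 64 committees
// ≈ 977 members per committee, which fits comfortably within 1024 bits.
const MAX_COMMITTEE_SIZE_AT_2M: usize = 2_000_000 / (32 * 64);
```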
2025-03-10 08:18:33 +00:00
Michael Sproul
b4e79edf2a Merge remote-tracking branch 'origin/release-v7.0.0' into unstable 2025-03-10 15:21:24 +11:00
Jimmy Chen
09849e841b Use sync_tolerance_epochs flag to control the proposer prep routines (#7044)
Replace the `2 + 2 == 5` hacks from `holesky-rescue` and use the existing `sync_tolerance_epochs` flag to control the proposer prep routines.
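
A hedged sketch of the gating this implies (function and parameter names are assumptions):

```rust
// Only run proposer preparation when the head is within the configured
// tolerance of the current slot.
fn should_prepare_proposers(
    head_slot: u64,
    current_slot: u64,
    sync_tolerance_epochs: u64,
    slots_per_epoch: u64,
) -> bool {
    head_slot + sync_tolerance_epochs * slots_per_epoch >= current_slot
}
```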
2025-03-06 03:50:42 +00:00
Ryan Schneider
efa6ba37bb Make ExecutionBlock::total_difficulty Optional (#7050)
This change makes the `total_difficulty` field in `ExecutionBlock` an `Option<Uint256>` since newer clients are no longer including the `totalDifficulty` field.

I think this will fix https://github.com/sigp/lighthouse/issues/6937 but I was actually more focused on the builder registration case described below.

In our [builder-playground](https://github.com/flashbots/builder-playground) we set up a local devnet using Lighthouse, reth, and mev-boost-relay. After upgrading to reth 1.2.0 and Lighthouse v7.0.0-beta.0 for Pectra, we noticed that the validator registration process was _sometimes_ failing with:

```
Feb 25 23:35:25.038 ERRO Unable to publish proposer preparation to all beacon nodes, error: Some endpoints failed, num_failed: 1 http://localhost:3500/ => RequestFailed(ServerMessage(ErrorMessage { code: 400, message: "BAD_REQUEST: error updating proposer preparations: ForkchoiceUpdate(EngineError(Api { error: Json(Error(\"missing field `totalDifficulty`\", line: 0, column: 0)) }))", stacktraces: [] })), service: preparation
Feb 25 23:35:25.099 WARN Unable to publish validator registrations to the builder network, error: Some endpoints failed, num_failed: 1 http://localhost:3500/ => RequestFailed(ServerMessage(ErrorMessage { code: 400, message: "BAD_REQUEST: error updating proposer preparations: ForkchoiceUpdate(EngineError(Api { error: Json(Error(\"missing field `totalDifficulty`\", line: 0, column: 0)) }))", stacktraces: [] })), service: preparation
```

What was even more confusing was that it sometimes worked, which led to a wild goose chase under the assumption that it was a networking issue. However, while tracing through the LH code, I came across this comment:

70194dfc6a/beacon_node/beacon_chain/src/beacon_chain.rs (L6048-L6049)

This explained why it sometimes worked: in our playground we run Lighthouse with `--prepare-payload-lookahead 8000`, so there was always a 4-second window in which the call wasn't made.

But if the call was made, this code would always fail with updated reth:

https://github.com/sigp/lighthouse/blob/unstable/beacon_node/execution_layer/src/lib.rs#L1688-L1692

This would then be mapped to an `Error::ForkchoiceUpdate` in `update_execution_engine_forkchoice`.


Anyway, the fix was to make `total_difficulty` optional, and then to update any code paths where it was used. In doing so, I assume that if the EL doesn't include the total difficulty, the chain is already post-merge.
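
A hedged sketch of the resulting struct shape (field set trimmed; `u128` stands in for the real `Uint256`):

```rust
use serde::Deserialize;

// Tolerate ELs that omit `totalDifficulty`.
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ExecutionBlock {
    pub block_hash: String,
    // serde deserializes a missing key to `None` for `Option` fields,
    // avoiding the "missing field `totalDifficulty`" error shown above.
    pub total_difficulty: Option<u128>,
}
```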
2025-03-05 01:53:00 +00:00
Jimmy Chen
fe0cf9cb67 Add test flag to override SYNC_TOLERANCE_EPOCHS for range sync testing (#7030)
Related to #6880, an issue that's usually observed on local devnets with a small number of nodes.

When testing range sync, I usually shut down a node for some period of time and then restart it. However, if the node is within `SYNC_TOLERANCE_EPOCHS` (8) of the head, Lighthouse considers it synced, and it may attempt to produce a block if requested by a validator. On a local devnet, nodes frequently produce blocks; when this happens, the node ends up producing a block that would revert finality, and it gets disconnected from its peers immediately.

NOTE: This is PR #7030 cherry-picked from `unstable` to `release-v7.0.0`.

Run Lighthouse BN with this flag to override:

```
--sync-tolerance-epochs 0
```
2025-02-26 18:29:49 +11:00
Michael Sproul
cf4104abe5 Merge remote-tracking branch 'origin/release-v7.0.0' into unstable 2025-02-25 08:42:16 +11:00
Jimmy Chen
54b4150a62 Add test flag to override SYNC_TOLERANCE_EPOCHS for range sync testing (#7030)
Related to #6880, an issue that's usually observed on local devnets with a small number of nodes.

When testing range sync, I usually shut down a node for some period of time and then restart it. However, if the node is within `SYNC_TOLERANCE_EPOCHS` (8) of the head, Lighthouse considers it synced, and it may attempt to produce a block if requested by a validator. On a local devnet, nodes frequently produce blocks; when this happens, the node ends up producing a block that would revert finality, and it gets disconnected from its peers immediately.

### Usage

Run Lighthouse BN with this flag to override:

```
--sync-tolerance-epochs 0
```
2025-02-24 08:30:11 +00:00
Michael Sproul
454c7d05c4 Remove LC server config from HTTP API (#7017)
Partly addresses

- https://github.com/sigp/lighthouse/issues/6959


Use the `enable_light_client_server` field from the beacon chain config in the HTTP API. I think we can make this the single source of truth, as the network crate also has access to the beacon chain config.
2025-02-24 07:15:32 +00:00
Krishang Shah
6e11bddd4b feat: adds CLI flags to delay publishing for edge case testing on PeerDAS devnets (#6947)
Closes #6919
2025-02-24 06:03:17 +00:00
Lion - dapplion
3fab6a2c0b Block availability data enum (#6866)
PeerDAS has undergone multiple refactors, and blending it with the `get_blobs` optimization has generated technical debt.

A function signature like this

f008b84079/beacon_node/beacon_chain/src/beacon_chain.rs (L7171-L7178)

Allows at least the following combinations of states:
- blobs: Some / None
- data_columns: Some / None
- data_column_recv: Some / None
- Block has data? Yes / No
- Block post-PeerDAS? Yes / No

In reality, we don't have that many possible states, only:

- `NoData`: pre-Deneb, pre-PeerDAS with 0 blobs, or post-PeerDAS with 0 blobs
- `Blobs(BlobSidecarList<E>)`: post-Deneb pre-PeerDAS with > 0 blobs
- `DataColumns(DataColumnSidecarList<E>)`: post-PeerDAS with > 0 blobs
- `DataColumnsRecv(oneshot::Receiver<DataColumnSidecarList<E>>)`: post-PeerDAS with > 0 blobs, but we obtained the columns via reconstruction

These are the variants of the new `AvailableBlockData` enum.

So we go from 2^5 possible states to 4 well-defined ones. Downstream code benefits nicely from this clarity, and I think it makes the whole feature much more maintainable.

Currently `is_available` returns a bool, and then we construct the available block in `make_available`. In a way, the availability condition is duplicated in both functions. Instead, this PR constructs `AvailableBlockData` in `is_available`, so the availability conditions are written once:

```rust
if let Some(block_data) = is_available(..) {
    let available_block = make_available(block_data);
}
```
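
Putting the four variants together, a sketch assembled from the list above (types are as named in the commit message; generics and bounds of the real definition elided):

```rust
pub enum AvailableBlockData<E: EthSpec> {
    // Pre-Deneb, or any fork with 0 blobs.
    NoData,
    // Post-Deneb, pre-PeerDAS, > 0 blobs.
    Blobs(BlobSidecarList<E>),
    // Post-PeerDAS, > 0 blobs.
    DataColumns(DataColumnSidecarList<E>),
    // Post-PeerDAS, > 0 blobs, columns obtained via reconstruction.
    DataColumnsRecv(oneshot::Receiver<DataColumnSidecarList<E>>),
}
```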
2025-02-24 04:47:09 +00:00
Pawan Dhananjay
522b3cbaab Fix builder API headers (#7009)
Resolves https://github.com/sigp/lighthouse/issues/7000


Set the `Accept` header on builder requests to the correct value when requesting SSZ.
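
A hedged sketch of the header fix (the exact media types and q-values are assumptions):

```rust
use reqwest::header::ACCEPT;

// Prefer SSZ from the builder, falling back to JSON.
async fn get_builder_header(
    client: &reqwest::Client,
    url: &str,
) -> Result<reqwest::Response, reqwest::Error> {
    client
        .get(url)
        .header(ACCEPT, "application/octet-stream;q=1.0,application/json;q=0.9")
        .send()
        .await
}
```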

This PR also adds a flag to disable SSZ over the builder API altogether. In the case that builders/relays have an SSZ bug, we can react quickly by asking clients to restart their nodes with the `--disable-ssz-builder` flag to force JSON. I'm not fully convinced this is useful, so I'm open to removing it or moving it to another PR.

Testing this currently.
2025-02-24 03:39:13 +00:00
Michael Sproul
e5e43ecd81 Merge remote-tracking branch 'origin/release-v7.0.0' into unstable 2025-02-24 13:59:40 +11:00
Pawan Dhananjay
b3b6aea1c5 Rust 1.85 lints (#7019)
N/A


Two changes:
1. Replace `Option::map_or(true, ...)` with `Option::is_none_or(...)` (example below).
2. Remove unnecessary `Into::into` calls where the type conversion is apparent from the types.
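
For the first change, a before/after example (`Option::is_none_or` is stable since Rust 1.82):

```rust
fn example(opt: Option<u64>) -> bool {
    // Before (now linted):
    // opt.map_or(true, |v| v > 0)

    // After: reads as "either there is no value, or it satisfies the
    // predicate".
    opt.is_none_or(|v| v > 0)
}
```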
2025-02-24 02:36:13 +00:00
João Oliveira
193061ff73 Use RpcSend on RPC::self_limiter::ready_requests (#6634)
We don't need to store `BehaviourAction` for `ready_requests`, which lets us avoid an `unreachable!` in #6625.
This PR should therefore be merged before that one.
2025-02-19 21:17:19 +00:00
Michael Sproul
6ab6eae40c Merge remote-tracking branch 'origin/release-v7.0.0-beta.0' into unstable 2025-02-14 10:26:36 +11:00
Michael Sproul
1888be554c Release v7.0.0-beta.0 (#6962)
New release for Electra on Holesky and Sepolia.

Includes PRs:

- https://github.com/sigp/lighthouse/pull/6808
- https://github.com/sigp/lighthouse/pull/6914
- https://github.com/sigp/lighthouse/pull/6949
- https://github.com/sigp/lighthouse/pull/6950
- https://github.com/sigp/lighthouse/pull/6958
2025-02-13 03:06:20 +00:00
Michael Sproul
25f804a111 Fix light client plumbing in beacon processor (#6993)
Our Holesky nodes running with the light client enabled were logging messages about full queues:

> Feb 12 22:09:28.949 ERRO Work queue is full                      queue: unknown_light_client_optimistic_update, queue_len: 128, msg: the system has insufficient resources for load, service: bproc

I thought this might be genuine overload, but it turns out this queue was never being read from!


- [x] Rename light-client-related queues in the beacon processor for clarity.
- [x] Ensure all light-client-related queues are being popped from.
2025-02-13 00:50:17 +00:00
Eitan Seri-Levi
ed8086c897 Ensure GET v2/validator/aggregate_attestation is backwards compatible (#6984)
Closes #6983


`GET v2/validator/aggregate_attestation` is not backwards compatible: it only works for post-Electra attestations. This PR adds backwards compatibility and additional test coverage. We should include this in the upcoming 7.0 beta release if possible.
2025-02-12 00:13:05 +00:00
Lion - dapplion
0055af56b6 Unsubscribe blob topics at Fulu fork (#6932)
Addresses #6854.

PeerDAS requires unsubscribing from a gossip topic at a fork boundary. This is not possible with our current topic machinery.


Instead of defining which topics have to be **added** at a given fork, we define the complete set of topics at a given fork. The new star of the show is this key function:

```rust
pub fn core_topics_to_subscribe<E: EthSpec>(
    fork_name: ForkName,
    opts: &TopicConfig,
    spec: &ChainSpec,
) -> Vec<GossipKind> {
    // ...
    if fork_name.deneb_enabled() && !fork_name.fulu_enabled() {
        // All of the Deneb blob topics are core topics
        for i in 0..spec.blob_sidecar_subnet_count(fork_name) {
            topics.push(GossipKind::BlobSidecar(i));
        }
    }
    // ...
}
```

`core_topics_to_subscribe` only returns the blob topics if `fork < Fulu`. Then at the fork boundary, we subscribe with the new fork digest to `core_topics_to_subscribe(next_fork)`, which excludes the blob topics.

I added `is_fork_non_core_topic` to carry the aggregator topics for attestations and sync committee messages over to the next fork. This approach is future-proof in case those topics ever become fork-dependent.

Closes https://github.com/sigp/lighthouse/issues/6854
2025-02-11 23:40:14 +00:00
Lion - dapplion
d60388134d Add PeerDAS metrics to track subnets without peers (#6928)
Currently we track a key metric, `PEERS_PER_COLUMN_SUBNET`, in the getter `good_peers_on_sampling_subnets`. Another PR, https://github.com/sigp/lighthouse/pull/6922, deletes that function, so we have to move the metric anyway. This PR moves the metric computation to the spawned metrics task, which is refreshed every 5 seconds.

I also added a few more useful metrics. The total set and intended usage is:

- `sync_peers_per_column_subnet`: Tracks the overall health of subnets on your node.
- `sync_peers_per_custody_column_subnet`: Tracks the health of the subnets your node needs. We should track this metric closely in our dashboards with a heatmap and bar plot.
- ~~`sync_column_subnets_with_zero_peers`: Is equivalent to the Grafana query `count(sync_peers_per_column_subnet == 0) by (instance)`. We may prefer to skip it, but I believe it's the most important metric as if `sync_column_subnets_with_zero_peers > 0` your node stalls.~~
- ~~`sync_custody_column_subnets_with_zero_peers`: `count(sync_peers_per_custody_column_subnet == 0) by (instance)`~~
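
A hedged sketch of the refreshed-gauge pattern using the plain `prometheus` crate (Lighthouse wraps metrics in its own crate, so names and APIs here are illustrative):

```rust
use prometheus::{IntGaugeVec, Opts};

// Created and registered once at startup.
fn make_gauge() -> prometheus::Result<IntGaugeVec> {
    IntGaugeVec::new(
        Opts::new("sync_peers_per_column_subnet", "Peers per column subnet"),
        &["subnet"],
    )
}

// Called from the spawned metrics task every 5 seconds.
fn refresh_subnet_peer_metrics(gauge: &IntGaugeVec, counts: &[(u64, i64)]) {
    for (subnet, peers) in counts {
        gauge.with_label_values(&[&subnet.to_string()]).set(*peers);
    }
}
```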
2025-02-11 06:56:38 +00:00
Lion - dapplion
3992d6ba74 Fix misc PeerDAS todos (#6862)
Address misc PeerDAS TODOs that are not big enough to justify a dedicated PR.


I'll justify each TODO in an inline comment.
2025-02-11 06:07:13 +00:00
Lion - dapplion
d5a03c9d86 Add more range sync tests (#6872)
Currently we have very poor unit-test coverage of range sync. With the event-driven test framework we can cover much more ground and be confident when modifying the code.


Add two basic cases:
- Happy path: complete a finalized sync for 2 epochs
- A post-PeerDAS case where we start without enough custody peers and later find enough

⚠️ If you have ideas for more test cases, please let me know! I'll write them.
2025-02-10 07:55:22 +00:00
Age Manning
62a0f25f97 IPv6 By Default (#6808) 2025-02-10 01:58:11 +00:00
Lion - dapplion
f35213ebe7 Sync active request byrange ids logs (#6914)
- Re-opened PR from https://github.com/sigp/lighthouse/pull/6869

While writing and running tests, I noticed that the sync RPC request IDs are now very verbose.

`DataColumnsByRootRequestId { id: 123, requester: Custody(CustodyId { requester: CustodyRequester(SingleLookupReqId { req_id: 121, lookup_id: 101 }) }) }`

Since this ID is logged rather often, I believe there's value in:
1. Making it more succinct, to reduce log verbosity.
2. Making it a string that's easy to copy and to work with in Elastic.


Write custom `Display` implementations to render IDs in a more developer-friendly format (see the sketch after the examples below).

_DataColumnsByRootRequestId with a block lookup_

```
123/Custody/121/Lookup/101
```

_DataColumnsByRangeRequestId_

```
123/122/RangeSync/0/5492900659401505034
```

- This one will be shorter after https://github.com/sigp/lighthouse/pull/6868

Also made the log format and text consistent across all methods.
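
A hedged sketch of such a `Display` implementation (the struct fields are assumptions, not the real ones):

```rust
use std::fmt;

struct DataColumnsByRootRequestId {
    id: u32,
    req_id: u32,
    lookup_id: u32,
}

impl fmt::Display for DataColumnsByRootRequestId {
    // Renders e.g. "123/Custody/121/Lookup/101" instead of the nested
    // Debug output shown above.
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}/Custody/{}/Lookup/{}", self.id, self.req_id, self.lookup_id)
    }
}
```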
2025-02-10 01:27:05 +00:00
Eitan Seri-Levi
afdda83798 Enable Light Client server by default (#6950) 2025-02-10 01:27:03 +00:00
Michael Sproul
0344f68cfd Update attestation rewards API for Electra (#6819)
Closes:

- https://github.com/sigp/lighthouse/issues/6818


Use `MAX_EFFECTIVE_BALANCE_ELECTRA` (2048 ETH) for attestation reward calculations involving Electra.

Add a new `InteropGenesisBuilder` that tries to provide a more flexible way to build genesis states. Unfortunately due to lifetime jank, it is quite unergonomic at present. We may want to refactor this builder in future to make it easier to use.
2025-02-09 10:15:33 +00:00
Eitan Seri-Levi
6032f15890 Fix aggregate attestation v2 response (#6926) 2025-02-09 10:15:30 +00:00
Michael Sproul
2bd5bbdffb Optimise and refine SingleAttestation conversion (#6934)
Closes

- https://github.com/sigp/lighthouse/issues/6805


- Use a new `WorkEvent::GossipAttestationToConvert` to handle the conversion from `SingleAttestation` to `Attestation` _on_ the beacon processor (prevents a Tokio thread being blocked).
- Improve the error handling for single attestations. I think previously we had no ability to reprocess single attestations for unknown blocks -- we would just error. This seemed to be the case in both gossip processing and processing of `SingleAttestation`s from the HTTP API.
- Move the `SingleAttestation -> Attestation` conversion function into `beacon_chain` so that it can return the `attestation_verification::Error` type, which has well-defined error handling and peer penalties. The now-unused variants of `types::Attestation::Error` have been removed.
2025-02-07 23:18:57 +00:00
Michael Sproul
cb117f859d Fix fetch blobs in all-null case (#6940)
Fix another issue with fetch-blobs, similar to:

- https://github.com/sigp/lighthouse/pull/6911


Check whether the list of blobs returned is all `None`, and if so, do not proceed any further.
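
A minimal sketch of that check (surrounding names assumed):

```rust
// Bail out early when the EL returned only `None`s.
fn all_blobs_missing<T>(blobs: &[Option<T>]) -> bool {
    blobs.iter().all(Option::is_none)
}
```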

This prevents an ugly error like:

> Feb 03 17:32:12.384 ERRO Error fetching or processing blobs from EL, block_root: 0x7326fe2dc1cb9036c9de7a07a662c86a339085597849016eadf061b70b7815ba, error: BlobProcessingError(AvailabilityCheck(Unexpected)), module: network::network_beacon_processor:1011
2025-02-07 09:19:32 +00:00
chonghe
d6596dbe21 Keep execution payload during historical backfill when prune-payloads set to false (#6766)
- #6510


- Keep the execution payload during historical backfill when `--prune-payloads false` is set
- Add a field to the historical backfill debug log to indicate whether the execution payload is kept
- Add a test to check that historical blocks have the execution payload when `--prune-payloads false` is set
- Fix a very minor typo that I noticed while working on this
2025-02-07 09:19:29 +00:00
Lion - dapplion
921d95217d Remove un-used batch sync error condition (#6917)
- PR https://github.com/sigp/lighthouse/pull/6497 made some consistency checks inside the batch obsolete.

I forgot to remove the consumers of those errors.


Remove the unused batch sync error condition, which was a nested `Result<_, Result<_, E>>`.
2025-02-07 07:48:58 +00:00
Akihito Nakano
7408719de8 Remove unused metrics (#6817)
N/A


Removed metrics that were defined but not used anywhere.
2025-02-07 07:48:52 +00:00
Michael Sproul
364a978f12 Fix attestation queue length metric (#6924)
We were using the wrong queue length for attestation work event metrics.
2025-02-06 07:08:20 +00:00
Krishang Shah
a4e3f361bf Update metrics.rs (#6863)
Fixes #5206, a low-hanging fruit.
2025-02-06 05:19:51 +00:00
Lion - dapplion
2193f6a4d4 Add individual by_range sync requests (#6497)
Part of
- https://github.com/sigp/lighthouse/issues/6258

To address PeerDAS sync issues, we need to make individual by_range requests within a batch retriable. We should adopt the same pattern for lookup sync, where each request (block/blobs/columns) is tracked individually within a "meta" request that groups them all and handles retry logic.


- Building on https://github.com/sigp/lighthouse/pull/6398

The second step is to add individual request accumulators for `blocks_by_range`, `blobs_by_range`, and `data_columns_by_range`. This allows each request to progress independently and be retried separately.

Most of the logic is just plumbing; excuse the large diff. This PR does not change how requests are handled or retried. That will be done in a future PR changing the logic of `RangeBlockComponentsRequest`.

### Before

- Sync manager receives block with `SyncRequestId::RangeBlockAndBlobs`
- Insert block into `SyncNetworkContext::range_block_components_requests`
- (If received stream terminators of all requests)
- Return `Vec<RpcBlock>`, and insert into `range_sync`

### Now

- Sync manager receives block with `SyncRequestId::RangeBlockAndBlobs`
- Insert block into `SyncNetworkContext::blocks_by_range_requests`
- (If received stream terminator of this request)
- Return `Vec<SignedBlock>`, and insert into `SyncNetworkContext::components_by_range_requests`
- (If received a result for all requests)
- Return `Vec<RpcBlock>`, and insert into `range_sync`
2025-02-05 07:08:28 +00:00
Pawan Dhananjay
7bfdb33729 Return error if getBlobs not supported (#6911)
N/A


Previously, we returned an empty vec of `None`s if `getBlobs` was not supported by the EL. This resulted in confusing logging where we tried to process the empty list of blobs and logged a bunch of `Unexpected` errors. See:
```
Feb 03 17:32:12.383 DEBG Fetching blobs from the EL, num_expected_blobs: 6, block_root: 0x7326fe2dc1cb9036c9de7a07a662c86a339085597849016eadf061b70b7815ba, service: fetch_engine_blobs, service: beacon, module: beacon_chain::fetch_blobs:84
Feb 03 17:32:12.384 DEBG Processing engine blobs, num_fetched_blobs: 0, block_root: 0x7326fe2dc1cb9036c9de7a07a662c86a339085597849016eadf061b70b7815ba, service: fetch_engine_blobs, service: beacon, module: beacon_chain::fetch_blobs:197
Feb 03 17:32:12.384 ERRO Error fetching or processing blobs from EL, block_root: 0x7326fe2dc1cb9036c9de7a07a662c86a339085597849016eadf061b70b7815ba, error: BlobProcessingError(AvailabilityCheck(Unexpected)), module: network::network_beacon_processor:1011
```

The error we should be getting is that `getBlobs` is not supported; this PR adds a new error variant and returns it.
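
A hedged sketch of the early return (the variant name and capability check are assumptions):

```rust
enum FetchBlobsError {
    GetBlobsNotSupported,
}

fn fetch_blobs(el_supports_get_blobs: bool) -> Result<Vec<Option<Vec<u8>>>, FetchBlobsError> {
    if !el_supports_get_blobs {
        // Previously: an empty vec of `None`s was returned here, which
        // produced the confusing `Unexpected` errors in the logs above.
        return Err(FetchBlobsError::GetBlobsNotSupported);
    }
    Ok(vec![])
}
```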
2025-02-05 01:14:41 +00:00
Age Manning
d1061dcf59 UX Network Fixes (#6796)
There were two things I came across during some recent testing that this PR addresses.

1 - The default port for IPv6 was set to 9090, which is confusing. I've set it to match its IPv4 counterpart (i.e. 9000 and 9001). This makes more sense and is easier to firewall, for those firewalls that support both IP versions in a single rule.

2 - Watching the NAT status of Lighthouse, I noticed we only set the field to 1 once the NAT is traversed; we don't give it a default of 0 (false), so we only see results when it's successful. On peer disconnects, I've piggy-backed a loop over the connected peers to also watch and check for NAT status updates.
2025-02-04 01:28:37 +00:00
Eitan Seri-Levi
7e4b27c922 Migrate validator client to clap derive (#6300)
Partially #5900


Migrate the validator client CLI to clap derive.
2025-02-03 20:08:31 +00:00
Lion - dapplion
95cec45c38 Use data column batch verification consistently (#6851)
Resolve a `TODO(das)` to use KZG batch verification in `put_rpc_custody_columns`


Uses `verify_kzg_for_data_column_list_with_scoring` in all paths where more than one column is received, so that we use batch verification while retaining attributability of which peer sent a bad column.

This requires moving `verify_kzg_for_data_column_list_with_scoring` into the type's module so it can convert to the KZG-verified type.
2025-02-03 06:07:45 +00:00
Eitan Seri-Levi
1e2b547b35 Add builder SSZ flow (#6859) 2025-02-03 06:07:42 +00:00
Lion - dapplion
55d1e754b4 Subscribe to PeerDAS topics on Fulu fork (#6849)
`TODO(das)`: now that PeerDAS is scheduled in a hard fork, we can subscribe to its topics at fork activation. In current stable, we subscribe to PeerDAS topics as soon as the node starts if PeerDAS is scheduled.

This PR adds another TODO to unsubscribe from blob topics at the fork. This other PR included a solution for that, but I can include it in a separate PR:
- https://github.com/sigp/lighthouse/pull/5899/files


Include PeerDAS topics as part of the Fulu fork in `fork_core_topics`.
2025-02-03 06:07:39 +00:00
Pawan Dhananjay
a088b0b6c4 Fix subnet unsubscription time (#6890)
Hopefully fixes https://github.com/sigp/lighthouse/issues/6732


In our `scheduled_subscriptions`, we were setting the unsubscription slot to `current_slot + 1`. Given that we subscribe to the subnet at `duty.slot - 1`, the unsubscription slot ended up being `duty.slot`. So we were unsubscribing from the subnet at the beginning of the duty slot, which is insane.

Fixes the `scheduled_subscriptions` to unsubscribe at `duty.slot + 1`.
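
The before/after slot arithmetic as a sketch (taken from the description above):

```rust
// Subscription is scheduled at `duty.slot - 1`.
fn unsubscription_slot(duty_slot: u64) -> u64 {
    // Before (bug): `current_slot + 1` == `duty.slot`, dropping the
    // subnet right as the duty began.
    // After: keep the subscription through the duty slot.
    duty_slot + 1
}
```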
2025-02-03 05:14:11 +00:00
João Oliveira
ddb845d503 update libp2p to 0.55 (#6889)
Updates libp2p to `0.55`.
We will address the deprecations in a subsequent PR.
2025-01-31 12:21:06 +00:00
Lion - dapplion
027bb973f8 Compute columns in post-PeerDAS checkpoint sync (#6760)
Addresses #6026.

Post-PeerDAS, the DB expects to have data columns for the finalized block.


Instead of forcing the user to submit the columns, this PR computes the columns from the blobs, which we can already fetch from the checkpointz server or with the existing CLI options.

Note 1: (EDIT) The pruning concern has been addressed.

Note 2: I have not tested this feature.

Note 3: @michaelsproul, an alternative I recall is to not require the blobs/columns at this point and to expect backfill to populate the finalized block.
2025-01-31 06:00:52 +00:00
Eitan Seri-Levi
276eda3dfe POST /eth/v2/beacon/pool/attestations bugfixes (#6867) 2025-01-31 00:20:44 +00:00