## Limit Backfill Sync
This PR transitions Lighthouse from syncing all the way back to genesis to only syncing back to the weak subjectivity point (~5 months of blocks) when syncing via checkpoint sync.
There are a number of important points to note with this PR:
- Firstly and most importantly, this PR fundamentally shifts the default security guarantees of checkpoint syncing in Lighthouse. Prior to this PR, Lighthouse could verify the checkpoint of any given chain by ensuring the chain eventually terminates at the corresponding genesis. This guarantee can still be obtained via the new CLI flag `--genesis-backfill`, which reverts Lighthouse to the old behaviour of downloading all blocks back to genesis (see the sketch after this list). The new default behaviour only checks the proposer signatures of the last ~5 months of blocks and cannot guarantee the chain matches the genesis chain.
- I have not modified any of the peer scoring or RPC responses. Clients syncing from genesis will downscore new Lighthouse peers that do not possess blocks prior to the WSP. This is by design: Lighthouse nodes of this form need a mechanism to sort through peers and find useful ones in order to complete their genesis sync. We therefore do not discriminate between empty/error responses for blocks before or after the local WSP. If we request a block that a peer does not possess, then fundamentally that peer is less useful to us than other peers.
- This is a radical shift in that the majority of nodes will no longer store the full history of the chain. In the future we could also add a pruning mechanism to remove old blocks from the db.
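
To make the new default concrete, here is a minimal sketch of the backfill-target calculation. The names `backfill_target_epoch` and `BACKFILL_EPOCH_LIMIT` are illustrative assumptions, not Lighthouse's actual API, and the epoch limit is just a stand-in value in the region of the spec's roughly-five-month block-request bound:

```rust
/// Illustrative constant, not Lighthouse's actual value: roughly five
/// months of epochs, in the spirit of the spec's block-request bound.
const BACKFILL_EPOCH_LIMIT: u64 = 33_024;

/// The oldest epoch that backfill sync must download back to.
fn backfill_target_epoch(current_epoch: u64, genesis_backfill: bool) -> u64 {
    if genesis_backfill {
        // `--genesis-backfill`: keep the old behaviour and sync to genesis.
        0
    } else {
        // New default: stop at the weak subjectivity point, ~5 months back.
        current_epoch.saturating_sub(BACKFILL_EPOCH_LIMIT)
    }
}
```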
Co-authored-by: Paul Hauner <paul@paulhauner.com>
## Issue Addressed
#4150
## Proposed Changes
Maintain trusted peers in the pruning logic. ~~In principle the changes here are not necessary as a trusted peer has a max score (100) and all other peers can have at most 0 (because we don't implement positive scores). This means that we should never prune trusted peers unless we have more trusted peers than the target peer count.~~
This change updates the logic to explicitly never prune trusted peers, which I expect is the intuitive behaviour.
~~I suspect the issue in #4150 arises when a trusted peer disconnects from us for one reason or another and then we remove that peer from our peerdb as it becomes stale. When it re-connects at some large time later, it is no longer a trusted peer.~~
Currently we do disconnect trusted peers, and this PR corrects this to maintain trusted peers in the pruning logic.
As suggested in #4150, we maintain trusted peers in the db and thus remember them even if they disconnect from us.
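
As a minimal sketch of the intended behaviour, using made-up types (Lighthouse's real peer database logic is more involved): trusted peers are filtered out of the candidate set before any score-based pruning happens, rather than relying on their maximal score to keep them at the back of the queue.

```rust
struct Peer {
    id: u64,
    trusted: bool,
    score: f64,
}

/// Choose peers to prune down to `target_peer_count`, never
/// considering trusted peers as candidates.
fn peers_to_prune(peers: &[Peer], target_peer_count: usize) -> Vec<u64> {
    let excess = peers.len().saturating_sub(target_peer_count);
    // Explicitly exclude trusted peers from pruning.
    let mut candidates: Vec<&Peer> = peers.iter().filter(|p| !p.trusted).collect();
    // Prune the worst-scoring peers first.
    candidates.sort_by(|a, b| {
        a.score
            .partial_cmp(&b.score)
            .unwrap_or(std::cmp::Ordering::Equal)
    });
    candidates.into_iter().take(excess).map(|p| p.id).collect()
}
```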
* rename 4844 to deneb
* rename 4844 to deneb
* move excess data gas field
* get EF tests working
* fix ef tests lint
* fix the blob identifier ef test
* fix accessed files ef test script
* get beacon chain tests passing
* introduce availability pending block
* add intoavailableblock trait
* small fixes
* add 'gossip blob cache' and start to clean up processing and transition types
* shard memory blob cache
* Initial commit
* Fix after rebase
* Add gossip verification conditions
* cache cleanup
* general chaos
* extended chaos
* cargo fmt
* more progress
* more progress
* tons of changes, just tryna compile
* everything, everywhere, all at once
* Reprocess an ExecutedBlock on unavailable blobs
* Add sus gossip verification for blobs
* Merge stuff
* Remove reprocessing cache stuff
* lint
* Add a wrapper to allow construction of only valid `AvailableBlock`s (see the sketch after this commit list)
* rename blob arc list to blob list
* merge cleanup
* Revert "merge cleanup"
This reverts commit 5e98326878.
* Revert "Revert "merge cleanup""
This reverts commit 3a4009443a.
* fix rpc methods
* move beacon block and blob to eth2/types
* rename gossip blob cache to data availability checker
* lots of changes
* fix some compilation issues
* fix compilation issues
* fix compilation issues
* fix compilation issues
* fix compilation issues
* fix compilation issues
* cargo fmt
* use a common data structure for block import types
* fix availability check on proposal import
* refactor the blob cache and split the block wrapper into two types
* add type conversion for signed block and block wrapper
* fix beacon chain tests and do some renaming, add some comments
* Partial processing (#4)
* move beacon block and blob to eth2/types
* rename gossip blob cache to data availability checker
* lots of changes
* fix some compilation issues
* fix compilation issues
* fix compilation issues
* fix compilation issues
* fix compilation issues
* fix compilation issues
* cargo fmt
* use a common data structure for block import types
* fix availability check on proposal import
* refactor the blob cache and split the block wrapper into two types
* add type conversion for signed block and block wrapper
* fix beacon chain tests and do some renaming, add some comments
* cargo update (#6)
---------
Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: realbigsean <seananderson33@gmail.com>
* update docs
* introduce a temp enum to model an adjusted `BlockWrapper` and fix blob coupling
* fix compilation issue
* fix blob coupling in the network context
* review comments
* wip
* fix router
* arc the byroot responses we send
* add placeholder for blob verification
* respond to blobs by range and blobs by root request in the most horrible and gross way ever
* everything in sync is now unimplemented
* fix compilation issues
* `http_api` change is very small, just add it
* remove ctrl-c ctrl-v's docs
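
One commit above adds a wrapper so that only valid `AvailableBlock`s can be constructed. As a hedged illustration of that pattern (the real Lighthouse types are richer), a module-private constructor makes merely holding the type a proof that the availability check passed:

```rust
mod availability {
    pub struct Block; // stand-in for a signed beacon block
    pub struct BlobList; // stand-in for the block's blobs

    /// Only constructible via `check_availability`, so possessing an
    /// `AvailableBlock` proves the data availability check has passed.
    pub struct AvailableBlock {
        block: Block,
        blobs: Option<BlobList>,
    }

    impl AvailableBlock {
        pub fn block(&self) -> &Block {
            &self.block
        }
        pub fn blobs(&self) -> Option<&BlobList> {
            self.blobs.as_ref()
        }
    }

    /// Stand-in for the real check (e.g. verifying blobs against the
    /// block's KZG commitments).
    pub fn check_availability(
        block: Block,
        blobs: Option<BlobList>,
    ) -> Result<AvailableBlock, &'static str> {
        Ok(AvailableBlock { block, blobs })
    }
}
```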
## Issue Addressed
Cleans up all the remnants of 4844 in capella. This makes sure that when 4844 is reviewed, nothing is missed because it was already included here.
## Proposed Changes
drop a bomb on every 4844 thing
## Additional Info
Merge process I did (locally) is as follows:
- squash merge to produce one commit
- in a new branch off unstable containing the squashed commit, create a `git revert HEAD` commit
- merge that new branch onto 4844 with `--strategy ours`
- compare local 4844 to remote 4844 and make sure the diff is empty
- enjoy
Co-authored-by: Paul Hauner <paul@paulhauner.com>
- There was a bug in responding to blob range requests where we would incorrectly label the first slot of an epoch as non-skipped when it was in fact skipped. This bug did not exist in the code for responding to block range requests, because the logic error there was mitigated by defensive coding elsewhere.
- There was a bug where a block received during range sync without a corresponding blob (and vice versa) was incorrectly interpreted as a stream termination.
- RPC size limit fixes.
- Our blob cache was deadlocking, so I removed its use for now.
- Because we changed the finalized sync batch size from 2 to 1 and transitioned to using exact epoch boundaries for batches (rather than one slot past the epoch boundary), finalized sync now needs to reach 2 epochs + 1 slot past our peer's finalized slot in order to finalize the chain locally (see the sketch after this list).
- Use fork context bytes in RPC methods on both the server and client side.
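
To illustrate the batch-boundary arithmetic in the finalized sync point above (the constant and function name are assumptions for the example, not Lighthouse code):

```rust
const SLOTS_PER_EPOCH: u64 = 32;

/// The slot finalized sync must reach before the chain can finalize
/// locally: two full epochs plus one slot beyond the peer's finalized slot.
fn finalized_sync_target(peer_finalized_slot: u64) -> u64 {
    // e.g. a peer finalized at slot 6400 requires syncing to slot 6465.
    peer_finalized_slot + 2 * SLOTS_PER_EPOCH + 1
}
```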