If a node "a" receives instance-info from node "b1" it will override any
instance-info associated with "b1" pubkey in its crds table. This makes
it less likely that when "b1" receives crds values from "a" (either
through pull or push), it sees other instances of itself (because node
"a" discarded them when it received "b1" instance info).
In order for the crds table to contain all instance-info associated with
the same pubkey at the same time, we need to add the instance tokens to
the keys in the crds table (i.e. the CrdsValueLabel).
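A minimal sketch of that keying change, using stand-in types (the real
CrdsValueLabel and NodeInstance live in the gossip crate and carry more
fields):
```rust
// Hedged sketch (not the actual solana-gossip types): illustrates keying the
// crds table by (pubkey, instance token) so multiple NodeInstance values for
// the same pubkey can coexist instead of overwriting each other.
use std::collections::HashMap;

type Pubkey = [u8; 32];

// Assumed label shape: other variants are keyed by pubkey only, while
// NodeInstance also carries the instance token.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum CrdsValueLabel {
    ContactInfo(Pubkey),
    NodeInstance(Pubkey, u64), // (pubkey, instance token)
}

#[derive(Debug)]
struct NodeInstance {
    from: Pubkey,
    token: u64,
    wallclock: u64,
}

fn main() {
    let mut table: HashMap<CrdsValueLabel, NodeInstance> = HashMap::new();
    let pk = [1u8; 32];

    // Two instances claiming the same pubkey now occupy distinct keys, so
    // neither insert discards the other and both stay visible to gossip peers.
    for token in [42u64, 77u64] {
        let value = NodeInstance { from: pk, token, wallclock: 0 };
        table.insert(CrdsValueLabel::NodeInstance(pk, token), value);
    }
    assert_eq!(table.len(), 2);
}
```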
* Adds a CLI option to the validator to enable just-in-time compilation of BPF.
* Refactors to use bpf_loader_program instead of feature_set to pass the JIT flag from the validator CLI to the executor.
Gossip and other places repeatedly de-serialize vote-state stored in
vote accounts. Ideally the first de-serialization should cache the
result.
This commit adds a new VoteAccount type which lazily de-serializes
VoteState from Account data and caches the result internally.
The Serialize and Deserialize traits are manually implemented to match
existing code, so despite changes to frozen_abi, this commit should be
backward compatible.
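A minimal sketch of the lazy-deserialize-and-cache pattern, using
stand-in types and a stubbed parser in place of the real bincode-encoded
VoteState:
```rust
// Hedged sketch of the idea behind VoteAccount; the real type wraps Account
// data and deserializes VoteState, which is stubbed out here so the example
// stays self-contained.
use std::sync::{Arc, OnceLock};

#[derive(Debug, Clone)]
struct VoteState {
    // stand-in for the real VoteState fields
    commission: u8,
}

#[derive(Clone)]
struct VoteAccount {
    data: Arc<Vec<u8>>,                            // serialized account data
    vote_state: Arc<OnceLock<Option<VoteState>>>,  // cached parse result
}

impl VoteAccount {
    fn new(data: Vec<u8>) -> Self {
        Self { data: Arc::new(data), vote_state: Arc::new(OnceLock::new()) }
    }

    // Deserializes at most once; later callers get the cached result.
    fn vote_state(&self) -> Option<&VoteState> {
        self.vote_state
            .get_or_init(|| {
                // Placeholder for e.g. bincode::deserialize::<VoteState>(&self.data)
                self.data.first().map(|&b| VoteState { commission: b })
            })
            .as_ref()
    }
}

fn main() {
    let account = VoteAccount::new(vec![5, 0, 0]);
    // Repeated reads reuse the cached VoteState instead of re-deserializing.
    assert_eq!(account.vote_state().map(|s| s.commission), Some(5));
    assert_eq!(account.vote_state().map(|s| s.commission), Some(5));
}
```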
* runtime: Add `FeeCalculator` resolution method to `HashAgeKind`
* runtime: Plumb fee-collected accounts for failed nonce tx rollback
* runtime: Use fee-collected nonce/fee account for nonced TX error rollback
* runtime: Add test for failed nonced TX accounts rollback
* Fee payer test
* fixup: replace nonce account when it pays the fee
* fixup: nonce fee-payer collect test
* fixup: fixup: clippy/fmt for replace...
* runtime: Test for `HashAgeKind::fee_calculator()`
* Clippy
Co-authored-by: Trent Nelson <trent@solana.com>
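A minimal sketch of the FeeCalculator resolution idea named in the
bullets above; the types and the method signature are simplified
stand-ins, not the runtime's actual definitions:
```rust
// Hedged sketch: durable-nonce transactions carry their FeeCalculator in the
// stored nonce account, while normal transactions fall back to the fee from
// the blockhash queue.
#[derive(Clone)]
struct FeeCalculator {
    lamports_per_signature: u64,
}

struct Account {
    // In the real runtime the nonce state (including a FeeCalculator) is
    // encoded in the account data; a parsed copy stands in here.
    fee_calculator: FeeCalculator,
}

enum HashAgeKind {
    Extant,
    DurableNonce(/* nonce pubkey */ [u8; 32], Account),
}

impl HashAgeKind {
    // Assumption: the real method's signature may differ; this captures only
    // the resolution rule described above.
    fn fee_calculator(&self) -> Option<FeeCalculator> {
        match self {
            HashAgeKind::Extant => None,
            HashAgeKind::DurableNonce(_, account) => Some(account.fee_calculator.clone()),
        }
    }
}

fn main() {
    let queue_fee = FeeCalculator { lamports_per_signature: 5_000 };
    let nonce_account = Account {
        fee_calculator: FeeCalculator { lamports_per_signature: 10_000 },
    };
    let kind = HashAgeKind::DurableNonce([0u8; 32], nonce_account);
    // A nonced transaction is charged the fee captured in its nonce account;
    // a normal transaction would use the queue's fee instead.
    let fee = kind.fee_calculator().unwrap_or(queue_fee);
    assert_eq!(fee.lamports_per_signature, 10_000);
}
```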
process_pull_requests acquires a write lock on the crds table to update
record timestamps for each of the pull-request callers:
https://github.com/solana-labs/solana/blob/3087c9049/core/src/crds_gossip_pull.rs#L287-L300
However, pull-requests overlap a lot in callers, so this function ends
up doing a lot of redundant work.
This commit obtains the unique callers before acquiring an exclusive
lock on the crds table.
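A minimal sketch of the optimization, with a stand-in crds table; the
point is that deduplication happens before the write lock is taken:
```rust
// Hedged sketch: collapse the pull-request callers to a unique set (keeping
// the latest wallclock per pubkey) before taking the exclusive crds lock, so
// each record's timestamp is updated only once.
use std::collections::HashMap;
use std::sync::RwLock;

type Pubkey = [u8; 32];

#[derive(Default)]
struct Crds {
    // Stand-in for the real crds table: pubkey -> local timestamp.
    timestamps: HashMap<Pubkey, u64>,
}

impl Crds {
    fn update_record_timestamp(&mut self, pubkey: &Pubkey, now: u64) {
        let ts = self.timestamps.entry(*pubkey).or_insert(0);
        *ts = (*ts).max(now);
    }
}

fn process_pull_requests(crds: &RwLock<Crds>, callers: &[(Pubkey, u64)]) {
    // Dedup outside the lock; requests overlap heavily in callers.
    let mut unique: HashMap<Pubkey, u64> = HashMap::new();
    for (pubkey, wallclock) in callers {
        let entry = unique.entry(*pubkey).or_insert(*wallclock);
        *entry = (*entry).max(*wallclock);
    }
    // Single write-lock acquisition, one timestamp update per unique caller.
    let mut table = crds.write().unwrap();
    for (pubkey, wallclock) in unique {
        table.update_record_timestamp(&pubkey, wallclock);
    }
}

fn main() {
    let crds = RwLock::new(Crds::default());
    let caller = [7u8; 32];
    // Many overlapping requests from the same caller result in one update.
    let requests = vec![(caller, 10), (caller, 12), (caller, 11)];
    process_pull_requests(&crds, &requests);
    assert_eq!(crds.read().unwrap().timestamps[&caller], 12);
}
```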
* Fix fragile tests in prep of stake rewrite pr
* Restore BOOTSTRAP_VALIDATOR_LAMPORTS where appropriate
* Further clean up
* Further clean up
* Align with other call-site change
* Remove false warn!
* fix ci!
Validator logs show that prune messages are dropped because they exceed
the packet data size:
https://github.com/solana-labs/solana/blob/f25c969ad/perf/src/packet.rs#L90-L92
This can exacerbate gossip traffic by redundantly increasing push
messages across the network. The workaround is to break prunes into
smaller chunks and send them over in multiple messages.
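A minimal sketch of the chunking workaround; the size constants and
message layout are illustrative, not the actual gossip wire format:
```rust
// Hedged sketch: split the list of pruned origins into fixed-size chunks so
// each prune message stays under the packet data limit.
const PACKET_DATA_SIZE: usize = 1232; // assumed payload budget
const PRUNE_HEADER_SIZE: usize = 100; // assumed per-message overhead
const PUBKEY_SIZE: usize = 32;

type Pubkey = [u8; 32];

// Groups the pruned pubkeys into chunks that each serialize to at most
// PACKET_DATA_SIZE bytes.
fn split_prunes(prunes: &[Pubkey]) -> Vec<Vec<Pubkey>> {
    let max_per_message = (PACKET_DATA_SIZE - PRUNE_HEADER_SIZE) / PUBKEY_SIZE;
    prunes
        .chunks(max_per_message)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let prunes = vec![[0u8; 32]; 100];
    let messages = split_prunes(&prunes);
    // Every chunk fits in a packet; together they cover all pruned origins.
    assert!(messages
        .iter()
        .all(|m| PRUNE_HEADER_SIZE + m.len() * PUBKEY_SIZE <= PACKET_DATA_SIZE));
    assert_eq!(messages.iter().map(Vec::len).sum::<usize>(), prunes.len());
}
```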
* Add TestValidator::new_with_fees constructor, and warning for low bootstrap_validator_lamports
* Add logging to solana-tokens integration test to help catch low bootstrap_validator_lamports in the future
* Reasonable TestValidator mint_lamports
split_gossip_messages:
https://github.com/solana-labs/solana/blob/a97c04b40/core/src/cluster_info.rs#L1536-L1574
splits crds-values into chunks that fit into a gossip packet. However,
it uses a global upper-bound for the header size across all protocols:
https://github.com/solana-labs/solana/blob/a97c04b40/core/src/cluster_info.rs#L90-L93
This can be wasteful because the specific gossip protocol can have a
smaller header than this upper-bound (e.g. Protocol::PushMessage is 170
bytes smaller). Packing more crds-values into one gossip packet avoids
the overhead of separate packets and reduces the total number of bytes
sent over the wire.
This commit updates the splitting function to take a max-chunk-size
argument. At the call site, this value is set based on the size of the
specific protocol over which the values are sent.
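A minimal sketch of a splitting function parameterized by
max-chunk-size; the sizes and the per-protocol header value are
assumptions for illustration:
```rust
// Hedged sketch: values are greedily packed so each chunk's serialized size
// stays within the byte budget left after the specific protocol's header.
const PACKET_DATA_SIZE: usize = 1232; // assumed payload budget

fn split_gossip_messages(max_chunk_size: usize, value_sizes: &[usize]) -> Vec<Vec<usize>> {
    let mut chunks: Vec<Vec<usize>> = Vec::new();
    let mut current: Vec<usize> = Vec::new();
    let mut current_bytes = 0usize;
    for &size in value_sizes {
        // Values larger than the budget would need separate handling; they
        // are simply skipped in this sketch.
        if size > max_chunk_size {
            continue;
        }
        if current_bytes + size > max_chunk_size && !current.is_empty() {
            chunks.push(std::mem::take(&mut current));
            current_bytes = 0;
        }
        current.push(size);
        current_bytes += size;
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // Assumed call-site behavior: the budget is per-protocol, e.g. a
    // push-message header smaller than the worst case across all protocols.
    let push_header_size = 80; // hypothetical
    let max_chunk_size = PACKET_DATA_SIZE - push_header_size;
    let value_sizes = vec![400, 300, 500, 200, 700];
    let chunks = split_gossip_messages(max_chunk_size, &value_sizes);
    assert!(chunks.iter().all(|c| c.iter().sum::<usize>() <= max_chunk_size));
    println!("{} packets instead of {}", chunks.len(), value_sizes.len());
}
```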
In several places in gossip code, the entire crds table is scanned only
to filter out nodes' contact-infos. Currently on mainnet the crds table
has ~70k entries, while there are only ~470 nodes, so the full table
scan is inefficient. Instead we may maintain an index of only the
nodes' contact-infos.
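A minimal sketch of such an index, using simplified stand-ins for the
crds types; inserts keep the index in sync and node listing iterates
only the index:
```rust
// Hedged sketch: maintain a side index of contact-info entries so that
// listing nodes does not require scanning the whole crds table.
use std::collections::{HashMap, HashSet};

type Pubkey = [u8; 32];

#[derive(Clone, Debug)]
enum CrdsData {
    ContactInfo { from: Pubkey, gossip_addr: String },
    Vote { from: Pubkey },
}

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum CrdsValueLabel {
    ContactInfo(Pubkey),
    Vote(Pubkey),
}

#[derive(Default)]
struct Crds {
    table: HashMap<CrdsValueLabel, CrdsData>,
    // Index of contact-info keys (~470 on mainnet) versus the full table
    // (~70k entries).
    nodes: HashSet<CrdsValueLabel>,
}

impl Crds {
    fn insert(&mut self, label: CrdsValueLabel, data: CrdsData) {
        if matches!(data, CrdsData::ContactInfo { .. }) {
            self.nodes.insert(label.clone());
        }
        self.table.insert(label, data);
    }

    // Iterates only the index instead of the entire table.
    fn get_nodes(&self) -> impl Iterator<Item = &CrdsData> {
        self.nodes.iter().filter_map(|label| self.table.get(label))
    }
}

fn main() {
    let mut crds = Crds::default();
    let pk = [3u8; 32];
    crds.insert(
        CrdsValueLabel::ContactInfo(pk),
        CrdsData::ContactInfo { from: pk, gossip_addr: "127.0.0.1:8001".into() },
    );
    crds.insert(CrdsValueLabel::Vote(pk), CrdsData::Vote { from: pk });
    assert_eq!(crds.get_nodes().count(), 1);
}
```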
The --rpc-pubsub-enable-vote-subscription flag may be used to enable
the vote subscription.
The current vote subscription is problematic because it emits a
notification for *every* vote, which amounts to hundreds per second in
a real cluster. Critically, it is also missing information about *who*
is voting, rendering all those notifications practically useless.
Until these two issues can be resolved, the vote subscription is not
much more than a potential DoS vector.