* Add separate vote processing tpu port
* Add feature to send to tpu vote port
* Add vote rejecting sigverify mode
* use packet.meta.is_simple_vote_tx in place of deserialization
* consolidate code that identifies vote tx at common path for cpu and gpu
* new key for feature set
* banking forward tpu vote
* add tpu vote port to dockerfile and other review changes
* Simplify thread id compare
* fix a test; update ABI digest for the cluster_info change
Co-authored-by: Tao Zhu <tao@solana.com>
* sigverify to identify and mark simple vote transaction (#20021)
* check vote tx at get_packet_offsets to cover both cpu and gpu paths
* add pubkey_len to PacketOffsets to reduce the redundant bytes counting
* allow vote to have 1 or 2 sigs (#20082)
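For illustration, a rough sketch of the classification the bullets above describe, done here on an already-deserialized transaction rather than on raw packet offsets as in the actual sigverify path; the helper name and its signature are assumptions:

    use solana_sdk::{pubkey::Pubkey, transaction::Transaction};

    // Hypothetical helper: a transaction counts as a simple vote when it
    // carries 1 or 2 signatures and exactly one instruction that targets
    // the vote program.
    fn is_simple_vote_tx(tx: &Transaction, vote_program_id: &Pubkey) -> bool {
        let sig_count_ok = matches!(tx.signatures.len(), 1 | 2);
        let single_vote_ix = tx.message.instructions.len() == 1
            && tx
                .message
                .instructions
                .first()
                .and_then(|ix| tx.message.account_keys.get(ix.program_id_index as usize))
                == Some(vote_program_id);
        sig_count_ok && single_vote_ix
    }

In the real code the result of this check is recorded in packet.meta.is_simple_vote_tx so later stages can consult the flag instead of deserializing again.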
* Refactor stake program into solana_program (#17906)
* Move stake state / instructions into solana_program
* Update account-decoder
* Update cli and runtime
* Update all other parts
* Commit Cargo.lock changes in programs/bpf
* Update cli stake instruction import
* Allow integer arithmetic
* Update ABI digest
* Bump rust mem instruction count
* Remove useless structs
* Move stake::id() -> stake::program::id()
* Re-export from solana_sdk and mark deprecated
* Address feedback
* Run cargo fmt
* Run cargo fmt post cherry-pick
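For downstream callers, the migration implied by the bullets above looks roughly like the snippet below; the "before" paths are approximate and the snippet assumes the post-refactor re-exports through solana_sdk:

    // Before (approximate): stake types came from the solana-stake-program crate.
    // use solana_stake_program::stake_state::StakeState;
    // let stake_program_id = solana_stake_program::id();

    // After: stake state/instructions live in solana_program and are
    // re-exported (with the old paths deprecated) through solana_sdk.
    use solana_sdk::stake::{self, state::StakeState};

    fn main() {
        // stake::id() moved to stake::program::id(), per the bullet above.
        let stake_program_id = stake::program::id();
        println!("stake program: {}", stake_program_id);
        println!("StakeState size: {}", std::mem::size_of::<StakeState>());
    }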
There have been reports of excessive GPU memory usage and errors
from cudaHostRegister. There are some cases where pinning is
not required.
(cherry picked from commit eeee75c5be)
Co-authored-by: sakridge <sakridge@gmail.com>
* Upgrade Rust to 1.52.0
update nightly_version to the newly pushed docker image
fix clippy lint errors
1.52 comes with grcov 0.8.0; include this version in the script
* upgrade to Rust 1.52.1
* disable Serum in downstream projects until it is upgraded to Rust 1.52.1
https://github.com/solana-labs/solana/pull/15320
added an allocation limit to the recycler, which has been the source of a
number of bugs. For example, the code below panics by simply cloning packets:
    const RECYCLER_LIMIT: usize = 8;
    let recycler = PacketsRecycler::new_with_limit("", RECYCLER_LIMIT as u32);
    let packets = Packets::new_with_recycler(recycler.clone(), 1).unwrap();
    for _ in 0..RECYCLER_LIMIT {
        let _ = packets.clone();
    }
    // The next allocation exceeds the recycler's limit and panics.
    Packets::new_with_recycler(recycler.clone(), 1);
The implementation also fails to account for instances where objects are
consumed. Having the allocation limit in the recycler also seems out of place,
as higher-level code has better context to impose allocation limits (e.g. by
using bounded channels to rate-limit), whereas the recycler would be simpler
and more efficient if it just did the recycling.
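As a minimal sketch of the bounded-channel alternative mentioned above, using crossbeam_channel (already a workspace dependency); the Vec<u8> payload and the limit of 8 stand in for packet batches and a real budget:

    use crossbeam_channel::bounded;
    use std::thread;

    fn main() {
        // The backlog limit lives at the channel boundary, not in the recycler:
        // once 8 batches are in flight, the producer blocks instead of panicking.
        let (sender, receiver) = bounded::<Vec<u8>>(8);

        let consumer = thread::spawn(move || {
            for batch in receiver {
                // Process (and eventually recycle) the batch here.
                let _ = batch.len();
            }
        });

        for _ in 0..32 {
            // `send` blocks while the channel is full, rate-limiting the producer.
            sender.send(vec![0u8; 1024]).unwrap();
        }
        drop(sender);
        consumer.join().unwrap();
    }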
This commit:
* Reverts https://github.com/solana-labs/solana/pull/15320
* Adds a shrink policy to the recycler without an allocation limit.
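A toy illustration of a shrink policy without an allocation limit; the structure, constants, and trigger condition below are invented for the example and do not mirror the actual recycler:

    use std::sync::{Arc, Mutex};

    struct Recycler<T: Default> {
        gc: Arc<Mutex<Vec<T>>>,
        max_idle: usize,
    }

    impl<T: Default> Recycler<T> {
        fn new(max_idle: usize) -> Self {
            Self { gc: Arc::new(Mutex::new(Vec::new())), max_idle }
        }

        // Allocation never panics: reuse an idle object if one exists,
        // otherwise create a new one.
        fn allocate(&self) -> T {
            self.gc.lock().unwrap().pop().unwrap_or_default()
        }

        // A returned object is kept only while the idle pool is below the cap;
        // otherwise it is dropped, shrinking memory usage over time.
        fn recycle(&self, obj: T) {
            let mut gc = self.gc.lock().unwrap();
            if gc.len() < self.max_idle {
                gc.push(obj);
            }
        }
    }

    fn main() {
        let recycler: Recycler<Vec<u8>> = Recycler::new(8);
        // Over-allocating is fine; only the idle pool is bounded on the way back.
        let buffers: Vec<_> = (0..32).map(|_| recycler.allocate()).collect();
        for buf in buffers {
            recycler.recycle(buf);
        }
        assert!(recycler.gc.lock().unwrap().len() <= 8);
    }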
If the vector is pinned and has a recycler, the From<PinnedVec>
implementation of Vec should clone (instead of consume) the underlying
vector so that the next allocation of a PinnedVec will recycle an
already pinned one.
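A toy version of that conversion, with a made-up PinnedVec stand-in (the field names and recycler hook are not the real ones) just to show why cloning is preferred when the vector is pinned and recyclable:

    // Stand-in for perf's PinnedVec; the real type recycles its buffer on drop.
    struct PinnedVec<T: Clone> {
        buf: Vec<T>,
        pinned: bool,
        has_recycler: bool,
    }

    impl<T: Clone> From<PinnedVec<T>> for Vec<T> {
        fn from(v: PinnedVec<T>) -> Self {
            if v.pinned && v.has_recycler {
                // Clone rather than consume: the pinned buffer stays owned by the
                // PinnedVec, so (in the real type) it goes back to the recycler on
                // drop and the next PinnedVec allocation can reuse an already
                // pinned region instead of pinning a fresh one.
                v.buf.clone()
            } else {
                // Not pinned or no recycler: moving the buffer out is cheaper and
                // nothing is lost.
                v.buf
            }
        }
    }

    fn main() {
        let v = PinnedVec { buf: vec![1u8, 2, 3], pinned: true, has_recycler: true };
        let copied: Vec<u8> = v.into();
        assert_eq!(copied, vec![1, 2, 3]);
    }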
Validator logs show that prune messages are dropped because they exceed
packet data size:
https://github.com/solana-labs/solana/blob/f25c969ad/perf/src/packet.rs#L90-L92
This can exacerbate gossip traffic by redundantly increasing push
messages across the network. The workaround is to break prunes into smaller
chunks and send them over multiple messages.
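A sketch of the chunking, assuming the prune payload is dominated by the pruned pubkeys; PACKET_DATA_SIZE comes from solana_sdk, while header_size is a placeholder for the fixed per-message overhead:

    use solana_sdk::{packet::PACKET_DATA_SIZE, pubkey::Pubkey};

    // Split a large prune list into pieces that each fit in one packet, so the
    // prune is sent over multiple messages instead of being dropped.
    fn split_prunes(prunes: &[Pubkey], header_size: usize) -> Vec<Vec<Pubkey>> {
        let budget = PACKET_DATA_SIZE.saturating_sub(header_size);
        let max_prunes_per_msg = (budget / std::mem::size_of::<Pubkey>()).max(1);
        prunes
            .chunks(max_prunes_per_msg)
            .map(|chunk| chunk.to_vec())
            .collect()
    }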
ClusterInfo::process_packets handles incoming packets in a thread_pool:
https://github.com/solana-labs/solana/blob/87311cce7/core/src/cluster_info.rs#L2118-L2134
However, run-time profiling shows that the threads are not well utilized and
a lot of the processing is done sequentially.
This commit redistributes the work so that more of it is done in parallel.
Testing on a GCE cluster shows a 20%+ improvement in gossip packet processing,
with much smaller variation.
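The shape of the change is roughly the following (a sketch with stand-in types, not the actual process_packets code): flatten the incoming batches and let rayon split the combined work across the pool instead of dispatching one batch per task.

    use rayon::prelude::*;

    // Stand-in: a "packet" is just its raw payload here.
    type Packet = Vec<u8>;

    fn process_packets(batches: Vec<Vec<Packet>>) {
        // Flattening first keeps every thread busy even when batch sizes are
        // heavily skewed, instead of one thread handling a huge batch alone.
        let _handled: Vec<usize> = batches
            .into_par_iter()
            .flatten()
            .map(|packet| packet.len()) // placeholder for the real per-packet handling
            .collect();
    }

    fn main() {
        let batches = vec![vec![vec![0u8; 64]; 100], vec![vec![1u8; 64]; 3]];
        process_packets(batches);
    }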