* adds metrics for number of nodes vs number of pubkeys (#20512)
(cherry picked from commit 0da661de62)
# Conflicts:
# gossip/src/cluster_info_metrics.rs
* removes backport merge conflicts
Co-authored-by: behzad nouri <behzadnouri@gmail.com>
Merge AccountsDb plugin framework to v1.8 (#20518)
Summary of Changes
Create a plugin mechanism in the accounts update path so that accounts data can be streamed out to external data stores (be it Kafka or PostgreSQL). The plugin mechanism allows:
* data stores (connection strings/credentials) to be configured,
* accounts matching configured patterns to be streamed,
* implementations of the streaming for different destination stores (such as PostgreSQL) to be plugged in.
The code comprises four major parts:
* accountsdb-plugin-intf: defines the plugin interface which concrete plugins must implement.
* accountsdb-plugin-manager: manages loading and unloading of plugins, and provides the interfaces through which the validator notifies plugins of account updates.
* accountsdb-plugin-postgres: the concrete plugin implementation for PostgreSQL.
* the validator integration: updates are streamed right after snapshot restore and after account updates from transaction processing or other real writes.
A plugin is loaded on demand via a new validator CLI argument; there is no impact if no plugin is loaded.
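A minimal sketch of what such a plugin interface might look like; the trait and method names here are illustrative, not the exact accountsdb-plugin-intf API:
```rust
use std::error::Error;

// A sketch only; the real interface differs in names and detail.
pub trait AccountsDbPlugin: Send + Sync {
    /// Called once when the plugin manager dynamically loads the plugin,
    /// passing the configured connection strings/credentials.
    fn on_load(&mut self, config_file: &str) -> Result<(), Box<dyn Error>>;

    /// Called for every account update: right after snapshot restore and
    /// after account writes from transaction processing.
    fn update_account(
        &mut self,
        pubkey: &[u8; 32],
        data: &[u8],
        slot: u64,
    ) -> Result<(), Box<dyn Error>>;

    /// Called before the validator unloads the plugin.
    fn on_unload(&mut self) {}
}
```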
* Cost Model to limit transactions which are not parallelizable (#16694)
* Add the following to banking_stage:
1. CostModel as immutable ref shared between threads, to provide estimated cost for transactions.
2. CostTracker which is shared between threads, tracks transaction costs for each block.
* replace hard-coded program IDs with id() calls
* Add account access cost as part of TransactionCost. Account access costs are weighted differently between reads and writes, signed and non-signed.
* Establish instruction_execution_cost_table; add a function to update or insert instruction costs, unit tested. It is read-only for now; it allows replay to insert real-time instruction execution costs into the table.
* add a test for the cost_tracker atomic try_add operation (sketched below), which serves as a safety guard for future changes
* check cost against a local copy of cost_tracker and return transactions that would exceed the limit as unprocessed, to be buffered; only costs of bank-processed transactions are applied to the tracker
* add a bencher for the new banking_stage with a max cost limit so the cost model is hit consistently during bench iterations
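A simplified sketch of the atomic try_add guard mentioned above; names and fields are illustrative:
```rust
// Either the whole transaction cost fits under the block limit and is
// recorded, or nothing changes and the caller buffers the transaction
// as unprocessed.
pub struct CostTracker {
    block_cost_limit: u64,
    block_cost: u64,
}

impl CostTracker {
    pub fn try_add(&mut self, tx_cost: u64) -> Result<u64, &'static str> {
        let new_total = self.block_cost.saturating_add(tx_cost);
        if new_total > self.block_cost_limit {
            return Err("would exceed block cost limit");
        }
        self.block_cost = new_total;
        Ok(new_total)
    }
}
```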
* replay stage feed back program cost (#17731)
* replay stage feeds back realtime per-program execution cost to cost model;
* the program execution cost table is initialized as an empty table, no longer populated with hardcoded numbers;
* changed the cost unit to microseconds, using values collected from mainnet;
* add ExecuteCostTable with fixed capacity for security reasons; when its limit is reached, programs with the oldest age AND fewest occurrences are pushed out to make room for new programs.
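A sketch of the fixed-capacity table and its eviction rule; types and names are illustrative, not the exact implementation:
```rust
use std::collections::HashMap;

struct Entry {
    cost: u64,      // most recent execution cost for the program
    count: u64,     // how many times the program has been observed
    last_seen: u64, // "age": slot of the most recent update
}

struct ExecuteCostTable {
    capacity: usize,
    table: HashMap<[u8; 32], Entry>, // keyed by program id
}

impl ExecuteCostTable {
    fn upsert(&mut self, program_id: [u8; 32], cost: u64, now: u64) {
        // At capacity and this is a new program: evict the entry that is
        // both oldest and least frequently seen to make room.
        if self.table.len() >= self.capacity && !self.table.contains_key(&program_id) {
            if let Some(victim) = self
                .table
                .iter()
                .min_by_key(|(_, e)| (e.last_seen, e.count))
                .map(|(k, _)| *k)
            {
                self.table.remove(&victim);
            }
        }
        let e = self.table.entry(program_id).or_insert(Entry {
            cost,
            count: 0,
            last_seen: now,
        });
        e.cost = (e.cost + cost) / 2; // simple smoothing of observed costs
        e.count += 1;
        e.last_seen = now;
    }
}
```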
* investigate system performance test degradation (#17919)
* Add stats and counter around cost model ops, mainly:
- calculate transaction cost
- check transaction can fit in a block
- update block cost tracker after transactions are added to block
- replay_stage to update/insert execution cost to table
* Change mutex on cost_tracker to RwLock
* removed cloning of cost_tracker for local use, as metrics show the clone is very expensive
* acquire and hold locks for a block of transactions, instead of acquiring and releasing per transaction (sketched below)
* remove redundant would_fit check from the cost_tracker update execution path
* refactor cost checking to acquire locks less frequently
* avoid many TransactionCost heap allocations when calculating cost, which is in the hot path, executed per transaction
* create the hashmap with an initial capacity to reduce runtime heap reallocation
* code review changes: categorize stats, replace explicit drop calls, concisely initialize to default
* address potential deadlock by acquiring locks one at time
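A sketch of the lock-batching change, with a plain counter standing in for the shared cost tracker:
```rust
use std::sync::RwLock;

fn apply_batch(block_cost: &RwLock<u64>, tx_costs: &[u64]) {
    // One acquisition for the whole batch of transactions...
    let mut total = block_cost.write().unwrap();
    for &cost in tx_costs {
        *total = total.saturating_add(cost);
    }
    // ...instead of locking inside the loop, which pays lock overhead per
    // transaction. When several locks are needed, they are taken one at a
    // time, never nested, to avoid the deadlock mentioned above.
}
```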
* Persist cost table to blockstore (#18123)
* Add `ProgramCosts` Column Family to blockstore, implement LedgerColumn; add `delete_cf` to Rocks
* Add ProgramCosts to the compaction exclusion list alongside TransactionStatusIndex, in one place: `excludes_from_compaction()`
* Write the cost table to blockstore after `replay_stage` has replayed active banks; add stats to measure persist time
* Delete programs from `ProgramCosts` in blockstore when they are removed from the in-memory cost_table
* Only try to persist to blockstore when the cost_table has changed
* Restore the cost table during validator startup
* Offload `cost_model`-related operations from the replay main thread to a dedicated service thread; add a channel to send execute_timings between these threads (sketched below)
* Move `cost_update_service` to its own module; replay_stage is now decoupled from cost_model.
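A sketch of that service-and-channel shape, with illustrative types; the real service also owns the blockstore persist logic described above:
```rust
use std::collections::HashMap;
use std::sync::mpsc::Receiver;
use std::thread;

struct ProgramTiming {
    program_id: [u8; 32],
    units: u64,
}

fn spawn_cost_update_service(receiver: Receiver<Vec<ProgramTiming>>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        let mut cost_table: HashMap<[u8; 32], u64> = HashMap::new();
        // Blocks on the channel, so replay's main thread never waits on
        // cost-table updates or blockstore writes.
        for timings in receiver {
            let dirty = !timings.is_empty();
            for t in timings {
                cost_table.insert(t.program_id, t.units);
            }
            if dirty {
                // Persist changed entries to the ProgramCosts column
                // family; only done when the table actually changed.
                let _ = &cost_table;
            }
        }
    })
}
```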
* log warning when channel send fails (#18391)
* Aggregate cost_model into cost_tracker (#18374)
* aggregate cost_model into cost_tracker, decoupling it from banking_stage to prevent accidental deadlock
* simplify code, remove unused functions
* review fixes
* update ledger tool to restore cost table from blockstore (#18489)
* update ledger-tool to restore the cost model from blockstore when running compute-slot-cost
* Move initialize_cost_table into cost_model, so the function can be tested and shared between validator and ledger-tool
* refactor and simplify a test
* manually fix merge conflicts
* Per-program id timings (#17554)
* more manual fixing
* solve a merge conflict
* featurize cost model
* more merge fix
* cost model uses compute_unit to replace microsecond as cost unit (#18934)
* Reject blocks for costs above the max block cost (#18994)
* Update block max cost limit to fix performance regression (#19276)
* replace function with const var for better readability (#19285)
* Add few more metrics data points (#19624)
* periodically report sigverify_stage stats (#19674)
* manual merge
* cost model nits (#18528)
* Accumulate consumed units (#18714)
* tx wide compute budget (#18631)
* more manual merge
* ignore zeroize drop security
* update const cost values with data collected by #19627
* update cost calculation to closely follow the proposed fee schedule in #16984
* add transaction cost histogram metrics (#20350)
* rebase to 1.7.15
* add tx count and thread id to stats (#20451)
each stat reports and resets when slot changes
* remove cost_model feature_set
* ignore vote transactions from cost model
Co-authored-by: sakridge <sakridge@gmail.com>
Co-authored-by: Jeff Biseda <jbiseda@gmail.com>
Co-authored-by: Jack May <jack@solana.com>
If there is no newline at the end of the file, the appended export is glued to the existing code and generates a syntax error, like this:
```bash
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fiexport PATH="/Users/user/.local/share/solana/install/active_release/bin:$PATH"
```
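With the profile file ending in a newline before the installer appends its line, the same snippet parses cleanly:
```bash
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fi
export PATH="/Users/user/.local/share/solana/install/active_release/bin:$PATH"
```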
(cherry picked from commit 87c0d8d9e7)
Co-authored-by: OleG <emptystamp@gmail.com>
* Deploy error if buffer is too small
* missing file
(cherry picked from commit de8331eeaf)
# Conflicts:
# cli/tests/fixtures/noop.so
Co-authored-by: Jack May <jack@solana.com>
* Add separate vote processing tpu port
* Add feature to send to tpu vote port
* Add vote rejecting sigverify mode
* use packet.meta.is_simple_vote_tx in place of deserialization
* consolidate code that identifies vote tx at a common path for cpu and gpu (sketched below)
* new key for feature set
* banking stage forwards votes to the tpu vote port
* add tpu vote port to dockerfile and other review changes
* Simplify thread id compare
* fix a test; update cluster_info ABI digest for the change
Co-authored-by: Tao Zhu <tao@solana.com>
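A rough sketch of the vote-rejecting sigverify mode described above; the packet and meta shapes are illustrative stand-ins for the real perf-crate types:
```rust
struct Meta {
    is_simple_vote_tx: bool,
    discard: bool,
}
struct Packet {
    meta: Meta,
}

/// In vote-rejecting (vote-only) mode, mark everything that is not a simple
/// vote transaction for discard before signature verification runs.
fn filter_for_vote_port(packets: &mut [Packet], reject_non_vote: bool) {
    if !reject_non_vote {
        return;
    }
    for p in packets.iter_mut() {
        if !p.meta.is_simple_vote_tx {
            p.meta.discard = true;
        }
    }
}
```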
* removes Slot from TransmitShreds (#19327)
An earlier version of the code was funneling through stakes along with
shreds to broadcast:
https://github.com/solana-labs/solana/blob/b67ffab37/core/src/broadcast_stage.rs#L127
This was changed to only slots as stakes computation was pushed further
down the pipeline in:
https://github.com/solana-labs/solana/pull/18971
However, shreds themselves embody which slot they belong to, so pairing
them with a slot is redundant and adds room for bugs should the two become
inconsistent.
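The redundancy is easy to see in a sketch (types illustrative): once the channel carries shreds, each shred already knows its slot, so a separate slot field can only agree with it or silently disagree:
```rust
// Illustrative shred type; the real one lives in the ledger crate and
// likewise exposes its slot.
struct Shred {
    slot: u64, // plus payload, index, etc.
}
impl Shred {
    fn slot(&self) -> u64 {
        self.slot
    }
}

// Before: the slot travels alongside the shreds and can drift out of sync
// with what the shreds themselves say.
type TransmitShredsBefore = (u64, Vec<Shred>);

// After: just the shreds; the slot is read off a shred when needed.
type TransmitShredsAfter = Vec<Shred>;

fn slot_of(batch: &TransmitShredsAfter) -> Option<u64> {
    batch.first().map(|s| s.slot())
}
```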
(cherry picked from commit 1deb4add81)
# Conflicts:
# core/benches/cluster_info.rs
# core/src/broadcast_stage.rs
# core/src/broadcast_stage/broadcast_duplicates_run.rs
# core/src/broadcast_stage/fail_entry_verification_broadcast_run.rs
# core/src/broadcast_stage/standard_broadcast_run.rs
* removes backport merge conflicts
Co-authored-by: behzad nouri <behzadnouri@gmail.com>
* removes packet-count metrics from retransmit stage
Working towards sending shreds (instead of packets) to retransmit stage
so that shreds recovered from erasure codes are retransmitted as well.
The following commit will add these metrics back to window-service, earlier
in the pipeline.
(cherry picked from commit bf437b0336)
# Conflicts:
# core/src/retransmit_stage.rs
* adds packet/shred count stats to window-service
Adding back these metrics from the earlier commit which removed them
from retransmit stage.
(cherry picked from commit 8198a7eae1)
* removes erroneous uses of Arc<...> from retransmit stage
(cherry picked from commit 6e413331b5)
# Conflicts:
# core/src/retransmit_stage.rs
# core/src/tvu.rs
* sends shreds (instead of packets) to retransmit stage
Working towards channelling through shreds recovered from erasure codes
to retransmit stage.
(cherry picked from commit 3efccbffab)
# Conflicts:
# core/src/retransmit_stage.rs
* returns completed-data-set-info from insert_data_shred
instead of an opaque (u32, u32) pair which was then converted to
CompletedDataSetInfo at the call-site.
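A sketch of the named struct replacing the opaque pair; the field names mirror the call-site conversion but are illustrative:
```rust
/// What the (u32, u32) pair actually meant: the range of shred indices
/// forming a completed data set in a given slot.
pub struct CompletedDataSetInfo {
    pub slot: u64,        // slot the completed data set belongs to
    pub start_index: u32, // first shred index of the data set
    pub end_index: u32,   // last shred index of the data set
}
```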
(cherry picked from commit 3c71670bd9)
# Conflicts:
# ledger/src/blockstore.rs
* retransmits shreds recovered from erasure codes
Shreds recovered from erasure codes have not been received from turbine
and have not been retransmitted to other nodes downstream. This results
in more repairs across the cluster which is slower.
This commit channels through recovered shreds to retransmit stage in
order to further broadcast the shreds to downstream nodes in the tree.
(cherry picked from commit 7a8807b8bb)
# Conflicts:
# core/src/retransmit_stage.rs
# core/src/window_service.rs
* removes backport merge conflicts
Co-authored-by: behzad nouri <behzadnouri@gmail.com>
* removes raw indexing from streamer (#19183)
Raw indexing is verbose and error-prone. This same code had an indexing
bug causing validator nodes to panic just a few months ago:
https://github.com/solana-labs/solana/commit/482b8c6be
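A generic illustration of the pattern, not the actual streamer code: iterator adapters derive their bounds from the data itself, leaving no index to get wrong:
```rust
// A hand-rolled `for i in 0..=len / 2 { buf[2 * i] }` loop is one
// off-by-one away from a panic; the iterator version has no index at all.
fn sum_even_positions(data: &[u8]) -> u32 {
    data.iter().step_by(2).map(|&b| b as u32).sum()
}
```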
(cherry picked from commit 8229a4fbf6)
# Conflicts:
# streamer/Cargo.toml
* removes backport merge conflicts
Co-authored-by: behzad nouri <behzadnouri@gmail.com>