Compare commits

...

173 Commits

Author SHA1 Message Date
mergify[bot]
4ebeb33602 Skip adding builtins if they will be removed (backport #23233) (#23241)
* Skip adding builtins if they will be removed (#23233)

* Add failing test for precompile transition

* Skip adding builtins if they will be removed

* cargo clean

* nits

* fix abi check

* remove workaround

Co-authored-by: Jack May <jack@solana.com>
(cherry picked from commit 1719d2349f)

# Conflicts:
#	runtime/src/bank.rs
#	runtime/src/builtins.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
Co-authored-by: Jack May <jack@solana.com>
2022-02-19 08:28:30 +00:00
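The core idea in the backport above is that a builtin program whose removal feature is already active should never be added to the bank's builtin list in the first place. The sketch below only illustrates that filtering rule; the types and function names are hypothetical stand-ins, not the real runtime/src/builtins.rs API.

```rust
use std::collections::HashSet;

// Hypothetical stand-ins for illustration; not the real Bank/Builtin types.
#[derive(Debug)]
struct Builtin {
    name: &'static str,
    // Feature (as a string id here) that, once active, removes this builtin.
    removed_by_feature: Option<&'static str>,
}

// Keep only builtins whose removal feature is not already active, so we never
// add a program that would immediately be removed again.
fn builtins_to_add(all: Vec<Builtin>, active_features: &HashSet<&str>) -> Vec<Builtin> {
    all.into_iter()
        .filter(|b| match b.removed_by_feature {
            Some(feature) => !active_features.contains(feature),
            None => true,
        })
        .collect()
}

fn main() {
    let all = vec![
        Builtin { name: "keep_me", removed_by_feature: None },
        Builtin { name: "drop_me", removed_by_feature: Some("remove_drop_me") },
    ];
    let active: HashSet<&str> = ["remove_drop_me"].into_iter().collect();
    let kept = builtins_to_add(all, &active);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].name, "keep_me");
}
```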
mergify[bot]
1f4ad0d1e8 Precompiles owned by the native loader (#23237) (#23240)
(cherry picked from commit 970f543ef6)

Co-authored-by: Jack May <jack@solana.com>
2022-02-19 02:45:15 +00:00
mergify[bot]
b2b92d7f5c Add --locked to spl-token-cli install (#23223) (#23225)
(cherry picked from commit c696944d36)

Co-authored-by: Will Hickey <will.hickey@solana.com>
2022-02-18 05:12:51 +00:00
mergify[bot]
02f8651a9c Fix the flaky test test_restart_tower_rollback (backport #23129) (#23155)
* Fix the flaky test test_restart_tower_rollback (#23129)

* Add flag to disable voting until a slot to avoid duplicate voting

* Fix the tower rollback test and remove it from flaky.

(cherry picked from commit ab92578b02)

* Resolve conflicts

Co-authored-by: Ashwin Sekar <ashwin@solana.com>
2022-02-17 20:31:27 +00:00
mergify[bot]
0fdbec9735 Add simulation detection countermeasure (backport #22880) (#23143)
* Add simulation detection countermeasure (#22880)

* Add simulation detection countermeasures

* Add program and test using TestValidator

* Remove incinerator deposit

* Remove incinerator

* Update Cargo.lock

* Add more features to simulation bank

* Update Cargo.lock per rebase

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
(cherry picked from commit c42b80f099)

# Conflicts:
#	programs/bpf/Cargo.lock
#	programs/bpf/Cargo.toml

* Update Cargo.lock

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2022-02-17 14:45:24 +00:00
mergify[bot]
f629c71849 Add slot-based timing metrics (backport #23097) (#23210)
* Add execute timings (#23097)

(cherry picked from commit 619335df1a)

# Conflicts:
#	core/src/banking_stage.rs

* resolve conflicts

Co-authored-by: carllin <carl@solana.com>
2022-02-17 10:57:50 +00:00
mergify[bot]
43e562142f docs: remove wallet ads (#23208)
(cherry picked from commit fa680a35ea)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-02-17 05:54:08 +00:00
Trent Nelson
c3098e99d1 Bump version to v1.9.8 2022-02-16 21:42:57 -07:00
mergify[bot]
421ad42b12 Fix flaky optimistic confirmation tests (backport #23178) (#23200)
* Fix flaky optimistic confirmation tests (#23178)

(cherry picked from commit bca1d51735)

# Conflicts:
#	local-cluster/tests/local_cluster.rs
#	local-cluster/tests/local_cluster_flakey.rs

* Resolve conflicts

Co-authored-by: carllin <carl@solana.com>
2022-02-17 03:18:24 +00:00
mergify[bot]
08cc140d4a accounts_index: Add SPL Token account indexing for token-2022 accounts (#23043) (#23203)
(cherry picked from commit a102453bae)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-02-17 02:20:38 +00:00
mergify[bot]
2120ef5808 Fix ed25519 builtin program handling (backport #23182) (#23195)
* Fix ed25519 builtin program handling (#23182)

* Fix ed25519 builtin program handling

* Fix tests

* Add integration tests for processing transactions with ed25519 ixs

* Fix another test

* fix formatting

(cherry picked from commit 813725dfec)

* fix tests

Co-authored-by: Justin Starry <justin@solana.com>
Co-authored-by: Jack May <jack@solana.com>
2022-02-17 00:44:44 +00:00
Lijun Wang
c08af09aaa Removed solana-accountsdb-plugin-postgres from the monorepo as it has its own (#22567) (#23202)
Removed solana-accountsdb-plugin-postgres from the monorepo as it has its own standalone repo now
2022-02-16 15:52:59 -08:00
mergify[bot]
8b12749f02 forward_buffered_packets return packet count in error path (#23167) (#23187)
(cherry picked from commit 115d71536b)

Co-authored-by: Jeff Biseda <jbiseda@gmail.com>
2022-02-16 13:01:16 -08:00
mergify[bot]
e343a17ce9 Update ping to transfer to self, with rotating amount (#22657) (#22675)
* Update ping to transfer to self, with rotating amount

* Remove balance check

(cherry picked from commit 90689585ef)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2022-02-16 12:52:28 -07:00
mergify[bot]
3fd78ac6ea shrink batches when over 80% of the space is wasted (backport #23066) (#23189)
* shrink batches when over 80% of the space is wasted (#23066)

* shrink batches when over 80% of the space is wasted

(cherry picked from commit 83d31c9e65)

# Conflicts:
#	core/benches/sigverify_stage.rs
#	core/src/sigverify_stage.rs
#	perf/src/sigverify.rs

* fixup!

Co-authored-by: anatoly yakovenko <anatoly@solana.com>
2022-02-16 19:30:16 +00:00
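A rough illustration of the threshold rule named in the commit title above (a sketch only; the real sigverify code operates on packet batches, and the names here are hypothetical):

```rust
/// Re-pack batches once more than 80% of their allocated capacity is unused.
const MAX_WASTED_FRACTION: f64 = 0.8;

// Assumes used_packets <= capacity.
fn should_shrink(used_packets: usize, capacity: usize) -> bool {
    if capacity == 0 {
        return false;
    }
    let wasted = (capacity - used_packets) as f64 / capacity as f64;
    wasted > MAX_WASTED_FRACTION
}

fn main() {
    assert!(should_shrink(10, 100));  // 90% wasted -> shrink
    assert!(!should_shrink(50, 100)); // 50% wasted -> keep as-is
}
```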
mergify[bot]
41bbc11a46 flag end-of-slot when poh bank is gone (backport #23069) (#23174)
* flag end-of-slot when poh bank is gone

(cherry picked from commit 03bf66a51b)

# Conflicts:
#	core/src/banking_stage.rs

* merge fix

Co-authored-by: Tao Zhu <tao@solana.com>
2022-02-16 18:08:03 +00:00
mergify[bot]
68934353f2 Typo fix (#23152) (#23153)
Fixed a typo in the documentation.

(cherry picked from commit bb50259956)

Co-authored-by: Jerry <jerskisnow@protonmail.com>
2022-02-16 10:53:30 -07:00
mergify[bot]
92543a3f92 fix typo in docs (#22690) (#22691)
(cherry picked from commit 2b111cd631)

Co-authored-by: Steve James <0x2t1ff@gmail.com>
2022-02-16 10:50:45 -07:00
mergify[bot]
a514aff819 Update jsonrpc-api.md (#23190) (#23192)
(cherry picked from commit aaf657297f)

Co-authored-by: gagliardetto <gagliardetto@users.noreply.github.com>
2022-02-16 17:24:23 +00:00
mergify[bot]
8d8525e4fc Allow cli users to authorize Staker signed by Withdrawer (#23146) (#23176)
(cherry picked from commit 88b66ae3a8)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-02-16 10:16:55 -07:00
Trent Nelson
2c1cec4e2c validator: invert vote account sanity check arg 2022-02-16 08:31:43 +00:00
Trent Nelson
7d0a0a26bb rpc: genericize client constructors 2022-02-16 08:31:43 +00:00
Trent Nelson
73016d3ed2 rpc: make getGenesisHash part of minimal api 2022-02-16 08:31:43 +00:00
Trent Nelson
9b1cb5c1b7 test-validator: use JsonRpcConfig::default_for_test for consistency 2022-02-16 08:31:43 +00:00
Michael Vines
94f4748a34 Generate full snapshots 4x faster to keep incremental snapshots nice and small
(cherry picked from commit 577fa4ec0c)
2022-02-15 21:22:23 -08:00
Michael Vines
8963724ed6 solana-validator set-identity now supports the --require-tower flag 2022-02-15 21:09:44 -08:00
Tyera Eulberg
65df58c64a Update deprecated methods and recommend getBlocksWithLimit (#23127)
(cherry picked from commit d2a407a9a7)
2022-02-15 18:01:37 -08:00
Michael Vines
380c5da2d0 Add --skip-new-snapshot-check to exit and wait-for-restart-window subcommands
(cherry picked from commit 527f62c744)
2022-02-15 18:01:05 -08:00
mergify[bot]
7d488a6ed8 Remove references to instruction parsing in SPL Token -> Depositing section (#23161) (#23163)
(cherry picked from commit 917113914d)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-02-16 00:12:10 +00:00
mergify[bot]
159cfdae25 solana-validator monitor now reports identity changes (#23156)
(cherry picked from commit b44f40ee3a)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-02-15 22:48:17 +00:00
Michael Vines
1c3d09ed21 solana-validator wait-for-restart-window --min-idle-time X now works 2022-02-15 08:58:53 -08:00
mergify[bot]
2c8cfdb3f3 Add fees to tx-wide caps (backport #22081) (#23095)
* Add fees to tx-wide caps (#22081)

(cherry picked from commit 3d9874b95a)

# Conflicts:
#	runtime/src/bank.rs

* resolve

Co-authored-by: Jack May <jack@solana.com>
2022-02-15 01:36:02 +00:00
mergify[bot]
85570ac207 Add error message for readlink -f failure (#23102) (#23121)
* Add error message for readlink -f failure

(cherry picked from commit 89f5145f64)

Co-authored-by: Will Hickey <will.hickey@solana.com>
2022-02-14 20:23:26 +00:00
mergify[bot]
054b95cbe1 docs: fix broken link for "transaction-id" (#22682) (#22683)
(cherry picked from commit a300e2d2dc)

Co-authored-by: Radu Pașparugă <radupasparuga25@gmail.com>
2022-02-14 22:21:29 +08:00
mergify[bot]
b67a5bb3b9 fix typo (#23107) (#23108)
(cherry picked from commit 22a2a4252a)

Co-authored-by: thepalmtrees <96289385+thepalmtrees@users.noreply.github.com>
2022-02-13 16:15:53 +00:00
mergify[bot]
3e3fb4e296 Introduce slot-specific packet metrics (backport #22906) (#23077)
* Introduce slot-specific packet metrics (#22906)

(cherry picked from commit 2f9e30a1f7)

# Conflicts:
#	core/benches/banking_stage.rs
#	core/src/banking_stage.rs
#	core/src/qos_service.rs

* Resolve conflicts

Co-authored-by: carllin <carl@solana.com>
2022-02-13 05:44:38 +00:00
Michael Vines
f66d8551e9 Update minimum port range due to addition of QUIC port 2022-02-12 08:49:48 -08:00
mergify[bot]
a5cb10666c Bump QUIC_PORT_OFFSET to 6 to avoid jostling around other ports (#23096)
(cherry picked from commit 817f47d970)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-02-12 02:36:04 +00:00
mergify[bot]
76384758d8 adds validator version to set_panic_hook (#23082) (#23088)
(cherry picked from commit 78089941ff)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-02-12 02:11:14 +00:00
mergify[bot]
4eca26ae50 Document message APIs (backport #22873) (#23091)
* Document message APIs (#22873)

* Document message APIs

* Ignore clippy

* Update sdk/program/src/message/mod.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Fix new_with_blockhash example

* Rename nonce_account_address in example

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit f7753ce85f)

# Conflicts:
#	sdk/program/src/message/mod.rs

* Fix conflict

Co-authored-by: Brian Anderson <andersrb@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-02-11 21:09:14 +00:00
Will Hickey
2d144afec5 Bump version to 1.9.6 (#23092) 2022-02-11 15:00:06 -06:00
mergify[bot]
781609b27a uses sendmmsg in streamer (backport #23062) (#23080)
* uses sendmmsg in streamer (#23062)

packet::send_to sends packets one by one:
https://github.com/solana-labs/solana/blob/9213fcb11/streamer/src/packet.rs#L63-L75

sendmmsg uses a single system call for multiple messages:
https://github.com/solana-labs/solana/blob/9213fcb11/streamer/src/sendmmsg.rs#L94

(cherry picked from commit c078ca3fb3)

# Conflicts:
#	streamer/src/streamer.rs

* removes mergify merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-02-11 15:47:45 +00:00
mergify[bot]
5a5244ecf8 Add --full-rpc-api to run.sh (#23072) (#23079)
(cherry picked from commit 34443a238e)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2022-02-11 13:11:30 +00:00
mergify[bot]
2e60f95ab9 mention staking reward in getInflationReward doc (#23073) (#23074)
(cherry picked from commit 4bd6a231d2)

Co-authored-by: Anton <62949848+icepaq@users.noreply.github.com>
2022-02-11 04:21:29 +00:00
Michael Vines
55179524bd Add deactivate-feature feature to test validator cli (#23041) (#23065)
Co-authored-by: Charlie You <charlie.you@hey.com>
2022-02-10 19:18:20 -08:00
mergify[bot]
4a0785ddcd Disable features programmatically in TestValidatorGenesis (#22860) (#23064)
* Supported starting test-validator and disabling features

* Enable starting test validator and removing feature accounts

* Enable deactivating feature accounts

* Enable deactivating feature accounts - updates per PR comments

* Enable deactivating feature accounts - updates per PR comments

* Added more verbosity when the key for deactivation is either not a Feature or not in genesis_config accounts

(cherry picked from commit 3c65fd7ba3)

Co-authored-by: Frank V. Castellucci <5435165+FrankC01@users.noreply.github.com>
2022-02-10 14:58:51 -08:00
mergify[bot]
4698fbc036 Move cap_accounts_data_len feature gate only around new error (#23048) (#23057)
(cherry picked from commit 0a1ab945bc)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-10 20:41:15 +00:00
mergify[bot]
70f76b450e Add sbf-tools version to cargo target cache name on CI agents (#23027)
(cherry picked from commit c7aa7fb66b)

Co-authored-by: Dmitri Makarov <dmakarov@alumni.stanford.edu>
2022-02-09 22:24:04 +00:00
mergify[bot]
d64eebb799 estimate a program cost as 2 standard deviation above mean (backport #22286) (#23019)
* - estimate a program cost as 2 standard deviations above mean
- replaced get_average / get_mode with get_default to assign max units to unknown program

(cherry picked from commit a25ac1c988)

# Conflicts:
#	runtime/src/cost_model.rs

* use EMA in place of Welford

(cherry picked from commit 6587dbfa47)

* 1. Persist to blockstore less frequently;
2. reduce alpha for EMA to 1 percent to have roughly 200 data points for estimation

(cherry picked from commit 7aa1fb4e24)

# Conflicts:
#	core/src/cost_update_service.rs
#	core/src/tvu.rs
#	runtime/src/cost_model.rs

* fix tests after merge

(cherry picked from commit ba2d83f580)

* fix merge

Co-authored-by: Tao Zhu <tao@solana.com>
2022-02-09 22:16:55 +00:00
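The commit above describes two ideas: estimating a program's cost as its mean plus two standard deviations, and later replacing the Welford running statistics with an exponential moving average (EMA) whose alpha of 1 percent the commit message equates to roughly 200 data points. A minimal sketch of just the EMA update, assuming costs are plain u64 compute-unit counts (names are illustrative, not the real cost_model API):

```rust
// Exponential moving average of observed program execution costs.
const ALPHA: f64 = 0.01; // per the commit, ~200 recent observations effectively

#[derive(Default)]
struct ProgramCostEstimate {
    ema: f64,
    initialized: bool,
}

impl ProgramCostEstimate {
    fn record(&mut self, observed_units: u64) {
        let x = observed_units as f64;
        if self.initialized {
            self.ema = ALPHA * x + (1.0 - ALPHA) * self.ema;
        } else {
            self.ema = x;
            self.initialized = true;
        }
    }

    // Returns the smoothed average; the real cost model layers extra margin
    // (e.g. defaults for unknown programs) on top of a statistic like this.
    fn estimate(&self) -> u64 {
        self.ema.round() as u64
    }
}

fn main() {
    let mut est = ProgramCostEstimate::default();
    for units in [1_000u64, 1_200, 900, 1_100] {
        est.record(units);
    }
    println!("estimated units: {}", est.estimate());
}
```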
Michael Vines
71211e0d90 rebase 2022-02-09 10:44:09 -08:00
Michael Vines
320fbd63c5 Prepare RPC subsystem for multiple SPL Token program ids
(cherry picked from commit 86d465c531)

# Conflicts:
#	rpc/src/rpc.rs
#	transaction-status/src/parse_instruction.rs
#	transaction-status/src/token_balances.rs
2022-02-09 10:44:09 -08:00
mergify[bot]
0fe00bab7d Return the accounts data len delta after processing messages (#22986) (#23023)
(cherry picked from commit 869cfc9a1c)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-09 03:06:57 +00:00
Michael Vines
00630d9c1b monitor: Remove getMaxRetransmitSlot RPC method usage
(cherry picked from commit dcd4ea9111)
2022-02-08 11:06:12 -08:00
Jack May
d05b5b0902 Add get_processed_sibling_instruction syscall (#22859) (#22956) 2022-02-08 09:21:11 -08:00
mergify[bot]
5c69af607d Put accounts data len updates behind feature gate (#22918) (#23007)
(cherry picked from commit f0f4042680)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-08 16:45:22 +00:00
mergify[bot]
df16a37ab5 bench should update leader schedule cache (#22991) (#22998)
(cherry picked from commit e52e48076e)

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
2022-02-08 15:05:57 +00:00
mergify[bot]
432eafd730 Search for consecutive ports (#22979) (#22984)
(cherry picked from commit 514aab46d9)

Co-authored-by: sakridge <sakridge@gmail.com>
2022-02-07 18:45:02 +00:00
mergify[bot]
41142a7d76 Optimize batching of transactions during replay for parallel processing (backport #22917) (#22982)
* Optimize batching of transactions during replay for parallel processing

(cherry picked from commit 4de14e530b)

* fix build

(cherry picked from commit dfef68f985)

* updates to address review feedback

(cherry picked from commit c5d8560cdb)

* suppress clippy

(cherry picked from commit a146f2d853)

Co-authored-by: Pankaj Garg <pankaj@solana.com>
2022-02-07 18:25:33 +00:00
mergify[bot]
8047601a7b Fix typo (#22973)
Fix typo

(cherry picked from commit eaf2df99c6)

Co-authored-by: wil-se <sebastiani.1753672@studenti.uniroma1.it>
2022-02-06 16:46:15 +00:00
mergify[bot]
85856a73aa Implement json output for solana ping (backport #22959) (#22968)
* Implement json output for solana ping (#22959)

(cherry picked from commit d2c89213ff)

# Conflicts:
#	cli/src/cluster_query.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-02-05 23:42:33 +00:00
mergify[bot]
c3890ada8e Bumps solana_rbpf to version v0.2.23 (#22954) (#22961)
(cherry picked from commit e05cf4bf97)

Co-authored-by: Alexander Meißner <AlexanderMeissner@gmx.net>
2022-02-05 13:08:29 +00:00
mergify[bot]
ceb253ce90 Bumps solana_rbpf to version v0.2.22 (#22923) (#22955)
* Bumps solana_rbpf to v0.2.22

* Adjusts vm::Config and feature gates.

(cherry picked from commit 96c88d1a5e)

Co-authored-by: Alexander Meißner <AlexanderMeissner@gmx.net>
2022-02-05 11:16:22 +00:00
mergify[bot]
dd6c365bd9 Resolve conflicts (#22905)
Co-authored-by: carllin <carl@solana.com>
2022-02-05 06:47:18 +00:00
mergify[bot]
9ea025315e removes VoteTracker::new in favor of VoteTracker::default (#22941) (#22946)
VoteTracker::new does not need a bank and so is redundant:
https://github.com/solana-labs/solana/blob/5a230f418/core/src/cluster_info_vote_listener.rs#L103-L107
(cherry picked from commit 27aaf9df85)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-02-04 20:59:01 +00:00
mergify[bot]
c43cef79b5 Add quic port for accepting transactions (#22753) (#22937)
using quinn library

streamer: Sign TLS cert with validator identity key

Handle multiple incoming chunks

(cherry picked from commit 5a230f418d)

Co-authored-by: sakridge <sakridge@gmail.com>
2022-02-04 20:53:27 +00:00
mergify[bot]
2605724aa3 Bump bpf-tools to v1.23 (#22929)
(cherry picked from commit a9d9a5095b)

Co-authored-by: Dmitri Makarov <dmakarov@alumni.stanford.edu>
2022-02-04 04:14:25 +00:00
mergify[bot]
539f303eb7 Handle accounts data size changes due to rent-collected accounts (#22412) (#22919)
Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-04 01:13:12 +00:00
Tao Zhu
f7091811d4 Allow buffered packets to be consumed if the bank is active, regardless of leader schedule 2022-02-03 16:56:27 -06:00
Tao Zhu
15ef1827bf push live packets straight to buffer; leader only processes packets from buffer 2022-02-03 16:56:27 -06:00
mergify[bot]
85fef67213 Refactor Rent::due() with RentDue enum (#22346) (#22921)
(cherry picked from commit d90d5ee9b6)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-03 22:15:54 +00:00
mergify[bot]
90a70d9b5b Use lazy_rent_collection directly (#22410) (#22920)
(cherry picked from commit 9bc2592da1)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-03 22:07:28 +00:00
mergify[bot]
643442e830 Reject close of active vote accounts (backport #22651) (#22896)
* Reject close of active vote accounts (#22651)

* 10461 Reject close of vote accounts unless it earned no credits in the previous epoch. This is checked by comparing current epoch (from clock sysvar) with the most recent epoch with credits in vote state.

(cherry picked from commit 75563f6c7b)

# Conflicts:
#	programs/vote/src/vote_processor.rs
#	sdk/src/feature_set.rs

* Resolve merge conflicts

Co-authored-by: Will Hickey <csu_hickey@yahoo.com>
Co-authored-by: Will Hickey <will.hickey@solana.com>
2022-02-03 19:59:07 +00:00
mergify[bot]
69e207ca58 rpc: use minimal mode by default (backport #22734) (#22879)
* rpc: use minimal mode by default

(cherry picked from commit eac4a6df68)

# Conflicts:
#	local-cluster/tests/local_cluster.rs

* test-validator-bin: reinstate full rpc method set

Co-authored-by: Trent Nelson <trent@solana.com>
2022-02-03 08:25:49 +00:00
mergify[bot]
fb8db79e63 adds reverse lookup index to cluster-nodes (#22892) (#22894)
retransmit has to exclude slot leader from set of nodes for each shred;
which currently requires a linear scan:
https://github.com/solana-labs/solana/blob/e3b137066/core/src/cluster_nodes.rs#L238-L242

This commit adds a reverse lookup index to avoid linear scan.

(cherry picked from commit dccbddad80)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-02-02 21:15:51 +00:00
mergify[bot]
237347847b caches WeightedShuffle struct in ClusterNodes (#22877) (#22889)
Instead of reconstructing WeightedShuffle struct for each shred
broadcast or retransmit, we can use the same struct with minimal
mutations.

(cherry picked from commit e3b137066d)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-02-02 16:54:57 +00:00
mergify[bot]
4706790c20 docs-ci: prebuild cli bin with output to appease TravisCI hang check (#22884)
(cherry picked from commit 2fda90e414)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-02-02 09:00:48 +00:00
mergify[bot]
04281734e5 Cleanup serde snapshot common.rs (#22854) (#22863)
Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-02 00:35:50 +00:00
mergify[bot]
a98ca9037d More serde snapshot cleanup (backport #22449) (#22872)
* More serde snapshot cleanup (#22449)

(cherry picked from commit 2756abce39)

# Conflicts:
#	runtime/src/serde_snapshot.rs
#	runtime/src/serde_snapshot/newer.rs

* fixup

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-01 22:25:04 +00:00
mergify[bot]
12e40a40f5 Cleanup serde snapshot's "future" to "newer" (backport #22431) (#22870)
* Refactor serde snapshot's "future" to "newer" (#22431)

(cherry picked from commit 9c3144e286)

# Conflicts:
#	runtime/src/serde_snapshot.rs

* fixup conflicts

* fixup remove unused use

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-02-01 19:03:02 +00:00
mergify[bot]
c715bc93cf removes Rng field from WeightedShuffle struct (#22850) (#22868)
(cherry picked from commit 45e09664b8)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-02-01 17:33:52 +00:00
mergify[bot]
3aa3cd8852 Clean up before credits_auto_rewind (#22839) (#22866)
* Clean up before credits_auto_rewind

* Use `=` instead of `|=` for mutable bool

(cherry picked from commit 545c97f903)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2022-02-01 15:22:33 +00:00
mergify[bot]
f83cb74509 rpc-sts: dedupe before initial send (#22856)
(cherry picked from commit 9f1f7aff2b)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-02-01 01:54:54 +00:00
mergify[bot]
6c47a98945 Small punctuation fix (#22838) (#22849)
(cherry picked from commit 29bf1e2529)

Co-authored-by: Justin Kat <601027+Jkat@users.noreply.github.com>
2022-01-31 18:48:16 +00:00
mergify[bot]
4dfbb4347c includes zero weighted entries in WeightedShuffle (#22829) (#22847)
Current WeightedShuffle implementation excludes zero weighted entries
from the shuffle:
https://github.com/solana-labs/solana/blob/13e631dcf/gossip/src/weighted_shuffle.rs#L29-L30

Though mathematically this might make more sense, for our use-cases
(turbine specifically), this results in less efficient code:
https://github.com/solana-labs/solana/blob/13e631dcf/core/src/cluster_nodes.rs#L409-L430

This commit changes the implementation so that zero weighted indices are
also included in the shuffle but appear only at the end after non-zero
weighted indices.

(cherry picked from commit 604ca9316c)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-01-31 18:16:40 +00:00
Haleem Assal
28fc733894 add 'ticks-per-slot' to 'solana-test-validator' (#22701)
* add 'ticks-per-slot' to 'solana-test-validator'

* add input parser validator for "ticks-per-slot" argument

* fix fmt

(cherry picked from commit 0562426661)
2022-01-28 20:36:35 -08:00
mergify[bot]
93b44d8a4c Add new_from_parent() timings (#22744) (#22806)
(cherry picked from commit 94a5aee484)

Co-authored-by: carllin <carl@solana.com>
2022-01-28 03:10:53 +00:00
Alexander Meißner
2804204f80 Adds TEST_DUPLICATE_PRIVILEGE_ESCALATION_SIGNER and TEST_DUPLICATE_PRIVILEGE_ESCALATION_WRITABLE. (#22790) 2022-01-28 00:52:09 +01:00
Dmitri Makarov
4d891043d1 Update syscall base costs 2022-01-27 13:36:16 -08:00
mergify[bot]
74498650bc Always contact release.solana.com over https (#22795)
(cherry picked from commit bd86459a94)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-27 21:30:37 +00:00
Michael Vines
af3b307734 solana-test-validator now supports the --rpc-pubsub-enable-vote-subscription flag
(cherry picked from commit 75658e2a96)
2022-01-27 11:19:26 -08:00
Michael Vines
2368e09d89 Add vote account address to vote subscription
(cherry picked from commit 331b953551)

# Conflicts:
#	core/src/cluster_info_vote_listener.rs
#	rpc/src/rpc_pubsub.rs
#	rpc/src/rpc_subscriptions.rs
2022-01-27 11:19:26 -08:00
mergify[bot]
6fca541847 Restrict the Mergify copy command to core contributors (#22792)
(cherry picked from commit c0638439be)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-27 17:10:07 +00:00
mergify[bot]
15e9cedc0d test_ed25519 fails if we randomly select index 1 (#22780)
(cherry picked from commit c1b543c74d)

Co-authored-by: Sean Young <sean@mess.org>
2022-01-27 12:50:01 +00:00
mergify[bot]
d68a40396c Improve poh recorder metrics (#22730) (#22764)
* Improve poh recorder metrics

* Add metric for poh service send record

* feedback

* clean up

(cherry picked from commit 115b488807)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-27 08:56:41 +00:00
mergify[bot]
b0e0410003 Set the correct root in block commitment cache initialization (#22750) (#22757)
* Set the correct root in block commitment cache initialization

* clean up test

* bump

(cherry picked from commit d9c259a231)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-27 03:44:59 +00:00
mergify[bot]
d1174f677e Perf: Reduce write locks on blockhash queue (#22729) (#22751)
* Perf: Reduce write locks on blockhash queue

* Add comment about thread safety

* Add comment about write starvation

(cherry picked from commit 071e97053f)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-26 10:03:50 +00:00
mergify[bot]
cf88542254 Update vote-signing.md to remove references to anachronistic behavior (#22742)
(cherry picked from commit 8b1cde83c1)

Co-authored-by: Bryan Ischo <bryan@ischo.com>
2022-01-25 23:53:46 +00:00
mergify[bot]
99c55dbec3 Export BanksClientError (#22715) (#22732)
(cherry picked from commit f366e0f890)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2022-01-25 19:04:45 +00:00
Jon Cinque
bc412d51d6 Fix stable-bpf job by referencing Signature directly (#22721) 2022-01-25 02:41:36 +00:00
mergify[bot]
87c3e71bb8 spl-associated-token-account: Add feature for new program (#22648) (#22719)
* spl-associated-token-account: Add feature for new program

* Address feedback

(cherry picked from commit fc21af4e6e)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2022-01-24 19:20:30 -07:00
mergify[bot]
d0cf5bb721 Bump thread_local (#22711) (#22714)
(cherry picked from commit 1c10677f82)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2022-01-24 12:10:10 -07:00
mergify[bot]
fb54991901 Fix typos (#22700)
Fix typos

(cherry picked from commit fd0f5e4d12)

Co-authored-by: tanliwei <tanliwei@users.noreply.github.com>
2022-01-24 03:23:40 +00:00
mergify[bot]
9995a54be7 fix: flag was incorrect in doc (#22698)
(cherry picked from commit 714a344937)

Co-authored-by: Arash <arash@backbone.link>
2022-01-23 21:24:58 +00:00
mergify[bot]
d9a5f714e1 Refactor: Rename variables and helper method to PohRecorder (#22676) (#22688)
* Refactor: Rename leader_first_tick_height field

* Refactor: add `PohRecorder::slot_for_tick_height` helper

* Refactor: Add type for poh leader status

(cherry picked from commit 1240217a73)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-23 15:34:44 +08:00
Anatoly Yakovenko
620a80b581 sigverify -- dedupe bloom filter too slow followups 2022-01-21 23:59:41 -07:00
mergify[bot]
b354dae249 Perf: Only check executors cache for executable bpf program ids (backport #22624) (#22629)
* Perf: Only check executors cache for executable bpf program ids (#22624)

* Only check executors cache for executable bpf program ids

* switch to native loader check

* clean up tests

* fix tests

* clippy

(cherry picked from commit 7d34a7acac)

# Conflicts:
#	runtime/src/bank.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-22 14:08:15 +08:00
mergify[bot]
af7ed83285 Document transaction module (backport #22440) (#22664)
* Document transaction module (#22440)

* Document transaction module

* example_mocks is only for feature = full

(cherry picked from commit 8dd62854fa)

# Conflicts:
#	sdk/src/transaction/mod.rs

* Fix conflicts

Co-authored-by: Brian Anderson <andersrb@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-01-21 22:58:44 -07:00
Trent Nelson
8bc4cc90d2 Bump version to 1.9.6 2022-01-21 20:15:43 -07:00
Tyera Eulberg
39a4cc95dc v1.9: Impl get_/set_return_data syscalls for ProgramTest (#22652)
* Remove &mut self from set_return_data

* Impl get_/set_return_data for program-test SyscallStubs

* Add return_data program-test
2022-01-21 18:03:27 -07:00
mergify[bot]
187ed6a387 Remove unused fields from Bank (backport #22491) (#22630)
* Remove unused fields from Bank (#22491)

(cherry picked from commit 9977396d8f)

# Conflicts:
#	runtime/src/serde_snapshot/future.rs

* fixup the backport

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-01-22 00:13:32 +00:00
mergify[bot]
91bc44931f Add hidden cli option to allow validator reports replayed transaction cost metrics (backport #22369) (#22519)
* Add hidden cli option to allow validator reports replayed transaction cost metrics (#22369)

* add hidden cli option to allow validator reports replayed transaction cost detail metrics

* Update validator/src/main.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

* - rebase master, using unbounded instead of channel; downgrade to datapoint_trace

* removed cli arg, prefer log at trace

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit a724fa2347)

# Conflicts:
#	core/src/tvu.rs

* fix conflict

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
2022-01-21 15:05:41 -07:00
mergify[bot]
35ca3182ba Add estimated and actual block cost units metrics (backport #22326) (#22517)
* Add estimated and actual block cost units metrics (#22326)

* - report cost details for transactions selected to be packed into block;
- report estimated execution units packed into block, and actual units and time after execution

* revert reporting per-transaction details

* rollup transaction cost details (e.g. signature cost, write lock, data cost and execution costs) into block stats

* change naming from units to cu, use struct to replace tuple

(cherry picked from commit 1309a9cea0)

# Conflicts:
#	core/src/banking_stage.rs
#	core/src/qos_service.rs

* fix conflicts

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
2022-01-21 15:05:19 -07:00
mergify[bot]
24345d8e63 Update introduction.md (#22623) (#22625)
A few fixes for grammatical and spelling issues.

(cherry picked from commit 373f200ab8)

Co-authored-by: filip <44206832+filipkujawa@users.noreply.github.com>
2022-01-21 13:31:47 -07:00
anatoly yakovenko
bf45f5b88e Faster dedup v1.9 (#22638)
Faster dedup port of #22607
2022-01-21 11:21:28 -08:00
mergify[bot]
2ddb5b27c1 Refactor: move instructions sysvar serialization out of Message (#22544) (#22595)
(cherry picked from commit 7ba57e7a7c)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-21 13:45:47 +08:00
mergify[bot]
7f10fd6a21 Refactor: move compute budget runtime logic into solana-program-runtime (backport #22543) (#22545)
* Refactor: move compute budget runtime logic into solana-program-runtime (#22543)

(cherry picked from commit cc76a73c49)

# Conflicts:
#	programs/bpf/tests/programs.rs
#	sdk/src/compute_budget.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-21 13:45:01 +08:00
mergify[bot]
a0a881594a Speed up packet dedup and fix benches (#22592) (#22613)
* Speed up packet dedup and fix benches

* fix tests

* allow int arithmetic in bench

(cherry picked from commit a2d251ce1e)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-20 22:51:58 +00:00
mergify[bot]
e9e35fd7bd system-monitor-service: support percentages from bigger numbers (#22598)
(cherry picked from commit cca3dbc76d)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-01-20 11:37:52 +00:00
Trent Nelson
66b94b86a9 banking-stage: remove unused stats fields 2022-01-20 06:09:10 +00:00
mergify[bot]
59f406d78a Refactor: move simple vote parsing to runtime (backport #22537) (#22587)
* Refactor: move simple vote parsing to runtime (#22537)

(cherry picked from commit 7f20c6149e)

# Conflicts:
#	core/src/cluster_info_vote_listener.rs
#	core/src/verified_vote_packets.rs
#	programs/vote/src/vote_transaction.rs
#	rpc/src/rpc_subscriptions.rs
#	runtime/src/bank.rs
#	runtime/src/bank_utils.rs
#	runtime/src/vote_sender_types.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-20 04:51:50 +00:00
mergify[bot]
dbf9a32883 Optimize packet dedup (#22571) (#22585)
* Use bloom filter to dedup packets

* dedup first

* Update bloom/src/bloom.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/sigverify_stage.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/sigverify_stage.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update core/src/sigverify_stage.rs

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* fixup

* fixup

* fixup

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit d343713f61)

# Conflicts:
#	Cargo.lock
#	core/Cargo.toml
#	core/src/banking_stage.rs
#	core/src/sigverify_stage.rs
#	gossip/Cargo.toml
#	perf/Cargo.toml
#	programs/bpf/Cargo.lock
#	runtime/Cargo.toml

Co-authored-by: anatoly yakovenko <anatoly@solana.com>
2022-01-20 02:51:49 +00:00
mergify[bot]
37e9076db0 Add PacketBatch packet_indexes stat (#22564) (#22575)
* collect stats on packet batch indices

* cleanup

* cleanup

* cleanup

* change name

(cherry picked from commit 650882217c)

# Conflicts:
#	core/src/banking_stage.rs

Co-authored-by: buffalu <85544055+buffalu@users.noreply.github.com>
2022-01-20 00:05:05 +00:00
mergify[bot]
f77ea5f324 improves sigverify discard_excess_packets performance (backport #22577) (#22580)
* improves sigverify discard_excess_packets performance (#22577)

As shown by the added benchmark, current code does worse if there is a
spam address plus a lot of unique addresses.

on current master:
test bench_packet_discard_many_senders  ... bench:   1,997,960 ns/iter (+/- 103,715)
test bench_packet_discard_mixed_senders ... bench:  14,256,116 ns/iter (+/- 534,865)
test bench_packet_discard_single_sender ... bench:   1,306,809 ns/iter (+/- 61,992)

with this commit:
test bench_packet_discard_many_senders  ... bench:   1,644,025 ns/iter (+/- 83,715)
test bench_packet_discard_mixed_senders ... bench:   1,089,789 ns/iter (+/- 86,324)
test bench_packet_discard_single_sender ... bench:     955,234 ns/iter (+/- 55,953)

(cherry picked from commit dcf44d2523)

# Conflicts:
#	core/src/sigverify_stage.rs

* removes mergify merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-01-19 20:14:10 +00:00
mergify[bot]
c9df037dae Track discard time of excess packets in sigverify (#22554) (#22570)
* discard time histogram

* closer to the if

* update

(cherry picked from commit e616a7ebfc)

Co-authored-by: anatoly yakovenko <anatoly@solana.com>
2022-01-19 01:52:30 +00:00
mergify[bot]
2b87d99479 Use VecDeque instead of Vec in sigverify stage (#22538) (#22550)
avoid bad performance of remove(0) for a single sender

(cherry picked from commit 49443406fd)

# Conflicts:
#	core/src/sigverify_stage.rs

Co-authored-by: sakridge <sakridge@gmail.com>
2022-01-19 01:46:34 +00:00
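The rationale in the entry above is simply that `Vec::remove(0)` shifts every remaining element (O(n) per pop), whereas `VecDeque::pop_front()` is O(1). A tiny, hedged illustration (not the actual sigverify-stage code):

```rust
use std::collections::VecDeque;

fn main() {
    // Batches arrive at the back and are drained from the front.
    let mut batches: VecDeque<Vec<u8>> = VecDeque::new();
    for i in 0..5u8 {
        batches.push_back(vec![i; 8]);
    }
    // pop_front is O(1); the equivalent Vec::remove(0) would shift every
    // remaining batch on each iteration.
    while let Some(batch) = batches.pop_front() {
        let _ = batch.len(); // stand-in for "verify this batch"
    }
}
```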
mergify[bot]
2546ef4ad6 metrics for generate new bank forks (#22492) (#22548)
* metrics for generate new bank forks

* fixed

* Apply suggestions from code review

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* --fixup

* fixup!

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit 2d94e6e5d3)

# Conflicts:
#	core/src/replay_stage.rs

Co-authored-by: anatoly yakovenko <anatoly@solana.com>
2022-01-19 01:45:25 +00:00
mergify[bot]
96ae795758 Add more details about vote account key rotation (#22539)
(cherry picked from commit 901b2881fb)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-17 09:16:25 +00:00
Michael Vines
9bddb4e437 vote account withdraw authority may change the authorized voter 2022-01-15 23:46:10 -08:00
mergify[bot]
4079f12a3e Perf: Store deserialized sysvars in the sysvars cache (backport #22455) (#22480)
* Perf: Store deserialized sysvars in the sysvars cache (#22455)

* Perf: Store deserialized sysvars in sysvars cache

* add bench

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-15 08:48:34 +00:00
mergify[bot]
e121b94524 Bugfix/block subscribe (#22516) (#22525)
* use correct operation name

* require enable_rpc_transaction_history flag when enabling block_subscription

Co-authored-by: Zano <segfaultdoctor@protonmail.com>
(cherry picked from commit 7171b3a3ac)

Co-authored-by: segfaultdoctor <zano@jito.wtf>
2022-01-15 05:05:21 +00:00
mergify[bot]
a7623ad18c Fetch sysvars from invoke context for vote program (backport #22444) (#22469)
* Fetch sysvars from invoke context for vote program (#22444)

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-15 03:56:00 +00:00
Tao Zhu
054e475c6c ABI changed - added two more vote_cost related fields to cost_tracker 2022-01-14 10:49:43 -06:00
Tao Zhu
7a421fe602 Port counting vote CUs to block cost to v1.9 2022-01-14 10:49:43 -06:00
mergify[bot]
2ef0b85829 docs: fix get fee for message docs (#22501) (#22504)
(cherry picked from commit f12a8fcd73)

Co-authored-by: Yihau Chen <a122092487@gmail.com>
2022-01-14 09:00:22 +00:00
mergify[bot]
a6b7a3b7ff Refactor: move sysvar cache to new module (backport #22448) (#22461)
* Refactor: move sysvar cache to new module

(cherry picked from commit 7171c95bdd)

# Conflicts:
#	Cargo.lock
#	program-runtime/Cargo.toml
#	program-runtime/src/invoke_context.rs
#	programs/bpf/Cargo.lock
#	programs/bpf_loader/src/syscalls.rs
#	programs/stake/src/stake_instruction.rs
#	programs/vote/src/vote_instruction.rs
#	runtime/src/message_processor.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-14 08:43:26 +00:00
mergify[bot]
9d69f2b324 Bank::get_fee_for_message is now nonce aware (backport #22494) (#22499)
* `Bank::get_fee_for_message` is now nonce aware

(cherry picked from commit 4c577d7f8c)

# Conflicts:
#	runtime/src/bank.rs
#	sdk/program/src/message/sanitized.rs

* Resolve conflicts

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-14 03:25:10 +00:00
mergify[bot]
4f82a4ba1f log internals (#22493) (#22497)
(cherry picked from commit eca8d21249)

Co-authored-by: carllin <carl@solana.com>
2022-01-14 02:33:36 +00:00
mergify[bot]
ed0b30efcc nit: Traceable balance checks (#22462) (#22489)
(cherry picked from commit 1632ee03da)

Co-authored-by: Jack May <jack@solana.com>
2022-01-13 19:09:00 +00:00
mergify[bot]
4ee6bc9a93 downgrade individual per-program-timing to trace to reduce writes to influx (#22471)
(cherry picked from commit 6614727be8)

Co-authored-by: Tao Zhu <tao@solana.com>
2022-01-13 02:37:47 +00:00
Lijun Wang
676c43b9d2 Fixed a merge issue (#22464)
Removed AddressLookupError
2022-01-12 13:56:39 -08:00
mergify[bot]
b1d8296498 Update docs vis-a-vis prohibition of RentPaying accounts (#22438) (#22458)
* Rent-exempt docs for exchange integrations

* Remove discussion of rent-paying accounts from developing docs

* Improve verbiage

(cherry picked from commit b27333e52d)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2022-01-12 19:58:05 +00:00
Tyera Eulberg
34984ed16e v1.9: Only examine explicit tx accounts for rent state (#22442)
* Add failing test

* Fix: only examine accounts explicitly included in a tx
2022-01-11 20:55:10 -07:00
mergify[bot]
f4d1577337 Refactor: consolidate memo extraction for each message version (#22422) (#22435)
(cherry picked from commit 35a5dd9c45)

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-12 00:23:03 +00:00
mergify[bot]
58dcc451a9 Prevent rent-paying account creation (backport #22292) (#22428)
* Prevent rent-paying account creation (#22292)

* Fixup typo

* Add new feature

* Add new TransactionError

* Add framework for checking account state before and after transaction processing

* Fail transactions that leave new rent-paying accounts

* Only check rent-state of writable tx accounts

* Review comments: combine process_result success behavior; log and metrics before feature activation

* Fix tests that assume rent-exempt accounts are okay

* Remove test no longer relevant

* Remove native/sysvar special case

* Move metrics submission to report legacy->legacy rent paying transitions as well

(cherry picked from commit 637e366b18)

# Conflicts:
#	runtime/src/bank.rs
#	runtime/src/lib.rs

* Fix conflicts and rework for TransactionRefCells

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2022-01-11 23:17:03 +00:00
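The bullet points above outline the mechanism: classify each writable account's rent state before and after the message is processed, and fail the transaction if it produced a new rent-paying account. The sketch below captures only that transition rule, using lamport balance against a given rent-exempt minimum; the enum and function names are modeled loosely on the description, not copied from runtime/src/bank.rs.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum RentState {
    Uninitialized, // zero lamports
    RentPaying,    // non-zero lamports, below the rent-exempt minimum
    RentExempt,    // at or above the rent-exempt minimum
}

fn rent_state(lamports: u64, rent_exempt_minimum: u64) -> RentState {
    if lamports == 0 {
        RentState::Uninitialized
    } else if lamports < rent_exempt_minimum {
        RentState::RentPaying
    } else {
        RentState::RentExempt
    }
}

// An account may end up rent-paying only if it was already rent-paying before
// (legacy accounts are tolerated); transactions must not create new ones.
fn transition_allowed(pre: RentState, post: RentState) -> bool {
    match post {
        RentState::RentPaying => pre == RentState::RentPaying,
        _ => true,
    }
}

fn main() {
    let minimum = 890_880; // illustrative rent-exempt minimum, not a real constant
    let pre = rent_state(1_000_000, minimum);
    let post = rent_state(10_000, minimum);
    assert!(!transition_allowed(pre, post)); // became rent-paying: reject
}
```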
mergify[bot]
f0695ef6d9 optimizes ReadOnlyAccountsCache LRU eviction implementation (backport #22403) (#22426)
* optimizes ReadOnlyAccountsCache LRU eviction implementation (#22403)

ReadOnlyAccountsCache is using a background thread, table scan and sort
to implement LRU eviction policy:
https://github.com/solana-labs/solana/blob/eaa52bc93/runtime/src/read_only_accounts_cache.rs#L66-L73
https://github.com/solana-labs/solana/blob/eaa52bc93/runtime/src/read_only_accounts_cache.rs#L186-L191
https://github.com/solana-labs/solana/blob/eaa52bc93/runtime/src/read_only_accounts_cache.rs#L222

DashMap internally locks each shard when accessed; so a table scan in
the background thread can create a lot of lock contention.

This commit adds an index-list queue containing cached keys in the order
that they are accessed. Each hash-map entry also includes its index into
this queue.
When an item is first entered into the cache, it is added to the end of
the queue. Also each time an entry is looked up from the cache it is
moved to the end of queue. As a result, items in the queue are always
sorted in the order that they have last been accessed. When doing LRU
eviction, cache entries are evicted from the front of the queue.
Using index-list, all queue operations above are O(1) with low overhead
and so above achieves an efficient implementation of LRU cache eviction
policy.

(cherry picked from commit a49ef49f87)

# Conflicts:
#	Cargo.lock
#	programs/bpf/Cargo.lock
#	runtime/Cargo.toml
#	runtime/src/accounts_db.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-01-11 19:09:57 +00:00
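To make the queue-plus-index idea above concrete, here is a self-contained, hedged sketch of an O(1) LRU cache along the lines the commit describes. The real ReadOnlyAccountsCache uses DashMap and the index_list crate; the slab-backed doubly linked queue below is only an illustration of why storing each entry's queue position makes lookup-refresh and front-eviction constant time.

```rust
use std::collections::HashMap;
use std::hash::Hash;

struct Node<K> {
    key: K,
    prev: Option<usize>,
    next: Option<usize>,
}

struct LruCache<K, V> {
    map: HashMap<K, (V, usize)>, // key -> (value, index of its queue node)
    nodes: Vec<Node<K>>,         // slab backing the doubly linked queue
    free: Vec<usize>,            // recycled node slots
    head: Option<usize>,         // least recently used
    tail: Option<usize>,         // most recently used
    capacity: usize,
}

impl<K: Eq + Hash + Clone, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { map: HashMap::new(), nodes: Vec::new(), free: Vec::new(), head: None, tail: None, capacity }
    }

    // Detach node `i` from the queue without freeing its slot.
    fn unlink(&mut self, i: usize) {
        let (prev, next) = (self.nodes[i].prev, self.nodes[i].next);
        match prev { Some(p) => self.nodes[p].next = next, None => self.head = next }
        match next { Some(n) => self.nodes[n].prev = prev, None => self.tail = prev }
    }

    // Append node `i` as the most recently used entry.
    fn push_back(&mut self, i: usize) {
        self.nodes[i].prev = self.tail;
        self.nodes[i].next = None;
        match self.tail { Some(t) => self.nodes[t].next = Some(i), None => self.head = Some(i) }
        self.tail = Some(i);
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        let &(_, i) = self.map.get(key)?;
        self.unlink(i);    // a lookup refreshes the entry's position:
        self.push_back(i); // it becomes the most recently used
        self.map.get(key).map(|(v, _)| v)
    }

    fn put(&mut self, key: K, value: V) {
        if let Some(&(_, i)) = self.map.get(&key) {
            self.map.insert(key, (value, i));
            self.unlink(i);
            self.push_back(i);
            return;
        }
        if self.map.len() >= self.capacity {
            if let Some(lru) = self.head {
                self.unlink(lru); // evict from the front of the queue
                self.free.push(lru);
                let evicted = self.nodes[lru].key.clone();
                self.map.remove(&evicted);
            }
        }
        let node = Node { key: key.clone(), prev: None, next: None };
        let i = match self.free.pop() {
            Some(slot) => { self.nodes[slot] = node; slot }
            None => { self.nodes.push(node); self.nodes.len() - 1 }
        };
        self.push_back(i);
        self.map.insert(key, (value, i));
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.put("a", 1);
    cache.put("b", 2);
    cache.get(&"a");   // "b" is now the least recently used entry
    cache.put("c", 3); // evicts "b"
    assert!(cache.get(&"b").is_none());
    assert_eq!(cache.get(&"a"), Some(&1));
}
```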
mergify[bot]
41b0d6cca3 limits gossip vote stats to the top most voted slots (#22416) (#22418)
(cherry picked from commit 49da347d84)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-01-10 23:14:57 +00:00
mergify[bot]
ae77a52c97 wrap create executor timings datapoint in a module (#22398)
(cherry picked from commit 428575f9ae)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-01-09 06:41:44 +00:00
mergify[bot]
133314e58c bank: fix executor cache metrics (#22396)
(cherry picked from commit 3b4aad9df1)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-01-09 06:06:35 +00:00
Trent Nelson
cb49ae21b4 Bump version to v1.9.5 2022-01-08 21:17:51 +00:00
mergify[bot]
a9ebba5643 Clarify docs of minimum_balance (#22385) (#22387)
(cherry picked from commit 0f94e1d3a2)

Co-authored-by: Evan Conrad <evan@roomservice.dev>
2022-01-08 20:07:57 +00:00
mergify[bot]
8ce65878da improve multi executor cache addition (#22382)
Co-authored-by: Jack May <jack@solana.com>
2022-01-08 13:03:46 +00:00
Trent Nelson
a4ca18a54d add executor creation trace timings 2022-01-08 05:25:37 -07:00
mergify[bot]
7cb147fdcd Executor cache count primer (backport #22333) (#22375)
* bank: prime new executor cache entry use-counts

(cherry picked from commit 4ce48307bb)

* --amend

(cherry picked from commit ad3cb0bc93)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-01-08 11:01:34 +00:00
mergify[bot]
2d693be9fa remove per program timings from blockstore processor ledger replay (#22370) (#22372)
(cherry picked from commit 813006b33b)

Co-authored-by: carllin <carl@solana.com>
2022-01-08 08:43:48 +00:00
mergify[bot]
50e716fc80 bank: Add executors cache metrics (#22368)
(cherry picked from commit 6d76db1de5)

Co-authored-by: Trent Nelson <trent@solana.com>
2022-01-08 01:34:53 +00:00
Justin Starry
1f00926874 Add runtime support for address table lookups (backport #22223) (#22354) 2022-01-08 07:57:04 +08:00
mergify[bot]
662c6be51e removes CowCachedExecutors (#22343) (#22363)
Copy-on-write semantics for cached executors can be implemented by a
simple Arc<CachedExecutors> as opposed to CowCachedExecutors:
https://github.com/solana-labs/solana/blob/f1e2598ba/runtime/src/bank.rs#L244-L247

This will also avoid the need for double locking as in:
https://github.com/solana-labs/solana/blob/f1e2598ba/runtime/src/bank.rs#L3490-L3491
https://github.com/solana-labs/solana/blob/f1e2598ba/runtime/src/bank.rs#L3525-L3526

(cherry picked from commit c2389fc209)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2022-01-07 16:04:13 +00:00
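The note above argues that a plain Arc<CachedExecutors> already gives copy-on-write semantics, with no wrapper type and no double locking. A hedged illustration of that pattern using `Arc::make_mut` (the types here are simplified stand-ins, not the real CachedExecutors):

```rust
use std::{collections::HashMap, sync::Arc};

// Simplified stand-in for the real cache; it only needs to be Clone.
#[derive(Clone, Default)]
struct CachedExecutors {
    use_counts: HashMap<String, u64>, // program id -> use count
}

struct Bank {
    cached_executors: Arc<CachedExecutors>,
}

impl Bank {
    // Readers share the cache by bumping the Arc's reference count.
    fn snapshot(&self) -> Arc<CachedExecutors> {
        Arc::clone(&self.cached_executors)
    }

    // Writers get copy-on-write for free: Arc::make_mut clones the inner
    // value only if another Arc still points at it, then hands back &mut.
    fn record_use(&mut self, program_id: &str) {
        let cache = Arc::make_mut(&mut self.cached_executors);
        *cache.use_counts.entry(program_id.to_string()).or_insert(0) += 1;
    }
}

fn main() {
    let mut bank = Bank { cached_executors: Arc::new(CachedExecutors::default()) };
    let reader = bank.snapshot(); // shares the same allocation
    bank.record_use("program_a"); // clones on write; the reader is unaffected
    assert_eq!(reader.use_counts.len(), 0);
    assert_eq!(bank.cached_executors.use_counts["program_a"], 1);
}
```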
mergify[bot]
9761f5b67f Add aarch64-apple-darwin publish tarball step (#22356)
(cherry picked from commit e2aa932e97)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-07 10:17:11 +00:00
mergify[bot]
7b1da62763 Add execute metrics (backport #22296) (#22335)
* move `ExecuteTimings` from `runtime::bank` to `program_runtime::timings`

(cherry picked from commit 7d32909e17)

# Conflicts:
#	core/Cargo.toml
#	ledger/Cargo.toml
#	programs/bpf/Cargo.lock

* Add execute metrics

(cherry picked from commit b25e4a200b)

* Add metrics for executor creation

(cherry picked from commit 848b6dfbdd)

* Add helper macro for `AddAssign`ing with saturating arithmetic

(cherry picked from commit deb9344e49)

* Use saturating_add_assign macro

(cherry picked from commit 72fc6096a0)

* Consolidate process instruction execution timings to own struct

(cherry picked from commit 390ef0fbcd)

Co-authored-by: Trent Nelson <trent@solana.com>
Co-authored-by: Carl Lin <carl@solana.com>
2022-01-07 09:11:18 +00:00
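One of the cherry-picked commits above adds "a helper macro for AddAssign-ing with saturating arithmetic". A plausible minimal form of such a macro (a sketch, not copied from the solana-program-runtime source):

```rust
// Add with saturation instead of `+=`, so accumulated timing counters can
// never panic or wrap on overflow.
macro_rules! saturating_add_assign {
    ($lhs:expr, $rhs:expr) => {
        $lhs = $lhs.saturating_add($rhs)
    };
}

fn main() {
    let mut accumulated_us: u64 = u64::MAX - 1;
    saturating_add_assign!(accumulated_us, 5);
    assert_eq!(accumulated_us, u64::MAX); // clamped instead of overflowing
}
```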
mergify[bot]
2f97fee71a Cleanup ledger-tool analyze-storage command (#22310) (#22352)
* Make ledger-tool analyze-storage use Blockstore::open()

Opening a large ledger may require setting a larger open file descriptor
limit. Blockstore::open() does this whereas the underlying Database
object that analyze-storage was opening does not.

* Move key_size call lookup to take advantage of traits

* Fix typo where analyze worked on wrong column

* Make analyze-storage analyze all columns

(cherry picked from commit 9f1f64e384)

Co-authored-by: steviez <steven@solana.com>
2022-01-07 07:47:27 +00:00
Justin Starry
3ae674dd28 Increase timeout of local-cluster-slow CI step 2022-01-07 15:31:10 +08:00
mergify[bot]
8214bc9db4 Retain executor cache counts (#22322) (#22341)
(cherry picked from commit f1e2598baa)

Co-authored-by: Jack May <jack@solana.com>
2022-01-06 19:00:29 +00:00
mergify[bot]
1132def37c Split up local cluster tests into separate CI steps (backport #22295) (#22303)
* Split up local cluster tests into separate CI steps (#22295)

* Split up local cluster tests into separate CI steps

* Update buildkite-pipeline.sh

(cherry picked from commit 0e1afcbb26)

# Conflicts:
#	local-cluster/tests/local_cluster.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2022-01-06 17:02:45 +00:00
mergify[bot]
7267ebaaf2 Consume from AccountsDataMeter (backport #21994) (#22323)
* Consume from AccountsDataMeter (#21994)

(cherry picked from commit 1460f00e0f)

# Conflicts:
#	program-runtime/src/invoke_context.rs

* fixup! conflicts

* fix tests for v1.9

* fixup! clippy

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-01-06 17:01:02 +00:00
mergify[bot]
4be6e52a4f cache executors on failed transactions (backport #22308) (#22328)
* cache executors on failed transactions (#22308)

(cherry picked from commit 12e160269e)

# Conflicts:
#	program-runtime/src/invoke_context.rs
#	runtime/src/bank.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2022-01-06 09:14:48 +00:00
mergify[bot]
e7348243b4 [ledger-tool]compare_blocks (#22229) (#22330)
* 1. Made load_credentials accept the credential path as a parameter. 2. Partially implemented the bigtable comparison function

* finding missing blocks in bigtables in a specified range

* refactor compare-blocks, add unit test for missing_blocks and fmt

* compare-block fix last block bug

* refactor compare-block and improve wording

* Update ledger-tool/src/bigtable.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* update compare-block command-line description

* style:improve wording/naming/code style

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit d9220652ad)

Co-authored-by: pieceofr <komimi.p@gmail.com>
2022-01-06 08:55:26 +00:00
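The entry above mentions a unit-tested helper for "finding missing blocks in bigtables in a specified range". The core of such a check is just a set difference over slot numbers; a hedged sketch (the real ledger-tool helper may differ in signature and types):

```rust
use std::collections::HashSet;

type Slot = u64;

// Return every slot in `reference` that is absent from `owned`, i.e. blocks
// present in the reference Bigtable but missing from the one being checked.
fn missing_blocks(reference: &[Slot], owned: &[Slot]) -> Vec<Slot> {
    let owned: HashSet<Slot> = owned.iter().copied().collect();
    reference
        .iter()
        .copied()
        .filter(|slot| !owned.contains(slot))
        .collect()
}

fn main() {
    let reference = [10, 11, 12, 13, 14];
    let owned = [10, 12, 14];
    assert_eq!(missing_blocks(&reference, &owned), vec![11, 13]);
}
```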
mergify[bot]
fc0c74d722 Only sum accounts data len from non-zero lamport accounts (#22309) (#22317)
(cherry picked from commit ab13e39518)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-01-06 02:43:54 +00:00
mergify[bot]
687cd4779e Add AccountsDataMeter to InvokeContext (#21813) (#22299)
(cherry picked from commit 800472ddf5)

Co-authored-by: Brooks Prumo <brooks@solana.com>
2022-01-06 01:31:11 +00:00
mergify[bot]
b28d7050ab Update default --dynamic-port-range values to include some room for additional ports that may be added in the future (#22321)
(cherry picked from commit 37ebd9bd9e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-01-06 01:29:06 +00:00
Michael Vines
6d72acfd6d --dynamic-port-range now requires at least 12 ports 2022-01-05 16:12:28 -08:00
Brooks Prumo
840ec0686e Fix broken build from bpf/tests/programs.rs (#22312)
These tests were broken due to PR #22289
2022-01-05 15:06:15 -06:00
Will Hickey
ba0188a36d Bump version to 1.9.4 (#22304) 2022-01-05 12:02:36 -06:00
mergify[bot]
05b9a2f203 fix(rpc): recreate dead and uncleaned subscriptions (#22281) (#22294)
(cherry picked from commit c1995c647b)

Co-authored-by: Nikita <bananaelecitrus@gmail.com>
2022-01-05 17:16:12 +00:00
395 changed files with 23583 additions and 13728 deletions

View File

@@ -12,7 +12,8 @@ export PS4="++"
 # Restore target/ from the previous CI build on this machine
 #
 eval "$(ci/channel-info.sh)"
-export CARGO_TARGET_CACHE=$HOME/cargo-target-cache/"$CHANNEL"-"$BUILDKITE_LABEL"
+eval "$(ci/sbf-tools-info.sh)"
+export CARGO_TARGET_CACHE=$HOME/cargo-target-cache/"$CHANNEL"-"$BUILDKITE_LABEL"-"$SBF_TOOLS_VERSION"
 (
   set -x
   MAX_CACHE_SIZE=18 # gigabytes

View File

@@ -113,3 +113,10 @@ pull_request_rules:
         ignore_conflicts: true
         branches:
           - v1.9
+
+commands_restrictions:
+  # The author of copied PRs is the Mergify user.
+  # Restrict `copy` access to Core Contributors
+  copy:
+    conditions:
+      - author=@core-contributors

776  Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -2,7 +2,6 @@
 members = [
     "accountsdb-plugin-interface",
     "accountsdb-plugin-manager",
-    "accountsdb-plugin-postgres",
     "accounts-cluster-bench",
     "bench-streamer",
     "bench-tps",
@@ -12,6 +11,7 @@ members = [
"banks-interface",
"banks-server",
"bucket_map",
"bloom",
"clap-utils",
"cli-config",
"cli-output",
@@ -48,6 +48,7 @@ members = [
"program-test",
"programs/address-lookup-table",
"programs/address-lookup-table-tests",
"programs/ed25519-tests",
"programs/bpf_loader",
"programs/bpf_loader/gen-syscall-list",
"programs/compute-budget",

View File

@@ -1,6 +1,6 @@
 [package]
 name = "solana-account-decoder"
-version = "1.9.3"
+version = "1.9.8"
 description = "Solana account decoder"
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 repository = "https://github.com/solana-labs/solana"
@@ -19,9 +19,9 @@ lazy_static = "1.4.0"
 serde = "1.0.130"
 serde_derive = "1.0.103"
 serde_json = "1.0.72"
-solana-config-program = { path = "../programs/config", version = "=1.9.3" }
-solana-sdk = { path = "../sdk", version = "=1.9.3" }
-solana-vote-program = { path = "../programs/vote", version = "=1.9.3" }
+solana-config-program = { path = "../programs/config", version = "=1.9.8" }
+solana-sdk = { path = "../sdk", version = "=1.9.8" }
+solana-vote-program = { path = "../programs/vote", version = "=1.9.8" }
 spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
 thiserror = "1.0"
 zstd = "0.9.0"

View File

@@ -5,7 +5,7 @@ use {
         parse_nonce::parse_nonce,
         parse_stake::parse_stake,
         parse_sysvar::parse_sysvar,
-        parse_token::{parse_token, spl_token_id},
+        parse_token::{parse_token, spl_token_ids},
         parse_vote::parse_vote,
     },
     inflector::Inflector,
@@ -21,7 +21,6 @@ lazy_static! {
     static ref STAKE_PROGRAM_ID: Pubkey = stake::program::id();
     static ref SYSTEM_PROGRAM_ID: Pubkey = system_program::id();
     static ref SYSVAR_PROGRAM_ID: Pubkey = sysvar::id();
-    static ref TOKEN_PROGRAM_ID: Pubkey = spl_token_id();
     static ref VOTE_PROGRAM_ID: Pubkey = solana_vote_program::id();
     pub static ref PARSABLE_PROGRAM_IDS: HashMap<Pubkey, ParsableAccount> = {
         let mut m = HashMap::new();
@@ -31,7 +30,9 @@ lazy_static! {
         );
         m.insert(*CONFIG_PROGRAM_ID, ParsableAccount::Config);
         m.insert(*SYSTEM_PROGRAM_ID, ParsableAccount::Nonce);
-        m.insert(*TOKEN_PROGRAM_ID, ParsableAccount::SplToken);
+        for spl_token_id in spl_token_ids() {
+            m.insert(spl_token_id, ParsableAccount::SplToken);
+        }
         m.insert(*STAKE_PROGRAM_ID, ParsableAccount::Stake);
         m.insert(*SYSVAR_PROGRAM_ID, ParsableAccount::Sysvar);
         m.insert(*VOTE_PROGRAM_ID, ParsableAccount::Vote);

View File

@@ -15,16 +15,31 @@ use {
 // A helper function to convert spl_token::id() as spl_sdk::pubkey::Pubkey to
 // solana_sdk::pubkey::Pubkey
-pub fn spl_token_id() -> Pubkey {
+fn spl_token_id() -> Pubkey {
     Pubkey::new_from_array(spl_token::id().to_bytes())
 }
 
+// Returns all known SPL Token program ids
+pub fn spl_token_ids() -> Vec<Pubkey> {
+    vec![spl_token_id()]
+}
+
+// Check if the provided program id as a known SPL Token program id
+pub fn is_known_spl_token_id(program_id: &Pubkey) -> bool {
+    *program_id == spl_token_id()
+}
+
 // A helper function to convert spl_token::native_mint::id() as spl_sdk::pubkey::Pubkey to
 // solana_sdk::pubkey::Pubkey
 pub fn spl_token_native_mint() -> Pubkey {
     Pubkey::new_from_array(spl_token::native_mint::id().to_bytes())
 }
 
+// The program id of the `spl_token_native_mint` account
+pub fn spl_token_native_mint_program_id() -> Pubkey {
+    spl_token_id()
+}
+
 // A helper function to convert a solana_sdk::pubkey::Pubkey to spl_sdk::pubkey::Pubkey
 pub fn spl_token_pubkey(pubkey: &Pubkey) -> SplTokenPubkey {
     SplTokenPubkey::new_from_array(pubkey.to_bytes())

View File

@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-accounts-bench"
-version = "1.9.3"
+version = "1.9.8"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -11,11 +11,11 @@ publish = false
 [dependencies]
 log = "0.4.14"
 rayon = "1.5.1"
-solana-logger = { path = "../logger", version = "=1.9.3" }
-solana-runtime = { path = "../runtime", version = "=1.9.3" }
-solana-measure = { path = "../measure", version = "=1.9.3" }
-solana-sdk = { path = "../sdk", version = "=1.9.3" }
-solana-version = { path = "../version", version = "=1.9.3" }
+solana-logger = { path = "../logger", version = "=1.9.8" }
+solana-runtime = { path = "../runtime", version = "=1.9.8" }
+solana-measure = { path = "../measure", version = "=1.9.8" }
+solana-sdk = { path = "../sdk", version = "=1.9.8" }
+solana-version = { path = "../version", version = "=1.9.8" }
 clap = "2.33.1"
 
 [package.metadata.docs.rs]

View File

@@ -2,7 +2,7 @@
 authors = ["Solana Maintainers <maintainers@solana.foundation>"]
 edition = "2021"
 name = "solana-accounts-cluster-bench"
-version = "1.9.3"
+version = "1.9.8"
 repository = "https://github.com/solana-labs/solana"
 license = "Apache-2.0"
 homepage = "https://solana.com/"
@@ -13,25 +13,25 @@ clap = "2.33.1"
 log = "0.4.14"
 rand = "0.7.0"
 rayon = "1.5.1"
-solana-account-decoder = { path = "../account-decoder", version = "=1.9.3" }
-solana-clap-utils = { path = "../clap-utils", version = "=1.9.3" }
-solana-client = { path = "../client", version = "=1.9.3" }
-solana-core = { path = "../core", version = "=1.9.3" }
-solana-faucet = { path = "../faucet", version = "=1.9.3" }
-solana-gossip = { path = "../gossip", version = "=1.9.3" }
-solana-logger = { path = "../logger", version = "=1.9.3" }
-solana-measure = { path = "../measure", version = "=1.9.3" }
-solana-net-utils = { path = "../net-utils", version = "=1.9.3" }
-solana-runtime = { path = "../runtime", version = "=1.9.3" }
-solana-sdk = { path = "../sdk", version = "=1.9.3" }
-solana-streamer = { path = "../streamer", version = "=1.9.3" }
-solana-test-validator = { path = "../test-validator", version = "=1.9.3" }
-solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
-solana-version = { path = "../version", version = "=1.9.3" }
+solana-account-decoder = { path = "../account-decoder", version = "=1.9.8" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.9.8" }
+solana-client = { path = "../client", version = "=1.9.8" }
+solana-core = { path = "../core", version = "=1.9.8" }
+solana-faucet = { path = "../faucet", version = "=1.9.8" }
+solana-gossip = { path = "../gossip", version = "=1.9.8" }
+solana-logger = { path = "../logger", version = "=1.9.8" }
+solana-measure = { path = "../measure", version = "=1.9.8" }
+solana-net-utils = { path = "../net-utils", version = "=1.9.8" }
+solana-runtime = { path = "../runtime", version = "=1.9.8" }
+solana-sdk = { path = "../sdk", version = "=1.9.8" }
+solana-streamer = { path = "../streamer", version = "=1.9.8" }
+solana-test-validator = { path = "../test-validator", version = "=1.9.8" }
+solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
+solana-version = { path = "../version", version = "=1.9.8" }
 spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
 
 [dev-dependencies]
-solana-local-cluster = { path = "../local-cluster", version = "=1.9.3" }
+solana-local-cluster = { path = "../local-cluster", version = "=1.9.8" }
 
 [package.metadata.docs.rs]
 targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -674,7 +674,7 @@ pub mod test {
#[test]
fn test_accounts_cluster_bench() {
solana_logger::setup();
let validator_config = ValidatorConfig::default();
let validator_config = ValidatorConfig::default_for_test();
let num_nodes = 1;
let mut config = ClusterConfig {
cluster_lamports: 10_000_000,

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-interface"
description = "The Solana AccountsDb plugin interface."
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -12,8 +12,8 @@ documentation = "https://docs.rs/solana-accountsdb-plugin-interface"
[dependencies]
log = "0.4.11"
thiserror = "1.0.30"
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-manager"
description = "The Solana AccountsDb plugin manager."
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -17,14 +17,14 @@ log = "0.4.11"
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-metrics = { path = "../metrics", version = "=1.9.3" }
solana-rpc = { path = "../rpc", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.8" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-measure = { path = "../measure", version = "=1.9.8" }
solana-metrics = { path = "../metrics", version = "=1.9.8" }
solana-rpc = { path = "../rpc", version = "=1.9.8" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
thiserror = "1.0.30"
[package.metadata.docs.rs]

View File

@@ -8,7 +8,6 @@ use {
solana_measure::measure::Measure,
solana_metrics::*,
solana_rpc::transaction_notifier_interface::TransactionNotifier,
solana_runtime::bank,
solana_sdk::{clock::Slot, signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
std::sync::{Arc, RwLock},
@@ -85,7 +84,7 @@ impl TransactionNotifierImpl {
) -> ReplicaTransactionInfo<'a> {
ReplicaTransactionInfo {
signature,
is_vote: bank::is_simple_vote_transaction(transaction),
is_vote: transaction.is_simple_vote_transaction(),
transaction,
transaction_status_meta,
}

View File

@@ -1,39 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-postgres"
description = "The Solana AccountsDb plugin for PostgreSQL database."
version = "1.9.3"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
log = "0.4.14"
postgres = { version = "0.19.2", features = ["with-chrono-0_4"] }
postgres-types = { version = "0.2.2", features = ["derive"] }
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-metrics = { path = "../metrics", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
thiserror = "1.0.30"
tokio-postgres = "0.7.4"
[dev-dependencies]
solana-account-decoder = { path = "../account-decoder", version = "=1.9.3" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -1,5 +0,0 @@
This is an example implementation of the AccountsDb plugin for the PostgreSQL database.
Please see `src/accountsdb_plugin_postgres.rs` for the format of the plugin's configuration file.
To create the schema objects for the database, please use `scripts/create_schema.sql`.
`scripts/drop_schema.sql` can be used to tear down the schema objects.

View File

@@ -1,195 +0,0 @@
/**
* This plugin implementation for PostgreSQL requires the following tables
*/
-- The table storing accounts
CREATE TABLE account (
pubkey BYTEA PRIMARY KEY,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- The table storing slot information
CREATE TABLE slot (
slot BIGINT PRIMARY KEY,
parent BIGINT,
status VARCHAR(16) NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- Types for Transactions
Create TYPE "TransactionErrorCode" AS ENUM (
'AccountInUse',
'AccountLoadedTwice',
'AccountNotFound',
'ProgramAccountNotFound',
'InsufficientFundsForFee',
'InvalidAccountForFee',
'AlreadyProcessed',
'BlockhashNotFound',
'InstructionError',
'CallChainTooDeep',
'MissingSignatureForFee',
'InvalidAccountIndex',
'SignatureFailure',
'InvalidProgramForExecution',
'SanitizeFailure',
'ClusterMaintenance',
'AccountBorrowOutstanding',
'WouldExceedMaxAccountCostLimit',
'WouldExceedMaxBlockCostLimit',
'UnsupportedVersion',
'InvalidWritableAccount',
'WouldExceedMaxAccountDataCostLimit'
);
CREATE TYPE "TransactionError" AS (
error_code "TransactionErrorCode",
error_detail VARCHAR(256)
);
CREATE TYPE "CompiledInstruction" AS (
program_id_index SMALLINT,
accounts SMALLINT[],
data BYTEA
);
CREATE TYPE "InnerInstructions" AS (
index SMALLINT,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionTokenBalance" AS (
account_index SMALLINT,
mint VARCHAR(44),
ui_token_amount DOUBLE PRECISION,
owner VARCHAR(44)
);
Create TYPE "RewardType" AS ENUM (
'Fee',
'Rent',
'Staking',
'Voting'
);
CREATE TYPE "Reward" AS (
pubkey VARCHAR(44),
lamports BIGINT,
post_balance BIGINT,
reward_type "RewardType",
commission SMALLINT
);
CREATE TYPE "TransactionStatusMeta" AS (
error "TransactionError",
fee BIGINT,
pre_balances BIGINT[],
post_balances BIGINT[],
inner_instructions "InnerInstructions"[],
log_messages TEXT[],
pre_token_balances "TransactionTokenBalance"[],
post_token_balances "TransactionTokenBalance"[],
rewards "Reward"[]
);
CREATE TYPE "TransactionMessageHeader" AS (
num_required_signatures SMALLINT,
num_readonly_signed_accounts SMALLINT,
num_readonly_unsigned_accounts SMALLINT
);
CREATE TYPE "TransactionMessage" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionMessageAddressTableLookup" AS (
account_key BYTEA,
writable_indexes SMALLINT[],
readonly_indexes SMALLINT[]
);
CREATE TYPE "TransactionMessageV0" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[],
address_table_lookups "TransactionMessageAddressTableLookup"[]
);
CREATE TYPE "LoadedAddresses" AS (
writable BYTEA[],
readonly BYTEA[]
);
CREATE TYPE "LoadedMessageV0" AS (
message "TransactionMessageV0",
loaded_addresses "LoadedAddresses"
);
-- The table storing transactions
CREATE TABLE transaction (
slot BIGINT NOT NULL,
signature BYTEA NOT NULL,
is_vote BOOL NOT NULL,
message_type SMALLINT, -- 0: legacy, 1: v0 message
legacy_message "TransactionMessage",
v0_loaded_message "LoadedMessageV0",
signatures BYTEA[],
message_hash BYTEA,
meta "TransactionStatusMeta",
updated_on TIMESTAMP NOT NULL,
CONSTRAINT transaction_pk PRIMARY KEY (slot, signature)
);
-- The table storing block metadata
CREATE TABLE block (
slot BIGINT PRIMARY KEY,
blockhash VARCHAR(44),
rewards "Reward"[],
block_time BIGINT,
block_height BIGINT,
updated_on TIMESTAMP NOT NULL
);
/**
* The following is for keeping historical data for accounts and is not required for plugin to work.
*/
-- The table storing historical data for accounts
CREATE TABLE account_audit (
pubkey BYTEA,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
CREATE INDEX account_audit_account_key ON account_audit (pubkey, write_version);
CREATE FUNCTION audit_account_update() RETURNS trigger AS $audit_account_update$
BEGIN
INSERT INTO account_audit (pubkey, owner, lamports, slot, executable, rent_epoch, data, write_version, updated_on)
VALUES (OLD.pubkey, OLD.owner, OLD.lamports, OLD.slot,
OLD.executable, OLD.rent_epoch, OLD.data, OLD.write_version, OLD.updated_on);
RETURN NEW;
END;
$audit_account_update$ LANGUAGE plpgsql;
CREATE TRIGGER account_update_trigger AFTER UPDATE OR DELETE ON account
FOR EACH ROW EXECUTE PROCEDURE audit_account_update();

View File

@@ -1,26 +0,0 @@
/**
* Script for cleaning up the schema for PostgreSQL used for the AccountsDb plugin.
*/
DROP TRIGGER account_update_trigger ON account;
DROP FUNCTION audit_account_update;
DROP TABLE account_audit;
DROP TABLE account;
DROP TABLE slot;
DROP TABLE transaction;
DROP TABLE block;
DROP TYPE "TransactionError" CASCADE;
DROP TYPE "TransactionErrorCode" CASCADE;
DROP TYPE "LoadedMessageV0" CASCADE;
DROP TYPE "LoadedAddresses" CASCADE;
DROP TYPE "TransactionMessageV0" CASCADE;
DROP TYPE "TransactionMessage" CASCADE;
DROP TYPE "TransactionMessageHeader" CASCADE;
DROP TYPE "TransactionMessageAddressTableLookup" CASCADE;
DROP TYPE "TransactionStatusMeta" CASCADE;
DROP TYPE "RewardType" CASCADE;
DROP TYPE "Reward" CASCADE;
DROP TYPE "TransactionTokenBalance" CASCADE;
DROP TYPE "InnerInstructions" CASCADE;
DROP TYPE "CompiledInstruction" CASCADE;

View File

@@ -1,802 +0,0 @@
# This is a reference configuration file for the PostgreSQL database, version 14.
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: B = bytes Time units: us = microseconds
# kB = kilobytes ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
data_directory = '/var/lib/postgresql/14/main' # use data in another directory
# (change requires restart)
hba_file = '/etc/postgresql/14/main/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/etc/postgresql/14/main/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/14-main.pid' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
listen_addresses = '*'
port = 5433 # (change requires restart)
max_connections = 200 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP settings -
# see "man tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
# 0 selects the system default
#client_connection_check_interval = 0 # time between checks for client
# disconnection while running queries;
# 0 for never
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = scram-sha-256 # scram-sha-256 or md5
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
#krb_caseins_users = off
# - SSL -
ssl = on
#ssl_ca_file = ''
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''
#ssl_crl_dir = ''
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1.2'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 1GB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#huge_page_size = 0 # zero for system default
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#hash_mem_multiplier = 1.0 # 1-1000.0 multiplier on hash table work_mem
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
#max_stack_depth = 2MB # min 100kB
#shared_memory_type = mmap # the default is the first option
# supported by the operating system:
# mmap
# sysv
# windows
# (change requires restart)
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# (change requires restart)
#min_dynamic_shared_memory = 0MB # (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kilobytes, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 64
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 2 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#backend_flush_after = 0 # measured in pages, 0 disables
effective_io_concurrency = 1000 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = minimal # minimal, replica, or logical
# (change requires restart)
fsync = off # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
synchronous_commit = off # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux and FreeBSD)
# fsync
# fsync_writethrough
# open_sync
full_page_writes = off # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_compression = off # enable compression of full-page writes
#wal_init_zero = on # zero-fill new WAL files
#wal_recycle = on # recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#wal_skip_threshold = 2MB
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
max_wal_size = 1GB
min_wal_size = 80MB
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = '' # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
# %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
#archive_cleanup_command = '' # command to execute at every restartpoint
#recovery_end_command = '' # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = '' # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = '' # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = '' # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the primary and on any standby that will send replication data.
max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#wal_keep_size = 0 # in megabytes; 0 disables
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Primary Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
#promote_trigger_file = '' # file name whose presence ends recovery
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
# is not set
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_async_append = on
#enable_bitmapscan = on
#enable_gathermerge = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_incremental_sort = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_memoize = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_parallel_hash = on
#enable_partition_pruning = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#jit = on # allow JIT compilation
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#plan_cache_mode = auto # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (Windows):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
# and their durations, > 0 logs only a sample of
# statements running at least this number
# of milliseconds;
# sample fraction is determined by log_statement_sample_rate
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
# log_min_duration_sample to be logged;
# 1.0 logs all such statements, 0.0 never logs
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
# are logged regardless of their duration; 1.0 logs all
# statements from all transactions, 0.0 never logs
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_autovacuum_min_duration = -1 # log autovacuum activity;
# -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%m [%p] %q%u@%d ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %b = backend type
# %p = process ID
# %P = process ID of parallel group leader
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %Q = query ID (0 if none or not computed)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_recovery_conflict_waits = off # log standby recovery conflict waits
# >= deadlock_timeout
#log_parameter_max_length = -1 # when logging statements, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Etc/UTC'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
cluster_name = '14/main' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_activity_query_size = 1024 # (change requires restart)
#track_counts = on
#track_io_timing = off
#track_wal_io_timing = off
#track_functions = none # none, pl, all
stats_temp_directory = '/var/run/postgresql/14-main.pg_stat_tmp'
# - Monitoring -
#compute_query_id = auto
#log_statement_stats = off
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_table_access_method = 'heap'
#default_tablespace = '' # a tablespace name, '' uses the default
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_table_age = 150000000
#vacuum_freeze_min_age = 50000000
#vacuum_failsafe_age = 1600000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_failsafe_age = 1600000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Etc/UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C.UTF-8' # locale for system error message
# strings
lc_monetary = 'C.UTF-8' # locale for monetary formatting
lc_numeric = 'C.UTF-8' # locale for number formatting
lc_time = 'C.UTF-8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#extension_destdir = '' # prepend path when loading extensions
# and shared objects (added by Debian)
#gin_fuzzy_search_limit = 0
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
include_dir = 'conf.d' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here

View File

@@ -1,74 +0,0 @@
use {log::*, std::collections::HashSet};
#[derive(Debug)]
pub(crate) struct AccountsSelector {
pub accounts: HashSet<Vec<u8>>,
pub owners: HashSet<Vec<u8>>,
pub select_all_accounts: bool,
}
impl AccountsSelector {
pub fn default() -> Self {
AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts: true,
}
}
pub fn new(accounts: &[String], owners: &[String]) -> Self {
info!(
"Creating AccountsSelector from accounts: {:?}, owners: {:?}",
accounts, owners
);
let select_all_accounts = accounts.iter().any(|key| key == "*");
if select_all_accounts {
return AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts,
};
}
let accounts = accounts
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
let owners = owners
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
AccountsSelector {
accounts,
owners,
select_all_accounts,
}
}
pub fn is_account_selected(&self, account: &[u8], owner: &[u8]) -> bool {
self.select_all_accounts || self.accounts.contains(account) || self.owners.contains(owner)
}
/// Check if any account is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_accounts || !self.accounts.is_empty() || !self.owners.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_create_accounts_selector() {
AccountsSelector::new(
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
&[],
);
AccountsSelector::new(
&[],
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
);
}
}

View File
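
A short hedged sketch of the `AccountsSelector` API deleted above (assumed to sit in the same module as the selector; the pubkey string is the placeholder reused from the unit test, not a meaningful account):

// Build a selector for one explicit account and check whether a raw
// (pubkey, owner) pair would be streamed by the plugin.
fn selector_example() {
    let selector = AccountsSelector::new(
        &["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()], // accounts
        &[],                                                           // owners
    );
    let pubkey = bs58::decode("9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin")
        .into_vec()
        .unwrap();
    assert!(selector.is_account_selected(&pubkey, &[0u8; 32]));
}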

@@ -1,466 +0,0 @@
use solana_measure::measure::Measure;
/// Main entry for the PostgreSQL plugin
use {
crate::{
accounts_selector::AccountsSelector,
postgres_client::{ParallelPostgresClient, PostgresClientBuilder},
transaction_selector::TransactionSelector,
},
bs58,
log::*,
serde_derive::{Deserialize, Serialize},
serde_json,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPlugin, AccountsDbPluginError, ReplicaAccountInfoVersions,
ReplicaBlockInfoVersions, ReplicaTransactionInfoVersions, Result, SlotStatus,
},
solana_metrics::*,
std::{fs::File, io::Read},
thiserror::Error,
};
#[derive(Default)]
pub struct AccountsDbPluginPostgres {
client: Option<ParallelPostgresClient>,
accounts_selector: Option<AccountsSelector>,
transaction_selector: Option<TransactionSelector>,
}
impl std::fmt::Debug for AccountsDbPluginPostgres {
fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct AccountsDbPluginPostgresConfig {
pub host: Option<String>,
pub user: Option<String>,
pub port: Option<u16>,
pub connection_str: Option<String>,
pub threads: Option<usize>,
pub batch_size: Option<usize>,
pub panic_on_db_errors: Option<bool>,
/// Indicates whether to store historical data for accounts
pub store_account_historical_data: Option<bool>,
}
#[derive(Error, Debug)]
pub enum AccountsDbPluginPostgresError {
#[error("Error connecting to the backend data store. Error message: ({msg})")]
DataStoreConnectionError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
DataSchemaError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
ConfigurationError { msg: String },
}
impl AccountsDbPlugin for AccountsDbPluginPostgres {
fn name(&self) -> &'static str {
"AccountsDbPluginPostgres"
}
/// Do initialization for the PostgreSQL plugin.
///
/// # Format of the config file:
/// * The `accounts_selector` section allows the user to control account selection.
/// "accounts_selector" : {
/// "accounts" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// or:
/// "accounts_selector" = {
/// "owners" : \["pubkey-1", "pubkey-2", ..., "pubkey-m"\]
/// }
/// Accounts satisfying either the accounts condition or the owners condition will be selected.
/// When only owners is specified,
/// all accounts belonging to the owners will be streamed.
/// The accounts field supports a wildcard to select all accounts:
/// "accounts_selector" : {
/// "accounts" : \["*"\],
/// }
/// * "host", optional, specifies the PostgreSQL server.
/// * "user", optional, specifies the PostgreSQL user.
/// * "port", optional, specifies the PostgreSQL server's port.
/// * "connection_str", optional, the custom PostgreSQL connection string.
/// Please refer to https://docs.rs/postgres/0.19.2/postgres/config/struct.Config.html for the connection configuration.
/// When `connection_str` is set, the values in "host", "user" and "port" are ignored. If `connection_str` is not given,
/// `host` and `user` must be given.
/// "store_account_historical_data", optional, set it to 'true', to store historical account data to account_audit
/// table.
/// * "threads" optional, specifies the number of worker threads for the plugin. A thread
/// maintains a PostgreSQL connection to the server. The default is '10'.
/// * "batch_size" optional, specifies the batch size of bulk insert when the AccountsDb is created
/// from restoring a snapshot. The default is '10'.
/// * "panic_on_db_errors", optional, contols if to panic when there are errors replicating data to the
/// PostgreSQL database. The default is 'false'.
/// * "transaction_selector", optional, controls if and what transaction to store. If this field is missing
/// None of the transction is stored.
/// "transaction_selector" : {
/// "mentions" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// The `mentions` field supports wildcards to select all transactions or all 'vote' transactions:
/// For example, to select all transactions:
/// "transaction_selector" : {
/// "mentions" : \["*"\],
/// }
/// To select all vote transactions:
/// "transaction_selector" : {
/// "mentions" : \["all_votes"\],
/// }
/// # Examples
///
/// {
/// "libpath": "/home/solana/target/release/libsolana_accountsdb_plugin_postgres.so",
/// "host": "host_foo",
/// "user": "solana",
/// "threads": 10,
/// "accounts_selector" : {
/// "owners" : ["9oT9R5ZyRovSVnt37QvVoBttGpNqR3J7unkb567NP8k3"]
/// }
/// }
fn on_load(&mut self, config_file: &str) -> Result<()> {
solana_logger::setup_with_default("info");
info!(
"Loading plugin {:?} from config_file {:?}",
self.name(),
config_file
);
let mut file = File::open(config_file)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let result: serde_json::Value = serde_json::from_str(&contents).unwrap();
self.accounts_selector = Some(Self::create_accounts_selector_from_config(&result));
self.transaction_selector = Some(Self::create_transaction_selector_from_config(&result));
let result: serde_json::Result<AccountsDbPluginPostgresConfig> =
serde_json::from_str(&contents);
match result {
Err(err) => {
return Err(AccountsDbPluginError::ConfigFileReadError {
msg: format!(
"The config file is not in the JSON format expected: {:?}",
err
),
})
}
Ok(config) => {
let client = PostgresClientBuilder::build_pararallel_postgres_client(&config)?;
self.client = Some(client);
}
}
Ok(())
}
fn on_unload(&mut self) {
info!("Unloading plugin: {:?}", self.name());
match &mut self.client {
None => {}
Some(client) => {
client.join().unwrap();
}
}
}
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
let mut measure_all = Measure::start("accountsdb-plugin-postgres-update-account-main");
match account {
ReplicaAccountInfoVersions::V0_0_1(account) => {
let mut measure_select =
Measure::start("accountsdb-plugin-postgres-update-account-select");
if let Some(accounts_selector) = &self.accounts_selector {
if !accounts_selector.is_account_selected(account.pubkey, account.owner) {
return Ok(());
}
} else {
return Ok(());
}
measure_select.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-select-us",
measure_select.as_us() as usize,
100000,
100000
);
debug!(
"Updating account {:?} with owner {:?} at slot {:?} using account selector {:?}",
bs58::encode(account.pubkey).into_string(),
bs58::encode(account.owner).into_string(),
slot,
self.accounts_selector.as_ref().unwrap()
);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database."
.to_string(),
},
)));
}
Some(client) => {
let mut measure_update =
Measure::start("accountsdb-plugin-postgres-update-account-client");
let result = { client.update_account(account, slot, is_startup) };
measure_update.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-client-us",
measure_update.as_us() as usize,
100000,
100000
);
if let Err(err) = result {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!("Failed to persist the update of account to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
}
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-main-us",
measure_all.as_us() as usize,
100000,
100000
);
Ok(())
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
info!("Updating slot {:?} at with status {:?}", slot, status);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.update_slot_status(slot, parent, status);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the update of slot to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<()> {
info!("Notifying the end of startup for accounts notifications");
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.notify_end_of_startup();
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to notify the end of startup for accounts notifications. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_transaction(
&mut self,
transaction_info: ReplicaTransactionInfoVersions,
slot: u64,
) -> Result<()> {
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => match transaction_info {
ReplicaTransactionInfoVersions::V0_0_1(transaction_info) => {
if let Some(transaction_selector) = &self.transaction_selector {
if !transaction_selector.is_transaction_selected(
transaction_info.is_vote,
transaction_info.transaction.message().account_keys_iter(),
) {
return Ok(());
}
} else {
return Ok(());
}
let result = client.log_transaction_info(transaction_info, slot);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the transaction info to the PostgreSQL database. Error: {:?}", err)
});
}
}
},
}
Ok(())
}
fn notify_block_metadata(&mut self, block_info: ReplicaBlockInfoVersions) -> Result<()> {
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => match block_info {
ReplicaBlockInfoVersions::V0_0_1(block_info) => {
let result = client.update_block_metadata(block_info);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the update of block metadata to the PostgreSQL database. Error: {:?}", err)
});
}
}
},
}
Ok(())
}
/// Check if the plugin is interested in account data
/// Default is true -- if the plugin is not interested in
/// account data, please return false.
fn account_data_notifications_enabled(&self) -> bool {
self.accounts_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
/// Check if the plugin is interested in transaction data
fn transaction_notifications_enabled(&self) -> bool {
self.transaction_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
}
impl AccountsDbPluginPostgres {
fn create_accounts_selector_from_config(config: &serde_json::Value) -> AccountsSelector {
let accounts_selector = &config["accounts_selector"];
if accounts_selector.is_null() {
AccountsSelector::default()
} else {
let accounts = &accounts_selector["accounts"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
let owners = &accounts_selector["owners"];
let owners: Vec<String> = if owners.is_array() {
owners
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
AccountsSelector::new(&accounts, &owners)
}
}
fn create_transaction_selector_from_config(config: &serde_json::Value) -> TransactionSelector {
let transaction_selector = &config["transaction_selector"];
if transaction_selector.is_null() {
TransactionSelector::default()
} else {
let accounts = &transaction_selector["mentions"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
TransactionSelector::new(&accounts)
}
}
pub fn new() -> Self {
Self::default()
}
}
#[no_mangle]
#[allow(improper_ctypes_definitions)]
/// # Safety
///
/// This function returns the AccountsDbPluginPostgres pointer as trait AccountsDbPlugin.
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
let plugin = AccountsDbPluginPostgres::new();
let plugin: Box<dyn AccountsDbPlugin> = Box::new(plugin);
Box::into_raw(plugin)
}
#[cfg(test)]
pub(crate) mod tests {
use {super::*, serde_json};
#[test]
fn test_accounts_selector_from_config() {
let config = "{\"accounts_selector\" : { \
\"owners\" : [\"9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin\"] \
}}";
let config: serde_json::Value = serde_json::from_str(config).unwrap();
AccountsDbPluginPostgres::create_accounts_selector_from_config(&config);
}
}

View File
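
A hedged sketch of the configuration flow documented on `on_load()` above, mirroring the existing unit test; the JSON literal and the function name are illustrative only:

// Parse an illustrative plugin config and build both selectors from it,
// the same way on_load() does (assumes this lives in the plugin's module).
fn selectors_from_config_example() {
    let config = r#"{
        "accounts_selector": { "owners": ["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin"] },
        "transaction_selector": { "mentions": ["all_votes"] }
    }"#;
    let config: serde_json::Value = serde_json::from_str(config).unwrap();
    let _accounts = AccountsDbPluginPostgres::create_accounts_selector_from_config(&config);
    let _transactions = AccountsDbPluginPostgres::create_transaction_selector_from_config(&config);
}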

@@ -1,4 +0,0 @@
pub mod accounts_selector;
pub mod accountsdb_plugin_postgres;
pub mod postgres_client;
pub mod transaction_selector;

File diff suppressed because it is too large

View File

@@ -1,97 +0,0 @@
use {
crate::{
accountsdb_plugin_postgres::{
AccountsDbPluginPostgresConfig, AccountsDbPluginPostgresError,
},
postgres_client::{
postgres_client_transaction::DbReward, SimplePostgresClient, UpdateBlockMetadataRequest,
},
},
chrono::Utc,
log::*,
postgres::{Client, Statement},
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPluginError, ReplicaBlockInfo,
},
};
#[derive(Clone, Debug)]
pub struct DbBlockInfo {
pub slot: i64,
pub blockhash: String,
pub rewards: Vec<DbReward>,
pub block_time: Option<i64>,
pub block_height: Option<i64>,
}
impl<'a> From<&ReplicaBlockInfo<'a>> for DbBlockInfo {
fn from(block_info: &ReplicaBlockInfo) -> Self {
Self {
slot: block_info.slot as i64,
blockhash: block_info.blockhash.to_string(),
rewards: block_info.rewards.iter().map(DbReward::from).collect(),
block_time: block_info.block_time,
block_height: block_info
.block_height
.map(|block_height| block_height as i64),
}
}
}
impl SimplePostgresClient {
pub(crate) fn build_block_metadata_upsert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt =
"INSERT INTO block (slot, blockhash, rewards, block_time, block_height, updated_on) \
VALUES ($1, $2, $3, $4, $5, $6)";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the block metadata update PostgreSQL database: ({}) host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
pub(crate) fn update_block_metadata_impl(
&mut self,
block_info: UpdateBlockMetadataRequest,
) -> Result<(), AccountsDbPluginError> {
let client = self.client.get_mut().unwrap();
let statement = &client.update_block_metadata_stmt;
let client = &mut client.client;
let updated_on = Utc::now().naive_utc();
let block_info = block_info.block_info;
let result = client.query(
statement,
&[
&block_info.slot,
&block_info.blockhash,
&block_info.rewards,
&block_info.block_time,
&block_info.block_height,
&updated_on,
],
);
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of block metadata to the PostgreSQL database. Error: {:?}",
err);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
Ok(())
}
}

View File

@@ -1,194 +0,0 @@
/// The transaction selector is responsible for filtering transactions
/// in the plugin framework.
use {log::*, solana_sdk::pubkey::Pubkey, std::collections::HashSet};
pub(crate) struct TransactionSelector {
pub mentioned_addresses: HashSet<Vec<u8>>,
pub select_all_transactions: bool,
pub select_all_vote_transactions: bool,
}
#[allow(dead_code)]
impl TransactionSelector {
pub fn default() -> Self {
Self {
mentioned_addresses: HashSet::default(),
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Create a selector based on the mentioned addresses
/// To select all transactions use ["*"] or ["all"]
/// To select all vote transactions, use ["all_votes"]
/// To select transactions mentioning specific addresses use ["<pubkey1>", "<pubkey2>", ...]
pub fn new(mentioned_addresses: &[String]) -> Self {
info!(
"Creating TransactionSelector from addresses: {:?}",
mentioned_addresses
);
let select_all_transactions = mentioned_addresses
.iter()
.any(|key| key == "*" || key == "all");
if select_all_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let select_all_vote_transactions = mentioned_addresses.iter().any(|key| key == "all_votes");
if select_all_vote_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let mentioned_addresses = mentioned_addresses
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
Self {
mentioned_addresses,
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Check if a transaction is of interest.
pub fn is_transaction_selected(
&self,
is_vote: bool,
mentioned_addresses: Box<dyn Iterator<Item = &Pubkey> + '_>,
) -> bool {
if !self.is_enabled() {
return false;
}
if self.select_all_transactions || (self.select_all_vote_transactions && is_vote) {
return true;
}
for address in mentioned_addresses {
if self.mentioned_addresses.contains(address.as_ref()) {
return true;
}
}
false
}
/// Check if any transaction is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_transactions
|| self.select_all_vote_transactions
|| !self.mentioned_addresses.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_select_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[pubkey1.to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_using_wildcard() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["*".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_all() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_vote_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all_votes".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
#[test]
fn test_select_no_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[]);
assert!(!selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
}


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-banking-bench"
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -14,17 +14,17 @@ crossbeam-channel = "0.5"
log = "0.4.14"
rand = "0.7.0"
rayon = "1.5.1"
solana-core = { path = "../core", version = "=1.9.3" }
solana-gossip = { path = "../gossip", version = "=1.9.3" }
solana-ledger = { path = "../ledger", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-perf = { path = "../perf", version = "=1.9.3" }
solana-poh = { path = "../poh", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-streamer = { path = "../streamer", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-core = { path = "../core", version = "=1.9.8" }
solana-gossip = { path = "../gossip", version = "=1.9.8" }
solana-ledger = { path = "../ledger", version = "=1.9.8" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-measure = { path = "../measure", version = "=1.9.8" }
solana-perf = { path = "../perf", version = "=1.9.8" }
solana-poh = { path = "../poh", version = "=1.9.8" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-streamer = { path = "../streamer", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -11,6 +11,7 @@ use {
blockstore::Blockstore,
genesis_utils::{create_genesis_config, GenesisConfigInfo},
get_tmp_ledger_path,
leader_schedule_cache::LeaderScheduleCache,
},
solana_measure::measure::Measure,
solana_perf::packet::to_packet_batches,
@@ -218,8 +219,13 @@ fn main() {
let blockstore = Arc::new(
Blockstore::open(&ledger_path).expect("Expected to be able to open database ledger"),
);
let (exit, poh_recorder, poh_service, signal_receiver) =
create_test_recorder(&bank, &blockstore, None);
let leader_schedule_cache = Arc::new(LeaderScheduleCache::new_from_bank(&bank));
let (exit, poh_recorder, poh_service, signal_receiver) = create_test_recorder(
&bank,
&blockstore,
None,
Some(leader_schedule_cache.clone()),
);
let cluster_info = ClusterInfo::new(
Node::new_localhost().info,
Arc::new(Keypair::new()),
@@ -332,6 +338,7 @@ fn main() {
poh_recorder.lock().unwrap().set_bank(&bank);
assert!(poh_recorder.lock().unwrap().bank().is_some());
if bank.slot() > 32 {
leader_schedule_cache.set_root(&bank);
bank_forks.set_root(root, &AbsRequestSender::default(), None);
root += 1;
}


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-client"
version = "1.9.3"
version = "1.9.8"
description = "Solana banks client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,17 +12,17 @@ edition = "2021"
[dependencies]
borsh = "0.9.1"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.9.3" }
solana-program = { path = "../sdk/program", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-banks-interface = { path = "../banks-interface", version = "=1.9.8" }
solana-program = { path = "../sdk/program", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
tarpc = { version = "0.27.2", features = ["full"] }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
[dev-dependencies]
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-banks-server = { path = "../banks-server", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-banks-server = { path = "../banks-server", version = "=1.9.8" }
[lib]
crate-type = ["lib"]


@@ -5,9 +5,9 @@
//! but they are undocumented, may change over time, and are generally more
//! cumbersome to use.
pub use crate::error::BanksClientError;
pub use solana_banks_interface::{BanksClient as TarpcClient, TransactionStatus};
use {
crate::error::BanksClientError,
borsh::BorshDeserialize,
futures::{future::join_all, Future, FutureExt, TryFutureExt},
solana_banks_interface::{BanksRequest, BanksResponse, BanksTransactionResultWithSimulation},


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-interface"
version = "1.9.3"
version = "1.9.8"
description = "Solana banks RPC interface"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -11,7 +11,7 @@ edition = "2021"
[dependencies]
serde = { version = "1.0.130", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
tarpc = { version = "0.27.2", features = ["full"] }
[lib]


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-server"
version = "1.9.3"
version = "1.9.8"
description = "Solana banks server"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,10 +12,10 @@ edition = "2021"
[dependencies]
bincode = "1.3.3"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.9.3" }
solana-banks-interface = { path = "../banks-interface", version = "=1.9.8" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.9.8" }
tarpc = { version = "0.27.2", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-bench-streamer"
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,11 +10,11 @@ publish = false
[dependencies]
clap = "2.33.1"
solana-clap-utils = { path = "../clap-utils", version = "=1.9.3" }
solana-streamer = { path = "../streamer", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-net-utils = { path = "../net-utils", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.8" }
solana-streamer = { path = "../streamer", version = "=1.9.8" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-net-utils = { path = "../net-utils", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-bench-tps"
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -14,23 +14,23 @@ log = "0.4.14"
rayon = "1.5.1"
serde_json = "1.0.72"
serde_yaml = "0.8.21"
solana-core = { path = "../core", version = "=1.9.3" }
solana-genesis = { path = "../genesis", version = "=1.9.3" }
solana-client = { path = "../client", version = "=1.9.3" }
solana-faucet = { path = "../faucet", version = "=1.9.3" }
solana-gossip = { path = "../gossip", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-metrics = { path = "../metrics", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-net-utils = { path = "../net-utils", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-streamer = { path = "../streamer", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-core = { path = "../core", version = "=1.9.8" }
solana-genesis = { path = "../genesis", version = "=1.9.8" }
solana-client = { path = "../client", version = "=1.9.8" }
solana-faucet = { path = "../faucet", version = "=1.9.8" }
solana-gossip = { path = "../gossip", version = "=1.9.8" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-metrics = { path = "../metrics", version = "=1.9.8" }
solana-measure = { path = "../measure", version = "=1.9.8" }
solana-net-utils = { path = "../net-utils", version = "=1.9.8" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-streamer = { path = "../streamer", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
[dev-dependencies]
serial_test = "0.5.1"
solana-local-cluster = { path = "../local-cluster", version = "=1.9.3" }
solana-local-cluster = { path = "../local-cluster", version = "=1.9.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -21,7 +21,7 @@ pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7;
fn main() {
solana_logger::setup_with_default("solana=info");
solana_metrics::set_panic_hook("bench-tps");
solana_metrics::set_panic_hook("bench-tps", /*version:*/ None);
let matches = cli::build_args(solana_version::version!()).get_matches();
let cli_config = cli::extract_args(&matches);


@@ -31,7 +31,7 @@ fn test_bench_tps_local_cluster(config: Config) {
node_stakes: vec![999_990; NUM_NODES],
cluster_lamports: 200_000_000,
validator_configs: make_identical_validator_configs(
&ValidatorConfig::default(),
&ValidatorConfig::default_for_test(),
NUM_NODES,
),
native_instruction_processors,

bloom/Cargo.toml Normal file

@@ -0,0 +1,32 @@
[package]
name = "solana-bloom"
version = "1.9.8"
description = "Solana bloom filter"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bloom"
edition = "2021"
[dependencies]
bv = { version = "0.11.1", features = ["serde"] }
fnv = "1.0.7"
rand = "0.7.0"
serde = { version = "1.0.133", features = ["rc"] }
rayon = "1.5.1"
serde_derive = "1.0.103"
solana-frozen-abi = { path = "../frozen-abi", version = "=1.9.8" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
log = "0.4.14"
[lib]
crate-type = ["lib"]
name = "solana_bloom"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]
[build-dependencies]
rustc_version = "0.4"


@@ -5,7 +5,7 @@ use {
bv::BitVec,
fnv::FnvHasher,
rand::Rng,
solana_runtime::bloom::{AtomicBloom, Bloom, BloomHashIndex},
solana_bloom::bloom::{AtomicBloom, Bloom, BloomHashIndex},
solana_sdk::{
hash::{hash, Hash},
signature::Signature,

bloom/build.rs Symbolic link

@@ -0,0 +1 @@
../frozen-abi/build.rs


@@ -101,7 +101,7 @@ impl<T: BloomHashIndex> Bloom<T> {
}
}
fn pos(&self, key: &T, k: u64) -> u64 {
key.hash_at_index(k) % self.bits.len()
key.hash_at_index(k).wrapping_rem(self.bits.len())
}
pub fn clear(&mut self) {
self.bits = BitVec::new_fill(false, self.bits.len());
@@ -111,7 +111,7 @@ impl<T: BloomHashIndex> Bloom<T> {
for k in &self.keys {
let pos = self.pos(key, *k);
if !self.bits.get(pos) {
self.num_bits_set += 1;
self.num_bits_set = self.num_bits_set.saturating_add(1);
self.bits.set(pos, true);
}
}
@@ -164,21 +164,26 @@ impl<T: BloomHashIndex> From<Bloom<T>> for AtomicBloom<T> {
impl<T: BloomHashIndex> AtomicBloom<T> {
fn pos(&self, key: &T, hash_index: u64) -> (usize, u64) {
let pos = key.hash_at_index(hash_index) % self.num_bits;
let pos = key.hash_at_index(hash_index).wrapping_rem(self.num_bits);
// Divide by 64 to figure out which of the
// AtomicU64 bit chunks we need to modify.
let index = pos >> 6;
let index = pos.wrapping_shr(6);
// (pos & 63) is equivalent to mod 64 so that we can find
// the index of the bit within the AtomicU64 to modify.
let mask = 1u64 << (pos & 63);
let mask = 1u64.wrapping_shl(u32::try_from(pos & 63).unwrap());
(index as usize, mask)
}
pub fn add(&self, key: &T) {
/// Adds an item to the bloom filter and returns true if the item
/// was not in the filter before.
pub fn add(&self, key: &T) -> bool {
let mut added = false;
for k in &self.keys {
let (index, mask) = self.pos(key, *k);
self.bits[index].fetch_or(mask, Ordering::Relaxed);
let prev_val = self.bits[index].fetch_or(mask, Ordering::Relaxed);
added = added || prev_val & mask == 0u64;
}
added
}
pub fn contains(&self, key: &T) -> bool {
@@ -189,6 +194,12 @@ impl<T: BloomHashIndex> AtomicBloom<T> {
})
}
pub fn clear_for_tests(&mut self) {
self.bits.iter().for_each(|bit| {
bit.store(0u64, Ordering::Relaxed);
});
}
// Only for tests and simulations.
pub fn mock_clone(&self) -> Self {
Self {
@@ -320,7 +331,9 @@ mod test {
assert_eq!(bloom.keys.len(), 3);
assert_eq!(bloom.num_bits, 6168);
assert_eq!(bloom.bits.len(), 97);
hash_values.par_iter().for_each(|v| bloom.add(v));
hash_values.par_iter().for_each(|v| {
bloom.add(v);
});
let bloom: Bloom<Hash> = bloom.into();
assert_eq!(bloom.keys.len(), 3);
assert_eq!(bloom.bits.len(), 6168);
@@ -362,7 +375,9 @@ mod test {
}
// Round trip, re-inserting the same hash values.
let bloom: AtomicBloom<_> = bloom.into();
hash_values.par_iter().for_each(|v| bloom.add(v));
hash_values.par_iter().for_each(|v| {
bloom.add(v);
});
for hash_value in &hash_values {
assert!(bloom.contains(hash_value));
}
@@ -380,7 +395,9 @@ mod test {
let bloom: AtomicBloom<_> = bloom.into();
assert_eq!(bloom.num_bits, 9731);
assert_eq!(bloom.bits.len(), (9731 + 63) / 64);
more_hash_values.par_iter().for_each(|v| bloom.add(v));
more_hash_values.par_iter().for_each(|v| {
bloom.add(v);
});
for hash_value in &hash_values {
assert!(bloom.contains(hash_value));
}
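The bloom changes above make AtomicBloom::add report whether the key was newly inserted, and replace raw %, >>, and << with wrapping_* calls. Below is a minimal sketch of how a caller can use the new return value; it assumes the solana-bloom crate introduced above and its existing Bloom::random constructor, and the sizing numbers are illustrative only.

use {
    solana_bloom::bloom::{AtomicBloom, Bloom},
    solana_sdk::hash::{hash, Hash},
};

fn count_first_time_inserts(items: &[Hash]) -> usize {
    // Size the filter for ~100 items at a 1% false-positive rate, capped at 8 KiB of bits.
    let bloom: Bloom<Hash> = Bloom::random(100, 0.01, 8 * 1024);
    // Convert to the lock-free variant, as the round-trip tests above do.
    let bloom: AtomicBloom<Hash> = bloom.into();
    items
        .iter()
        // `add` now returns true only when at least one of the key's bits was previously
        // unset, so false positives can make this undercount genuinely new items.
        .filter(|item| bloom.add(*item))
        .count()
}

fn main() {
    let items: Vec<Hash> = (0u8..10).map(|i| hash(&[i])).collect();
    println!(
        "{} of {} items were new to the filter",
        count_first_time_inserts(&items),
        items.len()
    );
}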

bloom/src/lib.rs Normal file

@@ -0,0 +1,5 @@
#![cfg_attr(RUSTC_WITH_SPECIALIZATION, feature(min_specialization))]
pub mod bloom;
#[macro_use]
extern crate solana_frozen_abi_macro;


@@ -1,6 +1,6 @@
[package]
name = "solana-bucket-map"
version = "1.9.3"
version = "1.9.8"
description = "solana-bucket-map"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bucket-map"
@@ -12,11 +12,11 @@ edition = "2021"
[dependencies]
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
memmap2 = "0.5.0"
log = { version = "0.4.11" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.8" }
rand = "0.7.0"
fs_extra = "1.2.0"
tempfile = "3.2.0"


@@ -256,7 +256,15 @@ EOF
command_step "local-cluster" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster.sh" \
50
40
command_step "local-cluster-flakey" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-flakey.sh" \
10
command_step "local-cluster-slow" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster-slow.sh" \
30
}
pull_or_push_steps() {


@@ -19,3 +19,8 @@ steps:
timeout_in_minutes: 240
name: "publish crate"
branches: "!master"
- command: "ci/publish-tarball.sh"
agents:
- "queue=release-build-aarch64-apple-darwin"
timeout_in_minutes: 60
name: "publish tarball (aarch64-apple-darwin)"


@@ -150,7 +150,7 @@ elif [[ -n $BUILDKITE ]]; then
cat > release.solana.com-install <<EOF
SOLANA_RELEASE=$CHANNEL_OR_TAG
SOLANA_INSTALL_INIT_ARGS=$CHANNEL_OR_TAG
SOLANA_DOWNLOAD_ROOT=http://release.solana.com
SOLANA_DOWNLOAD_ROOT=https://release.solana.com
EOF
cat install/solana-install-init.sh >> release.solana.com-install


@@ -27,6 +27,8 @@ steps+=(test-stable-perf)
steps+=(test-downstream-builds)
steps+=(test-bench)
steps+=(test-local-cluster)
steps+=(test-local-cluster-flakey)
steps+=(test-local-cluster-slow)
step_index=0
if [[ -n "$1" ]]; then

ci/sbf-tools-info.sh Executable file

@@ -0,0 +1,24 @@
#!/usr/bin/env bash
#
# Finds the version of sbf-tools used by this source tree.
#
# stdout of this script may be eval-ed.
#
here="$(dirname "$0")"
SBF_TOOLS_VERSION=unknown
cargo_build_bpf_main="${here}/../sdk/cargo-build-bpf/src/main.rs"
if [[ -f "${cargo_build_bpf_main}" ]]; then
version=$(sed -e 's/^.*bpf_tools_version\s*=\s*"\(v[0-9.]\+\)".*/\1/;t;d' "${cargo_build_bpf_main}")
if [[ ${version} != '' ]]; then
SBF_TOOLS_VERSION="${version}"
else
echo '--- unable to parse SBF_TOOLS_VERSION'
fi
else
echo "--- '${cargo_build_bpf_main}' not present"
fi
echo SBF_TOOLS_VERSION="${SBF_TOOLS_VERSION}"


@@ -0,0 +1 @@
test-stable.sh


@@ -0,0 +1 @@
test-stable.sh


@@ -100,7 +100,17 @@ test-stable-perf)
;;
test-local-cluster)
_ "$cargo" stable build --release --bins ${V:+--verbose}
_ "$cargo" stable test --release --package solana-local-cluster ${V:+--verbose} -- --nocapture --test-threads=1
_ "$cargo" stable test --release --package solana-local-cluster --test local_cluster ${V:+--verbose} -- --nocapture --test-threads=1
exit 0
;;
test-local-cluster-flakey)
_ "$cargo" stable build --release --bins ${V:+--verbose}
_ "$cargo" stable test --release --package solana-local-cluster --test local_cluster_flakey ${V:+--verbose} -- --nocapture --test-threads=1
exit 0
;;
test-local-cluster-slow)
_ "$cargo" stable build --release --bins ${V:+--verbose}
_ "$cargo" stable test --release --package solana-local-cluster --test local_cluster_slow ${V:+--verbose} -- --nocapture --test-threads=1
exit 0
;;
test-wasm)


@@ -1,6 +1,6 @@
[package]
name = "solana-clap-utils"
version = "1.9.3"
version = "1.9.8"
description = "Solana utilities for the clap"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,9 +12,9 @@ edition = "2021"
[dependencies]
clap = "2.33.0"
rpassword = "5.0"
solana-perf = { path = "../perf", version = "=1.9.3" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-perf = { path = "../perf", version = "=1.9.8" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
thiserror = "1.0.30"
tiny-bip39 = "0.8.2"
uriparse = "0.6.3"


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli-config"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli-output"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -19,12 +19,12 @@ Inflector = "0.11.4"
indicatif = "0.16.2"
serde = "1.0.130"
serde_json = "1.0.72"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.3" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.3" }
solana-client = { path = "../client", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.3" }
solana-account-decoder = { path = "../account-decoder", version = "=1.9.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.8" }
solana-client = { path = "../client", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.8" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
[package.metadata.docs.rs]


@@ -46,6 +46,8 @@ use {
},
};
static CHECK_MARK: Emoji = Emoji("✅ ", "");
static CROSS_MARK: Emoji = Emoji("❌ ", "");
static WARNING: Emoji = Emoji("⚠️", "!");
#[derive(PartialEq, Debug)]
@@ -2523,6 +2525,172 @@ impl fmt::Display for CliGossipNodes {
impl QuietDisplay for CliGossipNodes {}
impl VerboseDisplay for CliGossipNodes {}
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliPing {
pub source_pubkey: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub fixed_blockhash: Option<String>,
#[serde(skip_serializing)]
pub blockhash_from_cluster: bool,
pub pings: Vec<CliPingData>,
pub transaction_stats: CliPingTxStats,
#[serde(skip_serializing_if = "Option::is_none")]
pub confirmation_stats: Option<CliPingConfirmationStats>,
}
impl fmt::Display for CliPing {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f)?;
writeln_name_value(f, "Source Account:", &self.source_pubkey)?;
if let Some(fixed_blockhash) = &self.fixed_blockhash {
let blockhash_origin = if self.blockhash_from_cluster {
"fetched from cluster"
} else {
"supplied from cli arguments"
};
writeln!(
f,
"Fixed blockhash is used: {} ({})",
fixed_blockhash, blockhash_origin
)?;
}
writeln!(f)?;
for ping in &self.pings {
write!(f, "{}", ping)?;
}
writeln!(f)?;
writeln!(f, "--- transaction statistics ---")?;
write!(f, "{}", self.transaction_stats)?;
if let Some(confirmation_stats) = &self.confirmation_stats {
write!(f, "{}", confirmation_stats)?;
}
Ok(())
}
}
impl QuietDisplay for CliPing {}
impl VerboseDisplay for CliPing {}
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliPingData {
pub success: bool,
#[serde(skip_serializing_if = "Option::is_none")]
pub signature: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub ms: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
#[serde(skip_serializing)]
pub print_timestamp: bool,
pub timestamp: String,
pub sequence: u64,
#[serde(skip_serializing_if = "Option::is_none")]
pub lamports: Option<u64>,
}
impl fmt::Display for CliPingData {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let (mark, msg) = if let Some(signature) = &self.signature {
if self.success {
(
CHECK_MARK,
format!(
"{} lamport(s) transferred: seq={:<3} time={:>4}ms signature={}",
self.lamports.unwrap(),
self.sequence,
self.ms.unwrap(),
signature
),
)
} else if let Some(error) = &self.error {
(
CROSS_MARK,
format!(
"Transaction failed: seq={:<3} error={:?} signature={}",
self.sequence, error, signature
),
)
} else {
(
CROSS_MARK,
format!(
"Confirmation timeout: seq={:<3} signature={}",
self.sequence, signature
),
)
}
} else {
(
CROSS_MARK,
format!(
"Submit failed: seq={:<3} error={:?}",
self.sequence,
self.error.as_ref().unwrap(),
),
)
};
writeln!(
f,
"{}{}{}",
if self.print_timestamp {
&self.timestamp
} else {
""
},
mark,
msg
)
}
}
impl QuietDisplay for CliPingData {}
impl VerboseDisplay for CliPingData {}
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliPingTxStats {
pub num_transactions: u32,
pub num_transaction_confirmed: u32,
}
impl fmt::Display for CliPingTxStats {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(
f,
"{} transactions submitted, {} transactions confirmed, {:.1}% transaction loss",
self.num_transactions,
self.num_transaction_confirmed,
(100.
- f64::from(self.num_transaction_confirmed) / f64::from(self.num_transactions)
* 100.)
)
}
}
impl QuietDisplay for CliPingTxStats {}
impl VerboseDisplay for CliPingTxStats {}
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct CliPingConfirmationStats {
pub min: f64,
pub mean: f64,
pub max: f64,
pub std_dev: f64,
}
impl fmt::Display for CliPingConfirmationStats {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(
f,
"confirmation min/mean/max/stddev = {:.0}/{:.0}/{:.0}/{:.0} ms",
self.min, self.mean, self.max, self.std_dev,
)
}
}
impl QuietDisplay for CliPingConfirmationStats {}
impl VerboseDisplay for CliPingConfirmationStats {}
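Aside (not part of the diff): a minimal sketch of how the new structured ping output types above render through their Display impls. It assumes CliPingTxStats and CliPingConfirmationStats are exported from the solana_cli_output crate root like the other Cli* types; all numbers are made up for illustration.

use solana_cli_output::{CliPingConfirmationStats, CliPingTxStats};

fn main() {
    // 10 submitted, 9 confirmed -> the Display impl above prints "10.0% transaction loss".
    let stats = CliPingTxStats {
        num_transactions: 10,
        num_transaction_confirmed: 9,
    };
    print!("{}", stats);

    // Prints "confirmation min/mean/max/stddev = 312/401/545/77 ms".
    let confirmation = CliPingConfirmationStats {
        min: 312.0,
        mean: 401.0,
        max: 545.0,
        std_dev: 77.0,
    };
    print!("{}", confirmation);
}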
#[cfg(test)]
mod tests {
use {


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-cli"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.3"
version = "1.9.8"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -26,29 +26,29 @@ semver = "1.0.4"
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.3" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.9.3" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.3" }
solana-cli-config = { path = "../cli-config", version = "=1.9.3" }
solana-cli-output = { path = "../cli-output", version = "=1.9.3" }
solana-client = { path = "../client", version = "=1.9.3" }
solana-config-program = { path = "../programs/config", version = "=1.9.3" }
solana-faucet = { path = "../faucet", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-program-runtime = { path = "../program-runtime", version = "=1.9.3" }
solana_rbpf = "=0.2.21"
solana-remote-wallet = { path = "../remote-wallet", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.3" }
solana-account-decoder = { path = "../account-decoder", version = "=1.9.8" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.9.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.8" }
solana-cli-config = { path = "../cli-config", version = "=1.9.8" }
solana-cli-output = { path = "../cli-output", version = "=1.9.8" }
solana-client = { path = "../client", version = "=1.9.8" }
solana-config-program = { path = "../programs/config", version = "=1.9.8" }
solana-faucet = { path = "../faucet", version = "=1.9.8" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-program-runtime = { path = "../program-runtime", version = "=1.9.8" }
solana_rbpf = "=0.2.23"
solana-remote-wallet = { path = "../remote-wallet", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.8" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
thiserror = "1.0.30"
tiny-bip39 = "0.8.2"
[dev-dependencies]
solana-streamer = { path = "../streamer", version = "=1.9.3" }
solana-test-validator = { path = "../test-validator", version = "=1.9.3" }
solana-streamer = { path = "../streamer", version = "=1.9.8" }
solana-test-validator = { path = "../test-validator", version = "=1.9.8" }
tempfile = "3.2.0"
[[bin]]


@@ -83,7 +83,6 @@ pub enum CliCommand {
filter: RpcTransactionLogsFilter,
},
Ping {
lamports: u64,
interval: Duration,
count: Option<u64>,
timeout: Duration,
@@ -973,7 +972,6 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
CliCommand::LiveSlots => process_live_slots(config),
CliCommand::Logs { filter } => process_logs(config, filter),
CliCommand::Ping {
lamports,
interval,
count,
timeout,
@@ -982,7 +980,6 @@ pub fn process_command(config: &CliConfig) -> ProcessResult {
} => process_ping(
&rpc_client,
config,
*lamports,
interval,
count,
timeout,


@@ -4,7 +4,7 @@ use {
spend_utils::{resolve_spend_tx_and_check_account_balance, SpendAmount},
},
clap::{value_t, value_t_or_exit, App, AppSettings, Arg, ArgMatches, SubCommand},
console::{style, Emoji},
console::style,
serde::{Deserialize, Serialize},
solana_clap_utils::{
input_parsers::*,
@@ -15,7 +15,7 @@ use {
solana_cli_output::{
display::{
build_balance_message, format_labeled_address, new_spinner_progress_bar,
println_name_value, println_transaction, unix_timestamp_to_string, writeln_name_value,
println_transaction, unix_timestamp_to_string, writeln_name_value,
},
*,
},
@@ -43,13 +43,13 @@ use {
message::Message,
native_token::lamports_to_sol,
nonce::State as NonceState,
pubkey::{self, Pubkey},
pubkey::Pubkey,
rent::Rent,
rpc_port::DEFAULT_RPC_PORT_STR,
signature::Signature,
slot_history,
stake::{self, state::StakeState},
system_instruction, system_program,
system_instruction,
sysvar::{
self,
slot_history::SlotHistory,
@@ -74,9 +74,6 @@ use {
thiserror::Error,
};
static CHECK_MARK: Emoji = Emoji("✅ ", "");
static CROSS_MARK: Emoji = Emoji("❌ ", "");
pub trait ClusterQuerySubCommands {
fn cluster_query_subcommands(self) -> Self;
}
@@ -262,15 +259,6 @@ impl ClusterQuerySubCommands for App<'_, '_> {
.takes_value(false)
.help("Print timestamp (unix time + microseconds as in gettimeofday) before each line"),
)
.arg(
Arg::with_name("lamports")
.long("lamports")
.value_name("NUMBER")
.takes_value(true)
.default_value("1")
.validator(is_amount)
.help("Number of lamports to transfer for each transaction"),
)
.arg(
Arg::with_name("timeout")
.short("t")
@@ -515,7 +503,6 @@ pub fn parse_cluster_ping(
default_signer: &DefaultSigner,
wallet_manager: &mut Option<Arc<RemoteWalletManager>>,
) -> Result<CliCommandInfo, CliError> {
let lamports = value_t_or_exit!(matches, "lamports", u64);
let interval = Duration::from_secs(value_t_or_exit!(matches, "interval", u64));
let count = if matches.is_present("count") {
Some(value_t_or_exit!(matches, "count", u64))
@@ -527,7 +514,6 @@ pub fn parse_cluster_ping(
let print_timestamp = matches.is_present("print_timestamp");
Ok(CliCommandInfo {
command: CliCommand::Ping {
lamports,
interval,
count,
timeout,
@@ -1358,40 +1344,34 @@ pub fn process_get_transaction_count(rpc_client: &RpcClient, _config: &CliConfig
pub fn process_ping(
rpc_client: &RpcClient,
config: &CliConfig,
lamports: u64,
interval: &Duration,
count: &Option<u64>,
timeout: &Duration,
fixed_blockhash: &Option<Hash>,
print_timestamp: bool,
) -> ProcessResult {
println_name_value("Source Account:", &config.signers[0].pubkey().to_string());
println!();
let (signal_sender, signal_receiver) = std::sync::mpsc::channel();
ctrlc::set_handler(move || {
let _ = signal_sender.send(());
})
.expect("Error setting Ctrl-C handler");
let mut cli_pings = vec![];
let mut submit_count = 0;
let mut confirmed_count = 0;
let mut confirmation_time: VecDeque<u64> = VecDeque::with_capacity(1024);
let mut blockhash = rpc_client.get_latest_blockhash()?;
let mut blockhash_transaction_count = 0;
let mut lamports = 0;
let mut blockhash_acquired = Instant::now();
let mut blockhash_from_cluster = false;
if let Some(fixed_blockhash) = fixed_blockhash {
let blockhash_origin = if *fixed_blockhash != Hash::default() {
if *fixed_blockhash != Hash::default() {
blockhash = *fixed_blockhash;
"supplied from cli arguments"
} else {
"fetched from cluster"
};
println!(
"Fixed blockhash is used: {} ({})",
blockhash, blockhash_origin
);
blockhash_from_cluster = true;
}
}
'mainloop: for seq in 0..count.unwrap_or(std::u64::MAX) {
let now = Instant::now();
@@ -1399,15 +1379,12 @@ pub fn process_ping(
// Fetch a new blockhash every minute
let new_blockhash = rpc_client.get_new_latest_blockhash(&blockhash)?;
blockhash = new_blockhash;
blockhash_transaction_count = 0;
lamports = 0;
blockhash_acquired = Instant::now();
}
let seed =
&format!("{}{}", blockhash_transaction_count, blockhash)[0..pubkey::MAX_SEED_LEN];
let to = Pubkey::create_with_seed(&config.signers[0].pubkey(), seed, &system_program::id())
.unwrap();
blockhash_transaction_count += 1;
let to = config.signers[0].pubkey();
lamports += 1;
let build_message = |lamports| {
let ix = system_instruction::transfer(&config.signers[0].pubkey(), &to, lamports);
@@ -1430,11 +1407,7 @@ pub fn process_ping(
.duration_since(UNIX_EPOCH)
.unwrap()
.as_micros();
if print_timestamp {
format!("[{}.{:06}] ", micros / 1_000_000, micros % 1_000_000)
} else {
String::new()
}
format!("[{}.{:06}] ", micros / 1_000_000, micros % 1_000_000)
};
match rpc_client.send_transaction(&tx) {
@@ -1448,35 +1421,51 @@ pub fn process_ping(
Ok(()) => {
let elapsed_time_millis = elapsed_time.as_millis() as u64;
confirmation_time.push_back(elapsed_time_millis);
println!(
"{}{}{} lamport(s) transferred: seq={:<3} time={:>4}ms signature={}",
timestamp(),
CHECK_MARK, lamports, seq, elapsed_time_millis, signature
);
let cli_ping_data = CliPingData {
success: true,
signature: Some(signature.to_string()),
ms: Some(elapsed_time_millis),
error: None,
timestamp: timestamp(),
print_timestamp,
sequence: seq,
lamports: Some(lamports),
};
eprint!("{}", cli_ping_data);
cli_pings.push(cli_ping_data);
confirmed_count += 1;
}
Err(err) => {
println!(
"{}{}Transaction failed: seq={:<3} error={:?} signature={}",
timestamp(),
CROSS_MARK,
seq,
err,
signature
);
let cli_ping_data = CliPingData {
success: false,
signature: Some(signature.to_string()),
ms: None,
error: Some(err.to_string()),
timestamp: timestamp(),
print_timestamp,
sequence: seq,
lamports: None,
};
eprint!("{}", cli_ping_data);
cli_pings.push(cli_ping_data);
}
}
break;
}
if elapsed_time >= *timeout {
println!(
"{}{}Confirmation timeout: seq={:<3} signature={}",
timestamp(),
CROSS_MARK,
seq,
signature
);
let cli_ping_data = CliPingData {
success: false,
signature: Some(signature.to_string()),
ms: None,
error: None,
timestamp: timestamp(),
print_timestamp,
sequence: seq,
lamports: None,
};
eprint!("{}", cli_ping_data);
cli_pings.push(cli_ping_data);
break;
}
@@ -1490,13 +1479,18 @@ pub fn process_ping(
}
}
Err(err) => {
println!(
"{}{}Submit failed: seq={:<3} error={:?}",
timestamp(),
CROSS_MARK,
seq,
err
);
let cli_ping_data = CliPingData {
success: false,
signature: None,
ms: None,
error: Some(err.to_string()),
timestamp: timestamp(),
print_timestamp,
sequence: seq,
lamports: None,
};
eprint!("{}", cli_ping_data);
cli_pings.push(cli_ping_data);
}
}
submit_count += 1;
@@ -1506,28 +1500,34 @@ pub fn process_ping(
}
}
println!();
println!("--- transaction statistics ---");
println!(
"{} transactions submitted, {} transactions confirmed, {:.1}% transaction loss",
submit_count,
confirmed_count,
(100. - f64::from(confirmed_count) / f64::from(submit_count) * 100.)
);
if !confirmation_time.is_empty() {
let transaction_stats = CliPingTxStats {
num_transactions: submit_count,
num_transaction_confirmed: confirmed_count,
};
let confirmation_stats = if !confirmation_time.is_empty() {
let samples: Vec<f64> = confirmation_time.iter().map(|t| *t as f64).collect();
let dist = criterion_stats::Distribution::from(samples.into_boxed_slice());
let mean = dist.mean();
println!(
"confirmation min/mean/max/stddev = {:.0}/{:.0}/{:.0}/{:.0} ms",
dist.min(),
Some(CliPingConfirmationStats {
min: dist.min(),
mean,
dist.max(),
dist.std_dev(Some(mean))
);
}
max: dist.max(),
std_dev: dist.std_dev(Some(mean)),
})
} else {
None
};
Ok("".to_string())
let cli_ping = CliPing {
source_pubkey: config.signers[0].pubkey().to_string(),
fixed_blockhash: fixed_blockhash.map(|_| blockhash.to_string()),
blockhash_from_cluster,
pings: cli_pings,
transaction_stats,
confirmation_stats,
};
Ok(config.output_format.formatted_string(&cli_ping))
}
pub fn parse_logs(
@@ -2128,7 +2128,7 @@ pub fn process_calculate_rent(
timing::years_as_slots(1.0, &seconds_per_tick, clock::DEFAULT_TICKS_PER_SLOT);
let slots_per_epoch = epoch_schedule.slots_per_epoch as f64;
let years_per_epoch = slots_per_epoch / slots_per_year;
let (lamports_per_epoch, _) = rent.due(0, data_length, years_per_epoch);
let lamports_per_epoch = rent.due(0, data_length, years_per_epoch).lamports();
let cli_rent_calculation = CliRentCalculation {
lamports_per_byte_year: rent.lamports_per_byte_year,
lamports_per_epoch,
@@ -2304,7 +2304,6 @@ mod tests {
parse_command(&test_ping, &default_signer, &mut None).unwrap(),
CliCommandInfo {
command: CliCommand::Ping {
lamports: 1,
interval: Duration::from_secs(1),
count: Some(2),
timeout: Duration::from_secs(3),


@@ -1997,10 +1997,7 @@ fn read_and_verify_elf(program_location: &str) -> Result<Vec<u8>, Box<dyn std::e
&program_data,
Some(verifier::check),
Config {
reject_unresolved_syscalls: true,
verify_mul64_imm_nonzero: false,
verify_shift32_imm: true,
reject_section_virtual_address_file_offset_mismatch: true,
reject_broken_elfs: true,
..Config::default()
},
register_syscalls(&mut invoke_context).unwrap(),


@@ -1384,7 +1384,13 @@ pub fn process_stake_authorize(
if let Some(authorized) = authorized {
match authorization_type {
StakeAuthorize::Staker => {
check_current_authority(&authorized.staker, &authority.pubkey())?;
// first check authorized withdrawer
check_current_authority(&authorized.withdrawer, &authority.pubkey())
.or_else(|_| {
// ...then check authorized staker. If neither matches, error will
// print the stake key as `expected`
check_current_authority(&authorized.staker, &authority.pubkey())
})?;
}
StakeAuthorize::Withdrawer => {
check_current_authority(&authorized.withdrawer, &authority.pubkey())?;


@@ -1,23 +1,29 @@
use {
solana_client::rpc_client::RpcClient,
solana_sdk::{clock::DEFAULT_MS_PER_SLOT, commitment_config::CommitmentConfig, pubkey::Pubkey},
solana_sdk::{clock::DEFAULT_MS_PER_SLOT, commitment_config::CommitmentConfig},
std::{thread::sleep, time::Duration},
};
pub fn check_recent_balance(expected_balance: u64, client: &RpcClient, pubkey: &Pubkey) {
(0..5).for_each(|tries| {
let balance = client
.get_balance_with_commitment(pubkey, CommitmentConfig::processed())
.unwrap()
.value;
if balance == expected_balance {
return;
}
if tries == 4 {
assert_eq!(balance, expected_balance);
}
sleep(Duration::from_millis(500));
});
#[macro_export]
macro_rules! check_balance {
($expected_balance:expr, $client:expr, $pubkey:expr) => {
(0..5).for_each(|tries| {
let balance = $client
.get_balance_with_commitment($pubkey, CommitmentConfig::processed())
.unwrap()
.value;
if balance == $expected_balance {
return;
}
if tries == 4 {
assert_eq!(balance, $expected_balance);
}
std::thread::sleep(std::time::Duration::from_millis(500));
});
};
($expected_balance:expr, $client:expr, $pubkey:expr,) => {
check_balance!($expected_balance, $client, $pubkey)
};
}
pub fn check_ready(rpc_client: &RpcClient) {


@@ -1,8 +1,10 @@
#![allow(clippy::integer_arithmetic)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
test_utils::{check_ready, check_recent_balance},
test_utils::check_ready,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@@ -14,6 +16,7 @@ use {
solana_sdk::{
commitment_config::CommitmentConfig,
hash::Hash,
native_token::sol_to_lamports,
pubkey::Pubkey,
signature::{keypair_from_seed, Keypair, Signer},
system_program,
@@ -73,10 +76,14 @@ fn full_battery_tests(
&rpc_client,
&config_payer,
&config_payer.signers[0].pubkey(),
2000,
sol_to_lamports(2000.0),
)
.unwrap();
check_recent_balance(2000, &rpc_client, &config_payer.signers[0].pubkey());
check_balance!(
sol_to_lamports(2000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
let mut config_nonce = CliConfig::recent_for_tests();
config_nonce.json_rpc_url = json_rpc_url;
@@ -108,12 +115,16 @@ fn full_battery_tests(
seed,
nonce_authority: optional_authority,
memo: None,
amount: SpendAmount::Some(1000),
amount: SpendAmount::Some(sol_to_lamports(1000.0)),
};
process_command(&config_payer).unwrap();
check_recent_balance(1000, &rpc_client, &config_payer.signers[0].pubkey());
check_recent_balance(1000, &rpc_client, &nonce_account);
check_balance!(
sol_to_lamports(1000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
check_balance!(sol_to_lamports(1000.0), &rpc_client, &nonce_account);
// Get nonce
config_payer.signers.pop();
@@ -161,12 +172,16 @@ fn full_battery_tests(
nonce_authority: index,
memo: None,
destination_account_pubkey: payee_pubkey,
lamports: 100,
lamports: sol_to_lamports(100.0),
};
process_command(&config_payer).unwrap();
check_recent_balance(1000, &rpc_client, &config_payer.signers[0].pubkey());
check_recent_balance(900, &rpc_client, &nonce_account);
check_recent_balance(100, &rpc_client, &payee_pubkey);
check_balance!(
sol_to_lamports(1000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
check_balance!(sol_to_lamports(900.0), &rpc_client, &nonce_account);
check_balance!(sol_to_lamports(100.0), &rpc_client, &payee_pubkey);
// Show nonce account
config_payer.command = CliCommand::ShowNonceAccount {
@@ -208,17 +223,22 @@ fn full_battery_tests(
nonce_authority: 1,
memo: None,
destination_account_pubkey: payee_pubkey,
lamports: 100,
lamports: sol_to_lamports(100.0),
};
process_command(&config_payer).unwrap();
check_recent_balance(1000, &rpc_client, &config_payer.signers[0].pubkey());
check_recent_balance(800, &rpc_client, &nonce_account);
check_recent_balance(200, &rpc_client, &payee_pubkey);
check_balance!(
sol_to_lamports(1000.0),
&rpc_client,
&config_payer.signers[0].pubkey(),
);
check_balance!(sol_to_lamports(800.0), &rpc_client, &nonce_account);
check_balance!(sol_to_lamports(200.0), &rpc_client, &payee_pubkey);
}
#[test]
#[allow(clippy::redundant_closure)]
fn test_create_account_with_seed() {
const ONE_SIG_FEE: f64 = 0.000005;
solana_logger::setup();
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -241,19 +261,27 @@ fn test_create_account_with_seed() {
&rpc_client,
&CliConfig::recent_for_tests(),
&offline_nonce_authority_signer.pubkey(),
42,
sol_to_lamports(42.0),
)
.unwrap();
request_and_confirm_airdrop(
&rpc_client,
&CliConfig::recent_for_tests(),
&online_nonce_creator_signer.pubkey(),
4242,
sol_to_lamports(4242.0),
)
.unwrap();
check_recent_balance(42, &rpc_client, &offline_nonce_authority_signer.pubkey());
check_recent_balance(4242, &rpc_client, &online_nonce_creator_signer.pubkey());
check_recent_balance(0, &rpc_client, &to_address);
check_balance!(
sol_to_lamports(42.0),
&rpc_client,
&offline_nonce_authority_signer.pubkey(),
);
check_balance!(
sol_to_lamports(4242.0),
&rpc_client,
&online_nonce_creator_signer.pubkey(),
);
check_balance!(0, &rpc_client, &to_address);
check_ready(&rpc_client);
@@ -263,7 +291,7 @@ fn test_create_account_with_seed() {
let seed = authority_pubkey.to_string()[0..32].to_string();
let nonce_address =
Pubkey::create_with_seed(&creator_pubkey, &seed, &system_program::id()).unwrap();
check_recent_balance(0, &rpc_client, &nonce_address);
check_balance!(0, &rpc_client, &nonce_address);
let mut creator_config = CliConfig::recent_for_tests();
creator_config.json_rpc_url = test_validator.rpc_url();
@@ -273,13 +301,21 @@ fn test_create_account_with_seed() {
seed: Some(seed),
nonce_authority: Some(authority_pubkey),
memo: None,
amount: SpendAmount::Some(241),
amount: SpendAmount::Some(sol_to_lamports(241.0)),
};
process_command(&creator_config).unwrap();
check_recent_balance(241, &rpc_client, &nonce_address);
check_recent_balance(42, &rpc_client, &offline_nonce_authority_signer.pubkey());
check_recent_balance(4000, &rpc_client, &online_nonce_creator_signer.pubkey());
check_recent_balance(0, &rpc_client, &to_address);
check_balance!(sol_to_lamports(241.0), &rpc_client, &nonce_address);
check_balance!(
sol_to_lamports(42.0),
&rpc_client,
&offline_nonce_authority_signer.pubkey(),
);
check_balance!(
sol_to_lamports(4001.0 - ONE_SIG_FEE),
&rpc_client,
&online_nonce_creator_signer.pubkey(),
);
check_balance!(0, &rpc_client, &to_address);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -299,7 +335,7 @@ fn test_create_account_with_seed() {
authority_config.command = CliCommand::ClusterVersion;
process_command(&authority_config).unwrap_err();
authority_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(10.0)),
to: to_address,
from: 0,
sign_only: true,
@@ -325,7 +361,7 @@ fn test_create_account_with_seed() {
submit_config.json_rpc_url = test_validator.rpc_url();
submit_config.signers = vec![&authority_presigner];
submit_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(10.0)),
to: to_address,
from: 0,
sign_only: false,
@@ -344,8 +380,16 @@ fn test_create_account_with_seed() {
derived_address_program_id: None,
};
process_command(&submit_config).unwrap();
check_recent_balance(241, &rpc_client, &nonce_address);
check_recent_balance(31, &rpc_client, &offline_nonce_authority_signer.pubkey());
check_recent_balance(4000, &rpc_client, &online_nonce_creator_signer.pubkey());
check_recent_balance(10, &rpc_client, &to_address);
check_balance!(sol_to_lamports(241.0), &rpc_client, &nonce_address);
check_balance!(
sol_to_lamports(32.0 - ONE_SIG_FEE),
&rpc_client,
&offline_nonce_authority_signer.pubkey(),
);
check_balance!(
sol_to_lamports(4001.0 - ONE_SIG_FEE),
&rpc_client,
&online_nonce_creator_signer.pubkey(),
);
check_balance!(sol_to_lamports(10.0), &rpc_client, &to_address);
}


@@ -1,3 +1,4 @@
#![allow(clippy::integer_arithmetic)]
use {
serde_json::Value,
solana_cli::{


@@ -1,9 +1,11 @@
#![allow(clippy::integer_arithmetic)]
use {
solana_cli::cli::{process_command, CliCommand, CliConfig},
solana_client::rpc_client::RpcClient,
solana_faucet::faucet::run_local_faucet,
solana_sdk::{
commitment_config::CommitmentConfig,
native_token::sol_to_lamports,
signature::{Keypair, Signer},
},
solana_streamer::socket::SocketAddrSpace,
@@ -22,7 +24,7 @@ fn test_cli_request_airdrop() {
bob_config.json_rpc_url = test_validator.rpc_url();
bob_config.command = CliCommand::Airdrop {
pubkey: None,
lamports: 50,
lamports: sol_to_lamports(50.0),
};
let keypair = Keypair::new();
bob_config.signers = vec![&keypair];
@@ -36,5 +38,5 @@ fn test_cli_request_airdrop() {
let balance = rpc_client
.get_balance(&bob_config.signers[0].pubkey())
.unwrap();
assert_eq!(balance, 50);
assert_eq!(balance, sol_to_lamports(50.0));
}


@@ -1,10 +1,12 @@
#![allow(clippy::integer_arithmetic)]
#![allow(clippy::redundant_closure)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
stake::StakeAuthorizationIndexed,
test_utils::{check_ready, check_recent_balance},
test_utils::check_ready,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@@ -16,6 +18,7 @@ use {
solana_sdk::{
account_utils::StateMut,
commitment_config::CommitmentConfig,
fee::FeeStructure,
nonce::State as NonceState,
pubkey::Pubkey,
signature::{keypair_from_seed, Keypair, Signer},
@@ -150,7 +153,7 @@ fn test_seed_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_validator.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_validator.signers[0].pubkey());
let stake_address = Pubkey::create_with_seed(
&config_validator.signers[0].pubkey(),
@@ -239,7 +242,7 @@ fn test_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_validator.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_validator.signers[0].pubkey());
// Create stake account
config_validator.signers.push(&stake_keypair);
@@ -333,7 +336,7 @@ fn test_offline_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_validator.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_validator.signers[0].pubkey());
request_and_confirm_airdrop(
&rpc_client,
@@ -342,7 +345,7 @@ fn test_offline_stake_delegation_and_deactivation() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_offline.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_offline.signers[0].pubkey());
// Create stake account
config_validator.signers.push(&stake_keypair);
@@ -874,14 +877,15 @@ fn test_stake_authorize() {
#[test]
fn test_stake_authorize_with_fee_payer() {
solana_logger::setup();
const SIG_FEE: u64 = 42;
let fee_one_sig = FeeStructure::default().get_max_fee(1, 0);
let fee_two_sig = FeeStructure::default().get_max_fee(2, 0);
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
let test_validator = TestValidator::with_custom_fees(
mint_pubkey,
SIG_FEE,
1,
Some(faucet_addr),
SocketAddrSpace::Unspecified,
);
@@ -910,14 +914,14 @@ fn test_stake_authorize_with_fee_payer() {
config_offline.command = CliCommand::ClusterVersion;
process_command(&config_offline).unwrap_err();
request_and_confirm_airdrop(&rpc_client, &config, &default_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config, &default_pubkey, 5_000_000).unwrap();
check_balance!(5_000_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_payer, &payer_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &payer_pubkey);
request_and_confirm_airdrop(&rpc_client, &config_payer, &payer_pubkey, 5_000_000).unwrap();
check_balance!(5_000_000, &rpc_client, &payer_pubkey);
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 5_000_000).unwrap();
check_balance!(5_000_000, &rpc_client, &offline_pubkey);
check_ready(&rpc_client);
@@ -932,7 +936,7 @@ fn test_stake_authorize_with_fee_payer() {
withdrawer: None,
withdrawer_signer: None,
lockup: Lockup::default(),
amount: SpendAmount::Some(50_000),
amount: SpendAmount::Some(1_000_000),
sign_only: false,
dump_transaction_message: false,
blockhash_query: BlockhashQuery::All(blockhash_query::Source::Cluster),
@@ -943,8 +947,7 @@ fn test_stake_authorize_with_fee_payer() {
from: 0,
};
process_command(&config).unwrap();
// `config` balance should be 50,000 - 1 stake account sig - 1 fee sig
check_recent_balance(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
check_balance!(4_000_000 - fee_two_sig, &rpc_client, &default_pubkey);
// Assign authority with separate fee payer
config.signers = vec![&default_signer, &payer_keypair];
@@ -968,10 +971,10 @@ fn test_stake_authorize_with_fee_payer() {
};
process_command(&config).unwrap();
// `config` balance has not changed, despite submitting the TX
check_recent_balance(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
check_balance!(4_000_000 - fee_two_sig, &rpc_client, &default_pubkey);
// `config_payer` however has paid `config`'s authority sig
// and `config_payer`'s fee sig
check_recent_balance(100_000 - SIG_FEE - SIG_FEE, &rpc_client, &payer_pubkey);
check_balance!(5_000_000 - fee_two_sig, &rpc_client, &payer_pubkey);
// Assign authority with offline fee payer
let blockhash = rpc_client.get_latest_blockhash().unwrap();
@@ -1019,10 +1022,10 @@ fn test_stake_authorize_with_fee_payer() {
};
process_command(&config).unwrap();
// `config`'s balance again has not changed
check_recent_balance(50_000 - SIG_FEE - SIG_FEE, &rpc_client, &default_pubkey);
check_balance!(4_000_000 - fee_two_sig, &rpc_client, &default_pubkey);
// `config_offline` however has paid 1 sig due to being both authority
// and fee payer
check_recent_balance(100_000 - SIG_FEE, &rpc_client, &offline_pubkey);
check_balance!(5_000_000 - fee_one_sig, &rpc_client, &offline_pubkey);
}
#[test]
@@ -1056,12 +1059,17 @@ fn test_stake_split() {
config_offline.command = CliCommand::ClusterVersion;
process_command(&config_offline).unwrap_err();
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 500_000)
.unwrap();
check_recent_balance(500_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(
&rpc_client,
&config,
&config.signers[0].pubkey(),
50_000_000,
)
.unwrap();
check_balance!(50_000_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 1_000_000).unwrap();
check_balance!(1_000_000, &rpc_client, &offline_pubkey);
// Create stake account, identity is authority
let minimum_stake_balance = rpc_client
@@ -1088,7 +1096,7 @@ fn test_stake_split() {
from: 0,
};
process_command(&config).unwrap();
check_recent_balance(
check_balance!(
10 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
@@ -1108,7 +1116,7 @@ fn test_stake_split() {
amount: SpendAmount::Some(minimum_nonce_balance),
};
process_command(&config).unwrap();
check_recent_balance(minimum_nonce_balance, &rpc_client, &nonce_account.pubkey());
check_balance!(minimum_nonce_balance, &rpc_client, &nonce_account.pubkey());
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -1122,7 +1130,7 @@ fn test_stake_split() {
// Nonced offline split
let split_account = keypair_from_seed(&[2u8; 32]).unwrap();
check_recent_balance(0, &rpc_client, &split_account.pubkey());
check_balance!(0, &rpc_client, &split_account.pubkey());
config_offline.signers.push(&split_account);
config_offline.command = CliCommand::SplitStake {
stake_account_pubkey,
@@ -1162,12 +1170,12 @@ fn test_stake_split() {
fee_payer: 0,
};
process_command(&config).unwrap();
check_recent_balance(
check_balance!(
8 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
);
check_recent_balance(
check_balance!(
2 * minimum_stake_balance,
&rpc_client,
&split_account.pubkey(),
@@ -1205,12 +1213,12 @@ fn test_stake_set_lockup() {
config_offline.command = CliCommand::ClusterVersion;
process_command(&config_offline).unwrap_err();
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 500_000)
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 5_000_000)
.unwrap();
check_recent_balance(500_000, &rpc_client, &config.signers[0].pubkey());
check_balance!(5_000_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 1_000_000).unwrap();
check_balance!(1_000_000, &rpc_client, &offline_pubkey);
// Create stake account, identity is authority
let minimum_stake_balance = rpc_client
@@ -1244,7 +1252,12 @@ fn test_stake_set_lockup() {
from: 0,
};
process_command(&config).unwrap();
check_recent_balance(
check_balance!(
10 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
);
check_balance!(
10 * minimum_stake_balance,
&rpc_client,
&stake_account_pubkey,
@@ -1377,7 +1390,7 @@ fn test_stake_set_lockup() {
amount: SpendAmount::Some(minimum_nonce_balance),
};
process_command(&config).unwrap();
check_recent_balance(minimum_nonce_balance, &rpc_client, &nonce_account_pubkey);
check_balance!(minimum_nonce_balance, &rpc_client, &nonce_account_pubkey);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -1473,10 +1486,10 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
request_and_confirm_airdrop(&rpc_client, &config, &config.signers[0].pubkey(), 200_000)
.unwrap();
check_recent_balance(200_000, &rpc_client, &config.signers[0].pubkey());
check_balance!(200_000, &rpc_client, &config.signers[0].pubkey());
request_and_confirm_airdrop(&rpc_client, &config_offline, &offline_pubkey, 100_000).unwrap();
check_recent_balance(100_000, &rpc_client, &offline_pubkey);
check_balance!(100_000, &rpc_client, &offline_pubkey);
// Create nonce account
let minimum_nonce_balance = rpc_client
@@ -1553,7 +1566,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
from: 0,
};
process_command(&config).unwrap();
check_recent_balance(50_000, &rpc_client, &stake_pubkey);
check_balance!(50_000, &rpc_client, &stake_pubkey);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -1572,7 +1585,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
config_offline.command = CliCommand::WithdrawStake {
stake_account_pubkey: stake_pubkey,
destination_account_pubkey: recipient_pubkey,
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(50_000),
withdraw_authority: 0,
custodian: None,
sign_only: true,
@@ -1591,7 +1604,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
config.command = CliCommand::WithdrawStake {
stake_account_pubkey: stake_pubkey,
destination_account_pubkey: recipient_pubkey,
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(50_000),
withdraw_authority: 0,
custodian: None,
sign_only: false,
@@ -1607,7 +1620,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
fee_payer: 0,
};
process_command(&config).unwrap();
check_recent_balance(42, &rpc_client, &recipient_pubkey);
check_balance!(50_000, &rpc_client, &recipient_pubkey);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -1667,7 +1680,7 @@ fn test_offline_nonced_create_stake_account_and_withdraw() {
process_command(&config).unwrap();
let seed_address =
Pubkey::create_with_seed(&stake_pubkey, seed, &stake::program::id()).unwrap();
check_recent_balance(50_000, &rpc_client, &seed_address);
check_balance!(50_000, &rpc_client, &seed_address);
}
#[test]


@@ -1,9 +1,11 @@
#![allow(clippy::integer_arithmetic)]
#![allow(clippy::redundant_closure)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
test_utils::{check_ready, check_recent_balance},
test_utils::check_ready,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@@ -14,6 +16,8 @@ use {
solana_faucet::faucet::run_local_faucet,
solana_sdk::{
commitment_config::CommitmentConfig,
fee::FeeStructure,
native_token::sol_to_lamports,
nonce::State as NonceState,
pubkey::Pubkey,
signature::{keypair_from_seed, Keypair, NullSigner, Signer},
@@ -26,6 +30,8 @@ use {
#[test]
fn test_transfer() {
solana_logger::setup();
let fee_one_sig = FeeStructure::default().get_max_fee(1, 0);
let fee_two_sig = FeeStructure::default().get_max_fee(2, 0);
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
@@ -49,15 +55,16 @@ fn test_transfer() {
let sender_pubkey = config.signers[0].pubkey();
let recipient_pubkey = Pubkey::new(&[1u8; 32]);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 50_000).unwrap();
check_recent_balance(50_000, &rpc_client, &sender_pubkey);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, sol_to_lamports(5.0))
.unwrap();
check_balance!(sol_to_lamports(5.0), &rpc_client, &sender_pubkey);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
// Plain ole transfer
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(1.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@@ -73,12 +80,16 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(49_989, &rpc_client, &sender_pubkey);
check_recent_balance(10, &rpc_client, &recipient_pubkey);
check_balance!(
sol_to_lamports(4.0) - fee_one_sig,
&rpc_client,
&sender_pubkey
);
check_balance!(sol_to_lamports(1.0), &rpc_client, &recipient_pubkey);
// Plain ole transfer, failure due to InsufficientFundsForSpendAndFee
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(49_989),
amount: SpendAmount::Some(sol_to_lamports(4.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@@ -94,8 +105,12 @@ fn test_transfer() {
derived_address_program_id: None,
};
assert!(process_command(&config).is_err());
check_recent_balance(49_989, &rpc_client, &sender_pubkey);
check_recent_balance(10, &rpc_client, &recipient_pubkey);
check_balance!(
sol_to_lamports(4.0) - fee_one_sig,
&rpc_client,
&sender_pubkey
);
check_balance!(sol_to_lamports(1.0), &rpc_client, &recipient_pubkey);
let mut offline = CliConfig::recent_for_tests();
offline.json_rpc_url = String::default();
@@ -105,13 +120,14 @@ fn test_transfer() {
process_command(&offline).unwrap_err();
let offline_pubkey = offline.signers[0].pubkey();
request_and_confirm_airdrop(&rpc_client, &offline, &offline_pubkey, 50).unwrap();
check_recent_balance(50, &rpc_client, &offline_pubkey);
request_and_confirm_airdrop(&rpc_client, &offline, &offline_pubkey, sol_to_lamports(1.0))
.unwrap();
check_balance!(sol_to_lamports(1.0), &rpc_client, &offline_pubkey);
// Offline transfer
let blockhash = rpc_client.get_latest_blockhash().unwrap();
offline.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.5)),
to: recipient_pubkey,
from: 0,
sign_only: true,
@@ -133,7 +149,7 @@ fn test_transfer() {
let offline_presigner = sign_only.presigner_of(&offline_pubkey).unwrap();
config.signers = vec![&offline_presigner];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.5)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@@ -149,8 +165,12 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(39, &rpc_client, &offline_pubkey);
check_recent_balance(20, &rpc_client, &recipient_pubkey);
check_balance!(
sol_to_lamports(0.5) - fee_one_sig,
&rpc_client,
&offline_pubkey
);
check_balance!(sol_to_lamports(1.5), &rpc_client, &recipient_pubkey);
// Create nonce account
let nonce_account = keypair_from_seed(&[3u8; 32]).unwrap();
@@ -166,7 +186,11 @@ fn test_transfer() {
amount: SpendAmount::Some(minimum_nonce_balance),
};
process_command(&config).unwrap();
check_recent_balance(49_987 - minimum_nonce_balance, &rpc_client, &sender_pubkey);
check_balance!(
sol_to_lamports(4.0) - fee_one_sig - fee_two_sig - minimum_nonce_balance,
&rpc_client,
&sender_pubkey,
);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -181,7 +205,7 @@ fn test_transfer() {
// Nonced transfer
config.signers = vec![&default_signer];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(1.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@@ -200,8 +224,12 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(49_976 - minimum_nonce_balance, &rpc_client, &sender_pubkey);
check_recent_balance(30, &rpc_client, &recipient_pubkey);
check_balance!(
sol_to_lamports(3.0) - 2 * fee_one_sig - fee_two_sig - minimum_nonce_balance,
&rpc_client,
&sender_pubkey,
);
check_balance!(sol_to_lamports(2.5), &rpc_client, &recipient_pubkey);
let new_nonce_hash = nonce_utils::get_account_with_commitment(
&rpc_client,
&nonce_account.pubkey(),
@@ -221,7 +249,11 @@ fn test_transfer() {
new_authority: offline_pubkey,
};
process_command(&config).unwrap();
check_recent_balance(49_975 - minimum_nonce_balance, &rpc_client, &sender_pubkey);
check_balance!(
sol_to_lamports(3.0) - 3 * fee_one_sig - fee_two_sig - minimum_nonce_balance,
&rpc_client,
&sender_pubkey,
);
// Fetch nonce hash
let nonce_hash = nonce_utils::get_account_with_commitment(
@@ -236,7 +268,7 @@ fn test_transfer() {
// Offline, nonced transfer
offline.signers = vec![&default_offline_signer];
offline.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.4)),
to: recipient_pubkey,
from: 0,
sign_only: true,
@@ -257,7 +289,7 @@ fn test_transfer() {
let offline_presigner = sign_only.presigner_of(&offline_pubkey).unwrap();
config.signers = vec![&offline_presigner];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(10),
amount: SpendAmount::Some(sol_to_lamports(0.4)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@@ -276,13 +308,18 @@ fn test_transfer() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(28, &rpc_client, &offline_pubkey);
check_recent_balance(40, &rpc_client, &recipient_pubkey);
check_balance!(
sol_to_lamports(0.1) - 2 * fee_one_sig,
&rpc_client,
&offline_pubkey
);
check_balance!(sol_to_lamports(2.9), &rpc_client, &recipient_pubkey);
}
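
The transfer assertions above are fee-aware: each expected balance subtracts the maximum fee for the number of signatures the transaction carries. A small worked example of the same arithmetic, using only calls that already appear in this diff (it assumes FeeStructure::default().get_max_fee(signatures, write_locks) returns that worst-case fee in lamports):

use solana_sdk::{fee::FeeStructure, native_token::sol_to_lamports};

fn main() {
    let fee_one_sig = FeeStructure::default().get_max_fee(1, 0);
    let fee_two_sig = FeeStructure::default().get_max_fee(2, 0);

    // 1 SOL is 1_000_000_000 lamports, so the 5 SOL airdrop above is 5_000_000_000 lamports.
    assert_eq!(sol_to_lamports(5.0), 5_000_000_000);

    // After sending 1 SOL in a single-signature transaction, the sender is left with
    // 4 SOL minus one signature fee -- exactly the value asserted after the first transfer.
    let after_transfer = sol_to_lamports(5.0) - sol_to_lamports(1.0) - fee_one_sig;
    assert_eq!(after_transfer, sol_to_lamports(4.0) - fee_one_sig);

    // Two-signature transactions (such as the nonce-account creation) cost fee_two_sig instead.
    println!("one-sig fee: {}, two-sig fee: {}", fee_one_sig, fee_two_sig);
}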
#[test]
fn test_transfer_multisession_signing() {
solana_logger::setup();
let fee = FeeStructure::default().get_max_fee(2, 0);
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
@@ -305,19 +342,27 @@ fn test_transfer_multisession_signing() {
&rpc_client,
&CliConfig::recent_for_tests(),
&offline_from_signer.pubkey(),
43,
sol_to_lamports(43.0),
)
.unwrap();
request_and_confirm_airdrop(
&rpc_client,
&CliConfig::recent_for_tests(),
&offline_fee_payer_signer.pubkey(),
3,
sol_to_lamports(1.0) + 2 * fee,
)
.unwrap();
check_recent_balance(43, &rpc_client, &offline_from_signer.pubkey());
check_recent_balance(3, &rpc_client, &offline_fee_payer_signer.pubkey());
check_recent_balance(0, &rpc_client, &to_pubkey);
check_balance!(
sol_to_lamports(43.0),
&rpc_client,
&offline_from_signer.pubkey(),
);
check_balance!(
sol_to_lamports(1.0) + 2 * fee,
&rpc_client,
&offline_fee_payer_signer.pubkey(),
);
check_balance!(0, &rpc_client, &to_pubkey);
check_ready(&rpc_client);
@@ -331,7 +376,7 @@ fn test_transfer_multisession_signing() {
fee_payer_config.command = CliCommand::ClusterVersion;
process_command(&fee_payer_config).unwrap_err();
fee_payer_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(sol_to_lamports(42.0)),
to: to_pubkey,
from: 1,
sign_only: true,
@@ -362,7 +407,7 @@ fn test_transfer_multisession_signing() {
from_config.command = CliCommand::ClusterVersion;
process_command(&from_config).unwrap_err();
from_config.command = CliCommand::Transfer {
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(sol_to_lamports(42.0)),
to: to_pubkey,
from: 1,
sign_only: true,
@@ -390,7 +435,7 @@ fn test_transfer_multisession_signing() {
config.json_rpc_url = test_validator.rpc_url();
config.signers = vec![&fee_payer_presigner, &from_presigner];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(42),
amount: SpendAmount::Some(sol_to_lamports(42.0)),
to: to_pubkey,
from: 1,
sign_only: false,
@@ -407,14 +452,23 @@ fn test_transfer_multisession_signing() {
};
process_command(&config).unwrap();
check_recent_balance(1, &rpc_client, &offline_from_signer.pubkey());
check_recent_balance(1, &rpc_client, &offline_fee_payer_signer.pubkey());
check_recent_balance(42, &rpc_client, &to_pubkey);
check_balance!(
sol_to_lamports(1.0),
&rpc_client,
&offline_from_signer.pubkey(),
);
check_balance!(
sol_to_lamports(1.0) + fee,
&rpc_client,
&offline_fee_payer_signer.pubkey(),
);
check_balance!(sol_to_lamports(42.0), &rpc_client, &to_pubkey);
}
#[test]
fn test_transfer_all() {
solana_logger::setup();
let fee = FeeStructure::default().get_max_fee(1, 0);
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
@@ -437,9 +491,9 @@ fn test_transfer_all() {
let sender_pubkey = config.signers[0].pubkey();
let recipient_pubkey = Pubkey::new(&[1u8; 32]);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 50_000).unwrap();
check_recent_balance(50_000, &rpc_client, &sender_pubkey);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 500_000).unwrap();
check_balance!(500_000, &rpc_client, &sender_pubkey);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
@@ -461,8 +515,8 @@ fn test_transfer_all() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
check_recent_balance(0, &rpc_client, &sender_pubkey);
check_recent_balance(49_999, &rpc_client, &recipient_pubkey);
check_balance!(0, &rpc_client, &sender_pubkey);
check_balance!(500_000 - fee, &rpc_client, &recipient_pubkey);
}
#[test]
@@ -491,8 +545,8 @@ fn test_transfer_unfunded_recipient() {
let recipient_pubkey = Pubkey::new(&[1u8; 32]);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 50_000).unwrap();
check_recent_balance(50_000, &rpc_client, &sender_pubkey);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
check_balance!(50_000, &rpc_client, &sender_pubkey);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
@@ -521,6 +575,7 @@ fn test_transfer_unfunded_recipient() {
#[test]
fn test_transfer_with_seed() {
solana_logger::setup();
let fee = FeeStructure::default().get_max_fee(1, 0);
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
@@ -551,17 +606,19 @@ fn test_transfer_with_seed() {
)
.unwrap();
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, 1).unwrap();
request_and_confirm_airdrop(&rpc_client, &config, &derived_address, 50_000).unwrap();
check_recent_balance(1, &rpc_client, &sender_pubkey);
check_recent_balance(50_000, &rpc_client, &derived_address);
check_recent_balance(0, &rpc_client, &recipient_pubkey);
request_and_confirm_airdrop(&rpc_client, &config, &sender_pubkey, sol_to_lamports(1.0))
.unwrap();
request_and_confirm_airdrop(&rpc_client, &config, &derived_address, sol_to_lamports(5.0))
.unwrap();
check_balance!(sol_to_lamports(1.0), &rpc_client, &sender_pubkey);
check_balance!(sol_to_lamports(5.0), &rpc_client, &derived_address);
check_balance!(0, &rpc_client, &recipient_pubkey);
check_ready(&rpc_client);
// Transfer with seed
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(50_000),
amount: SpendAmount::Some(sol_to_lamports(5.0)),
to: recipient_pubkey,
from: 0,
sign_only: false,
@@ -577,7 +634,7 @@ fn test_transfer_with_seed() {
derived_address_program_id: Some(derived_address_program_id),
};
process_command(&config).unwrap();
check_recent_balance(0, &rpc_client, &sender_pubkey);
check_recent_balance(50_000, &rpc_client, &recipient_pubkey);
check_recent_balance(0, &rpc_client, &derived_address);
check_balance!(sol_to_lamports(1.0) - fee, &rpc_client, &sender_pubkey);
check_balance!(sol_to_lamports(5.0), &rpc_client, &recipient_pubkey);
check_balance!(0, &rpc_client, &derived_address);
}
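
For reference, the derived address funded above is computed deterministically from a base key, a seed string, and an owning program, using the same helper seen in the stake test earlier in this diff. A standalone sketch with placeholder inputs:

use solana_sdk::pubkey::Pubkey;

fn main() {
    // Placeholder values; any base key, seed string, and owner program id will do.
    let base = Pubkey::new_unique();
    let owner = Pubkey::new_unique();
    let derived = Pubkey::create_with_seed(&base, "seed", &owner).unwrap();
    println!("derived address: {}", derived);
}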


@@ -1,8 +1,9 @@
#![allow(clippy::integer_arithmetic)]
use {
solana_cli::{
check_balance,
cli::{process_command, request_and_confirm_airdrop, CliCommand, CliConfig},
spend_utils::SpendAmount,
test_utils::check_recent_balance,
},
solana_cli_output::{parse_sign_only_reply_string, OutputFormat},
solana_client::{
@@ -69,12 +70,12 @@ fn test_vote_authorize_and_withdraw() {
.get_minimum_balance_for_rent_exemption(VoteState::size_of())
.unwrap()
.max(1);
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Transfer in some more SOL
config.signers = vec![&default_signer];
config.command = CliCommand::Transfer {
amount: SpendAmount::Some(1_000),
amount: SpendAmount::Some(10_000),
to: vote_account_pubkey,
from: 0,
sign_only: false,
@@ -90,8 +91,8 @@ fn test_vote_authorize_and_withdraw() {
derived_address_program_id: None,
};
process_command(&config).unwrap();
let expected_balance = expected_balance + 1_000;
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
let expected_balance = expected_balance + 10_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Authorize vote account withdrawal to another signer
let first_withdraw_authority = Keypair::new();
@@ -169,7 +170,7 @@ fn test_vote_authorize_and_withdraw() {
config.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(100),
withdraw_amount: SpendAmount::Some(1_000),
destination_account_pubkey: destination_account,
sign_only: false,
dump_transaction_message: false,
@@ -180,9 +181,9 @@ fn test_vote_authorize_and_withdraw() {
fee_payer: 0,
};
process_command(&config).unwrap();
let expected_balance = expected_balance - 100;
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
check_recent_balance(100, &rpc_client, &destination_account);
let expected_balance = expected_balance - 1_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(1_000, &rpc_client, &destination_account);
// Re-assign validator identity
let new_identity_keypair = Keypair::new();
@@ -212,8 +213,8 @@ fn test_vote_authorize_and_withdraw() {
fee_payer: 0,
};
process_command(&config).unwrap();
check_recent_balance(0, &rpc_client, &vote_account_pubkey);
check_recent_balance(expected_balance, &rpc_client, &destination_account);
check_balance!(0, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &destination_account);
}
#[test]
@@ -247,7 +248,7 @@ fn test_offline_vote_authorize_and_withdraw() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_payer.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_payer.signers[0].pubkey());
request_and_confirm_airdrop(
&rpc_client,
@@ -256,7 +257,7 @@ fn test_offline_vote_authorize_and_withdraw() {
100_000,
)
.unwrap();
check_recent_balance(100_000, &rpc_client, &config_offline.signers[0].pubkey());
check_balance!(100_000, &rpc_client, &config_offline.signers[0].pubkey());
// Create vote account with specific withdrawer
let vote_account_keypair = Keypair::new();
@@ -288,12 +289,12 @@ fn test_offline_vote_authorize_and_withdraw() {
.get_minimum_balance_for_rent_exemption(VoteState::size_of())
.unwrap()
.max(1);
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Transfer in some more SOL
config_payer.signers = vec![&default_signer];
config_payer.command = CliCommand::Transfer {
amount: SpendAmount::Some(1_000),
amount: SpendAmount::Some(10_000),
to: vote_account_pubkey,
from: 0,
sign_only: false,
@@ -309,8 +310,8 @@ fn test_offline_vote_authorize_and_withdraw() {
derived_address_program_id: None,
};
process_command(&config_payer).unwrap();
let expected_balance = expected_balance + 1_000;
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
let expected_balance = expected_balance + 10_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
// Authorize vote account withdrawal to another signer, offline
let withdraw_authority = Keypair::new();
@@ -367,7 +368,7 @@ fn test_offline_vote_authorize_and_withdraw() {
config_offline.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(100),
withdraw_amount: SpendAmount::Some(1_000),
destination_account_pubkey: destination_account,
sign_only: true,
dump_transaction_message: false,
@@ -387,7 +388,7 @@ fn test_offline_vote_authorize_and_withdraw() {
config_payer.command = CliCommand::WithdrawFromVoteAccount {
vote_account_pubkey,
withdraw_authority: 1,
withdraw_amount: SpendAmount::Some(100),
withdraw_amount: SpendAmount::Some(1_000),
destination_account_pubkey: destination_account,
sign_only: false,
dump_transaction_message: false,
@@ -398,9 +399,9 @@ fn test_offline_vote_authorize_and_withdraw() {
fee_payer: 0,
};
process_command(&config_payer).unwrap();
let expected_balance = expected_balance - 100;
check_recent_balance(expected_balance, &rpc_client, &vote_account_pubkey);
check_recent_balance(100, &rpc_client, &destination_account);
let expected_balance = expected_balance - 1_000;
check_balance!(expected_balance, &rpc_client, &vote_account_pubkey);
check_balance!(1_000, &rpc_client, &destination_account);
// Re-assign validator identity offline
let blockhash = rpc_client.get_latest_blockhash().unwrap();
@@ -483,9 +484,7 @@ fn test_offline_vote_authorize_and_withdraw() {
memo: None,
fee_payer: 0,
};
let result = process_command(&config_payer).unwrap();
println!("{:?}", result);
check_recent_balance(0, &rpc_client, &vote_account_pubkey);
println!("what");
check_recent_balance(expected_balance, &rpc_client, &destination_account);
process_command(&config_payer).unwrap();
check_balance!(0, &rpc_client, &vote_account_pubkey);
check_balance!(expected_balance, &rpc_client, &destination_account);
}
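
The expected vote-account balance in these tests is anchored to the rent-exempt minimum for a VoteState, queried from the cluster rather than hard-coded. A standalone sketch of that query, assuming a local test validator is listening on the usual RPC port:

use solana_client::rpc_client::RpcClient;
use solana_vote_program::vote_state::VoteState;

fn main() {
    let rpc_client = RpcClient::new("http://localhost:8899".to_string());
    let rent_exempt_minimum = rpc_client
        .get_minimum_balance_for_rent_exemption(VoteState::size_of())
        .unwrap()
        .max(1);
    println!("rent-exempt vote account minimum: {} lamports", rent_exempt_minimum);
}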


@@ -1,6 +1,6 @@
[package]
name = "solana-client-test"
version = "1.9.3"
version = "1.9.8"
description = "Solana RPC Test"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,24 +12,24 @@ edition = "2021"
[dependencies]
serde_json = "1.0.72"
serial_test = "0.5.1"
solana-client = { path = "../client", version = "=1.9.3" }
solana-ledger = { path = "../ledger", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.9.3" }
solana-metrics = { path = "../metrics", version = "=1.9.3" }
solana-perf = { path = "../perf", version = "=1.9.3" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.9.3" }
solana-rpc = { path = "../rpc", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-streamer = { path = "../streamer", version = "=1.9.3" }
solana-test-validator = { path = "../test-validator", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-client = { path = "../client", version = "=1.9.8" }
solana-ledger = { path = "../ledger", version = "=1.9.8" }
solana-measure = { path = "../measure", version = "=1.9.8" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.9.8" }
solana-metrics = { path = "../metrics", version = "=1.9.8" }
solana-perf = { path = "../perf", version = "=1.9.8" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.9.8" }
solana-rpc = { path = "../rpc", version = "=1.9.8" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-streamer = { path = "../streamer", version = "=1.9.8" }
solana-test-validator = { path = "../test-validator", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
systemstat = "0.1.10"
[dev-dependencies]
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,6 +1,6 @@
[package]
name = "solana-client"
version = "1.9.3"
version = "1.9.8"
description = "Solana Client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -23,15 +23,15 @@ semver = "1.0.4"
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.3" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.3" }
solana-faucet = { path = "../faucet", version = "=1.9.3" }
solana-net-utils = { path = "../net-utils", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.3" }
solana-account-decoder = { path = "../account-decoder", version = "=1.9.8" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.8" }
solana-faucet = { path = "../faucet", version = "=1.9.8" }
solana-net-utils = { path = "../net-utils", version = "=1.9.8" }
solana-measure = { path = "../measure", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.8" }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tungstenite = { version = "0.16.0", features = ["rustls-tls-webpki-roots"] }
@@ -40,7 +40,7 @@ url = "2.2.2"
[dev-dependencies]
assert_matches = "1.5.0"
jsonrpc-http-server = "18.0.0"
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.8" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -37,14 +37,14 @@ impl HttpSender {
///
/// The URL is an HTTP URL, usually for port 8899, as in
/// "http://localhost:8899". The sender has a default timeout of 30 seconds.
pub fn new(url: String) -> Self {
pub fn new<U: ToString>(url: U) -> Self {
Self::new_with_timeout(url, Duration::from_secs(30))
}
/// Create an HTTP RPC sender.
///
/// The URL is an HTTP URL, usually for port 8899.
pub fn new_with_timeout(url: String, timeout: Duration) -> Self {
pub fn new_with_timeout<U: ToString>(url: U, timeout: Duration) -> Self {
// `reqwest::blocking::Client` panics if run in a tokio async context. Shuttle the
// request to a different tokio thread to avoid this
let client = Arc::new(
@@ -58,7 +58,7 @@ impl HttpSender {
Self {
client,
url,
url: url.to_string(),
request_id: AtomicU64::new(0),
stats: RwLock::new(RpcTransportStats::default()),
}


@@ -75,13 +75,13 @@ pub struct MockSender {
/// from [`RpcRequest`] to a JSON [`Value`] response, Any entries in this map
/// override the default behavior for the given request.
impl MockSender {
pub fn new(url: String) -> Self {
pub fn new<U: ToString>(url: U) -> Self {
Self::new_with_mocks(url, Mocks::default())
}
pub fn new_with_mocks(url: String, mocks: Mocks) -> Self {
pub fn new_with_mocks<U: ToString>(url: U, mocks: Mocks) -> Self {
Self {
url,
url: url.to_string(),
mocks: RwLock::new(mocks),
}
}


@@ -305,7 +305,7 @@ impl PubsubClient {
let result = PubsubClientSubscription {
message_type: PhantomData,
operation: "blocks",
operation: "block",
socket,
subscription_id,
t_cleanup: Some(t_cleanup),


@@ -191,7 +191,7 @@ impl RpcClient {
/// let url = "http://localhost:8899".to_string();
/// let client = RpcClient::new(url);
/// ```
pub fn new(url: String) -> Self {
pub fn new<U: ToString>(url: U) -> Self {
Self::new_with_commitment(url, CommitmentConfig::default())
}
@@ -214,7 +214,7 @@ impl RpcClient {
/// let commitment_config = CommitmentConfig::processed();
/// let client = RpcClient::new_with_commitment(url, commitment_config);
/// ```
pub fn new_with_commitment(url: String, commitment_config: CommitmentConfig) -> Self {
pub fn new_with_commitment<U: ToString>(url: U, commitment_config: CommitmentConfig) -> Self {
Self::new_sender(
HttpSender::new(url),
RpcClientConfig::with_commitment(commitment_config),
@@ -240,7 +240,7 @@ impl RpcClient {
/// let timeout = Duration::from_secs(1);
/// let client = RpcClient::new_with_timeout(url, timeout);
/// ```
pub fn new_with_timeout(url: String, timeout: Duration) -> Self {
pub fn new_with_timeout<U: ToString>(url: U, timeout: Duration) -> Self {
Self::new_sender(
HttpSender::new_with_timeout(url, timeout),
RpcClientConfig::with_commitment(CommitmentConfig::default()),
@@ -269,8 +269,8 @@ impl RpcClient {
/// commitment_config,
/// );
/// ```
pub fn new_with_timeout_and_commitment(
url: String,
pub fn new_with_timeout_and_commitment<U: ToString>(
url: U,
timeout: Duration,
commitment_config: CommitmentConfig,
) -> Self {
@@ -312,8 +312,8 @@ impl RpcClient {
/// confirm_transaction_initial_timeout,
/// );
/// ```
pub fn new_with_timeouts_and_commitment(
url: String,
pub fn new_with_timeouts_and_commitment<U: ToString>(
url: U,
timeout: Duration,
commitment_config: CommitmentConfig,
confirm_transaction_initial_timeout: Duration,
@@ -347,7 +347,7 @@ impl RpcClient {
/// let url = "fails".to_string();
/// let successful_client = RpcClient::new_mock(url);
/// ```
pub fn new_mock(url: String) -> Self {
pub fn new_mock<U: ToString>(url: U) -> Self {
Self::new_sender(
MockSender::new(url),
RpcClientConfig::with_commitment(CommitmentConfig::default()),
@@ -381,7 +381,7 @@ impl RpcClient {
/// let url = "succeeds".to_string();
/// let client = RpcClient::new_mock_with_mocks(url, mocks);
/// ```
pub fn new_mock_with_mocks(url: String, mocks: Mocks) -> Self {
pub fn new_mock_with_mocks<U: ToString>(url: U, mocks: Mocks) -> Self {
Self::new_sender(
MockSender::new_with_mocks(url, mocks),
RpcClientConfig::with_commitment(CommitmentConfig::default()),
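
With every URL parameter in this file now generic over ToString, callers can pass a &str (or any other string-like value) instead of building an owned String first. A usage sketch:

use solana_client::rpc_client::RpcClient;
use solana_sdk::commitment_config::CommitmentConfig;

fn main() {
    // Previously these constructors required a String; string literals now work directly.
    let client = RpcClient::new("http://localhost:8899");
    let confirmed =
        RpcClient::new_with_commitment("http://localhost:8899", CommitmentConfig::confirmed());
    let mock = RpcClient::new_mock("succeeds");
    let _ = (client, confirmed, mock);
}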


@@ -290,6 +290,8 @@ pub struct RpcIdentity {
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(rename_all = "camelCase")]
pub struct RpcVote {
/// Vote account address, as base-58 encoded string
pub vote_pubkey: String,
pub slots: Vec<Slot>,
pub hash: String,
pub timestamp: Option<UnixTimestamp>,
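
Because of the rename_all = "camelCase" attribute, the new field is exposed to vote-subscription clients as votePubkey. A self-contained illustration using a stand-in struct (not the real RpcVote, which carries more fields than shown here):

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct VoteNotification {
    vote_pubkey: String,
    slots: Vec<u64>,
}

fn main() {
    let notification = VoteNotification {
        vote_pubkey: "11111111111111111111111111111111".to_string(),
        slots: vec![100, 101],
    };
    // Prints {"votePubkey":"11111111111111111111111111111111","slots":[100,101]}
    println!("{}", serde_json::to_string(&notification).unwrap());
}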


@@ -1,7 +1,7 @@
[package]
name = "solana-core"
description = "Blockchain, Rebuilt for Scale"
version = "1.9.3"
version = "1.9.8"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-core"
readme = "../README.md"
@@ -34,30 +34,32 @@ rayon = "1.5.1"
retain_mut = "0.1.5"
serde = "1.0.130"
serde_derive = "1.0.103"
solana-accountsdb-plugin-manager = { path = "../accountsdb-plugin-manager", version = "=1.9.3" }
solana-client = { path = "../client", version = "=1.9.3" }
solana-entry = { path = "../entry", version = "=1.9.3" }
solana-gossip = { path = "../gossip", version = "=1.9.3" }
solana-ledger = { path = "../ledger", version = "=1.9.3" }
solana-logger = { path = "../logger", version = "=1.9.3" }
solana-measure = { path = "../measure", version = "=1.9.3" }
solana-metrics = { path = "../metrics", version = "=1.9.3" }
solana-net-utils = { path = "../net-utils", version = "=1.9.3" }
solana-perf = { path = "../perf", version = "=1.9.3" }
solana-poh = { path = "../poh", version = "=1.9.3" }
solana-rpc = { path = "../rpc", version = "=1.9.3" }
solana-replica-lib = { path = "../replica-lib", version = "=1.9.3" }
solana-runtime = { path = "../runtime", version = "=1.9.3" }
solana-sdk = { path = "../sdk", version = "=1.9.3" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.9.3" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.9.3" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.9.3" }
solana-streamer = { path = "../streamer", version = "=1.9.3" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.3" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.3" }
solana-accountsdb-plugin-manager = { path = "../accountsdb-plugin-manager", version = "=1.9.8" }
solana-bloom = { path = "../bloom", version = "=1.9.8" }
solana-client = { path = "../client", version = "=1.9.8" }
solana-entry = { path = "../entry", version = "=1.9.8" }
solana-gossip = { path = "../gossip", version = "=1.9.8" }
solana-ledger = { path = "../ledger", version = "=1.9.8" }
solana-logger = { path = "../logger", version = "=1.9.8" }
solana-measure = { path = "../measure", version = "=1.9.8" }
solana-metrics = { path = "../metrics", version = "=1.9.8" }
solana-net-utils = { path = "../net-utils", version = "=1.9.8" }
solana-perf = { path = "../perf", version = "=1.9.8" }
solana-poh = { path = "../poh", version = "=1.9.8" }
solana-program-runtime = { path = "../program-runtime", version = "=1.9.8" }
solana-rpc = { path = "../rpc", version = "=1.9.8" }
solana-replica-lib = { path = "../replica-lib", version = "=1.9.8" }
solana-runtime = { path = "../runtime", version = "=1.9.8" }
solana-sdk = { path = "../sdk", version = "=1.9.8" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.9.8" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.9.8" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.9.8" }
solana-streamer = { path = "../streamer", version = "=1.9.8" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.8" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.8" }
tempfile = "3.2.0"
thiserror = "1.0"
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.9.3" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.9.8" }
sys-info = "0.9.1"
tokio = { version = "1", features = ["full"] }
trees = "0.4.2"
@@ -71,9 +73,9 @@ matches = "0.1.9"
reqwest = { version = "0.11.6", default-features = false, features = ["blocking", "rustls-tls", "json"] }
serde_json = "1.0.72"
serial_test = "0.5.1"
solana-program-runtime = { path = "../program-runtime", version = "=1.9.3" }
solana-stake-program = { path = "../programs/stake", version = "=1.9.3" }
solana-version = { path = "../version", version = "=1.9.3" }
solana-program-runtime = { path = "../program-runtime", version = "=1.9.8" }
solana-stake-program = { path = "../programs/stake", version = "=1.9.8" }
solana-version = { path = "../version", version = "=1.9.8" }
static_assertions = "1.1.0"
systemstat = "0.1.10"


@@ -10,6 +10,7 @@ use {
rayon::prelude::*,
solana_core::{
banking_stage::{BankingStage, BankingStageStats},
leader_slot_banking_stage_metrics::LeaderSlotMetricsTracker,
qos_service::QosService,
},
solana_entry::entry::{next_hash, Entry},
@@ -70,7 +71,7 @@ fn bench_consume_buffered(bencher: &mut Bencher) {
Blockstore::open(&ledger_path).expect("Expected to be able to open database ledger"),
);
let (exit, poh_recorder, poh_service, _signal_receiver) =
create_test_recorder(&bank, &blockstore, None);
create_test_recorder(&bank, &blockstore, None, None);
let recorder = poh_recorder.lock().unwrap().recorder();
@@ -98,6 +99,7 @@ fn bench_consume_buffered(bencher: &mut Bencher) {
&BankingStageStats::default(),
&recorder,
&Arc::new(QosService::new(Arc::new(RwLock::new(CostModel::default())))),
&mut LeaderSlotMetricsTracker::new(0),
);
});
@@ -174,7 +176,7 @@ fn bench_banking(bencher: &mut Bencher, tx_type: TransactionType) {
// set cost tracker limits to MAX so it will not filter out TXs
bank.write_cost_tracker()
.unwrap()
.set_limits(std::u64::MAX, std::u64::MAX);
.set_limits(std::u64::MAX, std::u64::MAX, std::u64::MAX);
debug!("threads: {} txs: {}", num_threads, txes);
@@ -213,7 +215,7 @@ fn bench_banking(bencher: &mut Bencher, tx_type: TransactionType) {
Blockstore::open(&ledger_path).expect("Expected to be able to open database ledger"),
);
let (exit, poh_recorder, poh_service, signal_receiver) =
create_test_recorder(&bank, &blockstore, None);
create_test_recorder(&bank, &blockstore, None, None);
let cluster_info = ClusterInfo::new(
Node::new_localhost().info,
Arc::new(Keypair::new()),


@@ -1,4 +1,5 @@
#![feature(test)]
#![allow(clippy::integer_arithmetic)]
extern crate solana_core;
extern crate test;
@@ -8,7 +9,7 @@ use {
log::*,
rand::{thread_rng, Rng},
solana_core::{sigverify::TransactionSigVerifier, sigverify_stage::SigVerifyStage},
solana_perf::{packet::to_packet_batches, test_tx::test_tx},
solana_perf::{packet::to_packet_batches, packet::PacketBatch, test_tx::test_tx},
solana_sdk::{
hash::Hash,
signature::{Keypair, Signer},
@@ -22,8 +23,7 @@ use {
test::Bencher,
};
#[bench]
fn bench_packet_discard(bencher: &mut Bencher) {
fn run_bench_packet_discard(num_ips: usize, bencher: &mut Bencher) {
solana_logger::setup();
let len = 30 * 1000;
let chunk_size = 1024;
@@ -32,7 +32,7 @@ fn bench_packet_discard(bencher: &mut Bencher) {
let mut total = 0;
let ips: Vec<_> = (0..10_000)
let ips: Vec<_> = (0..num_ips)
.into_iter()
.map(|_| {
let mut addr = [0u16; 8];
@@ -52,27 +52,70 @@ fn bench_packet_discard(bencher: &mut Bencher) {
bencher.iter(move || {
SigVerifyStage::discard_excess_packets(&mut batches, 10_000);
let mut num_packets = 0;
for batch in batches.iter_mut() {
for p in batch.packets.iter_mut() {
if !p.meta.discard() {
num_packets += 1;
}
p.meta.set_discard(false);
}
}
assert_eq!(num_packets, 10_000);
});
}
#[bench]
fn bench_sigverify_stage(bencher: &mut Bencher) {
solana_logger::setup();
let (packet_s, packet_r) = channel();
let (verified_s, verified_r) = unbounded();
let verifier = TransactionSigVerifier::default();
let stage = SigVerifyStage::new(packet_r, verified_s, verifier);
fn bench_packet_discard_many_senders(bencher: &mut Bencher) {
run_bench_packet_discard(1000, bencher);
}
let now = Instant::now();
#[bench]
fn bench_packet_discard_single_sender(bencher: &mut Bencher) {
run_bench_packet_discard(1, bencher);
}
#[bench]
fn bench_packet_discard_mixed_senders(bencher: &mut Bencher) {
const SIZE: usize = 30 * 1000;
const CHUNK_SIZE: usize = 1024;
fn new_rand_addr<R: Rng>(rng: &mut R) -> std::net::IpAddr {
let mut addr = [0u16; 8];
rng.fill(&mut addr);
std::net::IpAddr::from(addr)
}
let mut rng = thread_rng();
let mut batches = to_packet_batches(&vec![test_tx(); SIZE], CHUNK_SIZE);
let spam_addr = new_rand_addr(&mut rng);
for batch in batches.iter_mut() {
for packet in batch.packets.iter_mut() {
// One spam address, ~1000 unique addresses.
packet.meta.addr = if rng.gen_ratio(1, 30) {
new_rand_addr(&mut rng)
} else {
spam_addr
}
}
}
bencher.iter(move || {
SigVerifyStage::discard_excess_packets(&mut batches, 10_000);
let mut num_packets = 0;
for batch in batches.iter_mut() {
for packet in batch.packets.iter_mut() {
if !packet.meta.discard() {
num_packets += 1;
}
packet.meta.set_discard(false);
}
}
assert_eq!(num_packets, 10_000);
});
}
fn gen_batches(use_same_tx: bool) -> Vec<PacketBatch> {
let len = 4096;
let use_same_tx = true;
let chunk_size = 1024;
let mut batches = if use_same_tx {
if use_same_tx {
let tx = test_tx();
to_packet_batches(&vec![tx; len], chunk_size)
} else {
@@ -90,14 +133,28 @@ fn bench_sigverify_stage(bencher: &mut Bencher) {
})
.collect();
to_packet_batches(&txs, chunk_size)
};
}
}
trace!(
"starting... generation took: {} ms batches: {}",
duration_as_ms(&now.elapsed()),
batches.len()
);
#[bench]
fn bench_sigverify_stage(bencher: &mut Bencher) {
solana_logger::setup();
trace!("start");
let (packet_s, packet_r) = channel();
let (verified_s, verified_r) = unbounded();
let verifier = TransactionSigVerifier::default();
let stage = SigVerifyStage::new(packet_r, verified_s, verifier);
let use_same_tx = true;
bencher.iter(move || {
let now = Instant::now();
let mut batches = gen_batches(use_same_tx);
trace!(
"starting... generation took: {} ms batches: {}",
duration_as_ms(&now.elapsed()),
batches.len()
);
let mut sent_len = 0;
for _ in 0..batches.len() {
if let Some(batch) = batches.pop() {
@@ -113,7 +170,7 @@ fn bench_sigverify_stage(bencher: &mut Bencher) {
received += v.packets.len();
batches.push(v);
}
if received >= sent_len {
if use_same_tx || received >= sent_len {
break;
}
}
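
Both discard benchmarks build their workload with to_packet_batches, which splits a slice of transactions into packet batches of bounded size before SigVerifyStage::discard_excess_packets trims the total back down to 10_000. Based only on the calls visible here, the setup reduces to roughly:

use solana_perf::{packet::to_packet_batches, test_tx::test_tx};

fn main() {
    // 30_000 identical test transactions, chunked into batches of at most 1_024 packets each.
    let batches = to_packet_batches(&vec![test_tx(); 30 * 1000], 1024);
    let total_packets: usize = batches.iter().map(|batch| batch.packets.len()).sum();
    println!("{} batches holding {} packets", batches.len(), total_packets);
}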

File diff suppressed because it is too large


@@ -32,6 +32,7 @@ use {
bank_forks::BankForks,
commitment::VOTE_THRESHOLD_SIZE,
epoch_stakes::EpochStakes,
vote_parser,
vote_sender_types::{ReplayVoteReceiver, ReplayedVote},
},
solana_sdk::{
@@ -42,7 +43,7 @@ use {
slot_hashes,
transaction::Transaction,
},
solana_vote_program::{self, vote_state::Vote, vote_transaction},
solana_vote_program::vote_state::Vote,
std::{
collections::{HashMap, HashSet},
iter::repeat,
@@ -102,12 +103,6 @@ pub struct VoteTracker {
}
impl VoteTracker {
pub(crate) fn new(root_bank: &Bank) -> Self {
let vote_tracker = VoteTracker::default();
vote_tracker.progress_with_new_root_bank(root_bank);
vote_tracker
}
fn get_or_insert_slot_tracker(&self, slot: Slot) -> Arc<RwLock<SlotVoteTracker>> {
if let Some(slot_vote_tracker) = self.slot_vote_trackers.read().unwrap().get(&slot) {
return slot_vote_tracker.clone();
@@ -311,7 +306,7 @@ impl ClusterInfoVoteListener {
!packet_batch.packets[0].meta.discard()
})
.filter_map(|(tx, packet_batch)| {
let (vote_account_key, vote, _) = vote_transaction::parse_vote_transaction(&tx)?;
let (vote_account_key, vote, _) = vote_parser::parse_vote_transaction(&tx)?;
let slot = vote.last_voted_slot()?;
let epoch = epoch_schedule.get_epoch(slot);
let authorized_voter = root_bank
@@ -538,17 +533,14 @@ impl ClusterInfoVoteListener {
let mut sel = Select::new();
sel.recv(gossip_vote_txs_receiver);
sel.recv(replay_votes_receiver);
let mut remaining_wait_time = 200;
loop {
if remaining_wait_time == 0 {
break;
}
let mut remaining_wait_time = Duration::from_millis(200);
while remaining_wait_time > Duration::ZERO {
let start = Instant::now();
// Wait for one of the receivers to be ready. `ready_timeout`
// will return if channels either have something, or are
// disconnected. `ready_timeout` can wake up spuriously,
// hence the loop
let _ = sel.ready_timeout(Duration::from_millis(remaining_wait_time))?;
let _ = sel.ready_timeout(remaining_wait_time)?;
// Should not early return from this point onwards until `process_votes()`
// returns below to avoid missing any potential `optimistic_confirmed_slots`
@@ -566,10 +558,8 @@ impl ClusterInfoVoteListener {
bank_notification_sender,
cluster_confirmed_slot_sender,
));
} else {
remaining_wait_time = remaining_wait_time
.saturating_sub(std::cmp::max(start.elapsed().as_millis() as u64, 1));
}
remaining_wait_time = remaining_wait_time.saturating_sub(start.elapsed());
}
Ok(vec![])
}
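
The rewritten wait loop above keeps its budget as a Duration and relies on saturating_sub so the remaining time can never underflow. A standalone sketch of the same pattern, using a plain mpsc receiver in place of the crossbeam Select used above (the function name is hypothetical):

use std::{
    sync::mpsc::Receiver,
    time::{Duration, Instant},
};

fn drain_for_200ms<T>(receiver: &Receiver<T>) -> Vec<T> {
    let mut collected = Vec::new();
    let mut remaining_wait_time = Duration::from_millis(200);
    while remaining_wait_time > Duration::ZERO {
        let start = Instant::now();
        // Block for at most the remaining budget; a timeout or disconnect ends the loop.
        match receiver.recv_timeout(remaining_wait_time) {
            Ok(item) => collected.push(item),
            Err(_) => break,
        }
        // saturating_sub keeps the budget at zero instead of panicking on underflow.
        remaining_wait_time = remaining_wait_time.saturating_sub(start.elapsed());
    }
    collected
}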
@@ -683,7 +673,7 @@ impl ClusterInfoVoteListener {
}
if is_new_vote {
subscriptions.notify_vote(&vote);
subscriptions.notify_vote(*vote_pubkey, &vote);
let _ = verified_vote_sender.send((*vote_pubkey, vote.slots));
}
}
@@ -705,7 +695,7 @@ impl ClusterInfoVoteListener {
// Process votes from gossip and ReplayStage
let votes = gossip_vote_txs
.iter()
.filter_map(vote_transaction::parse_vote_transaction)
.filter_map(vote_parser::parse_vote_transaction)
.zip(repeat(/*is_gossip:*/ true))
.chain(replayed_votes.into_iter().zip(repeat(/*is_gossip:*/ false)));
for ((vote_pubkey, vote, _), is_gossip) in votes {
@@ -823,7 +813,7 @@ mod tests {
pubkey::Pubkey,
signature::{Keypair, Signature, Signer},
},
solana_vote_program::vote_state::Vote,
solana_vote_program::{vote_state::Vote, vote_transaction},
std::{
collections::BTreeSet,
iter::repeat_with,
@@ -1368,7 +1358,7 @@ mod tests {
let exit = Arc::new(AtomicBool::new(false));
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let bank = bank_forks.read().unwrap().get(0).unwrap().clone();
let vote_tracker = VoteTracker::new(&bank);
let vote_tracker = VoteTracker::default();
let optimistically_confirmed_bank =
OptimisticallyConfirmedBank::locked_from_bank_forks_root(&bank_forks);
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
@@ -1475,7 +1465,7 @@ mod tests {
vec![100; validator_voting_keypairs.len()],
);
let bank = Bank::new_for_tests(&genesis_config);
let vote_tracker = VoteTracker::new(&bank);
let vote_tracker = VoteTracker::default();
let exit = Arc::new(AtomicBool::new(false));
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let bank = bank_forks.read().unwrap().get(0).unwrap().clone();


@@ -2,15 +2,13 @@ use {
crate::{broadcast_stage::BroadcastStage, retransmit_stage::RetransmitStage},
itertools::Itertools,
lru::LruCache,
rand::{Rng, SeedableRng},
rand::SeedableRng,
rand_chacha::ChaChaRng,
solana_gossip::{
cluster_info::{compute_retransmit_peers, ClusterInfo},
contact_info::ContactInfo,
crds_gossip_pull::CRDS_GOSSIP_PULL_CRDS_TIMEOUT_MS,
weighted_shuffle::{
weighted_best, weighted_sample_single, weighted_shuffle, WeightedShuffle,
},
weighted_shuffle::{weighted_best, weighted_shuffle, WeightedShuffle},
},
solana_ledger::shred::Shred,
solana_runtime::bank::Bank,
@@ -51,13 +49,13 @@ pub struct ClusterNodes<T> {
// All staked nodes + other known tvu-peers + the node itself;
// sorted by (stake, pubkey) in descending order.
nodes: Vec<Node>,
// Cumulative stakes (excluding the node itself), used for sampling
// broadcast peers.
cumulative_weights: Vec<u64>,
// Reverse index from nodes pubkey to their index in self.nodes.
index: HashMap<Pubkey, /*index:*/ usize>,
weighted_shuffle: WeightedShuffle</*stake:*/ u64>,
// Weights and indices for sampling peers. weighted_{shuffle,best} expect
// weights >= 1. For backward compatibility we use max(1, stake) for
// weights and exclude nodes with no contact-info.
index: Vec<(/*weight:*/ u64, /*index:*/ usize)>,
compat_index: Vec<(/*weight:*/ u64, /*index:*/ usize)>,
_phantom: PhantomData<T>,
}
@@ -90,12 +88,12 @@ impl Node {
impl<T> ClusterNodes<T> {
pub(crate) fn num_peers(&self) -> usize {
self.index.len()
self.compat_index.len()
}
// A peer is considered live if they generated their contact info recently.
pub(crate) fn num_peers_live(&self, now: u64) -> usize {
self.index
self.compat_index
.iter()
.filter_map(|(_, index)| self.nodes[*index].contact_info())
.filter(|node| {
@@ -133,7 +131,7 @@ impl ClusterNodes<BroadcastStage> {
return Vec::default();
}
let mut rng = ChaChaRng::from_seed(shred_seed);
let index = match weighted_sample_single(&mut rng, &self.cumulative_weights) {
let index = match self.weighted_shuffle.first(&mut rng) {
None => return Vec::default(),
Some(index) => index,
};
@@ -146,16 +144,16 @@ impl ClusterNodes<BroadcastStage> {
return vec![node.tvu];
}
}
let nodes: Vec<_> = self
.nodes
.iter()
.filter(|node| node.pubkey() != self.pubkey)
let mut rng = ChaChaRng::from_seed(shred_seed);
let nodes: Vec<&Node> = self
.weighted_shuffle
.clone()
.shuffle(&mut rng)
.map(|index| &self.nodes[index])
.collect();
if nodes.is_empty() {
return Vec::default();
}
let mut rng = ChaChaRng::from_seed(shred_seed);
let nodes = shuffle_nodes(&mut rng, &nodes);
let (neighbors, children) = compute_retransmit_peers(fanout, 0, &nodes);
neighbors[..1]
.iter()
@@ -177,10 +175,10 @@ impl ClusterNodes<BroadcastStage> {
/// Returns the root of turbine broadcast tree, which the leader sends the
/// shred to.
fn get_broadcast_peer(&self, shred_seed: [u8; 32]) -> Option<&ContactInfo> {
if self.index.is_empty() {
if self.compat_index.is_empty() {
None
} else {
let index = weighted_best(&self.index, shred_seed);
let index = weighted_best(&self.compat_index, shred_seed);
match &self.nodes[index].node {
NodeId::ContactInfo(node) => Some(node),
NodeId::Pubkey(_) => panic!("this should not happen!"),
@@ -235,18 +233,18 @@ impl ClusterNodes<RetransmitStage> {
if !enable_turbine_peers_shuffle_patch(shred.slot(), root_bank) {
return self.get_retransmit_peers_compat(shred_seed, fanout, slot_leader);
}
let mut weighted_shuffle = self.weighted_shuffle.clone();
// Exclude slot leader from list of nodes.
let nodes: Vec<_> = if slot_leader == self.pubkey {
if slot_leader == self.pubkey {
error!("retransmit from slot leader: {}", slot_leader);
self.nodes.iter().collect()
} else {
self.nodes
.iter()
.filter(|node| node.pubkey() != slot_leader)
.collect()
} else if let Some(index) = self.index.get(&slot_leader) {
weighted_shuffle.remove_index(*index);
};
let mut rng = ChaChaRng::from_seed(shred_seed);
let nodes = shuffle_nodes(&mut rng, &nodes);
let nodes: Vec<_> = weighted_shuffle
.shuffle(&mut rng)
.map(|index| &self.nodes[index])
.collect();
let self_index = nodes
.iter()
.position(|node| node.pubkey() == self.pubkey)
@@ -270,9 +268,9 @@ impl ClusterNodes<RetransmitStage> {
// Exclude leader from list of nodes.
let (weights, index): (Vec<u64>, Vec<usize>) = if slot_leader == self.pubkey {
error!("retransmit from slot leader: {}", slot_leader);
self.index.iter().copied().unzip()
self.compat_index.iter().copied().unzip()
} else {
self.index
self.compat_index
.iter()
.filter(|(_, i)| self.nodes[*i].pubkey() != slot_leader)
.copied()
@@ -299,49 +297,30 @@ impl ClusterNodes<RetransmitStage> {
}
}
fn build_cumulative_weights(self_pubkey: Pubkey, nodes: &[Node]) -> Vec<u64> {
let cumulative_stakes: Vec<_> = nodes
.iter()
.scan(0, |acc, node| {
if node.pubkey() != self_pubkey {
*acc += node.stake;
}
Some(*acc)
})
.collect();
if cumulative_stakes.last() != Some(&0) {
return cumulative_stakes;
}
nodes
.iter()
.scan(0, |acc, node| {
if node.pubkey() != self_pubkey {
*acc += 1;
}
Some(*acc)
})
.collect()
}
fn new_cluster_nodes<T: 'static>(
cluster_info: &ClusterInfo,
stakes: &HashMap<Pubkey, u64>,
) -> ClusterNodes<T> {
let self_pubkey = cluster_info.id();
let nodes = get_nodes(cluster_info, stakes);
let index: HashMap<_, _> = nodes
.iter()
.enumerate()
.map(|(ix, node)| (node.pubkey(), ix))
.collect();
let broadcast = TypeId::of::<T>() == TypeId::of::<BroadcastStage>();
let cumulative_weights = if broadcast {
build_cumulative_weights(self_pubkey, &nodes)
} else {
Vec::default()
};
let stakes: Vec<u64> = nodes.iter().map(|node| node.stake).collect();
let mut weighted_shuffle = WeightedShuffle::new(&stakes).unwrap();
if broadcast {
weighted_shuffle.remove_index(index[&self_pubkey]);
}
// For backward compatibility:
// * nodes which do not have contact-info are excluded.
// * stakes are floored at 1.
// The sorting key here should be equivalent to
// solana_gossip::deprecated::sorted_stakes_with_index.
// Leader itself is excluded when sampling broadcast peers.
let index = nodes
let compat_index = nodes
.iter()
.enumerate()
.filter(|(_, node)| node.contact_info().is_some())
@@ -352,8 +331,9 @@ fn new_cluster_nodes<T: 'static>(
ClusterNodes {
pubkey: self_pubkey,
nodes,
cumulative_weights,
index,
weighted_shuffle,
compat_index,
_phantom: PhantomData::default(),
}
}
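
For context, new_cluster_nodes above replaces the old cumulative-weight bookkeeping with a reusable WeightedShuffle built once from the raw stakes. Based only on the calls visible in this diff, the sampling pattern reduces to roughly the following (a sketch, not the full solana_gossip API):

use rand::SeedableRng;
use rand_chacha::ChaChaRng;
use solana_gossip::weighted_shuffle::WeightedShuffle;

fn stake_ordered_indices(stakes: &[u64], excluded: usize, seed: [u8; 32]) -> Vec<usize> {
    // Build the shuffler from the stakes, drop the excluded node (self or the slot
    // leader), then draw a deterministic, stake-weighted ordering of node indices.
    let mut weighted_shuffle = WeightedShuffle::new(stakes).unwrap();
    weighted_shuffle.remove_index(excluded);
    let mut rng = ChaChaRng::from_seed(seed);
    weighted_shuffle.shuffle(&mut rng).collect()
}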
@@ -406,29 +386,6 @@ fn enable_turbine_peers_shuffle_patch(shred_slot: Slot, root_bank: &Bank) -> boo
}
}
// Shuffles nodes w.r.t their stakes.
// Unstaked nodes will always appear at the very end.
fn shuffle_nodes<'a, R: Rng>(rng: &mut R, nodes: &[&'a Node]) -> Vec<&'a Node> {
// Nodes are sorted by (stake, pubkey) in descending order.
let stakes: Vec<u64> = nodes
.iter()
.map(|node| node.stake)
.take_while(|stake| *stake > 0)
.collect();
let num_staked = stakes.len();
let mut out: Vec<_> = WeightedShuffle::new(rng, &stakes)
.unwrap()
.map(|i| nodes[i])
.collect();
let weights = vec![1; nodes.len() - num_staked];
out.extend(
WeightedShuffle::new(rng, &weights)
.unwrap()
.map(|i| nodes[i + num_staked]),
);
out
}
impl<T> ClusterNodesCache<T> {
pub fn new(
// Capacity of underlying LRU-cache in terms of number of epochs.
@@ -505,18 +462,6 @@ impl From<Pubkey> for NodeId {
}
}
impl<T> Default for ClusterNodes<T> {
fn default() -> Self {
Self {
pubkey: Pubkey::default(),
nodes: Vec::default(),
cumulative_weights: Vec::default(),
index: Vec::default(),
_phantom: PhantomData::default(),
}
}
}
#[cfg(test)]
mod tests {
use {
@@ -608,7 +553,7 @@ mod tests {
assert_eq!(cluster_info.tvu_peers().len(), nodes.len() - 1);
let cluster_nodes = new_cluster_nodes::<RetransmitStage>(&cluster_info, &stakes);
// All nodes with contact-info should be in the index.
assert_eq!(cluster_nodes.index.len(), nodes.len());
assert_eq!(cluster_nodes.compat_index.len(), nodes.len());
// Staked nodes with no contact-info should be included.
assert!(cluster_nodes.nodes.len() > nodes.len());
// Assert that all nodes keep their contact-info.
@@ -631,9 +576,9 @@ mod tests {
let (peers, stakes_and_index) =
sorted_retransmit_peers_and_stakes(&cluster_info, Some(&stakes));
assert_eq!(stakes_and_index.len(), peers.len());
assert_eq!(cluster_nodes.index.len(), peers.len());
assert_eq!(cluster_nodes.compat_index.len(), peers.len());
for (i, node) in cluster_nodes
.index
.compat_index
.iter()
.map(|(_, i)| &cluster_nodes.nodes[*i])
.enumerate()
@@ -689,7 +634,7 @@ mod tests {
let cluster_nodes = ClusterNodes::<BroadcastStage>::new(&cluster_info, &stakes);
// All nodes with contact-info should be in the index.
// Excluding this node itself.
assert_eq!(cluster_nodes.index.len() + 1, nodes.len());
assert_eq!(cluster_nodes.compat_index.len() + 1, nodes.len());
// Staked nodes with no contact-info should be included.
assert!(cluster_nodes.nodes.len() > nodes.len());
// Assert that all nodes keep their contact-info.
@@ -711,9 +656,9 @@ mod tests {
}
let (peers, peers_and_stakes) = get_broadcast_peers(&cluster_info, Some(&stakes));
assert_eq!(peers_and_stakes.len(), peers.len());
assert_eq!(cluster_nodes.index.len(), peers.len());
assert_eq!(cluster_nodes.compat_index.len(), peers.len());
for (i, node) in cluster_nodes
.index
.compat_index
.iter()
.map(|(_, i)| &cluster_nodes.nodes[*i])
.enumerate()


@@ -6,22 +6,18 @@
use {
solana_ledger::blockstore::Blockstore,
solana_measure::measure::Measure,
solana_runtime::{
bank::{Bank, ExecuteTimings},
cost_model::CostModel,
},
solana_program_runtime::timings::ExecuteTimings,
solana_runtime::{bank::Bank, cost_model::CostModel},
solana_sdk::timing::timestamp,
std::{
sync::{
atomic::{AtomicBool, Ordering},
mpsc::Receiver,
Arc, RwLock,
},
sync::{mpsc::Receiver, Arc, RwLock},
thread::{self, Builder, JoinHandle},
time::Duration,
},
};
// Update blockstore persistence storage when accumulated cost_table updates count exceeds the threshold
const PERSIST_THRESHOLD: u64 = 1_000;
#[derive(Default)]
pub struct CostUpdateServiceTiming {
last_print: u64,
@@ -33,20 +29,25 @@ pub struct CostUpdateServiceTiming {
impl CostUpdateServiceTiming {
fn update(
&mut self,
update_cost_model_count: u64,
update_cost_model_elapsed: u64,
persist_cost_table_elapsed: u64,
update_cost_model_count: Option<u64>,
update_cost_model_elapsed: Option<u64>,
persist_cost_table_elapsed: Option<u64>,
) {
self.update_cost_model_count += update_cost_model_count;
self.update_cost_model_elapsed += update_cost_model_elapsed;
self.persist_cost_table_elapsed += persist_cost_table_elapsed;
if let Some(update_cost_model_count) = update_cost_model_count {
self.update_cost_model_count += update_cost_model_count;
}
if let Some(update_cost_model_elapsed) = update_cost_model_elapsed {
self.update_cost_model_elapsed += update_cost_model_elapsed;
}
if let Some(persist_cost_table_elapsed) = persist_cost_table_elapsed {
self.persist_cost_table_elapsed += persist_cost_table_elapsed;
}
let now = timestamp();
let elapsed_ms = now - self.last_print;
if elapsed_ms > 1000 {
datapoint_info!(
"cost-update-service-stats",
("total_elapsed_us", elapsed_ms * 1000, i64),
(
"update_cost_model_count",
self.update_cost_model_count as i64,
@@ -71,8 +72,12 @@ impl CostUpdateServiceTiming {
}
pub enum CostUpdate {
FrozenBank { bank: Arc<Bank> },
ExecuteTiming { execute_timings: ExecuteTimings },
FrozenBank {
bank: Arc<Bank>,
},
ExecuteTiming {
execute_timings: Box<ExecuteTimings>,
},
}
pub type CostUpdateReceiver = Receiver<CostUpdate>;
@@ -84,7 +89,6 @@ pub struct CostUpdateService {
impl CostUpdateService {
#[allow(clippy::new_ret_no_self)]
pub fn new(
exit: Arc<AtomicBool>,
blockstore: Arc<Blockstore>,
cost_model: Arc<RwLock<CostModel>>,
cost_update_receiver: CostUpdateReceiver,
@@ -92,7 +96,7 @@ impl CostUpdateService {
let thread_hdl = Builder::new()
.name("solana-cost-update-service".to_string())
.spawn(move || {
Self::service_loop(exit, blockstore, cost_model, cost_update_receiver);
Self::service_loop(blockstore, cost_model, cost_update_receiver);
})
.unwrap();
@@ -104,118 +108,99 @@ impl CostUpdateService {
}
fn service_loop(
exit: Arc<AtomicBool>,
blockstore: Arc<Blockstore>,
cost_model: Arc<RwLock<CostModel>>,
cost_update_receiver: CostUpdateReceiver,
) {
let mut cost_update_service_timing = CostUpdateServiceTiming::default();
let mut dirty: bool;
let mut update_count: u64;
let wait_timer = Duration::from_millis(100);
let mut update_count = 0_u64;
loop {
if exit.load(Ordering::Relaxed) {
break;
}
for cost_update in cost_update_receiver.iter() {
match cost_update {
CostUpdate::FrozenBank { bank } => {
bank.read_cost_tracker().unwrap().report_stats(bank.slot());
}
CostUpdate::ExecuteTiming {
mut execute_timings,
} => {
let mut update_cost_model_time = Measure::start("update_cost_model_time");
update_count += Self::update_cost_model(&cost_model, &mut execute_timings);
update_cost_model_time.stop();
cost_update_service_timing.update(
Some(update_count),
Some(update_cost_model_time.as_us()),
None,
);
dirty = false;
update_count = 0_u64;
let mut update_cost_model_time = Measure::start("update_cost_model_time");
for cost_update in cost_update_receiver.try_iter() {
match cost_update {
CostUpdate::FrozenBank { bank } => {
bank.read_cost_tracker().unwrap().report_stats(bank.slot());
}
CostUpdate::ExecuteTiming {
mut execute_timings,
} => {
dirty |= Self::update_cost_model(&cost_model, &mut execute_timings);
update_count += 1;
if update_count > PERSIST_THRESHOLD {
let mut persist_cost_table_time = Measure::start("persist_cost_table_time");
Self::persist_cost_table(&blockstore, &cost_model);
update_count = 0_u64;
persist_cost_table_time.stop();
cost_update_service_timing.update(
None,
None,
Some(persist_cost_table_time.as_us()),
);
}
}
}
update_cost_model_time.stop();
let mut persist_cost_table_time = Measure::start("persist_cost_table_time");
if dirty {
Self::persist_cost_table(&blockstore, &cost_model);
}
persist_cost_table_time.stop();
cost_update_service_timing.update(
update_count,
update_cost_model_time.as_us(),
persist_cost_table_time.as_us(),
);
thread::sleep(wait_timer);
}
}
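A minimal sketch of the reshaped loop above, under the assumption that the service now blocks on the channel and persists the cost table only after PERSIST_THRESHOLD cost-model updates have accumulated; the types here are stand-ins, not the real CostUpdate or CostModel.

use std::sync::mpsc::Receiver;

const PERSIST_THRESHOLD: u64 = 1_000;

// Stand-in for the real CostUpdate enum.
enum Update {
    Frozen,
    Timing,
}

fn service_loop_sketch(receiver: Receiver<Update>) {
    let mut update_count = 0u64;
    // Blocking iteration: the loop ends when every sender has been dropped,
    // which replaces the old exit-flag plus sleep polling.
    for update in receiver.iter() {
        if let Update::Timing = update {
            // The real code calls update_cost_model() here and adds its
            // return value to update_count.
            update_count += 1;
            if update_count > PERSIST_THRESHOLD {
                // The real code calls persist_cost_table() here.
                update_count = 0;
            }
        }
    }
}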
// Normalize `program_timings` with current estimated cost, update instruction_cost table
// Returns number of updates applied
fn update_cost_model(
cost_model: &RwLock<CostModel>,
execute_timings: &mut ExecuteTimings,
) -> bool {
let mut dirty = false;
{
for (program_id, program_timings) in &mut execute_timings.details.per_program_timings {
let current_estimated_program_cost =
cost_model.read().unwrap().find_instruction_cost(program_id);
program_timings.coalesce_error_timings(current_estimated_program_cost);
) -> u64 {
let mut update_count = 0_u64;
for (program_id, program_timings) in &mut execute_timings.details.per_program_timings {
let current_estimated_program_cost =
cost_model.read().unwrap().find_instruction_cost(program_id);
program_timings.coalesce_error_timings(current_estimated_program_cost);
if program_timings.count < 1 {
continue;
}
let units = program_timings.accumulated_units / program_timings.count as u64;
match cost_model
.write()
.unwrap()
.upsert_instruction_cost(program_id, units)
{
Ok(c) => {
debug!(
"after replayed into bank, instruction {:?} has averaged cost {}",
program_id, c
);
dirty = true;
}
Err(err) => {
debug!(
"after replayed into bank, instruction {:?} failed to update cost, err: {}",
program_id, err
);
}
}
if program_timings.count < 1 {
continue;
}
let units = program_timings.accumulated_units / program_timings.count as u64;
cost_model
.write()
.unwrap()
.upsert_instruction_cost(program_id, units);
update_count += 1;
debug!(
"After replayed into bank, updated cost for instruction {:?}, update_value {}, pre_aggregated_value {}",
program_id, units, current_estimated_program_cost
);
}
debug!(
"after replayed into bank, updated cost model instruction cost table, current values: {:?}",
cost_model.read().unwrap().get_instruction_cost_table()
);
dirty
update_count
}
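A worked example of the normalization above, using the same numbers as the first test below: 100 accumulated units over 10 executions yield an upsert value of 10 units for that program.

let accumulated_units: u64 = 100;
let count: u32 = 10;
let units = accumulated_units / count as u64;
assert_eq!(units, 10);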
// 1. Remove obsolete program entries from persisted table to limit its size
// 2. Update persisted program cost. This involves EMA cost calculation at
// execute_cost_table.get_cost()
fn persist_cost_table(blockstore: &Blockstore, cost_model: &RwLock<CostModel>) {
let cost_model_read = cost_model.read().unwrap();
let cost_table = cost_model_read.get_instruction_cost_table();
let db_records = blockstore.read_program_costs().expect("read programs");
let cost_model = cost_model.read().unwrap();
let active_program_keys = cost_model.get_program_keys();
// delete records from blockstore if they are no longer in cost_table
db_records.iter().for_each(|(pubkey, _)| {
if cost_table.get(pubkey).is_none() {
if !active_program_keys.contains(&pubkey) {
blockstore
.delete_program_cost(pubkey)
.expect("delete old program");
}
});
for (key, cost) in cost_table.iter() {
active_program_keys.iter().for_each(|program_id| {
let cost = cost_model.find_instruction_cost(program_id);
blockstore
.write_program_cost(key, cost)
.write_program_cost(program_id, &cost)
.expect("persist program costs to blockstore");
}
});
}
}
@@ -227,15 +212,9 @@ mod tests {
fn test_update_cost_model_with_empty_execute_timings() {
let cost_model = Arc::new(RwLock::new(CostModel::default()));
let mut empty_execute_timings = ExecuteTimings::default();
CostUpdateService::update_cost_model(&cost_model, &mut empty_execute_timings);
assert_eq!(
0,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
CostUpdateService::update_cost_model(&cost_model, &mut empty_execute_timings),
0
);
}
@@ -253,7 +232,7 @@ mod tests {
let accumulated_units: u64 = 100;
let total_errored_units = 0;
let count: u32 = 10;
expected_cost = accumulated_units / count as u64;
expected_cost = accumulated_units / count as u64; // = 10
execute_timings.details.per_program_timings.insert(
program_key_1,
@@ -265,22 +244,15 @@ mod tests {
total_errored_units,
},
);
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
let update_count =
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
assert_eq!(1, update_count);
assert_eq!(
1,
expected_cost,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
assert_eq!(
Some(&expected_cost),
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.get(&program_key_1)
.find_instruction_cost(&program_key_1)
);
}
@@ -289,8 +261,8 @@ mod tests {
let accumulated_us: u64 = 2000;
let accumulated_units: u64 = 200;
let count: u32 = 10;
// expect the new cost to be Average(new_value, existing_value)
expected_cost = ((accumulated_units / count as u64) + expected_cost) / 2;
// expect the new cost = (mean + 2 * std) of [10, 20]
expected_cost = 13;
execute_timings.details.per_program_timings.insert(
program_key_1,
@@ -302,22 +274,15 @@ mod tests {
total_errored_units: 0,
},
);
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
let update_count =
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
assert_eq!(1, update_count);
assert_eq!(
1,
expected_cost,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
assert_eq!(
Some(&expected_cost),
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.get(&program_key_1)
.find_instruction_cost(&program_key_1)
);
}
}
@@ -341,20 +306,49 @@ mod tests {
total_errored_units: 0,
},
);
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
// If `errored_txs_compute_consumed` is empty and `count == 0`, then
// nothing should be inserted into the cost model
assert!(cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.is_empty());
assert_eq!(
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings),
0
);
}
// set up current instruction cost to 100
let current_program_cost = 100;
{
execute_timings.details.per_program_timings.insert(
program_key_1,
ProgramTiming {
accumulated_us: 1000,
accumulated_units: current_program_cost,
count: 1,
errored_txs_compute_consumed: vec![],
total_errored_units: 0,
},
);
let update_count =
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
assert_eq!(1, update_count);
assert_eq!(
current_program_cost,
cost_model
.read()
.unwrap()
.find_instruction_cost(&program_key_1)
);
}
// Test updating cost model with only erroring compute costs where the `cost_per_error` is
// greater than the current instruction cost for the program. Should update with the
// new erroring compute costs
let cost_per_error = 1000;
// expected_cost = (mean + 2*std) of data points:
// [
// 100, // original program_cost
// 1000, // cost_per_error
// ]
let expected_cost = 289u64;
{
let errored_txs_compute_consumed = vec![cost_per_error; 3];
let total_errored_units = errored_txs_compute_consumed.iter().sum();
@@ -368,29 +362,23 @@ mod tests {
total_errored_units,
},
);
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
let update_count =
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
assert_eq!(1, update_count);
assert_eq!(
1,
expected_cost,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
assert_eq!(
Some(&cost_per_error),
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.get(&program_key_1)
.find_instruction_cost(&program_key_1)
);
}
// Test updating cost model with only erroring compute costs where the error cost is
// `smaller_cost_per_error`, less than the current instruction cost for the program.
// The cost should not decrease for these new lesser errors
let smaller_cost_per_error = cost_per_error - 10;
let smaller_cost_per_error = expected_cost - 10;
{
let errored_txs_compute_consumed = vec![smaller_cost_per_error; 3];
let total_errored_units = errored_txs_compute_consumed.iter().sum();
@@ -404,22 +392,23 @@ mod tests {
total_errored_units,
},
);
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
let update_count =
CostUpdateService::update_cost_model(&cost_model, &mut execute_timings);
// expected_cost = (mean + 2*std) of data points:
// [
// 100, // original program cost,
// 1000, // cost_per_error from above test
// 289, // the smaller_cost_per_error will be coalesced to prev cost
// ]
let expected_cost = 293u64;
assert_eq!(1, update_count);
assert_eq!(
1,
expected_cost,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
assert_eq!(
Some(&cost_per_error),
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.get(&program_key_1)
.find_instruction_cost(&program_key_1)
);
}
}


@@ -0,0 +1,871 @@
use {
crate::leader_slot_banking_stage_timing_metrics::*,
solana_poh::poh_recorder::BankStart,
solana_sdk::{clock::Slot, saturating_add_assign},
std::time::Instant,
};
/// A summary of what happened to transactions passed to the execution pipeline.
/// Transactions may have:
/// 1) Never reached execution because they were filtered out by things like AccountInUse
/// lock conflicts or CostModel compute limits. These errors are retryable and
/// counted in `Self::retryable_transaction_indexes`.
/// 2) Failed to execute due to some fatal error like being too old or having a duplicate signature. These
/// are dropped from the transactions queue and not counted in `Self::retryable_transaction_indexes`.
/// 3) Been executed and committed, captured by `committed_transactions_count` below.
/// 4) Been executed but failed to commit, captured by `failed_commit_count` below.
pub(crate) struct ProcessTransactionsSummary {
// Returns true if we hit the end of the block/max PoH height for the block before
// processing all the transactions in the batch.
pub reached_max_poh_height: bool,
// Total number of transactions that were passed as candidates for execution. See description
// of struct above for possible outcomes for these transactions
pub transactions_attempted_execution_count: usize,
// Total number of transactions that made it into the block
pub committed_transactions_count: usize,
// Total number of transactions that made it into the block where the transactions
// output from execution was success/no error.
pub committed_transactions_with_successful_result_count: usize,
// All transactions that were executed but then failed record because the
// slot ended
pub failed_commit_count: usize,
// Indexes of transactions in the transactions slice that were not committed but are retryable
pub retryable_transaction_indexes: Vec<usize>,
// The number of transactions filtered out by the cost model
pub cost_model_throttled_transactions_count: usize,
// Total amount of time spent running the cost model
pub cost_model_us: u64,
// Breakdown of time spent executing and committing transactions
pub execute_and_commit_timings: LeaderExecuteAndCommitTimings,
}
// Metrics describing packets ingested/processed in various parts of BankingStage during this
// validator's leader slot
#[derive(Debug, Default)]
struct LeaderSlotPacketCountMetrics {
// total number of live packets TPU received from verified receiver for processing.
total_new_valid_packets: u64,
// total number of packets TPU received from sigverify that failed signature verification.
newly_failed_sigverify_count: u64,
// total number of packets dropped due to the thread's buffered packets capacity being reached.
exceeded_buffer_limit_dropped_packets_count: u64,
// total number of packets that got added to the pending buffer after arriving to BankingStage
newly_buffered_packets_count: u64,
// total number of transactions in the buffer that were filtered out due to things like age and
// duplicate signature checks
retryable_packets_filtered_count: u64,
// total number of transactions that attempted execution in this slot. Should equal the sum
// of `committed_transactions_count`, `retryable_errored_transaction_count`, and
// `nonretryable_errored_transactions_count`.
transactions_attempted_execution_count: u64,
// total number of transactions that were executed and committed into the block
// on this thread
committed_transactions_count: u64,
// total number of transactions that were executed, got a successful execution output/no error,
// and were then committed into the block
committed_transactions_with_successful_result_count: u64,
// total number of transactions that were not executed or failed commit, BUT were added back to the buffered
// queue because they were retryable errors
retryable_errored_transaction_count: u64,
// total number of transactions that attempted execution due to some fatal error (too old, duplicate signature, etc.)
// AND were dropped from the buffered queue
nonretryable_errored_transactions_count: u64,
// total number of transactions that were executed, but failed to be committed into the Poh stream because
// the block ended. Some of these may be already counted in `nonretryable_errored_transactions_count` if they
// then hit the age limit after failing to be committed.
executed_transactions_failed_commit_count: u64,
// total number of transactions that were excluded from the block because they were too expensive
// according to the cost model. These transactions are added back to the buffered queue and are
// already counted in `self.retryable_errored_transaction_count`.
cost_model_throttled_transactions_count: u64,
// total number of forwardable packets that failed forwarding
failed_forwarded_packets_count: u64,
// total number of forwardable packets that were successfully forwarded
successful_forwarded_packets_count: u64,
// total number of attempted forwards that failed. Note this is not a count of the number of packets
// that failed, just the total number of batches of packets that failed forwarding
packet_batch_forward_failure_count: u64,
// total number of valid unprocessed packets in the buffer that were removed after being forwarded
cleared_from_buffer_after_forward_count: u64,
// total number of packets removed at the end of the slot due to being too old, duplicate, etc.
end_of_slot_filtered_invalid_count: u64,
}
impl LeaderSlotPacketCountMetrics {
fn new() -> Self {
Self { ..Self::default() }
}
fn report(&self, id: u32, slot: Slot) {
datapoint_info!(
"banking_stage-leader_slot_packet_counts",
("id", id as i64, i64),
("slot", slot as i64, i64),
(
"total_new_valid_packets",
self.total_new_valid_packets as i64,
i64
),
(
"newly_failed_sigverify_count",
self.newly_failed_sigverify_count as i64,
i64
),
(
"exceeded_buffer_limit_dropped_packets_count",
self.exceeded_buffer_limit_dropped_packets_count as i64,
i64
),
(
"newly_buffered_packets_count",
self.newly_buffered_packets_count as i64,
i64
),
(
"retryable_packets_filtered_count",
self.retryable_packets_filtered_count as i64,
i64
),
(
"transactions_attempted_execution_count",
self.transactions_attempted_execution_count as i64,
i64
),
(
"committed_transactions_count",
self.committed_transactions_count as i64,
i64
),
(
"committed_transactions_with_successful_result_count",
self.committed_transactions_with_successful_result_count as i64,
i64
),
(
"retryable_errored_transaction_count",
self.retryable_errored_transaction_count as i64,
i64
),
(
"nonretryable_errored_transactions_count",
self.nonretryable_errored_transactions_count as i64,
i64
),
(
"executed_transactions_failed_commit_count",
self.executed_transactions_failed_commit_count as i64,
i64
),
(
"cost_model_throttled_transactions_count",
self.cost_model_throttled_transactions_count as i64,
i64
),
(
"failed_forwarded_packets_count",
self.failed_forwarded_packets_count as i64,
i64
),
(
"successful_forwarded_packets_count",
self.successful_forwarded_packets_count as i64,
i64
),
(
"packet_batch_forward_failure_count",
self.packet_batch_forward_failure_count as i64,
i64
),
(
"cleared_from_buffer_after_forward_count",
self.cleared_from_buffer_after_forward_count as i64,
i64
),
(
"end_of_slot_filtered_invalid_count",
self.end_of_slot_filtered_invalid_count as i64,
i64
),
);
}
}
#[derive(Debug)]
pub(crate) struct LeaderSlotMetrics {
// banking_stage creates one QosService instance per working thread, each uniquely
// identified by id. This field allows metrics to be categorized for gossip votes, TPU votes
// and other transactions.
id: u32,
// aggregate metrics per slot
slot: Slot,
packet_count_metrics: LeaderSlotPacketCountMetrics,
timing_metrics: LeaderSlotTimingMetrics,
// Used by tests to check if the `self.report()` method was called
is_reported: bool,
}
impl LeaderSlotMetrics {
pub(crate) fn new(id: u32, slot: Slot, bank_creation_time: &Instant) -> Self {
Self {
id,
slot,
packet_count_metrics: LeaderSlotPacketCountMetrics::new(),
timing_metrics: LeaderSlotTimingMetrics::new(bank_creation_time),
is_reported: false,
}
}
pub(crate) fn report(&mut self) {
self.is_reported = true;
self.timing_metrics.report(self.id, self.slot);
self.packet_count_metrics.report(self.id, self.slot);
}
/// Returns `Some(self.slot)` if the metrics have been reported, otherwise returns None
fn reported_slot(&self) -> Option<Slot> {
if self.is_reported {
Some(self.slot)
} else {
None
}
}
}
#[derive(Debug)]
pub struct LeaderSlotMetricsTracker {
// Only `Some` if BankingStage detects it's time to construct our leader slot,
// otherwise `None`
leader_slot_metrics: Option<LeaderSlotMetrics>,
id: u32,
}
impl LeaderSlotMetricsTracker {
pub fn new(id: u32) -> Self {
Self {
leader_slot_metrics: None,
id,
}
}
// Returns reported slot if metrics were reported
pub(crate) fn update_on_leader_slot_boundary(
&mut self,
bank_start: &Option<BankStart>,
) -> Option<Slot> {
match (self.leader_slot_metrics.as_mut(), bank_start) {
(None, None) => None,
(Some(leader_slot_metrics), None) => {
leader_slot_metrics.report();
// Ensure tests catch that `report()` method was called
let reported_slot = leader_slot_metrics.reported_slot();
// Slot has ended, time to report metrics
self.leader_slot_metrics = None;
reported_slot
}
(None, Some(bank_start)) => {
// Our leader slot has begun, time to create a new slot tracker
self.leader_slot_metrics = Some(LeaderSlotMetrics::new(
self.id,
bank_start.working_bank.slot(),
&bank_start.bank_creation_time,
));
self.leader_slot_metrics.as_ref().unwrap().reported_slot()
}
(Some(leader_slot_metrics), Some(bank_start)) => {
if leader_slot_metrics.slot != bank_start.working_bank.slot() {
// Last slot has ended, a new slot has begun
leader_slot_metrics.report();
// Ensure tests catch that `report()` method was called
let reported_slot = leader_slot_metrics.reported_slot();
self.leader_slot_metrics = Some(LeaderSlotMetrics::new(
self.id,
bank_start.working_bank.slot(),
&bank_start.bank_creation_time,
));
reported_slot
} else {
leader_slot_metrics.reported_slot()
}
}
}
}
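A usage sketch of the boundary check above, assuming it is called once per banking-stage loop iteration with whatever Option<BankStart> the thread currently sees; a returned slot means metrics for that just-ended leader slot were flushed. The driver function and its arguments are hypothetical.

// Hypothetical per-iteration driver; `current_bank_start` stands in for the
// Option<BankStart> obtained from the PohRecorder.
fn on_iteration(
    tracker: &mut LeaderSlotMetricsTracker,
    current_bank_start: &Option<BankStart>,
) {
    if let Some(ended_slot) = tracker.update_on_leader_slot_boundary(current_bank_start) {
        // Metrics for `ended_slot` were just reported via datapoint_info!.
        let _ = ended_slot;
    }
}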
pub(crate) fn accumulate_process_transactions_summary(
&mut self,
process_transactions_summary: &ProcessTransactionsSummary,
) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
let ProcessTransactionsSummary {
transactions_attempted_execution_count,
committed_transactions_count,
committed_transactions_with_successful_result_count,
failed_commit_count,
ref retryable_transaction_indexes,
cost_model_throttled_transactions_count,
cost_model_us,
ref execute_and_commit_timings,
..
} = process_transactions_summary;
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.transactions_attempted_execution_count,
*transactions_attempted_execution_count as u64
);
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.committed_transactions_count,
*committed_transactions_count as u64
);
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.committed_transactions_with_successful_result_count,
*committed_transactions_with_successful_result_count as u64
);
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.executed_transactions_failed_commit_count,
*failed_commit_count as u64
);
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.retryable_errored_transaction_count,
retryable_transaction_indexes.len() as u64
);
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.nonretryable_errored_transactions_count,
transactions_attempted_execution_count
.saturating_sub(*committed_transactions_count)
.saturating_sub(retryable_transaction_indexes.len()) as u64
);
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.cost_model_throttled_transactions_count,
*cost_model_throttled_transactions_count as u64
);
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_packets_timings
.cost_model_us,
*cost_model_us as u64
);
leader_slot_metrics
.timing_metrics
.execute_and_commit_timings
.accumulate(execute_and_commit_timings);
}
}
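One derived quantity in the accumulation above is worth spelling out: the non-retryable errored count is computed from the other counters rather than measured directly. A sketch with made-up numbers:

// Hypothetical counts for one batch: 10 attempted, 6 committed, 3 retryable.
let attempted: usize = 10;
let committed: usize = 6;
let retryable: usize = 3;
let nonretryable = attempted
    .saturating_sub(committed)
    .saturating_sub(retryable);
assert_eq!(nonretryable, 1); // dropped from the queue as non-retryable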
// Packet inflow/outflow/processing metrics
pub(crate) fn increment_total_new_valid_packets(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.total_new_valid_packets,
count
);
}
}
pub(crate) fn increment_newly_failed_sigverify_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.newly_failed_sigverify_count,
count
);
}
}
pub(crate) fn increment_exceeded_buffer_limit_dropped_packets_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.exceeded_buffer_limit_dropped_packets_count,
count
);
}
}
pub(crate) fn increment_newly_buffered_packets_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.newly_buffered_packets_count,
count
);
}
}
pub(crate) fn increment_retryable_packets_filtered_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.retryable_packets_filtered_count,
count
);
}
}
pub(crate) fn increment_failed_forwarded_packets_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.failed_forwarded_packets_count,
count
);
}
}
pub(crate) fn increment_successful_forwarded_packets_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.successful_forwarded_packets_count,
count
);
}
}
pub(crate) fn increment_packet_batch_forward_failure_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.packet_batch_forward_failure_count,
count
);
}
}
pub(crate) fn increment_cleared_from_buffer_after_forward_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.cleared_from_buffer_after_forward_count,
count
);
}
}
pub(crate) fn increment_end_of_slot_filtered_invalid_count(&mut self, count: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.packet_count_metrics
.end_of_slot_filtered_invalid_count,
count
);
}
}
// Outermost banking thread's loop timing metrics
pub(crate) fn increment_process_buffered_packets_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.outer_loop_timings
.process_buffered_packets_us,
us
);
}
}
pub(crate) fn increment_slot_metrics_check_slot_boundary_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.outer_loop_timings
.slot_metrics_check_slot_boundary_us,
us
);
}
}
pub(crate) fn increment_receive_and_buffer_packets_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.outer_loop_timings
.receive_and_buffer_packets_us,
us
);
}
}
// Processing buffer timing metrics
pub(crate) fn increment_make_decision_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_buffered_packets_timings
.make_decision_us,
us
);
}
}
pub(crate) fn increment_consume_buffered_packets_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_buffered_packets_timings
.consume_buffered_packets_us,
us
);
}
}
pub(crate) fn increment_forward_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_buffered_packets_timings
.forward_us,
us
);
}
}
pub(crate) fn increment_forward_and_hold_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_buffered_packets_timings
.forward_and_hold_us,
us
);
}
}
// Consuming buffered packets timing metrics
pub(crate) fn increment_end_of_slot_filtering_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.consume_buffered_packets_timings
.end_of_slot_filtering_us,
us
);
}
}
pub(crate) fn increment_consume_buffered_packets_poh_recorder_lock_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.consume_buffered_packets_timings
.poh_recorder_lock_us,
us
);
}
}
pub(crate) fn increment_process_packets_transactions_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.consume_buffered_packets_timings
.process_packets_transactions_us,
us
);
}
}
// Processing packets timing metrics
pub(crate) fn increment_transactions_from_packets_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_packets_timings
.transactions_from_packets_us,
us
);
}
}
pub(crate) fn increment_process_transactions_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_packets_timings
.process_transactions_us,
us
);
}
}
pub(crate) fn increment_filter_retryable_packets_us(&mut self, us: u64) {
if let Some(leader_slot_metrics) = &mut self.leader_slot_metrics {
saturating_add_assign!(
leader_slot_metrics
.timing_metrics
.process_packets_timings
.filter_retryable_packets_us,
us
);
}
}
}
#[cfg(test)]
mod tests {
use {
super::*,
solana_runtime::{bank::Bank, genesis_utils::create_genesis_config},
solana_sdk::pubkey::Pubkey,
std::sync::Arc,
};
struct TestSlotBoundaryComponents {
first_bank: Arc<Bank>,
first_poh_recorder_bank: BankStart,
next_bank: Arc<Bank>,
next_poh_recorder_bank: BankStart,
leader_slot_metrics_tracker: LeaderSlotMetricsTracker,
}
fn setup_test_slot_boundary_banks() -> TestSlotBoundaryComponents {
let genesis = create_genesis_config(10);
let first_bank = Arc::new(Bank::new_for_tests(&genesis.genesis_config));
let first_poh_recorder_bank = BankStart {
working_bank: first_bank.clone(),
bank_creation_time: Arc::new(Instant::now()),
};
// Create a child descended from the first bank
let next_bank = Arc::new(Bank::new_from_parent(
&first_bank,
&Pubkey::new_unique(),
first_bank.slot() + 1,
));
let next_poh_recorder_bank = BankStart {
working_bank: next_bank.clone(),
bank_creation_time: Arc::new(Instant::now()),
};
let banking_stage_thread_id = 0;
let leader_slot_metrics_tracker = LeaderSlotMetricsTracker::new(banking_stage_thread_id);
TestSlotBoundaryComponents {
first_bank,
first_poh_recorder_bank,
next_bank,
next_poh_recorder_bank,
leader_slot_metrics_tracker,
}
}
#[test]
pub fn test_update_on_leader_slot_boundary_not_leader_to_not_leader() {
let TestSlotBoundaryComponents {
mut leader_slot_metrics_tracker,
..
} = setup_test_slot_boundary_banks();
// Test that with no bank being tracked, and no new bank being tracked, nothing is reported
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&None)
.is_none());
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_none());
}
#[test]
pub fn test_update_on_leader_slot_boundary_not_leader_to_leader() {
let TestSlotBoundaryComponents {
first_poh_recorder_bank,
mut leader_slot_metrics_tracker,
..
} = setup_test_slot_boundary_banks();
// Test case where the thread has not detected a leader bank, and now sees a leader bank.
// Metrics should not be reported because leader slot has not ended
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_none());
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(first_poh_recorder_bank))
.is_none());
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_some());
}
#[test]
pub fn test_update_on_leader_slot_boundary_leader_to_not_leader() {
let TestSlotBoundaryComponents {
first_bank,
first_poh_recorder_bank,
mut leader_slot_metrics_tracker,
..
} = setup_test_slot_boundary_banks();
// Test case where the thread has a leader bank, and now detects there's no more leader bank,
// implying the slot has ended. Metrics should be reported for `first_bank.slot()`,
// because that leader slot has just ended.
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(first_poh_recorder_bank))
.is_none());
assert_eq!(
leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&None)
.unwrap(),
first_bank.slot()
);
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_none());
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&None)
.is_none());
}
#[test]
pub fn test_update_on_leader_slot_boundary_leader_to_leader_same_slot() {
let TestSlotBoundaryComponents {
first_bank,
first_poh_recorder_bank,
mut leader_slot_metrics_tracker,
..
} = setup_test_slot_boundary_banks();
// Test case where the thread has a leader bank, and now detects the same leader bank,
// implying the slot is still running. Metrics should not be reported
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(first_poh_recorder_bank.clone()))
.is_none());
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(first_poh_recorder_bank))
.is_none());
assert_eq!(
leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&None)
.unwrap(),
first_bank.slot()
);
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_none());
}
#[test]
pub fn test_update_on_leader_slot_boundary_leader_to_leader_bigger_slot() {
let TestSlotBoundaryComponents {
first_bank,
first_poh_recorder_bank,
next_bank,
next_poh_recorder_bank,
mut leader_slot_metrics_tracker,
} = setup_test_slot_boundary_banks();
// Test case where the thread has a leader bank, and now detects there's a new leader bank
// for a bigger slot, implying the slot has ended. Metrics should be reported for the
// smaller slot
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(first_poh_recorder_bank))
.is_none());
assert_eq!(
leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(next_poh_recorder_bank))
.unwrap(),
first_bank.slot()
);
assert_eq!(
leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&None)
.unwrap(),
next_bank.slot()
);
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_none());
}
#[test]
pub fn test_update_on_leader_slot_boundary_leader_to_leader_smaller_slot() {
let TestSlotBoundaryComponents {
first_bank,
first_poh_recorder_bank,
next_bank,
next_poh_recorder_bank,
mut leader_slot_metrics_tracker,
} = setup_test_slot_boundary_banks();
// Test case where the thread has a leader bank, and now detects there's a new leader bank
// for a smaller slot, implying the slot has ended. Metrics should be reported for the
// bigger slot
assert!(leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(next_poh_recorder_bank))
.is_none());
assert_eq!(
leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&Some(first_poh_recorder_bank))
.unwrap(),
next_bank.slot()
);
assert_eq!(
leader_slot_metrics_tracker
.update_on_leader_slot_boundary(&None)
.unwrap(),
first_bank.slot()
);
assert!(leader_slot_metrics_tracker.leader_slot_metrics.is_none());
}
}


@@ -0,0 +1,286 @@
use {
solana_program_runtime::timings::ExecuteTimings,
solana_sdk::{clock::Slot, saturating_add_assign},
std::time::Instant,
};
#[derive(Default, Debug)]
pub struct LeaderExecuteAndCommitTimings {
pub collect_balances_us: u64,
pub load_execute_us: u64,
pub freeze_lock_us: u64,
pub record_us: u64,
pub commit_us: u64,
pub find_and_send_votes_us: u64,
pub record_transactions_timings: RecordTransactionsTimings,
pub execute_timings: ExecuteTimings,
}
impl LeaderExecuteAndCommitTimings {
pub fn accumulate(&mut self, other: &LeaderExecuteAndCommitTimings) {
saturating_add_assign!(self.collect_balances_us, other.collect_balances_us);
saturating_add_assign!(self.load_execute_us, other.load_execute_us);
saturating_add_assign!(self.freeze_lock_us, other.freeze_lock_us);
saturating_add_assign!(self.record_us, other.record_us);
saturating_add_assign!(self.commit_us, other.commit_us);
saturating_add_assign!(self.find_and_send_votes_us, other.find_and_send_votes_us);
self.record_transactions_timings
.accumulate(&other.record_transactions_timings);
self.execute_timings.accumulate(&other.execute_timings);
}
pub fn report(&self, id: u32, slot: Slot) {
datapoint_info!(
"banking_stage-leader_slot_execute_and_commit_timings",
("id", id as i64, i64),
("slot", slot as i64, i64),
("collect_balances_us", self.collect_balances_us as i64, i64),
("load_execute_us", self.load_execute_us as i64, i64),
("freeze_lock_us", self.freeze_lock_us as i64, i64),
("record_us", self.record_us as i64, i64),
("commit_us", self.commit_us as i64, i64),
(
"find_and_send_votes_us",
self.find_and_send_votes_us as i64,
i64
),
);
datapoint_info!(
"banking_stage-leader_slot_record_timings",
("id", id as i64, i64),
("slot", slot as i64, i64),
(
"execution_results_to_transactions_us",
self.record_transactions_timings
.execution_results_to_transactions_us as i64,
i64
),
(
"hash_us",
self.record_transactions_timings.hash_us as i64,
i64
),
(
"poh_record_us",
self.record_transactions_timings.poh_record_us as i64,
i64
),
);
}
}
#[derive(Default, Debug)]
pub struct RecordTransactionsTimings {
pub execution_results_to_transactions_us: u64,
pub hash_us: u64,
pub poh_record_us: u64,
}
impl RecordTransactionsTimings {
pub fn accumulate(&mut self, other: &RecordTransactionsTimings) {
saturating_add_assign!(
self.execution_results_to_transactions_us,
other.execution_results_to_transactions_us
);
saturating_add_assign!(self.hash_us, other.hash_us);
saturating_add_assign!(self.poh_record_us, other.poh_record_us);
}
}
// Metrics capturing wallclock time spent in various parts of BankingStage during this
// validator's leader slot
#[derive(Debug)]
pub(crate) struct LeaderSlotTimingMetrics {
pub outer_loop_timings: OuterLoopTimings,
pub process_buffered_packets_timings: ProcessBufferedPacketsTimings,
pub consume_buffered_packets_timings: ConsumeBufferedPacketsTimings,
pub process_packets_timings: ProcessPacketsTimings,
pub execute_and_commit_timings: LeaderExecuteAndCommitTimings,
}
impl LeaderSlotTimingMetrics {
pub(crate) fn new(bank_creation_time: &Instant) -> Self {
Self {
outer_loop_timings: OuterLoopTimings::new(bank_creation_time),
process_buffered_packets_timings: ProcessBufferedPacketsTimings::default(),
consume_buffered_packets_timings: ConsumeBufferedPacketsTimings::default(),
process_packets_timings: ProcessPacketsTimings::default(),
execute_and_commit_timings: LeaderExecuteAndCommitTimings::default(),
}
}
pub(crate) fn report(&self, id: u32, slot: Slot) {
self.outer_loop_timings.report(id, slot);
self.process_buffered_packets_timings.report(id, slot);
self.consume_buffered_packets_timings.report(id, slot);
self.process_packets_timings.report(id, slot);
self.execute_and_commit_timings.report(id, slot);
}
}
#[derive(Debug)]
pub(crate) struct OuterLoopTimings {
pub bank_detected_time: Instant,
// Delay from when the bank was created to when this thread detected it
pub bank_detected_delay_us: u64,
// Time spent processing buffered packets
pub process_buffered_packets_us: u64,
// Time spent checking for slot boundary and reporting leader slot metrics
pub slot_metrics_check_slot_boundary_us: u64,
// Time spent processing new incoming packets to the banking thread
pub receive_and_buffer_packets_us: u64,
}
impl OuterLoopTimings {
fn new(bank_creation_time: &Instant) -> Self {
Self {
bank_detected_time: Instant::now(),
bank_detected_delay_us: bank_creation_time.elapsed().as_micros() as u64,
process_buffered_packets_us: 0,
slot_metrics_check_slot_boundary_us: 0,
receive_and_buffer_packets_us: 0,
}
}
fn report(&self, id: u32, slot: Slot) {
let bank_detected_to_now_us = self.bank_detected_time.elapsed().as_micros() as u64;
datapoint_info!(
"banking_stage-leader_slot_loop_timings",
("id", id as i64, i64),
("slot", slot as i64, i64),
(
"bank_detected_to_slot_end_detected_us",
bank_detected_to_now_us,
i64
),
(
"bank_creation_to_slot_end_detected_us",
bank_detected_to_now_us + self.bank_detected_delay_us,
i64
),
("bank_detected_delay_us", self.bank_detected_delay_us, i64),
(
"process_buffered_packets_us",
self.process_buffered_packets_us,
i64
),
(
"slot_metrics_check_slot_boundary_us",
self.slot_metrics_check_slot_boundary_us,
i64
),
(
"receive_and_buffer_packets_us",
self.receive_and_buffer_packets_us,
i64
),
);
}
}
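One reported value above is reconstructed rather than measured directly: bank_creation_to_slot_end_detected_us adds the creation-to-detection delay to the detection-to-now span. A sketch with made-up numbers:

// Hypothetical timings, in microseconds.
let bank_detected_delay_us: u64 = 250; // bank creation -> this thread noticed it
let bank_detected_to_now_us: u64 = 400_000; // detection -> slot end detected
assert_eq!(bank_detected_to_now_us + bank_detected_delay_us, 400_250);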
#[derive(Debug, Default)]
pub(crate) struct ProcessBufferedPacketsTimings {
pub make_decision_us: u64,
pub consume_buffered_packets_us: u64,
pub forward_us: u64,
pub forward_and_hold_us: u64,
}
impl ProcessBufferedPacketsTimings {
fn report(&self, id: u32, slot: Slot) {
datapoint_info!(
"banking_stage-leader_slot_process_buffered_packets_timings",
("id", id as i64, i64),
("slot", slot as i64, i64),
("make_decision_us", self.make_decision_us as i64, i64),
(
"consume_buffered_packets_us",
self.consume_buffered_packets_us as i64,
i64
),
("forward_us", self.forward_us as i64, i64),
("forward_and_hold_us", self.forward_and_hold_us as i64, i64),
);
}
}
#[derive(Debug, Default)]
pub(crate) struct ConsumeBufferedPacketsTimings {
// Time spent grabbing poh recorder lock
pub poh_recorder_lock_us: u64,
// Time spent filtering invalid packets after leader slot has ended
pub end_of_slot_filtering_us: u64,
// Time spent processing transactions
pub process_packets_transactions_us: u64,
}
impl ConsumeBufferedPacketsTimings {
fn report(&self, id: u32, slot: Slot) {
datapoint_info!(
"banking_stage-leader_slot_consume_buffered_packets_timings",
("id", id as i64, i64),
("slot", slot as i64, i64),
(
"poh_recorder_lock_us",
self.poh_recorder_lock_us as i64,
i64
),
(
"end_of_slot_filtering_us",
self.end_of_slot_filtering_us as i64,
i64
),
(
"process_packets_transactions_us",
self.process_packets_transactions_us as i64,
i64
),
);
}
}
#[derive(Debug, Default)]
pub(crate) struct ProcessPacketsTimings {
// Time spent converting packets to transactions
pub transactions_from_packets_us: u64,
// Time spent processing transactions
pub process_transactions_us: u64,
// Time spent filtering retryable packets that were returned after transaction
// processing
pub filter_retryable_packets_us: u64,
// Time spent running the cost model in processing transactions before executing
// transactions
pub cost_model_us: u64,
}
impl ProcessPacketsTimings {
fn report(&self, id: u32, slot: Slot) {
datapoint_info!(
"banking_stage-leader_slot_process_packets_timings",
("id", id as i64, i64),
("slot", slot as i64, i64),
(
"transactions_from_packets_us",
self.transactions_from_packets_us,
i64
),
("process_transactions_us", self.process_transactions_us, i64),
(
"filter_retryable_packets_us",
self.filter_retryable_packets_us,
i64
),
("cost_model_us", self.cost_model_us, i64),
);
}
}


@@ -28,6 +28,8 @@ pub mod fork_choice;
pub mod gen_keys;
pub mod heaviest_subtree_fork_choice;
pub mod latest_validator_votes_for_frozen_banks;
pub mod leader_slot_banking_stage_metrics;
pub mod leader_slot_banking_stage_timing_metrics;
pub mod ledger_cleanup_service;
pub mod optimistic_confirmation_verifier;
pub mod outstanding_requests;


@@ -81,45 +81,133 @@ impl ReplaySlotStats {
i64
),
(
"serialize_us",
"execute_details_serialize_us",
self.execute_timings.details.serialize_us,
i64
),
(
"create_vm_us",
"execute_details_create_vm_us",
self.execute_timings.details.create_vm_us,
i64
),
(
"execute_inner_us",
"execute_details_execute_inner_us",
self.execute_timings.details.execute_us,
i64
),
(
"deserialize_us",
"execute_details_deserialize_us",
self.execute_timings.details.deserialize_us,
i64
),
(
"changed_account_count",
"execute_details_get_or_create_executor_us",
self.execute_timings.details.get_or_create_executor_us,
i64
),
(
"execute_details_changed_account_count",
self.execute_timings.details.changed_account_count,
i64
),
(
"total_account_count",
"execute_details_total_account_count",
self.execute_timings.details.total_account_count,
i64
),
(
"total_data_size",
"execute_details_total_data_size",
self.execute_timings.details.total_data_size,
i64
),
(
"data_size_changed",
"execute_details_data_size_changed",
self.execute_timings.details.data_size_changed,
i64
),
(
"execute_details_create_executor_register_syscalls_us",
self.execute_timings
.details
.create_executor_register_syscalls_us,
i64
),
(
"execute_details_create_executor_load_elf_us",
self.execute_timings.details.create_executor_load_elf_us,
i64
),
(
"execute_details_create_executor_verify_code_us",
self.execute_timings.details.create_executor_verify_code_us,
i64
),
(
"execute_details_create_executor_jit_compile_us",
self.execute_timings.details.create_executor_jit_compile_us,
i64
),
(
"execute_accessories_feature_set_clone_us",
self.execute_timings
.execute_accessories
.feature_set_clone_us,
i64
),
(
"execute_accessories_compute_budget_process_transaction_us",
self.execute_timings
.execute_accessories
.compute_budget_process_transaction_us,
i64
),
(
"execute_accessories_get_executors_us",
self.execute_timings.execute_accessories.get_executors_us,
i64
),
(
"execute_accessories_process_message_us",
self.execute_timings.execute_accessories.process_message_us,
i64
),
(
"execute_accessories_update_executors_us",
self.execute_timings.execute_accessories.update_executors_us,
i64
),
(
"execute_accessories_process_instructions_total_us",
self.execute_timings
.execute_accessories
.process_instructions
.total_us,
i64
),
(
"execute_accessories_process_instructions_verify_caller_us",
self.execute_timings
.execute_accessories
.process_instructions
.verify_caller_us,
i64
),
(
"execute_accessories_process_instructions_process_executable_chain_us",
self.execute_timings
.execute_accessories
.process_instructions
.process_executable_chain_us,
i64
),
(
"execute_accessories_process_instructions_verify_callee_us",
self.execute_timings
.execute_accessories
.process_instructions
.verify_callee_us,
i64
),
);
let mut per_pubkey_timings: Vec<_> = self
@@ -144,7 +232,7 @@ impl ReplaySlotStats {
);
for (pubkey, time) in per_pubkey_timings.iter().take(5) {
datapoint_info!(
datapoint_trace!(
"per_program_timings",
("slot", slot as i64, i64),
("pubkey", pubkey.to_string(), String),
@@ -167,7 +255,7 @@ impl ReplaySlotStats {
("accumulated_units", total_units, i64),
("count", total_count, i64),
("errored_units", total_errored_units, i64),
("count", total_errored_count, i64)
("errored_count", total_errored_count, i64)
);
}
}


@@ -3,6 +3,7 @@
//! how transactions are included in blocks, and optimize those blocks.
//!
use {
crate::banking_stage::BatchedTransactionCostDetails,
solana_measure::measure::Measure,
solana_runtime::{
bank::Bank,
@@ -103,22 +104,25 @@ impl QosService {
txs_costs
}
// Given a list of transactions and their costs, this function returns a corresponding
// list of Results that indicate if a transaction is selected to be included in the current block,
/// Given a list of transactions and their costs, this function returns a corresponding
/// list of Results that indicate if a transaction is selected to be included in the current block,
/// and a count of the number of transactions that would fit in the block
pub fn select_transactions_per_cost<'a>(
&self,
transactions: impl Iterator<Item = &'a SanitizedTransaction>,
transactions_costs: impl Iterator<Item = &'a TransactionCost>,
bank: &Arc<Bank>,
) -> Vec<transaction::Result<()>> {
) -> (Vec<transaction::Result<()>>, usize) {
let mut cost_tracking_time = Measure::start("cost_tracking_time");
let mut cost_tracker = bank.write_cost_tracker().unwrap();
let mut num_included = 0;
let select_results = transactions
.zip(transactions_costs)
.map(|(tx, cost)| match cost_tracker.try_add(tx, cost) {
Ok(current_block_cost) => {
debug!("slot {:?}, transaction {:?}, cost {:?}, fit into current block, current block cost {}", bank.slot(), tx, cost, current_block_cost);
self.metrics.selected_txs_count.fetch_add(1, Ordering::Relaxed);
num_included += 1;
Ok(())
},
Err(e) => {
@@ -128,6 +132,10 @@ impl QosService {
self.metrics.retried_txs_per_block_limit_count.fetch_add(1, Ordering::Relaxed);
Err(TransactionError::WouldExceedMaxBlockCostLimit)
}
CostTrackerError::WouldExceedVoteMaxLimit => {
self.metrics.retried_txs_per_vote_limit_count.fetch_add(1, Ordering::Relaxed);
Err(TransactionError::WouldExceedMaxVoteCostLimit)
}
CostTrackerError::WouldExceedAccountMaxLimit => {
self.metrics.retried_txs_per_account_limit_count.fetch_add(1, Ordering::Relaxed);
Err(TransactionError::WouldExceedMaxAccountCostLimit)
@@ -140,7 +148,37 @@ impl QosService {
self.metrics
.cost_tracking_time
.fetch_add(cost_tracking_time.as_us(), Ordering::Relaxed);
select_results
(select_results, num_included)
}
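A caller-side sketch of the new return shape, reusing (as an assumption) the fixtures from the test below: the per-transaction Results drive retry decisions, while num_included should equal the number of Ok entries without re-scanning the vector separately.

let (results, num_included) =
    qos_service.select_transactions_per_cost(txs.iter(), txs_costs.iter(), &bank);
assert_eq!(num_included, results.iter().filter(|r| r.is_ok()).count());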
pub fn accumulate_estimated_transaction_costs(
&self,
cost_details: &BatchedTransactionCostDetails,
) {
self.metrics
.estimated_signature_cu
.fetch_add(cost_details.batched_signature_cost, Ordering::Relaxed);
self.metrics
.estimated_write_lock_cu
.fetch_add(cost_details.batched_write_lock_cost, Ordering::Relaxed);
self.metrics
.estimated_data_bytes_cu
.fetch_add(cost_details.batched_data_bytes_cost, Ordering::Relaxed);
self.metrics
.estimated_execute_cu
.fetch_add(cost_details.batched_execute_cost, Ordering::Relaxed);
}
pub fn accumulate_actual_execute_cu(&self, units: u64) {
self.metrics
.actual_execute_cu
.fetch_add(units, Ordering::Relaxed);
}
pub fn accumulate_actual_execute_time(&self, micro_sec: u64) {
self.metrics
.actual_execute_time_us
.fetch_add(micro_sec, Ordering::Relaxed);
}
fn reporting_loop(
@@ -163,7 +201,26 @@ struct QosServiceMetrics {
cost_tracking_time: AtomicU64,
selected_txs_count: AtomicU64,
retried_txs_per_block_limit_count: AtomicU64,
retried_txs_per_vote_limit_count: AtomicU64,
retried_txs_per_account_limit_count: AtomicU64,
// accumulated estimated signature Compute Unites to be packed into block
estimated_signature_cu: AtomicU64,
// accumulated estimated write locks Compute Units to be packed into block
estimated_write_lock_cu: AtomicU64,
// accumulated estimated instructino data Compute Units to be packed into block
estimated_data_bytes_cu: AtomicU64,
// accumulated estimated program Compute Units to be packed into block
estimated_execute_cu: AtomicU64,
// accumulated actual program Compute Units that have been packed into block
actual_execute_cu: AtomicU64,
// accumulated actual program execution micro-sec for transactions packed into block
actual_execute_time_us: AtomicU64,
}
impl QosServiceMetrics {
@@ -197,12 +254,48 @@ impl QosServiceMetrics {
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"retried_txs_per_vote_limit_count",
self.retried_txs_per_vote_limit_count
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"retried_txs_per_account_limit_count",
self.retried_txs_per_account_limit_count
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"estimated_signature_cu",
self.estimated_signature_cu.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"estimated_write_lock_cu",
self.estimated_write_lock_cu.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"estimated_data_bytes_cu",
self.estimated_data_bytes_cu.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"estimated_execute_cu",
self.estimated_execute_cu.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"actual_execute_cu",
self.actual_execute_cu.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"actual_execute_time_us",
self.actual_execute_time_us.swap(0, Ordering::Relaxed) as i64,
i64
),
);
}
}
@@ -292,6 +385,7 @@ mod tests {
.unwrap()
.calculate_cost(&transfer_tx)
.sum();
let vote_tx_cost = cost_model.read().unwrap().calculate_cost(&vote_tx).sum();
// make a vec of txs
let txs = vec![transfer_tx.clone(), vote_tx.clone(), transfer_tx, vote_tx];
@@ -299,19 +393,21 @@ mod tests {
let qos_service = QosService::new(cost_model);
let txs_costs = qos_service.compute_transaction_costs(txs.iter());
// set cost tracker limit to fit 1 transfer tx, vote tx bypasses limit check
let cost_limit = transfer_tx_cost;
// set cost tracker limit to fit 1 transfer tx and 1 vote tx
let cost_limit = transfer_tx_cost + vote_tx_cost;
bank.write_cost_tracker()
.unwrap()
.set_limits(cost_limit, cost_limit);
let results = qos_service.select_transactions_per_cost(txs.iter(), txs_costs.iter(), &bank);
.set_limits(cost_limit, cost_limit, cost_limit);
let (results, num_selected) =
qos_service.select_transactions_per_cost(txs.iter(), txs_costs.iter(), &bank);
assert_eq!(num_selected, 2);
// verify that first transfer tx and all votes are allowed
// verify that first transfer tx and first votes are allowed
assert_eq!(results.len(), txs.len());
assert!(results[0].is_ok());
assert!(results[1].is_ok());
assert!(results[2].is_err());
assert!(results[3].is_ok());
assert!(results[3].is_err());
}
#[test]


@@ -38,16 +38,18 @@ use {
},
solana_measure::measure::Measure,
solana_metrics::inc_new_counter_info,
solana_poh::poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS},
solana_poh::poh_recorder::{PohLeaderStatus, PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS},
solana_program_runtime::timings::ExecuteTimings,
solana_rpc::{
optimistically_confirmed_bank_tracker::{BankNotification, BankNotificationSender},
rpc_subscriptions::RpcSubscriptions,
},
solana_runtime::{
accounts_background_service::AbsRequestSender,
bank::{Bank, ExecuteTimings, NewBankOptions},
bank::{Bank, NewBankOptions},
bank_forks::BankForks,
commitment::BlockCommitmentCache,
transaction_cost_metrics_sender::TransactionCostMetricsSender,
vote_sender_types::ReplayVoteSender,
},
solana_sdk::{
@@ -55,6 +57,7 @@ use {
genesis_config::ClusterType,
hash::Hash,
pubkey::Pubkey,
saturating_add_assign,
signature::{Keypair, Signature, Signer},
timing::timestamp,
transaction::Transaction,
@@ -135,6 +138,9 @@ pub struct ReplayStageConfig {
pub wait_for_vote_to_start_leader: bool,
pub ancestor_hashes_replay_update_sender: AncestorHashesReplayUpdateSender,
pub tower_storage: Arc<dyn TowerStorage>,
// Stops voting until this slot has been reached. Should be used to avoid
// duplicate voting which can lead to slashing.
pub wait_to_vote_slot: Option<Slot>,
}
#[derive(Default)]
@@ -161,6 +167,10 @@ pub struct ReplayTiming {
process_duplicate_slots_elapsed: u64,
process_unfrozen_gossip_verified_vote_hashes_elapsed: u64,
repair_correct_slots_elapsed: u64,
generate_new_bank_forks_read_lock_us: u64,
generate_new_bank_forks_get_slots_since_us: u64,
generate_new_bank_forks_loop_us: u64,
generate_new_bank_forks_write_lock_us: u64,
}
impl ReplayTiming {
#[allow(clippy::too_many_arguments)]
@@ -292,7 +302,27 @@ impl ReplayTiming {
"repair_correct_slots_elapsed",
self.repair_correct_slots_elapsed as i64,
i64
)
),
(
"generate_new_bank_forks_read_lock_us",
self.generate_new_bank_forks_read_lock_us as i64,
i64
),
(
"generate_new_bank_forks_get_slots_since_us",
self.generate_new_bank_forks_get_slots_since_us as i64,
i64
),
(
"generate_new_bank_forks_loop_us",
self.generate_new_bank_forks_loop_us as i64,
i64
),
(
"generate_new_bank_forks_write_lock_us",
self.generate_new_bank_forks_write_lock_us as i64,
i64
),
);
*self = ReplayTiming::default();
@@ -329,6 +359,7 @@ impl ReplayStage {
voting_sender: Sender<VoteOp>,
drop_bank_sender: Sender<Vec<Arc<Bank>>>,
block_metadata_notifier: Option<BlockMetadataNotifierLock>,
transaction_cost_metrics_sender: Option<TransactionCostMetricsSender>,
) -> Self {
let ReplayStageConfig {
vote_account,
@@ -346,6 +377,7 @@ impl ReplayStage {
wait_for_vote_to_start_leader,
ancestor_hashes_replay_update_sender,
tower_storage,
wait_to_vote_slot,
} = config;
trace!("replay stage");
@@ -403,6 +435,7 @@ impl ReplayStage {
&leader_schedule_cache,
&rpc_subscriptions,
&mut progress,
&mut replay_timing,
);
generate_new_bank_forks_time.stop();
@@ -435,6 +468,7 @@ impl ReplayStage {
&mut duplicate_slots_to_repair,
&ancestor_hashes_replay_update_sender,
block_metadata_notifier.clone(),
transaction_cost_metrics_sender.as_ref(),
);
replay_active_banks_time.stop();
@@ -565,6 +599,7 @@ impl ReplayStage {
has_new_vote_been_rooted, &mut
last_vote_refresh_time,
&voting_sender,
wait_to_vote_slot,
);
}
}
@@ -648,6 +683,7 @@ impl ReplayStage {
&voting_sender,
&mut epoch_slots_frozen_slots,
&drop_bank_sender,
wait_to_vote_slot,
);
};
voting_time.stop();
@@ -1365,13 +1401,17 @@ impl ReplayStage {
assert!(!poh_recorder.lock().unwrap().has_bank());
let (reached_leader_slot, _grace_ticks, poh_slot, parent_slot) =
poh_recorder.lock().unwrap().reached_leader_slot();
let (poh_slot, parent_slot) = match poh_recorder.lock().unwrap().reached_leader_slot() {
PohLeaderStatus::Reached {
poh_slot,
parent_slot,
} => (poh_slot, parent_slot),
PohLeaderStatus::NotReached => {
trace!("{} poh_recorder hasn't reached_leader_slot", my_pubkey);
return;
}
};
if !reached_leader_slot {
trace!("{} poh_recorder hasn't reached_leader_slot", my_pubkey);
return;
}
trace!("{} reached_leader_slot", my_pubkey);
let parent = bank_forks
@@ -1476,7 +1516,10 @@ impl ReplayStage {
root_slot,
my_pubkey,
rpc_subscriptions,
NewBankOptions { vote_only_bank },
NewBankOptions {
vote_only_bank,
simulation_bank: false,
},
);
let tpu_bank = bank_forks.write().unwrap().insert(tpu_bank);
@@ -1492,6 +1535,7 @@ impl ReplayStage {
bank_progress: &mut ForkProgress,
transaction_status_sender: Option<&TransactionStatusSender>,
replay_vote_sender: &ReplayVoteSender,
transaction_cost_metrics_sender: Option<&TransactionCostMetricsSender>,
verify_recyclers: &VerifyRecyclers,
) -> result::Result<usize, BlockstoreProcessorError> {
let tx_count_before = bank_progress.replay_progress.num_txs;
@@ -1503,6 +1547,7 @@ impl ReplayStage {
false,
transaction_status_sender,
Some(replay_vote_sender),
transaction_cost_metrics_sender,
None,
verify_recyclers,
false,
@@ -1615,6 +1660,7 @@ impl ReplayStage {
voting_sender: &Sender<VoteOp>,
epoch_slots_frozen_slots: &mut EpochSlotsFrozenSlots,
bank_drop_sender: &Sender<Vec<Arc<Bank>>>,
wait_to_vote_slot: Option<Slot>,
) {
if bank.is_empty() {
inc_new_counter_info!("replay_stage-voted_empty_bank", 1);
@@ -1703,6 +1749,7 @@ impl ReplayStage {
*has_new_vote_been_rooted,
replay_timing,
voting_sender,
wait_to_vote_slot,
);
}
@@ -1715,10 +1762,16 @@ impl ReplayStage {
switch_fork_decision: &SwitchForkDecision,
vote_signatures: &mut Vec<Signature>,
has_new_vote_been_rooted: bool,
wait_to_vote_slot: Option<Slot>,
) -> Option<Transaction> {
if authorized_voter_keypairs.is_empty() {
return None;
}
if let Some(slot) = wait_to_vote_slot {
if bank.slot() < slot {
return None;
}
}
let vote_account = match bank.get_vote_account(vote_account_pubkey) {
None => {
warn!(
@@ -1813,6 +1866,7 @@ impl ReplayStage {
has_new_vote_been_rooted: bool,
last_vote_refresh_time: &mut LastVoteRefreshTime,
voting_sender: &Sender<VoteOp>,
wait_to_vote_slot: Option<Slot>,
) {
let last_voted_slot = tower.last_voted_slot();
if last_voted_slot.is_none() {
@@ -1855,6 +1909,7 @@ impl ReplayStage {
&SwitchForkDecision::SameFork,
vote_signatures,
has_new_vote_been_rooted,
wait_to_vote_slot,
);
if let Some(vote_tx) = vote_tx {
@@ -1892,6 +1947,7 @@ impl ReplayStage {
has_new_vote_been_rooted: bool,
replay_timing: &mut ReplayTiming,
voting_sender: &Sender<VoteOp>,
wait_to_vote_slot: Option<Slot>,
) {
let mut generate_time = Measure::start("generate_vote");
let vote_tx = Self::generate_vote_tx(
@@ -1903,6 +1959,7 @@ impl ReplayStage {
switch_fork_decision,
vote_signatures,
has_new_vote_been_rooted,
wait_to_vote_slot,
);
generate_time.stop();
replay_timing.generate_vote_us += generate_time.as_us();
@@ -1992,6 +2049,7 @@ impl ReplayStage {
duplicate_slots_to_repair: &mut DuplicateSlotsToRepair,
ancestor_hashes_replay_update_sender: &AncestorHashesReplayUpdateSender,
block_metadata_notifier: Option<BlockMetadataNotifierLock>,
transaction_cost_metrics_sender: Option<&TransactionCostMetricsSender>,
) -> bool {
let mut did_complete_bank = false;
let mut tx_count = 0;
@@ -2041,6 +2099,7 @@ impl ReplayStage {
bank_progress,
transaction_status_sender,
replay_vote_sender,
transaction_cost_metrics_sender,
verify_recyclers,
);
match replay_result {
@@ -2170,7 +2229,9 @@ impl ReplayStage {
// send accumulated execute-timings to cost_update_service
if !execute_timings.details.per_program_timings.is_empty() {
cost_update_sender
.send(CostUpdate::ExecuteTiming { execute_timings })
.send(CostUpdate::ExecuteTiming {
execute_timings: Box::new(execute_timings),
})
.unwrap_or_else(|err| warn!("cost_update_sender failed: {:?}", err));
}
@@ -2783,24 +2844,34 @@ impl ReplayStage {
leader_schedule_cache: &Arc<LeaderScheduleCache>,
rpc_subscriptions: &Arc<RpcSubscriptions>,
progress: &mut ProgressMap,
replay_timing: &mut ReplayTiming,
) {
// Find the next slot that chains to the old slot
let mut generate_new_bank_forks_read_lock =
Measure::start("generate_new_bank_forks_read_lock");
let forks = bank_forks.read().unwrap();
generate_new_bank_forks_read_lock.stop();
let frozen_banks = forks.frozen_banks();
let frozen_bank_slots: Vec<u64> = frozen_banks
.keys()
.cloned()
.filter(|s| *s >= forks.root())
.collect();
let mut generate_new_bank_forks_get_slots_since =
Measure::start("generate_new_bank_forks_get_slots_since");
let next_slots = blockstore
.get_slots_since(&frozen_bank_slots)
.expect("Db error");
generate_new_bank_forks_get_slots_since.stop();
// Filter out what we've already seen
trace!("generate new forks {:?}", {
let mut next_slots = next_slots.iter().collect::<Vec<_>>();
next_slots.sort();
next_slots
});
let mut generate_new_bank_forks_loop = Measure::start("generate_new_bank_forks_loop");
let mut new_banks = HashMap::new();
for (parent_slot, children) in next_slots {
let parent_bank = frozen_banks
@@ -2841,11 +2912,31 @@ impl ReplayStage {
}
}
drop(forks);
generate_new_bank_forks_loop.stop();
let mut generate_new_bank_forks_write_lock =
Measure::start("generate_new_bank_forks_write_lock");
let mut forks = bank_forks.write().unwrap();
for (_, bank) in new_banks {
forks.insert(bank);
}
generate_new_bank_forks_write_lock.stop();
saturating_add_assign!(
replay_timing.generate_new_bank_forks_read_lock_us,
generate_new_bank_forks_read_lock.as_us()
);
saturating_add_assign!(
replay_timing.generate_new_bank_forks_get_slots_since_us,
generate_new_bank_forks_get_slots_since.as_us()
);
saturating_add_assign!(
replay_timing.generate_new_bank_forks_loop_us,
generate_new_bank_forks_loop.as_us()
);
saturating_add_assign!(
replay_timing.generate_new_bank_forks_write_lock_us,
generate_new_bank_forks_write_lock.as_us()
);
}
fn new_bank_from_parent_with_notify(
@@ -3119,12 +3210,14 @@ pub mod tests {
.unwrap()
.get(NUM_CONSECUTIVE_LEADER_SLOTS)
.is_none());
let mut replay_timing = ReplayTiming::default();
ReplayStage::generate_new_bank_forks(
&blockstore,
&bank_forks,
&leader_schedule_cache,
&rpc_subscriptions,
&mut progress,
&mut replay_timing,
);
assert!(bank_forks
.read()
@@ -3147,6 +3240,7 @@ pub mod tests {
&leader_schedule_cache,
&rpc_subscriptions,
&mut progress,
&mut replay_timing,
);
assert!(bank_forks
.read()
@@ -3582,6 +3676,7 @@ pub mod tests {
bank1_progress,
None,
&replay_vote_sender,
None,
&VerifyRecyclers::default(),
);
let max_complete_transaction_status_slot = Arc::new(AtomicU64::default());
@@ -3751,10 +3846,12 @@ pub mod tests {
#[test]
fn test_write_persist_transaction_status() {
let GenesisConfigInfo {
genesis_config,
mut genesis_config,
mint_keypair,
..
} = create_genesis_config(1000);
} = create_genesis_config(solana_sdk::native_token::sol_to_lamports(1000.0));
genesis_config.rent.lamports_per_byte_year = 50;
genesis_config.rent.exemption_threshold = 2.0;
let (ledger_path, _) = create_new_tmp_ledger!(&genesis_config);
{
let blockstore = Blockstore::open(&ledger_path)
@@ -3767,7 +3864,11 @@ pub mod tests {
let bank0 = Arc::new(Bank::new_for_tests(&genesis_config));
bank0
.transfer(4, &mint_keypair, &keypair2.pubkey())
.transfer(
bank0.get_minimum_balance_for_rent_exemption(0),
&mint_keypair,
&keypair2.pubkey(),
)
.unwrap();
let bank1 = Arc::new(Bank::new_from_parent(&bank0, &Pubkey::default(), 1));
@@ -4332,7 +4433,7 @@ pub mod tests {
// runs in `update_propagation_status`
assert!(!progress_map.is_propagated(10));
let vote_tracker = VoteTracker::new(&bank_forks.root_bank());
let vote_tracker = VoteTracker::default();
vote_tracker.insert_vote(10, vote_pubkey);
ReplayStage::update_propagation_status(
&mut progress_map,
@@ -4417,7 +4518,7 @@ pub mod tests {
);
}
let vote_tracker = VoteTracker::new(&bank_forks.root_bank());
let vote_tracker = VoteTracker::default();
for vote_pubkey in &vote_pubkeys {
// Insert a vote for the last bank for each voter
vote_tracker.insert_vote(10, *vote_pubkey);
@@ -4504,7 +4605,7 @@ pub mod tests {
progress_map.insert(i, fork_progress);
}
let vote_tracker = VoteTracker::new(&bank_forks.root_bank());
let vote_tracker = VoteTracker::default();
// Insert a new vote
vote_tracker.insert_vote(10, vote_pubkeys[2]);
@@ -5647,6 +5748,7 @@ pub mod tests {
has_new_vote_been_rooted,
&mut ReplayTiming::default(),
&voting_sender,
None,
);
let vote_info = voting_receiver
.recv_timeout(Duration::from_secs(1))
@@ -5686,6 +5788,7 @@ pub mod tests {
has_new_vote_been_rooted,
&mut last_vote_refresh_time,
&voting_sender,
None,
);
// No new votes have been submitted to gossip
@@ -5711,6 +5814,7 @@ pub mod tests {
has_new_vote_been_rooted,
&mut ReplayTiming::default(),
&voting_sender,
None,
);
let vote_info = voting_receiver
.recv_timeout(Duration::from_secs(1))
@@ -5742,6 +5846,7 @@ pub mod tests {
has_new_vote_been_rooted,
&mut last_vote_refresh_time,
&voting_sender,
None,
);
// No new votes have been submitted to gossip
@@ -5779,6 +5884,7 @@ pub mod tests {
has_new_vote_been_rooted,
&mut last_vote_refresh_time,
&voting_sender,
None,
);
let vote_info = voting_receiver
.recv_timeout(Duration::from_secs(1))
@@ -5846,6 +5952,7 @@ pub mod tests {
has_new_vote_been_rooted,
&mut last_vote_refresh_time,
&voting_sender,
None,
);
let votes = cluster_info.get_votes(&mut cursor);


@@ -7,13 +7,15 @@
use {
crate::sigverify,
core::time::Duration,
crossbeam_channel::{SendError, Sender as CrossbeamSender},
itertools::Itertools,
solana_measure::measure::Measure,
solana_perf::packet::PacketBatch,
solana_perf::sigverify::{count_valid_packets, shrink_batches, Deduper},
solana_sdk::timing,
solana_streamer::streamer::{self, PacketBatchReceiver, StreamerError},
std::{
collections::HashMap,
sync::mpsc::{Receiver, RecvTimeoutError},
thread::{self, Builder, JoinHandle},
time::Instant,
@@ -49,10 +51,17 @@ pub struct DisabledSigVerifier {}
struct SigVerifierStats {
recv_batches_us_hist: histogram::Histogram, // time to call recv_batch
verify_batches_pp_us_hist: histogram::Histogram, // per-packet time to call verify_batch
discard_packets_pp_us_hist: histogram::Histogram, // per-packet time to discard excess packets
dedup_packets_pp_us_hist: histogram::Histogram, // per-packet time to dedup packets
batches_hist: histogram::Histogram, // number of packet batches per verify call
packets_hist: histogram::Histogram, // number of packets per verify call
total_batches: usize,
total_packets: usize,
total_dedup: usize,
total_excess_fail: usize,
total_shrink_time: usize,
total_shrinks: usize,
total_valid_packets: usize,
}
impl SigVerifierStats {
@@ -99,6 +108,48 @@ impl SigVerifierStats {
self.verify_batches_pp_us_hist.mean().unwrap_or(0),
i64
),
(
"discard_packets_pp_us_90pct",
self.discard_packets_pp_us_hist
.percentile(90.0)
.unwrap_or(0),
i64
),
(
"discard_packets_pp_us_min",
self.discard_packets_pp_us_hist.minimum().unwrap_or(0),
i64
),
(
"discard_packets_pp_us_max",
self.discard_packets_pp_us_hist.maximum().unwrap_or(0),
i64
),
(
"discard_packets_pp_us_mean",
self.discard_packets_pp_us_hist.mean().unwrap_or(0),
i64
),
(
"dedup_packets_pp_us_90pct",
self.dedup_packets_pp_us_hist.percentile(90.0).unwrap_or(0),
i64
),
(
"dedup_packets_pp_us_min",
self.dedup_packets_pp_us_hist.minimum().unwrap_or(0),
i64
),
(
"dedup_packets_pp_us_max",
self.dedup_packets_pp_us_hist.maximum().unwrap_or(0),
i64
),
(
"dedup_packets_pp_us_mean",
self.dedup_packets_pp_us_hist.mean().unwrap_or(0),
i64
),
(
"batches_90pct",
self.batches_hist.percentile(90.0).unwrap_or(0),
@@ -117,6 +168,11 @@ impl SigVerifierStats {
("packets_mean", self.packets_hist.mean().unwrap_or(0), i64),
("total_batches", self.total_batches, i64),
("total_packets", self.total_packets, i64),
("total_dedup", self.total_dedup, i64),
("total_excess_fail", self.total_excess_fail, i64),
("total_shrink_time", self.total_shrink_time, i64),
("total_shrinks", self.total_shrinks, i64),
("total_valid_packets", self.total_valid_packets, i64),
);
}
}
@@ -139,38 +195,32 @@ impl SigVerifyStage {
Self { thread_hdl }
}
pub fn discard_excess_packets(batches: &mut Vec<PacketBatch>, max_packets: usize) {
let mut received_ips = HashMap::new();
for (batch_index, batch) in batches.iter().enumerate() {
for (packet_index, packets) in batch.packets.iter().enumerate() {
let e = received_ips
.entry(packets.meta.addr().ip())
.or_insert_with(Vec::new);
e.push((batch_index, packet_index));
}
pub fn discard_excess_packets(batches: &mut [PacketBatch], mut max_packets: usize) {
// Group packets by their incoming IP address.
let mut addrs = batches
.iter_mut()
.rev()
.flat_map(|batch| batch.packets.iter_mut().rev())
.map(|packet| (packet.meta.addr, packet))
.into_group_map();
// Allocate max_packets evenly across addresses.
while max_packets > 0 && !addrs.is_empty() {
let num_addrs = addrs.len();
addrs.retain(|_, packets| {
let cap = (max_packets + num_addrs - 1) / num_addrs;
max_packets -= packets.len().min(cap);
packets.truncate(packets.len().saturating_sub(cap));
!packets.is_empty()
});
}
let mut batch_len = 0;
while batch_len < max_packets {
for (_ip, indexes) in received_ips.iter_mut() {
if !indexes.is_empty() {
indexes.remove(0);
batch_len += 1;
if batch_len >= MAX_SIGVERIFY_BATCH {
break;
}
}
}
}
for (_addr, indexes) in received_ips {
for (batch_index, packet_index) in indexes {
batches[batch_index].packets[packet_index]
.meta
.set_discard(true);
}
// Discard excess packets from each address.
for packet in addrs.into_values().flatten() {
packet.meta.set_discard(true);
}
}
fn verifier<T: SigVerifier>(
deduper: &Deduper,
recvr: &PacketBatchReceiver,
sendr: &CrossbeamSender<Vec<PacketBatch>>,
verifier: &T,
@@ -184,12 +234,35 @@ impl SigVerifyStage {
timing::timestamp(),
num_packets,
);
if num_packets > MAX_SIGVERIFY_BATCH {
Self::discard_excess_packets(&mut batches, MAX_SIGVERIFY_BATCH);
}
let mut dedup_time = Measure::start("sigverify_dedup_time");
let dedup_fail = deduper.dedup_packets(&mut batches) as usize;
dedup_time.stop();
let num_unique = num_packets.saturating_sub(dedup_fail);
let mut discard_time = Measure::start("sigverify_discard_time");
if num_unique > MAX_SIGVERIFY_BATCH {
Self::discard_excess_packets(&mut batches, MAX_SIGVERIFY_BATCH)
};
let excess_fail = num_unique.saturating_sub(MAX_SIGVERIFY_BATCH);
discard_time.stop();
let mut verify_batch_time = Measure::start("sigverify_batch_time");
sendr.send(verifier.verify_batches(batches))?;
let mut batches = verifier.verify_batches(batches);
verify_batch_time.stop();
let mut shrink_time = Measure::start("sigverify_shrink_time");
let num_valid_packets = count_valid_packets(&batches);
let start_len = batches.len();
const MAX_EMPTY_BATCH_RATIO: usize = 4;
if num_packets > num_valid_packets.saturating_mul(MAX_EMPTY_BATCH_RATIO) {
let valid = shrink_batches(&mut batches);
batches.truncate(valid);
}
let total_shrinks = start_len.saturating_sub(batches.len());
shrink_time.stop();
sendr.send(batches)?;
verify_batch_time.stop();
debug!(
@@ -209,10 +282,23 @@ impl SigVerifyStage {
.verify_batches_pp_us_hist
.increment(verify_batch_time.as_us() / (num_packets as u64))
.unwrap();
stats
.discard_packets_pp_us_hist
.increment(discard_time.as_us() / (num_packets as u64))
.unwrap();
stats
.dedup_packets_pp_us_hist
.increment(dedup_time.as_us() / (num_packets as u64))
.unwrap();
stats.batches_hist.increment(batches_len as u64).unwrap();
stats.packets_hist.increment(num_packets as u64).unwrap();
stats.total_batches += batches_len;
stats.total_packets += num_packets;
stats.total_dedup += dedup_fail;
stats.total_valid_packets += num_valid_packets;
stats.total_excess_fail += excess_fail;
stats.total_shrink_time += shrink_time.as_us() as usize;
stats.total_shrinks += total_shrinks;
Ok(())
}
@@ -225,29 +311,39 @@ impl SigVerifyStage {
let verifier = verifier.clone();
let mut stats = SigVerifierStats::default();
let mut last_print = Instant::now();
const MAX_DEDUPER_AGE: Duration = Duration::from_secs(2);
const MAX_DEDUPER_ITEMS: u32 = 1_000_000;
Builder::new()
.name("solana-verifier".to_string())
.spawn(move || loop {
if let Err(e) =
Self::verifier(&packet_receiver, &verified_sender, &verifier, &mut stats)
{
match e {
SigVerifyServiceError::Streamer(StreamerError::RecvTimeout(
RecvTimeoutError::Disconnected,
)) => break,
SigVerifyServiceError::Streamer(StreamerError::RecvTimeout(
RecvTimeoutError::Timeout,
)) => (),
SigVerifyServiceError::Send(_) => {
break;
.spawn(move || {
let mut deduper = Deduper::new(MAX_DEDUPER_ITEMS, MAX_DEDUPER_AGE);
loop {
deduper.reset();
if let Err(e) = Self::verifier(
&deduper,
&packet_receiver,
&verified_sender,
&verifier,
&mut stats,
) {
match e {
SigVerifyServiceError::Streamer(StreamerError::RecvTimeout(
RecvTimeoutError::Disconnected,
)) => break,
SigVerifyServiceError::Streamer(StreamerError::RecvTimeout(
RecvTimeoutError::Timeout,
)) => (),
SigVerifyServiceError::Send(_) => {
break;
}
_ => error!("{:?}", e),
}
_ => error!("{:?}", e),
}
}
if last_print.elapsed().as_secs() > 2 {
stats.report();
stats = SigVerifierStats::default();
last_print = Instant::now();
if last_print.elapsed().as_secs() > 2 {
stats.report();
stats = SigVerifierStats::default();
last_print = Instant::now();
}
}
})
.unwrap()
@@ -268,6 +364,12 @@ impl SigVerifyStage {
#[cfg(test)]
mod tests {
use crate::sigverify::TransactionSigVerifier;
use crate::sigverify_stage::timing::duration_as_ms;
use crossbeam_channel::unbounded;
use solana_perf::packet::to_packet_batches;
use solana_perf::test_tx::test_tx;
use std::sync::mpsc::channel;
use {super::*, solana_perf::packet::Packet};
fn count_non_discard(packet_batches: &[PacketBatch]) -> usize {
@@ -296,4 +398,58 @@ mod tests {
assert!(!batches[0].packets[0].meta.discard());
assert!(!batches[0].packets[3].meta.discard());
}
fn gen_batches(use_same_tx: bool) -> Vec<PacketBatch> {
let len = 4096;
let chunk_size = 1024;
if use_same_tx {
let tx = test_tx();
to_packet_batches(&vec![tx; len], chunk_size)
} else {
let txs: Vec<_> = (0..len).map(|_| test_tx()).collect();
to_packet_batches(&txs, chunk_size)
}
}
#[test]
fn test_sigverify_stage() {
solana_logger::setup();
trace!("start");
let (packet_s, packet_r) = channel();
let (verified_s, verified_r) = unbounded();
let verifier = TransactionSigVerifier::default();
let stage = SigVerifyStage::new(packet_r, verified_s, verifier);
let use_same_tx = true;
let now = Instant::now();
let mut batches = gen_batches(use_same_tx);
trace!(
"starting... generation took: {} ms batches: {}",
duration_as_ms(&now.elapsed()),
batches.len()
);
let mut sent_len = 0;
for _ in 0..batches.len() {
if let Some(batch) = batches.pop() {
sent_len += batch.packets.len();
packet_s.send(batch).unwrap();
}
}
let mut received = 0;
trace!("sent: {}", sent_len);
loop {
if let Ok(mut verifieds) = verified_r.recv_timeout(Duration::from_millis(10)) {
while let Some(v) = verifieds.pop() {
received += v.packets.len();
batches.push(v);
}
if use_same_tx || received >= sent_len {
break;
}
}
}
trace!("received: {}", received);
drop(packet_s);
stage.join().unwrap();
}
}


@@ -177,11 +177,11 @@ impl SystemMonitorService {
);
}
fn calc_percent(numerator: u64, denom: u64) -> f32 {
fn calc_percent(numerator: u64, denom: u64) -> f64 {
if denom == 0 {
0.0
} else {
(numerator as f32 / denom as f32) * 100.0
(numerator as f64 / denom as f64) * 100.0
}
}
@@ -281,4 +281,11 @@ UdpLite: 0 0 0 0 0 0 0 0" as &[u8];
let stats = parse_udp_stats(&mut mock_snmp);
assert!(stats.is_err());
}
#[test]
fn test_calc_percent() {
assert!(SystemMonitorService::calc_percent(99, 100) < 100.0);
let one_tb_as_kb = (1u64 << 40) >> 10;
assert!(SystemMonitorService::calc_percent(one_tb_as_kb - 1, one_tb_as_kb) < 100.0);
}
}
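The f32 → f64 change above matters because f32 carries only a 24-bit mantissa: for a terabyte-scale counter expressed in KB, the ratio rounds to exactly 1.0 and reports a false 100%. A minimal standalone sketch of that rounding behavior (illustrative only, not part of the patch):

```rust
fn main() {
    // Same quantity the new test uses: 1 TB expressed in KB, i.e. 2^30.
    let one_tb_as_kb: u64 = (1u64 << 40) >> 10;

    // With a 24-bit mantissa, 2^30 - 1 rounds up to 2^30 as f32, so the
    // ratio is exactly 1.0 and the percentage reads a false 100%.
    let pct_f32 = ((one_tb_as_kb - 1) as f32 / one_tb_as_kb as f32) * 100.0;

    // f64 keeps enough precision to stay strictly below 100%.
    let pct_f64 = ((one_tb_as_kb - 1) as f64 / one_tb_as_kb as f64) * 100.0;

    assert_eq!(pct_f32, 100.0);
    assert!(pct_f64 < 100.0);
}
```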


@@ -26,6 +26,7 @@ use {
cost_model::CostModel,
vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender},
},
solana_sdk::signature::Keypair,
std::{
net::UdpSocket,
sync::{
@@ -46,6 +47,7 @@ pub struct Tpu {
banking_stage: BankingStage,
cluster_info_vote_listener: ClusterInfoVoteListener,
broadcast_stage: BroadcastStage,
tpu_quic_t: thread::JoinHandle<()>,
}
impl Tpu {
@@ -59,6 +61,7 @@ impl Tpu {
tpu_forwards_sockets: Vec<UdpSocket>,
tpu_vote_sockets: Vec<UdpSocket>,
broadcast_sockets: Vec<UdpSocket>,
transactions_quic_socket: UdpSocket,
subscriptions: &Arc<RpcSubscriptions>,
transaction_status_sender: Option<TransactionStatusSender>,
blockstore: &Arc<Blockstore>,
@@ -75,6 +78,7 @@ impl Tpu {
tpu_coalesce_ms: u64,
cluster_confirmed_slot_sender: GossipDuplicateConfirmedSlotsSender,
cost_model: &Arc<RwLock<CostModel>>,
keypair: &Keypair,
) -> Self {
let (packet_sender, packet_receiver) = channel();
let (vote_packet_sender, vote_packet_receiver) = channel();
@@ -90,6 +94,15 @@ impl Tpu {
);
let (verified_sender, verified_receiver) = unbounded();
let tpu_quic_t = solana_streamer::quic::spawn_server(
transactions_quic_socket,
keypair,
cluster_info.my_contact_info().tpu.ip(),
packet_sender,
exit.clone(),
)
.unwrap();
let sigverify_stage = {
let verifier = TransactionSigVerifier::default();
SigVerifyStage::new(packet_receiver, verified_sender, verifier)
@@ -153,6 +166,7 @@ impl Tpu {
banking_stage,
cluster_info_vote_listener,
broadcast_stage,
tpu_quic_t,
}
}
@@ -164,6 +178,7 @@ impl Tpu {
self.cluster_info_vote_listener.join(),
self.banking_stage.join(),
];
self.tpu_quic_t.join()?;
let broadcast_result = self.broadcast_stage.join();
for result in results {
result?;


@@ -49,6 +49,9 @@ use {
snapshot_package::{
AccountsPackageReceiver, AccountsPackageSender, PendingSnapshotPackage,
},
transaction_cost_metrics_sender::{
TransactionCostMetricsSender, TransactionCostMetricsService,
},
vote_sender_types::ReplayVoteSender,
},
solana_sdk::{clock::Slot, pubkey::Pubkey, signature::Keypair},
@@ -76,6 +79,7 @@ pub struct Tvu {
cost_update_service: CostUpdateService,
voting_service: VotingService,
drop_bank_service: DropBankService,
transaction_cost_metrics_service: TransactionCostMetricsService,
}
pub struct Sockets {
@@ -145,6 +149,7 @@ impl Tvu {
accounts_package_channel: (AccountsPackageSender, AccountsPackageReceiver),
last_full_snapshot_slot: Option<Slot>,
block_metadata_notifier: Option<BlockMetadataNotifierLock>,
wait_to_vote_slot: Option<Slot>,
) -> Self {
let Sockets {
repair: repair_socket,
@@ -293,6 +298,7 @@ impl Tvu {
wait_for_vote_to_start_leader: tvu_config.wait_for_vote_to_start_leader,
ancestor_hashes_replay_update_sender,
tower_storage: tower_storage.clone(),
wait_to_vote_slot,
};
let (voting_sender, voting_receiver) = channel();
@@ -305,14 +311,19 @@ impl Tvu {
);
let (cost_update_sender, cost_update_receiver) = channel();
let cost_update_service = CostUpdateService::new(
exit.clone(),
blockstore.clone(),
cost_model.clone(),
cost_update_receiver,
);
let cost_update_service =
CostUpdateService::new(blockstore.clone(), cost_model.clone(), cost_update_receiver);
let (drop_bank_sender, drop_bank_receiver) = channel();
let (tx_cost_metrics_sender, tx_cost_metrics_receiver) = unbounded();
let transaction_cost_metrics_sender = Some(TransactionCostMetricsSender::new(
cost_model.clone(),
tx_cost_metrics_sender,
));
let transaction_cost_metrics_service =
TransactionCostMetricsService::new(tx_cost_metrics_receiver);
let drop_bank_service = DropBankService::new(drop_bank_receiver);
let replay_stage = ReplayStage::new(
@@ -336,6 +347,7 @@ impl Tvu {
voting_sender,
drop_bank_sender,
block_metadata_notifier,
transaction_cost_metrics_sender,
);
let ledger_cleanup_service = tvu_config.max_ledger_shreds.map(|max_ledger_shreds| {
@@ -370,6 +382,7 @@ impl Tvu {
cost_update_service,
voting_service,
drop_bank_service,
transaction_cost_metrics_service,
}
}
@@ -386,6 +399,7 @@ impl Tvu {
self.cost_update_service.join()?;
self.voting_service.join()?;
self.drop_bank_service.join()?;
self.transaction_cost_metrics_service.join()?;
Ok(())
}
}
@@ -443,7 +457,7 @@ pub mod tests {
let blockstore = Arc::new(blockstore);
let bank = bank_forks.working_bank();
let (exit, poh_recorder, poh_service, _entry_receiver) =
create_test_recorder(&bank, &blockstore, None);
create_test_recorder(&bank, &blockstore, None, None);
let vote_keypair = Keypair::new();
let leader_schedule_cache = Arc::new(LeaderScheduleCache::new_from_bank(&bank));
let block_commitment_cache = Arc::new(RwLock::new(BlockCommitmentCache::default()));
@@ -491,7 +505,7 @@ pub mod tests {
None,
None,
None,
Arc::new(VoteTracker::new(&bank)),
Arc::<VoteTracker>::default(),
retransmit_slots_sender,
gossip_verified_vote_hash_receiver,
verified_vote_receiver,
@@ -505,6 +519,7 @@ pub mod tests {
accounts_package_channel,
None,
None,
None,
);
exit.store(true, Ordering::Relaxed);
tvu.join().unwrap();


@@ -95,7 +95,6 @@ use {
std::{
collections::{HashMap, HashSet},
net::SocketAddr,
ops::Deref,
path::{Path, PathBuf},
sync::{
atomic::{AtomicBool, AtomicU64, Ordering},
@@ -165,6 +164,7 @@ pub struct ValidatorConfig {
pub validator_exit: Arc<RwLock<Exit>>,
pub no_wait_for_vote_to_start_leader: bool,
pub accounts_shrink_ratio: AccountShrinkThreshold,
pub wait_to_vote_slot: Option<Slot>,
}
impl Default for ValidatorConfig {
@@ -224,6 +224,16 @@ impl Default for ValidatorConfig {
no_wait_for_vote_to_start_leader: true,
accounts_shrink_ratio: AccountShrinkThreshold::default(),
accounts_db_config: None,
wait_to_vote_slot: None,
}
}
}
impl ValidatorConfig {
pub fn default_for_test() -> Self {
Self {
rpc_config: JsonRpcConfig::default_for_test(),
..Self::default()
}
}
}
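As a hedged illustration of the new `wait_to_vote_slot` knob (only the field and `ValidatorConfig::default_for_test` come from this diff; the slot value and the standalone setup are made up):

```rust
use solana_core::validator::ValidatorConfig;
use solana_sdk::clock::Slot;

fn main() {
    // Hypothetical restart scenario: refuse to vote until this slot has been
    // replayed, to avoid casting a duplicate (slashable) vote.
    let resume_slot: Slot = 100_000_000;
    let config = ValidatorConfig {
        wait_to_vote_slot: Some(resume_slot),
        ..ValidatorConfig::default_for_test()
    };
    assert_eq!(config.wait_to_vote_slot, Some(resume_slot));
}
```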
@@ -286,6 +296,7 @@ pub struct Validator {
tvu: Tvu,
ip_echo_server: Option<solana_net_utils::IpEchoServer>,
pub cluster_info: Arc<ClusterInfo>,
pub bank_forks: Arc<RwLock<BankForks>>,
accountsdb_repl_service: Option<AccountsDbReplService>,
accountsdb_plugin_service: Option<AccountsDbPluginService>,
}
@@ -530,14 +541,17 @@ impl Validator {
}
}
let mut cluster_info =
ClusterInfo::new(node.info.clone(), identity_keypair, socket_addr_space);
let mut cluster_info = ClusterInfo::new(
node.info.clone(),
identity_keypair.clone(),
socket_addr_space,
);
cluster_info.set_contact_debug_interval(config.contact_debug_interval);
cluster_info.set_entrypoints(cluster_entrypoints);
cluster_info.restore_contact_info(ledger_path, config.contact_save_interval);
let cluster_info = Arc::new(cluster_info);
let mut block_commitment_cache = BlockCommitmentCache::default();
block_commitment_cache.initialize_slots(bank.slot());
block_commitment_cache.initialize_slots(bank.slot(), bank_forks.read().unwrap().root());
let block_commitment_cache = Arc::new(RwLock::new(block_commitment_cache));
let optimistically_confirmed_bank =
@@ -653,7 +667,7 @@ impl Validator {
leader_schedule_cache.clone(),
max_complete_transaction_status_slot,
)),
if config.rpc_config.minimal_api {
if !config.rpc_config.full_api {
None
} else {
let (trigger, pubsub_service) = PubSubService::new(
@@ -789,10 +803,7 @@ impl Validator {
"New shred signal for the TVU should be the same as the clear bank signal."
);
let vote_tracker = Arc::new(VoteTracker::new(
bank_forks.read().unwrap().root_bank().deref(),
));
let vote_tracker = Arc::<VoteTracker>::default();
let mut cost_model = CostModel::default();
cost_model.initialize_cost_table(&blockstore.read_program_costs().unwrap());
let cost_model = Arc::new(RwLock::new(cost_model));
@@ -884,6 +895,7 @@ impl Validator {
accounts_package_channel,
last_full_snapshot_slot,
block_metadata_notifier,
config.wait_to_vote_slot,
);
let tpu = Tpu::new(
@@ -895,6 +907,7 @@ impl Validator {
node.sockets.tpu_forwards,
node.sockets.tpu_vote,
node.sockets.broadcast,
node.sockets.tpu_quic,
&rpc_subscriptions,
transaction_status_sender,
&blockstore,
@@ -902,7 +915,7 @@ impl Validator {
&exit,
node.info.shred_version,
vote_tracker,
bank_forks,
bank_forks.clone(),
verified_vote_sender,
gossip_verified_vote_hash_sender,
replay_vote_receiver,
@@ -911,6 +924,7 @@ impl Validator {
config.tpu_coalesce_ms,
cluster_confirmed_slot_sender,
&cost_model,
&identity_keypair,
);
datapoint_info!("validator-new", ("id", id.to_string(), String));
@@ -937,6 +951,7 @@ impl Validator {
ip_echo_server,
validator_exit: config.validator_exit.clone(),
cluster_info,
bank_forks,
accountsdb_repl_service,
accountsdb_plugin_service,
}
@@ -978,6 +993,7 @@ impl Validator {
}
pub fn join(self) {
drop(self.bank_forks);
drop(self.cluster_info);
self.poh_service.join().expect("poh_service");
@@ -1775,7 +1791,7 @@ mod tests {
let voting_keypair = Arc::new(Keypair::new());
let config = ValidatorConfig {
rpc_addrs: Some((validator_node.info.rpc, validator_node.info.rpc_pubsub)),
..ValidatorConfig::default()
..ValidatorConfig::default_for_test()
};
let start_progress = Arc::new(RwLock::new(ValidatorStartProgress::default()));
let validator = Validator::new(
@@ -1857,7 +1873,7 @@ mod tests {
let vote_account_keypair = Keypair::new();
let config = ValidatorConfig {
rpc_addrs: Some((validator_node.info.rpc, validator_node.info.rpc_pubsub)),
..ValidatorConfig::default()
..ValidatorConfig::default_for_test()
};
Validator::new(
validator_node,
@@ -1900,7 +1916,7 @@ mod tests {
let (genesis_config, _mint_keypair) = create_genesis_config(1);
let bank = Arc::new(Bank::new_for_tests(&genesis_config));
let mut config = ValidatorConfig::default();
let mut config = ValidatorConfig::default_for_test();
let rpc_override_health_check = Arc::new(AtomicBool::new(false));
let start_progress = Arc::new(RwLock::new(ValidatorStartProgress::default()));


@@ -9,6 +9,9 @@ source ../ci/rust-version.sh stable
: "${rust_stable:=}" # Pacify shellcheck
# pre-build with output enabled to appease Travis CI's hang check
"$cargo" build -p solana-cli
usage=$("$cargo" stable -q run -p solana-cli -- -C ~/.foo --help | sed -e 's|'"$HOME"'|~|g' -e 's/[[:space:]]\+$//')
out=${1:-src/cli/usage.md}


@@ -3,17 +3,6 @@ module.exports = {
About: ["introduction", "terminology", "history"],
Wallets: [
"wallet-guide",
"wallet-guide/apps",
{
type: "category",
label: "Web Wallets",
items: ["wallet-guide/web-wallets", "wallet-guide/solflare"],
},
{
type: "category",
label: "Hardware Wallets",
items: ["wallet-guide/ledger-live"],
},
{
type: "category",
label: "Command-line Wallets",


@@ -36,4 +36,4 @@ Solana rotates leaders at fixed intervals, called _slots_. Each leader may only
Next, transactions are broken into batches so that a node can send transactions to multiple parties without making multiple copies. If, for example, the leader needed to send 60 transactions to 6 nodes, it would break that collection of 60 into batches of 10 transactions and send one to each node. This allows the leader to put 60 transactions on the wire, not 60 transactions for each node. Each node then shares its batch with its peers. Once the node has collected all 6 batches, it reconstructs the original set of 60 transactions.
A batch of transactions can only be split so many times before it is so small that header information becomes the primary consumer of network bandwidth. At the time of this writing (December, 2021), the approach is scaling well up to about 1,250 validators. To scale up to hundreds of thousands of validators, each node can apply the same technique as the leader node to another set of nodes of equal size. We call the technique [_Turbine Block Propogation_](turbine-block-propagation.md).
A batch of transactions can only be split so many times before it is so small that header information becomes the primary consumer of network bandwidth. At the time of this writing (December, 2021), the approach is scaling well up to about 1,250 validators. To scale up to hundreds of thousands of validators, each node can apply the same technique as the leader node to another set of nodes of equal size. We call the technique [_Turbine Block Propagation_](turbine-block-propagation.md).
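A toy sketch of the batching arithmetic in the paragraph above (60 transactions split across 6 nodes); the plain integer vector stands in for real transactions and nothing here reflects the actual shred or Turbine types:

```rust
fn main() {
    // 60 transactions and 6 peer nodes -> 6 batches of 10.
    let transactions: Vec<u32> = (0..60).collect();
    let num_nodes = 6;
    let batch_size = (transactions.len() + num_nodes - 1) / num_nodes;

    let batches: Vec<&[u32]> = transactions.chunks(batch_size).collect();
    assert_eq!(batches.len(), num_nodes);

    // The leader puts 60 transactions on the wire (one batch per node); each
    // node later reassembles the full set from the batches shared by its peers.
    let reassembled: Vec<u32> = batches.concat();
    assert_eq!(reassembled, transactions);
}
```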


@@ -6,54 +6,12 @@ A validator receives entries from the current leader and submits votes confirmin
The validator votes on its chosen fork by submitting a transaction that uses an asymmetric key to sign the result of its validation work. Other entities can verify this signature using the validator's public key. If the validator's key is used to sign incorrect data \(e.g. votes on multiple forks of the ledger\), the node's stake or its resources could be compromised.
Solana addresses this risk by splitting off a separate _vote signer_ service that evaluates each vote to ensure it does not violate a slashing condition.
## Validators, Vote Signers, and Stakeholders
When a validator receives multiple blocks for the same slot, it tracks all possible forks until it can determine a "best" one. A validator selects the best fork by submitting a vote to it, using a vote signer to minimize the possibility of its vote inadvertently violating a consensus rule and getting a stake slashed.
A vote signer evaluates the vote proposed by the validator and signs the vote only if it does not violate a slashing condition. A vote signer only needs to maintain minimal state regarding the votes it signed and the votes signed by the rest of the cluster. It doesn't need to process a full set of transactions.
When a validator receives multiple blocks for the same slot, it tracks all possible forks until it can determine a "best" one. A validator selects the best fork by submitting a vote to it.
A stakeholder is an identity that has control of the staked capital. The stakeholder can delegate its stake to the vote signer. Once a stake is delegated, the vote signer's votes represent the voting weight of all the delegated stakes, and produce rewards for all the delegated stakes.
Currently, there is a 1:1 relationship between validators and vote signers, and stakeholders delegate their entire stake to a single vote signer.
## Signing service
The vote signing service consists of a JSON RPC server and a request processor. At startup, the service starts the RPC server at a configured port and waits for validator requests. It expects the following type of requests:
1. Register a new validator node
- The request must contain validator's identity \(public key\)
- The request must be signed with the validator's private key
- The service drops the request if signature of the request cannot be verified
- The service creates a new voting asymmetric key for the validator, and returns the public key as a response
- If a validator tries to register again, the service returns the public key from the pre-existing keypair
2. Sign a vote
- The request must contain a voting transaction and all verification data
- The request must be signed with the validator's private key
- The service drops the request if signature of the request cannot be verified
- The service verifies the voting data
- The service returns a signature for the transaction
## Validator voting
A validator node, at startup, creates a new vote account and registers it with the cluster by submitting a new "vote register" transaction. The other nodes on the cluster process this transaction and include the new validator in the active set. Subsequently, the validator submits a "new vote" transaction signed with the validator's voting private key on each voting event.
### Configuration
The validator node is configured with the signing service's network endpoint \(IP/Port\).
### Registration
At startup, the validator registers itself with its signing service using JSON RPC. The RPC call returns the voting public key for the validator node. The validator creates a new "vote register" transaction including this public key, and submits it to the cluster.
### Vote Collection
The validator looks up the votes submitted by all the nodes in the cluster for the last voting period. This information is submitted to the signing service with a new vote signing request.
### New Vote Signing
The validator creates a "new vote" transaction and sends it to the signing service using JSON RPC. The RPC request also includes the vote verification data. On success, the RPC call returns the signature for the vote. On failure, RPC call returns the failure code.
A validator node, at startup, creates a new vote account and registers it with the cluster via gossip. The other nodes on the cluster include the new validator in the active set. Subsequently, the validator submits a "new vote" transaction signed with the validator's voting private key on each voting event.


@@ -51,7 +51,7 @@ $ solana-validator \
--only-known-rpc \
--ledger ledger \
--rpc-port 8899 \
--dynamic-port-range 8000-8010 \
--dynamic-port-range 8000-8020 \
--entrypoint entrypoint.devnet.solana.com:8001 \
--entrypoint entrypoint2.devnet.solana.com:8001 \
--entrypoint entrypoint3.devnet.solana.com:8001 \
@@ -103,7 +103,7 @@ $ solana-validator \
--only-known-rpc \
--ledger ledger \
--rpc-port 8899 \
--dynamic-port-range 8000-8010 \
--dynamic-port-range 8000-8020 \
--entrypoint entrypoint.testnet.solana.com:8001 \
--entrypoint entrypoint2.testnet.solana.com:8001 \
--entrypoint entrypoint3.testnet.solana.com:8001 \
@@ -126,9 +126,8 @@ A permissionless, persistent cluster for early token holders and launch partners
- Tokens that are issued on Mainnet Beta are **real** SOL
- If you have paid money to purchase/be issued tokens, such as through our
CoinList auction, these tokens will be transferred on Mainnet Beta.
- Note: If you are using a non-command-line wallet such as
[Solflare](wallet-guide/solflare.md),
the wallet will always be connecting to Mainnet Beta.
- Note: If you are using a non-command-line wallet, the wallet will always be
connecting to Mainnet Beta.
- Gossip entrypoint for Mainnet Beta: `entrypoint.mainnet-beta.solana.com:8001`
- Metrics environment variable for Mainnet Beta:
@@ -158,7 +157,7 @@ $ solana-validator \
--ledger ledger \
--rpc-port 8899 \
--private-rpc \
--dynamic-port-range 8000-8010 \
--dynamic-port-range 8000-8020 \
--entrypoint entrypoint.mainnet-beta.solana.com:8001 \
--entrypoint entrypoint2.mainnet-beta.solana.com:8001 \
--entrypoint entrypoint3.mainnet-beta.solana.com:8001 \


@@ -147,7 +147,7 @@ sendAndConfirmTransaction(
);
```
The above code takes in a `TransactionInstruction` using `SystemProgram`, creates a `Transaction`, and sends it over the network. You use `Connection` in order to define with Solana network you are connecting to, namely `mainnet-beta`, `testnet`, or `devnet`.
The above code takes in a `TransactionInstruction` using `SystemProgram`, creates a `Transaction`, and sends it over the network. You use `Connection` in order to define which Solana network you are connecting to, namely `mainnet-beta`, `testnet`, or `devnet`.
### Interacting with Custom Programs


@@ -98,7 +98,7 @@ Unstable methods may see breaking changes in patch releases and may not be suppo
- [getConfirmedBlocks](jsonrpc-api.md#getconfirmedblocks)
- [getConfirmedBlocksWithLimit](jsonrpc-api.md#getconfirmedblockswithlimit)
- [getConfirmedSignaturesForAddress2](jsonrpc-api.md#getconfirmedsignaturesforaddress2)
- [getConfirmedTransaction](jsonrpc-api.md#getconfirmedtransact)
- [getConfirmedTransaction](jsonrpc-api.md#getconfirmedtransaction)
- [getFeeCalculatorForBlockhash](jsonrpc-api.md#getfeecalculatorforblockhash)
- [getFeeRateGovernor](jsonrpc-api.md#getfeerategovernor)
- [getFees](jsonrpc-api.md#getfees)
@@ -999,13 +999,12 @@ Get the fee the network will charge for a particular Message
#### Parameters:
- `blockhash: <string>` - The blockhash of this block, as base-58 encoded string
- `message: <string>` - Base-64 encoded Message
- `<object>` - (optional) [Commitment](jsonrpc-api.md#configuring-state-commitment) (used for retrieving blockhash)
#### Results:
- `<u64>` - Fee corresponding to the message at the specified blockhash
- `<u64 | null>` - Fee corresponding to the message at the specified blockhash
#### Example:
@@ -1017,7 +1016,7 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
"jsonrpc":"2.0",
"method":"getFeeForMessage",
"params":[
"FxVKTksYShgKjnFG3RQUEo2AEesDb4ZHGY3NGJ7KHd7F","AQABAgIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEBAQAA",
"AQABAgIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEBAQAA",
{
"commitment":"processed"
}
@@ -1291,7 +1290,7 @@ Result:
### getInflationReward
Returns the inflation reward for a list of addresses for an epoch
Returns the inflation / staking reward for a list of addresses for an epoch
#### Parameters:
- `<array>` - An array of addresses to query, as base-58 encoded strings
@@ -2950,7 +2949,7 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
Result:
```json
{"jsonrpc":"2.0","result":{"solana-core": "1.9.3"},"id":1}
{"jsonrpc":"2.0","result":{"solana-core": "1.9.8"},"id":1}
```
### getVoteAccounts
@@ -3192,7 +3191,7 @@ Before submitting, the following preflight checks are performed:
preflight commitment to avoid confusing behavior.
The returned signature is the first signature in the transaction, which
is used to identify the transaction ([transaction id](../../terminology.md#transanction-id)).
is used to identify the transaction ([transaction id](../../terminology.md#transaction-id)).
This identifier can be easily extracted from the transaction data before
submission.
@@ -3208,7 +3207,7 @@ submission.
#### Results:
- `<string>` - First Transaction Signature embedded in the transaction, as base-58 encoded string ([transaction id](../../terminology.md#transanction-id))
- `<string>` - First Transaction Signature embedded in the transaction, as base-58 encoded string ([transaction id](../../terminology.md#transaction-id))
#### Example:
@@ -5094,7 +5093,7 @@ Result:
### getRecentBlockhash
**DEPRECATED: Please use [getFeeForMessage](jsonrpc-api.md#getfeeformessage) instead**
**DEPRECATED: Please use [getLatestBlockhash](jsonrpc-api.md#getlatestblockhash) instead**
This method is expected to be removed in solana-core v2.0
Returns a recent block hash from the ledger, and a fee schedule that can be used to compute the cost of submitting a transaction using it.


@@ -140,7 +140,7 @@ the [runtime enforcement
policy](developing/programming-model/accounts.md#policy). When an instruction
references the same account multiple times there may be duplicate
`SolAccountInfo` entries in the array but they both point back to the original
input byte array. A program should handle these case delicately to avoid
input byte array. A program should handle these cases delicately to avoid
overlapping read/writes to the same buffer. If a program implements its own
deserialization function, care should be taken to handle duplicate accounts
appropriately.
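To make the duplicate-account caveat concrete, a hedged Rust-side sketch (the helper name is invented; `AccountInfo` comes from `solana_program`) that reduces an instruction's account list to first occurrences before any mutable data access:

```rust
use solana_program::account_info::AccountInfo;
use std::collections::HashSet;

/// Hypothetical helper: keep only the first entry per pubkey, since duplicate
/// entries alias the same underlying input buffer and overlapping mutable
/// borrows of their data would step on each other.
fn first_occurrences<'a, 'info>(
    accounts: &'a [AccountInfo<'info>],
) -> Vec<&'a AccountInfo<'info>> {
    let mut seen = HashSet::new();
    accounts
        .iter()
        .filter(|account| seen.insert(*account.key))
        .collect()
}
```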


@@ -81,7 +81,7 @@ Programs have access to a runtime heap either directly in C or via the Rust
utilized. The heap does not support `free` or `realloc` so use it wisely.
Internally, programs have access to the 32KB memory region starting at virtual
address 0x300000000 and may implement a custom heap based on the the program's
address 0x300000000 and may implement a custom heap based on the program's
specific needs.
- [Rust program heap usage](developing-rust.md#heap)
@@ -194,7 +194,7 @@ For language specific information about serialization see:
The latest loader serializes the program input parameters as follows (all
encoding is little endian):
- 8 byte unsigned number of accounts
- 8 bytes unsigned number of accounts
- For each account
- 1 byte indicating if this is a duplicate account, if not a duplicate then
the value is 0xff, otherwise the value is the index of the account it is a
@@ -207,7 +207,7 @@ encoding is little endian):
- 4 bytes of padding
- 32 bytes of the account public key
- 32 bytes of the account's owner public key
- 8 byte unsigned number of lamports owned by the account
- 8 bytes unsigned number of lamports owned by the account
- 8 bytes unsigned number of bytes of account data
- x bytes of account data
- 10k bytes of padding, used for realloc
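As a hedged illustration of the little-endian layout listed above, a minimal sketch that reads the leading 8-byte account count and one duplicate marker from a raw input slice (the buffer is fabricated and everything after the marker is omitted):

```rust
fn main() {
    // Fabricated input: one account, not a duplicate (0xff marker).
    let mut input = Vec::new();
    input.extend_from_slice(&1u64.to_le_bytes()); // 8 bytes: number of accounts
    input.push(0xff); // 1 byte: duplicate marker

    let num_accounts = u64::from_le_bytes(input[0..8].try_into().unwrap());
    assert_eq!(num_accounts, 1);

    if input[8] == 0xff {
        // Not a duplicate: the account's flags, padding, keys, lamports,
        // data length, data, and realloc padding follow, as listed above.
    } else {
        // Duplicate: the marker is the index of the account it duplicates.
    }
}
```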


@@ -135,72 +135,13 @@ blockchain cluster must actively maintain the data to process any future transac
This is different from Bitcoin and Ethereum, where storing accounts doesn't
incur any costs.
The rent is debited from an account's balance by the runtime upon the first
access (including the initial account creation) in the current epoch by
transactions or once per an epoch if there are no transactions. The fee is
currently a fixed rate, measured in bytes-times-epochs. The fee may change in
the future.
For the sake of simple rent calculation, rent is always collected for a single,
full epoch. Rent is not pro-rated, meaning there are neither fees nor refunds
for partial epochs. This means that, on account creation, the first rent
collected isn't for the current partial epoch, but collected up front for the
next full epoch. Subsequent rent collections are for further future epochs. On
the other end, if the balance of an already-rent-collected account drops below
another rent fee mid-epoch, the account will continue to exist through the
current epoch and be purged immediately at the start of the upcoming epoch.
Accounts can be exempt from paying rent if they maintain a minimum balance. This
rent-exemption is described below.
### Calculation of rent
Note: The rent rate can change in the future.
As of writing, the fixed rent fee is 19.055441478439427 lamports per byte-epoch
on the testnet and mainnet-beta clusters. An [epoch](terminology.md#epoch) is
targeted to be 2 days (For devnet, the rent fee is 0.3608183131797095 lamports
per byte-epoch with its 54m36s-long epoch).
This value is calculated to target 0.01 SOL per mebibyte-day (exactly matching
to 3.56 SOL per mebibyte-year):
```text
Rent fee: 19.055441478439427 = 10_000_000 (0.01 SOL) * 365(approx. day in a year) / (1024 * 1024)(1 MiB) / (365.25/2)(epochs in 1 year)
```
And rent calculation is done with the `f64` precision and the final result is
truncated to `u64` in lamports.
The rent calculation includes account metadata (address, owner, lamports, etc)
in the size of an account. Therefore the smallest an account can be for rent
calculations is 128 bytes.
For example, an account is created with the initial transfer of 10,000 lamports
and no additional data. Rent is immediately debited from it on creation,
resulting in a balance of 7,561 lamports:
```text
Rent: 2,439 = 19.055441478439427 (rent rate) * 128 bytes (minimum account size) * 1 (epoch)
Account Balance: 7,561 = 10,000 (transfered lamports) - 2,439 (this account's rent fee for an epoch)
```
The account balance will be reduced to 5,122 lamports at the next epoch even if
there is no activity:
```text
Account Balance: 5,122 = 7,561 (current balance) - 2,439 (this account's rent fee for an epoch)
```
Accordingly, a minimum-size account will be immediately removed after creation
if the transferred lamports are less than or equal to 2,439.
Currently, all new accounts are required to be rent-exempt.
### Rent exemption
Alternatively, an account can be made entirely exempt from rent collection by
depositing at least 2 years worth of rent. This is checked every time an
account's balance is reduced, and rent is immediately debited once the balance
goes below the minimum amount.
An account is considered rent-exempt if it holds at least 2 years worth of rent.
This is checked every time an account's balance is reduced, and transactions
that would reduce the balance to below the minimum amount will fail.
Program executable accounts are required by the runtime to be rent-exempt to
avoid being purged.
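Since all new accounts must now be rent-exempt, clients typically size deposits with the runtime's minimum-balance helper; a brief hedged sketch using `solana_sdk::rent::Rent` (the 100-byte data length is an arbitrary example):

```rust
use solana_sdk::rent::Rent;

fn main() {
    let rent = Rent::default();
    let data_len = 100; // arbitrary example account size, in bytes

    // Lamports the account must hold to be exempt from rent collection.
    let minimum_balance = rent.minimum_balance(data_len);
    assert!(rent.is_exempt(minimum_balance, data_len));

    println!(
        "rent-exempt minimum for {} bytes: {} lamports",
        data_len, minimum_balance
    );
}
```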

Some files were not shown because too many files have changed in this diff.