Compare commits

...

115 Commits

Author SHA1 Message Date
mergify[bot]
42a2c29234 Different error if block status is not yet available (#20407) (#21029)
* Different error if block is not available

* Add slot to error message

* Make and use helper function

* Check finalized path as well

Co-authored-by: Tyera Eulberg <tyera@solana.com>
(cherry picked from commit 700e42d556)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-10-27 20:58:15 +00:00
mergify[bot]
a1f1264962 Swap banking stage vote channels (#20987) (#21000)
(cherry picked from commit 261dd96ae3)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-10-27 20:17:38 +00:00
mergify[bot]
66caead016 Add compute budget noops (backport #20992) (#21014)
* Add compute budget program as a noop (#20992)

(cherry picked from commit 1e2bef76e3)

# Conflicts:
#	sdk/src/feature_set.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-10-27 12:47:35 -07:00
mergify[bot]
de1f60fb2d Refactor cost tracker metrics reporting (backport #20802) (#20933)
* - cost_tracker is a data member of a bank; it can report metrics when the bank is frozen (#20802)

- removed cost_tracker_stats and histogram
- move stats reporting outside of bank freeze

(cherry picked from commit c2bfce90b3)

# Conflicts:
#	Cargo.lock
#	core/src/banking_stage.rs
#	core/src/replay_stage.rs
#	core/src/tvu.rs
#	ledger-tool/src/main.rs
#	programs/bpf/Cargo.lock
#	runtime/Cargo.toml
#	runtime/src/cost_tracker.rs

* manual fix merge conflicts

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
2021-10-27 16:48:20 +00:00
Tao Zhu
7528016e2d Add counter for dropped duplicated packets, fix dropped_packets_count (#20834) (#21023)
(cherry picked from commit 71d0bd4605)
2021-10-27 11:36:37 -05:00
mergify[bot]
adc57899fe tpu-client: Add send_messages_with_spinner from program / stake-o-matic (backport #20960) (#21002)
* tpu-client: Move `send_messages_with_spinner` from program (#20960)

We have too many ways of sending transactions, and too many
reimplementations of the same logic all over the place.

The program deploy logic and stake-o-matic currently make the
most use of the TPU client, so this merges their implementations into
one place to be reused by both.  Yay for consolidation!

(cherry picked from commit 5f7b60576f)

# Conflicts:
#	cli/src/program.rs
#	client/src/mock_sender.rs

* Fix merge issues, use older APIs

* Update mock sender fee to match block height

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-10-27 12:22:17 +00:00
mergify[bot]
afe229a89e Document entrypoint!, custom_heap_default!, and custom_panic_default! (#21003) (#21015)
(cherry picked from commit ced1505b75)

Co-authored-by: Brian Anderson <andersrb@gmail.com>
2021-10-27 07:49:14 +00:00
mergify[bot]
5dd00e9230 Force a recent version of the openssl crate to allow this to build on M1 macs (backport #21008) (#21012)
* Force a recent version of the openssl crate to allow this to build on M1 macs

(cherry picked from commit 920159fc63)

# Conflicts:
#	Cargo.lock
#	programs/bpf_loader/Cargo.toml

* Run cargo check

(cherry picked from commit 8efc577374)

# Conflicts:
#	programs/bpf/Cargo.lock

* Resolve merge conflicts

Co-authored-by: Matt Wilde <matthewcwilde@gmail.com>
Co-authored-by: Michael Vines <mvines@gmail.com>
2021-10-27 02:55:49 +00:00
mergify[bot]
0a698fc48f Instruction sysvar fixes, additions (backport #20958) (#21001)
* Instruction sysvar fixes, additions (#20958)

(cherry picked from commit 4fe3354c8f)

# Conflicts:
#	programs/bpf/rust/sysvar/src/lib.rs
#	programs/bpf/tests/programs.rs
#	sdk/program/src/sysvar/instructions.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-10-27 01:00:01 +00:00
mergify[bot]
1666fc5483 Restore getProgramAccounts spl-token secondary-index functionality (backport #20993) (#21005)
* Restore getProgramAccounts spl-token secondary-index functionality (#20993)

* Allow get_spl_token_X_filters to match on any encoding, and optimize earlier

* Remove redundant optimize calls

* Compress match statements

* Add method docs, including note to use optimize_filters before spl-token checks

* Add logs

(cherry picked from commit b2f6cfb9ff)

# Conflicts:
#	rpc/src/rpc.rs

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-10-26 23:43:45 +00:00
yihau
467abd1f5b feat: update getClusterNodes
(cherry picked from commit dec104c580)
2021-10-26 13:18:19 -07:00
mergify[bot]
19432f2e5f Add CrdsData::IncrementalSnapshotHashes (backport #20374) (#20994)
* Add CrdsData::IncrementalSnapshotHashes (#20374)

(cherry picked from commit 4e3818e5c1)

# Conflicts:
#	gossip/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: Brooks Prumo <brooks@solana.com>
Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-26 20:07:09 +00:00
mergify[bot]
7e7f8ef5f0 Report timing info for stakes cache updates from txs (backport #20856) (#20884)
* Report timing info for stakes cache updates from txs (#20856)

(cherry picked from commit 735016661b)

# Conflicts:
#	runtime/src/bank.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2021-10-26 20:04:32 +00:00
mergify[bot]
9e81798d6d fix(docs): missing import (#20788) (#20996)
add missing import of `Connection`

(cherry picked from commit 521b7b79cc)

Co-authored-by: Colin Ogoo <ogoo.colin@gmail.com>
2021-10-26 19:10:27 +00:00
mergify[bot]
8986bd301c adds metrics tracking gossip crds writes and votes (backport #20953) (#20982)
* adds metrics tracking crds writes and votes (#20953)

(cherry picked from commit 1297a13586)

# Conflicts:
#	core/src/cluster_nodes.rs
#	gossip/benches/crds_shards.rs
#	gossip/src/cluster_info.rs
#	gossip/src/cluster_info_metrics.rs
#	gossip/src/crds_entry.rs
#	gossip/src/crds_gossip.rs
#	gossip/src/crds_gossip_pull.rs
#	gossip/src/crds_gossip_push.rs
#	gossip/src/crds_shards.rs
#	gossip/tests/crds_gossip.rs
#	rpc/src/rpc_service.rs

* updates itertools version in gossip

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-26 17:41:45 +00:00
mergify[bot]
6baad8e239 doubles crds unique pubkey capacity (#20947) (#20981)
(cherry picked from commit 43168e6365)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-26 15:14:29 +00:00
Lijun Wang
782d143489 Accountsdb plugin write ordering (#20948) (#20964)
Use the write_version in the account's metadata so that an account write with a lower write_version does not overwrite one with a higher write_version.
2021-10-26 00:05:40 -07:00
mergify[bot]
b15e87631c [solana-test-validator] add support for keypair file parsing for --bpf-program address argument (#20962)
(cherry picked from commit 58aa2b964b)

Co-authored-by: Paul Schaaf <paulsimonschaaf@gmail.com>
2021-10-26 01:09:56 +00:00
mergify[bot]
d18f553e2d Extend TestBroadcastReceiver::recv timeout (#20957) (#20961)
* Extend TestBroadcastReceiver timeout

* Add elapsed log

(cherry picked from commit 337b94b3bc)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-10-26 00:51:29 +00:00
mergify[bot]
e84c57b659 Hide deploy from cli subcommands (#20901) (#20951)
(cherry picked from commit af405f0ed7)

Co-authored-by: Jack May <jack@solana.com>
2021-10-25 20:03:44 +00:00
Lijun Wang
66630804de Accountsdb plugin postgres -- bulk insertion at startup (#20763) (#20931)
* Accountsdb plugin postgres -- bulk insertion at startup (#20763)

Use bulk insertion into Postgres at startup to reduce the time taken for the initial snapshot restore in the postgres plugin. Avoid duplicate writes of accounts at startup. Account plugin notification and indexing are done in parallel.

Improved error handling for the postgres plugin to surface the real db issues for debugging.
Added more metrics for the postgres plugin.
Refactored plugin-centric code out of accounts_db into a submodule and added unit tests

* Fixed the unit test failures
2021-10-25 09:18:32 -07:00
mergify[bot]
72158e3bf9 CLI: Add SW versions to feature status output (backport #20878) (#20905)
* cli: struct the tuples

(cherry picked from commit b9eb6242f5)

* cli: add software version(s) to feature status

(cherry picked from commit 152da44b62)

# Conflicts:
#	cli/Cargo.toml

* cli: sort feature status output

(cherry picked from commit 30d277b9fd)

* cli: improve feature status arithmetic readability

(cherry picked from commit d98c8b861c)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-10-25 05:04:08 +00:00
behzad nouri
df6063a622 removes backport merge conflicts 2021-10-24 21:29:29 -07:00
behzad nouri
55a1f03eee adds metrics for number of outgoing shreds in retransmit stage (#20882)
(cherry picked from commit 5e1cf39c74)

# Conflicts:
#	core/src/retransmit_stage.rs
2021-10-24 21:29:29 -07:00
sakridge
d20cccc26b Add check for shred data header size (#20668)
(cherry picked from commit 588168b99d)
2021-10-24 20:16:41 -07:00
yihau
6c4a8b2d72 feat(docs): add transactionCount to getEpochInfo response
(cherry picked from commit aa13c90dd7)
2021-10-24 20:15:12 -07:00
mergify[bot]
307cda52ac Fixed bug in AccountInfo::serialize() (#20923)
Closes #20917

(cherry picked from commit edf5bc242c)

Co-authored-by: Eugene Lomov <eugene.v.lomov@gmail.com>
2021-10-25 02:26:18 +00:00
Justin Starry
026385effd ci: Increase timeout duration for coverage step (#20888)
(cherry picked from commit 4fbf44dc75)
2021-10-24 17:44:36 -07:00
mergify[bot]
0363d8d373 Use config limit instead of default (#20900) (#20907)
(cherry picked from commit 9dd87bcdb5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-10-23 21:05:07 +00:00
mergify[bot]
5c3f15e9c5 Support port number in postgres connection (#20662) (#20704)
* Support port number in postgres connection

* Addressed a few comments from Trent

(cherry picked from commit ad0a88f1f2)

Co-authored-by: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
2021-10-23 18:35:30 +00:00
mergify[bot]
47e80be023 Fix response examples for getTokenAccountsByOwner and getTokenAccountsByDelegate (#20919)
(cherry picked from commit 63f94a4db3)

Co-authored-by: Slavomir <gagliardetto@users.noreply.github.com>
2021-10-23 16:43:13 +00:00
Michael Vines
460dcad578 solana-test-validator --log now includes version/argument information
(cherry picked from commit 86bf071d77)
2021-10-22 13:46:29 -07:00
mergify[bot]
257d19ca48 Update 'Developing with Rust' GitHub links (#20860) (#20875)
* Update old GitHub links in 'Developing with Rust' docs

* exclude_entrypoint -> no-entrypoint in 'Developing with Rust'

(cherry picked from commit f729dec321)

Co-authored-by: Brian Anderson <andersrb@gmail.com>
2021-10-22 08:13:38 +00:00
mergify[bot]
de2aa898a7 Add counter for new transactions in SendTransactionService (#20852) (#20859)
* Add counter for inserted transactions

* Add counter for tx recv

(cherry picked from commit 8959d5e21c)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-10-22 05:03:26 +00:00
Trent Nelson
23b6ce7980 Bump version to 1.8.2 2021-10-21 00:43:40 -06:00
mergify[bot]
8cba6cca76 rpc-send-tx-svc server-side retry knobs (backport #20818) (#20830)
* rpc-send-tx-svc: add with_config constructor

(cherry picked from commit fe098b5ddc)

# Conflicts:
#	Cargo.lock
#	core/Cargo.toml
#	replica-node/Cargo.toml
#	rpc/src/rpc_service.rs
#	rpc/src/send_transaction_service.rs
#	validator/Cargo.toml

* rpc-send-tx-svc: server-side retry knobs

(cherry picked from commit 2744a2128c)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-10-21 02:15:03 +00:00
mergify[bot]
85048c667c cli: account for rpc nodes when considering feature set adoption (#20774)
(cherry picked from commit 5794bba65c)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-10-20 17:41:47 -06:00
mergify[bot]
440ccd189e Add program heap bump instruction (backport #20607) (#20815)
* Add program heap bump instruction (#20607)

(cherry picked from commit 58164517e4)

* nudge

Co-authored-by: Jack May <jack@solana.com>
2021-10-20 23:05:57 +00:00
mergify[bot]
d5fc81e12a Reduce budget request instruction length (#20636) (#20644)
(cherry picked from commit c231cfe235)

Co-authored-by: Jack May <jack@solana.com>
2021-10-20 12:17:29 -07:00
mergify[bot]
53f4bde471 add checked instructions sysvar api (backport #20790) (#20816)
* add checked instructions sysvar api (#20790)

(cherry picked from commit a8098f37d0)

# Conflicts:
#	programs/bpf/rust/sysvar/src/lib.rs
#	runtime/src/accounts.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-10-20 18:11:51 +00:00
mergify[bot]
232731e869 adds more metrics to blockstore insert shreds stats (backport #20701) (#20751)
* adds more metrics to blockstore insert shreds stats (#20701)

(cherry picked from commit 231b58b5f1)

# Conflicts:
#	ledger/src/blockstore.rs

* removes backport merge conflicts

* removes error logs

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-20 08:12:32 +00:00
mergify[bot]
63835ec214 prior to panicking with cap mismatch, try other calculation (#20292) (#20804)
(cherry picked from commit fa5b091b4c)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-10-20 02:26:23 +00:00
mergify[bot]
6de9ef62e8 docs: Amend RPC Transaction History proposal (#20794) (#20812)
# Problem

The initial proposal ruled out implementing BigTable queries for
the `getBlockTime` RPC, but then it was implemented a couple months
later. Indicating that the functionality was never implemented in
the "implemented-proposals" document is a little confusing, so let's
bring the document in line with what actually happened. 🦾

# Summary of Changes

Remove the blurb about how `getBlockTime` was going to be deprecated
and add it to the list of calls that didn't yet support BigTable
queries at the time the proposal was written.

(cherry picked from commit 0c7bade0b2)

Co-authored-by: Arthur Burkart <arthur@presynce.com>
2021-10-20 02:07:17 +00:00
mergify[bot]
0759b666ce Expand Rust API docs entry point (#20770) (#20801)
(cherry picked from commit cc4bb5a451)

Co-authored-by: Brian Anderson <andersrb@gmail.com>
2021-10-20 01:53:55 +00:00
mergify[bot]
c7e3d4cf79 Add docs to solana_clap_utils::keypair (backport #20665) (#20789)
* Add docs to solana_clap_utils::keypair (#20665)

* Add docs to solana_clap_utils::keypair

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Move imports to module level in solana_clap_utils::keypair::tests

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 96c6ba6eb2)

# Conflicts:
#	clap-utils/src/keypair.rs

* Fix conflicts

Co-authored-by: Brian Anderson <andersrb@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-10-20 01:45:14 +00:00
mergify[bot]
63e37b2b20 Remove @brief annotations from Rust API docs (backport #20769) (#20807)
* Remove @brief annotations from Rust API docs (#20769)

(cherry picked from commit d9b0fc0e3e)

# Conflicts:
#	programs/bpf/rust/invoke/src/instructions.rs
#	programs/bpf/rust/invoke/src/processor.rs
#	programs/bpf/rust/realloc/src/instructions.rs
#	programs/bpf/rust/realloc/src/lib.rs
#	programs/bpf/rust/realloc/src/processor.rs
#	programs/bpf/rust/realloc_invoke/src/instructions.rs
#	programs/bpf/rust/realloc_invoke/src/lib.rs
#	programs/bpf/rust/realloc_invoke/src/processor.rs
#	sdk/cargo-build-bpf/tests/crates/fail/src/lib.rs
#	sdk/src/precompiles.rs

* Fix conflicts

Co-authored-by: Brian Anderson <andersrb@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-10-19 19:17:33 -06:00
mergify[bot]
436ec212f4 report udp stats from validator (backport #20587) (#20799)
* report udp stats from validator (#20587)

(cherry picked from commit 4cac66244d)

# Conflicts:
#	core/src/validator.rs

* resolve merge conflicts

Co-authored-by: Jeff Biseda <jbiseda@gmail.com>
2021-10-20 00:57:38 +00:00
Jon Cinque
564cc95b00 runtime: Add foundation stake pool withdraw authority (#20797)
(cherry picked from commit cb2bd65858)
2021-10-19 17:56:09 -07:00
mergify[bot]
28eb6ff796 Invoke cost tracker from its bank (backport #20627) (#20800)
* - make cost_tracker a member of the bank, remove the shared instance from TPU; (#20627)

- decouple cost_model from cost_tracker, allowing one cost_model
  instance to be shared within a validator;
- update the cost_model api to calculate_cost(&self...)->transaction_cost

(cherry picked from commit 7496b5784b)

# Conflicts:
#	core/src/banking_stage.rs
#	ledger-tool/src/main.rs
#	runtime/src/bank.rs
#	runtime/src/cost_model.rs
#	runtime/src/cost_tracker.rs

* manual fix merge conflicts

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
2021-10-20 00:22:38 +00:00
mergify[bot]
de32ab4d57 Separate out interrupted slots broadcast metrics (#20537) (#20798)
(cherry picked from commit 838ff3b871)

Co-authored-by: carllin <carl@solana.com>
2021-10-19 22:26:49 +00:00
mergify[bot]
cabe2d5d04 Use node LTS (#20803) (#20806)
(cherry picked from commit 2c2bcd20e6)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-10-19 15:51:59 -06:00
mergify[bot]
ece4ecb792 stake: Add BorshSerialize trait to structs (#20784) (#20792)
(cherry picked from commit dc1b8ddea1)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-10-19 20:23:24 +00:00
Brooks Prumo
ba366f49ad Ignore RUSTSEC-2020-0159
(cherry picked from commit 7baeb04f26)
2021-10-18 13:50:31 -07:00
mergify[bot]
8e666f47e0 optimistic-confirmation-and-slashing - fix typos (#20741) (#20765)
(cherry picked from commit 84660bbf3d)

Co-authored-by: Elliot Lee <github.public@intelliot.com>
2021-10-18 17:49:45 +00:00
Sean Young
0619705ce5 Simplify ed25519 instruction index
Allow u16::MAX to be specified for the instruction index. This makes it
possible to specify the current instruction, so it is not necessary to
know the instruction number.
2021-10-18 15:41:24 +01:00
Sean Young
188089389f feat: support for builtin ed25519 program
Conflicts:
	web3.js/src/index.ts
2021-10-18 15:41:24 +01:00
Sean Young
0a6bb84aec feat: add ed25519 signature verify program
Solang requires a method for verifying ed25519 signatures. Add a new
builtin program at address Ed25519SigVerify111111111111111111111111111
which takes any number of ed25519 signature, public key, and message
triples. If any of the signatures fails to verify, an error is returned.

The changes for the web3.js package will go into another commit, since
the tests test against a released solana node. Adding web3.js ed25519
testing will break CI.

(cherry picked from commit b491354e51)

Conflicts:
	Cargo.lock
	Cargo.toml
	programs/bpf/Cargo.lock
	runtime/Cargo.toml
	sdk/src/feature_set.rs
	sdk/src/transaction.rs
	sdk/src/transaction/sanitized.rs
2021-10-18 15:41:24 +01:00
Sean Young
c8f6a0817b verify_precompiles needs FeatureSet
Rather than pass in individual features, pass in the entire feature set
so that we can add the ed25519 program feature in a later commit.

(cherry picked from commit 0f62771f42)

 Conflicts:
	banks-server/src/banks_server.rs
	core/src/banking_stage.rs
	programs/secp256k1/src/lib.rs
	rpc/src/rpc.rs
	runtime/src/bank.rs
	sdk/src/transaction.rs
	sdk/src/transaction/sanitized.rs
2021-10-18 15:41:24 +01:00
mergify[bot]
5350250a06 Improve program-test process_transaction() speed by reducing sleep duration in banks-server (backport #20508) (#20733)
* Improve program-test process_transaction() speed by reducing sleep duration in banks-server (#20508)

* banks_server: Reduce sleep duration for local server

This speeds up process_transaction_with_commitment_and_context()
and thus most program tests by a lot.

* Plumb tick duration through poh config and signature polling

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
(cherry picked from commit bea181eba9)

# Conflicts:
#	banks-server/src/banks_server.rs
#	program-test/src/lib.rs

* Fix merge issues

Co-authored-by: Christian Kamm <ckamm@delightful-solutions.de>
Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-10-15 21:36:59 +00:00
mergify[bot]
f8fccc7e91 docs: prefer solana gossip to solana-gossip spy (#20734)
(cherry picked from commit 9543fd9cdd)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-10-15 19:42:20 +00:00
mergify[bot]
eaa6d1a4b5 adds counters for errors in window-service run_insert (#20670) (#20724)
(cherry picked from commit 0f03971c3c)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-15 18:09:11 +00:00
mergify[bot]
b66c9539c2 program-test: Fix getting new blockhash post-warp (#20710) (#20723)
(cherry picked from commit 0419e6c22e)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-10-15 16:13:09 +00:00
mergify[bot]
bdea60cc19 Rpc: filters performance improvement (#20185) (#20703)
* Add Base58,Base64,Bytes to MemcmpEncodedBytes

* Rpc: decode memcmp before filtering accounts

* Add deprecated attribute

* Add Memcmp::bytes

* Fix clippy for deprecated

* Another clippy fix

* merge RpcFilterError::DataTooLarge

* add deprecation for Base58DataTooLarge

* change filter data size limit

* strict data size len for base58

* add magic numbers

* fix tests

(cherry picked from commit e9a427b9c8)

Co-authored-by: Kirill Fomichev <fanatid@ya.ru>
2021-10-14 21:48:13 +00:00
mergify[bot]
63ac5e4561 clap-utils: trim single-quotes from signer uris on windows (#20695)
(cherry picked from commit 6649dfa899)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-10-14 20:23:15 +00:00
mergify[bot]
88e6f41bec Include token owners in TransactionTokenBalances (backport #20642) (#20677)
* Include token owners in TransactionTokenBalances (#20642)

* Cache owners in TransactionTokenBalances

* Light cleanup

* Use return struct, and remove pub

* Single-use statements

* Why not, just do the whole crate

* Add metrics

* Make datapoint_debug to prevent spam unless troubleshooting

(cherry picked from commit e806fa6904)

# Conflicts:
#	ledger/src/blockstore.rs
#	transaction-status/Cargo.toml
#	transaction-status/src/token_balances.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-10-14 07:09:13 +00:00
Lijun Wang
e0280a68ba Accountsdb plugin metrics (#20606) (#20664)
Added metrics for accountsdb plugin
Handle and log postgres db errors
Print account pubkeys nicely in logging
2021-10-13 14:35:46 -07:00
mergify[bot]
aa8d04d44b uses nanos precision for timestamp when submitting metrics to influxdb (#20623) (#20659)
Current datapoint_info! apparently overwrites itself when run inside
a loop. For example in
https://github.com/solana-labs/solana/blob/005d6863f/core/src/window_service.rs#L101-L107
only one of the slots will show up in influxdb.

This is apparently because of metrics code using milliseconds as the
timestamp, as mentioned here:
https://github.com/solana-labs/solana/issues/19789#issuecomment-922482013

(cherry picked from commit cd87525f54)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-13 20:47:01 +00:00
mergify[bot]
778f37b12d fix unstable test (#20645) (#20663)
(cherry picked from commit 220fd41bbc)

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
2021-10-13 20:02:42 +00:00
Sean Young
ebe77a0985 Proposal: log binary data for Solidity
Rename "Program return data: " to "Program return: " since "data" is
redundant.

(cherry picked from commit b89177c8de)

Conflicts:
	programs/bpf_loader/src/syscalls.rs
	sdk/bpf/c/inc/sol/log.h
	sdk/program/Cargo.toml
	sdk/src/feature_set.rs
	sdk/src/process_instruction.rs
2021-10-13 14:34:36 +01:00
mergify[bot]
400a88786a aggregate cost_tracker to bank (backport #20527) (#20622)
* - move cost tracker into bank, so each bank has its own cost tracker; (#20527)

- move related modules to runtime

(cherry picked from commit 005d6863fd)

# Conflicts:
#	Cargo.lock
#	core/benches/banking_stage.rs
#	core/src/banking_stage.rs
#	core/src/lib.rs
#	core/src/tvu.rs
#	ledger-tool/src/main.rs
#	ledger/src/blockstore_processor.rs
#	programs/bpf/Cargo.lock
#	runtime/Cargo.toml
#	runtime/src/cost_model.rs

* manual fix merge conflicts

Co-authored-by: Tao Zhu <82401714+taozhu-chicago@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
2021-10-13 05:07:09 +00:00
mergify[bot]
29eae21057 Ignore delinquent stake on exit (backport #20367) (#20612)
* Ignore delinquent stake on exit (#20367)

* add --ignore-delinquency flag to validator exit and wait-for-restart-window subcommands

* Fix a merge issue

* Add missing variable declaration

* Remove empty line to help CI checks pass

* run rustfmt

* Change argument wording for clarity and verbosity

* Change --ignore-delinquent-stake to --max-delinquent-stake

* cargo fmt; git add validator/src/main.rs

* Adjust per mvines

* Formatting

* Improve input validation

* Please automate cargo fmt somehow

(cherry picked from commit fc5dd7f3bc)

# Conflicts:
#	validator/src/main.rs

* Fixes cherry-pick conflict

Co-authored-by: Michael <68944931+michaelh-laine@users.noreply.github.com>
Co-authored-by: Steven Czabaniuk <steven@solana.com>
2021-10-12 20:30:47 +00:00
Sean Young
0d1dbb6160 Fix return data too large test
(cherry picked from commit d09687c30e)
2021-10-12 18:31:42 +01:00
Sean Young
927d3b5e0d Add return data implementation
This consists of:
 - syscalls
 - passing return data from invoked to invoker
 - printing to stable log
 - rust and C SDK changes

(cherry picked from commit 53b47b87b2)
2021-10-12 18:31:42 +01:00
Brooks Prumo
df929bda38 Do not shell out for tar (#19043)
When making a snapshot archive, we used to shell out and call `tar -S`
for sparse file support.  The tar crate supports sparse files, so no
need to do this anymore.

Fixes #10860

(cherry picked from commit 68cc71409e)

# Conflicts:
#	runtime/src/snapshot_utils.rs
2021-10-12 15:36:51 +00:00
mergify[bot]
200c5c9fd6 solana-validator wait-for-restart-window command now accepts an optional --identity argument (backport #18684) (#20610)
* wait-for-restart-window command now accepts an optional --identity argument

(cherry picked from commit c418e8f370)

# Conflicts:
#	validator/src/main.rs

* Fixed cherry-pick conflicts

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Steven Czabaniuk <steven@solana.com>
2021-10-12 07:07:40 +00:00
mergify[bot]
9acf708344 Remove support for dynamically loaded native programs (backport #20444) (#20560)
* Remove support for dynamically loaded native programs (#20444)

(cherry picked from commit 785fcb63f5)

# Conflicts:
#	Cargo.lock
#	Cargo.toml
#	program-runtime/src/instruction_processor.rs
#	programs/failure/Cargo.toml
#	programs/failure/tests/failure.rs
#	programs/noop/Cargo.toml
#	programs/ownable/Cargo.toml
#	programs/ownable/src/ownable_processor.rs
#	runtime/src/bank.rs
#	runtime/tests/noop.rs
#	sdk/src/feature_set.rs

* resolve conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-10-11 23:55:39 +00:00
mergify[bot]
af4c1785b6 Reorder RpcClient method defs for more logical docs. (backport #20549) (#20597)
* Reorder RpcClient method defs for more logical docs. (#20549)

Methods follow this order:

- Constructors
- send_and_confirm variations
- send variations
- confirm variations
- simulate variations
- queries

(cherry picked from commit 1417c1456d)

# Conflicts:
#	client/src/rpc_client.rs

* Fix conflicts

Co-authored-by: Brian Anderson <andersrb@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-10-11 21:33:10 +00:00
mergify[bot]
b8f68860a4 docs: Remove outdated instructions for managing stake accounts (#20555) (#20600)
(cherry picked from commit 03d3e0098e)

Co-authored-by: Justin Starry <justin@solana.com>
2021-10-11 20:17:02 +00:00
mergify[bot]
547f33d1d1 Adjust solana validators output to account for the 1k+ validators on mainnet (#20576)
(cherry picked from commit bdf8b1da6b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-10-11 16:51:18 +00:00
mergify[bot]
67738a229c fix(docs): getInflationRate epoch type from f64 => u64 (#20589) (#20590)
(cherry picked from commit 185c9f9e8f)

Co-authored-by: Yihau Chen <a122092487@gmail.com>
2021-10-11 16:03:41 +00:00
Michael Vines
50803c3f58 Rework AVX/AVX2 detection again
(cherry picked from commit c16510152e)
2021-10-10 15:28:14 -07:00
Lijun Wang
50cb612ae1 Accountsdb stream plugin improvement (#20419) (#20573)
* Accountsdb stream plugin improvement (#20419)

Support using connection pooling and use multiple threads to do Postgres db operations. The performance is improved from 1500 RPS to 40,000 RPS measured during validator start.

Support multiple plugins at the same time.

* Fixed a fmt issue
2021-10-10 15:24:12 -07:00
Dan Albert
e5dc8d731b Web3 docs updated with quickstart guide (#19457) (#20571) 2021-10-09 16:01:35 -06:00
Dan Albert
f9dcb8228f Added web3 reference guide (#19970) (#20568)
Co-authored-by: cryptogosu <82475023+cryptogosu@users.noreply.github.com>
2021-10-09 15:24:50 -06:00
mergify[bot]
68e8a05848 Fix solana docker image (#20551)
The docker image fails with:

/usr/bin/solana-run.sh: line 66: ./fetch-spl.sh: No such file or directory

In the solana docker image, scripts/run.sh is copied to
/usr/bin/solana-run.sh and fetch-spl.sh to /usr/bin/fetch-spl.sh. This
means that the line:

cd "$(dirname "$0")/.."

means we're doing a "cd /usr", which means we can't find fetch-spl.sh or
spl-genesis-args.sh (i.e., the error above).

(cherry picked from commit 2762f6f96f)

Co-authored-by: Sean Young <sean@mess.org>
2021-10-09 04:05:10 +00:00
Tyera Eulberg
bfc5f9fb6c v1.8: Bump crates to resolve audit failures (#20552)
* Bump nix

* Bump sha2 to resolve warning
2021-10-09 00:27:30 +00:00
mergify[bot]
c3cc7d52fe Revert "docs: Explain what solana-stake-accounts new does (#20401)" (#20554) (#20556)
This reverts commit 00c6536528.

(cherry picked from commit 17314f4a95)

Co-authored-by: Justin Starry <justin@solana.com>
2021-10-08 19:46:48 +00:00
mergify[bot]
4268cf1d8b docs: Explain what solana-stake-accounts new does (#20401) (#20547)
(cherry picked from commit 00c6536528)

Co-authored-by: Ted Robertson <10043369+tredondo@users.noreply.github.com>
2021-10-08 15:54:03 +00:00
behzad nouri
c693ecc4c8 fixes backports code changes (#20541) 2021-10-08 15:30:27 +00:00
Tyera Eulberg
afe866ad02 Enable easy full-rpc services on testnet nodes (#20530) 2021-10-07 19:36:23 -06:00
mergify[bot]
6a73bf767b adds metrics for number of nodes vs number of pubkeys (backport #20512) (#20524)
* adds metrics for number of nodes vs number of pubkeys (#20512)

(cherry picked from commit 0da661de62)

# Conflicts:
#	gossip/src/cluster_info_metrics.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-10-07 22:27:48 +00:00
Lijun Wang
7d0494fcaa Merge AccountsDb plugin framework to v1.8 (#20518)
Summary of Changes

Create a plugin mechanism in the accounts update path so that accounts data can be streamed out to external data stores (be it Kafka or Postgres). The plugin mechanism allows:

* Data stores of connection strings/credentials to be configured
* Accounts with patterns to be streamed
* PostgreSQL implementation of the streaming for different destination stores to be plugged in

The code comprises 4 major parts:

* accountsdb-plugin-intf: defines the plugin interface which concrete plugins should implement.
* accountsdb-plugin-manager: manages the load/unload of plugins and provides interfaces through which the validator can notify plugins of account updates.
* accountsdb-plugin-postgres: the concrete plugin implementation for PostgreSQL.
* The validator integrations: updates are streamed right after snapshot restore and after account updates from transaction processing or other real updates.

The plugin is optionally loaded on demand by a new validator CLI argument -- there is no impact if the plugin is not loaded.
2021-10-07 14:15:05 -07:00
mergify[bot]
33d8e242c5 Fixup docs on deprecated JSON-RPC methods (backport #20515) (#20521)
* Update expected removal version to match backward-compatibility policy (#20515)

(cherry picked from commit d56ad8ff4f)

# Conflicts:
#	docs/src/developing/clients/jsonrpc-api.md

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-10-07 20:37:01 +00:00
Michael Vines
ef55045724 rebase 2021-10-07 07:44:12 -07:00
Michael Vines
81d2c3261c Derive Pod/Zeroable for Pubkey
(cherry picked from commit f966859829)

# Conflicts:
#	Cargo.lock
#	programs/bpf/Cargo.lock
#	sdk/program/Cargo.toml
2021-10-07 07:44:12 -07:00
Tao Zhu
348ba57b12 Bump version to 1.8.1 2021-10-06 17:57:06 -07:00
Tyera Eulberg
4a8ff62ad3 Add RecentItems metrics (#20484) (#20490) 2021-10-06 17:07:33 -06:00
Tao Zhu
db85d659b9 Cost model 1.7 (#20188)
* Cost Model to limit transactions which are not parallelizeable (#16694)

* * Add following to banking_stage:
  1. CostModel as immutable ref shared between threads, to provide estimated cost for transactions.
  2. CostTracker which is shared between threads, tracks transaction costs for each block.

* replace hard coded program ID with id() calls

* Add Account Access Cost as part of TransactionCost. Account Access cost are weighted differently between read and write, signed and non-signed.

* Establish instruction_execution_cost_table, add function to update or insert instruction cost, unit tested. It is read-only for now; it allows Replay to insert realtime instruction execution costs to the table.

* add test for cost_tracker atomically try_add operation, serves as safety guard for future changes

* check cost against local copy of cost_tracker, return transactions that would exceed limit as unprocessed transaction to be buffered; only apply bank processed transactions cost to tracker;

* bencher to new banking_stage with max cost limit to allow cost model being hit consistently during bench iterations

* replay stage feed back program cost (#17731)

* replay stage feeds back realtime per-program execution cost to cost model;

* program cost execution table is initialized into empty table, no longer populated with hardcoded numbers;

* changed cost unit to microsecond, using value collected from mainnet;

* add ExecuteCostTable with fixed capacity for security concern, when its limit is reached, programs with old age AND less occurrence will be pushed out to make room for new programs.

* investigate system performance test degradation  (#17919)

* Add stats and counter around cost model ops, mainly:
- calculate transaction cost
- check transaction can fit in a block
- update block cost tracker after transactions are added to block
- replay_stage to update/insert execution cost to table

* Change mutex on cost_tracker to RwLock

* removed cloning cost_tracker for local use, as the metrics show clone is very expensive.

* acquire and hold locks for block of TXs, instead of acquire and release per transaction;

* remove redundant would_fit check from cost_tracker update execution path

* refactor cost checking with less frequent lock acquiring

* avoid many Transaction_cost heap allocation when calculate cost, which
is in the hot path - executed per transaction.

* create hashmap with new_capacity to reduce runtime heap realloc.

* code review changes: categorize stats, replace explicit drop calls, concisely initiate to default

* address potential deadlock by acquiring locks one at time

* Persist cost table to blockstore (#18123)

* Add `ProgramCosts` Column Family to blockstore, implement LedgerColumn; add `delete_cf` to Rocks
* Add ProgramCosts to the compaction exclusion list alongside TransactionStatusIndex, in one place: `excludes_from_compaction()`

* Write cost table to blockstore after `replay_stage` replayed active banks; add stats to measure persist time
* Deletes program from `ProgramCosts` in blockstore when they are removed from cost_table in memory
* Only try to persist to blockstore when cost_table is changed.
* Restore cost table during validator startup

* Offload `cost_model` related operations from replay main thread to dedicated service thread, add channel to send execute_timings between these threads;
* Move `cost_update_service` to its own module; replay_stage is now decoupled from cost_model.

* log warning when channel send fails (#18391)

* Aggregate cost_model into cost_tracker (#18374)

* * aggregate cost_model into cost_tracker, decouple it from banking_stage to prevent accidental deadlock. * Simplified code, removed unused functions

* review fixes

* update ledger tool to restore cost table from blockstore (#18489)

* update ledger tool to restore cost model from blockstore when compute-slot-cost

* Move initialize_cost_table into cost_model, so the function can be tested and shared between validator and ledger-tool

* refactor and simplify a test

* manually fix merge conflicts

* Per-program id timings (#17554)

* more manual fixing

* solve a merge conflict

* featurize cost model

* more merge fix

* cost model uses compute_unit to replace microsecond as cost unit
(#18934)

* Reject blocks for costs above the max block cost (#18994)

* Update block max cost limit to fix performance regession (#19276)

* replace function with const var for better readability (#19285)

* Add few more metrics data points (#19624)

* periodically report sigverify_stage stats (#19674)

* manual merge

* cost model nits (#18528)

* Accumulate consumed units (#18714)

* tx wide compute budget (#18631)

* more manual merge

* ignore zerorize drop security

* - update const cost values with data collected by #19627
- update cost calculation to closely proposed fee schedule #16984

* add transaction cost histogram metrics (#20350)

* rebase to 1.7.15

* add tx count and thread id to stats (#20451)
each stat reports and resets when slot changes

* remove cost_model feature_set

* ignore vote transactions from cost model

Co-authored-by: sakridge <sakridge@gmail.com>
Co-authored-by: Jeff Biseda <jbiseda@gmail.com>
Co-authored-by: Jack May <jack@solana.com>
2021-10-06 15:55:29 -06:00
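The block-limit check described in this commit (a `try_add` that applies a transaction's cost only if the block stays under its limit, returning over-limit transactions to be buffered as unprocessed) can be sketched in a few lines. The struct below is a simplified stand-in, not the runtime's actual cost tracker; the names `CostTracker` and `try_add` follow the commit message.

```rust
/// Simplified stand-in for the runtime's cost tracker: accumulates
/// per-block cost in compute units and rejects any transaction that
/// would push the block past its limit.
pub struct CostTracker {
    block_cost_limit: u64,
    block_cost: u64,
}

impl CostTracker {
    pub fn new(block_cost_limit: u64) -> Self {
        Self { block_cost_limit, block_cost: 0 }
    }

    /// Check-and-add in one step: the cost is applied only when the
    /// block stays under its limit; otherwise the caller keeps the
    /// transaction buffered as unprocessed.
    pub fn try_add(&mut self, tx_cost: u64) -> Result<u64, &'static str> {
        let new_total = self.block_cost.saturating_add(tx_cost);
        if new_total > self.block_cost_limit {
            return Err("would exceed block cost limit");
        }
        self.block_cost = new_total;
        Ok(self.block_cost)
    }
}
```

A rejected transaction leaves the tracker unchanged, so it can be retried against a later block.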
Trent Nelson
a4df784e82 Bump version to 1.8.0 2021-10-06 15:48:23 -06:00
mergify[bot]
414674eba1 Fix dos data-type for non-gossip mode (#20465) (#20478)
(cherry picked from commit b178f3f2d3)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-10-06 19:00:34 +00:00
Justin Starry
d922971ec6 Optimize stakes cache and rewards at epoch boundaries (backport #20432) (#20472)
* Optimize stakes cache and rewards at epoch boundaries (backport #20432)

* fix conflicts
2021-10-06 16:15:27 +00:00
mergify[bot]
95ac00d30a Make rewards tracer async friendly (backport #20452) (#20456)
* Make rewards tracer async friendly (#20452)

(cherry picked from commit 250a8503fe)

# Conflicts:
#	Cargo.lock
#	ledger-tool/Cargo.toml
#	runtime/src/bank.rs

* fix conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2021-10-06 11:20:50 +00:00
mergify[bot]
1ca4f7d110 Install openssl for travisci windows builds (#20420) (#20458)
(cherry picked from commit df73d8e8a1)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-10-05 22:30:23 -06:00
mergify[bot]
8999f07ed2 Remove nodejs (#20399) (#20433)
(cherry picked from commit 6df0ce5457)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-10-05 08:56:57 +00:00
mergify[bot]
9f4f8fc9e9 Add struct and convenience methods to track stake activation status (backport #20392) (#20425)
* Add struct and convenience methods to track stake activation status (#20392)

* Add struct and convenience methods to track stake activation status

* fix nits

* rename

(cherry picked from commit 0ddb34a0b4)

# Conflicts:
#	runtime/src/stakes.rs

* resolve conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2021-10-05 04:33:30 +00:00
Michael Vines
00b03897e1 Default --rpc-bind-address to 127.0.0.1 when --private-rpc is provided and --bind-address is not
(cherry picked from commit 221343e849)
2021-10-04 16:58:46 -07:00
mergify[bot]
6181df68cf Staking docs: link to overview (#20426)
(cherry picked from commit 2d5b471c09)

Co-authored-by: Ted Robertson <10043369+tredondo@users.noreply.github.com>
2021-10-04 23:22:21 +00:00
mergify[bot]
1588b00f2c fix syntax error in bash_profile (#20386)
If there is no newline at the end of the file, this export is glued to the rest of the code and generates a syntax error like this:

```bash
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fiexport PATH="/Users/user/.local/share/solana/install/active_release/bin:$PATH"
```

(cherry picked from commit 87c0d8d9e7)

Co-authored-by: OleG <emptystamp@gmail.com>
2021-10-02 04:50:39 +00:00
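A defensive way for an installer to avoid this glue is to emit a leading newline before appending, so the `export` always starts on its own line even when the profile lacks a trailing newline. This is a sketch of the idea, not the actual fix from the commit:

```shell
# Reproduce the problem: a profile file whose last line has no newline.
profile=$(mktemp)
printf 'fi' > "$profile"   # note: no final newline

# A naive `echo 'export ...' >> file` would produce the broken token
# "fiexport". Appending with a leading newline keeps the export intact.
printf '\nexport PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"\n' >> "$profile"

cat "$profile"
```

Another option is to inspect the file's last byte (`tail -c1`) and only prepend the newline when it is missing.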
mergify[bot]
ef306aa7cb Deploy error is buffer is too small (#20358) (#20362)
* Deploy error is buffer is too small

* missing file

(cherry picked from commit de8331eeaf)

# Conflicts:
#	cli/tests/fixtures/noop.so

Co-authored-by: Jack May <jack@solana.com>
2021-10-01 05:25:11 +00:00
mergify[bot]
e718f4b04a terminology.md: remove CBC block and unneeded filename (#20269) (#20349)
(cherry picked from commit a7f2d9f55f)

Co-authored-by: Ted Robertson <10043369+tredondo@users.noreply.github.com>
2021-09-30 23:19:12 +00:00
mergify[bot]
51593a882b Properly enable unprefixed_malloc_on_supported_platforms in tikv-jemallocator (#20351) (#20354)
Trivial typo fix.

Fixes: 4bf6d0c4d7 ("adds unprefixed_malloc_on_supported_platforms to jemalloc (#20317)")
(cherry picked from commit 8ae88632cb)

Co-authored-by: Ivan Mironov <mironov.ivan@gmail.com>
2021-09-30 20:26:11 +00:00
mergify[bot]
1c15cc6e9a add unchecked invokes (#20313) (#20337)
(cherry picked from commit 8188c1dd59)

Co-authored-by: Jack May <jack@solana.com>
2021-09-30 17:05:51 +00:00
Tyera Eulberg
734b380cdb Bump version to v1.7.15 (#20338) 2021-09-30 10:51:34 -06:00
mergify[bot]
9cc26b3b00 cli: Stop topping up buffer balance (#20181) (#20312)
(cherry picked from commit 53a810dbad)

Co-authored-by: Justin Starry <justin@solana.com>
2021-09-30 12:31:12 -04:00
mergify[bot]
ef5a0e842c stake-accounts.md: fix grammar, link Solana Explorer (#20270) (#20274)
(cherry picked from commit f24fff8495)

Co-authored-by: Ted Robertson <10043369+tredondo@users.noreply.github.com>
2021-09-29 22:57:00 -06:00
371 changed files with 15459 additions and 3909 deletions


@@ -61,6 +61,12 @@ jobs:
- <<: *release-artifacts
name: "Windows release artifacts"
os: windows
install:
- choco install openssl
- export OPENSSL_DIR="C:\Program Files\OpenSSL-Win64"
- source ci/rust-version.sh
- PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
- readlink -f .
# Linux release artifacts are still built by ci/buildkite-secondary.yml
#- <<: *release-artifacts
# name: "Linux release artifacts"
@@ -117,7 +123,7 @@ jobs:
if: type IN (push, pull_request) OR tag IS present
language: node_js
node_js:
- "node"
- "lts/*"
services:
- docker

Cargo.lock (generated): 853 changed lines

File diff suppressed because it is too large.


@@ -1,5 +1,8 @@
[workspace]
members = [
"accountsdb-plugin-interface",
"accountsdb-plugin-manager",
"accountsdb-plugin-postgres",
"accounts-cluster-bench",
"bench-exchange",
"bench-streamer",
@@ -43,11 +46,10 @@ members = [
"poh-bench",
"program-test",
"programs/bpf_loader",
"programs/compute-budget",
"programs/config",
"programs/exchange",
"programs/failure",
"programs/noop",
"programs/ownable",
"programs/ed25519",
"programs/secp256k1",
"programs/stake",
"programs/vote",


@@ -1,6 +1,6 @@
[package]
name = "solana-account-decoder"
version = "1.7.14"
version = "1.8.2"
description = "Solana account decoder"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -19,9 +19,9 @@ lazy_static = "1.4.0"
serde = "1.0.122"
serde_derive = "1.0.103"
serde_json = "1.0.56"
solana-config-program = { path = "../programs/config", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-vote-program = { path = "../programs/vote", version = "=1.7.14" }
solana-config-program = { path = "../programs/config", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-vote-program = { path = "../programs/vote", version = "=1.8.2" }
spl-token-v2-0 = { package = "spl-token", version = "=3.2.0", features = ["no-entrypoint"] }
thiserror = "1.0"
zstd = "0.5.1"


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-accounts-bench"
version = "1.7.14"
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -11,11 +11,11 @@ publish = false
[dependencies]
log = "0.4.11"
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.7.14" }
solana-runtime = { path = "../runtime", version = "=1.7.14" }
solana-measure = { path = "../measure", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-version = { path = "../version", version = "=1.7.14" }
solana-logger = { path = "../logger", version = "=1.8.2" }
solana-runtime = { path = "../runtime", version = "=1.8.2" }
solana-measure = { path = "../measure", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-version = { path = "../version", version = "=1.8.2" }
rand = "0.7.0"
clap = "2.33.1"
crossbeam-channel = "0.4"


@@ -66,6 +66,7 @@ fn main() {
AccountSecondaryIndexes::default(),
false,
AccountShrinkThreshold::default(),
None,
);
println!("Creating {} accounts", num_accounts);
let mut create_time = Measure::start("create accounts");


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-accounts-cluster-bench"
version = "1.7.14"
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -13,24 +13,24 @@ clap = "2.33.1"
log = "0.4.11"
rand = "0.7.0"
rayon = "1.4.1"
solana-account-decoder = { path = "../account-decoder", version = "=1.7.14" }
solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
solana-client = { path = "../client", version = "=1.7.14" }
solana-core = { path = "../core", version = "=1.7.14" }
solana-faucet = { path = "../faucet", version = "=1.7.14" }
solana-gossip = { path = "../gossip", version = "=1.7.14" }
solana-logger = { path = "../logger", version = "=1.7.14" }
solana-measure = { path = "../measure", version = "=1.7.14" }
solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
solana-runtime = { path = "../runtime", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-streamer = { path = "../streamer", version = "=1.7.14" }
solana-transaction-status = { path = "../transaction-status", version = "=1.7.14" }
solana-version = { path = "../version", version = "=1.7.14" }
solana-account-decoder = { path = "../account-decoder", version = "=1.8.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
solana-client = { path = "../client", version = "=1.8.2" }
solana-core = { path = "../core", version = "=1.8.2" }
solana-faucet = { path = "../faucet", version = "=1.8.2" }
solana-gossip = { path = "../gossip", version = "=1.8.2" }
solana-logger = { path = "../logger", version = "=1.8.2" }
solana-measure = { path = "../measure", version = "=1.8.2" }
solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
solana-runtime = { path = "../runtime", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-streamer = { path = "../streamer", version = "=1.8.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.8.2" }
solana-version = { path = "../version", version = "=1.8.2" }
spl-token-v2-0 = { package = "spl-token", version = "=3.2.0", features = ["no-entrypoint"] }
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.7.14" }
solana-local-cluster = { path = "../local-cluster", version = "=1.8.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -0,0 +1,17 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-accountsdb-plugin-interface"
description = "The Solana AccountsDb plugin interface."
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[dependencies]
log = "0.4.11"
thiserror = "1.0.29"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -0,0 +1,20 @@
<p align="center">
<a href="https://solana.com">
<img alt="Solana" src="https://i.imgur.com/IKyzQ6T.png" width="250" />
</a>
</p>
# Solana AccountsDb Plugin Interface
This crate enables an AccountsDb plugin to be plugged into the Solana Validator runtime to take actions
at the time of each account update; for example, saving the account state to an external database. The plugin must implement the `AccountsDbPlugin` trait. Please see `accountsdb_plugin_interface.rs` for the interface definition.
The plugin should produce a `cdylib` dynamic library, which must expose a `C` function `_create_plugin()` that
instantiates the implementation of the interface.
The `solana-accountsdb-plugin-postgres` crate provides an example of how to create a plugin which saves the accounts data into an
external PostgreSQL database.
More information about Solana is available in the [Solana documentation](https://docs.solana.com/).
Still have questions? Ask us on [Discord](https://discordapp.com/invite/pquxPsq)
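The `_create_plugin` constructor convention described above boils down to exporting a C-ABI function that returns a raw pointer to a heap-allocated trait object, handing ownership across the FFI boundary. The sketch below uses a stand-in `Plugin` trait instead of the real `AccountsDbPlugin` interface, and lives in one crate rather than a `cdylib`, purely to illustrate the ownership handoff:

```rust
/// Stand-in for the AccountsDbPlugin trait; the real interface lives in
/// the solana-accountsdb-plugin-interface crate.
pub trait Plugin: std::fmt::Debug {
    fn name(&self) -> &'static str;
}

#[derive(Debug, Default)]
pub struct NoopPlugin;

impl Plugin for NoopPlugin {
    fn name(&self) -> &'static str {
        "noop-plugin"
    }
}

/// In a real plugin this would be compiled into a `cdylib` and resolved
/// by the validator at runtime; it leaks a boxed plugin object as a raw
/// pointer, and the loader reclaims ownership with `Box::from_raw`.
#[no_mangle]
pub extern "C" fn _create_plugin() -> *mut dyn Plugin {
    Box::into_raw(Box::new(NoopPlugin))
}
```

The loader side then wraps the raw pointer back into a `Box<dyn Plugin>` and calls the trait methods on it.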


@@ -0,0 +1,99 @@
/// The interface for AccountsDb plugins. A plugin must implement
/// the AccountsDbPlugin trait to work with the runtime.
/// In addition, the dynamic library must export a "C" function _create_plugin which
/// creates the implementation of the plugin.
use {
std::{any::Any, error, io},
thiserror::Error,
};
impl Eq for ReplicaAccountInfo<'_> {}
#[derive(Clone, PartialEq, Debug)]
pub struct ReplicaAccountInfo<'a> {
pub pubkey: &'a [u8],
pub lamports: u64,
pub owner: &'a [u8],
pub executable: bool,
pub rent_epoch: u64,
pub data: &'a [u8],
pub write_version: u64,
}
pub enum ReplicaAccountInfoVersions<'a> {
V0_0_1(&'a ReplicaAccountInfo<'a>),
}
#[derive(Error, Debug)]
pub enum AccountsDbPluginError {
#[error("Error opening config file. Error detail: ({0}).")]
ConfigFileOpenError(#[from] io::Error),
#[error("Error reading config file. Error message: ({msg})")]
ConfigFileReadError { msg: String },
#[error("Error updating account. Error message: ({msg})")]
AccountsUpdateError { msg: String },
#[error("Error updating slot status. Error message: ({msg})")]
SlotStatusUpdateError { msg: String },
#[error("Plugin-defined custom error. Error message: ({0})")]
Custom(Box<dyn error::Error + Send + Sync>),
}
#[derive(Debug, Clone)]
pub enum SlotStatus {
Processed,
Rooted,
Confirmed,
}
impl SlotStatus {
pub fn as_str(&self) -> &'static str {
match self {
SlotStatus::Confirmed => "confirmed",
SlotStatus::Processed => "processed",
SlotStatus::Rooted => "rooted",
}
}
}
pub type Result<T> = std::result::Result<T, AccountsDbPluginError>;
pub trait AccountsDbPlugin: Any + Send + Sync + std::fmt::Debug {
fn name(&self) -> &'static str;
/// The callback called when a plugin is loaded by the system,
/// used for doing whatever initialization is required by the plugin.
/// The _config_file parameter contains the name
/// of the config file. The config must be in JSON format and
/// include a field "libpath" indicating the full path
/// name of the shared library implementing this interface.
fn on_load(&mut self, _config_file: &str) -> Result<()> {
Ok(())
}
/// The callback called right before a plugin is unloaded by the system
/// Used for doing cleanup before unload.
fn on_unload(&mut self) {}
/// Called when an account is updated at a slot.
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()>;
/// Called after all accounts have been notified during startup.
fn notify_end_of_startup(&mut self) -> Result<()>;
/// Called when a slot status is updated
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()>;
}


@@ -0,0 +1 @@
pub mod accountsdb_plugin_interface;


@@ -0,0 +1,30 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-accountsdb-plugin-manager"
description = "The Solana AccountsDb plugin manager."
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[dependencies]
bs58 = "0.4.0"
crossbeam-channel = "0.4"
libloading = "0.7.0"
log = "0.4.11"
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.67"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.8.2" }
solana-logger = { path = "../logger", version = "=1.8.2" }
solana-measure = { path = "../measure", version = "=1.8.2" }
solana-metrics = { path = "../metrics", version = "=1.8.2" }
solana-rpc = { path = "../rpc", version = "=1.8.2" }
solana-runtime = { path = "../runtime", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
thiserror = "1.0.21"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -0,0 +1,227 @@
/// Module responsible for notifying plugins of account updates
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
ReplicaAccountInfo, ReplicaAccountInfoVersions, SlotStatus,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_runtime::{
accounts_update_notifier_interface::AccountsUpdateNotifierInterface,
append_vec::{StoredAccountMeta, StoredMeta},
},
solana_sdk::{
account::{AccountSharedData, ReadableAccount},
clock::Slot,
},
std::sync::{Arc, RwLock},
};
#[derive(Debug)]
pub(crate) struct AccountsUpdateNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl AccountsUpdateNotifierInterface for AccountsUpdateNotifierImpl {
fn notify_account_update(&self, slot: Slot, meta: &StoredMeta, account: &AccountSharedData) {
if let Some(account_info) = self.accountinfo_from_shared_account_data(meta, account) {
self.notify_plugins_of_account_update(account_info, slot, false);
}
}
fn notify_account_restore_from_snapshot(&self, slot: Slot, account: &StoredAccountMeta) {
let mut measure_all = Measure::start("accountsdb-plugin-notify-account-restore-all");
let mut measure_copy = Measure::start("accountsdb-plugin-copy-stored-account-info");
let account = self.accountinfo_from_stored_account_meta(account);
measure_copy.stop();
inc_new_counter_debug!(
"accountsdb-plugin-copy-stored-account-info-us",
measure_copy.as_us() as usize,
100000,
100000
);
if let Some(account_info) = account {
self.notify_plugins_of_account_update(account_info, slot, true);
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify-account-restore-all-us",
measure_all.as_us() as usize,
100000,
100000
);
}
fn notify_end_of_restore_from_snapshot(&self) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-end-of-restore-from-snapshot");
match plugin.notify_end_of_startup() {
Err(err) => {
error!(
"Failed to notify the end of restore from snapshot, error: {} to plugin {}",
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully notified the end of restore from snapshot to plugin {}",
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-end-of-restore-from-snapshot",
measure.as_us() as usize
);
}
}
fn notify_slot_confirmed(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Confirmed);
}
fn notify_slot_processed(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Processed);
}
fn notify_slot_rooted(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Rooted);
}
}
impl AccountsUpdateNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
AccountsUpdateNotifierImpl { plugin_manager }
}
fn accountinfo_from_shared_account_data<'a>(
&self,
meta: &'a StoredMeta,
account: &'a AccountSharedData,
) -> Option<ReplicaAccountInfo<'a>> {
Some(ReplicaAccountInfo {
pubkey: meta.pubkey.as_ref(),
lamports: account.lamports(),
owner: account.owner().as_ref(),
executable: account.executable(),
rent_epoch: account.rent_epoch(),
data: account.data(),
write_version: meta.write_version,
})
}
fn accountinfo_from_stored_account_meta<'a>(
&self,
stored_account_meta: &'a StoredAccountMeta,
) -> Option<ReplicaAccountInfo<'a>> {
Some(ReplicaAccountInfo {
pubkey: stored_account_meta.meta.pubkey.as_ref(),
lamports: stored_account_meta.account_meta.lamports,
owner: stored_account_meta.account_meta.owner.as_ref(),
executable: stored_account_meta.account_meta.executable,
rent_epoch: stored_account_meta.account_meta.rent_epoch,
data: stored_account_meta.data,
write_version: stored_account_meta.meta.write_version,
})
}
fn notify_plugins_of_account_update(
&self,
account: ReplicaAccountInfo,
slot: Slot,
is_startup: bool,
) {
let mut measure2 = Measure::start("accountsdb-plugin-notify_plugins_of_account_update");
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-account");
match plugin.update_account(
ReplicaAccountInfoVersions::V0_0_1(&account),
slot,
is_startup,
) {
Err(err) => {
error!(
"Failed to update account {} at slot {}, error: {} to plugin {}",
bs58::encode(account.pubkey).into_string(),
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated account {} at slot {} to plugin {}",
bs58::encode(account.pubkey).into_string(),
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-account-us",
measure.as_us() as usize,
100000,
100000
);
}
measure2.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify_plugins_of_account_update-us",
measure2.as_us() as usize,
100000,
100000
);
}
pub fn notify_slot_status(&self, slot: Slot, parent: Option<Slot>, slot_status: SlotStatus) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-slot");
match plugin.update_slot_status(slot, parent, slot_status.clone()) {
Err(err) => {
error!(
"Failed to update slot status at slot {}, error: {} to plugin {}",
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated slot status at slot {} to plugin {}",
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-slot-us",
measure.as_us() as usize,
1000,
1000
);
}
}
}


@@ -0,0 +1,55 @@
/// Managing the AccountsDb plugins
use {
libloading::{Library, Symbol},
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::AccountsDbPlugin,
std::error::Error,
};
#[derive(Default, Debug)]
pub struct AccountsDbPluginManager {
pub plugins: Vec<Box<dyn AccountsDbPlugin>>,
libs: Vec<Library>,
}
impl AccountsDbPluginManager {
pub fn new() -> Self {
AccountsDbPluginManager {
plugins: Vec::default(),
libs: Vec::default(),
}
}
/// # Safety
///
/// This function loads the dynamically linked library specified in the path. The library
/// must do necessary initializations.
pub unsafe fn load_plugin(
&mut self,
libpath: &str,
config_file: &str,
) -> Result<(), Box<dyn Error>> {
type PluginConstructor = unsafe fn() -> *mut dyn AccountsDbPlugin;
let lib = Library::new(libpath)?;
let constructor: Symbol<PluginConstructor> = lib.get(b"_create_plugin")?;
let plugin_raw = constructor();
let mut plugin = Box::from_raw(plugin_raw);
plugin.on_load(config_file)?;
self.plugins.push(plugin);
self.libs.push(lib);
Ok(())
}
/// Unload all plugins and loaded plugin libraries, making sure to fire
/// their `on_unload()` methods so they can do any necessary cleanup.
pub fn unload(&mut self) {
for mut plugin in self.plugins.drain(..) {
info!("Unloading plugin for {:?}", plugin.name());
plugin.on_unload();
}
for lib in self.libs.drain(..) {
drop(lib);
}
}
}
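One design point worth noting in the manager's `unload` above: plugin objects are drained and dropped before the `Library` handles they came from, since a plugin's code lives inside the shared library and must not outlive it. The self-contained sketch below illustrates that ordering with stand-in types (recording drops in a thread-local) instead of real `libloading::Library` handles:

```rust
use std::cell::RefCell;

thread_local! {
    // Records drop order so the sequencing is observable.
    static DROPS: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct FakePlugin;
impl Drop for FakePlugin {
    fn drop(&mut self) {
        DROPS.with(|d| d.borrow_mut().push("plugin"));
    }
}

struct FakeLibrary;
impl Drop for FakeLibrary {
    fn drop(&mut self) {
        DROPS.with(|d| d.borrow_mut().push("library"));
    }
}

struct Manager {
    plugins: Vec<FakePlugin>,
    libs: Vec<FakeLibrary>,
}

impl Manager {
    /// Mirrors the unload shape above: drain the plugins first, then
    /// release the libraries their code was loaded from.
    fn unload(&mut self) {
        self.plugins.drain(..).for_each(drop);
        self.libs.drain(..).for_each(drop);
    }
}

fn drop_order_after_unload() -> Vec<&'static str> {
    let mut m = Manager {
        plugins: vec![FakePlugin],
        libs: vec![FakeLibrary],
    };
    m.unload();
    DROPS.with(|d| d.borrow().clone())
}
```

Reversing the two `drain` calls in real code would risk calling into unmapped library code during plugin teardown.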


@@ -0,0 +1,157 @@
use {
crate::{
accounts_update_notifier::AccountsUpdateNotifierImpl,
accountsdb_plugin_manager::AccountsDbPluginManager,
slot_status_observer::SlotStatusObserver,
},
crossbeam_channel::Receiver,
log::*,
serde_json,
solana_rpc::optimistically_confirmed_bank_tracker::BankNotification,
solana_runtime::accounts_update_notifier_interface::AccountsUpdateNotifier,
std::{
fs::File,
io::Read,
path::{Path, PathBuf},
sync::{Arc, RwLock},
thread,
},
thiserror::Error,
};
#[derive(Error, Debug)]
pub enum AccountsdbPluginServiceError {
#[error("Cannot open the plugin config file")]
CannotOpenConfigFile(String),
#[error("Cannot read the plugin config file")]
CannotReadConfigFile(String),
#[error("The config file is not in a valid Json format")]
InvalidConfigFileFormat(String),
#[error("Plugin library path is not specified in the config file")]
LibPathNotSet,
#[error("Invalid plugin path")]
InvalidPluginPath,
#[error("Cannot load plugin shared library")]
PluginLoadError(String),
}
/// The service managing the AccountsDb plugin workflow.
pub struct AccountsDbPluginService {
slot_status_observer: SlotStatusObserver,
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
accounts_update_notifier: AccountsUpdateNotifier,
}
impl AccountsDbPluginService {
/// Creates and returns the AccountsDbPluginService.
/// # Arguments
/// * `confirmed_bank_receiver` - The receiver for confirmed bank notification
/// * `accountsdb_plugin_config_file` - The config file path for the plugin. The
/// config file controls the plugin responsible
/// for transporting the data to external data stores. It is defined in JSON format.
/// The `libpath` field should point to the full path of the dynamic shared library
/// (.so file) to be loaded. The shared library must implement the `AccountsDbPlugin`
/// trait, and must also export a `C` function `_create_plugin` which
/// creates the implementation of `AccountsDbPlugin` and returns it to the caller.
/// The definition of the remaining JSON fields is up to the concrete plugin implementation;
/// they are usually used to configure connection information for the external data store.
pub fn new(
confirmed_bank_receiver: Receiver<BankNotification>,
accountsdb_plugin_config_files: &[PathBuf],
) -> Result<Self, AccountsdbPluginServiceError> {
info!(
"Starting AccountsDbPluginService from config files: {:?}",
accountsdb_plugin_config_files
);
let mut plugin_manager = AccountsDbPluginManager::new();
for accountsdb_plugin_config_file in accountsdb_plugin_config_files {
Self::load_plugin(&mut plugin_manager, accountsdb_plugin_config_file)?;
}
let plugin_manager = Arc::new(RwLock::new(plugin_manager));
let accounts_update_notifier = Arc::new(RwLock::new(AccountsUpdateNotifierImpl::new(
plugin_manager.clone(),
)));
let slot_status_observer =
SlotStatusObserver::new(confirmed_bank_receiver, accounts_update_notifier.clone());
info!("Started AccountsDbPluginService");
Ok(AccountsDbPluginService {
slot_status_observer,
plugin_manager,
accounts_update_notifier,
})
}
fn load_plugin(
plugin_manager: &mut AccountsDbPluginManager,
accountsdb_plugin_config_file: &Path,
) -> Result<(), AccountsdbPluginServiceError> {
let mut file = match File::open(accountsdb_plugin_config_file) {
Ok(file) => file,
Err(err) => {
return Err(AccountsdbPluginServiceError::CannotOpenConfigFile(format!(
"Failed to open the plugin config file {:?}, error: {:?}",
accountsdb_plugin_config_file, err
)));
}
};
let mut contents = String::new();
if let Err(err) = file.read_to_string(&mut contents) {
return Err(AccountsdbPluginServiceError::CannotReadConfigFile(format!(
"Failed to read the plugin config file {:?}, error: {:?}",
accountsdb_plugin_config_file, err
)));
}
let result: serde_json::Value = match serde_json::from_str(&contents) {
Ok(value) => value,
Err(err) => {
return Err(AccountsdbPluginServiceError::InvalidConfigFileFormat(
format!(
"The config file {:?} is not in a valid Json format, error: {:?}",
accountsdb_plugin_config_file, err
),
));
}
};
let libpath = result["libpath"]
.as_str()
.ok_or(AccountsdbPluginServiceError::LibPathNotSet)?;
let config_file = accountsdb_plugin_config_file
.as_os_str()
.to_str()
.ok_or(AccountsdbPluginServiceError::InvalidPluginPath)?;
unsafe {
let result = plugin_manager.load_plugin(libpath, config_file);
if let Err(err) = result {
let msg = format!(
"Failed to load the plugin library: {:?}, error: {:?}",
libpath, err
);
return Err(AccountsdbPluginServiceError::PluginLoadError(msg));
}
}
Ok(())
}
pub fn get_accounts_update_notifier(&self) -> AccountsUpdateNotifier {
self.accounts_update_notifier.clone()
}
pub fn join(mut self) -> thread::Result<()> {
self.slot_status_observer.join()?;
self.plugin_manager.write().unwrap().unload();
Ok(())
}
}

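Per the doc comment above, a plugin config file is JSON whose `libpath` names the shared library to load; all remaining fields are plugin-specific. A minimal illustrative example (the path and the PostgreSQL-specific fields are placeholders, not required values):

```json
{
    "libpath": "/path/to/libsolana_accountsdb_plugin_postgres.so",
    "host": "postgres-server",
    "user": "solana"
}
```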

@@ -0,0 +1,4 @@
pub mod accounts_update_notifier;
pub mod accountsdb_plugin_manager;
pub mod accountsdb_plugin_service;
pub mod slot_status_observer;


@@ -0,0 +1,80 @@
use {
crossbeam_channel::Receiver,
solana_rpc::optimistically_confirmed_bank_tracker::BankNotification,
solana_runtime::accounts_update_notifier_interface::AccountsUpdateNotifier,
std::{
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, Builder, JoinHandle},
},
};
#[derive(Debug)]
pub(crate) struct SlotStatusObserver {
bank_notification_receiver_service: Option<JoinHandle<()>>,
exit_updated_slot_server: Arc<AtomicBool>,
}
impl SlotStatusObserver {
pub fn new(
bank_notification_receiver: Receiver<BankNotification>,
accounts_update_notifier: AccountsUpdateNotifier,
) -> Self {
let exit_updated_slot_server = Arc::new(AtomicBool::new(false));
Self {
bank_notification_receiver_service: Some(Self::run_bank_notification_receiver(
bank_notification_receiver,
exit_updated_slot_server.clone(),
accounts_update_notifier,
)),
exit_updated_slot_server,
}
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_updated_slot_server.store(true, Ordering::Relaxed);
self.bank_notification_receiver_service
.take()
.map(JoinHandle::join)
.unwrap()
}
fn run_bank_notification_receiver(
bank_notification_receiver: Receiver<BankNotification>,
exit: Arc<AtomicBool>,
accounts_update_notifier: AccountsUpdateNotifier,
) -> JoinHandle<()> {
Builder::new()
.name("bank_notification_receiver".to_string())
.spawn(move || {
while !exit.load(Ordering::Relaxed) {
if let Ok(slot) = bank_notification_receiver.recv() {
match slot {
BankNotification::OptimisticallyConfirmed(slot) => {
accounts_update_notifier
.read()
.unwrap()
.notify_slot_confirmed(slot, None);
}
BankNotification::Frozen(bank) => {
accounts_update_notifier
.read()
.unwrap()
.notify_slot_processed(bank.slot(), Some(bank.parent_slot()));
}
BankNotification::Root(bank) => {
accounts_update_notifier
.read()
.unwrap()
.notify_slot_rooted(bank.slot(), Some(bank.parent_slot()));
}
}
}
}
})
.unwrap()
}
}


@@ -0,0 +1,33 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-accountsdb-plugin-postgres"
description = "The Solana AccountsDb plugin for PostgreSQL database."
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
log = "0.4.14"
postgres = { version = "0.19.1", features = ["with-chrono-0_4"] }
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.67"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.8.2" }
solana-logger = { path = "../logger", version = "=1.8.2" }
solana-measure = { path = "../measure", version = "=1.8.2" }
solana-metrics = { path = "../metrics", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
thiserror = "1.0.21"
tokio-postgres = "0.7.3"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -0,0 +1,5 @@
This is an example implementation of the AccountsDb plugin for the PostgreSQL database.
Please see the `src/accountsdb_plugin_postgres.rs` for the format of the plugin's configuration file.
To create the schema objects for the database, please use `scripts/create_schema.sql`.
`scripts/drop_schema.sql` can be used to tear down the schema objects.


@@ -0,0 +1,54 @@
/**
* This plugin implementation for PostgreSQL requires the following tables
*/
-- The table storing accounts
CREATE TABLE account (
pubkey BYTEA PRIMARY KEY,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- The table storing slot information
CREATE TABLE slot (
slot BIGINT PRIMARY KEY,
parent BIGINT,
status varchar(16) NOT NULL,
updated_on TIMESTAMP NOT NULL
);
/**
 * The following is for keeping historical data for accounts and is not required for the plugin to work.
*/
-- The table storing historical data for accounts
CREATE TABLE account_audit (
pubkey BYTEA,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
CREATE FUNCTION audit_account_update() RETURNS trigger AS $audit_account_update$
BEGIN
INSERT INTO account_audit (pubkey, owner, lamports, slot, executable, rent_epoch, data, write_version, updated_on)
VALUES (OLD.pubkey, OLD.owner, OLD.lamports, OLD.slot,
OLD.executable, OLD.rent_epoch, OLD.data, OLD.write_version, OLD.updated_on);
RETURN NEW;
END;
$audit_account_update$ LANGUAGE plpgsql;
CREATE TRIGGER account_update_trigger AFTER UPDATE OR DELETE ON account
FOR EACH ROW EXECUTE PROCEDURE audit_account_update();


@@ -0,0 +1,9 @@
/**
* Script for cleaning up the schema for PostgreSQL used for the AccountsDb plugin.
*/
DROP TRIGGER account_update_trigger ON account;
DROP FUNCTION audit_account_update;
DROP TABLE account_audit;
DROP TABLE account;
DROP TABLE slot;


@@ -0,0 +1,69 @@
use {log::*, std::collections::HashSet};
#[derive(Debug)]
pub(crate) struct AccountsSelector {
pub accounts: HashSet<Vec<u8>>,
pub owners: HashSet<Vec<u8>>,
pub select_all_accounts: bool,
}
impl AccountsSelector {
pub fn default() -> Self {
AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts: true,
}
}
pub fn new(accounts: &[String], owners: &[String]) -> Self {
info!(
"Creating AccountsSelector from accounts: {:?}, owners: {:?}",
accounts, owners
);
let select_all_accounts = accounts.iter().any(|key| key == "*");
if select_all_accounts {
return AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts,
};
}
let accounts = accounts
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
let owners = owners
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
AccountsSelector {
accounts,
owners,
select_all_accounts,
}
}
pub fn is_account_selected(&self, account: &[u8], owner: &[u8]) -> bool {
self.select_all_accounts || self.accounts.contains(account) || self.owners.contains(owner)
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_create_accounts_selector() {
AccountsSelector::new(
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
&[],
);
AccountsSelector::new(
&[],
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
);
}
}


@@ -0,0 +1,333 @@
use solana_measure::measure::Measure;
/// Main entry for the PostgreSQL plugin
use {
crate::{
accounts_selector::AccountsSelector,
postgres_client::{ParallelPostgresClient, PostgresClientBuilder},
},
bs58,
log::*,
serde_derive::{Deserialize, Serialize},
serde_json,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPlugin, AccountsDbPluginError, ReplicaAccountInfoVersions, Result, SlotStatus,
},
solana_metrics::*,
std::{fs::File, io::Read},
thiserror::Error,
};
#[derive(Default)]
pub struct AccountsDbPluginPostgres {
client: Option<ParallelPostgresClient>,
accounts_selector: Option<AccountsSelector>,
}
impl std::fmt::Debug for AccountsDbPluginPostgres {
fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct AccountsDbPluginPostgresConfig {
pub host: String,
pub user: String,
pub threads: Option<usize>,
pub port: Option<u16>,
pub batch_size: Option<usize>,
}
#[derive(Error, Debug)]
pub enum AccountsDbPluginPostgresError {
#[error("Error connecting to the backend data store. Error message: ({msg})")]
DataStoreConnectionError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
DataSchemaError { msg: String },
}
impl AccountsDbPlugin for AccountsDbPluginPostgres {
fn name(&self) -> &'static str {
"AccountsDbPluginPostgres"
}
/// Do initialization for the PostgreSQL plugin.
/// # Arguments
///
/// Format of the config file:
/// The `accounts_selector` section allows the user to control account selection.
/// "accounts_selector" : {
/// "accounts" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// or:
/// "accounts_selector" = {
/// "owners" : \["pubkey-1", "pubkey-2", ..., "pubkey-m"\]
/// }
/// Accounts satisfying either the accounts condition or the owners condition will be selected.
/// When only owners is specified,
/// all accounts belonging to the owners will be streamed.
/// The accounts field supports a wildcard to select all accounts:
/// "accounts_selector" : {
/// "accounts" : \["*"\],
/// }
/// "host" specifies the PostgreSQL server.
/// "user" specifies the PostgreSQL user.
/// "threads" optional, specifies the number of worker threads for the plugin. Each thread
/// maintains a PostgreSQL connection to the server. The default is 100.
/// "batch_size" optional, specifies the batch size for bulk inserts when the AccountsDb is
/// restored from a snapshot. The default is 10.
/// # Examples
/// {
/// "libpath": "/home/solana/target/release/libsolana_accountsdb_plugin_postgres.so",
/// "host": "host_foo",
/// "user": "solana",
/// "threads": 10,
/// "accounts_selector" : {
/// "owners" : ["9oT9R5ZyRovSVnt37QvVoBttGpNqR3J7unkb567NP8k3"]
/// }
/// }
fn on_load(&mut self, config_file: &str) -> Result<()> {
solana_logger::setup_with_default("info");
info!(
"Loading plugin {:?} from config_file {:?}",
self.name(),
config_file
);
let mut file = File::open(config_file)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let result: serde_json::Value = serde_json::from_str(&contents).unwrap();
self.accounts_selector = Some(Self::create_accounts_selector_from_config(&result));
let result: serde_json::Result<AccountsDbPluginPostgresConfig> =
serde_json::from_str(&contents);
match result {
Err(err) => {
return Err(AccountsDbPluginError::ConfigFileReadError {
msg: format!(
"The config file is not in the JSON format expected: {:?}",
err
),
})
}
Ok(config) => {
let client = PostgresClientBuilder::build_pararallel_postgres_client(&config)?;
self.client = Some(client);
}
}
Ok(())
}
fn on_unload(&mut self) {
info!("Unloading plugin: {:?}", self.name());
match &mut self.client {
None => {}
Some(client) => {
client.join().unwrap();
}
}
}
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
let mut measure_all = Measure::start("accountsdb-plugin-postgres-update-account-main");
match account {
ReplicaAccountInfoVersions::V0_0_1(account) => {
let mut measure_select =
Measure::start("accountsdb-plugin-postgres-update-account-select");
if let Some(accounts_selector) = &self.accounts_selector {
if !accounts_selector.is_account_selected(account.pubkey, account.owner) {
return Ok(());
}
} else {
return Ok(());
}
measure_select.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-select-us",
measure_select.as_us() as usize,
100000,
100000
);
debug!(
"Updating account {:?} with owner {:?} at slot {:?} using account selector {:?}",
bs58::encode(account.pubkey).into_string(),
bs58::encode(account.owner).into_string(),
slot,
self.accounts_selector.as_ref().unwrap()
);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database."
.to_string(),
},
)));
}
Some(client) => {
let mut measure_update =
Measure::start("accountsdb-plugin-postgres-update-account-client");
let result = { client.update_account(account, slot, is_startup) };
measure_update.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-client-us",
measure_update.as_us() as usize,
100000,
100000
);
if let Err(err) = result {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!("Failed to persist the update of account to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
}
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-main-us",
measure_all.as_us() as usize,
100000,
100000
);
Ok(())
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
info!("Updating slot {:?} with status {:?}", slot, status);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.update_slot_status(slot, parent, status);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the update of slot to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<()> {
info!("Notifying the end of startup for accounts notifications");
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.notify_end_of_startup();
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to notify the end of startup for accounts notifications. Error: {:?}", err)
});
}
}
}
Ok(())
}
}
impl AccountsDbPluginPostgres {
fn create_accounts_selector_from_config(config: &serde_json::Value) -> AccountsSelector {
let accounts_selector = &config["accounts_selector"];
if accounts_selector.is_null() {
AccountsSelector::default()
} else {
let accounts = &accounts_selector["accounts"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
let owners = &accounts_selector["owners"];
let owners: Vec<String> = if owners.is_array() {
owners
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
AccountsSelector::new(&accounts, &owners)
}
}
pub fn new() -> Self {
AccountsDbPluginPostgres {
client: None,
accounts_selector: None,
}
}
}
#[no_mangle]
#[allow(improper_ctypes_definitions)]
/// # Safety
///
/// This function returns the AccountsDbPluginPostgres pointer as trait AccountsDbPlugin.
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
let plugin = AccountsDbPluginPostgres::new();
let plugin: Box<dyn AccountsDbPlugin> = Box::new(plugin);
Box::into_raw(plugin)
}
#[cfg(test)]
pub(crate) mod tests {
use {super::*, serde_json};
#[test]
fn test_accounts_selector_from_config() {
let config = "{\"accounts_selector\" : { \
\"owners\" : [\"9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin\"] \
}}";
let config: serde_json::Value = serde_json::from_str(config).unwrap();
AccountsDbPluginPostgres::create_accounts_selector_from_config(&config);
}
}


@@ -0,0 +1,3 @@
pub mod accounts_selector;
pub mod accountsdb_plugin_postgres;
pub mod postgres_client;


@@ -0,0 +1,776 @@
#![allow(clippy::integer_arithmetic)]
/// A concurrent implementation for writing accounts into PostgreSQL in parallel.
use {
crate::accountsdb_plugin_postgres::{
AccountsDbPluginPostgresConfig, AccountsDbPluginPostgresError,
},
chrono::Utc,
crossbeam_channel::{bounded, Receiver, RecvTimeoutError, Sender},
log::*,
postgres::{Client, NoTls, Statement},
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPluginError, ReplicaAccountInfo, SlotStatus,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_sdk::timing::AtomicInterval,
std::{
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
Arc, Mutex,
},
thread::{self, sleep, Builder, JoinHandle},
time::Duration,
},
tokio_postgres::types::ToSql,
};
/// The maximum number of asynchronous requests allowed in the channel, to avoid excessive
/// memory usage. The downside: calls can block once this threshold is reached.
const MAX_ASYNC_REQUESTS: usize = 40960;
const DEFAULT_POSTGRES_PORT: u16 = 5432;
const DEFAULT_THREADS_COUNT: usize = 100;
const DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE: usize = 10;
const ACCOUNT_COLUMN_COUNT: usize = 9;
struct PostgresSqlClientWrapper {
client: Client,
update_account_stmt: Statement,
bulk_account_insert_stmt: Statement,
}
pub struct SimplePostgresClient {
batch_size: usize,
pending_account_updates: Vec<DbAccountInfo>,
client: Mutex<PostgresSqlClientWrapper>,
}
struct PostgresClientWorker {
client: SimplePostgresClient,
/// Indicates whether the accounts notification during startup is done.
is_startup_done: bool,
}
impl Eq for DbAccountInfo {}
#[derive(Clone, PartialEq, Debug)]
pub struct DbAccountInfo {
pub pubkey: Vec<u8>,
pub lamports: i64,
pub owner: Vec<u8>,
pub executable: bool,
pub rent_epoch: i64,
pub data: Vec<u8>,
pub slot: i64,
pub write_version: i64,
}
impl DbAccountInfo {
fn new<T: ReadableAccountInfo>(account: &T, slot: u64) -> DbAccountInfo {
let data = account.data().to_vec();
Self {
pubkey: account.pubkey().to_vec(),
lamports: account.lamports() as i64,
owner: account.owner().to_vec(),
executable: account.executable(),
rent_epoch: account.rent_epoch() as i64,
data,
slot: slot as i64,
write_version: account.write_version(),
}
}
}
pub trait ReadableAccountInfo: Sized {
fn pubkey(&self) -> &[u8];
fn owner(&self) -> &[u8];
fn lamports(&self) -> i64;
fn executable(&self) -> bool;
fn rent_epoch(&self) -> i64;
fn data(&self) -> &[u8];
fn write_version(&self) -> i64;
}
impl ReadableAccountInfo for DbAccountInfo {
fn pubkey(&self) -> &[u8] {
&self.pubkey
}
fn owner(&self) -> &[u8] {
&self.owner
}
fn lamports(&self) -> i64 {
self.lamports
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch
}
fn data(&self) -> &[u8] {
&self.data
}
fn write_version(&self) -> i64 {
self.write_version
}
}
impl<'a> ReadableAccountInfo for ReplicaAccountInfo<'a> {
fn pubkey(&self) -> &[u8] {
self.pubkey
}
fn owner(&self) -> &[u8] {
self.owner
}
fn lamports(&self) -> i64 {
self.lamports as i64
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch as i64
}
fn data(&self) -> &[u8] {
self.data
}
fn write_version(&self) -> i64 {
self.write_version as i64
}
}
pub trait PostgresClient {
fn join(&mut self) -> thread::Result<()> {
Ok(())
}
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError>;
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError>;
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError>;
}
impl SimplePostgresClient {
fn connect_to_db(
config: &AccountsDbPluginPostgresConfig,
) -> Result<Client, AccountsDbPluginError> {
let port = config.port.unwrap_or(DEFAULT_POSTGRES_PORT);
let connection_str = format!("host={} user={} port={}", config.host, config.user, port);
match Client::connect(&connection_str, NoTls) {
Err(err) => {
let msg = format!(
"Error in connecting to the PostgreSQL database: {:?} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, connection_str);
error!("{}", msg);
Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError { msg },
)))
}
Ok(client) => Ok(client),
}
}
fn build_bulk_account_insert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
let mut stmt = String::from("INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) VALUES");
for j in 0..batch_size {
let row = j * ACCOUNT_COLUMN_COUNT;
let val_str = format!(
"(${}, ${}, ${}, ${}, ${}, ${}, ${}, ${}, ${})",
row + 1,
row + 2,
row + 3,
row + 4,
row + 5,
row + 6,
row + 7,
row + 8,
row + 9,
);
if j == 0 {
stmt = format!("{} {}", &stmt, val_str);
} else {
stmt = format!("{}, {}", &stmt, val_str);
}
}
let handle_conflict = "ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
stmt = format!("{} {}", stmt, handle_conflict);
info!("{}", stmt);
let bulk_stmt = client.prepare(&stmt);
match bulk_stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {} user: {} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
fn build_single_account_upsert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9) \
ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {} user: {} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
/// Internal function for updating or inserting a single account
fn upsert_account_internal(
account: &DbAccountInfo,
statement: &Statement,
client: &mut Client,
) -> Result<(), AccountsDbPluginError> {
let lamports = account.lamports() as i64;
let rent_epoch = account.rent_epoch() as i64;
let updated_on = Utc::now().naive_utc();
let result = client.query(
statement,
&[
&account.pubkey(),
&account.slot,
&account.owner(),
&lamports,
&account.executable(),
&rent_epoch,
&account.data(),
&account.write_version(),
&updated_on,
],
);
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
Ok(())
}
/// Update or insert a single account
fn upsert_account(&mut self, account: &DbAccountInfo) -> Result<(), AccountsDbPluginError> {
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
Self::upsert_account_internal(account, statement, client)
}
/// Insert accounts in batch to reduce network overhead
fn insert_accounts_in_batch(
&mut self,
account: DbAccountInfo,
) -> Result<(), AccountsDbPluginError> {
self.pending_account_updates.push(account);
if self.pending_account_updates.len() == self.batch_size {
let mut measure = Measure::start("accountsdb-plugin-postgres-prepare-values");
let mut values: Vec<&(dyn ToSql + Sync)> =
Vec::with_capacity(self.batch_size * ACCOUNT_COLUMN_COUNT);
let updated_on = Utc::now().naive_utc();
for j in 0..self.batch_size {
let account = &self.pending_account_updates[j];
values.push(&account.pubkey);
values.push(&account.slot);
values.push(&account.owner);
values.push(&account.lamports);
values.push(&account.executable);
values.push(&account.rent_epoch);
values.push(&account.data);
values.push(&account.write_version);
values.push(&updated_on);
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-prepare-values-us",
measure.as_us() as usize,
10000,
10000
);
let mut measure = Measure::start("accountsdb-plugin-postgres-update-account");
let client = self.client.get_mut().unwrap();
let result = client
.client
.query(&client.bulk_account_insert_stmt, &values);
self.pending_account_updates.clear();
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-us",
measure.as_us() as usize,
10000,
10000
);
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-count",
self.batch_size,
10000,
10000
);
}
Ok(())
}
/// Flush any leftover accounts that did not fill a complete batch
fn flush_buffered_writes(&mut self) -> Result<(), AccountsDbPluginError> {
if self.pending_account_updates.is_empty() {
return Ok(());
}
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
for account in self.pending_account_updates.drain(..) {
Self::upsert_account_internal(&account, statement, client)?;
}
Ok(())
}
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating SimplePostgresClient...");
let mut client = Self::connect_to_db(config)?;
let bulk_account_insert_stmt =
Self::build_bulk_account_insert_statement(&mut client, config)?;
let update_account_stmt = Self::build_single_account_upsert_statement(&mut client, config)?;
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
info!("Created SimplePostgresClient.");
Ok(Self {
batch_size,
pending_account_updates: Vec::with_capacity(batch_size),
client: Mutex::new(PostgresSqlClientWrapper {
client,
update_account_stmt,
bulk_account_insert_stmt,
}),
})
}
}
impl PostgresClient for SimplePostgresClient {
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
trace!(
"Updating account {} with owner {} at slot {}",
bs58::encode(account.pubkey()).into_string(),
bs58::encode(account.owner()).into_string(),
account.slot,
);
if !is_startup {
return self.upsert_account(&account);
}
self.insert_accounts_in_batch(account)
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
info!("Updating slot {:?} with status {:?}", slot, status);
let slot = slot as i64; // postgres only supports i64
let parent = parent.map(|parent| parent as i64);
let updated_on = Utc::now().naive_utc();
let status_str = status.as_str();
let client = self.client.get_mut().unwrap();
let result = match parent {
Some(parent) => {
client.client.execute(
"INSERT INTO slot (slot, parent, status, updated_on) \
VALUES ($1, $2, $3, $4) \
ON CONFLICT (slot) DO UPDATE SET parent=$2, status=$3, updated_on=$4",
&[
&slot,
&parent,
&status_str,
&updated_on,
],
)
}
None => {
client.client.execute(
"INSERT INTO slot (slot, status, updated_on) \
VALUES ($1, $2, $3) \
ON CONFLICT (slot) DO UPDATE SET status=$2, updated_on=$3",
&[
&slot,
&status_str,
&updated_on,
],
)
}
};
match result {
Err(err) => {
let msg = format!(
"Failed to persist the update of slot to the PostgreSQL database. Error: {:?}",
err
);
error!("{:?}", msg);
return Err(AccountsDbPluginError::SlotStatusUpdateError { msg });
}
Ok(rows) => {
assert_eq!(1, rows, "Expected one row to be updated at a time");
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
self.flush_buffered_writes()
}
}
struct UpdateAccountRequest {
account: DbAccountInfo,
is_startup: bool,
}
struct UpdateSlotRequest {
slot: u64,
parent: Option<u64>,
slot_status: SlotStatus,
}
enum DbWorkItem {
UpdateAccount(UpdateAccountRequest),
UpdateSlot(UpdateSlotRequest),
}
impl PostgresClientWorker {
fn new(config: AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
let result = SimplePostgresClient::new(&config);
match result {
Ok(client) => Ok(PostgresClientWorker {
client,
is_startup_done: false,
}),
Err(err) => {
error!("Error in creating SimplePostgresClient: {}", err);
Err(err)
}
}
}
fn do_work(
&mut self,
receiver: Receiver<DbWorkItem>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
) -> Result<(), AccountsDbPluginError> {
while !exit_worker.load(Ordering::Relaxed) {
let mut measure = Measure::start("accountsdb-plugin-postgres-worker-recv");
let work = receiver.recv_timeout(Duration::from_millis(500));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-worker-recv-us",
measure.as_us() as usize,
100000,
100000
);
match work {
Ok(work) => match work {
DbWorkItem::UpdateAccount(request) => {
self.client
.update_account(request.account, request.is_startup)?;
}
DbWorkItem::UpdateSlot(request) => {
self.client.update_slot_status(
request.slot,
request.parent,
request.slot_status,
)?;
}
},
Err(err) => match err {
RecvTimeoutError::Timeout => {
if !self.is_startup_done && is_startup_done.load(Ordering::Relaxed) {
self.client.notify_end_of_startup()?;
self.is_startup_done = true;
startup_done_count.fetch_add(1, Ordering::Relaxed);
}
continue;
}
_ => {
error!("Error in receiving the item {:?}", err);
break;
}
},
}
}
Ok(())
}
}
pub struct ParallelPostgresClient {
workers: Vec<JoinHandle<Result<(), AccountsDbPluginError>>>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
initialized_worker_count: Arc<AtomicUsize>,
sender: Sender<DbWorkItem>,
last_report: AtomicInterval,
}
impl ParallelPostgresClient {
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating ParallelPostgresClient...");
let (sender, receiver) = bounded(MAX_ASYNC_REQUESTS);
let exit_worker = Arc::new(AtomicBool::new(false));
let mut workers = Vec::default();
let is_startup_done = Arc::new(AtomicBool::new(false));
let startup_done_count = Arc::new(AtomicUsize::new(0));
let worker_count = config.threads.unwrap_or(DEFAULT_THREADS_COUNT);
let initialized_worker_count = Arc::new(AtomicUsize::new(0));
for i in 0..worker_count {
let cloned_receiver = receiver.clone();
let exit_clone = exit_worker.clone();
let is_startup_done_clone = is_startup_done.clone();
let startup_done_count_clone = startup_done_count.clone();
let initialized_worker_count_clone = initialized_worker_count.clone();
let config = config.clone();
let worker = Builder::new()
.name(format!("worker-{}", i))
.spawn(move || -> Result<(), AccountsDbPluginError> {
let result = PostgresClientWorker::new(config);
match result {
Ok(mut worker) => {
initialized_worker_count_clone.fetch_add(1, Ordering::Relaxed);
worker.do_work(
cloned_receiver,
exit_clone,
is_startup_done_clone,
startup_done_count_clone,
)?;
Ok(())
}
Err(err) => Err(err),
}
})
.unwrap();
workers.push(worker);
}
info!("Created ParallelPostgresClient.");
Ok(Self {
last_report: AtomicInterval::default(),
workers,
exit_worker,
is_startup_done,
startup_done_count,
initialized_worker_count,
sender,
})
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_worker.store(true, Ordering::Relaxed);
while !self.workers.is_empty() {
let worker = self.workers.pop();
if worker.is_none() {
break;
}
let worker = worker.unwrap();
let result = worker.join().unwrap();
if result.is_err() {
error!("The worker thread has failed: {:?}", result);
}
}
Ok(())
}
pub fn update_account(
&mut self,
account: &ReplicaAccountInfo,
slot: u64,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
if self.last_report.should_update(30000) {
datapoint_debug!(
"postgres-plugin-stats",
("message-queue-length", self.sender.len() as i64, i64),
);
}
let mut measure = Measure::start("accountsdb-plugin-posgres-create-work-item");
let wrk_item = DbWorkItem::UpdateAccount(UpdateAccountRequest {
account: DbAccountInfo::new(account, slot),
is_startup,
});
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-create-work-item-us",
measure.as_us() as usize,
100000,
100000
);
let mut measure = Measure::start("accountsdb-plugin-posgres-send-msg");
if let Err(err) = self.sender.send(wrk_item) {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!(
"Failed to update the account {:?}, error: {:?}",
bs58::encode(account.pubkey()).into_string(),
err
),
});
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-send-msg-us",
measure.as_us() as usize,
100000,
100000
);
Ok(())
}
pub fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
if let Err(err) = self.sender.send(DbWorkItem::UpdateSlot(UpdateSlotRequest {
slot,
parent,
slot_status: status,
})) {
return Err(AccountsDbPluginError::SlotStatusUpdateError {
msg: format!("Failed to update the slot {:?}, error: {:?}", slot, err),
});
}
Ok(())
}
pub fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
info!("Notifying the end of startup");
// Ensure all items in the queue have been received by the workers
while !self.sender.is_empty() {
sleep(Duration::from_millis(100));
}
self.is_startup_done.store(true, Ordering::Relaxed);
// Wait for all worker threads to be done with flushing
while self.startup_done_count.load(Ordering::Relaxed)
!= self.initialized_worker_count.load(Ordering::Relaxed)
{
info!(
"Startup done count: {}, good worker thread count: {}",
self.startup_done_count.load(Ordering::Relaxed),
self.initialized_worker_count.load(Ordering::Relaxed)
);
sleep(Duration::from_millis(100));
}
info!("Done with notifying the end of startup");
Ok(())
}
}
pub struct PostgresClientBuilder {}
impl PostgresClientBuilder {
pub fn build_pararallel_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<ParallelPostgresClient, AccountsDbPluginError> {
ParallelPostgresClient::new(config)
}
pub fn build_simple_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<SimplePostgresClient, AccountsDbPluginError> {
SimplePostgresClient::new(config)
}
}
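The `ParallelPostgresClient` above is an instance of a common bounded-channel worker-pool pattern: a fixed number of threads drain a bounded queue, polling with a timeout so they can observe an exit flag, and `join` drains then stops them. A minimal self-contained sketch of that shape (using `std::sync::mpsc::sync_channel` in place of crossbeam's `bounded`; all names here are illustrative, not part of the plugin):

```rust
use std::{
    sync::{
        atomic::{AtomicBool, AtomicUsize, Ordering},
        mpsc::sync_channel, // stands in for crossbeam's bounded()
        Arc, Mutex,
    },
    thread,
    time::Duration,
};

// Illustrative work item, mirroring the role of DbWorkItem above.
enum WorkItem {
    Update(u64),
}

// Spawns `n_workers` threads draining a bounded queue of `n_items`, then
// signals exit and joins them -- the shape of ParallelPostgresClient::new/join.
fn run_pool(n_workers: usize, n_items: u64) -> usize {
    let (sender, receiver) = sync_channel::<WorkItem>(1024);
    let receiver = Arc::new(Mutex::new(receiver));
    let exit = Arc::new(AtomicBool::new(false));
    let processed = Arc::new(AtomicUsize::new(0));

    let workers: Vec<_> = (0..n_workers)
        .map(|i| {
            let (receiver, exit, processed) =
                (receiver.clone(), exit.clone(), processed.clone());
            thread::Builder::new()
                .name(format!("worker-{}", i))
                .spawn(move || loop {
                    if exit.load(Ordering::Relaxed) {
                        break;
                    }
                    // Poll with a timeout so the exit flag is re-checked
                    // periodically, as do_work does on RecvTimeoutError::Timeout.
                    let work = receiver
                        .lock()
                        .unwrap()
                        .recv_timeout(Duration::from_millis(50));
                    if let Ok(WorkItem::Update(_slot)) = work {
                        processed.fetch_add(1, Ordering::Relaxed);
                    }
                })
                .unwrap()
        })
        .collect();

    for slot in 0..n_items {
        sender.send(WorkItem::Update(slot)).unwrap();
    }
    // Wait for the queue to drain, then stop the workers.
    while processed.load(Ordering::Relaxed) < n_items as usize {
        thread::sleep(Duration::from_millis(5));
    }
    exit.store(true, Ordering::Relaxed);
    for w in workers {
        w.join().unwrap();
    }
    processed.load(Ordering::Relaxed)
}

fn main() {
    println!("processed {}", run_pool(4, 100));
}
```

The bounded queue provides backpressure: `update_account` blocks once `MAX_ASYNC_REQUESTS` items are in flight, rather than letting the queue grow without limit.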


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-banking-bench"
-version = "1.7.14"
+version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -14,18 +14,18 @@ crossbeam-channel = "0.4"
log = "0.4.11"
rand = "0.7.0"
rayon = "1.5.0"
-solana-core = { path = "../core", version = "=1.7.14" }
-solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
-solana-gossip = { path = "../gossip", version = "=1.7.14" }
-solana-ledger = { path = "../ledger", version = "=1.7.14" }
-solana-logger = { path = "../logger", version = "=1.7.14" }
-solana-measure = { path = "../measure", version = "=1.7.14" }
-solana-perf = { path = "../perf", version = "=1.7.14" }
-solana-poh = { path = "../poh", version = "=1.7.14" }
-solana-runtime = { path = "../runtime", version = "=1.7.14" }
-solana-streamer = { path = "../streamer", version = "=1.7.14" }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
-solana-version = { path = "../version", version = "=1.7.14" }
+solana-core = { path = "../core", version = "=1.8.2" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
+solana-gossip = { path = "../gossip", version = "=1.8.2" }
+solana-ledger = { path = "../ledger", version = "=1.8.2" }
+solana-logger = { path = "../logger", version = "=1.8.2" }
+solana-measure = { path = "../measure", version = "=1.8.2" }
+solana-perf = { path = "../perf", version = "=1.8.2" }
+solana-poh = { path = "../poh", version = "=1.8.2" }
+solana-runtime = { path = "../runtime", version = "=1.8.2" }
+solana-streamer = { path = "../streamer", version = "=1.8.2" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
+solana-version = { path = "../version", version = "=1.8.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -16,6 +16,7 @@ use solana_perf::packet::to_packets_chunked;
use solana_poh::poh_recorder::{create_test_recorder, PohRecorder, WorkingBankEntry};
use solana_runtime::{
accounts_background_service::AbsRequestSender, bank::Bank, bank_forks::BankForks,
cost_model::CostModel,
};
use solana_sdk::{
hash::Hash,
@@ -27,7 +28,7 @@ use solana_sdk::{
};
use solana_streamer::socket::SocketAddrSpace;
use std::{
-sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex},
+sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex, RwLock},
thread::sleep,
time::{Duration, Instant},
};
@@ -231,6 +232,7 @@ fn main() {
vote_receiver,
None,
replay_vote_sender,
Arc::new(RwLock::new(CostModel::default())),
);
poh_recorder.lock().unwrap().set_bank(&bank);


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-client"
-version = "1.7.14"
+version = "1.8.2"
description = "Solana banks client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -15,16 +15,16 @@ borsh = "0.9.0"
borsh-derive = "0.9.0"
futures = "0.3"
mio = "0.7.6"
-solana-banks-interface = { path = "../banks-interface", version = "=1.7.14" }
-solana-program = { path = "../sdk/program", version = "=1.7.14" }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
+solana-banks-interface = { path = "../banks-interface", version = "=1.8.2" }
+solana-program = { path = "../sdk/program", version = "=1.8.2" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
tarpc = { version = "0.24.1", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
[dev-dependencies]
-solana-runtime = { path = "../runtime", version = "=1.7.14" }
-solana-banks-server = { path = "../banks-server", version = "=1.7.14" }
+solana-runtime = { path = "../runtime", version = "=1.8.2" }
+solana-banks-server = { path = "../banks-server", version = "=1.8.2" }
[lib]
crate-type = ["lib"]


@@ -385,7 +385,9 @@ mod tests {
let message = Message::new(&[instruction], Some(&mint_pubkey));
Runtime::new()?.block_on(async {
-let client_transport = start_local_server(bank_forks, block_commitment_cache).await;
+let client_transport =
+start_local_server(bank_forks, block_commitment_cache, Duration::from_millis(1))
+.await;
let mut banks_client = start_client(client_transport).await?;
let recent_blockhash = banks_client.get_recent_blockhash().await?;
@@ -416,7 +418,9 @@ mod tests {
let message = Message::new(&[instruction], Some(mint_pubkey));
Runtime::new()?.block_on(async {
-let client_transport = start_local_server(bank_forks, block_commitment_cache).await;
+let client_transport =
+start_local_server(bank_forks, block_commitment_cache, Duration::from_millis(1))
+.await;
let mut banks_client = start_client(client_transport).await?;
let (_, recent_blockhash, last_valid_block_height) = banks_client.get_fees().await?;
let transaction = Transaction::new(&[&genesis.mint_keypair], message, recent_blockhash);


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-interface"
-version = "1.7.14"
+version = "1.8.2"
description = "Solana banks RPC interface"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,7 +12,7 @@ edition = "2018"
[dependencies]
mio = "0.7.6"
serde = { version = "1.0.122", features = ["derive"] }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
tarpc = { version = "0.24.1", features = ["full"] }
[dev-dependencies]


@@ -1,6 +1,6 @@
[package]
name = "solana-banks-server"
-version = "1.7.14"
+version = "1.8.2"
description = "Solana banks server"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -14,10 +14,10 @@ bincode = "1.3.1"
futures = "0.3"
log = "0.4.11"
mio = "0.7.6"
-solana-banks-interface = { path = "../banks-interface", version = "=1.7.14" }
-solana-runtime = { path = "../runtime", version = "=1.7.14" }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
-solana-metrics = { path = "../metrics", version = "=1.7.14" }
+solana-banks-interface = { path = "../banks-interface", version = "=1.8.2" }
+solana-runtime = { path = "../runtime", version = "=1.8.2" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
+solana-metrics = { path = "../metrics", version = "=1.8.2" }
tarpc = { version = "0.24.1", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }


@@ -12,6 +12,7 @@ use solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
feature_set::FeatureSet,
fee_calculator::FeeCalculator,
hash::Hash,
pubkey::Pubkey,
@@ -43,6 +44,7 @@ struct BanksServer {
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
transaction_sender: Sender<TransactionInfo>,
poll_signature_status_sleep_duration: Duration,
}
impl BanksServer {
@@ -54,11 +56,13 @@ impl BanksServer {
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
transaction_sender: Sender<TransactionInfo>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
Self {
bank_forks,
block_commitment_cache,
transaction_sender,
poll_signature_status_sleep_duration,
}
}
@@ -81,6 +85,7 @@ impl BanksServer {
fn new_loopback(
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
let (transaction_sender, transaction_receiver) = channel();
let bank = bank_forks.read().unwrap().working_bank();
@@ -95,7 +100,12 @@ impl BanksServer {
.name("solana-bank-forks-client".to_string())
.spawn(move || Self::run(server_bank_forks, transaction_receiver))
.unwrap();
-Self::new(bank_forks, block_commitment_cache, transaction_sender)
+Self::new(
+bank_forks,
+block_commitment_cache,
+transaction_sender,
+poll_signature_status_sleep_duration,
+)
}
fn slot(&self, commitment: CommitmentLevel) -> Slot {
@@ -120,7 +130,7 @@ impl BanksServer {
.bank(commitment)
.get_signature_status_with_blockhash(signature, blockhash);
while status.is_none() {
-sleep(Duration::from_millis(200)).await;
+sleep(self.poll_signature_status_sleep_duration).await;
let bank = self.bank(commitment);
if bank.block_height() > last_valid_block_height {
break;
@@ -133,11 +143,11 @@ impl BanksServer {
fn verify_transaction(
transaction: &Transaction,
-libsecp256k1_0_5_upgrade_enabled: bool,
+feature_set: &Arc<FeatureSet>,
) -> transaction::Result<()> {
if let Err(err) = transaction.verify() {
Err(err)
-} else if let Err(err) = transaction.verify_precompiles(libsecp256k1_0_5_upgrade_enabled) {
+} else if let Err(err) = transaction.verify_precompiles(feature_set) {
Err(err)
} else {
Ok(())
@@ -227,19 +237,13 @@ impl Banks for BanksServer {
transaction: Transaction,
commitment: CommitmentLevel,
) -> Option<transaction::Result<()>> {
-if let Err(err) = verify_transaction(
-&transaction,
-self.bank(commitment).libsecp256k1_0_5_upgrade_enabled(),
-) {
+if let Err(err) = verify_transaction(&transaction, &self.bank(commitment).feature_set) {
return Some(Err(err));
}
let blockhash = &transaction.message.recent_blockhash;
let last_valid_block_height = self
-.bank_forks
-.read()
-.unwrap()
-.root_bank()
+.bank(commitment)
.get_blockhash_last_valid_block_height(blockhash)
.unwrap();
let signature = transaction.signatures.get(0).cloned().unwrap_or_default();
@@ -267,8 +271,13 @@ impl Banks for BanksServer {
pub async fn start_local_server(
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> UnboundedChannel<Response<BanksResponse>, ClientMessage<BanksRequest>> {
-let banks_server = BanksServer::new_loopback(bank_forks, block_commitment_cache);
+let banks_server = BanksServer::new_loopback(
+bank_forks,
+block_commitment_cache,
+poll_signature_status_sleep_duration,
+);
let (client_transport, server_transport) = transport::channel::unbounded();
let server = server::new(server::Config::default())
.incoming(stream::once(future::ready(server_transport)))
@@ -303,8 +312,12 @@ pub async fn start_tcp_server(
SendTransactionService::new(tpu_addr, &bank_forks, receiver);
-let server =
-BanksServer::new(bank_forks.clone(), block_commitment_cache.clone(), sender);
+let server = BanksServer::new(
+bank_forks.clone(),
+block_commitment_cache.clone(),
+sender,
+Duration::from_millis(200),
+);
chan.respond_with(server.serve()).execute()
})
// Max 10 channels.


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-bench-exchange"
-version = "1.7.14"
+version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -18,23 +18,23 @@ rand = "0.7.0"
rayon = "1.5.0"
serde_json = "1.0.56"
serde_yaml = "0.8.13"
-solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
-solana-core = { path = "../core", version = "=1.7.14" }
-solana-genesis = { path = "../genesis", version = "=1.7.14" }
-solana-client = { path = "../client", version = "=1.7.14" }
-solana-exchange-program = { path = "../programs/exchange", version = "=1.7.14" }
-solana-faucet = { path = "../faucet", version = "=1.7.14" }
-solana-gossip = { path = "../gossip", version = "=1.7.14" }
-solana-logger = { path = "../logger", version = "=1.7.14" }
-solana-metrics = { path = "../metrics", version = "=1.7.14" }
-solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
-solana-runtime = { path = "../runtime", version = "=1.7.14" }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
-solana-streamer = { path = "../streamer", version = "=1.7.14" }
-solana-version = { path = "../version", version = "=1.7.14" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
+solana-core = { path = "../core", version = "=1.8.2" }
+solana-genesis = { path = "../genesis", version = "=1.8.2" }
+solana-client = { path = "../client", version = "=1.8.2" }
+solana-exchange-program = { path = "../programs/exchange", version = "=1.8.2" }
+solana-faucet = { path = "../faucet", version = "=1.8.2" }
+solana-gossip = { path = "../gossip", version = "=1.8.2" }
+solana-logger = { path = "../logger", version = "=1.8.2" }
+solana-metrics = { path = "../metrics", version = "=1.8.2" }
+solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
+solana-runtime = { path = "../runtime", version = "=1.8.2" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
+solana-streamer = { path = "../streamer", version = "=1.8.2" }
+solana-version = { path = "../version", version = "=1.8.2" }
[dev-dependencies]
-solana-local-cluster = { path = "../local-cluster", version = "=1.7.14" }
+solana-local-cluster = { path = "../local-cluster", version = "=1.8.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

View File

@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-bench-streamer"
-version = "1.7.14"
+version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,11 +10,11 @@ publish = false
[dependencies]
clap = "2.33.1"
-solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
-solana-streamer = { path = "../streamer", version = "=1.7.14" }
-solana-logger = { path = "../logger", version = "=1.7.14" }
-solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
-solana-version = { path = "../version", version = "=1.7.14" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
+solana-streamer = { path = "../streamer", version = "=1.8.2" }
+solana-logger = { path = "../logger", version = "=1.8.2" }
+solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
+solana-version = { path = "../version", version = "=1.8.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -2,7 +2,7 @@
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-bench-tps"
-version = "1.7.14"
+version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -15,24 +15,24 @@ log = "0.4.11"
rayon = "1.5.0"
serde_json = "1.0.56"
serde_yaml = "0.8.13"
-solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
-solana-core = { path = "../core", version = "=1.7.14" }
-solana-genesis = { path = "../genesis", version = "=1.7.14" }
-solana-client = { path = "../client", version = "=1.7.14" }
-solana-faucet = { path = "../faucet", version = "=1.7.14" }
-solana-gossip = { path = "../gossip", version = "=1.7.14" }
-solana-logger = { path = "../logger", version = "=1.7.14" }
-solana-metrics = { path = "../metrics", version = "=1.7.14" }
-solana-measure = { path = "../measure", version = "=1.7.14" }
-solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
-solana-runtime = { path = "../runtime", version = "=1.7.14" }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
-solana-streamer = { path = "../streamer", version = "=1.7.14" }
-solana-version = { path = "../version", version = "=1.7.14" }
+solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
+solana-core = { path = "../core", version = "=1.8.2" }
+solana-genesis = { path = "../genesis", version = "=1.8.2" }
+solana-client = { path = "../client", version = "=1.8.2" }
+solana-faucet = { path = "../faucet", version = "=1.8.2" }
+solana-gossip = { path = "../gossip", version = "=1.8.2" }
+solana-logger = { path = "../logger", version = "=1.8.2" }
+solana-metrics = { path = "../metrics", version = "=1.8.2" }
+solana-measure = { path = "../measure", version = "=1.8.2" }
+solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
+solana-runtime = { path = "../runtime", version = "=1.8.2" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
+solana-streamer = { path = "../streamer", version = "=1.8.2" }
+solana-version = { path = "../version", version = "=1.8.2" }
[dev-dependencies]
serial_test = "0.4.0"
-solana-local-cluster = { path = "../local-cluster", version = "=1.7.14" }
+solana-local-cluster = { path = "../local-cluster", version = "=1.8.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -137,7 +137,7 @@ all_test_steps() {
^ci/test-coverage.sh \
^scripts/coverage.sh \
; then
-command_step coverage ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image ci/test-coverage.sh" 30
+command_step coverage ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image ci/test-coverage.sh" 40
wait_step
else
annotate --style info --context test-coverage \


@@ -45,5 +45,12 @@ cargo_audit_ignores=(
# Blocked on jsonrpc removing dependency on unmaintained `websocket`
# https://github.com/paritytech/jsonrpc/issues/605
--ignore RUSTSEC-2021-0079
# chrono: Potential segfault in `localtime_r` invocations
#
# Blocked due to no safe upgrade
# https://github.com/chronotope/chrono/issues/499
--ignore RUSTSEC-2020-0159
)
scripts/cargo-for-all-lock-files.sh stable audit "${cargo_audit_ignores[@]}"


@@ -1,6 +1,6 @@
[package]
name = "solana-clap-utils"
-version = "1.7.14"
+version = "1.8.2"
description = "Solana utilities for the clap"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -12,8 +12,8 @@ edition = "2018"
[dependencies]
clap = "2.33.0"
rpassword = "4.0"
-solana-remote-wallet = { path = "../remote-wallet", version = "=1.7.14" }
-solana-sdk = { path = "../sdk", version = "=1.7.14" }
+solana-remote-wallet = { path = "../remote-wallet", version = "=1.8.2" }
+solana-sdk = { path = "../sdk", version = "=1.8.2" }
thiserror = "1.0.21"
tiny-bip39 = "0.8.1"
uriparse = "0.6.3"


@@ -1,3 +1,14 @@
//! Loading signers and keypairs from the command line.
//!
//! This module contains utilities for loading [Signer]s and [Keypair]s from
//! standard signing sources, from the command line, as in the Solana CLI.
//!
//! The key function here is [`signer_from_path`], which loads a `Signer` from
//! one of several possible sources by interpreting a "path" command line
//! argument. Its documentation includes a description of all possible signing
//! sources supported by the Solana CLI. Many other functions here are
//! variations on, or delegate to, `signer_from_path`.
use {
crate::{
input_parsers::{pubkeys_sigs_of, STDOUT_OUTFILE_TOKEN},
@@ -92,14 +103,56 @@ impl CliSignerInfo {
}
}
/// A command line argument that loads a default signer in absence of other signers.
///
/// This type manages a default signing source which may be overridden by other
/// signing sources via its [`generate_unique_signers`] method.
///
/// [`generate_unique_signers`]: DefaultSigner::generate_unique_signers
///
/// `path` is a signing source as documented by [`signer_from_path`], and
/// `arg_name` is the name of its [clap] command line argument, which is passed
/// to `signer_from_path` as its `keypair_name` argument.
#[derive(Debug, Default)]
pub struct DefaultSigner {
/// The name of the signers command line argument.
pub arg_name: String,
/// The signing source.
pub path: String,
is_path_checked: RefCell<bool>,
}
impl DefaultSigner {
/// Create a new `DefaultSigner`.
///
/// `path` is a signing source as documented by [`signer_from_path`], and
/// `arg_name` is the name of its [clap] command line argument, which is
/// passed to `signer_from_path` as its `keypair_name` argument.
///
/// [clap]: https://docs.rs/clap
///
/// # Examples
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::DefaultSigner;
/// use solana_clap_utils::offline::OfflineArgs;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
///
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// # assert!(default_signer.arg_name.len() > 0);
/// assert_eq!(default_signer.path, keypair_str);
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn new<AN: AsRef<str>, P: AsRef<str>>(arg_name: AN, path: P) -> Self {
let arg_name = arg_name.as_ref().to_string();
let path = path.as_ref().to_string();
@@ -134,6 +187,57 @@ impl DefaultSigner {
Ok(&self.path)
}
/// Generate a unique set of signers, possibly excluding this default signer.
///
/// This function allows a command line application to have a default
/// signer, perhaps representing a default wallet, but to override that
/// signer and instead sign with one or more other signers.
///
/// `bulk_signers` is a vector of signers, all of which are optional. If any
/// of those signers is `None`, then the default signer will be loaded; if
/// all of those signers are `Some`, then the default signer will not be
/// loaded.
///
/// The returned value includes all of the `bulk_signers` that were not
/// `None`, and maybe the default signer, if it was loaded.
///
/// # Examples
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::{DefaultSigner, signer_from_path};
/// use solana_clap_utils::offline::OfflineArgs;
/// use solana_sdk::signer::Signer;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"))
/// .arg(Arg::with_name("payer")
/// .long("payer")
/// .help("The account paying for the transaction"))
/// .offline_args();
///
/// let mut wallet_manager = None;
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
/// let maybe_payer = clap_matches.value_of("payer");
///
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// let maybe_payer_signer = maybe_payer.map(|payer| {
/// signer_from_path(&clap_matches, payer, "payer", &mut wallet_manager)
/// }).transpose()?;
/// let bulk_signers = vec![maybe_payer_signer];
///
/// let unique_signers = default_signer.generate_unique_signers(
/// bulk_signers,
/// &clap_matches,
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn generate_unique_signers(
&self,
bulk_signers: Vec<Option<Box<dyn Signer>>>,
@@ -158,6 +262,45 @@ impl DefaultSigner {
})
}
/// Loads the default [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as
/// various types of _signing source_, depending on its format, one of which
/// is a path to a keypair file. Some sources may require user interaction
/// in the course of calling this function.
///
/// This simply delegates to the [`signer_from_path`] free function, passing
/// it the `DefaultSigner`s `path` and `arg_name` fields as the `path` and
/// `keypair_name` arguments.
///
/// See the [`signer_from_path`] free function for full documentation of how
/// this function interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::DefaultSigner;
/// use solana_clap_utils::offline::OfflineArgs;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// let mut wallet_manager = None;
///
/// let signer = default_signer.signer_from_path(
/// &clap_matches,
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path(
&self,
matches: &ArgMatches,
@@ -166,6 +309,51 @@ impl DefaultSigner {
signer_from_path(matches, self.path()?, &self.arg_name, wallet_manager)
}
/// Loads the default [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as
/// various types of _signing source_, depending on its format, one of which
/// is a path to a keypair file. Some sources may require user interaction
/// in the course of calling this function.
///
/// This simply delegates to the [`signer_from_path_with_config`] free
/// function, passing it the `DefaultSigner`s `path` and `arg_name` fields
/// as the `path` and `keypair_name` arguments.
///
/// See the [`signer_from_path`] free function for full documentation of how
/// this function interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::{SignerFromPathConfig, DefaultSigner};
/// use solana_clap_utils::offline::OfflineArgs;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
/// let default_signer = DefaultSigner::new("keypair", &keypair_str);
/// let mut wallet_manager = None;
///
/// // Allow pubkey signers without accompanying signatures
/// let config = SignerFromPathConfig {
/// allow_null_signer: true,
/// };
///
/// let signer = default_signer.signer_from_path_with_config(
/// &clap_matches,
/// &mut wallet_manager,
/// &config,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path_with_config(
&self,
matches: &ArgMatches,
@@ -258,6 +446,15 @@ pub(crate) fn parse_signer_source<S: AsRef<str>>(
let source = {
#[cfg(target_family = "windows")]
{
// trim matched single-quotes since cmd.exe won't
let mut source = source;
while let Some(trimmed) = source.strip_prefix('\'') {
source = if let Some(trimmed) = trimmed.strip_suffix('\'') {
trimmed
} else {
break;
}
}
source.replace("\\", "/")
}
#[cfg(not(target_family = "windows"))]
@@ -324,19 +521,167 @@ pub fn presigner_from_pubkey_sigs(
})
}
-#[derive(Debug)]
+#[derive(Debug, Default)]
pub struct SignerFromPathConfig {
pub allow_null_signer: bool,
}
-impl Default for SignerFromPathConfig {
-fn default() -> Self {
-Self {
-allow_null_signer: false,
-}
-}
-}
/// Loads a [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// The result of this function is a boxed object of the [Signer] trait. To load
/// a concrete [Keypair], use the [keypair_from_path] function, though note that
/// it does not support all signer sources.
///
/// The `matches` argument is the same set of parsed [clap] matches from which
/// `path` was parsed. It is used to parse various additional command line
/// arguments, depending on which signing source is requested, as described
/// below in "Signing sources".
///
/// [clap]: https://docs.rs/clap
///
/// The `keypair_name` argument is the "name" of the signer, and is typically
/// the name of the clap argument from which the `path` argument was parsed,
/// like "keypair", "from", or "fee-payer". It is used solely for interactively
/// prompting the user, either when entering seed phrases or selecting from
/// multiple hardware wallets.
///
/// The `wallet_manager` is used for establishing connections to a hardware
/// device such as Ledger. If `wallet_manager` is a reference to `None`, and a
/// hardware signer is requested, then this function will attempt to create a
/// wallet manager, assigning it to the mutable `wallet_manager` reference. This
/// argument is typically a reference to `None`.
///
/// # Signing sources
///
/// The `path` argument can simply be a path to a keypair file, but it may also
/// be interpreted in several other ways, in the following order.
///
/// Firstly, the `path` argument may be interpreted as a [URI], with the URI
/// scheme indicating where to load the signer from. If it parses as a URI, then
/// the following schemes are supported:
///
/// - `file:` &mdash; Read the keypair from a JSON keypair file. The path portion
/// of the URI is the file path.
///
/// - `stdin:` &mdash; Read the keypair from stdin, in the JSON format used by
/// the keypair file.
///
/// Non-scheme parts of the URI are ignored.
///
/// - `prompt:` &mdash; The user will be prompted at the command line
/// for their seed phrase and passphrase.
///
/// In this URI the [query string][qs] may contain zero or one of the
/// following key/value pairs that determine the [BIP44 derivation path][dp]
/// of the private key from the seed:
///
/// - `key` &mdash; In this case the value is either one or two numerical
/// indexes separated by a slash, which represent the "account", and
/// "change" components of the BIP44 derivation path. Example: `key=0/0`.
///
/// - `full-path` &mdash; In this case the value is a full derivation path,
/// and the user is responsible for ensuring it is correct. Example:
/// `full-path=m/44/501/0/0/0`.
///
/// If neither is provided, then the default derivation path is used.
///
/// Note that when specifying derivation paths, this routine will convert all
/// indexes into ["hardened"] indexes, even if written as "normal" indexes.
///
/// Other components of the URI besides the scheme and query string are ignored.
///
/// If the "skip_seed_phrase_validation" argument, as defined in
/// [SKIP_SEED_PHRASE_VALIDATION_ARG] is found in `matches`, then the keypair
/// seed will be generated directly from the seed phrase, without parsing or
/// validating it as a BIP39 seed phrase. This allows the use of non-BIP39 seed
/// phrases.
///
/// - `usb:` &mdash; Use a USB hardware device as the signer. In this case, the
/// URI host indicates the device type, and is required. The only currently valid host
/// value is "ledger".
///
/// Optionally, the first segment of the URI path indicates the base-58
/// encoded pubkey of the wallet, and the "account" and "change" indices of
/// the derivation path can be specified with the `key=` query parameter, as
/// with the `prompt:` URI.
///
/// Examples:
///
/// - `usb://ledger`
/// - `usb://ledger?key=0/0`
/// - `usb://ledger/9rPVSygg3brqghvdZ6wsL2i5YNQTGhXGdJzF65YxaCQd`
/// - `usb://ledger/9rPVSygg3brqghvdZ6wsL2i5YNQTGhXGdJzF65YxaCQd?key=0/0`
///
/// Next, the `path` argument may be one of the following strings:
///
/// - `-` &mdash; Read the keypair from stdin. This is the same as the `stdin:`
/// URI scheme.
///
/// - `ASK` &mdash; The user will be prompted at the command line for their seed
/// phrase and passphrase. _This uses a legacy key derivation method and should
/// usually be avoided in favor of `prompt:`._
///
/// Next, if the `path` argument parses as a base-58 public key, then the signer
/// is created without a private key, but with presigned signatures, each parsed
/// from the additional command line arguments, provided by the `matches`
/// argument.
///
/// In this case, the remaining command line arguments are searched for clap
/// arguments named "signer", as defined by [SIGNER_ARG], and each is parsed as
/// a key-value pair of the form "pubkey=signature", where `pubkey` is the same
/// base-58 public key, and `signature` is a serialized signature produced by
/// the corresponding keypair. One of the "signer" signatures must be for the
/// pubkey specified in `path` or this function will return an error; unless the
/// "sign_only" clap argument, as defined by [SIGN_ONLY_ARG], is present in
/// `matches`, in which case the signer will be created with no associated
/// signatures.
///
/// Finally, if `path`, interpreted as a file path, represents a file on disk,
/// then the signer is created by reading that file as a JSON-serialized
/// keypair. This is the same as the `file:` URI scheme.
///
/// [qs]: https://en.wikipedia.org/wiki/Query_string
/// [dp]: https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki
/// [URI]: https://en.wikipedia.org/wiki/Uniform_Resource_Identifier
/// ["hardened"]: https://wiki.trezor.io/Hardened_and_non-hardened_derivation
///
/// # Examples
///
/// This shows a reasonable way to set up clap to parse all possible signer
/// sources. Note the use of the [`OfflineArgs::offline_args`] method to add
/// correct clap definitions of the `--signer` and `--sign-only` arguments, as
/// required by the base-58 pubkey offline signing method.
///
/// [`OfflineArgs::offline_args`]: crate::offline::OfflineArgs::offline_args
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::signer_from_path;
/// use solana_clap_utils::offline::OfflineArgs;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
/// let mut wallet_manager = None;
/// let signer = signer_from_path(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path(
matches: &ArgMatches,
path: &str,
@@ -347,6 +692,63 @@ pub fn signer_from_path(
signer_from_path_with_config(matches, path, keypair_name, wallet_manager, &config)
}
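The URI-based signing sources documented above can be illustrated with a small parsing sketch. This is not the crate's actual parser (the real implementation uses a URI-parsing library and supports more forms, such as a single-index `key=`); it is a minimal, std-only sketch of how a `usb://ledger/<pubkey>?key=<account>/<change>` source might be decomposed. The type and function names here are hypothetical, not this crate's API.

```rust
/// Hypothetical, simplified decomposition of a signer-source URI like
/// `usb://ledger/9rPV...?key=0/0`. Plain string splitting for
/// illustration only; the real crate's parsing is more general.
#[derive(Debug, PartialEq)]
struct SignerSourceParts {
    scheme: String,
    host: Option<String>,           // e.g. "ledger" for `usb:` sources
    pubkey: Option<String>,         // optional base-58 wallet pubkey
    derivation: Option<(u32, u32)>, // (account, change) from `key=a/c`
}

fn parse_signer_uri(uri: &str) -> Option<SignerSourceParts> {
    let (scheme, rest) = uri.split_once("://")?;
    // Separate an optional `?key=a/c` query string from the location.
    let (location, query) = match rest.split_once('?') {
        Some((l, q)) => (l, Some(q)),
        None => (rest, None),
    };
    // First path segment is the host, second (if any) the pubkey.
    let mut segments = location.splitn(2, '/');
    let host = segments.next().filter(|s| !s.is_empty()).map(String::from);
    let pubkey = segments.next().filter(|s| !s.is_empty()).map(String::from);
    // Parse `key=account/change` into the two numeric indexes.
    let derivation = query
        .and_then(|q| q.strip_prefix("key="))
        .and_then(|kv| kv.split_once('/'))
        .and_then(|(a, c)| Some((a.parse::<u32>().ok()?, c.parse::<u32>().ok()?)));
    Some(SignerSourceParts {
        scheme: scheme.to_string(),
        host,
        pubkey,
        derivation,
    })
}
```

For example, `usb://ledger?key=0/0` would yield scheme `usb`, host `ledger`, no pubkey, and derivation `(0, 0)`.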
/// Loads a [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// This is the same as [`signer_from_path`] except that it additionally
/// accepts a [`SignerFromPathConfig`] argument.
///
/// If the `allow_null_signer` field of `config` is `true`, then pubkey signers
/// are allowed to have zero associated signatures via additional "signer"
/// command line arguments. It has the same effect as if the "sign_only" clap
/// argument is present.
///
/// See [`signer_from_path`] for full documentation of how this function
/// interprets its arguments.
///
/// # Examples
///
/// This shows a reasonable way to set up clap to parse all possible signer
/// sources. Note the use of the [`OfflineArgs::offline_args`] method to add
/// correct clap definitions of the `--signer` and `--sign-only` arguments, as
/// required by the base-58 pubkey offline signing method.
///
/// [`OfflineArgs::offline_args`]: crate::offline::OfflineArgs::offline_args
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::{signer_from_path_with_config, SignerFromPathConfig};
/// use solana_clap_utils::offline::OfflineArgs;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"))
/// .offline_args();
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
/// let mut wallet_manager = None;
///
/// // Allow pubkey signers without accompanying signatures
/// let config = SignerFromPathConfig {
/// allow_null_signer: true,
/// };
///
/// let signer = signer_from_path_with_config(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// &mut wallet_manager,
/// &config,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn signer_from_path_with_config(
matches: &ArgMatches,
path: &str,
@@ -417,6 +819,43 @@ pub fn signer_from_path_with_config(
}
}
/// Loads the pubkey of a [Signer] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// The only difference between this function and [`signer_from_path`] is in the
/// case of a "pubkey" path: this function does not require that accompanying
/// command line arguments contain an offline signature.
///
/// See [`signer_from_path`] for full documentation of how this function
/// interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::pubkey_from_path;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"));
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
/// let mut wallet_manager = None;
/// let pubkey = pubkey_from_path(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// &mut wallet_manager,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn pubkey_from_path(
matches: &ArgMatches,
path: &str,
@@ -516,7 +955,46 @@ pub fn prompt_passphrase(prompt: &str) -> Result<String, Box<dyn error::Error>>
Ok(passphrase)
}
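The note in the docs above about converting "normal" derivation indexes into "hardened" ones can be made concrete: in BIP32/BIP44, a hardened child index is simply the plain index with the high bit (2^31) set. A minimal sketch (the function name `harden` is illustrative, not this crate's API):

```rust
/// Convert a "normal" BIP32 child index into its "hardened" form by
/// setting the high bit (i.e. adding 2^31). Illustrative only.
fn harden(index: u32) -> u32 {
    index | 0x8000_0000
}
```

So a `key=0/0` query string ends up as the hardened components `0'/0'` of the derivation path.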
/// Parses a path into a SignerSource and returns a Keypair for supporting SignerSourceKinds
/// Loads a [Keypair] from one of several possible sources.
///
/// The `path` is not strictly a file system path, but is interpreted as various
/// types of _signing source_, depending on its format, one of which is a path
/// to a keypair file. Some sources may require user interaction in the course
/// of calling this function.
///
/// This is the same as [`signer_from_path`] except that it only supports
/// signing sources that can result in a [Keypair]: prompt for seed phrase,
/// keypair file, and stdin.
///
/// If `confirm_pubkey` is `true` then after deriving the pubkey, the user will
/// be prompted to confirm that the pubkey is as expected.
///
/// See [`signer_from_path`] for full documentation of how this function
/// interprets its arguments.
///
/// # Examples
///
/// ```no_run
/// use clap::{App, Arg, value_t_or_exit};
/// use solana_clap_utils::keypair::keypair_from_path;
///
/// let clap_app = App::new("my-program")
/// // The argument we'll parse as a signer "path"
/// .arg(Arg::with_name("keypair")
/// .required(true)
/// .help("The default signer"));
///
/// let clap_matches = clap_app.get_matches();
/// let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
///
/// let signer = keypair_from_path(
/// &clap_matches,
/// &keypair_str,
/// "keypair",
/// false,
/// )?;
/// # Ok::<(), Box<dyn std::error::Error>>(())
/// ```
pub fn keypair_from_path(
matches: &ArgMatches,
path: &str,
@@ -566,9 +1044,10 @@ pub fn keypair_from_path(
}
}
/// Reads user input from stdin to retrieve a seed phrase and passphrase for keypair derivation
/// Optionally skips validation of seed phrase
/// Optionally confirms recovered public key
/// Reads user input from stdin to retrieve a seed phrase and passphrase for keypair derivation.
///
/// Optionally skips validation of seed phrase. Optionally confirms recovered
/// public key.
pub fn keypair_from_seed_phrase(
keypair_name: &str,
skip_validation: bool,
@@ -645,9 +1124,13 @@ fn sanitize_seed_phrase(seed_phrase: &str) -> String {
#[cfg(test)]
mod tests {
use super::*;
use crate::offline::OfflineArgs;
use clap::{value_t_or_exit, App, Arg};
use solana_remote_wallet::locator::Manufacturer;
use solana_remote_wallet::remote_wallet::initialize_wallet_manager;
use solana_sdk::signer::keypair::write_keypair_file;
use solana_sdk::system_instruction;
use tempfile::NamedTempFile;
use tempfile::{NamedTempFile, TempDir};
#[test]
fn test_sanitize_seed_phrase() {
@@ -806,4 +1289,41 @@ mod tests {
} if p == relative_path_str)
);
}
#[test]
fn signer_from_path_with_file() -> Result<(), Box<dyn std::error::Error>> {
let dir = TempDir::new()?;
let dir = dir.path();
let keypair_path = dir.join("id.json");
let keypair_path_str = keypair_path.to_str().expect("utf-8");
let keypair = Keypair::new();
write_keypair_file(&keypair, &keypair_path)?;
let args = vec!["program", keypair_path_str];
let clap_app = App::new("my-program")
.arg(
Arg::with_name("keypair")
.required(true)
.help("The signing keypair"),
)
.offline_args();
let clap_matches = clap_app.get_matches_from(args);
let keypair_str = value_t_or_exit!(clap_matches, "keypair", String);
let wallet_manager = initialize_wallet_manager()?;
let signer = signer_from_path(
&clap_matches,
&keypair_str,
"signer",
&mut Some(wallet_manager),
)?;
assert_eq!(keypair.pubkey(), signer.pubkey());
Ok(())
}
}


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-cli-config"
description = "Blockchain, Rebuilt for Scale"
version = "1.7.14"
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-cli-output"
description = "Blockchain, Rebuilt for Scale"
version = "1.7.14"
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -20,12 +20,12 @@ indicatif = "0.15.0"
serde = "1.0.122"
serde_derive = "1.0.103"
serde_json = "1.0.56"
solana-account-decoder = { path = "../account-decoder", version = "=1.7.14" }
solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
solana-client = { path = "../client", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-transaction-status = { path = "../transaction-status", version = "=1.7.14" }
solana-vote-program = { path = "../programs/vote", version = "=1.7.14" }
solana-account-decoder = { path = "../account-decoder", version = "=1.8.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
solana-client = { path = "../client", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.8.2" }
solana-vote-program = { path = "../programs/vote", version = "=1.8.2" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
[package.metadata.docs.rs]


@@ -576,7 +576,7 @@ impl fmt::Display for CliValidators {
for (version, info) in self.stake_by_version.iter() {
writeln!(
f,
"{:<8} - {:3} current validators ({:>5.2}%){}",
"{:<8} - {:4} current validators ({:>5.2}%){}",
version,
info.current_validators,
100. * info.current_active_stake as f64 / self.total_active_stake as f64,


@@ -3,7 +3,7 @@ authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-cli"
description = "Blockchain, Rebuilt for Scale"
version = "1.7.14"
version = "1.8.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -26,33 +26,34 @@ humantime = "2.0.1"
num-traits = "0.2"
pretty-hex = "0.2.1"
reqwest = { version = "0.11.2", default-features = false, features = ["blocking", "rustls-tls", "json"] }
semver = "1.0.4"
serde = "1.0.122"
serde_derive = "1.0.103"
serde_json = "1.0.56"
solana-account-decoder = { path = "../account-decoder", version = "=1.7.14" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.7.14" }
solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
solana-cli-config = { path = "../cli-config", version = "=1.7.14" }
solana-cli-output = { path = "../cli-output", version = "=1.7.14" }
solana-client = { path = "../client", version = "=1.7.14" }
solana-config-program = { path = "../programs/config", version = "=1.7.14" }
solana-faucet = { path = "../faucet", version = "=1.7.14" }
solana-logger = { path = "../logger", version = "=1.7.14" }
solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
solana-account-decoder = { path = "../account-decoder", version = "=1.8.2" }
solana-bpf-loader-program = { path = "../programs/bpf_loader", version = "=1.8.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
solana-cli-config = { path = "../cli-config", version = "=1.8.2" }
solana-cli-output = { path = "../cli-output", version = "=1.8.2" }
solana-client = { path = "../client", version = "=1.8.2" }
solana-config-program = { path = "../programs/config", version = "=1.8.2" }
solana-faucet = { path = "../faucet", version = "=1.8.2" }
solana-logger = { path = "../logger", version = "=1.8.2" }
solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
solana_rbpf = "=0.2.11"
solana-remote-wallet = { path = "../remote-wallet", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-transaction-status = { path = "../transaction-status", version = "=1.7.14" }
solana-version = { path = "../version", version = "=1.7.14" }
solana-vote-program = { path = "../programs/vote", version = "=1.7.14" }
solana-remote-wallet = { path = "../remote-wallet", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.8.2" }
solana-version = { path = "../version", version = "=1.8.2" }
solana-vote-program = { path = "../programs/vote", version = "=1.8.2" }
spl-memo = { version = "=3.0.1", features = ["no-entrypoint"] }
thiserror = "1.0.21"
tiny-bip39 = "0.8.1"
url = "2.1.1"
[dev-dependencies]
solana-core = { path = "../core", version = "=1.7.14" }
solana-streamer = { path = "../streamer", version = "=1.7.14" }
solana-core = { path = "../core", version = "=1.8.2" }
solana-streamer = { path = "../streamer", version = "=1.8.2" }
tempfile = "3.1.0"
[[bin]]


@@ -1721,7 +1721,7 @@ pub fn process_show_stakes(
// Filter by `StakeState::Stake(_, _)`
rpc_filter::RpcFilterType::Memcmp(rpc_filter::Memcmp {
offset: 0,
bytes: rpc_filter::MemcmpEncodedBytes::Binary(
bytes: rpc_filter::MemcmpEncodedBytes::Base58(
bs58::encode([2, 0, 0, 0]).into_string(),
),
encoding: Some(rpc_filter::MemcmpEncoding::Binary),
@@ -1729,7 +1729,7 @@ pub fn process_show_stakes(
// Filter by `Delegation::voter_pubkey`, which begins at byte offset 124
rpc_filter::RpcFilterType::Memcmp(rpc_filter::Memcmp {
offset: 124,
bytes: rpc_filter::MemcmpEncodedBytes::Binary(
bytes: rpc_filter::MemcmpEncodedBytes::Base58(
vote_account_pubkeys[0].to_string(),
),
encoding: Some(rpc_filter::MemcmpEncoding::Binary),


@@ -18,7 +18,12 @@ use solana_sdk::{
pubkey::Pubkey,
transaction::Transaction,
};
use std::{collections::HashMap, fmt, sync::Arc};
use std::{
cmp::Ordering,
collections::{HashMap, HashSet},
fmt,
sync::Arc,
};
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum ForceActivation {
@@ -222,44 +227,103 @@ pub fn process_feature_subcommand(
}
}
fn active_stake_by_feature_set(rpc_client: &RpcClient) -> Result<HashMap<u32, f64>, ClientError> {
#[derive(Debug, Default)]
struct WorkingFeatureSetStatsEntry {
stake: u64,
rpc_nodes_count: u32,
software_versions: HashSet<Option<semver::Version>>,
}
type WorkingFeatureSetStats = HashMap<u32, WorkingFeatureSetStatsEntry>;
#[derive(Debug, Default)]
struct FeatureSetStatsEntry {
stake_percent: f64,
rpc_nodes_percent: f32,
software_versions: Vec<Option<semver::Version>>,
}
type FeatureSetStats = HashMap<u32, FeatureSetStatsEntry>;
fn feature_set_stats(rpc_client: &RpcClient) -> Result<FeatureSetStats, ClientError> {
// Validator identity -> feature set
let feature_set_map = rpc_client
let feature_sets = rpc_client
.get_cluster_nodes()?
.into_iter()
.map(|contact_info| (contact_info.pubkey, contact_info.feature_set))
.collect::<HashMap<_, _>>();
.map(|contact_info| {
(
contact_info.pubkey,
contact_info.feature_set,
contact_info.rpc.is_some(),
contact_info
.version
.and_then(|v| semver::Version::parse(&v).ok()),
)
})
.collect::<Vec<_>>();
let vote_accounts = rpc_client.get_vote_accounts()?;
let total_active_stake: u64 = vote_accounts
.current
let mut total_active_stake: u64 = vote_accounts
.delinquent
.iter()
.chain(vote_accounts.delinquent.iter())
.map(|vote_account| vote_account.activated_stake)
.sum();
// Sum all active stake by feature set
let mut active_stake_by_feature_set: HashMap<u32, u64> = HashMap::new();
for vote_account in vote_accounts.current {
if let Some(Some(feature_set)) = feature_set_map.get(&vote_account.node_pubkey) {
*active_stake_by_feature_set.entry(*feature_set).or_default() +=
vote_account.activated_stake;
} else {
*active_stake_by_feature_set
.entry(0 /* "unknown" */)
.or_default() += vote_account.activated_stake;
let vote_stakes = vote_accounts
.current
.into_iter()
.map(|vote_account| {
total_active_stake += vote_account.activated_stake;
(vote_account.node_pubkey, vote_account.activated_stake)
})
.collect::<HashMap<_, _>>();
let mut feature_set_stats: WorkingFeatureSetStats = HashMap::new();
let mut total_rpc_nodes = 0;
for (node_id, feature_set, is_rpc, version) in feature_sets {
let feature_set = feature_set.unwrap_or(0);
let feature_set_entry = feature_set_stats.entry(feature_set).or_default();
feature_set_entry.software_versions.insert(version);
if let Some(vote_stake) = vote_stakes.get(&node_id) {
feature_set_entry.stake += *vote_stake;
}
if is_rpc {
feature_set_entry.rpc_nodes_count += 1;
total_rpc_nodes += 1;
}
}
Ok(active_stake_by_feature_set
Ok(feature_set_stats
.into_iter()
.map(|(feature_set, active_stake)| {
(
.filter_map(
|(
feature_set,
active_stake as f64 * 100. / total_active_stake as f64,
)
})
WorkingFeatureSetStatsEntry {
stake,
rpc_nodes_count,
software_versions,
},
)| {
let stake_percent = (stake as f64 / total_active_stake as f64) * 100.;
let rpc_nodes_percent = (rpc_nodes_count as f32 / total_rpc_nodes as f32) * 100.;
let mut software_versions = software_versions.into_iter().collect::<Vec<_>>();
software_versions.sort();
if stake_percent >= 0.001 || rpc_nodes_percent >= 0.001 {
Some((
feature_set,
FeatureSetStatsEntry {
stake_percent,
rpc_nodes_percent,
software_versions,
},
))
} else {
None
}
},
)
.collect())
}
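The aggregation that `feature_set_stats` performs above — summing activated stake per feature set, bucketing nodes without a reported feature set under `0` ("unknown"), then expressing each bucket as a percentage of total stake — can be sketched independently of the RPC types. All names here are illustrative stand-ins, not the crate's types:

```rust
use std::collections::HashMap;

/// Sum stake per feature set and express each bucket as a percentage
/// of total stake. Nodes with no reported feature set are bucketed
/// under 0 ("unknown"), mirroring the logic above. Sketch only.
fn stake_percent_by_feature_set(
    nodes: &[(Option<u32>, u64)], // (feature_set, activated_stake)
) -> HashMap<u32, f64> {
    let total: u64 = nodes.iter().map(|(_, stake)| stake).sum();
    let mut by_set: HashMap<u32, u64> = HashMap::new();
    for (feature_set, stake) in nodes {
        *by_set.entry(feature_set.unwrap_or(0)).or_default() += stake;
    }
    by_set
        .into_iter()
        .map(|(set, stake)| (set, stake as f64 * 100. / total as f64))
        .collect()
}
```

The real function additionally tracks RPC-node counts and software versions per feature set, and filters out buckets below a tiny threshold.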
@@ -267,50 +331,164 @@ fn active_stake_by_feature_set(rpc_client: &RpcClient) -> Result<HashMap<u32, f6
fn feature_activation_allowed(rpc_client: &RpcClient, quiet: bool) -> Result<bool, ClientError> {
let my_feature_set = solana_version::Version::default().feature_set;
let active_stake_by_feature_set = active_stake_by_feature_set(rpc_client)?;
let feature_set_stats = feature_set_stats(rpc_client)?;
let feature_activation_allowed = active_stake_by_feature_set
let (stake_allowed, rpc_allowed) = feature_set_stats
.get(&my_feature_set)
.map(|percentage| *percentage >= 95.)
.unwrap_or(false);
.map(
|FeatureSetStatsEntry {
stake_percent,
rpc_nodes_percent,
..
}| (*stake_percent >= 95., *rpc_nodes_percent >= 95.),
)
.unwrap_or((false, false));
if !feature_activation_allowed && !quiet {
if active_stake_by_feature_set.get(&my_feature_set).is_none() {
if !stake_allowed && !rpc_allowed && !quiet {
if feature_set_stats.get(&my_feature_set).is_none() {
println!(
"{}",
style("To activate features the tool and cluster feature sets must match, select a tool version that matches the cluster")
.bold());
} else {
println!(
"{}",
style("To activate features the stake must be >= 95%").bold()
);
if !stake_allowed {
print!(
"\n{}",
style("To activate features the stake must be >= 95%")
.bold()
.red()
);
}
if !rpc_allowed {
print!(
"\n{}",
style("To activate features the RPC nodes must be >= 95%")
.bold()
.red()
);
}
}
println!(
"\n\n{}",
style(format!("Tool Feature Set: {}", my_feature_set)).bold()
);
let mut feature_set_stats = feature_set_stats.into_iter().collect::<Vec<_>>();
feature_set_stats.sort_by(|l, r| {
match l.1.software_versions[0]
.cmp(&r.1.software_versions[0])
.reverse()
{
Ordering::Equal => {
match l
.1
.stake_percent
.partial_cmp(&r.1.stake_percent)
.unwrap()
.reverse()
{
Ordering::Equal => {
l.1.rpc_nodes_percent
.partial_cmp(&r.1.rpc_nodes_percent)
.unwrap()
.reverse()
}
o => o,
}
}
o => o,
}
});
let software_versions_title = "Software Version";
let feature_set_title = "Feature Set";
let stake_percent_title = "Stake";
let rpc_percent_title = "RPC";
let mut stats_output = Vec::new();
let mut max_software_versions_len = software_versions_title.len();
let mut max_feature_set_len = feature_set_title.len();
let mut max_stake_percent_len = stake_percent_title.len();
let mut max_rpc_percent_len = rpc_percent_title.len();
for (
feature_set,
FeatureSetStatsEntry {
stake_percent,
rpc_nodes_percent,
software_versions,
},
) in feature_set_stats.into_iter()
{
let me = feature_set == my_feature_set;
let feature_set = if feature_set == 0 {
"unknown".to_string()
} else {
feature_set.to_string()
};
let stake_percent = format!("{:.2}%", stake_percent);
let rpc_percent = format!("{:.2}%", rpc_nodes_percent);
let mut has_unknown = false;
let mut software_versions = software_versions
.iter()
.filter_map(|v| {
if v.is_none() {
has_unknown = true;
}
v.as_ref()
})
.map(ToString::to_string)
.collect::<Vec<_>>();
if has_unknown {
software_versions.push("unknown".to_string());
}
let software_versions = software_versions.join(", ");
max_software_versions_len = max_software_versions_len.max(software_versions.len());
max_feature_set_len = max_feature_set_len.max(feature_set.len());
max_stake_percent_len = max_stake_percent_len.max(stake_percent.len());
max_rpc_percent_len = max_rpc_percent_len.max(rpc_percent.len());
stats_output.push((
software_versions,
feature_set,
stake_percent,
rpc_percent,
me,
));
}
println!(
"{}",
style(format!("Tool Feature Set: {}", my_feature_set)).bold()
style(format!(
"{1:<0$} {3:<2$} {5:<4$} {7:<6$}",
max_software_versions_len,
software_versions_title,
max_feature_set_len,
feature_set_title,
max_stake_percent_len,
stake_percent_title,
max_rpc_percent_len,
rpc_percent_title,
))
.bold(),
);
println!("{}", style("Cluster Feature Sets and Stakes:").bold());
for (feature_set, percentage) in active_stake_by_feature_set.iter() {
if *feature_set == 0 {
println!(" unknown - {:.2}%", percentage);
} else {
println!(
" {:<10} - {:.2}% {}",
feature_set,
percentage,
if *feature_set == my_feature_set {
" <-- me"
} else {
""
}
);
}
for (software_versions, feature_set, stake_percent, rpc_percent, me) in stats_output {
println!(
"{1:<0$} {3:>2$} {5:>4$} {7:>6$} {8}",
max_software_versions_len,
software_versions,
max_feature_set_len,
feature_set,
max_stake_percent_len,
stake_percent,
max_rpc_percent_len,
rpc_percent,
if me { "<-- me" } else { "" },
);
}
println!();
}
Ok(feature_activation_allowed)
Ok(stake_allowed && rpc_allowed)
}
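The table printing above sizes each column at runtime — it tracks the maximum width of every column's cells and header, then uses Rust's positional width format parameters (`{1:<0$}` etc.) to pad. A generic sketch of the same idea, with illustrative names and the simplifying assumption that every row has one cell per header:

```rust
/// Render rows as a left-aligned text table whose column widths are
/// the maximum cell width per column (header included), like the
/// feature-set table above. Assumes rows match the header count.
fn render_table(headers: &[&str], rows: &[Vec<String>]) -> Vec<String> {
    // Column width = max over the header and all cells in that column.
    let widths: Vec<usize> = headers
        .iter()
        .enumerate()
        .map(|(i, h)| {
            rows.iter()
                .map(|r| r[i].len())
                .max()
                .unwrap_or(0)
                .max(h.len())
        })
        .collect();
    let fmt_row = |cells: Vec<String>| {
        cells
            .iter()
            .zip(&widths)
            // Runtime width via the `width$` format parameter.
            .map(|(c, w)| format!("{:<width$}", c, width = *w))
            .collect::<Vec<_>>()
            .join("  ")
    };
    let mut out = Vec::new();
    out.push(fmt_row(headers.iter().map(|h| h.to_string()).collect()));
    for row in rows {
        out.push(fmt_row(row.clone()));
    }
    out
}
```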
fn status_from_account(account: Account) -> Option<CliFeatureStatus> {


@@ -12,9 +12,9 @@ use solana_account_decoder::{UiAccountEncoding, UiDataSliceConfig};
use solana_bpf_loader_program::{bpf_verifier, BpfError, ThisInstructionMeter};
use solana_clap_utils::{self, input_parsers::*, input_validators::*, keypair::*};
use solana_cli_output::{
display::new_spinner_progress_bar, CliProgram, CliProgramAccountType, CliProgramAuthority,
CliProgramBuffer, CliProgramId, CliUpgradeableBuffer, CliUpgradeableBuffers,
CliUpgradeableProgram, CliUpgradeableProgramClosed, CliUpgradeablePrograms,
CliProgram, CliProgramAccountType, CliProgramAuthority, CliProgramBuffer, CliProgramId,
CliUpgradeableBuffer, CliUpgradeableBuffers, CliUpgradeableProgram,
CliUpgradeableProgramClosed, CliUpgradeablePrograms,
};
use solana_client::{
client_error::ClientErrorKind,
@@ -22,8 +22,6 @@ use solana_client::{
rpc_config::RpcSendTransactionConfig,
rpc_config::{RpcAccountInfoConfig, RpcProgramAccountsConfig},
rpc_filter::{Memcmp, MemcmpEncodedBytes, RpcFilterType},
rpc_request::MAX_GET_SIGNATURE_STATUSES_QUERY_ITEMS,
rpc_response::Fees,
tpu_client::{TpuClient, TpuClientConfig},
};
use solana_rbpf::vm::{Config, Executable};
@@ -33,7 +31,6 @@ use solana_sdk::{
account_utils::StateMut,
bpf_loader, bpf_loader_deprecated,
bpf_loader_upgradeable::{self, UpgradeableLoaderState},
commitment_config::CommitmentConfig,
instruction::Instruction,
instruction::InstructionError,
loader_instruction,
@@ -42,24 +39,18 @@ use solana_sdk::{
packet::PACKET_DATA_SIZE,
pubkey::Pubkey,
signature::{keypair_from_seed, read_keypair_file, Keypair, Signature, Signer},
signers::Signers,
system_instruction::{self, SystemError},
system_program,
transaction::Transaction,
transaction::TransactionError,
};
use solana_transaction_status::TransactionConfirmationStatus;
use std::{
collections::HashMap,
error,
fs::File,
io::{Read, Write},
mem::size_of,
path::PathBuf,
str::FromStr,
sync::Arc,
thread::sleep,
time::Duration,
};
#[derive(Debug, PartialEq)]
@@ -385,6 +376,7 @@ impl ProgramSubCommands for App<'_, '_> {
.subcommand(
SubCommand::with_name("deploy")
.about("Deploy a program")
.setting(AppSettings::Hidden)
.arg(
Arg::with_name("program_location")
.index(1)
@@ -1146,18 +1138,18 @@ fn get_buffers(
) -> Result<CliUpgradeableBuffers, Box<dyn std::error::Error>> {
let mut filters = vec![RpcFilterType::Memcmp(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![1, 0, 0, 0]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![1, 0, 0, 0]).into_string()),
encoding: None,
})];
if let Some(authority_pubkey) = authority_pubkey {
filters.push(RpcFilterType::Memcmp(Memcmp {
offset: ACCOUNT_TYPE_SIZE,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![1]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![1]).into_string()),
encoding: None,
}));
filters.push(RpcFilterType::Memcmp(Memcmp {
offset: ACCOUNT_TYPE_SIZE + OPTION_SIZE,
bytes: MemcmpEncodedBytes::Binary(
bytes: MemcmpEncodedBytes::Base58(
bs58::encode(authority_pubkey.as_ref()).into_string(),
),
encoding: None,
@@ -1199,18 +1191,18 @@ fn get_programs(
) -> Result<CliUpgradeablePrograms, Box<dyn std::error::Error>> {
let mut filters = vec![RpcFilterType::Memcmp(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![3, 0, 0, 0]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![3, 0, 0, 0]).into_string()),
encoding: None,
})];
if let Some(authority_pubkey) = authority_pubkey {
filters.push(RpcFilterType::Memcmp(Memcmp {
offset: ACCOUNT_TYPE_SIZE + SLOT_SIZE,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![1]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![1]).into_string()),
encoding: None,
}));
filters.push(RpcFilterType::Memcmp(Memcmp {
offset: ACCOUNT_TYPE_SIZE + SLOT_SIZE + OPTION_SIZE,
bytes: MemcmpEncodedBytes::Binary(
bytes: MemcmpEncodedBytes::Base58(
bs58::encode(authority_pubkey.as_ref()).into_string(),
),
encoding: None,
@@ -1234,7 +1226,7 @@ fn get_programs(
bytes.extend_from_slice(programdata_address.as_ref());
let filters = vec![RpcFilterType::Memcmp(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(bytes).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(bytes).into_string()),
encoding: None,
})];
@@ -2018,7 +2010,13 @@ fn complete_partial_program_init(
return Err("Buffer account is already executable".into());
}
if account.owner != *loader_id && !system_program::check_id(&account.owner) {
return Err("Buffer account is already owned by another account".into());
return Err("Buffer account passed is already in use by another program".into());
}
if !account.data.is_empty() && account.data.len() < account_data_len {
return Err(
"Buffer account passed is not large enough, may have been for a different deploy?"
.into(),
);
}
if account.data.is_empty() && system_program::check_id(&account.owner) {
@@ -2029,24 +2027,24 @@ fn complete_partial_program_init(
if account.owner != *loader_id {
instructions.push(system_instruction::assign(elf_pubkey, loader_id));
}
}
if account.lamports < minimum_balance {
let balance = minimum_balance - account.lamports;
instructions.push(system_instruction::transfer(
payer_pubkey,
elf_pubkey,
balance,
));
balance_needed = balance;
} else if account.lamports > minimum_balance
&& system_program::check_id(&account.owner)
&& !allow_excessive_balance
{
return Err(format!(
"Buffer account has a balance: {:?}; it may already be in use",
Sol(account.lamports)
)
.into());
if account.lamports < minimum_balance {
let balance = minimum_balance - account.lamports;
instructions.push(system_instruction::transfer(
payer_pubkey,
elf_pubkey,
balance,
));
balance_needed = balance;
} else if account.lamports > minimum_balance
&& system_program::check_id(&account.owner)
&& !allow_excessive_balance
{
return Err(format!(
"Buffer account has a balance: {:?}; it may already be in use",
Sol(account.lamports)
)
.into());
}
}
Ok((instructions, balance_needed))
}
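The funding branch above either transfers the shortfall up to the rent-exempt minimum or rejects an over-funded, system-owned buffer unless `--allow-excessive-balance` was passed. A minimal sketch of that decision as a standalone pure function (a hypothetical helper for illustration, not the actual CLI code):

```rust
// Decide how many lamports to transfer into a buffer account, mirroring
// the branch in complete_partial_program_init: top up to the minimum,
// or reject a system-owned account holding more than the minimum.
fn balance_needed(
    lamports: u64,
    minimum_balance: u64,
    system_owned: bool,
    allow_excessive_balance: bool,
) -> Result<u64, String> {
    if lamports < minimum_balance {
        // Transfer only the shortfall, not the full minimum.
        Ok(minimum_balance - lamports)
    } else if lamports > minimum_balance && system_owned && !allow_excessive_balance {
        Err(format!(
            "Buffer account has a balance: {}; it may already be in use",
            lamports
        ))
    } else {
        // Exactly funded, or the excess is explicitly allowed.
        Ok(0)
    }
}

fn main() {
    assert_eq!(balance_needed(100, 500, true, false), Ok(400));
    assert_eq!(balance_needed(500, 500, true, false), Ok(0));
    assert!(balance_needed(900, 500, true, false).is_err());
    assert_eq!(balance_needed(900, 500, true, true), Ok(0));
    println!("ok");
}
```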
@@ -2109,29 +2107,29 @@ fn send_deploy_messages(
if let Some(write_messages) = write_messages {
if let Some(write_signer) = write_signer {
trace!("Writing program data");
let Fees {
blockhash,
last_valid_block_height,
..
} = rpc_client
.get_fees_with_commitment(config.commitment)?
.value;
let mut write_transactions = vec![];
for message in write_messages.iter() {
let mut tx = Transaction::new_unsigned(message.clone());
tx.try_sign(&[payer_signer, write_signer], blockhash)?;
write_transactions.push(tx);
}
send_and_confirm_transactions_with_spinner(
let tpu_client = TpuClient::new(
rpc_client.clone(),
&config.websocket_url,
write_transactions,
&[payer_signer, write_signer],
config.commitment,
last_valid_block_height,
)
.map_err(|err| format!("Data writes to account failed: {}", err))?;
TpuClientConfig::default(),
)?;
let transaction_errors = tpu_client
.send_and_confirm_messages_with_spinner(
write_messages,
&[payer_signer, write_signer],
)
.map_err(|err| format!("Data writes to account failed: {}", err))?
.into_iter()
.flatten()
.collect::<Vec<_>>();
if !transaction_errors.is_empty() {
for transaction_error in &transaction_errors {
error!("{:?}", transaction_error);
}
return Err(
format!("{} write transactions failed", transaction_errors.len()).into(),
);
}
}
}
@@ -2183,9 +2181,8 @@ fn report_ephemeral_mnemonic(words: usize, mnemonic: bip39::Mnemonic) {
words
);
eprintln!("{}\n{}\n{}", divider, phrase, divider);
eprintln!("To resume a deploy, pass the recovered keypair as");
eprintln!("the [PROGRAM_ADDRESS_SIGNER] argument to `solana deploy` or");
eprintln!("as the [BUFFER_SIGNER] to `solana program deploy` or `solana write-buffer'.");
eprintln!("To resume a deploy, pass the recovered keypair as the");
eprintln!("[BUFFER_SIGNER] to `solana program deploy` or `solana write-buffer'.");
eprintln!("Or to recover the account's lamports, pass it as the");
eprintln!(
"[BUFFER_ACCOUNT_ADDRESS] argument to `solana program close`.\n{}",
@@ -2193,134 +2190,6 @@ fn report_ephemeral_mnemonic(words: usize, mnemonic: bip39::Mnemonic) {
);
}
fn send_and_confirm_transactions_with_spinner<T: Signers>(
rpc_client: Arc<RpcClient>,
websocket_url: &str,
mut transactions: Vec<Transaction>,
signer_keys: &T,
commitment: CommitmentConfig,
mut last_valid_block_height: u64,
) -> Result<(), Box<dyn error::Error>> {
let progress_bar = new_spinner_progress_bar();
let mut send_retries = 5;
progress_bar.set_message("Finding leader nodes...");
let tpu_client = TpuClient::new(
rpc_client.clone(),
websocket_url,
TpuClientConfig::default(),
)?;
loop {
// Send all transactions
let mut pending_transactions = HashMap::new();
let num_transactions = transactions.len();
for transaction in transactions {
if !tpu_client.send_transaction(&transaction) {
let _result = rpc_client
.send_transaction_with_config(
&transaction,
RpcSendTransactionConfig {
preflight_commitment: Some(commitment.commitment),
..RpcSendTransactionConfig::default()
},
)
.ok();
}
pending_transactions.insert(transaction.signatures[0], transaction);
progress_bar.set_message(&format!(
"[{}/{}] Transactions sent",
pending_transactions.len(),
num_transactions
));
// Throttle transactions to about 100 TPS
sleep(Duration::from_millis(10));
}
// Collect statuses for all the transactions, drop those that are confirmed
loop {
let mut block_height = 0;
let pending_signatures = pending_transactions.keys().cloned().collect::<Vec<_>>();
for pending_signatures_chunk in
pending_signatures.chunks(MAX_GET_SIGNATURE_STATUSES_QUERY_ITEMS)
{
if let Ok(result) = rpc_client.get_signature_statuses(pending_signatures_chunk) {
let statuses = result.value;
for (signature, status) in
pending_signatures_chunk.iter().zip(statuses.into_iter())
{
if let Some(status) = status {
if let Some(confirmation_status) = &status.confirmation_status {
if *confirmation_status != TransactionConfirmationStatus::Processed
{
let _ = pending_transactions.remove(signature);
}
} else if status.confirmations.is_none()
|| status.confirmations.unwrap() > 1
{
let _ = pending_transactions.remove(signature);
}
}
}
}
block_height = rpc_client.get_block_height()?;
progress_bar.set_message(&format!(
"[{}/{}] Transactions confirmed. Retrying in {} blocks",
num_transactions - pending_transactions.len(),
num_transactions,
last_valid_block_height.saturating_sub(block_height)
));
}
if pending_transactions.is_empty() {
return Ok(());
}
if block_height > last_valid_block_height {
break;
}
for transaction in pending_transactions.values() {
if !tpu_client.send_transaction(transaction) {
let _result = rpc_client
.send_transaction_with_config(
transaction,
RpcSendTransactionConfig {
preflight_commitment: Some(commitment.commitment),
..RpcSendTransactionConfig::default()
},
)
.ok();
}
}
if cfg!(not(test)) {
// Retry twice a second
sleep(Duration::from_millis(500));
}
}
if send_retries == 0 {
return Err("Transactions failed".into());
}
send_retries -= 1;
// Re-sign any failed transactions with a new blockhash and retry
let Fees {
blockhash,
last_valid_block_height: new_last_valid_block_height,
..
} = rpc_client.get_fees_with_commitment(commitment)?.value;
last_valid_block_height = new_last_valid_block_height;
transactions = vec![];
for (_, mut transaction) in pending_transactions.into_iter() {
transaction.try_sign(signer_keys, blockhash)?;
transactions.push(transaction);
}
}
}
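The helper removed above (now superseded by `TpuClient::send_and_confirm_messages_with_spinner`) follows a common pattern: send everything, poll signature statuses, drop confirmed transactions, and stop retrying once the chain's block height passes the blockhash's `last_valid_block_height`. A minimal sketch of that confirmation loop with mocked types — not the real client API:

```rust
use std::collections::HashMap;

// Poll until every pending transaction confirms or the blockhash expires.
// `statuses` stands in for get_signature_statuses: Some(true) = confirmed.
fn confirm_loop(
    mut pending: HashMap<u64, &'static str>, // signature -> transaction (mock)
    statuses: &dyn Fn(u64) -> Option<bool>,
    mut block_height: u64,
    last_valid_block_height: u64,
) -> Result<(), String> {
    loop {
        // Drop every transaction the cluster reports as confirmed.
        pending.retain(|sig, _| statuses(*sig) != Some(true));
        if pending.is_empty() {
            return Ok(());
        }
        if block_height > last_valid_block_height {
            // Blockhash expired; the caller must re-sign and retry.
            return Err(format!("{} transactions expired", pending.len()));
        }
        // In the real loop this comes from rpc_client.get_block_height().
        block_height += 1;
    }
}

fn main() {
    let mut pending = HashMap::new();
    pending.insert(1u64, "tx1");
    pending.insert(2u64, "tx2");
    // tx1 confirms immediately, tx2 never does: the loop must expire.
    let partial = |sig: u64| if sig == 1 { Some(true) } else { None };
    assert!(confirm_loop(pending.clone(), &partial, 0, 3).is_err());
    // Everything confirms: the loop returns Ok.
    let all_ok = |_sig: u64| Some(true);
    assert!(confirm_loop(pending, &all_ok, 0, 3).is_ok());
    println!("ok");
}
```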
#[cfg(test)]
mod tests {
use super::*;


@@ -39,13 +39,11 @@ use solana_sdk::{
stake::{
self,
instruction::{self as stake_instruction, LockupArgs, StakeError},
state::{Authorized, Lockup, Meta, StakeAuthorize, StakeState},
state::{Authorized, Lockup, Meta, StakeActivationStatus, StakeAuthorize, StakeState},
},
stake_history::StakeHistory,
system_instruction::SystemError,
sysvar::{
clock,
stake_history::{self, StakeHistory},
},
sysvar::{clock, stake_history},
transaction::Transaction,
};
use solana_vote_program::vote_state::VoteState;
@@ -2020,7 +2018,11 @@ pub fn build_stake_state(
stake,
) => {
let current_epoch = clock.epoch;
let (active_stake, activating_stake, deactivating_stake) = stake
let StakeActivationStatus {
effective,
activating,
deactivating,
} = stake
.delegation
.stake_activating_and_deactivating(current_epoch, Some(stake_history));
let lockup = if lockup.is_in_force(clock, None) {
@@ -2055,9 +2057,9 @@ pub fn build_stake_state(
use_lamports_unit,
current_epoch,
rent_exempt_reserve: Some(*rent_exempt_reserve),
active_stake: u64_some_if_not_zero(active_stake),
activating_stake: u64_some_if_not_zero(activating_stake),
deactivating_stake: u64_some_if_not_zero(deactivating_stake),
active_stake: u64_some_if_not_zero(effective),
activating_stake: u64_some_if_not_zero(activating),
deactivating_stake: u64_some_if_not_zero(deactivating),
..CliStakeState::default()
}
}
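The change above replaces a positional tuple with a destructured named struct, and zero stake amounts are still mapped to `None` for display. A minimal sketch with standalone mock types (not the actual `solana_sdk` definitions):

```rust
// Mock of the named struct that replaces the old
// (active, activating, deactivating) tuple in build_stake_state.
struct StakeActivationStatus {
    effective: u64,
    activating: u64,
    deactivating: u64,
}

// Display helper: report zero amounts as None so they are omitted.
fn u64_some_if_not_zero(n: u64) -> Option<u64> {
    if n > 0 { Some(n) } else { None }
}

fn main() {
    let StakeActivationStatus { effective, activating, deactivating } =
        StakeActivationStatus { effective: 1_000, activating: 0, deactivating: 250 };
    assert_eq!(u64_some_if_not_zero(effective), Some(1_000));
    assert_eq!(u64_some_if_not_zero(activating), None);
    assert_eq!(u64_some_if_not_zero(deactivating), Some(250));
    println!("ok");
}
```

Destructuring by field name (rather than tuple position) makes call sites robust to field reordering, which is presumably why the SDK switched.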


@@ -5,3 +5,4 @@ cd "$(dirname "$0")"
make -C ../../../programs/bpf/c/
cp ../../../programs/bpf/c/out/noop.so .
cat noop.so noop.so noop.so > noop_large.so

cli/tests/fixtures/noop_large.so (new vendored binary file; contents not shown)


@@ -22,11 +22,11 @@ use std::{env, fs::File, io::Read, path::PathBuf, str::FromStr};
fn test_cli_program_deploy_non_upgradeable() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -37,7 +37,7 @@ fn test_cli_program_deploy_non_upgradeable() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let minimum_balance_for_rent_exemption = rpc_client
@@ -55,7 +55,7 @@ fn test_cli_program_deploy_non_upgradeable() {
process_command(&config).unwrap();
config.command = CliCommand::Deploy {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
address: None,
use_deprecated_loader: false,
allow_excessive_balance: false,
@@ -75,7 +75,7 @@ fn test_cli_program_deploy_non_upgradeable() {
assert_eq!(account0.lamports, minimum_balance_for_rent_exemption);
assert_eq!(account0.owner, bpf_loader::id());
assert!(account0.executable);
let mut file = File::open(pathbuf.to_str().unwrap().to_string()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap().to_string()).unwrap();
let mut elf = Vec::new();
file.read_to_end(&mut elf).unwrap();
assert_eq!(account0.data, elf);
@@ -84,7 +84,7 @@ fn test_cli_program_deploy_non_upgradeable() {
let custom_address_keypair = Keypair::new();
config.signers = vec![&keypair, &custom_address_keypair];
config.command = CliCommand::Deploy {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
address: Some(1),
use_deprecated_loader: false,
allow_excessive_balance: false,
@@ -111,7 +111,7 @@ fn test_cli_program_deploy_non_upgradeable() {
process_command(&config).unwrap();
config.signers = vec![&keypair, &custom_address_keypair];
config.command = CliCommand::Deploy {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
address: Some(1),
use_deprecated_loader: false,
allow_excessive_balance: false,
@@ -120,7 +120,7 @@ fn test_cli_program_deploy_non_upgradeable() {
// Use forcing parameter to deploy to account with excess balance
config.command = CliCommand::Deploy {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
address: Some(1),
use_deprecated_loader: false,
allow_excessive_balance: true,
@@ -139,11 +139,11 @@ fn test_cli_program_deploy_non_upgradeable() {
fn test_cli_program_deploy_no_authority() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -154,7 +154,7 @@ fn test_cli_program_deploy_no_authority() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -181,7 +181,7 @@ fn test_cli_program_deploy_no_authority() {
// Deploy a program
config.signers = vec![&keypair, &upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: None,
buffer_signer_index: None,
@@ -206,7 +206,7 @@ fn test_cli_program_deploy_no_authority() {
// Attempt to upgrade the program
config.signers = vec![&keypair, &upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: Some(program_id),
buffer_signer_index: None,
@@ -223,11 +223,11 @@ fn test_cli_program_deploy_no_authority() {
fn test_cli_program_deploy_with_authority() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -238,7 +238,7 @@ fn test_cli_program_deploy_with_authority() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -266,7 +266,7 @@ fn test_cli_program_deploy_with_authority() {
let program_keypair = Keypair::new();
config.signers = vec![&keypair, &upgrade_authority, &program_keypair];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: Some(2),
program_pubkey: Some(program_keypair.pubkey()),
buffer_signer_index: None,
@@ -313,7 +313,7 @@ fn test_cli_program_deploy_with_authority() {
// Deploy the upgradeable program
config.signers = vec![&keypair, &upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: None,
buffer_signer_index: None,
@@ -354,7 +354,7 @@ fn test_cli_program_deploy_with_authority() {
// Upgrade the program
config.signers = vec![&keypair, &upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: Some(program_pubkey),
buffer_signer_index: None,
@@ -408,7 +408,7 @@ fn test_cli_program_deploy_with_authority() {
// Upgrade with new authority
config.signers = vec![&keypair, &new_upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: Some(program_pubkey),
buffer_signer_index: None,
@@ -482,7 +482,7 @@ fn test_cli_program_deploy_with_authority() {
// Upgrade with no authority
config.signers = vec![&keypair, &new_upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: Some(program_pubkey),
buffer_signer_index: None,
@@ -497,7 +497,7 @@ fn test_cli_program_deploy_with_authority() {
// deploy with finality
config.signers = vec![&keypair, &new_upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: None,
buffer_signer_index: None,
@@ -556,11 +556,11 @@ fn test_cli_program_deploy_with_authority() {
fn test_cli_program_close_program() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -571,7 +571,7 @@ fn test_cli_program_close_program() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -599,7 +599,7 @@ fn test_cli_program_close_program() {
let program_keypair = Keypair::new();
config.signers = vec![&keypair, &upgrade_authority, &program_keypair];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: Some(2),
program_pubkey: Some(program_keypair.pubkey()),
buffer_signer_index: None,
@@ -638,11 +638,17 @@ fn test_cli_program_close_program() {
fn test_cli_program_write_buffer() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mut noop_large_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_large_path.push("tests");
noop_large_path.push("fixtures");
noop_large_path.push("noop_large");
noop_large_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -653,7 +659,7 @@ fn test_cli_program_write_buffer() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -681,7 +687,7 @@ fn test_cli_program_write_buffer() {
// Write a buffer with default params
config.signers = vec![&keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: None,
buffer_pubkey: None,
buffer_authority_signer_index: None,
@@ -715,7 +721,7 @@ fn test_cli_program_write_buffer() {
let buffer_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: None,
@@ -776,7 +782,7 @@ fn test_cli_program_write_buffer() {
let authority_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair, &authority_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: Some(2),
@@ -813,7 +819,7 @@ fn test_cli_program_write_buffer() {
let authority_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair, &authority_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: None,
buffer_pubkey: None,
buffer_authority_signer_index: Some(2),
@@ -885,7 +891,7 @@ fn test_cli_program_write_buffer() {
// Write a buffer with default params
config.signers = vec![&keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: None,
buffer_pubkey: None,
buffer_authority_signer_index: None,
@@ -919,17 +925,47 @@ fn test_cli_program_write_buffer() {
pre_lamports + minimum_balance_for_buffer,
recipient_account.lamports
);
// write small buffer then attempt to deploy larger program
let buffer_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: None,
max_len: None, //Some(max_len),
});
process_command(&config).unwrap();
config.signers = vec![&keypair, &buffer_keypair];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(noop_large_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: None,
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
allow_excessive_balance: false,
upgrade_authority_signer_index: 1,
is_final: true,
max_len: None,
});
config.output_format = OutputFormat::JsonCompact;
let error = process_command(&config).unwrap_err();
assert_eq!(
error.to_string(),
"Buffer account passed is not large enough, may have been for a different deploy?"
);
}
#[test]
fn test_cli_program_set_buffer_authority() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -940,7 +976,7 @@ fn test_cli_program_set_buffer_authority() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -964,7 +1000,7 @@ fn test_cli_program_set_buffer_authority() {
let buffer_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: None,
@@ -1039,11 +1075,11 @@ fn test_cli_program_set_buffer_authority() {
fn test_cli_program_mismatch_buffer_authority() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -1054,7 +1090,7 @@ fn test_cli_program_mismatch_buffer_authority() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -1079,7 +1115,7 @@ fn test_cli_program_mismatch_buffer_authority() {
let buffer_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair, &buffer_authority];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: Some(2),
@@ -1097,7 +1133,7 @@ fn test_cli_program_mismatch_buffer_authority() {
let upgrade_authority = Keypair::new();
config.signers = vec![&keypair, &upgrade_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: None,
buffer_signer_index: None,
@@ -1112,7 +1148,7 @@ fn test_cli_program_mismatch_buffer_authority() {
// Attempt to deploy matched authority
config.signers = vec![&keypair, &buffer_authority];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: None,
program_pubkey: None,
buffer_signer_index: None,
@@ -1129,11 +1165,11 @@ fn test_cli_program_mismatch_buffer_authority() {
fn test_cli_program_show() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -1144,7 +1180,7 @@ fn test_cli_program_show() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -1172,7 +1208,7 @@ fn test_cli_program_show() {
let authority_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair, &authority_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: Some(2),
@@ -1227,7 +1263,7 @@ fn test_cli_program_show() {
let program_keypair = Keypair::new();
config.signers = vec![&keypair, &authority_keypair, &program_keypair];
config.command = CliCommand::Program(ProgramCliCommand::Deploy {
program_location: Some(pathbuf.to_str().unwrap().to_string()),
program_location: Some(noop_path.to_str().unwrap().to_string()),
program_signer_index: Some(2),
program_pubkey: Some(program_keypair.pubkey()),
buffer_signer_index: None,
@@ -1314,11 +1350,11 @@ fn test_cli_program_show() {
fn test_cli_program_dump() {
solana_logger::setup();
let mut pathbuf = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
pathbuf.push("tests");
pathbuf.push("fixtures");
pathbuf.push("noop");
pathbuf.set_extension("so");
let mut noop_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
noop_path.push("tests");
noop_path.push("fixtures");
noop_path.push("noop");
noop_path.set_extension("so");
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
@@ -1329,7 +1365,7 @@ fn test_cli_program_dump() {
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
let mut file = File::open(pathbuf.to_str().unwrap()).unwrap();
let mut file = File::open(noop_path.to_str().unwrap()).unwrap();
let mut program_data = Vec::new();
file.read_to_end(&mut program_data).unwrap();
let max_len = program_data.len();
@@ -1357,7 +1393,7 @@ fn test_cli_program_dump() {
let authority_keypair = Keypair::new();
config.signers = vec![&keypair, &buffer_keypair, &authority_keypair];
config.command = CliCommand::Program(ProgramCliCommand::WriteBuffer {
program_location: pathbuf.to_str().unwrap().to_string(),
program_location: noop_path.to_str().unwrap().to_string(),
buffer_signer_index: Some(1),
buffer_pubkey: Some(buffer_keypair.pubkey()),
buffer_authority_signer_index: Some(2),


@@ -1,6 +1,6 @@
[package]
name = "solana-client"
version = "1.7.14"
version = "1.8.2"
description = "Solana Client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
@@ -24,14 +24,14 @@ semver = "0.11.0"
serde = "1.0.122"
serde_derive = "1.0.103"
serde_json = "1.0.56"
solana-account-decoder = { path = "../account-decoder", version = "=1.7.14" }
solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
solana-faucet = { path = "../faucet", version = "=1.7.14" }
solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-transaction-status = { path = "../transaction-status", version = "=1.7.14" }
solana-version = { path = "../version", version = "=1.7.14" }
solana-vote-program = { path = "../programs/vote", version = "=1.7.14" }
solana-account-decoder = { path = "../account-decoder", version = "=1.8.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
solana-faucet = { path = "../faucet", version = "=1.8.2" }
solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.8.2" }
solana-version = { path = "../version", version = "=1.8.2" }
solana-vote-program = { path = "../programs/vote", version = "=1.8.2" }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tungstenite = "0.10.1"
@@ -40,7 +40,7 @@ url = "2.1.1"
[dev-dependencies]
assert_matches = "1.3.0"
jsonrpc-http-server = "18.0.0"
solana-logger = { path = "../logger", version = "=1.7.14" }
solana-logger = { path = "../logger", version = "=1.8.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -18,5 +18,6 @@ pub mod rpc_filter;
pub mod rpc_request;
pub mod rpc_response;
pub mod rpc_sender;
pub mod spinner;
pub mod thin_client;
pub mod tpu_client;


@@ -146,8 +146,8 @@ impl RpcSender for MockSender {
value: serde_json::to_value(RpcFees {
blockhash: PUBKEY.to_string(),
fee_calculator: FeeCalculator::default(),
last_valid_slot: 42,
last_valid_block_height: 42,
last_valid_slot: 1234,
last_valid_block_height: 1234,
})
.unwrap(),
})?,


@@ -21,9 +21,9 @@ use {
rpc_request::{RpcError, RpcRequest, RpcResponseErrorData, TokenAccountsFilter},
rpc_response::*,
rpc_sender::*,
spinner,
},
bincode::serialize,
indicatif::{ProgressBar, ProgressStyle},
log::*,
serde_json::{json, Value},
solana_account_decoder::{
@@ -527,34 +527,50 @@ impl RpcClient {
Ok(request)
}
/// Check the confirmation status of a transaction.
/// Submit a transaction and wait for confirmation.
///
/// Returns `true` if the given transaction succeeded and has been committed
/// with the configured [commitment level][cl], which can be retrieved with
/// the [`commitment`](RpcClient::commitment) method.
/// Once this function returns successfully, the given transaction is
/// guaranteed to be processed with the configured [commitment level][cl].
///
/// [cl]: https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment
///
/// Note that this method does not wait for a transaction to be confirmed
/// &mdash; it only checks whether a transaction has been confirmed. To
/// submit a transaction and wait for it to confirm, use
/// [`send_and_confirm_transaction`][RpcClient::send_and_confirm_transaction].
/// After sending the transaction, this method polls in a loop for the
/// status of the transaction until it has been confirmed.
///
/// _This method returns `false` if the transaction failed, even if it has
/// been confirmed._
/// # Errors
///
/// If the transaction is not signed then an error with kind [`RpcError`] is
/// returned, containing an [`RpcResponseError`] with `code` set to
/// [`JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_VERIFICATION_FAILURE`].
///
/// If the preflight transaction simulation fails then an error with kind
/// [`RpcError`] is returned, containing an [`RpcResponseError`] with `code`
/// set to [`JSON_RPC_SERVER_ERROR_SEND_TRANSACTION_PREFLIGHT_FAILURE`].
///
/// If the receiving node is unhealthy, e.g. it is not fully synced to
/// the cluster, then an error with kind [`RpcError`] is returned,
/// containing an [`RpcResponseError`] with `code` set to
/// [`JSON_RPC_SERVER_ERROR_NODE_UNHEALTHY`].
///
/// [`RpcResponseError`]: RpcError::RpcResponseError
/// [`JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_VERIFICATION_FAILURE`]: crate::rpc_custom_error::JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_VERIFICATION_FAILURE
/// [`JSON_RPC_SERVER_ERROR_SEND_TRANSACTION_PREFLIGHT_FAILURE`]: crate::rpc_custom_error::JSON_RPC_SERVER_ERROR_SEND_TRANSACTION_PREFLIGHT_FAILURE
/// [`JSON_RPC_SERVER_ERROR_NODE_UNHEALTHY`]: crate::rpc_custom_error::JSON_RPC_SERVER_ERROR_NODE_UNHEALTHY
///
/// # RPC Reference
///
/// This method is built on the [`getSignatureStatuses`] RPC method.
/// This method is built on the [`sendTransaction`] RPC method, and the
/// [`getLatestBlockhash`] RPC method.
///
/// [`getSignatureStatuses`]: https://docs.solana.com/developing/clients/jsonrpc-api#getsignaturestatuses
/// [`sendTransaction`]: https://docs.solana.com/developing/clients/jsonrpc-api#sendtransaction
/// [`getLatestBlockhash`]: https://docs.solana.com/developing/clients/jsonrpc-api#getlatestblockhash
///
/// # Examples
///
/// ```
/// # use solana_client::{
/// # client_error::ClientError,
/// # rpc_client::RpcClient,
/// # client_error::ClientError,
/// # };
/// # use solana_sdk::{
/// # signature::Signer,
@@ -563,97 +579,110 @@ impl RpcClient {
/// # system_transaction,
/// # };
/// # let rpc_client = RpcClient::new_mock("succeeds".to_string());
/// // Transfer lamports from Alice to Bob and wait for confirmation
/// # let alice = Keypair::new();
/// # let bob = Keypair::new();
/// # let lamports = 50;
/// let (recent_blockhash, _) = rpc_client.get_recent_blockhash()?;
/// # let (recent_blockhash, _) = rpc_client.get_recent_blockhash()?;
/// let tx = system_transaction::transfer(&alice, &bob.pubkey(), lamports, recent_blockhash);
/// let signature = rpc_client.send_transaction(&tx)?;
///
/// loop {
/// let confirmed = rpc_client.confirm_transaction(&signature)?;
/// if confirmed {
/// break;
/// }
/// }
/// let signature = rpc_client.send_and_confirm_transaction(&tx)?;
/// # Ok::<(), ClientError>(())
/// ```
pub fn confirm_transaction(&self, signature: &Signature) -> ClientResult<bool> {
Ok(self
.confirm_transaction_with_commitment(signature, self.commitment())?
.value)
pub fn send_and_confirm_transaction(
&self,
transaction: &Transaction,
) -> ClientResult<Signature> {
const SEND_RETRIES: usize = 1;
const GET_STATUS_RETRIES: usize = usize::MAX;
'sending: for _ in 0..SEND_RETRIES {
let signature = self.send_transaction(transaction)?;
let recent_blockhash = if uses_durable_nonce(transaction).is_some() {
let (recent_blockhash, ..) = self
.get_recent_blockhash_with_commitment(CommitmentConfig::processed())?
.value;
recent_blockhash
} else {
transaction.message.recent_blockhash
};
for status_retry in 0..GET_STATUS_RETRIES {
match self.get_signature_status(&signature)? {
Some(Ok(_)) => return Ok(signature),
Some(Err(e)) => return Err(e.into()),
None => {
let fee_calculator = self
.get_fee_calculator_for_blockhash_with_commitment(
&recent_blockhash,
CommitmentConfig::processed(),
)?
.value;
if fee_calculator.is_none() {
// Block hash is not found for some reason
break 'sending;
} else if cfg!(not(test))
// Ignore sleep at last step.
&& status_retry < GET_STATUS_RETRIES
{
// Retry twice a second
sleep(Duration::from_millis(500));
continue;
}
}
}
}
}
Err(RpcError::ForUser(
"unable to confirm transaction. \
This can happen in situations such as transaction expiration \
and insufficient fee-payer funds"
.to_string(),
)
.into())
}
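The `send_and_confirm_transaction` body above polls the signature status roughly twice a second and gives up once the transaction's blockhash can no longer be found (i.e. has expired). The shape of that loop can be sketched std-only; `Status`, `poll_until_resolved`, and both closures are hypothetical stand-ins for the RPC calls, not solana_client API:

```rust
use std::{thread::sleep, time::Duration};

// Hypothetical stand-in for the RPC signature status.
enum Status {
    Confirmed,
    Failed(String),
    Pending,
}

/// Poll `get_status` twice a second until it resolves, giving up once
/// `blockhash_is_valid` reports the transaction's blockhash has expired.
fn poll_until_resolved(
    mut get_status: impl FnMut() -> Status,
    mut blockhash_is_valid: impl FnMut() -> bool,
) -> Result<(), String> {
    loop {
        match get_status() {
            Status::Confirmed => return Ok(()),
            Status::Failed(e) => return Err(e),
            Status::Pending => {
                // Mirrors the fee_calculator lookup: a missing blockhash
                // means the transaction can never confirm, so stop retrying.
                if !blockhash_is_valid() {
                    return Err("unable to confirm transaction: blockhash expired".into());
                }
                // Retry twice a second, as in the original loop.
                sleep(Duration::from_millis(500));
            }
        }
    }
}
```

The real method also distinguishes durable-nonce transactions, whose effective blockhash is fetched separately rather than read from the message.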
/// Check the confirmation status of a transaction.
///
/// Returns an [`RpcResult`] with value `true` if the given transaction
/// succeeded and has been committed with the given [commitment level][cl].
///
/// [cl]: https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment
///
/// Note that this method does not wait for a transaction to be confirmed
/// &mdash; it only checks whether a transaction has been confirmed. To
/// submit a transaction and wait for it to confirm, use
/// [`send_and_confirm_transaction`][RpcClient::send_and_confirm_transaction].
///
/// _This method returns an [`RpcResult`] with value `false` if the
/// transaction failed, even if it has been confirmed._
///
/// # RPC Reference
///
/// This method is built on the [`getSignatureStatuses`] RPC method.
///
/// [`getSignatureStatuses`]: https://docs.solana.com/developing/clients/jsonrpc-api#getsignaturestatuses
///
/// # Examples
///
/// ```
/// # use solana_client::{
/// # client_error::ClientError,
/// # rpc_client::RpcClient,
/// # };
/// # use solana_sdk::{
/// # commitment_config::CommitmentConfig,
/// # signature::Signer,
/// # signature::Signature,
/// # signer::keypair::Keypair,
/// # system_transaction,
/// # };
/// # use std::time::Duration;
/// # let rpc_client = RpcClient::new_mock("succeeds".to_string());
/// // Transfer lamports from Alice to Bob and wait for confirmation
/// # let alice = Keypair::new();
/// # let bob = Keypair::new();
/// # let lamports = 50;
/// let (recent_blockhash, _) = rpc_client.get_recent_blockhash()?;
/// let tx = system_transaction::transfer(&alice, &bob.pubkey(), lamports, recent_blockhash);
/// let signature = rpc_client.send_transaction(&tx)?;
///
/// loop {
/// let commitment_config = CommitmentConfig::processed();
/// let confirmed = rpc_client.confirm_transaction_with_commitment(&signature, commitment_config)?;
/// if confirmed.value {
/// break;
/// }
/// }
/// # Ok::<(), ClientError>(())
/// ```
pub fn confirm_transaction_with_commitment(
pub fn send_and_confirm_transaction_with_spinner(
&self,
signature: &Signature,
commitment_config: CommitmentConfig,
) -> RpcResult<bool> {
let Response { context, value } = self.get_signature_statuses(&[*signature])?;
transaction: &Transaction,
) -> ClientResult<Signature> {
self.send_and_confirm_transaction_with_spinner_and_commitment(
transaction,
self.commitment(),
)
}
Ok(Response {
context,
value: value[0]
.as_ref()
.filter(|result| result.satisfies_commitment(commitment_config))
.map(|result| result.status.is_ok())
.unwrap_or_default(),
})
pub fn send_and_confirm_transaction_with_spinner_and_commitment(
&self,
transaction: &Transaction,
commitment: CommitmentConfig,
) -> ClientResult<Signature> {
self.send_and_confirm_transaction_with_spinner_and_config(
transaction,
commitment,
RpcSendTransactionConfig {
preflight_commitment: Some(commitment.commitment),
..RpcSendTransactionConfig::default()
},
)
}
pub fn send_and_confirm_transaction_with_spinner_and_config(
&self,
transaction: &Transaction,
commitment: CommitmentConfig,
config: RpcSendTransactionConfig,
) -> ClientResult<Signature> {
let recent_blockhash = if uses_durable_nonce(transaction).is_some() {
self.get_recent_blockhash_with_commitment(CommitmentConfig::processed())?
.value
.0
} else {
transaction.message.recent_blockhash
};
let signature = self.send_transaction_with_config(transaction, config)?;
self.confirm_transaction_with_spinner(&signature, &recent_blockhash, commitment)?;
Ok(signature)
}
/// Submits a signed transaction to the network.
@@ -737,14 +766,6 @@ impl RpcClient {
)
}
fn default_cluster_transaction_encoding(&self) -> Result<UiTransactionEncoding, RpcError> {
if self.get_node_version()? < semver::Version::new(1, 3, 16) {
Ok(UiTransactionEncoding::Base58)
} else {
Ok(UiTransactionEncoding::Base64)
}
}
/// Submits a signed transaction to the network.
///
/// Before a transaction is processed, the receiving node runs a "preflight
@@ -890,6 +911,251 @@ impl RpcClient {
}
}
pub fn send<T>(&self, request: RpcRequest, params: Value) -> ClientResult<T>
where
T: serde::de::DeserializeOwned,
{
assert!(params.is_array() || params.is_null());
let response = self
.sender
.send(request, params)
.map_err(|err| err.into_with_request(request))?;
serde_json::from_value(response)
.map_err(|err| ClientError::new_with_request(err.into(), request))
}
/// Check the confirmation status of a transaction.
///
/// Returns `true` if the given transaction succeeded and has been committed
/// with the configured [commitment level][cl], which can be retrieved with
/// the [`commitment`](RpcClient::commitment) method.
///
/// [cl]: https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment
///
/// Note that this method does not wait for a transaction to be confirmed
/// &mdash; it only checks whether a transaction has been confirmed. To
/// submit a transaction and wait for it to confirm, use
/// [`send_and_confirm_transaction`][RpcClient::send_and_confirm_transaction].
///
/// _This method returns `false` if the transaction failed, even if it has
/// been confirmed._
///
/// # RPC Reference
///
/// This method is built on the [`getSignatureStatuses`] RPC method.
///
/// [`getSignatureStatuses`]: https://docs.solana.com/developing/clients/jsonrpc-api#getsignaturestatuses
///
/// # Examples
///
/// ```
/// # use solana_client::{
/// # client_error::ClientError,
/// # rpc_client::RpcClient,
/// # };
/// # use solana_sdk::{
/// # signature::Signer,
/// # signature::Signature,
/// # signer::keypair::Keypair,
/// # system_transaction,
/// # };
/// # let rpc_client = RpcClient::new_mock("succeeds".to_string());
/// // Transfer lamports from Alice to Bob and wait for confirmation
/// # let alice = Keypair::new();
/// # let bob = Keypair::new();
/// # let lamports = 50;
/// let (recent_blockhash, _) = rpc_client.get_recent_blockhash()?;
/// let tx = system_transaction::transfer(&alice, &bob.pubkey(), lamports, recent_blockhash);
/// let signature = rpc_client.send_transaction(&tx)?;
///
/// loop {
/// let confirmed = rpc_client.confirm_transaction(&signature)?;
/// if confirmed {
/// break;
/// }
/// }
/// # Ok::<(), ClientError>(())
/// ```
pub fn confirm_transaction(&self, signature: &Signature) -> ClientResult<bool> {
Ok(self
.confirm_transaction_with_commitment(signature, self.commitment())?
.value)
}
/// Check the confirmation status of a transaction.
///
/// Returns an [`RpcResult`] with value `true` if the given transaction
/// succeeded and has been committed with the given [commitment level][cl].
///
/// [cl]: https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment
///
/// Note that this method does not wait for a transaction to be confirmed
/// &mdash; it only checks whether a transaction has been confirmed. To
/// submit a transaction and wait for it to confirm, use
/// [`send_and_confirm_transaction`][RpcClient::send_and_confirm_transaction].
///
/// _This method returns an [`RpcResult`] with value `false` if the
/// transaction failed, even if it has been confirmed._
///
/// # RPC Reference
///
/// This method is built on the [`getSignatureStatuses`] RPC method.
///
/// [`getSignatureStatuses`]: https://docs.solana.com/developing/clients/jsonrpc-api#getsignaturestatuses
///
/// # Examples
///
/// ```
/// # use solana_client::{
/// # client_error::ClientError,
/// # rpc_client::RpcClient,
/// # };
/// # use solana_sdk::{
/// # commitment_config::CommitmentConfig,
/// # signature::Signer,
/// # signature::Signature,
/// # signer::keypair::Keypair,
/// # system_transaction,
/// # };
/// # use std::time::Duration;
/// # let rpc_client = RpcClient::new_mock("succeeds".to_string());
/// // Transfer lamports from Alice to Bob and wait for confirmation
/// # let alice = Keypair::new();
/// # let bob = Keypair::new();
/// # let lamports = 50;
/// let (recent_blockhash, _) = rpc_client.get_recent_blockhash()?;
/// let tx = system_transaction::transfer(&alice, &bob.pubkey(), lamports, recent_blockhash);
/// let signature = rpc_client.send_transaction(&tx)?;
///
/// loop {
/// let commitment_config = CommitmentConfig::processed();
/// let confirmed = rpc_client.confirm_transaction_with_commitment(&signature, commitment_config)?;
/// if confirmed.value {
/// break;
/// }
/// }
/// # Ok::<(), ClientError>(())
/// ```
pub fn confirm_transaction_with_commitment(
&self,
signature: &Signature,
commitment_config: CommitmentConfig,
) -> RpcResult<bool> {
let Response { context, value } = self.get_signature_statuses(&[*signature])?;
Ok(Response {
context,
value: value[0]
.as_ref()
.filter(|result| result.satisfies_commitment(commitment_config))
.map(|result| result.status.is_ok())
.unwrap_or_default(),
})
}
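The method above collapses the first entry of `getSignatureStatuses` with a chain of `Option` combinators: a missing status, a status below the requested commitment, or a failed transaction all collapse to `false`. A std-only sketch of that collapse (`TxStatus` and its fields are hypothetical stand-ins for the RPC response type):

```rust
// Hypothetical stand-in for the RPC transaction status.
struct TxStatus {
    confirmations_met: bool, // stands in for `satisfies_commitment(...)`
    succeeded: bool,         // stands in for `status.is_ok()`
}

/// Collapse an optional signature status into a bool the way
/// `confirm_transaction_with_commitment` does: filter out statuses below
/// the requested commitment, map to the success flag, default to false.
fn status_to_confirmed(status: Option<TxStatus>) -> bool {
    status
        .as_ref()
        .filter(|s| s.confirmations_met)
        .map(|s| s.succeeded)
        .unwrap_or_default()
}
```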
pub fn confirm_transaction_with_spinner(
&self,
signature: &Signature,
recent_blockhash: &Hash,
commitment: CommitmentConfig,
) -> ClientResult<()> {
let desired_confirmations = if commitment.is_finalized() {
MAX_LOCKOUT_HISTORY + 1
} else {
1
};
let mut confirmations = 0;
let progress_bar = spinner::new_progress_bar();
progress_bar.set_message(&format!(
"[{}/{}] Finalizing transaction {}",
confirmations, desired_confirmations, signature,
));
let now = Instant::now();
let confirm_transaction_initial_timeout = self
.config
.confirm_transaction_initial_timeout
.unwrap_or_default();
let (signature, status) = loop {
// Get recent commitment in order to count confirmations for successful transactions
let status = self
.get_signature_status_with_commitment(signature, CommitmentConfig::processed())?;
if status.is_none() {
let blockhash_not_found = self
.get_fee_calculator_for_blockhash_with_commitment(
recent_blockhash,
CommitmentConfig::processed(),
)?
.value
.is_none();
if blockhash_not_found && now.elapsed() >= confirm_transaction_initial_timeout {
break (signature, status);
}
} else {
break (signature, status);
}
if cfg!(not(test)) {
sleep(Duration::from_millis(500));
}
};
if let Some(result) = status {
if let Err(err) = result {
return Err(err.into());
}
} else {
return Err(RpcError::ForUser(
"unable to confirm transaction. \
This can happen in situations such as transaction expiration \
and insufficient fee-payer funds"
.to_string(),
)
.into());
}
let now = Instant::now();
loop {
// Return when specified commitment is reached
// Failed transactions have already been eliminated, `is_some` check is sufficient
if self
.get_signature_status_with_commitment(signature, commitment)?
.is_some()
{
progress_bar.set_message("Transaction confirmed");
progress_bar.finish_and_clear();
return Ok(());
}
progress_bar.set_message(&format!(
"[{}/{}] Finalizing transaction {}",
min(confirmations + 1, desired_confirmations),
desired_confirmations,
signature,
));
sleep(Duration::from_millis(500));
confirmations = self
.get_num_blocks_since_signature_confirmation(signature)
.unwrap_or(confirmations);
if now.elapsed().as_secs() >= MAX_HASH_AGE_IN_SECONDS as u64 {
return Err(
RpcError::ForUser("transaction not finalized. \
This can happen when a transaction lands in an abandoned fork. \
Please retry.".to_string()).into(),
);
}
}
}
fn default_cluster_transaction_encoding(&self) -> Result<UiTransactionEncoding, RpcError> {
if self.get_node_version()? < semver::Version::new(1, 3, 16) {
Ok(UiTransactionEncoding::Base58)
} else {
Ok(UiTransactionEncoding::Base64)
}
}
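The encoding choice above is a plain version gate: nodes older than 1.3.16 get base58, newer nodes get base64. The real code compares `semver::Version` values; the same ordering can be sketched with std tuples, which compare lexicographically:

```rust
/// Version-gated choice mirroring `default_cluster_transaction_encoding`:
/// tuples compare lexicographically, so (1, 3, 15) < (1, 3, 16).
fn pick_encoding(node_version: (u64, u64, u64)) -> &'static str {
    if node_version < (1, 3, 16) {
        "base58"
    } else {
        "base64"
    }
}
```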
/// Simulates sending a transaction.
///
/// If the transaction fails, then the [`err`] field of the returned
@@ -3223,121 +3489,6 @@ impl RpcClient {
self.send(RpcRequest::MinimumLedgerSlot, Value::Null)
}
/// Submit a transaction and wait for confirmation.
///
/// Once this function returns successfully, the given transaction is
/// guaranteed to be processed with the configured [commitment level][cl].
///
/// [cl]: https://docs.solana.com/developing/clients/jsonrpc-api#configuring-state-commitment
///
/// After sending the transaction, this method polls in a loop for the
/// status of the transaction until it has been confirmed.
///
/// # Errors
///
/// If the transaction is not signed then an error with kind [`RpcError`] is
/// returned, containing an [`RpcResponseError`] with `code` set to
/// [`JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_VERIFICATION_FAILURE`].
///
/// If the preflight transaction simulation fails then an error with kind
/// [`RpcError`] is returned, containing an [`RpcResponseError`] with `code`
/// set to [`JSON_RPC_SERVER_ERROR_SEND_TRANSACTION_PREFLIGHT_FAILURE`].
///
/// If the receiving node is unhealthy, e.g. it is not fully synced to
/// the cluster, then an error with kind [`RpcError`] is returned,
/// containing an [`RpcResponseError`] with `code` set to
/// [`JSON_RPC_SERVER_ERROR_NODE_UNHEALTHY`].
///
/// [`RpcResponseError`]: RpcError::RpcResponseError
/// [`JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_VERIFICATION_FAILURE`]: crate::rpc_custom_error::JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_VERIFICATION_FAILURE
/// [`JSON_RPC_SERVER_ERROR_SEND_TRANSACTION_PREFLIGHT_FAILURE`]: crate::rpc_custom_error::JSON_RPC_SERVER_ERROR_SEND_TRANSACTION_PREFLIGHT_FAILURE
/// [`JSON_RPC_SERVER_ERROR_NODE_UNHEALTHY`]: crate::rpc_custom_error::JSON_RPC_SERVER_ERROR_NODE_UNHEALTHY
///
/// # RPC Reference
///
/// This method is built on the [`sendTransaction`] RPC method, and the
/// [`getLatestBlockhash`] RPC method.
///
/// [`sendTransaction`]: https://docs.solana.com/developing/clients/jsonrpc-api#sendtransaction
/// [`getLatestBlockhash`]: https://docs.solana.com/developing/clients/jsonrpc-api#getlatestblockhash
///
/// # Examples
///
/// ```
/// # use solana_client::{
/// # rpc_client::RpcClient,
/// # client_error::ClientError,
/// # };
/// # use solana_sdk::{
/// # signature::Signer,
/// # signature::Signature,
/// # signer::keypair::Keypair,
/// # system_transaction,
/// # };
/// # let rpc_client = RpcClient::new_mock("succeeds".to_string());
/// # let alice = Keypair::new();
/// # let bob = Keypair::new();
/// # let lamports = 50;
/// # let recent_blockhash = rpc_client.get_recent_blockhash()?.0;
/// let tx = system_transaction::transfer(&alice, &bob.pubkey(), lamports, recent_blockhash);
/// let signature = rpc_client.send_and_confirm_transaction(&tx)?;
/// # Ok::<(), ClientError>(())
/// ```
pub fn send_and_confirm_transaction(
&self,
transaction: &Transaction,
) -> ClientResult<Signature> {
const SEND_RETRIES: usize = 1;
const GET_STATUS_RETRIES: usize = usize::MAX;
'sending: for _ in 0..SEND_RETRIES {
let signature = self.send_transaction(transaction)?;
let recent_blockhash = if uses_durable_nonce(transaction).is_some() {
let (recent_blockhash, ..) = self
.get_recent_blockhash_with_commitment(CommitmentConfig::processed())?
.value;
recent_blockhash
} else {
transaction.message.recent_blockhash
};
for status_retry in 0..GET_STATUS_RETRIES {
match self.get_signature_status(&signature)? {
Some(Ok(_)) => return Ok(signature),
Some(Err(e)) => return Err(e.into()),
None => {
let fee_calculator = self
.get_fee_calculator_for_blockhash_with_commitment(
&recent_blockhash,
CommitmentConfig::processed(),
)?
.value;
if fee_calculator.is_none() {
// Block hash is not found for some reason
break 'sending;
} else if cfg!(not(test))
// Ignore sleep at last step.
&& status_retry < GET_STATUS_RETRIES
{
// Retry twice a second
sleep(Duration::from_millis(500));
continue;
}
}
}
}
}
Err(RpcError::ForUser(
"unable to confirm transaction. \
This can happen in situations such as transaction expiration \
and insufficient fee-payer funds"
.to_string(),
)
.into())
}
/// Returns all information associated with the account of the provided pubkey.
///
/// This method uses the configured [commitment level][cl].
@@ -4541,157 +4692,6 @@ impl RpcClient {
Ok(confirmations)
}
pub fn send_and_confirm_transaction_with_spinner(
&self,
transaction: &Transaction,
) -> ClientResult<Signature> {
self.send_and_confirm_transaction_with_spinner_and_commitment(
transaction,
self.commitment(),
)
}
pub fn send_and_confirm_transaction_with_spinner_and_commitment(
&self,
transaction: &Transaction,
commitment: CommitmentConfig,
) -> ClientResult<Signature> {
self.send_and_confirm_transaction_with_spinner_and_config(
transaction,
commitment,
RpcSendTransactionConfig {
preflight_commitment: Some(commitment.commitment),
..RpcSendTransactionConfig::default()
},
)
}
pub fn send_and_confirm_transaction_with_spinner_and_config(
&self,
transaction: &Transaction,
commitment: CommitmentConfig,
config: RpcSendTransactionConfig,
) -> ClientResult<Signature> {
let recent_blockhash = if uses_durable_nonce(transaction).is_some() {
self.get_recent_blockhash_with_commitment(CommitmentConfig::processed())?
.value
.0
} else {
transaction.message.recent_blockhash
};
let signature = self.send_transaction_with_config(transaction, config)?;
self.confirm_transaction_with_spinner(&signature, &recent_blockhash, commitment)?;
Ok(signature)
}
pub fn confirm_transaction_with_spinner(
&self,
signature: &Signature,
recent_blockhash: &Hash,
commitment: CommitmentConfig,
) -> ClientResult<()> {
let desired_confirmations = if commitment.is_finalized() {
MAX_LOCKOUT_HISTORY + 1
} else {
1
};
let mut confirmations = 0;
let progress_bar = new_spinner_progress_bar();
progress_bar.set_message(&format!(
"[{}/{}] Finalizing transaction {}",
confirmations, desired_confirmations, signature,
));
let now = Instant::now();
let confirm_transaction_initial_timeout = self
.config
.confirm_transaction_initial_timeout
.unwrap_or_default();
let (signature, status) = loop {
// Get recent commitment in order to count confirmations for successful transactions
let status = self
.get_signature_status_with_commitment(signature, CommitmentConfig::processed())?;
if status.is_none() {
let blockhash_not_found = self
.get_fee_calculator_for_blockhash_with_commitment(
recent_blockhash,
CommitmentConfig::processed(),
)?
.value
.is_none();
if blockhash_not_found && now.elapsed() >= confirm_transaction_initial_timeout {
break (signature, status);
}
} else {
break (signature, status);
}
if cfg!(not(test)) {
sleep(Duration::from_millis(500));
}
};
if let Some(result) = status {
if let Err(err) = result {
return Err(err.into());
}
} else {
return Err(RpcError::ForUser(
"unable to confirm transaction. \
This can happen in situations such as transaction expiration \
and insufficient fee-payer funds"
.to_string(),
)
.into());
}
let now = Instant::now();
loop {
// Return when specified commitment is reached
// Failed transactions have already been eliminated, `is_some` check is sufficient
if self
.get_signature_status_with_commitment(signature, commitment)?
.is_some()
{
progress_bar.set_message("Transaction confirmed");
progress_bar.finish_and_clear();
return Ok(());
}
progress_bar.set_message(&format!(
"[{}/{}] Finalizing transaction {}",
min(confirmations + 1, desired_confirmations),
desired_confirmations,
signature,
));
sleep(Duration::from_millis(500));
confirmations = self
.get_num_blocks_since_signature_confirmation(signature)
.unwrap_or(confirmations);
if now.elapsed().as_secs() >= MAX_HASH_AGE_IN_SECONDS as u64 {
return Err(
RpcError::ForUser("transaction not finalized. \
This can happen when a transaction lands in an abandoned fork. \
Please retry.".to_string()).into(),
);
}
}
}
pub fn send<T>(&self, request: RpcRequest, params: Value) -> ClientResult<T>
where
T: serde::de::DeserializeOwned,
{
assert!(params.is_array() || params.is_null());
let response = self
.sender
.send(request, params)
.map_err(|err| err.into_with_request(request))?;
serde_json::from_value(response)
.map_err(|err| ClientError::new_with_request(err.into(), request))
}
pub fn get_transport_stats(&self) -> RpcTransportStats {
self.sender.get_transport_stats()
}
@@ -4725,14 +4725,6 @@ pub struct GetConfirmedSignaturesForAddress2Config {
pub commitment: Option<CommitmentConfig>,
}
fn new_spinner_progress_bar() -> ProgressBar {
let progress_bar = ProgressBar::new(42);
progress_bar
.set_style(ProgressStyle::default_spinner().template("{spinner:.green} {wide_msg}"));
progress_bar.enable_steady_tick(100);
progress_bar
}
fn get_rpc_request_str(rpc_addr: SocketAddr, tls: bool) -> String {
if tls {
format!("https://{}", rpc_addr)


@@ -19,6 +19,7 @@ pub const JSON_RPC_SERVER_ERROR_KEY_EXCLUDED_FROM_SECONDARY_INDEX: i64 = -32010;
pub const JSON_RPC_SERVER_ERROR_TRANSACTION_HISTORY_NOT_AVAILABLE: i64 = -32011;
pub const JSON_RPC_SCAN_ERROR: i64 = -32012;
pub const JSON_RPC_SERVER_ERROR_TRANSACTION_SIGNATURE_LEN_MISMATCH: i64 = -32013;
pub const JSON_RPC_SERVER_ERROR_BLOCK_STATUS_NOT_AVAILABLE_YET: i64 = -32014;
#[derive(Error, Debug)]
pub enum RpcCustomError {
@@ -54,6 +55,8 @@ pub enum RpcCustomError {
ScanError { message: String },
#[error("TransactionSignatureLenMismatch")]
TransactionSignatureLenMismatch,
#[error("BlockStatusNotAvailableYet")]
BlockStatusNotAvailableYet { slot: Slot },
}
#[derive(Debug, Serialize, Deserialize)]
@@ -161,6 +164,11 @@ impl From<RpcCustomError> for Error {
message: "Transaction signature length mismatch".to_string(),
data: None,
},
RpcCustomError::BlockStatusNotAvailableYet { slot } => Self {
code: ErrorCode::ServerError(JSON_RPC_SERVER_ERROR_BLOCK_STATUS_NOT_AVAILABLE_YET),
message: format!("Block status not yet available for slot {}", slot),
data: None,
},
}
}
}
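The new `BlockStatusNotAvailableYet` variant threads the slot into the JSON-RPC error message so callers can tell which block is pending. The code/message mapping from the diff can be sketched std-only (the constant value is the one declared above; the function name is illustrative):

```rust
// Value declared in rpc_custom_error.rs above.
const JSON_RPC_SERVER_ERROR_BLOCK_STATUS_NOT_AVAILABLE_YET: i64 = -32014;

/// Build the (code, message) pair emitted for the new error variant.
fn block_status_error(slot: u64) -> (i64, String) {
    (
        JSON_RPC_SERVER_ERROR_BLOCK_STATUS_NOT_AVAILABLE_YET,
        format!("Block status not yet available for slot {}", slot),
    )
}
```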


@@ -1,4 +1,9 @@
use thiserror::Error;
#![allow(deprecated)]
use {std::borrow::Cow, thiserror::Error};
const MAX_DATA_SIZE: usize = 128;
const MAX_DATA_BASE58_SIZE: usize = 175;
const MAX_DATA_BASE64_SIZE: usize = 172;
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
@@ -15,15 +20,50 @@ impl RpcFilterType {
let encoding = compare.encoding.as_ref().unwrap_or(&MemcmpEncoding::Binary);
match encoding {
MemcmpEncoding::Binary => {
let MemcmpEncodedBytes::Binary(bytes) = &compare.bytes;
if bytes.len() > 128 {
Err(RpcFilterError::Base58DataTooLarge)
} else {
bs58::decode(&bytes)
.into_vec()
.map(|_| ())
.map_err(|e| e.into())
use MemcmpEncodedBytes::*;
match &compare.bytes {
// DEPRECATED
Binary(bytes) => {
if bytes.len() > MAX_DATA_BASE58_SIZE {
return Err(RpcFilterError::Base58DataTooLarge);
}
let bytes = bs58::decode(&bytes)
.into_vec()
.map_err(RpcFilterError::DecodeError)?;
if bytes.len() > MAX_DATA_SIZE {
Err(RpcFilterError::Base58DataTooLarge)
} else {
Ok(())
}
}
Base58(bytes) => {
if bytes.len() > MAX_DATA_BASE58_SIZE {
return Err(RpcFilterError::DataTooLarge);
}
let bytes = bs58::decode(&bytes).into_vec()?;
if bytes.len() > MAX_DATA_SIZE {
Err(RpcFilterError::DataTooLarge)
} else {
Ok(())
}
}
Base64(bytes) => {
if bytes.len() > MAX_DATA_BASE64_SIZE {
return Err(RpcFilterError::DataTooLarge);
}
let bytes = base64::decode(&bytes)?;
if bytes.len() > MAX_DATA_SIZE {
Err(RpcFilterError::DataTooLarge)
} else {
Ok(())
}
}
Bytes(bytes) => {
if bytes.len() > MAX_DATA_SIZE {
return Err(RpcFilterError::DataTooLarge);
}
Ok(())
}
}
}
}
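The three size limits relate through encoding overhead: base64 packs 3 input bytes into 4 output characters (with padding), so 128 bytes encode to exactly 172 characters, and base58's worst case needs about log₅₈(256) ≈ 1.366 digits per byte, giving 175. A std-only sketch of both bounds (function names are illustrative, not solana_client API):

```rust
/// Padded base64 length for `n` input bytes: 4 chars per 3-byte group.
fn base64_encoded_len(n: usize) -> usize {
    (n + 2) / 3 * 4
}

/// Worst-case base58 length: each byte needs log_58(256) digits.
fn base58_max_encoded_len(n: usize) -> usize {
    ((n as f64) * (256f64).ln() / (58f64).ln()).ceil() as usize
}
```

These reproduce `MAX_DATA_BASE64_SIZE` (172) and `MAX_DATA_BASE58_SIZE` (175) for `MAX_DATA_SIZE` = 128, which is what the `test_worst_case_encoded_tx_goldens` test below checks against the real encoders.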
@@ -34,10 +74,24 @@ impl RpcFilterType {
#[derive(Error, PartialEq, Debug)]
pub enum RpcFilterError {
#[error("bs58 decode error")]
DecodeError(#[from] bs58::decode::Error),
#[error("encoded binary data should be less than 129 bytes")]
DataTooLarge,
#[deprecated(
since = "1.9.0",
note = "Error for MemcmpEncodedBytes::Binary which is deprecated"
)]
#[error("encoded binary (base 58) data should be less than 129 bytes")]
Base58DataTooLarge,
#[deprecated(
since = "1.9.0",
note = "Error for MemcmpEncodedBytes::Binary which is deprecated"
)]
#[error("bs58 decode error")]
DecodeError(bs58::decode::Error),
#[error("base58 decode error")]
Base58DecodeError(#[from] bs58::decode::Error),
#[error("base64 decode error")]
Base64DecodeError(#[from] base64::DecodeError),
}
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
@@ -49,7 +103,14 @@ pub enum MemcmpEncoding {
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "camelCase", untagged)]
pub enum MemcmpEncodedBytes {
#[deprecated(
since = "1.9.0",
note = "Please use MemcmpEncodedBytes::Base58 instead"
)]
Binary(String),
Base58(String),
Base64(String),
Bytes(Vec<u8>),
}
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
@@ -63,14 +124,18 @@ pub struct Memcmp {
}
impl Memcmp {
pub fn bytes_match(&self, data: &[u8]) -> bool {
pub fn bytes(&self) -> Option<Cow<Vec<u8>>> {
use MemcmpEncodedBytes::*;
match &self.bytes {
MemcmpEncodedBytes::Binary(bytes) => {
let bytes = bs58::decode(bytes).into_vec();
if bytes.is_err() {
return false;
}
let bytes = bytes.unwrap();
Binary(bytes) | Base58(bytes) => bs58::decode(bytes).into_vec().ok().map(Cow::Owned),
Base64(bytes) => base64::decode(bytes).ok().map(Cow::Owned),
Bytes(bytes) => Some(Cow::Borrowed(bytes)),
}
}
pub fn bytes_match(&self, data: &[u8]) -> bool {
match self.bytes() {
Some(bytes) => {
if self.offset > data.len() {
return false;
}
@@ -79,6 +144,7 @@ impl Memcmp {
}
data[self.offset..self.offset + bytes.len()] == bytes[..]
}
None => false,
}
}
}
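After the refactor, `bytes_match` reduces to an offset-guarded slice comparison over whatever bytes `self.bytes()` decoded. The comparison itself is std-only and can be sketched directly (the free function is illustrative; the real method takes the needle from the decoded filter):

```rust
/// Offset-guarded slice comparison, as in `Memcmp::bytes_match`: the
/// needle must fit entirely within `data` starting at `offset`.
fn bytes_match(data: &[u8], offset: usize, needle: &[u8]) -> bool {
    offset
        .checked_add(needle.len())       // guard against offset overflow
        .and_then(|end| data.get(offset..end)) // None if the window overruns
        .map_or(false, |window| window == needle)
}
```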
@@ -87,6 +153,15 @@ impl Memcmp {
mod tests {
use super::*;
#[test]
fn test_worst_case_encoded_tx_goldens() {
let ff_data = vec![0xffu8; MAX_DATA_SIZE];
let data58 = bs58::encode(&ff_data).into_string();
assert_eq!(data58.len(), MAX_DATA_BASE58_SIZE);
let data64 = base64::encode(&ff_data);
assert_eq!(data64.len(), MAX_DATA_BASE64_SIZE);
}
#[test]
fn test_bytes_match() {
let data = vec![1, 2, 3, 4, 5];
@@ -94,7 +169,7 @@ mod tests {
// Exact match of data succeeds
assert!(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![1, 2, 3, 4, 5]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![1, 2, 3, 4, 5]).into_string()),
encoding: None,
}
.bytes_match(&data));
@@ -102,7 +177,7 @@ mod tests {
// Partial match of data succeeds
assert!(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![1, 2]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![1, 2]).into_string()),
encoding: None,
}
.bytes_match(&data));
@@ -110,7 +185,7 @@ mod tests {
// Offset partial match of data succeeds
assert!(Memcmp {
offset: 2,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![3, 4]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![3, 4]).into_string()),
encoding: None,
}
.bytes_match(&data));
@@ -118,7 +193,7 @@ mod tests {
// Incorrect partial match of data fails
assert!(!Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![2]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![2]).into_string()),
encoding: None,
}
.bytes_match(&data));
@@ -126,7 +201,7 @@ mod tests {
// Bytes overrun data fails
assert!(!Memcmp {
offset: 2,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![3, 4, 5, 6]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![3, 4, 5, 6]).into_string()),
encoding: None,
}
.bytes_match(&data));
@@ -134,7 +209,7 @@ mod tests {
// Offset outside data fails
assert!(!Memcmp {
offset: 6,
bytes: MemcmpEncodedBytes::Binary(bs58::encode(vec![5]).into_string()),
bytes: MemcmpEncodedBytes::Base58(bs58::encode(vec![5]).into_string()),
encoding: None,
}
.bytes_match(&data));
@@ -142,7 +217,7 @@ mod tests {
// Invalid base-58 fails
assert!(!Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary("III".to_string()),
bytes: MemcmpEncodedBytes::Base58("III".to_string()),
encoding: None,
}
.bytes_match(&data));
@@ -157,7 +232,7 @@ mod tests {
assert_eq!(
RpcFilterType::Memcmp(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(base58_bytes.to_string()),
bytes: MemcmpEncodedBytes::Base58(base58_bytes.to_string()),
encoding: None,
})
.verify(),
@@ -172,11 +247,11 @@ mod tests {
assert_eq!(
RpcFilterType::Memcmp(Memcmp {
offset: 0,
bytes: MemcmpEncodedBytes::Binary(base58_bytes.to_string()),
bytes: MemcmpEncodedBytes::Base58(base58_bytes.to_string()),
encoding: None,
})
.verify(),
Err(RpcFilterError::Base58DataTooLarge)
Err(RpcFilterError::DataTooLarge)
);
}
}

client/src/spinner.rs Normal file
View File

@@ -0,0 +1,11 @@
//! Spinner creator
use indicatif::{ProgressBar, ProgressStyle};
pub(crate) fn new_progress_bar() -> ProgressBar {
let progress_bar = ProgressBar::new(42);
progress_bar
.set_style(ProgressStyle::default_spinner().template("{spinner:.green} {wide_msg}"));
progress_bar.enable_steady_tick(100);
progress_bar
}


@@ -1,12 +1,21 @@
use crate::{
client_error::ClientError,
pubsub_client::{PubsubClient, PubsubClientError, PubsubClientSubscription},
rpc_client::RpcClient,
rpc_response::SlotUpdate,
rpc_request::MAX_GET_SIGNATURE_STATUSES_QUERY_ITEMS,
rpc_response::{Fees, SlotUpdate},
spinner,
};
use bincode::serialize;
use log::*;
use solana_sdk::{
clock::Slot, commitment_config::CommitmentConfig, pubkey::Pubkey, transaction::Transaction,
clock::Slot,
commitment_config::CommitmentConfig,
message::Message,
pubkey::Pubkey,
signature::SignerError,
signers::Signers,
transaction::{Transaction, TransactionError},
};
use std::{
collections::{HashMap, HashSet, VecDeque},
@@ -16,7 +25,7 @@ use std::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
thread::JoinHandle,
thread::{sleep, JoinHandle},
time::{Duration, Instant},
};
use thiserror::Error;
@@ -26,9 +35,13 @@ pub enum TpuSenderError {
#[error("Pubsub error: {0:?}")]
PubsubError(#[from] PubsubClientError),
#[error("RPC error: {0:?}")]
RpcError(#[from] crate::client_error::ClientError),
RpcError(#[from] ClientError),
#[error("IO error: {0:?}")]
IoError(#[from] std::io::Error),
#[error("Signer error: {0:?}")]
SignerError(#[from] SignerError),
#[error("Custom error: {0}")]
Custom(String),
}
type Result<T> = std::result::Result<T, TpuSenderError>;
@@ -62,6 +75,7 @@ pub struct TpuClient {
fanout_slots: u64,
leader_tpu_service: LeaderTpuService,
exit: Arc<AtomicBool>,
rpc_client: Arc<RpcClient>,
}
impl TpuClient {
@@ -96,15 +110,161 @@ impl TpuClient {
config: TpuClientConfig,
) -> Result<Self> {
let exit = Arc::new(AtomicBool::new(false));
let leader_tpu_service = LeaderTpuService::new(rpc_client, websocket_url, exit.clone())?;
let leader_tpu_service =
LeaderTpuService::new(rpc_client.clone(), websocket_url, exit.clone())?;
Ok(Self {
send_socket: UdpSocket::bind("0.0.0.0:0").unwrap(),
fanout_slots: config.fanout_slots.min(MAX_FANOUT_SLOTS).max(1),
leader_tpu_service,
exit,
rpc_client,
})
}
pub fn send_and_confirm_messages_with_spinner<T: Signers>(
&self,
messages: &[Message],
signers: &T,
) -> Result<Vec<Option<TransactionError>>> {
let mut expired_blockhash_retries = 5;
/* Send at ~100 TPS */
const SEND_TRANSACTION_INTERVAL: Duration = Duration::from_millis(10);
/* Retry batch send after 4 seconds */
const TRANSACTION_RESEND_INTERVAL: Duration = Duration::from_secs(4);
let progress_bar = spinner::new_progress_bar();
progress_bar.set_message("Setting up...");
let mut transactions = messages
.iter()
.enumerate()
.map(|(i, message)| (i, Transaction::new_unsigned(message.clone())))
.collect::<Vec<_>>();
let num_transactions = transactions.len() as f64;
let mut transaction_errors = vec![None; transactions.len()];
let set_message = |confirmed_transactions,
block_height: Option<u64>,
last_valid_block_height: u64,
status: &str| {
progress_bar.set_message(&format!(
"{:>5.1}% | {:<40}{}",
confirmed_transactions as f64 * 100. / num_transactions,
status,
match block_height {
Some(block_height) => format!(
" [block height {}; re-sign in {} blocks]",
block_height,
last_valid_block_height.saturating_sub(block_height),
),
None => String::new(),
},
));
};
let mut confirmed_transactions = 0;
let mut block_height = self.rpc_client.get_block_height()?;
while expired_blockhash_retries > 0 {
let Fees {
blockhash,
fee_calculator: _,
last_valid_block_height,
} = self.rpc_client.get_fees()?;
let mut pending_transactions = HashMap::new();
for (i, mut transaction) in transactions {
transaction.try_sign(signers, blockhash)?;
pending_transactions.insert(transaction.signatures[0], (i, transaction));
}
let mut last_resend = Instant::now() - TRANSACTION_RESEND_INTERVAL;
while block_height <= last_valid_block_height {
let num_transactions = pending_transactions.len();
// Periodically re-send all pending transactions
if Instant::now().duration_since(last_resend) > TRANSACTION_RESEND_INTERVAL {
for (index, (_i, transaction)) in pending_transactions.values().enumerate() {
if !self.send_transaction(transaction) {
let _result = self.rpc_client.send_transaction(transaction).ok();
}
set_message(
confirmed_transactions,
None, //block_height,
last_valid_block_height,
&format!("Sending {}/{} transactions", index + 1, num_transactions,),
);
sleep(SEND_TRANSACTION_INTERVAL);
}
last_resend = Instant::now();
}
// Wait for the next block before checking for transaction statuses
let mut block_height_refreshes = 10;
set_message(
confirmed_transactions,
Some(block_height),
last_valid_block_height,
&format!("Waiting for next block, {} pending...", num_transactions),
);
let mut new_block_height = block_height;
while block_height == new_block_height && block_height_refreshes > 0 {
sleep(Duration::from_millis(500));
new_block_height = self.rpc_client.get_block_height()?;
block_height_refreshes -= 1;
}
block_height = new_block_height;
// Collect statuses for the transactions, drop those that are confirmed
let pending_signatures = pending_transactions.keys().cloned().collect::<Vec<_>>();
for pending_signatures_chunk in
pending_signatures.chunks(MAX_GET_SIGNATURE_STATUSES_QUERY_ITEMS)
{
if let Ok(result) = self
.rpc_client
.get_signature_statuses(pending_signatures_chunk)
{
let statuses = result.value;
for (signature, status) in
pending_signatures_chunk.iter().zip(statuses.into_iter())
{
if let Some(status) = status {
if status.satisfies_commitment(self.rpc_client.commitment()) {
if let Some((i, _)) = pending_transactions.remove(signature) {
confirmed_transactions += 1;
if status.err.is_some() {
progress_bar.println(format!(
"Failed transaction: {:?}",
status
));
}
transaction_errors[i] = status.err;
}
}
}
}
}
set_message(
confirmed_transactions,
Some(block_height),
last_valid_block_height,
"Checking transaction status...",
);
}
if pending_transactions.is_empty() {
return Ok(transaction_errors);
}
}
transactions = pending_transactions.into_iter().map(|(_k, v)| v).collect();
progress_bar.println(format!(
"Blockhash expired. {} retries remaining",
expired_blockhash_retries
));
expired_blockhash_retries -= 1;
}
Err(TpuSenderError::Custom("Max retries exceeded".into()))
}
}
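The control flow of `send_and_confirm_messages_with_spinner` above is two nested loops: an outer loop bounded by `expired_blockhash_retries` that re-signs everything still pending under a fresh blockhash, and an inner loop that resends and polls statuses until either all transactions confirm or the blockhash passes `last_valid_block_height`. A minimal, self-contained sketch of that structure (all names and the `try_confirm` closure are stand-ins, not the real `TpuClient` API):

```rust
// Stand-in for the two-level retry loop: keep resending until everything
// confirms, re-signing under a new blockhash when the old one expires.
fn send_until_confirmed(
    mut pending: Vec<u32>,                   // stand-in for pending_transactions
    mut try_confirm: impl FnMut(u32) -> bool, // stand-in for status polling
) -> Result<(), String> {
    let mut expired_blockhash_retries = 5;
    while expired_blockhash_retries > 0 {
        // stand-in for "while block_height <= last_valid_block_height"
        for _block in 0..3 {
            // resend + poll; drop transactions that have confirmed
            pending.retain(|&tx| !try_confirm(tx));
            if pending.is_empty() {
                return Ok(());
            }
        }
        // blockhash expired: re-sign the survivors and try again
        expired_blockhash_retries -= 1;
    }
    Err("Max retries exceeded".into())
}

fn main() {
    use std::collections::HashMap;
    // each transaction "confirms" on its second send attempt
    let mut attempts: HashMap<u32, u32> = HashMap::new();
    let result = send_until_confirmed(vec![1, 2, 3], |tx| {
        let n = attempts.entry(tx).or_insert(0);
        *n += 1;
        *n >= 2
    });
    assert!(result.is_ok());
}
```

The real method additionally paces sends (`SEND_TRANSACTION_INTERVAL`), batches resends every `TRANSACTION_RESEND_INTERVAL`, and polls statuses in chunks of `MAX_GET_SIGNATURE_STATUSES_QUERY_ITEMS`; the sketch keeps only the retry skeleton.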
impl Drop for TpuClient {


@@ -1,7 +1,7 @@
[package]
name = "solana-core"
description = "Blockchain, Rebuilt for Scale"
version = "1.7.14"
version = "1.8.2"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-core"
readme = "../README.md"
@@ -27,13 +27,14 @@ ed25519-dalek = "=1.0.1"
fs_extra = "1.2.0"
flate2 = "1.0"
indexmap = { version = "1.5", features = ["rayon"] }
itertools = "0.9.0"
libc = "0.2.81"
log = "0.4.11"
lru = "0.6.1"
miow = "0.2.2"
net2 = "0.2.37"
num-traits = "0.2"
histogram = "0.6.9"
itertools = "0.10.1"
log = "0.4.14"
lru = "0.6.6"
rand = "0.7.0"
rand_chacha = "0.2.2"
rand_core = "0.6.2"
@@ -43,33 +44,34 @@ retain_mut = "0.1.2"
serde = "1.0.122"
serde_bytes = "0.11"
serde_derive = "1.0.103"
solana-account-decoder = { path = "../account-decoder", version = "=1.7.14" }
solana-banks-server = { path = "../banks-server", version = "=1.7.14" }
solana-clap-utils = { path = "../clap-utils", version = "=1.7.14" }
solana-client = { path = "../client", version = "=1.7.14" }
solana-gossip = { path = "../gossip", version = "=1.7.14" }
solana-ledger = { path = "../ledger", version = "=1.7.14" }
solana-logger = { path = "../logger", version = "=1.7.14" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.7.14" }
solana-metrics = { path = "../metrics", version = "=1.7.14" }
solana-measure = { path = "../measure", version = "=1.7.14" }
solana-net-utils = { path = "../net-utils", version = "=1.7.14" }
solana-perf = { path = "../perf", version = "=1.7.14" }
solana-poh = { path = "../poh", version = "=1.7.14" }
solana-program-test = { path = "../program-test", version = "=1.7.14" }
solana-rpc = { path = "../rpc", version = "=1.7.14" }
solana-runtime = { path = "../runtime", version = "=1.7.14" }
solana-sdk = { path = "../sdk", version = "=1.7.14" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.7.14" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.7.14" }
solana-streamer = { path = "../streamer", version = "=1.7.14" }
solana-transaction-status = { path = "../transaction-status", version = "=1.7.14" }
solana-version = { path = "../version", version = "=1.7.14" }
solana-vote-program = { path = "../programs/vote", version = "=1.7.14" }
solana-account-decoder = { path = "../account-decoder", version = "=1.8.2" }
solana-accountsdb-plugin-manager = { path = "../accountsdb-plugin-manager", version = "=1.8.2" }
solana-banks-server = { path = "../banks-server", version = "=1.8.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.8.2" }
solana-client = { path = "../client", version = "=1.8.2" }
solana-gossip = { path = "../gossip", version = "=1.8.2" }
solana-ledger = { path = "../ledger", version = "=1.8.2" }
solana-logger = { path = "../logger", version = "=1.8.2" }
solana-merkle-tree = { path = "../merkle-tree", version = "=1.8.2" }
solana-metrics = { path = "../metrics", version = "=1.8.2" }
solana-measure = { path = "../measure", version = "=1.8.2" }
solana-net-utils = { path = "../net-utils", version = "=1.8.2" }
solana-perf = { path = "../perf", version = "=1.8.2" }
solana-poh = { path = "../poh", version = "=1.8.2" }
solana-program-test = { path = "../program-test", version = "=1.8.2" }
solana-rpc = { path = "../rpc", version = "=1.8.2" }
solana-runtime = { path = "../runtime", version = "=1.8.2" }
solana-sdk = { path = "../sdk", version = "=1.8.2" }
solana-frozen-abi = { path = "../frozen-abi", version = "=1.8.2" }
solana-frozen-abi-macro = { path = "../frozen-abi/macro", version = "=1.8.2" }
solana-streamer = { path = "../streamer", version = "=1.8.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.8.2" }
solana-version = { path = "../version", version = "=1.8.2" }
solana-vote-program = { path = "../programs/vote", version = "=1.8.2" }
spl-token-v2-0 = { package = "spl-token", version = "=3.2.0", features = ["no-entrypoint"] }
tempfile = "3.1.0"
thiserror = "1.0"
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.7.14" }
solana-rayon-threadlimit = { path = "../rayon-threadlimit", version = "=1.8.2" }
trees = "0.2.1"
[dev-dependencies]
@@ -82,8 +84,8 @@ num_cpus = "1.13.0"
reqwest = { version = "0.11.2", default-features = false, features = ["blocking", "rustls-tls", "json"] }
serde_json = "1.0.56"
serial_test = "0.4.0"
solana-stake-program = { path = "../programs/stake", version = "=1.7.14" }
solana-version = { path = "../version", version = "=1.7.14" }
solana-stake-program = { path = "../programs/stake", version = "=1.8.2" }
solana-version = { path = "../version", version = "=1.8.2" }
symlink = "0.1.0"
systemstat = "0.1.5"
tokio = { version = "1", features = ["full"] }


@@ -18,6 +18,7 @@ use solana_perf::packet::to_packets_chunked;
use solana_perf::test_tx::test_tx;
use solana_poh::poh_recorder::{create_test_recorder, WorkingBankEntry};
use solana_runtime::bank::Bank;
use solana_runtime::cost_model::CostModel;
use solana_sdk::genesis_config::GenesisConfig;
use solana_sdk::hash::Hash;
use solana_sdk::message::Message;
@@ -33,7 +34,7 @@ use solana_streamer::socket::SocketAddrSpace;
use std::collections::VecDeque;
use std::sync::atomic::Ordering;
use std::sync::mpsc::Receiver;
use std::sync::Arc;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};
use test::Bencher;
@@ -92,6 +93,7 @@ fn bench_consume_buffered(bencher: &mut Bencher) {
None::<Box<dyn Fn()>>,
&BankingStageStats::default(),
&recorder,
&Arc::new(RwLock::new(CostModel::default())),
);
});
@@ -165,6 +167,11 @@ fn bench_banking(bencher: &mut Bencher, tx_type: TransactionType) {
bank.ns_per_slot = std::u128::MAX;
let bank = Arc::new(Bank::new(&genesis_config));
// set cost tracker limits to MAX so it will not filter out TXs
bank.write_cost_tracker()
.unwrap()
.set_limits(std::u64::MAX, std::u64::MAX);
debug!("threads: {} txs: {}", num_threads, txes);
let transactions = match tx_type {
@@ -218,6 +225,7 @@ fn bench_banking(bencher: &mut Bencher, tx_type: TransactionType) {
vote_receiver,
None,
s,
Arc::new(RwLock::new(CostModel::default())),
);
poh_recorder.lock().unwrap().set_bank(&bank);


@@ -75,9 +75,9 @@ fn broadcast_shreds_bench(bencher: &mut Bencher) {
&cluster_nodes_cache,
&last_datapoint,
&mut TransmitShredsStats::default(),
&SocketAddrSpace::Unspecified,
&cluster_info,
&bank_forks,
&SocketAddrSpace::Unspecified,
)
.unwrap();
});


@@ -24,6 +24,8 @@ use solana_runtime::{
TransactionExecutionResult,
},
bank_utils,
cost_model::CostModel,
cost_tracker::CostTracker,
hashed_transaction::HashedTransaction,
transaction_batch::TransactionBatch,
vote_sender_types::ReplayVoteSender,
@@ -33,6 +35,7 @@ use solana_sdk::{
Slot, DEFAULT_TICKS_PER_SLOT, MAX_PROCESSING_AGE, MAX_TRANSACTION_FORWARDING_DELAY,
MAX_TRANSACTION_FORWARDING_DELAY_GPU,
},
feature_set,
message::Message,
pubkey::Pubkey,
short_vec::decode_shortu16_len,
@@ -52,7 +55,7 @@ use std::{
net::{SocketAddr, UdpSocket},
ops::DerefMut,
sync::atomic::{AtomicU64, AtomicUsize, Ordering},
sync::{Arc, Mutex},
sync::{Arc, Mutex, RwLock, RwLockReadGuard},
thread::{self, Builder, JoinHandle},
time::Duration,
time::Instant,
@@ -88,11 +91,14 @@ pub struct BankingStageStats {
new_tx_count: AtomicUsize,
dropped_packet_batches_count: AtomicUsize,
dropped_packets_count: AtomicUsize,
dropped_duplicated_packets_count: AtomicUsize,
newly_buffered_packets_count: AtomicUsize,
current_buffered_packets_count: AtomicUsize,
current_buffered_packet_batches_count: AtomicUsize,
rebuffered_packets_count: AtomicUsize,
consumed_buffered_packets_count: AtomicUsize,
cost_tracker_check_count: AtomicUsize,
cost_forced_retry_transactions_count: AtomicUsize,
// Timing
consume_buffered_packets_elapsed: AtomicU64,
@@ -101,7 +107,11 @@ pub struct BankingStageStats {
filter_pending_packets_elapsed: AtomicU64,
packet_duplicate_check_elapsed: AtomicU64,
packet_conversion_elapsed: AtomicU64,
unprocessed_packet_conversion_elapsed: AtomicU64,
transaction_processing_elapsed: AtomicU64,
cost_tracker_update_elapsed: AtomicU64,
cost_tracker_clone_elapsed: AtomicU64,
cost_tracker_check_elapsed: AtomicU64,
}
impl BankingStageStats {
@@ -137,6 +147,12 @@ impl BankingStageStats {
self.dropped_packets_count.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"dropped_duplicated_packets_count",
self.dropped_duplicated_packets_count
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"newly_buffered_packets_count",
self.newly_buffered_packets_count.swap(0, Ordering::Relaxed) as i64,
@@ -165,6 +181,17 @@ impl BankingStageStats {
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"cost_tracker_check_count",
self.cost_tracker_check_count.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"cost_forced_retry_transactions_count",
self.cost_forced_retry_transactions_count
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"consume_buffered_packets_elapsed",
self.consume_buffered_packets_elapsed
@@ -199,12 +226,33 @@ impl BankingStageStats {
self.packet_conversion_elapsed.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"unprocessed_packet_conversion_elapsed",
self.unprocessed_packet_conversion_elapsed
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"transaction_processing_elapsed",
self.transaction_processing_elapsed
.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"cost_tracker_update_elapsed",
self.cost_tracker_update_elapsed.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"cost_tracker_clone_elapsed",
self.cost_tracker_clone_elapsed.swap(0, Ordering::Relaxed) as i64,
i64
),
(
"cost_tracker_check_elapsed",
self.cost_tracker_check_elapsed.swap(0, Ordering::Relaxed) as i64,
i64
),
);
}
}
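The counters added to `BankingStageStats` above all follow one pattern: worker threads bump an atomic with `fetch_add`, and the reporter drains it with `swap(0, ...)` so each datapoint covers only the interval since the last report. A small sketch of that pattern (hypothetical struct, not the real type):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// One counter in the style of BankingStageStats: lock-free increments from
// workers, read-and-reset from the reporting thread.
struct Stats {
    dropped_packets_count: AtomicUsize,
}

impl Stats {
    fn record_drop(&self, n: usize) {
        self.dropped_packets_count.fetch_add(n, Ordering::Relaxed);
    }
    fn report(&self) -> usize {
        // swap returns the accumulated value and resets it in one atomic step
        self.dropped_packets_count.swap(0, Ordering::Relaxed)
    }
}

fn main() {
    let stats = Stats { dropped_packets_count: AtomicUsize::new(0) };
    stats.record_drop(3);
    stats.record_drop(2);
    assert_eq!(stats.report(), 5);
    assert_eq!(stats.report(), 0); // the swap reset the counter
}
```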
@@ -241,6 +289,7 @@ impl BankingStage {
verified_vote_receiver: CrossbeamReceiver<Vec<Packets>>,
transaction_status_sender: Option<TransactionStatusSender>,
gossip_vote_sender: ReplayVoteSender,
cost_model: Arc<RwLock<CostModel>>,
) -> Self {
Self::new_num_threads(
cluster_info,
@@ -251,6 +300,7 @@ impl BankingStage {
Self::num_threads(),
transaction_status_sender,
gossip_vote_sender,
cost_model,
)
}
@@ -258,11 +308,12 @@ impl BankingStage {
cluster_info: &Arc<ClusterInfo>,
poh_recorder: &Arc<Mutex<PohRecorder>>,
verified_receiver: CrossbeamReceiver<Vec<Packets>>,
verified_vote_receiver: CrossbeamReceiver<Vec<Packets>>,
tpu_verified_vote_receiver: CrossbeamReceiver<Vec<Packets>>,
verified_vote_receiver: CrossbeamReceiver<Vec<Packets>>,
num_threads: u32,
transaction_status_sender: Option<TransactionStatusSender>,
gossip_vote_sender: ReplayVoteSender,
cost_model: Arc<RwLock<CostModel>>,
) -> Self {
let batch_limit = TOTAL_BUFFERED_PACKETS / ((num_threads - 1) as usize * PACKETS_PER_BATCH);
// Single thread to generate entries from many banks.
@@ -298,6 +349,7 @@ impl BankingStage {
let gossip_vote_sender = gossip_vote_sender.clone();
let duplicates = duplicates.clone();
let data_budget = data_budget.clone();
let cost_model = cost_model.clone();
Builder::new()
.name("solana-banking-stage-tx".to_string())
.spawn(move || {
@@ -314,6 +366,7 @@ impl BankingStage {
gossip_vote_sender,
&duplicates,
&data_budget,
cost_model,
);
})
.unwrap()
@@ -371,6 +424,7 @@ impl BankingStage {
has_more_unprocessed_transactions
}
#[allow(clippy::too_many_arguments)]
pub fn consume_buffered_packets(
my_pubkey: &Pubkey,
max_tx_ingestion_ns: u128,
@@ -381,6 +435,7 @@ impl BankingStage {
test_fn: Option<impl Fn()>,
banking_stage_stats: &BankingStageStats,
recorder: &TransactionRecorder,
cost_model: &Arc<RwLock<CostModel>>,
) {
let mut rebuffered_packets_len = 0;
let mut new_tx_count = 0;
@@ -398,6 +453,8 @@ impl BankingStage {
original_unprocessed_indexes,
my_pubkey,
*next_leader,
banking_stage_stats,
cost_model,
);
Self::update_buffered_packets_with_new_unprocessed(
original_unprocessed_indexes,
@@ -416,6 +473,7 @@ impl BankingStage {
transaction_status_sender.clone(),
gossip_vote_sender,
banking_stage_stats,
cost_model,
);
if processed < verified_txs_len
|| !Bank::should_bank_still_be_processing_txs(
@@ -519,6 +577,7 @@ impl BankingStage {
banking_stage_stats: &BankingStageStats,
recorder: &TransactionRecorder,
data_budget: &DataBudget,
cost_model: &Arc<RwLock<CostModel>>,
) -> BufferedPacketsDecision {
let bank_start;
let (
@@ -559,6 +618,7 @@ impl BankingStage {
None::<Box<dyn Fn()>>,
banking_stage_stats,
recorder,
cost_model,
);
}
BufferedPacketsDecision::Forward => {
@@ -638,6 +698,7 @@ impl BankingStage {
gossip_vote_sender: ReplayVoteSender,
duplicates: &Arc<Mutex<(LruCache<u64, ()>, PacketHasher)>>,
data_budget: &DataBudget,
cost_model: Arc<RwLock<CostModel>>,
) {
let recorder = poh_recorder.lock().unwrap().recorder();
let socket = UdpSocket::bind("0.0.0.0:0").unwrap();
@@ -657,6 +718,7 @@ impl BankingStage {
&banking_stage_stats,
&recorder,
data_budget,
&cost_model,
);
if matches!(decision, BufferedPacketsDecision::Hold)
|| matches!(decision, BufferedPacketsDecision::ForwardAndHold)
@@ -691,6 +753,7 @@ impl BankingStage {
&banking_stage_stats,
duplicates,
&recorder,
&cost_model,
) {
Ok(()) | Err(RecvTimeoutError::Timeout) => (),
Err(RecvTimeoutError::Disconnected) => break,
@@ -794,7 +857,6 @@ impl BankingStage {
};
let mut execute_timings = ExecuteTimings::default();
let (
mut loaded_accounts,
results,
@@ -935,12 +997,12 @@ impl BankingStage {
) -> (usize, Vec<usize>) {
let mut chunk_start = 0;
let mut unprocessed_txs = vec![];
while chunk_start != transactions.len() {
let chunk_end = std::cmp::min(
transactions.len(),
chunk_start + MAX_NUM_TRANSACTIONS_PER_BATCH,
);
let (result, retryable_txs_in_chunk) = Self::process_and_record_transactions(
bank,
&transactions[chunk_start..chunk_end],
@@ -1023,13 +1085,21 @@ impl BankingStage {
// This function deserializes packets into transactions, computes the blake3 hash of transaction messages,
// and verifies secp256k1 instructions. A list of valid transactions are returned with their message hashes
// and packet indexes.
// Also returned is packet indexes for transaction should be retried due to cost limits.
#[allow(clippy::needless_collect)]
fn transactions_from_packets(
msgs: &Packets,
transaction_indexes: &[usize],
libsecp256k1_0_5_upgrade_enabled: bool,
feature_set: &Arc<feature_set::FeatureSet>,
read_cost_tracker: &RwLockReadGuard<CostTracker>,
banking_stage_stats: &BankingStageStats,
demote_program_write_locks: bool,
votes_only: bool,
) -> (Vec<HashedTransaction<'static>>, Vec<usize>) {
transaction_indexes
cost_model: &Arc<RwLock<CostModel>>,
) -> (Vec<HashedTransaction<'static>>, Vec<usize>, Vec<usize>) {
let mut retryable_transaction_packet_indexes: Vec<usize> = vec![];
let verified_transactions_with_packet_indexes: Vec<_> = transaction_indexes
.iter()
.filter_map(|tx_index| {
let p = &msgs.packets[*tx_index];
@@ -1038,16 +1108,70 @@ impl BankingStage {
}
let tx: Transaction = limited_deserialize(&p.data[0..p.meta.size]).ok()?;
tx.verify_precompiles(libsecp256k1_0_5_upgrade_enabled)
.ok()?;
let message_bytes = Self::packet_message(p)?;
let message_hash = Message::hash_raw_message(message_bytes);
Some((
HashedTransaction::new(Cow::Owned(tx), message_hash),
tx_index,
))
tx.verify_precompiles(feature_set).ok()?;
Some((tx, *tx_index))
})
.unzip()
.collect();
banking_stage_stats.cost_tracker_check_count.fetch_add(
verified_transactions_with_packet_indexes.len(),
Ordering::Relaxed,
);
let mut cost_tracker_check_time = Measure::start("cost_tracker_check_time");
let filtered_transactions_with_packet_indexes: Vec<_> = {
verified_transactions_with_packet_indexes
.into_iter()
.filter_map(|(tx, tx_index)| {
// put transaction into retry queue if it wouldn't fit
// into current bank
let is_vote = &msgs.packets[tx_index].meta.is_simple_vote_tx;
// excluding vote TX from cost_model, for now
if !is_vote
&& read_cost_tracker
.would_transaction_fit(
&tx,
&cost_model
.read()
.unwrap()
.calculate_cost(&tx, demote_program_write_locks),
)
.is_err()
{
debug!("transaction {:?} would exceed limit", tx);
retryable_transaction_packet_indexes.push(tx_index);
return None;
}
Some((tx, tx_index))
})
.collect()
};
cost_tracker_check_time.stop();
let (filtered_transactions, filter_transaction_packet_indexes) =
filtered_transactions_with_packet_indexes
.into_iter()
.filter_map(|(tx, tx_index)| {
let p = &msgs.packets[tx_index];
let message_bytes = Self::packet_message(p)?;
let message_hash = Message::hash_raw_message(message_bytes);
Some((
HashedTransaction::new(Cow::Owned(tx), message_hash),
tx_index,
))
})
.unzip();
banking_stage_stats
.cost_tracker_check_elapsed
.fetch_add(cost_tracker_check_time.as_us(), Ordering::Relaxed);
(
filtered_transactions,
filter_transaction_packet_indexes,
retryable_transaction_packet_indexes,
)
}
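The key behavior in the new `transactions_from_packets` above is that a transaction which would exceed the bank's cost limit is not dropped: its packet index goes into `retryable_transaction_packet_indexes` so it can be buffered for a later bank. A simplified sketch of that split (stand-in names; the real check goes through `CostTracker::would_transaction_fit` with a shared `CostModel`):

```rust
// Stand-in for the cost-limit split: over-budget transactions are set
// aside as retryable rather than discarded.
fn split_by_cost(
    txs: &[(usize, u64)],            // (packet index, estimated cost)
    would_fit: impl Fn(u64) -> bool, // stand-in for would_transaction_fit
) -> (Vec<usize>, Vec<usize>) {
    let mut kept = Vec::new();
    let mut retryable = Vec::new();
    for &(index, cost) in txs {
        if would_fit(cost) {
            kept.push(index);
        } else {
            // would exceed the block's cost limit; buffer for a later bank
            retryable.push(index);
        }
    }
    (kept, retryable)
}

fn main() {
    let limit = 10;
    let (kept, retryable) = split_by_cost(&[(0, 4), (1, 20), (2, 3)], |c| c <= limit);
    assert_eq!(kept, vec![0, 2]);
    assert_eq!(retryable, vec![1]);
}
```

Note that in the real code vote transactions bypass the check entirely, and costs are only applied to the shared tracker after the transactions execute.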
/// This function filters pending packets that are still valid
@@ -1089,6 +1213,7 @@ impl BankingStage {
Self::filter_valid_transaction_indexes(&results, transaction_to_packet_indexes)
}
#[allow(clippy::too_many_arguments)]
fn process_packets_transactions(
bank: &Arc<Bank>,
bank_creation_time: &Instant,
@@ -1098,20 +1223,31 @@ impl BankingStage {
transaction_status_sender: Option<TransactionStatusSender>,
gossip_vote_sender: &ReplayVoteSender,
banking_stage_stats: &BankingStageStats,
cost_model: &Arc<RwLock<CostModel>>,
) -> (usize, usize, Vec<usize>) {
let mut packet_conversion_time = Measure::start("packet_conversion");
let (transactions, transaction_to_packet_indexes) = Self::transactions_from_packets(
msgs,
&packet_indexes,
bank.libsecp256k1_0_5_upgrade_enabled(),
bank.vote_only_bank(),
);
let (transactions, transaction_to_packet_indexes, retryable_packet_indexes) =
Self::transactions_from_packets(
msgs,
&packet_indexes,
&bank.feature_set,
&bank.read_cost_tracker().unwrap(),
banking_stage_stats,
bank.demote_program_write_locks(),
bank.vote_only_bank(),
cost_model,
);
packet_conversion_time.stop();
inc_new_counter_info!("banking_stage-packet_conversion", 1);
banking_stage_stats
.cost_forced_retry_transactions_count
.fetch_add(retryable_packet_indexes.len(), Ordering::Relaxed);
debug!(
"bank: {} filtered transactions {}",
"bank: {} filtered transactions {} cost limited transactions {}",
bank.slot(),
transactions.len()
transactions.len(),
retryable_packet_indexes.len()
);
let tx_len = transactions.len();
@@ -1126,11 +1262,29 @@ impl BankingStage {
gossip_vote_sender,
);
process_tx_time.stop();
let unprocessed_tx_count = unprocessed_tx_indexes.len();
inc_new_counter_info!(
"banking_stage-unprocessed_transactions",
unprocessed_tx_count
);
// applying cost of processed transactions to shared cost_tracker
let mut cost_tracking_time = Measure::start("cost_tracking_time");
transactions.iter().enumerate().for_each(|(index, tx)| {
if unprocessed_tx_indexes.iter().all(|&i| i != index) {
bank.write_cost_tracker().unwrap().add_transaction_cost(
tx.transaction(),
&cost_model
.read()
.unwrap()
.calculate_cost(tx.transaction(), bank.demote_program_write_locks()),
);
}
});
cost_tracking_time.stop();
let mut filter_pending_packets_time = Measure::start("filter_pending_packets_time");
let filtered_unprocessed_packet_indexes = Self::filter_pending_packets_from_pending_txs(
let mut filtered_unprocessed_packet_indexes = Self::filter_pending_packets_from_pending_txs(
bank,
&transactions,
&transaction_to_packet_indexes,
@@ -1143,12 +1297,19 @@ impl BankingStage {
unprocessed_tx_count.saturating_sub(filtered_unprocessed_packet_indexes.len())
);
// combine cost-related unprocessed transactions with bank determined unprocessed for
// buffering
filtered_unprocessed_packet_indexes.extend(retryable_packet_indexes);
banking_stage_stats
.packet_conversion_elapsed
.fetch_add(packet_conversion_time.as_us(), Ordering::Relaxed);
banking_stage_stats
.transaction_processing_elapsed
.fetch_add(process_tx_time.as_us(), Ordering::Relaxed);
banking_stage_stats
.cost_tracker_update_elapsed
.fetch_add(cost_tracking_time.as_us(), Ordering::Relaxed);
banking_stage_stats
.filter_pending_packets_elapsed
.fetch_add(filter_pending_packets_time.as_us(), Ordering::Relaxed);
@@ -1162,6 +1323,8 @@ impl BankingStage {
transaction_indexes: &[usize],
my_pubkey: &Pubkey,
next_leader: Option<Pubkey>,
banking_stage_stats: &BankingStageStats,
cost_model: &Arc<RwLock<CostModel>>,
) -> Vec<usize> {
// Check if we are the next leader. If so, let's not filter the packets
// as we'll filter it again while processing the packets.
@@ -1172,27 +1335,43 @@ impl BankingStage {
}
}
let (transactions, transaction_to_packet_indexes) = Self::transactions_from_packets(
msgs,
&transaction_indexes,
bank.libsecp256k1_0_5_upgrade_enabled(),
bank.vote_only_bank(),
);
let mut unprocessed_packet_conversion_time =
Measure::start("unprocessed_packet_conversion");
let (transactions, transaction_to_packet_indexes, retry_packet_indexes) =
Self::transactions_from_packets(
msgs,
&transaction_indexes,
&bank.feature_set,
&bank.read_cost_tracker().unwrap(),
banking_stage_stats,
bank.demote_program_write_locks(),
bank.vote_only_bank(),
cost_model,
);
unprocessed_packet_conversion_time.stop();
let tx_count = transaction_to_packet_indexes.len();
let unprocessed_tx_indexes = (0..transactions.len()).collect_vec();
let filtered_unprocessed_packet_indexes = Self::filter_pending_packets_from_pending_txs(
let mut filtered_unprocessed_packet_indexes = Self::filter_pending_packets_from_pending_txs(
bank,
&transactions,
&transaction_to_packet_indexes,
&unprocessed_tx_indexes,
);
filtered_unprocessed_packet_indexes.extend(retry_packet_indexes);
inc_new_counter_info!(
"banking_stage-dropped_tx_before_forwarding",
tx_count.saturating_sub(filtered_unprocessed_packet_indexes.len())
);
banking_stage_stats
.unprocessed_packet_conversion_elapsed
.fetch_add(
unprocessed_packet_conversion_time.as_us(),
Ordering::Relaxed,
);
filtered_unprocessed_packet_indexes
}
@@ -1228,6 +1407,7 @@ impl BankingStage {
banking_stage_stats: &BankingStageStats,
duplicates: &Arc<Mutex<(LruCache<u64, ()>, PacketHasher)>>,
recorder: &TransactionRecorder,
cost_model: &Arc<RwLock<CostModel>>,
) -> Result<(), RecvTimeoutError> {
let mut recv_time = Measure::start("process_packets_recv");
let mms = verified_receiver.recv_timeout(recv_timeout)?;
@@ -1258,8 +1438,8 @@ impl BankingStage {
buffered_packets,
msgs,
packet_indexes,
&mut dropped_packets_count,
&mut dropped_packet_batches_count,
&mut dropped_packets_count,
&mut newly_buffered_packets_count,
batch_limit,
duplicates,
@@ -1279,6 +1459,7 @@ impl BankingStage {
transaction_status_sender.clone(),
gossip_vote_sender,
banking_stage_stats,
cost_model,
);
new_tx_count += processed;
@@ -1310,6 +1491,8 @@ impl BankingStage {
&packet_indexes,
my_pubkey,
next_leader,
banking_stage_stats,
cost_model,
);
Self::push_unprocessed(
buffered_packets,
@@ -1353,6 +1536,9 @@ impl BankingStage {
banking_stage_stats
.dropped_packet_batches_count
.fetch_add(dropped_packet_batches_count, Ordering::Relaxed);
banking_stage_stats
.dropped_packets_count
.fetch_add(dropped_packets_count, Ordering::Relaxed);
banking_stage_stats
.newly_buffered_packets_count
.fetch_add(newly_buffered_packets_count, Ordering::Relaxed);
@@ -1379,6 +1565,7 @@ impl BankingStage {
banking_stage_stats: &BankingStageStats,
) {
{
let original_packets_count = packet_indexes.len();
let mut packet_duplicate_check_time = Measure::start("packet_duplicate_check");
let mut duplicates = duplicates.lock().unwrap();
let (cache, hasher) = duplicates.deref_mut();
@@ -1396,6 +1583,12 @@ impl BankingStage {
banking_stage_stats
.packet_duplicate_check_elapsed
.fetch_add(packet_duplicate_check_time.as_us(), Ordering::Relaxed);
banking_stage_stats
.dropped_duplicated_packets_count
.fetch_add(
original_packets_count.saturating_sub(packet_indexes.len()),
Ordering::Relaxed,
);
}
if Self::packet_has_more_unprocessed_transactions(&packet_indexes) {
if unprocessed_packets.len() >= batch_limit {
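The `dropped_duplicated_packets_count` bookkeeping above measures how many packet indexes the duplicate filter removed: packets are hashed, looked up in a bounded cache, and dropped if already seen. A self-contained sketch of that shape (stdlib `HashSet` and `DefaultHasher` as stand-ins for the real `LruCache` and randomized `PacketHasher`):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Stand-in for the duplicate-packet filter: keep the index of each packet
// whose hash has not been seen, and count how many were dropped so the
// caller can feed dropped_duplicated_packets_count.
fn dedup_packets(packets: &[&[u8]], seen: &mut HashSet<u64>) -> (Vec<usize>, usize) {
    let mut kept = Vec::new();
    for (i, p) in packets.iter().enumerate() {
        let mut h = DefaultHasher::new();
        p.hash(&mut h);
        if seen.insert(h.finish()) {
            kept.push(i);
        }
    }
    let dropped = packets.len() - kept.len();
    (kept, dropped)
}

fn main() {
    let packets: Vec<&[u8]> = vec![b"a", b"b", b"a"];
    let mut seen = HashSet::new();
    let (kept, dropped) = dedup_packets(&packets, &mut seen);
    assert_eq!(kept, vec![0, 1]); // second "a" is a duplicate
    assert_eq!(dropped, 1);
}
```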
@@ -1480,6 +1673,7 @@ mod tests {
poh_service::PohService,
};
use solana_rpc::transaction_status_service::TransactionStatusService;
use solana_runtime::cost_model::CostModel;
use solana_sdk::{
hash::Hash,
instruction::InstructionError,
@@ -1536,6 +1730,7 @@ mod tests {
gossip_verified_vote_receiver,
None,
vote_forward_sender,
Arc::new(RwLock::new(CostModel::default())),
);
drop(verified_sender);
drop(gossip_verified_vote_sender);
@@ -1584,6 +1779,7 @@ mod tests {
verified_gossip_vote_receiver,
None,
vote_forward_sender,
Arc::new(RwLock::new(CostModel::default())),
);
trace!("sending bank");
drop(verified_sender);
@@ -1656,6 +1852,7 @@ mod tests {
gossip_verified_vote_receiver,
None,
gossip_vote_sender,
Arc::new(RwLock::new(CostModel::default())),
);
// fund another account so we can send 2 good transactions in a single batch.
@@ -1806,6 +2003,7 @@ mod tests {
3,
None,
gossip_vote_sender,
Arc::new(RwLock::new(CostModel::default())),
);
// wait for banking_stage to eat the packets
@@ -2627,6 +2825,7 @@ mod tests {
None::<Box<dyn Fn()>>,
&BankingStageStats::default(),
&recorder,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(buffered_packets[0].1.len(), num_conflicting_transactions);
// When the poh recorder has a bank, should process all non conflicting buffered packets.
@@ -2643,6 +2842,7 @@ mod tests {
None::<Box<dyn Fn()>>,
&BankingStageStats::default(),
&recorder,
&Arc::new(RwLock::new(CostModel::default())),
);
if num_expected_unprocessed == 0 {
assert!(buffered_packets.is_empty())
@@ -2708,6 +2908,7 @@ mod tests {
test_fn,
&BankingStageStats::default(),
&recorder,
&Arc::new(RwLock::new(CostModel::default())),
);
// Check everything is correct. All indexes after `interrupted_iteration`
@@ -2956,22 +3157,32 @@ mod tests {
make_test_packets(vec![transfer_tx.clone(), transfer_tx.clone()], vote_indexes);
let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
false,
votes_only,
);
let (txs, tx_packet_index, _retryable_packet_indexes) =
BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
&Arc::new(feature_set::FeatureSet::default()),
&RwLock::new(CostTracker::default()).read().unwrap(),
&BankingStageStats::default(),
false,
votes_only,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(2, txs.len());
assert_eq!(vec![0, 1], tx_packet_index);
votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
false,
votes_only,
);
let (txs, tx_packet_index, _retryable_packet_indexes) =
BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
&Arc::new(feature_set::FeatureSet::default()),
&RwLock::new(CostTracker::default()).read().unwrap(),
&BankingStageStats::default(),
false,
votes_only,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(0, txs.len());
assert_eq!(0, tx_packet_index.len());
}
@@ -2985,22 +3196,32 @@ mod tests {
);
let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
false,
votes_only,
);
let (txs, tx_packet_index, _retryable_packet_indexes) =
BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
&Arc::new(feature_set::FeatureSet::default()),
&RwLock::new(CostTracker::default()).read().unwrap(),
&BankingStageStats::default(),
false,
votes_only,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], tx_packet_index);
votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
false,
votes_only,
);
let (txs, tx_packet_index, _retryable_packet_indexes) =
BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
&Arc::new(feature_set::FeatureSet::default()),
&RwLock::new(CostTracker::default()).read().unwrap(),
&BankingStageStats::default(),
false,
votes_only,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(2, txs.len());
assert_eq!(vec![0, 2], tx_packet_index);
}
@@ -3014,22 +3235,32 @@ mod tests {
);
let mut votes_only = false;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
false,
votes_only,
);
let (txs, tx_packet_index, _retryable_packet_indexes) =
BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
&Arc::new(feature_set::FeatureSet::default()),
&RwLock::new(CostTracker::default()).read().unwrap(),
&BankingStageStats::default(),
false,
votes_only,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], tx_packet_index);
votes_only = true;
let (txs, tx_packet_index) = BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
false,
votes_only,
);
let (txs, tx_packet_index, _retryable_packet_indexes) =
BankingStage::transactions_from_packets(
&packets,
&packet_indexes,
&Arc::new(feature_set::FeatureSet::default()),
&RwLock::new(CostTracker::default()).read().unwrap(),
&BankingStageStats::default(),
false,
votes_only,
&Arc::new(RwLock::new(CostModel::default())),
);
assert_eq!(3, txs.len());
assert_eq!(vec![0, 1, 2], tx_packet_index);
}


@@ -22,8 +22,10 @@ use {
solana_metrics::{inc_new_counter_error, inc_new_counter_info},
solana_poh::poh_recorder::WorkingBankEntry,
solana_runtime::{bank::Bank, bank_forks::BankForks},
solana_sdk::timing::{timestamp, AtomicInterval},
solana_sdk::{clock::Slot, pubkey::Pubkey},
solana_sdk::{
timing::{timestamp, AtomicInterval},
{clock::Slot, pubkey::Pubkey},
},
solana_streamer::{
sendmmsg::{batch_send, SendPktsError},
socket::SocketAddrSpace,
@@ -31,9 +33,11 @@ use {
std::{
collections::HashMap,
net::UdpSocket,
sync::atomic::{AtomicBool, Ordering},
sync::mpsc::{channel, Receiver, RecvError, RecvTimeoutError, Sender},
sync::{Arc, Mutex, RwLock},
sync::{
atomic::{AtomicBool, Ordering},
mpsc::{channel, Receiver, RecvError, RecvTimeoutError, Sender},
Arc, Mutex, RwLock,
},
thread::{self, Builder, JoinHandle},
time::{Duration, Instant},
},
@@ -399,9 +403,9 @@ pub fn broadcast_shreds(
cluster_nodes_cache: &ClusterNodesCache<BroadcastStage>,
last_datapoint_submit: &Arc<AtomicInterval>,
transmit_stats: &mut TransmitShredsStats,
socket_addr_space: &SocketAddrSpace,
cluster_info: &ClusterInfo,
bank_forks: &Arc<RwLock<BankForks>>,
socket_addr_space: &SocketAddrSpace,
) -> Result<()> {
let mut result = Ok(());
let mut shred_select = Measure::start("shred_select");


@@ -89,6 +89,7 @@ impl BroadcastRun for BroadcastFakeShredsRun {
slot,
num_expected_batches: None,
slot_start_ts: Instant::now(),
was_interrupted: false,
};
// 3) Start broadcast step
// Some indicates fake shreds


@@ -2,7 +2,7 @@ use super::*;
pub(crate) trait BroadcastStats {
fn update(&mut self, new_stats: &Self);
fn report_stats(&mut self, slot: Slot, slot_start: Instant);
fn report_stats(&mut self, slot: Slot, slot_start: Instant, was_interrupted: bool);
}
#[derive(Clone)]
@@ -10,6 +10,7 @@ pub(crate) struct BroadcastShredBatchInfo {
pub(crate) slot: Slot,
pub(crate) num_expected_batches: Option<usize>,
pub(crate) slot_start_ts: Instant,
pub(crate) was_interrupted: bool,
}
#[derive(Default, Clone)]
@@ -33,25 +34,39 @@ impl BroadcastStats for TransmitShredsStats {
self.total_packets += new_stats.total_packets;
self.dropped_packets += new_stats.dropped_packets;
}
fn report_stats(&mut self, slot: Slot, slot_start: Instant) {
datapoint_info!(
"broadcast-transmit-shreds-stats",
("slot", slot as i64, i64),
(
"end_to_end_elapsed",
// `slot_start` signals when the first batch of shreds was
// received, used to measure duration of broadcast
slot_start.elapsed().as_micros() as i64,
i64
),
("transmit_elapsed", self.transmit_elapsed as i64, i64),
("send_mmsg_elapsed", self.send_mmsg_elapsed as i64, i64),
("get_peers_elapsed", self.get_peers_elapsed as i64, i64),
("num_shreds", self.num_shreds as i64, i64),
("shred_select", self.shred_select as i64, i64),
("total_packets", self.total_packets as i64, i64),
("dropped_packets", self.dropped_packets as i64, i64),
);
fn report_stats(&mut self, slot: Slot, slot_start: Instant, was_interrupted: bool) {
if was_interrupted {
datapoint_info!(
"broadcast-transmit-shreds-interrupted-stats",
("slot", slot as i64, i64),
("transmit_elapsed", self.transmit_elapsed as i64, i64),
("send_mmsg_elapsed", self.send_mmsg_elapsed as i64, i64),
("get_peers_elapsed", self.get_peers_elapsed as i64, i64),
("num_shreds", self.num_shreds as i64, i64),
("shred_select", self.shred_select as i64, i64),
("total_packets", self.total_packets as i64, i64),
("dropped_packets", self.dropped_packets as i64, i64),
);
} else {
datapoint_info!(
"broadcast-transmit-shreds-stats",
("slot", slot as i64, i64),
(
"end_to_end_elapsed",
// `slot_start` signals when the first batch of shreds was
// received, used to measure duration of broadcast
slot_start.elapsed().as_micros() as i64,
i64
),
("transmit_elapsed", self.transmit_elapsed as i64, i64),
("send_mmsg_elapsed", self.send_mmsg_elapsed as i64, i64),
("get_peers_elapsed", self.get_peers_elapsed as i64, i64),
("num_shreds", self.num_shreds as i64, i64),
("shred_select", self.shred_select as i64, i64),
("total_packets", self.total_packets as i64, i64),
("dropped_packets", self.dropped_packets as i64, i64),
);
}
}
}
@@ -65,24 +80,37 @@ impl BroadcastStats for InsertShredsStats {
self.insert_shreds_elapsed += new_stats.insert_shreds_elapsed;
self.num_shreds += new_stats.num_shreds;
}
fn report_stats(&mut self, slot: Slot, slot_start: Instant) {
datapoint_info!(
"broadcast-insert-shreds-stats",
("slot", slot as i64, i64),
(
"end_to_end_elapsed",
// `slot_start` signals when the first batch of shreds was
// received, used to measure duration of broadcast
slot_start.elapsed().as_micros() as i64,
i64
),
(
"insert_shreds_elapsed",
self.insert_shreds_elapsed as i64,
i64
),
("num_shreds", self.num_shreds as i64, i64),
);
fn report_stats(&mut self, slot: Slot, slot_start: Instant, was_interrupted: bool) {
if was_interrupted {
datapoint_info!(
"broadcast-insert-shreds-interrupted-stats",
("slot", slot as i64, i64),
(
"insert_shreds_elapsed",
self.insert_shreds_elapsed as i64,
i64
),
("num_shreds", self.num_shreds as i64, i64),
);
} else {
datapoint_info!(
"broadcast-insert-shreds-stats",
("slot", slot as i64, i64),
(
"end_to_end_elapsed",
// `slot_start` signals when the first batch of shreds was
// received, used to measure duration of broadcast
slot_start.elapsed().as_micros() as i64,
i64
),
(
"insert_shreds_elapsed",
self.insert_shreds_elapsed as i64,
i64
),
("num_shreds", self.num_shreds as i64, i64),
);
}
}
}
@@ -128,9 +156,11 @@ impl<T: BroadcastStats + Default> SlotBroadcastStats<T> {
}
if let Some(num_expected_batches) = slot_batch_counter.num_expected_batches {
if slot_batch_counter.num_batches == num_expected_batches {
slot_batch_counter
.broadcast_shred_stats
.report_stats(batch_info.slot, batch_info.slot_start_ts);
slot_batch_counter.broadcast_shred_stats.report_stats(
batch_info.slot,
batch_info.slot_start_ts,
batch_info.was_interrupted,
);
should_delete = true;
}
}
@@ -159,7 +189,7 @@ mod test {
self.count += new_stats.count;
self.sender = new_stats.sender.clone();
}
fn report_stats(&mut self, slot: Slot, slot_start: Instant) {
fn report_stats(&mut self, slot: Slot, slot_start: Instant, _was_interrupted: bool) {
self.sender
.as_ref()
.unwrap()
@@ -186,6 +216,7 @@ mod test {
slot: 0,
num_expected_batches: Some(2),
slot_start_ts: start,
was_interrupted: false,
}),
);
@@ -242,6 +273,7 @@ mod test {
slot: 0,
num_expected_batches: None,
slot_start_ts: start,
was_interrupted: false,
}),
);
@@ -265,6 +297,7 @@ mod test {
slot,
num_expected_batches: None,
slot_start_ts: start,
was_interrupted: false,
};
if i == round % num_threads {
broadcast_batch_info.num_expected_batches = Some(num_threads);

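The batch-counting logic above in `SlotBroadcastStats` only reports a slot's accumulated stats once `num_batches` reaches `num_expected_batches`, and then deletes the slot entry. A minimal standalone sketch of that pattern, using simplified hypothetical types rather than the actual solana structs:

```rust
use std::collections::HashMap;

// Simplified stand-in for SlotBroadcastStats' per-slot batch counter.
#[derive(Default)]
struct BatchCounter {
    num_batches: usize,
    num_expected_batches: Option<usize>,
    total_shreds: u64,
}

// Record one batch for `slot`; once the expected batch count is known and
// reached, remove the entry and "report" by returning the accumulated total.
fn record_batch(
    slots: &mut HashMap<u64, BatchCounter>,
    slot: u64,
    shreds: u64,
    expected: Option<usize>,
) -> Option<u64> {
    let counter = slots.entry(slot).or_default();
    counter.num_batches += 1;
    counter.total_shreds += shreds;
    if expected.is_some() {
        counter.num_expected_batches = expected;
    }
    if counter.num_expected_batches == Some(counter.num_batches) {
        // Report and delete, as SlotBroadcastStats does when the last
        // expected batch for the slot arrives.
        let done = slots.remove(&slot).unwrap();
        return Some(done.total_shreds);
    }
    None
}

fn main() {
    let mut slots = HashMap::new();
    // First batch arrives before the expected count is known: nothing reported.
    assert_eq!(record_batch(&mut slots, 0, 5, None), None);
    // Second batch carries num_expected_batches = 2: stats are reported and dropped.
    assert_eq!(record_batch(&mut slots, 0, 7, Some(2)), Some(12));
    assert!(slots.is_empty());
}
```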

@@ -145,9 +145,9 @@ impl BroadcastRun for FailEntryVerificationBroadcastRun {
&self.cluster_nodes_cache,
&Arc::new(AtomicInterval::default()),
&mut TransmitShredsStats::default(),
cluster_info.socket_addr_space(),
cluster_info,
bank_forks,
cluster_info.socket_addr_space(),
)
}
fn record(


@@ -97,7 +97,7 @@ impl StandardBroadcastRun {
stats,
);
shreds.insert(0, shred);
self.report_and_reset_stats();
self.report_and_reset_stats(true);
self.unfinished_slot = None;
shreds
}
@@ -245,6 +245,7 @@ impl StandardBroadcastRun {
"Old broadcast start time for previous slot must exist if the previous slot
was interrupted",
),
was_interrupted: true,
});
let shreds = Arc::new(prev_slot_shreds);
debug_assert!(shreds.iter().all(|shred| shred.slot() == slot));
@@ -267,6 +268,7 @@ impl StandardBroadcastRun {
slot_start_ts: self
.slot_broadcast_start
.expect("Start timestamp must exist for a slot if we're broadcasting the slot"),
was_interrupted: false,
});
get_leader_schedule_time.stop();
@@ -302,7 +304,7 @@ impl StandardBroadcastRun {
self.process_shreds_stats.update(&process_stats);
if last_tick_height == bank.max_tick_height() {
self.report_and_reset_stats();
self.report_and_reset_stats(false);
self.unfinished_slot = None;
}
@@ -362,9 +364,9 @@ impl StandardBroadcastRun {
&self.cluster_nodes_cache,
&self.last_datapoint_submit,
&mut transmit_stats,
cluster_info.socket_addr_space(),
cluster_info,
bank_forks,
cluster_info.socket_addr_space(),
)?;
transmit_time.stop();
@@ -385,35 +387,59 @@ impl StandardBroadcastRun {
transmit_shreds_stats.update(new_transmit_shreds_stats, broadcast_shred_batch_info);
}
fn report_and_reset_stats(&mut self) {
fn report_and_reset_stats(&mut self, was_interrupted: bool) {
let stats = &self.process_shreds_stats;
let unfinished_slot = self.unfinished_slot.as_ref().unwrap();
datapoint_info!(
"broadcast-process-shreds-stats",
("slot", unfinished_slot.slot as i64, i64),
("shredding_time", stats.shredding_elapsed, i64),
("receive_time", stats.receive_elapsed, i64),
(
"num_data_shreds",
unfinished_slot.next_shred_index as i64,
i64
),
(
"slot_broadcast_time",
self.slot_broadcast_start.unwrap().elapsed().as_micros() as i64,
i64
),
(
"get_leader_schedule_time",
stats.get_leader_schedule_elapsed,
i64
),
("serialize_shreds_time", stats.serialize_elapsed, i64),
("gen_data_time", stats.gen_data_elapsed, i64),
("gen_coding_time", stats.gen_coding_elapsed, i64),
("sign_coding_time", stats.sign_coding_elapsed, i64),
("coding_send_time", stats.coding_send_elapsed, i64),
);
if was_interrupted {
datapoint_info!(
"broadcast-process-shreds-interrupted-stats",
("slot", unfinished_slot.slot as i64, i64),
("shredding_time", stats.shredding_elapsed, i64),
("receive_time", stats.receive_elapsed, i64),
(
"num_data_shreds",
unfinished_slot.next_shred_index as i64,
i64
),
(
"get_leader_schedule_time",
stats.get_leader_schedule_elapsed,
i64
),
("serialize_shreds_time", stats.serialize_elapsed, i64),
("gen_data_time", stats.gen_data_elapsed, i64),
("gen_coding_time", stats.gen_coding_elapsed, i64),
("sign_coding_time", stats.sign_coding_elapsed, i64),
("coding_send_time", stats.coding_send_elapsed, i64),
);
} else {
datapoint_info!(
"broadcast-process-shreds-stats",
("slot", unfinished_slot.slot as i64, i64),
("shredding_time", stats.shredding_elapsed, i64),
("receive_time", stats.receive_elapsed, i64),
(
"num_data_shreds",
unfinished_slot.next_shred_index as i64,
i64
),
(
"slot_broadcast_time",
self.slot_broadcast_start.unwrap().elapsed().as_micros() as i64,
i64
),
(
"get_leader_schedule_time",
stats.get_leader_schedule_elapsed,
i64
),
("serialize_shreds_time", stats.serialize_elapsed, i64),
("gen_data_time", stats.gen_data_elapsed, i64),
("gen_coding_time", stats.gen_coding_elapsed, i64),
("sign_coding_time", stats.sign_coding_elapsed, i64),
("coding_send_time", stats.coding_send_elapsed, i64),
);
}
self.process_shreds_stats.reset();
}
}


@@ -318,6 +318,7 @@ mod tests {
super::*,
rand::{seq::SliceRandom, Rng},
solana_gossip::{
crds::GossipRoute,
crds_value::{CrdsData, CrdsValue},
deprecated::{
shuffle_peers_and_index, sorted_retransmit_peers_and_stakes,
@@ -384,7 +385,10 @@ mod tests {
for node in nodes.iter().skip(1) {
let node = CrdsData::ContactInfo(node.clone());
let node = CrdsValue::new_unsigned(node);
assert_eq!(gossip.crds.insert(node, now), Ok(()));
assert_eq!(
gossip.crds.insert(node, now, GossipRoute::LocalMessage),
Ok(())
);
}
}
(nodes, stakes, cluster_info)


@@ -0,0 +1,303 @@
//! This service receives instruction ExecuteTimings from replay_stage and
//! updates the cost_model, which is shared with banking_stage to optimize
//! packing transactions into blocks; it also triggers persisting the cost
//! table to the blockstore.
use solana_ledger::blockstore::Blockstore;
use solana_measure::measure::Measure;
use solana_runtime::{bank::Bank, bank::ExecuteTimings, cost_model::CostModel};
use solana_sdk::timing::timestamp;
use std::{
sync::{
atomic::{AtomicBool, Ordering},
mpsc::Receiver,
Arc, RwLock,
},
thread::{self, Builder, JoinHandle},
time::Duration,
};
#[derive(Default)]
pub struct CostUpdateServiceTiming {
last_print: u64,
update_cost_model_count: u64,
update_cost_model_elapsed: u64,
persist_cost_table_elapsed: u64,
}
impl CostUpdateServiceTiming {
fn update(
&mut self,
update_cost_model_count: u64,
update_cost_model_elapsed: u64,
persist_cost_table_elapsed: u64,
) {
self.update_cost_model_count += update_cost_model_count;
self.update_cost_model_elapsed += update_cost_model_elapsed;
self.persist_cost_table_elapsed += persist_cost_table_elapsed;
let now = timestamp();
let elapsed_ms = now - self.last_print;
if elapsed_ms > 1000 {
datapoint_info!(
"cost-update-service-stats",
("total_elapsed_us", elapsed_ms * 1000, i64),
(
"update_cost_model_count",
self.update_cost_model_count as i64,
i64
),
(
"update_cost_model_elapsed",
self.update_cost_model_elapsed as i64,
i64
),
(
"persist_cost_table_elapsed",
self.persist_cost_table_elapsed as i64,
i64
),
);
*self = CostUpdateServiceTiming::default();
self.last_print = now;
}
}
}
pub enum CostUpdate {
FrozenBank { bank: Arc<Bank> },
ExecuteTiming { execute_timings: ExecuteTimings },
}
pub type CostUpdateReceiver = Receiver<CostUpdate>;
pub struct CostUpdateService {
thread_hdl: JoinHandle<()>,
}
impl CostUpdateService {
#[allow(clippy::new_ret_no_self)]
pub fn new(
exit: Arc<AtomicBool>,
blockstore: Arc<Blockstore>,
cost_model: Arc<RwLock<CostModel>>,
cost_update_receiver: CostUpdateReceiver,
) -> Self {
let thread_hdl = Builder::new()
.name("solana-cost-update-service".to_string())
.spawn(move || {
Self::service_loop(exit, blockstore, cost_model, cost_update_receiver);
})
.unwrap();
Self { thread_hdl }
}
pub fn join(self) -> thread::Result<()> {
self.thread_hdl.join()
}
fn service_loop(
exit: Arc<AtomicBool>,
blockstore: Arc<Blockstore>,
cost_model: Arc<RwLock<CostModel>>,
cost_update_receiver: CostUpdateReceiver,
) {
let mut cost_update_service_timing = CostUpdateServiceTiming::default();
let mut dirty: bool;
let mut update_count: u64;
let wait_timer = Duration::from_millis(100);
loop {
if exit.load(Ordering::Relaxed) {
break;
}
dirty = false;
update_count = 0_u64;
let mut update_cost_model_time = Measure::start("update_cost_model_time");
for cost_update in cost_update_receiver.try_iter() {
match cost_update {
CostUpdate::FrozenBank { bank } => {
bank.read_cost_tracker().unwrap().report_stats(bank.slot());
}
CostUpdate::ExecuteTiming { execute_timings } => {
dirty |= Self::update_cost_model(&cost_model, &execute_timings);
update_count += 1;
}
}
}
update_cost_model_time.stop();
let mut persist_cost_table_time = Measure::start("persist_cost_table_time");
if dirty {
Self::persist_cost_table(&blockstore, &cost_model);
}
persist_cost_table_time.stop();
cost_update_service_timing.update(
update_count,
update_cost_model_time.as_us(),
persist_cost_table_time.as_us(),
);
thread::sleep(wait_timer);
}
}
fn update_cost_model(cost_model: &RwLock<CostModel>, execute_timings: &ExecuteTimings) -> bool {
let mut dirty = false;
{
let mut cost_model_mutable = cost_model.write().unwrap();
for (program_id, timing) in &execute_timings.details.per_program_timings {
if timing.count < 1 {
continue;
}
let units = timing.accumulated_units / timing.count as u64;
match cost_model_mutable.upsert_instruction_cost(program_id, units) {
Ok(c) => {
debug!(
"after replayed into bank, instruction {:?} has averaged cost {}",
program_id, c
);
dirty = true;
}
Err(err) => {
debug!(
"after replayed into bank, instruction {:?} failed to update cost, err: {}",
program_id, err
);
}
}
}
}
debug!(
"after replayed into bank, updated cost model instruction cost table, current values: {:?}",
cost_model.read().unwrap().get_instruction_cost_table()
);
dirty
}
fn persist_cost_table(blockstore: &Blockstore, cost_model: &RwLock<CostModel>) {
let cost_model_read = cost_model.read().unwrap();
let cost_table = cost_model_read.get_instruction_cost_table();
let db_records = blockstore.read_program_costs().expect("read programs");
// delete records from blockstore if they are no longer in cost_table
db_records.iter().for_each(|(pubkey, _)| {
if cost_table.get(pubkey).is_none() {
blockstore
.delete_program_cost(pubkey)
.expect("delete old program");
}
});
for (key, cost) in cost_table.iter() {
blockstore
.write_program_cost(key, cost)
.expect("persist program costs to blockstore");
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use solana_runtime::message_processor::ProgramTiming;
use solana_sdk::pubkey::Pubkey;
#[test]
fn test_update_cost_model_with_empty_execute_timings() {
let cost_model = Arc::new(RwLock::new(CostModel::default()));
let empty_execute_timings = ExecuteTimings::default();
CostUpdateService::update_cost_model(&cost_model, &empty_execute_timings);
assert_eq!(
0,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
}
#[test]
fn test_update_cost_model_with_execute_timings() {
let cost_model = Arc::new(RwLock::new(CostModel::default()));
let mut execute_timings = ExecuteTimings::default();
let program_key_1 = Pubkey::new_unique();
let mut expected_cost: u64;
// add new program
{
let accumulated_us: u64 = 1000;
let accumulated_units: u64 = 100;
let count: u32 = 10;
expected_cost = accumulated_units / count as u64;
execute_timings.details.per_program_timings.insert(
program_key_1,
ProgramTiming {
accumulated_us,
accumulated_units,
count,
},
);
CostUpdateService::update_cost_model(&cost_model, &execute_timings);
assert_eq!(
1,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
assert_eq!(
Some(&expected_cost),
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.get(&program_key_1)
);
}
// update program
{
let accumulated_us: u64 = 2000;
let accumulated_units: u64 = 200;
let count: u32 = 10;
// the expected new cost is the average of (new_value, existing_value)
expected_cost = ((accumulated_units / count as u64) + expected_cost) / 2;
execute_timings.details.per_program_timings.insert(
program_key_1,
ProgramTiming {
accumulated_us,
accumulated_units,
count,
},
);
CostUpdateService::update_cost_model(&cost_model, &execute_timings);
assert_eq!(
1,
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.len()
);
assert_eq!(
Some(&expected_cost),
cost_model
.read()
.unwrap()
.get_instruction_cost_table()
.get(&program_key_1)
);
}
}
}

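The test above expects `upsert_instruction_cost` to average a newly observed per-instruction cost with the existing table entry. A standalone sketch of that moving-average update, with a plain `HashMap` standing in for the cost model's instruction cost table (hypothetical simplified signature, not the actual `CostModel` API):

```rust
use std::collections::HashMap;

// Sketch of the averaging the tests exercise: a program's first observation
// is stored as-is; later observations are averaged with the stored value,
// i.e. new = (observed + existing) / 2. Returns the stored cost.
fn upsert_instruction_cost(
    table: &mut HashMap<&'static str, u64>,
    program: &'static str,
    observed_units: u64,
) -> u64 {
    let cost = match table.get(program) {
        Some(&existing) => (observed_units + existing) / 2,
        None => observed_units,
    };
    table.insert(program, cost);
    cost
}

fn main() {
    let mut table = HashMap::new();
    // First observation: 100 accumulated units over 10 executions -> 10 units.
    assert_eq!(upsert_instruction_cost(&mut table, "program_1", 100 / 10), 10);
    // Second observation of 20 units averages with the stored 10 -> 15,
    // matching `expected_cost` in test_update_cost_model_with_execute_timings.
    assert_eq!(upsert_instruction_cost(&mut table, "program_1", 200 / 10), 15);
}
```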

@@ -19,6 +19,7 @@ pub mod cluster_slots_service;
pub mod commitment_service;
pub mod completed_data_sets_service;
pub mod consensus;
pub mod cost_update_service;
pub mod fetch_stage;
pub mod fork_choice;
pub mod gen_keys;
@@ -46,6 +47,7 @@ pub mod sigverify;
pub mod sigverify_shreds;
pub mod sigverify_stage;
pub mod snapshot_packager_service;
pub mod system_monitor_service;
pub mod test_validator;
pub mod tpu;
pub mod tree_diff;


@@ -63,6 +63,11 @@ impl ReplaySlotStats {
("load_us", self.execute_timings.load_us, i64),
("execute_us", self.execute_timings.execute_us, i64),
("store_us", self.execute_timings.store_us, i64),
(
"update_stakes_cache_us",
self.execute_timings.update_stakes_cache_us,
i64
),
(
"total_batches_len",
self.execute_timings.total_batches_len,
@@ -114,6 +119,43 @@ impl ReplaySlotStats {
i64
),
);
let mut per_pubkey_timings: Vec<_> = self
.execute_timings
.details
.per_program_timings
.iter()
.collect();
per_pubkey_timings.sort_by(|a, b| b.1.accumulated_us.cmp(&a.1.accumulated_us));
let (total_us, total_units, total_count) =
per_pubkey_timings
.iter()
.fold((0, 0, 0), |(sum_us, sum_units, sum_count), a| {
(
sum_us + a.1.accumulated_us,
sum_units + a.1.accumulated_units,
sum_count + a.1.count,
)
});
for (pubkey, time) in per_pubkey_timings.iter().take(5) {
datapoint_info!(
"per_program_timings",
("slot", slot as i64, i64),
("pubkey", pubkey.to_string(), String),
("execute_us", time.accumulated_us, i64),
("accumulated_units", time.accumulated_units, i64),
("count", time.count, i64)
);
}
datapoint_info!(
"per_program_timings",
("slot", slot as i64, i64),
("pubkey", "all", String),
("execute_us", total_us, i64),
("accumulated_units", total_units, i64),
("count", total_count, i64)
);
}
}

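The reporting added to `ReplaySlotStats` sorts per-program timings by accumulated execution time, emits the top 5, then folds totals across all programs into one aggregate datapoint. A standalone sketch of the sort-and-fold step, with a simplified stand-in for `ProgramTiming`:

```rust
// Simplified stand-in for solana_runtime's ProgramTiming.
pub struct ProgramTiming {
    pub accumulated_us: u64,
    pub accumulated_units: u64,
    pub count: u32,
}

// Fold (total_us, total_units, total_count) over all programs, as the diff
// does before emitting the aggregate "all" per_program_timings datapoint.
pub fn fold_totals(timings: &[(&str, ProgramTiming)]) -> (u64, u64, u32) {
    timings.iter().fold((0, 0, 0), |(us, units, count), t| {
        (
            us + t.1.accumulated_us,
            units + t.1.accumulated_units,
            count + t.1.count,
        )
    })
}

fn main() {
    let mut timings = vec![
        ("vote", ProgramTiming { accumulated_us: 50, accumulated_units: 10, count: 5 }),
        ("token", ProgramTiming { accumulated_us: 200, accumulated_units: 40, count: 2 }),
    ];
    // Heaviest program first, mirroring the descending sort_by in the diff.
    timings.sort_by(|a, b| b.1.accumulated_us.cmp(&a.1.accumulated_us));
    assert_eq!(timings[0].0, "token");
    assert_eq!(fold_totals(&timings), (250, 50, 7));
}
```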

@@ -13,12 +13,12 @@ use crate::{
consensus::{
ComputedBankState, Stake, SwitchForkDecision, Tower, VotedStakes, SWITCH_FORK_THRESHOLD,
},
cost_update_service::CostUpdate,
fork_choice::{ForkChoice, SelectVoteAndResetForkResult},
heaviest_subtree_fork_choice::HeaviestSubtreeForkChoice,
latest_validator_votes_for_frozen_banks::LatestValidatorVotesForFrozenBanks,
progress_map::{ForkProgress, ProgressMap, PropagatedStats},
repair_service::DuplicateSlotsResetReceiver,
result::Result,
rewards_recorder_service::RewardsRecorderSender,
unfrozen_gossip_verified_vote_hashes::UnfrozenGossipVerifiedVoteHashes,
voting_service::VoteOp,
@@ -41,8 +41,11 @@ use solana_rpc::{
rpc_subscriptions::RpcSubscriptions,
};
use solana_runtime::{
accounts_background_service::AbsRequestSender, bank::Bank, bank_forks::BankForks,
commitment::BlockCommitmentCache, vote_sender_types::ReplayVoteSender,
accounts_background_service::AbsRequestSender,
bank::{Bank, ExecuteTimings, NewBankOptions},
bank_forks::BankForks,
commitment::BlockCommitmentCache,
vote_sender_types::ReplayVoteSender,
};
use solana_sdk::{
clock::{Slot, MAX_PROCESSING_AGE, NUM_CONSECUTIVE_LEADER_SLOTS},
@@ -128,6 +131,7 @@ pub struct ReplayStageConfig {
pub cache_block_meta_sender: Option<CacheBlockMetaSender>,
pub bank_notification_sender: Option<BankNotificationSender>,
pub wait_for_vote_to_start_leader: bool,
pub disable_epoch_boundary_optimization: bool,
}
#[derive(Default)]
@@ -277,7 +281,7 @@ impl ReplayTiming {
"process_duplicate_slots_elapsed",
self.process_duplicate_slots_elapsed as i64,
i64
)
),
);
*self = ReplayTiming::default();
@@ -287,7 +291,7 @@ impl ReplayTiming {
}
pub struct ReplayStage {
t_replay: JoinHandle<Result<()>>,
t_replay: JoinHandle<()>,
commitment_service: AggregateCommitmentService,
}
@@ -311,6 +315,7 @@ impl ReplayStage {
gossip_verified_vote_hash_receiver: GossipVerifiedVoteHashReceiver,
cluster_slots_update_sender: ClusterSlotsUpdateSender,
voting_sender: Sender<VoteOp>,
cost_update_sender: Sender<CostUpdate>,
) -> Self {
let ReplayStageConfig {
my_pubkey,
@@ -327,6 +332,7 @@ impl ReplayStage {
cache_block_meta_sender,
bank_notification_sender,
wait_for_vote_to_start_leader,
disable_epoch_boundary_optimization,
} = config;
trace!("replay stage");
@@ -407,6 +413,7 @@ impl ReplayStage {
&mut unfrozen_gossip_verified_vote_hashes,
&mut latest_validator_votes_for_frozen_banks,
&cluster_slots_update_sender,
&cost_update_sender,
);
replay_active_banks_time.stop();
@@ -689,6 +696,7 @@ impl ReplayStage {
&retransmit_slots_sender,
&mut skipped_slots_info,
has_new_vote_been_rooted,
disable_epoch_boundary_optimization,
);
let poh_bank = poh_recorder.lock().unwrap().bank();
@@ -736,7 +744,6 @@ impl ReplayStage {
process_duplicate_slots_time.as_us(),
);
}
Ok(())
})
.unwrap();
@@ -1069,6 +1076,7 @@ impl ReplayStage {
}
}
#[allow(clippy::too_many_arguments)]
fn maybe_start_leader(
my_pubkey: &Pubkey,
bank_forks: &Arc<RwLock<BankForks>>,
@@ -1079,6 +1087,7 @@ impl ReplayStage {
retransmit_slots_sender: &RetransmitSlotsSender,
skipped_slots_info: &mut SkippedSlotsInfo,
has_new_vote_been_rooted: bool,
disable_epoch_boundary_optimization: bool,
) {
// all the individual calls to poh_recorder.lock() are designed to
// increase granularity, decrease contention
@@ -1196,7 +1205,10 @@ impl ReplayStage {
root_slot,
my_pubkey,
rpc_subscriptions,
vote_only_bank,
NewBankOptions {
vote_only_bank,
disable_epoch_boundary_optimization,
},
);
let tpu_bank = bank_forks.write().unwrap().insert(tpu_bank);
@@ -1679,9 +1691,11 @@ impl ReplayStage {
unfrozen_gossip_verified_vote_hashes: &mut UnfrozenGossipVerifiedVoteHashes,
latest_validator_votes_for_frozen_banks: &mut LatestValidatorVotesForFrozenBanks,
cluster_slots_update_sender: &ClusterSlotsUpdateSender,
cost_update_sender: &Sender<CostUpdate>,
) -> bool {
let mut did_complete_bank = false;
let mut tx_count = 0;
let mut execute_timings = ExecuteTimings::default();
let active_banks = bank_forks.read().unwrap().active_banks();
trace!("active banks {:?}", active_banks);
@@ -1752,6 +1766,12 @@ impl ReplayStage {
}
assert_eq!(*bank_slot, bank.slot());
if bank.is_complete() {
execute_timings.accumulate(&bank_progress.replay_stats.execute_timings);
debug!("bank {} is completed replay from blockstore, contribute to update cost with {:?}",
bank.slot(),
bank_progress.replay_stats.execute_timings
);
bank_progress.replay_stats.report_stats(
bank.slot(),
bank_progress.replay_progress.num_entries,
@@ -1764,6 +1784,13 @@ impl ReplayStage {
transaction_status_sender.send_transaction_status_freeze_message(&bank);
}
bank.freeze();
// report cost tracker stats
cost_update_sender
.send(CostUpdate::FrozenBank { bank: bank.clone() })
.unwrap_or_else(|err| {
warn!("cost_update_sender failed sending bank stats: {:?}", err)
});
let bank_hash = bank.hash();
assert_ne!(bank_hash, Hash::default());
// Needs to be updated before `check_slot_agrees_with_cluster()` so that
@@ -1813,6 +1840,14 @@ impl ReplayStage {
);
}
}
// send accumulated execute timings to cost_update_service
if !execute_timings.details.per_program_timings.is_empty() {
cost_update_sender
.send(CostUpdate::ExecuteTiming { execute_timings })
.unwrap_or_else(|err| warn!("cost_update_sender failed: {:?}", err));
}
inc_new_counter_info!("replay_stage-replay_transactions", tx_count);
did_complete_bank
}
@@ -2452,7 +2487,7 @@ impl ReplayStage {
forks.root(),
&leader,
rpc_subscriptions,
false,
NewBankOptions::default(),
);
let empty: Vec<Pubkey> = vec![];
Self::update_fork_propagated_threshold_from_votes(
@@ -2479,10 +2514,10 @@ impl ReplayStage {
root_slot: u64,
leader: &Pubkey,
rpc_subscriptions: &Arc<RpcSubscriptions>,
vote_only_bank: bool,
new_bank_options: NewBankOptions,
) -> Bank {
rpc_subscriptions.notify_slot(slot, parent.slot(), root_slot);
Bank::new_from_parent_with_vote_only(parent, leader, slot, vote_only_bank)
Bank::new_from_parent_with_options(parent, leader, slot, new_bank_options)
}
fn record_rewards(bank: &Bank, rewards_recorder_sender: &Option<RewardsRecorderSender>) {
@@ -4918,7 +4953,6 @@ mod tests {
);
assert_eq!(tower.last_voted_slot().unwrap(), 1);
}
fn run_compute_and_select_forks(
bank_forks: &RwLock<BankForks>,
progress: &mut ProgressMap,


@@ -28,9 +28,9 @@ use {
solana_runtime::{bank::Bank, bank_forks::BankForks},
solana_sdk::{clock::Slot, epoch_schedule::EpochSchedule, pubkey::Pubkey, timing::timestamp},
std::{
collections::{BTreeSet, HashSet},
collections::{BTreeSet, HashMap, HashSet},
net::UdpSocket,
ops::DerefMut,
ops::{AddAssign, DerefMut},
sync::{
atomic::{AtomicBool, AtomicU64, AtomicUsize, Ordering},
mpsc::{self, channel, RecvTimeoutError},
@@ -47,9 +47,25 @@ const DEFAULT_LRU_SIZE: usize = 10_000;
const CLUSTER_NODES_CACHE_NUM_EPOCH_CAP: usize = 8;
const CLUSTER_NODES_CACHE_TTL: Duration = Duration::from_secs(5);
#[derive(Default)]
struct RetransmitSlotStats {
num_shreds: usize,
num_nodes: usize,
}
impl AddAssign for RetransmitSlotStats {
fn add_assign(&mut self, other: Self) {
*self = Self {
num_shreds: self.num_shreds + other.num_shreds,
num_nodes: self.num_nodes + other.num_nodes,
}
}
}
#[derive(Default)]
struct RetransmitStats {
since: Option<Instant>,
num_nodes: AtomicUsize,
num_shreds: usize,
num_shreds_skipped: AtomicUsize,
total_batches: usize,
@@ -58,6 +74,7 @@ struct RetransmitStats {
epoch_cache_update: u64,
retransmit_total: AtomicU64,
compute_turbine_peers_total: AtomicU64,
slot_stats: HashMap<Slot, RetransmitSlotStats>,
unknown_shred_slot_leader: AtomicUsize,
}
@@ -91,6 +108,7 @@ impl RetransmitStats {
("epoch_fetch", stats.epoch_fetch, i64),
("epoch_cache_update", stats.epoch_cache_update, i64),
("total_batches", stats.total_batches, i64),
("num_nodes", stats.num_nodes.into_inner(), i64),
("num_shreds", stats.num_shreds, i64),
(
"num_shreds_skipped",
@@ -109,6 +127,14 @@ impl RetransmitStats {
i64
),
);
for (slot, stats) in stats.slot_stats {
datapoint_info!(
"retransmit-stage-slot-stats",
("slot", slot, i64),
("num_shreds", stats.num_shreds, i64),
("num_nodes", stats.num_nodes, i64),
);
}
}
}
@@ -216,10 +242,10 @@ fn retransmit(
let my_id = cluster_info.id();
let socket_addr_space = cluster_info.socket_addr_space();
let retransmit_shred = |shred: Shred, socket: &UdpSocket| {
if should_skip_retransmit(&shred, shreds_received) {
let retransmit_shred = |shred: &Shred, socket: &UdpSocket| {
if should_skip_retransmit(shred, shreds_received) {
stats.num_shreds_skipped.fetch_add(1, Ordering::Relaxed);
return;
return 0;
}
let shred_slot = shred.slot();
max_slots
@@ -247,7 +273,7 @@ fn retransmit(
stats
.unknown_shred_slot_leader
.fetch_add(1, Ordering::Relaxed);
return;
return 0;
}
};
let cluster_nodes =
@@ -284,17 +310,52 @@ fn retransmit(
socket_addr_space,
);
retransmit_time.stop();
let num_nodes = if anchor_node {
neighbors.len() + children.len() - 1
} else {
children.len()
};
stats.num_nodes.fetch_add(num_nodes, Ordering::Relaxed);
stats
.retransmit_total
.fetch_add(retransmit_time.as_us(), Ordering::Relaxed);
num_nodes
};
thread_pool.install(|| {
shreds.into_par_iter().with_min_len(4).for_each(|shred| {
let index = thread_pool.current_thread_index().unwrap();
let socket = &sockets[index % sockets.len()];
retransmit_shred(shred, socket);
});
fn merge<K, V>(mut acc: HashMap<K, V>, other: HashMap<K, V>) -> HashMap<K, V>
where
K: Eq + std::hash::Hash,
V: Default + AddAssign,
{
if acc.len() < other.len() {
return merge(other, acc);
}
for (key, value) in other {
*acc.entry(key).or_default() += value;
}
acc
}
let slot_stats = thread_pool.install(|| {
shreds
.into_par_iter()
.with_min_len(4)
.map(|shred| {
let index = thread_pool.current_thread_index().unwrap();
let socket = &sockets[index % sockets.len()];
let num_nodes = retransmit_shred(&shred, socket);
(shred.slot(), num_nodes)
})
.fold(
HashMap::<Slot, RetransmitSlotStats>::new,
|mut acc, (slot, num_nodes)| {
let stats = acc.entry(slot).or_default();
stats.num_nodes += num_nodes;
stats.num_shreds += 1;
acc
},
)
.reduce(HashMap::new, merge)
});
stats.slot_stats = merge(std::mem::take(&mut stats.slot_stats), slot_stats);
timer_start.stop();
stats.total_time += timer_start.as_us();
stats.maybe_submit(&root_bank, &working_bank, cluster_info, cluster_nodes_cache);
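The per-slot aggregation above hinges on the generic `merge` helper: each rayon fold produces a partial `HashMap`, and `merge` folds the smaller map into the larger one so fewer entries are rehashed. A std-only sketch of the same pattern — `SlotStats` here is a hypothetical stand-in for `RetransmitSlotStats`, whose fields are not shown in this diff:

```rust
use std::{collections::HashMap, ops::AddAssign};

// Hypothetical per-slot counters standing in for RetransmitSlotStats.
#[derive(Default, Debug, PartialEq)]
struct SlotStats {
    num_shreds: usize,
    num_nodes: usize,
}

impl AddAssign for SlotStats {
    fn add_assign(&mut self, other: Self) {
        self.num_shreds += other.num_shreds;
        self.num_nodes += other.num_nodes;
    }
}

// Fold the smaller map into the larger one, mirroring the `merge`
// helper in the diff above: at most min(len) entries get rehashed.
fn merge<K, V>(mut acc: HashMap<K, V>, other: HashMap<K, V>) -> HashMap<K, V>
where
    K: Eq + std::hash::Hash,
    V: Default + AddAssign,
{
    if acc.len() < other.len() {
        return merge(other, acc);
    }
    for (key, value) in other {
        *acc.entry(key).or_default() += value;
    }
    acc
}

// Merge two shards of per-slot stats; return (map len, shreds@slot1, nodes@slot1).
fn demo() -> (usize, usize, usize) {
    let mut a = HashMap::new();
    a.insert(1u64, SlotStats { num_shreds: 2, num_nodes: 5 });
    let mut b = HashMap::new();
    b.insert(1u64, SlotStats { num_shreds: 1, num_nodes: 3 });
    b.insert(2u64, SlotStats { num_shreds: 4, num_nodes: 7 });
    let merged = merge(a, b);
    (merged.len(), merged[&1].num_shreds, merged[&1].num_nodes)
}

fn main() {
    assert_eq!(demo(), (2, 3, 8));
}
```

The swap-to-larger trick is the same one used by `reduce(HashMap::new, merge)` above: reduction order is arbitrary, so always accumulating into the bigger side keeps the total rehash work bounded.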
@@ -495,7 +556,7 @@ mod tests {
..ProcessOptions::default()
};
let (bank_forks, cached_leader_schedule) =
process_blockstore(&genesis_config, &blockstore, Vec::new(), opts, None).unwrap();
process_blockstore(&genesis_config, &blockstore, Vec::new(), opts, None, None).unwrap();
let leader_schedule_cache = Arc::new(cached_leader_schedule);
let bank_forks = Arc::new(RwLock::new(bank_forks));


@@ -8,13 +8,13 @@
use crate::sigverify;
use crossbeam_channel::{SendError, Sender as CrossbeamSender};
use solana_measure::measure::Measure;
use solana_metrics::datapoint_debug;
use solana_perf::packet::Packets;
use solana_sdk::timing;
use solana_streamer::streamer::{self, PacketReceiver, StreamerError};
use std::collections::HashMap;
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::thread::{self, Builder, JoinHandle};
use std::time::Instant;
use thiserror::Error;
const MAX_SIGVERIFY_BATCH: usize = 10_000;
@@ -41,6 +41,82 @@ pub trait SigVerifier {
#[derive(Default, Clone)]
pub struct DisabledSigVerifier {}
#[derive(Default)]
struct SigVerifierStats {
recv_batches_us_hist: histogram::Histogram, // time to call recv_batch
verify_batches_pp_us_hist: histogram::Histogram, // per-packet time to call verify_batch
batches_hist: histogram::Histogram, // number of Packets structures per verify call
packets_hist: histogram::Histogram, // number of packets per verify call
total_batches: usize,
total_packets: usize,
}
impl SigVerifierStats {
fn report(&self) {
datapoint_info!(
"sigverify_stage-total_verify_time",
(
"recv_batches_us_90pct",
self.recv_batches_us_hist.percentile(90.0).unwrap_or(0),
i64
),
(
"recv_batches_us_min",
self.recv_batches_us_hist.minimum().unwrap_or(0),
i64
),
(
"recv_batches_us_max",
self.recv_batches_us_hist.maximum().unwrap_or(0),
i64
),
(
"recv_batches_us_mean",
self.recv_batches_us_hist.mean().unwrap_or(0),
i64
),
(
"verify_batches_pp_us_90pct",
self.verify_batches_pp_us_hist.percentile(90.0).unwrap_or(0),
i64
),
(
"verify_batches_pp_us_min",
self.verify_batches_pp_us_hist.minimum().unwrap_or(0),
i64
),
(
"verify_batches_pp_us_max",
self.verify_batches_pp_us_hist.maximum().unwrap_or(0),
i64
),
(
"verify_batches_pp_us_mean",
self.verify_batches_pp_us_hist.mean().unwrap_or(0),
i64
),
(
"batches_90pct",
self.batches_hist.percentile(90.0).unwrap_or(0),
i64
),
("batches_min", self.batches_hist.minimum().unwrap_or(0), i64),
("batches_max", self.batches_hist.maximum().unwrap_or(0), i64),
("batches_mean", self.batches_hist.mean().unwrap_or(0), i64),
(
"packets_90pct",
self.packets_hist.percentile(90.0).unwrap_or(0),
i64
),
("packets_min", self.packets_hist.minimum().unwrap_or(0), i64),
("packets_max", self.packets_hist.maximum().unwrap_or(0), i64),
("packets_mean", self.packets_hist.mean().unwrap_or(0), i64),
("total_batches", self.total_batches, i64),
("total_packets", self.total_packets, i64),
);
}
}
impl SigVerifier for DisabledSigVerifier {
fn verify_batch(&self, mut batch: Vec<Packets>) -> Vec<Packets> {
sigverify::ed25519_verify_disabled(&mut batch);
@@ -92,6 +168,7 @@ impl SigVerifyStage {
recvr: &PacketReceiver,
sendr: &CrossbeamSender<Vec<Packets>>,
verifier: &T,
stats: &mut SigVerifierStats,
) -> Result<()> {
let (mut batches, len, recv_time) = streamer::recv_batch(recvr)?;
@@ -121,6 +198,19 @@ impl SigVerifyStage {
("recv_time", recv_time, i64),
);
stats
.recv_batches_us_hist
.increment(recv_time as u64)
.unwrap();
stats
.verify_batches_pp_us_hist
.increment(verify_batch_time.as_us() / (len as u64))
.unwrap();
stats.batches_hist.increment(batches_len as u64).unwrap();
stats.packets_hist.increment(len as u64).unwrap();
stats.total_batches += batches_len;
stats.total_packets += len;
Ok(())
}
@@ -130,10 +220,14 @@ impl SigVerifyStage {
verifier: &T,
) -> JoinHandle<()> {
let verifier = verifier.clone();
let mut stats = SigVerifierStats::default();
let mut last_print = Instant::now();
Builder::new()
.name("solana-verifier".to_string())
.spawn(move || loop {
if let Err(e) = Self::verifier(&packet_receiver, &verified_sender, &verifier) {
if let Err(e) =
Self::verifier(&packet_receiver, &verified_sender, &verifier, &mut stats)
{
match e {
SigVerifyServiceError::Streamer(StreamerError::RecvTimeout(
RecvTimeoutError::Disconnected,
@@ -147,6 +241,11 @@ impl SigVerifyStage {
_ => error!("{:?}", e),
}
}
if last_print.elapsed().as_secs() > 2 {
stats.report();
stats = SigVerifierStats::default();
last_print = Instant::now();
}
})
.unwrap()
}
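The verifier loop above accumulates a `SigVerifierStats` and, every couple of seconds, reports it and replaces it with a fresh default. A std-only sketch of that report-and-reset cadence — `Stats`, `maybe_report`, and the interval constant are illustrative names, not the crate's API:

```rust
use std::time::{Duration, Instant};

// Illustrative accumulator; the real SigVerifierStats holds histograms too.
#[derive(Default, Debug)]
struct Stats {
    total_packets: usize,
}

const REPORT_INTERVAL: Duration = Duration::from_secs(2);

// When the interval has elapsed, hand back a snapshot and reset the
// accumulator in place, like `stats.report(); stats = default` above.
fn maybe_report(stats: &mut Stats, last_print: &mut Instant) -> Option<Stats> {
    if last_print.elapsed() > REPORT_INTERVAL {
        *last_print = Instant::now();
        Some(std::mem::take(stats))
    } else {
        None
    }
}

fn main() {
    let mut stats = Stats { total_packets: 42 };
    let mut last_print = Instant::now() - Duration::from_secs(3);
    let snapshot = maybe_report(&mut stats, &mut last_print).unwrap();
    assert_eq!(snapshot.total_packets, 42);
    assert_eq!(stats.total_packets, 0); // accumulator reset after reporting
    assert!(maybe_report(&mut stats, &mut last_print).is_none());
}
```

`std::mem::take` makes the reset explicit without an extra allocation or clone, which is why it is a common shape for periodic metrics flushing.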


@@ -0,0 +1,227 @@
use std::{
collections::HashMap,
io::BufRead,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, sleep, Builder, JoinHandle},
time::{Duration, Instant},
};
#[cfg(target_os = "linux")]
use std::{fs::File, io::BufReader, path::Path};
const SAMPLE_INTERVAL: Duration = Duration::from_secs(60);
const SLEEP_INTERVAL: Duration = Duration::from_millis(500);
#[cfg(target_os = "linux")]
const PROC_NET_SNMP_PATH: &str = "/proc/net/snmp";
pub struct SystemMonitorService {
thread_hdl: JoinHandle<()>,
}
#[cfg_attr(not(target_os = "linux"), allow(dead_code))]
struct UdpStats {
in_datagrams: usize,
no_ports: usize,
in_errors: usize,
out_datagrams: usize,
rcvbuf_errors: usize,
sndbuf_errors: usize,
in_csum_errors: usize,
ignored_multi: usize,
}
#[cfg(target_os = "linux")]
fn read_udp_stats(file_path: impl AsRef<Path>) -> Result<UdpStats, String> {
let file = File::open(file_path).map_err(|e| e.to_string())?;
let mut reader = BufReader::new(file);
parse_udp_stats(&mut reader)
}
#[cfg_attr(not(target_os = "linux"), allow(dead_code))]
fn parse_udp_stats(reader: &mut impl BufRead) -> Result<UdpStats, String> {
let mut udp_lines = Vec::default();
for line in reader.lines() {
let line = line.map_err(|e| e.to_string())?;
if line.starts_with("Udp:") {
udp_lines.push(line);
if udp_lines.len() == 2 {
break;
}
}
}
if udp_lines.len() != 2 {
return Err(format!(
"parse error, expected 2 lines, num lines: {}",
udp_lines.len()
));
}
let pairs: Vec<_> = udp_lines[0]
.split_ascii_whitespace()
.zip(udp_lines[1].split_ascii_whitespace())
.collect();
let udp_stats: HashMap<String, usize> = pairs[1..]
.iter()
.map(|(label, val)| (label.to_string(), val.parse::<usize>().unwrap()))
.collect();
let stats = UdpStats {
in_datagrams: *udp_stats.get("InDatagrams").unwrap_or(&0),
no_ports: *udp_stats.get("NoPorts").unwrap_or(&0),
in_errors: *udp_stats.get("InErrors").unwrap_or(&0),
out_datagrams: *udp_stats.get("OutDatagrams").unwrap_or(&0),
rcvbuf_errors: *udp_stats.get("RcvbufErrors").unwrap_or(&0),
sndbuf_errors: *udp_stats.get("SndbufErrors").unwrap_or(&0),
in_csum_errors: *udp_stats.get("InCsumErrors").unwrap_or(&0),
ignored_multi: *udp_stats.get("IgnoredMulti").unwrap_or(&0),
};
Ok(stats)
}
#[cfg(target_os = "linux")]
pub fn verify_udp_stats_access() -> Result<(), String> {
read_udp_stats(PROC_NET_SNMP_PATH)?;
Ok(())
}
#[cfg(not(target_os = "linux"))]
pub fn verify_udp_stats_access() -> Result<(), String> {
Ok(())
}
impl SystemMonitorService {
pub fn new(exit: Arc<AtomicBool>) -> Self {
info!("Starting SystemMonitorService");
let thread_hdl = Builder::new()
.name("system-monitor".to_string())
.spawn(move || {
Self::run(exit);
})
.unwrap();
Self { thread_hdl }
}
#[cfg(target_os = "linux")]
fn process_udp_stats(udp_stats: &mut Option<UdpStats>) {
match read_udp_stats(PROC_NET_SNMP_PATH) {
Ok(new_stats) => {
if let Some(old_stats) = udp_stats {
SystemMonitorService::report_udp_stats(old_stats, &new_stats);
}
*udp_stats = Some(new_stats);
}
Err(e) => warn!("read_udp_stats: {}", e),
}
}
#[cfg(not(target_os = "linux"))]
fn process_udp_stats(_udp_stats: &mut Option<UdpStats>) {}
#[cfg(target_os = "linux")]
fn report_udp_stats(old_stats: &UdpStats, new_stats: &UdpStats) {
datapoint_info!(
"net-stats",
(
"in_datagrams_delta",
new_stats.in_datagrams - old_stats.in_datagrams,
i64
),
(
"no_ports_delta",
new_stats.no_ports - old_stats.no_ports,
i64
),
(
"in_errors_delta",
new_stats.in_errors - old_stats.in_errors,
i64
),
(
"out_datagrams_delta",
new_stats.out_datagrams - old_stats.out_datagrams,
i64
),
(
"rcvbuf_errors_delta",
new_stats.rcvbuf_errors - old_stats.rcvbuf_errors,
i64
),
(
"sndbuf_errors_delta",
new_stats.sndbuf_errors - old_stats.sndbuf_errors,
i64
),
(
"in_csum_errors_delta",
new_stats.in_csum_errors - old_stats.in_csum_errors,
i64
),
(
"ignored_multi_delta",
new_stats.ignored_multi - old_stats.ignored_multi,
i64
),
("in_errors", new_stats.in_errors, i64),
("rcvbuf_errors", new_stats.rcvbuf_errors, i64),
("sndbuf_errors", new_stats.sndbuf_errors, i64),
);
}
pub fn run(exit: Arc<AtomicBool>) {
let mut udp_stats = None;
let mut now = Instant::now();
loop {
if exit.load(Ordering::Relaxed) {
break;
}
if now.elapsed() >= SAMPLE_INTERVAL {
now = Instant::now();
SystemMonitorService::process_udp_stats(&mut udp_stats);
}
sleep(SLEEP_INTERVAL);
}
}
pub fn join(self) -> thread::Result<()> {
self.thread_hdl.join()
}
}
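`report_udp_stats` above emits deltas: the service keeps the previous sample and reports `new - old` for each kernel counter once per interval. A minimal sketch of that delta computation, using only a two-field subset of `UdpStats` and returning a tuple in place of the `datapoint_info!` macro:

```rust
// Two-field subset of UdpStats, for illustration only.
#[derive(Clone, Copy)]
struct UdpStats {
    in_errors: usize,
    rcvbuf_errors: usize,
}

// Per-interval deltas, mirroring the `new_stats.x - old_stats.x`
// expressions in report_udp_stats above.
fn udp_deltas(old: &UdpStats, new: &UdpStats) -> (usize, usize) {
    (
        new.in_errors - old.in_errors,
        new.rcvbuf_errors - old.rcvbuf_errors,
    )
}

fn main() {
    let old = UdpStats { in_errors: 10, rcvbuf_errors: 2 };
    let new = UdpStats { in_errors: 15, rcvbuf_errors: 2 };
    assert_eq!(udp_deltas(&old, &new), (5, 0));
}
```

Note that raw subtraction underflows (and panics in debug builds) if a kernel counter ever resets below the previous sample; `saturating_sub` would be a defensive alternative.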
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_udp_stats() {
let mut mock_snmp =
b"Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors ForwDatagrams InUnknownProtos InDiscards InDelivers OutRequests OutDiscards OutNoRoutes ReasmTimeout ReasmReqds ReasmOKs ReasmFails FragOKs FragFails FragCreates
Ip: 1 64 357 0 2 0 0 0 355 315 0 6 0 0 0 0 0 0 0
Icmp: InMsgs InErrors InCsumErrors InDestUnreachs InTimeExcds InParmProbs InSrcQuenchs InRedirects InEchos InEchoReps InTimestamps InTimestampReps InAddrMasks InAddrMaskReps OutMsgs OutErrors OutDestUnreachs OutTimeExcds OutParmProbs OutSrcQuenchs OutRedirects OutEchos OutEchoReps OutTimestamps OutTimestampReps OutAddrMasks OutAddrMaskReps
Icmp: 3 0 0 3 0 0 0 0 0 0 0 0 0 0 7 0 7 0 0 0 0 0 0 0 0 0 0
IcmpMsg: InType3 OutType3
IcmpMsg: 3 7
Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs InErrs OutRsts InCsumErrors
Tcp: 1 200 120000 -1 29 1 0 0 5 318 279 0 0 4 0
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti
Udp: 27 7 0 30 0 0 0 0
UdpLite: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti
UdpLite: 0 0 0 0 0 0 0 0" as &[u8];
let stats = parse_udp_stats(&mut mock_snmp).unwrap();
assert_eq!(stats.out_datagrams, 30);
assert_eq!(stats.no_ports, 7);
let mut mock_snmp = b"unexpected data" as &[u8];
let stats = parse_udp_stats(&mut mock_snmp);
assert!(stats.is_err());
}
}
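The core trick in `parse_udp_stats` is that `/proc/net/snmp` lists each protocol as a header line of field names followed by a values line with the same `Udp:` prefix; zipping the two lines and skipping the prefix pair yields a name-to-value map. A std-only sketch of that pairing (`zip_snmp_lines` is an illustrative name; the in-tree code slices `pairs[1..]` and unwraps the parse, whereas this sketch drops unparsable fields):

```rust
use std::collections::HashMap;

// Pair a /proc/net/snmp header line with its values line.
// skip(1) discards the ("Udp:", "Udp:") prefix pair.
fn zip_snmp_lines(header: &str, values: &str) -> HashMap<String, usize> {
    header
        .split_ascii_whitespace()
        .zip(values.split_ascii_whitespace())
        .skip(1)
        .filter_map(|(label, val)| val.parse().ok().map(|v| (label.to_string(), v)))
        .collect()
}

fn main() {
    let map = zip_snmp_lines(
        "Udp: InDatagrams NoPorts InErrors OutDatagrams",
        "Udp: 27 7 0 30",
    );
    assert_eq!(map["OutDatagrams"], 30);
    assert_eq!(map["NoPorts"], 7);
}
```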


@@ -22,6 +22,7 @@ use solana_rpc::{
};
use solana_runtime::{
bank_forks::BankForks,
cost_model::CostModel,
vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender},
};
use std::{
@@ -71,6 +72,7 @@ impl Tpu {
bank_notification_sender: Option<BankNotificationSender>,
tpu_coalesce_ms: u64,
cluster_confirmed_slot_sender: GossipDuplicateConfirmedSlotsSender,
cost_model: &Arc<RwLock<CostModel>>,
) -> Self {
let (packet_sender, packet_receiver) = channel();
let (vote_packet_sender, vote_packet_receiver) = channel();
@@ -128,6 +130,7 @@ impl Tpu {
verified_gossip_vote_packets_receiver,
transaction_status_sender,
replay_vote_sender,
cost_model.clone(),
);
let broadcast_stage = broadcast_type.new_broadcast_stage(


@@ -12,6 +12,7 @@ use crate::{
cluster_slots::ClusterSlots,
completed_data_sets_service::CompletedDataSetsSender,
consensus::Tower,
cost_update_service::CostUpdateService,
ledger_cleanup_service::LedgerCleanupService,
replay_stage::{ReplayStage, ReplayStageConfig},
retransmit_stage::RetransmitStage,
@@ -40,6 +41,7 @@ use solana_runtime::{
accounts_db::AccountShrinkThreshold,
bank_forks::{BankForks, SnapshotConfig},
commitment::BlockCommitmentCache,
cost_model::CostModel,
vote_sender_types::ReplayVoteSender,
};
use solana_sdk::{
@@ -67,6 +69,7 @@ pub struct Tvu {
accounts_background_service: AccountsBackgroundService,
accounts_hash_verifier: AccountsHashVerifier,
voting_service: VotingService,
cost_update_service: CostUpdateService,
}
pub struct Sockets {
@@ -91,6 +94,7 @@ pub struct TvuConfig {
pub rocksdb_max_compaction_jitter: Option<u64>,
pub wait_for_vote_to_start_leader: bool,
pub accounts_shrink_ratio: AccountShrinkThreshold,
pub disable_epoch_boundary_optimization: bool,
}
impl Tvu {
@@ -130,6 +134,7 @@ impl Tvu {
gossip_confirmed_slots_receiver: GossipDuplicateConfirmedSlotsReceiver,
tvu_config: TvuConfig,
max_slots: &Arc<MaxSlots>,
cost_model: &Arc<RwLock<CostModel>>,
) -> Self {
let keypair: Arc<Keypair> = cluster_info.keypair.clone();
@@ -273,6 +278,7 @@ impl Tvu {
cache_block_meta_sender,
bank_notification_sender,
wait_for_vote_to_start_leader: tvu_config.wait_for_vote_to_start_leader,
disable_epoch_boundary_optimization: tvu_config.disable_epoch_boundary_optimization,
};
let (voting_sender, voting_receiver) = channel();
@@ -283,6 +289,14 @@ impl Tvu {
bank_forks.clone(),
);
let (cost_update_sender, cost_update_receiver) = channel();
let cost_update_service = CostUpdateService::new(
exit.clone(),
blockstore.clone(),
cost_model.clone(),
cost_update_receiver,
);
let replay_stage = ReplayStage::new(
replay_stage_config,
blockstore.clone(),
@@ -301,6 +315,7 @@ impl Tvu {
gossip_verified_vote_hash_receiver,
cluster_slots_update_sender,
voting_sender,
cost_update_sender,
);
let ledger_cleanup_service = tvu_config.max_ledger_shreds.map(|max_ledger_shreds| {
@@ -332,6 +347,7 @@ impl Tvu {
accounts_background_service,
accounts_hash_verifier,
voting_service,
cost_update_service,
}
}
@@ -346,6 +362,7 @@ impl Tvu {
self.replay_stage.join()?;
self.accounts_hash_verifier.join()?;
self.voting_service.join()?;
self.cost_update_service.join()?;
Ok(())
}
}
@@ -453,6 +470,7 @@ pub mod tests {
gossip_confirmed_slots_receiver,
TvuConfig::default(),
&Arc::new(MaxSlots::default()),
&Arc::new(RwLock::new(CostModel::default())),
);
exit.store(true, Ordering::Relaxed);
tvu.join().unwrap();


@@ -1,5 +1,6 @@
//! The `validator` module hosts all the validator microservices.
pub use solana_perf::report_target_features;
use {
crate::{
broadcast_stage::BroadcastStageType,
@@ -13,11 +14,13 @@ use {
serve_repair_service::ServeRepairService,
sigverify,
snapshot_packager_service::{PendingSnapshotPackage, SnapshotPackagerService},
system_monitor_service::{verify_udp_stats_access, SystemMonitorService},
tpu::{Tpu, DEFAULT_TPU_COALESCE_MS},
tvu::{Sockets, Tvu, TvuConfig},
},
crossbeam_channel::{bounded, unbounded},
rand::{thread_rng, Rng},
solana_accountsdb_plugin_manager::accountsdb_plugin_service::AccountsDbPluginService,
solana_gossip::{
cluster_info::{
ClusterInfo, Node, DEFAULT_CONTACT_DEBUG_INTERVAL_MILLIS,
@@ -42,6 +45,7 @@ use {
poh_recorder::{PohRecorder, GRACE_TICKS_FACTOR, MAX_GRACE_SLOTS},
poh_service::{self, PohService},
},
solana_rpc::send_transaction_service,
solana_rpc::{
max_slots::MaxSlots,
optimistically_confirmed_bank_tracker::{
@@ -57,9 +61,11 @@ use {
solana_runtime::{
accounts_db::AccountShrinkThreshold,
accounts_index::AccountSecondaryIndexes,
accounts_update_notifier_interface::AccountsUpdateNotifier,
bank::Bank,
bank_forks::{BankForks, SnapshotConfig},
commitment::BlockCommitmentCache,
cost_model::CostModel,
hardened_unpack::{open_genesis_config, MAX_GENESIS_ARCHIVE_UNPACKED_SIZE},
},
solana_sdk::{
@@ -103,6 +109,7 @@ pub struct ValidatorConfig {
pub account_paths: Vec<PathBuf>,
pub account_shrink_paths: Option<Vec<PathBuf>>,
pub rpc_config: JsonRpcConfig,
pub accountsdb_plugin_config_files: Option<Vec<PathBuf>>,
pub rpc_addrs: Option<(SocketAddr, SocketAddr)>, // (JsonRpc, JsonRpcPubSub)
pub pubsub_config: PubSubConfig,
pub snapshot_config: Option<SnapshotConfig>,
@@ -132,8 +139,7 @@ pub struct ValidatorConfig {
pub contact_debug_interval: u64,
pub contact_save_interval: u64,
pub bpf_jit: bool,
pub send_transaction_retry_ms: u64,
pub send_transaction_leader_forward_count: u64,
pub send_transaction_service_config: send_transaction_service::Config,
pub no_poh_speed_test: bool,
pub poh_pinned_cpu_core: usize,
pub poh_hashes_per_batch: u64,
@@ -147,6 +153,7 @@ pub struct ValidatorConfig {
pub validator_exit: Arc<RwLock<Exit>>,
pub no_wait_for_vote_to_start_leader: bool,
pub accounts_shrink_ratio: AccountShrinkThreshold,
pub disable_epoch_boundary_optimization: bool,
}
impl Default for ValidatorConfig {
@@ -161,6 +168,7 @@ impl Default for ValidatorConfig {
account_paths: Vec::new(),
account_shrink_paths: None,
rpc_config: JsonRpcConfig::default(),
accountsdb_plugin_config_files: None,
rpc_addrs: None,
pubsub_config: PubSubConfig::default(),
snapshot_config: None,
@@ -189,8 +197,7 @@ impl Default for ValidatorConfig {
contact_debug_interval: DEFAULT_CONTACT_DEBUG_INTERVAL_MILLIS,
contact_save_interval: DEFAULT_CONTACT_SAVE_INTERVAL_MILLIS,
bpf_jit: false,
send_transaction_retry_ms: 2000,
send_transaction_leader_forward_count: 2,
send_transaction_service_config: send_transaction_service::Config::default(),
no_poh_speed_test: true,
poh_pinned_cpu_core: poh_service::DEFAULT_PINNED_CPU_CORE,
poh_hashes_per_batch: poh_service::DEFAULT_HASHES_PER_BATCH,
@@ -204,6 +211,7 @@ impl Default for ValidatorConfig {
validator_exit: Arc::new(RwLock::new(Exit::default())),
no_wait_for_vote_to_start_leader: true,
accounts_shrink_ratio: AccountShrinkThreshold::default(),
disable_epoch_boundary_optimization: false,
}
}
}
@@ -254,6 +262,7 @@ pub struct Validator {
transaction_status_service: Option<TransactionStatusService>,
rewards_recorder_service: Option<RewardsRecorderService>,
cache_block_meta_service: Option<CacheBlockMetaService>,
system_monitor_service: Option<SystemMonitorService>,
sample_performance_service: Option<SamplePerformanceService>,
gossip_service: GossipService,
serve_repair_service: ServeRepairService,
@@ -264,6 +273,7 @@ pub struct Validator {
tpu: Tpu,
tvu: Tvu,
ip_echo_server: Option<solana_net_utils::IpEchoServer>,
accountsdb_plugin_service: Option<AccountsDbPluginService>,
}
// in the distant future, get rid of ::new()/exit() and use Result properly...
@@ -300,6 +310,27 @@ impl Validator {
warn!("identity: {}", id);
warn!("vote account: {}", vote_account);
let mut bank_notification_senders = Vec::new();
let accountsdb_plugin_service =
if let Some(accountsdb_plugin_config_files) = &config.accountsdb_plugin_config_files {
let (confirmed_bank_sender, confirmed_bank_receiver) = unbounded();
bank_notification_senders.push(confirmed_bank_sender);
let result = AccountsDbPluginService::new(
confirmed_bank_receiver,
accountsdb_plugin_config_files,
);
match result {
Ok(accountsdb_plugin_service) => Some(accountsdb_plugin_service),
Err(err) => {
error!("Failed to load the AccountsDb plugin: {:?}", err);
abort();
}
}
} else {
None
};
if config.voting_disabled {
warn!("voting disabled");
authorized_voter_keypairs.write().unwrap().clear();
@@ -393,10 +424,19 @@ impl Validator {
config.enforce_ulimit_nofile,
&start_progress,
config.no_poh_speed_test,
accountsdb_plugin_service
.as_ref()
.map(|plugin_service| plugin_service.get_accounts_update_notifier()),
);
*start_progress.write().unwrap() = ValidatorStartProgress::StartingServices;
verify_udp_stats_access().unwrap_or_else(|err| {
error!("Failed to access UDP stats: {}", err);
abort();
});
let system_monitor_service = Some(SystemMonitorService::new(Arc::clone(&exit)));
let leader_schedule_cache = Arc::new(leader_schedule_cache);
let bank = bank_forks.working_bank();
if let Some(ref shrink_paths) = config.account_shrink_paths {
@@ -530,6 +570,11 @@ impl Validator {
));
}
let (bank_notification_sender, bank_notification_receiver) = unbounded();
let confirmed_bank_subscribers = if !bank_notification_senders.is_empty() {
Some(Arc::new(RwLock::new(bank_notification_senders)))
} else {
None
};
(
Some(JsonRpcService::new(
rpc_addr,
@@ -546,8 +591,7 @@ impl Validator {
config.trusted_validators.clone(),
rpc_override_health_check.clone(),
optimistically_confirmed_bank.clone(),
config.send_transaction_retry_ms,
config.send_transaction_leader_forward_count,
config.send_transaction_service_config.clone(),
max_slots.clone(),
leader_schedule_cache.clone(),
max_complete_transaction_status_slot,
@@ -574,6 +618,7 @@ impl Validator {
bank_forks.clone(),
optimistically_confirmed_bank,
rpc_subscriptions.clone(),
confirmed_bank_subscribers,
)),
Some(bank_notification_sender),
)
@@ -679,6 +724,10 @@ impl Validator {
bank_forks.read().unwrap().root_bank().deref(),
));
let mut cost_model = CostModel::default();
cost_model.initialize_cost_table(&blockstore.read_program_costs().unwrap());
let cost_model = Arc::new(RwLock::new(cost_model));
let (retransmit_slots_sender, retransmit_slots_receiver) = unbounded();
let (verified_vote_sender, verified_vote_receiver) = unbounded();
let (gossip_verified_vote_hash_sender, gossip_verified_vote_hash_receiver) = unbounded();
@@ -753,8 +802,10 @@ impl Validator {
rocksdb_max_compaction_jitter: config.rocksdb_compaction_interval,
wait_for_vote_to_start_leader,
accounts_shrink_ratio: config.accounts_shrink_ratio,
disable_epoch_boundary_optimization: config.disable_epoch_boundary_optimization,
},
&max_slots,
&cost_model,
);
let tpu = Tpu::new(
@@ -781,6 +832,7 @@ impl Validator {
bank_notification_sender,
config.tpu_coalesce_ms,
cluster_confirmed_slot_sender,
&cost_model,
);
datapoint_info!("validator-new", ("id", id.to_string(), String));
@@ -795,6 +847,7 @@ impl Validator {
transaction_status_service,
rewards_recorder_service,
cache_block_meta_service,
system_monitor_service,
sample_performance_service,
snapshot_packager_service,
completed_data_sets_service,
@@ -804,6 +857,7 @@ impl Validator {
poh_recorder,
ip_echo_server,
validator_exit: config.validator_exit.clone(),
accountsdb_plugin_service,
}
}
@@ -884,6 +938,12 @@ impl Validator {
.expect("cache_block_meta_service");
}
if let Some(system_monitor_service) = self.system_monitor_service {
system_monitor_service
.join()
.expect("system_monitor_service");
}
if let Some(sample_performance_service) = self.sample_performance_service {
sample_performance_service
.join()
@@ -906,6 +966,12 @@ impl Validator {
if let Some(ip_echo_server) = self.ip_echo_server {
ip_echo_server.shutdown_background();
}
if let Some(accountsdb_plugin_service) = self.accountsdb_plugin_service {
accountsdb_plugin_service
.join()
.expect("accountsdb_plugin_service");
}
}
}
@@ -1039,6 +1105,7 @@ fn post_process_restored_tower(
}
#[allow(clippy::type_complexity)]
#[allow(clippy::too_many_arguments)]
fn new_banks_from_ledger(
validator_identity: &Pubkey,
vote_account: &Pubkey,
@@ -1049,6 +1116,7 @@ fn new_banks_from_ledger(
enforce_ulimit_nofile: bool,
start_progress: &Arc<RwLock<ValidatorStartProgress>>,
no_poh_speed_test: bool,
accounts_update_notifier: Option<AccountsUpdateNotifier>,
) -> (
GenesisConfig,
BankForks,
@@ -1165,6 +1233,7 @@ fn new_banks_from_ledger(
transaction_history_services
.cache_block_meta_sender
.as_ref(),
accounts_update_notifier,
)
.unwrap_or_else(|err| {
error!("Failed to load ledger: {:?}", err);
@@ -1445,76 +1514,6 @@ fn wait_for_supermajority(
Ok(true)
}
fn is_rosetta_emulated() -> bool {
#[cfg(target_os = "macos")]
{
use std::str::FromStr;
std::process::Command::new("sysctl")
.args(&["-in", "sysctl.proc_translated"])
.output()
.map_err(|_| ())
.and_then(|output| String::from_utf8(output.stdout).map_err(|_| ()))
.and_then(|stdout| u8::from_str(stdout.trim()).map_err(|_| ()))
.map(|enabled| enabled == 1)
.unwrap_or(false)
}
#[cfg(not(target_os = "macos"))]
{
false
}
}
pub fn report_target_features() {
warn!(
"CUDA is {}abled",
if solana_perf::perf_libs::api().is_some() {
"en"
} else {
"dis"
}
);
if !is_rosetta_emulated() {
unsafe { check_avx() };
unsafe { check_avx2() };
}
}
// Validator binaries built on a machine with AVX support will generate invalid opcodes
// when run on machines without AVX, causing a non-obvious process abort. Instead, detect
// the mismatch and error cleanly.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx")]
unsafe fn check_avx() {
if is_x86_feature_detected!("avx") {
info!("AVX detected");
} else {
error!(
"Incompatible CPU detected: missing AVX support. Please build from source on the target"
);
abort();
}
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
unsafe fn check_avx() {}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx2")]
unsafe fn check_avx2() {
if is_x86_feature_detected!("avx2") {
info!("AVX2 detected");
} else {
error!(
"Incompatible CPU detected: missing AVX2 support. Please build from source on the target"
);
abort();
}
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
unsafe fn check_avx2() {}
// Get the activated stake percentage (based on the provided bank) that is visible in gossip
fn get_stake_percent_in_gossip(bank: &Bank, cluster_info: &ClusterInfo, log: bool) -> u64 {
let mut online_stake = 0;


@@ -50,6 +50,12 @@ struct WindowServiceMetrics {
num_shreds_received: u64,
shred_receiver_elapsed_us: u64,
prune_shreds_elapsed_us: u64,
num_shreds_pruned_invalid_repair: usize,
num_errors: u64,
num_errors_blockstore: u64,
num_errors_cross_beam_recv_timeout: u64,
num_errors_other: u64,
num_errors_try_crossbeam_send: u64,
}
impl WindowServiceMetrics {
@@ -68,8 +74,39 @@ impl WindowServiceMetrics {
self.prune_shreds_elapsed_us as i64,
i64
),
(
"num_shreds_pruned_invalid_repair",
self.num_shreds_pruned_invalid_repair,
i64
),
("num_errors", self.num_errors, i64),
("num_errors_blockstore", self.num_errors_blockstore, i64),
("num_errors_other", self.num_errors_other, i64),
(
"num_errors_try_crossbeam_send",
self.num_errors_try_crossbeam_send,
i64
),
(
"num_errors_cross_beam_recv_timeout",
self.num_errors_cross_beam_recv_timeout,
i64
),
);
}
fn record_error(&mut self, err: &Error) {
self.num_errors += 1;
match err {
Error::TryCrossbeamSend => self.num_errors_try_crossbeam_send += 1,
Error::CrossbeamRecvTimeout(_) => self.num_errors_cross_beam_recv_timeout += 1,
Error::Blockstore(err) => {
self.num_errors_blockstore += 1;
error!("blockstore error: {}", err);
}
_ => self.num_errors_other += 1,
}
}
}
#[derive(Default)]
@@ -158,6 +195,9 @@ pub(crate) fn should_retransmit_and_persist(
} else if shred.index() >= MAX_DATA_SHREDS_PER_SLOT as u32 {
inc_new_counter_warn!("streamer-recv_window-shred_index_overrun", 1);
false
} else if shred.data_header.size as usize > shred.payload.len() {
inc_new_counter_warn!("streamer-recv_window-shred_bad_meta_size", 1);
false
} else {
true
}
@@ -263,6 +303,7 @@ fn run_insert<F>(
where
F: Fn(Shred),
{
ws_metrics.run_insert_count += 1;
let mut shred_receiver_elapsed = Measure::start("shred_receiver_elapsed");
let timer = Duration::from_millis(200);
let (mut shreds, mut repair_infos) = shred_receiver.recv_timeout(timer)?;
@@ -271,15 +312,19 @@ where
repair_infos.extend(more_repair_infos);
}
shred_receiver_elapsed.stop();
ws_metrics.shred_receiver_elapsed_us += shred_receiver_elapsed.as_us();
ws_metrics.num_shreds_received += shreds.len() as u64;
let mut prune_shreds_elapsed = Measure::start("prune_shreds_elapsed");
let num_shreds = shreds.len();
prune_shreds_invalid_repair(&mut shreds, &mut repair_infos, outstanding_requests);
ws_metrics.num_shreds_pruned_invalid_repair = num_shreds - shreds.len();
let repairs: Vec<_> = repair_infos
.iter()
.map(|repair_info| repair_info.is_some())
.collect();
prune_shreds_elapsed.stop();
ws_metrics.prune_shreds_elapsed_us += prune_shreds_elapsed.as_us();
let (completed_data_sets, inserted_indices) = blockstore.insert_shreds_handle_duplicate(
shreds,
@@ -297,11 +342,6 @@ where
}
completed_data_sets_sender.try_send(completed_data_sets)?;
ws_metrics.run_insert_count += 1;
ws_metrics.shred_receiver_elapsed_us += shred_receiver_elapsed.as_us();
ws_metrics.prune_shreds_elapsed_us += prune_shreds_elapsed.as_us();
Ok(())
}
@@ -561,6 +601,7 @@ impl WindowService {
&retransmit_sender,
&outstanding_requests,
) {
ws_metrics.record_error(&e);
if Self::should_exit_on_error(e, &mut handle_timeout, &handle_error) {
break;
}
@@ -729,7 +770,7 @@ mod test {
));
let cache = Arc::new(LeaderScheduleCache::new_from_bank(&bank));
let mut shreds = local_entries_to_shred(&[Entry::default()], 0, 0, &leader_keypair);
let shreds = local_entries_to_shred(&[Entry::default()], 0, 0, &leader_keypair);
// with a Bank for slot 0, shred continues
assert!(should_retransmit_and_persist(
@@ -781,9 +822,22 @@ mod test {
));
// with a Bank and no idea who leader is, shred gets thrown out
shreds[0].set_slot(MINIMUM_SLOTS_PER_EPOCH as u64 * 3);
let mut bad_slot_shred = shreds[0].clone();
bad_slot_shred.set_slot(MINIMUM_SLOTS_PER_EPOCH as u64 * 3);
assert!(!should_retransmit_and_persist(
&shreds[0],
&bad_slot_shred,
Some(bank.clone()),
&cache,
&me_id,
0,
0
));
// with a bad header size
let mut bad_header_shred = shreds[0].clone();
bad_header_shred.data_header.size = (bad_header_shred.payload.len() + 1) as u16;
assert!(!should_retransmit_and_persist(
&bad_header_shred,
Some(bank.clone()),
&cache,
&me_id,


@@ -109,6 +109,7 @@ mod tests {
false,
accounts_db::AccountShrinkThreshold::default(),
false,
None,
);
bank0.freeze();
let mut bank_forks = BankForks::new(bank0);
@@ -172,6 +173,7 @@ mod tests {
accounts_db::AccountShrinkThreshold::default(),
check_hash_calculation,
false,
None,
)
.unwrap();


@@ -1,6 +1,6 @@
[package]
name = "solana-crate-features"
version = "1.7.14"
version = "1.8.2"
description = "Solana Crate Features"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"


@@ -41,7 +41,6 @@ module.exports = {
"cli/choose-a-cluster",
"cli/transfer-tokens",
"cli/delegate-stake",
"cli/manage-stake-accounts",
"cli/deploy-a-program",
"offline-signing",
"offline-signing/durable-nonce",
@@ -65,6 +64,7 @@ module.exports = {
items: [
"developing/clients/jsonrpc-api",
"developing/clients/javascript-api",
"developing/clients/javascript-reference",
"developing/clients/rust-api",
],
},


@@ -1,6 +1,10 @@
---
title: Delegate Stake
title: Staking
---
For an overview of staking, first read the
[Staking and Inflation FAQ](https://solana.com/staking).
------
After you have [received SOL](transfer-tokens.md), you might consider putting
it to use by delegating _stake_ to a validator. Stake is what we call tokens


@@ -132,8 +132,7 @@ Recover the intermediate account's ephemeral keypair file with
valley flat great hockey share token excess clever benefit traffic avocado athlete
==================================================================================
To resume a deploy, pass the recovered keypair as
the [PROGRAM_ADDRESS_SIGNER] argument to `solana deploy` or
as the [BUFFER_SIGNER] to `solana program deploy` or `solana write-buffer`.
the [BUFFER_SIGNER] to `solana program deploy` or `solana write-buffer`.
Or to recover the account's lamports, pass it as the
[BUFFER_ACCOUNT_ADDRESS] argument to `solana program drain`.
==================================================================================
@@ -243,16 +242,6 @@ Or anytime after:
solana program set-upgrade-authority <PROGRAM_ADDRESS> --final
```
`solana program deploy ...` utilizes Solana's upgradeable loader, but there is
another way to deploy immutable programs using the original on-chain loader:
```bash
solana deploy <PROGRAM_FILEPATH>
```
Programs deployed with `solana deploy ...` are not redeployable and are not
compatible with the `solana program ...` commands.
### Dumping a program to a file
The deployed program may be dumped back to a local file:

View File

@@ -1,78 +0,0 @@
---
title: Manage Stake Accounts
---
If you want to delegate stake to many different validators, you will need
to create a separate stake account for each. If you follow the convention
of creating the first stake account at seed "0", the second at "1", the
third at "2", and so on, then the `solana-stake-accounts` tool will allow
you to operate on all accounts with single invocations. You can use it to
sum up the balances of all accounts, move accounts to a new wallet, or set
new authorities.
## Usage
### Create a stake account
Create and fund a derived stake account at the stake authority public key:
```bash
solana-stake-accounts new <FUNDING_KEYPAIR> <BASE_KEYPAIR> <AMOUNT> \
--stake-authority <PUBKEY> --withdraw-authority <PUBKEY> \
--fee-payer <KEYPAIR>
```
### Count accounts
Count the number of derived accounts:
```bash
solana-stake-accounts count <BASE_PUBKEY>
```
### Get stake account balances
Sum the balance of derived stake accounts:
```bash
solana-stake-accounts balance <BASE_PUBKEY> --num-accounts <NUMBER>
```
### Get stake account addresses
List the address of each stake account derived from the given public key:
```bash
solana-stake-accounts addresses <BASE_PUBKEY> --num-accounts <NUMBER>
```
### Set new authorities
Set new authorities on each derived stake account:
```bash
solana-stake-accounts authorize <BASE_PUBKEY> \
--stake-authority <KEYPAIR> --withdraw-authority <KEYPAIR> \
--new-stake-authority <PUBKEY> --new-withdraw-authority <PUBKEY> \
--num-accounts <NUMBER> --fee-payer <KEYPAIR>
```
### Relocate stake accounts
Relocate stake accounts:
```bash
solana-stake-accounts rebase <BASE_PUBKEY> <NEW_BASE_KEYPAIR> \
--stake-authority <KEYPAIR> --num-accounts <NUMBER> \
--fee-payer <KEYPAIR>
```
To atomically rebase and authorize each stake account, use the 'move'
command:
```bash
solana-stake-accounts move <BASE_PUBKEY> <NEW_BASE_KEYPAIR> \
--stake-authority <KEYPAIR> --withdraw-authority <KEYPAIR> \
--new-stake-authority <PUBKEY> --new-withdraw-authority <PUBKEY> \
--num-accounts <NUMBER> --fee-payer <KEYPAIR>
```

View File

@@ -76,6 +76,7 @@ Major releases:
- [`solana-program`](https://docs.rs/solana-program/) - Rust SDK for writing programs
- [`solana-client`](https://docs.rs/solana-client/) - Rust client for connecting to RPC API
- [`solana-cli-config`](https://docs.rs/solana-cli-config/) - Rust client for managing Solana CLI config files
- [`solana-accountsdb-plugin-interface`](https://docs.rs/solana-accountsdb-plugin-interface/) - Rust interface for developing Solana AccountsDb plugins.
Patch releases:

View File

@@ -2,4 +2,331 @@
title: Web3 JavaScript API
---
See [solana-web3](https://solana-labs.github.io/solana-web3.js/).
## What is Solana-Web3.js?
The Solana-Web3.js library aims to provide complete coverage of Solana. The library was built on top of the [Solana JSON RPC API](https://docs.solana.com/developing/clients/jsonrpc-api).
## Common Terminology
| Term | Definition |
|-------------|------------------------|
| Program | Stateless executable code written to interpret instructions. Programs are capable of performing actions based on the instructions provided. |
| Instruction | The smallest unit of a program that a client can include in a transaction. Within its processing code, an instruction may contain one or more cross-program invocations. |
| Transaction | One or more instructions signed by the client using one or more Keypairs and executed atomically with only two possible outcomes: success or failure. |
For the full list of terms, see [Solana terminology](https://docs.solana.com/terminology#cross-program-invocation).
## Getting Started
### Installation
#### yarn
```bash
$ yarn add @solana/web3.js
```
#### npm
```bash
$ npm install --save @solana/web3.js
```
#### Bundle
```html
<!-- Development (un-minified) -->
<script src="https://unpkg.com/@solana/web3.js@latest/lib/index.iife.js"></script>
<!-- Production (minified) -->
<script src="https://unpkg.com/@solana/web3.js@latest/lib/index.iife.min.js"></script>
```
### Usage
#### Javascript
```javascript
const solanaWeb3 = require('@solana/web3.js');
console.log(solanaWeb3);
```
#### ES6
```javascript
import * as solanaWeb3 from '@solana/web3.js';
console.log(solanaWeb3);
```
#### Browser Bundle
```javascript
// solanaWeb3 is provided in the global namespace by the bundle script
console.log(solanaWeb3);
```
## Quickstart
### Connecting to a Wallet
For users to interact with your dApp or application on Solana, they will need access to their Keypair. A Keypair is a private key with a matching public key, used to sign transactions.
There are two ways to obtain a Keypair:
1. Generate a new Keypair
2. Obtain a Keypair using the secret key
You can obtain a new Keypair with the following:
```javascript
const {Keypair} = require("@solana/web3.js");
let keypair = Keypair.generate();
```
This will generate a brand new Keypair for a user to fund and use within your application.
You can allow entry of the secretKey using a textbox, and obtain the Keypair with `Keypair.fromSecretKey(secretKey)`.
```javascript
const {Keypair} = require("@solana/web3.js");
let secretKey = Uint8Array.from([
202, 171, 192, 129, 150, 189, 204, 241, 142, 71, 205,
2, 81, 97, 2, 176, 48, 81, 45, 1, 96, 138,
220, 132, 231, 131, 120, 77, 66, 40, 97, 172, 91,
245, 84, 221, 157, 190, 9, 145, 176, 130, 25, 43,
72, 107, 190, 229, 75, 88, 191, 136, 7, 167, 109,
91, 170, 164, 186, 15, 142, 36, 12, 23
]);
let keypair = Keypair.fromSecretKey(secretKey);
```
Many wallets today allow users to bring their Keypair using a variety of extensions or web wallets. The general recommendation is to use wallets, not Keypairs, to sign transactions. The wallet creates a layer of separation between the dApp and the Keypair, ensuring that the dApp never has access to the secret key. You can find ways to connect to external wallets with the [wallet-adapter](https://github.com/solana-labs/wallet-adapter) library.
### Creating and Sending Transactions
To interact with programs on Solana, you create, sign, and send transactions to the network. Transactions are collections of instructions with signatures. The order in which instructions appear in a transaction determines the order in which they are executed.
A transaction in Solana-Web3.js is created using the [`Transaction`](javascript-api.md#Transaction) object and adding desired messages, addresses, or instructions.
Take the example of a transfer transaction:
```javascript
const {Keypair, Transaction, SystemProgram, LAMPORTS_PER_SOL} = require("@solana/web3.js");
let fromKeypair = Keypair.generate();
let toKeypair = Keypair.generate();
let transaction = new Transaction();
transaction.add(
SystemProgram.transfer({
fromPubkey: fromKeypair.publicKey,
toPubkey: toKeypair.publicKey,
lamports: LAMPORTS_PER_SOL
})
);
```
The above code creates a transaction ready to be signed and broadcast to the network. The `SystemProgram.transfer` instruction was added to the transaction, containing the amount of lamports to send, and the `to` and `from` public keys.
All that is left is to sign the transaction with the keypair and send it over the network. Use `sendAndConfirmTransaction` if you wish to alert the user or do something after the transaction is finished, or use `sendTransaction` if you don't need to wait for the transaction to be confirmed.
```javascript
const {sendAndConfirmTransaction, clusterApiUrl, Connection} = require("@solana/web3.js");
let connection = new Connection(clusterApiUrl('testnet'));
sendAndConfirmTransaction(
connection,
transaction,
[fromKeypair]
);
```
The above code takes the previously created transaction and sends it over the network. You use `Connection` to define which Solana network you are connecting to, namely `mainnet-beta`, `testnet`, or `devnet`.
### Interacting with Custom Programs
The previous section covers sending basic transactions. In Solana, everything you do interacts with different programs, including the previous section's transfer transaction. At the time of writing, programs on Solana are written in either Rust or C.
Let's look at the `SystemProgram`. The method signature for allocating space in your account on Solana in Rust looks like this:
```rust
pub fn allocate(
pubkey: &Pubkey,
space: u64
) -> Instruction
```
In Solana, when you want to interact with a program, you must first know all the accounts you will be interacting with.
You must always provide every account that the program will interact with in the instruction, and you must indicate whether each account is `isSigner` or `isWritable`.
In the `allocate` method above, a single account `pubkey` is required, as well as an amount of `space` for allocation. We know that the `allocate` method writes to the account by allocating space within it, making the `pubkey` required to be `isWritable`. `isSigner` is required when you are designating the account that is running the instruction. In this case, the signer is the account calling to allocate space within itself.
Let's look at how to call this instruction using solana-web3.js:
```javascript
let keypair = web3.Keypair.generate();
let payer = web3.Keypair.generate();
let connection = new web3.Connection(web3.clusterApiUrl('testnet'));
let airdropSignature = await connection.requestAirdrop(
payer.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
```
First, we set up the account Keypair and connection so that we have an account to allocate on the testnet. We also create a payer Keypair and airdrop some SOL so we can pay for the allocate transaction.
```javascript
let allocateTransaction = new web3.Transaction({
feePayer: payer.publicKey
});
let keys = [{pubkey: keypair.publicKey, isSigner: true, isWritable: true}];
let params = { space: 100 };
```
We create the transaction `allocateTransaction`, and the keys and params objects. `feePayer` is an optional field when creating a transaction that specifies who is paying for the transaction, defaulting to the pubkey of the first signer in the transaction. `keys` represents all accounts that the program's `allocate` function will interact with. Since the `allocate` function also requires space, we create `params` to be used later when invoking the `allocate` function.
```javascript
let allocateStruct = {
index: 8,
layout: struct([
u32('instruction'),
ns64('space'),
])
};
```
The above is created using `u32` and `ns64` from `@solana/buffer-layout` to facilitate the payload creation. The `allocate` function takes in the parameter `space`. To interact with the function we must provide the data in Buffer format. The `buffer-layout` library helps with allocating the buffer and encoding it correctly for Rust programs on Solana to interpret.
Let's break down this struct.
```javascript
{
index: 8, /* <-- */
layout: struct([
u32('instruction'),
ns64('space'),
])
}
```
`index` is set to 8 because the function `allocate` is in the 8th position in the instruction enum for `SystemProgram`.
```rust
/* https://github.com/solana-labs/solana/blob/21bc43ed58c63c827ba4db30426965ef3e807180/sdk/program/src/system_instruction.rs#L142-L305 */
pub enum SystemInstruction {
/** 0 **/CreateAccount {/**/},
/** 1 **/Assign {/**/},
/** 2 **/Transfer {/**/},
/** 3 **/CreateAccountWithSeed {/**/},
/** 4 **/AdvanceNonceAccount,
/** 5 **/WithdrawNonceAccount(u64),
/** 6 **/InitializeNonceAccount(Pubkey),
/** 7 **/AuthorizeNonceAccount(Pubkey),
/** 8 **/Allocate {/**/},
/** 9 **/AllocateWithSeed {/**/},
/** 10 **/AssignWithSeed {/**/},
/** 11 **/TransferWithSeed {/**/},
}
```
Next up is `u32('instruction')`.
```javascript
{
index: 8,
layout: struct([
u32('instruction'), /* <-- */
ns64('space'),
])
}
```
The `layout` in the allocate struct must always have `u32('instruction')` first when you are using it to call an instruction.
```javascript
{
index: 8,
layout: struct([
u32('instruction'),
ns64('space'), /* <-- */
])
}
```
`ns64('space')` is the argument for the `allocate` function. You can see in the original `allocate` function in Rust that `space` was of the type `u64`, an unsigned 64-bit integer. JavaScript numbers by default only provide up to 53 bits of integer precision. `ns64` comes from `@solana/buffer-layout` to help with type conversions between Rust and JavaScript. You can find more type conversions between Rust and JavaScript at [solana-labs/buffer-layout](https://github.com/solana-labs/buffer-layout).
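For intuition on the 53-bit limit, plain JavaScript can demonstrate it (a standalone sketch, independent of web3.js):

```javascript
// JavaScript numbers are IEEE-754 doubles: integers are exact only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);           // 9007199254740991
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false, too large for an exact double
```

This is why a full-range Rust `u64` cannot be represented losslessly as a plain JavaScript number.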
```javascript
let data = Buffer.alloc(allocateStruct.layout.span);
let layoutFields = Object.assign({instruction: allocateStruct.index}, params);
allocateStruct.layout.encode(layoutFields, data);
```
Using the layout created above, we allocate a data buffer. We then assign our params `{ space: 100 }` so that they map correctly to the layout, and encode them into the data buffer. Now the data is ready to be sent to the program.
```javascript
allocateTransaction.add(new web3.TransactionInstruction({
keys,
programId: web3.SystemProgram.programId,
data,
}));
await web3.sendAndConfirmTransaction(connection, allocateTransaction, [payer, keypair]);
```
Finally, we add the transaction instruction with all the account keys, payer, data, and programId and broadcast the transaction to the network.
The full code can be found below.
```javascript
const {struct, u32, ns64} = require("@solana/buffer-layout");
const {Buffer} = require('buffer');
const web3 = require("@solana/web3.js");
let keypair = web3.Keypair.generate();
let payer = web3.Keypair.generate();
let connection = new web3.Connection(web3.clusterApiUrl('testnet'));
let airdropSignature = await connection.requestAirdrop(
payer.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
let allocateTransaction = new web3.Transaction({
feePayer: payer.publicKey
});
let keys = [{pubkey: keypair.publicKey, isSigner: true, isWritable: true}];
let params = { space: 100 };
let allocateStruct = {
index: 8,
layout: struct([
u32('instruction'),
ns64('space'),
])
};
let data = Buffer.alloc(allocateStruct.layout.span);
let layoutFields = Object.assign({instruction: allocateStruct.index}, params);
allocateStruct.layout.encode(layoutFields, data);
allocateTransaction.add(new web3.TransactionInstruction({
keys,
programId: web3.SystemProgram.programId,
data,
}));
await web3.sendAndConfirmTransaction(connection, allocateTransaction, [payer, keypair]);
```

View File

@@ -0,0 +1,729 @@
---
title: Web3 API Reference
---
## Web3 API Reference Guide
## General
### Connection
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Connection.html)
Connection is used to interact with the [Solana JSON RPC](https://docs.solana.com/developing/clients/jsonrpc-api). You can use Connection to confirm transactions, get account info, and more.
You create a connection by defining the JSON RPC cluster endpoint and the desired commitment. Once this is complete, you can use the connection object to call any of the Solana JSON RPC API methods.
#### Example Usage
```javascript
const web3 = require("@solana/web3.js");
let connection = new web3.Connection(web3.clusterApiUrl('devnet'), 'confirmed');
let slot = await connection.getSlot();
console.log(slot);
// 93186439
let blockTime = await connection.getBlockTime(slot);
console.log(blockTime);
// 1630747045
let block = await connection.getBlock(slot);
console.log(block);
/*
{
blockHeight: null,
blockTime: 1630747045,
blockhash: 'AsFv1aV5DGip9YJHHqVjrGg6EKk55xuyxn2HeiN9xQyn',
parentSlot: 93186438,
previousBlockhash: '11111111111111111111111111111111',
rewards: [],
transactions: []
}
*/
let slotLeader = await connection.getSlotLeader();
console.log(slotLeader);
//49AqLYbpJYc2DrzGUAH1fhWJy62yxBxpLEkfJwjKy2jr
```
The above example shows only a few of the methods on Connection. Please see the [source generated docs](https://solana-labs.github.io/solana-web3.js/classes/Connection.html) for the full list.
### Transaction
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Transaction.html)
A transaction is used to interact with programs on the Solana blockchain. Transactions are constructed from TransactionInstructions, each containing all the accounts it may interact with, as well as any needed data or program addresses. Each TransactionInstruction consists of keys, data, and a programId. You can include multiple instructions in a single transaction, interacting with multiple programs at once.
#### Example Usage
```javascript
const web3 = require('@solana/web3.js');
const nacl = require('tweetnacl');
// Airdrop SOL for paying transactions
let payer = web3.Keypair.generate();
let connection = new web3.Connection(web3.clusterApiUrl('devnet'), 'confirmed');
let airdropSignature = await connection.requestAirdrop(
payer.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
let toAccount = web3.Keypair.generate();
// Create Simple Transaction
let transaction = new web3.Transaction();
// Add an instruction to execute
transaction.add(web3.SystemProgram.transfer({
fromPubkey: payer.publicKey,
toPubkey: toAccount.publicKey,
lamports: 1000,
}));
// Send and confirm transaction
// Note: feePayer is by default the first signer, or payer, if the parameter is not set
await web3.sendAndConfirmTransaction(connection, transaction, [payer])
// Alternatively, manually construct the transaction
let recentBlockhash = await connection.getRecentBlockhash();
let manualTransaction = new web3.Transaction({
recentBlockhash: recentBlockhash.blockhash,
feePayer: payer.publicKey
});
manualTransaction.add(web3.SystemProgram.transfer({
fromPubkey: payer.publicKey,
toPubkey: toAccount.publicKey,
lamports: 1000,
}));
let transactionBuffer = manualTransaction.serializeMessage();
let signature = nacl.sign.detached(transactionBuffer, payer.secretKey);
manualTransaction.addSignature(payer.publicKey, signature);
let isVerifiedSignature = manualTransaction.verifySignatures();
console.log(`The signatures were verified: ${isVerifiedSignature}`)
// The signatures were verified: true
let rawTransaction = manualTransaction.serialize();
await web3.sendAndConfirmRawTransaction(connection, rawTransaction);
```
### Keypair
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Keypair.html)
The keypair is used to create an account with a public key and secret key within Solana. You can generate a new keypair, derive one from a seed, or create one from a secret key.
#### Example Usage
```javascript
const {Keypair} = require("@solana/web3.js")
let account = Keypair.generate();
console.log(account.publicKey.toBase58());
console.log(account.secretKey);
// 2DVaHtcdTf7cm18Zm9VV8rKK4oSnjmTkKE6MiXe18Qsb
// Uint8Array(64) [
// 152, 43, 116, 211, 207, 41, 220, 33, 193, 168, 118,
// 24, 176, 83, 206, 132, 47, 194, 2, 203, 186, 131,
// 197, 228, 156, 170, 154, 41, 56, 76, 159, 124, 18,
// 14, 247, 32, 210, 51, 102, 41, 43, 21, 12, 170,
// 166, 210, 195, 188, 60, 220, 210, 96, 136, 158, 6,
// 205, 189, 165, 112, 32, 200, 116, 164, 234
// ]
let seed = Uint8Array.from([70,60,102,100,70,60,102,100,70,60,102,100,70,60,102,100,70,60,102,100,70,60,102,100,70,60,102,100,70,60,102,100]);
let accountFromSeed = Keypair.fromSeed(seed);
console.log(accountFromSeed.publicKey.toBase58());
console.log(accountFromSeed.secretKey);
// 3LDverZtSC9Duw2wyGC1C38atMG49toPNW9jtGJiw9Ar
// Uint8Array(64) [
// 70, 60, 102, 100, 70, 60, 102, 100, 70, 60, 102,
// 100, 70, 60, 102, 100, 70, 60, 102, 100, 70, 60,
// 102, 100, 70, 60, 102, 100, 70, 60, 102, 100, 34,
// 164, 6, 12, 9, 193, 196, 30, 148, 122, 175, 11,
// 28, 243, 209, 82, 240, 184, 30, 31, 56, 223, 236,
// 227, 60, 72, 215, 47, 208, 209, 162, 59
// ]
let accountFromSecret = Keypair.fromSecretKey(account.secretKey);
console.log(accountFromSecret.publicKey.toBase58());
console.log(accountFromSecret.secretKey);
// 2DVaHtcdTf7cm18Zm9VV8rKK4oSnjmTkKE6MiXe18Qsb
// Uint8Array(64) [
// 152, 43, 116, 211, 207, 41, 220, 33, 193, 168, 118,
// 24, 176, 83, 206, 132, 47, 194, 2, 203, 186, 131,
// 197, 228, 156, 170, 154, 41, 56, 76, 159, 124, 18,
// 14, 247, 32, 210, 51, 102, 41, 43, 21, 12, 170,
// 166, 210, 195, 188, 60, 220, 210, 96, 136, 158, 6,
// 205, 189, 165, 112, 32, 200, 116, 164, 234
// ]
```
Using `generate` creates a random Keypair for use as an account on Solana. Using `fromSeed`, you can generate a Keypair from a deterministic seed. `fromSecretKey` creates a Keypair from a secret Uint8Array. You can see that the publicKey for the `generate` Keypair and the `fromSecretKey` Keypair are the same, because the secret key from the `generate` Keypair is used in `fromSecretKey`.
**Warning**: Do not use `fromSeed` unless you are creating a seed with high entropy. Do not share your seed. Treat the seed like you would a private key.
### PublicKey
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/PublicKey.html)
PublicKey is used throughout `@solana/web3.js` in transactions, keypairs, and programs. A public key is required when listing each account in a transaction and as a general identifier on Solana.
A PublicKey can be created from a base58-encoded string, a buffer, a Uint8Array, a number, or an array of numbers.
#### Example Usage
```javascript
const {Buffer} = require('buffer');
const web3 = require('@solana/web3.js');
const crypto = require('crypto');
// Create a PublicKey with a base58 encoded string
let base58publicKey = new web3.PublicKey('5xot9PVkphiX2adznghwrAuxGs2zeWisNSxMW6hU6Hkj');
console.log(base58publicKey.toBase58());
// 5xot9PVkphiX2adznghwrAuxGs2zeWisNSxMW6hU6Hkj
// Create a Program Address
let highEntropyBuffer = crypto.randomBytes(31);
let programAddressFromKey = await web3.PublicKey.createProgramAddress([highEntropyBuffer.slice(0, 31)], base58publicKey);
console.log(`Generated Program Address: ${programAddressFromKey.toBase58()}`);
// Generated Program Address: 3thxPEEz4EDWHNxo1LpEpsAxZryPAHyvNVXJEJWgBgwJ
// Find Program address given a PublicKey
let validProgramAddress = await web3.PublicKey.findProgramAddress([Buffer.from('', 'utf8')], programAddressFromKey);
console.log(`Valid Program Address: ${validProgramAddress}`);
// Valid Program Address: C14Gs3oyeXbASzwUpqSymCKpEyccfEuSe8VRar9vJQRE,253
```
### SystemProgram
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/SystemProgram.html)
The SystemProgram grants the ability to create accounts, allocate account data, assign accounts to programs, work with nonce accounts, and transfer lamports. You can use the SystemInstruction class to help with decoding and reading individual instructions.
#### Example Usage
```javascript
const web3 = require("@solana/web3.js");
// Airdrop SOL for paying transactions
let payer = web3.Keypair.generate();
let connection = new web3.Connection(web3.clusterApiUrl('devnet'), 'confirmed');
let airdropSignature = await connection.requestAirdrop(
payer.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
// Allocate Account Data
let allocatedAccount = web3.Keypair.generate();
let allocateInstruction = web3.SystemProgram.allocate({
accountPubkey: allocatedAccount.publicKey,
space: 100,
})
let transaction = new web3.Transaction().add(allocateInstruction);
await web3.sendAndConfirmTransaction(connection, transaction, [payer, allocatedAccount])
// Create Nonce Account
let nonceAccount = web3.Keypair.generate();
let minimumAmountForNonceAccount = await connection.getMinimumBalanceForRentExemption(
web3.NONCE_ACCOUNT_LENGTH,
);
let createNonceAccountTransaction = new web3.Transaction().add(
web3.SystemProgram.createNonceAccount({
fromPubkey: payer.publicKey,
noncePubkey: nonceAccount.publicKey,
authorizedPubkey: payer.publicKey,
lamports: minimumAmountForNonceAccount,
}),
);
await web3.sendAndConfirmTransaction(connection, createNonceAccountTransaction, [payer, nonceAccount])
// Advance nonce - Used to create transactions as an account custodian
let advanceNonceTransaction = new web3.Transaction().add(
web3.SystemProgram.nonceAdvance({
noncePubkey: nonceAccount.publicKey,
authorizedPubkey: payer.publicKey,
}),
);
await web3.sendAndConfirmTransaction(connection, advanceNonceTransaction, [payer])
// Transfer lamports between accounts
let toAccount = web3.Keypair.generate();
let transferTransaction = new web3.Transaction().add(
web3.SystemProgram.transfer({
fromPubkey: payer.publicKey,
toPubkey: toAccount.publicKey,
lamports: 1000,
}),
);
await web3.sendAndConfirmTransaction(connection, transferTransaction, [payer])
// Assign a new account to a program
let programId = web3.Keypair.generate();
let assignedAccount = web3.Keypair.generate();
let assignTransaction = new web3.Transaction().add(
web3.SystemProgram.assign({
accountPubkey: assignedAccount.publicKey,
programId: programId.publicKey,
}),
);
await web3.sendAndConfirmTransaction(connection, assignTransaction, [payer, assignedAccount]);
```
### Secp256k1Program
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Secp256k1Program.html)
The Secp256k1Program is used to verify Secp256k1 signatures, which are used by both Bitcoin and Ethereum.
#### Example Usage
```javascript
const {keccak_256} = require('js-sha3');
const web3 = require("@solana/web3.js");
const secp256k1 = require('secp256k1');
// Create an Ethereum Address from secp256k1
let secp256k1PrivateKey;
do {
secp256k1PrivateKey = web3.Keypair.generate().secretKey.slice(0, 32);
} while (!secp256k1.privateKeyVerify(secp256k1PrivateKey));
let secp256k1PublicKey = secp256k1.publicKeyCreate(secp256k1PrivateKey, false).slice(1);
let ethAddress = web3.Secp256k1Program.publicKeyToEthAddress(secp256k1PublicKey);
console.log(`Ethereum Address: 0x${ethAddress.toString('hex')}`);
// Ethereum Address: 0xadbf43eec40694eacf36e34bb5337fba6a2aa8ee
// Fund a keypair to create instructions
let fromPublicKey = web3.Keypair.generate();
let connection = new web3.Connection(web3.clusterApiUrl('devnet'), 'confirmed');
let airdropSignature = await connection.requestAirdrop(
fromPublicKey.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
// Sign Message with Ethereum Key
let plaintext = Buffer.from('string address');
let plaintextHash = Buffer.from(keccak_256.update(plaintext).digest());
let {signature, recid: recoveryId} = secp256k1.ecdsaSign(
plaintextHash,
secp256k1PrivateKey
);
// Create transaction to verify the signature
let transaction = new web3.Transaction().add(
web3.Secp256k1Program.createInstructionWithEthAddress({
ethAddress: ethAddress.toString('hex'),
plaintext,
signature,
recoveryId,
}),
);
// Transaction will succeed if the message is verified to be signed by the address
await web3.sendAndConfirmTransaction(connection, transaction, [fromPublicKey]);
```
### Message
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Message.html)
Message is used as another way to construct transactions. You can construct a message using the accounts, header, instructions, and recentBlockhash that are a part of a transaction. A [Transaction](https://solana-labs.github.io/solana-web3.js/classes/Transaction.html) is a Message plus the list of signatures required to execute the transaction.
#### Example Usage
```javascript
const {Buffer} = require("buffer");
const bs58 = require('bs58');
const web3 = require('@solana/web3.js');
let toPublicKey = web3.Keypair.generate().publicKey;
let fromPublicKey = web3.Keypair.generate();
let connection = new web3.Connection(
web3.clusterApiUrl('devnet'),
'confirmed'
);
let airdropSignature = await connection.requestAirdrop(
fromPublicKey.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
let type = web3.SYSTEM_INSTRUCTION_LAYOUTS.Transfer;
let data = Buffer.alloc(type.layout.span);
let layoutFields = Object.assign({instruction: type.index});
type.layout.encode(layoutFields, data);
let recentBlockhash = await connection.getRecentBlockhash();
let messageParams = {
accountKeys: [
fromPublicKey.publicKey.toString(),
toPublicKey.toString(),
web3.SystemProgram.programId.toString()
],
header: {
numReadonlySignedAccounts: 0,
numReadonlyUnsignedAccounts: 1,
numRequiredSignatures: 1,
},
instructions: [
{
accounts: [0, 1],
data: bs58.encode(data),
programIdIndex: 2,
},
],
recentBlockhash,
};
let message = new web3.Message(messageParams);
let transaction = web3.Transaction.populate(
message,
[fromPublicKey.publicKey.toString()]
);
await web3.sendAndConfirmTransaction(connection, transaction, [fromPublicKey])
```
### Struct
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Struct.html)
The Struct class is used to create Rust-compatible structs in javascript. This class is only compatible with Borsh-encoded Rust structs.
#### Example Usage
Struct in Rust:
```rust
pub struct Fee {
pub denominator: u64,
pub numerator: u64,
}
```
Using web3:
```javascript
import BN from 'bn.js';
import {Struct} from '@solana/web3.js';
export class Fee extends Struct {
denominator: BN;
numerator: BN;
}
```
### Enum
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Enum.html)
The Enum class is used to represent a Rust-compatible enum in javascript. The enum will only be a string representation if logged, but can be properly encoded/decoded when used in conjunction with [Struct](https://solana-labs.github.io/solana-web3.js/classes/Struct.html). This class is only compatible with Borsh-encoded Rust enums.
#### Example Usage
Rust:
```rust
pub enum AccountType {
Uninitialized,
StakePool,
ValidatorList,
}
```
Web3:
```javascript
import {Enum} from '@solana/web3.js';
export class AccountType extends Enum {}
```
### NonceAccount
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/NonceAccount.html)
Normally a transaction is rejected if its `recentBlockhash` field is too old. To support certain custodial services, Nonce Accounts are used. Transactions which use a `recentBlockhash` captured on-chain by a Nonce Account do not expire as long as the Nonce Account is not advanced.
You can create a nonce account by first creating a normal account, then using `SystemProgram` to make the account a Nonce Account.
#### Example Usage
```javascript
const web3 = require('@solana/web3.js');
// Create connection
let connection = new web3.Connection(
web3.clusterApiUrl('devnet'),
'confirmed',
);
// Generate accounts
let account = web3.Keypair.generate();
let nonceAccount = web3.Keypair.generate();
// Fund account
let airdropSignature = await connection.requestAirdrop(
account.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
// Get Minimum amount for rent exemption
let minimumAmount = await connection.getMinimumBalanceForRentExemption(
web3.NONCE_ACCOUNT_LENGTH,
);
// Form CreateNonceAccount transaction
let transaction = new web3.Transaction().add(
web3.SystemProgram.createNonceAccount({
fromPubkey: account.publicKey,
noncePubkey: nonceAccount.publicKey,
authorizedPubkey: account.publicKey,
lamports: minimumAmount,
}),
);
// Create Nonce Account
await web3.sendAndConfirmTransaction(
connection,
transaction,
[account, nonceAccount]
);
let nonceAccountData = await connection.getNonce(
nonceAccount.publicKey,
'confirmed',
);
console.log(nonceAccountData);
// NonceAccount {
// authorizedPubkey: PublicKey {
// _bn: <BN: 919981a5497e8f85c805547439ae59f607ea625b86b1138ea6e41a68ab8ee038>
// },
// nonce: '93zGZbhMmReyz4YHXjt2gHsvu5tjARsyukxD4xnaWaBq',
// feeCalculator: { lamportsPerSignature: 5000 }
// }
let nonceAccountInfo = await connection.getAccountInfo(
nonceAccount.publicKey,
'confirmed'
);
let nonceAccountFromInfo = web3.NonceAccount.fromAccountData(
nonceAccountInfo.data
);
console.log(nonceAccountFromInfo);
// NonceAccount {
// authorizedPubkey: PublicKey {
// _bn: <BN: 919981a5497e8f85c805547439ae59f607ea625b86b1138ea6e41a68ab8ee038>
// },
// nonce: '93zGZbhMmReyz4YHXjt2gHsvu5tjARsyukxD4xnaWaBq',
// feeCalculator: { lamportsPerSignature: 5000 }
// }
```
The above example shows both how to create a `NonceAccount` using `SystemProgram.createNonceAccount` and how to retrieve the `NonceAccount` from accountInfo. Using the nonce, you can create transactions offline with the nonce in place of the `recentBlockhash`.
### VoteAccount
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/VoteAccount.html)
VoteAccount is an object that enables decoding vote accounts from the native vote account program on the network.
#### Example Usage
```javascript
const web3 = require('@solana/web3.js');
let connection = new web3.Connection(web3.clusterApiUrl('devnet'), 'confirmed');
let voteAccountInfo = await connection.getProgramAccounts(web3.VOTE_PROGRAM_ID);
let voteAccountFromData = web3.VoteAccount.fromAccountData(voteAccountInfo[0].account.data);
console.log(voteAccountFromData);
/*
VoteAccount {
nodePubkey: PublicKey {
_bn: <BN: 100000096c4e1e61a6393e51937e548aee2c062e23c67cbaa8d04f289d18232>
},
authorizedVoterPubkey: PublicKey {
_bn: <BN: 5862b94396c4e1e61a6393e51937e548aee2c062e23c67cbaa8d04f289d18232>
},
authorizedWithdrawerPubkey: PublicKey {
_bn: <BN: 5862b9430a0800000000000000cb136204000000001f000000cc136204000000>
},
commission: 0,
votes: [
{ slot: 124554051584, confirmationCount: 73536462 },
{ slot: 120259084288, confirmationCount: 73536463 },
{ slot: 115964116992, confirmationCount: 73536464 },
{ slot: 111669149696, confirmationCount: 73536465 },
{ slot: 107374182400, confirmationCount: 96542804 },
{ slot: 4294967296, confirmationCount: 1645464065 },
{ slot: 1099511627780, confirmationCount: 0 },
{ slot: 57088, confirmationCount: 3787757056 },
{ slot: 16516698632215534000, confirmationCount: 3236081224 },
{ slot: 328106138455040640, confirmationCount: 2194770418 },
{ slot: 290873038898, confirmationCount: 0 },
],
rootSlot: null,
epoch: 0,
credits: 0,
lastEpochCredits: 0,
epochCredits: []
}
*/
```
## Staking
### StakeProgram
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/StakeProgram.html)
The StakeProgram facilitates staking SOL and delegating it to any validator on the network. You can use StakeProgram to create a stake account, stake some SOL, authorize accounts for withdrawal of your stake, deactivate your stake, and withdraw your funds. The StakeInstruction class is used to decode and read instructions from transactions calling the StakeProgram.
#### Example Usage
```javascript
const web3 = require("@solana/web3.js");
// Fund a key to create transactions
let fromPublicKey = web3.Keypair.generate();
let connection = new web3.Connection(web3.clusterApiUrl('devnet'), 'confirmed');
let airdropSignature = await connection.requestAirdrop(
fromPublicKey.publicKey,
web3.LAMPORTS_PER_SOL,
);
await connection.confirmTransaction(airdropSignature);
// Create Account
let stakeAccount = web3.Keypair.generate();
let authorizedAccount = web3.Keypair.generate();
/* Note: This is the minimum amount for a stake account -- Add additional Lamports for staking
For example, we add 50 lamports as part of the stake */
let lamportsForStakeAccount = (await connection.getMinimumBalanceForRentExemption(web3.StakeProgram.space)) + 50;
let createAccountTransaction = web3.StakeProgram.createAccount({
fromPubkey: fromPublicKey.publicKey,
authorized: new web3.Authorized(authorizedAccount.publicKey, authorizedAccount.publicKey),
lamports: lamportsForStakeAccount,
lockup: new web3.Lockup(0, 0, fromPublicKey.publicKey),
stakePubkey: stakeAccount.publicKey
});
await web3.sendAndConfirmTransaction(connection, createAccountTransaction, [fromPublicKey, stakeAccount]);
// Check that stake is available
let stakeBalance = await connection.getBalance(stakeAccount.publicKey);
console.log(`Stake balance: ${stakeBalance}`)
// Stake balance: 2282930
// We can verify the state of our stake. This may take some time to become active
let stakeState = await connection.getStakeActivation(stakeAccount.publicKey);
console.log(`Stake State: ${stakeState.state}`);
// Stake State: inactive
// To delegate our stake, we get the current vote accounts and choose the first
let voteAccounts = await connection.getVoteAccounts();
let voteAccount = voteAccounts.current.concat(
voteAccounts.delinquent,
)[0];
let votePubkey = new web3.PublicKey(voteAccount.votePubkey);
// We can then delegate our stake to the voteAccount
let delegateTransaction = web3.StakeProgram.delegate({
stakePubkey: stakeAccount.publicKey,
authorizedPubkey: authorizedAccount.publicKey,
votePubkey: votePubkey,
});
await web3.sendAndConfirmTransaction(connection, delegateTransaction, [fromPublicKey, authorizedAccount]);
// To withdraw our funds, we first have to deactivate the stake
let deactivateTransaction = web3.StakeProgram.deactivate({
stakePubkey: stakeAccount.publicKey,
authorizedPubkey: authorizedAccount.publicKey,
});
await web3.sendAndConfirmTransaction(connection, deactivateTransaction, [fromPublicKey, authorizedAccount]);
// Once deactivated, we can withdraw our funds
let withdrawTransaction = web3.StakeProgram.withdraw({
stakePubkey: stakeAccount.publicKey,
authorizedPubkey: authorizedAccount.publicKey,
toPubkey: fromPublicKey.publicKey,
lamports: stakeBalance,
});
await web3.sendAndConfirmTransaction(connection, withdrawTransaction, [fromPublicKey, authorizedAccount]);
```
### Authorized
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Authorized.html)
Authorized is an object used when creating an authorized account for staking within Solana. You can designate a `staker` and `withdrawer` separately, allowing for a different account to withdraw other than the staker.
You can find more usage of the `Authorized` object under [`StakeProgram`](https://solana-labs.github.io/solana-web3.js/classes/StakeProgram.html)
### Lockup
[Source Documentation](https://solana-labs.github.io/solana-web3.js/classes/Lockup.html)
Lockup is used in conjunction with the [StakeProgram](https://solana-labs.github.io/solana-web3.js/classes/StakeProgram.html) to create an account. The Lockup is used to determine how long the stake will be locked, or unable to be retrieved. If the Lockup is set to 0 for both epoch and the Unix timestamp, the lockup will be disabled for the stake account.
#### Example Usage
```javascript
const {Authorized, Keypair, Lockup, StakeProgram} = require("@solana/web3.js");
let account = Keypair.generate();
let stakeAccount = Keypair.generate();
let authorized = new Authorized(account.publicKey, account.publicKey);
let lockup = new Lockup(0, 0, account.publicKey);
let createStakeAccountInstruction = StakeProgram.createAccount({
fromPubkey: account.publicKey,
authorized: authorized,
lamports: 1000,
lockup: lockup,
stakePubkey: stakeAccount.publicKey
});
```
The above code creates a `createStakeAccountInstruction` to be used when creating an account with the `StakeProgram`. The Lockup is set to 0 for both the epoch and Unix timestamp, disabling lockup for the account.
See [StakeProgram](https://solana-labs.github.io/solana-web3.js/classes/StakeProgram.html) for more.


@@ -873,12 +873,12 @@ None
The result field will be an array of JSON objects, each with the following sub fields:
- `pubkey: <string>` - Node public key, as base-58 encoded string
- `gossip: <string>` - Gossip network address for the node
- `tpu: <string>` - TPU network address for the node
- `rpc: <string>|null` - JSON RPC network address for the node, or `null` if the JSON RPC service is not enabled
- `version: <string>|null` - The software version of the node, or `null` if the version information is not available
- `featureSet: <number>|null` - The unique identifier of the node's feature set
- `shredVersion: <number>|null` - The shred version the node has been configured to use
- `gossip: <string | null>` - Gossip network address for the node
- `tpu: <string | null>` - TPU network address for the node
- `rpc: <string | null>` - JSON RPC network address for the node, or `null` if the JSON RPC service is not enabled
- `version: <string | null>` - The software version of the node, or `null` if the version information is not available
- `featureSet: <u32 | null >` - The unique identifier of the node's feature set
- `shredVersion: <u16 | null>` - The shred version the node has been configured to use
#### Example:
@@ -923,6 +923,7 @@ The result field will be an object with the following fields:
- `epoch: <u64>`, the current epoch
- `slotIndex: <u64>`, the current slot relative to the start of the current epoch
- `slotsInEpoch: <u64>`, the number of slots in this epoch
- `transactionCount: <u64 | null>`, total number of transactions processed without error since genesis
#### Example:
@@ -942,7 +943,8 @@ Result:
"blockHeight": 166500,
"epoch": 27,
"slotIndex": 2790,
"slotsInEpoch": 8192
"slotsInEpoch": 8192,
"transactionCount": 22661093
},
"id": 1
}
@@ -1340,7 +1342,7 @@ The result field will be a JSON object with the following fields:
- `total: <f64>`, total inflation
- `validator: <f64>`, inflation allocated to validators
- `foundation: <f64>`, inflation allocated to the foundation
- `epoch: <f64>`, epoch for which these values are valid
- `epoch: <u64>`, epoch for which these values are valid
#### Example:
@@ -2569,37 +2571,41 @@ Result:
},
"value": [
{
"data": {
"program": "spl-token",
"parsed": {
"info": {
"tokenAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1",
"account": {
"data": {
"program": "spl-token",
"parsed": {
"info": {
"tokenAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1"
},
"delegate": "4Nd1mBQtrMJVYVfKf2PJy9NZUZdTAsp7D4xWLs4gDB4T",
"delegatedAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1"
},
"state": "initialized",
"isNative": false,
"mint": "3wyAj7Rt1TWVPZVteFJPLa26JmLvdb1CAKEFZm3NY75E",
"owner": "CnPoSPKXu7wJqxe59Fs72tkBeALovhsCxYeFwPCQH9TD"
},
"delegate": "4Nd1mBQtrMJVYVfKf2PJy9NZUZdTAsp7D4xWLs4gDB4T",
"delegatedAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1",
},
"state": "initialized",
"isNative": false,
"mint": "3wyAj7Rt1TWVPZVteFJPLa26JmLvdb1CAKEFZm3NY75E",
"owner": "CnPoSPKXu7wJqxe59Fs72tkBeALovhsCxYeFwPCQH9TD"
"type": "account"
},
"type": "account"
"space": 165
},
"space": 165
"executable": false,
"lamports": 1726080,
"owner": "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA",
"rentEpoch": 4
},
"executable": false,
"lamports": 1726080,
"owner": "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA",
"rentEpoch": 4
"pubkey": "28YTZEwqtMHWrhWcvv34se7pjS7wctgqzCPB3gReCFKp"
}
]
},
"id": 1
@@ -2667,37 +2673,40 @@ Result:
},
"value": [
{
"data": {
"program": "spl-token",
"parsed": {
"accountType": "account",
"info": {
"tokenAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1",
"account": {
"data": {
"program": "spl-token",
"parsed": {
"accountType": "account",
"info": {
"tokenAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1"
},
"delegate": "4Nd1mBQtrMJVYVfKf2PJy9NZUZdTAsp7D4xWLs4gDB4T",
"delegatedAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1"
},
"state": "initialized",
"isNative": false,
"mint": "3wyAj7Rt1TWVPZVteFJPLa26JmLvdb1CAKEFZm3NY75E",
"owner": "4Qkev8aNZcqFNSRhQzwyLMFSsi94jHqE8WNVTJzTP99F"
},
"type": "account"
},
"delegate": "4Nd1mBQtrMJVYVfKf2PJy9NZUZdTAsp7D4xWLs4gDB4T",
"delegatedAmount": {
"amount": "1",
"decimals": 1,
"uiAmount": 0.1,
"uiAmountString": "0.1",
},
"state": "initialized",
"isNative": false,
"mint": "3wyAj7Rt1TWVPZVteFJPLa26JmLvdb1CAKEFZm3NY75E",
"owner": "4Qkev8aNZcqFNSRhQzwyLMFSsi94jHqE8WNVTJzTP99F"
"space": 165
},
"type": "account"
},
"space": 165
"executable": false,
"lamports": 1726080,
"owner": "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA",
"rentEpoch": 4
},
"executable": false,
"lamports": 1726080,
"owner": "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA",
"rentEpoch": 4
"pubkey": "C2gJg6tKpQs41PRS1nC8aw3ZKNZK3HQQZGVrDFDup5nx"
}
]
},
@@ -3039,7 +3048,7 @@ curl http://localhost:8899 -X POST -H "Content-Type: application/json" -d '
Result:
```json
{"jsonrpc":"2.0","result":{"solana-core": "1.7.14"},"id":1}
{"jsonrpc":"2.0","result":{"solana-core": "1.8.2"},"id":1}
```
### getVoteAccounts
@@ -4151,7 +4160,7 @@ Response:
### getConfirmedBlock
**DEPRECATED: Please use [getBlock](jsonrpc-api.md#getblock) instead**
This method is expected to be removed in solana-core v1.8
This method is expected to be removed in solana-core v2.0
Returns identity and transaction information about a confirmed block in the ledger
@@ -4345,7 +4354,7 @@ For more details on returned data:
### getConfirmedBlocks
**DEPRECATED: Please use [getBlocks](jsonrpc-api.md#getblocks) instead**
This method is expected to be removed in solana-core v1.8
This method is expected to be removed in solana-core v2.0
Returns a list of confirmed blocks between two slots
@@ -4379,7 +4388,7 @@ Result:
### getConfirmedBlocksWithLimit
**DEPRECATED: Please use [getBlocksWithLimit](jsonrpc-api.md#getblockswithlimit) instead**
This method is expected to be removed in solana-core v1.8
This method is expected to be removed in solana-core v2.0
Returns a list of confirmed blocks starting at the given slot
@@ -4411,7 +4420,7 @@ Result:
### getConfirmedSignaturesForAddress2
**DEPRECATED: Please use [getSignaturesForAddress](jsonrpc-api.md#getsignaturesforaddress) instead**
This method is expected to be removed in solana-core v1.8
This method is expected to be removed in solana-core v2.0
Returns confirmed signatures for transactions involving an
address backwards in time from the provided signature or most recent confirmed block
@@ -4473,7 +4482,7 @@ Result:
### getConfirmedTransaction
**DEPRECATED: Please use [getTransaction](jsonrpc-api.md#gettransaction) instead**
This method is expected to be removed in solana-core v1.8
This method is expected to be removed in solana-core v2.0
Returns transaction details for a confirmed transaction


@@ -2,7 +2,30 @@
title: Rust API
---
See [doc.rs](https://docs.rs/releases/search?query=solana-) for documentation of
all crates published by Solana. In particular [solana-sdk](https://docs.rs/solana-sdk)
for working with common data structures and [solana-client](https://docs.rs/solana-client)
for querying the [JSON RPC API](jsonrpc-api).
Solana's Rust crates are [published to crates.io][crates.io] and can be found
[on docs.rs with the "solana-" prefix][docs.rs].
[crates.io]: https://crates.io/search?q=solana-
[docs.rs]: https://docs.rs/releases/search?query=solana-
Some important crates:
- [`solana-program`] &mdash; Imported by programs running on Solana, compiled
to BPF. This crate contains many fundamental data types and is re-exported from
[`solana-sdk`], which cannot be imported from a Solana program.
- [`solana-sdk`] &mdash; The basic off-chain SDK, it re-exports
[`solana-program`] and adds more APIs on top of that. Most Solana programs
that do not run on-chain will import this.
- [`solana-client`] &mdash; For interacting with a Solana node via the
[JSON RPC API](jsonrpc-api).
- [`solana-clap-utils`] &mdash; Routines for setting up a CLI, using [`clap`],
as used by the main Solana CLI.
[`solana-program`]: https://docs.rs/solana-program
[`solana-sdk`]: https://docs.rs/solana-sdk
[`solana-client`]: https://docs.rs/solana-client
[`solana-clap-utils`]: https://docs.rs/solana-clap-utils
[`clap`]: https://docs.rs/clap


@@ -33,19 +33,19 @@ Solana Rust programs may depend directly on each other in order to gain access
to instruction helpers when making [cross-program invocations](developing/programming-model/calling-between-programs.md#cross-program-invocations).
When doing so it's important to not pull in the dependent program's entrypoint
symbols because they may conflict with the program's own. To avoid this,
programs should define an `exclude_entrypoint` feature in `Cargo.toml` and use
programs should define a `no-entrypoint` feature in `Cargo.toml` and use
to exclude the entrypoint.
- [Define the
feature](https://github.com/solana-labs/solana-program-library/blob/a5babd6cbea0d3f29d8c57d2ecbbd2a2bd59c8a9/token/program/Cargo.toml#L12)
feature](https://github.com/solana-labs/solana-program-library/blob/fca9836a2c8e18fc7e3595287484e9acd60a8f64/token/program/Cargo.toml#L12)
- [Exclude the
entrypoint](https://github.com/solana-labs/solana-program-library/blob/a5babd6cbea0d3f29d8c57d2ecbbd2a2bd59c8a9/token/program/src/lib.rs#L12)
entrypoint](https://github.com/solana-labs/solana-program-library/blob/fca9836a2c8e18fc7e3595287484e9acd60a8f64/token/program/src/lib.rs#L12)
Then when other programs include this program as a dependency, they should do so
using the `exclude_entrypoint` feature.
using the `no-entrypoint` feature.
- [Include without
entrypoint](https://github.com/solana-labs/solana-program-library/blob/a5babd6cbea0d3f29d8c57d2ecbbd2a2bd59c8a9/token-swap/program/Cargo.toml#L19)
entrypoint](https://github.com/solana-labs/solana-program-library/blob/fca9836a2c8e18fc7e3595287484e9acd60a8f64/token-swap/program/Cargo.toml#L22)
## Project Dependencies
@@ -115,9 +115,9 @@ Programs must be written for and deployed to the same loader. For more details
see the [overview](overview#loaders).
Currently there are two supported loaders [BPF
Loader](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/bpf_loader.rs#L17)
Loader](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/bpf_loader.rs#L17)
and [BPF loader
deprecated](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/bpf_loader_deprecated.rs#L14)
deprecated](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/bpf_loader_deprecated.rs#L14)
They both have the same raw entrypoint definition, the following is the raw
symbol that the runtime looks up and calls:
@@ -136,9 +136,9 @@ processing function, and returns the results.
You can find the entrypoint macros here:
- [BPF Loader's entrypoint
macro](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/entrypoint.rs#L46)
macro](https://github.com/solana-labs/solana/blob/9b1199cdb1b391b00d510ed7fc4866bdf6ee4eb3/sdk/program/src/entrypoint.rs#L42)
- [BPF Loader deprecated's entrypoint
macro](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/entrypoint_deprecated.rs#L37)
macro](https://github.com/solana-labs/solana/blob/9b1199cdb1b391b00d510ed7fc4866bdf6ee4eb3/sdk/program/src/entrypoint_deprecated.rs#L38)
The program defined instruction processing function that the entrypoint macros
call must be of this form:
@@ -149,7 +149,7 @@ pub type ProcessInstruction =
```
Refer to [helloworld's use of the
entrypoint](https://github.com/solana-labs/example-helloworld/blob/c1a7247d87cd045f574ed49aec5d160aefc45cf2/src/program-rust/src/lib.rs#L15)
entrypoint](https://github.com/solana-labs/example-helloworld/blob/1e049076e10be8712b1a725d2d886ce0cd036b2e/src/program-rust/src/lib.rs#L19)
as an example of how things fit together.
### Parameter Deserialization
@@ -159,9 +159,9 @@ parameters into Rust types. The entrypoint macros automatically calls the
deserialization helper:
- [BPF Loader
deserialization](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/entrypoint.rs#L104)
deserialization](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/entrypoint.rs#L146)
- [BPF Loader deprecated
deserialization](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/entrypoint_deprecated.rs#L56)
deserialization](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/entrypoint_deprecated.rs#L57)
Some programs may want to perform deserialization themselves and they can by
providing their own implementation of the [raw entrypoint](#program-entrypoint).
@@ -190,7 +190,7 @@ The program id is the public key of the currently executing program.
The accounts is an ordered slice of the accounts referenced by the instruction
and represented as an
[AccountInfo](https://github.com/solana-labs/solana/blob/7ddf10e602d2ed87a9e3737aa8c32f1db9f909d8/sdk/program/src/account_info.rs#L10)
[AccountInfo](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/account_info.rs#L12)
structures. An account's place in the array signifies its meaning, for example,
when transferring lamports an instruction may define the first account as the
source and the second as the destination.
@@ -214,7 +214,7 @@ being processed.
## Heap
Rust programs implement the heap directly by defining a custom
[`global_allocator`](https://github.com/solana-labs/solana/blob/8330123861a719cd7a79af0544617896e7f00ce3/sdk/program/src/entrypoint.rs#L50)
[`global_allocator`](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/entrypoint.rs#L72)
Programs may implement their own `global_allocator` based on its specific needs.
Refer to the [custom heap example](#examples) for more information.
@@ -288,7 +288,7 @@ getrandom = { version = "0.2.2", features = ["custom"] }
Rust's `println!` macro is computationally expensive and not supported. Instead
the helper macro
[`msg!`](https://github.com/solana-labs/solana/blob/6705b5a98c076ac08f3991bb8a6f9fcb280bf51e/sdk/program/src/log.rs#L33)
[`msg!`](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/log.rs#L33)
is provided.
`msg!` has two forms:
@@ -375,7 +375,7 @@ fn custom_panic(info: &core::panic::PanicInfo<'_>) {
## Compute Budget
Use the system call
[`sol_log_compute_units()`](https://github.com/solana-labs/solana/blob/d3a3a7548c857f26ec2cb10e270da72d373020ec/sdk/program/src/log.rs#L102)
[`sol_log_compute_units()`](https://github.com/solana-labs/solana/blob/d9b0fc0e3eec67dfe4a97d9298b15969b2804fab/sdk/program/src/log.rs#L141)
to log a message containing the remaining number of compute units the program
may consume before execution is halted


@@ -65,6 +65,51 @@ to the BPF Upgradeable Loader to process the instruction.
[More information about deployment](cli/deploy-a-program.md)
## Ed25519 Program
Verify ed25519 signature program. This program takes an ed25519 signature, public key, and message.
Multiple signatures can be verified. If any of the signatures fail to verify, an error is returned.
- Program id: `Ed25519SigVerify111111111111111111111111111`
- Instructions: [new_ed25519_instruction](https://github.com/solana-labs/solana/blob/master/sdk/src/ed25519_instruction.rs#L45)
The ed25519 program processes an instruction. The first `u8` is a count of the number of
signatures to check, which is followed by a single byte padding. After that, the
following struct is serialized, one for each signature to check.
```
struct Ed25519SignatureOffsets {
signature_offset: u16, // offset to ed25519 signature of 64 bytes
signature_instruction_index: u16, // instruction index to find signature
public_key_offset: u16, // offset to public key of 32 bytes
public_key_instruction_index: u16, // instruction index to find public key
message_data_offset: u16, // offset to start of message data
message_data_size: u16, // size of message data
message_instruction_index: u16, // index of instruction data to get message data
}
```
Pseudo code of the operation:
```
process_instruction() {
for i in 0..count {
// i'th index values referenced:
instructions = &transaction.message().instructions
instruction_index = ed25519_signature_instruction_index != u16::MAX ? ed25519_signature_instruction_index : current_instruction;
signature = instructions[instruction_index].data[ed25519_signature_offset..ed25519_signature_offset + 64]
instruction_index = ed25519_pubkey_instruction_index != u16::MAX ? ed25519_pubkey_instruction_index : current_instruction;
pubkey = instructions[instruction_index].data[ed25519_pubkey_offset..ed25519_pubkey_offset + 32]
instruction_index = ed25519_message_instruction_index != u16::MAX ? ed25519_message_instruction_index : current_instruction;
message = instructions[instruction_index].data[ed25519_message_data_offset..ed25519_message_data_offset + ed25519_message_data_size]
if pubkey.verify(signature, message) != Success {
return Error
}
}
return Success
}
```
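Following the struct layout above, a client building the instruction data would serialize each `Ed25519SignatureOffsets` entry as seven little-endian `u16` values. A minimal sketch (the helper is hypothetical, not part of any SDK):

```javascript
// Hypothetical helper: pack one Ed25519SignatureOffsets struct
// (seven u16 fields, little-endian) into 14 bytes, following the
// field order of the struct shown above.
function packEd25519Offsets(offsets) {
  const buf = Buffer.alloc(14);
  [
    offsets.signatureOffset,
    offsets.signatureInstructionIndex,
    offsets.publicKeyOffset,
    offsets.publicKeyInstructionIndex,
    offsets.messageDataOffset,
    offsets.messageDataSize,
    offsets.messageInstructionIndex,
  ].forEach((value, i) => buf.writeUInt16LE(value, i * 2));
  return buf;
}

// The full instruction data would then be: the signature count (u8),
// one padding byte, followed by the packed structs.
```

An instruction index of `u16::MAX` (0xffff) refers to the current instruction, per the pseudo code above.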
## Secp256k1 Program
Verify secp256k1 public key recovery operations (ecrecover).


@@ -16,10 +16,7 @@ The affected RPC endpoints are:
- [getConfirmedSignaturesForAddress](developing/clients/jsonrpc-api.md#getconfirmedsignaturesforaddress)
- [getConfirmedTransaction](developing/clients/jsonrpc-api.md#getconfirmedtransaction)
- [getSignatureStatuses](developing/clients/jsonrpc-api.md#getsignaturestatuses)
Note that [getBlockTime](developing/clients/jsonrpc-api.md#getblocktime)
is not supported, as once https://github.com/solana-labs/solana/issues/10089 is
fixed then `getBlockTime` can be removed.
- [getBlockTime](developing/clients/jsonrpc-api.md#getblocktime)
Some system design constraints:


@@ -0,0 +1,146 @@
# Program log binary data
## Problem
There is no support in the program log for the binary data that Solidity events require.
### Events in Solidity
In Solidity, events can be reported. These look like structures with zero or
more fields, and can be emitted with specific values. For example:
```
event PaymentReceived {
address sender;
uint amount;
}
contract c {
function pay() public payable {
emit PaymentReceived(msg.sender, msg.value);
}
}
```
Events are write-only from a Solidity/VM perspective and are written to
the blocks in the tx records.
Some of these fields can be marked `indexed`, which affects how the data is
encoded. All non-indexed fields are eth abi encoded into a variable length
byte array. All indexed fields go into so-called topics.
Topics are fixed length fields of 32 bytes. There are a maximum of 4 topics;
if a type does not always fit into 32 bytes (e.g. string types), then the topic
is keccak256 hashed.
The first topic is a keccak256 hash of the event signature, in this case
`keccak256('PaymentReceived(address,uint)')`. The three remaining are available
for `indexed` fields. The event may be declared `anonymous`, in which case
the first field is not a hash of the signature, and it is permitted to have
4 indexed fields.
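For example, an indexed `address` field always fits in 32 bytes, so no hashing is needed and the topic is simply the zero-padded address. A minimal sketch (not library code):

```javascript
// Sketch: turn an indexed address into a 32-byte topic by left-padding
// the 20-byte (40 hex character) address with zeros. Types that do not
// always fit in 32 bytes (e.g. string) would be keccak256-hashed instead.
function addressToTopic(address) {
  return '0x' + address.slice(2).toLowerCase().padStart(64, '0');
}
```

Applied to the sender address of the transaction example further below, this produces exactly the second topic recorded in that receipt.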
### Listening to events in a client
The reason for the distinction between topics/indexed and regular fields is
that it is easier to filter on topics.
```
const Web3 = require('web3');
const url = 'ws://127.0.0.1:8546';
const web3 = new Web3(url);
var options = {
address: '0xfbBE8f06FAda977Ea1E177da391C370EFbEE3D25',
topics: [
'0xdf50c7bb3b25f812aedef81bc334454040e7b27e27de95a79451d663013b7e17',
//'0x0000000000000000000000000d8a3f5e71560982fb0eb5959ecf84412be6ae3e'
]
};
var subscription = web3.eth.subscribe('logs', options, function(error, result){
if (!error) console.log('got result');
else console.log(error);
}).on("data", function(log){
console.log('got data', log);
}).on("changed", function(log){
console.log('changed');
});
```
In order to decode the non-indexed fields (the data), the abi of the contract
is needed. So, the topic is first used to discover what event was used, and
then the data can be decoded.
### Ethereum Tx in block
The transaction receipt contains the event logs. Here is a tx with a single event, with 3
topics and some data.
```
{
"tx": {
"nonce": "0x2",
"gasPrice": "0xf224d4a00",
"gas": "0xc350",
"to": "0x6B175474E89094C44Da98b954EedeAC495271d0F",
"value": "0x0",
"input": "0xa9059cbb000000000000000000000000a12431d0b9db640034b0cdfceef9cce161e62be40000000000000000000000000000000000000000000000a030dcebbd2f4c0000",
"hash": "0x98a67f0a35ebc0ac068acf0885d38419c632ffa4354e96641d6d5103a7681910",
"blockNumber": "0xc96431",
"from": "0x82f890D638478d211eF2208f3c1466B5Abf83551",
"transactionIndex": "0xe1"
},
"receipt": {
"gasUsed": "0x74d2",
"status": "0x1",
"logs": [
{
"address": "0x6B175474E89094C44Da98b954EedeAC495271d0F",
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x00000000000000000000000082f890d638478d211ef2208f3c1466b5abf83551",
"0x000000000000000000000000a12431d0b9db640034b0cdfceef9cce161e62be4"
],
"data": "0x0000000000000000000000000000000000000000000000a030dcebbd2f4c0000"
}
]
}
}
```
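Assuming this is a standard ERC-20 `Transfer(address,address,uint256)` event, the indexed `from`/`to` addresses sit in topics 1 and 2 and the non-indexed amount is abi-encoded in the data field. A decoding sketch (not a full abi decoder):

```javascript
// Sketch: decode an ERC-20 Transfer log. Topic 0 is the event signature
// hash; topics 1 and 2 hold the indexed from/to addresses as the last
// 20 bytes of each 32-byte topic; the amount is the abi-encoded uint256
// in the data field.
function decodeTransferLog(log) {
  return {
    from: '0x' + log.topics[1].slice(-40),
    to: '0x' + log.topics[2].slice(-40),
    amount: BigInt(log.data),
  };
}
```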
### Further considerations
In Ethereum, events are stored in blocks. Events mark certain state changes in
smart contracts. This serves two purposes:
- Listen to events (i.e. state changes) as they happen by reading new blocks
as they are published
- Re-read historical events by reading old blocks
So for example, smart contracts may emit changes as they happen but never the
complete state, so the only way to recover the entire state of the contract
is by re-reading all events from the chain. So an application will read events
from block 1 or whatever block the application was deployed at and then use
that state for local processing. This is a local cache and may be re-populated
from the chain at any point.
## Proposed Solution
Binary logging should be added to the program log. The program log should include the base64-encoded data (zero or more fields permitted).
So if we encode the topics first, followed by the data, then the event in the
tx above would look like:
```
program data: 3fJSrRviyJtpwrBo/DeNqpUrpFjxKEWKPVaTfUjs8AAAAAAAAAAAAAAACC+JDWOEeNIR7yII88FGa1q/g1UQAAAAAAAAAAAAAAAKEkMdC522QANLDN/O75zOFh5ivk AAAAAAAAAAAAAAAAAAAAAAAAAAAAAACgMNzrvS9MAAA=
```
This requires a new system call:
```
void sol_log_data(SolBytes *fields, uint64_t length);
```
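The resulting log line can be sketched on the client side: each field base64-encoded and joined with spaces after a `program data:` prefix, matching the example line above. This is a hypothetical illustration, not the actual runtime code:

```javascript
// Hypothetical sketch of how sol_log_data fields could be rendered into
// the program log: each field base64-encoded, space-separated, after a
// "program data:" prefix.
function formatProgramData(fields) {
  const encoded = fields.map((f) => Buffer.from(f).toString('base64'));
  return 'program data: ' + encoded.join(' ');
}

// A log consumer can recover the raw bytes by reversing the encoding:
function parseProgramData(line) {
  return line
    .replace('program data: ', '')
    .split(' ')
    .map((s) => Buffer.from(s, 'base64'));
}
```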
### Considerations
- Should there be a text field in the program log so we can have a little bit of
metadata on the binary data, to make it more human readable?


@@ -39,9 +39,9 @@ on our mainnet, cosmos, or tezos. For our network, which is primarily
composed of high availability systems, this seems unlikely. Currently,
we have set the threshold percentage to 4.66%, which means that if 23.68%
have failed the network may stop finalizing blocks. For our network,
which is primarily composed of high availability systems a 23.68% drop
in availabilty seems unlinkely. 1:10^12 odds assuming five 4.7% staked
nodes with 0.995 of uptime.
which is primarily composed of high availability systems, a 23.68% drop
in availability seems unlikely. 1:10^12 odds, assuming five 4.7% staked
nodes with 0.995 uptime.
## Security


@@ -0,0 +1,144 @@
# Return data from BPF programs
## Problem
In the Solidity language it is permitted to return any number of values from a function,
for example a variable length string can be returned:
```
function foo1() public returns (string) {
return "Hello, world!\n";
}
```
Multiple values, arrays and structs are permitted too.
```
struct S {
int f1;
	bool f2;
};
function foo2() public returns (string, int[], S) {
return (a, b, c);
}
```
All the return values are eth abi encoded to a variable-length byte array.
On Ethereum, errors can be returned too:
```
function withdraw() public {
require(msg.sender == owner, "Permission denied");
}
function failure() public {
revert("I afraid I can't do that dave");
}
```
These errors help the developer debug any issue they are having, and can
also be caught in a Solidity `try` .. `catch` block. Outside of a `try` .. `catch`
block, any of these would cause the transaction or RPC call to fail.
## Existing solution
The existing solution that Solang uses writes the return data to the callee's account data.
The caller's account cannot be used, since the callee may not be the same BPF program, so
it will not have permission to write to the caller's account data.
Another solution would be to have a single return data account which is passed
around through CPI. Again this does not work for CPI as the callee may not have
permission to write to it.
The problem with this solution is:
- It does not work for RPC calls
- It is very racey; a client has to submit the Tx and then retrieve the account
data. This is not atomic so the return data can be overwritten by another transaction.
## Requirements for Solution
It must work for:
- RPC: An RPC call should be able to return any number of values without writing to account data
- Transaction: A transaction should be able to return any number of values without needing to write them to account data
- CPI: The callee must be able to set the return data, and the caller must be able to retrieve it.
## Review of other chains
### Ethereum (EVM)
The `RETURN` opcode allows a contract to set a buffer as the return data. This opcode takes a pointer to memory and a size. The `REVERT` opcode works similarly, but signals that the call failed and that all account data changes must be reverted.
For CPI, the caller can retrieve the returned data of the callee using the `RETURNDATASIZE` opcode which returns the length, and the `RETURNDATACOPY` opcode, which takes a memory destination pointer, offset into the returndata, and a length argument.
Ethereum stores the returndata in blocks.
### Parity Substrate
The return data can be set using the `seal_return(u32 flags, u32 pointer, u32 size)` syscall.
- Flags can be 1 for revert, 0 for success (nothing else defined)
- Function does not return
CPI: The `seal_call()` syscall takes a pointer to a buffer and a pointer to the buffer size, into which the return data is written
- There is a 32KB limit for return data.
Parity Substrate does not write the return data to blocks.
## Rejected Solution
The concept of ephemeral accounts has been proposed as a solution for this. This would
certainly work for the CPI case, but it would not work for the RPC or Transaction cases.
## Proposed Solution
The callee can set the return data using a new system call `sol_set_return_data(buf: *const u8, length: u64)`.
There is a limit of 1024 bytes for the return data. This function can be called multiple times;
each call simply overwrites what the previous call wrote.
The return data can be retrieved with `sol_get_return_data(buf: *mut u8, length: u64, program_id: *mut Pubkey) -> u64`.
This function copies the return data into the buffer, copies the program_id of the program that set
the return data, and returns the length of the return data, or `0` if no return data has been set. In that case, program_id is left untouched.
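As an illustration, the overwrite-and-report semantics of these two syscalls can be modelled in plain Rust. The `Pubkey` type, the 1024-byte limit, and the overwrite behaviour come from the text above; the struct and method names are purely hypothetical stand-ins for the syscalls:

```rust
// Illustrative, pure-Rust model of the proposed syscall semantics.
type Pubkey = [u8; 32];

const MAX_RETURN_DATA: usize = 1024;

#[derive(Default)]
struct ReturnData {
    data: Vec<u8>,
    program_id: Option<Pubkey>,
}

impl ReturnData {
    /// Models `sol_set_return_data`: each call overwrites any previous value.
    fn set(&mut self, program_id: Pubkey, buf: &[u8]) -> Result<(), &'static str> {
        if buf.len() > MAX_RETURN_DATA {
            return Err("return data exceeds 1024 bytes");
        }
        self.data = buf.to_vec();
        self.program_id = Some(program_id);
        Ok(())
    }

    /// Models `sol_get_return_data`: copies into `buf`, reports which program
    /// set the data, and returns its length, or 0 if nothing was set.
    fn get(&self, buf: &mut [u8], program_id: &mut Pubkey) -> u64 {
        match self.program_id {
            None => 0,
            Some(setter) => {
                let n = self.data.len().min(buf.len());
                buf[..n].copy_from_slice(&self.data[..n]);
                *program_id = setter;
                self.data.len() as u64
            }
        }
    }
}

fn main() {
    let mut rd = ReturnData::default();
    let callee: Pubkey = [7; 32];
    rd.set(callee, b"first").unwrap();
    rd.set(callee, b"second").unwrap(); // a later call simply overwrites

    let mut buf = [0u8; MAX_RETURN_DATA];
    let mut setter: Pubkey = [0; 32];
    let len = rd.get(&mut buf, &mut setter) as usize;
    assert_eq!(buf[..len].to_vec(), b"second".to_vec());
    assert_eq!(setter, callee);
}
```

Note that a real program would not call these names directly; this sketch only captures the behaviour the proposal describes.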
When an instruction calls `sol_invoke()`, the return data of the callee is copied into the return data
of the current instruction. This means that any return data is automatically passed up the call stack,
to the caller of the current instruction (or the RPC caller).
Note that `sol_invoke()` clears the return data before invoking the callee, so that return data from
a previous invoke is not reused if the invoked program fails to set any return data. For example:
- A invokes B
- Before entry to B, return data is cleared.
- B sets some return data and returns
- A invokes C
- Before entry to C, return data is cleared.
- C does not set return data and returns
- A checks return data and finds it empty
Another scenario to consider:
- A invokes B
- B invokes C
- C sets return data and returns
- B does not touch return data and returns
- A gets return data from C
- A does not touch return data
- Return data from transaction is what C set.
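Both scenarios above can be covered by a minimal pure-Rust sketch of the clear-on-invoke rule; all names here are illustrative, and only the clearing behaviour is taken from the text:

```rust
// Sketch of the clearing rule: invoke() clears the return data before
// entering the callee, so stale data never leaks across invocations.
type Pubkey = [u8; 32];

#[derive(Default)]
struct ReturnData(Option<(Pubkey, Vec<u8>)>);

impl ReturnData {
    fn invoke<F: FnOnce(&mut ReturnData)>(&mut self, callee: F) {
        self.0 = None; // cleared before entry to the callee
        callee(self);
        // whatever the callee (or its callees) set is now visible to the caller
    }
    fn set(&mut self, id: Pubkey, data: &[u8]) {
        self.0 = Some((id, data.to_vec()));
    }
}

fn main() {
    let (b, c): (Pubkey, Pubkey) = ([2; 32], [3; 32]);
    let mut rd = ReturnData::default();

    // Scenario 1: A invokes B (which sets data), then invokes C (which does not).
    rd.invoke(|r| r.set(b, b"from B"));
    rd.invoke(|_r| {}); // C returns without setting return data
    assert!(rd.0.is_none()); // A finds the return data empty

    // Scenario 2: A invokes B; B invokes C, which sets data; B and A leave it alone.
    rd.invoke(|r| {
        r.invoke(|rr| rr.set(c, b"from C"));
    });
    assert_eq!(rd.0.unwrap(), (c, b"from C".to_vec()));
}
```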
Compute costs are charged for getting and setting the return data via these syscalls.
For a normal RPC call or Transaction, the return data is base64-encoded and stored alongside the sol_log
strings in the [stable log](https://github.com/solana-labs/solana/blob/95292841947763bdd47ef116b40fc34d0585bca8/sdk/src/process_instruction.rs#L275-L281).
## Note on returning errors
Solidity on Ethereum allows a contract to return an error in the return data. In this case, all
the account data changes for the account should be reverted. On Solana, any non-zero exit code
for a BPF program means the entire transaction fails. We do not wish to support an error return
by returning success and then returning an error in the return data, since that would require
reverting the account data changes; this is too expensive on both the VM side and the BPF
contract side.
Errors will be reported via sol_log.

View File

@@ -8,7 +8,7 @@ Confirm the IP address and **identity pubkey** of your validator is visible in
the gossip network by running:
```bash
solana-gossip spy --entrypoint devnet.solana.com:8001
solana gossip
```
## Check Your Balance

View File

@@ -26,16 +26,6 @@ solana transaction-count
View the [metrics dashboard](https://metrics.solana.com:3000/d/monitor/cluster-telemetry) for more
detail on cluster activity.
## Confirm your Installation
Try running following command to join the gossip network and view all the other
nodes in the cluster:
```bash
solana-gossip spy --entrypoint entrypoint.devnet.solana.com:8001
# Press ^C to exit
```
## Enabling CUDA
If your machine has a GPU with CUDA installed \(Linux-only currently\), include
@@ -318,11 +308,11 @@ The ledger will be placed in the `ledger/` directory by default, use the
> `solana-validator --identity ASK ... --authorized-voter ASK ...`
> and you will be prompted to enter your seed phrases and optional passphrase.
Confirm your validator connected to the network by opening a new terminal and
Confirm your validator is connected to the network by opening a new terminal and
running:
```bash
solana-gossip spy --entrypoint entrypoint.devnet.solana.com:8001
solana gossip
```
If your validator is connected, its public key and IP address will appear in the list.

View File

@@ -23,8 +23,8 @@ any control over the account. In fact, a keypair or private key may not even
exist for a stake account's address.
The only time a stake account's address has a keypair file is when [creating
a stake account using the command line tools](../cli/delegate-stake.md#create-a-stake-account),
a new keypair file is created first only to ensure that the stake account's
a stake account using the command line tools](../cli/delegate-stake.md#create-a-stake-account).
A new keypair file is created first only to ensure that the stake account's
address is new and unique.
#### Understanding Account Authorities
@@ -137,7 +137,5 @@ re-created for the address to be used again.
#### Viewing Stake Accounts
Stake account details can be viewed on the Solana Explorer by copying and pasting
an account address into the search bar.
- http://explorer.solana.com/accounts
Stake account details can be viewed on the [Solana Explorer](http://explorer.solana.com/accounts)
by copying and pasting an account address into the search bar.

View File

@@ -8,7 +8,7 @@ The following terms are used throughout the documentation.
A record in the Solana ledger that either holds data or is an executable program.
Like an account at a traditional bank, a Solana account may hold funds called [lamports](terminology.md#lamport). Like a file in Linux, it is addressable by a key, often referred to as a [public key](terminology.md#public-key-pubkey) or pubkey.
Like an account at a traditional bank, a Solana account may hold funds called [lamports](#lamport). Like a file in Linux, it is addressable by a key, often referred to as a [public key](#public-key-pubkey) or pubkey.
The key may be one of:
@@ -26,59 +26,55 @@ A front-end application that interacts with a Solana cluster.
## bank state
The result of interpreting all programs on the ledger at a given [tick height](terminology.md#tick-height). It includes at least the set of all [accounts](terminology.md#account) holding nonzero [native tokens](terminology.md#native-token).
The result of interpreting all programs on the ledger at a given [tick height](#tick-height). It includes at least the set of all [accounts](#account) holding nonzero [native tokens](#native-token).
## block
A contiguous set of [entries](terminology.md#entry) on the ledger covered by a [vote](terminology.md#ledger-vote). A [leader](terminology.md#leader) produces at most one block per [slot](terminology.md#slot).
A contiguous set of [entries](#entry) on the ledger covered by a [vote](#ledger-vote). A [leader](#leader) produces at most one block per [slot](#slot).
## blockhash
A unique value ([hash](terminology.md#hash)) that identifies a record (block). Solana computes a blockhash from the last [entry id](terminology.md#entry-id) of the block.
A unique value ([hash](#hash)) that identifies a record (block). Solana computes a blockhash from the last [entry id](#entry-id) of the block.
## block height
The number of [blocks](terminology.md#block) beneath the current block. The first block after the [genesis block](terminology.md#genesis-block) has height one.
The number of [blocks](#block) beneath the current block. The first block after the [genesis block](#genesis-block) has height one.
## bootstrap validator
The [validator](terminology.md#validator) that produces the genesis (first) [block](terminology.md#block) of a block chain.
The [validator](#validator) that produces the genesis (first) [block](#block) of a block chain.
## BPF loader
The Solana program that owns and loads [BPF](developing/on-chain-programs/overview#berkeley-packet-filter-bpf) smart contract programs, allowing the program to interface with the runtime.
## CBC block
The smallest encrypted chunk of ledger, an encrypted ledger segment would be made of many CBC blocks. `ledger_segment_size / cbc_block_size` to be exact.
## client
A computer program that accesses the Solana server network [cluster](terminology.md#cluster).
A computer program that accesses the Solana server network [cluster](#cluster).
## cluster
A set of [validators](terminology.md#validator) maintaining a single [ledger](terminology.md#ledger).
A set of [validators](#validator) maintaining a single [ledger](#ledger).
## confirmation time
The wallclock duration between a [leader](terminology.md#leader) creating a [tick entry](terminology.md#tick) and creating a [confirmed block](terminology.md#confirmed-block).
The wallclock duration between a [leader](#leader) creating a [tick entry](#tick) and creating a [confirmed block](#confirmed-block).
## confirmed block
A [block](terminology.md#block) that has received a [supermajority](terminology.md#supermajority) of [ledger votes](terminology.md#ledger-vote).
A [block](#block) that has received a [supermajority](#supermajority) of [ledger votes](#ledger-vote).
## control plane
A gossip network connecting all [nodes](terminology.md#node) of a [cluster](terminology.md#cluster).
A gossip network connecting all [nodes](#node) of a [cluster](#cluster).
## cooldown period
Some number of [epochs](terminology.md#epoch) after [stake](terminology.md#stake) has been deactivated while it progressively becomes available for withdrawal. During this period, the stake is considered to be "deactivating". More info about: [warmup and cooldown](implemented-proposals/staking-rewards.md#stake-warmup-cooldown-withdrawal)
Some number of [epochs](#epoch) after [stake](#stake) has been deactivated while it progressively becomes available for withdrawal. During this period, the stake is considered to be "deactivating". More info about: [warmup and cooldown](implemented-proposals/staking-rewards.md#stake-warmup-cooldown-withdrawal)
## credit
See [vote credit](terminology.md#vote-credit).
See [vote credit](#vote-credit).
## cross-program invocation (CPI)
@@ -86,7 +82,7 @@ A call from one smart contract program to another. For more information, see [ca
## data plane
A multicast network used to efficiently validate [entries](terminology.md#entry) and gain consensus.
A multicast network used to efficiently validate [entries](#entry) and gain consensus.
## drone
@@ -94,21 +90,21 @@ An off-chain service that acts as a custodian for a user's private key. It typic
## entry
An entry on the [ledger](terminology.md#ledger) either a [tick](terminology.md#tick) or a [transactions entry](terminology.md#transactions-entry).
An entry on the [ledger](#ledger) either a [tick](#tick) or a [transactions entry](#transactions-entry).
## entry id
A preimage resistant [hash](terminology.md#hash) over the final contents of an entry, which acts as the [entry's](terminology.md#entry) globally unique identifier. The hash serves as evidence of:
A preimage resistant [hash](#hash) over the final contents of an entry, which acts as the [entry's](#entry) globally unique identifier. The hash serves as evidence of:
- The entry being generated after a duration of time
- The specified [transactions](terminology.md#transaction) are those included in the entry
- The entry's position with respect to other entries in [ledger](terminology.md#ledger)
- The specified [transactions](#transaction) are those included in the entry
- The entry's position with respect to other entries in [ledger](#ledger)
See [proof of history](terminology.md#proof-of-history-poh).
See [proof of history](#proof-of-history-poh).
## epoch
The time, i.e. number of [slots](terminology.md#slot), for which a [leader schedule](terminology.md#leader-schedule) is valid.
The time, i.e. number of [slots](#slot), for which a [leader schedule](#leader-schedule) is valid.
## fee account
@@ -116,19 +112,19 @@ The fee account in the transaction is the account pays for the cost of including
## finality
When nodes representing 2/3rd of the [stake](terminology.md#stake) have a common [root](terminology.md#root).
When nodes representing 2/3rd of the [stake](#stake) have a common [root](#root).
## fork
A [ledger](terminology.md#ledger) derived from common entries but then diverged.
A [ledger](#ledger) derived from common entries but then diverged.
## genesis block
The first [block](terminology.md#block) in the chain.
The first [block](#block) in the chain.
## genesis config
The configuration file that prepares the [ledger](terminology.md#ledger) for the [genesis block](terminology.md#genesis-block).
The configuration file that prepares the [ledger](#ledger) for the [genesis block](#genesis-block).
## hash
@@ -140,76 +136,76 @@ An increase in token supply over time used to fund rewards for validation and to
## inner instruction
See [cross-program invocation](terminology.md#cross-program-invocation-cpi).
See [cross-program invocation](#cross-program-invocation-cpi).
## instruction
The smallest contiguous unit of execution logic in a [program](terminology.md#program). An instruction specifies which program it is calling, which accounts it wants to read or modify, and additional data that serves as auxiliary input to the program. A [client](terminology.md#client) can include one or multiple instructions in a [transaction](terminology.md#transaction). An instruction may contain one or more [cross-program invocations](terminology.md#cross-program-invocation-cpi).
The smallest contiguous unit of execution logic in a [program](#program). An instruction specifies which program it is calling, which accounts it wants to read or modify, and additional data that serves as auxiliary input to the program. A [client](#client) can include one or multiple instructions in a [transaction](#transaction). An instruction may contain one or more [cross-program invocations](#cross-program-invocation-cpi).
## keypair
A [public key](terminology.md#public-key-pubkey) and corresponding [private key](terminology.md#private-key) for accessing an account.
A [public key](#public-key-pubkey) and corresponding [private key](#private-key) for accessing an account.
## lamport
A fractional [native token](terminology.md#native-token) with the value of 0.000000001 [sol](terminology.md#sol).
A fractional [native token](#native-token) with the value of 0.000000001 [sol](#sol).
## leader
The role of a [validator](terminology.md#validator) when it is appending [entries](terminology.md#entry) to the [ledger](terminology.md#ledger).
The role of a [validator](#validator) when it is appending [entries](#entry) to the [ledger](#ledger).
## leader schedule
A sequence of [validator](terminology.md#validator) [public keys](terminology.md#public-key-pubkey) mapped to [slots](terminology.md#slot). The cluster uses the leader schedule to determine which validator is the [leader](terminology.md#leader) at any moment in time.
A sequence of [validator](#validator) [public keys](#public-key-pubkey) mapped to [slots](#slot). The cluster uses the leader schedule to determine which validator is the [leader](#leader) at any moment in time.
## ledger
A list of [entries](terminology.md#entry) containing [transactions](terminology.md#transaction) signed by [clients](terminology.md#client).
Conceptually, this can be traced back to the [genesis block](terminology.md#genesis-block), but an actual [validator](terminology.md#validator)'s ledger may have only newer [blocks](terminology.md#block) to reduce storage, as older ones are not needed for validation of future blocks by design.
A list of [entries](#entry) containing [transactions](#transaction) signed by [clients](#client).
Conceptually, this can be traced back to the [genesis block](#genesis-block), but an actual [validator](#validator)'s ledger may have only newer [blocks](#block) to reduce storage, as older ones are not needed for validation of future blocks by design.
## ledger vote
A [hash](terminology.md#hash) of the [validator's state](terminology.md#bank-state) at a given [tick height](terminology.md#tick-height). It comprises a [validator's](terminology.md#validator) affirmation that a [block](terminology.md#block) it has received has been verified, as well as a promise not to vote for a conflicting [block](terminology.md#block) \(i.e. [fork](terminology.md#fork)\) for a specific amount of time, the [lockout](terminology.md#lockout) period.
A [hash](#hash) of the [validator's state](#bank-state) at a given [tick height](#tick-height). It comprises a [validator's](#validator) affirmation that a [block](#block) it has received has been verified, as well as a promise not to vote for a conflicting [block](#block) \(i.e. [fork](#fork)\) for a specific amount of time, the [lockout](#lockout) period.
## light client
A type of [client](terminology.md#client) that can verify it's pointing to a valid [cluster](terminology.md#cluster). It performs more ledger verification than a [thin client](terminology.md#thin-client) and less than a [validator](terminology.md#validator).
A type of [client](#client) that can verify it's pointing to a valid [cluster](#cluster). It performs more ledger verification than a [thin client](#thin-client) and less than a [validator](#validator).
## loader
A [program](terminology.md#program) with the ability to interpret the binary encoding of other on-chain programs.
A [program](#program) with the ability to interpret the binary encoding of other on-chain programs.
## lockout
The duration of time for which a [validator](terminology.md#validator) is unable to [vote](terminology.md#ledger-vote) on another [fork](terminology.md#fork).
The duration of time for which a [validator](#validator) is unable to [vote](#ledger-vote) on another [fork](#fork).
## native token
The [token](terminology.md#token) used to track work done by [nodes](terminology.md#node) in a cluster.
The [token](#token) used to track work done by [nodes](#node) in a cluster.
## node
A computer participating in a [cluster](terminology.md#cluster).
A computer participating in a [cluster](#cluster).
## node count
The number of [validators](terminology.md#validator) participating in a [cluster](terminology.md#cluster).
The number of [validators](#validator) participating in a [cluster](#cluster).
## PoH
See [Proof of History](terminology.md#proof-of-history-poh).
See [Proof of History](#proof-of-history-poh).
## point
A weighted [credit](terminology.md#credit) in a rewards regime. In the [validator](terminology.md#validator) [rewards regime](cluster/stake-delegation-and-rewards.md), the number of points owed to a [stake](terminology.md#stake) during redemption is the product of the [vote credits](terminology.md#vote-credit) earned and the number of lamports staked.
A weighted [credit](#credit) in a rewards regime. In the [validator](#validator) [rewards regime](cluster/stake-delegation-and-rewards.md), the number of points owed to a [stake](#stake) during redemption is the product of the [vote credits](#vote-credit) earned and the number of lamports staked.
## private key
The private key of a [keypair](terminology.md#keypair).
The private key of a [keypair](#keypair).
## program
The code that interprets [instructions](terminology.md#instruction).
The code that interprets [instructions](#instruction).
## program derived account (PDA)
@@ -217,23 +213,23 @@ An account whose owner is a program and thus is not controlled by a private key
## program id
The public key of the [account](terminology.md#account) containing a [program](terminology.md#program).
The public key of the [account](#account) containing a [program](#program).
## proof of history (PoH)
A stack of proofs, each which proves that some data existed before the proof was created and that a precise duration of time passed before the previous proof. Like a [VDF](terminology.md#verifiable-delay-function-vdf), a Proof of History can be verified in less time than it took to produce.
A stack of proofs, each which proves that some data existed before the proof was created and that a precise duration of time passed before the previous proof. Like a [VDF](#verifiable-delay-function-vdf), a Proof of History can be verified in less time than it took to produce.
## public key (pubkey)
The public key of a [keypair](terminology.md#keypair).
The public key of a [keypair](#keypair).
## root
A [block](terminology.md#block) or [slot](terminology.md#slot) that has reached maximum [lockout](terminology.md#lockout) on a [validator](terminology.md#validator). The root is the highest block that is an ancestor of all active forks on a validator. All ancestor blocks of a root are also transitively a root. Blocks that are not an ancestor and not a descendant of the root are excluded from consideration for consensus and can be discarded.
A [block](#block) or [slot](#slot) that has reached maximum [lockout](#lockout) on a [validator](#validator). The root is the highest block that is an ancestor of all active forks on a validator. All ancestor blocks of a root are also transitively a root. Blocks that are not an ancestor and not a descendant of the root are excluded from consideration for consensus and can be discarded.
## runtime
The component of a [validator](terminology.md#validator) responsible for [program](terminology.md#program) execution.
The component of a [validator](#validator) responsible for [program](#program) execution.
## Sealevel
@@ -241,25 +237,25 @@ Solana's parallel smart contracts run-time.
## shred
A fraction of a [block](terminology.md#block); the smallest unit sent between [validators](terminology.md#validator).
A fraction of a [block](#block); the smallest unit sent between [validators](#validator).
## signature
A 64-byte ed25519 signature of R (32-bytes) and S (32-bytes). With the requirement that R is a packed Edwards point not of small order and S is a scalar in the range of 0 <= S < L.
This requirement ensures no signature malleability. Each transaction must have at least one signature for [fee account](terminology#fee-account).
Thus, the first signature in transaction can be treated as [transacton id](terminology.md#transaction-id)
Thus, the first signature in a transaction can be treated as the [transaction id](#transaction-id)
## skipped slot
A past [slot](terminology.md#slot) that did not produce a [block](terminology.md#block), because the leader was offline or the [fork](terminology.md#fork) containing the slot was abandoned for a better alternative by cluster consensus. A skipped slot will not appear as an ancestor for blocks at subsequent slots, nor increment the [block height](terminology#block-height), nor expire the oldest `recent_blockhash`.
A past [slot](#slot) that did not produce a [block](#block), because the leader was offline or the [fork](#fork) containing the slot was abandoned for a better alternative by cluster consensus. A skipped slot will not appear as an ancestor for blocks at subsequent slots, nor increment the [block height](terminology#block-height), nor expire the oldest `recent_blockhash`.
Whether a slot has been skipped can only be determined when it becomes older than the latest [rooted](terminology.md#root) (thus not-skipped) slot.
Whether a slot has been skipped can only be determined when it becomes older than the latest [rooted](#root) (thus not-skipped) slot.
## slot
The period of time for which each [leader](terminology.md#leader) ingests transactions and produces a [block](terminology.md#block).
The period of time for which each [leader](#leader) ingests transactions and produces a [block](#block).
Collectively, slots create a logical clock. Slots are ordered sequentially and non-overlapping, comprising roughly equal real-world time as per [PoH](terminology.md#proof-of-history-poh).
Collectively, slots create a logical clock. Slots are ordered sequentially and non-overlapping, comprising roughly equal real-world time as per [PoH](#proof-of-history-poh).
## smart contract
@@ -267,7 +263,7 @@ A program on a blockchain that can read and modify accounts over which it has co
## sol
The [native token](terminology.md#native-token) of a Solana [cluster](terminology.md#cluster).
The [native token](#native-token) of a Solana [cluster](#cluster).
## Solana Program Library (SPL)
@@ -275,27 +271,27 @@ A library of programs on Solana such as spl-token that facilitates tasks such as
## stake
Tokens forfeit to the [cluster](terminology.md#cluster) if malicious [validator](terminology.md#validator) behavior can be proven.
Tokens forfeit to the [cluster](#cluster) if malicious [validator](#validator) behavior can be proven.
## supermajority
2/3 of a [cluster](terminology.md#cluster).
2/3 of a [cluster](#cluster).
## sysvar
A system [account](terminology.md#account). [Sysvars](developing/runtime-facilities/sysvars.md) provide cluster state information such as current tick height, rewards [points](terminology.md#point) values, etc. Programs can access Sysvars via a Sysvar account (pubkey) or by querying via a syscall.
A system [account](#account). [Sysvars](developing/runtime-facilities/sysvars.md) provide cluster state information such as current tick height, rewards [points](#point) values, etc. Programs can access Sysvars via a Sysvar account (pubkey) or by querying via a syscall.
## thin client
A type of [client](terminology.md#client) that trusts it is communicating with a valid [cluster](terminology.md#cluster).
A type of [client](#client) that trusts it is communicating with a valid [cluster](#cluster).
## tick
A ledger [entry](terminology.md#entry) that estimates wallclock duration.
A ledger [entry](#entry) that estimates wallclock duration.
## tick height
The Nth [tick](terminology.md#tick) in the [ledger](terminology.md#ledger).
The Nth [tick](#tick) in the [ledger](#ledger).
## token
@@ -303,31 +299,31 @@ A digitally transferable asset.
## tps
[Transactions](terminology.md#transaction) per second.
[Transactions](#transaction) per second.
## transaction
One or more [instructions](terminology.md#instruction) signed by a [client](terminology.md#client) using one or more [keypairs](terminology.md#keypair) and executed atomically with only two possible outcomes: success or failure.
One or more [instructions](#instruction) signed by a [client](#client) using one or more [keypairs](#keypair) and executed atomically with only two possible outcomes: success or failure.
## transaction id
The first [signature](terminology.md#signature) in a [transaction](terminology.md#transaction), which can be used to uniquely identify the transaction across the complete [ledger](terminology.md#ledger).
The first [signature](#signature) in a [transaction](#transaction), which can be used to uniquely identify the transaction across the complete [ledger](#ledger).
## transaction confirmations
The number of [confirmed blocks](terminology.md#confirmed-block) since the transaction was accepted onto the [ledger](terminology.md#ledger). A transaction is finalized when its block becomes a [root](terminology.md#root).
The number of [confirmed blocks](#confirmed-block) since the transaction was accepted onto the [ledger](#ledger). A transaction is finalized when its block becomes a [root](#root).
## transactions entry
A set of [transactions](terminology.md#transaction) that may be executed in parallel.
A set of [transactions](#transaction) that may be executed in parallel.
## validator
A full participant in a Solana network [cluster](terminology.md#cluster) that produces new [blocks](terminology.md#block). A validator validates the transactions added to the [ledger](terminology.md#ledger)
A full participant in a Solana network [cluster](#cluster) that produces new [blocks](#block). A validator validates the transactions added to the [ledger](#ledger)
## VDF
See [verifiable delay function](terminology.md#verifiable-delay-function-vdf).
See [verifiable delay function](#verifiable-delay-function-vdf).
## verifiable delay function (VDF)
@@ -335,16 +331,16 @@ A function that takes a fixed amount of time to execute that produces a proof th
## vote
See [ledger vote](terminology.md#ledger-vote).
See [ledger vote](#ledger-vote).
## vote credit
A reward tally for [validators](terminology.md#validator). A vote credit is awarded to a validator in its vote account when the validator reaches a [root](terminology.md#root).
A reward tally for [validators](#validator). A vote credit is awarded to a validator in its vote account when the validator reaches a [root](#root).
## wallet
A collection of [keypairs](terminology.md#keypair) that allows users to manage their funds.
A collection of [keypairs](#keypair) that allows users to manage their funds.
## warmup period
Some number of [epochs](terminology.md#epoch) after [stake](terminology.md#stake) has been delegated while it progressively becomes effective. During this period, the stake is considered to be "activating". More info about: [warmup and cooldown](cluster/stake-delegation-and-rewards.md#stake-warmup-cooldown-withdrawal)
Some number of [epochs](#epoch) after [stake](#stake) has been delegated while it progressively becomes effective. During this period, the stake is considered to be "activating". More info about: [warmup and cooldown](cluster/stake-delegation-and-rewards.md#stake-warmup-cooldown-withdrawal)
