Compare commits


588 Commits
v1.9.2 ... v1.5

Author SHA1 Message Date
mergify[bot]
347ff770bd Correct days/year (#17024) (#17032)
(cherry picked from commit 46d2755205)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-05-04 11:09:00 +00:00
mergify[bot]
41ab35fc9b Refactor SignerSource to expose DerivationPath to other kinds of signers (backport #16933) (#16940)
* Refactor SignerSource to expose DerivationPath to other kinds of signers (#16933)

* One use statement

* Add stdin uri scheme

* Convert parse_signer_source to return Result

* A-Z deps

* Convert Usb data to Locator

* Pull DerivationPath out of Locator

* Wrap SignerSource to share derivation_path

* Review comments

* Check Filepath existence, readability in parse_signer_source

(cherry picked from commit d6f30b7537)

# Conflicts:
#	Cargo.lock
#	clap-utils/src/memo.rs
#	remote-wallet/src/locator.rs
#	sdk/Cargo.toml

* Fix conflicts

* Fix legacy compile test

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-29 17:14:37 +00:00
mergify[bot]
9cca79090c Refactor remote-wallet path parsing (backport #16798) (#16893)
* SDK: More conversions for `Pubkey`

(cherry picked from commit 9b7120bf73)

* SDK: More conversion for `DerivationPath`

(cherry picked from commit 722de942ca)

* remote-wallet: Add helpers for locating remote wallets

(cherry picked from commit 64fcb792c2)

# Conflicts:
#	Cargo.lock

* remote-wallet: Plumb `Locator` into `RemoteWalletInfo`

(cherry picked from commit 3d12be29ec)

# Conflicts:
#	remote-wallet/src/ledger.rs

* remote-wallet: `derivation-path` crate doesn't like empty trailing child indexes

(cherry picked from commit 4ce4f04c58)

* remote-wallet: Move `Locator` to its own module

(cherry picked from commit cac666d035)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-28 04:20:19 +00:00
mergify[bot]
9541973377 docs: getInflationReward rpc output fields should be in lower camel case (#16802) (#16804)
(cherry picked from commit ec37a843a4)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-04-24 19:23:12 +00:00
mergify[bot]
0bfe517cfb runtime: checked math for Bank::withdraw (#16787)
(cherry picked from commit be29568318)

# Conflicts:
#	runtime/src/bank.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-24 02:50:15 +00:00
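The change above swaps unchecked subtraction in `Bank::withdraw` for checked math, so an over-withdrawal surfaces as an error instead of wrapping. A minimal sketch of the pattern with stand-in types (the real method lives in `runtime/src/bank.rs`):

```rust
// Illustrative sketch of checked-math withdrawal; `WithdrawError` and the
// account shape are stand-ins for the runtime's real types.
#[derive(Debug, PartialEq)]
enum WithdrawError {
    InsufficientFunds,
}

struct Account {
    lamports: u64,
}

fn withdraw(account: &mut Account, lamports: u64) -> Result<(), WithdrawError> {
    // checked_sub returns None on underflow instead of wrapping in release
    // builds (or panicking in debug builds).
    account.lamports = account
        .lamports
        .checked_sub(lamports)
        .ok_or(WithdrawError::InsufficientFunds)?;
    Ok(())
}

fn main() {
    let mut acct = Account { lamports: 100 };
    assert!(withdraw(&mut acct, 60).is_ok());
    assert_eq!(withdraw(&mut acct, 60), Err(WithdrawError::InsufficientFunds));
    assert_eq!(acct.lamports, 40);
}
```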
mergify[bot]
bd8e3182ae requires stakes for propagating crds values through gossip (backport #15561) (#16773)
* requires stakes for propagating crds values through gossip (#15561)

(cherry picked from commit f2865dfd63)

# Conflicts:
#	core/src/cluster_info.rs
#	sdk/src/feature_set.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-23 17:55:15 +00:00
mergify[bot]
e5460f97ad Remove unactivated ristretto syscall (backport #16727) (#16744)
* Remove unactivated ristretto syscall (#16727)

(cherry picked from commit be4df39a4c)

# Conflicts:
#	programs/bpf/Cargo.lock
#	programs/bpf/rust/ristretto/Cargo.toml
#	programs/bpf/tests/programs.rs
#	programs/bpf_loader/src/syscalls.rs
#	sdk/src/feature_set.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-04-22 21:45:18 +00:00
mergify[bot]
0de08eab3b CLI: Make pay subcommand a proper alias of transfer (#16720)
(cherry picked from commit 63957f0677)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-21 22:39:16 +00:00
mergify[bot]
80d6a5aa9d Wrap derivation_path::DerivationPath (backport #16609) (#16650)
* Wrap derivation_path::DerivationPath (#16609)

* Replace custom DerivationPath impl

* Add method to parse full-path from str with hardening

* Convert Bip44 to trait

* Hoist more work on derivation-path

* Privatize Bip44 trait

(cherry picked from commit 185bbf2db5)

# Conflicts:
#	programs/bpf/Cargo.lock

* Fix conflict

* Make derivation-path optional for legacy bpf build

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-20 22:14:56 +00:00
Michael Vines
7f2977a3b0 Bump version to 1.5.20 2021-04-20 17:04:26 +00:00
mergify[bot]
936ff7424e makes turbine peer computation consistent between broadcast and retransmit (#14910) (#16653)
get_broadcast_peers is using tvu_peers:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/broadcast_stage.rs#L362-L370
which is potentially inconsistent with retransmit_peers:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/cluster_info.rs#L1332-L1345

Also, the leader does not include its own contact-info when broadcasting
shreds:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/cluster_info.rs#L1324
but on the retransmit side, slot leader is removed only _after_ neighbors and
children are computed:
https://github.com/solana-labs/solana/blob/84e52b606/core/src/retransmit_stage.rs#L383-L384
So the turbine broadcast tree is different between the two stages.

This commit:
* Removes retransmit_peers. Broadcast and retransmit stages will use tvu_peers
  consistently.
* Retransmit stage removes slot leader _before_ computing children and
  neighbors.

(cherry picked from commit 570fd3f810)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-20 12:44:50 +00:00
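The fix boils down to two ordering rules: both stages build the tree from `tvu_peers`, and the slot leader is removed before neighbors/children are computed. A hedged sketch of the retransmit side, with simplified stand-in types:

```rust
// Sketch of the ordering fix on the retransmit side: use the same peer set
// as broadcast (tvu_peers) and remove the slot leader *before* computing
// the turbine neighbors/children, so both stages agree on the tree.
type Pubkey = [u8; 32];

fn retransmit_peers(tvu_peers: &[Pubkey], slot_leader: &Pubkey) -> Vec<Pubkey> {
    tvu_peers
        .iter()
        .filter(|p| *p != slot_leader) // leader removed up front, as in broadcast
        .cloned()
        .collect()
    // neighbors/children are then computed over this filtered set
}

fn main() {
    let leader = [1u8; 32];
    let peers = vec![[0u8; 32], leader, [2u8; 32]];
    assert_eq!(retransmit_peers(&peers, &leader).len(), 2);
}
```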
mergify[bot]
23e114e077 buffers data shreds to make larger erasure coded sets (#15849) (#16674)
Broadcast stage batches up to 8 entries:
https://github.com/solana-labs/solana/blob/79280b304/core/src/broadcast_stage/broadcast_utils.rs#L26-L29
which will be serialized into some number of shreds and chunked into FEC
sets of at most 32 shreds each:
https://github.com/solana-labs/solana/blob/79280b304/ledger/src/shred.rs#L576-L597
So depending on the size of entries, FEC sets can be small, which may
aggravate loss rate.
For example, 16 FEC sets of 2:2 data/code shreds each have a higher loss
rate than one 32:32 set.

This commit broadcasts data shreds immediately, but also buffers them
until it has a batch of 32 data shreds, at which point 32 coding shreds
are generated and broadcasted.

(cherry picked from commit 4f82b897bc)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-20 12:40:04 +00:00
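In other words, broadcasting stays immediate while erasure coding is deferred until a full batch exists. A sketch of that buffering shape, with `Shred` and `generate_coding_shreds` as illustrative stand-ins for the real shredder:

```rust
// Sketch of the buffering described above: data shreds go out immediately,
// but erasure coding waits for a full 32-shred batch so FEC sets stay large.
const FEC_SET_SIZE: usize = 32;

#[derive(Clone)]
struct Shred(Vec<u8>);

fn generate_coding_shreds(data: &[Shred]) -> Vec<Shred> {
    // stand-in: the real code derives Reed-Solomon parity shreds here
    data.to_vec()
}

struct FecBuffer {
    pending: Vec<Shred>,
}

impl FecBuffer {
    fn push(&mut self, shred: Shred, broadcast: &mut dyn FnMut(&Shred)) {
        broadcast(&shred); // data shreds are broadcast immediately
        self.pending.push(shred);
        if self.pending.len() == FEC_SET_SIZE {
            // only now generate and broadcast the 32 coding shreds
            for coding in generate_coding_shreds(&self.pending) {
                broadcast(&coding);
            }
            self.pending.clear();
        }
    }
}

fn main() {
    let mut buf = FecBuffer { pending: Vec::new() };
    let mut sent = 0usize;
    for _ in 0..FEC_SET_SIZE {
        buf.push(Shred(vec![0]), &mut |_s| sent += 1);
    }
    assert_eq!(sent, 2 * FEC_SET_SIZE); // 32 data + 32 coding
}
```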
mergify[bot]
09445506e2 CLI: Limit stake-history output by default (#16672)
(cherry picked from commit f91de6a84d)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-20 11:34:03 +00:00
mergify[bot]
6268fa32c6 RPC: use finalized as default pubsub commitment level (backport #16659) (#16665)
* RPC: use finalized as default pubsub commitment level (#16659)

* RPC: use finalized as default pubsub commitment level

* update docs

* Fix tests

(cherry picked from commit a7e65c0034)

# Conflicts:
#	core/tests/rpc.rs

* fix conflicts

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-20 10:07:16 +00:00
Kristofer Peterson
7df2525972 Refactored ShortU16Visitor::visit_seq() to reject overflows, extra leading zeros and ensure one-to-one encoding. 2021-04-20 03:01:52 -06:00
Trent Nelson
90543940bd sdk: ShortU16 - rename variables for clarity
ShortU16's implementation embeds its usage as the length of a
ShortVec, confusingly referring to both a 'len' and a 'size'
at the same time.
2021-04-20 03:01:52 -06:00
Trent Nelson
dd21f20555 sdk: Add ShortU16 deser test 2021-04-20 03:01:52 -06:00
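For context on the three `ShortU16` commits above: the type is the compact-u16 varint that prefixes a `ShortVec`'s length on the wire, so the deser test and the canonical-encoding checks in `visit_seq` both hinge on this format. A sketch of the encoding (the real serde impls live in `sdk/src/short_vec.rs`):

```rust
// Sketch of the compact-u16 ("ShortU16") wire format: little-endian
// base-128 varint, at most 3 bytes for a u16. Written out for illustration;
// the decoder additionally rejects overflows and non-canonical encodings.
fn encode_short_u16(mut value: u16) -> Vec<u8> {
    let mut bytes = Vec::with_capacity(3);
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // continuation bit: more bytes follow
        }
        bytes.push(byte);
        if value == 0 {
            return bytes;
        }
    }
}

fn main() {
    assert_eq!(encode_short_u16(0x0000), vec![0x00]);
    assert_eq!(encode_short_u16(0x007f), vec![0x7f]);
    assert_eq!(encode_short_u16(0x0080), vec![0x80, 0x01]);
    assert_eq!(encode_short_u16(0x3fff), vec![0xff, 0x7f]);
    assert_eq!(encode_short_u16(0xffff), vec![0xff, 0xff, 0x03]);
}
```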
Trent Nelson
20a282f03f Pin SPL downstream build to 589da55 2021-04-20 07:27:12 +00:00
Trent Nelson
b5894515e8 perf: use saturating/checked integer arithmetic 2021-04-19 23:05:26 -07:00
behzad nouri
f6bf48e3c8 renames is_last_in_fec_set back to is_last_data (#15848)
https://github.com/solana-labs/solana/pull/10095
renamed is_last_data to is_last_in_fec_set. However, the code shows that
this is actually meant to indicate where the serialized data is
complete:
https://github.com/solana-labs/solana/blob/420174d3d/ledger/src/shred.rs#L599-L600
https://github.com/solana-labs/solana/blob/420174d3d/ledger/src/shred.rs#L229-L231

There are multiple FEC sets for each `&[Entry]` serialized, and this flag
does not represent shreds last in FEC sets (only the very last one, by
overlap). So the name is wrong and confusing.

(cherry picked from commit 3b85cbc504)
2021-04-19 23:04:51 -07:00
mergify[bot]
63224bd11e Move derivation path into sdk (backport #16603) (#16606)
* Move derivation path into sdk (#16603)

* Move DerivationPath to sdk

* Remove eprintln

(cherry picked from commit 52f4b96a80)

* Fix legacy compile test

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-17 05:57:03 +00:00
mergify[bot]
4790b0a5cd CLI BIP32 prep: KeypairUrl refactor (backport #16592) (#16604)
* clap-utils: Rename KeypairUrl to SignerSource

(cherry picked from commit 09dcc9ea04)

* clap-utils: Reduce SignerSource's visibility

(cherry picked from commit c5ab3ba6f1)

* clap-utils: Use `uriparse` crate to parse `SignerSource`

(cherry picked from commit 5d1ef5d01d)

# Conflicts:
#	Cargo.lock
#	clap-utils/Cargo.toml

* clap-utils: Add explicit schemes for `ask` and `file` `SignerSource`s

(cherry picked from commit 6444f0e57b)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-16 22:43:47 +00:00
mergify[bot]
8beaa43e6e Feature-gate hash-based duplicate transaction check (#16600)
(cherry picked from commit 285f3c9d56)

# Conflicts:
#	sdk/src/feature_set.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-16 20:23:33 +00:00
mergify[bot]
151d9bd28c Don't parse uninitialized system/nonce accounts (#16584) (#16586)
(cherry picked from commit ba77e48c12)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-15 23:40:21 +00:00
mergify[bot]
0c2faa3168 Cli: move airdrop to rpc requests (#16557) (#16563)
* Add recent_blockhash to requestAirdrop

* Move tx confirmation to separate method

* Add RpcClient airdrop methods

* Request cli airdrop via RpcClient

* Pass optional faucet_addr into TestValidator and fix tests

* Update client/src/rpc_client.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

Co-authored-by: Michael Vines <mvines@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-15 18:12:07 +00:00
mergify[bot]
525421a669 Rotate CODECOV_TOKEN (#16578)
(cherry picked from commit a535c0e129)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-15 16:15:14 +00:00
mergify[bot]
2e5ad16b62 docs: Fix typo in program deploy instructions (#16572) (#16574)
(cherry picked from commit c8ed14c647)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-15 14:04:54 +00:00
Tyera Eulberg
87494812a7 v1.5: faucet backports (#16565)
* Faucet: repurpose cap and slice args to apply to single IPs (#16381)

* Single use stmt

* Log request IP

* Switch cap and slice to apply per IP

* Use SOL in logs, error msgs

* Use thiserror instead of overloading io::Error

* Return memo transaction for requests that exceed per-request-cap

* Handle faucet memos in cli

* Add some docs, esp about memo transaction

* Use SOL symbol & standardize memo

Co-authored-by: Michael Vines <mvines@gmail.com>

* Differentiate faucet tx-length errors

* Populate signature in cli airdrop memo case

Co-authored-by: Michael Vines <mvines@gmail.com>

* Add address_cache and exclude loopback from ip limit (#16487)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-15 10:13:14 +00:00
mergify[bot]
b555845ca2 dl-utils: use wide_msg everywhere for truncation on narrow terminals (#16554)
(cherry picked from commit e61b4b7d70)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-15 01:09:41 +00:00
mergify[bot]
98867430d5 Fix sanity test flakiness by prebuilding binaries (#16530) (#16545)
* Fix sanity test flakiness by prebuilding binaries

* ignore shellcheck

* bump

* nudge

* simplify

(cherry picked from commit 328e7690f3)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-14 18:50:35 +00:00
mergify[bot]
d7a8420d9a bump solana_rbpf from 0.2.5 to 0.2.7 (#16515) (#16524)
(cherry picked from commit f7eadd9d70)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-14 13:00:53 +00:00
mergify[bot]
ef3c33ad06 Remove blake3 from bpf program dependencies (#16506) (#16528)
(cherry picked from commit f641429056)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-14 03:22:12 +00:00
Justin Starry
02762ff785 v1.5: Use blake3 message hash in status cache (#16513)
* v1.5: Use blake3 message hash in status cache

* bump bpf ix count
2021-04-14 09:52:24 +08:00
mergify[bot]
07336e3e43 Fix account copy step in program test message processor (#16469) (#16471)
(cherry picked from commit 278c125d99)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-14 01:45:12 +00:00
mergify[bot]
e5a76572d9 Deprecate RpcClient methods, RpcRequest variants (#16516) (#16518)
* Deprecate RpcClient methods, RpcRequest variants

* Update cli to getSupply

(cherry picked from commit ccb11a939f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-13 21:27:46 +00:00
mergify[bot]
0d3756b4d8 Status cache improvements (#16174) (#16511)
Co-authored-by: sakridge <sakridge@gmail.com>
2021-04-13 13:08:31 +00:00
mergify[bot]
a6b346d876 removes OrderedIterator and transaction batch iteration order (backport #16153) (#16510)
Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-13 12:47:44 +00:00
mergify[bot]
4276591eb9 limits number of unique pubkeys in the crds table (bp #15539) (#16475)
* limits number of unique pubkeys in the crds table (#15539)

(cherry picked from commit 56923c91bf)

# Conflicts:
#	core/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-12 00:36:41 +00:00
mergify[bot]
127e7407e4 Fill in not-yet-finalized block-time if possible (#16460) (#16462)
(cherry picked from commit 8bc0bdd40b)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-09 21:36:44 +00:00
mergify[bot]
11ab894256 [easy, cleanup] Simplify some pattern-matches (bp #16402) (#16445)
* Simplify some pattern-matches (#16402)

When those match an exact combinator on Option / Result.

Tool-aided by [comby-rust](https://github.com/huitseeker/comby-rust).

(cherry picked from commit b08cff9e77)

# Conflicts:
#	accounts-cluster-bench/src/main.rs
#	core/src/rpc.rs
#	runtime/src/accounts_hash.rs
#	runtime/src/message_processor.rs

* Fix conflicts

Co-authored-by: François Garillot <4142+huitseeker@users.noreply.github.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-08 20:01:20 +00:00
mergify[bot]
f10f57e89f Cli: use get_inflation_rewards and limit epochs queried (#16408) (#16443)
* Fix block-with-limit when not finalized blocks found

* Enable confirmed commitment in getInflationReward

* Use get_inflation_rewards in cli

* Line up rewards output

* Add range validator

* Change cli epoch arg -> num epochs

* Add solana inflation rewards subcommand

* Consolidate epoch rewards meta

(cherry picked from commit bb9d2fd07a)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-08 19:05:59 +00:00
mergify[bot]
592a13b5f8 CI: Let cargo-install-all.sh resolve stable (#16429)
(cherry picked from commit 388ce12207)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-07 21:46:59 +00:00
mergify[bot]
5702744399 CLI: Fix rent panic (#16417) (#16425)
* CLI: Fix `rent` panic on non-numeric input (+monikers)

* Update cli/src/cluster_query.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Update cli/src/cluster_query.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Update cli/src/cluster_query.rs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit c5c3ae0203)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-07 17:17:51 +00:00
mergify[bot]
c7629d595b Remove wallclock throttle from BankingStage tests (bp #16396) (#16398)
* No wallclock throttle tests (#16396)

(cherry picked from commit 1219842a96)

# Conflicts:
#	core/src/banking_stage.rs

* Resolve conflicts

Co-authored-by: carllin <carl@solana.com>
2021-04-07 16:05:26 +00:00
mergify[bot]
cfc824d38d Speed up net.sh builds (bp #16360) (#16419)
* Speed up net.sh builds (#16360)

* Speed up net.sh builds

* feedback

* Update net/net.sh

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* feedback

* fix

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 6cd4bc5e60)

# Conflicts:
#	scripts/cargo-install-all.sh

* fix

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-07 08:22:48 +00:00
mergify[bot]
492a02d737 Rpc: introduce get_inflation_reward rpc call (bp #16278) (#16409)
* Rpc: introduce get_inflation_reward rpc call (#16278)

* feat: introduce get_inflation_reward rpc call

* fix: style suggestions

* fix: more style changes and match how other rpc functions are defined

* feat: get reward for a single epoch

* feat: default to the most recent epoch

* fix: don't factor out get_confirmed_block

* style: introduce from impl for RpcEncodingConfigWrapper

* style: bring commitment into variable

* feat: support multiple pubkeys for get_inflation_reward

* feat: add get_inflation_reward to rpc client

* feat: return rewards in order

* fix: rename pubkeys to addresses

* docs: introduce jsonrpc docs for get_inflation_reward

* style: early return in map (not sure which is more idiomatic)

* fix: call the rpc client function args addresses as well

* fix: style

* fix: filter out only addresses we care about

* style: make this more idiomatic

* fix: change rpc client epoch to optional and include some docs edits

* feat: filter out rent rewards in get_inflation_reward

* feat: add option epoch config param to get_inflation_reward

* feat: rpc client get_inflation_reward takes epoch instead of config and some filter staking and voting rewards

(cherry picked from commit e501fa5f0b)

# Conflicts:
#	client/src/rpc_client.rs
#	core/src/rpc.rs

* fix: resolve cherry-pick conflicts

* fix: change bool_to_option filter_map to filter + map

* style: use filter_map in place of filter + map

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-04-07 06:09:27 +00:00
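A hedged usage sketch of the client method these bullets describe, assuming the final signature (addresses in, optional epoch defaulting to the most recent, rewards returned in address order):

```rust
// Hedged usage sketch for the new RPC method; assumes the signature the
// bullets above describe: addresses in, Option<Epoch>, rewards out in the
// same order (None for addresses with no reward that epoch).
use solana_client::rpc_client::RpcClient;
use solana_sdk::pubkey::Pubkey;
use std::str::FromStr;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RpcClient::new("http://localhost:8899".to_string());
    let addresses =
        vec![Pubkey::from_str("Vote111111111111111111111111111111111111111")?];
    // None => default to the most recent epoch, per the commit description.
    let rewards = client.get_inflation_reward(&addresses, None)?;
    for (address, reward) in addresses.iter().zip(rewards) {
        match reward {
            Some(r) => println!("{}: {} lamports", address, r.amount),
            None => println!("{}: no reward", address),
        }
    }
    Ok(())
}
```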
mergify[bot]
624f9790bd validator: Use a const for wait for supermajority threshold (#16391)
(cherry picked from commit 7a2a39093d)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-06 05:02:25 +00:00
mergify[bot]
a23fb497ec Cluster info shred spies (bp #16389) (#16394)
* cluster-info: Don't subtract non-shred spies from node count

(cherry picked from commit b6b08706b9)

* cluster-info: Get rid of some integer math while we're here

(cherry picked from commit b71875df61)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-06 01:25:16 +00:00
mergify[bot]
6107de87c0 merkle-tree: fix build when targeting bpf (bp #16335) (#16341)
* merkle-tree: Add Xargo.toml

(cherry picked from commit a1d9b53cd7)

* merkle-tree: Get `Hash` et al. from program instead of sdk

(cherry picked from commit ddc0a16cec)

* merkle-tree: Use `matches` crate when targeting eBPF

(cherry picked from commit a44c32694f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-05 20:42:37 +00:00
mergify[bot]
0c0b65e9ab Only get Blockstore::last_root once (#16362) (#16365)
(cherry picked from commit b8b6777262)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-05 06:54:23 +00:00
mergify[bot]
9b04b634bd Fixup iterator method (#16357) (#16358)
(cherry picked from commit 1a13d22984)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-05 00:46:47 +00:00
mergify[bot]
aca3534e89 Remove unprocessed transactions from log notifications (#16349) (#16352)
(cherry picked from commit 0596cf5405)

Co-authored-by: Justin Starry <justin@solana.com>
2021-04-04 17:06:18 +00:00
mergify[bot]
b4bb062a2e Wait for 90 percent of stake before starting (#16340) (#16343)
(cherry picked from commit 3429785d9b)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-04-03 22:34:37 +00:00
mergify[bot]
194a07862f makes test_pull_request_time_pruning smaller (#16128) (#16338)
(cherry picked from commit b041b55028)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-03 14:53:59 +00:00
mergify[bot]
e043825de2 limits CrdsGossipPull::pull_request_time size (bp #15793) (#16334)
* limits CrdsGossipPull::pull_request_time size (#15793)

There is no pruning logic on CrdsGossipPull::pull_request_time
https://github.com/solana-labs/solana/blob/79ac1997d/core/src/crds_gossip_pull.rs#L172-L174
potentially allowing this to take too much memory.

Additionally, CrdsGossipPush::last_pushed_to is pruning recent push
timestamps:
https://github.com/solana-labs/solana/blob/79ac1997d/core/src/crds_gossip_push.rs#L275-L279
instead of the older ones.

Co-authored-by: Nathan Hawkins <utsl@utsl.org>
(cherry picked from commit a6c23648cb)

# Conflicts:
#	core/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-03 13:29:45 +00:00
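The fix bounds the map and evicts the oldest request timestamps, the opposite of the `last_pushed_to` bug cited above. A minimal sketch of oldest-first eviction, with a plain `HashMap` and a linear scan standing in for the real bounded structure:

```rust
// Sketch of oldest-first pruning for a bounded timestamp map, as the commit
// describes for CrdsGossipPull::pull_request_time. A HashMap plus an
// explicit scan stands in for the real (LRU-style) structure.
use std::collections::HashMap;

type Pubkey = u64; // stand-in key type
const CAPACITY: usize = 4; // real capacity is much larger

fn record_request(times: &mut HashMap<Pubkey, u64>, from: Pubkey, now: u64) {
    times.insert(from, now);
    while times.len() > CAPACITY {
        // Evict the entry with the *oldest* timestamp, keeping recent ones;
        // the pre-fix bug in last_pushed_to pruned recent entries instead.
        let oldest = *times.iter().min_by_key(|(_, t)| **t).unwrap().0;
        times.remove(&oldest);
    }
}

fn main() {
    let mut times = HashMap::new();
    for (key, now) in (0u64..6).map(|i| (i, 100 + i)) {
        record_request(&mut times, key, now);
    }
    assert_eq!(times.len(), CAPACITY);
    assert!(!times.contains_key(&0) && !times.contains_key(&1)); // oldest evicted
}
```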
mergify[bot]
3e91fa313e Parse SPL associated-token-account instructions (bp #16318) (#16320)
* Parse SPL associated-token-account instructions (#16318)

(cherry picked from commit a902505810)

# Conflicts:
#	transaction-status/Cargo.toml

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-02 01:45:15 +00:00
mergify[bot]
9ac4b11185 solana-install init can now select a pre-release from Github (#16319)
(cherry picked from commit d9176c1903)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-04-01 23:41:29 +00:00
Trent Nelson
c32bd40aa4 Bump version to v1.5.19 2021-04-01 20:23:50 +00:00
mergify[bot]
7e480df9fa Rpc: enable getConfirmedSignaturesForAddress2 to return confirmed (not yet finalized) data (#16281) (#16292)
* Update blockstore method to allow return of unfinalized signature

* Support confirmed sigs in getConfirmedSignaturesForAddress2

* Add deprecated comments

* Update docs

* Enable confirmed transaction-history in cli

* Return real confirmation_status; fill in not-yet-finalized block time if possible

(cherry picked from commit da27acabcc)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-01 11:29:51 -06:00
behzad nouri
d1da10f512 pushes addresses instead of insert 2021-04-01 11:06:43 -06:00
mergify[bot]
e55d5385bd adds an inverted index to leader schedule (#15249) (#16286)
next_leader_slot is doing a linear search for slots in which a pubkey is
the leader:
https://github.com/solana-labs/solana/blob/e59a24d9f/ledger/src/leader_schedule_cache.rs#L123-L157
This can be done more efficiently by adding an inverted index to leader
schedule.

(cherry picked from commit e403aeaf05)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-01 14:27:54 +00:00
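The inverted index maps each leader pubkey to the slot indices it owns, turning `next_leader_slot` into a lookup plus a binary search over that pubkey's own slots. A hedged sketch with simplified types:

```rust
// Sketch of the inverted index described above: map each leader pubkey to
// the (sorted) slot indices where it leads, so next_leader_slot no longer
// scans the entire schedule.
use std::collections::HashMap;

type Pubkey = &'static str; // stand-in for the real Pubkey type

struct LeaderSchedule {
    slot_leaders: Vec<Pubkey>,
    index: HashMap<Pubkey, Vec<usize>>, // pubkey -> slot indices, ascending
}

impl LeaderSchedule {
    fn new(slot_leaders: Vec<Pubkey>) -> Self {
        let mut index: HashMap<Pubkey, Vec<usize>> = HashMap::new();
        for (i, leader) in slot_leaders.iter().enumerate() {
            index.entry(*leader).or_default().push(i);
        }
        Self { slot_leaders, index }
    }

    /// First slot index >= `from` in which `pubkey` is leader, if any.
    fn next_leader_slot(&self, pubkey: Pubkey, from: usize) -> Option<usize> {
        let slots = self.index.get(pubkey)?;
        let pos = slots.partition_point(|&s| s < from);
        slots.get(pos).copied()
    }
}

fn main() {
    let schedule = LeaderSchedule::new(vec!["a", "b", "a", "c", "a"]);
    assert_eq!(schedule.next_leader_slot("a", 1), Some(2));
    assert_eq!(schedule.next_leader_slot("c", 4), None);
    let _ = &schedule.slot_leaders; // the flat schedule is kept alongside
}
```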
mergify[bot]
708a45e5ed indexes epoch slots in crds table (#15459) (#16287)
ClusterInfo::get_epoch_slots_since scans the entire crds table to obtain
epoch-slots inserted since a timestamp:
https://github.com/solana-labs/solana/blob/013daa8f4/core/src/cluster_info.rs#L1245-L1262
The alternative is to index epoch-slots in crds table ordered by their
insert timestamp.

(cherry picked from commit 5a9896706c)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-04-01 14:21:00 +00:00
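The adopted alternative keeps epoch-slots values ordered by an insert ordinal, so "everything since T" becomes a tail-range read. A minimal sketch with stand-in types:

```rust
// Sketch of indexing crds epoch-slots by insert order: a BTreeMap keyed by
// a monotonically increasing ordinal lets get_epoch_slots_since() read a
// tail range instead of scanning the whole table.
use std::collections::BTreeMap;

#[derive(Clone, Debug, PartialEq)]
struct EpochSlots(Vec<u64>);

#[derive(Default)]
struct Crds {
    cursor: u64,                            // next insert ordinal
    epoch_slots: BTreeMap<u64, EpochSlots>, // ordinal -> value
}

impl Crds {
    fn insert(&mut self, value: EpochSlots) {
        self.epoch_slots.insert(self.cursor, value);
        self.cursor += 1;
    }

    /// All epoch-slots inserted at or after `since`, plus the new cursor.
    fn get_epoch_slots_since(&self, since: u64) -> (Vec<EpochSlots>, u64) {
        let values = self
            .epoch_slots
            .range(since..)
            .map(|(_, v)| v.clone())
            .collect();
        (values, self.cursor)
    }
}

fn main() {
    let mut crds = Crds::default();
    crds.insert(EpochSlots(vec![1]));
    crds.insert(EpochSlots(vec![2, 3]));
    let (values, cursor) = crds.get_epoch_slots_since(1);
    assert_eq!(values, vec![EpochSlots(vec![2, 3])]);
    assert_eq!(cursor, 2);
}
```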
mergify[bot]
afd5b9f09b Rpc: fix getConfirmedTransaction slot (#16288) (#16289)
* Fix transaction blockstore apis

* Update blockstore apis in rpc

(cherry picked from commit 18bd47dbe1)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-04-01 08:47:03 +00:00
mergify[bot]
a9a7f21f51 Add RpcClient::get_stake_activation() (bp #16165) (#16294)
* Add RpcClient::get_stake_activation()

(cherry picked from commit 5791b95b17)

# Conflicts:
#	client/src/rpc_client.rs

* Fix conflicts

* Derive PartialEq for StakeActivationState

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-04-01 07:30:25 +00:00
mergify[bot]
f69ee59e21 docs: Reduce airdrop examples to 1 SOL (#16240)
(cherry picked from commit 2bcfbad653)

# Conflicts:
#	docs/src/cli/transfer-tokens.md

Co-authored-by: Trent Nelson <trent@solana.com>
2021-04-01 06:53:28 +00:00
mergify[bot]
aad1e4ee4a security policy: Add out-of-scope section (bp #16249) (#16250)
* security policy: Add out-of-scope section

(cherry picked from commit e9e46ff521)

# Conflicts:
#	SECURITY.md

* Update SECURITY.md

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 700ebde474)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-31 04:47:25 +00:00
Michael Vines
97a58d7320 Add channel version check 2021-03-29 23:13:23 -07:00
Tyera Eulberg
ac4722afd7 Bump version to 1.5.18 2021-03-29 23:00:20 -07:00
mergify[bot]
9ded0bdc03 Allow incomplete features in frozen-abi (#16204)
(cherry picked from commit 9ba9d2a8ae)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-30 05:33:15 +00:00
Justin Starry
b2cee2753d Print the rust version when building bpf programs (#16181) (#16203) 2021-03-30 03:32:16 +00:00
mergify[bot]
09843a28d9 Rpc: enable getConfirmedBlocks and getConfirmedBlocksWithLimit to return confirmed (not yet finalized) data (bp #16161) (#16197)
* Rpc: enable getConfirmedBlocks and getConfirmedBlocksWithLimit to return confirmed (not yet finalized) data (#16161)

* Add commitment config capabilities

* Use rpc limit if no end_slot provided

* Limit to actually finalized blocks

* Support confirmed blocks in getConfirmedBlocks and getConfirmedBlocksWithLimit

* Update docs

* Add client plumbing

* Rename config enum

(cherry picked from commit 60ed8e2892)

# Conflicts:
#	client/src/rpc_config.rs
#	core/src/rpc.rs

* Fix conflicts

* Future-aware enum name

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-29 22:11:46 +00:00
Tyera Eulberg
72ff8973a2 Revert "Print the rust version when building bpf programs (#16181) (#16182)" (#16200)
This reverts commit cc7342a8ef.
2021-03-29 15:00:25 -06:00
mergify[bot]
cc7342a8ef Print the rust version when building bpf programs (#16181) (#16182)
(cherry picked from commit abada56ba1)

# Conflicts:
#	sdk/bpf/scripts/install.sh

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-29 20:00:23 +08:00
mergify[bot]
ffe2f96a58 Fix handling of invoked ix accounts in program-test (#16170) (#16175)
(cherry picked from commit 27ab415ecc)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-29 07:00:53 +00:00
mergify[bot]
fb08b41513 Rpc: enable getConfirmedBlock and getConfirmedTransaction to return confirmed (not yet finalized) data (bp #16142) (#16159)
* Rpc: enable getConfirmedBlock and getConfirmedTransaction to return confirmed (not yet finalized) data (#16142)

* Add Blockstore block and tx apis that allow unrooted responses

* Add TransactionStatusMessage, and send on bank freeze; also refactor TransactionStatusSender

* Track highest slot with tx-status writes complete

* Rename and unpub fn

* Add commitment to GetConfirmed input configs

* Support confirmed blocks in getConfirmedBlock

* Support confirmed txs in getConfirmedTransaction

* Update sigs-for-addr2 comment

* Enable confirmed block in cli

* Enable confirmed transaction in cli

* Review comments

* Rename blockstore method

(cherry picked from commit 433f1ead1c)

# Conflicts:
#	core/src/replay_stage.rs
#	core/src/rpc.rs
#	core/src/validator.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-27 05:45:00 +00:00
mergify[bot]
579575fe84 Add timeout for local cluster partition tests (bp #16123) (#16136)
* Add timeout for local cluster partition tests (#16123)

* Add timeout for local cluster partition tests

* fix optimistic conf test logs

* Bump instruction count assertions

(cherry picked from commit e817a6db00)

# Conflicts:
#	local-cluster/Cargo.toml
#	local-cluster/tests/local_cluster.rs
#	programs/bpf/tests/programs.rs

* Fix conflicts

* Revert instruction count assertion changes

Co-authored-by: Justin Starry <justin@solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-26 00:15:08 +00:00
mergify[bot]
0bb95b470f clap-utils: Allow NullSigners outside sign-only mode (#16115)
(cherry picked from commit 7f0ac6a67c)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-25 17:48:41 +00:00
mergify[bot]
4e6d175697 Simplify account.rent_epoch handling for sysvar rent (bp #16049) (#16117)
* Simplify account.rent_epoch handling for sysvar rent (#16049)

* Add some code for special local testing

* Add comment to store_account_and_update_capitalization

* Simplify account.rent_epoch handling for sysvar rent

* Introduce *_for_test functions

* Add deprecation messages to existing api

(cherry picked from commit 6d5c6c17c5)

# Conflicts:
#	programs/bpf_loader/src/lib.rs
#	programs/stake/src/stake_instruction.rs
#	programs/vote/src/vote_instruction.rs
#	runtime/src/accounts.rs
#	runtime/src/bank.rs
#	runtime/src/message_processor.rs
#	runtime/src/system_instruction_processor.rs
#	sdk/benches/slot_hashes.rs
#	sdk/benches/slot_history.rs
#	sdk/src/account.rs
#	sdk/src/keyed_account.rs
#	sdk/src/native_loader.rs
#	sdk/src/recent_blockhashes_account.rs

* Fix conflicts

* rustfmt

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-03-25 18:12:33 +09:00
mergify[bot]
173ca7b448 program: Correct clamp in Message::signer_keys() (#16113)
(cherry picked from commit 8b3de72e2a)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-25 06:41:14 +00:00
mergify[bot]
fc36a0c5ba Support getBlockTime for unfinalized blocks (#16103) (#16109)
(cherry picked from commit a8ef29df27)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-25 06:05:41 +00:00
mergify[bot]
3317a14bd5 rpc: add getSlotLeaders method (#16057) (#16078)
(cherry picked from commit e7fd7d46cf)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-24 03:20:57 +00:00
mergify[bot]
64a754f610 Handle blockstore insert dup checks (#16051) (#16065)
(cherry picked from commit d76ad33597)

Co-authored-by: carllin <carl@solana.com>
2021-03-23 05:19:05 +00:00
Michael Vines
734a4e39f7 Add stub --allow-unfunded-recipient argument for forward compatibility with v1.6 2021-03-22 13:45:22 -07:00
Tyera Eulberg
f356a05e96 Bump version to 1.5.17 (#16043) 2021-03-19 18:56:05 -06:00
Tyera Eulberg
c3c4ce48af Make getStakeActivation response consistent for undelegated accounts (#16039) 2021-03-19 15:51:40 -06:00
mergify[bot]
452663a99a Remove read only locks when they hit ref count zero, cleanup accounts locking (#15449) (#16006)
(cherry picked from commit 9a7cd8885d)

Co-authored-by: carllin <carl@solana.com>
2021-03-19 20:59:18 +00:00
mergify[bot]
c0d27503ce docs: SIGUSR1 killing wrapper shell scripts (#16008)
(cherry picked from commit 07dc522981)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-19 07:30:14 +00:00
mergify[bot]
57833a36e3 Sanitize instruction index when loading instruction from sysvar (#15942) (#16003)
(cherry picked from commit 4c5660ba7a)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-19 13:25:39 +08:00
mergify[bot]
e58365a160 cli cleanup (#15990) (#15996)
(cherry picked from commit 067b390194)

Co-authored-by: Jack May <jack@solana.com>
2021-03-18 21:02:59 +00:00
mergify[bot]
a5f8635fdc rpc: Add config options limiting getConfirmedBlock response data (bp #15970) (#15994)
* rpc: Add config options limiting getConfirmedBlock response data (#15970)

* Add new confirmed block struct

* Add RpcConfirmedBlockConfig options

* Configure block response based on new options

* Add client api, use in cli fetch_epoch_rewards

* Update docs

* Apply review suggestions

(cherry picked from commit aa54c468ea)

# Conflicts:
#	core/src/rpc.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-18 20:51:53 +00:00
mergify[bot]
de8df24203 Avoid panic when validator doesn't have performance samples (#15976) (#15980)
(cherry picked from commit ba33c9e18e)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-18 09:22:02 +00:00
mergify[bot]
ea1312ab0b remote-wallet: Expose Ledger app settings (#15977)
(cherry picked from commit 2dabcac0da)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-18 09:07:24 +00:00
mergify[bot]
e0119e7de7 Close buffer accounts (bp #15887) (#15971)
* Add Close instruction and tooling to upgradeable loader (#15887)

(cherry picked from commit 7f500d610c)

# Conflicts:
#	cli/src/program.rs
#	programs/bpf_loader/src/lib.rs

* resolve conflicts

* slice fill not supported on older rust

Co-authored-by: Jack May <jack@solana.com>
2021-03-18 07:33:40 +00:00
mergify[bot]
9c596cfd6c Allow unbounded wallclock processing time in tests (#15961) (#15965)
(cherry picked from commit f548a04fae)

Co-authored-by: carllin <carl@solana.com>
2021-03-18 00:09:08 +00:00
mergify[bot]
e4032ec87f Ignore flaky test_banking_stage_entries_only and test_banking_stage_entryfication (#15958)
(cherry picked from commit 8a9b51952e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-17 20:25:13 +00:00
DimAn
e3bab5a987 Revert to removing only tmp-
(cherry picked from commit a5d144b00f)
2021-03-17 11:13:42 -07:00
DimAn
5f426ee2dc Revert to snapshots 2
(cherry picked from commit 20b53eb4b4)
2021-03-17 11:13:42 -07:00
DimAn
a76426514e Revert to snapshots
Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 0b42379ed7)
2021-03-17 11:13:42 -07:00
DimAn
4c48740647 add missed suggestion
(cherry picked from commit a43b3674c7)
2021-03-17 11:13:42 -07:00
DimAn
d9c7f78717 Apply suggestions from code review
Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit cfb01e26dd)
2021-03-17 11:13:42 -07:00
DimAn
305fbe38e9 Add option for separate snapshot location
(cherry picked from commit 6126878f509c69e23480a5ec22b3271e2b16e072)
(cherry picked from commit 0209d334bd)
2021-03-17 11:13:42 -07:00
Michael Vines
a6b7cc202a Download snapshot files with a tmp- prefix so they'll automatically be cleaned up if interrupted
(cherry picked from commit 58b980f9cd)
2021-03-17 10:18:14 -07:00
Michael Vines
3584a764f9 Replace solana-program-test when building example-helloworld 2021-03-17 09:09:08 -07:00
mergify[bot]
eed62bc408 Add helper for paring down signers to those required by a tx message (bp #15899) (#15937)
* sdk: Add accessor for signer pubkeys of a tx message

(cherry picked from commit bf33ce8906)

* clap-utils: Add helper to `CliSignerInfo` for getting signers for a message

(cherry picked from commit 4e99f1e634)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-17 05:22:40 +00:00
mergify[bot]
6a334a8bef CLI: Support dumping the TX message in sign-only mode (#15932)
(cherry picked from commit 672e9c640f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-17 04:41:08 +00:00
Michael Vines
be05c8b121 Bump version to 1.5.16 2021-03-16 13:29:52 -07:00
mergify[bot]
9bd3773934 nit: fix spelling (#15908) (#15910)
(cherry picked from commit 5760cf0f41)

Co-authored-by: Jack May <jack@solana.com>
2021-03-16 11:51:15 -07:00
mergify[bot]
999f81c56d Charge compute budget for bytes passed via cpi (bp #15874) (#15904)
* Charge compute budget for bytes passed via cpi (#15874)

(cherry picked from commit ad9901d7c6)

# Conflicts:
#	programs/bpf_loader/src/syscalls.rs
#	sdk/src/feature_set.rs
#	sdk/src/process_instruction.rs

* fix conflicts

* nudge

Co-authored-by: Jack May <jack@solana.com>
2021-03-16 18:47:40 +00:00
mergify[bot]
1967b90489 fix: compute pre/post token balances on all accounts if token program present (#15900) (#15922)
* fix: compute pre/post token balances on all accounts if token program present

* fix: skip token program in balance query

* fix: prevent program ids from being collected

(cherry picked from commit 61112d4826)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-03-16 18:46:53 +00:00
Michael Vines
22ca850ce7 Add AccountSharedData stub for forwards compatibility with the v1.6 release line 2021-03-16 11:36:10 -07:00
mergify[bot]
824daab8f6 Cli: better estimate of epoch time elapsed/remaining (bp #15893) (#15917)
* Cli: better estimate of epoch time elapsed/remaining (#15893)

* Add rpc_client api for getRecentPerformanceSamples

* Prep fn for variable avg slot time

* Use recent-perf-samples to more-accurately estimate epoch completed times

* Spell out average

(cherry picked from commit 3726358f51)

# Conflicts:
#	cli-output/src/cli_output.rs

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-16 17:59:31 +00:00
mergify[bot]
191e51b01d Wallclock BankingStage Throttle (#15731) (#15889)
(cherry picked from commit c1ba265dd9)

Co-authored-by: carllin <carl@solana.com>
2021-03-16 08:40:03 +00:00
Michael Vines
0de081c776 Pin solana crate versions to prevent downstream users from accidentally mixing crate versions 2021-03-16 00:32:15 -07:00
Michael Vines
9a151259b2 =1.5.15 2021-03-16 00:32:15 -07:00
Michael Vines
3da1f67d40 Build full SPL in CI 2021-03-15 23:30:35 -07:00
Michael Vines
39a2fbe2bf Add Instruction::new_with_bincode
Programs can now prepare for the deprecation of `Instruction::new` in v1.6
2021-03-16 05:21:33 +00:00
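A short usage sketch of the new constructor, which bincode-serializes the instruction data (`MyInstruction` and the account list here are illustrative):

```rust
// Usage sketch: new_with_bincode is a drop-in for the soon-deprecated
// Instruction::new, bincode-serializing `data` into the instruction.
use serde::Serialize;
use solana_sdk::{
    instruction::{AccountMeta, Instruction},
    pubkey::Pubkey,
};

// Hypothetical program instruction enum, for illustration only.
#[derive(Serialize)]
enum MyInstruction {
    DoThing { amount: u64 },
}

fn build_ix(program_id: Pubkey, actor: Pubkey) -> Instruction {
    Instruction::new_with_bincode(
        program_id,
        &MyInstruction::DoThing { amount: 42 },
        vec![AccountMeta::new(actor, true)],
    )
}

fn main() {
    let ix = build_ix(Pubkey::new_unique(), Pubkey::new_unique());
    println!("instruction data: {:?}", ix.data);
}
```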
mergify[bot]
7a08d47588 Export tokio for program-test clients (#15894)
(cherry picked from commit 430ed6d774)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-16 05:03:03 +00:00
mergify[bot]
3f3f3bb443 increment_cargo_version.sh tune ups (bp #15880) (#15891)
* Disallow version bump with dirty working tree

(cherry picked from commit 853e735edf)

* Ignore `not_paths` for `*.md` files when bumping version

(cherry picked from commit 510760d81b)

* Also ignore `*/node_modules/*` paths when bumping version

(cherry picked from commit 2bf46b789f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-16 01:58:15 +00:00
mergify[bot]
e2f893d743 Fix real_number_string_trimmed zero-decimal behavior (#15873) (#15876)
* Add failing test

* Don't strip zeroes from zero-decimal amounts

* Add zero-case test

(cherry picked from commit c40bd5f394)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-15 23:45:36 +00:00
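The bug: with zero decimal places there is no decimal point, so trimming trailing zeros corrupts integers (e.g. 100 -> "1"). A sketch of the fixed behavior, mirroring the `real_number_string_trimmed(amount, decimals)` shape:

```rust
// Sketch of the fixed behavior: format `amount` with `decimals` fractional
// digits, trimming trailing zeros only when there is a fractional part.
fn real_number_string(amount: u64, decimals: u8) -> String {
    let decimals = decimals as usize;
    if decimals > 0 {
        let divisor = 10u64.pow(decimals as u32);
        format!(
            "{}.{:0width$}",
            amount / divisor,
            amount % divisor,
            width = decimals
        )
    } else {
        amount.to_string()
    }
}

fn real_number_string_trimmed(amount: u64, decimals: u8) -> String {
    let s = real_number_string(amount, decimals);
    if decimals == 0 {
        return s; // zero-decimal amounts are returned untouched
    }
    let s = s.trim_end_matches('0');
    s.trim_end_matches('.').to_string()
}

fn main() {
    assert_eq!(real_number_string_trimmed(1_500_000, 6), "1.5");
    assert_eq!(real_number_string_trimmed(100, 0), "100"); // not "1"
}
```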
Michael Vines
1a66968126 Update cargo lock files on version bump 2021-03-15 20:54:55 +00:00
mergify[bot]
437f0bb8c6 Rpc: support extended config for getConfirmedBlock (bp #15827) (#15832)
* Rpc: support extended config for getConfirmedBlock (#15827)

* Add rpc confirmed-block config wrapper to support struct of extended config

* Update docs

* Make config wrapper generic and use in getConfirmedTransaction as well

* Update/clean confirmed-tx docs

(cherry picked from commit 5b2da19c93)

# Conflicts:
#	core/src/rpc.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-13 00:02:26 +00:00
mergify[bot]
16f8fcd711 solana-install update now updates a named release to the latest patch version (bp #15828) (#15830)
* `solana-install update` now updates a named release to the latest patch version

(cherry picked from commit 79280b304b)

# Conflicts:
#	install/Cargo.toml

* resolve conflicts

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-12 22:27:05 +00:00
mergify[bot]
f511625887 Add more slot update notifications (bp #15734) (#15821)
* Add more slot update notifications (#15734)

* Add more slot update notifications

* fix merge

* Address feedback and add integration test

* switch to datapoint

* remove unused shred method

* fix clippy

* new thread for rpc completed slots

* remove extra constant

* fixes

* rely on channel closing

* fix check

(cherry picked from commit 918d04e3f0)

* fix tests

* fix fmt

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-12 15:25:45 +00:00
mergify[bot]
4c3bfcaedf Remove old feature: simple_capitalization (bp #15763) (#15814)
* Remove old feature: simple_capitalization (#15763)

* Remove old feature: simple_capitalization

* Fix another failing test in core

* Finish up test cleanup

* Further clean up a bit

(cherry picked from commit 4bbeb9c033)

# Conflicts:
#	runtime/src/accounts_db.rs
#	runtime/src/bank.rs

* Fix conflicts

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-03-12 05:21:55 +00:00
mergify[bot]
f44eeb4165 cli: improve deploy error reporting (#15806) (#15810)
(cherry picked from commit e1ceb430e3)

Co-authored-by: Jack May <jack@solana.com>
2021-03-11 23:00:37 +00:00
mergify[bot]
29730b5a19 Default --ledger arg to "ledger" for solana-validator and solana-ledger-tool (#15809)
(cherry picked from commit aa2b2d6b75)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-11 22:28:34 +00:00
mergify[bot]
6e214bbc04 Docs fixups (bp #15801) (#15802)
* docs: add docs links for crates published to crates.io

(cherry picked from commit 24d18b3cf2)

# Conflicts:
#	core/Cargo.toml
#	measure/Cargo.toml
#	programs/bpf/rust/finalize/Cargo.toml

* docs: add rust client api entry

(cherry picked from commit 3e6c7c4a3e)

* docs: rename 'deployed programs' section to 'on-chain programs'

(cherry picked from commit 0e452c8d91)

* docs: 'builtins' -> 'runtime facilities'

(cherry picked from commit 9c8be34906)

* docs: stabilize spl token jsonrpc methods

(cherry picked from commit 45190f6281)

* docs: deprecate lastvalidslot field of jsonrpc getfees

(cherry picked from commit c4ee1ab710)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-11 21:49:14 +00:00
mergify[bot]
a79ba512a8 add catchup average speed and remaining time (#15608) (#15805)
* add catchup average speed and remaining time

* code style and improve average time remaining calculation

* code style

* remove instant time remaining

* negative speed perceives better

* Some little improves and comments of catchup avg and eta

* format code of catchup avg and eta

* fix copy-paste error

(cherry picked from commit c078e01fa9)

Co-authored-by: DimAn <diman@diman.io>
2021-03-11 16:15:24 +00:00
mergify[bot]
e777436cfd sdk: add macro for unchecked div with const denominator (#15803)
(cherry picked from commit 79ac1997de)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-11 10:03:30 +00:00
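The idea: when the denominator is a nonzero constant, the division can never fail, and a macro can enforce both properties at compile time. An illustrative macro in that spirit (not the sdk's exact implementation):

```rust
// Illustrative macro: force the denominator to be a nonzero constant,
// checked at compile time, so the division needs no runtime guard. This is
// a sketch of the idea, not the sdk's actual macro.
macro_rules! div_by_nonzero_const {
    ($num:expr, $den:expr) => {{
        // Evaluating the denominator in a const forces it to be a constant
        // expression, and 1 / $den fails to compile when it is zero.
        const _: u64 = 1 / $den;
        $num / $den
    }};
}

fn main() {
    let ticks: u64 = 12_345;
    let seconds = div_by_nonzero_const!(ticks, 160u64);
    assert_eq!(seconds, 77);
    // div_by_nonzero_const!(ticks, 0u64); // would fail to compile
}
```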
mergify[bot]
b4f5627a5b Improve load_largest_accounts more (bp #15785) (#15792)
* Improve load_largest_accounts more (#15785)

* Add load_largest_accounts bench

* Check lamports before address filter

* Use BinaryHeap, add Accounts test

* Use pubkey reference in the min-heap

Also, flatten code with early returns

Co-authored-by: Greg Fitzgerald <greg@solana.com>
(cherry picked from commit 9c1198c0c7)

# Conflicts:
#	runtime/src/accounts.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-10 20:14:40 +00:00
mergify[bot]
d4790be689 Improve load_largest_accounts and add Bank test (#15775) (#15783)
(cherry picked from commit 5991cef5f5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-10 01:36:08 +00:00
mergify[bot]
58a9639810 Add tracer key for tracing transaction path through the network (#15732) (#15771)
(cherry picked from commit 2bee9435f3)

Co-authored-by: carllin <carl@solana.com>
2021-03-09 04:44:16 +00:00
mergify[bot]
74b13605c0 Report datapoint on number of retransmit shreds (#15694) (#15769)
(cherry picked from commit 331c45decf)

Co-authored-by: carllin <carl@solana.com>
2021-03-09 03:08:36 +00:00
Tyera Eulberg
9fbc03d517 Bump version to 1.5.15 (#15768) 2021-03-09 01:48:54 +00:00
mergify[bot]
7f1368e792 Remove old feature: cumulative_rent_related_fixes (bp #15754) (#15757)
* Remove old feature: cumulative_rent_related_fixes (#15754)

(cherry picked from commit 8b0c6db871)

# Conflicts:
#	runtime/src/accounts.rs
#	runtime/src/bank.rs
#	sdk/src/feature_set.rs

* Fix conflicts

* Remove stale comment

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-03-08 12:34:08 +09:00
mergify[bot]
c60e21b900 PoH batch size calibration (#15717) (#15747)
(cherry picked from commit d09112fa6d)

# Conflicts:
#	local-cluster/src/validator_configs.rs

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-06 04:29:54 +00:00
mergify[bot]
30781bb7b1 consolidate DEFAULT_HASHES_PER_TICK (#14463) (#15748)
(cherry picked from commit 773b21b34e)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-03-06 02:59:43 +00:00
mergify[bot]
2114864626 Convert blockstore TransactionStatus column family to protobufs (bp #15733) (#15737)
* Convert blockstore TransactionStatus column family to protobufs (#15733)

* Prevent panic if TransactionStatus can't be deserialized

* Convert Blockstore TransactionStatus column to protobuf

* Add compatability test

(cherry picked from commit 7e65289729)

# Conflicts:
#	ledger/Cargo.toml

* Fix conflict

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-05 17:37:54 +00:00
mergify[bot]
fc29ee7001 Add timeout to prevent infinite loop (#15715) (#15735)
(cherry picked from commit 1fc8836631)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-05 13:52:20 +00:00
mergify[bot]
166fa33433 Update deploy-a-program.md (#15727) (#15728)
(cherry picked from commit 9c8e7564ed)

Co-authored-by: Kasim Te <kasimte@gmail.com>
2021-03-05 09:02:50 +00:00
mergify[bot]
2ba5828a75 Use OrderedIterator to produce TransactionLogInfo (#15712) (#15719)
* Add failing test

* Fix iteration_order issue with stored logs

(cherry picked from commit 872f7117c3)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-03-05 01:36:08 +00:00
mergify[bot]
98f5f58975 Permit the snapshots/status_cache file to be sparse (#15713)
(cherry picked from commit 1e2f5a5f55)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-04 22:05:37 +00:00
mergify[bot]
5f7258640b cli: add program show for non-upgradeable programs (bp #15707) (#15709)
* cli: add program show for non-upgradeable programs (#15707)

(cherry picked from commit 2177e0aff8)

# Conflicts:
#	cli/src/program.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-03-04 21:27:48 +00:00
Trent Nelson
e9d04b5517 docs: address post-merge review of #15649
(cherry picked from commit d6ea2f392b)
2021-03-04 11:49:14 -07:00
Trent Nelson
b856b99487 Docs: Update validator hardware recommendations
(cherry picked from commit 5cd6a0c2f1)
2021-03-03 22:34:38 -07:00
Michael Vines
d3672ca23b Bump version to 1.5.14 2021-03-03 19:01:25 -08:00
Michael Vines
6242809a07 Bump version to 1.5.13 2021-03-03 21:51:34 +00:00
mergify[bot]
1958bb1169 sends only the latest vote of each validator to the banking stage (#15629) (#15685)
(cherry picked from commit 658951e680)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-03 20:41:27 +00:00
mergify[bot]
5e2de5648d Only report metrics every second (#15652) (#15684)
(cherry picked from commit aacb28c453)

Co-authored-by: carllin <carl@solana.com>
2021-03-03 20:19:30 +00:00
mergify[bot]
16ded2115c Deprecate UiTokenAmount::ui_amount (bp #15616) (#15651)
* Deprecate UiTokenAmount::ui_amount (#15616)

* Add TokenAmount::ui_amount_string

* Fixup solana-tokens

* Update docs

(cherry picked from commit 19ac79b5cc)

# Conflicts:
#	storage-proto/proto/solana.storage.confirmed_block.rs

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-03-03 20:12:06 +00:00
mergify[bot]
b775e8748c Forward and hold packets (#15634) (#15682)
(cherry picked from commit 830be855dc)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-03 19:39:46 +00:00
mergify[bot]
9d00220d88 Remove unnecessary packet meta abi lock (#15653) (#15665)
(cherry picked from commit b8e28b8c55)

Co-authored-by: Justin Starry <justin@solana.com>
2021-03-03 10:52:51 +00:00
mergify[bot]
a00cbb55b9 Add error reporting to system program (#15644) (#15650)
(cherry picked from commit a9c8dbfd0c)

Co-authored-by: Jack May <jack@solana.com>
2021-03-02 22:41:07 -08:00
mergify[bot]
7ae3b55dde improve cli insufficient funds error messages (#15648)
(cherry picked from commit b20bf8ebb0)

Co-authored-by: Jack May <jack@solana.com>
2021-03-03 05:38:19 +00:00
mergify[bot]
280437bad2 Cleanup buffered packets (#15210) (#15643)
(cherry picked from commit 629dcd0f39)

Co-authored-by: carllin <carl@solana.com>
2021-03-03 04:36:07 +00:00
mergify[bot]
722879b96c Remove Hackathon banner (#15646)
(cherry picked from commit 00f2b039b4)

Co-authored-by: rmshea <8948187+rmshea@users.noreply.github.com>
2021-03-03 03:41:56 +00:00
Jack May
c22b83aa6c resolve conflicts 2021-03-02 18:05:11 -08:00
Jack May
9387ee6f3b configure rust-bpf toolchain for each tree (#15620)
(cherry picked from commit 4789a13a6e)

# Conflicts:
#	sdk/bpf/scripts/install.sh
2021-03-02 18:05:11 -08:00
mergify[bot]
211a42fb98 adds more metrics for tx counts and batch sizes (#15642) (#15645)
(cherry picked from commit 0bd0084b0d)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-03 01:36:09 +00:00
Trent Nelson
247fa529b0 Cargo.lock update from 297c083 2021-03-02 17:59:16 -07:00
Trent Nelson
b3bfe7b6ad ci: drop redundant programs/bpf audit
(cherry picked from commit 4f63afce32)
2021-03-02 17:59:16 -07:00
Trent Nelson
109c15c97c ci: disallow uncommitted Cargo.lock changes
(cherry picked from commit 15e1314209)
2021-03-02 17:59:16 -07:00
Trent Nelson
32b05e7ba0 ci: checks - factor out audit so it can run independently
(cherry picked from commit 3c1dd891af)
2021-03-02 17:59:16 -07:00
Trent Nelson
1da88658c3 ci: run clippy before fmt
(cherry picked from commit 21f66179ba)
2021-03-02 17:59:16 -07:00
mergify[bot]
77d8468df2 adds metrics for the size and number of batches in bank_send_loop (#15627) (#15628)
(cherry picked from commit 416ea38028)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-02 19:37:19 +00:00
mergify[bot]
163efc3bdf coalesces vote packets into one Packets (#15566) (#15630)
(cherry picked from commit f7a049f87f)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-03-02 19:20:55 +00:00
carllin
e7aa6cd5ea custom vm type (#15202)
(cherry picked from commit 8c49985b5c)
2021-03-02 11:00:14 -08:00
mergify[bot]
eb12d29683 cli: don't overallocate upgradeable buffer accounts (#15603) (#15625)
(cherry picked from commit d73af9c1dd)

Co-authored-by: Jack May <jack@solana.com>
2021-03-02 08:21:09 -08:00
mergify[bot]
979e07501a Lower blockstore processor error severity (#15578) (#15609)
(cherry picked from commit f1223fb783)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-03-02 07:47:15 -08:00
mergify[bot]
297c08310f Enable BPF program instruction traces (#15613) (#15621)
(cherry picked from commit 3cd00965a7)

Co-authored-by: Jack May <jack@solana.com>
2021-03-02 00:35:55 -08:00
mergify[bot]
f0afbf4948 cli: dump non-upgradeable programs (#15598) (#15612)
(cherry picked from commit fbb1012584)

Co-authored-by: Jack May <jack@solana.com>
2021-03-01 22:23:46 -08:00
mergify[bot]
c82c750091 Fixes for latest RustSec Audit manifest (bp #15601) (#15605)
* Add RUSTSEC-2020-0146 to audit ignores

(cherry picked from commit 85252777a6)

# Conflicts:
#	ci/test-checks.sh

* Pass audit ignores to bpf program audit

(cherry picked from commit ccb604f8c3)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-03-01 22:30:03 +00:00
mergify[bot]
a7d1223583 Plumb slot update pubsub notifications (#15488) (#15585)
(cherry picked from commit ae96ba3459)

Co-authored-by: carllin <carl@solana.com>
2021-03-01 08:42:11 +00:00
mergify[bot]
b550f5bc19 Sort forks in "ledger processed..." log message (#15583)
(cherry picked from commit 33eaa2b238)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-03-01 02:48:16 +00:00
mergify[bot]
5d341d9bf4 CLI: Support querying FeeCalculator for a specific blockhash (bp #15555) (#15577)
* cli-output: Minor refactor of CliFees

(cherry picked from commit ebd56f7ff4)

* CLI: Support querying fees by blockhash

(cherry picked from commit 21e08b5b2c)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-27 20:59:54 +00:00
Michael Vines
82ba19488e Update testnet break RPC node identity 2021-02-27 09:34:33 -08:00
sakridge
fa7067950a Update version to v1.5.11 (#15574) 2021-02-27 04:27:54 +00:00
Michael Vines
d69f09f152 create-snapshot subcommand now accepts the ROOT keyword 2021-02-26 20:20:41 -08:00
Michael Vines
0b510ac9b4 --help cleanup 2021-02-26 20:20:41 -08:00
mergify[bot]
96e587c83e Fix finalize_dead_slot_removal() of cached slots decrementing refcount (#15534) (#15570)
(cherry picked from commit 97eaf3c334)

Co-authored-by: carllin <carl@solana.com>
2021-02-27 03:01:50 +00:00
mergify[bot]
fc6e7a5e2a Increase tpu coalescing and add parameter (#15536) (#15560)
Should create larger entries on average

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-27 01:55:13 +00:00
mergify[bot]
087c43ad33 Add limit and shrink policy for recycler (#15320) (#15565)
(cherry picked from commit c2e8814dce)

Co-authored-by: carllin <carl@solana.com>
2021-02-27 00:34:56 +00:00
mergify[bot]
7d67f5b18c check program owners (#15495) (#15568)
* check program owners

* BankSlotDelta should change because InstructionError variant added

Co-authored-by: Tyera Eulberg <tyera@solana.com>
(cherry picked from commit 8399851d11)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-26 23:34:55 +00:00
mergify[bot]
80f5127dfa Fix root scan in ledger tool (#15532) (#15549)
Co-authored-by: carllin <carl@solana.com>
2021-02-26 09:54:40 +00:00
mergify[bot]
bb5a69aa4d ledger-tool cleanup and additions (#15179) (#15554)
* Plumb allow-dead-slots to ledger-tool verify

* ledger-tool cleanup and add some useful missing args

Print root slots and how many unrooted past last root.

(cherry picked from commit bbae23358c)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-26 07:37:46 +00:00
mergify[bot]
23e6ff3e94 Log devbuild branch and commit for locally built testnet (#15541) (#15546)
(cherry picked from commit 28a9926ba1)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-25 23:06:50 +00:00
mergify[bot]
5e0ca65100 Implement OutputFormat for confirm in Cli and ledger-tool bigtable (#15528) (#15544)
* Add CliTransaction struct

* Impl DisplayFormat for decode-transaction

* Add block-time to transaction println, writeln

* Impl DisplayFormat for confirm

* Use DisplayFormat in ledger-tool bigtable confirm

(cherry picked from commit d521dfe63c)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-25 22:31:12 +00:00
Jon Cinque
6500fed8b7 docs: Update stake merging documentation (#15489)
* Update stake merging documentation

* Integrate review feedback

* Integrate review feedback in comment too

(cherry picked from commit ebd43938a7)
2021-02-25 09:13:13 -08:00
Michael Vines
a80ac11b68 Bump version to v1.5.11 2021-02-25 09:12:39 -08:00
Ryo Onodera
5f03a17a6e Revert "Make UiTokenAmount::ui_amount a String (#15447) (#15472)" (#15535)
This reverts commit 1f1dd58c78.
2021-02-25 16:35:06 +00:00
Ryo Onodera
a52a22f558 Bump version to 1.5.10 (#15533) 2021-02-25 21:00:17 +09:00
mergify[bot]
dfd09a5c13 Introduce ttl eviction for RecycleStore (#15513) (#15529)
(cherry picked from commit 21b43009f6)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-25 09:42:38 +00:00
mergify[bot]
37eb205b54 Ubuntu 20.04 instead of 18.04 (#15525) (#15527)
(cherry picked from commit 5656c684a5)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-25 01:07:29 +00:00
mergify[bot]
e4a68f7d99 Implement OutputFormat for block in Cli and ledger-tool bigtable (#15524) (#15526)
* Impl DisplayFormat for solana block

* Use DisplayFormat in ledger-tool bigtable block

(cherry picked from commit d5f235d997)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-25 00:31:26 +00:00
mergify[bot]
f0be7032cb Speed up getLeaderSchedule (#15520)
(cherry picked from commit 5b54aed1c0)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-24 20:33:26 +00:00
mergify[bot]
241fb938c1 Check vote account initialization (#15503) (#15517)
* Check account data_len on Vote account init

* Check account data populated on update_cached_accounts

(cherry picked from commit eddb7f98f5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-24 18:22:48 +00:00
mergify[bot]
11190259d5 change store account scan to not use dashmap (#15104) (#15162)
* change store account scan to not use dashmap

* add test_accountsdb_de_dup_accounts_from_stores

* add tests

* add test_accountsdb_flatten_hash_intermediate

* add tests

* add sort test

* add test

* clippy

* first_slice -> is_first_slice

* comment

* use partial_cmp

(cherry picked from commit 600cea274d)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-02-24 17:33:38 +00:00
mergify[bot]
7f9d5ac383 pass expected capitalization to hash calculation to improve assert msg (#15191) (#15248)
* cleanup if

* pass expected capitalization to hash calculation to improve assert message

* fix bank function

* one more level

* calculate_accounts_hash_helper

* add slot to error message

* success

(cherry picked from commit e59a24d9f9)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-02-24 17:13:16 +00:00
mergify[bot]
abfae5d46a Fix received notifications for gossip signature subscriptions (#15506) (#15508)
(cherry picked from commit 61ed980ac0)

Co-authored-by: Justin Starry <justin@solana.com>
2021-02-24 10:19:01 +00:00
mergify[bot]
081f1cd118 gracefully handle vote account without authorized voter (#15501) (#15507)
(cherry picked from commit 2f46da346d)

Co-authored-by: Jack May <jack@solana.com>
2021-02-24 09:19:52 +00:00
mergify[bot]
6f31373b21 Count if optimistically confirmed slot is already rooted (#15492) (#15500)
(cherry picked from commit 52f2d425e5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-23 23:21:22 +00:00
mergify[bot]
b231fb2c18 Add max retransmit and shred insert slot (#15475) (#15498)
(cherry picked from commit 1b59b163dd)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-23 22:57:45 +00:00
mergify[bot]
e255c85bef RPC: Limit getProgramAccounts memcpy filter string to 128 bytes (bp #15483) (#15490)
* Limit getProgramAccounts memcpy filter string to 128 bytes

(cherry picked from commit 65f1afe5e1)

* Limit the number of getProgramAccounts filters

(cherry picked from commit 4b0114b991)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-23 19:57:46 +00:00
mergify[bot]
e5bb1597a4 Transition config program over to ic_msg() logging (#15481)
(cherry picked from commit 8680a46458)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-23 05:36:44 +00:00
mergify[bot]
a97d89fb5a Update uiAmount type in docs (#15471) (#15474)
(cherry picked from commit 123de5de54)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-23 00:14:51 +00:00
Tyera Eulberg
1f1dd58c78 Make UiTokenAmount::ui_amount a String (#15447) (#15472)
* Make UiTokenAmount::ui_amount a String

* Fixup solana-tokens

* Ignore spl downstream-project
2021-02-22 16:50:59 -07:00
mergify[bot]
b21ce376fb Fix solana feature status stake % overflow (#15468)
(cherry picked from commit f7c0b69fd4)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-22 21:08:25 +00:00
Michael Vines
451d974f26 Improve help for split-stake-account 2021-02-22 12:50:21 -08:00
mergify[bot]
f254bf85eb RPC: Improve snapshot path sanitization (bp #15456) (#15457)
Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-22 19:35:49 +00:00
mergify[bot]
8b80628b38 Sortable feature status list (#15150) (#15192)
(cherry picked from commit 88f22c360b)

Co-authored-by: Jack May <jack@solana.com>
2021-02-22 18:44:45 +00:00
mergify[bot]
2ac95a3ebc Print original error from accounts dir remove (#15458) (#15466)
(cherry picked from commit 5ccaa6336a)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-22 13:43:55 +00:00
mergify[bot]
ddef2fb7fa Prevent u64 overflow when calculating current stake percentage (#15453)
(cherry picked from commit 5ae37b9675)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-20 08:11:45 +00:00
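The overflow fix above follows a common pattern: widen to u128 (or use checked math) before multiplying by 100, since a near-u64::MAX stake would wrap in a naive `active * 100 / total`. A minimal illustrative sketch — hypothetical code, not the commit's actual implementation:

```rust
/// Percentage of `active` stake out of `total`, avoiding u64 overflow.
/// Widening to u128 keeps `active * 100` from wrapping when `active`
/// is larger than u64::MAX / 100.
fn stake_percent(active: u64, total: u64) -> Option<u64> {
    if total == 0 {
        return None;
    }
    Some((active as u128 * 100 / total as u128) as u64)
}

fn main() {
    // A naive `active * 100 / total` in u64 would overflow here.
    let active = u64::MAX - 1;
    let total = u64::MAX;
    assert_eq!(stake_percent(active, total), Some(99));
}
```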
mergify[bot]
de4d2e640e CLI: Support transfer with seed (bp #15389) (#15446)
* CLI: Factor out ProgramId moniker resolution

(cherry picked from commit 84e7ba0b3f)

* CLI: Make derived address seed.len() check a clap validator

(cherry picked from commit 16e0a4b412)

* CLI: Add hidden support for `SystemInstruction::TransferWithSeed`

(cherry picked from commit 700685c223)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-20 00:36:35 +00:00
mergify[bot]
c857467262 adds metrics for inbound/outbound gossip packets counts (#15407) (#15445)
(cherry picked from commit aa3aac766f)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-20 00:04:40 +00:00
Michael Vines
671fb3519d Pacify clippy 2021-02-19 16:04:18 -08:00
mergify[bot]
9aed0b0952 cli: improve deploy resume interface (#15418) (#15444)
* cli: improve deploy resume interface

* add docs

(cherry picked from commit 4648439f5c)

Co-authored-by: Jack May <jack@solana.com>
2021-02-19 22:23:15 +00:00
mergify[bot]
767c89526a add validator flag no-accounts-db-index-hashing (bp #15350) (#15413)
* add validator flag no-accounts-db-index-hashing (#15350)

* add validator flag no_accounts_db_index_hashing

* add validator flag no_accounts_db_index_hashing

(cherry picked from commit ba02452d75)

# Conflicts:
#	runtime/src/accounts_background_service.rs

* fix merge error

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-02-19 21:18:59 +00:00
Michael Vines
2bfe545438 Remove unix path separators
(cherry picked from commit 6a8dd86722)
2021-02-19 11:14:21 -08:00
mergify[bot]
804a284a52 Threadpool2 (#15151) (#15159)
* rework thread pool for hash calculation

* rename

(cherry picked from commit fbf9dc47e9)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-19 12:57:52 -06:00
Michael Vines
51e5189ffb Update ABI digest 2021-02-19 10:54:59 -08:00
Michael Vines
5216f51ff2 Rename IOError to BorshIoError 2021-02-19 10:54:59 -08:00
mergify[bot]
0c55add96b Load memo v2 into genesis for test validator (bp #15425) (#15430)
* Load memo v2 into genesis for test validator (#15425)

* Load memo v2 into genesis for test validator

* feedback

* versions

* remove .so

* add .so

(cherry picked from commit 7b67a6d208)

# Conflicts:
#	explorer/src/utils/tx.ts

* Update tx.ts

Co-authored-by: Justin Starry <justin@solana.com>
2021-02-19 12:23:33 +00:00
mergify[bot]
291f81d5b0 Bump SPL token version to v3.1.0 (bp #15429) (#15434)
* Bump SPL token version to v3.1.0 (#15429)

* Bump SPL token version to v3.1.0

* Cargo.lock

(cherry picked from commit 15bbe6436d)

# Conflicts:
#	account-decoder/Cargo.toml
#	core/Cargo.toml

* Update Cargo.toml

* Update Cargo.toml

Co-authored-by: Justin Starry <justin@solana.com>
2021-02-19 12:14:05 +00:00
mergify[bot]
5c8a878f1b Send program deploy txs to up to 2 leaders (#15421) (#15424)
(cherry picked from commit 4e84869c8e)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-19 04:30:05 +00:00
mergify[bot]
7148aaa30c chore: bump serde from 1.0.112 to 1.0.118 (bp #14828) (#15394)
* chore: bump serde from 1.0.112 to 1.0.118 (#14828)

* chore: bump serde from 1.0.112 to 1.0.122

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.112 to 1.0.122.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.112...v1.0.122)

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

* Update frozen_abi digest following serde update

* Revert "chore: bump serde from 1.0.112 to 1.0.122"

This reverts commit a3ef4442a4.

* Revert "[auto-commit] Update all Cargo lock files"

This reverts commit c41c3b005f.

* chore: bump serde from 1.0.112 to 1.0.118

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.112 to 1.0.118.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.112...v1.0.118)

Signed-off-by: dependabot[bot] <support@github.com>

* [auto-commit] Update all Cargo lock files

* Remove serum-dex pinning

* blind commit!

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot-buildkite <dependabot-buildkite@noreply.solana.com>
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
(cherry picked from commit 1df93fa2be)

# Conflicts:
#	banks-interface/Cargo.toml
#	perf/Cargo.toml
#	programs/config/Cargo.toml

* Fix conflicts

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-18 13:29:17 +00:00
mergify[bot]
7a3c4c184f sdk: Add Borsh support for types and utilities (bp #15290) (#15393)
* sdk: Add Borsh support for types and utilities (#15290)

* sdk: Add Borsh to Pubkey

* Add serialization error for easier borsh integration

* Add Borsh usage to banks-client and sdk

* Rename SerializationError -> IOError

* Add new errors to proto

* Update Cargo lock

* Update Cargo.lock based on CI

* Clippy

* Update ABI on bank

* Address review feedback

* Update sanity program instruction count test

(cherry picked from commit 0f6f6080f3)

# Conflicts:
#	banks-client/Cargo.toml

* Update new dependencies

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-02-18 13:23:08 +00:00
mergify[bot]
bc5f434e48 Add lamports overflow test for nonce withdraw (#15383) (#15385)
(cherry picked from commit fcee227021)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-18 02:54:13 +00:00
mergify[bot]
569edbb241 Return blockstore error if previous_blockhash cannot be determined (#15382) (#15384)
* Return blockstore error if previous_blockhash cannot be determined

* Add require_previous_blockshash flag

(cherry picked from commit 170cb792eb)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-18 02:38:48 +00:00
mergify[bot]
abf2d71f4c More failure codepath tracing (#15246) (#15370)
(cherry picked from commit 4e99aa5fa6)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-18 00:36:46 +00:00
mergify[bot]
1a8b57fcd0 First step towards denying clippy::integer_arithmetic (bp #15366) (#15381)
* CI: Globally deny clippy::integer_arithmetic lint

(cherry picked from commit 7035e8485c)

* Re-allow clippy::integer_arithmetic at crate-level

(cherry picked from commit 7f7370c306)

# Conflicts:
#	bench-tps/tests/bench_tps.rs

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-17 22:30:03 +00:00
mergify[bot]
723c03dfbd Adapt to fs_extra 1.2.0 (#15380)
(cherry picked from commit 9ba69a7381)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-17 22:07:12 +00:00
mergify[bot]
0a1fcfa08b docs: Remove references to "create_address_with_seed" (#15339) (#15372)
(cherry picked from commit 3ac7e09de6)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-02-17 14:50:25 +00:00
Michael Vines
8820933287 Bump version to 1.5.9 2021-02-16 22:17:10 -08:00
Tyera Eulberg
460c643f8e Clean nonce 2021-02-16 19:24:35 -08:00
Tyera Eulberg
65600f9a1f Move fn to sdk 2021-02-16 19:24:35 -08:00
Stephen Akridge
ef61dc9780 Vote program updates 2021-02-16 18:58:34 -08:00
mergify[bot]
477e5d4bff Add --force arg for bigtable upload (#15362)
(cherry picked from commit 98e3e570d2)

Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-02-17 02:55:35 +00:00
Tyera Eulberg
d54632da00 Clean & check stake 2021-02-16 18:54:24 -07:00
Justin Starry
26b420bd39 cli: Speed up program deploys (#15347)
* Speed up deploys

* fix test

(cherry picked from commit f5c564bc6c)
2021-02-16 17:47:50 -08:00
Trent Nelson
c3dda3ce0c stake: add lamports overflow test for withdraw
(cherry picked from commit ae82b5ebfd)
2021-02-16 17:38:38 -08:00
mergify[bot]
c527e1f2e5 adds an upper bound on cluster-slots size (#15300) (#15357)
https://github.com/solana-labs/solana/issues/14366#issuecomment-769096305
(cherry picked from commit f79c9d4094)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-16 22:32:26 +00:00
mergify[bot]
135f47b6be checks that prune-messages have the same inner/outer pubkey (#15352) (#15356)
(cherry picked from commit 076c20f1ca)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-16 22:22:29 +00:00
mergify[bot]
6656b3965f rbpf-v0.2.5 (#15334) (#15335)
(cherry picked from commit b43d2bc882)

Co-authored-by: Alexander Meißner <AlexanderMeissner@gmx.net>
2021-02-16 17:55:04 +00:00
mergify[bot]
3068572bb9 Fix typo in account docs (#15349) (#15351)
(cherry picked from commit 17a328bc6f)

Co-authored-by: Austin Abell <austinabell8@gmail.com>
2021-02-16 17:23:06 +00:00
mergify[bot]
f48236837c fill in timing gaps in replay_stage (#14550) (#15197)
* fill in timing gaps in replay_stage

* add replay_stage bank_count metric

* formatting

* handle another gap

* cleanup wait_receive_time to be more straightforward

(cherry picked from commit 935dfdf0f6)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-16 09:53:08 +00:00
mergify[bot]
59beb8e548 More configurable rocksdb compaction (#15213) (#15325)
RocksDB compaction can cause long stalls, so make it more
configurable to try to reduce those stalls, and to coordinate
between multiple nodes so they do not all induce a stall at the
same time.

(cherry picked from commit 5b8f046c67)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-16 01:46:36 +00:00
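As an illustration of the compaction-control pattern the entry above describes — disabling automatic compaction and triggering it manually on a node-controlled, staggered schedule — here is a minimal sketch using the `rocksdb` crate; the function names, path, and settings are assumptions, not Solana's actual ledger configuration:

```rust
use rocksdb::{Options, DB};

fn open_with_manual_compaction(path: &str) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    // Disable background auto-compaction so writes are not stalled at
    // unpredictable times; the node decides when compaction runs.
    opts.set_disable_auto_compactions(true);
    DB::open(&opts, path)
}

fn periodic_compact(db: &DB) {
    // Compact the whole key range explicitly, e.g. on a staggered timer
    // so that multiple nodes do not all stall simultaneously.
    db.compact_range(None::<&[u8]>, None::<&[u8]>);
}

fn main() -> Result<(), rocksdb::Error> {
    let db = open_with_manual_compaction("/tmp/compaction-demo")?;
    db.put(b"key", b"value")?;
    periodic_compact(&db);
    Ok(())
}
```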
Trent Nelson
543f7e7ec1 Bump rand_core to 0.6.2
https://rustsec.org/advisories/RUSTSEC-2021-0023
2021-02-15 17:58:41 -07:00
mergify[bot]
efe563201f Track RecycleStore basic stats with needed refactor (#15291) (#15327)
* Track RecycleStore basic stats with needed refactor

* Fix another wrong metrics def

(cherry picked from commit 30f18319f2)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-15 08:26:24 +00:00
mergify[bot]
603cae4a5c Log if unsanitary transactions are read from blockstore (#15319) (#15322)
(cherry picked from commit 0812931c38)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-14 07:46:54 +00:00
mergify[bot]
73fb9695bc style: Fix the typos (#15318)
(cherry picked from commit 9c7b3dc1b5)

Co-authored-by: HowJMay <vulxj0j8j8@gmail.com>
2021-02-14 00:46:40 +00:00
publish-docs.sh
1aec2102d4 Fix broken TdS links 2021-02-13 10:24:26 -07:00
mergify[bot]
99012f022e sdk: sanitize Hash base58 input (#15315)
(cherry picked from commit 1a20ab968f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-13 09:54:06 +00:00
Trent Nelson
20afb912cd Bump version to 1.5.8 2021-02-13 04:34:36 +00:00
sakridge
563231132f Stake program update (#15308) 2021-02-12 17:15:48 -08:00
mergify[bot]
32ec9147bb Rework spl_token_v2_self_transfer_fix to avoid any SPL Token downtime (#15306)
(cherry picked from commit 2e7aebf0bb)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-12 23:59:08 +00:00
mergify[bot]
4be8842925 Upgrade to SPL Token 3.1.0 program binary (#15302)
(cherry picked from commit aa97da2146)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-12 23:27:30 +00:00
mergify[bot]
b1ef9045ec docs: getLargestAccounts caching notice (#15293) (#15295)
(cherry picked from commit 760e163190)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-02-12 18:43:56 +00:00
publish-docs.sh
dbe4d87e60 Fix registration link 2021-02-11 21:54:00 -07:00
mergify[bot]
d2efa3aa15 RPC documentation updates for token deltas / blockTimes in getConfirmedSignatures2/getConfirmedTransaction (#14871) (#15284)
* docs: add token balances response info

* docs: add blockTime to getConfirmedSignatures and getConfirmedTransaction

* docs: update example responses

* fix: remove space

(cherry picked from commit 6b8e710988)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-02-12 02:13:32 +00:00
mergify[bot]
ccd2c6cc13 Add per-byte logging cost (#15279) (#15282)
(cherry picked from commit 6650fbf443)

Co-authored-by: Jack May <jack@solana.com>
2021-02-12 02:09:45 +00:00
mergify[bot]
e976b1547a Fix flaky test test_concurrent_snapshot_packaging (#15252) (#15281)
(cherry picked from commit 990bb426a9)

Co-authored-by: carllin <carl@solana.com>
2021-02-12 01:22:06 +00:00
mergify[bot]
03ac807756 RPC: add caching to getLargestAccounts (#15154) (#15271)
* introduce get largest accounts cache

* remove cache size and change hash key

* remove eq and hash derivation from commitment config

* add slot to the cache

(cherry picked from commit 4013f91dbe)

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-02-11 21:13:09 +00:00
mergify[bot]
067871cc39 Clean up mainnet-beta inflation candidate features (#15257)
(cherry picked from commit 47c60f8e98)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-11 03:02:16 +00:00
mergify[bot]
e9ceb99460 Match BPF instruction reporting to dump file (#15254) (#15256)
(cherry picked from commit 10abd199e1)

Co-authored-by: Jack May <jack@solana.com>
2021-02-11 02:52:25 +00:00
Michael Vines
cd994f0162 Bump version to 1.5.7 2021-02-10 05:18:39 +00:00
mergify[bot]
01e4d0a1e9 Use spl-token-mint secondary index for relevant getProgramAccounts requests (#15219) (#15224)
(cherry picked from commit 948819dfa8)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-10 00:39:55 +00:00
mergify[bot]
7f0f43cb28 Warp timestamp and extend max-allowable-drift to accommodate slow blocks (#15204) (#15222)
* Remove timestamp_correction feature gating

* Remove timestamp_bounding feature gating

* Remove unused deprecated ledger code

* Remove unused deprecated unbounded-timestamp code

* Enable independent adjustment of fast/slow timestamp bounding

* Update timestamp bounds to 25% fast, 80% slow; warp timestamp

* Update bank hash test

* Add PR number to feature

Co-authored-by: Michael Vines <mvines@gmail.com>

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit da6753b8c0)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-10 00:03:26 +00:00
mergify[bot]
d0bf97d25d uses btree-map instead of hash-map for cluster-slots (#15194) (#15220)
retain traverses all values in the hashmap, which is slow:
https://github.com/solana-labs/solana/blob/88f22c360/core/src/cluster_slots.rs#L45
A btree-map instead allows more efficient pruning there.

In addition, there is a potential race condition here:
https://github.com/solana-labs/solana/blob/88f22c360/core/src/cluster_slots.rs#L68-L74
If another thread inserts a value at the same slot key between the read
and the write lock, the current thread will discard the inserted value.

(cherry picked from commit 2758588ddd)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-09 23:21:08 +00:00
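A schematic sketch of the two points above — range pruning via `BTreeMap::split_off` instead of a full `retain` scan, and doing the lookup-or-insert under a single mutable access so a concurrent insert is not discarded. The types and names here are illustrative stand-ins, not the actual `ClusterSlots` code:

```rust
use std::collections::BTreeMap;

type Slot = u64;
type SlotStakes = Vec<u8>; // stand-in for the real per-slot pubkey/stake map

// Drop all entries below `root` without visiting every key the way
// HashMap::retain must; split_off keeps only keys >= root.
fn prune_below(cluster_slots: &mut BTreeMap<Slot, SlotStakes>, root: Slot) {
    *cluster_slots = cluster_slots.split_off(&root);
}

// Look up or insert under one mutable borrow (i.e. while holding a
// single write lock), closing the gap in which a value inserted by
// another thread between separate read and write locks is discarded.
fn slot_entry(cluster_slots: &mut BTreeMap<Slot, SlotStakes>, slot: Slot) -> &mut SlotStakes {
    cluster_slots.entry(slot).or_insert_with(SlotStakes::new)
}

fn main() {
    let mut cs: BTreeMap<Slot, SlotStakes> = BTreeMap::new();
    slot_entry(&mut cs, 10).push(1);
    slot_entry(&mut cs, 20).push(2);
    prune_below(&mut cs, 15);
    assert!(cs.contains_key(&20) && !cs.contains_key(&10));
}
```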
Michael Vines
c3056734e3 solana-test-validator now uses the BPF JIT by default, --no-bpf-jit to disable 2021-02-09 21:43:00 +00:00
mergify[bot]
5e2b9e595d use index version of calculating hash (#15189) (#15211)
* use index version of calculating hash

* invert const

* formatting

(cherry picked from commit 8424fe2c12)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-09 18:02:23 +00:00
mergify[bot]
9be3e00546 Add cli deploy tests (bp #15116) (#15147)
* Add cli deploy tests (#15116)

(cherry picked from commit 210514b136)

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-02-09 05:42:43 +00:00
mergify[bot]
0c2dcd759c Parse upgradeable loader instructions and accounts (#15195) (#15199)
* Parse upgradeable-loader instructions

* Parse upgradeable-loader accounts

(cherry picked from commit c0a6272afd)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-09 01:47:46 +00:00
mergify[bot]
b711476811 removes locked pubkey references (#15152) (#15182)
(cherry picked from commit b6f231b60e)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-08 03:24:38 +00:00
mergify[bot]
e4fe7dfbbd Add jit and caching args to ledger-tool (#15177) (#15178)
(cherry picked from commit 11b84cb870)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-06 21:04:46 +00:00
mergify[bot]
40e62c60d3 Require lockup authority to change withdraw authority on locked stake (#14861) (#15170)
(cherry picked from commit dc7041ba07)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-06 08:27:25 +00:00
Jon Cinque
a93d37b804 program-test: Add warp tests for rent and stake rewards (#15136)
* program-test Add rent collection and stake rewards

* Improve tests to initialize vote state

* Update comment

* Update program-test/src/lib.rs

Co-authored-by: Michael Vines <mvines@gmail.com>

* Review feedback

* cargo fmt

* Avoid using hard-coded slots in tests

* Make genesis_config private

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-05 23:39:12 -08:00
Jon Cinque
65e6df2b0d program-test: Add ability to warp to the future (#14998)
* program-test: Add ability to warp to the future

* Make `start_local_server` take by value

* Remove clear_invoke_context
2021-02-05 23:39:12 -08:00
Jon Cinque
f02bd10d4a program-test: Set context without panic (#14997)
* program-test: Fix CPI and multiple instructions

* Whitespace

* Add CPI test in program-test
2021-02-05 23:39:12 -08:00
Michael Vines
d7d8a751d9 Increment hyper versions to pacify cargo audit (#15172) 2021-02-05 23:13:16 -08:00
mergify[bot]
f6f4193d4d Only publish release-tag docs on beta channel (#15158) (#15168)
(cherry picked from commit 819d829c41)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-06 06:59:09 +00:00
publish-docs.sh
fe0077d88a Update slashing roadmap link 2021-02-05 16:29:44 -07:00
mergify[bot]
8016f61ce8 use thread pool for non-index hash calculations (#15149) (#15153)
(cherry picked from commit fabecdc86c)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-05 21:59:59 +00:00
mergify[bot]
0b6366da9c sentinel value for zero lamport accounts in hash scanning (#15097) (#15145)
* sentinel value for zero lamport accounts in hash scanning

* fix test

(cherry picked from commit f85be6259b)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-05 15:16:11 -06:00
mergify[bot]
d567a62cc7 caches descendants in bank forks (#15107) (#15148)
(cherry picked from commit 6fd5ec0e4c)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-05 19:38:36 +00:00
mergify[bot]
eccea2b1ea Add w3m's inflation pubkeys (#15142) (#15144)
(cherry picked from commit 2a60dd8492)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-05 08:59:26 -08:00
Michael Vines
bfd93cc13f Sort inflation candidates alphabetically 2021-02-05 00:07:46 -08:00
mergify[bot]
b21c89e494 Warn lastValidSlot with some terminology tweaks (#15081) (#15122)
* Warn lastValidSlot with some terminology tweaks

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Restore previous arrangement of slot def. and tweak upon it

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 85ffc8fa1c)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-05 07:07:00 +00:00
mergify[bot]
253b68abf1 Inflation Nomination for sotcsa (#15105) (#15120)
(cherry picked from commit e908a4b3fc)

Co-authored-by: sotcsa <sotcsa@users.noreply.github.com>
2021-02-04 21:49:51 -08:00
mergify[bot]
49034b8016 nit: cleanup feature status display (#15113) (#15117)
* nit: cleanup feature status display

* nudge

(cherry picked from commit a52a241852)

Co-authored-by: Jack May <jack@solana.com>
2021-02-05 05:38:38 +00:00
mergify[bot]
3838fb62d4 move timer end outside if (#15087) (#15114)
(cherry picked from commit f0d58f5549)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-04 20:12:56 -08:00
mergify[bot]
9f267fc5e7 remove unused arg from function (#15096) (#15115)
(cherry picked from commit 7d9f5ad525)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-04 20:12:31 -08:00
mergify[bot]
5a82265650 deploy doc updates (#15109) (#15112)
(cherry picked from commit 82350f9350)

Co-authored-by: Jack May <jack@solana.com>
2021-02-05 00:52:26 +00:00
mergify[bot]
77d2ed95ff Add ref count from storage (#15078) (#15092)
(cherry picked from commit e5225b7e68)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-04 15:12:23 -08:00
mergify[bot]
858ca752e2 Generate keypair file for c program deployment (#15080) (#15110)
* Generate keypair file for c program deployment

* Build and use solana-keygen in test-stable-perf

(cherry picked from commit bba1b49663)

Co-authored-by: Jack May <jack@solana.com>
2021-02-04 23:02:01 +00:00
mergify[bot]
fea0bd234c Fix pubkey refcount for shrink + clean (#14987) (#15108)
(cherry picked from commit e4d0d4bfae)

Co-authored-by: carllin <wumu727@gmail.com>
2021-02-04 22:11:57 +00:00
mergify[bot]
7af7d5f22c Add LowFeeValidation Nomination (#15098) (#15102)
(cherry picked from commit 53dab29528)

Co-authored-by: bonsfi <bonsfi@users.noreply.github.com>
2021-02-04 11:06:29 -08:00
mergify[bot]
de4cccd977 Enable inflation candidate for RockX (#15099) (#15101)
(cherry picked from commit c6f572c331)

Co-authored-by: calvinzhou-rockx <55546839+calvinzhou-rockx@users.noreply.github.com>
2021-02-04 10:50:04 -08:00
mergify[bot]
9f74136632 borrow storages (#15088) (#15095)
(cherry picked from commit f49a70e626)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-04 18:43:15 +00:00
mergify[bot]
21484acc20 Inflation Nomination for Diman (#15083) (#15091)
(cherry picked from commit d87e0c3f1d)

Co-authored-by: DimAn <71597545+diman-io@users.noreply.github.com>
2021-02-04 09:39:14 -08:00
mergify[bot]
36ad03af1f calculate hash from store instead of index (bp #15034) (#15084)
* calculate hash from store instead of index (#15034)

* calculate hash from store instead of index

* restore update hash in abs

(cherry picked from commit 600ff0d915)

* fix merge conflict (#15085)

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-04 16:58:11 +00:00
mergify[bot]
e255ee52b1 Nomination candidate for p2pvalidator (#15079) (#15090)
(cherry picked from commit 2ed074ba2a)

Co-authored-by: rk-p2p <56305239+rk-p2p@users.noreply.github.com>
2021-02-04 08:55:18 -08:00
mergify[bot]
972540907b Don't load all accounts into memory for capitalization check (#14957) (#15072)
* Don't load all accounts into memory for capitalization check

Co-authored-by: carllin <carl@solana.com>
2021-02-04 11:49:48 +00:00
mergify[bot]
cfeed09f1f Add program deployment docs (#15075) (#15082)
(cherry picked from commit d0118a5c42)

Co-authored-by: Jack May <jack@solana.com>
2021-02-04 10:42:37 +00:00
mergify[bot]
dadebb2bba Don't reset accounts if the remove_account comes from a clean (#15022) (#15066)
Store may be in-use with a snapshot creation, so don't disturb
its state.

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-04 06:02:02 +00:00
mergify[bot]
169403a15e removes pubkey references (#15050) (#15073)
(cherry picked from commit 86467d825a)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-04 00:24:58 +00:00
mergify[bot]
e2a874370b Cleanup v1 shrink path (#15009) (#15071)
move legacy functions to another impl block

(cherry picked from commit 37aac5a12d)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-02-03 23:33:21 +00:00
mergify[bot]
baf7713744 Fix integer overflow in degenerate invoke_signed BPF syscalls (#15051) (#15069)
(cherry picked from commit ebbaa1f8ea)

Co-authored-by: Mrmaxmeier <Mrmaxmeier@gmail.com>
2021-02-03 23:04:03 +00:00
mergify[bot]
573304cf73 Fix which shared object the test uses (#15060) (#15068)
(cherry picked from commit 02a5f7104a)

Co-authored-by: Jack May <jack@solana.com>
2021-02-03 22:49:55 +00:00
mergify[bot]
ec6d5933de Revert hard nofile limit back to 500000 (#15061)
(cherry picked from commit 42bf6dc2ab)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-03 13:49:54 -08:00
mergify[bot]
30b815e7bc Correct stakeconomy::vote::id() (#15062) (#15065)
(cherry picked from commit c3ba70300b)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-03 12:41:53 -08:00
mergify[bot]
f463ebfde2 Upgradeable loader max_data_len limit (#15039) (#15057)
(cherry picked from commit d24d5fba0e)

Co-authored-by: Jack May <jack@solana.com>
2021-02-03 18:34:06 +00:00
Michael Vines
ba0aa706e4 Avoid panic when the release cache is empty 2021-02-03 09:32:57 -08:00
Jack May
eacf9209f7 update transaction.md 2021-02-03 09:03:02 -08:00
mergify[bot]
ba12a14494 Nomination candidate for buburuza (#15047) (#15055)
(cherry picked from commit f2d415cf13)

Co-authored-by: buburuza27 <78487355+buburuza27@users.noreply.github.com>
2021-02-03 08:41:51 -08:00
mergify[bot]
4e60f95854 Don't squash caught errors, please (#15046) (#15049)
* Don't squash caught errors, please

* Update blockstore.rs

* Update blockstore.rs

(cherry picked from commit 8376781ec8)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-03 16:33:53 +00:00
mergify[bot]
a7193ce834 adds flag to disable duplicate instance check (#15006) (#15053)
(cherry picked from commit 0ad063f4e9)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-02-03 08:33:07 -08:00
mergify[bot]
f12a467177 transaction-history -v now shows the transaction timestamp if available (#15052)
(cherry picked from commit 971c222cf7)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-02-03 08:31:34 -08:00
mergify[bot]
45a92337f4 bump rust-bpf-sysroot to v0.14 (#15040) (#15045)
(cherry picked from commit 286e4d6924)

Co-authored-by: Jack May <jack@solana.com>
2021-02-03 13:03:25 +00:00
mergify[bot]
bfa6e9bdf6 Nomination candidate for bunghi (#15036) (#15044)
* Update feature_set.rs

* Update feature_set.rs

* Update sdk/src/feature_set.rs

* Update feature_set.rs

* Update sdk/src/feature_set.rs

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit 87815ae1fd)

Co-authored-by: bunghi <31234197+bunghi@users.noreply.github.com>
2021-02-03 11:41:37 +00:00
mergify[bot]
a7d9a52690 cli: add command to dump the upgradeable program to a file (#15029) (#15035)
(cherry picked from commit 9c6d899efb)

Co-authored-by: Jack May <jack@solana.com>
2021-02-03 10:22:29 +00:00
mergify[bot]
4b3391f1d8 Cli: some moniker follow-up (#14981) (#15038)
* Enable monikers in config set

* Fixup websocket compute

(cherry picked from commit 38e2fe8997)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-03 09:25:10 +00:00
mergify[bot]
19b203f344 cli: add query command to solana program (bp #15017) (#15028)
* cli: add query command to solana program (#15017)

(cherry picked from commit 6cf6ef3a32)

# Conflicts:
#	cli-output/src/cli_output.rs
#	cli/src/program.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-02-03 08:04:18 +00:00
Michael Vines
6150faafb6 Adapt create-snapshot to avoid triggering recent internal bank sanity checks
(cherry picked from commit 709aa74e11)
2021-02-02 23:22:01 -08:00
Trent Nelson
49e608295e docs: bump nofiles recommendations to match maps
(cherry picked from commit 894b412aef)
2021-02-02 23:21:24 -08:00
mergify[bot]
636be95e2a CLI: Move solana validators summary to end of output (#15033)
(cherry picked from commit 31d30bb5e8)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-03 06:36:47 +00:00
mergify[bot]
0db5a74bb9 streamline calculate_accounts_hash (#14980) (#15015)
Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-03 05:31:25 +00:00
mergify[bot]
08fd4905db remove legacy merkle root (bp #14772) (#15021)
* remove legacy merkle root (#14772)

* remove legacy merkle root
f78197a

* clippy

* compile error

* borrow error

* deterministic results

* clippy

* borrow

(cherry picked from commit 1b85114a9c)

# Conflicts:
#	merkle-root-bench/src/main.rs
#	runtime/src/accounts_db.rs

* remove legacy merkle root (#14772)

* remove legacy merkle root
f78197a

* clippy

* compile error

* borrow error

* deterministic results

* clippy

* borrow

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
2021-02-03 03:19:28 +00:00
mergify[bot]
3610b6f31c cli: Don't overallocate upgradeable program if --final specified (#15011) (#15027)
(cherry picked from commit a1b9e00c14)

Co-authored-by: Jack May <jack@solana.com>
2021-02-03 03:09:27 +00:00
mergify[bot]
b35f35a7e8 keygen: Improve messaging around BIP39 passphrase usage (#15026)
(cherry picked from commit 53423c99aa)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-03 02:09:20 +00:00
mergify[bot]
790a6b7550 CLI: Surface account query errors (#15024)
(cherry picked from commit 3abb39c04f)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-03 01:59:59 +00:00
mergify[bot]
30d2e15945 speed up merkle root calculation (bp #14710) (#15020)
* speed up merkle calculation (#14710)

(cherry picked from commit 18bd0c9a5b)

* back port crate versions for merkle-root-bench

Co-authored-by: Jeff Washington (jwash) <75863576+jeffwashington@users.noreply.github.com>
Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2021-02-02 19:39:49 -06:00
mergify[bot]
7b3c7a075a Allow passing buffer by keypair to cli program deploy (#15010) (#15016)
(cherry picked from commit 7831428e82)

Co-authored-by: Jack May <jack@solana.com>
2021-02-02 22:49:09 +00:00
mergify[bot]
f534698618 CLI: Add sigverify results to solana decode-transaction output (bp #14964) (#15008)
* cli-output: Add option sigverify status to `println_transaction()` output

(cherry picked from commit a2aea0ca33)

* cli: Add sigverify status to `decode-transaction` output

(cherry picked from commit d547585041)

* CLI: Modernize `decode-transaction` about message

(cherry picked from commit fddbfe1052)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-02-02 20:33:24 +00:00
mergify[bot]
fe1347b441 Clean up some old commitment names (#14994) (#15003)
(cherry picked from commit 2780214e71)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-02 18:48:19 +00:00
mergify[bot]
8f6c01f0f8 Add Hackathon banner (#15004) (#15005)
(cherry picked from commit b57f33948d)

Co-authored-by: R. M. Shea <8948187+rmshea@users.noreply.github.com>
2021-02-02 09:46:47 -07:00
mergify[bot]
066ff36175 Disable AppendVec warn! for now (#14996) (#15001)
* Disable AppendVec warn! for now

* Fix version...

* Update append_vec.rs

(cherry picked from commit 31168fe343)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-02-02 16:09:49 +00:00
mergify[bot]
5da9e7cb8a Parse SPL Memo v3 (#14979) (#14989)
* Parse memo v3 too

* tree

(cherry picked from commit 34dfcc9c6f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-02-01 23:16:01 -07:00
Brian Long
09b68f2fbb Inflation Nomination for BL (#14972)
(cherry picked from commit 8e0fdff17c)
2021-02-01 21:00:15 -08:00
mergify[bot]
d9fcd84bc2 Add validator flag to opt in to cpi and logs storage (bp #14922) (#14973)
* Add validator flag to opt in to cpi and logs storage (#14922)

* Add validator flag to opt in to cpi and logs storage

* Default TestValidator to opt-in; allow using in multinode-demo

* No clone

Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit cbb8b79a60)

* TestValidator store cpi and logs

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-02-02 02:51:35 +00:00
Trent Nelson
86703384dc cli: Improve stake-history output readability 2021-02-02 02:45:23 +00:00
Trent Nelson
580b0859e8 cli-output: Minor refactor of build_balance_message() 2021-02-02 02:45:23 +00:00
Michael Vines
5d8c5254b0 Add "init" subcommand
(cherry picked from commit 49c908dc50)
2021-02-01 17:05:00 -08:00
Leopold Schabel
ea83292daa Certus One inflation enablement feature pair (#14961)
(cherry picked from commit c06568f3db)
2021-02-01 17:00:18 -08:00
Eric Williams
f1c3e6dc36 Update economics docs (#14965)
* clarified inflation split and equation

* clarify staking yield description
2021-02-01 22:40:00 +00:00
mergify[bot]
bdd19c09d1 More rich runtime logging (#14938) (#14967) 2021-02-01 14:26:31 -08:00
Michael Vines
95cbfce900 Update sdk/src/feature_set.rs
(cherry picked from commit e0f6695cc2)
2021-02-01 08:12:12 -08:00
Stakeconomy.com
a404a9d802 Update feature_set.rs
(cherry picked from commit 4ba9e39941)
2021-02-01 08:12:12 -08:00
Michael Vines
15cd1283e8 Template for an Inflation Candidate nomination
To submit your nomination:
1. Replace all instances of "my_name" with a suitable alternative then address the "TODO" code comments
2. Submit a new Github pull request and work with the project contributors to merge your pull request

(cherry picked from commit 15baf43d1e)
2021-02-01 08:12:12 -08:00
mergify[bot]
512a193674 Use helper for count() in accountsDB (#14953) (#14956)
(cherry picked from commit 63c44bd690)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-31 18:23:01 -08:00
Michael Vines
de37795ca1 /i/o/ 2021-01-31 08:22:23 -08:00
nampdn
cb7871347d style(spacing): reformat tab spacing
(cherry picked from commit f98889adc0)
2021-01-30 08:36:14 -08:00
Michael Vines
b4b9ea7771 Template for an Inflation Candidate nomination
To submit your nomination:
1. Replace all instances of "my_name" with a suitable alternative then address the "TODO" code comments
2. Submit a new Github pull request and work with the project contributors to merge your pull request

(cherry picked from commit a7ff1684f5)
2021-01-30 08:36:14 -08:00
Trent Nelson
62b7bf5365 CLI: Reinstate logging, disabled by default
(cherry picked from commit a44392048d)
2021-01-29 21:46:58 -08:00
mergify[bot]
91c57cd70d Add generalized voting process to enable full inflation (bp #14702) (#14732)
* Add generalized voting process to enable full inflation

(cherry picked from commit 072e5e54d8)

* Update feature_set.rs

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-30 03:19:19 +00:00
Michael Vines
cb35fc185b Garbage collect old releases
(cherry picked from commit ea4f516f84)
2021-01-29 18:37:43 -08:00
Michael Vines
cca9009f23 Help capitalization fixes
(cherry picked from commit 9e3c130ac9)
2021-01-29 18:37:43 -08:00
Ryo Onodera
d8d73ff56c Clean up VerifiedVotePackets (#14822)
(cherry picked from commit bd0433c373)
2021-01-29 18:03:41 -08:00
Jack May
34504797b4 Richer runtime failure logging (#14875)
(cherry picked from commit 0b1015f7d3)
2021-01-29 18:03:33 -08:00
sakridge
1767e4fbde Increase vm map limit recommendation (#14892)
Give some more buffer from 400k

(cherry picked from commit 84e52b6065)
2021-01-29 18:03:03 -08:00
sakridge
39515cae5e Use already-generated key set to populate dirty keys for clean (#14905)
Don't need to scan the stores again when we already found the key
set of updates per slot. Just insert it earlier.

(cherry picked from commit 65315fa4c2)
2021-01-29 18:02:42 -08:00
Michael Vines
116d67e1e3 Prevent bricked install when ^C is pressed during archive extraction
(cherry picked from commit 7ad9870071)
2021-01-29 18:02:25 -08:00
mergify[bot]
08bda35fd6 Buffer authority must match upgrade authority for deploys and upgrades (bp #14923) (#14935)
* Buffer authority must match upgrade authority for deploys and upgrades (#14923)

(cherry picked from commit 07cef5a557)

# Conflicts:
#	cli/src/program.rs
#	cli/tests/program.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-01-29 23:04:23 +00:00
mergify[bot]
ba1d0927e6 docs: Fix mangled getConfirmedTransaction parameter list (#14921)
(cherry picked from commit 52326d53be)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-29 13:53:53 -07:00
mergify[bot]
555df9f96c cli: Improve reliability of program deploys (#14902) (#14925)
* cli: Improve reliability of program deploys

* chore: fix clippy

(cherry picked from commit 996a27d475)

Co-authored-by: Justin Starry <justin@solana.com>
2021-01-29 13:07:31 -07:00
mergify[bot]
99166a4a59 program-test: Expose bank task to fix fuzzing (#14908) (#14927)
* program-test: Expose bank task to fix fuzzing

* Run cargo fmt and clippy

* Remove unnecessary print in test

* Review feedback

* Transition to AtomicBool

(cherry picked from commit 0ce08274f9)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-01-29 20:57:56 +01:00
Michael Vines
92534849db Fix cli usage build
(cherry picked from commit 2e54b6acb1)
2021-01-29 11:45:56 -08:00
mergify[bot]
5ba8b4884b Ignore syscalls which are not registered in cached rbpf executable. (#14898) (#14929)
(cherry picked from commit d026da4a1b)

Co-authored-by: Alexander Meißner <AlexanderMeissner@gmx.net>
2021-01-29 11:07:54 -08:00
Trent Nelson
cb878f2ea8 Add feature for pending SPL Token self-transfer fix
(cherry picked from commit 85b5dbead6)
2021-01-29 10:34:04 -07:00
mergify[bot]
b1d5bf30d2 Remove potentially too costly Packets::default() (#14821) (#14915)
* Remove potentially too costly Packets::default()

* Fix test...

* Restore Packets::default()

* Restore Packets::default() more

(cherry picked from commit d6873b82ab)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-29 13:52:33 +09:00
Michael Vines
481c60e287 Surface faucet start failures to the user of solana-test-validator
(cherry picked from commit 8993ac0c74)
2021-01-28 16:59:44 -08:00
Eric Williams
86242dc3ba format to list 2021-01-28 16:14:42 -07:00
mergify[bot]
71899deb53 Reorg and cleanup of economics section of docs (#14868) (#14889) 2021-01-28 16:07:31 -07:00
Tyera Eulberg
7e2e0d4a86 Manually camelCase solana program json (#14907) 2021-01-28 13:41:57 -07:00
mergify[bot]
d4cc7c6b66 Only mmap file from snapshot once (#14815) (#14901)
(cherry picked from commit a53b8558cd)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-28 11:03:40 -08:00
mergify[bot]
b62349f081 cli now supports a custodian for stake authorize operations (#14860)
Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-28 18:30:43 +00:00
mergify[bot]
d16638dc90 Add syscall feature activation test (#14890) (#14895)
(cherry picked from commit 63429507b2)

Co-authored-by: Jack May <jack@solana.com>
2021-01-28 09:57:46 -08:00
mergify[bot]
b97fc31fcd nit: message doesn't represent (#14893) (#14897)
(cherry picked from commit 2ca0872a98)

Co-authored-by: Jack May <jack@solana.com>
2021-01-28 09:57:23 -08:00
Michael Vines
4378634970 Bump version to 1.5.6 2021-01-27 10:50:56 -08:00
Trent Nelson
10e12d14e1 install: Add version envvar to info --eval output
(cherry picked from commit dcb6f68287)
2021-01-27 09:05:17 -08:00
mergify[bot]
c5cfbee853 Aggregate purge and shrink metrics (#14763) (#14883)
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit 72f10f5f29)
2021-01-27 02:56:45 -08:00
mergify[bot]
c276670a94 Snapshots missing slots from accounts cache clean optimization (#14852) (#14878)
Co-authored-by: Carl Lin <carl@solana.com>
2021-01-26 22:20:07 -08:00
Michael Vines
676da0a836 Ensure sanitary transactions
(cherry picked from commit 04ce33a04e)
2021-01-26 17:03:01 -08:00
Michael Vines
0021cf924f solana decode-transaction no longer panics on unsanitary transactions
(cherry picked from commit e9b5d65f40)
2021-01-26 17:03:01 -08:00
Michael Vines
d593ee187c chore: comment blockHeight
(cherry picked from commit 8cd036938e)
2021-01-26 17:01:41 -08:00
Michael Vines
a07bfc2d76 test: account for rent collection to avoid bogus test failure
(cherry picked from commit fba0e933a4)
2021-01-26 17:01:41 -08:00
Michael Vines
f0e9843dd4 fix: add Clock sysvar to AuthorizeWithSeed instruction
(cherry picked from commit fd06c1f8fa)
2021-01-26 17:01:41 -08:00
Michael Vines
a2f643e7c7 Include Clock sysvar in AuthorizeWithSeed instruction
(cherry picked from commit 8359f4f5ff)
2021-01-26 17:01:41 -08:00
Michael Vines
7ebaf1c192 Add StakeInstruction::Merge logging
(cherry picked from commit ff22091a98)
2021-01-26 17:01:33 -08:00
sakridge
6a61e7a01e Enable accounts caching by default (#14854)
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit 5bf5a5ec41)
2021-01-26 16:56:57 -08:00
Jack May
1c23f135bf Bump rbpf to v0.2.4 (#14867) 2021-01-26 22:49:57 +00:00
mergify[bot]
3c67f71695 Deprecate commitment variants (bp #14797) (#14858)
* Deprecate commitment variants (#14797)

* Deprecate commitment variants

* Add new CommitmentConfig builders

* Add helpers to avoid allowing deprecated variants

* Remove deprecated transaction-status code

* Include new commitment variants in runtime commitment; allow deprecated as long as old variants persist

* Remove deprecated banks code

* Remove deprecated variants in core; allow deprecated in rpc/rpc-subscriptions for now

* Heavier hand with rpc/rpc-subscription commitment

* Remove deprecated variants from local-cluster

* Remove deprecated variants from various tools

* Remove deprecated variants from validator

* Update docs

* Remove deprecated client code

* Add new variants to cli; remove deprecated variants as possible

* Don't send new commitment variants to old clusters

* Retain deprecated method in test_validator_saves_tower

* Fix clippy matches! suggestion for BPF solana-sdk legacy compile test

* Refactor node version check to handle commitment variants and transaction encoding

* Hide deprecated variants from cli help

* Add cli App comments

(cherry picked from commit ffa5c7dcc8)

* Fix 1.5 stake-o-matic

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2021-01-26 20:49:04 +00:00
mergify[bot]
d380b9cef7 Add more upgradeable tests (#14846) (#14850)
(cherry picked from commit e57b9c3b02)

Co-authored-by: Jack May <jack@solana.com>
2021-01-26 09:42:49 -08:00
Jack May
7415821156 Rotate feature key: use loaded executable accounts (#14838)
(cherry picked from commit 74c83e6854)
2021-01-26 08:53:59 -08:00
Ryo Onodera
4a6c3a9331 Add security best practice sections (#14798)
(cherry picked from commit 60611ae8a0)
2021-01-26 08:53:54 -08:00
Jack May
0cd1cce588 Update find_program_address docs (#14840)
(cherry picked from commit 4a4881d30f)
2021-01-26 08:53:44 -08:00
Michael Vines
f762b4a730 Remove legacy_stake program
(cherry picked from commit 2b50433099)
2021-01-26 08:53:38 -08:00
behzad nouri
08f7f2546e fixes test_filter_current flakiness (#14816)
(cherry picked from commit d1df9da7d3)
2021-01-25 12:44:46 -08:00
mergify[bot]
cb701fbbd9 Reduce ~2 GBs mem by avoiding another overalloc. (#14806) (#14820)
* Reduce a few GBs of mem by avoiding another overalloc.

* Use x.len() for the last item from chunks()

(cherry picked from commit 015058e0b7)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-25 08:08:33 +00:00
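The second bullet above points at a simple pattern: size each buffer from `chunk.len()` rather than the nominal chunk size, because `chunks()` yields a shorter final slice. A toy sketch with hypothetical names, not the patched code itself:

```rust
fn copy_in_chunks(data: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    data.chunks(chunk_size)
        .map(|chunk| {
            // Allocate exactly chunk.len(), not chunk_size: the last
            // chunk from chunks() is usually shorter, and sizing every
            // buffer at chunk_size over-allocates.
            let mut buf = Vec::with_capacity(chunk.len());
            buf.extend_from_slice(chunk);
            buf
        })
        .collect()
}

fn main() {
    let data = vec![0u8; 10];
    let out = copy_in_chunks(&data, 4); // chunks of 4, 4, 2
    assert_eq!(out.last().unwrap().len(), 2);
}
```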
mergify[bot]
231ff7d70d removes redundant epoch stakes cache in retransmit (#14781) (#14817)
Following d6d76219b, staked nodes computed from vote accounts are
already cached in runtime::Stakes, so the caching in retransmit_stage is
redundant.

(cherry picked from commit e1021d9f83)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-25 01:11:58 +00:00
mergify[bot]
a154414e65 patches crds vote-index assignment bug (bp #14438) (#14741)
* patches crds vote-index assignment bug (#14438)

If tower is full, old votes are evicted from the front of the deque:
https://github.com/solana-labs/solana/blob/2074e407c/programs/vote/src/vote_state/mod.rs#L367-L373
whereas recent votes if expire are evicted from the back:
https://github.com/solana-labs/solana/blob/2074e407c/programs/vote/src/vote_state/mod.rs#L529-L537

As a result, from a single tower_index scalar, we cannot infer which crds-vote
should be overwritten:
https://github.com/solana-labs/solana/blob/2074e407c/core/src/crds_value.rs#L576

In addition, there is an off-by-one bug in the existing code. tower_index is
bounded by MAX_LOCKOUT_HISTORY - 1:
https://github.com/solana-labs/solana/blob/2074e407c/core/src/consensus.rs#L382
So it is at most 30, whereas MAX_VOTES is 32:
https://github.com/solana-labs/solana/blob/2074e407c/core/src/crds_value.rs#L29
This means that this branch is never taken:
https://github.com/solana-labs/solana/blob/2074e407c/core/src/crds_value.rs#L590-L593
so the crds table always keeps the 29 **oldest** votes by wallclock, and then
only overrides the 30th one each time (i.e. a tally of only the two most
recent votes).

(cherry picked from commit 8e581601d6)

* removes unnecessary semicolon

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-24 21:24:16 +00:00
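The front-versus-back eviction asymmetry described above is easy to reproduce with a `VecDeque`; a toy illustration under the assumption MAX_LOCKOUT_HISTORY = 31, not the actual vote-state code:

```rust
use std::collections::VecDeque;

const MAX_LOCKOUT_HISTORY: usize = 31;

fn push_vote(tower: &mut VecDeque<u64>, slot: u64) {
    // Expired recent votes would be popped from the *back* (elided
    // here), while a full tower evicts its oldest vote from the
    // *front*. A single scalar index into the deque therefore cannot
    // tell an observer which stored vote was replaced.
    if tower.len() == MAX_LOCKOUT_HISTORY {
        tower.pop_front();
    }
    tower.push_back(slot);
}

fn main() {
    let mut tower = VecDeque::new();
    for slot in 0..40 {
        push_vote(&mut tower, slot);
    }
    assert_eq!(tower.len(), MAX_LOCKOUT_HISTORY);
    assert_eq!(*tower.front().unwrap(), 9); // the oldest votes were evicted
}
```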
mergify[bot]
673c679523 Partial clean (#14800) (#14813)
* Revert "Revert "Partial accounts clean (#14652)" (#14777)"

This reverts commit ad2e10e17b.

* Remove squashed uncleaned keys

(cherry picked from commit 0d32a0e0f4)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-24 20:52:23 +00:00
mergify[bot]
65a7621b17 broadcasts duplicate shreds through gossip (bp #14699) (#14812)
* broadcasts duplicate shreds through gossip (#14699)

(cherry picked from commit 491b059755)

# Conflicts:
#	core/src/cluster_info.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-24 17:45:35 +00:00
mergify[bot]
c9da25836a Upgrade to Rust v1.49.0 (bp #14810) (#14811)
* Upgrade to Rust v1.49.0

(cherry picked from commit cbffab7850)

# Conflicts:
#	core/src/crds_value.rs

* rebase

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-24 04:42:09 +00:00
mergify[bot]
b48dd58fda Upgrade sha2 to 0.9.3 (#14746) (#14799)
(cherry picked from commit 191193289f)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-23 23:08:33 +00:00
mergify[bot]
40f32fd37d Speed up generate_index (#14792) (#14807)
(cherry picked from commit 424bb797a6)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-23 18:50:04 +00:00
mergify[bot]
fef4100f5f Remove unnecesary flushes in previous roots (#14596) (#14803)
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit c77461e428)

Co-authored-by: carllin <wumu727@gmail.com>
2021-01-23 15:02:32 +00:00
mergify[bot]
68bf58aac0 Parallel cache scan (#14544) (#14804)
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit 2745b79b74)

Co-authored-by: carllin <wumu727@gmail.com>
2021-01-23 13:42:47 +00:00
mergify[bot]
5677662d61 Improve documentation of sendTransaction (#14770) (#14802)
* Improve documentation of sendTransaction

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

* Word wrap and improve terminology

* Tweak

* Oops

* Apply suggestions from code review

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
(cherry picked from commit 1d87091d51)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-23 10:02:29 +00:00
mergify[bot]
6755fd0c96 Make exchange listening-for-deposits language stronger (#14775) (#14801)
* Make exchange listening-for-deposits language stronger

* Update docs/src/integrations/exchange.md

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>

* Update from deprecated method

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit 66fd187f16)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-23 07:27:58 +00:00
mergify[bot]
dfbe38b859 Add solana-test-validator --warp-slot argument (bp #14785) (#14796)
* Add convenience function to create a snapshot archive out of any Bank

(cherry picked from commit dd5a2ef05f)

* Add solana-test-validator --warp-slot argument

(cherry picked from commit bf1943e489)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-23 06:38:16 +00:00
mergify[bot]
480a35d678 Track account writable deescalation (bp #14626) (#14787)
* Track account writable deescalation (#14626)

(cherry picked from commit 77572a7c53)

# Conflicts:
#	sdk/src/feature_set.rs

* fix conflicts

Co-authored-by: Jack May <jack@solana.com>
2021-01-23 03:33:21 +00:00
sakridge
e6b53c262b Revert "Partial accounts clean (#14652) (#14751)" (#14776)
This reverts commit c89f5e28b7.
2021-01-22 16:17:20 -08:00
mergify[bot]
e127631f8d Add ability to clone accounts from an RPC endpoint (#14784)
(cherry picked from commit cbb9ac19b9)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-22 22:47:29 +00:00
mergify[bot]
2d246581ed Add ability to force feature activation without code modification (#14783)
(cherry picked from commit c3548f790c)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-22 22:39:56 +00:00
mergify[bot]
952a5b11c1 CLI: Strive for at least one signer (bp #14767) (#14779)
* CLI: Strive for at least one signer

(cherry picked from commit 8f8d593457)

* CLI: Allow missing pubkey in `--verbose` config output

(cherry picked from commit 90e1778cd2)

* CLI: Don't scare the users

(cherry picked from commit e9c98f2416)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-22 19:50:50 +00:00
mergify[bot]
c49a8cb67c Rpc: Add custom error for BigTable data not found (#14762) (#14765)
* Expose not-found bigtable error

* Add custom rpc error for bigtable data not found

* Return custom rpc error when bigtable block is not found

* Generalize long-term storage

(cherry picked from commit 71e9958e06)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-22 05:56:47 +00:00
mergify[bot]
733a1c85cf Add block_time to getConfirmedSignaturesForAddress2 and getConfirmedTransaction (bp #14572) (#14728)
* Add block_time to getConfirmedSignaturesForAddress2 and getConfirmedTransaction (#14572)

* add block_time to get_confirmed_signatures_for_address2 and protobuf implementation for tx_by_addr

* add tests for convert

* update cargo lock

* run cargo format after rebase

* introduce legacy TransactionByAddrInfo

* move LegacyTransactionByAddrInfo back to storage-bigtable

(cherry picked from commit 1de6d28eaf)

* fix local sanity script

* add missing block_time field

Co-authored-by: Josh <josh.hundley@gmail.com>
2021-01-22 02:02:45 +00:00
mergify[bot]
c8f7719c9e fixes test_filter_current flakiness (bp #14749) (#14761)
* fixes test_filter_current flakiness (#14749)

(cherry picked from commit e4da6761a7)

# Conflicts:
#	core/src/crds_value.rs

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-22 01:24:10 +00:00
mergify[bot]
c89f5e28b7 Partial accounts clean (#14652) (#14751)
Clean fewer keys by tracking two cases:
* Touched keys per slot in uncleaned_keys derived from
accounts delta hash operation.
* Set of keys with any zero-lamport updates.

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-22 00:34:49 +00:00
mergify[bot]
38e4c34d18 CLI: Add calculate-rent subcommand (bp #14725) (#14759)
* cli-output: Genericize `writeln_name_value()`

(cherry picked from commit 2820d0a23d)

* CLI: Add `calculate-rent` subcommand

(cherry picked from commit 12410541a4)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-22 00:04:07 +00:00
mergify[bot]
afa7343bc2 Add ic_msg()/ic_logger_msg() macros (#14757)
(cherry picked from commit 3c6dbd21d2)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-21 23:10:50 +00:00
mergify[bot]
8ea584e01f Update bigtable confirm to use confirmation_status (#14750) (#14754)
(cherry picked from commit ca95302038)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-21 22:21:34 +00:00
mergify[bot]
239dc9b0b7 rewrites turbine retransmit peers computation (#14584) (#14742)
(cherry picked from commit b5fd0ed859)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-21 14:25:46 +00:00
mergify[bot]
6e6a55b7d6 Add signer/writable de/escalation tests (#14726) (#14739)
(cherry picked from commit aa96ad042b)

Co-authored-by: Jack May <jack@solana.com>
2021-01-21 10:39:08 +00:00
mergify[bot]
815bad8a6c Minor doc clarification (#14733)
(cherry picked from commit 5ac536d0fb)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-21 08:13:05 +00:00
mergify[bot]
74f813574f Nonce address doesn't sign AdvanceNonceAccount (#14722)
(cherry picked from commit 447e3de1f2)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-21 05:31:55 +00:00
mergify[bot]
07648f43db Make it possible to opt-out jemalloc for heaptrack (#14634) (#14685)
(cherry picked from commit d63b2baf0e)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-21 03:32:07 +00:00
mergify[bot]
99f0d29e65 Return confirmation-status (#14709) (#14715)
(cherry picked from commit 0e87572eb0)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-21 03:05:53 +00:00
mergify[bot]
87825f3beb Sanitize base58 pubkeys and sigs (bp #14708) (#14712)
* SDK: Sanitize base58 pubkey input

(cherry picked from commit 250b3969d4)

* SDK: Sanitize base58 signature input

(cherry picked from commit 2783aee483)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-21 02:39:58 +00:00
mergify[bot]
8e38f90e54 Default to highest finalized block if no slot provided (#14701) (#14704)
(cherry picked from commit c64d4f7693)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-20 22:48:32 +00:00
Jack May
d72c90e475 Bump version to v1.5.5 (#14700) 2021-01-20 20:26:16 +00:00
mergify[bot]
459ae81655 Cli: promote commitment to a global arg + config.yml (#14684) (#14698)
* Make commitment a global arg

* Add commitment to solana/cli/config.yml

* Fixup a couple Display/Verbose bugs

(cherry picked from commit a7086a0f83)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-20 18:04:25 +00:00
mergify[bot]
a2ce22f11b Bail on small deploy buffers (#14677) (#14682)
(cherry picked from commit a480b63234)

Co-authored-by: Jack May <jack@solana.com>
2021-01-20 03:39:27 +00:00
mergify[bot]
44ad4a1ecd Prevent the invoke and upgrade of programs in the same tx batch (bp #14653) (#14680) 2021-01-19 17:58:45 -08:00
mergify[bot]
fcd8dd75c5 Cli: default to single gossip (#14673) (#14676)
* Init cli RpcClient with chosen commitment; default to single_gossip

* Fill in missing client methods

* Cli tests: make RpcClient commitment specific

* Simplify rpc_client calls, using configured commitment

* Check validator vote account with single-gossip commitment

(cherry picked from commit 4964b0fe61)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-19 23:58:19 +00:00
mergify[bot]
ca262fdeb9 Configure Bigtable's timeout, enabling by default (#14657) (#14669)
* Configure bigtable's timeout when read-only

* Review comments

* Apply nits (thanks!)

Co-authored-by: Michael Vines <mvines@gmail.com>

* Timeout in the streamed decoding as well

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit dcaa025822)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-19 15:14:59 +00:00
mergify[bot]
3d6bb95932 Improve docs around bigtable read limit (#14660) (#14662)
(cherry picked from commit 2eb19fa5e5)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-19 07:51:44 +00:00
mergify[bot]
e5d36fcfb3 feature gates turbine retransmit peers patch (#14631) (#14659)
(cherry picked from commit c6ae0667e6)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-19 05:38:26 +00:00
mergify[bot]
061965e291 Rename RpcNodeUnhealthy error to NodeUnhealthy, generalize getHealth RPC error object for the future (#14656)
(cherry picked from commit 5d9dc609b1)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-19 05:24:29 +00:00
mergify[bot]
e56681a2f6 Make Bigtable::get_confirmed_blocks inclusive of requested start_slot and end_slot (#14651) (#14655)
* Fix off-by-one error

* Filter out blocks greater than end slot

(cherry picked from commit cbf8ef7480)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-19 03:42:59 +00:00
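The off-by-one fixed above is the classic half-open versus inclusive range: the scan must keep `end_slot` itself while still discarding rows past it. A self-contained sketch of the filtering logic; `confirmed_blocks_in_range` is a hypothetical helper, not the storage-bigtable code:

```rust
/// Return the slots in `rows` that fall within [start_slot, end_slot],
/// inclusive on both ends. `rows` is assumed to be sorted ascending.
fn confirmed_blocks_in_range(rows: &[u64], start_slot: u64, end_slot: u64) -> Vec<u64> {
    rows.iter()
        .copied()
        .skip_while(|&slot| slot < start_slot)
        // `<= end_slot` keeps the end slot itself; `< end_slot` here
        // would reintroduce the off-by-one.
        .take_while(|&slot| slot <= end_slot)
        .collect()
}

fn main() {
    let rows = [10, 11, 13, 14, 15, 20];
    assert_eq!(confirmed_blocks_in_range(&rows, 11, 15), vec![11, 13, 14, 15]);
}
```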
mergify[bot]
8cf23903ce Fix the occasional stuck RPC request (bp #14628) (#14638)
* WIP fix the occasional stuck RPC request

(cherry picked from commit 5cf9094bb9)

* Clean up and add comment

(cherry picked from commit 8d4ab1bab1)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-18 19:57:28 +00:00
mergify[bot]
7ac2aae730 More generic accounts purge functions (#14595) (#14640)
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit 5f14f45282)

Co-authored-by: carllin <wumu727@gmail.com>
2021-01-18 05:53:40 +00:00
mergify[bot]
6f5b9331bd Add --minimum-validator-identity-balance (#14636)
(cherry picked from commit a12ede8e7d)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-18 03:36:58 +00:00
mergify[bot]
a04375e204 Add getSnapshotSlot RPC method (#14632)
(cherry picked from commit 4003f86f04)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-16 20:51:33 +00:00
Michael Vines
7b19d26a6e Add getHealth RPC method 2021-01-16 10:51:54 -08:00
Michael Vines
3c3a3f0b50 Improve solana-test-validator output
(cherry picked from commit 1c2ae15b1d)
2021-01-16 10:14:43 -08:00
mergify[bot]
0367b32533 Update-executable flag in pre-accounts (#14622) (#14625)
(cherry picked from commit 66b54b852d)

Co-authored-by: Jack May <jack@solana.com>
2021-01-16 03:05:12 +00:00
mergify[bot]
09392ee562 Support account on tmpfs via net/ scripts (bp #14459) (#14621)
* multinode-demo: Pass --accounts through bootstrap leader wrapper

(cherry picked from commit 327be55acc)

* gce.sh: Factor out default custom memory

(cherry picked from commit ddf1d2dbf5)

* net/: Support accounts on swap-backed tmpfs

(cherry picked from commit ff599ace4d)

* net/gce.sh: Add custom RAM arg instead of doubling default with tmpfs

(cherry picked from commit 3175cf1deb)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-16 00:14:16 +00:00
mergify[bot]
1a848e22ea net/net.sh: Quiet pre-emptible instance status check (#14618)
(cherry picked from commit 7b67228bc1)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-15 21:19:33 +00:00
mergify[bot]
3d8cadebc0 Use optimistic confirmation in getSignatureStatuses, and various downstream client methods (#14430) (#14611)
* Add optimistically_confirmed field to TransactionStatus

* Update docs

* Convert new field to confirmation_status

* Update docs to confirmationStatus

* Update variants

* Update docs

* Just Confirmed

(cherry picked from commit 9a89689ad3)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-15 17:25:04 +00:00
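The net effect of the entry above is that signature statuses report a three-state `confirmationStatus` rather than a boolean. A sketch of the shape this takes; the variant names follow the commit messages, while the serde details and the helper below are assumptions:

```rust
use serde::Serialize;

/// Coarse commitment level reported per transaction.
#[derive(Debug, Clone, Copy, PartialEq, Serialize)]
#[serde(rename_all = "camelCase")]
enum TransactionConfirmationStatus {
    /// Seen by the node but not yet voted on by a supermajority.
    Processed,
    /// Optimistically confirmed by a supermajority of stake.
    Confirmed,
    /// Rooted, i.e. confirmed with maximum lockout.
    Finalized,
}

/// Collapse the two signals the validator tracks into one field.
fn confirmation_status(
    rooted: bool,
    optimistically_confirmed: bool,
) -> TransactionConfirmationStatus {
    if rooted {
        TransactionConfirmationStatus::Finalized
    } else if optimistically_confirmed {
        TransactionConfirmationStatus::Confirmed
    } else {
        TransactionConfirmationStatus::Processed
    }
}
```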
mergify[bot]
47b8d518c5 Use highest-confirmed-root for max check (#14599) (#14604)
(cherry picked from commit 465f991035)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-15 15:28:45 +00:00
mergify[bot]
011d2dd41d Fix program-test's CPI support (#14594) (#14597)
* Fix program-test's CPI support

* feedback

(cherry picked from commit 0d29f9e82c)

Co-authored-by: Jack May <jack@solana.com>
2021-01-15 04:44:58 +00:00
mergify[bot]
bdfffd0151 Add load/execute/store timings (#14561) (#14591)
(cherry picked from commit 907f518f6d)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-14 23:48:36 +00:00
mergify[bot]
722cebc4c3 docs: Add stake programming documentation (#14529) (#14582)
* Add stake programming documentation

We had some questions about stake programming documentation, and there
wasn't a single place that gathered information about the stake-o-matic
and other stake development. This adds a page with that
information.

* Update docs/src/staking/stake-programming.md

Co-authored-by: Eric Williams <eric@solana.com>

* Update docs/src/staking/stake-programming.md

Co-authored-by: Eric Williams <eric@solana.com>

* Update docs/src/staking/stake-programming.md

Co-authored-by: Eric Williams <eric@solana.com>

* Update docs/src/staking/stake-programming.md

Co-authored-by: Eric Williams <eric@solana.com>

* Update docs/src/staking/stake-programming.md

Co-authored-by: Eric Williams <eric@solana.com>

* Apply suggestions from code review

* Remove trailing whitespace

Co-authored-by: Eric Williams <eric@solana.com>
(cherry picked from commit b37dbed479)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-01-14 16:17:25 +00:00
mergify[bot]
771b98a168 Load executable accounts from invoke context (#14574) (#14575)
(cherry picked from commit 6e8a1ba7de)

Co-authored-by: Jack May <jack@solana.com>
2021-01-14 09:39:26 +00:00
Trent Nelson
1b02ec4f6e Bump version to v1.5.4 2021-01-14 04:40:25 +00:00
mergify[bot]
aae51925c1 patches bug in turbine's neighbors computation (#14565) (#14569)
Removing the local node's index early from the set here:
https://github.com/solana-labs/solana/blob/e1b59ded4/core/src/retransmit_stage.rs#L346
distorts the order of nodes depending on which node is computing the
turbine fan-out tree, and results in incorrect neighbors computation.

(cherry picked from commit cfcca1cd3c)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-13 23:52:14 +00:00
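To see why the order matters in the entry above: every node computes the same weighted shuffle of peers and then slices it into fixed-size neighborhoods, so each node's position in the shared list determines its neighbors. A toy sketch, not the retransmit-stage code, showing how removing one's own entry first shifts everyone after it into different neighborhoods:

```rust
const FANOUT: usize = 2;

/// Slice a shuffled peer list into neighborhoods of FANOUT nodes each.
/// Every node must run this over the *same* list; `index` is only used
/// afterwards to locate a node's neighborhood.
fn neighborhood_of(peers: &[&str], index: usize) -> Vec<String> {
    let start = (index / FANOUT) * FANOUT;
    peers[start..(start + FANOUT).min(peers.len())]
        .iter()
        .map(|p| p.to_string())
        .collect()
}

fn main() {
    let peers = ["a", "b", "c", "d", "e"];
    // With the full list, every node agrees: "c" (index 2) shares a
    // neighborhood with "d".
    assert_eq!(neighborhood_of(&peers, 2), vec!["c", "d"]);
    // Buggy variant: if node "b" removes itself before computing the
    // tree, "c" shifts down to index 1 and "b" now disagrees with the
    // rest of the cluster about "c"'s neighborhood.
    let without_b = ["a", "c", "d", "e"];
    assert_eq!(neighborhood_of(&without_b, 1), vec!["a", "c"]);
}
```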
mergify[bot]
00626fbf4c Add --rpc-threads argument (#14568)
(cherry picked from commit 11daaadc93)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-13 22:52:15 +00:00
mergify[bot]
14ffc05fd4 Use leader_forward_count for tx retries too (#14547) (#14564)
(cherry picked from commit e1b59ded4b)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-13 19:30:55 +00:00
mergify[bot]
6e5c9a1b1c adds pubkey for behzad@solana.com (#14558) (#14563)
(cherry picked from commit 673cb39975)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-13 18:57:10 +00:00
mergify[bot]
1276b92462 Don't stop to find newer cluster-confirmed roots (#14557) (#14560)
* Don't stop to find newer cluster-confirmed roots

* Fix and add new tests

* nits

(cherry picked from commit e95ebcf864)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-13 17:05:46 +00:00
Alexander Meißner
89241cedba Bump RBPF version to v0.2.3
(cherry picked from commit 0d26cb6d37)
2021-01-12 09:47:28 -08:00
mergify[bot]
0e3e3a03cc Cache account stores, flush from AccountsBackgroundService (#13140) (#14542)
(cherry picked from commit 6dfad0652f)

Co-authored-by: carllin <wumu727@gmail.com>
2021-01-12 06:12:18 +00:00
Jack May
25fe93e9fb Check native account owner (#14535)
(cherry picked from commit 8ad5931bfc)
2021-01-11 21:29:53 -08:00
mergify[bot]
4440a8d9fa Update timestamp max allowable drift to 50% of PoH (#14531) (#14539)
* Repurpose warp-timestamp feature for general bump

* Change max_allowable_drift to 50%

* Fill in PR#

* Fix rpc test setup

(cherry picked from commit b0e6e29527)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-12 02:03:41 +00:00
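A hedged sketch of the clamping idea behind the drift change above: the stake-weighted estimate is bounded to within 50% of the elapsed time PoH implies. Function and constant names are illustrative, and the symmetric bound is an assumption:

```rust
/// Clamp a stake-weighted timestamp estimate so it cannot drift more
/// than 50% from the elapsed time PoH implies since the last trusted
/// epoch timestamp. All values are unix seconds.
fn clamp_estimate(estimate: i64, epoch_start: i64, poh_elapsed: i64) -> i64 {
    let max_drift = poh_elapsed / 2; // 50% of the PoH-derived elapsed time
    let poh_estimate = epoch_start + poh_elapsed;
    estimate.clamp(poh_estimate - max_drift, poh_estimate + max_drift)
}

fn main() {
    // 100s of PoH elapsed: estimates are confined to [+50s, +150s]
    // past the epoch-start timestamp.
    assert_eq!(clamp_estimate(1_000_200, 1_000_000, 100), 1_000_150);
    assert_eq!(clamp_estimate(1_000_120, 1_000_000, 100), 1_000_120);
}
```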
Michael Vines
b737e562ab Use standard tmp-snapshot- file prefix for the "new_state" archive for better cleanup/consistency 2021-01-11 17:33:10 -08:00
Michael Vines
9cb32b2105 Restore snapshot hard linking 2021-01-11 17:33:03 -08:00
Michael Vines
f46ad1b7b7 Avoid tmp snapshot backlog in SnapshotPackagerService under high load (#14516) 2021-01-11 17:32:58 -08:00
mergify[bot]
b15603b4eb Add rocksdb high priority threads (#14515) (#14536)
Without them, memtable writes can stall on compactions.

(cherry picked from commit d8105bb7d7)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-11 23:07:59 +00:00
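Context for the note above: RocksDB runs memtable flushes on its high-priority background pool, so with no threads dedicated to it, flushes queue behind compactions and writes stall. A sketch of the knob via the rust-rocksdb crate; the `Env` API and thread count here are assumptions to check against the crate docs:

```rust
use rocksdb::{Env, Options, DB};

fn open_db(path: &str) -> Result<DB, rocksdb::Error> {
    // Dedicated high-priority threads keep memtable flushes from
    // queueing behind long-running compactions.
    let mut env = Env::new()?;
    env.set_high_priority_background_threads(4);

    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.set_env(&env);
    DB::open(&opts, path)
}
```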
Michael Vines
c7267799ce Clarify log message, the remote snapshot might not actually be newer 2021-01-11 11:53:55 -08:00
mergify[bot]
ed0a083cd9 Cli: Implement OutputFormat for some missing subcommands (#14518) (#14520)
* Implement OutputFormat for solana leader-schedule

* Implement OutputFormat for solana inflation

(cherry picked from commit e4cf845974)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-10 01:20:39 +00:00
mergify[bot]
484bd48b35 Various snapshot-related code clean up (bp #14487) (#14513)
* Create account paths once

(cherry picked from commit fe0ba4a429)

* Replace incorrect symlink_dir usage with symlink_file

(cherry picked from commit f2a7f561a0)

* Reduce TempDir exposure

(cherry picked from commit 9f70f7dc3e)

* Rename AccountsPackage::root to AccountsPackage::slot

(cherry picked from commit 141e6706e6)

* Rename CompressionType to ArchiveFormat

(cherry picked from commit 7be6770808)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-09 18:29:09 +00:00
mergify[bot]
ae73cc8d05 Humanize the 'ledger processed...' time (#14511)
(cherry picked from commit 86c81a0ba2)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-09 08:14:14 +00:00
mergify[bot]
fc59a08f0e Bail on all CPI errors (#14500) (#14507)
* Bail on all CPI errors

* whitespace

(cherry picked from commit ec48631fc5)

Co-authored-by: Jack May <jack@solana.com>
2021-01-09 04:44:14 +00:00
mergify[bot]
15da7968c5 Add cli command to query upgradeable account authorities (#14491) (#14499)
(cherry picked from commit 638f225dc4)

Co-authored-by: Jack May <jack@solana.com>
2021-01-09 01:13:20 +00:00
mergify[bot]
b58a6e2b6e Report correct program id (#14486) (#14498)
(cherry picked from commit 9d53eca6e3)

Co-authored-by: Jack May <jack@solana.com>
2021-01-09 01:00:42 +00:00
Michael Vines
ec15ea079f Bump version to 1.5.3 2021-01-08 16:19:27 -08:00
mergify[bot]
4b5a05bf38 limits number of crds values associated with a pubkey (bp #14467) (#14490)
* limits number of crds values associated with a pubkey (#14467)

(cherry picked from commit 766195dded)

* updates smallvec

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2021-01-08 21:52:40 +00:00
mergify[bot]
7dd7141307 Suppress cargo audit failure for difference crate (bp #14488) (#14493)
* Suppress cargo audit failure for `difference` crate; there's no newer crate to upgrade to yet

(cherry picked from commit 3eaa826ad9)

* Bump smallvec version

(cherry picked from commit 21a0a83543)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-08 21:52:28 +00:00
mergify[bot]
e5175c843d Add buffer authority to upgradeable loader (#14482) (#14485)
(cherry picked from commit 58487c6360)

Co-authored-by: Jack May <jack@solana.com>
2021-01-08 18:54:11 +00:00
mergify[bot]
d5ff64b0d7 docs: Validator tuning improvements (bp #14478) (#14480)
* docs: wrap lines

(cherry picked from commit 140642ea21)

* docs: Prefer `dd` to `fallocate` when creating swap file

(cherry picked from commit c035f2a745)

* docs: Add RUST_LOG explainer

(cherry picked from commit 30038a8849)

Co-authored-by: Trent Nelson <trent@solana.com>
2021-01-07 19:41:45 +00:00
mergify[bot]
0fbdc7e152 Enable program upgrades via CPI (#14449) (#14469)
(cherry picked from commit 5eacc5d08d)

Co-authored-by: Jack May <jack@solana.com>
2021-01-06 23:45:10 +00:00
Tyera Eulberg
49aca9ecd8 Add fixed tick rate adjustment (#14447) (#14464)
Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-06 21:44:06 +00:00
mergify[bot]
fcc147b4f2 Gate cpi program account passing (#14443) (#14446)
(cherry picked from commit a8b5a32b50)

Co-authored-by: Jack May <jack@solana.com>
2021-01-06 19:20:49 +00:00
mergify[bot]
c455d1b1c5 Enable program-id account index for supply calculations (#14444) (#14456)
* Enable program-id account index for supply calculations

* Fixup comments

(cherry picked from commit ce1766d798)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-06 04:04:44 +00:00
mergify[bot]
e9b29fc697 Bump serum-dex pegged commit (#14448) (#14454)
(cherry picked from commit d2b0fd973f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-05 19:17:03 -07:00
Ryo Onodera
fdea6fad26 Save 7G mem on mainnet fixing AccIndex overalloc. (#14435)
(cherry picked from commit c9df6134fa)
2021-01-05 17:55:44 -08:00
mergify[bot]
a4bc31341a Lower recycle store count (#14429) (#14442)
Too many stores can cause swap usage, which
is detrimental to account store times.

(cherry picked from commit 53d65009a0)

Co-authored-by: sakridge <sakridge@gmail.com>
2021-01-05 21:39:31 +00:00
mergify[bot]
4af797c0a2 Introduce rpc url monikers for cli (#14409) (#14433)
* Introduce rpc url monikers for cli

* Use https:// and support initials as well

(cherry picked from commit 54a5876c48)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-05 12:16:11 +00:00
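The monikers above let `--url` accept a short cluster name, or just its first letter, in place of a full URL. A sketch of the mapping using the public Solana endpoints; exact CLI parsing may differ:

```rust
/// Expand a cluster moniker, or its initial, into a full JSON RPC URL.
/// Anything unrecognized is assumed to already be a URL and passed through.
fn normalize_to_url(url_or_moniker: &str) -> String {
    match url_or_moniker {
        "m" | "mainnet-beta" => "https://api.mainnet-beta.solana.com".to_string(),
        "t" | "testnet" => "https://api.testnet.solana.com".to_string(),
        "d" | "devnet" => "https://api.devnet.solana.com".to_string(),
        "l" | "localhost" => "http://localhost:8899".to_string(),
        url => url.to_string(),
    }
}

fn main() {
    assert_eq!(normalize_to_url("d"), "https://api.devnet.solana.com");
    assert_eq!(normalize_to_url("http://127.0.0.1:8899"), "http://127.0.0.1:8899");
}
```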
mergify[bot]
a1e06df4a8 Add validator --account-index docs (#14418) (#14428)
(cherry picked from commit efd9b769fc)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2021-01-05 03:06:15 +00:00
mergify[bot]
1f2480fd9f Fix pre-merge old name in the docs (#14425) (#14427)
(cherry picked from commit 974eb6e1ef)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-05 02:55:02 +00:00
mergify[bot]
8587bd0d69 Improve solana catchup (#14313) (#14424)
* Improve solana catchup

* Overridable port, retry, args error clean up

* print cleanup

* Reduce diff

* Tweak warns a bit

(cherry picked from commit aa4da339ff)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2021-01-05 02:34:53 +00:00
mergify[bot]
0063a58e95 Upgradeable programs needs program account's address as program id (#14417) (#14420)
(cherry picked from commit 0619805806)

Co-authored-by: Jack May <jack@solana.com>
2021-01-04 23:00:36 +00:00
mergify[bot]
9aeb3bc5d6 docs: Use "msg!" instead of "info!" (#14411) (#14416)
* docs: Use "msg!" instead of "info!"

* Update docs/src/developing/deployed-programs/developing-rust.md

Co-authored-by: Michael Vines <mvines@gmail.com>

* Fix typo / format

Co-authored-by: Michael Vines <mvines@gmail.com>
(cherry picked from commit a41b5137f6)

Co-authored-by: Jon Cinque <jon.cinque@gmail.com>
2021-01-04 20:02:09 +00:00
Michael Vines
97665b977e Bump version to v1.5.2 2021-01-04 06:44:52 +00:00
Michael Vines
c45ed29cf4 Use max commitment when fetching epoch info for block production
(cherry picked from commit 2724f37d0e)
2021-01-03 21:05:13 -08:00
mergify[bot]
635afbabff snapshot_utils: Don't bother restoring snapshots, they're never used (bp #14392) (#14396)
* Remove dead code

(cherry picked from commit b6dcdb90e8)

* Don't bother restoring snapshots, they're never used

(cherry picked from commit db6ee289c9)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-03 05:20:26 +00:00
mergify[bot]
98afdad1dd Tune rewards output (#14395)
(cherry picked from commit 560ed90168)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-03 02:39:35 +00:00
mergify[bot]
c085b94b43 docs: Update tmpfs partition guidance to include swap (bp #14387) (#14397)
* Update tmpfs partition guidance to include swap

(cherry picked from commit 68a84cf581)

* Update docs/src/running-validator/validator-start.md

Co-authored-by: Trent Nelson <trent.a.b.nelson@gmail.com>
(cherry picked from commit 9bb08ce75e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-03 01:45:07 +00:00
Michael Vines
a53946c485 Use singleGossip for program deployment
(cherry picked from commit c63e14dd0e)
2021-01-02 09:21:36 -08:00
mergify[bot]
f6de92c346 Add secondary indexes (#14212) (#14382)
(cherry picked from commit 5affd8aa72)

Co-authored-by: carllin <wumu727@gmail.com>
2021-01-01 07:42:47 +00:00
mergify[bot]
46f9822d62 Only initialize BigTable upload service when requested (#14380)
(cherry picked from commit 4a3d217839)

Co-authored-by: Michael Vines <mvines@gmail.com>
2021-01-01 03:06:34 +00:00
mergify[bot]
6dad84d228 Add --ignore-http-bad-gateway flag (#14377)
(cherry picked from commit 6c167615ad)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-31 22:00:29 +00:00
mergify[bot]
3582607aa0 solana-test-validator: bind RPC and faucet to 0.0.0.0 (bp #14369) (#14370)
* Minor help improvements

(cherry picked from commit 04bf5ce830)

* Bind RPC and faucet to 0.0.0.0

(cherry picked from commit 0b23abd479)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-31 09:10:07 +00:00
Michael Vines
f051077350 Require tokio 0.3.5 2020-12-30 22:25:23 -08:00
Michael Vines
ffd6f3e6bf Revert "Upgrade in-tree tokio 0.2 usage to tokio 0.3 (#14326)"
This reverts commit 6c5be574c8.
2020-12-30 22:25:23 -08:00
mergify[bot]
c6b2eb07ee Gate CPI authorized programs (#14361) (#14365)
(cherry picked from commit 2d8dacb72b)

Co-authored-by: Jack May <jack@solana.com>
2020-12-31 03:29:46 +00:00
mergify[bot]
7a3e1f9826 Remove assert (#14356) (#14360)
(cherry picked from commit 1c5427ff17)

Co-authored-by: Jack May <jack@solana.com>
2020-12-30 22:39:55 +00:00
mergify[bot]
8a690b6cf7 nit: clarify loader id (#14355) (#14358)
(cherry picked from commit 6c6095abe7)

Co-authored-by: Jack May <jack@solana.com>
2020-12-30 21:25:41 +00:00
mergify[bot]
8688efa89b Speed up UDP reachable port checks (#14351)
(cherry picked from commit 71b88da48e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-30 19:00:28 +00:00
mergify[bot]
3b047e5b99 Port ip-echo-server to tokio 0.3 (bp #14345) (#14350)
* Port ip-echo-server to tokio 0.3

(cherry picked from commit fb6c660cfd)

# Conflicts:
#	net-utils/Cargo.toml

* Update Cargo.toml

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-30 18:55:24 +00:00
mergify[bot]
3cddc731b2 Add --test arg to cargo-test-bpf (#14342) (#14344)
(cherry picked from commit 3d0cd2cdb0)

Co-authored-by: Justin Starry <justin@solana.com>
2020-12-30 07:55:38 +00:00
mergify[bot]
1d29a583c6 Rewrite faucet with tokio v0.3 (bp #14336) (#14343)
* Rewrite faucet with tokio v0.3 (#14336)

* Rewrite faucet for contemporary tokio

* Move away from framed decoder

(cherry picked from commit d63dd95806)

# Conflicts:
#	faucet/Cargo.toml

* Fix conflicts

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
2020-12-30 05:09:00 +00:00
mergify[bot]
b5335edb35 Add experimental knob for tuning PoH pinned CPU core (bp #14330) (#14341)
* core: Update stale error message

(cherry picked from commit 82f61c0c4a)

* validator: Add experimental flag to select PoH pinned core

(cherry picked from commit fe667db910)

Co-authored-by: Trent Nelson <trent@solana.com>
2020-12-30 03:33:39 +00:00
mergify[bot]
abee1e83eb Add poh speed check and tick speed calibration (#14292) (#14328)
(cherry picked from commit 2074e407cd)

Co-authored-by: sakridge <sakridge@gmail.com>
2020-12-29 19:40:49 +00:00
mergify[bot]
6c5be574c8 Upgrade in-tree tokio 0.2 usage to tokio 0.3 (#14326)
(cherry picked from commit 444ed768dc)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-29 19:03:18 +00:00
mergify[bot]
c1f993d2fc Retry durable-nonce transactions (#14308) (#14325)
* Retry durable-nonce transactions

* Add metric to track durable-nonce txs in queue

* Populate send-tx-service initial addresses with tpu_address if empty (primarily for testing)

* Reinstate last_valid_slot check for durable-nonce txs; use arbitrary future slot

(cherry picked from commit 3f10fb993b)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2020-12-29 18:03:04 +00:00
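Background for the last bullet above: a durable-nonce transaction never expires with its blockhash, so the retry queue needs an artificial horizon. A hedged sketch of that decision logic; all names and the horizon value are hypothetical:

```rust
/// Arbitrary far-future horizon for durable-nonce transactions, which
/// never expire with a blockhash. Illustrative value.
const DURABLE_NONCE_RETRY_SLOTS: u64 = 10_000;

enum Disposition {
    Done,
    Retry,
    Drop,
}

/// Pick the slot after which a queued transaction is abandoned.
fn last_valid_slot_for(durable_nonce: bool, blockhash_expiry: u64, current_slot: u64) -> u64 {
    if durable_nonce {
        current_slot + DURABLE_NONCE_RETRY_SLOTS
    } else {
        blockhash_expiry
    }
}

/// Per-pass decision for a queued transaction.
fn disposition(confirmed: bool, current_slot: u64, last_valid_slot: u64) -> Disposition {
    if confirmed {
        Disposition::Done
    } else if current_slot > last_valid_slot {
        Disposition::Drop
    } else {
        Disposition::Retry
    }
}

fn main() {
    let horizon = last_valid_slot_for(true, 150, 100);
    assert!(matches!(disposition(false, 120, horizon), Disposition::Retry));
}
```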
mergify[bot]
e2ddb2f0ea Limit CPI instruction size (#14317) (#14321)
(cherry picked from commit 5524938a50)

Co-authored-by: Jack May <jack@solana.com>
2020-12-29 02:38:22 +00:00
mergify[bot]
f3faba5ca9 Remove Testnet-specific old code (#14305) (#14315)
(cherry picked from commit 7893e2e307)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2020-12-28 22:18:33 +00:00
mergify[bot]
3a6fd91739 Log error from AppendVec removal & a panic clean (#14302) (#14310)
(cherry picked from commit addffd7694)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2020-12-28 22:08:22 +00:00
mergify[bot]
2d2b3d8287 CLI: Support retrieving past leader schedules (bp #14304) (#14312)
* clap-utils: Add epoch validator

(cherry picked from commit a709850ee4)

* CLI: Support displaying past leader schedules

(cherry picked from commit bd761e2a52)

Co-authored-by: Trent Nelson <trent@solana.com>
2020-12-28 21:41:55 +00:00
mergify[bot]
6e47b88399 run.sh: add env knob for solana-validator (#14303) (#14307)
(cherry picked from commit 4af33674a7)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2020-12-28 20:48:03 +00:00
Michael Vines
941e56c6c7 Avoid creating "..tmp" files 2020-12-28 08:58:41 -08:00
Michael Vines
d1adc2a446 Persist gossip contact info
(cherry picked from commit 9ddd6f08e8)
2020-12-27 22:09:00 -08:00
Michael Vines
02da7dfedf Bump version to v1.5.1 2020-12-27 21:57:43 -08:00
mergify[bot]
eb0fd3625a Fix subtraction overflow in metrics (#14290) (#14296)
(cherry picked from commit c693ffaa08)

Co-authored-by: sakridge <sakridge@gmail.com>
2020-12-28 02:34:58 +00:00
mergify[bot]
b87e606626 Fix download speed (#14291) (#14295)
(cherry picked from commit 7b49c85aa7)

Co-authored-by: sakridge <sakridge@gmail.com>
2020-12-28 02:21:40 +00:00
mergify[bot]
1c91376f78 obtains staked-nodes from the root-bank (#14257) (#14293)
... as opposed to the working bank

(cherry picked from commit 49019c6613)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2020-12-27 14:49:29 +00:00
mergify[bot]
10067ad07b indexes votes in crds table (#14272) (#14294)
(cherry picked from commit 2fd38d9912)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2020-12-27 14:49:23 +00:00
Michael Vines
eb76289107 Fix windows build 2020-12-24 17:49:41 -08:00
Michael Vines
8926736e1c Remove stray dbg 2020-12-24 10:45:34 -08:00
mergify[bot]
bf4c169703 Prevent bpf loader impersonators (#14278) (#14279)
(cherry picked from commit ee0a80a092)

Co-authored-by: Jack May <jack@solana.com>
2020-12-24 04:24:30 +00:00
mergify[bot]
0020e43476 Don't use caller passed executable account (#14276) (#14277)
(cherry picked from commit b1d702a618)

Co-authored-by: Jack May <jack@solana.com>
2020-12-23 23:52:04 +00:00
mergify[bot]
a9a2c76221 Limit CPI from calling loader or native programs (#14252) (#14275)
(cherry picked from commit 0b479ab180)

Co-authored-by: Jack May <jack@solana.com>
2020-12-23 20:01:56 +00:00
mergify[bot]
4754b4e871 Save cloning program account data (#14251) (#14274)
(cherry picked from commit 5945305b1d)

Co-authored-by: Jack May <jack@solana.com>
2020-12-23 19:35:09 +00:00
mergify[bot]
52ffb9a64a Add accounts shrink paths (bp #14238) (#14270)
* Add shrink paths (#14238)


(cherry picked from commit baa9602411)

* Ignore long/hanging test (#14261)

Co-authored-by: sakridge <sakridge@gmail.com>
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2020-12-23 08:03:33 +00:00
Trent Nelson
bd0b1503c6 Deinitialize stake data upon zero balance 2020-12-23 06:17:59 +00:00
Trent Nelson
10e7fa40ac Deinitialize vote data upon zero balance 2020-12-23 06:17:59 +00:00
Trent Nelson
198ed407b7 vote: Add helper for creating current-versioned states 2020-12-23 06:17:59 +00:00
Trent Nelson
d96af2dd23 Deinitialize nonce data upon zero balance 2020-12-23 06:17:59 +00:00
mergify[bot]
192cca8f98 validator: Multiple --entrypoint support (bp #14256) (#14264)
* Update entrypoint contact info even when shred version adoption is not requested

(cherry picked from commit 3373082ffa)

* Multiple entrypoint support

(cherry picked from commit ace360ade2)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-23 04:15:44 +00:00
Michael Vines
ee716e1c55 Add log message for when a local snapshot is too old
(cherry picked from commit 65dcb3dc81)
2020-12-22 19:58:29 -08:00
mergify[bot]
6dd3c7c2dd removes &Arc<Self> receivers (#14234) (#14262)
(cherry picked from commit a14cfd660a)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2020-12-23 02:11:08 +00:00
mergify[bot]
582b4c9edf Upgradeable programs called same as non-upgradeable (#14239) (#14254)
* Upgradeable programs called same as non-upgradeable

* nudge

(cherry picked from commit ab205b682a)

Co-authored-by: Jack May <jack@solana.com>
2020-12-22 21:17:18 +00:00
mergify[bot]
f15add2a74 Feature-gate stake-program-v3 (#14232) (#14250)
* Remove deprecated legacy stake program

* Add legacy stake program

* Strip out duplicative legacy code

* Feature-deploy stake-program-v3

* Add ownership check in stake processor

(cherry picked from commit 7042f11791)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2020-12-22 19:42:30 +00:00
mergify[bot]
74d48910e2 Rework upgradeable loader cli (#14209) (#14236)
(cherry picked from commit 3316e7166c)

Co-authored-by: Jack May <jack@solana.com>
2020-12-21 22:26:11 +00:00
mergify[bot]
c53e8ee3ad improves performance in replay-stage (#14217) (#14233)
bank::vote_accounts returns a hash-map, which is slow to iterate, but all uses
only require an iterator:
https://github.com/solana-labs/solana/blob/b3dc98856/runtime/src/bank.rs#L4300-L4306
Similarly, calculate_stake_weighted_timestamp takes a hash-map whereas it only
requires an iterator:
https://github.com/solana-labs/solana/blob/b3dc98856/sdk/src/stake_weighted_timestamp.rs#L21-L28

(cherry picked from commit 7b08cb1f0d)

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2020-12-21 21:23:35 +00:00
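The pattern behind the fix above is a signature change: callers that only iterate should not force a HashMap to be built or walked. A generic sketch of the before/after, not the bank code itself:

```rust
use std::collections::HashMap;

/// Before: forces every caller through a HashMap, which is slow to iterate.
fn total_stake_map(vote_accounts: &HashMap<String, (u64, String)>) -> u64 {
    vote_accounts.values().map(|(stake, _)| stake).sum()
}

/// After: accept any iterator of (stake, account) pairs. HashMap callers
/// still work, but slices do too, and no map has to be materialized.
fn total_stake<'a>(vote_accounts: impl Iterator<Item = &'a (u64, String)>) -> u64 {
    vote_accounts.map(|(stake, _)| stake).sum()
}

fn main() {
    let pairs = vec![(5u64, "a".to_string()), (7, "b".to_string())];
    assert_eq!(total_stake(pairs.iter()), 12);
    let map: HashMap<String, (u64, String)> =
        pairs.iter().cloned().map(|p| (p.1.clone(), p)).collect();
    assert_eq!(total_stake_map(&map), 12);
}
```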
Michael Vines
c5e5fedc47 Allow multiple --accounts arguments
(cherry picked from commit 8082a2454c)
2020-12-21 11:43:04 -08:00
Tyera Eulberg
b9929dcd67 Warp-timestamp pr# 2020-12-21 10:53:43 -07:00
sakridge
554a158443 Fix test_max_hashes (#14189)
(cherry picked from commit a5db6399ad)
2020-12-21 09:05:26 -08:00
behzad nouri
b7fa4b7ee1 caches staked nodes computed from vote-accounts (#13929)
(cherry picked from commit d6d76219b6)
2020-12-21 09:05:17 -08:00
behzad nouri
fd44cee8cc limits number of crds values returned when responding to pull requests (#13739)
Crds values buffered when responding to pull-requests can be very large, taking a lot of memory.
Added a limit on the number of buffered crds values based on the outbound data budget.

(cherry picked from commit 691031fefd)
2020-12-21 09:04:50 -08:00
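A sketch of the budgeting idea above: stop buffering serialized crds values once the outbound data budget is spent, instead of collecting every match. The helper is hypothetical:

```rust
/// Buffer serialized crds values only while they fit in the remaining
/// outbound data budget, instead of collecting every match.
fn take_within_budget(values: Vec<Vec<u8>>, mut budget_bytes: usize) -> Vec<Vec<u8>> {
    values
        .into_iter()
        .take_while(|value| {
            if value.len() <= budget_bytes {
                budget_bytes -= value.len();
                true
            } else {
                false
            }
        })
        .collect()
}

fn main() {
    let values = vec![vec![0u8; 400], vec![0u8; 400], vec![0u8; 400]];
    // With a 1000-byte budget only the first two values are buffered.
    assert_eq!(take_within_budget(values, 1000).len(), 2);
}
```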
mergify[bot]
c6a362cce2 Do not delete ALL other snapshots before downloading a new snapshot (#14227)
(cherry picked from commit 93ae177503)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-21 10:27:25 +00:00
mergify[bot]
252180c244 Restore Content-Length header for streaming snapshot download (#14222)
(cherry picked from commit 57b03c5bc1)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-21 08:41:02 +00:00
mergify[bot]
e02b4e698e Fix timestamp handling on ledger warp (#14210) (#14218)
* Reset timestamp for slot and epoch-start on warp

* Fix genesis timestamp metric source

* Remove check that timestamp > unix_timestamp_from_genesis

Default to previous timestamp, not genesis timestamp

* Move timestamp metrics to report even on warp

* Initialize slot 0 timestamps correctly

* Add feature gate to warp testnet timestamp

* Review suggestion: simplify warp-timestamp slot check

(cherry picked from commit e15f95a36f)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2020-12-20 22:52:23 +00:00
mergify[bot]
4811afe8eb Stream RPC snapshot downloads (bp #14213) (#14215)
* Stream RPC snapshot downloads

(cherry picked from commit b3dc988564)

# Conflicts:
#	core/Cargo.toml

* Update Cargo.toml

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-20 01:28:41 +00:00
Michael Vines
bc4568b10f Update Cargo.toml 2020-12-18 20:16:48 -08:00
Michael Vines
d59c131e90 Create a random -keypair.json file alongside the program deploy artifact for easy upgrades
(cherry picked from commit 636a455790)
2020-12-18 20:16:48 -08:00
Michael Vines
825027f9f7 Use AsRef
(cherry picked from commit 9993d2c623)
2020-12-18 20:16:48 -08:00
mergify[bot]
9b8f0bee99 adds crds-value for broadcasting duplicate shreds through gossip (bp #14133) (#14203)
* adds crds-value for broadcasting duplicate shreds through gossip (#14133)

In gossip, the header overhead we get from:
https://github.com/solana-labs/solana/blob/de9ac43eb/core/src/cluster_info.rs#L434-L435
https://github.com/solana-labs/solana/blob/de9ac43eb/core/src/crds_value.rs#L31-L36
https://github.com/solana-labs/solana/blob/de9ac43eb/core/src/crds_value.rs#L73
already exceeds SIZE_OF_NONCE in shreds. We also need additional
metadata (wallclock, source pubkey, ...), which means that given the
SHRED_PAYLOAD_SIZE, we cannot fit all these in PACKET_DATA_SIZE:
https://github.com/solana-labs/solana/blob/de9ac43eb/ledger/src/shred.rs#L80

On top of that, we need 2 shred payloads as the proof of duplicate. So
each DuplicateShred crds value includes only a chunk of the payload,
along with the meta-data to reconstruct the full payload from the chunks
on the receiving end.

(cherry picked from commit 6a3797e164)

# Conflicts:
#	Cargo.lock
#	ledger/Cargo.toml

* removes backport merge conflicts

Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2020-12-18 22:54:50 +00:00
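A sketch of the chunked value described above: each gossip entry carries one slice of the proof plus enough metadata to reassemble it. Field names loosely follow the description; this is not the actual crds_value definition:

```rust
/// One gossip-sized chunk of a duplicate-shred proof. The full proof
/// (two conflicting shred payloads) is reassembled on the receiving end
/// from `num_chunks` values sharing the same (from, slot, shred_index).
struct DuplicateShred {
    from: [u8; 32], // source pubkey, zeroed here as a placeholder
    wallclock: u64,
    slot: u64,
    shred_index: u32,
    num_chunks: u8,
    chunk_index: u8,
    chunk: Vec<u8>, // slice of the concatenated proof payloads
}

/// Split a serialized proof into gossip-sized chunks.
fn into_chunks(proof: &[u8], chunk_size: usize, slot: u64, shred_index: u32) -> Vec<DuplicateShred> {
    let num_chunks = proof.chunks(chunk_size).count() as u8;
    proof
        .chunks(chunk_size)
        .enumerate()
        .map(|(i, chunk)| DuplicateShred {
            from: [0; 32],
            wallclock: 0,
            slot,
            shred_index,
            num_chunks,
            chunk_index: i as u8,
            chunk: chunk.to_vec(),
        })
        .collect()
}

fn main() {
    // A 2000-byte proof at 512 bytes per chunk yields 4 crds values.
    let chunks = into_chunks(&[1u8; 2000], 512, 42, 7);
    assert_eq!(chunks.len(), 4);
    assert!(chunks.iter().all(|c| c.num_chunks == 4));
}
```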
mergify[bot]
fc13c1d654 getBlockTime RPC method now falls back to BigTable in all cases (#14207)
(cherry picked from commit 0090106f60)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-18 22:23:35 +00:00
mergify[bot]
a57758e9c9 Add CPI support for upgradeable loader (bp #14193) (#14199) 2020-12-18 11:23:00 -08:00
Michael Vines
564590462a Add transactionCount field to GetEpochInfo
(cherry picked from commit efc091e28a)
2020-12-18 10:09:30 -08:00
Michael Vines
269f6af97e fix: add transactionCount field to GetEpochInfo
(cherry picked from commit 01fe835e73)
2020-12-18 10:09:30 -08:00
mergify[bot]
57b8a59632 Reject invalid --expected-shred-version (#14183) (#14202)
* Reject invalid --expected-shred-version

* less code

(cherry picked from commit 3c9b853268)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2020-12-18 19:19:57 +09:00
Trent Nelson
4289f52d2b net/gce.sh: Upgrade to Ubuntu 20.04
(cherry picked from commit 3322b83183)
2020-12-17 18:17:01 -07:00
Trent Nelson
573f68620b net/gce.sh: Switch to SSD boot disks
(cherry picked from commit a0507505f4)
2020-12-17 18:17:01 -07:00
Trent Nelson
4bfe64688b net/gce.sh: Bump machine type to 24-core, 64GB RAM
(cherry picked from commit ffe0532ded)
2020-12-17 18:17:01 -07:00
mergify[bot]
50034848a5 Improved Transaction Forwarding (#13944) (#14195)
* Forwarding

* Dedupe leaders

* Use consistent commitment for last_valid_slot in rpc send_transaction

* Plumb rpc send_transaction options into solana-validator

* Extend num slots banking-stage holds forwarded txs

Co-authored-by: Tyera Eulberg <tyera@solana.com>
(cherry picked from commit da7d1e2302)

Co-authored-by: sakridge <sakridge@gmail.com>
2020-12-17 18:14:06 -07:00
Michael Vines
981294cbc6 Don't require increased open file limit at ledger creation
Follow-up to 0b92720fdb: `create_new_ledger()` does not require a higher fd limit
2020-12-17 08:49:23 -08:00
mergify[bot]
ff728e5e56 Fix program account rent exemption (#14176) (#14180)
(cherry picked from commit 593ad80954)

Co-authored-by: Jack May <jack@solana.com>
2020-12-17 03:46:43 -08:00
Michael Vines
9aaf41bef2 Don't require increased open file limit in solana-test-validator
Travis CI in particular does not allow the open file limit to be
increased.

(cherry picked from commit 0b92720fdb)
2020-12-16 22:59:56 -08:00
Michael Vines
271eec656c Use an ephemeral mint address if the client keypair is not available
Typically this can occur in a CI environment

(cherry picked from commit 8d700c3b94)
2020-12-16 22:59:56 -08:00
Trent Nelson
13d071607f Revert "Ignore RUSTSEC-2020-0077 until next 1.4 release"
This reverts commit 1792100e2b.
2020-12-17 01:54:26 +00:00
Trent Nelson
ffe35d9a10 Bump SPL crates 2020-12-17 01:54:26 +00:00
mergify[bot]
bb2fb07b39 Add blockstore skipped api (#14145) (#14167)
* Add blockstore api to determine if a slot was skipped

* Return custom rpc error if slot is skipped

(cherry picked from commit ac0d32bc7e)

Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
2020-12-16 22:26:54 +00:00
mergify[bot]
85fc51dc61 fix formatting error in docs (#14163)
(cherry picked from commit 41a93ced23)

Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
2020-12-16 18:51:24 +00:00
mergify[bot]
0276b6c4c2 Correctly show reward percent changes (#14161)
(cherry picked from commit bebfa6e93c)

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2020-12-16 18:15:36 +00:00
mergify[bot]
c481e4fe7f Partial shred deserialize cleanup and shred type differentiation (#14094) (#14139)
* Partial shred deserialize cleanup and shred type differentiation in retransmit

* consolidate packet hashing logic

(cherry picked from commit d4a174fb7c)

Co-authored-by: sakridge <sakridge@gmail.com>
2020-12-16 08:57:21 -08:00
mergify[bot]
76a3b3ad11 Remove lock files from programs/bpf/rust (#14148) (#14158)
(cherry picked from commit 49c3f14016)

Co-authored-by: Jack May <jack@solana.com>
2020-12-16 11:56:48 +00:00
mergify[bot]
356c663e88 check for resize access violations (#14142) (#14152)
(cherry picked from commit 025f886e10)

Co-authored-by: Jack May <jack@solana.com>
2020-12-16 10:28:27 +00:00
mergify[bot]
015bbc1e12 Fix up upgradeable bpf loader activation (#14149)
(cherry picked from commit 501fd83afd)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-16 07:54:44 +00:00
mergify[bot]
454a9f3175 Switch solana deploy commitment default from "max" to "singleGossip" (#14146)
(cherry picked from commit db4ac17259)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-16 04:46:45 +00:00
mergify[bot]
485b3d64a1 Add Program loader/environment instruction errors (#14120) (#14143)
(cherry picked from commit d513b0c4ca)

Co-authored-by: Jack May <jack@solana.com>
2020-12-16 03:50:04 +00:00
Michael Vines
5d170d83c0 Remove stray println 2020-12-15 16:44:56 -08:00
mergify[bot]
f54d8ea3ab solana-test-validator usability improvements (bp #14129) (#14136)
* Clean up Cargo.toml

(cherry picked from commit d2af09a647)

* Prevent multiple test-validators from using the same ledger directory

(cherry picked from commit f3272db7f7)

* Add --reset flag to allow for easy ledger reset

(cherry picked from commit 00c46c528e)

Co-authored-by: Michael Vines <mvines@gmail.com>
2020-12-15 23:21:21 +00:00
mergify[bot]
ef9f54b3d4 Fix race between setting tick height and calculating accounts hash (#14101) (#14132)
Co-authored-by: Carl Lin <carl@solana.com>
(cherry picked from commit 75e9e321de)

Co-authored-by: carllin <wumu727@gmail.com>
2020-12-15 22:05:44 +00:00
mergify[bot]
8d0b102b44 Cleanup ledger builtins (#14083) (#14130)
(cherry picked from commit 582418de5e)

Co-authored-by: Jack May <jack@solana.com>
2020-12-15 21:45:44 +00:00
1686 changed files with 129788 additions and 291786 deletions


@@ -36,7 +36,4 @@ export CARGO_TARGET_CACHE=$HOME/cargo-target-cache/"$CHANNEL"-"$BUILDKITE_LABEL"
# `std:
# "found possibly newer version of crate `std` which `xyz` depends on
rm -rf target/bpfel-unknown-unknown
if [[ $BUILDKITE_LABEL = "stable-perf" ]]; then
rm -rf target/release
fi
)


@@ -1,28 +0,0 @@
name: Explorer_build&test_on_PR
on:
pull_request:
branches:
- master
paths:
- 'explorer/**'
jobs:
check-explorer:
runs-on: ubuntu-latest
defaults:
run:
working-directory: explorer
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v2
with:
node-version: '14'
cache: 'npm'
cache-dependency-path: explorer/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run format
- run: npm run build
- run: npm run test


@@ -1,20 +0,0 @@
name : explorer_preview
on:
workflow_run:
workflows: ["Explorer_build&test_on_PR"]
types:
- completed
jobs:
explorer_preview:
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:
- uses: actions/checkout@v2
- uses: amondnet/vercel-action@v20
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }} # Required
github-token: ${{ secrets.GITHUB_TOKEN }} #Optional
vercel-org-id: ${{ secrets.ORG_ID}} #Required
vercel-project-id: ${{ secrets.PROJECT_ID}} #Required
working-directory: ./explorer
scope: ${{ secrets.TEAM_ID }}


@@ -1,46 +0,0 @@
name: Explorer_production_build&test
on:
push:
branches: [master]
paths:
- 'explorer/**'
jobs:
Explorer_production_build_test:
runs-on: ubuntu-latest
defaults:
run:
working-directory: explorer
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v2
with:
node-version: '14'
cache: 'npm'
cache-dependency-path: explorer/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run format
- run: npm run build
- run: npm run test
Explorer_production_deploy:
needs: Explorer_production_build_test
runs-on: ubuntu-latest
defaults:
run:
working-directory: explorer
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: amondnet/vercel-action@v20
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }} # Required
github-token: ${{ secrets.GITHUB_TOKEN }} #Optional
vercel-args: '--prod' #for production
vercel-org-id: ${{ secrets.ORG_ID}} #Required
vercel-project-id: ${{ secrets.PROJECT_ID}} #Required
working-directory: ./explorer
scope: ${{ secrets.TEAM_ID }}


@@ -1,208 +0,0 @@
name : minimal
on:
push:
branches: [master]
pull_request:
branches: [master]
jobs:
Export_Github_Repositories:
runs-on: ubuntu-latest
env:
VERCEL_TOKEN: ${{secrets.VERCEL_TOKEN}}
GITHUB_TOKEN: ${{secrets.PAT_ANM}}
COMMIT_RANGE: ${{ github.event.before}}...${{ github.event.after}}
steps:
- name: Checkout repo
uses: actions/checkout@v2
with:
fetch-depth: 2
- run: echo "COMMIT_DIFF_RANGE=$(echo $COMMIT_RANGE)" >> $GITHUB_ENV
# - run: echo "$COMMIT_DIFF_RANGE"
- name: Set up Python
uses: actions/setup-python@v2
with:
GITHUB_TOKEN: ${{secrets.PAT_ANM}}
if: ${{ github.event_name == 'push' && 'cron'&& github.ref == 'refs/heads/master'}}
- name: cmd
run : |
.travis/export-github-repo.sh web3.js/ solana-web3.js
macos-artifacts:
needs: [Export_Github_Repositories]
strategy:
fail-fast: false
runs-on: macos-latest
if : ${{ github.event_name == 'api' && 'cron' || 'push' || startsWith(github.ref, 'refs/tags/v')}}
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup | Rust
uses: ATiltedTree/setup-rust@v1
with:
rust-version: stable
- name: release artifact
run: |
source ci/rust-version.sh
brew install coreutils
export PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
greadlink -f .
source ci/env.sh
rustup set profile default
ci/publish-tarball.sh
shell: bash
- name: Cache modules
uses: actions/cache@master
id: yarn-cache
with:
path: node_modules
key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
restore-keys: ${{ runner.os }}-yarn-
# - To stop from uploading on the production
# - uses: ochanje210/simple-s3-upload-action@master
# with:
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY}}
# AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
# SOURCE_DIR: 'travis-s3-upload1'
# DEST_DIR: 'giitsol'
# - uses: ochanje210/simple-s3-upload-action@master
# with:
# AWS_ACCESS_KEY_ID: ${{ secrets.AWS_KEY_ID }}
# AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY}}
# AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
# SOURCE_DIR: './docs/'
# DEST_DIR: 'giitsol'
windows-artifact:
needs: [Export_Github_Repositories]
strategy:
fail-fast: false
runs-on: windows-latest
if : ${{ github.event_name == 'api' && 'cron' || 'push' || startsWith(github.ref, 'refs/tags/v')}}
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup | Rust
uses: ATiltedTree/setup-rust@v1
with:
rust-version: stable
release-artifact:
needs: windows-artifact
runs-on: windows-latest
if : ${{ github.event_name == 'api' && 'cron' || github.ref == 'refs/heads/master'}}
steps:
- name: release artifact
run: |
git clone git://git.openssl.org/openssl.git
cd openssl
make
make test
make install
openssl version
# choco install openssl
# vcpkg integrate install
# refreshenv
- name: Checkout repository
uses: actions/checkout@v2
- uses: actions/checkout@v2
- run: choco install msys2
- uses: actions/checkout@v2
- run: |
openssl version
bash ci/rust-version.sh
readlink -f .
bash ci/env.sh
rustup set profile default
bash ci/publish-tarball.sh
shell: bash
- name: Cache modules
uses: actions/cache@v1
id: yarn-cache
with:
path: node_modules
key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
restore-keys: ${{ runner.os }}-yarn-
# - To stop from uploading on the production
# - name: Config. aws cred
# uses: aws-actions/configure-aws-credentials@v1
# with:
# aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
# aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# aws-region: us-east-2
# - name: Deploy
# uses: shallwefootball/s3-upload-action@master
# with:
# folder: build
# aws_bucket: ${{ secrets.AWS_S3_BUCKET }}
# aws_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
# aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# destination_dir: /
# bucket-region: us-east-2
# delete-removed: true
# no-cache: true
# private: true
# Docs:
# needs: [windows-artifact,release-artifact]
# runs-on: ubuntu-latest
# env:
# GITHUB_TOKEN: ${{secrets.PAT_NEW}}
# GITHUB_EVENT_BEFORE: ${{ github.event.before }}
# GITHUB_EVENT_AFTER: ${{ github.event.after }}
# COMMIT_RANGE: ${{ github.event.before}}...${{ github.event.after}}
# steps:
# - name: Checkout repo
# uses: actions/checkout@v2
# with:
# fetch-depth: 2
# - name: docs
# if: ${{github.event_name == 'pull_request' || startsWith(github.ref, 'refs/tags/v')}}
# run: |
# touch .env
# echo "COMMIT_RANGE=($COMMIT_RANGE)" > .env
# source ci/env.sh
# .travis/channel_restriction.sh edge beta || exit 0
# .travis/affects.sh docs/ .travis || exit 0
# cd docs/
# source .travis/before_install.sh
# source .travis/script.sh
# - name: setup-node
# uses: actions/checkout@v2
# - name: setup-node
# uses: actions/setup-node@v2
# with:
# node-version: 'lts/*'
# - name: Cache
# uses: actions/cache@v1
# with:
# path: ~/.npm
# key: ${{ runner.OS }}-npm-cache-${{ hashFiles('**/package-lock.json') }}
# restore-keys: |
# ${{ runner.OS }}-npm-cache-2
# auto_bump:
# needs: [windows-artifact,release-artifact,Docs]
# runs-on: ubuntu-latest
# steps:
# - name : checkout repo
# uses: actions/checkout@v2
# with:
# fetch-depth: '0'
# - name: Bump version and push tag
# uses: anothrNick/github-tag-action@1.26.0
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# WITH_V: true
# DEFAULT_BUMP: patch


@@ -1,84 +0,0 @@
name: Web3
on:
push:
branches: [ master ]
paths:
- "web3.js/**"
pull_request:
branches: [ master ]
paths:
- "web3.js/**"
jobs:
# needed for grouping check-web3 strategies into one check for mergify
all-web3-checks:
runs-on: ubuntu-latest
needs:
- check-web3
steps:
- run: echo "Done"
web3-commit-lint:
runs-on: ubuntu-latest
# Set to true in order to avoid cancelling other workflow jobs.
# Mergify will still require web3-commit-lint for automerge
continue-on-error: true
defaults:
run:
working-directory: web3.js
steps:
- uses: actions/checkout@v2
with:
# maybe needed for base sha below
fetch-depth: 0
- uses: actions/setup-node@v2
with:
node-version: '16'
cache: 'npm'
cache-dependency-path: web3.js/package-lock.json
- run: npm ci
- name: commit-lint
if: ${{ github.event_name == 'pull_request' }}
run: bash commitlint.sh
env:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
check-web3:
runs-on: ubuntu-latest
defaults:
run:
working-directory: web3.js
strategy:
matrix:
node: [ '12', '14', '16' ]
name: Node ${{ matrix.node }}
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: ${{ matrix.node }}
cache: 'npm'
cache-dependency-path: web3.js/package-lock.json
- run: npm i -g npm@7
- run: npm ci
- run: npm run lint
- run: |
npm run build
ls -l lib
test -r lib/index.iife.js
test -r lib/index.cjs.js
test -r lib/index.esm.js
- run: npm run doc
- run: npm run codecov
- run: |
sh -c "$(curl -sSfL https://release.solana.com/edge/install)"
echo "$HOME/.local/share/solana/install/active_release/bin" >> $GITHUB_PATH
PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
solana --version
- run: npm run test:live-with-test-validator

.gitignore vendored

@@ -4,7 +4,6 @@
/solana-metrics/
/solana-metrics.tar.bz2
/target/
/test-ledger/
**/*.rs.bk
.cargo
@@ -28,5 +27,3 @@ log-*/
/spl_*.so
.DS_Store
# scripts that may be generated by cargo *-bpf commands
**/cargo-*-bpf-child-script-*.sh


@@ -4,71 +4,24 @@
#
# https://doc.mergify.io/
pull_request_rules:
- name: label changes from community
conditions:
- author≠@core-contributors
- author≠mergify[bot]
- author≠dependabot[bot]
actions:
label:
add:
- community
- name: request review for community changes
conditions:
- author≠@core-contributors
- author≠mergify[bot]
- author≠dependabot[bot]
# Only request reviews from the pr subscribers group if no one
# has reviewed the community PR yet. These checks only match
# reviewers with admin, write or maintain permission on the repository.
- "#approved-reviews-by=0"
- "#commented-reviews-by=0"
- "#changes-requested-reviews-by=0"
actions:
request_reviews:
teams:
- "@solana-labs/community-pr-subscribers"
- name: automatic merge (squash) on CI success
conditions:
- and:
- status-success=buildkite/solana
- status-success=ci-gate
- label=automerge
- author≠@dont-squash-my-commits
- or:
# only require travis success if docs files changed
- status-success=Travis CI - Pull Request
- -files~=^docs/
- or:
# only require explorer checks if explorer files changed
- status-success=check-explorer
- -files~=^explorer/
- or:
- and:
- status-success=all-web3-checks
- status-success=web3-commit-lint
# only require web3 checks if web3.js files changed
- -files~=^web3.js/
- status-success=buildkite/solana
- status-success=Travis CI - Pull Request
- status-success=ci-gate
- label=automerge
- author≠@dont-squash-my-commits
actions:
merge:
method: squash
# Join the dont-squash-my-commits group if you won't like your commits squashed
- name: automatic merge (rebase) on CI success
conditions:
- and:
- status-success=buildkite/solana
- status-success=Travis CI - Pull Request
- status-success=ci-gate
- label=automerge
- author=@dont-squash-my-commits
- or:
# only require explorer checks if explorer files changed
- status-success=check-explorer
- -files~=^explorer/
- or:
# only require web3 checks if web3.js files changed
- status-success=all-web3-checks
- -files~=^web3.js/
- status-success=buildkite/solana
- status-success=Travis CI - Pull Request
- status-success=ci-gate
- label=automerge
- author=@dont-squash-my-commits
actions:
merge:
method: rebase
@@ -97,19 +50,35 @@ pull_request_rules:
label:
add:
- automerge
- name: v1.8 backport
- name: v1.3 backport
conditions:
- label=v1.8
- label=v1.3
actions:
backport:
ignore_conflicts: true
branches:
- v1.8
- name: v1.9 backport
- v1.3
- name: v1.4 backport
conditions:
- label=v1.9
- label=v1.4
actions:
backport:
ignore_conflicts: true
branches:
- v1.9
- v1.4
- name: v1.5 backport
conditions:
- label=v1.5
actions:
backport:
ignore_conflicts: true
branches:
- v1.5
- name: v1.6 backport
conditions:
- label=v1.6
actions:
backport:
ignore_conflicts: true
branches:
- v1.6


@@ -23,12 +23,12 @@ jobs:
depth: false
script:
- .travis/export-github-repo.sh web3.js/ solana-web3.js
- .travis/export-github-repo.sh explorer/ explorer
- &release-artifacts
if: type IN (api, cron) OR tag IS present
name: "macOS release artifacts"
os: osx
osx_image: xcode12
language: rust
rust:
- stable
@@ -36,12 +36,8 @@ jobs:
- source ci/rust-version.sh
- PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
- readlink -f .
- brew install gnu-tar
- PATH="/usr/local/opt/gnu-tar/libexec/gnubin:$PATH"
- tar --version
script:
- source ci/env.sh
- rustup set profile default
- ci/publish-tarball.sh
deploy:
- provider: s3
@@ -64,12 +60,6 @@ jobs:
- <<: *release-artifacts
name: "Windows release artifacts"
os: windows
install:
- choco install openssl
- export OPENSSL_DIR="C:\Program Files\OpenSSL-Win64"
- source ci/rust-version.sh
- PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
- readlink -f .
# Linux release artifacts are still built by ci/buildkite-secondary.yml
#- <<: *release-artifacts
# name: "Linux release artifacts"
@@ -77,12 +67,56 @@ jobs:
# before_install:
# - sudo apt-get install libssl-dev libudev-dev
# explorer pull request
- name: "explorer"
if: type = pull_request AND branch = master
language: node_js
node_js:
- "node"
cache:
directories:
- ~/.npm
before_install:
- .travis/affects.sh explorer/ .travis || travis_terminate 0
- cd explorer
script:
- npm run build
- npm run format
# web3.js pull request
- name: "web3.js"
if: type = pull_request AND branch = master
language: node_js
node_js:
- "lts/*"
services:
- docker
cache:
directories:
- ~/.npm
before_install:
- .travis/affects.sh web3.js/ .travis || travis_terminate 0
- cd web3.js/
- source .travis/before_install.sh
script:
- ../.travis/commitlint.sh
- source .travis/script.sh
# docs pull request
- name: "docs"
if: type IN (push, pull_request) OR tag IS present
language: node_js
node_js:
- "lts/*"
- "node"
services:
- docker


@@ -20,13 +20,13 @@ if [[ ! -f "$basedir"/commitlint.config.js ]]; then
exit 1
fi
if [[ -z $COMMIT_RANGE ]]; then
echo "Error: COMMIT_RANGE not defined"
if [[ -z $TRAVIS_COMMIT_RANGE ]]; then
echo "Error: TRAVIS_COMMIT_RANGE not defined"
exit 1
fi
cd "$basedir"
echo "Checking commits in COMMIT_RANGE: $COMMIT_RANGE"
echo "Checking commits in TRAVIS_COMMIT_RANGE: $TRAVIS_COMMIT_RANGE"
while IFS= read -r line; do
echo "$line" | npx commitlint
done < <(git log "$COMMIT_RANGE" --format=%s -- .)
done < <(git log "$TRAVIS_COMMIT_RANGE" --format=%s -- .)

Cargo.lock generated

File diff suppressed because it is too large


@@ -1,9 +1,6 @@
[workspace]
members = [
"accountsdb-plugin-interface",
"accountsdb-plugin-manager",
"accountsdb-plugin-postgres",
"accounts-cluster-bench",
"bench-exchange",
"bench-streamer",
"bench-tps",
"accounts-bench",
@@ -11,7 +8,6 @@ members = [
"banks-client",
"banks-interface",
"banks-server",
"bucket_map",
"clap-utils",
"cli-config",
"cli-output",
@@ -19,13 +15,11 @@ members = [
"core",
"dos",
"download-utils",
"entry",
"faucet",
"frozen-abi",
"perf",
"validator",
"genesis",
"genesis-utils",
"gossip",
"install",
"keygen",
@@ -36,6 +30,7 @@ members = [
"log-analyzer",
"merkle-root-bench",
"merkle-tree",
"stake-o-matic",
"storage-bigtable",
"storage-proto",
"streamer",
@@ -43,30 +38,31 @@ members = [
"metrics",
"net-shaper",
"notifier",
"poh",
"poh-bench",
"program-test",
"programs/address-lookup-table",
"programs/address-lookup-table-tests",
"programs/secp256k1",
"programs/bpf_loader",
"programs/bpf_loader/gen-syscall-list",
"programs/compute-budget",
"programs/budget",
"programs/config",
"programs/exchange",
"programs/failure",
"programs/noop",
"programs/ownable",
"programs/stake",
"programs/vest",
"programs/vote",
"rbpf-cli",
"remote-wallet",
"rpc",
"ramp-tps",
"runtime",
"runtime/store-tool",
"sdk",
"sdk/cargo-build-bpf",
"sdk/cargo-test-bpf",
"send-transaction-service",
"scripts",
"stake-accounts",
"stake-monitor",
"sys-tuner",
"tokens",
"transaction-dos",
"transaction-status",
"account-decoder",
"upload-perf",
@@ -75,18 +71,8 @@ members = [
"cli",
"rayon-threadlimit",
"watchtower",
"replica-node",
"replica-lib",
"test-validator",
"rpc-test",
"client-test",
]
exclude = [
"programs/bpf",
]
# TODO: Remove once the "simd-accel" feature from the reed-solomon-erasure
# dependency is supported on Apple M1. v2 of the feature resolver is needed to
# specify arch-specific features.
resolver = "2"


@@ -1,6 +1,6 @@
<p align="center">
<a href="https://solana.com">
<img alt="Solana" src="https://i.imgur.com/IKyzQ6T.png" width="250" />
<img alt="Solana" src="https://i.imgur.com/OMnvVEz.png" width="250" />
</a>
</p>
@@ -19,18 +19,12 @@ $ source $HOME/.cargo/env
$ rustup component add rustfmt
```
When building the master branch, please make sure you are using the latest stable rust version by running:
Please make sure you are always using the latest stable rust version by running:
```bash
$ rustup update
```
When building a specific release branch, you should check the rust version in `ci/rust-version.sh` and if necessary, install that version by running:
```bash
$ rustup install VERSION
```
Note that if this is not the latest rust version on your machine, cargo commands may require an [override](https://rust-lang.github.io/rustup/overrides.html) in order to use the correct version.
On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, etc. On Ubuntu:
```bash
@@ -38,12 +32,6 @@ $ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang make
```
On Mac M1s, make sure you set up your terminal & homebrew [to use](https://5balloons.info/correct-way-to-install-and-use-homebrew-on-m1-macs/) Rosetta. You can install it with:
```bash
$ softwareupdate --install-rosetta
```
## **2. Download the source code.**
```bash
@@ -57,6 +45,11 @@ $ cd solana
$ cargo build
```
## **4. Run a minimal local cluster.**
```bash
$ ./run.sh
```
# Testing
**Run the test suite:**
@@ -114,41 +107,6 @@ send us that patch!
# Disclaimer
All claims, content, designs, algorithms, estimates, roadmaps,
specifications, and performance measurements described in this project
are done with the Solana Foundation's ("SF") good faith efforts. It is up to
the reader to check and validate their accuracy and truthfulness.
Furthermore, nothing in this project constitutes a solicitation for
investment.
All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the author's best effort. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore, nothing in this project constitutes a solicitation for investment.
Any content produced by SF or developer resources that SF provides, are
for educational and inspirational purposes only. SF does not encourage,
induce or sanction the deployment, integration or use of any such
applications (including the code comprising the Solana blockchain
protocol) in violation of applicable laws or regulations and hereby
prohibits any such deployment, integration or use. This includes use of
any such applications by the reader (a) in violation of export control
or sanctions laws of the United States or any other applicable
jurisdiction, (b) if the reader is located in or ordinarily resident in
a country or territory subject to comprehensive sanctions administered
by the U.S. Office of Foreign Assets Control (OFAC), or (c) if the
reader is or is working on behalf of a Specially Designated National
(SDN) or a person subject to similar blocking or denied party
prohibitions.
The reader should be aware that U.S. export control and sanctions laws
prohibit U.S. persons (and other persons that are subject to such laws)
from transacting with persons in certain countries and territories or
that are on the SDN list. As a project based primarily on open-source
software, it is possible that such sanctioned persons may nevertheless
bypass prohibitions, obtain the code comprising the Solana blockchain
protocol (or other project code or applications) and deploy, integrate,
or otherwise use it. Accordingly, there is a risk to individuals that
other persons using the Solana blockchain protocol may be sanctioned
persons and that transactions with such persons would be a violation of
U.S. export controls and sanctions law. This risk applies to
individuals, organizations, and other ecosystem participants that
deploy, integrate, or use the Solana blockchain protocol code directly
(e.g., as a node operator), and individuals that transact on the Solana
blockchain through light clients, third party interfaces, and/or wallet
software.
Any content produced by Solana, or developer resources that Solana provides, are for educational and inspirational purposes only. Solana does not encourage, induce or sanction the deployment of any such applications in violation of applicable laws or regulations.


@@ -18,24 +18,24 @@ Expect a response as fast as possible, within one business day at the latest.
We offer bounties for critical security issues. Please see below for more details.
Loss of Funds:
$2,000,000 USD in locked SOL tokens (locked for 12 months)
$500,000 USD in locked SOL tokens (locked for 12 months)
* Theft of funds without the user's signature from any account
* Theft of funds without the user's interaction in system, token, stake, vote programs
* Theft of funds that requires the user's signature - creating a vote program that drains the delegated stakes.
Consensus/Safety Violations:
$1,000,000 USD in locked SOL tokens (locked for 12 months)
$250,000 USD in locked SOL tokens (locked for 12 months)
* Consensus safety violation
* Tricking a validator into accepting an optimistic confirmation or rooted slot without a double vote, etc.
Other Attacks:
$400,000 USD in locked SOL tokens (locked for 12 months)
$100,000 USD in locked SOL tokens (locked for 12 months)
* Protocol liveness attacks
* Eclipse attacks
* Remote attacks that partition the network
DoS Attacks:
$100,000 USD in locked SOL tokens (locked for 12 months)
$25,000 USD in locked SOL tokens (locked for 12 months)
* Remote resource exhaustion via non-RPC protocols
RPC DoS/Crashes:
@@ -51,27 +51,13 @@ The following components are out of scope for the bounty program
* Attacks that require social engineering
Eligibility:
* The participant submitting the bug report shall follow the process outlined within this document
* The participant submitting the bug bounty shall follow the process outlined within this document
* Valid exploits can be eligible even if they are not successfully executed on the cluster
* Multiple submissions for the same class of exploit are still eligible for compensation, though they may be compensated at a lower rate; these will be assessed on a case-by-case basis
* Participants must complete KYC and sign the participation agreement when registration is open at https://solana.com/validator-registration. Security exploits will still be assessed and open for submission at all times. This needs to be done only prior to the distribution of tokens.
Payment of Bug Bounties:
* Payments for eligible bug reports are distributed monthly.
* Bounties for all bug reports submitted in a given month are paid out in the middle of the
following month.
* The SOL/USD conversion rate used for payments is the market price at the end of
the last day of the month for the month in which the bug was submitted.
* The reference for this price is the Closing Price given by Coingecko.com on
that date, as given here:
https://www.coingecko.com/en/coins/solana/historical_data/usd#panel
* For example, for all bugs submitted in March 2021, the SOL/USD price for bug
payouts is the Close price on 2021-03-31 of $19.49. This applies to all bugs
submitted in March 2021, to be paid in mid-April 2021.
* Bug bounties are paid out in
[stake accounts](https://solana.com/staking) with a
[lockup](https://docs.solana.com/staking/stake-accounts#lockups)
expiring 12 months from the last day of the month in which the bug was submitted.
Notes:
* All locked tokens can be staked during the lockup period
<a name="process"></a>
## Incident Response Process


@@ -1,30 +1,31 @@
[package]
name = "solana-account-decoder"
version = "1.9.2"
version = "1.5.20"
description = "Solana account decoder"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-account-decoder"
license = "Apache-2.0"
edition = "2021"
edition = "2018"
[dependencies]
base64 = "0.12.3"
bincode = "1.3.3"
bs58 = "0.4.0"
bincode = "1.3.1"
bs58 = "0.3.1"
bv = "0.11.1"
Inflector = "0.11.4"
lazy_static = "1.4.0"
serde = "1.0.130"
serde = "1.0.118"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-config-program = { path = "../programs/config", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-vote-program = { path = "../programs/vote", version = "=1.9.2" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
serde_json = "1.0.56"
solana-config-program = { path = "../programs/config", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
solana-stake-program = { path = "../programs/stake", version = "=1.5.20" }
solana-vote-program = { path = "../programs/vote", version = "=1.5.20" }
spl-token-v2-0 = { package = "spl-token", version = "=3.1.0", features = ["no-entrypoint"] }
thiserror = "1.0"
zstd = "0.9.0"
zstd = "0.5.1"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -16,12 +16,7 @@ pub mod validator_info;
use {
crate::parse_account_data::{parse_account_data, AccountAdditionalData, ParsedAccount},
solana_sdk::{
account::{ReadableAccount, WritableAccount},
clock::Epoch,
fee_calculator::FeeCalculator,
pubkey::Pubkey,
},
solana_sdk::{account::Account, clock::Epoch, fee_calculator::FeeCalculator, pubkey::Pubkey},
std::{
io::{Read, Write},
str::FromStr,
@@ -30,7 +25,6 @@ use {
pub type StringAmount = String;
pub type StringDecimals = String;
pub const MAX_BASE58_BYTES: usize = 128;
/// A duplicate representation of an Account for pretty JSON serialization
#[derive(Serialize, Deserialize, Clone, Debug)]
@@ -51,7 +45,7 @@ pub enum UiAccountData {
Binary(String, UiAccountEncoding),
}
#[derive(Serialize, Deserialize, Clone, Copy, Debug, PartialEq, Eq, Hash)]
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
#[serde(rename_all = "camelCase")]
pub enum UiAccountEncoding {
Binary, // Legacy. Retained for RPC backwards compatibility
@@ -63,73 +57,58 @@ pub enum UiAccountEncoding {
}
impl UiAccount {
fn encode_bs58<T: ReadableAccount>(
account: &T,
data_slice_config: Option<UiDataSliceConfig>,
) -> String {
if account.data().len() <= MAX_BASE58_BYTES {
bs58::encode(slice_data(account.data(), data_slice_config)).into_string()
} else {
"error: data too large for bs58 encoding".to_string()
}
}
pub fn encode<T: ReadableAccount>(
pub fn encode(
pubkey: &Pubkey,
account: &T,
account: Account,
encoding: UiAccountEncoding,
additional_data: Option<AccountAdditionalData>,
data_slice_config: Option<UiDataSliceConfig>,
) -> Self {
let data = match encoding {
UiAccountEncoding::Binary => {
let data = Self::encode_bs58(account, data_slice_config);
UiAccountData::LegacyBinary(data)
}
UiAccountEncoding::Base58 => {
let data = Self::encode_bs58(account, data_slice_config);
UiAccountData::Binary(data, encoding)
}
UiAccountEncoding::Binary => UiAccountData::LegacyBinary(
bs58::encode(slice_data(&account.data, data_slice_config)).into_string(),
),
UiAccountEncoding::Base58 => UiAccountData::Binary(
bs58::encode(slice_data(&account.data, data_slice_config)).into_string(),
encoding,
),
UiAccountEncoding::Base64 => UiAccountData::Binary(
base64::encode(slice_data(account.data(), data_slice_config)),
base64::encode(slice_data(&account.data, data_slice_config)),
encoding,
),
UiAccountEncoding::Base64Zstd => {
let mut encoder = zstd::stream::write::Encoder::new(Vec::new(), 0).unwrap();
match encoder
.write_all(slice_data(account.data(), data_slice_config))
.write_all(slice_data(&account.data, data_slice_config))
.and_then(|()| encoder.finish())
{
Ok(zstd_data) => UiAccountData::Binary(base64::encode(zstd_data), encoding),
Err(_) => UiAccountData::Binary(
base64::encode(slice_data(account.data(), data_slice_config)),
base64::encode(slice_data(&account.data, data_slice_config)),
UiAccountEncoding::Base64,
),
}
}
UiAccountEncoding::JsonParsed => {
if let Ok(parsed_data) =
parse_account_data(pubkey, account.owner(), account.data(), additional_data)
parse_account_data(pubkey, &account.owner, &account.data, additional_data)
{
UiAccountData::Json(parsed_data)
} else {
UiAccountData::Binary(
base64::encode(&account.data()),
UiAccountEncoding::Base64,
)
UiAccountData::Binary(base64::encode(&account.data), UiAccountEncoding::Base64)
}
}
};
UiAccount {
lamports: account.lamports(),
lamports: account.lamports,
data,
owner: account.owner().to_string(),
executable: account.executable(),
rent_epoch: account.rent_epoch(),
owner: account.owner.to_string(),
executable: account.executable,
rent_epoch: account.rent_epoch,
}
}
pub fn decode<T: WritableAccount>(&self) -> Option<T> {
pub fn decode(&self) -> Option<Account> {
let data = match &self.data {
UiAccountData::Json(_) => None,
UiAccountData::LegacyBinary(blob) => bs58::decode(blob).into_vec().ok(),
@@ -149,13 +128,13 @@ impl UiAccount {
UiAccountEncoding::Binary | UiAccountEncoding::JsonParsed => None,
},
}?;
Some(T::create(
self.lamports,
Some(Account {
lamports: self.lamports,
data,
Pubkey::from_str(&self.owner).ok()?,
self.executable,
self.rent_epoch,
))
owner: Pubkey::from_str(&self.owner).ok()?,
executable: self.executable,
rent_epoch: self.rent_epoch,
})
}
}
@@ -181,7 +160,7 @@ impl Default for UiFeeCalculator {
}
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct UiDataSliceConfig {
pub offset: usize,
@@ -204,10 +183,7 @@ fn slice_data(data: &[u8], data_slice_config: Option<UiDataSliceConfig>) -> &[u8
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::account::{Account, AccountSharedData},
};
use super::*;
#[test]
fn test_slice_data() {
@@ -241,10 +217,10 @@ mod test {
fn test_base64_zstd() {
let encoded_account = UiAccount::encode(
&Pubkey::default(),
&AccountSharedData::from(Account {
Account {
data: vec![0; 1024],
..Account::default()
}),
},
UiAccountEncoding::Base64Zstd,
None,
None,
@@ -254,9 +230,7 @@ mod test {
UiAccountData::Binary(_, UiAccountEncoding::Base64Zstd)
));
let decoded_account = encoded_account.decode::<Account>().unwrap();
assert_eq!(decoded_account.data(), &vec![0; 1024]);
let decoded_account = encoded_account.decode::<AccountSharedData>().unwrap();
assert_eq!(decoded_account.data(), &vec![0; 1024]);
let decoded_account = encoded_account.decode().unwrap();
assert_eq!(decoded_account.data, vec![0; 1024]);
}
}


@@ -1,27 +1,25 @@
use {
crate::{
parse_bpf_loader::parse_bpf_upgradeable_loader,
parse_config::parse_config,
parse_nonce::parse_nonce,
parse_stake::parse_stake,
parse_sysvar::parse_sysvar,
parse_token::{parse_token, spl_token_id},
parse_vote::parse_vote,
},
inflector::Inflector,
serde_json::Value,
solana_sdk::{instruction::InstructionError, pubkey::Pubkey, stake, system_program, sysvar},
std::collections::HashMap,
thiserror::Error,
use crate::{
parse_bpf_loader::parse_bpf_upgradeable_loader,
parse_config::parse_config,
parse_nonce::parse_nonce,
parse_stake::parse_stake,
parse_sysvar::parse_sysvar,
parse_token::{parse_token, spl_token_id_v2_0},
parse_vote::parse_vote,
};
use inflector::Inflector;
use serde_json::Value;
use solana_sdk::{instruction::InstructionError, pubkey::Pubkey, system_program, sysvar};
use std::collections::HashMap;
use thiserror::Error;
lazy_static! {
static ref BPF_UPGRADEABLE_LOADER_PROGRAM_ID: Pubkey = solana_sdk::bpf_loader_upgradeable::id();
static ref CONFIG_PROGRAM_ID: Pubkey = solana_config_program::id();
static ref STAKE_PROGRAM_ID: Pubkey = stake::program::id();
static ref STAKE_PROGRAM_ID: Pubkey = solana_stake_program::id();
static ref SYSTEM_PROGRAM_ID: Pubkey = system_program::id();
static ref SYSVAR_PROGRAM_ID: Pubkey = sysvar::id();
static ref TOKEN_PROGRAM_ID: Pubkey = spl_token_id();
static ref TOKEN_PROGRAM_ID: Pubkey = spl_token_id_v2_0();
static ref VOTE_PROGRAM_ID: Pubkey = solana_vote_program::id();
pub static ref PARSABLE_PROGRAM_IDS: HashMap<Pubkey, ParsableAccount> = {
let mut m = HashMap::new();
@@ -114,14 +112,12 @@ pub fn parse_account_data(
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::nonce::{
state::{Data, Versions},
State,
},
solana_vote_program::vote_state::{VoteState, VoteStateVersions},
use super::*;
use solana_sdk::nonce::{
state::{Data, Versions},
State,
};
use solana_vote_program::vote_state::{VoteState, VoteStateVersions};
#[test]
fn test_parse_account_data() {


@@ -1,11 +1,9 @@
use {
crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
UiAccountData, UiAccountEncoding,
},
bincode::{deserialize, serialized_size},
solana_sdk::{bpf_loader_upgradeable::UpgradeableLoaderState, pubkey::Pubkey},
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
UiAccountData, UiAccountEncoding,
};
use bincode::{deserialize, serialized_size};
use solana_sdk::{bpf_loader_upgradeable::UpgradeableLoaderState, pubkey::Pubkey};
pub fn parse_bpf_upgradeable_loader(
data: &[u8],
@@ -92,7 +90,9 @@ pub struct UiProgramData {
#[cfg(test)]
mod test {
use {super::*, bincode::serialize, solana_sdk::pubkey::Pubkey};
use super::*;
use bincode::serialize;
use solana_sdk::pubkey::Pubkey;
#[test]
fn test_parse_bpf_upgradeable_loader_accounts() {


@@ -1,19 +1,15 @@
use {
crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
validator_info,
},
bincode::deserialize,
serde_json::Value,
solana_config_program::{get_config_data, ConfigKeys},
solana_sdk::{
pubkey::Pubkey,
stake::config::{self as stake_config, Config as StakeConfig},
},
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
validator_info,
};
use bincode::deserialize;
use serde_json::Value;
use solana_config_program::{get_config_data, ConfigKeys};
use solana_sdk::pubkey::Pubkey;
use solana_stake_program::config::Config as StakeConfig;
pub fn parse_config(data: &[u8], pubkey: &Pubkey) -> Result<ConfigAccountType, ParseAccountError> {
let parsed_account = if pubkey == &stake_config::id() {
let parsed_account = if pubkey == &solana_stake_program::config::id() {
get_config_data(data)
.ok()
.and_then(|data| deserialize::<StakeConfig>(data).ok())
@@ -41,7 +37,7 @@ fn parse_config_data<T>(data: &[u8], keys: Vec<(Pubkey, bool)>) -> Option<UiConf
where
T: serde::de::DeserializeOwned,
{
let config_data: T = deserialize(get_config_data(data).ok()?).ok()?;
let config_data: T = deserialize(&get_config_data(data).ok()?).ok()?;
let keys = keys
.iter()
.map(|key| UiConfigKey {
@@ -91,10 +87,10 @@ pub struct UiConfig<T> {
#[cfg(test)]
mod test {
use {
super::*, crate::validator_info::ValidatorInfo, serde_json::json,
solana_config_program::create_config_account, solana_sdk::account::ReadableAccount,
};
use super::*;
use crate::validator_info::ValidatorInfo;
use serde_json::json;
use solana_config_program::create_config_account;
#[test]
fn test_parse_config() {
@@ -104,7 +100,11 @@ mod test {
};
let stake_config_account = create_config_account(vec![], &stake_config, 10);
assert_eq!(
parse_config(stake_config_account.data(), &stake_config::id()).unwrap(),
parse_config(
&stake_config_account.data,
&solana_stake_program::config::id()
)
.unwrap(),
ConfigAccountType::StakeConfig(UiStakeConfig {
warmup_cooldown_rate: 0.25,
slash_penalty: 50,
@@ -124,7 +124,7 @@ mod test {
10,
);
assert_eq!(
parse_config(validator_info_config_account.data(), &info_pubkey).unwrap(),
parse_config(&validator_info_config_account.data, &info_pubkey).unwrap(),
ConfigAccountType::ValidatorInfo(UiConfig {
keys: vec![
UiConfigKey {


@@ -1,9 +1,7 @@
use {
crate::{parse_account_data::ParseAccountError, UiFeeCalculator},
solana_sdk::{
instruction::InstructionError,
nonce::{state::Versions, State},
},
use crate::{parse_account_data::ParseAccountError, UiFeeCalculator};
use solana_sdk::{
instruction::InstructionError,
nonce::{state::Versions, State},
};
pub fn parse_nonce(data: &[u8]) -> Result<UiNonceState, ParseAccountError> {
@@ -44,16 +42,14 @@ pub struct UiNonceData {
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::{
hash::Hash,
nonce::{
state::{Data, Versions},
State,
},
pubkey::Pubkey,
use super::*;
use solana_sdk::{
hash::Hash,
nonce::{
state::{Data, Versions},
State,
},
pubkey::Pubkey,
};
#[test]


@@ -1,14 +1,10 @@
use {
crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount,
},
bincode::deserialize,
solana_sdk::{
clock::{Epoch, UnixTimestamp},
stake::state::{Authorized, Delegation, Lockup, Meta, Stake, StakeState},
},
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount,
};
use bincode::deserialize;
use solana_sdk::clock::{Epoch, UnixTimestamp};
use solana_stake_program::stake_state::{Authorized, Delegation, Lockup, Meta, Stake, StakeState};
pub fn parse_stake(data: &[u8]) -> Result<StakeAccountType, ParseAccountError> {
let stake_state: StakeState = deserialize(data)
@@ -136,7 +132,8 @@ impl From<Delegation> for UiDelegation {
#[cfg(test)]
mod test {
use {super::*, bincode::serialize};
use super::*;
use bincode::serialize;
#[test]
fn test_parse_stake() {


@@ -1,26 +1,21 @@
#[allow(deprecated)]
use solana_sdk::sysvar::{fees::Fees, recent_blockhashes::RecentBlockhashes};
use {
crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount, UiFeeCalculator,
},
bincode::deserialize,
bv::BitVec,
solana_sdk::{
clock::{Clock, Epoch, Slot, UnixTimestamp},
epoch_schedule::EpochSchedule,
pubkey::Pubkey,
rent::Rent,
slot_hashes::SlotHashes,
slot_history::{self, SlotHistory},
stake_history::{StakeHistory, StakeHistoryEntry},
sysvar::{self, rewards::Rewards},
},
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount, UiFeeCalculator,
};
use bincode::deserialize;
use bv::BitVec;
use solana_sdk::{
clock::{Clock, Epoch, Slot, UnixTimestamp},
epoch_schedule::EpochSchedule,
pubkey::Pubkey,
rent::Rent,
slot_hashes::SlotHashes,
slot_history::{self, SlotHistory},
stake_history::{StakeHistory, StakeHistoryEntry},
sysvar::{self, fees::Fees, recent_blockhashes::RecentBlockhashes, rewards::Rewards},
};
pub fn parse_sysvar(data: &[u8], pubkey: &Pubkey) -> Result<SysvarAccountType, ParseAccountError> {
#[allow(deprecated)]
let parsed_account = {
if pubkey == &sysvar::clock::id() {
deserialize::<Clock>(data)
@@ -96,9 +91,7 @@ pub fn parse_sysvar(data: &[u8], pubkey: &Pubkey) -> Result<SysvarAccountType, P
pub enum SysvarAccountType {
Clock(UiClock),
EpochSchedule(EpochSchedule),
#[allow(deprecated)]
Fees(UiFees),
#[allow(deprecated)]
RecentBlockhashes(Vec<UiRecentBlockhashesEntry>),
Rent(UiRent),
Rewards(UiRewards),
@@ -134,7 +127,6 @@ impl From<Clock> for UiClock {
pub struct UiFees {
pub fee_calculator: UiFeeCalculator,
}
#[allow(deprecated)]
impl From<Fees> for UiFees {
fn from(fees: Fees) -> Self {
Self {
@@ -220,17 +212,14 @@ pub struct UiStakeHistoryEntry {
#[cfg(test)]
mod test {
#[allow(deprecated)]
use solana_sdk::sysvar::recent_blockhashes::IterItem;
use {
super::*,
solana_sdk::{account::create_account_for_test, fee_calculator::FeeCalculator, hash::Hash},
use super::*;
use solana_sdk::{
account::create_account_for_test, fee_calculator::FeeCalculator, hash::Hash,
sysvar::recent_blockhashes::IterItem,
};
#[test]
fn test_parse_sysvars() {
let hash = Hash::new(&[1; 32]);
let clock_sysvar = create_account_for_test(&Clock::default());
assert_eq!(
parse_sysvar(&clock_sysvar.data, &sysvar::clock::id()).unwrap(),
@@ -250,29 +239,31 @@ mod test {
SysvarAccountType::EpochSchedule(epoch_schedule),
);
#[allow(deprecated)]
{
let fees_sysvar = create_account_for_test(&Fees::default());
assert_eq!(
parse_sysvar(&fees_sysvar.data, &sysvar::fees::id()).unwrap(),
SysvarAccountType::Fees(UiFees::default()),
);
let fees_sysvar = create_account_for_test(&Fees::default());
assert_eq!(
parse_sysvar(&fees_sysvar.data, &sysvar::fees::id()).unwrap(),
SysvarAccountType::Fees(UiFees::default()),
);
let recent_blockhashes: RecentBlockhashes =
vec![IterItem(0, &hash, 10)].into_iter().collect();
let recent_blockhashes_sysvar = create_account_for_test(&recent_blockhashes);
assert_eq!(
parse_sysvar(
&recent_blockhashes_sysvar.data,
&sysvar::recent_blockhashes::id()
)
.unwrap(),
SysvarAccountType::RecentBlockhashes(vec![UiRecentBlockhashesEntry {
blockhash: hash.to_string(),
fee_calculator: FeeCalculator::new(10).into(),
}]),
);
}
let hash = Hash::new(&[1; 32]);
let fee_calculator = FeeCalculator {
lamports_per_signature: 10,
};
let recent_blockhashes: RecentBlockhashes = vec![IterItem(0, &hash, &fee_calculator)]
.into_iter()
.collect();
let recent_blockhashes_sysvar = create_account_for_test(&recent_blockhashes);
assert_eq!(
parse_sysvar(
&recent_blockhashes_sysvar.data,
&sysvar::recent_blockhashes::id()
)
.unwrap(),
SysvarAccountType::RecentBlockhashes(vec![UiRecentBlockhashesEntry {
blockhash: hash.to_string(),
fee_calculator: fee_calculator.into(),
}]),
);
let rent = Rent {
lamports_per_byte_year: 10,


@@ -1,38 +1,36 @@
use {
crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount, StringDecimals,
},
solana_sdk::pubkey::Pubkey,
spl_token::{
solana_program::{
program_option::COption, program_pack::Pack, pubkey::Pubkey as SplTokenPubkey,
},
state::{Account, AccountState, Mint, Multisig},
},
std::str::FromStr,
use crate::{
parse_account_data::{ParsableAccount, ParseAccountError},
StringAmount, StringDecimals,
};
use solana_sdk::pubkey::Pubkey;
use spl_token_v2_0::{
solana_program::{
program_option::COption, program_pack::Pack, pubkey::Pubkey as SplTokenPubkey,
},
state::{Account, AccountState, Mint, Multisig},
};
use std::str::FromStr;
// A helper function to convert spl_token::id() as spl_sdk::pubkey::Pubkey to
// A helper function to convert spl_token_v2_0::id() as spl_sdk::pubkey::Pubkey to
// solana_sdk::pubkey::Pubkey
pub fn spl_token_id() -> Pubkey {
Pubkey::new_from_array(spl_token::id().to_bytes())
pub fn spl_token_id_v2_0() -> Pubkey {
Pubkey::from_str(&spl_token_v2_0::id().to_string()).unwrap()
}
// A helper function to convert spl_token::native_mint::id() as spl_sdk::pubkey::Pubkey to
// A helper function to convert spl_token_v2_0::native_mint::id() as spl_sdk::pubkey::Pubkey to
// solana_sdk::pubkey::Pubkey
pub fn spl_token_native_mint() -> Pubkey {
Pubkey::new_from_array(spl_token::native_mint::id().to_bytes())
pub fn spl_token_v2_0_native_mint() -> Pubkey {
Pubkey::from_str(&spl_token_v2_0::native_mint::id().to_string()).unwrap()
}
// A helper function to convert a solana_sdk::pubkey::Pubkey to spl_sdk::pubkey::Pubkey
pub fn spl_token_pubkey(pubkey: &Pubkey) -> SplTokenPubkey {
SplTokenPubkey::new_from_array(pubkey.to_bytes())
pub fn spl_token_v2_0_pubkey(pubkey: &Pubkey) -> SplTokenPubkey {
SplTokenPubkey::from_str(&pubkey.to_string()).unwrap()
}
// A helper function to convert a spl_sdk::pubkey::Pubkey to solana_sdk::pubkey::Pubkey
pub fn pubkey_from_spl_token(pubkey: &SplTokenPubkey) -> Pubkey {
Pubkey::new_from_array(pubkey.to_bytes())
pub fn pubkey_from_spl_token_v2_0(pubkey: &SplTokenPubkey) -> Pubkey {
Pubkey::from_str(&pubkey.to_string()).unwrap()
}
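// Illustrative sketch, not part of this file: the helpers above convert the
// same 32 bytes between the two pubkey types, so a round trip is lossless.
// The names follow the newer variants shown in this diff (`spl_token_pubkey`,
// `pubkey_from_spl_token`):
//
//     let pk = solana_sdk::pubkey::Pubkey::new_unique();
//     assert_eq!(pubkey_from_spl_token(&spl_token_pubkey(&pk)), pk);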
pub fn parse_token(
@@ -118,7 +116,6 @@ pub fn parse_token(
#[derive(Debug, Serialize, Deserialize, PartialEq)]
#[serde(rename_all = "camelCase", tag = "type", content = "info")]
#[allow(clippy::large_enum_variant)]
pub enum TokenAccountType {
Account(UiTokenAccount),
Mint(UiMint),


@@ -1,11 +1,9 @@
use {
crate::{parse_account_data::ParseAccountError, StringAmount},
solana_sdk::{
clock::{Epoch, Slot},
pubkey::Pubkey,
},
solana_vote_program::vote_state::{BlockTimestamp, Lockout, VoteState},
use crate::{parse_account_data::ParseAccountError, StringAmount};
use solana_sdk::{
clock::{Epoch, Slot},
pubkey::Pubkey,
};
use solana_vote_program::vote_state::{BlockTimestamp, Lockout, VoteState};
pub fn parse_vote(data: &[u8]) -> Result<VoteAccountType, ParseAccountError> {
let mut vote_state = VoteState::deserialize(data).map_err(ParseAccountError::from)?;
@@ -123,7 +121,8 @@ struct UiEpochCredits {
#[cfg(test)]
mod test {
use {super::*, solana_vote_program::vote_state::VoteStateVersions};
use super::*;
use solana_vote_program::vote_state::VoteStateVersions;
#[test]
fn test_parse_vote() {


@@ -1,22 +1,24 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-accounts-bench"
version = "1.9.2"
version = "1.5.20"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
log = "0.4.14"
rayon = "1.5.1"
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-measure = { path = "../measure", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-version = { path = "../version", version = "=1.9.2" }
log = "0.4.11"
rayon = "1.4.0"
solana-logger = { path = "../logger", version = "=1.5.20" }
solana-runtime = { path = "../runtime", version = "=1.5.20" }
solana-measure = { path = "../measure", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
solana-version = { path = "../version", version = "=1.5.20" }
rand = "0.7.0"
clap = "2.33.1"
crossbeam-channel = "0.4"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,19 +1,15 @@
#![allow(clippy::integer_arithmetic)]
#[macro_use]
extern crate log;
use {
clap::{crate_description, crate_name, value_t, App, Arg},
rayon::prelude::*,
solana_measure::measure::Measure,
solana_runtime::{
accounts::{create_test_accounts, update_accounts_bench, Accounts},
accounts_db::AccountShrinkThreshold,
accounts_index::AccountSecondaryIndexes,
ancestors::Ancestors,
},
solana_sdk::{genesis_config::ClusterType, pubkey::Pubkey},
std::{env, fs, path::PathBuf},
use clap::{crate_description, crate_name, value_t, App, Arg};
use rayon::prelude::*;
use solana_measure::measure::Measure;
use solana_runtime::{
accounts::{create_test_accounts, update_accounts_bench, Accounts},
accounts_index::Ancestors,
};
use solana_sdk::{genesis_config::ClusterType, pubkey::Pubkey};
use std::{collections::HashSet, env, fs, path::PathBuf};
fn main() {
solana_logger::setup();
@@ -62,13 +58,8 @@ fn main() {
if fs::remove_dir_all(path.clone()).is_err() {
println!("Warning: Couldn't remove {:?}", path);
}
let accounts = Accounts::new_with_config_for_benches(
vec![path],
&ClusterType::Testnet,
AccountSecondaryIndexes::default(),
false,
AccountShrinkThreshold::default(),
);
let accounts =
Accounts::new_with_config(vec![path], &ClusterType::Testnet, HashSet::new(), false);
println!("Creating {} accounts", num_accounts);
let mut create_time = Measure::start("create accounts");
let pubkeys: Vec<_> = (0..num_slots)
@@ -92,19 +83,17 @@ fn main() {
num_slots,
create_time
);
let mut ancestors = Vec::with_capacity(num_slots);
ancestors.push(0);
let mut ancestors: Ancestors = vec![(0, 0)].into_iter().collect();
for i in 1..num_slots {
ancestors.push(i as u64);
ancestors.insert(i as u64, i - 1);
accounts.add_root(i as u64);
}
let ancestors = Ancestors::from(ancestors);
let mut elapsed = vec![0; iterations];
let mut elapsed_store = vec![0; iterations];
for x in 0..iterations {
if clean {
let mut time = Measure::start("clean");
accounts.accounts_db.clean_accounts(None, false, None);
accounts.accounts_db.clean_accounts(None);
time.stop();
println!("{}", time);
for slot in 0..num_slots {
@@ -123,9 +112,6 @@ fn main() {
solana_sdk::clock::Slot::default(),
&ancestors,
None,
false,
None,
false,
);
time_store.stop();
if results != results_store {


@@ -1 +0,0 @@
/farf/


@@ -1,37 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accounts-cluster-bench"
version = "1.9.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
clap = "2.33.1"
log = "0.4.14"
rand = "0.7.0"
rayon = "1.5.1"
solana-account-decoder = { path = "../account-decoder", version = "=1.9.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.9.2" }
solana-client = { path = "../client", version = "=1.9.2" }
solana-core = { path = "../core", version = "=1.9.2" }
solana-faucet = { path = "../faucet", version = "=1.9.2" }
solana-gossip = { path = "../gossip", version = "=1.9.2" }
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-measure = { path = "../measure", version = "=1.9.2" }
solana-net-utils = { path = "../net-utils", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-streamer = { path = "../streamer", version = "=1.9.2" }
solana-test-validator = { path = "../test-validator", version = "=1.9.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.2" }
solana-version = { path = "../version", version = "=1.9.2" }
spl-token = { version = "=3.2.0", features = ["no-entrypoint"] }
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.9.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,813 +0,0 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, value_t, values_t_or_exit, App, Arg},
log::*,
rand::{thread_rng, Rng},
rayon::prelude::*,
solana_account_decoder::parse_token::spl_token_pubkey,
solana_clap_utils::input_parsers::pubkey_of,
solana_client::{rpc_client::RpcClient, transaction_executor::TransactionExecutor},
solana_faucet::faucet::{request_airdrop_transaction, FAUCET_PORT},
solana_gossip::gossip_service::discover,
solana_runtime::inline_spl_token,
solana_sdk::{
commitment_config::CommitmentConfig,
instruction::{AccountMeta, Instruction},
message::Message,
pubkey::Pubkey,
rpc_port::DEFAULT_RPC_PORT,
signature::{read_keypair_file, Keypair, Signer},
system_instruction, system_program,
transaction::Transaction,
},
solana_streamer::socket::SocketAddrSpace,
solana_transaction_status::parse_token::spl_token_instruction,
std::{
cmp::min,
net::SocketAddr,
process::exit,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
thread::sleep,
time::{Duration, Instant},
},
};
pub fn airdrop_lamports(
client: &RpcClient,
faucet_addr: &SocketAddr,
id: &Keypair,
desired_balance: u64,
) -> bool {
let starting_balance = client.get_balance(&id.pubkey()).unwrap_or(0);
info!("starting balance {}", starting_balance);
if starting_balance < desired_balance {
let airdrop_amount = desired_balance - starting_balance;
info!(
"Airdropping {:?} lamports from {} for {}",
airdrop_amount,
faucet_addr,
id.pubkey(),
);
let blockhash = client.get_latest_blockhash().unwrap();
match request_airdrop_transaction(faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
Ok(transaction) => {
let mut tries = 0;
loop {
tries += 1;
let result = client.send_and_confirm_transaction(&transaction);
if result.is_ok() {
break;
}
if tries >= 5 {
panic!(
"Error requesting airdrop: to addr: {:?} amount: {} {:?}",
faucet_addr, airdrop_amount, result
)
}
}
}
Err(err) => {
panic!(
"Error requesting airdrop: {:?} to addr: {:?} amount: {}",
err, faucet_addr, airdrop_amount
);
}
};
let current_balance = client.get_balance(&id.pubkey()).unwrap_or_else(|e| {
panic!("airdrop error {}", e);
});
info!("current balance {}...", current_balance);
if current_balance - starting_balance != airdrop_amount {
info!(
"Airdrop failed? {} {} {} {}",
id.pubkey(),
current_balance,
starting_balance,
airdrop_amount,
);
}
}
true
}
struct SeedTracker {
max_created: Arc<AtomicU64>,
max_closed: Arc<AtomicU64>,
}
fn make_create_message(
keypair: &Keypair,
base_keypair: &Keypair,
max_created_seed: Arc<AtomicU64>,
num_instructions: usize,
balance: u64,
maybe_space: Option<u64>,
mint: Option<Pubkey>,
) -> Message {
let space = maybe_space.unwrap_or_else(|| thread_rng().gen_range(0, 1000));
let instructions: Vec<_> = (0..num_instructions)
.into_iter()
.map(|_| {
let program_id = if mint.is_some() {
inline_spl_token::id()
} else {
system_program::id()
};
let seed = max_created_seed.fetch_add(1, Ordering::Relaxed).to_string();
let to_pubkey =
Pubkey::create_with_seed(&base_keypair.pubkey(), &seed, &program_id).unwrap();
let mut instructions = vec![system_instruction::create_account_with_seed(
&keypair.pubkey(),
&to_pubkey,
&base_keypair.pubkey(),
&seed,
balance,
space,
&program_id,
)];
if let Some(mint_address) = mint {
instructions.push(spl_token_instruction(
spl_token::instruction::initialize_account(
&spl_token::id(),
&spl_token_pubkey(&to_pubkey),
&spl_token_pubkey(&mint_address),
&spl_token_pubkey(&base_keypair.pubkey()),
)
.unwrap(),
));
}
instructions
})
.flatten()
.collect();
Message::new(&instructions, Some(&keypair.pubkey()))
}
fn make_close_message(
keypair: &Keypair,
base_keypair: &Keypair,
max_created: Arc<AtomicU64>,
max_closed: Arc<AtomicU64>,
num_instructions: usize,
balance: u64,
spl_token: bool,
) -> Message {
let instructions: Vec<_> = (0..num_instructions)
.into_iter()
.filter_map(|_| {
let program_id = if spl_token {
inline_spl_token::id()
} else {
system_program::id()
};
let max_created_seed = max_created.load(Ordering::Relaxed);
let max_closed_seed = max_closed.load(Ordering::Relaxed);
if max_closed_seed >= max_created_seed {
return None;
}
let seed = max_closed.fetch_add(1, Ordering::Relaxed).to_string();
let address =
Pubkey::create_with_seed(&base_keypair.pubkey(), &seed, &program_id).unwrap();
if spl_token {
Some(spl_token_instruction(
spl_token::instruction::close_account(
&spl_token::id(),
&spl_token_pubkey(&address),
&spl_token_pubkey(&keypair.pubkey()),
&spl_token_pubkey(&base_keypair.pubkey()),
&[],
)
.unwrap(),
))
} else {
Some(system_instruction::transfer_with_seed(
&address,
&base_keypair.pubkey(),
seed,
&program_id,
&keypair.pubkey(),
balance,
))
}
})
.collect();
Message::new(&instructions, Some(&keypair.pubkey()))
}
#[allow(clippy::too_many_arguments)]
fn run_accounts_bench(
entrypoint_addr: SocketAddr,
faucet_addr: SocketAddr,
payer_keypairs: &[&Keypair],
iterations: usize,
maybe_space: Option<u64>,
batch_size: usize,
close_nth_batch: u64,
maybe_lamports: Option<u64>,
num_instructions: usize,
mint: Option<Pubkey>,
reclaim_accounts: bool,
) {
assert!(num_instructions > 0);
let client =
RpcClient::new_socket_with_commitment(entrypoint_addr, CommitmentConfig::confirmed());
info!("Targeting {}", entrypoint_addr);
let mut latest_blockhash = Instant::now();
let mut last_log = Instant::now();
let mut count = 0;
let mut blockhash = client.get_latest_blockhash().expect("blockhash");
let mut tx_sent_count = 0;
let mut total_accounts_created = 0;
let mut total_accounts_closed = 0;
let mut balances: Vec<_> = payer_keypairs
.iter()
.map(|keypair| client.get_balance(&keypair.pubkey()).unwrap_or(0))
.collect();
let mut last_balance = Instant::now();
let default_max_lamports = 1000;
let min_balance = maybe_lamports.unwrap_or_else(|| {
let space = maybe_space.unwrap_or(default_max_lamports);
client
.get_minimum_balance_for_rent_exemption(space as usize)
.expect("min balance")
});
let base_keypair = Keypair::new();
let seed_tracker = SeedTracker {
max_created: Arc::new(AtomicU64::default()),
max_closed: Arc::new(AtomicU64::default()),
};
info!("Starting balance(s): {:?}", balances);
let executor = TransactionExecutor::new(entrypoint_addr);
// Create and close messages both require 2 signatures, fake a 2 signature message to calculate fees
let mut message = Message::new(
&[
Instruction::new_with_bytes(
Pubkey::new_unique(),
&[],
vec![AccountMeta::new(Pubkey::new_unique(), true)],
),
Instruction::new_with_bytes(
Pubkey::new_unique(),
&[],
vec![AccountMeta::new(Pubkey::new_unique(), true)],
),
],
None,
);
loop {
if latest_blockhash.elapsed().as_millis() > 10_000 {
blockhash = client.get_latest_blockhash().expect("blockhash");
latest_blockhash = Instant::now();
}
message.recent_blockhash = blockhash;
let fee = client
.get_fee_for_message(&message)
.expect("get_fee_for_message");
let lamports = min_balance + fee;
for (i, balance) in balances.iter_mut().enumerate() {
if *balance < lamports || last_balance.elapsed().as_millis() > 2000 {
if let Ok(b) = client.get_balance(&payer_keypairs[i].pubkey()) {
*balance = b;
}
last_balance = Instant::now();
if *balance < lamports * 2 {
info!(
"Balance {} is less than needed: {}, doing aidrop...",
balance, lamports
);
if !airdrop_lamports(
&client,
&faucet_addr,
payer_keypairs[i],
lamports * 100_000,
) {
warn!("failed airdrop, exiting");
return;
}
}
}
}
// Create accounts
let sigs_len = executor.num_outstanding();
if sigs_len < batch_size {
let num_to_create = batch_size - sigs_len;
if num_to_create >= payer_keypairs.len() {
info!("creating {} new", num_to_create);
let chunk_size = num_to_create / payer_keypairs.len();
if chunk_size > 0 {
for (i, keypair) in payer_keypairs.iter().enumerate() {
let txs: Vec<_> = (0..chunk_size)
.into_par_iter()
.map(|_| {
let message = make_create_message(
keypair,
&base_keypair,
seed_tracker.max_created.clone(),
num_instructions,
min_balance,
maybe_space,
mint,
);
let signers: Vec<&Keypair> = vec![keypair, &base_keypair];
Transaction::new(&signers, message, blockhash)
})
.collect();
balances[i] = balances[i].saturating_sub(lamports * txs.len() as u64);
info!("txs: {}", txs.len());
let new_ids = executor.push_transactions(txs);
info!("ids: {}", new_ids.len());
tx_sent_count += new_ids.len();
total_accounts_created += num_instructions * new_ids.len();
}
}
}
if close_nth_batch > 0 {
let num_batches_to_close =
total_accounts_created as u64 / (close_nth_batch * batch_size as u64);
let expected_closed = num_batches_to_close * batch_size as u64;
let max_closed_seed = seed_tracker.max_closed.load(Ordering::Relaxed);
// Close every account we've created with seed between max_closed_seed..expected_closed
if max_closed_seed < expected_closed {
let txs: Vec<_> = (0..expected_closed - max_closed_seed)
.into_par_iter()
.map(|_| {
let message = make_close_message(
payer_keypairs[0],
&base_keypair,
seed_tracker.max_created.clone(),
seed_tracker.max_closed.clone(),
1,
min_balance,
mint.is_some(),
);
let signers: Vec<&Keypair> = vec![payer_keypairs[0], &base_keypair];
Transaction::new(&signers, message, blockhash)
})
.collect();
balances[0] = balances[0].saturating_sub(fee * txs.len() as u64);
info!("close txs: {}", txs.len());
let new_ids = executor.push_transactions(txs);
info!("close ids: {}", new_ids.len());
tx_sent_count += new_ids.len();
total_accounts_closed += new_ids.len() as u64;
}
}
} else {
let _ = executor.drain_cleared();
}
count += 1;
if last_log.elapsed().as_millis() > 3000 || count >= iterations {
info!(
"total_accounts_created: {} total_accounts_closed: {} tx_sent_count: {} loop_count: {} balance(s): {:?}",
total_accounts_created, total_accounts_closed, tx_sent_count, count, balances
);
last_log = Instant::now();
}
if iterations != 0 && count >= iterations {
break;
}
if executor.num_outstanding() >= batch_size {
sleep(Duration::from_millis(500));
}
}
executor.close();
if reclaim_accounts {
let executor = TransactionExecutor::new(entrypoint_addr);
loop {
let max_closed_seed = seed_tracker.max_closed.load(Ordering::Relaxed);
let max_created_seed = seed_tracker.max_created.load(Ordering::Relaxed);
if latest_blockhash.elapsed().as_millis() > 10_000 {
blockhash = client.get_latest_blockhash().expect("blockhash");
latest_blockhash = Instant::now();
}
message.recent_blockhash = blockhash;
let fee = client
.get_fee_for_message(&message)
.expect("get_fee_for_message");
let sigs_len = executor.num_outstanding();
if sigs_len < batch_size && max_closed_seed < max_created_seed {
let num_to_close = min(
batch_size - sigs_len,
(max_created_seed - max_closed_seed) as usize,
);
if num_to_close >= payer_keypairs.len() {
info!("closing {} accounts", num_to_close);
let chunk_size = num_to_close / payer_keypairs.len();
info!("{:?} chunk_size", chunk_size);
if chunk_size > 0 {
for (i, keypair) in payer_keypairs.iter().enumerate() {
let txs: Vec<_> = (0..chunk_size)
.into_par_iter()
.filter_map(|_| {
let message = make_close_message(
keypair,
&base_keypair,
seed_tracker.max_created.clone(),
seed_tracker.max_closed.clone(),
num_instructions,
min_balance,
mint.is_some(),
);
if message.instructions.is_empty() {
return None;
}
let signers: Vec<&Keypair> = vec![keypair, &base_keypair];
Some(Transaction::new(&signers, message, blockhash))
})
.collect();
balances[i] = balances[i].saturating_sub(fee * txs.len() as u64);
info!("close txs: {}", txs.len());
let new_ids = executor.push_transactions(txs);
info!("close ids: {}", new_ids.len());
tx_sent_count += new_ids.len();
total_accounts_closed += (num_instructions * new_ids.len()) as u64;
}
}
}
} else {
let _ = executor.drain_cleared();
}
count += 1;
if last_log.elapsed().as_millis() > 3000 || max_closed_seed >= max_created_seed {
info!(
"total_accounts_closed: {} tx_sent_count: {} loop_count: {} balance(s): {:?}",
total_accounts_closed, tx_sent_count, count, balances
);
last_log = Instant::now();
}
if max_closed_seed >= max_created_seed {
break;
}
if executor.num_outstanding() >= batch_size {
sleep(Duration::from_millis(500));
}
}
executor.close();
}
}
fn main() {
solana_logger::setup_with_default("solana=info");
let matches = App::new(crate_name!())
.about(crate_description!())
.version(solana_version::version!())
.arg(
Arg::with_name("entrypoint")
.long("entrypoint")
.takes_value(true)
.value_name("HOST:PORT")
.help("RPC entrypoint address. Usually <ip>:8899"),
)
.arg(
Arg::with_name("faucet_addr")
.long("faucet")
.takes_value(true)
.value_name("HOST:PORT")
.help("Faucet entrypoint address. Usually <ip>:9900"),
)
.arg(
Arg::with_name("space")
.long("space")
.takes_value(true)
.value_name("BYTES")
.help("Size of accounts to create"),
)
.arg(
Arg::with_name("lamports")
.long("lamports")
.takes_value(true)
.value_name("LAMPORTS")
.help("How many lamports to fund each account"),
)
.arg(
Arg::with_name("identity")
.long("identity")
.takes_value(true)
.multiple(true)
.value_name("FILE")
.help("keypair file"),
)
.arg(
Arg::with_name("batch_size")
.long("batch-size")
.takes_value(true)
.value_name("BYTES")
.help("Number of transactions to send per batch"),
)
.arg(
Arg::with_name("close_nth_batch")
.long("close-frequency")
.takes_value(true)
.value_name("BYTES")
.help(
"Every `n` batches, create a batch of close transactions for
the earliest remaining batch of accounts created.
Note: Should be > 1 to avoid situations where the close \
transactions will be submitted before the corresponding \
create transactions have been confirmed",
),
)
.arg(
Arg::with_name("num_instructions")
.long("num-instructions")
.takes_value(true)
.value_name("NUM")
.help("Number of accounts to create on each transaction"),
)
.arg(
Arg::with_name("iterations")
.long("iterations")
.takes_value(true)
.value_name("NUM")
.help("Number of iterations to make. 0 = unlimited iterations."),
)
.arg(
Arg::with_name("check_gossip")
.long("check-gossip")
.help("Just use entrypoint address directly"),
)
.arg(
Arg::with_name("mint")
.long("mint")
.takes_value(true)
.help("Mint address to initialize account"),
)
.arg(
Arg::with_name("reclaim_accounts")
.long("reclaim-accounts")
.takes_value(false)
.help("Reclaim accounts after session ends; incompatible with --iterations 0"),
)
.get_matches();
let skip_gossip = !matches.is_present("check_gossip");
let port = if skip_gossip { DEFAULT_RPC_PORT } else { 8001 };
let mut entrypoint_addr = SocketAddr::from(([127, 0, 0, 1], port));
if let Some(addr) = matches.value_of("entrypoint") {
entrypoint_addr = solana_net_utils::parse_host_port(addr).unwrap_or_else(|e| {
eprintln!("failed to parse entrypoint address: {}", e);
exit(1)
});
}
let mut faucet_addr = SocketAddr::from(([127, 0, 0, 1], FAUCET_PORT));
if let Some(addr) = matches.value_of("faucet_addr") {
faucet_addr = solana_net_utils::parse_host_port(addr).unwrap_or_else(|e| {
eprintln!("failed to parse entrypoint address: {}", e);
exit(1)
});
}
let space = value_t!(matches, "space", u64).ok();
let lamports = value_t!(matches, "lamports", u64).ok();
let batch_size = value_t!(matches, "batch_size", usize).unwrap_or(4);
let close_nth_batch = value_t!(matches, "close_nth_batch", u64).unwrap_or(0);
let iterations = value_t!(matches, "iterations", usize).unwrap_or(10);
let num_instructions = value_t!(matches, "num_instructions", usize).unwrap_or(1);
if num_instructions == 0 || num_instructions > 500 {
eprintln!("bad num_instructions: {}", num_instructions);
exit(1);
}
let mint = pubkey_of(&matches, "mint");
let payer_keypairs: Vec<_> = values_t_or_exit!(matches, "identity", String)
.iter()
.map(|keypair_string| {
read_keypair_file(keypair_string)
.unwrap_or_else(|_| panic!("bad keypair {:?}", keypair_string))
})
.collect();
let mut payer_keypair_refs: Vec<&Keypair> = vec![];
for keypair in payer_keypairs.iter() {
payer_keypair_refs.push(keypair);
}
let rpc_addr = if !skip_gossip {
info!("Finding cluster entry: {:?}", entrypoint_addr);
let (gossip_nodes, _validators) = discover(
None, // keypair
Some(&entrypoint_addr),
None, // num_nodes
Duration::from_secs(60), // timeout
None, // find_node_by_pubkey
Some(&entrypoint_addr), // find_node_by_gossip_addr
None, // my_gossip_addr
0, // my_shred_version
SocketAddrSpace::Unspecified,
)
.unwrap_or_else(|err| {
eprintln!("Failed to discover {} node: {:?}", entrypoint_addr, err);
exit(1);
});
info!("done found {} nodes", gossip_nodes.len());
gossip_nodes[0].rpc
} else {
info!("Using {:?} as the RPC address", entrypoint_addr);
entrypoint_addr
};
run_accounts_bench(
rpc_addr,
faucet_addr,
&payer_keypair_refs,
iterations,
space,
batch_size,
close_nth_batch,
lamports,
num_instructions,
mint,
matches.is_present("reclaim_accounts"),
);
}
#[cfg(test)]
pub mod test {
use {
super::*,
solana_core::validator::ValidatorConfig,
solana_faucet::faucet::run_local_faucet,
solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
},
solana_measure::measure::Measure,
solana_sdk::{native_token::sol_to_lamports, poh_config::PohConfig},
solana_test_validator::TestValidator,
spl_token::{
solana_program::program_pack::Pack,
state::{Account, Mint},
},
};
#[test]
fn test_accounts_cluster_bench() {
solana_logger::setup();
let validator_config = ValidatorConfig::default();
let num_nodes = 1;
let mut config = ClusterConfig {
cluster_lamports: 10_000_000,
poh_config: PohConfig::new_sleep(Duration::from_millis(50)),
node_stakes: vec![100; num_nodes],
validator_configs: make_identical_validator_configs(&validator_config, num_nodes),
..ClusterConfig::default()
};
let faucet_addr = SocketAddr::from(([127, 0, 0, 1], 9900));
let cluster = LocalCluster::new(&mut config, SocketAddrSpace::Unspecified);
let iterations = 10;
let maybe_space = None;
let batch_size = 100;
let close_nth_batch = 100;
let maybe_lamports = None;
let num_instructions = 2;
let mut start = Measure::start("total accounts run");
run_accounts_bench(
cluster.entry_point_info.rpc,
faucet_addr,
&[&cluster.funding_keypair],
iterations,
maybe_space,
batch_size,
close_nth_batch,
maybe_lamports,
num_instructions,
None,
false,
);
start.stop();
info!("{}", start);
}
#[test]
fn test_create_then_reclaim_spl_token_accounts() {
solana_logger::setup();
let mint_keypair = Keypair::new();
let mint_pubkey = mint_keypair.pubkey();
let faucet_addr = run_local_faucet(mint_keypair, None);
let test_validator = TestValidator::with_custom_fees(
mint_pubkey,
1,
Some(faucet_addr),
SocketAddrSpace::Unspecified,
);
let rpc_client =
RpcClient::new_with_commitment(test_validator.rpc_url(), CommitmentConfig::processed());
// Created funder
let funder = Keypair::new();
let latest_blockhash = rpc_client.get_latest_blockhash().unwrap();
let signature = rpc_client
.request_airdrop_with_blockhash(
&funder.pubkey(),
sol_to_lamports(1.0),
&latest_blockhash,
)
.unwrap();
rpc_client
.confirm_transaction_with_spinner(
&signature,
&latest_blockhash,
CommitmentConfig::confirmed(),
)
.unwrap();
// Create Mint
let spl_mint_keypair = Keypair::new();
let spl_mint_len = Mint::get_packed_len();
let spl_mint_rent = rpc_client
.get_minimum_balance_for_rent_exemption(spl_mint_len)
.unwrap();
let transaction = Transaction::new_signed_with_payer(
&[
system_instruction::create_account(
&funder.pubkey(),
&spl_mint_keypair.pubkey(),
spl_mint_rent,
spl_mint_len as u64,
&inline_spl_token::id(),
),
spl_token_instruction(
spl_token::instruction::initialize_mint(
&spl_token::id(),
&spl_token_pubkey(&spl_mint_keypair.pubkey()),
&spl_token_pubkey(&spl_mint_keypair.pubkey()),
None,
2,
)
.unwrap(),
),
],
Some(&funder.pubkey()),
&[&funder, &spl_mint_keypair],
latest_blockhash,
);
let _sig = rpc_client
.send_and_confirm_transaction(&transaction)
.unwrap();
let account_len = Account::get_packed_len();
let minimum_balance = rpc_client
.get_minimum_balance_for_rent_exemption(account_len)
.unwrap();
let iterations = 5;
let batch_size = 100;
let close_nth_batch = 0;
let num_instructions = 4;
let mut start = Measure::start("total accounts run");
let keypair0 = Keypair::new();
let keypair1 = Keypair::new();
let keypair2 = Keypair::new();
run_accounts_bench(
test_validator
.rpc_url()
.replace("http://", "")
.parse()
.unwrap(),
faucet_addr,
&[&keypair0, &keypair1, &keypair2],
iterations,
Some(account_len as u64),
batch_size,
close_nth_batch,
Some(minimum_balance),
num_instructions,
Some(spl_mint_keypair.pubkey()),
true,
);
start.stop();
info!("{}", start);
}
}


@@ -1,19 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-interface"
description = "The Solana AccountsDb plugin interface."
version = "1.9.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-accountsdb-plugin-interface"
[dependencies]
log = "0.4.11"
thiserror = "1.0.30"
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,20 +0,0 @@
<p align="center">
<a href="https://solana.com">
<img alt="Solana" src="https://i.imgur.com/IKyzQ6T.png" width="250" />
</a>
</p>
# Solana AccountsDb Plugin Interface
This crate enables an AccountsDb plugin to be plugged into the Solana Validator runtime to take actions
at the time of each account update; for example, saving the account state to an external database. The plugin must implement the `AccountsDbPlugin` trait. Please see `accountsdb_plugin_interface.rs` for the interface definition.
The plugin should produce a `cdylib` dynamic library, which must expose a `C` function `_create_plugin()` that
instantiates the implementation of the interface.
The `solana-accountsdb-plugin-postgres` crate provides an example of how to create a plugin that saves the account data into an
external PostgreSQL database.
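As a minimal sketch of what that contract looks like (illustrative only: the trait name and defaults follow the interface definition shown below, while the exact signature of `_create_plugin()` is an assumption, not taken from this page):

use solana_accountsdb_plugin_interface::accountsdb_plugin_interface::AccountsDbPlugin;

#[derive(Debug)]
struct NoopPlugin;

impl AccountsDbPlugin for NoopPlugin {
    fn name(&self) -> &'static str {
        "noop-plugin"
    }
    // update_account, update_slot_status, notify_transaction, and the other
    // callbacks keep their default implementations from the trait.
}

/// Exported constructor the validator looks up in the cdylib (signature assumed).
#[no_mangle]
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
    // Hand ownership of a boxed trait object to the validator.
    let plugin: Box<dyn AccountsDbPlugin> = Box::new(NoopPlugin);
    Box::into_raw(plugin)
}

Building this with `crate-type = ["cdylib"]` yields a shared library whose path goes in the "libpath" field of the plugin's JSON config file, as described in the interface definition below.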
More information about Solana is available in the [Solana documentation](https://docs.solana.com/).
Still have questions? Ask us on [Discord](https://discordapp.com/invite/pquxPsq)


@@ -1,189 +0,0 @@
/// The interface for AccountsDb plugins. A plugin must implement
/// the AccountsDbPlugin trait to work with the runtime.
/// In addition, the dynamic library must export a "C" function _create_plugin which
/// creates the implementation of the plugin.
use {
solana_sdk::{signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
std::{any::Any, error, io},
thiserror::Error,
};
impl Eq for ReplicaAccountInfo<'_> {}
#[derive(Clone, PartialEq, Debug)]
/// Information about an account being updated
pub struct ReplicaAccountInfo<'a> {
/// The Pubkey for the account
pub pubkey: &'a [u8],
/// The lamports for the account
pub lamports: u64,
/// The Pubkey of the owner program account
pub owner: &'a [u8],
/// This account's data contains a loaded program (and is now read-only)
pub executable: bool,
/// The epoch at which this account will next owe rent
pub rent_epoch: u64,
/// The data held in this account.
pub data: &'a [u8],
/// A global monotonically increasing atomic number, which can be used
/// to tell the order of the account update. For example, when an
/// account is updated in the same slot multiple times, the update
/// with higher write_version should supersede the one with lower
/// write_version.
pub write_version: u64,
}
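// Illustrative only, not part of this file: when the same account is updated
// more than once in a slot, a consumer keeps the update with the higher
// write_version, e.g.:
//
//     if incoming.write_version > stored.write_version {
//         stored = incoming;
//     }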
/// A wrapper to future-proof ReplicaAccountInfo handling.
/// If there were a change to the structure of ReplicaAccountInfo,
there would be a new enum entry for the newer version, forcing
/// plugin implementations to handle the change.
pub enum ReplicaAccountInfoVersions<'a> {
V0_0_1(&'a ReplicaAccountInfo<'a>),
}
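// Illustrative only, not part of this file: plugins match the wrapper
// exhaustively, so a hypothetical future V0_0_2 variant would fail to
// compile until the plugin handles it:
//
//     match account {
//         ReplicaAccountInfoVersions::V0_0_1(info) => {
//             // read info.pubkey, info.lamports, info.data, ...
//         }
//     }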
#[derive(Clone, Debug)]
pub struct ReplicaTransactionInfo<'a> {
pub signature: &'a Signature,
pub is_vote: bool,
pub transaction: &'a SanitizedTransaction,
pub transaction_status_meta: &'a TransactionStatusMeta,
}
pub enum ReplicaTransactionInfoVersions<'a> {
V0_0_1(&'a ReplicaTransactionInfo<'a>),
}
/// Errors returned by plugin calls
#[derive(Error, Debug)]
pub enum AccountsDbPluginError {
/// Error opening the configuration file; for example, when the file
/// is not found or when the validator process has no permission to read it.
#[error("Error opening config file. Error detail: ({0}).")]
ConfigFileOpenError(#[from] io::Error),
/// Error in reading the content of the config file or the content
/// is not in the expected format.
#[error("Error reading config file. Error message: ({msg})")]
ConfigFileReadError { msg: String },
/// Error when updating the account.
#[error("Error updating account. Error message: ({msg})")]
AccountsUpdateError { msg: String },
/// Error when updating the slot status
#[error("Error updating slot status. Error message: ({msg})")]
SlotStatusUpdateError { msg: String },
/// Any custom error defined by the plugin.
#[error("Plugin-defined custom error. Error message: ({0})")]
Custom(Box<dyn error::Error + Send + Sync>),
}
/// The current status of a slot
#[derive(Debug, Clone)]
pub enum SlotStatus {
/// The highest slot of the heaviest fork processed by the node. Ledger state at this slot is
/// not derived from a confirmed or finalized block, but if multiple forks are present, is from
/// the fork the validator believes is most likely to finalize.
Processed,
/// The highest slot having reached max vote lockout.
Rooted,
/// The highest slot that has been voted on by a supermajority of the cluster, i.e. is confirmed.
Confirmed,
}
impl SlotStatus {
pub fn as_str(&self) -> &'static str {
match self {
SlotStatus::Confirmed => "confirmed",
SlotStatus::Processed => "processed",
SlotStatus::Rooted => "rooted",
}
}
}
pub type Result<T> = std::result::Result<T, AccountsDbPluginError>;
/// Defines an AccountsDb plugin, to stream data from the runtime.
/// AccountsDb plugins must describe desired behavior for load and unload,
/// as well as how they will handle streamed data.
pub trait AccountsDbPlugin: Any + Send + Sync + std::fmt::Debug {
fn name(&self) -> &'static str;
/// The callback called when a plugin is loaded by the system,
/// used for doing whatever initialization is required by the plugin.
/// The _config_file contains the name of the config file.
/// The config must be in JSON format and
/// include a field "libpath" indicating the full path
/// name of the shared library implementing this interface.
fn on_load(&mut self, _config_file: &str) -> Result<()> {
Ok(())
}
/// The callback called right before a plugin is unloaded by the system
/// Used for doing cleanup before unload.
fn on_unload(&mut self) {}
/// Called when an account is updated at a slot.
/// When `is_startup` is true, it indicates the account is loaded from
/// snapshots when the validator starts up. When `is_startup` is false,
/// the account is updated during transaction processing.
#[allow(unused_variables)]
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
Ok(())
}
/// Called when all accounts have been notified during startup.
fn notify_end_of_startup(&mut self) -> Result<()> {
Ok(())
}
/// Called when a slot status is updated
#[allow(unused_variables)]
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
Ok(())
}
/// Called when a transaction is updated at a slot.
#[allow(unused_variables)]
fn notify_transaction(
&mut self,
transaction: ReplicaTransactionInfoVersions,
slot: u64,
) -> Result<()> {
Ok(())
}
/// Check if the plugin is interested in account data
/// Default is true -- if the plugin is not interested in
/// account data, please return false.
fn account_data_notifications_enabled(&self) -> bool {
true
}
/// Check if the plugin is interested in transaction data
/// Default is false -- if the plugin is not interested in
/// transaction data, please return false.
fn transaction_notifications_enabled(&self) -> bool {
false
}
}


@@ -1 +0,0 @@
pub mod accountsdb_plugin_interface;


@@ -1,31 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-manager"
description = "The Solana AccountsDb plugin manager."
version = "1.9.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[dependencies]
bs58 = "0.4.0"
crossbeam-channel = "0.5"
libloading = "0.7.2"
log = "0.4.11"
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.2" }
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-measure = { path = "../measure", version = "=1.9.2" }
solana-metrics = { path = "../metrics", version = "=1.9.2" }
solana-rpc = { path = "../rpc", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.2" }
thiserror = "1.0.30"
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,180 +0,0 @@
/// Module responsible for notifying plugins of account updates
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
ReplicaAccountInfo, ReplicaAccountInfoVersions,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_runtime::{
accounts_update_notifier_interface::AccountsUpdateNotifierInterface,
append_vec::{StoredAccountMeta, StoredMeta},
},
solana_sdk::{
account::{AccountSharedData, ReadableAccount},
clock::Slot,
},
std::sync::{Arc, RwLock},
};
#[derive(Debug)]
pub(crate) struct AccountsUpdateNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl AccountsUpdateNotifierInterface for AccountsUpdateNotifierImpl {
fn notify_account_update(&self, slot: Slot, meta: &StoredMeta, account: &AccountSharedData) {
if let Some(account_info) = self.accountinfo_from_shared_account_data(meta, account) {
self.notify_plugins_of_account_update(account_info, slot, false);
}
}
fn notify_account_restore_from_snapshot(&self, slot: Slot, account: &StoredAccountMeta) {
let mut measure_all = Measure::start("accountsdb-plugin-notify-account-restore-all");
let mut measure_copy = Measure::start("accountsdb-plugin-copy-stored-account-info");
let account = self.accountinfo_from_stored_account_meta(account);
measure_copy.stop();
inc_new_counter_debug!(
"accountsdb-plugin-copy-stored-account-info-us",
measure_copy.as_us() as usize,
100000,
100000
);
if let Some(account_info) = account {
self.notify_plugins_of_account_update(account_info, slot, true);
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify-account-restore-all-us",
measure_all.as_us() as usize,
100000,
100000
);
}
fn notify_end_of_restore_from_snapshot(&self) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-end-of-restore-from-snapshot");
match plugin.notify_end_of_startup() {
Err(err) => {
error!(
"Failed to notify the end of restore from snapshot, error: {} to plugin {}",
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully notified the end of restore from snapshot to plugin {}",
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-end-of-restore-from-snapshot",
measure.as_us() as usize
);
}
}
}
impl AccountsUpdateNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
AccountsUpdateNotifierImpl { plugin_manager }
}
fn accountinfo_from_shared_account_data<'a>(
&self,
meta: &'a StoredMeta,
account: &'a AccountSharedData,
) -> Option<ReplicaAccountInfo<'a>> {
Some(ReplicaAccountInfo {
pubkey: meta.pubkey.as_ref(),
lamports: account.lamports(),
owner: account.owner().as_ref(),
executable: account.executable(),
rent_epoch: account.rent_epoch(),
data: account.data(),
write_version: meta.write_version,
})
}
fn accountinfo_from_stored_account_meta<'a>(
&self,
stored_account_meta: &'a StoredAccountMeta,
) -> Option<ReplicaAccountInfo<'a>> {
Some(ReplicaAccountInfo {
pubkey: stored_account_meta.meta.pubkey.as_ref(),
lamports: stored_account_meta.account_meta.lamports,
owner: stored_account_meta.account_meta.owner.as_ref(),
executable: stored_account_meta.account_meta.executable,
rent_epoch: stored_account_meta.account_meta.rent_epoch,
data: stored_account_meta.data,
write_version: stored_account_meta.meta.write_version,
})
}
fn notify_plugins_of_account_update(
&self,
account: ReplicaAccountInfo,
slot: Slot,
is_startup: bool,
) {
let mut measure2 = Measure::start("accountsdb-plugin-notify_plugins_of_account_update");
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-account");
match plugin.update_account(
ReplicaAccountInfoVersions::V0_0_1(&account),
slot,
is_startup,
) {
Err(err) => {
error!(
"Failed to update account {} at slot {}, error: {} to plugin {}",
bs58::encode(account.pubkey).into_string(),
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated account {} at slot {} to plugin {}",
bs58::encode(account.pubkey).into_string(),
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-account-us",
measure.as_us() as usize,
100000,
100000
);
}
measure2.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify_plugins_of_account_update-us",
measure2.as_us() as usize,
100000,
100000
);
}
}


@@ -1,75 +0,0 @@
/// Managing the AccountsDb plugins
use {
libloading::{Library, Symbol},
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::AccountsDbPlugin,
std::error::Error,
};
#[derive(Default, Debug)]
pub struct AccountsDbPluginManager {
pub plugins: Vec<Box<dyn AccountsDbPlugin>>,
libs: Vec<Library>,
}
impl AccountsDbPluginManager {
pub fn new() -> Self {
AccountsDbPluginManager {
plugins: Vec::default(),
libs: Vec::default(),
}
}
/// # Safety
///
/// This function loads the dynamically linked library specified by the path. The library
/// must perform any necessary initialization.
pub unsafe fn load_plugin(
&mut self,
libpath: &str,
config_file: &str,
) -> Result<(), Box<dyn Error>> {
type PluginConstructor = unsafe fn() -> *mut dyn AccountsDbPlugin;
let lib = Library::new(libpath)?;
let constructor: Symbol<PluginConstructor> = lib.get(b"_create_plugin")?;
let plugin_raw = constructor();
let mut plugin = Box::from_raw(plugin_raw);
plugin.on_load(config_file)?;
self.plugins.push(plugin);
self.libs.push(lib);
Ok(())
}
/// Unload all plugins and loaded plugin libraries, making sure to fire
/// their `on_unload()` methods so they can do any necessary cleanup.
pub fn unload(&mut self) {
for mut plugin in self.plugins.drain(..) {
info!("Unloading plugin for {:?}", plugin.name());
plugin.on_unload();
}
for lib in self.libs.drain(..) {
drop(lib);
}
}
/// Check if there is any plugin interested in account data
pub fn account_data_notifications_enabled(&self) -> bool {
for plugin in &self.plugins {
if plugin.account_data_notifications_enabled() {
return true;
}
}
false
}
/// Check if there is any plugin interested in transaction data
pub fn transaction_notifications_enabled(&self) -> bool {
for plugin in &self.plugins {
if plugin.transaction_notifications_enabled() {
return true;
}
}
false
}
}
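
A minimal usage sketch of the manager, assuming a plugin library built as in the trait example above; both paths are illustrative:

```rust
use solana_accountsdb_plugin_manager::accountsdb_plugin_manager::AccountsDbPluginManager;

fn load_example() -> Result<(), Box<dyn std::error::Error>> {
    let mut manager = AccountsDbPluginManager::new();
    // Safety: the library must export `_create_plugin` and perform its own
    // initialization, per the contract documented on `load_plugin`.
    unsafe {
        manager.load_plugin(
            "/path/to/libexample_plugin.so", // illustrative library path
            "/path/to/plugin-config.json",   // illustrative config path
        )?;
    }
    assert!(!manager.plugins.is_empty());
    manager.unload();
    Ok(())
}
```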


@@ -1,196 +0,0 @@
use {
crate::{
accounts_update_notifier::AccountsUpdateNotifierImpl,
accountsdb_plugin_manager::AccountsDbPluginManager,
slot_status_notifier::SlotStatusNotifierImpl, slot_status_observer::SlotStatusObserver,
transaction_notifier::TransactionNotifierImpl,
},
crossbeam_channel::Receiver,
log::*,
serde_json,
solana_rpc::{
optimistically_confirmed_bank_tracker::BankNotification,
transaction_notifier_interface::TransactionNotifierLock,
},
solana_runtime::accounts_update_notifier_interface::AccountsUpdateNotifier,
std::{
fs::File,
io::Read,
path::{Path, PathBuf},
sync::{Arc, RwLock},
thread,
},
thiserror::Error,
};
#[derive(Error, Debug)]
pub enum AccountsdbPluginServiceError {
#[error("Cannot open the the plugin config file")]
CannotOpenConfigFile(String),
#[error("Cannot read the the plugin config file")]
CannotReadConfigFile(String),
#[error("The config file is not in a valid Json format")]
InvalidConfigFileFormat(String),
#[error("Plugin library path is not specified in the config file")]
LibPathNotSet,
#[error("Invalid plugin path")]
InvalidPluginPath,
#[error("Cannot load plugin shared library")]
PluginLoadError(String),
}
/// The service managing the AccountsDb plugin workflow.
pub struct AccountsDbPluginService {
slot_status_observer: Option<SlotStatusObserver>,
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
accounts_update_notifier: Option<AccountsUpdateNotifier>,
transaction_notifier: Option<TransactionNotifierLock>,
}
impl AccountsDbPluginService {
/// Creates and returns the AccountsDbPluginService.
/// # Arguments
/// * `confirmed_bank_receiver` - The receiver for confirmed bank notifications
/// * `accountsdb_plugin_config_files` - The config file paths for the plugins. Each
/// config file controls the plugin responsible
/// for transporting the data to an external data store. It is defined in JSON format.
/// The `libpath` field should point to the full path of the dynamic shared library
/// (.so file) to be loaded. The shared library must implement the `AccountsDbPlugin`
/// trait, and it shall export a `C` function `_create_plugin` which
/// creates the implementation of `AccountsDbPlugin` and returns it to the caller.
/// The definition of the remaining JSON fields is up to the concrete plugin implementation;
/// they are usually used to configure the connection information for the external data store.
pub fn new(
confirmed_bank_receiver: Receiver<BankNotification>,
accountsdb_plugin_config_files: &[PathBuf],
) -> Result<Self, AccountsdbPluginServiceError> {
info!(
"Starting AccountsDbPluginService from config files: {:?}",
accountsdb_plugin_config_files
);
let mut plugin_manager = AccountsDbPluginManager::new();
for accountsdb_plugin_config_file in accountsdb_plugin_config_files {
Self::load_plugin(&mut plugin_manager, accountsdb_plugin_config_file)?;
}
let account_data_notifications_enabled =
plugin_manager.account_data_notifications_enabled();
let transaction_notifications_enabled = plugin_manager.transaction_notifications_enabled();
let plugin_manager = Arc::new(RwLock::new(plugin_manager));
let accounts_update_notifier: Option<AccountsUpdateNotifier> =
if account_data_notifications_enabled {
let accounts_update_notifier =
AccountsUpdateNotifierImpl::new(plugin_manager.clone());
Some(Arc::new(RwLock::new(accounts_update_notifier)))
} else {
None
};
let transaction_notifier: Option<TransactionNotifierLock> =
if transaction_notifications_enabled {
let transaction_notifier = TransactionNotifierImpl::new(plugin_manager.clone());
Some(Arc::new(RwLock::new(transaction_notifier)))
} else {
None
};
let slot_status_observer =
if account_data_notifications_enabled || transaction_notifications_enabled {
let slot_status_notifier = SlotStatusNotifierImpl::new(plugin_manager.clone());
let slot_status_notifier = Arc::new(RwLock::new(slot_status_notifier));
Some(SlotStatusObserver::new(
confirmed_bank_receiver,
slot_status_notifier,
))
} else {
None
};
info!("Started AccountsDbPluginService");
Ok(AccountsDbPluginService {
slot_status_observer,
plugin_manager,
accounts_update_notifier,
transaction_notifier,
})
}
fn load_plugin(
plugin_manager: &mut AccountsDbPluginManager,
accountsdb_plugin_config_file: &Path,
) -> Result<(), AccountsdbPluginServiceError> {
let mut file = match File::open(accountsdb_plugin_config_file) {
Ok(file) => file,
Err(err) => {
return Err(AccountsdbPluginServiceError::CannotOpenConfigFile(format!(
"Failed to open the plugin config file {:?}, error: {:?}",
accountsdb_plugin_config_file, err
)));
}
};
let mut contents = String::new();
if let Err(err) = file.read_to_string(&mut contents) {
return Err(AccountsdbPluginServiceError::CannotReadConfigFile(format!(
"Failed to read the plugin config file {:?}, error: {:?}",
accountsdb_plugin_config_file, err
)));
}
let result: serde_json::Value = match serde_json::from_str(&contents) {
Ok(value) => value,
Err(err) => {
return Err(AccountsdbPluginServiceError::InvalidConfigFileFormat(
format!(
"The config file {:?} is not in a valid Json format, error: {:?}",
accountsdb_plugin_config_file, err
),
));
}
};
let libpath = result["libpath"]
.as_str()
.ok_or(AccountsdbPluginServiceError::LibPathNotSet)?;
let config_file = accountsdb_plugin_config_file
.as_os_str()
.to_str()
.ok_or(AccountsdbPluginServiceError::InvalidPluginPath)?;
unsafe {
let result = plugin_manager.load_plugin(libpath, config_file);
if let Err(err) = result {
let msg = format!(
"Failed to load the plugin library: {:?}, error: {:?}",
libpath, err
);
return Err(AccountsdbPluginServiceError::PluginLoadError(msg));
}
}
Ok(())
}
pub fn get_accounts_update_notifier(&self) -> Option<AccountsUpdateNotifier> {
self.accounts_update_notifier.clone()
}
pub fn get_transaction_notifier(&self) -> Option<TransactionNotifierLock> {
self.transaction_notifier.clone()
}
pub fn join(self) -> thread::Result<()> {
if let Some(mut slot_status_observer) = self.slot_status_observer {
slot_status_observer.join()?;
}
self.plugin_manager.write().unwrap().unload();
Ok(())
}
}
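
Putting the pieces together, a hedged sketch of how a caller might wire the service up; the channel wiring and config file name are illustrative, while the API is exactly the one shown above:

```rust
use {
    crossbeam_channel::unbounded,
    solana_accountsdb_plugin_manager::accountsdb_plugin_service::AccountsDbPluginService,
    std::path::PathBuf,
};

fn start_plugin_service() -> Result<(), Box<dyn std::error::Error>> {
    // In the validator the sender side is driven by the bank tracker;
    // here it is created only so the receiver has a live channel.
    let (_bank_notification_sender, bank_notification_receiver) = unbounded();
    let service = AccountsDbPluginService::new(
        bank_notification_receiver,
        &[PathBuf::from("accountsdb-plugin-config.json")], // illustrative path
    )?;
    // These are handed to the runtime and TransactionStatusService when present.
    let _accounts_notifier = service.get_accounts_update_notifier();
    let _transaction_notifier = service.get_transaction_notifier();
    // On shutdown, join the observer thread and unload the plugins.
    service.join().expect("plugin service thread panicked");
    Ok(())
}
```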


@@ -1,6 +0,0 @@
pub mod accounts_update_notifier;
pub mod accountsdb_plugin_manager;
pub mod accountsdb_plugin_service;
pub mod slot_status_notifier;
pub mod slot_status_observer;
pub mod transaction_notifier;


@@ -1,81 +0,0 @@
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::SlotStatus,
solana_measure::measure::Measure,
solana_metrics::*,
solana_sdk::clock::Slot,
std::sync::{Arc, RwLock},
};
pub trait SlotStatusNotifierInterface {
/// Notified when a slot is optimistically confirmed
fn notify_slot_confirmed(&self, slot: Slot, parent: Option<Slot>);
/// Notified when a slot is marked frozen.
fn notify_slot_processed(&self, slot: Slot, parent: Option<Slot>);
/// Notified when a slot is rooted.
fn notify_slot_rooted(&self, slot: Slot, parent: Option<Slot>);
}
pub type SlotStatusNotifier = Arc<RwLock<dyn SlotStatusNotifierInterface + Sync + Send>>;
pub struct SlotStatusNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl SlotStatusNotifierInterface for SlotStatusNotifierImpl {
fn notify_slot_confirmed(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Confirmed);
}
fn notify_slot_processed(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Processed);
}
fn notify_slot_rooted(&self, slot: Slot, parent: Option<Slot>) {
self.notify_slot_status(slot, parent, SlotStatus::Rooted);
}
}
impl SlotStatusNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
Self { plugin_manager }
}
pub fn notify_slot_status(&self, slot: Slot, parent: Option<Slot>, slot_status: SlotStatus) {
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
let mut measure = Measure::start("accountsdb-plugin-update-slot");
match plugin.update_slot_status(slot, parent, slot_status.clone()) {
Err(err) => {
error!(
"Failed to update slot status at slot {}, error: {} to plugin {}",
slot,
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully updated slot status at slot {} to plugin {}",
slot,
plugin.name()
);
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-update-slot-us",
measure.as_us() as usize,
1000,
1000
);
}
}
}


@@ -1,80 +0,0 @@
use {
crate::slot_status_notifier::SlotStatusNotifier,
crossbeam_channel::Receiver,
solana_rpc::optimistically_confirmed_bank_tracker::BankNotification,
std::{
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, Builder, JoinHandle},
},
};
#[derive(Debug)]
pub(crate) struct SlotStatusObserver {
bank_notification_receiver_service: Option<JoinHandle<()>>,
exit_updated_slot_server: Arc<AtomicBool>,
}
impl SlotStatusObserver {
pub fn new(
bank_notification_receiver: Receiver<BankNotification>,
slot_status_notifier: SlotStatusNotifier,
) -> Self {
let exit_updated_slot_server = Arc::new(AtomicBool::new(false));
Self {
bank_notification_receiver_service: Some(Self::run_bank_notification_receiver(
bank_notification_receiver,
exit_updated_slot_server.clone(),
slot_status_notifier,
)),
exit_updated_slot_server,
}
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_updated_slot_server.store(true, Ordering::Relaxed);
self.bank_notification_receiver_service
.take()
.map(JoinHandle::join)
.unwrap()
}
fn run_bank_notification_receiver(
bank_notification_receiver: Receiver<BankNotification>,
exit: Arc<AtomicBool>,
slot_status_notifier: SlotStatusNotifier,
) -> JoinHandle<()> {
Builder::new()
.name("bank_notification_receiver".to_string())
.spawn(move || {
while !exit.load(Ordering::Relaxed) {
if let Ok(slot) = bank_notification_receiver.recv() {
match slot {
BankNotification::OptimisticallyConfirmed(slot) => {
slot_status_notifier
.read()
.unwrap()
.notify_slot_confirmed(slot, None);
}
BankNotification::Frozen(bank) => {
slot_status_notifier
.read()
.unwrap()
.notify_slot_processed(bank.slot(), Some(bank.parent_slot()));
}
BankNotification::Root(bank) => {
slot_status_notifier
.read()
.unwrap()
.notify_slot_rooted(bank.slot(), Some(bank.parent_slot()));
}
}
}
}
})
.unwrap()
}
}


@@ -1,93 +0,0 @@
/// Module responsible for notifying plugins of transactions
use {
crate::accountsdb_plugin_manager::AccountsDbPluginManager,
log::*,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
ReplicaTransactionInfo, ReplicaTransactionInfoVersions,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_rpc::transaction_notifier_interface::TransactionNotifier,
solana_runtime::bank,
solana_sdk::{clock::Slot, signature::Signature, transaction::SanitizedTransaction},
solana_transaction_status::TransactionStatusMeta,
std::sync::{Arc, RwLock},
};
/// This implementation of TransactionNotifier is passed to the rpc's TransactionStatusService
/// at validator startup. TransactionStatusService invokes the notify_transaction method
/// for new transactions. The implementation in turn invokes notify_transaction on each
/// plugin with transaction notifications enabled, as managed by the AccountsDbPluginManager.
pub(crate) struct TransactionNotifierImpl {
plugin_manager: Arc<RwLock<AccountsDbPluginManager>>,
}
impl TransactionNotifier for TransactionNotifierImpl {
fn notify_transaction(
&self,
slot: Slot,
signature: &Signature,
transaction_status_meta: &TransactionStatusMeta,
transaction: &SanitizedTransaction,
) {
let mut measure = Measure::start("accountsdb-plugin-notify_plugins_of_transaction_info");
let transaction_log_info =
Self::build_replica_transaction_info(signature, transaction_status_meta, transaction);
let mut plugin_manager = self.plugin_manager.write().unwrap();
if plugin_manager.plugins.is_empty() {
return;
}
for plugin in plugin_manager.plugins.iter_mut() {
if !plugin.transaction_notifications_enabled() {
continue;
}
match plugin.notify_transaction(
ReplicaTransactionInfoVersions::V0_0_1(&transaction_log_info),
slot,
) {
Err(err) => {
error!(
"Failed to notify transaction, error: ({}) to plugin {}",
err,
plugin.name()
)
}
Ok(_) => {
trace!(
"Successfully notified transaction to plugin {}",
plugin.name()
);
}
}
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-notify_plugins_of_transaction_info-us",
measure.as_us() as usize,
10000,
10000
);
}
}
impl TransactionNotifierImpl {
pub fn new(plugin_manager: Arc<RwLock<AccountsDbPluginManager>>) -> Self {
Self { plugin_manager }
}
fn build_replica_transaction_info<'a>(
signature: &'a Signature,
transaction_status_meta: &'a TransactionStatusMeta,
transaction: &'a SanitizedTransaction,
) -> ReplicaTransactionInfo<'a> {
ReplicaTransactionInfo {
signature,
is_vote: bank::is_simple_vote_transaction(transaction),
transaction,
transaction_status_meta,
}
}
}


@@ -1,39 +0,0 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
name = "solana-accountsdb-plugin-postgres"
description = "The Solana AccountsDb plugin for PostgreSQL database."
version = "1.9.2"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-validator"
[lib]
crate-type = ["cdylib", "rlib"]
[dependencies]
bs58 = "0.4.0"
chrono = { version = "0.4.11", features = ["serde"] }
crossbeam-channel = "0.5"
log = "0.4.14"
postgres = { version = "0.19.2", features = ["with-chrono-0_4"] }
postgres-types = { version = "0.2.2", features = ["derive"] }
serde = "1.0.130"
serde_derive = "1.0.103"
serde_json = "1.0.72"
solana-accountsdb-plugin-interface = { path = "../accountsdb-plugin-interface", version = "=1.9.2" }
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-measure = { path = "../measure", version = "=1.9.2" }
solana-metrics = { path = "../metrics", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-transaction-status = { path = "../transaction-status", version = "=1.9.2" }
thiserror = "1.0.30"
tokio-postgres = "0.7.4"
[dev-dependencies]
solana-account-decoder = { path = "../account-decoder", version = "=1.9.2" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,5 +0,0 @@
This is an example implementation of the AccountsDb plugin for a PostgreSQL database.
Please see `src/accountsdb_plugin_postgres.rs` for the format of the plugin's configuration file.
To create the schema objects for the database, please use `scripts/create_schema.sql`.
`scripts/drop_schema.sql` can be used to tear down the schema objects.


@@ -1,184 +0,0 @@
/**
* This plugin implementation for PostgreSQL requires the following tables
*/
-- The table storing accounts
CREATE TABLE account (
pubkey BYTEA PRIMARY KEY,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- The table storing slot information
CREATE TABLE slot (
slot BIGINT PRIMARY KEY,
parent BIGINT,
status VARCHAR(16) NOT NULL,
updated_on TIMESTAMP NOT NULL
);
-- Types for Transactions
Create TYPE "TransactionErrorCode" AS ENUM (
'AccountInUse',
'AccountLoadedTwice',
'AccountNotFound',
'ProgramAccountNotFound',
'InsufficientFundsForFee',
'InvalidAccountForFee',
'AlreadyProcessed',
'BlockhashNotFound',
'InstructionError',
'CallChainTooDeep',
'MissingSignatureForFee',
'InvalidAccountIndex',
'SignatureFailure',
'InvalidProgramForExecution',
'SanitizeFailure',
'ClusterMaintenance',
'AccountBorrowOutstanding',
'WouldExceedMaxAccountCostLimit',
'WouldExceedMaxBlockCostLimit',
'UnsupportedVersion',
'InvalidWritableAccount'
);
CREATE TYPE "TransactionError" AS (
error_code "TransactionErrorCode",
error_detail VARCHAR(256)
);
CREATE TYPE "CompiledInstruction" AS (
program_id_index SMALLINT,
accounts SMALLINT[],
data BYTEA
);
CREATE TYPE "InnerInstructions" AS (
index SMALLINT,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionTokenBalance" AS (
account_index SMALLINT,
mint VARCHAR(44),
ui_token_amount DOUBLE PRECISION,
owner VARCHAR(44)
);
Create TYPE "RewardType" AS ENUM (
'Fee',
'Rent',
'Staking',
'Voting'
);
CREATE TYPE "Reward" AS (
pubkey VARCHAR(44),
lamports BIGINT,
post_balance BIGINT,
reward_type "RewardType",
commission SMALLINT
);
CREATE TYPE "TransactionStatusMeta" AS (
error "TransactionError",
fee BIGINT,
pre_balances BIGINT[],
post_balances BIGINT[],
inner_instructions "InnerInstructions"[],
log_messages TEXT[],
pre_token_balances "TransactionTokenBalance"[],
post_token_balances "TransactionTokenBalance"[],
rewards "Reward"[]
);
CREATE TYPE "TransactionMessageHeader" AS (
num_required_signatures SMALLINT,
num_readonly_signed_accounts SMALLINT,
num_readonly_unsigned_accounts SMALLINT
);
CREATE TYPE "TransactionMessage" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[]
);
CREATE TYPE "TransactionMessageAddressTableLookup" AS (
account_key BYTEA[],
writable_indexes SMALLINT[],
readonly_indexes SMALLINT[]
);
CREATE TYPE "TransactionMessageV0" AS (
header "TransactionMessageHeader",
account_keys BYTEA[],
recent_blockhash BYTEA,
instructions "CompiledInstruction"[],
address_table_lookups "TransactionMessageAddressTableLookup"[]
);
CREATE TYPE "LoadedAddresses" AS (
writable BYTEA[],
readonly BYTEA[]
);
CREATE TYPE "LoadedMessageV0" AS (
message "TransactionMessageV0",
loaded_addresses "LoadedAddresses"
);
-- The table storing transactions
CREATE TABLE transaction (
slot BIGINT NOT NULL,
signature BYTEA NOT NULL,
is_vote BOOL NOT NULL,
message_type SMALLINT, -- 0: legacy, 1: v0 message
legacy_message "TransactionMessage",
v0_loaded_message "LoadedMessageV0",
signatures BYTEA[],
message_hash BYTEA,
meta "TransactionStatusMeta",
updated_on TIMESTAMP NOT NULL,
CONSTRAINT transaction_pk PRIMARY KEY (slot, signature)
);
/**
* The following keeps historical data for accounts and is not required for the plugin to work.
*/
-- The table storing historical data for accounts
CREATE TABLE account_audit (
pubkey BYTEA,
owner BYTEA,
lamports BIGINT NOT NULL,
slot BIGINT NOT NULL,
executable BOOL NOT NULL,
rent_epoch BIGINT NOT NULL,
data BYTEA,
write_version BIGINT NOT NULL,
updated_on TIMESTAMP NOT NULL
);
CREATE INDEX account_audit_account_key ON account_audit (pubkey, write_version);
CREATE FUNCTION audit_account_update() RETURNS trigger AS $audit_account_update$
BEGIN
INSERT INTO account_audit (pubkey, owner, lamports, slot, executable, rent_epoch, data, write_version, updated_on)
VALUES (OLD.pubkey, OLD.owner, OLD.lamports, OLD.slot,
OLD.executable, OLD.rent_epoch, OLD.data, OLD.write_version, OLD.updated_on);
RETURN NEW;
END;
$audit_account_update$ LANGUAGE plpgsql;
CREATE TRIGGER account_update_trigger AFTER UPDATE OR DELETE ON account
FOR EACH ROW EXECUTE PROCEDURE audit_account_update();
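
As an illustrative consumer of this schema, here is a short Rust sketch using the `postgres` crate (already a dependency of the plugin above) to read back the latest state of one account. The connection parameters are assumptions, matching the port 5433 used in the reference server configuration below:

```rust
use postgres::{Client, NoTls};

fn fetch_account(pubkey: &[u8]) -> Result<(), postgres::Error> {
    // Connection parameters are illustrative.
    let mut client = Client::connect("host=localhost port=5433 user=solana", NoTls)?;
    // The `account` table holds one row per pubkey with the latest state.
    let row = client.query_one(
        "SELECT lamports, slot, write_version FROM account WHERE pubkey = $1",
        &[&pubkey],
    )?;
    let lamports: i64 = row.get("lamports");
    let slot: i64 = row.get("slot");
    let write_version: i64 = row.get("write_version");
    println!(
        "lamports={} slot={} write_version={}",
        lamports, slot, write_version
    );
    Ok(())
}
```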


@@ -1,25 +0,0 @@
/**
* Script for cleaning up the schema for PostgreSQL used for the AccountsDb plugin.
*/
DROP TRIGGER account_update_trigger ON account;
DROP FUNCTION audit_account_update;
DROP TABLE account_audit;
DROP TABLE account;
DROP TABLE slot;
DROP TABLE transaction;
DROP TYPE "TransactionError" CASCADE;
DROP TYPE "TransactionErrorCode" CASCADE;
DROP TYPE "LoadedMessageV0" CASCADE;
DROP TYPE "LoadedAddresses" CASCADE;
DROP TYPE "TransactionMessageV0" CASCADE;
DROP TYPE "TransactionMessage" CASCADE;
DROP TYPE "TransactionMessageHeader" CASCADE;
DROP TYPE "TransactionMessageAddressTableLookup" CASCADE;
DROP TYPE "TransactionStatusMeta" CASCADE;
DROP TYPE "RewardType" CASCADE;
DROP TYPE "Reward" CASCADE;
DROP TYPE "TransactionTokenBalance" CASCADE;
DROP TYPE "InnerInstructions" CASCADE;
DROP TYPE "CompiledInstruction" CASCADE;


@@ -1,802 +0,0 @@
# This is a reference configuration file for PostgreSQL version 14.
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()". Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: B = bytes Time units: us = microseconds
# kB = kilobytes ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# TB = terabytes h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
data_directory = '/var/lib/postgresql/14/main' # use data in another directory
# (change requires restart)
hba_file = '/etc/postgresql/14/main/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/etc/postgresql/14/main/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/14-main.pid' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
listen_addresses = '*'
port = 5433 # (change requires restart)
max_connections = 200 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - TCP settings -
# see "man tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#tcp_user_timeout = 0 # TCP_USER_TIMEOUT, in milliseconds;
# 0 selects the system default
#client_connection_check_interval = 0 # time between checks for client
# disconnection while running queries;
# 0 for never
# - Authentication -
#authentication_timeout = 1min # 1s-600s
#password_encryption = scram-sha-256 # scram-sha-256 or md5
#db_user_namespace = off
# GSSAPI using Kerberos
#krb_server_keyfile = 'FILE:${sysconfdir}/krb5.keytab'
#krb_caseins_users = off
# - SSL -
ssl = on
#ssl_ca_file = ''
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
#ssl_crl_file = ''
#ssl_crl_dir = ''
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
#ssl_prefer_server_ciphers = on
#ssl_ecdh_curve = 'prime256v1'
#ssl_min_protocol_version = 'TLSv1.2'
#ssl_max_protocol_version = ''
#ssl_dh_params_file = ''
#ssl_passphrase_command = ''
#ssl_passphrase_command_supports_reload = off
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 1GB # min 128kB
# (change requires restart)
#huge_pages = try # on, off, or try
# (change requires restart)
#huge_page_size = 0 # zero for system default
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB # min 64kB
#hash_mem_multiplier = 1.0 # 1-1000.0 multiplier on hash table work_mem
#maintenance_work_mem = 64MB # min 1MB
#autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB # min 64kB
#max_stack_depth = 2MB # min 100kB
#shared_memory_type = mmap # the default is the first option
# supported by the operating system:
# mmap
# sysv
# windows
# (change requires restart)
dynamic_shared_memory_type = posix # the default is the first option
# supported by the operating system:
# posix
# sysv
# windows
# mmap
# (change requires restart)
#min_dynamic_shared_memory = 0MB # (change requires restart)
# - Disk -
#temp_file_limit = -1 # limits per-process temp file space
# in kilobytes, or -1 for no limit
# - Kernel Resources -
#max_files_per_process = 1000 # min 64
# (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds (0 disables)
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 2 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # max buffers written/round, 0 disables
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 512kB # measured in pages, 0 disables
# - Asynchronous Behavior -
#backend_flush_after = 0 # measured in pages, 0 disables
effective_io_concurrency = 1000 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
#old_snapshot_threshold = -1 # 1min-60d; -1 disables; 0 is immediate
# (change requires restart)
#------------------------------------------------------------------------------
# WRITE-AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
wal_level = minimal # minimal, replica, or logical
# (change requires restart)
fsync = off # flush data to disk for crash safety
# (turning this off can cause
# unrecoverable data corruption)
synchronous_commit = off # synchronization level;
# off, local, remote_write, remote_apply, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux and FreeBSD)
# fsync
# fsync_writethrough
# open_sync
full_page_writes = off # recover from partial page writes
#wal_log_hints = off # also do full page writes of non-critical updates
# (change requires restart)
#wal_compression = off # enable compression of full-page writes
#wal_init_zero = on # zero-fill new WAL files
#wal_recycle = on # recycle WAL files
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#wal_writer_flush_after = 1MB # measured in pages, 0 disables
#wal_skip_threshold = 2MB
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_timeout = 5min # range 30s-1d
#checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
#checkpoint_flush_after = 256kB # measured in pages, 0 disables
#checkpoint_warning = 30s # 0 disables
max_wal_size = 1GB
min_wal_size = 80MB
# - Archiving -
#archive_mode = off # enables archiving; off, on, or always
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
# - Archive Recovery -
# These are only used in recovery mode.
#restore_command = '' # command to use to restore an archived logfile segment
# placeholders: %p = path of file to restore
# %f = file name only
# e.g. 'cp /mnt/server/archivedir/%f %p'
#archive_cleanup_command = '' # command to execute at every restartpoint
#recovery_end_command = '' # command to execute at completion of recovery
# - Recovery Target -
# Set these only when performing a targeted recovery.
#recovery_target = '' # 'immediate' to end recovery as soon as a
# consistent state is reached
# (change requires restart)
#recovery_target_name = '' # the named restore point to which recovery will proceed
# (change requires restart)
#recovery_target_time = '' # the time stamp up to which recovery will proceed
# (change requires restart)
#recovery_target_xid = '' # the transaction ID up to which recovery will proceed
# (change requires restart)
#recovery_target_lsn = '' # the WAL LSN up to which recovery will proceed
# (change requires restart)
#recovery_target_inclusive = on # Specifies whether to stop:
# just after the specified recovery target (on)
# just before the recovery target (off)
# (change requires restart)
#recovery_target_timeline = 'latest' # 'current', 'latest', or timeline ID
# (change requires restart)
#recovery_target_action = 'pause' # 'pause', 'promote', 'shutdown'
# (change requires restart)
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Servers -
# Set these on the primary and on any standby that will send replication data.
max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#max_replication_slots = 10 # max number of replication slots
# (change requires restart)
#wal_keep_size = 0 # in megabytes; 0 disables
#max_slot_wal_keep_size = -1 # in megabytes; -1 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
#track_commit_timestamp = off # collect timestamp of transaction commit
# (change requires restart)
# - Primary Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# method to choose sync standbys, number of sync standbys,
# and comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a primary server.
#primary_conninfo = '' # connection string to sending server
#primary_slot_name = '' # replication slot on sending server
#promote_trigger_file = '' # file name whose presence ends recovery
#hot_standby = on # "off" disallows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_create_temp_slot = off # create temp slot if primary_slot_name
# is not set
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from primary
# in milliseconds; 0 disables
#wal_retrieve_retry_interval = 5s # time to wait before retrying to
# retrieve WAL after a failed attempt
#recovery_min_apply_delay = 0 # minimum delay for applying changes during recovery
# - Subscribers -
# These settings are ignored on a publisher.
#max_logical_replication_workers = 4 # taken from max_worker_processes
# (change requires restart)
#max_sync_workers_per_subscription = 2 # taken from max_logical_replication_workers
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_async_append = on
#enable_bitmapscan = on
#enable_gathermerge = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_incremental_sort = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_memoize = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_parallel_append = on
#enable_parallel_hash = on
#enable_partition_pruning = on
#enable_partitionwise_join = off
#enable_partitionwise_aggregate = off
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#parallel_setup_cost = 1000.0 # same scale as above
#parallel_tuple_cost = 0.1 # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
#effective_cache_size = 4GB
#jit_above_cost = 100000 # perform JIT compilation if available
# and query more expensive than this;
# -1 disables
#jit_inline_above_cost = 500000 # inline small functions if query is
# more expensive than this; -1 disables
#jit_optimize_above_cost = 500000 # use expensive JIT optimizations if
# query is more expensive than this;
# -1 disables
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#jit = on # allow JIT compilation
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#plan_cache_mode = auto # auto, force_generic_plan or
# force_custom_plan
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#syslog_sequence_numbers = on
#syslog_split_messages = on
# This is only relevant when logging to eventlog (Windows):
# (change requires restart)
#event_source = 'PostgreSQL'
# - When to Log -
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
#log_min_duration_sample = -1 # -1 is disabled, 0 logs a sample of statements
# and their durations, > 0 logs only a sample of
# statements running at least this number
# of milliseconds;
# sample fraction is determined by log_statement_sample_rate
#log_statement_sample_rate = 1.0 # fraction of logged statements exceeding
# log_min_duration_sample to be logged;
# 1.0 logs all such statements, 0.0 never logs
#log_transaction_sample_rate = 0.0 # fraction of transactions whose statements
# are logged regardless of their duration; 1.0 logs all
# statements from all transactions, 0.0 never logs
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_autovacuum_min_duration = -1 # log autovacuum activity;
# -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%m [%p] %q%u@%d ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %b = backend type
# %p = process ID
# %P = process ID of parallel group leader
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %n = timestamp with milliseconds (as a Unix epoch)
# %Q = query ID (0 if none or not computed)
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_recovery_conflict_waits = off # log standby recovery conflict waits
# >= deadlock_timeout
#log_parameter_max_length = -1 # when logging statements, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_parameter_max_length_on_error = 0 # when logging an error, limit logged
# bind-parameter values to N bytes;
# -1 means print in full, 0 disables
#log_statement = 'none' # none, ddl, mod, all
#log_replication_commands = off
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'Etc/UTC'
#------------------------------------------------------------------------------
# PROCESS TITLE
#------------------------------------------------------------------------------
cluster_name = '14/main' # added to process titles if nonempty
# (change requires restart)
#update_process_title = on
#------------------------------------------------------------------------------
# STATISTICS
#------------------------------------------------------------------------------
# - Query and Index Statistics Collector -
#track_activities = on
#track_activity_query_size = 1024 # (change requires restart)
#track_counts = on
#track_io_timing = off
#track_wal_io_timing = off
#track_functions = none # none, pl, all
stats_temp_directory = '/var/run/postgresql/14-main.pg_stat_tmp'
# - Monitoring -
#compute_query_id = auto
#log_statement_stats = off
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_vacuum_insert_threshold = 1000 # min number of row inserts
# before vacuum; -1 disables insert
# vacuums
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2 # fraction of inserts over table
# size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#search_path = '"$user", public' # schema names
#row_security = on
#default_table_access_method = 'heap'
#default_tablespace = '' # a tablespace name, '' uses the default
#default_toast_compression = 'pglz' # 'pglz' or 'lz4'
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_table_age = 150000000
#vacuum_freeze_min_age = 50000000
#vacuum_failsafe_age = 1600000000
#vacuum_multixact_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_failsafe_age = 1600000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
#gin_pending_list_limit = 4MB
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'Etc/UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia (historical usage)
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 1 # min -15, max 3; any value >0 actually
# selects precise output mode
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C.UTF-8' # locale for system error message
# strings
lc_monetary = 'C.UTF-8' # locale for monetary formatting
lc_numeric = 'C.UTF-8' # locale for number formatting
lc_time = 'C.UTF-8' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Shared Library Preloading -
#local_preload_libraries = ''
#session_preload_libraries = ''
#shared_preload_libraries = '' # (change requires restart)
#jit_provider = 'llvmjit' # JIT library to use
# - Other Defaults -
#dynamic_library_path = '$libdir'
#extension_destdir = '' # prepend path when loading extensions
# and shared objects (added by Debian)
#gin_fuzzy_search_limit = 0
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#max_pred_locks_per_relation = -2 # negative values mean
# (max_pred_locks_per_transaction
# / -max_pred_locks_per_relation) - 1
#max_pred_locks_per_page = 2 # min 0
#------------------------------------------------------------------------------
# VERSION AND PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#data_sync_retry = off # retry or panic on failure to fsync
# data?
# (change requires restart)
#recovery_init_sync_method = fsync # fsync, syncfs (Linux 5.8+)
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf. Note that these are directives, not variable
# assignments, so they can usefully be given more than once.
include_dir = 'conf.d' # include files ending in '.conf' from
# a directory, e.g., 'conf.d'
#include_if_exists = '...' # include file only if it exists
#include = '...' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here


@@ -1,74 +0,0 @@
use {log::*, std::collections::HashSet};
#[derive(Debug)]
pub(crate) struct AccountsSelector {
pub accounts: HashSet<Vec<u8>>,
pub owners: HashSet<Vec<u8>>,
pub select_all_accounts: bool,
}
impl AccountsSelector {
pub fn default() -> Self {
AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts: true,
}
}
pub fn new(accounts: &[String], owners: &[String]) -> Self {
info!(
"Creating AccountsSelector from accounts: {:?}, owners: {:?}",
accounts, owners
);
let select_all_accounts = accounts.iter().any(|key| key == "*");
if select_all_accounts {
return AccountsSelector {
accounts: HashSet::default(),
owners: HashSet::default(),
select_all_accounts,
};
}
let accounts = accounts
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
let owners = owners
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
AccountsSelector {
accounts,
owners,
select_all_accounts,
}
}
pub fn is_account_selected(&self, account: &[u8], owner: &[u8]) -> bool {
self.select_all_accounts || self.accounts.contains(account) || self.owners.contains(owner)
}
/// Check if any account is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_accounts || !self.accounts.is_empty() || !self.owners.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_create_accounts_selector() {
AccountsSelector::new(
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
&[],
);
AccountsSelector::new(
&[],
&["9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin".to_string()],
);
}
}


@@ -1,437 +0,0 @@
use solana_measure::measure::Measure;
/// Main entry for the PostgreSQL plugin
use {
crate::{
accounts_selector::AccountsSelector,
postgres_client::{ParallelPostgresClient, PostgresClientBuilder},
transaction_selector::TransactionSelector,
},
bs58,
log::*,
serde_derive::{Deserialize, Serialize},
serde_json,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPlugin, AccountsDbPluginError, ReplicaAccountInfoVersions,
ReplicaTransactionInfoVersions, Result, SlotStatus,
},
solana_metrics::*,
std::{fs::File, io::Read},
thiserror::Error,
};
#[derive(Default)]
pub struct AccountsDbPluginPostgres {
client: Option<ParallelPostgresClient>,
accounts_selector: Option<AccountsSelector>,
transaction_selector: Option<TransactionSelector>,
}
impl std::fmt::Debug for AccountsDbPluginPostgres {
fn fmt(&self, _: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub struct AccountsDbPluginPostgresConfig {
pub host: Option<String>,
pub user: Option<String>,
pub port: Option<u16>,
pub connection_str: Option<String>,
pub threads: Option<usize>,
pub batch_size: Option<usize>,
pub panic_on_db_errors: Option<bool>,
}
#[derive(Error, Debug)]
pub enum AccountsDbPluginPostgresError {
#[error("Error connecting to the backend data store. Error message: ({msg})")]
DataStoreConnectionError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
DataSchemaError { msg: String },
#[error("Error preparing data store schema. Error message: ({msg})")]
ConfigurationError { msg: String },
}
impl AccountsDbPlugin for AccountsDbPluginPostgres {
fn name(&self) -> &'static str {
"AccountsDbPluginPostgres"
}
/// Do initialization for the PostgreSQL plugin.
///
/// # Format of the config file:
/// * The `accounts_selector` section allows the user to control account selection.
/// "accounts_selector" : {
/// "accounts" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// or:
/// "accounts_selector" = {
/// "owners" : \["pubkey-1", "pubkey-2", ..., "pubkey-m"\]
/// }
/// Accounts satisfying either the accounts condition or the owners condition will be selected.
/// When only owners is specified,
/// all accounts belonging to the owners will be streamed.
/// The accounts field supports a wildcard to select all accounts:
/// "accounts_selector" : {
/// "accounts" : \["*"\],
/// }
/// * "host", optional, specifies the PostgreSQL server.
/// * "user", optional, specifies the PostgreSQL user.
/// * "port", optional, specifies the PostgreSQL server's port.
/// * "connection_str", optional, the custom PostgreSQL connection string.
/// Please refer to https://docs.rs/postgres/0.19.2/postgres/config/struct.Config.html for the connection configuration.
/// When `connection_str` is set, the values in "host", "user" and "port" are ignored. If `connection_str` is not given,
/// `host` and `user` must be given.
/// * "threads" optional, specifies the number of worker threads for the plugin. A thread
/// maintains a PostgreSQL connection to the server. The default is '10'.
/// * "batch_size" optional, specifies the batch size of bulk insert when the AccountsDb is created
/// from restoring a snapshot. The default is '10'.
/// * "panic_on_db_errors", optional, contols if to panic when there are errors replicating data to the
/// PostgreSQL database. The default is 'false'.
/// * "transaction_selector", optional, controls if and what transaction to store. If this field is missing
/// None of the transction is stored.
/// "transaction_selector" : {
/// "mentions" : \["pubkey-1", "pubkey-2", ..., "pubkey-n"\],
/// }
/// The `mentions` field supports wildcards to select all transactions or all 'vote' transactions:
/// For example, to select all transactions:
/// "transaction_selector" : {
/// "mentions" : \["*"\],
/// }
/// To select all vote transactions:
/// "transaction_selector" : {
/// "mentions" : \["all_votes"\],
/// }
/// # Examples
///
/// {
/// "libpath": "/home/solana/target/release/libsolana_accountsdb_plugin_postgres.so",
/// "host": "host_foo",
/// "user": "solana",
/// "threads": 10,
/// "accounts_selector" : {
/// "owners" : ["9oT9R5ZyRovSVnt37QvVoBttGpNqR3J7unkb567NP8k3"]
/// }
/// }
fn on_load(&mut self, config_file: &str) -> Result<()> {
solana_logger::setup_with_default("info");
info!(
"Loading plugin {:?} from config_file {:?}",
self.name(),
config_file
);
let mut file = File::open(config_file)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let result: serde_json::Value = serde_json::from_str(&contents).unwrap();
self.accounts_selector = Some(Self::create_accounts_selector_from_config(&result));
self.transaction_selector = Some(Self::create_transaction_selector_from_config(&result));
let result: serde_json::Result<AccountsDbPluginPostgresConfig> =
serde_json::from_str(&contents);
match result {
Err(err) => {
return Err(AccountsDbPluginError::ConfigFileReadError {
msg: format!(
"The config file is not in the JSON format expected: {:?}",
err
),
})
}
Ok(config) => {
let client = PostgresClientBuilder::build_pararallel_postgres_client(&config)?;
self.client = Some(client);
}
}
Ok(())
}
fn on_unload(&mut self) {
info!("Unloading plugin: {:?}", self.name());
match &mut self.client {
None => {}
Some(client) => {
client.join().unwrap();
}
}
}
fn update_account(
&mut self,
account: ReplicaAccountInfoVersions,
slot: u64,
is_startup: bool,
) -> Result<()> {
let mut measure_all = Measure::start("accountsdb-plugin-postgres-update-account-main");
match account {
ReplicaAccountInfoVersions::V0_0_1(account) => {
let mut measure_select =
Measure::start("accountsdb-plugin-postgres-update-account-select");
if let Some(accounts_selector) = &self.accounts_selector {
if !accounts_selector.is_account_selected(account.pubkey, account.owner) {
return Ok(());
}
} else {
return Ok(());
}
measure_select.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-select-us",
measure_select.as_us() as usize,
100000,
100000
);
debug!(
"Updating account {:?} with owner {:?} at slot {:?} using account selector {:?}",
bs58::encode(account.pubkey).into_string(),
bs58::encode(account.owner).into_string(),
slot,
self.accounts_selector.as_ref().unwrap()
);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database."
.to_string(),
},
)));
}
Some(client) => {
let mut measure_update =
Measure::start("accountsdb-plugin-postgres-update-account-client");
let result = { client.update_account(account, slot, is_startup) };
measure_update.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-client-us",
measure_update.as_us() as usize,
100000,
100000
);
if let Err(err) = result {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!("Failed to persist the update of account to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
}
}
measure_all.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-main-us",
measure_all.as_us() as usize,
100000,
100000
);
Ok(())
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<()> {
info!("Updating slot {:?} at with status {:?}", slot, status);
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.update_slot_status(slot, parent, status);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the update of slot to the PostgreSQL database. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<()> {
info!("Notifying the end of startup for accounts notifications");
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => {
let result = client.notify_end_of_startup();
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to notify the end of startup for accounts notifications. Error: {:?}", err)
});
}
}
}
Ok(())
}
fn notify_transaction(
&mut self,
transaction_info: ReplicaTransactionInfoVersions,
slot: u64,
) -> Result<()> {
match &mut self.client {
None => {
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError {
msg: "There is no connection to the PostgreSQL database.".to_string(),
},
)));
}
Some(client) => match transaction_info {
ReplicaTransactionInfoVersions::V0_0_1(transaction_info) => {
if let Some(transaction_selector) = &self.transaction_selector {
if !transaction_selector.is_transaction_selected(
transaction_info.is_vote,
transaction_info.transaction.message().account_keys_iter(),
) {
return Ok(());
}
} else {
return Ok(());
}
let result = client.log_transaction_info(transaction_info, slot);
if let Err(err) = result {
return Err(AccountsDbPluginError::SlotStatusUpdateError{
msg: format!("Failed to persist the transaction info to the PostgreSQL database. Error: {:?}", err)
});
}
}
},
}
Ok(())
}
/// Check if the plugin is interested in account data.
/// Default is true -- if the plugin is not interested in
/// account data, it should return false.
fn account_data_notifications_enabled(&self) -> bool {
self.accounts_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
/// Check if the plugin is interested in transaction data
fn transaction_notifications_enabled(&self) -> bool {
self.transaction_selector
.as_ref()
.map_or_else(|| false, |selector| selector.is_enabled())
}
}
impl AccountsDbPluginPostgres {
fn create_accounts_selector_from_config(config: &serde_json::Value) -> AccountsSelector {
let accounts_selector = &config["accounts_selector"];
if accounts_selector.is_null() {
AccountsSelector::default()
} else {
let accounts = &accounts_selector["accounts"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
let owners = &accounts_selector["owners"];
let owners: Vec<String> = if owners.is_array() {
owners
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
AccountsSelector::new(&accounts, &owners)
}
}
fn create_transaction_selector_from_config(config: &serde_json::Value) -> TransactionSelector {
let transaction_selector = &config["transaction_selector"];
if transaction_selector.is_null() {
TransactionSelector::default()
} else {
let accounts = &transaction_selector["mentions"];
let accounts: Vec<String> = if accounts.is_array() {
accounts
.as_array()
.unwrap()
.iter()
.map(|val| val.as_str().unwrap().to_string())
.collect()
} else {
Vec::default()
};
TransactionSelector::new(&accounts)
}
}
pub fn new() -> Self {
Self::default()
}
}
#[no_mangle]
#[allow(improper_ctypes_definitions)]
/// # Safety
///
/// This function returns the AccountsDbPluginPostgres pointer as trait AccountsDbPlugin.
pub unsafe extern "C" fn _create_plugin() -> *mut dyn AccountsDbPlugin {
let plugin = AccountsDbPluginPostgres::new();
let plugin: Box<dyn AccountsDbPlugin> = Box::new(plugin);
Box::into_raw(plugin)
}
#[cfg(test)]
pub(crate) mod tests {
use {super::*, serde_json};
#[test]
fn test_accounts_selector_from_config() {
let config = "{\"accounts_selector\" : { \
\"owners\" : [\"9xQeWvG816bUx9EPjHmaT23yvVM2ZWbrrpZb9PusVFin\"] \
}}";
let config: serde_json::Value = serde_json::from_str(config).unwrap();
AccountsDbPluginPostgres::create_accounts_selector_from_config(&config);
}
}
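
Editor's note: a hedged sketch of a complete config that combines both selectors. The values are illustrative; the typed struct simply ignores the selector fields, which `on_load` reads separately from the raw JSON as shown above.

let config_json = r#"{
    "libpath": "/path/to/libsolana_accountsdb_plugin_postgres.so",
    "host": "localhost",
    "user": "solana",
    "threads": 10,
    "accounts_selector": { "accounts": ["*"] },
    "transaction_selector": { "mentions": ["all_votes"] }
}"#;
let config: AccountsDbPluginPostgresConfig = serde_json::from_str(config_json).unwrap();
assert_eq!(config.threads, Some(10));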


@@ -1,4 +0,0 @@
pub mod accounts_selector;
pub mod accountsdb_plugin_postgres;
pub mod postgres_client;
pub mod transaction_selector;


@@ -1,905 +0,0 @@
#![allow(clippy::integer_arithmetic)]
mod postgres_client_transaction;
/// A concurrent implementation for writing accounts to the PostgreSQL database in parallel.
use {
crate::accountsdb_plugin_postgres::{
AccountsDbPluginPostgresConfig, AccountsDbPluginPostgresError,
},
chrono::Utc,
crossbeam_channel::{bounded, Receiver, RecvTimeoutError, Sender},
log::*,
postgres::{Client, NoTls, Statement},
postgres_client_transaction::LogTransactionRequest,
solana_accountsdb_plugin_interface::accountsdb_plugin_interface::{
AccountsDbPluginError, ReplicaAccountInfo, SlotStatus,
},
solana_measure::measure::Measure,
solana_metrics::*,
solana_sdk::timing::AtomicInterval,
std::{
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
Arc, Mutex,
},
thread::{self, sleep, Builder, JoinHandle},
time::Duration,
},
tokio_postgres::types,
};
/// The maximum number of asynchronous requests allowed in the channel, to avoid excessive
/// memory usage. The downside: calls made after this threshold is reached can block.
const MAX_ASYNC_REQUESTS: usize = 40960;
const DEFAULT_POSTGRES_PORT: u16 = 5432;
const DEFAULT_THREADS_COUNT: usize = 100;
const DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE: usize = 10;
const ACCOUNT_COLUMN_COUNT: usize = 9;
const DEFAULT_PANIC_ON_DB_ERROR: bool = false;
struct PostgresSqlClientWrapper {
client: Client,
update_account_stmt: Statement,
bulk_account_insert_stmt: Statement,
update_slot_with_parent_stmt: Statement,
update_slot_without_parent_stmt: Statement,
update_transaction_log_stmt: Statement,
}
pub struct SimplePostgresClient {
batch_size: usize,
pending_account_updates: Vec<DbAccountInfo>,
client: Mutex<PostgresSqlClientWrapper>,
}
struct PostgresClientWorker {
client: SimplePostgresClient,
/// Indicates whether account notifications during startup are done.
is_startup_done: bool,
}
impl Eq for DbAccountInfo {}
#[derive(Clone, PartialEq, Debug)]
pub struct DbAccountInfo {
pub pubkey: Vec<u8>,
pub lamports: i64,
pub owner: Vec<u8>,
pub executable: bool,
pub rent_epoch: i64,
pub data: Vec<u8>,
pub slot: i64,
pub write_version: i64,
}
pub(crate) fn abort() -> ! {
#[cfg(not(test))]
{
// standard error is usually redirected to a log file, cry for help on standard output as
// well
eprintln!("Validator process aborted. The validator log may contain further details");
std::process::exit(1);
}
#[cfg(test)]
panic!("process::exit(1) is intercepted for friendly test failure...");
}
impl DbAccountInfo {
fn new<T: ReadableAccountInfo>(account: &T, slot: u64) -> DbAccountInfo {
let data = account.data().to_vec();
Self {
pubkey: account.pubkey().to_vec(),
lamports: account.lamports() as i64,
owner: account.owner().to_vec(),
executable: account.executable(),
rent_epoch: account.rent_epoch() as i64,
data,
slot: slot as i64,
write_version: account.write_version(),
}
}
}
pub trait ReadableAccountInfo: Sized {
fn pubkey(&self) -> &[u8];
fn owner(&self) -> &[u8];
fn lamports(&self) -> i64;
fn executable(&self) -> bool;
fn rent_epoch(&self) -> i64;
fn data(&self) -> &[u8];
fn write_version(&self) -> i64;
}
impl ReadableAccountInfo for DbAccountInfo {
fn pubkey(&self) -> &[u8] {
&self.pubkey
}
fn owner(&self) -> &[u8] {
&self.owner
}
fn lamports(&self) -> i64 {
self.lamports
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch
}
fn data(&self) -> &[u8] {
&self.data
}
fn write_version(&self) -> i64 {
self.write_version
}
}
impl<'a> ReadableAccountInfo for ReplicaAccountInfo<'a> {
fn pubkey(&self) -> &[u8] {
self.pubkey
}
fn owner(&self) -> &[u8] {
self.owner
}
fn lamports(&self) -> i64 {
self.lamports as i64
}
fn executable(&self) -> bool {
self.executable
}
fn rent_epoch(&self) -> i64 {
self.rent_epoch as i64
}
fn data(&self) -> &[u8] {
self.data
}
fn write_version(&self) -> i64 {
self.write_version as i64
}
}
pub trait PostgresClient {
fn join(&mut self) -> thread::Result<()> {
Ok(())
}
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError>;
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError>;
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError>;
fn log_transaction(
&mut self,
transaction_log_info: LogTransactionRequest,
) -> Result<(), AccountsDbPluginError>;
}
impl SimplePostgresClient {
fn connect_to_db(
config: &AccountsDbPluginPostgresConfig,
) -> Result<Client, AccountsDbPluginError> {
let port = config.port.unwrap_or(DEFAULT_POSTGRES_PORT);
let connection_str = if let Some(connection_str) = &config.connection_str {
connection_str.clone()
} else {
if config.host.is_none() || config.user.is_none() {
let msg = format!(
"\"connection_str\": {:?}, or \"host\": {:?} \"user\": {:?} must be specified",
config.connection_str, config.host, config.user
);
return Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::ConfigurationError { msg },
)));
}
format!(
"host={} user={} port={}",
config.host.as_ref().unwrap(),
config.user.as_ref().unwrap(),
port
)
};
match Client::connect(&connection_str, NoTls) {
Err(err) => {
let msg = format!(
"Error in connecting to the PostgreSQL database: {:?} connection_str: {:?}",
err, connection_str
);
error!("{}", msg);
Err(AccountsDbPluginError::Custom(Box::new(
AccountsDbPluginPostgresError::DataStoreConnectionError { msg },
)))
}
Ok(client) => Ok(client),
}
}
fn build_bulk_account_insert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
let mut stmt = String::from("INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) VALUES");
for j in 0..batch_size {
let row = j * ACCOUNT_COLUMN_COUNT;
let val_str = format!(
"(${}, ${}, ${}, ${}, ${}, ${}, ${}, ${}, ${})",
row + 1,
row + 2,
row + 3,
row + 4,
row + 5,
row + 6,
row + 7,
row + 8,
row + 9,
);
if j == 0 {
stmt = format!("{} {}", &stmt, val_str);
} else {
stmt = format!("{}, {}", &stmt, val_str);
}
}
let handle_conflict = "ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
stmt = format!("{} {}", stmt, handle_conflict);
info!("{}", stmt);
let bulk_stmt = client.prepare(&stmt);
match bulk_stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
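// Editor's illustration (not in the original source): with batch_size = 2 the
// statement built above has the shape
//   INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable,
//       rent_epoch, data, write_version, updated_on)
//   VALUES ($1, ..., $9), ($10, ..., $18)
//   ON CONFLICT (pubkey) DO UPDATE SET ...
//       WHERE acct.slot < excluded.slot
//          OR (acct.slot = excluded.slot AND acct.write_version < excluded.write_version)
// so a stale update (older slot, or same slot with a lower write_version) can
// never overwrite a newer row.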
fn build_single_account_upsert_statement(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO account AS acct (pubkey, slot, owner, lamports, executable, rent_epoch, data, write_version, updated_on) \
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9) \
ON CONFLICT (pubkey) DO UPDATE SET slot=excluded.slot, owner=excluded.owner, lamports=excluded.lamports, executable=excluded.executable, rent_epoch=excluded.rent_epoch, \
data=excluded.data, write_version=excluded.write_version, updated_on=excluded.updated_on WHERE acct.slot < excluded.slot OR (\
acct.slot = excluded.slot AND acct.write_version < excluded.write_version)";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the accounts update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(update_account_stmt) => Ok(update_account_stmt),
}
}
fn build_slot_upsert_statement_with_parent(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO slot (slot, parent, status, updated_on) \
VALUES ($1, $2, $3, $4) \
ON CONFLICT (slot) DO UPDATE SET parent=excluded.parent, status=excluded.status, updated_on=excluded.updated_on";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the slot update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
fn build_slot_upsert_statement_without_parent(
client: &mut Client,
config: &AccountsDbPluginPostgresConfig,
) -> Result<Statement, AccountsDbPluginError> {
let stmt = "INSERT INTO slot (slot, status, updated_on) \
VALUES ($1, $2, $3) \
ON CONFLICT (slot) DO UPDATE SET status=excluded.status, updated_on=excluded.updated_on";
let stmt = client.prepare(stmt);
match stmt {
Err(err) => {
return Err(AccountsDbPluginError::Custom(Box::new(AccountsDbPluginPostgresError::DataSchemaError {
msg: format!(
"Error in preparing for the slot update PostgreSQL database: {} host: {:?} user: {:?} config: {:?}",
err, config.host, config.user, config
),
})));
}
Ok(stmt) => Ok(stmt),
}
}
/// Internal function for updating or inserting a single account
fn upsert_account_internal(
account: &DbAccountInfo,
statement: &Statement,
client: &mut Client,
) -> Result<(), AccountsDbPluginError> {
let lamports = account.lamports() as i64;
let rent_epoch = account.rent_epoch() as i64;
let updated_on = Utc::now().naive_utc();
let result = client.query(
statement,
&[
&account.pubkey(),
&account.slot,
&account.owner(),
&lamports,
&account.executable(),
&rent_epoch,
&account.data(),
&account.write_version(),
&updated_on,
],
);
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
Ok(())
}
/// Update or insert a single account
fn upsert_account(&mut self, account: &DbAccountInfo) -> Result<(), AccountsDbPluginError> {
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
Self::upsert_account_internal(account, statement, client)
}
/// Insert accounts in batch to reduce network overhead
fn insert_accounts_in_batch(
&mut self,
account: DbAccountInfo,
) -> Result<(), AccountsDbPluginError> {
self.pending_account_updates.push(account);
if self.pending_account_updates.len() == self.batch_size {
let mut measure = Measure::start("accountsdb-plugin-postgres-prepare-values");
let mut values: Vec<&(dyn types::ToSql + Sync)> =
Vec::with_capacity(self.batch_size * ACCOUNT_COLUMN_COUNT);
let updated_on = Utc::now().naive_utc();
for j in 0..self.batch_size {
let account = &self.pending_account_updates[j];
values.push(&account.pubkey);
values.push(&account.slot);
values.push(&account.owner);
values.push(&account.lamports);
values.push(&account.executable);
values.push(&account.rent_epoch);
values.push(&account.data);
values.push(&account.write_version);
values.push(&updated_on);
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-prepare-values-us",
measure.as_us() as usize,
10000,
10000
);
let mut measure = Measure::start("accountsdb-plugin-postgres-update-account");
let client = self.client.get_mut().unwrap();
let result = client
.client
.query(&client.bulk_account_insert_stmt, &values);
self.pending_account_updates.clear();
if let Err(err) = result {
let msg = format!(
"Failed to persist the update of account to the PostgreSQL database. Error: {:?}",
err
);
error!("{}", msg);
return Err(AccountsDbPluginError::AccountsUpdateError { msg });
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-us",
measure.as_us() as usize,
10000,
10000
);
inc_new_counter_debug!(
"accountsdb-plugin-postgres-update-account-count",
self.batch_size,
10000,
10000
);
}
Ok(())
}
/// Flush any leftover buffered accounts that were not processed as part of a full batch
fn flush_buffered_writes(&mut self) -> Result<(), AccountsDbPluginError> {
if self.pending_account_updates.is_empty() {
return Ok(());
}
let client = self.client.get_mut().unwrap();
let statement = &client.update_account_stmt;
let client = &mut client.client;
for account in self.pending_account_updates.drain(..) {
Self::upsert_account_internal(&account, statement, client)?;
}
Ok(())
}
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating SimplePostgresClient...");
let mut client = Self::connect_to_db(config)?;
let bulk_account_insert_stmt =
Self::build_bulk_account_insert_statement(&mut client, config)?;
let update_account_stmt = Self::build_single_account_upsert_statement(&mut client, config)?;
let update_slot_with_parent_stmt =
Self::build_slot_upsert_statement_with_parent(&mut client, config)?;
let update_slot_without_parent_stmt =
Self::build_slot_upsert_statement_without_parent(&mut client, config)?;
let update_transaction_log_stmt =
Self::build_transaction_info_upsert_statement(&mut client, config)?;
let batch_size = config
.batch_size
.unwrap_or(DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE);
info!("Created SimplePostgresClient.");
Ok(Self {
batch_size,
pending_account_updates: Vec::with_capacity(batch_size),
client: Mutex::new(PostgresSqlClientWrapper {
client,
update_account_stmt,
bulk_account_insert_stmt,
update_slot_with_parent_stmt,
update_slot_without_parent_stmt,
update_transaction_log_stmt,
}),
})
}
}
impl PostgresClient for SimplePostgresClient {
fn update_account(
&mut self,
account: DbAccountInfo,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
trace!(
"Updating account {} with owner {} at slot {}",
bs58::encode(account.pubkey()).into_string(),
bs58::encode(account.owner()).into_string(),
account.slot,
);
if !is_startup {
return self.upsert_account(&account);
}
self.insert_accounts_in_batch(account)
}
fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
info!("Updating slot {:?} at with status {:?}", slot, status);
let slot = slot as i64; // postgres only supports i64
let parent = parent.map(|parent| parent as i64);
let updated_on = Utc::now().naive_utc();
let status_str = status.as_str();
let client = self.client.get_mut().unwrap();
let result = match parent {
Some(parent) => client.client.execute(
&client.update_slot_with_parent_stmt,
&[&slot, &parent, &status_str, &updated_on],
),
None => client.client.execute(
&client.update_slot_without_parent_stmt,
&[&slot, &status_str, &updated_on],
),
};
match result {
Err(err) => {
let msg = format!(
"Failed to persist the update of slot to the PostgreSQL database. Error: {:?}",
err
);
error!("{:?}", msg);
return Err(AccountsDbPluginError::SlotStatusUpdateError { msg });
}
Ok(rows) => {
assert_eq!(1, rows, "Expected one row to be updated at a time");
}
}
Ok(())
}
fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
self.flush_buffered_writes()
}
fn log_transaction(
&mut self,
transaction_log_info: LogTransactionRequest,
) -> Result<(), AccountsDbPluginError> {
self.log_transaction_impl(transaction_log_info)
}
}
struct UpdateAccountRequest {
account: DbAccountInfo,
is_startup: bool,
}
struct UpdateSlotRequest {
slot: u64,
parent: Option<u64>,
slot_status: SlotStatus,
}
#[warn(clippy::large_enum_variant)]
enum DbWorkItem {
UpdateAccount(Box<UpdateAccountRequest>),
UpdateSlot(Box<UpdateSlotRequest>),
LogTransaction(Box<LogTransactionRequest>),
}
impl PostgresClientWorker {
fn new(config: AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
let result = SimplePostgresClient::new(&config);
match result {
Ok(client) => Ok(PostgresClientWorker {
client,
is_startup_done: false,
}),
Err(err) => {
error!("Error in creating SimplePostgresClient: {}", err);
Err(err)
}
}
}
fn do_work(
&mut self,
receiver: Receiver<DbWorkItem>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
panic_on_db_errors: bool,
) -> Result<(), AccountsDbPluginError> {
while !exit_worker.load(Ordering::Relaxed) {
let mut measure = Measure::start("accountsdb-plugin-postgres-worker-recv");
let work = receiver.recv_timeout(Duration::from_millis(500));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-postgres-worker-recv-us",
measure.as_us() as usize,
100000,
100000
);
match work {
Ok(work) => match work {
DbWorkItem::UpdateAccount(request) => {
if let Err(err) = self
.client
.update_account(request.account, request.is_startup)
{
error!("Failed to update account: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
DbWorkItem::UpdateSlot(request) => {
if let Err(err) = self.client.update_slot_status(
request.slot,
request.parent,
request.slot_status,
) {
error!("Failed to update slot: ({})", err);
if panic_on_db_errors {
abort();
}
}
}
DbWorkItem::LogTransaction(transaction_log_info) => {
self.client.log_transaction(*transaction_log_info)?;
}
},
Err(err) => match err {
RecvTimeoutError::Timeout => {
if !self.is_startup_done && is_startup_done.load(Ordering::Relaxed) {
if let Err(err) = self.client.notify_end_of_startup() {
error!("Error in notifying end of startup: ({})", err);
if panic_on_db_errors {
abort();
}
}
self.is_startup_done = true;
startup_done_count.fetch_add(1, Ordering::Relaxed);
}
continue;
}
_ => {
error!("Error in receiving the item {:?}", err);
if panic_on_db_errors {
abort();
}
break;
}
},
}
}
Ok(())
}
}
pub struct ParallelPostgresClient {
workers: Vec<JoinHandle<Result<(), AccountsDbPluginError>>>,
exit_worker: Arc<AtomicBool>,
is_startup_done: Arc<AtomicBool>,
startup_done_count: Arc<AtomicUsize>,
initialized_worker_count: Arc<AtomicUsize>,
sender: Sender<DbWorkItem>,
last_report: AtomicInterval,
}
impl ParallelPostgresClient {
pub fn new(config: &AccountsDbPluginPostgresConfig) -> Result<Self, AccountsDbPluginError> {
info!("Creating ParallelPostgresClient...");
let (sender, receiver) = bounded(MAX_ASYNC_REQUESTS);
let exit_worker = Arc::new(AtomicBool::new(false));
let mut workers = Vec::default();
let is_startup_done = Arc::new(AtomicBool::new(false));
let startup_done_count = Arc::new(AtomicUsize::new(0));
let worker_count = config.threads.unwrap_or(DEFAULT_THREADS_COUNT);
let initialized_worker_count = Arc::new(AtomicUsize::new(0));
for i in 0..worker_count {
let cloned_receiver = receiver.clone();
let exit_clone = exit_worker.clone();
let is_startup_done_clone = is_startup_done.clone();
let startup_done_count_clone = startup_done_count.clone();
let initialized_worker_count_clone = initialized_worker_count.clone();
let config = config.clone();
let worker = Builder::new()
.name(format!("worker-{}", i))
.spawn(move || -> Result<(), AccountsDbPluginError> {
let panic_on_db_errors = *config
.panic_on_db_errors
.as_ref()
.unwrap_or(&DEFAULT_PANIC_ON_DB_ERROR);
let result = PostgresClientWorker::new(config);
match result {
Ok(mut worker) => {
initialized_worker_count_clone.fetch_add(1, Ordering::Relaxed);
worker.do_work(
cloned_receiver,
exit_clone,
is_startup_done_clone,
startup_done_count_clone,
panic_on_db_errors,
)?;
Ok(())
}
Err(err) => {
error!("Error when making connection to database: ({})", err);
if panic_on_db_errors {
abort();
}
Err(err)
}
}
})
.unwrap();
workers.push(worker);
}
info!("Created ParallelPostgresClient.");
Ok(Self {
last_report: AtomicInterval::default(),
workers,
exit_worker,
is_startup_done,
startup_done_count,
initialized_worker_count,
sender,
})
}
pub fn join(&mut self) -> thread::Result<()> {
self.exit_worker.store(true, Ordering::Relaxed);
while !self.workers.is_empty() {
let worker = self.workers.pop();
if worker.is_none() {
break;
}
let worker = worker.unwrap();
let result = worker.join().unwrap();
if result.is_err() {
error!("The worker thread has failed: {:?}", result);
}
}
Ok(())
}
pub fn update_account(
&mut self,
account: &ReplicaAccountInfo,
slot: u64,
is_startup: bool,
) -> Result<(), AccountsDbPluginError> {
if self.last_report.should_update(30000) {
datapoint_debug!(
"postgres-plugin-stats",
("message-queue-length", self.sender.len() as i64, i64),
);
}
let mut measure = Measure::start("accountsdb-plugin-posgres-create-work-item");
let wrk_item = DbWorkItem::UpdateAccount(Box::new(UpdateAccountRequest {
account: DbAccountInfo::new(account, slot),
is_startup,
}));
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-create-work-item-us",
measure.as_us() as usize,
100000,
100000
);
let mut measure = Measure::start("accountsdb-plugin-posgres-send-msg");
if let Err(err) = self.sender.send(wrk_item) {
return Err(AccountsDbPluginError::AccountsUpdateError {
msg: format!(
"Failed to update the account {:?}, error: {:?}",
bs58::encode(account.pubkey()).into_string(),
err
),
});
}
measure.stop();
inc_new_counter_debug!(
"accountsdb-plugin-posgres-send-msg-us",
measure.as_us() as usize,
100000,
100000
);
Ok(())
}
pub fn update_slot_status(
&mut self,
slot: u64,
parent: Option<u64>,
status: SlotStatus,
) -> Result<(), AccountsDbPluginError> {
if let Err(err) = self
.sender
.send(DbWorkItem::UpdateSlot(Box::new(UpdateSlotRequest {
slot,
parent,
slot_status: status,
})))
{
return Err(AccountsDbPluginError::SlotStatusUpdateError {
msg: format!("Failed to update the slot {:?}, error: {:?}", slot, err),
});
}
Ok(())
}
pub fn notify_end_of_startup(&mut self) -> Result<(), AccountsDbPluginError> {
info!("Notifying the end of startup");
// Ensure all items in the queue have been received by the workers
while !self.sender.is_empty() {
sleep(Duration::from_millis(100));
}
self.is_startup_done.store(true, Ordering::Relaxed);
// Wait for all worker threads to be done with flushing
while self.startup_done_count.load(Ordering::Relaxed)
!= self.initialized_worker_count.load(Ordering::Relaxed)
{
info!(
"Startup done count: {}, good worker thread count: {}",
self.startup_done_count.load(Ordering::Relaxed),
self.initialized_worker_count.load(Ordering::Relaxed)
);
sleep(Duration::from_millis(100));
}
info!("Done with notifying the end of startup");
Ok(())
}
}
pub struct PostgresClientBuilder {}
impl PostgresClientBuilder {
pub fn build_pararallel_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<ParallelPostgresClient, AccountsDbPluginError> {
ParallelPostgresClient::new(config)
}
pub fn build_simple_postgres_client(
config: &AccountsDbPluginPostgresConfig,
) -> Result<SimplePostgresClient, AccountsDbPluginError> {
SimplePostgresClient::new(config)
}
}
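
Editor's note: a minimal driver sketch, assuming a reachable PostgreSQL instance and only the types defined above (with SlotStatus imported from the plugin interface). An illustration, not part of the diff.

fn run() -> Result<(), AccountsDbPluginError> {
    let config = AccountsDbPluginPostgresConfig {
        host: Some("localhost".to_string()),
        user: Some("solana".to_string()),
        port: None,            // falls back to DEFAULT_POSTGRES_PORT (5432)
        connection_str: None,
        threads: Some(4),      // one PostgreSQL connection per worker thread
        batch_size: None,      // falls back to DEFAULT_ACCOUNTS_INSERT_BATCH_SIZE
        panic_on_db_errors: Some(false),
    };
    let mut client = PostgresClientBuilder::build_pararallel_postgres_client(&config)?;
    client.update_slot_status(42, Some(41), SlotStatus::Rooted)?;
    client.join().unwrap();
    Ok(())
}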


@@ -1,194 +0,0 @@
/// The transaction selector is responsible for filtering transactions
/// in the plugin framework.
use {log::*, solana_sdk::pubkey::Pubkey, std::collections::HashSet};
pub(crate) struct TransactionSelector {
pub mentioned_addresses: HashSet<Vec<u8>>,
pub select_all_transactions: bool,
pub select_all_vote_transactions: bool,
}
#[allow(dead_code)]
impl TransactionSelector {
pub fn default() -> Self {
Self {
mentioned_addresses: HashSet::default(),
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Create a selector based on the mentioned addresses
/// To select all transactions use ["*"] or ["all"]
/// To select all vote transactions, use ["all_votes"]
/// To select transactions mentioning specific addresses use ["<pubkey1>", "<pubkey2>", ...]
pub fn new(mentioned_addresses: &[String]) -> Self {
info!(
"Creating TransactionSelector from addresses: {:?}",
mentioned_addresses
);
let select_all_transactions = mentioned_addresses
.iter()
.any(|key| key == "*" || key == "all");
if select_all_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let select_all_vote_transactions = mentioned_addresses.iter().any(|key| key == "all_votes");
if select_all_vote_transactions {
return Self {
mentioned_addresses: HashSet::default(),
select_all_transactions,
select_all_vote_transactions: true,
};
}
let mentioned_addresses = mentioned_addresses
.iter()
.map(|key| bs58::decode(key).into_vec().unwrap())
.collect();
Self {
mentioned_addresses,
select_all_transactions: false,
select_all_vote_transactions: false,
}
}
/// Check if a transaction is of interest.
pub fn is_transaction_selected(
&self,
is_vote: bool,
mentioned_addresses: Box<dyn Iterator<Item = &Pubkey> + '_>,
) -> bool {
if !self.is_enabled() {
return false;
}
if self.select_all_transactions || (self.select_all_vote_transactions && is_vote) {
return true;
}
for address in mentioned_addresses {
if self.mentioned_addresses.contains(address.as_ref()) {
return true;
}
}
false
}
/// Check if any transaction is of interest at all
pub fn is_enabled(&self) -> bool {
self.select_all_transactions
|| self.select_all_vote_transactions
|| !self.mentioned_addresses.is_empty()
}
}
#[cfg(test)]
pub(crate) mod tests {
use super::*;
#[test]
fn test_select_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[pubkey1.to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_using_wildcard() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["*".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_transaction_all() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(false, Box::new(addresses.iter())));
}
#[test]
fn test_select_all_vote_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&["all_votes".to_string()]);
assert!(selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
#[test]
fn test_select_no_transaction() {
let pubkey1 = Pubkey::new_unique();
let pubkey2 = Pubkey::new_unique();
let selector = TransactionSelector::new(&[]);
assert!(!selector.is_enabled());
let addresses = [pubkey1];
assert!(!selector.is_transaction_selected(false, Box::new(addresses.iter())));
let addresses = [pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
let addresses = [pubkey1, pubkey2];
assert!(!selector.is_transaction_selected(true, Box::new(addresses.iter())));
}
}
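
Editor's note, as an illustration only: a wildcard ("*" or "all") also enables vote selection, since that constructor branch sets select_all_vote_transactions to true.

let selector = TransactionSelector::new(&["*".to_string()]);
// Even a vote transaction mentioning no tracked address is selected.
assert!(selector.is_transaction_selected(true, Box::new(std::iter::empty::<&Pubkey>())));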


@@ -1,8 +1,8 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-banking-bench"
version = "1.9.2"
version = "1.5.20"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,21 +10,20 @@ publish = false
[dependencies]
clap = "2.33.1"
crossbeam-channel = "0.5"
log = "0.4.14"
crossbeam-channel = "0.4"
log = "0.4.11"
rand = "0.7.0"
rayon = "1.5.1"
solana-core = { path = "../core", version = "=1.9.2" }
solana-gossip = { path = "../gossip", version = "=1.9.2" }
solana-ledger = { path = "../ledger", version = "=1.9.2" }
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-measure = { path = "../measure", version = "=1.9.2" }
solana-perf = { path = "../perf", version = "=1.9.2" }
solana-poh = { path = "../poh", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-streamer = { path = "../streamer", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-version = { path = "../version", version = "=1.9.2" }
rayon = "1.4.0"
solana-core = { path = "../core", version = "=1.5.20" }
solana-clap-utils = { path = "../clap-utils", version = "=1.5.20" }
solana-streamer = { path = "../streamer", version = "=1.5.20" }
solana-perf = { path = "../perf", version = "=1.5.20" }
solana-ledger = { path = "../ledger", version = "=1.5.20" }
solana-logger = { path = "../logger", version = "=1.5.20" }
solana-runtime = { path = "../runtime", version = "=1.5.20" }
solana-measure = { path = "../measure", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
solana-version = { path = "../version", version = "=1.5.20" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,37 +1,38 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, value_t, App, Arg},
crossbeam_channel::unbounded,
log::*,
rand::{thread_rng, Rng},
rayon::prelude::*,
solana_core::banking_stage::BankingStage,
solana_gossip::cluster_info::{ClusterInfo, Node},
solana_ledger::{
blockstore::Blockstore,
genesis_utils::{create_genesis_config, GenesisConfigInfo},
get_tmp_ledger_path,
},
solana_measure::measure::Measure,
solana_perf::packet::to_packet_batches,
solana_poh::poh_recorder::{create_test_recorder, PohRecorder, WorkingBankEntry},
solana_runtime::{
accounts_background_service::AbsRequestSender, bank::Bank, bank_forks::BankForks,
cost_model::CostModel,
},
solana_sdk::{
hash::Hash,
signature::{Keypair, Signature},
system_transaction,
timing::{duration_as_us, timestamp},
transaction::Transaction,
},
solana_streamer::socket::SocketAddrSpace,
std::{
sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex, RwLock},
thread::sleep,
time::{Duration, Instant},
},
use clap::{crate_description, crate_name, value_t, App, Arg};
use crossbeam_channel::unbounded;
use log::*;
use rand::{thread_rng, Rng};
use rayon::prelude::*;
use solana_core::{
banking_stage::{create_test_recorder, BankingStage},
cluster_info::ClusterInfo,
cluster_info::Node,
poh_recorder::PohRecorder,
poh_recorder::WorkingBankEntry,
};
use solana_ledger::{
blockstore::Blockstore,
genesis_utils::{create_genesis_config, GenesisConfigInfo},
get_tmp_ledger_path,
};
use solana_measure::measure::Measure;
use solana_perf::packet::to_packets_chunked;
use solana_runtime::{
accounts_background_service::AbsRequestSender, bank::Bank, bank_forks::BankForks,
};
use solana_sdk::{
hash::Hash,
signature::Keypair,
signature::Signature,
system_transaction,
timing::{duration_as_us, timestamp},
transaction::Transaction,
};
use std::{
sync::{atomic::Ordering, mpsc::Receiver, Arc, Mutex},
thread::sleep,
time::{Duration, Instant},
};
fn check_txs(
@@ -77,7 +78,7 @@ fn make_accounts_txs(
.into_par_iter()
.map(|_| {
let mut new = dummy.clone();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
if !same_payer {
new.message.account_keys[0] = solana_sdk::pubkey::new_rand();
}
@@ -168,9 +169,8 @@ fn main() {
let (verified_sender, verified_receiver) = unbounded();
let (vote_sender, vote_receiver) = unbounded();
let (tpu_vote_sender, tpu_vote_receiver) = unbounded();
let (replay_vote_sender, _replay_vote_receiver) = unbounded();
let bank0 = Bank::new_for_benches(&genesis_config);
let bank0 = Bank::new(&genesis_config);
let mut bank_forks = BankForks::new(bank0);
let mut bank = bank_forks.working_bank();
@@ -189,7 +189,7 @@ fn main() {
genesis_config.hash(),
);
// Ignore any pesky duplicate signature errors in the case we are using single-payer
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
fund.signatures = vec![Signature::new(&sig[0..64])];
let x = bank.process_transaction(&fund);
x.unwrap();
@@ -199,20 +199,19 @@ fn main() {
if !skip_sanity {
//sanity check, make sure all the transactions can execute sequentially
transactions.iter().for_each(|tx| {
let res = bank.process_transaction(tx);
let res = bank.process_transaction(&tx);
assert!(res.is_ok(), "sanity test transactions error: {:?}", res);
});
bank.clear_signatures();
//sanity check, make sure all the transactions can execute in parallel
let res = bank.process_transactions(transactions.iter());
let res = bank.process_transactions(&transactions);
for r in res {
assert!(r.is_ok(), "sanity parallel execution error: {:?}", r);
}
bank.clear_signatures();
}
let mut verified: Vec<_> = to_packet_batches(&transactions, packets_per_chunk);
let mut verified: Vec<_> = to_packets_chunked(&transactions, packets_per_chunk);
let ledger_path = get_tmp_ledger_path!();
{
let blockstore = Arc::new(
@@ -220,21 +219,15 @@ fn main() {
);
let (exit, poh_recorder, poh_service, signal_receiver) =
create_test_recorder(&bank, &blockstore, None);
let cluster_info = ClusterInfo::new(
Node::new_localhost().info,
Arc::new(Keypair::new()),
SocketAddrSpace::Unspecified,
);
let cluster_info = ClusterInfo::new_with_invalid_keypair(Node::new_localhost().info);
let cluster_info = Arc::new(cluster_info);
let banking_stage = BankingStage::new(
&cluster_info,
&poh_recorder,
verified_receiver,
tpu_vote_receiver,
vote_receiver,
None,
replay_vote_sender,
Arc::new(RwLock::new(CostModel::default())),
);
poh_recorder.lock().unwrap().set_bank(&bank);
@@ -314,10 +307,11 @@ fn main() {
tx_total_us += duration_as_us(&now.elapsed());
let mut poh_time = Measure::start("poh_time");
poh_recorder
.lock()
.unwrap()
.reset(bank.clone(), Some((bank.slot(), bank.slot() + 1)));
poh_recorder.lock().unwrap().reset(
bank.last_blockhash(),
bank.slot(),
Some((bank.slot(), bank.slot() + 1)),
);
poh_time.stop();
let mut new_bank_time = Measure::start("new_bank");
@@ -361,10 +355,10 @@ fn main() {
if bank.slot() > 0 && bank.slot() % 16 == 0 {
for tx in transactions.iter_mut() {
tx.message.recent_blockhash = bank.last_blockhash();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen::<u8>()).collect();
let sig: Vec<u8> = (0..64).map(|_| thread_rng().gen()).collect();
tx.signatures[0] = Signature::new(&sig[0..64]);
}
verified = to_packet_batches(&transactions.clone(), packets_per_chunk);
verified = to_packets_chunked(&transactions.clone(), packets_per_chunk);
}
start += chunk_len;
@@ -386,7 +380,6 @@ fn main() {
);
drop(verified_sender);
drop(tpu_vote_sender);
drop(vote_sender);
exit.store(true, Ordering::Relaxed);
banking_stage.join().unwrap();


@@ -1,28 +1,30 @@
[package]
name = "solana-banks-client"
version = "1.9.2"
version = "1.5.20"
description = "Solana banks client"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-banks-client"
edition = "2021"
edition = "2018"
[dependencies]
borsh = "0.9.1"
bincode = "1.3.1"
borsh = "0.8.1"
borsh-derive = "0.8.1"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.9.2" }
solana-program = { path = "../sdk/program", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
tarpc = { version = "0.27.2", features = ["full"] }
thiserror = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
mio = "0.7.6"
solana-banks-interface = { path = "../banks-interface", version = "=1.5.20" }
solana-program = { path = "../sdk/program", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
tarpc = { version = "0.23.0", features = ["full"] }
tokio = { version = "0.3.5", features = ["full"] }
tokio-serde = { version = "0.6", features = ["bincode"] }
[dev-dependencies]
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-banks-server = { path = "../banks-server", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.5.20" }
solana-banks-server = { path = "../banks-server", version = "=1.5.20" }
[lib]
crate-type = ["lib"]


@@ -1,62 +0,0 @@
use {
solana_sdk::{transaction::TransactionError, transport::TransportError},
std::io,
tarpc::client::RpcError,
thiserror::Error,
};
/// Errors from BanksClient
#[derive(Error, Debug)]
pub enum BanksClientError {
#[error("client error: {0}")]
ClientError(&'static str),
#[error(transparent)]
Io(#[from] io::Error),
#[error(transparent)]
RpcError(#[from] RpcError),
#[error("transport transaction error: {0}")]
TransactionError(#[from] TransactionError),
}
impl BanksClientError {
pub fn unwrap(&self) -> TransactionError {
if let BanksClientError::TransactionError(err) = self {
err.clone()
} else {
panic!("unexpected transport error")
}
}
}
impl From<BanksClientError> for io::Error {
fn from(err: BanksClientError) -> Self {
match err {
BanksClientError::ClientError(err) => Self::new(io::ErrorKind::Other, err.to_string()),
BanksClientError::Io(err) => err,
BanksClientError::RpcError(err) => Self::new(io::ErrorKind::Other, err.to_string()),
BanksClientError::TransactionError(err) => {
Self::new(io::ErrorKind::Other, err.to_string())
}
}
}
}
impl From<BanksClientError> for TransportError {
fn from(err: BanksClientError) -> Self {
match err {
BanksClientError::ClientError(err) => {
Self::IoError(io::Error::new(io::ErrorKind::Other, err.to_string()))
}
BanksClientError::Io(err) => {
Self::IoError(io::Error::new(io::ErrorKind::Other, err.to_string()))
}
BanksClientError::RpcError(err) => {
Self::IoError(io::Error::new(io::ErrorKind::Other, err.to_string()))
}
BanksClientError::TransactionError(err) => Self::TransactionError(err),
}
}
}
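
Editor's sketch (not part of the diff): the From impls above let callers that return io::Result bridge error types with a plain `.into()` or `?`.

fn example() -> std::io::Result<()> {
    let err = BanksClientError::ClientError("Sysvar not present");
    Err(err.into()) // converted via From<BanksClientError> for io::Error
}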


@@ -5,36 +5,31 @@
//! but they are undocumented, may change over time, and are generally more
//! cumbersome to use.
use borsh::BorshDeserialize;
use futures::{future::join_all, Future, FutureExt};
pub use solana_banks_interface::{BanksClient as TarpcClient, TransactionStatus};
use {
crate::error::BanksClientError,
borsh::BorshDeserialize,
futures::{future::join_all, Future, FutureExt, TryFutureExt},
solana_banks_interface::{BanksRequest, BanksResponse},
solana_program::{
clock::Slot, fee_calculator::FeeCalculator, hash::Hash, program_pack::Pack, pubkey::Pubkey,
rent::Rent, sysvar::Sysvar,
},
solana_sdk::{
account::{from_account, Account},
commitment_config::CommitmentLevel,
message::Message,
signature::Signature,
transaction::{self, Transaction},
transport,
},
std::io,
tarpc::{
client::{self, NewClient, RequestDispatch},
context::{self, Context},
serde_transport::tcp,
ClientMessage, Response, Transport,
},
tokio::{net::ToSocketAddrs, time::Duration},
tokio_serde::formats::Bincode,
use solana_banks_interface::{BanksRequest, BanksResponse};
use solana_program::{
clock::Slot, fee_calculator::FeeCalculator, hash::Hash, program_pack::Pack, pubkey::Pubkey,
rent::Rent, sysvar,
};
mod error;
use solana_sdk::{
account::{from_account, Account},
commitment_config::CommitmentLevel,
signature::Signature,
transaction::{self, Transaction},
transport,
};
use std::io::{self, Error, ErrorKind};
use tarpc::{
client::{self, channel::RequestDispatch, NewClient},
context::{self, Context},
rpc::{ClientMessage, Response},
serde_transport::tcp,
Transport,
};
use tokio::{net::ToSocketAddrs, time::Duration};
use tokio_serde::formats::Bincode;
// This exists only for backward compatibility
pub trait BanksClientExt {}
@@ -61,26 +56,16 @@ impl BanksClient {
ctx: Context,
transaction: Transaction,
) -> impl Future<Output = io::Result<()>> + '_ {
self.inner
.send_transaction_with_context(ctx, transaction)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
self.inner.send_transaction_with_context(ctx, transaction)
}
#[deprecated(
since = "1.9.0",
note = "Please use `get_fee_for_message` or `is_blockhash_valid` instead"
)]
pub fn get_fees_with_commitment_and_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, u64)>> + '_ {
#[allow(deprecated)]
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, Slot)>> + '_ {
self.inner
.get_fees_with_commitment_and_context(ctx, commitment)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
pub fn get_transaction_status_with_context(
@@ -90,8 +75,6 @@ impl BanksClient {
) -> impl Future<Output = io::Result<Option<TransactionStatus>>> + '_ {
self.inner
.get_transaction_status_with_context(ctx, signature)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
pub fn get_slot_with_context(
@@ -99,21 +82,7 @@ impl BanksClient {
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Slot>> + '_ {
self.inner
.get_slot_with_context(ctx, commitment)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
pub fn get_block_height_with_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Slot>> + '_ {
self.inner
.get_block_height_with_context(ctx, commitment)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
self.inner.get_slot_with_context(ctx, commitment)
}
pub fn process_transaction_with_commitment_and_context(
@@ -124,8 +93,6 @@ impl BanksClient {
) -> impl Future<Output = io::Result<Option<transaction::Result<()>>>> + '_ {
self.inner
.process_transaction_with_commitment_and_context(ctx, transaction, commitment)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
pub fn get_account_with_commitment_and_context(
@@ -136,8 +103,6 @@ impl BanksClient {
) -> impl Future<Output = io::Result<Option<Account>>> + '_ {
self.inner
.get_account_with_commitment_and_context(ctx, address, commitment)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
/// Send a transaction and return immediately. The server will resend the
@@ -153,42 +118,27 @@ impl BanksClient {
/// Return the fee parameters associated with a recent, rooted blockhash. The cluster
/// will use the transaction's blockhash to look up these same fee parameters and
/// use them to calculate the transaction fee.
#[deprecated(
since = "1.9.0",
note = "Please use `get_fee_for_message` or `is_blockhash_valid` instead"
)]
pub fn get_fees(
&mut self,
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, u64)>> + '_ {
#[allow(deprecated)]
) -> impl Future<Output = io::Result<(FeeCalculator, Hash, Slot)>> + '_ {
self.get_fees_with_commitment_and_context(context::current(), CommitmentLevel::default())
}
/// Return the cluster Sysvar
pub fn get_sysvar<T: Sysvar>(&mut self) -> impl Future<Output = io::Result<T>> + '_ {
self.get_account(T::id()).map(|result| {
let sysvar = result?
.ok_or(BanksClientError::ClientError("Sysvar not present"))
.map_err(io::Error::from)?; // Remove this map when return Err type updated to BanksClientError
from_account::<T, _>(&sysvar)
.ok_or(BanksClientError::ClientError(
"Failed to deserialize sysvar",
))
.map_err(Into::into) // Remove this when return Err type updated to BanksClientError
})
}
/// Return the cluster rent
pub fn get_rent(&mut self) -> impl Future<Output = io::Result<Rent>> + '_ {
self.get_sysvar::<Rent>()
self.get_account(sysvar::rent::id()).map(|result| {
let rent_sysvar = result?
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Rent sysvar not present"))?;
from_account::<Rent>(&rent_sysvar).ok_or_else(|| {
io::Error::new(io::ErrorKind::Other, "Failed to deserialize Rent sysvar")
})
})
}
/// Return a recent, rooted blockhash from the server. The cluster will only accept
/// transactions with a blockhash that has not yet expired. Use the `get_fees`
/// method to get both a blockhash and the blockhash's last valid slot.
#[deprecated(since = "1.9.0", note = "Please use `get_latest_blockhash` instead")]
pub fn get_recent_blockhash(&mut self) -> impl Future<Output = io::Result<Hash>> + '_ {
#[allow(deprecated)]
self.get_fees().map(|result| Ok(result?.1))
}
@@ -203,12 +153,11 @@ impl BanksClient {
ctx.deadline += Duration::from_secs(50);
self.process_transaction_with_commitment_and_context(ctx, transaction, commitment)
.map(|result| match result? {
None => Err(BanksClientError::ClientError(
"invalid blockhash or fee-payer",
)),
None => {
Err(Error::new(ErrorKind::TimedOut, "invalid blockhash or fee-payer").into())
}
Some(transaction_result) => Ok(transaction_result?),
})
.map_err(Into::into) // Remove this when return Err type updated to BanksClientError
}
/// Send a transaction and return until the transaction has been finalized or rejected.
@@ -243,18 +192,12 @@ impl BanksClient {
self.process_transactions_with_commitment(transactions, CommitmentLevel::default())
}
/// Return the most recent rooted slot. All transactions at or below this slot
/// are said to be finalized. The cluster will not fork to a higher slot.
/// Return the most recent rooted slot height. All transactions at or below this height
/// are said to be finalized. The cluster will not fork to a higher slot height.
pub fn get_root_slot(&mut self) -> impl Future<Output = io::Result<Slot>> + '_ {
self.get_slot_with_context(context::current(), CommitmentLevel::default())
}
/// Return the most recent rooted block height. All transactions at or below this height
/// are said to be finalized. The cluster will not fork to a higher block height.
pub fn get_root_block_height(&mut self) -> impl Future<Output = io::Result<Slot>> + '_ {
self.get_block_height_with_context(context::current(), CommitmentLevel::default())
}
/// Return the account at the given address at the slot corresponding to the given
/// commitment level. If the account is not found, None is returned.
pub fn get_account_with_commitment(
@@ -281,12 +224,10 @@ impl BanksClient {
address: Pubkey,
) -> impl Future<Output = io::Result<T>> + '_ {
self.get_account(address).map(|result| {
let account = result?
.ok_or(BanksClientError::ClientError("Account not found"))
.map_err(io::Error::from)?; // Remove this map when return Err type updated to BanksClientError
let account =
result?.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "Account not found"))?;
T::unpack_from_slice(&account.data)
.map_err(|_| BanksClientError::ClientError("Failed to deserialize account"))
.map_err(Into::into) // Remove this when return Err type updated to BanksClientError
.map_err(|_| io::Error::new(io::ErrorKind::Other, "Failed to deserialize account"))
})
}
@@ -297,8 +238,9 @@ impl BanksClient {
address: Pubkey,
) -> impl Future<Output = io::Result<T>> + '_ {
self.get_account(address).map(|result| {
let account = result?.ok_or(BanksClientError::ClientError("Account not found"))?;
T::try_from_slice(&account.data).map_err(Into::into)
let account =
result?.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "account not found"))?;
T::try_from_slice(&account.data)
})
}
@@ -351,46 +293,6 @@ impl BanksClient {
// Convert Vec<Result<_, _>> to Result<Vec<_>>
statuses.into_iter().collect()
}
pub fn get_latest_blockhash(&mut self) -> impl Future<Output = io::Result<Hash>> + '_ {
self.get_latest_blockhash_with_commitment(CommitmentLevel::default())
.map(|result| {
result?
.map(|x| x.0)
.ok_or(BanksClientError::ClientError("valid blockhash not found"))
.map_err(Into::into)
})
}
pub fn get_latest_blockhash_with_commitment(
&mut self,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<(Hash, u64)>>> + '_ {
self.get_latest_blockhash_with_commitment_and_context(context::current(), commitment)
}
pub fn get_latest_blockhash_with_commitment_and_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
) -> impl Future<Output = io::Result<Option<(Hash, u64)>>> + '_ {
self.inner
.get_latest_blockhash_with_commitment_and_context(ctx, commitment)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
pub fn get_fee_for_message_with_commitment_and_context(
&mut self,
ctx: Context,
commitment: CommitmentLevel,
message: Message,
) -> impl Future<Output = io::Result<Option<u64>>> + '_ {
self.inner
.get_fee_for_message_with_commitment_and_context(ctx, commitment, message)
.map_err(BanksClientError::from) // Remove this when return Err type updated to BanksClientError
.map_err(Into::into)
}
}
pub async fn start_client<C>(transport: C) -> io::Result<BanksClient>
@@ -398,31 +300,29 @@ where
C: Transport<ClientMessage<BanksRequest>, Response<BanksResponse>> + Send + 'static,
{
Ok(BanksClient {
inner: TarpcClient::new(client::Config::default(), transport).spawn(),
inner: TarpcClient::new(client::Config::default(), transport).spawn()?,
})
}
pub async fn start_tcp_client<T: ToSocketAddrs>(addr: T) -> io::Result<BanksClient> {
let transport = tcp::connect(addr, Bincode::default).await?;
Ok(BanksClient {
inner: TarpcClient::new(client::Config::default(), transport).spawn(),
inner: TarpcClient::new(client::Config::default(), transport).spawn()?,
})
}
#[cfg(test)]
mod tests {
use {
super::*,
solana_banks_server::banks_server::start_local_server,
solana_runtime::{
bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache,
genesis_utils::create_genesis_config,
},
solana_sdk::{message::Message, signature::Signer, system_instruction},
std::sync::{Arc, RwLock},
tarpc::transport,
tokio::{runtime::Runtime, time::sleep},
use super::*;
use solana_banks_server::banks_server::start_local_server;
use solana_runtime::{
bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache,
genesis_utils::create_genesis_config,
};
use solana_sdk::{message::Message, signature::Signer, system_instruction};
use std::sync::{Arc, RwLock};
use tarpc::transport;
use tokio::{runtime::Runtime, time::sleep};
#[test]
fn test_banks_client_new() {
@@ -431,13 +331,13 @@ mod tests {
}
#[test]
fn test_banks_server_transfer_via_server() -> Result<(), BanksClientError> {
fn test_banks_server_transfer_via_server() -> io::Result<()> {
// This test shows the preferred way to interact with BanksServer.
// It creates a runtime explicitly (no globals via tokio macros) and calls
// `runtime.block_on()` just once, to run all the async code.
let genesis = create_genesis_config(10);
let bank = Bank::new_for_tests(&genesis.genesis_config);
let bank = Bank::new(&genesis.genesis_config);
let slot = bank.slot();
let block_commitment_cache = Arc::new(RwLock::new(
BlockCommitmentCache::new_for_tests_with_slots(slot, slot),
@@ -450,12 +350,10 @@ mod tests {
let message = Message::new(&[instruction], Some(&mint_pubkey));
Runtime::new()?.block_on(async {
let client_transport =
start_local_server(bank_forks, block_commitment_cache, Duration::from_millis(1))
.await;
let client_transport = start_local_server(bank_forks, block_commitment_cache).await;
let mut banks_client = start_client(client_transport).await?;
let recent_blockhash = banks_client.get_latest_blockhash().await?;
let recent_blockhash = banks_client.get_recent_blockhash().await?;
let transaction = Transaction::new(&[&genesis.mint_keypair], message, recent_blockhash);
banks_client.process_transaction(transaction).await.unwrap();
assert_eq!(banks_client.get_balance(bob_pubkey).await?, 1);
@@ -464,13 +362,13 @@ mod tests {
}
#[test]
fn test_banks_server_transfer_via_client() -> Result<(), BanksClientError> {
fn test_banks_server_transfer_via_client() -> io::Result<()> {
// The caller may not want to hold the connection open until the transaction
// is processed (or blockhash expires). In this test, we verify the
// server-side functionality is available to the client.
let genesis = create_genesis_config(10);
let bank = Bank::new_for_tests(&genesis.genesis_config);
let bank = Bank::new(&genesis.genesis_config);
let slot = bank.slot();
let block_commitment_cache = Arc::new(RwLock::new(
BlockCommitmentCache::new_for_tests_with_slots(slot, slot),
@@ -479,18 +377,13 @@ mod tests {
let mint_pubkey = &genesis.mint_keypair.pubkey();
let bob_pubkey = solana_sdk::pubkey::new_rand();
let instruction = system_instruction::transfer(mint_pubkey, &bob_pubkey, 1);
let message = Message::new(&[instruction], Some(mint_pubkey));
let instruction = system_instruction::transfer(&mint_pubkey, &bob_pubkey, 1);
let message = Message::new(&[instruction], Some(&mint_pubkey));
Runtime::new()?.block_on(async {
let client_transport =
start_local_server(bank_forks, block_commitment_cache, Duration::from_millis(1))
.await;
let client_transport = start_local_server(bank_forks, block_commitment_cache).await;
let mut banks_client = start_client(client_transport).await?;
let (recent_blockhash, last_valid_block_height) = banks_client
.get_latest_blockhash_with_commitment(CommitmentLevel::default())
.await?
.unwrap();
let (_, recent_blockhash, last_valid_slot) = banks_client.get_fees().await?;
let transaction = Transaction::new(&[&genesis.mint_keypair], message, recent_blockhash);
let signature = transaction.signatures[0];
banks_client.send_transaction(transaction).await?;
@@ -498,8 +391,8 @@ mod tests {
let mut status = banks_client.get_transaction_status(signature).await?;
while status.is_none() {
let root_block_height = banks_client.get_root_block_height().await?;
if root_block_height > last_valid_block_height {
let root_slot = banks_client.get_root_slot().await?;
if root_slot > last_valid_slot {
break;
}
sleep(Duration::from_millis(100)).await;


@@ -1,18 +1,22 @@
[package]
name = "solana-banks-interface"
version = "1.9.2"
version = "1.5.20"
description = "Solana banks RPC interface"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-banks-interface"
edition = "2021"
edition = "2018"
[dependencies]
serde = { version = "1.0.130", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
tarpc = { version = "0.27.2", features = ["full"] }
mio = "0.7.6"
serde = { version = "1.0.118", features = ["derive"] }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
tarpc = { version = "0.23.0", features = ["full"] }
[dev-dependencies]
tokio = { version = "0.3.5", features = ["full"] }
[lib]
crate-type = ["lib"]


@@ -1,18 +1,13 @@
#![allow(deprecated)]
use {
serde::{Deserialize, Serialize},
solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
fee_calculator::FeeCalculator,
hash::Hash,
message::Message,
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction, TransactionError},
},
use serde::{Deserialize, Serialize};
use solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
fee_calculator::FeeCalculator,
hash::Hash,
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction, TransactionError},
};
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
@@ -33,17 +28,12 @@ pub struct TransactionStatus {
#[tarpc::service]
pub trait Banks {
async fn send_transaction_with_context(transaction: Transaction);
#[deprecated(
since = "1.9.0",
note = "Please use `get_fee_for_message_with_commitment_and_context` instead"
)]
async fn get_fees_with_commitment_and_context(
commitment: CommitmentLevel,
) -> (FeeCalculator, Hash, Slot);
async fn get_transaction_status_with_context(signature: Signature)
-> Option<TransactionStatus>;
async fn get_slot_with_context(commitment: CommitmentLevel) -> Slot;
async fn get_block_height_with_context(commitment: CommitmentLevel) -> u64;
async fn process_transaction_with_commitment_and_context(
transaction: Transaction,
commitment: CommitmentLevel,
@@ -52,22 +42,12 @@ pub trait Banks {
address: Pubkey,
commitment: CommitmentLevel,
) -> Option<Account>;
async fn get_latest_blockhash_with_context() -> Hash;
async fn get_latest_blockhash_with_commitment_and_context(
commitment: CommitmentLevel,
) -> Option<(Hash, u64)>;
async fn get_fee_for_message_with_commitment_and_context(
commitment: CommitmentLevel,
message: Message,
) -> Option<u64>;
}
#[cfg(test)]
mod tests {
use {
super::*,
tarpc::{client, transport},
};
use super::*;
use tarpc::{client, transport};
#[test]
fn test_banks_client_new() {


@@ -1,25 +1,26 @@
[package]
name = "solana-banks-server"
version = "1.9.2"
version = "1.5.20"
description = "Solana banks server"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-banks-server"
edition = "2021"
edition = "2018"
[dependencies]
bincode = "1.3.3"
bincode = "1.3.1"
futures = "0.3"
solana-banks-interface = { path = "../banks-interface", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-send-transaction-service = { path = "../send-transaction-service", version = "=1.9.2" }
tarpc = { version = "0.27.2", features = ["full"] }
tokio = { version = "1", features = ["full"] }
tokio-serde = { version = "0.8", features = ["bincode"] }
tokio-stream = "0.1"
log = "0.4.11"
mio = "0.7.6"
solana-banks-interface = { path = "../banks-interface", version = "=1.5.20" }
solana-runtime = { path = "../runtime", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
solana-metrics = { path = "../metrics", version = "=1.5.20" }
tarpc = { version = "0.23.0", features = ["full"] }
tokio = { version = "0.3", features = ["full"] }
tokio-serde = { version = "0.6", features = ["bincode"] }
[lib]
crate-type = ["lib"]


@@ -1,54 +1,48 @@
use {
bincode::{deserialize, serialize},
futures::{future, prelude::stream::StreamExt},
solana_banks_interface::{
Banks, BanksRequest, BanksResponse, TransactionConfirmationStatus, TransactionStatus,
},
solana_runtime::{bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache},
solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
feature_set::FeatureSet,
fee_calculator::FeeCalculator,
hash::Hash,
message::{Message, SanitizedMessage},
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction},
},
solana_send_transaction_service::{
send_transaction_service::{SendTransactionService, TransactionInfo},
tpu_info::NullTpuInfo,
},
std::{
convert::TryFrom,
io,
net::{Ipv4Addr, SocketAddr},
sync::{
mpsc::{channel, Receiver, Sender},
Arc, RwLock,
},
thread::Builder,
time::Duration,
},
tarpc::{
context::Context,
serde_transport::tcp,
server::{self, incoming::Incoming, Channel},
transport::{self, channel::UnboundedChannel},
ClientMessage, Response,
},
tokio::time::sleep,
tokio_serde::formats::Bincode,
use crate::send_transaction_service::{SendTransactionService, TransactionInfo};
use bincode::{deserialize, serialize};
use futures::{
future,
prelude::stream::{self, StreamExt},
};
use solana_banks_interface::{
Banks, BanksRequest, BanksResponse, TransactionConfirmationStatus, TransactionStatus,
};
use solana_runtime::{bank::Bank, bank_forks::BankForks, commitment::BlockCommitmentCache};
use solana_sdk::{
account::Account,
clock::Slot,
commitment_config::CommitmentLevel,
fee_calculator::FeeCalculator,
hash::Hash,
pubkey::Pubkey,
signature::Signature,
transaction::{self, Transaction},
};
use std::{
io,
net::{Ipv4Addr, SocketAddr},
sync::{
mpsc::{channel, Receiver, Sender},
Arc, RwLock,
},
thread::Builder,
time::Duration,
};
use tarpc::{
context::Context,
rpc::{transport::channel::UnboundedChannel, ClientMessage, Response},
serde_transport::tcp,
server::{self, Channel, Handler},
transport,
};
use tokio::time::sleep;
use tokio_serde::formats::Bincode;
#[derive(Clone)]
struct BanksServer {
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
transaction_sender: Sender<TransactionInfo>,
poll_signature_status_sleep_duration: Duration,
}
impl BanksServer {
@@ -60,13 +54,11 @@ impl BanksServer {
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
transaction_sender: Sender<TransactionInfo>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
Self {
bank_forks,
block_commitment_cache,
transaction_sender,
poll_signature_status_sleep_duration,
}
}
@@ -81,7 +73,7 @@ impl BanksServer {
.map(|info| deserialize(&info.wire_transaction).unwrap())
.collect();
let bank = bank_forks.read().unwrap().working_bank();
let _ = bank.try_process_transactions(transactions.iter());
let _ = bank.process_transactions(&transactions);
}
}
@@ -89,7 +81,6 @@ impl BanksServer {
fn new_loopback(
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> Self {
let (transaction_sender, transaction_receiver) = channel();
let bank = bank_forks.read().unwrap().working_bank();
@@ -104,12 +95,7 @@ impl BanksServer {
.name("solana-bank-forks-client".to_string())
.spawn(move || Self::run(server_bank_forks, transaction_receiver))
.unwrap();
Self::new(
bank_forks,
block_commitment_cache,
transaction_sender,
poll_signature_status_sleep_duration,
)
Self::new(bank_forks, block_commitment_cache, transaction_sender)
}
fn slot(&self, commitment: CommitmentLevel) -> Slot {
@@ -127,16 +113,16 @@ impl BanksServer {
self,
signature: &Signature,
blockhash: &Hash,
last_valid_block_height: u64,
last_valid_slot: Slot,
commitment: CommitmentLevel,
) -> Option<transaction::Result<()>> {
let mut status = self
.bank(commitment)
.get_signature_status_with_blockhash(signature, blockhash);
while status.is_none() {
sleep(self.poll_signature_status_sleep_duration).await;
sleep(Duration::from_millis(200)).await;
let bank = self.bank(commitment);
if bank.block_height() > last_valid_block_height {
if bank.slot() > last_valid_slot {
break;
}
status = bank.get_signature_status_with_blockhash(signature, blockhash);
@@ -145,13 +131,10 @@ impl BanksServer {
}
}
fn verify_transaction(
transaction: &Transaction,
feature_set: &Arc<FeatureSet>,
) -> transaction::Result<()> {
fn verify_transaction(transaction: &Transaction) -> transaction::Result<()> {
if let Err(err) = transaction.verify() {
Err(err)
} else if let Err(err) = transaction.verify_precompiles(feature_set) {
} else if let Err(err) = transaction.verify_precompiles() {
Err(err)
} else {
Ok(())
@@ -162,21 +145,16 @@ fn verify_transaction(
impl Banks for BanksServer {
async fn send_transaction_with_context(self, _: Context, transaction: Transaction) {
let blockhash = &transaction.message.recent_blockhash;
let last_valid_block_height = self
let last_valid_slot = self
.bank_forks
.read()
.unwrap()
.root_bank()
.get_blockhash_last_valid_block_height(blockhash)
.get_blockhash_last_valid_slot(&blockhash)
.unwrap();
let signature = transaction.signatures.get(0).cloned().unwrap_or_default();
let info = TransactionInfo::new(
signature,
serialize(&transaction).unwrap(),
last_valid_block_height,
None,
None,
);
let info =
TransactionInfo::new(signature, serialize(&transaction).unwrap(), last_valid_slot);
self.transaction_sender.send(info).unwrap();
}
@@ -184,18 +162,11 @@ impl Banks for BanksServer {
self,
_: Context,
commitment: CommitmentLevel,
) -> (FeeCalculator, Hash, u64) {
) -> (FeeCalculator, Hash, Slot) {
let bank = self.bank(commitment);
let blockhash = bank.last_blockhash();
let lamports_per_signature = bank.get_lamports_per_signature();
let last_valid_block_height = bank
.get_blockhash_last_valid_block_height(&blockhash)
.unwrap();
(
FeeCalculator::new(lamports_per_signature),
blockhash,
last_valid_block_height,
)
let (blockhash, fee_calculator) = bank.last_blockhash_with_fee_calculator();
let last_valid_slot = bank.get_blockhash_last_valid_slot(&blockhash).unwrap();
(fee_calculator, blockhash, last_valid_slot)
}
async fn get_transaction_status_with_context(
@@ -238,35 +209,29 @@ impl Banks for BanksServer {
self.slot(commitment)
}
async fn get_block_height_with_context(self, _: Context, commitment: CommitmentLevel) -> u64 {
self.bank(commitment).block_height()
}
async fn process_transaction_with_commitment_and_context(
self,
_: Context,
transaction: Transaction,
commitment: CommitmentLevel,
) -> Option<transaction::Result<()>> {
if let Err(err) = verify_transaction(&transaction, &self.bank(commitment).feature_set) {
if let Err(err) = verify_transaction(&transaction) {
return Some(Err(err));
}
let blockhash = &transaction.message.recent_blockhash;
let last_valid_block_height = self
.bank(commitment)
.get_blockhash_last_valid_block_height(blockhash)
let last_valid_slot = self
.bank_forks
.read()
.unwrap()
.root_bank()
.get_blockhash_last_valid_slot(blockhash)
.unwrap();
let signature = transaction.signatures.get(0).cloned().unwrap_or_default();
let info = TransactionInfo::new(
signature,
serialize(&transaction).unwrap(),
last_valid_block_height,
None,
None,
);
let info =
TransactionInfo::new(signature, serialize(&transaction).unwrap(), last_valid_slot);
self.transaction_sender.send(info).unwrap();
self.poll_signature_status(&signature, blockhash, last_valid_block_height, commitment)
self.poll_signature_status(&signature, blockhash, last_valid_slot, commitment)
.await
}
@@ -277,49 +242,19 @@ impl Banks for BanksServer {
commitment: CommitmentLevel,
) -> Option<Account> {
let bank = self.bank(commitment);
bank.get_account(&address).map(Account::from)
}
async fn get_latest_blockhash_with_context(self, _: Context) -> Hash {
let bank = self.bank(CommitmentLevel::default());
bank.last_blockhash()
}
async fn get_latest_blockhash_with_commitment_and_context(
self,
_: Context,
commitment: CommitmentLevel,
) -> Option<(Hash, u64)> {
let bank = self.bank(commitment);
let blockhash = bank.last_blockhash();
let last_valid_block_height = bank.get_blockhash_last_valid_block_height(&blockhash)?;
Some((blockhash, last_valid_block_height))
}
async fn get_fee_for_message_with_commitment_and_context(
self,
_: Context,
commitment: CommitmentLevel,
message: Message,
) -> Option<u64> {
let bank = self.bank(commitment);
let sanitized_message = SanitizedMessage::try_from(message).ok()?;
bank.get_fee_for_message(&sanitized_message)
bank.get_account(&address)
}
}
pub async fn start_local_server(
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
poll_signature_status_sleep_duration: Duration,
) -> UnboundedChannel<Response<BanksResponse>, ClientMessage<BanksRequest>> {
let banks_server = BanksServer::new_loopback(
bank_forks,
block_commitment_cache,
poll_signature_status_sleep_duration,
);
let banks_server = BanksServer::new_loopback(bank_forks, block_commitment_cache);
let (client_transport, server_transport) = transport::channel::unbounded();
let server = server::BaseChannel::with_defaults(server_transport).execute(banks_server.serve());
let server = server::new(server::Config::default())
.incoming(stream::once(future::ready(server_transport)))
.respond_with(banks_server.serve());
tokio::spawn(server);
client_transport
}
@@ -348,22 +283,11 @@ pub async fn start_tcp_server(
.map(move |chan| {
let (sender, receiver) = channel();
SendTransactionService::new::<NullTpuInfo>(
tpu_addr,
&bank_forks,
None,
receiver,
5_000,
0,
);
SendTransactionService::new(tpu_addr, &bank_forks, receiver);
let server = BanksServer::new(
bank_forks.clone(),
block_commitment_cache.clone(),
sender,
Duration::from_millis(200),
);
chan.execute(server.serve())
let server =
BanksServer::new(bank_forks.clone(), block_commitment_cache.clone(), sender);
chan.respond_with(server.serve()).execute()
})
// Max 10 channels.
.buffer_unordered(10)


@@ -1,3 +1,7 @@
#![allow(clippy::integer_arithmetic)]
pub mod banks_server;
pub mod rpc_banks_service;
pub mod send_transaction_service;
#[macro_use]
extern crate solana_metrics;


@@ -1,22 +1,19 @@
//! The `rpc_banks_service` module implements the Solana Banks RPC API.
use {
crate::banks_server::start_tcp_server,
futures::{future::FutureExt, pin_mut, prelude::stream::StreamExt, select},
solana_runtime::{bank_forks::BankForks, commitment::BlockCommitmentCache},
std::{
net::SocketAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
thread::{self, Builder, JoinHandle},
use crate::banks_server::start_tcp_server;
use futures::{future::FutureExt, pin_mut, prelude::stream::StreamExt, select};
use solana_runtime::{bank_forks::BankForks, commitment::BlockCommitmentCache};
use std::{
net::SocketAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
tokio::{
runtime::Runtime,
time::{self, Duration},
},
tokio_stream::wrappers::IntervalStream,
thread::{self, Builder, JoinHandle},
};
use tokio::{
runtime::Runtime,
time::{self, Duration},
};
pub struct RpcBanksService {
@@ -38,7 +35,7 @@ async fn start_abortable_tcp_server(
block_commitment_cache.clone(),
)
.fuse();
let interval = IntervalStream::new(time::interval(Duration::from_millis(100))).fuse();
let interval = time::interval(Duration::from_millis(100)).fuse();
pin_mut!(server, interval);
loop {
select! {
@@ -103,11 +100,12 @@ impl RpcBanksService {
#[cfg(test)]
mod tests {
use {super::*, solana_runtime::bank::Bank};
use super::*;
use solana_runtime::bank::Bank;
#[test]
fn test_rpc_banks_server_exit() {
let bank_forks = Arc::new(RwLock::new(BankForks::new(Bank::default_for_tests())));
let bank_forks = Arc::new(RwLock::new(BankForks::new(Bank::default())));
let block_commitment_cache = Arc::new(RwLock::new(BlockCommitmentCache::default()));
let exit = Arc::new(AtomicBool::new(false));
let addr = "127.0.0.1:0".parse().unwrap();


@@ -0,0 +1,343 @@
// TODO: Merge this implementation with the one at `core/src/send_transaction_service.rs`
use log::*;
use solana_metrics::{datapoint_warn, inc_new_counter_info};
use solana_runtime::{bank::Bank, bank_forks::BankForks};
use solana_sdk::{clock::Slot, signature::Signature};
use std::{
collections::HashMap,
net::{SocketAddr, UdpSocket},
sync::{
mpsc::{Receiver, RecvTimeoutError},
Arc, RwLock,
},
thread::{self, Builder, JoinHandle},
time::{Duration, Instant},
};
/// Maximum size of the transaction queue
const MAX_TRANSACTION_QUEUE_SIZE: usize = 10_000; // This seems like a lot but maybe it needs to be bigger one day
pub struct SendTransactionService {
thread: JoinHandle<()>,
}
pub struct TransactionInfo {
pub signature: Signature,
pub wire_transaction: Vec<u8>,
pub last_valid_slot: Slot,
}
impl TransactionInfo {
pub fn new(signature: Signature, wire_transaction: Vec<u8>, last_valid_slot: Slot) -> Self {
Self {
signature,
wire_transaction,
last_valid_slot,
}
}
}
#[derive(Default, Debug, PartialEq)]
struct ProcessTransactionsResult {
rooted: u64,
expired: u64,
retried: u64,
failed: u64,
retained: u64,
}
impl SendTransactionService {
pub fn new(
tpu_address: SocketAddr,
bank_forks: &Arc<RwLock<BankForks>>,
receiver: Receiver<TransactionInfo>,
) -> Self {
let thread = Self::retry_thread(receiver, bank_forks.clone(), tpu_address);
Self { thread }
}
fn retry_thread(
receiver: Receiver<TransactionInfo>,
bank_forks: Arc<RwLock<BankForks>>,
tpu_address: SocketAddr,
) -> JoinHandle<()> {
let mut last_status_check = Instant::now();
let mut transactions = HashMap::new();
let send_socket = UdpSocket::bind("0.0.0.0:0").unwrap();
Builder::new()
.name("send-tx-svc".to_string())
.spawn(move || loop {
match receiver.recv_timeout(Duration::from_secs(1)) {
Err(RecvTimeoutError::Disconnected) => break,
Err(RecvTimeoutError::Timeout) => {}
Ok(transaction_info) => {
Self::send_transaction(
&send_socket,
&tpu_address,
&transaction_info.wire_transaction,
);
if transactions.len() < MAX_TRANSACTION_QUEUE_SIZE {
transactions.insert(transaction_info.signature, transaction_info);
} else {
datapoint_warn!("send_transaction_service-queue-overflow");
}
}
}
if Instant::now().duration_since(last_status_check).as_secs() >= 5 {
if !transactions.is_empty() {
datapoint_info!(
"send_transaction_service-queue-size",
("len", transactions.len(), i64)
);
let bank_forks = bank_forks.read().unwrap();
let root_bank = bank_forks.root_bank();
let working_bank = bank_forks.working_bank();
let _result = Self::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
}
last_status_check = Instant::now();
}
})
.unwrap()
}
fn process_transactions(
working_bank: &Arc<Bank>,
root_bank: &Arc<Bank>,
send_socket: &UdpSocket,
tpu_address: &SocketAddr,
transactions: &mut HashMap<Signature, TransactionInfo>,
) -> ProcessTransactionsResult {
let mut result = ProcessTransactionsResult::default();
transactions.retain(|signature, transaction_info| {
if root_bank.has_signature(signature) {
info!("Transaction is rooted: {}", signature);
result.rooted += 1;
inc_new_counter_info!("send_transaction_service-rooted", 1);
false
} else if transaction_info.last_valid_slot < root_bank.slot() {
info!("Dropping expired transaction: {}", signature);
result.expired += 1;
inc_new_counter_info!("send_transaction_service-expired", 1);
false
} else {
match working_bank.get_signature_status_slot(signature) {
None => {
// Transaction is unknown to the working bank; it might have been
// dropped or landed in another fork. Re-send it
info!("Retrying transaction: {}", signature);
result.retried += 1;
inc_new_counter_info!("send_transaction_service-retry", 1);
Self::send_transaction(
&send_socket,
&tpu_address,
&transaction_info.wire_transaction,
);
true
}
Some((_slot, status)) => {
if status.is_err() {
info!("Dropping failed transaction: {}", signature);
result.failed += 1;
inc_new_counter_info!("send_transaction_service-failed", 1);
false
} else {
result.retained += 1;
true
}
}
}
}
});
result
}
fn send_transaction(
send_socket: &UdpSocket,
tpu_address: &SocketAddr,
wire_transaction: &[u8],
) {
if let Err(err) = send_socket.send_to(wire_transaction, tpu_address) {
warn!("Failed to send transaction to {}: {:?}", tpu_address, err);
}
}
pub fn join(self) -> thread::Result<()> {
self.thread.join()
}
}
#[cfg(test)]
mod test {
use super::*;
use solana_sdk::{
genesis_config::create_genesis_config, pubkey::Pubkey, signature::Signer,
system_transaction,
};
use std::sync::mpsc::channel;
#[test]
fn service_exit() {
let tpu_address = "127.0.0.1:0".parse().unwrap();
let bank = Bank::default();
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let (sender, receiver) = channel();
let send_transaction_service =
SendTransactionService::new(tpu_address, &bank_forks, receiver);
drop(sender);
send_transaction_service.join().unwrap();
}
#[test]
fn process_transactions() {
let (genesis_config, mint_keypair) = create_genesis_config(4);
let bank = Bank::new(&genesis_config);
let bank_forks = Arc::new(RwLock::new(BankForks::new(bank)));
let send_socket = UdpSocket::bind("0.0.0.0:0").unwrap();
let tpu_address = "127.0.0.1:0".parse().unwrap();
let root_bank = Arc::new(Bank::new_from_parent(
&bank_forks.read().unwrap().working_bank(),
&Pubkey::default(),
1,
));
let rooted_signature = root_bank
.transfer(1, &mint_keypair, &mint_keypair.pubkey())
.unwrap();
let working_bank = Arc::new(Bank::new_from_parent(&root_bank, &Pubkey::default(), 2));
let non_rooted_signature = working_bank
.transfer(2, &mint_keypair, &mint_keypair.pubkey())
.unwrap();
let failed_signature = {
let blockhash = working_bank.last_blockhash();
let transaction =
system_transaction::transfer(&mint_keypair, &Pubkey::default(), 1, blockhash);
let signature = transaction.signatures[0];
working_bank.process_transaction(&transaction).unwrap_err();
signature
};
let mut transactions = HashMap::new();
info!("Expired transactions are dropped..");
transactions.insert(
Signature::default(),
TransactionInfo::new(Signature::default(), vec![], root_bank.slot() - 1),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert!(transactions.is_empty());
assert_eq!(
result,
ProcessTransactionsResult {
expired: 1,
..ProcessTransactionsResult::default()
}
);
info!("Rooted transactions are dropped...");
transactions.insert(
rooted_signature,
TransactionInfo::new(rooted_signature, vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert!(transactions.is_empty());
assert_eq!(
result,
ProcessTransactionsResult {
rooted: 1,
..ProcessTransactionsResult::default()
}
);
info!("Failed transactions are dropped...");
transactions.insert(
failed_signature,
TransactionInfo::new(failed_signature, vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert!(transactions.is_empty());
assert_eq!(
result,
ProcessTransactionsResult {
failed: 1,
..ProcessTransactionsResult::default()
}
);
info!("Non-rooted transactions are kept...");
transactions.insert(
non_rooted_signature,
TransactionInfo::new(non_rooted_signature, vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert_eq!(transactions.len(), 1);
assert_eq!(
result,
ProcessTransactionsResult {
retained: 1,
..ProcessTransactionsResult::default()
}
);
transactions.clear();
info!("Unknown transactions are retried...");
transactions.insert(
Signature::default(),
TransactionInfo::new(Signature::default(), vec![], working_bank.slot()),
);
let result = SendTransactionService::process_transactions(
&working_bank,
&root_bank,
&send_socket,
&tpu_address,
&mut transactions,
);
assert_eq!(transactions.len(), 1);
assert_eq!(
result,
ProcessTransactionsResult {
retried: 1,
..ProcessTransactionsResult::default()
}
);
}
}

bench-exchange/.gitignore

@@ -0,0 +1,4 @@
/target/
/config/
/config-local/
/farf/

bench-exchange/Cargo.toml

@@ -0,0 +1,38 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2018"
name = "solana-bench-exchange"
version = "1.5.20"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
clap = "2.33.1"
itertools = "0.9.0"
log = "0.4.11"
num-derive = "0.3"
num-traits = "0.2"
rand = "0.7.0"
rayon = "1.4.0"
serde_json = "1.0.56"
serde_yaml = "0.8.13"
solana-clap-utils = { path = "../clap-utils", version = "=1.5.20" }
solana-core = { path = "../core", version = "=1.5.20" }
solana-genesis = { path = "../genesis", version = "=1.5.20" }
solana-client = { path = "../client", version = "=1.5.20" }
solana-faucet = { path = "../faucet", version = "=1.5.20" }
solana-exchange-program = { path = "../programs/exchange", version = "=1.5.20" }
solana-logger = { path = "../logger", version = "=1.5.20" }
solana-metrics = { path = "../metrics", version = "=1.5.20" }
solana-net-utils = { path = "../net-utils", version = "=1.5.20" }
solana-runtime = { path = "../runtime", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
solana-version = { path = "../version", version = "=1.5.20" }
[dev-dependencies]
solana-local-cluster = { path = "../local-cluster", version = "=1.5.20" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]

bench-exchange/README.md

@@ -0,0 +1,479 @@
# token-exchange
Solana Token Exchange Bench
If you can't wait, jump to [Running the exchange](#Running-the-exchange) to
learn how to start and interact with the exchange.
### Table of Contents
[Overview](#Overview)<br>
[Premise](#Premise)<br>
[Exchange startup](#Exchange-startup)<br>
[Order Requests](#Order-Requests)<br>
[Order Cancellations](#Order-cancellations)<br>
[Trade swaps](#Trade-swaps)<br>
[Exchange program operations](#Exchange-program-operations)<br>
[Quotes and OHLCV](#Quotes-and-OHLCV)<br>
[Investor strategies](#Investor-strategies)<br>
[Running the exchange](#Running-the-exchange)<br>
## Overview
An exchange is a marketplace where one asset can be traded for another. This
demo demonstrates one way to host an exchange on the Solana blockchain by
emulating a currency exchange.
The assets are virtual tokens held by investors who may post order requests to
the exchange. A Matcher monitors the exchange and posts swap requests for
matching orders. All the transactions can execute concurrently.
## Premise
- Exchange
- An exchange is a marketplace where one asset can be traded for another.
The exchange in this demo is the on-chain program that implements the
tokens and the policies for trading those tokens.
- Token
- A virtual asset that can be owned and traded, and that holds virtual intrinsic value
relative to other assets. There are four types of tokens in this demo: A,
B, C, and D. Each one may be traded for another.
- Token account
- An account owned by the exchange that holds a quantity of one type of token.
- Account request
- A request to create a token account
- Token request
- A request to deposit tokens of a particular type into a token account.
- Asset pair
- A struct with fields Base and Quote, representing the two assets which make up a
trading pair, which themselves are Tokens. The Base or 'primary' asset is the
numerator and the Quote is the denominator for pricing purposes.
- Order side
- Describes which side of the market an investor wants to place a trade on. Options
are "Bid" or "Ask", where a bid represents an offer to purchase the Base asset of
the AssetPair for a sum of the Quote Asset and an Ask is an offer to sell Base asset
for the Quote asset.
- Price ratio
- An expression of the relative prices of two tokens. Calculated with the Base
Asset as the numerator and the Quote Asset as the denominator. Ratios are
represented as fixed point numbers. The fixed point scaler is defined in
[exchange_state.rs](https://github.com/solana-labs/solana/blob/c2fdd1362a029dcf89c8907c562d2079d977df11/programs/exchange_api/src/exchange_state.rs#L7)
- Order request
- A Solana transaction sent by a trader to the exchange to submit an order.
Order requests are made up of the token pair, the order side (bid or ask),
the quantity of the primary token, the price ratio, and the two token accounts
to be credited/deducted. An example trade request looks like "T AB 5 2",
which reads "Exchange 5 A tokens to B tokens at a price ratio of 1:2". A
fulfilled trade would result in 5 A tokens deducted and 10 B tokens credited
to the trade initiator's token accounts (see the worked sketch after this
list). Successful order requests result in an order.
- Order
- The result of a successful order request. Orders are stored in
accounts owned by the submitter of the order request. They can only be
canceled by their owner but can be used by anyone in a trade swap. They
contain the same information as the order request.
- Price spread
- The difference between the prices of two matching orders. The spread is the
profit of the Matcher initiating the swap request.
- Match requirements
- Policies that result in a successful trade swap.
- Match request
- A request to fill two complementary orders (bid/ask), resulting, if successful,
in a trade being created.
- Trade
- A successful trade is created from two matching orders that meet the
swap requirements; the match is submitted in a Match Request by a Matcher and
executed by the exchange. A trade may not wholly satisfy one or both of the
orders, in which case the orders are adjusted appropriately. Upon execution,
tokens are distributed to the traders' accounts and any overlap or
"negative spread" between orders is deposited into the Matcher's profit
account. All successful trades are recorded in the data of a new Solana
account for posterity.
- Investor
- Individual investors who hold a number of tokens and wish to trade them on
the exchange. Investors operate as Solana thin clients who own a set of
accounts containing tokens and/or order requests. Investors post
transactions to the exchange in order to request tokens and post or cancel
order requests.
- Matcher
- An agent who facilitates trading between investors. Matchers operate as
Solana thin clients who monitor all the orders looking for a trade
match. Once found, the Matcher issues a swap request to the exchange.
Matchers are the engine of the exchange and are rewarded for their efforts by
accumulating the price spreads of the swaps they initiate. Matchers also
provide current bid/ask price and OHLCV (Open, High, Low, Close, Volume)
information on demand via a public network port.
- Transaction fees
- Solana transaction fees are paid for by the transaction submitters who are
the Investors and Matchers.
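To make the premise concrete, here is a minimal sketch (not part of the exchange program; the names and the scale value are illustrative) of the fixed-point arithmetic behind the "T AB 5 2" order-request example above:
```rust
/// Illustrative fixed-point scaler; the examples in this document use a
/// scale of 1, while the on-chain program defines its own SCALER in
/// exchange_state.rs.
const SCALER: u64 = 1;

/// Quote-side amount for an order of `tokens` Base tokens at `price`
/// (Quote per Base, fixed-point).
fn quote_amount(tokens: u64, price: u64) -> u64 {
    tokens * price / SCALER
}

fn main() {
    // "T AB 5 2": exchange 5 A tokens to B tokens at a price ratio of 1:2.
    // A fulfilled trade deducts 5 A tokens and credits 10 B tokens.
    assert_eq!(quote_amount(5, 2), 10);
}
```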
## Exchange startup
The exchange is up and running when it reaches a state where it can take
investors' trades and Matchers' match requests. To achieve this state the
following must occur in order:
- Start the Solana blockchain
- Start the thin-client
- The Matcher subscribes to change notifications for all the accounts owned by
the exchange program id. The subscription is managed via Solana's JSON RPC
interface.
- The Matcher starts responding to queries for bid/ask price and OHLCV
The Matcher responding successfully to price and OHLCV requests is the signal to
the investors that trades submitted after that point will be analyzed. <!--This
is not ideal, and instead investors should be able to submit trades at any time,
and the Matcher could come and go without missing a trade. One way to achieve
this is for the Matcher to read the current state of all accounts looking for all
open orders.-->
Investors will initially query the exchange to discover their current balance
for each type of token. If an investor does not already have an account for
each type of token, they will submit account requests. Matchers will likewise
request accounts to hold the tokens they earn by initiating trade swaps.
```rust
/// Supported token types
pub enum Token {
A,
B,
C,
D,
}
/// Supported token pairs
pub enum TokenPair {
AB,
AC,
AD,
BC,
BD,
CD,
}
pub enum ExchangeInstruction {
/// New token account
/// key 0 - Signer
/// key 1 - New token account
AccountRequest,
}
/// Token accounts are populated with this structure
pub struct TokenAccountInfo {
/// Investor who owns this account
pub owner: Pubkey,
/// Current number of tokens this account holds
pub tokens: Tokens,
}
```
For this demo, investors or Matchers can request more tokens from the exchange at
any time by submitting token requests. Outside of a demo, an exchange of this type
would provide a way to convert a third-party asset into tokens.
To request tokens, investors submit transfer requests:
```rust
pub enum ExchangeInstruction {
/// Transfer tokens between two accounts
/// key 0 - Account to transfer tokens to
/// key 1 - Account to transfer tokens from. This can be the exchange program itself,
/// the exchange has a limitless number of tokens it can transfer.
TransferRequest(Token, u64),
}
```
## Order Requests
When an investor decides to exchange a token of one type for another, they
submit a transaction to the Solana blockchain containing an order request, which,
if successful, is turned into an order. Orders do not expire but are
cancellable. <!-- orders should have a timestamp to enable trade
expiration --> When an order is created, tokens are deducted from a token
account and the order acts as an escrow. The tokens are held until the
order is fulfilled or canceled. If the direction is `To`, the number
of `tokens` is deducted from the primary account; if `From`, then `tokens`
multiplied by `price` are deducted from the secondary account (see the
escrow sketch after the code block below). Orders are no longer valid when
the number of `tokens` goes to zero, at which point they can no longer be
used. <!-- Could support refilling orders, so order
accounts are refilled rather than accumulating -->
```rust
/// Direction of the exchange between two tokens in a pair
pub enum Direction {
/// Trade first token type (primary) in the pair 'To' the second
To,
/// Trade first token type in the pair 'From' the second (secondary)
From,
}
pub struct OrderRequestInfo {
/// Direction of trade
pub direction: Direction,
/// Token pair to trade
pub pair: TokenPair,
/// Number of tokens to exchange; refers to the primary or the secondary depending on the direction
pub tokens: u64,
/// The price ratio: the primary price over the secondary price. The primary price is fixed
/// and equal to the variable `SCALER`.
pub price: u64,
/// Token account to deposit tokens on successful swap
pub dst_account: Pubkey,
}
pub enum ExchangeInstruction {
/// Order request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - Token account associated with this trade
TradeRequest(TradeRequestInfo),
}
/// Trade accounts are populated with this structure
pub struct TradeOrderInfo {
/// Owner of the order
pub owner: Pubkey,
/// Direction of the exchange
pub direction: Direction,
/// Token pair indicating two tokens to exchange, first is primary
pub pair: TokenPair,
/// Number of tokens to exchange; primary or secondary depending on direction
pub tokens: u64,
/// Scaled price of the secondary token given the primary is equal to the scale value
/// If scale is 1 and price is 2 then ratio is 1:2 or 1 primary token for 2 secondary tokens
pub price: u64,
/// Account the tokens were sourced from. The trade account holds the tokens in escrow
/// until they are part of one or more swaps or the trade is canceled.
pub src_account: Pubkey,
/// Account into which the tokens will be deposited on a successful trade
pub dst_account: Pubkey,
}
```
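The escrow rule above can be sketched as follows (a hedged illustration under the same fixed-point convention; `escrow_amount` is a hypothetical helper, not an exchange-program function):
```rust
/// Illustrative fixed-point scaler (the program defines its own SCALER).
const SCALER: u64 = 1;

enum Direction {
    To,
    From,
}

/// Tokens moved into escrow when an order is created.
fn escrow_amount(direction: &Direction, tokens: u64, price: u64) -> u64 {
    match direction {
        // `To`: the primary account is debited by `tokens`.
        Direction::To => tokens,
        // `From`: the secondary account is debited by `tokens * price`.
        Direction::From => tokens * price / SCALER,
    }
}
```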
## Order cancellations
An investor may cancel a trade at any time, but only trades they own. If the
cancellation is successful, any tokens held in escrow are returned to the
account from which they came.
```rust
pub enum ExchangeInstruction {
/// Order cancellation
/// key 0 - Signer
/// key 1 - order to cancel
TradeCancellation,
}
```
## Trade swaps
The Matcher monitors the accounts assigned to the exchange program and
builds a trade-order table. The order table is used to identify
matching orders that could be fulfilled. When a match is found, the
Matcher issues a swap request. A swap request may not satisfy the entirety
of either order, but the exchange will greedily fulfill what it can. Any leftover tokens
in either account will keep the order valid for further swap requests in
the future.
Matching orders are defined by the following swap requirements (see the sketch after this list):
- Opposite polarity (one `To` and one `From`)
- Operate on the same token pair
- The price ratio of the `From` order is greater than or equal to the `To` order
- There are sufficient tokens to perform the trade
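A hedged sketch of these requirements as a predicate; the types and the fixed-point scale of 1 are simplified stand-ins for the program's own, and the on-chain program is what actually enforces them:
```rust
#[derive(Clone, Copy, PartialEq, Eq)]
enum Pair {
    AB,
    AC,
    AD,
    BC,
    BD,
    CD,
}

struct Order {
    pair: Pair,
    is_to: bool, // true for a `To` order, false for a `From` order
    tokens: u64,
    price: u64, // Quote per Base, fixed-point with a scale of 1 here
}

/// Returns true when two orders meet the swap requirements listed above.
fn orders_match(a: &Order, b: &Order) -> bool {
    // Opposite polarity: one `To` and one `From`.
    let (to, from) = match (a.is_to, b.is_to) {
        (true, false) => (a, b),
        (false, true) => (b, a),
        _ => return false,
    };
    to.pair == from.pair          // operate on the same token pair
        && from.price >= to.price // `From` ratio >= `To` ratio
        && to.tokens > 0          // simplified "sufficient tokens" check
        && from.tokens > 0
}
```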
Orders can be written in the following format:
`investor direction pair quantity price-ratio`
For example:
- `1 T AB 2 1`
- Investor 1 wishes to exchange 2 A tokens to B tokens at a ratio of 1 A to 1
B
- `2 F AC 6 1.2`
- Investor 2 wishes to exchange A tokens from 6 C tokens at a ratio of 1 A
from 1.2 C
An order table could look something like the following. Notice how the columns
are sorted low to high and high to low, respectively. Prices are dramatic and
whole for clarity.
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 2 4 | 2 F AB 2 8 |
| 2 | 1 T AB 1 4 | 2 F AB 2 8 |
| 3 | 1 T AB 6 6 | 2 F AB 2 7 |
| 4 | 1 T AB 2 8 | 2 F AB 3 6 |
| 5 | 1 T AB 2 10 | 2 F AB 1 5 |
As part of a successful swap request, the exchange will credit tokens to the
Matcher's account equal to the difference in the price ratios of the two orders.
These tokens are considered the Matcher's profit for initiating the trade.
The Matcher would initiate the following swap on the order table above:
- Row 1, To: Investor 1 trades 2 A tokens to 8 B tokens
- Row 1, From: Investor 2 trades 2 A tokens from 8 B tokens
- Matcher takes 8 B tokens as profit
Both row 1 trades are fully realized, table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 1 4 | 2 F AB 2 8 |
| 2 | 1 T AB 6 6 | 2 F AB 2 7 |
| 3 | 1 T AB 2 8 | 2 F AB 3 6 |
| 4 | 1 T AB 2 10 | 2 F AB 1 5 |
The Matcher would initiate the following swap:
- Row 1, To: Investor 1 trades 1 A token to 4 B tokens
- Row 1, From: Investor 2 trades 1 A token from 4 B tokens
- Matcher takes 4 B tokens as profit
Row 1 From is not fully realized, table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 6 6 | 2 F AB 1 8 |
| 2 | 1 T AB 2 8 | 2 F AB 2 7 |
| 3 | 1 T AB 2 10 | 2 F AB 3 6 |
| 4 | | 2 F AB 1 5 |
The Matcher would initiate the following swap:
- Row 1, To: Investor 1 trades 1 A token to 6 B tokens
- Row 1, From: Investor 2 trades 1 A token from 6 B tokens
- Matcher takes 2 B tokens as profit
Row 1 From is now fully realized, table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 5 6 | 2 F AB 2 7 |
| 2 | 1 T AB 2 8 | 2 F AB 3 5 |
| 3 | 1 T AB 2 10 | 2 F AB 1 5 |
The Matcher would initiate the following final swap:
- Row 1, To: Investor 1 trades 2 A tokens to 12 B tokens
- Row 1, From: Investor 2 trades 2 A tokens from 12 B tokens
- Matcher takes 2 B tokens as profit
Table becomes:
|Row| To | From |
|---|-------------|------------|
| 1 | 1 T AB 3 6 | 2 F AB 3 5 |
| 2 | 1 T AB 2 8 | 2 F AB 1 5 |
| 3 | 1 T AB 2 10 | |
At this point the lowest To's price is larger than the largest From's price, so
no more swaps would be initiated until new orders came in.
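The profits in the walkthrough above follow one pattern: the matched quantity times the gap between the two price ratios. A hypothetical helper (not part of the program; fixed-point scale of 1, as in the tables) makes this explicit:
```rust
/// Matcher profit in Quote tokens when `tokens` Base tokens are swapped
/// between a `To` order at `to_price` and a `From` order at `from_price`.
fn matcher_profit(tokens: u64, to_price: u64, from_price: u64) -> u64 {
    tokens * from_price.saturating_sub(to_price)
}

fn main() {
    // First swap above: 2 tokens between `1 T AB 2 4` and `2 F AB 2 8`.
    assert_eq!(matcher_profit(2, 4, 8), 8); // 8 B tokens of profit
    // Third swap: 1 token between prices 6 and 8.
    assert_eq!(matcher_profit(1, 6, 8), 2); // 2 B tokens of profit
}
```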
```rust
pub enum ExchangeInstruction {
/// Trade swap request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - 'To' order
/// key 3 - `From` order
/// key 4 - Token account associated with the To Trade
/// key 5 - Token account associated with From trade
/// key 6 - Token account in which to deposit the Matcher profit from the swap.
SwapRequest,
}
/// Swap accounts are populated with this structure
pub struct TradeSwapInfo {
/// Pair swapped
pub pair: TokenPair,
/// `To` order
pub to_trade_order: Pubkey,
/// `From` order
pub from_trade_order: Pubkey,
/// Number of primary tokens exchanged
pub primary_tokens: u64,
/// Price the primary tokens were exchanged for
pub primary_price: u64,
/// Number of secondary tokens exchanged
pub secondary_tokens: u64,
/// Price the secondary tokens were exchanged for
pub secondary_price: u64,
}
```
## Exchange program operations
Putting all the commands together from above, the following operations will be
supported by the on-chain exchange program:
```rust
pub enum ExchangeInstruction {
/// New token account
/// key 0 - Signer
/// key 1 - New token account
AccountRequest,
/// Transfer tokens between two accounts
/// key 0 - Account to transfer tokens to
/// key 1 - Account to transfer tokens from. This can be the exchange program itself,
/// the exchange has a limitless number of tokens it can transfer.
TransferRequest(Token, u64),
/// Order request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - Token account associated with this trade
TradeRequest(TradeRequestInfo),
/// Order cancellation
/// key 0 - Signer
/// key 1 - order to cancel
TradeCancellation,
/// Trade swap request
/// key 0 - Signer
/// key 1 - Account in which to record the swap
/// key 2 - 'To' order
/// key 3 - `From` order
/// key 4 - Token account associated with the To Trade
/// key 5 - Token account associated with From trade
/// key 6 - Token account in which to deposit the Matcher profit from the swap.
SwapRequest,
}
```
## Quotes and OHLCV
The Matcher will provide current bid/ask price quotes based on trade activity and
also provide OHLCV based on some time window. The details of how the bid/ask
price quotes are calculated are yet to be decided.
## Investor strategies
To make a compelling demo, the investors need to provide interesting trade
behavior. Something as simple as a randomly twiddled baseline would be a
minimum starting point (one possible sketch follows).
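One possible reading of a "randomly twiddled baseline," sketched with the rand 0.7 API already listed in this crate's Cargo.toml (the function is illustrative, not part of the bench):
```rust
use rand::Rng;

/// Jitter a baseline price by up to +/-5% each round; illustrative only.
fn twiddled_price(baseline: u64) -> u64 {
    let mut rng = rand::thread_rng();
    let jitter = rng.gen_range(-5i64, 6i64); // percent; rand 0.7 two-arg form
    let price = baseline as i64 + baseline as i64 * jitter / 100;
    price.max(1) as u64
}
```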
## Running the exchange
The exchange bench posts trades and swap matches as fast as it can.
You might want to bump the duration up
to 60 seconds and the batch size to 1000 for better numbers. You can modify those
in client_demo/src/demo.rs::test_exchange_local_cluster.
The following command runs the bench:
```bash
$ RUST_LOG=solana_bench_exchange=info cargo test --release -- --nocapture test_exchange_local_cluster
```
To also see the cluster messages:
```bash
$ RUST_LOG=solana_bench_exchange=info,solana=info cargo test --release -- --nocapture test_exchange_local_cluster
```

bench-exchange/src/bench.rs
File diff suppressed because it is too large

bench-exchange/src/cli.rs

@@ -0,0 +1,221 @@
use clap::{crate_description, crate_name, value_t, App, Arg, ArgMatches};
use solana_core::gen_keys::GenKeys;
use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::signature::{read_keypair_file, Keypair};
use std::net::SocketAddr;
use std::process::exit;
use std::time::Duration;
pub struct Config {
pub entrypoint_addr: SocketAddr,
pub faucet_addr: SocketAddr,
pub identity: Keypair,
pub threads: usize,
pub num_nodes: usize,
pub duration: Duration,
pub transfer_delay: u64,
pub fund_amount: u64,
pub batch_size: usize,
pub chunk_size: usize,
pub account_groups: usize,
pub client_ids_and_stake_file: String,
pub write_to_client_file: bool,
pub read_from_client_file: bool,
}
impl Default for Config {
fn default() -> Self {
Self {
entrypoint_addr: SocketAddr::from(([127, 0, 0, 1], 8001)),
faucet_addr: SocketAddr::from(([127, 0, 0, 1], FAUCET_PORT)),
identity: Keypair::new(),
num_nodes: 1,
threads: 4,
duration: Duration::new(u64::max_value(), 0),
transfer_delay: 0,
fund_amount: 100_000,
batch_size: 100,
chunk_size: 100,
account_groups: 100,
client_ids_and_stake_file: String::new(),
write_to_client_file: false,
read_from_client_file: false,
}
}
}
pub fn build_args<'a, 'b>(version: &'b str) -> App<'a, 'b> {
App::new(crate_name!())
.about(crate_description!())
.version(version)
.arg(
Arg::with_name("entrypoint")
.short("n")
.long("entrypoint")
.value_name("HOST:PORT")
.takes_value(true)
.required(false)
.default_value("127.0.0.1:8001")
.help("Cluster entry point; defaults to 127.0.0.1:8001"),
)
.arg(
Arg::with_name("faucet")
.short("d")
.long("faucet")
.value_name("HOST:PORT")
.takes_value(true)
.required(false)
.default_value("127.0.0.1:9900")
.help("Location of the faucet; defaults to 127.0.0.1:9900"),
)
.arg(
Arg::with_name("identity")
.short("i")
.long("identity")
.value_name("PATH")
.takes_value(true)
.help("File containing a client identity (keypair)"),
)
.arg(
Arg::with_name("threads")
.long("threads")
.value_name("<threads>")
.takes_value(true)
.required(false)
.default_value("1")
.help("Number of threads submitting transactions"),
)
.arg(
Arg::with_name("num-nodes")
.long("num-nodes")
.value_name("NUM")
.takes_value(true)
.required(false)
.default_value("1")
.help("Wait for NUM nodes to converge"),
)
.arg(
Arg::with_name("duration")
.long("duration")
.value_name("SECS")
.takes_value(true)
.default_value("60")
.help("Seconds to run benchmark, then exit; default is forever"),
)
.arg(
Arg::with_name("transfer-delay")
.long("transfer-delay")
.value_name("<delay>")
.takes_value(true)
.required(false)
.default_value("0")
.help("Delay between each chunk"),
)
.arg(
Arg::with_name("fund-amount")
.long("fund-amount")
.value_name("<fund>")
.takes_value(true)
.required(false)
.default_value("100000")
.help("Number of lamports to fund to each signer"),
)
.arg(
Arg::with_name("batch-size")
.long("batch-size")
.value_name("<batch>")
.takes_value(true)
.required(false)
.default_value("1000")
.help("Number of transactions before the signer rolls over"),
)
.arg(
Arg::with_name("chunk-size")
.long("chunk-size")
.value_name("<cunk>")
.takes_value(true)
.required(false)
.default_value("500")
.help("Number of transactions to generate and send at a time"),
)
.arg(
Arg::with_name("account-groups")
.long("account-groups")
.value_name("<groups>")
.takes_value(true)
.required(false)
.default_value("10")
.help("Number of account groups to cycle for each batch"),
)
.arg(
Arg::with_name("write-client-keys")
.long("write-client-keys")
.value_name("FILENAME")
.takes_value(true)
.help("Generate client keys and stakes and write the list to YAML file"),
)
.arg(
Arg::with_name("read-client-keys")
.long("read-client-keys")
.value_name("FILENAME")
.takes_value(true)
.help("Read client keys and stakes from the YAML file"),
)
}
#[allow(clippy::field_reassign_with_default)]
pub fn extract_args(matches: &ArgMatches) -> Config {
let mut args = Config::default();
args.entrypoint_addr = solana_net_utils::parse_host_port(
matches.value_of("entrypoint").unwrap(),
)
.unwrap_or_else(|e| {
eprintln!("failed to parse entrypoint address: {}", e);
exit(1)
});
args.faucet_addr = solana_net_utils::parse_host_port(matches.value_of("faucet").unwrap())
.unwrap_or_else(|e| {
eprintln!("failed to parse faucet address: {}", e);
exit(1)
});
if matches.is_present("identity") {
args.identity = read_keypair_file(matches.value_of("identity").unwrap())
.expect("can't read client identity");
} else {
args.identity = {
let seed = [42_u8; 32];
let mut rnd = GenKeys::new(seed);
rnd.gen_keypair()
};
}
args.threads = value_t!(matches.value_of("threads"), usize).expect("Failed to parse threads");
args.num_nodes =
value_t!(matches.value_of("num-nodes"), usize).expect("Failed to parse num-nodes");
let duration = value_t!(matches.value_of("duration"), u64).expect("Failed to parse duration");
args.duration = Duration::from_secs(duration);
args.transfer_delay =
value_t!(matches.value_of("transfer-delay"), u64).expect("Failed to parse transfer-delay");
args.fund_amount =
value_t!(matches.value_of("fund-amount"), u64).expect("Failed to parse fund-amount");
args.batch_size =
value_t!(matches.value_of("batch-size"), usize).expect("Failed to parse batch-size");
args.chunk_size =
value_t!(matches.value_of("chunk-size"), usize).expect("Failed to parse chunk-size");
args.account_groups = value_t!(matches.value_of("account-groups"), usize)
.expect("Failed to parse account-groups");
if let Some(s) = matches.value_of("write-client-keys") {
args.write_to_client_file = true;
args.client_ids_and_stake_file = s.to_string();
}
if let Some(s) = matches.value_of("read-client-keys") {
assert!(!args.write_to_client_file);
args.read_from_client_file = true;
args.client_ids_and_stake_file = s.to_string();
}
args
}


@@ -0,0 +1,3 @@
pub mod bench;
pub mod cli;
mod order_book;


@@ -0,0 +1,83 @@
#![allow(clippy::integer_arithmetic)]
pub mod bench;
mod cli;
pub mod order_book;
use crate::bench::{airdrop_lamports, create_client_accounts_file, do_bench_exchange, Config};
use log::*;
use solana_core::gossip_service::{discover_cluster, get_multi_client};
use solana_sdk::signature::Signer;
fn main() {
solana_logger::setup();
solana_metrics::set_panic_hook("bench-exchange");
let matches = cli::build_args(solana_version::version!()).get_matches();
let cli_config = cli::extract_args(&matches);
let cli::Config {
entrypoint_addr,
faucet_addr,
identity,
threads,
num_nodes,
duration,
transfer_delay,
fund_amount,
batch_size,
chunk_size,
account_groups,
client_ids_and_stake_file,
write_to_client_file,
read_from_client_file,
..
} = cli_config;
let config = Config {
identity,
threads,
duration,
transfer_delay,
fund_amount,
batch_size,
chunk_size,
account_groups,
client_ids_and_stake_file,
read_from_client_file,
};
if write_to_client_file {
create_client_accounts_file(
&config.client_ids_and_stake_file,
config.batch_size,
config.account_groups,
config.fund_amount,
);
} else {
info!("Connecting to the cluster");
let nodes = discover_cluster(&entrypoint_addr, num_nodes).unwrap_or_else(|_| {
panic!("Failed to discover nodes");
});
let (client, num_clients) = get_multi_client(&nodes);
info!("{} nodes found", num_clients);
if num_clients < num_nodes {
panic!("Error: Insufficient nodes discovered");
}
if !read_from_client_file {
info!("Funding keypair: {}", config.identity.pubkey());
let accounts_in_groups = batch_size * account_groups;
const NUM_SIGNERS: u64 = 2;
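// Airdrop enough for each of the NUM_SIGNERS signer sets to seed
// (accounts_in_groups + 1) accounts with fund_amount lamports apiece.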
airdrop_lamports(
&client,
&faucet_addr,
&config.identity,
fund_amount * (accounts_in_groups + 1) as u64 * NUM_SIGNERS,
);
}
do_bench_exchange(vec![client], config);
}
}


@@ -0,0 +1,134 @@
use itertools::EitherOrBoth::{Both, Left, Right};
use itertools::Itertools;
use log::*;
use solana_exchange_program::exchange_state::*;
use solana_sdk::pubkey::Pubkey;
use std::cmp::Ordering;
use std::collections::BinaryHeap;
use std::{error, fmt};
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct ToOrder {
pub pubkey: Pubkey,
pub info: OrderInfo,
}
impl Ord for ToOrder {
fn cmp(&self, other: &Self) -> Ordering {
other.info.price.cmp(&self.info.price)
}
}
impl PartialOrd for ToOrder {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct FromOrder {
pub pubkey: Pubkey,
pub info: OrderInfo,
}
impl Ord for FromOrder {
fn cmp(&self, other: &Self) -> Ordering {
self.info.price.cmp(&other.info.price)
}
}
impl PartialOrd for FromOrder {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
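These reversed comparisons are what make the two heaps behave like opposite sides of a book: `BinaryHeap` is a max-heap, so `ToOrder` (asks) inverts `cmp` to surface the lowest-priced ask first, while `FromOrder` (bids) keeps the natural order to surface the highest-priced bid first. A self-contained sketch of the inverted case, using a stand-in `Ask` type in place of the real `OrderInfo`:

```rust
use std::{cmp::Ordering, collections::BinaryHeap};

// Stand-in for OrderInfo: only the price participates in ordering here.
#[derive(Debug, Eq, PartialEq)]
struct Ask {
    price: u64,
}

impl Ord for Ask {
    // Reversed comparison, as in ToOrder: the max-heap then pops the lowest price.
    fn cmp(&self, other: &Self) -> Ordering {
        other.price.cmp(&self.price)
    }
}

impl PartialOrd for Ask {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut asks = BinaryHeap::new();
    for &price in [30_u64, 10, 20].iter() {
        asks.push(Ask { price });
    }
    assert_eq!(asks.pop().unwrap().price, 10); // cheapest ask surfaces first
}
```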
#[derive(Default)]
pub struct OrderBook {
// TODO scale to x token types
to_ab: BinaryHeap<ToOrder>,
from_ab: BinaryHeap<FromOrder>,
}
impl fmt::Display for OrderBook {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(
f,
"+-Order Book--------------------------+-------------------------------------+"
)?;
for (i, it) in self
.to_ab
.iter()
.zip_longest(self.from_ab.iter())
.enumerate()
{
match it {
Both(to, from) => writeln!(
f,
"| T AB {:8} for {:8}/{:8} | F AB {:8} for {:8}/{:8} |{}",
to.info.tokens,
SCALER,
to.info.price,
from.info.tokens,
SCALER,
from.info.price,
i
)?,
Left(to) => writeln!(
f,
"| T AB {:8} for {:8}/{:8} | |{}",
to.info.tokens, SCALER, to.info.price, i
)?,
Right(from) => writeln!(
f,
"| | F AB {:8} for {:8}/{:8} |{}",
from.info.tokens, SCALER, from.info.price, i
)?,
}
}
write!(
f,
"+-------------------------------------+-------------------------------------+"
)?;
Ok(())
}
}
impl OrderBook {
// TODO
// pub fn cancel(&mut self, pubkey: Pubkey) -> Result<(), Box<dyn error::Error>> {
// Ok(())
// }
pub fn push(&mut self, pubkey: Pubkey, info: OrderInfo) -> Result<(), Box<dyn error::Error>> {
check_trade(info.side, info.tokens, info.price)?;
match info.side {
OrderSide::Ask => {
self.to_ab.push(ToOrder { pubkey, info });
}
OrderSide::Bid => {
self.from_ab.push(FromOrder { pubkey, info });
}
}
Ok(())
}
pub fn pop(&mut self) -> Option<(ToOrder, FromOrder)> {
if let Some(pair) = Self::pop_pair(&mut self.to_ab, &mut self.from_ab) {
return Some(pair);
}
None
}
pub fn get_num_outstanding(&self) -> (usize, usize) {
(self.to_ab.len(), self.from_ab.len())
}
fn pop_pair(
to_ab: &mut BinaryHeap<ToOrder>,
from_ab: &mut BinaryHeap<FromOrder>,
) -> Option<(ToOrder, FromOrder)> {
let to = to_ab.peek()?;
let from = from_ab.peek()?;
if from.info.price < to.info.price {
debug!("Trade not viable");
return None;
}
let to = to_ab.pop()?;
let from = from_ab.pop()?;
Some((to, from))
}
}
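`pop_pair` only consumes the heads of the two heaps when the book crosses: the best bid must price at least as high as the best ask, otherwise nothing is popped and `pop` reports no trade. The viability test, distilled to prices alone (hypothetical numbers):

```rust
// Distilled from pop_pair: `to` is the best ask, `from` the best bid.
fn trade_viable(best_ask_price: u64, best_bid_price: u64) -> bool {
    best_bid_price >= best_ask_price
}

fn main() {
    assert!(trade_viable(15, 20)); // bid 20 covers ask 15: a pair is popped
    assert!(!trade_viable(25, 20)); // bid 20 < ask 25: "Trade not viable"
}
```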


@@ -0,0 +1,111 @@
use log::*;
use solana_bench_exchange::bench::{airdrop_lamports, do_bench_exchange, Config};
use solana_core::gossip_service::{discover_cluster, get_multi_client};
use solana_core::validator::ValidatorConfig;
use solana_exchange_program::exchange_processor::process_instruction;
use solana_exchange_program::id;
use solana_exchange_program::solana_exchange_program;
use solana_faucet::faucet::run_local_faucet_with_port;
use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster};
use solana_runtime::bank::Bank;
use solana_runtime::bank_client::BankClient;
use solana_sdk::genesis_config::create_genesis_config;
use solana_sdk::signature::{Keypair, Signer};
use std::process::exit;
use std::sync::mpsc::channel;
use std::time::Duration;
#[test]
#[ignore]
fn test_exchange_local_cluster() {
solana_logger::setup();
const NUM_NODES: usize = 1;
let config = Config {
identity: Keypair::new(),
duration: Duration::from_secs(1),
fund_amount: 100_000,
threads: 1,
transfer_delay: 20, // 15
batch_size: 100, // 1000
chunk_size: 10, // 200
account_groups: 1, // 10
..Config::default()
};
let Config {
fund_amount,
batch_size,
account_groups,
..
} = config;
let accounts_in_groups = batch_size * account_groups;
let cluster = LocalCluster::new(&mut ClusterConfig {
node_stakes: vec![100_000; NUM_NODES],
cluster_lamports: 100_000_000_000_000,
validator_configs: vec![ValidatorConfig::default(); NUM_NODES],
native_instruction_processors: [solana_exchange_program!()].to_vec(),
..ClusterConfig::default()
});
let faucet_keypair = Keypair::new();
cluster.transfer(
&cluster.funding_keypair,
&faucet_keypair.pubkey(),
2_000_000_000_000,
);
let (addr_sender, addr_receiver) = channel();
run_local_faucet_with_port(faucet_keypair, addr_sender, Some(1_000_000_000_000), 0);
let faucet_addr = addr_receiver
.recv_timeout(Duration::from_secs(2))
.expect("run_local_faucet")
.expect("faucet_addr");
info!("Connecting to the cluster");
let nodes =
discover_cluster(&cluster.entry_point_info.gossip, NUM_NODES).unwrap_or_else(|err| {
error!("Failed to discover {} nodes: {:?}", NUM_NODES, err);
exit(1);
});
let (client, num_clients) = get_multi_client(&nodes);
info!("clients: {}", num_clients);
assert!(num_clients >= NUM_NODES);
const NUM_SIGNERS: u64 = 2;
airdrop_lamports(
&client,
&faucet_addr,
&config.identity,
fund_amount * (accounts_in_groups + 1) as u64 * NUM_SIGNERS,
);
do_bench_exchange(vec![client], config);
}
#[test]
fn test_exchange_bank_client() {
solana_logger::setup();
let (genesis_config, identity) = create_genesis_config(100_000_000_000_000);
let mut bank = Bank::new(&genesis_config);
bank.add_builtin("exchange_program", id(), process_instruction);
let clients = vec![BankClient::new(bank)];
do_bench_exchange(
clients,
Config {
identity,
duration: Duration::from_secs(1),
fund_amount: 100_000,
threads: 1,
transfer_delay: 20, // 0;
batch_size: 100, // 1500;
chunk_size: 10, // 1500;
account_groups: 1, // 50;
..Config::default()
},
);
}


@@ -1,8 +1,8 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-bench-streamer"
version = "1.9.2"
version = "1.5.20"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
@@ -10,11 +10,11 @@ publish = false
[dependencies]
clap = "2.33.1"
solana-clap-utils = { path = "../clap-utils", version = "=1.9.2" }
solana-streamer = { path = "../streamer", version = "=1.9.2" }
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-net-utils = { path = "../net-utils", version = "=1.9.2" }
solana-version = { path = "../version", version = "=1.9.2" }
solana-clap-utils = { path = "../clap-utils", version = "=1.5.20" }
solana-streamer = { path = "../streamer", version = "=1.5.20" }
solana-logger = { path = "../logger", version = "=1.5.20" }
solana-net-utils = { path = "../net-utils", version = "=1.5.20" }
solana-version = { path = "../version", version = "=1.5.20" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,38 +1,32 @@
#![allow(clippy::integer_arithmetic)]
use {
clap::{crate_description, crate_name, App, Arg},
solana_streamer::{
packet::{Packet, PacketBatch, PacketBatchRecycler, PACKET_DATA_SIZE},
streamer::{receiver, PacketBatchReceiver},
},
std::{
cmp::max,
net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket},
sync::{
atomic::{AtomicBool, AtomicUsize, Ordering},
mpsc::channel,
Arc,
},
thread::{sleep, spawn, JoinHandle, Result},
time::{Duration, SystemTime},
},
};
use clap::{crate_description, crate_name, App, Arg};
use solana_streamer::packet::{Packet, Packets, PacketsRecycler, PACKET_DATA_SIZE};
use solana_streamer::streamer::{receiver, PacketReceiver};
use std::cmp::max;
use std::net::{IpAddr, Ipv4Addr, SocketAddr, UdpSocket};
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::mpsc::channel;
use std::sync::Arc;
use std::thread::sleep;
use std::thread::{spawn, JoinHandle, Result};
use std::time::Duration;
use std::time::SystemTime;
fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
let send = UdpSocket::bind("0.0.0.0:0").unwrap();
let mut packet_batch = PacketBatch::default();
packet_batch.packets.resize(10, Packet::default());
for w in packet_batch.packets.iter_mut() {
let mut msgs = Packets::default();
msgs.packets.resize(10, Packet::default());
for w in msgs.packets.iter_mut() {
w.meta.size = PACKET_DATA_SIZE;
w.meta.set_addr(addr);
w.meta.set_addr(&addr);
}
let packet_batch = Arc::new(packet_batch);
let msgs = Arc::new(msgs);
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
}
let mut num = 0;
for p in &packet_batch.packets {
for p in &msgs.packets {
let a = p.meta.addr();
assert!(p.meta.size <= PACKET_DATA_SIZE);
send.send_to(&p.data[..p.meta.size], &a).unwrap();
@@ -42,14 +36,14 @@ fn producer(addr: &SocketAddr, exit: Arc<AtomicBool>) -> JoinHandle<()> {
})
}
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketBatchReceiver) -> JoinHandle<()> {
fn sink(exit: Arc<AtomicBool>, rvs: Arc<AtomicUsize>, r: PacketReceiver) -> JoinHandle<()> {
spawn(move || loop {
if exit.load(Ordering::Relaxed) {
return;
}
let timer = Duration::new(1, 0);
if let Ok(packet_batch) = r.recv_timeout(timer) {
rvs.fetch_add(packet_batch.packets.len(), Ordering::Relaxed);
if let Ok(msgs) = r.recv_timeout(timer) {
rvs.fetch_add(msgs.packets.len(), Ordering::Relaxed);
}
})
}
@@ -81,7 +75,7 @@ fn main() -> Result<()> {
let mut read_channels = Vec::new();
let mut read_threads = Vec::new();
let recycler = PacketBatchRecycler::default();
let recycler = PacketsRecycler::new_without_limit("bench-streamer-recycler-shrink-stats");
for _ in 0..num_sockets {
let read = solana_net_utils::bind_to(ip_addr, port, false).unwrap();
read.set_read_timeout(Some(Duration::new(1, 0))).unwrap();
@@ -98,7 +92,6 @@ fn main() -> Result<()> {
recycler.clone(),
"bench-streamer-test",
1,
true,
));
}


@@ -1,36 +1,37 @@
[package]
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
edition = "2021"
edition = "2018"
name = "solana-bench-tps"
version = "1.9.2"
version = "1.5.20"
repository = "https://github.com/solana-labs/solana"
license = "Apache-2.0"
homepage = "https://solana.com/"
publish = false
[dependencies]
bincode = "1.3.1"
clap = "2.33.1"
log = "0.4.14"
rayon = "1.5.1"
serde_json = "1.0.72"
serde_yaml = "0.8.21"
solana-core = { path = "../core", version = "=1.9.2" }
solana-genesis = { path = "../genesis", version = "=1.9.2" }
solana-client = { path = "../client", version = "=1.9.2" }
solana-faucet = { path = "../faucet", version = "=1.9.2" }
solana-gossip = { path = "../gossip", version = "=1.9.2" }
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-metrics = { path = "../metrics", version = "=1.9.2" }
solana-measure = { path = "../measure", version = "=1.9.2" }
solana-net-utils = { path = "../net-utils", version = "=1.9.2" }
solana-runtime = { path = "../runtime", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
solana-streamer = { path = "../streamer", version = "=1.9.2" }
solana-version = { path = "../version", version = "=1.9.2" }
log = "0.4.11"
rayon = "1.4.0"
serde_json = "1.0.56"
serde_yaml = "0.8.13"
solana-clap-utils = { path = "../clap-utils", version = "=1.5.20" }
solana-core = { path = "../core", version = "=1.5.20" }
solana-genesis = { path = "../genesis", version = "=1.5.20" }
solana-client = { path = "../client", version = "=1.5.20" }
solana-faucet = { path = "../faucet", version = "=1.5.20" }
solana-logger = { path = "../logger", version = "=1.5.20" }
solana-metrics = { path = "../metrics", version = "=1.5.20" }
solana-measure = { path = "../measure", version = "=1.5.20" }
solana-net-utils = { path = "../net-utils", version = "=1.5.20" }
solana-runtime = { path = "../runtime", version = "=1.5.20" }
solana-sdk = { path = "../sdk", version = "=1.5.20" }
solana-version = { path = "../version", version = "=1.5.20" }
[dev-dependencies]
serial_test = "0.5.1"
solana-local-cluster = { path = "../local-cluster", version = "=1.9.2" }
serial_test = "0.4.0"
serial_test_derive = "0.4.0"
solana-local-cluster = { path = "../local-cluster", version = "=1.5.20" }
[package.metadata.docs.rs]
targets = ["x86_64-unknown-linux-gnu"]


@@ -1,40 +1,39 @@
use {
crate::cli::Config,
log::*,
rayon::prelude::*,
solana_client::perf_utils::{sample_txs, SampleStats},
solana_core::gen_keys::GenKeys,
solana_faucet::faucet::request_airdrop_transaction,
solana_measure::measure::Measure,
solana_metrics::{self, datapoint_info},
solana_sdk::{
client::Client,
clock::{DEFAULT_MS_PER_SLOT, DEFAULT_S_PER_SLOT, MAX_PROCESSING_AGE},
commitment_config::CommitmentConfig,
hash::Hash,
instruction::{AccountMeta, Instruction},
message::Message,
pubkey::Pubkey,
signature::{Keypair, Signer},
system_instruction, system_transaction,
timing::{duration_as_ms, duration_as_s, duration_as_us, timestamp},
transaction::Transaction,
},
std::{
collections::{HashSet, VecDeque},
net::SocketAddr,
process::exit,
sync::{
atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering},
Arc, Mutex, RwLock,
},
thread::{sleep, Builder, JoinHandle},
time::{Duration, Instant},
use crate::cli::Config;
use log::*;
use rayon::prelude::*;
use solana_client::perf_utils::{sample_txs, SampleStats};
use solana_core::gen_keys::GenKeys;
use solana_faucet::faucet::request_airdrop_transaction;
use solana_measure::measure::Measure;
use solana_metrics::{self, datapoint_info};
use solana_sdk::{
client::Client,
clock::{DEFAULT_TICKS_PER_SECOND, DEFAULT_TICKS_PER_SLOT, MAX_PROCESSING_AGE},
commitment_config::CommitmentConfig,
fee_calculator::FeeCalculator,
hash::Hash,
message::Message,
pubkey::Pubkey,
signature::{Keypair, Signer},
system_instruction, system_transaction,
timing::{duration_as_ms, duration_as_s, duration_as_us, timestamp},
transaction::Transaction,
};
use std::{
collections::{HashSet, VecDeque},
net::SocketAddr,
process::exit,
sync::{
atomic::{AtomicBool, AtomicIsize, AtomicUsize, Ordering},
Arc, Mutex, RwLock,
},
thread::{sleep, Builder, JoinHandle},
time::{Duration, Instant},
};
// The point at which transactions become "too old", in seconds.
const MAX_TX_QUEUE_AGE: u64 = (MAX_PROCESSING_AGE as f64 * DEFAULT_S_PER_SLOT) as u64;
const MAX_TX_QUEUE_AGE: u64 =
MAX_PROCESSING_AGE as u64 * DEFAULT_TICKS_PER_SLOT / DEFAULT_TICKS_PER_SECOND;
pub const MAX_SPENDS_PER_TX: u64 = 4;
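For a sense of the magnitude on both sides of this hunk: assuming the usual SDK constants `MAX_PROCESSING_AGE = 150`, `DEFAULT_TICKS_PER_SLOT = 64`, and `DEFAULT_TICKS_PER_SECOND = 160` (so `DEFAULT_S_PER_SLOT = 64 / 160 = 0.4`), the two formulas agree: `150 * 0.4 = 150 * 64 / 160 = 60` seconds.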
@@ -47,12 +46,14 @@ pub type Result<T> = std::result::Result<T, BenchTpsError>;
pub type SharedTransactions = Arc<RwLock<VecDeque<Vec<(Transaction, u64)>>>>;
fn get_latest_blockhash<T: Client>(client: &T) -> Hash {
fn get_recent_blockhash<T: Client>(client: &T) -> (Hash, FeeCalculator) {
loop {
match client.get_latest_blockhash_with_commitment(CommitmentConfig::processed()) {
Ok((blockhash, _)) => return blockhash,
match client.get_recent_blockhash_with_commitment(CommitmentConfig::processed()) {
Ok((blockhash, fee_calculator, _last_valid_slot)) => {
return (blockhash, fee_calculator)
}
Err(err) => {
info!("Couldn't get last blockhash: {:?}", err);
info!("Couldn't get recent blockhash: {:?}", err);
sleep(Duration::from_secs(1));
}
};
@@ -239,19 +240,19 @@ where
let shared_txs: SharedTransactions = Arc::new(RwLock::new(VecDeque::new()));
let blockhash = Arc::new(RwLock::new(get_latest_blockhash(client.as_ref())));
let recent_blockhash = Arc::new(RwLock::new(get_recent_blockhash(client.as_ref()).0));
let shared_tx_active_thread_count = Arc::new(AtomicIsize::new(0));
let total_tx_sent_count = Arc::new(AtomicUsize::new(0));
let blockhash_thread = {
let exit_signal = exit_signal.clone();
let blockhash = blockhash.clone();
let recent_blockhash = recent_blockhash.clone();
let client = client.clone();
let id = id.pubkey();
Builder::new()
.name("solana-blockhash-poller".to_string())
.spawn(move || {
poll_blockhash(&exit_signal, &blockhash, &client, &id);
poll_blockhash(&exit_signal, &recent_blockhash, &client, &id);
})
.unwrap()
};
@@ -271,7 +272,7 @@ where
let start = Instant::now();
generate_chunked_transfers(
blockhash,
recent_blockhash,
&shared_txs,
shared_tx_active_thread_count,
source_keypair_chunks,
@@ -391,22 +392,6 @@ fn generate_txs(
}
}
fn get_new_latest_blockhash<T: Client>(client: &Arc<T>, blockhash: &Hash) -> Option<Hash> {
let start = Instant::now();
while start.elapsed().as_secs() < 5 {
if let Ok(new_blockhash) = client.get_latest_blockhash() {
if new_blockhash != *blockhash {
return Some(new_blockhash);
}
}
debug!("Got same blockhash ({:?}), will retry...", blockhash);
// Retry ~twice during a slot
sleep(Duration::from_millis(DEFAULT_MS_PER_SLOT / 2));
}
None
}
fn poll_blockhash<T: Client>(
exit_signal: &Arc<AtomicBool>,
blockhash: &Arc<RwLock<Hash>>,
@@ -418,7 +403,7 @@ fn poll_blockhash<T: Client>(
loop {
let blockhash_updated = {
let old_blockhash = *blockhash.read().unwrap();
if let Some(new_blockhash) = get_new_latest_blockhash(client, &old_blockhash) {
if let Ok((new_blockhash, _fee)) = client.get_new_blockhash(&old_blockhash) {
*blockhash.write().unwrap() = new_blockhash;
blockhash_last_updated = Instant::now();
true
@@ -556,16 +541,16 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
self.len(),
);
let blockhash = get_latest_blockhash(client.as_ref());
let (blockhash, _fee_calculator) = get_recent_blockhash(client.as_ref());
// re-sign retained to_fund_txes with updated blockhash
self.sign(blockhash);
self.send(client);
self.send(&client);
// Sleep a few slots to allow transactions to process
sleep(Duration::from_secs(1));
self.verify(client, to_lamports);
self.verify(&client, to_lamports);
// retry anything that seems to have dropped through cracks
// again since these txs are all or nothing, they're fine to
@@ -580,7 +565,7 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
let to_fund_txs: Vec<(&Keypair, Transaction)> = to_fund
.par_iter()
.map(|(k, t)| {
let instructions = system_instruction::transfer_many(&k.pubkey(), t);
let instructions = system_instruction::transfer_many(&k.pubkey(), &t);
let message = Message::new(&instructions, Some(&k.pubkey()));
(*k, Transaction::new_unsigned(message))
})
@@ -633,7 +618,7 @@ impl<'a> FundingTransactions<'a> for Vec<(&'a Keypair, Transaction)> {
return None;
}
let verified = if verify_funding_transfer(&client, tx, to_lamports) {
let verified = if verify_funding_transfer(&client, &tx, to_lamports) {
verified_txs.fetch_add(1, Ordering::Relaxed);
Some(k.pubkey())
} else {
@@ -748,8 +733,8 @@ pub fn airdrop_lamports<T: Client>(
id.pubkey(),
);
let blockhash = get_latest_blockhash(client);
match request_airdrop_transaction(faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
let (blockhash, _fee_calculator) = get_recent_blockhash(client);
match request_airdrop_transaction(&faucet_addr, &id.pubkey(), airdrop_amount, blockhash) {
Ok(transaction) => {
let mut tries = 0;
loop {
@@ -906,16 +891,8 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
// pay for the transaction fees in a new run.
let enough_lamports = 8 * lamports_per_account / 10;
if first_keypair_balance < enough_lamports || last_keypair_balance < enough_lamports {
let single_sig_message = Message::new_with_blockhash(
&[Instruction::new_with_bytes(
Pubkey::new_unique(),
&[],
vec![AccountMeta::new(Pubkey::new_unique(), true)],
)],
None,
&client.get_latest_blockhash().unwrap(),
);
let max_fee = client.get_fee_for_message(&single_sig_message).unwrap();
let fee_rate_governor = client.get_fee_rate_governor().unwrap();
let max_fee = fee_rate_governor.max_lamports_per_signature;
let extra_fees = extra * max_fee;
let total_keypairs = keypairs.len() as u64 + 1; // Add one for funding keypair
let total = lamports_per_account * total_keypairs + extra_fees;
@@ -948,19 +925,17 @@ pub fn generate_and_fund_keypairs<T: 'static + Client + Send + Sync>(
#[cfg(test)]
mod tests {
use {
super::*,
solana_runtime::{bank::Bank, bank_client::BankClient},
solana_sdk::{
client::SyncClient, fee_calculator::FeeRateGovernor,
genesis_config::create_genesis_config,
},
};
use super::*;
use solana_runtime::bank::Bank;
use solana_runtime::bank_client::BankClient;
use solana_sdk::client::SyncClient;
use solana_sdk::fee_calculator::FeeRateGovernor;
use solana_sdk::genesis_config::create_genesis_config;
#[test]
fn test_bench_tps_bank_client() {
let (genesis_config, id) = create_genesis_config(10_000);
let bank = Bank::new_for_tests(&genesis_config);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let config = Config {
@@ -981,7 +956,7 @@ mod tests {
#[test]
fn test_bench_tps_fund_keys() {
let (genesis_config, id) = create_genesis_config(10_000);
let bank = Bank::new_for_tests(&genesis_config);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let keypair_count = 20;
let lamports = 20;
@@ -1004,7 +979,7 @@ mod tests {
let (mut genesis_config, id) = create_genesis_config(10_000);
let fee_rate_governor = FeeRateGovernor::new(11, 0);
genesis_config.fee_rate_governor = fee_rate_governor;
let bank = Bank::new_for_tests(&genesis_config);
let bank = Bank::new(&genesis_config);
let client = Arc::new(BankClient::new(bank));
let keypair_count = 20;
let lamports = 20;


@@ -1,13 +1,11 @@
use {
clap::{crate_description, crate_name, App, Arg, ArgMatches},
solana_faucet::faucet::FAUCET_PORT,
solana_sdk::{
fee_calculator::FeeRateGovernor,
pubkey::Pubkey,
signature::{read_keypair_file, Keypair},
},
std::{net::SocketAddr, process::exit, time::Duration},
use clap::{crate_description, crate_name, App, Arg, ArgMatches};
use solana_faucet::faucet::FAUCET_PORT;
use solana_sdk::fee_calculator::FeeRateGovernor;
use solana_sdk::{
pubkey::Pubkey,
signature::{read_keypair_file, Keypair},
};
use std::{net::SocketAddr, process::exit, time::Duration};
const NUM_LAMPORTS_PER_ACCOUNT_DEFAULT: u64 = solana_sdk::native_token::LAMPORTS_PER_SOL;


@@ -1,20 +1,13 @@
#![allow(clippy::integer_arithmetic)]
use {
log::*,
solana_bench_tps::{
bench::{do_bench_tps, generate_and_fund_keypairs, generate_keypairs},
cli,
},
solana_genesis::Base64Account,
solana_gossip::gossip_service::{discover_cluster, get_client, get_multi_client},
solana_sdk::{
fee_calculator::FeeRateGovernor,
signature::{Keypair, Signer},
system_program,
},
solana_streamer::socket::SocketAddrSpace,
std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc},
};
use log::*;
use solana_bench_tps::bench::{do_bench_tps, generate_and_fund_keypairs, generate_keypairs};
use solana_bench_tps::cli;
use solana_core::gossip_service::{discover_cluster, get_client, get_multi_client};
use solana_genesis::Base64Account;
use solana_sdk::fee_calculator::FeeRateGovernor;
use solana_sdk::signature::{Keypair, Signer};
use solana_sdk::system_program;
use std::{collections::HashMap, fs::File, io::prelude::*, path::Path, process::exit, sync::Arc};
/// Number of signatures for all transactions in ~1 week at ~100K TPS
pub const NUM_SIGNATURES_FOR_TXS: u64 = 100_000 * 60 * 60 * 24 * 7;
@@ -46,7 +39,7 @@ fn main() {
let keypair_count = *tx_count * keypair_multiplier;
if *write_to_client_file {
info!("Generating {} keypairs", keypair_count);
let (keypairs, _) = generate_keypairs(id, keypair_count as u64);
let (keypairs, _) = generate_keypairs(&id, keypair_count as u64);
let num_accounts = keypairs.len() as u64;
let max_fee =
FeeRateGovernor::new(*target_lamports_per_signature, 0).max_lamports_per_signature;
@@ -75,14 +68,13 @@ fn main() {
}
info!("Connecting to the cluster");
let nodes = discover_cluster(entrypoint_addr, *num_nodes, SocketAddrSpace::Unspecified)
.unwrap_or_else(|err| {
eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
exit(1);
});
let nodes = discover_cluster(&entrypoint_addr, *num_nodes).unwrap_or_else(|err| {
eprintln!("Failed to discover {} nodes: {:?}", num_nodes, err);
exit(1);
});
let client = if *multi_client {
let (client, num_clients) = get_multi_client(&nodes, &SocketAddrSpace::Unspecified);
let (client, num_clients) = get_multi_client(&nodes);
if nodes.len() < num_clients {
eprintln!(
"Error: Insufficient nodes discovered. Expecting {} or more",
@@ -96,7 +88,7 @@ fn main() {
let mut target_client = None;
for node in nodes {
if node.id == *target_node {
target_client = Some(Arc::new(get_client(&[node], &SocketAddrSpace::Unspecified)));
target_client = Some(Arc::new(get_client(&[node])));
break;
}
}
@@ -105,7 +97,7 @@ fn main() {
exit(1);
})
} else {
Arc::new(get_client(&nodes, &SocketAddrSpace::Unspecified))
Arc::new(get_client(&nodes))
};
let keypairs = if *read_from_client_file {
@@ -143,7 +135,7 @@ fn main() {
generate_and_fund_keypairs(
client.clone(),
Some(*faucet_addr),
id,
&id,
keypair_count,
*num_lamports_per_account,
)


@@ -1,44 +1,28 @@
#![allow(clippy::integer_arithmetic)]
use {
serial_test::serial,
solana_bench_tps::{
bench::{do_bench_tps, generate_and_fund_keypairs},
cli::Config,
},
solana_client::thin_client::create_client,
solana_core::validator::ValidatorConfig,
solana_faucet::faucet::run_local_faucet_with_port,
solana_gossip::cluster_info::VALIDATOR_PORT_RANGE,
solana_local_cluster::{
local_cluster::{ClusterConfig, LocalCluster},
validator_configs::make_identical_validator_configs,
},
solana_sdk::signature::{Keypair, Signer},
solana_streamer::socket::SocketAddrSpace,
std::{
sync::{mpsc::channel, Arc},
time::Duration,
},
};
use serial_test_derive::serial;
use solana_bench_tps::bench::{do_bench_tps, generate_and_fund_keypairs};
use solana_bench_tps::cli::Config;
use solana_client::thin_client::create_client;
use solana_core::cluster_info::VALIDATOR_PORT_RANGE;
use solana_core::validator::ValidatorConfig;
use solana_faucet::faucet::run_local_faucet_with_port;
use solana_local_cluster::local_cluster::{ClusterConfig, LocalCluster};
use solana_sdk::signature::{Keypair, Signer};
use std::sync::{mpsc::channel, Arc};
use std::time::Duration;
fn test_bench_tps_local_cluster(config: Config) {
let native_instruction_processors = vec![];
solana_logger::setup();
const NUM_NODES: usize = 1;
let cluster = LocalCluster::new(
&mut ClusterConfig {
node_stakes: vec![999_990; NUM_NODES],
cluster_lamports: 200_000_000,
validator_configs: make_identical_validator_configs(
&ValidatorConfig::default(),
NUM_NODES,
),
native_instruction_processors,
..ClusterConfig::default()
},
SocketAddrSpace::Unspecified,
);
let cluster = LocalCluster::new(&mut ClusterConfig {
node_stakes: vec![999_990; NUM_NODES],
cluster_lamports: 200_000_000,
validator_configs: vec![ValidatorConfig::default(); NUM_NODES],
native_instruction_processors,
..ClusterConfig::default()
});
let faucet_keypair = Keypair::new();
cluster.transfer(


@@ -1,88 +0,0 @@
# Leader Duplicate Block Slashing
This design describes how the cluster slashes leaders that produce duplicate
blocks.
Leaders that produce multiple blocks for the same slot increase the number of
potential forks that the cluster has to resolve.
## Primitives
1. gossip_root: Nodes now gossip their current root
2. gossip_duplicate_slots: Nodes can gossip up to `N` duplicate slot proofs.
3. `DUPLICATE_THRESHOLD`: The minimum percentage of stake that needs to vote on a fork with version `X` of a duplicate slot, in order for that fork to become votable.
## Protocol
1. When WindowStage detects a duplicate slot proof `P`, it checks the new `gossip_root` to see if `<= 1/3` of the nodes have rooted a slot `S >= P`. If so, it pushes a proof to `gossip_duplicate_slots` to gossip. WindowStage then signals ReplayStage about this duplicate slot `S`. These proofs can be purged from gossip once the validator sees > 2/3 of people gossiping roots `R > S`.
2. When ReplayStage receives the signal for a duplicate slot `S` from `1)` above, the validator monitors gossip and replay, waiting for `>= DUPLICATE_THRESHOLD` votes for the same hash, which implies the same version of the slot. If this condition is met for some version with hash `H` of slot `S`, that version is then known as the `duplicate_confirmed` version of the slot.
Before a duplicate slot `S` is `duplicate_confirmed`, it's first excluded from the vote candidate set in the fork choice rules. In addition, ReplayStage also resets PoH to the *latest* ancestor of the *earliest* `non-duplicate/confirmed_duplicate_slot`, so that block generation can start happening on the earliest known *safe* block.
Some notes about the `DUPLICATE_THRESHOLD`. In the cases below, assume `DUPLICATE_THRESHOLD = 52`:
a) If less than `2 * DUPLICATE_THRESHOLD - 1` of the network is malicious, then there can only be one such `duplicate_confirmed` version of the slot. With `DUPLICATE_THRESHOLD = 52%`, this is
a malicious tolerance of `4%`
b) The liveness of the network is bounded by `1 - DUPLICATE_THRESHOLD - SWITCH_THRESHOLD`. This is because you need at least `SWITCH_THRESHOLD` of the stake voting on a different fork in order to switch off of a duplicate fork that has `< DUPLICATE_THRESHOLD` stake voting on it and is *not* `duplicate_confirmed`. For `DUPLICATE_THRESHOLD = 52%` and `SWITCH_THRESHOLD = 38%`, this implies a liveness tolerance of `10%`.
For example in the situation below, validators that voted on `2` can't vote any further on fork `2` because it's been removed from fork choice. Now slot 6 better have enough stake for a switching proof, or the network halts.
```text
|-------- 2 (51% voted, then detected this slot was a duplicate and removed this slot from fork choice)
0---|
|---------- 6 (39%)
```
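Working the two bounds above with the example values (thresholds taken as fractions of total stake): the safety bound tolerates `2 * 0.52 - 1 = 0.04`, i.e. `4%` malicious stake, and the liveness bound tolerates `1 - 0.52 - 0.38 = 0.10`, i.e. `10%` of stake stranded on a removed fork.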
3. Switching proofs need to be extended to allow including vote hashes from different versions of the same slot (detected through `1)` above). Right now this is not supported, since switching proofs can only be built using votes from banks in BankForks, and two different versions of the same slot cannot simultaneously exist in BankForks. For instance:
```text
|-------- 2
|
0------------- 1 ------ 2'
|
|---------- 6
```
Imagine the two versions of the slot, 2 and 2', each have `DUPLICATE_THRESHOLD / 2` of the votes on them, so neither duplicate can be confirmed. At most slot 6 has `1 - DUPLICATE_THRESHOLD / 2` of the votes on it, which is less than the switching threshold. Thus, in order for validators voting on `2` or `2'` to switch to slot 6 and make progress, they need to incorporate votes from the other version of the slot into their switching proofs.
### The repair problem.
Now what happens if one of the following occurs:
1) Due to network blips/latencies, some validators fail to observe the gossip votes before they are overwritten by newer votes? Then some validators may conclude a slot `S` is `duplicate_confirmed` while others don't.
2) Due to lockouts, no version of duplicate slot `S` reaches `duplicate_confirmed` status, but one of its descendants may reach `duplicate_confirmed` after those lockouts expire, which by definition, means `S` is also `duplicate_confirmed`.
3) People who are catching up and don't see the votes in gossip encounter a duplicate block and can't make progress.
We assume that, given the network is eventually stable, if at least one correct validator observed that `S` is `duplicate_confirmed`, and `S` is part of the heaviest fork, then eventually all validators will observe some descendant of `S` as duplicate confirmed.
The problem we need to solve is modeled simply by the scenario below:
```text
1 -> 2 (duplicate) -> 3 -> 4 (duplicate)
```
Assume the following:
1. Due to gossiping duplicate proofs, we assume everyone will eventually see duplicate proofs for 2 and 4, so everyone agrees to remove them from fork choice until they are `duplicate_confirmed`.
2. Due to lockouts, `> DUPLICATE_THRESHOLD` of the stake votes on 4, but not 2. This means at least `DUPLICATE_THRESHOLD` of people have the "correct" version of both slots 2 and 4.
3. However, the remaining `1-DUPLICATE_THRESHOLD` of people have wrong version of 2. This means in replay, their slot 3 will be marked dead, *even though the faulty slot is 2*. The goal is to get these people on the right fork again.
Possible solution:
1. Change `EpochSlots` to signal when a bank is frozen, not when a slot is complete. If we see > `DUPLICATE_THRESHOLD` have frozen the dead slot 3, then we attempt recovery. Note this does not mean that all `DUPLICATE_THRESHOLD` have frozen the same version of the bank, it's just a signal to us that something may be wrong with our version of the bank.
2. Recovery takes the form of a special repair request, `RepairDuplicateConfirmed(dead_slot, Vec<(Slot, Hash)>)`, which specifies a dead slot, and then a vector of `(slot, hash)` of `N` of its latest parents.
3. The repairer sees this request and responds with the correct hash only if any element of the `(slot, hash)` vector is both `duplicate_confirmed` and the hash doesn't match the requester's hash in the vector.
4. Once the requester sees the "correct" hash is different than their frozen hash, they dump the block so that they can accept a new block, and ask the network for the block with the correct hash.
Of course the repairer might lie to you, and you'll get the wrong version of the block, in which case you'll end up with another dead block and repeat the procedure.


@@ -1,29 +0,0 @@
[package]
name = "solana-bucket-map"
version = "1.9.2"
description = "solana-bucket-map"
homepage = "https://solana.com/"
documentation = "https://docs.rs/solana-bucket-map"
readme = "../README.md"
repository = "https://github.com/solana-labs/solana"
authors = ["Solana Maintainers <maintainers@solana.foundation>"]
license = "Apache-2.0"
edition = "2021"
[dependencies]
rayon = "1.5.0"
solana-logger = { path = "../logger", version = "=1.9.2" }
solana-sdk = { path = "../sdk", version = "=1.9.2" }
memmap2 = "0.5.0"
log = { version = "0.4.11" }
solana-measure = { path = "../measure", version = "=1.9.2" }
rand = "0.7.0"
fs_extra = "1.2.0"
tempfile = "3.2.0"
[lib]
crate-type = ["lib"]
name = "solana_bucket_map"
[[bench]]
name = "bucket_map"


@@ -1,77 +0,0 @@
#![feature(test)]
macro_rules! DEFINE_NxM_BENCH {
($i:ident, $n:literal, $m:literal) => {
mod $i {
use super::*;
#[bench]
fn bench_insert_baseline_hashmap(bencher: &mut Bencher) {
do_bench_insert_baseline_hashmap(bencher, $n, $m);
}
#[bench]
fn bench_insert_bucket_map(bencher: &mut Bencher) {
do_bench_insert_bucket_map(bencher, $n, $m);
}
}
};
}
extern crate test;
use {
rayon::prelude::*,
solana_bucket_map::bucket_map::{BucketMap, BucketMapConfig},
solana_sdk::pubkey::Pubkey,
std::{collections::hash_map::HashMap, sync::RwLock},
test::Bencher,
};
type IndexValue = u64;
DEFINE_NxM_BENCH!(dim_01x02, 1, 2);
DEFINE_NxM_BENCH!(dim_02x04, 2, 4);
DEFINE_NxM_BENCH!(dim_04x08, 4, 8);
DEFINE_NxM_BENCH!(dim_08x16, 8, 16);
DEFINE_NxM_BENCH!(dim_16x32, 16, 32);
DEFINE_NxM_BENCH!(dim_32x64, 32, 64);
/// Benchmark insert with Hashmap as baseline for N threads inserting M keys each
fn do_bench_insert_baseline_hashmap(bencher: &mut Bencher, n: usize, m: usize) {
let index = RwLock::new(HashMap::new());
(0..n).into_iter().into_par_iter().for_each(|i| {
let key = Pubkey::new_unique();
index
.write()
.unwrap()
.insert(key, vec![(i, IndexValue::default())]);
});
bencher.iter(|| {
(0..n).into_iter().into_par_iter().for_each(|_| {
for j in 0..m {
let key = Pubkey::new_unique();
index
.write()
.unwrap()
.insert(key, vec![(j, IndexValue::default())]);
}
})
});
}
/// Benchmark insert with BucketMap with N buckets for N threads inserting M keys each
fn do_bench_insert_bucket_map(bencher: &mut Bencher, n: usize, m: usize) {
let index = BucketMap::new(BucketMapConfig::new(n));
(0..n).into_iter().into_par_iter().for_each(|i| {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![(i, IndexValue::default())], 0)));
});
bencher.iter(|| {
(0..n).into_iter().into_par_iter().for_each(|_| {
for j in 0..m {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![(j, IndexValue::default())], 0)));
}
})
});
}


@@ -1,491 +0,0 @@
use {
crate::{
bucket_item::BucketItem,
bucket_map::BucketMapError,
bucket_stats::BucketMapStats,
bucket_storage::{BucketStorage, Uid, DEFAULT_CAPACITY_POW2, UID_UNLOCKED},
index_entry::IndexEntry,
MaxSearch, RefCount,
},
rand::{thread_rng, Rng},
solana_measure::measure::Measure,
solana_sdk::pubkey::Pubkey,
std::{
collections::hash_map::DefaultHasher,
hash::{Hash, Hasher},
marker::PhantomData,
ops::RangeBounds,
path::PathBuf,
sync::{
atomic::{AtomicUsize, Ordering},
Arc, Mutex,
},
},
};
#[derive(Default)]
pub struct ReallocatedItems {
// Some if the index was reallocated
// u64 is the random value associated with the new index
pub index: Option<(u64, BucketStorage)>,
// Some for a data bucket reallocation
// u64 is data bucket index
pub data: Option<(u64, BucketStorage)>,
}
#[derive(Default)]
pub struct Reallocated {
/// > 0 if reallocations are encoded
pub active_reallocations: AtomicUsize,
/// actual reallocated bucket
/// mutex because bucket grow code runs with a read lock
pub items: Mutex<ReallocatedItems>,
}
impl Reallocated {
/// specify that a reallocation has occurred
pub fn add_reallocation(&self) {
assert_eq!(
0,
self.active_reallocations.fetch_add(1, Ordering::Relaxed),
"Only 1 reallocation can occur at a time"
);
}
/// Return true IFF a reallocation has occurred.
/// Calling this takes conceptual ownership of the reallocation encoded in the struct.
pub fn get_reallocated(&self) -> bool {
self.active_reallocations
.compare_exchange(1, 0, Ordering::Acquire, Ordering::Relaxed)
.is_ok()
}
}
// >= 2 instances of BucketStorage per 'bucket' in the bucket map. 1 for index, >= 1 for data
pub struct Bucket<T> {
drives: Arc<Vec<PathBuf>>,
//index
pub index: BucketStorage,
//random offset for the index
random: u64,
//storage buckets to store SlotSlice up to a power of 2 in len
pub data: Vec<BucketStorage>,
_phantom: PhantomData<T>,
stats: Arc<BucketMapStats>,
pub reallocated: Reallocated,
}
impl<T: Clone + Copy> Bucket<T> {
pub fn new(
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
stats: Arc<BucketMapStats>,
) -> Self {
let index = BucketStorage::new(
Arc::clone(&drives),
1,
std::mem::size_of::<IndexEntry>() as u64,
max_search,
Arc::clone(&stats.index),
);
Self {
random: thread_rng().gen(),
drives,
index,
data: vec![],
_phantom: PhantomData::default(),
stats,
reallocated: Reallocated::default(),
}
}
pub fn bucket_len(&self) -> u64 {
self.index.used.load(Ordering::Relaxed)
}
pub fn keys(&self) -> Vec<Pubkey> {
let mut rv = vec![];
for i in 0..self.index.capacity() {
if self.index.uid(i) == UID_UNLOCKED {
continue;
}
let ix: &IndexEntry = self.index.get(i);
rv.push(ix.key);
}
rv
}
pub fn items_in_range<R>(&self, range: &Option<&R>) -> Vec<BucketItem<T>>
where
R: RangeBounds<Pubkey>,
{
let mut result = Vec::with_capacity(self.index.used.load(Ordering::Relaxed) as usize);
for i in 0..self.index.capacity() {
let ii = i % self.index.capacity();
if self.index.uid(ii) == UID_UNLOCKED {
continue;
}
let ix: &IndexEntry = self.index.get(ii);
let key = ix.key;
if range.map(|r| r.contains(&key)).unwrap_or(true) {
let val = ix.read_value(self);
result.push(BucketItem {
pubkey: key,
ref_count: ix.ref_count(),
slot_list: val.map(|(v, _ref_count)| v.to_vec()).unwrap_or_default(),
});
}
}
result
}
pub fn find_entry(&self, key: &Pubkey) -> Option<(&IndexEntry, u64)> {
Self::bucket_find_entry(&self.index, key, self.random)
}
fn find_entry_mut(&self, key: &Pubkey) -> Option<(&mut IndexEntry, u64)> {
Self::bucket_find_entry_mut(&self.index, key, self.random)
}
fn bucket_find_entry_mut<'a>(
index: &'a BucketStorage,
key: &Pubkey,
random: u64,
) -> Option<(&'a mut IndexEntry, u64)> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i % index.capacity();
if index.uid(ii) == UID_UNLOCKED {
continue;
}
let elem: &mut IndexEntry = index.get_mut(ii);
if elem.key == *key {
return Some((elem, ii));
}
}
None
}
fn bucket_find_entry<'a>(
index: &'a BucketStorage,
key: &Pubkey,
random: u64,
) -> Option<(&'a IndexEntry, u64)> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i % index.capacity();
if index.uid(ii) == UID_UNLOCKED {
continue;
}
let elem: &IndexEntry = index.get(ii);
if elem.key == *key {
return Some((elem, ii));
}
}
None
}
fn bucket_create_key(
index: &BucketStorage,
key: &Pubkey,
elem_uid: Uid,
random: u64,
) -> Result<u64, BucketMapError> {
let ix = Self::bucket_index_ix(index, key, random);
for i in ix..ix + index.max_search() {
let ii = i as u64 % index.capacity();
if index.uid(ii) != UID_UNLOCKED {
continue;
}
index.allocate(ii, elem_uid).unwrap();
let mut elem: &mut IndexEntry = index.get_mut(ii);
elem.key = *key;
// These will be overwritten after allocation by callers.
// Since this part of the mmapped file could have previously been used by someone else, there can be garbage here.
elem.ref_count = 0;
elem.storage_offset = 0;
elem.storage_capacity_when_created_pow2 = 0;
elem.num_slots = 0;
//debug!( "INDEX ALLOC {:?} {} {} {}", key, ii, index.capacity, elem_uid );
return Ok(ii);
}
Err(BucketMapError::IndexNoSpace(index.capacity_pow2))
}
pub fn addref(&mut self, key: &Pubkey) -> Option<RefCount> {
let (elem, _) = self.find_entry_mut(key)?;
elem.ref_count += 1;
Some(elem.ref_count)
}
pub fn unref(&mut self, key: &Pubkey) -> Option<RefCount> {
let (elem, _) = self.find_entry_mut(key)?;
elem.ref_count -= 1;
Some(elem.ref_count)
}
fn create_key(&self, key: &Pubkey) -> Result<u64, BucketMapError> {
Self::bucket_create_key(&self.index, key, IndexEntry::key_uid(key), self.random)
}
pub fn read_value(&self, key: &Pubkey) -> Option<(&[T], RefCount)> {
//debug!("READ_VALUE: {:?}", key);
let (elem, _) = self.find_entry(key)?;
elem.read_value(self)
}
pub fn try_write(
&mut self,
key: &Pubkey,
data: &[T],
ref_count: u64,
) -> Result<(), BucketMapError> {
let best_fit_bucket = IndexEntry::data_bucket_from_num_slots(data.len() as u64);
if self.data.get(best_fit_bucket as usize).is_none() {
// fail early if the data bucket we need doesn't exist - we don't want the index entry partially allocated
return Err(BucketMapError::DataNoSpace((best_fit_bucket, 0)));
}
let index_entry = self.find_entry_mut(key);
let (elem, elem_ix) = match index_entry {
None => {
let ii = self.create_key(key)?;
let elem: &mut IndexEntry = self.index.get_mut(ii);
(elem, ii)
}
Some(res) => res,
};
elem.ref_count = ref_count;
let elem_uid = self.index.uid(elem_ix);
let bucket_ix = elem.data_bucket_ix();
let current_bucket = &self.data[bucket_ix as usize];
if best_fit_bucket == bucket_ix && elem.num_slots > 0 {
// in place update
let elem_loc = elem.data_loc(current_bucket);
let slice: &mut [T] = current_bucket.get_mut_cell_slice(elem_loc, data.len() as u64);
assert!(current_bucket.uid(elem_loc) == elem_uid);
elem.num_slots = data.len() as u64;
slice.clone_from_slice(data);
Ok(())
} else {
// need to move the allocation to a best fit spot
let best_bucket = &self.data[best_fit_bucket as usize];
let cap_power = best_bucket.capacity_pow2;
let cap = best_bucket.capacity();
let pos = thread_rng().gen_range(0, cap);
for i in pos..pos + self.index.max_search() {
let ix = i % cap;
if best_bucket.uid(ix) == UID_UNLOCKED {
let elem_loc = elem.data_loc(current_bucket);
if elem.num_slots > 0 {
current_bucket.free(elem_loc, elem_uid);
}
elem.storage_offset = ix;
elem.storage_capacity_when_created_pow2 = best_bucket.capacity_pow2;
elem.num_slots = data.len() as u64;
//debug!( "DATA ALLOC {:?} {} {} {}", key, elem.data_location, best_bucket.capacity, elem_uid );
if elem.num_slots > 0 {
best_bucket.allocate(ix, elem_uid).unwrap();
let slice = best_bucket.get_mut_cell_slice(ix, data.len() as u64);
slice.copy_from_slice(data);
}
return Ok(());
}
}
Err(BucketMapError::DataNoSpace((best_fit_bucket, cap_power)))
}
}
pub fn delete_key(&mut self, key: &Pubkey) {
if let Some((elem, elem_ix)) = self.find_entry(key) {
let elem_uid = self.index.uid(elem_ix);
if elem.num_slots > 0 {
let data_bucket = &self.data[elem.data_bucket_ix() as usize];
let loc = elem.data_loc(data_bucket);
//debug!( "DATA FREE {:?} {} {} {}", key, elem.data_location, data_bucket.capacity, elem_uid );
data_bucket.free(loc, elem_uid);
}
//debug!("INDEX FREE {:?} {}", key, elem_uid);
self.index.free(elem_ix, elem_uid);
}
}
pub fn grow_index(&self, current_capacity_pow2: u8) {
if self.index.capacity_pow2 == current_capacity_pow2 {
let mut m = Measure::start("grow_index");
//debug!("GROW_INDEX: {}", current_capacity_pow2);
let increment = 1;
for i in increment.. {
//increasing the capacity by ^4 reduces the
//likelihood of a re-index collision of 2^(max_search)^2
//1 in 2^32
let index = BucketStorage::new_with_capacity(
Arc::clone(&self.drives),
1,
std::mem::size_of::<IndexEntry>() as u64,
// *2 causes rapid growth of index buckets
self.index.capacity_pow2 + i, // * 2,
self.index.max_search,
Arc::clone(&self.stats.index),
);
let random = thread_rng().gen();
let mut valid = true;
for ix in 0..self.index.capacity() {
let uid = self.index.uid(ix);
if UID_UNLOCKED != uid {
let elem: &IndexEntry = self.index.get(ix);
let new_ix = Self::bucket_create_key(&index, &elem.key, uid, random);
if new_ix.is_err() {
valid = false;
break;
}
let new_ix = new_ix.unwrap();
let new_elem: &mut IndexEntry = index.get_mut(new_ix);
*new_elem = *elem;
/*
let dbg_elem: IndexEntry = *new_elem;
assert_eq!(
Self::bucket_find_entry(&index, &elem.key, random).unwrap(),
(&dbg_elem, new_ix)
);
*/
}
}
if valid {
let sz = index.capacity();
{
let mut max = self.stats.index.max_size.lock().unwrap();
*max = std::cmp::max(*max, sz);
}
let mut items = self.reallocated.items.lock().unwrap();
items.index = Some((random, index));
self.reallocated.add_reallocation();
break;
}
}
m.stop();
self.stats.index.resizes.fetch_add(1, Ordering::Relaxed);
self.stats
.index
.resize_us
.fetch_add(m.as_us(), Ordering::Relaxed);
}
}
pub fn apply_grow_index(&mut self, random: u64, index: BucketStorage) {
self.random = random;
self.index = index;
}
fn elem_size() -> u64 {
std::mem::size_of::<T>() as u64
}
pub fn apply_grow_data(&mut self, ix: usize, bucket: BucketStorage) {
if self.data.get(ix).is_none() {
for i in self.data.len()..ix {
// insert empty data buckets
self.data.push(BucketStorage::new(
Arc::clone(&self.drives),
1 << i,
Self::elem_size(),
self.index.max_search,
Arc::clone(&self.stats.data),
))
}
self.data.push(bucket);
} else {
self.data[ix] = bucket;
}
}
/// grow a data bucket
/// The application of the new bucket is deferred until the next write lock.
pub fn grow_data(&self, data_index: u64, current_capacity_pow2: u8) {
let new_bucket = BucketStorage::new_resized(
&self.drives,
self.index.max_search,
self.data.get(data_index as usize),
std::cmp::max(current_capacity_pow2 + 1, DEFAULT_CAPACITY_POW2),
1 << data_index,
Self::elem_size(),
&self.stats.data,
);
self.reallocated.add_reallocation();
let mut items = self.reallocated.items.lock().unwrap();
items.data = Some((data_index, new_bucket));
}
fn bucket_index_ix(index: &BucketStorage, key: &Pubkey, random: u64) -> u64 {
let uid = IndexEntry::key_uid(key);
let mut s = DefaultHasher::new();
uid.hash(&mut s);
//the locally generated random will make it hard for an attacker
//to deterministically cause all the pubkeys to land in the same
//location in any bucket on all validators
random.hash(&mut s);
let ix = s.finish();
ix % index.capacity()
//debug!( "INDEX_IX: {:?} uid:{} loc: {} cap:{}", key, uid, location, index.capacity() );
}
/// grow the appropriate piece. Note this takes an immutable ref.
/// The actual grow is set into self.reallocated and applied later on a write lock
pub fn grow(&self, err: BucketMapError) {
match err {
BucketMapError::DataNoSpace((data_index, current_capacity_pow2)) => {
//debug!("GROWING SPACE {:?}", (data_index, current_capacity_pow2));
self.grow_data(data_index, current_capacity_pow2);
}
BucketMapError::IndexNoSpace(current_capacity_pow2) => {
//debug!("GROWING INDEX {}", sz);
self.grow_index(current_capacity_pow2);
}
}
}
/// if a bucket was resized previously with a read lock, then apply that resize now
pub fn handle_delayed_grows(&mut self) {
if self.reallocated.get_reallocated() {
// swap out the bucket that was resized previously with a read lock
let mut items = ReallocatedItems::default();
std::mem::swap(&mut items, &mut self.reallocated.items.lock().unwrap());
if let Some((random, bucket)) = items.index.take() {
self.apply_grow_index(random, bucket);
} else {
// data bucket
let (i, new_bucket) = items.data.take().unwrap();
self.apply_grow_data(i as usize, new_bucket);
}
}
}
pub fn insert(&mut self, key: &Pubkey, value: (&[T], RefCount)) {
let (new, refct) = value;
loop {
let rv = self.try_write(key, new, refct);
match rv {
Ok(_) => return,
Err(err) => {
self.grow(err);
self.handle_delayed_grows();
}
}
}
}
pub fn update<F>(&mut self, key: &Pubkey, mut updatefn: F)
where
F: FnMut(Option<(&[T], RefCount)>) -> Option<(Vec<T>, RefCount)>,
{
let current = self.read_value(key);
let new = updatefn(current);
if new.is_none() {
self.delete_key(key);
return;
}
let (new, refct) = new.unwrap();
self.insert(key, (&new, refct));
}
}


@@ -1,148 +0,0 @@
use {
crate::{
bucket::Bucket, bucket_item::BucketItem, bucket_map::BucketMapError,
bucket_stats::BucketMapStats, MaxSearch, RefCount,
},
solana_sdk::pubkey::Pubkey,
std::{
ops::RangeBounds,
path::PathBuf,
sync::{
atomic::{AtomicU64, Ordering},
Arc, RwLock, RwLockWriteGuard,
},
},
};
type LockedBucket<T> = RwLock<Option<Bucket<T>>>;
pub struct BucketApi<T: Clone + Copy> {
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
pub stats: Arc<BucketMapStats>,
bucket: LockedBucket<T>,
count: Arc<AtomicU64>,
}
impl<T: Clone + Copy> BucketApi<T> {
pub fn new(
drives: Arc<Vec<PathBuf>>,
max_search: MaxSearch,
stats: Arc<BucketMapStats>,
count: Arc<AtomicU64>,
) -> Self {
Self {
drives,
max_search,
stats,
bucket: RwLock::default(),
count,
}
}
/// Get the items for bucket
pub fn items_in_range<R>(&self, range: &Option<&R>) -> Vec<BucketItem<T>>
where
R: RangeBounds<Pubkey>,
{
self.bucket
.read()
.unwrap()
.as_ref()
.map(|bucket| bucket.items_in_range(range))
.unwrap_or_default()
}
/// Get the Pubkeys
pub fn keys(&self) -> Vec<Pubkey> {
self.bucket
.read()
.unwrap()
.as_ref()
.map_or_else(Vec::default, |bucket| bucket.keys())
}
/// Get the values for Pubkey `key`
pub fn read_value(&self, key: &Pubkey) -> Option<(Vec<T>, RefCount)> {
self.bucket.read().unwrap().as_ref().and_then(|bucket| {
bucket
.read_value(key)
.map(|(value, ref_count)| (value.to_vec(), ref_count))
})
}
pub fn bucket_len(&self) -> u64 {
self.bucket
.read()
.unwrap()
.as_ref()
.map(|bucket| bucket.bucket_len())
.unwrap_or_default()
}
pub fn delete_key(&self, key: &Pubkey) {
let mut bucket = self.get_write_bucket();
if let Some(bucket) = bucket.as_mut() {
bucket.delete_key(key)
}
}
fn get_write_bucket(&self) -> RwLockWriteGuard<Option<Bucket<T>>> {
let mut bucket = self.bucket.write().unwrap();
if bucket.is_none() {
*bucket = Some(Bucket::new(
Arc::clone(&self.drives),
self.max_search,
Arc::clone(&self.stats),
));
} else {
let write = bucket.as_mut().unwrap();
write.handle_delayed_grows();
self.count.store(write.bucket_len(), Ordering::Relaxed);
}
bucket
}
pub fn addref(&self, key: &Pubkey) -> Option<RefCount> {
self.get_write_bucket()
.as_mut()
.and_then(|bucket| bucket.addref(key))
}
pub fn unref(&self, key: &Pubkey) -> Option<RefCount> {
self.get_write_bucket()
.as_mut()
.and_then(|bucket| bucket.unref(key))
}
pub fn insert(&self, pubkey: &Pubkey, value: (&[T], RefCount)) {
let mut bucket = self.get_write_bucket();
bucket.as_mut().unwrap().insert(pubkey, value)
}
pub fn grow(&self, err: BucketMapError) {
// grows are special - they get a read lock and modify 'reallocated'
// the grown changes are applied the next time there is a write lock taken
if let Some(bucket) = self.bucket.read().unwrap().as_ref() {
bucket.grow(err)
}
}
pub fn update<F>(&self, key: &Pubkey, updatefn: F)
where
F: FnMut(Option<(&[T], RefCount)>) -> Option<(Vec<T>, RefCount)>,
{
let mut bucket = self.get_write_bucket();
bucket.as_mut().unwrap().update(key, updatefn)
}
pub fn try_write(
&self,
pubkey: &Pubkey,
value: (&[T], RefCount),
) -> Result<(), BucketMapError> {
let mut bucket = self.get_write_bucket();
bucket.as_mut().unwrap().try_write(pubkey, value.0, value.1)
}
}


@@ -1,8 +0,0 @@
use {crate::RefCount, solana_sdk::pubkey::Pubkey};
#[derive(Debug, Default, Clone)]
pub struct BucketItem<T> {
pub pubkey: Pubkey,
pub ref_count: RefCount,
pub slot_list: Vec<T>,
}


@@ -1,529 +0,0 @@
//! BucketMap is a mostly contention free concurrent map backed by MmapMut
use {
crate::{bucket_api::BucketApi, bucket_stats::BucketMapStats, MaxSearch, RefCount},
solana_sdk::pubkey::Pubkey,
std::{convert::TryInto, fmt::Debug, fs, path::PathBuf, sync::Arc},
tempfile::TempDir,
};
#[derive(Debug, Default, Clone)]
pub struct BucketMapConfig {
pub max_buckets: usize,
pub drives: Option<Vec<PathBuf>>,
pub max_search: Option<MaxSearch>,
}
impl BucketMapConfig {
/// Create a new BucketMapConfig
/// NOTE: BucketMap requires that max_buckets is a power of two
pub fn new(max_buckets: usize) -> BucketMapConfig {
BucketMapConfig {
max_buckets,
..BucketMapConfig::default()
}
}
}
pub struct BucketMap<T: Clone + Copy + Debug> {
buckets: Vec<Arc<BucketApi<T>>>,
drives: Arc<Vec<PathBuf>>,
max_buckets_pow2: u8,
pub stats: Arc<BucketMapStats>,
pub temp_dir: Option<TempDir>,
}
impl<T: Clone + Copy + Debug> Drop for BucketMap<T> {
fn drop(&mut self) {
if self.temp_dir.is_none() {
BucketMap::<T>::erase_previous_drives(&self.drives);
}
}
}
impl<T: Clone + Copy + Debug> std::fmt::Debug for BucketMap<T> {
fn fmt(&self, _f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Ok(())
}
}
#[derive(Debug)]
pub enum BucketMapError {
/// (bucket_index, current_capacity_pow2)
DataNoSpace((u64, u8)),
/// current_capacity_pow2
IndexNoSpace(u8),
}
impl<T: Clone + Copy + Debug> BucketMap<T> {
pub fn new(config: BucketMapConfig) -> Self {
assert_ne!(
config.max_buckets, 0,
"Max number of buckets must be non-zero"
);
assert!(
config.max_buckets.is_power_of_two(),
"Max number of buckets must be a power of two"
);
// this should be <= 1 << DEFAULT_CAPACITY or we end up searching the same items over and over - probably not a big deal since it is so small anyway
const MAX_SEARCH: MaxSearch = 32;
let max_search = config.max_search.unwrap_or(MAX_SEARCH);
if let Some(drives) = config.drives.as_ref() {
Self::erase_previous_drives(drives);
}
let mut temp_dir = None;
let drives = config.drives.unwrap_or_else(|| {
temp_dir = Some(TempDir::new().unwrap());
vec![temp_dir.as_ref().unwrap().path().to_path_buf()]
});
let drives = Arc::new(drives);
let mut per_bucket_count = Vec::with_capacity(config.max_buckets);
per_bucket_count.resize_with(config.max_buckets, Arc::default);
let stats = Arc::new(BucketMapStats {
per_bucket_count,
..BucketMapStats::default()
});
let buckets = stats
.per_bucket_count
.iter()
.map(|per_bucket_count| {
Arc::new(BucketApi::new(
Arc::clone(&drives),
max_search,
Arc::clone(&stats),
Arc::clone(per_bucket_count),
))
})
.collect();
// A simple log2 function that is correct if x is a power of two
let log2 = |x: usize| usize::BITS - x.leading_zeros() - 1;
Self {
buckets,
drives,
max_buckets_pow2: log2(config.max_buckets) as u8,
stats,
temp_dir,
}
}
fn erase_previous_drives(drives: &[PathBuf]) {
drives.iter().for_each(|folder| {
let _ = fs::remove_dir_all(&folder);
let _ = fs::create_dir_all(&folder);
})
}
pub fn num_buckets(&self) -> usize {
self.buckets.len()
}
/// Get the values for Pubkey `key`
pub fn read_value(&self, key: &Pubkey) -> Option<(Vec<T>, RefCount)> {
self.get_bucket(key).read_value(key)
}
/// Delete the Pubkey `key`
pub fn delete_key(&self, key: &Pubkey) {
self.get_bucket(key).delete_key(key);
}
/// Update Pubkey `key`'s value with `value`
pub fn insert(&self, key: &Pubkey, value: (&[T], RefCount)) {
self.get_bucket(key).insert(key, value)
}
/// Try to update Pubkey `key`'s value with `value`; returns an error if the target bucket must grow first
pub fn try_insert(&self, key: &Pubkey, value: (&[T], RefCount)) -> Result<(), BucketMapError> {
self.get_bucket(key).try_write(key, value)
}
/// Update Pubkey `key`'s value with function `updatefn`
pub fn update<F>(&self, key: &Pubkey, updatefn: F)
where
F: FnMut(Option<(&[T], RefCount)>) -> Option<(Vec<T>, RefCount)>,
{
self.get_bucket(key).update(key, updatefn)
}
pub fn get_bucket(&self, key: &Pubkey) -> &Arc<BucketApi<T>> {
self.get_bucket_from_index(self.bucket_ix(key))
}
pub fn get_bucket_from_index(&self, ix: usize) -> &Arc<BucketApi<T>> {
&self.buckets[ix]
}
/// Get the bucket index for Pubkey `key`
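/// Example: with max_buckets_pow2 = 3, the top 3 bits of the big-endian
/// u64 formed from the pubkey's first 8 bytes select one of 8 buckets,
/// so keys are spread by their high-order bits.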
pub fn bucket_ix(&self, key: &Pubkey) -> usize {
if self.max_buckets_pow2 > 0 {
let location = read_be_u64(key.as_ref());
(location >> (u64::BITS - self.max_buckets_pow2 as u32)) as usize
} else {
0
}
}
/// Increment the refcount for Pubkey `key`
pub fn addref(&self, key: &Pubkey) -> Option<RefCount> {
let ix = self.bucket_ix(key);
let bucket = &self.buckets[ix];
bucket.addref(key)
}
/// Decrement the refcount for Pubkey `key`
pub fn unref(&self, key: &Pubkey) -> Option<RefCount> {
let ix = self.bucket_ix(key);
let bucket = &self.buckets[ix];
bucket.unref(key)
}
}
/// Interpret the first 8 bytes of the input as a big-endian u64
fn read_be_u64(input: &[u8]) -> u64 {
assert!(input.len() >= std::mem::size_of::<u64>());
u64::from_be_bytes(input[0..std::mem::size_of::<u64>()].try_into().unwrap())
}
#[cfg(test)]
mod tests {
use {
super::*,
rand::{thread_rng, Rng},
std::{collections::HashMap, sync::RwLock},
};
#[test]
fn bucket_map_test_insert() {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.update(&key, |_| Some((vec![0], 0)));
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
}
#[test]
fn bucket_map_test_insert2() {
for pass in 0..3 {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
let bucket = index.get_bucket(&key);
if pass == 0 {
index.insert(&key, (&[0], 0));
} else {
let result = index.try_insert(&key, (&[0], 0));
assert!(result.is_err());
assert_eq!(index.read_value(&key), None);
if pass == 2 {
// call try_insert again; it should still return an error
let result = index.try_insert(&key, (&[0], 0));
assert!(result.is_err());
assert_eq!(index.read_value(&key), None);
}
bucket.grow(result.unwrap_err());
let result = index.try_insert(&key, (&[0], 0));
assert!(result.is_ok());
}
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
}
}
#[test]
fn bucket_map_test_update2() {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.insert(&key, (&[0], 0));
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
index.insert(&key, (&[1], 0));
assert_eq!(index.read_value(&key), Some((vec![1], 0)));
}
#[test]
fn bucket_map_test_update() {
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.update(&key, |_| Some((vec![0], 0)));
assert_eq!(index.read_value(&key), Some((vec![0], 0)));
index.update(&key, |_| Some((vec![1], 0)));
assert_eq!(index.read_value(&key), Some((vec![1], 0)));
}
#[test]
fn bucket_map_test_update_to_0_len() {
solana_logger::setup();
let key = Pubkey::new_unique();
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
index.update(&key, |_| Some((vec![0], 1)));
assert_eq!(index.read_value(&key), Some((vec![0], 1)));
// sets len to 0, updates in place
index.update(&key, |_| Some((vec![], 1)));
assert_eq!(index.read_value(&key), Some((vec![], 1)));
// sets len to 0, doesn't update in place - finds a new place, which causes us to no longer have an allocation in data
index.update(&key, |_| Some((vec![], 2)));
assert_eq!(index.read_value(&key), Some((vec![], 2)));
// sets len to 1, doesn't update in place - finds a new place
index.update(&key, |_| Some((vec![1], 2)));
assert_eq!(index.read_value(&key), Some((vec![1], 2)));
}
#[test]
fn bucket_map_test_delete() {
let config = BucketMapConfig::new(1 << 1);
let index = BucketMap::new(config);
for i in 0..10 {
let key = Pubkey::new_unique();
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
}
}
#[test]
fn bucket_map_test_delete_2() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
for i in 0..100 {
let key = Pubkey::new_unique();
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
assert_eq!(index.read_value(&key), None);
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
index.delete_key(&key);
}
}
#[test]
fn bucket_map_test_n_drives() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
for i in 0..100 {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(&key), Some((vec![i], 0)));
}
}
#[test]
fn bucket_map_test_grow_read() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
let keys: Vec<Pubkey> = (0..100).into_iter().map(|_| Pubkey::new_unique()).collect();
for k in 0..keys.len() {
let key = &keys[k];
let i = read_be_u64(key.as_ref());
index.update(key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(key), Some((vec![i], 0)));
for (ix, key) in keys.iter().enumerate() {
let i = read_be_u64(key.as_ref());
//debug!("READ: {:?} {}", key, i);
let expected = if ix <= k { Some((vec![i], 0)) } else { None };
assert_eq!(index.read_value(key), expected);
}
}
}
#[test]
fn bucket_map_test_n_delete() {
let config = BucketMapConfig::new(1 << 2);
let index = BucketMap::new(config);
let keys: Vec<Pubkey> = (0..20).into_iter().map(|_| Pubkey::new_unique()).collect();
for key in keys.iter() {
let i = read_be_u64(key.as_ref());
index.update(key, |_| Some((vec![i], 0)));
assert_eq!(index.read_value(key), Some((vec![i], 0)));
}
for key in keys.iter() {
let i = read_be_u64(key.as_ref());
//debug!("READ: {:?} {}", key, i);
assert_eq!(index.read_value(key), Some((vec![i], 0)));
}
for k in 0..keys.len() {
let key = &keys[k];
index.delete_key(key);
assert_eq!(index.read_value(key), None);
for key in keys.iter().skip(k + 1) {
let i = read_be_u64(key.as_ref());
assert_eq!(index.read_value(key), Some((vec![i], 0)));
}
}
}
#[test]
fn hashmap_compare() {
use std::sync::Mutex;
solana_logger::setup();
let maps = (0..2)
.into_iter()
.map(|max_buckets_pow2| {
let config = BucketMapConfig::new(1 << max_buckets_pow2);
BucketMap::new(config)
})
.collect::<Vec<_>>();
let hash_map = RwLock::new(HashMap::<Pubkey, (Vec<(usize, usize)>, RefCount)>::new());
let max_slot_list_len = 3;
let all_keys = Mutex::new(vec![]);
let gen_rand_value = || {
let count = thread_rng().gen_range(0, max_slot_list_len);
let v = (0..count)
.into_iter()
.map(|x| (x as usize, x as usize /*thread_rng().gen::<usize>()*/))
.collect::<Vec<_>>();
let rc = thread_rng().gen::<RefCount>();
(v, rc)
};
let get_key = || {
let mut keys = all_keys.lock().unwrap();
if keys.is_empty() {
return None;
}
let len = keys.len();
Some(keys.remove(thread_rng().gen_range(0, len)))
};
let return_key = |key| {
let mut keys = all_keys.lock().unwrap();
keys.push(key);
};
let verify = || {
let mut maps = maps
.iter()
.map(|map| {
let mut r = vec![];
for bin in 0..map.num_buckets() {
r.append(
&mut map.buckets[bin]
.items_in_range(&None::<&std::ops::RangeInclusive<Pubkey>>),
);
}
r
})
.collect::<Vec<_>>();
let hm = hash_map.read().unwrap();
for (k, v) in hm.iter() {
for map in maps.iter_mut() {
for i in 0..map.len() {
if k == &map[i].pubkey {
assert_eq!(map[i].slot_list, v.0);
assert_eq!(map[i].ref_count, v.1);
map.remove(i);
break;
}
}
}
}
for map in maps.iter() {
assert!(map.is_empty());
}
};
let mut initial = 100; // put this many items in to start
// do random operations: insert, update, delete, add/unref in random order
// verify consistency between hashmap and all bucket maps
for i in 0..10000 {
if initial > 0 {
initial -= 1;
}
if initial > 0 || thread_rng().gen_range(0, 5) == 0 {
// insert
let k = solana_sdk::pubkey::new_rand();
let v = gen_rand_value();
hash_map.write().unwrap().insert(k, v.clone());
let insert = thread_rng().gen_range(0, 2) == 0;
maps.iter().for_each(|map| {
if insert {
map.insert(&k, (&v.0, v.1))
} else {
map.update(&k, |current| {
assert!(current.is_none());
Some(v.clone())
})
}
});
return_key(k);
}
if thread_rng().gen_range(0, 10) == 0 {
// update
if let Some(k) = get_key() {
let hm = hash_map.read().unwrap();
let (v, rc) = gen_rand_value();
let v_old = hm.get(&k);
let insert = thread_rng().gen_range(0, 2) == 0;
maps.iter().for_each(|map| {
if insert {
map.insert(&k, (&v, rc))
} else {
map.update(&k, |current| {
assert_eq!(current, v_old.map(|(v, rc)| (&v[..], *rc)), "{}", k);
Some((v.clone(), rc))
})
}
});
drop(hm);
hash_map.write().unwrap().insert(k, (v, rc));
return_key(k);
}
}
if thread_rng().gen_range(0, 20) == 0 {
// delete
if let Some(k) = get_key() {
let mut hm = hash_map.write().unwrap();
hm.remove(&k);
maps.iter().for_each(|map| {
map.delete_key(&k);
});
}
}
if thread_rng().gen_range(0, 10) == 0 {
// add/unref
if let Some(k) = get_key() {
let mut inc = thread_rng().gen_range(0, 2) == 0;
let mut hm = hash_map.write().unwrap();
let (v, mut rc) = hm.get(&k).map(|(v, rc)| (v.to_vec(), *rc)).unwrap();
if !inc && rc == 0 {
// can't decrement rc=0
inc = true;
}
rc = if inc { rc + 1 } else { rc - 1 };
hm.insert(k, (v.to_vec(), rc));
maps.iter().for_each(|map| {
if thread_rng().gen_range(0, 2) == 0 {
map.update(&k, |current| Some((current.unwrap().0.to_vec(), rc)))
} else if inc {
map.addref(&k);
} else {
map.unref(&k);
}
});
return_key(k);
}
}
if i % 1000 == 0 {
verify();
}
}
verify();
}
}
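
bucket_map_test_insert2 above shows the intended recovery path for `try_insert`: when it fails, grow the target bucket using the returned error and retry. A hedged sketch of that loop; `insert_with_grow` is a hypothetical helper, not part of the crate:

use solana_bucket_map::bucket_map::BucketMap;
use solana_sdk::pubkey::Pubkey;
use std::fmt::Debug;

// Hypothetical helper: retry try_insert, growing the bucket on failure.
fn insert_with_grow<T: Clone + Copy + Debug>(
    index: &BucketMap<T>,
    key: &Pubkey,
    value: (&[T], u64), // (slot list, ref count)
) {
    loop {
        match index.try_insert(key, value) {
            Ok(()) => break,
            // the error carries the capacity hint that grow() needs
            Err(err) => index.get_bucket(key).grow(err),
        }
    }
}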

View File

@@ -1,18 +0,0 @@
use std::sync::{atomic::AtomicU64, Arc, Mutex};
#[derive(Debug, Default)]
pub struct BucketStats {
pub resizes: AtomicU64,
pub max_size: Mutex<u64>,
pub resize_us: AtomicU64,
pub new_file_us: AtomicU64,
pub flush_file_us: AtomicU64,
pub mmap_us: AtomicU64,
}
#[derive(Debug, Default)]
pub struct BucketMapStats {
pub index: Arc<BucketStats>,
pub data: Arc<BucketStats>,
pub per_bucket_count: Vec<Arc<AtomicU64>>,
}

View File

@@ -1,343 +0,0 @@
use {
crate::{bucket_stats::BucketStats, MaxSearch},
memmap2::MmapMut,
rand::{thread_rng, Rng},
solana_measure::measure::Measure,
std::{
fs::{remove_file, OpenOptions},
io::{Seek, SeekFrom, Write},
path::PathBuf,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
},
};
/*
1 2
2 4
3 8
4 16
5 32
6 64
7 128
8 256
9 512
10 1,024
11 2,048
12 4,096
13 8,192
14 16,384
23 8,388,608
24 16,777,216
*/
pub const DEFAULT_CAPACITY_POW2: u8 = 5;
/// A Header UID of 0 indicates that the header is unlocked
pub(crate) const UID_UNLOCKED: Uid = 0;
pub(crate) type Uid = u64;
#[repr(C)]
struct Header {
lock: AtomicU64,
}
impl Header {
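// try_lock atomically claims the header for `uid`: the compare-exchange
// succeeds only while the stored value is UID_UNLOCKED (0), so at most
// one caller can own a cell at a time.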
fn try_lock(&self, uid: Uid) -> bool {
Ok(UID_UNLOCKED)
== self
.lock
.compare_exchange(UID_UNLOCKED, uid, Ordering::AcqRel, Ordering::Relaxed)
}
fn unlock(&self) -> Uid {
self.lock.swap(UID_UNLOCKED, Ordering::Release)
}
fn uid(&self) -> Uid {
self.lock.load(Ordering::Acquire)
}
}
pub struct BucketStorage {
path: PathBuf,
mmap: MmapMut,
pub cell_size: u64,
pub capacity_pow2: u8,
pub used: AtomicU64,
pub stats: Arc<BucketStats>,
pub max_search: MaxSearch,
}
#[derive(Debug)]
pub enum BucketStorageError {
AlreadyAllocated,
}
impl Drop for BucketStorage {
fn drop(&mut self) {
let _ = remove_file(&self.path);
}
}
impl BucketStorage {
pub fn new_with_capacity(
drives: Arc<Vec<PathBuf>>,
num_elems: u64,
elem_size: u64,
capacity_pow2: u8,
max_search: MaxSearch,
stats: Arc<BucketStats>,
) -> Self {
let cell_size = elem_size * num_elems + std::mem::size_of::<Header>() as u64;
let (mmap, path) = Self::new_map(&drives, cell_size as usize, capacity_pow2, &stats);
Self {
path,
mmap,
cell_size,
used: AtomicU64::new(0),
capacity_pow2,
stats,
max_search,
}
}
pub fn max_search(&self) -> u64 {
self.max_search as u64
}
pub fn new(
drives: Arc<Vec<PathBuf>>,
num_elems: u64,
elem_size: u64,
max_search: MaxSearch,
stats: Arc<BucketStats>,
) -> Self {
Self::new_with_capacity(
drives,
num_elems,
elem_size,
DEFAULT_CAPACITY_POW2,
max_search,
stats,
)
}
pub fn uid(&self, ix: u64) -> Uid {
assert!(ix < self.capacity(), "bad index size");
let ix = (ix * self.cell_size) as usize;
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
return hdr.as_ref().unwrap().uid();
}
}
pub fn allocate(&self, ix: u64, uid: Uid) -> Result<(), BucketStorageError> {
assert!(ix < self.capacity(), "allocate: bad index size");
assert!(UID_UNLOCKED != uid, "allocate: bad uid");
let mut e = Err(BucketStorageError::AlreadyAllocated);
let ix = (ix * self.cell_size) as usize;
//debug!("ALLOC {} {}", ix, uid);
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
if hdr.as_ref().unwrap().try_lock(uid) {
e = Ok(());
self.used.fetch_add(1, Ordering::Relaxed);
}
};
e
}
pub fn free(&self, ix: u64, uid: Uid) {
assert!(ix < self.capacity(), "bad index size");
assert!(UID_UNLOCKED != uid, "free: bad uid");
let ix = (ix * self.cell_size) as usize;
//debug!("FREE {} {}", ix, uid);
let hdr_slice: &[u8] = &self.mmap[ix..ix + std::mem::size_of::<Header>()];
unsafe {
let hdr = hdr_slice.as_ptr() as *const Header;
//debug!("FREE uid: {}", hdr.as_ref().unwrap().uid());
let previous_uid = hdr.as_ref().unwrap().unlock();
assert_eq!(
previous_uid, uid,
"free: unlocked a header with a differet uid: {}",
previous_uid
);
self.used.fetch_sub(1, Ordering::Relaxed);
}
}
pub fn get<T: Sized>(&self, ix: u64) -> &T {
assert!(ix < self.capacity(), "bad index size");
let start = (ix * self.cell_size) as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>();
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *const T;
&*item
}
}
pub fn get_empty_cell_slice<T: Sized>(&self) -> &[T] {
let len = 0;
let item_slice: &[u8] = &self.mmap[0..0];
unsafe {
let item = item_slice.as_ptr() as *const T;
std::slice::from_raw_parts(item, len as usize)
}
}
pub fn get_cell_slice<T: Sized>(&self, ix: u64, len: u64) -> &[T] {
assert!(ix < self.capacity(), "bad index size");
let ix = self.cell_size * ix;
let start = ix as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>() * len as usize;
//debug!("GET slice {} {}", start, end);
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *const T;
std::slice::from_raw_parts(item, len as usize)
}
}
#[allow(clippy::mut_from_ref)]
pub fn get_mut<T: Sized>(&self, ix: u64) -> &mut T {
assert!(ix < self.capacity(), "bad index size");
let start = (ix * self.cell_size) as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>();
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *mut T;
&mut *item
}
}
#[allow(clippy::mut_from_ref)]
pub fn get_mut_cell_slice<T: Sized>(&self, ix: u64, len: u64) -> &mut [T] {
assert!(ix < self.capacity(), "bad index size");
let ix = self.cell_size * ix;
let start = ix as usize + std::mem::size_of::<Header>();
let end = start + std::mem::size_of::<T>() * len as usize;
//debug!("GET mut slice {} {}", start, end);
let item_slice: &[u8] = &self.mmap[start..end];
unsafe {
let item = item_slice.as_ptr() as *mut T;
std::slice::from_raw_parts_mut(item, len as usize)
}
}
fn new_map(
drives: &[PathBuf],
cell_size: usize,
capacity_pow2: u8,
stats: &BucketStats,
) -> (MmapMut, PathBuf) {
let mut measure_new_file = Measure::start("measure_new_file");
let capacity = 1u64 << capacity_pow2;
let r = thread_rng().gen_range(0, drives.len());
let drive = &drives[r];
let pos = format!("{}", thread_rng().gen_range(0, u128::MAX),);
let file = drive.join(pos);
let mut data = OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(file.clone())
.map_err(|e| {
panic!(
"Unable to create data file {} in current dir({:?}): {:?}",
file.display(),
std::env::current_dir(),
e
);
})
.unwrap();
// Theoretical performance optimization: write a zero to the end of
// the file so that we won't have to resize it later, which may be
// expensive.
//debug!("GROWING file {}", capacity * cell_size as u64);
data.seek(SeekFrom::Start(capacity * cell_size as u64 - 1))
.unwrap();
data.write_all(&[0]).unwrap();
data.seek(SeekFrom::Start(0)).unwrap();
measure_new_file.stop();
let mut measure_flush = Measure::start("measure_flush");
data.flush().unwrap(); // can we skip this?
measure_flush.stop();
let mut measure_mmap = Measure::start("measure_mmap");
let res = (unsafe { MmapMut::map_mut(&data).unwrap() }, file);
measure_mmap.stop();
stats
.new_file_us
.fetch_add(measure_new_file.as_us(), Ordering::Relaxed);
stats
.flush_file_us
.fetch_add(measure_flush.as_us(), Ordering::Relaxed);
stats
.mmap_us
.fetch_add(measure_mmap.as_us(), Ordering::Relaxed);
res
}
/// copy contents from 'old_bucket' to 'self'
fn copy_contents(&mut self, old_bucket: &Self) {
let mut m = Measure::start("grow");
let old_cap = old_bucket.capacity();
let old_map = &old_bucket.mmap;
let increment = self.capacity_pow2 - old_bucket.capacity_pow2;
let index_grow = 1 << increment;
(0..old_cap as usize).into_iter().for_each(|i| {
let old_ix = i * old_bucket.cell_size as usize;
let new_ix = old_ix * index_grow;
let dst_slice: &[u8] = &self.mmap[new_ix..new_ix + old_bucket.cell_size as usize];
let src_slice: &[u8] = &old_map[old_ix..old_ix + old_bucket.cell_size as usize];
unsafe {
let dst = dst_slice.as_ptr() as *mut u8;
let src = src_slice.as_ptr() as *const u8;
std::ptr::copy_nonoverlapping(src, dst, old_bucket.cell_size as usize);
};
});
m.stop();
self.stats.resizes.fetch_add(1, Ordering::Relaxed);
self.stats.resize_us.fetch_add(m.as_us(), Ordering::Relaxed);
}
/// allocate a new bucket, copying data from 'bucket'
pub fn new_resized(
drives: &Arc<Vec<PathBuf>>,
max_search: MaxSearch,
bucket: Option<&Self>,
capacity_pow_2: u8,
num_elems: u64,
elem_size: u64,
stats: &Arc<BucketStats>,
) -> Self {
let mut new_bucket = Self::new_with_capacity(
Arc::clone(drives),
num_elems,
elem_size,
capacity_pow_2,
max_search,
Arc::clone(stats),
);
if let Some(bucket) = bucket {
new_bucket.copy_contents(bucket);
}
let sz = new_bucket.capacity();
{
let mut max = new_bucket.stats.max_size.lock().unwrap();
*max = std::cmp::max(*max, sz);
}
new_bucket
}
/// Return the total number of cells this storage can hold (used or free)
pub fn capacity(&self) -> u64 {
1 << self.capacity_pow2
}
}

View File

@@ -1,67 +0,0 @@
use {
crate::{
bucket::Bucket,
bucket_storage::{BucketStorage, Uid},
RefCount,
},
solana_sdk::{clock::Slot, pubkey::Pubkey},
std::{
collections::hash_map::DefaultHasher,
fmt::Debug,
hash::{Hash, Hasher},
},
};
#[repr(C)]
#[derive(Debug, Copy, Clone, PartialEq)]
// one instance of this per item in the index
// stored in the index bucket
pub struct IndexEntry {
pub key: Pubkey, // can this be smaller if we have reduced the keys into buckets already?
pub ref_count: RefCount, // can this be smaller? Do we ever need more than 4B refcounts?
pub storage_offset: u64, // smaller? since these are variably sized, this could get tricky. well, actually accountinfo is not variable sized...
// if the bucket doubled, the index can be recomputed using create_bucket_capacity_pow2
pub storage_capacity_when_created_pow2: u8, // see data_location
pub num_slots: Slot, // can this be smaller? epoch size should ~ be the max len. this is the num elements in the slot list
}
impl IndexEntry {
pub fn data_bucket_from_num_slots(num_slots: Slot) -> u64 {
(num_slots as f64).log2().ceil() as u64 // use int log here?
}
pub fn data_bucket_ix(&self) -> u64 {
Self::data_bucket_from_num_slots(self.num_slots)
}
pub fn ref_count(&self) -> RefCount {
self.ref_count
}
// This function maps the original data location into an index in the current bucket storage.
// This is coupled with how we resize bucket storages.
pub fn data_loc(&self, storage: &BucketStorage) -> u64 {
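// Example: an offset recorded when capacity_pow2 was 5 maps to
// storage_offset << 2 once the storage has grown to capacity_pow2 = 7,
// matching how copy_contents spreads cells out by 1 << (new - old).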
self.storage_offset << (storage.capacity_pow2 - self.storage_capacity_when_created_pow2)
}
pub fn read_value<'a, T>(&self, bucket: &'a Bucket<T>) -> Option<(&'a [T], RefCount)> {
let data_bucket_ix = self.data_bucket_ix();
let data_bucket = &bucket.data[data_bucket_ix as usize];
let slice = if self.num_slots > 0 {
let loc = self.data_loc(data_bucket);
let uid = Self::key_uid(&self.key);
assert_eq!(uid, bucket.data[data_bucket_ix as usize].uid(loc));
bucket.data[data_bucket_ix as usize].get_cell_slice(loc, self.num_slots)
} else {
// num_slots is 0. This means we don't have an actual allocation.
// can we trust that the data_bucket is even safe?
bucket.data[data_bucket_ix as usize].get_empty_cell_slice()
};
Some((slice, self.ref_count))
}
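// Hash the key into a non-zero Uid; `.max(1)` below keeps the result
// from colliding with UID_UNLOCKED (0), which marks a free header.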
pub fn key_uid(key: &Pubkey) -> Uid {
let mut s = DefaultHasher::new();
key.hash(&mut s);
s.finish().max(1u64)
}
}
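
The `log2().ceil()` rounding above means data bucket n holds slot lists of length up to 2^n. A standalone sketch of that mapping (mirroring data_bucket_from_num_slots, not crate code):

// Sketch: which data bucket holds a slot list of length num_slots?
fn data_bucket_from_num_slots(num_slots: u64) -> u64 {
    (num_slots as f64).log2().ceil() as u64
}

fn main() {
    assert_eq!(data_bucket_from_num_slots(1), 0); // bucket 0 holds 1 slot
    assert_eq!(data_bucket_from_num_slots(2), 1); // bucket 1 holds 2
    assert_eq!(data_bucket_from_num_slots(3), 2); // bucket 2 holds up to 4
    assert_eq!(data_bucket_from_num_slots(4), 2);
    assert_eq!(data_bucket_from_num_slots(5), 3); // bucket 3 holds up to 8
}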

View File

@@ -1,11 +0,0 @@
#![allow(clippy::integer_arithmetic)]
mod bucket;
pub mod bucket_api;
mod bucket_item;
pub mod bucket_map;
mod bucket_stats;
mod bucket_storage;
mod index_entry;
pub type MaxSearch = u8;
pub type RefCount = u64;

View File

@@ -1,48 +0,0 @@
use {
rayon::prelude::*,
solana_bucket_map::bucket_map::{BucketMap, BucketMapConfig},
solana_measure::measure::Measure,
solana_sdk::pubkey::Pubkey,
std::path::PathBuf,
};
#[test]
#[ignore]
fn bucket_map_test_mt() {
let threads = 4096;
let items = 4096;
let tmpdir1 = std::env::temp_dir().join("bucket_map_test_mt");
let tmpdir2 = PathBuf::from("/mnt/data/0").join("bucket_map_test_mt");
let paths: Vec<PathBuf> = [tmpdir1, tmpdir2]
.iter()
.filter(|x| std::fs::create_dir_all(x).is_ok())
.cloned()
.collect();
assert!(!paths.is_empty());
let index = BucketMap::new(BucketMapConfig {
max_buckets: 1 << 12,
drives: Some(paths.clone()),
..BucketMapConfig::default()
});
(0..threads).into_iter().into_par_iter().for_each(|_| {
let key = Pubkey::new_unique();
index.update(&key, |_| Some((vec![0u64], 0)));
});
let mut timer = Measure::start("bucket_map_test_mt");
(0..threads).into_iter().into_par_iter().for_each(|_| {
for _ in 0..items {
let key = Pubkey::new_unique();
let ix: u64 = index.bucket_ix(&key) as u64;
index.update(&key, |_| Some((vec![ix], 0)));
assert_eq!(index.read_value(&key), Some((vec![ix], 0)));
}
});
timer.stop();
println!("time: {}ns per item", timer.as_ns() / (threads * items));
let mut total = 0;
for tmpdir in paths.iter() {
let folder_size = fs_extra::dir::get_size(tmpdir).unwrap();
total += folder_size;
std::fs::remove_dir_all(tmpdir).unwrap();
}
println!("overhead: {}bytes per item", total / (threads * items));
}

View File

@@ -9,8 +9,5 @@ for a in "$@"; do
fi
done
set -ex
if [[ ! -f sdk/bpf/syscalls.txt ]]; then
"$here"/cargo build --manifest-path "$here"/programs/bpf_loader/gen-syscall-list/Cargo.toml
fi
set -x
exec "$here"/cargo run --manifest-path "$here"/sdk/cargo-build-bpf/Cargo.toml -- $maybe_bpf_sdk "$@"

View File

@@ -137,7 +137,7 @@ all_test_steps() {
^ci/test-coverage.sh \
^scripts/coverage.sh \
; then
command_step coverage ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image ci/test-coverage.sh" 40
command_step coverage ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image ci/test-coverage.sh" 30
wait_step
else
annotate --style info --context test-coverage \
@@ -148,33 +148,6 @@ all_test_steps() {
command_step stable ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-stable.sh" 60
wait_step
# BPF test suite
if affects \
.rs$ \
Cargo.lock$ \
Cargo.toml$ \
^ci/rust-version.sh \
^ci/test-stable-bpf.sh \
^ci/test-stable.sh \
^ci/test-local-cluster.sh \
^core/build.rs \
^fetch-perf-libs.sh \
^programs/ \
^sdk/ \
; then
cat >> "$output_file" <<"EOF"
- command: "ci/test-stable-bpf.sh"
name: "stable-bpf"
timeout_in_minutes: 20
artifact_paths: "bpf-dumps.tar.bz2"
agents:
- "queue=default"
EOF
else
annotate --style info \
"Stable-BPF skipped as no relevant files were modified"
fi
# Perf test suite
if affects \
.rs$ \
@@ -192,7 +165,7 @@ EOF
cat >> "$output_file" <<"EOF"
- command: "ci/test-stable-perf.sh"
name: "stable-perf"
timeout_in_minutes: 20
timeout_in_minutes: 40
artifact_paths: "log-*.txt"
agents:
- "queue=cuda"
@@ -226,19 +199,6 @@ EOF
annotate --style info \
"downstream-projects skipped as no relevant files were modified"
fi
# Wasm support
if affects \
^ci/test-wasm.sh \
^ci/test-stable.sh \
^sdk/ \
; then
command_step wasm ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-wasm.sh" 20
else
annotate --style info \
"wasm skipped as no relevant files were modified"
fi
# Benches...
if affects \
.rs$ \
@@ -256,7 +216,7 @@ EOF
command_step "local-cluster" \
". ci/rust-version.sh; ci/docker-run.sh \$\$rust_stable_docker_image ci/test-local-cluster.sh" \
50
45
}
pull_or_push_steps() {
@@ -275,7 +235,7 @@ pull_or_push_steps() {
all_test_steps
fi
# web3.js, explorer and docs changes run on Travis or Github actions...
# web3.js, explorer and docs changes run on Travis...
}

View File

@@ -3,19 +3,16 @@
# Pull requests do not run these steps.
steps:
- command: "ci/publish-tarball.sh"
agents:
- "queue=release-build"
timeout_in_minutes: 60
name: "publish tarball"
- command: "ci/publish-bpf-sdk.sh"
timeout_in_minutes: 5
name: "publish bpf sdk"
- wait
- command: "sdk/docker-solana/build.sh"
agents:
- "queue=release-build"
timeout_in_minutes: 60
name: "publish docker"
- command: "ci/publish-crate.sh"
agents:
- "queue=release-build"
timeout_in_minutes: 240
name: "publish crate"
branches: "!master"

View File

@@ -23,7 +23,7 @@ echo --- "(FAILING) Backpropagating dependabot-triggered Cargo.lock updates"
name="dependabot-buildkite"
api_base="https://api.github.com/repos/solana-labs/solana/pulls"
pr_num=$(echo "$BUILDKITE_BRANCH" | grep -Eo '[0-9]+')
branch=$(curl -s "$api_base/$pr_num" | python3 -c 'import json,sys;print(json.load(sys.stdin)["head"]["ref"])')
branch=$(curl -s "$api_base/$pr_num" | python -c 'import json,sys;print json.load(sys.stdin)["head"]["ref"]')
git add :**/Cargo.lock
EMAIL="dependabot-buildkite@noreply.solana.com" \

View File

@@ -28,29 +28,18 @@ cargo_audit_ignores=(
# Blocked on multiple crates updating `time` to >= 0.2.23
--ignore RUSTSEC-2020-0071
# difference is unmaintained
#
# Blocked on predicates v1.0.6 removing its dependency on `difference`
--ignore RUSTSEC-2020-0095
# hyper is upgraded on master/v1.6 but not for v1.5
--ignore RUSTSEC-2021-0020
# generic-array: arr! macro erases lifetimes
#
# Blocked on libsecp256k1 releasing with upgraded dependencies
# https://github.com/paritytech/libsecp256k1/issues/66
# ed25519-dalek and libsecp256k1 not upgraded for v1.5
--ignore RUSTSEC-2020-0146
# hyper: Lenient `hyper` header parsing of `Content-Length` could allow request smuggling
#
# Blocked on jsonrpc removing dependency on unmaintained `websocket`
# https://github.com/paritytech/jsonrpc/issues/605
--ignore RUSTSEC-2021-0078
# hyper: Integer overflow in `hyper`'s parsing of the `Transfer-Encoding` header leads to data loss
#
# Blocked on jsonrpc removing dependency on unmaintained `websocket`
# https://github.com/paritytech/jsonrpc/issues/605
--ignore RUSTSEC-2021-0079
# chrono: Potential segfault in `localtime_r` invocations
#
# Blocked due to no safe upgrade
# https://github.com/chronotope/chrono/issues/499
--ignore RUSTSEC-2020-0159
)
scripts/cargo-for-all-lock-files.sh stable audit "${cargo_audit_ignores[@]}"

View File

@@ -1,4 +1,4 @@
FROM solanalabs/rust:1.57.0
FROM solanalabs/rust:1.49.0
ARG date
RUN set -x \

View File

@@ -1,6 +1,6 @@
# Note: when the rust version is changed also modify
# ci/rust-version.sh to pick up the new image tag
FROM rust:1.57.0
FROM rust:1.49.0
# Add Google Protocol Buffers for Libra's metrics library.
ENV PROTOC_VERSION 3.8.0
@@ -11,34 +11,28 @@ RUN set -x \
&& apt-get install apt-transport-https \
&& echo deb https://apt.buildkite.com/buildkite-agent stable main > /etc/apt/sources.list.d/buildkite-agent.list \
&& apt-key adv --no-tty --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 32A37959C2FA5C3C99EFBC32A79206696452D198 \
&& curl -fsSL https://deb.nodesource.com/setup_current.x | bash - \
&& apt update \
&& apt install -y \
buildkite-agent \
clang \
clang-7 \
cmake \
lcov \
libudev-dev \
libclang-common-7-dev \
mscgen \
nodejs \
net-tools \
rsync \
sudo \
golang \
unzip \
\
&& apt remove -y libcurl4-openssl-dev \
&& rm -rf /var/lib/apt/lists/* \
&& node --version \
&& npm --version \
&& rustup component add rustfmt \
&& rustup component add clippy \
&& rustup target add wasm32-unknown-unknown \
&& cargo install cargo-audit \
&& cargo install svgbob_cli \
&& cargo install mdbook \
&& cargo install mdbook-linkcheck \
&& cargo install svgbob_cli \
&& cargo install wasm-pack \
&& rustc --version \
&& cargo --version \
&& curl -OL https://github.com/google/protobuf/releases/download/v$PROTOC_VERSION/$PROTOC_ZIP \

View File

@@ -74,13 +74,10 @@ else
export CI_BUILD_ID=
export CI_COMMIT=
export CI_JOB_ID=
export CI_OS_NAME=
export CI_PULL_REQUEST=
export CI_REPO_SLUG=
export CI_TAG=
# Don't override ci/run-local.sh
if [[ -z $CI_LOCAL_RUN ]]; then
export CI_OS_NAME=
fi
fi
cat <<EOF

Some files were not shown because too many files have changed in this diff.